author | date | timezone | hash | message | mods | language | license | repo | original_message |
---|---|---|---|---|---|---|---|---|---|
49,706 | 30.10.2020 21:17:42 | -3,600 | 5aafde0a029360535ead2926200baaa9bdb1c0cf | Split Federated Tests
This commit merges federated tests into blocks of reasonable execution times.
That should sum to ~30 min.
Furthermore, this should reduce the number of docker image pulls our tests
produce, since there is a 6-hour limit of 200 images. | [
{
"change_type": "MODIFY",
"old_path": ".github/workflows/functionsTests.yml",
"new_path": ".github/workflows/functionsTests.yml",
"diff": "@@ -35,8 +35,25 @@ jobs:\nstrategy:\nfail-fast: false\nmatrix:\n+ tests: [\n+ \"**.functions.aggregate.**,**.functions.append.**\",\n+ \"**.functions.binary.frame.**,**.functions.binary.matrix.**,**.functions.binary.scalar.**,**.functions.binary.tensor.**\",\n+ \"**.functions.blocks.**,**.functions.compress.**,**.functions.countDistinct.**,**.functions.data.misc.**,**.functions.data.rand.**,**.functions.data.tensor.**\",\n+ \"**.functions.binary.matrix_full_**\",\n+ \"**.functions.codegenalg.partone**\",\n+ \"**.functions.codegenalg.parttwo**,**.functions.codegen.**,**.functions.caching.**\",\n+ \"**.functions.builtin.**\",\n+ \"**.functions.federated.**\",\n+ \"**.functions.frame.**,**.functions.indexing.**,**.functions.io.**,**.functions.jmlc.**,**.functions.lineage.**\",\n+ \"**.functions.dnn.**,**.functions.misc.**,**.functions.mlcontext.**\",\n+ \"**.functions.paramserv.**\",\n+ \"**.functions.nary.**,**.functions.parfor.**\",\n+ \"**.functions.pipelines.**,**.functions.privacy.**,**.functions.quaternary.**,**.functions.unary.scalar.**,**.functions.updateinplace.**,**.functions.vect.**\",\n+ \"**.functions.r**,**.functions.t**\",\n+ \"**.functions.unary.matrix.**\"\n+ ]\nos: [ubuntu-latest]\n- name: Function Test\n+ name: Function Test ${{ matrix.tests }}\nsteps:\n- name: Checkout Repository\nuses: actions/checkout@v2\n@@ -49,9 +66,9 @@ jobs:\nrestore-keys: |\n${{ runner.os }}-maven-test-\n- - name: Run all Function Tests\n+ - name: ${{ matrix.tests }}\nuses: ./.github/action/\nid: test\nwith:\n- test-to-run: org.apache.sysds.test.functions.**\n+ test-to-run: ${{ matrix.tests }}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/tests/algorithms/test_kmeans.py",
"new_path": "src/main/python/tests/algorithms/test_kmeans.py",
"diff": "@@ -90,7 +90,7 @@ class TestKMeans(unittest.TestCase):\ndef test_invalid_input_2(self):\nfeatures = Matrix(self.sds, np.array([1]))\nwith self.assertRaises(ValueError) as context:\n- kmeans(features, k=-1)\n+ kmeans(features, k=-1, seed= 13142)\ndef generate_matrices_for_k_means(self, dims: (int, int), seed: int = 1234):\nnp.random.seed(seed)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2707] Split Federated Tests
This commit merges federated tests into blocks of reasonable execution times.
That should sum to ~30 min.
Furthermore, this should reduce the number of docker image pulls our tests
produce, since there is a 6-hour limit of 200 images. |
49,706 | 30.10.2020 22:40:14 | -3,600 | 51d293628b2b5efd0be661086e356209fdda3239 | [MINOR] syntax fixes in functiontest + merge short tests | [
{
"change_type": "MODIFY",
"old_path": ".github/workflows/functionsTests.yml",
"new_path": ".github/workflows/functionsTests.yml",
"diff": "@@ -36,20 +36,15 @@ jobs:\nfail-fast: false\nmatrix:\ntests: [\n- \"**.functions.aggregate.**,**.functions.append.**\",\n- \"**.functions.binary.frame.**,**.functions.binary.matrix.**,**.functions.binary.scalar.**,**.functions.binary.tensor.**\",\n- \"**.functions.blocks.**,**.functions.compress.**,**.functions.countDistinct.**,**.functions.data.misc.**,**.functions.data.rand.**,**.functions.data.tensor.**\",\n- \"**.functions.binary.matrix_full_**\",\n- \"**.functions.codegenalg.partone**\",\n- \"**.functions.codegenalg.parttwo**,**.functions.codegen.**,**.functions.caching.**\",\n+ \"**.functions.aggregate.**,**.functions.append.**,**.functions.binary.frame.**,**.functions.binary.matrix.**,**.functions.binary.scalar.**,**.functions.binary.tensor.**\",\n+ \"**.functions.blocks.**,**.functions.compress.**,**.functions.countDistinct.**,**.functions.data.misc.**,**.functions.data.rand.**,**.functions.data.tensor.**,**.functions.codegenalg.parttwo.**,**.functions.codegen.**,**.functions.caching.**\",\n+ \"**.functions.federated.**,**.functions.binary.matrix_full_cellwise.**,**.functions.binary.matrix_full_other.**\",\n+ \"**.functions.codegenalg.partone.**\",\n\"**.functions.builtin.**\",\n- \"**.functions.federated.**\",\n\"**.functions.frame.**,**.functions.indexing.**,**.functions.io.**,**.functions.jmlc.**,**.functions.lineage.**\",\n- \"**.functions.dnn.**,**.functions.misc.**,**.functions.mlcontext.**\",\n- \"**.functions.paramserv.**\",\n- \"**.functions.nary.**,**.functions.parfor.**\",\n- \"**.functions.pipelines.**,**.functions.privacy.**,**.functions.quaternary.**,**.functions.unary.scalar.**,**.functions.updateinplace.**,**.functions.vect.**\",\n- \"**.functions.r**,**.functions.t**\",\n+ \"**.functions.dnn.**,**.functions.misc.**,**.functions.mlcontext.**,**.functions.paramserv.**\",\n+ \"**.functions.nary.**,**.functions.parfor.**,**.functions.pipelines.**,**.functions.privacy.**,**.functions.quaternary.**,**.functions.unary.scalar.**,**.functions.updateinplace.**,**.functions.vect.**\",\n+ \"**.functions.reorg.**,**.functions.rewrite.**,**.functions.ternary.**,**.functions.transform.**\",\n\"**.functions.unary.matrix.**\"\n]\nos: [ubuntu-latest]\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] syntax fixes in functiontest + merge short tests |
49,738 | 31.10.2020 15:48:39 | -3,600 | f308d56c560df52785e52456ece60ac74de49672 | [MINOR] Cleanup warnings (imports, serial IDs, static, formatting) | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupValue.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupValue.java",
"diff": "@@ -323,7 +323,7 @@ public abstract class ColGroupValue extends ColGroup implements Cloneable {\nreturn ret;\n}\n- protected final double[] sparsePreaggValues(int numVals, double v, boolean allocNew, double[] dictVals) {\n+ protected static double[] sparsePreaggValues(int numVals, double v, boolean allocNew, double[] dictVals) {\ndouble[] ret = allocNew ? new double[numVals + 1] : allocDVector(numVals + 1, true);\nfor(int k = 0; k < numVals; k++)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibRightMultBy.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibRightMultBy.java",
"diff": "@@ -332,7 +332,7 @@ public class LibRightMultBy {\n}\nthatV = db.valuesAt(0);\n- List<ColGroup> retCg = new ArrayList<ColGroup>();\n+ List<ColGroup> retCg = new ArrayList<>();\nint[] newColIndexes = new int[that.getNumColumns()];\nfor(int i = 0; i < that.getNumColumns(); i++) {\nnewColIndexes[i] = i;\n@@ -385,7 +385,7 @@ public class LibRightMultBy {\n}\n}\n- List<ColGroup> retCg = new ArrayList<ColGroup>();\n+ List<ColGroup> retCg = new ArrayList<>();\nint[] newColIndexes = new int[that.getNumColumns()];\nfor(int i = 0; i < that.getNumColumns(); i++) {\nnewColIndexes[i] = i;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibScalar.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibScalar.java",
"diff": "@@ -50,9 +50,9 @@ public class LibScalar {\n// private static final Log LOG = LogFactory.getLog(LibScalar.class.getName());\nprivate static final int MINIMUM_PARALLEL_SIZE = 8096;\n- public static MatrixBlock scalarOperations(ScalarOperator sop, CompressedMatrixBlock m1, CompressedMatrixBlock ret,\n- boolean overlapping) {\n- // LOG.error(sop);\n+ public static MatrixBlock scalarOperations(ScalarOperator sop, CompressedMatrixBlock m1,\n+ CompressedMatrixBlock ret, boolean overlapping)\n+ {\nif(sop instanceof LeftScalarOperator) {\nif(sop.fn instanceof Minus) {\nm1 = (CompressedMatrixBlock) scalarOperations(new RightScalarOperator(Multiply.getMultiplyFnObject(),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/FederatedPSControlThread.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/FederatedPSControlThread.java",
"diff": "@@ -52,6 +52,7 @@ import java.util.stream.Collectors;\nimport static org.apache.sysds.runtime.util.ProgramConverter.*;\npublic class FederatedPSControlThread extends PSWorker implements Callable<Void> {\n+ private static final long serialVersionUID = 6846648059569648791L;\nFederatedData _featuresData;\nFederatedData _labelsData;\nfinal long _batchCounterVarID;\n@@ -140,6 +141,7 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\n* Setup UDF executed on the federated worker\n*/\nprivate static class setupFederatedWorker extends FederatedUDF {\n+ private static final long serialVersionUID = -3148991224792675607L;\nlong _batchSize;\nlong _dataSize;\nlong _numBatches;\n@@ -209,6 +211,8 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\n* Teardown UDF executed on the federated worker\n*/\nprivate static class teardownFederatedWorker extends FederatedUDF {\n+ private static final long serialVersionUID = -153650281873318969L;\n+\nprotected teardownFederatedWorker() {\nsuper(new long[]{});\n}\n@@ -326,6 +330,8 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\n* This is the code that will be executed on the federated Worker when computing a single batch\n*/\nprivate static class federatedComputeBatchGradients extends FederatedUDF {\n+ private static final long serialVersionUID = -3652112393963053475L;\n+\nprotected federatedComputeBatchGradients(long[] inIDs) {\nsuper(inIDs);\n}\n@@ -438,6 +444,8 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\n* This is the code that will be executed on the federated Worker when computing one epoch\n*/\nprivate static class federatedComputeEpochGradients extends FederatedUDF {\n+ private static final long serialVersionUID = -3075901536748794832L;\n+\nprotected federatedComputeEpochGradients(long[] inIDs) {\nsuper(inIDs);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/privacy/finegrained/FineGrainedPrivacyList.java",
"new_path": "src/main/java/org/apache/sysds/runtime/privacy/finegrained/FineGrainedPrivacyList.java",
"diff": "@@ -165,21 +165,13 @@ public class FineGrainedPrivacyList implements FineGrainedPrivacy {\n}\nprivate boolean listEquals(ArrayList<Map.Entry<DataRange,PrivacyLevel>> otherFGP){\n- if ( otherFGP.size() == constraintCollection.size() ){\n+ if ( otherFGP.size() != constraintCollection.size() )\n+ return false;\nfor ( Map.Entry<DataRange, PrivacyLevel> constraint : constraintCollection){\n- if ( !innerEquals(constraint, otherFGP) )\n+ if ( !otherFGP.contains(constraint) )\nreturn false;\n}\nreturn true;\n- } else return false;\n- }\n-\n- private boolean innerEquals(Map.Entry<DataRange, PrivacyLevel> constraint, ArrayList<Map.Entry<DataRange,PrivacyLevel>> otherFGP){\n- for (Map.Entry<DataRange, PrivacyLevel> otherConstraint : otherFGP){\n- if ( constraint.equals(otherConstraint) )\n- return true;\n- }\n- return false;\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/privacy/propagation/PrivacyPropagator.java",
"new_path": "src/main/java/org/apache/sysds/runtime/privacy/propagation/PrivacyPropagator.java",
"diff": "@@ -22,7 +22,6 @@ package org.apache.sysds.runtime.privacy.propagation;\nimport java.util.*;\nimport org.apache.sysds.parser.DataExpression;\n-import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.instructions.Instruction;\nimport org.apache.sysds.runtime.instructions.cp.*;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/paramserv/SerializationTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/paramserv/SerializationTest.java",
"diff": "@@ -27,7 +27,6 @@ import java.io.ObjectInputStream;\nimport java.util.Arrays;\nimport java.util.Collection;\n-import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.junit.Assert;\nimport org.junit.Test;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n@@ -44,11 +43,8 @@ public class SerializationTest {\nprivate int _named;\[email protected]\n- public static Collection named() {\n- return Arrays.asList(new Object[][] {\n- { 0 },\n- { 1 }\n- });\n+ public static Collection<?> named() {\n+ return Arrays.asList(new Object[][] {{ 0 }, { 1 }});\n}\npublic SerializationTest(Integer named) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/privacy/ReadWriteTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/privacy/ReadWriteTest.java",
"diff": "@@ -133,7 +133,7 @@ public class ReadWriteTest extends AutomatedTestBase {\nreturn a;\n}\n- private void setFineGrained(PrivacyConstraint privacyConstraint){\n+ private static void setFineGrained(PrivacyConstraint privacyConstraint){\nFineGrainedPrivacy fgp = privacyConstraint.getFineGrainedPrivacy();\nfgp.put(new DataRange(new long[]{1,2}, new long[]{5,4}), PrivacyLevel.Private);\nfgp.put(new DataRange(new long[]{7,1}, new long[]{9,1}), PrivacyLevel.Private);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/privacy/propagation/AppendPropagatorTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/privacy/propagation/AppendPropagatorTest.java",
"diff": "package org.apache.sysds.test.functions.privacy.propagation;\n-import org.apache.sysds.api.DMLScript;\n-import org.apache.sysds.common.Types;\n-import org.apache.sysds.parser.DataExpression;\n-import org.apache.sysds.runtime.instructions.cp.*;\n+import org.apache.sysds.runtime.instructions.cp.Data;\n+import org.apache.sysds.runtime.instructions.cp.DoubleObject;\n+import org.apache.sysds.runtime.instructions.cp.IntObject;\n+import org.apache.sysds.runtime.instructions.cp.ListObject;\n+import org.apache.sysds.runtime.instructions.cp.ScalarObject;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.runtime.privacy.PrivacyConstraint;\nimport org.apache.sysds.runtime.privacy.PrivacyConstraint.PrivacyLevel;\n-import org.apache.sysds.runtime.privacy.PrivacyUtils;\nimport org.apache.sysds.runtime.privacy.finegrained.DataRange;\n-import org.apache.sysds.runtime.privacy.finegrained.FineGrainedPrivacy;\n-import org.apache.sysds.runtime.privacy.propagation.*;\n+import org.apache.sysds.runtime.privacy.propagation.AppendPropagator;\n+import org.apache.sysds.runtime.privacy.propagation.CBindPropagator;\n+import org.apache.sysds.runtime.privacy.propagation.ListAppendPropagator;\n+import org.apache.sysds.runtime.privacy.propagation.ListRemovePropagator;\n+import org.apache.sysds.runtime.privacy.propagation.Propagator;\n+import org.apache.sysds.runtime.privacy.propagation.PropagatorMultiReturn;\n+import org.apache.sysds.runtime.privacy.propagation.RBindPropagator;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n-import org.apache.sysds.test.functions.federated.primitives.FederatedRCBindTest;\n-import org.apache.wink.json4j.JSONException;\n-import org.apache.wink.json4j.JSONObject;\nimport org.junit.Assert;\nimport org.junit.Ignore;\nimport org.junit.Test;\n@@ -46,8 +48,6 @@ import java.util.Arrays;\nimport java.util.List;\nimport java.util.Map;\n-import static org.junit.Assert.assertEquals;\n-\npublic class AppendPropagatorTest extends AutomatedTestBase {\nprivate final static String TEST_DIR = \"functions/privacy/\";\n@@ -100,23 +100,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\ngeneralOnlyRBindTest(new PrivacyConstraint(PrivacyLevel.Private), new PrivacyConstraint(PrivacyLevel.PrivateAggregation));\n}\n- private void generalOnlyRBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n- int columns = 2;\n- int rows1 = 4;\n- int rows2 = 3;\n- MatrixBlock inputMatrix1 = new MatrixBlock(rows1,columns,3);\n- MatrixBlock inputMatrix2 = new MatrixBlock(rows2,columns,4);\n- AppendPropagator propagator = new RBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n- PrivacyConstraint mergedConstraint = propagator.propagate();\n- Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n- Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0,0}, new long[]{rows1-1,columns-1}));\n- firstHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint1.getPrivacyLevel(),level));\n- Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{rows1,0}, new long[]{rows1+rows2-1,columns-1}));\n- secondHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint2.getPrivacyLevel(),level));\n- 
}\n-\n@Test\npublic void generalOnlyCBindPrivate1Test(){\ngeneralOnlyCBindTest(new PrivacyConstraint(PrivacyLevel.Private), new PrivacyConstraint());\n@@ -152,23 +135,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\ngeneralOnlyCBindTest(new PrivacyConstraint(PrivacyLevel.Private), new PrivacyConstraint(PrivacyLevel.PrivateAggregation));\n}\n- private void generalOnlyCBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n- int rows = 2;\n- int columns1 = 4;\n- int columns2 = 3;\n- MatrixBlock inputMatrix1 = new MatrixBlock(rows,columns1,3);\n- MatrixBlock inputMatrix2 = new MatrixBlock(rows,columns2,4);\n- AppendPropagator propagator = new CBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n- PrivacyConstraint mergedConstraint = propagator.propagate();\n- Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n- Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0,0}, new long[]{rows-1,columns1-1}));\n- firstHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint1.getPrivacyLevel(),level));\n- Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0,columns1}, new long[]{rows,columns1+columns2-1}));\n- secondHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint2.getPrivacyLevel(),level));\n- }\n-\n@Test\npublic void generalOnlyListAppendPrivate1Test(){\ngeneralOnlyListAppendTest(new PrivacyConstraint(PrivacyLevel.Private), new PrivacyConstraint());\n@@ -204,25 +170,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\ngeneralOnlyListAppendTest(new PrivacyConstraint(PrivacyLevel.Private), new PrivacyConstraint(PrivacyLevel.PrivateAggregation));\n}\n- private void generalOnlyListAppendTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n- int length1 = 6;\n- List<Data> dataList1 = Arrays.asList(new Data[length1]);\n- ListObject input1 = new ListObject(dataList1);\n- int length2 = 11;\n- List<Data> dataList2 = Arrays.asList(new Data[length2]);\n- ListObject input2 = new ListObject(dataList2);\n- Propagator propagator = new ListAppendPropagator(input1, constraint1, input2, constraint2);\n- PrivacyConstraint mergedConstraint = propagator.propagate();\n- Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0}, new long[]{length1-1})\n- );\n- firstHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint1.getPrivacyLevel(),level));\n- Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[length1], new long[]{length1+length2-1})\n- );\n- secondHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint2.getPrivacyLevel(),level));\n- }\n-\n@Test\npublic void generalOnlyListRemoveAppendPrivate1Test(){\ngeneralOnlyListRemoveAppendTest(new PrivacyConstraint(PrivacyLevel.Private), new PrivacyConstraint(),\n@@ -265,27 +212,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nPrivacyLevel.Private, PrivacyLevel.Private);\n}\n- private void generalOnlyListRemoveAppendTest(\n- PrivacyConstraint constraint1, PrivacyConstraint constraint2, PrivacyLevel expected1, PrivacyLevel expected2){\n- int dataLength = 9;\n- List<Data> dataList = new ArrayList<>();\n- for ( int i = 0; i < dataLength; i++){\n- dataList.add(new 
DoubleObject(i));\n- }\n- ListObject inputList = new ListObject(dataList);\n-\n- int removePositionInt = 5;\n- ScalarObject removePosition = new IntObject(removePositionInt);\n-\n- PropagatorMultiReturn propagator = new ListRemovePropagator(inputList, constraint1, removePosition, constraint2);\n- PrivacyConstraint[] mergedConstraints = propagator.propagate();\n-\n- Assert.assertEquals(expected1, mergedConstraints[0].getPrivacyLevel());\n- Assert.assertEquals(expected2, mergedConstraints[1].getPrivacyLevel());\n- Assert.assertFalse(\"The first output constraint should have no fine-grained constraints\", mergedConstraints[0].hasFineGrainedConstraints());\n- Assert.assertFalse(\"The second output constraint should have no fine-grained constraints\", mergedConstraints[1].hasFineGrainedConstraints());\n- }\n-\n@Test\npublic void finegrainedRBindPrivate1(){\nPrivacyConstraint constraint1 = new PrivacyConstraint();\n@@ -340,29 +266,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nfinegrainedRBindTest(constraint1, constraint2);\n}\n- private void finegrainedRBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n- int columns = 2;\n- int rows1 = 4;\n- int rows2 = 3;\n- MatrixBlock inputMatrix1 = new MatrixBlock(rows1,columns,3);\n- MatrixBlock inputMatrix2 = new MatrixBlock(rows2,columns,4);\n- AppendPropagator propagator = new RBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n- PrivacyConstraint mergedConstraint = propagator.propagate();\n- Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n- Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0,0}, new long[]{rows1-1,columns-1}));\n- constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n- constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 1\",\n- firstHalfPrivacy.containsValue(constraint.getValue()))\n- );\n- Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{rows1,0}, new long[]{rows1+rows2-1,columns-1}));\n- constraint2.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n- constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 2\",\n- secondHalfPrivacy.containsValue(constraint.getValue()))\n- );\n- }\n-\n@Test\npublic void finegrainedCBindPrivate1(){\nPrivacyConstraint constraint1 = new PrivacyConstraint();\n@@ -417,29 +320,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nfinegrainedCBindTest(constraint1, constraint2);\n}\n- private void finegrainedCBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n- int rows = 6;\n- int columns1 = 4;\n- int columns2 = 3;\n- MatrixBlock inputMatrix1 = new MatrixBlock(rows,columns1,3);\n- MatrixBlock inputMatrix2 = new MatrixBlock(rows,columns2,4);\n- AppendPropagator propagator = new CBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n- PrivacyConstraint mergedConstraint = propagator.propagate();\n- Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n- Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0,0}, new long[]{rows-1,columns1-1}));\n- constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n- constraint -> Assert.assertTrue(\"Merged constraint should contain same 
privacy levels as input 1\",\n- firstHalfPrivacy.containsValue(constraint.getValue()))\n- );\n- Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0,columns1}, new long[]{rows,columns1+columns2-1}));\n- constraint2.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n- constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 2\",\n- secondHalfPrivacy.containsValue(constraint.getValue()))\n- );\n- }\n-\n@Test\npublic void finegrainedListAppendPrivate1(){\nPrivacyConstraint constraint1 = new PrivacyConstraint();\n@@ -494,39 +374,12 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nfinegrainedListAppendTest(constraint1, constraint2);\n}\n- private void finegrainedListAppendTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n- int length1 = 6;\n- List<Data> dataList1 = Arrays.asList(new Data[length1]);\n- ListObject input1 = new ListObject(dataList1);\n- int length2 = 11;\n- List<Data> dataList2 = Arrays.asList(new Data[length2]);\n- ListObject input2 = new ListObject(dataList2);\n- Propagator propagator = new ListAppendPropagator(input1, constraint1, input2, constraint2);\n- PrivacyConstraint mergedConstraint = propagator.propagate();\n- Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n- Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0}, new long[]{length1-1})\n- );\n- constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n- constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 1\",\n- firstHalfPrivacy.containsValue(constraint.getValue()))\n- );\n- Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{length1}, new long[]{length1+length2-1})\n- );\n- constraint2.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n- constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 2\",\n- secondHalfPrivacy.containsValue(constraint.getValue()))\n- );\n- }\n-\n@Test\npublic void testFunction(){\nint dataLength = 9;\nList<Data> dataList = new ArrayList<>();\n- for ( int i = 0; i < dataLength; i++){\n+ for ( int i = 0; i < dataLength; i++)\ndataList.add(new DoubleObject(i));\n- }\nListObject l = new ListObject(dataList);\nListObject lCopy = l.copy();\nint position = 4;\n@@ -591,38 +444,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nfinegrainedListRemoveAppendTest(constraint1, constraint2, PrivacyLevel.PrivateAggregation);\n}\n- private void finegrainedListRemoveAppendTest(\n- PrivacyConstraint constraint1, PrivacyConstraint constraint2, PrivacyLevel expectedOutput2){\n- finegrainedListRemoveAppendTest(constraint1, constraint2, expectedOutput2, false);\n- }\n-\n- private void finegrainedListRemoveAppendTest(\n- PrivacyConstraint constraint1, PrivacyConstraint constraint2, PrivacyLevel expectedOutput2, boolean singleElementPrivacy){\n- int dataLength = 9;\n- List<Data> dataList = new ArrayList<>();\n- for ( int i = 0; i < dataLength; i++){\n- dataList.add(new DoubleObject(i));\n- }\n- ListObject inputList = new ListObject(dataList);\n- int removePositionInt = 5;\n- ScalarObject removePosition = new IntObject(removePositionInt);\n- PropagatorMultiReturn propagator = new ListRemovePropagator(inputList, constraint1, removePosition, 
constraint2);\n- PrivacyConstraint[] mergedConstraints = propagator.propagate();\n-\n- if ( !singleElementPrivacy ){\n- Map<DataRange, PrivacyLevel> outputPrivacy = mergedConstraints[0].getFineGrainedPrivacy().getPrivacyLevel(\n- new DataRange(new long[]{0}, new long[]{dataLength-1})\n- );\n- constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n- constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 1\",\n- outputPrivacy.containsValue(constraint.getValue()))\n- );\n- }\n-\n- Assert.assertEquals(expectedOutput2, mergedConstraints[1].getPrivacyLevel());\n- Assert.assertFalse(mergedConstraints[1].hasFineGrainedConstraints());\n- }\n-\n@Test\npublic void integrationRBindTestNoneNone(){\nPrivacyConstraint pc1 = new PrivacyConstraint(PrivacyLevel.None);\n@@ -865,26 +686,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nintegrationCBindTest(constraint1, constraint2, pcExpected);\n}\n- private void integrationCBindTest(PrivacyConstraint privacyConstraint1, PrivacyConstraint privacyConstraint2,\n- PrivacyConstraint expectedOutput){\n- TestConfiguration config = getAndLoadTestConfiguration(TEST_NAME_CBIND);\n- fullDMLScriptName = SCRIPT_DIR + TEST_DIR + config.getTestScript() + \".dml\";\n-\n- int cols1 = 20;\n- int cols2 = 30;\n- int rows = 10;\n- double[][] A = getRandomMatrix(rows, cols1, -10, 10, 0.5, 1);\n- double[][] B = getRandomMatrix(rows, cols2, -10, 10, 0.5, 1);\n- writeInputMatrixWithMTD(\"A\", A, false, new MatrixCharacteristics(rows, cols1), privacyConstraint1);\n- writeInputMatrixWithMTD(\"B\", B, false, new MatrixCharacteristics(rows, cols2), privacyConstraint2);\n-\n- programArgs = new String[]{\"-nvargs\", \"A=\" + input(\"A\"), \"B=\" + input(\"B\"), \"C=\" + output(\"C\")};\n- runTest(true,false,null,-1);\n-\n- PrivacyConstraint outputConstraint = getPrivacyConstraintFromMetaData(\"C\");\n- Assert.assertEquals(expectedOutput, outputConstraint);\n- }\n-\n@Test\npublic void integrationStringAppendTestNoneNone(){\nPrivacyConstraint pc1 = new PrivacyConstraint(PrivacyLevel.None);\n@@ -920,25 +721,6 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nintegrationStringAppendTest(pc1, pc2, pc2);\n}\n- private void integrationStringAppendTest(PrivacyConstraint privacyConstraint1, PrivacyConstraint privacyConstraint2,\n- PrivacyConstraint expectedOutput){\n- TestConfiguration config = getAndLoadTestConfiguration(TEST_NAME_STRING);\n- fullDMLScriptName = SCRIPT_DIR + TEST_DIR + config.getTestScript() + \".dml\";\n-\n- int cols = 1;\n- int rows = 1;\n- double[][] A = getRandomMatrix(rows, cols, -10, 10, 0.5, 1);\n- double[][] B = getRandomMatrix(rows, cols, -10, 10, 0.5, 1);\n- writeInputMatrixWithMTD(\"A\", A, false, new MatrixCharacteristics(rows, cols), privacyConstraint1);\n- writeInputMatrixWithMTD(\"B\", B, false, new MatrixCharacteristics(rows, cols), privacyConstraint2);\n-\n- programArgs = new String[]{\"-nvargs\", \"A=\" + input(\"A\"), \"B=\" + input(\"B\"), \"C=\" + output(\"C\")};\n- runTest(true,false,null,-1);\n-\n- PrivacyConstraint outputConstraint = getPrivacyConstraintFromMetaData(\"C\");\n- Assert.assertEquals(expectedOutput, outputConstraint);\n- }\n-\n@Ignore\n@Test\npublic void integrationListAppendTestNoneNone(){\n@@ -981,6 +763,223 @@ public class AppendPropagatorTest extends AutomatedTestBase {\nintegrationListAppendTest(pc1, pc2, pc2);\n}\n+ private static void generalOnlyRBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n+ int columns = 2;\n+ int 
rows1 = 4;\n+ int rows2 = 3;\n+ MatrixBlock inputMatrix1 = new MatrixBlock(rows1,columns,3);\n+ MatrixBlock inputMatrix2 = new MatrixBlock(rows2,columns,4);\n+ AppendPropagator propagator = new RBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n+ PrivacyConstraint mergedConstraint = propagator.propagate();\n+ Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n+ Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0,0}, new long[]{rows1-1,columns-1}));\n+ firstHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint1.getPrivacyLevel(),level));\n+ Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{rows1,0}, new long[]{rows1+rows2-1,columns-1}));\n+ secondHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint2.getPrivacyLevel(),level));\n+ }\n+\n+ private static void generalOnlyCBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n+ int rows = 2;\n+ int columns1 = 4;\n+ int columns2 = 3;\n+ MatrixBlock inputMatrix1 = new MatrixBlock(rows,columns1,3);\n+ MatrixBlock inputMatrix2 = new MatrixBlock(rows,columns2,4);\n+ AppendPropagator propagator = new CBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n+ PrivacyConstraint mergedConstraint = propagator.propagate();\n+ Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n+ Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0,0}, new long[]{rows-1,columns1-1}));\n+ firstHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint1.getPrivacyLevel(),level));\n+ Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0,columns1}, new long[]{rows,columns1+columns2-1}));\n+ secondHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint2.getPrivacyLevel(),level));\n+ }\n+\n+ private static void generalOnlyListAppendTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n+ int length1 = 6;\n+ List<Data> dataList1 = Arrays.asList(new Data[length1]);\n+ ListObject input1 = new ListObject(dataList1);\n+ int length2 = 11;\n+ List<Data> dataList2 = Arrays.asList(new Data[length2]);\n+ ListObject input2 = new ListObject(dataList2);\n+ Propagator propagator = new ListAppendPropagator(input1, constraint1, input2, constraint2);\n+ PrivacyConstraint mergedConstraint = propagator.propagate();\n+ Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0}, new long[]{length1-1})\n+ );\n+ firstHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint1.getPrivacyLevel(),level));\n+ Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[length1], new long[]{length1+length2-1})\n+ );\n+ secondHalfPrivacy.forEach((range,level) -> Assert.assertEquals(constraint2.getPrivacyLevel(),level));\n+ }\n+\n+ private static void generalOnlyListRemoveAppendTest(\n+ PrivacyConstraint constraint1, PrivacyConstraint constraint2, PrivacyLevel expected1, PrivacyLevel expected2){\n+ int dataLength = 9;\n+ List<Data> dataList = new ArrayList<>();\n+ for ( int i = 0; i < dataLength; i++){\n+ dataList.add(new DoubleObject(i));\n+ }\n+ ListObject 
inputList = new ListObject(dataList);\n+\n+ int removePositionInt = 5;\n+ ScalarObject removePosition = new IntObject(removePositionInt);\n+\n+ PropagatorMultiReturn propagator = new ListRemovePropagator(inputList, constraint1, removePosition, constraint2);\n+ PrivacyConstraint[] mergedConstraints = propagator.propagate();\n+\n+ Assert.assertEquals(expected1, mergedConstraints[0].getPrivacyLevel());\n+ Assert.assertEquals(expected2, mergedConstraints[1].getPrivacyLevel());\n+ Assert.assertFalse(\"The first output constraint should have no fine-grained constraints\", mergedConstraints[0].hasFineGrainedConstraints());\n+ Assert.assertFalse(\"The second output constraint should have no fine-grained constraints\", mergedConstraints[1].hasFineGrainedConstraints());\n+ }\n+\n+ private static void finegrainedRBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n+ int columns = 2;\n+ int rows1 = 4;\n+ int rows2 = 3;\n+ MatrixBlock inputMatrix1 = new MatrixBlock(rows1,columns,3);\n+ MatrixBlock inputMatrix2 = new MatrixBlock(rows2,columns,4);\n+ AppendPropagator propagator = new RBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n+ PrivacyConstraint mergedConstraint = propagator.propagate();\n+ Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n+ Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0,0}, new long[]{rows1-1,columns-1}));\n+ constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n+ constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 1\",\n+ firstHalfPrivacy.containsValue(constraint.getValue()))\n+ );\n+ Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{rows1,0}, new long[]{rows1+rows2-1,columns-1}));\n+ constraint2.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n+ constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 2\",\n+ secondHalfPrivacy.containsValue(constraint.getValue()))\n+ );\n+ }\n+\n+ private static void finegrainedCBindTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n+ int rows = 6;\n+ int columns1 = 4;\n+ int columns2 = 3;\n+ MatrixBlock inputMatrix1 = new MatrixBlock(rows,columns1,3);\n+ MatrixBlock inputMatrix2 = new MatrixBlock(rows,columns2,4);\n+ AppendPropagator propagator = new CBindPropagator(inputMatrix1, constraint1, inputMatrix2, constraint2);\n+ PrivacyConstraint mergedConstraint = propagator.propagate();\n+ Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n+ Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0,0}, new long[]{rows-1,columns1-1}));\n+ constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n+ constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 1\",\n+ firstHalfPrivacy.containsValue(constraint.getValue()))\n+ );\n+ Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0,columns1}, new long[]{rows,columns1+columns2-1}));\n+ constraint2.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n+ constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 2\",\n+ 
secondHalfPrivacy.containsValue(constraint.getValue()))\n+ );\n+ }\n+\n+ private static void finegrainedListAppendTest(PrivacyConstraint constraint1, PrivacyConstraint constraint2){\n+ int length1 = 6;\n+ List<Data> dataList1 = Arrays.asList(new Data[length1]);\n+ ListObject input1 = new ListObject(dataList1);\n+ int length2 = 11;\n+ List<Data> dataList2 = Arrays.asList(new Data[length2]);\n+ ListObject input2 = new ListObject(dataList2);\n+ Propagator propagator = new ListAppendPropagator(input1, constraint1, input2, constraint2);\n+ PrivacyConstraint mergedConstraint = propagator.propagate();\n+ Assert.assertEquals(mergedConstraint.getPrivacyLevel(), PrivacyLevel.None);\n+ Map<DataRange, PrivacyLevel> firstHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0}, new long[]{length1-1})\n+ );\n+ constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n+ constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 1\",\n+ firstHalfPrivacy.containsValue(constraint.getValue()))\n+ );\n+ Map<DataRange, PrivacyLevel> secondHalfPrivacy = mergedConstraint.getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{length1}, new long[]{length1+length2-1})\n+ );\n+ constraint2.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n+ constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 2\",\n+ secondHalfPrivacy.containsValue(constraint.getValue()))\n+ );\n+ }\n+\n+ private static void finegrainedListRemoveAppendTest(\n+ PrivacyConstraint constraint1, PrivacyConstraint constraint2, PrivacyLevel expectedOutput2){\n+ finegrainedListRemoveAppendTest(constraint1, constraint2, expectedOutput2, false);\n+ }\n+\n+ private static void finegrainedListRemoveAppendTest(\n+ PrivacyConstraint constraint1, PrivacyConstraint constraint2, PrivacyLevel expectedOutput2, boolean singleElementPrivacy){\n+ int dataLength = 9;\n+ List<Data> dataList = new ArrayList<>();\n+ for ( int i = 0; i < dataLength; i++){\n+ dataList.add(new DoubleObject(i));\n+ }\n+ ListObject inputList = new ListObject(dataList);\n+ int removePositionInt = 5;\n+ ScalarObject removePosition = new IntObject(removePositionInt);\n+ PropagatorMultiReturn propagator = new ListRemovePropagator(inputList, constraint1, removePosition, constraint2);\n+ PrivacyConstraint[] mergedConstraints = propagator.propagate();\n+\n+ if ( !singleElementPrivacy ){\n+ Map<DataRange, PrivacyLevel> outputPrivacy = mergedConstraints[0].getFineGrainedPrivacy().getPrivacyLevel(\n+ new DataRange(new long[]{0}, new long[]{dataLength-1})\n+ );\n+ constraint1.getFineGrainedPrivacy().getAllConstraintsList().forEach(\n+ constraint -> Assert.assertTrue(\"Merged constraint should contain same privacy levels as input 1\",\n+ outputPrivacy.containsValue(constraint.getValue()))\n+ );\n+ }\n+\n+ Assert.assertEquals(expectedOutput2, mergedConstraints[1].getPrivacyLevel());\n+ Assert.assertFalse(mergedConstraints[1].hasFineGrainedConstraints());\n+ }\n+\n+ private void integrationCBindTest(PrivacyConstraint privacyConstraint1, PrivacyConstraint privacyConstraint2,\n+ PrivacyConstraint expectedOutput){\n+ TestConfiguration config = getAndLoadTestConfiguration(TEST_NAME_CBIND);\n+ fullDMLScriptName = SCRIPT_DIR + TEST_DIR + config.getTestScript() + \".dml\";\n+\n+ int cols1 = 20;\n+ int cols2 = 30;\n+ int rows = 10;\n+ double[][] A = getRandomMatrix(rows, cols1, -10, 10, 0.5, 1);\n+ double[][] B = getRandomMatrix(rows, cols2, -10, 10, 0.5, 
1);\n+ writeInputMatrixWithMTD(\"A\", A, false, new MatrixCharacteristics(rows, cols1), privacyConstraint1);\n+ writeInputMatrixWithMTD(\"B\", B, false, new MatrixCharacteristics(rows, cols2), privacyConstraint2);\n+\n+ programArgs = new String[]{\"-nvargs\", \"A=\" + input(\"A\"), \"B=\" + input(\"B\"), \"C=\" + output(\"C\")};\n+ runTest(true,false,null,-1);\n+\n+ PrivacyConstraint outputConstraint = getPrivacyConstraintFromMetaData(\"C\");\n+ Assert.assertEquals(expectedOutput, outputConstraint);\n+ }\n+\n+ private void integrationStringAppendTest(PrivacyConstraint privacyConstraint1, PrivacyConstraint privacyConstraint2,\n+ PrivacyConstraint expectedOutput){\n+ TestConfiguration config = getAndLoadTestConfiguration(TEST_NAME_STRING);\n+ fullDMLScriptName = SCRIPT_DIR + TEST_DIR + config.getTestScript() + \".dml\";\n+\n+ int cols = 1;\n+ int rows = 1;\n+ double[][] A = getRandomMatrix(rows, cols, -10, 10, 0.5, 1);\n+ double[][] B = getRandomMatrix(rows, cols, -10, 10, 0.5, 1);\n+ writeInputMatrixWithMTD(\"A\", A, false, new MatrixCharacteristics(rows, cols), privacyConstraint1);\n+ writeInputMatrixWithMTD(\"B\", B, false, new MatrixCharacteristics(rows, cols), privacyConstraint2);\n+\n+ programArgs = new String[]{\"-nvargs\", \"A=\" + input(\"A\"), \"B=\" + input(\"B\"), \"C=\" + output(\"C\")};\n+ runTest(true,false,null,-1);\n+\n+ PrivacyConstraint outputConstraint = getPrivacyConstraintFromMetaData(\"C\");\n+ Assert.assertEquals(expectedOutput, outputConstraint);\n+ }\n+\nprivate void integrationListAppendTest(PrivacyConstraint privacyConstraint1, PrivacyConstraint privacyConstraint2,\nPrivacyConstraint expectedOutput){\nTestConfiguration config = getAndLoadTestConfiguration(TEST_NAME_LIST);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Cleanup warnings (imports, serial IDs, static, formatting) |
49,738 | 31.10.2020 20:25:59 | -3,600 | 998d82e27b8add5a0ca55ac687f0bfd9abe54c8b | Extended federated binary element-wise operations
This patch generalizes the existing federated binary element-wise
operations to avoid unsupported scenarios. Specifically, if the
right-hand-side (instead of left-hand-side) matrix is federated
and the operation is commutative (e.g., mult/add), we canonicalize the
inputs accordingly. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryMatrixMatrixFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryMatrixMatrixFEDInstruction.java",
"diff": "@@ -25,6 +25,7 @@ import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.matrix.operators.BinaryOperator;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\npublic class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\n@@ -39,8 +40,16 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\nMatrixObject mo1 = ec.getMatrixObject(input1);\nMatrixObject mo2 = ec.getMatrixObject(input2);\n- FederatedRequest fr2 = null;\n+ //canonicalization for federated lhs\n+ if( !mo1.isFederated() && mo2.isFederated()\n+ && mo1.getDataCharacteristics().equalDims(mo2.getDataCharacteristics())\n+ && ((BinaryOperator)_optr).isCommutative() ) {\n+ mo1 = ec.getMatrixObject(input2);\n+ mo2 = ec.getMatrixObject(input1);\n+ }\n+ //execute federated operation on mo1 or mo2\n+ FederatedRequest fr2 = null;\nif( mo2.isFederated() ) {\nif(mo1.isFederated() && mo1.getFedMapping().isAligned(mo2.getFedMapping(), false)) {\nfr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\n@@ -48,12 +57,12 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\nmo1.getFedMapping().execute(getTID(), true, fr2);\n}\nelse {\n- throw new DMLRuntimeException(\"Matrix-matrix binary operations \"\n- + \" with a federated right input are not supported yet.\");\n+ throw new DMLRuntimeException(\"Matrix-matrix binary operations with a \"\n+ + \"federated right input are only supported for special cases yet.\");\n}\n}\nelse {\n- //matrix-matrix binary oFederatedRequest fr2 = null;perations -> lhs fed input -> fed output\n+ //matrix-matrix binary operations -> lhs fed input -> fed output\nif(mo2.getNumRows() > 1 && mo2.getNumColumns() == 1 ) { //MV row vector\nFederatedRequest[] fr1 = mo1.getFedMapping().broadcastSliced(mo2, false);\nfr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/operators/BinaryOperator.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/operators/BinaryOperator.java",
"diff": "@@ -56,6 +56,7 @@ public class BinaryOperator extends Operator implements Serializable\nprivate static final long serialVersionUID = -2547950181558989209L;\npublic final ValueFunction fn;\n+ public final boolean commutative;\npublic BinaryOperator(ValueFunction p) {\n//binaryop is sparse-safe iff (0 op 0) == 0\n@@ -65,6 +66,8 @@ public class BinaryOperator extends Operator implements Serializable\n|| p instanceof BitwAnd || p instanceof BitwOr || p instanceof BitwXor\n|| p instanceof BitwShiftL || p instanceof BitwShiftR);\nfn = p;\n+ commutative = p instanceof Plus || p instanceof Multiply\n+ || p instanceof And || p instanceof Or || p instanceof Xor;\n}\n/**\n@@ -111,6 +114,10 @@ public class BinaryOperator extends Operator implements Serializable\nreturn null;\n}\n+ public boolean isCommutative() {\n+ return commutative;\n+ }\n+\n@Override\npublic String toString() {\nreturn \"BinaryOperator(\"+fn.getClass().getSimpleName()+\")\";\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/meta/DataCharacteristics.java",
"new_path": "src/main/java/org/apache/sysds/runtime/meta/DataCharacteristics.java",
"diff": "@@ -188,6 +188,8 @@ public abstract class DataCharacteristics implements Serializable {\ndimOut.set(dim1.getRows(), dim2.getCols(), dim1.getBlocksize());\n}\n+ public abstract boolean equalDims(Object anObject);\n+\n@Override\npublic abstract boolean equals(Object anObject);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/meta/MatrixCharacteristics.java",
"new_path": "src/main/java/org/apache/sysds/runtime/meta/MatrixCharacteristics.java",
"diff": "@@ -230,6 +230,16 @@ public class MatrixCharacteristics extends DataCharacteristics\n|| (nonZero < numRows*numColumns - singleBlk);\n}\n+ @Override\n+ public boolean equalDims(Object anObject) {\n+ if( !(anObject instanceof MatrixCharacteristics) )\n+ return false;\n+ MatrixCharacteristics mc = (MatrixCharacteristics) anObject;\n+ return dimsKnown() && mc.dimsKnown()\n+ && numRows == mc.numRows\n+ && numColumns == mc.numColumns;\n+ }\n+\n@Override\npublic boolean equals (Object anObject) {\nif( !(anObject instanceof MatrixCharacteristics) )\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/meta/TensorCharacteristics.java",
"new_path": "src/main/java/org/apache/sysds/runtime/meta/TensorCharacteristics.java",
"diff": "@@ -156,6 +156,15 @@ public class TensorCharacteristics extends DataCharacteristics\nreturn \"[\"+Arrays.toString(_dims)+\", nnz=\"+_nnz + \", blocksize= \"+_blocksize+\"]\";\n}\n+ @Override\n+ public boolean equalDims(Object anObject) {\n+ if( !(anObject instanceof TensorCharacteristics) )\n+ return false;\n+ TensorCharacteristics tc = (TensorCharacteristics) anObject;\n+ return dimsKnown() && tc.dimsKnown()\n+ && Arrays.equals(_dims, tc._dims);\n+ }\n+\n@Override\npublic boolean equals (Object anObject) {\nif( !(anObject instanceof TensorCharacteristics) )\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedGLMTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedGLMTest.java",
"diff": "@@ -123,7 +123,7 @@ public class FederatedGLMTest extends AutomatedTestBase {\nAssert.assertTrue(heavyHittersContainsString(\"fed_ba+*\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_uark+\",\"fed_uarsqk+\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_uack+\"));\n- Assert.assertTrue(heavyHittersContainsString(\"fed_uak+\"));\n+ //Assert.assertTrue(heavyHittersContainsString(\"fed_uak+\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_mmchain\"));\n//check that federated input files are still existing\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedKmeansTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedKmeansTest.java",
"diff": "@@ -128,8 +128,10 @@ public class FederatedKmeansTest extends AutomatedTestBase {\n// check for federated operations\nAssert.assertTrue(heavyHittersContainsString(\"fed_ba+*\"));\n- Assert.assertTrue(heavyHittersContainsString(\"fed_uasqk+\"));\n+ //Assert.assertTrue(heavyHittersContainsString(\"fed_uasqk+\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_uarmin\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_uark+\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_uack+\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_*\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_+\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_<=\"));\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2549] Extended federated binary element-wise operations
This patch generalizes the existing federated binary element-wise
operations to avoid unsupported scenarios. Specifically, if the
right-hand-side (instead of left-hand-side) matrix is federated
and the operation is commutative (e.g., mult/add), we canonicalize the
inputs accordingly. |
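To illustrate the canonicalization described above, the following is a minimal, self-contained Java sketch of the control flow, assuming a hypothetical `Input` stand-in rather than the SystemDS `MatrixObject`/`BinaryOperator` API: if only the right operand is federated, the dimensions match, and the operator is commutative, the operands are swapped so the federated input becomes the left-hand side.

```java
// Illustrative sketch only (not SystemDS code): swap the operands of a
// commutative binary operation so that a federated right-hand side becomes
// the left-hand side, mirroring the canonicalization in the commit above.
final class CommutativeCanonicalizationSketch {

    /** Minimal stand-in for a matrix input with federation info and dimensions. */
    static final class Input {
        final boolean federated;
        final int rows, cols;
        Input(boolean federated, int rows, int cols) {
            this.federated = federated;
            this.rows = rows;
            this.cols = cols;
        }
    }

    /** Returns {lhs, rhs} in canonical order: federated side first when allowed. */
    static Input[] canonicalize(Input lhs, Input rhs, boolean commutative) {
        boolean swap = !lhs.federated && rhs.federated
            && lhs.rows == rhs.rows && lhs.cols == rhs.cols // equal dims required
            && commutative;                                  // e.g., +, *, and, or, xor
        return swap ? new Input[] {rhs, lhs} : new Input[] {lhs, rhs};
    }

    public static void main(String[] args) {
        Input local = new Input(false, 100, 10);
        Input fed   = new Input(true, 100, 10);
        Input[] ordered = canonicalize(local, fed, true); // mult/add are commutative
        System.out.println("federated operand first: " + ordered[0].federated); // true
    }
}
```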
49,738 | 31.10.2020 21:34:27 | -3,600 | 9f41108cc498e13f03095d4ebb1b903cde9010ec | Fix missing federated unary aggregate for scalar mean
With the fixed missing size propagation for federated init statements,
rewrites now trigger, exposing operations we don't support yet.
Besides the existing row means and column means, this patch also adds
support for full mean aggregates. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"diff": "@@ -98,7 +98,9 @@ public class FederationUtils {\nMatrixBlock ret = null;\nlong size = 0;\nfor(int i=0; i<ffr.length; i++) {\n- MatrixBlock tmp = (MatrixBlock)ffr[i].get().getData()[0];\n+ Object input = ffr[i].get().getData()[0];\n+ MatrixBlock tmp = (input instanceof ScalarObject) ?\n+ new MatrixBlock(((ScalarObject)input).getDoubleValue()) : (MatrixBlock) input;\nsize += ranges[i].getSize(0);\nsop1 = sop1.setConstant(ranges[i].getSize(0));\ntmp = tmp.scalarOperations(sop1, new MatrixBlock());\n@@ -167,10 +169,11 @@ public class FederationUtils {\n}\n}\n- public static ScalarObject aggScalar(AggregateUnaryOperator aop, Future<FederatedResponse>[] ffr) {\n+ public static ScalarObject aggScalar(AggregateUnaryOperator aop, Future<FederatedResponse>[] ffr, FederationMap map) {\nif(!(aop.aggOp.increOp.fn instanceof KahanFunction || (aop.aggOp.increOp.fn instanceof Builtin &&\n- (((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN ||\n- ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MAX)))) {\n+ (((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN\n+ || ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MAX)\n+ || aop.aggOp.increOp.fn instanceof Mean ))) {\nthrow new DMLRuntimeException(\"Unsupported aggregation operator: \"\n+ aop.aggOp.increOp.getClass().getSimpleName());\n}\n@@ -181,7 +184,10 @@ public class FederationUtils {\nboolean isMin = ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN;\nreturn new DoubleObject(aggMinMax(ffr, isMin, true, Optional.empty()).getValue(0,0));\n}\n- else {\n+ else if( aop.aggOp.increOp.fn instanceof Mean ) {\n+ return new DoubleObject(aggMean(ffr, map).getValue(0,0));\n+ }\n+ else { //if (aop.aggOp.increOp.fn instanceof KahanFunction)\ndouble sum = 0; //uak+\nfor( Future<FederatedResponse> fr : ffr )\nsum += ((ScalarObject)fr.get().getData()[0]).getDoubleValue();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateUnaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateUnaryFEDInstruction.java",
"diff": "@@ -66,7 +66,7 @@ public class AggregateUnaryFEDInstruction extends UnaryFEDInstruction {\n//execute federated commands and cleanups\nFuture<FederatedResponse>[] tmp = map.execute(getTID(), fr1, fr2, fr3);\nif( output.isScalar() )\n- ec.setVariable(output.getName(), FederationUtils.aggScalar(aop, tmp));\n+ ec.setVariable(output.getName(), FederationUtils.aggScalar(aop, tmp, map));\nelse\nec.setMatrixOutput(output.getName(), FederationUtils.aggMatrix(aop, tmp, map));\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2709] Fix missing federated unary aggregate for scalar mean
With the fixed missing size propagation for federated init statements,
rewrites now trigger, exposing operations we don't support yet.
Besides the existing row means and column means, this patch also adds
support for full mean aggregates. |
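To illustrate the full mean aggregate described above, here is a minimal, self-contained Java sketch, with hypothetical names rather than the actual `FederationUtils` API, that composes a global scalar mean from per-worker partial means weighted by the partition sizes (global mean = sum_i(size_i * mean_i) / sum_i(size_i)).

```java
// Illustrative sketch only: combine per-worker partial means into a global
// scalar mean, weighting each partial mean by the number of cells its
// worker holds. Not the actual FederationUtils.aggScalar implementation.
final class FederatedMeanSketch {

    /** partialMeans[i] is worker i's local mean over sizes[i] cells. */
    static double globalMean(double[] partialMeans, long[] sizes) {
        double weightedSum = 0;
        long total = 0;
        for (int i = 0; i < partialMeans.length; i++) {
            weightedSum += partialMeans[i] * sizes[i];
            total += sizes[i];
        }
        return weightedSum / total;
    }

    public static void main(String[] args) {
        // two workers: mean 2.0 over 4 cells, mean 5.0 over 6 cells -> 3.8
        System.out.println(globalMean(new double[] {2.0, 5.0}, new long[] {4, 6}));
    }
}
```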
49,738 | 31.10.2020 21:41:51 | -3,600 | e96e1cbafc53e90f64c08899cccbc1f02a9e46b3 | [MINOR] Fix null-pointer exceptions in new federated I/O tests
In order to allow running these tests in isolation, they need to enable
output buffering and cannot rely on running in the same JVM where some
other test might have already enabled it. | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedReaderTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedReaderTest.java",
"diff": "@@ -21,7 +21,6 @@ package org.apache.sysds.test.functions.federated.io;\nimport java.util.Arrays;\nimport java.util.Collection;\n-import org.apache.sysds.api.DMLScript;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\n@@ -70,13 +69,9 @@ public class FederatedReaderTest extends AutomatedTestBase {\n}\npublic void federatedRead(Types.ExecMode execMode) {\n- boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n- Types.ExecMode platformOld = rtplatform;\n- rtplatform = execMode;\n- if(rtplatform == Types.ExecMode.SPARK) {\n- DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n- }\n+ Types.ExecMode oldPlatform = setExecMode(execMode);\ngetAndLoadTestConfiguration(TEST_NAME);\n+ setOutputBuffering(true);\n// write input matrices\nint halfRows = rows / 2;\n@@ -95,15 +90,9 @@ public class FederatedReaderTest extends AutomatedTestBase {\nThread t2 = startLocalFedWorkerThread(port2);\nString host = \"localhost\";\n- MatrixObject fed = FederatedTestObjectConstructor.constructFederatedInput(rows,\n- cols,\n- blocksize,\n- host,\n- begins,\n- ends,\n- new int[] {port1, port2},\n- new String[] {input(\"X1\"), input(\"X2\")},\n- input(\"X.json\"));\n+ MatrixObject fed = FederatedTestObjectConstructor.constructFederatedInput(\n+ rows, cols, blocksize, host, begins, ends, new int[] {port1, port2},\n+ new String[] {input(\"X1\"), input(\"X2\")}, input(\"X.json\"));\nwriteInputFederatedWithMTD(\"X.json\", fed, null);\ntry {\n@@ -120,16 +109,16 @@ public class FederatedReaderTest extends AutomatedTestBase {\nAssert.assertTrue(heavyHittersContainsString(\"fed_uak+\"));\n// Verify output\nAssert.assertEquals(Double.parseDouble(refOut.split(\"\\n\")[0]),\n- Double.parseDouble(out.split(\"\\n\")[0]),\n- 0.00001);\n+ Double.parseDouble(out.split(\"\\n\")[0]), 0.00001);\n}\ncatch(Exception e) {\ne.printStackTrace();\nAssert.assertTrue(false);\n}\n+ finally {\n+ resetExecMode(oldPlatform);\n+ }\nTestUtils.shutdownThreads(t1, t2);\n- rtplatform = platformOld;\n- DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedWriterTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedWriterTest.java",
"diff": "@@ -21,8 +21,7 @@ package org.apache.sysds.test.functions.federated.io;\nimport java.util.Arrays;\nimport java.util.Collection;\n-import org.apache.sysds.api.DMLScript;\n-import org.apache.sysds.common.Types;\n+import org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\n@@ -65,17 +64,13 @@ public class FederatedWriterTest extends AutomatedTestBase {\n@Test\npublic void federatedSinglenodeWrite() {\n- federatedWrite(Types.ExecMode.SINGLE_NODE);\n+ federatedWrite(ExecMode.SINGLE_NODE);\n}\n- public void federatedWrite(Types.ExecMode execMode) {\n- boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n- Types.ExecMode platformOld = rtplatform;\n- rtplatform = execMode;\n- if(rtplatform == Types.ExecMode.SPARK) {\n- DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n- }\n+ public void federatedWrite(ExecMode execMode) {\n+ ExecMode oldPlatform = setExecMode(execMode);\ngetAndLoadTestConfiguration(TEST_NAME);\n+ setOutputBuffering(true);\n// write input matrices\nint halfRows = rows / 2;\n@@ -97,10 +92,7 @@ public class FederatedWriterTest extends AutomatedTestBase {\nfullDMLScriptName = SCRIPT_DIR + \"functions/federated/io/FederatedReaderTestCreate.dml\";\nprogramArgs = new String[] {\"-stats\", \"-explain\", \"-args\", input(\"X1\"), input(\"X2\"), port1 + \"\", port2 + \"\",\ninput(\"X.json\")};\n- // String writer = runTest(null).toString();\nrunTest(null);\n- // LOG.error(writer);\n- // LOG.error(\"Writing Done\");\n// Run reference dml script with normal matrix\nfullDMLScriptName = SCRIPT_DIR + \"functions/federated/io/FederatedReaderTest.dml\";\n@@ -120,16 +112,16 @@ public class FederatedWriterTest extends AutomatedTestBase {\n// Verify output\nAssert.assertEquals(Double.parseDouble(refOut.split(\"\\n\")[0]),\n- Double.parseDouble(out.split(\"\\n\")[0]),\n- 0.00001);\n+ Double.parseDouble(out.split(\"\\n\")[0]), 0.00001);\n}\ncatch(Exception e) {\ne.printStackTrace();\nAssert.assertTrue(false);\n}\n+ finally {\n+ resetExecMode(oldPlatform);\n+ }\nTestUtils.shutdownThreads(t1, t2);\n- rtplatform = platformOld;\n- DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix null-pointer exceptions in new federated I/O tests
In order to allow running these tests in isolation they need to enable
output buffering and cannot rely on running in the same JVM where some
other test might have already enabled it. |
49,738 | 01.11.2020 11:47:53 | -3,600 | c18bf0e3e299e7fe3f2b928e2ec026339565b794 | Fix federated bivariate statistics tests (threaded) | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"diff": "@@ -1412,6 +1412,7 @@ public abstract class AutomatedTestBase {\n* @param port Port to use for the JVM\n* @return the process associated with the worker.\n*/\n+ @Deprecated\nprotected Process startLocalFedWorker(int port) {\nProcess process = null;\nString separator = System.getProperty(\"file.separator\");\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedBivarTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedBivarTest.java",
"diff": "@@ -108,10 +108,10 @@ public class FederatedBivarTest extends AutomatedTestBase {\nint port2 = getRandomAvailablePort();\nint port3 = getRandomAvailablePort();\nint port4 = getRandomAvailablePort();\n- Process t1 = startLocalFedWorker(port1);\n- Process t2 = startLocalFedWorker(port2);\n- Process t3 = startLocalFedWorker(port3);\n- Process t4 = startLocalFedWorker(port4);\n+ Thread t1 = startLocalFedWorkerThread(port1);\n+ Thread t2 = startLocalFedWorkerThread(port2);\n+ Thread t3 = startLocalFedWorkerThread(port3);\n+ Thread t4 = startLocalFedWorkerThread(port4);\nTestConfiguration config = availableTestConfigurations.get(TEST_NAME);\nloadTestConfiguration(config);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedStatisticsTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedStatisticsTest.java",
"diff": "package org.apache.sysds.test.functions.federated.primitives;\n-import java.io.BufferedReader;\n-import java.io.InputStreamReader;\nimport java.util.Arrays;\nimport java.util.Collection;\n@@ -97,20 +95,8 @@ public class FederatedStatisticsTest extends AutomatedTestBase {\nfullDMLScriptName = \"\";\nint port1 = getRandomAvailablePort();\nint port2 = getRandomAvailablePort();\n- Process t1 = startLocalFedWorker(port1);\n- Process t2 = startLocalFedWorker(port2);\n-\n- BufferedReader output = new BufferedReader(new InputStreamReader(t1.getInputStream()));\n- BufferedReader error = new BufferedReader(new InputStreamReader(t1.getInputStream()));\n-\n- Thread t = new Thread(() -> {\n- output.lines().forEach(s -> System.out.println(s));\n- });\n- Thread te = new Thread(() -> {\n- error.lines().forEach(s -> System.err.println(s));\n- });\n- t.start();\n- te.start();\n+ Thread t1 = startLocalFedWorkerThread(port1);\n+ Thread t2 = startLocalFedWorkerThread(port2);\nTestConfiguration config = availableTestConfigurations.get(TEST_NAME);\nloadTestConfiguration(config);\n@@ -133,7 +119,6 @@ public class FederatedStatisticsTest extends AutomatedTestBase {\ncompareResults(1e-9);\nTestUtils.shutdownThreads(t1, t2);\n- TestUtils.shutdownThreads(t, te);\n// check for federated operations\nAssert.assertTrue(\"contains federated matrix mult\", heavyHittersContainsString(\"fed_ba+*\"));\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2681] Fix federated bivariate statistics tests (threaded) |
49,706 | 02.11.2020 09:08:33 | -3,600 | b492ac4fc18092f0f591a83a50a3d7e0b46fb1b1 | Fix Python One hot encode
This commit fixes one-hot encoding in the Python API, where previously
the one-hot encode operation only allowed vector input. This in turn produced
errors when the input was a column vector, which is encoded as a
two-dimensional data structure. | [
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/operator/operation_node.py",
"new_path": "src/main/python/systemds/operator/operation_node.py",
"diff": "@@ -480,9 +480,9 @@ class OperationNode(DAGNode):\n\"\"\"\nself._check_matrix_op()\n- if len(self.shape) != 1:\n+ if len(self.shape) == 2 and self.shape[1] != 1:\nraise ValueError(\n- \"Only Matrixes with a single column or row is valid in One Hot, \" + str(self.shape) + \" is invalid\")\n+ \"Only Matrixes with a single column is valid in One Hot, \" + str(self.shape) + \" is invalid\")\nif num_classes < 2:\nraise ValueError(\"Number of classes should be larger than 1\")\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/tests/matrix/test_to_one_hot.py",
"new_path": "src/main/python/tests/matrix/test_to_one_hot.py",
"diff": "@@ -74,6 +74,21 @@ class TestMatrixOneHot(unittest.TestCase):\n# with self.assertRaises(ValueError) as context:\n# res = Matrix(self.sds, m1).to_one_hot(2).compute()\n+ def test_one_hot_matrix_1(self):\n+ m1 = np.array([[1],[2],[3]])\n+ res = Matrix(self.sds, m1).to_one_hot(3).compute()\n+ self.assertTrue((res == [[1,0,0], [0,1,0], [0,0,1]]).all())\n+\n+ def test_one_hot_matrix_2(self):\n+ m1 = np.array([[1],[3],[3]])\n+ res = Matrix(self.sds, m1).to_one_hot(3).compute()\n+ self.assertTrue((res == [[1,0,0], [0,0,1], [0,0,1]]).all())\n+\n+ def test_one_hot_matrix_3(self):\n+ m1 = np.array([[1],[2],[1]])\n+ res = Matrix(self.sds, m1).to_one_hot(2).compute()\n+ self.assertTrue((res == [[1,0], [0,1], [1,0]]).all())\n+\ndef test_neg_one_hot_numClasses(self):\nm1 = np.array([1])\nwith self.assertRaises(ValueError) as context:\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2711] Fix Python One hot encode
This commit fixes one-hot encoding in the Python API, where previously
the one-hot encode operation only allowed vector input. This in turn produced
errors when the input was a column vector, which is encoded as a
two-dimensional data structure. |
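For reference, the semantics the fixed to_one_hot wrapper provides can be sketched directly in DML via table(); this is an illustrative equivalent, assuming X is a column vector of integer class labels in [1, num_classes], and is not the literal code generated by the Python API.

```
X = matrix("1 2 1", rows=3, cols=1);                  # column-vector input, as now accepted
num_classes = 2;
Y = table(seq(1, nrow(X)), X, nrow(X), num_classes);  # one row per input, one column per class
print(toString(Y));                                   # [[1,0],[0,1],[1,0]], cf. test_one_hot_matrix_3
```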
49,693 | 02.11.2020 17:46:06 | -3,600 | 34116722d908dc5dec172104ab3a6efdfb71a8bc | Removing deprecation in test code
Resolving JIRA issue | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"diff": "@@ -94,7 +94,6 @@ import org.junit.Before;\n* </ul>\n*\n*/\n-@SuppressWarnings(\"deprecation\")\npublic abstract class AutomatedTestBase {\nprivate static final Log LOG = LogFactory.getLog(AutomatedTestBase.class.getName());\n@@ -996,25 +995,6 @@ public abstract class AutomatedTestBase {\nreturn String.format(\"<%s>%s</%s>\", tagName, value, tagName);\n}\n- /**\n- * <p>\n- * Loads a test configuration with its parameters. Adds the output directories to the output list as well as to the\n- * list of possible comparison files.\n- * </p>\n- *\n- * @param configurationName test configuration name\n- *\n- */\n- @Deprecated\n- protected void loadTestConfiguration(String configurationName) {\n- if(!availableTestConfigurations.containsKey(configurationName))\n- fail(\"test configuration not available: \" + configurationName);\n-\n- TestConfiguration config = availableTestConfigurations.get(configurationName);\n-\n- loadTestConfiguration(config);\n- }\n-\n/**\n* Runs an R script, default to the old way\n*/\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/TestUtils.java",
"new_path": "src/test/java/org/apache/sysds/test/TestUtils.java",
"diff": "@@ -60,6 +60,7 @@ import org.apache.hadoop.fs.FileStatus;\nimport org.apache.hadoop.fs.FileSystem;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.io.SequenceFile;\n+import org.apache.hadoop.io.SequenceFile.Writer;\nimport org.apache.sysds.common.Types.FileFormat;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.data.TensorBlock;\n@@ -462,10 +463,11 @@ public class TestUtils\n/**\n* Reads values from a matrix file in HDFS in DML format\n*\n- * @deprecated You should not use this method, it is recommended to use the\n- * corresponding method in AutomatedTestBase\n- * @param filePath\n- * @return\n+ * NOTE: For reading the output of a matrix produced by a JUnit test, use the convenience\n+ * function {@link AutomatedTestBase#readDMLMatrixFromHDFS(String)}\n+ *\n+ * @param filePath Path to the file to be read.\n+ * @return Matrix values in a hashmap <index,value>\n*/\npublic static HashMap<CellIndex, Double> readDMLMatrixFromHDFS(String filePath)\n{\n@@ -492,11 +494,11 @@ public class TestUtils\n/**\n* Reads values from a matrix file in OS's FS in R format\n*\n- * @deprecated You should not use this method, it is recommended to use the\n- * corresponding method in AutomatedTestBase\n+ * NOTE: For reading the output of a matrix produced by a R validation code of a JUnit test, use the convenience\n+ * function {@link AutomatedTestBase#readRMatrixFromFS(String)}\n*\n- * @param filePath\n- * @return\n+ * @param filePath Path to the file to be read.\n+ * @return Matrix values in a hashmap <index,value>\n*/\npublic static HashMap<CellIndex, Double> readRMatrixFromFS(String filePath)\n{\n@@ -2083,10 +2085,11 @@ public class TestUtils\nSequenceFile.Writer writer = null;\ntry {\nPath path = new Path(file);\n- FileSystem fs = IOUtilFunctions.getFileSystem(path, conf);\n- writer = new SequenceFile.Writer(fs, conf, path,\n- MatrixIndexes.class, MatrixCell.class);\n-\n+ Writer.Option filePath = Writer.file(path);\n+ Writer.Option keyClass = Writer.keyClass(MatrixIndexes.class);\n+ Writer.Option valueClass = Writer.valueClass(MatrixBlock.class);\n+ Writer.Option compression = Writer.compression(SequenceFile.CompressionType.NONE);\n+ writer = SequenceFile.createWriter(conf, filePath, keyClass, valueClass, compression);\nMatrixIndexes index = new MatrixIndexes();\nMatrixCell value = new MatrixCell();\nfor (int i = 0; i < matrix.length; i++) {\n@@ -2131,10 +2134,11 @@ public class TestUtils\ntry {\nPath path = new Path(file);\n- FileSystem fs = IOUtilFunctions.getFileSystem(path, conf);\n- writer = new SequenceFile.Writer(fs, conf, path,\n- MatrixIndexes.class, MatrixBlock.class);\n-\n+ Writer.Option filePath = Writer.file(path);\n+ Writer.Option keyClass = Writer.keyClass(MatrixIndexes.class);\n+ Writer.Option valueClass = Writer.valueClass(MatrixBlock.class);\n+ Writer.Option compression = Writer.compression(SequenceFile.CompressionType.NONE);\n+ writer = SequenceFile.createWriter(conf, filePath, keyClass, valueClass, compression);\nMatrixIndexes index = new MatrixIndexes();\nMatrixBlock value = new MatrixBlock();\nfor (int i = 0; i < matrix.length; i += rowsInBlock) {\n@@ -2142,7 +2146,7 @@ public class TestUtils\nfor (int j = 0; j < matrix[i].length; j += colsInBlock) {\nint cols = Math.min(colsInBlock, (matrix[i].length - j));\nindex.setIndexes(((i / rowsInBlock) + 1), ((j / colsInBlock) + 1));\n- value = new MatrixBlock(rows, cols, sparseFormat);\n+ value.reset(rows, cols, sparseFormat);\nfor (int k = 0; k < rows; k++) {\nfor (int l = 0; l < cols; 
l++) {\nvalue.setValue(k, l, matrix[i + k][j + l]);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/parfor/misc/ParForAdversarialLiteralsTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/parfor/misc/ParForAdversarialLiteralsTest.java",
"diff": "@@ -100,11 +100,9 @@ public class ParForAdversarialLiteralsTest extends AutomatedTestBase\nrunLiteralTest(TEST_NAME4b);\n}\n- @SuppressWarnings(\"deprecation\")\nprivate void runLiteralTest( String testName )\n{\n- String TEST_NAME = testName;\n- TestConfiguration config = getTestConfiguration(TEST_NAME);\n+ TestConfiguration config = getTestConfiguration(testName);\nconfig.addVariable(\"rows\", rows);\nconfig.addVariable(\"cols\", cols);\nloadTestConfiguration(config);\n@@ -114,18 +112,17 @@ public class ParForAdversarialLiteralsTest extends AutomatedTestBase\nString IN = \"A\";\nString OUT = (testName.equals(TEST_NAME1a)||testName.equals(TEST_NAME1b))?Lop.CP_ROOT_THREAD_ID:\"B\";\n- fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ fullDMLScriptName = HOME + testName + \".dml\";\nprogramArgs = new String[]{\"-args\", input(IN),\nInteger.toString(rows), Integer.toString(cols), output(OUT) };\n- fullRScriptName = HOME + TEST_NAME + \".R\";\n+ fullRScriptName = HOME + testName + \".R\";\nrCmd = \"Rscript\" + \" \" + fullRScriptName + \" \" + inputDir() + \" \" + expectedDir();\ndouble[][] A = getRandomMatrix(rows, cols, 0, 1, sparsity, 7);\nwriteInputMatrix(\"A\", A, false);\n- boolean exceptionExpected = false;\n- runTest(true, exceptionExpected, null, -1);\n+ runTest(true, false, null, -1);\n//compare matrices\nHashMap<CellIndex, Double> dmlin = TestUtils.readDMLMatrixFromHDFS(input(IN));\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-151] Removing deprecation in test code
Resolving JIRA issue https://issues.apache.org/jira/browse/SYSTEMDS-151 |
49,706 | 04.11.2020 12:38:31 | -3,600 | 20d132a8044549a689c7c32b0ce77ccfad77373a | [MINOR] Federated Parameter server multithreaded
This commit changes the parameter server to run with an unlimited number of
threads for each of the federated workers' instructions. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParamservBuiltinCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParamservBuiltinCPInstruction.java",
"diff": "@@ -96,6 +96,7 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\n@Override\npublic void processInstruction(ExecutionContext ec) {\n// check if the input is federated\n+\nif(ec.getMatrixObject(getParam(PS_FEATURES)).isFederated() ||\nec.getMatrixObject(getParam(PS_LABELS)).isFederated()) {\nrunFederated(ec);\n@@ -142,7 +143,7 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nLocalVariableMap newVarsMap = createVarsMap(ec);\n// Level of par is 1 because one worker will be launched per task\n// TODO: Fix recompilation\n- ExecutionContext newEC = ParamservUtils.createExecutionContext(ec, newVarsMap, updFunc, aggFunc, 1, true);\n+ ExecutionContext newEC = ParamservUtils.createExecutionContext(ec, newVarsMap, updFunc, aggFunc, -1, true);\n// Create workers' execution context\nList<ExecutionContext> federatedWorkerECs = ParamservUtils.copyExecutionContext(newEC, workerNum);\n// Create the agg service's execution context\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Federated Parameter server multithreaded
This commit changes the parameter server to run with an unlimited number of
threads for each of the federated workers' instructions. |
49,706 | 05.11.2020 00:06:55 | -3,600 | 0bf553627a7f2a55da05f355ac45d3c10457a822 | Seeded NN layers
Add seeds to the initialization of affine and Conv2d layers | [
{
"change_type": "MODIFY",
"old_path": "scripts/nn/layers/affine.dml",
"new_path": "scripts/nn/layers/affine.dml",
"diff": "@@ -64,7 +64,7 @@ backward = function(matrix[double] dout, matrix[double] X,\ndb = colSums(dout)\n}\n-init = function(int D, int M)\n+init = function(int D, int M, int seed = -1 )\nreturn (matrix[double] W, matrix[double] b) {\n/*\n* Initialize the parameters of this layer.\n@@ -81,12 +81,13 @@ init = function(int D, int M)\n* Inputs:\n* - D: Dimensionality of the input features (number of features).\n* - M: Number of neurons in this layer.\n+ * - seed: The seed to initialize the weights\n*\n* Outputs:\n* - W: Weights, of shape (D, M).\n* - b: Biases, of shape (1, M).\n*/\n- W = rand(rows=D, cols=M, pdf=\"normal\") * sqrt(2.0/D)\n+ W = rand(rows=D, cols=M, pdf=\"normal\", seed=seed) * sqrt(2.0/D)\nb = matrix(0, rows=1, cols=M)\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/nn/layers/conv2d.dml",
"new_path": "scripts/nn/layers/conv2d.dml",
"diff": "@@ -153,7 +153,7 @@ backward = function(matrix[double] dout, int Hout, int Wout,\n}\n}\n-init = function(int F, int C, int Hf, int Wf)\n+init = function(int F, int C, int Hf, int Wf, int seed = -1)\nreturn (matrix[double] W, matrix[double] b) {\n/*\n* Initialize the parameters of this layer.\n@@ -172,12 +172,13 @@ init = function(int F, int C, int Hf, int Wf)\n* - C: Number of input channels (dimensionality of depth).\n* - Hf: Filter height.\n* - Wf: Filter width.\n+ * - seed: The seed to initialize the weights\n*\n* Outputs:\n* - W: Weights, of shape (F, C*Hf*Wf).\n* - b: Biases, of shape (F, 1).\n*/\n- W = rand(rows=F, cols=C*Hf*Wf, pdf=\"normal\") * sqrt(2.0/(C*Hf*Wf))\n+ W = rand(rows=F, cols=C*Hf*Wf, pdf=\"normal\", seed=seed ) * sqrt(2.0/(C*Hf*Wf))\nb = matrix(0, rows=F, cols=1)\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/nn/layers/conv2d_builtin.dml",
"new_path": "scripts/nn/layers/conv2d_builtin.dml",
"diff": "@@ -131,7 +131,7 @@ backward = function(matrix[double] dout, int Hout, int Wout,\ndb = util::channel_sums(dout, F, Hout, Wout)\n}\n-init = function(int F, int C, int Hf, int Wf)\n+init = function(int F, int C, int Hf, int Wf, int seed = -1)\nreturn (matrix[double] W, matrix[double] b) {\n/*\n* Initialize the parameters of this layer.\n@@ -150,12 +150,13 @@ init = function(int F, int C, int Hf, int Wf)\n* - C: Number of input channels (dimensionality of depth).\n* - Hf: Filter height.\n* - Wf: Filter width.\n+ * - seed: The seed to initialize the weights\n*\n* Outputs:\n* - W: Weights, of shape (F, C*Hf*Wf).\n* - b: Biases, of shape (F, 1).\n*/\n- W = rand(rows=F, cols=C*Hf*Wf, pdf=\"normal\") * sqrt(2.0/(C*Hf*Wf))\n+ W = rand(rows=F, cols=C*Hf*Wf, pdf=\"normal\", seed=seed) * sqrt(2.0/(C*Hf*Wf))\nb = matrix(0, rows=F, cols=1)\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2714] Seeded NN layers
Add seeds to the initialization of affine and Conv2d layers |
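A short DML sketch of reproducible initialization with the new seed parameter; the init signatures follow the diff above, while the source() paths are assumptions to be adjusted to where the nn scripts live in your setup.

```
source("scripts/nn/layers/affine.dml") as affine
source("scripts/nn/layers/conv2d_builtin.dml") as conv2d

# same seed -> identical weights across runs
[W1, b1] = affine::init(784, 128, 42);        # D=784, M=128, seed=42
[W2, b2] = conv2d::init(32, 3, 5, 5, 42);     # F=32, C=3, Hf=5, Wf=5, seed=42
print(sum(W1));
```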
49,689 | 05.11.2020 00:38:08 | -3,600 | 1967f8bb23109b2d3c6b0692fbcbf22324295594 | [MINOR] Improve lineage cache spilling
This patch:
- adds lineage tracing for frame indexing,
- reduces the minimum computation-time threshold for spilling from 100ms to 10ms
Allowing more entries to be spilled to disk increases performance,
and makes the difference between the policies smaller. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/FrameIndexingCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/FrameIndexingCPInstruction.java",
"diff": "@@ -21,9 +21,12 @@ package org.apache.sysds.runtime.instructions.cp;\nimport org.apache.sysds.lops.LeftIndex;\nimport org.apache.sysds.lops.RightIndex;\n+import org.apache.commons.lang3.tuple.Pair;\nimport org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.lineage.LineageItem;\n+import org.apache.sysds.runtime.lineage.LineageItemUtils;\nimport org.apache.sysds.runtime.matrix.data.FrameBlock;\nimport org.apache.sysds.runtime.util.IndexRange;\n@@ -83,4 +86,10 @@ public final class FrameIndexingCPInstruction extends IndexingCPInstruction {\nelse\nthrow new DMLRuntimeException(\"Invalid opcode (\" + opcode +\") encountered in FrameIndexingCPInstruction.\");\n}\n+\n+ @Override\n+ public Pair<String, LineageItem> getLineageItem(ExecutionContext ec) {\n+ return Pair.of(output.getName(), new LineageItem(getOpcode(),\n+ LineageItemUtils.getLineage(ec, input1,input2,input3,rowLower,rowUpper,colLower,colUpper)));\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParameterizedBuiltinCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParameterizedBuiltinCPInstruction.java",
"diff": "@@ -446,7 +446,7 @@ public class ParameterizedBuiltinCPInstruction extends ComputationCPInstruction\n}\nelse if (opcode.equalsIgnoreCase(\"transformdecode\") ||\nopcode.equalsIgnoreCase(\"transformapply\")) {\n- CPOperand target = getTargetOperand();\n+ CPOperand target = new CPOperand(params.get(\"target\"), ValueType.FP64, DataType.FRAME);\nCPOperand meta = getLiteral(\"meta\", ValueType.UNKNOWN, DataType.FRAME);\nCPOperand spec = getStringLiteral(\"spec\");\nreturn Pair.of(output.getName(), new LineageItem(getOpcode(),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -75,9 +75,9 @@ public class LineageCacheConfig\nprivate static boolean _allowSpill = false;\n// Minimum reliable spilling estimate in milliseconds.\n- public static final double MIN_SPILL_TIME_ESTIMATE = 100;\n+ public static final double MIN_SPILL_TIME_ESTIMATE = 10;\n// Minimum reliable data size for spilling estimate in MB.\n- public static final double MIN_SPILL_DATA = 20;\n+ public static final double MIN_SPILL_DATA = 2;\n// Default I/O in MB per second for binary blocks\npublic static double FSREAD_DENSE = 200;\npublic static double FSREAD_SPARSE = 100;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"diff": "@@ -216,7 +216,7 @@ public class LineageCacheEviction\nif (exectime > LineageCacheConfig.MIN_SPILL_TIME_ESTIMATE) {\nSystem.out.print(\"LI \" + e._key.getOpcode());\nSystem.out.print(\" exec time \" + ((double) e._computeTime) / 1000000);\n- System.out.print(\" estimate time \" + getDiskSpillEstimate(e) * 1000);\n+ System.out.print(\" spill time \" + getDiskSpillEstimate(e) * 1000);\nSystem.out.print(\" dim \" + e.getMBValue().getNumRows() + \" \" + e.getMBValue().getNumColumns());\nSystem.out.println(\" size \" + getDiskSizeEstimate(e));\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Improve lineage cache spilling
This patch:
- adds lineage tracing for frame indexing,
- reduces the minimum computation-time threshold for spilling from 100ms to 10ms
Allowing more entries to be spilled to disk increases performance,
and makes the difference between the policies smaller. |
49,738 | 08.11.2020 20:28:06 | -3,600 | c8a543317394131463e25a7f95c90a8d0f1c14fb | [MINOR] Fix warnings (imports, resources) and wrong code formatting | [
{
"change_type": "DELETE",
"old_path": "src/main/cuda/ext/jitify",
"new_path": null,
"diff": "-Subproject commit 3e96bcceb9e42105f6a32315abb2af04585a55b0\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/ConfigurationManager.java",
"new_path": "src/main/java/org/apache/sysds/conf/ConfigurationManager.java",
"diff": "package org.apache.sysds.conf;\nimport org.apache.hadoop.mapred.JobConf;\n-import org.apache.sysds.api.DMLScript;\nimport org.apache.sysds.conf.CompilerConfig.ConfigType;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/SpoofCompiler.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/SpoofCompiler.java",
"diff": "@@ -266,7 +266,7 @@ public class SpoofCompiler {\n}\nprivate static void extractCodegenSources(String resource_path, String jar_path) throws IOException {\n- JarFile jar_file = new JarFile(jar_path);\n+ try(JarFile jar_file = new JarFile(jar_path)) {\nEnumeration<JarEntry> files_in_jar = jar_file.entries();\nwhile (files_in_jar.hasMoreElements()) {\n@@ -283,6 +283,7 @@ public class SpoofCompiler {\n}\n}\n}\n+ }\nprivate static boolean compile_cuda(String name, String src) {\nreturn compile_cuda_kernel(native_contexts.get(GeneratorAPI.CUDA), name, src);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNode.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNode.java",
"diff": "@@ -27,8 +27,6 @@ import org.apache.sysds.runtime.controlprogram.parfor.util.IDSequence;\nimport org.apache.sysds.runtime.util.UtilFunctions;\nimport org.apache.sysds.hops.codegen.SpoofCompiler.GeneratorAPI;\n-import static org.apache.sysds.hops.codegen.SpoofCompiler.GeneratorAPI.CUDA;\n-\npublic abstract class CNode\n{\nprivate static final IDSequence _seqVar = new IDSequence();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cpp/CellWise.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cpp/CellWise.java",
"diff": "package org.apache.sysds.hops.codegen.cplan.cpp;\n+import java.io.FileInputStream;\n+import java.io.IOException;\n+\nimport org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.conf.DMLConfig;\nimport org.apache.sysds.hops.codegen.cplan.CNodeBinary;\n@@ -28,8 +31,6 @@ import org.apache.sysds.hops.codegen.cplan.CodeTemplate;\nimport org.apache.sysds.runtime.codegen.SpoofCellwise;\nimport org.apache.sysds.runtime.io.IOUtilFunctions;\n-import java.io.*;\n-import java.util.stream.Collectors;\n// ToDo: clean code template and load from file\npublic class CellWise implements CodeTemplate {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/codegen/SpoofCUDA.java",
"new_path": "src/main/java/org/apache/sysds/runtime/codegen/SpoofCUDA.java",
"diff": "@@ -35,6 +35,7 @@ import org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport static org.apache.sysds.runtime.matrix.data.LibMatrixNative.isSinglePrecision;\npublic class SpoofCUDA extends SpoofOperator {\n+ private static final long serialVersionUID = -2161276866245388359L;\nprivate final CNodeTpl cnt;\npublic final String name;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/gpu/context/GPUContextPool.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/gpu/context/GPUContextPool.java",
"diff": "import static jcuda.driver.JCudaDriver.cuDeviceGetCount;\nimport static jcuda.driver.JCudaDriver.cuInit;\n-import static jcuda.runtime.JCuda.cudaGetDevice;\nimport static jcuda.runtime.JCuda.cudaGetDeviceProperties;\nimport java.util.ArrayList;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix warnings (imports, resources) and wrong code formatting |
49,738 | 09.11.2020 11:57:07 | -3,600 | 997ba10ac59414685a76f97739126eac1f415e97 | [MINOR] Fix ML context tests (lineage cleanup on object cleanup) | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"diff": "@@ -711,6 +711,7 @@ public abstract class CacheableData<T extends CacheBlock> extends Data\n// clear the in-memory data\n_data = null;\nclearCache();\n+ setCacheLineage(null);\n// clear rdd/broadcast back refs\nif( _rddHandle != null )\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/mlcontext/MLContextScratchCleanupTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/mlcontext/MLContextScratchCleanupTest.java",
"diff": "@@ -103,7 +103,6 @@ public class MLContextScratchCleanupTest extends AutomatedTestBase\nScript script2 = dmlFromFile(dml2).in(\"X\", X).out(\"z\");\nString z = ml.execute(script2).getString(\"z\");\n-\nSystem.out.println(z);\n}\ncatch(Exception ex) {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix ML context tests (lineage cleanup on object cleanup) |
49,738 | 10.11.2020 13:11:33 | -3,600 | e8153caf8107374548178344261ed21c9f77275a | New built-in function for train/test splitting
This patch introduces a new dml-bodied builtin function for common
train/test splitting of feature matrices and labels. We support two
types: contiguous (ranges of rows) and sampled (uniform selection of
rows without replacement). | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/split.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# Split input data X and y into contiguous or samples train/test sets\n+# ------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ------------------------------------------------------------------------------\n+# X Matrix --- Input feature matrix\n+# y Matrix --- Input\n+# f Double 0.7 Train set fraction [0,1]\n+# cont Boolean TRUE contiuous splits, otherwise sampled\n+# ------------------------------------------------------------------------------\n+# Xtrain Matrix --- Train split of feature matrix\n+# Xtest Matrix --- Test split of feature matrix\n+# ytrain Matrix --- Train split of label matrix\n+# ytest Matrix --- Test split of label matrix\n+# ------------------------------------------------------------------------------\n+\n+m_split = function(Matrix[Double] X, Matrix[Double] y, Double f=0.7, Boolean cont=TRUE)\n+ return (Matrix[Double] Xtrain, Matrix[Double] Xtest, Matrix[Double] ytrain, Matrix[Double] ytest)\n+{\n+ # basic sanity checks\n+ if( f <= 0 | f >= 1 )\n+ print(\"Invalid train/test split configuration: f=\"+f);\n+ if( nrow(X) != nrow(y) )\n+ print(\"Mismatching number of rows X and y: \"+nrow(X)+\" \"+nrow(y) )\n+\n+ # contiguous train/test splits\n+ if( cont ) {\n+ Xtrain = X[1:f*nrow(X),];\n+ ytrain = y[1:f*nrow(X),];\n+ Xtest = X[(nrow(Xtrain)+1):nrow(X),];\n+ ytest = y[(nrow(Xtrain)+1):nrow(X),];\n+ }\n+ # sampled train/test splits\n+ else {\n+ I = rand(rows=nrow(X), cols=1) <= f;\n+ P1 = removeEmpty(target=diag(I), margin=\"rows\", select=I);\n+ P2 = removeEmpty(target=diag(I==0), margin=\"rows\", select=I==0);\n+ Xtrain = P1 %*% X;\n+ ytrain = P1 %*% y;\n+ Xtest = P2 %*% X;\n+ ytest = P2 %*% y;\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -187,6 +187,7 @@ public enum Builtins {\nSLICEFINDER(\"slicefinder\", true),\nSMOTE(\"smote\", true),\nSOLVE(\"solve\", false),\n+ SPLIT(\"split\", true),\nSQRT(\"sqrt\", false),\nSUM(\"sum\", false),\nSVD(\"svd\", false, ReturnType.MULTI_RETURN),\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2722] New built-in function for train/test splitting
This patch introduces a new dml-bodied builtin function for common
train/test splitting of feature matrices and labels. We support two
types: contiguous (ranges of rows) and sampled (uniform selection of
rows without replacement). |
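A small usage sketch of the new builtin, following the signature introduced in this commit (a follow-up commit renames y to Y and adds a seed argument); the data here is synthetic.

```
X = rand(rows=1000, cols=20, seed=7);
y = rand(rows=1000, cols=1, seed=8);

# contiguous split: first 70% of rows go to the train set
[Xtrain, Xtest, ytrain, ytest] = split(X=X, y=y, f=0.7, cont=TRUE);

# sampled split: rows selected uniformly without replacement
[Xtr2, Xte2, ytr2, yte2] = split(X=X, y=y, f=0.7, cont=FALSE);

print(nrow(Xtrain));
print(nrow(Xtest));
```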
49,706 | 10.11.2020 14:39:22 | -3,600 | a89fceec652e700f1fc7ffd3149b18e5719b03ae | Split Testing
Added tests (builtin and federated) for the new split builtin. | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/split.dml",
"new_path": "scripts/builtin/split.dml",
"diff": "#\n#-------------------------------------------------------------\n-# Split input data X and y into contiguous or samples train/test sets\n+# Split input data X and Y into contiguous or samples train/test sets\n# ------------------------------------------------------------------------------\n# NAME TYPE DEFAULT MEANING\n# ------------------------------------------------------------------------------\n# X Matrix --- Input feature matrix\n-# y Matrix --- Input\n+# Y Matrix --- Input Labels\n# f Double 0.7 Train set fraction [0,1]\n# cont Boolean TRUE contiuous splits, otherwise sampled\n+# seed Integer -1 The seed to reandomly select rows in sampled mode\n# ------------------------------------------------------------------------------\n# Xtrain Matrix --- Train split of feature matrix\n# Xtest Matrix --- Test split of feature matrix\n# ytest Matrix --- Test split of label matrix\n# ------------------------------------------------------------------------------\n-m_split = function(Matrix[Double] X, Matrix[Double] y, Double f=0.7, Boolean cont=TRUE)\n- return (Matrix[Double] Xtrain, Matrix[Double] Xtest, Matrix[Double] ytrain, Matrix[Double] ytest)\n+m_split = function(Matrix[Double] X, Matrix[Double] Y, Double f=0.7, Boolean cont=TRUE, Integer seed=-1)\n+ return (Matrix[Double] Xtrain, Matrix[Double] Xtest, Matrix[Double] Ytrain, Matrix[Double] Ytest)\n{\n# basic sanity checks\nif( f <= 0 | f >= 1 )\n- print(\"Invalid train/test split configuration: f=\"+f);\n- if( nrow(X) != nrow(y) )\n- print(\"Mismatching number of rows X and y: \"+nrow(X)+\" \"+nrow(y) )\n+ stop(\"Invalid train/test split configuration: f=\"+f);\n+ if( nrow(X) != nrow(Y) )\n+ stop(\"Mismatching number of rows X and Y: \"+nrow(X)+\" \"+nrow(Y) )\n# contiguous train/test splits\nif( cont ) {\nXtrain = X[1:f*nrow(X),];\n- ytrain = y[1:f*nrow(X),];\n+ Ytrain = Y[1:f*nrow(X),];\nXtest = X[(nrow(Xtrain)+1):nrow(X),];\n- ytest = y[(nrow(Xtrain)+1):nrow(X),];\n+ Ytest = Y[(nrow(Xtrain)+1):nrow(X),];\n}\n# sampled train/test splits\nelse {\n- I = rand(rows=nrow(X), cols=1) <= f;\n+ I = rand(rows=nrow(X), cols=1, seed=seed) <= f;\nP1 = removeEmpty(target=diag(I), margin=\"rows\", select=I);\nP2 = removeEmpty(target=diag(I==0), margin=\"rows\", select=I==0);\nXtrain = P1 %*% X;\n- ytrain = P1 %*% y;\n+ Ytrain = P1 %*% Y;\nXtest = P2 %*% X;\n- ytest = P2 %*% y;\n+ Ytest = P2 %*% Y;\n}\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinSplitTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.lops.LopProperties;\n+import org.apache.sysds.lops.LopProperties.ExecType;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+public class BuiltinSplitTest extends AutomatedTestBase {\n+ private final static String TEST_NAME = \"split\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + BuiltinSplitTest.class.getSimpleName() + \"/\";\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"B\",}));\n+ }\n+\n+ public double eps = 0.00001;\n+ public int cols = 100;\n+ public int rows = 10;\n+\n+ @Test\n+ public void test_CP() {\n+\n+ runSplitTest(LopProperties.ExecType.CP);\n+\n+ }\n+\n+ @Test\n+ public void test_Spark() {\n+ runSplitTest(LopProperties.ExecType.SPARK);\n+ }\n+\n+ private void runSplitTest(ExecType instType) {\n+ ExecMode platformOld = setExecMode(instType);\n+\n+ try {\n+ setOutputBuffering(true);\n+\n+ loadTestConfiguration(getTestConfiguration(TEST_NAME));\n+\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-nvargs\", \"cols=\" + cols, \"rows=\" + rows};\n+\n+ String out = runTest(null).toString();\n+ Assert.assertTrue(out.contains(\"TRUE\"));\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ }\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedMultiplyTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedMultiplyTest.java",
"diff": "package org.apache.sysds.test.functions.federated.primitives;\n+import org.junit.Ignore;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized;\n@@ -63,11 +64,13 @@ public class FederatedMultiplyTest extends AutomatedTestBase {\nfederatedMultiply(Types.ExecMode.SINGLE_NODE);\n}\n- /*\n- * FIXME spark execution mode support\n- *\n- * @Test public void federatedMultiplySP() { federatedMultiply(Types.ExecMode.SPARK); }\n- */\n+\n+ @Test\n+ @Ignore\n+ public void federatedMultiplySP() {\n+ // TODO Fix me Spark execution error\n+ federatedMultiply(Types.ExecMode.SPARK);\n+ }\npublic void federatedMultiply(Types.ExecMode execMode) {\nboolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedSplitTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedSplitTest extends AutomatedTestBase {\n+\n+ private static final Log LOG = LogFactory.getLog(FederatedSplitTest.class.getName());\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private final static String TEST_NAME = \"FederatedSplitTest\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederatedSplitTest.class.getSimpleName() + \"/\";\n+\n+ private final static int blocksize = 1024;\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+ @Parameterized.Parameter(2)\n+ public String cont;\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {{152, 12, \"TRUE\"},{132, 11, \"FALSE\"}});\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"Z\"}));\n+ }\n+\n+ @Test\n+ public void federatedSplitCP() {\n+ federatedSplit(Types.ExecMode.SINGLE_NODE);\n+ }\n+\n+ public void federatedSplit(Types.ExecMode execMode) {\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ Types.ExecMode platformOld = rtplatform;\n+ rtplatform = execMode;\n+ if(rtplatform == Types.ExecMode.SPARK) {\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ // write input matrices\n+ int halfRows = rows / 2;\n+ // We have two matrices handled by a single federated worker\n+ double[][] X1 = getRandomMatrix(halfRows, cols, 0, 1, 1, 42);\n+ double[][] X2 = getRandomMatrix(halfRows, cols, 0, 1, 1, 1340);\n+ // And another two matrices handled by a single federated worker\n+ double[][] Y1 = getRandomMatrix(halfRows, cols, 0, 1, 1, 44);\n+ double[][] Y2 = getRandomMatrix(halfRows, cols, 0, 1, 1, 21);\n+\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(halfRows, cols, blocksize, halfRows * cols));\n+ 
writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(halfRows, cols, blocksize, halfRows * cols));\n+ writeInputMatrixWithMTD(\"Y1\", Y1, false, new MatrixCharacteristics(halfRows, cols, blocksize, halfRows * cols));\n+ writeInputMatrixWithMTD(\"Y2\", Y2, false, new MatrixCharacteristics(halfRows, cols, blocksize, halfRows * cols));\n+\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1);\n+ Thread t2 = startLocalFedWorkerThread(port2);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-nvargs\", \"X1=\" + input(\"X1\"), \"X2=\" + input(\"X2\"), \"Y1=\" + input(\"Y1\"),\n+ \"Y2=\" + input(\"Y2\"), \"Z=\" + expected(\"Z\"), \"Cont=\" + cont};\n+ String out = runTest(null).toString();\n+\n+ // Run actual dml script with federated matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-nvargs\", \"X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"Y1=\" + TestUtils.federatedAddress(port1, input(\"Y1\")),\n+ \"Y2=\" + TestUtils.federatedAddress(port2, input(\"Y2\")), \"r=\" + rows, \"c=\" + cols, \"Z=\" + output(\"Z\"),\n+ \"Cont=\" + cont};\n+ String fedOut = runTest(null).toString();\n+\n+ LOG.error(out);\n+ LOG.error(fedOut);\n+ // compare via files\n+ compareResults(1e-9);\n+\n+ TestUtils.shutdownThreads(t1, t2);\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/split.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+X = rand(rows = $rows, cols=$cols, seed=1)\n+Y = rand(rows = $rows, cols=1, seed=13)\n+\n+[Xtrain, Xtest, Ytrain, Ytest] = split(X=X,Y=Y, seed= 132)\n+\n+sumX = sum(X)\n+sumY = sum(Y)\n+\n+sumXt = sum(Xtrain) + sum(Xtest)\n+sumYt = sum(Ytrain) + sum(Ytest)\n+\n+sameXAndY = abs( sumX + sumY - sumXt - sumYt) < 0.001\n+\n+print(sameXAndY)\n\\ No newline at end of file\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedSplitTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($X1, $X2),\n+ ranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\n+Y = federated(addresses=list($Y1, $Y2),\n+ ranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\n+\n+\n+[Xtr, Xte, Ytr, Yte] = split(X=X,Y=Y,f=0.95, cont=$Cont, seed = 13)\n+write(Xte, $Z)\n+print(toString(Xte))\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedSplitTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($X1), read($X2))\n+Y = rbind(read($Y1), read($Y2))\n+[Xtr, Xte, Ytr, Yte] = split(X=X,Y=Y, f=0.95 ,cont=$Cont, seed = 13)\n+write(Xte, $Z)\n+print(toString(Xte))\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2722] Split Testing
Added tests (builtin and federated) for the new split builtin. |
49,722 | 10.11.2020 16:05:30 | -3,600 | e2bd5bf4fe90a66f316898208c851f796117a90c | Federated right indexing | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedRange.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedRange.java",
"diff": "@@ -102,6 +102,21 @@ public class FederatedRange implements Comparable<FederatedRange> {\nreturn Arrays.toString(_beginDims) + \" - \" + Arrays.toString(_endDims);\n}\n+ @Override public boolean equals(Object o) {\n+ if(this == o)\n+ return true;\n+ if(o == null || getClass() != o.getClass())\n+ return false;\n+ FederatedRange range = (FederatedRange) o;\n+ return Arrays.equals(_beginDims, range._beginDims) && Arrays.equals(_endDims, range._endDims);\n+ }\n+\n+ @Override public int hashCode() {\n+ int result = Arrays.hashCode(_beginDims);\n+ result = 31 * result + Arrays.hashCode(_endDims);\n+ return result;\n+ }\n+\npublic FederatedRange shift(long rshift, long cshift) {\n//row shift\n_beginDims[0] += rshift;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"diff": "@@ -224,8 +224,10 @@ public class FederationMap\npublic FederationMap copyWithNewID(long id) {\nMap<FederatedRange, FederatedData> map = new TreeMap<>();\n//TODO handling of file path, but no danger as never written\n- for( Entry<FederatedRange, FederatedData> e : _fedMap.entrySet() )\n+ for( Entry<FederatedRange, FederatedData> e : _fedMap.entrySet() ) {\n+ if(e.getKey().getSize() != 0)\nmap.put(new FederatedRange(e.getKey()), e.getValue().copyWithNewID(id));\n+ }\nreturn new FederationMap(id, map, _type);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"diff": "@@ -37,6 +37,7 @@ public abstract class FEDInstruction extends Instruction {\nTsmm,\nMMChain,\nReorg,\n+ MatrixIndexing\n}\nprotected final FEDType _fedType;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -32,6 +32,7 @@ import org.apache.sysds.runtime.instructions.cp.BinaryCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.instructions.cp.MMChainCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MMTSJCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MultiReturnParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ReorgCPInstruction;\n@@ -127,6 +128,15 @@ public class FEDInstructionUtils {\nif( mo.isFederated() )\nfedinst = ReorgFEDInstruction.parseInstruction(rinst.getInstructionString());\n}\n+ else if(inst instanceof MatrixIndexingCPInstruction && inst.getOpcode().equalsIgnoreCase(\"rightIndex\")) {\n+ // matrix indexing\n+ MatrixIndexingCPInstruction minst = (MatrixIndexingCPInstruction) inst;\n+ if(minst.input1.isMatrix()) {\n+ CacheableData<?> fo = ec.getCacheableData(minst.input1);\n+ if(fo.isFederated())\n+ fedinst = MatrixIndexingFEDInstruction.parseInstruction(minst.getInstructionString());\n+ }\n+ }\nelse if(inst instanceof VariableCPInstruction ){\nVariableCPInstruction ins = (VariableCPInstruction) inst;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/IndexingFEDInstruction.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.instructions.fed;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.lops.LeftIndex;\n+import org.apache.sysds.lops.RightIndex;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.instructions.InstructionUtils;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.util.IndexRange;\n+\n+public abstract class IndexingFEDInstruction extends UnaryFEDInstruction {\n+ protected final CPOperand rowLower, rowUpper, colLower, colUpper;\n+\n+ protected IndexingFEDInstruction(CPOperand in, CPOperand rl, CPOperand ru, CPOperand cl, CPOperand cu,\n+ CPOperand out, String opcode, String istr) {\n+ super(FEDInstruction.FEDType.MatrixIndexing, null, in, out, opcode, istr);\n+ rowLower = rl;\n+ rowUpper = ru;\n+ colLower = cl;\n+ colUpper = cu;\n+ }\n+\n+ protected IndexingFEDInstruction(CPOperand lhsInput, CPOperand rhsInput, CPOperand rl, CPOperand ru, CPOperand cl,\n+ CPOperand cu, CPOperand out, String opcode, String istr) {\n+ super(FEDInstruction.FEDType.MatrixIndexing, null, lhsInput, rhsInput, out, opcode, istr);\n+ rowLower = rl;\n+ rowUpper = ru;\n+ colLower = cl;\n+ colUpper = cu;\n+ }\n+\n+ protected IndexRange getIndexRange(ExecutionContext ec) {\n+ return new IndexRange( //rl, ru, cl, ru\n+ (int) (ec.getScalarInput(rowLower).getLongValue() - 1),\n+ (int) (ec.getScalarInput(rowUpper).getLongValue() - 1),\n+ (int) (ec.getScalarInput(colLower).getLongValue() - 1),\n+ (int) (ec.getScalarInput(colUpper).getLongValue() - 1));\n+ }\n+\n+ public static IndexingFEDInstruction parseInstruction(String str) {\n+ String[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\n+ String opcode = parts[0];\n+\n+ if(opcode.equalsIgnoreCase(RightIndex.OPCODE)) {\n+ if(parts.length == 7) {\n+ CPOperand in, rl, ru, cl, cu, out;\n+ in = new CPOperand(parts[1]);\n+ rl = new CPOperand(parts[2]);\n+ ru = new CPOperand(parts[3]);\n+ cl = new CPOperand(parts[4]);\n+ cu = new CPOperand(parts[5]);\n+ out = new CPOperand(parts[6]);\n+ if(in.getDataType() == Types.DataType.MATRIX)\n+ return new MatrixIndexingFEDInstruction(in, rl, ru, cl, cu, out, opcode, str);\n+ // else if( in.getDataType() == Types.DataType.FRAME )\n+ // return new FrameIndexingCPInstruction(in, rl, ru, cl, cu, out, opcode, str);\n+ // else if( in.getDataType() == Types.DataType.LIST )\n+ // return new ListIndexingCPInstruction(in, rl, ru, cl, cu, out, opcode, str);\n+ else\n+ throw new DMLRuntimeException(\"Can index only on matrices, frames, and lists.\");\n+ }\n+ else {\n+ throw new 
DMLRuntimeException(\"Invalid number of operands in instruction: \" + str);\n+ }\n+ }\n+ // else if ( opcode.equalsIgnoreCase(LeftIndex.OPCODE)) {\n+ // if ( parts.length == 8 ) {\n+ // CPOperand lhsInput, rhsInput, rl, ru, cl, cu, out;\n+ // lhsInput = new CPOperand(parts[1]);\n+ // rhsInput = new CPOperand(parts[2]);\n+ // rl = new CPOperand(parts[3]);\n+ // ru = new CPOperand(parts[4]);\n+ // cl = new CPOperand(parts[5]);\n+ // cu = new CPOperand(parts[6]);\n+ // out = new CPOperand(parts[7]);\n+ // if( lhsInput.getDataType()== Types.DataType.MATRIX )\n+ // return new MatrixIndexingFEDInstruction(lhsInput, rhsInput, rl, ru, cl, cu, out, opcode, str);\n+ // else if (lhsInput.getDataType() == Types.DataType.FRAME)\n+ // return new FrameIndexingFEDInstruction(lhsInput, rhsInput, rl, ru, cl, cu, out, opcode, str);\n+ // else if( lhsInput.getDataType() == Types.DataType.LIST )\n+ // return new ListIndexingFEDInstruction(lhsInput, rhsInput, rl, ru, cl, cu, out, opcode, str);\n+ // else\n+ // throw new DMLRuntimeException(\"Can index only on matrices, frames, and lists.\");\n+ // }\n+ // else {\n+ // throw new DMLRuntimeException(\"Invalid number of operands in instruction: \" + str);\n+ // }\n+ // }\n+ else {\n+ throw new DMLRuntimeException(\"Unknown opcode while parsing a MatrixIndexingFEDInstruction: \" + str);\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/MatrixIndexingFEDInstruction.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.apache.sysds.runtime.instructions.fed;\n+\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n+import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRange;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedUDF;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.instructions.cp.Data;\n+import org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.runtime.util.IndexRange;\n+\n+public final class MatrixIndexingFEDInstruction extends IndexingFEDInstruction {\n+ private static final Log LOG = LogFactory.getLog(MatrixIndexingFEDInstruction.class.getName());\n+\n+ public MatrixIndexingFEDInstruction(CPOperand in, CPOperand rl, CPOperand ru, CPOperand cl, CPOperand cu,\n+ CPOperand out, String opcode, String istr) {\n+ super(in, rl, ru, cl, cu, out, opcode, istr);\n+ }\n+\n+ @Override\n+ public void processInstruction(ExecutionContext ec) {\n+ rightIndexing(ec);\n+ }\n+\n+\n+ private void rightIndexing (ExecutionContext ec) {\n+ MatrixObject in = ec.getMatrixObject(input1);\n+ FederationMap fedMapping = in.getFedMapping();\n+ IndexRange ixrange = getIndexRange(ec);\n+ FederationMap.FType fedType;\n+ Map <FederatedRange, IndexRange> ixs = new HashMap<>();\n+\n+ FederatedRange nextDim = new FederatedRange(new long[]{0, 0}, new long[]{0, 0});\n+\n+ for (int i = 0; i < fedMapping.getFederatedRanges().length; i++) {\n+ long rs = fedMapping.getFederatedRanges()[i].getBeginDims()[0], re = fedMapping.getFederatedRanges()[i]\n+ .getEndDims()[0], cs = fedMapping.getFederatedRanges()[i].getBeginDims()[1], ce = fedMapping.getFederatedRanges()[i].getEndDims()[1];\n+\n+ // for OTHER\n+ fedType = ((i + 1) < fedMapping.getFederatedRanges().length &&\n+ fedMapping.getFederatedRanges()[i].getEndDims()[0] == fedMapping.getFederatedRanges()[i+1].getBeginDims()[0]) ?\n+ FederationMap.FType.ROW : FederationMap.FType.COL;\n+\n+ long rsn = 0, ren = 0, csn = 0, cen = 0;\n+\n+ rsn = (ixrange.rowStart >= rs && ixrange.rowStart < re) ? 
(ixrange.rowStart - rs) : 0;\n+ ren = (ixrange.rowEnd >= rs && ixrange.rowEnd < re) ? (ixrange.rowEnd - rs) : (re - rs - 1);\n+ csn = (ixrange.colStart >= cs && ixrange.colStart < ce) ? (ixrange.colStart - cs) : 0;\n+ cen = (ixrange.colEnd >= cs && ixrange.colEnd < ce) ? (ixrange.colEnd - cs) : (ce - cs - 1);\n+\n+ fedMapping.getFederatedRanges()[i].setBeginDim(0, i != 0 ? nextDim.getBeginDims()[0] : 0);\n+ fedMapping.getFederatedRanges()[i].setBeginDim(1, i != 0 ? nextDim.getBeginDims()[1] : 0);\n+ if((ixrange.colStart < ce) && (ixrange.colEnd >= cs) && (ixrange.rowStart < re) && (ixrange.rowEnd >= rs)) {\n+ fedMapping.getFederatedRanges()[i].setEndDim(0, ren - rsn + 1 + nextDim.getBeginDims()[0]);\n+ fedMapping.getFederatedRanges()[i].setEndDim(1, cen - csn + 1 + nextDim.getBeginDims()[1]);\n+\n+ ixs.put(fedMapping.getFederatedRanges()[i], new IndexRange(rsn, ren, csn, cen));\n+ } else {\n+ fedMapping.getFederatedRanges()[i].setEndDim(0, i != 0 ? nextDim.getBeginDims()[0] : 0);\n+ fedMapping.getFederatedRanges()[i].setEndDim(1, i != 0 ? nextDim.getBeginDims()[1] : 0);\n+ }\n+\n+ if(fedType == FederationMap.FType.ROW) {\n+ nextDim.setBeginDim(0,fedMapping.getFederatedRanges()[i].getEndDims()[0]);\n+ nextDim.setBeginDim(1, fedMapping.getFederatedRanges()[i].getBeginDims()[1]);\n+ } else if(fedType == FederationMap.FType.COL) {\n+ nextDim.setBeginDim(1,fedMapping.getFederatedRanges()[i].getEndDims()[1]);\n+ nextDim.setBeginDim(0, fedMapping.getFederatedRanges()[i].getBeginDims()[0]);\n+ }\n+ }\n+\n+ long varID = FederationUtils.getNextFedDataID();\n+ FederationMap slicedMapping = fedMapping.mapParallel(varID, (range, data) -> {\n+ try {\n+ FederatedResponse response = data.executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF,\n+ -1, new SliceMatrix(data.getVarID(), varID, ixs.getOrDefault(range, new IndexRange(-1, -1, -1, -1))))).get();\n+ if(!response.isSuccessful())\n+ response.throwExceptionFromResponse();\n+ }\n+ catch(Exception e) {\n+ throw new DMLRuntimeException(e);\n+ }\n+ return null;\n+ });\n+\n+ MatrixObject sliced = ec.getMatrixObject(output);\n+ sliced.getDataCharacteristics().set(fedMapping.getMaxIndexInRange(0), fedMapping.getMaxIndexInRange(1), (int) in.getBlocksize());\n+ sliced.setFedMapping(slicedMapping);\n+ }\n+\n+ private static class SliceMatrix extends FederatedUDF {\n+\n+ private static final long serialVersionUID = 5956832933333848772L;\n+ private final long _outputID;\n+ private final IndexRange _ixrange;\n+\n+ private SliceMatrix(long input, long outputID, IndexRange ixrange) {\n+ super(new long[] {input});\n+ _outputID = outputID;\n+ _ixrange = ixrange;\n+ }\n+\n+\n+ @Override public FederatedResponse execute(ExecutionContext ec, Data... data) {\n+ MatrixBlock mb = ((MatrixObject) data[0]).acquireReadAndRelease();\n+ MatrixBlock res;\n+ if(_ixrange.rowStart != -1)\n+ res = mb.slice(_ixrange, new MatrixBlock());\n+ else res = new MatrixBlock();\n+ MatrixObject mout = ExecutionContext.createMatrixObject(res);\n+ ec.setVariable(String.valueOf(_outputID), mout);\n+\n+ return new FederatedResponse(FederatedResponse.ResponseType.SUCCESS_EMPTY);\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRightIndexTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedRightIndexTest extends AutomatedTestBase {\n+ private final static String TEST_NAME1 = \"FederatedRightIndexRightTest\";\n+ private final static String TEST_NAME2 = \"FederatedRightIndexLeftTest\";\n+ private final static String TEST_NAME3 = \"FederatedRightIndexFullTest\";\n+\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + FederatedRightIndexTest.class.getSimpleName() + \"/\";\n+\n+ private final static int blocksize = 1024;\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+\n+ @Parameterized.Parameter(2)\n+ public int from;\n+\n+ @Parameterized.Parameter(3)\n+ public int to;\n+\n+ @Parameterized.Parameter(4)\n+ public boolean rowPartitioned;\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {\n+ {20, 10, 6, 8, true}, {20, 10, 2, 10, true},\n+ {20, 12, 2, 10, false}, {20, 12, 1, 4, false}\n+ });\n+ }\n+\n+ private enum IndexType {\n+ RIGHT, LEFT, FULL\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME1, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME1, new String[] {\"S\"}));\n+ addTestConfiguration(TEST_NAME2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME2, new String[] {\"S\"}));\n+ addTestConfiguration(TEST_NAME3, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME3, new String[] {\"S\"}));\n+ }\n+\n+ @Test\n+ public void testRightIndexRightDenseMatrixCP() {\n+ runAggregateOperationTest(IndexType.RIGHT, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void testRightIndexLeftDenseMatrixCP() {\n+ runAggregateOperationTest(IndexType.LEFT, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void testRightIndexFullDenseMatrixCP() {\n+ runAggregateOperationTest(IndexType.FULL, ExecMode.SINGLE_NODE);\n+ }\n+\n+ private void runAggregateOperationTest(IndexType type, ExecMode execMode) {\n+ boolean sparkConfigOld = 
DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ ExecMode platformOld = rtplatform;\n+\n+ if(rtplatform == ExecMode.SPARK)\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+\n+ String TEST_NAME = null;\n+ switch(type) {\n+ case RIGHT:\n+ TEST_NAME = TEST_NAME1; break;\n+ case LEFT:\n+ TEST_NAME = TEST_NAME2; break;\n+ case FULL:\n+ TEST_NAME = TEST_NAME3; break;\n+ }\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ // write input matrices\n+ int r = rows;\n+ int c = cols / 4;\n+ if(rowPartitioned) {\n+ r = rows / 4;\n+ c = cols;\n+ }\n+\n+ double[][] X1 = getRandomMatrix(r, c, 1, 5, 1, 3);\n+ double[][] X2 = getRandomMatrix(r, c, 1, 5, 1, 7);\n+ double[][] X3 = getRandomMatrix(r, c, 1, 5, 1, 8);\n+ double[][] X4 = getRandomMatrix(r, c, 1, 5, 1, 9);\n+\n+ MatrixCharacteristics mc = new MatrixCharacteristics(r, c, blocksize, r * c);\n+ writeInputMatrixWithMTD(\"X1\", X1, false, mc);\n+ writeInputMatrixWithMTD(\"X2\", X2, false, mc);\n+ writeInputMatrixWithMTD(\"X3\", X3, false, mc);\n+ writeInputMatrixWithMTD(\"X4\", X4, false, mc);\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ int port3 = getRandomAvailablePort();\n+ int port4 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1);\n+ Thread t2 = startLocalFedWorkerThread(port2);\n+ Thread t3 = startLocalFedWorkerThread(port3);\n+ Thread t4 = startLocalFedWorkerThread(port4);\n+\n+ rtplatform = execMode;\n+ if(rtplatform == ExecMode.SPARK) {\n+ System.out.println(7);\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] { \"-args\", input(\"X1\"), input(\"X2\"), input(\"X3\"), input(\"X4\"),\n+ String.valueOf(from), String.valueOf(to), Boolean.toString(rowPartitioned).toUpperCase(), expected(\"S\")};\n+ runTest(true, false, null, -1);\n+\n+ // Run actual dml script with federated matrix\n+\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + cols,\n+ \"from=\" + from, \"to=\" + to, \"rP=\" + Boolean.toString(rowPartitioned).toUpperCase(),\n+ \"out_S=\" + output(\"S\")};\n+\n+ runTest(true, false, null, -1);\n+\n+ // compare via files\n+ compareResults(1e-9);\n+\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_rightIndex\"));\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X3\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X4\")));\n+\n+ TestUtils.shutdownThreads(t1, t2, t3, t4);\n+\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexFullTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $from;\n+to = $to;\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = A[from:to, from:to];\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexFullTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $5;\n+to = $6;\n+\n+if($7) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = A[from:to, from:to];\n+write(s, $8);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexLeftTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $from;\n+to = $to;\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = A[from:to,];\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexLeftTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $5;\n+to = $6;\n+\n+if($7) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = A[from:to,];\n+write(s, $8);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexRightTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $from;\n+to = $to;\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = A[, from:to];\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexRightTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $5;\n+to = $6;\n+\n+if($7) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = A[, from:to];\n+write(s, $8);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2726] Federated right indexing |
49,706 | 12.11.2020 19:02:33 | -3,600 | adb50434184c5a706cb268c63e3ad06738603d85 | [MINOR] Use Log4j in overwriting config test | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/CellwiseTmplTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/CellwiseTmplTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,13 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class CellwiseTmplTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(CellwiseTmplTest.class.getName());\n+\nprivate static final String TEST_NAME = \"cellwisetmpl\";\nprivate static final String TEST_NAME1 = TEST_NAME+1;\nprivate static final String TEST_NAME2 = TEST_NAME+2;\n@@ -539,7 +543,7 @@ public class CellwiseTmplTest extends AutomatedTestBase\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\nFile TEST_CONF_FILE = new File(SCRIPT_DIR + TEST_DIR, TEST_CONF);\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/DAGCellwiseTmplTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/DAGCellwiseTmplTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,14 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class DAGCellwiseTmplTest extends AutomatedTestBase\n{\n+\n+ private static final Log LOG = LogFactory.getLog(DAGCellwiseTmplTest.class.getName());\n+\nprivate static final String TEST_NAME1 = \"DAGcellwisetmpl1\";\nprivate static final String TEST_NAME2 = \"DAGcellwisetmpl2\";\nprivate static final String TEST_NAME3 = \"DAGcellwisetmpl3\";\n@@ -160,7 +165,7 @@ public class DAGCellwiseTmplTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/MiscPatternTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/MiscPatternTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,14 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class MiscPatternTest extends AutomatedTestBase\n{\n+\n+ private static final Log LOG = LogFactory.getLog(MiscPatternTest.class.getName());\n+\nprivate static final String TEST_NAME = \"miscPattern\";\nprivate static final String TEST_NAME1 = TEST_NAME+\"1\"; //Y + (X * U%*%t(V)) overlapping cell-outer\nprivate static final String TEST_NAME2 = TEST_NAME+\"2\"; //multi-agg w/ large common subexpression\n@@ -169,7 +174,7 @@ public class MiscPatternTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/MultiAggTmplTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/MultiAggTmplTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,13 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class MultiAggTmplTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(MultiAggTmplTest.class.getName());\n+\nprivate static final String TEST_NAME = \"multiAggPattern\";\nprivate static final String TEST_NAME1 = TEST_NAME+\"1\"; //min(X>7), max(X>7)\nprivate static final String TEST_NAME2 = TEST_NAME+\"2\"; //sum(X>7), sum((X>7)^2)\n@@ -206,7 +210,7 @@ public class MultiAggTmplTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/OuterProdTmplTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/OuterProdTmplTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,12 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class OuterProdTmplTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(OuterProdTmplTest.class.getName());\nprivate static final String TEST_NAME1 = \"wdivmm\";\nprivate static final String TEST_NAME2 = \"wdivmmRight\";\nprivate static final String TEST_NAME3 = \"wsigmoid\";\n@@ -310,7 +313,7 @@ public class OuterProdTmplTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/RowAggTmplTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/RowAggTmplTest.java",
"diff": "@@ -22,19 +22,23 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\n-import org.apache.sysds.lops.RightIndex;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n+import org.apache.sysds.lops.RightIndex;\nimport org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class RowAggTmplTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(RowAggTmplTest.class.getName());\n+\nprivate static final String TEST_NAME = \"rowAggPattern\";\nprivate static final String TEST_NAME1 = TEST_NAME+\"1\"; //t(X)%*%(X%*%(lamda*v))\nprivate static final String TEST_NAME2 = TEST_NAME+\"2\"; //t(X)%*%(lamda*(X%*%v))\n@@ -861,7 +865,7 @@ public class RowAggTmplTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/RowConv2DOperationsTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/RowConv2DOperationsTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,13 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class RowConv2DOperationsTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(RowConv2DOperationsTest.class.getName());\n+\nprivate final static String TEST_NAME1 = \"RowConv2DTest\";\nprivate final static String TEST_DIR = \"functions/codegen/\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + RowConv2DOperationsTest.class.getSimpleName() + \"/\";\n@@ -121,7 +125,7 @@ public class RowConv2DOperationsTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/RowVectorComparisonTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/RowVectorComparisonTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,13 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class RowVectorComparisonTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(RowVectorComparisonTest.class.getName());\n+\nprivate static final String TEST_NAME1 = \"rowComparisonEq\";\nprivate static final String TEST_NAME2 = \"rowComparisonNeq\";\nprivate static final String TEST_NAME3 = \"rowComparisonLte\";\n@@ -164,7 +168,7 @@ public class RowVectorComparisonTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/SparseSideInputTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/SparseSideInputTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,13 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class SparseSideInputTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(SparseSideInputTest.class.getName());\n+\nprivate static final String TEST_NAME = \"SparseSideInput\";\nprivate static final String TEST_NAME1 = TEST_NAME+\"1\"; //row sum(X/rowSums(X)+Y)\nprivate static final String TEST_NAME2 = TEST_NAME+\"2\"; //cell sum(abs(X^2)+Y)\n@@ -189,7 +193,7 @@ public class SparseSideInputTest extends AutomatedTestBase\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\nFile f = new File(SCRIPT_DIR + TEST_DIR, TEST_CONF);\n- System.out.println(\"This test case overrides default configuration with \" + f.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + f.getPath());\nreturn f;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/codegen/SumProductChainTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/codegen/SumProductChainTest.java",
"diff": "@@ -22,8 +22,8 @@ package org.apache.sysds.test.functions.codegen;\nimport java.io.File;\nimport java.util.HashMap;\n-import org.junit.Assert;\n-import org.junit.Test;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.LopProperties.ExecType;\n@@ -31,9 +31,13 @@ import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\npublic class SumProductChainTest extends AutomatedTestBase\n{\n+ private static final Log LOG = LogFactory.getLog(SumProductChainTest.class.getName());\n+\nprivate static final String TEST_NAME1 = \"SumProductChain\";\nprivate static final String TEST_NAME2 = \"SumAdditionChain\";\nprivate static final String TEST_DIR = \"functions/codegen/\";\n@@ -149,7 +153,7 @@ public class SumProductChainTest extends AutomatedTestBase\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedSSLTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedSSLTest.java",
"diff": "@@ -23,6 +23,8 @@ import java.io.File;\nimport java.util.Arrays;\nimport java.util.Collection;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\n@@ -38,8 +40,8 @@ import org.junit.runners.Parameterized;\n@RunWith(value = Parameterized.class)\[email protected]\npublic class FederatedSSLTest extends AutomatedTestBase {\n+ private static final Log LOG = LogFactory.getLog(FederatedSSLTest.class.getName());\n- // private static final Log LOG = LogFactory.getLog(FederatedReaderTest.class.getName());\n// This test use the same scripts as the Federated Reader tests, just with SSL enabled.\nprivate final static String TEST_DIR = \"functions/federated/io/\";\nprivate final static String TEST_NAME = \"FederatedReaderTest\";\n@@ -135,7 +137,7 @@ public class FederatedSSLTest extends AutomatedTestBase {\n@Override\nprotected File getConfigTemplateFile() {\n// Instrumentation in this test's output log to show custom configuration file used for template.\n- System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ LOG.info(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\nreturn TEST_CONF_FILE;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/resources/log4j.properties",
"new_path": "src/test/resources/log4j.properties",
"diff": "@@ -24,7 +24,7 @@ log4j.rootLogger=ERROR,console\nlog4j.logger.org.apache.sysds.api.DMLScript=OFF\nlog4j.logger.org.apache.sysds.test=INFO\nlog4j.logger.org.apache.sysds.test.AutomatedTestBase=ERROR\n-log4j.logger.org.apache.sysds=WARN\n+log4j.logger.org.apache.sysds=ERROR\n#log4j.logger.org.apache.sysds.hops.codegen.SpoofCompiler=TRACE\nlog4j.logger.org.apache.sysds.runtime.compress.AbstractCompressedMatrixBlock=ERROR\n# log4j.logger.org.apache.sysds.runtime.instructions.fed=DEBUG\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Use Log4j in overwriting config test |
49,706 | 12.11.2020 15:01:34 | -3,600 | 6d19dbaffc3e2ead068474c734ad289141025b5a | Cast to frame Federated | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedData.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedData.java",
"diff": "@@ -74,6 +74,13 @@ public class FederatedData {\n_allFedSites.add(_address);\n}\n+ public FederatedData(Types.DataType dataType, InetSocketAddress address, String filepath, long varID) {\n+ _dataType = dataType;\n+ _address = address;\n+ _filepath = filepath;\n+ _varID = varID;\n+ }\n+\npublic InetSocketAddress getAddress() {\nreturn _address;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -36,6 +36,8 @@ import org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MultiReturnParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ReorgCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.UnaryCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.UnaryMatrixCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction.VariableOperationCode;\nimport org.apache.sysds.runtime.instructions.spark.AggregateUnarySPInstruction;\n@@ -82,7 +84,8 @@ public class FEDInstructionUtils {\nif( mo.isFederated() )\nfedinst = TsmmFEDInstruction.parseInstruction(linst.getInstructionString());\n}\n- else if (inst instanceof AggregateUnaryCPInstruction) {\n+ else if(inst instanceof UnaryCPInstruction){\n+ if (inst instanceof AggregateUnaryCPInstruction) {\nAggregateUnaryCPInstruction instruction = (AggregateUnaryCPInstruction) inst;\nif( instruction.input1.isMatrix() && ec.containsVariable(instruction.input1) ) {\nMatrixObject mo1 = ec.getMatrixObject(instruction.input1);\n@@ -92,6 +95,7 @@ public class FEDInstructionUtils {\n}\n}\n}\n+ }\nelse if (inst instanceof BinaryCPInstruction) {\nBinaryCPInstruction instruction = (BinaryCPInstruction) inst;\nif( (instruction.input1.isMatrix() && ec.getMatrixObject(instruction.input1).isFederated())\n@@ -141,13 +145,20 @@ public class FEDInstructionUtils {\nVariableCPInstruction ins = (VariableCPInstruction) inst;\nif(ins.getVariableOpcode() == VariableOperationCode.Write\n+ && ins.getInput1().isMatrix()\n&& ins.getInput3().getName().contains(\"federated\")){\nfedinst = VariableFEDInstruction.parseInstruction(ins);\n}\n+ else if(ins.getVariableOpcode() == VariableOperationCode.CastAsFrameVariable\n+ && ins.getInput1().isMatrix()\n+ && ec.getCacheableData(ins.getInput1()).isFederated()){\n+ fedinst = VariableFEDInstruction.parseInstruction(ins);\n+ }\n}\n+\n//set thread id for federated context management\nif( fedinst != null ) {\nfedinst.setTID(ec.getTID());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/VariableFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/VariableFEDInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\n+import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.Map;\n+\nimport org.apache.commons.lang3.tuple.Pair;\nimport org.apache.commons.logging.Log;\nimport org.apache.commons.logging.LogFactory;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.caching.FrameObject;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedData;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRange;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction.VariableOperationCode;\nimport org.apache.sysds.runtime.lineage.LineageItem;\n@@ -52,6 +66,13 @@ public class VariableFEDInstruction extends FEDInstruction implements LineageTra\nprocessWriteInstruction(ec);\nbreak;\n+ case CastAsMatrixVariable:\n+ processCastAsMatrixVariableInstruction(ec);\n+ break;\n+ case CastAsFrameVariable:\n+ processCastAsFrameVariableInstruction(ec);\n+ break;\n+\ndefault:\nthrow new DMLRuntimeException(\"Unsupported Opcode for federated Variable Instruction : \" + opcode);\n}\n@@ -66,6 +87,42 @@ public class VariableFEDInstruction extends FEDInstruction implements LineageTra\n_in.processInstruction(ec);\n}\n+ private void processCastAsMatrixVariableInstruction(ExecutionContext ec){\n+ LOG.error(\"Not Implemented\");\n+ throw new DMLRuntimeException(\"Not Implemented Cast as Matrix\");\n+\n+ }\n+\n+ private void processCastAsFrameVariableInstruction(ExecutionContext ec){\n+\n+ MatrixObject mo1 = ec.getMatrixObject(_in.getInput1());\n+\n+ if( !mo1.isFederated() )\n+ throw new DMLRuntimeException(\"Federated Reorg: \"\n+ + \"Federated input expected, but invoked w/ \"+mo1.isFederated());\n+\n+ //execute transpose at federated site\n+ FederatedRequest fr1 = FederationUtils.callInstruction(_in.getInstructionString(), _in.getOutput(),\n+ new CPOperand[]{_in.getInput1()}, new long[]{mo1.getFedMapping().getID()});\n+ mo1.getFedMapping().execute(getTID(), true, fr1);\n+\n+ //drive output federated mapping\n+ FrameObject out = ec.getFrameObject(_in.getOutput());\n+ out.getDataCharacteristics().set(mo1.getNumColumns(),\n+ mo1.getNumRows(), (int)mo1.getBlocksize(), mo1.getNnz());\n+ FederationMap outMap = mo1.getFedMapping().copyWithNewID(fr1.getID());\n+ Map<FederatedRange, FederatedData> newMap = new HashMap<>();\n+ for(Map.Entry<FederatedRange, FederatedData> pair : outMap.getFedMapping().entrySet()){\n+ FederatedData om = pair.getValue();\n+ FederatedData nf = new FederatedData(Types.DataType.FRAME, om.getAddress(),om.getFilepath(),om.getVarID());\n+ newMap.put(pair.getKey(), nf);\n+ }\n+ ValueType[] schema = new ValueType[(int)mo1.getDataCharacteristics().getCols()];\n+ Arrays.fill(schema, ValueType.FP64);\n+ out.setSchema(schema);\n+ out.setFedMapping(outMap);\n+ }\n+\n@Override\npublic Pair<String, LineageItem> getLineageItem(ExecutionContext ec) {\nreturn 
_in.getLineageItem(ec);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederetedCastToFrameTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Ignore;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederetedCastToFrameTest extends AutomatedTestBase {\n+ private static final Log LOG = LogFactory.getLog(FederetedCastToFrameTest.class.getName());\n+\n+ private final static String TEST_DIR = \"functions/federated/primitives/\";\n+ private final static String TEST_NAME = \"FederatedCastToFrameTest\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederetedCastToFrameTest.class.getSimpleName() + \"/\";\n+\n+ private final static int blocksize = 1024;\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME));\n+ }\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ // rows have to be even and > 1\n+ return Arrays.asList(new Object[][] {{10, 32}});\n+ }\n+\n+ @Test\n+ public void federatedMultiplyCP() {\n+ federatedMultiply(Types.ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ @Ignore\n+ public void federatedMultiplySP() {\n+ // TODO Fix me Spark execution error\n+ federatedMultiply(Types.ExecMode.SPARK);\n+ }\n+\n+ public void federatedMultiply(Types.ExecMode execMode) {\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ Types.ExecMode platformOld = rtplatform;\n+ rtplatform = execMode;\n+ if(rtplatform == Types.ExecMode.SPARK) {\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ // write input matrices\n+ int halfRows = rows / 2;\n+ // We have two matrices handled by a single federated worker\n+ double[][] X1 = getRandomMatrix(halfRows, cols, 0, 1, 1, 42);\n+ double[][] X2 = getRandomMatrix(halfRows, cols, 0, 1, 1, 1340);\n+\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(halfRows, cols, blocksize, halfRows * cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, 
new MatrixCharacteristics(halfRows, cols, blocksize, halfRows * cols));\n+\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1);\n+ Thread t2 = startLocalFedWorkerThread(port2);\n+\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-nvargs\", \"X1=\" + input(\"X1\"), \"X2=\" + input(\"X2\")};\n+ String out = runTest(null).toString().split(\"SystemDS Statistics:\")[0];\n+\n+ // Run actual dml script with federated matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\", \"X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")), \"r=\" + rows, \"c=\" + cols};\n+ String fedOut = runTest(null).toString();\n+\n+ LOG.error(fedOut);\n+ fedOut = fedOut.split(\"SystemDS Statistics:\")[0];\n+ Assert.assertTrue(\"Equal Printed Output\", out.equals(fedOut));\n+ Assert.assertTrue(\"Contains federated Cast to frame\", heavyHittersContainsString(\"fed_castdtf\"));\n+ TestUtils.shutdownThreads(t1, t2);\n+\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/primitives/FederatedCastToFrameTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($X1, $X2),\n+ ranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\n+\n+Z = as.frame(X)\n+print(toString(Z[1]))\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/primitives/FederatedCastToFrameTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($X1), read($X2))\n+\n+Z = as.frame(X)\n+print(toString(Z[1]))\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2723] Cast to frame Federated |
49,706 | 12.11.2020 18:13:40 | -3,600 | d61c3bffc677443c28d0fca27364267c1ca41111 | Cast to matrix Federated
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -36,8 +36,6 @@ import org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MultiReturnParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ReorgCPInstruction;\n-import org.apache.sysds.runtime.instructions.cp.UnaryCPInstruction;\n-import org.apache.sysds.runtime.instructions.cp.UnaryMatrixCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction.VariableOperationCode;\nimport org.apache.sysds.runtime.instructions.spark.AggregateUnarySPInstruction;\n@@ -84,8 +82,7 @@ public class FEDInstructionUtils {\nif( mo.isFederated() )\nfedinst = TsmmFEDInstruction.parseInstruction(linst.getInstructionString());\n}\n- else if(inst instanceof UnaryCPInstruction){\n- if (inst instanceof AggregateUnaryCPInstruction) {\n+ else if (inst instanceof AggregateUnaryCPInstruction) {\nAggregateUnaryCPInstruction instruction = (AggregateUnaryCPInstruction) inst;\nif( instruction.input1.isMatrix() && ec.containsVariable(instruction.input1) ) {\nMatrixObject mo1 = ec.getMatrixObject(instruction.input1);\n@@ -95,7 +92,6 @@ public class FEDInstructionUtils {\n}\n}\n}\n- }\nelse if (inst instanceof BinaryCPInstruction) {\nBinaryCPInstruction instruction = (BinaryCPInstruction) inst;\nif( (instruction.input1.isMatrix() && ec.getMatrixObject(instruction.input1).isFederated())\n@@ -154,10 +150,12 @@ public class FEDInstructionUtils {\n&& ec.getCacheableData(ins.getInput1()).isFederated()){\nfedinst = VariableFEDInstruction.parseInstruction(ins);\n}\n-\n+ else if(ins.getVariableOpcode() == VariableOperationCode.CastAsMatrixVariable\n+ && ins.getInput1().isFrame()\n+ && ec.getCacheableData(ins.getInput1()).isFederated()){\n+ fedinst = VariableFEDInstruction.parseInstruction(ins);\n+ }\n}\n-\n-\n//set thread id for federated context management\nif( fedinst != null ) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/VariableFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/VariableFEDInstruction.java",
"diff": "@@ -88,9 +88,32 @@ public class VariableFEDInstruction extends FEDInstruction implements LineageTra\n}\nprivate void processCastAsMatrixVariableInstruction(ExecutionContext ec) {\n- LOG.error(\"Not Implemented\");\n- throw new DMLRuntimeException(\"Not Implemented Cast as Matrix\");\n+ FrameObject mo1 = ec.getFrameObject(_in.getInput1());\n+\n+ if(!mo1.isFederated())\n+ throw new DMLRuntimeException(\n+ \"Federated Reorg: \" + \"Federated input expected, but invoked w/ \" + mo1.isFederated());\n+\n+ // execute function at federated site.\n+ FederatedRequest fr1 = FederationUtils.callInstruction(_in.getInstructionString(),\n+ _in.getOutput(),\n+ new CPOperand[] {_in.getInput1()},\n+ new long[] {mo1.getFedMapping().getID()});\n+ mo1.getFedMapping().execute(getTID(), true, fr1);\n+\n+ // Construct output local.\n+\n+ MatrixObject out = ec.getMatrixObject(_in.getOutput());\n+ FederationMap outMap = mo1.getFedMapping().copyWithNewID(fr1.getID());\n+ Map<FederatedRange, FederatedData> newMap = new HashMap<>();\n+ for(Map.Entry<FederatedRange, FederatedData> pair : outMap.getFedMapping().entrySet()) {\n+ FederatedData om = pair.getValue();\n+ FederatedData nf = new FederatedData(Types.DataType.MATRIX, om.getAddress(), om.getFilepath(),\n+ om.getVarID());\n+ newMap.put(pair.getKey(), nf);\n+ }\n+ out.setFedMapping(outMap);\n}\nprivate void processCastAsFrameVariableInstruction(ExecutionContext ec) {\n@@ -98,23 +121,25 @@ public class VariableFEDInstruction extends FEDInstruction implements LineageTra\nMatrixObject mo1 = ec.getMatrixObject(_in.getInput1());\nif(!mo1.isFederated())\n- throw new DMLRuntimeException(\"Federated Reorg: \"\n- + \"Federated input expected, but invoked w/ \"+mo1.isFederated());\n-\n- //execute transpose at federated site\n- FederatedRequest fr1 = FederationUtils.callInstruction(_in.getInstructionString(), _in.getOutput(),\n- new CPOperand[]{_in.getInput1()}, new long[]{mo1.getFedMapping().getID()});\n+ throw new DMLRuntimeException(\n+ \"Federated Reorg: \" + \"Federated input expected, but invoked w/ \" + mo1.isFederated());\n+\n+ // execute function at federated site.\n+ FederatedRequest fr1 = FederationUtils.callInstruction(_in.getInstructionString(),\n+ _in.getOutput(),\n+ new CPOperand[] {_in.getInput1()},\n+ new long[] {mo1.getFedMapping().getID()});\nmo1.getFedMapping().execute(getTID(), true, fr1);\n- //drive output federated mapping\n+ // Construct output local.\nFrameObject out = ec.getFrameObject(_in.getOutput());\n- out.getDataCharacteristics().set(mo1.getNumColumns(),\n- mo1.getNumRows(), (int)mo1.getBlocksize(), mo1.getNnz());\n+ out.getDataCharacteristics().set(mo1.getNumColumns(), mo1.getNumRows(), (int) mo1.getBlocksize(), mo1.getNnz());\nFederationMap outMap = mo1.getFedMapping().copyWithNewID(fr1.getID());\nMap<FederatedRange, FederatedData> newMap = new HashMap<>();\nfor(Map.Entry<FederatedRange, FederatedData> pair : outMap.getFedMapping().entrySet()) {\nFederatedData om = pair.getValue();\n- FederatedData nf = new FederatedData(Types.DataType.FRAME, om.getAddress(),om.getFilepath(),om.getVarID());\n+ FederatedData nf = new FederatedData(Types.DataType.FRAME, om.getAddress(), om.getFilepath(),\n+ om.getVarID());\nnewMap.put(pair.getKey(), nf);\n}\nValueType[] schema = new ValueType[(int) mo1.getDataCharacteristics().getCols()];\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederetedCastToFrameTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederetedCastToFrameTest.java",
"diff": "@@ -114,7 +114,7 @@ public class FederetedCastToFrameTest extends AutomatedTestBase {\n\"X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")), \"r=\" + rows, \"c=\" + cols};\nString fedOut = runTest(null).toString();\n- LOG.error(fedOut);\n+ LOG.debug(fedOut);\nfedOut = fedOut.split(\"SystemDS Statistics:\")[0];\nAssert.assertTrue(\"Equal Printed Output\", out.equals(fedOut));\nAssert.assertTrue(\"Contains federated Cast to frame\", heavyHittersContainsString(\"fed_castdtf\"));\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederetedCastToMatrixTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.common.Types.DataType;\n+import org.apache.sysds.common.Types.FileFormat;\n+import org.apache.sysds.common.Types.ValueType;\n+import org.apache.sysds.hops.OptimizerUtils;\n+import org.apache.sysds.runtime.io.FrameWriter;\n+import org.apache.sysds.runtime.io.FrameWriterFactory;\n+import org.apache.sysds.runtime.matrix.data.FrameBlock;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.apache.sysds.test.functions.frame.DetectSchemaTest;\n+import org.junit.Assert;\n+import org.junit.Ignore;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederetedCastToMatrixTest extends AutomatedTestBase {\n+ private static final Log LOG = LogFactory.getLog(FederetedCastToMatrixTest.class.getName());\n+\n+ private final static String TEST_DIR = \"functions/federated/primitives/\";\n+ private final static String TEST_NAME = \"FederatedCastToMatrixTest\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederetedCastToMatrixTest.class.getSimpleName() + \"/\";\n+\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME));\n+ }\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ // rows have to be even and > 1\n+ return Arrays.asList(new Object[][] {{10, 32}});\n+ }\n+\n+ @Test\n+ public void federatedMultiplyCP() {\n+ federatedMultiply(Types.ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ @Ignore\n+ public void federatedMultiplySP() {\n+ // TODO Fix me Spark execution error\n+ federatedMultiply(Types.ExecMode.SPARK);\n+ }\n+\n+ public void federatedMultiply(Types.ExecMode execMode) {\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ Types.ExecMode platformOld = rtplatform;\n+ rtplatform = execMode;\n+ if(rtplatform == Types.ExecMode.SPARK) {\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+ try {\n+ 
getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ ValueType[] schema = new ValueType[cols];\n+ Arrays.fill(schema, ValueType.FP64);\n+ FrameBlock frame1 = new FrameBlock(schema);\n+ FrameBlock frame2 = new FrameBlock(schema);\n+ FrameWriter writer = FrameWriterFactory.createFrameWriter(FileFormat.BINARY);\n+\n+ // write input matrices\n+ int halfRows = rows / 2;\n+ // We have two matrices handled by a single federated worker\n+ double[][] X1 = getRandomMatrix(halfRows, cols, 0, 1, 1, 42);\n+ double[][] X2 = getRandomMatrix(halfRows, cols, 0, 1, 1, 1340);\n+\n+ DetectSchemaTest.initFrameDataString(frame1, X1, schema, halfRows, cols);\n+ writer.writeFrameToHDFS(frame1.slice(0, halfRows - 1, 0, schema.length - 1, new FrameBlock()),\n+ input(\"X1\"),\n+ halfRows,\n+ schema.length);\n+\n+ DetectSchemaTest.initFrameDataString(frame2, X2, schema, halfRows, cols);\n+ writer.writeFrameToHDFS(frame2.slice(0, halfRows - 1, 0, schema.length - 1, new FrameBlock()),\n+ input(\"X2\"),\n+ halfRows,\n+ schema.length);\n+\n+ MatrixCharacteristics mc = new MatrixCharacteristics(X1.length, X1[0].length,\n+ OptimizerUtils.DEFAULT_BLOCKSIZE, -1);\n+ HDFSTool.writeMetaDataFile(input(\"X1\") + \".mtd\", null, schema, DataType.FRAME, mc, FileFormat.BINARY);\n+ HDFSTool.writeMetaDataFile(input(\"X2\") + \".mtd\", null, schema, DataType.FRAME, mc, FileFormat.BINARY);\n+\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1);\n+ Thread t2 = startLocalFedWorkerThread(port2);\n+\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-nvargs\", \"X1=\" + input(\"X1\"), \"X2=\" + input(\"X2\")};\n+ String out = runTest(null).toString().split(\"SystemDS Statistics:\")[0];\n+\n+ // Run actual dml script with federated matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\",\n+ \"X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")), \"r=\" + rows, \"c=\" + cols};\n+ String fedOut = runTest(null).toString();\n+\n+ LOG.debug(fedOut);\n+ fedOut = fedOut.split(\"SystemDS Statistics:\")[0];\n+ Assert.assertTrue(\"Equal Printed Output\", out.equals(fedOut));\n+ Assert.assertTrue(\"Contains federated Cast to frame\", heavyHittersContainsString(\"fed_castdtm\"));\n+ TestUtils.shutdownThreads(t1, t2);\n+\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+ }\n+ catch(IOException e) {\n+ Assert.fail(\"Error writing input frame.\");\n+ }\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/frame/DetectSchemaTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/frame/DetectSchemaTest.java",
"diff": "@@ -117,7 +117,7 @@ public class DetectSchemaTest extends AutomatedTestBase {\n}\nelse {\ndouble[][] A = getRandomMatrix(rows, 3, -Float.MAX_VALUE, Float.MAX_VALUE, 0.7, 2373);\n- initFrameDataString(frame1, A, schema);\n+ initFrameDataString(frame1, A, schema, rows, 3);\nwriter.writeFrameToHDFS(frame1.slice(0, rows-1, 0, schema.length-1, new FrameBlock()), input(\"A\"), rows, schema.length);\nschema[schema.length-2] = Types.ValueType.FP64;\n}\n@@ -143,8 +143,8 @@ public class DetectSchemaTest extends AutomatedTestBase {\n}\n}\n- private static void initFrameDataString(FrameBlock frame1, double[][] data, Types.ValueType[] lschema) {\n- for (int j = 0; j < 3; j++) {\n+ public static void initFrameDataString(FrameBlock frame1, double[][] data, Types.ValueType[] lschema, int rows, int cols) {\n+ for (int j = 0; j < cols; j++) {\nTypes.ValueType vt = lschema[j];\nswitch (vt) {\ncase STRING:\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"diff": "* under the License.\n*/\n+\npackage org.apache.sysds.test.functions.lineage;\nimport java.util.ArrayList;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/primitives/FederatedCastToMatrixTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(type=\"frame\", addresses=list($X1, $X2),\n+ ranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\n+\n+Z = as.matrix(X)\n+print(toString(Z[1]))\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/primitives/FederatedCastToMatrixTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($X1), read($X2))\n+\n+Z = as.matrix(X)\n+print(toString(Z[1]))\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2724] Cast to matrix Federated
Closes #1100 |
49,722 | 13.11.2020 03:01:11 | -3,600 | 809d53f6236232cb31eb02668263aab2ac80116d | [MINOR] Modifications to Federated Tests
This commit changes the federated tests to execute more consistently on
GitHub. | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/dist.dml",
"new_path": "scripts/builtin/dist.dml",
"diff": "@@ -24,7 +24,6 @@ m_dist = function(Matrix[Double] X) return (Matrix[Double] Y) {\nG = X %*% t(X);\nI = matrix(1, rows = nrow(G), cols = ncol(G));\nY = -2 * (G) + (diag(G) * I) + (I * t(diag(G)));\n-# Y = -2 * (G) + t(I %*% diag(diag(G))) + t(diag(diag(G)) %*% I);\nY = sqrt(Y);\nY = replace(target = Y, pattern=0/0, replacement = 0);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"diff": "@@ -172,30 +172,6 @@ public class FederationUtils {\npublic static MatrixBlock aggVar(Future<FederatedResponse>[] ffr, Future<FederatedResponse>[] meanFfr, FederationMap map, boolean isRowAggregate, boolean isScalar) {\ntry {\n-// else if(aop.aggOp.increOp.fn instanceof CM) {\n-// double var = ((ScalarObject) ffr[0].get().getData()[0]).getDoubleValue();\n-// double mean = ((ScalarObject) meanFfr[0].get().getData()[0]).getDoubleValue();\n-// long size = map.getFederatedRanges()[0].getSize();\n-// for(int i = 0; i < ffr.length - 1; i++) {\n-// long l = size + map.getFederatedRanges()[i+1].getSize();\n-// double k = ((size * var) + (map.getFederatedRanges()[i+1].getSize() * ((ScalarObject) ffr[i+1].get().getData()[0]).getDoubleValue())) / l;\n-// var = k + (size * map.getFederatedRanges()[i+1].getSize()) * Math.pow((mean - ((ScalarObject) meanFfr[i+1].get().getData()[0]).getDoubleValue()) / l, 2);\n-// mean = (mean * size + ((ScalarObject) meanFfr[i+1].get().getData()[0]).getDoubleValue() * (map.getFederatedRanges()[i+1].getSize())) / l;\n-// size = l;\n-// System.out.println(\"Olga\");\n-// // long l = sizes[i] + sizes[i + 1];\n-// // double k = Math.pow(means[i] - means[i+1], 2) * (sizes[i] * sizes[i+1]);\n-// // k += ((sizes[i] * vars[i]) + (sizes[i+1] * vars[i+1])) * l;\n-// // vars[i+1] = k / Math.pow(l, 2);\n-// //\n-// // means[i+1] = (means[i] * sizes[i] + means[i] * sizes[i]) / l;\n-// // sizes[i+1] = l;\n-// }\n-// return new DoubleObject(var);\n-//\n-// }\n-\n-\nFederatedRange[] ranges = map.getFederatedRanges();\nBinaryOperator plus = InstructionUtils.parseBinaryOperator(\"+\");\nBinaryOperator minus = InstructionUtils.parseBinaryOperator(\"-\");\n@@ -204,13 +180,13 @@ public class FederationUtils {\nScalarOperator dev1 = InstructionUtils.parseScalarBinaryOperator(\"/\", false);\nScalarOperator pow = InstructionUtils.parseScalarBinaryOperator(\"^2\", false);\n- long size1 = isScalar ? ranges[0].getSize() : ranges[0].getSize(isRowAggregate ? 0 : 1);\n+ long size1 = isScalar ? ranges[0].getSize() : ranges[0].getSize(isRowAggregate ? 1 : 0);\nMatrixBlock var1 = (MatrixBlock)ffr[0].get().getData()[0];\nMatrixBlock mean1 = (MatrixBlock)meanFfr[0].get().getData()[0];\nfor(int i=0; i < ffr.length - 1; i++) {\nMatrixBlock var2 = (MatrixBlock)ffr[i+1].get().getData()[0];\nMatrixBlock mean2 = (MatrixBlock)meanFfr[i+1].get().getData()[0];\n- long size2 = isScalar ? ranges[i+1].getSize() : ranges[i+1].getSize(isRowAggregate ? 0 : 1);\n+ long size2 = isScalar ? ranges[i+1].getSize() : ranges[i+1].getSize(isRowAggregate ? 
1 : 0);\nmult1 = mult1.setConstant(size1);\nvar1 = var1.scalarOperations(mult1, new MatrixBlock());\n@@ -219,11 +195,12 @@ public class FederationUtils {\ndev1 = dev1.setConstant(size1 + size2);\nvar1 = var1.scalarOperations(dev1, new MatrixBlock());\n- MatrixBlock tmp1 = (mean1.binaryOperationsInPlace(minus, mean2)).scalarOperations(dev1, new MatrixBlock());\n+ MatrixBlock tmp1 = new MatrixBlock(mean1);\n+ tmp1 = tmp1.binaryOperationsInPlace(minus, mean2);\n+ tmp1 = tmp1.scalarOperations(dev1, new MatrixBlock());\ntmp1 = tmp1.scalarOperations(pow, new MatrixBlock());\nmult1 = mult1.setConstant(size1*size2);\ntmp1 = tmp1.scalarOperations(mult1, new MatrixBlock());\n-\nvar1 = tmp1.binaryOperationsInPlace(plus, var1);\n// next mean\n@@ -272,13 +249,6 @@ public class FederationUtils {\nvar = k + (size * map.getFederatedRanges()[i+1].getSize()) * Math.pow((mean - ((ScalarObject) meanFfr[i+1].get().getData()[0]).getDoubleValue()) / l, 2);\nmean = (mean * size + ((ScalarObject) meanFfr[i+1].get().getData()[0]).getDoubleValue() * (map.getFederatedRanges()[i+1].getSize())) / l;\nsize = l;\n-// long l = sizes[i] + sizes[i + 1];\n-// double k = Math.pow(means[i] - means[i+1], 2) * (sizes[i] * sizes[i+1]);\n-// k += ((sizes[i] * vars[i]) + (sizes[i+1] * vars[i+1])) * l;\n-// vars[i+1] = k / Math.pow(l, 2);\n-//\n-// means[i+1] = (means[i] * sizes[i] + means[i] * sizes[i]) / l;\n-// sizes[i+1] = l;\n}\nreturn new DoubleObject(var);\n@@ -311,7 +281,7 @@ public class FederationUtils {\nboolean isMin = ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN;\nreturn aggMinMax(ffr,isMin,false, Optional.of(map.getType()));\n} else if(aop.aggOp.increOp.fn instanceof CM) {\n- return aggVar(ffr, meanFfr, map, aop.isRowAggregate(), !(aop.isColAggregate() && aop.isRowAggregate())); //TODO\n+ return aggVar(ffr, meanFfr, map, aop.isRowAggregate(), !(aop.isColAggregate() || aop.isRowAggregate())); //TODO\n}\nelse\nthrow new DMLRuntimeException(\"Unsupported aggregation operator: \"\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateUnaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateUnaryFEDInstruction.java",
"diff": "@@ -119,6 +119,6 @@ public class AggregateUnaryFEDInstruction extends UnaryFEDInstruction {\nif( output.isScalar() )\nec.setVariable(output.getName(), FederationUtils.aggScalar(aop, tmp, meanTmp, map));\nelse\n- ec.setMatrixOutput(output.getName(), FederationUtils.aggMatrix(aop, meanTmp, tmp, map));\n+ ec.setMatrixOutput(output.getName(), FederationUtils.aggMatrix(aop, tmp, meanTmp, map));\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -100,22 +100,19 @@ public class FEDInstructionUtils {\nfedinst = ReorgFEDInstruction.parseInstruction(rinst.getInstructionString());\n}\nelse if(instruction.input1 != null && instruction.input1.isMatrix()\n- && ec.getMatrixObject(instruction.input1).isFederated()\n&& ec.containsVariable(instruction.input1)) {\nMatrixObject mo1 = ec.getMatrixObject(instruction.input1);\n- if(instruction.getOpcode().equalsIgnoreCase(\"cm\")) {\n+ if(instruction.getOpcode().equalsIgnoreCase(\"cm\") && mo1.isFederated()) {\nfedinst = CentralMomentFEDInstruction.parseInstruction(inst.getInstructionString());\n- }\n- else if(inst instanceof AggregateUnaryCPInstruction &&\n+ } else if(inst.getOpcode().equalsIgnoreCase(\"qsort\") && mo1.isFederated()) {\n+ if(mo1.getFedMapping().getFederatedRanges().length == 1)\n+ fedinst = QuantileSortFEDInstruction.parseInstruction(inst.getInstructionString());\n+ } else if(inst instanceof AggregateUnaryCPInstruction && mo1.isFederated() &&\n((AggregateUnaryCPInstruction) instruction).getAUType() == AggregateUnaryCPInstruction.AUType.DEFAULT) {\nfedinst = AggregateUnaryFEDInstruction.parseInstruction(inst.getInstructionString());\n}\n- else if(inst.getOpcode().equalsIgnoreCase(\"qsort\") &&\n- mo1.getFedMapping().getFederatedRanges().length == 1) {\n- fedinst = QuantileSortFEDInstruction.parseInstruction(inst.getInstructionString());\n- }\n}\n}\nelse if (inst instanceof BinaryCPInstruction) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedCorTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedCorTest.java",
"diff": "@@ -53,7 +53,7 @@ public class FederatedCorTest extends AutomatedTestBase {\[email protected]\npublic static Collection<Object[]> data() {\n- return Arrays.asList(new Object[][] {{1600, 8, true}});\n+ return Arrays.asList(new Object[][] {{1600, 40, true}});\n}\n@Override\n@@ -133,9 +133,10 @@ public class FederatedCorTest extends AutomatedTestBase {\nrunTest(true, false, null, -1);\n// compare via files\n- compareResults(1e-9);\n+ compareResults(1e-2);\n- // Assert.assertTrue(heavyHittersContainsString(\"k+\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_uacvar\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_tsmm\"));\n// check that federated input files are still existing\nAssert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedVarTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedVarTest.java",
"diff": "@@ -58,12 +58,8 @@ public class FederatedVarTest extends AutomatedTestBase {\[email protected]\npublic static Collection<Object[]> data() {\nreturn Arrays.asList(new Object[][] {\n- // {10, 1000, false},\n- {100, 4, false},\n- // {36, 1000, true},\n- {1000, 10, true},\n- // {4, 100, true}\n- // {1600, 8, false},\n+ {1000, 40, false},\n+ {1000, 400, true}\n});\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedFullAggregateTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedFullAggregateTest.java",
"diff": "@@ -43,6 +43,7 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\nprivate final static String TEST_NAME2 = \"FederatedMeanTest\";\nprivate final static String TEST_NAME3 = \"FederatedMaxTest\";\nprivate final static String TEST_NAME4 = \"FederatedMinTest\";\n+ private final static String TEST_NAME5 = \"FederatedVarTest\";\nprivate final static String TEST_DIR = \"functions/federated/aggregate/\";\nprivate static final String TEST_CLASS_DIR = TEST_DIR + FederatedFullAggregateTest.class.getSimpleName() + \"/\";\n@@ -68,7 +69,7 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\n}\nprivate enum OpType {\n- SUM, MEAN, MAX, MIN\n+ SUM, MEAN, MAX, MIN, VAR\n}\n@Override\n@@ -78,6 +79,7 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\naddTestConfiguration(TEST_NAME2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME2, new String[] {\"S.scalar\"}));\naddTestConfiguration(TEST_NAME3, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME3, new String[] {\"S.scalar\"}));\naddTestConfiguration(TEST_NAME4, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME4, new String[] {\"S.scalar\"}));\n+ addTestConfiguration(TEST_NAME5, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME5, new String[] {\"S.scalar\"}));\n}\n@Test\n@@ -101,7 +103,11 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\n}\n@Test\n- @Ignore\n+ public void testVarDenseMatrixCP() {\n+ runColAggregateOperationTest(OpType.VAR, ExecType.CP);\n+ }\n+\n+ @Test\npublic void testSumDenseMatrixSP() {\nrunColAggregateOperationTest(OpType.SUM, ExecType.SPARK);\n}\n@@ -124,6 +130,11 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\nrunColAggregateOperationTest(OpType.MIN, ExecType.SPARK);\n}\n+ @Test\n+ public void testVarDenseMatrixSP() {\n+ runColAggregateOperationTest(OpType.VAR, ExecType.SPARK);\n+ }\n+\nprivate void runColAggregateOperationTest(OpType type, ExecType instType) {\nExecMode platformOld = rtplatform;\nswitch(instType) {\n@@ -152,6 +163,9 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\ncase MIN:\nTEST_NAME = TEST_NAME4;\nbreak;\n+ case VAR:\n+ TEST_NAME = TEST_NAME5;\n+ break;\n}\ngetAndLoadTestConfiguration(TEST_NAME);\n@@ -165,10 +179,10 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\nc = cols;\n}\n- double[][] X1 = getRandomMatrix(r, c, 1, 5, 1, 3);\n- double[][] X2 = getRandomMatrix(r, c, 1, 5, 1, 7);\n- double[][] X3 = getRandomMatrix(r, c, 1, 5, 1, 8);\n- double[][] X4 = getRandomMatrix(r, c, 1, 5, 1, 9);\n+ double[][] X1 = getRandomMatrix(r, c, 1, 3, 1, 3);\n+ double[][] X2 = getRandomMatrix(r, c, 1, 3, 1, 7);\n+ double[][] X3 = getRandomMatrix(r, c, 1, 3, 1, 8);\n+ double[][] X4 = getRandomMatrix(r, c, 1, 3, 1, 9);\nMatrixCharacteristics mc = new MatrixCharacteristics(r, c, blocksize, r * c);\nwriteInputMatrixWithMTD(\"X1\", X1, false, mc);\n@@ -209,7 +223,7 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\nrunTest(true, false, null, -1);\n// compare via files\n- compareResults(1e-9);\n+ compareResults(type == OpType.VAR ? 1e-2 : 1e-9);\nswitch(type) {\ncase SUM:\n@@ -224,6 +238,9 @@ public class FederatedFullAggregateTest extends AutomatedTestBase {\ncase MIN:\nAssert.assertTrue(heavyHittersContainsString(\"fed_uamin\"));\nbreak;\n+ case VAR:\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_uavar\"));\n+ break;\n}\n// check that federated input files are still existing\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRowColAggregateTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRowColAggregateTest.java",
"diff": "@@ -45,6 +45,8 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\nprivate final static String TEST_NAME6 = \"FederatedRowMeanTest\";\nprivate final static String TEST_NAME7 = \"FederatedRowMaxTest\";\nprivate final static String TEST_NAME8 = \"FederatedRowMinTest\";\n+ private final static String TEST_NAME9 = \"FederatedRowVarTest\";\n+ private final static String TEST_NAME10 = \"FederatedColVarTest\";\nprivate final static String TEST_DIR = \"functions/federated/aggregate/\";\nprivate static final String TEST_CLASS_DIR = TEST_DIR + FederatedRowColAggregateTest.class.getSimpleName() + \"/\";\n@@ -60,13 +62,14 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\[email protected]\npublic static Collection<Object[]> data() {\nreturn Arrays.asList(\n- new Object[][] {{10, 1000, false},\n- //{100, 4, false}, {36, 1000, true}, {1000, 10, true}, {4, 100, true}\n+ new Object[][] {\n+ {10, 1000, false},\n+ {1000, 40, true},\n});\n}\nprivate enum OpType {\n- SUM, MEAN, MAX, MIN\n+ SUM, MEAN, MAX, MIN, VAR\n}\nprivate enum InstType {\n@@ -84,6 +87,8 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\naddTestConfiguration(TEST_NAME6, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME6, new String[] {\"S\"}));\naddTestConfiguration(TEST_NAME7, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME7, new String[] {\"S\"}));\naddTestConfiguration(TEST_NAME8, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME8, new String[] {\"S\"}));\n+ addTestConfiguration(TEST_NAME9, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME9, new String[] {\"S\"}));\n+ addTestConfiguration(TEST_NAME10, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME10, new String[] {\"S\"}));\n}\n@Test\n@@ -126,6 +131,16 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\nrunAggregateOperationTest(OpType.MIN, InstType.COL, ExecMode.SINGLE_NODE);\n}\n+ @Test\n+ public void testRowVarDenseMatrixCP() {\n+ runAggregateOperationTest(OpType.VAR, InstType.ROW, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void testColVarDenseMatrixCP() {\n+ runAggregateOperationTest(OpType.VAR, InstType.COL, ExecMode.SINGLE_NODE);\n+ }\n+\nprivate void runAggregateOperationTest(OpType type, InstType instr, ExecMode execMode) {\nboolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\nExecMode platformOld = rtplatform;\n@@ -147,6 +162,9 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\ncase MIN:\nTEST_NAME = instr == InstType.COL ? TEST_NAME4 : TEST_NAME8;\nbreak;\n+ case VAR:\n+ TEST_NAME = instr == InstType.COL ? 
TEST_NAME10 : TEST_NAME9;\n+ break;\n}\ngetAndLoadTestConfiguration(TEST_NAME);\n@@ -160,10 +178,10 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\nc = cols;\n}\n- double[][] X1 = getRandomMatrix(r, c, 1, 5, 1, 3);\n- double[][] X2 = getRandomMatrix(r, c, 1, 5, 1, 7);\n- double[][] X3 = getRandomMatrix(r, c, 1, 5, 1, 8);\n- double[][] X4 = getRandomMatrix(r, c, 1, 5, 1, 9);\n+ double[][] X1 = getRandomMatrix(r, c, 1, 3, 1, 3);\n+ double[][] X2 = getRandomMatrix(r, c, 1, 3, 1, 7);\n+ double[][] X3 = getRandomMatrix(r, c, 1, 3, 1, 8);\n+ double[][] X4 = getRandomMatrix(r, c, 1, 3, 1, 9);\nMatrixCharacteristics mc = new MatrixCharacteristics(r, c, blocksize, r * c);\nwriteInputMatrixWithMTD(\"X1\", X1, false, mc);\n@@ -209,7 +227,7 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\nrunTest(true, false, null, -1);\n// compare via files\n- compareResults(1e-9);\n+ compareResults(type == FederatedRowColAggregateTest.OpType.VAR ? 1e-2 : 1e-9);\nString fedInst = instr == InstType.COL ? \"fed_uac\" : \"fed_uar\";\n@@ -226,6 +244,9 @@ public class FederatedRowColAggregateTest extends AutomatedTestBase {\ncase MIN:\nAssert.assertTrue(heavyHittersContainsString(fedInst.concat(\"min\")));\nbreak;\n+ case VAR:\n+ Assert.assertTrue(heavyHittersContainsString(fedInst.concat(\"var\")));\n+ break;\n}\n// check that federated input files are still existing\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/aggregate/FederatedColVarTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = colVars(A);\n+write(s, $out_S);\n\\ No newline at end of file\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/aggregate/FederatedColVarTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if($6) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = colVars(A);\n+write(s, $5);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/aggregate/FederatedRowVarTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = rowVars(A);\n+write(s, $out_S);\n\\ No newline at end of file\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/aggregate/FederatedRowVarTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if($6) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = rowVars(A);\n+write(s, $5);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/aggregate/FederatedVarTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = var(A);\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/aggregate/FederatedVarTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if($6) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = var(A);\n+write(s, $5);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Modifications to Federated Tests
This commit change the federated tests to execute more consistantly on
github. |
49,722 | 14.11.2020 17:42:33 | -3,600 | 4d8ec5dd64922441f0acc452eee3e49bee0653cd | Federated remove empty
closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -129,7 +129,7 @@ public class FEDInstructionUtils {\n}\nelse if( inst instanceof ParameterizedBuiltinCPInstruction ) {\nParameterizedBuiltinCPInstruction pinst = (ParameterizedBuiltinCPInstruction) inst;\n- if(pinst.getOpcode().equals(\"replace\") && pinst.getTarget(ec).isFederated()) {\n+ if((pinst.getOpcode().equals(\"replace\") || pinst.getOpcode().equals(\"rmempty\")) && pinst.getTarget(ec).isFederated()) {\nfedinst = ParameterizedBuiltinFEDInstruction.parseInstruction(pinst.getInstructionString());\n}\nelse if((pinst.getOpcode().equals(\"transformdecode\") || pinst.getOpcode().equals(\"transformapply\")) &&\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"diff": "@@ -234,7 +234,7 @@ public class InitFEDInstruction extends FEDInstruction {\n}\ntry {\nint timeout = ConfigurationManager.getDMLConfig().getIntValue(DMLConfig.DEFAULT_FEDERATED_INITIALIZATION_TIMEOUT);\n- LOG.error(\"Federated Initialization with timeout: \" + timeout);\n+ LOG.debug(\"Federated Initialization with timeout: \" + timeout);\nfor (Pair<FederatedData, Future<FederatedResponse>> idResponse : idResponses)\nidResponse.getRight().get(timeout,TimeUnit.SECONDS); //wait for initialization\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ParameterizedBuiltinFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ParameterizedBuiltinFEDInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\n+import java.util.AbstractMap;\n+import java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.LinkedHashMap;\n-\nimport java.util.List;\n+import java.util.Map;\n+\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.common.Types.ValueType;\n@@ -34,6 +37,7 @@ import org.apache.sysds.runtime.controlprogram.caching.CacheableData;\nimport org.apache.sysds.runtime.controlprogram.caching.FrameObject;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRange;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedResponse.ResponseType;\n@@ -47,6 +51,7 @@ import org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.matrix.data.FrameBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.runtime.matrix.operators.BinaryOperator;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\nimport org.apache.sysds.runtime.matrix.operators.SimpleOperator;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\n@@ -100,7 +105,7 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\nLinkedHashMap<String, String> paramsMap = constructParameterMap(parts);\n// determine the appropriate value function\n- if( opcode.equalsIgnoreCase(\"replace\") ) {\n+ if(opcode.equalsIgnoreCase(\"replace\") || opcode.equalsIgnoreCase(\"rmempty\")) {\nValueFunction func = ParameterizedBuiltin.getParameterizedBuiltinFnObject(opcode);\nreturn new ParameterizedBuiltinFEDInstruction(new SimpleOperator(func), paramsMap, out, opcode, str);\n}\n@@ -120,8 +125,10 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\n// similar to unary federated instructions, get federated input\n// execute instruction, and derive federated output matrix\nMatrixObject mo = (MatrixObject) getTarget(ec);\n- FederatedRequest fr1 = FederationUtils.callInstruction(instString, output,\n- new CPOperand[] {getTargetOperand()}, new long[] {mo.getFedMapping().getID()});\n+ FederatedRequest fr1 = FederationUtils.callInstruction(instString,\n+ output,\n+ new CPOperand[] {getTargetOperand()},\n+ new long[] {mo.getFedMapping().getID()});\nmo.getFedMapping().execute(getTID(), true, fr1);\n// derive new fed mapping for output\n@@ -129,6 +136,9 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\nout.getDataCharacteristics().set(mo.getDataCharacteristics());\nout.setFedMapping(mo.getFedMapping().copyWithNewID(fr1.getID()));\n}\n+ else if(opcode.equals(\"rmempty\")) {\n+ rmempty(ec);\n+ }\nelse if(opcode.equalsIgnoreCase(\"transformdecode\"))\ntransformDecode(ec);\nelse if(opcode.equalsIgnoreCase(\"transformapply\"))\n@@ -138,6 +148,136 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\n}\n}\n+ private void rmempty(ExecutionContext ec) {\n+ MatrixObject mo = (MatrixObject) getTarget(ec);\n+ MatrixObject out = ec.getMatrixObject(output);\n+ Map<FederatedRange, int[]> dcs;\n+ if((instString.contains(\"margin=rows\") && 
mo.isFederated(FederationMap.FType.ROW)) ||\n+ (instString.contains(\"margin=cols\") && mo.isFederated(FederationMap.FType.COL))) {\n+ FederatedRequest fr1 = FederationUtils.callInstruction(instString,\n+ output,\n+ new CPOperand[] {getTargetOperand()},\n+ new long[] {mo.getFedMapping().getID()});\n+ mo.getFedMapping().execute(getTID(), true, fr1);\n+ out.setFedMapping(mo.getFedMapping().copyWithNewID(fr1.getID()));\n+\n+ // new ranges\n+ dcs = new HashMap<>();\n+ out.getFedMapping().forEachParallel((range, data) -> {\n+ try {\n+ FederatedResponse response = data\n+ .executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n+ new GetDataCharacteristics(data.getVarID())))\n+ .get();\n+\n+ if(!response.isSuccessful())\n+ response.throwExceptionFromResponse();\n+ int[] subRangeCharacteristics = (int[]) response.getData()[0];\n+ synchronized(dcs) {\n+ dcs.put(range, subRangeCharacteristics);\n+ }\n+ }\n+ catch(Exception e) {\n+ throw new DMLRuntimeException(e);\n+ }\n+ return null;\n+ });\n+ }\n+ else {\n+ Map.Entry<FederationMap, Map<FederatedRange, int[]>> entry = rmemptyC(ec, mo);\n+ out.setFedMapping(entry.getKey());\n+ dcs = entry.getValue();\n+ }\n+ out.getDataCharacteristics().set(mo.getDataCharacteristics());\n+ for(int i = 0; i < mo.getFedMapping().getFederatedRanges().length; i++) {\n+ int[] newRange = dcs.get(out.getFedMapping().getFederatedRanges()[i]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setBeginDim(0,\n+ (out.getFedMapping().getFederatedRanges()[i].getBeginDims()[0] == 0 ||\n+ i == 0) ? 0 : out.getFedMapping().getFederatedRanges()[i - 1].getEndDims()[0]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setEndDim(0,\n+ out.getFedMapping().getFederatedRanges()[i].getBeginDims()[0] + newRange[0]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setBeginDim(1,\n+ (out.getFedMapping().getFederatedRanges()[i].getBeginDims()[1] == 0 ||\n+ i == 0) ? 
0 : out.getFedMapping().getFederatedRanges()[i - 1].getEndDims()[1]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setEndDim(1,\n+ out.getFedMapping().getFederatedRanges()[i].getBeginDims()[1] + newRange[1]);\n+ }\n+\n+ out.getDataCharacteristics().set(out.getFedMapping().getMaxIndexInRange(0),\n+ out.getFedMapping().getMaxIndexInRange(1),\n+ (int) mo.getBlocksize());\n+ }\n+\n+ private Map.Entry<FederationMap, Map<FederatedRange, int[]>> rmemptyC(ExecutionContext ec, MatrixObject mo) {\n+ boolean marginRow = instString.contains(\"margin=rows\");\n+\n+ // find empty in ranges\n+ List<MatrixBlock> colSums = new ArrayList<>();\n+ mo.getFedMapping().forEachParallel((range, data) -> {\n+ try {\n+ FederatedResponse response = data\n+ .executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n+ new GetVector(data.getVarID(), marginRow)))\n+ .get();\n+\n+ if(!response.isSuccessful())\n+ response.throwExceptionFromResponse();\n+ MatrixBlock vector = (MatrixBlock) response.getData()[0];\n+ synchronized(colSums) {\n+ colSums.add(vector);\n+ }\n+ }\n+ catch(Exception e) {\n+ throw new DMLRuntimeException(e);\n+ }\n+ return null;\n+ });\n+\n+ // find empty in matrix\n+ BinaryOperator plus = InstructionUtils.parseBinaryOperator(\"+\");\n+ BinaryOperator greater = InstructionUtils.parseBinaryOperator(\">\");\n+ MatrixBlock tmp1 = colSums.get(0);\n+ for(int i = 1; i < colSums.size(); i++)\n+ tmp1 = tmp1.binaryOperationsInPlace(plus, colSums.get(i));\n+ tmp1 = tmp1.binaryOperationsInPlace(greater, new MatrixBlock(tmp1.getNumRows(), tmp1.getNumColumns(), 0.0));\n+\n+ // remove empty from matrix\n+ Map<FederatedRange, int[]> dcs = new HashMap<>();\n+ long varID = FederationUtils.getNextFedDataID();\n+ MatrixBlock finalTmp = new MatrixBlock(tmp1);\n+ FederationMap resMapping;\n+ if(tmp1.sum() == (marginRow ? tmp1.getNumColumns() : tmp1.getNumRows())) {\n+ resMapping = mo.getFedMapping();\n+ }\n+ else {\n+ resMapping = mo.getFedMapping().mapParallel(varID, (range, data) -> {\n+ try {\n+ FederatedResponse response = data\n+ .executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n+ new ParameterizedBuiltinFEDInstruction.RemoveEmpty(data.getVarID(), varID, finalTmp,\n+ params.containsKey(\"select\") ? 
ec.getMatrixInput(params.get(\"select\")) : null,\n+ Boolean.parseBoolean(params.get(\"empty.return\").toLowerCase()), marginRow)))\n+ .get();\n+ if(!response.isSuccessful())\n+ response.throwExceptionFromResponse();\n+ int[] subRangeCharacteristics = (int[]) response.getData()[0];\n+ synchronized(dcs) {\n+ dcs.put(range, subRangeCharacteristics);\n+ }\n+ }\n+ catch(Exception e) {\n+ throw new DMLRuntimeException(e);\n+ }\n+ return null;\n+ });\n+ }\n+ return new AbstractMap.SimpleEntry<>(resMapping, dcs);\n+ }\n+\nprivate void transformDecode(ExecutionContext ec) {\n// acquire locks\nMatrixObject mo = ec.getMatrixObject(params.get(\"target\"));\n@@ -170,9 +310,8 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\nFederatedResponse response;\ntry {\n- response = data.executeFederatedOperation(\n- new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n- new DecodeMatrix(data.getVarID(), varID, metaSlice, decoder))).get();\n+ response = data.executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF,\n+ -1, new DecodeMatrix(data.getVarID(), varID, metaSlice, decoder))).get();\nif(!response.isSuccessful())\nresponse.throwExceptionFromResponse();\n@@ -217,7 +356,8 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\ntry {\nFederatedResponse response = data\n.executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n- new GetColumnNames(data.getVarID()))).get();\n+ new GetColumnNames(data.getVarID())))\n+ .get();\n// no synchronization necessary since names should anyway match\nString[] subRangeColNames = (String[]) response.getData()[0];\n@@ -261,7 +401,8 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\nEncoderOmit subRangeEncoder = (EncoderOmit) omitEncoder.subRangeEncoder(range.asIndexRange().add(1));\nFederatedResponse response = data\n.executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n- new InitRowsToRemoveOmit(data.getVarID(), subRangeEncoder))).get();\n+ new InitRowsToRemoveOmit(data.getVarID(), subRangeEncoder)))\n+ .get();\n// no synchronization necessary since names should anyway match\nEncoder builtEncoder = (Encoder) response.getData()[0];\n@@ -353,4 +494,93 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\nreturn new FederatedResponse(ResponseType.SUCCESS, new Object[] {_encoder});\n}\n}\n+\n+ private static class GetDataCharacteristics extends FederatedUDF {\n+\n+ private static final long serialVersionUID = 578461386177730925L;\n+\n+ public GetDataCharacteristics(long varID) {\n+ super(new long[] {varID});\n+ }\n+\n+ @Override\n+ public FederatedResponse execute(ExecutionContext ec, Data... 
data) {\n+ MatrixBlock mb = ((MatrixObject) data[0]).acquireReadAndRelease();\n+ return new FederatedResponse(ResponseType.SUCCESS, new int[] {mb.getNumRows(), mb.getNumColumns()});\n+ }\n+ }\n+\n+ private static class RemoveEmpty extends FederatedUDF {\n+\n+ private static final long serialVersionUID = 12341521331L;\n+ private final MatrixBlock _vector;\n+ private final long _outputID;\n+ private MatrixBlock _select;\n+ private boolean _emptyReturn;\n+ private final boolean _marginRow;\n+\n+ public RemoveEmpty(long varID, long outputID, MatrixBlock vector, MatrixBlock select, boolean emptyReturn,\n+ boolean marginRow) {\n+ super(new long[] {varID});\n+ _vector = vector;\n+ _outputID = outputID;\n+ _select = select;\n+ _emptyReturn = emptyReturn;\n+ _marginRow = marginRow;\n+ }\n+\n+ @Override\n+ public FederatedResponse execute(ExecutionContext ec, Data... data) {\n+ MatrixBlock mb = ((MatrixObject) data[0]).acquireReadAndRelease();\n+\n+ BinaryOperator plus = InstructionUtils.parseBinaryOperator(\"+\");\n+ BinaryOperator minus = InstructionUtils.parseBinaryOperator(\"-\");\n+\n+ mb = mb.binaryOperationsInPlace(plus, new MatrixBlock(mb.getNumRows(), mb.getNumColumns(), 1.0));\n+ for(int i = 0; i < mb.getNumRows(); i++)\n+ for(int j = 0; j < mb.getNumColumns(); j++)\n+ if(_marginRow)\n+ mb.setValue(i, j, _vector.getValue(i, 0) * mb.getValue(i, j));\n+ else\n+ mb.setValue(i, j, _vector.getValue(0, j) * mb.getValue(i, j));\n+\n+ MatrixBlock res = mb.removeEmptyOperations(new MatrixBlock(), _marginRow, _emptyReturn, _select);\n+ res = res.binaryOperationsInPlace(minus, new MatrixBlock(res.getNumRows(), res.getNumColumns(), 1.0));\n+\n+ MatrixObject mout = ExecutionContext.createMatrixObject(res);\n+ ec.setVariable(String.valueOf(_outputID), mout);\n+\n+ return new FederatedResponse(FederatedResponse.ResponseType.SUCCESS,\n+ new int[] {res.getNumRows(), res.getNumColumns()});\n+ }\n+ }\n+\n+ private static class GetVector extends FederatedUDF {\n+\n+ private static final long serialVersionUID = -1003061862215703768L;\n+ private final boolean _marginRow;\n+\n+ public GetVector(long varID, boolean marginRow) {\n+ super(new long[] {varID});\n+ _marginRow = marginRow;\n+ }\n+\n+ @Override\n+ public FederatedResponse execute(ExecutionContext ec, Data... data) {\n+ MatrixBlock mb = ((MatrixObject) data[0]).acquireReadAndRelease();\n+\n+ BinaryOperator plus = InstructionUtils.parseBinaryOperator(\"+\");\n+ BinaryOperator greater = InstructionUtils.parseBinaryOperator(\">\");\n+ int len = _marginRow ? mb.getNumColumns() : mb.getNumRows();\n+ MatrixBlock tmp1 = _marginRow ? mb.slice(0, mb.getNumRows() - 1, 0, 0, new MatrixBlock()) : mb\n+ .slice(0, 0, 0, mb.getNumColumns() - 1, new MatrixBlock());\n+ for(int i = 1; i < len; i++) {\n+ MatrixBlock tmp2 = _marginRow ? mb.slice(0, mb.getNumRows() - 1, i, i, new MatrixBlock()) : mb\n+ .slice(i, i, 0, mb.getNumColumns() - 1, new MatrixBlock());\n+ tmp1 = tmp1.binaryOperationsInPlace(plus, tmp2);\n+ }\n+ tmp1 = tmp1.binaryOperationsInPlace(greater, new MatrixBlock(tmp1.getNumRows(), tmp1.getNumColumns(), 0.0));\n+ return new FederatedResponse(ResponseType.SUCCESS, tmp1);\n+ }\n+ }\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRemoveEmptyTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedRemoveEmptyTest extends AutomatedTestBase {\n+ // private static final Log LOG = LogFactory.getLog(FederatedRightIndexTest.class.getName());\n+\n+ private final static String TEST_NAME = \"FederatedRemoveEmptyTest\";\n+\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + FederatedRemoveEmptyTest.class.getSimpleName() + \"/\";\n+\n+ private final static int blocksize = 1024;\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+\n+ @Parameterized.Parameter(2)\n+ public boolean rowPartitioned;\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {{20, 10, true}, {20, 12, false}});\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"S\"}));\n+ }\n+\n+ @Test\n+ public void testRemoveEmptyCP() {\n+ runAggregateOperationTest(ExecMode.SINGLE_NODE);\n+ }\n+\n+ private void runAggregateOperationTest(ExecMode execMode) {\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ ExecMode platformOld = rtplatform;\n+\n+ if(rtplatform == ExecMode.SPARK)\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ // write input matrices\n+ int r = rows;\n+ int c = cols / 4;\n+ if(rowPartitioned) {\n+ r = rows / 4;\n+ c = cols;\n+ }\n+\n+ double[][] X1 = getRandomMatrix(r, c, 1, 5, 1, 3);\n+ double[][] X2 = getRandomMatrix(r, c, 1, 5, 1, 7);\n+ double[][] X3 = getRandomMatrix(r, c, 1, 5, 1, 8);\n+ double[][] X4 = getRandomMatrix(r, c, 1, 5, 1, 9);\n+\n+ for(int k : new int[] {1, 2, 3}) {\n+ Arrays.fill(X3[k], 0);\n+ if(!rowPartitioned) {\n+ Arrays.fill(X1[k], 0);\n+ Arrays.fill(X2[k], 0);\n+ Arrays.fill(X4[k], 0);\n+ }\n+ }\n+\n+ MatrixCharacteristics mc = new MatrixCharacteristics(r, c, 
blocksize, r * c);\n+ writeInputMatrixWithMTD(\"X1\", X1, false, mc);\n+ writeInputMatrixWithMTD(\"X2\", X2, false, mc);\n+ writeInputMatrixWithMTD(\"X3\", X3, false, mc);\n+ writeInputMatrixWithMTD(\"X4\", X4, false, mc);\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ int port3 = getRandomAvailablePort();\n+ int port4 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1, 10);\n+ Thread t2 = startLocalFedWorkerThread(port2, 10);\n+ Thread t3 = startLocalFedWorkerThread(port3, 10);\n+ Thread t4 = startLocalFedWorkerThread(port4);\n+\n+ rtplatform = execMode;\n+ if(rtplatform == ExecMode.SPARK) {\n+ System.out.println(7);\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-args\", input(\"X1\"), input(\"X2\"), input(\"X3\"), input(\"X4\"),\n+ Boolean.toString(rowPartitioned).toUpperCase(), expected(\"S\")};\n+\n+ runTest(null);\n+\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + cols,\n+ \"rP=\" + Boolean.toString(rowPartitioned).toUpperCase(), \"out_S=\" + output(\"S\")};\n+\n+ runTest(null);\n+\n+ // compare via files\n+ compareResults(1e-9);\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X3\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X4\")));\n+\n+ TestUtils.shutdownThreads(t1, t2, t3, t4);\n+\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRemoveEmptyTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = removeEmpty(target=A, margin=\"cols\");\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRemoveEmptyTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if($5) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = removeEmpty(target=A, margin=\"cols\");\n+write(s, $6);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2732] Federated remove empty
closes #1104 |
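A minimal standalone sketch of the mask-building pattern used by the GetVector UDF in the diff above (class and method names are taken from that diff; the small example matrix and the expected mask are illustrative assumptions, not part of the commit):

import org.apache.sysds.runtime.instructions.InstructionUtils;
import org.apache.sysds.runtime.matrix.data.MatrixBlock;
import org.apache.sysds.runtime.matrix.operators.BinaryOperator;

public class ColumnMaskSketch {
    public static void main(String[] args) {
        // 3x4 block where column 2 stays all-zero
        MatrixBlock mb = new MatrixBlock(3, 4, 0.0);
        mb.setValue(0, 0, 1);
        mb.setValue(1, 1, 2);
        mb.setValue(2, 3, 3);

        BinaryOperator plus = InstructionUtils.parseBinaryOperator("+");
        BinaryOperator greater = InstructionUtils.parseBinaryOperator(">");

        // sum up row slices into one 1x4 row (the margin="cols" path of GetVector)
        MatrixBlock acc = mb.slice(0, 0, 0, mb.getNumColumns() - 1, new MatrixBlock());
        for (int i = 1; i < mb.getNumRows(); i++) {
            MatrixBlock row = mb.slice(i, i, 0, mb.getNumColumns() - 1, new MatrixBlock());
            acc = acc.binaryOperationsInPlace(plus, row);
        }
        // turn the sums into a 0/1 mask: 1 where a column has any non-zero entry
        acc = acc.binaryOperationsInPlace(greater,
            new MatrixBlock(acc.getNumRows(), acc.getNumColumns(), 0.0));
        // expected mask values under these assumptions: 1.0 1.0 0.0 1.0
        for (int j = 0; j < acc.getNumColumns(); j++)
            System.out.print(acc.getValue(0, j) + " ");
        System.out.println();
    }
}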
49,706 | 15.11.2020 20:24:44 | -3,600 | b29eb148c75cdc87b3613a9b603c7d1750cc2b7c | [MINOR] Federated Type Fixes
Add an additional federated matrix type that covers the single-location
federated matrix, to enable both optimized code paths for column-partitioned
and row-partitioned federated operations. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"diff": "@@ -372,7 +372,7 @@ public abstract class CacheableData<T extends CacheBlock> extends Data\n}\npublic boolean isFederated(FType type) {\n- return isFederated() && _fedMapping.getType() == type;\n+ return isFederated() && _fedMapping.getType().isType(type);\n}\n/**\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"diff": "@@ -48,7 +48,29 @@ public class FederationMap\npublic enum FType {\nROW, //row partitioned, groups of rows\nCOL, //column partitioned, groups of columns\n- OTHER,\n+ FULL, // Meaning both Row and Column indicating a single federated location and a full matrix\n+ OTHER;\n+\n+ public boolean isRowPartitioned() {\n+ return this == ROW || this == FULL;\n+ }\n+\n+ public boolean isColPartitioned() {\n+ return this == ROW || this == FULL;\n+ }\n+\n+ public boolean isType(FType t){\n+ switch (t) {\n+ case ROW:\n+ return isRowPartitioned();\n+ case COL:\n+ return isColPartitioned();\n+ case FULL:\n+ case OTHER:\n+ default:\n+ return t == this;\n+ }\n+ }\n}\nprivate long _ID = -1;\n@@ -278,6 +300,7 @@ public class FederationMap\n}\n//derive output type\nswitch(_type) {\n+ case FULL: _type = FType.FULL; break;\ncase ROW: _type = FType.COL; break;\ncase COL: _type = FType.ROW; break;\ndefault: _type = FType.OTHER;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateBinaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateBinaryFEDInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\n+import java.util.concurrent.Future;\n+\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n-import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\n-import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest.RequestType;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n-import java.util.concurrent.Future;\n-\npublic class AggregateBinaryFEDInstruction extends BinaryFEDInstruction {\n+ // private static final Log LOG = LogFactory.getLog(AggregateBinaryFEDInstruction.class.getName());\npublic AggregateBinaryFEDInstruction(Operator op, CPOperand in1,\nCPOperand in2, CPOperand out, String opcode, String istr) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"diff": "@@ -247,7 +247,8 @@ public class InitFEDInstruction extends FEDInstruction {\noutput.getDataCharacteristics().setNonZeros(-1);\noutput.getDataCharacteristics().setBlocksize(ConfigurationManager.getBlocksize());\noutput.setFedMapping(new FederationMap(id, fedMapping));\n- output.getFedMapping().setType(rowPartitioned ? FType.ROW : colPartitioned ? FType.COL : FType.OTHER);\n+ output.getFedMapping().setType(rowPartitioned && colPartitioned ? FType.FULL :\n+ rowPartitioned ? FType.ROW : colPartitioned ? FType.COL : FType.OTHER);\n}\npublic static void federateFrame(FrameObject output, List<Pair<FederatedRange, FederatedData>> workers) {\n@@ -260,6 +261,8 @@ public class InitFEDInstruction extends FEDInstruction {\n// and the future for the response\nList<Pair<FederatedData, Pair<Integer, Future<FederatedResponse>>>> idResponses = new ArrayList<>();\nlong id = FederationUtils.getNextFedDataID();\n+ boolean rowPartitioned = true;\n+ boolean colPartitioned = true;\nfor (Map.Entry<FederatedRange, FederatedData> entry : fedMapping.entrySet()) {\nFederatedRange range = entry.getKey();\nFederatedData value = entry.getValue();\n@@ -272,6 +275,8 @@ public class InitFEDInstruction extends FEDInstruction {\n}\nidResponses.add(new ImmutablePair<>(value, new ImmutablePair<>((int) beginDims[1], value.initFederatedData(id))));\n}\n+ rowPartitioned &= (range.getSize(1) == output.getNumColumns());\n+ colPartitioned &= (range.getSize(0) == output.getNumRows());\n}\n// columns are definitely in int range, because we throw an DMLRuntime Exception in `processInstruction` else\nTypes.ValueType[] schema = new Types.ValueType[(int) output.getNumColumns()];\n@@ -290,6 +295,8 @@ public class InitFEDInstruction extends FEDInstruction {\noutput.getDataCharacteristics().setNonZeros(output.getNumColumns() * output.getNumRows());\noutput.setSchema(schema);\noutput.setFedMapping(new FederationMap(id, fedMapping));\n+ output.getFedMapping().setType(rowPartitioned && colPartitioned ? FType.FULL :\n+ rowPartitioned ? FType.ROW : colPartitioned ? FType.COL : FType.OTHER);\n}\nprivate static void handleFedFrameResponse(Types.ValueType[] schema, FederatedData federatedData,\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedYL2SVMTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedYL2SVMTest.java",
"diff": "package org.apache.sysds.test.functions.federated.algorithms;\n-import org.junit.Test;\n-import org.junit.runner.RunWith;\n-import org.junit.runners.Parameterized;\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.api.DMLScript;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n-\n-import java.util.Arrays;\n-import java.util.Collection;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n@RunWith(value = Parameterized.class)\[email protected]\npublic class FederatedYL2SVMTest extends AutomatedTestBase {\n+ private static final Log LOG = LogFactory.getLog(FederatedYL2SVMTest.class.getName());\nprivate final static String TEST_DIR = \"functions/federated/\";\nprivate final static String TEST_NAME = \"FederatedYL2SVMTest\";\n@@ -114,7 +117,7 @@ public class FederatedYL2SVMTest extends AutomatedTestBase {\n// Run reference dml script with normal matrix\nfullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\nprogramArgs = new String[] {\"-args\", input(\"X1\"), input(\"X2\"), input(\"Y1\"), input(\"Y2\"), expected(\"Z\")};\n- runTest(true, false, null, -1);\n+ LOG.debug(runTest(null));\n// Run actual dml script with federated matrixz\nfullDMLScriptName = HOME + TEST_NAME + \".dml\";\n@@ -122,7 +125,7 @@ public class FederatedYL2SVMTest extends AutomatedTestBase {\n\"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")), \"rows=\" + rows, \"cols=\" + cols,\n\"in_Y1=\" + TestUtils.federatedAddress(port1, input(\"Y1\")),\n\"in_Y2=\" + TestUtils.federatedAddress(port2, input(\"Y2\")), \"out=\" + output(\"Z\")};\n- runTest(true, false, null, -1);\n+ LOG.debug(runTest(null));\n// compare via files\ncompareResults(1e-9);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedNegativeTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedNegativeTest.java",
"diff": "@@ -47,6 +47,7 @@ public class FederatedNegativeTest {\ntry{\nString[] args = {\"-w\", Integer.toString(port)};\nt = AutomatedTestBase.startLocalFedWorkerWithArgs(args);\n+ Thread.sleep(1000);\n} catch(Exception e){\nNegativeTest1();\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Federated Type Fixes
Add an additional federated matrix type that covers the single-location
federated matrix, to enable both optimized code paths for column-partitioned
and row-partitioned federated operations. |
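A standalone sketch (plain Java, no SystemDS dependency) of what the added FULL type means for the checks in the diff above: a single-location federated object satisfies both the ROW and the COL test, so either specialized code path may be taken. The isColPartitioned body below uses the corrected comparison that the SYSTEMDS-2730 commit further down introduces; everything else mirrors the enum from the diff.

public class FTypeSketch {
    // standalone copy of FederationMap.FType for illustration only
    enum FType {
        ROW, COL, FULL, OTHER;
        boolean isRowPartitioned() { return this == ROW || this == FULL; }
        boolean isColPartitioned() { return this == COL || this == FULL; } // corrected form, see SYSTEMDS-2730 below
        boolean isType(FType t) {
            switch (t) {
                case ROW: return isRowPartitioned();
                case COL: return isColPartitioned();
                default:  return t == this; // FULL and OTHER require an exact match
            }
        }
    }

    public static void main(String[] args) {
        FType singleWorker = FType.FULL;
        // a FULL mapping passes both partitioning checks
        System.out.println(singleWorker.isType(FType.ROW)); // true
        System.out.println(singleWorker.isType(FType.COL)); // true
        System.out.println(FType.ROW.isType(FType.COL));    // false
    }
}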
49,693 | 17.11.2020 00:41:38 | -3,600 | 981b1e5b6832a9680effb88900550025f2221acf | [MINOR] Fix R execution on Windows when using R version 4.0 | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"diff": "@@ -1041,10 +1041,6 @@ public abstract class AutomatedTestBase {\n// *** END HACK ***\n}\n- if(System.getProperty(\"os.name\").contains(\"Windows\")) {\n- cmd = cmd.replace('/', '\\\\');\n- executionFile = executionFile.replace('/', '\\\\');\n- }\nif(DEBUG) {\nif(!newWay) { // not sure why have this condition\nTestUtils.printRScript(executionFile);\n@@ -1075,6 +1071,16 @@ public abstract class AutomatedTestBase {\nString outputR;\nString errorString;\ntry {\n+ // if R < 4.0 on Windows is used, the file separator needs to be Windows style\n+ if(System.getProperty(\"os.name\").contains(\"Windows\")) {\n+ Process r_ver_cmd = Runtime.getRuntime().exec(\"RScript --version\");\n+ String r_ver = IOUtils.toString(r_ver_cmd.getErrorStream());\n+ if(!r_ver.contains(\"4.0\")) {\n+ cmd = cmd.replace('/', '\\\\');\n+ executionFile = executionFile.replace('/', '\\\\');\n+ }\n+ }\n+\nlong t0 = System.nanoTime();\nif(LOG.isInfoEnabled()) {\nLOG.info(\"starting R script\");\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix R execution on Windows when using R version 4.0 |
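A minimal standalone sketch of the detection idea behind the fix above, assuming (as the test-harness change does) that Rscript prints its version to stderr and that R is on the PATH; the script path below is a made-up placeholder.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RPathSeparatorSketch {
    public static void main(String[] args) throws Exception {
        Process p = Runtime.getRuntime().exec("Rscript --version");
        StringBuilder ver = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getErrorStream()))) {
            for (String line; (line = r.readLine()) != null; )
                ver.append(line);
        }
        String cmd = "Rscript ./src/test/scripts/some_test.R"; // hypothetical script path
        // R < 4.0 on Windows needs Windows-style separators; R 4.0 accepts '/'
        if (System.getProperty("os.name").contains("Windows") && !ver.toString().contains("4.0"))
            cmd = cmd.replace('/', '\\');
        System.out.println(cmd);
    }
}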
49,693 | 17.11.2020 00:53:41 | -3,600 | a82c21a25ff8c746b73af1a90b696cadd67d0aed | [MINOR] Cuda code template classes package rename cpp->cuda | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNode.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNode.java",
"diff": "@@ -237,13 +237,13 @@ public abstract class CNode\nswitch (api) {\ncase CUDA:\nif(caller instanceof CNodeCell)\n- return new org.apache.sysds.hops.codegen.cplan.cpp.CellWise();\n+ return new org.apache.sysds.hops.codegen.cplan.cuda.CellWise();\nelse if (caller instanceof CNodeUnary)\n- return new org.apache.sysds.hops.codegen.cplan.cpp.Unary();\n+ return new org.apache.sysds.hops.codegen.cplan.cuda.Unary();\nelse if (caller instanceof CNodeBinary)\n- return new org.apache.sysds.hops.codegen.cplan.cpp.Binary();\n+ return new org.apache.sysds.hops.codegen.cplan.cuda.Binary();\nelse if (caller instanceof CNodeTernary)\n- return new org.apache.sysds.hops.codegen.cplan.cpp.Ternary();\n+ return new org.apache.sysds.hops.codegen.cplan.cuda.Ternary();\nelse\nreturn null;\ncase JAVA:\n"
},
{
"change_type": "RENAME",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cpp/Binary.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cuda/Binary.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.hops.codegen.cplan.cpp;\n+package org.apache.sysds.hops.codegen.cplan.cuda;\nimport org.apache.sysds.hops.codegen.cplan.CNodeBinary;\nimport org.apache.sysds.hops.codegen.cplan.CNodeTernary;\n"
},
{
"change_type": "RENAME",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cpp/CellWise.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cuda/CellWise.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.hops.codegen.cplan.cpp;\n+package org.apache.sysds.hops.codegen.cplan.cuda;\nimport java.io.FileInputStream;\nimport java.io.IOException;\n"
},
{
"change_type": "RENAME",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cpp/Ternary.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cuda/Ternary.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.hops.codegen.cplan.cpp;\n+package org.apache.sysds.hops.codegen.cplan.cuda;\nimport org.apache.sysds.hops.codegen.cplan.CNodeBinary;\nimport org.apache.sysds.hops.codegen.cplan.CNodeTernary;\n"
},
{
"change_type": "RENAME",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cpp/Unary.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/cuda/Unary.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.hops.codegen.cplan.cpp;\n+package org.apache.sysds.hops.codegen.cplan.cuda;\nimport org.apache.commons.lang.StringUtils;\nimport org.apache.sysds.hops.codegen.cplan.CNodeBinary;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Cuda code template classes package rename cpp->cuda |
49,722 | 17.11.2020 21:21:55 | -3,600 | 2a8cb78827daed00fe016f6af22ab24f154be40c | Modified fed removeEmpty
This commit changes the federated removeEmpty command to, among other
things, improve the performance of the split builtin function.
Closes | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/split.dml",
"new_path": "scripts/builtin/split.dml",
"diff": "@@ -53,12 +53,13 @@ m_split = function(Matrix[Double] X, Matrix[Double] Y, Double f=0.7, Boolean con\n}\n# sampled train/test splits\nelse {\n+ # create random select vector according to f and then\n+ # extract tuples via permutation (selection) matrix multiply\n+ # or directly via removeEmpty by selection vector\nI = rand(rows=nrow(X), cols=1, seed=seed) <= f;\n- P1 = removeEmpty(target=diag(I), margin=\"rows\", select=I);\n- P2 = removeEmpty(target=diag(I==0), margin=\"rows\", select=I==0);\n- Xtrain = P1 %*% X;\n- Ytrain = P1 %*% Y;\n- Xtest = P2 %*% X;\n- Ytest = P2 %*% Y;\n+ Xtrain = removeEmpty(target=X, margin=\"rows\", select=I);\n+ Ytrain = removeEmpty(target=Y, margin=\"rows\", select=I);\n+ Xtest = removeEmpty(target=X, margin=\"rows\", select=(I==0));\n+ Ytest = removeEmpty(target=Y, margin=\"rows\", select=(I==0));\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"diff": "@@ -43,8 +43,7 @@ import org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.util.CommonThreadPool;\nimport org.apache.sysds.runtime.util.IndexRange;\n-public class FederationMap\n-{\n+public class FederationMap {\npublic enum FType {\nROW, // row partitioned, groups of rows\nCOL, // column partitioned, groups of columns\n@@ -56,7 +55,7 @@ public class FederationMap\n}\npublic boolean isColPartitioned() {\n- return this == ROW || this == FULL;\n+ return this == COL || this == FULL;\n}\npublic boolean isType(FType t) {\n@@ -133,13 +132,11 @@ public class FederationMap\n}\n/**\n- * Creates separate slices of an input data object according\n- * to the index ranges of federated data. Theses slices are then\n- * wrapped in separate federated requests for broadcasting.\n+ * Creates separate slices of an input data object according to the index ranges of federated data. Theses slices\n+ * are then wrapped in separate federated requests for broadcasting.\n*\n* @param data input data object (matrix, tensor, frame)\n- * @param transposed false: slice according to federated data,\n- * true: slice according to transposed federated data\n+ * @param transposed false: slice according to federated data, true: slice according to transposed federated data\n* @return array of federated requests corresponding to federated data\n*/\npublic FederatedRequest[] broadcastSliced(CacheableData<?> data, boolean transposed) {\n@@ -151,17 +148,19 @@ public class FederationMap\nint[][] ix = new int[_fedMap.size()][];\nint pos = 0;\nfor(Entry<FederatedRange, FederatedData> e : _fedMap.entrySet()) {\n- int rl = transposed ? 0 : e.getKey().getBeginDimsInt()[0];\n- int ru = transposed ? cb.getNumRows()-1 : e.getKey().getEndDimsInt()[0]-1;\n- int cl = transposed ? e.getKey().getBeginDimsInt()[0] : 0;\n- int cu = transposed ? e.getKey().getEndDimsInt()[0]-1 : cb.getNumColumns()-1;\n+ int rl, ru, cl, cu;\n+ // TODO Handle different cases than ROW aligned Matrices.\n+ rl = transposed ? 0 : e.getKey().getBeginDimsInt()[0];\n+ ru = transposed ? cb.getNumRows() - 1 : e.getKey().getEndDimsInt()[0] - 1;\n+ cl = transposed ? e.getKey().getBeginDimsInt()[0] : 0;\n+ cu = transposed ? e.getKey().getEndDimsInt()[0] - 1 : cb.getNumColumns() - 1;\nix[pos++] = new int[] {rl, ru, cl, cu};\n}\n// multi-threaded block slicing and federation request creation\nFederatedRequest[] ret = new FederatedRequest[ix.length];\n- Arrays.parallelSetAll(ret, i ->\n- new FederatedRequest(RequestType.PUT_VAR, id,\n+ Arrays.parallelSetAll(ret,\n+ i -> new FederatedRequest(RequestType.PUT_VAR, id,\ncb.slice(ix[i][0], ix[i][1], ix[i][2], ix[i][3], new MatrixBlock())));\nreturn ret;\n}\n@@ -171,8 +170,7 @@ public class FederationMap\n// at the same federated site (which allows for purely federated operation)\nboolean ret = true;\nfor(Entry<FederatedRange, FederatedData> e : _fedMap.entrySet()) {\n- FederatedRange range = !transposed ? e.getKey() :\n- new FederatedRange(e.getKey()).transpose();\n+ FederatedRange range = !transposed ? e.getKey() : new FederatedRange(e.getKey()).transpose();\nFederatedData dat2 = that._fedMap.get(range);\nret &= e.getValue().equalAddress(dat2);\n}\n@@ -192,14 +190,14 @@ public class FederationMap\n}\n@SuppressWarnings(\"unchecked\")\n- public Future<FederatedResponse>[] execute(long tid, boolean wait, FederatedRequest[] frSlices, FederatedRequest... fr) {\n+ public Future<FederatedResponse>[] execute(long tid, boolean wait, FederatedRequest[] frSlices,\n+ FederatedRequest... 
fr) {\n// executes step1[] - step 2 - ... step4 (only first step federated-data-specific)\nsetThreadID(tid, frSlices, fr);\nList<Future<FederatedResponse>> ret = new ArrayList<>();\nint pos = 0;\nfor(Entry<FederatedRange, FederatedData> e : _fedMap.entrySet())\n- ret.add(e.getValue().executeFederatedOperation(\n- (frSlices!=null) ? addAll(frSlices[pos++], fr) : fr));\n+ ret.add(e.getValue().executeFederatedOperation((frSlices != null) ? addAll(frSlices[pos++], fr) : fr));\n// prepare results (future federated responses), with optional wait to ensure the\n// order of requests without data dependencies (e.g., cleanup RPCs)\n@@ -215,8 +213,7 @@ public class FederationMap\nList<Pair<FederatedRange, Future<FederatedResponse>>> readResponses = new ArrayList<>();\nFederatedRequest request = new FederatedRequest(RequestType.GET_VAR, _ID);\nfor(Map.Entry<FederatedRange, FederatedData> e : _fedMap.entrySet())\n- readResponses.add(new ImmutablePair<>(e.getKey(),\n- e.getValue().executeFederatedOperation(request)));\n+ readResponses.add(new ImmutablePair<>(e.getKey(), e.getValue().executeFederatedOperation(request)));\nreturn readResponses;\n}\n@@ -240,7 +237,8 @@ public class FederationMap\nprivate static FederatedRequest[] addAll(FederatedRequest a, FederatedRequest[] b) {\nFederatedRequest[] ret = new FederatedRequest[b.length + 1];\n- ret[0] = a; System.arraycopy(b, 0, ret, 1, b.length);\n+ ret[0] = a;\n+ System.arraycopy(b, 0, ret, 1, b.length);\nreturn ret;\n}\n@@ -300,19 +298,23 @@ public class FederationMap\n}\n// derive output type\nswitch(_type) {\n- case FULL: _type = FType.FULL; break;\n- case ROW: _type = FType.COL; break;\n- case COL: _type = FType.ROW; break;\n- default: _type = FType.OTHER;\n+ case FULL:\n+ _type = FType.FULL;\n+ break;\n+ case ROW:\n+ _type = FType.COL;\n+ break;\n+ case COL:\n+ _type = FType.ROW;\n+ break;\n+ default:\n+ _type = FType.OTHER;\n}\nreturn this;\n}\n-\npublic long getMaxIndexInRange(int dim) {\n- return _fedMap.keySet().stream()\n- .mapToLong(range -> range.getEndDims()[dim]).max()\n- .orElse(-1L);\n+ return _fedMap.keySet().stream().mapToLong(range -> range.getEndDims()[dim]).max().orElse(-1L);\n}\n/**\n@@ -360,10 +362,10 @@ public class FederationMap\nwhile(iter.hasNext()) {\nEntry<FederatedRange, FederatedData> e = iter.next();\nFederatedRange range = e.getKey();\n- long rs = range.getBeginDims()[0], re = range.getEndDims()[0],\n- cs = range.getBeginDims()[1], ce = range.getEndDims()[1];\n- boolean overlap = ((ixrange.colStart <= ce) && (ixrange.colEnd >= cs)\n- && (ixrange.rowStart <= re) && (ixrange.rowEnd >= rs));\n+ long rs = range.getBeginDims()[0], re = range.getEndDims()[0], cs = range.getBeginDims()[1],\n+ ce = range.getEndDims()[1];\n+ boolean overlap = ((ixrange.colStart <= ce) && (ixrange.colEnd >= cs) && (ixrange.rowStart <= re) &&\n+ (ixrange.rowEnd >= rs));\nif(!overlap)\niter.remove();\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ParameterizedBuiltinFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ParameterizedBuiltinFEDInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\n-import java.util.AbstractMap;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\n@@ -136,9 +135,8 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\nout.getDataCharacteristics().set(mo.getDataCharacteristics());\nout.setFedMapping(mo.getFedMapping().copyWithNewID(fr1.getID()));\n}\n- else if(opcode.equals(\"rmempty\")) {\n+ else if(opcode.equals(\"rmempty\"))\nrmempty(ec);\n- }\nelse if(opcode.equalsIgnoreCase(\"transformdecode\"))\ntransformDecode(ec);\nelse if(opcode.equalsIgnoreCase(\"transformapply\"))\n@@ -149,79 +147,26 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\n}\nprivate void rmempty(ExecutionContext ec) {\n+ String margin = params.get(\"margin\");\n+ if( !(margin.equals(\"rows\") || margin.equals(\"cols\")) )\n+ throw new DMLRuntimeException(\"Unspupported margin identifier '\"+margin+\"'.\");\n+\nMatrixObject mo = (MatrixObject) getTarget(ec);\n+ MatrixObject select = params.containsKey(\"select\") ? ec.getMatrixObject(params.get(\"select\")) : null;\nMatrixObject out = ec.getMatrixObject(output);\n- Map<FederatedRange, int[]> dcs;\n- if((instString.contains(\"margin=rows\") && mo.isFederated(FederationMap.FType.ROW)) ||\n- (instString.contains(\"margin=cols\") && mo.isFederated(FederationMap.FType.COL))) {\n- FederatedRequest fr1 = FederationUtils.callInstruction(instString,\n- output,\n- new CPOperand[] {getTargetOperand()},\n- new long[] {mo.getFedMapping().getID()});\n- mo.getFedMapping().execute(getTID(), true, fr1);\n- out.setFedMapping(mo.getFedMapping().copyWithNewID(fr1.getID()));\n- // new ranges\n- dcs = new HashMap<>();\n- out.getFedMapping().forEachParallel((range, data) -> {\n- try {\n- FederatedResponse response = data\n- .executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n- new GetDataCharacteristics(data.getVarID())))\n- .get();\n+ boolean marginRow = params.get(\"margin\").equals(\"rows\");\n+ boolean k = ((marginRow && mo.getFedMapping().getType().isColPartitioned()) ||\n+ (!marginRow && mo.getFedMapping().getType().isRowPartitioned()));\n- if(!response.isSuccessful())\n- response.throwExceptionFromResponse();\n- int[] subRangeCharacteristics = (int[]) response.getData()[0];\n- synchronized(dcs) {\n- dcs.put(range, subRangeCharacteristics);\n- }\n- }\n- catch(Exception e) {\n- throw new DMLRuntimeException(e);\n- }\n- return null;\n- });\n- }\n- else {\n- Map.Entry<FederationMap, Map<FederatedRange, int[]>> entry = rmemptyC(ec, mo);\n- out.setFedMapping(entry.getKey());\n- dcs = entry.getValue();\n- }\n- out.getDataCharacteristics().set(mo.getDataCharacteristics());\n- for(int i = 0; i < mo.getFedMapping().getFederatedRanges().length; i++) {\n- int[] newRange = dcs.get(out.getFedMapping().getFederatedRanges()[i]);\n-\n- out.getFedMapping().getFederatedRanges()[i].setBeginDim(0,\n- (out.getFedMapping().getFederatedRanges()[i].getBeginDims()[0] == 0 ||\n- i == 0) ? 0 : out.getFedMapping().getFederatedRanges()[i - 1].getEndDims()[0]);\n-\n- out.getFedMapping().getFederatedRanges()[i].setEndDim(0,\n- out.getFedMapping().getFederatedRanges()[i].getBeginDims()[0] + newRange[0]);\n-\n- out.getFedMapping().getFederatedRanges()[i].setBeginDim(1,\n- (out.getFedMapping().getFederatedRanges()[i].getBeginDims()[1] == 0 ||\n- i == 0) ? 
0 : out.getFedMapping().getFederatedRanges()[i - 1].getEndDims()[1]);\n-\n- out.getFedMapping().getFederatedRanges()[i].setEndDim(1,\n- out.getFedMapping().getFederatedRanges()[i].getBeginDims()[1] + newRange[1]);\n- }\n-\n- out.getDataCharacteristics().set(out.getFedMapping().getMaxIndexInRange(0),\n- out.getFedMapping().getMaxIndexInRange(1),\n- (int) mo.getBlocksize());\n- }\n-\n- private Map.Entry<FederationMap, Map<FederatedRange, int[]>> rmemptyC(ExecutionContext ec, MatrixObject mo) {\n- boolean marginRow = instString.contains(\"margin=rows\");\n-\n- // find empty in ranges\n+ MatrixBlock s = new MatrixBlock();\n+ if(select == null && k) {\nList<MatrixBlock> colSums = new ArrayList<>();\nmo.getFedMapping().forEachParallel((range, data) -> {\ntry {\nFederatedResponse response = data\n.executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n- new GetVector(data.getVarID(), marginRow)))\n+ new GetVector(data.getVarID(), margin.equals(\"rows\"))))\n.get();\nif(!response.isSuccessful())\n@@ -236,37 +181,75 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\n}\nreturn null;\n});\n-\n// find empty in matrix\nBinaryOperator plus = InstructionUtils.parseBinaryOperator(\"+\");\nBinaryOperator greater = InstructionUtils.parseBinaryOperator(\">\");\n- MatrixBlock tmp1 = colSums.get(0);\n+ s = colSums.get(0);\nfor(int i = 1; i < colSums.size(); i++)\n- tmp1 = tmp1.binaryOperationsInPlace(plus, colSums.get(i));\n- tmp1 = tmp1.binaryOperationsInPlace(greater, new MatrixBlock(tmp1.getNumRows(), tmp1.getNumColumns(), 0.0));\n+ s = s.binaryOperationsInPlace(plus, colSums.get(i));\n+ s = s.binaryOperationsInPlace(greater, new MatrixBlock(s.getNumRows(), s.getNumColumns(), 0.0));\n+ select = ExecutionContext.createMatrixObject(s);\n- // remove empty from matrix\n- Map<FederatedRange, int[]> dcs = new HashMap<>();\nlong varID = FederationUtils.getNextFedDataID();\n- MatrixBlock finalTmp = new MatrixBlock(tmp1);\n- FederationMap resMapping;\n- if(tmp1.sum() == (marginRow ? 
tmp1.getNumColumns() : tmp1.getNumRows())) {\n- resMapping = mo.getFedMapping();\n+ ec.setVariable(String.valueOf(varID), select);\n+ params.put(\"select\", String.valueOf(varID));\n+ // construct new string\n+ String[] oldString = InstructionUtils.getInstructionParts(instString);\n+ String[] newString = new String[oldString.length+1];\n+ newString[2] = \"select=\"+varID;\n+ System.arraycopy(oldString, 0, newString, 0,2);\n+ System.arraycopy(oldString,2, newString, 3, newString.length-3);\n+ instString = instString.replace(InstructionUtils.concatOperands(oldString), InstructionUtils.concatOperands(newString));\n+ }\n+\n+ if (select == null) {\n+ FederatedRequest fr1 = FederationUtils.callInstruction(instString, output,\n+ new CPOperand[] {getTargetOperand()},\n+ new long[] {mo.getFedMapping().getID()});\n+ mo.getFedMapping().execute(getTID(), true, fr1);\n+ out.setFedMapping(mo.getFedMapping().copyWithNewID(fr1.getID()));\n}\n- else {\n- resMapping = mo.getFedMapping().mapParallel(varID, (range, data) -> {\n+ else if (!k) {\n+ //construct commands: broadcast , fed rmempty, clean broadcast\n+ FederatedRequest[] fr1 = mo.getFedMapping().broadcastSliced(select, !marginRow);\n+ FederatedRequest fr2 = FederationUtils.callInstruction(instString,\n+ output,\n+ new CPOperand[] {getTargetOperand(), new CPOperand(params.get(\"select\"), ValueType.FP64, DataType.MATRIX)},\n+ new long[] {mo.getFedMapping().getID(), fr1[0].getID()});\n+ FederatedRequest fr3 = mo.getFedMapping().cleanup(getTID(), fr1[0].getID());\n+\n+ //execute federated operations and set output\n+ mo.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n+ out.setFedMapping(mo.getFedMapping().copyWithNewID(fr2.getID()));\n+ } else {\n+ //construct commands: broadcast , fed rmempty, clean broadcast\n+ FederatedRequest fr1 = mo.getFedMapping().broadcast(select);\n+ FederatedRequest fr2 = FederationUtils.callInstruction(instString,\n+ output,\n+ new CPOperand[] {getTargetOperand(), new CPOperand(params.get(\"select\"), ValueType.FP64, DataType.MATRIX)},\n+ new long[] {mo.getFedMapping().getID(), fr1.getID()});\n+ FederatedRequest fr3 = mo.getFedMapping().cleanup(getTID(), fr1.getID());\n+\n+ //execute federated operations and set output\n+ mo.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n+ out.setFedMapping(mo.getFedMapping().copyWithNewID(fr2.getID()));\n+ }\n+\n+ // new ranges\n+ Map<FederatedRange, int[]> dcs = new HashMap<>();\n+ Map<FederatedRange, int[]> finalDcs1 = dcs;\n+ out.getFedMapping().forEachParallel((range, data) -> {\ntry {\nFederatedResponse response = data\n.executeFederatedOperation(new FederatedRequest(FederatedRequest.RequestType.EXEC_UDF, -1,\n- new ParameterizedBuiltinFEDInstruction.RemoveEmpty(data.getVarID(), varID, finalTmp,\n- params.containsKey(\"select\") ? 
ec.getMatrixInput(params.get(\"select\")) : null,\n- Boolean.parseBoolean(params.get(\"empty.return\").toLowerCase()), marginRow)))\n+ new GetDataCharacteristics(data.getVarID())))\n.get();\n+\nif(!response.isSuccessful())\nresponse.throwExceptionFromResponse();\nint[] subRangeCharacteristics = (int[]) response.getData()[0];\n- synchronized(dcs) {\n- dcs.put(range, subRangeCharacteristics);\n+ synchronized(finalDcs1) {\n+ finalDcs1.put(range, subRangeCharacteristics);\n}\n}\ncatch(Exception e) {\n@@ -274,8 +257,28 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\n}\nreturn null;\n});\n+ dcs = finalDcs1;\n+ out.getDataCharacteristics().set(mo.getDataCharacteristics());\n+ for(int i = 0; i < mo.getFedMapping().getFederatedRanges().length; i++) {\n+ int[] newRange = dcs.get(out.getFedMapping().getFederatedRanges()[i]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setBeginDim(0,\n+ (out.getFedMapping().getFederatedRanges()[i].getBeginDims()[0] == 0 ||\n+ i == 0) ? 0 : out.getFedMapping().getFederatedRanges()[i - 1].getEndDims()[0]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setEndDim(0,\n+ out.getFedMapping().getFederatedRanges()[i].getBeginDims()[0] + newRange[0]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setBeginDim(1,\n+ (out.getFedMapping().getFederatedRanges()[i].getBeginDims()[1] == 0 ||\n+ i == 0) ? 0 : out.getFedMapping().getFederatedRanges()[i - 1].getEndDims()[1]);\n+\n+ out.getFedMapping().getFederatedRanges()[i].setEndDim(1,\n+ out.getFedMapping().getFederatedRanges()[i].getBeginDims()[1] + newRange[1]);\n}\n- return new AbstractMap.SimpleEntry<>(resMapping, dcs);\n+\n+ out.getDataCharacteristics().set(out.getFedMapping().getMaxIndexInRange(0),\n+ out.getFedMapping().getMaxIndexInRange(1), (int) mo.getBlocksize());\n}\nprivate void transformDecode(ExecutionContext ec) {\n@@ -506,52 +509,9 @@ public class ParameterizedBuiltinFEDInstruction extends ComputationFEDInstructio\n@Override\npublic FederatedResponse execute(ExecutionContext ec, Data... data) {\nMatrixBlock mb = ((MatrixObject) data[0]).acquireReadAndRelease();\n- return new FederatedResponse(ResponseType.SUCCESS, new int[] {mb.getNumRows(), mb.getNumColumns()});\n- }\n- }\n-\n- private static class RemoveEmpty extends FederatedUDF {\n-\n- private static final long serialVersionUID = 12341521331L;\n- private final MatrixBlock _vector;\n- private final long _outputID;\n- private MatrixBlock _select;\n- private boolean _emptyReturn;\n- private final boolean _marginRow;\n-\n- public RemoveEmpty(long varID, long outputID, MatrixBlock vector, MatrixBlock select, boolean emptyReturn,\n- boolean marginRow) {\n- super(new long[] {varID});\n- _vector = vector;\n- _outputID = outputID;\n- _select = select;\n- _emptyReturn = emptyReturn;\n- _marginRow = marginRow;\n- }\n-\n- @Override\n- public FederatedResponse execute(ExecutionContext ec, Data... 
data) {\n- MatrixBlock mb = ((MatrixObject) data[0]).acquireReadAndRelease();\n-\n- BinaryOperator plus = InstructionUtils.parseBinaryOperator(\"+\");\n- BinaryOperator minus = InstructionUtils.parseBinaryOperator(\"-\");\n-\n- mb = mb.binaryOperationsInPlace(plus, new MatrixBlock(mb.getNumRows(), mb.getNumColumns(), 1.0));\n- for(int i = 0; i < mb.getNumRows(); i++)\n- for(int j = 0; j < mb.getNumColumns(); j++)\n- if(_marginRow)\n- mb.setValue(i, j, _vector.getValue(i, 0) * mb.getValue(i, j));\n- else\n- mb.setValue(i, j, _vector.getValue(0, j) * mb.getValue(i, j));\n-\n- MatrixBlock res = mb.removeEmptyOperations(new MatrixBlock(), _marginRow, _emptyReturn, _select);\n- res = res.binaryOperationsInPlace(minus, new MatrixBlock(res.getNumRows(), res.getNumColumns(), 1.0));\n-\n- MatrixObject mout = ExecutionContext.createMatrixObject(res);\n- ec.setVariable(String.valueOf(_outputID), mout);\n-\n- return new FederatedResponse(FederatedResponse.ResponseType.SUCCESS,\n- new int[] {res.getNumRows(), res.getNumColumns()});\n+ int r = mb.getDenseBlockValues() != null ? mb.getNumRows() : 0;\n+ int c = mb.getDenseBlockValues() != null ? mb.getNumColumns(): 0;\n+ return new FederatedResponse(ResponseType.SUCCESS, new int[] {r, c});\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRemoveEmptyTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRemoveEmptyTest.java",
"diff": "@@ -55,7 +55,10 @@ public class FederatedRemoveEmptyTest extends AutomatedTestBase {\[email protected]\npublic static Collection<Object[]> data() {\n- return Arrays.asList(new Object[][] {{20, 10, true}, {20, 12, false}});\n+ return Arrays.asList(new Object[][] {\n+ {20, 12, true},\n+ {20, 12, false}\n+ });\n}\n@Override\n@@ -94,11 +97,6 @@ public class FederatedRemoveEmptyTest extends AutomatedTestBase {\nfor(int k : new int[] {1, 2, 3}) {\nArrays.fill(X3[k], 0);\n- if(!rowPartitioned) {\n- Arrays.fill(X1[k], 0);\n- Arrays.fill(X2[k], 0);\n- Arrays.fill(X4[k], 0);\n- }\n}\nMatrixCharacteristics mc = new MatrixCharacteristics(r, c, blocksize, r * c);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedSplitTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedSplitTest.java",
"diff": "@@ -54,7 +54,9 @@ public class FederatedSplitTest extends AutomatedTestBase {\[email protected]\npublic static Collection<Object[]> data() {\n- return Arrays.asList(new Object[][] {{152, 12, \"TRUE\"}, {132, 11, \"FALSE\"}});\n+ return Arrays.asList(new Object[][] {\n+ // {152, 12, \"TRUE\"},\n+ {132, 11, \"FALSE\"}});\n}\n@Override\n@@ -125,9 +127,7 @@ public class FederatedSplitTest extends AutomatedTestBase {\nif(cont.equals(\"TRUE\"))\nAssert.assertTrue(heavyHittersContainsString(\"fed_rightIndex\"));\nelse {\n- Assert.assertTrue(heavyHittersContainsString(\"fed_ba+*\"));\n- // TODO add federated diag operator.\n- // Assert.assertTrue(heavyHittersContainsString(\"fed_rdiag\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_rmempty\"));\n}\nTestUtils.shutdownThreads(t1, t2);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedSplitTest.dml",
"new_path": "src/test/scripts/functions/federated/FederatedSplitTest.dml",
"diff": "@@ -24,7 +24,6 @@ X = federated(addresses=list($X1, $X2),\nY = federated(addresses=list($Y1, $Y2),\nranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\n-\n[Xtr, Xte, Ytr, Yte] = split(X=X, Y=Y, f=0.95, cont=$Cont, seed = 13)\nwrite(Xte, $Z)\nprint(toString(Xte))\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2730] Modified fed removeEmpty
This commit changes the federated removeEmpty command to, among other
things, improve the performance of the split builtin function.
Closes #1109 |
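The split.dml change in this commit replaces the permutation-matrix multiply with removeEmpty plus a select vector. A hedged sketch of the same pattern at the MatrixBlock level (method names come from the diffs in this commit; the mapping of the boolean arguments of removeEmptyOperations to margin and empty.return is an assumption, not verified here):

import org.apache.sysds.runtime.matrix.data.MatrixBlock;

public class RemoveEmptySelectSketch {
    public static void main(String[] args) {
        // 4x2 input; the select vector keeps rows 1 and 3 (e.g. a sampled train split)
        MatrixBlock X = new MatrixBlock(4, 2, 0.0);
        for (int i = 0; i < 4; i++) {
            X.setValue(i, 0, i + 1);
            X.setValue(i, 1, (i + 1) * 10);
        }
        MatrixBlock select = new MatrixBlock(4, 1, 0.0);
        select.setValue(1, 0, 1);
        select.setValue(3, 0, 1);

        // rows=true ~ margin="rows"; emptyReturn=false ~ empty.return=FALSE (assumed)
        MatrixBlock Xtrain = X.removeEmptyOperations(new MatrixBlock(), true, false, select);
        System.out.println(Xtrain.getNumRows() + " x " + Xtrain.getNumColumns()); // expected 2 x 2
    }
}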
49,706 | 18.11.2020 13:12:53 | -3,600 | b0f584bcdaabcf918d1673389f1eb30b016a8cc8 | [MINOR] l2svm move verbose print | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/l2svm.dml",
"new_path": "scripts/builtin/l2svm.dml",
"diff": "@@ -120,8 +120,6 @@ m_l2svm = function(Matrix[Double] X, Matrix[Double] Y, Boolean intercept = FALSE\nh = dd + sum(Xd * sv * Xd)\nstep_sz = step_sz - g/h\ncontinue1 = (g*g/h >= epsilon)\n- if(verbose)\n- print(\"Inner Iter:\" + toString(iiter))\niiter = iiter + 1\n}\n@@ -137,7 +135,7 @@ m_l2svm = function(Matrix[Double] X, Matrix[Double] Y, Boolean intercept = FALSE\nif(verbose) {\ncolstr = ifelse(columnId!=-1, \", Col:\"+columnId + \" ,\", \" ,\")\n- print(\"Iter:\" + toString(iter) + colstr + \" Obj:\" + obj)\n+ print(\"Iter:\" + toString(iter) + \"InnerIter:\" + toString(iiter) +\" --- \"+ colstr + \" Obj:\" + obj)\n}\ntmp = sum(s * g_old)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] l2svm move verbose print |
49,706 | 18.11.2020 17:11:23 | -3,600 | ddc5dec6154d15469ead992fa594950f440a9590 | Federated Ternary aggregate
This commit adds Federated Ternary Aggregation and changes the
federated cleanup to run in separate threads. The latter improves
system performance by synchronizing and cleaning up workers in parallel
while computation continues.
Also, a minor syntax error is corrected in the l2svm builtin.
closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"diff": "@@ -231,8 +231,14 @@ public class FederationMap {\nList<Future<FederatedResponse>> tmp = new ArrayList<>();\nfor(FederatedData fd : _fedMap.values())\ntmp.add(fd.executeFederatedOperation(request));\n- // wait to avoid interference w/ following requests\n- FederationUtils.waitFor(tmp);\n+ // This cleaning is allowed to go in a separate thread, and finish on its own.\n+ // The benefit is that the program is able to continue working on other things.\n+ // The downside is that at the end of execution these threads can have executed\n+ // for some extra time that can in particular be noticeable for shorter federated jobs.\n+\n+ // To force the cleanup use waitFor -> drastically increasing execution time if\n+ // communication is slow to federated sites.\n+ // FederationUtils.waitFor(tmp);\n}\nprivate static FederatedRequest[] addAll(FederatedRequest a, FederatedRequest[] b) {\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateTernaryFEDInstruction.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.instructions.fed;\n+\n+import java.util.concurrent.Future;\n+\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n+import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest.RequestType;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\n+import org.apache.sysds.runtime.instructions.cp.AggregateTernaryCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.instructions.cp.DoubleObject;\n+import org.apache.sysds.runtime.instructions.cp.ScalarObject;\n+\n+public class AggregateTernaryFEDInstruction extends FEDInstruction {\n+ // private static final Log LOG = LogFactory.getLog(AggregateTernaryFEDInstruction.class.getName());\n+\n+ public final AggregateTernaryCPInstruction _ins;\n+\n+ protected AggregateTernaryFEDInstruction(AggregateTernaryCPInstruction ins) {\n+ super(FEDType.AggregateTernary, ins.getOperator(), ins.getOpcode(), ins.getInstructionString());\n+ _ins = ins;\n+ }\n+\n+ public static AggregateTernaryFEDInstruction parseInstruction(AggregateTernaryCPInstruction ins) {\n+ return new AggregateTernaryFEDInstruction(ins);\n+ }\n+\n+ @Override\n+ public void processInstruction(ExecutionContext ec) {\n+ MatrixObject mo1 = ec.getMatrixObject(_ins.input1);\n+ MatrixObject mo2 = ec.getMatrixObject(_ins.input2);\n+ MatrixObject mo3 = _ins.input3.isLiteral() ? 
null : ec.getMatrixObject(_ins.input3);\n+\n+ if(mo1.isFederated() && mo2.isFederated() && mo1.getFedMapping().isAligned(mo2.getFedMapping(), false) &&\n+ mo3 == null) {\n+ FederatedRequest fr1 = mo1.getFedMapping().broadcast(ec.getScalarInput(_ins.input3));\n+ FederatedRequest fr2 = FederationUtils.callInstruction(_ins.getInstructionString(),\n+ _ins.getOutput(),\n+ new CPOperand[] {_ins.input1, _ins.input2, _ins.input3},\n+ new long[] {mo1.getFedMapping().getID(), mo2.getFedMapping().getID(), fr1.getID()});\n+ FederatedRequest fr3 = new FederatedRequest(RequestType.GET_VAR, fr2.getID());\n+ FederatedRequest fr4 = mo2.getFedMapping().cleanup(getTID(), fr1.getID(), fr2.getID());\n+ Future<FederatedResponse>[] tmp = mo1.getFedMapping().execute(getTID(), fr1, fr2, fr3, fr4);\n+\n+ if(_ins.output.getDataType().isScalar()) {\n+ double sum = 0;\n+ for(Future<FederatedResponse> fr : tmp)\n+ try {\n+ sum += ((ScalarObject) fr.get().getData()[0]).getDoubleValue();\n+ }\n+ catch(Exception e) {\n+ throw new DMLRuntimeException(\"Federated Get data failed with exception on TernaryFedInstruction\", e);\n+ }\n+\n+ ec.setScalarOutput(_ins.output.getName(), new DoubleObject(sum));\n+ }\n+ else {\n+ throw new DMLRuntimeException(\"Not Implemented Federated Ternary Variation\");\n+ }\n+ }\n+ else {\n+ if(mo3 == null)\n+ throw new DMLRuntimeException(\"Federated AggregateTernary not supported with the \"\n+ + \"following federated objects: \" + mo1.isFederated() + \":\" + mo1.getFedMapping() + \" \"\n+ + mo2.isFederated() + \":\" + mo2.getFedMapping());\n+ else\n+ throw new DMLRuntimeException(\"Federated AggregateTernary not supported with the \"\n+ + \"following federated objects: \" + mo1.isFederated() + \":\" + mo1.getFedMapping() + \" \"\n+ + mo2.isFederated() + \":\" + mo2.getFedMapping() + mo3.isFederated() + \":\" + mo3.getFedMapping());\n+ }\n+\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"diff": "@@ -29,6 +29,7 @@ public abstract class FEDInstruction extends Instruction {\npublic enum FEDType {\nAggregateBinary,\nAggregateUnary,\n+ AggregateTernary,\nAppend,\nBinary,\nInit,\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -28,6 +28,7 @@ import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\nimport org.apache.sysds.runtime.instructions.Instruction;\nimport org.apache.sysds.runtime.instructions.cp.AggregateBinaryCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.AggregateTernaryCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.AggregateUnaryCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.BinaryCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.Data;\n@@ -158,7 +159,6 @@ public class FEDInstructionUtils {\n}\nelse if(inst instanceof VariableCPInstruction ){\nVariableCPInstruction ins = (VariableCPInstruction) inst;\n-\nif(ins.getVariableOpcode() == VariableOperationCode.Write\n&& ins.getInput1().isMatrix()\n&& ins.getInput3().getName().contains(\"federated\")){\n@@ -175,6 +175,13 @@ public class FEDInstructionUtils {\nfedinst = VariableFEDInstruction.parseInstruction(ins);\n}\n}\n+ else if(inst instanceof AggregateTernaryCPInstruction){\n+ AggregateTernaryCPInstruction ins = (AggregateTernaryCPInstruction) inst;\n+ if(ins.input1.isMatrix() && ec.getCacheableData(ins.input1).isFederated() && ins.input2.isMatrix() &&\n+ ec.getCacheableData(ins.input2).isFederated()) {\n+ fedinst = AggregateTernaryFEDInstruction.parseInstruction(ins);\n+ }\n+ }\n//set thread id for federated context management\nif( fedinst != null ) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"diff": "@@ -66,7 +66,8 @@ public class InitFEDInstruction extends FEDInstruction {\nprivate CPOperand _type, _addresses, _ranges, _output;\n- public InitFEDInstruction(CPOperand type, CPOperand addresses, CPOperand ranges, CPOperand out, String opcode, String instr) {\n+ public InitFEDInstruction(CPOperand type, CPOperand addresses, CPOperand ranges, CPOperand out, String opcode,\n+ String instr) {\nsuper(FEDType.Init, opcode, instr);\n_type = type;\n_addresses = addresses;\n@@ -157,8 +158,8 @@ public class InitFEDInstruction extends FEDInstruction {\n}\nelse if(type.equalsIgnoreCase(FED_FRAME_IDENTIFIER)) {\nif(usedDims[1] > Integer.MAX_VALUE)\n- throw new DMLRuntimeException(\"federated Frame can not have more than max int columns, because the \" +\n- \"schema can only be max int length\");\n+ throw new DMLRuntimeException(\"federated Frame can not have more than max int columns, because the \"\n+ + \"schema can only be max int length\");\nFrameObject output = ec.getFrameObject(_output);\noutput.getDataCharacteristics().setRows(usedDims[0]).setCols(usedDims[1]);\nfederateFrame(output, feds);\n@@ -203,8 +204,8 @@ public class InitFEDInstruction extends FEDInstruction {\nreturn new String[] {host, String.valueOf(port), filePath};\n}\ncatch(MalformedURLException e) {\n- throw new IllegalArgumentException(\"federated address `\" + input\n- + \"` does not fit required URL pattern of \\\"host:port/directory\\\"\", e);\n+ throw new IllegalArgumentException(\n+ \"federated address `\" + input + \"` does not fit required URL pattern of \\\"host:port/directory\\\"\", e);\n}\n}\n@@ -233,7 +234,8 @@ public class InitFEDInstruction extends FEDInstruction {\ncolPartitioned &= (range.getSize(0) == output.getNumRows());\n}\ntry {\n- int timeout = ConfigurationManager.getDMLConfig().getIntValue(DMLConfig.DEFAULT_FEDERATED_INITIALIZATION_TIMEOUT);\n+ int timeout = ConfigurationManager.getDMLConfig()\n+ .getIntValue(DMLConfig.DEFAULT_FEDERATED_INITIALIZATION_TIMEOUT);\nLOG.debug(\"Federated Initialization with timeout: \" + timeout);\nfor(Pair<FederatedData, Future<FederatedResponse>> idResponse : idResponses)\nidResponse.getRight().get(timeout, TimeUnit.SECONDS); // wait for initialization\n@@ -247,8 +249,12 @@ public class InitFEDInstruction extends FEDInstruction {\noutput.getDataCharacteristics().setNonZeros(-1);\noutput.getDataCharacteristics().setBlocksize(ConfigurationManager.getBlocksize());\noutput.setFedMapping(new FederationMap(id, fedMapping));\n- output.getFedMapping().setType(rowPartitioned && colPartitioned ? FType.FULL :\n- rowPartitioned ? FType.ROW : colPartitioned ? FType.COL : FType.OTHER);\n+\n+ output.getFedMapping().setType(rowPartitioned &&\n+ colPartitioned ? FType.FULL : rowPartitioned ? FType.ROW : colPartitioned ? 
FType.COL : FType.OTHER);\n+\n+ if(LOG.isDebugEnabled())\n+ LOG.debug(\"Fed map Inited:\" + output.getFedMapping());\n}\npublic static void federateFrame(FrameObject output, List<Pair<FederatedRange, FederatedData>> workers) {\n@@ -273,7 +279,8 @@ public class InitFEDInstruction extends FEDInstruction {\nfor(int i = 0; i < dims.length; i++) {\ndims[i] = endDims[i] - beginDims[i];\n}\n- idResponses.add(new ImmutablePair<>(value, new ImmutablePair<>((int) beginDims[1], value.initFederatedData(id))));\n+ idResponses.add(\n+ new ImmutablePair<>(value, new ImmutablePair<>((int) beginDims[1], value.initFederatedData(id))));\n}\nrowPartitioned &= (range.getSize(1) == output.getNumColumns());\ncolPartitioned &= (range.getSize(0) == output.getNumRows());\n@@ -295,8 +302,11 @@ public class InitFEDInstruction extends FEDInstruction {\noutput.getDataCharacteristics().setNonZeros(output.getNumColumns() * output.getNumRows());\noutput.setSchema(schema);\noutput.setFedMapping(new FederationMap(id, fedMapping));\n- output.getFedMapping().setType(rowPartitioned && colPartitioned ? FType.FULL :\n- rowPartitioned ? FType.ROW : colPartitioned ? FType.COL : FType.OTHER);\n+ output.getFedMapping().setType(rowPartitioned &&\n+ colPartitioned ? FType.FULL : rowPartitioned ? FType.ROW : colPartitioned ? FType.COL : FType.OTHER);\n+\n+ if(LOG.isDebugEnabled())\n+ LOG.debug(\"Fed map Inited: \" + output.getFedMapping());\n}\nprivate static void handleFedFrameResponse(Types.ValueType[] schema, FederatedData federatedData,\n@@ -315,7 +325,8 @@ public class InitFEDInstruction extends FEDInstruction {\nelse\nschema[schema_index] = vType;\n}\n- } catch (Exception e){\n+ }\n+ catch(Exception e) {\nthrow new DMLRuntimeException(\"Exception in frame response from federated worker.\", e);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/VariableFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/VariableFEDInstruction.java",
"diff": "@@ -61,18 +61,15 @@ public class VariableFEDInstruction extends FEDInstruction implements LineageTra\npublic void processInstruction(ExecutionContext ec) {\nVariableOperationCode opcode = _in.getVariableOpcode();\nswitch(opcode) {\n-\ncase Write:\nprocessWriteInstruction(ec);\nbreak;\n-\ncase CastAsMatrixVariable:\nprocessCastAsMatrixVariableInstruction(ec);\nbreak;\ncase CastAsFrameVariable:\nprocessCastAsFrameVariableInstruction(ec);\nbreak;\n-\ndefault:\nthrow new DMLRuntimeException(\"Unsupported Opcode for federated Variable Instruction : \" + opcode);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/io/ReaderWriterFederated.java",
"new_path": "src/main/java/org/apache/sysds/runtime/io/ReaderWriterFederated.java",
"diff": "*/\npackage org.apache.sysds.runtime.io;\n-import static org.junit.Assert.fail;\n-\nimport java.io.BufferedWriter;\nimport java.io.DataOutputStream;\nimport java.io.IOException;\n@@ -117,7 +115,7 @@ public class ReaderWriterFederated {\nIOUtilFunctions.deleteCrcFilesFromLocalFileSystem(fs, path);\n}\ncatch(IOException e) {\n- fail(\"Unable to write test federated matrix to (\" + file + \"): \" + e.getMessage());\n+ throw new DMLRuntimeException(\"Unable to write test federated matrix to (\" + file + \"): \" + e.getMessage());\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2736] Federated Ternary aggregate
This commit adds Federated Ternary Aggregation, and changes the
federated cleanup to run in separate threads. The latter improves
system performance by synchronizing and cleaning up workers in parallel
with ongoing computation.
Also, a minor syntax error is corrected in builtin l2svm.
closes #1110 |
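The federated aggregate-ternary instruction above gathers one partial scalar per worker future and sums them on the coordinator. Below is a minimal, self-contained sketch of that gather-and-sum pattern using plain java.util.concurrent; all class and variable names are illustrative and are not part of the SystemDS API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartialSumSketch {
    public static void main(String[] args) throws Exception {
        // two row partitions of X and Y, standing in for two federated workers
        double[][] x = {{1, 2, 3}, {4, 5, 6}};
        double[][] y = {{1, 1, 1}, {2, 2, 2}};
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Double>> partials = new ArrayList<>();
        for (int w = 0; w < 2; w++) {
            final int i = w;
            // each "worker" evaluates sum(X * Y) on its own slice only
            partials.add(pool.submit(() -> {
                double s = 0;
                for (int j = 0; j < x[i].length; j++)
                    s += x[i][j] * y[i][j];
                return s;
            }));
        }
        double sum = 0;
        for (Future<Double> f : partials)
            sum += f.get(); // blocks per worker, analogous to fr.get() above
        pool.shutdown();
        System.out.println("sum(X*Y) = " + sum); // 6 + 30 = 36
    }
}
```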
49,722 | 20.11.2020 14:55:34 | -3,600 | f9e60f2cbd9bbced6f5cccf9d4ad960f9b18ea70 | [MINOR] LM pipeline test with 4 workers
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedLmPipeline.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedLmPipeline.java",
"diff": "package org.apache.sysds.test.functions.federated.algorithms;\n-import org.junit.Test;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\n@@ -29,13 +28,14 @@ import org.apache.sysds.runtime.transform.encode.EncoderRecode;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n-\n+import org.junit.Test;\[email protected]\npublic class FederatedLmPipeline extends AutomatedTestBase {\nprivate final static String TEST_DIR = \"functions/federated/\";\n- private final static String TEST_NAME = \"FederatedLmPipeline\";\n+ private final static String TEST_NAME1 = \"FederatedLmPipeline\";\n+ private final static String TEST_NAME2 = \"FederatedLmPipeline4Workers\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + FederatedLmPipeline.class.getSimpleName() + \"/\";\npublic int rows = 10000;\n@@ -44,20 +44,31 @@ public class FederatedLmPipeline extends AutomatedTestBase {\n@Override\npublic void setUp() {\nTestUtils.clearAssertionInformation();\n- addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"Z\"}));\n+ addTestConfiguration(TEST_NAME1, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME1, new String[] {\"Z\"}));\n+ addTestConfiguration(TEST_NAME2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME2, new String[] {\"Z\"}));\n}\n@Test\npublic void federatedLmPipelineContinguous() {\n- federatedLmPipeline(Types.ExecMode.SINGLE_NODE, true);\n+ federatedLmPipeline(Types.ExecMode.SINGLE_NODE, true, TEST_NAME1);\n+ }\n+\n+ @Test\n+ public void federatedLmPipelineContinguous4Workers() {\n+ federatedLmPipeline(Types.ExecMode.SINGLE_NODE, true, TEST_NAME2);\n}\n@Test\npublic void federatedLmPipelineSampled() {\n- federatedLmPipeline(Types.ExecMode.SINGLE_NODE, false);\n+ federatedLmPipeline(Types.ExecMode.SINGLE_NODE, false, TEST_NAME1);\n+ }\n+\n+ @Test\n+ public void federatedLmPipelineSampled4Workers() {\n+ federatedLmPipeline(Types.ExecMode.SINGLE_NODE, false, TEST_NAME2);\n}\n- public void federatedLmPipeline(ExecMode execMode, boolean contSplits) {\n+ public void federatedLmPipeline(ExecMode execMode, boolean contSplits, String TEST_NAME) {\nExecMode oldExec = setExecMode(execMode);\nboolean oldSort = EncoderRecode.SORT_RECODE_MAP;\nEncoderRecode.SORT_RECODE_MAP = true;\n@@ -76,37 +87,48 @@ public class FederatedLmPipeline extends AutomatedTestBase {\nX = rc.append(X, new MatrixBlock(), true);\n// We have two matrices handled by a single federated worker\n- int halfRows = rows / 2;\n- writeInputMatrixWithMTD(\"X1\", X.slice(0, halfRows-1), false);\n- writeInputMatrixWithMTD(\"X2\", X.slice(halfRows, rows-1), false);\n+ int quarterRows = TEST_NAME.equals(TEST_NAME2) ? rows / 4 : rows / 2;\n+ int[] k = TEST_NAME.equals(TEST_NAME2) ? 
new int[] {quarterRows - 1, quarterRows, 2 * quarterRows - 1,\n+ 2 * quarterRows, 3 * quarterRows - 1, 3 * quarterRows,\n+ rows - 1} : new int[] {quarterRows - 1, quarterRows, rows - 1, 0, 0, 0, 0};\n+ writeInputMatrixWithMTD(\"X1\", X.slice(0, k[0]), false);\n+ writeInputMatrixWithMTD(\"X2\", X.slice(k[1], k[2]), false);\n+ writeInputMatrixWithMTD(\"X3\", X.slice(k[3], k[4]), false);\n+ writeInputMatrixWithMTD(\"X4\", X.slice(k[5], k[6]), false);\nwriteInputMatrixWithMTD(\"Y\", y, false);\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\nint port1 = getRandomAvailablePort();\nint port2 = getRandomAvailablePort();\n+ int port3 = getRandomAvailablePort();\n+ int port4 = getRandomAvailablePort();\nThread t1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n- Thread t2 = startLocalFedWorkerThread(port2);\n+ Thread t2 = startLocalFedWorkerThread(port2, FED_WORKER_WAIT_S);\n+ Thread t3 = startLocalFedWorkerThread(port3, FED_WORKER_WAIT_S);\n+ Thread t4 = startLocalFedWorkerThread(port4);\nTestConfiguration config = availableTestConfigurations.get(TEST_NAME);\nloadTestConfiguration(config);\n// Run reference dml script with normal matrix\nfullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n- programArgs = new String[] {\"-args\", input(\"X1\"), input(\"X2\"), input(\"Y\"),\n+ programArgs = new String[] {\"-args\", input(\"X1\"), input(\"X2\"), input(\"X3\"), input(\"X4\"), input(\"Y\"),\nString.valueOf(contSplits).toUpperCase(), expected(\"Z\")};\nrunTest(true, false, null, -1);\n// Run actual dml script with federated matrix\nfullDMLScriptName = HOME + TEST_NAME + \".dml\";\nprogramArgs = new String[] {\"-nvargs\", \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n- \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")), \"rows=\" + rows, \"cols=\" + (cols+1),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + (cols + 1),\n\"in_Y=\" + input(\"Y\"), \"cont=\" + String.valueOf(contSplits).toUpperCase(), \"out=\" + output(\"Z\")};\nrunTest(true, false, null, -1);\n// compare via files\ncompareResults(1e-2);\n- TestUtils.shutdownThreads(t1, t2);\n+ TestUtils.shutdownThreads(t1, t2, t3, t4);\n}\nfinally {\nresetExecMode(oldExec);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedLmPipeline4Workers.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+Fin = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)))\n+\n+y = read($in_Y)\n+\n+# one hot encoding categorical, other passthrough\n+Fall = as.frame(Fin)\n+jspec = \"{ ids:true, dummycode:[1] }\"\n+[X,M] = transformencode(target=Fall, spec=jspec)\n+print(\"ncol(X) = \"+ncol(X))\n+\n+# clipping out of value ranges\n+colSD = colSds(X)\n+colMean = (colMeans(X))\n+upperBound = colMean + 1.5 * colSD\n+lowerBound = colMean - 1.5 * colSD\n+outFilter = (X < lowerBound) | (X > upperBound)\n+X = X - outFilter*X + outFilter*colMeans(X);\n+\n+# normalization\n+X = scale(X=X, center=TRUE, scale=TRUE);\n+\n+# split training and testing\n+[Xtrain , Xtest, ytrain, ytest] = split(X=X, Y=y, cont=$cont, seed=7)\n+\n+# train regression model\n+B = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=TRUE)\n+\n+# model evaluation on test split\n+yhat = lmpredict(X=Xtest, w=B, icpt=1);\n+y_residual = ytest - yhat;\n+\n+avg_res = sum(y_residual) / nrow(ytest);\n+ss_res = sum(y_residual^2);\n+ss_avg_res = ss_res - nrow(ytest) * avg_res^2;\n+R2 = 1 - ss_res / (sum(y^2) - nrow(ytest) * (sum(y)/nrow(ytest))^2);\n+print(\"\\nAccuracy:\" +\n+ \"\\n--sum(ytest) = \" + sum(ytest) +\n+ \"\\n--sum(yhat) = \" + sum(yhat) +\n+ \"\\n--AVG_RES_Y: \" + avg_res +\n+ \"\\n--SS_AVG_RES_Y: \" + ss_avg_res +\n+ \"\\n--R2: \" + R2 );\n+\n+# write trained model and meta data\n+write(B, $out)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedLmPipeline4WorkersReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+Fin = rbind(read($1), read($2), read($3), read($4))\n+\n+y = read($5)\n+\n+# one hot encoding categorical, other passthrough\n+Fall = as.frame(Fin)\n+jspec = \"{ ids:true, dummycode:[1] }\"\n+[X,M] = transformencode(target=Fall, spec=jspec)\n+print(\"ncol(X) = \"+ncol(X))\n+\n+# clipping out of value ranges\n+colSD = colSds(X)\n+colMean = (colMeans(X))\n+upperBound = colMean + 1.5 * colSD\n+lowerBound = colMean - 1.5 * colSD\n+outFilter = (X < lowerBound) | (X > upperBound)\n+X = X - outFilter*X + outFilter*colMeans(X);\n+\n+# normalization\n+X = scale(X=X, center=TRUE, scale=TRUE);\n+\n+# split training and testing\n+[Xtrain , Xtest, ytrain, ytest] = split(X=X, Y=y, cont=$6, seed=7)\n+\n+# train regression model\n+B = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=TRUE)\n+\n+# model evaluation on test split\n+yhat = lmpredict(X=Xtest, w=B, icpt=1);\n+y_residual = ytest - yhat;\n+\n+avg_res = sum(y_residual) / nrow(ytest);\n+ss_res = sum(y_residual^2);\n+ss_avg_res = ss_res - nrow(ytest) * avg_res^2;\n+R2 = 1 - ss_res / (sum(y^2) - nrow(ytest) * (sum(y)/nrow(ytest))^2);\n+print(\"\\nAccuracy:\" +\n+ \"\\n--sum(ytest) = \" + sum(ytest) +\n+ \"\\n--sum(yhat) = \" + sum(yhat) +\n+ \"\\n--AVG_RES_Y: \" + avg_res +\n+ \"\\n--SS_AVG_RES_Y: \" + ss_avg_res +\n+ \"\\n--R2: \" + R2 );\n+\n+# write trained model and meta data\n+write(B, $7)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedLmPipelineReference.dml",
"new_path": "src/test/scripts/functions/federated/FederatedLmPipelineReference.dml",
"diff": "#-------------------------------------------------------------\nFin = rbind(read($1), read($2))\n-y = read($3)\n+\n+y = read($5)\n# one hot encoding categorical, other passthrough\nFall = as.frame(Fin)\n@@ -40,7 +41,7 @@ X = X - outFilter*X + outFilter*colMeans(X);\nX = scale(X=X, center=TRUE, scale=TRUE);\n# split training and testing\n-[Xtrain , Xtest, ytrain, ytest] = split(X=X, Y=y, cont=$4, seed=7)\n+[Xtrain , Xtest, ytrain, ytest] = split(X=X, Y=y, cont=$6, seed=7)\n# train regression model\nB = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=TRUE)\n@@ -61,4 +62,4 @@ print(\"\\nAccuracy:\" +\n\"\\n--R2: \" + R2 );\n# write trained model and meta data\n-write(B, $5)\n+write(B, $7)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] LM pipeline test with 4 workers
Closes #1111 |
49,689 | 12.11.2020 22:23:09 | -3,600 | 608073796965354fcdbbd545194bc5b96c1e53d6 | Adjust computeTime for CostNSize with ref counts
This patch improves the CostNsize lineage cache eviction policy
by adjusting the compute time of a cache entry with reference
counts (#hits, #misses). This patch also introduces a non-recursive
equals method for LineageItem. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -225,7 +225,7 @@ public class LineageCache\npublic static boolean probe(LineageItem key) {\n//TODO problematic as after probe the matrix might be kicked out of cache\nboolean p = _cache.containsKey(key); // in cache or in disk\n- if (!p && DMLScript.STATISTICS && LineageCacheEviction._removelist.contains(key))\n+ if (!p && DMLScript.STATISTICS && LineageCacheEviction._removelist.containsKey(key))\n// The sought entry was in cache but removed later\nLineageCacheStatistics.incrementDelHits();\nreturn p;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -40,7 +40,7 @@ public class LineageCacheConfig\n\"uamean\", \"max\", \"min\", \"ifelse\", \"-\", \"sqrt\", \">\", \"uak+\", \"<=\",\n\"^\", \"uamax\", \"uark+\", \"uacmean\", \"eigen\", \"ctableexpand\", \"replace\",\n\"^2\", \"uack+\", \"tak+*\", \"uacsqk+\", \"uark+\", \"n+\", \"uarimax\", \"qsort\",\n- \"qpick\", \"transformapply\"\n+ \"qpick\", \"transformapply\", \"uarmax\", \"n+\"\n//TODO: Reuse everything.\n};\nprivate static String[] REUSE_OPCODES = new String[] {};\n@@ -287,6 +287,10 @@ public class LineageCacheConfig\nreturn (WEIGHTS[1] > 0);\n}\n+ public static boolean isCostNsize() {\n+ return (WEIGHTS[0] > 0);\n+ }\n+\npublic static boolean isDagHeightBased() {\n// Check the DAGHEIGHT component of weights array.\nreturn (WEIGHTS[2] > 0);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEntry.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEntry.java",
"diff": "package org.apache.sysds.runtime.lineage;\n+import java.util.Map;\n+\nimport org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.instructions.cp.ScalarObject;\n@@ -138,6 +140,30 @@ public class LineageCacheEntry {\nrecomputeScore();\n}\n+ protected synchronized void computeScore(Map<LineageItem, Integer> removeList) {\n+ setTimestamp();\n+ if (removeList.containsKey(_key)) {\n+ //FIXME: increase computetime instead of score (that now leads to overflow).\n+ // updating computingtime seamlessly takes care of spilling\n+ //_computeTime = _computeTime * (1 + removeList.get(_key));\n+ score = score * (1 + removeList.get(_key));\n+ }\n+ if (_computeTime < 0)\n+ System.out.println(\"after recache: \"+_computeTime+\" miss count: \"+removeList.get(_key));\n+ }\n+\n+ protected synchronized void updateComputeTime() {\n+ if ((Long.MAX_VALUE - _computeTime) < _computeTime) {\n+ System.out.println(\"Overflow for: \"+_key.getOpcode());\n+ }\n+ //FIXME: increase computetime instead of score (that now leads to overflow).\n+ // updating computingtime seamlessly takes care of spilling\n+ //_computeTime = _computeTime * (1 + removeList.get(_key));\n+ //_computeTime += _computeTime;\n+ //recomputeScore();\n+ score *= 2;\n+ }\n+\nprotected synchronized long getTimestamp() {\nreturn _timestamp;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"diff": "package org.apache.sysds.runtime.lineage;\nimport java.io.IOException;\n-import java.util.HashSet;\n+import java.util.HashMap;\nimport java.util.Map;\n-import java.util.Set;\nimport java.util.TreeSet;\nimport org.apache.sysds.api.DMLScript;\n@@ -37,7 +36,7 @@ public class LineageCacheEviction\nprivate static long _cachesize = 0;\nprivate static long CACHE_LIMIT; //limit in bytes\nprivate static long _startTimestamp = 0;\n- protected static final Set<LineageItem> _removelist = new HashSet<>();\n+ protected static final Map<LineageItem, Integer> _removelist = new HashMap<>();\nprivate static String _outdir = null;\nprivate static TreeSet<LineageCacheEntry> weightedQueue = new TreeSet<>(LineageCacheConfig.LineageCacheComparator);\n@@ -71,28 +70,41 @@ public class LineageCacheEviction\n// Don't add the memory pinned entries in weighted queue.\n// The eviction queue should contain only entries that can\n// be removed or spilled to disk.\n- entry.setTimestamp();\n+ //entry.setTimestamp();\n+ entry.computeScore(_removelist);\n+ // Adjust score according to cache miss counts.\nweightedQueue.add(entry);\n}\n}\nprotected static void getEntry(LineageCacheEntry entry) {\n// Reset the timestamp to maintain the LRU component of the scoring function\n- if (!LineageCacheConfig.isTimeBased())\n- return;\n-\n+ if (LineageCacheConfig.isTimeBased()) {\nif (weightedQueue.remove(entry)) {\nentry.setTimestamp();\nweightedQueue.add(entry);\n}\n}\n+ // Increase computation time of the sought entry.\n+ if (LineageCacheConfig.isCostNsize()) {\n+ if (weightedQueue.remove(entry)) {\n+ entry.updateComputeTime();\n+ weightedQueue.add(entry);\n+ }\n+ }\n+ }\nprivate static void removeEntry(Map<LineageItem, LineageCacheEntry> cache, LineageCacheEntry e) {\nif (cache.remove(e._key) != null)\n_cachesize -= e.getSize();\n+ // Increase priority if same entry is removed multiple times\n+ if (_removelist.containsKey(e._key))\n+ _removelist.replace(e._key, _removelist.get(e._key)+1);\n+ else\n+ _removelist.put(e._key, 1);\n+\nif (DMLScript.STATISTICS) {\n- _removelist.add(e._key);\nLineageCacheStatistics.incrementMemDeletes();\n}\n// NOTE: The caller of this method maintains the eviction queue.\n@@ -211,6 +223,7 @@ public class LineageCacheEviction\n// Estimate time to write to FS + read from FS.\ndouble spilltime = getDiskSpillEstimate(e) * 1000; // in milliseconds\ndouble exectime = ((double) e._computeTime) / 1000000; // in milliseconds\n+ //FIXME: this comuteTime is not adjusted according to hit/miss counts\nif (LineageCache.DEBUG) {\nSystem.out.print(\"LI = \" + e._key.getOpcode());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItem.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItem.java",
"diff": "@@ -178,7 +178,7 @@ public class LineageItem {\nreturn false;\nresetVisitStatusNR();\n- boolean ret = equalsLI((LineageItem) o);\n+ boolean ret = equalsLINR((LineageItem) o);\nresetVisitStatusNR();\nreturn ret;\n}\n@@ -198,6 +198,33 @@ public class LineageItem {\nreturn ret;\n}\n+ private boolean equalsLINR(LineageItem that) {\n+ Stack<LineageItem> s1 = new Stack<>();\n+ Stack<LineageItem> s2 = new Stack<>();\n+ s1.push(this);\n+ s2.push(that);\n+ boolean ret = false;\n+ while (!s1.empty() && !s2.empty()) {\n+ LineageItem li1 = s1.pop();\n+ LineageItem li2 = s2.pop();\n+ if (li1.isVisited() || li1 == li2)\n+ return true;\n+\n+ ret = li1._opcode.equals(li2._opcode);\n+ ret &= li1._data.equals(li2._data);\n+ ret &= (li1.hashCode() == li2.hashCode());\n+ if (!ret) break;\n+ if (ret && li1._inputs != null && li1._inputs.length == li2._inputs.length)\n+ for (int i=0; i<li1._inputs.length; i++) {\n+ s1.push(li1.getInputs()[i]);\n+ s2.push(li2.getInputs()[i]);\n+ }\n+ li1.setVisited();\n+ }\n+\n+ return ret;\n+ }\n+\n@Override\npublic int hashCode() {\nif (_hash == 0) {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2739] Adjust computeTime for CostNSize with ref counts
This patch improves the CostNsize lineage cache eviction policy
by adjusting the compute time of a cache entry with reference
counts (#hits, #misses). This patch also introduces a non-recursive
equals method for LineageItem. |
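The non-recursive equalsLINR in the diff above replaces recursion with two explicit stacks so that deep lineage DAGs do not overflow the call stack. The following is a hedged, generic sketch of the same traversal idea; Node is a hypothetical stand-in for LineageItem, without its visit flags or hash caching.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Objects;

public class IterativeEquals {
    static final class Node {
        final String opcode;
        final Node[] inputs;
        Node(String opcode, Node... inputs) { this.opcode = opcode; this.inputs = inputs; }
    }

    // structural equality without recursion: pop one node from each stack,
    // compare locally, and push the children pairwise for later comparison
    static boolean structurallyEqual(Node a, Node b) {
        Deque<Node> s1 = new ArrayDeque<>(), s2 = new ArrayDeque<>();
        s1.push(a); s2.push(b);
        while (!s1.isEmpty() && !s2.isEmpty()) {
            Node n1 = s1.pop(), n2 = s2.pop();
            if (n1 == n2)
                continue; // identical shared sub-DAG, nothing left to compare here
            if (!Objects.equals(n1.opcode, n2.opcode) || n1.inputs.length != n2.inputs.length)
                return false;
            for (int i = 0; i < n1.inputs.length; i++) {
                s1.push(n1.inputs[i]);
                s2.push(n2.inputs[i]);
            }
        }
        return s1.isEmpty() && s2.isEmpty();
    }

    public static void main(String[] args) {
        Node leaf = new Node("rand");
        Node d1 = new Node("+", new Node("rand"), leaf);
        Node d2 = new Node("+", new Node("rand"), leaf);
        System.out.println(structurallyEqual(d1, d2));                  // true
        System.out.println(structurallyEqual(d1, new Node("-", leaf))); // false
    }
}
```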
49,689 | 23.11.2020 10:08:20 | -3,600 | 9bbe13731569c9735bbea8040cc39fc71e2584a8 | Add saved and missed compute time to Lineage stats
Example: LinCache Computetime (S/M): 108.305/7.504 sec.
That is, the lineage cache saved 108.3 seconds of execution, but
missed 7.5 seconds of compute time due to evictions. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -307,9 +307,13 @@ public class LineageCache\nelse if (data instanceof ScalarObject)\ncentry.setValue((ScalarObject)data, computetime);\n+ if (DMLScript.STATISTICS && LineageCacheEviction._removelist.containsKey(centry._key)) {\n+ // Add to missed compute time\n+ LineageCacheStatistics.incrementMissedComputeTime(centry._computeTime);\n+ }\n+\n//maintain order for eviction\nLineageCacheEviction.addEntry(centry);\n-\n}\n}\n}\n@@ -388,10 +392,13 @@ public class LineageCache\n// This method is called only when entry is present either in cache or in local FS.\nLineageCacheEntry e = _cache.get(key);\nif (e != null && e.getCacheStatus() != LineageCacheStatus.SPILLED) {\n+ if (DMLScript.STATISTICS) {\n+ // Increment hit count and saved computation time.\n+ LineageCacheStatistics.incrementMemHits();\n+ LineageCacheStatistics.incrementSavedComputeTime(e._computeTime);\n+ }\n// Maintain order for eviction\nLineageCacheEviction.getEntry(e);\n- if (DMLScript.STATISTICS)\n- LineageCacheStatistics.incrementMemHits();\nreturn e;\n}\nelse\n@@ -423,6 +430,10 @@ public class LineageCache\noe._nextEntry = e;\n}\n+ if (DMLScript.STATISTICS && LineageCacheEviction._removelist.containsKey(e._key))\n+ // Add to missed compute time\n+ LineageCacheStatistics.incrementMissedComputeTime(e._computeTime);\n+\n//maintain order for eviction\nLineageCacheEviction.addEntry(e);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"diff": "@@ -86,6 +86,7 @@ public class LineageCacheEviction\n}\n}\n// Increase computation time of the sought entry.\n+ // FIXME: avoid when called from partial reuse methods\nif (LineageCacheConfig.isCostNsize()) {\nif (weightedQueue.remove(entry)) {\nentry.updateComputeTime();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheStatistics.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheStatistics.java",
"diff": "@@ -38,6 +38,8 @@ public class LineageCacheStatistics {\nprivate static final LongAdder _numRewrites = new LongAdder();\nprivate static final LongAdder _ctimeFSRead = new LongAdder(); //in nano sec\nprivate static final LongAdder _ctimeFSWrite = new LongAdder(); //in nano sec\n+ private static final LongAdder _ctimeSaved = new LongAdder(); //in nano sec\n+ private static final LongAdder _ctimeMissed = new LongAdder(); //in nano sec\npublic static void reset() {\n_numHitsMem.reset();\n@@ -52,6 +54,8 @@ public class LineageCacheStatistics {\n_numRewrites.reset();\n_ctimeFSRead.reset();\n_ctimeFSWrite.reset();\n+ _ctimeSaved.reset();\n+ _ctimeMissed.reset();\n}\npublic static void incrementMemHits() {\n@@ -122,6 +126,18 @@ public class LineageCacheStatistics {\n_ctimeFSWrite.add(delta);\n}\n+ public static void incrementSavedComputeTime(long delta) {\n+ // Total time saved by reusing.\n+ // TODO: Handle overflow\n+ _ctimeSaved.add(delta);\n+ }\n+\n+ public static void incrementMissedComputeTime(long delta) {\n+ // Total time missed due to eviction.\n+ // TODO: Handle overflow\n+ _ctimeMissed.add(delta);\n+ }\n+\npublic static long getMultiLevelFnHits() {\nreturn _numHitsFunc.longValue();\n}\n@@ -166,11 +182,18 @@ public class LineageCacheStatistics {\nreturn sb.toString();\n}\n- public static String displayTime() {\n+ public static String displayFSTime() {\nStringBuilder sb = new StringBuilder();\nsb.append(String.format(\"%.3f\", ((double)_ctimeFSRead.longValue())/1000000000)); //in sec\nsb.append(\"/\");\nsb.append(String.format(\"%.3f\", ((double)_ctimeFSWrite.longValue())/1000000000)); //in sec\nreturn sb.toString();\n}\n+ public static String displayComputeTime() {\n+ StringBuilder sb = new StringBuilder();\n+ sb.append(String.format(\"%.3f\", ((double)_ctimeSaved.longValue())/1000000000)); //in sec\n+ sb.append(\"/\");\n+ sb.append(String.format(\"%.3f\", ((double)_ctimeMissed.longValue())/1000000000)); //in sec\n+ return sb.toString();\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/Statistics.java",
"new_path": "src/main/java/org/apache/sysds/utils/Statistics.java",
"diff": "@@ -984,7 +984,8 @@ public class Statistics\nsb.append(\"LinCache hits (Mem/FS/Del): \\t\" + LineageCacheStatistics.displayHits() + \".\\n\");\nsb.append(\"LinCache MultiLevel (Ins/SB/Fn):\" + LineageCacheStatistics.displayMultiLevelHits() + \".\\n\");\nsb.append(\"LinCache writes (Mem/FS/Del): \\t\" + LineageCacheStatistics.displayWtrites() + \".\\n\");\n- sb.append(\"LinCache FStimes (Rd/Wr): \\t\" + LineageCacheStatistics.displayTime() + \" sec.\\n\");\n+ sb.append(\"LinCache FStimes (Rd/Wr): \\t\" + LineageCacheStatistics.displayFSTime() + \" sec.\\n\");\n+ sb.append(\"LinCache Computetime (S/M): \\t\" + LineageCacheStatistics.displayComputeTime() + \" sec.\\n\");\nsb.append(\"LinCache Rewrites: \\t\\t\" + LineageCacheStatistics.displayRewrites() + \".\\n\");\n}\nif( ConfigurationManager.isCodegenEnabled() ) {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2740] Add saved and missed compute time to Lineage stats
Example: LinCache Computetime (S/M): 108.305/7.504 sec.
That is, the lineage cache saved 108.3 seconds of execution, but
missed 7.5 seconds of compute time due to evictions. |
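The statistics added above accumulate saved and missed compute time in LongAdder counters (nanoseconds) and render them as seconds. A minimal sketch of that counter-and-display pattern follows; the class and method names below are hypothetical and not the actual LineageCacheStatistics API.

```java
import java.util.concurrent.atomic.LongAdder;

public class ComputeTimeStats {
    // thread-safe accumulators, in nanoseconds, mirroring _ctimeSaved/_ctimeMissed
    private static final LongAdder savedNs  = new LongAdder();
    private static final LongAdder missedNs = new LongAdder();

    static void onReuseHit(long computeTimeNs)    { savedNs.add(computeTimeNs);  } // recomputation avoided
    static void onEvictedMiss(long computeTimeNs) { missedNs.add(computeTimeNs); } // had to recompute

    static String display() {
        return String.format("LinCache Computetime (S/M): \t%.3f/%.3f sec.",
            savedNs.sum() / 1e9, missedNs.sum() / 1e9);
    }

    public static void main(String[] args) {
        onReuseHit(108_305_000_000L);
        onEvictedMiss(7_504_000_000L);
        System.out.println(display()); // e.g. LinCache Computetime (S/M): 108.305/7.504 sec.
    }
}
```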
49,706 | 16.11.2020 14:43:15 | -3,600 | a05884ad2f042644bde0be23129ed1c5ca8246cb | Add federated read 1 worker test
This commit adds a test for the one federated worker case, since this was
not tested before. Also, a test case for Federated Y L2SVM is added for
a different number of workers. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"diff": "@@ -364,7 +364,8 @@ public abstract class CacheableData<T extends CacheBlock> extends Data\nif(_fedMapping == null && _metaData instanceof MetaDataFormat){\nMetaDataFormat mdf = (MetaDataFormat) _metaData;\nif(mdf.getFileFormat() == FileFormat.FEDERATED){\n- InitFEDInstruction.federateMatrix(this, ReaderWriterFederated.read(_hdfsFileName, mdf.getDataCharacteristics()));\n+ InitFEDInstruction.federateMatrix(\n+ this, ReaderWriterFederated.read(_hdfsFileName, mdf.getDataCharacteristics()));\nreturn true;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"diff": "package org.apache.sysds.runtime.controlprogram.federated;\nimport java.io.BufferedReader;\n+import java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.util.Arrays;\n@@ -84,7 +85,8 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nlog.debug(\"Received: \" + msg.getClass().getSimpleName());\n}\nif(!(msg instanceof FederatedRequest[]))\n- throw new DMLRuntimeException(\"FederatedWorkerHandler: Received object no instance of 'FederatedRequest[]'.\");\n+ throw new DMLRuntimeException(\n+ \"FederatedWorkerHandler: Received object no instance of 'FederatedRequest[]'.\");\nFederatedRequest[] requests = (FederatedRequest[]) msg;\nFederatedResponse response = null; // last response\n@@ -105,10 +107,9 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\n// select the response for the entire batch of requests\nif(!tmp.isSuccessful()) {\n- log.error(\"Command \" + request.getType() + \" failed: \"\n- + tmp.getErrorMessage() + \"full command: \\n\" + request.toString());\n- response = (response == null || response.isSuccessful())\n- ? tmp : response; //return first error\n+ log.error(\"Command \" + request.getType() + \" failed: \" + tmp.getErrorMessage() + \"full command: \\n\"\n+ + request.toString());\n+ response = (response == null || response.isSuccessful()) ? tmp : response; // return first error\n}\nelse if(request.getType() == RequestType.GET_VAR) {\nif(response != null && response.isSuccessful())\n@@ -150,17 +151,15 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nreturn execClear();\ndefault:\nString message = String.format(\"Method %s is not supported.\", method);\n- return new FederatedResponse(ResponseType.ERROR,\n- new FederatedWorkerHandlerException(message));\n+ return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(message));\n}\n}\ncatch(DMLPrivacyException | FederatedWorkerHandlerException ex) {\nreturn new FederatedResponse(ResponseType.ERROR, ex);\n}\ncatch(Exception ex) {\n- return new FederatedResponse(ResponseType.ERROR,\n- new FederatedWorkerHandlerException(\"Exception of type \"\n- + ex.getClass() + \" thrown when processing request\", ex));\n+ return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(\n+ \"Exception of type \" + ex.getClass() + \" thrown when processing request\", ex));\n}\n}\n@@ -183,18 +182,18 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\ncd = new FrameObject(filename);\nbreak;\ndefault:\n- // should NEVER happen (if we keep request codes in sync with actual behaviour)\n+ // should NEVER happen (if we keep request codes in sync with actual behavior)\nreturn new FederatedResponse(ResponseType.ERROR,\nnew FederatedWorkerHandlerException(\"Could not recognize datatype\"));\n}\n- // read metadata\nFileFormat fmt = null;\nboolean header = false;\n+ FileSystem fs = null;\ntry {\nString mtdname = DataExpression.getMTDFileName(filename);\nPath path = new Path(mtdname);\n- FileSystem fs = IOUtilFunctions.getFileSystem(mtdname); //no auto-close\n+ fs = IOUtilFunctions.getFileSystem(mtdname);\ntry(BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(path)))) {\nJSONObject mtd = JSONHelper.parse(br);\nif(mtd == null)\n@@ -213,11 +212,21 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\ncatch(Exception ex) {\nthrow new DMLRuntimeException(ex);\n}\n+ finally {\n+ if(fs != null)\n+ try {\n+ fs.close();\n+ }\n+ catch(IOException 
e) {\n+ return new FederatedResponse(ResponseType.ERROR, id);\n+ }\n+ }\n// put meta data object in symbol table, read on first operation\ncd.setMetaData(new MetaDataFormat(mc, fmt));\n// TODO send FileFormatProperties with request and use them for CSV, this is currently a workaround so reading\n// of CSV files works\n+ if(fmt == FileFormat.CSV)\ncd.setFileFormatProperties(new FileFormatPropertiesCSV(header, DataExpression.DEFAULT_DELIM_DELIMITER,\nDataExpression.DEFAULT_DELIM_SPARSE));\ncd.enableCleanup(false); // guard against deletion\n@@ -228,8 +237,7 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nframeObject.acquireRead();\nframeObject.refreshMetaData(); // get block schema\nframeObject.release();\n- return new FederatedResponse(ResponseType.SUCCESS,\n- new Object[] {id, frameObject.getSchema()});\n+ return new FederatedResponse(ResponseType.SUCCESS, new Object[] {id, frameObject.getSchema()});\n}\nreturn new FederatedResponse(ResponseType.SUCCESS, id);\n}\n@@ -239,8 +247,7 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nString varname = String.valueOf(request.getID());\nExecutionContext ec = _ecm.get(request.getTID());\nif(ec.containsVariable(varname)) {\n- return new FederatedResponse(ResponseType.ERROR,\n- \"Variable \"+request.getID()+\" already existing.\");\n+ return new FederatedResponse(ResponseType.ERROR, \"Variable \" + request.getID() + \" already existing.\");\n}\n// wrap transferred cache block into cacheable data\n@@ -252,7 +259,8 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nelse if(request.getParam(0) instanceof ListObject)\ndata = (ListObject) request.getParam(0);\nelse\n- throw new DMLRuntimeException(\"FederatedWorkerHandler: Unsupported object type, has to be of type CacheBlock or ScalarObject\");\n+ throw new DMLRuntimeException(\n+ \"FederatedWorkerHandler: Unsupported object type, has to be of type CacheBlock or ScalarObject\");\n// set variable and construct empty response\nec.setVariable(varname, data);\n@@ -280,8 +288,8 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\ncase SCALAR:\nreturn new FederatedResponse(ResponseType.SUCCESS, dataObject);\ndefault:\n- return new FederatedResponse(ResponseType.ERROR,\n- new FederatedWorkerHandlerException(\"Unsupported return datatype \" + dataObject.getDataType().name()));\n+ return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(\n+ \"Unsupported return datatype \" + dataObject.getDataType().name()));\n}\n}\n@@ -289,8 +297,7 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nExecutionContext ec = _ecm.get(request.getTID());\nBasicProgramBlock pb = new BasicProgramBlock(null);\npb.getInstructions().clear();\n- Instruction receivedInstruction = InstructionParser\n- .parseSingleInstruction((String)request.getParam(0));\n+ Instruction receivedInstruction = InstructionParser.parseSingleInstruction((String) request.getParam(0));\npb.getInstructions().add(receivedInstruction);\ntry {\npb.execute(ec); // execute single instruction\n@@ -308,10 +315,8 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\n// get function and input parameters\nFederatedUDF udf = (FederatedUDF) request.getParam(0);\n- Data[] inputs = Arrays.stream(udf.getInputIDs())\n- .mapToObj(id -> ec.getVariable(String.valueOf(id)))\n- .map(PrivacyMonitor::handlePrivacy)\n- .toArray(Data[]::new);\n+ Data[] inputs = 
Arrays.stream(udf.getInputIDs()).mapToObj(id -> ec.getVariable(String.valueOf(id)))\n+ .map(PrivacyMonitor::handlePrivacy).toArray(Data[]::new);\n// execute user-defined function\ntry {\n@@ -352,12 +357,8 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\npublic void operationComplete(ChannelFuture channelFuture) throws InterruptedException {\nif(!channelFuture.isSuccess()) {\nlog.error(\"Federated Worker Write failed\");\n- channelFuture\n- .channel()\n- .writeAndFlush(\n- new FederatedResponse(ResponseType.ERROR,\n- new FederatedWorkerHandlerException(\"Error while sending response.\")))\n- .channel().close().sync();\n+ channelFuture.channel().writeAndFlush(new FederatedResponse(ResponseType.ERROR,\n+ new FederatedWorkerHandlerException(\"Error while sending response.\"))).channel().close().sync();\n}\nelse {\nPrivacyMonitor.clearCheckedConstraints();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\n-import org.apache.commons.logging.Log;\n-import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.runtime.controlprogram.caching.CacheableData;\nimport org.apache.sysds.runtime.controlprogram.caching.FrameObject;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n@@ -54,7 +52,7 @@ import org.apache.sysds.runtime.instructions.spark.UnarySPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.WriteSPInstruction;\npublic class FEDInstructionUtils {\n- private static final Log LOG = LogFactory.getLog(FEDInstructionUtils.class.getName());\n+ // private static final Log LOG = LogFactory.getLog(FEDInstructionUtils.class.getName());\n// This is currently a rather simplistic to our solution of replacing instructions with their correct federated\n// counterpart, since we do not propagate the information that a matrix is federated, therefore we can not decide\n@@ -104,7 +102,6 @@ public class FEDInstructionUtils {\n&& ec.containsVariable(instruction.input1)) {\nMatrixObject mo1 = ec.getMatrixObject(instruction.input1);\n-\nif(instruction.getOpcode().equalsIgnoreCase(\"cm\") && mo1.isFederated()) {\nfedinst = CentralMomentFEDInstruction.parseInstruction(inst.getInstructionString());\n} else if(inst.getOpcode().equalsIgnoreCase(\"qsort\") && mo1.isFederated()) {\n@@ -153,7 +150,6 @@ public class FEDInstructionUtils {\nMatrixIndexingCPInstruction minst = (MatrixIndexingCPInstruction) inst;\nif(inst.getOpcode().equalsIgnoreCase(\"rightIndex\")\n&& minst.input1.isMatrix() && ec.getCacheableData(minst.input1).isFederated()) {\n- LOG.info(\"Federated Right Indexing\");\nfedinst = MatrixIndexingFEDInstruction.parseInstruction(minst.getInstructionString());\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedYL2SVMTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedYL2SVMTest.java",
"diff": "@@ -30,6 +30,7 @@ import org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n+import org.junit.Ignore;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized;\n@@ -41,6 +42,7 @@ public class FederatedYL2SVMTest extends AutomatedTestBase {\nprivate final static String TEST_DIR = \"functions/federated/\";\nprivate final static String TEST_NAME = \"FederatedYL2SVMTest\";\n+ private final static String TEST_NAME_2 = \"FederatedYL2SVMTest2\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + FederatedYL2SVMTest.class.getSimpleName() + \"/\";\nprivate final static int blocksize = 1024;\n@@ -53,6 +55,7 @@ public class FederatedYL2SVMTest extends AutomatedTestBase {\npublic void setUp() {\nTestUtils.clearAssertionInformation();\naddTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"Z\"}));\n+ addTestConfiguration(TEST_NAME_2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME_2, new String[] {\"Z\"}));\n}\[email protected]\n@@ -65,16 +68,24 @@ public class FederatedYL2SVMTest extends AutomatedTestBase {\n@Test\npublic void federatedL2SVMCP() {\n- federatedL2SVM(Types.ExecMode.SINGLE_NODE);\n+ federatedL2SVM(Types.ExecMode.SINGLE_NODE, TEST_NAME);\n}\n- /*\n- * TODO support SPARK execution mode -> RDDs and SPARK instructions lead to quite a few problems\n- *\n- * @Test public void federatedL2SVMSP() { federatedL2SVM(Types.ExecMode.SPARK); }\n- */\n+ @Test\n+ public void federatedL2SVMCP_2() {\n+ // This test is equal to the first tests, just with one worker location used instead.\n+ // making all federated matrices FULL type.\n+ federatedL2SVM(Types.ExecMode.SINGLE_NODE, TEST_NAME_2);\n+\n+ }\n- public void federatedL2SVM(Types.ExecMode execMode) {\n+ @Test\n+ @Ignore\n+ public void federatedL2SVMSP() {\n+ federatedL2SVM(Types.ExecMode.SPARK, TEST_NAME);\n+ }\n+\n+ public void federatedL2SVM(Types.ExecMode execMode, String testName) {\nboolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\nTypes.ExecMode platformOld = rtplatform;\nrtplatform = execMode;\n@@ -82,7 +93,7 @@ public class FederatedYL2SVMTest extends AutomatedTestBase {\nDMLScript.USE_LOCAL_SPARK_CONFIG = true;\n}\n- getAndLoadTestConfiguration(TEST_NAME);\n+ getAndLoadTestConfiguration(testName);\nString HOME = SCRIPT_DIR + TEST_DIR;\n// write input matrices\n@@ -110,18 +121,17 @@ public class FederatedYL2SVMTest extends AutomatedTestBase {\nThread t1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\nThread t2 = startLocalFedWorkerThread(port2);\n- TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ TestConfiguration config = availableTestConfigurations.get(testName);\nloadTestConfiguration(config);\n-\n// Run reference dml script with normal matrix\n- fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ fullDMLScriptName = HOME + testName + \"Reference.dml\";\nprogramArgs = new String[] {\"-args\", input(\"X1\"), input(\"X2\"), input(\"Y1\"), input(\"Y2\"), expected(\"Z\")};\nLOG.debug(runTest(null));\n// Run actual dml script with federated matrixz\n- fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n- programArgs = new String[] {\"-nvargs\", \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ fullDMLScriptName = HOME + testName + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"-nvargs\", \"in_X1=\" + TestUtils.federatedAddress(port1, 
input(\"X1\")),\n\"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")), \"rows=\" + rows, \"cols=\" + cols,\n\"in_Y1=\" + TestUtils.federatedAddress(port1, input(\"Y1\")),\n\"in_Y2=\" + TestUtils.federatedAddress(port2, input(\"Y2\")), \"out=\" + output(\"Z\")};\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedReaderTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedReaderTest.java",
"diff": "*/\npackage org.apache.sysds.test.functions.federated.io;\n-\nimport java.util.Arrays;\nimport java.util.Collection;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\n@@ -38,7 +39,7 @@ import org.junit.runners.Parameterized;\[email protected]\npublic class FederatedReaderTest extends AutomatedTestBase {\n- // private static final Log LOG = LogFactory.getLog(FederatedReaderTest.class.getName());\n+ private static final Log LOG = LogFactory.getLog(FederatedReaderTest.class.getName());\nprivate final static String TEST_DIR = \"functions/federated/ioR/\";\nprivate final static String TEST_NAME = \"FederatedReaderTest\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + FederatedReaderTest.class.getSimpleName() + \"/\";\n@@ -65,11 +66,18 @@ public class FederatedReaderTest extends AutomatedTestBase {\n}\n@Test\n- public void federatedSinglenodeRead() {\n- federatedRead(Types.ExecMode.SINGLE_NODE);\n+ public void federatedSingleNodeReadOneWorker() {\n+ LOG.debug(\"1Federated\");\n+ federatedRead(Types.ExecMode.SINGLE_NODE, 1);\n}\n- public void federatedRead(Types.ExecMode execMode) {\n+ @Test\n+ public void federatedSingleNodeReadTwoWorker() {\n+ LOG.debug(\"2Federated\");\n+ federatedRead(Types.ExecMode.SINGLE_NODE, 2);\n+ }\n+\n+ public void federatedRead(Types.ExecMode execMode, int workerCount) {\nTypes.ExecMode oldPlatform = setExecMode(execMode);\ngetAndLoadTestConfiguration(TEST_NAME);\nsetOutputBuffering(true);\n@@ -91,16 +99,29 @@ public class FederatedReaderTest extends AutomatedTestBase {\nThread t2 = startLocalFedWorkerThread(port2);\nString host = \"localhost\";\n-\ntry {\n- MatrixObject fed = FederatedTestObjectConstructor.constructFederatedInput(\n- rows, cols, blocksize, host, begins, ends, new int[] {port1, port2},\n- new String[] {input(\"X1\"), input(\"X2\")}, input(\"X.json\"));\n+ MatrixObject fed = FederatedTestObjectConstructor.constructFederatedInput(rows,\n+ cols,\n+ blocksize,\n+ host,\n+ begins,\n+ ends,\n+ workerCount == 2 ? new int[] {port1, port2} : new int[] {port1},\n+ workerCount == 2 ? new String[] {input(\"X1\"), input(\"X2\")} : new String[] {input(\"X1\")},\n+ input(\"X.json\"));\nwriteInputFederatedWithMTD(\"X.json\", fed, null);\n// Run reference dml script with normal matrix\n- fullDMLScriptName = SCRIPT_DIR + \"functions/federated/io/\" + TEST_NAME + (rowPartitioned ? \"Row\" : \"Col\")\n- + \"Reference.dml\";\n+\n+ if(workerCount == 1) {\n+ fullDMLScriptName = SCRIPT_DIR + \"functions/federated/io/\" + TEST_NAME + \"1Reference.dml\";\n+ programArgs = new String[] {\"-stats\", \"-args\", input(\"X1\")};\n+ }\n+ else {\n+ fullDMLScriptName = SCRIPT_DIR + \"functions/federated/io/\" + TEST_NAME\n+ + (rowPartitioned ? \"Row\" : \"Col\") + \"2Reference.dml\";\nprogramArgs = new String[] {\"-stats\", \"-args\", input(\"X1\"), input(\"X2\")};\n+ }\n+\nString refOut = runTest(null).toString();\n// Run federated\n@@ -111,7 +132,8 @@ public class FederatedReaderTest extends AutomatedTestBase {\nAssert.assertTrue(heavyHittersContainsString(\"fed_uak+\"));\n// Verify output\nAssert.assertEquals(Double.parseDouble(refOut.split(\"\\n\")[0]),\n- Double.parseDouble(out.split(\"\\n\")[0]), 0.00001);\n+ Double.parseDouble(out.split(\"\\n\")[0]),\n+ 0.00001);\n}\ncatch(Exception e) {\ne.printStackTrace();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedSSLTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/io/FederatedSSLTest.java",
"diff": "@@ -105,7 +105,7 @@ public class FederatedSSLTest extends AutomatedTestBase {\nwriteInputFederatedWithMTD(\"X.json\", fed, null);\n// Run reference dml script with normal matrix\nfullDMLScriptName = SCRIPT_DIR + \"functions/federated/io/\" + TEST_NAME + (rowPartitioned ? \"Row\" : \"Col\")\n- + \"Reference.dml\";\n+ + \"2Reference.dml\";\nprogramArgs = new String[] {\"-stats\", \"-args\", input(\"X1\"), input(\"X2\")};\nString refOut = runTest(null).toString();\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedYL2SVMTest2.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($in_X1),\n+ ranges=list(list(0, 0), list($rows / 2, $cols)))\n+Y = federated(addresses=list($in_Y1),\n+ ranges=list(list(0, 0), list($rows / 2, 1)))\n+model = l2svm(X=X, Y=Y, intercept = FALSE, epsilon = 1e-12, lambda = 1, maxIterations = 100)\n+write(model, $out)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedYL2SVMTest2Reference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = read($1)\n+Y = read($3)\n+model = l2svm(X=X, Y=Y, intercept = FALSE, epsilon = 1e-12, lambda = 1, maxIterations = 100)\n+write(model, $5)\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/scripts/functions/federated/io/FederatedReaderTestColReference.dml",
"new_path": "src/test/scripts/functions/federated/io/FederatedReaderTest1Reference.dml",
"diff": "#\n#-------------------------------------------------------------\n-X = cbind(read($1), read($2))\n+X = read($1)\nprint(sum(X))\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/scripts/functions/federated/io/FederatedReaderTestRowReference.dml",
"new_path": "src/test/scripts/functions/federated/io/FederatedReaderTestCol2Reference.dml",
"diff": "#\n#-------------------------------------------------------------\n-X = rbind(read($1), read($2))\n-print(sum(X))\n+Y = cbind(read($1), read($2))\n+print(sum(Y))\n+\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/io/FederatedReaderTestRow2Reference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+Y = rbind(read($1), read($2))\n+print(sum(Y))\n+\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2704] Add federated read 1 worker test
This commit adds a test for the one federated worker case, since this was
not tested before. Also a test case for Federated Y L2SVM is added for
a different number of workers. |
49,720 | 25.11.2020 12:21:45 | -3,600 | 7ec6e7039184a08d677658773eff15be65ac2736 | [MINOR] Removing print statement from FrameCastingTest.java | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/frame/FrameCastingTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/frame/FrameCastingTest.java",
"diff": "@@ -107,7 +107,6 @@ public class FrameCastingTest extends AutomatedTestBase\nfor( int j=0; j<schema.length; j++ )\nrow1[j] = UtilFunctions.doubleToObject(schema[j], A[i][j]);\nframe1.appendRow(row1);\n- System.out.println(Arrays.toString(row1));\n}\nMatrixBlock mb = DataConverter.convertToMatrixBlock(frame1);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Removing print statement from FrameCastingTest.java |
49,706 | 27.11.2020 14:44:19 | -3,600 | c50165ddd47f50bc1c71c752f67c2a6fbaaa5aaa | [MINOR] Singleton Federated SSL context | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedData.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedData.java",
"diff": "@@ -58,16 +58,8 @@ public class FederatedData {\nprivate static final Log LOG = LogFactory.getLog(FederatedData.class.getName());\nprivate static final Set<InetSocketAddress> _allFedSites = new HashSet<>();\n- private static SslContext sslCtx;\n-\n- static {\n- try {\n- sslCtx = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();\n- }\n- catch(SSLException e) {\n- LOG.error(\"Static SSL setup failed for client side\");\n- }\n- }\n+ /** A Singleton constructed SSL context, that only is assigned if ssl is enabled. */\n+ private static SslContextMan instance = null;\nprivate final Types.DataType _dataType;\nprivate final InetSocketAddress _address;\n@@ -178,8 +170,8 @@ public class FederatedData {\nprotected void initChannel(SocketChannel ch) throws Exception {\nChannelPipeline cp = ch.pipeline();\nif(ConfigurationManager.getDMLConfig().getBooleanValue(DMLConfig.USE_SSL_FEDERATED_COMMUNICATION)) {\n- cp.addLast(\n- sslCtx.newHandler(ch.alloc(), address.getAddress().getHostAddress(), address.getPort()));\n+ cp.addLast(SslConstructor().context\n+ .newHandler(ch.alloc(), address.getAddress().getHostAddress(), address.getPort()));\n}\ncp.addLast(\"ObjectDecoder\",\n@@ -254,6 +246,28 @@ public class FederatedData {\n}\n}\n+ private static class SslContextMan {\n+ protected final SslContext context;\n+\n+ private SslContextMan() {\n+ try {\n+ context = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();\n+ }\n+ catch(SSLException e) {\n+ throw new DMLRuntimeException(\"Static SSL setup failed for client side\", e);\n+ }\n+ }\n+ }\n+\n+ private static SslContextMan SslConstructor() {\n+ if(instance == null) {\n+ return new SslContextMan();\n+ }\n+ else {\n+ return instance;\n+ }\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Singleton Federated SSL context |
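A note on the pattern above: as the diff stands, SslConstructor() returns a new SslContextMan whenever instance is null but never assigns the field, so every call still builds a fresh context. A minimal sketch of the lazy-singleton initialization it is aiming for, in plain Java (class and field names are illustrative, not the SystemDS API):

// Lazily creates the expensive object once and reuses it afterwards.
public class LazyHolder {
    private static Object instance = null;   // stands in for the SSL context wrapper

    public static synchronized Object get() {
        if (instance == null)
            instance = new Object();         // expensive construction happens exactly once
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(get() == get());  // true: a single shared instance
    }
}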
49,738 | 28.11.2020 23:47:53 | -3,600 | 9641173dd54ba9d43e4869483006bfc4fc66897c | Fix in-memory reblock for federated matrices/frames
This patch fixes the spark reblock instructions (always compiled in
hybrid mode), which incorrectly consolidate federated matrices/frames
into the driver. We now simply extend the implementation to respect
existing federated data objects. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/recompile/Recompiler.java",
"new_path": "src/main/java/org/apache/sysds/hops/recompile/Recompiler.java",
"diff": "@@ -68,6 +68,7 @@ import org.apache.sysds.runtime.controlprogram.LocalVariableMap;\nimport org.apache.sysds.runtime.controlprogram.ParForProgramBlock;\nimport org.apache.sysds.runtime.controlprogram.ProgramBlock;\nimport org.apache.sysds.runtime.controlprogram.WhileProgramBlock;\n+import org.apache.sysds.runtime.controlprogram.caching.CacheBlock;\nimport org.apache.sysds.runtime.controlprogram.caching.CacheableData;\nimport org.apache.sysds.runtime.controlprogram.caching.FrameObject;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n@@ -80,7 +81,6 @@ import org.apache.sysds.runtime.instructions.cp.FunctionCallCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.IntObject;\nimport org.apache.sysds.runtime.instructions.cp.ScalarObject;\nimport org.apache.sysds.runtime.io.IOUtilFunctions;\n-import org.apache.sysds.runtime.matrix.data.FrameBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.meta.DataCharacteristics;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\n@@ -1568,33 +1568,25 @@ public class Recompiler\n&& !OptimizerUtils.exceedsCachingThreshold(dc.getCols(), OptimizerUtils.estimateSize(dc));\n}\n- public static void executeInMemoryMatrixReblock(ExecutionContext ec, String varin, String varout) {\n- MatrixObject in = ec.getMatrixObject(varin);\n- MatrixObject out = ec.getMatrixObject(varout);\n+ @SuppressWarnings(\"unchecked\")\n+ public static void executeInMemoryReblock(ExecutionContext ec, String varin, String varout) {\n+ CacheableData<CacheBlock> in = (CacheableData<CacheBlock>) ec.getCacheableData(varin);\n+ CacheableData<CacheBlock> out = (CacheableData<CacheBlock>) ec.getCacheableData(varout);\n+ if( in.isFederated() ) {\n+ out.setMetaData(in.getMetaData());\n+ out.setFedMapping(in.getFedMapping());\n+ }\n+ else {\n//read text input matrix (through buffer pool, matrix object carries all relevant\n//information including additional arguments for csv reblock)\n- MatrixBlock mb = in.acquireRead();\n+ CacheBlock mb = in.acquireRead();\n//set output (incl update matrix characteristics)\nout.acquireModify(mb);\nout.release();\nin.release();\n}\n-\n- public static void executeInMemoryFrameReblock(ExecutionContext ec, String varin, String varout)\n- {\n- FrameObject in = ec.getFrameObject(varin);\n- FrameObject out = ec.getFrameObject(varout);\n-\n- //read text input frame (through buffer pool, frame object carries all relevant\n- //information including additional arguments for csv reblock)\n- FrameBlock fb = in.acquireRead();\n-\n- //set output (incl update matrix characteristics)\n- out.acquireModify( fb );\n- out.release();\n- in.release();\n}\nprivate static void tryReadMetaDataFileDataCharacteristics( DataOp dop )\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CSVReblockSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CSVReblockSPInstruction.java",
"diff": "@@ -114,10 +114,8 @@ public class CSVReblockSPInstruction extends UnarySPInstruction {\n//check for in-memory reblock (w/ lazy spark context, potential for latency reduction)\nif( Recompiler.checkCPReblock(sec, input1.getName()) ) {\n- if( input1.getDataType() == DataType.MATRIX )\n- Recompiler.executeInMemoryMatrixReblock(sec, input1.getName(), output.getName());\n- else if( input1.getDataType() == DataType.FRAME )\n- Recompiler.executeInMemoryFrameReblock(sec, input1.getName(), output.getName());\n+ if( input1.getDataType().isMatrix() || input1.getDataType().isFrame() )\n+ Recompiler.executeInMemoryReblock(sec, input1.getName(), output.getName());\nStatistics.decrementNoOfExecutedSPInst();\nreturn;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/ReblockSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/ReblockSPInstruction.java",
"diff": "@@ -97,10 +97,8 @@ public class ReblockSPInstruction extends UnarySPInstruction {\n//check for in-memory reblock (w/ lazy spark context, potential for latency reduction)\nif( Recompiler.checkCPReblock(sec, input1.getName()) ) {\n- if( input1.getDataType() == DataType.MATRIX )\n- Recompiler.executeInMemoryMatrixReblock(sec, input1.getName(), output.getName());\n- else if( input1.getDataType() == DataType.FRAME )\n- Recompiler.executeInMemoryFrameReblock(sec, input1.getName(), output.getName());\n+ if( input1.getDataType().isMatrix() || input1.getDataType().isFrame() )\n+ Recompiler.executeInMemoryReblock(sec, input1.getName(), output.getName());\nStatistics.decrementNoOfExecutedSPInst();\nreturn;\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2550] Fix in-memory reblock for federated matrices/frames
This patch fixes the spark reblock instructions (always compiled in
hybrid mode), which incorrectly consolidate federated matrices/frames
into the driver. We now simply extend the implementation to respect
existing federated data objects. |
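The control flow that message describes can be sketched independently of the SystemDS classes; everything below is an illustrative stand-in, not the real CacheableData API:

// If the input is federated, only the federation meta data is forwarded;
// otherwise the block is read and handed to the output as before.
public class ReblockSketch {
    static class Obj {
        boolean federated; String fedMapping; double[] localBlock;
        Obj(boolean f, String m, double[] b) { federated = f; fedMapping = m; localBlock = b; }
    }
    static Obj reblock(Obj in) {
        Obj out = new Obj(in.federated, null, null);
        if (in.federated)
            out.fedMapping = in.fedMapping;          // no data is pulled into the driver
        else
            out.localBlock = in.localBlock.clone();  // local path: materialize the block
        return out;
    }
    public static void main(String[] args) {
        Obj fed = reblock(new Obj(true, "worker1:[0,0,500,10]", null));
        System.out.println(fed.fedMapping);          // federated input stays federated
    }
}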
49,698 | 08.12.2020 17:50:26 | -19,080 | 67e150d3d68de5dd63f4255112ce2161fcd7873f | [MINOR][DOC] Update algolia index name
Refer | [
{
"change_type": "MODIFY",
"old_path": "docs/_includes/scripts.html",
"new_path": "docs/_includes/scripts.html",
"diff": "@@ -75,7 +75,7 @@ limitations under the License.\ndocsearch({\napiKey: '78c19564c220d4642a41197baae304ef',\n- indexName: 'apache_systemml',\n+ indexName: 'apache_systemds',\ninputSelector: \"#s-bar\",\n// For custom styling for the dropdown, please set debug to true\n// so that the dropdown won't disappear when the inspect tools are\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR][DOC] Update algolia index name
Refer https://github.com/algolia/docsearch-configs/pull/2943 |
49,706 | 12.12.2020 14:29:36 | -3,600 | d439f0e321fd84c37ed4f26bea68552ee907c6b1 | Improved scale built-in function
Closes | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/scale.dml",
"new_path": "scripts/builtin/scale.dml",
"diff": "#\n#-------------------------------------------------------------\n-m_scale = function(Matrix[Double] X, Boolean center, Boolean scale) return (Matrix[Double] Y) {\n- # This function centers scales and performs z-score on the input matrix X\n+# Scale and center individual features in the input matrix\n+# (column-wise) using z-score to scale the values.\n+# -----------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# -----------------------------------------------------------------------------\n+# X Matrix --- Input feature matrix\n+# center Boolean TRUE Indicates whether or not to center the feature matrix\n+# scale Boolean TRUE Indicates whether or not to scale the feature matrix\n+# -----------------------------------------------------------------------------\n+# Y Matrix --- Output feature matrix with K columns\n+# -----------------------------------------------------------------------------\n+m_scale = function(Matrix[Double] X, Boolean center, Boolean scale) return (Matrix[Double] Y) {\nif( center )\nX = X - colMeans(X);\nif (scale) {\n- N = nrow(X);\n- if( center )\n- cvars = (colSums(X^2) - N*(colMeans(X)^2))/(N-1);\n- else\n- cvars = colSums(X^2)/(N-1);\n+ cvars = colSums(X^2)/(nrow(X)-1);\n#scale by std-dev and replace NaNs with 0's\nX = replace(target=X/sqrt(cvars),\npattern=NaN, replacement=0);\n}\n+\nY = X;\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2756] Improved scale built-in function
Closes #1123. |
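The simplification in that diff rests on a small identity: once X is centered, colMeans(X) is zero, so the corrected variance (colSums(X^2) - N*colMeans(X)^2)/(N-1) collapses to colSums(X^2)/(N-1), which is why a single formula now covers both branches. A tiny self-contained Java check of the equivalence on one column (values are arbitrary):

public class ScaleCheck {
    public static void main(String[] args) {
        double[] x = {4.0, 7.0, 1.0, 6.0};
        int n = x.length;
        double mean = 0;
        for (double v : x) mean += v;
        mean /= n;
        double[] c = new double[n];                  // centered column
        for (int i = 0; i < n; i++) c[i] = x[i] - mean;
        double sumSq = 0, cMean = 0;
        for (double v : c) { sumSq += v * v; cMean += v; }
        cMean /= n;                                  // ~0 after centering
        double corrected = (sumSq - n * cMean * cMean) / (n - 1);
        double plain = sumSq / (n - 1);
        System.out.println(corrected + " == " + plain);  // both print 7.0, the sample variance
    }
}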
49,689 | 13.12.2020 19:26:57 | -3,600 | 0d5e94ace91b9883f9e64d3156ae405a6b3c4bea | Minor bug fixes for Lineage Estimator | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageEstimatorStatistics.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageEstimatorStatistics.java",
"diff": "@@ -26,7 +26,7 @@ import org.apache.sysds.utils.Statistics;\npublic class LineageEstimatorStatistics {\nprivate static final LongAdder _ctimeSaved = new LongAdder(); //in nano sec\n- private static int maxInsts = 10;\n+ private static int INSTCOUNT = 10;\npublic static void reset() {\n_ctimeSaved.reset();\n@@ -61,7 +61,8 @@ public class LineageEstimatorStatistics {\n// Total time saved and reuse counts per opcode, ordered by saved time\nStringBuilder sb = new StringBuilder();\nsb.append(\"# Instrunction\\t\" + \" \"+\"Time(s) Count \\n\");\n- for (int i=1; i<=maxInsts; i++) {\n+ int instCount = Math.min(INSTCOUNT, LineageEstimator.computeSavingInst.size());\n+ for (int i=1; i<=instCount; i++) {\nMutableTriple<String, Long, Double> op = LineageEstimator.computeSavingInst.poll();\nint tl = String.valueOf(op.getRight()*1e-3).indexOf(\".\");\nif (op != null && op.getRight() > 0)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2755] Minor bug fixes for Lineage Estimator |
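The guard added above simply bounds the reporting loop by the number of queued entries so poll() never returns null. The same pattern in isolation (plain Java, unrelated to the SystemDS classes):

import java.util.PriorityQueue;

public class BoundedPoll {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.add(3); pq.add(1); pq.add(2);             // only three entries available
        int requested = 10;
        int count = Math.min(requested, pq.size());  // bound by the actual size, as in the fix
        for (int i = 0; i < count; i++)
            System.out.println(pq.poll());           // 1, 2, 3 -- never null
    }
}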
49,700 | 17.12.2020 11:37:17 | -3,600 | 0968c3b1fb2e88d279e71a3cde6154a163d2baf0 | Add federated lmCG test for rewrite debugging
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"diff": "@@ -106,7 +106,7 @@ public abstract class AutomatedTestBase {\npublic static final double GPU_TOLERANCE = 1e-9;\npublic static final int FED_WORKER_WAIT = 1000; // in ms\n- public static final int FED_WORKER_WAIT_S = 30; // in ms\n+ public static final int FED_WORKER_WAIT_S = 40; // in ms\n// With OpenJDK 8u242 on Windows, the new changes in JDK are not allowing\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinLmTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinLmTest.java",
"diff": "@@ -130,7 +130,7 @@ public class BuiltinLmTest extends AutomatedTestBase\nfullDMLScriptName = HOME + dml_test_name + \".dml\";\n- programArgs = new String[]{\"-args\", input(\"A\"), input(\"B\"), output(\"C\") };\n+ programArgs = new String[]{\"-explain\", \"-args\", input(\"A\"), input(\"B\"), output(\"C\") };\nfullRScriptName = HOME + TEST_NAME + \".R\";\nrCmd = \"Rscript\" + \" \" + fullRScriptName + \" \" + inputDir() + \" \" + expectedDir();\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/privacy/FederatedLmCGTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.privacy;\n+\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.lops.LopProperties.ExecType;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+\n+public class FederatedLmCGTest extends AutomatedTestBase\n+{\n+ private final static String TEST_NAME = \"lmCGFederated\";\n+ private final static String TEST_DIR = \"functions/privacy/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + FederatedLmCGTest.class.getSimpleName() + \"/\";\n+\n+ private final static int rows = 10;\n+ private final static int cols = 3;\n+ private final static double spSparse = 0.3;\n+ private final static double spDense = 0.7;\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME,new String[]{\"C\"}));\n+ }\n+\n+ @Test\n+ public void testLmMatrixDenseCPlmCG1() {\n+ runLmTest(false, ExecType.CP, false);\n+ }\n+\n+ @Test\n+ public void testLmMatrixSparseCPlmCG1() {\n+ runLmTest(true, ExecType.CP, false);\n+ }\n+\n+ @Test\n+ public void testLmMatrixDenseCPlmCG2() {\n+ runLmTest(false, ExecType.CP, true);\n+ }\n+\n+ @Test\n+ public void testLmMatrixSparseCPlmCG2() {\n+ runLmTest(true, ExecType.CP, true);\n+ }\n+\n+ @Test\n+ public void testLmMatrixDenseSPlmCG() {\n+ runLmTest(false, ExecType.SPARK, true);\n+ }\n+\n+ @Test\n+ public void testLmMatrixSparseSPlmCG() {\n+ runLmTest(true, ExecType.SPARK, true);\n+ }\n+\n+ private void runLmTest(boolean sparse, ExecType instType, boolean doubleFederated)\n+ {\n+ ExecMode platformOld = setExecMode(instType);\n+\n+ try\n+ {\n+ loadTestConfiguration(getTestConfiguration(TEST_NAME));\n+ double sparsity = sparse ? 
spSparse : spDense;\n+\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread t2 = startLocalFedWorkerThread(port2);\n+\n+ fullDMLScriptName = HOME + \"FederatedLmCG\" + (doubleFederated?\"2\":\"\") + \".dml\";\n+\n+ if (doubleFederated){\n+ programArgs = new String[]{\n+ \"-explain\", \"-nvargs\",\n+ \"X1=\"+TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"X2=\"+TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"y1=\" + TestUtils.federatedAddress(port1, input(\"y1\")),\n+ \"y2=\" + TestUtils.federatedAddress(port2, input(\"y2\")),\n+ \"C=\"+output(\"C\"),\n+ \"r=\" + rows, \"c=\" + cols};\n+ } else {\n+ programArgs = new String[]{\n+ \"-explain\", \"-nvargs\",\n+ \"X1=\"+TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"X2=\"+TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"y=\" + input(\"y\"),\n+ \"C=\"+output(\"C\"),\n+ \"r=\" + rows, \"c=\" + cols};\n+ }\n+\n+ //generate actual dataset\n+ int halfRows = rows / 2;\n+ double[][] X1 = getRandomMatrix(halfRows, cols, 0, 1, sparsity, 7);\n+ writeInputMatrixWithMTD(\"X1\", X1, false);\n+ double[][] X2 = getRandomMatrix(halfRows, cols, 0, 1, sparsity, 8);\n+ writeInputMatrixWithMTD(\"X2\", X2, false);\n+\n+ if ( doubleFederated ){\n+ double[][] y1 = getRandomMatrix(halfRows, 1, 0, 10, 1.0, 3);\n+ double[][] y2 = getRandomMatrix(halfRows, 1, 0, 10, 1.0, 4);\n+ writeInputMatrixWithMTD(\"y1\", y1, false);\n+ writeInputMatrixWithMTD(\"y2\", y2, false);\n+ } else {\n+ double[][] y = getRandomMatrix(rows, 1, 0, 10, 1.0, 3);\n+ writeInputMatrixWithMTD(\"y\", y, false);\n+ }\n+\n+ runTest(true, false, null, -1);\n+\n+ //check expected operations\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_mmchain\"));\n+\n+ TestUtils.shutdownThreads(t1, t2);\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/privacy/FederatedLmCG.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($X1, $X2),\n+ ranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\n+y = read($y)\n+C = lmCG(X = X, y = y, reg = 1e-12, verbose=FALSE)\n+write(C, $C)\n+\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/privacy/FederatedLmCG2.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($X1, $X2),\n+ ranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\n+y = federated(addresses=list($y1, $y2),\n+ ranges=list(list(0, 0), list($r / 2, 0), list($r / 2, 0), list($r, 0)))\n+C = lmCG(X = X, y = y, reg = 1e-12, maxi = 2, verbose=FALSE)\n+write(C, $C)\n+\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2759] Add federated lmCG test for rewrite debugging
Closes #1126. |
49,738 | 17.12.2020 13:17:03 | -3,600 | ed8b5f5c725526f8414d2dbe9113d52260597352 | Fix error handling federated read, wrong test meta data
This patch improves the federated read by explicit error handling for
invalid meta data (e.g., federated data partitions larger than global
matrix dimensions) in order to detect meta data inconsistencies. | [
{
"change_type": "MODIFY",
"old_path": ".github/workflows/functionsTests.yml",
"new_path": ".github/workflows/functionsTests.yml",
"diff": "@@ -40,7 +40,8 @@ jobs:\n\"**.functions.aggregate.**,**.functions.append.**,**.functions.binary.frame.**,**.functions.binary.matrix.**,**.functions.binary.scalar.**,**.functions.binary.tensor.**\",\n\"**.functions.blocks.**,**.functions.compress.**,**.functions.countDistinct.**,**.functions.data.misc.**,**.functions.data.rand.**,**.functions.data.tensor.**,**.functions.codegenalg.parttwo.**,**.functions.codegen.**,**.functions.caching.**\",\n\"**.functions.binary.matrix_full_cellwise.**,**.functions.binary.matrix_full_other.**\",\n- \"**.functions.federated.**\",\n+ \"**.functions.federated.algorithms.**\",\n+ \"**.functions.federated.io.**,**.functions.federated.paramserv.**,**.functions.federated.primitives.**,**.functions.federated.transform.**\",\n\"**.functions.codegenalg.partone.**\",\n\"**.functions.builtin.**\",\n\"**.functions.frame.**,**.functions.indexing.**,**.functions.io.**,**.functions.jmlc.**,**.functions.lineage.**\",\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"diff": "@@ -237,9 +237,9 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nframeObject.acquireRead();\nframeObject.refreshMetaData(); // get block schema\nframeObject.release();\n- return new FederatedResponse(ResponseType.SUCCESS, new Object[] {id, frameObject.getSchema()});\n+ return new FederatedResponse(ResponseType.SUCCESS, new Object[] {id, frameObject.getSchema(), mc});\n}\n- return new FederatedResponse(ResponseType.SUCCESS, id);\n+ return new FederatedResponse(ResponseType.SUCCESS, new Object[] {id, mc});\n}\nprivate FederatedResponse putVariable(FederatedRequest request) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/InitFEDInstruction.java",
"diff": "@@ -56,6 +56,7 @@ import org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.instructions.cp.ListObject;\nimport org.apache.sysds.runtime.instructions.cp.ScalarObject;\nimport org.apache.sysds.runtime.instructions.cp.StringObject;\n+import org.apache.sysds.runtime.meta.DataCharacteristics;\npublic class InitFEDInstruction extends FEDInstruction {\n@@ -236,9 +237,16 @@ public class InitFEDInstruction extends FEDInstruction {\ntry {\nint timeout = ConfigurationManager.getDMLConfig()\n.getIntValue(DMLConfig.DEFAULT_FEDERATED_INITIALIZATION_TIMEOUT);\n+ if( LOG.isDebugEnabled() )\nLOG.debug(\"Federated Initialization with timeout: \" + timeout);\n- for(Pair<FederatedData, Future<FederatedResponse>> idResponse : idResponses)\n- idResponse.getRight().get(timeout, TimeUnit.SECONDS); // wait for initialization\n+ for(Pair<FederatedData, Future<FederatedResponse>> idResponse : idResponses) {\n+ // wait for initialization and check dimensions\n+ FederatedResponse re = idResponse.getRight().get(timeout, TimeUnit.SECONDS);\n+ DataCharacteristics dc = (DataCharacteristics) re.getData()[1];\n+ if( dc.getRows() > output.getNumRows() || dc.getCols() > output.getNumColumns() )\n+ throw new DMLRuntimeException(\"Invalid federated meta data: \"\n+ + output.getDataCharacteristics()+\" vs federated response: \"+dc);\n+ }\n}\ncatch(TimeoutException e) {\nthrow new DMLRuntimeException(\"Federated Initialization timeout exceeded\", e);\n@@ -294,6 +302,10 @@ public class InitFEDInstruction extends FEDInstruction {\nFederatedResponse response = idResponse.getRight().getRight().get();\nint startCol = idResponse.getRight().getLeft();\nhandleFedFrameResponse(schema, fedData, response, startCol);\n+ DataCharacteristics dc = (DataCharacteristics) response.getData()[2];\n+ if( dc.getRows() > output.getNumRows() || dc.getCols() > output.getNumColumns() )\n+ throw new DMLRuntimeException(\"Invalid federated meta data: \"\n+ + output.getDataCharacteristics()+\" vs federated response: \"+dc);\n}\n}\ncatch(Exception e) {\n@@ -315,7 +327,7 @@ public class InitFEDInstruction extends FEDInstruction {\n// Index 0 is the varID, Index 1 is the schema of the frame\nObject[] data = response.getData();\nfederatedData.setVarID((Long) data[0]);\n- // copy the\n+ // copy the schema\nTypes.ValueType[] range_schema = (Types.ValueType[]) data[1];\nfor(int i = 0; i < range_schema.length; i++) {\nTypes.ValueType vType = range_schema[i];\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/privacy/FederatedLmCGTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/privacy/FederatedLmCGTest.java",
"diff": "@@ -95,7 +95,7 @@ public class FederatedLmCGTest extends AutomatedTestBase\nif (doubleFederated){\nprogramArgs = new String[]{\n- \"-explain\", \"-nvargs\",\n+ \"-explain\", \"-stats\", \"-nvargs\",\n\"X1=\"+TestUtils.federatedAddress(port1, input(\"X1\")),\n\"X2=\"+TestUtils.federatedAddress(port2, input(\"X2\")),\n\"y1=\" + TestUtils.federatedAddress(port1, input(\"y1\")),\n@@ -104,7 +104,7 @@ public class FederatedLmCGTest extends AutomatedTestBase\n\"r=\" + rows, \"c=\" + cols};\n} else {\nprogramArgs = new String[]{\n- \"-explain\", \"-nvargs\",\n+ \"-explain\", \"-stats\", \"-nvargs\",\n\"X1=\"+TestUtils.federatedAddress(port1, input(\"X1\")),\n\"X2=\"+TestUtils.federatedAddress(port2, input(\"X2\")),\n\"y=\" + input(\"y\"),\n@@ -132,6 +132,7 @@ public class FederatedLmCGTest extends AutomatedTestBase\nrunTest(true, false, null, -1);\n//check expected operations\n+ if( instType == ExecType.CP )\nAssert.assertTrue(heavyHittersContainsString(\"fed_mmchain\"));\nTestUtils.shutdownThreads(t1, t2);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/privacy/FederatedLmCG2.dml",
"new_path": "src/test/scripts/functions/privacy/FederatedLmCG2.dml",
"diff": "X = federated(addresses=list($X1, $X2),\nranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)))\ny = federated(addresses=list($y1, $y2),\n- ranges=list(list(0, 0), list($r / 2, 0), list($r / 2, 0), list($r, 0)))\n+ ranges=list(list(0, 0), list($r / 2, 1), list($r / 2, 0), list($r, 1)))\nC = lmCG(X = X, y = y, reg = 1e-12, maxi = 2, verbose=FALSE)\nwrite(C, $C)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2759] Fix error handling federated read, wrong test meta data
This patch improves the federated read by explicit error handling for
invalid meta data (e.g., federated data partitions larger than global
matrix dimensions) in order to detect meta data inconsistencies. |
49,738 | 19.12.2020 19:08:51 | -3,600 | c54213df08b259fc3b8c96d4c3ffe6b0ea6b1eb1 | Fix indexed addition assignment (accumulation)
This patch adds the missing support for addition assignments in left
indexing expressions for both scalars and matrices as well as scalar and
matrix indexed ranges.
Thanks to Rene Haubitzer for catching this issue. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"new_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"diff": "@@ -1137,11 +1137,8 @@ public class DMLTranslator\nif (!(target instanceof IndexedIdentifier)) {\n//process right hand side and accumulation\nHop ae = processExpression(source, target, ids);\n- if( ((AssignmentStatement)current).isAccumulator() ) {\n- DataIdentifier accum = liveIn.getVariable(target.getName());\n- if( accum == null )\n- throw new LanguageException(\"Invalid accumulator assignment \"\n- + \"to non-existing variable \"+target.getName()+\".\");\n+ if( as.isAccumulator() ) {\n+ DataIdentifier accum = getAccumulatorData(liveIn, target.getName());\nae = HopRewriteUtils.createBinary(ids.get(target.getName()), ae, OpOp2.PLUS);\ntarget.setProperties(accum.getOutput());\n}\n@@ -1170,6 +1167,15 @@ public class DMLTranslator\nelse {\nHop ae = processLeftIndexedExpression(source, (IndexedIdentifier)target, ids);\n+ if( as.isAccumulator() ) {\n+ DataIdentifier accum = getAccumulatorData(liveIn, target.getName());\n+ Hop rix = processIndexingExpression((IndexedIdentifier)target, null, ids);\n+ Hop rhs = processExpression(source, null, ids);\n+ Hop binary = HopRewriteUtils.createBinary(rix, rhs, OpOp2.PLUS);\n+ HopRewriteUtils.replaceChildReference(ae, ae.getInput(1), binary);\n+ target.setProperties(accum.getOutput());\n+ }\n+\nids.put(target.getName(), ae);\n// obtain origDim values BEFORE they are potentially updated during setProperties call\n@@ -1298,7 +1304,14 @@ public class DMLTranslator\n}\nsb.updateLiveVariablesOut(updatedLiveOut);\nsb.setHops(output);\n+ }\n+ private static DataIdentifier getAccumulatorData(VariableSet liveIn, String varname) {\n+ DataIdentifier accum = liveIn.getVariable(varname);\n+ if( accum == null )\n+ throw new LanguageException(\"Invalid accumulator assignment \"\n+ + \"to non-existing variable \"+varname+\".\");\n+ return accum;\n}\nprivate void appendDefaultArguments(FunctionStatement fstmt, List<String> inputNames, List<Hop> inputs, HashMap<String, Hop> ids) {\n@@ -1630,41 +1643,9 @@ public class DMLTranslator\nreturn processExpression(source, tmpOut, hops );\n}\n- private Hop processLeftIndexedExpression(Expression source, IndexedIdentifier target, HashMap<String, Hop> hops)\n- {\n+ private Hop processLeftIndexedExpression(Expression source, IndexedIdentifier target, HashMap<String, Hop> hops) {\n// process target indexed expressions\n- Hop rowLowerHops = null, rowUpperHops = null, colLowerHops = null, colUpperHops = null;\n-\n- if (target.getRowLowerBound() != null)\n- rowLowerHops = processExpression(target.getRowLowerBound(),null,hops);\n- else\n- rowLowerHops = new LiteralOp(1);\n-\n- if (target.getRowUpperBound() != null)\n- rowUpperHops = processExpression(target.getRowUpperBound(),null,hops);\n- else\n- {\n- if ( target.getDim1() != -1 )\n- rowUpperHops = new LiteralOp(target.getOrigDim1());\n- else {\n- rowUpperHops = new UnaryOp(target.getName(), DataType.SCALAR, ValueType.INT64, OpOp1.NROW, hops.get(target.getName()));\n- rowUpperHops.setParseInfo(target);\n- }\n- }\n- if (target.getColLowerBound() != null)\n- colLowerHops = processExpression(target.getColLowerBound(),null,hops);\n- else\n- colLowerHops = new LiteralOp(1);\n-\n- if (target.getColUpperBound() != null)\n- colUpperHops = processExpression(target.getColUpperBound(),null,hops);\n- else\n- {\n- if ( target.getDim2() != -1 )\n- colUpperHops = new LiteralOp(target.getOrigDim2());\n- else\n- colUpperHops = new UnaryOp(target.getName(), DataType.SCALAR, ValueType.INT64, OpOp1.NCOL, hops.get(target.getName()));\n- }\n+ Hop[] ixRange = getIndexingBounds(target, hops, 
true);\n// process the source expression to get source Hops\nHop sourceOp = processExpression(source, target, hops);\n@@ -1678,12 +1659,11 @@ public class DMLTranslator\nif( sourceOp.getDataType().isMatrix() && source.getOutput().getDataType().isScalar() )\nsourceOp.setDataType(DataType.SCALAR);\n- Hop leftIndexOp = new LeftIndexingOp(target.getName(), target.getDataType(), ValueType.FP64,\n- targetOp, sourceOp, rowLowerHops, rowUpperHops, colLowerHops, colUpperHops,\n+ Hop leftIndexOp = new LeftIndexingOp(target.getName(), target.getDataType(),\n+ ValueType.FP64, targetOp, sourceOp, ixRange[0], ixRange[1], ixRange[2], ixRange[3],\ntarget.getRowLowerEqualsUpper(), target.getColLowerEqualsUpper());\nsetIdentifierParams(leftIndexOp, target);\n-\nleftIndexOp.setParseInfo(target);\nleftIndexOp.setDim1(target.getOrigDim1());\nleftIndexOp.setDim2(target.getOrigDim2());\n@@ -1694,38 +1674,7 @@ public class DMLTranslator\nprivate Hop processIndexingExpression(IndexedIdentifier source, DataIdentifier target, HashMap<String, Hop> hops) {\n// process Hops for indexes (for source)\n- Hop rowLowerHops = null, rowUpperHops = null, colLowerHops = null, colUpperHops = null;\n-\n- if (source.getRowLowerBound() != null)\n- rowLowerHops = processExpression(source.getRowLowerBound(),null,hops);\n- else\n- rowLowerHops = new LiteralOp(1);\n-\n- if (source.getRowUpperBound() != null)\n- rowUpperHops = processExpression(source.getRowUpperBound(),null,hops);\n- else\n- {\n- if ( source.getOrigDim1() != -1 )\n- rowUpperHops = new LiteralOp(source.getOrigDim1());\n- else {\n- rowUpperHops = new UnaryOp(source.getName(), DataType.SCALAR, ValueType.INT64, OpOp1.NROW, hops.get(source.getName()));\n- rowUpperHops.setParseInfo(source);\n- }\n- }\n- if (source.getColLowerBound() != null)\n- colLowerHops = processExpression(source.getColLowerBound(),null,hops);\n- else\n- colLowerHops = new LiteralOp(1);\n-\n- if (source.getColUpperBound() != null)\n- colUpperHops = processExpression(source.getColUpperBound(),null,hops);\n- else\n- {\n- if ( source.getOrigDim2() != -1 )\n- colUpperHops = new LiteralOp(source.getOrigDim2());\n- else\n- colUpperHops = new UnaryOp(source.getName(), DataType.SCALAR, ValueType.INT64, OpOp1.NCOL, hops.get(source.getName()));\n- }\n+ Hop[] ixRange = getIndexingBounds(source, hops, false);\nif (target == null) {\ntarget = createTarget(source);\n@@ -1735,7 +1684,7 @@ public class DMLTranslator\ntarget.setNnz(-1);\nHop indexOp = new IndexingOp(target.getName(), target.getDataType(), target.getValueType(),\n- hops.get(source.getName()), rowLowerHops, rowUpperHops, colLowerHops, colUpperHops,\n+ hops.get(source.getName()), ixRange[0], ixRange[1], ixRange[2], ixRange[3],\nsource.getRowLowerEqualsUpper(), source.getColLowerEqualsUpper());\nindexOp.setParseInfo(target);\n@@ -1744,6 +1693,34 @@ public class DMLTranslator\nreturn indexOp;\n}\n+ private Hop[] getIndexingBounds(IndexedIdentifier ix, HashMap<String, Hop> hops, boolean lix) {\n+ Hop rowLowerHops = (ix.getRowLowerBound() != null) ?\n+ processExpression(ix.getRowLowerBound(),null, hops) : new LiteralOp(1);\n+ Hop colLowerHops = (ix.getColLowerBound() != null) ?\n+ processExpression(ix.getColLowerBound(),null, hops) : new LiteralOp(1);\n+\n+ Hop rowUpperHops = null, colUpperHops = null;\n+ if (ix.getRowUpperBound() != null)\n+ rowUpperHops = processExpression(ix.getRowUpperBound(),null,hops);\n+ else {\n+ rowUpperHops = ((lix ? 
ix.getDim1() : ix.getOrigDim1()) != -1) ?\n+ new LiteralOp(ix.getOrigDim1()) :\n+ new UnaryOp(ix.getName(), DataType.SCALAR, ValueType.INT64, OpOp1.NROW, hops.get(ix.getName()));\n+ rowUpperHops.setParseInfo(ix);\n+ }\n+\n+ if (ix.getColUpperBound() != null)\n+ colUpperHops = processExpression(ix.getColUpperBound(),null,hops);\n+ else {\n+ colUpperHops = ((lix ? ix.getDim2() : ix.getOrigDim2()) != -1) ?\n+ new LiteralOp(ix.getOrigDim2()) :\n+ new UnaryOp(ix.getName(), DataType.SCALAR, ValueType.INT64, OpOp1.NCOL, hops.get(ix.getName()));\n+ colUpperHops.setParseInfo(ix);\n+ }\n+\n+ return new Hop[] {rowLowerHops, rowUpperHops, colLowerHops, colUpperHops};\n+ }\n+\n/**\n* Construct Hops from parse tree : Process Binary Expression in an\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/indexing/IndexedAdditionAssignmentTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.indexing;\n+\n+\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.lops.LopProperties.ExecType;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+\n+public class IndexedAdditionAssignmentTest extends AutomatedTestBase\n+{\n+ private final static String TEST_DIR = \"functions/indexing/\";\n+ private final static String TEST_NAME = \"IndexedAdditionTest\";\n+\n+ private final static String TEST_CLASS_DIR = TEST_DIR + IndexedAdditionAssignmentTest.class.getSimpleName() + \"/\";\n+\n+ private final static int rows = 1279;\n+ private final static int cols = 1050;\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"A\"}));\n+ }\n+\n+ @Test\n+ public void testIndexedAssignmentAddScalarCP() {\n+ runIndexedAdditionAssignment(true, ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testIndexedAssignmentAddMatrixCP() {\n+ runIndexedAdditionAssignment(false, ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testIndexedAssignmentAddScalarSpark() {\n+ runIndexedAdditionAssignment(true, ExecType.SPARK);\n+ }\n+\n+ @Test\n+ public void testIndexedAssignmentAddMatrixSpark() {\n+ runIndexedAdditionAssignment(false, ExecType.SPARK);\n+ }\n+\n+ private void runIndexedAdditionAssignment(boolean scalar, ExecType instType) {\n+ ExecMode platformOld = setExecMode(instType);\n+\n+ try {\n+ TestConfiguration config = getTestConfiguration(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ //test is adding or subtracting 7 to area 1x1 or 10x10\n+ //of an initially constraint (3) matrix and sums it up\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[]{\"-explain\" , \"-args\",\n+ Long.toString(rows), Long.toString(cols),\n+ String.valueOf(scalar).toUpperCase(), output(\"A\")};\n+\n+ runTest(true, false, null, -1);\n+\n+ Double ret = readDMLMatrixFromOutputDir(\"A\").get(new CellIndex(1,1));\n+ Assert.assertEquals(new Double(3*rows*cols + 7*(scalar?1:100)), ret);\n+ }\n+ finally {\n+ resetExecMode(platformOld);\n+ }\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/indexing/LeftIndexingScalarTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/indexing/LeftIndexingScalarTest.java",
"diff": "@@ -22,7 +22,6 @@ package org.apache.sysds.test.functions.indexing;\nimport java.util.HashMap;\nimport org.junit.Test;\n-import org.apache.sysds.api.DMLScript;\nimport org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.lops.LopProperties.ExecType;\nimport org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n@@ -33,7 +32,6 @@ import org.apache.sysds.test.TestUtils;\npublic class LeftIndexingScalarTest extends AutomatedTestBase\n{\n-\nprivate final static String TEST_DIR = \"functions/indexing/\";\nprivate final static String TEST_NAME = \"LeftIndexingScalarTest\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + LeftIndexingScalarTest.class.getSimpleName() + \"/\";\n@@ -52,31 +50,18 @@ public class LeftIndexingScalarTest extends AutomatedTestBase\n}\n@Test\n- public void testLeftIndexingScalarCP()\n- {\n+ public void testLeftIndexingScalarCP() {\nrunLeftIndexingTest(ExecType.CP);\n}\n@Test\n- public void testLeftIndexingScalarSP()\n- {\n+ public void testLeftIndexingScalarSP() {\nrunLeftIndexingTest(ExecType.SPARK);\n}\nprivate void runLeftIndexingTest( ExecType instType )\n{\n- //rtplatform for MR\n- ExecMode platformOld = rtplatform;\n- if(instType == ExecType.SPARK) {\n- rtplatform = ExecMode.SPARK;\n- }\n- else {\n- rtplatform = ExecMode.HYBRID;\n- }\n- boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n- if( rtplatform == ExecMode.SPARK )\n- DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n-\n+ ExecMode platformOld = setExecMode(instType);\ntry\n{\n@@ -102,11 +87,8 @@ public class LeftIndexingScalarTest extends AutomatedTestBase\nTestUtils.compareMatrices(dmlfile, rfile, epsilon, \"A-DML\", \"A-R\");\ncheckDMLMetaDataFile(\"A\", new MatrixCharacteristics(rows,cols,1,1));\n}\n- finally\n- {\n- rtplatform = platformOld;\n- DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+ finally {\n+ resetExecMode(platformOld);\n}\n}\n}\n-\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/indexing/IndexedAdditionTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+A = matrix(3, $1, $2);\n+\n+if( $3 )\n+ A[10,20] += 7;\n+else\n+ A[10:19,20:29] += 7;\n+\n+R = as.matrix(sum(A))\n+write(R, $4, format=\"text\")\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2745] Fix indexed addition assignment (accumulation)
This patch adds the missing support for addition assignments in left
indexing expressions for both scalars and matrices as well as scalar and
matrix indexed ranges.
Thanks to Rene Haubitzer for catching this issue. |
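Semantically, the accumulation the patch enables is a read-modify-write on the indexed range: A[rl:ru,cl:cu] += v is lowered to a right-indexing of the current values, an element-wise plus, and a left-indexing write-back. A plain-Java illustration of that semantics on a 2D array (not the actual hop construction):

public class IndexedAddSketch {
    public static void main(String[] args) {
        double[][] A = new double[4][4];
        for (double[] row : A) java.util.Arrays.fill(row, 3.0);  // A = matrix(3, 4, 4)

        int rl = 0, ru = 1, cl = 0, cu = 1;   // 0-based bounds of the updated range
        double v = 7.0;
        for (int i = rl; i <= ru; i++)
            for (int j = cl; j <= cu; j++)
                A[i][j] = A[i][j] + v;        // read current value, add, write back

        double sum = 0;
        for (double[] row : A) for (double x : row) sum += x;
        System.out.println(sum);              // 3*16 + 7*4 = 76, the pattern asserted by the test
    }
}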
49,738 | 19.12.2020 22:45:40 | -3,600 | 538f5472e13bbb011baa0bddee887807396bb45c | [MINOR] Fix flaky lineage cache eviction test (robustness github test) | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"diff": "@@ -129,7 +129,10 @@ public class CacheEvictionTest extends LineageBase {\nAssert.assertTrue(expCount_lru >= expCount_wt);\n// Compare counts of evicted items\n// LRU tends to evict more entries to recover equal amount of memory\n- Assert.assertTrue(evictedCount_lru > evictedCount_wt);\n+ // Note: changed to equals to fix flaky tests where both are not evicted at all\n+ // (e.g., due to high execution time as sometimes observed through github actions)\n+ Assert.assertTrue((\"Violated expected evictions: \"+evictedCount_lru+\" >= \"+evictedCount_wt),\n+ evictedCount_lru >= evictedCount_wt);\n// Compare cache hits\nAssert.assertTrue(hitCount_lru < hitCount_wt);\n}\n@@ -139,5 +142,4 @@ public class CacheEvictionTest extends LineageBase {\nRecompiler.reinitRecompiler();\n}\n}\n-\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix flaky lineage cache eviction test (robustness github test) |
49,722 | 19.12.2020 23:01:40 | -3,600 | 0126197085d836441bdff29c8caa654a6d77e876 | Federated reshape operations (aligned)
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"diff": "@@ -242,11 +242,18 @@ public class FederationMap {\n}\nprivate static FederatedRequest[] addAll(FederatedRequest a, FederatedRequest[] b) {\n+ // empty b array\n+ if( b == null || b.length==0 ) {\n+ return new FederatedRequest[] {a};\n+ }\n+ // concat with b array\n+ else {\nFederatedRequest[] ret = new FederatedRequest[b.length + 1];\nret[0] = a;\nSystem.arraycopy(b, 0, ret, 1, b.length);\nreturn ret;\n}\n+ }\npublic FederationMap identCopy(long tid, long id) {\nFuture<FederatedResponse>[] copyInstr = execute(tid,\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"diff": "@@ -78,6 +78,25 @@ public class FederationUtils {\nreturn new FederatedRequest(RequestType.EXEC_INST, id, linst);\n}\n+ public static FederatedRequest[] callInstruction(String[] inst, CPOperand varOldOut, CPOperand[] varOldIn, long[] varNewIn) {\n+ long id = getNextFedDataID();\n+ String[] linst = inst;\n+ FederatedRequest[] fr = new FederatedRequest[inst.length];\n+ for(int j=0; j<inst.length; j++) {\n+ for(int i = 0; i < varOldIn.length; i++) {\n+ linst[j] = linst[j].replace(ExecType.SPARK.name(), ExecType.CP.name());\n+ linst[j] = linst[j].replace(Lop.OPERAND_DELIMITOR + varOldOut.getName() + Lop.DATATYPE_PREFIX, Lop.OPERAND_DELIMITOR + String.valueOf(id) + Lop.DATATYPE_PREFIX);\n+\n+ if(varOldIn[i] != null) {\n+ linst[j] = linst[j].replace(Lop.OPERAND_DELIMITOR + varOldIn[i].getName() + Lop.DATATYPE_PREFIX, Lop.OPERAND_DELIMITOR + String.valueOf(varNewIn[i]) + Lop.DATATYPE_PREFIX);\n+ linst[j] = linst[j].replace(\"=\" + varOldIn[i].getName(), \"=\" + String.valueOf(varNewIn[i])); //parameterized\n+ }\n+ }\n+ fr[j] = new FederatedRequest(RequestType.EXEC_INST, id, (Object) linst[j]);\n+ }\n+ return fr;\n+ }\n+\npublic static MatrixBlock aggAdd(Future<FederatedResponse>[] ffr) {\ntry {\nSimpleOperator op = new SimpleOperator(Plus.getPlusFnObject());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"diff": "@@ -38,6 +38,7 @@ public abstract class FEDInstruction extends Instruction {\nTsmm,\nMMChain,\nReorg,\n+ Reshape,\nMatrixIndexing,\nQSort,\nQPick\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -107,6 +107,8 @@ public class FEDInstructionUtils {\n} else if(inst.getOpcode().equalsIgnoreCase(\"qsort\") && mo1.isFederated()) {\nif(mo1.getFedMapping().getFederatedRanges().length == 1)\nfedinst = QuantileSortFEDInstruction.parseInstruction(inst.getInstructionString());\n+ } else if(inst.getOpcode().equalsIgnoreCase(\"rshape\") && mo1.isFederated()) {\n+ fedinst = ReshapeFEDInstruction.parseInstruction(inst.getInstructionString());\n} else if(inst instanceof AggregateUnaryCPInstruction && mo1.isFederated() &&\n((AggregateUnaryCPInstruction) instruction).getAUType() == AggregateUnaryCPInstruction.AUType.DEFAULT) {\nfedinst = AggregateUnaryFEDInstruction.parseInstruction(inst.getInstructionString());\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ReshapeFEDInstruction.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.instructions.fed;\n+\n+import java.util.Arrays;\n+import java.util.stream.Collectors;\n+\n+import org.apache.commons.lang3.tuple.Pair;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.lops.Lop;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n+import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRange;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\n+import org.apache.sysds.runtime.instructions.InstructionUtils;\n+import org.apache.sysds.runtime.instructions.cp.BooleanObject;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.lineage.LineageItem;\n+import org.apache.sysds.runtime.lineage.LineageItemUtils;\n+import org.apache.sysds.runtime.matrix.operators.Operator;\n+\n+public class ReshapeFEDInstruction extends UnaryFEDInstruction {\n+ private final CPOperand _opRows;\n+ private final CPOperand _opCols;\n+ private final CPOperand _opDims;\n+ private final CPOperand _opByRow;\n+\n+ private ReshapeFEDInstruction(Operator op, CPOperand in1, CPOperand in2, CPOperand in3, CPOperand in4,\n+ CPOperand in5, CPOperand out, String opcode, String istr) {\n+ super(FEDInstruction.FEDType.Reshape, op, in1, out, opcode, istr);\n+ _opRows = in2;\n+ _opCols = in3;\n+ _opDims = in4;\n+ _opByRow = in5;\n+ }\n+\n+ public static ReshapeFEDInstruction parseInstruction(String str) {\n+ String[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\n+ InstructionUtils.checkNumFields(parts, 6);\n+ String opcode = parts[0];\n+ CPOperand in1 = new CPOperand(parts[1]);\n+ CPOperand in2 = new CPOperand(parts[2]);\n+ CPOperand in3 = new CPOperand(parts[3]);\n+ CPOperand in4 = new CPOperand(parts[4]);\n+ CPOperand in5 = new CPOperand(parts[5]);\n+ CPOperand out = new CPOperand(parts[6]);\n+ if(!opcode.equalsIgnoreCase(\"rshape\"))\n+ throw new DMLRuntimeException(\"Unknown opcode while parsing an ReshapeInstruction: \" + str);\n+ else\n+ return new ReshapeFEDInstruction(new Operator(true), in1, in2, in3, in4, in5, out, opcode, str);\n+ }\n+\n+ @Override\n+ public void processInstruction(ExecutionContext ec) {\n+ if(output.getDataType() == Types.DataType.MATRIX) {\n+ MatrixObject mo1 = ec.getMatrixObject(input1);\n+ BooleanObject byRow = (BooleanObject) ec\n+ .getScalarInput(_opByRow.getName(), Types.ValueType.BOOLEAN, 
_opByRow.isLiteral());\n+ int rows = (int) ec.getScalarInput(_opRows).getLongValue();\n+ int cols = (int) ec.getScalarInput(_opCols).getLongValue();\n+\n+ if(!mo1.isFederated())\n+ throw new DMLRuntimeException(\"Federated Rshape: \"\n+ + \"Federated input expected, but invoked w/ \" + mo1.isFederated());\n+ if(mo1.getNumColumns() * mo1.getNumRows() != rows * cols)\n+ throw new DMLRuntimeException(\"Reshape matrix requires consistent numbers of input/output cells (\"\n+ + mo1.getNumRows() + \":\" + mo1.getNumColumns() + \", \" + rows + \":\" + cols + \").\");\n+\n+ boolean isNotAligned = Arrays.stream(mo1.getFedMapping().getFederatedRanges())\n+ .map(e -> e.getSize() % (byRow.getBooleanValue() ? cols : rows) == 0).collect(Collectors.toList())\n+ .contains(false);\n+\n+ if(isNotAligned)\n+ throw new DMLRuntimeException(\n+ \"Reshape matrix requires consistent numbers of input/output cells for each worker.\");\n+\n+ String[] newInstString = getNewInstString(mo1, instString, rows, cols, byRow.getBooleanValue());\n+\n+ //execute at federated site\n+ FederatedRequest[] fr1 = FederationUtils.callInstruction(newInstString,\n+ output, new CPOperand[] {input1}, new long[] {mo1.getFedMapping().getID()});\n+ mo1.getFedMapping().execute(getTID(), true, fr1, new FederatedRequest[0]);\n+\n+ // set new fed map\n+ FederationMap reshapedFedMap = mo1.getFedMapping();\n+ for(int i = 0; i < reshapedFedMap.getFederatedRanges().length; i++) {\n+ long cells = reshapedFedMap.getFederatedRanges()[i].getSize();\n+ long row = byRow.getBooleanValue() ? cells / cols : rows;\n+ long col = byRow.getBooleanValue() ? cols : cells / rows;\n+\n+ reshapedFedMap.getFederatedRanges()[i].setBeginDim(0,\n+ (reshapedFedMap.getFederatedRanges()[i].getBeginDims()[0] == 0 || i == 0) ? 0 :\n+ reshapedFedMap.getFederatedRanges()[i - 1].getEndDims()[0]);\n+ reshapedFedMap.getFederatedRanges()[i]\n+ .setEndDim(0, reshapedFedMap.getFederatedRanges()[i].getBeginDims()[0] + row);\n+ reshapedFedMap.getFederatedRanges()[i].setBeginDim(1,\n+ (reshapedFedMap.getFederatedRanges()[i].getBeginDims()[1] == 0 || i == 0) ? 0 :\n+ reshapedFedMap.getFederatedRanges()[i - 1].getEndDims()[1]);\n+ reshapedFedMap.getFederatedRanges()[i]\n+ .setEndDim(1, reshapedFedMap.getFederatedRanges()[i].getBeginDims()[1] + col);\n+ }\n+\n+ //derive output federated mapping\n+ MatrixObject out = ec.getMatrixObject(output);\n+ out.getDataCharacteristics().set(rows, cols, (int) mo1.getBlocksize(), mo1.getNnz());\n+ out.setFedMapping(reshapedFedMap.copyWithNewID(fr1[0].getID()));\n+ }\n+ else {\n+ // TODO support tensor out, frame and list\n+ throw new DMLRuntimeException(\"Federated Reshape Instruction only supports matrix as output.\");\n+ }\n+ }\n+\n+ // replace old reshape values for each worker\n+ private static String[] getNewInstString(MatrixObject mo1, String instString, int rows, int cols, boolean byRow) {\n+ String[] instStrings = new String[mo1.getFedMapping().getSize()];\n+\n+ int sameFedSize = Arrays.stream(mo1.getFedMapping().getFederatedRanges()).map(FederatedRange::getSize)\n+ .collect(Collectors.toSet()).size();\n+ sameFedSize = sameFedSize == 1 ? 1 : mo1.getFedMapping().getSize();\n+\n+ for(int i = 0; i < sameFedSize; i++) {\n+ String[] instParts = instString.split(Lop.OPERAND_DELIMITOR);\n+ long size = mo1.getFedMapping().getFederatedRanges()[i].getSize();\n+ String oldInstStringPart = byRow ? 
instParts[3] : instParts[4];\n+ String newInstStringPart = byRow ?\n+ oldInstStringPart.replace(String.valueOf(rows), String.valueOf(size/cols)) :\n+ oldInstStringPart.replace(String.valueOf(cols), String.valueOf(size/rows));\n+ instStrings[i] = instString.replace(oldInstStringPart, newInstStringPart);\n+ }\n+\n+ if(sameFedSize == 1)\n+ Arrays.fill(instStrings, instStrings[0]);\n+\n+ return instStrings;\n+ }\n+\n+ @Override\n+ public Pair<String, LineageItem> getLineageItem(ExecutionContext ec) {\n+ return Pair.of(output.getName(),\n+ new LineageItem(getOpcode(), LineageItemUtils.getLineage(ec, input1, _opRows, _opCols, _opDims, _opByRow)));\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedReshapeTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Ignore;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedReshapeTest extends AutomatedTestBase {\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private final static String TEST_NAME = \"FederatedReshapeTest\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederatedReshapeTest.class.getSimpleName() + \"/\";\n+\n+ private final static int blocksize = 1024;\n+ @Parameterized.Parameter()\n+ public int rows;\n+\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+\n+ @Parameterized.Parameter(2)\n+ public int rRows;\n+\n+ @Parameterized.Parameter(3)\n+ public int rCols;\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {\n+ {12, 12, 144, 1},\n+ {12, 12, 24, 6},\n+ {12, 12, 48, 3}\n+ });\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"S\"}));\n+ }\n+\n+ @Test\n+ public void federatedReshapeCP() {\n+ federatedReshape(Types.ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ @Ignore\n+ public void federatedReshapeSP() {\n+ federatedReshape(Types.ExecMode.SPARK);\n+ }\n+\n+ public void federatedReshape(Types.ExecMode execMode) {\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ Types.ExecMode platformOld = rtplatform;\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ double[][] X1 = getRandomMatrix(2, cols, 1, 5, 1, 3);\n+ double[][] X2 = getRandomMatrix(2, cols, 1, 5, 1, 7);\n+ double[][] X3 = getRandomMatrix(6, cols, 1, 5, 1, 8);\n+ double[][] X4 = getRandomMatrix(2, cols, 1, 5, 1, 9);\n+\n+ MatrixCharacteristics mc1 = new MatrixCharacteristics(6, cols, blocksize, 6*cols);\n+ MatrixCharacteristics mc2 = new MatrixCharacteristics(2, cols, blocksize, 2*cols);\n+ writeInputMatrixWithMTD(\"X1\", X1, false, mc2);\n+ writeInputMatrixWithMTD(\"X2\", X2, false, mc2);\n+ writeInputMatrixWithMTD(\"X3\", X3, false, mc1);\n+ 
writeInputMatrixWithMTD(\"X4\", X4, false, mc2);\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ int port3 = getRandomAvailablePort();\n+ int port4 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread t2 = startLocalFedWorkerThread(port2, FED_WORKER_WAIT_S);\n+ Thread t3 = startLocalFedWorkerThread(port3, FED_WORKER_WAIT_S);\n+ Thread t4 = startLocalFedWorkerThread(port4);\n+\n+ // reference file should not be written to hdfs, so we set platform here\n+ rtplatform = execMode;\n+ if(rtplatform == Types.ExecMode.SPARK) {\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+ // Run reference dml script with normal matrix for Row/Col\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-args\",\n+ input(\"X1\"), input(\"X2\"), input(\"X3\"), input(\"X4\"), expected(\"S\"), String.valueOf(rRows), String.valueOf(rCols)};\n+ runTest(null);\n+\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")),\n+ \"rows=\" + rows,\n+ \"cols=\" + cols,\n+ \"r_rows=\" + rRows,\n+ \"r_cols=\" + rCols,\n+ \"out_S=\" + output(\"S\")};\n+ runTest(null);\n+\n+ // compare all sums via files\n+ compareResults(0.01);\n+\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_rshape\"));\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X3\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X4\")));\n+\n+ TestUtils.shutdownThreads(t1, t2, t3, t4);\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedReshapeTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+/*\n+A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+*/\n+A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list(2, 12), list(2, 0), list(4, $cols),\n+ list(4, 0), list(10, $cols), list(10, 0), list(12, $cols)));\n+\n+s = matrix(A, rows=$r_rows, cols=$r_cols);\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedReshapeTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+A = rbind(read($1), read($2), read($3), read($4));\n+s = matrix(A, rows=$6, cols=$7);\n+write(s, $5);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2762] Federated reshape operations (aligned)
Closes #1129. |
49,700 | 20.12.2020 19:29:13 | -3,600 | b685db68f6ff590f5a90bc8f82cd9a028fa7f02f | Modified Privacy Monitor for Fine-Grained Constraints
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/privacy/PrivacyMonitor.java",
"new_path": "src/main/java/org/apache/sysds/runtime/privacy/PrivacyMonitor.java",
"diff": "@@ -29,6 +29,8 @@ public class PrivacyMonitor\n{\nprivate static EnumMap<PrivacyLevel,LongAdder> checkedConstraints;\n+ private static boolean checkPrivacy = false;\n+\nstatic {\ncheckedConstraints = new EnumMap<>(PrivacyLevel.class);\nfor ( PrivacyLevel level : PrivacyLevel.values() ){\n@@ -36,19 +38,37 @@ public class PrivacyMonitor\n}\n}\n- private static boolean checkPrivacy = false;\n-\npublic static EnumMap<PrivacyLevel,LongAdder> getCheckedConstraints() {\nreturn checkedConstraints;\n}\nprivate static void incrementCheckedConstraints(PrivacyLevel privacyLevel) {\n- if ( checkPrivacy ){\nif ( privacyLevel == null )\nthrow new NullPointerException(\"Cannot increment checked constraints log: Privacy level is null.\");\ncheckedConstraints.get(privacyLevel).increment();\n}\n+ /**\n+ * Update checked constraints log if checkPrivacy is activated.\n+ * The checked constraints log is updated with both the general\n+ * privacy constraint and the fine-grained constraints.\n+ *\n+ * @param privacyConstraint used for updating log\n+ */\n+ private static void updateCheckedConstraintsLog(PrivacyConstraint privacyConstraint) {\n+ if ( checkPrivacy ){\n+ if ( privacyConstraint.privacyLevel != PrivacyLevel.None){\n+ incrementCheckedConstraints(privacyConstraint.privacyLevel);\n+ }\n+ if ( PrivacyUtils.privacyConstraintFineGrainedActivated(privacyConstraint) ){\n+ int privateNum = privacyConstraint.getFineGrainedPrivacy()\n+ .getDataRangesOfPrivacyLevel(PrivacyLevel.Private).length;\n+ int aggregateNum = privacyConstraint.getFineGrainedPrivacy()\n+ .getDataRangesOfPrivacyLevel(PrivacyLevel.PrivateAggregation).length;\n+ checkedConstraints.get(PrivacyLevel.Private).add(privateNum);\n+ checkedConstraints.get(PrivacyLevel.PrivateAggregation).add(aggregateNum);\n+ }\n+ }\n}\npublic static void clearCheckedConstraints(){\n@@ -61,6 +81,7 @@ public class PrivacyMonitor\n/**\n* Throws DMLPrivacyException if privacy constraint is set to private or private aggregation.\n+ * The checked constraints log will be updated before throwing an exception.\n* @param dataObject input data object\n* @return data object or data object with privacy constraint removed in case the privacy level was none.\n*/\n@@ -68,51 +89,12 @@ public class PrivacyMonitor\nif(dataObject == null)\nreturn null;\nPrivacyConstraint privacyConstraint = dataObject.getPrivacyConstraint();\n- if (privacyConstraint != null){\n- PrivacyLevel privacyLevel = privacyConstraint.getPrivacyLevel();\n- incrementCheckedConstraints(privacyLevel);\n- switch(privacyLevel){\n- case None:\n- dataObject.setPrivacyConstraints(null);\n- break;\n- case Private:\n- case PrivateAggregation:\n- throw new DMLPrivacyException(\"Cannot share variable, since the privacy constraint \"\n- + \"of the requested variable is set to \" + privacyLevel.name());\n- default: {\n- throw new DMLPrivacyException(\"Privacy level \"\n- + privacyLevel.name() + \" of variable not recognized\");\n- }\n- }\n- }\n- return dataObject;\n- }\n- /**\n- * Throws DMLPrivacyException if privacy constraint of data object has level privacy.\n- * @param dataObject input matrix object\n- * @return data object or data object with privacy constraint removed in case the privacy level was none.\n- */\n- public static Data handlePrivacyAllowAggregation(Data dataObject){\n- PrivacyConstraint privacyConstraint = dataObject.getPrivacyConstraint();\n- if (privacyConstraint != null){\n- PrivacyLevel privacyLevel = privacyConstraint.getPrivacyLevel();\n- incrementCheckedConstraints(privacyLevel);\n- 
switch(privacyLevel){\n- case None:\n- dataObject.setPrivacyConstraints(null);\n- break;\n- case Private:\n+ if ( PrivacyUtils.someConstraintSetUnary(privacyConstraint) ){\n+ updateCheckedConstraintsLog(privacyConstraint);\nthrow new DMLPrivacyException(\"Cannot share variable, since the privacy constraint \"\n- + \"of the requested variable is set to \" + privacyLevel.name());\n- case PrivateAggregation:\n- break;\n- default: {\n- throw new DMLPrivacyException(\"Privacy level \"\n- + privacyLevel.name() + \" of variable not recognized\");\n- }\n- }\n- }\n+ + \"of the requested variable is activated\");\n+ } else dataObject.setPrivacyConstraints(null);\nreturn dataObject;\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2668] Modified Privacy Monitor for Fine-Grained Constraints
Closes #1120. |
49,706 | 17.12.2020 12:42:14 | -3,600 | fea39149aee39419202cdbdf0c5273d8836629c8 | Transpose micro benchmark
This micro benchmark considers multiple cases: tall-skinny, short-wide,
and "normal" matrices. It gives an indication of whether the transpose is
parallelizing and using the hardware appropriately.
Closes # 1127 | [
{
"change_type": "MODIFY",
"old_path": ".gitignore",
"new_path": ".gitignore",
"diff": "@@ -113,3 +113,4 @@ src/main/cpp/bin\n# Performance Test artifacts\nscripts/perftest/results\n+scripts/perftest/in\n\\ No newline at end of file\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/perftest/MatrixTranspose.sh",
"diff": "+#!/usr/bin/env bash\n+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# Set properties\n+export LOG4JPROP='scripts/perftest/conf/log4j-off.properties'\n+export SYSDS_QUIET=1\n+export SYSTEMDS_ROOT=$(pwd)\n+export PATH=$SYSTEMDS_ROOT/bin:$PATH\n+\n+export SYSTEMDS_STANDALONE_OPTS=\"-Xmx20g -Xms20g -Xmn2000m\"\n+\n+mkdir -p 'scripts/perftest/results'\n+\n+repeatScript=5\n+methodRepeat=5\n+sparsities=(\"1.0 0.1\")\n+\n+for s in $sparsities; do\n+\n+ LogName=\"scripts/perftest/results/transpose-skinny-$s.log\"\n+ rm -f $LogName\n+\n+ # Baseline\n+ perf stat -d -d -d -r $repeatScript \\\n+ systemds scripts/perftest/scripts/transpose.dml \\\n+ -config scripts/perftest/conf/std.xml \\\n+ -stats \\\n+ -args 2500000 50 $s $methodRepeat \\\n+ >>$LogName 2>&1\n+\n+ echo $LogName\n+ cat $LogName | grep -E ' r. |Total elapsed time|-----------| instructions | cycles | CPUs utilized ' | tee $LogName.log\n+\n+ LogName=\"scripts/perftest/results/transpose-wide-$s.log\"\n+ rm -f $LogName\n+\n+ # Baseline\n+ perf stat -d -d -d -r $repeatScript \\\n+ systemds scripts/perftest/scripts/transpose.dml \\\n+ -config scripts/perftest/conf/std.xml \\\n+ -stats \\\n+ -args 50 2500000 $s $methodRepeat \\\n+ >>$LogName 2>&1\n+\n+ echo $LogName\n+ cat $LogName | grep -E ' r. |Total elapsed time|-----------| instructions | cycles | CPUs utilized ' | tee $LogName.log\n+\n+ LogName=\"scripts/perftest/results/transpose-full-$s.log\"\n+ rm -f $LogName\n+\n+ # Baseline\n+ perf stat -d -d -d -r $repeatScript \\\n+ systemds scripts/perftest/scripts/transpose.dml \\\n+ -config scripts/perftest/conf/std.xml \\\n+ -stats \\\n+ -args 20000 5000 $s $methodRepeat \\\n+ >>$LogName 2>&1\n+\n+ echo $LogName\n+ cat $LogName | grep -E ' r. |Total elapsed time|-----------| instructions | cycles | CPUs utilized ' | tee $LogName.log\n+done\n+\n+LogName=\"scripts/perftest/results/transpose-large.log\"\n+rm -f $LogName\n+# Baseline\n+perf stat -d -d -d -r $repeatScript \\\n+ systemds scripts/perftest/scripts/transpose.dml \\\n+ -config scripts/perftest/conf/std.xml \\\n+ -stats \\\n+ -args 15000000 30 0.8 $methodRepeat \\\n+ >>$LogName 2>&1\n+\n+echo $LogName\n+cat $LogName | grep -E ' r. |Total elapsed time|-----------| instructions | cycles | CPUs utilized ' | tee $LogName.log\n+\n+\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/perftest/runAll.sh",
"new_path": "scripts/perftest/runAll.sh",
"diff": "# Micro Benchmarks:\n./scripts/perftest/MatrixMult.sh\n+./scripts/perftest/MatrixTranspose.sh\n# Algorithms Benchmarks:\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/perftest/scripts/transpose.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+x = rand(rows=$1, cols=$2, min= 0.0, max= 1.0, sparsity=$3, seed= 12)\n+for(i in 1:$4) {\n+ res = t(x)\n+}\n+print(sum(res))\n\\ No newline at end of file\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2760] Transpose micro benchmark
This micro benchmark considers multiple cases: tall-skinny, short-wide,
and "normal" matrices. It gives an indication of whether the transpose is
parallelizing and using the hardware appropriately.
Closes # 1127 |
49,706 | 12.12.2020 16:05:01 | -3,600 | 06b5e4d741d9db2cdcd99b0cc10cc476b5c98668 | PCA Transpose(Predict) and Inverse
This commit adds functions for PCA transform and inverse, to enable
transforming unseen data after training and to invert the
PCA transform back to an approximation of the original data. | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/pca.dml",
"new_path": "scripts/builtin/pca.dml",
"diff": "# ---------------------------------------------------------------------------------------------\n# Xout Matrix --- Output feature matrix with K columns\n# Mout Matrix --- Output dominant eigen vectors (can be used for projections)\n+# Centering Matrix --- The column means of the input, subtracted to construct the PCA\n+# ScaleFactor Matrix --- The Scaling of the values, to make each dimension same size.\n# ---------------------------------------------------------------------------------------------\nm_pca = function(Matrix[Double] X, Integer K=2, Boolean center=TRUE, Boolean scale=TRUE)\n- return (Matrix[Double] Xout, Matrix[Double] Mout)\n+ return (Matrix[Double] Xout, Matrix[Double] Mout, Matrix[Double] Centering, Matrix[Double] ScaleFactor)\n{\nN = nrow(X);\nD = ncol(X);\n# perform z-scoring (centering and scaling)\n- X = scale(X, center, scale);\n+ [X, Centering, ScaleFactor] = scale(X, center, scale);\n# co-variance matrix\nmu = colSums(X)/N;\n@@ -61,4 +63,5 @@ m_pca = function(Matrix[Double] X, Integer K=2, Boolean center=TRUE, Boolean sca\n# Construct new data set by treating computed dominant eigenvectors as the basis vectors\nXout = X %*% evec_dominant;\nMout = evec_dominant;\n+\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/pcaInverse.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# Principal Component Analysis (PCA) for reconstruction of approximation of the original data.\n+#\n+# This methods allows to reconstruct an approximation of the original matrix, and is usefull for\n+# calculating how much information is lost in the PCA.\n+#\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# X Matrix --- Input features that have PCA applied to them\n+# Centering Matrix empty matrix The column means of the PCA model, subtracted to construct the PCA\n+# ScaleFactor Matrix empty matrix The scaling of each dimension in the PCA model\n+# ---------------------------------------------------------------------------------------------\n+# Y Matrix --- Output feature matrix reconstructing and approximation of the original matrix\n+# ---------------------------------------------------------------------------------------------\n+\n+m_pcaInverse = function(Matrix[Double] Y, Matrix[Double] Clusters,\n+ Matrix[Double] Centering = matrix(0, rows= 0, cols=0),\n+ Matrix[Double] ScaleFactor = matrix(0, rows= 0, cols=0))\n+ return (Matrix[Double] X)\n+{\n+ X = Y %*% t(Clusters)\n+\n+ if(nrow(ScaleFactor) > 0 & ncol(ScaleFactor) > 0){\n+ X = X * ScaleFactor\n+ }\n+\n+ if(nrow(Centering) > 0 & ncol(Centering) > 0){\n+ X = X + Centering\n+ }\n+\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/pcaTransform.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# Principal Component Analysis (PCA) for dimensionality reduction prediciton\n+#\n+# This method is used to transpose data, which the PCA model was not trained on. To validate how good\n+# The PCA is, and to apply in production.\n+#\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# X Matrix --- Input feature matrix\n+# Centering Matrix empty matrix The column means of the PCA model, subtracted to construct the PCA\n+# ScaleFactor Matrix empty matrix The scaling of each dimension in the PCA model\n+# ---------------------------------------------------------------------------------------------\n+# Y Matrix --- Output feature matrix dimensionally reduced by PCA\n+# ---------------------------------------------------------------------------------------------\n+\n+m_pcaTransform = function(Matrix[Double] X, Matrix[Double] Clusters,\n+ Matrix[Double] Centering = matrix(0, rows= 0, cols=0),\n+ Matrix[Double] ScaleFactor = matrix(0, rows= 0, cols=0))\n+ return (Matrix[Double] Y)\n+{\n+\n+ if(nrow(Centering) > 0 & ncol(Centering) > 0){\n+ X = X - Centering\n+ }\n+ if(nrow(ScaleFactor) > 0 & ncol(ScaleFactor) > 0){\n+ X = X / ScaleFactor\n+ }\n+\n+ Y = X %*% Clusters\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/scale.dml",
"new_path": "scripts/builtin/scale.dml",
"diff": "#\n#-------------------------------------------------------------\n-# Scale and center individual features in the input matrix\n-# (column-wise) using z-score to scale the values.\n-# -----------------------------------------------------------------------------\n+# Scale and center individual features in the input matrix (column wise.) using z-score to scale the values.\n+# ---------------------------------------------------------------------------------------------\n# NAME TYPE DEFAULT MEANING\n-# -----------------------------------------------------------------------------\n+# ---------------------------------------------------------------------------------------------\n# X Matrix --- Input feature matrix\n-# center Boolean TRUE Indicates whether or not to center the feature matrix\n-# scale Boolean TRUE Indicates whether or not to scale the feature matrix\n-# -----------------------------------------------------------------------------\n+# Center Boolean TRUE Indicates whether or not to center the feature matrix\n+# Scale Boolean TRUE Indicates whether or not to scale the feature matrix\n+# ---------------------------------------------------------------------------------------------\n# Y Matrix --- Output feature matrix with K columns\n-# -----------------------------------------------------------------------------\n+# ColMean Matrix --- The column means of the input, subtracted if Center was TRUE\n+# ScaleFactor Matrix --- The Scaling of the values, to make each dimension have similar value ranges\n+# ---------------------------------------------------------------------------------------------\n-m_scale = function(Matrix[Double] X, Boolean center, Boolean scale) return (Matrix[Double] Y) {\n- if( center )\n- X = X - colMeans(X);\n+m_scale = function(Matrix[Double] X, Boolean center, Boolean scale)\n+ return (Matrix[Double] Y, Matrix[Double] ColMean, Matrix[Double] ScaleFactor)\n+{\n+ if(center){\n+ ColMean = colMeans(X)\n+ X = X - ColMean\n+ }\n+ else {\n+ # Allocate the ColMean as an empty matrix,\n+ # to return something on the function call.\n+ ColMean = matrix(0,rows=0,cols=0)\n+ }\nif (scale) {\n- cvars = colSums(X^2)/(nrow(X)-1);\n+ N = nrow(X)\n+\n+ ScaleFactor = sqrt(colSums(X^2)/(N-1))\n- #scale by std-dev and replace NaNs with 0's\n- X = replace(target=X/sqrt(cvars),\n- pattern=NaN, replacement=0);\n+ # Replace entries in the scale factor that are 0 with 1.\n+ # To avoid division by 0, introducing NaN to the ouput.\n+ ScaleFactor = replace(target=ScaleFactor,\n+ pattern=0, replacement=1);\n+\n+ X = X/ScaleFactor\n+\n+ }\n+ else{\n+ # Allocate the Scale factor as an empty matrix,\n+ # to return something on the function call.\n+ ScaleFactor = matrix(0, rows= 0, cols=0)\n}\n- Y = X;\n+ Y = X\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -164,6 +164,8 @@ public enum Builtins {\nOUTLIER_SD(\"outlierBySd\", true),\nOUTLIER_IQR(\"outlierByIQR\", true),\nPCA(\"pca\", true),\n+ PCAINVERSE(\"pcaInverse\", true),\n+ PCATRANSFORM(\"pcaTransform\", true),\nPNMF(\"pnmf\", true),\nPPCA(\"ppca\", true),\nPPRED(\"ppred\", false),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedPCATest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedPCATest.java",
"diff": "@@ -69,17 +69,17 @@ public class FederatedPCATest extends AutomatedTestBase {\n@Test\npublic void federatedPCASinglenode() {\n- federatedL2SVM(Types.ExecMode.SINGLE_NODE);\n+ federatedPCA(Types.ExecMode.SINGLE_NODE);\n}\n@Test\npublic void federatedPCAHybrid() {\n- federatedL2SVM(Types.ExecMode.HYBRID);\n+ federatedPCA(Types.ExecMode.HYBRID);\n}\n- public void federatedL2SVM(Types.ExecMode execMode) {\n+ public void federatedPCA(Types.ExecMode execMode) {\nExecMode platformOld = setExecMode(execMode);\n-\n+ setOutputBuffering(true);\ngetAndLoadTestConfiguration(TEST_NAME);\nString HOME = SCRIPT_DIR + TEST_DIR;\n@@ -114,7 +114,7 @@ public class FederatedPCATest extends AutomatedTestBase {\nfullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\nprogramArgs = new String[] {\"-stats\", \"-args\", input(\"X1\"), input(\"X2\"), input(\"X3\"), input(\"X4\"),\nString.valueOf(scaleAndShift).toUpperCase(), expected(\"Z\")};\n- runTest(true, false, null, -1);\n+ runTest(null);\n// Run actual dml script with federated matrix\nfullDMLScriptName = HOME + TEST_NAME + \".dml\";\n@@ -123,7 +123,7 @@ public class FederatedPCATest extends AutomatedTestBase {\n\"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n\"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + cols,\n\"scaleAndShift=\" + String.valueOf(scaleAndShift).toUpperCase(), \"out=\" + output(\"Z\")};\n- runTest(true, false, null, -1);\n+ runTest(null);\n// compare via files\ncompareResults(1e-9);\n@@ -138,7 +138,6 @@ public class FederatedPCATest extends AutomatedTestBase {\nAssert.assertTrue(heavyHittersContainsString(\"fed_uacmean\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_-\"));\nAssert.assertTrue(heavyHittersContainsString(\"fed_/\"));\n- Assert.assertTrue(heavyHittersContainsString(\"fed_replace\"));\n}\n// check that federated input files are still existing\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedPCATest.dml",
"new_path": "src/test/scripts/functions/federated/FederatedPCATest.dml",
"diff": "X = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\nranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\nlist(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)))\n-[X2,M] = pca(X=X, K=2, scale=$scaleAndShift, center=$scaleAndShift)\n+[X2,M,c,s] = pca(X=X, K=2, scale=$scaleAndShift, center=$scaleAndShift)\nwrite(X2, $out)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedPCATestReference.dml",
"new_path": "src/test/scripts/functions/federated/FederatedPCATestReference.dml",
"diff": "#-------------------------------------------------------------\nX = rbind(read($1), read($2), read($3), read($4));\n-[X2,M] = pca(X=X, K=2, scale=$5, center=$5)\n+[X2,M,c,s] = pca(X=X, K=2, scale=$5, center=$5)\nwrite(X2, $6)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2757] PCA Transpose(Predict) and Inverse
This commit adds functions for PCA transform and inverse, to enable
transforming unseen data after training and to invert the
PCA transform back to an approximation of the original data. |
49,706 | 23.12.2020 12:20:13 | -3,600 | 4e7ad989107d98633b5dfdd8612a99b1b16053bd | [MINOR] Fix settings for IntelliJ default test execution
This commit changes the default test settings such that they execute
out of the box in the IntelliJ IDE without having to use custom
configurations forcing IntelliJ to use Maven. | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"diff": "@@ -216,9 +216,16 @@ public abstract class AutomatedTestBase {\nprivate static boolean outputBuffering = false;\nstatic {\n+ // Load configuration from setting file build by maven.\n+ // If maven is not used as test setup, (as default in intellij for instance) default values are used.\n+ // If one wants to use custom configurations, setup the IDE to build using maven, and set execution flags\n+ // accordingly.\n+ // Settings available can be found in the properties inside pom.xml.\n+ // The custom configuration is required to run tests using GPU backend.\njava.io.InputStream inputStream = Thread.currentThread().getContextClassLoader()\n.getResourceAsStream(\"my.properties\");\njava.util.Properties properties = new Properties();\n+ if(inputStream != null){\ntry {\nproperties.load(inputStream);\n}\n@@ -231,6 +238,13 @@ public abstract class AutomatedTestBase {\nboolean stats = Boolean.parseBoolean(properties.getProperty(\"enableStats\"));\nVERBOSE_STATS = VERBOSE_STATS || stats;\n}\n+ else{\n+ // If no properties file exists.\n+ outputBuffering = false;\n+ TEST_GPU = false;\n+ VERBOSE_STATS = false;\n+ }\n+ }\n// Timestamp before test start.\nprivate long lTimeBeforeTest;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix settings for IntelliJ default test execution
This commit changes the default test settings such that they execute
out of the box in the IntelliJ IDE without having to use custom
configurations forcing IntelliJ to use Maven. |
49,689 | 23.12.2020 18:38:13 | -3,600 | b75cf91b9a1077fc04468417ce87d33181295c62 | Fix lineage cache eviction test
This patch replaces the current cache eviction test script
with a better and (hopefully) more robust one. This script simulates
a mini-batch scenario with batch-wise preprocessing, which can
be reused per epoch. | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"diff": "@@ -29,7 +29,6 @@ import org.apache.sysds.hops.recompile.Recompiler;\nimport org.apache.sysds.runtime.lineage.Lineage;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig.ReuseCacheType;\n-import org.apache.sysds.runtime.lineage.LineageCacheEviction;\nimport org.apache.sysds.runtime.lineage.LineageCacheStatistics;\nimport org.apache.sysds.runtime.matrix.data.MatrixValue;\nimport org.apache.sysds.test.TestConfiguration;\n@@ -42,7 +41,7 @@ import org.junit.Test;\npublic class CacheEvictionTest extends LineageBase {\nprotected static final String TEST_DIR = \"functions/lineage/\";\n- protected static final String TEST_NAME1 = \"CacheEviction1\";\n+ protected static final String TEST_NAME1 = \"CacheEviction2\";\nprotected String TEST_CLASS_DIR = TEST_DIR + CacheEvictionTest.class.getSimpleName() + \"/\";\n@@ -65,17 +64,16 @@ public class CacheEvictionTest extends LineageBase {\nLOG.debug(\"------------ BEGIN \" + testname + \"------------\");\n/* This test verifies the order of evicted items w.r.t. the specified\n- * cache policies. This test enables individual components of the\n- * scoring function by masking the other components, and compare the\n- * order of evicted entries for different policies. HYBRID policy is\n- * not considered for this test as it is hard to anticipate the reuse\n- * statistics if all the components are unmasked.\n+ * cache policies, using a mini-batch wise autoencoder inspired\n+ * test script. An epoch-wise reusable scale and shift is part of\n+ * every batch processing. LRU fails to reuse the scale calls as\n+ * it tends to evicts scale and shift intermediates due to higher\n+ * number of post scale intermediates, where cost & size successfully\n+ * reuses all the reusable operations.\n*\n- * TODO: Test disk spilling, which will need some tunings in eviction\n- * logic; otherwise the automated test might take significantly\n- * longer as eviction logic tends to just delete entries with little\n- * computation and estimated I/O time. Note that disk spilling is\n- * already happening as part of other tests (e.g. MultiLogReg).\n+ * TODO: add DagHeight. 
All three policies perform as expected in my\n+ * laptop, but for some reasons, LRU performs better in github actions\n+ * - that leads to failed comparison between dagheight and LRU.\n*/\nOptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = false;\n@@ -84,8 +82,7 @@ public class CacheEvictionTest extends LineageBase {\ngetAndLoadTestConfiguration(testname);\nfullDMLScriptName = getScript();\nLineage.resetInternalState();\n- long cacheSize = LineageCacheEviction.getCacheLimit();\n- LineageCacheConfig.setReusableOpcodes(\"exp\", \"+\", \"round\");\n+ LineageCacheConfig.setSpill(false); //disable spilling\n// LRU based eviction\nList<String> proArgs = new ArrayList<>();\n@@ -94,14 +91,12 @@ public class CacheEvictionTest extends LineageBase {\nproArgs.add(ReuseCacheType.REUSE_FULL.name().toLowerCase());\nproArgs.add(\"policy_lru\");\nproArgs.add(\"-args\");\n- proArgs.add(String.valueOf(cacheSize));\nproArgs.add(output(\"R\"));\nprogramArgs = proArgs.toArray(new String[proArgs.size()]);\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\nHashMap<MatrixValue.CellIndex, Double> R_lru = readDMLMatrixFromOutputDir(\"R\");\n- long expCount_lru = Statistics.getCPHeavyHitterCount(\"exp\");\nlong hitCount_lru = LineageCacheStatistics.getInstHits();\n- long evictedCount_lru = LineageCacheStatistics.getMemDeletes();\n+ long colmeanCount_lru = Statistics.getCPHeavyHitterCount(\"uacmean\");\n// costnsize scheme (computationTime/Size)\nproArgs.clear();\n@@ -110,35 +105,28 @@ public class CacheEvictionTest extends LineageBase {\nproArgs.add(ReuseCacheType.REUSE_FULL.name().toLowerCase());\nproArgs.add(\"policy_costnsize\");\nproArgs.add(\"-args\");\n- proArgs.add(String.valueOf(cacheSize));\nproArgs.add(output(\"R\"));\nprogramArgs = proArgs.toArray(new String[proArgs.size()]);\nLineage.resetInternalState();\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\nHashMap<MatrixValue.CellIndex, Double> R_costnsize= readDMLMatrixFromOutputDir(\"R\");\n- long expCount_wt = Statistics.getCPHeavyHitterCount(\"exp\");\n- long hitCount_wt = LineageCacheStatistics.getInstHits();\n- long evictedCount_wt = LineageCacheStatistics.getMemDeletes();\n- LineageCacheConfig.resetReusableOpcodes();\n+ long hitCount_cs = LineageCacheStatistics.getInstHits();\n+ long colmeanCount_cs = Statistics.getCPHeavyHitterCount(\"uacmean\");\n// Compare results\nLineage.setLinReuseNone();\nTestUtils.compareMatrices(R_lru, R_costnsize, 1e-6, \"LRU\", \"costnsize\");\n-\n- // Compare reused instructions\n- Assert.assertTrue(expCount_lru >= expCount_wt);\n- // Compare counts of evicted items\n- // LRU tends to evict more entries to recover equal amount of memory\n- // Note: changed to equals to fix flaky tests where both are not evicted at all\n- // (e.g., due to high execution time as sometimes observed through github actions)\n- Assert.assertTrue((\"Violated expected evictions: \"+evictedCount_lru+\" >= \"+evictedCount_wt),\n- evictedCount_lru >= evictedCount_wt);\n// Compare cache hits\n- Assert.assertTrue(hitCount_lru < hitCount_wt);\n+ Assert.assertTrue(\"Violated cache hit count: \"+hitCount_lru+\" < \"+hitCount_cs,\n+ hitCount_lru < hitCount_cs);\n+ // Compare reused instruction (uacmean) counts\n+ Assert.assertTrue(\"Violated uacmean count: \"+colmeanCount_cs+\" < \"+colmeanCount_lru,\n+ colmeanCount_cs < colmeanCount_lru);\n}\nfinally {\nOptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\nOptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\n+ 
LineageCacheConfig.setSpill(true);\nRecompiler.reinitRecompiler();\n}\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/lineage/CacheEviction2.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+D = rand(rows=6400, cols=784, min=0, max=20, seed=42)\n+bs = 32;\n+ep = 10;\n+iter_ep = ceil(nrow(D)/bs);\n+maxiter = ep * iter_ep;\n+beg = 1;\n+iter = 0;\n+i = 1;\n+\n+while (iter < maxiter) {\n+ end = beg + bs - 1;\n+ if (end>nrow(D))\n+ end = nrow(D);\n+ X = D[beg:end,]\n+\n+ #reusable OP across epochs\n+ X = scale(X, TRUE, TRUE);\n+ #pollute cache with not reusable OPs\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+ X = ((X + X) * i - X) / (i+1)\n+\n+ iter = iter + 1;\n+ if (end == nrow(D))\n+ beg = 1;\n+ else\n+ beg = end + 1;\n+ i = i + 1;\n+\n+}\n+R = X;\n+write(R, $1, format=\"text\");\n+\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2769] Fix lineage cache eviction test
This patch replaces the current cache eviction test script
with a better and (hopefully) more robust one. This script simulates
a mini-batch scenario with batch-wise preprocessing, which can
be reused per epoch. |
49,697 | 28.12.2020 22:08:54 | -3,600 | 1d2517e1eb75646363e767a04c5c1b37ad12124e | Federated weighted cross entropy operations (WCEMM)
Quaternary operations, part 1
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"diff": "@@ -317,6 +317,10 @@ public class FederationUtils {\n}\n}\n+ public static ScalarObject aggScalar(AggregateUnaryOperator aop, Future<FederatedResponse>[] ffr) {\n+ return aggScalar(aop, ffr, null);\n+ }\n+\npublic static ScalarObject aggScalar(AggregateUnaryOperator aop, Future<FederatedResponse>[] ffr, FederationMap map) {\nif(!(aop.aggOp.increOp.fn instanceof KahanFunction || (aop.aggOp.increOp.fn instanceof Builtin &&\n(((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstruction.java",
"diff": "@@ -40,6 +40,7 @@ public abstract class FEDInstruction extends Instruction {\nReorg,\nReshape,\nMatrixIndexing,\n+ Quaternary,\nQSort,\nQPick\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -36,6 +36,7 @@ import org.apache.sysds.runtime.instructions.cp.MMTSJCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MultiReturnParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ParameterizedBuiltinCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.QuaternaryCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ReorgCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.UnaryCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction;\n@@ -48,6 +49,7 @@ import org.apache.sysds.runtime.instructions.spark.CentralMomentSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.MapmmSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.QuantilePickSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.QuantileSortSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.QuaternarySPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.UnarySPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.WriteSPInstruction;\n@@ -183,6 +185,12 @@ public class FEDInstructionUtils {\nfedinst = AggregateTernaryFEDInstruction.parseInstruction(ins);\n}\n}\n+ else if(inst instanceof QuaternaryCPInstruction) {\n+ QuaternaryCPInstruction instruction = (QuaternaryCPInstruction) inst;\n+ Data data = ec.getVariable(instruction.input1);\n+ if(data instanceof MatrixObject && ((MatrixObject) data).isFederated())\n+ fedinst = QuaternaryFEDInstruction.parseInstruction(instruction.getInstructionString());\n+ }\n//set thread id for federated context management\nif( fedinst != null ) {\n@@ -256,6 +264,12 @@ public class FEDInstructionUtils {\nreturn VariableCPInstruction.parseInstruction(instruction.getInstructionString());\n}\n}\n+ else if(inst instanceof QuaternarySPInstruction) {\n+ QuaternarySPInstruction instruction = (QuaternarySPInstruction) inst;\n+ Data data = ec.getVariable(instruction.input1);\n+ if(data instanceof MatrixObject && ((MatrixObject) data).isFederated())\n+ fedinst = QuaternaryFEDInstruction.parseInstruction(instruction.getInstructionString());\n+ }\n//set thread id for federated context management\nif( fedinst != null ) {\nfedinst.setTID(ec.getTID());\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryFEDInstruction.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.instructions.fed;\n+\n+import org.apache.sysds.common.Types.DataType;\n+import org.apache.sysds.common.Types.ExecType;\n+import org.apache.sysds.lops.Lop;\n+import org.apache.sysds.lops.WeightedCrossEntropy.WCeMMType;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.instructions.InstructionUtils;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.instructions.fed.QuaternaryWCeMMFEDInstruction;\n+import org.apache.sysds.runtime.matrix.operators.Operator;\n+import org.apache.sysds.runtime.matrix.operators.QuaternaryOperator;\n+\n+public abstract class QuaternaryFEDInstruction extends ComputationFEDInstruction\n+{\n+ protected CPOperand _input4 = null;\n+\n+ protected QuaternaryFEDInstruction(FEDInstruction.FEDType type, Operator operator,\n+ CPOperand in1, CPOperand in2, CPOperand in3, CPOperand in4, CPOperand out, String opcode, String instruction_str)\n+ {\n+ super(type, operator, in1, in2, in3, out, opcode, instruction_str);\n+ _input4 = in4;\n+ }\n+\n+ public static QuaternaryFEDInstruction parseInstruction(String str)\n+ {\n+ if(str.startsWith(ExecType.SPARK.name())) {\n+ // rewrite the spark instruction to a cp instruction\n+ str = str.replace(ExecType.SPARK.name(), ExecType.CP.name());\n+ str = str.replace(\"mapwcemm\", \"wcemm\");\n+ str += Lop.OPERAND_DELIMITOR + \"1\"; //num threads\n+ }\n+\n+ String[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\n+ String opcode = parts[0];\n+\n+ CPOperand in1 = new CPOperand(parts[1]);\n+ CPOperand in2 = new CPOperand(parts[2]);\n+ CPOperand in3 = new CPOperand(parts[3]);\n+ CPOperand out = new CPOperand(parts[5]);\n+\n+ InstructionUtils.checkNumFields(parts, 7);\n+\n+ if(opcode.equals(\"wcemm\")) {\n+ CPOperand in4 = new CPOperand(parts[4]);\n+ checkDataTypes(in1, in2, in3, in4);\n+\n+ WCeMMType wcemm_type = WCeMMType.valueOf(parts[6]);\n+ QuaternaryOperator quaternary_operator = (wcemm_type.hasFourInputs() ?\n+ new QuaternaryOperator(wcemm_type, Double.parseDouble(in4.getName())) :\n+ new QuaternaryOperator(wcemm_type));\n+ return new QuaternaryWCeMMFEDInstruction(quaternary_operator, in1, in2, in3, in4, out, opcode, str);\n+ }\n+\n+ throw new DMLRuntimeException(\"Unsupported opcode (\" + opcode + \") for QuaternaryFEDInstruction.\");\n+ }\n+\n+ protected static void checkDataTypes(CPOperand in1, CPOperand in2, CPOperand in3, CPOperand in4) {\n+ if(in1.getDataType() != DataType.MATRIX || in2.getDataType() != DataType.MATRIX\n+ || in3.getDataType() != DataType.MATRIX\n+ || !(in4.getDataType() == DataType.SCALAR || in4.getDataType() == DataType.MATRIX)) {\n+ throw new 
DMLRuntimeException(\"Federated quaternary operations \"\n+ + \"only supported with matrix inputs and scalar epsilon.\");\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWCeMMFEDInstruction.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.instructions.fed;\n+\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest.RequestType;\n+import org.apache.sysds.runtime.matrix.operators.AggregateUnaryOperator;\n+import org.apache.sysds.runtime.instructions.InstructionUtils;\n+import org.apache.sysds.common.Types.DataType;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n+import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.instructions.cp.DoubleObject;\n+import org.apache.sysds.runtime.instructions.cp.ScalarObject;\n+import org.apache.sysds.runtime.matrix.operators.Operator;\n+import org.apache.sysds.runtime.matrix.operators.QuaternaryOperator;\n+\n+import java.util.concurrent.Future;\n+\n+public class QuaternaryWCeMMFEDInstruction extends QuaternaryFEDInstruction\n+{\n+ // input1 ... federated X\n+ // input2 ... U\n+ // input3 ... V\n+ // _input4 ... 
W (=epsilon)\n+ protected QuaternaryWCeMMFEDInstruction(Operator operator,\n+ CPOperand in1, CPOperand in2, CPOperand in3, CPOperand in4,\n+ CPOperand out, String opcode, String instruction_str)\n+ {\n+ super(FEDType.Quaternary, operator, in1, in2, in3, in4, out, opcode, instruction_str);\n+ }\n+\n+ @Override\n+ public void processInstruction(ExecutionContext ec)\n+ {\n+ QuaternaryOperator qop = (QuaternaryOperator) _optr;\n+ MatrixObject X = ec.getMatrixObject(input1);\n+ MatrixObject U = ec.getMatrixObject(input2);\n+ MatrixObject V = ec.getMatrixObject(input3);\n+ ScalarObject eps = null;\n+\n+ if(qop.hasFourInputs()) {\n+ eps = (_input4.getDataType() == DataType.SCALAR) ?\n+ ec.getScalarInput(_input4) :\n+ new DoubleObject(ec.getMatrixInput(_input4.getName()).quickGetValue(0, 0));\n+ }\n+\n+ if(!(X.isFederated() && !U.isFederated() && !V.isFederated()))\n+ throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\"\n+ +X.isFederated()+\", \"+U.isFederated()+\", \"+V.isFederated()+\")\");\n+\n+ FederationMap fedMap = X.getFedMapping();\n+ FederatedRequest[] fr1 = fedMap.broadcastSliced(U, false);\n+ FederatedRequest fr2 = fedMap.broadcast(V);\n+ FederatedRequest fr3 = null;\n+ FederatedRequest frComp = null;\n+\n+ // broadcast the scalar epsilon if there are four inputs\n+ if(eps != null) {\n+ fr3 = fedMap.broadcast(eps);\n+ // change the is_literal flag from true to false because when broadcasted it is no literal anymore\n+ instString = instString.replace(\"true\", \"false\");\n+ frComp = FederationUtils.callInstruction(instString, output,\n+ new CPOperand[]{input1, input2, input3, _input4},\n+ new long[]{fedMap.getID(), fr1[0].getID(), fr2.getID(), fr3.getID()});\n+ }\n+ else {\n+ frComp = FederationUtils.callInstruction(instString, output,\n+ new CPOperand[]{input1, input2, input3},\n+ new long[]{fedMap.getID(), fr1[0].getID(), fr2.getID()});\n+ }\n+\n+ FederatedRequest frGet = new FederatedRequest(RequestType.GET_VAR, frComp.getID());\n+ FederatedRequest frClean1 = fedMap.cleanup(getTID(), frComp.getID());\n+ FederatedRequest frClean2 = fedMap.cleanup(getTID(), fr1[0].getID());\n+ FederatedRequest frClean3 = fedMap.cleanup(getTID(), fr2.getID());\n+\n+ Future<FederatedResponse>[] response;\n+ if(fr3 != null) {\n+ FederatedRequest frClean4 = fedMap.cleanup(getTID(), fr3.getID());\n+ // execute federated instructions\n+ response = fedMap.execute(getTID(), true, fr1, fr2, fr3,\n+ frComp, frGet, frClean1, frClean2, frClean3, frClean4);\n+ }\n+ else {\n+ // execute federated instructions\n+ response = fedMap.execute(getTID(), true, fr1, fr2,\n+ frComp, frGet, frClean1, frClean2, frClean3);\n+ }\n+\n+ //aggregate partial results from federated responses\n+ AggregateUnaryOperator aop = InstructionUtils.parseBasicAggregateUnaryOperator(\"uak+\");\n+ ec.setVariable(output.getName(), FederationUtils.aggScalar(aop, response));\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ReorgFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ReorgFEDInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\n-import java.util.AbstractMap;\nimport java.util.HashMap;\n-import java.util.List;\nimport java.util.Map;\nimport org.apache.sysds.common.Types;\n@@ -36,16 +34,13 @@ import org.apache.sysds.runtime.controlprogram.federated.FederationMap;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.functionobjects.DiagIndex;\nimport org.apache.sysds.runtime.functionobjects.RevIndex;\n-import org.apache.sysds.runtime.functionobjects.SortIndex;\nimport org.apache.sysds.runtime.functionobjects.SwapIndex;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.Data;\n-import org.apache.sysds.runtime.instructions.cp.ReorgCPInstruction;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\nimport org.apache.sysds.runtime.matrix.operators.ReorgOperator;\n-import org.apache.sysds.runtime.util.IndexRange;\npublic class ReorgFEDInstruction extends UnaryFEDInstruction {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"diff": "@@ -837,6 +837,10 @@ public abstract class AutomatedTestBase {\nreturn TestUtils.readDMLMatrixFromHDFS(baseDirectory + OUTPUT_DIR + fileName);\n}\n+ protected static HashMap<CellIndex, Double> readDMLMatrixFromExpectedDir(String fileName) {\n+ return TestUtils.readDMLMatrixFromHDFS(baseDirectory + EXPECTED_DIR + fileName);\n+ }\n+\npublic HashMap<CellIndex, Double> readRMatrixFromExpectedDir(String fileName) {\nif(LOG.isInfoEnabled())\nLOG.info(\"R script out: \" + baseDirectory + EXPECTED_DIR + cacheDir + fileName);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedCrossEntropyTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.HashMap;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedWeightedCrossEntropyTest extends AutomatedTestBase\n+{\n+ private final static String STD_TEST_NAME = \"FederatedWCeMMTest\";\n+ private final static String EPS_TEST_NAME = \"FederatedWCeMMEpsTest\";\n+ private final static String TEST_DIR = \"functions/federated/quaternary/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederatedWeightedCrossEntropyTest.class.getSimpleName() + \"/\";\n+\n+ private final static String OUTPUT_NAME = \"Z\";\n+ private final static double TOLERANCE = 1e-9;\n+ private final static int blocksize = 1024;\n+\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+ @Parameterized.Parameter(2)\n+ public int rank;\n+ @Parameterized.Parameter(3)\n+ public double epsilon;\n+ @Parameterized.Parameter(4)\n+ public double sparsity;\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(STD_TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, STD_TEST_NAME, new String[]{OUTPUT_NAME}));\n+ addTestConfiguration(EPS_TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, EPS_TEST_NAME, new String[]{OUTPUT_NAME}));\n+ }\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ // rows must be even\n+ return Arrays.asList(new Object[][] {\n+ // {rows, cols, rank, epsilon, sparsity}\n+ {2000, 50, 10, 0.01, 0.01},\n+ {2000, 50, 10, 0.01, 0.9},\n+ {2000, 50, 10, 6.45, 0.01},\n+ {2000, 50, 10, 6.45, 0.9}\n+ });\n+ }\n+\n+ @BeforeClass\n+ public static void init() {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+\n+ @Test\n+ public void federatedWeightedCrossEntropySingleNode() {\n+ federatedWeightedCrossEntropy(STD_TEST_NAME, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedWeightedCrossEntropySpark() {\n+ federatedWeightedCrossEntropy(STD_TEST_NAME, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedWeightedCrossEntropySingleNodeEpsilon() {\n+ 
federatedWeightedCrossEntropy(EPS_TEST_NAME, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedWeightedCrossEntropySparkEpsilon() {\n+ federatedWeightedCrossEntropy(EPS_TEST_NAME, ExecMode.SPARK);\n+ }\n+\n+// -----------------------------------------------------------------------------\n+\n+ public void federatedWeightedCrossEntropy(String testname, ExecMode execMode)\n+ {\n+ // store the previous platform config to restore it after the test\n+ ExecMode platform_old = setExecMode(execMode);\n+\n+ getAndLoadTestConfiguration(testname);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ int fed_rows = rows / 2;\n+ int fed_cols = cols;\n+\n+ // generate dataset\n+ // matrix handled by two federated workers\n+ double[][] X1 = getRandomMatrix(fed_rows, fed_cols, 0, 1, sparsity, 3);\n+ double[][] X2 = getRandomMatrix(fed_rows, fed_cols, 0, 1, sparsity, 7);\n+\n+ double[][] U = getRandomMatrix(rows, rank, 0, 1, 1, 512);\n+ double[][] V = getRandomMatrix(cols, rank, 0, 1, 1, 5040);\n+\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+\n+ writeInputMatrixWithMTD(\"U\", U, true);\n+ writeInputMatrixWithMTD(\"V\", V, true);\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread thread1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread thread2 = startLocalFedWorkerThread(port2);\n+\n+ getAndLoadTestConfiguration(testname);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + testname + \"Reference.dml\";\n+ programArgs = new String[] {\"-nvargs\", \"in_X1=\" + input(\"X1\"), \"in_X2=\" + input(\"X2\"),\n+ \"in_U=\" + input(\"U\"), \"in_V=\" + input(\"V\"), \"in_W=\" + Double.toString(epsilon),\n+ \"out_Z=\" + expected(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // Run actual dml script with federated matrix\n+ fullDMLScriptName = HOME + testname + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_U=\" + input(\"U\"),\n+ \"in_V=\" + input(\"V\"),\n+ \"in_W=\" + Double.toString(epsilon),\n+ \"rows=\" + fed_rows, \"cols=\" + fed_cols, \"out_Z=\" + output(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // compare the results via files\n+ HashMap<CellIndex, Double> refResults = readDMLMatrixFromExpectedDir(OUTPUT_NAME);\n+ HashMap<CellIndex, Double> fedResults = readDMLMatrixFromOutputDir(OUTPUT_NAME);\n+ TestUtils.compareMatrices(fedResults, refResults, TOLERANCE, \"Fed\", \"Ref\");\n+\n+ TestUtils.shutdownThreads(thread1, thread2);\n+\n+ // check for federated operations\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_wcemm\"));\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ resetExecMode(platform_old);\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/quaternary/FederatedWCeMMEpsTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($in_X1, $in_X2),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)))\n+\n+U = read($in_U)\n+V = read($in_V)\n+epsilon = $in_W\n+\n+Z = as.matrix(sum(X * log(U %*% t(V) + epsilon)))\n+\n+write(Z, $out_Z)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/quaternary/FederatedWCeMMEpsTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($in_X1), read($in_X2))\n+U = read($in_U)\n+V = read($in_V)\n+epsilon = $in_W\n+\n+Z = as.matrix(sum(X * log(U %*% t(V) + epsilon)))\n+\n+write(Z, $out_Z)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/quaternary/FederatedWCeMMTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($in_X1, $in_X2),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)))\n+\n+U = read($in_U)\n+V = read($in_V)\n+\n+Z = as.matrix(sum(X * log(U %*% t(V))))\n+\n+write(Z, $out_Z)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/quaternary/FederatedWCeMMTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($in_X1), read($in_X2))\n+U = read($in_U)\n+V = read($in_V)\n+\n+Z = as.matrix(sum(X * log(U %*% t(V))))\n+\n+write(Z, $out_Z)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2747] Federated weighted cross entropy operations (WCEMM)
Quaternary operations, part 1
Closes #1133. |
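For readers tracing the wcemm operator exercised in the record above: the fused operation computes Z = sum(X * log(U %*% t(V) + eps)), exactly as spelled out in the reference DML scripts. Below is a minimal plain-Java sketch of those dense semantics; the class and method names are illustrative only and are not part of SystemDS.

```java
// Illustrative only: dense reference semantics of the fused wcemm operator,
// i.e. Z = sum(X * log(U %*% t(V) + eps)); not the SystemDS implementation.
public class WcemmSketch {
    // X is n x m, U is n x r, V is m x r, eps is an optional additive constant
    static double wcemm(double[][] X, double[][] U, double[][] V, double eps) {
        int n = X.length, m = X[0].length, r = U[0].length;
        double sum = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                double uv = 0; // (U %*% t(V))[i,j]
                for (int k = 0; k < r; k++)
                    uv += U[i][k] * V[j][k];
                sum += X[i][j] * Math.log(uv + eps); // cell-wise X * log(UV' + eps)
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        double[][] X = {{1, 0}, {0, 1}};
        double[][] U = {{0.5}, {0.2}};
        double[][] V = {{0.4}, {0.3}};
        System.out.println(wcemm(X, U, V, 0.01));
    }
}
```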
49,763 | 31.12.2020 19:45:12 | -3,600 | 22ec7b13805f8e33b3d9b409924d8e299ad722bb | New arima built-in function (time series forecasting)
DIA project WS2020/21.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/arima.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# Builtin function that implements ARIMA\n+#\n+# INPUT PARAMETERS:\n+# ----------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ----------------------------------------------------------------------------\n+# X Double --- The input Matrix to apply Arima on.\n+# max_func_invoc Int 1000 ?\n+# p Int 0 non-seasonal AR order\n+# d Int 0 non-seasonal differencing order\n+# q Int 0 non-seasonal MA order\n+# P Int 0 seasonal AR order\n+# D Int 0 seasonal differencing order\n+# Q Int 0 seasonal MA order\n+# s Int 1 period in terms of number of time-steps\n+# include_mean Boolean FALSE center to mean 0, and include in result\n+# solver String jacobi solver, is either \"cg\" or \"jacobi\"\n+#\n+# RETURN VALUES\n+# ----------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ----------------------------------------------------------------------------\n+# best_point String --- The calculated coefficients\n+# ----------------------------------------------------------------------------\n+\n+m_arima = function(Matrix[Double] X, Integer max_func_invoc=1000, Integer p=0,\n+ Integer d=0, Integer q=0, Integer P=0, Integer D=0, Integer Q=0, Integer s=1,\n+ Boolean include_mean=FALSE, String solver=\"jacobi\")\n+ return (Matrix[Double] best_point)\n+{\n+ totcols = 1+p+P+Q+q #target col (X), p-P cols, q-Q cols\n+ #print (\"totcols=\" + totcols)\n+\n+ #TODO: check max_func_invoc < totcols --> print warning (stop here ??)\n+\n+ num_rows = nrow(X)\n+ #print(\"nrows of X: \" + num_rows)\n+ if(num_rows <= d)\n+ print(\"non-seasonal differencing order should be smaller than length of the time-series\")\n+\n+ mu = 0.0\n+ if(include_mean == 1){\n+ mu = mean(X)\n+ X = X - mu\n+ }\n+\n+ # d-th order differencing:\n+ for(i in seq(1,d,1))\n+ X = X[2:nrow(X),] - X[1:nrow(X)-1,]\n+\n+ num_rows = nrow(X)\n+ if(num_rows <= s*D)\n+ print(\"seasonal differencing order should be smaller than number of observations divided by length of season\")\n+\n+ for(i in seq(1,D,1)){\n+ n1 = nrow(X)+0.0\n+ X = X[s+1:n1,] - X[1:n1-s,]\n+ }\n+\n+ n = nrow(X)\n+\n+ #Matrix Z with target values of prediction (X) in first column and\n+ #all values that can be used to predict a this target value in column 2:totcols of same row\n+ Z = cbind(X, matrix(0, n, totcols - 1))\n+\n+ #TODO: This operations can be optimised/simplified\n+\n+ #fills Z with values used for non seasonal AR prediction\n+ parfor(i1 in seq(1, p, 1), check=0){\n+ Z[i1+1:n,1+i1] = X[1:n-i1,]\n+ }\n+\n+ #prediciton values for seasonal 
AR\n+ parfor(i2 in seq(1, P, 1), check=0){\n+ Z[s*i2+1:n,1+p+i2] = X[1:n-s*i2,]\n+ }\n+\n+ #prediciton values for non seasonal MA\n+ parfor(i5 in seq(1, q, 1), check=0){\n+ Z[i5+1:n,1+P+p+i5] = X[1:n-i5,]\n+ }\n+\n+ #prediciton values for seasonal MA\n+ parfor(i6 in seq(1,Q, 1), check=0){\n+ Z[s*i6+1:n,1+P+p+q+i6] = X[1:n-s*i6,]\n+ }\n+\n+ simplex = cbind(matrix(0, totcols-1, 1), diag(matrix(0.1, totcols-1, 1)))\n+\n+ num_func_invoc = 0\n+\n+ objvals = matrix(0, 1, ncol(simplex))\n+ parfor(i3 in seq(1,ncol(simplex))){\n+ objvals[1,i3] = arima_css(simplex[,i3], Z, p, P, q, Q, s, solver)\n+ }\n+\n+ num_func_invoc += ncol(simplex)\n+ #print (\"num_func_invoc = \" + num_func_invoc)\n+ tol = 1.5 * 10^(-8) * as.scalar(objvals[1,1])\n+\n+ continue = TRUE\n+ while(continue & num_func_invoc <= max_func_invoc){\n+ best_index = as.scalar(rowIndexMin(objvals))\n+ worst_index = as.scalar(rowIndexMax(objvals))\n+\n+ best_obj_val = as.scalar(objvals[1,best_index])\n+ worst_obj_val = as.scalar(objvals[1,worst_index])\n+\n+ continue = (worst_obj_val > best_obj_val + tol)\n+\n+ #print(\"#Function calls::\" + num_func_invoc + \" OBJ: \" + best_obj_val)\n+\n+ c = (rowSums(simplex) - simplex[,worst_index])/(nrow(simplex))\n+\n+ x_r = 2*c - simplex[,worst_index]\n+ obj_x_r = arima_css(x_r, Z, p, P, q, Q, s, solver)\n+ num_func_invoc += 1\n+\n+ if(obj_x_r < best_obj_val){\n+ x_e = 2*x_r - c\n+ obj_x_e = arima_css(x_e, Z, p, P, q, Q, s, solver)\n+ num_func_invoc = num_func_invoc + 1\n+\n+ simplex[,worst_index] = ifelse (obj_x_r <= obj_x_e, x_r, x_e)\n+ objvals[1,worst_index] = ifelse (obj_x_r <= obj_x_e, obj_x_r, obj_x_e)\n+ }\n+ else {\n+ if(obj_x_r < worst_obj_val){\n+ simplex[,worst_index] = x_r\n+ objvals[1,worst_index] = obj_x_r\n+ }\n+\n+ x_c_in = (simplex[,worst_index] + c)/2\n+ obj_x_c_in = arima_css(x_c_in, Z, p, P, q, Q, s, solver)\n+ num_func_invoc += 1\n+\n+ if(obj_x_c_in < as.scalar(objvals[1,worst_index])){\n+ simplex[,worst_index] = x_c_in\n+ objvals[1,worst_index] = obj_x_c_in\n+ }\n+ else if(obj_x_r >= worst_obj_val){\n+ best_point = simplex[,best_index]\n+ parfor(i4 in 1:ncol(simplex)){\n+ if(i4 != best_index){\n+ simplex[,i4] = (simplex[,i4] + best_point)/2\n+ objvals[1,i4] = arima_css(simplex[,i4], Z, p, P, q, Q, s, solver)\n+ }\n+ }\n+ num_func_invoc += ncol(simplex) - 1\n+ }\n+ }\n+ }\n+\n+ best_point = simplex[,best_index]\n+ if(include_mean)\n+ best_point = rbind(best_point, as.matrix(mu))\n+}\n+\n+ # changing to additive sar since R's arima seems to do that\n+arima_css = function(Matrix[Double] w, Matrix[Double] X,\n+ Integer pIn, Integer P, Integer qIn, Integer Q, Integer s, String solver)\n+ return (Double obj)\n+{\n+ b = X[,2:ncol(X)]%*%w\n+ R = matrix(0, nrow(X), nrow(X))\n+ for(i7 in seq(1, qIn, 1)){\n+ d_ns = matrix(as.scalar(w[P+pIn+i7,1]), nrow(R)-i7, 1)\n+ R[1+i7:nrow(R),1:ncol(R)-i7] = R[1+i7:nrow(R),1:ncol(R)-i7] + diag(d_ns)\n+ }\n+\n+ for(i8 in seq(1, Q, 1)){\n+ err_ind_s = s*i8\n+ d_s = matrix(as.scalar(w[P+pIn+qIn+i8,1]), nrow(R)-err_ind_s, 1)\n+ R[1+err_ind_s:nrow(R),1:ncol(R)-err_ind_s] = R[1+err_ind_s:nrow(R),1:ncol(R)-err_ind_s] + diag(d_s)\n+ }\n+\n+ #TODO: provide default dml \"solve()\" as well\n+ solution = eval(solver + \"_solver\", R, b, 0.01, 100)\n+\n+ errs = X[,1] - solution\n+ obj = sum(errs*errs)\n+}\n+\n+cg_solver = function (Matrix[Double] R, Matrix[Double] B, Double tolerance, Integer max_iterations)\n+ return (Matrix[Double] y_hat)\n+{\n+ y_hat = matrix(0, nrow(R), 1)\n+ iter = 0\n+\n+ A = R + diag(matrix(1, rows=nrow(R), cols=1))\n+ Z = t(A)%*%A\n+ r = 
-(t(A)%*%B)\n+ p = -r\n+ norm_r2 = sum(r^2)\n+\n+ continue = (norm_r2 != 0)\n+ while(iter < max_iterations & continue){\n+ q = Z%*%p\n+ alpha = norm_r2 / as.scalar(t(p) %*% q)\n+ y_hat += alpha * p\n+ r += alpha * q\n+ old_norm_r2 = norm_r2\n+ norm_r2 = sum(r^2)\n+ continue = (norm_r2 >= tolerance)\n+ beta = norm_r2 / old_norm_r2\n+ p = -r + beta * p\n+ iter += 1\n+ }\n+}\n+\n+jacobi_solver = function (Matrix[Double] A, Matrix[Double] B, Double tolerance, Integer max_iterations)\n+ return (Matrix[Double] y_hat)\n+{\n+ y_hat = matrix(0, nrow(A), 1)\n+ iter = 0\n+ diff = tolerance+1\n+\n+ #checking for strict diagonal dominance\n+ #required for jacobi's method\n+ check = sum(rowSums(abs(A)) >= 1)\n+ if(check > 0)\n+ print(\"The matrix is not diagonal dominant. Suggest switching to an exact solver.\")\n+\n+ while(iter < max_iterations & diff > tolerance){\n+ y_hat_new = B - A%*%y_hat\n+ diff = sum((y_hat_new-y_hat)^2)\n+ y_hat = y_hat_new\n+ iter += 1\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -38,6 +38,7 @@ import org.apache.sysds.common.Types.ReturnType;\n*/\npublic enum Builtins {\n//builtin functions\n+ ARIMA(\"arima\", true),\nABS(\"abs\", false),\nGET_ACCURACY(\"getAccuracy\", true),\nABSTAIN(\"abstain\", true),\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinArimaTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.HashMap;\n+\n+import org.junit.Test;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.lops.LopProperties;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+import org.junit.runners.Parameterized.Parameters;\n+\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+\n+@RunWith(value = Parameterized.class)\n+public class BuiltinArimaTest extends AutomatedTestBase {\n+ private final static String TEST_NAME = \"arima\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + BuiltinArimaTest.class.getSimpleName() + \"/\";\n+\n+ protected int max_func_invoc, p, d, q, P, D, Q, s, include_mean, useJacobi;\n+\n+ public BuiltinArimaTest(int m, int p, int d, int q, int P, int D, int Q, int s, int include_mean, int useJacobi){\n+ this.max_func_invoc = m;\n+ this.p = p;\n+ this.d = d;\n+ this.q = q;\n+ this.P = P;\n+ this.D = D;\n+ this.Q = Q;\n+ this.s = s;\n+ this.include_mean = include_mean;\n+ this.useJacobi = useJacobi;\n+ }\n+\n+ @Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {\n+ {20, 1, 0, 0, 0, 0, 0, 24, 1, 1},\n+ {20, 0, 0, 1, 0, 0, 0, 24, 1, 1},\n+ {20, 2, 0, 1, 0, 0, 0, 24, 1, 1},\n+\n+ //TODO fix remaining configurations (e.g., differencing)\n+ //{10, 1, 0, 10, 0, 0, 0, 24, 1, 1}\n+ // {10, 1, 1, 2, 0, 0, 0, 24, 1, 1},\n+ // {10, 0, 1, 2, 0, 0, 0, 24, 1, 1},\n+ // {10, 0, 0, 0, 1, 1, 0, 24, 1, 1},\n+ // {10, 0, 0, 0, 1, 1, 2, 24, 1, 1},\n+ // {10, 0, 0, 0, 0, 1, 2, 24, 1, 1}}\n+ });\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[]{\"B\"}));\n+ }\n+\n+ @Test\n+ public void testArima(){\n+ Types.ExecMode platformOld = setExecMode(LopProperties.ExecType.CP);\n+ try {\n+ loadTestConfiguration(getTestConfiguration(TEST_NAME));\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ fullRScriptName = HOME + TEST_NAME + \".R\";\n+\n+ programArgs = new String[]{\n+ \"-nvargs\", \"X=\" + input(\"col.mtx\"), \"max_func_invoc=\" + max_func_invoc,\n+ \"p=\" + p, \"d=\" + d, \"q=\" + q, \"P=\" + P, \"D=\" + D, \"Q=\" + Q,\n+ \"s=\" + s, \"include_mean=\" + include_mean, \"useJacobi=\" + useJacobi,\n+ \"model=\" + 
output(\"learnt.model\"),};\n+\n+ rCmd = getRCmd(input(\"col.mtx\"), Integer.toString(max_func_invoc), Integer.toString(p),\n+ Integer.toString(d), Integer.toString(q), Integer.toString(P), Integer.toString(D),\n+ Integer.toString(Q), Integer.toString(s), Integer.toString(include_mean),\n+ Integer.toString(useJacobi), expected(\"learnt.model\"));\n+\n+ int timeSeriesLength = 3000;\n+ double[][] timeSeries = getRandomMatrix(timeSeriesLength, 1, 1, 5, 0.9, 54321);\n+\n+ MatrixCharacteristics mc = new MatrixCharacteristics(timeSeriesLength,1,-1,-1);\n+ writeInputMatrixWithMTD(\"col\", timeSeries, true, mc);\n+\n+ runTest(true, false, null, -1);\n+ runRScript(true);\n+\n+ double tol = Math.pow(10, -14);\n+ HashMap<CellIndex, Double> arima_model_R = readRMatrixFromExpectedDir(\"learnt.model\");\n+ HashMap<CellIndex, Double> arima_model_SYSTEMDS= readDMLMatrixFromOutputDir(\"learnt.model\");\n+ TestUtils.compareMatrices(arima_model_R, arima_model_SYSTEMDS, tol, \"arima_R\", \"arima_SYSTEMDS\");\n+ }\n+ catch(Exception ex) {\n+ ex.printStackTrace();\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ }\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/applications/arima_box-jenkins/arima.dml",
"new_path": "src/test/scripts/applications/arima_box-jenkins/arima.dml",
"diff": "@@ -288,7 +288,7 @@ if(include_mean){\n}\nresult_format = ifdef($result_format, \"csv\")\n-write(best_point, dest, format=result_format)\n+write(best_point, dest)\ndebug_printMatrixShape = function (Matrix[Double] M, String matrixIdentifier){\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/arima.R",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+#----------------------------------------\n+# -------- r's arima function -----------\n+# library(\"Matrix\")\n+# args = commandArgs(TRUE)\n+# path_to_x = args[1]\n+# max_func_invoc = as.integer(args[2])\n+# p = as.integer(args[3])\n+# d = as.integer(args[4])\n+# q = as.integer(args[5])\n+# P = as.integer(args[6])\n+# D = as.integer(args[7])\n+# Q = as.integer(args[8])\n+# s = as.integer(args[9])\n+# include_mean = as.integer(args[10])\n+# solver = args[11]\n+#\n+# X = as.matrix(readMM(path_to_x))\n+# model = arima(ts(X), order=c(p,d,q), seasonal=list(order=c(p,d,q),period=s),include.mean=include_mean,method=\"CSS\")\n+# print(coef(model))\n+#----------------------------------------\n+\n+\n+#----------------------------------------\n+# ------- copied test function ----------\n+args <- commandArgs(TRUE)\n+library(Matrix)\n+\n+arima_css = function(w, X, p, P, q, Q, s, useJacobi){\n+ b = matrix(X[,2:ncol(X)], nrow(X), ncol(X)-1)%*%w\n+\n+ R = matrix(0, nrow(X), nrow(X))\n+ if(q>0){\n+ for(i7 in 1:q){\n+ ma_ind_ns = P+p+i7\n+ err_ind_ns = i7\n+ ones_ns = rep(1, nrow(R)-err_ind_ns)\n+ d_ns = ones_ns * w[ma_ind_ns,1]\n+ R[(1+err_ind_ns):nrow(R),1:(ncol(R)-err_ind_ns)] = R[(1+err_ind_ns):nrow(R),1:(ncol(R)-err_ind_ns)] + diag(d_ns)\n+ }\n+ }\n+ if(Q>0){\n+ for(i8 in 1:Q){\n+ ma_ind_s = P+p+q+i8\n+ err_ind_s = s*i8\n+ ones_s = rep(1, nrow(R)-err_ind_s)\n+ d_s = ones_s * w[ma_ind_s,1]\n+ R[(1+err_ind_s):nrow(R),1:(ncol(R)-err_ind_s)] = R[(1+err_ind_s):nrow(R),1:(ncol(R)-err_ind_s)] + diag(d_s)\n+ }\n+ }\n+\n+ max_iter = 100\n+ tol = 0.01\n+\n+ y_hat = matrix(0, nrow(X), 1)\n+ iter = 0\n+\n+ if(useJacobi == 1){\n+ check = sum(ifelse(rowSums(abs(R)) >= 1, 1, 0))\n+ if(check > 0){\n+ print(\"R is not diagonal dominant. 
Suggest switching to an exact solver.\")\n+ }\n+ diff = tol+1.0\n+ while(iter < max_iter & diff > tol){\n+ y_hat_new = matrix(b - R%*%y_hat, nrow(y_hat), 1)\n+ diff = sum((y_hat_new-y_hat)*(y_hat_new-y_hat))\n+ y_hat = y_hat_new\n+ iter = iter + 1\n+ }\n+ }else{\n+ ones = rep(1, nrow(X))\n+ A = R + diag(ones)\n+ Z = t(A)%*%A\n+ y = t(A)%*%b\n+ r = -y\n+ p = -r\n+ norm_r2 = sum(r*r)\n+ while(iter < max_iter & norm_r2 > tol){\n+ q = Z%*%p\n+ alpha = norm_r2 / sum(p*q)\n+ y_hat = y_hat + alpha * p\n+ old_norm_r2 = norm_r2\n+ r = r + alpha * q\n+ norm_r2 = sum(r * r)\n+ beta = norm_r2 / old_norm_r2\n+ p = -r + beta * p\n+ iter = iter + 1\n+ }\n+ }\n+\n+ errs = X[,1] - y_hat\n+ obj = sum(errs*errs)\n+\n+ return(obj)\n+}\n+\n+#input col of time series data\n+X = readMM(args[1])\n+\n+max_func_invoc = as.integer(args[2])\n+\n+#non-seasonal order\n+p = as.integer(args[3])\n+d = as.integer(args[4])\n+q = as.integer(args[5])\n+\n+#seasonal order\n+P = as.integer(args[6])\n+D = as.integer(args[7])\n+Q = as.integer(args[8])\n+\n+#length of the season\n+s = as.integer(args[9])\n+\n+include_mean = as.integer(args[10])\n+\n+useJacobi = as.integer(args[11])\n+\n+num_rows = nrow(X)\n+\n+if(num_rows <= d){\n+ print(\"non-seasonal differencing order should be larger than length of the time-series\")\n+}\n+\n+Y = matrix(X[,1], nrow(X), 1)\n+if(d>0){\n+ for(i in 1:d){\n+ n1 = nrow(Y)\n+ Y = matrix(Y[2:n1,] - Y[1:(n1-1),], n1-1, 1)\n+ }\n+}\n+\n+num_rows = nrow(Y)\n+if(num_rows <= s*D){\n+ print(\"seasonal differencing order should be larger than number of observations divided by length of season\")\n+}\n+\n+if(D>0){\n+ for(i in 1:D){\n+ n1 = nrow(Y)\n+ Y = matrix(Y[(s+1):n1,] - Y[1:(n1-s),], n1-s, 1)\n+ }\n+}\n+\n+n = nrow(Y)\n+\n+max_ar_col = P+p\n+max_ma_col = Q+q\n+if(max_ar_col > max_ma_col){\n+ max_arma_col = max_ar_col\n+}else{\n+ max_arma_col = max_ma_col\n+}\n+\n+mu = 0\n+if(include_mean == 1){\n+ mu = sum(Y)/nrow(Y)\n+ Y = Y - mu\n+}\n+\n+totcols = 1+p+P+Q+q #target col (X), p-P cols, q-Q cols\n+\n+Z = matrix(0, n, totcols)\n+Z[,1] = Y #target col\n+\n+if(p>0){\n+ for(i1 in 1:p){\n+ Z[(i1+1):n,1+i1] = Y[1:(n-i1),]\n+ }\n+}\n+if(P>0){\n+ for(i2 in 1:P){\n+ Z[(s*i2+1):n,1+p+i2] = Y[1:(n-s*i2),]\n+ }\n+}\n+if(q>0){\n+ for(i5 in 1:q){\n+ Z[(i5+1):n,1+P+p+i5] = Y[1:(n-i5),]\n+ }\n+}\n+if(Q>0){\n+ for(i6 in 1:Q){\n+ Z[(s*i6+1):n,1+P+p+q+i6] = Y[1:(n-s*i6),]\n+ }\n+}\n+\n+simplex = matrix(0, totcols-1, totcols)\n+for(i in 2:ncol(simplex)){\n+ simplex[i-1,i] = 0.1\n+}\n+\n+num_func_invoc = 0\n+\n+objvals = matrix(0, 1, ncol(simplex))\n+for(i3 in 1:ncol(simplex)){\n+ objvals[1,i3] = arima_css(matrix(simplex[,i3], nrow(simplex), 1), Z, p, P, q, Q, s, useJacobi)\n+}\n+num_func_invoc = num_func_invoc + ncol(simplex)\n+\n+tol = 1.5 * 10^(-8) * objvals[1,1]\n+\n+continue = TRUE\n+while(continue == 1 & num_func_invoc <= max_func_invoc) {\n+ #print(paste(num_func_invoc, max_func_invoc))\n+ best_index = 1\n+ worst_index = 1\n+ for(i in 2:ncol(objvals)){\n+ this = objvals[1,i]\n+ that = objvals[1,best_index]\n+ if(that > this){\n+ best_index = i\n+ }\n+ that = objvals[1,worst_index]\n+ if(that < this){\n+ worst_index = i\n+ }\n+ }\n+\n+ best_obj_val = objvals[1,best_index]\n+ worst_obj_val = objvals[1,worst_index]\n+ continue = (worst_obj_val > best_obj_val + tol)\n+\n+ #print(paste(\"#Function calls::\", num_func_invoc, \"OBJ:\", best_obj_val))\n+\n+ c = (rowSums(simplex) - simplex[,worst_index])/(nrow(simplex))\n+\n+ x_r = 2*c - simplex[,worst_index]\n+ obj_x_r = arima_css(matrix(x_r, nrow(simplex), 1), Z, p, P, q, Q, 
s, useJacobi)\n+ num_func_invoc = num_func_invoc + 1\n+\n+ if(obj_x_r < best_obj_val){\n+ x_e = 2*x_r - c\n+ obj_x_e = arima_css(matrix(x_e, nrow(simplex), 1), Z, p, P, q, Q, s, useJacobi)\n+ num_func_invoc = num_func_invoc + 1\n+\n+ simplex[,worst_index] = ifelse (obj_x_r <= obj_x_e, x_r, x_e)\n+ objvals[1,worst_index] = ifelse (obj_x_r <= obj_x_e, obj_x_r, obj_x_e)\n+\n+ } else {\n+ if(obj_x_r < worst_obj_val){\n+ simplex[,worst_index] = x_r\n+ objvals[1,worst_index] = obj_x_r\n+ }\n+\n+ x_c_in = (simplex[,worst_index] + c)/2\n+ obj_x_c_in = arima_css(matrix(x_c_in, nrow(simplex), 1), Z, p, P, q, Q, s, useJacobi)\n+ num_func_invoc = num_func_invoc + 1\n+\n+ if(obj_x_c_in < objvals[1,worst_index]){\n+ simplex[,worst_index] = x_c_in\n+ objvals[1,worst_index] = obj_x_c_in\n+ }else{\n+ if(obj_x_r >= worst_obj_val){\n+ best_point = simplex[,best_index]\n+\n+ for(i4 in 1:ncol(simplex)){\n+ if(i4 != best_index){\n+ simplex[,i4] = (simplex[,i4] + best_point)/2\n+ objvals[1,i4] = arima_css(matrix(simplex[,i4], nrow(simplex), 1), Z, p, P, q, Q, s, useJacobi)\n+ }\n+ }\n+ num_func_invoc = num_func_invoc + ncol(simplex) - 1\n+ }\n+ }\n+ }\n+}\n+\n+best_point = matrix(simplex[,best_index], nrow(simplex), 1)\n+#(simplex)\n+#print(best_point)\n+if(include_mean == 1){\n+ tmp = matrix(0, totcols, 1)\n+ tmp[1:nrow(best_point),1] = best_point\n+ tmp[nrow(tmp),1] = mu\n+ best_point = tmp\n+}\n+\n+writeMM(as(best_point, \"CsparseMatrix\"), args[12])\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/arima.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = read($X)\n+solver = ifelse($useJacobi, \"jacobi\", \"cg\")\n+\n+coefficients = arima(X=X, max_func_invoc=$max_func_invoc, p=$p, d=$d, q=$q,\n+ P=$P, D=$D, Q=$Q, s=$s, include_mean=$include_mean, solver=solver)\n+\n+write(coefficients, $model)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2773] New arima built-in function (time series forecasting)
DIA project WS2020/21.
Closes #1137. |
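The arima_css objective in the new builtin delegates to an iterative solver; as a rough illustration, the following self-contained Java sketch mirrors the Jacobi-style fixed-point iteration y <- b - R*y used by jacobi_solver, assuming the iteration converges (the DML script itself warns when R is not diagonally dominant). Names are illustrative, not SystemDS APIs.

```java
// Illustrative sketch of the Jacobi-style iteration in builtin/arima.dml:
// repeatedly evaluates y_new = b - R*y, i.e. a fixed point of (I + R) y = b.
public class JacobiSketch {
    static double[] solve(double[][] R, double[] b, double tol, int maxIter) {
        int n = b.length;
        double[] y = new double[n];
        for (int it = 0; it < maxIter; it++) {
            double[] yNew = new double[n];
            double diff = 0;
            for (int i = 0; i < n; i++) {
                double ri = 0;
                for (int j = 0; j < n; j++)
                    ri += R[i][j] * y[j];
                yNew[i] = b[i] - ri;
                diff += (yNew[i] - y[i]) * (yNew[i] - y[i]);
            }
            y = yNew;
            if (diff <= tol) // squared-distance stopping criterion, as in the script
                break;
        }
        return y;
    }
}
```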
49,700 | 31.12.2020 20:08:01 | -3,600 | 43ed96efe29a6b3e46ad64179b1e4e07995cbcdf | Fix federated privacy exception handling
Remove exception instances from FederatedResponse (do not leak
information in stack traces)
Add catch clauses for DMLPrivacyException and
FederatedWorkerHandlerException
Also remove unused constructors from DMLPrivacyException and add
comments to FederatedWorkerHandlerException
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"diff": "package org.apache.sysds.runtime.controlprogram.federated;\nimport java.io.BufferedReader;\n-import java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.util.Arrays;\n@@ -158,8 +157,10 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nreturn new FederatedResponse(ResponseType.ERROR, ex);\n}\ncatch (Exception ex) {\n- return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(\n- \"Exception of type \" + ex.getClass() + \" thrown when processing request\", ex));\n+ String msg = \"Exception of type \" + ex.getClass() + \" thrown when processing request\";\n+ log.error(msg, ex);\n+ return new FederatedResponse(ResponseType.ERROR,\n+ new FederatedWorkerHandlerException(msg));\n}\n}\n@@ -209,17 +210,16 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nfmt = FileFormat.safeValueOf(mtd.getString(DataExpression.FORMAT_TYPE));\n}\n}\n+ catch (DMLPrivacyException | FederatedWorkerHandlerException ex){\n+ throw ex;\n+ }\ncatch (Exception ex) {\n- throw new DMLRuntimeException(ex);\n+ String msg = \"Exception in reading metadata of: \" + filename;\n+ log.error(msg, ex);\n+ throw new DMLRuntimeException(msg);\n}\nfinally {\n- if(fs != null)\n- try {\n- fs.close();\n- }\n- catch(IOException e) {\n- return new FederatedResponse(ResponseType.ERROR, id);\n- }\n+ IOUtilFunctions.closeSilently(fs);\n}\n// put meta data object in symbol table, read on first operation\n@@ -302,9 +302,14 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\ntry {\npb.execute(ec); // execute single instruction\n}\n+ catch(DMLPrivacyException | FederatedWorkerHandlerException ex){\n+ throw ex;\n+ }\ncatch(Exception ex) {\n- return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(\n- \"Exception of type \" + ex.getClass() + \" thrown when processing EXEC_INST request\", ex));\n+ String msg = \"Exception of type \" + ex.getClass() + \" thrown when processing EXEC_INST request\";\n+ log.error(msg, ex);\n+ return new FederatedResponse(ResponseType.ERROR,\n+ new FederatedWorkerHandlerException(msg));\n}\nreturn new FederatedResponse(ResponseType.SUCCESS_EMPTY);\n}\n@@ -322,9 +327,13 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\ntry {\nreturn udf.execute(ec, inputs);\n}\n+ catch(DMLPrivacyException | FederatedWorkerHandlerException ex){\n+ throw ex;\n+ }\ncatch(Exception ex) {\n- return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(\n- \"Exception of type \" + ex.getClass() + \" thrown when processing EXEC_UDF request\", ex));\n+ String msg = \"Exception of type \" + ex.getClass() + \" thrown when processing EXEC_UDF request\";\n+ log.error(msg, ex);\n+ return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(msg));\n}\n}\n@@ -332,9 +341,13 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\ntry {\n_ecm.clear();\n}\n+ catch(DMLPrivacyException | FederatedWorkerHandlerException ex){\n+ throw ex;\n+ }\ncatch(Exception ex) {\n- return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(\n- \"Exception of type \" + ex.getClass() + \" thrown when processing CLEAR request\", ex));\n+ String msg = \"Exception of type \" + ex.getClass() + \" thrown when processing CLEAR request\";\n+ log.error(msg, ex);\n+ return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(msg));\n}\nreturn new 
FederatedResponse(ResponseType.SUCCESS_EMPTY);\n}\n@@ -342,8 +355,8 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nprivate static void checkNumParams(int actual, int... expected) {\nif(Arrays.stream(expected).anyMatch(x -> x == actual))\nreturn;\n- throw new DMLRuntimeException(\"FederatedWorkerHandler: Received wrong amount of params:\" + \" expected=\"\n- + Arrays.toString(expected) + \", actual=\" + actual);\n+ throw new DMLRuntimeException(\"FederatedWorkerHandler: Received wrong amount of params:\"\n+ + \" expected=\" + Arrays.toString(expected) + \", actual=\" + actual);\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandlerException.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandlerException.java",
"diff": "@@ -38,6 +38,14 @@ public class FederatedWorkerHandlerException extends RuntimeException {\nsuper(msg);\n}\n+ /**\n+ * Create new instance of FederatedWorkerHandlerException with a message\n+ * and a throwable representing the original cause of the exception.\n+ * This constructor should not be used unless the throwable @param t\n+ * does not expose any private data in any use case.\n+ * @param msg message describing the exception\n+ * @param t throwable representing the original cause of the exception\n+ */\npublic FederatedWorkerHandlerException(String msg, Throwable t) {\nsuper(msg, t);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/privacy/DMLPrivacyException.java",
"new_path": "src/main/java/org/apache/sysds/runtime/privacy/DMLPrivacyException.java",
"diff": "@@ -28,15 +28,6 @@ public class DMLPrivacyException extends DMLRuntimeException\n{\nprivate static final long serialVersionUID = 1L;\n- //prevent string concatenation of classname w/ stop message\n- private DMLPrivacyException(Exception e) {\n- super(e);\n- }\n-\n- private DMLPrivacyException(String string, Exception ex){\n- super(string,ex);\n- }\n-\n/**\n* This is the only valid constructor for DMLPrivacyException.\n*\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2774] Fix federated privacy exception handling
Remove exception instances from FederatedResponse (do not leak
information in stack traces)
Add catch clauses for DMLPrivacyException and
FederatedWorkerHandlerException
Also remove unused constructors from DMLPrivacyException and add
comments to FederatedWorkerHandlerException
Closes #1054. |
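The error-handling pattern of this commit — propagate exceptions that are already safe to expose, log everything else locally, and return only a plain message so stack traces never leave the federated worker — can be condensed into one hedged sketch. It assumes the surrounding FederatedWorkerHandler context and its logger (imports omitted); it is a summary of the diff above, not new API.

```java
// Sketch of the sanitized error handling introduced above: known-safe exceptions
// propagate, everything else is logged locally and only a message is returned.
private FederatedResponse executeSafely(ProgramBlock pb, ExecutionContext ec) {
    try {
        pb.execute(ec);
        return new FederatedResponse(ResponseType.SUCCESS_EMPTY);
    }
    catch (DMLPrivacyException | FederatedWorkerHandlerException ex) {
        throw ex; // already sanitized, safe to surface to the coordinator
    }
    catch (Exception ex) {
        String msg = "Exception of type " + ex.getClass() + " thrown when processing request";
        log.error(msg, ex); // full stack trace stays in the local worker log
        return new FederatedResponse(ResponseType.ERROR, new FederatedWorkerHandlerException(msg));
    }
}
```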
49,738 | 31.12.2020 20:51:33 | -3600 | b6a18804c6d17a67355e1657f54d3b7d362c73a1 | [MINOR] Fix arima test (flag as thread-unsafe test to avoid interference) | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinArimaTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinArimaTest.java",
"diff": "@@ -24,8 +24,7 @@ import java.util.Collection;\nimport java.util.HashMap;\nimport org.junit.Test;\n-import org.apache.sysds.common.Types;\n-import org.apache.sysds.lops.LopProperties;\n+import org.apache.sysds.common.Types.ExecMode;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized;\nimport org.junit.runners.Parameterized.Parameters;\n@@ -37,6 +36,7 @@ import org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n@RunWith(value = Parameterized.class)\[email protected]\npublic class BuiltinArimaTest extends AutomatedTestBase {\nprivate final static String TEST_NAME = \"arima\";\nprivate final static String TEST_DIR = \"functions/builtin/\";\n@@ -81,7 +81,8 @@ public class BuiltinArimaTest extends AutomatedTestBase {\n@Test\npublic void testArima(){\n- Types.ExecMode platformOld = setExecMode(LopProperties.ExecType.CP);\n+ ExecMode platformOld = setExecMode(ExecMode.HYBRID);\n+\ntry {\nloadTestConfiguration(getTestConfiguration(TEST_NAME));\nString HOME = SCRIPT_DIR + TEST_DIR;\n@@ -113,11 +114,8 @@ public class BuiltinArimaTest extends AutomatedTestBase {\nHashMap<CellIndex, Double> arima_model_SYSTEMDS= readDMLMatrixFromOutputDir(\"learnt.model\");\nTestUtils.compareMatrices(arima_model_R, arima_model_SYSTEMDS, tol, \"arima_R\", \"arima_SYSTEMDS\");\n}\n- catch(Exception ex) {\n- ex.printStackTrace();\n- }\nfinally {\n- rtplatform = platformOld;\n+ resetExecMode(platformOld);\n}\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix arima test (flag as thread-unsafe test to avoid interference) |
49,691 | 04.01.2021 11:39:34 | -3,600 | f3988996fbe7c6baad3ddc2e9db31f9b0ba7a838 | Initial design document for LLVM codegen backend
DIA project WS2020/21, part I
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/staging/llvm-codegen-backend/llvm-codegen-design.md",
"diff": "+<!--\n+{% comment %}\n+Licensed to the Apache Software Foundation (ASF) under one or more\n+contributor license agreements. See the NOTICE file distributed with\n+this work for additional information regarding copyright ownership.\n+The ASF licenses this file to you under the Apache License, Version 2.0\n+(the \"License\"); you may not use this file except in compliance with\n+the License. You may obtain a copy of the License at\n+\n+http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+{% end comment %}\n+-->\n+\n+# LLVM Code Generator - Design Document\n+\n+## Introduction\n+This document describes the initial design of the LLVM-based code generator backend.\n+\n+The LLVM generator reuses the existing operator fusion optimizer. It has to compile LLVM IR and execute from C++ based operator templates. I will add the support for cellwise operation for dense matrices.\n+\n+## General Design\n+\n+### C++ design\n+I will add a folder to put the LLVM header files and the jni_bridge files (header and cpp) to interact through JNI in src/main/llvm, also I will use the already written helper functions GET/RELEASE_ARRAY to handle input arrays from java code.\n+However, eventually only a native proxy shared library will be added to the repository to avoid unnecessary dependencies, while LLVM libraries will be loaded from the native library path similar to native BLAS libraries.\n+\n+The following method will be exposed:\n+- initialize_llvm_context(): handle the creation of the LLVMContext and retrieve the hardware specification (LLVM api);\n+- compile_llvm(string: spoofLLVM) that take as input the generated spoof code and add to the LLVM runtime;\n+- execute_ir that pass the matrices and compute the cellwise operation and return the result to continue the computation flow.\n+\n+I will add CMakeLists.txt to support the compilation and linking pass as it was done for the CUDA files. Following the [LLVM documentation](https://llvm.org/doxygen/) I've made a simple example (10.0.0 version) of the usage of the LLVM api that can be found [here](https://github.com/FraCorti/llvm10.0.0-example/blob/main/main.cpp). 
Technical note: I don't know which [LLVM version](https://releases.llvm.org/) will be better to use since it has changed every two months the last year and there aren't any suggestions online about it, so any suggestions regarding this choice will be useful.\n+\n+The SpoofLLVMContext class will have the following structure:\n+\n+```\n+class SpoofLLVMContext{\n+private:\n+ std::unique_ptr<LLVMContext> context;\n+ std::unique_ptr<SMDiagnostic> error;\n+ std::string targetTriple; // target hardware specification\n+ std::map<std::string, Module> loadedModules; // store the spoof operator\n+ std::unique_ptr<ExecutionEngine> executionEngine; // runtime executor\n+public:\n+ bool loadModule(const std::string& modulePath);\n+ GenericValue executeModuleFunction(std::string& functionName, GenericValue* params); // execute operation\n+};\n+```\n+\n+I will add the needed LLVM header manually through Maven to handle the build process ,as it was done for the CUDA files.\n+\n+### Java design\n+I will introduce the new GeneratorAPI value \"LLVM\" inside the SpoofCompiler class.\n+After that I will add:\n+ - in SpoofCompiler loadNativeCodeGenerator() an initilization of the LLVM context through a native call;\n+ - in SpoofCompiler optimize() a native call for compile LLVM IR retrieved from the\n+ - in CNode getLanguageTemplateClass() the call to the LLVM creation API.\n+\n+I will create a folder llvm inside the cplan folder hops/codegen/cplan/llvm and I will create a CellWise class that follows the structure of the java/CellWise but will return LLVM IR code as a template when the getTemplate(SpoofCellwise.CellType ct) method is called.\n+Then, following the CUDA implemented structure I will create a SpoofLLVM class that store the name of the CNodeTpl generated. This SpoofLLVMs will be stored inside CodeGenUtils new HashMap<String, SpoofLLVM> data structure. The SpoofLLVM will have a native method for passing the operands and execute the computation.\n+\n+## Steps\n+I will first implement the syntactic part and then the runtime part. I will follow the following steps:\n+1. Add LLVM to GeneratorAPI in SpoofCompiler and manage it in the SystemDS flow, then create the llvm/Cellwise class;\n+2. Integrate LLVM header through Maven (but for tests only), and create the JNI interface to interact with it;\n+3. Creation of the SpoofLLVMContext C++ class and SpoofLLVM java class;\n+4. Port SpoofCellWise.java to C++ and call it inside the generated LLVM IR spoof template.\n+\n\\ No newline at end of file\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2785] Initial design document for LLVM codegen backend
DIA project WS2020/21, part I
Closes #1138. |
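The design document names three native entry points (initialize_llvm_context, compile_llvm, execute_ir). A hypothetical Java-side JNI surface for them could look as follows; the method names follow the document, while the library name, parameter types, and return types are assumptions rather than the actual SystemDS interface.

```java
// Hypothetical JNI surface for the LLVM backend sketched in the design document;
// method names come from the document, all signatures and the library name are assumptions.
public class SpoofLLVMNativeBridge {
    static {
        System.loadLibrary("systemds_llvm"); // assumed native library name
    }
    // create the LLVMContext and detect the target triple of the local hardware
    public static native boolean initialize_llvm_context();
    // hand a generated spoof operator (LLVM IR as text) to the native runtime
    public static native boolean compile_llvm(String spoofLLVMIR);
    // execute a previously compiled operator on dense inputs and return the result
    public static native double[] execute_ir(String operatorName, double[][] inputs, int rows, int cols);
}
```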
49,689 | 08.01.2021 16:07:46 | -3,600 | d4839b42c5c071df34ba25201d0708595494af06 | Calculate checksums for Fed PUT requests
For lineage-based reuse, it is necessary to uniquely
identify each data item sent via PUT requests. Inspired
by Spark, this patch introduces Adler32 checksum calculation
for every CacheBlock, and materializes it in the FederatedRequest. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedRequest.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedRequest.java",
"diff": "package org.apache.sysds.runtime.controlprogram.federated;\n+import java.io.DataOutput;\n+import java.io.IOException;\nimport java.io.Serializable;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\n-import java.util.stream.Collectors;\n+import java.util.zip.Adler32;\n+import java.util.zip.Checksum;\n+import org.apache.sysds.api.DMLException;\nimport org.apache.sysds.api.DMLScript;\n-import org.apache.sysds.runtime.lineage.LineageItem;\n+import org.apache.sysds.runtime.controlprogram.caching.CacheBlock;\n+import org.apache.sysds.runtime.controlprogram.caching.CacheDataOutput;\n+import org.apache.sysds.runtime.controlprogram.caching.LazyWriteBuffer;\n+import org.apache.sysds.runtime.instructions.cp.ScalarObject;\nimport org.apache.sysds.utils.Statistics;\npublic class FederatedRequest implements Serializable {\n@@ -47,7 +54,7 @@ public class FederatedRequest implements Serializable {\nprivate long _tid;\nprivate List<Object> _data;\nprivate boolean _checkPrivacy;\n- private List<Integer> _lineageHash;\n+ private List<Long> _checksums;\npublic FederatedRequest(RequestType method) {\n@@ -68,6 +75,8 @@ public class FederatedRequest implements Serializable {\n_id = id;\n_data = data;\nsetCheckPrivacy();\n+ if (DMLScript.LINEAGE)\n+ setChecksum();\n}\npublic RequestType getType() {\n@@ -120,14 +129,50 @@ public class FederatedRequest implements Serializable {\nreturn _checkPrivacy;\n}\n- public void setLineageHash(LineageItem[] liItems) {\n- // copy the hash of the corresponding lineage DAG\n- // TODO: copy both Adler32 checksum (on data) and hash (on lineage DAG)\n- _lineageHash = Arrays.stream(liItems).map(li -> li.hashCode()).collect(Collectors.toList());\n+ public void setChecksum() {\n+ // Calculate Adler32 checksum. This is used as a leaf node of Lineage DAGs\n+ // in the workers, and helps to uniquely identify a node (tracing PUT)\n+ // TODO: append lineageitem hash if checksum is not enough\n+ _checksums = new ArrayList<>();\n+ try {\n+ calcChecksum();\n}\n+ catch (IOException e) {\n+ throw new DMLException(e);\n+ }\n+ }\n+\n+ public long getChecksum(int i) {\n+ return _checksums.get(i);\n+ }\n+\n+ private void calcChecksum() throws IOException {\n+ for (Object ob : _data) {\n+ if (!(ob instanceof CacheBlock) && !(ob instanceof ScalarObject))\n+ continue;\n- public int getLineageHash(int i) {\n- return _lineageHash.get(i);\n+ Checksum checksum = new Adler32();\n+ if (ob instanceof ScalarObject) {\n+ byte bytes[] = ((ScalarObject)ob).getStringValue().getBytes();\n+ checksum.update(bytes, 0, bytes.length);\n+ _checksums.add(checksum.getValue());\n+ }\n+\n+ if (ob instanceof CacheBlock) {\n+ try {\n+ CacheBlock cb = (CacheBlock)ob;\n+ long cbsize = LazyWriteBuffer.getCacheBlockSize((CacheBlock)ob);\n+ DataOutput dout = new CacheDataOutput(new byte[(int)cbsize]);\n+ cb.write(dout);\n+ byte bytes[] = ((CacheDataOutput) dout).getBytes();\n+ checksum.update(bytes, 0, bytes.length);\n+ _checksums.add(checksum.getValue());\n+ }\n+ catch(Exception ex) {\n+ throw new IOException(\"Failed to serialize cache block.\", ex);\n+ }\n+ }\n+ }\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"diff": "@@ -272,7 +272,7 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nec.setVariable(varname, data);\nif (DMLScript.LINEAGE)\n// TODO: Identify MO uniquely. Use Adler32 checksum.\n- ec.getLineage().set(varname, new LineageItem(String.valueOf(request.getLineageHash(0))));\n+ ec.getLineage().set(varname, new LineageItem(String.valueOf(request.getChecksum(0))));\nreturn new FederatedResponse(ResponseType.SUCCESS_EMPTY);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateBinaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateBinaryFEDInstruction.java",
"diff": "@@ -21,7 +21,6 @@ package org.apache.sysds.runtime.instructions.fed;\nimport java.util.concurrent.Future;\n-import org.apache.sysds.api.DMLScript;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n@@ -32,7 +31,6 @@ import org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\n-import org.apache.sysds.runtime.lineage.LineageItemUtils;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n@@ -80,10 +78,6 @@ public class AggregateBinaryFEDInstruction extends BinaryFEDInstruction {\nelse if(mo1.isFederated(FType.ROW)) { // MV + MM\n//construct commands: broadcast rhs, fed mv, retrieve results\nFederatedRequest fr1 = mo1.getFedMapping().broadcast(mo2);\n- if (DMLScript.LINEAGE)\n- //also copy the hash of the lineage DAG\n- fr1.setLineageHash(LineageItemUtils.getLineage(ec, input1));\n- //TODO: calculate Adler32 checksum on data, and move this code inside FederationMap.\nFederatedRequest fr2 = FederationUtils.callInstruction(instString, output,\nnew CPOperand[]{input1, input2}, new long[]{mo1.getFedMapping().getID(), fr1.getID()});\nif( mo2.getNumColumns() == 1 ) { //MV\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2784] Calculate checksums for Fed PUT requests
For lineage-based reuse, it is necessary to uniquely
identify each data item sent via PUT requests. Inspired
by Spark, this patch introduces Adler32 checksum calculation
for every CacheBlock, and materializes the checksum in the FederatedRequest. |
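The mechanism described in this commit message can be illustrated with a short, self-contained Java sketch: it computes an Adler32 value over an already-serialized payload, which is the core of the per-CacheBlock checksum mentioned above. The class and method names are illustrative only and not part of the SystemDS API; in the actual diff the bytes come from serializing a CacheBlock (or the string value of a ScalarObject) before updating the checksum.

```java
import java.util.zip.Adler32;
import java.util.zip.Checksum;

public class Adler32Sketch {
    // Computes an Adler32 checksum over an already-serialized payload,
    // mirroring the per-CacheBlock checksum used to identify PUT data.
    public static long checksumOf(byte[] serialized) {
        Checksum checksum = new Adler32();
        checksum.update(serialized, 0, serialized.length);
        return checksum.getValue();
    }

    public static void main(String[] args) {
        byte[] payload = "example serialized block".getBytes();
        System.out.println("Adler32 checksum: " + checksumOf(payload));
    }
}
```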
49,754 | 09.01.2021 00:08:15 | -3,600 | 3a9baf48427c8ad6f51a233feeff03a407175f64 | Disguised Missing Values Detection
DIA project WS2020/21.
Closes
Date: Sat Jan 9 00:05:47 2021 +0100 | [
{
"change_type": "MODIFY",
"old_path": "docs/site/builtins-reference.md",
"new_path": "docs/site/builtins-reference.md",
"diff": "@@ -32,6 +32,7 @@ limitations under the License.\n* [`DBSCAN`-Function](#DBSCAN-function)\n* [`discoverFD`-Function](#discoverFD-function)\n* [`dist`-Function](#dist-function)\n+ * [`dmv`-Function](#dmv-function)\n* [`glm`-Function](#glm-function)\n* [`gridSearch`-Function](#gridSearch-function)\n* [`hyperband`-Function](#hyperband-function)\n@@ -299,6 +300,43 @@ X = rand (rows = 5, cols = 5)\nY = dist(X)\n```\n+\n+\n+## `dmv`-Function\n+\n+The `dmv`-function is used to find disguised missing values utilising syntactical pattern recognition.\n+\n+### Usage\n+\n+```r\n+dmv(X, threshold, replace)\n+```\n+\n+### Arguments\n+\n+| Name | Type | Default | Description |\n+| :-------- | :------------ | :------- | :----------------------------------------------------------- |\n+| X | Frame[String] | required | Input Frame |\n+| threshold | Double | 0.8 | threshold value in interval [0, 1] for dominant pattern per column (e.g., 0.8 means that 80% of the entries per column must adhere this pattern to be dominant) |\n+| replace | String | \"NA\" | The string disguised missing values are replaced with |\n+\n+### Returns\n+\n+| Type | Description |\n+| :------------ | :----------------------------------------------------- |\n+| Frame[String] | Frame `X` including detected disguised missing values |\n+\n+### Example\n+\n+```r\n+A = read(\"fileA\", data_type=\"frame\", rows=10, cols=8);\n+Z = dmv(X=A)\n+Z = dmv(X=A, threshold=0.9)\n+Z = dmv(X=A, threshold=0.9, replace=\"NaN\")\n+```\n+\n+\n+\n## `glm`-Function\nThe `glm`-function is a flexible generalization of ordinary linear regression that allows for response variables that have\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/dmv.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#------------------------------------------------------------\n+\n+s_dmv = function(Frame[String] X, Double threshold=0.8, String replace=\"NA\") return (Frame[String] Y) {\n+\n+ if( threshold < 0 | threshold > 1 )\n+ stop(\"Stopping due to invalid input, threshold required in interval [0, 1] found \" + threshold)\n+\n+ Y = map(X, \"UtilFunctions.syntacticalPatternDiscovery(\" + threshold + \",\" + replace + \")\")\n+}\n+\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -98,6 +98,7 @@ public enum Builtins {\nDIAG(\"diag\", false),\nDISCOVER_FD(\"discoverFD\", true),\nDIST(\"dist\", true),\n+ DMV(\"dmv\", true),\nDROP_INVALID_TYPE(\"dropInvalidType\", false),\nDROP_INVALID_LENGTH(\"dropInvalidLength\", false),\nEIGEN(\"eigen\", false, ReturnType.MULTI_RETURN),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/FrameBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/FrameBlock.java",
"diff": "@@ -56,6 +56,7 @@ import org.apache.sysds.runtime.io.IOUtilFunctions;\nimport org.apache.sysds.runtime.matrix.operators.BinaryOperator;\nimport org.apache.sysds.runtime.transform.encode.EncoderRecode;\nimport org.apache.sysds.runtime.util.CommonThreadPool;\n+import org.apache.sysds.runtime.util.DMVUtils;\nimport org.apache.sysds.runtime.util.IndexRange;\nimport org.apache.sysds.runtime.util.UtilFunctions;\n@@ -2101,9 +2102,22 @@ public class FrameBlock implements CacheBlock, Externalizable {\n}\npublic FrameBlock map(String lambdaExpr) {\n+ if(!lambdaExpr.contains(\"->\"))\n+ {\n+ //return map(getCompiledFunctionBlock(lambdaExpr));\n+ String args = lambdaExpr.substring(lambdaExpr.indexOf('(') + 1, lambdaExpr.indexOf(')'));\n+ if(args.contains(\",\")) {\n+ String[] arguments = args.split(\",\");\n+ return DMVUtils.syntacticalPatternDiscovery(this, Double.parseDouble(arguments[0]), arguments[1]);\n+ }\n+ }\nreturn map(getCompiledFunction(lambdaExpr));\n}\n+ public FrameBlock map(FrameBlockMapFunction lambdaExpression) {\n+ return lambdaExpression.apply();\n+ }\n+\npublic FrameBlock map(FrameMapFunction lambdaExpr) {\n// Prepare temporary output array\nString[][] output = new String[getNumRows()][getNumColumns()];\n@@ -2120,16 +2134,20 @@ public class FrameBlock implements CacheBlock, Externalizable {\n}\npublic static FrameMapFunction getCompiledFunction(String lambdaExpr) {\n- // split lambda expression\n+ String varname;\n+ String expr;\n+\n+ String cname = \"StringProcessing\"+CLASS_ID.getNextID();\n+ StringBuilder sb = new StringBuilder();\n+\n+\nString[] parts = lambdaExpr.split(\"->\");\nif( parts.length != 2 )\nthrow new DMLRuntimeException(\"Unsupported lambda expression: \"+lambdaExpr);\n- String varname = parts[0].trim();\n- String expr = parts[1].trim();\n+ varname = parts[0].trim();\n+ expr = parts[1].trim();\n// construct class code\n- String cname = \"StringProcessing\"+CLASS_ID.getNextID();\n- StringBuilder sb = new StringBuilder();\nsb.append(\"import org.apache.sysds.runtime.util.UtilFunctions;\\n\");\nsb.append(\"import org.apache.sysds.runtime.matrix.data.FrameBlock.FrameMapFunction;\\n\");\nsb.append(\"public class \"+cname+\" extends FrameMapFunction {\\n\");\n@@ -2147,8 +2165,39 @@ public class FrameBlock implements CacheBlock, Externalizable {\n}\n}\n+\n+ public FrameBlockMapFunction getCompiledFunctionBlock(String lambdaExpression) {\n+ // split lambda expression\n+ String expr;\n+\n+ String cname = \"StringProcessing\"+CLASS_ID.getNextID();\n+ StringBuilder sb = new StringBuilder();\n+\n+ expr = lambdaExpression;\n+\n+ sb.append(\"import org.apache.sysds.runtime.util.UtilFunctions;\\n\");\n+ sb.append(\"import org.apache.sysds.runtime.matrix.data.FrameBlock.FrameBlockMapFunction;\\n\");\n+ sb.append(\"public class \"+cname+\" extends FrameBlockMapFunction {\\n\");\n+ sb.append(\"@Override\\n\");\n+ sb.append(\"public FrameBlock apply() {\\n\");\n+ sb.append(\" return \"+expr+\"; }}\\n\");\n+\n+ try {\n+ return (FrameBlockMapFunction) CodegenUtils\n+ .compileClass(cname, sb.toString()).newInstance();\n+ }\n+ catch(InstantiationException | IllegalAccessException e) {\n+ throw new DMLRuntimeException(\"Failed to compile FrameBlockMapFunction.\", e);\n+ }\n+ }\n+\npublic static abstract class FrameMapFunction implements Serializable {\nprivate static final long serialVersionUID = -8398572153616520873L;\npublic abstract String apply(String input);\n}\n+\n+ public static abstract class FrameBlockMapFunction implements Serializable {\n+ private static final long 
serialVersionUID = -8398573333616520876L;\n+ public abstract FrameBlock apply();\n+ }\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/util/DMVUtils.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.util;\n+\n+import org.apache.commons.collections.map.HashedMap;\n+import org.apache.sysds.runtime.matrix.data.FrameBlock;\n+\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.Iterator;\n+import java.util.Map;\n+\n+public class DMVUtils {\n+ public static final char DIGIT = 'd';\n+ public static final char LOWER = 'l';\n+ public static final char UPPER = 'u';\n+ public static final char ALPHA = 'a';\n+ public static final char SPACE = 's';\n+ public static final char DOT = 't';\n+ public static final char OTHER = 'y';\n+ public static final char ARBITRARY_LEN = '+';\n+ public static final char MINUS = '-';\n+ public static String DISGUISED_VAL = \"\";\n+\n+ public enum LEVEL_ENUM { LEVEL1, LEVEL2, LEVEL3, LEVEL4, LEVEL5, LEVEL6}\n+\n+ public static FrameBlock syntacticalPatternDiscovery(FrameBlock frame, double threshold, String disguised_value) {\n+\n+ // Preparation\n+ DISGUISED_VAL = disguised_value;\n+ int numCols = frame.getNumColumns();\n+ int numRows = frame.getNumRows();\n+ ArrayList<Map<String, Integer>> table_Hist = new ArrayList(numCols); // list of every column with values and their frequency\n+\n+ int idx;\n+ for (idx = 0; idx < numCols; idx++) {\n+ Object c = frame.getColumnData(idx);\n+ String[] column = (String[]) c;\n+ String key = \"\";\n+ for (String attr : column) {\n+ key = (attr.isEmpty()) ? 
\"NULL\": attr;\n+ addDistinctValueOrIncrementCounter(table_Hist, key, idx);\n+ }\n+ }\n+\n+ // Syntactic Pattern Discovery\n+ idx = -1;\n+ for (Map<String, Integer> col_Hist : table_Hist) {\n+ idx++;\n+ Map<String, Double> dominant_patterns_ratio = new HashedMap();\n+ Map<String, Integer> prev_pattern_hist = col_Hist;\n+ for(LEVEL_ENUM level : LEVEL_ENUM.values()) {\n+ dominant_patterns_ratio.clear();\n+ Map<String, Integer> current_pattern_hist = LevelsExecutor(prev_pattern_hist, level);\n+ dominant_patterns_ratio = calculatePatternsRatio(current_pattern_hist, numRows);\n+ String dominant_pattern = findDominantPattern(dominant_patterns_ratio, threshold);\n+ if(dominant_pattern != null) { //found pattern\n+ detectDisguisedValues(dominant_pattern, frame.getColumnData(idx), idx, frame, level);\n+ break;\n+ }\n+ prev_pattern_hist = current_pattern_hist;\n+ }\n+ }\n+ return frame;\n+ }\n+\n+\n+ public static Map<String, Double> calculatePatternsRatio(Map<String, Integer> patterns_hist, int nr_entries) {\n+ Map<String, Double> patterns_ratio_map = new HashedMap();\n+ Iterator it = patterns_hist.entrySet().iterator();\n+ while(it.hasNext()) {\n+ Map.Entry pair = (Map.Entry) it.next();\n+ String pattern = (String) pair.getKey();\n+ Double nr_occurences = new Double((Integer)pair.getValue());\n+\n+ double current_ratio = nr_occurences / nr_entries; // percentage of current pattern in column\n+ patterns_ratio_map.put(pattern, current_ratio);\n+ }\n+ return patterns_ratio_map;\n+ }\n+\n+ public static String findDominantPattern(Map<String, Double> dominant_patterns, double threshold) {\n+\n+ Iterator it = dominant_patterns.entrySet().iterator();\n+ while(it.hasNext()) {\n+ Map.Entry pair = (Map.Entry) it.next();\n+ String pattern = (String) pair.getKey();\n+ Double pattern_ratio = (Double)pair.getValue();\n+\n+ if(pattern_ratio > threshold)\n+ return pattern;\n+\n+ }\n+ return null;\n+ }\n+\n+ private static void addDistinctValueOrIncrementCounter(ArrayList<Map<String, Integer>> maps, String key, Integer idx) {\n+ if (maps.size() == idx) {\n+ HashMap<String, Integer> m = new HashMap<>();\n+ m.put(key, 1);\n+ maps.add(m);\n+ return;\n+ }\n+\n+ if (!(maps.get(idx).containsKey(key))) {\n+ maps.get(idx).put(key, 1);\n+ } else {\n+ maps.get(idx).compute(key, (k, v) -> v + 1);\n+ }\n+ }\n+\n+ private static void addDistinctValueOrIncrementCounter(Map<String, Integer> map, String encoded_value, Integer nr_occurrences) {\n+ if (!(map.containsKey(encoded_value))) {\n+ map.put(encoded_value, nr_occurrences);\n+ } else {\n+ map.compute(encoded_value, (k, v) -> v + nr_occurrences);\n+ }\n+ }\n+\n+ public static Map<String, Integer> LevelsExecutor(Map<String, Integer> old_pattern_hist, LEVEL_ENUM level) {\n+ Map<String, Integer> new_pattern_hist = new HashedMap();\n+ Iterator it = old_pattern_hist.entrySet().iterator();\n+ while (it.hasNext()) {\n+ Map.Entry pair = (Map.Entry) it.next();\n+ String pattern = (String) pair.getKey();\n+ Integer nr_of_occurrences = (Integer)pair.getValue();\n+\n+ String new_pattern;\n+ switch(level) {\n+ case LEVEL1: // default encoding\n+ new_pattern = encodeRawString(pattern);\n+ break;\n+ case LEVEL2: // ignores the number of occurrences. It replaces all numbers with '+'\n+ new_pattern = removeNumbers(pattern);\n+ break;\n+ case LEVEL3: // ignores upper and lowercase characters. 
It replaces all 'u' and 'l' with 'a' = Alphabet\n+ new_pattern = removeUpperLowerCase(pattern);\n+ break;\n+ case LEVEL4: // changes floats to digits\n+ new_pattern = removeInnerCharacterInPattern(pattern, DIGIT, DOT);\n+ break;\n+ case LEVEL5: // removes spaces between strings\n+ new_pattern = removeInnerCharacterInPattern(pattern, ALPHA, SPACE);\n+ break;\n+ case LEVEL6: // changes negative numbers to digits\n+ new_pattern = acceptNegativeNumbersAsDigits(pattern);\n+ break;\n+ default:\n+ new_pattern = \"\";\n+ break;\n+ }\n+ addDistinctValueOrIncrementCounter(new_pattern_hist, new_pattern, nr_of_occurrences);\n+ }\n+\n+ return new_pattern_hist;\n+ }\n+\n+ public static String acceptNegativeNumbersAsDigits(String pattern) {\n+ char[] chars = pattern.toCharArray();\n+ StringBuilder tmp = new StringBuilder();\n+ boolean currently_minus_digit = false;\n+ for (char ch : chars) {\n+ if(ch == MINUS && !currently_minus_digit) {\n+ currently_minus_digit = true;\n+ }\n+ else if(ch == DIGIT && currently_minus_digit) {\n+ tmp.append(ch);\n+ currently_minus_digit = false;\n+ }\n+ else if(currently_minus_digit) {\n+ tmp.append(MINUS);\n+ tmp.append(ch);\n+ currently_minus_digit = false;\n+ }\n+ else {\n+ tmp.append(ch);\n+ }\n+ }\n+ return tmp.toString();\n+ }\n+\n+ public static String removeInnerCharacterInPattern(String pattern, char outter_char, char inner_char) {\n+ char[] chars = pattern.toCharArray();\n+ StringBuilder tmp = new StringBuilder();\n+ boolean currently_digit = false;\n+ for (char ch : chars) {\n+ if(ch == outter_char && !currently_digit) {\n+ currently_digit = true;\n+ tmp.append(ch);\n+ }\n+ else if(currently_digit && (ch == outter_char || ch == inner_char))\n+ continue;\n+ else if(ch != inner_char && ch != ARBITRARY_LEN) {\n+ currently_digit = false;\n+ tmp.append(ch);\n+ }\n+ else {\n+ if(tmp.length() > 0 && tmp.charAt(tmp.length() - 1) != ARBITRARY_LEN)\n+ tmp.append(ch);\n+ }\n+ }\n+ return tmp.toString();\n+ }\n+\n+\n+ public static String removeUpperLowerCase(String pattern) {\n+ char[] chars = pattern.toCharArray();\n+ StringBuilder tmp = new StringBuilder();\n+ boolean currently_alphabetic = false;\n+ for (char ch : chars) {\n+ if(ch == UPPER || ch == LOWER) {\n+ if(!currently_alphabetic) {\n+ currently_alphabetic = true;\n+ tmp.append(ALPHA);\n+ }\n+ }\n+ else if(ch == ARBITRARY_LEN) {\n+ if(tmp.charAt(tmp.length() - 1) != ARBITRARY_LEN)\n+ tmp.append(ch);\n+ }\n+ else {\n+ tmp.append(ch);\n+ currently_alphabetic = false;\n+ }\n+ }\n+ return tmp.toString();\n+ }\n+\n+ private static String removeNumbers(String pattern) {\n+ char[] chars = pattern.toCharArray();\n+ StringBuilder tmp = new StringBuilder();\n+ for (char ch : chars) {\n+ if(Character.isDigit(ch))\n+ tmp.append(ARBITRARY_LEN);\n+ else\n+ tmp.append(ch);\n+ }\n+ return tmp.toString();\n+ }\n+\n+ public static String encodeRawString(String input) {\n+ char[] chars = input.toCharArray();\n+\n+ StringBuilder tmp = new StringBuilder();\n+ for (char ch : chars) {\n+ tmp.append(getCharClass(ch));\n+ }\n+ return getFrequencyOfEachConsecutiveChar(tmp.toString());\n+ }\n+\n+ private static char getCharClass(char c) {\n+ if (Character.isDigit(c)) return DIGIT;\n+ if (Character.isLowerCase(c)) return LOWER;\n+ if (Character.isUpperCase(c)) return UPPER;\n+ if (Character.isSpaceChar(c)) return SPACE;\n+ if (c == '.') return DOT;\n+ if(c == '-') return MINUS;\n+ return OTHER;\n+ }\n+\n+ public static String getFrequencyOfEachConsecutiveChar(String s) {\n+ StringBuilder retval = new StringBuilder();\n+ for (int i = 0; i < 
s.length(); i++) {\n+ int count = 1;\n+ while (i + 1 < s.length() && s.charAt(i) == s.charAt(i + 1)) {\n+ i++;\n+ count++;\n+ }\n+ retval.append(s.charAt(i));\n+ retval.append(count);\n+ }\n+ return retval.toString();\n+ }\n+\n+ private static void detectDisguisedValues(String dom_pattern, Object col, int col_idx,\n+ FrameBlock frameBlock, LEVEL_ENUM level)\n+ {\n+ int row_idx = -1;\n+ String pattern = \"\";\n+ String[] column = (String[]) col;\n+ for (String attr : column) {\n+ switch (level){\n+ case LEVEL1:\n+ pattern = encodeRawString(attr);\n+ break;\n+ case LEVEL2:\n+ pattern = encodeRawString(attr);\n+ pattern = removeNumbers(pattern);\n+ break;\n+ case LEVEL3:\n+ pattern = encodeRawString(attr);\n+ pattern = removeNumbers(pattern);\n+ pattern = removeUpperLowerCase(pattern);\n+ break;\n+ case LEVEL4:\n+ pattern = encodeRawString(attr);\n+ pattern = removeNumbers(pattern);\n+ pattern = removeUpperLowerCase(pattern);\n+ pattern = removeInnerCharacterInPattern(pattern, DIGIT, DOT);\n+ break;\n+ case LEVEL5:\n+ pattern = encodeRawString(attr);\n+ pattern = removeNumbers(pattern);\n+ pattern = removeUpperLowerCase(pattern);\n+ pattern = removeInnerCharacterInPattern(pattern, DIGIT, DOT);\n+ pattern = removeInnerCharacterInPattern(pattern, ALPHA, SPACE);\n+ break;\n+ case LEVEL6:\n+ pattern = encodeRawString(attr);\n+ pattern = removeNumbers(pattern);\n+ pattern = removeUpperLowerCase(pattern);\n+ pattern = removeInnerCharacterInPattern(pattern, DIGIT, DOT);\n+ pattern = removeInnerCharacterInPattern(pattern, ALPHA, SPACE);\n+ pattern = acceptNegativeNumbersAsDigits(pattern);\n+ default:\n+ //System.out.println(\"Could not find suitable level\");\n+ }\n+ row_idx++;\n+ if(pattern.equals(dom_pattern)) continue;\n+// System.out.println(\"[\" + level +\"] Disguised value: \" + frameBlock.get(row_idx, col_idx) + \" (c=\" + col_idx + \",r=\" + row_idx + \")\");\n+ frameBlock.set(row_idx, col_idx, DISGUISED_VAL);\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinDMVTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.runtime.io.FrameWriterFactory;\n+import org.apache.sysds.runtime.matrix.data.FrameBlock;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+import org.apache.sysds.lops.LopProperties.ExecType;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+\n+public class BuiltinDMVTest extends AutomatedTestBase {\n+\n+ private final static String TEST_NAME = \"disguisedMissingValue\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + BuiltinOutlierTest.class.getSimpleName() + \"/\";\n+\n+ @BeforeClass\n+ public static void init() {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+\n+ @AfterClass\n+ public static void cleanUp() {\n+ if (TEST_CACHE_ENABLED) {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME,new String[]{\"B\"}));\n+ if (TEST_CACHE_ENABLED) {\n+ setOutAndExpectedDeletionDisabled(true);\n+ }\n+ }\n+\n+ @Test\n+ public void NormalStringFrameTest() {\n+ FrameBlock f = generateRandomFrameBlock(1000, 4,null);\n+ String[] disguised_values = new String[]{\"?\", \"9999\", \"?\", \"9999\"};\n+ ArrayList<List<Integer>> positions = getDisguisedPositions(f, 4, disguised_values);\n+ runMissingValueTest(f, ExecType.CP, 0.8, \"DMV\", positions);\n+ }\n+\n+ @Test\n+ public void PreDefinedStringsFrameTest() {\n+ String[] testarray0 = new String[]{\"77\",\"77\",\"55\",\"89\",\"43\", \"99\", \"46\"}; // detect Weg\n+ String[] testarray1 = new String[]{\"8010\",\"9999\",\"8456\",\"4565\",\"89655\", \"86542\", \"45624\"}; // detect ?\n+ String[] testarray2 = new String[]{\"David K\",\"Valentin E\",\"Patrick L\",\"VEVE\",\"DK\", \"VE\", \"PL\"}; // detect 45\n+ String[] testarray3 = new String[]{\"3.42\",\"45\",\"0.456\",\".45\",\"4589.245\", \"97\", \"33\"}; // detect ka\n+ String[] testarray4 = new String[]{\"99\",\"123\",\"158\",\"146\",\"158\", \"174\", \"201\"}; // detect 9999\n+\n+ String[][] teststrings = new String[][]{testarray0, testarray1, testarray2, testarray3, testarray4};\n+ FrameBlock f = generateRandomFrameBlock(7, 5, teststrings);\n+ String[] disguised_values = new String[]{\"Patrick-Lovric-Weg-666\", \"?\", \"45\", \"ka\", \"9999\"};\n+ ArrayList<List<Integer>> positions = 
getDisguisedPositions(f, 1, disguised_values);\n+ runMissingValueTest(f, ExecType.CP, 0.7,\"NA\", positions);\n+ }\n+\n+ @Test\n+ public void PreDefinedDoubleFrame() {\n+ Double[] test_val = new Double[10000];\n+ for(int i = 0; i < test_val.length; i++) {\n+ test_val[i] = TestUtils.getPositiveRandomDouble();\n+ }\n+ String[] test_string = new String[test_val.length];\n+ for(int j = 0; j < test_val.length; j++) {\n+ test_string[j] = test_val[j].toString();\n+ }\n+\n+ String[][] teststrings = new String[][]{test_string};\n+ FrameBlock f = generateRandomFrameBlock(test_string.length, 1, teststrings);\n+ String[] disguised_values = new String[]{\"9999999999\"};\n+ ArrayList<List<Integer>> positions = getDisguisedPositions(f, 10, disguised_values);\n+ runMissingValueTest(f, ExecType.CP, 0.6, \"-1\", positions);\n+ }\n+\n+ private void runMissingValueTest(FrameBlock test_frame, ExecType et, Double threshold, String replacement,\n+ ArrayList<List<Integer>> positions)\n+ {\n+ Types.ExecMode platformOld = setExecMode(et);\n+\n+ try {\n+ getAndLoadTestConfiguration(TEST_NAME);\n+\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-nvargs\", \"F=\" + input(\"F\"), \"O=\" + output(\"O\"),\n+ \"threshold=\" + threshold, \"replacement=\" + replacement\n+ };\n+\n+ FrameWriterFactory.createFrameWriter(Types.FileFormat.CSV).\n+ writeFrameToHDFS(test_frame, input(\"F\"), test_frame.getNumRows(), test_frame.getNumColumns());\n+\n+ runTest(true, false, null, -1);\n+\n+ FrameBlock outputFrame = readDMLFrameFromHDFS(\"O\", Types.FileFormat.CSV);\n+\n+ for(int i = 0; i < positions.size(); i++) {\n+ String[] output = (String[]) outputFrame.getColumnData(i);\n+ for(int j = 0; j < positions.get(i).size(); j++) {\n+ if(replacement.equals(\"NA\")) {\n+ TestUtils.compareScalars(null, output[positions.get(i).get(j)]);\n+ }\n+ else {\n+ TestUtils.compareScalars(replacement, output[positions.get(i).get(j)]);\n+ }\n+ }\n+ }\n+ }\n+ catch (Exception ex) {\n+ throw new RuntimeException(ex);\n+ }\n+ finally {\n+ resetExecMode(platformOld);\n+ }\n+ }\n+\n+ private FrameBlock generateRandomFrameBlock(int rows, int cols, String[][] defined_strings)\n+ {\n+ Types.ValueType[] schema = new Types.ValueType[cols];\n+ for(int i = 0; i < cols; i++) {\n+ schema[i] = Types.ValueType.STRING;\n+ }\n+\n+ if(defined_strings != null)\n+ {\n+ String[] names = new String[cols];\n+ for(int i = 0; i < cols; i++)\n+ names[i] = schema[i].toString();\n+ FrameBlock frameBlock = new FrameBlock(schema, names);\n+ frameBlock.ensureAllocatedColumns(rows);\n+ for(int row = 0; row < rows; row++)\n+ for(int col = 0; col < cols; col++)\n+ frameBlock.set(row, col, defined_strings[col][row]);\n+ return frameBlock;\n+ }\n+ return TestUtils.generateRandomFrameBlock(rows, cols, schema ,TestUtils.getPositiveRandomInt());\n+ }\n+\n+ private ArrayList<List<Integer>> getDisguisedPositions(FrameBlock frame, int amountValues, String[] disguisedValue)\n+ {\n+ ArrayList<List<Integer>> positions = new ArrayList<>();\n+ int counter;\n+ for(int i = 0; i < frame.getNumColumns(); i++)\n+ {\n+ counter = 0;\n+ List<Integer> arrayToFill = new ArrayList<>();\n+ while(counter < frame.getNumRows() && counter < amountValues)\n+ {\n+ int position = TestUtils.getPositiveRandomInt() % frame.getNumRows();\n+ while(counter != 0 && arrayToFill.contains(position))\n+ {\n+ position = (position + TestUtils.getPositiveRandomInt() + 5) % frame.getNumRows();\n+ }\n+ arrayToFill.add(position);\n+ if(disguisedValue.length 
> 1)\n+ {\n+ frame.set(position, i, disguisedValue[i]);\n+ }\n+ else if (disguisedValue.length == 1)\n+ {\n+ frame.set(position, i, disguisedValue[0]);\n+ }\n+\n+ counter++;\n+ }\n+ positions.add(i, arrayToFill);\n+ }\n+\n+ return positions;\n+ }\n+\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/disguisedMissingValue.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+X = read($F, data_type=\"frame\", format=\"csv\", header=FALSE)\n+Z = dmv(X=X, threshold=$threshold, replace=$replacement)\n+\n+write(Z, $O, format = \"csv\")\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2789] Disguised Missing Values Detection
Co-authored-by: Patrick Lovric <[email protected]>
Co-authored-by: Valentin Edelsbrunner <[email protected]>
DIA project WS2020/21.
Closes #1144.
Date: Sat Jan 9 00:05:47 2021 +0100 |
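The dmv builtin introduced in this commit detects disguised values by encoding every cell into a syntactic pattern and checking whether a dominant pattern covers at least the given threshold of a column. The standalone Java sketch below reproduces only the level-1 encoding step (character classes plus run lengths) as a simplified illustration; it is not the DMVUtils class from the diff, and the class name is made up for the example.

```java
public class PatternEncodingSketch {
    // Maps a character to its class, following the level-1 encoding of the dmv builtin:
    // d = digit, l = lower, u = upper, s = space, t = dot, - = minus, y = other.
    static char charClass(char c) {
        if (Character.isDigit(c)) return 'd';
        if (Character.isLowerCase(c)) return 'l';
        if (Character.isUpperCase(c)) return 'u';
        if (Character.isSpaceChar(c)) return 's';
        if (c == '.') return 't';
        if (c == '-') return '-';
        return 'y';
    }

    // Encodes a value into class/run-length pairs, e.g. "99.9" -> "d2t1d1".
    static String encode(String value) {
        StringBuilder classes = new StringBuilder();
        for (char c : value.toCharArray())
            classes.append(charClass(c));
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < classes.length(); i++) {
            int count = 1;
            while (i + 1 < classes.length() && classes.charAt(i) == classes.charAt(i + 1)) {
                i++;
                count++;
            }
            out.append(classes.charAt(i)).append(count);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("99.9")); // d2t1d1
        System.out.println(encode("?"));    // y1
    }
}
```

A value such as "99.9" encodes to d2t1d1, while a stray "?" in the same column encodes to y1, fails to match the dominant pattern, and is therefore replaced by the configured value.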
49,738 | 09.01.2021 21:38:22 | -3,600 | 83e5eefd4f30095f05d8dfc549a2db1aae66f766 | Cleanup disguised missing value detection
* Fix thread-safety for parfor environments (no static instance vars)
* Fix hashmap/iterator handling (removed HashedMap, unnecessary casts)
* Fix formatting-related test
DIA project WS2020/21, part 2 | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/FrameBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/FrameBlock.java",
"diff": "@@ -2102,9 +2102,7 @@ public class FrameBlock implements CacheBlock, Externalizable {\n}\npublic FrameBlock map(String lambdaExpr) {\n- if(!lambdaExpr.contains(\"->\"))\n- {\n- //return map(getCompiledFunctionBlock(lambdaExpr));\n+ if(!lambdaExpr.contains(\"->\")) {\nString args = lambdaExpr.substring(lambdaExpr.indexOf('(') + 1, lambdaExpr.indexOf(')'));\nif(args.contains(\",\")) {\nString[] arguments = args.split(\",\");\n@@ -2134,18 +2132,15 @@ public class FrameBlock implements CacheBlock, Externalizable {\n}\npublic static FrameMapFunction getCompiledFunction(String lambdaExpr) {\n- String varname;\n- String expr;\n-\nString cname = \"StringProcessing\"+CLASS_ID.getNextID();\nStringBuilder sb = new StringBuilder();\n-\n-\nString[] parts = lambdaExpr.split(\"->\");\n+\nif( parts.length != 2 )\nthrow new DMLRuntimeException(\"Unsupported lambda expression: \"+lambdaExpr);\n- varname = parts[0].trim();\n- expr = parts[1].trim();\n+\n+ String varname = parts[0].trim();\n+ String expr = parts[1].trim();\n// construct class code\nsb.append(\"import org.apache.sysds.runtime.util.UtilFunctions;\\n\");\n@@ -2167,13 +2162,9 @@ public class FrameBlock implements CacheBlock, Externalizable {\npublic FrameBlockMapFunction getCompiledFunctionBlock(String lambdaExpression) {\n- // split lambda expression\n- String expr;\n-\nString cname = \"StringProcessing\"+CLASS_ID.getNextID();\nStringBuilder sb = new StringBuilder();\n-\n- expr = lambdaExpression;\n+ String expr = lambdaExpression;\nsb.append(\"import org.apache.sysds.runtime.util.UtilFunctions;\\n\");\nsb.append(\"import org.apache.sysds.runtime.matrix.data.FrameBlock.FrameBlockMapFunction;\\n\");\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/util/DMVUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/util/DMVUtils.java",
"diff": "package org.apache.sysds.runtime.util;\n-import org.apache.commons.collections.map.HashedMap;\n+import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.matrix.data.FrameBlock;\nimport java.util.ArrayList;\nimport java.util.HashMap;\n-import java.util.Iterator;\nimport java.util.Map;\n+import java.util.Map.Entry;\npublic class DMVUtils {\npublic static final char DIGIT = 'd';\n@@ -37,20 +37,17 @@ public class DMVUtils {\npublic static final char OTHER = 'y';\npublic static final char ARBITRARY_LEN = '+';\npublic static final char MINUS = '-';\n- public static String DISGUISED_VAL = \"\";\npublic enum LEVEL_ENUM { LEVEL1, LEVEL2, LEVEL3, LEVEL4, LEVEL5, LEVEL6}\npublic static FrameBlock syntacticalPatternDiscovery(FrameBlock frame, double threshold, String disguised_value) {\n-\n// Preparation\n- DISGUISED_VAL = disguised_value;\n+ String disguisedVal = disguised_value;\nint numCols = frame.getNumColumns();\nint numRows = frame.getNumRows();\n- ArrayList<Map<String, Integer>> table_Hist = new ArrayList(numCols); // list of every column with values and their frequency\n+ ArrayList<Map<String, Integer>> table_Hist = new ArrayList<>(numCols); // list of every column with values and their frequency\n- int idx;\n- for (idx = 0; idx < numCols; idx++) {\n+ for (int idx = 0; idx < numCols; idx++) {\nObject c = frame.getColumnData(idx);\nString[] column = (String[]) c;\nString key = \"\";\n@@ -61,10 +58,10 @@ public class DMVUtils {\n}\n// Syntactic Pattern Discovery\n- idx = -1;\n+ int idx = -1;\nfor (Map<String, Integer> col_Hist : table_Hist) {\nidx++;\n- Map<String, Double> dominant_patterns_ratio = new HashedMap();\n+ Map<String, Double> dominant_patterns_ratio = new HashMap<>();\nMap<String, Integer> prev_pattern_hist = col_Hist;\nfor(LEVEL_ENUM level : LEVEL_ENUM.values()) {\ndominant_patterns_ratio.clear();\n@@ -72,7 +69,7 @@ public class DMVUtils {\ndominant_patterns_ratio = calculatePatternsRatio(current_pattern_hist, numRows);\nString dominant_pattern = findDominantPattern(dominant_patterns_ratio, threshold);\nif(dominant_pattern != null) { //found pattern\n- detectDisguisedValues(dominant_pattern, frame.getColumnData(idx), idx, frame, level);\n+ detectDisguisedValues(dominant_pattern, frame.getColumnData(idx), idx, frame, level, disguisedVal);\nbreak;\n}\nprev_pattern_hist = current_pattern_hist;\n@@ -83,13 +80,10 @@ public class DMVUtils {\npublic static Map<String, Double> calculatePatternsRatio(Map<String, Integer> patterns_hist, int nr_entries) {\n- Map<String, Double> patterns_ratio_map = new HashedMap();\n- Iterator it = patterns_hist.entrySet().iterator();\n- while(it.hasNext()) {\n- Map.Entry pair = (Map.Entry) it.next();\n- String pattern = (String) pair.getKey();\n- Double nr_occurences = new Double((Integer)pair.getValue());\n-\n+ Map<String, Double> patterns_ratio_map = new HashMap<>();\n+ for(Entry<String, Integer> e : patterns_hist.entrySet()) {\n+ String pattern = e.getKey();\n+ double nr_occurences = e.getValue();\ndouble current_ratio = nr_occurences / nr_entries; // percentage of current pattern in column\npatterns_ratio_map.put(pattern, current_ratio);\n}\n@@ -97,16 +91,11 @@ public class DMVUtils {\n}\npublic static String findDominantPattern(Map<String, Double> dominant_patterns, double threshold) {\n-\n- Iterator it = dominant_patterns.entrySet().iterator();\n- while(it.hasNext()) {\n- Map.Entry pair = (Map.Entry) it.next();\n- String pattern = (String) pair.getKey();\n- Double pattern_ratio = (Double)pair.getValue();\n-\n+ 
for(Entry<String, Double> e : dominant_patterns.entrySet()) {\n+ String pattern = e.getKey();\n+ Double pattern_ratio = e.getValue();\nif(pattern_ratio > threshold)\nreturn pattern;\n-\n}\nreturn null;\n}\n@@ -119,28 +108,24 @@ public class DMVUtils {\nreturn;\n}\n- if (!(maps.get(idx).containsKey(key))) {\n+ if (!(maps.get(idx).containsKey(key)))\nmaps.get(idx).put(key, 1);\n- } else {\n+ else\nmaps.get(idx).compute(key, (k, v) -> v + 1);\n}\n- }\nprivate static void addDistinctValueOrIncrementCounter(Map<String, Integer> map, String encoded_value, Integer nr_occurrences) {\n- if (!(map.containsKey(encoded_value))) {\n+ if (!(map.containsKey(encoded_value)))\nmap.put(encoded_value, nr_occurrences);\n- } else {\n+ else\nmap.compute(encoded_value, (k, v) -> v + nr_occurrences);\n}\n- }\npublic static Map<String, Integer> LevelsExecutor(Map<String, Integer> old_pattern_hist, LEVEL_ENUM level) {\n- Map<String, Integer> new_pattern_hist = new HashedMap();\n- Iterator it = old_pattern_hist.entrySet().iterator();\n- while (it.hasNext()) {\n- Map.Entry pair = (Map.Entry) it.next();\n- String pattern = (String) pair.getKey();\n- Integer nr_of_occurrences = (Integer)pair.getValue();\n+ Map<String, Integer> new_pattern_hist = new HashMap<>();\n+ for(Entry<String, Integer> e : old_pattern_hist.entrySet()) {\n+ String pattern = e.getKey();\n+ Integer nr_of_occurrences = e.getValue();\nString new_pattern;\nswitch(level) {\n@@ -219,7 +204,6 @@ public class DMVUtils {\nreturn tmp.toString();\n}\n-\npublic static String removeUpperLowerCase(String pattern) {\nchar[] chars = pattern.toCharArray();\nStringBuilder tmp = new StringBuilder();\n@@ -290,7 +274,7 @@ public class DMVUtils {\n}\nprivate static void detectDisguisedValues(String dom_pattern, Object col, int col_idx,\n- FrameBlock frameBlock, LEVEL_ENUM level)\n+ FrameBlock frameBlock, LEVEL_ENUM level, String disguisedVal)\n{\nint row_idx = -1;\nString pattern = \"\";\n@@ -329,13 +313,14 @@ public class DMVUtils {\npattern = removeInnerCharacterInPattern(pattern, DIGIT, DOT);\npattern = removeInnerCharacterInPattern(pattern, ALPHA, SPACE);\npattern = acceptNegativeNumbersAsDigits(pattern);\n+ break;\ndefault:\n- //System.out.println(\"Could not find suitable level\");\n+ throw new DMLRuntimeException(\"Could not find suitable level\");\n}\nrow_idx++;\n- if(pattern.equals(dom_pattern)) continue;\n-// System.out.println(\"[\" + level +\"] Disguised value: \" + frameBlock.get(row_idx, col_idx) + \" (c=\" + col_idx + \",r=\" + row_idx + \")\");\n- frameBlock.set(row_idx, col_idx, DISGUISED_VAL);\n+ if(pattern.equals(dom_pattern))\n+ continue;\n+ frameBlock.set(row_idx, col_idx, disguisedVal);\n}\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinDMVTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinDMVTest.java",
"diff": "@@ -111,9 +111,8 @@ public class BuiltinDMVTest extends AutomatedTestBase {\nString HOME = SCRIPT_DIR + TEST_DIR;\nfullDMLScriptName = HOME + TEST_NAME + \".dml\";\n- programArgs = new String[] {\"-nvargs\", \"F=\" + input(\"F\"), \"O=\" + output(\"O\"),\n- \"threshold=\" + threshold, \"replacement=\" + replacement\n- };\n+ programArgs = new String[] {\"-nvargs\", \"F=\" + input(\"F\"),\n+ \"O=\" + output(\"O\"), \"threshold=\" + threshold, \"replacement=\" + replacement};\nFrameWriterFactory.createFrameWriter(Types.FileFormat.CSV).\nwriteFrameToHDFS(test_frame, input(\"F\"), test_frame.getNumRows(), test_frame.getNumColumns());\n@@ -142,15 +141,14 @@ public class BuiltinDMVTest extends AutomatedTestBase {\n}\n}\n- private FrameBlock generateRandomFrameBlock(int rows, int cols, String[][] defined_strings)\n+ private static FrameBlock generateRandomFrameBlock(int rows, int cols, String[][] defined_strings)\n{\nTypes.ValueType[] schema = new Types.ValueType[cols];\nfor(int i = 0; i < cols; i++) {\nschema[i] = Types.ValueType.STRING;\n}\n- if(defined_strings != null)\n- {\n+ if(defined_strings != null) {\nString[] names = new String[cols];\nfor(int i = 0; i < cols; i++)\nnames[i] = schema[i].toString();\n@@ -164,31 +162,24 @@ public class BuiltinDMVTest extends AutomatedTestBase {\nreturn TestUtils.generateRandomFrameBlock(rows, cols, schema ,TestUtils.getPositiveRandomInt());\n}\n- private ArrayList<List<Integer>> getDisguisedPositions(FrameBlock frame, int amountValues, String[] disguisedValue)\n+ private static ArrayList<List<Integer>> getDisguisedPositions(FrameBlock frame,\n+ int amountValues, String[] disguisedValue)\n{\nArrayList<List<Integer>> positions = new ArrayList<>();\nint counter;\n- for(int i = 0; i < frame.getNumColumns(); i++)\n- {\n+ for(int i = 0; i < frame.getNumColumns(); i++) {\ncounter = 0;\nList<Integer> arrayToFill = new ArrayList<>();\n- while(counter < frame.getNumRows() && counter < amountValues)\n- {\n+ while(counter < frame.getNumRows() && counter < amountValues) {\nint position = TestUtils.getPositiveRandomInt() % frame.getNumRows();\n- while(counter != 0 && arrayToFill.contains(position))\n- {\n+ while(counter != 0 && arrayToFill.contains(position)) {\nposition = (position + TestUtils.getPositiveRandomInt() + 5) % frame.getNumRows();\n}\narrayToFill.add(position);\nif(disguisedValue.length > 1)\n- {\nframe.set(position, i, disguisedValue[i]);\n- }\nelse if (disguisedValue.length == 1)\n- {\nframe.set(position, i, disguisedValue[0]);\n- }\n-\ncounter++;\n}\npositions.add(i, arrayToFill);\n@@ -196,5 +187,4 @@ public class BuiltinDMVTest extends AutomatedTestBase {\nreturn positions;\n}\n-\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2789] Cleanup disguised missing value detection
* Fix thread-safety for parfor environments (no static instance vars)
* Fix hashmap/iterator handling (removed HashedMap, unnecessary casts)
* Fix formatting-related test
DIA project WS2020/21, part 2
Co-authored-by: David Kerschbaumer <[email protected]>
Co-authored-by: Patrick Lovric <[email protected]>
Co-authored-by: Valentin Edelsbrunner <[email protected]> |
49,763 | 09.01.2021 23:03:40 | -3,600 | 6bb9a0d082fe89451e1f90ba85dc1a8796bd19bc | New outlierByArima (AR) built-in function
DIA project WS2020/21.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/outlierByArima.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+# Built-in function for detecting and repairing outliers in time series,\n+# by training an ARIMA model and classifying values that are more than\n+# k standard-deviations away from the predicated values as outliers.\n+#\n+# INPUT PARAMETERS:\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# X Double --- Matrix X\n+# k Double 3 threshold values 1, 2, 3 for 68%, 95%, 99.7% respectively (3-sigma rule)\n+# repairMethod Integer 1 values: 0 = delete rows having outliers, 1 = replace outliers as zeros\n+# 2 = replace outliers as missing values\n+# p Int 0 non-seasonal AR order\n+# d Int 0 non-seasonal differencing order\n+# q Int 0 non-seasonal MA order\n+# P Int 0 seasonal AR order\n+# D Int 0 seasonal differencing order\n+# Q Int 0 seasonal MA order\n+# s Int 1 period in terms of number of time-steps\n+# include_mean Bool FALSE\n+# solver String \"jacobi\" solver, is either \"cg\" or \"jacobi\"\n+# ---------------------------------------------------------------------------------------------\n+\n+\n+#Output(s)\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# X_corrected Double --- Matrix X with no outliers\n+\n+m_outlierByArima = function(Matrix[Double] X, Double k = 3, Integer repairMethod = 1, Integer p=0, Integer d=0,\n+ Integer q=0, Integer P=0, Integer D=0, Integer Q=0, Integer s=1, Boolean include_mean=FALSE, String solver=\"jacobi\")\n+ return(Matrix[Double] X_corrected)\n+{\n+ outlierFilter = as.matrix(0)\n+\n+ if( k < 1 | k > 7)\n+ stop(\"outlierBySd: invalid argument - k should be in range 1-7 found \"+k)\n+\n+ features = transform_matrix(X,p)\n+ X_adapted = X[p+1:nrow(X),]\n+\n+ # TODO replace by ARIMA once fully supported, LM only emulated the AR part\n+ model = lm(X=features, y=X_adapted)\n+ y_hat = lmpredict(X=features, w=model)\n+\n+ upperBound = sd(X) + k * y_hat\n+ lowerBound = sd(X) - k * y_hat\n+ outlierFilter = (X_adapted < lowerBound) | (X_adapted > upperBound)\n+ outlierFilter = rbind(matrix(0.0, rows=p,cols=1), outlierFilter)\n+ X_corrected = fix_outliers(X, outlierFilter, repairMethod)\n+}\n+\n+transform_matrix = function(Matrix[Double] X, Integer p) return (Matrix[Double] features){\n+ nrows = nrow(X)\n+ features = matrix(0, rows=nrows-p, cols=1)\n+\n+ for (i in 
1:p){\n+ features = cbind(features, X[p+1-i:nrows-i,])\n+ }\n+ features = features[,2:p+1]\n+}\n+\n+fix_outliers = function(Matrix[Double] X, Matrix[Double] outlierFilter, Integer repairMethod)\n+ return (Matrix[Double] X_filtered)\n+{\n+ rows = nrow(X)\n+ cols = ncol(X)\n+ if(repairMethod == 0) {\n+ sel = (outlierFilter == 0)\n+ X = removeEmpty(target = X, margin = \"rows\", select = sel)\n+ }\n+ else if(repairMethod == 1)\n+ X = (outlierFilter == 0) * X\n+ else if (repairMethod == 2) {\n+ outlierFilter = replace(target = (outlierFilter == 0), pattern = 0, replacement = NaN)\n+ X = outlierFilter * X\n+ }\n+ else{\n+ stop(\"outlierBySd: invalid argument - repair required 0-1 found: \"+repairMethod)\n+ }\n+ X_filtered = X\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -166,6 +166,7 @@ public enum Builtins {\nOUTLIER(\"outlier\", true, false), //TODO parameterize opposite\nOUTLIER_SD(\"outlierBySd\", true),\nOUTLIER_IQR(\"outlierByIQR\", true),\n+ OUTLIER_ARIMA(\"outlierByArima\",true),\nPCA(\"pca\", true),\nPCAINVERSE(\"pcaInverse\", true),\nPCATRANSFORM(\"pcaTransform\", true),\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinOutlierByArima.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.HashMap;\n+import org.junit.Test;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.lops.LopProperties;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+import org.junit.runners.Parameterized.Parameters;\n+\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+\n+import java.util.concurrent.ThreadLocalRandom;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class BuiltinOutlierByArima extends AutomatedTestBase {\n+ private final static String TEST_NAME = \"outlierByArima\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + BuiltinArimaTest.class.getSimpleName() + \"/\";\n+\n+ protected int max_func_invoc, p, d, q, P, D, Q, s, include_mean, useJacobi, repairMethod;\n+\n+ public BuiltinOutlierByArima(int m, int p, int d, int q, int P, int D, int Q, int s, int include_mean, int useJacobi, int repairMethod){\n+ this.max_func_invoc = m;\n+ this.p = p;\n+ this.d = d;\n+ this.q = q;\n+ this.P = P;\n+ this.D = D;\n+ this.Q = Q;\n+ this.s = s;\n+ this.include_mean = include_mean;\n+ this.useJacobi = useJacobi;\n+ this.repairMethod = repairMethod;\n+ }\n+\n+ @Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {\n+ {1, 2, 0, 0, 0, 0, 0, 24, 1, 1, 1},\n+ {1, 2, 0, 0, 0, 0, 0, 24, 1, 1, 2},\n+ {1, 2, 0, 0, 0, 0, 0, 24, 1, 1, 3}});\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[]{\"B\"}));\n+ }\n+\n+ @Test\n+ public void testOutlierByArima(){\n+ Types.ExecMode platformOld = setExecMode(LopProperties.ExecType.CP);\n+ try {\n+ loadTestConfiguration(getTestConfiguration(TEST_NAME));\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ fullRScriptName = HOME + TEST_NAME + \".R\";\n+\n+ programArgs = new String[]{\n+ \"-nvargs\", \"X=\" + input(\"col.mtx\"), \"p=\" + p, \"repairMethod=\" + 1,\n+ \"outputfilename=\" + output(\"result\"),};\n+ rCmd = getRCmd(input(\"bad.mtx\"), expected(\"result\"));\n+\n+ int timeSeriesLength = 3000;\n+ int num_outliers = 10;\n+ double[][] timeSeries = getRandomMatrix(timeSeriesLength, 1, 1, 3, 1, System.currentTimeMillis());\n+ double[][] comparisonSeries = 
deepCopy(timeSeries);\n+ for(int i=0; i<num_outliers; i++) {\n+ int r = ThreadLocalRandom.current().nextInt(0, timeSeries.length);\n+ double badValue = ThreadLocalRandom.current().nextDouble(10, 50);\n+ timeSeries[r][0] = badValue;\n+ if (repairMethod == 1)\n+ comparisonSeries[r][0] = 0.0;\n+ else if (repairMethod == 2)\n+ comparisonSeries[r][0] = Double.NaN;\n+ }\n+\n+ MatrixCharacteristics mc = new MatrixCharacteristics(timeSeriesLength,1,-1,-1);\n+ writeInputMatrixWithMTD(\"col\", timeSeries, true, mc);\n+ writeInputMatrixWithMTD(\"bad\", comparisonSeries, true, mc);\n+\n+ runTest(true, false, null, -1);\n+ runRScript(true);\n+\n+ HashMap<CellIndex, Double> time_series_SYSTEMDS = readDMLMatrixFromOutputDir(\"result\");\n+ HashMap<CellIndex, Double> time_series_real = readRMatrixFromExpectedDir(\"result\");\n+\n+ double tol = Math.pow(10, -14);\n+ if (repairMethod == 3)\n+ TestUtils.compareScalars(time_series_real.size()-num_outliers, time_series_SYSTEMDS.size(), tol);\n+ else\n+ TestUtils.compareMatrices(time_series_real, time_series_SYSTEMDS, tol, \"time_series_real\", \"time_series_SYSTEMDS\");\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ }\n+ }\n+\n+ private static double[][] deepCopy(double[][] input) {\n+ double[][] result = new double[input.length][];\n+ for (int r = 0; r < input.length; r++) {\n+ result[r] = input[r].clone();\n+ }\n+ return result;\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/outlierByArima.R",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+args <- commandArgs(TRUE)\n+library(Matrix)\n+\n+X = as(readMM(args[1]),\"CsparseMatrix\")\n+\n+# (for whatever reason) NaNs are seen as NAs after reading. replace them by zero instead of NAN since resulting NAN\n+# will be seen as NA after reading it again in test file\n+X[is.na(X)] = 0.0\n+\n+writeMM(X, file=args[2])\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/outlierByArima.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = read($X)\n+\n+Y = outlierByArima(X=X, p=$p, repairMethod=$repairMethod)\n+write(Y, $outputfilename, sparse=FALSE)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2790] New outlierByArima (AR) built-in function
DIA project WS2020/21.
Closes #1149. |
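The rule described for outlierByArima, flagging values that are more than k standard deviations away from the model's predictions, can be sketched independently of the AR model. The Java snippet below is a simplified stand-in: it assumes the predictions are already available and applies a plain k-sigma band, so it does not reproduce the exact bound formula used in the DML script; all names are illustrative.

```java
public class KSigmaOutlierSketch {
    // Flags x[i] as an outlier when it lies outside predicted[i] +/- k * sd(x),
    // a simplified version of the rule described for outlierByArima.
    public static boolean[] flagOutliers(double[] x, double[] predicted, double k) {
        double sd = stdDev(x);
        boolean[] outlier = new boolean[x.length];
        for (int i = 0; i < x.length; i++) {
            outlier[i] = x[i] < predicted[i] - k * sd || x[i] > predicted[i] + k * sd;
        }
        return outlier;
    }

    private static double stdDev(double[] x) {
        double mean = 0;
        for (double v : x) mean += v;
        mean /= x.length;
        double var = 0;
        for (double v : x) var += (v - mean) * (v - mean);
        return Math.sqrt(var / (x.length - 1));
    }

    public static void main(String[] args) {
        double[] x = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25};
        double[] predicted = new double[x.length];
        java.util.Arrays.fill(predicted, 1.0);
        boolean[] flags = flagOutliers(x, predicted, 3);
        // With these numbers only the last value (25) exceeds the 3-sigma band.
        System.out.println(java.util.Arrays.toString(flags));
    }
}
```

The builtin then repairs flagged positions according to repairMethod: drop the affected rows, set them to zero, or mark them as missing values.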
49,722 | 11.01.2021 23:52:53 | -3,600 | 18f86d4eb0f4efa24eb7a616016c85e66ee73bf9 | MDedup Builtin for finding duplicate rows
DIA project WS2020/21.
Closes
Date: Mon Jan 11 23:50:57 2021 +0100 | [
{
"change_type": "MODIFY",
"old_path": "docs/site/builtins-reference.md",
"new_path": "docs/site/builtins-reference.md",
"diff": "@@ -56,6 +56,7 @@ limitations under the License.\n* [`slicefinder`-Function](#slicefinder-function)\n* [`normalize`-Function](#normalize-function)\n* [`gnmf`-Function](#gnmf-function)\n+ * [`mdedup`-Function](#mdedup-function)\n* [`msvm`-Function](#msvm-function)\n* [`naivebayes`-Function](#naivebayes-function)\n* [`outlier`-Function](#outlier-function)\n@@ -1275,6 +1276,48 @@ X = round(rand(rows = 10, cols = 10, min = 1, max = numClasses))\ny = toOneHot(X,numClasses)\n```\n+## `mdedup`-Function\n+\n+The `mdedup`-function implements builtin for deduplication using matching dependencies\n+(e.g. Street 0.95, City 0.90 -> ZIP 1.0) by Jaccard distance.\n+\n+### Usage\n+\n+```r\n+mdedup(X, Y, intercept, epsilon, lamda, maxIterations, verbose)\n+```\n+\n+\n+### Arguments\n+\n+| Name | Type | Default | Description |\n+| :------ | :------------- | -------- | :---------- |\n+| X | Frame | --- | Input Frame X |\n+| LHSfeatures | Matrix[Integer] | --- | A matrix 1xd with numbers of columns for MDs |\n+| LHSthreshold | Matrix[Double] | --- | A matrix 1xd with threshold values in interval [0, 1] for MDs |\n+| RHSfeatures | Matrix[Integer] | --- | A matrix 1xd with numbers of columns for MDs |\n+| RHSthreshold | Matrix[Double] | --- | A matrix 1xd with threshold values in interval [0, 1] for MDs |\n+| verbose | Boolean | False | Set to true to print duplicates.|\n+\n+\n+### Returns\n+\n+| Type | Default | Description |\n+| :-------------- | -------- | :---------- |\n+| Matrix[Integer] | --- | Matrix of duplicates (rows). |\n+\n+\n+### Example\n+\n+```r\n+X = as.frame(rand(rows = 50, cols = 10))\n+LHSfeatures = matrix(\"1 3 19\", 1, 2)\n+LHSthreshold = matrix(\"0.85 0.85\", 1, 2)\n+RHSfeatures = matrix(\"30\", 1, 1)\n+RHSthreshold = matrix(\"1.0\", 1, 1)\n+duplicates = mdedup(X, LHSfeatures, LHSthreshold, RHSfeatures, RHSthreshold, verbose = FALSE)\n+```\n+\n## `msvm`-Function\nThe `msvm`-function implements builtin multiclass SVM with squared slack variables\n"
},
{
"change_type": "MODIFY",
"old_path": "docs/site/dml-language-reference.md",
"new_path": "docs/site/dml-language-reference.md",
"diff": "@@ -2067,6 +2067,23 @@ print(toString(Z)) </code>\nWEST\nEAST\n+It is also possible to compute Jaccard similarity matrix of rows of a vector.\n+<code> dist = map(Xi, \"(x, y) -> UtilFunctions.jaccardSim(x, y)\") <br/>\n+print(toString(dist)) </code>\n+\n+ # FRAME: nrow = 10, ncol = 10\n+ # DOUBLE\n+ # 0,000 0,286 0,125 0,600 0,286 0,125 0,125 1,000 1,000 0,600\n+ 0,286 0,000 0,429 0,286 1,000 0,429 0,429 0,286 0,286 0,286\n+ 0,125 0,429 0,000 0,125 0,429 1,000 1,000 0,125 0,125 0,125\n+ 0,600 0,286 0,125 0,000 0,286 0,125 0,125 0,600 0,600 1,000\n+ 0,286 1,000 0,429 0,286 0,000 0,429 0,429 0,286 0,286 0,286\n+ 0,125 0,429 1,000 0,125 0,429 0,000 1,000 0,125 0,125 0,125\n+ 0,125 0,429 1,000 0,125 0,429 1,000 0,000 0,125 0,125 0,125\n+ 1,000 0,286 0,125 0,600 0,286 0,125 0,125 0,000 1,000 0,600\n+ 1,000 0,286 0,125 0,600 0,286 0,125 0,125 1,000 0,000 0,600\n+ 0,600 0,286 0,125 1,000 0,286 0,125 0,125 0,600 0,600 0,000\n+ #\n* * *\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/discoverFD.dml",
"new_path": "scripts/builtin/discoverFD.dml",
"diff": "@@ -56,7 +56,7 @@ m_discoverFD = function(Matrix[Double] X, Matrix[Double] Mask, Double threshold)\n# allocate output and working sets\nn = nrow(X)\nd = ncol(X)\n- FD = matrix(0, d, d)\n+ FD = diag(matrix(1, d, 1))\ncm = matrix(0, 1, d)\n# num distinct per column\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/mdedup.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#------------------------------------------------------------------------------------------------------------------\n+\n+# Implements builtin for deduplication using matching dependencies (e.g. Street 0.95, City 0.90 -> ZIP 1.0)\n+# and Jaccard distance.\n+#\n+# INPUT PARAMETERS:\n+# -----------------------------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# -----------------------------------------------------------------------------------------------------------------\n+# X Frame -- Input Frame X\n+# LHSfeatures Matrix[Integer] -- A matrix 1xd with numbers of columns for MDs\n+# (e.g. Street 0.95, City 0.90 -> ZIP 1.0)\n+# LHSthreshold Matrix[Double] -- A matrix 1xd with threshold values in interval [0, 1] for MDs\n+# RHSfeatures Matrix[Integer] -- A matrix 1xd with numbers of columns for MDs\n+# RHSthreshold Matrix[Double] -- A matrix 1xd with threshold values in interval [0, 1] for MDs\n+# verbose Boolean -- To print the output\n+# -----------------------------------------------------------------------------------------------------------------\n+#\n+# Output(s)\n+# -----------------------------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# -----------------------------------------------------------------------------------------------------------------\n+# MD Matrix[Double] --- Matrix nx1 of duplicates\n+\n+s_mdedup = function(Frame[String] X, Matrix[Double] LHSfeatures, Matrix[Double] LHSthreshold,\n+ Matrix[Double] RHSfeatures, Matrix[Double] RHSthreshold, Boolean verbose)\n+ return(Matrix[Double] MD)\n+{\n+ n = nrow(X)\n+ d = ncol(X)\n+\n+ if (0 > (ncol(LHSfeatures) + ncol(RHSfeatures)) > d)\n+ stop(\"Invalid input: thresholds should in interval [0, \" + d + \"]\")\n+\n+ if ((ncol(LHSfeatures) != ncol(LHSthreshold)) | (ncol(RHSfeatures) != ncol(RHSthreshold)))\n+ stop(\"Invalid input: number of thresholds and columns to compare should be equal for LHS and RHS.\")\n+\n+ if (max(LHSfeatures) > d | max(RHSfeatures) > d)\n+ stop(\"Invalid input: feature values should be less than \" + d)\n+\n+ if (sum(LHSthreshold > 1) > 0 | sum(RHSthreshold > 1) > 0)\n+ stop(\"Invalid input: threshold values should be in the interval [0, 1].\")\n+\n+ MD = matrix(0, n, 1)\n+ LHS_MD = getMDAdjacency(X, LHSfeatures, LHSthreshold)\n+ RHS_MD = matrix(0, n, n)\n+\n+ if (sum(LHS_MD) > 0) {\n+ RHS_MD = getMDAdjacency(X, RHSfeatures, RHSthreshold)\n+ }\n+\n+ MD = detectDuplicates(LHS_MD, RHS_MD)\n+\n+ if(verbose)\n+ print(toString(MD))\n+}\n+\n+getMDAdjacency = function(Frame[String] X, 
Matrix[Double] features, Matrix[Double] thresholds)\n+ return(Matrix[Double] adjacency)\n+{\n+ n = nrow(X)\n+ d = ncol(X)\n+ adjacency = matrix(0, n, n)\n+\n+ i = 1\n+ while (i <= ncol(features)) {\n+ # slice col\n+ pos = as.scalar(features[1, i])\n+ Xi = X[, pos]\n+ # distances between words in each row of col\n+ dist = map(Xi, \"(x, y) -> UtilFunctions.jaccardSim(x, y)\")\n+ jaccardDist = as.matrix(dist)\n+ jaccardDist = jaccardDist + t(jaccardDist)\n+ threshold = as.scalar(thresholds[1, i])\n+\n+ if(i == 1) {\n+ adjacency = jaccardDist >= threshold\n+ } else {\n+ adjacency = adjacency & (jaccardDist >= threshold)\n+ }\n+\n+ # break if one of MDs is false\n+ if (sum(adjacency) == 0)\n+ i = ncol(features)\n+\n+ i = i + 1\n+ }\n+}\n+\n+detectDuplicates = function(Matrix[Double] LHS_adj, Matrix[Double] RHS_adj)\n+ return(Matrix[Double] MD)\n+{\n+\n+ n = nrow(LHS_adj)\n+ adjacency = LHS_adj * RHS_adj\n+ # find duplicates\n+ # TODO size propagation issue of adjacency matrix inside components call\n+ colDuplicates = components(G=adjacency[1:n, 1:n], verbose=FALSE)\n+ MD = colDuplicates * (rowSums(adjacency[1:n, 1:n]) > 0)\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -97,6 +97,7 @@ public enum Builtins {\nDETECTSCHEMA(\"detectSchema\", false),\nDIAG(\"diag\", false),\nDISCOVER_FD(\"discoverFD\", true),\n+ DISCOVER_MD(\"mdedup\", true),\nDIST(\"dist\", true),\nDMV(\"dmv\", true),\nDROP_INVALID_TYPE(\"dropInvalidType\", false),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"diff": "@@ -1562,9 +1562,16 @@ public class BuiltinFunctionExpression extends DataIdentifier\ncheckMatrixFrameParam(getFirstExpr());\ncheckScalarParam(getSecondExpr());\noutput.setDataType(DataType.FRAME);\n+ if(_args[1].getText().contains(\"jaccardSim\")) {\n+ output.setDimensions(id.getDim1(), id.getDim1());\n+ output.setValueType(ValueType.FP64);\n+ }\n+ else {\noutput.setDimensions(id.getDim1(), 1);\n- output.setBlocksize (id.getBlocksize());\noutput.setValueType(ValueType.STRING);\n+ }\n+ output.setBlocksize (id.getBlocksize());\n+\nbreak;\ndefault:\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/BinaryFrameScalarSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/BinaryFrameScalarSPInstruction.java",
"diff": "@@ -44,6 +44,11 @@ public class BinaryFrameScalarSPInstruction extends BinarySPInstruction {\n// Create local compiled functions (once) and execute on RDD\nJavaPairRDD<Long, FrameBlock> out = in1.mapValues(new RDDStringProcessing(expression));\n+ if(expression.contains(\"jaccardSim\")) {\n+ long rows = sec.getDataCharacteristics(output.getName()).getRows();\n+ sec.getDataCharacteristics(output.getName()).setDimension(rows, rows);\n+ }\n+\nsec.setRDDHandleForVariable(output.getName(), out);\nsec.addLineageRDD(output.getName(), input1.getName());\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/FrameBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/FrameBlock.java",
"diff": "@@ -2109,17 +2109,14 @@ public class FrameBlock implements CacheBlock, Externalizable {\nreturn DMVUtils.syntacticalPatternDiscovery(this, Double.parseDouble(arguments[0]), arguments[1]);\n}\n}\n+ if(lambdaExpr.contains(\"jaccardSim\"))\n+ return mapDist(getCompiledFunction(lambdaExpr));\nreturn map(getCompiledFunction(lambdaExpr));\n}\n- public FrameBlock map(FrameBlockMapFunction lambdaExpression) {\n- return lambdaExpression.apply();\n- }\n-\npublic FrameBlock map (FrameMapFunction lambdaExpr) {\n// Prepare temporary output array\nString[][] output = new String[getNumRows()][getNumColumns()];\n-\n// Execute map function on all cells\nfor(int j = 0; j < getNumColumns(); j++) {\nArray input = getColumn(j);\n@@ -2131,64 +2128,54 @@ public class FrameBlock implements CacheBlock, Externalizable {\nreturn new FrameBlock(UtilFunctions.nCopies(getNumColumns(), ValueType.STRING), output);\n}\n+ public FrameBlock mapDist (FrameMapFunction lambdaExpr) {\n+ String[][] output = new String[getNumRows()][getNumRows()];\n+ for(String[] row : output)\n+ Arrays.fill(row, \"0.0\");\n+ Array input = getColumn(0);\n+ for(int j = 0; j < input._size - 1; j++) {\n+ for(int i = j + 1; i < input._size; i++)\n+ if(input.get(i) != null && input.get(j) != null) {\n+ output[j][i] = lambdaExpr.apply(String.valueOf(input.get(j)), String.valueOf(input.get(i)));\n+ // output[i][j] = output[j][i];\n+ }\n+ }\n+ return new FrameBlock(UtilFunctions.nCopies(getNumRows(), ValueType.STRING), output);\n+ }\n+\npublic static FrameMapFunction getCompiledFunction (String lambdaExpr) {\nString cname = \"StringProcessing\" + CLASS_ID.getNextID();\nStringBuilder sb = new StringBuilder();\nString[] parts = lambdaExpr.split(\"->\");\n-\nif(parts.length != 2)\nthrow new DMLRuntimeException(\"Unsupported lambda expression: \" + lambdaExpr);\n-\n- String varname = parts[0].trim();\n+ String[] varname = parts[0].replaceAll(\"[()]\", \"\").split(\",\");\nString expr = parts[1].trim();\n// construct class code\nsb.append(\"import org.apache.sysds.runtime.util.UtilFunctions;\\n\");\nsb.append(\"import org.apache.sysds.runtime.matrix.data.FrameBlock.FrameMapFunction;\\n\");\nsb.append(\"public class \" + cname + \" extends FrameMapFunction {\\n\");\n- sb.append(\"@Override\\n\");\n- sb.append(\"public String apply(String \"+varname+\") {\\n\");\n+ if(varname.length == 1) {\n+ sb.append(\"public String apply(String \" + varname[0].trim() + \") {\\n\");\nsb.append(\" return String.valueOf(\" + expr + \"); }}\\n\");\n-\n- // compile class, and create FrameMapFunction object\n- try {\n- return (FrameMapFunction) CodegenUtils\n- .compileClass(cname, sb.toString()).newInstance();\n- }\n- catch(InstantiationException | IllegalAccessException e) {\n- throw new DMLRuntimeException(\"Failed to compile FrameMapFunction.\", e);\n}\n+ else if(varname.length == 2) {\n+ sb.append(\"public String apply(String \" + varname[0].trim() + \", String \" + varname[1].trim() + \") {\\n\");\n+ sb.append(\" return String.valueOf(\" + expr + \"); }}\\n\");\n}\n-\n-\n- public FrameBlockMapFunction getCompiledFunctionBlock(String lambdaExpression) {\n- String cname = \"StringProcessing\"+CLASS_ID.getNextID();\n- StringBuilder sb = new StringBuilder();\n- String expr = lambdaExpression;\n-\n- sb.append(\"import org.apache.sysds.runtime.util.UtilFunctions;\\n\");\n- sb.append(\"import org.apache.sysds.runtime.matrix.data.FrameBlock.FrameBlockMapFunction;\\n\");\n- sb.append(\"public class \"+cname+\" extends FrameBlockMapFunction {\\n\");\n- 
sb.append(\"@Override\\n\");\n- sb.append(\"public FrameBlock apply() {\\n\");\n- sb.append(\" return \"+expr+\"; }}\\n\");\n-\n+ // compile class, and create FrameMapFunction object\ntry {\n- return (FrameBlockMapFunction) CodegenUtils\n- .compileClass(cname, sb.toString()).newInstance();\n+ return (FrameMapFunction) CodegenUtils.compileClass(cname, sb.toString()).newInstance();\n}\ncatch(InstantiationException | IllegalAccessException e) {\n- throw new DMLRuntimeException(\"Failed to compile FrameBlockMapFunction.\", e);\n+ throw new DMLRuntimeException(\"Failed to compile FrameMapFunction.\", e);\n}\n}\n- public static abstract class FrameMapFunction implements Serializable {\n+ public static class FrameMapFunction implements Serializable {\nprivate static final long serialVersionUID = -8398572153616520873L;\n- public abstract String apply(String input);\n- }\n-\n- public static abstract class FrameBlockMapFunction implements Serializable {\n- private static final long serialVersionUID = -8398573333616520876L;\n- public abstract FrameBlock apply();\n+ public String apply(String input) {return null;}\n+ public String apply(String input1, String input2) { return null;}\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/util/UtilFunctions.java",
"new_path": "src/main/java/org/apache/sysds/runtime/util/UtilFunctions.java",
"diff": "package org.apache.sysds.runtime.util;\n-import java.text.ParseException;\n-import java.text.SimpleDateFormat;\n-import java.util.*;\n-\nimport org.apache.commons.lang.ArrayUtils;\nimport org.apache.commons.math3.random.RandomDataGenerator;\nimport org.apache.sysds.common.Types.ValueType;\n@@ -35,6 +31,10 @@ import org.apache.sysds.runtime.matrix.data.MatrixIndexes;\nimport org.apache.sysds.runtime.matrix.data.Pair;\nimport org.apache.sysds.runtime.meta.TensorCharacteristics;\n+import java.text.ParseException;\n+import java.text.SimpleDateFormat;\n+import java.util.*;\n+\npublic class UtilFunctions {\n// private static final Log LOG = LogFactory.getLog(UtilFunctions.class.getName());\n@@ -835,6 +835,17 @@ public class UtilFunctions {\n.map(DATE_FORMATS::get).orElseThrow(() -> new NullPointerException(\"Unknown date format.\"));\n}\n+ public static double jaccardSim(String x, String y) {\n+ Set<String> charsX = new LinkedHashSet<>(Arrays.asList(x.split(\"(?!^)\")));\n+ Set<String> charsY = new LinkedHashSet<>(Arrays.asList(y.split(\"(?!^)\")));\n+\n+ final int sa = charsX.size();\n+ final int sb = charsY.size();\n+ charsX.retainAll(charsY);\n+ final int intersection = charsX.size();\n+ return 1d / (sa + sb - charsX.size()) * intersection;\n+ }\n+\n/**\n* Generates a random FrameBlock with given parameters.\n*\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinMDTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.lops.LopProperties;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\n+public class BuiltinMDTest extends AutomatedTestBase {\n+ private final static String TEST_NAME = \"matching_dependency\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + BuiltinMDTest.class.getSimpleName() + \"/\";\n+\n+ @Parameterized.Parameter()\n+ public double[][] LHSf;\n+\n+ @Parameterized.Parameter(1)\n+ public double[][] LHSt;\n+\n+ @Parameterized.Parameter(2)\n+ public double[][] RHSf;\n+\n+ @Parameterized.Parameter(3)\n+ public double[][] RHSt;\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {\n+ {new double[][] {{1}}, new double[][] {{0.95}},\n+ new double[][] {{5}}, new double[][] {{0.65}}},\n+\n+ {new double[][] {{1,3}}, new double[][] {{0.7,0.8}},\n+ new double[][] {{5}}, new double[][] {{0.8}}},\n+\n+ {new double[][] {{1,4,5}}, new double[][] {{0.9,0.9,0.9}},\n+ new double[][] {{6}}, new double[][] {{0.9}}},\n+\n+ {new double[][] {{1,4,5}}, new double[][] {{0.75,0.6,0.9}},\n+ new double[][] {{3}}, new double[][] {{0.8}}},\n+ });\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"D\"}));\n+ if (TEST_CACHE_ENABLED) {\n+ setOutAndExpectedDeletionDisabled(true);\n+ }\n+ }\n+\n+ @Test\n+ public void testMDCP() {\n+ double[][] D = {\n+ {7567, 231, 1231, 1232, 122, 321},\n+ {5321, 23123, 122, 123, 1232, 11},\n+ {7267, 3, 223, 432, 1132, 0},\n+ {7267, 3, 223, 432, 1132, 500},\n+ {7254, 3, 223, 432, 1132, 0},\n+ };\n+ runMDTests(D, LHSf, LHSt, RHSf, RHSt, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testMDSP() {\n+ double[][] D = {\n+ {7567, 231, 1231, 1232, 122, 321},\n+ {5321, 23123, 122, 123, 1232, 11},\n+ {7267, 3, 223, 432, 1132, 0},\n+ {7267, 3, 223, 432, 1132, 500},\n+ {7254, 3, 223, 432, 1132, 0},\n+ };\n+ runMDTests(D, LHSf, LHSt, RHSf, RHSt, LopProperties.ExecType.SPARK);\n+ }\n+\n+ private void runMDTests(double [][] X , double[][] LHSf, double[][] LHSt, double[][] RHSf, double[][] RHSt, LopProperties.ExecType instType) {\n+ Types.ExecMode platformOld = 
setExecMode(instType);\n+ try\n+ {\n+ loadTestConfiguration(getTestConfiguration(TEST_NAME));\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[]{\"-stats\",\"-args\", input(\"X\"),\n+ input(\"LHSf\"), input(\"LHSt\"), input(\"RHSf\"), input(\"RHSt\"), output(\"B\")};\n+\n+ double[][] A = getRandomMatrix(20, 6, 50, 500, 1, 2);\n+ System.arraycopy(X, 0, A, 0, X.length);\n+\n+ writeInputMatrixWithMTD(\"X\", A, false);\n+ writeInputMatrixWithMTD(\"LHSf\", LHSf, true);\n+ writeInputMatrixWithMTD(\"LHSt\", LHSt, true);\n+ writeInputMatrixWithMTD(\"RHSf\", RHSf, true);\n+ writeInputMatrixWithMTD(\"RHSt\", RHSt, true);\n+\n+ runTest(true, false, null, -1);\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/matching_dependency.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# X = read($1, data_type = \"frame\", format = \"csv\", header = FALSE);\n+X = as.frame(read($1))\n+LHSf = read($2);\n+LHSt = read($3);\n+RHSf = read($4);\n+RHSt = read($5);\n+B = mdedup(X, LHSf, LHSt, RHSf, RHSt, TRUE);\n+write(B, $6);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2782] MDedup Builtin for finding duplicate rows
DIA project WS2020/21.
Closes #1139.
Date: Mon Jan 11 23:50:57 2021 +0100 |
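The mdedup record above adds a jaccardSim helper to UtilFunctions that scores two cell values by the overlap of their character sets. Below is a standalone sketch of that same computation using only the JDK; the class name and the main method are illustrative and not part of the SystemDS API.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class JaccardSketch {
    // Jaccard similarity of the character sets of two strings:
    // |chars(x) ∩ chars(y)| / |chars(x) ∪ chars(y)|
    public static double jaccardSim(String x, String y) {
        Set<String> charsX = new LinkedHashSet<>(Arrays.asList(x.split("(?!^)")));
        Set<String> charsY = new LinkedHashSet<>(Arrays.asList(y.split("(?!^)")));
        int sizeX = charsX.size();
        int sizeY = charsY.size();
        charsX.retainAll(charsY);               // charsX now holds the intersection
        int intersection = charsX.size();
        return (double) intersection / (sizeX + sizeY - intersection);
    }

    public static void main(String[] args) {
        // "7267" -> {7,2,6}, "7254" -> {7,2,5,4}; intersection {7,2}, union size 5 -> 0.4
        System.out.println(jaccardSim("7267", "7254"));
    }
}
```

The mdedup script then thresholds these pairwise similarities per column to build the adjacency matrix used for duplicate detection.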
49,689 | 13.01.2021 14:38:49 | -3,600 | 0febfb1f5060b2043e4ff1d48b70c94bd7e07ec4 | Improve serialization of dedup DAGs
This patch cuts the dedup patch DAGs at placeholders just
after each loop iteration, instead of at the time of
serialization. This will help comparing compressed DAGs. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageDedupUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageDedupUtils.java",
"diff": "@@ -21,6 +21,7 @@ package org.apache.sysds.runtime.lineage;\nimport java.util.ArrayList;\nimport java.util.Map;\n+import java.util.Stack;\nimport org.apache.sysds.runtime.controlprogram.BasicProgramBlock;\nimport org.apache.sysds.runtime.controlprogram.ForProgramBlock;\n@@ -106,9 +107,6 @@ public class LineageDedupUtils {\nString ph = LineageItemUtils.LPLACEHOLDER;\nfor (int i=0; i<liinputs.length; i++) {\n// Wrap the inputs with order-preserving placeholders.\n- // An alternative way would be to replace the non-literal leaves with\n- // placeholders after each iteration, but that requires a full DAG\n- // traversal after each iteration.\nLineageItem phInput = new LineageItem(ph+String.valueOf(i), new LineageItem[] {liinputs[i]});\n_tmpLineage.set(inputnames.get(i), phInput);\n}\n@@ -125,11 +123,16 @@ public class LineageDedupUtils {\npublic static void setDedupMap(LineageDedupBlock ldb, long takenPath) {\n// if this iteration took a new path, store the corresponding map\n- if (ldb.getMap(takenPath) == null)\n- ldb.setMap(takenPath, _tmpLineage.getLineageMap());\n+ if (ldb.getMap(takenPath) == null) {\n+ LineageMap patchMap = _tmpLineage.getLineageMap();\n+ // Cut the DAGs at placeholders\n+ cutAtPlaceholder(patchMap);\n+ ldb.setMap(takenPath, patchMap);\n+ }\n}\nprivate static void initLocalLineage(ExecutionContext ec) {\n+ _mainLineage = ec.getLineage();\n_tmpLineage = _tmpLineage == null ? new Lineage() : _tmpLineage;\n_tmpLineage.clearLineageMap();\n_tmpLineage.clearDedupBlock();\n@@ -165,6 +168,39 @@ public class LineageDedupUtils {\nreturn sb.toString();\n}\n+ public static void cutAtPlaceholder(LineageMap lmap) {\n+ // Gather all the DAG roots and cut each at placeholder\n+ for (Map.Entry<String, LineageItem> litem : lmap.getTraces().entrySet()) {\n+ LineageItem root = litem.getValue();\n+ root.resetVisitStatusNR();\n+ cutAtPlaceholder(root);\n+ }\n+ }\n+\n+ public static void cutAtPlaceholder(LineageItem root) {\n+ Stack<LineageItem> q = new Stack<>();\n+ q.push(root);\n+ while (!q.empty()) {\n+ LineageItem tmp = q.pop();\n+ if (tmp.isVisited())\n+ continue;\n+\n+ if (tmp.getOpcode().startsWith(LineageItemUtils.LPLACEHOLDER)) {\n+ // set inputs to null\n+ tmp.resetInputs();\n+ tmp.setVisited();\n+ continue;\n+ }\n+\n+ if (tmp.getInputs() != null)\n+ for (int i=0; i<tmp.getInputs().length; i++) {\n+ LineageItem li = tmp.getInputs()[i];\n+ q.push(li);\n+ }\n+ tmp.setVisited();\n+ }\n+ }\n+\n//------------------------------------------------------------------------------\n/* The below static functions help to compute the number of distinct paths\n* in any program block, and are used for diagnostic purposes. These will\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItem.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItem.java",
"diff": "@@ -31,7 +31,7 @@ public class LineageItem {\nprivate final long _id;\nprivate final String _opcode;\nprivate final String _data;\n- private final LineageItem[] _inputs;\n+ private LineageItem[] _inputs;\nprivate int _hash = 0;\nprivate long _distLeaf2Node;\n// init visited to true to ensure visited items are\n@@ -93,6 +93,11 @@ public class LineageItem {\nreturn _inputs;\n}\n+ public void resetInputs() {\n+ _inputs = null;\n+ _hash = 0;\n+ }\n+\npublic void setInput(int i, LineageItem item) {\n_inputs[i] = item;\n_hash = 0; //reset hash\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItemUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItemUtils.java",
"diff": "@@ -114,6 +114,10 @@ public class LineageItemUtils {\nsb.append(\"(\").append(getString(li)).append(\") \");\nif (li.isLeaf()) {\n+ if (li.getOpcode().startsWith(LPLACEHOLDER))\n+ //This is a special node. Serialize opcode instead of data\n+ sb.append(li.getOpcode()).append(\" \");\n+ else\nsb.append(li.getData()).append(\" \");\n} else {\nif (li.getType() == LineageItemType.Dedup)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageParser.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageParser.java",
"diff": "@@ -63,6 +63,11 @@ public class LineageParser\nswitch (type) {\ncase Creation:\n+ if (representation.startsWith(LineageItemUtils.LPLACEHOLDER)) {\n+ // Handle the placeholder nodes\n+ li = new LineageItem(id, representation, \"Create\"+representation);\n+ break;\n+ }\nInstruction inst = InstructionParser.parseSingleInstruction(representation);\nif (!(inst instanceof LineageTraceable))\nthrow new ParseException(\"Invalid Instruction (\" + inst.getOpcode() + \") traced\");\n@@ -96,11 +101,11 @@ public class LineageParser\nthrow new ParseException(\"Invalid length ot lineage item \"+tokens.length+\".\");\nString opcode = tokens[0];\n- if (opcode.startsWith(LineageItemUtils.LPLACEHOLDER)) {\n+ /*if (opcode.startsWith(LineageItemUtils.LPLACEHOLDER)) {\n// Convert this to a leaf node (creation type)\nString data = opcode;\nreturn new LineageItem(id, data, \"Create\"+opcode);\n- }\n+ }*/\nArrayList<LineageItem> inputs = new ArrayList<>();\nfor( int i=1; i<tokens.length; i++ ) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageRecomputeUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageRecomputeUtils.java",
"diff": "@@ -82,6 +82,10 @@ public class LineageRecomputeUtils {\npublic static Map<String, DedupLoopItem> loopPatchMap = new HashMap<>();\npublic static Data parseNComputeLineageTrace(String mainTrace, String dedupPatches) {\n+ if (DEBUG) {\n+ System.out.println(mainTrace);\n+ System.out.println(dedupPatches);\n+ }\nLineageItem root = LineageParser.parseLineageTrace(mainTrace);\nif (dedupPatches != null)\nLineageParser.parseLineageTraceDedup(dedupPatches);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2581] Improve serialization of dedup DAGs
This patch cuts the dedup patch DAGs at placeholders just
after each loop iteration, instead of at the time of
serialization. This will help comparing compressed DAGs. |
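The core of the change above is an iterative traversal that severs each patch DAG at designated placeholder nodes right after a loop iteration, so the placeholder becomes a leaf. The following is a simplified, self-contained illustration of that idea; the Node class is a stand-in for demonstration and not the actual LineageItem API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class Node {
    String opcode;
    List<Node> inputs = new ArrayList<>();
    boolean visited = false;
    Node(String opcode) { this.opcode = opcode; }
}

public class CutAtPlaceholderSketch {
    // Sever the DAG at nodes whose opcode marks them as placeholders,
    // so everything below a placeholder is dropped from the patch DAG.
    static void cutAtPlaceholder(Node root, String placeholderPrefix) {
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Node cur = stack.pop();
            if (cur.visited)
                continue;
            cur.visited = true;
            if (cur.opcode.startsWith(placeholderPrefix)) {
                cur.inputs.clear();   // cut: placeholder becomes a leaf
                continue;
            }
            for (Node in : cur.inputs)
                stack.push(in);
        }
    }
}
```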
49,706 | 13.01.2021 17:57:40 | -3,600 | 44f29960b2e2c36e9e529817b3226ec9972c0766 | [MINOR] Change sensitivity of ALSTest | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinALSTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinALSTest.java",
"diff": "@@ -35,7 +35,7 @@ public class BuiltinALSTest extends AutomatedTestBase {\nprivate final static String TEST_DIR = \"functions/builtin/\";\nprivate static final String TEST_CLASS_DIR = TEST_DIR + BuiltinALSTest.class.getSimpleName() + \"/\";\n- private final static double eps = 0.00001;\n+ private final static double eps = 0.0001;\nprivate final static int rows = 6;\nprivate final static int cols = 6;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Change sensitivity of ALSTest |
49,689 | 15.01.2021 16:00:10 | -3,600 | 7f6182715414e9685d13d7a9f02dbe95a8f599c0 | Reuse of FED instruction results in Coordinator
This patch introduces lineage-based caching of FED instructions
in the coordinator. This enables skipping federated execution entirely
if the output is available in the cache. To avoid unnecessary pulling,
we only cache if the output object is not federated. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/CPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/CPInstruction.java",
"diff": "@@ -95,6 +95,8 @@ public abstract class CPInstruction extends Instruction\n//robustness federated instructions (runtime assignment)\ntmp = FEDInstructionUtils.checkAndReplaceCP(tmp, ec);\n+ //NOTE: Retracing of lineage is not needed as the lineage trace\n+ //is same for an instruction and its FED version.\ntmp = PrivacyPropagator.preprocessInstruction(tmp, ec);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -36,12 +36,14 @@ import org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyze\nimport org.apache.sysds.runtime.instructions.CPInstructionParser;\nimport org.apache.sysds.runtime.instructions.Instruction;\nimport org.apache.sysds.runtime.instructions.cp.CPInstruction.CPType;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.ComputationCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.instructions.cp.MMTSJCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MultiReturnBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ScalarObject;\n+import org.apache.sysds.runtime.instructions.fed.ComputationFEDInstruction;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig.LineageCacheStatus;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig.ReuseCacheType;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n@@ -84,8 +86,10 @@ public class LineageCache\n//NOTE: the check for computation CP instructions ensures that the output\n// will always fit in memory and hence can be pinned unconditionally\nif (LineageCacheConfig.isReusable(inst, ec)) {\n- ComputationCPInstruction cinst = (ComputationCPInstruction) inst;\n- LineageItem instLI = cinst.getLineageItem(ec).getValue();\n+ ComputationCPInstruction cinst = inst instanceof ComputationCPInstruction ? (ComputationCPInstruction)inst : null;\n+ ComputationFEDInstruction cfinst = inst instanceof ComputationFEDInstruction ? (ComputationFEDInstruction)inst : null;\n+\n+ LineageItem instLI = (cinst != null) ? cinst.getLineageItem(ec).getValue():cfinst.getLineageItem(ec).getValue();\nList<MutablePair<LineageItem, LineageCacheEntry>> liList = null;\nif (inst instanceof MultiReturnBuiltinCPInstruction) {\nliList = new ArrayList<>();\n@@ -119,7 +123,10 @@ public class LineageCache\n//create a placeholder if no reuse to avoid redundancy\n//(e.g., concurrent threads that try to start the computation)\nif(e == null && isMarkedForCaching(inst, ec)) {\n+ if (cinst != null)\nputIntern(item.getKey(), cinst.output.getDataType(), null, null, 0);\n+ else\n+ putIntern(item.getKey(), cfinst.output.getDataType(), null, null, 0);\n//FIXME: different o/p datatypes for MultiReturnBuiltins.\n}\n}\n@@ -134,8 +141,10 @@ public class LineageCache\nif (inst instanceof MultiReturnBuiltinCPInstruction)\noutName = ((MultiReturnBuiltinCPInstruction)inst).\ngetOutput(entry.getKey().getOpcode().charAt(entry.getKey().getOpcode().length()-1)-'0').getName();\n- else\n+ else if (inst instanceof ComputationCPInstruction)\noutName = cinst.output.getName();\n+ else\n+ outName = cfinst.output.getName();\nif (e.isMatrixValue())\nec.setMatrixOutput(outName, e.getMBValue());\n@@ -248,7 +257,9 @@ public class LineageCache\nif (LineageCacheConfig.isReusable(inst, ec) ) {\nLineageItem item = ((LineageTraceable) inst).getLineageItem(ec).getValue();\n//This method is called only to put matrix value\n- MatrixObject mo = ec.getMatrixObject(((ComputationCPInstruction) inst).output);\n+ MatrixObject mo = inst instanceof ComputationCPInstruction ?\n+ ec.getMatrixObject(((ComputationCPInstruction) inst).output) :\n+ ec.getMatrixObject(((ComputationFEDInstruction) inst).output);\nsynchronized( _cache ) {\nputIntern(item, DataType.MATRIX, mo.acquireReadAndRelease(), null, computetime);\n}\n@@ -278,7 +289,9 @@ public class 
LineageCache\n}\n}\nelse\n- liData = Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationCPInstruction) inst).output)));\n+ liData = inst instanceof ComputationCPInstruction ?\n+ Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationCPInstruction) inst).output))) :\n+ Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationFEDInstruction) inst).output)));\nsynchronized( _cache ) {\nfor (Pair<LineageItem, Data> entry : liData) {\nLineageItem item = entry.getKey();\n@@ -291,6 +304,13 @@ public class LineageCache\ncontinue;\n}\n+ if (LineageCacheConfig.isOutputFederated(inst, data)) {\n+ // Do not cache federated outputs (in the coordinator)\n+ // Cannot skip putting the placeholder as the above is only known after execution\n+ _cache.remove(item);\n+ continue;\n+ }\n+\nMatrixBlock mb = (data instanceof MatrixObject) ?\n((MatrixObject)data).acquireReadAndRelease() : null;\nlong size = mb != null ? mb.getInMemorySize() : ((ScalarObject)data).getSize();\n@@ -456,8 +476,13 @@ public class LineageCache\nif (!LineageCacheConfig.getCompAssRW())\nreturn true;\n- if (((ComputationCPInstruction)inst).output.isMatrix()) {\n- MatrixObject mo = ec.getMatrixObject(((ComputationCPInstruction)inst).output);\n+ CPOperand output = inst instanceof ComputationCPInstruction ?\n+ ((ComputationCPInstruction)inst).output :\n+ ((ComputationFEDInstruction)inst).output;\n+ if (output.isMatrix()) {\n+ MatrixObject mo = inst instanceof ComputationCPInstruction ?\n+ ec.getMatrixObject(((ComputationCPInstruction)inst).output) :\n+ ec.getMatrixObject(((ComputationFEDInstruction)inst).output);\n//limit this to full reuse as partial reuse is applicable even for loop dependent operation\nreturn !(LineageCacheConfig.getCacheType() == ReuseCacheType.REUSE_FULL\n&& !mo.isMarked());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -21,12 +21,15 @@ package org.apache.sysds.runtime.lineage;\nimport org.apache.commons.lang3.ArrayUtils;\nimport org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.instructions.Instruction;\nimport org.apache.sysds.runtime.instructions.cp.ComputationCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.instructions.cp.DataGenCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ListIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\n+import org.apache.sysds.runtime.instructions.fed.ComputationFEDInstruction;\nimport java.util.Comparator;\n@@ -186,6 +189,7 @@ public class LineageCacheConfig\npublic static boolean isReusable (Instruction inst, ExecutionContext ec) {\nboolean insttype = inst instanceof ComputationCPInstruction\n+ || inst instanceof ComputationFEDInstruction\n&& !(inst instanceof ListIndexingCPInstruction);\nboolean rightop = (ArrayUtils.contains(REUSE_OPCODES, inst.getOpcode())\n|| (inst.getOpcode().equals(\"append\") && isVectorAppend(inst, ec))\n@@ -193,7 +197,8 @@ public class LineageCacheConfig\n|| (inst instanceof DataGenCPInstruction) && ((DataGenCPInstruction) inst).isMatrixCall());\nboolean updateInplace = (inst instanceof MatrixIndexingCPInstruction)\n&& ec.getMatrixObject(((ComputationCPInstruction)inst).input1).getUpdateType().isInPlace();\n- return insttype && rightop && !updateInplace;\n+ boolean federatedOutput = false;\n+ return insttype && rightop && !updateInplace && !federatedOutput;\n}\nprivate static boolean isVectorAppend(Instruction inst, ExecutionContext ec) {\n@@ -205,6 +210,16 @@ public class LineageCacheConfig\nreturn(c1 == 1 || c2 == 1);\n}\n+ public static boolean isOutputFederated(Instruction inst, Data data) {\n+ if (!(inst instanceof ComputationFEDInstruction))\n+ return false;\n+ // return true if the output matrixobject is federated\n+ if (inst instanceof ComputationFEDInstruction)\n+ if (data instanceof MatrixObject && ((MatrixObject) data).isFederated())\n+ return true;\n+ return false;\n+ }\n+\npublic static void setConfigTsmmCbind(ReuseCacheType ct) {\n_cacheType = ct;\n_itemH = CachedItemHead.TSMM;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/lineage/FedFullReuseTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/lineage/FedFullReuseTest.java",
"diff": "package org.apache.sysds.test.functions.lineage;\n+import static org.junit.Assert.assertTrue;\n+\nimport java.util.Arrays;\nimport java.util.Collection;\n@@ -38,7 +40,8 @@ import org.junit.runners.Parameterized;\npublic class FedFullReuseTest extends AutomatedTestBase {\nprivate final static String TEST_DIR = \"functions/lineage/\";\n- private final static String TEST_NAME = \"FedFullReuse1\";\n+ private final static String TEST_NAME1 = \"FedFullReuse1\";\n+ private final static String TEST_NAME2 = \"FedFullReuse2\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + FedFullReuseTest.class.getSimpleName() + \"/\";\nprivate final static int blocksize = 1024;\n@@ -50,7 +53,8 @@ public class FedFullReuseTest extends AutomatedTestBase {\n@Override\npublic void setUp() {\nTestUtils.clearAssertionInformation();\n- addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"Z\"}));\n+ addTestConfiguration(TEST_NAME1, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME1, new String[] {\"Z\"}));\n+ addTestConfiguration(TEST_NAME2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME2, new String[] {\"Z\"}));\n}\[email protected]\n@@ -65,12 +69,20 @@ public class FedFullReuseTest extends AutomatedTestBase {\n}\n@Test\n- public void federatedReuseMM() { //reuse inside federated workers\n- federatedReuse();\n+ public void federatedOutputReuse() {\n+ //don't cache federated outputs in the coordinator\n+ //reuse inside federated workers\n+ federatedReuse(TEST_NAME1);\n+ }\n+\n+ @Test\n+ public void nonfederatedOutputReuse() {\n+ //cache non-federated outputs in the coordinator\n+ federatedReuse(TEST_NAME2);\n}\n- public void federatedReuse() {\n- getAndLoadTestConfiguration(TEST_NAME);\n+ public void federatedReuse(String test) {\n+ getAndLoadTestConfiguration(test);\nString HOME = SCRIPT_DIR + TEST_DIR;\n// write input matrices\n@@ -93,11 +105,11 @@ public class FedFullReuseTest extends AutomatedTestBase {\nThread t1 = startLocalFedWorkerThread(port1, otherargs, FED_WORKER_WAIT_S);\nThread t2 = startLocalFedWorkerThread(port2, otherargs);\n- TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ TestConfiguration config = availableTestConfigurations.get(test);\nloadTestConfiguration(config);\n// Run reference dml script with normal matrix. 
Reuse of ba+*.\n- fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ fullDMLScriptName = HOME + test + \"Reference.dml\";\nprogramArgs = new String[] {\"-stats\", \"-lineage\", \"reuse_full\",\n\"-nvargs\", \"X1=\" + input(\"X1\"), \"X2=\" + input(\"X2\"), \"Y1=\" + input(\"Y1\"),\n\"Y2=\" + input(\"Y2\"), \"Z=\" + expected(\"Z\")};\n@@ -106,7 +118,7 @@ public class FedFullReuseTest extends AutomatedTestBase {\n// Run actual dml script with federated matrix\n// The fed workers reuse ba+*\n- fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ fullDMLScriptName = HOME + test + \".dml\";\nprogramArgs = new String[] {\"-stats\",\"-lineage\", \"reuse_full\",\n\"-nvargs\", \"X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n\"X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n@@ -114,6 +126,7 @@ public class FedFullReuseTest extends AutomatedTestBase {\n\"Y2=\" + TestUtils.federatedAddress(port2, input(\"Y2\")), \"r=\" + rows, \"c=\" + cols, \"Z=\" + output(\"Z\")};\nrunTest(true, false, null, -1);\nlong mmCount_fed = Statistics.getCPHeavyHitterCount(\"ba+*\");\n+ long fedMMCount = Statistics.getCPHeavyHitterCount(\"fed_ba+*\");\n// compare results\ncompareResults(1e-9);\n@@ -121,6 +134,19 @@ public class FedFullReuseTest extends AutomatedTestBase {\n// #federated execution of ba+* = #threads times #non-federated execution of ba+* (after reuse)\nAssert.assertTrue(\"Violated reuse count: \"+mmCount_fed+\" == \"+mmCount*2,\nmmCount_fed == mmCount * 2); // #threads = 2\n+ switch(test) {\n+ case TEST_NAME1:\n+ // If the o/p is federated, fed_ba+* will be called everytime\n+ // but the workers should be able to reuse ba+*\n+ assertTrue(fedMMCount > mmCount_fed);\n+ break;\n+ case TEST_NAME2:\n+ // If the o/p is non-federated, fed_ba+* will be called once\n+ // and each worker will call ba+* once.\n+ assertTrue(fedMMCount < mmCount_fed);\n+ break;\n+ }\n+\nTestUtils.shutdownThreads(t1, t2);\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/lineage/FedFullReuse2.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($X1, $X2),\n+ ranges=list(list(0, 0), list($r / 2, $c), list($r / 2, 0), list($r, $c)));\n+\n+vec = rand(rows=1, cols=100, seed=42);\n+\n+for(i in 1:10)\n+ Z = vec %*% X;\n+\n+write(Z, $Z);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/lineage/FedFullReuse2Reference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($X1), read($X2));\n+vec = rand(rows=1, cols=100, seed=42);\n+\n+for(i in 1:10)\n+ Z = vec %*% X;\n+\n+write(Z, $Z);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2795] Reuse of FED instruction results in Coordinator
This patch introduces lineage-based caching of FED instructions
in the coordinator. This enables skipping federated execution entirely
if the output is available in the cache. To avoid unnecessary pulling,
we only cache if the output object is not federated. |
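Conceptually, the reuse added above is a lookup keyed by an instruction's lineage trace, with the extra admission rule that federated outputs are never cached in the coordinator (to avoid pulling results from the workers). The toy sketch below shows only that policy; the Output class and method names are hypothetical and independent of the real LineageCache implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class FedReuseSketch {
    static class Output {
        double[] value;
        boolean federated;   // true if the result still lives on the federated workers
        Output(double[] value, boolean federated) { this.value = value; this.federated = federated; }
    }

    private final Map<String, Output> cache = new HashMap<>();

    // Return the cached result for this lineage key, or compute it and
    // (conditionally) cache it. Federated outputs are not admitted.
    Output reuseOrCompute(String lineageKey, Supplier<Output> compute) {
        Output cached = cache.get(lineageKey);
        if (cached != null)
            return cached;              // full reuse: skip re-execution entirely
        Output out = compute.get();
        if (!out.federated)             // only cache non-federated outputs
            cache.put(lineageKey, out);
        return out;
    }
}
```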
49,720 | 15.01.2021 18:54:57 | -3,600 | 068f631b748f0ff425c94827c48667200afe5b36 | [MINOR] Mdedup size propagation fix for Spark context | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/mdedup.dml",
"new_path": "scripts/builtin/mdedup.dml",
"diff": "@@ -113,7 +113,6 @@ detectDuplicates = function(Matrix[Double] LHS_adj, Matrix[Double] RHS_adj)\nn = nrow(LHS_adj)\nadjacency = LHS_adj * RHS_adj\n# find duplicates\n- # TODO size propagation issue of adjacency matrix inside components call\n- colDuplicates = components(G=adjacency[1:n, 1:n], verbose=FALSE)\n- MD = colDuplicates * (rowSums(adjacency[1:n, 1:n]) > 0)\n+ colDuplicates = components(G=adjacency, verbose=FALSE)\n+ MD = colDuplicates * (rowSums(adjacency) > 0)\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/BinaryOp.java",
"new_path": "src/main/java/org/apache/sysds/hops/BinaryOp.java",
"diff": "@@ -941,6 +941,14 @@ public class BinaryOp extends MultiThreadedHop\nsetDim1(0);\nsetDim2(0);\n}\n+ else if(getDataType() == DataType.FRAME)\n+ {\n+ if(getInput().toString().split(\",\")[2].contains(\"UtilFunctions.jaccardSim\"))\n+ {\n+ setDim1(input1.getDim1());\n+ setDim2(input1.getDim1());\n+ }\n+ }\nelse //MATRIX OUTPUT\n{\n//TODO quantile\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"diff": "@@ -1562,14 +1562,8 @@ public class BuiltinFunctionExpression extends DataIdentifier\ncheckMatrixFrameParam(getFirstExpr());\ncheckScalarParam(getSecondExpr());\noutput.setDataType(DataType.FRAME);\n- if(_args[1].getText().contains(\"jaccardSim\")) {\n- output.setDimensions(id.getDim1(), id.getDim1());\n- output.setValueType(ValueType.FP64);\n- }\n- else {\noutput.setDimensions(id.getDim1(), 1);\noutput.setValueType(ValueType.STRING);\n- }\noutput.setBlocksize (id.getBlocksize());\nbreak;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/BinaryFrameScalarSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/BinaryFrameScalarSPInstruction.java",
"diff": "@@ -44,11 +44,6 @@ public class BinaryFrameScalarSPInstruction extends BinarySPInstruction {\n// Create local compiled functions (once) and execute on RDD\nJavaPairRDD<Long, FrameBlock> out = in1.mapValues(new RDDStringProcessing(expression));\n- if(expression.contains(\"jaccardSim\")) {\n- long rows = sec.getDataCharacteristics(output.getName()).getRows();\n- sec.getDataCharacteristics(output.getName()).setDimension(rows, rows);\n- }\n-\nsec.setRDDHandleForVariable(output.getName(), out);\nsec.addLineageRDD(output.getName(), input1.getName());\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Mdedup size propagation fix for Spark context |
49,720 | 15.01.2021 19:40:17 | -3,600 | 8e5c5f3dc52e5c99bdea3bce39eece71ce91a6c8 | [MINOR] Refactoring commit | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/BinaryOp.java",
"new_path": "src/main/java/org/apache/sysds/hops/BinaryOp.java",
"diff": "@@ -943,7 +943,7 @@ public class BinaryOp extends MultiThreadedHop\n}\nelse if(getDataType() == DataType.FRAME)\n{\n- if(getInput().toString().split(\",\")[2].contains(\"UtilFunctions.jaccardSim\"))\n+ if(getInput().toString().contains(\"UtilFunctions.jaccardSim\"))\n{\nsetDim1(input1.getDim1());\nsetDim2(input1.getDim1());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/builtin/matching_dependency.dml",
"new_path": "src/test/scripts/functions/builtin/matching_dependency.dml",
"diff": "@@ -25,5 +25,5 @@ LHSf = read($2);\nLHSt = read($3);\nRHSf = read($4);\nRHSt = read($5);\n-B = mdedup(X, LHSf, LHSt, RHSf, RHSt, TRUE);\n+B = mdedup(X, LHSf, LHSt, RHSf, RHSt, FALSE);\nwrite(B, $6);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Refactoring commit 068f631 |
49,720 | 15.01.2021 21:03:08 | -3,600 | 178bdd7ae292323b7a8d57d5ce2612437958800e | [MINOR] Fixing failed test by reverting the changes in BinaryOp file. | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/mdedup.dml",
"new_path": "scripts/builtin/mdedup.dml",
"diff": "@@ -113,6 +113,6 @@ detectDuplicates = function(Matrix[Double] LHS_adj, Matrix[Double] RHS_adj)\nn = nrow(LHS_adj)\nadjacency = LHS_adj * RHS_adj\n# find duplicates\n- colDuplicates = components(G=adjacency, verbose=FALSE)\n- MD = colDuplicates * (rowSums(adjacency) > 0)\n+ colDuplicates = components(G=adjacency[1:n, 1:n], verbose=FALSE)\n+ MD = colDuplicates * (rowSums(adjacency[1:n, 1:n]) > 0)\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/BinaryOp.java",
"new_path": "src/main/java/org/apache/sysds/hops/BinaryOp.java",
"diff": "@@ -941,14 +941,6 @@ public class BinaryOp extends MultiThreadedHop\nsetDim1(0);\nsetDim2(0);\n}\n- else if(getDataType() == DataType.FRAME)\n- {\n- if(getInput().toString().contains(\"UtilFunctions.jaccardSim\"))\n- {\n- setDim1(input1.getDim1());\n- setDim2(input1.getDim1());\n- }\n- }\nelse //MATRIX OUTPUT\n{\n//TODO quantile\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"diff": "@@ -1562,8 +1562,14 @@ public class BuiltinFunctionExpression extends DataIdentifier\ncheckMatrixFrameParam(getFirstExpr());\ncheckScalarParam(getSecondExpr());\noutput.setDataType(DataType.FRAME);\n+ if(_args[1].getText().contains(\"jaccardSim\")) {\n+ output.setDimensions(id.getDim1(), id.getDim1());\n+ output.setValueType(ValueType.FP64);\n+ }\n+ else {\noutput.setDimensions(id.getDim1(), 1);\noutput.setValueType(ValueType.STRING);\n+ }\noutput.setBlocksize (id.getBlocksize());\nbreak;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/BinaryFrameScalarSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/BinaryFrameScalarSPInstruction.java",
"diff": "@@ -44,6 +44,11 @@ public class BinaryFrameScalarSPInstruction extends BinarySPInstruction {\n// Create local compiled functions (once) and execute on RDD\nJavaPairRDD<Long, FrameBlock> out = in1.mapValues(new RDDStringProcessing(expression));\n+ if(expression.contains(\"jaccardSim\")) {\n+ long rows = sec.getDataCharacteristics(output.getName()).getRows();\n+ sec.getDataCharacteristics(output.getName()).setDimension(rows, rows);\n+ }\n+\nsec.setRDDHandleForVariable(output.getName(), out);\nsec.addLineageRDD(output.getName(), input1.getName());\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinOutlierByArima.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinOutlierByArima.java",
"diff": "@@ -111,7 +111,7 @@ public class BuiltinOutlierByArima extends AutomatedTestBase {\nHashMap<CellIndex, Double> time_series_SYSTEMDS = readDMLMatrixFromOutputDir(\"result\");\nHashMap<CellIndex, Double> time_series_real = readRMatrixFromExpectedDir(\"result\");\n- double tol = Math.pow(10, -14);\n+ double tol = Math.pow(10, -12);\nif (repairMethod == 3)\nTestUtils.compareScalars(time_series_real.size()-num_outliers, time_series_SYSTEMDS.size(), tol);\nelse\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fixing failed test by reverting the changes in BinaryOp file. |
49,749 | 16.01.2021 15:53:50 | -3,600 | e8cc1de36777a91a23d02ba5998c2083bbb224b0 | Builtin function statsNA for computing NA statistics
DIA project WS2020/21.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -203,6 +203,7 @@ public enum Builtins {\nSMOTE(\"smote\", true),\nSOLVE(\"solve\", false),\nSPLIT(\"split\", true),\n+ STATSNA(\"statsNA\", true),\nSQRT(\"sqrt\", false),\nSUM(\"sum\", false),\nSVD(\"svd\", false, ReturnType.MULTI_RETURN),\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinStatsNATest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.lops.LopProperties;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Test;\n+\n+import java.util.HashMap;\n+\n+public class BuiltinStatsNATest extends AutomatedTestBase {\n+ private final static String TEST_NAME = \"statsNATest\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + BuiltinSplitTest.class.getSimpleName() + \"/\";\n+ private final static double eps = 1e-3;\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[]{\"B\",}));\n+ }\n+\n+ @Test\n+ public void testStatsNA1() {\n+ runStatsNA(1, 100, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testStatsNA2() {\n+ runStatsNA(4, 100, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testStatsNA3() {\n+ runStatsNA(100, 1000, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testStatsNA4() {\n+ runStatsNA(100, 10000, LopProperties.ExecType.CP);\n+ }\n+\n+\n+ private void runStatsNA(int bins, int size, LopProperties.ExecType instType) {\n+ Types.ExecMode platformOld = setExecMode(instType);\n+ try\n+ {\n+ loadTestConfiguration(getTestConfiguration(TEST_NAME));\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[]{ \"-nvargs\", \"X=\" + input(\"A\"), \"bins=\" + bins, \"Out=\" + output(\"Out\")};\n+\n+ double[][] A = getRandomMatrix(size, 1, -10, 10, 0.6, 7);\n+ writeInputMatrixWithMTD(\"A\", A, true);\n+\n+ fullRScriptName = HOME + TEST_NAME + \".R\";\n+ rCmd = getRCmd(inputDir(), Integer.toString(bins), expectedDir());\n+\n+ runTest(true, false, null, -1);\n+ runRScript(true);\n+ //compare matrices\n+ HashMap<MatrixValue.CellIndex, Double> dmlfileOut1 = readDMLMatrixFromOutputDir(\"Out\");\n+ HashMap<MatrixValue.CellIndex, Double> rfileOut1 = readRMatrixFromExpectedDir(\"Out\");\n+ MatrixValue.CellIndex key_ce = new MatrixValue.CellIndex(1, 1);\n+\n+ TestUtils.compareMatrices(dmlfileOut1, rfileOut1, eps, \"Stat-DML\", \"Stat-R\");\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ }\n+ }\n+}\n\\ No newline at end of file\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/statsNATest.R",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+args <- commandArgs(TRUE)\n+\n+library(\"Matrix\")\n+library(\"imputeTS\")\n+\n+input_matrix = as.matrix(readMM(paste(args[1], \"A.mtx\", sep=\"\")))\n+input_matrix[input_matrix==0] = NA\n+\n+bins_in = as.numeric(args[2])\n+output = matrix(0, nrow=8, ncol=1)\n+\n+Out = statsNA(input_matrix, bins = bins_in, print_only = FALSE)\n+\n+output[1,1]=Out[\"length_series\"][[1]]\n+output[2,1]=Out[\"number_NAs\"][[1]]\n+output[3,1]=as.numeric(sub(\"%\",\"\",Out[\"percentage_NAs\"][[1]],fixed=TRUE))/100\n+output[4,1]=Out[\"number_na_gaps\"][[1]]\n+output[5,1]=Out[\"average_size_na_gaps\"][[1]]\n+output[6,1]=Out[\"longest_na_gap\"][[1]]\n+output[7,1]=Out[\"most_frequent_na_gap\"][[1]]\n+output[8,1]=Out[\"most_weighty_na_gap\"][[1]]\n+\n+writeMM(as(output, \"CsparseMatrix\"), paste(args[3], \"Out\", sep=\"\"))\n+\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/statsNATest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+input_matrix = read($X);\n+# replace zeros with NaN\n+dataWithNa = replace(target=input_matrix, pattern = 0, replacement = NaN)\n+Out = statsNA(dataWithNa, $bins, TRUE)\n+write(Out, $Out);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2797] Builtin function statsNA for computing NA statistics
Co-authored-by: Ismael Ibrahim <[email protected]>
DIA project WS2020/21.
Closes #1117. |
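The statsNA builtin referenced above summarizes missing-value runs ("NA gaps") in a series: total length, NA count and percentage, number of gaps, average and longest gap, and so on (the R test compares eight such statistics). A small standalone sketch of the gap bookkeeping over a double[] with NaN as the missing marker is shown below; it is illustrative only and not the DML implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class StatsNaSketch {
    public static void main(String[] args) {
        double[] x = {1.0, Double.NaN, Double.NaN, 3.0, Double.NaN, 5.0, 6.0};

        int nas = 0;
        List<Integer> gaps = new ArrayList<>();  // lengths of consecutive NaN runs
        int run = 0;
        for (double v : x) {
            if (Double.isNaN(v)) { nas++; run++; }
            else if (run > 0) { gaps.add(run); run = 0; }
        }
        if (run > 0) gaps.add(run);

        int longest = gaps.stream().mapToInt(Integer::intValue).max().orElse(0);
        double avgGap = gaps.isEmpty() ? 0 : (double) nas / gaps.size();

        System.out.println("length:        " + x.length);                 // 7
        System.out.println("NAs:           " + nas);                      // 3
        System.out.println("NA percentage: " + (100.0 * nas / x.length)); // ~42.86
        System.out.println("gaps:          " + gaps.size());              // 2
        System.out.println("average gap:   " + avgGap);                   // 1.5
        System.out.println("longest gap:   " + longest);                  // 2
    }
}
```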
49,738 | 16.01.2021 21:47:31 | -3,600 | 6165b509c3b3fcfc0b0690d52d1afe6e13d3fc17 | Cleanup new statsNA built-in function
* Vectorized all loops of statsNA
* Fix statsNA verbose printing of gaps vector
* Fix statsNA test formatting and warnings,
* Fix statsNA documentation (formatting, conciseness)
DIA project WS2020/21, part 2 | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibRightMultBy.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibRightMultBy.java",
"diff": "@@ -34,7 +34,6 @@ import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.compress.CompressionSettings;\nimport org.apache.sysds.runtime.compress.colgroup.ColGroup;\n-import org.apache.sysds.runtime.compress.colgroup.ColGroupOLE;\nimport org.apache.sysds.runtime.compress.colgroup.ColGroupUncompressed;\nimport org.apache.sysds.runtime.compress.colgroup.ColGroupValue;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/ParCompressedMatrixTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/ParCompressedMatrixTest.java",
"diff": "@@ -21,7 +21,6 @@ package org.apache.sysds.test.component.compress;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.compress.CompressionSettings;\n-import org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.operators.AggregateBinaryOperator;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinStatsNATest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinStatsNATest.java",
"diff": "@@ -64,8 +64,7 @@ public class BuiltinStatsNATest extends AutomatedTestBase {\nprivate void runStatsNA(int bins, int size, LopProperties.ExecType instType) {\nTypes.ExecMode platformOld = setExecMode(instType);\n- try\n- {\n+ try {\nloadTestConfiguration(getTestConfiguration(TEST_NAME));\nString HOME = SCRIPT_DIR + TEST_DIR;\nfullDMLScriptName = HOME + TEST_NAME + \".dml\";\n@@ -82,8 +81,6 @@ public class BuiltinStatsNATest extends AutomatedTestBase {\n//compare matrices\nHashMap<MatrixValue.CellIndex, Double> dmlfileOut1 = readDMLMatrixFromOutputDir(\"Out\");\nHashMap<MatrixValue.CellIndex, Double> rfileOut1 = readRMatrixFromExpectedDir(\"Out\");\n- MatrixValue.CellIndex key_ce = new MatrixValue.CellIndex(1, 1);\n-\nTestUtils.compareMatrices(dmlfileOut1, rfileOut1, eps, \"Stat-DML\", \"Stat-R\");\n}\ncatch(Exception e) {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2797] Cleanup new statsNA built-in function
* Vectorized all loops of statsNA
* Fix statsNA verbose printing of gaps vector
* Fix statsNA test formatting and warnings
* Fix statsNA documentation (formatting, conciseness)
DIA project WS2020/21, part 2
Co-authored-by: haubitzer <[email protected]>
Co-authored-by: Ismael Ibrahim <[email protected]> |
49,738 | 16.01.2021 22:21:00 | -3,600 | 94e557ee3e087c307367465302a6fc498b08e3b6 | [MINOR] Fix mdedup built-in function test (marked as thread-unsafe) | [
{
"change_type": "DELETE",
"old_path": "scripts/builtin/.autoencoder_2layer.dml.swp",
"new_path": "scripts/builtin/.autoencoder_2layer.dml.swp",
"diff": "Binary files a/scripts/builtin/.autoencoder_2layer.dml.swp and /dev/null differ\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGLMTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGLMTest.java",
"diff": "@@ -39,7 +39,6 @@ import org.junit.runners.Parameterized;\n@RunWith(value = Parameterized.class)\[email protected]\n-\npublic class BuiltinGLMTest extends AutomatedTestBase\n{\nprotected final static String TEST_NAME = \"glmTest\";\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinMDTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinMDTest.java",
"diff": "@@ -33,6 +33,7 @@ import org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized;\n@RunWith(value = Parameterized.class)\[email protected]\npublic class BuiltinMDTest extends AutomatedTestBase {\nprivate final static String TEST_NAME = \"matching_dependency\";\nprivate final static String TEST_DIR = \"functions/builtin/\";\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix mdedup built-in function test (marked as thread-unsafe) |
49,722 | 16.01.2021 23:37:36 | -3,600 | fa770df49fdc18fa83bf7a4898e9f59b235dfb7e | [SYSTEMDS-2548,2796] Federated frame and matrix right indexing
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"diff": "@@ -25,7 +25,9 @@ import java.util.StringTokenizer;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.common.Types.AggOp;\nimport org.apache.sysds.common.Types.CorrectionLocationType;\n+import org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.common.Types.Direction;\n+import org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.lops.Lop;\nimport org.apache.sysds.lops.WeightedCrossEntropy;\nimport org.apache.sysds.lops.WeightedCrossEntropyR;\n@@ -976,6 +978,10 @@ public class InstructionUtils\n}\n}\n+ public static String createLiteralOperand(String val, ValueType vt) {\n+ return InstructionUtils.concatOperandParts(val, DataType.SCALAR.name(), vt.name(), \"true\");\n+ }\n+\npublic static String replaceOperand(String instStr, int operand, String newValue) {\n//split instruction and check for correctness\nString[] parts = instStr.split(Lop.OPERAND_DELIMITOR);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -33,7 +33,6 @@ import org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.instructions.cp.IndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MMChainCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MMTSJCPInstruction;\n-import org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MultiReturnParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ParameterizedBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.QuaternaryCPInstruction;\n@@ -152,12 +151,13 @@ public class FEDInstructionUtils {\n}\n}\n}\n- else if(inst instanceof MatrixIndexingCPInstruction) {\n- // matrix indexing\n- MatrixIndexingCPInstruction minst = (MatrixIndexingCPInstruction) inst;\n+ else if(inst instanceof IndexingCPInstruction) {\n+ // matrix and frame indexing\n+ IndexingCPInstruction minst = (IndexingCPInstruction) inst;\nif(inst.getOpcode().equalsIgnoreCase(\"rightIndex\")\n- && minst.input1.isMatrix() && ec.getCacheableData(minst.input1).isFederated()) {\n- fedinst = MatrixIndexingFEDInstruction.parseInstruction(minst.getInstructionString());\n+ && (minst.input1.isMatrix() || minst.input1.isFrame())\n+ && ec.getCacheableData(minst.input1).isFederated()) {\n+ fedinst = IndexingFEDInstruction.parseInstruction(minst.getInstructionString());\n}\n}\nelse if(inst instanceof VariableCPInstruction ){\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/IndexingFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/IndexingFEDInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.List;\n+\nimport org.apache.sysds.common.Types;\n+import org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.lops.LeftIndex;\n+import org.apache.sysds.lops.Lop;\nimport org.apache.sysds.lops.RightIndex;\nimport org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.caching.CacheableData;\n+import org.apache.sysds.runtime.controlprogram.caching.FrameObject;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRange;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.util.IndexRange;\n-public abstract class IndexingFEDInstruction extends UnaryFEDInstruction {\n+public final class IndexingFEDInstruction extends UnaryFEDInstruction {\nprotected final CPOperand rowLower, rowUpper, colLower, colUpper;\nprotected IndexingFEDInstruction(CPOperand in, CPOperand rl, CPOperand ru, CPOperand cl, CPOperand cu,\n@@ -70,10 +84,11 @@ public abstract class IndexingFEDInstruction extends UnaryFEDInstruction {\ncl = new CPOperand(parts[4]);\ncu = new CPOperand(parts[5]);\nout = new CPOperand(parts[6]);\n- if(in.getDataType() == Types.DataType.MATRIX)\n- return new MatrixIndexingFEDInstruction(in, rl, ru, cl, cu, out, opcode, str);\n- else\n- throw new DMLRuntimeException(\"Can index only on matrices, frames, and lists in federated.\");\n+\n+ if(in.getDataType() != Types.DataType.MATRIX && in.getDataType() != Types.DataType.FRAME)\n+ throw new DMLRuntimeException(\"Can index only on matrices, frames in federated.\");\n+\n+ return new IndexingFEDInstruction(in, rl, ru, cl, cu, out, opcode, str);\n}\nelse {\nthrow new DMLRuntimeException(\"Invalid number of operands in instruction: \" + str);\n@@ -86,4 +101,74 @@ public abstract class IndexingFEDInstruction extends UnaryFEDInstruction {\nthrow new DMLRuntimeException(\"Unknown opcode while parsing a MatrixIndexingFEDInstruction: \" + str);\n}\n}\n+\n+ @Override\n+ public void processInstruction(ExecutionContext ec) {\n+ rightIndexing(ec);\n+ }\n+\n+ private void rightIndexing(ExecutionContext ec)\n+ {\n+ //get input and requested index range\n+ CacheableData<?> in = ec.getCacheableData(input1);\n+ IndexRange ixrange = getIndexRange(ec);\n+\n+ //prepare output federation map (copy-on-write)\n+ FederationMap fedMap = in.getFedMapping().filter(ixrange);\n+\n+ //modify federated ranges in place\n+ String[] instStrings = new String[fedMap.getSize()];\n+\n+ //create new frame schema\n+ List<Types.ValueType> schema = new ArrayList<>();\n+\n+ // replace old reshape values for each worker\n+ int i = 0;\n+ for(FederatedRange range : fedMap.getMap().keySet()) {\n+ long rs = range.getBeginDims()[0], re = range.getEndDims()[0],\n+ cs = range.getBeginDims()[1], ce = range.getEndDims()[1];\n+ long rsn = (ixrange.rowStart >= rs) ? (ixrange.rowStart - rs) : 0;\n+ long ren = (ixrange.rowEnd >= rs && ixrange.rowEnd < re) ? 
(ixrange.rowEnd - rs) : (re - rs - 1);\n+ long csn = (ixrange.colStart >= cs) ? (ixrange.colStart - cs) : 0;\n+ long cen = (ixrange.colEnd >= cs && ixrange.colEnd < ce) ? (ixrange.colEnd - cs) : (ce - cs - 1);\n+\n+ range.setBeginDim(0, Math.max(rs - ixrange.rowStart, 0));\n+ range.setBeginDim(1, Math.max(cs - ixrange.colStart, 0));\n+ range.setEndDim(0, (ixrange.rowEnd >= re ? re-ixrange.rowStart : ixrange.rowEnd-ixrange.rowStart + 1));\n+ range.setEndDim(1, (ixrange.colEnd >= ce ? ce-ixrange.colStart : ixrange.colEnd-ixrange.colStart + 1));\n+\n+ long[] newIx = new long[]{rsn, ren, csn, cen};\n+\n+ // change 4 indices in instString\n+ instStrings[i] = instString;\n+ String[] instParts = instString.split(Lop.OPERAND_DELIMITOR);\n+ for(int j = 3; j < 7; j++)\n+ instParts[j] = InstructionUtils.createLiteralOperand(String.valueOf(newIx[j-3]+1), ValueType.INT64);\n+ instStrings[i] = String.join(Lop.OPERAND_DELIMITOR, instParts);\n+\n+ if(input1.isFrame()) {\n+ //modify frame schema\n+ if(in.isFederated(FederationMap.FType.ROW))\n+ schema = Arrays.asList(((FrameObject) in).getSchema((int) csn, (int) cen));\n+ else\n+ Collections.addAll(schema, ((FrameObject) in).getSchema((int) csn, (int) cen));\n+ }\n+ i++;\n+ }\n+ FederatedRequest[] fr1 = FederationUtils.callInstruction(instStrings,\n+ output, new CPOperand[] {input1}, new long[] {fedMap.getID()});\n+ fedMap.execute(getTID(), true, fr1, new FederatedRequest[0]);\n+\n+ if(input1.isFrame()) {\n+ FrameObject out = ec.getFrameObject(output);\n+ out.setSchema(schema.toArray(new Types.ValueType[0]));\n+ out.getDataCharacteristics().setDimension(fedMap.getMaxIndexInRange(0), fedMap.getMaxIndexInRange(1));\n+ out.setFedMapping(fedMap.copyWithNewID(fr1[0].getID()));\n+ } else {\n+ MatrixObject out = ec.getMatrixObject(output);\n+ out.getDataCharacteristics().set(fedMap.getMaxIndexInRange(0), fedMap.getMaxIndexInRange(1),\n+ (int) ((MatrixObject)in).getBlocksize());\n+ out.setFedMapping(fedMap.copyWithNewID(fr1[0].getID()));\n+ }\n+ }\n}\n"
},
{
"change_type": "DELETE",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/MatrixIndexingFEDInstruction.java",
"new_path": null,
"diff": "-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements. See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership. The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License. You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied. See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.sysds.runtime.instructions.fed;\n-\n-import java.util.HashMap;\n-import java.util.Map;\n-\n-import org.apache.commons.logging.Log;\n-import org.apache.commons.logging.LogFactory;\n-import org.apache.sysds.runtime.DMLRuntimeException;\n-import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n-import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n-import org.apache.sysds.runtime.controlprogram.federated.FederatedRange;\n-import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n-import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\n-import org.apache.sysds.runtime.controlprogram.federated.FederatedUDF;\n-import org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n-import org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\n-import org.apache.sysds.runtime.instructions.cp.CPOperand;\n-import org.apache.sysds.runtime.instructions.cp.Data;\n-import org.apache.sysds.runtime.matrix.data.MatrixBlock;\n-import org.apache.sysds.runtime.util.IndexRange;\n-\n-public final class MatrixIndexingFEDInstruction extends IndexingFEDInstruction {\n- private static final Log LOG = LogFactory.getLog(MatrixIndexingFEDInstruction.class.getName());\n-\n- public MatrixIndexingFEDInstruction(CPOperand in, CPOperand rl, CPOperand ru, CPOperand cl, CPOperand cu,\n- CPOperand out, String opcode, String istr) {\n- super(in, rl, ru, cl, cu, out, opcode, istr);\n- }\n-\n- @Override\n- public void processInstruction(ExecutionContext ec) {\n- rightIndexing(ec);\n- }\n-\n- private void rightIndexing(ExecutionContext ec)\n- {\n- //get input and requested index range\n- MatrixObject in = ec.getMatrixObject(input1);\n- IndexRange ixrange = getIndexRange(ec);\n-\n- //prepare output federation map (copy-on-write)\n- FederationMap fedMap = in.getFedMapping().filter(ixrange);\n-\n- //modify federated ranges in place\n- Map<FederatedRange, IndexRange> ixs = new HashMap<>();\n- for(FederatedRange range : fedMap.getMap().keySet()) {\n- long rs = range.getBeginDims()[0], re = range.getEndDims()[0],\n- cs = range.getBeginDims()[1], ce = range.getEndDims()[1];\n- long rsn = (ixrange.rowStart >= rs) ? (ixrange.rowStart - rs) : 0;\n- long ren = (ixrange.rowEnd >= rs && ixrange.rowEnd < re) ? (ixrange.rowEnd - rs) : (re - rs - 1);\n- long csn = (ixrange.colStart >= cs) ? (ixrange.colStart - cs) : 0;\n- long cen = (ixrange.colEnd >= cs && ixrange.colEnd < ce) ? 
(ixrange.colEnd - cs) : (ce - cs - 1);\n- if(LOG.isDebugEnabled()) {\n- LOG.debug(\"Ranges for fed location: \" + rsn + \" \" + ren + \" \" + csn + \" \" + cen);\n- LOG.debug(\"ixRange : \" + ixrange);\n- LOG.debug(\"Fed Mapping : \" + range);\n- }\n- range.setBeginDim(0, Math.max(rs - ixrange.rowStart, 0));\n- range.setBeginDim(1, Math.max(cs - ixrange.colStart, 0));\n- range.setEndDim(0, (ixrange.rowEnd >= re ? re-ixrange.rowStart : ixrange.rowEnd-ixrange.rowStart + 1));\n- range.setEndDim(1, (ixrange.colEnd >= ce ? ce-ixrange.colStart : ixrange.colEnd-ixrange.colStart + 1));\n- if(LOG.isDebugEnabled())\n- LOG.debug(\"Fed Mapping After : \" + range);\n- ixs.put(range, new IndexRange(rsn, ren, csn, cen));\n- }\n-\n- // execute slicing of valid range\n- long varID = FederationUtils.getNextFedDataID();\n- FederationMap slicedFedMap = fedMap.mapParallel(varID, (range, data) -> {\n- try {\n- FederatedResponse response = data.executeFederatedOperation(new FederatedRequest(\n- FederatedRequest.RequestType.EXEC_UDF, -1,\n- new SliceMatrix(data.getVarID(), varID, ixs.get(range)))).get();\n- if(!response.isSuccessful())\n- response.throwExceptionFromResponse();\n- return null;\n- }\n- catch(Exception e) {\n- throw new DMLRuntimeException(e);\n- }\n- });\n-\n- //update output mapping and data characteristics\n- MatrixObject sliced = ec.getMatrixObject(output);\n- sliced.getDataCharacteristics()\n- .set(slicedFedMap.getMaxIndexInRange(0), slicedFedMap.getMaxIndexInRange(1), (int) in.getBlocksize());\n- sliced.setFedMapping(slicedFedMap);\n-\n- //TODO is this really necessary\n- if(ixrange.rowEnd - ixrange.rowStart == 0)\n- slicedFedMap.setType(FederationMap.FType.COL);\n- else if(ixrange.colEnd - ixrange.colStart == 0)\n- slicedFedMap.setType(FederationMap.FType.ROW);\n- }\n-\n- private static class SliceMatrix extends FederatedUDF {\n-\n- private static final long serialVersionUID = 5956832933333848772L;\n- private final long _outputID;\n- private final IndexRange _ixrange;\n-\n- private SliceMatrix(long input, long outputID, IndexRange ixrange) {\n- super(new long[] {input});\n- _outputID = outputID;\n- _ixrange = ixrange;\n- }\n-\n- @Override\n- public FederatedResponse execute(ExecutionContext ec, Data... data) {\n- MatrixBlock mb = ((MatrixObject) data[0]).acquireReadAndRelease();\n- MatrixBlock res = mb.slice(_ixrange, new MatrixBlock());\n- MatrixObject mout = ExecutionContext.createMatrixObject(res);\n- ec.setVariable(String.valueOf(_outputID), mout);\n-\n- return new FederatedResponse(FederatedResponse.ResponseType.SUCCESS_EMPTY);\n- }\n- }\n-}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRightIndexTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRightIndexTest.java",
"diff": "@@ -44,6 +44,7 @@ public class FederatedRightIndexTest extends AutomatedTestBase {\nprivate final static String TEST_NAME1 = \"FederatedRightIndexRightTest\";\nprivate final static String TEST_NAME2 = \"FederatedRightIndexLeftTest\";\nprivate final static String TEST_NAME3 = \"FederatedRightIndexFullTest\";\n+ private final static String TEST_NAME4 = \"FederatedRightIndexFrameFullTest\";\nprivate final static String TEST_DIR = \"functions/federated/\";\nprivate static final String TEST_CLASS_DIR = TEST_DIR + FederatedRightIndexTest.class.getSimpleName() + \"/\";\n@@ -65,37 +66,49 @@ public class FederatedRightIndexTest extends AutomatedTestBase {\[email protected]\npublic static Collection<Object[]> data() {\n- return Arrays.asList(new Object[][] {{20, 10, 1, 1, true}, {20, 10, 3, 5, true}, {10, 12, 1, 10, false}});\n+ return Arrays.asList(new Object[][] {\n+ {20, 10, 1, 1, true}, {20, 10, 3, 5, true},\n+ {10, 12, 1, 10, false}});\n}\nprivate enum IndexType {\nRIGHT, LEFT, FULL\n}\n+ private enum DataType {\n+ MATRIX, FRAME\n+ }\n+\n@Override\npublic void setUp() {\nTestUtils.clearAssertionInformation();\naddTestConfiguration(TEST_NAME1, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME1, new String[] {\"S\"}));\naddTestConfiguration(TEST_NAME2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME2, new String[] {\"S\"}));\naddTestConfiguration(TEST_NAME3, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME3, new String[] {\"S\"}));\n+ addTestConfiguration(TEST_NAME4, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME4, new String[] {\"S\"}));\n}\n// @Test\n// public void testRightIndexRightDenseMatrixCP() {\n- // runAggregateOperationTest(IndexType.RIGHT, ExecMode.SINGLE_NODE);\n+ // runAggregateOperationTest(IndexType.RIGHT, DataType.MATRIX, ExecMode.SINGLE_NODE);\n// }\n// @Test\n// public void testRightIndexLeftDenseMatrixCP() {\n- // runAggregateOperationTest(IndexType.LEFT, ExecMode.SINGLE_NODE);\n+ // runAggregateOperationTest(IndexType.LEFT, DataType.MATRIX, ExecMode.SINGLE_NODE);\n// }\n@Test\npublic void testRightIndexFullDenseMatrixCP() {\n- runAggregateOperationTest(IndexType.FULL, ExecMode.SINGLE_NODE);\n+ runAggregateOperationTest(IndexType.FULL, DataType.MATRIX, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void testRightIndexFullDenseFrameCP() {\n+ runAggregateOperationTest(IndexType.FULL, DataType.FRAME, ExecMode.SINGLE_NODE);\n}\n- private void runAggregateOperationTest(IndexType type, ExecMode execMode) {\n+ private void runAggregateOperationTest(IndexType indexType, DataType dataType, ExecMode execMode) {\nboolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\nExecMode platformOld = rtplatform;\n@@ -103,7 +116,7 @@ public class FederatedRightIndexTest extends AutomatedTestBase {\nDMLScript.USE_LOCAL_SPARK_CONFIG = true;\nString TEST_NAME = null;\n- switch(type) {\n+ switch(indexType) {\ncase RIGHT:\nfrom = from <= cols ? from : cols;\nto = to <= cols ? to : cols;\n@@ -115,7 +128,10 @@ public class FederatedRightIndexTest extends AutomatedTestBase {\nTEST_NAME = TEST_NAME2;\nbreak;\ncase FULL:\n+ if(dataType == DataType.MATRIX)\nTEST_NAME = TEST_NAME3;\n+ else\n+ TEST_NAME = TEST_NAME4;\nfrom = from <= rows && from <= cols ? from : Math.min(rows, cols);\nto = to <= rows && to <= cols ? to : Math.min(rows, cols);\nbreak;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexFrameFullTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $from;\n+to = $to;\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+A = as.frame(A)\n+\n+s = A[from:to, from:to];\n+write(s, $out_S);\n+\n+print(toString(s))\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexFrameFullTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+from = $5;\n+to = $6;\n+\n+if($7)\n+ A = rbind(read($1), read($2), read($3), read($4));\n+else\n+ A = cbind(read($1), read($2), read($3), read($4));\n+\n+A = as.frame(A)\n+\n+s = A[from:to, from:to];\n+write(s, $8);\n+\n+print(toString(s))\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedRightIndexFullTest.dml",
"new_path": "src/test/scripts/functions/federated/FederatedRightIndexFullTest.dml",
"diff": "from = $from;\nto = $to;\n+\nif ($rP) {\nA = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\nranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2548,2796] Federated frame and matrix right indexing
Closes #1142. |
49,706 | 17.01.2021 10:51:15 | -3,600 | 4c08c28132ea68e3a0c749b2f4729e8daccd70b3 | Builtin (de)compress function
Dedicated script-level functions to compress and decompress a matrix. | [
{
"change_type": "MODIFY",
"old_path": "pom.xml",
"new_path": "pom.xml",
"diff": "</executions>\n<configuration>\n<excludes>\n+ <exclude>scripts/perftest/results/**</exclude>\n<exclude>.gitignore</exclude>\n<exclude>.gitmodules</exclude>\n<exclude>.repository/</exclude>\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -79,6 +79,8 @@ public enum Builtins {\nCOLSUM(\"colSums\", false),\nCOLVAR(\"colVars\", false),\nCOMPONENTS(\"components\", true),\n+ COMPRESS(\"compress\", false),\n+ DECOMPRESS(\"decompress\", false),\nCONV2D(\"conv2d\", false),\nCONV2D_BACKWARD_FILTER(\"conv2d_backward_filter\", false),\nCONV2D_BACKWARD_DATA(\"conv2d_backward_data\", false),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Types.java",
"new_path": "src/main/java/org/apache/sysds/common/Types.java",
"diff": "@@ -207,6 +207,8 @@ public class Types\nSIGMOID, //sigmoid function: 1 / (1 + exp(-X))\nLOG_NZ, //sparse-safe log; ppred(X,0,\"!=\")*log(X)\n+ COMPRESS, DECOMPRESS,\n+\n//low-level operators //TODO used?\nMULT2, MINUS1_MULT, MINUS_RIGHT,\nPOW2, SUBTRACT_NZ;\n@@ -235,6 +237,8 @@ public class Types\ncase CUMSUM: return \"ucumk+\";\ncase CUMSUMPROD: return \"ucumk+*\";\ncase COLNAMES: return \"colnames\";\n+ case COMPRESS: return \"compress\";\n+ case DECOMPRESS: return \"decompress\";\ncase DETECTSCHEMA: return \"detectSchema\";\ncase MULT2: return \"*2\";\ncase NOT: return \"!\";\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/OptimizerUtils.java",
"new_path": "src/main/java/org/apache/sysds/hops/OptimizerUtils.java",
"diff": "@@ -206,6 +206,12 @@ public class OptimizerUtils\n*/\npublic static final boolean ALLOW_COMBINE_FILE_INPUT_FORMAT = true;\n+ /**\n+ * This variable allows for insertion of Compress and decompress in the dml script from the user.\n+ * This is added because we want to have a way to test, and verify the correct placement of compress and decompress commands.\n+ */\n+ public static final boolean ALLOW_SCRIPT_LEVEL_COMPRESS_COMMAND = true;\n+\n//////////////////////\n// Optimizer levels //\n//////////////////////\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/UnaryOp.java",
"new_path": "src/main/java/org/apache/sysds/hops/UnaryOp.java",
"diff": "@@ -446,7 +446,9 @@ public class UnaryOp extends MultiThreadedHop\npublic boolean isExpensiveUnaryOperation() {\nreturn (_op == OpOp1.EXP\n|| _op == OpOp1.LOG\n- || _op == OpOp1.SIGMOID);\n+ || _op == OpOp1.SIGMOID\n+ || _op == OpOp1.COMPRESS\n+ || _op == OpOp1.DECOMPRESS);\n}\npublic boolean isMetadataOperation() {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"diff": "package org.apache.sysds.parser;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashMap;\n+import java.util.HashSet;\n+\nimport org.antlr.v4.runtime.ParserRuleContext;\nimport org.apache.commons.lang.ArrayUtils;\nimport org.apache.commons.lang.NotImplementedException;\n@@ -26,16 +31,12 @@ import org.apache.sysds.common.Builtins;\nimport org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.conf.ConfigurationManager;\n+import org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.parser.LanguageException.LanguageErrorCodes;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.runtime.util.DnnUtils;\nimport org.apache.sysds.runtime.util.UtilFunctions;\n-import java.util.ArrayList;\n-import java.util.Arrays;\n-import java.util.HashMap;\n-import java.util.HashSet;\n-\npublic class BuiltinFunctionExpression extends DataIdentifier\n{\nprotected Expression[] _args = null;\n@@ -1573,7 +1574,20 @@ public class BuiltinFunctionExpression extends DataIdentifier\noutput.setBlocksize (id.getBlocksize());\nbreak;\n+ case COMPRESS:\n+ case DECOMPRESS:\n+ if(OptimizerUtils.ALLOW_SCRIPT_LEVEL_COMPRESS_COMMAND){\n+ checkNumParameters(1);\n+ checkMatrixParam(getFirstExpr());\n+ output.setDataType(DataType.MATRIX);\n+ output.setDimensions(id.getDim2(), id.getDim1());\n+ output.setBlocksize (id.getBlocksize());\n+ output.setValueType(id.getValueType());\n+ }\n+ else\n+ raiseValidateError(\"Compress instruction not allowed in dml script\");\n+ break;\ndefault:\nif( isMathFunction() ) {\ncheckMathFunctionParam();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"new_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"diff": "@@ -2481,6 +2481,12 @@ public class DMLTranslator\ncase CAST_AS_BOOLEAN:\ncurrBuiltinOp = new UnaryOp(target.getName(), target.getDataType(), ValueType.BOOLEAN, OpOp1.CAST_AS_BOOLEAN, expr);\nbreak;\n+ case COMPRESS:\n+ currBuiltinOp = new UnaryOp(target.getName(), target.getDataType(), ValueType.FP64, OpOp1.COMPRESS, expr);\n+ break;\n+ case DECOMPRESS:\n+ currBuiltinOp = new UnaryOp(target.getName(), target.getDataType(), ValueType.FP64, OpOp1.DECOMPRESS, expr);\n+ break;\n// Boolean binary\ncase XOR:\n@@ -2691,7 +2697,6 @@ public class DMLTranslator\nsetBlockSizeAndRefreshSizeInfo(expr, currBuiltinOp);\nbreak;\n}\n-\ndefault:\nthrow new ParseException(\"Unsupported builtin function type: \"+source.getOpCode());\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/compress/compressInstruction.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.compress;\n+\n+import static org.junit.Assert.assertTrue;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.lops.LopProperties;\n+import org.apache.sysds.lops.LopProperties.ExecType;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.apache.sysds.utils.DMLCompressionStatistics;\n+import org.apache.sysds.utils.Statistics;\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+public class compressInstruction extends AutomatedTestBase {\n+\n+ protected String getTestClassDir() {\n+ return getTestDir() + this.getClass().getSimpleName() + \"/\";\n+ }\n+\n+ protected String getTestName() {\n+ return \"compress\";\n+ }\n+\n+ protected String getTestDir() {\n+ return \"functions/compress/compressInstruction/\";\n+ }\n+\n+ @Test\n+ public void empty() {\n+\n+ }\n+\n+ @Test\n+ public void testCompressInstruction_01() {\n+ compressTest(1, 1000, 0.2, ExecType.CP, 0, 5, 0, 1, \"01\");\n+ }\n+\n+ @Test\n+ public void testCompressInstruction_02() {\n+ compressTest(1, 1000, 0.2, ExecType.CP, 0, 5, 1, 1, \"02\");\n+ }\n+\n+ public void compressTest(int cols, int rows, double sparsity, LopProperties.ExecType instType, int min, int max,\n+ int decompressionCountExpected, int compressionCountsExpected, String name) {\n+\n+ Types.ExecMode platformOld = setExecMode(instType);\n+ try {\n+\n+ loadTestConfiguration(getTestConfiguration(getTestName()));\n+\n+ fullDMLScriptName = SCRIPT_DIR + \"/\" + getTestDir() + \"compress_\" + name + \".dml\";\n+\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\", \"cols=\" + cols, \"rows=\" + rows,\n+ \"sparsity=\" + sparsity, \"min=\" + min, \"max= \" + max};\n+\n+ runTest(null);\n+\n+ int decompressCount = 0;\n+ decompressCount += DMLCompressionStatistics.getDecompressionCount();\n+ decompressCount += DMLCompressionStatistics.getDecompressionSTCount();\n+ long compressionCount = Statistics.getCPHeavyHitterCount(\"compress\");\n+\n+ Assert.assertEquals(compressionCount, compressionCountsExpected);\n+ Assert.assertEquals(decompressionCountExpected, decompressCount);\n+\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ assertTrue(\"Exception in execution: \" + e.getMessage(), false);\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ }\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(getTestName(), new TestConfiguration(getTestClassDir(), getTestName()));\n+ }\n+\n+}\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/java/org/apache/sysds/test/functions/compress/CompressBase.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/compress/configuration/CompressBase.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.test.functions.compress;\n+package org.apache.sysds.test.functions.compress.configuration;\nimport static org.junit.Assert.assertTrue;\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/java/org/apache/sysds/test/functions/compress/CompressCost.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/compress/configuration/CompressCost.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.test.functions.compress;\n+package org.apache.sysds.test.functions.compress.configuration;\nimport java.io.File;\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/java/org/apache/sysds/test/functions/compress/CompressForce.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/compress/configuration/CompressForce.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.test.functions.compress;\n+package org.apache.sysds.test.functions.compress.configuration;\nimport java.io.File;\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/java/org/apache/sysds/test/functions/compress/CompressLossy.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/compress/configuration/CompressLossy.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.test.functions.compress;\n+package org.apache.sysds.test.functions.compress.configuration;\nimport java.io.File;\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/java/org/apache/sysds/test/functions/compress/CompressLossyCost.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/compress/configuration/CompressLossyCost.java",
"diff": "* under the License.\n*/\n-package org.apache.sysds.test.functions.compress;\n+package org.apache.sysds.test.functions.compress.configuration;\nimport java.io.File;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/compress/compressInstruction/compress_01.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+A = rand(rows=$rows, cols=$cols, sparsity=$sparsity, min=$min, max=$max)\n+A = round(A)\n+A = compress(A)\n+\n+print(sum(A))\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/compress/compressInstruction/compress_02.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+A = rand(rows=$rows, cols=$cols, sparsity=$sparsity, min=$min, max=$max)\n+A = round(A)\n+A = compress(A)\n+A = decompress(A)\n+print(sum(A))\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2798] Builtin (de)compress function
Dedicated script-level functions to compress and decompress a matrix. |
49,738 | 23.01.2021 18:13:32 | -3,600 | 0bf27eaad3ffb40da8e6bf87d42721f0a333ba19 | Fix parfor handling of negative loop increments
* Fix parfor dependency analysis for negative increments (via
normalization)
* Fix parfor runtime program block (determine number of iterations
correctly, to prevent invalid early-abort)
* Fix removed local debug flag for parfor dependency analysis
* Tests for parfor dependency analysis and runtime predicates | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/ParForStatementBlock.java",
"new_path": "src/main/java/org/apache/sysds/parser/ParForStatementBlock.java",
"diff": "@@ -33,6 +33,10 @@ import org.apache.sysds.hops.IndexingOp;\nimport org.apache.sysds.hops.LiteralOp;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.hops.rewrite.HopRewriteUtils;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\n+import org.apache.log4j.Level;\n+import org.apache.log4j.Logger;\nimport org.apache.sysds.common.Builtins;\nimport org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.common.Types.OpOp1;\n@@ -60,6 +64,9 @@ import org.apache.sysds.runtime.util.UtilFunctions;\n*/\npublic class ParForStatementBlock extends ForStatementBlock\n{\n+ private static final boolean LDEBUG = false; //internal local debug level\n+ protected static final Log LOG = LogFactory.getLog(ParForStatementBlock.class.getName());\n+\n//external parameter names\nprivate static HashSet<String> _paramNames;\npublic static final String CHECK = \"check\"; //run loop dependency analysis\n@@ -140,6 +147,13 @@ public class ParForStatementBlock extends ForStatementBlock\nif( USE_FN_CACHE ) {\n_fncache = new HashMap<>();\n}\n+\n+ // for internal debugging only\n+ if( LDEBUG ) {\n+ Logger.getLogger(\"org.apache.sysds.parser.ParForStatementBlock\")\n+ .setLevel(Level.TRACE);\n+ System.out.println();\n+ }\n}\npublic ParForStatementBlock() {\n@@ -909,6 +923,14 @@ public class ParForStatementBlock extends ForStatementBlock\nelse\nincr = ( low <= up ) ? 1 : -1;\n+ //normalize bounds to positive increment (for dependency analysis only)\n+ if( incr < 0 ) {\n+ long tmp = low;\n+ low = up;\n+ up = tmp;\n+ incr *= -1; //positive increment\n+ }\n+\n_bounds._lower.put(ip.getIterVar()._name, low);\n_bounds._upper.put(ip.getIterVar()._name, up);\n_bounds._increment.put(ip.getIterVar()._name, incr);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/ParForProgramBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/ParForProgramBlock.java",
"diff": "@@ -569,7 +569,8 @@ public class ParForProgramBlock extends ForProgramBlock\n+ \"of variable '\" + _iterPredVar + \"' must evaluate to a non-zero value.\");\n//early exit on num iterations = zero\n- _numIterations = computeNumIterations(from, to, incr);\n+ _numIterations = UtilFunctions.getSeqLength(\n+ from.getDoubleValue(), to.getDoubleValue(), incr.getDoubleValue());\nif( _numIterations <= 0 )\nreturn; //avoid unnecessary optimization/initialization\n@@ -1520,10 +1521,6 @@ public class ParForProgramBlock extends ForProgramBlock\n}\n}\n- private static long computeNumIterations( IntObject from, IntObject to, IntObject incr ) {\n- return (long)Math.ceil(((double)(to.getLongValue() - from.getLongValue() + 1)) / incr.getLongValue());\n- }\n-\n/**\n* NOTE: Only required for remote parfor. Hence, there is no need to transfer DMLConfig to\n* the remote workers (MR job) since nested remote parfor is not supported.\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/parfor/ParForDependencyAnalysisTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/parfor/ParForDependencyAnalysisTest.java",
"diff": "@@ -68,6 +68,8 @@ import org.apache.sysds.test.TestConfiguration;\n* 53a: no, 53b dep, 53c dep, 53d dep, 53e dep\n* * lists\n* 54a: no, 54b: no, 54c: dep, 54d: dep\n+ * * negative loop increment\n+ * 55a: no, 55b: yes\n*/\npublic class ParForDependencyAnalysisTest extends AutomatedTestBase\n{\n@@ -325,6 +327,12 @@ public class ParForDependencyAnalysisTest extends AutomatedTestBase\n@Test\npublic void testDependencyAnalysis54d() { runTest(\"parfor54d.dml\", true); }\n+ @Test\n+ public void testDependencyAnalysis55a() { runTest(\"parfor55a.dml\", false); }\n+\n+ @Test\n+ public void testDependencyAnalysis55b() { runTest(\"parfor55b.dml\", true); }\n+\nprivate void runTest( String scriptFilename, boolean expectedException ) {\nboolean raisedException = false;\ntry\n@@ -371,5 +379,4 @@ public class ParForDependencyAnalysisTest extends AutomatedTestBase\n//check correctness\nAssert.assertEquals(expectedException, raisedException);\n}\n-\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/parfor/misc/ForLoopPredicateTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/parfor/misc/ForLoopPredicateTest.java",
"diff": "@@ -29,7 +29,6 @@ import org.apache.sysds.test.TestConfiguration;\npublic class ForLoopPredicateTest extends AutomatedTestBase\n{\n-\nprivate final static String TEST_NAME1 = \"for_pred1a\"; //const\nprivate final static String TEST_NAME2 = \"for_pred1b\"; //const seq\nprivate final static String TEST_NAME3 = \"for_pred2a\"; //var\n@@ -37,6 +36,8 @@ public class ForLoopPredicateTest extends AutomatedTestBase\nprivate final static String TEST_NAME5 = \"for_pred3a\"; //expression\nprivate final static String TEST_NAME6 = \"for_pred3b\"; //expression seq\nprivate final static String TEST_NAME7 = \"for_pred1a_seq\"; //const seq two parameters (this tests is only for parser)\n+ private final static String TEST_NAME8 = \"parfor_pred1_neg\"; //to:from (negative increment)\n+ private final static String TEST_NAME9 = \"parfor_pred2_neg\"; //2:1 (negative increment, two steps)\nprivate final static String TEST_DIR = \"functions/parfor/\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + ForLoopPredicateTest.class.getSimpleName() + \"/\";\n@@ -45,8 +46,7 @@ public class ForLoopPredicateTest extends AutomatedTestBase\nprivate final static int increment = 1;\n@Override\n- public void setUp()\n- {\n+ public void setUp() {\naddTestConfiguration(TEST_NAME1, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME1, new String[]{\"R\"}));\naddTestConfiguration(TEST_NAME2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME2, new String[]{\"R\"}));\naddTestConfiguration(TEST_NAME3, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME3, new String[]{\"R\"}));\n@@ -54,102 +54,103 @@ public class ForLoopPredicateTest extends AutomatedTestBase\naddTestConfiguration(TEST_NAME5, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME5, new String[]{\"R\"}));\naddTestConfiguration(TEST_NAME6, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME6, new String[]{\"R\"}));\naddTestConfiguration(TEST_NAME7, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME7, new String[]{\"R\"}));\n+ addTestConfiguration(TEST_NAME8, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME8, new String[]{\"R\"}));\n+ addTestConfiguration(TEST_NAME9, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME9, new String[]{\"R\"}));\n}\n@Test\n- public void testForConstIntegerPredicate()\n- {\n+ public void testForConstIntegerPredicate() {\nrunForPredicateTest(1, true);\n}\n@Test\n- public void testForConstIntegerSeq2ParametersPredicate()\n- {\n+ public void testForConstIntegerSeq2ParametersPredicate() {\nrunForPredicateTest(7, true);\n}\n@Test\n- public void testForConstIntegerSeqPredicate()\n- {\n+ public void testForConstIntegerSeqPredicate() {\nrunForPredicateTest(2, true);\n}\n@Test\n- public void testForVariableIntegerPredicate()\n- {\n+ public void testForVariableIntegerPredicate() {\nrunForPredicateTest(3, true);\n}\n@Test\n- public void testForVariableIntegerSeqPredicate()\n- {\n+ public void testForVariableIntegerSeqPredicate() {\nrunForPredicateTest(4, true);\n}\n@Test\n- public void testForExpressionIntegerPredicate()\n- {\n+ public void testForExpressionIntegerPredicate() {\nrunForPredicateTest(5, true);\n}\n@Test\n- public void testForExpressionIntegerSeqPredicate()\n- {\n+ public void testForExpressionIntegerSeqPredicate() {\nrunForPredicateTest(6, true);\n}\n@Test\n- public void testForConstDoublePredicate()\n- {\n+ public void testForConstDoublePredicate() {\nrunForPredicateTest(1, false);\n}\n@Test\n- public void testForConstDoubleSeq2ParametersPredicate()\n- {\n+ public void testForConstDoubleSeq2ParametersPredicate() {\nrunForPredicateTest(7, 
false);\n}\n@Test\n- public void testForConstDoubleSeqPredicate()\n- {\n+ public void testForConstDoubleSeqPredicate() {\nrunForPredicateTest(2, false);\n}\n@Test\n- public void testForVariableDoublePredicate()\n- {\n+ public void testForVariableDoublePredicate() {\nrunForPredicateTest(3, false);\n}\n@Test\n- public void testForVariableDoubleSeqPredicate()\n- {\n+ public void testForVariableDoubleSeqPredicate() {\nrunForPredicateTest(4, false);\n}\n@Test\n- public void testForExpressionDoublePredicate()\n- {\n+ public void testForExpressionDoublePredicate() {\nrunForPredicateTest(5, false);\n}\n@Test\n- public void testForExpressionDoubleSeqPredicate()\n- {\n+ public void testForExpressionDoubleSeqPredicate() {\nrunForPredicateTest(6, false);\n}\n- /**\n- *\n- * @param testNum\n- * @param intScalar\n- */\n- private void runForPredicateTest( int testNum, boolean intScalar )\n+ @Test\n+ public void testParFor1IntNegativeIncrement() {\n+ runForPredicateTest(8, true, true);\n+ }\n+\n+ @Test\n+ public void testParFor1DoubleNegativeIncrement() {\n+ runForPredicateTest(8, false, true);\n+ }\n+\n+ @Test\n+ public void testParFor2IntNegativeIncrement() {\n+ runForPredicateTest(9, true, true);\n+ }\n+\n+ private void runForPredicateTest( int testNum, boolean intScalar ) {\n+ runForPredicateTest(testNum, intScalar, false);\n+ }\n+\n+ private void runForPredicateTest( int testNum, boolean intScalar, boolean negative )\n{\nString TEST_NAME = null;\n- switch( testNum )\n- {\n+ switch( testNum ) {\ncase 1: TEST_NAME = TEST_NAME1; break;\ncase 2: TEST_NAME = TEST_NAME2; break;\ncase 3: TEST_NAME = TEST_NAME3; break;\n@@ -157,6 +158,8 @@ public class ForLoopPredicateTest extends AutomatedTestBase\ncase 5: TEST_NAME = TEST_NAME5; break;\ncase 6: TEST_NAME = TEST_NAME6; break;\ncase 7: TEST_NAME = TEST_NAME7; break;\n+ case 8: TEST_NAME = TEST_NAME8; break;\n+ case 9: TEST_NAME = TEST_NAME9; break;\n}\ngetAndLoadTestConfiguration(TEST_NAME);\n@@ -164,14 +167,12 @@ public class ForLoopPredicateTest extends AutomatedTestBase\nObject valFrom = null;\nObject valTo = null;\nObject valIncrement = null;\n- if( intScalar )\n- {\n+ if( intScalar ) {\nvalFrom = Integer.valueOf((int)Math.round(from));\nvalTo = Integer.valueOf((int)Math.round(to));\nvalIncrement = Integer.valueOf(increment);\n}\n- else\n- {\n+ else {\nvalFrom = new Double(from);\nvalTo = new Double(to);\nvalIncrement = new Double(increment);\n@@ -196,5 +197,4 @@ public class ForLoopPredicateTest extends AutomatedTestBase\nAssert.assertEquals( Double.valueOf(Math.ceil((Math.round(to)-Math.round(from)+1)/increment)),\ndmlfile.get(new CellIndex(1,1)));\n}\n-\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/component/parfor/parfor55a.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+A = matrix(0, rows=10,cols=1);\n+B = Rand(rows=10,cols=1);\n+\n+parfor( i in 10:1 )\n+ A[i,1] = B[i,1];\n+\n+#print(A);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/component/parfor/parfor55b.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+A = matrix(0,rows=10,cols=1);\n+B = Rand(rows=10,cols=1);\n+\n+parfor( i in 10:2 )\n+ A[i,1] = B[i,1] + A[i-1,1];\n+\n+#print(A);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/parfor/parfor_pred1_neg.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+R = matrix(0, rows=1, cols=1);\n+parfor( i in $2:$1 )\n+ R += matrix(1,1,1);\n+write(R, $4);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/parfor/parfor_pred2_neg.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+R = matrix(0, rows=1, cols=1);\n+parfor( i in 2:1 )\n+ R += matrix(1,1,1);\n+off = ceil(round($2)-round($1)+1)-2;\n+R += matrix(off,1,1);\n+write(R, $4);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2802] Fix parfor handling of negative loop increments
* Fix parfor dependency analysis for negative increments (via
normalization)
* Fix parfor runtime program block (determine number of iterations
correctly, to prevent invalid early-abort)
* Fix removed local debug flag for parfor dependency analysis
* Tests for parfor dependency analysis and runtime predicates |
49,694 | 23.01.2021 22:16:47 | -3,600 | 4fb8ad1eed144db052640f021dd3e460f0898328 | New built-in functions ALS-predict, ALS-topk-predict
DIA project WS2020/21.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/alsPredict.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# This script computes the rating/scores for a given list of userIDs\n+# using 2 factor matrices L and R. We assume that all users have rates\n+# at least once and all items have been rates at least once.\n+#\n+# INPUT PARAMETERS:\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# userIDs Matrix --- Column vector of user-ids (n x 1)\n+# I Matrix --- Indicator matrix user-id x user-id to exclude from scoring\n+# L Matrix --- The factor matrix L: user-id x feature-id\n+# R Matrix --- The factor matrix R: feature-id x item-id\n+# ---------------------------------------------------------------------------------------------\n+# OUTPUT:\n+# Y Matrix --- The output user-id/item-id/score\n+\n+m_alsPredict = function(Matrix[Double] userIDs, Matrix[Double] I, Matrix[Double] L, Matrix[Double] R)\n+ return (Matrix[Double] Y)\n+{\n+ n = nrow(userIDs)\n+ X_user_max = max(userIDs);\n+\n+ if (X_user_max > nrow(L))\n+ stop (\"Predictions cannot be provided. Maximum user-id exceeds the number of users.\");\n+ if (ncol(L) != nrow(R))\n+ stop (\"Predictions cannot be provided. Number of columns of L don't match the number of columns of R.\");\n+\n+ # creates projection matrix to select users\n+ P = table(seq(1,n), userIDs, n, nrow(L));\n+\n+ # selects users from factor L and exclude list\n+ Usel = P %*% L;\n+ Isel = P %*% I;\n+\n+ # calculates scores for selected users and filter exclude list\n+ Y = (Isel == 0) * (Usel %*% R);\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/alsTopkPredict.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# This script computes the top-K rating/scores for a given list of userIDs\n+# using 2 factor matrices L and R. We assume that all users have rates\n+# at least once and all items have been rates at least once.\n+#\n+# INPUT PARAMETERS:\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# userIDs Matrix --- Column vector of user-ids (n x 1)\n+# I Matrix --- Indicator matrix user-id x user-id to exclude from scoring\n+# L Matrix --- The factor matrix L: user-id x feature-id\n+# R Matrix --- The factor matrix R: feature-id x item-id\n+# K Int 5 The number of top-K items\n+# ---------------------------------------------------------------------------------------------\n+# OUTPUT:\n+# TopIxs Matrix --- A matrix containing the top-K item-ids with highest predicted ratings\n+# for the specified users (rows)\n+# TopVals Matrix --- A matrix containing the top-K predicted ratings for the specified users (rows)\n+\n+m_alsTopkPredict = function(Matrix[Double] userIDs, Matrix[Double] I, Matrix[Double] L, Matrix[Double] R, Integer K = 5)\n+ return (Matrix[Double] TopIxs, Matrix[Double] TopVals)\n+{\n+ zero_cols_ind = (colSums (R != 0)) == 0;\n+ K = min (ncol(R) - sum (zero_cols_ind), K);\n+\n+ Y = alsPredict(userIDs=userIDs, I=I, L=L, R=R)\n+\n+ # stores sorted movies for selected users\n+ TopIxs = matrix(0, rows = nrow (userIDs), cols = K);\n+ TopVals = matrix(0, rows = nrow (userIDs), cols = K);\n+\n+ # uses rowIndexMax/rowMaxs to update kth ratings for all users (assumes no duplicates)\n+ # (alternatively, we could sort the scores per user, but likely nrow(userIDs)>>K)\n+ for (i in 1:K) {\n+ TopIxs[,i] = rowIndexMax(Y);\n+ TopVals[,i] = rowMaxs(Y);\n+ Y = Y * (table(seq(1,nrow(Y)), rowIndexMax(Y), nrow(Y), ncol(Y)) != 0);\n+ }\n+\n+ # post-processing to handle edge cases\n+ TopIxs = TopIxs * (TopVals > 0);\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -46,6 +46,8 @@ public enum Builtins {\nALS(\"als\", true),\nALS_CG(\"alsCG\", true),\nALS_DS(\"alsDS\", true),\n+ ALS_PREDICT(\"alsPredict\", true),\n+ ALS_TOPK_PREDICT(\"alsTopkPredict\", true),\nASIN(\"asin\", false),\nATAN(\"atan\", false),\nAUTOENCODER2LAYER(\"autoencoder_2layer\", true),\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinALSPredictTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+public class BuiltinALSPredictTest extends AutomatedTestBase {\n+ private final static String TEST_NAME1 = \"alsPredict\";\n+ private final static String TEST_NAME2 = \"alsTopkPredict\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + BuiltinALSPredictTest.class.getSimpleName() + \"/\";\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME1,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME1, new String[]{\"B\"}));\n+ addTestConfiguration(TEST_NAME2,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME2, new String[]{\"B\"}));\n+ }\n+\n+ @Test\n+ public void testALSPredict() {\n+ runtestALSPredict(TEST_NAME1);\n+ }\n+\n+ @Test\n+ public void testALSTopkPredict() {\n+ runtestALSPredict(TEST_NAME2);\n+ }\n+\n+ private void runtestALSPredict(String testname) {\n+ loadTestConfiguration(getTestConfiguration(testname));\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + testname + \".dml\";\n+ List<String> proArgs = new ArrayList<>();\n+\n+ proArgs.add(\"-stats\");\n+ proArgs.add(\"-args\");\n+ proArgs.add(input(\"X\"));\n+ proArgs.add(input(\"L\"));\n+ proArgs.add(input(\"R\"));\n+ proArgs.add(output(\"Y\"));\n+ programArgs = proArgs.toArray(new String[proArgs.size()]);\n+\n+ double[][] X = {{1, 1}, {2, 2}, {3, 3}, {4, 4}, {5, 5}};\n+ writeInputMatrixWithMTD(\"X\", X, true);\n+\n+ double[][] L = {{1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}};\n+ writeInputMatrixWithMTD(\"L\", L, true);\n+\n+ double[][] R = {{1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}};\n+ writeInputMatrixWithMTD(\"R\", R, true);\n+\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/alsPredict.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = read($1)\n+L = read($2)\n+R = read($3)\n+Y = alsPredict(userIDs=X, I=matrix(0,nrow(L),ncol(R)), L=L, R=R)\n+write(Y, $4)\n\\ No newline at end of file\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/alsTopkPredict.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = read($1)\n+L = read($2)\n+R = read($3)\n+[TopIxs, TopVals] = alsTopkPredict(userIDs=X, I=matrix(0,nrow(L),ncol(R)), L=L, R=R)\n+write(TopVals, $4)\n\\ No newline at end of file\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2804] New built-in functions ALS-predict, ALS-topk-predict
DIA project WS2020/21.
Closes #1162.
Co-Authored-By: Sven Celin <[email protected]> |
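A minimal usage sketch of the two new builtins (illustrative only: the rand-initialized L and R below are placeholders for factor matrices that would normally come from an ALS factorization):

    L = rand(rows=100, cols=10, seed=7);        # user x feature factors (placeholder)
    R = rand(rows=10, cols=50, seed=8);         # feature x item factors (placeholder)
    userIDs = matrix("1 2 3", rows=3, cols=1);  # users to score
    I = matrix(0, rows=nrow(L), cols=ncol(R));  # exclude list (here: nothing excluded)
    Y = alsPredict(userIDs=userIDs, I=I, L=L, R=R);
    [TopIxs, TopVals] = alsTopkPredict(userIDs=userIDs, I=I, L=L, R=R, K=5);
    print(toString(TopIxs));

In practice I is the 0/1 pattern of already-rated items, so known entries are masked out of the returned scores.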
49,720 | 23.01.2021 23:58:03 | -3,600 | 0eba4dcdd3d92c91b5192e1e7d2d84cff5326068 | Builtin function gmmPredict for clustering instances
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/builtin/gmmPredict.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+# ------------------------------------------\n+# Gaussian Mixture Model Predict\n+# ------------------------------------------\n+\n+# INPUT PARAMETERS:\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# X Double --- Matrix X (instances to be clustered)\n+# weight Double --- Weight of learned model\n+# mu Double --- fitted clusters mean\n+# precisions_cholesky Double --- fitted precision matrix for each mixture\n+# model String --- fitted model\n+# ---------------------------------------------------------------------------------------------\n+\n+# OUTPUT:\n+# ---------------------------------------------------------------------------------------------\n+# NAME TYPE DEFAULT MEANING\n+# ---------------------------------------------------------------------------------------------\n+# predict Double --- predicted cluster labels\n+# posterior_prob Double --- probabilities of belongingness\n+# ---------------------------------------------------------------------------------------------\n+\n+# compute posterior probabilities for new instances given the variance and mean of fitted data\n+\n+m_gmmPredict = function(Matrix[Double] X, Matrix[Double] weight,\n+ Matrix[Double] mu, Matrix[Double] precisions_cholesky, String model)\n+ return(Matrix[Double] predict, Matrix[Double] posterior_prob)\n+{\n+ # compute the posterior probabilities for new instances\n+ weighted_log_prob = compute_log_gaussian_prob(X, mu, precisions_cholesky, model) + log(weight)\n+ log_prob_norm = logSumExp(weighted_log_prob, \"rows\")\n+ log_resp = weighted_log_prob - log_prob_norm\n+ posterior_prob = exp(log_resp)\n+ predict = rowIndexMax(weighted_log_prob)\n+}\n+\n+compute_log_gaussian_prob = function(Matrix[Double] X, Matrix[Double] mu,\n+ Matrix[Double] prec_chol, String model)\n+ return(Matrix[Double] es_log_prob ) # nrow(X) * n_components\n+{\n+ n_components = nrow(mu)\n+ d = ncol(X)\n+\n+ if(model == \"VVV\") {\n+ log_prob = matrix(0, nrow(X), n_components) # log probabilities\n+ log_det_chol = matrix(0, 1, n_components) # log determinant\n+ i = 1\n+ for(k in 1:n_components) {\n+ prec = prec_chol[i:(k*ncol(X)),]\n+ y = X %*% prec - mu[k,] %*% prec\n+ log_prob[, k] = rowSums(y*y)\n+ # compute log_det_cholesky\n+ log_det = sum(log(diag(t(prec))))\n+ log_det_chol[1,k] = log_det\n+ i = i + ncol(X)\n+ }\n+ }\n+ else if(model == \"EEE\") {\n+ log_prob = matrix(0, nrow(X), n_components)\n+ log_det_chol = 
as.matrix(sum(log(diag(prec_chol))))\n+ prec = prec_chol\n+ for(k in 1:n_components) {\n+ y = X %*% prec - mu[k,] %*% prec\n+ log_prob[, k] = rowSums(y*y)\n+ }\n+ }\n+ else if(model == \"VVI\") {\n+ log_det_chol = t(rowSums(log(prec_chol)))\n+ prec = prec_chol\n+ precisions = prec^2\n+ bc_matrix = matrix(1,nrow(X), nrow(mu))\n+ log_prob = (bc_matrix*t(rowSums(mu^2 * precisions))\n+ - 2 * (X %*% t(mu * precisions)) + X^2 %*% t(precisions))\n+ }\n+ else if (model == \"VII\") {\n+ log_det_chol = t(d * log(prec_chol))\n+ prec = prec_chol\n+ precisions = prec^ 2\n+ bc_matrix = matrix(1,nrow(X), nrow(mu))\n+ log_prob = (bc_matrix * t(rowSums(mu^2) * precisions)\n+ - 2 * X %*% t(mu * precisions) + rowSums(X*X) %*% t(precisions) )\n+ }\n+ if(ncol(log_det_chol) == 1)\n+ log_det_chol = matrix(1, 1, ncol(log_prob)) * log_det_chol\n+\n+ es_log_prob = -.5 * (ncol(X) * log(2 * pi) + log_prob) + log_det_chol\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -116,6 +116,7 @@ public enum Builtins {\nGET_PERMUTATIONS(\"getPermutations\", true),\nGLM(\"glm\", true),\nGMM(\"gmm\", true),\n+ GMM_PREDICT(\"gmmPredict\", true),\nGNMF(\"gnmf\", true),\nGRID_SEARCH(\"gridSearch\", true),\nHYPERBAND(\"hyperband\", true),\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGMMPredictTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.hops.OptimizerUtils;\n+import org.apache.sysds.lops.LopProperties;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+public class BuiltinGMMPredictTest extends AutomatedTestBase {\n+ private final static String TEST_NAME = \"GMM_Predict\";\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + BuiltinGMMPredictTest.class.getSimpleName() + \"/\";\n+\n+ private final static double eps = 2;\n+ private final static double tol = 1e-3;\n+ private final static double tol2 = 1e-5;\n+\n+ private final static String DATASET = SCRIPT_DIR + \"functions/transform/input/iris/iris.csv\";\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"B\"}));\n+ }\n+\n+ @Test\n+ public void testGMMMPredictCP1() {\n+ runGMMPredictTest(3, \"VVI\", \"random\", 10,\n+ 0.000000001, tol,42,true, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testGMMMPredictCP2() {\n+ runGMMPredictTest(3, \"VII\", \"random\", 50,\n+ 0.000001, tol2,42,true, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testGMMMPredictCPKmean1() {\n+ runGMMPredictTest(3, \"VVV\", \"kmeans\", 10,\n+ 0.0000001, tol,42,true, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testGMMMPredictCPKmean2() {\n+ runGMMPredictTest(3, \"EEE\", \"kmeans\", 150,\n+ 0.000001, tol,42,true, LopProperties.ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testGMMMPredictCPKmean3() {\n+ runGMMPredictTest(3, \"VII\", \"kmeans\", 50,\n+ 0.000001, tol2,42,true, LopProperties.ExecType.CP);\n+ }\n+\n+// @Test\n+// public void testGMMM1Spark() {\n+// runGMMPredictTest(3, \"VVV\", \"random\", 10,\n+// 0.0000001, tol,42,true, LopProperties.ExecType.SPARK); }\n+//\n+// @Test\n+// public void testGMMM2Spark() {\n+// runGMMPredictTest(3, \"EEE\", \"random\", 50,\n+// 0.0000001, tol,42,true, LopProperties.ExecType.CP);\n+// }\n+//\n+// @Test\n+// public void testGMMMS3Spark() {\n+// runGMMPredictTest(3, \"VVI\", \"random\", 100,\n+// 0.000001, tol,42,true, LopProperties.ExecType.CP);\n+// }\n+//\n+// @Test\n+// public void testGMMM4Spark() {\n+// runGMMPredictTest(3, \"VII\", \"random\", 100,\n+// 0.000001, tol1,42,true, LopProperties.ExecType.CP);\n+// }\n+//\n+// @Test\n+// public void testGMMM1KmeanSpark() {\n+// runGMMPredictTest(3, \"VVV\", \"kmeans\", 100,\n+// 0.000001, tol2,42,false, 
LopProperties.ExecType.SPARK);\n+// }\n+//\n+// @Test\n+// public void testGMMM2KmeanSpark() {\n+// runGMMPredictTest(3, \"EEE\", \"kmeans\", 50,\n+// 0.00000001, tol1,42,false, LopProperties.ExecType.SPARK);\n+// }\n+//\n+// @Test\n+// public void testGMMM3KmeanSpark() {\n+// runGMMPredictTest(3, \"VVI\", \"kmeans\", 100,\n+// 0.000001, tol,42,false, LopProperties.ExecType.SPARK);\n+// }\n+//\n+// @Test\n+// public void testGMMM4KmeanSpark() {\n+// runGMMPredictTest(3, \"VII\", \"kmeans\", 100,\n+// 0.000001, tol,42,false, LopProperties.ExecType.SPARK);\n+// }\n+\n+ private void runGMMPredictTest(int G_mixtures, String model, String init_param, int iter,\n+ double reg, double tol, int seed, boolean rewrite, LopProperties.ExecType instType) {\n+\n+ Types.ExecMode platformOld = setExecMode(instType);\n+ boolean rewriteOld = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\n+ OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = rewrite;\n+ try {\n+ loadTestConfiguration(getTestConfiguration(TEST_NAME));\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ String outFile = output(\"O\");\n+ System.out.println(outFile);\n+ programArgs = new String[] {\"-args\", DATASET,\n+ String.valueOf(G_mixtures), model, init_param, String.valueOf(iter), String.valueOf(reg),\n+ String.valueOf(tol), String.valueOf(seed), outFile};\n+\n+ runTest(true, false, null, -1);\n+ // compare results\n+ double accuracy = TestUtils.readDMLScalar(outFile);\n+ Assert.assertEquals(1, accuracy, eps);\n+ }\n+ finally {\n+ resetExecMode(platformOld);\n+ OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = rewriteOld;\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/GMM_Predict.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+\n+X = read($1, data_type = \"frame\", format = \"csv\", header=TRUE)\n+X = X[ , 2:ncol(X) - 1]\n+X = as.matrix(X)\n+\n+# divide in train and test set\n+train = X[1:45,]\n+train = rbind(train, X[52:95,])\n+train = rbind(train, X[102:145,])\n+\n+test = X[46:51,]\n+test = rbind(test, X[96:101,])\n+test = rbind(test, X[146:150,])\n+\n+# train GMM\n+[labels, prob, df, bic, mu, prec_chol, w] = gmm(X=train, n_components = $2,\n+ model = $3, init_params = $4, iter = $5, reg_covar = $6, tol = $7, seed=$8, verbose=TRUE)\n+\n+# predict labels\n+[pred, pp] = gmmPredict(test, w, mu, prec_chol, $3)\n+\n+# expected clusters/predictions\n+expected = matrix(\"6 6 5\", 3, 1)\n+\n+resp = matrix(1, 17, 3) * t(seq(1,3))\n+resp = resp == pred\n+cluster = t(colSums(resp))\n+\n+cluster = order(target = cluster, by = 1, decreasing = FALSE, index.return=FALSE)\n+correct_Predictions = order(target = expected, by = 1, decreasing = FALSE, index.return=FALSE)\n+\n+error = mean(abs(correct_Predictions - cluster))\n+write(error, $9, format = \"text\")\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2735] Builtin function gmmPredict for clustering instances
Closes #1108. |
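In standard mixture-model notation, the prediction performed by gmmPredict (a summary of the script above, not an additional derivation) is

    $\ell_k(x) = \log \mathcal{N}(x \mid \mu_k, \Sigma_k) + \log w_k$
    $p(k \mid x) = \exp\big(\ell_k(x) - \log \textstyle\sum_j \exp(\ell_j(x))\big)$
    $\hat{y}(x) = \arg\max_k \ell_k(x)$

which correspond to the weighted_log_prob, logSumExp normalization, and rowIndexMax steps, respectively.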
49,729 | 24.01.2021 20:20:14 | -3,600 | 7203c00dc7394d293f711293c95d5652aabb69a3 | Unified memory manager design and APIs
DIA project WS2020/21, part 1.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/UnifiedMemoryManager.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.controlprogram.caching;\n+\n+import org.apache.commons.lang.NotImplementedException;\n+\n+/**\n+ * Unified Memory Manager - Initial Design\n+ *\n+ * Motivation:\n+ * The Unified Memory Manager, henceforth UMM, will act as a central manager of in-memory\n+ * matrix (uncompressed and compressed), frame, and tensor blocks within SystemDS control\n+ * program. So far, operation memory (70%) and buffer pool memory (15%, LazyWriteBuffer)\n+ * are managed independently, which causes unnecessary evictions. New components like the\n+ * LineageCache also use and manage statically provisioned memory areas. Ultimately, the\n+ * UMM aims to eliminate these shortcomings by providing a central, potentially thread-local,\n+ * memory management.\n+ *\n+ * Memory Areas:\n+ * Initially, the UMM only handles CacheBlock objects (e.g., MatrixBlock, FrameBlock, and\n+ * TensorBlock), and manages two memory areas:\n+ * (1) operation memory (pinned cache blocks and reserved memory) and\n+ * (2) dirty objects (dirty cache blocks that need to be written to local FS before eviction)\n+ *\n+ * The UMM is configured with a capacity (absolute size in byte). Relative to this capacity,\n+ * the operations and buffer pool memory areas each will have a min and max amount of memory\n+ * they can occupy, meaning that the boundary for the areas can shift dynamically depending\n+ * on the current load. Most importantly, though, dirty objects must not be counted twice\n+ * when pinning such an object for an operation. The min/max constraints are not exposed but\n+ * configured internally. An good starting point are the following constraints (relative to\n+ * JVM max heap size):\n+ * ___________________________\n+ * | operations | 0% | 70% | (pin requests always accepted)\n+ * | buffer pool | 15% | 85% | (eviction on demand)\n+ *\n+ * Object Lifecycle:\n+ * The UMM will also need to keep track of the current state of individual cache blocks, for\n+ * which it will have a few member variables. A queue similar to the current EvictionQueue is\n+ * used to add/remove entries with LRU as its eviction policy. In general, there are three\n+ * properties of object status to consider:\n+ * (1) Non-dirty/dirty: non-dirty objects have a representation on HDFS or can be recomputed\n+ * from lineage trace (e.g., rand/seq outputs), while dirty objects need to be preserved.\n+ * (2) FS Persisted: on eviction, dirty objects need to be written to local file system.\n+ * As long the local file representation exist, dirty objects can simply be dropped.\n+ * (3) Pinned/unpinned: For operations, objects are pinned into memory to guard against\n+ * eviction. 
All pin requests have to be accepted, and once a non-dirty object is released\n+ * (unpinned) it can be dropped without persisting it to local FS.\n+ *\n+ * Thread-safeness:\n+ * Initially, the UMM will be used in an instance-based manner. For global visibility and\n+ * use in parallel for loops, the UMM would need to provide a static, synchronized API, but\n+ * this constitutes a source of severe contention. In the future, we will consider a design\n+ * with thread-local UMMs for the individual parfor workers.\n+ *\n+ * Testing:\n+ * The UMM will be developed bottom up, and thus initially tested via component tests for\n+ * evaluating the eviction behavior for sequences of API requests.\n+ */\n+public class UnifiedMemoryManager\n+{\n+ public UnifiedMemoryManager(long capacity) {\n+ //TODO implement\n+ throw new NotImplementedException();\n+ }\n+\n+ /**\n+ * Pins a cache block into operation memory.\n+ *\n+ * @param key unique identifier and local FS filename for eviction\n+ * @param block cache block if not under UMM control, null otherwise\n+ * @param dirty indicator if block is dirty (subject to buffer pool management)\n+ * @return pinned cache block, potentially restored from local FS\n+ */\n+ public CacheBlock pin(String key, CacheBlock block, boolean dirty) {\n+ //TODO implement\n+ throw new NotImplementedException();\n+ }\n+\n+ /**\n+ * Pins a virtual cache block into operation memory, by making a size reservation.\n+ * The provided size is an upper bound of the actual object size, and can be\n+ * updated on unpin (once the actual cache block is provided).\n+ *\n+ * @param key unique identifier and local FS filename for eviction\n+ * @param size memory reservation in operation area\n+ * @param dirty indicator if block is dirty (subject to buffer pool management)\n+ */\n+ public void pin(String key, long size, boolean dirty) {\n+ //TODO implement\n+ throw new NotImplementedException();\n+ }\n+\n+ /**\n+ * Unpins (releases) a cache block from operation memory. Dirty objects\n+ * are logically moved back to the buffer pool area.\n+ *\n+ * @param key unique identifier and local FS filename for eviction\n+ */\n+ public void unpin(String key) {\n+ //TODO implement\n+ throw new NotImplementedException();\n+ }\n+\n+ /**\n+ * Unpins (releases) a cache block from operation memory. If the size of\n+ * the provided cache block differs from the UMM meta data, the UMM meta\n+ * data is updated. Use cases include update-in-place operations and\n+ * size reservations via worst-case upper bound estimates.\n+ *\n+ * @param key unique identifier and local FS filename for eviction\n+ * @param block cache block which may be under UMM control, if null ignored\n+ */\n+ public void unpin(String key, CacheBlock block) {\n+ //TODO implement\n+ throw new NotImplementedException();\n+ }\n+\n+ /**\n+ * Removes a cache block associated with the given key from all memory\n+ * areas, and deletes evicted representations (files in local FS). The\n+ * local file system deletes can happen asynchronously.\n+ *\n+ * @param key unique identifier and local FS filename for eviction\n+ */\n+ public void delete(String key) {\n+ //TODO implement\n+ throw new NotImplementedException();\n+ }\n+\n+ /**\n+ * Removes all cache blocks from all memory areas and deletes all evicted\n+ * representations (files in local FS). All internally thread pools must be\n+ * shut down in a gracefully manner (e.g., wait for pending deletes).\n+ */\n+ public void deleteAll() {\n+ //TODO implement\n+ throw new NotImplementedException();\n+ }\n+}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2806] Unified memory manager design and APIs
DIA project WS2020/21, part 1.
Closes #1147. |
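To make the relative min/max constraints concrete, a small worked example (assuming the percentages from the design notes and a purely hypothetical 8 GB max heap):

    operations area: 0 ... 0.70 * 8 GB = 0 ... 5.6 GB (pin requests always accepted within this bound)
    buffer pool: 0.15 * 8 GB = 1.2 GB guaranteed, growing up to 0.85 * 8 GB = 6.8 GB with eviction on demand

and, per the design, a dirty block that is currently pinned for an operation counts only once against the shared capacity.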
49,689 | 25.01.2021 17:43:26 | -3,600 | b87001b11101fc32ece6618922b73a5c5741a2e3 | [MINOR] Update default lineage cache eviction policy to COSTNSIZE | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/api/DMLOptions.java",
"new_path": "src/main/java/org/apache/sysds/api/DMLOptions.java",
"diff": "@@ -62,7 +62,7 @@ public class DMLOptions {\npublic boolean lineage = false; // whether compute lineage trace\npublic boolean lineage_dedup = false; // whether deduplicate lineage items\npublic ReuseCacheType linReuseType = ReuseCacheType.NONE; // reuse type (full, partial, hybrid)\n- public LineageCachePolicy linCachePolicy= LineageCachePolicy.HYBRID; // lineage cache eviction policy\n+ public LineageCachePolicy linCachePolicy= LineageCachePolicy.COSTNSIZE; // lineage cache eviction policy\npublic boolean lineage_estimate = false; // whether estimate reuse benefits\npublic boolean fedWorker = false;\npublic int fedWorkerPort = -1;\n@@ -136,8 +136,6 @@ public class DMLOptions {\ndmlOptions.linCachePolicy = LineageCachePolicy.COSTNSIZE;\nelse if (lineageType.equalsIgnoreCase(\"policy_dagheight\"))\ndmlOptions.linCachePolicy = LineageCachePolicy.DAGHEIGHT;\n- else if (lineageType.equalsIgnoreCase(\"policy_hybrid\"))\n- dmlOptions.linCachePolicy = LineageCachePolicy.HYBRID;\nelse if (lineageType.equalsIgnoreCase(\"estimate\"))\ndmlOptions.lineage_estimate = lineageType.equalsIgnoreCase(\"estimate\");\nelse\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -106,8 +106,8 @@ public class LineageCacheConfig\n//-------------EVICTION RELATED CONFIGURATIONS--------------//\nprivate static LineageCachePolicy _cachepolicy = null;\n- // Weights for scoring components (computeTime/size, LRU timestamp)\n- protected static double[] WEIGHTS = {0, 1, 0};\n+ // Weights for scoring components (computeTime/size, LRU timestamp, DAG height)\n+ protected static double[] WEIGHTS = {1, 0, 0};\nprotected enum LineageCacheStatus {\nEMPTY, //Placeholder with no data. Cannot be evicted.\n@@ -126,7 +126,6 @@ public class LineageCacheConfig\nLRU,\nCOSTNSIZE,\nDAGHEIGHT,\n- HYBRID;\n}\nprotected static Comparator<LineageCacheEntry> LineageCacheComparator = (e1, e2) -> {\n@@ -158,9 +157,6 @@ public class LineageCacheConfig\ne1_ts < e2_ts ? -1 : 1;\nbreak;\n}\n- case HYBRID:\n- // order entries with same score by IDs\n- ret = Long.compare(e1._key.getId(), e2._key.getId());\n}\n}\nelse\n@@ -175,7 +171,7 @@ public class LineageCacheConfig\n//setup static configuration parameters\nREUSE_OPCODES = OPCODES;\nsetSpill(true);\n- setCachePolicy(LineageCachePolicy.HYBRID);\n+ setCachePolicy(LineageCachePolicy.COSTNSIZE);\nsetCompAssRW(true);\n}\n@@ -271,6 +267,7 @@ public class LineageCacheConfig\n}\npublic static void setCachePolicy(LineageCachePolicy policy) {\n+ // TODO: Automatic tuning of weights.\nswitch(policy) {\ncase LRU:\nWEIGHTS[0] = 0; WEIGHTS[1] = 1; WEIGHTS[2] = 0;\n@@ -281,15 +278,6 @@ public class LineageCacheConfig\ncase DAGHEIGHT:\nWEIGHTS[0] = 0; WEIGHTS[1] = 0; WEIGHTS[2] = 1;\nbreak;\n- case HYBRID:\n- WEIGHTS[0] = 1; WEIGHTS[1] = 0.0033; WEIGHTS[2] = 0;\n- // FIXME: Relative timestamp fix reduces the absolute\n- // value of the timestamp component of the scoring function\n- // to a comparatively much smaller number. W[1] needs to be\n- // re-tuned accordingly.\n- // FIXME: Tune hybrid with a ratio of all three.\n- // TODO: Automatic tuning of weights.\n- break;\n}\n_cachepolicy = policy;\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Update default lineage cache eviction policy to COSTNSIZE |
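Read as a scoring function (an interpretation of the weight vectors above, not code taken from the patch), each cache entry e is ordered by a weighted combination of the three components

    $\text{score}(e) = w_1 \cdot \text{computeTime}(e)/\text{size}(e) + w_2 \cdot \text{timestamp}(e) + w_3 \cdot \text{dagHeight}(e)$

with COSTNSIZE -> (w1, w2, w3) = (1, 0, 0), LRU -> (0, 1, 0), and DAGHEIGHT -> (0, 0, 1); the new COSTNSIZE default thus favors retaining entries that are expensive to recompute relative to their size.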
49,706 | 29.01.2021 13:53:51 | -3,600 | 81d0e95b0ad59dd0a9ee2635b4ce40a6b9376bd6 | [MINOR] Compression stats only if used | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/DMLCompressionStatistics.java",
"new_path": "src/main/java/org/apache/sysds/utils/DMLCompressionStatistics.java",
"diff": "@@ -77,7 +77,7 @@ public class DMLCompressionStatistics {\n}\npublic static void display(StringBuilder sb) {\n-\n+ if(Phase0 > 0.0){ // If compression have been used\nsb.append(String.format(\n\"CLA Compression Phases :\\t%.3f/%.3f/%.3f/%.3f/%.3f/%.3f\\n\",\nPhase0 / 1000,\n@@ -94,3 +94,4 @@ public class DMLCompressionStatistics {\nDecompressMT / 1000));\n}\n}\n+}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Compression stats only if used |
49,738 | 29.01.2021 13:55:41 | -3,600 | 527e47d7ac4d594020c0ac508a2ceae03048429a | [MINOR] Fix multiLogRegPredict (sanity check matching dims, cleanup) | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/multiLogRegPredict.dml",
"new_path": "scripts/builtin/multiLogRegPredict.dml",
"diff": "# accuracy Double --- scalar value of accuracy\n# ---------------------------------------------------------------------------------------------\n-\nm_multiLogRegPredict = function(Matrix[Double] X, Matrix[Double] B, Matrix[Double] Y, Boolean verbose = FALSE)\nreturn(Matrix[Double] M, Matrix[Double] predicted_Y, Double accuracy)\n{\nif(min(Y) <= 0)\n- stop(\"class labels should be greater than zero\")\n+ stop(\"multiLogRegPredict: class labels should be greater than zero\")\n+ if(ncol(X) < nrow(B)-1)\n+ stop(\"multiLogRegPredict: mismatching ncol(X) and nrow(B): \"+ncol(X)+\" \"+nrow(B));\n- num_records = nrow(X);\n- num_features = ncol(X);\nbeta = B[1:ncol(X), ];\n- intercept = B[nrow(B), ];\n-\n- if (nrow(B) == ncol(X))\n- intercept = 0.0 * intercept;\n- else\n- num_features = num_features + 1;\n-\n- ones_rec = matrix(1, rows = num_records, cols = 1);\n- linear_terms = X %*% beta + ones_rec %*% intercept;\n+ intercept = ifelse(ncol(X)==nrow(B), matrix(0,1,ncol(B)), B[nrow(B),]);\n+ linear_terms = X %*% beta + matrix(1,nrow(X),1) %*% intercept;\nM = probabilities(linear_terms); # compute the probablitites on unknown data\npredicted_Y = rowIndexMax(M); # extract the class labels\nif(nrow(Y) != 0)\n- accuracy = sum((predicted_Y - Y) == 0) / num_records * 100;\n+ accuracy = sum((predicted_Y - Y) == 0) / nrow(Y) * 100;\nif(verbose)\nprint(\"Accuracy (%): \" + accuracy);\n}\nprobabilities = function (Matrix[double] linear_terms)\n- return (Matrix[double] means) {\n+ return (Matrix[double] means)\n+{\n# PROBABLITIES FOR MULTINOMIAL LOGIT DISTRIBUTION\nnum_points = nrow (linear_terms);\nelt = exp (linear_terms);\n@@ -80,5 +73,3 @@ probabilities = function (Matrix[double] linear_terms)\nones_ctg = matrix (1, rows = ncol (elt), cols = 1);\nmeans = elt / (rowSums (elt) %*% t(ones_ctg));\n}\n-\n-\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix multiLogRegPredict (sanity check matching dims, cleanup) |
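For reference, the probabilities helper shown above is a row-wise softmax over the linear terms $\ell = X\beta + 1 \cdot \text{intercept}$:

    $P(y = k \mid x_i) = \exp(\ell_{i,k}) \, / \, \textstyle\sum_j \exp(\ell_{i,j})$

and the predicted label per row is the arg-max of these probabilities, i.e., rowIndexMax(M).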
49,738 | 29.01.2021 15:38:49 | -3,600 | b2699ae3951c05ba8d6e14888eac6acd3ac5e77c | Cleanup lmPredict (rename, parameters, eval accuracy) | [
{
"change_type": "MODIFY",
"old_path": "docs/site/builtins-reference.md",
"new_path": "docs/site/builtins-reference.md",
"diff": "@@ -45,7 +45,7 @@ limitations under the License.\n* [`lm`-Function](#lm-function)\n* [`lmDS`-Function](#lmds-function)\n* [`lmCG`-Function](#lmcg-function)\n- * [`lmpredict`-Function](#lmpredict-function)\n+ * [`lmPredict`-Function](#lmPredict-function)\n* [`mice`-Function](#mice-function)\n* [`multiLogReg`-Function](#multiLogReg-function)\n* [`pnmf`-Function](#pnmf-function)\n@@ -183,7 +183,7 @@ y = toOneHot(X, numClasses)\n## `cvlm`-Function\nThe `cvlm`-function is used for cross-validation of the provided data model. This function follows a non-exhaustive\n-cross validation method. It uses [`lm`](#lm-function) and [`lmpredict`](#lmpredict-function) functions to solve the linear\n+cross validation method. It uses [`lm`](#lm-function) and [`lmPredict`](#lmPredict-function) functions to solve the linear\nregression and to predict the class of a feature vector with no intercept, shifting, and rescaling.\n### Usage\n@@ -425,7 +425,7 @@ Through multiple parallel brackets and consecutive trials it will return the hyp\non a validation dataset. A set of hyper parameter combinations is drawn from uniform distributions with given ranges; Those\nmake up the candidates for `hyperband`.\nNotes:\n-* `hyperband` is hard-coded for `lmCG`, and uses `lmpredict` for validation\n+* `hyperband` is hard-coded for `lmCG`, and uses `lmPredict` for validation\n* `hyperband` is hard-coded to use the number of iterations as a resource\n* `hyperband` can only optimize continuous hyperparameters\n@@ -778,14 +778,14 @@ y = X %*% rand(rows = ncol(X), cols = 1)\nlmCG(X = X, y = y, maxi = 10)\n```\n-## `lmpredict`-Function\n+## `lmPredict`-Function\n-The `lmpredict`-function predicts the class of a feature vector.\n+The `lmPredict`-function predicts the class of a feature vector.\n### Usage\n```r\n-lmpredict(X, w)\n+lmPredict(X=X, B=w)\n```\n### Arguments\n@@ -793,8 +793,11 @@ lmpredict(X, w)\n| Name | Type | Default | Description |\n| :------ | :------------- | -------- | :---------- |\n| X | Matrix[Double] | required | Matrix of feature vector(s). |\n-| w | Matrix[Double] | required | 1-column matrix of weights. |\n-| icpt | Matrix[Double] | `0` | Intercept presence, shifting and rescaling of X ([Details](#icpt-argument))|\n+| B | Matrix[Double] | required | 1-column matrix of weights. |\n+| ytest | Matrix[Double] | optional | Optional test labels, used only for verbose output. |\n+| icpt | Integer | 0 | Intercept presence, shifting and rescaling of X ([Details](#icpt-argument))|\n+| verbose | Boolean | FALSE | Print various statistics for evaluating accuracy. |\n+\n### Returns\n@@ -808,7 +811,7 @@ lmpredict(X, w)\nX = rand (rows = 50, cols = 10)\ny = X %*% rand(rows = ncol(X), cols = 1)\nw = lm(X = X, y = y)\n-yp = lmpredict(X, w)\n+yp = lmPredict(X = X, B = w)\n```\n## `mice`-Function\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/cvlm.dml",
"new_path": "scripts/builtin/cvlm.dml",
"diff": "@@ -42,7 +42,7 @@ m_cvlm = function(Matrix[Double] X, Matrix[Double] y, Integer k, Integer icpt =\n}\nbeta = lm(X=trainSet, y=trainRes, icpt=icpt, reg=reg);\n- pred = lmpredict(X=testSet, w=beta, icpt=icpt);\n+ pred = lmPredict(X=testSet, B=beta, icpt=icpt);\ny_predict[testS:testE,] = pred;\nallbeta[i,] = t(beta);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/hyperband.dml",
"new_path": "scripts/builtin/hyperband.dml",
"diff": "@@ -104,7 +104,7 @@ m_hyperband = function(Matrix[Double] X_train, Matrix[Double] y_train,\ntol=as.scalar(args[1]), reg=as.scalar(args[2]), maxi=r_i, verbose=FALSE));\ncandidateWeights[curCandidate] = t(weights)\n- preds = lmpredict(X=X_val, w=weights);\n+ preds = lmPredict(X=X_val, B=weights);\nscoreboard[curCandidate,1] = as.matrix(sum((y_val - preds)^2));\n}\n"
},
{
"change_type": "RENAME",
"old_path": "scripts/builtin/lmpredict.dml",
"new_path": "scripts/builtin/lmPredict.dml",
"diff": "#\n#-------------------------------------------------------------\n-m_lmpredict = function(Matrix[Double] X, Matrix[Double] w, Integer icpt = 0) return (Matrix[Double] y) {\n- intercept_status = icpt;\n+m_lmPredict = function(Matrix[Double] X, Matrix[Double] B,\n+ Matrix[Double] ytest = matrix(0,1,1), Integer icpt = 0, Boolean verbose = FALSE)\n+ return (Matrix[Double] yhat)\n+{\n+ intercept = ifelse(icpt==0, matrix(0,1,ncol(B)), B[nrow(B),]);\n+ yhat = X %*% B[1:ncol(X)] + matrix(1,nrow(X),1) %*% intercept;\n- if (intercept_status == 0) {\n- y = X %*% w\n- }\n- else if (intercept_status == 1) {\n- ones_n = matrix (1, rows = nrow (X), cols = 1);\n- X = cbind (X, ones_n);\n- y = X %*% w;\n- } else {\n- #ToDo: icpt == 2\n+ if( verbose ) {\n+ y_residual = ytest - yhat;\n+ avg_res = sum(y_residual) / nrow(ytest);\n+ ss_res = sum(y_residual^2);\n+ ss_avg_res = ss_res - nrow(ytest) * avg_res^2;\n+ R2 = 1 - ss_res / (sum(ytest^2) - nrow(ytest) * (sum(ytest)/nrow(ytest))^2);\n+ print(\"\\nAccuracy:\" +\n+ \"\\n--sum(ytest) = \" + sum(ytest) +\n+ \"\\n--sum(yhat) = \" + sum(yhat) +\n+ \"\\n--AVG_RES_Y: \" + avg_res +\n+ \"\\n--SS_AVG_RES_Y: \" + ss_avg_res +\n+ \"\\n--R2: \" + R2 );\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/mice.dml",
"new_path": "scripts/builtin/mice.dml",
"diff": "@@ -133,7 +133,7 @@ m_mice= function(Matrix[Double] X, Matrix[Double] cMask, Integer iter = 3, Boole\n# learn a regression line\nbeta = lm(X=train_X, y=train_Y, verbose=FALSE, icpt=1, reg = 1e-7, tol = 1e-7);\n# predicting missing values\n- pred = lmpredict(X=test_X, w=beta, icpt=1)\n+ pred = lmPredict(X=test_X, B=beta, icpt=1)\n# imputing missing column values (assumes Mask_Filled being 0/1-matrix)\nR = removeEmpty(target=Mask_Filled[, in_c] * seq(1,nrow(X1)), margin=\"rows\");\n# TODO modify removeEmpty to return zero row and n columns\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/outlierByArima.dml",
"new_path": "scripts/builtin/outlierByArima.dml",
"diff": "@@ -64,7 +64,7 @@ m_outlierByArima = function(Matrix[Double] X, Double k = 3, Integer repairMethod\n# TODO replace by ARIMA once fully supported, LM only emulated the AR part\nmodel = lm(X=features, y=X_adapted)\n- y_hat = lmpredict(X=features, w=model)\n+ y_hat = lmPredict(X=features, B=model)\nupperBound = sd(X) + k * y_hat\nlowerBound = sd(X) - k * y_hat\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -144,7 +144,7 @@ public enum Builtins {\nLM(\"lm\", true),\nLMCG(\"lmCG\", true),\nLMDS(\"lmDS\", true),\n- LMPREDICT(\"lmpredict\", true),\n+ LMPREDICT(\"lmPredict\", true),\nLOG(\"log\", false),\nLOGSUMEXP(\"logSumExp\", true),\nLSTM(\"lstm\", false, ReturnType.MULTI_RETURN),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/builtin/lmpredict.dml",
"new_path": "src/test/scripts/functions/builtin/lmpredict.dml",
"diff": "@@ -23,5 +23,5 @@ X = read($1) # Training data\ny = read($2) # response values\np = read($3) # random data to predict\nw = lmDS(X = X, y = y, icpt = 1, reg = 1e-12)\n-p = lmpredict(X = X, w = w, icpt = 1)\n+p = lmPredict(X = X, B = w, icpt = 1)\nwrite(p, $4)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedLmPipeline.dml",
"new_path": "src/test/scripts/functions/federated/FederatedLmPipeline.dml",
"diff": "@@ -47,19 +47,7 @@ X = scale(X=X, center=TRUE, scale=TRUE);\nB = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=TRUE)\n# model evaluation on test split\n-yhat = lmpredict(X=Xtest, w=B, icpt=1);\n-y_residual = ytest - yhat;\n-\n-avg_res = sum(y_residual) / nrow(ytest);\n-ss_res = sum(y_residual^2);\n-ss_avg_res = ss_res - nrow(ytest) * avg_res^2;\n-R2 = 1 - ss_res / (sum(y^2) - nrow(ytest) * (sum(y)/nrow(ytest))^2);\n-print(\"\\nAccuracy:\" +\n- \"\\n--sum(ytest) = \" + sum(ytest) +\n- \"\\n--sum(yhat) = \" + sum(yhat) +\n- \"\\n--AVG_RES_Y: \" + avg_res +\n- \"\\n--SS_AVG_RES_Y: \" + ss_avg_res +\n- \"\\n--R2: \" + R2 );\n+yhat = lmPredict(X=Xtest, B=B, icpt=1, ytest=ytest verbose=TRUE);\n# write trained model and meta data\nwrite(B, $out)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedLmPipeline4Workers.dml",
"new_path": "src/test/scripts/functions/federated/FederatedLmPipeline4Workers.dml",
"diff": "@@ -49,19 +49,7 @@ X = scale(X=X, center=TRUE, scale=TRUE);\nB = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=TRUE)\n# model evaluation on test split\n-yhat = lmpredict(X=Xtest, w=B, icpt=1);\n-y_residual = ytest - yhat;\n-\n-avg_res = sum(y_residual) / nrow(ytest);\n-ss_res = sum(y_residual^2);\n-ss_avg_res = ss_res - nrow(ytest) * avg_res^2;\n-R2 = 1 - ss_res / (sum(y^2) - nrow(ytest) * (sum(y)/nrow(ytest))^2);\n-print(\"\\nAccuracy:\" +\n- \"\\n--sum(ytest) = \" + sum(ytest) +\n- \"\\n--sum(yhat) = \" + sum(yhat) +\n- \"\\n--AVG_RES_Y: \" + avg_res +\n- \"\\n--SS_AVG_RES_Y: \" + ss_avg_res +\n- \"\\n--R2: \" + R2 );\n+yhat = lmPredict(X=Xtest, B=B, icpt=1, ytest=ytest verbose=TRUE);\n# write trained model and meta data\nwrite(B, $out)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedLmPipeline4WorkersReference.dml",
"new_path": "src/test/scripts/functions/federated/FederatedLmPipeline4WorkersReference.dml",
"diff": "@@ -47,19 +47,7 @@ X = scale(X=X, center=TRUE, scale=TRUE);\nB = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=TRUE)\n# model evaluation on test split\n-yhat = lmpredict(X=Xtest, w=B, icpt=1);\n-y_residual = ytest - yhat;\n-\n-avg_res = sum(y_residual) / nrow(ytest);\n-ss_res = sum(y_residual^2);\n-ss_avg_res = ss_res - nrow(ytest) * avg_res^2;\n-R2 = 1 - ss_res / (sum(y^2) - nrow(ytest) * (sum(y)/nrow(ytest))^2);\n-print(\"\\nAccuracy:\" +\n- \"\\n--sum(ytest) = \" + sum(ytest) +\n- \"\\n--sum(yhat) = \" + sum(yhat) +\n- \"\\n--AVG_RES_Y: \" + avg_res +\n- \"\\n--SS_AVG_RES_Y: \" + ss_avg_res +\n- \"\\n--R2: \" + R2 );\n+yhat = lmPredict(X=Xtest, B=B, icpt=1, ytest=ytest verbose=TRUE);\n# write trained model and meta data\nwrite(B, $7)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/FederatedLmPipelineReference.dml",
"new_path": "src/test/scripts/functions/federated/FederatedLmPipelineReference.dml",
"diff": "@@ -47,19 +47,7 @@ X = scale(X=X, center=TRUE, scale=TRUE);\nB = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=TRUE)\n# model evaluation on test split\n-yhat = lmpredict(X=Xtest, w=B, icpt=1);\n-y_residual = ytest - yhat;\n-\n-avg_res = sum(y_residual) / nrow(ytest);\n-ss_res = sum(y_residual^2);\n-ss_avg_res = ss_res - nrow(ytest) * avg_res^2;\n-R2 = 1 - ss_res / (sum(y^2) - nrow(ytest) * (sum(y)/nrow(ytest))^2);\n-print(\"\\nAccuracy:\" +\n- \"\\n--sum(ytest) = \" + sum(ytest) +\n- \"\\n--sum(yhat) = \" + sum(yhat) +\n- \"\\n--AVG_RES_Y: \" + avg_res +\n- \"\\n--SS_AVG_RES_Y: \" + ss_avg_res +\n- \"\\n--R2: \" + R2 );\n+yhat = lmPredict(X=Xtest, B=B, icpt=1, ytest=ytest verbose=TRUE);\n# write trained model and meta data\nwrite(B, $7)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/lineage/LineageReuseAlg6.dml",
"new_path": "src/test/scripts/functions/lineage/LineageReuseAlg6.dml",
"diff": "@@ -89,7 +89,7 @@ Kc = floor(ncol(A) * 0.8);\nfor (i in 1:10) {\nnewA1 = PCA(A=A, K=Kc+i);\nbeta1 = lm(X=newA1, y=y, icpt=1, reg=0.0001, verbose=FALSE);\n- y_predict1 = lmpredict(X=newA1, w=beta1, icpt=1);\n+ y_predict1 = lmPredict(X=newA1, B=beta1, icpt=1);\nR2_ad1 = checkR2(newA1, y, y_predict1, beta1, 1);\nR[,i] = R2_ad1;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/recompile/IPAFunctionArgsFor.dml",
"new_path": "src/test/scripts/functions/recompile/IPAFunctionArgsFor.dml",
"diff": "@@ -92,7 +92,7 @@ Kc = floor(ncol(A) * 0.8);\nfor (i in 1:10) {\nnewA1 = PCA(A=A, K=Kc+i);\nbeta1 = lm(X=newA1, y=y, icpt=1, reg=0.0001, verbose=FALSE);\n- y_predict1 = lmpredict(X=newA1, w=beta1, icpt=1);\n+ y_predict1 = lmPredict(X=newA1, B=beta1, icpt=1);\nR2_ad1 = checkR2(newA1, y, y_predict1, beta1, 1);\nR[,i] = R2_ad1;\n}\n@@ -100,7 +100,7 @@ for (i in 1:10) {\nfor (i in 1:10) {\nnewA3 = PCA(A=A, K=Kc+5);\nbeta3 = lm(X=newA3, y=y, icpt=1, reg=0.001*i, verbose=FALSE);\n- y_predict3 = lmpredict(X=newA3, w=beta3, icpt=1);\n+ y_predict3 = lmPredict(X=newA3, B=beta3, icpt=1);\nR2_ad3 = checkR2(newA3, y, y_predict3, beta3, 1);\nR[,10+i] = R2_ad3;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/recompile/IPAFunctionArgsParfor.dml",
"new_path": "src/test/scripts/functions/recompile/IPAFunctionArgsParfor.dml",
"diff": "@@ -92,7 +92,7 @@ Kc = floor(ncol(A) * 0.8);\nfor (i in 1:10) {\nnewA1 = PCA(A=A, K=Kc+i);\nbeta1 = lm(X=newA1, y=y, icpt=1, reg=0.0001, verbose=FALSE);\n- y_predict1 = lmpredict(X=newA1, w=beta1, icpt=1);\n+ y_predict1 = lmPredict(X=newA1, B=beta1, icpt=1);\nR2_ad1 = checkR2(newA1, y, y_predict1, beta1, 1);\nR[,i] = R2_ad1;\n}\n@@ -100,7 +100,7 @@ for (i in 1:10) {\nparfor (i in 1:10) {\nnewA3 = PCA(A=A, K=Kc+5);\nbeta3 = lm(X=newA3, y=y, icpt=1, reg=0.001*i, verbose=FALSE);\n- y_predict3 = lmpredict(X=newA3, w=beta3, icpt=1);\n+ y_predict3 = lmPredict(X=newA3, B=beta3, icpt=1);\nR2_ad3 = checkR2(newA3, y, y_predict3, beta3, 1);\nR[,10+i] = R2_ad3;\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-113] Cleanup lmPredict (rename, parameters, eval accuracy) |
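A short usage sketch of the renamed builtin with its new optional arguments (illustrative; it assumes Xtrain/ytrain/Xtest/ytest already exist, e.g., from a train/test split as in the federated pipeline scripts):

    B = lm(X=Xtrain, y=ytrain, icpt=1, reg=1e-3, tol=1e-9, verbose=FALSE);
    yhat = lmPredict(X=Xtest, B=B, ytest=ytest, icpt=1, verbose=TRUE);

With verbose=TRUE the builtin now prints the residual statistics and R2 = 1 - SS_res / SS_tot itself, which is what the pipeline scripts previously computed inline.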
49,689 | 29.01.2021 17:37:55 | -3,600 | aa09b5c3d3b5d221426fd06871b6690e1297ee9e | Add lineage specific flags to SystemDS-config
This patch adds two lineage-specific configurations: enable/disable
cache spilling and enable/disable compiler-assisted dynamic
rewrites. Both are true by default. These flags help in
automating microbenchmarks. | [
{
"change_type": "MODIFY",
"old_path": "conf/SystemDS-config.xml.template",
"new_path": "conf/SystemDS-config.xml.template",
"diff": "<!-- Allocator to use to allocate GPU device memory. Supported values are cuda, unified_memory (default: cuda) -->\n<sysds.gpu.memory.allocator>cuda</sysds.gpu.memory.allocator>\n+\n+ <!-- enables disk spilling for lineage cache -->\n+ <sysds.lineage.cachespill>true</sysds.lineage.cachespill>\n+\n+ <!-- enables compiler assisted partial rewrites (e.g. Append-TSMM) -->\n+ <sysds.lineage.compilerassisted>true</sysds.lineage.compilerassisted>\n</root>\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"new_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"diff": "@@ -88,6 +88,8 @@ public class DMLConfig\npublic static final String EAGER_CUDA_FREE = \"sysds.gpu.eager.cudaFree\"; // boolean: whether to perform eager CUDA free on rmvar\npublic static final String GPU_EVICTION_POLICY = \"sysds.gpu.eviction.policy\"; // string: can be lru, lfu, min_evict\npublic static final String LOCAL_SPARK_NUM_THREADS = \"sysds.local.spark.number.threads\";\n+ public static final String LINEAGECACHESPILL = \"sysds.lineage.cachespill\"; // boolean: whether to spill cache entries to disk\n+ public static final String COMPILERASSISTED_RW = \"sysds.lineage.compilerassisted\"; // boolean: whether to apply compiler assisted rewrites\n// Fraction of available memory to use. The available memory is computer when the GPUContext is created\n// to handle the tradeoff on calling cudaMemGetInfo too often.\n@@ -137,6 +139,8 @@ public class DMLConfig\n_defaultVals.put(CODEGEN_LITERALS, \"1\" );\n_defaultVals.put(NATIVE_BLAS, \"none\" );\n_defaultVals.put(NATIVE_BLAS_DIR, \"none\" );\n+ _defaultVals.put(LINEAGECACHESPILL, \"true\" );\n+ _defaultVals.put(COMPILERASSISTED_RW, \"true\" );\n_defaultVals.put(PRINT_GPU_MEMORY_INFO, \"false\" );\n_defaultVals.put(EVICTION_SHADOW_BUFFERSIZE, \"0.0\" );\n_defaultVals.put(STATS_MAX_WRAP_LEN, \"30\" );\n@@ -396,7 +400,7 @@ public class DMLConfig\nCOMPRESSED_LINALG, COMPRESSED_LOSSY, COMPRESSED_VALID_COMPRESSIONS, COMPRESSED_OVERLAPPING,\nCOMPRESSED_SAMPLING_RATIO, COMPRESSED_COCODE, COMPRESSED_TRANSPOSE,\nCODEGEN, CODEGEN_API, CODEGEN_COMPILER, CODEGEN_OPTIMIZER, CODEGEN_PLANCACHE, CODEGEN_LITERALS,\n- STATS_MAX_WRAP_LEN, PRINT_GPU_MEMORY_INFO,\n+ STATS_MAX_WRAP_LEN, LINEAGECACHESPILL, COMPILERASSISTED_RW, PRINT_GPU_MEMORY_INFO,\nAVAILABLE_GPUS, SYNCHRONIZE_GPU, EAGER_CUDA_FREE, FLOATING_POINT_PRECISION, GPU_EVICTION_POLICY,\nLOCAL_SPARK_NUM_THREADS, EVICTION_SHADOW_BUFFERSIZE, GPU_MEMORY_ALLOCATOR, GPU_MEMORY_UTILIZATION_FACTOR,\nUSE_SSL_FEDERATED_COMMUNICATION, DEFAULT_FEDERATED_INITIALIZATION_TIMEOUT\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/rewrite/RewriteAlgebraicSimplificationDynamic.java",
"new_path": "src/main/java/org/apache/sysds/hops/rewrite/RewriteAlgebraicSimplificationDynamic.java",
"diff": "@@ -55,6 +55,8 @@ import org.apache.sysds.lops.MapMultChain.ChainType;\nimport org.apache.sysds.parser.DataExpression;\nimport org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.common.Types.ValueType;\n+import org.apache.sysds.conf.ConfigurationManager;\n+import org.apache.sysds.conf.DMLConfig;\n/**\n* Rule: Algebraic Simplifications. Simplifies binary expressions\n@@ -154,6 +156,7 @@ public class RewriteAlgebraicSimplificationDynamic extends HopRewriteRule\nhi = removeUnnecessaryReorgOperation(hop, hi, i); //e.g., matrix(X) -> X, if dims(in)==dims(out); r(X)->X, if 1x1 dims\nhi = removeUnnecessaryOuterProduct(hop, hi, i); //e.g., X*(Y%*%matrix(1,...) -> X*Y, if Y col vector\nhi = removeUnnecessaryIfElseOperation(hop, hi, i);//e.g., ifelse(E, A, B) -> A, if E==TRUE or nnz(E)==length(E)\n+ if(ConfigurationManager.getDMLConfig().getBooleanValue(DMLConfig.COMPILERASSISTED_RW))\nhi = removeUnnecessaryAppendTSMM(hop, hi, i); //e.g., X = t(rbind(A,B,C)) %*% rbind(A,B,C) -> t(A)%*%A + t(B)%*%B + t(C)%*%C\nif(OptimizerUtils.ALLOW_OPERATOR_FUSION)\nhi = fuseDatagenAndReorgOperation(hop, hi, i); //e.g., t(rand(rows=10,cols=1)) -> rand(rows=1,cols=10), if one dim=1\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -21,6 +21,8 @@ package org.apache.sysds.runtime.lineage;\nimport org.apache.commons.lang3.ArrayUtils;\nimport org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.conf.ConfigurationManager;\n+import org.apache.sysds.conf.DMLConfig;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.instructions.Instruction;\n@@ -77,7 +79,7 @@ public class LineageCacheConfig\n//-------------DISK SPILLING RELATED CONFIGURATIONS--------------//\n- private static boolean _allowSpill = false;\n+ //private static boolean _allowSpill = false;\n// Minimum reliable spilling estimate in milliseconds.\npublic static final double MIN_SPILL_TIME_ESTIMATE = 10;\n// Minimum reliable data size for spilling estimate in MB.\n@@ -170,7 +172,7 @@ public class LineageCacheConfig\nstatic {\n//setup static configuration parameters\nREUSE_OPCODES = OPCODES;\n- setSpill(true);\n+ //setSpill(true);\nsetCachePolicy(LineageCachePolicy.COSTNSIZE);\nsetCompAssRW(true);\n}\n@@ -308,13 +310,15 @@ public class LineageCacheConfig\nreturn (WEIGHTS[2] > 0);\n}\n- public static void setSpill(boolean toSpill) {\n+ /*public static void setSpill(boolean toSpill) {\n_allowSpill = toSpill;\n// NOTE: _allowSpill only enables/disables disk spilling, but has\n// no control over eviction order of cached items.\n- }\n+ }*/\npublic static boolean isSetSpill() {\n- return _allowSpill;\n+ // Check if cachespill set in SystemDS-config (default true)\n+ DMLConfig conf = ConfigurationManager.getDMLConfig();\n+ return conf.getBooleanValue(DMLConfig.LINEAGECACHESPILL);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/lineage/CacheEvictionTest.java",
"diff": "package org.apache.sysds.test.functions.lineage;\n+import java.io.File;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\n@@ -27,7 +28,6 @@ import java.util.List;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.hops.recompile.Recompiler;\nimport org.apache.sysds.runtime.lineage.Lineage;\n-import org.apache.sysds.runtime.lineage.LineageCacheConfig;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig.ReuseCacheType;\nimport org.apache.sysds.runtime.lineage.LineageCacheStatistics;\nimport org.apache.sysds.runtime.matrix.data.MatrixValue;\n@@ -44,6 +44,8 @@ public class CacheEvictionTest extends LineageBase {\nprotected static final String TEST_NAME1 = \"CacheEviction2\";\nprotected String TEST_CLASS_DIR = TEST_DIR + CacheEvictionTest.class.getSimpleName() + \"/\";\n+ private final static String TEST_CONF = \"SystemDS-config-eviction.xml\";\n+ private final static File TEST_CONF_FILE = new File(SCRIPT_DIR + TEST_DIR, TEST_CONF);\n@Override\npublic void setUp() {\n@@ -82,7 +84,6 @@ public class CacheEvictionTest extends LineageBase {\ngetAndLoadTestConfiguration(testname);\nfullDMLScriptName = getScript();\nLineage.resetInternalState();\n- LineageCacheConfig.setSpill(false); //disable spilling\n// LRU based eviction\nList<String> proArgs = new ArrayList<>();\n@@ -126,8 +127,17 @@ public class CacheEvictionTest extends LineageBase {\nfinally {\nOptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\nOptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\n- LineageCacheConfig.setSpill(true);\nRecompiler.reinitRecompiler();\n}\n}\n+ /**\n+ * Override default configuration with custom test configuration to ensure\n+ * scratch space and local temporary directory locations are also updated.\n+ */\n+ @Override\n+ protected File getConfigTemplateFile() {\n+ // Instrumentation in this test's output log to show custom configuration file used for template.\n+ System.out.println(\"This test case overrides default configuration with \" + TEST_CONF_FILE.getPath());\n+ return TEST_CONF_FILE;\n+ }\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/lineage/SystemDS-config-eviction.xml",
"diff": "+<!--\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+-->\n+\n+<root>\n+ <!-- local fs tmp working directory-->\n+ <sysds.localtmpdir>/tmp/systemds</sysds.localtmpdir>\n+\n+ <!-- hdfs tmp working directory-->\n+ <sysds.scratch>scratch_space</sysds.scratch>\n+\n+ <!-- compiler optimization level, valid values: 0 | 1 | 2 | 3 | 4, default: 2 -->\n+ <sysds.optlevel>2</sysds.optlevel>\n+\n+ <!-- default block dim for binary block files -->\n+ <sysds.defaultblocksize>1000</sysds.defaultblocksize>\n+\n+ <!-- enables multi-threaded operations in singlenode control program -->\n+ <sysds.cp.parallel.ops>true</sysds.cp.parallel.ops>\n+\n+ <!-- enables multi-threaded read/write in singlenode control program -->\n+ <sysds.cp.parallel.io>true</sysds.cp.parallel.io>\n+\n+ <!-- enables compressed linear algebra, experimental feature -->\n+ <sysds.compressed.linalg>auto</sysds.compressed.linalg>\n+\n+ <!-- enables operator fusion via code generation, experimental feature -->\n+ <sysds.codegen.enabled>false</sysds.codegen.enabled>\n+\n+ <!-- set the codegen API (auto, java, cuda) -->\n+ <sysds.codegen.api>auto</sysds.codegen.api>\n+\n+ <!-- set the codegen java compiler (auto, janino, javac, nvcc, nvrtc) -->\n+ <sysds.codegen.compiler>auto</sysds.codegen.compiler>\n+\n+ <!-- set the codegen optimizer (fuse_all, fuse_no_redundancy, fuse_cost_based_v2) -->\n+ <sysds.codegen.optimizer>fuse_cost_based_v2</sysds.codegen.optimizer>\n+\n+ <!-- if codegen.enabled, enables source code caching of fused operators -->\n+ <sysds.codegen.plancache>true</sysds.codegen.plancache>\n+\n+ <!-- if codegen.enabled, compile literals as constants: 1..heuristic, 2..always -->\n+ <sysds.codegen.literals>1</sysds.codegen.literals>\n+\n+ <!-- enables native blas for matrix multiplication and convolution, experimental feature (options: auto, mkl, openblas, none) -->\n+ <sysds.native.blas>none</sysds.native.blas>\n+\n+ <!-- custom directory where BLAS libraries are available, experimental feature (options: absolute directory path or none). If set to none, we use standard LD_LIBRARY_PATH. -->\n+ <sysds.native.blas.directory>none</sysds.native.blas.directory>\n+\n+ <!-- sets the GPUs to use per process, -1 for all GPUs, a specific GPU number (5), a range (eg: 0-2) or a comma separated list (eg: 0,2,4)-->\n+ <sysds.gpu.availableGPUs>-1</sysds.gpu.availableGPUs>\n+\n+ <!-- whether to synchronize GPUs after every GPU instruction -->\n+ <sysds.gpu.sync.postProcess>false</sysds.gpu.sync.postProcess>\n+\n+ <!-- whether to perform eager CUDA free on rmvar instruction -->\n+ <sysds.gpu.eager.cudaFree>false</sysds.gpu.eager.cudaFree>\n+\n+ <!-- Developer flag used to debug GPU memory leaks. 
This has huge performance overhead and should be only turned on for debugging purposes. -->\n+ <sysds.gpu.print.memoryInfo>false</sysds.gpu.print.memoryInfo>\n+\n+ <!-- the floating point precision. supported values are double, single -->\n+ <sysds.floating.point.precision>double</sysds.floating.point.precision>\n+\n+ <!-- the eviction policy for the GPU bufferpool. Supported values are lru, mru, lfu, min_evict, align_memory -->\n+ <sysds.gpu.eviction.policy>min_evict</sysds.gpu.eviction.policy>\n+\n+ <!-- maximum wrap length for instruction and miscellaneous timer column of statistics -->\n+ <sysds.stats.maxWrapLength>30</sysds.stats.maxWrapLength>\n+\n+ <!-- Advanced optimization: fraction of driver memory to use for GPU shadow buffer. This optimization is ignored for double precision.\n+ By default, it is disabled (hence set to 0.0). If you intend to train network larger than GPU memory size, consider using single precision and setting this to 0.1 -->\n+ <sysds.gpu.eviction.shadow.bufferSize>0.0</sysds.gpu.eviction.shadow.bufferSize>\n+\n+ <!-- Fraction of available GPU memory to use. This is similar to TensorFlow's per_process_gpu_memory_fraction configuration property. (default: 0.9) -->\n+ <sysds.gpu.memory.util.factor>0.9</sysds.gpu.memory.util.factor>\n+\n+ <!-- Allocator to use to allocate GPU device memory. Supported values are cuda, unified_memory (default: cuda) -->\n+ <sysds.gpu.memory.allocator>cuda</sysds.gpu.memory.allocator>\n+\n+ <!-- enables disk spilling for lineage cache -->\n+ <sysds.lineage.cachespill>false</sysds.lineage.cachespill>\n+\n+ <!-- enables compiler assisted partial rewrites (e.g. Append-TSMM) -->\n+ <sysds.lineage.compilerassisted>true</sysds.lineage.compilerassisted>\n+</root>\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2813] Add lineage specific flags to SystemDS-config
This patch adds two lineage specific configurations, enable/disable
cache spilling and enable/disable compiler assisted dynamic
rewrites. Both are true by default. These flags help in
automating microbenchmarks. |
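The record above wires the new lineage flags through DMLConfig: LineageCacheConfig.isSetSpill() now reads sysds.lineage.cachespill from the configuration instead of a hard-coded static, and the eviction test pins a custom template (SystemDS-config-eviction.xml) that turns spilling off. A minimal sketch of that consumption pattern follows; the class name is illustrative only, and it assumes the SystemDS classes referenced in the diff are available on the classpath.

```java
import org.apache.sysds.conf.ConfigurationManager;
import org.apache.sysds.conf.DMLConfig;

// Illustrative only: mirrors the isSetSpill() change shown in the diff above.
public class LineageSpillFlagExample {
    public static void main(String[] args) {
        // sysds.lineage.cachespill defaults to true; the eviction test's custom
        // config template sets it to false to force in-memory-only lineage caching.
        DMLConfig conf = ConfigurationManager.getDMLConfig();
        boolean spill = conf.getBooleanValue(DMLConfig.LINEAGECACHESPILL);
        System.out.println("lineage cache spilling enabled: " + spill);
    }
}
```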
49,697 | 30.01.2021 23:23:19 | -3,600 | 65d6ad393acea7f309c1cff4e768cec0b5998d15 | Federated ALS-CG test, extended federated binary ops
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryMatrixMatrixFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryMatrixMatrixFEDInstruction.java",
"diff": "@@ -23,6 +23,7 @@ import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.matrix.operators.BinaryOperator;\n@@ -63,7 +64,7 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\n}\nelse {\n//matrix-matrix binary operations -> lhs fed input -> fed output\n- if(mo2.getNumRows() > 1 && mo2.getNumColumns() == 1 ) { //MV row vector\n+ if(mo2.getNumRows() > 1 && mo2.getNumColumns() == 1 ) { //MV col vector\nFederatedRequest[] fr1 = mo1.getFedMapping().broadcastSliced(mo2, false);\nfr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\nnew long[]{mo1.getFedMapping().getID(), fr1[0].getID()});\n@@ -71,7 +72,7 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\n//execute federated instruction and cleanup intermediates\nmo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n}\n- else { //MM or MV col vector\n+ else if(mo2.getNumRows() == 1 && mo2.getNumColumns() > 1) { //MV row vector\nFederatedRequest fr1 = mo1.getFedMapping().broadcast(mo2);\nfr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\nnew long[]{mo1.getFedMapping().getID(), fr1.getID()});\n@@ -79,6 +80,24 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\n//execute federated instruction and cleanup intermediates\nmo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n}\n+ else { //MM\n+ if(mo1.isFederated(FType.ROW)) {\n+ FederatedRequest[] fr1 = mo1.getFedMapping().broadcastSliced(mo2, false);\n+ fr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\n+ new long[]{mo1.getFedMapping().getID(), fr1[0].getID()});\n+ FederatedRequest fr3 = mo1.getFedMapping().cleanup(getTID(), fr1[0].getID());\n+ //execute federated instruction and cleanup intermediates\n+ mo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n+ }\n+ else {\n+ FederatedRequest fr1 = mo1.getFedMapping().broadcast(mo2);\n+ fr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\n+ new long[]{mo1.getFedMapping().getID(), fr1.getID()});\n+ FederatedRequest fr3 = mo1.getFedMapping().cleanup(getTID(), fr1.getID());\n+ //execute federated instruction and cleanup intermediates\n+ mo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n+ }\n+ }\n}\n//derive new fed mapping for output\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -22,6 +22,7 @@ package org.apache.sysds.runtime.instructions.fed;\nimport org.apache.sysds.runtime.controlprogram.caching.CacheableData;\nimport org.apache.sysds.runtime.controlprogram.caching.FrameObject;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n+import org.apache.sysds.runtime.controlprogram.caching.TensorObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\nimport org.apache.sysds.runtime.instructions.Instruction;\n@@ -43,7 +44,11 @@ import org.apache.sysds.runtime.instructions.cp.VariableCPInstruction.VariableOp\nimport org.apache.sysds.runtime.instructions.spark.AggregateUnarySPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.AppendGAlignedSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.AppendGSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.BinaryMatrixMatrixSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.BinaryMatrixScalarSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.BinarySPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.BinaryTensorTensorBroadcastSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.BinaryTensorTensorSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.CentralMomentSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.MapmmSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.QuantilePickSPInstruction;\n@@ -254,6 +259,17 @@ public class FEDInstructionUtils {\nfedinst = AppendFEDInstruction.parseInstruction(instruction.getInstructionString());\n}\n}\n+ else if (inst instanceof BinaryMatrixScalarSPInstruction\n+ || inst instanceof BinaryMatrixMatrixSPInstruction\n+ || inst instanceof BinaryTensorTensorSPInstruction\n+ || inst instanceof BinaryTensorTensorBroadcastSPInstruction) {\n+ BinarySPInstruction instruction = (BinarySPInstruction) inst;\n+ Data data = ec.getVariable(instruction.input1);\n+ if((data instanceof MatrixObject && ((MatrixObject)data).isFederated())\n+ || (data instanceof TensorObject && ((TensorObject)data).isFederated())) {\n+ fedinst = BinaryFEDInstruction.parseInstruction(inst.getInstructionString());\n+ }\n+ }\n}\nelse if (inst instanceof WriteSPInstruction) {\nWriteSPInstruction instruction = (WriteSPInstruction) inst;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedAlsCGTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.algorithms;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.HashMap;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedAlsCGTest extends AutomatedTestBase\n+{\n+ private final static String TEST_NAME = \"FederatedAlsCGTest\";\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederatedAlsCGTest.class.getSimpleName() + \"/\";\n+\n+ private final static String OUTPUT_NAME = \"Z\";\n+ private final static double TOLERANCE = 0.01;\n+ private final static int blocksize = 1024;\n+\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+ @Parameterized.Parameter(2)\n+ public int rank;\n+ @Parameterized.Parameter(3)\n+ public String regression;\n+ @Parameterized.Parameter(4)\n+ public double lambda;\n+ @Parameterized.Parameter(5)\n+ public int max_iter;\n+ @Parameterized.Parameter(6)\n+ public double threshold;\n+ @Parameterized.Parameter(7)\n+ public double sparsity;\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[]{OUTPUT_NAME}));\n+ }\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ // rows must be even\n+ return Arrays.asList(new Object[][] {\n+ // {rows, cols, rank, regression, lambda, max_iter, threshold, sparsity}\n+ {30, 15, 10, \"L2\", 0.0000001, 50, 0.000001, 1},\n+ {30, 15, 10, \"wL2\", 0.0000001, 50, 0.000001, 1}\n+ });\n+ }\n+\n+ @BeforeClass\n+ public static void init() {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+\n+ @Test\n+ public void federatedAlsCGSingleNode() {\n+ federatedAlsCG(TEST_NAME, ExecMode.SINGLE_NODE);\n+ }\n+\n+// @Test\n+// public void federatedAlsCGSpark() {\n+// federatedAlsCG(TEST_NAME, ExecMode.SPARK);\n+// }\n+\n+// -----------------------------------------------------------------------------\n+\n+ public void federatedAlsCG(String testname, ExecMode execMode)\n+ {\n+ // store the previous platform config to restore it 
after the test\n+ ExecMode platform_old = setExecMode(execMode);\n+\n+ getAndLoadTestConfiguration(testname);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ int fed_rows = rows / 2;\n+ int fed_cols = cols;\n+\n+ double[][] X1 = getRandomMatrix(fed_rows, fed_cols, 1, 2, sparsity, 13);\n+ double[][] X2 = getRandomMatrix(fed_rows, fed_cols, 1, 2, sparsity, 2);\n+\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(\n+ fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(\n+ fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread thread1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread thread2 = startLocalFedWorkerThread(port2);\n+\n+ getAndLoadTestConfiguration(testname);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + testname + \"Reference.dml\";\n+ programArgs = new String[] {\"-stats\", \"-nvargs\",\n+ \"in_X1=\" + input(\"X1\"), \"in_X2=\" + input(\"X2\"), \"in_rank=\" + Integer.toString(rank),\n+ \"in_reg=\" + regression, \"in_lambda=\" + Double.toString(lambda),\n+ \"in_maxi=\" + Integer.toString(max_iter), \"in_thr=\" + Double.toString(threshold),\n+ \"out_Z=\" + expected(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // Run actual dml script with federated matrix\n+ fullDMLScriptName = HOME + testname + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_rank=\" + Integer.toString(rank),\n+ \"in_reg=\" + regression,\n+ \"in_lambda=\" + Double.toString(lambda),\n+ \"in_maxi=\" + Integer.toString(max_iter),\n+ \"in_thr=\" + Double.toString(threshold),\n+ \"rows=\" + fed_rows, \"cols=\" + fed_cols,\n+ \"out_Z=\" + output(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // compare the results via files\n+ HashMap<CellIndex, Double> refResults = readDMLMatrixFromExpectedDir(OUTPUT_NAME);\n+ HashMap<CellIndex, Double> fedResults = readDMLMatrixFromOutputDir(OUTPUT_NAME);\n+ TestUtils.compareMatrices(fedResults, refResults, TOLERANCE, \"Fed\", \"Ref\");\n+\n+ TestUtils.shutdownThreads(thread1, thread2);\n+\n+ // check for federated operations\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_!=\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_fedinit\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_wdivmm\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_wsloss\"));\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+\n+ resetExecMode(platform_old);\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedLogicalTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.HashMap;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedLogicalTest extends AutomatedTestBase\n+{\n+ private final static String SCALAR_TEST_NAME = \"FederatedLogicalMatrixScalarTest\";\n+ private final static String MATRIX_TEST_NAME = \"FederatedLogicalMatrixMatrixTest\";\n+ private final static String TEST_DIR = \"functions/federated/binary/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederatedLogicalTest.class.getSimpleName() + \"/\";\n+\n+ private final static String OUTPUT_NAME = \"Z\";\n+ private final static double TOLERANCE = 0;\n+ private final static int blocksize = 1024;\n+\n+ public enum Type{\n+ GREATER,\n+ LESS,\n+ EQUALS,\n+ NOT_EQUALS,\n+ GREATER_EQUALS,\n+ LESS_EQUALS\n+ }\n+\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+ @Parameterized.Parameter(2)\n+ public double sparsity;\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(SCALAR_TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, SCALAR_TEST_NAME, new String[]{OUTPUT_NAME}));\n+ addTestConfiguration(MATRIX_TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, MATRIX_TEST_NAME, new String[]{OUTPUT_NAME}));\n+ }\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ // rows must be even\n+ return Arrays.asList(new Object[][] {\n+ // {rows, cols, sparsity}\n+ {100, 75, 0.01},\n+ {100, 75, 0.9},\n+ {2, 75, 0.01},\n+ {2, 75, 0.9}\n+ });\n+ }\n+\n+ @BeforeClass\n+ public static void init() {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+\n+ //---------------------------MATRIX SCALAR--------------------------\n+ @Test\n+ public void federatedLogicalScalarGreaterSingleNode() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.GREATER, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarGreaterSpark() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.GREATER, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void 
federatedLogicalScalarLessSingleNode() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarLessSpark() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarEqualsSingleNode() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarEqualsSpark() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.EQUALS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarNotEqualsSingleNode() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.NOT_EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarNotEqualsSpark() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.NOT_EQUALS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarGreaterEqualsSingleNode() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.GREATER_EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarGreaterEqualsSpark() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.GREATER_EQUALS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarLessEqualsSingleNode() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS_EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalScalarLessEqualsSpark() {\n+ federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS_EQUALS, ExecMode.SPARK);\n+ }\n+\n+ //---------------------------MATRIX MATRIX--------------------------\n+ @Test\n+ public void federatedLogicalMatrixGreaterSingleNode() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.GREATER, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixGreaterSpark() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.GREATER, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixLessSingleNode() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixLessSpark() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixEqualsSingleNode() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixEqualsSpark() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.EQUALS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixNotEqualsSingleNode() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.NOT_EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixNotEqualsSpark() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.NOT_EQUALS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixGreaterEqualsSingleNode() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.GREATER_EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixGreaterEqualsSpark() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.GREATER_EQUALS, ExecMode.SPARK);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixLessEqualsSingleNode() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS_EQUALS, ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedLogicalMatrixLessEqualsSpark() {\n+ federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS_EQUALS, ExecMode.SPARK);\n+ }\n+\n+// -----------------------------------------------------------------------------\n+\n+ public void federatedLogicalTest(String testname, Type 
op_type, ExecMode execMode)\n+ {\n+ // store the previous platform config to restore it after the test\n+ ExecMode platform_old = setExecMode(execMode);\n+\n+ getAndLoadTestConfiguration(testname);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ int fed_rows = rows / 2;\n+ int fed_cols = cols;\n+\n+ // generate dataset\n+ // matrix handled by two federated workers\n+ double[][] X1 = getRandomMatrix(fed_rows, fed_cols, 1, 2, 1, 13);\n+ double[][] X2 = getRandomMatrix(fed_rows, fed_cols, 1, 2, 1, 2);\n+\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+\n+ boolean is_matrix_test = testname.equals(MATRIX_TEST_NAME);\n+\n+ double[][] Y_mat = null;\n+ double Y_scal = 0;\n+ if(is_matrix_test) {\n+ Y_mat = getRandomMatrix(rows, cols, 0, 1, sparsity, 5040);\n+ writeInputMatrixWithMTD(\"Y\", Y_mat, true);\n+ }\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread thread1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread thread2 = startLocalFedWorkerThread(port2);\n+\n+ getAndLoadTestConfiguration(testname);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + testname + \"Reference.dml\";\n+ programArgs = new String[] {\"-nvargs\", \"in_X1=\" + input(\"X1\"), \"in_X2=\" + input(\"X2\"),\n+ \"in_Y=\" + (is_matrix_test ? input(\"Y\") : Double.toString(Y_scal)),\n+ \"in_op_type=\" + Integer.toString(op_type.ordinal()),\n+ \"out_Z=\" + expected(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // Run actual dml script with federated matrix\n+ fullDMLScriptName = HOME + testname + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")), \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_Y=\" + (is_matrix_test ? input(\"Y\") : Double.toString(Y_scal)),\n+ \"in_op_type=\" + Integer.toString(op_type.ordinal()),\n+ \"rows=\" + fed_rows, \"cols=\" + fed_cols, \"out_Z=\" + output(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // compare the results via files\n+ HashMap<CellIndex, Double> refResults = readDMLMatrixFromExpectedDir(OUTPUT_NAME);\n+ HashMap<CellIndex, Double> fedResults = readDMLMatrixFromOutputDir(OUTPUT_NAME);\n+ TestUtils.compareMatrices(fedResults, refResults, TOLERANCE, \"Fed\", \"Ref\");\n+\n+ TestUtils.shutdownThreads(thread1, thread2);\n+\n+ // check for federated operations\n+ switch(op_type)\n+ {\n+ case GREATER:\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_>\"));\n+ break;\n+ case LESS:\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_<\"));\n+ break;\n+ case EQUALS:\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_==\"));\n+ break;\n+ case NOT_EQUALS:\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_!=\"));\n+ break;\n+ case GREATER_EQUALS:\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_>=\"));\n+ break;\n+ case LESS_EQUALS:\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_<=\"));\n+ break;\n+ }\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+\n+ resetExecMode(platform_old);\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedAlsCGTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($in_X1, $in_X2),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)));\n+\n+rank = $in_rank;\n+reg = $in_reg;\n+lambda = $in_lambda;\n+maxi = $in_maxi;\n+thr = $in_thr;\n+\n+[U, V] = alsCG(X = X, rank = rank, reg = reg, lambda = lambda, maxi = maxi, check = TRUE, thr = thr);\n+\n+Z = U %*% V;\n+\n+write(Z, $out_Z);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedAlsCGTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($in_X1), read($in_X2));\n+\n+rank = $in_rank;\n+reg = $in_reg;\n+lambda = $in_lambda;\n+maxi = $in_maxi;\n+thr = $in_thr;\n+\n+[U, V] = alsCG(X = X, rank = rank, reg = reg, lambda = lambda, maxi = maxi, check = TRUE, thr = thr);\n+\n+Z = U %*% V;\n+\n+write(Z, $out_Z);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixMatrixTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($in_X1, $in_X2),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)));\n+\n+Y = read($in_Y);\n+op_type = $in_op_type;\n+\n+if(op_type == 0)\n+ Z = (X > Y)\n+else if(op_type == 1)\n+ Z = (X < Y)\n+else if(op_type == 2)\n+ Z = (X == Y)\n+else if(op_type == 3)\n+ Z = (X != Y)\n+else if(op_type == 4)\n+ Z = (X >= Y)\n+else if(op_type == 5)\n+ Z = (X <= Y)\n+\n+write(Z, $out_Z);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixMatrixTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($in_X1), read($in_X2));\n+\n+Y = read($in_Y);\n+op_type = $in_op_type;\n+\n+if(op_type == 0)\n+ Z = (X > Y)\n+else if(op_type == 1)\n+ Z = (X < Y)\n+else if(op_type == 2)\n+ Z = (X == Y)\n+else if(op_type == 3)\n+ Z = (X != Y)\n+else if(op_type == 4)\n+ Z = (X >= Y)\n+else if(op_type == 5)\n+ Z = (X <= Y)\n+\n+write(Z, $out_Z);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixScalarTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($in_X1, $in_X2),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)));\n+\n+y = $in_Y;\n+op_type = $in_op_type;\n+\n+if(op_type == 0)\n+ Z = (X > y)\n+else if(op_type == 1)\n+ Z = (X < y)\n+else if(op_type == 2)\n+ Z = (X == y)\n+else if(op_type == 3)\n+ Z = (X != y)\n+else if(op_type == 4)\n+ Z = (X >= y)\n+else if(op_type == 5)\n+ Z = (X <= y)\n+\n+write(Z, $out_Z);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixScalarTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($in_X1), read($in_X2));\n+\n+y = $in_Y;\n+op_type = $in_op_type;\n+\n+if(op_type == 0)\n+ Z = (X > y)\n+else if(op_type == 1)\n+ Z = (X < y)\n+else if(op_type == 2)\n+ Z = (X == y)\n+else if(op_type == 3)\n+ Z = (X != y)\n+else if(op_type == 4)\n+ Z = (X >= y)\n+else if(op_type == 5)\n+ Z = (X <= y)\n+\n+write(Z, $out_Z);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2747] Federated ALS-CG test, extended federated binary ops
Closes #1170. |
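The extended BinaryMatrixMatrixFEDInstruction in the record above distinguishes three right-hand-side shapes when the left input is federated: column vectors are broadcast sliced, row vectors are broadcast whole, and full matrices are broadcast sliced only when the left input is row-partitioned (FType.ROW). The standalone sketch below restates just that dispatch decision; class and method names are invented for illustration, and the real instruction additionally issues the federated requests and cleanups shown in the diff.

```java
// Dependency-free sketch of the broadcast choice added in the diff above.
public class FedBroadcastChoice {
    enum Choice { BROADCAST_SLICED, BROADCAST_FULL }

    // rows2/cols2 describe the non-federated right-hand side; lhsRowFederated
    // corresponds to mo1.isFederated(FType.ROW) in the diff.
    static Choice choose(long rows2, long cols2, boolean lhsRowFederated) {
        if (rows2 > 1 && cols2 == 1)        // matrix-vector, column vector
            return Choice.BROADCAST_SLICED;
        else if (rows2 == 1 && cols2 > 1)   // matrix-vector, row vector
            return Choice.BROADCAST_FULL;
        else                                // matrix-matrix
            return lhsRowFederated ? Choice.BROADCAST_SLICED : Choice.BROADCAST_FULL;
    }

    public static void main(String[] args) {
        System.out.println(choose(10, 1, true));   // BROADCAST_SLICED
        System.out.println(choose(1, 10, true));   // BROADCAST_FULL
        System.out.println(choose(10, 10, false)); // BROADCAST_FULL
    }
}
```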
49,700 | 31.01.2021 00:28:02 | -3,600 | eeffef63cd8e2a5563c31f067cfe21e73a08e32d | Added conv1d layer to nn library (via conv2d)
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "scripts/nn/layers/conv1d.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+source(\"scripts/nn/util.dml\") as util\n+\n+forward = function(matrix[double] input, matrix[double] filter, int pad, int stride,\n+ int numInput, int numChannels, int inputWidth, int numFilters, int filterSize)\n+ return (matrix[double] out)\n+{\n+ /*\n+ * Computes the forward pass for a 1D spatial convolutional layer\n+ * by reshaping the input to fit conv2d.\n+ *\n+ * Inputs:\n+ * - input: Inputs, of shape (N, C*W).\n+ * - filter: Weights, of shape (F, C*W).\n+ * - pad: Padding for left and right sides of input elements.\n+ * - stride: Stride for moving filter.\n+ * - numInput: Number of input elements N\n+ * - numChannels: Number of input channels (dimensionality of input depth).\n+ * - inputWidth: Input width.\n+ * - numFilters: Number of filters F\n+ * - filterSize: Filter width.\n+ *\n+ * Outputs:\n+ * - out: Outputs, of shape (N, F*Wout).\n+ */\n+ out = conv2d(input, filter, padding=[0,pad], stride=[1, stride],\n+ input_shape=[numInput,numChannels,1,inputWidth], filter_shape=[numFilters,numChannels,1,filterSize])\n+}\n+\n+backward_data = function(matrix[double] filter, matrix[double] dout, int pad, int stride,\n+ int numInput, int numChannels, int inputWidth, int numFilters, int filterSize)\n+ return (matrix[double] out)\n+{\n+ /*\n+ * Computes the backward pass regarding the input data for a 1D spatial convolutional layer\n+ * by reshaping the input to fit conv2d backward data pass.\n+ *\n+ * Inputs:\n+ * - filter: Weights, of shape (F, C*W).\n+ * - dout: Output of the forward pass\n+ * - pad: Padding for left and right sides of input elements.\n+ * - stride: Stride for moving filter.\n+ * - numInput: Number of input elements N\n+ * - numChannels: Number of input channels (dimensionality of input depth).\n+ * - inputWidth: Input width.\n+ * - numFilters: Number of filters F\n+ * - filterSize: Filter width.\n+ *\n+ * Outputs:\n+ * - out: gradients based on the input data of the convolution.\n+ */\n+ out = conv2d_backward_data(filter, dout, stride=[1,stride], padding=[0,pad],\n+ input_shape=[numInput,numChannels,1,inputWidth], filter_shape=[numFilters,numChannels,1,filterSize])\n+}\n+\n+backward_filter = function(matrix[double] input, matrix[double] dout, int pad, int stride,\n+ int numInput, int numChannels, int inputWidth, int numFilters, int filterSize)\n+ return (matrix[double] out)\n+{\n+ /*\n+ * Computes the backward pass regarding the filter for a 1D spatial convolutional layer\n+ * by reshaping the input to fit conv2d backward data pass.\n+ *\n+ * Inputs:\n+ * - input: Inputs, of shape (N, C*W).\n+ * - dout: Output of the 
forward pass\n+ * - pad: Padding for left and right sides of input elements.\n+ * - stride: Stride for moving filter.\n+ * - numInput: Number of input elements N\n+ * - numChannels: Number of input channels (dimensionality of input depth).\n+ * - inputWidth: Input width.\n+ * - numFilters: Number of filters F\n+ * - filterSize: Filter width.\n+ *\n+ * Outputs:\n+ * - out: gradients bsaed on the filter of the convolution.\n+ */\n+ out = conv2d_backward_filter(input, dout, stride=[1,stride], padding=[0,pad],\n+ input_shape=[numInput,numChannels,1,inputWidth], filter_shape=[numFilters,numChannels,1,filterSize])\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/dnn/Conv1DTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.apache.sysds.test.functions.dnn;\n+\n+import java.util.HashMap;\n+import java.util.stream.IntStream;\n+\n+import org.junit.Test;\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.lops.LopProperties.ExecType;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+\n+public class Conv1DTest extends AutomatedTestBase\n+{\n+ private final static String TEST_NAME_1 = \"Conv1DTest\";\n+ private final static String TEST_NAME_2 = \"Conv1DBackwardDataTest\";\n+ private final static String TEST_NAME_3 = \"Conv1DBackwardFilterTest\";\n+ private final static String TEST_DIR = \"functions/tensor/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + Conv1DTest.class.getSimpleName() + \"/\";\n+ private final static double epsilon=0.0000000001;\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME_1, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME_1, new String[] {\"output\"}));\n+ addTestConfiguration(TEST_NAME_2, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME_2, new String[] {\"output\"}));\n+ addTestConfiguration(TEST_NAME_3, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME_3, new String[] {\"output\"}));\n+ }\n+\n+ @Test\n+ public void testSimpleConv1DDenseSingleBatchSingleChannelSingleFilter(){\n+ int numImg = 1; int imgSize = 4; int numChannels = 1; int numFilters = 1; int filterSize = 2; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ expected.put(new CellIndex(1,1), 3.0);\n+ expected.put(new CellIndex(1,2), 5.0);\n+ expected.put(new CellIndex(1,3), 7.0);\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense1() {\n+ int numImg = 5; int imgSize = 3; int numChannels = 3; int numFilters = 3; int filterSize = 2; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpected(expected, 1, 6, 21.0, 39.0);\n+ fillExpected(expected, 2, 6, 75.0, 93.0);\n+ fillExpected(expected, 3, 6, 129.0, 147.0);\n+ fillExpected(expected, 4, 6, 183.0, 201.0);\n+ fillExpected(expected, 5, 6, 237.0, 255.0);\n+\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense2() {\n+ int numImg = 1; int imgSize = 10; int numChannels = 4; int numFilters = 3; int filterSize = 4; int stride = 2; 
int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeated(expected, 3, new double[]{136.,264.,392.,520.},1);\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense3() {\n+ int numImg = 1; int imgSize = 10; int numChannels = 4; int numFilters = 3; int filterSize = 4; int stride = 2; int pad = 1;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeated(expected, 3, new double[]{78.,200.,328.,456.,414.},1);\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense4() {\n+ int numImg = 3; int imgSize = 10; int numChannels = 1; int numFilters = 3; int filterSize = 2; int stride = 2; int pad = 1;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeated(expected,3,new double[]{1.,5.,9.,13.,17.,10.},1);\n+ fillExpectedRepeated(expected,3,new double[]{11.,25.,29.,33.,37.,20.},2);\n+ fillExpectedRepeated(expected,3,new double[]{21.,45.,49.,53.,57.,30.},3);\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense5() {\n+ int numImg = 3; int imgSize = 8; int numChannels = 2; int numFilters = 3; int filterSize = 3; int stride = 1; int pad = 2;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeated(expected,3,new double[]{3.,10.,21.,33.,45.,57.,69.,81.,58.,31.},1);\n+ fillExpectedRepeated(expected,3,new double[]{35.,74.,117.,129.,141.,153.,165.,177.,122.,63.},2);\n+ fillExpectedRepeated(expected,3,new double[]{67.,138.,213.,225.,237.,249.,261.,273.,186.,95.},3);\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense6() {\n+ int numImg = 1; int imgSize = 10; int numChannels = 4; int numFilters = 3; int filterSize = 4; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeated(expected,3,new double[]{136.,200.,264.,328.,392.,456.,520.},1);\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense7() {\n+ int numImg = 3; int imgSize = 64; int numChannels = 1; int numFilters = 3; int filterSize = 2; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ double[] firstExpected = IntStream.iterate(3,n -> n+2).limit(63).mapToDouble(i->(double)i).toArray();\n+ double[] secondExpected = IntStream.iterate(131,n -> n+2).limit(63).mapToDouble(i->(double)i).toArray();\n+ double[] thirdExpected = IntStream.iterate(259,n -> n+2).limit(63).mapToDouble(i->(double)i).toArray();\n+ fillExpectedRepeated(expected,3,firstExpected,1);\n+ fillExpectedRepeated(expected,3,secondExpected,2);\n+ fillExpectedRepeated(expected,3,thirdExpected,3);\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DDense1SP()\n+ {\n+ int numImg = 5; int imgSize = 3; int numChannels = 3; int numFilters = 6; int filterSize = 2; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpected(expected, 1, 12, 21.0, 39.0);\n+ fillExpected(expected, 2, 12, 
75.0, 93.0);\n+ fillExpected(expected, 3, 12, 129.0, 147.0);\n+ fillExpected(expected, 4, 12, 183.0, 201.0);\n+ fillExpected(expected, 5, 12, 237.0, 255.0);\n+ runConv1DTest(ExecType.SPARK, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_1);\n+ }\n+\n+ @Test\n+ public void testConv1DBackwardDataDense1() {\n+ int numImg = 5; int imgSize = 3; int numChannels = 3; int numFilters = 3; int filterSize = 1; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeated(expected,3,new double[]{6.,15.,24.},1);\n+ fillExpectedRepeated(expected,3,new double[]{33.,42.,51.},2);\n+ fillExpectedRepeated(expected,3,new double[]{60.,69.,78.},3);\n+ fillExpectedRepeated(expected,3,new double[]{87.,96.,105.},4);\n+ fillExpectedRepeated(expected,3,new double[]{114.,123.,132.},5);\n+\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_2);\n+ }\n+\n+ @Test\n+ public void testConv1DBackwardFilterDense1() {\n+ int numImg = 2; int imgSize = 3; int numChannels = 2; int numFilters = 3; int filterSize = 1; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeatedCol(expected,3,new double[]{608.,686.});\n+\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_3);\n+ }\n+\n+ @Test\n+ public void testConv1DBackwardFilterDense2() {\n+ int numImg = 2; int imgSize = 3; int numChannels = 2; int numFilters = 3; int filterSize = 2; int stride = 1; int pad = 0;\n+ HashMap<CellIndex, Double> expected = new HashMap<>();\n+ fillExpectedRepeatedCol(expected,3,new double[]{680.,888.,784.,992.});\n+\n+ runConv1DTest(ExecType.CP, imgSize, numImg, numChannels, numFilters, filterSize, stride, pad, expected, TEST_NAME_3);\n+ }\n+\n+ private static void fillExpected(HashMap<CellIndex, Double> expected,\n+ int rowNum, int rowLength, double value1, double value2)\n+ {\n+ for ( int m = 1; m <= rowLength; m+=2){\n+ expected.put(new CellIndex(rowNum,m), value1);\n+ expected.put(new CellIndex(rowNum,m+1), value2);\n+ }\n+ }\n+\n+ private static void fillExpectedRepeated(HashMap<CellIndex, Double> expected,\n+ int repetitionNum, double[] values, int row)\n+ {\n+ int colPointer = 1;\n+ for (int i = 1; i <= repetitionNum;i++){\n+ for(double value : values) {\n+ expected.put(new CellIndex(row, colPointer), value);\n+ colPointer++;\n+ }\n+ }\n+ }\n+\n+ private static void fillExpectedRepeatedCol(HashMap<CellIndex, Double> expected,int repetitionRows, double[] values){\n+ for ( int i = 1; i <= repetitionRows; i++){\n+ for ( int j = 1; j <= values.length; j++ ){\n+ expected.put(new CellIndex(i,j), values[j-1]);\n+ }\n+ }\n+ }\n+\n+ public void runConv1DTest( ExecType et, int imgSize, int numImg, int numChannels, int numFilters,\n+ int filterSize, int stride, int pad, HashMap<CellIndex, Double> expected, String TEST_NAME)\n+ {\n+ ExecMode platformOld = rtplatform;\n+ switch( et ){\n+ case SPARK: rtplatform = ExecMode.SPARK; break;\n+ default: rtplatform = ExecMode.HYBRID; break;\n+ }\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ if( rtplatform == ExecMode.SPARK )\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+\n+ try\n+ {\n+ getAndLoadTestConfiguration(TEST_NAME);\n+\n+ String SCRIPT_HOME = SCRIPT_DIR + TEST_DIR + TEST_NAME;\n+ fullDMLScriptName = SCRIPT_HOME + \".dml\";\n+\n+ programArgs = new String[] {\n+ \"-nvargs\",\n+ \"imgSize=\" + imgSize,\n+ \"numImg=\" + 
numImg,\n+ \"numChannels=\" + numChannels,\n+ \"numFilters=\" + numFilters,\n+ \"filterSize=\" + filterSize,\n+ \"stride=\" + stride,\n+ \"pad=\" + pad,\n+ \"output=\" + output(\"output\")\n+ };\n+\n+ // Run DML\n+ runTest(true, false, null, -1);\n+\n+ HashMap<CellIndex, Double> dmlfile = readDMLMatrixFromOutputDir(\"output\");\n+ System.out.println(dmlfile.toString());\n+ if ( expected != null)\n+ TestUtils.compareMatrices(dmlfile, expected, epsilon, \"B-DML\", \"B-R\");\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/tensor/Conv1DBackwardDataTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+source(\"scripts/nn/layers/conv1d.dml\") as conv1d\n+\n+# Build input\n+rowBuild = seq(1,$numChannels*$imgSize,$numChannels)\n+if ( $numChannels > 1 & $imgSize > 1 ){\n+ for ( i in 2:$numChannels ){\n+ rowBuild = rbind(rowBuild, seq(i,$numChannels*$imgSize,$numChannels))\n+ }\n+}\n+\n+colBuild = rowBuild\n+if ( $numImg > 1 ){\n+ for ( j in 2:$numImg ){\n+ rowBuild = seq((j-1)*$imgSize*$numChannels+1,j*$imgSize*$numChannels,$numChannels)\n+ if ( $numChannels > 1 ){\n+ for ( i in 2:$numChannels ){\n+ rowBuild = rbind(rowBuild, seq((j-1)*$imgSize*$numChannels+i,j*$numChannels*$imgSize,$numChannels))\n+ }\n+ }\n+ colBuild = cbind(colBuild, rowBuild)\n+ }\n+}\n+\n+# Set input variables\n+x = t(colBuild)\n+w=matrix(1,rows=$numFilters, cols=$numChannels*$filterSize)\n+\n+output = conv1d::backward_data(w, x, $pad, $stride, $numImg, $numChannels, $imgSize, $numFilters, $filterSize)\n+\n+#Write output\n+write(output, $output)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/tensor/Conv1DBackwardFilterTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+source(\"scripts/nn/layers/conv1d.dml\") as conv1d\n+\n+# Build input\n+rowBuild = seq(1,$numChannels*$imgSize,$numChannels)\n+if ( $numChannels > 1 & $imgSize > 1 ){\n+ for ( i in 2:$numChannels ){\n+ rowBuild = rbind(rowBuild, seq(i,$numChannels*$imgSize,$numChannels))\n+ }\n+}\n+\n+colBuild = rowBuild\n+if ( $numImg > 1 ){\n+ for ( j in 2:$numImg ){\n+ rowBuild = seq((j-1)*$imgSize*$numChannels+1,j*$imgSize*$numChannels,$numChannels)\n+ if ( $numChannels > 1 ){\n+ for ( i in 2:$numChannels ){\n+ rowBuild = rbind(rowBuild, seq((j-1)*$imgSize*$numChannels+i,j*$numChannels*$imgSize,$numChannels))\n+ }\n+ }\n+ colBuild = cbind(colBuild, rowBuild)\n+ }\n+}\n+\n+# Set input variables\n+x = t(colBuild)\n+w=matrix(1,rows=$numFilters, cols=$numChannels*$filterSize)\n+dout = conv1d::forward(x, w, $pad, $stride, $numImg, $numChannels, $imgSize, $numFilters, $filterSize)\n+\n+output = conv1d::backward_filter(x, dout, $pad, $stride, $numImg, $numChannels, $imgSize, $numFilters, $filterSize)\n+#Write output\n+write(output, $output)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/tensor/Conv1DTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+source(\"scripts/nn/layers/conv1d.dml\") as conv1d\n+\n+# Build input\n+rowBuild = seq(1,$numChannels*$imgSize,$numChannels)\n+if ( $numChannels > 1 & $imgSize > 1 ){\n+ for ( i in 2:$numChannels ){\n+ rowBuild = rbind(rowBuild, seq(i,$numChannels*$imgSize,$numChannels))\n+ }\n+}\n+\n+colBuild = rowBuild\n+if ( $numImg > 1 ){\n+ for ( j in 2:$numImg ){\n+ rowBuild = seq((j-1)*$imgSize*$numChannels+1,j*$imgSize*$numChannels,$numChannels)\n+ if ( $numChannels > 1 ){\n+ for ( i in 2:$numChannels ){\n+ rowBuild = rbind(rowBuild, seq((j-1)*$imgSize*$numChannels+i,j*$numChannels*$imgSize,$numChannels))\n+ }\n+ }\n+ colBuild = cbind(colBuild, rowBuild)\n+ }\n+}\n+\n+# Set input variables\n+x = t(colBuild)\n+w=matrix(1,rows=$numFilters, cols=$numChannels*$filterSize)\n+\n+# Call function\n+output = conv1d::forward(x, w, $pad, $stride, $numImg, $numChannels, $imgSize, $numFilters, $filterSize)\n+\n+#Write output\n+write(output, $output)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2812] Added conv1d layer to nn library (via conv2d)
Closes #1174. |
49,706 | 01.02.2021 11:58:55 | -3,600 | f7b5f8811ad0c264f6ee1c7ecb9d16f4315698ca | Python Stability
There have been some startup issues in the python API, where some tests
would not properly connect to the JVM.
This task addresses this by introducing a retry startup of the context
in case of failures. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"new_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"diff": "@@ -71,7 +71,7 @@ public class DMLConfig\npublic static final String COMPRESSED_LOSSY = \"sysds.compressed.lossy\";\npublic static final String COMPRESSED_VALID_COMPRESSIONS = \"sysds.compressed.valid.compressions\";\npublic static final String COMPRESSED_OVERLAPPING = \"sysds.compressed.overlapping\"; // true, false\n- public static final String COMPRESSED_SAMPLING_RATIO = \"sysds.compressed.sampling.ratio\"; // 0.1\n+ public static final String COMPRESSED_SAMPLING_RATIO = \"sysds.compressed.sampling.ratio\";\npublic static final String COMPRESSED_COCODE = \"sysds.compressed.cocode\"; // COST\npublic static final String COMPRESSED_TRANSPOSE = \"sysds.compressed.transpose\"; // true, false, auto.\npublic static final String NATIVE_BLAS = \"sysds.native.blas\";\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/context/systemds_context.py",
"new_path": "src/main/python/systemds/context/systemds_context.py",
"diff": "@@ -23,6 +23,7 @@ __all__ = [\"SystemDSContext\"]\nimport copy\nimport os\n+import socket\nimport time\nfrom glob import glob\nfrom queue import Empty, Queue\n@@ -34,10 +35,10 @@ from typing import Dict, Iterable, Sequence, Tuple, Union\nimport numpy as np\nfrom py4j.java_gateway import GatewayParameters, JavaGateway\nfrom py4j.protocol import Py4JNetworkError\n-from systemds.utils.consts import VALID_INPUT_TYPES\n-from systemds.utils.helpers import get_module_dir\nfrom systemds.operator import OperationNode\nfrom systemds.script_building import OutputType\n+from systemds.utils.consts import VALID_INPUT_TYPES\n+from systemds.utils.helpers import get_module_dir\nclass SystemDSContext(object):\n@@ -55,67 +56,12 @@ class SystemDSContext(object):\nStandard out and standard error form the JVM is also handled in this class, filling up Queues,\nthat can be read from to get the printed statements from the JVM.\n\"\"\"\n-\n- root = os.environ.get(\"SYSTEMDS_ROOT\")\n- if root == None:\n- # If there is no systemds install default to use the PIP packaged java files.\n- root = os.path.join(get_module_dir(), \"systemds-java\")\n-\n- # nt means its Windows\n- cp_separator = \";\" if os.name == \"nt\" else \":\"\n-\n- if os.environ.get(\"SYSTEMDS_ROOT\") != None:\n- lib_cp = os.path.join(root, \"target\", \"lib\", \"*\")\n- systemds_cp = os.path.join(root, \"target\", \"SystemDS.jar\")\n- classpath = cp_separator.join([lib_cp, systemds_cp])\n-\n- command = [\"java\", \"-cp\", classpath]\n- files = glob(os.path.join(root, \"conf\", \"log4j*.properties\"))\n- if len(files) > 1:\n- print(\n- \"WARNING: Multiple logging files found selecting: \" + files[0])\n- if len(files) == 0:\n- print(\"WARNING: No log4j file found at: \"\n- + os.path.join(root, \"conf\")\n- + \" therefore using default settings\")\n- else:\n- command.append(\"-Dlog4j.configuration=file:\" + files[0])\n- else:\n- lib_cp = os.path.join(root, \"lib\", \"*\")\n- command = [\"java\", \"-cp\", lib_cp]\n-\n- command.append(\"org.apache.sysds.api.PythonDMLScript\")\n-\n+ command = self.__build_startup_command()\n# TODO add an argument parser here\n-\n- # Find a random port, and hope that no other process\n- # steals it while we wait for the JVM to startup\nport = self.__get_open_port()\ncommand.append(str(port))\n- process = Popen(command, stdout=PIPE, stdin=PIPE, stderr=PIPE)\n- first_stdout = process.stdout.readline()\n-\n- if(not b\"GatewayServer Started\" in first_stdout):\n- stderr = process.stderr.readline().decode(\"utf-8\")\n- if(len(stderr) > 1):\n- raise Exception(\n- \"Exception in startup of GatewayServer: \" + stderr)\n- outputs = []\n- outputs.append(first_stdout.decode(\"utf-8\"))\n- max_tries = 10\n- for i in range(max_tries):\n- next_line = process.stdout.readline()\n- if(b\"GatewayServer Started\" in next_line):\n- print(\"WARNING: Stdout corrupted by prints: \" + str(outputs))\n- print(\"Startup success\")\n- break\n- else:\n- outputs.append(next_line)\n-\n- if (i == max_tries-1):\n- raise Exception(\"Error in startup of systemDS gateway process: \\n gateway StdOut: \" + str(\n- outputs) + \" \\n gateway StdErr\" + process.stderr.readline().decode(\"utf-8\"))\n+ process = self.__try_startup(command)\n# Handle Std out from the subprocess.\nself.__stdout = Queue()\n@@ -166,7 +112,79 @@ class SystemDSContext(object):\nprint(\"exception\")\nprint(e)\nself.close()\n- exit()\n+\n+\n+ def __try_startup(self, command, rep = 0):\n+ try:\n+ process = Popen(command, stdout=PIPE, stdin=PIPE, stderr=PIPE)\n+ 
self.__verify_startup(process)\n+ return process\n+ except Exception as e:\n+ if rep > 3:\n+ raise Exception(\"Failed to start SystemDS context with \" + rep + \" repeated tries\")\n+ else:\n+ ret += 1\n+ print(\"Failed to startup JVM process, retrying: \" + rep)\n+ sleep(rep) # Sleeping increasingly long time, maybe this helps.\n+ return self.__try_startup()\n+\n+ def __verify_startup(self, process):\n+ first_stdout = process.stdout.readline()\n+ if(not b\"GatewayServer Started\" in first_stdout):\n+ stderr = process.stderr.readline().decode(\"utf-8\")\n+ if(len(stderr) > 1):\n+ raise Exception(\n+ \"Exception in startup of GatewayServer: \" + stderr)\n+ outputs = []\n+ outputs.append(first_stdout.decode(\"utf-8\"))\n+ max_tries = 10\n+ for i in range(max_tries):\n+ next_line = process.stdout.readline()\n+ if(b\"GatewayServer Started\" in next_line):\n+ print(\"WARNING: Stdout corrupted by prints: \" + str(outputs))\n+ print(\"Startup success\")\n+ break\n+ else:\n+ outputs.append(next_line)\n+\n+ if (i == max_tries-1):\n+ raise Exception(\"Error in startup of systemDS gateway process: \\n gateway StdOut: \" + str(\n+ outputs) + \" \\n gateway StdErr\" + process.stderr.readline().decode(\"utf-8\"))\n+\n+ def __build_startup_command(self):\n+\n+ command = [\"java\", \"-cp\"]\n+ root = os.environ.get(\"SYSTEMDS_ROOT\")\n+ if root == None:\n+ # If there is no systemds install default to use the PIP packaged java files.\n+ root = os.path.join(get_module_dir(), \"systemds-java\")\n+\n+ # nt means its Windows\n+ cp_separator = \";\" if os.name == \"nt\" else \":\"\n+\n+ if os.environ.get(\"SYSTEMDS_ROOT\") != None:\n+ lib_cp = os.path.join(root, \"target\", \"lib\", \"*\")\n+ systemds_cp = os.path.join(root, \"target\", \"SystemDS.jar\")\n+ classpath = cp_separator.join([lib_cp, systemds_cp])\n+\n+ command.append(classpath)\n+ files = glob(os.path.join(root, \"conf\", \"log4j*.properties\"))\n+ if len(files) > 1:\n+ print(\n+ \"WARNING: Multiple logging files found selecting: \" + files[0])\n+ if len(files) == 0:\n+ print(\"WARNING: No log4j file found at: \"\n+ + os.path.join(root, \"conf\")\n+ + \" therefore using default settings\")\n+ else:\n+ command.append(\"-Dlog4j.configuration=file:\" + files[0])\n+ else:\n+ lib_cp = os.path.join(root, \"lib\", \"*\")\n+ command.append(lib_cp)\n+\n+ command.append(\"org.apache.sysds.api.PythonDMLScript\")\n+\n+ return command\ndef __enter__(self):\nreturn self\n@@ -191,11 +209,11 @@ class SystemDSContext(object):\nqueue.put(line.decode(\"utf-8\").strip())\ndef __get_open_port(self):\n- \"\"\"Get a random available port.\"\"\"\n- # TODO Verify that it is not taking some critical ports change to select a good port range.\n- # TODO If it tries to select a port already in use, find another.\n+ \"\"\"Get a random available port.\n+ and hope that no other process steals it while we wait for the JVM to startup\n+ \"\"\"\n# https://stackoverflow.com/questions/2838244/get-open-tcp-port-in-python\n- import socket\n+\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.bind((\"\", 0))\ns.listen(1)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/operator/operation_node.py",
"new_path": "src/main/python/systemds/operator/operation_node.py",
"diff": "@@ -95,29 +95,12 @@ class OperationNode(DAGNode):\nprint(self._script.dml_script)\nif lineage:\n- result_variables, self._lineage_trace = self._script.execute(\n- lineage)\n+ result_variables, self._lineage_trace = self._script.execute_with_lineage()\nelse:\n- result_variables = self._script.execute(lineage)\n+ result_variables = self._script.execute()\n+\n+ self._result_var = self.__parse_output_result_variables(result_variables)\n- if self.output_type == OutputType.DOUBLE:\n- self._result_var = result_variables.getDouble(\n- self._script.out_var_name[0])\n- elif self.output_type == OutputType.MATRIX:\n- self._result_var = matrix_block_to_numpy(self.sds_context.java_gateway.jvm,\n- result_variables.getMatrixBlock(self._script.out_var_name[0]))\n- elif self.output_type == OutputType.LIST:\n- self._result_var = []\n- for idx, v in enumerate(self._script.out_var_name):\n- if(self._output_types == None):\n- self._result_var.append(matrix_block_to_numpy(self.sds_context.java_gateway.jvm,\n- result_variables.getMatrixBlock(v)))\n- elif(self._output_types[idx] == OutputType.MATRIX):\n- self._result_var.append(matrix_block_to_numpy(self.sds_context.java_gateway.jvm,\n- result_variables.getMatrixBlock(v)))\n- else:\n- self._result_var.append(result_variables.getDouble(\n- self._script.out_var_name[idx]))\nif verbose:\nfor x in self.sds_context.get_stdout():\nprint(x)\n@@ -129,6 +112,31 @@ class OperationNode(DAGNode):\nelse:\nreturn self._result_var\n+ def __parse_output_result_variables(self, result_variables):\n+ if self.output_type == OutputType.DOUBLE:\n+ return self.__parse_output_result_double(result_variables, self._script.out_var_name[0])\n+ elif self.output_type == OutputType.MATRIX:\n+ return self.__parse_output_result_matrix(result_variables, self._script.out_var_name[0])\n+ elif self.output_type == OutputType.LIST:\n+ return self.__parse_output_result_list(result_variables)\n+\n+ def __parse_output_result_double(self, result_variables, var_name):\n+ return result_variables.getDouble(var_name)\n+\n+ def __parse_output_result_matrix(self, result_variables, var_name):\n+ return matrix_block_to_numpy(self.sds_context.java_gateway.jvm,\n+ result_variables.getMatrixBlock(var_name))\n+\n+ def __parse_output_result_list(self, result_variables):\n+ result_var = []\n+ for idx, v in enumerate(self._script.out_var_name):\n+ if(self._output_types == None or self._output_types[idx] == OutputType.MATRIX):\n+ result_var.append(self.__parse_output_result_matrix(result_variables,v))\n+ else:\n+ result_var.append(result_variables.getDouble(\n+ self._script.out_var_name[idx]))\n+ return result_var\n+\ndef get_lineage_trace(self) -> str:\n\"\"\"Get the lineage trace for this node.\n@@ -501,7 +509,8 @@ class OperationNode(DAGNode):\nother._check_matrix_op()\nif self.shape[1] != other.shape[1]:\n- raise ValueError(\"The input matrices to rbind does not have the same number of columns\")\n+ raise ValueError(\n+ \"The input matrices to rbind does not have the same number of columns\")\nreturn OperationNode(self.sds_context, 'rbind', [self, other], shape=(self.shape[0] + other.shape[0], self.shape[1]))\n@@ -516,6 +525,7 @@ class OperationNode(DAGNode):\nother._check_matrix_op()\nif self.shape[0] != other.shape[0]:\n- raise ValueError(\"The input matrices to cbind does not have the same number of columns\")\n+ raise ValueError(\n+ \"The input matrices to cbind does not have the same number of columns\")\nreturn OperationNode(self.sds_context, 'cbind', [self, other], shape=(self.shape[0], self.shape[1] + 
other.shape[1]))\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/script_building/script.py",
"new_path": "src/main/python/systemds/script_building/script.py",
"diff": "#\n# -------------------------------------------------------------\n-from typing import Any, Collection, KeysView, Tuple, Union, Optional, Dict, TYPE_CHECKING\n+from typing import Any, Collection, KeysView, Tuple, Union, Optional, Dict, TYPE_CHECKING, List\nfrom py4j.java_collections import JavaArray\nfrom py4j.java_gateway import JavaObject, JavaGateway\n@@ -44,7 +44,7 @@ class DMLScript:\ndml_script: str\ninputs: Dict[str, DAGNode]\nprepared_script: Optional[Any]\n- out_var_name: str\n+ out_var_name: List[str]\n_variable_counter: int\ndef __init__(self, context: 'SystemDSContext') -> None:\n@@ -70,7 +70,7 @@ class DMLScript:\n\"\"\"\nself.inputs[var_name] = input_var\n- def execute(self, lineage: bool = False) -> Union[JavaObject, Tuple[JavaObject, str]]:\n+ def execute(self) -> JavaObject:\n\"\"\"If not already created, create a preparedScript from our DMLCode, pass python local data to our prepared\nscript, then execute our script and return the resultVariables\n@@ -78,27 +78,29 @@ class DMLScript:\n\"\"\"\n# we could use the gateway directly, non defined functions will be automatically\n# sent to the entry_point, but this is safer\n- gateway = self.sds_context.java_gateway\n- entry_point = gateway.entry_point\n- if self.prepared_script is None:\n- input_names = self.inputs.keys()\n- connection = entry_point.getConnection()\n- self.prepared_script = connection.prepareScript(\n- self.dml_script,\n- _list_to_java_array(gateway, input_names),\n- _list_to_java_array(gateway, self.out_var_name))\n- for (name, input_node) in self.inputs.items():\n- input_node.pass_python_data_to_prepared_script(\n- self.sds_context, name, self.prepared_script)\n- if lineage:\n- connection.setLineage(True)\ntry:\n+ self.__prepare_script()\nret = self.prepared_script.executeScript()\n+ return ret\nexcept Exception as e:\nself.sds_context.exception_and_close(e)\n+ return None\n+\n+ def execute_with_lineage(self) -> Tuple[JavaObject, str]:\n+ \"\"\"If not already created, create a preparedScript from our DMLCode, pass python local data to our prepared\n+ script, then execute our script and return the resultVariables\n+\n+ :return: resultVariables of our execution and the string lineage trace\n+ \"\"\"\n+ # we could use the gateway directly, non defined functions will be automatically\n+ # sent to the entry_point, but this is safer\n+ try:\n+ connection = self.__prepare_script()\n+ connection.setLineage(True)\n+ ret = self.prepared_script.executeScript()\n+\n- if lineage:\nif len(self.out_var_name) == 1:\nreturn ret, self.prepared_script.getLineageTrace(self.out_var_name[0])\nelse:\n@@ -107,7 +109,25 @@ class DMLScript:\ntraces.append(self.prepared_script.getLineageTrace(output))\nreturn ret, traces\n- return ret\n+ except Exception as e:\n+ self.sds_context.exception_and_close(e)\n+ return None, None\n+\n+ def __prepare_script(self):\n+ gateway = self.sds_context.java_gateway\n+ entry_point = gateway.entry_point\n+ if self.prepared_script is None:\n+ input_names = self.inputs.keys()\n+ connection = entry_point.getConnection()\n+ self.prepared_script = connection.prepareScript(\n+ self.dml_script,\n+ _list_to_java_array(gateway, input_names),\n+ _list_to_java_array(gateway, self.out_var_name))\n+ for (name, input_node) in self.inputs.items():\n+ input_node.pass_python_data_to_prepared_script(\n+ self.sds_context, name, self.prepared_script)\n+ return connection\n+\ndef get_lineage(self) -> str:\ngateway = self.sds_context.java_gateway\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2821] Python Stability
There have been some startup issues in the python API, where some tests
would not properly connect to the JVM.
This task addresses this by introducing a retry startup of the context
in case of failures. |
49,722 | 16.12.2020 20:09:55 | -3,600 | 250c7345980f69f4b7918c252af5f2ab2213c1aa | Federated rowIndexMax and rowIndexMin
This commit adds the functions prod and cov for federated execution.
(also included tests).
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/PartialAggregate.java",
"new_path": "src/main/java/org/apache/sysds/lops/PartialAggregate.java",
"diff": "@@ -233,9 +233,16 @@ public class PartialAggregate extends Lop\nsb.append( OPERAND_DELIMITOR );\nif( getExecType() == ExecType.SPARK )\nsb.append( _aggtype );\n- else if( getExecType() == ExecType.CP )\n+ else if( getExecType() == ExecType.CP ) {\nsb.append(_numThreads);\n+ //number of outputs, valid for fed instruction\n+ if(getOpcode().equalsIgnoreCase(\"uarimin\") || getOpcode().equalsIgnoreCase(\"uarimax\")) {\n+ sb.append(OPERAND_DELIMITOR);\n+ sb.append(\"1\");\n+ }\n+ }\n+\nreturn sb.toString();\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationUtils.java",
"diff": "@@ -38,12 +38,16 @@ import org.apache.sysds.runtime.functionobjects.Builtin.BuiltinCode;\nimport org.apache.sysds.runtime.functionobjects.CM;\nimport org.apache.sysds.runtime.functionobjects.KahanFunction;\nimport org.apache.sysds.runtime.functionobjects.Mean;\n+import org.apache.sysds.runtime.functionobjects.Multiply;\nimport org.apache.sysds.runtime.functionobjects.Plus;\n+import org.apache.sysds.runtime.functionobjects.ReduceAll;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.DoubleObject;\nimport org.apache.sysds.runtime.instructions.cp.ScalarObject;\n+import org.apache.sysds.runtime.matrix.data.LibMatrixAgg;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.runtime.matrix.operators.AggregateOperator;\nimport org.apache.sysds.runtime.matrix.operators.AggregateUnaryOperator;\nimport org.apache.sysds.runtime.matrix.operators.BinaryOperator;\nimport org.apache.sysds.runtime.matrix.operators.ScalarOperator;\n@@ -189,6 +193,30 @@ public class FederationUtils {\n}\n}\n+ public static MatrixBlock aggMinMaxIndex(Future<FederatedResponse>[] ffr, boolean isMin, FederationMap map) {\n+ try {\n+ MatrixBlock prev = (MatrixBlock) ffr[0].get().getData()[0];\n+ int size = 0;\n+ for(int i = 1; i < ffr.length; i++) {\n+ MatrixBlock next = (MatrixBlock) ffr[i].get().getData()[0];\n+ size = map.getFederatedRanges()[i-1].getEndDimsInt()[1];\n+ for(int j = 0; j < prev.getNumRows(); j++) {\n+ next.setValue(j, 0, next.getValue(j, 0) + size);\n+ if((prev.getValue(j, 1) > next.getValue(j, 1) && !isMin) ||\n+ (prev.getValue(j, 1) < next.getValue(j, 1) && isMin)) {\n+ next.setValue(j, 0, prev.getValue(j, 0));\n+ next.setValue(j, 1, prev.getValue(j, 1));\n+ }\n+ }\n+ prev = next;\n+ }\n+ return prev.slice(0, prev.getNumRows()-1, 0,0, true, new MatrixBlock());\n+ }\n+ catch (Exception ex) {\n+ throw new DMLRuntimeException(ex);\n+ }\n+ }\n+\npublic static MatrixBlock aggVar(Future<FederatedResponse>[] ffr, Future<FederatedResponse>[] meanFfr, FederationMap map, boolean isRowAggregate, boolean isScalar) {\ntry {\nFederatedRange[] ranges = map.getFederatedRanges();\n@@ -325,13 +353,24 @@ public class FederationUtils {\nif(!(aop.aggOp.increOp.fn instanceof KahanFunction || (aop.aggOp.increOp.fn instanceof Builtin &&\n(((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN\n|| ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MAX)\n- || aop.aggOp.increOp.fn instanceof Mean ))) {\n+ || aop.aggOp.increOp.fn instanceof Mean\n+ || aop.aggOp.increOp.fn instanceof Multiply))) {\nthrow new DMLRuntimeException(\"Unsupported aggregation operator: \"\n+ aop.aggOp.increOp.getClass().getSimpleName());\n}\ntry {\n- if(aop.aggOp.increOp.fn instanceof Builtin){\n+ if(aop.aggOp.increOp.fn instanceof Multiply){\n+ MatrixBlock ret = new MatrixBlock(ffr.length, 1, false);\n+ MatrixBlock res = new MatrixBlock(0);\n+ for(int i = 0; i < ffr.length; i++)\n+ ret.setValue(i, 0, ((ScalarObject)ffr[i].get().getData()[0]).getDoubleValue());\n+ LibMatrixAgg.aggregateUnaryMatrix(ret, res,\n+ new AggregateUnaryOperator(new AggregateOperator(1, Multiply.getMultiplyFnObject()),\n+ ReduceAll.getReduceAllFnObject()));\n+ return new DoubleObject(res.quickGetValue(0, 0));\n+ }\n+ else if(aop.aggOp.increOp.fn instanceof Builtin){\n// then we know it is a Min or Max based on the previous check.\nboolean isMin = ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == 
BuiltinCode.MIN;\nreturn new DoubleObject(aggMinMax(ffr, isMin, true, Optional.empty()).getValue(0,0));\n@@ -361,12 +400,21 @@ public class FederationUtils {\nreturn aggAdd(ffr);\nelse if( aop.aggOp.increOp.fn instanceof Mean )\nreturn aggMean(ffr, map);\n- else if (aop.aggOp.increOp.fn instanceof Builtin &&\n- (((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN ||\n+ else if (aop.aggOp.increOp.fn instanceof Builtin) {\n+ if ((((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN ||\n((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MAX)) {\nboolean isMin = ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MIN;\nreturn aggMinMax(ffr,isMin,false, Optional.of(map.getType()));\n- } else\n+ }\n+ else if((((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MININDEX)\n+ || (((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MAXINDEX)) {\n+ boolean isMin = ((Builtin) aop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MININDEX;\n+ return aggMinMaxIndex(ffr,isMin, map);\n+ }\n+ else throw new DMLRuntimeException(\"Unsupported aggregation operator: \"\n+ + aop.aggOp.increOp.fn.getClass().getSimpleName());\n+ }\n+ else\nthrow new DMLRuntimeException(\"Unsupported aggregation operator: \"\n+ aop.aggOp.increOp.fn.getClass().getSimpleName());\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"diff": "@@ -401,6 +401,21 @@ public class InstructionUtils\nreturn aggun;\n}\n+ public static AggregateUnaryOperator parseAggregateUnaryRowIndexOperator(String opcode, int numOutputs, int numThreads) {\n+ AggregateUnaryOperator aggun = null;\n+ AggregateOperator agg = null;\n+ if (opcode.equalsIgnoreCase(\"uarimax\") )\n+ agg = new AggregateOperator(Double.NEGATIVE_INFINITY, Builtin.getBuiltinFnObject(\"maxindex\"),\n+ numOutputs == 1 ? CorrectionLocationType.LASTCOLUMN : CorrectionLocationType.NONE);\n+\n+ else if (opcode.equalsIgnoreCase(\"uarimin\") )\n+ agg = new AggregateOperator(Double.POSITIVE_INFINITY, Builtin.getBuiltinFnObject(\"minindex\"),\n+ numOutputs == 1 ? CorrectionLocationType.LASTCOLUMN : CorrectionLocationType.NONE);\n+\n+ aggun = new AggregateUnaryOperator(agg, ReduceCol.getReduceColFnObject(), numThreads);\n+ return aggun;\n+ }\n+\npublic static AggregateTernaryOperator parseAggregateTernaryOperator(String opcode) {\nreturn parseAggregateTernaryOperator(opcode, 1);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/AggregateUnaryCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/AggregateUnaryCPInstruction.java",
"diff": "@@ -85,6 +85,12 @@ public class AggregateUnaryCPInstruction extends UnaryCPInstruction {\nreturn new AggregateUnaryCPInstruction(new SimpleOperator(null),\nin1, out, AUType.COUNT_DISTINCT_APPROX, opcode, str);\n}\n+ else if(opcode.equalsIgnoreCase(\"uarimax\") || opcode.equalsIgnoreCase(\"uarimin\")){\n+ // parse with number of outputs\n+ AggregateUnaryOperator aggun = InstructionUtils\n+ .parseAggregateUnaryRowIndexOperator(opcode, Integer.parseInt(parts[4]), Integer.parseInt(parts[3]));\n+ return new AggregateUnaryCPInstruction(aggun, in1, out, AUType.DEFAULT, opcode, str);\n+ }\nelse { //DEFAULT BEHAVIOR\nAggregateUnaryOperator aggun = InstructionUtils\n.parseBasicAggregateUnaryOperator(opcode, Integer.parseInt(parts[3]));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateUnaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateUnaryFEDInstruction.java",
"diff": "@@ -21,6 +21,7 @@ package org.apache.sysds.runtime.instructions.fed;\nimport java.util.concurrent.Future;\n+import org.apache.sysds.lops.Lop;\nimport org.apache.sysds.lops.LopProperties.ExecType;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n@@ -56,7 +57,13 @@ public class AggregateUnaryFEDInstruction extends UnaryFEDInstruction {\nString opcode = parts[0];\nCPOperand in1 = new CPOperand(parts[1]);\nCPOperand out = new CPOperand(parts[2]);\n- AggregateUnaryOperator aggun = InstructionUtils.parseBasicAggregateUnaryOperator(opcode);\n+\n+ AggregateUnaryOperator aggun = null;\n+ if(opcode.equalsIgnoreCase(\"uarimax\") || opcode.equalsIgnoreCase(\"uarimin\"))\n+ aggun = InstructionUtils.parseAggregateUnaryRowIndexOperator(opcode, Integer.parseInt(parts[4]), 1);\n+ else\n+ aggun = InstructionUtils.parseBasicAggregateUnaryOperator(opcode);\n+\nif(InstructionUtils.getExecType(str) == ExecType.SPARK)\nstr = InstructionUtils.replaceOperand(str, 4, \"-1\");\nreturn new AggregateUnaryFEDInstruction(aggun, in1, out, opcode, str);\n@@ -77,6 +84,9 @@ public class AggregateUnaryFEDInstruction extends UnaryFEDInstruction {\nMatrixObject in = ec.getMatrixObject(input1);\nFederationMap map = in.getFedMapping();\n+ if((instOpcode.equalsIgnoreCase(\"uarimax\") || instOpcode.equalsIgnoreCase(\"uarimin\")) && in.isFederated(FederationMap.FType.COL))\n+ instString = InstructionUtils.replaceOperand(instString, 5, \"2\");\n+\n//create federated commands for aggregation\nFederatedRequest fr1 = FederationUtils.callInstruction(instString, output,\nnew CPOperand[]{input1}, new long[]{in.getFedMapping().getID()});\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixAgg.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixAgg.java",
"diff": "@@ -231,6 +231,9 @@ public class LibMatrixAgg\npublic static void aggregateUnaryMatrix(MatrixBlock in, MatrixBlock out, AggregateUnaryOperator uaop, int k) {\n//fall back to sequential version if necessary\nif( !satisfiesMultiThreadingConstraints(in, out, uaop, k) ) {\n+ if(uaop.aggOp.increOp.fn instanceof Builtin && (((((Builtin) uaop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MININDEX)\n+ || (((Builtin) uaop.aggOp.increOp.fn).getBuiltinCode() == BuiltinCode.MAXINDEX)) && uaop.aggOp.correction.getNumRemovedRowsColumns()==0))\n+ out.clen = 2;\naggregateUnaryMatrix(in, out, uaop);\nreturn;\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedProdTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedProdTest extends AutomatedTestBase {\n+\n+ private final static String TEST_NAME = \"FederatedProdTest\";\n+\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + FederatedProdTest.class.getSimpleName() + \"/\";\n+\n+ private final static int blocksize = 1024;\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+\n+ @Parameterized.Parameter(2)\n+ public boolean rowPartitioned;\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {\n+ {100, 12, true},\n+ {100, 12, false}\n+ });\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"S.scalar\"}));\n+ }\n+\n+ @Test\n+ public void testProdCP() { runProdTest(ExecMode.SINGLE_NODE); }\n+\n+ private void runProdTest(ExecMode execMode) {\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ ExecMode platformOld = rtplatform;\n+\n+ if(rtplatform == ExecMode.SPARK)\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ // write input matrices\n+ int r = rows;\n+ int c = cols / 4;\n+ if(rowPartitioned) {\n+ r = rows / 4;\n+ c = cols;\n+ }\n+\n+ double[][] X1 = getRandomMatrix(r, c, 0, 2, 1, 3);\n+ double[][] X2 = getRandomMatrix(r, c, 0, 2, 1, 7);\n+ double[][] X3 = getRandomMatrix(r, c, 0, 2, 1, 8);\n+ double[][] X4 = getRandomMatrix(r, c, 0, 2, 1, 9);\n+\n+ MatrixCharacteristics mc = new MatrixCharacteristics(r, c, blocksize, r * c);\n+ writeInputMatrixWithMTD(\"X1\", X1, false, mc);\n+ writeInputMatrixWithMTD(\"X2\", X2, false, mc);\n+ writeInputMatrixWithMTD(\"X3\", X3, false, mc);\n+ writeInputMatrixWithMTD(\"X4\", X4, false, mc);\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ 
fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ int port3 = getRandomAvailablePort();\n+ int port4 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread t2 = startLocalFedWorkerThread(port2, FED_WORKER_WAIT_S);\n+ Thread t3 = startLocalFedWorkerThread(port3, FED_WORKER_WAIT_S);\n+ Thread t4 = startLocalFedWorkerThread(port4);\n+\n+ rtplatform = execMode;\n+ if(rtplatform == ExecMode.SPARK) {\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-args\", input(\"X1\"), input(\"X2\"), input(\"X3\"), input(\"X4\"),\n+ Boolean.toString(rowPartitioned).toUpperCase(), expected(\"S\")};\n+\n+ runTest(null);\n+\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + cols,\n+ \"rP=\" + Boolean.toString(rowPartitioned).toUpperCase(), \"out_S=\" + output(\"S\")};\n+\n+ runTest(null);\n+\n+ // compare via files\n+ compareResults(1e-9);\n+\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_ua*\"));\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X3\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X4\")));\n+\n+ TestUtils.shutdownThreads(t1, t2, t3, t4);\n+\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedRowIndexTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.primitives;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedRowIndexTest extends AutomatedTestBase {\n+\n+ private final static String TEST_NAME = \"FederatedRowIndexTest\";\n+\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + FederatedRowIndexTest.class.getSimpleName() + \"/\";\n+\n+ private final static int blocksize = 1024;\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+\n+ @Parameterized.Parameter(2)\n+ public boolean rowPartitioned;\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ return Arrays.asList(new Object[][] {\n+ {1000, 12, true},\n+ {1000, 12, false}\n+ });\n+ }\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"S\"}));\n+ }\n+\n+ @Test\n+ public void testRowIndexCP() {\n+ runRowIndexTest(ExecMode.SINGLE_NODE);\n+ }\n+\n+ private void runRowIndexTest(ExecMode execMode) {\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+ ExecMode platformOld = rtplatform;\n+\n+ if(rtplatform == ExecMode.SPARK)\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ // write input matrices\n+ int r = rows;\n+ int c = cols / 4;\n+ if(rowPartitioned) {\n+ r = rows / 4;\n+ c = cols;\n+ }\n+\n+ double[][] X1 = getRandomMatrix(r, c, 1, 5, 1, 3);\n+ double[][] X2 = getRandomMatrix(r, c, 1, 5, 1, 7);\n+ double[][] X3 = getRandomMatrix(r, c, 1, 5, 1, 8);\n+ double[][] X4 = getRandomMatrix(r, c, 1, 5, 1, 9);\n+\n+ MatrixCharacteristics mc = new MatrixCharacteristics(r, c, blocksize, r * c);\n+ writeInputMatrixWithMTD(\"X1\", X1, false, mc);\n+ writeInputMatrixWithMTD(\"X2\", X2, false, mc);\n+ writeInputMatrixWithMTD(\"X3\", X3, false, mc);\n+ writeInputMatrixWithMTD(\"X4\", X4, false, mc);\n+\n+ // empty script name because we don't execute any script, just start the 
worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ int port3 = getRandomAvailablePort();\n+ int port4 = getRandomAvailablePort();\n+ Thread t1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread t2 = startLocalFedWorkerThread(port2, FED_WORKER_WAIT_S);\n+ Thread t3 = startLocalFedWorkerThread(port3, FED_WORKER_WAIT_S);\n+ Thread t4 = startLocalFedWorkerThread(port4);\n+\n+ rtplatform = execMode;\n+ if(rtplatform == ExecMode.SPARK) {\n+ System.out.println(7);\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = true;\n+ }\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-args\", input(\"X1\"), input(\"X2\"), input(\"X3\"), input(\"X4\"),\n+ Boolean.toString(rowPartitioned).toUpperCase(), expected(\"S\")};\n+\n+ runTest(null);\n+\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"100\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + cols,\n+ \"rP=\" + Boolean.toString(rowPartitioned).toUpperCase(), \"out_S=\" + output(\"S\")};\n+\n+ runTest(null);\n+\n+// compare via files\n+ compareResults(1e-9);\n+\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_uarimax\"));\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X3\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X4\")));\n+\n+ TestUtils.shutdownThreads(t1, t2, t3, t4);\n+\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedProdTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = prod(A);\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedProdTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if($5) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = prod(A);\n+write(s, $6);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRowIndexTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if ($rP) {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows/4, $cols), list($rows/4, 0), list(2*$rows/4, $cols),\n+ list(2*$rows/4, 0), list(3*$rows/4, $cols), list(3*$rows/4, 0), list($rows, $cols)));\n+} else {\n+ A = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols/4), list(0,$cols/4), list($rows, $cols/2),\n+ list(0,$cols/2), list($rows, 3*($cols/4)), list(0, 3*($cols/4)), list($rows, $cols)));\n+}\n+\n+s = rowIndexMax(A);\n+write(s, $out_S);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedRowIndexTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+if($5) { A = rbind(read($1), read($2), read($3), read($4)); }\n+else { A = cbind(read($1), read($2), read($3), read($4)); }\n+\n+s = rowIndexMax(A);\n+write(s, $6);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2763] Federated rowIndexMax and rowIndexMin
This commit adds the functions prod and cov for federated execution.
(also included tests).
Closes #1130 |
49,706 | 04.02.2021 11:11:41 | -3,600 | f4be1a30ddc88b6a1145cd1c035d243619243ace | [MINOR] remove accidental 'it' | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibLeftMultBy.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibLeftMultBy.java",
"diff": "@@ -768,8 +768,7 @@ public class LibLeftMultBy {\n}\nif(_rl != _ru)\nleftMultByTransposeSelf(_groups, _ret, _v, nCol - _ru, nCol - _rl, _cl, _cu, _overlapping);\n-\n-it return null;\n+ return null;\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] remove accidental 'it' |
49,689 | 04.02.2021 20:35:50 | -3,600 | cb9c890d207feb7877d0768395c6ea473c7ab0bd | Update UDF reuse, add Fed pipeline reuse test
This patch handles reusing those UDFs that return metadata
with federated SUCCESS response (e.g. Rdiag, DiagMatrix).
Furthermore, this adds a test to reuse a full federated
pipeline having tasks such as preprocessing and
hyperparameter tuning (LM). | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"diff": "@@ -350,9 +350,9 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\n// reuse or execute user-defined function\ntry {\n// reuse UDF outputs if available in lineage cache\n- if (LineageCache.reuse(udf, ec))\n- return new FederatedResponse(FederatedResponse.ResponseType.SUCCESS_EMPTY);\n- //FIXME: few UDFs (e.g. Rdiag, DiagMatrix) return additional data with response\n+ FederatedResponse reuse = LineageCache.reuse(udf, ec);\n+ if (reuse.isSuccessful())\n+ return reuse;\n// else execute the UDF\nlong t0 = !ReuseCacheType.isNone() ? System.nanoTime() : 0;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ReorgFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/ReorgFEDInstruction.java",
"diff": "@@ -239,7 +239,7 @@ public class ReorgFEDInstruction extends UnaryFEDInstruction {\nreturn new RdiagResult(diagFedMap, dcs);\n}\n- private static class Rdiag extends FederatedUDF {\n+ public static class Rdiag extends FederatedUDF {\nprivate static final long serialVersionUID = -3466926635958851402L;\nprivate final long _outputID;\n@@ -293,7 +293,7 @@ public class ReorgFEDInstruction extends UnaryFEDInstruction {\n}\n}\n- private static class DiagMatrix extends FederatedUDF {\n+ public static class DiagMatrix extends FederatedUDF {\nprivate static final long serialVersionUID = -3466926635958851402L;\nprivate final long _outputID;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -32,6 +32,7 @@ import org.apache.sysds.parser.Statement;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedUDF;\nimport org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\nimport org.apache.sysds.runtime.instructions.CPInstructionParser;\n@@ -236,10 +237,10 @@ public class LineageCache\n}\n//Reuse federated UDFs\n- public static boolean reuse(FederatedUDF udf, ExecutionContext ec)\n+ public static FederatedResponse reuse(FederatedUDF udf, ExecutionContext ec)\n{\nif (ReuseCacheType.isNone() || udf.getOutputIds() == null)\n- return false;\n+ return new FederatedResponse(FederatedResponse.ResponseType.ERROR);\n//TODO: reuse only those UDFs which are part of reusable instructions\nboolean reuse = false;\n@@ -249,7 +250,8 @@ public class LineageCache\n//TODO: support multi-return UDFs\nif (udf.getLineageItem(ec) == null)\n//TODO: trace all UDFs\n- return false;\n+ return new FederatedResponse(FederatedResponse.ResponseType.ERROR);\n+\nLineageItem li = udf.getLineageItem(ec).getValue();\nli.setDistLeaf2Node(1); //to save from early eviction\nLineageCacheEntry e = null;\n@@ -282,20 +284,27 @@ public class LineageCache\nreuse = false;\nif (reuse) {\n- udfOutputs.forEach((var, val) -> {\n+ FederatedResponse res = null;\n+ for (Map.Entry<String, Data> entry : udfOutputs.entrySet()) {\n+ String var = entry.getKey();\n+ Data val = entry.getValue();\n//cleanup existing data bound to output name\nData exdata = ec.removeVariable(var);\nif (exdata != val)\nec.cleanupDataObject(exdata);\n//add or replace data in the symbol table\nec.setVariable(var, val);\n- });\n+ //build and return a federated response\n+ res = LineageItemUtils.setUDFResponse(udf, (MatrixObject) val);\n+ }\nif (DMLScript.STATISTICS)\n//TODO: dedicated stats for federated reuse\nLineageCacheStatistics.incrementInstHits();\n+\n+ return res;\n}\n- return reuse;\n+ return new FederatedResponse(FederatedResponse.ResponseType.ERROR);\n}\npublic static boolean probe(LineageItem key) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -200,6 +200,15 @@ public class LineageCacheConfig\n}\nprivate static boolean isVectorAppend(Instruction inst, ExecutionContext ec) {\n+ if (inst instanceof ComputationFEDInstruction) {\n+ ComputationFEDInstruction fedinst = (ComputationFEDInstruction) inst;\n+ if (!fedinst.input1.isMatrix() || !fedinst.input2.isMatrix())\n+ return false;\n+ long c1 = ec.getMatrixObject(fedinst.input1).getNumColumns();\n+ long c2 = ec.getMatrixObject(fedinst.input2).getNumColumns();\n+ return(c1 == 1 || c2 == 1);\n+ }\n+ else { //CPInstruction\nComputationCPInstruction cpinst = (ComputationCPInstruction) inst;\nif( !cpinst.input1.isMatrix() || !cpinst.input2.isMatrix() )\nreturn false;\n@@ -207,6 +216,7 @@ public class LineageCacheConfig\nlong c2 = ec.getMatrixObject(cpinst.input2).getNumColumns();\nreturn(c1 == 1 || c2 == 1);\n}\n+ }\npublic static boolean isOutputFederated(Instruction inst, Data data) {\nif (!(inst instanceof ComputationFEDInstruction))\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItemUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItemUtils.java",
"diff": "@@ -47,7 +47,9 @@ import org.apache.sysds.lops.UnaryCP;\nimport org.apache.sysds.lops.compile.Dag;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.caching.CacheableData;\n+import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedUDF;\nimport org.apache.sysds.runtime.instructions.Instruction;\nimport org.apache.sysds.runtime.instructions.InstructionParser;\n@@ -56,6 +58,8 @@ import org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.instructions.cp.DataGenCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ScalarObject;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction;\n+import org.apache.sysds.runtime.instructions.fed.ReorgFEDInstruction.DiagMatrix;\n+import org.apache.sysds.runtime.instructions.fed.ReorgFEDInstruction.Rdiag;\nimport org.apache.sysds.runtime.util.HDFSTool;\nimport java.io.IOException;\n@@ -159,6 +163,14 @@ public class LineageItemUtils {\n}\n}\n+ public static FederatedResponse setUDFResponse(FederatedUDF udf, MatrixObject mo) {\n+ if (udf instanceof DiagMatrix || udf instanceof Rdiag)\n+ return new FederatedResponse(FederatedResponse.ResponseType.SUCCESS,\n+ new int[]{(int) mo.getNumRows(), (int) mo.getNumColumns()});\n+\n+ return new FederatedResponse(FederatedResponse.ResponseType.SUCCESS_EMPTY);\n+ }\n+\npublic static void constructLineageFromHops(Hop[] roots, String claName, Hop[] inputs, HashMap<Long, Hop> spoofmap) {\n//probe existence and only generate lineage if non-existing\n//(a fused operator might be used in multiple places of a program)\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/lineage/LineageFedReuseAlg.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.lineage;\n+\n+import static org.junit.Assert.assertTrue;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.instructions.InstructionUtils;\n+import org.apache.sysds.runtime.lineage.Lineage;\n+import org.apache.sysds.runtime.matrix.data.LibMatrixMult;\n+import org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.runtime.transform.encode.EncoderRecode;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.apache.sysds.utils.Statistics;\n+import org.junit.Test;\n+\[email protected]\n+public class LineageFedReuseAlg extends AutomatedTestBase {\n+\n+ private final static String TEST_DIR = \"functions/lineage/\";\n+ private final static String TEST_NAME1 = \"FedLmPipelineReuse\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + LineageFedReuseAlg.class.getSimpleName() + \"/\";\n+\n+ public int rows = 10000;\n+ public int cols = 100;\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME1, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME1, new String[] {\"Z\"}));\n+ }\n+\n+ @Test\n+ public void federatedLmPipelineContinguous() {\n+ federatedLmPipeline(Types.ExecMode.SINGLE_NODE, true, TEST_NAME1);\n+ }\n+\n+ @Test\n+ public void federatedLmPipelineSampled() {\n+ federatedLmPipeline(Types.ExecMode.SINGLE_NODE, false, TEST_NAME1);\n+ }\n+\n+ public void federatedLmPipeline(ExecMode execMode, boolean contSplits, String TEST_NAME) {\n+ ExecMode oldExec = setExecMode(execMode);\n+ boolean oldSort = EncoderRecode.SORT_RECODE_MAP;\n+ EncoderRecode.SORT_RECODE_MAP = true;\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ try {\n+ // generated lm data\n+ MatrixBlock X = MatrixBlock.randOperations(rows, cols, 1.0, 0, 1, \"uniform\", 7);\n+ MatrixBlock w = MatrixBlock.randOperations(cols, 1, 1.0, 0, 1, \"uniform\", 3);\n+ MatrixBlock y = new MatrixBlock(rows, 1, false).allocateBlock();\n+ LibMatrixMult.matrixMult(X, w, y);\n+ MatrixBlock c = MatrixBlock.randOperations(rows, 1, 1.0, 1, 50, \"uniform\", 23);\n+ MatrixBlock rc = c.unaryOperations(InstructionUtils.parseUnaryOperator(\"round\"), new MatrixBlock());\n+ X = rc.append(X, new MatrixBlock(), true);\n+\n+ // We have two matrices handled by a single federated worker\n+ int quarterRows = rows / 2;\n+ int[] k = new int[] {quarterRows - 1, quarterRows, rows - 1, 0, 0, 0, 0};\n+ writeInputMatrixWithMTD(\"X1\", X.slice(0, k[0]), false);\n+ writeInputMatrixWithMTD(\"X2\", 
X.slice(k[1], k[2]), false);\n+ writeInputMatrixWithMTD(\"X3\", X.slice(k[3], k[4]), false);\n+ writeInputMatrixWithMTD(\"X4\", X.slice(k[5], k[6]), false);\n+ writeInputMatrixWithMTD(\"Y\", y, false);\n+\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ int port3 = getRandomAvailablePort();\n+ int port4 = getRandomAvailablePort();\n+ String[] otherargs = new String[] {\"-lineage\", \"reuse_full\"};\n+ Thread t1 = startLocalFedWorkerThread(port1, otherargs, FED_WORKER_WAIT_S);\n+ Thread t2 = startLocalFedWorkerThread(port2, otherargs);\n+\n+ TestConfiguration config = availableTestConfigurations.get(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ // Run with federated matrix and without reuse\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"20\",\n+ \"-nvargs\", \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + (cols + 1),\n+ \"in_Y=\" + input(\"Y\"), \"cont=\" + String.valueOf(contSplits).toUpperCase(), \"out=\" + expected(\"Z\")};\n+ runTest(true, false, null, -1);\n+ long tsmmCount = Statistics.getCPHeavyHitterCount(\"tsmm\");\n+ long fed_tsmmCount = Statistics.getCPHeavyHitterCount(\"fed_tsmm\");\n+ long mmCount = Statistics.getCPHeavyHitterCount(\"ba+*\");\n+ long fed_mmCount = Statistics.getCPHeavyHitterCount(\"fed_ba+*\");\n+\n+ // Run with federated matrix and with reuse\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"20\", \"-lineage\", \"reuse_full\",\n+ \"-nvargs\", \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X3=\" + TestUtils.federatedAddress(port3, input(\"X3\")),\n+ \"in_X4=\" + TestUtils.federatedAddress(port4, input(\"X4\")), \"rows=\" + rows, \"cols=\" + (cols + 1),\n+ \"in_Y=\" + input(\"Y\"), \"cont=\" + String.valueOf(contSplits).toUpperCase(), \"out=\" + output(\"Z\")};\n+ Lineage.resetInternalState();\n+ runTest(true, false, null, -1);\n+ long tsmmCount_reuse = Statistics.getCPHeavyHitterCount(\"tsmm\");\n+ long fed_tsmmCount_reuse = Statistics.getCPHeavyHitterCount(\"fed_tsmm\");\n+ long mmCount_reuse = Statistics.getCPHeavyHitterCount(\"ba+*\");\n+ long fed_mmCount_reuse = Statistics.getCPHeavyHitterCount(\"fed_ba+*\");\n+\n+ // compare results\n+ compareResults(1e-2);\n+ // compare potentially reused instruction counts\n+ assertTrue(tsmmCount > tsmmCount_reuse);\n+ assertTrue(fed_tsmmCount > fed_tsmmCount_reuse);\n+ assertTrue(mmCount > mmCount_reuse);\n+ assertTrue(fed_mmCount > fed_mmCount_reuse);\n+\n+ TestUtils.shutdownThreads(t1, t2);\n+ }\n+ finally {\n+ resetExecMode(oldExec);\n+ EncoderRecode.SORT_RECODE_MAP = oldSort;\n+ }\n+ }\n+}\n\\ No newline at end of file\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/lineage/FedLmPipelineReuse.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+Fin = federated(addresses=list($in_X1, $in_X2),\n+ ranges=list(list(0, 0), list($rows / 2, $cols), list($rows / 2, 0), list($rows, $cols)))\n+y = read($in_Y)\n+\n+# one hot encoding categorical, other passthrough\n+Fall = as.frame(Fin)\n+jspec = \"{ ids:true, dummycode:[1] }\"\n+[X,M] = transformencode(target=Fall, spec=jspec)\n+print(\"ncol(X) = \"+ncol(X))\n+\n+# clipping out of value ranges\n+colSD = colSds(X)\n+colMean = (colMeans(X))\n+upperBound = colMean + 1.5 * colSD\n+lowerBound = colMean - 1.5 * colSD\n+outFilter = (X < lowerBound) | (X > upperBound)\n+X = X - outFilter*X + outFilter*colMeans(X);\n+\n+# normalization\n+X = scale(X=X, center=TRUE, scale=TRUE);\n+\n+# split training and testing\n+[Xtrain , Xtest, ytrain, ytest] = split(X=X, Y=y, cont=$cont, seed=7)\n+\n+# train regression model with different hyperparameters\n+for (i in 1:10) {\n+ reg = 1e-3 + (0 * 0.001);\n+ B = lm(X=Xtrain, y=ytrain, icpt=1, reg=reg, tol=1e-9, verbose=TRUE);\n+ # TODO: find the best beta\n+}\n+\n+# model evaluation on test split\n+yhat = lmPredict(X=Xtest, B=B, icpt=1, ytest=ytest, verbose=TRUE);\n+\n+# write trained model and meta data\n+write(B, $out)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2799] Update UDF reuse, add Fed pipeline reuse test
This patch handles reusing those UDFs that return metadata
with federated SUCCESS response (e.g. Rdiag, DiagMatrix).
Furthermore, this adds a test to reuse a full federated
pipeline having tasks such as preprocessing and
hyperparameter tuning (LM). |
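The test above checks reuse indirectly, by comparing heavy-hitter instruction counts (tsmm, ba+* and their fed_ counterparts) between a run without and a run with lineage-based reuse. As a rough illustration of the mechanism being exercised (not SystemDS's actual LineageCache API), the following self-contained sketch caches results under a lineage-like key so that a repeated sub-pipeline, such as the hyper-parameter loop in the DML script, executes only once:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative only: a tiny lineage-keyed cache, not the real SystemDS LineageCache.
public class LineageReuseSketch {
    private final Map<String, double[]> cache = new HashMap<>();
    private int executions = 0; // analogous to a heavy-hitter count

    public double[] computeOrReuse(String lineageKey, Supplier<double[]> op) {
        // probe the cache with the lineage trace of the operation
        double[] hit = cache.get(lineageKey);
        if (hit != null)
            return hit;              // reuse: no recomputation
        double[] val = op.get();     // miss: execute and cache
        executions++;
        cache.put(lineageKey, val);
        return val;
    }

    public static void main(String[] args) {
        LineageReuseSketch c = new LineageReuseSketch();
        // the hyper-parameter loop re-evaluates the same preprocessing lineage 10 times
        for (int i = 0; i < 10; i++)
            c.computeOrReuse("scale(clip(onehot(X)))", () -> new double[]{1, 2, 3});
        System.out.println("executions = " + c.executions); // 1, the other 9 are reused
    }
}
```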
49,693 | 05.02.2021 02:09:02 | -3,600 | 98af625fb8cc8e9bed57bd82a7bc536e4d13d74a | [MINOR] Disable generation of GPU instruction for sum_sq reduction (needs a bugfix) | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/AggUnaryOp.java",
"new_path": "src/main/java/org/apache/sysds/hops/AggUnaryOp.java",
"diff": "@@ -97,7 +97,7 @@ public class AggUnaryOp extends MultiThreadedHop\nreturn false;\n}\nelse if ((_op == AggOp.SUM && (_direction == Direction.RowCol || _direction == Direction.Row || _direction == Direction.Col))\n- || (_op == AggOp.SUM_SQ && (_direction == Direction.RowCol || _direction == Direction.Row || _direction == Direction.Col))\n+// || (_op == AggOp.SUM_SQ && (_direction == Direction.RowCol || _direction == Direction.Row || _direction == Direction.Col))\n|| (_op == AggOp.MAX && (_direction == Direction.RowCol || _direction == Direction.Row || _direction == Direction.Col))\n|| (_op == AggOp.MIN && (_direction == Direction.RowCol || _direction == Direction.Row || _direction == Direction.Col))\n|| (_op == AggOp.MEAN && (_direction == Direction.RowCol || _direction == Direction.Row || _direction == Direction.Col))\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNodeCell.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNodeCell.java",
"diff": "@@ -238,6 +238,7 @@ public class CNodeCell extends CNodeTpl\n}\n@Override\npublic boolean isSupported(GeneratorAPI api) {\n- return (api == GeneratorAPI.CUDA || api == GeneratorAPI.JAVA) && _output.isSupported(api);\n+ return (api == GeneratorAPI.CUDA || api == GeneratorAPI.JAVA) && _output.isSupported(api) &&\n+ !(getSpoofAggOp() == SpoofCellwise.AggOp.SUM_SQ);\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Disable generation of GPU instruction for sum_sq reduction (needs a bugfix) |
49,693 | 05.02.2021 02:09:22 | -3,600 | d269cb905f057dff134256ac43c61e39308e91f6 | Avoid recompiling generated cuda operators | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/SpoofCompiler.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/SpoofCompiler.java",
"diff": "@@ -191,6 +191,22 @@ public class SpoofCompiler {\nnative_contexts = new HashMap<>();\nif(!native_contexts.containsKey(generator)) {\n+ String local_tmp = ConfigurationManager.getDMLConfig().getTextValue(DMLConfig.LOCAL_TMP_DIR);\n+ String jar_path = SpoofCompiler.class.getProtectionDomain().getCodeSource().getLocation().getPath();\n+ if(jar_path.contains(\".jar\")) {\n+ try {\n+ extractCodegenSources(local_tmp, jar_path);\n+ }\n+ catch (IOException e){\n+ LOG.error(\"Could not extract spoof files from jar: \" + e);\n+ API = GeneratorAPI.JAVA;\n+ return;\n+ }\n+ }\n+ else {\n+ local_tmp = System.getProperty(\"user.dir\") + \"/src/main\".replace(\"/\", File.separator);\n+ }\n+\nif(generator == GeneratorAPI.CUDA) {\n// init GPUs with jCuda to avoid double initialization problems\nGPUContextPool.initializeGPU();\n@@ -222,21 +238,7 @@ public class SpoofCompiler {\nisLoaded = NativeHelper.loadLibraryHelperFromResource(libName);\nif(isLoaded) {\n- String local_tmp = ConfigurationManager.getDMLConfig().getTextValue(DMLConfig.LOCAL_TMP_DIR);\n- String jar_path = SpoofCompiler.class.getProtectionDomain().getCodeSource().getLocation().getPath();\n- if(jar_path.contains(\".jar\")) {\n- try {\n- extractCodegenSources(local_tmp, jar_path);\n- }\n- catch (IOException e){\n- LOG.error(\"Could not extract spoof files from jar: \" + e);\n- API = GeneratorAPI.JAVA;\n- return;\n- }\n- }\n- else {\n- local_tmp = System.getProperty(\"user.dir\") + \"/src/main\".replace(\"/\", File.separator);\n- }\n+\nlong ctx_ptr = initialize_cuda_context(0, local_tmp);\nif(ctx_ptr != 0) {\n@@ -272,7 +274,8 @@ public class SpoofCompiler {\nwhile (files_in_jar.hasMoreElements()) {\nJarEntry in_file = files_in_jar.nextElement();\n- if (in_file.getName().startsWith(\"cuda/\") && !in_file.isDirectory()) {\n+ if ((in_file.getName().startsWith(\"cuda/\") || in_file.getName().startsWith(\"java/\")) &&\n+ !in_file.isDirectory()) {\nFile out_file = new File(resource_path, in_file.getName());\nout_file.deleteOnExit();\nFile parent = out_file.getParentFile();\n@@ -512,20 +515,24 @@ public class SpoofCompiler {\nif( cla == null ) {\nString src = \"\";\n+ String src_cuda = \"\";\nboolean native_compiled_successfully = false;\n+ src = tmp.getValue().codegen(false, GeneratorAPI.JAVA);\n+ cla = CodegenUtils.compileClass(\"codegen.\"+ tmp.getValue().getClassname(), src);\n- if(API == GeneratorAPI.CUDA && tmp.getValue().isSupported(API)) {\n- src = tmp.getValue().codegen(false, GeneratorAPI.CUDA);\n- native_compiled_successfully = compile_cuda(tmp.getValue().getVarname(), src);\n+ if(API == GeneratorAPI.CUDA) {\n+ if(tmp.getValue().isSupported(API)) {\n+ src_cuda = tmp.getValue().codegen(false, GeneratorAPI.CUDA);\n+ native_compiled_successfully = compile_cuda(tmp.getValue().getVarname(), src_cuda);\nif(native_compiled_successfully)\nCodegenUtils.putNativeOpData(new SpoofCUDA(tmp.getValue()));\n- else\n+ else {\nLOG.warn(\"CUDA compilation failed, falling back to JAVA\");\n+ tmp.getValue().setGeneratorAPI(GeneratorAPI.JAVA);\n}\n-\n- if(API == GeneratorAPI.JAVA || !native_compiled_successfully) {\n- src = tmp.getValue().codegen(false, GeneratorAPI.JAVA);\n- cla = CodegenUtils.compileClass(\"codegen.\"+ tmp.getValue().getClassname(), src);\n+ }\n+ else\n+ LOG.warn(\"CPlan \" + tmp.getValue().getVarname() + \" not supported by SPOOF CUDA\");\n}\n//explain debug output cplans or generated source code\n@@ -536,9 +543,16 @@ public class SpoofCompiler {\n+ Explain.explainCPlan(cplan.getValue().getValue()));\n}\nif( LOG.isTraceEnabled() || 
DMLScript.EXPLAIN.isRuntimeType(recompile) ) {\n- LOG.info(\"Codegen EXPLAIN (generated code for HopID: \" + cplan.getKey() +\n+ LOG.info(\"JAVA Codegen EXPLAIN (generated code for HopID: \" + cplan.getKey() +\n\", line \"+tmp.getValue().getBeginLine() + \", hash=\"+tmp.getValue().hashCode()+\"):\");\n- LOG.info(src);\n+ LOG.info(CodegenUtils.printWithLineNumber(src));\n+\n+ if(API == GeneratorAPI.CUDA) {\n+ LOG.info(\"CUDA Codegen EXPLAIN (generated code for HopID: \" + cplan.getKey() +\n+ \", line \" + tmp.getValue().getBeginLine() + \", hash=\" + tmp.getValue().hashCode() + \"):\");\n+\n+ LOG.info(CodegenUtils.printWithLineNumber(src_cuda));\n+ }\n}\n//maintain plan cache\n@@ -550,8 +564,20 @@ public class SpoofCompiler {\n}\n//make class available and maintain hits\n- if(cla != null || API != GeneratorAPI.JAVA)\n+ if(cla != null) {\n+ if(CodegenUtils.getNativeOpData(cla.getName()) != null) {\n+ if(tmp.getValue().getVarname() == null) {\n+ tmp.getValue().setVarName(cla.getName());\n+ if(tmp.getValue().getGeneratorAPI() != CodegenUtils.getNativeOpData(cla.getName())\n+ .getCNodeTemplate().getGeneratorAPI())\n+ {\n+ tmp.getValue().setGeneratorAPI(CodegenUtils.getNativeOpData(cla.getName())\n+ .getCNodeTemplate().getGeneratorAPI());\n+ }\n+ }\n+ }\nclas.put(cplan.getKey(), new Pair<Hop[], Class<?>>(tmp.getKey(), cla));\n+ }\nif( DMLScript.STATISTICS )\nStatistics.incrementCodegenOpCacheTotal();\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNode.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNode.java",
"diff": "@@ -73,6 +73,7 @@ public abstract class CNode\n}\npublic String createVarname() {\n+ if(_genVar == null)\n_genVar = \"TMP\"+_seqVar.getNextID();\nreturn _genVar;\n}\n@@ -83,7 +84,6 @@ public abstract class CNode\npublic String getVarname(GeneratorAPI api) { return getVarname(); }\n-\npublic String getVectorLength() {\nif( getVarname().startsWith(\"a\") )\nreturn \"len\";\n@@ -264,4 +264,6 @@ public abstract class CNode\n}\npublic abstract boolean isSupported(GeneratorAPI api);\n+\n+ public void setVarName(String name) { _genVar = name; }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNodeCell.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNodeCell.java",
"diff": "@@ -125,7 +125,11 @@ public class CNodeCell extends CNodeTpl\nString tmpDense = _output.codegen(false, api);\n_output.resetGenerated();\n+ if(getVarname() == null)\ntmp = tmp.replace(\"%TMP%\", createVarname());\n+ else\n+ tmp = tmp.replace(\"%TMP%\", getVarname());\n+\ntmp = tmp.replace(\"%BODY_dense%\", tmpDense);\n//return last TMP\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNodeTpl.java",
"new_path": "src/main/java/org/apache/sysds/hops/codegen/cplan/CNodeTpl.java",
"diff": "@@ -236,4 +236,6 @@ public abstract class CNodeTpl extends CNode implements Cloneable\n}\npublic GeneratorAPI getGeneratorAPI() { return api; }\n+\n+ public void setGeneratorAPI(GeneratorAPI _api) { api = _api; }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/SpoofFused.java",
"new_path": "src/main/java/org/apache/sysds/lops/SpoofFused.java",
"diff": "@@ -34,7 +34,7 @@ public class SpoofFused extends Lop\nprivate final int _numThreads;\nprivate final String _genVarName;\n- private GeneratorAPI _api;\n+ private final GeneratorAPI _api;\npublic SpoofFused(ArrayList<Lop> inputs, DataType dt, ValueType vt, Class<?> cla, GeneratorAPI api,\nString genVarName, int k, ExecType etype) {\nsuper(Type.SpoofFused, dt, vt);\n@@ -110,10 +110,14 @@ public class SpoofFused extends Lop\nsb.append( OPERAND_DELIMITOR );\nsb.append( _api);\nsb.append( OPERAND_DELIMITOR );\n- if(_class != null)\n- sb.append( _class.getName() );\n+ if(_api == GeneratorAPI.CUDA)\n+ if(_genVarName.contains(\"codegen\"))\n+ sb.append(_genVarName);\n+ else\n+ sb.append(\"codegen.\").append(_genVarName);\nelse\n- sb.append(\"codegen.\" + _genVarName);\n+ sb.append( _class.getName() );\n+\nfor(int i=0; i < inputs.length; i++) {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2825] Avoid recompiling generated cuda operators |
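The intent of this change is to compile a generated operator once and hand back the cached handle (and its generated variable name) on later plan compilations. Below is a minimal sketch of that compile-once pattern; CompiledOp and the compile step are placeholders, not the actual CodegenUtils/SpoofCUDA classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative compile-once cache; CompiledOp and compile() are stand-ins,
// not SystemDS's CodegenUtils / SpoofCUDA API.
public class OperatorCacheSketch {
    static final class CompiledOp {
        final String name;
        CompiledOp(String name) { this.name = name; }
    }

    private final Map<String, CompiledOp> cache = new ConcurrentHashMap<>();

    CompiledOp getOrCompile(String genClassName, String sourceCode) {
        // compile only on the first request for this generated class name
        return cache.computeIfAbsent(genClassName, n -> {
            System.out.println("compiling " + n + " (" + sourceCode.length() + " chars)");
            return new CompiledOp(n);
        });
    }

    public static void main(String[] args) {
        OperatorCacheSketch c = new OperatorCacheSketch();
        c.getOrCompile("codegen.TMP1", "// generated source");
        c.getOrCompile("codegen.TMP1", "// generated source"); // served from cache, no recompile
    }
}
```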
49,693 | 05.02.2021 02:09:27 | -3,600 | 81b806835ba198fd94cc6d512990b883c7995c4e | [MINOR] Better codegen source debug print with line numbers | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/codegen/CodegenUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/codegen/CodegenUtils.java",
"diff": "@@ -28,6 +28,7 @@ import java.util.Arrays;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.Map.Entry;\n+import java.util.Scanner;\nimport java.util.concurrent.ConcurrentHashMap;\nimport javax.tools.Diagnostic;\n@@ -315,4 +316,41 @@ public class CodegenUtils\nLocalFileUtils.createLocalFileIfNotExist(tmp);\n_workingDir = tmp;\n}\n+\n+ /**\n+ * <p>Extension of org.apache.commons.lang.StringUtils\n+ * to account for negatives and decimals.</p>\n+ *\n+ * @param str the String to check, may be null\n+ * @return <code>true</code> if only contains digits,-,., and is non-null\n+ */\n+ public static boolean isNumeric(String str) {\n+ if (str == null) {\n+ return false;\n+ }\n+ int sz = str.length();\n+ for (int i = 0; i < sz; i++) {\n+ if (!Character.isDigit(str.charAt(i))) {\n+ if((str.charAt(i) == '-') && (i == 0))\n+ continue;\n+// if((str.charAt(i) == '.') && (sz > 1))\n+// continue;\n+ return false;\n+ }\n+ }\n+ return true;\n+ }\n+\n+ public static String printWithLineNumber(String src) {\n+ StringBuilder sb = new StringBuilder();\n+ sb.append(\"\\n\");\n+ Scanner scanner = new Scanner(src);\n+ int line_count = 0;\n+ while (scanner.hasNextLine()) {\n+ String line = scanner.nextLine();\n+ sb.append(line_count++ + \": \" + line + System.lineSeparator());\n+ }\n+ scanner.close();\n+ return sb.toString();\n+ }\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Better codegen source debug print with line numbers |
49,738 | 09.02.2021 17:30:47 | -3,600 | 0a112356e059c20baf609cd6c1f06a232ddd2f4c | Fix missing federated col-partitioned matrix multiply
This patch adds the missing support for federated matrix multiplication
for column-partitioned federated matrices. In addition, we changed the
log level of the federated request command from info to debug, for reduced
default output in local tests. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederatedWorkerHandler.java",
"diff": "@@ -96,10 +96,10 @@ public class FederatedWorkerHandler extends ChannelInboundHandlerAdapter {\nfor(int i = 0; i < requests.length; i++) {\nFederatedRequest request = requests[i];\n- if(log.isInfoEnabled()) {\n- log.info(\"Executing command \" + (i + 1) + \"/\" + requests.length + \": \" + request.getType().name());\nif(log.isDebugEnabled()) {\n- log.debug(\"full command: \" + request.toString());\n+ log.debug(\"Executing command \" + (i + 1) + \"/\" + requests.length + \": \" + request.getType().name());\n+ if(log.isTraceEnabled()) {\n+ log.trace(\"full command: \" + request.toString());\n}\n}\nPrivacyMonitor.setCheckPrivacy(request.checkPrivacy());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/federated/FederationMap.java",
"diff": "@@ -148,13 +148,16 @@ public class FederationMap {\nint[][] ix = new int[_fedMap.size()][];\nint pos = 0;\nfor(Entry<FederatedRange, FederatedData> e : _fedMap.entrySet()) {\n- int rl, ru, cl, cu;\n- // TODO Handle different cases than ROW aligned Matrices.\n- rl = transposed ? 0 : e.getKey().getBeginDimsInt()[0];\n- ru = transposed ? cb.getNumRows() - 1 : e.getKey().getEndDimsInt()[0] - 1;\n- cl = transposed ? e.getKey().getBeginDimsInt()[0] : 0;\n- cu = transposed ? e.getKey().getEndDimsInt()[0] - 1 : cb.getNumColumns() - 1;\n- ix[pos++] = new int[] {rl, ru, cl, cu};\n+ int beg = e.getKey().getBeginDimsInt()[(_type == FType.ROW ? 0 : 1)];\n+ int end = e.getKey().getEndDimsInt()[(_type == FType.ROW ? 0 : 1)];\n+ int nr = _type == FType.ROW ? cb.getNumRows() : cb.getNumColumns();\n+ int nc = _type == FType.ROW ? cb.getNumColumns() : cb.getNumRows();\n+ int rl = transposed ? 0 : beg;\n+ int ru = transposed ? nr - 1 : end - 1;\n+ int cl = transposed ? beg : 0;\n+ int cu = transposed ? end - 1 : nc - 1;\n+ ix[pos++] = _type == FType.ROW ?\n+ new int[] {rl, ru, cl, cu} : new int[] {cl, cu, rl, ru};\n}\n// multi-threaded block slicing and federation request creation\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateBinaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/AggregateBinaryFEDInstruction.java",
"diff": "@@ -110,6 +110,19 @@ public class AggregateBinaryFEDInstruction extends BinaryFEDInstruction {\nMatrixBlock ret = FederationUtils.aggAdd(tmp);\nec.setMatrixOutput(output.getName(), ret);\n}\n+ //#3 col-federated matrix vector multiplication\n+ else if (mo1.isFederated(FType.COL)) {// VM + MM\n+ //construct commands: broadcast rhs, fed mv, retrieve results\n+ FederatedRequest[] fr1 = mo1.getFedMapping().broadcastSliced(mo2, true);\n+ FederatedRequest fr2 = FederationUtils.callInstruction(instString, output,\n+ new CPOperand[]{input1, input2}, new long[]{mo1.getFedMapping().getID(), fr1[0].getID()});\n+ FederatedRequest fr3 = new FederatedRequest(RequestType.GET_VAR, fr2.getID());\n+ FederatedRequest fr4 = mo1.getFedMapping().cleanup(getTID(), fr1[0].getID(), fr2.getID());\n+ //execute federated operations and aggregate\n+ Future<FederatedResponse>[] tmp = mo1.getFedMapping().execute(getTID(), fr1, fr2, fr3, fr4);\n+ MatrixBlock ret = FederationUtils.aggAdd(tmp);\n+ ec.setMatrixOutput(output.getName(), ret);\n+ }\nelse { //other combinations\nthrow new DMLRuntimeException(\"Federated AggregateBinary not supported with the \"\n+ \"following federated objects: \"+mo1.isFederated()+\":\"+mo1.getFedMapping()\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -78,7 +78,7 @@ public class FEDInstructionUtils {\nif( instruction.input1.isMatrix() && instruction.input2.isMatrix() ) {\nMatrixObject mo1 = ec.getMatrixObject(instruction.input1);\nMatrixObject mo2 = ec.getMatrixObject(instruction.input2);\n- if (mo1.isFederated(FType.ROW) || mo2.isFederated(FType.ROW)) {\n+ if (mo1.isFederated(FType.ROW) || mo2.isFederated(FType.ROW) || mo1.isFederated(FType.COL)) {\nfedinst = AggregateBinaryFEDInstruction.parseInstruction(inst.getInstructionString());\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2855] Fix missing federated col-partitioned matrix multiply
This patch adds the missing support for federated matrix multiplication
for column partitioned federated matrices. In addition, we changed the
log level of federated request command from info to debug, for reduced
default output in local tests. |
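The new branch relies on the block identity [X1 | X2] %*% [v1; v2] = X1 %*% v1 + X2 %*% v2: each worker multiplies its column slice with the matching row slice of the broadcast right-hand side, and the coordinator adds the partial results (aggAdd). A small self-contained numeric check of that identity:

```java
// Self-contained check that [X1 | X2] %*% [v1; v2] = X1%*%v1 + X2%*%v2,
// the identity behind the col-partitioned federated matrix-vector multiply.
public class ColPartMvSketch {
    static double[] mv(double[][] A, double[] x) {
        double[] y = new double[A.length];
        for (int i = 0; i < A.length; i++)
            for (int j = 0; j < x.length; j++)
                y[i] += A[i][j] * x[j];
        return y;
    }

    public static void main(String[] args) {
        double[][] X1 = {{1, 2}, {3, 4}};   // columns 1-2 (worker 1)
        double[][] X2 = {{5, 6}, {7, 8}};   // columns 3-4 (worker 2)
        double[] v1 = {1, 1};               // matching slice of v
        double[] v2 = {2, 2};
        double[] p1 = mv(X1, v1);           // partial result from worker 1
        double[] p2 = mv(X2, v2);           // partial result from worker 2
        for (int i = 0; i < p1.length; i++) // aggregate by addition (aggAdd)
            System.out.println(p1[i] + p2[i]); // equals the full X %*% v
    }
}
```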
49,689 | 10.02.2021 20:15:04 | -3,600 | b37026bac137f780bc9f3fb34887c7dac35a2a5c | Fix Cost&Size eviction policy
This patch fixes a bug in the logic of adjusting scores
by cache reference count. In addition to that, this patch
makes the estimation of saved and missed compute time more
robust and accurate. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -153,6 +153,9 @@ public class LineageCache\nelse\nec.setScalarOutput(outName, e.getSOValue());\nreuse = true;\n+\n+ if (DMLScript.STATISTICS) //increment saved time\n+ LineageCacheStatistics.incrementSavedComputeTime(e._computeTime);\n}\nif (DMLScript.STATISTICS)\nLineageCacheStatistics.incrementInstHits();\n@@ -172,6 +175,7 @@ public class LineageCache\nreturn false;\nboolean reuse = (outParams.size() != 0);\n+ long savedComputeTime = 0;\nHashMap<String, Data> funcOutputs = new HashMap<>();\nHashMap<String, LineageItem> funcLIs = new HashMap<>();\nfor (int i=0; i<numOutputs; i++) {\n@@ -211,6 +215,8 @@ public class LineageCache\nfuncOutputs.put(boundVarName, boundValue);\nLineageItem orig = e._origItem;\nfuncLIs.put(boundVarName, orig);\n+ //all the entries have the same computeTime\n+ savedComputeTime = e._computeTime;\n}\nelse {\n// if one output cannot be reused, we need to execute the function\n@@ -231,6 +237,9 @@ public class LineageCache\n});\n//map original lineage items return to the calling site\nfuncLIs.forEach((var, li) -> ec.getLineage().set(var, li));\n+\n+ if (DMLScript.STATISTICS) //increment saved time\n+ LineageCacheStatistics.incrementSavedComputeTime(savedComputeTime);\n}\nreturn reuse;\n@@ -246,6 +255,7 @@ public class LineageCache\nboolean reuse = false;\nList<Long> outIds = udf.getOutputIds();\nHashMap<String, Data> udfOutputs = new HashMap<>();\n+ long savedComputeTime = 0;\n//TODO: support multi-return UDFs\nif (udf.getLineageItem(ec) == null)\n@@ -278,6 +288,7 @@ public class LineageCache\noutValue = e.getSOValue();\n}\nudfOutputs.put(outName, outValue);\n+ savedComputeTime = e._computeTime;\nreuse = true;\n}\nelse\n@@ -298,9 +309,11 @@ public class LineageCache\nres = LineageItemUtils.setUDFResponse(udf, (MatrixObject) val);\n}\n- if (DMLScript.STATISTICS)\n+ if (DMLScript.STATISTICS) {\n//TODO: dedicated stats for federated reuse\nLineageCacheStatistics.incrementInstHits();\n+ LineageCacheStatistics.incrementSavedComputeTime(savedComputeTime);\n+ }\nreturn res;\n}\n@@ -324,6 +337,14 @@ public class LineageCache\nreturn e.getMBValue();\n}\n+ public static LineageCacheEntry getEntry(LineageItem key) {\n+ LineageCacheEntry e = null;\n+ synchronized( _cache ) {\n+ e = getIntern(key);\n+ }\n+ return e;\n+ }\n+\n//NOTE: safe to pin the object in memory as coming from CPInstruction\n//TODO why do we need both of these public put methods\npublic static void putMatrix(Instruction inst, ExecutionContext ec, long computetime) {\n@@ -545,11 +566,10 @@ public class LineageCache\n// This method is called only when entry is present either in cache or in local FS.\nLineageCacheEntry e = _cache.get(key);\nif (e != null && e.getCacheStatus() != LineageCacheStatus.SPILLED) {\n- if (DMLScript.STATISTICS) {\n- // Increment hit count and saved computation time.\n+ if (DMLScript.STATISTICS)\n+ // Increment hit count.\nLineageCacheStatistics.incrementMemHits();\n- LineageCacheStatistics.incrementSavedComputeTime(e._computeTime);\n- }\n+\n// Maintain order for eviction\nLineageCacheEviction.getEntry(e);\nreturn e;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEntry.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEntry.java",
"diff": "@@ -141,27 +141,23 @@ public class LineageCacheEntry {\n}\nprotected synchronized void computeScore(Map<LineageItem, Integer> removeList) {\n+ // Set timestamp and compute initial score\nsetTimestamp();\n- if (removeList.containsKey(_key)) {\n- //FIXME: increase computetime instead of score (that now leads to overflow).\n- // updating computingtime seamlessly takes care of spilling\n- //_computeTime = _computeTime * (1 + removeList.get(_key));\n- score = score * (1 + removeList.get(_key));\n- }\n- if (_computeTime < 0)\n- System.out.println(\"after recache: \"+_computeTime+\" miss count: \"+removeList.get(_key));\n- }\n-\n- protected synchronized void updateComputeTime() {\n- if ((Long.MAX_VALUE - _computeTime) < _computeTime) {\n- System.out.println(\"Overflow for: \"+_key.getOpcode());\n- }\n- //FIXME: increase computetime instead of score (that now leads to overflow).\n- // updating computingtime seamlessly takes care of spilling\n- //_computeTime = _computeTime * (1 + removeList.get(_key));\n- //_computeTime += _computeTime;\n- //recomputeScore();\n- score *= 2;\n+\n+ // Update score to emulate computeTime scaling by #misses\n+ if (removeList.containsKey(_key) && LineageCacheConfig.isCostNsize()) {\n+ //score = score * (1 + removeList.get(_key));\n+ double w1 = LineageCacheConfig.WEIGHTS[0];\n+ int missCount = 1 + removeList.get(_key);\n+ score = score + (w1*(((double)_computeTime)/getSize()) * missCount);\n+ }\n+ }\n+\n+ protected synchronized void updateScore() {\n+ // Update score to emulate computeTime scaling by cache hit\n+ //score *= 2;\n+ double w1 = LineageCacheConfig.WEIGHTS[0];\n+ score = score + w1*(((double)_computeTime)/getSize());\n}\nprotected synchronized long getTimestamp() {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheEviction.java",
"diff": "@@ -70,7 +70,8 @@ public class LineageCacheEviction\n// Don't add the memory pinned entries in weighted queue.\n// The eviction queue should contain only entries that can\n// be removed or spilled to disk.\n- //entry.setTimestamp();\n+\n+ // Set timestamp, score, and scale score by #misses\nentry.computeScore(_removelist);\n// Adjust score according to cache miss counts.\nweightedQueue.add(entry);\n@@ -85,11 +86,11 @@ public class LineageCacheEviction\nweightedQueue.add(entry);\n}\n}\n- // Increase computation time of the sought entry.\n+ // Scale score of the sought entry after every cache hit\n// FIXME: avoid when called from partial reuse methods\nif (LineageCacheConfig.isCostNsize()) {\nif (weightedQueue.remove(entry)) {\n- entry.updateComputeTime();\n+ entry.updateScore();\nweightedQueue.add(entry);\n}\n}\n@@ -99,7 +100,7 @@ public class LineageCacheEviction\nif (cache.remove(e._key) != null)\n_cachesize -= e.getSize();\n- // Increase priority if same entry is removed multiple times\n+ // Maintain miss count to increase the score if the item enters the cache again\nif (_removelist.containsKey(e._key))\n_removelist.replace(e._key, _removelist.get(e._key)+1);\nelse\n@@ -224,7 +225,6 @@ public class LineageCacheEviction\n// Estimate time to write to FS + read from FS.\ndouble spilltime = getDiskSpillEstimate(e) * 1000; // in milliseconds\ndouble exectime = ((double) e._computeTime) / 1000000; // in milliseconds\n- //FIXME: this comuteTime is not adjusted according to hit/miss counts\nif (LineageCache.DEBUG) {\nSystem.out.print(\"LI = \" + e._key.getOpcode());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageEstimatorStatistics.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageEstimatorStatistics.java",
"diff": "@@ -47,7 +47,7 @@ public class LineageEstimatorStatistics {\n}\npublic static String displaySize() {\n- //size of all cached reusable intermediates/size of reused intermediates//cache size\n+ //size of all cached reusable intermediates/size of reused intermediates/cache size\nStringBuilder sb = new StringBuilder();\nsb.append(String.format(\"%.3f\", ((double)LineageEstimator._totReusableSize)/(1024*1024))); //in MB\nsb.append(\"/\");\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageRewriteReuse.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageRewriteReuse.java",
"diff": "@@ -72,6 +72,7 @@ public class LineageRewriteReuse\nprivate static BasicProgramBlock _lrPB = null;\nprivate static ExecutionContext _lrEC = null;\nprivate static boolean _disableReuse = true;\n+ private static long _computeTime = 0;\nprivate static final Log LOG = LogFactory.getLog(LineageRewriteReuse.class.getName());\npublic static boolean executeRewrites (Instruction curr, ExecutionContext ec)\n@@ -120,7 +121,9 @@ public class LineageRewriteReuse\nec.setVariable(((ComputationCPInstruction)curr).output.getName(), lrwec.getVariable(LR_VAR));\n//put the result into the cache\n- LineageCache.putMatrix(curr, ec, t1-t0);\n+ //Projected CT(Rewritten entry) = CT(last entry) + CT(rewrite), where CT = ComputeTime\n+ long totCT = _computeTime + (t1-t0);\n+ LineageCache.putMatrix(curr, ec, totCT);\nDMLScript.EXPLAIN = et; //TODO can't change this here\n//cleanup execution context\n@@ -826,8 +829,10 @@ public class LineageRewriteReuse\n// create tsmm lineage on top of the input of last append\nLineageItem input1 = source.getInputs()[0];\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {input1});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n// look for the old matrix in cache\nif( LineageCache.probe(input1) )\ninCache.put(\"X\", LineageCache.getMatrix(input1));\n@@ -863,8 +868,10 @@ public class LineageRewriteReuse\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {input1});\nif( LineageCache.probe(input1) )\ninCache.put(\"X\", LineageCache.getMatrix(input1));\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime += LineageCache.getEntry(tmp)._computeTime;\n+ }\n}\n}\n// return true only if the last tsmm result is found\n@@ -884,8 +891,10 @@ public class LineageRewriteReuse\n// create tsmm lineage on top of the input of last append\nLineageItem input1 = source.getInputs()[0];\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {input1});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n// look for the appended column in cache\nif (source.getInputs().length>1 && LineageCache.probe(source.getInputs()[1]))\ninCache.put(\"deltaX\", LineageCache.getMatrix(source.getInputs()[1]));\n@@ -912,8 +921,10 @@ public class LineageRewriteReuse\nLineageItem L2appin1 = input.getInputs()[0];\nLineageItem tmp = new LineageItem(\"cbind\", new LineageItem[] {L2appin1, source.getInputs()[1]});\nLineageItem toProbe = new LineageItem(curr.getOpcode(), new LineageItem[] {tmp});\n- if (LineageCache.probe(toProbe))\n+ if (LineageCache.probe(toProbe)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(toProbe));\n+ _computeTime = LineageCache.getEntry(toProbe)._computeTime;\n+ }\n// look for the appended column in cache\nif (LineageCache.probe(input.getInputs()[1]))\ninCache.put(\"deltaX\", LineageCache.getMatrix(input.getInputs()[1]));\n@@ -951,8 +962,10 @@ public class LineageRewriteReuse\nLineageItem old_cbind = new LineageItem(\"cbind\", new LineageItem[] {L2appin1, old_RI});\nLineageItem tmp = new LineageItem(\"cbind\", new LineageItem[] {old_cbind, source.getInputs()[1]});\nLineageItem toProbe = new LineageItem(curr.getOpcode(), new LineageItem[] {tmp});\n- if 
(LineageCache.probe(toProbe))\n+ if (LineageCache.probe(toProbe)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(toProbe));\n+ _computeTime = LineageCache.getEntry(toProbe)._computeTime;\n+ }\n}\n}\n}\n@@ -974,8 +987,10 @@ public class LineageRewriteReuse\nLineageItem leftSource = left.getInputs()[0]; //left inpur of rbind = X\n// create ba+* lineage on top of the input of last append\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {leftSource, right});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n// look for the appended column in cache\nif (LineageCache.probe(left.getInputs()[1]))\ninCache.put(\"deltaX\", LineageCache.getMatrix(left.getInputs()[1]));\n@@ -999,8 +1014,10 @@ public class LineageRewriteReuse\nLineageItem rightSource = right.getInputs()[0]; //left inpur of rbind = X\n// create ba+* lineage on top of the input of last append\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {left, rightSource});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n// look for the appended column in cache\nif (LineageCache.probe(right.getInputs()[1]))\ninCache.put(\"deltaY\", LineageCache.getMatrix(right.getInputs()[1]));\n@@ -1030,8 +1047,10 @@ public class LineageRewriteReuse\nreturn false;\n// create ba+* lineage on top of the input of last append\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {left, rightSource1});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n}\n}\nreturn inCache.containsKey(\"lastMatrix\") ? 
true : false;\n@@ -1052,8 +1071,10 @@ public class LineageRewriteReuse\nLineageItem rightSource = right.getInputs()[0]; //right inpur of rbind = Y\n// create * lineage on top of the input of last append\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {leftSource, rightSource});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n// look for the appended rows in cache\nif (LineageCache.probe(left.getInputs()[1]))\ninCache.put(\"deltaX\", LineageCache.getMatrix(left.getInputs()[1]));\n@@ -1079,8 +1100,10 @@ public class LineageRewriteReuse\nLineageItem rightSource = right.getInputs()[0]; //right inpur of cbind = Y\n// create * lineage on top of the input of last append\nLineageItem tmp = new LineageItem(curr.getOpcode(), new LineageItem[] {leftSource, rightSource});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n// look for the appended columns in cache\nif (LineageCache.probe(left.getInputs()[1]))\ninCache.put(\"deltaX\", LineageCache.getMatrix(left.getInputs()[1]));\n@@ -1110,8 +1133,10 @@ public class LineageRewriteReuse\nLineageItem input1 = target.getInputs()[0];\nLineageItem tmp = new LineageItem(curr.getOpcode(),\nnew LineageItem[] {input1, groups, weights, fn, ngroups});\n- if (LineageCache.probe(tmp))\n+ if (LineageCache.probe(tmp)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(tmp));\n+ _computeTime = LineageCache.getEntry(tmp)._computeTime;\n+ }\n// look for the appended column in cache\nif (LineageCache.probe(target.getInputs()[1]))\ninCache.put(\"deltaX\", LineageCache.getMatrix(target.getInputs()[1]));\n@@ -1137,8 +1162,10 @@ public class LineageRewriteReuse\nLineageItem right = item.getInputs()[1];\nif (right.getOpcode().equalsIgnoreCase(\"rightIndex\")) {\nLineageItem indexSource = right.getInputs()[0];\n- if (LineageCache.probe(indexSource) && indexSource.getOpcode().equalsIgnoreCase(\"ba+*\"))\n+ if (LineageCache.probe(indexSource) && indexSource.getOpcode().equalsIgnoreCase(\"ba+*\")) {\ninCache.put(\"indexSource\", LineageCache.getMatrix(indexSource));\n+ _computeTime = LineageCache.getEntry(indexSource)._computeTime;\n+ }\nLineageItem tmp = new LineageItem(item.getOpcode(), new LineageItem[] {left, indexSource});\nif (LineageCache.probe(tmp))\ninCache.put(\"BigMatMult\", LineageCache.getMatrix(tmp));\n@@ -1160,8 +1187,10 @@ public class LineageRewriteReuse\nLineageItem src21 = src1.getInputs()[0];\nLineageItem src22 = src1.getInputs()[1]; //ones\nif (src21.getOpcode().equalsIgnoreCase(\"ba+*\")) {\n- if (LineageCache.probe(src21))\n+ if (LineageCache.probe(src21)) {\ninCache.put(\"projected\", LineageCache.getMatrix(src21));\n+ _computeTime = LineageCache.getEntry(src21)._computeTime;\n+ }\nLineageItem src31 = src21.getInputs()[1];\nLineageItem src32 = src21.getInputs()[0];\n@@ -1174,8 +1203,10 @@ public class LineageRewriteReuse\nLineageItem old_ba = new LineageItem(\"ba+*\", new LineageItem[] {src32, old_RI});\nLineageItem old_cbind = new LineageItem(\"cbind\", new LineageItem[] {old_ba, src22});\nLineageItem old_tsmm = new LineageItem(\"tsmm\", new LineageItem[] {old_cbind});\n- if (LineageCache.probe(old_tsmm))\n+ if (LineageCache.probe(old_tsmm)) {\ninCache.put(\"lastMatrix\", LineageCache.getMatrix(old_tsmm));\n+ _computeTime += 
LineageCache.getEntry(old_tsmm)._computeTime;\n+ }\n}\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2739] Fix Cost&Size eviction policy
This patch fixes a bug in the logic of adjusting scores
by cache reference count. In addition to that, this patch
makes the estimation of saved and missed compute time more
robust and accurate. |
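The scoring idea in this patch is that an entry's priority grows additively with its compute time per byte, scaled by how often it was missed after eviction or hit in the cache. The sketch below mirrors that arithmetic with stand-in fields and a fixed weight; it is not the actual LineageCacheEntry implementation:

```java
// Illustrative scoring only; the weight and fields mirror the idea in the patch
// (score grows additively with computeTime/size per miss or hit), not the exact
// LineageCacheEntry code.
public class CacheScoreSketch {
    static final double W1 = 1.0; // stand-in for LineageCacheConfig.WEIGHTS[0]

    long computeTime;  // nanoseconds spent to produce the entry
    long size;         // bytes occupied in the cache
    double score;

    CacheScoreSketch(long computeTime, long size) {
        this.computeTime = computeTime;
        this.size = size;
        this.score = W1 * ((double) computeTime) / size;
    }

    void onReadmission(int missCount) {       // entry was evicted before, now cached again
        score += W1 * ((double) computeTime) / size * missCount;
    }

    void onHit() {                            // entry was reused from the cache
        score += W1 * ((double) computeTime) / size;
    }

    public static void main(String[] args) {
        CacheScoreSketch e = new CacheScoreSketch(2_000_000, 8_192);
        e.onHit();
        e.onReadmission(3);
        System.out.println("score = " + e.score);
    }
}
```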
49,706 | 11.02.2021 11:03:12 | -3,600 | 65d07b355750ba35c10f8f56e9e080769b9770eb | [MINOR] parse threads in fed binary instruction | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"diff": "@@ -34,11 +34,14 @@ public abstract class BinaryFEDInstruction extends ComputationFEDInstruction {\npublic static BinaryFEDInstruction parseInstruction(String str) {\nString[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\n- InstructionUtils.checkNumFields(parts, 3);\n+ InstructionUtils.checkNumFields(parts, 3, 4);\nString opcode = parts[0];\nCPOperand in1 = new CPOperand(parts[1]);\nCPOperand in2 = new CPOperand(parts[2]);\nCPOperand out = new CPOperand(parts[3]);\n+ // threads to use\n+ int k = Integer.parseInt(parts[4]);\n+\ncheckOutputDataType(in1, in2, out);\nOperator operator = InstructionUtils.parseBinaryOrBuiltinOperator(opcode, in1, in2);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] parse threads in fed binary instruction |
49,706 | 11.02.2021 11:46:18 | -3,600 | 0ca9d79718161b99678a6bf4b0fb927428f1daeb | [MINOR] fix log instruction parsing for builtin | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"diff": "@@ -413,12 +413,14 @@ public class CPInstructionParser extends InstructionParser\ncase Builtin:\nString[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\nif ( parts[0].equals(\"log\") || parts[0].equals(\"log_nz\") ) {\n- if ( parts.length == 3 || (parts.length == 5 &&\n+ UtilFunctions.isIntegerNumber(parts[parts.length-1]);\n+ if ( parts.length == 4 || (parts.length == 6 &&\nUtilFunctions.isIntegerNumber(parts[3])) ) {\n// B=log(A), y=log(x)\nreturn UnaryCPInstruction.parseInstruction(str);\n- } else if ( parts.length == 4 ) {\n+ } else if ( parts.length == 5 ) {\n// B=log(A,10), y=log(x,10)\n+\nreturn BinaryCPInstruction.parseInstruction(str);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"diff": "@@ -39,8 +39,6 @@ public abstract class BinaryFEDInstruction extends ComputationFEDInstruction {\nCPOperand in1 = new CPOperand(parts[1]);\nCPOperand in2 = new CPOperand(parts[2]);\nCPOperand out = new CPOperand(parts[3]);\n- // threads to use\n- int k = Integer.parseInt(parts[4]);\ncheckOutputDataType(in1, in2, out);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedNegativeTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedNegativeTest.java",
"diff": "@@ -47,7 +47,7 @@ public class FederatedNegativeTest {\ntry{\nString[] args = {\"-w\", Integer.toString(port)};\nt = AutomatedTestBase.startLocalFedWorkerWithArgs(args);\n- Thread.sleep(1000);\n+ Thread.sleep(2000);\n} catch(Exception e){\nNegativeTest1();\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] fix log instruction parsing for builtin |
49,738 | 11.02.2021 14:52:21 | -3,600 | 2f39d9bda04b44d95ed44093fe34a939607388a4 | Cleanup binary matrix-matrix/scalar instruction parsing | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"diff": "@@ -413,14 +413,14 @@ public class CPInstructionParser extends InstructionParser\ncase Builtin:\nString[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\nif ( parts[0].equals(\"log\") || parts[0].equals(\"log_nz\") ) {\n- UtilFunctions.isIntegerNumber(parts[parts.length-1]);\n- if ( parts.length == 4 || (parts.length == 6 &&\n+ if ( parts.length == 3 || (parts.length == 5 &&\nUtilFunctions.isIntegerNumber(parts[3])) ) {\n// B=log(A), y=log(x)\nreturn UnaryCPInstruction.parseInstruction(str);\n- } else if ( parts.length == 5 ) {\n+ } else if ( parts.length == 4 || (parts.length == 5 &&\n+ UtilFunctions.isIntegerNumber(parts[4])) ) {\n// B=log(A,10), y=log(x,10)\n-\n+ // num threads non-existing for scalar-scalar\nreturn BinaryCPInstruction.parseInstruction(str);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"diff": "@@ -40,7 +40,6 @@ public abstract class BinaryFEDInstruction extends ComputationFEDInstruction {\nCPOperand in2 = new CPOperand(parts[2]);\nCPOperand out = new CPOperand(parts[3]);\n-\ncheckOutputDataType(in1, in2, out);\nOperator operator = InstructionUtils.parseBinaryOrBuiltinOperator(opcode, in1, in2);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2856] Cleanup binary matrix-matrix/scalar instruction parsing |
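Both of these parsing changes hinge on how many operand fields a log/log_nz instruction carries: unary log has only an input and an output, binary log additionally carries the base, and a trailing integer may encode a thread count. The standalone sketch below reproduces that field-count dispatch with made-up operand arrays; real SystemDS instruction strings contain additional value-type information:

```java
// Hypothetical operand arrays, only to illustrate the field-count dispatch the two
// patches adjust: unary log (B=log(A)) vs. binary log (B=log(A,base)), with an
// optional trailing thread count.
public class LogParseSketch {
    static boolean isInt(String s) {
        return s.matches("-?\\d+");
    }

    static String classify(String[] parts) {
        if (parts.length == 3 || (parts.length == 5 && isInt(parts[3])))
            return "unary log";   // opcode, input, output (+ thread count)
        if (parts.length == 4 || (parts.length == 5 && isInt(parts[4])))
            return "binary log";  // opcode, input, base, output (+ thread count)
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(classify(new String[]{"log", "A", "B"}));            // unary
        System.out.println(classify(new String[]{"log", "A", "10", "B"}));      // binary
        System.out.println(classify(new String[]{"log", "A", "10", "B", "4"})); // binary with threads
    }
}
```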
49,706 | 11.02.2021 17:01:38 | -3,600 | 2d48bb5ffab38fba82d90c99dec81378f862eeff | Federated parameter server scheme as isolated argument | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/Statement.java",
"new_path": "src/main/java/org/apache/sysds/parser/Statement.java",
"diff": "@@ -101,6 +101,7 @@ public abstract class Statement implements ParseInfo\npublic enum PSScheme {\nDISJOINT_CONTIGUOUS, DISJOINT_ROUND_ROBIN, DISJOINT_RANDOM, OVERLAP_RESHUFFLE\n}\n+ public static final String PS_FED_SCHEME = \"fed_scheme\";\npublic enum FederatedPSScheme {\nKEEP_DATA_ON_WORKER, SHUFFLE, REPLICATE_TO_MAX, SUBSAMPLE_TO_MIN, BALANCE_TO_AVG\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParamservBuiltinCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParamservBuiltinCPInstruction.java",
"diff": "@@ -41,6 +41,7 @@ import static org.apache.sysds.parser.Statement.PS_MODE;\nimport static org.apache.sysds.parser.Statement.PS_MODEL;\nimport static org.apache.sysds.parser.Statement.PS_PARALLELISM;\nimport static org.apache.sysds.parser.Statement.PS_SCHEME;\n+import static org.apache.sysds.parser.Statement.PS_FED_SCHEME;\nimport static org.apache.sysds.parser.Statement.PS_UPDATE_FUN;\nimport static org.apache.sysds.parser.Statement.PS_UPDATE_TYPE;\nimport static org.apache.sysds.parser.Statement.PS_FED_RUNTIME_BALANCING;\n@@ -526,11 +527,11 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nprivate FederatedPSScheme getFederatedScheme() {\nFederatedPSScheme federated_scheme = DEFAULT_FEDERATED_SCHEME;\n- if (getParameterMap().containsKey(PS_SCHEME)) {\n+ if (getParameterMap().containsKey(PS_FED_SCHEME)) {\ntry {\n- federated_scheme = FederatedPSScheme.valueOf(getParam(PS_SCHEME));\n+ federated_scheme = FederatedPSScheme.valueOf(getParam(PS_FED_SCHEME));\n} catch (IllegalArgumentException e) {\n- throw new DMLRuntimeException(String.format(\"Paramserv function in federated mode: not support data partition scheme '%s'\", getParam(PS_SCHEME)));\n+ throw new DMLRuntimeException(String.format(\"Paramserv function in federated mode: not support data partition scheme '%s'\", getParam(PS_FED_SCHEME)));\n}\n}\nreturn federated_scheme;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2858] Federated parameterserver shceme isolated argument |
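With the dedicated fed_scheme key, the federated partitioning scheme is read as an optional named parameter that falls back to a default and must name a valid FederatedPSScheme value. A minimal sketch of that lookup pattern is shown below; the parameter map contents and the chosen default are illustrative only:

```java
import java.util.Map;

// Sketch of reading an optional named parameter into an enum with a default,
// mirroring the fed_scheme handling; map contents and default are illustrative.
public class FedSchemeSketch {
    enum FederatedPSScheme { KEEP_DATA_ON_WORKER, SHUFFLE, REPLICATE_TO_MAX, SUBSAMPLE_TO_MIN, BALANCE_TO_AVG }

    static FederatedPSScheme getFederatedScheme(Map<String, String> params) {
        FederatedPSScheme scheme = FederatedPSScheme.KEEP_DATA_ON_WORKER; // stand-in default
        if (params.containsKey("fed_scheme")) {
            try {
                scheme = FederatedPSScheme.valueOf(params.get("fed_scheme"));
            } catch (IllegalArgumentException e) {
                throw new IllegalArgumentException(
                    "unsupported data partition scheme: " + params.get("fed_scheme"));
            }
        }
        return scheme;
    }

    public static void main(String[] args) {
        System.out.println(getFederatedScheme(Map.of("fed_scheme", "SHUFFLE")));
        System.out.println(getFederatedScheme(Map.of())); // falls back to the default
    }
}
```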
49,738 | 11.02.2021 23:48:04 | -3,600 | bc93ea4578d6826eb15e085ad9a06ea6b77a5e43 | Cleanup privacy constraints L2SVM test (arg updates) | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/privacy/algorithms/FederatedL2SVMTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/privacy/algorithms/FederatedL2SVMTest.java",
"diff": "@@ -390,7 +390,7 @@ public class FederatedL2SVMTest extends AutomatedTestBase {\n// Run reference dml script with normal matrix\nfullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n- programArgs = new String[] {\"-args\", input(\"MX1\"), input(\"MX2\"), input(\"MY\"), expected(\"Z\")};\n+ programArgs = new String[] {\"-args\", input(\"MX1\"), input(\"MX2\"), input(\"MY\"), \"FALSE\", expected(\"Z\")};\nrunTest(true, exception1, expectedException1, -1);\n// Run actual dml script with federated matrix\n@@ -398,7 +398,7 @@ public class FederatedL2SVMTest extends AutomatedTestBase {\nprogramArgs = new String[] {\"-checkPrivacy\",\n\"-nvargs\", \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n\"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")), \"rows=\" + rows, \"cols=\" + cols,\n- \"in_Y=\" + input(\"Y\"), \"out=\" + output(\"Z\")};\n+ \"in_Y=\" + input(\"Y\"), \"single=FALSE\", \"out=\" + output(\"Z\")};\nrunTest(true, exception2, expectedException2, -1);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2859] Cleanup privacy constraints L2SVM test (arg updates) |
49,738 | 13.02.2021 00:58:16 | -3,600 | da6a209696baf1102e15c65e4968e8106313a6a5 | [MINOR] Performance local parameter server (parallel updates) | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/LocalPSWorker.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/LocalPSWorker.java",
"diff": "@@ -37,10 +37,15 @@ public class LocalPSWorker extends PSWorker implements Callable<Void> {\nprotected static final Log LOG = LogFactory.getLog(LocalPSWorker.class.getName());\nprivate static final long serialVersionUID = 5195390748495357295L;\n+ private boolean _parUpdates = false;\n+\nprotected LocalPSWorker() {}\n- public LocalPSWorker(int workerID, String updFunc, Statement.PSFrequency freq, int epochs, long batchSize, ExecutionContext ec, ParamServer ps) {\n+ public LocalPSWorker(int workerID, String updFunc, Statement.PSFrequency freq,\n+ int epochs, long batchSize, ExecutionContext ec, ParamServer ps, boolean parUpdates)\n+ {\nsuper(workerID, updFunc, freq, epochs, batchSize, ec, ps);\n+ _parUpdates = parUpdates;\n}\n@Override\n@@ -86,7 +91,8 @@ public class LocalPSWorker extends PSWorker implements Callable<Void> {\nboolean localUpdate = j < batchIter - 1;\n// Accumulate the intermediate gradients\n- accGradients = ParamservUtils.accrueGradients(accGradients, gradients, !localUpdate);\n+ accGradients = ParamservUtils.accrueGradients(\n+ accGradients, gradients, _parUpdates, !localUpdate);\n// Update the local model with gradients\nif(localUpdate)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/DCLocalScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/DCLocalScheme.java",
"diff": "@@ -26,6 +26,7 @@ import java.util.stream.Collectors;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.paramserv.ParamservUtils;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.runtime.util.CollectionUtils;\n/**\n* Disjoint_Contiguous data partitioner:\n@@ -37,6 +38,8 @@ import org.apache.sysds.runtime.matrix.data.MatrixBlock;\npublic class DCLocalScheme extends DataPartitionLocalScheme {\npublic static List<MatrixBlock> partition(int k, MatrixBlock mb) {\n+ if( k == 1 )\n+ return CollectionUtils.asArrayList(mb);\nList<MatrixBlock> list = new ArrayList<>();\nlong stepSize = (long) Math.ceil((double) mb.getNumRows() / k);\nlong begin = 1;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Performance local parameter server (parallel updates) |
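The worker-side idea is to accrue the gradients of several mini-batches, now optionally with parallel updates, and push one accumulated gradient to the parameter server. A plain-array sketch of that accumulation follows (SystemDS performs it on MatrixObjects via ParamservUtils.accrueGradients):

```java
import java.util.Arrays;

// Plain-array sketch of accumulating per-batch gradients before a model update;
// the values are made up and only illustrate the accrual step.
public class GradientAccumulationSketch {
    public static void main(String[] args) {
        double[][] batchGradients = {
            {0.1, -0.2, 0.3},
            {0.05, 0.05, -0.1},
            {-0.15, 0.1, 0.2}
        };
        double[] acc = new double[3];
        for (double[] g : batchGradients)           // accrue gradients across batches
            for (int j = 0; j < g.length; j++)
                acc[j] += g[j];
        System.out.println(Arrays.toString(acc));   // pushed to the parameter server once
    }
}
```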
49,738 | 13.02.2021 02:15:57 | -3,600 | a677fcd5de7de8e46f87eec13833aedbe5e870bd | [MINOR] Fix compressed unary aggregates (in-place binarycell ops)
The compressed unary aggregates used the normal binary element-wise
operations for update-in-place, which broke the contract. These are now
replaced with the appropriate binary in-place operations instead.
Furthermore, the formatting of this file used spaces instead of tabs
which is now also fixed. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibCompAgg.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/LibCompAgg.java",
"diff": "@@ -275,9 +275,7 @@ public class LibCompAgg {\nprivate static void reduceColOverlappingFutures(List<Future<MatrixBlock>> rtasks, MatrixBlock ret,\nAggregateUnaryOperator op) throws InterruptedException, ExecutionException {\nfor(Future<MatrixBlock> rtask : rtasks) {\n- LibMatrixBincell.bincellOp(rtask.get(),\n- ret,\n- ret,\n+ LibMatrixBincell.bincellOpInPlace(ret, rtask.get(),\n(op.aggOp.increOp.fn instanceof KahanFunction) ? new BinaryOperator(\nPlus.getPlusFnObject()) : op.aggOp.increOp);\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix compressed unary aggregates (in-place binarycell ops)
The compressed unary aggregates used the normal binary element-wise
operations for update-in-place which broke the contract. Instead this is
now replaced with the appropriate binary in-place operations.
Furthermore, the formatting of this file used spaces instead of tabs
which is now also fixed. |
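The distinction matters because a copy-style binary cell kernel may reset or reallocate its output before writing, so it presumably must not be handed an output that aliases an input; aggregation into an existing result therefore needs the explicit in-place variant. The arrays and kernels below are stand-ins for MatrixBlock/LibMatrixBincell and only demonstrate the aliasing hazard:

```java
// Why a copy-style kernel must not be called with out aliasing an input:
// such kernels may reset the output first, destroying the operand.
public class InPlaceBincellSketch {
    // copy-style: resets out before writing (as an output buffer may be reused)
    static void bincellAdd(double[] in1, double[] in2, double[] out) {
        java.util.Arrays.fill(out, 0);             // destroys in1 if out == in1
        for (int i = 0; i < out.length; i++)
            out[i] = in1[i] + in2[i];
    }

    // explicit in-place variant: accumulates directly into the left operand
    static void bincellAddInPlace(double[] inOut, double[] in2) {
        for (int i = 0; i < inOut.length; i++)
            inOut[i] += in2[i];
    }

    public static void main(String[] args) {
        double[] ret = {1, 2, 3};
        double[] partial = {10, 20, 30};
        bincellAddInPlace(ret, partial);            // safe aggregation into ret
        System.out.println(java.util.Arrays.toString(ret)); // [11.0, 22.0, 33.0]

        double[] bad = {1, 2, 3};
        bincellAdd(bad, partial, bad);              // aliased call: the reset wipes 'bad' first
        System.out.println(java.util.Arrays.toString(bad)); // [10.0, 20.0, 30.0] (wrong)
    }
}
```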
49,697 | 20.02.2021 17:14:57 | -3,600 | f487c187af9d4202787620dfa34bddd603f93585 | Federated PNMF test, extended quaternary operations
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWCeMMFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWCeMMFEDInstruction.java",
"diff": "@@ -29,6 +29,7 @@ import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.DoubleObject;\n@@ -66,10 +67,7 @@ public class QuaternaryWCeMMFEDInstruction extends QuaternaryFEDInstruction\nnew DoubleObject(ec.getMatrixInput(_input4.getName()).quickGetValue(0, 0));\n}\n- if(!(X.isFederated() && !U.isFederated() && !V.isFederated()))\n- throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\"\n- +X.isFederated()+\", \"+U.isFederated()+\", \"+V.isFederated()+\")\");\n-\n+ if(X.isFederated(FType.ROW) && !U.isFederated() && !V.isFederated()) {\nFederationMap fedMap = X.getFedMapping();\nFederatedRequest[] fr1 = fedMap.broadcastSliced(U, false);\nFederatedRequest fr2 = fedMap.broadcast(V);\n@@ -113,4 +111,9 @@ public class QuaternaryWCeMMFEDInstruction extends QuaternaryFEDInstruction\nAggregateUnaryOperator aop = InstructionUtils.parseBasicAggregateUnaryOperator(\"uak+\");\nec.setVariable(output.getName(), FederationUtils.aggScalar(aop, response));\n}\n+ else {\n+ throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\"\n+ + X.isFederated() + \", \" + U.isFederated() + \", \" + V.isFederated() + \")\");\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWDivMMFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWDivMMFEDInstruction.java",
"diff": "@@ -86,10 +86,7 @@ public class QuaternaryWDivMMFEDInstruction extends QuaternaryFEDInstruction\n}\n}\n- if(!(X.isFederated(FType.ROW) && !U.isFederated() && !V.isFederated()))\n- throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\"\n- +X.isFederated()+\", \"+U.isFederated()+\", \"+V.isFederated() + \")\");\n-\n+ if(X.isFederated(FType.ROW) && !U.isFederated() && !V.isFederated()) {\nFederationMap fedMap = X.getFedMapping();\nFederatedRequest[] frInit1 = fedMap.broadcastSliced(U, false);\nFederatedRequest frInit2 = fedMap.broadcast(V);\n@@ -161,4 +158,10 @@ public class QuaternaryWDivMMFEDInstruction extends QuaternaryFEDInstruction\nthrow new DMLRuntimeException(\"Federated WDivMM only supported for BASIC, LEFT or RIGHT variants.\");\n}\n}\n+ else {\n+ throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\"\n+ + X.isFederated() + \", \" + U.isFederated() + \", \" + V.isFederated() + \")\");\n+ }\n}\n+}\n+\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWSLossFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWSLossFEDInstruction.java",
"diff": "@@ -25,6 +25,7 @@ import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest.RequestType;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\n@@ -69,10 +70,7 @@ public class QuaternaryWSLossFEDInstruction extends QuaternaryFEDInstruction {\nW = ec.getMatrixObject(_input4);\n}\n- if(!(X.isFederated() && !U.isFederated() && !V.isFederated() && (W == null || !W.isFederated())))\n- throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V, W) = (\" + X.isFederated() + \", \"\n- + U.isFederated() + \", \" + V.isFederated() + \", \" + (W != null ? W.isFederated() : \"none\") + \")\");\n-\n+ if(X.isFederated(FType.ROW) && !U.isFederated() && !V.isFederated() && (W == null || !W.isFederated())) {\nFederationMap fedMap = X.getFedMapping();\nFederatedRequest[] frInit1 = fedMap.broadcastSliced(U, false);\nFederatedRequest frInit2 = fedMap.broadcast(V);\n@@ -116,4 +114,9 @@ public class QuaternaryWSLossFEDInstruction extends QuaternaryFEDInstruction {\nAggregateUnaryOperator aop = InstructionUtils.parseBasicAggregateUnaryOperator(\"uak+\");\nec.setVariable(output.getName(), FederationUtils.aggScalar(aop, response));\n}\n+ else {\n+ throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V, W) = (\" + X.isFederated() + \", \"\n+ + U.isFederated() + \", \" + V.isFederated() + \", \" + (W != null ? W.isFederated() : \"none\") + \")\");\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWSigmoidFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWSigmoidFEDInstruction.java",
"diff": "@@ -28,6 +28,7 @@ import org.apache.sysds.runtime.controlprogram.federated.FederatedRequest;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest.RequestType;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationMap;\n+import org.apache.sysds.runtime.controlprogram.federated.FederationMap.FType;\nimport org.apache.sysds.runtime.controlprogram.federated.FederationUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n@@ -58,10 +59,7 @@ public class QuaternaryWSigmoidFEDInstruction extends QuaternaryFEDInstruction {\nMatrixObject U = ec.getMatrixObject(input2);\nMatrixObject V = ec.getMatrixObject(input3);\n- if(!(X.isFederated() && !U.isFederated() && !V.isFederated()))\n- throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\" + X.isFederated() + \", \"\n- + U.isFederated() + \", \" + V.isFederated() + \")\");\n-\n+ if(X.isFederated(FType.ROW) && !U.isFederated() && !V.isFederated()) {\nFederationMap fedMap = X.getFedMapping();\nFederatedRequest[] frInit1 = fedMap.broadcastSliced(U, false);\nFederatedRequest frInit2 = fedMap.broadcast(V);\n@@ -84,6 +82,10 @@ public class QuaternaryWSigmoidFEDInstruction extends QuaternaryFEDInstruction {\n// bind partial results from federated responses\nec.setMatrixOutput(output.getName(), FederationUtils.bind(response, false));\n-\n+ }\n+ else {\n+ throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\"\n+ + X.isFederated() + \", \" + U.isFederated() + \", \" + V.isFederated() + \")\");\n+ }\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWUMMFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/QuaternaryWUMMFEDInstruction.java",
"diff": "@@ -60,10 +60,7 @@ public class QuaternaryWUMMFEDInstruction extends QuaternaryFEDInstruction {\nMatrixObject U = ec.getMatrixObject(input2);\nMatrixObject V = ec.getMatrixObject(input3);\n- if(!(X.isFederated(FType.ROW) && !U.isFederated() && !V.isFederated()))\n- throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\" + X.isFederated() + \", \"\n- + U.isFederated() + \", \" + V.isFederated() + \")\");\n-\n+ if(X.isFederated(FType.ROW) && !U.isFederated() && !V.isFederated()) {\nFederationMap fedMap = X.getFedMapping();\nFederatedRequest[] frInit1 = fedMap.broadcastSliced(U, false);\nFederatedRequest frInit2 = fedMap.broadcast(V);\n@@ -86,4 +83,9 @@ public class QuaternaryWUMMFEDInstruction extends QuaternaryFEDInstruction {\n// bind partial results from federated responses\nec.setMatrixOutput(output.getName(), FederationUtils.bind(response, false));\n}\n+ else {\n+ throw new DMLRuntimeException(\"Unsupported federated inputs (X, U, V) = (\"\n+ + X.isFederated() + \", \" + U.isFederated() + \", \" + V.isFederated() + \")\");\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItem.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageItem.java",
"diff": "@@ -354,7 +354,7 @@ public class LineageItem {\n// Compare a dedup patch with a sub-DAG, and map the inputs of the sub-dag\n// to the placeholder inputs of the dedup patch\n- private boolean equalsDedupPatch(LineageItem dli1, LineageItem dli2, Map<Integer, LineageItem> phMap) {\n+ private static boolean equalsDedupPatch(LineageItem dli1, LineageItem dli2, Map<Integer, LineageItem> phMap) {\nStack<LineageItem> s1 = new Stack<>();\nStack<LineageItem> s2 = new Stack<>();\ns1.push(dli1);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedAlsCGTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedAlsCGTest.java",
"diff": "@@ -46,7 +46,7 @@ public class FederatedAlsCGTest extends AutomatedTestBase\nprivate final static String OUTPUT_NAME = \"Z\";\nprivate final static double TOLERANCE = 0.01;\n- private final static int blocksize = 1024;\n+ private final static int BLOCKSIZE = 1024;\[email protected]()\npublic int rows;\n@@ -112,9 +112,9 @@ public class FederatedAlsCGTest extends AutomatedTestBase\ndouble[][] X2 = getRandomMatrix(fed_rows, fed_cols, 1, 2, sparsity, 2);\nwriteInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(\n- fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\nwriteInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(\n- fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/algorithms/FederatedPNMFTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.federated.algorithms;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.runtime.util.HDFSTool;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+import org.junit.runner.RunWith;\n+import org.junit.runners.Parameterized;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.HashMap;\n+\n+@RunWith(value = Parameterized.class)\[email protected]\n+public class FederatedPNMFTest extends AutomatedTestBase\n+{\n+ private final static String TEST_NAME = \"FederatedPNMFTest\";\n+ private final static String TEST_DIR = \"functions/federated/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + FederatedPNMFTest.class.getSimpleName() + \"/\";\n+\n+ private final static String OUTPUT_NAME = \"Z\";\n+ private final static double TOLERANCE = 0.2;\n+ private final static int BLOCKSIZE = 1024;\n+\n+ @Parameterized.Parameter()\n+ public int rows;\n+ @Parameterized.Parameter(1)\n+ public int cols;\n+ @Parameterized.Parameter(2)\n+ public int rank;\n+ @Parameterized.Parameter(3)\n+ public int max_iter;\n+ @Parameterized.Parameter(4)\n+ public double sparsity;\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[]{OUTPUT_NAME}));\n+ }\n+\n+ @Parameterized.Parameters\n+ public static Collection<Object[]> data() {\n+ // rows must be even\n+ return Arrays.asList(new Object[][] {\n+ // {rows, cols, rank, max_iter, sparsity}\n+ {1000, 750, 420, 10, 1}\n+ });\n+ }\n+\n+ @BeforeClass\n+ public static void init() {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+\n+ @Test\n+ public void federatedPNMFSingleNode() {\n+ federatedPNMF(ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void federatedPNMFSpark() {\n+ federatedPNMF(ExecMode.SPARK);\n+ }\n+\n+// -----------------------------------------------------------------------------\n+\n+ public void federatedPNMF(ExecMode execMode)\n+ {\n+ // store the previous platform config to restore it after the test\n+ ExecMode platform_old = setExecMode(execMode);\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ int fed_rows = rows / 2;\n+ int fed_cols = cols;\n+\n+ // generate dataset\n+ // matrix handled by two federated workers\n+ double[][] X1 = getRandomMatrix(fed_rows, fed_cols, 1, 2, 
sparsity, 13);\n+ double[][] X2 = getRandomMatrix(fed_rows, fed_cols, 1, 2, sparsity, 2);\n+\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+\n+ // empty script name because we don't execute any script, just start the worker\n+ fullDMLScriptName = \"\";\n+ int port1 = getRandomAvailablePort();\n+ int port2 = getRandomAvailablePort();\n+ Thread thread1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n+ Thread thread2 = startLocalFedWorkerThread(port2);\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+\n+ // Run reference dml script with normal matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \"Reference.dml\";\n+ programArgs = new String[] {\"-stats\", \"-nvargs\",\n+ \"in_X1=\" + input(\"X1\"), \"in_X2=\" + input(\"X2\"), \"in_rank=\" + Integer.toString(rank), \"in_max_iter=\" + Integer.toString(max_iter),\n+ \"out_Z=\" + expected(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // Run actual dml script with federated matrix\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-stats\", \"-nvargs\",\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_rank=\" + Integer.toString(rank),\n+ \"in_max_iter=\" + Integer.toString(max_iter),\n+ \"rows=\" + fed_rows, \"cols=\" + fed_cols,\n+ \"out_Z=\" + output(OUTPUT_NAME)};\n+ runTest(true, false, null, -1);\n+\n+ // compare the results via files\n+ HashMap<CellIndex, Double> refResults = readDMLMatrixFromExpectedDir(OUTPUT_NAME);\n+ HashMap<CellIndex, Double> fedResults = readDMLMatrixFromOutputDir(OUTPUT_NAME);\n+ TestUtils.compareMatrices(fedResults, refResults, TOLERANCE, \"Fed\", \"Ref\");\n+\n+ TestUtils.shutdownThreads(thread1, thread2);\n+\n+ // check for federated operations\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_wcemm\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_wdivmm\"));\n+ Assert.assertTrue(heavyHittersContainsString(\"fed_fedinit\"));\n+\n+ // check that federated input files are still existing\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ resetExecMode(platform_old);\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedCrossEntropyTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedCrossEntropyTest.java",
"diff": "@@ -47,7 +47,7 @@ public class FederatedWeightedCrossEntropyTest extends AutomatedTestBase\nprivate final static String OUTPUT_NAME = \"Z\";\nprivate final static double TOLERANCE = 1e-9;\n- private final static int blocksize = 1024;\n+ private final static int BLOCKSIZE = 1024;\[email protected]()\npublic int rows;\n@@ -124,8 +124,8 @@ public class FederatedWeightedCrossEntropyTest extends AutomatedTestBase\ndouble[][] U = getRandomMatrix(rows, rank, 0, 1, 1, 512);\ndouble[][] V = getRandomMatrix(cols, rank, 0, 1, 1, 5040);\n- writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n- writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\nwriteInputMatrixWithMTD(\"U\", U, true);\nwriteInputMatrixWithMTD(\"V\", V, true);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedDivMatrixMultTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedDivMatrixMultTest.java",
"diff": "@@ -60,7 +60,7 @@ public class FederatedWeightedDivMatrixMultTest extends AutomatedTestBase\nprivate final static double TOLERANCE = 1e-9;\n- private final static int blocksize = 1024;\n+ private final static int BLOCKSIZE = 1024;\[email protected]()\npublic int rows;\n@@ -256,11 +256,11 @@ public class FederatedWeightedDivMatrixMultTest extends AutomatedTestBase\ndouble[][] U = getRandomMatrix(rows, rank, 0, 1, 1, 512);\ndouble[][] V = getRandomMatrix(cols, rank, 0, 1, 1, 5040);\n- writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n- writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n- writeInputMatrixWithMTD(\"U\", U, true, new MatrixCharacteristics(rows, rank, blocksize, rows * rank));\n- writeInputMatrixWithMTD(\"V\", V, true, new MatrixCharacteristics(cols, rank, blocksize, rows * rank));\n+ writeInputMatrixWithMTD(\"U\", U, true, new MatrixCharacteristics(rows, rank, BLOCKSIZE, rows * rank));\n+ writeInputMatrixWithMTD(\"V\", V, true, new MatrixCharacteristics(cols, rank, BLOCKSIZE, rows * rank));\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedSigmoidTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedSigmoidTest.java",
"diff": "@@ -50,7 +50,7 @@ public class FederatedWeightedSigmoidTest extends AutomatedTestBase {\nprivate final static double TOLERANCE = 0;\n- private final static int blocksize = 1024;\n+ private final static int BLOCKSIZE = 1024;\[email protected]()\npublic int rows;\n@@ -151,11 +151,11 @@ public class FederatedWeightedSigmoidTest extends AutomatedTestBase {\nwriteInputMatrixWithMTD(\"X1\",\nX1,\nfalse,\n- new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\nwriteInputMatrixWithMTD(\"X2\",\nX2,\nfalse,\n- new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\nwriteInputMatrixWithMTD(\"U\", U, true);\nwriteInputMatrixWithMTD(\"V\", V, true);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedSquaredLossTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedSquaredLossTest.java",
"diff": "@@ -50,7 +50,7 @@ public class FederatedWeightedSquaredLossTest extends AutomatedTestBase {\nprivate final static double TOLERANCE = 1e-8;\n- private final static int blocksize = 1024;\n+ private final static int BLOCKSIZE = 1024;\[email protected]()\npublic int rows;\n@@ -138,11 +138,11 @@ public class FederatedWeightedSquaredLossTest extends AutomatedTestBase {\nwriteInputMatrixWithMTD(\"X1\",\nX1,\nfalse,\n- new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\nwriteInputMatrixWithMTD(\"X2\",\nX2,\nfalse,\n- new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\nwriteInputMatrixWithMTD(\"U\", U, true);\nwriteInputMatrixWithMTD(\"V\", V, true);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedUnaryMatrixMultTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedWeightedUnaryMatrixMultTest.java",
"diff": "@@ -51,7 +51,7 @@ public class FederatedWeightedUnaryMatrixMultTest extends AutomatedTestBase\nprivate final static double TOLERANCE = 0;\n- private final static int blocksize = 1024;\n+ private final static int BLOCKSIZE = 1024;\[email protected]()\npublic int rows;\n@@ -147,11 +147,11 @@ public class FederatedWeightedUnaryMatrixMultTest extends AutomatedTestBase\ndouble[][] U = getRandomMatrix(rows, rank, 0, 1, 1, 512);\ndouble[][] V = getRandomMatrix(cols, rank, 0, 1, 1, 5040);\n- writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n- writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n- writeInputMatrixWithMTD(\"U\", U, false, new MatrixCharacteristics(rows, rank, blocksize, rows * rank));\n- writeInputMatrixWithMTD(\"V\", V, false, new MatrixCharacteristics(cols, rank, blocksize, rows * rank));\n+ writeInputMatrixWithMTD(\"U\", U, false, new MatrixCharacteristics(rows, rank, BLOCKSIZE, rows * rank));\n+ writeInputMatrixWithMTD(\"V\", V, false, new MatrixCharacteristics(cols, rank, BLOCKSIZE, rows * rank));\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedPNMFTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = federated(addresses=list($in_X1, $in_X2),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)));\n+\n+rank = $in_rank;\n+max_iter = $in_max_iter;\n+\n+[W, H] = pnmf(X = X, rnk = rank, maxi = max_iter);\n+\n+Z = W %*% H;\n+\n+write(Z, $out_Z);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/federated/FederatedPNMFTestReference.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = rbind(read($in_X1), read($in_X2));\n+\n+rank = $in_rank;\n+max_iter = $in_max_iter;\n+\n+[W, H] = pnmf(X = X, rnk = rank, maxi = max_iter);\n+\n+Z = W %*% H;\n+\n+write(Z, $out_Z);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2747] Federated PNMF test, extended quaternary operations
Closes #1175. |
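The quaternary federated instructions changed in the record above all follow the same restructuring: the supported combination of federated inputs (row-partitioned X, local U and V) now guards the whole execution path, and every other combination falls through to a single descriptive exception. A minimal sketch of that control flow is given below; the FedType enum and the process method are hypothetical stand-ins, not the SystemDS classes.

// Illustrative stand-ins for the federation metadata checked above.
enum FedType { ROW, COL, NONE }

class FederatedGuardSketch {

    // Supported case: only X is row-partitioned; U and V are local.
    static String process(FedType x, FedType u, FedType v) {
        if (x == FedType.ROW && u == FedType.NONE && v == FedType.NONE) {
            // ... broadcast U and V, execute federated, aggregate partial results ...
            return "executed federated";
        }
        else {
            // All unsupported combinations end in one descriptive error message.
            throw new IllegalStateException("Unsupported federated inputs (X, U, V) = ("
                + x + ", " + u + ", " + v + ")");
        }
    }

    public static void main(String[] args) {
        System.out.println(process(FedType.ROW, FedType.NONE, FedType.NONE));
        try {
            process(FedType.COL, FedType.NONE, FedType.NONE);
        }
        catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}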
49,697 | 20.02.2021 18:04:33 | -3,600 | 2576c2e9df350f549e6fd9c3463466c9630d923f | Cleanup federated binary operations, incl tests
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryFEDInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.fed;\nimport org.apache.sysds.common.Types.DataType;\n+import org.apache.sysds.common.Types.ExecType;\n+import org.apache.sysds.lops.BinaryM.VectorType;\n+import org.apache.sysds.lops.Lop;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\n@@ -33,6 +36,11 @@ public abstract class BinaryFEDInstruction extends ComputationFEDInstruction {\n}\npublic static BinaryFEDInstruction parseInstruction(String str) {\n+ if(str.startsWith(ExecType.SPARK.name())) {\n+ // rewrite the spark instruction to a cp instruction\n+ str = rewriteSparkInstructionToCP(str);\n+ }\n+\nString[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\nInstructionUtils.checkNumFields(parts, 3, 4);\nString opcode = parts[0];\n@@ -65,4 +73,15 @@ public abstract class BinaryFEDInstruction extends ComputationFEDInstruction {\nthrow new DMLRuntimeException(\"Element-wise matrix operations between variables \" + in1.getName() +\n\" and \" + in2.getName() + \" must produce a matrix, which \" + out.getName() + \" is not\");\n}\n+\n+ private static String rewriteSparkInstructionToCP(String inst_str) {\n+ // rewrite the spark instruction to a cp instruction\n+ inst_str = inst_str.replace(ExecType.SPARK.name(), ExecType.CP.name());\n+ inst_str = inst_str.replace(Lop.OPERAND_DELIMITOR + \"map\", Lop.OPERAND_DELIMITOR);\n+ inst_str = inst_str.replace(Lop.OPERAND_DELIMITOR + \"RIGHT\", \"\");\n+ inst_str = inst_str.replace(Lop.OPERAND_DELIMITOR + VectorType.ROW_VECTOR.name(), \"\");\n+ inst_str = inst_str.replace(Lop.OPERAND_DELIMITOR + VectorType.COL_VECTOR.name(), \"\");\n+\n+ return inst_str;\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryMatrixMatrixFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/BinaryMatrixMatrixFEDInstruction.java",
"diff": "@@ -29,6 +29,7 @@ import org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.matrix.operators.BinaryOperator;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n+\npublic class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\n{\nprotected BinaryMatrixMatrixFEDInstruction(Operator op,\n@@ -62,17 +63,25 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\n+ \"federated right input are only supported for special cases yet.\");\n}\n}\n- else {\n- //matrix-matrix binary operations -> lhs fed input -> fed output\n- if(mo2.getNumRows() > 1 && mo2.getNumColumns() == 1 ) { //MV col vector\n- FederatedRequest[] fr1 = mo1.getFedMapping().broadcastSliced(mo2, false);\n+ else { // matrix-matrix binary operations -> lhs fed input -> fed output\n+ if(mo1.isFederated(FType.FULL)) {\n+ // full federated (row and col)\n+ if(mo1.getFedMapping().getSize() == 1) {\n+ // only one partition (MM on a single fed worker)\n+ FederatedRequest fr1 = mo1.getFedMapping().broadcast(mo2);\nfr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\n- new long[]{mo1.getFedMapping().getID(), fr1[0].getID()});\n- FederatedRequest fr3 = mo1.getFedMapping().cleanup(getTID(), fr1[0].getID());\n+ new long[]{mo1.getFedMapping().getID(), fr1.getID()});\n+ FederatedRequest fr3 = mo1.getFedMapping().cleanup(getTID(), fr1.getID());\n//execute federated instruction and cleanup intermediates\nmo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n}\n- else if(mo2.getNumRows() == 1 && mo2.getNumColumns() > 1) { //MV row vector\n+ else {\n+ throw new DMLRuntimeException(\"Matrix-matrix binary operations with a full partitioned federated input with multiple partitions are not supported yet.\");\n+ }\n+ }\n+ else if((mo1.isFederated(FType.ROW) && mo2.getNumRows() == 1 && mo2.getNumColumns() > 1)\n+ || (mo1.isFederated(FType.COL) && mo2.getNumRows() > 1 && mo2.getNumColumns() == 1)) {\n+ // MV row partitioned row vector, MV col partitioned col vector\nFederatedRequest fr1 = mo1.getFedMapping().broadcast(mo2);\nfr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\nnew long[]{mo1.getFedMapping().getID(), fr1.getID()});\n@@ -80,8 +89,8 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\n//execute federated instruction and cleanup intermediates\nmo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n}\n- else { //MM\n- if(mo1.isFederated(FType.ROW)) {\n+ else if(mo1.isFederated(FType.ROW) ^ mo1.isFederated(FType.COL)) {\n+ // row partitioned MM or col partitioned MM\nFederatedRequest[] fr1 = mo1.getFedMapping().broadcastSliced(mo2, false);\nfr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\nnew long[]{mo1.getFedMapping().getID(), fr1[0].getID()});\n@@ -90,18 +99,13 @@ public class BinaryMatrixMatrixFEDInstruction extends BinaryFEDInstruction\nmo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n}\nelse {\n- FederatedRequest fr1 = mo1.getFedMapping().broadcast(mo2);\n- fr2 = FederationUtils.callInstruction(instString, output, new CPOperand[]{input1, input2},\n- new long[]{mo1.getFedMapping().getID(), fr1.getID()});\n- FederatedRequest fr3 = mo1.getFedMapping().cleanup(getTID(), fr1.getID());\n- //execute federated instruction and cleanup intermediates\n- mo1.getFedMapping().execute(getTID(), true, fr1, fr2, fr3);\n- }\n+ throw new DMLRuntimeException(\"Matrix-matrix binary operations are only 
supported with a row partitioned or column partitioned federated input yet.\");\n}\n}\n// derive new fed mapping for output\nMatrixObject out = ec.getMatrixObject(output);\n+\nout.getDataCharacteristics().set(mo1.getDataCharacteristics());\nout.setFedMapping(mo1.getFedMapping().copyWithNewID(fr2.getID()));\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/FEDInstructionUtils.java",
"diff": "@@ -45,6 +45,7 @@ import org.apache.sysds.runtime.instructions.cp.VariableCPInstruction.VariableOp\nimport org.apache.sysds.runtime.instructions.spark.AggregateUnarySPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.AppendGAlignedSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.AppendGSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.BinaryMatrixBVectorSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.BinaryMatrixMatrixSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.BinaryMatrixScalarSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.BinarySPInstruction;\n@@ -266,6 +267,7 @@ public class FEDInstructionUtils {\n}\nelse if (inst instanceof BinaryMatrixScalarSPInstruction\n|| inst instanceof BinaryMatrixMatrixSPInstruction\n+ || inst instanceof BinaryMatrixBVectorSPInstruction\n|| inst instanceof BinaryTensorTensorSPInstruction\n|| inst instanceof BinaryTensorTensorBroadcastSPInstruction) {\nBinarySPInstruction instruction = (BinarySPInstruction) inst;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedLogicalTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/primitives/FederatedLogicalTest.java",
"diff": "@@ -36,6 +36,12 @@ import java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\n+/*\n+ * Testing following logical operations:\n+ * >, <, ==, !=, >=, <=\n+ * with a row/col partitioned federated matrix X and a scalar/vector/matrix Y\n+*/\n+\n@RunWith(value = Parameterized.class)\[email protected]\npublic class FederatedLogicalTest extends AutomatedTestBase\n@@ -47,9 +53,9 @@ public class FederatedLogicalTest extends AutomatedTestBase\nprivate final static String OUTPUT_NAME = \"Z\";\nprivate final static double TOLERANCE = 0;\n- private final static int blocksize = 1024;\n+ private final static int BLOCKSIZE = 1024;\n- public enum Type{\n+ private enum Type {\nGREATER,\nLESS,\nEQUALS,\n@@ -58,12 +64,29 @@ public class FederatedLogicalTest extends AutomatedTestBase\nLESS_EQUALS\n}\n+ private enum FederationType {\n+ SINGLE_FED_WORKER,\n+ ROW_PARTITIONED,\n+ COL_PARTITIONED,\n+ FULL_PARTITIONED\n+ }\n+\n+ private enum YType {\n+ MATRIX,\n+ ROW_VEC,\n+ COL_VEC\n+ }\n+\[email protected]()\npublic int rows;\[email protected](1)\npublic int cols;\[email protected](2)\npublic double sparsity;\n+ @Parameterized.Parameter(3)\n+ public FederationType fed_type;\n+ @Parameterized.Parameter(4)\n+ public YType y_type;\n@Override\npublic void setUp() {\n@@ -73,13 +96,73 @@ public class FederatedLogicalTest extends AutomatedTestBase\[email protected]\npublic static Collection<Object[]> data() {\n- // rows must be even\n+ // rows must be divisable by 4 for row partitioned data\n+ // cols must be divisable by 4 for col partitioned data\n+ // rows and cols must be divisable by 2 for full partitioned data\nreturn Arrays.asList(new Object[][] {\n- // {rows, cols, sparsity}\n- {100, 75, 0.01},\n- {100, 75, 0.9},\n- {2, 75, 0.01},\n- {2, 75, 0.9}\n+ // {rows, cols, sparsity, fed_type, y_type}\n+\n+ // row partitioned MM\n+ {100, 75, 0.01, FederationType.ROW_PARTITIONED, YType.MATRIX},\n+ {100, 75, 0.9, FederationType.ROW_PARTITIONED, YType.MATRIX},\n+ // {4, 75, 0.01, FederationType.ROW_PARTITIONED, YType.MATRIX},\n+ // {4, 75, 0.9, FederationType.ROW_PARTITIONED, YType.MATRIX},\n+ // {100, 1, 0.01, FederationType.ROW_PARTITIONED, YType.MATRIX},\n+ // {100, 1, 0.9, FederationType.ROW_PARTITIONED, YType.MATRIX},\n+\n+ // row partitioned MV row vector\n+ {100, 75, 0.01, FederationType.ROW_PARTITIONED, YType.ROW_VEC},\n+ {100, 75, 0.9, FederationType.ROW_PARTITIONED, YType.ROW_VEC},\n+ // {4, 75, 0.01, FederationType.ROW_PARTITIONED, YType.ROW_VEC},\n+ // {4, 75, 0.9, FederationType.ROW_PARTITIONED, YType.ROW_VEC},\n+ // {100, 1, 0.01, FederationType.ROW_PARTITIONED, YType.ROW_VEC},\n+ // {100, 1, 0.9, FederationType.ROW_PARTITIONED, YType.ROW_VEC},\n+\n+ // row partitioned MV col vector\n+ {100, 75, 0.01, FederationType.ROW_PARTITIONED, YType.COL_VEC},\n+ {100, 75, 0.9, FederationType.ROW_PARTITIONED, YType.COL_VEC},\n+ // {4, 75, 0.01, FederationType.ROW_PARTITIONED, YType.COL_VEC},\n+ // {4, 75, 0.9, FederationType.ROW_PARTITIONED, YType.COL_VEC},\n+ // {100, 1, 0.01, FederationType.ROW_PARTITIONED, YType.COL_VEC},\n+ // {100, 1, 0.9, FederationType.ROW_PARTITIONED, YType.COL_VEC},\n+\n+ // col partitioned MM\n+ {100, 76, 0.01, FederationType.COL_PARTITIONED, YType.MATRIX},\n+ {100, 76, 0.9, FederationType.COL_PARTITIONED, YType.MATRIX},\n+ // {1, 76, 0.01, FederationType.COL_PARTITIONED, YType.MATRIX},\n+ // {1, 76, 0.9, FederationType.COL_PARTITIONED, YType.MATRIX},\n+ // {100, 4, 0.01, FederationType.COL_PARTITIONED, YType.MATRIX},\n+ // {100, 4, 0.9, 
FederationType.COL_PARTITIONED, YType.MATRIX},\n+\n+ // col partitioned MV row vector\n+ {100, 76, 0.01, FederationType.COL_PARTITIONED, YType.ROW_VEC},\n+ {100, 76, 0.9, FederationType.COL_PARTITIONED, YType.ROW_VEC},\n+ // {1, 76, 0.01, FederationType.COL_PARTITIONED, YType.ROW_VEC},\n+ // {1, 76, 0.9, FederationType.COL_PARTITIONED, YType.ROW_VEC},\n+ // {100, 4, 0.01, FederationType.COL_PARTITIONED, YType.ROW_VEC},\n+ // {100, 4, 0.9, FederationType.COL_PARTITIONED, YType.ROW_VEC},\n+\n+ // col partitioned MV col vector\n+ {100, 76, 0.01, FederationType.COL_PARTITIONED, YType.COL_VEC},\n+ {100, 76, 0.9, FederationType.COL_PARTITIONED, YType.COL_VEC},\n+ // {1, 76, 0.01, FederationType.COL_PARTITIONED, YType.COL_VEC},\n+ // {1, 76, 0.9, FederationType.COL_PARTITIONED, YType.COL_VEC},\n+ // {100, 4, 0.01, FederationType.COL_PARTITIONED, YType.COL_VEC},\n+ // {100, 4, 0.9, FederationType.COL_PARTITIONED, YType.COL_VEC},\n+\n+ // single federated worker MM\n+ {100, 75, 0.01, FederationType.SINGLE_FED_WORKER, YType.MATRIX},\n+ {100, 75, 0.9, FederationType.SINGLE_FED_WORKER, YType.MATRIX},\n+ // {1, 75, 0.01, FederationType.SINGLE_FED_WORKER, YType.MATRIX},\n+ // {1, 75, 0.9, FederationType.SINGLE_FED_WORKER, YType.MATRIX},\n+ // {100, 1, 0.01, FederationType.SINGLE_FED_WORKER, YType.MATRIX},\n+ // {100, 1, 0.9, FederationType.SINGLE_FED_WORKER, YType.MATRIX},\n+\n+ // full partitioned (not supported yet)\n+ // {70, 80, 0.01, FederationType.FULL_PARTITIONED, YType.MATRIX},\n+ // {70, 80, 0.9, FederationType.FULL_PARTITIONED, YType.MATRIX},\n+ // {2, 2, 0.01, FederationType.FULL_PARTITIONED, YType.MATRIX},\n+ // {2, 2, 0.9, FederationType.FULL_PARTITIONED, YType.MATRIX}\n});\n}\n@@ -99,15 +182,15 @@ public class FederatedLogicalTest extends AutomatedTestBase\nfederatedLogicalTest(SCALAR_TEST_NAME, Type.GREATER, ExecMode.SPARK);\n}\n- @Test\n- public void federatedLogicalScalarLessSingleNode() {\n- federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS, ExecMode.SINGLE_NODE);\n- }\n-\n- @Test\n- public void federatedLogicalScalarLessSpark() {\n- federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS, ExecMode.SPARK);\n- }\n+// @Test\n+// public void federatedLogicalScalarLessSingleNode() {\n+// federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS, ExecMode.SINGLE_NODE);\n+// }\n+//\n+// @Test\n+// public void federatedLogicalScalarLessSpark() {\n+// federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS, ExecMode.SPARK);\n+// }\n@Test\npublic void federatedLogicalScalarEqualsSingleNode() {\n@@ -139,15 +222,15 @@ public class FederatedLogicalTest extends AutomatedTestBase\nfederatedLogicalTest(SCALAR_TEST_NAME, Type.GREATER_EQUALS, ExecMode.SPARK);\n}\n- @Test\n- public void federatedLogicalScalarLessEqualsSingleNode() {\n- federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS_EQUALS, ExecMode.SINGLE_NODE);\n- }\n-\n- @Test\n- public void federatedLogicalScalarLessEqualsSpark() {\n- federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS_EQUALS, ExecMode.SPARK);\n- }\n+// @Test\n+// public void federatedLogicalScalarLessEqualsSingleNode() {\n+// federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS_EQUALS, ExecMode.SINGLE_NODE);\n+// }\n+//\n+// @Test\n+// public void federatedLogicalScalarLessEqualsSpark() {\n+// federatedLogicalTest(SCALAR_TEST_NAME, Type.LESS_EQUALS, ExecMode.SPARK);\n+// }\n//---------------------------MATRIX MATRIX--------------------------\n@Test\n@@ -160,15 +243,15 @@ public class FederatedLogicalTest extends AutomatedTestBase\nfederatedLogicalTest(MATRIX_TEST_NAME, Type.GREATER, ExecMode.SPARK);\n}\n- 
@Test\n- public void federatedLogicalMatrixLessSingleNode() {\n- federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS, ExecMode.SINGLE_NODE);\n- }\n-\n- @Test\n- public void federatedLogicalMatrixLessSpark() {\n- federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS, ExecMode.SPARK);\n- }\n+// @Test\n+// public void federatedLogicalMatrixLessSingleNode() {\n+// federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS, ExecMode.SINGLE_NODE);\n+// }\n+//\n+// @Test\n+// public void federatedLogicalMatrixLessSpark() {\n+// federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS, ExecMode.SPARK);\n+// }\n@Test\npublic void federatedLogicalMatrixEqualsSingleNode() {\n@@ -200,15 +283,15 @@ public class FederatedLogicalTest extends AutomatedTestBase\nfederatedLogicalTest(MATRIX_TEST_NAME, Type.GREATER_EQUALS, ExecMode.SPARK);\n}\n- @Test\n- public void federatedLogicalMatrixLessEqualsSingleNode() {\n- federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS_EQUALS, ExecMode.SINGLE_NODE);\n- }\n-\n- @Test\n- public void federatedLogicalMatrixLessEqualsSpark() {\n- federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS_EQUALS, ExecMode.SPARK);\n- }\n+// @Test\n+// public void federatedLogicalMatrixLessEqualsSingleNode() {\n+// federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS_EQUALS, ExecMode.SINGLE_NODE);\n+// }\n+//\n+// @Test\n+// public void federatedLogicalMatrixLessEqualsSpark() {\n+// federatedLogicalTest(MATRIX_TEST_NAME, Type.LESS_EQUALS, ExecMode.SPARK);\n+// }\n// -----------------------------------------------------------------------------\n@@ -220,39 +303,78 @@ public class FederatedLogicalTest extends AutomatedTestBase\ngetAndLoadTestConfiguration(testname);\nString HOME = SCRIPT_DIR + TEST_DIR;\n- int fed_rows = rows / 2;\n- int fed_cols = cols;\n+ int fed_rows = 0;\n+ int fed_cols = 0;\n+ switch(fed_type) {\n+ case SINGLE_FED_WORKER:\n+ fed_rows = rows;\n+ fed_cols = cols;\n+ break;\n+ case ROW_PARTITIONED:\n+ fed_rows = rows / 4;\n+ fed_cols = cols;\n+ break;\n+ case COL_PARTITIONED:\n+ fed_rows = rows;\n+ fed_cols = cols / 4;\n+ break;\n+ case FULL_PARTITIONED:\n+ fed_rows = rows / 2;\n+ fed_cols = cols / 2;\n+ break;\n+ }\n+\n+ boolean single_fed_worker = (fed_type == FederationType.SINGLE_FED_WORKER);\n// generate dataset\n- // matrix handled by two federated workers\n- double[][] X1 = getRandomMatrix(fed_rows, fed_cols, 1, 2, 1, 13);\n- double[][] X2 = getRandomMatrix(fed_rows, fed_cols, 1, 2, 1, 2);\n+ // matrix handled by four federated workers\n+ // X2, X3, X4 not used if single_fed_worker == true\n+ double[][] X1 = getRandomMatrix(fed_rows, fed_cols, 0, 1, sparsity, 13);\n+ double[][] X2 = (!single_fed_worker ? getRandomMatrix(fed_rows, fed_cols, 0, 1, sparsity, 2) : null);\n+ double[][] X3 = (!single_fed_worker ? getRandomMatrix(fed_rows, fed_cols, 0, 1, sparsity, 211) : null);\n+ double[][] X4 = (!single_fed_worker ? 
getRandomMatrix(fed_rows, fed_cols, 0, 1, sparsity, 65) : null);\n- writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n- writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, blocksize, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X1\", X1, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ if(!single_fed_worker) {\n+ writeInputMatrixWithMTD(\"X2\", X2, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X3\", X3, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ writeInputMatrixWithMTD(\"X4\", X4, false, new MatrixCharacteristics(fed_rows, fed_cols, BLOCKSIZE, fed_rows * fed_cols));\n+ }\nboolean is_matrix_test = testname.equals(MATRIX_TEST_NAME);\ndouble[][] Y_mat = null;\ndouble Y_scal = 0;\nif(is_matrix_test) {\n- Y_mat = getRandomMatrix(rows, cols, 0, 1, sparsity, 5040);\n- writeInputMatrixWithMTD(\"Y\", Y_mat, true);\n+ int y_rows = (y_type == YType.ROW_VEC ? 1 : rows);\n+ int y_cols = (y_type == YType.COL_VEC ? 1 : cols);\n+\n+ Y_mat = getRandomMatrix(y_rows, y_cols, 0, 1, sparsity, 5040);\n+ writeInputMatrixWithMTD(\"Y\", Y_mat, false, new MatrixCharacteristics(y_rows, y_cols, BLOCKSIZE, y_rows * y_cols));\n}\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\nint port1 = getRandomAvailablePort();\n- int port2 = getRandomAvailablePort();\n- Thread thread1 = startLocalFedWorkerThread(port1, FED_WORKER_WAIT_S);\n- Thread thread2 = startLocalFedWorkerThread(port2);\n+ int port2 = (!single_fed_worker ? getRandomAvailablePort() : 0);\n+ int port3 = (!single_fed_worker ? getRandomAvailablePort() : 0);\n+ int port4 = (!single_fed_worker ? getRandomAvailablePort() : 0);\n+ Thread thread1 = startLocalFedWorkerThread(port1, (!single_fed_worker ? FED_WORKER_WAIT_S : FED_WORKER_WAIT));\n+ Thread thread2 = (!single_fed_worker ? startLocalFedWorkerThread(port2, FED_WORKER_WAIT_S) : null);\n+ Thread thread3 = (!single_fed_worker ? startLocalFedWorkerThread(port3, FED_WORKER_WAIT_S) : null);\n+ Thread thread4 = (!single_fed_worker ? startLocalFedWorkerThread(port4) : null);\ngetAndLoadTestConfiguration(testname);\n// Run reference dml script with normal matrix\nfullDMLScriptName = HOME + testname + \"Reference.dml\";\n- programArgs = new String[] {\"-nvargs\", \"in_X1=\" + input(\"X1\"), \"in_X2=\" + input(\"X2\"),\n+ programArgs = new String[] {\"-nvargs\",\n+ \"in_X1=\" + input(\"X1\"),\n+ \"in_X2=\" + (!single_fed_worker ? input(\"X2\") : input(\"X1\")), // not needed in case of a single federated worker\n+ \"in_X3=\" + (!single_fed_worker ? input(\"X3\") : input(\"X1\")), // not needed in case of a single federated worker\n+ \"in_X4=\" + (!single_fed_worker ? input(\"X4\") : input(\"X1\")), // not needed in case of a single federated worker\n\"in_Y=\" + (is_matrix_test ? 
input(\"Y\") : Double.toString(Y_scal)),\n+ \"in_fed_type=\" + Integer.toString(fed_type.ordinal()),\n\"in_op_type=\" + Integer.toString(op_type.ordinal()),\n\"out_Z=\" + expected(OUTPUT_NAME)};\nrunTest(true, false, null, -1);\n@@ -260,10 +382,15 @@ public class FederatedLogicalTest extends AutomatedTestBase\n// Run actual dml script with federated matrix\nfullDMLScriptName = HOME + testname + \".dml\";\nprogramArgs = new String[] {\"-stats\", \"-nvargs\",\n- \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")), \"in_X2=\" + TestUtils.federatedAddress(port2, input(\"X2\")),\n+ \"in_X1=\" + TestUtils.federatedAddress(port1, input(\"X1\")),\n+ \"in_X2=\" + (!single_fed_worker ? TestUtils.federatedAddress(port2, input(\"X2\")) : null),\n+ \"in_X3=\" + (!single_fed_worker ? TestUtils.federatedAddress(port3, input(\"X3\")) : null),\n+ \"in_X4=\" + (!single_fed_worker ? TestUtils.federatedAddress(port4, input(\"X4\")) : null),\n\"in_Y=\" + (is_matrix_test ? input(\"Y\") : Double.toString(Y_scal)),\n+ \"in_fed_type=\" + Integer.toString(fed_type.ordinal()),\n\"in_op_type=\" + Integer.toString(op_type.ordinal()),\n- \"rows=\" + fed_rows, \"cols=\" + fed_cols, \"out_Z=\" + output(OUTPUT_NAME)};\n+ \"rows=\" + Integer.toString(fed_rows), \"cols=\" + Integer.toString(fed_cols),\n+ \"out_Z=\" + output(OUTPUT_NAME)};\nrunTest(true, false, null, -1);\n// compare the results via files\n@@ -271,7 +398,9 @@ public class FederatedLogicalTest extends AutomatedTestBase\nHashMap<CellIndex, Double> fedResults = readDMLMatrixFromOutputDir(OUTPUT_NAME);\nTestUtils.compareMatrices(fedResults, refResults, TOLERANCE, \"Fed\", \"Ref\");\n- TestUtils.shutdownThreads(thread1, thread2);\n+ TestUtils.shutdownThreads(thread1);\n+ if(!single_fed_worker)\n+ TestUtils.shutdownThreads(thread2, thread3, thread4);\n// check for federated operations\nswitch(op_type)\n@@ -298,7 +427,11 @@ public class FederatedLogicalTest extends AutomatedTestBase\n// check that federated input files are still existing\nAssert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X1\")));\n+ if(!single_fed_worker) {\nAssert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X2\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X3\")));\n+ Assert.assertTrue(HDFSTool.existsFileOnHDFS(input(\"X4\")));\n+ }\nresetExecMode(platform_old);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixMatrixTest.dml",
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixMatrixTest.dml",
"diff": "#\n#-------------------------------------------------------------\n-X = federated(addresses=list($in_X1, $in_X2),\n- ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)));\n+fed_type = $in_fed_type;\n+\n+if(fed_type == 0) { # single federated worker\n+ X = federated(addresses=list($in_X1),\n+ ranges=list(list(0, 0), list($rows, $cols)));\n+}\n+else if(fed_type == 1) { # row partitioned\n+ X = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols),\n+ list($rows * 2, 0), list($rows * 3, $cols), list($rows * 3, 0), list($rows * 4, $cols)));\n+}\n+else if(fed_type == 2) { # col partitioned\n+ X = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols), list(0, $cols), list($rows, $cols * 2),\n+ list(0, $cols * 2), list($rows, $cols * 3), list(0, $cols * 3), list($rows, $cols * 4)));\n+}\n+else if(fed_type == 3) { # full partitioned\n+ X = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols), list(0, $cols), list($rows, $cols * 2),\n+ list($rows, 0), list($rows * 2, $cols), list($rows, $cols), list($rows * 2, $cols * 2)));\n+}\nY = read($in_Y);\nop_type = $in_op_type;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixMatrixTestReference.dml",
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixMatrixTestReference.dml",
"diff": "#\n#-------------------------------------------------------------\n-X = rbind(read($in_X1), read($in_X2));\n+fed_type = $in_fed_type;\n+\n+if(fed_type == 0) { # single federated worker\n+ X = read($in_X1);\n+}\n+else if(fed_type == 1) { # row partitioned\n+ X = rbind(read($in_X1), read($in_X2), read($in_X3), read($in_X4));\n+}\n+else if(fed_type == 2) { # col partitioned\n+ X = cbind(read($in_X1), read($in_X2), read($in_X3), read($in_X4));\n+}\n+else if(fed_type == 3) { # full partitioned\n+ X = rbind(cbind(read($in_X1), read($in_X2)), cbind(read($in_X3), read($in_X4)));\n+}\nY = read($in_Y);\nop_type = $in_op_type;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixScalarTest.dml",
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixScalarTest.dml",
"diff": "#\n#-------------------------------------------------------------\n-X = federated(addresses=list($in_X1, $in_X2),\n- ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols)));\n+fed_type = $in_fed_type;\n+\n+if(fed_type == 0) { # single federated worker\n+ X = federated(addresses=list($in_X1),\n+ ranges=list(list(0, 0), list($rows, $cols)));\n+}\n+else if(fed_type == 1) { # row partitioned\n+ X = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols), list($rows, 0), list($rows * 2, $cols),\n+ list($rows * 2, 0), list($rows * 3, $cols), list($rows * 3, 0), list($rows * 4, $cols)));\n+}\n+else if(fed_type == 2) { # col partitioned\n+ X = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols), list(0, $cols), list($rows, $cols * 2),\n+ list(0, $cols * 2), list($rows, $cols * 3), list(0, $cols * 3), list($rows, $cols * 4)));\n+}\n+else if(fed_type == 3) { # full partitioned\n+ X = federated(addresses=list($in_X1, $in_X2, $in_X3, $in_X4),\n+ ranges=list(list(0, 0), list($rows, $cols), list(0, $cols), list($rows, $cols * 2),\n+ list($rows, 0), list($rows * 2, $cols), list($rows, $cols), list($rows * 2, $cols * 2)));\n+}\ny = $in_Y;\nop_type = $in_op_type;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixScalarTestReference.dml",
"new_path": "src/test/scripts/functions/federated/binary/FederatedLogicalMatrixScalarTestReference.dml",
"diff": "#\n#-------------------------------------------------------------\n-X = rbind(read($in_X1), read($in_X2));\n+fed_type = $in_fed_type;\n+\n+if(fed_type == 0) { # single federated worker\n+ X = read($in_X1);\n+}\n+else if(fed_type == 1) { # row partitioned\n+ X = rbind(read($in_X1), read($in_X2), read($in_X3), read($in_X4));\n+}\n+else if(fed_type == 2) { # col partitioned\n+ X = cbind(read($in_X1), read($in_X2), read($in_X3), read($in_X4));\n+}\n+else if(fed_type == 3) { # full partitioned\n+ X = rbind(cbind(read($in_X1), read($in_X2)), cbind(read($in_X3), read($in_X4)));\n+}\ny = $in_Y;\nop_type = $in_op_type;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2867] Cleanup federated binary operations, incl tests
Closes #1182. |
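The BinaryFEDInstruction change in the record above rewrites an incoming Spark instruction string into its CP form before parsing, by swapping the execution-type prefix and stripping broadcast-specific tokens. The sketch below shows that style of textual rewrite on a simplified example; the delimiter and token layout are assumed for illustration and do not reproduce the exact SystemDS instruction encoding.

// Sketch of rewriting a SPARK-prefixed instruction string into a CP one.
// The delimiter and the token layout are assumptions for illustration,
// not the exact SystemDS instruction encoding.
public class InstructionRewriteSketch {

    static final String DELIM = "°"; // stand-in operand delimiter

    static String rewriteSparkToCP(String inst) {
        return inst
            .replace("SPARK", "CP")            // swap the execution type
            .replace(DELIM + "map", DELIM)     // drop the broadcast "map" marker
            .replace(DELIM + "RIGHT", "")      // drop the broadcast side
            .replace(DELIM + "ROW_VECTOR", "") // drop vector-type hints
            .replace(DELIM + "COL_VECTOR", "");
    }

    public static void main(String[] args) {
        String spark = "SPARK" + DELIM + "map+" + DELIM + "X" + DELIM + "y"
            + DELIM + "out" + DELIM + "RIGHT" + DELIM + "ROW_VECTOR";
        System.out.println(rewriteSparkToCP(spark)); // CP°+°X°y°out
    }
}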
49,684 | 20.02.2021 18:13:41 | -3,600 | 2931f6ec82798e4e71281ac8ca9cc47a55381266 | Improved parameter server epoch timing/logging
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/ParameterizedBuiltinFunctionExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/ParameterizedBuiltinFunctionExpression.java",
"diff": "@@ -290,7 +290,7 @@ public class ParameterizedBuiltinFunctionExpression extends DataIdentifier\nStatement.PS_VAL_FEATURES, Statement.PS_VAL_LABELS, Statement.PS_UPDATE_FUN, Statement.PS_AGGREGATION_FUN,\nStatement.PS_VAL_FUN, Statement.PS_MODE, Statement.PS_UPDATE_TYPE, Statement.PS_FREQUENCY, Statement.PS_EPOCHS,\nStatement.PS_BATCH_SIZE, Statement.PS_PARALLELISM, Statement.PS_SCHEME, Statement.PS_FED_RUNTIME_BALANCING,\n- Statement.PS_FED_WEIGHING, Statement.PS_HYPER_PARAMS, Statement.PS_CHECKPOINTING, Statement.PS_SEED);\n+ Statement.PS_FED_WEIGHTING, Statement.PS_HYPER_PARAMS, Statement.PS_CHECKPOINTING, Statement.PS_SEED);\ncheckInvalidParameters(getOpCode(), getVarParams(), valid);\n// check existence and correctness of parameters\n@@ -310,7 +310,7 @@ public class ParameterizedBuiltinFunctionExpression extends DataIdentifier\ncheckDataValueType(true, fname, Statement.PS_PARALLELISM, DataType.SCALAR, ValueType.INT64, conditional);\ncheckStringParam(true, fname, Statement.PS_SCHEME, conditional);\ncheckStringParam(true, fname, Statement.PS_FED_RUNTIME_BALANCING, conditional);\n- checkStringParam(true, fname, Statement.PS_FED_WEIGHING, conditional);\n+ checkStringParam(true, fname, Statement.PS_FED_WEIGHTING, conditional);\ncheckDataValueType(true, fname, Statement.PS_HYPER_PARAMS, DataType.LIST, ValueType.UNKNOWN, conditional);\ncheckStringParam(true, fname, Statement.PS_CHECKPOINTING, conditional);\ncheckDataValueType(true, fname, Statement.PS_SEED, DataType.SCALAR, ValueType.INT64, conditional);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/Statement.java",
"new_path": "src/main/java/org/apache/sysds/parser/Statement.java",
"diff": "@@ -89,10 +89,10 @@ public abstract class Statement implements ParseInfo\npublic enum PSFrequency {\nBATCH, EPOCH\n}\n- public static final String PS_FED_WEIGHING = \"weighing\";\n+ public static final String PS_FED_WEIGHTING = \"weighting\";\npublic static final String PS_FED_RUNTIME_BALANCING = \"runtime_balancing\";\npublic enum PSRuntimeBalancing {\n- NONE, RUN_MIN, CYCLE_AVG, CYCLE_MAX, SCALE_BATCH\n+ NONE, BASELINE, CYCLE_MIN, CYCLE_AVG, CYCLE_MAX, SCALE_BATCH\n}\npublic static final String PS_EPOCHS = \"epochs\";\npublic static final String PS_BATCH_SIZE = \"batchsize\";\n@@ -101,7 +101,6 @@ public abstract class Statement implements ParseInfo\npublic enum PSScheme {\nDISJOINT_CONTIGUOUS, DISJOINT_ROUND_ROBIN, DISJOINT_RANDOM, OVERLAP_RESHUFFLE\n}\n- public static final String PS_FED_SCHEME = \"fed_scheme\";\npublic enum FederatedPSScheme {\nKEEP_DATA_ON_WORKER, SHUFFLE, REPLICATE_TO_MAX, SUBSAMPLE_TO_MIN, BALANCE_TO_AVG\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/FederatedPSControlThread.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/FederatedPSControlThread.java",
"diff": "@@ -78,19 +78,19 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\nprivate final PSRuntimeBalancing _runtimeBalancing;\nprivate int _numBatchesPerEpoch;\nprivate int _possibleBatchesPerLocalEpoch;\n- private final boolean _weighing;\n- private double _weighingFactor = 1;\n- private final boolean _cycleStartAt0 = false;\n+ private final boolean _weighting;\n+ private double _weightingFactor = 1;\n+ private boolean _cycleStartAt0 = false;\npublic FederatedPSControlThread(int workerID, String updFunc, Statement.PSFrequency freq,\n- PSRuntimeBalancing runtimeBalancing, boolean weighing, int epochs, long batchSize,\n+ PSRuntimeBalancing runtimeBalancing, boolean weighting, int epochs, long batchSize,\nint numBatchesPerGlobalEpoch, ExecutionContext ec, ParamServer ps)\n{\nsuper(workerID, updFunc, freq, epochs, batchSize, ec, ps);\n_numBatchesPerEpoch = numBatchesPerGlobalEpoch;\n_runtimeBalancing = runtimeBalancing;\n- _weighing = weighing;\n+ _weighting = weighting;\n// generate the ID for the model\n_modelVarID = FederationUtils.getNextFedDataID();\n}\n@@ -98,40 +98,42 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\n/**\n* Sets up the federated worker and control thread\n*\n- * @param weighingFactor Gradients from this worker will be multiplied by this factor if weighing is enabled\n+ * @param weightingFactor Gradients from this worker will be multiplied by this factor if weighting is enabled\n*/\n- public void setup(double weighingFactor) {\n+ public void setup(double weightingFactor) {\nincWorkerNumber();\n// prepare features and labels\n_featuresData = (FederatedData) _features.getFedMapping().getMap().values().toArray()[0];\n_labelsData = (FederatedData) _labels.getFedMapping().getMap().values().toArray()[0];\n- // weighing factor is always set, but only used when weighing is specified\n- _weighingFactor = weighingFactor;\n+ // weighting factor is always set, but only used when weighting is specified\n+ _weightingFactor = weightingFactor;\n// different runtime balancing calculations\nlong dataSize = _features.getNumRows();\n// calculate scaled batch size if balancing via batch size.\n// In some cases there will be some cycling\n- if(_runtimeBalancing == PSRuntimeBalancing.SCALE_BATCH) {\n+ if(_runtimeBalancing == PSRuntimeBalancing.SCALE_BATCH)\n_batchSize = (int) Math.ceil((double) dataSize / _numBatchesPerEpoch);\n- }\n// Calculate possible batches with batch size\n_possibleBatchesPerLocalEpoch = (int) Math.ceil((double) dataSize / _batchSize);\n// If no runtime balancing is specified, just run possible number of batches\n// WARNING: Will get stuck on miss match\n- if(_runtimeBalancing == PSRuntimeBalancing.NONE) {\n+ if(_runtimeBalancing == PSRuntimeBalancing.NONE)\n_numBatchesPerEpoch = _possibleBatchesPerLocalEpoch;\n- }\n+\n+ // If running in baseline mode set cycle to false\n+ if(_runtimeBalancing == PSRuntimeBalancing.BASELINE)\n+ _cycleStartAt0 = true;\nif( LOG.isInfoEnabled() ) {\nLOG.info(\"Setup config for worker \" + this.getWorkerName());\nLOG.info(\"Batch size: \" + _batchSize + \" possible batches: \" + _possibleBatchesPerLocalEpoch\n- + \" batches to run: \" + _numBatchesPerEpoch + \" weighing factor: \" + _weighingFactor);\n+ + \" batches to run: \" + _numBatchesPerEpoch + \" weighting factor: \" + _weightingFactor);\n}\n// serialize program\n@@ -321,16 +323,16 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\nprotected void weighAndPushGradients(ListObject 
gradients) {\n// scale gradients - must only include MatrixObjects\n- if(_weighing && _weighingFactor != 1) {\n- Timing tWeighing = DMLScript.STATISTICS ? new Timing(true) : null;\n+ if(_weighting && _weightingFactor != 1) {\n+ Timing tWeighting = DMLScript.STATISTICS ? new Timing(true) : null;\ngradients.getData().parallelStream().forEach((matrix) -> {\nMatrixObject matrixObject = (MatrixObject) matrix;\nMatrixBlock input = matrixObject.acquireReadAndRelease().scalarOperations(\n- new RightScalarOperator(Multiply.getMultiplyFnObject(), _weighingFactor), new MatrixBlock());\n+ new RightScalarOperator(Multiply.getMultiplyFnObject(), _weightingFactor), new MatrixBlock());\nmatrixObject.acquireModify(input);\nmatrixObject.release();\n});\n- accFedPSGradientWeighingTime(tWeighing);\n+ accFedPSGradientWeightingTime(tWeighting);\n}\n// Push the gradients to ps\n@@ -557,9 +559,9 @@ public class FederatedPSControlThread extends PSWorker implements Callable<Void>\n}\n// Statistics methods\n- protected void accFedPSGradientWeighingTime(Timing time) {\n+ protected void accFedPSGradientWeightingTime(Timing time) {\nif (DMLScript.STATISTICS && time != null)\n- Statistics.accFedPSGradientWeighingTime((long) time.stop());\n+ Statistics.accFedPSGradientWeightingTime((long) time.stop());\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/ParamServer.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/ParamServer.java",
"diff": "@@ -98,7 +98,7 @@ public abstract class ParamServer\n_finishedStates = new boolean[workerNum];\nsetupAggFunc(_ec, aggFunc);\n- if(valFunc != null && numBatchesPerEpoch > 0) {\n+ if(valFunc != null && numBatchesPerEpoch > 0 && valFeatures != null && valLabels != null) {\nsetupValFunc(_ec, valFunc, valFeatures, valLabels);\n}\n_numBatchesPerEpoch = numBatchesPerEpoch;\n@@ -204,12 +204,15 @@ public abstract class ParamServer\n// This if has grown to be quite complex its function is rather simple. Validate at the end of each epoch\n// In the BSP batch case that occurs after the sync counter reaches the number of batches and in the\n// BSP epoch case every time\n- if ((_freq == Statement.PSFrequency.EPOCH ||\n+ if (_numBatchesPerEpoch != -1 &&\n+ (_freq == Statement.PSFrequency.EPOCH ||\n(_freq == Statement.PSFrequency.BATCH && ++_syncCounter % _numBatchesPerEpoch == 0))) {\nif(LOG.isInfoEnabled())\nLOG.info(\"[+] PARAMSERV: completed EPOCH \" + _epochCounter);\n+ time_epoch();\n+\nif(_validationPossible)\nvalidate();\n@@ -229,12 +232,15 @@ public abstract class ParamServer\nupdateGlobalModel(gradients);\n// This if works similarly to the one for BSP, but divides the sync couter through the number of workers,\n// creating \"Pseudo Epochs\"\n- if ((_freq == Statement.PSFrequency.EPOCH && ((float) ++_syncCounter % _numWorkers) == 0) ||\n- (_freq == Statement.PSFrequency.BATCH && ((float) ++_syncCounter / _numWorkers) % (float) _numBatchesPerEpoch == 0)) {\n+ if (_numBatchesPerEpoch != -1 &&\n+ ((_freq == Statement.PSFrequency.EPOCH && ((float) ++_syncCounter % _numWorkers) == 0) ||\n+ (_freq == Statement.PSFrequency.BATCH && ((float) ++_syncCounter / _numWorkers) % (float) _numBatchesPerEpoch == 0))) {\nif(LOG.isInfoEnabled())\nLOG.info(\"[+] PARAMSERV: completed PSEUDO EPOCH (ASP) \" + _epochCounter);\n+ time_epoch();\n+\nif(_validationPossible)\nvalidate();\n@@ -320,10 +326,29 @@ public abstract class ParamServer\nStatistics.accPSModelBroadcastTime((long) tBroad.stop());\n}\n+ /**\n+ * Prints the time the epoch took to complete\n+ */\n+ private void time_epoch() {\n+ if (DMLScript.STATISTICS) {\n+ //TODO double check correctness with multiple, potentially concurrent paramserv invocation\n+ Statistics.accPSExecutionTime((long) Statistics.getPSExecutionTimer().stop());\n+ double current_total_execution_time = Statistics.getPSExecutionTime();\n+ double current_total_validation_time = Statistics.getPSValidationTime();\n+ double time_to_epoch = current_total_execution_time - current_total_validation_time;\n+\n+ if (LOG.isInfoEnabled())\n+ if(_validationPossible)\n+ LOG.info(\"[+] PARAMSERV: epoch timer (excl. validation): \" + time_to_epoch / 1000 + \" secs.\");\n+ else\n+ LOG.info(\"[+] PARAMSERV: epoch timer: \" + time_to_epoch / 1000 + \" secs.\");\n+ }\n+ }\n+\n/**\n* Checks the current model against the validation set\n*/\n- private synchronized void validate() {\n+ private void validate() {\nTiming tValidate = DMLScript.STATISTICS ? new Timing(true) : null;\n_ec.setVariable(Statement.PS_MODEL, _model);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/BalanceToAvgFederatedScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/BalanceToAvgFederatedScheme.java",
"diff": "@@ -52,7 +52,7 @@ public class BalanceToAvgFederatedScheme extends DataPartitionFederatedScheme {\nList<MatrixObject> pFeatures = sliceFederatedMatrix(features);\nList<MatrixObject> pLabels = sliceFederatedMatrix(labels);\nBalanceMetrics balanceMetricsBefore = getBalanceMetrics(pFeatures);\n- List<Double> weighingFactors = getWeighingFactors(pFeatures, balanceMetricsBefore);\n+ List<Double> weightingFactors = getWeightingFactors(pFeatures, balanceMetricsBefore);\nint average_num_rows = (int) balanceMetricsBefore._avgRows;\n@@ -79,7 +79,7 @@ public class BalanceToAvgFederatedScheme extends DataPartitionFederatedScheme {\npLabels.get(i).updateDataCharacteristics(update);\n}\n- return new Result(pFeatures, pLabels, pFeatures.size(), getBalanceMetrics(pFeatures), weighingFactors);\n+ return new Result(pFeatures, pLabels, pFeatures.size(), getBalanceMetrics(pFeatures), weightingFactors);\n}\n/**\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/DataPartitionFederatedScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/DataPartitionFederatedScheme.java",
"diff": "@@ -45,15 +45,15 @@ public abstract class DataPartitionFederatedScheme {\npublic final List<MatrixObject> _pLabels;\npublic final int _workerNum;\npublic final BalanceMetrics _balanceMetrics;\n- public final List<Double> _weighingFactors;\n+ public final List<Double> _weightingFactors;\n- public Result(List<MatrixObject> pFeatures, List<MatrixObject> pLabels, int workerNum, BalanceMetrics balanceMetrics, List<Double> weighingFactors) {\n+ public Result(List<MatrixObject> pFeatures, List<MatrixObject> pLabels, int workerNum, BalanceMetrics balanceMetrics, List<Double> weightingFactors) {\n_pFeatures = pFeatures;\n_pLabels = pLabels;\n_workerNum = workerNum;\n_balanceMetrics = balanceMetrics;\n- _weighingFactors = weighingFactors;\n+ _weightingFactors = weightingFactors;\n}\n}\n@@ -125,12 +125,12 @@ public abstract class DataPartitionFederatedScheme {\nreturn new BalanceMetrics(minRows, sum / slices.size(), maxRows);\n}\n- static List<Double> getWeighingFactors(List<MatrixObject> pFeatures, BalanceMetrics balanceMetrics) {\n- List<Double> weighingFactors = new ArrayList<>();\n+ static List<Double> getWeightingFactors(List<MatrixObject> pFeatures, BalanceMetrics balanceMetrics) {\n+ List<Double> weightingFactors = new ArrayList<>();\npFeatures.forEach((feature) -> {\n- weighingFactors.add((double) feature.getNumRows() / balanceMetrics._avgRows);\n+ weightingFactors.add((double) feature.getNumRows() / balanceMetrics._avgRows);\n});\n- return weighingFactors;\n+ return weightingFactors;\n}\n/**\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/KeepDataOnWorkerFederatedScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/KeepDataOnWorkerFederatedScheme.java",
"diff": "@@ -35,7 +35,7 @@ public class KeepDataOnWorkerFederatedScheme extends DataPartitionFederatedSchem\nList<MatrixObject> pFeatures = sliceFederatedMatrix(features);\nList<MatrixObject> pLabels = sliceFederatedMatrix(labels);\nBalanceMetrics balanceMetrics = getBalanceMetrics(pFeatures);\n- List<Double> weighingFactors = getWeighingFactors(pFeatures, balanceMetrics);\n- return new Result(pFeatures, pLabels, pFeatures.size(), balanceMetrics, weighingFactors);\n+ List<Double> weightingFactors = getWeightingFactors(pFeatures, balanceMetrics);\n+ return new Result(pFeatures, pLabels, pFeatures.size(), balanceMetrics, weightingFactors);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/ReplicateToMaxFederatedScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/ReplicateToMaxFederatedScheme.java",
"diff": "@@ -52,7 +52,7 @@ public class ReplicateToMaxFederatedScheme extends DataPartitionFederatedScheme\npublic Result partition(MatrixObject features, MatrixObject labels, int seed) {\nList<MatrixObject> pFeatures = sliceFederatedMatrix(features);\nList<MatrixObject> pLabels = sliceFederatedMatrix(labels);\n- List<Double> weighingFactors = getWeighingFactors(pFeatures, getBalanceMetrics(pFeatures));\n+ List<Double> weightingFactors = getWeightingFactors(pFeatures, getBalanceMetrics(pFeatures));\nint max_rows = 0;\nfor (MatrixObject pFeature : pFeatures) {\n@@ -82,7 +82,7 @@ public class ReplicateToMaxFederatedScheme extends DataPartitionFederatedScheme\npLabels.get(i).updateDataCharacteristics(update);\n}\n- return new Result(pFeatures, pLabels, pFeatures.size(), getBalanceMetrics(pFeatures), weighingFactors);\n+ return new Result(pFeatures, pLabels, pFeatures.size(), getBalanceMetrics(pFeatures), weightingFactors);\n}\n/**\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/ShuffleFederatedScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/ShuffleFederatedScheme.java",
"diff": "@@ -51,7 +51,7 @@ public class ShuffleFederatedScheme extends DataPartitionFederatedScheme {\nList<MatrixObject> pFeatures = sliceFederatedMatrix(features);\nList<MatrixObject> pLabels = sliceFederatedMatrix(labels);\nBalanceMetrics balanceMetrics = getBalanceMetrics(pFeatures);\n- List<Double> weighingFactors = getWeighingFactors(pFeatures, balanceMetrics);\n+ List<Double> weightingFactors = getWeightingFactors(pFeatures, balanceMetrics);\nfor(int i = 0; i < pFeatures.size(); i++) {\n// Works, because the map contains a single entry\n@@ -71,7 +71,7 @@ public class ShuffleFederatedScheme extends DataPartitionFederatedScheme {\n}\n}\n- return new Result(pFeatures, pLabels, pFeatures.size(), balanceMetrics, weighingFactors);\n+ return new Result(pFeatures, pLabels, pFeatures.size(), balanceMetrics, weightingFactors);\n}\n/**\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/SubsampleToMinFederatedScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/paramserv/dp/SubsampleToMinFederatedScheme.java",
"diff": "@@ -52,7 +52,7 @@ public class SubsampleToMinFederatedScheme extends DataPartitionFederatedScheme\npublic Result partition(MatrixObject features, MatrixObject labels, int seed) {\nList<MatrixObject> pFeatures = sliceFederatedMatrix(features);\nList<MatrixObject> pLabels = sliceFederatedMatrix(labels);\n- List<Double> weighingFactors = getWeighingFactors(pFeatures, getBalanceMetrics(pFeatures));\n+ List<Double> weightingFactors = getWeightingFactors(pFeatures, getBalanceMetrics(pFeatures));\nint min_rows = Integer.MAX_VALUE;\nfor (MatrixObject pFeature : pFeatures) {\n@@ -82,7 +82,7 @@ public class SubsampleToMinFederatedScheme extends DataPartitionFederatedScheme\npLabels.get(i).updateDataCharacteristics(update);\n}\n- return new Result(pFeatures, pLabels, pFeatures.size(), getBalanceMetrics(pFeatures), weighingFactors);\n+ return new Result(pFeatures, pLabels, pFeatures.size(), getBalanceMetrics(pFeatures), weightingFactors);\n}\n/**\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParamservBuiltinCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/ParamservBuiltinCPInstruction.java",
"diff": "@@ -41,11 +41,10 @@ import static org.apache.sysds.parser.Statement.PS_MODE;\nimport static org.apache.sysds.parser.Statement.PS_MODEL;\nimport static org.apache.sysds.parser.Statement.PS_PARALLELISM;\nimport static org.apache.sysds.parser.Statement.PS_SCHEME;\n-import static org.apache.sysds.parser.Statement.PS_FED_SCHEME;\nimport static org.apache.sysds.parser.Statement.PS_UPDATE_FUN;\nimport static org.apache.sysds.parser.Statement.PS_UPDATE_TYPE;\nimport static org.apache.sysds.parser.Statement.PS_FED_RUNTIME_BALANCING;\n-import static org.apache.sysds.parser.Statement.PS_FED_WEIGHING;\n+import static org.apache.sysds.parser.Statement.PS_FED_WEIGHTING;\nimport static org.apache.sysds.parser.Statement.PS_SEED;\nimport static org.apache.sysds.parser.Statement.PS_VAL_FEATURES;\nimport static org.apache.sysds.parser.Statement.PS_VAL_LABELS;\n@@ -127,7 +126,9 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\n}\nprivate void runFederated(ExecutionContext ec) {\n- Timing tExecutionTime = DMLScript.STATISTICS ? new Timing(true) : null;\n+ if(DMLScript.STATISTICS)\n+ Statistics.getPSExecutionTimer().start();\n+\nTiming tSetup = DMLScript.STATISTICS ? new Timing(true) : null;\nLOG.info(\"PARAMETER SERVER\");\nLOG.info(\"[+] Running in federated mode\");\n@@ -135,12 +136,11 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\n// get inputs\nString updFunc = getParam(PS_UPDATE_FUN);\nString aggFunc = getParam(PS_AGGREGATION_FUN);\n- String valFunc = getValFunction();\nPSUpdateType updateType = getUpdateType();\nPSFrequency freq = getFrequency();\nFederatedPSScheme federatedPSScheme = getFederatedScheme();\nPSRuntimeBalancing runtimeBalancing = getRuntimeBalancing();\n- boolean weighing = getWeighing();\n+ boolean weighting = getWeighting();\nint seed = getSeed();\nif( LOG.isInfoEnabled() ) {\n@@ -148,7 +148,7 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nLOG.info(\"[+] Frequency: \" + freq);\nLOG.info(\"[+] Data Partitioning: \" + federatedPSScheme);\nLOG.info(\"[+] Runtime Balancing: \" + runtimeBalancing);\n- LOG.info(\"[+] Weighing: \" + weighing);\n+ LOG.info(\"[+] Weighting: \" + weighting);\nLOG.info(\"[+] Seed: \" + seed);\n}\nif (tSetup != null)\n@@ -179,12 +179,14 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nExecutionContext aggServiceEC = ParamservUtils.copyExecutionContext(newEC, 1).get(0);\n// Create the parameter server\nListObject model = ec.getListObject(getParam(PS_MODEL));\n- ParamServer ps = createPS(PSModeType.FEDERATED, aggFunc, updateType, freq, workerNum, model, aggServiceEC, valFunc,\n- getNumBatchesPerEpoch(runtimeBalancing, result._balanceMetrics), ec.getMatrixObject(getParam(PS_VAL_FEATURES)), ec.getMatrixObject(getParam(PS_VAL_LABELS)));\n+ MatrixObject val_features = (getParam(PS_VAL_FEATURES) != null) ? ec.getMatrixObject(getParam(PS_VAL_FEATURES)) : null;\n+ MatrixObject val_labels = (getParam(PS_VAL_LABELS) != null) ? 
ec.getMatrixObject(getParam(PS_VAL_LABELS)) : null;\n+ ParamServer ps = createPS(PSModeType.FEDERATED, aggFunc, updateType, freq, workerNum, model, aggServiceEC, getValFunction(),\n+ getNumBatchesPerEpoch(runtimeBalancing, result._balanceMetrics), val_features, val_labels);\n// Create the local workers\nint finalNumBatchesPerEpoch = getNumBatchesPerEpoch(runtimeBalancing, result._balanceMetrics);\nList<FederatedPSControlThread> threads = IntStream.range(0, workerNum)\n- .mapToObj(i -> new FederatedPSControlThread(i, updFunc, freq, runtimeBalancing, weighing,\n+ .mapToObj(i -> new FederatedPSControlThread(i, updFunc, freq, runtimeBalancing, weighting,\ngetEpochs(), getBatchSize(), finalNumBatchesPerEpoch, federatedWorkerECs.get(i), ps))\n.collect(Collectors.toList());\nif(workerNum != threads.size()) {\n@@ -194,7 +196,7 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nfor (int i = 0; i < threads.size(); i++) {\nthreads.get(i).setFeatures(result._pFeatures.get(i));\nthreads.get(i).setLabels(result._pLabels.get(i));\n- threads.get(i).setup(result._weighingFactors.get(i));\n+ threads.get(i).setup(result._weightingFactors.get(i));\n}\nif (DMLScript.STATISTICS)\nStatistics.accPSSetupTime((long) tSetup.stop());\n@@ -206,7 +208,7 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\n// Fetch the final model from ps\nec.setVariable(output.getName(), ps.getResult());\nif (DMLScript.STATISTICS)\n- Statistics.accPSExecutionTime((long) tExecutionTime.stop());\n+ Statistics.accPSExecutionTime((long) Statistics.getPSExecutionTimer().stop());\n} catch (InterruptedException | ExecutionException e) {\nthrow new DMLRuntimeException(\"ParamservBuiltinCPInstruction: unknown error: \", e);\n} finally {\n@@ -293,6 +295,9 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\n}\nprivate void runLocally(ExecutionContext ec, PSModeType mode) {\n+ if(DMLScript.STATISTICS)\n+ Statistics.getPSExecutionTimer().start();\n+\nTiming tSetup = DMLScript.STATISTICS ? new Timing(true) : null;\nint workerNum = getWorkerNum(mode);\nBasicThreadFactory factory = new BasicThreadFactory.Builder()\n@@ -314,9 +319,15 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nPSFrequency freq = getFrequency();\nPSUpdateType updateType = getUpdateType();\n+ double rows_per_worker = Math.ceil((float) ec.getMatrixObject(getParam(PS_FEATURES)).getNumRows() / workerNum);\n+ int num_batches_per_epoch = (int) Math.ceil(rows_per_worker / getBatchSize());\n+\n// Create the parameter server\nListObject model = ec.getListObject(getParam(PS_MODEL));\n- ParamServer ps = createPS(mode, aggFunc, updateType, freq, workerNum, model, aggServiceEC);\n+ MatrixObject val_features = (getParam(PS_VAL_FEATURES) != null) ? ec.getMatrixObject(getParam(PS_VAL_FEATURES)) : null;\n+ MatrixObject val_labels = (getParam(PS_VAL_LABELS) != null) ? 
ec.getMatrixObject(getParam(PS_VAL_LABELS)) : null;\n+ ParamServer ps = createPS(mode, aggFunc, updateType, freq, workerNum, model, aggServiceEC, getValFunction(),\n+ num_batches_per_epoch, val_features, val_labels);\n// Create the local workers\nList<LocalPSWorker> workers = IntStream.range(0, workerNum)\n@@ -344,6 +355,8 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nret.get(); //error handling\n// Fetch the final model from ps\nec.setVariable(output.getName(), ps.getResult());\n+ if (DMLScript.STATISTICS)\n+ Statistics.accPSExecutionTime((long) Statistics.getPSExecutionTimer().stop());\n} catch (InterruptedException | ExecutionException e) {\nthrow new DMLRuntimeException(\"ParamservBuiltinCPInstruction: some error occurred: \", e);\n} finally {\n@@ -529,11 +542,11 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nprivate FederatedPSScheme getFederatedScheme() {\nFederatedPSScheme federated_scheme = DEFAULT_FEDERATED_SCHEME;\n- if (getParameterMap().containsKey(PS_FED_SCHEME)) {\n+ if (getParameterMap().containsKey(PS_SCHEME)) {\ntry {\n- federated_scheme = FederatedPSScheme.valueOf(getParam(PS_FED_SCHEME));\n+ federated_scheme = FederatedPSScheme.valueOf(getParam(PS_SCHEME));\n} catch (IllegalArgumentException e) {\n- throw new DMLRuntimeException(String.format(\"Paramserv function in federated mode: not support data partition scheme '%s'\", getParam(PS_FED_SCHEME)));\n+ throw new DMLRuntimeException(String.format(\"Paramserv function in federated mode: not support data partition scheme '%s'\", getParam(PS_SCHEME)));\n}\n}\nreturn federated_scheme;\n@@ -548,7 +561,7 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\n*/\nprivate int getNumBatchesPerEpoch(PSRuntimeBalancing runtimeBalancing, DataPartitionFederatedScheme.BalanceMetrics balanceMetrics) {\nint numBatchesPerEpoch;\n- if(runtimeBalancing == PSRuntimeBalancing.RUN_MIN) {\n+ if(runtimeBalancing == PSRuntimeBalancing.CYCLE_MIN || runtimeBalancing == PSRuntimeBalancing.BASELINE) {\nnumBatchesPerEpoch = (int) Math.ceil(balanceMetrics._minRows / (float) getBatchSize());\n} else if (runtimeBalancing == PSRuntimeBalancing.CYCLE_AVG\n|| runtimeBalancing == PSRuntimeBalancing.SCALE_BATCH) {\n@@ -561,8 +574,8 @@ public class ParamservBuiltinCPInstruction extends ParameterizedBuiltinCPInstruc\nreturn numBatchesPerEpoch;\n}\n- private boolean getWeighing() {\n- return getParameterMap().containsKey(PS_FED_WEIGHING) && Boolean.parseBoolean(getParam(PS_FED_WEIGHING));\n+ private boolean getWeighting() {\n+ return getParameterMap().containsKey(PS_FED_WEIGHTING) && Boolean.parseBoolean(getParam(PS_FED_WEIGHTING));\n}\nprivate String getValFunction() {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/Statistics.java",
"new_path": "src/main/java/org/apache/sysds/utils/Statistics.java",
"diff": "@@ -38,6 +38,7 @@ import org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.runtime.controlprogram.caching.CacheStatistics;\nimport org.apache.sysds.runtime.controlprogram.context.SparkExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedRequest.RequestType;\n+import org.apache.sysds.runtime.controlprogram.parfor.stat.Timing;\nimport org.apache.sysds.runtime.instructions.Instruction;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.FunctionCallCPInstruction;\n@@ -117,6 +118,7 @@ public class Statistics\nprivate static final LongAdder sparkBroadcastCount = new LongAdder();\n// Paramserv function stats (time is in milli sec)\n+ private static final Timing psExecutionTimer = new Timing(false);\nprivate static final LongAdder psExecutionTime = new LongAdder();\nprivate static final LongAdder psNumWorkers = new LongAdder();\nprivate static final LongAdder psSetupTime = new LongAdder();\n@@ -130,7 +132,7 @@ public class Statistics\n// Federated parameter server specifics (time is in milli sec)\nprivate static final LongAdder fedPSDataPartitioningTime = new LongAdder();\nprivate static final LongAdder fedPSWorkerComputingTime = new LongAdder();\n- private static final LongAdder fedPSGradientWeighingTime = new LongAdder();\n+ private static final LongAdder fedPSGradientWeightingTime = new LongAdder();\nprivate static final LongAdder fedPSCommunicationTime = new LongAdder();\n//PARFOR optimization stats (low frequency updates)\n@@ -571,6 +573,14 @@ public class Statistics\npsNumWorkers.add(n);\n}\n+ public static Timing getPSExecutionTimer() {\n+ return psExecutionTimer;\n+ }\n+\n+ public static double getPSExecutionTime() {\n+ return psExecutionTime.doubleValue();\n+ }\n+\npublic static void accPSExecutionTime(long n) {\npsExecutionTime.add(n);\n}\n@@ -603,6 +613,10 @@ public class Statistics\npsRpcRequestTime.add(t);\n}\n+ public static double getPSValidationTime() {\n+ return psValidationTime.doubleValue();\n+ }\n+\npublic static void accPSValidationTime(long t) {\npsValidationTime.add(t);\n}\n@@ -615,8 +629,8 @@ public class Statistics\nfedPSWorkerComputingTime.add(t);\n}\n- public static void accFedPSGradientWeighingTime(long t) {\n- fedPSGradientWeighingTime.add(t);\n+ public static void accFedPSGradientWeightingTime(long t) {\n+ fedPSGradientWeightingTime.add(t);\n}\npublic static void accFedPSCommunicationTime(long t) { fedPSCommunicationTime.add(t);}\n@@ -1049,7 +1063,7 @@ public class Statistics\nsb.append(String.format(\"PS fed data partitioning time:\\t%.3f secs.\\n\", fedPSDataPartitioningTime.doubleValue() / 1000));\nsb.append(String.format(\"PS fed comm time (cum):\\t\\t%.3f secs.\\n\", fedPSCommunicationTime.doubleValue() / 1000));\nsb.append(String.format(\"PS fed worker comp time (cum):\\t%.3f secs.\\n\", fedPSWorkerComputingTime.doubleValue() / 1000));\n- sb.append(String.format(\"PS fed grad weigh time (cum):\\t%.3f secs.\\n\", fedPSGradientWeighingTime.doubleValue() / 1000));\n+ sb.append(String.format(\"PS fed grad. weigh. time (cum):\\t%.3f secs.\\n\", fedPSGradientWeightingTime.doubleValue() / 1000));\nsb.append(String.format(\"PS fed global model agg time:\\t%.3f secs.\\n\", psAggregationTime.doubleValue() / 1000));\n}\nelse {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/FederatedParamservTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/FederatedParamservTest.java",
"diff": "@@ -54,7 +54,7 @@ public class FederatedParamservTest extends AutomatedTestBase {\nprivate final String _freq;\nprivate final String _scheme;\nprivate final String _runtime_balancing;\n- private final String _weighing;\n+ private final String _weighting;\nprivate final String _data_distribution;\nprivate final int _seed;\n@@ -66,16 +66,16 @@ public class FederatedParamservTest extends AutomatedTestBase {\n// basic functionality\n//{\"TwoNN\", 4, 60000, 32, 4, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"NONE\" , \"false\",\"BALANCED\", 200},\n- {\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"RUN_MIN\" , \"true\", \"IMBALANCED\", 200},\n+ {\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"BASELINE\", \"true\", \"IMBALANCED\", 200},\n{\"CNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"EPOCH\", \"SHUFFLE\", \"NONE\", \"true\", \"IMBALANCED\", 200},\n- {\"CNN\", 2, 4, 1, 4, 0.01, \"ASP\", \"BATCH\", \"REPLICATE_TO_MAX\", \"RUN_MIN\" , \"true\", \"IMBALANCED\", 200},\n+ {\"CNN\", 2, 4, 1, 4, 0.01, \"ASP\", \"BATCH\", \"REPLICATE_TO_MAX\", \"CYCLE_MIN\", \"true\", \"IMBALANCED\", 200},\n{\"TwoNN\", 2, 4, 1, 4, 0.01, \"ASP\", \"EPOCH\", \"BALANCE_TO_AVG\", \"CYCLE_MAX\", \"true\", \"IMBALANCED\", 200},\n{\"TwoNN\", 5, 1000, 100, 2, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"NONE\", \"true\", \"BALANCED\", 200},\n/*\n// runtime balancing\n- {\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"RUN_MIN\" , \"true\", \"IMBALANCED\", 200},\n- {\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"EPOCH\", \"KEEP_DATA_ON_WORKER\", \"RUN_MIN\" , \"true\", \"IMBALANCED\", 200},\n+ {\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"CYCLE_MIN\", \"true\", \"IMBALANCED\", 200},\n+ {\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"EPOCH\", \"KEEP_DATA_ON_WORKER\", \"CYCLE_MIN\", \"true\", \"IMBALANCED\", 200},\n{\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"CYCLE_AVG\", \"true\", \"IMBALANCED\", 200},\n{\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"EPOCH\", \"KEEP_DATA_ON_WORKER\", \"CYCLE_AVG\", \"true\", \"IMBALANCED\", 200},\n{\"TwoNN\", 2, 4, 1, 4, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"CYCLE_MAX\", \"true\", \"IMBALANCED\", 200},\n@@ -94,7 +94,7 @@ public class FederatedParamservTest extends AutomatedTestBase {\n}\npublic FederatedParamservTest(String networkType, int numFederatedWorkers, int dataSetSize, int batch_size,\n- int epochs, double eta, String utype, String freq, String scheme, String runtime_balancing, String weighing, String data_distribution, int seed) {\n+ int epochs, double eta, String utype, String freq, String scheme, String runtime_balancing, String weighting, String data_distribution, int seed) {\n_networkType = networkType;\n_numFederatedWorkers = numFederatedWorkers;\n@@ -106,7 +106,7 @@ public class FederatedParamservTest extends AutomatedTestBase {\n_freq = freq;\n_scheme = scheme;\n_runtime_balancing = runtime_balancing;\n- _weighing = weighing;\n+ _weighting = weighting;\n_data_distribution = data_distribution;\n_seed = seed;\n}\n@@ -192,7 +192,7 @@ public class FederatedParamservTest extends AutomatedTestBase {\n\"freq=\" + _freq,\n\"scheme=\" + _scheme,\n\"runtime_balancing=\" + _runtime_balancing,\n- \"weighing=\" + _weighing,\n+ \"weighting=\" + _weighting,\n\"network_type=\" + _networkType,\n\"channels=\" + C,\n\"hin=\" + Hin,\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/paramserv/CNN.dml",
"new_path": "src/test/scripts/functions/federated/paramserv/CNN.dml",
"diff": "@@ -161,7 +161,7 @@ train = function(matrix[double] X, matrix[double] y, matrix[double] X_val,\ntrain_paramserv = function(matrix[double] X, matrix[double] y,\nmatrix[double] X_val, matrix[double] y_val, int num_workers, int epochs,\nstring utype, string freq, int batch_size, string scheme, string runtime_balancing,\n- string weighing, double eta, int C, int Hin, int Win, int seed = -1)\n+ string weighting, double eta, int C, int Hin, int Win, int seed = -1)\nreturn (list[unknown] model)\n{\nN = nrow(X)\n@@ -208,7 +208,7 @@ train_paramserv = function(matrix[double] X, matrix[double] y,\nagg=\"./src/test/scripts/functions/federated/paramserv/CNN.dml::aggregation\",\nval=\"./src/test/scripts/functions/federated/paramserv/CNN.dml::validate\",\nk=num_workers, utype=utype, freq=freq, epochs=epochs, batchsize=batch_size,\n- scheme=scheme, runtime_balancing=runtime_balancing, weighing=weighing, hyperparams=hyperparams, seed=seed)\n+ scheme=scheme, runtime_balancing=runtime_balancing, weighting=weighting, hyperparams=hyperparams, seed=seed)\n}\n/*\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/paramserv/FederatedParamservTest.dml",
"new_path": "src/test/scripts/functions/federated/paramserv/FederatedParamservTest.dml",
"diff": "@@ -27,13 +27,13 @@ features = read($features)\nlabels = read($labels)\nif($network_type == \"TwoNN\") {\n- model = TwoNN::train_paramserv(features, labels, matrix(0, rows=100, cols=784), matrix(0, rows=100, cols=10), 0, $epochs, $utype, $freq, $batch_size, $scheme, $runtime_balancing, $weighing, $eta, $seed)\n+ model = TwoNN::train_paramserv(features, labels, matrix(0, rows=100, cols=784), matrix(0, rows=100, cols=10), 0, $epochs, $utype, $freq, $batch_size, $scheme, $runtime_balancing, $weighting, $eta, $seed)\nprint(\"Test results:\")\n[loss_test, accuracy_test] = TwoNN::validate(matrix(0, rows=100, cols=784), matrix(0, rows=100, cols=10), model, list())\nprint(\"[+] test loss: \" + loss_test + \", test accuracy: \" + accuracy_test + \"\\n\")\n}\nelse {\n- model = CNN::train_paramserv(features, labels, matrix(0, rows=100, cols=784), matrix(0, rows=100, cols=10), 0, $epochs, $utype, $freq, $batch_size, $scheme, $runtime_balancing, $weighing, $eta, $channels, $hin, $win, $seed)\n+ model = CNN::train_paramserv(features, labels, matrix(0, rows=100, cols=784), matrix(0, rows=100, cols=10), 0, $epochs, $utype, $freq, $batch_size, $scheme, $runtime_balancing, $weighting, $eta, $channels, $hin, $win, $seed)\nprint(\"Test results:\")\nhyperparams = list(learning_rate=$eta, C=$channels, Hin=$hin, Win=$win)\n[loss_test, accuracy_test] = CNN::validate(matrix(0, rows=100, cols=784), matrix(0, rows=100, cols=10), model, hyperparams)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/paramserv/TwoNN.dml",
"new_path": "src/test/scripts/functions/federated/paramserv/TwoNN.dml",
"diff": "@@ -125,7 +125,7 @@ train = function(matrix[double] X, matrix[double] y,\n*/\ntrain_paramserv = function(matrix[double] X, matrix[double] y,\nmatrix[double] X_val, matrix[double] y_val,\n- int num_workers, int epochs, string utype, string freq, int batch_size, string scheme, string runtime_balancing, string weighing,\n+ int num_workers, int epochs, string utype, string freq, int batch_size, string scheme, string runtime_balancing, string weighting,\ndouble eta, int seed = -1)\nreturn (list[unknown] model) {\n@@ -156,7 +156,7 @@ train_paramserv = function(matrix[double] X, matrix[double] y,\nagg=\"./src/test/scripts/functions/federated/paramserv/TwoNN.dml::aggregation\",\nval=\"./src/test/scripts/functions/federated/paramserv/TwoNN.dml::validate\",\nk=num_workers, utype=utype, freq=freq, epochs=epochs, batchsize=batch_size,\n- scheme=scheme, runtime_balancing=runtime_balancing, weighing=weighing, hyperparams=hyperparams, seed=seed)\n+ scheme=scheme, runtime_balancing=runtime_balancing, weighting=weighting, hyperparams=hyperparams, seed=seed)\n}\n/*\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-2550] Improved parameter server epoch timing/logging
Closes #1176. |
49,688 | 23.02.2021 19:10:53 | -3,600 | 846d1e25156ddec7187be040fb8eb92f6ad7d8e1 | [MINOR] Fix null value check in EncoderMVImpute
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/transform/encode/EncoderMVImpute.java",
"new_path": "src/main/java/org/apache/sysds/runtime/transform/encode/EncoderMVImpute.java",
"diff": "@@ -178,7 +178,7 @@ public class EncoderMVImpute extends Encoder\n_hist.get(colID) : new HashMap<>();\nfor( int i=0; i<in.getNumRows(); i++ ) {\nString key = String.valueOf(in.get(i, colID-1));\n- if( key != null && !key.isEmpty() ) {\n+ if(!key.equals(\"null\") && !key.isEmpty() ) {\nLong val = hist.get(key);\nhist.put(key, (val!=null) ? val+1 : 1);\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix null value check in EncoderMVImpute
Closes #1187. |