Dataset schema (for string columns, the min/max values are string lengths):

| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 20 | 338k |
| vocab_size | int64 | 2 | 671 |
| ast_levels | int64 | 4 | 32 |
| nloc | int64 | 1 | 451 |
| n_ast_nodes | int64 | 12 | 5.6k |
| n_identifiers | int64 | 1 | 186 |
| n_ast_errors | int64 | 0 | 10 |
| n_words | int64 | 2 | 2.17k |
| n_whitespaces | int64 | 2 | 13.8k |
| fun_name | string (length) | 2 | 73 |
| commit_message | string (length) | 51 | 15.3k |
| url | string (length) | 31 | 59 |
| code | string (length) | 51 | 31k |
| ast_errors | string (length) | 0 | 1.46k |
| token_counts | int64 | 6 | 3.32k |
| file_name | string (length) | 5 | 56 |
| language | string (1 class: Python) | n/a | n/a |
| path | string (length) | 7 | 134 |
| commit_id | string (length) | 40 | 40 |
| repo | string (length) | 3 | 28 |
| complexity | int64 | 1 | 153 |
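Each row in the dump below is one function-level sample, with its fields listed in the schema order above (the ast_errors value is omitted when it is empty). As a minimal sketch of how a dataset with this schema could be loaded and filtered with the `datasets` library, consider the snippet below; the dataset identifier, split name, and filter thresholds are placeholders and not taken from this document.

```python
# Minimal sketch, assuming this dump comes from a Hugging Face dataset with the
# schema above. The dataset id, split, and thresholds are hypothetical.
from datasets import load_dataset

ds = load_dataset("some-org/python-functions-with-commits", split="train")  # hypothetical id

# Keep functions that parsed cleanly and are reasonably small.
clean = ds.filter(
    lambda row: row["n_ast_errors"] == 0 and row["complexity"] <= 5 and row["nloc"] <= 30
)

row = clean[0]
print(row["repo"], row["path"], row["fun_name"])  # repository, file path, function name
print(row["code"])            # source of the function (whitespace-collapsed in this dump)
print(row["commit_message"])  # message of the commit that touched it
```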
289,232
35
11
17
161
24
0
40
123
test_setup_devices_exception
Update to iaqualink 0.5.0 (#80304) * Update to iaqualink 0.5.0. * Boolean conditional style fix Co-authored-by: Martin Hjelmare <[email protected]> * Fix black formatting * Update iaqualink tests after update to 0.5.x * Remove debug print statements Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
async def test_setup_devices_exception(hass, config_entry, client):
    config_entry.add_to_hass(hass)

    system = get_aqualink_system(client, cls=IaquaSystem)
    systems = {system.serial: system}

    with patch(
        "homeassistant.components.iaqualink.AqualinkClient.login",
        return_value=None,
    ), patch(
        "homeassistant.components.iaqualink.AqualinkClient.get_systems",
        return_value=systems,
    ), patch.object(
        system, "get_devices"
    ) as mock_get_devices:
        mock_get_devices.side_effect = AqualinkServiceException
        await hass.config_entries.async_setup(config_entry.entry_id)
        await hass.async_block_till_done()

    assert config_entry.state is ConfigEntryState.SETUP_RETRY
97
test_init.py
Python
tests/components/iaqualink/test_init.py
abec592a248607869dc1d495f956ca397dc189f4
core
1
107,739
35
13
11
132
12
0
43
157
set_rotation
Deprecate toplevel mpl.text.get_rotation; normalize rotations early. get_rotation had been made a toplevel function a long time ago to be used in TextWithDash, which has now been removed, so there isn't much justification to have it separate. Also, note that while the toplevel get_rotation's implementation also supported string-form of floats (see test_text.test_get_rotation_string), this was not consistent with the docstring, and, more importantly, it was not possible to set a Text's rotation to e.g. "15." because Text.set_rotation() would first reject that anyways. Also, we can directly normalize angles in Text.set_rotation, rather than doing it again and again in Text.get_rotation. Again, this made the old inconsistency (on supporting string-form floats) clearer.
https://github.com/matplotlib/matplotlib.git
def set_rotation(self, s):
    if isinstance(s, numbers.Real):
        self._rotation = float(s) % 360
    elif cbook._str_equal(s, 'horizontal') or s is None:
        self._rotation = 0.
    elif cbook._str_equal(s, 'vertical'):
        self._rotation = 90.
    else:
        raise ValueError("rotation must be 'vertical', 'horizontal' or "
                         f"a number, not {s}")
    self.stale = True
78
text.py
Python
lib/matplotlib/text.py
1f8f50e522f7e623279626381d650f8ff63627e3
matplotlib
5
141,966
51
15
23
348
29
0
83
261
test_syncer_callback_sync_period
[tune] Refactor Syncer / deprecate Sync client (#25655) This PR includes / depends on #25709 The two concepts of Syncer and SyncClient are confusing, as is the current API for passing custom sync functions. This PR refactors Tune's syncing behavior. The Sync client concept is hard deprecated. Instead, we offer a well defined Syncer API that can be extended to provide own syncing functionality. However, the default will be to use Ray AIRs file transfer utilities. New API: - Users can pass `syncer=CustomSyncer` which implements the `Syncer` API - Otherwise our off-the-shelf syncing is used - As before, syncing to cloud disables syncing to driver Changes: - Sync client is removed - Syncer interface introduced - _DefaultSyncer is a wrapper around the URI upload/download API from Ray AIR - SyncerCallback only uses remote tasks to synchronize data - Rsync syncing is fully depracated and removed - Docker and kubernetes-specific syncing is fully deprecated and removed - Testing is improved to use `file://` URIs instead of mock sync clients
https://github.com/ray-project/ray.git
def test_syncer_callback_sync_period(ray_start_2_cpus, temp_data_dirs):
    tmp_source, tmp_target = temp_data_dirs

    with freeze_time() as frozen:
        syncer_callback = TestSyncerCallback(
            sync_period=60, local_logdir_override=tmp_target
        )

        trial1 = MockTrial(trial_id="a", logdir=tmp_source)

        syncer_callback.on_trial_result(iteration=1, trials=[], trial=trial1, result={})
        syncer_callback.wait_for_all()
        assert_file(True, tmp_target, "level0.txt")
        assert_file(False, tmp_target, "level0_new.txt")

        # Add new file to source directory
        with open(os.path.join(tmp_source, "level0_new.txt"), "w") as f:
            f.write("Data\n")

        frozen.tick(30)

        # Should not sync after 30 seconds
        syncer_callback.on_trial_result(iteration=2, trials=[], trial=trial1, result={})
        syncer_callback.wait_for_all()
        assert_file(True, tmp_target, "level0.txt")
        assert_file(False, tmp_target, "level0_new.txt")

        frozen.tick(30)

        # Should sync after 60 seconds
        syncer_callback.on_trial_result(iteration=3, trials=[], trial=trial1, result={})
        syncer_callback.wait_for_all()
        assert_file(True, tmp_target, "level0.txt")
        assert_file(True, tmp_target, "level0_new.txt")
210
test_syncer_callback.py
Python
python/ray/tune/tests/test_syncer_callback.py
6313ddc47cf9df4df8c8907997df559850a1b874
ray
1
43,203
25
11
19
116
19
0
26
231
test_confirm
Don't rely on current ORM structure for db clean command (#23574) For command DB clean, by not relying on the ORM models, we will be able to use the command even when the metadatabase is not yet upgraded to the version of Airflow you have installed. Additionally we archive all rows before deletion.
https://github.com/apache/airflow.git
def test_confirm(self, run_cleanup_mock, confirm_arg, expected):
    args = self.parser.parse_args(
        [
            'db',
            'clean',
            '--clean-before-timestamp',
            '2021-01-01',
            *confirm_arg,
        ]
    )
    db_command.cleanup_tables(args)
    run_cleanup_mock.assert_called_once_with(
        table_names=None,
        dry_run=False,
        clean_before_timestamp=pendulum.parse('2021-01-01 00:00:00Z'),
        verbose=False,
        confirm=expected,
        skip_archive=False,
    )
74
test_db_command.py
Python
tests/cli/commands/test_db_command.py
95bd6b71cc9f5da377e272707f7b68000d980939
airflow
1
60,240
61
18
24
362
31
0
105
454
configure_crop
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def configure_crop(self, context_pad):
    # crop dimensions
    in_ = self.inputs[0]
    tpose = self.transformer.transpose[in_]
    inv_tpose = [tpose[t] for t in tpose]
    self.crop_dims = np.array(self.blobs[in_].data.shape[1:])[inv_tpose]
    #.transpose(inv_tpose)
    # context padding
    self.context_pad = context_pad
    if self.context_pad:
        in_ = self.inputs[0]
        transpose = self.transformer.transpose.get(in_)
        channel_order = self.transformer.channel_swap.get(in_)
        raw_scale = self.transformer.raw_scale.get(in_)
        # Padding context crops needs the mean in unprocessed input space.
        mean = self.transformer.mean.get(in_)
        if mean is not None:
            inv_transpose = [transpose[t] for t in transpose]
            crop_mean = mean.copy().transpose(inv_transpose)
            if channel_order is not None:
                channel_order_inverse = [channel_order.index(i)
                                         for i in range(crop_mean.shape[2])]
                crop_mean = crop_mean[:, :, channel_order_inverse]
            if raw_scale is not None:
                crop_mean /= raw_scale
            self.crop_mean = crop_mean
        else:
            self.crop_mean = np.zeros(self.crop_dims, dtype=np.float32)
233
detector.py
Python
code/deep/BJMMD/caffe/python/caffe/detector.py
cc4d0564756ca067516f71718a3d135996525909
transferlearning
8
258,625
98
14
35
495
40
0
137
456
fit
ENH Replaced RandomState with Generator compatible calls (#22271)
https://github.com/scikit-learn/scikit-learn.git
def fit(self, X, y):
    y = self._validate_data(X="no_validation", y=y)

    if self.code_size <= 0:
        raise ValueError(
            "code_size should be greater than 0, got {0}".format(self.code_size)
        )

    _check_estimator(self.estimator)
    random_state = check_random_state(self.random_state)
    check_classification_targets(y)

    self.classes_ = np.unique(y)
    n_classes = self.classes_.shape[0]
    if n_classes == 0:
        raise ValueError(
            "OutputCodeClassifier can not be fit when no class is present."
        )
    code_size_ = int(n_classes * self.code_size)

    # FIXME: there are more elaborate methods than generating the codebook
    # randomly.
    self.code_book_ = random_state.uniform(size=(n_classes, code_size_))
    self.code_book_[self.code_book_ > 0.5] = 1

    if hasattr(self.estimator, "decision_function"):
        self.code_book_[self.code_book_ != 1] = -1
    else:
        self.code_book_[self.code_book_ != 1] = 0

    classes_index = {c: i for i, c in enumerate(self.classes_)}

    Y = np.array(
        [self.code_book_[classes_index[y[i]]] for i in range(_num_samples(y))],
        dtype=int,
    )

    self.estimators_ = Parallel(n_jobs=self.n_jobs)(
        delayed(_fit_binary)(self.estimator, X, Y[:, i]) for i in range(Y.shape[1])
    )

    if hasattr(self.estimators_[0], "n_features_in_"):
        self.n_features_in_ = self.estimators_[0].n_features_in_
    if hasattr(self.estimators_[0], "feature_names_in_"):
        self.feature_names_in_ = self.estimators_[0].feature_names_in_

    return self
318
multiclass.py
Python
sklearn/multiclass.py
254ea8c453cd2100ade07644648f1f00392611a6
scikit-learn
9
269,246
70
22
43
498
25
0
184
640
_get_data_iterator_from_dataset
fixed import random, removed keras import ,fixed grammer issues
https://github.com/keras-team/keras.git
def _get_data_iterator_from_dataset(dataset,dataset_type_spec) : if dataset_type_spec == list: if len(dataset) == 0: raise ValueError('Received an empty list dataset. ' 'Please provide a non-empty list of arrays.') if _get_type_spec(dataset[0]) is np.ndarray: expected_shape = dataset[0].shape for i,element in enumerate(dataset): if not np.array(element).shape[0] == expected_shape[0]: raise ValueError('Received a list of numpy arrays with different ' f'lengths. Mismatch found at index {i}, ' f'Expected shape={expected_shape} ' f'Received shape={np.array(element).shape}.' f'Please provide a list of numpy arrays with ' f'same length.') else: raise ValueError('Expected a list of numpy.ndarrays,' 'Received: {}'.format(type(dataset[0]))) return iter(zip(*dataset)) elif dataset_type_spec == tuple: if len(dataset) == 0: raise ValueError('Received an empty list dataset.' 'Please provide a non-empty tuple of arrays.') if _get_type_spec(dataset[0]) is np.ndarray: expected_shape = dataset[0].shape for i,element in enumerate(dataset): if not np.array(element).shape[0] == expected_shape[0]: raise ValueError('Received a tuple of numpy arrays with different ' f'lengths. Mismatch found at index {i}, ' f'Expected shape={expected_shape} ' f'Received shape={np.array(element).shape}.' f'Please provide a tuple of numpy arrays with ' 'same length.') else: raise ValueError('Expected a tuple of numpy.ndarrays, ' 'Received: {}'.format(type(dataset[0]))) return iter(zip(*dataset)) elif dataset_type_spec == tf.data.Dataset: if is_batched(dataset): dataset = dataset.unbatch() return iter(dataset) elif dataset_type_spec == np.ndarray: return iter(dataset)
270
dataset_utils.py
Python
keras/utils/dataset_utils.py
c3a27a6642c03c6380aca22c6e3d73d0b29bb271
keras
14
176,593
12
13
6
80
8
1
14
48
is_strongly_connected
Added examples in connected and strongly connected functions (#5559) * added examples * Update networkx/algorithms/components/connected.py Co-authored-by: Ross Barnowski <[email protected]> Co-authored-by: Ross Barnowski <[email protected]>
https://github.com/networkx/networkx.git
def is_strongly_connected(G):
    if len(G) == 0:
        raise nx.NetworkXPointlessConcept()
    return len(list(strongly_connected_components(G))[0]) == len(G)


@not_implemented_for("undirected")
@not_implemented_for("undirected")
40
strongly_connected.py
Python
networkx/algorithms/components/strongly_connected.py
7cad29b3542ad867f1eb5b7b6a9087495f252749
networkx
2
27,800
78
15
93
595
45
0
129
330
test_orderline_query
Metadata added to checkout and order lines (#10040) * Metadata added to checkout and order lines * CHANGELOG.md update * Missing tests added
https://github.com/saleor/saleor.git
def test_orderline_query(staff_api_client, permission_manage_orders, fulfilled_order): order = fulfilled_order query = line = order.lines.first() metadata_key = "md key" metadata_value = "md value" line.store_value_in_private_metadata({metadata_key: metadata_value}) line.store_value_in_metadata({metadata_key: metadata_value}) line.save() staff_api_client.user.user_permissions.add(permission_manage_orders) response = staff_api_client.post_graphql(query) content = get_graphql_content(response) order_data = content["data"]["orders"]["edges"][0]["node"] first_order_data_line = order_data["lines"][0] variant_id = graphene.Node.to_global_id("ProductVariant", line.variant.pk) assert first_order_data_line["thumbnail"] is None assert first_order_data_line["variant"]["id"] == variant_id assert first_order_data_line["quantity"] == line.quantity assert first_order_data_line["unitPrice"]["currency"] == line.unit_price.currency assert first_order_data_line["metadata"] == [ {"key": metadata_key, "value": metadata_value} ] assert first_order_data_line["privateMetadata"] == [ {"key": metadata_key, "value": metadata_value} ] expected_unit_price = Money( amount=str(first_order_data_line["unitPrice"]["gross"]["amount"]), currency="USD", ) assert first_order_data_line["totalPrice"]["currency"] == line.unit_price.currency assert expected_unit_price == line.unit_price.gross expected_total_price = Money( amount=str(first_order_data_line["totalPrice"]["gross"]["amount"]), currency="USD", ) assert expected_total_price == line.unit_price.gross * line.quantity allocation = line.allocations.first() allocation_id = graphene.Node.to_global_id("Allocation", allocation.pk) warehouse_id = graphene.Node.to_global_id( "Warehouse", allocation.stock.warehouse.pk ) assert first_order_data_line["allocations"] == [ { "id": allocation_id, "quantity": allocation.quantity_allocated, "warehouse": {"id": warehouse_id}, } ]
349
test_order.py
Python
saleor/graphql/order/tests/test_order.py
a68553e1a55e3a1bd32826cdce294d27f74175e9
saleor
1
209,520
34
9
6
58
6
0
40
115
recap
E275 - Missing whitespace after keyword (#3711) Co-authored-by: Alexander Aring <[email protected]> Co-authored-by: Anmol Sarma <[email protected]> Co-authored-by: antoine.torre <[email protected]> Co-authored-by: Antoine Vacher <[email protected]> Co-authored-by: Arnaud Ebalard <[email protected]> Co-authored-by: atlowl <[email protected]> Co-authored-by: Brian Bienvenu <[email protected]> Co-authored-by: Chris Packham <[email protected]> Co-authored-by: CQ <[email protected]> Co-authored-by: Daniel Collins <[email protected]> Co-authored-by: Federico Maggi <[email protected]> Co-authored-by: Florian Maury <[email protected]> Co-authored-by: _Frky <[email protected]> Co-authored-by: g-mahieux <[email protected]> Co-authored-by: gpotter2 <[email protected]> Co-authored-by: Guillaume Valadon <[email protected]> Co-authored-by: Hao Zheng <[email protected]> Co-authored-by: Haresh Khandelwal <[email protected]> Co-authored-by: Harri Hämäläinen <[email protected]> Co-authored-by: hecke <[email protected]> Co-authored-by: Jan Romann <[email protected]> Co-authored-by: Jan Sebechlebsky <[email protected]> Co-authored-by: jdiog0 <[email protected]> Co-authored-by: jockque <[email protected]> Co-authored-by: Julien Bedel <[email protected]> Co-authored-by: Keith Scott <[email protected]> Co-authored-by: Kfir Gollan <[email protected]> Co-authored-by: Lars Munch <[email protected]> Co-authored-by: ldp77 <[email protected]> Co-authored-by: Leonard Crestez <[email protected]> Co-authored-by: Marcel Patzlaff <[email protected]> Co-authored-by: Martijn Thé <[email protected]> Co-authored-by: Martine Lenders <[email protected]> Co-authored-by: Michael Farrell <[email protected]> Co-authored-by: Michał Mirosław <[email protected]> Co-authored-by: mkaliszan <[email protected]> Co-authored-by: mtury <[email protected]> Co-authored-by: Neale Ranns <[email protected]> Co-authored-by: Octavian Toader <[email protected]> Co-authored-by: Peter Eisenlohr <[email protected]> Co-authored-by: Phil <[email protected]> Co-authored-by: Pierre Lalet <[email protected]> Co-authored-by: Pierre Lorinquer <[email protected]> Co-authored-by: piersoh <[email protected]> Co-authored-by: plorinquer <[email protected]> Co-authored-by: pvinci <[email protected]> Co-authored-by: Rahul Jadhav <[email protected]> Co-authored-by: Robin Jarry <[email protected]> Co-authored-by: romain-perez <[email protected]> Co-authored-by: rperez <rperez@debian> Co-authored-by: Sabrina Dubroca <[email protected]> Co-authored-by: Sebastian Baar <[email protected]> Co-authored-by: sebastien mainand <[email protected]> Co-authored-by: smehner1 <[email protected]> Co-authored-by: speakinghedge <[email protected]> Co-authored-by: Steven Van Acker <[email protected]> Co-authored-by: Thomas Faivre <[email protected]> Co-authored-by: Tran Tien Dat <[email protected]> Co-authored-by: Wael Mahlous <[email protected]> Co-authored-by: waeva <[email protected]> Co-authored-by: Alexander Aring <[email protected]> Co-authored-by: Anmol Sarma <[email protected]> Co-authored-by: antoine.torre <[email protected]> Co-authored-by: Antoine Vacher <[email protected]> Co-authored-by: Arnaud Ebalard <[email protected]> Co-authored-by: atlowl <[email protected]> Co-authored-by: Brian Bienvenu <[email protected]> Co-authored-by: Chris Packham <[email protected]> Co-authored-by: CQ <[email protected]> Co-authored-by: Daniel Collins <[email protected]> Co-authored-by: Federico Maggi <[email protected]> Co-authored-by: Florian Maury <[email protected]> Co-authored-by: _Frky 
<[email protected]> Co-authored-by: g-mahieux <[email protected]> Co-authored-by: gpotter2 <[email protected]> Co-authored-by: Guillaume Valadon <[email protected]> Co-authored-by: Hao Zheng <[email protected]> Co-authored-by: Haresh Khandelwal <[email protected]> Co-authored-by: Harri Hämäläinen <[email protected]> Co-authored-by: hecke <[email protected]> Co-authored-by: Jan Romann <[email protected]> Co-authored-by: Jan Sebechlebsky <[email protected]> Co-authored-by: jdiog0 <[email protected]> Co-authored-by: jockque <[email protected]> Co-authored-by: Julien Bedel <[email protected]> Co-authored-by: Keith Scott <[email protected]> Co-authored-by: Kfir Gollan <[email protected]> Co-authored-by: Lars Munch <[email protected]> Co-authored-by: ldp77 <[email protected]> Co-authored-by: Leonard Crestez <[email protected]> Co-authored-by: Marcel Patzlaff <[email protected]> Co-authored-by: Martijn Thé <[email protected]> Co-authored-by: Martine Lenders <[email protected]> Co-authored-by: Michael Farrell <[email protected]> Co-authored-by: Michał Mirosław <[email protected]> Co-authored-by: mkaliszan <[email protected]> Co-authored-by: mtury <[email protected]> Co-authored-by: Neale Ranns <[email protected]> Co-authored-by: Octavian Toader <[email protected]> Co-authored-by: Peter Eisenlohr <[email protected]> Co-authored-by: Phil <[email protected]> Co-authored-by: Pierre Lalet <[email protected]> Co-authored-by: Pierre Lorinquer <[email protected]> Co-authored-by: piersoh <[email protected]> Co-authored-by: pvinci <[email protected]> Co-authored-by: Rahul Jadhav <[email protected]> Co-authored-by: Robin Jarry <[email protected]> Co-authored-by: romain-perez <[email protected]> Co-authored-by: rperez <rperez@debian> Co-authored-by: Sabrina Dubroca <[email protected]> Co-authored-by: Sebastian Baar <[email protected]> Co-authored-by: sebastien mainand <[email protected]> Co-authored-by: smehner1 <[email protected]> Co-authored-by: Steven Van Acker <[email protected]> Co-authored-by: Thomas Faivre <[email protected]> Co-authored-by: Tran Tien Dat <[email protected]> Co-authored-by: Wael Mahlous <[email protected]> Co-authored-by: waeva <[email protected]>
https://github.com/secdev/scapy.git
def recap(self, nc):
    # type: (int) -> None
    assert nc >= 0
    t = self._dynamic_table_cap_size > nc
    self._dynamic_table_cap_size = nc
    if t:
        # The RFC is not clear about whether this resize should happen;
        # we do it anyway
        self.resize(nc)
33
http2.py
Python
scapy/contrib/http2.py
08b1f9d67c8e716fd44036a027bdc90dcb9fcfdf
scapy
2
215,674
202
20
160
1,811
55
0
479
2,383
test_modify
Adding the ability to add, delete, purge, and modify Salt scheduler jobs when the Salt minion is not running.
https://github.com/saltstack/salt.git
def test_modify(sock_dir, job1, schedule_config_file): current_job1 = { "function": "salt", "seconds": "3600", "maxrunning": 1, "name": "job1", "enabled": True, "jid_include": True, } new_job1 = { "function": "salt", "seconds": "60", "maxrunning": 1, "name": "job1", "enabled": True, "jid_include": True, } comm1 = "Modified job: job1 in schedule." changes1 = { "job1": { "new": salt.utils.odict.OrderedDict(new_job1), "old": salt.utils.odict.OrderedDict(current_job1), } } new_job4 = { "function": "test.version", "seconds": "3600", "maxrunning": 1, "name": "job1", "enabled": True, "jid_include": True, } changes4 = { "job1": { "new": salt.utils.odict.OrderedDict(new_job4), "old": salt.utils.odict.OrderedDict(current_job1), } } expected1 = {"comment": comm1, "changes": changes1, "result": True} comm2 = ( 'Error: Unable to use "seconds", "minutes", "hours", ' 'or "days" with "when" option.' ) expected2 = {"comment": comm2, "changes": {}, "result": False} comm3 = 'Unable to use "when" and "cron" options together. Ignoring.' expected3 = {"comment": comm3, "changes": {}, "result": False} comm4 = "Job: job1 would be modified in schedule." expected4 = {"comment": comm4, "changes": changes4, "result": True} comm5 = "Job job2 does not exist in schedule." expected5 = {"comment": comm5, "changes": {}, "result": False} with patch.dict( schedule.__opts__, {"schedule": {"job1": current_job1}, "sock_dir": sock_dir} ): mock = MagicMock(return_value=True) with patch.dict(schedule.__salt__, {"event.fire": mock}): _ret_value = {"complete": True, "schedule": {"job1": current_job1}} with patch.object(SaltEvent, "get_event", return_value=_ret_value): ret = schedule.modify("job1", seconds="60") assert "job1" in ret["changes"] assert "new" in ret["changes"]["job1"] assert "old" in ret["changes"]["job1"] for key in [ "maxrunning", "function", "seconds", "jid_include", "name", "enabled", ]: assert ( ret["changes"]["job1"]["new"][key] == expected1["changes"]["job1"]["new"][key] ) assert ( ret["changes"]["job1"]["old"][key] == expected1["changes"]["job1"]["old"][key] ) assert ret["comment"] == expected1["comment"] assert ret["result"] == expected1["result"] _ret_value = {"complete": True, "schedule": {"job1": current_job1}} with patch.object(SaltEvent, "get_event", return_value=_ret_value): ret = schedule.modify( "job1", function="test.ping", seconds=3600, when="2400" ) assert ret == expected2 _ret_value = {"complete": True, "schedule": {"job1": current_job1}} with patch.object(SaltEvent, "get_event", return_value=_ret_value): ret = schedule.modify( "job1", function="test.ping", when="2400", cron="2" ) assert ret == expected3 _ret_value = {"complete": True, "schedule": {"job1": current_job1}} with patch.object(SaltEvent, "get_event", return_value=_ret_value): ret = schedule.modify("job1", function="test.version", test=True) assert "job1" in ret["changes"] assert "new" in ret["changes"]["job1"] assert "old" in ret["changes"]["job1"] for key in [ "maxrunning", "function", "jid_include", "name", "enabled", ]: assert ( ret["changes"]["job1"]["new"][key] == expected4["changes"]["job1"]["new"][key] ) assert ( ret["changes"]["job1"]["old"][key] == expected4["changes"]["job1"]["old"][key] ) assert ret["comment"] == expected4["comment"] assert ret["result"] == expected4["result"] _ret_value = {"complete": True, "schedule": {}} with patch.object(SaltEvent, "get_event", return_value=_ret_value): ret = schedule.modify("job2", function="test.version", test=True) assert ret == expected5 _schedule_data = {"job1": job1} comm = "Modified 
job: job1 in schedule." changes = {"job1": "removed"} changes = { "job1": { "new": OrderedDict( [ ("function", "test.version"), ("maxrunning", 1), ("name", "job1"), ("enabled", True), ("jid_include", True), ] ), "old": OrderedDict( [ ("function", "test.ping"), ("maxrunning", 1), ("name", "job1"), ("jid_include", True), ("enabled", True), ] ), } } with patch.dict( schedule.__opts__, {"schedule": {"job1": "salt"}, "sock_dir": sock_dir} ): with patch("salt.utils.files.fopen", mock_open(read_data="")) as fopen_mock: with patch.object( schedule, "list_", MagicMock(return_value=_schedule_data) ): assert schedule.modify( "job1", function="test.version", offline="True" ) == {"comment": comm, "changes": changes, "result": True} _call = call( b"schedule:\n job1: {enabled: true, function: test.version, jid_include: true, maxrunning: 1,\n name: job1}\n" ) write_calls = fopen_mock.filehandles[schedule_config_file][ 0 ].write._mock_mock_calls assert _call in write_calls # 'is_enabled' function tests: 1
1,004
test_schedule.py
Python
tests/pytests/unit/modules/test_schedule.py
62908a04f5166e0a26f69ff1a5296a19bad351ad
salt
3
125,720
79
15
42
543
29
0
172
425
test_component_activities_hook
[dashboard] Update cluster_activities endpoint to use pydantic. (#26609) Update cluster_activities endpoint to use pydantic so we have better data validation. Make timestamp a required field. Add pydantic to ray[default] requirements
https://github.com/ray-project/ray.git
def test_component_activities_hook(set_ray_cluster_activity_hook, call_ray_start): external_hook = set_ray_cluster_activity_hook response = requests.get("http://127.0.0.1:8265/api/component_activities") response.raise_for_status() # Validate schema of response data = response.json() schema_path = os.path.join( os.path.dirname(dashboard.__file__), "modules/snapshot/component_activities_schema.json", ) pprint.pprint(data) jsonschema.validate(instance=data, schema=json.load(open(schema_path))) # Validate driver response can be cast to RayActivityResponse object # and that there are no active drivers. driver_ray_activity_response = RayActivityResponse(**data["driver"]) assert driver_ray_activity_response.is_active == "INACTIVE" assert driver_ray_activity_response.reason is None # Validate external component response can be cast to RayActivityResponse object if external_hook[-1] == "5": external_activity_response = RayActivityResponse(**data["test_component5"]) assert external_activity_response.is_active == "ACTIVE" assert external_activity_response.reason == "Counter: 1" elif external_hook[-1] == "4": external_activity_response = RayActivityResponse(**data["external_component"]) assert external_activity_response.is_active == "ERROR" assert ( "'Error in external cluster activity hook'" in external_activity_response.reason ) elif external_hook[-1] == "3": external_activity_response = RayActivityResponse(**data["external_component"]) assert external_activity_response.is_active == "ERROR" elif external_hook[-1] == "2": external_activity_response = RayActivityResponse(**data["test_component2"]) assert external_activity_response.is_active == "ERROR" elif external_hook[-1] == "1": external_activity_response = RayActivityResponse(**data["test_component1"]) assert external_activity_response.is_active == "ACTIVE" assert external_activity_response.reason == "Counter: 1" # Call endpoint again to validate different response response = requests.get("http://127.0.0.1:8265/api/component_activities") response.raise_for_status() data = response.json() jsonschema.validate(instance=data, schema=json.load(open(schema_path))) external_activity_response = RayActivityResponse(**data["test_component1"]) assert external_activity_response.is_active == "ACTIVE" assert external_activity_response.reason == "Counter: 2"
308
test_snapshot.py
Python
dashboard/modules/snapshot/tests/test_snapshot.py
e8222ff600f79cc7c5cc28f43a951215c4b5460c
ray
6
86,843
11
9
4
48
8
0
11
43
drop
feat(replays): Remove usage of the attachment cache (#39987) closes https://github.com/getsentry/replay-backend/issues/154
https://github.com/getsentry/sentry.git
def drop(self):
    part = RecordingSegmentPart(self.prefix)
    for i in range(self.num_parts):
        del part[i]
29
cache.py
Python
src/sentry/replays/cache.py
5113b00cb01ae85a9b0578bed8f9eb7a1a66967a
sentry
2
2,529
20
11
23
115
14
0
22
142
_object2proto
Changes for publishing dataset and passing actions to enclave
https://github.com/OpenMined/PySyft.git
def _object2proto(self) -> PublishDatasetMessage_PB:
    return PublishDatasetMessage_PB(
        msg_id=serialize(self.id),
        address=serialize(self.address),
        reply_to=serialize(self.reply_to),
        dataset_id=self.dataset_id,
        deployment_id=self.deployment_id,
        host_or_ip=self.host_or_ip,
        protocol=self.protocol,
        port=self.port,
        client=serialize(self.client),
    )
78
oblv_messages.py
Python
packages/syft/src/syft/core/node/common/node_service/oblv/oblv_messages.py
54c0a2f6738090252dc2b2863eb3c13b1bcb9e6a
PySyft
1
126,133
2
6
16
13
2
0
2
5
test_receive_event_by_http
[workflow] http_event_provider and accompanied listener (#26010) ### Why are these changes needed? This PR enhances workflow functionality to receive external events from a Serve based HTTP endpoint. A workflow can then consume events asynchronously as they arrive. ### Design Logic A `workflow.wait_for_event` node subscribes to the endpoint instantiated by a Ray Serve deployment of class `http_event_provider.HTTPEventProvider`. The subscription is made through a helper class `http_event_provider.HTTPListener`. `HTTPListener` implements the methods of `EventListener` to poll from and confirm event checkpointing to `HTTPEventProvider`, before `HTTPEventProvider`acknowledges success or error to the event submitter. ### Architecture Improvement The logic of this enhancement conforms with existing workflow runtime design.
https://github.com/ray-project/ray.git
def test_receive_event_by_http(workflow_start_regular_shared_serve):
92
test_http_events.py
Python
python/ray/workflow/tests/test_http_events.py
659d25a3a9c4794db9dbe8f428ec587470b261b0
ray
4
288,138
19
8
7
72
10
0
20
63
_async_ble_device_disconnected
Add ESPHome BleakClient (#78911) Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
def _async_ble_device_disconnected(self) -> None:
    _LOGGER.debug("%s: BLE device disconnected", self._source)
    self._is_connected = False
    self.services = BleakGATTServiceCollection()  # type: ignore[no-untyped-call]
    self._async_call_bleak_disconnected_callback()
    self._unsubscribe_connection_state()
40
client.py
Python
homeassistant/components/esphome/bluetooth/client.py
7042d6d35be54865b1252c0b28a50cce1a92eabc
core
1
337,299
58
15
28
236
20
0
82
354
recursively_apply
Convert documentation to the new front (#271) * Main conversion * Doc styling * Style * New front deploy * Fixes * Fixes * Fix new docstrings * Style
https://github.com/huggingface/accelerate.git
def recursively_apply(func, data, *args, test_type=is_torch_tensor, error_on_other_type=False, **kwargs):
    if isinstance(data, (tuple, list)):
        return honor_type(
            data,
            (
                recursively_apply(
                    func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
                )
                for o in data
            ),
        )
    elif isinstance(data, Mapping):
        return type(data)(
            {
                k: recursively_apply(
                    func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
                )
                for k, v in data.items()
            }
        )
    elif test_type(data):
        return func(data, *args, **kwargs)
    elif error_on_other_type:
        raise TypeError(
            f"Can't apply {func.__name__} on object of type {type(data)}, only of nested list/tuple/dicts of objects "
            f"that satisfy {test_type.__name__}."
        )
    return data
146
utils.py
Python
src/accelerate/utils.py
fb5ed62c102c0323486b89805e1888495de3db15
accelerate
7
90,708
7
9
24
26
3
0
7
26
mixed_payload
feat(metrics): functionality for the indexer-last-seen-updater (#34865) * update meta structure to support last_seen updater better * use metrics to record last_seen_updater info * actually produce headers in indexer output message
https://github.com/getsentry/sentry.git
def mixed_payload(): return bytes( , encoding="utf-8", )
14
test_last_seen_updater.py
Python
tests/sentry/sentry_metrics/test_last_seen_updater.py
261437e3bbb102732344817f67762142c0d6977e
sentry
1
322,882
25
13
9
81
10
0
26
113
available_labels
Add NLP model interpretation (#1752) * upload NLP interpretation * fix problems and relocate project * remove abandoned picture * remove abandoned picture * fix dead link in README * fix dead link in README * fix code style problems * fix CR round 1 * remove .gitkeep files * fix code style * fix file encoding problem * fix code style * delete duplicated files due to directory rebuild * fix CR round 2 * fix code style * fix ernie tokenizer * fix code style * fix problem from CR round 1 * fix bugs * fix README * remove duplicated files * deal with diff of old and new tokenizer results * fix CR round 4 * fix code style * add missing dependence * fix broken import path * move some data file to cloud * MRC upper case to lower case Co-authored-by: Zeyu Chen <[email protected]> Co-authored-by: binlinquge <xxx> Co-authored-by: Guo Sheng <[email protected]>
https://github.com/PaddlePaddle/PaddleNLP.git
def available_labels(self):
    try:
        assert self.mode == "classification"
    except AssertionError:
        raise NotImplementedError(
            'Not supported for regression explanations.')
    else:
        ans = self.top_labels if self.top_labels else self.local_exp.keys()
        return list(ans)
46
explanation.py
Python
examples/model_interpretation/task/senti/LIME/explanation.py
93cae49c0c572b5c1ac972759140fbe924b0374d
PaddleNLP
3
181,596
62
11
23
232
20
0
67
287
test_driver_5
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_driver_5():
    # Catch FutureWarning https://github.com/scikit-learn/scikit-learn/issues/11785
    if (np.__version__ >= LooseVersion("1.15.0")
            and sklearn.__version__ <= LooseVersion("0.20.0")):
        raise nose.SkipTest("Warning raised by scikit-learn")
    args_list = [
        'tests/tests.csv',
        '-is', ',',
        '-target', 'class',
        '-o', 'test_export.py',
        '-g', '1',
        '-p', '2',
        '-cv', '3',
        '-s', '42',
        '-config', 'TPOT light',
        '-v', '0'
    ]
    args = _get_arg_parser().parse_args(args_list)
    with captured_output() as (out, err):
        tpot_driver(args)
        ret_stdout = out.getvalue()

    assert ret_stdout == ""
    assert path.isfile("test_export.py")
    remove("test_export.py")  # clean up exported file
121
driver_tests.py
Python
tests/driver_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
3
245,559
42
13
20
278
26
0
57
245
get_whs_and_shapes
[Fix] replace mmcv's function and modules imported with mmengine's (#8594) * use mmengine's load_state_dict and load_checkpoint * from mmengine import dump * from mmengine import FileClient dump list_from_file * remove redundant registry * update * update * update * replace _load_checkpoint with CheckpointLoad.load_checkpoint * changes according to mmcv #2216 * changes due to mmengine #447 * changes due mmengine #447 and mmcv #2217 * changes due mmengine #447 and mmcv #2217 * update * update * update
https://github.com/open-mmlab/mmdetection.git
def get_whs_and_shapes(self):
    self.logger.info('Collecting bboxes from annotation...')
    bbox_whs = []
    img_shapes = []
    prog_bar = ProgressBar(len(self.dataset))
    for idx in range(len(self.dataset)):
        ann = self.dataset.get_ann_info(idx)
        data_info = self.dataset.data_infos[idx]
        img_shape = np.array([data_info['width'], data_info['height']])
        gt_bboxes = ann['bboxes']
        for bbox in gt_bboxes:
            wh = bbox[2:4] - bbox[0:2]
            img_shapes.append(img_shape)
            bbox_whs.append(wh)
        prog_bar.update()
    print('\n')
    bbox_whs = np.array(bbox_whs)
    img_shapes = np.array(img_shapes)
    self.logger.info(f'Collected {bbox_whs.shape[0]} bboxes.')
    return bbox_whs, img_shapes
160
optimize_anchors.py
Python
tools/analysis_tools/optimize_anchors.py
d0695e68654ca242be54e655491aef8c959ac345
mmdetection
3
77,101
40
13
31
176
25
0
49
196
test_add_post_duplicate_choose_permission
Add duplicate detection to multiple image upload view Add utility function to find an image's potential duplicates Add logic to detect duplicates on multiple images upload view Add template shown when a user is prompted to confirm a duplicate upload Add client-side logic to confirm a duplicate upload Add/update styles Add tests for duplicate image uploads Index Image file_hash field Ensure that a user can choose an image from duplicates returned by find_image_duplicates Use CSS classes instead of HTML elements to hide edit form on duplicate upload Add ImagesPermissionPolicy helper to retrieve the permission policy dynamically This allows test cases that override the base image model to pick up the corresponding permission policy, should they need it. Remove usage of sibling selector Use wagtail image templatetag to generate image Renamed ImagesPermissionPolicy to ImagesPermissionPolicyGetter Fail loudly when setting permission policy and a wromg image model is provided Add decorator to disconnect a signal's receiver during a test execution and use it in get_image_model tests Improve warning message on duplicate upload in multiple upload view Show matching form when confirming a duplicate upload
https://github.com/wagtail/wagtail.git
def test_add_post_duplicate_choose_permission(self):
    # Create group with access to admin and add permission.
    bakers_group = Group.objects.create(name="Bakers")
    access_admin_perm = Permission.objects.get(
        content_type__app_label="wagtailadmin", codename="access_admin"
    )
    bakers_group.permissions.add(access_admin_perm)

    # Create the "Bakery" Collection and grant "add" permission to the Bakers group.
    root = Collection.objects.get(id=get_root_collection_id())
    bakery_collection = root.add_child(instance=Collection(name="Bakery"))
    GroupCollectionPermission.objects.create(
        group=bakers_group,
        collection=bakery_collection,
        permission=Permission.objects.get(
            content_type__app_label="wagtailimages", codename="add_image"
        ),
    )
221
test_admin_views.py
Python
wagtail/images/tests/test_admin_views.py
c136f461bc052cef362991458e1bd1fca37a3da9
wagtail
1
305,067
24
14
30
138
13
0
36
118
extra_state_attributes
Awair local use config entry name + add measurement state class (#77383)
https://github.com/home-assistant/core.git
def extra_state_attributes(self) -> dict:
    sensor_type = self.entity_description.key
    attrs: dict = {}
    if not self._air_data:
        return attrs
    if sensor_type in self._air_data.indices:
        attrs["awair_index"] = abs(self._air_data.indices[sensor_type])
    elif sensor_type in DUST_ALIASES and API_DUST in self._air_data.indices:
        attrs["awair_index"] = abs(self._air_data.indices.dust)

    return attrs
84
sensor.py
Python
homeassistant/components/awair/sensor.py
79b5147b46a16b65404c74df5dd9a10ce16ea216
core
5
186,353
12
8
3
45
4
0
12
21
parse_includes
Various clean-ups in certbot-apache. Use f-strings. (#9132) * Various clean-ups in certbot-apache. Use f-strings. * Smaller tweaks
https://github.com/certbot/certbot.git
def parse_includes(apachectl):
    inc_cmd = [apachectl, "-t", "-D", "DUMP_INCLUDES"]
    return parse_from_subprocess(inc_cmd, r"\(.*\) (.*)")
25
apache_util.py
Python
certbot-apache/certbot_apache/_internal/apache_util.py
eeca208c8f57304590ac1af80b496e61021aaa45
certbot
1
195,097
20
10
5
50
7
0
23
66
_get_batch_context
Added director agent and safety experiment commands. (#4602) * Added director agent and safety. * ran autoformat.sh
https://github.com/facebookresearch/ParlAI.git
def _get_batch_context(self, batch):
    if 'full_text_vec' not in batch:
        logging.warn('Batch does not have full text vec, resorting to text vec')
        return batch.text_vec
    return batch.full_text_vec
28
director_agent.py
Python
projects/director/director_agent.py
2ef5586ed0d644abe18cd3ff45ef9fa01981e87c
ParlAI
2
186,742
6
7
7
25
4
0
6
20
must_staple
Must staple: check for OCSP support (#9226) * Must staple: check for OCSP support * Expand error message * s/Must Staple/Must-Staple * Broaden the term webserver * Improve error message
https://github.com/certbot/certbot.git
def must_staple(self) -> bool:
    return self.namespace.must_staple
14
configuration.py
Python
certbot/certbot/configuration.py
a513b57e5e3667365f77b040e070cbec05212174
certbot
1
139,475
7
6
7
27
5
0
7
21
extra_learn_fetches_fn
[RLlib] Introduce new policy base classes. (#24742)
https://github.com/ray-project/ray.git
def extra_learn_fetches_fn(self) -> Dict[str, TensorType]:
    return {}
16
dynamic_tf_policy_v2.py
Python
rllib/policy/dynamic_tf_policy_v2.py
bc3a1d35cf6e9a5fd7eef908a8e76aefb80ce6a9
ray
1
259,850
30
11
7
121
19
0
32
68
test_dataframe_support
FIX DecisionBoundaryPlot should not raise spurious warning (#23318)
https://github.com/scikit-learn/scikit-learn.git
def test_dataframe_support():
    pd = pytest.importorskip("pandas")
    df = pd.DataFrame(X, columns=["col_x", "col_y"])
    estimator = LogisticRegression().fit(df, y)

    with warnings.catch_warnings():
        # no warnings linked to feature names validation should be raised
        warnings.simplefilter("error", UserWarning)
        DecisionBoundaryDisplay.from_estimator(estimator, df, response_method="predict")
68
test_boundary_decision_display.py
Python
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
b0b8a39d8bb80611398e4c57895420d5cb1dfe09
scikit-learn
1
60,423
41
13
6
90
12
0
45
80
CheckForCopyright
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def CheckForCopyright(filename, lines, error):
    # We'll check up to line 10. Don't forget there's a
    # dummy line at the front.
    for line in xrange(1, min(len(lines), 11)):
        if _RE_COPYRIGHT.search(lines[line], re.I):
            error(filename, 0, 'legal/copyright', 5,
                  'Copyright message found. '
                  'You should not include a copyright line.')
56
cpp_lint.py
Python
code/deep/BJMMD/caffe/scripts/cpp_lint.py
cc4d0564756ca067516f71718a3d135996525909
transferlearning
3
19,899
13
8
10
47
5
0
14
53
installed_as_egg
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def installed_as_egg(self) -> bool:
    location = self.location
    if not location:
        return False
    return location.endswith(".egg")
26
base.py
Python
pipenv/patched/notpip/_internal/metadata/base.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
5,874
70
12
32
326
37
0
90
257
test_visualization_compare_performance_output_saved
Use tempfile to automatically garbage collect data and modeling artifacts in ludwig integration tests. (#1642) * Use tmpdir to automatically garbage collect data and modeling artifacts in ludwig integration tests.
https://github.com/ludwig-ai/ludwig.git
def test_visualization_compare_performance_output_saved(csv_filename):
    input_features = [text_feature(encoder="parallel_cnn")]
    output_features = [category_feature()]

    # Generate test data
    rel_path = generate_data(input_features, output_features, csv_filename)
    input_features[0]["encoder"] = "parallel_cnn"
    exp_dir_name = run_experiment_with_visualization(input_features, output_features, dataset=rel_path)
    vis_output_pattern_pdf = os.path.join(exp_dir_name, "*.pdf")
    vis_output_pattern_png = os.path.join(exp_dir_name, "*.png")
    test_stats = os.path.join(exp_dir_name, TEST_STATISTICS_FILE_NAME)
    test_cmd_pdf = [
        "python",
        "-m",
        "ludwig.visualize",
        "--visualization",
        "compare_performance",
        "--test_statistics",
        test_stats,
        test_stats,
        "-m",
        "Model1",
        "Model2",
        "-od",
        exp_dir_name,
    ]
    test_cmd_png = test_cmd_pdf.copy() + ["-ff", "png"]

    commands = [test_cmd_pdf, test_cmd_png]
    vis_patterns = [vis_output_pattern_pdf, vis_output_pattern_png]

    for command, viz_pattern in zip(commands, vis_patterns):
        result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        figure_cnt = glob.glob(viz_pattern)

        assert 0 == result.returncode
        assert 1 == len(figure_cnt)
200
test_visualization.py
Python
tests/integration_tests/test_visualization.py
4fb8f63181f5153b4f6778c6ef8dad61022c4f3f
ludwig
2
269,142
166
15
44
462
47
0
285
553
wrap_layer_functions
Support Keras saving/loading for ShardedVariables with arbitrary partitions. PiperOrigin-RevId: 439837516
https://github.com/keras-team/keras.git
def wrap_layer_functions(layer, serialization_cache): # Since Sequential models may be modified in place using model.add() or # model.pop(), don't use saved functions. if (isinstance(layer, keras_load.RevivedLayer) and not isinstance(layer, sequential_lib.Sequential)): return { fn_name: getattr(layer.keras_api, fn_name, None) for fn_name in serialized_attributes.LayerAttributes.all_functions } # Reset the losses of the layer and its children. The call function in each # child layer is replaced with tf.functions. original_fns = _replace_child_layer_functions(layer, serialization_cache) original_losses = _reset_layer_losses(layer) # Wrap all the layer call and activity regularizer functions. # Use LayerCallCollection to ensure that all layer call functions (__call__, # call with losses) are traced with the same inputs. call_collection = LayerCallCollection(layer) call_fn_with_losses = call_collection.add_function( _wrap_call_and_conditional_losses(layer), '{}_layer_call_and_return_conditional_losses'.format(layer.name), # If any of this layer's child layers use the training arg, the traced # call functions of this layer will have a training keyword argument. If # the original layer does not expect the training arg, then it will have # to be removed (by setting `match_layer_training_arg`). match_layer_training_arg=True) call_fn = call_collection.add_function( _extract_outputs_from_fn(layer, call_fn_with_losses), '{}_layer_call_fn'.format(layer.name), # Since `call_fn` wraps call_fn_with_losses and not the original call # function, `match_layer_training_arg` should be set to False. match_layer_training_arg=False) fns = { 'call_and_return_conditional_losses': call_fn_with_losses, '__call__': call_fn } if layer._activity_regularizer is not None: # pylint: disable=protected-access fns['activity_regularizer_fn'] = _wrap_activity_regularizer(layer) fns['call_and_return_all_conditional_losses'] = ( call_collection.add_function( _append_activity_regularizer_loss(layer, call_fn_with_losses, fns['activity_regularizer_fn']), '{}_layer_call_and_return_all_conditional_losses'.format( layer.name), match_layer_training_arg=False)) else: fns['activity_regularizer_fn'] = None fns['call_and_return_all_conditional_losses'] = call_fn_with_losses # Manually trigger traces before restoring the overwritten functions. The # functions are traced within the layer call context to ensure that layer # functions (e.g. add_loss) behave as though running in graph mode. with tracing_scope(): call_collection.trace_with_input_signature() with base_layer_utils.call_context().enter( layer, inputs=None, build_graph=True, training=None, saving=True): for fn in fns.values(): if fn is not None and not isinstance(fn, LayerCall): fn.get_concrete_function() # Restore overwritten functions and losses _restore_child_layer_functions(original_fns) _restore_layer_losses(original_losses) return fns
277
save_impl.py
Python
keras/saving/saved_model/save_impl.py
e61cbc52fd3b0170769c120e9b8dabc8c4205322
keras
8
178,270
41
16
29
266
28
0
63
382
url
fix: DEV-3911: Move persistent storages to OS (#3377) * fix: DEV-3911: Move persistent storages to OS * Fix * Add deps * Back header * Move DownloadStorageData handler * Update all urls json * Fix import * add nginx config * Fix GSC storage Co-authored-by: Sergei Ivashchenko <[email protected]> Co-authored-by: Sergey Zhuk <[email protected]>
https://github.com/heartexlabs/label-studio.git
def url(self, name):
    name = self._normalize_name(clean_name(name))
    blob = self.bucket.blob(name)
    blob_params = self.get_object_parameters(name)
    no_signed_url = (
        blob_params.get('acl', self.default_acl) == 'publicRead'
        or not self.querystring_auth)

    if not self.custom_endpoint and no_signed_url:
        return blob.public_url
    elif no_signed_url:
        out = '{storage_base_url}/{quoted_name}'.format(
            storage_base_url=self.custom_endpoint,
            quoted_name=_quote(name, safe=b"/~"),
        )
        return out
    elif not self.custom_endpoint:
        out2 = blob.generate_signed_url(
            expiration=self.expiration,
            version="v4",
            **self._get_signing_kwargs()
        )
        return out2
    else:
        out3 = blob.generate_signed_url(
            bucket_bound_hostname=self.custom_endpoint,
            expiration=self.expiration,
            version="v4",
            **self._get_signing_kwargs()
        )
        return out3
164
storage.py
Python
label_studio/core/storage.py
92314e4a9c431c407533e4a064481acf3c5983ab
label-studio
6
250,580
13
10
6
55
5
0
14
64
intercept
Flow.intercept: use an Event instead of the reply system This is patch 3/4 of the reply-ectomy.
https://github.com/mitmproxy/mitmproxy.git
def intercept(self):
    if self.intercepted:
        return
    self.intercepted = True
    if self._resume_event is not None:
        self._resume_event.clear()
32
flow.py
Python
mitmproxy/flow.py
ede269fce40ec4000a4717d5f5aec7835d9931c2
mitmproxy
3
258,441
17
12
17
89
11
0
17
71
get_openapi_specs
bug: fix the docs rest api reference url (#3775) * bug: fix the docs rest api reference url * revert openapi json changes * remove last line on json files * Add explanation about `servers` and remove `servers` parameter from FastAPI * generate openapi schema without empty end line
https://github.com/deepset-ai/haystack.git
def get_openapi_specs() -> dict:
    app = get_app()
    return get_openapi(
        title=app.title,
        version=app.version,
        openapi_version=app.openapi_version,
        description=app.description,
        routes=app.routes,
        servers=[{"url": "http://localhost:8000"}],
    )
56
utils.py
Python
rest_api/rest_api/utils.py
86ade4817eda3142d2ddef65a0b1e29ffee770e3
haystack
1
212,208
23
12
8
84
14
0
26
46
_clone
Discover unstable defaults in `HasProps.__init__()` (#11959) * Discover unstable defaults in HasProps.__init__() * Make HasProps.__getattr__() fail properly * More sensible implementation of HasProps._clone() * Make InstanceDefault a generic class * Fix recursive model definition in tests * Fix default override in test_document * Add unit tests
https://github.com/bokeh/bokeh.git
def _clone(self) -> HasProps:
    attrs = self.properties_with_values(include_defaults=False, include_undefined=True)
    return self.__class__(**{key: val for key, val in attrs.items() if val is not Undefined})


KindRef = Any  # TODO
49
has_props.py
Python
bokeh/core/has_props.py
b23a3b77447ede916b31756fca997cbb1b806de7
bokeh
3
95,638
26
10
10
117
16
0
29
111
test_unsupported_null_response
ref(webhooks): Handle unexpected webhook responses (#31143)
https://github.com/getsentry/sentry.git
def test_unsupported_null_response(self):
    responses.add(
        responses.POST, "http://example.com", body="null", content_type="application/json"
    )
    try:
        self.plugin.notify(self.notification)
    except Exception as exc:
        assert False, f"'self.plugin.notify' raised an exception {exc}"

    assert len(responses.calls) == 1
    assert responses.calls[0].response.status_code == 200
68
test_plugin.py
Python
tests/sentry/plugins/sentry_webhooks/test_plugin.py
03c688897205e936302f528dc72fc391b3ef5904
sentry
2
208,774
22
13
7
128
9
0
40
73
find_entry_points
Added additional entrypoint script. Added a third entrypoint to use python's minor version as well. This can help when testing out differences of python versions. One could easily open "ipython3.10" and test it's differences with "ipython3.8".
https://github.com/ipython/ipython.git
def find_entry_points():
    ep = [
        'ipython%s = IPython:start_ipython',
    ]
    major_suffix = str(sys.version_info[0])
    minor_suffix = ".".join([str(sys.version_info[0]), str(sys.version_info[1])])
    return [e % '' for e in ep] + [e % major_suffix for e in ep] + [e % minor_suffix for e in ep]
80
setupbase.py
Python
setupbase.py
1db65d02e89f31c28c221197a2ed04f3ade3b195
ipython
4
176,920
53
12
20
276
27
0
91
179
_hits_numpy
Make HITS numpy and scipy private functions (#5771) * Make HITS numpy and scipy private functions * fix examples with correct imports * remove functions from TOC
https://github.com/networkx/networkx.git
def _hits_numpy(G, normalized=True):
    import numpy as np

    if len(G) == 0:
        return {}, {}
    adj_ary = nx.to_numpy_array(G)
    # Hub matrix
    H = adj_ary @ adj_ary.T
    e, ev = np.linalg.eig(H)
    h = ev[:, np.argmax(e)]  # eigenvector corresponding to the maximum eigenvalue
    # Authority matrix
    A = adj_ary.T @ adj_ary
    e, ev = np.linalg.eig(A)
    a = ev[:, np.argmax(e)]  # eigenvector corresponding to the maximum eigenvalue
    if normalized:
        h /= h.sum()
        a /= a.sum()
    else:
        h /= h.max()
        a /= a.max()
    hubs = dict(zip(G, map(float, h)))
    authorities = dict(zip(G, map(float, a)))
    return hubs, authorities
173
hits_alg.py
Python
networkx/algorithms/link_analysis/hits_alg.py
e5f1edb82a379ceb6afcf421fa5f6b4cb43cfbaf
networkx
3
297,831
24
8
34
101
16
1
24
95
async_start
String formatting and max line length - Part 1 (#84390) Co-authored-by: Erik Montnemery <[email protected]>
https://github.com/home-assistant/core.git
async def async_start(self) -> None:
    _LOGGER.info("Starting Home Assistant")
    setattr(self.loop, "_thread_ident", threading.get_ident())

    self.state = CoreState.starting
    self.bus.async_fire(EVENT_CORE_CONFIG_UPDATE)
    self.bus.async_fire(EVENT_HOMEASSISTANT_START)

    try:
        # Only block for EVENT_HOMEASSISTANT_START listener
        self.async_stop_track_tasks()
async def async_start(self) -> None:
    """Finalize startup from inside the event loop.

    This method is a coroutine.
    """
    _LOGGER.info("Starting Home Assistant")
    setattr(self.loop, "_thread_ident", threading.get_ident())

    self.state = CoreState.starting
    self.bus.async_fire(EVENT_CORE_CONFIG_UPDATE)
    self.bus.async_fire(EVENT_HOMEASSISTANT_START)

    try:
        # Only block for EVENT_HOMEASSISTANT_START listener
        self.async_stop_track_tasks()
150
core.py
Python
homeassistant/core.py
b0cee0bc46cbd7efe0e6421da18d91595c7a25ad
core
3
196,005
211
17
60
658
37
0
472
1,095
multiset_derangements
make remap canonical; more inline comments In addition, clean-up of multiset_derangement's rv is done to encourage proper use of the function: if you want to see the output you have to copy it since the derangements are generated in place. If you don't want this or find it a hassle, then use generate_derangements.
https://github.com/sympy/sympy.git
def multiset_derangements(s): from sympy.core.sorting import ordered # create multiset dictionary of hashable elements or else # remap elements to integers try: ms = multiset(s) except TypeError: # give each element a canonical integer value key = dict(enumerate(ordered(uniq(s)))) h = [] for si in s: for k in key: if key[k] == si: h.append(k) break for i in multiset_derangements(h): yield [key[j] for j in i] return mx = max(ms.values()) # max repetition of any element n = len(s) # the number of elements ## special cases # 1) one element has more than half the total cardinality of s: no # derangements are possible. if mx*2 > n: return # 2) all elements appear once: singletons if len(ms) == n: yield from _set_derangements(s) return # find the first element that is repeated the most to place # in the following two special cases where the selection # is unambiguous: either there are two elements with multiplicity # of mx or else there is only one with multiplicity mx for M in ms: if ms[M] == mx: break inonM = [i for i in range(n) if s[i] != M] # location of non-M iM = [i for i in range(n) if s[i] == M] # locations of M rv = [None]*n # 3) half are the same if 2*mx == n: # M goes into non-M locations for i in inonM: rv[i] = M # permutations of non-M go to M locations for p in multiset_permutations([s[i] for i in inonM]): for i, pi in zip(iM, p): rv[i] = pi yield rv # clean-up (and encourages proper use of routine) rv[:] = [None]*n return # 4) single repeat covers all but 1 of the non-repeats: # if there is one repeat then the multiset of the values # of ms would be {mx: 1, 1: n - mx}, i.e. there would # be n - mx + 1 values with the condition that n - 2*mx = 1 if n - 2*mx == 1 and len(ms.values()) == n - mx + 1: for i in range(len(inonM)): i1 = inonM[i] ifill = inonM[:i] + inonM[i+1:] for j in ifill: rv[j] = M for p in permutations([s[j] for j in ifill]): rv[i1] = s[i1] for j, pi in zip(iM, p): rv[j] = pi k = i1 for j in iM: rv[j], rv[k] = rv[k], rv[j] yield rv k = j # clean-up (and encourages proper use of routine) rv[:] = [None]*n return ## general case is handled with 3 helpers: # 1) `finish_derangements` will place the last two elements # which have arbitrary multiplicities, e.g. for multiset # {c: 3, a: 2, b: 2}, the last two elements are a and b # 2) `iopen` will tell where a given element can be placed # 3) `do` will recursively place elements into subsets of # valid locations
461
iterables.py
Python
sympy/utilities/iterables.py
94d6e4fc1ac2c516d328fdad20933d3d92bf52bb
sympy
28
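The commit message for multiset_derangements above stresses that the derangements are generated in place, so callers have to copy each yielded list (or use generate_derangements instead). A minimal sketch of the difference, assuming both functions are importable from `sympy.utilities.iterables` as the file path indicates:

```python
from sympy.utilities.iterables import multiset_derangements, generate_derangements

s = [1, 1, 2, 2]

# Wrong: every yielded list is the same mutable object, so the collected
# results all end up equal to the last (cleaned-up) state of `rv`.
aliased = list(multiset_derangements(s))

# Right: copy each yielded list before the generator mutates it again.
copied = [d[:] for d in multiset_derangements(s)]

# generate_derangements hands back independent lists, so no copy is needed.
independent = list(generate_derangements(s))

print(copied)       # [[2, 2, 1, 1]]
print(independent)
```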
155,525
221
16
68
962
68
0
366
834
merge_percentiles
Replace `interpolation` with `method` and `method` with `internal_method` (#8525) Following the change in numpy 1.22.0 Co-authored-by: James Bourbeau <[email protected]>
https://github.com/dask/dask.git
def merge_percentiles(finalq, qs, vals, method="lower", Ns=None, raise_on_nan=True): from .utils import array_safe if isinstance(finalq, Iterator): finalq = list(finalq) finalq = array_safe(finalq, like=finalq) qs = list(map(list, qs)) vals = list(vals) if Ns is None: vals, Ns = zip(*vals) Ns = list(Ns) L = list(zip(*[(q, val, N) for q, val, N in zip(qs, vals, Ns) if N])) if not L: if raise_on_nan: raise ValueError("No non-trivial arrays found") return np.full(len(qs[0]) - 2, np.nan) qs, vals, Ns = L # TODO: Perform this check above in percentile once dtype checking is easy # Here we silently change meaning if vals[0].dtype.name == "category": result = merge_percentiles( finalq, qs, [v.codes for v in vals], method, Ns, raise_on_nan ) import pandas as pd return pd.Categorical.from_codes(result, vals[0].categories, vals[0].ordered) if not np.issubdtype(vals[0].dtype, np.number): method = "nearest" if len(vals) != len(qs) or len(Ns) != len(qs): raise ValueError("qs, vals, and Ns parameters must be the same length") # transform qs and Ns into number of observations between percentiles counts = [] for q, N in zip(qs, Ns): count = np.empty_like(finalq, shape=len(q)) count[1:] = np.diff(array_safe(q, like=q[0])) count[0] = q[0] count *= N counts.append(count) # Sort by calculated percentile values, then number of observations. combined_vals = np.concatenate(vals) combined_counts = array_safe(np.concatenate(counts), like=combined_vals) sort_order = np.argsort(combined_vals) combined_vals = np.take(combined_vals, sort_order) combined_counts = np.take(combined_counts, sort_order) # percentile-like, but scaled by total number of observations combined_q = np.cumsum(combined_counts) # rescale finalq percentiles to match combined_q finalq = array_safe(finalq, like=combined_vals) desired_q = finalq * sum(Ns) # the behavior of different interpolation methods should be # investigated further. if method == "linear": rv = np.interp(desired_q, combined_q, combined_vals) else: left = np.searchsorted(combined_q, desired_q, side="left") right = np.searchsorted(combined_q, desired_q, side="right") - 1 np.minimum(left, len(combined_vals) - 1, left) # don't exceed max index lower = np.minimum(left, right) upper = np.maximum(left, right) if method == "lower": rv = combined_vals[lower] elif method == "higher": rv = combined_vals[upper] elif method == "midpoint": rv = 0.5 * (combined_vals[lower] + combined_vals[upper]) elif method == "nearest": lower_residual = np.abs(combined_q[lower] - desired_q) upper_residual = np.abs(combined_q[upper] - desired_q) mask = lower_residual > upper_residual index = lower # alias; we no longer need lower index[mask] = upper[mask] rv = combined_vals[index] else: raise ValueError( "interpolation method can only be 'linear', 'lower', " "'higher', 'midpoint', or 'nearest'" ) return rv
612
percentile.py
Python
dask/array/percentile.py
3c46e89aea2af010e69049cd638094fea2ddd576
dask
18
136,218
13
9
8
71
11
0
13
34
poll
[Dashboard] Optimize and backpressure actor_head.py (#29580) Signed-off-by: SangBin Cho <[email protected]> This optimizes the actor head CPU usage and guarantees a stable API response from the dashboard under lots of actor events published to drivers. The below script is used for testing, and I could reproduce the same level of delay as many_nodes_actor_test (250 nodes + 10k actors)
https://github.com/ray-project/ray.git
async def poll(self, timeout=None, batch_size=500) -> List[Tuple[bytes, str]]: await self._poll(timeout=timeout) return self._pop_actors(self._queue, batch_size=batch_size)
46
gcs_pubsub.py
Python
python/ray/_private/gcs_pubsub.py
9da53e3e5a6ec7b9bea32cd68f00f3e9468056ef
ray
1
296,364
5
6
2
17
2
0
5
12
async_tear_down
Refactor MQTT discovery (#67966) * Proof of concept * remove notify platform * remove loose test * Add rework from #67912 (#1) * Move notify serviceupdater to Mixins * Move tag discovery handler to Mixins * fix tests * Add typing for async_load_platform_helper * Add add entry unload support for notify platform * Simplify discovery updates * Remove not needed extra logic * Cleanup inrelevant or duplicate code * reuse update_device and move to mixins * Remove notify platform * revert changes to notify platform * Rename update class * unify tag entry setup * Use shared code for device_trigger `update_device` * PoC shared dispatcher for device_trigger * Fix bugs * Improve typing - remove async_update * Unload config_entry and tests * Release dispatcher after setup and deduplicate * closures to methods, revert `in` to `=`, updates * Re-add update support for tag platform * Re-add update support for device-trigger platform * Cleanup rediscovery code revert related changes * Undo discovery code shift * Update homeassistant/components/mqtt/mixins.py Co-authored-by: Erik Montnemery <[email protected]> * Update homeassistant/components/mqtt/device_trigger.py Co-authored-by: Erik Montnemery <[email protected]> * Update homeassistant/components/mqtt/mixins.py Co-authored-by: Erik Montnemery <[email protected]> * revert doc string changes * move conditions * typing and check config_entry_id * Update homeassistant/components/mqtt/mixins.py Co-authored-by: Erik Montnemery <[email protected]> * cleanup not used attribute * Remove entry_unload code and tests * update comment * add second comment Co-authored-by: Erik Montnemery <[email protected]>
https://github.com/home-assistant/core.git
async def async_tear_down(self) -> None:
8
mixins.py
Python
homeassistant/components/mqtt/mixins.py
3b2aae5045f9f08dc8f174c5d975852588e1a132
core
1
141,872
10
12
8
50
7
0
11
32
remote_execution_api
[tune/ci] Multinode support killing nodes in Ray client mode (#25709) The multi node testing utility currently does not support controlling cluster state from within Ray tasks or actors, as it currently requires Ray client. This makes it impossible to properly test e.g. fault tolerance, as the driver has to be executed on the client machine in order to control cluster state. However, this client machine is not part of the Ray cluster and can't schedule tasks on the local node - which is required by some utilities, e.g. checkpoint to driver syncing. This PR introduces a remote control API for the multi node cluster utility that utilizes a Ray queue to communicate with an execution thread. That way we can instruct cluster commands from within the Ray cluster.
https://github.com/ray-project/ray.git
def remote_execution_api(self) -> "RemoteAPI": self._execution_queue = Queue(actor_options={"num_cpus": 0}) stop_event = self._execution_event
55
test_utils.py
Python
python/ray/autoscaler/_private/fake_multi_node/test_utils.py
b574f75a8f5a28d2f9e55696ca5c40b2e095d9f9
ray
1
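The commit message above describes a queue-based remote-control pattern: commands are pushed onto a Ray queue by code running inside the cluster and consumed by an execution thread on the driver. A minimal sketch of that pattern; the command names and the dispatch logic are placeholders rather than the real test utility API, and only the `Queue(actor_options={"num_cpus": 0})` construction comes from the snippet above:

```python
import threading
import time

import ray
from ray.util.queue import Queue, Empty


def run_execution_thread(command_queue, stop_event):
    """Driver-side loop: pop commands off the queue and execute them locally."""
    while not stop_event.is_set():
        try:
            command, kwargs = command_queue.get(timeout=1)
        except Empty:
            continue
        # Placeholder dispatch; the real utility would start/stop fake nodes here.
        print(f"executing {command} with {kwargs}")


ray.init()
queue = Queue(actor_options={"num_cpus": 0})  # same construction as in the snippet above
stop = threading.Event()
thread = threading.Thread(target=run_execution_thread, args=(queue, stop), daemon=True)
thread.start()


@ray.remote
def chaos_task(q):
    # Code running inside the cluster requests a cluster-level action.
    q.put(("kill_node", {"node_id": "worker-1"}))


ray.get(chaos_task.remote(queue))
while not queue.empty():   # let the execution thread drain the command
    time.sleep(0.1)
stop.set()
```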
213,073
10
9
2
31
4
0
10
31
__reduce__
fix: Py27hash fix (#2182) * Add third party py27hash code * Add Py27UniStr and unit tests * Add py27hash_fix utils and tests * Add to_py27_compatible_template and tests * Apply py27hash fix to wherever it is needed * Apply py27hash fix, all tests pass except api_with_any_method_in_swagger * apply py27hash fix in openapi + run black * remove py27 testing * remove other py27 references * black fixes * fixes/typos * remove py27 from tox.ini * refactoring * third party notice * black * Fix py27hash fix to deal with null events * Fix Py27UniStr repr for unicode literals * black reformat * Update _template_has_api_resource to check data type more defensively * Apply py27Dict in _get_authorizers * Apply Py27Dict to authorizers and gateway responses which will go into swagger * Update to_py27_compatible_template to handle parameter_values; Add Py27LongInt class * Rename _convert_to_py27_dict to _convert_to_py27_type * Apply Py27UniStr to path param name * Handle HttpApi resource under to_py27_compatible_template * Fix InvalidDocumentException to not sort different exceptions * black reformat * Remove unnecessary test files Co-authored-by: Wing Fung Lau <[email protected]>
https://github.com/aws/serverless-application-model.git
def __reduce__(self): # pylint: disable = W0235 return super(Py27Dict, self).__reduce__()
17
py27hash_fix.py
Python
samtranslator/utils/py27hash_fix.py
a5db070f446b7cfebdaa6ad2e3dcf78f6105a272
serverless-application-model
1
177,008
66
16
30
340
22
1
112
408
naive_all_pairs_lowest_common_ancestor
Naive lowest common ancestor implementation (#5736) * Add naive lca methods * Naive algorithm implementation for LCA * Modify naive lca functions * Correct parameters of nx.ancestors * Update lowest_common_ancestors.py * Parametrize tests * Apply suggestions from code review Co-authored-by: Dan Schult <[email protected]> * Yield instead of append * Tests for naive lca * Correct test cases for naive lca algorithms * Apply suggestions from code review Co-authored-by: Mridul Seth <[email protected]> * Fix function name -when calling * Make requested changes * Inlining _get_a_lowest_common_ancestor Co-authored-by: dtuncturk <[email protected]> Co-authored-by: Dan Schult <[email protected]> Co-authored-by: Mridul Seth <[email protected]>
https://github.com/networkx/networkx.git
def naive_all_pairs_lowest_common_ancestor(G, pairs=None): if not nx.is_directed_acyclic_graph(G): raise nx.NetworkXError("LCA only defined on directed acyclic graphs.") elif len(G) == 0: raise nx.NetworkXPointlessConcept("LCA meaningless on null graphs.") elif None in G: raise nx.NetworkXError("None is not a valid node.") ancestor_cache = {} if pairs is None: pairs = combinations_with_replacement(G, 2) for v, w in pairs: if v not in ancestor_cache: ancestor_cache[v] = nx.ancestors(G, v) ancestor_cache[v].add(v) if w not in ancestor_cache: ancestor_cache[w] = nx.ancestors(G, w) ancestor_cache[w].add(w) common_ancestors = ancestor_cache[v] & ancestor_cache[w] if common_ancestors: common_ancestor = next(iter(common_ancestors)) while True: successor = None for lower_ancestor in G.successors(common_ancestor): if lower_ancestor in common_ancestors: successor = lower_ancestor break if successor is None: break common_ancestor = successor yield ((v, w), common_ancestor) @not_implemented_for("undirected") @not_implemented_for("multigraph")
@not_implemented_for("undirected") @not_implemented_for("multigraph")
200
lowest_common_ancestors.py
Python
networkx/algorithms/lowest_common_ancestors.py
b2f91c34a23058dd70b41784af0d87890216026a
networkx
13
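A short usage sketch for the naive pairwise LCA generator shown above, assuming it is importable from `networkx.algorithms.lowest_common_ancestors` as in the file path (the graph and pairs are made up for illustration):

```python
import networkx as nx
from networkx.algorithms.lowest_common_ancestors import (
    naive_all_pairs_lowest_common_ancestor,
)

# A small DAG:  0 -> 1 -> 3,  0 -> 2 -> 3
G = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)])

# The generator yields ((v, w), lca) for each requested pair.
print(dict(naive_all_pairs_lowest_common_ancestor(G, pairs=[(1, 2), (1, 3)])))
# Expected: {(1, 2): 0, (1, 3): 1}
```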
54,484
6
10
4
41
4
0
6
26
assert_does_not_warn
Fix deprecated use of pytest.warn to assert warnings are not raised
https://github.com/PrefectHQ/prefect.git
def assert_does_not_warn(): with warnings.catch_warnings(): warnings.simplefilter("error") yield
19
testing.py
Python
src/prefect/utilities/testing.py
2b2f421054df5bb301a13322f420b5bb44fcd2aa
prefect
1
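The commit above replaces a deprecated pytest idiom for asserting that no warning is raised with a context manager that escalates warnings to errors. A minimal self-contained sketch of that pattern (not Prefect's exact helper; the original presumably carries a `@contextmanager` decorator that the flattened snippet does not show):

```python
import warnings
from contextlib import contextmanager


@contextmanager
def assert_does_not_warn():
    """Fail the enclosed block if any warning is emitted."""
    with warnings.catch_warnings():
        # "error" turns every warning raised inside the block into an exception.
        warnings.simplefilter("error")
        yield


def test_quiet_function():
    with assert_does_not_warn():
        sum([1, 2, 3])  # the test would fail here if this emitted a warning
```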
183,774
8
8
3
31
6
0
8
22
_cursor_at_right_edge
Support for bracketed paste mode (#567) * Detecting bracketed paste, sending paste events * Bracketed pasting support in TextInput * Restore debugging conditional * Handle pasting of text in text-input, improve scrolling * Fix ordering of handling in parser for bracketed pastes * Docstrings * Add docstrings
https://github.com/Textualize/textual.git
def _cursor_at_right_edge(self) -> bool: return self._visible_content_to_cursor_cell_len == self.content_region.width
18
text_input.py
Python
src/textual/widgets/text_input.py
fe151a7f25cfd7f1134ebafbddc7eeade1c18ccb
textual
1
69,293
73
20
37
455
37
0
91
54
get_price_list
refactor: rewrite `Item Prices Report` queries in `QB`
https://github.com/frappe/erpnext.git
def get_price_list(): rate = {} ip = frappe.qb.DocType("Item Price") pl = frappe.qb.DocType("Price List") cu = frappe.qb.DocType("Currency") price_list = ( frappe.qb.from_(ip) .from_(pl) .from_(cu) .select( ip.item_code, ip.buying, ip.selling, (IfNull(cu.symbol, ip.currency)).as_("currency"), ip.price_list_rate, ip.price_list, ) .where((ip.price_list == pl.name) & (pl.currency == cu.name) & (pl.enabled == 1)) ).run(as_dict=True) for d in price_list: d.update( {"price": "{0} {1} - {2}".format(d.currency, round(d.price_list_rate, 2), d.price_list)} ) d.pop("currency") d.pop("price_list_rate") d.pop("price_list") if d.price: rate.setdefault(d.item_code, {}).setdefault("Buying" if d.buying else "Selling", []).append( d.price ) item_rate_map = {} for item in rate: for buying_or_selling in rate[item]: item_rate_map.setdefault(item, {}).setdefault( buying_or_selling, ", ".join(rate[item].get(buying_or_selling, [])) ) return item_rate_map
282
item_prices.py
Python
erpnext/stock/report/item_prices/item_prices.py
e312d17eae2a957b71c3a11480cad1d5dd848077
erpnext
6
132,856
53
13
6
98
9
0
72
150
is_finished
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def is_finished(self): # The checks here are partly redundant but optimized for quick # evaluation. Specifically, if there are live trials, we check # these live trials first. Only if none of the live trials is # live anymore do we loop over all trials for a final check. trials_done = ( len(self._live_trials) == 0 or all(trial.is_finished() for trial in self._live_trials) ) and all(trial.is_finished() for trial in self._trials) return trials_done and self._search_alg.is_finished()
58
trial_runner.py
Python
python/ray/tune/trial_runner.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
6
260,358
9
8
4
47
7
0
9
37
fit
MAINT Use _validate_params in FastICA (#23711) Co-authored-by: Guillaume Lemaitre <[email protected]> Co-authored-by: jeremiedbb <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def fit(self, X, y=None): self._validate_params() self._fit_transform(X, compute_sources=False) return self
29
_fastica.py
Python
sklearn/decomposition/_fastica.py
4cc347d4d0cbbfdcbd353f08842e0668fed78c9f
scikit-learn
1
269,454
65
14
30
478
37
1
103
346
dot
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def dot(x, y): if ndim(x) is not None and (ndim(x) > 2 or ndim(y) > 2): x_shape = [] for i, s in zip(int_shape(x), tf.unstack(tf.shape(x))): if i is not None: x_shape.append(i) else: x_shape.append(s) x_shape = tuple(x_shape) y_shape = [] for i, s in zip(int_shape(y), tf.unstack(tf.shape(y))): if i is not None: y_shape.append(i) else: y_shape.append(s) y_shape = tuple(y_shape) y_permute_dim = list(range(ndim(y))) y_permute_dim = [y_permute_dim.pop(-2)] + y_permute_dim xt = tf.reshape(x, [-1, x_shape[-1]]) yt = tf.reshape( tf.compat.v1.transpose(y, perm=y_permute_dim), [y_shape[-2], -1] ) return tf.reshape( tf.matmul(xt, yt), x_shape[:-1] + y_shape[:-2] + y_shape[-1:] ) if is_sparse(x): out = tf.sparse.sparse_dense_matmul(x, y) else: out = tf.matmul(x, y) return out @keras_export("keras.backend.batch_dot") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
@keras_export("keras.backend.batch_dot") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
286
backend.py
Python
keras/backend.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
9
125,559
4
8
2
22
2
0
4
10
find_gcs_addresses
[core] ray.init defaults to an existing Ray instance if there is one (#26678) ray.init() will currently start a new Ray instance even if one is already existing, which is very confusing if you are a new user trying to go from local development to a cluster. This PR changes it so that, when no address is specified, we first try to find an existing Ray cluster that was created through `ray start`. If none is found, we will start a new one. This makes two changes to the ray.init() resolution order: 1. When `ray start` is called, the started cluster address was already written to a file called `/tmp/ray/ray_current_cluster`. For ray.init() and ray.init(address="auto"), we will first check this local file for an existing cluster address. The file is deleted on `ray stop`. If the file is empty, autodetect any running cluster (legacy behavior) if address="auto", or we will start a new local Ray instance if address=None. 2. When ray.init(address="local") is called, we will create a new local Ray instance, even if one is already existing. This behavior seems to be necessary mainly for `ray.client` use cases. This also surfaces the logs about which Ray instance we are connecting to. Previously these were hidden because we didn't set up the log until after connecting to Ray. So now Ray will log one of the following messages during ray.init: ``` (Connecting to existing Ray cluster at address: <IP>...) ...connection... (Started a local Ray cluster.| Connected to Ray Cluster.)( View the dashboard at <URL>) ``` Note that this changes the dashboard URL to be printed with `ray.init()` instead of when the dashboard is first started. Co-authored-by: Eric Liang <[email protected]>
https://github.com/ray-project/ray.git
def find_gcs_addresses(): return _find_address_from_flag("--gcs-address")
10
services.py
Python
python/ray/_private/services.py
55a0f7bb2db941d8c6ff93f55e4b3193f404ddf0
ray
1
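The commit message above spells out the new address-resolution order for `ray.init()`: consult the file written by `ray start` first, then fall back to autodetection or to starting a local instance. A rough sketch of that decision logic; the helper name and return conventions are illustrative rather than Ray's internal implementation, and only the `/tmp/ray/ray_current_cluster` path comes from the message above:

```python
import os

RAY_CURRENT_CLUSTER_FILE = "/tmp/ray/ray_current_cluster"  # written by `ray start`, removed by `ray stop`


def resolve_ray_address(address=None):
    """Illustrative resolution order described in the commit message."""
    if address == "local":
        # Explicitly requested: always start a fresh local instance.
        return None  # meaning "start a new local cluster"
    if address not in (None, "auto"):
        return address  # an explicit address always wins
    # address is None or "auto": prefer an existing `ray start` cluster.
    if os.path.isfile(RAY_CURRENT_CLUSTER_FILE):
        with open(RAY_CURRENT_CLUSTER_FILE) as f:
            recorded = f.read().strip()
        if recorded:
            return recorded
    if address == "auto":
        return "autodetect"  # placeholder for the legacy autodetection path
    return None  # no cluster found: start a new local instance
```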
3,682
39
13
17
250
30
0
48
119
test_page_token_expired_retry_succeeds
Source Google Ads: handle page token expired exception (#9812) * dynamic date range * raise exception if it exits the cycle without error * if range days is 1 already do not retry * added unit tests * added comments * added comments * common mock classes are moved to common module * change read_records * refactored get_date_params * handle corner case * added parse_dates function * added test_streams * check mock calls * fix unit tests for chunk date range refactoring * removed commented code * remove commented line * refactor test_streams * refactor CustomQuery.get_query * remove TODO * deleted unused json * format * fix chunk_date_range * added docstring * set range_days to 15 for ShoppingPerformanceReport * refactor chunk_date_range * format code 2 * call parent read_records method * add return type in get_date_params * change e to exception * set start_date as end_date * log page token has expired * bump version * updated spec and def yaml Co-authored-by: auganbay <[email protected]>
https://github.com/airbytehq/airbyte.git
def test_page_token_expired_retry_succeeds(mock_ads_client, test_config): stream_slice = {"start_date": "2021-01-01", "end_date": "2021-01-15"} google_api = MockGoogleAds(credentials=test_config["credentials"], customer_id=test_config["customer_id"]) incremental_stream_config = dict( api=google_api, conversion_window_days=test_config["conversion_window_days"], start_date=test_config["start_date"], time_zone="local", end_date="2021-04-04", ) stream = ClickView(**incremental_stream_config) stream.get_query = Mock() stream.get_query.return_value = "query" result = list(stream.read_records(sync_mode=SyncMode.incremental, cursor_field=["segments.date"], stream_slice=stream_slice)) assert len(result) == 9 assert stream.get_query.call_count == 2 stream.get_query.assert_called_with({"start_date": "2021-01-03", "end_date": "2021-01-15"})
145
test_streams.py
Python
airbyte-integrations/connectors/source-google-ads/unit_tests/test_streams.py
359fcd801128239b39297828d39821f631ce00c0
airbyte
1
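The commit message and test above describe the retry strategy for an expired page token: re-issue the query from the date where reading stopped, and give up only when the slice is already a single day (the mocked stream in the test shows exactly two `get_query` calls, with the start date moved forward on the second one). A generic, heavily simplified sketch of that idea; the exception class, the `run_query` callable and the `"segments.date"` field access are placeholders, not the connector's real API:

```python
from datetime import date, timedelta


class PageTokenExpired(Exception):
    """Placeholder for the Google Ads 'page token expired' error."""


def read_range(run_query, start: date, end: date):
    """Read [start, end], re-issuing the query from after the last fully
    read day whenever the page token expires mid-stream."""
    while True:
        last_read = None
        try:
            for record in run_query(start, end):
                yield record
                last_read = record["segments.date"]
            return
        except PageTokenExpired:
            if start == end:
                # The slice is already a single day; retrying cannot help.
                raise
            # Resume after the last fully read day with a fresh query.
            start = (last_read or start) + timedelta(days=1)
            print(f"page token expired, restarting from {start}")
```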
310,597
13
10
5
42
5
0
13
45
_async_change_light
Migrate amcrest integration to new async API (#56294)
https://github.com/home-assistant/core.git
async def _async_change_light(self) -> None: await self._async_change_setting( self._audio_enabled or self.is_streaming, "indicator light" )
23
camera.py
Python
homeassistant/components/amcrest/camera.py
7781e308cd7b28c67b6cf339f9b115c7190456fe
core
2
261,218
14
11
4
68
8
0
15
31
axis0_safe_slice
DOC Ensure that sklearn.utils.axis0_safe_slice passes numpydoc (#24561)
https://github.com/scikit-learn/scikit-learn.git
def axis0_safe_slice(X, mask, len_mask): if len_mask != 0: return X[safe_mask(X, mask), :] return np.zeros(shape=(0, X.shape[1]))
45
__init__.py
Python
sklearn/utils/__init__.py
537c325f2927895449ce418b3a77750135c0ba7b
scikit-learn
2
170,649
7
6
3
25
6
0
7
14
obj_to_write
issue 48855 enable pylint unnecessary-pass (#49418) issue 48855 enable unnecessary-pass
https://github.com/pandas-dev/pandas.git
def obj_to_write(self) -> NDFrame | Mapping[IndexLabel, Any]:
16
_json.py
Python
pandas/io/json/_json.py
76923d7b58d8f25329e779a40b87e2b6959f9cea
pandas
1
213,001
131
21
57
973
33
0
258
945
cut_ansi_string_into_parts
Removed old code that used Popen and instead uses the PySimpleGUI Exec API calls for an all-in-one demo. Added expansion of the Multilline and a SizeGrip so that it's obvious to user the window is resizable.
https://github.com/PySimpleGUI/PySimpleGUI.git
def cut_ansi_string_into_parts(string_with_ansi_codes): color_codes_english = ['Black', 'Red', 'Green', 'Yellow', 'Blue', 'Magenta', 'Cyan', 'White', 'Reset'] color_codes = ["30m", "31m", "32m", "33m", "34m", "35m", "36m", "37m", "0m"] effect_codes_english = ['Italic', 'Underline', 'Slow Blink', 'Rapid Blink', 'Crossed Out'] effect_codes = ["3m", "4m", "5m", "6m", "9m"] background_codes = ["40m", "41m", "42m", "43m", "44m", "45m", "46m", "47m"] background_codes_english = ["Black", "Red", "Green", "Yellow", "Blue", "Magenta", "Cyan", "White"] ansi_codes = color_codes + effect_codes tuple_list = [] string_list = string_with_ansi_codes.split("\u001b[") if (len(string_list)) == 1: string_list = string_with_ansi_codes.split("\033[") for teststring in string_list: if teststring == string_with_ansi_codes: tuple_list += [(teststring, None, None, None)] break if any(code in teststring for code in ansi_codes): static_string = None color_used = None effect_used = None background_used = None for color in range(0, len(color_codes)): if teststring.startswith(color_codes[color]): working_thread = teststring.split(color_codes[color]) ansi_strip = re.compile(r'\x1B[@-_][0-?]*[ -/]*[@-~]') static_string = ansi_strip.sub('', working_thread[1]) color_used = color_codes_english[color] for effect in range(0, len(effect_codes)): if teststring.startswith(effect_codes[effect]): working_thread = teststring.split(effect_codes[effect]) ansi_strip = re.compile(r'\x1B[@-_][0-?]*[ -/]*[@-~]') static_string = ansi_strip.sub('', working_thread[1]) effect_used = effect_codes_english[effect] for background in range(0, len(background_codes)): if teststring.startswith(background_codes[background]): working_thread = teststring.split(background_codes[background]) ansi_strip = re.compile(r'\x1B[@-_][0-?]*[ -/]*[@-~]') static_string = ansi_strip.sub('', working_thread[1]) background_used = background_codes_english[background] try: if not tuple_list[len(tuple_list) - 1][0]: if not tuple_list[len(tuple_list) - 1][1] == None: color_used = tuple_list[len(tuple_list) - 1][1] if not tuple_list[len(tuple_list) - 1][2] == None: background_used = tuple_list[len(tuple_list) - 1][2] if not tuple_list[len(tuple_list) - 1][3] == None: effect_used = tuple_list[len(tuple_list) - 1][3] tuple_list += [(static_string, color_used, background_used, effect_used)] else: tuple_list += [(static_string, color_used, background_used, effect_used)] except Exception: tuple_list += [(static_string, color_used, background_used, effect_used)] new_tuple_list = [] for x in range(0, len(tuple_list)): if tuple_list[x][0]: new_tuple_list += [[tuple_list[x][0], tuple_list[x][1], tuple_list[x][2], tuple_list[x][3]]] return new_tuple_list
603
Demo_Script_Launcher_ANSI_Color_Output.py
Python
DemoPrograms/Demo_Script_Launcher_ANSI_Color_Output.py
a35687ac51dac5a2a0664ca20e7dd7cba6836c7b
PySimpleGUI
19
153,298
62
12
19
244
16
0
90
298
_read
REFACTOR-#3900: add flake8-no-implicit-concat plugin and refactor flake8 error codes (#3901) Signed-off-by: jeffreykennethli <[email protected]>
https://github.com/modin-project/modin.git
def _read(cls, path_or_buf, **kwargs): if cls._validate_hdf_format(path_or_buf=path_or_buf) is None: ErrorMessage.default_to_pandas( "File format seems to be `fixed`. For better distribution consider " + "saving the file in `table` format. df.to_hdf(format=`table`)." ) return cls.single_worker_read(path_or_buf, **kwargs) columns = kwargs.pop("columns", None) # Have to do this because of Dask's keyword arguments kwargs["_key"] = kwargs.pop("key", None) if not columns: start = kwargs.pop("start", None) stop = kwargs.pop("stop", None) empty_pd_df = pandas.read_hdf(path_or_buf, start=0, stop=0, **kwargs) if start is not None: kwargs["start"] = start if stop is not None: kwargs["stop"] = stop columns = empty_pd_df.columns return cls.build_query_compiler(path_or_buf, columns, **kwargs)
148
hdf_dispatcher.py
Python
modin/core/io/column_stores/hdf_dispatcher.py
e5e9634357e60925a5a70e56a1d4882d269f533a
modin
5
43,494
74
12
97
1,393
23
0
246
983
upgrade
Have consistent types between the ORM and the migration files (#24044) We currently don't compare column types between ORM and the migration files. Some columns in the migration files have different types from the same columns in the ORM. Here, I made effort to match the types in migration files with the types in ORM, using the migration files as the source of truth in most cases. I couldn't convert the MySQL VARCHAR collation in db(utf8_bin) to use the one in ORM(utf8mb3_bin). It seems it's not possible to convert a collation of an already existing column in MySQL.
https://github.com/apache/airflow.git
def upgrade(): conn = op.get_bind() with op.batch_alter_table('connection', schema=None) as batch_op: batch_op.alter_column( 'extra', existing_type=sa.TEXT(), type_=sa.Text(), existing_nullable=True, ) with op.batch_alter_table('log_template', schema=None) as batch_op: batch_op.alter_column( 'created_at', existing_type=sa.DateTime(), type_=TIMESTAMP(), existing_nullable=False ) with op.batch_alter_table('serialized_dag', schema=None) as batch_op: # drop server_default batch_op.alter_column( 'dag_hash', existing_type=sa.String(32), server_default=None, type_=sa.String(32), existing_nullable=False, ) with op.batch_alter_table('trigger', schema=None) as batch_op: batch_op.alter_column( 'created_date', existing_type=sa.DateTime(), type_=TIMESTAMP(), existing_nullable=False ) if conn.dialect.name != 'sqlite': return with op.batch_alter_table('serialized_dag', schema=None) as batch_op: batch_op.alter_column('fileloc_hash', existing_type=sa.Integer, type_=sa.BigInteger()) # Some sqlite date are not in db_types.TIMESTAMP. Convert these to TIMESTAMP. with op.batch_alter_table('dag', schema=None) as batch_op: batch_op.alter_column( 'last_pickled', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) batch_op.alter_column( 'last_expired', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with op.batch_alter_table('dag_pickle', schema=None) as batch_op: batch_op.alter_column( 'created_dttm', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with op.batch_alter_table('dag_run', schema=None) as batch_op: batch_op.alter_column( 'execution_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=False ) batch_op.alter_column( 'start_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) batch_op.alter_column( 'end_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with op.batch_alter_table('import_error', schema=None) as batch_op: batch_op.alter_column( 'timestamp', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with op.batch_alter_table('job', schema=None) as batch_op: batch_op.alter_column( 'start_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) batch_op.alter_column( 'end_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) batch_op.alter_column( 'latest_heartbeat', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with op.batch_alter_table('log', schema=None) as batch_op: batch_op.alter_column('dttm', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True) batch_op.alter_column( 'execution_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with op.batch_alter_table('serialized_dag', schema=None) as batch_op: batch_op.alter_column( 'last_updated', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=False ) with op.batch_alter_table('sla_miss', schema=None) as batch_op: batch_op.alter_column( 'execution_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=False ) batch_op.alter_column( 'timestamp', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with op.batch_alter_table('task_fail', schema=None) as batch_op: batch_op.alter_column( 'start_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) batch_op.alter_column( 'end_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) with 
op.batch_alter_table('task_instance', schema=None) as batch_op: batch_op.alter_column( 'start_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) batch_op.alter_column( 'end_date', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True ) batch_op.alter_column( 'queued_dttm', existing_type=sa.DATETIME(), type_=TIMESTAMP(), existing_nullable=True )
840
0113_2_4_0_compare_types_between_orm_and_db.py
Python
airflow/migrations/versions/0113_2_4_0_compare_types_between_orm_and_db.py
25537acfa28eebc82a90274840e0e6fb5c91e271
airflow
2
279,720
33
12
9
148
25
1
37
122
_load_state
Move optimizer methods not related to distributed training to the base class. PiperOrigin-RevId: 471880396
https://github.com/keras-team/keras.git
def _load_state(self, dir_path): # To avoid circular import from keras.saving.experimental import saving_lib file_path = tf.io.gfile.join(dir_path, saving_lib.STATE_FILENAME) if tf.io.gfile.exists(file_path): loaded_npz = np.load(file_path) logging.debug(f"Loaded state from {file_path}") self._set_state( {file: loaded_npz[file] for file in loaded_npz.files} ) base_optimizer_keyword_args = @keras_export("keras.optimizers.experimental.Optimizer", v1=[])
@keras_export("keras.optimizers.experimental.Optimizer", v1=[])
77
optimizer.py
Python
keras/optimizers/optimizer_experimental/optimizer.py
3ba4d8dadb4db52cf066662f5068e4f99ebd87ee
keras
3
34,475
64
13
16
168
14
0
92
236
simplify_replacements
Add model like (#14992) * Add new model like command * Bad doc-styler * black and doc-styler, stop fighting! * black and doc-styler, stop fighting! * At last * Clean up * Typo * Bad doc-styler * Bad doc-styler * All good maybe? * Use constants * Add doc and type hints * More cleaning * Add doc * Fix Copied from * Doc template * Use typing.Pattern instead * Framework-specific files * Fixes * Select frameworks clean model init * Deal with frameworks in main init * fixes * Last fix * Prompt user for info * Delete exemple config * Last fixes * Add test config * Fix bug with model_type included in each other * Fixes * More fixes * More fixes * Adapt config * Remove print statements * Will fix tokenization later, leave it broken for now * Add test * Quality * Try this way * Debug * Maybe by setting the path? * Let's try another way * It should go better when actually passing the arg... * Remove debug statements and style * Fix config * Add tests * Test require the three backends * intermediate commit * Revamp pattern replacements and start work on feature extractors * Adapt model info * Finalize code for processors * Fix in main init additions * Finish questionnaire for processing classes * Fix file name * Fix for real * Fix patterns * Style * Remove needless warnings * Copied from should work now. * Include Copied form in blocks * Add test * More fixes and tests * Apply suggestions from code review Co-authored-by: Lysandre Debut <[email protected]> * Address review comment Co-authored-by: Lysandre Debut <[email protected]>
https://github.com/huggingface/transformers.git
def simplify_replacements(replacements): if len(replacements) <= 1: # Nothing to simplify return replacements # Next let's sort replacements by length as a replacement can only "imply" another replacement if it's shorter. replacements.sort(key=lambda x: len(x[0])) idx = 0 while idx < len(replacements): old, new = replacements[idx] # Loop through all replacements after j = idx + 1 while j < len(replacements): old_2, new_2 = replacements[j] # If the replacement is implied by the current one, we can drop it. if old_2.replace(old, new) == new_2: replacements.pop(j) else: j += 1 idx += 1 return replacements
101
add_new_model_like.py
Python
src/transformers/commands/add_new_model_like.py
81156d20cd76c1a43ed44fdbc785e237d60b6896
transformers
5
269,031
50
16
12
128
12
0
67
138
_set_object_by_path
Expose keras/dtensor package to public PiperOrigin-RevId: 430366845
https://github.com/keras-team/keras.git
def _set_object_by_path(object_to_set, path, value): for i, attr_name in enumerate(path): if i == len(path) - 1: # We found the actual attribute to set if isinstance(attr_name, int): # This means we are trying to set an element in the array, make sure the # instance is array like object. object_to_set[attr_name] = value else: setattr(object_to_set, attr_name, value) else: if isinstance(attr_name, int): object_to_set = object_to_set[attr_name] else: object_to_set = getattr(object_to_set, attr_name)
80
layout_map.py
Python
keras/dtensor/layout_map.py
a179ed22f002e2f4a43ae4770348a9b8e1d5a051
keras
5
241,679
15
15
10
97
13
0
18
68
val_batch_idx
Integrate progress tracking into the progress bar (#11213)
https://github.com/Lightning-AI/lightning.git
def val_batch_idx(self) -> int: if self.trainer is None: return 0 if self.trainer.state.fn == "fit": return self.trainer.fit_loop.epoch_loop.val_loop.epoch_loop.batch_progress.current.processed return self.trainer.validate_loop.epoch_loop.batch_progress.current.processed
60
base.py
Python
pytorch_lightning/callbacks/progress/base.py
8a549a550cb10189ff1db382f546a40cd1c6c5b3
lightning
3
100,776
18
14
7
65
11
0
22
73
is_admin
Add Apple M1 to setup.py; add libblas to requirements
https://github.com/deepfakes/faceswap.git
def is_admin(self) -> bool: try: retval = os.getuid() == 0 except AttributeError: retval = ctypes.windll.shell32.IsUserAnAdmin() != 0 # type: ignore return retval
37
setup.py
Python
setup.py
a586ef6bf3db26752fc1164835e46b6e375576ca
faceswap
2
165,785
8
7
6
28
5
0
8
22
length
TYP: fix mid and length for Interval and Intervalarray (#46472)
https://github.com/pandas-dev/pandas.git
def length(self) -> Index: return self.right - self.left
16
interval.py
Python
pandas/core/arrays/interval.py
6d7e004b1fc69942390d953bf21098a786c12c92
pandas
1
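The snippet above defines `length` simply as `right - left`, and the commit makes the scalar and array return types explicit. A quick illustration using public pandas API (the interval values are arbitrary):

```python
import pandas as pd

# Scalar interval: length is a plain number.
assert pd.Interval(0, 5).length == 5

# IntervalIndex: length is element-wise right - left.
idx = pd.interval_range(start=0, periods=3, freq=2)  # (0, 2], (2, 4], (4, 6]
print(idx.length)  # lengths: [2, 2, 2]
```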
292,161
73
15
30
318
27
0
101
456
async_step_link
Ensure lutron caseta imports set the unique id (#66754)
https://github.com/home-assistant/core.git
async def async_step_link(self, user_input=None): errors = {} # Abort if existing entry with matching host exists. self._async_abort_entries_match({CONF_HOST: self.data[CONF_HOST]}) self._configure_tls_assets() if ( not self.attempted_tls_validation and await self.hass.async_add_executor_job(self._tls_assets_exist) and await self.async_get_lutron_id() ): self.tls_assets_validated = True self.attempted_tls_validation = True if user_input is not None: if self.tls_assets_validated: # If we previous paired and the tls assets already exist, # we do not need to go though pairing again. return self.async_create_entry(title=self.bridge_id, data=self.data) assets = None try: assets = await async_pair(self.data[CONF_HOST]) except (asyncio.TimeoutError, OSError): errors["base"] = "cannot_connect" if not errors: await self.hass.async_add_executor_job(self._write_tls_assets, assets) return self.async_create_entry(title=self.bridge_id, data=self.data) return self.async_show_form( step_id="link", errors=errors, description_placeholders={ CONF_NAME: self.bridge_id, CONF_HOST: self.data[CONF_HOST], }, )
199
config_flow.py
Python
homeassistant/components/lutron_caseta/config_flow.py
64277058b5ba6fb10029553422695964204f0ebb
core
8
225,776
29
8
14
72
4
0
33
104
test_expand_tokens_with_subtokens
add more unit tests for keyword table (#45) Co-authored-by: Jerry Liu <[email protected]>
https://github.com/jerryjliu/llama_index.git
def test_expand_tokens_with_subtokens() -> None: response = "foo bar, baz, Hello hello wOrld bye" keywords = extract_keywords_given_response(response) assert keywords == { "foo bar", "foo", "bar", "baz", "hello hello world bye", "hello", "world", "bye", }
37
test_utils.py
Python
tests/indices/keyword_table/test_utils.py
3c7e1ad1ea0d6feace926d9749c73c7870397714
llama_index
1
138,395
80
16
32
234
31
0
107
518
get_objects
[State Observability] Tasks and Objects API (#23912) This PR implements ray list tasks and ray list objects APIs. NOTE: You can ignore the merge conflict for now. It is because the first PR was reverted. There's a fix PR open now.
https://github.com/ray-project/ray.git
async def get_objects(self) -> dict: replies = await asyncio.gather( *[ self._client.get_object_info(node_id, timeout=DEFAULT_RPC_TIMEOUT) for node_id in self._client.get_all_registered_raylet_ids() ] ) worker_stats = [] for reply in replies: for core_worker_stat in reply.core_workers_stats: # NOTE: Set preserving_proto_field_name=False here because # `construct_memory_table` requires a dictionary that has # modified protobuf name # (e.g., workerId instead of worker_id) as a key. worker_stats.append( self._message_to_dict( message=core_worker_stat, fields_to_decode=["object_id"], preserving_proto_field_name=False, ) ) result = {} memory_table = memory_utils.construct_memory_table(worker_stats) for entry in memory_table.table: data = entry.as_dict() # `construct_memory_table` returns object_ref field which is indeed # object_id. We do transformation here. # TODO(sang): Refactor `construct_memory_table`. data["object_id"] = data["object_ref"] del data["object_ref"] data = filter_fields(data, ObjectState) result[data["object_id"]] = data return result
140
state_aggregator.py
Python
dashboard/state_aggregator.py
30ab5458a7e4ba2351d5e1beef8c8797b5946493
ray
5
246,122
4
6
9
16
3
0
4
11
_setup_get_username_for_registration
Add a module callback to set username at registration (#11790) This is in the context of mainlining the Tchap fork of Synapse. Currently in Tchap usernames are derived from the user's email address (extracted from the UIA results, more specifically the m.login.email.identity step). This change also exports the check_username method from the registration handler as part of the module API, so that a module can check if the username it's trying to generate is correct and doesn't conflict with an existing one, and fallback gracefully if not. Co-authored-by: David Robertson <[email protected]>
https://github.com/matrix-org/synapse.git
def _setup_get_username_for_registration(self) -> Mock:
38
test_password_providers.py
Python
tests/handlers/test_password_providers.py
2d3bd9aa670eedd299cc03093459929adec41918
synapse
1
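The commit message above describes a module callback that derives a username at registration from the email confirmed during the `m.login.email.identity` UIA step. A rough sketch of what such a module could look like; the exact shape of `uia_results` is an assumption here, and the module/class names are invented for illustration:

```python
from typing import Any, Dict, Optional


class EmailUsernameModule:
    def __init__(self, config: dict, api):
        self._api = api
        # Register the callback added by this change (name taken from the commit).
        api.register_password_auth_provider_callbacks(
            get_username_for_registration=self.get_username_for_registration,
        )

    async def get_username_for_registration(
        self, uia_results: Dict[str, Any], params: Dict[str, Any]
    ) -> Optional[str]:
        # Assumed shape: the email.identity step result carries the verified address.
        address = uia_results.get("m.login.email.identity", {}).get("address")
        if not address:
            return None  # fall back to Synapse's default behaviour
        return address.split("@", 1)[0].lower()
```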
288,152
4
6
1
17
4
0
4
11
attribute_updated
Add configuration entities and device actions for Inovelli Blue Series switch to ZHA (#79106) * Add Inovelli configuration entities to ZHA * add device actions * fix attribute name collision * add device action tests * disable remote protection per Inovelli request * expect_reply to false * update test for expect_reply change * inovelli feedback * translation keys and strings * clean up numbers * prevent double events * remove individual LED defaults per inovelli * redundant check * update test
https://github.com/home-assistant/core.git
def attribute_updated(self, attrid, value):
10
manufacturerspecific.py
Python
homeassistant/components/zha/core/channels/manufacturerspecific.py
2ed48a9b28f10784a5b8fc27ddae5ae299b43deb
core
1
154,845
51
9
7
76
7
0
61
91
get_keywords
REFACTOR-#5012: Add mypy checks for singleton files in base modin directory (#5013) Signed-off-by: Jonathan Shi <[email protected]>
https://github.com/modin-project/modin.git
def get_keywords() -> Dict[str, str]: # these strings will be replaced by git during git-archive. # setup.py/versioneer.py will grep for the variable names, so they must # each be defined on a line of their own. _version.py will just call # get_keywords(). git_refnames = "$Format:%d$" git_full = "$Format:%H$" git_date = "$Format:%ci$" keywords = {"refnames": git_refnames, "full": git_full, "date": git_date} return keywords
38
_version.py
Python
modin/_version.py
446148dbf9b66debd0a0dbf9ce778253380d5921
modin
1
34,282
42
15
19
186
15
0
57
262
_run_split_on_punc
Add FastTokenizer to REALM (#15211) * Remove BertTokenizer abstraction * Add FastTokenizer to REALM * Fix config archive map * Fix copies * Update realm.mdx * Apply suggestions from code review
https://github.com/huggingface/transformers.git
def _run_split_on_punc(self, text, never_split=None): if never_split is not None and text in never_split: return [text] chars = list(text) i = 0 start_new_word = True output = [] while i < len(chars): char = chars[i] if _is_punctuation(char): output.append([char]) start_new_word = True else: if start_new_word: output.append([]) start_new_word = False output[-1].append(char) i += 1 return ["".join(x) for x in output]
114
tokenization_realm.py
Python
src/transformers/models/realm/tokenization_realm.py
841d979190319098adc8101f9820a02ee3be4c8b
transformers
7
285,675
18
10
21
91
17
0
22
75
copy_func
Next release : reports on steroids (#2349) * fix gov tests * refactor insider * new virtual path extraction * removed some symbol default params as they're considered critical * little adjustments * portfolio refactor * merge API factory * add helpers, stocks, crypto, forex * minor forex changes * include forex api paths * add 2 missing forex funcs * portfolio brokers refactor * display help on api func call * add econometrics virtual paths to api * add api unit test * fixed report for the new api * minor portfolio refactorings * added gdapps * anchor_yield path * some more crypto path fixes * small change * fixed wrong param * minor fixes * wip - inital commit for forex report * add bw as a model, we'll get better solution afterwards * added ema with dummy model as it adds great functionality to the report * minor fixes * wip - added functions to forex report * add feedparser news path * add new virtual paths to api * adding commands to equity report * revert to old paths, new ones were breaking * Add in very basic ETF report * Add candle chart to ETF report * add etf load * allow use of candle without data * add raw to candle * added forex report * ongoing equity report * equity report change * fix some portfolio bugs and add docstrings * include portfolio paths and coin class * add crypto paths * change event dates to str * starting economy report * window for limit * equity report and refactor newsapi * add helper to api * update on economy report * equity report * update economy report * refactor some docstrings * change maturities helper * refactor newsapi * refactor futures command * add some sauce to ycrv plot * black * update report * refactor alphavantage * refactor wsj * update economy report * ycrv tenor * map avaiable_indices * map economy helpers * fix econdb docstring * add plots on economy report * minor fixes * wip - crypto report * update economy report * added same default args as view * added view to explicity use chart=True when suing the api * adjustments - removed rich tables to use only df * final version economy report * change report name * equity report for review * linting * add etf symbols endpoint * incorporate feedback economy report * fix reports launch by adding tag to economy report * fix equity bug * remove analyst name * fix * fix news * make links hyperlinks for equity * click links * fixed arg name * improved news * small improves * Fix light terminal stylesheet that would prevent using it in notebooks (#2473) * improved report * run reports in installer * fix #2209 * minor ycrv refactoring * refactor portfolio/holdv virtual path * refactor benchmark trades * fix events args * adapt economy report to changes * fix portfolio controller bug * holdv refactor * refactor perf command * start portfolio report * remove perf view * refactor holp * add textwrap3 to poetry (doesn't solve the error) * fix equity after merge * add some rolling commands * fix equity after save button * improved crypto report, plus minor fixes * minor fixes on the reports * add maxdd and distr * refactor qa * var command * refactor qa expected shortfall * add es command * add es command * fix qa percentile bug * fix economy rendering * refactor qa omega * add om command * add summary command * add dret command * add mret command * add yret command * add metrics * add allocs to report * remove bro and po commands, add later * fixed some tests * adjustments to crypto report * Fix docstring for VSCode Added a note about installing Jupyter PowerToys extension for optimal API usage in 
Jupyter VSCode, in the API_README.md. * minor adjustment * remove nft calendar model virtual paths * Add in Portfolio report * fix external axes portfolio view * Update portfolio report with rolling plots * Details for ETF and Portfolio * fix economy report * change analyst to openbb * floppy * fixed unmatched axis in reports * Speed up tests * fix file and load on po * get_news output * add some po paths * Add integration tests for Reports menu * refactor maxsharpe * open maxsharpe * open minrisk * open maxutil * open maxret * Added fixes * black * remove useless views * Fixed small issue * refactor ef * open ef api * portfolio optimization report * Added fixes * unblock api loading * add more endpoints * update po report * unblock api loading * update po report * expose herc * expose property endpoint * Added fixes * More api fixes * flake8 * Fixed some mypy * news api model * flake8 * mypy fix * mypy * black * pylint * fix tests * markdown * markdown * Added fixes * fix economy report * merge * fix economy report * remove empty notebook * expose nco * remove jupyter notebook * expose plot endpoint * remove po report, just used for tests * api v paths plot * remove api_old * change loading msg Co-authored-by: montezdesousa <[email protected]> Co-authored-by: hjoaquim <[email protected]> Co-authored-by: montezdesousa <[email protected]> Co-authored-by: Om Gupta <[email protected]> Co-authored-by: minhhoang1023 <[email protected]> Co-authored-by: JerBouma <[email protected]> Co-authored-by: Theodore Aptekarev <[email protected]> Co-authored-by: Om Gupta <[email protected]> Co-authored-by: Diogo Sousa <[email protected]> Co-authored-by: Colin Delahunty <[email protected]> Co-authored-by: northern-64bit <[email protected]> Co-authored-by: colin99d <[email protected]> Co-authored-by: Minh Hoang <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def copy_func(f) -> Callable: g = types.FunctionType( f.__code__, f.__globals__, name=f.__name__, argdefs=f.__defaults__, closure=f.__closure__, ) g = functools.update_wrapper(g, f) g.__kwdefaults__ = f.__kwdefaults__ return g
60
api.py
Python
openbb_terminal/api.py
72b0a9f1ee8b91ad9fd9e76d80d2ccab51ee6d21
OpenBBTerminal
1
20,389
95
17
47
575
36
0
169
795
format_unencoded
checkpoint progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyproject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def format_unencoded(self, tokensource, outfile): x = self.xoffset y = self.yoffset if not self.nowrap: if self.encoding: outfile.write('<?xml version="1.0" encoding="%s"?>\n' % self.encoding) else: outfile.write('<?xml version="1.0"?>\n') outfile.write('<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN" ' '"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/' 'svg10.dtd">\n') outfile.write('<svg xmlns="http://www.w3.org/2000/svg">\n') outfile.write('<g font-family="%s" font-size="%s">\n' % (self.fontfamily, self.fontsize)) counter = self.linenostart counter_step = self.linenostep counter_style = self._get_style(Comment) line_x = x if self.linenos: if counter % counter_step == 0: outfile.write('<text x="%s" y="%s" %s text-anchor="end">%s</text>' % (x+self.linenowidth,y,counter_style,counter)) line_x += self.linenowidth + self.ystep counter += 1 outfile.write('<text x="%s" y="%s" xml:space="preserve">' % (line_x, y)) for ttype, value in tokensource: style = self._get_style(ttype) tspan = style and '<tspan' + style + '>' or '' tspanend = tspan and '</tspan>' or '' value = escape_html(value) if self.spacehack: value = value.expandtabs().replace(' ', '&#160;') parts = value.split('\n') for part in parts[:-1]: outfile.write(tspan + part + tspanend) y += self.ystep outfile.write('</text>\n') if self.linenos and counter % counter_step == 0: outfile.write('<text x="%s" y="%s" text-anchor="end" %s>%s</text>' % (x+self.linenowidth,y,counter_style,counter)) counter += 1 outfile.write('<text x="%s" y="%s" ' 'xml:space="preserve">' % (line_x,y)) outfile.write(tspan + parts[-1] + tspanend) outfile.write('</text>') if not self.nowrap: outfile.write('</g></svg>\n')
332
svg.py
Python
pipenv/patched/notpip/_vendor/pygments/formatters/svg.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
15
279,959
18
14
7
88
8
0
19
100
from_config
Some changes on the new optimizer: 1. Include `custom_objects` in `from_config` for deserializing custom learning rate. 2. Handle the error of seeing unrecognized variable with a better error message. PiperOrigin-RevId: 476505974
https://github.com/keras-team/keras.git
def from_config(cls, config, custom_objects=None): if "learning_rate" in config: if isinstance(config["learning_rate"], dict): config["learning_rate"] = learning_rate_schedule.deserialize( config["learning_rate"], custom_objects=custom_objects ) return cls(**config)
52
optimizer.py
Python
keras/optimizers/optimizer_experimental/optimizer.py
51a6050b936ec87cd684fc1a052f79785ec9aaec
keras
3
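The change described above lets `from_config` resolve a custom learning-rate schedule through `custom_objects`. A small round-trip sketch, assuming a TensorFlow/Keras version where the experimental optimizer's `from_config` accepts that argument; the schedule class itself is made up for illustration:

```python
import tensorflow as tf


class WarmupSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, peak_lr, warmup_steps):
        self.peak_lr = peak_lr
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        return self.peak_lr * tf.minimum(1.0, step / self.warmup_steps)

    def get_config(self):
        return {"peak_lr": self.peak_lr, "warmup_steps": self.warmup_steps}


opt = tf.keras.optimizers.experimental.Adam(learning_rate=WarmupSchedule(1e-3, 1000))
config = opt.get_config()  # serializes the schedule as a nested dict

# Without custom_objects the nested custom schedule could not be deserialized.
restored = tf.keras.optimizers.experimental.Adam.from_config(
    config, custom_objects={"WarmupSchedule": WarmupSchedule}
)
```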
33,933
40
10
40
196
17
1
67
246
forward
[Fix doc examples] Add missing from_pretrained (#15044) * fix doc example - ValueError: Parameter config should be an instance of class `PretrainedConfig` * Update src/transformers/models/segformer/modeling_segformer.py Co-authored-by: NielsRogge <[email protected]> * update Co-authored-by: ydshieh <[email protected]> Co-authored-by: NielsRogge <[email protected]>
https://github.com/huggingface/transformers.git
def forward(self, pixel_values, output_attentions=None, output_hidden_states=None, return_dict=None): r output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict encoder_outputs = self.encoder( pixel_values, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = encoder_outputs[0] if not return_dict: return (sequence_output,) + encoder_outputs[1:] return BaseModelOutput( last_hidden_state=sequence_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) @add_start_docstrings( , SEGFORMER_START_DOCSTRING, )
@add_start_docstrings( """ SegFormer Model transformer with an image classification head on top (a linear layer on top of the final hidden states) e.g. for ImageNet. """, SEGFORMER_START_DOCSTRING, )
127
modeling_segformer.py
Python
src/transformers/models/segformer/modeling_segformer.py
ac224bb0797c1ee6522d814139f3eb0a8947267b
transformers
5
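The doc fix above boils down to instantiating the model with `from_pretrained` (so a proper `PretrainedConfig` is attached) instead of passing a checkpoint name where a config is expected. A minimal sketch, assuming the publicly available `nvidia/mit-b0` SegFormer checkpoint and a dummy input tensor instead of a real image:

```python
import torch
from transformers import SegformerModel

# Correct: from_pretrained builds the PretrainedConfig and the weights together.
model = SegformerModel.from_pretrained("nvidia/mit-b0")

pixel_values = torch.randn(1, 3, 512, 512)  # dummy batch in place of a real image
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)
print(outputs.last_hidden_state.shape)

# Wrong (the error quoted in the commit): SegformerModel("nvidia/mit-b0")
# raises ValueError: Parameter config should be an instance of class `PretrainedConfig`.
```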
297,740
42
11
25
319
18
0
88
197
test_create_area
Add aliases to area registry items (#84294) * Add aliases to area registry items * Update test * Fix WS API
https://github.com/home-assistant/core.git
async def test_create_area(hass, registry, update_events): # Create area with only mandatory parameters area = registry.async_create("mock") assert area == area_registry.AreaEntry( name="mock", normalized_name=ANY, aliases=set(), id=ANY, picture=None ) assert len(registry.areas) == 1 await hass.async_block_till_done() assert len(update_events) == 1 assert update_events[-1]["action"] == "create" assert update_events[-1]["area_id"] == area.id # Create area with all parameters area = registry.async_create( "mock 2", aliases={"alias_1", "alias_2"}, picture="/image/example.png" ) assert area == area_registry.AreaEntry( name="mock 2", normalized_name=ANY, aliases={"alias_1", "alias_2"}, id=ANY, picture="/image/example.png", ) assert len(registry.areas) == 2 await hass.async_block_till_done() assert len(update_events) == 2 assert update_events[-1]["action"] == "create" assert update_events[-1]["area_id"] == area.id
191
test_area_registry.py
Python
tests/helpers/test_area_registry.py
1a42bd5c4cb51ffbfcaf8d5389b80a228712ac81
core
1
273,987
4
6
3
17
4
0
4
7
_zero_state_tensors
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _zero_state_tensors(state_size, batch_size, dtype):
23
legacy_cells.py
Python
keras/layers/rnn/legacy_cells.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
30,229
71
19
23
198
19
0
112
277
create_github_url
update web code Co-Authored-By: Peyton Creery <[email protected]>
https://github.com/spotDL/spotify-downloader.git
def create_github_url(url): repo_only_url = re.compile( r"https:\/\/github\.com\/[a-z\d](?:[a-z\d]|-(?=[a-z\d])){0,38}\/[a-zA-Z0-9]+$" ) re_branch = re.compile("/(tree|blob)/(.+?)/") # Check if the given url is a url to a GitHub repo. If it is, tell the # user to use 'git clone' to download it if re.match(repo_only_url, url): print( "✘ The given url is a complete repository. Use 'git clone' to download the repository", "red", ) sys.exit() # extract the branch name from the given url (e.g master) branch = re_branch.search(url) if branch: download_dirs = url[branch.end() :] api_url = ( url[: branch.start()].replace("github.com", "api.github.com/repos", 1) + "/contents/" + download_dirs + "?ref=" + branch.group(2) ) return api_url, download_dirs raise ValueError("The given url is not a valid GitHub url") # Modification of https://github.com/sdushantha/gitdir/blob/master/gitdir/gitdir.py
111
web.py
Python
spotdl/console/web.py
bbb7a02ef889134af71593102bc6f65035ab14cb
spotify-downloader
3
7,623
37
15
14
139
20
0
40
114
load_data_for_viz
Encoder refactor V2 (#2370) * Added base files and some initial code * More files created, fleshing out binary feature and corresponding encoders * Added more schema infra * Registered all feature encoders * Separated feature utils infra * Added all preprocessing classes * Filled out rest of schema configs * Fixed preproc dataclass * Fixed small errors blocking import * Tests should be passing * Deleted unnecesssary files and removed commented out code * fixed flake8 * Fixed most tests * fixed pattern validation * Fixed missing val strategies and solved custom encoder update issue * Removed preprocessing from features due to schema SSOT * fix flake 8 * Started encoder schema work * Parallel CNN Encoder * StackedCNN Encoder * Added image encoders * Finished sequence encoders * Partway through text encoders * Added text encoders * Bag Encoders * Binary and Date Encoders * category, date, h3, and set encoders * Wired up encoder schemas * Switched input feature encoder schema definitions * Fixed handful of issues * Fix schema issues * Refactored a bunch of test configs * Small changes * Removed default param from register_encoder * Schema working now, working on refactoring * Finished decoder schemas * Removed default param from register_decoder * Added some default params to output features and more decoder work * Refactored all input feature encoder/decoder referencing * Refactored pretty much all the tests * Added back constants * Solved gbm issue * Fixed save_load test * various fixes * Fixed import issue * Flake 8 and various fixes * Solved more failed tests * Refactored missed tests * Removed commented lines * Added init file for decoders schema * Fixed failing tests * Fixed hyperopt shared params test * Added backwards compatability logic and test * Flake 8 * removed comment * Added base files and some initial code * More files created, fleshing out binary feature and corresponding encoders * Added more schema infra * Registered all feature encoders * Separated feature utils infra * Added all preprocessing classes * Filled out rest of schema configs * Fixed preproc dataclass * Fixed small errors blocking import * Tests should be passing * Deleted unnecesssary files and removed commented out code * fixed flake8 * Fixed most tests * fixed pattern validation * Fixed missing val strategies and solved custom encoder update issue * Removed preprocessing from features due to schema SSOT * fix flake 8 * Started encoder schema work * Parallel CNN Encoder * StackedCNN Encoder * Added image encoders * Finished sequence encoders * Partway through text encoders * Added text encoders * Bag Encoders * Binary and Date Encoders * category, date, h3, and set encoders * Wired up encoder schemas * Switched input feature encoder schema definitions * Fixed handful of issues * Fix schema issues * Refactored a bunch of test configs * Small changes * Removed default param from register_encoder * Schema working now, working on refactoring * Finished decoder schemas * Removed default param from register_decoder * Added some default params to output features and more decoder work * Refactored all input feature encoder/decoder referencing * Refactored pretty much all the tests * Added back constants * Solved gbm issue * Fixed save_load test * various fixes * Fixed import issue * Flake 8 and various fixes * Solved more failed tests * Refactored missed tests * Removed commented lines * Added init file for decoders schema * Fixed failing tests * Fixed hyperopt shared params test * Added backwards compatability logic 
and test * Flake 8 * removed comment * Skipping CTRL Encoder test since it's blasting memory * Fixed audio_feature test * Addressed failing tests * Fixed backwards compatability * Fixed more failing tests * Flake 8 * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Refactored default logic for all features * Fixed H3 weighted_sum encoder wrong type * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix import issue * Mark slow HF tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed defaults tests * Pin Ray nightly version * fix link * pin torch to 07/26 * cleanup * upgrade ray pinned version to enable parquet partition filtering * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * downgrade Ray to ensure TensorDtypes are not inferred during Ray Dataset <=> Dask conversions * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Removed custom encoder decoder helper method * unpin torch * Flake 8 * Daniel feedback * Small fixes * Fixed default weights init * Added test with encoder dependencies for global defaults * Fixed Arnav's test * Addressed Arnav's feedback * Address nit * Addressed feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address nit * Fix test * Initial feedback refactor * More refactoring * Added vocab field to all text_encoder configs * More refactoring * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix audio feature test, also s/logging/logger. * param names should start with lowercase s/N/n * Re-added schema utils used in encoder refactor. * Removes unused overwrite_defaults() * Oops, name is passed to feature as a kwarg not a member of the feature config. Why? Probably should change that. * Change lowercase default back to True. Fixes test_strings_utils * Set feature validation error with output size 1. * MLP mixer encoder needs num_channels. * Use schema.dump instead of .__dict__ to convert marshmallow dataclass to dict * (x,) in python is a tuple with a single element x. Watch out for this when defining schemas. * Construct features by using build_single_input/output to share code for deserializing feature configs. Also changes ECD to BaseModel, IMO its confusing to import ECD to use a class method from BaseModel. * Fix test_trainer_utils, adds convenience method BaseFeature.load_from_dictionary * Use feature load_from_dictionary instead of BaseModel in feature tests. * Populate encoder and decoder types in shared test fixtures, fixes error expectations in test_validate_config_combiner.py * Fixes test_validate_config_misc.py by ensuring only one option of OneOf allows None, because OneOf fails validation if more than one condition match. * Updates test_defaults.py * Adds type, column, proc_column to feature schemas. Revert feature tests by passing in config dict again. * decorate feature base classes with @dataclass, fixes failure building input features in trainer. * Implement _serialize for PreprocessingDataclassField. * use type(feature) to get schema class. * Fix test_trainer_utils.py * audio_feature requires embedding_size, but passthrough encoder does not have this property. 
Technically, passthrough encoder is not supported for audio features. * Wow, apparently the order of elements in the oneOf affects which error message we get from jsonschema. * Get default encoders from feature schema. * Get encoder defaults from schema in config_utils.py * Make number feature allow decoders without clip property * s/list/List * Adds reduce_output to h3 encoder. * Moves decoder params into nested decoder. * Update processing parameters with computed_fill_value. * Removes test code. * Adds input_size to decoder base because some features assume decoders have an input_size * dense encoder not supported for bag features, changed to embed. * Adds input_size param to dense encoder schema, since its a required parameter of dense encoder. * Fixes vector feature input_size in encoder metadata. * Fixes test reducers, set sequence reduce mode in output feature base. * Don't nest encoder parameters in decoder * Fixes test_torchscript, get num_classes from encoder config. * Audio feature padding is float, not int. * Adds temp check for threshold to fix GBM tests. * Adds missing value strategy drop_row for vector feature in test. * Drop row should work even if computed_fill_value is an empty string * Removes duplicated TOP_K constant. * Consolidated set_default_values * Removes commented-out defaults. * Remove load_config from OutputFeature, it isn't doing anything here. * Removes comment. * Fix type annotations for input/output feature constructors. * Fixes output feature dependencies being ignored. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Adds test for construction of output features with dependencies. * Encoder/Decoder config now lives on encoder/decoder object * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixes decoder params to match their respective classes. Moves fc_stack params and threshold back to output feature. * Make clip property of number output feature again. * Adds threshold property to set feature schema, use this property instead of storing it in the decoder. * input_size in output_feature instead of decoder. * Made vector_size property of vector_feature. * Fixed gbm tests * Fixed flake 8 * Re-adds num_classes as member of category output feature. * Makes vocab_size match vocab used in preprocessing. * num_classes in CategoryOutputFeature. * Moves num_classes from decoder to category output feature. * Fixes test_model_training_options. Copies fc_layer keys into decoder if they are present on output features. * Adds field descriptors for fc_layers params in BaseOutputFeatureConfig. Co-authored-by: connor-mccorm <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: connor-mccorm <[email protected]> Co-authored-by: Geoffrey Angus <[email protected]> Co-authored-by: Arnav Garg <[email protected]> Co-authored-by: Daniel Treiman <[email protected]>
https://github.com/ludwig-ai/ludwig.git
def load_data_for_viz(load_type, model_file_statistics, **kwargs):
    supported_load_types = dict(
        load_json=load_json,
        load_from_file=partial(
            load_from_file,
            dtype=kwargs.get("dtype", int),
            ground_truth_split=kwargs.get("ground_truth_split", 2)
        ),
    )
    loader = supported_load_types[load_type]
    try:
        stats_per_model = [loader(stats_f) for stats_f in model_file_statistics]
    except (TypeError, AttributeError):
        logger.exception(f"Unable to open model statistics file {model_file_statistics}!")
        raise
    return stats_per_model
86
visualize.py
Python
ludwig/visualize.py
03b4ab273abd7e22a56bb550b56f3d667200abf9
ludwig
3
78,237
12
10
2
61
8
1
12
17
classnames
add classnames template tag for generating classnames - use classnames template tag in shared header template - add classname as documented variable for the shared header template
https://github.com/wagtail/wagtail.git
def classnames(*classes):
    return " ".join([classname.strip() for classname in classes if classname])


@register.simple_tag(takes_context=True)
@register.simple_tag(takes_context=True)
26
wagtailadmin_tags.py
Python
wagtail/admin/templatetags/wagtailadmin_tags.py
e2d4cb77458878d7d7076a7aa8b6d590deb99463
wagtail
3
156,111
61
14
17
249
30
0
80
183
repartition
absolufy-imports - No relative - PEP8 (#8796) Conversation in https://github.com/dask/distributed/issues/5889
https://github.com/dask/dask.git
def repartition(df, divisions=None, force=False):
    token = tokenize(df, divisions)
    if isinstance(df, _Frame):
        tmp = "repartition-split-" + token
        out = "repartition-merge-" + token
        dsk = repartition_divisions(
            df.divisions, divisions, df._name, tmp, out, force=force
        )
        graph = HighLevelGraph.from_collections(out, dsk, dependencies=[df])
        return new_dd_object(graph, out, df._meta, divisions)
    elif is_dataframe_like(df) or is_series_like(df):
        name = "repartition-dataframe-" + token
        from dask.dataframe.utils import shard_df_on_index

        dfs = shard_df_on_index(df, divisions[1:-1])
        dsk = {(name, i): df for i, df in enumerate(dfs)}
        return new_dd_object(dsk, name, df, divisions)
    raise ValueError("Data must be DataFrame or Series")
165
core.py
Python
dask/dataframe/core.py
cccb9d8d8e33a891396b1275c2448c352ef40c27
dask
5
293,984
8
6
7
25
4
0
8
22
title
Add update entity platform (#68248) Co-authored-by: Glenn Waters <[email protected]>
https://github.com/home-assistant/core.git
def title(self) -> str | None:
    return self._attr_title
14
__init__.py
Python
homeassistant/components/update/__init__.py
073fb40b79cf8aa06790fdceb23b6857db888c99
core
1
299,378
19
8
2
28
3
0
19
40
shuffle
Improve repeat and shuffle support for Squeezebox (#70941)
https://github.com/home-assistant/core.git
def shuffle(self):
    # Squeezebox has a third shuffle mode (album) not recognized by Home Assistant
    return self._player.shuffle == "song"
14
media_player.py
Python
homeassistant/components/squeezebox/media_player.py
0264f060e4fc988f3a0442ba8f951677816c11ea
core
1
255,418
4
6
9
16
2
0
4
11
test_case_connect_partially_no_name_collision
Use Python type annotations rather than comments (#3962) * These have been supported since Python 3.5. ONNX doesn't support Python < 3.6, so we can use the annotations. Diffs generated by https://pypi.org/project/com2ann/. Signed-off-by: Gary Miguel <[email protected]> * Remove MYPY conditional logic in gen_proto.py It breaks the type annotations and shouldn't be needed. Signed-off-by: Gary Miguel <[email protected]> * Get rid of MYPY bool from more scripts Signed-off-by: Gary Miguel <[email protected]> * move Descriptors class above where its referenced in type annotation Signed-off-by: Gary Miguel <[email protected]> * fixes Signed-off-by: Gary Miguel <[email protected]> * remove extra blank line Signed-off-by: Gary Miguel <[email protected]> * fix type annotations Signed-off-by: Gary Miguel <[email protected]> * fix type annotation in gen_docs Signed-off-by: Gary Miguel <[email protected]> * fix Operators.md Signed-off-by: Gary Miguel <[email protected]> * fix TestCoverage.md Signed-off-by: Gary Miguel <[email protected]> * fix protoc-gen-mypy.py Signed-off-by: Gary Miguel <[email protected]>
https://github.com/onnx/onnx.git
def test_case_connect_partially_no_name_collision(self) -> None:
37
compose_test.py
Python
onnx/test/compose_test.py
83fa57c74edfd13ddac9548b8a12f9e3e2ed05bd
onnx
1
35,663
9
9
3
38
6
0
9
34
freeze_base_model
Add Data2Vec (#15507) * Add data2vec model cloned from roberta * Add checkpoint conversion script * Fix copies * Update docs * Add checkpoint conversion script * Remove fairseq data2vec_text script and fix format * Add comment on where to get data2vec_text.py * Remove mock implementation cheat.py and fix style * Fix copies * Remove TF and Flax classes from init * Add back copy from fairseq data2vec_text.py and fix style * Update model name in docs/source/index.mdx to be CamelCase * Revert model name in table to lower-case to get check_table test to pass * Update src/transformers/models/data2vec/__init__.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/convert_data2vec_original_pytorch_checkpoint_to_pytorch.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update docs/source/model_doc/data2vec.mdx Co-authored-by: Sylvain Gugger <[email protected]> * Update docs/source/model_doc/data2vec.mdx Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/auto/configuration_auto.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update tests/test_modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update documentation * Copy-paste Data2VecConfig from BertConfig * Update config checkpoint to point to edugp/data2vec-nlp-base. Fix style and repo-consistency * Update config special tokens to match RoBERTa * Split multiple assertions and add individual error messages * Rename Data2VecModel to Data2VecForTextModel * Add Data2Vec to _toctree.yml * Rename Data2VecEmbeddings to Data2VecForTextEmbeddings * Add initial Data2VecForAudio model (unfinished). Only matching fairseq's implementation up to the feature encoder (before positional encoding). * finish audio model * finish audio file * Update names and fix style, quality and repo consistency * Remove Data2VecAudioForPretraining. Add tests for Data2VecAudio, mimicking the Wav2Vec2 test suite. Fix bias initilization in positional conv layers. Move back configurations for audio and text to separate files. 
* add inputs to logits to data2vec' * correct autio models * correct config auto * correct tok auto * Update utils/tests_fetcher.py * delete unnecessary files * delete unnecessary files * further renaming * make all tests pass * finish * remove useless test file * Update tests/test_modeling_common.py * Update utils/check_repo.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec_text.py Co-authored-by: Patrick von Platen <[email protected]> * Fix copies * Update docs * Remove fairseq data2vec_text script and fix format * Add comment on where to get data2vec_text.py * Remove mock implementation cheat.py and fix style * Fix copies * Remove TF and Flax classes from init * Add back copy from fairseq data2vec_text.py and fix style * Update model name in docs/source/index.mdx to be CamelCase * Revert model name in table to lower-case to get check_table test to pass * Update documentation * Update src/transformers/models/data2vec/__init__.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/convert_data2vec_original_pytorch_checkpoint_to_pytorch.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/auto/configuration_auto.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update tests/test_modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/configuration_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec.py Co-authored-by: Sylvain Gugger <[email protected]> * Copy-paste Data2VecConfig from BertConfig * Update config checkpoint to point to edugp/data2vec-nlp-base. Fix style and repo-consistency * Update config special tokens to match RoBERTa * Split multiple assertions and add individual error messages * Rename Data2VecModel to Data2VecForTextModel * Add Data2Vec to _toctree.yml * Rename Data2VecEmbeddings to Data2VecForTextEmbeddings * Add initial Data2VecForAudio model (unfinished). Only matching fairseq's implementation up to the feature encoder (before positional encoding). * finish audio model * finish audio file * add inputs to logits to data2vec' * Update names and fix style, quality and repo consistency * Remove Data2VecAudioForPretraining. Add tests for Data2VecAudio, mimicking the Wav2Vec2 test suite. Fix bias initilization in positional conv layers. Move back configurations for audio and text to separate files. 
* correct autio models * correct config auto * correct tok auto * delete unnecessary files * delete unnecessary files * Update utils/tests_fetcher.py * further renaming * make all tests pass * finish * remove useless test file * Update tests/test_modeling_common.py * Update utils/check_repo.py Co-authored-by: Patrick von Platen <[email protected]> * Update src/transformers/models/data2vec/modeling_data2vec_text.py Co-authored-by: Patrick von Platen <[email protected]> * Move data2vec tests to new structure * Fix test imports for text tests * Remove fairseq files * Change paper link to arxiv * Modify Data2Vec documentation to reflect that the encoder is not shared across the audio and text models in the current implementation. * Update text model checkpoint to be facebook/data2vec-text-base * Add 'Copy from' statements and update paper links and docs * fix copy from statements * improve copied from * correct more copied from statements * finish copied from stuff * make style * add model to README * add to master Co-authored-by: Eduardo Gonzalez Ponferrada <[email protected]> Co-authored-by: Patrick von Platen <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]>
https://github.com/huggingface/transformers.git
def freeze_base_model(self):
    for param in self.data2vec_audio.parameters():
        param.requires_grad = False
22
modeling_data2vec_audio.py
Python
src/transformers/models/data2vec/modeling_data2vec_audio.py
df5a4094a6e3f98f2cb2058cdb688fcc3f453220
transformers
2
244,211
11
6
10
18
4
0
11
17
split_batch
[Tools] Support respliting data_batch with tag (#7641) * support respliting data_batch with tag * add citations * add a unit test * fix lint
https://github.com/open-mmlab/mmdetection.git
def split_batch(img, img_metas, kwargs):
    # only stack img in the batch
94
split_batch.py
Python
mmdet/utils/split_batch.py
c6f467fe9baccc281b0695368c1eae14d5d21fd5
mmdetection
4
181,691
16
10
11
69
14
0
16
77
test_memory_6
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_memory_6():
    tpot_obj = TPOTClassifier(
        random_state=42,
        population_size=1,
        offspring_size=2,
        generations=1,
        config_dict='TPOT light',
        memory=str,
        verbosity=0
    )

    assert_raises(ValueError, tpot_obj._setup_memory)
45
tpot_tests.py
Python
tests/tpot_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
181,726
9
9
3
34
5
0
9
18
test_conf_dict_2
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_conf_dict_2():
    tpot_obj = TPOTClassifier(config_dict=tpot_mdr_classifier_config_dict)
    assert tpot_obj.config_dict == tpot_mdr_classifier_config_dict
19
tpot_tests.py
Python
tests/tpot_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
281,429
65
18
37
334
28
0
75
258
get_defi_protocols
Features(crypto): added new commands to defi menu (#1169) * adding new defi features and refactoring existing ones * added tests and docs * added zlot to ignored words * markdown lint * fixed pr issues * fix hugo main.yml * fix hugo main.yml * added tests * fixed prt issue * added iv_surface failing tests * added mocking to llama tests * new tests * skipped rich test for now
https://github.com/OpenBB-finance/OpenBBTerminal.git
def get_defi_protocols() -> pd.DataFrame:
    response = requests.get(API_URL + "/protocols")
    columns = [
        "name",
        "symbol",
        "category",
        "chains",
        "change_1h",
        "change_1d",
        "change_7d",
        "tvl",
        "url",
        "description",
        "chain",
    ]
    if response.status_code != 200:
        raise Exception(f"Status code: {response.status_code}. Reason: {response.text}")
    try:
        df = pd.DataFrame(response.json())
        df.replace({float(np.nan): None}, inplace=True)
        df["chains"] = df["chains"].apply(
            lambda x: "\n".join(textwrap.wrap(", ".join(x), width=50))
        )
        df["description"] = df["description"].apply(
            lambda x: "\n".join(textwrap.wrap(x, width=70)) if isinstance(x, str) else x
        )
        return df[columns]
    except Exception as e:
        raise ValueError("Wrong response type\n") from e
184
llama_model.py
Python
gamestonk_terminal/cryptocurrency/defi/llama_model.py
d334d5e0878961d2b6cfda82271693d457047bee
OpenBBTerminal
4
126,674
27
13
10
112
14
0
32
110
test_failed_runtime_env_setup
Convert job_manager to be async (#27123) Updates jobs api Updates snapshot api Updates state api Increases jobs api version to 2 Signed-off-by: Alan Guo [email protected] Why are these changes needed? follow-up for #25902 (comment)
https://github.com/ray-project/ray.git
async def test_failed_runtime_env_setup(self, job_manager):
    run_cmd = f"python {_driver_script_path('override_env_var.py')}"
    job_id = await job_manager.submit_job(
        entrypoint=run_cmd, runtime_env={"working_dir": "s3://does_not_exist.zip"}
    )

    await async_wait_for_condition_async_predicate(
        check_job_failed, job_manager=job_manager, job_id=job_id
    )

    data = await job_manager.get_job_info(job_id)
    assert "runtime_env setup failed" in data.message
59
test_job_manager.py
Python
dashboard/modules/job/tests/test_job_manager.py
326b5bd1acc6d3d00ab0546e4ae45da6bed501f7
ray
1
48,723
2
6
6
13
2
0
2
9
test_empty_html_checkbox_not_required
Fix BooleanField's allow_null behavior (#8614) * Fix BooleanField's allow_null behavior * Update rest_framework.fields - Use .get with default value for 'allow_null' kwarg in BooleanField's init
https://github.com/encode/django-rest-framework.git
def test_empty_html_checkbox_not_required(self):
51
test_fields.py
Python
tests/test_fields.py
1fbe16a8d26ff5be64797cafb7004898f72ca52b
django-rest-framework
1
130,091
89
12
6
125
14
0
118
151
get_incremental_data
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def get_incremental_data(self, day=0):
    start = self._get_day_slice(day - 1)
    end = self._get_day_slice(day)
    available_data = Subset(self.dataset, list(range(start, end)))
    train_n = int(0.8 * (end - start))  # 80% train data, 20% validation data
    return random_split(available_data, [train_n, end - start - train_n])


#######################################################################
# PyTorch neural network classifier
# ---------------------------------
# Next, we will introduce our PyTorch neural network model and the
# train and test function. These are adapted directly from
# our :doc:`PyTorch MNIST example </tune/examples/mnist_pytorch>`.
# We only introduced an additional neural network layer with a configurable
# layer size. This is not strictly needed for learning good performance on
# MNIST, but it is useful to demonstrate scenarios where your hyperparameter
# search space affects the model complexity.
75
tune-serve-integration-mnist.py
Python
doc/source/tune/_tutorials/tune-serve-integration-mnist.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
1
45,040
7
8
16
26
6
0
7
21
test_set_serialize_call_old_signature
Add params dag_id, task_id etc to XCom.serialize_value (#19505) When implementing a custom XCom backend, in order to store XCom objects organized by dag_id, run_id etc, we need to pass those params to `serialize_value`.
https://github.com/apache/airflow.git
def test_set_serialize_call_old_signature(self, get_import, session):
    serialize_watcher = MagicMock()
82
test_xcom.py
Python
tests/models/test_xcom.py
56285eee04285d8b6fac90911248d7e9dd5504d8
airflow
1
120,210
9
8
2
31
3
0
9
11
mock_4x8x16_devices
[mesh_utils] Support creating device meshes for hybrid networks Also makes some NFCs to other mesh_utils code. PiperOrigin-RevId: 442581767
https://github.com/google/jax.git
def mock_4x8x16_devices(one_device_per_chip):
    return mock_devices(4, 8, 16, 'TPU v4', one_device_per_chip)
19
mesh_utils_test.py
Python
tests/mesh_utils_test.py
3f9e45e0c5b035de27b14588cd3b4cfd5f3c1f04
jax
1
248,552
30
10
20
135
14
0
49
247
test_random_users_cannot_send_state_before_first_pl
EventAuthTestCase: build events for the right room version In practice, when we run the auth rules, all of the events have the right room version. Let's stop building Room V1 events for these tests and use the right version.
https://github.com/matrix-org/synapse.git
def test_random_users_cannot_send_state_before_first_pl(self):
    creator = "@creator:example.com"
    joiner = "@joiner:example.com"
    auth_events = [
        _create_event(RoomVersions.V1, creator),
        _join_event(RoomVersions.V1, creator),
        _join_event(RoomVersions.V1, joiner),
    ]

    # creator should be able to send state
    event_auth.check_auth_rules_for_event(
        RoomVersions.V1,
        _random_state_event(RoomVersions.V1, creator),
        auth_events,
    )

    # joiner should not be able to send state
    self.assertRaises(
        AuthError,
        event_auth.check_auth_rules_for_event,
        RoomVersions.V1,
        _random_state_event(RoomVersions.V1, joiner),
        auth_events,
    )
89
test_event_auth.py
Python
tests/test_event_auth.py
2959184a42398277ff916206235b844a8f7be5d7
synapse
1
276,845
23
8
4
41
4
0
25
71
get
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def get(self, object_id):
    # Explicitly check for `None` internally to make external calling code a
    # bit cleaner.
    if object_id is None:
        return
    return self._obj_ids_to_obj.get(object_id)
23
generic_utils.py
Python
keras/utils/generic_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
274,615
20
12
9
89
8
0
24
67
get
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def get(identifier):
    if isinstance(identifier, dict):
        return deserialize(identifier)
    elif isinstance(identifier, str):
        return deserialize(str(identifier))
    elif callable(identifier):
        return identifier
    else:
        raise ValueError(f"Could not interpret metric identifier: {identifier}")
51
__init__.py
Python
keras/metrics/__init__.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
4