column          type     values
id              int64    20 – 338k
vocab_size      int64    2 – 671
ast_levels      int64    4 – 32
nloc            int64    1 – 451
n_ast_nodes     int64    12 – 5.6k
n_identifiers   int64    1 – 186
n_ast_errors    int64    0 – 10
n_words         int64    2 – 2.17k
n_whitespaces   int64    2 – 13.8k
fun_name        string   lengths 2 – 73
commit_message  string   lengths 51 – 15.3k
url             string   lengths 31 – 59
code            string   lengths 51 – 31k
ast_errors      string   lengths 0 – 1.46k
token_counts    int64    6 – 3.32k
file_name       string   lengths 5 – 56
language        string   1 class (Python)
path            string   lengths 7 – 134
commit_id       string   lengths 40 – 40
repo            string   lengths 3 – 28
complexity      int64    1 – 153
id: 286,618 | vocab_size: 106 | ast_levels: 18 | nloc: 73 | n_ast_nodes: 549 | n_identifiers: 30 | n_ast_errors: 0 | n_words: 178 | n_whitespaces: 729
fun_name: get_orders
commit_message:
[IMPROVE] Fix Docstring formatting/Fix missing, incomplete type hints (#3412) * Fixes * Update stocks_helper.py * update git-actions set-output to new format * Update stocks_helper.py * Update terminal_helper.py * removed LineAnnotateDrawer from qa_view * lint * few changes * updates * sdk auto gen modules done * Update stocks_helper.py * updates to changed imports, and remove first sdk_modules * Update generate_sdk.py * Update generate_sdk.py * pylint * revert stocks_helper * Update generate_sdk.py * Update sdk.py * Update generate_sdk.py * full auto generation, added sdk.py/controllers creation * missed enable forecasting * added running black in subprocess after sdk files generation completes * removed deleted sdk_arg_logger * comment out tests * property doc fix * clean up * Update generate_sdk.py * make trailmap classes useable for doc generation * Update generate_sdk.py * added lineon to trailmap class for linking to func in markdown * changed lineon to dict * added full_path to trailmap for linking in docs * updated portfolio * feat: initial files * feat: added meta head * feat: added funcdef * added func_def to trailmap attributes for markdown in docs, added missing type hints to covid functions * feat: added view and merged with jaun * Update generate_sdk.py * Update generate_sdk.py * Update generate_sdk.py * Update generate_sdk.py * init * fix returns * fix: random stuff * fix: random * fixed encoding issue on windows * fix: generate tabs * update * Update generate_sdk_markdown.py * Create .pydocstyle.ini * added type hint classes for views * fixes * alt, ba * alt-economy * Update finviz_compare_model.py * fixs * Update substack_model.py * Update generate_sdk.py * last of my section * porfolio * po * Update optimizer_model.py * fixing more things * few more * keys done * update * fixes * Update generate_sdk_markdown.py * Update generate_sdk_markdown.py * mypy forecast fix * Update generate_sdk_markdown.py * Update generate_sdk_markdown.py * Update generate_sdk_markdown.py * fixes * forecast fixes * one more fix * Update coinbase_model.py * Update generate_sdk_markdown.py Co-authored-by: Colin Delahunty <[email protected]> Co-authored-by: James Maslek <[email protected]> Co-authored-by: jose-donato <[email protected]> Co-authored-by: andrewkenreich <[email protected]>
url: https://github.com/OpenBB-finance/OpenBBTerminal.git
code:
def get_orders() -> Tuple[str, DataFrame]:
    url_orders = (
        "https://eresearch.fidelity.com/eresearch/gotoBL/fidelityTopOrders.jhtml"
    )
    text_soup_url_orders = BeautifulSoup(
        requests.get(url_orders, headers={"User-Agent": get_user_agent()}).text,
        "lxml",
    )
    l_orders = []
    l_orders_vals = []
    idx = 0
    order_list = text_soup_url_orders.findAll(
        "td",
        {"class": ["second", "third", "fourth", "fifth", "sixth", "seventh", "eight"]},
    )
    for an_order in order_list:
        if ((idx + 1) % 3 == 0) or ((idx + 1) % 4 == 0) or ((idx + 1) % 6 == 0):
            if not an_order:
                l_orders_vals.append("")
            else:
                try:
                    l_orders_vals.append(an_order.contents[1])
                except IndexError:
                    l_orders_vals.append("0")
        elif (idx + 1) % 5 == 0:
            s_orders = str(an_order)
            l_orders_vals.append(
                s_orders[
                    s_orders.find('title="') + len('title="') : s_orders.find('"/>')
                ]
            )
        else:
            l_orders_vals.append(an_order.text.strip())
        idx += 1

        # Add value to dictionary
        if (idx + 1) % 8 == 0:
            l_orders.append(l_orders_vals)
            l_orders_vals = []
            idx = 0
    df_orders = pd.DataFrame(
        l_orders,
        columns=[
            "Symbol",
            "Company",
            "Price Change",
            "# Buy Orders",
            "Buy / Sell Ratio",
            "# Sell Orders",
            "Latest News",
        ],
    )
    df_orders = df_orders[
        [
            "Symbol",
            "Buy / Sell Ratio",
            "Price Change",
            "Company",
            "# Buy Orders",
            "# Sell Orders",
            "Latest News",
        ]
    ]
    order_header = text_soup_url_orders.findAll("span", {"class": "source"})[
        0
    ].text.capitalize()
    return order_header, df_orders
token_counts: 318 | file_name: fidelity_model.py | language: Python
path: openbb_terminal/stocks/discovery/fidelity_model.py
commit_id: 59d8b36bb0467a1a99513b10e8b8471afaa56fd6 | repo: OpenBBTerminal | complexity: 9

id: 9,882 | vocab_size: 25 | ast_levels: 18 | nloc: 13 | n_ast_nodes: 134 | n_identifiers: 17 | n_ast_errors: 0 | n_words: 29 | n_whitespaces: 260
fun_name: activate
commit_message:
feat: star routing (#3900) * feat(proto): adjust proto for star routing (#3844) * feat(proto): adjust proto for star routing * feat(proto): generate proto files * feat(grpc): refactor grpclet interface (#3846) * feat: refactor connection pool for star routing (#3872) * feat(k8s): add more labels to k8s deployments * feat(network): refactor connection pool * feat(network): refactor k8s pool * feat: star routing graph gateway (#3877) * feat: star routing - refactor grpc data runtime (#3887) * feat(runtimes): refactor grpc dataruntime * fix(tests): adapt worker runtime tests * fix(import): fix import * feat(proto): enable sending multiple lists (#3891) * feat: star routing gateway (#3893) * feat: star routing gateway all protocols (#3897) * test: add streaming and prefetch tests (#3901) * feat(head): new head runtime for star routing (#3899) * feat(head): new head runtime * feat(head): new head runtime * style: fix overload and cli autocomplete * feat(network): improve proto comments Co-authored-by: Jina Dev Bot <[email protected]> * feat(worker): merge docs in worker runtime (#3905) * feat(worker): merge docs in worker runtime * feat(tests): assert after clean up * feat(tests): star routing runtime integration tests (#3908) * fix(tests): fix integration tests * test: test runtimes fast slow request (#3910) * feat(zmq): purge zmq, zed, routing_table (#3915) * feat(zmq): purge zmq, zed, routing_table * style: fix overload and cli autocomplete * feat(zmq): adapt comment in dependency list * style: fix overload and cli autocomplete * fix(tests): fix type tests Co-authored-by: Jina Dev Bot <[email protected]> * test: add test gateway to worker connection (#3921) * feat(pea): adapt peas for star routing (#3918) * feat(pea): adapt peas for star routing * style: fix overload and cli autocomplete * feat(pea): add tests * feat(tests): add failing head pea test Co-authored-by: Jina Dev Bot <[email protected]> * feat(tests): integration tests for peas (#3923) * feat(tests): integration tests for peas * feat(pea): remove _inner_pea function * feat: star routing container pea (#3922) * test: rescue tests (#3942) * fix: fix streaming tests (#3945) * refactor: move docker run to run (#3948) * feat: star routing pods (#3940) * feat(pod): adapt pods for star routing * feat(pods): adapt basepod to star routing * feat(pod): merge pod and compound pod * feat(tests): fix tests * style: fix overload and cli autocomplete * feat(test): add container pea int test * feat(ci): remove more unnecessary tests * fix(tests): remove jinad runtime * feat(ci): remove latency tracking * fix(ci): fix ci def * fix(runtime): enable runtime to be exited * fix(tests): wrap runtime test in process * fix(runtimes): remove unused runtimes * feat(runtimes): improve cancel wait * fix(ci): build test pip again in ci * fix(tests): fix a test * fix(test): run async in its own process * feat(pod): include shard in activate msg * fix(pea): dont join * feat(pod): more debug out * feat(grpc): manage channels properly * feat(pods): remove exitfifo * feat(network): add simple send retry mechanism * fix(network): await pool close * fix(test): always close grpc server in worker * fix(tests): remove container pea from tests * fix(tests): reorder tests * fix(ci): split tests * fix(ci): allow alias setting * fix(test): skip a test * feat(pods): address comments Co-authored-by: Jina Dev Bot <[email protected]> * test: unblock skipped test (#3957) * feat: jinad pea (#3949) * feat: jinad pea * feat: jinad pea * test: remote peas * test: toplogy tests 
with jinad * ci: parallel jobs * feat(tests): add pod integration tests (#3958) * feat(tests): add pod integration tests * fix(tests): make tests less flaky * fix(test): fix test * test(pea): remote pea topologies (#3961) * test(pea): remote pea simple topology * test: remote pea topologies * refactor: refactor streamer result handling (#3960) * feat(k8s): adapt K8s Pod for StarRouting (#3964) * test: optimize k8s test * test: increase timeout and use different namespace * test: optimize k8s test * test: build and load image when needed * test: refactor k8s test * test: fix image name error * test: fix k8s image load * test: fix typoe port expose * test: update tests in connection pool and handling * test: remove unused fixture * test: parameterize docker images * test: parameterize docker images * test: parameterize docker images * feat(k8s): adapt k8s pod for star routing * fix(k8s): dont overwrite add/remove function in pool * fix(k8s): some fixes * fix(k8s): some more fixes * fix(k8s): linting * fix(tests): fix tests * fix(tests): fix k8s unit tests * feat(k8s): complete k8s integration test * feat(k8s): finish k8s tests * feat(k8s): fix test * fix(tests): fix test with no name * feat(k8s): unify create/replace interface * feat(k8s): extract k8s port constants * fix(tests): fix tests * fix(tests): wait for runtime being ready in tests * feat(k8s): address comments Co-authored-by: bwanglzu <[email protected]> * feat(flow): adapt Flow for StarRouting (#3986) * feat(flow): add routes * feat(flow): adapt flow to star routing * style: fix overload and cli autocomplete * feat(flow): handle empty topologies * feat(k8s): allow k8s pool disabling * style: fix overload and cli autocomplete * fix(test): fix test with mock * fix(tests): fix more tests * feat(flow): clean up tests * style: fix overload and cli autocomplete * fix(tests): fix more tests * feat: add plot function (#3994) * fix(tests): avoid hanging tests * feat(flow): add type hinting * fix(test): fix duplicate exec name in test * fix(tests): fix more tests * fix(tests): enable jinad test again * fix(tests): random port fixture * fix(style): replace quotes Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * feat(ci): bring back ci (#3997) * feat(ci): enable ci again * style: fix overload and cli autocomplete * feat(ci): add latency tracking * feat(ci): bring back some tests * fix(tests): remove invalid port test * feat(ci): disable daemon and distributed tests * fix(tests): fix entrypoint in hub test * fix(tests): wait for gateway to be ready * fix(test): fix more tests * feat(flow): do rolling update and scale sequentially * fix(tests): fix more tests * style: fix overload and cli autocomplete * feat: star routing hanging pods (#4011) * fix: try to handle hanging pods better * test: hanging pods test work * fix: fix topology graph problem * test: add unit test to graph * fix(tests): fix k8s tests * fix(test): fix k8s test * fix(test): fix k8s pool test * fix(test): fix k8s test * fix(test): fix k8s connection pool setting * fix(tests): make runtime test more reliable * fix(test): fix routes test * fix(tests): make rolling update test less flaky * feat(network): gurantee unique ports * feat(network): do round robin for shards * fix(ci): increase pytest timeout to 10 min Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * fix(ci): fix ci file * feat(daemon): jinad pod for star routing * Revert "feat(daemon): jinad pod for star routing" 
This reverts commit ed9b37ac862af2e2e8d52df1ee51c0c331d76f92. * feat(daemon): remote jinad pod support (#4042) * feat(daemon): add pod tests for star routing * feat(daemon): add remote pod test * test(daemon): add remote pod arguments test * test(daemon): add async scale test * test(daemon): add rolling update test * test(daemon): fix host * feat(proto): remove message proto (#4051) * feat(proto): remove message proto * fix(tests): fix tests * fix(tests): fix some more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * feat(proto): put docs back in data * fix(proto): clean up * feat(proto): clean up * fix(tests): skip latency tracking * fix(test): fix hub test * fix(tests): fix k8s test * fix(test): some test clean up * fix(style): clean up style issues * feat(proto): adjust for rebase * fix(tests): bring back latency tracking * fix(tests): fix merge accident * feat(proto): skip request serialization (#4074) * feat: add reduce to star routing (#4070) * feat: add reduce on shards to head runtime * test: add reduce integration tests with fixed order * feat: add reduce on needs * chore: get_docs_matrix_from_request becomes public * style: fix overload and cli autocomplete * docs: remove undeterministic results warning * fix: fix uses_after * test: assert correct num docs after reducing in test_external_pod * test: correct asserts after reduce in test_rolling_update * fix: no reduce if uses_after_address is set * fix: get_docs_from_request only if needed * fix: fix tests after merge * refactor: move reduce from data_request_handler to head * style: fix overload and cli autocomplete * chore: apply suggestions * fix: fix asserts * chore: minor test fix * chore: apply suggestions * test: remove flow tests with external executor (pea) * fix: fix test_expected_messages_routing * fix: fix test_func_joiner * test: adapt k8s test Co-authored-by: Jina Dev Bot <[email protected]> * fix(k8s): fix static pool config * fix: use custom protoc doc generator image (#4088) * fix: use custom protoc doc generator image * fix(docs): minor doc improvement * fix(docs): use custom image * fix(docs): copy docarray * fix: doc building local only * fix: timeout doc building * fix: use updated args when building ContainerPea * test: add container PeaFactory test * fix: force pea close on windows (#4098) * fix: dont reduce if uses exist (#4099) * fix: dont use reduce if uses exist * fix: adjust reduce tests * fix: adjust more reduce tests * fix: fix more tests * fix: adjust more tests * fix: ignore non jina resources (#4101) * feat(executor): enable async executors (#4102) * feat(daemon): daemon flow on star routing (#4096) * test(daemon): add remote flow test * feat(daemon): call scale in daemon * feat(daemon): remove tail args and identity * test(daemon): rename scalable executor * test(daemon): add a small delay in async test * feat(daemon): scale partial flow only * feat(daemon): call scale directly in partial flow store * test(daemon): use asyncio sleep * feat(daemon): enable flow level distributed tests * test(daemon): fix jinad env workspace config * test(daemon): fix pod test use new port rolling update * feat(daemon): enable distribuetd tests * test(daemon): remove duplicate tests and zed runtime test * test(daemon): fix stores unit test * feat(daemon): enable part of distributed tests * feat(daemon): enable part of distributed tests * test: correct test paths * test(daemon): add client test for remote flows * test(daemon): send a request 
with jina client * test(daemon): assert async generator * test(daemon): small interval between tests * test(daemon): add flow test for container runtime * test(daemon): add flow test for container runtime * test(daemon): fix executor name * test(daemon): fix executor name * test(daemon): use async client fetch result * test(daemon): finish container flow test * test(daemon): enable distributed in ci * test(daemon): enable distributed in ci * test(daemon): decare flows and pods * test(daemon): debug ci if else * test(daemon): debug ci if else * test(daemon): decare flows and pods * test(daemon): correct test paths * test(daemon): add small delay for async tests * fix: star routing fixes (#4100) * docs: update docs * fix: fix Request.__repr__ * docs: update flow remarks * docs: fix typo * test: add non_empty_fields test * chore: remove non_empty_fields test * feat: polling per endpoint (#4111) * feat(polling): polling per endpoint configurable * fix: adjust tests * feat(polling): extend documentation * style: fix overload and cli autocomplete * fix: clean up * fix: adjust more tests * fix: remove repeat from flaky test * fix: k8s test * feat(polling): address pr feedback * feat: improve docs Co-authored-by: Jina Dev Bot <[email protected]> * feat(grpc): support connect grpc server via ssl tunnel (#4092) * feat(grpc): support ssl grpc connect if port is 443 * fix(grpc): use https option instead of detect port automatically * chore: fix typo * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * test(networking): add test for peapods networking * fix: address comments Co-authored-by: Joan Fontanals <[email protected]> * feat(polling): unify polling args (#4113) * fix: several issues for jinad pods (#4119) * fix: activate for jinad pods * fix: dont expose worker pod in partial daemon * fix: workspace setting * fix: containerized flows * fix: hub test * feat(daemon): remote peas on star routing (#4112) * test(daemon): fix request in peas * test(daemon): fix request in peas * test(daemon): fix sync async client test * test(daemon): enable remote peas test * test(daemon): replace send message to send request * test(daemon): declare pea tests in ci * test(daemon): use pea args fixture * test(daemon): head pea use default host * test(daemon): fix peas topologies * test(daemon): fix pseudo naming * test(daemon): use default host as host * test(daemon): fix executor path * test(daemon): add remote worker back * test(daemon): skip local remote remote topology * fix: jinad pea test setup * fix: jinad pea tests * fix: remove invalid assertion Co-authored-by: jacobowitz <[email protected]> * feat: enable daemon tests again (#4132) * feat: enable daemon tests again * fix: remove bogy empty script file * fix: more jinad test fixes * style: fix overload and cli autocomplete * fix: scale and ru in jinad * fix: fix more jinad tests Co-authored-by: Jina Dev Bot <[email protected]> * fix: fix flow test * fix: improve pea tests reliability (#4136) Co-authored-by: Joan Fontanals <[email protected]> Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Deepankar Mahapatro <[email protected]> Co-authored-by: bwanglzu <[email protected]> Co-authored-by: AlaeddineAbdessalem <[email protected]> Co-authored-by: Zhaofeng Miao <[email protected]>
url: https://github.com/jina-ai/jina.git
code:
def activate(self):
    if self.head_pea is not None:
        for shard_id in self.peas_args['peas']:
            for pea_idx, pea_args in enumerate(self.peas_args['peas'][shard_id]):
                worker_host = self.get_worker_host(
                    pea_args, self.shards[shard_id]._peas[pea_idx], self.head_pea
                )
                GrpcConnectionPool.activate_worker_sync(
                    worker_host,
                    int(pea_args.port_in),
                    self.head_pea.runtime_ctrl_address,
                    shard_id,
                )
token_counts: 88 | file_name: __init__.py | language: Python
path: jina/peapods/pods/__init__.py
commit_id: 933415bfa1f9eb89f935037014dfed816eb9815d | repo: jina | complexity: 4

id: 21,797 | vocab_size: 6 | ast_levels: 6 | nloc: 4 | n_ast_nodes: 22 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 6 | n_whitespaces: 20
fun_name: is_super_table
commit_message:
Update tomlkit==0.9.2 Used: python -m invoke vendoring.update --package=tomlkit
url: https://github.com/pypa/pipenv.git
code:
def is_super_table(self) -> bool:
    return self._is_super_table
token_counts: 12 | file_name: items.py | language: Python
path: pipenv/vendor/tomlkit/items.py
commit_id: 8faa74cdc9da20cfdcc69f5ec29b91112c95b4c9 | repo: pipenv | complexity: 1

id: 290,736 | vocab_size: 5 | ast_levels: 6 | nloc: 4 | n_ast_nodes: 17 | n_identifiers: 2 | n_ast_errors: 0 | n_words: 5 | n_whitespaces: 12
fun_name: async_reset_adapter
commit_message:
Minor refactor of zha config flow (#82200) * Minor refactor of zha config flow * Move ZhaRadioManager to a separate module
url: https://github.com/home-assistant/core.git
code:
async def async_reset_adapter(self) -> None:
token_counts: 24 | file_name: radio_manager.py | language: Python
path: homeassistant/components/zha/radio_manager.py
commit_id: bb64b39d0e6d41f531af9c63b69d1ce243a2751b | repo: core | complexity: 1

id: 107,162 | vocab_size: 51 | ast_levels: 12 | nloc: 19 | n_ast_nodes: 294 | n_identifiers: 25 | n_ast_errors: 0 | n_words: 58 | n_whitespaces: 317
fun_name: test_align_labels
commit_message:
ENH: implement and use base layout_engine for more flexible layout.
url: https://github.com/matplotlib/matplotlib.git
code:
def test_align_labels():
    fig, (ax3, ax1, ax2) = plt.subplots(3, 1, layout="constrained",
                                        figsize=(6.4, 8),
                                        gridspec_kw={"height_ratios": (1, 1, 0.7)})
    ax1.set_ylim(0, 1)
    ax1.set_ylabel("Label")
    ax2.set_ylim(-1.5, 1.5)
    ax2.set_ylabel("Label")
    ax3.set_ylim(0, 1)
    ax3.set_ylabel("Label")
    fig.align_ylabels(axs=(ax3, ax1, ax2))
    fig.draw_without_rendering()
    after_align = [ax1.yaxis.label.get_window_extent(),
                   ax2.yaxis.label.get_window_extent(),
                   ax3.yaxis.label.get_window_extent()]
    # ensure labels are approximately aligned
    np.testing.assert_allclose([after_align[0].x0, after_align[2].x0],
                               after_align[1].x0, rtol=0, atol=1e-05)
    # ensure labels do not go off the edge
    assert after_align[0].x0 >= 1
token_counts: 200 | file_name: test_constrainedlayout.py | language: Python
path: lib/matplotlib/tests/test_constrainedlayout.py
commit_id: ec4dfbc3c83866f487ff0bc9c87b0d43a1c02b22 | repo: matplotlib | complexity: 1

id: 261,039 | vocab_size: 18 | ast_levels: 11 | nloc: 6 | n_ast_nodes: 93 | n_identifiers: 12 | n_ast_errors: 0 | n_words: 20 | n_whitespaces: 42
fun_name: test_convert_to_numpy_error
commit_message:
ENH Adds Array API support to LinearDiscriminantAnalysis (#22554) Co-authored-by: Olivier Grisel <[email protected]> Co-authored-by: Julien Jerphanion <[email protected]>
url: https://github.com/scikit-learn/scikit-learn.git
code:
def test_convert_to_numpy_error():
    xp = pytest.importorskip("numpy.array_api")
    xp_ = _AdjustableNameAPITestWrapper(xp, "wrapped.array_api")
    X = xp_.asarray([1.2, 3.4])
    with pytest.raises(ValueError, match="Supported namespaces are:"):
        _convert_to_numpy(X, xp=xp_)
token_counts: 57 | file_name: test_array_api.py | language: Python
path: sklearn/utils/tests/test_array_api.py
commit_id: 2710a9e7eefd2088ce35fd2fb6651d5f97e5ef8b | repo: scikit-learn | complexity: 1

id: 20,592 | vocab_size: 6 | ast_levels: 8 | nloc: 5 | n_ast_nodes: 33 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 6 | n_whitespaces: 20
fun_name: validate
commit_message:
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
url: https://github.com/pypa/pipenv.git
code:
def validate(self, validateTrace=None) -> None:
    self._checkRecursion([])
token_counts: 19 | file_name: core.py | language: Python
path: pipenv/patched/notpip/_vendor/pyparsing/core.py
commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3 | repo: pipenv | complexity: 1

id: 34,912 | vocab_size: 16 | ast_levels: 10 | nloc: 5 | n_ast_nodes: 38 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 18 | n_whitespaces: 48
fun_name: has_length
commit_message:
[Trainer] Deeper length checks for IterableDatasetShard (#15539) * Unused import * Make `has_length()` torch-independent to use in callbacks * Update src/transformers/trainer_utils.py Co-authored-by: Sylvain Gugger <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]>
url: https://github.com/huggingface/transformers.git
code:
def has_length(dataset):
    try:
        return len(dataset) is not None
    except TypeError:
        # TypeError: len() of unsized object
        return False
token_counts: 21 | file_name: trainer_utils.py | language: Python
path: src/transformers/trainer_utils.py
commit_id: 75b13f82e91d03bed88bf6cf0e2efb85346fb311 | repo: transformers | complexity: 2

id: 281,596 | vocab_size: 114 | ast_levels: 11 | nloc: 21 | n_ast_nodes: 213 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 159 | n_whitespaces: 294
fun_name: about_us
commit_message:
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <[email protected]> Co-authored-by: Chavithra PARANA <[email protected]> Co-authored-by: james <[email protected]> Co-authored-by: jose-donato <[email protected]>
url: https://github.com/OpenBB-finance/OpenBBTerminal.git
code:
def about_us():
    console.print(
        f"{Fore.GREEN}Thanks for using Gamestonk Terminal. This is our way!{Style.RESET_ALL}\n"
        "\n"
        f"{Fore.CYAN}Join our community on discord: {Style.RESET_ALL}https://discord.gg/Up2QGbMKHY\n"
        f"{Fore.CYAN}Follow our twitter for updates: {Style.RESET_ALL}https://twitter.com/gamestonkt\n"
        f"{Fore.CYAN}Access our landing page: {Style.RESET_ALL}https://gamestonkterminal.vercel.app\n"
        "\n"
        f"{Fore.YELLOW}Partnerships:{Style.RESET_ALL}\n"
        f"{Fore.CYAN}FinBrain: {Style.RESET_ALL}https://finbrain.tech\n"
        f"{Fore.CYAN}Quiver Quantitative: {Style.RESET_ALL}https://www.quiverquant.com\n"
        f"{Fore.CYAN}SentimentInvestor: {Style.RESET_ALL}https://sentimentinvestor.com\n"
        f"\n{Fore.RED}"
        "DISCLAIMER: Trading in financial instruments involves high risks including the risk of losing some, "
        "or all, of your investment amount, and may not be suitable for all investors. Before deciding to trade in "
        "financial instrument you should be fully informed of the risks and costs associated with trading the financial "
        "markets, carefully consider your investment objectives, level of experience, and risk appetite, and seek "
        "professional advice where needed. The data contained in Gamestonk Terminal (GST) is not necessarily accurate. "
        "GST and any provider of the data contained in this website will not accept liability for any loss or damage "
        f"as a result of your trading, or your reliance on the information displayed.{Style.RESET_ALL}"
    )
token_counts: 38 | file_name: terminal_helper.py | language: Python
path: gamestonk_terminal/terminal_helper.py
commit_id: 82747072c511beb1b2672846ae2ee4aec53eb562 | repo: OpenBBTerminal | complexity: 1

id: 243,947 | vocab_size: 12 | ast_levels: 6 | nloc: 10 | n_ast_nodes: 45 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 14 | n_whitespaces: 29
fun_name: parse_requirements
commit_message:
[Enhancement] Upgrade isort in pre-commit hook (#7130) * upgrade isort to v5.10.1 * replace known_standard_library with extra_standard_library * upgrade isort to v5.10.1 replace known_standard_library with extra_standard_library * imports order changes
url: https://github.com/open-mmlab/mmdetection.git
code:
def parse_requirements(fname='requirements.txt', with_version=True):
    import re
    import sys
    from os.path import exists
    require_fpath = fname
token_counts: 41 | file_name: setup.py | language: Python
path: setup.py
commit_id: 40505a5b6daa03632691b433197810f528231d26 | repo: mmdetection | complexity: 1

id: 4,970 | vocab_size: 30 | ast_levels: 15 | nloc: 11 | n_ast_nodes: 123 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 36 | n_whitespaces: 165
fun_name: decode
commit_message:
source salesforce: use utf8 by default and iso as fallback (#12576) * use utf8 by default and iso as fallback * test both * add comment * Bump version
url: https://github.com/airbytehq/airbyte.git
code:
def decode(self, chunk):
    if self.encoding == DEFAULT_ENCODING:
        try:
            decoded = chunk.decode(self.encoding)
            return decoded
        except UnicodeDecodeError as e:
            self.encoding = "ISO-8859-1"
            self.logger.info(f"Could not decode chunk. Falling back to {self.encoding} encoding. Error: {e}")
            return self.decode(chunk)
    else:
        return chunk.decode(self.encoding)
token_counts: 66 | file_name: streams.py | language: Python
path: airbyte-integrations/connectors/source-salesforce/source_salesforce/streams.py
commit_id: 22b67d85281252bcbe9e8923c7a973803cff9d4c | repo: airbyte | complexity: 3

id: 156,236 | vocab_size: 10 | ast_levels: 8 | nloc: 7 | n_ast_nodes: 48 | n_identifiers: 8 | n_ast_errors: 0 | n_words: 10 | n_whitespaces: 47
fun_name: _frag_subset
commit_message:
Remove pyarrow-legacy engine from parquet API (#8835) * remove pyarrow-legacy * Small fixup * Small fixup for pyarrow < 5 Co-authored-by: Jim Crist-Harif <[email protected]>
url: https://github.com/dask/dask.git
code:
def _frag_subset(old_frag, row_groups):
    return old_frag.format.make_fragment(
        old_frag.path,
        old_frag.filesystem,
        old_frag.partition_expression,
        row_groups=row_groups,
    )
token_counts: 32 | file_name: arrow.py | language: Python
path: dask/dataframe/io/parquet/arrow.py
commit_id: 0b36d7fcaf54ee9a78fff4b07f124cb0c8741cdf | repo: dask | complexity: 1

id: 213,241 | vocab_size: 25 | ast_levels: 10 | nloc: 6 | n_ast_nodes: 104 | n_identifiers: 12 | n_ast_errors: 0 | n_words: 31 | n_whitespaces: 53
fun_name: lars_update
commit_message:
added methods inplace_arrays_supported and inplace_variables_supported, and updated all gradient methods to use inplace by default only if the backend supports this. Also updated inplace methods to raise exceptions when the backend does not support it.
url: https://github.com/unifyai/ivy.git
code:
def lars_update(ws, dcdws, lr, decay_lambda=0, inplace=None, stop_gradients=True):
    ws_norm = ws.vector_norm()
    lr = _ivy.stable_divide(ws_norm * lr, dcdws.vector_norm())
    if decay_lambda > 0:
        lr /= (ws_norm * decay_lambda)
    return gradient_descent_update(ws, dcdws, lr, inplace, stop_gradients)
token_counts: 71 | file_name: gradients.py | language: Python
path: ivy/core/gradients.py
commit_id: d5b29bf3c74b116fd18192239abba21af9efaf69 | repo: ivy | complexity: 2

id: 256,644 | vocab_size: 25 | ast_levels: 11 | nloc: 13 | n_ast_nodes: 90 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 28 | n_whitespaces: 96
fun_name: get_document_store
commit_message:
Pylint: solve or silence locally rare warnings (#2170) * Remove invalid-envvar-default and logging-too-many-args * Remove import-self, access-member-before-definition and deprecated-argument * Remove used-before-assignment by restructuring type import * Remove unneeded-not * Silence unnecessary-lambda (it's necessary) * Remove pointless-string-statement * Update Documentation & Code Style * Silenced unsupported-membership-test (probably a real bug, can't fix though) * Remove trailing-newlines * Remove super-init-not-called and slience invalid-sequence-index (it's valid) * Remove invalid-envvar-default in ui * Remove some more warnings from pyproject.toml than actually solrted in code, CI will fail * Linting all modules together is more readable * Update Documentation & Code Style * Typo in pylint disable comment * Simplify long boolean statement * Simplify init call in FAISS * Fix inconsistent-return-statements * Fix useless-super-delegation * Fix useless-else-on-loop * Fix another inconsistent-return-statements * Move back pylint disable comment moved by black * Fix consider-using-set-comprehension * Fix another consider-using-set-comprehension * Silence non-parent-init-called * Update pylint exclusion list * Update Documentation & Code Style * Resolve unnecessary-else-after-break * Fix superfluous-parens * Fix no-else-break * Remove is_correctly_retrieved along with its pylint issue * Update exclusions list * Silence constructor issue in squad_data.py (method is already broken) * Fix too-many-return-statements * Fix use-dict-literal * Fix consider-using-from-import and useless-object-inheritance * Update exclusion list * Fix simplifiable-if-statements * Fix one consider-using-dict-items * Fix another consider-using-dict-items * Fix a third consider-using-dict-items * Fix last consider-using-dict-items * Fix three use-a-generator * Silence import errors on numba, tensorboardX and apex, but add comments & logs * Fix couple of mypy issues * Fix another typing issue * Silence mypy, was conflicting with more meaningful pylint issue * Fix no-else-continue * Silence unsubscriptable-object and fix an import error with importlib.metadata * Update Documentation & Code Style * Fix all no-else-raise * Update Documentation & Code Style * Fix inverted parameters in simplified if switch * Change [test] to [all] in some jobs (for typing and linting) * Add comment in haystack/schema.py on pydantic's dataclasses * Move comment from get_documents_by_id into _convert_weaviate_result_to_document in weaviate.py * Add comment on pylint silencing * Fix bug introduced rest_api/controller/search.py * Update Documentation & Code Style * Add ADR about Pydantic dataclasses * Update pydantic-dataclasses.md * Add link to Pydantic docs on Dataclasses Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
url: https://github.com/deepset-ai/haystack.git
code:
def get_document_store(self) -> Optional[BaseDocumentStore]:
    matches = self.get_nodes_by_class(class_type=BaseDocumentStore)
    if len(matches) > 1:
        raise Exception(f"Multiple Document Stores found in Pipeline: {matches}")
    if len(matches) == 0:
        return None
    else:
        return matches[0]
token_counts: 52 | file_name: base.py | language: Python
path: haystack/pipelines/base.py
commit_id: 8de1aa3e4304ddbfcd9f5a7c3cfd60dab84e73e3 | repo: haystack | complexity: 3

id: 21,648 | vocab_size: 76 | ast_levels: 15 | nloc: 40 | n_ast_nodes: 444 | n_identifiers: 44 | n_ast_errors: 0 | n_words: 117 | n_whitespaces: 473
fun_name: compatible_abstract_dep
commit_message:
Vendor in latest requirements lib and pip-shims in order to drop packaging and resolve differences in sourcing it.
url: https://github.com/pypa/pipenv.git
code:
def compatible_abstract_dep(self, other):
    from .requirements import Requirement

    if len(self.candidates) == 1 and next(iter(self.candidates)).editable:
        return self
    elif len(other.candidates) == 1 and next(iter(other.candidates)).editable:
        return other
    new_specifiers = self.specifiers & other.specifiers
    markers = set(self.markers) if self.markers else set()
    if other.markers:
        markers.add(other.markers)
    new_markers = None
    if markers:
        new_markers = Marker(" or ".join(str(m) for m in sorted(markers)))
    new_ireq = copy.deepcopy(self.requirement.ireq)
    new_ireq.req.specifier = new_specifiers
    new_ireq.req.marker = new_markers
    new_requirement = Requirement.from_line(format_requirement(new_ireq))
    compatible_versions = self.compatible_versions(other)
    if isinstance(compatible_versions, AbstractDependency):
        return compatible_versions
    candidates = [
        c
        for c in self.candidates
        if parse(version_from_ireq(c)) in compatible_versions
    ]
    dep_dict = {}
    candidate_strings = [format_requirement(c) for c in candidates]
    for c in candidate_strings:
        if c in self.dep_dict:
            dep_dict[c] = self.dep_dict.get(c)
    return AbstractDependency(
        name=self.name,
        specifiers=new_specifiers,
        markers=new_markers,
        candidates=candidates,
        requirement=new_requirement,
        parent=self.parent,
        dep_dict=dep_dict,
        finder=self.finder,
    )
token_counts: 285 | file_name: dependencies.py | language: Python
path: pipenv/vendor/requirementslib/models/dependencies.py
commit_id: 8a4d2eb130fd173466310f59df607ea59bfc44a5 | repo: pipenv | complexity: 15

id: 76,993 | vocab_size: 21 | ast_levels: 11 | nloc: 9 | n_ast_nodes: 78 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 25 | n_whitespaces: 104
fun_name: get_list_filter
commit_message:
Add internationalisation UI to modeladmin Includes changes from #6230 Co-Authored-By: Dan Braghis <[email protected]>
url: https://github.com/wagtail/wagtail.git
code:
def get_list_filter(self, request):
    list_filter = self.list_filter
    if (
        getattr(settings, "WAGTAIL_I18N_ENABLED", False)
        and issubclass(self.model, TranslatableMixin)
        and "locale" not in list_filter
    ):
        list_filter += ("locale",)
    return list_filter
token_counts: 47 | file_name: options.py | language: Python
path: wagtail/contrib/modeladmin/options.py
commit_id: 483b7d27b3a16bfb7098bcbd581d951f1370b0dc | repo: wagtail | complexity: 4

id: 42,123 | vocab_size: 187 | ast_levels: 18 | nloc: 42 | n_ast_nodes: 589 | n_identifiers: 47 | n_ast_errors: 0 | n_words: 328 | n_whitespaces: 779
fun_name: _default_color
commit_message:
Fix: shifting margins caused by multiple rugplots (#2953) * Fix: shifting margins caused by multiple rugplots This fix prevents margins from increasing when multiple rugplots are added to the same `ax`, even if `expand_margins` is `False`. As a minimum reproducible example: ```py import seaborn as sns; sns.set() import numpy as np import matplotlib.pyplot as plt values = np.linspace(start=0, stop=1, num=5) ax = sns.lineplot(x=values, y=values) sns.rugplot(x=values, ax=ax) ylim = ax.get_ylim() for _ in range(4): sns.rugplot(x=values, ax=ax, expand_margins=False) if not all(a == b for a, b in zip(ylim, ax.get_ylim())): print(f'{ylim} != {ax.get_ylim()}') plt.suptitle("Example showing that multiple rugplots cause issues") plt.show() ``` Running the above code: ```sh $ pip install seaborn numpy matplotlib; python3 test.py (-0.1, 1.1) != (-0.61051, 1.14641) ``` This bug was caused by how seaborn detects the correct colors to use. In `seaborn/utils.py`, in method `_default_color`, the following line used to resolve to `ax.plot([], [], **kws)`: ```py scout, = method([], [], **kws) ``` But matplotlib has the parameters `scalex` and `scaley` of `ax.plot` set to `True` by default. Matplotlib would see that the rug was already on the `ax` from the previous call to `sns.rugplot`, and so it would rescale the x and y axes. This caused the content of the plot to take up less and less space, with larger and larger margins as more rugplots were added. The fix sets `scalex` and `scaley` both to `False`, since this plot method is a null-plot and is only used to check what colours should be used: ```py scout, = method([], [], scalex=False, scaley=False, **kws) ``` The above line is within an if-elif-else branch, but the other branches do not suffer this bug and so no change is needed for them. An additional unit test was also added to catch this bug: `test_multiple_rugs`. * Update PR based on comments * Remove unused import
url: https://github.com/mwaskom/seaborn.git
code:
def _default_color(method, hue, color, kws):
    if hue is not None:
        # This warning is probably user-friendly, but it's currently triggered
        # in a FacetGrid context and I don't want to mess with that logic right now
        # if color is not None:
        #     msg = "`color` is ignored when `hue` is assigned."
        #     warnings.warn(msg)
        return None
    if color is not None:
        return color
    elif method.__name__ == "plot":
        scout, = method([], [], scalex=False, scaley=False, **kws)
        color = scout.get_color()
        scout.remove()
    elif method.__name__ == "scatter":
        # Matplotlib will raise if the size of x/y don't match s/c,
        # and the latter might be in the kws dict
        scout_size = max(
            np.atleast_1d(kws.get(key, [])).shape[0]
            for key in ["s", "c", "fc", "facecolor", "facecolors"]
        )
        scout_x = scout_y = np.full(scout_size, np.nan)
        scout = method(scout_x, scout_y, **kws)
        facecolors = scout.get_facecolors()
        if not len(facecolors):
            # Handle bug in matplotlib <= 3.2 (I think)
            # This will limit the ability to use non color= kwargs to specify
            # a color in versions of matplotlib with the bug, but trying to
            # work out what the user wanted by re-implementing the broken logic
            # of inspecting the kwargs is probably too brittle.
            single_color = False
        else:
            single_color = np.unique(facecolors, axis=0).shape[0] == 1
        # Allow the user to specify an array of colors through various kwargs
        if "c" not in kws and single_color:
            color = to_rgb(facecolors[0])
        scout.remove()
    elif method.__name__ == "bar":
        # bar() needs masked, not empty data, to generate a patch
        scout, = method([np.nan], [np.nan], **kws)
        color = to_rgb(scout.get_facecolor())
        scout.remove()
    elif method.__name__ == "fill_between":
        # There is a bug on matplotlib < 3.3 where fill_between with
        # datetime units and empty data will set incorrect autoscale limits
        # To workaround it, we'll always return the first color in the cycle.
        # https://github.com/matplotlib/matplotlib/issues/17586
        ax = method.__self__
        datetime_axis = any([
            isinstance(ax.xaxis.converter, mpl.dates.DateConverter),
            isinstance(ax.yaxis.converter, mpl.dates.DateConverter),
        ])
        if Version(mpl.__version__) < Version("3.3") and datetime_axis:
            return "C0"
        kws = _normalize_kwargs(kws, mpl.collections.PolyCollection)
        scout = method([], [], **kws)
        facecolor = scout.get_facecolor()
        color = to_rgb(facecolor[0])
        scout.remove()
    return color
token_counts: 355 | file_name: utils.py | language: Python
path: seaborn/utils.py
commit_id: 708122389feb683aaf143339ab279de04c5f6f08 | repo: seaborn | complexity: 13

id: 259,441 | vocab_size: 15 | ast_levels: 11 | nloc: 4 | n_ast_nodes: 54 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 17 | n_whitespaces: 52
fun_name: predict
commit_message:
ENH migrate GLMs / TweedieRegressor to linear loss (#22548) Co-authored-by: Olivier Grisel <[email protected]> Co-authored-by: Thomas J. Fan <[email protected]>
url: https://github.com/scikit-learn/scikit-learn.git
code:
def predict(self, X):
    # check_array is done in _linear_predictor
    raw_prediction = self._linear_predictor(X)
    y_pred = self._linear_loss.base_loss.link.inverse(raw_prediction)
    return y_pred
token_counts: 32 | file_name: glm.py | language: Python
path: sklearn/linear_model/_glm/glm.py
commit_id: 75a94f518f7bd7d0bf581ffb67d9f961e3c4efbc | repo: scikit-learn | complexity: 1

id: 252,676 | vocab_size: 5 | ast_levels: 6 | nloc: 2 | n_ast_nodes: 26 | n_identifiers: 3 | n_ast_errors: 0 | n_words: 5 | n_whitespaces: 12
fun_name: transport_protocol
commit_message:
Unify proxy modes, introduce UDP protocol detection. (#5556) * [modes] new unified syntax * [modes] full coverage udp.py * [modes] mypy and coverage * [modes] split mode_spec into two files * [modes] adjust DNS layer * [modes] add udp layer decision * [modes] use 1:1 ServerInstances * [modes] fix paste issue * [modes] update tests * [modes] fix typo * [modes] updated docs * [modes] fix typo
url: https://github.com/mitmproxy/mitmproxy.git
code:
def transport_protocol(self) -> Literal["tcp", "udp"]:
token_counts: 13 | file_name: mode_specs.py | language: Python
path: mitmproxy/proxy/mode_specs.py
commit_id: 1706a9b9fe8c8157824df22634d97c3d695a495d | repo: mitmproxy | complexity: 1

id: 142,621 | vocab_size: 13 | ast_levels: 11 | nloc: 14 | n_ast_nodes: 60 | n_identifiers: 8 | n_ast_errors: 0 | n_words: 16 | n_whitespaces: 28
fun_name: test_core_worker_error_message
commit_message:
[api] Annotate as public / move ray-core APIs to _private and add enforcement rule (#25695) Enable checking of the ray core module, excluding serve, workflows, and tune, in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private and associated fixes.
url: https://github.com/ray-project/ray.git
code:
def test_core_worker_error_message():
    script = 
    proc = run_string_as_driver_nonblocking(script)
    err_str = proc.stderr.read().decode("ascii")
    assert "Hello there" in err_str, err_str
token_counts: 33 | file_name: test_output.py | language: Python
path: python/ray/tests/test_output.py
commit_id: 43aa2299e6623c8f8c7c4a1b80133459d0aa68b0 | repo: ray | complexity: 1

id: 100,955 | vocab_size: 62 | ast_levels: 15 | nloc: 18 | n_ast_nodes: 289 | n_identifiers: 25 | n_ast_errors: 0 | n_words: 78 | n_whitespaces: 282
fun_name: _add_latest_live
commit_message:
Bug fixes - PhazeA tooltip spacing - Graph live cache bug
url: https://github.com/deepfakes/faceswap.git
code:
def _add_latest_live(self, session_id, loss, timestamps):
    logger.debug("Adding live data to cache: (session_id: %s, loss: %s, timestamps: %s)",
                 session_id, loss.shape, timestamps.shape)
    if not np.any(loss) and not np.any(timestamps):
        return
    cache = self._data[session_id]
    for metric in ("loss", "timestamps"):
        data = locals()[metric]
        dtype = "float32" if metric == "loss" else "float64"
        old = np.frombuffer(zlib.decompress(cache[metric]), dtype=dtype)
        if data.ndim > 1:
            old = old.reshape(-1, *data.shape[1:])
        new = np.concatenate((old, data))
        logger.debug("'%s' old_shape: %s new_shape: %s",
                     metric, cache[f"{metric}_shape"], new.shape)
        cache[f"{metric}_shape"] = new.shape
        cache[metric] = zlib.compress(new)
        del old
token_counts: 177 | file_name: event_reader.py | language: Python
path: lib/gui/analysis/event_reader.py
commit_id: e2fc0703709a08f64f3aee44a5c78f9e71ebdb92 | repo: faceswap | complexity: 6

id: 276,210 | vocab_size: 11 | ast_levels: 12 | nloc: 5 | n_ast_nodes: 47 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 30
fun_name: _get_var_list
commit_message:
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
url: https://github.com/keras-team/keras.git
code:
def _get_var_list(model):
    var_list, _, _ = tf.__internal__.tracking.ObjectGraphView(
        model
    ).serialize_object_graph()
    return var_list
token_counts: 28 | file_name: saved_model_experimental.py | language: Python
path: keras/saving/saved_model_experimental.py
commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf | repo: keras | complexity: 1

id: 111,200 | vocab_size: 14 | ast_levels: 10 | nloc: 4 | n_ast_nodes: 69 | n_identifiers: 11 | n_ast_errors: 1 | n_words: 14 | n_whitespaces: 31
fun_name: test_noun_chunks_is_parsed
commit_message:
Add a noun chunker for Finnish (#10214) with test cases
url: https://github.com/explosion/spaCy.git
code:
def test_noun_chunks_is_parsed(fi_tokenizer):
    doc = fi_tokenizer("Tämä on testi")
    with pytest.raises(ValueError):
        list(doc.noun_chunks)


@pytest.mark.parametrize(
    "text,pos,deps,heads,expected_noun_chunks", FI_NP_TEST_EXAMPLES
)

ast_errors: @pytest.mark.parametrize( "text,pos,deps,heads,expected_noun_chunks", FI_NP_TEST_EXAMPLES )
token_counts: 26 | file_name: test_noun_chunks.py | language: Python
path: spacy/tests/lang/fi/test_noun_chunks.py
commit_id: e9c26f2ee9f03c2aa6b7cd724f4c0b3717507211 | repo: spaCy | complexity: 1

id: 267,812 | vocab_size: 6 | ast_levels: 6 | nloc: 3 | n_ast_nodes: 21 | n_identifiers: 3 | n_ast_errors: 0 | n_words: 6 | n_whitespaces: 20
fun_name: option_name
commit_message:
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annotation type comments to native type hints. * ansible-test - Use more native type hints. Conversion of single-line function annotation type comments with default values to native type hints. * ansible-test - Use more native type hints. Manual conversion of type annotation comments for functions which have pylint directives.
url: https://github.com/ansible/ansible.git
code:
def option_name(self) -> str:
    return '--target-python'
token_counts: 10 | file_name: __init__.py | language: Python
path: test/lib/ansible_test/_internal/cli/parsers/__init__.py
commit_id: 3eb0485dd92c88cc92152d3656d94492db44b183 | repo: ansible | complexity: 1

id: 203,157 | vocab_size: 7 | ast_levels: 10 | nloc: 3 | n_ast_nodes: 52 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 7 | n_whitespaces: 28
fun_name: test_response_resolver_match_class_based_view
commit_message:
Adjusted CBV resolver_match example in testing tools docs. The view_class is available on the view callback, allowing that to be checked, rather than the __name__.
url: https://github.com/django/django.git
code:
def test_response_resolver_match_class_based_view(self):
    response = self.client.get('/accounts/')
    self.assertIs(response.resolver_match.func.view_class, RedirectView)
token_counts: 30 | file_name: tests.py | language: Python
path: tests/test_client/tests.py
commit_id: d15a10afb51619faf14e678deae7dcda720413d9 | repo: django | complexity: 1

id: 160,273 | vocab_size: 16 | ast_levels: 9 | nloc: 7 | n_ast_nodes: 109 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 23 | n_whitespaces: 48
fun_name: test_array_keys_use_private_array
commit_message:
API: Allow newaxis indexing for `array_api` arrays (#21377) * TST: Add test checking if newaxis indexing works for `array_api` Also removes previous check against newaxis indexing, which is now outdated * TST, BUG: Allow `None` in `array_api` indexing Introduces test for validating flat indexing when `None` is present * MAINT,DOC,TST: Rework of `_validate_index()` in `numpy.array_api` _validate_index() is now called as self._validate_index(shape), and does not return a key. This rework removes the recursive pattern used. Tests are introduced to cover some edge cases. Additionally, its internal docstring reflects new behaviour, and extends the flat indexing note. * MAINT: `advance` -> `advanced` (integer indexing) Co-authored-by: Aaron Meurer <[email protected]> * BUG: array_api arrays use internal arrays from array_api array keys When an array_api array is passed as the key for get/setitem, we access the key's internal np.ndarray array to be used as the key for the internal get/setitem operation. This behaviour was initially removed when `_validate_index()` was reworked. * MAINT: Better flat indexing error message for `array_api` arrays Also better semantics for its prior ellipsis count condition Co-authored-by: Sebastian Berg <[email protected]> * MAINT: `array_api` arrays don't special case multi-ellipsis errors This gets handled by NumPy-proper. Co-authored-by: Aaron Meurer <[email protected]> Co-authored-by: Sebastian Berg <[email protected]>
url: https://github.com/numpy/numpy.git
code:
def test_array_keys_use_private_array():
    a = ones((0, 0), dtype=bool_)
    assert a[a].shape == (0,)
    a = ones((0,), dtype=bool_)
    key = ones((0, 0), dtype=bool_)
    with pytest.raises(IndexError):
        a[key]
token_counts: 70 | file_name: test_array_object.py | language: Python
path: numpy/array_api/tests/test_array_object.py
commit_id: befef7b26773eddd2b656a3ab87f504e6cc173db | repo: numpy | complexity: 1

id: 176,627 | vocab_size: 13 | ast_levels: 9 | nloc: 5 | n_ast_nodes: 65 | n_identifiers: 10 | n_ast_errors: 1 | n_words: 15 | n_whitespaces: 29
fun_name: path_graph
commit_message:
Adjust the usage of nodes_or_number decorator (#5599) * recorrect typo in decorators.py * Update tests to show troubles in current code * fix troubles with usage of nodes_or_number * fix typo * remove nodes_or_number where that makes sense * Reinclude nodes_or_numbers and add some tests for nonstandard usage * fix typowq * hopefully final tweaks (no behavior changes * Update test_classic.py Co-authored-by: Jarrod Millman <[email protected]>
url: https://github.com/networkx/networkx.git
code:
def path_graph(n, create_using=None):
    _, nodes = n
    G = empty_graph(nodes, create_using)
    G.add_edges_from(pairwise(nodes))
    return G


@nodes_or_number(0)

ast_errors: @nodes_or_number(0)
token_counts: 34 | file_name: classic.py | language: Python
path: networkx/generators/classic.py
commit_id: de1d00f20e0bc14f1cc911b3486e50225a8fa168 | repo: networkx | complexity: 1

id: 20,567 | vocab_size: 11 | ast_levels: 7 | nloc: 5 | n_ast_nodes: 28 | n_identifiers: 3 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 15
fun_name: enable_all_warnings
commit_message:
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
url: https://github.com/pypa/pipenv.git
code:
def enable_all_warnings() -> None:
    __diag__.enable_all_warnings()


# hide abstract class
del __config_flags
token_counts: 12 | file_name: core.py | language: Python
path: pipenv/patched/notpip/_vendor/pyparsing/core.py
commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3 | repo: pipenv | complexity: 1

id: 102,297 | vocab_size: 24 | ast_levels: 9 | nloc: 7 | n_ast_nodes: 95 | n_identifiers: 12 | n_ast_errors: 0 | n_words: 28 | n_whitespaces: 43
fun_name: sample_inputs_linalg_invertible
commit_message:
Remove random_fullrank_matrix_distinc_singular_value (#68183) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68183 We do so in favour of `make_fullrank_matrices_with_distinct_singular_values` as this latter one not only has an even longer name, but also generates inputs correctly for them to work with the PR that tests noncontig inputs latter in this stack. We also heavily simplified the generation of samples for the SVD, as it was fairly convoluted and it was not generating the inputs correclty for the noncontiguous test. To do the transition, we also needed to fix the following issue, as it was popping up in the tests: Fixes https://github.com/pytorch/pytorch/issues/66856 cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano Test Plan: Imported from OSS Reviewed By: ngimel Differential Revision: D32684853 Pulled By: mruberry fbshipit-source-id: e88189c8b67dbf592eccdabaf2aa6d2e2f7b95a4
url: https://github.com/pytorch/pytorch.git
code:
def sample_inputs_linalg_invertible(op_info, device, dtype, requires_grad=False, **kwargs):
    make_fn = make_fullrank_matrices_with_distinct_singular_values
    make_arg = partial(make_fn, dtype=dtype, device=device, requires_grad=requires_grad)
    batches = [(), (0, ), (2, ), (1, 1)]
    ns = [5, 0]
token_counts: 76 | file_name: common_methods_invocations.py | language: Python
path: torch/testing/_internal/common_methods_invocations.py
commit_id: baeca11a21e285d66ec3e4103c29dfd0b0245b85 | repo: pytorch | complexity: 1

id: 176,135 | vocab_size: 17 | ast_levels: 14 | nloc: 16 | n_ast_nodes: 165 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 30 | n_whitespaces: 143
fun_name: test_edgeql_functions_array_agg_20
commit_message:
Filter out nulls when calling array_agg and enumerate (#3325) Fixes #3324.
url: https://github.com/edgedb/edgedb.git
code:
async def test_edgeql_functions_array_agg_20(self):
    await self.assert_query_result(
        r,
        tb.bag([{"te": [3000]}, {"te": []}, {"te": []}, {"te": []}]),
    )
    await self.assert_query_result(
        r,
        tb.bag(
            [{"te": [3000, 3000]}, {"te": [3000]}, {"te": [3000]}, {"te": [3000]}],
        )
    )
token_counts: 100 | file_name: test_edgeql_functions.py | language: Python
path: tests/test_edgeql_functions.py
commit_id: dac3c0e8e2aa5f978bcf56e69b50cfde1bdbc91f | repo: edgedb | complexity: 1

id: 22,298 | vocab_size: 14 | ast_levels: 14 | nloc: 6 | n_ast_nodes: 95 | n_identifiers: 8 | n_ast_errors: 0 | n_words: 20 | n_whitespaces: 62
fun_name: registerLabels
commit_message:
refactor: clean code Signed-off-by: slowy07 <[email protected]>
url: https://github.com/geekcomputers/Python.git
code:
def registerLabels():
    for i in range(len(tokens)):
        if tokens[i].t == "label":
            jumps[tokens[i].token] = i
        elif tokens[i].t == "subprogram":
            jumps[tokens[i].token] = i
token_counts: 58 | file_name: assembler.py | language: Python
path: Assembler/assembler.py
commit_id: f0af0c43340763724f139fa68aa1e5a9ffe458b4 | repo: Python | complexity: 4

id: 31,490 | vocab_size: 19 | ast_levels: 12 | nloc: 10 | n_ast_nodes: 61 | n_identifiers: 7 | n_ast_errors: 0 | n_words: 22 | n_whitespaces: 80
fun_name: cls_token
commit_message:
Fix properties of unset special tokens in non verbose mode (#17797) Co-authored-by: SaulLu <[email protected]>
url: https://github.com/huggingface/transformers.git
code:
def cls_token(self) -> str:
    if self._cls_token is None:
        if self.verbose:
            logger.error("Using cls_token, but it is not set yet.")
        return None
    return str(self._cls_token)
token_counts: 35 | file_name: tokenization_utils_base.py | language: Python
path: src/transformers/tokenization_utils_base.py
commit_id: 3eed5530ec74bb60ad9f8f612717d0f6ccf820f2 | repo: transformers | complexity: 3

id: 283,097 | vocab_size: 12 | ast_levels: 14 | nloc: 4 | n_ast_nodes: 99 | n_identifiers: 16 | n_ast_errors: 1 | n_words: 14 | n_whitespaces: 21
fun_name: get_yf_currency_list
commit_message:
Add YahooFinance to forex load (#1544) * Load yf forex * Fix linting * Adding forex dictionary * Fix tests failing * Fix tests failing * Remove inheritance to oanda model * Adding -s to --source * Addressed comments * Fix tests * Fix oanda menu entry * Add av api limit error handling * Fix start date being in different formats across sources * Silence "too much data" warning * Add gst script to test currency sources Co-authored-by: Theodore Aptekarev <[email protected]> Co-authored-by: jmaslek <[email protected]>
url: https://github.com/OpenBB-finance/OpenBBTerminal.git
code:
def get_yf_currency_list() -> List:
    path = os.path.join(os.path.dirname(__file__), "data/yahoofinance_forex.json")
    return sorted(list(set(pd.read_json(path)["from_symbol"])))


YF_CURRENCY_LIST = get_yf_currency_list()


@log_start_end(log=logger)

ast_errors: @log_start_end(log=logger)
token_counts: 45 | file_name: forex_helper.py | language: Python
path: gamestonk_terminal/forex/forex_helper.py
commit_id: 93aba1e2cdb90cf91787b4a999aa63d5a91b9a3d | repo: OpenBBTerminal | complexity: 1

id: 300,059 | vocab_size: 33 | ast_levels: 11 | nloc: 17 | n_ast_nodes: 146 | n_identifiers: 15 | n_ast_errors: 0 | n_words: 38 | n_whitespaces: 120
fun_name: test_chime_can_be_played
commit_message:
Add buttons to Ring chime devices to play ding and motion chimes (#71370) Co-authored-by: Paulus Schoutsen <[email protected]>
url: https://github.com/home-assistant/core.git
code:
async def test_chime_can_be_played(hass, requests_mock):
    await setup_platform(hass, Platform.BUTTON)

    # Mocks the response for playing a test sound
    requests_mock.post(
        "https://api.ring.com/clients_api/chimes/123456/play_sound",
        text="SUCCESS",
    )
    await hass.services.async_call(
        "button",
        "press",
        {"entity_id": "button.downstairs_play_chime_ding"},
        blocking=True,
    )
    await hass.async_block_till_done()

    assert requests_mock.request_history[-1].url.startswith(
        "https://api.ring.com/clients_api/chimes/123456/play_sound?"
    )
    assert "kind=ding" in requests_mock.request_history[-1].url
token_counts: 83 | file_name: test_button.py | language: Python
path: tests/components/ring/test_button.py
commit_id: c22cf3b3d2e4d079499473500abf8a73e5b80163 | repo: core | complexity: 1

id: 153,527 | vocab_size: 5 | ast_levels: 8 | nloc: 2 | n_ast_nodes: 26 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 5 | n_whitespaces: 11
fun_name: is_reduce_function
commit_message:
REFACTOR-#4322: Move is_reduce_fn outside of groupby_agg. (#4323) Co-authored-by: Dmitry Chigarev <[email protected]> Co-authored-by: Yaroslav Igoshev <[email protected]> Signed-off-by: mvashishtha <[email protected]>
url: https://github.com/modin-project/modin.git
code:
def is_reduce_function(fn):
    return _is_reduce_function_with_depth(fn, depth=0)
token_counts: 15 | file_name: groupby.py | language: Python
path: modin/core/dataframe/algebra/groupby.py
commit_id: b0c1e646a2462a3d20430ea8bc02194acca1248a | repo: modin | complexity: 1

id: 20,440 | vocab_size: 108 | ast_levels: 24 | nloc: 56 | n_ast_nodes: 609 | n_identifiers: 30 | n_ast_errors: 0 | n_words: 193 | n_whitespaces: 1,512
fun_name: get_tokens_unprocessed
commit_message:
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
url: https://github.com/pypa/pipenv.git
code:
def get_tokens_unprocessed(self, text=None, context=None):
    tokendefs = self._tokens
    if not context:
        ctx = LexerContext(text, 0)
        statetokens = tokendefs['root']
    else:
        ctx = context
        statetokens = tokendefs[ctx.stack[-1]]
        text = ctx.text
    while 1:
        for rexmatch, action, new_state in statetokens:
            m = rexmatch(text, ctx.pos, ctx.end)
            if m:
                if action is not None:
                    if type(action) is _TokenType:
                        yield ctx.pos, action, m.group()
                        ctx.pos = m.end()
                    else:
                        yield from action(self, m, ctx)
                        if not new_state:
                            # altered the state stack?
                            statetokens = tokendefs[ctx.stack[-1]]
                # CAUTION: callback must set ctx.pos!
                if new_state is not None:
                    # state transition
                    if isinstance(new_state, tuple):
                        for state in new_state:
                            if state == '#pop':
                                if len(ctx.stack) > 1:
                                    ctx.stack.pop()
                            elif state == '#push':
                                ctx.stack.append(ctx.stack[-1])
                            else:
                                ctx.stack.append(state)
                    elif isinstance(new_state, int):
                        # see RegexLexer for why this check is made
                        if abs(new_state) >= len(ctx.stack):
                            del ctx.state[1:]
                        else:
                            del ctx.stack[new_state:]
                    elif new_state == '#push':
                        ctx.stack.append(ctx.stack[-1])
                    else:
                        assert False, "wrong state def: %r" % new_state
                    statetokens = tokendefs[ctx.stack[-1]]
                break
        else:
            try:
                if ctx.pos >= ctx.end:
                    break
                if text[ctx.pos] == '\n':
                    # at EOL, reset state to "root"
                    ctx.stack = ['root']
                    statetokens = tokendefs['root']
                    yield ctx.pos, Text, '\n'
                    ctx.pos += 1
                    continue
                yield ctx.pos, Error, text[ctx.pos]
                ctx.pos += 1
            except IndexError:
                break
token_counts: 373 | file_name: lexer.py | language: Python
path: pipenv/patched/notpip/_vendor/pygments/lexer.py
commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3 | repo: pipenv | complexity: 20

id: 102,045 | vocab_size: 10 | ast_levels: 11 | nloc: 5 | n_ast_nodes: 43 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 10 | n_whitespaces: 35
fun_name: items_queued
commit_message:
Extract: Implement re-align/2nd pass - implement configurable re-align function in extract - update locales + documentation - re-factor align._base and split to separate modules - move normalization method to plugin parent - bugfix: FAN use zeros for pre-processing crop - lint AlignedFilter
url: https://github.com/deepfakes/faceswap.git
code:
def items_queued(self) -> bool:
    with self._queue_lock:
        return self._active and bool(self._queued)
token_counts: 24 | file_name: processing.py | language: Python
path: plugins/extract/align/_base/processing.py
commit_id: 9e2026f6feba4fc1d60e0d985cbc1ba9c44a4848 | repo: faceswap | complexity: 2

id: 320,611 | vocab_size: 10 | ast_levels: 11 | nloc: 2 | n_ast_nodes: 46 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 10 | n_whitespaces: 16
fun_name: other_tabs
commit_message:
Fixes qutebrowser/qutebrowser#6967 by adding win id param in _tabs & using it in delete_tabs As delete_tab was assuming that completion column contains window ID, it was showing exception in case of tab-focus, as it doesn't have the window ID in completion column. So instead a new parameter named current_win_id is used in _tabs which is also passed in all uses of the function.
url: https://github.com/qutebrowser/qutebrowser.git
code:
def other_tabs(*, info):
    return _tabs(win_id_filter=lambda win_id: win_id != info.win_id,
                 current_win_id=info.win_id)
token_counts: 28 | file_name: miscmodels.py | language: Python
path: qutebrowser/completion/models/miscmodels.py
commit_id: 57155e329ada002245ab3fac45d906f6707c14cf | repo: qutebrowser | complexity: 1

id: 311,839 | vocab_size: 15 | ast_levels: 14 | nloc: 6 | n_ast_nodes: 73 | n_identifiers: 11 | n_ast_errors: 0 | n_words: 16 | n_whitespaces: 59
fun_name: speed_count
commit_message:
Add missing type hints to homekit_controller (#65368)
https://github.com/home-assistant/core.git
def speed_count(self) -> int:
    return round(
        min(self.service[CharacteristicsTypes.ROTATION_SPEED].maxValue or 100, 100)
        / max(1, self.service[CharacteristicsTypes.ROTATION_SPEED].minStep or 0)
    )
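The formula derives a discrete speed count from the HomeKit rotation-speed characteristic's range and step; a quick worked check in plain Python (values hypothetical):

# Worked check of the speed-count formula above (hypothetical values).
def speed_count(max_value, min_step):
    return round(min(max_value or 100, 100) / max(1, min_step or 0))

assert speed_count(100, 25) == 4       # 0-100% in 25% steps -> 4 speeds
assert speed_count(100, 33) == 3       # 100 / 33 ~= 3.03 -> 3 speeds
assert speed_count(None, None) == 100  # missing metadata falls back to 100 / 1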
47
fan.py
Python
homeassistant/components/homekit_controller/fan.py
9f5d77e0df957c20a2af574d706140786f0a551a
core
3
211,031
47
17
29
255
22
0
73
396
update
[MOT] Add OC_SORT tracker (#6272) * add ocsort tracker * add ocsort deploy * merge develop * fix ocsort tracker codes * fix doc, test=document_fix * fix doc, test=document_fix
https://github.com/PaddlePaddle/PaddleDetection.git
def update(self, bbox):
    if bbox is not None:
        if self.last_observation.sum() >= 0:  # no previous observation
            previous_box = None
            for i in range(self.delta_t):
                dt = self.delta_t - i
                if self.age - dt in self.observations:
                    previous_box = self.observations[self.age - dt]
                    break
            if previous_box is None:
                previous_box = self.last_observation
            self.velocity = speed_direction(previous_box, bbox)

        self.last_observation = bbox
        self.observations[self.age] = bbox
        self.history_observations.append(bbox)
        self.time_since_update = 0
        self.history = []
        self.hits += 1
        self.hit_streak += 1
        self.kf.update(convert_bbox_to_z(bbox))
    else:
        self.kf.update(bbox)
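The inner loop looks up to `delta_t` frames back for the most recent stored observation before estimating velocity; a standalone sketch of that lookback (function name and data hypothetical):

# Standalone sketch of the delta_t lookback used above (hypothetical data).
def find_previous_box(observations, age, delta_t, last_observation):
    for i in range(delta_t):
        dt = delta_t - i  # tries the oldest offset first: age-delta_t, ..., age-1
        if age - dt in observations:
            return observations[age - dt]
    return last_observation  # fall back to the last box that was seen

observations = {7: "box@7", 9: "box@9"}
print(find_previous_box(observations, age=10, delta_t=3, last_observation="box@9"))
# -> "box@7": age-3 exists, so the oldest in-window observation wins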
156
ocsort_tracker.py
Python
deploy/pptracking/python/mot/tracker/ocsort_tracker.py
c84153a355d9855fe55cf51d203b8b24e7d884e5
PaddleDetection
6
34,175
24
14
7
123
15
0
30
55
blackify
Copies and docstring styling (#15202) * Style docstrings when making/checking copies * Polish
https://github.com/huggingface/transformers.git
def blackify(code):
    has_indent = len(get_indent(code)) > 0
    if has_indent:
        code = f"class Bla:\n{code}"
    result = black.format_str(code, mode=black.FileMode([black.TargetVersion.PY35], line_length=119))
    result, _ = style_docstrings_in_code(result)
    return result[len("class Bla:\n") :] if has_indent else result
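Black only formats syntactically complete code, so an indented fragment is wrapped in a dummy `class Bla:` header and the header is stripped again afterwards. A minimal standalone sketch of that trick (assumes the `black` package is installed; the mode options are illustrative):

# Minimal sketch of the wrap-in-a-dummy-class trick used above.
import black

fragment = "    def f(x  ):\n        return x+1\n"  # indented -> not parseable alone
wrapped = f"class Bla:\n{fragment}"
formatted = black.format_str(wrapped, mode=black.FileMode(line_length=119))
print(formatted[len("class Bla:\n"):])  # drop the dummy header again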
72
check_copies.py
Python
utils/check_copies.py
1144d336b689d1710534b245697e41be7a168075
transformers
3
156,746
11
7
3
46
7
0
12
33
clip
Don't include docs in ``Array`` methods, just refer to module docs (#9244) Co-authored-by: James Bourbeau <[email protected]>
https://github.com/dask/dask.git
def clip(self, min=None, max=None):
    from dask.array.ufunc import clip

    return clip(self, min, max)
31
core.py
Python
dask/array/core.py
2820bae493a49cb1d0a6e376985c5473b8f04fa8
dask
1
166,580
60
12
4
164
28
0
78
90
__doc__
TYP: CallableDynamicDoc (#46786) * TYP: CallableDynamicDoc * using cast * tighten NDFrameIndexerBase * dtype and clarification for _IndexingMixinT Co-authored-by: Matthew Roeschke <[email protected]>
https://github.com/pandas-dev/pandas.git
def __doc__(self) -> str:  # type: ignore[override]
    opts_desc = _describe_option("all", _print_desc=False)
    opts_list = pp_options_list(list(_registered_options.keys()))
    return self.__doc_tmpl__.format(opts_desc=opts_desc, opts_list=opts_list)


# (the four docstring-template string literals are elided in this source dump)
_get_option_tmpl = _set_option_tmpl = _describe_option_tmpl = _reset_option_tmpl =

# bind the functions with their docstrings into a Callable
# and use that as the functions exposed in pd.api
get_option = CallableDynamicDoc(_get_option, _get_option_tmpl)
set_option = CallableDynamicDoc(_set_option, _set_option_tmpl)
reset_option = CallableDynamicDoc(_reset_option, _reset_option_tmpl)
describe_option = CallableDynamicDoc(_describe_option, _describe_option_tmpl)
options = DictWrapper(_global_config)

#
# Functions for use by pandas developers, in addition to User - api
45
config.py
Python
pandas/_config/config.py
3e3bb9028b48aa7e42592b1d4a518197f4dbf6de
pandas
1
215,959
10
10
2
54
9
0
10
16
_get_svc_list
Update to latest ``pyupgrade`` hook. Stop skipping it on CI. Signed-off-by: Pedro Algarvio <[email protected]>
https://github.com/saltstack/salt.git
def _get_svc_list(name="*", status=None):
    return sorted(os.path.basename(el) for el in _get_svc_path(name, status))
33
runit.py
Python
salt/modules/runit.py
f2a783643de61cac1ff3288b40241e5ce6e1ddc8
salt
2
101,443
14
15
4
87
10
0
14
53
sections
Bugfix: convert - Gif Writer - Fix non-launch error on Gif Writer - convert plugins - linting - convert/fs_media/preview/queue_manager - typing - Change convert items from dict to Dataclass
https://github.com/deepfakes/faceswap.git
def sections(self) -> List[str]:
    return sorted(set(plugin.split(".")[0]
                      for plugin in self._config.config.sections()
                      if plugin.split(".")[0] != "writer"))
51
preview.py
Python
tools/preview/preview.py
1022651eb8a7741014f5d2ec7cbfe882120dfa5f
faceswap
3
261,252
30
16
12
191
15
0
54
136
svd_flip
DOC Ensures that svd_flip passes numpydoc validation (#24581) Co-authored-by: Thomas J. Fan <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def svd_flip(u, v, u_based_decision=True):
    if u_based_decision:
        # columns of u, rows of v
        max_abs_cols = np.argmax(np.abs(u), axis=0)
        signs = np.sign(u[max_abs_cols, range(u.shape[1])])
        u *= signs
        v *= signs[:, np.newaxis]
    else:
        # rows of v, columns of u
        max_abs_rows = np.argmax(np.abs(v), axis=1)
        signs = np.sign(v[range(v.shape[0]), max_abs_rows])
        u *= signs
        v *= signs[:, np.newaxis]
    return u, v
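SVD signs are arbitrary, and flipping a column of `u` together with the matching row of `v` leaves the reconstruction unchanged; a quick self-contained demonstration of the u-based convention above:

# Demonstration: sign correction is deterministic and reconstruction-preserving.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
u, s, v = np.linalg.svd(X, full_matrices=False)

# apply the u-based convention from the function above
max_abs_cols = np.argmax(np.abs(u), axis=0)
signs = np.sign(u[max_abs_cols, range(u.shape[1])])
u_f, v_f = u * signs, v * signs[:, np.newaxis]

# same decomposition, now with the largest-|u| entry of each column positive
assert np.allclose(u_f @ np.diag(s) @ v_f, X)
assert (u_f[max_abs_cols, range(u_f.shape[1])] >= 0).all()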
127
extmath.py
Python
sklearn/utils/extmath.py
97057d329da1786aa03206251aab68bf51312390
scikit-learn
2
259,807
73
12
16
215
21
0
102
195
precision_recall_curve
FIX compute precision-recall at 100% recall (#23214) Co-authored-by: Guillaume Lemaitre <[email protected]> Co-authored-by: jeremiedbb <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def precision_recall_curve(y_true, probas_pred, *, pos_label=None, sample_weight=None):
    fps, tps, thresholds = _binary_clf_curve(
        y_true, probas_pred, pos_label=pos_label, sample_weight=sample_weight
    )

    ps = tps + fps
    precision = np.divide(tps, ps, where=(ps != 0))

    # When no positive label in y_true, recall is set to 1 for all thresholds
    # tps[-1] == 0 <=> y_true == all negative labels
    if tps[-1] == 0:
        warnings.warn(
            "No positive class found in y_true, "
            "recall is set to one for all thresholds."
        )
        recall = np.ones_like(tps)
    else:
        recall = tps / tps[-1]

    # reverse the outputs so recall is decreasing
    sl = slice(None, None, -1)
    return np.hstack((precision[sl], 1)), np.hstack((recall[sl], 0)), thresholds[sl]
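The fix matters when `y_true` contains no positives: recall was previously a divide-by-zero. A usage sketch of the public API under that edge case (exact warning text and array shapes may differ by scikit-learn version):

# Usage sketch: with no positive labels, recall is now defined (and a warning
# is emitted) instead of dividing by zero.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 0, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8])
precision, recall, thresholds = precision_recall_curve(y_true, scores)
print(recall)  # all ones, plus the conventional trailing 0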
140
_ranking.py
Python
sklearn/metrics/_ranking.py
32c53bc69cc0d1f142700dd6c02281a000b855b9
scikit-learn
2
76,599
26
15
8
132
18
0
27
98
test_page_move_page_position_to_the_same_position
Fix permission error when sorting pages having page type restrictions
https://github.com/wagtail/wagtail.git
def test_page_move_page_position_to_the_same_position(self):
    response = self.client.post(
        reverse("wagtailadmin_pages:set_page_position", args=(self.child_1.id,))
        + "?position=0"
    )
    self.assertEqual(response.status_code, 200)

    # Ensure page order does not change:
    child_slugs = self.index_page.get_children().values_list("slug", flat=True)
    self.assertListEqual(list(child_slugs), ["child-1", "child-2", "child-3"])
77
test_reorder_page.py
Python
wagtail/admin/tests/pages/test_reorder_page.py
128b319b9990c01ddcda3f827cf9961a050971ac
wagtail
1
292,870
10
9
4
37
7
0
10
42
async_start
Fix dhcp None hostname (#67289) * Fix dhcp None hostname * Test handle None hostname
https://github.com/home-assistant/core.git
async def async_start(self):
    self._unsub = async_dispatcher_connect(
        self.hass, CONNECTED_DEVICE_REGISTERED, self._async_process_device_data
    )
22
__init__.py
Python
homeassistant/components/dhcp/__init__.py
d9abd5efea9a847dd54e5cda80aec29a0ffcabaa
core
1
100,563
13
9
19
51
7
0
14
42
_get_free_vram
Refactor lib.gpu_stats (#1218) * inital gpu_stats refactor * Add dummy CPU Backend * Update Sphinx documentation
https://github.com/deepfakes/faceswap.git
def _get_free_vram(self) -> List[float]:
    vram = self._all_vram
    self._log("debug", f"GPU VRAM free: {vram}")
    return vram
27
amd.py
Python
lib/gpu_stats/amd.py
bdbbad4d310fb606b6f412aa81e9f57ccd994e97
faceswap
1
284,059
56
13
49
346
35
0
66
529
call_cr
Replaces coingecko deprecated commands (#1650) * removes cgproducts and cgplatforms and replaces with cr * add ignore word * added .openbb script * reverted crypto change * doc * failing tests * trying chart and fixed minh issues * Create barh * Fix ticker labels * fix test * loanscan mock * defi test * defi test * Fix defi test Co-authored-by: Minh Hoang <[email protected]> Co-authored-by: minhhoang1023 <[email protected]> Co-authored-by: Theodore Aptekarev <[email protected]> Co-authored-by: Colin Delahunty <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def call_cr(self, other_args):
    parser = argparse.ArgumentParser(
        prog="cr",
        add_help=False,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        description=,  # (description text elided in this source dump)
    )
    parser.add_argument(
        "-t",
        "--type",
        dest="type",
        type=str,
        help="Select interest rate type",
        default="supply",
        choices=["borrow", "supply"],
    )
    parser.add_argument(
        "-c",
        "--cryptocurrrencies",
        dest="cryptos",
        type=loanscan_model.check_valid_coin,
        help=f,  # (f-string help text elided in this source dump)
        default="BTC,ETH,USDT,USDC",
    )
    parser.add_argument(
        "-p",
        "--platforms",
        dest="platforms",
        type=loanscan_model.check_valid_platform,
        help=f,  # (f-string help text elided in this source dump)
        default="BlockFi,Ledn,SwissBorg,Youhodler",
    )
    if other_args and "-" not in other_args[0][0]:
        other_args.insert(0, "-t")
    ns_parser = parse_known_args_and_warn(
        parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED, limit=10
    )
    if ns_parser:
        loanscan_view.display_crypto_rates(
            rate_type=ns_parser.type,
            cryptos=ns_parser.cryptos,
            platforms=ns_parser.platforms,
            limit=ns_parser.limit,
            export=ns_parser.export,
        )
196
overview_controller.py
Python
openbb_terminal/cryptocurrency/overview/overview_controller.py
670402396e7e25e95bd6497affb143565d9bd4ea
OpenBBTerminal
4
311,262
52
11
35
297
24
0
89
269
test_write_current_mode
Allow homekit_controller to set Ecobee's mode (#65032)
https://github.com/home-assistant/core.git
async def test_write_current_mode(hass, utcnow):
    helper = await setup_test_component(hass, create_service_with_ecobee_mode)
    service = helper.accessory.services.first(service_type=ServicesTypes.THERMOSTAT)

    # Helper will be for the primary entity, which is the service. Make a helper for the sensor.
    energy_helper = Helper(
        hass,
        "select.testdevice_current_mode",
        helper.pairing,
        helper.accessory,
        helper.config_entry,
    )
    service = energy_helper.accessory.services.first(
        service_type=ServicesTypes.THERMOSTAT
    )
    mode = service[CharacteristicsTypes.Vendor.ECOBEE_SET_HOLD_SCHEDULE]

    await hass.services.async_call(
        "select",
        "select_option",
        {"entity_id": "select.testdevice_current_mode", "option": "home"},
        blocking=True,
    )
    assert mode.value == 0

    await hass.services.async_call(
        "select",
        "select_option",
        {"entity_id": "select.testdevice_current_mode", "option": "sleep"},
        blocking=True,
    )
    assert mode.value == 1

    await hass.services.async_call(
        "select",
        "select_option",
        {"entity_id": "select.testdevice_current_mode", "option": "away"},
        blocking=True,
    )
    assert mode.value == 2
176
test_select.py
Python
tests/components/homekit_controller/test_select.py
a65694457a8c9357d890877144be69f3179bc632
core
1
189,969
22
11
4
72
10
0
23
62
ghost_to
Prevent TransformMatchingTex from crashing when there is nothing to fade (#2846) * Prevent TransformMatchingTex from crashing when there is nothing to fade Fixes #2845 and adds tests. I originally tried to make FadeTransformPieces not crash if it was given two Mobjects with no submobjects, but I couldn't quite get that to work. This is probably the less invasive of a change. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * use Tex arrows Co-authored-by: Benjamin Hackl <[email protected]> * Update expectations after Tex change * Address feedback This makes FadeTransform::ghost_to more robust to receiving an empty target. It is currently unspecified what should happen, so how about just fading in place? * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Benjamin Hackl <[email protected]>
https://github.com/ManimCommunity/manim.git
def ghost_to(self, source, target):
    # mobject.replace() does not work if the target has no points.
    if target.get_num_points() or target.submobjects:
        source.replace(target, stretch=self.stretch, dim_to_match=self.dim_to_match)
    source.set_opacity(0)
45
transform.py
Python
manim/animation/transform.py
c4f1d202a9c858da930e4515d44ce0910740f1b2
manim
3
96,045
15
11
8
73
11
0
15
87
test_orderby_tag
ref(metrics): Cleans up metrics API (#31541) Hacks out the `MockingDataSource`, and moves the `SnubaDataSource` into its own module
https://github.com/getsentry/sentry.git
def test_orderby_tag(self):
    response = self.get_response(
        self.project.organization.slug,
        field=["sum(sentry.sessions.session)", "environment"],
        groupBy="environment",
        orderBy="environment",
    )
    assert response.status_code == 400
43
test_organization_metrics.py
Python
tests/sentry/api/endpoints/test_organization_metrics.py
c79220f4fae984e531b4ec9cecae0ff2c4d0b7ae
sentry
1
124,306
22
13
14
80
10
0
29
68
render
[Core] Add HTML reprs for `ClientContext` and `WorkerContext` (#25730)
https://github.com/ray-project/ray.git
def render(self, **kwargs) -> str:
    rendered = self.template
    for key, value in kwargs.items():
        rendered = rendered.replace("{{ " + key + " }}", value if value else "")
    return rendered
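A short usage sketch of the `{{ key }}` substitution above; the wrapper class here is a hypothetical stand-in exposing the same `template` attribute and method:

# Usage sketch of render(); the Template class is a hypothetical stand-in.
class Template:
    def __init__(self, template: str):
        self.template = template

    def render(self, **kwargs) -> str:
        rendered = self.template
        for key, value in kwargs.items():
            rendered = rendered.replace("{{ " + key + " }}", value if value else "")
        return rendered

t = Template("<b>{{ title }}</b> {{ missing }}")
print(t.render(title="Ray cluster", missing=None))  # -> "<b>Ray cluster</b> "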
47
render.py
Python
python/ray/widgets/render.py
ea47d97a548504bdb6ff1afdb1021b0bc54d5dfa
ray
3
154,519
58
12
14
221
27
0
73
194
reduce
REFACTOR-#5009: use RayWrapper.materialize instead of ray.get (#5010) Signed-off-by: Myachev <[email protected]>
https://github.com/modin-project/modin.git
def reduce(self, first, others, func, axis=0, **kwargs):
    # TODO: Try to use `axis` parameter of cudf.concat
    join_func = (
        cudf.DataFrame.join if not axis else lambda x, y: cudf.concat([x, y])
    )
    if not isinstance(others[0], int):
        other_dfs = RayWrapper.materialize(others)
    else:
        other_dfs = [self.cudf_dataframe_dict[i] for i in others]
    df1 = self.cudf_dataframe_dict[first]
    df2 = others[0] if len(others) >= 1 else None
    for i in range(1, len(others)):
        df2 = join_func(df2, other_dfs[i])
    result = func(df1, df2, **kwargs)
    return self.store_new_df(result)
148
gpu_manager.py
Python
modin/core/execution/ray/implementations/cudf_on_ray/partitioning/gpu_manager.py
1dc16415333bf2428ee2b1f4d31ff94e66b9a0a6
modin
6
94,414
27
14
16
237
28
0
33
169
test_instance_url_mismatch
feat(Jira): Plugin issue migration endpoint (#37577) * feat(jira): Plugin issue migration endpoint
https://github.com/getsentry/sentry.git
def test_instance_url_mismatch(self):
    self.plugin.set_option("instance_url", "https://hellboy.atlassian.net", self.project)
    group = self.create_group(message="Hello world", culprit="foo.bar")
    plugin_issue = GroupMeta.objects.create(
        key=f"{self.plugin.slug}:tid", group_id=group.id, value="SEN-1"
    )
    with self.tasks():
        self.installation.migrate_issues()
    assert not ExternalIssue.objects.filter(
        organization_id=self.organization.id,
        integration_id=self.integration.id,
        key=plugin_issue.value,
    ).exists()
    assert GroupMeta.objects.filter(
        key=f"{self.plugin.slug}:tid", group_id=group.id, value="SEN-1"
    ).exists()
132
test_integration.py
Python
tests/sentry/integrations/jira/test_integration.py
f5e5a3b1ed97383e0699aff9eb0363e9eb5db479
sentry
1
258,673
10
9
14
61
12
0
11
23
test_estimator_does_not_support_feature_names
TST Better info when checking for no warnings in tests (#22362)
https://github.com/scikit-learn/scikit-learn.git
def test_estimator_does_not_support_feature_names():
    pytest.importorskip("pandas")
    X, y = datasets.load_iris(as_frame=True, return_X_y=True)
    all_feature_names = set(X.columns)
114
test_from_model.py
Python
sklearn/feature_selection/tests/test_from_model.py
9f85c9d44965b764f40169ef2917e5f7a798684f
scikit-learn
2
295,967
84
12
20
260
29
1
111
224
test_auto_purge_disabled
Reduce memory pressure during database migration (#69628)
https://github.com/home-assistant/core.git
def test_auto_purge_disabled(hass_recorder):
    hass = hass_recorder({CONF_AUTO_PURGE: False})
    original_tz = dt_util.DEFAULT_TIME_ZONE
    tz = dt_util.get_time_zone("Europe/Copenhagen")
    dt_util.set_default_time_zone(tz)

    # Purging is scheduled to happen at 4:12am every day. We want
    # to verify that when auto purge is disabled periodic db cleanups
    # are still scheduled
    #
    # The clock is started at 4:15am then advanced forward below
    now = dt_util.utcnow()
    test_time = datetime(now.year + 2, 1, 1, 4, 15, 0, tzinfo=tz)
    run_tasks_at_time(hass, test_time)

    with patch(
        "homeassistant.components.recorder.purge.purge_old_data", return_value=True
    ) as purge_old_data, patch(
        "homeassistant.components.recorder.perodic_db_cleanups"
    ) as perodic_db_cleanups:
        # Advance one day, and the purge task should run
        test_time = test_time + timedelta(days=1)
        run_tasks_at_time(hass, test_time)
        assert len(purge_old_data.mock_calls) == 0
        assert len(perodic_db_cleanups.mock_calls) == 1

        purge_old_data.reset_mock()
        perodic_db_cleanups.reset_mock()

    dt_util.set_default_time_zone(original_tz)


@pytest.mark.parametrize("enable_statistics", [True])
@pytest.mark.parametrize("enable_statistics", [True])
141
test_init.py
Python
tests/components/recorder/test_init.py
66f0a3816a7341f0726a01f4dfdbd1ff47c27d1b
core
1
126,482
31
14
13
66
8
0
32
123
get_entrypoint_task_id
[workflow] Change `step` to `task` in workflow. (#27330) * change step to task Signed-off-by: Yi Cheng <[email protected]> * fix comments Signed-off-by: Yi Cheng <[email protected]> * fix comments Signed-off-by: Yi Cheng <[email protected]> * fix comments Signed-off-by: Yi Cheng <[email protected]>
https://github.com/ray-project/ray.git
def get_entrypoint_task_id(self) -> TaskID:
    # empty TaskID represents the workflow driver
    try:
        return self._locate_output_task_id("")
    except Exception as e:
        raise ValueError(
            "Fail to get entrypoint task ID from workflow"
            f"[id={self._workflow_id}]"
        ) from e
31
workflow_storage.py
Python
python/ray/workflow/workflow_storage.py
a9697722cf188ebb9b257ad7a4a6c43ece32c546
ray
2
259,225
38
16
8
149
17
0
45
77
test_ohe_infrequent_three_levels_drop_infrequent_errors
ENH Adds infrequent categories to OneHotEncoder (#16018) * ENH Completely adds infrequent categories * STY Linting * STY Linting * DOC Improves wording * DOC Lint * BUG Fixes * CLN Address comments * CLN Address comments * DOC Uses math to description float min_frequency * DOC Adds comment regarding drop * BUG Fixes method name * DOC Clearer docstring * TST Adds more tests * FIX Fixes mege * CLN More pythonic * CLN Address comments * STY Flake8 * CLN Address comments * DOC Fix * MRG * WIP * ENH Address comments * STY Fix * ENH Use functiion call instead of property * ENH Adds counts feature * CLN Rename variables * DOC More details * CLN Remove unneeded line * CLN Less lines is less complicated * CLN Less diffs * CLN Improves readiabilty * BUG Fix * CLN Address comments * TST Fix * CLN Address comments * CLN Address comments * CLN Move docstring to userguide * DOC Better wrapping * TST Adds test to handle_unknown='error' * ENH Spelling error in docstring * BUG Fixes counter with nan values * BUG Removes unneeded test * BUG Fixes issue * ENH Sync with main * DOC Correct settings * DOC Adds docstring * DOC Immprove user guide * DOC Move to 1.0 * DOC Update docs * TST Remove test * DOC Update docstring * STY Linting * DOC Address comments * ENH Neater code * DOC Update explaination for auto * Update sklearn/preprocessing/_encoders.py Co-authored-by: Roman Yurchak <[email protected]> * TST Uses docstring instead of comments * TST Remove call to fit * TST Spelling error * ENH Adds support for drop + infrequent categories * ENH Adds infrequent_if_exist option * DOC Address comments for user guide * DOC Address comments for whats_new * DOC Update docstring based on comments * CLN Update test with suggestions * ENH Adds computed property infrequent_categories_ * DOC Adds where the infrequent column is located * TST Adds more test for infrequent_categories_ * DOC Adds docstring for _compute_drop_idx * CLN Moves _convert_to_infrequent_idx into its own method * TST Increases test coverage * TST Adds failing test * CLN Careful consideration of dropped and inverse_transform * STY Linting * DOC Adds docstrinb about dropping infrequent * DOC Uses only * DOC Numpydoc * TST Includes test for get_feature_names_out * DOC Move whats new * DOC Address docstring comments * DOC Docstring changes * TST Better comments * TST Adds check for handle_unknown='ignore' for infrequent * CLN Make _infrequent_indices private * CLN Change min_frequency default to None * DOC Adds comments * ENH adds support for max_categories=1 * ENH Describe lexicon ordering for ties * DOC Better docstring * STY Fix * CLN Error when explicity dropping an infrequent category * STY Grammar Co-authored-by: Joel Nothman <[email protected]> Co-authored-by: Roman Yurchak <[email protected]> Co-authored-by: Guillaume Lemaitre <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def test_ohe_infrequent_three_levels_drop_infrequent_errors(drop):
    X_train = np.array([["a"] * 5 + ["b"] * 20 + ["c"] * 10 + ["d"] * 3]).T
    ohe = OneHotEncoder(
        handle_unknown="infrequent_if_exist", sparse=False, max_categories=3, drop=drop
    )

    msg = f"Unable to drop category {drop[0]!r} from feature 0 because it is infrequent"
    with pytest.raises(ValueError, match=msg):
        ohe.fit(X_train)
82
test_encoders.py
Python
sklearn/preprocessing/tests/test_encoders.py
7f0006c8aad1a09621ad19c3db19c3ff0555a183
scikit-learn
1
101,223
10
9
4
56
12
0
10
43
_kwarg_mapping
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
https://github.com/deepfakes/faceswap.git
def _kwarg_mapping(self) -> Dict[str, Union[int, Tuple[int, int]]]:
    return dict(ksize=self._kernel_size, sigmaX=self._sigma)
38
detected_face.py
Python
lib/align/detected_face.py
5e73437be47f2410439a3c6716de96354e6a0c94
faceswap
1
19,019
40
13
19
205
23
0
47
236
_log_dataset_tag
Evaluation Default evaluator (#5092) * init Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * rename module Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * revert black change Signed-off-by: Weichen Xu <[email protected]> * change module path Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * fix Signed-off-by: Weichen Xu <[email protected]> * refactor Signed-off-by: Weichen Xu <[email protected]> * lazy load pyspark Signed-off-by: Weichen Xu <[email protected]> * revert export Signed-off-by: Weichen Xu <[email protected]> * fix curcit import Signed-off-by: Weichen Xu <[email protected]> * update tests Signed-off-by: Weichen Xu <[email protected]> * fix conftest.py Signed-off-by: Weichen Xu <[email protected]> * Revert "fix conftest.py" This reverts commit 2ea29c62bfffc5461bf77f3da15b5c00f51de19b. 
* fix tests Signed-off-by: Weichen Xu <[email protected]> * update doc Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * default evaluator Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * fix Signed-off-by: Weichen Xu <[email protected]> * fix Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * fix doc Signed-off-by: Weichen Xu <[email protected]> * fix doc Signed-off-by: Weichen Xu <[email protected]> * update import Signed-off-by: Weichen Xu <[email protected]> * fix doc Signed-off-by: Weichen Xu <[email protected]> * update hash algo Signed-off-by: Weichen Xu <[email protected]> * update import Signed-off-by: Weichen Xu <[email protected]> * address comment Signed-off-by: Weichen Xu <[email protected]> * add tests Signed-off-by: Weichen Xu <[email protected]> * fix lint Signed-off-by: Weichen Xu <[email protected]> * add tests Signed-off-by: Weichen Xu <[email protected]> * add more tests Signed-off-by: Weichen Xu <[email protected]> * add tests Signed-off-by: Weichen Xu <[email protected]> * fix lint Signed-off-by: Weichen Xu <[email protected]> * update shap explainer Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * remove scikitplot dep Signed-off-by: Weichen Xu <[email protected]> * add pr curve Signed-off-by: Weichen Xu <[email protected]> * add shap.summary_plot Signed-off-by: Weichen Xu <[email protected]> * log explainer Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * improve explainer code Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * update shap init Signed-off-by: Weichen Xu <[email protected]> * update explainer creating Signed-off-by: Weichen Xu <[email protected]> * update predict_proba Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * refactor Signed-off-by: Weichen Xu <[email protected]> * add multi-class metrics artifacts Signed-off-by: Weichen Xu <[email protected]> * update doc Signed-off-by: Weichen Xu <[email protected]> * add log_loss metric Signed-off-by: Weichen Xu <[email protected]> * lazy load pyspark Signed-off-by: Weichen Xu <[email protected]> * address ben comments Signed-off-by: Weichen Xu <[email protected]> * fix Signed-off-by: Weichen Xu <[email protected]> * prevent show shap logo, add tests Signed-off-by: Weichen Xu <[email protected]> * support spark model Signed-off-by: Weichen Xu <[email protected]> * add tests Signed-off-by: Weichen Xu <[email protected]> * add shap version check Signed-off-by: Weichen Xu <[email protected]> * update docs, loose classifier label limit Signed-off-by: Weichen Xu <[email protected]> * 
add tests Signed-off-by: Weichen Xu <[email protected]> * multiclass classifier merge metrics/plots Signed-off-by: Weichen Xu <[email protected]> * zfill feature name Signed-off-by: Weichen Xu <[email protected]> * update doc Signed-off-by: Weichen Xu <[email protected]> * add config max_num_classes_threshold_logging_roc_pr_curve_for_multiclass_classifier Signed-off-by: Weichen Xu <[email protected]> * refactor Signed-off-by: Weichen Xu <[email protected]> * update tests Signed-off-by: Weichen Xu <[email protected]> * improve label handling Signed-off-by: Weichen Xu <[email protected]> * refactor Signed-off-by: Weichen Xu <[email protected]> * add tests Signed-off-by: Weichen Xu <[email protected]> * black Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * increase plot dpi Signed-off-by: Weichen Xu <[email protected]> * fix test fixture Signed-off-by: Weichen Xu <[email protected]> * fix pylint Signed-off-by: Weichen Xu <[email protected]> * update doc Signed-off-by: Weichen Xu <[email protected]> * use matplot rc_context Signed-off-by: Weichen Xu <[email protected]> * fix shap import Signed-off-by: Weichen Xu <[email protected]> * refactor EvaluationDataset Signed-off-by: Weichen Xu <[email protected]> * limit user specify shap algos Signed-off-by: Weichen Xu <[email protected]> * clean Signed-off-by: Weichen Xu <[email protected]> * update evaluation dataset Signed-off-by: Weichen Xu <[email protected]> * use svg fig Signed-off-by: Weichen Xu <[email protected]> * revert svg Signed-off-by: Weichen Xu <[email protected]> * curve dashline, legend display ap/roc, legend move out Signed-off-by: Weichen Xu <[email protected]> * linewidth 1 Signed-off-by: Weichen Xu <[email protected]> * keyword arguments for evaluate, fix tests Signed-off-by: Weichen Xu <[email protected]> * mark abc.abstractmethod, kw args for ModelEvaluator methods Signed-off-by: Weichen Xu <[email protected]> * fix pylint Signed-off-by: Weichen Xu <[email protected]> * fix pylint Signed-off-by: Weichen Xu <[email protected]>
https://github.com/mlflow/mlflow.git
def _log_dataset_tag(self, client, run_id, model_uuid):
    existing_dataset_metadata_str = client.get_run(run_id).data.tags.get(
        "mlflow.datasets", "[]"
    )
    dataset_metadata_list = json.loads(existing_dataset_metadata_str)

    for metadata in dataset_metadata_list:
        if (
            metadata["hash"] == self.hash
            and metadata["name"] == self.name
            and metadata["model"] == model_uuid
        ):
            break
    else:
        dataset_metadata_list.append({**self._metadata, "model": model_uuid})

    dataset_metadata_str = json.dumps(dataset_metadata_list, separators=(",", ":"))
    client.log_batch(
        run_id,
        tags=[RunTag("mlflow.datasets", dataset_metadata_str)],
    )
124
base.py
Python
mlflow/models/evaluation/base.py
964f5ab75098c55f028f8acfeeae05df35ea68d5
mlflow
5
155,526
181
17
71
743
58
0
298
816
percentile
Replace `interpolation` with `method` and `method` with `internal_method` (#8525) Following the change in numpy 1.22.0 Co-authored-by: James Bourbeau <[email protected]>
https://github.com/dask/dask.git
def percentile(a, q, method="linear", internal_method="default", **kwargs):
    from .dispatch import percentile_lookup as _percentile
    from .utils import array_safe, meta_from_array

    allowed_internal_methods = ["default", "dask", "tdigest"]

    if method in allowed_internal_methods:
        warnings.warn(
            "In Dask 2022.1.0, the `method=` argument was renamed to `internal_method=`",
            FutureWarning,
        )
        internal_method = method

    if "interpolation" in kwargs:
        warnings.warn(
            "In Dask 2022.1.0, the `interpolation=` argument to percentile was renamed to "
            "`method= ` ",
            FutureWarning,
        )
        method = kwargs.pop("interpolation")

    if kwargs:
        raise TypeError(
            f"percentile() got an unexpected keyword argument {kwargs.keys()}"
        )

    if not a.ndim == 1:
        raise NotImplementedError("Percentiles only implemented for 1-d arrays")
    if isinstance(q, Number):
        q = [q]
    q = array_safe(q, like=meta_from_array(a))
    token = tokenize(a, q, method)

    dtype = a.dtype
    if np.issubdtype(dtype, np.integer):
        dtype = (array_safe([], dtype=dtype, like=meta_from_array(a)) / 0.5).dtype
    meta = meta_from_array(a, dtype=dtype)

    if internal_method not in allowed_internal_methods:
        raise ValueError(
            f"`internal_method=` must be one of {allowed_internal_methods}"
        )

    # Allow using t-digest if method is allowed and dtype is of floating or integer type
    if (
        internal_method == "tdigest"
        and method == "linear"
        and (np.issubdtype(dtype, np.floating) or np.issubdtype(dtype, np.integer))
    ):
        from dask.utils import import_required

        import_required(
            "crick", "crick is a required dependency for using the t-digest method."
        )

        name = "percentile_tdigest_chunk-" + token
        dsk = {
            (name, i): (_tdigest_chunk, key) for i, key in enumerate(a.__dask_keys__())
        }

        name2 = "percentile_tdigest-" + token
        dsk2 = {(name2, 0): (_percentiles_from_tdigest, q, sorted(dsk))}

    # Otherwise use the custom percentile algorithm
    else:
        # Add 0 and 100 during calculation for more robust behavior (hopefully)
        calc_q = np.pad(q, 1, mode="constant")
        calc_q[-1] = 100
        name = "percentile_chunk-" + token
        dsk = {
            (name, i): (_percentile, key, calc_q, method)
            for i, key in enumerate(a.__dask_keys__())
        }

        name2 = "percentile-" + token
        dsk2 = {
            (name2, 0): (
                merge_percentiles,
                q,
                [calc_q] * len(a.chunks[0]),
                sorted(dsk),
                method,
            )
        }

    dsk = merge(dsk, dsk2)
    graph = HighLevelGraph.from_collections(name2, dsk, dependencies=[a])
    return Array(graph, name2, chunks=((len(q),),), meta=meta)
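A usage sketch of the renamed keyword (assumes dask >= 2022.1.0 is installed; the t-digest path additionally requires the `crick` package):

# Usage sketch for the renamed keyword.
import dask.array as da

x = da.random.random(10_000, chunks=1_000)
p = da.percentile(x, [25, 50, 75])  # default internal method
# p_td = da.percentile(x, [25, 50, 75], internal_method="tdigest")  # needs `crick`
print(p.compute())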
460
percentile.py
Python
dask/array/percentile.py
3c46e89aea2af010e69049cd638094fea2ddd576
dask
14
19,867
26
11
6
58
10
0
31
90
_dictionary
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _dictionary(self) -> Dict[str, Any]:
    # NOTE: Dictionaries are not populated if not loaded. So, conditionals
    # are not needed here.
    retval = {}

    for variant in OVERRIDE_ORDER:
        retval.update(self._config[variant])

    return retval
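Later variants in `OVERRIDE_ORDER` win because `dict.update` overwrites earlier keys. A tiny standalone illustration (the variant names mirror pip's configuration kinds; the values are hypothetical):

# Standalone illustration of the override merge above (hypothetical values).
OVERRIDE_ORDER = ["global", "user", "site", "env", "env-var"]
config = {
    "global": {"global.index-url": "https://pypi.org/simple"},
    "user": {"global.index-url": "https://mirror.example/simple"},
    "site": {}, "env": {}, "env-var": {},
}

retval = {}
for variant in OVERRIDE_ORDER:
    retval.update(config[variant])
print(retval)  # the user-level value overrides the global one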
35
configuration.py
Python
pipenv/patched/notpip/_internal/configuration.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
248,619
18
9
10
115
10
0
20
91
test_second_upgrade_from_same_user
Add more tests for room upgrades (#13074) Signed-off-by: Sean Quah <[email protected]>
https://github.com/matrix-org/synapse.git
def test_second_upgrade_from_same_user(self) -> None:
    channel1 = self._upgrade_room()
    self.assertEqual(200, channel1.code, channel1.result)

    channel2 = self._upgrade_room(expire_cache=False)
    self.assertEqual(200, channel2.code, channel2.result)

    self.assertEqual(
        channel1.json_body["replacement_room"],
        channel2.json_body["replacement_room"],
    )
72
test_upgrade_room.py
Python
tests/rest/client/test_upgrade_room.py
99d3931974e65865d1102ee79d7b7e2b017a3180
synapse
1
33,812
95
13
20
280
22
0
128
336
shift_tokens_right
Update serving signatures and make sure we actually use them (#19034) * Override save() to use the serving signature as the default * Replace int32 with int64 in all our serving signatures * Remember one very important line so as not to break every test at once * Dtype fix for TFLED * dtype fix for shift_tokens_right in general * Dtype fixes in mBART and RAG * Fix dtypes for test_unpack_inputs * More dtype fixes * Yet more mBART + RAG dtype fixes * Yet more mBART + RAG dtype fixes * Add a check that the model actually has a serving method
https://github.com/huggingface/transformers.git
def shift_tokens_right(self, input_ids, start_token_id=None):
    if start_token_id is None:
        start_token_id = self.generator.config.decoder_start_token_id
        assert start_token_id is not None, (
            "self.generator.config.decoder_start_token_id has to be defined. In Rag we commonly use Bart as"
            " generator, see Bart docs for more information"
        )

    pad_token_id = self.generator.config.pad_token_id
    assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined."

    start_tokens = tf.fill((shape_list(input_ids)[0], 1), tf.cast(start_token_id, input_ids.dtype))
    shifted_input_ids = tf.concat([start_tokens, input_ids[:, :-1]], -1)

    # replace possible -100 values in labels by `pad_token_id`
    shifted_input_ids = tf.where(
        shifted_input_ids == -100,
        tf.fill(shape_list(shifted_input_ids), tf.cast(pad_token_id, input_ids.dtype)),
        shifted_input_ids,
    )

    # "Verify that `labels` has only positive values and -100"
    assert_gte0 = tf.debugging.assert_greater_equal(shifted_input_ids, tf.cast(0, shifted_input_ids.dtype))

    # Make sure the assertion op is called by wrapping the result in an identity no-op
    with tf.control_dependencies([assert_gte0]):
        shifted_input_ids = tf.identity(shifted_input_ids)

    return shifted_input_ids


# nll stands for 'negative log likelihood'
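The dtype fix casts the start and pad tokens to the dtype of `input_ids` rather than assuming int32. The shift itself is simple; here is a plain NumPy sketch of the same logic (framework-agnostic, hypothetical token values):

# Plain NumPy sketch of the shift-right logic above.
import numpy as np

def shift_right(input_ids, start_token_id, pad_token_id):
    start = np.full((input_ids.shape[0], 1), start_token_id, dtype=input_ids.dtype)
    shifted = np.concatenate([start, input_ids[:, :-1]], axis=-1)
    # labels may use -100 for ignored positions; map those to the pad token
    return np.where(shifted == -100, pad_token_id, shifted).astype(input_ids.dtype)

labels = np.array([[5, -100, 7, 8]], dtype=np.int64)
print(shift_right(labels, start_token_id=2, pad_token_id=1))  # [[2 5 1 7]]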
179
modeling_tf_rag.py
Python
src/transformers/models/rag/modeling_tf_rag.py
2322eb8e2f9765cb73f59b324cc46a0e9cfe803f
transformers
2
8,165
46
13
23
276
29
0
65
162
test_render_config
Added tests for init_config and render_config CLI commands (#2551)
https://github.com/ludwig-ai/ludwig.git
def test_render_config(tmpdir):
    user_config_path = os.path.join(tmpdir, "config.yaml")
    input_features = [
        number_feature(),
        number_feature(),
        category_feature(encoder={"vocab_size": 3}),
        category_feature(encoder={"vocab_size": 3}),
    ]
    output_features = [category_feature(decoder={"vocab_size": 3})]
    user_config = {
        INPUT_FEATURES: input_features,
        OUTPUT_FEATURES: output_features,
    }
    with open(user_config_path, "w") as f:
        yaml.dump(user_config, f)

    output_config_path = os.path.join(tmpdir, "rendered.yaml")
    _run_ludwig("render_config", config=user_config_path, output=output_config_path)

    rendered_config = load_yaml(output_config_path)
    assert len(rendered_config[INPUT_FEATURES]) == len(user_config[INPUT_FEATURES])
    assert len(rendered_config[OUTPUT_FEATURES]) == len(user_config[OUTPUT_FEATURES])
    assert TRAINER in rendered_config
    assert COMBINER in rendered_config
    assert PREPROCESSING in rendered_config
170
test_cli.py
Python
tests/integration_tests/test_cli.py
5a25d1f2a914c849cf978573c5993411151bc447
ludwig
1
167,440
23
10
19
100
16
0
27
70
parse_date_fields
TYP: more return annotations for io/* (#47524) * TYP: more return annotations for io/* * import future
https://github.com/pandas-dev/pandas.git
def parse_date_fields(year_col, month_col, day_col) -> npt.NDArray[np.object_]:
    warnings.warn(
        ,  # (deprecation message elided in this source dump)  # noqa: E501
        FutureWarning,
        stacklevel=find_stack_level(),
    )

    year_col = _maybe_cast(year_col)
    month_col = _maybe_cast(month_col)
    day_col = _maybe_cast(day_col)
    return parsing.try_parse_year_month_day(year_col, month_col, day_col)
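The helper is deprecated (its warning text is elided in this dump); the usual replacement is to build datetimes from the columns directly. A sketch:

# Sketch of the modern replacement: compose the datetimes with pd.to_datetime.
import pandas as pd

df = pd.DataFrame({"year": [2021, 2022], "month": [1, 6], "day": [15, 30]})
dates = pd.to_datetime(df[["year", "month", "day"]])
print(dates.tolist())  # [Timestamp('2021-01-15 00:00:00'), Timestamp('2022-06-30 00:00:00')]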
63
date_converters.py
Python
pandas/io/date_converters.py
e48c9c3973286e257f6da1966c91806d86b917e0
pandas
1
259,239
51
16
15
311
26
1
65
116
test_ohe_infrequent_two_levels_drop_frequent
ENH Adds infrequent categories to OneHotEncoder (#16018) * ENH Completely adds infrequent categories * STY Linting * STY Linting * DOC Improves wording * DOC Lint * BUG Fixes * CLN Address comments * CLN Address comments * DOC Uses math to description float min_frequency * DOC Adds comment regarding drop * BUG Fixes method name * DOC Clearer docstring * TST Adds more tests * FIX Fixes mege * CLN More pythonic * CLN Address comments * STY Flake8 * CLN Address comments * DOC Fix * MRG * WIP * ENH Address comments * STY Fix * ENH Use functiion call instead of property * ENH Adds counts feature * CLN Rename variables * DOC More details * CLN Remove unneeded line * CLN Less lines is less complicated * CLN Less diffs * CLN Improves readiabilty * BUG Fix * CLN Address comments * TST Fix * CLN Address comments * CLN Address comments * CLN Move docstring to userguide * DOC Better wrapping * TST Adds test to handle_unknown='error' * ENH Spelling error in docstring * BUG Fixes counter with nan values * BUG Removes unneeded test * BUG Fixes issue * ENH Sync with main * DOC Correct settings * DOC Adds docstring * DOC Immprove user guide * DOC Move to 1.0 * DOC Update docs * TST Remove test * DOC Update docstring * STY Linting * DOC Address comments * ENH Neater code * DOC Update explaination for auto * Update sklearn/preprocessing/_encoders.py Co-authored-by: Roman Yurchak <[email protected]> * TST Uses docstring instead of comments * TST Remove call to fit * TST Spelling error * ENH Adds support for drop + infrequent categories * ENH Adds infrequent_if_exist option * DOC Address comments for user guide * DOC Address comments for whats_new * DOC Update docstring based on comments * CLN Update test with suggestions * ENH Adds computed property infrequent_categories_ * DOC Adds where the infrequent column is located * TST Adds more test for infrequent_categories_ * DOC Adds docstring for _compute_drop_idx * CLN Moves _convert_to_infrequent_idx into its own method * TST Increases test coverage * TST Adds failing test * CLN Careful consideration of dropped and inverse_transform * STY Linting * DOC Adds docstrinb about dropping infrequent * DOC Uses only * DOC Numpydoc * TST Includes test for get_feature_names_out * DOC Move whats new * DOC Address docstring comments * DOC Docstring changes * TST Better comments * TST Adds check for handle_unknown='ignore' for infrequent * CLN Make _infrequent_indices private * CLN Change min_frequency default to None * DOC Adds comments * ENH adds support for max_categories=1 * ENH Describe lexicon ordering for ties * DOC Better docstring * STY Fix * CLN Error when explicity dropping an infrequent category * STY Grammar Co-authored-by: Joel Nothman <[email protected]> Co-authored-by: Roman Yurchak <[email protected]> Co-authored-by: Guillaume Lemaitre <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def test_ohe_infrequent_two_levels_drop_frequent(drop):
    X_train = np.array([["a"] * 5 + ["b"] * 20 + ["c"] * 10 + ["d"] * 3]).T
    ohe = OneHotEncoder(
        handle_unknown="infrequent_if_exist", sparse=False, max_categories=2, drop=drop
    ).fit(X_train)
    assert_array_equal(ohe.drop_idx_, [0])

    X_test = np.array([["b"], ["c"]])
    X_trans = ohe.transform(X_test)
    assert_allclose([[0], [1]], X_trans)

    # TODO(1.2) Remove when get_feature_names is removed
    feature_names = ohe.get_feature_names()
    assert_array_equal(["x0_infrequent_sklearn"], feature_names)

    feature_names = ohe.get_feature_names_out()
    assert_array_equal(["x0_infrequent_sklearn"], feature_names)

    X_inverse = ohe.inverse_transform(X_trans)
    assert_array_equal([["b"], ["infrequent_sklearn"]], X_inverse)


@pytest.mark.parametrize("drop", [["a"], ["d"]])
@pytest.mark.parametrize("drop", [["a"], ["d"]])
165
test_encoders.py
Python
sklearn/preprocessing/tests/test_encoders.py
7f0006c8aad1a09621ad19c3db19c3ff0555a183
scikit-learn
1
281,449
72
15
40
419
42
0
90
286
get_newsletters
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <[email protected]> Co-authored-by: Chavithra PARANA <[email protected]> Co-authored-by: james <[email protected]> Co-authored-by: jose-donato <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def get_newsletters() -> pd.DataFrame:
    urls = [
        "https://defiweekly.substack.com/archive",
        "https://newsletter.thedefiant.io/archive",
        "https://thedailygwei.substack.com/archive",
        "https://todayindefi.substack.com/archive",
        "https://newsletter.banklesshq.com/archive",
        "https://defislate.substack.com/archive",
    ]

    threads = len(urls)
    newsletters = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
        for newsletter in executor.map(scrape_substack, urls):
            try:
                newsletters.append(pd.DataFrame(newsletter))
            except KeyError as e:
                console.print(e, "\n")
                continue

    df = pd.concat(newsletters, ignore_index=True)
    df.columns = ["Title", "Link", "Date"]

    df["Title"] = df["Title"].apply(lambda x: "".join(i for i in x if ord(i) < 128))
    df["Date"] = df["Date"].apply(
        lambda x: parser.parse(x).strftime("%Y-%m-%d %H:%M:%S")
    )
    df["Title"] = df["Title"].apply(
        lambda x: "\n".join(textwrap.wrap(x, width=50)) if isinstance(x, str) else x
    )
    return (
        df[["Title", "Date", "Link"]]
        .sort_values(by="Date", ascending=False)
        .reset_index(drop="index")
    )
242
substack_model.py
Python
gamestonk_terminal/cryptocurrency/defi/substack_model.py
82747072c511beb1b2672846ae2ee4aec53eb562
OpenBBTerminal
6
259,455
20
13
7
92
11
0
24
77
test_family_deprecation
ENH migrate GLMs / TweedieRegressor to linear loss (#22548) Co-authored-by: Olivier Grisel <[email protected]> Co-authored-by: Thomas J. Fan <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def test_family_deprecation(est, family):
    with pytest.warns(FutureWarning, match="`family` was deprecated"):
        if isinstance(family, str):
            assert est.family == family
        else:
            assert est.family.__class__ == family.__class__
            assert est.family.power == family.power
56
test_glm.py
Python
sklearn/linear_model/_glm/tests/test_glm.py
75a94f518f7bd7d0bf581ffb67d9f961e3c4efbc
scikit-learn
2
266,324
24
9
7
89
13
0
28
57
load_module
ansible-test - Fix plugin loading. This fixes a traceback when loading plugins that use dataclasses.
https://github.com/ansible/ansible.git
def load_module(path, name):  # type: (str, str) -> None
    if name in sys.modules:
        return

    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module

    # noinspection PyUnresolvedReferences
    spec.loader.exec_module(module)
54
util.py
Python
test/lib/ansible_test/_internal/util.py
7e814dd4db22d94ee61aaa57b7be630a9cdb598e
ansible
2
249,400
50
12
22
227
15
0
87
283
test_list_user_devices
Implement MSC3852: Expose `last_seen_user_agent` to users for their own devices; also expose to Admin API (#13549)
https://github.com/matrix-org/synapse.git
def test_list_user_devices(self) -> None:
    # Request all devices of "other user"
    channel = self.make_request(
        "GET",
        f"/_synapse/admin/v2/users/{self.other_user_id}/devices",
        access_token=self.admin_user_token,
    )
    self.assertEqual(200, channel.code, msg=channel.json_body)

    # Double-check we got the single device expected
    user_devices = channel.json_body["devices"]
    self.assertEqual(len(user_devices), 1)
    self.assertEqual(channel.json_body["total"], 1)

    # Check that all the attributes of the device reported are as expected.
    self._validate_attributes_of_device_response(user_devices[0])

    # Request just a single device for "other user" by its ID
    channel = self.make_request(
        "GET",
        f"/_synapse/admin/v2/users/{self.other_user_id}/devices/"
        f"{self.other_user_device_id}",
        access_token=self.admin_user_token,
    )
    self.assertEqual(200, channel.code, msg=channel.json_body)

    # Check that all the attributes of the device reported are as expected.
    self._validate_attributes_of_device_response(channel.json_body)
127
test_user.py
Python
tests/rest/admin/test_user.py
f9f03426de338ae1879e174f63adf698bbfc3a4b
synapse
1
42,585
14
12
5
79
7
0
16
59
wiki_dict
Support both iso639-3 codes and BCP-47 language tags (#3060) * Add support for iso639-3 language codes * Add support for retired language codes * Move langnames.py to the top-level * Add langcode() function * Add iso639retired dictionary * Improve wrapper functions * Add module docstring with doctest * Add 2-letter language codes * Add regular expression check * Improve inverse lookup of retired codes * Support BCP-47 * Avoid deprecated langcodes * Set stack level for warnings to warn on the langname call Now it throws e.g. ``` ...\nltk_3060.py:9: UserWarning: Shortening 'smo' to 'sm' print(f"{lang}: {langname(code)}") ``` Rather than ``` ...\nltk\langnames.py:64: UserWarning: Shortening zha to za warn(f"Shortening {code} to {code2}") ``` * Dict key membership is equivalent to dict membership * Resolve bug: subtag -> tag * Capitalize BCP47 in CorpusReader name * Reimplement removed type hint changes from #3081 Co-authored-by: Tom Aarsen <[email protected]>
https://github.com/nltk/nltk.git
def wiki_dict(self, lines):
    return {
        pair[1]: pair[0].split("/")[-1]
        for pair in [line.strip().split("\t") for line in lines]
    }
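A quick illustration of the comprehension with hypothetical tab-separated lines of the shape the reader consumes (`<url-or-path>\t<language name>`):

# Illustration of wiki_dict's comprehension (hypothetical input lines).
lines = [
    "https://en.wikipedia.org/wiki/Afar_language\tAfar\n",
    "https://en.wikipedia.org/wiki/Abkhaz_language\tAbkhaz\n",
]
result = {
    pair[1]: pair[0].split("/")[-1]
    for pair in [line.strip().split("\t") for line in lines]
}
print(result)  # {'Afar': 'Afar_language', 'Abkhaz': 'Abkhaz_language'}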
48
bcp47.py
Python
nltk/corpus/reader/bcp47.py
f019fbedb3d2b6a2e6b58ec1b38db612b106568b
nltk
3
267,625
51
12
11
106
11
1
68
202
_fail_on_undefined
Move undefined check from concat to finalize (#78165) * Move undefined check from concat to finalize In the classic Jinja2's Environment str() is called on the return value of the finalize method to potentially trigger the undefined error. That is not the case in NativeEnvironment where string conversion of the return value is not desired. We workaround that by checking for Undefined in all of our concat functions. It seems simpler to do it earlier in the finalize method(s) instead. As a side-effect it fixes an undefined variable detection in imported templates. Fixes #78156 ci_complete * Fix sanity * ... * sigh
https://github.com/ansible/ansible.git
def _fail_on_undefined(data):
    if isinstance(data, Mapping):
        for value in data.values():
            _fail_on_undefined(value)
    elif is_sequence(data):
        for item in data:
            _fail_on_undefined(item)
    else:
        if isinstance(data, StrictUndefined):
            # To actually raise the undefined exception we need to
            # access the undefined object otherwise the exception would
            # be raised on the next access which might not be properly
            # handled.
            # See https://github.com/ansible/ansible/issues/52158
            # and StrictUndefined implementation in upstream Jinja2.
            str(data)

    return data


@_unroll_iterator
@_unroll_iterator
58
__init__.py
Python
lib/ansible/template/__init__.py
17d52c8d647c4181922db42c91dc2828cdd79387
ansible
6
276,923
41
17
14
128
7
0
48
122
ask_to_proceed_with_overwrite
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def ask_to_proceed_with_overwrite(filepath):
    overwrite = (
        input("[WARNING] %s already exists - overwrite? " "[y/n]" % (filepath))
        .strip()
        .lower()
    )
    while overwrite not in ("y", "n"):
        overwrite = (
            input('Enter "y" (overwrite) or "n" ' "(cancel).").strip().lower()
        )
    if overwrite == "n":
        return False
    print_msg("[TIP] Next time specify overwrite=True!")
    return True
67
io_utils.py
Python
keras/utils/io_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
3
107,813
33
15
12
148
8
0
51
140
_strip_comment
Support quoted strings in matplotlibrc This enables using the comment character # within strings. Closes #19288. Superseeds #22565.
https://github.com/matplotlib/matplotlib.git
def _strip_comment(s):
    pos = 0
    while True:
        quote_pos = s.find('"', pos)
        hash_pos = s.find('#', pos)
        if quote_pos < 0:
            without_comment = s if hash_pos < 0 else s[:hash_pos]
            return without_comment.strip()
        elif 0 <= hash_pos < quote_pos:
            return s[:hash_pos].strip()
        else:
            pos = s.find('"', quote_pos + 1) + 1  # behind closing quote
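The point of the loop is that a `#` inside double quotes no longer starts a comment; a couple of worked calls (assuming the function above is in scope, sample values hypothetical):

# Worked calls: quoted '#' survives while trailing comments are cut.
print(_strip_comment('axes.titlesize: 12  # pt'))        # -> 'axes.titlesize: 12'
print(_strip_comment('text.mark: "see #19288"  # note')) # -> 'text.mark: "see #19288"'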
89
__init__.py
Python
lib/matplotlib/cbook/__init__.py
7c378a8f3f30ce57c874a851f3af8af58f1ffdf6
matplotlib
5
316,438
9
10
21
45
6
0
9
18
test_default_discovery_abort_on_new_unique_flow
Search/replace RESULT_TYPE_* by FlowResultType enum (#74642)
https://github.com/home-assistant/core.git
async def test_default_discovery_abort_on_new_unique_flow(hass, manager):
    mock_integration(hass, MockModule("comp"))
    mock_entity_platform(hass, "config_flow.comp", None)
165
test_config_entries.py
Python
tests/test_config_entries.py
7cd68381f1d4f58930ffd631dfbfc7159d459832
core
1
309,019
7
10
2
38
2
0
7
13
_rear_left_tire_pressure_supported
Use SensorEntityDescription in Mazda integration (#63423) * Use SensorEntityDescription in Mazda integration * Change lambdas to functions * Minor fixes * Address review comments
https://github.com/home-assistant/core.git
def _rear_left_tire_pressure_supported(data):
    return data["status"]["tirePressure"]["rearLeftTirePressurePsi"] is not None
20
sensor.py
Python
homeassistant/components/mazda/sensor.py
8915b73f724b58e93284a823c0d2e99fbfc13e84
core
1
286,572
33
10
24
139
15
0
37
188
_get_base_market_data_info
[IMPROVE] Fix Docstring formatting/Fix missing, incomplete type hints (#3412) * Fixes * Update stocks_helper.py * update git-actions set-output to new format * Update stocks_helper.py * Update terminal_helper.py * removed LineAnnotateDrawer from qa_view * lint * few changes * updates * sdk auto gen modules done * Update stocks_helper.py * updates to changed imports, and remove first sdk_modules * Update generate_sdk.py * Update generate_sdk.py * pylint * revert stocks_helper * Update generate_sdk.py * Update sdk.py * Update generate_sdk.py * full auto generation, added sdk.py/controllers creation * missed enable forecasting * added running black in subprocess after sdk files generation completes * removed deleted sdk_arg_logger * comment out tests * property doc fix * clean up * Update generate_sdk.py * make trailmap classes useable for doc generation * Update generate_sdk.py * added lineon to trailmap class for linking to func in markdown * changed lineon to dict * added full_path to trailmap for linking in docs * updated portfolio * feat: initial files * feat: added meta head * feat: added funcdef * added func_def to trailmap attributes for markdown in docs, added missing type hints to covid functions * feat: added view and merged with jaun * Update generate_sdk.py * Update generate_sdk.py * Update generate_sdk.py * Update generate_sdk.py * init * fix returns * fix: random stuff * fix: random * fixed encoding issue on windows * fix: generate tabs * update * Update generate_sdk_markdown.py * Create .pydocstyle.ini * added type hint classes for views * fixes * alt, ba * alt-economy * Update finviz_compare_model.py * fixs * Update substack_model.py * Update generate_sdk.py * last of my section * porfolio * po * Update optimizer_model.py * fixing more things * few more * keys done * update * fixes * Update generate_sdk_markdown.py * Update generate_sdk_markdown.py * mypy forecast fix * Update generate_sdk_markdown.py * Update generate_sdk_markdown.py * Update generate_sdk_markdown.py * fixes * forecast fixes * one more fix * Update coinbase_model.py * Update generate_sdk_markdown.py Co-authored-by: Colin Delahunty <[email protected]> Co-authored-by: James Maslek <[email protected]> Co-authored-by: jose-donato <[email protected]> Co-authored-by: andrewkenreich <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def _get_base_market_data_info(self) -> Union[Dict[str, Any], Any]:
    market_dct = {}
    market_data = self.coin.get("market_data", {})
    for stat in [
        "total_supply",
        "max_supply",
        "circulating_supply",
        "price_change_percentage_24h",
        "price_change_percentage_7d",
        "price_change_percentage_30d",
    ]:
        market_dct[stat] = market_data.get(stat)
    prices = create_dictionary_with_prefixes(
        ["current_price"], market_data, DENOMINATION
    )
    market_dct.update(prices)
    return market_dct
84
pycoingecko_model.py
Python
openbb_terminal/cryptocurrency/due_diligence/pycoingecko_model.py
59d8b36bb0467a1a99513b10e8b8471afaa56fd6
OpenBBTerminal
2
276,919
20
8
4
31
5
0
21
43
is_interactive_logging_enabled
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def is_interactive_logging_enabled():
    # Use `getattr` in case `INTERACTIVE_LOGGING`
    # does not have the `enable` attribute.
    return getattr(
        INTERACTIVE_LOGGING, "enable", keras_logging.INTERACTIVE_LOGGING_DEFAULT
    )
16
io_utils.py
Python
keras/utils/io_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
26,585
18
12
5
79
10
0
21
44
check_permissions_for_custom_prices
Custom prices (#9393) * Add price_override to CheckoutLine model * Allow to define custom price in checkout mutations * Apply code review suggestions * Use custom price when generating checkout payload * Use price override when calculating prices for checkout * Update populatedb - create checkout with custom prices * Fix schema.graphql file * Make quantity optional in `CheckoutLinesUpdate` mutation (#9430) * Make quantity optional in CheckoutLinesUpdate mutation * Refactor add_variants_to_checkout checkout utils * Update changelog * Update quantity field description in CheckoutLineUpdateInput
https://github.com/saleor/saleor.git
def check_permissions_for_custom_prices(app, lines):
    # Only apps holding the HANDLE_CHECKOUTS permission may override line prices.
    if any(["price" in line for line in lines]) and (
        not app or not app.has_perm(CheckoutPermissions.HANDLE_CHECKOUTS)
    ):
        raise PermissionDenied(permissions=[CheckoutPermissions.HANDLE_CHECKOUTS])
48
utils.py
Python
saleor/graphql/checkout/mutations/utils.py
620569b3a2466e8dda80df20a11b99ec0bec8c7c
saleor
5
249,085
37
10
23
191
17
0
52
246
test_update_no_display_name
Use literals in place of `HTTPStatus` constants in tests (#13469)
https://github.com/matrix-org/synapse.git
def test_update_no_display_name(self) -> None:
    # Set initial display name.
    update = {"display_name": "new display"}
    self.get_success(
        self.handler.update_device(
            self.other_user, self.other_user_device_id, update
        )
    )

    channel = self.make_request(
        "PUT",
        self.url,
        access_token=self.admin_user_tok,
    )

    self.assertEqual(200, channel.code, msg=channel.json_body)

    # Ensure the display name was not updated.
    channel = self.make_request(
        "GET",
        self.url,
        access_token=self.admin_user_tok,
    )

    self.assertEqual(200, channel.code, msg=channel.json_body)
    self.assertEqual("new display", channel.json_body["display_name"])
119
test_device.py
Python
tests/rest/admin/test_device.py
c97042f7eef3748e17c90e48a4122389a89c4735
synapse
1
314,793
7
8
3
42
6
1
7
12
device2_zone_slave
Rewrite SoundTouch tests to use mocked payloads (#72984)
https://github.com/home-assistant/core.git
def device2_zone_slave() -> str:
    return load_fixture("soundtouch/device2_getZone_slave.xml")


@pytest.fixture(scope="session")
@pytest.fixture(scope="session")
12
conftest.py
Python
tests/components/soundtouch/conftest.py
efbd47c828c6c2e1cd967df2a4cefd2b00c60c25
core
1
300,061
18
9
7
95
11
0
24
45
test_entity_registry
Add buttons to Ring chime devices to play ding and motion chimes (#71370) Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
async def test_entity_registry(hass, requests_mock):
    await setup_platform(hass, Platform.BUTTON)
    entity_registry = er.async_get(hass)

    entry = entity_registry.async_get("button.downstairs_play_chime_ding")
    assert entry.unique_id == "123456-play-chime-ding"

    entry = entity_registry.async_get("button.downstairs_play_chime_motion")
    assert entry.unique_id == "123456-play-chime-motion"
53
test_button.py
Python
tests/components/ring/test_button.py
c22cf3b3d2e4d079499473500abf8a73e5b80163
core
1
308,131
74
15
33
331
22
0
90
464
test_no_mode_no_state
Add deconz current hvac operation to thermostate based on "state" (#59989) * deconz - add current hvac operation to thermostate based on "state" * deconz - extend current hvac operation to thermostate based on "state" and "mode" * Add tests for current hvac action * Add boost mode as special case * format using Black * sort imports * Add test for device with mode none and state none * Update homeassistant/components/deconz/climate.py Co-authored-by: Robert Svensson <[email protected]> * Fix test_climate.py test_no_mode_no_state * Add test for boost mode Co-authored-by: Robert Svensson <[email protected]>
https://github.com/home-assistant/core.git
async def test_no_mode_no_state(hass, aioclient_mock, mock_deconz_websocket):
    data = {
        "sensors": {
            "0": {
                "config": {
                    "battery": 25,
                    "heatsetpoint": 2222,
                    "mode": None,
                    "preset": "auto",
                    "offset": 0,
                    "on": True,
                    "reachable": True,
                },
                "ep": 1,
                "etag": "074549903686a77a12ef0f06c499b1ef",
                "lastseen": "2020-11-27T13:45Z",
                "manufacturername": "Zen Within",
                "modelid": "Zen-01",
                "name": "Zen-01",
                "state": {"lastupdated": "none", "on": None, "temperature": 2290},
                "type": "ZHAThermostat",
                "uniqueid": "00:24:46:00:00:11:6f:56-01-0201",
            }
        }
    }
    with patch.dict(DECONZ_WEB_REQUEST, data):
        config_entry = await setup_deconz_integration(hass, aioclient_mock)

    assert len(hass.states.async_all()) == 2

    climate_thermostat = hass.states.get("climate.zen_01")
    assert climate_thermostat.state is STATE_OFF
    assert climate_thermostat.attributes["preset_mode"] is DECONZ_PRESET_AUTO
    assert climate_thermostat.attributes["hvac_action"] is HVACAction.IDLE

    # Verify service calls
    mock_deconz_put_request(aioclient_mock, config_entry.data, "/sensors/0/config")
181
test_climate.py
Python
tests/components/deconz/test_climate.py
7a6897c7578dffd6b67f57747ebd81b67b153e01
core
1
303,576
24
12
6
83
9
0
28
49
test_parse_mapping_physical_address
Add tests for the HDMI-CEC integration (#75094) * Add basic tests to the HDMI-CEC component * Add tests for the HDMI-CEC switch component * Add test for watchdog code * Start adding tests for the HDMI-CEC media player platform Also some cleanup and code move. * Add more tests for media_player And cleanup some switch tests. * Improve xfail message for features * Align test pyCEC dependency with main dependency * Make fixtures snake_case * Cleanup call asserts * Cleanup service tests * fix issues with media player tests * Cleanup MockHDMIDevice class * Cleanup watchdog tests * Add myself as code owner for the HDMI-CEC integration * Fix async fire time changed time jump * Fix event api sync context * Delint tests * Parametrize watchdog test Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
def test_parse_mapping_physical_address(mapping, expected):
    result = parse_mapping(mapping)
    result = [
        (r[0], str(r[1]) if isinstance(r[1], PhysicalAddress) else r[1])
        for r in result
    ]
    assert result == expected


# Test Setup
55
test_init.py
Python
tests/components/hdmi_cec/test_init.py
7cd4be1310b3f76398b4404d3f4ecb26b9533cee
core
3
176,214
85
10
28
418
23
0
169
423
test_from_scipy_sparse_array_parallel_edges
Use scipy.sparse array datastructure (#5139) * Step 1: use sparse arrays in nx.to_scipy_sparse_matrix. Seems like a reasonable place to start. nx.to_scipy_sparse_matrix is one of the primary interfaces to scipy.sparse from within NetworkX. * 1: Use np.outer instead of mult col/row vectors Fix two instances in modularitymatrix where a new 2D array was being created via an outer product of two \"vectors\". In the matrix case, this was a row vector \* a column vector. In the array case this can be disambiguated by being explicit with np.outer. * Update _transition_matrix in laplacianmatrix module - A few instances of matrix multiplication operator - Add np.newaxis + transpose to get shape right for broadcasting - Explicitly convert e.g. sp.sparse.spdiags to a csr_array. * Update directed_combinitorial_laplacian w/ sparse array. - Wrap spdiags in csr_array and update matmul operators. * Rm matrix-specific code from lgc and hmn modules - Replace .A call with appropriate array semantics - wrap sparse.diags in csr_array. * Change hits to use sparse array semantics. - Replace * with @ - Remove superfluous calls to flatten. * Update sparse matrix usage in layout module. - Simplify lil.getrowview call - Wrap spdiags in csr_array. * lil_matrix -> lil_array in graphmatrix.py. * WIP: Start working on algebraic connectivity module. * Incorporate auth mat varname feedback. * Revert 1D slice and comment for 1D sparse future. * Add TODOs: rm csr_array wrapper around spdiags etc. * WIP: cleanup algebraicconn: tracemin_fiedler. * Typo. * Finish reviewing algebraicconnectivity. * Convert bethe_hessian matrix to use sparse arrays. * WIP: update laplacian. Update undirected laplacian functions. * WIP: laplacian - add comment about _transition_matrix return types. * Finish laplacianmatrix review. * Update attrmatrix. * Switch to official laplacian function. * Update pagerank to use sparse array. * Switch bipartite matrix to sparse arrays. * Check from_scipy_sparse_matrix works with arrays. Modifies test suite. * Apply changes from review. * Fix failing docstring tests. * Fix missing axis for in-place multiplication. * Use scipy==1.8rc2 * Use matrix multiplication * Fix PyPy CI * [MRG] Create plot_subgraphs.py example (#5165) * Create plot_subgraphs.py https://github.com/networkx/networkx/issues/4220 * Update plot_subgraphs.py black * Update plot_subgraphs.py lint plus font_size * Update plot_subgraphs.py added more plots * Update plot_subgraphs.py removed plots from the unit test and added comments * Update plot_subgraphs.py lint * Update plot_subgraphs.py typos fixed * Update plot_subgraphs.py added nodes to the plot of the edges removed that was commented out for whatever reason * Update plot_subgraphs.py revert the latest commit - the line was commented out for a reason - it's broken * Update plot_subgraphs.py fixed node color issue * Update plot_subgraphs.py format fix * Update plot_subgraphs.py forgot to draw the nodes... now fixed * Fix sphinx warnings about heading length. * Update examples/algorithms/plot_subgraphs.py * Update examples/algorithms/plot_subgraphs.py Co-authored-by: Ross Barnowski <[email protected]> Co-authored-by: Dan Schult <[email protected]> * Add traveling salesman problem to example gallery (#4874) Adds an example of the using Christofides to solve the TSP problem to the example galery. Co-authored-by: Ross Barnowski <[email protected]> * Fixed inconsistent documentation for nbunch parameter in DiGraph.edges() (#5037) * Fixed inconsistent documentation for nbunch parameter in DiGraph.edges() * Resolved Requested Changes * Revert changes to degree docstrings. * Update comments in example. * Apply wording to edges method in all graph classes. Co-authored-by: Ross Barnowski <[email protected]> * Compatibility updates from testing with numpy/scipy/pytest rc's (#5226) * Rm deprecated scipy subpkg access. * Use recwarn fixture in place of deprecated pytest pattern. * Rm unnecessary try/except from tests. * Replace internal `close` fn with `math.isclose`. (#5224) * Replace internal close fn with math.isclose. * Fix lines in docstring examples. * Fix Python 3.10 deprecation warning w/ int div. (#5231) * Touchups and suggestions for subgraph gallery example (#5225) * Simplify construction of G with edges rm'd * Rm unused graph attribute. * Shorten categorization by node type. * Simplify node coloring. * Simplify isomorphism check. * Rm unit test. * Rm redundant plotting of each subgraph. * Use new package name (#5234) * Allowing None edges in weight function of bidirectional Dijkstra (#5232) * added following feature also to bidirectional dijkstra: The weight function can be used to hide edges by returning None. * changed syntax for better readability and code duplicate avoidance Co-authored-by: Hohmann, Nikolas <[email protected]> * Add an FAQ about assigning issues. (#5182) * Add FAQ about assigning issues. * Add note about linking issues from new PRs. * Update dev deps (#5243) * Update minor doc issues with tex notation (#5244) * Add FutureWarnings to fns that return sparse matrices - biadjacency_matrix. - bethe_hessian_matrix. - incidence_matrix. - laplacian functions. - modularity_matrix functions. - adjacency_matrix. * Add to_scipy_sparse_array and use it everywhere. Add a new conversion function to preserve array semantics internally while not altering behavior for users. Also adds FutureWarning to to_scipy_sparse_matrix. * Add from_scipy_sparse_array. Supercedes from_scipy_sparse_matrix. * Handle deprecations in separate PR. * Fix docstring examples. Co-authored-by: Mridul Seth <[email protected]> Co-authored-by: Jarrod Millman <[email protected]> Co-authored-by: Andrew Knyazev <[email protected]> Co-authored-by: Dan Schult <[email protected]> Co-authored-by: eskountis <[email protected]> Co-authored-by: Anutosh Bhat <[email protected]> Co-authored-by: NikHoh <[email protected]> Co-authored-by: Hohmann, Nikolas <[email protected]> Co-authored-by: Sultan Orazbayev <[email protected]> Co-authored-by: Mridul Seth <[email protected]>
https://github.com/networkx/networkx.git
def test_from_scipy_sparse_array_parallel_edges(self):
    A = sp.sparse.csr_array([[1, 1], [1, 2]])
    # First, with a simple graph, each integer entry in the adjacency
    # matrix is interpreted as the weight of a single edge in the graph.
    expected = nx.DiGraph()
    edges = [(0, 0), (0, 1), (1, 0)]
    expected.add_weighted_edges_from([(u, v, 1) for (u, v) in edges])
    expected.add_edge(1, 1, weight=2)
    actual = nx.from_scipy_sparse_array(
        A, parallel_edges=True, create_using=nx.DiGraph
    )
    assert graphs_equal(actual, expected)
    actual = nx.from_scipy_sparse_array(
        A, parallel_edges=False, create_using=nx.DiGraph
    )
    assert graphs_equal(actual, expected)
    # Now each integer entry in the adjacency matrix is interpreted as the
    # number of parallel edges in the graph if the appropriate keyword
    # argument is specified.
    edges = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 1)]
    expected = nx.MultiDiGraph()
    expected.add_weighted_edges_from([(u, v, 1) for (u, v) in edges])
    actual = nx.from_scipy_sparse_array(
        A, parallel_edges=True, create_using=nx.MultiDiGraph
    )
    assert graphs_equal(actual, expected)
    expected = nx.MultiDiGraph()
    expected.add_edges_from(set(edges), weight=1)
    # The sole self-loop (edge 0) on vertex 1 should have weight 2.
    expected[1][1][0]["weight"] = 2
    actual = nx.from_scipy_sparse_array(
        A, parallel_edges=False, create_using=nx.MultiDiGraph
    )
    assert graphs_equal(actual, expected)
287
test_convert_scipy.py
Python
networkx/tests/test_convert_scipy.py
5dfd57af2a141a013ae3753e160180b82bec9469
networkx
3
147,843
72
17
40
245
27
1
92
357
train_one_step
[RLlib] Rewrite PPO to use training_iteration + enable DD-PPO for Win32. (#23673)
https://github.com/ray-project/ray.git
def train_one_step(trainer, train_batch, policies_to_train=None) -> Dict:

    config = trainer.config
    workers = trainer.workers
    local_worker = workers.local_worker()
    num_sgd_iter = config.get("num_sgd_iter", 1)
    sgd_minibatch_size = config.get("sgd_minibatch_size", 0)

    learn_timer = trainer._timers[LEARN_ON_BATCH_TIMER]
    with learn_timer:
        # Subsample minibatches (size=`sgd_minibatch_size`) from the
        # train batch and loop through train batch `num_sgd_iter` times.
        if num_sgd_iter > 1 or sgd_minibatch_size > 0:
            info = do_minibatch_sgd(
                train_batch,
                {
                    pid: local_worker.get_policy(pid)
                    for pid in policies_to_train
                    or local_worker.get_policies_to_train(train_batch)
                },
                local_worker,
                num_sgd_iter,
                sgd_minibatch_size,
                [],
            )
        # Single update step using train batch.
        else:
            info = local_worker.learn_on_batch(train_batch)

    learn_timer.push_units_processed(train_batch.count)
    trainer._counters[NUM_ENV_STEPS_TRAINED] += train_batch.count
    trainer._counters[NUM_AGENT_STEPS_TRAINED] += train_batch.agent_steps()

    return info


@DeveloperAPI
@DeveloperAPI
151
train_ops.py
Python
rllib/execution/train_ops.py
00922817b66ee14ba215972a98f416f3d6fef1ba
ray
5
291,630
11
11
4
55
11
0
11
32
device_class
Simplify use of binary sensor device classes in MySensors (#82946)
https://github.com/home-assistant/core.git
def device_class(self) -> BinarySensorDeviceClass | None:
    pres = self.gateway.const.Presentation
    return SENSORS.get(pres(self.child_type).name)
33
binary_sensor.py
Python
homeassistant/components/mysensors/binary_sensor.py
c36dd1778022defb11854532851ee5af10734313
core
1
127,496
6
6
3
22
4
0
6
20
scheme
[Datasets] Rename `PathPartitionScheme` as `Partitioning` (#28397)
https://github.com/ray-project/ray.git
def scheme(self) -> Partitioning:
    return self._scheme
12
partitioning.py
Python
python/ray/data/datasource/partitioning.py
15f30de59327acee491fb0d5a51e5e1f6b194929
ray
1
20,193
6
6
3
22
4
0
6
20
user_state_dir
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def user_state_dir(self) -> str:
    return self.user_data_dir
12
android.py
Python
pipenv/patched/notpip/_vendor/platformdirs/android.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
300,899
48
15
26
242
18
0
69
307
test_deconz_events_bad_unique_id
Clean up accessing device_registry helpers via hass (#72031)
https://github.com/home-assistant/core.git
async def test_deconz_events_bad_unique_id(hass, aioclient_mock, mock_deconz_websocket):
    data = {
        "sensors": {
            "1": {
                "name": "Switch 1 no unique id",
                "type": "ZHASwitch",
                "state": {"buttonevent": 1000},
                "config": {},
            },
            "2": {
                "name": "Switch 2 bad unique id",
                "type": "ZHASwitch",
                "state": {"buttonevent": 1000},
                "config": {"battery": 100},
                "uniqueid": "00:00-00",
            },
        }
    }
    with patch.dict(DECONZ_WEB_REQUEST, data):
        config_entry = await setup_deconz_integration(hass, aioclient_mock)

    device_registry = dr.async_get(hass)

    assert len(hass.states.async_all()) == 1
    assert (
        len(dr.async_entries_for_config_entry(device_registry, config_entry.entry_id))
        == 2
    )
135
test_deconz_event.py
Python
tests/components/deconz/test_deconz_event.py
c3d19f38274935ab8f784e6fbeb7c5959a551074
core
1
118,169
193
13
146
1,388
31
0
596
1,619
test_version_managing
join_learn_process in test comments in migraion env.py
https://github.com/mindsdb/mindsdb.git
def test_version_managing(self, data_handler):
    # set up
    df = pd.DataFrame([
        {'a': 1, 'b': dt.datetime(2020, 1, 1)},
        {'a': 2, 'b': dt.datetime(2020, 1, 2)},
        {'a': 1, 'b': dt.datetime(2020, 1, 3)},
    ])
    self.set_handler(data_handler, name='pg', tables={'tasks': df})

    # ================= retrain cycles =====================

    # create folder
    self.run_sql('create database proj')

    # -- create model --
    self.run_sql(
    )
    self.wait_predictor('proj', 'task_model')

    # check input to data handler
    assert data_handler().native_query.call_args[0][0] == 'select * from tasks'

    # tag works in create model
    ret = self.run_sql('select * from proj.models')
    assert ret['TAG'][0] == 'first'

    # use model
    ret = self.run_sql()
    assert len(ret) == 3
    assert ret.predicted[0] == 42

    # -- retrain predictor with tag --
    data_handler.reset_mock()
    self.run_sql(
    )
    self.wait_predictor('proj', 'task_model', {'tag': 'second'})

    # get current model
    ret = self.run_sql('select * from proj.models')

    # check target
    assert ret['PREDICT'][0] == 'b'

    # check label
    assert ret['TAG'][0] == 'second'

    # check integration sql
    assert data_handler().native_query.call_args[0][0] == 'select * from tasks where a=2'

    # use model
    ret = self.run_sql()
    assert ret.predicted[0] == 42

    # used model has tag 'second'
    models = self.get_models()
    model_id = ret.predictor_id[0]
    assert models[model_id].label == 'second'

    # -- retrain again with active=0 --
    data_handler.reset_mock()
    self.run_sql(
    )
    self.wait_predictor('proj', 'task_model', {'tag': 'third'})

    ret = self.run_sql('select * from proj.models')

    # check target is from previous retrain
    assert ret['PREDICT'][0] == 'b'

    # use model
    ret = self.run_sql()

    # used model has tag 'second' (previous)
    models = self.get_models()
    model_id = ret.predictor_id[0]
    assert models[model_id].label == 'second'

    # ================ working with inactive versions =================

    # run 3rd version model and check used model version
    ret = self.run_sql()

    # 3rd version was used
    models = self.get_models()
    model_id = ret.predictor_id[0]
    assert models[model_id].label == 'third'

    # one-line query model by version
    ret = self.run_sql('SELECT * from proj.task_model.3 where a=1 and b=2')
    model_id = ret.predictor_id[0]
    assert models[model_id].label == 'third'

    # check exception: non-existing version
    with pytest.raises(Exception) as exc_info:
        self.run_sql(
            'SELECT * from proj.task_model.4 where a=1 and b=2',
        )
    assert 'does not exists' in str(exc_info.value)

    # ================== managing versions =========================

    # check 'show models' command in different combinations
    # Show models <from | in> <project> where <expr>
    ret = self.run_sql('Show models')
    assert len(ret) == 1 and ret['NAME'][0] == 'task_model'

    ret = self.run_sql('Show models from proj')
    assert len(ret) == 1 and ret['NAME'][0] == 'task_model'

    ret = self.run_sql('Show models in proj')
    assert len(ret) == 1 and ret['NAME'][0] == 'task_model'

    ret = self.run_sql("Show models where name='task_model'")
    assert len(ret) == 1 and ret['NAME'][0] == 'task_model'

    # model does not exist
    ret = self.run_sql("Show models from proj where name='xxx'")
    assert len(ret) == 0

    # ----------------

    # See all versions
    ret = self.run_sql('select * from proj.models_versions')
    # we have all tags in versions
    assert set(ret['TAG']) == {'first', 'second', 'third'}

    # Set active selected version
    self.run_sql()

    # get active version
    ret = self.run_sql('select * from proj.models_versions where active = 1')
    assert ret['TAG'][0] == 'first'

    # use active version ?

    # Delete specific version
    self.run_sql()

    # deleted version not in list
    ret = self.run_sql('select * from proj.models_versions')
    assert len(ret) == 2
    assert 'second' not in ret['TAG']

    # try to use deleted version
    with pytest.raises(Exception) as exc_info:
        self.run_sql(
            'SELECT * from proj.task_model.2 where a=1',
        )
    assert 'does not exists' in str(exc_info.value)

    # exception with deleting active version
    with pytest.raises(Exception) as exc_info:
        self.run_sql()
    assert "Can't remove active version" in str(exc_info.value)

    # exception with deleting non-existing version
    with pytest.raises(Exception) as exc_info:
        self.run_sql()
    assert "is not found" in str(exc_info.value)

    # ----------------------------------------------------

    # retrain without all params
    self.run_sql(
    )
    self.wait_predictor('proj', 'task_model', {'version': '4'})

    # ----------------------------------------------------

    # drop predictor and check model is deleted and no versions
    self.run_sql('drop predictor proj.task_model')
    ret = self.run_sql('select * from proj.models')
    assert len(ret) == 0

    # versions are also deleted
    ret = self.run_sql('select * from proj.models_versions')
    assert len(ret) == 0
761
test_project_structure.py
Python
tests/unit/test_project_structure.py
0628b7dd82debc0998c86f6ad7b46f23374b35b6
mindsdb
5
3,800
20
10
6
93
16
0
23
65
test_less_jobs
🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805) * Facebook Marketing performance improvement * add comments and little refactoring * fix integration tests with the new config * improve job status handling, limit concurrency to 10 * fix campaign jobs, refactor manager * big refactoring of async jobs, support random order of slices * update source _read_incremental to hook new state logic * fix issues with timeout * remove debugging and clean up, improve retry logic * merge changes from #8234 * fix call super _read_increment * generalize batch execution, add use_batch flag * improve coverage, do some refactoring of spec * update test, remove overrides of source * add split by AdSet * add smaller insights * fix end_date < start_date case * add account_id to PK * add notes * fix new streams * fix reversed incremental stream * update spec.json for SAT * upgrade CDK and bump version Co-authored-by: Dmytro Rezchykov <[email protected]> Co-authored-by: Eugene Kulak <[email protected]>
https://github.com/airbytehq/airbyte.git
def test_less_jobs(self, api, started_job, batch):
    jobs = [started_job for _ in range(49)]

    update_in_batch(api=api, jobs=jobs)

    assert started_job.update_job.call_count == 49
    assert len(api.new_batch.return_value) == 49
    batch.execute.assert_called_once()
60
test_async_job.py
Python
airbyte-integrations/connectors/source-facebook-marketing/unit_tests/test_async_job.py
a3aae8017a0a40ff2006e2567f71dccb04c997a5
airbyte
2
212,520
6
8
6
31
5
0
6
20
app_paths
Normalize built-in types and remove `Unknown` (#12252) * Use lower case names for built-in types Also incidentally apply TypeAlias marker. * Drop `Unknown` in favour of consistent usage of `Any` * Enable lazy annotations in conftest.py
https://github.com/bokeh/bokeh.git
def app_paths(self) -> set[str]:
    return set(self._applications)
18
tornado.py
Python
bokeh/server/tornado.py
528d85e642340ef30ec91f30b65c7c43370f648d
bokeh
1
281,551
18
11
20
129
13
0
25
60
print_help
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <[email protected]> Co-authored-by: Chavithra PARANA <[email protected]> Co-authored-by: james <[email protected]> Co-authored-by: jose-donato <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self):
    has_option_start = "" if self.options else "[unvl]"
    has_option_end = "" if self.options else "[/unvl]"
    help_text = f
    console.print(text=help_text, menu="Stocks - Options - Payoff")
40
payoff_controller.py
Python
gamestonk_terminal/stocks/options/payoff_controller.py
82747072c511beb1b2672846ae2ee4aec53eb562
OpenBBTerminal
3
113,521
16
12
8
60
7
0
17
49
absolute_scope
Mutable V3 (Stage 1) - Label namespace and utilities (#5194)
https://github.com/microsoft/nni.git
def absolute_scope(self) -> str:
    if self.path is None:
        raise ValueError(f'label_scope "{self.scope_name}" is not entered yet.')
    return '/'.join(self.path)
30
utils.py
Python
nni/mutable/utils.py
6641d309ac6e98c69295ac3d59bf7fa23bdb6588
nni
2
297,162
21
9
8
87
16
0
25
49
test_with_stop
Blebox add thermoBox to climate (#81090) Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
async def test_with_stop(gatebox, hass):
    feature_mock, entity_id = gatebox
    opening_to_stop_feature_mock(feature_mock)
    feature_mock.has_stop = True

    await async_setup_entity(hass, entity_id)

    state = hass.states.get(entity_id)
    supported_features = state.attributes[ATTR_SUPPORTED_FEATURES]
    assert supported_features & CoverEntityFeature.STOP
53
test_cover.py
Python
tests/components/blebox/test_cover.py
923fa473e171fcdf396556ea200612e378f9b0a5
core
1