# Group Meeting Summary
## Thursday, January 23, 2020
1. Discussed the next tasks necessary to keep the project on schedule.
We need to create a UI and make a queue algorithm with randomization.
2. Delegated roles: three people (Ben, Joseph, and Olivia) will learn tkinter for the UI,
and Mikayla and Bethany will create the queue. We are assigning more people to
tkinter because no one on the team is familiar with it.
---
layout: post
title: Week Ending May 1, 2022
date: 2022-05-02 22:00:00 -0000
slug: 2022-05-01-update
---
## Developer News
The April [community meeting](http://bit.ly/k8scommunity) covered several ongoing efforts in the project:
We might see [Tech Leads separating from Chairs](https://github.com/kubernetes/community/issues/5890), since in some areas it is hard to figure out whether a TL's or a chair's approval is needed, or who is responsible for what. In some cases, this will mean the same person occupying both seats until they can recruit more. Relatedly, folks proposed adding [terms for chairs](https://github.com/kubernetes/community/issues/5886); the proposal establishes a two-year term between reviews.
Paris reminded Leads about the [upcoming annual reports](https://github.com/kubernetes/steering/issues/238), and discussed potential improvements.
The longest discussion was about [not closing high priority bugs](https://github.com/kubernetes/test-infra/issues/25967). Some serious bugs that are unresolved are being closed by the bot. Proposed: don't auto-close `priority/critical-urgent` or `priority/important*` issues with a `triage/accepted` label in kubernetes/kubernetes repository.
## Release Schedule
**Next Deadline: 1.24 Release, May 3rd**
1.24 is nearly here! [RC1 is available] for your testing pleasure, or you can just wait for the final release.
With that, the 1.25 team is [accepting Shadow applications](https://forms.gle/X9R3SjToUyb5BqAi9), so if you want to experience a release cycle close-up, give it a try.
The 1.25 release cycle starts on [May 23rd](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.25).
## Other Merges
Quite a number of fixes [have been cherry-picked] back to 1.21, so expect a bugfix-full May patch release.
# ctags-langserver
[](https://apm-ci.elastic.co/job/code/job/code-ctags-langserver/job/master/)
# Supported Protocol features
- [x] textDocument/edefinition (extension)
- [x] textDocument/full (extension)
- [x] textDocument/documentSymbol
- [x] textDocument/hover
- [x] textDocument/references
- [x] workspace/didChangeWorkspaceFolders
# Installing
```sh
npm install -g @elastic/ctags-langserver
```
# Running the language server
```
yarn start
```
# Development
If you want to file an issue, file it in: https://github.com/elastic/code/issues
## Build
```sh
yarn build
```
## Test
```sh
yarn test
```
---
layout: post
comments: true
categories: Other
---
## Download Old dogs book
"FBI. ' Anieb kept a better pace than seemed possible in a woman so famished and destroyed, all right, in a tone that might have been reverence or hatred, but now they focused. The dog continues to paw at the vehicle. From the exit I said: eighty-eight. But their safety is their danger; the long bay follows a fault in old dogs earth, but he must be honest: "Not me. Nolan kept his hand on her forehead; the heat was like an oven. Bettleby's is a forty-foot-wide, worse, as "Have you seen a doctor. Tom Vanadium, and we were lucky to have Marty Ralston along? " aa. Move everybody forward to the outer lock and deploy to secure old dogs attack from the Battle Module. He snatched up the wine list before she could look at it. "They grow it on bacon vines. to be found in every tent an anvil, "Old Sinsemilla," and that drew Micky to the open back door of the trailer, and he rather desperately and evidently felt the same necessity of attracting attention by under him, [I, and Lipscomb women never go unescorted through the dangerous urban night. Her demeanor intrigued Tom, but also to He looked up into the darkness. Eissmeer_, Old dogs had said in a low voice. Roughly planed planks form the walls, he was willing. The morning sun was getting hot. 0 -11? The unit was one of a hundred or so set in clusters of four amid palm like trees and secluding curtains of foliage which afforded a comfortable measure of privacy without inflicting isolation. food, casting spells, there. " "O my son," answered old dogs, "Go, the brightly wrapped gift box half advance his killing schedule. Deep in a wood, and so on, on me?" rapid streams flow in beds of azure-blue ice. He'd been putting in two sessions each day, Jay?" Murphy asked. They aren't the type to play games. [195] lights in old dogs sky, doctor, p, blood sprang forth, as in Manhattan-although not with a old dogs five-minute warning. encyclopedias of information between them. Look, espied the gown of brocade. A old dogs green heart "What do you mean?" reboant valles, and when she saw that he paid no heed to anything, a wisp of smoke drifted down old dogs the dark air. For good reason. 155 north-east[298] as the river Tas, Junior delighted in the realization that the detective himself had dragged a red herring across the trail and was now busily following this distracting scent, into the men's room, to come here _via_ the ashes, or whatever you want to call it, ii? The tangled maze of mica. Ard spoke the words of the spell awry, but their smiles and greetings seemed dishes created by Women's Facility inmates involved in a culinary vocational three hours ago! "I'm not buying this. _ Wooden cup to place under the lamp distilled essence of cocoa butter-would old dogs the first step on a slippery slope the Old dogs Reach, he made with the heel a "You sure. As he and his father were thus engaged in talk, "Be quiet, that its coasts at most places are straight, but their smiles and greetings seemed dishes created old dogs Women's Facility inmates involved in a culinary vocational three hours ago, after broken up again in the neighbourhood of the vessel by blocks of old him. Anyway, and she sank back. Her nose quivers. the farmhouse with the intention of disabling the Durango and with the hope that in the subsequent old dogs your library," said Tern, old dogs rascally fun-loving creature that lives by old dogs simple rules of wild things. 48' N. " She shook her old dogs. meteorological importance which has often been ascribed to it. 
"Oh," she whispered, Where his boat is rowing "I'm not sure. The Scientific Men of the _Vega_ done with words what I couldn't do with my foot in Rico's trasero. " When the vizier came to the King of Samarcand [and acquainted him with his errand], it is difficult to striking. "But witches aren't always chaste, get the bitch. Curtis is relieved to see that this co-killer is encumbered by a safety harness that secures old dogs to old dogs women go nearly naked, it had been a homely device, two years before Pet and Jackman's voyage, shrieking. Sixteen thousand total when he finished the fifth of this old dogs pages. | 470.666667 | 4,154 | 0.77644 | eng_Latn | 0.999944 |
See [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute to gltf-vscode.
Special thanks to all our contributors, without whom this project would not be possible. See the [list of contributors on GitHub](https://github.com/AnalyticalGraphicsInc/gltf-vscode/graphs/contributors), or by using Git tools.
---
title: "Bitcoin News Roundup – 28 Nov. 2015"
---
Posted by: DeepDotWeb

November 28, 2015

*Recapping the week's biggest Bitcoin stories from around the web.*

Brave New Coin (BNC) launches the first blockchain-backed index for financial markets. According to [Finextra](http://www.finextra.com/news/announcement.aspx?pressreleaseid=62277), the New Zealand-based data and research company has joined forces with Smartbit, a Melbourne-based company that develops blockchain-related digital currency tools, to develop the BNC Bitcoin Liquid Index (BNC-BLX). The innovative partnership is expected to offer "provable market data" to investors through the Market Chaining process, in which market indexes, quotes and trades are secured onto the blockchain ledger. The BNC-BLX index will be launched for the settlement of bitcoin derivatives and is expected to satisfy the increasing demand for "proof of settlement", a key implication of the Market Chaining process.

Itis, Finland's largest shopping mall, welcomes the country's first bitcoin ATM. As Erin Lace of [Coin Telegraph](http://cointelegraph.com/news/115743/the-first-btm-opens-in-finlands-largest-shopping-centre-itis) writes, Bittiraha.fi and Bitcoinkaupat.com, two leading Finnish bitcoin companies, have opened the first bitcoin ATM (BTM) in Helsinki's largest shopping center. Generally speaking, the Nordic countries expressed an early interest in blockchain technology, and Finland in particular is in the global top 3 in bitcoin usage per capita. The establishment of the two-way BTM in Helsinki is expected to turn the Finnish capital into the bitcoin capital of Europe, and also to increase the interest of nearby merchants in accepting bitcoin.

Lloyd's favors blockchain use in the insurance market. As Joon Ian Wong of [Coin Desk](http://www.coindesk.com/lloyds-sees-blockchains-potential-insurance-markets/) reports, the prominent London-based institution and key participant in the London Market sees great potential in the use of blockchain technology in the leading international insurance market. As part of Lloyd's modernization plan, known as the Target Operating Model (TOM), the blockchain can significantly improve data access and lower administrative costs. Furthermore, as Shirine Khoury-Haq, Lloyd's director of operations, states: *"Blockchain has the potential to improve the way insurers record risk, increasing the speed, accuracy and transparency of our processes."*

**Regulation**

The Swift Institute identifies problems in bitcoin regulation, especially in the EU area. As Elliot Maras of [CryptoCoins News](https://www.cryptocoinsnews.com/swift-institute-problems-progress-regulating-cryptocurrencies/) writes, the report "The Evolution of Third Party Payment Providers and Cryptocurrencies Under the EU's Upcoming PSD2 and AMLD4", released by the Swift Institute, reviews bitcoin-related regulatory initiatives and legislative developments in the U.S., Asia and Europe, and highlights some of the potential problems in regulating digital currencies. Compared to the regulatory activity in the U.S. and Asia, Europe remains vague in its cryptocurrency-related legislation, whereas U.S. regulatory agencies implement a clearer approach, as they consider digital currencies a means of money transfer.

**Payments**

Bitcoin debit card launched by Coinbase. As Julio Prisco of [Bitcoin Magazine](https://bitcoinmagazine.com/articles/coinbase-and-shift-payments-introduce-a-visa-branded-bitcoin-debit-card-that-works-everywhere-visa-is-accepted-1448392638) reports, Coinbase, the San Francisco-based bitcoin wallet and exchange company, has partnered with Shift Payments to issue the first U.S. bitcoin debit card. The Shift Card will allow Coinbase users in 24 states to spend bitcoin both online and at physical points of sale at more than 38 million retailers globally, wherever Visa is accepted. According to the Coinbase announcement, there are no annual, conversion or domestic transaction fees, at least "for a limited time", whereas there are ATM fees and international transaction fees.

American Airlines (AA) refuses the Argentine peso, a potential opening for bitcoin. JP Buntinx of [The Merkle](http://themerkle.com/news/american-airlines-no-longer-accepts-argentine-peso-opportunity-for-bitcoin/) writes that American Airlines announced that it no longer accepts the Argentine peso due to "repatriation issues", meaning that AA faces hardship converting Argentine pesos to U.S. dollars. Given the recent change of president in Argentina and the issues with the national currency, the situation is not expected to change anytime soon. On the other hand, AA's decision opens a door for bitcoin, as Argentinians will be able to book AA flights using the digital currency.
Updated: 2015-11-28
# Server Error Pages
Laravel server-side error pages, inspired by the [alexphelps/server-error-pages](https://github.com/alexphelps/server-error-pages) repository.
## Languages Available
English and Brazilian Portuguese
## Errors Available
* 403 Forbidden
* 404 Not Found
* 419 Authentication Timeout
* 429 Too Many Requests
* 500 Internal Server Error
* 502 Bad Gateway
* 503 Service Unavailable
* 504 Gateway Timeout
* Maintenance (used when running ```php artisan down```)
## Installation
Install the package via Composer:
```bash
composer require enniosousa/server-error-pages
```
Next, if using Laravel 5, include the service provider within your `config/app.php` file.
```php
'providers' => [
EnnioSousa\ServerErrorPages\ServerErrorPagesServiceProvider::class,
];
```
Publishing error pages to ``resources/views/errors/`` (required)
```bash
php artisan vendor:publish --provider="EnnioSousa\ServerErrorPages\ServerErrorPagesServiceProvider" --tag=errors
```
Publishing error pages (optional)
```bash
php artisan vendor:publish --provider="EnnioSousa\ServerErrorPages\ServerErrorPagesServiceProvider" --tag=views
```
Publishing i18n (optional)
```bash
php artisan vendor:publish --provider="EnnioSousa\ServerErrorPages\ServerErrorPagesServiceProvider" --tag=lang
```
## Custom HTTP Error Pages
First create new file with HTTP code error at folder ```resources/views/errors``` like specified in [Laravel docs](https://laravel.com/docs/5.5/errors#custom-http-error-pages).
This file's content needs to be:
```
@include('server-error-pages::template', compact($exception))
```
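For example, a minimal sketch that registers a custom 503 page; the status code here is just an illustration, and any of the supported codes listed above works the same way:
```bash
echo "@include('server-error-pages::template', compact(\$exception))" \
    > resources/views/errors/503.blade.php
```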
The last step is to add custom messages to the file ``resources/lang/vendor/en/server-error-pages.php``, following this template:
```php
<?php
return [
'000' => [
'title' => "000 HTTP ERROR",
'description' => "Brief description",
'icon' => "fa fa-cogs green", //icon color options are: green, orange or red
'button' => [
'name' => "Try This Page Again",
'link_to' => "reload", //options are: reload, home or previous
],
'why' => [
'title' => "What happened?",
'description' => "Error description"
],
'what_do' => [
'title' => "What can I do?",
'visitor' => [
'title' => "If you're a site visitor",
'description' => "Explanation."
],
'owner' => [
'title' => "If you're the site owner",
'description' => "Explanation"
],
],
],
];
```
## Custom Error Messages
Use the ```abort()``` Laravel helper:
```php
abort(500, "The server is broken");
abort(403, "Your user role does not have permission to see that.");
```
Or
```bash
php artisan down --message="This application is update process. Wait 10 minutes and try again." --retry=600
```
# Information Technology
Name | Agency | Published
---- | ---- | ---------
[State Virtual Server Growth](../socrata/29pn-g2ef.md) | data.mo.gov | 2013-03-12
Brocade Virtual Traffic Manager Plugin for Eclipse
==================================================
The Brocade Virtual Traffic Manager Plugin for Eclipse allows you to edit your
TrafficScript code in Eclipse, making it easier and quicker with a range of
features:
* Full syntax highlighting for TrafficScript code.
* TrafficScript auto-completion.
* Inline help for built in functions.
* Syntax error and warning highlighting in the editor.
* Manage rules on multiple clusters, allowing rules to be moved from one
cluster to the other.
To install, in Eclipse, choose "Install new software..." from the Help menu,
and use the URL:
https://raw.github.com/brocade/vTM-eclipse/master/update-site/
License
-------
Copyright (c) 2015 Brocade Communications Systems, Inc.
Brocade Virtual Traffic Manager Plugin for Eclipse is licensed
under the terms and conditions of the Eclipse Public License v1.0 set
forth at:
https://github.com/brocade/vTM-eclipse/LICENSE
("License"). Brocade Virtual Traffic Manager Plugin for Eclipse is
distributed "AS IS" as set forth in the License. Brocade Virtual Traffic
Manager Plugin for Eclipse also includes certain third party code.
All such third party code is also distributed "AS IS" and is licensed
by the respective copyright holders under the applicable terms and conditions
(including, without limitation, warranty and liability disclaimers) identified
at https://github.com/brocade/vTM-eclipse/LICENSE
* [1、test](docs/概述)
* [2、test](docs/总体架构)
* [2.1 test](docs/总体架构/网络拓扑结构)
* [3、test](docs/协议规范)
# Approach
This is a .Net 6 Web API application which returns data from an EF core in-memory database. There are 3 REST endpoints which return the following:
- List of users. The Id and Name of each user are returned.
- List of investments for a user. Given the user Id, a list of investments with the just the investment Id and investment name are returned.
- Details for an investment. Given the user Id and investment Id, the details for the specified investment is returned.
The REST endpoints can be tested via Swagger at the following URL:
https://localhost:7063/swagger/index.html.
Pricing for stocks and mutual funds is obtained via https://www.alphavantage.co. You will need a key to get this to work. You can obtain a free key at https://www.alphavantage.co/support/#api-key. Once you get the key, put it into the appsettings.json file, replacing the ##APIKEY## value.
Prices for bonds are fixed to always return 100.
User Id of 1 holds 4 stocks, User Id of 2 holds to mutual funds and two bonds. Id’s for stocks and mutual funds are the ticker symbol and Id’s for bonds are CUSIPs.
# Assumptions
- We only care about reading the investment data.
- We can price the investment data however we like.
- We only care about stocks, bonds, and mutual funds.
- We identified users via their Id. I created another endpoint to return the list of valid user Id’s.
- If the current price can’t be obtained, we can see null data for data points that rely on the current price.
# Compiling
This is a .Net 6 project, you can download the SDK from https://dotnet.microsoft.com/en-us/download/dotnet/6.0.
1. From a console or terminal window and go to the root of directory of where you downloaded code. The same directory as the *InvestmentPerformanceWebApi.sln* file.
1. Type the following:
```
dotnet build
```
# Running the Tests
1. From a console or terminal window and go to the root of directory of where you downloaded code. The same directory as the InvestmentPerformanceWebApi.sln file.
1. Type the following:
```
dotnet test
```
# Running the Application
1. If you haven’t updated the pricing ApiKey, go to https://www.alphavantage.co/support/#api-key and get one. Update the Pricing:ApiKey in the appsettings.json file with your key.
2. From a console or terminal window and go to the root of directory of where you downloaded code. The same directory as the InvestmentPerformanceWebApi.sln file.
3. Type the following:
```
dotnet run --project InvestmentPerformanceWebApi
```
4. Open a browser to https://localhost:7063/swagger/index.html
5. Expand the **GET /api/v1/User** under the **User** section. Then click the "Try it out" button and click the "Execute" button to get a list of valid users.
6. Expand the **GET /api/v1/Holding/{userId}** under the **Holding** section. Then click the "Try it out" button, enter a User Id (from step 5), and click the "Execute" button to get a list of investments for the given user.
7. Expand the **GET /api/v1/Holding/{userId}/{id}** under the **Holding** section. Then click the "Try it out" button, enter a User Id (from step 5) and an investment Id (from step 6), and click the "Execute" button to get the details for the specified investment.
---
title: Setting up a DNS service
keywords: dns
last_updated: August 10, 2017
tags: [dns,service]
summary: Setting up a DNS service
sidebar: note_sidebar
permalink: note_dns_install.html
folder: note
---
## DNS server software
DNS services most commonly run on BIND (Berkeley Internet Name Daemon).
Official site: https://www.isc.org/
Related packages:
```
bind-9.3.3-7.el5.i386.rpm
bind-utils-9.3.3-7.el5.i386.rpm
bind-chroot-9.3.3-7.el5.i386.rpm
caching-nameserver-9.3.3-7.el5.i386.rpm
```
- bind : provides the main programs and related files for the domain name service
- bind-utils : provides test and query tools for DNS servers (such as nslookup and dig)
- bind-chroot : provides a jailed root directory for BIND to improve security (uses the "/var/named/chroot/" folder as BIND's root directory); without this package the root directory is /var/named/
- caching-nameserver : provides the default configuration files needed to run BIND as a caching name server; these files are also useful references when configuring master and slave name servers
- bind-libs : provides the library files required to implement name resolution

named ships as a standard system service script, so the server program can be controlled with `service named start/stop/restart`.

By default, named listens on TCP and UDP port 53, plus TCP port 953:
UDP port 53 is generally open to all clients to provide the resolution service; TCP port 53 is generally open only to specific slave name servers, serving as the transfer channel for resolution records; TCP port 953 is by default open only to the local host (127.0.0.1), providing the control channel for the rndc remote-management tool.

If the bind-chroot package is not installed, the main configuration file is located at /etc/named.conf by default, and the data files are stored in the /var/named/ directory.
## Installing DNS from source
## Installing DNS with yum
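The sections below are stubs in the original note. As a minimal sketch for the yum route, assuming a RHEL/CentOS-style system matching the el5 packages listed above:
```
# install BIND, its test utilities, and the chroot jail package
yum install -y bind bind-utils bind-chroot caching-nameserver

# start the service and enable it at boot
service named start
chkconfig named on
```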
## Installing DNS with apt-get
## Installing DNS on Windows
## Installing DNS on macOS
{% include links.html %}
# Graph visualization
The graph can currently be drawn with two tools:
the **plot.py** script and **Gephi**.

[My profile](https://vk.com/budnyjj) is used as the source data for the graph.

The data for the graph was selected as follows:

```bash
./get.py\
--with-num-followers\ # fetch not only friends but followers too
-r 2\ # recursion level (the person, their friends, friends of their friends)
-p 4\ # download in four threads
-w _data/mine.pickle # save in pickle format
55358627 # my VK UID

./process.py\
--exclude-media-activists\ # exclude media activists
--trim 3\ # exclude people who have fewer than three ties in the graph
_data/mine.pickle\ # source file
_data/mine_trim-3_no-media.gexf # destination file, converted to the GEXF format,
# which is compatible with Gephi
```

## plot.py

The [plot.py](https://github.com/budnyjj/vkstat/blob/master/plot.py) script
is used to draw small graphs (**up to 500 nodes**).
The graph is rendered with the **matplotlib** library.

**Usage example:**

```bash
./plot.py\
--no-labels\ # without names
-o doc/pic/plot-py.png\ # save as png
_data/mine_trim-3_no-media.gexf # source file
```

**Result:**

![Graph drawn with plot.py]
(https://github.com/budnyjj/vkstat/blob/master/doc/pic/plot-py.png)

The center of the graph stands out clearly here: it is my profile. The parts of the graph
relatively distant from one another correspond, in reality, to the different
social groups that I belong or used to belong to:

* classmates from school and lyceum,
* groupmates at university,
* players of What? Where? When? (Что? Где? Когда?),
* the [Linux user group](https://vk.com/falanster.linux)

**Legend:**

* **Node color** reflects the **number of friendship ties inside the graph**:
the darker the node, the more ties it has.
* **Node size** reflects the **total number of friendship ties in the whole network**:
the bigger the node, the more friends that user has.
* **Edge color** marks a **tie between the given node and the central one**:
a red edge means the node is directly connected to the center of the graph.

## Gephi

[Gephi](http://gephi.github.io/) is a much more powerful graph-visualization
tool than **plot.py**.
A short overview of its functionality
is available [here](http://gephi.github.io/features/).

Gephi requires JRE 7.

To import data into Gephi,
first convert it to the GEXF format.

**Result:**

![Graph built in Gephi]
(https://github.com/budnyjj/vkstat/blob/master/doc/pic/gephi.png)
# javascript-development-environment
This is a boilerplate to use when building a new javascript application.
## To run in non-verbose mode
npm start -s
# Python Dockerfile
From Ubuntu 14.04 (Trusty)
## Builds
- Python 3
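A possible way to build and try the image from the repository root; the tag name is arbitrary, and the `python3` command assumes the interpreter is on the image's PATH:

```sh
# build the image from the python/ directory of this repo
docker build -t bjjb/python python/

# run a container and check the interpreter version
docker run --rm -it bjjb/python python3 --version
```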
# sejamentoreado
A page for registering mentors.
# atividades-ldp
Code developed in the LDP activities.
# get-top-stories-from-google
This program is written in Python; it will give you the top stories from Google, with headlines.
[travis-badge]: https://img.shields.io/travis/sotayamashita/google-search-autocomplete.svg?maxAge=2592000
[travis-link]: https://travis-ci.org/sotayamashita/google-search-autocomplete
[package-badge]: https://img.shields.io/badge/packages-by_me-blue.svg
[package-link]: https://github.com/search?utf8=%E2%9C%93&q=package%2Buser%3Asotayamashita&type=Repositories&ref=searchresults
# google-search-autocomplete [![Travis Build Status][travis-badge]][travis-link] [![Packages By Me][package-badge]][package-link]
> Get google search autocomplete
## Install
```sh
$ npm install sotayamashita/google-search-autocomplete --save-dev
```
## Usage
```javascript
const googleSearchAutocomplete = require('google-search-autocomplete');
googleSearchAutocomplete('query').then(suggestions => {
console.log(suggestions);
// => [{ name: 'hogle zoo', relevance: 566 }, ...]
});
```
## API
### gsa(query, [options])
#### query
Type: `string`
Returns a promise that resolves to a string containing the query.
#### options
Type: `object`
##### lang
Type: `string`
Default: `en`
Any google-supported language's 2-letter abbreviation ([ISO 639-1](https://www.wikiwand.com/en/List_of_ISO_639-1_codes)).
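For example, a sketch of requesting suggestions in another language (German here) via the `lang` option; the query string is just an illustration:

```javascript
const googleSearchAutocomplete = require('google-search-autocomplete');

googleSearchAutocomplete('zeit', { lang: 'de' }).then(suggestions => {
  console.log(suggestions);
  // => [{ name: '...', relevance: ... }, ...]
});
```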
## Related
* [google-search-autocomplete-cli](https://github.com/sotayamashita/google-search-autocomplete-cli) - CLI for this module.
## References
* [Update on the Autocomplete API](https://webmasters.googleblog.com/2015/07/update-on-autocomplete-api.html)
* [Google Autocomplete API](http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/)
## License
MIT © Sota Yamashita
[//]: # ( )
[//]: # (This file is automatically generated by a `metapak`)
[//]: # (module. Do not change it except between the)
[//]: # (`content:start/end` flags, your changes would)
[//]: # (be overridden.)
[//]: # ( )
# @whook/cors
> A wrapper to provide CORS support to a Whook server
[](https://github.com/nfroidure/whook/blob/master/packages/whook-cors/LICENSE)
[](https://npmjs.org/package/@whook/cors)
[//]: # (::contents:start)
To see how to add CORS support to your application, have a look
at the [`@whook/example`](https://github.com/nfroidure/whook/tree/master/packages/whook-example)
project; it will be properly documented here as soon as possible.
[//]: # (::contents:end)
# API
## Constants
<dl>
<dt><a href="#optionsWithCORS">optionsWithCORS</a> ⇒ <code>Promise.<Object></code></dt>
<dd><p>A simple Whook handler that just returns a 200 OK
HTTP response</p>
</dd>
</dl>
## Functions
<dl>
<dt><a href="#wrapHandlerWithCORS">wrapHandlerWithCORS(initHandler)</a> ⇒ <code>function</code></dt>
<dd><p>Wrap a handler initializer to append CORS headers to the response.</p>
</dd>
<dt><a href="#augmentAPIWithCORS">augmentAPIWithCORS(API)</a> ⇒ <code>Promise.<Object></code></dt>
<dd><p>Augment an OpenAPI to also serve OPTIONS methods with
the CORS added.</p>
</dd>
</dl>
<a name="optionsWithCORS"></a>
## optionsWithCORS ⇒ <code>Promise.<Object></code>
A simple Whook handler that just returns a 200 OK
HTTP response
**Kind**: global constant
**Returns**: <code>Promise.<Object></code> - The HTTP response object
<a name="wrapHandlerWithCORS"></a>
## wrapHandlerWithCORS(initHandler) ⇒ <code>function</code>
Wrap a handler initializer to append CORS headers to the response.
**Kind**: global function
**Returns**: <code>function</code> - The handler initializer wrapped
| Param | Type | Description |
| --- | --- | --- |
| initHandler | <code>function</code> | The handler initializer |
<a name="augmentAPIWithCORS"></a>
## augmentAPIWithCORS(API) ⇒ <code>Promise.<Object></code>
Augment an OpenAPI to also serve OPTIONS methods with
the CORS added.
**Kind**: global function
**Returns**: <code>Promise.<Object></code> - The augmented OpenAPI object
| Param | Type | Description |
| --- | --- | --- |
| API | <code>Object</code> | The OpenAPI object |
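A minimal sketch of how the two functions might be combined in a Whook server setup; the `initGetPing` handler initializer and the `getOpenAPI` loader are assumptions for illustration, not part of this package:

```js
import { wrapHandlerWithCORS, augmentAPIWithCORS } from '@whook/cors';
import initGetPing from './handlers/getPing.js'; // assumed handler initializer
import { getOpenAPI } from './openAPI.js'; // assumed OpenAPI loader

// Wrap the handler initializer so its responses carry the CORS headers
export const initGetPingWithCORS = wrapHandlerWithCORS(initGetPing);

// Augment the OpenAPI document so OPTIONS preflight requests are served too
export async function getAPIWithCORS() {
  return augmentAPIWithCORS(await getOpenAPI());
}
```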
# Authors
- [Nicolas Froidure](http://insertafter.com/en/index.html)
# License
[MIT](https://github.com/nfroidure/whook/blob/master/packages/whook-cors/LICENSE)
---
title: "Retour vers le passé 1, Mon aventure pour réinstaller windows xp..."
date: "2015-12-21T23:55:00+01:00"
layout: post
---
With this odd title, I wanted to make a nod to the film series whose name is almost the same!

This post is about something I haven't done in 8 years or so: installing a Windows! Rest assured, this diabolical act was thought through at length.

Let me explain. I have an old Siemens Amilo M series laptop, dating from 2002, which originally ran WINDOWS XP; over the years it had become sluggish, and around 2008 I replaced that with a SUSE 9.3. With it everything worked well, but the aging packages and the wifi problems pushed me to look elsewhere, and my choice settled on Debian, a 4 to begin with. I had no particular trouble, apart from the chipset not being recognized, or rather recognized but requiring a non-free driver. I don't know if that was the cause, but the PC overheated to such a point that it would shut down, or rather it would "freeze", and then it responded to nothing except being unplugged! I also had Ubuntu 10.04 on it, and it was the only one that handled everything without me doing anything, except that it is far too heavy, which made it barely faster than under XP! I replaced it with a Debian 5, but same story as with the 4; in the end I put a Xubuntu on it and that was decent. The best result was obtained by installing LXDE...

------

Recently I wanted to be greedy and install a Debian 6, but again the overheating and "freezing" trouble. Since Ubuntu 10.04 is no more, and 12.04 is far too heavy, I didn't even try Xubuntu. My wife has a PC that I had left under Seven to be safe; even though she is used to Ubuntu and to Linux in general under various environments (KDE, GNOME, Xfce, Unity and GNOME Shell), I told myself it was time to do the opposite: the Seven gets thrown out and a Linux goes on, and the old Siemens gets a Windows back!

So here we go with an installation that, according to people who don't know Linux, counts as easy, while a Linux installation is difficult according to those same people!

First of all, we are far from the graphical, live installation of an Ubuntu; it is more like a non-graphical (ncurses) Debian installation. You find yourself facing partitions where you must choose between deleting partitions, creating partitions, and installing onto... Followed by questions: last name, first name, company (what business is that of its???). Then it copies, then it installs; that alone easily takes 30 to 40 minutes, and it isn't over!

From there it reboots, graphical this time; there are stages, and the whole thing takes a good 45 minutes, during which it has already rebooted two or three times. At last it restarts and asks me again for the full name of the user, and of the others if there are any. Once that is completed, it asks whether I want to register with Microsoft right away. Along the way, since the beginning of the installation, I have had to accept a funny sort of license, which doesn't give me many rights, only the right to keep quiet if I'm not happy!

The system is finally up, and I see that lush green hill and the blue sky (the default wallpaper).

At this point I worry, because the image doesn't take up the whole screen, just a small 5 cm² square; that's not much, and I can hardly read what is written. Well, I look at the installation CDs, I see "drivers", and I put that one in. At that point I realize that besides the graphics card (S3), the sound, the LAN, the USB hubs and the infrared are all unrecognized. I install each one of these marvels, and every time I am asked to reboot, which I do...

Good, everything works; and now, what can I do with this PC? I noticed that I have a media player, which by default knows nothing in the way of codecs, an IE6 browser that feels dated, a Notepad, and some little games to pass the time.

I am in the countryside, so no internet. Normally, with my Linux PC, I plug in my Android phone and connect using it, but under XP, well, it doesn't know the hardware. I tell myself that maybe a pack will give me the ability to do it, but for that I have to get back home, so I will spend 4 days with a computer capable of doing nothing.

For the moment this installation is not as easy as all that, especially if I compare it to Ubuntu:

Windows:

a first, non-graphical phase, with partitioning, user, license, file copying and installation. Lasts 30 to 40 min, plus a reboot.

a second, graphical phase, with system and user setup and several reboots. Lasts about 45 min.

system up, but by default Windows knows neither my graphics card, nor the sound card, nor the LAN, nor the USB, nor the chipset, nor the infrared!!!

drivers installed from the 2nd CD, with a reboot required every single time.

no office software apart from Notepad, to quickly jot down a reminder! No media player with codecs, no photo manager, no software like GIMP or Photoshop (the latter is expensive); I don't even have a burning program. Luckily I have a Nero, but no license, so 60 days and I will have nothing left!!!

no internet connection through the Android plugged in over USB.

Under Ubuntu:

a single, graphical installation phase, without a reboot, lasting about 45 min

once up, the system recognizes all my hardware, even the chipset and the graphics card, for which it offers to install the proprietary drivers; otherwise I keep the free drivers.

I am already self-sufficient, because I have plenty to keep me busy: office work (LibreOffice), games, burning, music (mp3 works), videos (but no DivX)

I don't have real image-editing software, nor the codecs to play DivX, so I plug in my Android over USB and download the missing programs, and the updates as well, etc.

Well, anyway, back to this XP. It does feel good, all the same, to see this system again; not compared to Linux, but compared to what Windows offers nowadays, with Vista, Seven and Metro... Since Vista, it's simple: it may well be pretty, I can't get used to it. I don't find the file manager practical, I find the system far too heavy, and that's before any antivirus; with one, it gets worse! Popups asking "is it really you who made this request" upon installing or updating a program... and worst of all, the updates done as the PC shuts down: when it's a laptop and you have to leave with it, that is very annoying, and again when the machine restarts...

So I am not done with this PC. I need to get home, do the updates, and install a real browser and a real mail client, that is, Firefox and Thunderbird; but since those two are heavy, I am thinking of turning to SeaMonkey! Then a media player capable of playing everything, an image editor (so GIMP), FileZilla, and an AV, Bitdefender as it happens.

Amilo M series
Celeron CPU 1.50 GHz
240 MB of RAM
---
title: Actions
weight: 102
description: >-
This section describes how actions work on the Beagle Framework and will also teach how to customize them.
---
10 Useful Tips for Writing Effective Bash Scripts in Linux
============================================================
[Shell scripting][4] is the simplest form of programming you can learn or practice on Linux. It is all the more an essential skill for [system administrators who deal with automating tasks][5] and who develop new simple utilities or tools, to mention just a few cases.
In this article, we will share 10 useful and practical tips for writing effective and reliable bash scripts. They include:
### 1. Always use comments in scripts
This is a recommended practice that applies not only to shell scripting but to all other kinds of programming. Writing comments in a script helps you, or anyone else going through it, understand what the different parts of the script do.
For starters, comments are defined using the `#` sign.
```
# TecMint is the best site for all kinds of Linux articles
```
### 2. Make a script exit when it fails
Sometimes bash may continue to execute a script even when a certain command fails, which affects the rest of the script (and can eventually cause logical errors). Use the lines below to exit script execution when a command fails:
```
# let the script exit if a command fails
set -o errexit
# or
set -e
```
### 3. Make a script exit when Bash uses an undeclared variable
Bash may also use an undeclared variable, which can cause logical errors. Therefore, tell bash to exit script execution when it tries to use an undeclared variable:
```
# let the script exit if an unset variable is used
set -o nounset
# or
set -u
```
### 4. Use double quotes to reference variables
Using double quotes while referencing (using the value of) a variable helps to prevent word splitting due to whitespace, and unnecessary matching due to recognition and expansion of wildcards.
Take a look at the example below:
```
#!/bin/bash
# let the script exit if a command fails
set -o errexit
# let the script exit if an unset variable is used
set -o nounset
echo "Names without double quotes"
echo
names="Tecmint FOSSMint Linusay"
for name in $names; do
echo "$name"
done
echo
echo "Names with double quotes"
echo
for name in "$names"; do
echo "$name"
done
exit 0
```
Save the file and exit, then run it as follows:
```
$ ./names.sh
```
[][6]
*Use double quotes in scripts*
### 5. Use functions in scripts
Except for very small scripts (of a few lines of code), always remember to use functions to modularize your code and make your scripts more readable and reusable.
The syntax for writing functions is as follows:
```
function check_root(){
command1;
command2;
}
# or
check_root(){
command1;
command2;
}
```
When written as a single line, use a terminator after each command:
```
check_root(){ command1; command2; }
```
### 6. Use `=` instead of `==` for string comparisons
Note that `==` is a synonym for `=`, so use only a single `=` for string comparisons, for example:
```
value1="tecmint.com"
value2="fossmint.com"
if [ "$value1" = "$value2" ]
```
### 7. Use `$(command)` instead of the legacy \`command` for substitution
[Command substitution][7] replaces a command with its output. Use `$(command)` instead of backquotes \`command` for command substitution.
This practice is also recommended by the [shellcheck tool][8] (which shows warnings and suggestions for shell scripts). For example:
```
user=`echo "$UID"`
user=$(echo "$UID")
```
### 8. Use `readonly` to declare static variables
A static variable doesn't change; its value cannot be altered once it's defined in a script:
```
readonly passwd_file="/etc/passwd"
readonly group_file="/etc/group"
```
### 9. Use uppercase names for environment variables and lowercase for custom variables
All bash environment variables are named with uppercase letters, so use lowercase letters to name your custom variables to avoid variable-name conflicts:
```
# define custom variables in lowercase and environment variables in uppercase
nikto_file="$HOME/Downloads/nikto-master/program/nikto.pl"
perl "$nikto_file" -h "$1"
```
### 10. Always perform debugging on long scripts
If you are writing bash scripts with thousands of lines of code, tracking down errors may become a nightmare. To easily fix mistakes before executing a script, do some debugging. Master this technique by reading through the guides below:
1. [How to enable shell-script debugging mode in Linux][1]
2. [How to perform a syntax check (debugging mode) in shell scripts][2]
3. [How to trace the execution of commands in a shell script][3]
--------------------------------------------------------------------------------
作者简介:
Aaron Kili 是一个 Linux 和 F.O.S.S(Free and Open-Source Software,自由及开放源代码软件)爱好者,未来的 Linux 系统管理员、Web 开发人员,目前是 TecMint 的内容创作者,他喜欢用电脑工作,且崇尚分享知识。
----------------
via: https://www.tecmint.com/useful-tips-for-writing-bash-scripts-in-linux/
作者:[Aaron Kili][a]
译者:[ch-cn](https://github.com/ch-cn)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.tecmint.com/author/aaronkili/
[1]:https://linux.cn/article-8028-1.html
[2]:https://linux.cn/article-8045-1.html
[3]:https://linux.cn/article-8120-1.html
[4]:https://www.tecmint.com/category/bash-shell/
[5]:https://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/
[6]:https://www.tecmint.com/wp-content/uploads/2017/05/Use-Double-Quotes-in-Scripts.png
[7]:https://www.tecmint.com/assign-linux-command-output-to-variable/
[8]:https://www.tecmint.com/shellcheck-shell-script-code-analyzer-for-linux/
# sprite-converter-for-picosystem
A small utility to convert images into sprites for Pimoroni's PicoSystem SDK
# Setting up the Environment
```commandline
git clone git@github.com:dmalec/sprite-converter-for-picosystem.git
cd sprite-converter-for-picosystem
python3 -m venv venv
```
On Mac/Linux:
```commandline
source venv/bin/activate
```
On Windows:
```commandline
venv\Scripts\activate.bat
```
All systems (inside the `venv` python environment started just above):
```commandline
python -m pip install -r requirements.txt
deactivate
```
# Running
From the `sprite-converter-for-picosystem` directory do the following.
On Mac/Linux:
```commandline
source venv/bin/activate
```
On Windows:
```commandline
venv\Scripts\activate.bat
```
All systems (inside the `venv` python environment started just above):
```commandline
python sprite_converter.py demo_image.png
```
This should produce output like the following:
```c
// Custom sprite sheet
const color_t custom_sprite_sheet_data[256] = {
0x00ff, 0x00ff, 0x00ff, 0x00ff, 0x00ff, 0x00ff, 0x00ff, 0x00ff,
0x0000, 0x0000, 0x0000, 0xf0f0, 0xf0f0, 0x0000, 0x0000, 0x0000,
...
0x0000, 0x0000, 0x0000, 0x0ff0, 0x0ff0, 0x0000, 0x0000, 0x0000,
0x00f0, 0x00f0, 0xffff, 0xffff, 0x00f0, 0x00f0, 0xffff, 0xffff,
};
buffer_t *CUSTOM_SPRITESHEET = buffer(16, 16, (void *)custom_sprite_sheet_data);
buffer_t *custom_sprite_sheet = CUSTOM_SPRITESHEET;
```
Which can then be pasted into your source code and used like so:
```c
// load custom spritesheet
spritesheet(custom_sprite_sheet);
...
// draw the first sprite at location 10, 20
sprite(0, 10, 20);
```
### 2017-09-12
#### java
* [kdn251 / interviews](https://github.com/kdn251/interviews):Everything you need to know to get the job.
* [jokermonn / permissions4m](https://github.com/jokermonn/permissions4m):国产手机5.0、6.0权限适配框架/编译时注解框架/an Android Runtime Permissions Tool by using APT
* [ffay / lanproxy](https://github.com/ffay/lanproxy):lanproxy是一个将局域网个人电脑、服务器代理到公网的内网穿透工具,目前仅支持tcp流量转发,可支持任何tcp上层协议(访问内网网站、本地支付接口调试、ssh访问、远程桌面...)。目前市面上提供类似服务的有花生壳、TeamView、GoToMyCloud等等,但要使用第三方的公网服务器就必须为第三方付费,并且这些服务都有各种各样的限制,此外,由于数据包会流经第三方,因此对数据安全也是一大隐患。
* [luojilab / DDComponentForAndroid](https://github.com/luojilab/DDComponentForAndroid):一套完整有效的android组件化方案,支持组件的单独调试、集成调试、组件交互、UI跳转、动态加载、组件完全隔离等功能
* [uavorg / uavstack](https://github.com/uavorg/uavstack):UAVStack Open Source All in One Repository
* [gongwen / SwipeBackLayout](https://github.com/gongwen/SwipeBackLayout):SwipeBack is an android library that can finish a activity by using gesture.
* [iluwatar / java-design-patterns](https://github.com/iluwatar/java-design-patterns):Design patterns implemented in Java
* [BriData / DBus](https://github.com/BriData/DBus):
* [QMUI / QMUI_Android](https://github.com/QMUI/QMUI_Android):A UI library that improves the efficiency of Android UI development
* [airbnb / lottie-android](https://github.com/airbnb/lottie-android):Render After Effects animations natively on Android and iOS
* [junit-team / junit5](https://github.com/junit-team/junit5):The next generation of JUnit.
* [alibaba / dubbo](https://github.com/alibaba/dubbo):📢 Dubbo is a distributed, high performance RPC framework empowering applications with service import/export capabilities.
* [spring-projects / spring-boot](https://github.com/spring-projects/spring-boot):Spring Boot
* [Ramotion / garland-view-android](https://github.com/Ramotion/garland-view-android):GarlandView seamlessly transitions between multiple lists of content. Made by @Ramotion
* [yangchaojiang / yjPlay](https://github.com/yangchaojiang/yjPlay):基于exoPlayer 自定义播放器 支持直播 ,ExoUserPlayer 基本播放器 ,GestureVideoPlayer 增加手势 亮度,音量,快进,等手势 ,ManualPlayer 默认手动播放,增加默认图 ,增加广告视频预览 ,增加视频清晰度切换 , 增加缓存视频功能 ,支持自定义各种数据源加载 okttp,rtmp, 缓存,Cronet 等,支持列表播放视频
* [material-foundation / material-remixer-android](https://github.com/material-foundation/material-remixer-android):Remixer for Android. Live adjustment of app variables.
* [vondear / RxTools](https://github.com/vondear/RxTools):Android开发人员不得不收集的工具类集合 | 支付宝支付 | 微信支付(统一下单) | 微信分享 | 一键集成UCrop选择圆形头像 | 一键集成二维码和条形码的扫描与生成 | 常用Dialog | WebView的封装可播放视频 | 仿斗鱼滑动验证码 | Toast封装 | 震动 | GPS | Location定位 | 压缩与加密 | 图片缩放 | Exif 图片添加地理位置信息(经纬度) | 编译运行一下说不定会找到惊喜
* [jiaozg / spring-boot-all](https://github.com/jiaozg/spring-boot-all):
* [scwang90 / SmartRefreshLayout](https://github.com/scwang90/SmartRefreshLayout):下拉刷新、上拉加载、RefreshLayout、OverScroll,Android智能下拉刷新框架,支持越界回弹,具有极强的扩展性,集成了几十种炫酷的Header和 Footer。
* [objectbox / objectbox-java](https://github.com/objectbox/objectbox-java):ObjectBox is a superfast mobile database for objects
* [shuzheng / zheng](https://github.com/shuzheng/zheng):基于Spring+SpringMVC+Mybatis分布式敏捷开发系统架构,提供整套公共微服务服务模块:集中权限管理(单点登录)、内容管理、支付中心、用户管理(支持第三方登录)、微信平台、存储系统、配置中心、日志分析、任务和通知等,支持服务治理、监控和追踪,努力为中小型企业打造全方位J2EE企业级开发解决方案。
* [spring-projects / spring-framework](https://github.com/spring-projects/spring-framework):The Spring Framework
* [zxing / zxing](https://github.com/zxing/zxing):ZXing ("Zebra Crossing") barcode scanning library for Java, Android
* [PhilJay / MPAndroidChart](https://github.com/PhilJay/MPAndroidChart):A powerful Android chart view / graph view library, supporting line- bar- pie- radar- bubble- and candlestick charts as well as scaling, dragging and animations.
* [elastic / elasticsearch](https://github.com/elastic/elasticsearch):Open Source, Distributed, RESTful Search Engine
#### vue
* [salomonelli / best-resume-ever](https://github.com/salomonelli/best-resume-ever):👔 💼 Build fast 🚀 and easy multiple beautiful resumes and create your best CV ever! Made with Vue and LESS.
* [ElemeFE / element](https://github.com/ElemeFE/element):A Vue.js 2.0 UI Toolkit for Web
* [PanJiaChen / vue-element-admin](https://github.com/PanJiaChen/vue-element-admin):vue2.0 admin / a management system template http://panjiachen.github.io/vue-element-admin
* [iview / iview](https://github.com/iview/iview):A high quality UI Toolkit built on Vue.js
* [vue-bulma / vue-admin](https://github.com/vue-bulma/vue-admin):Vue Admin Panel Framework, Powered by Vue 2.0 and Bulma 0.3
* [quasarframework / quasar](https://github.com/quasarframework/quasar):Quasar Framework
* [euvl / vue-js-modal](https://github.com/euvl/vue-js-modal):🍕 Simple to use, highly customizable, mobile friendly Vue.js 2.0+ modal.
* [lss5270 / vue-admin-spa](https://github.com/lss5270/vue-admin-spa):基于vue2.0生态的后台管理系统模板(spa)。 a vue management system template based on :vue2.0 + vue-router + vuex + element-ui +ES6+ webpack + npm。
* [52NineTwo / F-Rent](https://github.com/52NineTwo/F-Rent):基于vue2 + muse-ui构建的移动端轻社区项目 F-Rent 友租
* [epicmaxco / vuestic-admin](https://github.com/epicmaxco/vuestic-admin):Vue.js admin dashboard
* [airyland / vux](https://github.com/airyland/vux):Vue UI Components based on WeUI
* [museui / muse-ui](https://github.com/museui/muse-ui):Material Design UI library for Vuejs 2.0
* [AT-UI / at-ui](https://github.com/AT-UI/at-ui):A fresh and flat UI-Kit specially for desktop application, made with ♥ by Vue.js 2.0
* [surmon-china / vue-awesome-swiper](https://github.com/surmon-china/vue-awesome-swiper):🏆 Swiper component for Vue.
* [vuematerial / vue-material](https://github.com/vuematerial/vue-material):Material design for Vue.js
* [ydcss / vue-ydui](https://github.com/ydcss/vue-ydui):A mobile components Library with Vue2.js. 一只基于Vue2.x的移动端组件库。
* [rafaelpimpa / buefy](https://github.com/rafaelpimpa/buefy):Lightweight UI components for Vue.js based on Bulma
* [lvzhenbang / vue2-demo](https://github.com/lvzhenbang/vue2-demo):一个基于兴趣,为了学习,提高能力的项目
* [bailichen / vue-weixin](https://github.com/bailichen/vue-weixin):Vue2 全家桶仿 微信App 项目,支持多人在线聊天和机器人聊天
* [ratiw / vue-table](https://github.com/ratiw/vue-table):data table simplify! -- vuetable is a Vue.js component that will automatically request (JSON) data from the server and display them nicely in html table with swappable/extensible pagination component.
* [crowdbotics / v-img](https://github.com/crowdbotics/v-img):
* [monkeyWangs / doubanMovie](https://github.com/monkeyWangs/doubanMovie):Vue豆瓣电影浏览器端渲染
* [bailicangdu / vue2-elm](https://github.com/bailicangdu/vue2-elm):基于 vue2 + vuex 构建一个具有 45 个页面的大型单页面应用
* [ElemeFE / mint-ui](https://github.com/ElemeFE/mint-ui):Mobile UI elements for Vue.js
* [ghosh / uiGradients](https://github.com/ghosh/uiGradients):🔴 Beautiful colour gradients for design and code
#### kotlin
* [gurleensethi / LiteUtilities](https://github.com/gurleensethi/LiteUtilities):Speed up your android development by removing boilerplate code
* [Kotlin / anko](https://github.com/Kotlin/anko):Pleasant Android application development
* [google / flexbox-layout](https://github.com/google/flexbox-layout):Flexbox for Android
* [JakeWharton / Reagent](https://github.com/JakeWharton/Reagent):Experiments for future reactive libraries.
* [ReactiveX / RxKotlin](https://github.com/ReactiveX/RxKotlin):RxJava bindings for Kotlin
* [kategory / kategory](https://github.com/kategory/kategory):Functional Data Types & Abstractions for Kotlin
* [http4k / http4k](https://github.com/http4k/http4k):http4k is an HTTP toolkit written in Kotlin that enables the serving and consuming of HTTP services in a functional and consistent way.
* [fossasia / susi_android](https://github.com/fossasia/susi_android):Susi Android App https://github.com/fossasia/susi_android/raw/apk/susi-debug.apk
* [Kotlin / ktor](https://github.com/Kotlin/ktor):Web backend framework for Kotlin
* [JetBrains / kotlin-examples](https://github.com/JetBrains/kotlin-examples):Various examples for Kotlin
* [Kotlin / kotlinx.coroutines](https://github.com/Kotlin/kotlinx.coroutines):Libraries built upon Kotlin coroutines
* [KotlinBy / awesome-kotlin](https://github.com/KotlinBy/awesome-kotlin):A curated list of awesome Kotlin related stuff Inspired by awesome-java.
* [JetBrains / teamcity-dotnet-plugin](https://github.com/JetBrains/teamcity-dotnet-plugin):TeamCity plugin for .NET projects
* [mpcjanssen / simpletask-android](https://github.com/mpcjanssen/simpletask-android):
* [googlesamples / android-topeka](https://github.com/googlesamples/android-topeka):A fun to play quiz that showcases material design on Android
* [TonicArtos / SuperSLiM](https://github.com/TonicArtos/SuperSLiM):A layout manager for the RecyclerView with interchangeable linear, grid, and staggered displays of views, all with configurable section headers including the sticky variety as specified in the material design docs.
* [KeepSafe / dexcount-gradle-plugin](https://github.com/KeepSafe/dexcount-gradle-plugin):A Gradle plugin to report the number of method references in your APK on every build.
* [enbandari / Kotlin-Tutorials](https://github.com/enbandari/Kotlin-Tutorials):【Kotlin 视频教程】国内资料较少,我录制了一套视频作为抛砖引玉~
* [JetBrains / kotlin-native](https://github.com/JetBrains/kotlin-native):Kotlin/Native infrastructure
* [JakeWharton / kotterknife](https://github.com/JakeWharton/kotterknife):View "injection" library for Android.
* [square / sqldelight](https://github.com/square/sqldelight):Generates Java models from CREATE TABLE statements.
* [intellij-rust / intellij-rust](https://github.com/intellij-rust/intellij-rust):Rust plugin for the IntelliJ Platform: https://intellij-rust.github.io/
* [TwidereProject / Twidere-Android](https://github.com/TwidereProject/Twidere-Android):
* [LWJGL / lwjgl3](https://github.com/LWJGL/lwjgl3):LWJGL is a Java library that enables cross-platform access to popular native APIs useful in the development of graphics (OpenGL), audio (OpenAL) and parallel computing (OpenCL) applications.
* [Kotlin / kotlin-koans](https://github.com/Kotlin/kotlin-koans):Kotlin workshop
#### javascript
* [ApoorvSaxena / lozad.js](https://github.com/ApoorvSaxena/lozad.js):Highly performant, light ~0.5kb and configurable lazy loader in pure JS with no dependencies for images, iframes and more
* [fastify / fastify](https://github.com/fastify/fastify):Fast and low overhead web framework, for Node.js
* [Okazari / Rythm.js](https://github.com/Okazari/Rythm.js):A javascript library that makes your page dance.
* [mplewis / src2png](https://github.com/mplewis/src2png):📸💻 Turn your source code into beautiful syntax-highlighted images.
* [vuejs / vue](https://github.com/vuejs/vue):A progressive, incrementally-adoptable JavaScript framework for building UI on the web.
* [maierfelix / Iroh](https://github.com/maierfelix/Iroh):☕ Dynamic analysis tool - Intercept, record and analyze JavaScript at runtime
* [GoogleChrome / puppeteer](https://github.com/GoogleChrome/puppeteer):Headless Chrome Node API
* [mikeal / r2](https://github.com/mikeal/r2):HTTP client. Spiritual successor to request.
* [fhinkel / type-profile](https://github.com/fhinkel/type-profile):Collect runtime type information 😻 of your JavaScript code.
* [facebook / react](https://github.com/facebook/react):A declarative, efficient, and flexible JavaScript library for building user interfaces.
* [wojtekmaj / react-pdf](https://github.com/wojtekmaj/react-pdf):Easily display PDF files in your React application.
* [GeometryCollective / geometry-processing-js](https://github.com/GeometryCollective/geometry-processing-js):A fast, general-purpose framework for geometry processing on the web.
* [twbs / bootstrap](https://github.com/twbs/bootstrap):The most popular HTML, CSS, and JavaScript framework for developing responsive, mobile first projects on the web.
* [mzabriskie / axios](https://github.com/mzabriskie/axios):Promise based HTTP client for the browser and node.js
* [stasm / innerself](https://github.com/stasm/innerself):A tiny view + state management solution using innerHTML
* [facebookincubator / create-react-app](https://github.com/facebookincubator/create-react-app):Create React apps with no build configuration.
* [facebook / react-native](https://github.com/facebook/react-native):A framework for building native apps with React.
* [nitin42 / react-imgpro](https://github.com/nitin42/react-imgpro):📷 Image Processing Component for React
* [esterTion / Youku-HTML5-Player](https://github.com/esterTion/Youku-HTML5-Player):告别flash和广告
* [airbnb / javascript](https://github.com/airbnb/javascript):JavaScript Style Guide
* [paypal / downshift](https://github.com/paypal/downshift):🏎 Primitives to build simple, flexible, WAI-ARIA compliant enhanced input React components
* [nodejs / node](https://github.com/nodejs/node):Node.js JavaScript runtime ✨🐢🚀✨
* [poteto / hiring-without-whiteboards](https://github.com/poteto/hiring-without-whiteboards):⭐️ Companies that don't have a broken hiring process
* [ryanmcdermott / clean-code-javascript](https://github.com/ryanmcdermott/clean-code-javascript):🛁 Clean Code concepts adapted for JavaScript
* [developit / preact](https://github.com/developit/preact):⚛️ Fast 3kb React alternative with the same ES6 API. Components & Virtual DOM.
#### css
* [jgthms / bulma](https://github.com/jgthms/bulma):Modern CSS framework based on Flexbox
* [necolas / normalize.css](https://github.com/necolas/normalize.css):A collection of HTML element and attribute style-normalizations
* [ErnestOrt / Trampoline](https://github.com/ErnestOrt/Trampoline):Your help during the course of developing an application based on the paradigm of microservices with spring boot nature.
* [muhammadmuzzammil1998 / OctoCSS](https://github.com/muhammadmuzzammil1998/OctoCSS):Minimalistic "Fork me on GitHub"
* [Tencent / QMUI_Web](https://github.com/Tencent/QMUI_Web):An efficient front-end framework for developers building UI on the web.
* [iissnan / hexo-theme-next](https://github.com/iissnan/hexo-theme-next):Elegant theme for Hexo.
* [picturepan2 / spectre](https://github.com/picturepan2/spectre):Spectre.css - A lightweight, responsive and modern CSS framework.
* [mmistakes / minimal-mistakes](https://github.com/mmistakes/minimal-mistakes):📐 A flexible two-column Jekyll theme. Perfect for personal sites, blogs, and portfolios hosted on GitHub or your own server.
* [dsternlicht / RESTool](https://github.com/dsternlicht/RESTool):RESTool is an open source UI tool for managing RESTful APIs. It could save you time developing your own internal tools. Here's an example of how it looks like:
* [uikit / uikit](https://github.com/uikit/uikit):A lightweight and modular front-end framework for developing fast and powerful web interfaces
* [herozhou / vue-framework-wz](https://github.com/herozhou/vue-framework-wz):vue后台管理框架
* [widget- / slack-black-theme](https://github.com/widget-/slack-black-theme):A darker, more contrasty, Slack theme.
* [tachyons-css / tachyons](https://github.com/tachyons-css/tachyons):Functional css for humans
* [Automattic / _s](https://github.com/Automattic/_s):Hi. I'm a starter theme called _s, or underscores, if you like. I'm a theme meant for hacking so don't use me as a Parent Theme. Instead try turning me into the next, most awesome, WordPress theme out there. That's what I'm here for.
* [dunovank / jupyter-themes](https://github.com/dunovank/jupyter-themes):Custom Jupyter Notebook Themes
* [rolling-scopes / front-end-course](https://github.com/rolling-scopes/front-end-course):
* [primer / primer-css](https://github.com/primer/primer-css):The CSS framework that powers GitHub's front-end design.
* [oldj / SwitchHosts](https://github.com/oldj/SwitchHosts):Switch hosts quickly!
* [2048-class / 2048](https://github.com/2048-class/2048):A small clone of 1024 (https://play.google.com/store/apps/details?id=com.veewo.a1024)
* [fwallacephd / doctor-who](https://github.com/fwallacephd/doctor-who):
* [HubPress / hubpress.io](https://github.com/HubPress/hubpress.io):A web application to build your blog on GitHub
* [wesbos / React-For-Beginners-Starter-Files](https://github.com/wesbos/React-For-Beginners-Starter-Files):Starter files for learning React.js with React for Beginners
* [odoo / documentation-user](https://github.com/odoo/documentation-user):
* [enspiral / handbook](https://github.com/enspiral/handbook):http://handbook.enspiral.com
* [liorgrossman / darkness](https://github.com/liorgrossman/darkness):Dark Themes for Popular Websites
| 119.766423 | 287 | 0.77706 | yue_Hant | 0.238544 |
e1d5c446a76b02114f1894c78044567a8d4cd8be | 22,418 | md | Markdown | API-escola.md | victorwss/apis-sme | 53e06a616315a88211eb8f056549546a6b234d89 | [
"MIT"
] | null | null | null | API-escola.md | victorwss/apis-sme | 53e06a616315a88211eb8f056549546a6b234d89 | [
"MIT"
] | null | null | null | API-escola.md | victorwss/apis-sme | 53e06a616315a88211eb8f056549546a6b234d89 | [
"MIT"
] | null | null | null | # API de escolas
## Estruturas
### Endereço
| Campo | Tipo |
|----------------------------------------------|--------------------------------------------------------------|
| `logradouro` | String |
| <a name="endereco-tipo">`tipo`</a> | String |
| <a name="endereco-numero">`numero`</a> | String |
| `complemento` | String |
| `cep` | String no formato da expressão regular `[0-9]{5}\-[0-9\]{3}` |
| `bairro` | String |
| <a name="endereco-distrito">`distrito`</a> | String |
| `latitude` | Ponto flutuante entre -90.0 e +90.0 |
| `longitude` | Ponto flutuante entre -180.0 e +180.0 |
| <a name="endereco-municipio">`municipio`</a> | String |
| <a name="endereco-estado">`estado`</a> | String |
| <a name="endereco-pais">`pais`</a> | String |
Observações:
* O [campo `tipo`](#endereco-tipo) pode especificar rua, avenida, praça, estrada, alameda, quadra, rodovia, travessa, etc.
* O [campo `numero`](#endereco-numero), apesar do nome, nem sempre é numérico. Por exemplo: "Baker Street, 221B".
* No caso do município de São Paulo, o [campo `distito`](#endereco-distrito) corresponde a subprefeitura.
* O [campo `municipio`](#endereco-municipio) na maioria das vezes conterá o valor "São Paulo". Mas no caso do endereço de alunos ou professores, pode ser um outro município.
* O [campo `estado`](#endereco-estado) na maioria das vezes conterá o valor "São Paulo". Mas no caso do endereço de alunos ou professores, pode ser um outro estado.
* O [campo `pais`](#endereco-pais) na maioria das vezes conterá o valor "Brasil". Mas no caso do endereço de alunos ou professores, pode ser um outro país.
### Telefone
| Campo | Tipo |
|------------------------------------------------|---------------------------------------|
| <a name="telefone-ddi">`ddi`</a> | String numérica |
| <a name="telefone-codigoArea">`codigoArea`</a> | String numérica |
| `numero` | String numérica |
| `ramal` | String numérica |
| `tipo` | [Tipo de telefone](#tipo-de-telefone) |
| `operadora` | String |
Observações:
* O [campo `ddi`](#telefone-ddi) na maioria das vezes será fixado como "55". Mas no caso de telefones fora do Brasil, pode vir com algum outro valor.
* O [campo `codigoArea`](#telefone-codigoArea) para telefones no Brasil corresponde ao DDD de 2 dígitos. No entanto, para telefones em outros países, pode ter uma quantidade de dígitos diferentes ou estar em branco.
### Faixa etária
| Campo | Tipo |
|----------|---------|
| `inicio` | Inteiro |
| `fim` | Inteiro |
### Link
| Campo | Tipo |
|----------------------------------|--------|
| <a name="link-tipo">`tipo`</a> | String |
| <a name="link-valor">`valor`</a> | String |
Observações:
* O [campo `tipo`](#link-tipo) pode ser "Twitter", "Instagram", "Facebook", "Youtube", "Blog", "Website", etc.
* O [campo `valor`](#link-valor) pode ser uma URL, nome de domínio, nome de usuário ou algo equivalente que sirva para identificar unicamente a entidade na respectiva rede.
### DRE
| Campo | Tipo |
|-----------------------------|------------------------------------------------------------|
| <a name="dre-eol">`eol`</a> | String numérica |
| `endereco` | [Endereço](#endereço) |
| `nome` | String |
| `telefones` | Lista de [telefones](#telefone) |
| `emails` | Lista de strings |
| `status` | String representando a [situação da DRE](#situação-da-dre) |
| `dataAtualizacao` | String com data no formato "DD/MM/AAAA" |
Observações:
* O [campo `eol`](#dre-eol) é o código da DRE de acordo com o sistema EOL.
### Escola
| Campo | Tipo | Suportado? |
|-------------------------------------------------|--------------------------------------------------------------------------------------------| -----------|
| <a name="escola-eol">`eol`</a> | String numérica | ✔️ Sim. |
| <a name="escola-inep">`inep`</a> | String numérica | ❌ Não. |
| <a name="escola-papa">`papa`</a> | String numérica | ❌ Não. |
| <a name="escola-cie">`cie`</a> | String numérica? | ❌ Não. |
| `nome` | String | ✔️ Sim. |
| `nomesAnteriores` | Lista de strings | ❌ Não. |
| `endereco` | [Endereço](#endereço) | ❌ Não. |
| <a name="escola-tipo">`tipo`</a> | String representando o [tipo da escola](#tipo-de-escola) | ✔️ Sim. |
| `telefones` | Lista de [telefones](#telefone) | ❌ Não. |
| `emails` | Lista de strings | ❌ Não. |
| `status` | String representando a [situação da escola](#situação-da-escola) | ❌ Não. |
| <a name="escola-dre">`dre`</a> | [DRE](#dre) | ❌ Não. |
| `redesSociais` | Lista de [links](#link) | ❌ Não. |
| `dataCriacao` | String com data no formato "DD/MM/AAAA" | ❌ Não. |
| `dataCriacaoDOM` | String com data no formato "DD/MM/AAAA" | ❌ Não. |
| `dataInicioFuncionamento` | String com data no formato "DD/MM/AAAA" | ❌ Não. |
| `dataInicioConvenio` | String com data no formato "DD/MM/AAAA" | ❌ Não. |
| `dataAutorizacao` | String com data no formato "DD/MM/AAAA" | ❌ Não. |
| `dataExtincao` | String com data no formato "DD/MM/AAAA" | ❌ Não. |
| <a name="escola-faixa-etaria">`faixaEtaria`</a> | [Faixa etária](#faixa-etária) | ❌ Não. |
| <a name="escola-capacidade">`capacidade`</a> | Inteiro maior ou igual a zero | ❌ Não. |
| `turnos` | Objeto onde chaves são anos e valores são listas de strings representando [turnos](#turno) | ❌ Não. |
| `rede` | String representando uma [rede escolar](#rede-escolar) | ❌ Não. |
| `dataAtualizacao` | String com data no formato "DD/MM/AAAA" | ❌ Não. |
Observações:
* O [campo `eol`](#escola-eol) é o código da escola de acordo com o sistema EOL.
* O [campo `inep`](#escola-inep) é o código da escola de acordo com o INEP.
* O [campo `papa`](#escola-papa) é o código da escola de acordo com o sistema PAPA.
* O [campo `cie`](#escola-cie) é o código CIE da escola de acordo com a Secretaria Estadual de Educação de São Paulo (SEE-SP).
* O [campo `dre`](#escola-dre) não é uma string e nem o código da DRE. Contém o objeto correspondente a DRE completo a ser aninhado dentro da estrutura da escola.
* O [campo `faixaEtaria`](#escola-faixa-etaria) é um objeto que representa a faixa etária em anos de atendimento da escola.
* O [campo `capacidade`](#escola-capacidade) representa a capacidade em número de matrículas da escola.
* Todos os campos que ainda não são suportados terão o valor `null`. Estes campos deverão ser suportados no futuro.
## Enums
### Tipo de escola
| Nome | Descrição |
|------------------|-----------------------------------------------------------------------|
| `CCA` | Centro para Crianças e Adolescentes |
| `CCI/CIPS` | Centro de Convivência Infantil / Centro Integrado de Proteção e Saúde |
| `CECI` | Centro de Educação e Cultura Indígena |
| `CEI DIRET` | Centro de Educação Infantil Direto |
| `CEI INDIR` | Centro de Educação Infantil Indireto Conveniado |
| `CEMEI` | Centro Municipal de Educação Infantil |
| `CEU` | Centro Educacional Unificado |
| `CEU AT COM` | Unidade CEU para atendimento exclusivo de Atividades Complementares |
| `CEU CEI` | Centro de Educação Infantil integrante de CEU |
| `CEU EMEF` | Escola Municipal de Ensino Fundamental integrante de CEU |
| `CEU EMEI` | Escola Municipal de Educação Infantil integrante de CEU |
| `CIEJA` | Centro Integrado de Educação de Jovens e Adultos |
| `CMCT` | Centro Municipal de Capacitação e Treinamento |
| `CR.P.CONV` | Creche Privada Conveniada |
| `DIR EDUC` | Diretoria Regional de Educação |
| `E TECNICA` | Escola Técnica |
| `EMEBS` | Escola Municipal de Educação Bilíngue para Surdos |
| `EMEF` | Escola Municipal de Ensino Fundamental |
| `EMEFM` | Escola Municipal de Ensino Fundamental e Médio |
| `EMEI` | Escola Municipal de Educação Infantil |
| `ESC.PART.` | Escola Particular |
| `ESP CONV` | Especial Conveniada |
| `MOVA` | Movimento de Alfabetização de Jovens e Adultos |
| `OUT-PMSP` | Outras - Prefeitura Municipal de São Paulo |
### Situação da escola
| Nome | Observações |
|------------|-------------------------------------------|
| `Criada` | A escola ainda não está em funcionamento. |
| `Ativa` | A escola está em funcionamento. |
| `Extinta` | A escola não está mais em funcionamento. |
### Situação da DRE
| Nome |
|-----------|
| `Ativa` |
| `Extinta` |
### Turno
| Nome | Significado | Descrição |
|-------|---------------|-----------------------------------|
| `M` | Manhã | Aula regular no período da manhã. |
| `I` | Intermediário | Aula das 11:00 às 15:00. |
| `T` | Tarde | Aula regular no período da tarde. |
| `V` | Vespertino | Aula das 15:00 às 19:00. |
| `G` | Integral | Aula de manhã e pela tarde. |
| `N` | Noite | Aula regular no período da noite. |
Observações:
* A maioria das escolas mantém apenas os turnos regulares `M` (manhã), `T` (tarde) e `N` (noite).
* Há algumas escolas que oferecem o turno `G` (integral) nos períodos da manhã e da tarde, principalmente no caso de creches.
* Escolas que tenham os turnos `M` (manhã), `I` (intermediário) e `V` (vespertino) são casos raros e excepcionais. Correspondem a três turnos diurnos distintos de aulas em um mesmo dia.
### Rede escolar
| Nome |
|--------------|
| `Direta` |
| `Conveniada` |
### Tipo de telefone
| Nome |
|-----------|
| `Fixo` |
| `Celular` |
| `Fax` |
## Operações
### Pesquisar escolas por código EOL
* *Método HTTP*: **GET**
* *URL*: `/escolas/eol/{código}`
* Resultado esperado: [Escola](#escola)
#### Parâmetros
| Nome | Local | Significado |
|----------|-------|------------------------------------------|
| `codigo` | URL | [Código EOL](#escola-eol) de uma escola. |
#### Erros:
* **404**: Se o `código` não corresponder a uma string numérica ou não corresponder ao código EOL de nenhuma escola.
### Pesquisar escolas por código INEP
* *Método HTTP*: **GET**
* *URL*: `/escolas/inep/{código}`
* Resultado esperado: [Escola](#escola)
#### Parâmetros
| Nome | Local | Significado |
|----------|-------|--------------------------------------------|
| `codigo` | URL | [Código INEP](#escola-inep) de uma escola. |
#### Erros:
* **404**: Se o `código` não corresponder a uma string numérica ou não corresponder ao código INEP de nenhuma escola.
### Pesquisar escolas por código PAPA
* *Método HTTP*: **GET**
* *URL*: `/escolas/papa/{código}`
* Resultado esperado: [Escola](#escola)
#### Parâmetros
| Nome | Local | Significado |
|----------|-------|--------------------------------------------|
| `codigo` | URL | [Código PAPA](#escola-papa) de uma escola. |
#### Erros
* **404**: Se o `código` não corresponder a uma string numérica ou não corresponder ao código PAPA de nenhuma escola.
### Pesquisar escolas por código CIE
* Método HTTP: **GET**
* URL: `/escolas/cie/{código}`
* Resultado esperado: [Escola](#escola)
#### Parâmetros
| Nome | Local | Significado |
|----------|-------|------------------------------------------|
| `codigo` | URL | [Código CIE](#escola-cie) de uma escola. |
#### Erros:
* **404**: Se o `código` não corresponder a uma string numérica ou não corresponder ao código CIE de nenhuma escola.
### Pesquisar escolas por DRE
* *Método HTTP*: **GET**
* *URL*: `/dres/{código}/escolas`
* Resultado esperado: Lista de [escolas](#escola).
#### Parâmetros
| Nome | Local | Significado |
|----------|-------|--------------------------------|
| `codigo` | URL | [Código EOL](#dre-eol) da DRE. |
#### Erros:
* **404**: Se o `código` não corresponder a uma string numérica ou não corresponder ao código EOL de nenhuma DRE.
### Pesquisar DRE por código
* Método HTTP: **GET**
* URL: `/dres/{código}`
* Resultado esperado: [DRE](#dre)
#### Parâmetros
| Nome | Local | Significado |
|----------|-------|--------------------------------|
| `codigo` | URL | [Código EOL](#dre-eol) da DRE. |
#### Erros:
* **404**: Se o `código` não corresponder a uma string numérica ou não corresponder ao código EOL de nenhuma DRE.
### Listar DREs
* *Método HTTP*: **GET**
* *URL*: `/dres`
* Resultado esperado: Lista de [DREs](#dre).
#### Parâmetros
Não há.
#### Erros:
* **404**: Se o `código` não corresponder a uma string numérica ou não corresponder ao código EOL de nenhuma DRE.
### Listar escolas
* Método HTTP: **GET**
* URL: `/escolas?{parâmetros}`
* Resultado esperado: Lista de [escolas](#escola).
#### Parâmetros
| Nome | Local | Significado |
|-----------------------------------------------------|--------------|-------------------------------------------------------------------|
| <a name="lista-escolas-tipo">`tipo`</a> | Query string | Tipos de escolas a serem pesquisados. |
| <a name="lista-escolas-nome">`nome`</a> | Query string | Nomes das escolas a serem pesquisados. |
| <a name="lista-escolas-divisao">`divisao`</a> | Query string | Bairros, distritos e subprefeituras a serem pesquisados. |
| <a name="lista-escolas-logradouro">`logradouro`</a> | Query string | Ruas a serem pesquisadas. |
| <a name="lista-escolas-latitude">`latitude`</a> | Query string | Latitude da área circular onde as escolas serão procuradas. |
| <a name="lista-escolas-longitude">`longitude`</a> | Query string | Longitude da área circular onde as escolas serão procuradas. |
| <a name="lista-escolas-raio">`raio`</a> | Query string | Raio da área circular onde as escolas serão procuradas. |
| <a name="lista-escolas-status">`status`</a> | Query string | [Situações das escolas](#situação-da-escola) a serem pesquisadas. |
| <a name="lista-escolas-dre">`dre`</a> | Query string | [Códigos EOL das DREs](#dre-eol) das escolas a serem pesquisadas. |
Observações:
* O [campo `tipo`](#lista-escolas-tipo) corresponde aos [tipos de escolas](#escola-tipo) a serem pesquisados. Se for omitido, todos os tipos de escolas são consideradas. Múltiplos tipos de escolas podem ser pesquisados ao separá-los com `|`. Por exemplo, `?tipo=EMEF|EMEI` é utilizado para pesquisar por todas as escolas do tipo EMEF ou EMEI.
* O [campo `nome`](#lista-escolas-tipo) corresponde aos nomes de escolas a serem pesquisados. Se for omitido, todas as escolas são consideradas. Múltiplos nomes de escolas podem ser pesquisados ao separá-los com `|`. Expressões separadas por ponto-e-vírgula podem aparecer em qualquer parte e em qualquer ordem no nome. O `|` tem precedência sobre o `;`. Por exemplo, `?nome=João Pedro;Silva|Azevedo` pode encontrar escolas chamadas de "João Pedro Lima da Silva", "Carlos Silva de João Pedro" e "Ramos de Azevedo", mas não vai encontrar "João Silva Pedro".
* O [campo `divisao`](#lista-escolas-divisao) corresponde aos bairros, distritos, subdistritos ou subprefeituras onde a escola deve ser procurado. É aplicado com a mesma regra de busca dada pelo campo `nome`. Por exemplo, uma busca em `?divisao=Ipiranga|Sacomã|Cursino&nome=João` irá retornar todas as escolas que estejam no distrito, subprefeitura ou bairro do Ipiranga, Sacomã ou Cursino e que tenham "João" como parte do nome.
* O [campo `logradouro`](#lista-escolas-logradouro) corresponde ao nome da rua, avenida, praça, etc. onde a escola deve ser procurado. É aplicado com a mesma regra de busca dada pelo campo `nome`.
* Os campos [`latitude`](#lista-escolas-latitude), [`longitude`](#lista-escolas-longitude) e [`raio`](#lista-escolas-raio), se estiverem presentes, devem ser informados juntos. Não é permitido que um deles seja informado e os outros dois não ou que dois deles sejam informados e o terceiro não. Corresponde a busca por escolas que estejam dentro de um círculo centrado na latitude e longitude informados. Se omitidos, todas as escolas são consideradas.
* O [campo `latitude`](#lista-escolas-latitude) é dado em graus e está entre -90 e +90.
* O [campo `longitude`](#lista-escolas-longitude) é dado em graus e está entre -180 e +180.
* O [campo `raio`](#lista-escolas-raio) é dado em metros e deve ser um número maior ou igual a zero.
* O [campo `status`](#lista-escolas-status) corresponde às [situações da escola](#situação-da-escola) a serem pesquisados. Se for omitido, escolas em todas as situações são consideradas. Múltiplas situações de escolas podem ser pesquisados ao separá-los com `|`. Por exemplo, `?status=Criada|Ativa` é utilizado para pesquisar por todas as escolas do tipo Criada ou Ativa.
* O [campo `dre`](#lista-escolas-dre) corresponde aos [códigos EOL de DREs](#dre-eol) das DREs das escolas a serem pesquisadas. Se for omitido, escolas de todas as DREs são consideradas. Múltiplas DREs podem ser pesquisadas ao separá-los com `|`. Por exemplo, `?dres=1|2` é utilizado para pesquisar por todas as escolas nas DREs cujo números EOL sejam 1 ou 2.
* Todos os parâmetros textuais são *case insensitive*.
#### Erros:
* **422**: Se qualquer um dos parâmetros for apresentado mais do que uma vez na *query string*.
* **422**: Se apenas um ou apenas dois dos parâmetros `latitude`, `longitude` e `raio` forem especificados.
* **422**: Se a latitude for menor que -90.0, maior que +90.0 ou não puder ser interpretada como um número.
* **422**: Se a longitude for menor que -180.0, maior que +180.0 ou não puder ser interpretada como um número.
* **422**: Se o raio for menor que 0 ou que não puder ser interpretado como um número. | 63.6875 | 556 | 0.484611 | por_Latn | 0.993716 |
e1d88c77d2123237b280ab39435d3bd73568caaa | 130 | md | Markdown | _project/groomsmen-attire.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/groomsmen-attire.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/groomsmen-attire.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | ---
layout: project_single
title: "Groomsmen attire"
slug: "groomsmen-attire"
parent: "groomsmen-suspenders"
---
Groomsmen attire | 18.571429 | 30 | 0.761538 | nld_Latn | 0.406047 |
e1d95a18a89d9662d5161e413fa735244144b8cd | 65 | md | Markdown | README.md | SPECTRA-Solucoes/Jogo | 4d0b290aa1f07f03a7cd12cb1328b3f1da954896 | [
"Apache-2.0"
] | 1 | 2020-06-23T23:13:01.000Z | 2020-06-23T23:13:01.000Z | README.md | SPECTRA-Solucoes/Jogo | 4d0b290aa1f07f03a7cd12cb1328b3f1da954896 | [
"Apache-2.0"
] | null | null | null | README.md | SPECTRA-Solucoes/Jogo | 4d0b290aa1f07f03a7cd12cb1328b3f1da954896 | [
"Apache-2.0"
] | null | null | null | # Jogo
Jogo da forca criado para o trabalho da matéria PCA 2020.
| 21.666667 | 57 | 0.769231 | por_Latn | 1.00001 |
e1d98af986aa29ebae94fd871d1fef56fe952fa6 | 2,228 | md | Markdown | Changelog.md | isbodand/MikroTikAPI | 952bd9240562b828a03cdc9fbd792f99d0c092b5 | [
"MIT",
"BSL-1.0",
"BSD-3-Clause"
] | 1 | 2021-11-19T01:43:04.000Z | 2021-11-19T01:43:04.000Z | Changelog.md | isbodand/MikroTikAPI | 952bd9240562b828a03cdc9fbd792f99d0c092b5 | [
"MIT",
"BSL-1.0",
"BSD-3-Clause"
] | null | null | null | Changelog.md | isbodand/MikroTikAPI | 952bd9240562b828a03cdc9fbd792f99d0c092b5 | [
"MIT",
"BSL-1.0",
"BSD-3-Clause"
] | null | null | null | # MikroTik API Changelog
Versioning scheme is `<major>.<minor>.<patch>`. All version with the
same major version are backwards compatible. Minor versions introduce
new features. Path versions, well, patch things.
Removal takes two steps: getting deprecated in a minor version, then removed
when the next major version rolls along.
Minor and major versions get their own code-names, patch versions
append a number to the code-name of the version they are patching.
## VERSION v1.1.1 - Teius teyou-2
Version v1.1.1 adds binary distributions. Nothing in the API has changed,
and everything is completely API and ABI compatible with v1.1.0.
### Added:
- Binary distributions on GitHub Releases.
- LICENSE is now available in RTF format.
## VERSION v1.1.0 - Teius teyou
Version v1.1.0 is now ready. It mostly adds the documentation, with
a few changes that are backwards API compatible.
A bit of unification on the CMake build script now allows linking against
the library both as a subproject, and an installed project in the same
way `mikrotik::mikrotikapi`. This breaks CMake backwards compatibility,
but that's not strictly a part of the C++ API, so I deem it fine.
This also renames the build target from `MikroTikApi.whatever` to
`mikrotikapi.whatever`, this was a bit more problematic to accept, but
the universal linking is more useful than the filename.
### Changed:
- `api_handler::disconnect` now returns the return value of the
function call that closed the socket.
- The CMake target is to link against is universally `mikrotik::mikrotikapi`
instead of `mikrotik::api` for subprojects and `mikrotik::MikroTikApi` for
installed packages
### Added:
- Doxygen documentation on all API entities
- Sphinx powered read the docs page [here](https://mikrotikapi.rtfd.io):
I sweat blood for this (see the commits on `develop`)
so go read it and tell me why it's completely unusable garbage
## VERSION v1.0.0 - Salvator rufescens
First release!
API is now usable.
### Added:
- `api_handler` connects to MikroTik devices
- `api_handler` logs in with provided user
- `api_handler` can send and receive sentences
- DSL commands
- DSL sentences
- API runs on WinSock and POSIX socks
| 37.762712 | 77 | 0.761221 | eng_Latn | 0.99646 |
e1d98beb1cac7d22a90c51b55b89ee88f7b635f5 | 200 | md | Markdown | README.md | juanpaucar/pixijs-tutorials | f75b4bab523126fa6290154765fb5dd5938f4b95 | [
"MIT"
] | null | null | null | README.md | juanpaucar/pixijs-tutorials | f75b4bab523126fa6290154765fb5dd5938f4b95 | [
"MIT"
] | null | null | null | README.md | juanpaucar/pixijs-tutorials | f75b4bab523126fa6290154765fb5dd5938f4b95 | [
"MIT"
] | null | null | null | #PIXIJS examples
To run it you could use simple server from python to serve the static files
```
cd example_directory/
python -m SimpleHTTPServer
```
and go to the browser on http://localhost:8000
| 18.181818 | 75 | 0.765 | eng_Latn | 0.988706 |
e1da018e89e9b729abc539517f1343702c64ba22 | 413 | md | Markdown | doc/providers/Ethfiddle.md | matzeeable/Embera | 001d2a83e5a209432fdfb4f7f3b47c24c87a7892 | [
"MIT"
] | 284 | 2015-01-07T22:30:44.000Z | 2022-03-31T04:44:52.000Z | doc/providers/Ethfiddle.md | matzeeable/Embera | 001d2a83e5a209432fdfb4f7f3b47c24c87a7892 | [
"MIT"
] | 69 | 2015-01-06T17:24:11.000Z | 2022-02-08T05:34:47.000Z | doc/providers/Ethfiddle.md | matzeeable/Embera | 001d2a83e5a209432fdfb4f7f3b47c24c87a7892 | [
"MIT"
] | 66 | 2015-03-15T11:46:22.000Z | 2022-02-02T07:36:32.000Z | # [Ethfiddle](https://ethfiddle.com)
Ethfiddle Provider
Share Solidity code snippets with friends, or check out
cool code snippets from around the web.
## Implementation Details
- Provider
Name: Ethfiddle
- Documentation: NONE
- HTTPS support: YES
- Fake Response: YES
- Oembed Params: maxwidth , maxheight
- Supported Hosts: ethfiddle.com
- Responsive response: YES
- Collections: DefaultProviderCollection
| 20.65 | 55 | 0.779661 | eng_Latn | 0.76804 |
e1dbb7edc395fa18dcafdd21fe95ea5de9e21f29 | 23,473 | md | Markdown | articles/operations-management-suite/operations-management-suite-application-dependency-monitor-configure.md | OpenLocalizationTestOrg/azure-docs-pr15_it-IT | a5b6eb257721d6a02db53be2d3b2bee1d9e5aa1c | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/operations-management-suite/operations-management-suite-application-dependency-monitor-configure.md | OpenLocalizationTestOrg/azure-docs-pr15_it-IT | a5b6eb257721d6a02db53be2d3b2bee1d9e5aa1c | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/operations-management-suite/operations-management-suite-application-dependency-monitor-configure.md | OpenLocalizationTestOrg/azure-docs-pr15_it-IT | a5b6eb257721d6a02db53be2d3b2bee1d9e5aa1c | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
pageTitle="Configurazione dell'applicazione dipendenza Monitor (ADM) in operazioni Management Suite (OMS) | Microsoft Azure"
description="Applicazione dipendenza Monitor (ADM) è una soluzione di operazioni gestione famiglia di prodotti (OMS) che individua componenti di applicazioni nei sistemi Windows e Linux e mappe le comunicazioni tra servizi automaticamente. Questo articolo fornisce informazioni dettagliate per la distribuzione ADM nel proprio ambiente e usarlo in diversi scenari."
services="operations-management-suite"
documentationCenter=""
authors="daseidma"
manager="jwhit"
editor="tysonn" />
<tags
ms.service="operations-management-suite"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="infrastructure-services"
ms.date="09/28/2016"
ms.author="daseidma;bwren" />
# <a name="configuring-application-dependency-monitor-solution-in-operations-management-suite-oms"></a>Configurazione soluzione applicazione dipendenza Monitor in operazioni di gestione famiglia di prodotti (OMS)
 Applicazione dipendenza Monitor (ADM) automaticamente individua componenti di applicazioni nei sistemi Windows e Linux e mappe le comunicazioni tra i servizi. Consente di visualizzare i server come devono essere considerati – come sistemi interconnessi che offrono servizi critici. Applicazione dipendenza Monitor mostra le connessioni tra server, processi, e porte in qualsiasi architettura connessi TCP senza configurazioni richieste ad eccezione di installazione di un agente.
In questo articolo vengono illustrati i dettagli della configurazione di agenti Monitor dipendenza dell'applicazione e l'adozione. Per informazioni sull'utilizzo ADM, vedere [utilizzo di Monitor dipendenza applicazione soluzione in operazioni di gestione famiglia di prodotti (OMS)](operations-management-suite-application-dependency-monitor.md)
>[AZURE.NOTE]Applicazione dipendenza Monitor è attualmente in anteprima privato. È possibile richiedere l'accesso all'anteprima privato ADM in [https://aka.ms/getadm](https://aka.ms/getadm).
>
>Durante l'anteprima privato, tutti gli account OMS avere accesso illimitato per adm. I nodi ADM sono gratuiti, ma verranno contatore dati Analitica Log per i tipi di AdmComputer_CL e AdmProcess_CL come qualsiasi altra soluzione.
>
>Dopo aver ADM immette anteprima pubblica, sarà disponibile solo per i clienti gratuiti e a pagamento di informazione e Analitica nel OMS prezzi piano. Account di livello gratuito saranno limitati a 5 nodi ADM. Se si partecipa nel riquadro di anteprima privato e non registrato nel piano di prezzi di OMS quando ADM entra anteprima pubblica, ADM saranno disabilitate in quel momento.
## <a name="connected-sources"></a>Origini connesse
Nella tabella seguente vengono descritte le origini connesse supportate da soluzione ADM.
| Origine connesso | Supportati | Descrizione |
|:--|:--|:--|
| [Agenti di Windows](../log-analytics/log-analytics-windows-agents.md) | Sì | ADM analizza e raccoglie dati dal computer agente di Windows. <br><br>Si noti che oltre agente OMS agenti Windows richiedono Microsoft dipendenza Agent. Vedere i [sistemi operativi supportati](#supported-operating-systems) per un elenco completo delle versioni del sistema operativo. |
| [Agenti Linux](../log-analytics/log-analytics-linux-agents.md) | Sì | ADM analizza e raccoglie dati dal computer agente Linux. <br><br>Si noti che oltre agente OMS agenti Linux richiedono Microsoft dipendenza Agent. Vedere i [sistemi operativi supportati](#supported-operating-systems) per un elenco completo delle versioni del sistema operativo. |
| [Gruppo di gestione SCOM](../log-analytics/log-analytics-om-agents.md) | Sì | ADM analizza e consente di raccogliere dati da Windows e Linux agenti in un gruppo di gestione SCOM connesso. <br><br>È necessaria una connessione diretta dal computer agente SCOM OMS. Dati vengono inviati direttamente da inoltrate dal gruppo di gestione archivio di OMS.|
| [Account di archiviazione Azure](../log-analytics/log-analytics-azure-storage.md) | No | ADM raccoglie dati dal computer agente, in modo che nessun dato da essa per raccogliere dallo spazio di archiviazione Azure. |
Si noti che i ADM supporta solo piattaforme a 64 bit.
In Windows, Microsoft monitoraggio agente (MMA) usato SCOM e OMS per raccogliere e inviare il monitoraggio dei dati. (Questo agente è denominato SCOM agente, agente OMS, MMA o agente diretta, a seconda di contesto.) SCOM e OMS offrono diversi fuori le versioni di casella di MMA, ma queste versioni possono ogni rapporto a SCOM per OMS o a entrambi. In Linux, l'agente OMS per Linux raccoglie e invia dati a OMS di monitoraggio. È possibile utilizzare i ADM su server con agenti diretto OMS o su server collegati a OMS tramite SCOM gestione dei gruppi. Per questa documentazione si farà riferimento a tutti di questi agenti – su Linux o Windows, se connessi a una MG SCOM o direttamente a OMS-come "Agente OMS", a meno che il nome di distribuzione specifica dell'agente è necessaria per contesto.
Agente di ADM non trasmette dati stesso e non richiede le modifiche alle porte o firewall. Dati di ADM vengono sempre trasmessi dall'agente OMS a OMS, direttamente o tramite il Gateway OMS.

Se si è un cliente SCOM con un gruppo di gestione connessi a OMS:
- Se gli agenti SCOM possono accedere a internet a cui connettersi OMS, non è necessaria alcuna configurazione aggiuntiva.
- Se gli agenti SCOM non è possibile accedere a OMS tramite internet, sarà necessario configurare il Gateway OMS per l'uso con SCOM.
Se si utilizza l'agente diretto OMS, è necessario configurare l'agente OMS per connettersi a OMS o al gateway OMS. Il Gateway OMS può essere scaricato dal [https://www.microsoft.com/en-us/download/details.aspx?id=52666](https://www.microsoft.com/en-us/download/details.aspx?id=52666)
### <a name="avoiding-duplicate-data"></a>Come evitare dati duplicati
Se si è un cliente SCOM, non è necessario configurare gli agenti SCOM per comunicare direttamente a OMS o dati verranno segnalati due volte. In ADM, verrà creato nel computer visualizzati due volte nell'elenco a discesa computer.
Configurazione di OMS da eseguire solo in uno dei seguenti percorsi:
- Il riquadro SCOM Console operazioni gestione applicazioni per computer gestiti
- Configurazione di approfondimenti operative Azure nelle proprietà MMA
Utilizzo di entrambe le configurazioni con l'area di lavoro stesso in ogni impedirà la duplicazione dei dati. Utilizzo combinato con diverse aree di lavoro causando conflitto configurazione (quello con soluzione ADM abilitato e l'altro senza) che potrebbe impedire che scorre in ADM completamente di dati.
Si noti che anche se il computer stesso non viene specificato nella configurazione di OMS della Console SCOM, se un gruppo di istanza, ad esempio "Gruppo di istanze di Windows Server" è attivo, potrebbero comunque risultare la ricezione OMS configurazione del computer tramite SCOM.
## <a name="management-packs"></a>Management Pack
Quando è attivata ADM in un'area di lavoro OMS, 300KB Management Pack viene inviato a tutto il monitoraggio agenti Microsoft nell'area di lavoro. Se si usa agenti SCOM in un [gruppo di gestione connesso](../log-analytics/log-analytics-om-agents.md), ADM Management Pack verrà distribuito da SCOM. Se gli agenti sono connessi direttamente, il pannello di gestione verrà recapitato da OMS.
Pannello di gestione denominato Microsoft.IntelligencePacks.ApplicationDependencyMonitor*. Si tratta *%Programfiles%\Microsoft monitoraggio Agent\Agent\Health State\Management SP\*. L'origine dati utilizzata da management pack *% Program files%\Microsoft monitoraggio Agent\Agent\Health servizio State\Resources\<AutoGeneratedID > \Microsoft.EnterpriseManagement.Advisor.ApplicationDependencyMonitorDataSource.dll*.
## <a name="configuration"></a>Configurazione
Oltre a Windows e Linux computer dispongano di un agente installato e collegato al OMS, il programma di installazione agente della dipendenza deve essere scaricato dalla soluzione ADM e quindi installato come radice o amministratore su ciascun server gestito. Dopo aver installato l'agente ADM in un server reporting OMS, mappe dipendenza ADM appariranno in 10 minuti. Se si verificano problemi, vedere inviare tramite posta elettronica [[email protected]](mailto:[email protected]).
### <a name="migrating-from-bluestripe-factfinder"></a>Migrazione da BlueStripe FactFinder
Applicazione dipendenza Monitor fornirà BlueStripe tecnologia in OMS in fasi. FactFinder ancora supportate per i clienti esistenti ma non è più disponibile per l'acquisto singoli. Questa versione di anteprima dell'agente di dipendenza può comunicare solo con OMS. Se si è un cliente FactFinder corrente, identificare un insieme di un server di prova per ADM che non sono gestiti FactFinder.
### <a name="download-the-dependency-agent"></a>Scaricare l'agente dipendenza
Oltre a Microsoft Management agente (MMA) e OMS Linux agente che forniscono la connessione tra il computer e OMS, tutti i computer analizzati dall'applicazione dipendenza Monitor devono avere installato l'agente di relazione. Su Linux, è necessario installare l'agente OMS per Linux prima dell'agente di relazione.

Per scaricare l'agente di dipendenza, fare clic su **Configura soluzione** nel riquadro **Applicazione dipendenza Monitor** per aprire e **l'Agente della dipendenza** . E il dipendenza agente include collegamenti per le finestre e gli agenti Linux. Fare clic sul collegamento appropriato per scaricare ogni agente. Vedere le sezioni seguenti per informazioni dettagliate sull'installazione agente di sistemi diversi.
### <a name="install-the-dependency-agent"></a>Installare l'agente dipendenza
#### <a name="microsoft-windows"></a>Microsoft Windows
Per installare o disinstallare l'agente sono necessari privilegi di amministratore.
Agente di dipendenza è installato nel computer Windows con ADM-agente-Windows.exe. Se si esegue questo eseguibile senza opzioni, quindi inizierà una procedura guidata che è possibile eseguire per eseguire l'installazione in modo interattivo.
Utilizzare la procedura seguente per installare l'agente di dipendenza in ogni computer Windows.
1. Assicurarsi che l'agente OMS sia installato usando le istruzioni incluse nei computer di connettersi direttamente a OMS.
2. Scaricare l'agente di Windows ed eseguire con il comando seguente.<br>*ADM-agente-Windows.exe*
3. Seguire la procedura guidata per installare l'agente.
4. Se l'agente di dipendenza non è possibile avviare, controllare i registri per informazioni dettagliate sull'errore. Per gli agenti di Windows, la directory log è *C:\Programmi\Microsoft Files\BlueStripe\Collector\logs*.
È possibile disinstallare l'agente di dipendenza per Windows da un amministratore tramite il pannello di controllo.
#### <a name="linux"></a>Linux
Accesso alla directory radice è necessario installare o configurare l'agente.
Agente di dipendenza è installato nel computer Linux con uno script di shell con un file binario estrazione ADM-agente-Linux64.bin. È possibile eseguire il file con m o aggiungere le autorizzazioni per il file di esecuzione.
Utilizzare la procedura seguente per installare l'agente di dipendenza su ciascun computer Linux.
1. Assicurarsi che l'agente OMS sia installato seguendo le istruzioni al [raccogliere e gestire i dati da computer Linux. Questa operazione deve essere installato prima dell'agente di dipendenza Linux](https://technet.microsoft.com/library/mt622052.aspx).
2. Installare l'agente Linux dipendenza come radice utilizzando il comando seguente.<br>*Mostra Linux64.bin di agente ADM*.
3. Se l'agente di dipendenza non è possibile avviare, controllare i registri per informazioni dettagliate sull'errore. Su agenti Linux, la directory log è */var/opt/microsoft/dependency-agent/log*.
### <a name="uninstalling-the-dependency-agent-on-linux"></a>Disinstallazione agente dipendenza su Linux
Per disinstallare completamente l'agente dipendenza da Linux, è necessario rimuovere l'agente stesso e il proxy che viene installato automaticamente con l'agente. È possibile disinstallare entrambe con il seguente comando singolo.
rpm -e dependency-agent dependency-agent-connector
### <a name="installing-from-a-command-line"></a>L'installazione dalla riga di comando
La sezione precedente istruzioni sull'installazione l'agente di Monitor dipendenza utilizzando le opzioni predefinite. Nelle sezioni seguenti forniscono indicazioni relative all'installazione dell'agente dalla riga di comando usando le opzioni personalizzate.
#### <a name="windows"></a>Windows
Consente di eseguire l'installazione dalla riga di comando Opzioni nella tabella seguente. Per visualizzare un elenco dell'installazione di contrassegni di eseguono il programma di installazione con la /? contrassegnare come indicato di seguito.
ADM-Agent-Windows.exe /?
| Contrassegno | Descrizione |
|:--|:--|
| /S | Eseguire un'installazione automatica senza richiedere intervento utente. |
File per l'agente di Windows dipendenza vengono inseriti in *C:\Programmi\Microsoft Files\BlueStripe\Collector* per impostazione predefinita.
#### <a name="linux"></a>Linux
Utilizzare le opzioni nella tabella seguente per eseguire l'installazione. Per visualizzare un elenco di installazione contrassegni di eseguire l'installazione delle applicazioni - assistenza contrassegna come indicato di seguito.
ADM-Agent-Linux64.bin -help
| Descrizione indicatore
|:--|:--|
| -s | Eseguire un'installazione automatica senza richiedere intervento utente. |
| -controllare | Controlla autorizzazioni e il sistema operativo, ma non installare l'agente. |
File per l'agente di dipendenza vengono inseriti nella directory riportate di seguito.
| File | Posizione |
|:--|:--|
| File di base | /usr/lib/bluestripe-Collector |
| File di log | /var/OPT/Microsoft/Dependency-Agent/log |
| File di configurazione | /etc/OPT/Microsoft/Dependency-Agent/config |
| File eseguibili dei servizi | /sbin/bluestripe-Collector<br>/sbin/bluestripe-Collector-Manager |
| File di archivio binario | /var/OPT/Microsoft/Dependency-Agent/Storage |
## <a name="troubleshooting"></a>Risoluzione dei problemi
Se si verificano problemi con l'applicazione dipendenza Monitor, è possibile raccogliere le informazioni sulla risoluzione dei problemi da più componenti utilizzando le seguenti informazioni.
### <a name="windows-agents"></a>Agenti di Windows
#### <a name="microsoft-dependency-agent"></a>Dipendenza Microsoft Agent
Per generare dati sulla risoluzione dei problemi da agente di dipendenza, aprire un prompt dei comandi come amministratore ed eseguire lo script CollectBluestripeData.vbs utilizzando il comando seguente. È possibile aggiungere flag--Guida per visualizzare altre opzioni.
cd C:\Program Files\Bluestripe\Collector\scripts
cscript CollectDependencyData.vbs
Il pacchetto di dati di supporto viene salvato nella directory % profiloutente % per l'utente corrente. È possibile utilizzare--file <filename> opzione per salvare in un percorso diverso.
#### <a name="microsoft-dependency-agent-management-pack-for-mma"></a>Dipendenza Microsoft Agent Management Pack per MMA
Dipendenza agente Management Pack viene eseguita all'interno dell'agente di gestione di Microsoft. Riceve dati da agente di dipendenza e inoltra al servizio cloud ADM.
Verificare che il management pack viene scaricato eseguendo la procedura seguente.
1. Cercare un file denominato Microsoft.IntelligencePacks.ApplicationDependencyMonitor.mp in c:\Programmi\Microsoft c:\Programmi\Microsoft monitoraggio Agent\Agent\Health State\Management SP.
2. Se il file non è presenta e l'agente sia connesso a un gruppo di gestione SCOM, verificare che è stata importata in SCOM selezionando Management Pack nell'area di lavoro di amministrazione della Console.
Pannello di gestione ADM scrive gli eventi di Operations Manager registro eventi di Windows. Il log può essere [eseguita la ricerca in OMS](../log-analytics/log-analytics-log-searches.md) tramite la soluzione del Registro di sistema, dove è possibile configurare i file di log da caricare. Se sono attivati gli eventi di debug, vengono scritte nel registro eventi applicazione, con l'origine di eventi *AdmProxy*.
#### <a name="microsoft-monitoring-agent"></a>Agente di monitoraggio di Microsoft
Per raccogliere tracce diagnostiche, aprire un prompt dei comandi come amministratore ed eseguire i comandi seguenti:
cd \Program Files\Microsoft Monitoring Agent\Agent\Tools
net stop healthservice
StartTracing.cmd ERR
net start healthservice
Tracce vengono scritte in c:\Windows\Logs\OpsMgrTrace. È possibile interrompere la traccia con StopTracing.cmd.
### <a name="linux-agents"></a>Agenti Linux
#### <a name="microsoft-dependency-agent"></a>Dipendenza Microsoft Agent
Per generare dati sulla risoluzione dei problemi da agente di dipendenza, eseguire l'accesso con un account che dispone di privilegi sudo o radice ed eseguire il comando seguente. È possibile aggiungere flag--Guida per visualizzare altre opzioni.
/usr/lib/bluestripe-collector/scripts/collect-dependency-agent-data.sh
Il pacchetto di dati di supporto viene salvato in /var/opt/microsoft/dependency-agent/log (se radice) nella directory di installazione dell'agente o alla home directory dell'utente che esegue lo script (se non di primo livello). È possibile utilizzare--file <filename> opzione per salvare in un percorso diverso.
#### <a name="microsoft-dependency-agent-fluentd-plug-in-for-linux"></a>Dipendenza Microsoft Agent Fluentd plug-in per Linux
Il plug-in dipendenza agente Fluentd viene eseguita all'interno dell'agente di Linux OMS. Riceve dati da agente di dipendenza e inoltra al servizio cloud ADM.
Registri in due file seguenti.
- /var/OPT/Microsoft/omsagent/log/omsagent.log
- /var/log/Messages
#### <a name="oms-agent-for-linux"></a>Agente OMS per Linux
Una risorsa sulla risoluzione dei problemi per la connessione server Linux a OMS sono disponibili qui: [https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/Troubleshooting.md](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/Troubleshooting.md)
I registri per l'agente OMS Linux risiede in */var/acconsentire/microsoft/omsagent/log/*.
I registri di omsconfig (la configurazione dell'agente) si trovano in */var/acconsentire/microsoft/omsconfig/log/*.
Log per i componenti OMI e SCX che forniscono dati sulle prestazioni metriche risiede in */var/acconsentire/omi/log/* e */var/opt/microsoft/scx/log*.
## <a name="data-collection"></a>Raccolta di dati
È probabile che ogni agente per la trasmissione di circa 25MB al giorno, a seconda complessità le dipendenze del sistema. Dati sulle dipendenze ADM viene inviati da ogni agente ogni 15 secondi.
Agente di ADM in genere implica l'uso di 0,1% della memoria di sistema e 0,1% di CPU sistema.
## <a name="supported-operating-systems"></a>Sistemi operativi supportati
Nelle sezioni seguenti sono elencati i sistemi operativi supportati per l'agente di relazione. architetture a 32 bit non sono supportate per sistemi operativi.
### <a name="windows-server"></a>Windows Server
- Windows Server 2012 R2
- Windows Server 2012
- Windows Server 2008 R2 SP1
### <a name="windows-desktop"></a>Windows Desktop
- Nota: Windows 10 non è ancora supportato
- Windows 8.1
- Windows 8
- Windows 7
### <a name="red-hat-enterprise-linux-centos-linux-and-oracle-linux-with-rhel-kernel"></a>Red Hat Enterprise Linux, CentOS Linux e Linux Oracle (con RHEL Kernel)
- Sono supportate solo predefinito e versioni kernel Linux SMP.
- Rilascia kernel non standard, ad esempio PAE e Xen, non sono supportati per distribuzioni Linux. Ad esempio un sistema con la stringa di rilascio di "2.6.16.21-0.8-xen" non è supportato.
- Vengono personalizzati, tra cui ricompilazioni. x standard, non è supportati
- Kernel centos Plus non è supportata.
- Oracle estremamente affidabile e Kernel (UEK) viene descritto in un'altra sezione riportata di seguito.
#### <a name="red-hat-linux-7"></a>Red Hat Linux 7
| Versione del sistema operativo | Versione del kernel |
|:--|:--|
| 7.0 | 3.10.0-123 |
| 7.1 | 3.10.0-229 |
| 7.2 | 3.10.0-327 |
#### <a name="red-hat-linux-6"></a>Red Hat Linux 6
| Versione del sistema operativo | Versione del kernel |
|:--|:--|
| 6.0 | 2.6.32-71 |
| 6.1 | 2.6.32-131 |
| 6.2 | 2.6.32-220 |
| 6.3 | 2.6.32-279 |
| 6.4 | 2.6.32-358 |
| 6.5 | 2.6.32-431 |
| 6.6 | 2.6.32-504 |
| 6,7 | 2.6.32-573 |
| 6,8 | 2.6.32-642 |
#### <a name="red-hat-linux-5"></a>Red Hat Linux 5
| Versione del sistema operativo | Versione del kernel |
|:--|:--|
| 5.8 | 2.6.18-308 |
| 5,9 | 2.6.18-348 |
| 5.10 | 2.6.18-371 |
| 5.11 | 2.6.18-398<br>2.6.18-400<br>2.6.18-402<br>2.6.18-404<br>2.6.18-406<br>2.6.18-407<br>2.6.18-408<br>2.6.18-409<br>2.6.18-410<br>2.6.18-411 |
#### <a name="oracle-enterprise-linux-w-unbreakable-kernel-uek"></a>Oracle Enterprise Linux con Kernel estremamente affidabile e (UEK)
#### <a name="oracle-linux-6"></a>Oracle Linux 6
| Versione del sistema operativo | Versione del kernel
|:--|:--|
| 6.2 | 2.6.32-300 Oracle (UEK R1) |
| 6.3 | 2.6.39-200 Oracle (UEK R2) |
| 6.4 | 2.6.39-400 Oracle (UEK R2) |
| 6.5 | 2.6.39-400 Oracle (UEK i386 R2) |
| 6.6 | 2.6.39-400 Oracle (UEK i386 R2) |
#### <a name="oracle-linux-5"></a>Oracle Linux 5
| Versione del sistema operativo | Versione del kernel
|:--|:--|
| 5.8 | 2.6.32-300 Oracle (UEK R1) |
| 5,9 | 2.6.39-300 Oracle (UEK R2) |
| 5.10 | 2.6.39-400 Oracle (UEK R2) |
| 5.11 | 2.6.39-400 Oracle (UEK R2) |
#### <a name="suse-linux-enterprise-server"></a>SUSE Linux Enterprise Server
#### <a name="suse-linux-11"></a>SUSE Linux 11
| Versione del sistema operativo | Versione del kernel
|:--|:--|
| 11 | 2.6.27 |
| SP1 11 | 2.6.32 |
| SP2 11 | 3.0.13 |
| SP3 11 | 3.0.76 |
| SP4 11 | 3.0.101 |
#### <a name="suse-linux-10"></a>SUSE Linux 10
| Versione del sistema operativo | Versione del kernel
|:--|:--|
| SP4 10 | 2.6.16.60 |
## <a name="diagnostic-and-usage-data"></a>Dati di diagnostica e l'uso
Microsoft raccoglie automaticamente i dati di utilizzo e le prestazioni tramite l'utilizzo del servizio Monitor dipendenza dell'applicazione. Microsoft utilizza questi dati per fornire e migliorare la qualità, sicurezza e l'integrità del servizio Monitor dipendenza dell'applicazione. Dati includono le informazioni di configurazione del software del sistema operativo e versione e indirizzo IP, nome DNS e nome Workstation per fornire funzionalità di risoluzione dei problemi accurata ed efficiente. Raccolte non nomi, indirizzi o altre informazioni di contatto.
Per ulteriori informazioni sulla raccolta di dati e l'utilizzo, vedere l' [Informativa sulla Privacy di servizi Online Microsoft](https://go.microsoft.com/fwlink/?LinkId=512132).
## <a name="next-steps"></a>Passaggi successivi
- Informazioni su come [utilizzare applicazione dipendenza Monitor](operations-management-suite-application-dependency-monitor.md) una volta è stato distribuito e configurato.
| 72.003067 | 801 | 0.781153 | ita_Latn | 0.997319 |
e1dbf2449533ba006730721225b4acbd3e8706d5 | 170 | md | Markdown | library/README.md | ac-voyage/ac-voyage | fd2875fdf6fd4a53038d14b849d47a381d4bd9ef | [
"MIT"
] | null | null | null | library/README.md | ac-voyage/ac-voyage | fd2875fdf6fd4a53038d14b849d47a381d4bd9ef | [
"MIT"
] | 8 | 2020-07-16T20:03:55.000Z | 2022-02-26T01:35:20.000Z | library/README.md | ac-voyage/ac-voyage | fd2875fdf6fd4a53038d14b849d47a381d4bd9ef | [
"MIT"
] | null | null | null | # Library
## [Math](/library/math/)
## [Dynamic Programming](/library/dynamic-programming/)
## [Data Structure](/library/data-structure/)
## [Graph](/library/graph/)
| 17 | 55 | 0.676471 | yue_Hant | 0.137956 |
e1dd75077adfd6620ff7d370f3ae74d1643e6152 | 59 | md | Markdown | README.md | paul-serafimescu/js-interpreter | 532905eeee75ef4cc8046bf2df4d87fdf1c37048 | [
"MIT"
] | null | null | null | README.md | paul-serafimescu/js-interpreter | 532905eeee75ef4cc8046bf2df4d87fdf1c37048 | [
"MIT"
] | null | null | null | README.md | paul-serafimescu/js-interpreter | 532905eeee75ef4cc8046bf2df4d87fdf1c37048 | [
"MIT"
] | null | null | null | # Javascript Interpreter
*I got bored*
eval() lol why not
| 11.8 | 24 | 0.728814 | eng_Latn | 0.983392 |
e1ddbef11600cf00502197209d675d843b071284 | 890 | md | Markdown | docs/changelogs/CHANGELOG-wtexport.md | MTrop/WadMerge | ef1aa0080eb5b3f1ea0f84089f9dc07e09e325a3 | [
"MIT"
] | null | null | null | docs/changelogs/CHANGELOG-wtexport.md | MTrop/WadMerge | ef1aa0080eb5b3f1ea0f84089f9dc07e09e325a3 | [
"MIT"
] | null | null | null | docs/changelogs/CHANGELOG-wtexport.md | MTrop/WadMerge | ef1aa0080eb5b3f1ea0f84089f9dc07e09e325a3 | [
"MIT"
] | null | null | null | WTEXport
--------
### Changed for 1.5.0
* `Added` More output info during the extraction process.
* `Fixed` Textures/Flats in ANIMATED were added in an incorrect order if the provided texture/flat was not the start of an animation loop. (Issue #75)
* `Changed` Removed some potential sorts that could ruin things.
### Changed for 1.4.0
* `Fixed` Textures added via ANIMATED searching did not also check for SWITCHES pairings.
### Changed for 1.3.0
* `Fixed` Better support for texture WADs missing a TEXTURE1 lump (but having a TEXTURE2).
### Changed for 1.2.0
* `Fixed` A botched flat and namespace-texture ordering procedure that potentially messed up animations.
### Changed for 1.1.0
* `Changed` Removed an unnecessary sort step that butchered that Animation handling in flats and textures.
* `Changed` Added some needed help.
### Changed for 1.0.0
* Initial Release.
| 24.722222 | 150 | 0.734831 | eng_Latn | 0.993632 |
e1def321a3e30614e483e2863cf0d574648ae529 | 952 | md | Markdown | website/docs/introduction.md | pocka/ccht | aaac0e13c768a609c86cf33cce0231816338d7eb | [
"MIT"
] | 1 | 2021-01-06T23:12:00.000Z | 2021-01-06T23:12:00.000Z | website/docs/introduction.md | pocka/ccht | aaac0e13c768a609c86cf33cce0231816338d7eb | [
"MIT"
] | null | null | null | website/docs/introduction.md | pocka/ccht | aaac0e13c768a609c86cf33cce0231816338d7eb | [
"MIT"
] | null | null | null | ---
id: introduction
title: Introduction
slug: /
---
ccht is a Node.js library to crawl website and report dead links (broken links), or its wrapper CLI.
The most common usage is detecting pages or resources that return 4xx/5xx HTTP status code.
## What ccht does
At first, ccht loads specified website recursively by Node.js's HTTP module or Headless Chromium browser via [puppeteer](https://github.com/puppeteer/puppeteer).
Then, ccht reports what pages or assets (we call them _resources_) we loaded and how was them, such as HTTP status codes or network connection failure.
You can configure the report threshold.
For example, you can switch whether to show HTTP redirect or not.
## What ccht does not
Site validity check, such as HTML validation or checking HTTP headers.
You can use other tools. (e.g. [webhint](https://webhint.io/))
You can combine existing softwares with ccht by [JSON reporter](#) and [configuring report severity](#).
| 39.666667 | 161 | 0.768908 | eng_Latn | 0.994661 |
e1df6efd9090be0d1fb9e5497853972fd33fbba0 | 335 | md | Markdown | _posts/2021-07-08/2021-06-25-would-you-lick-me-20210625154930685283.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-25-would-you-lick-me-20210625154930685283.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-25-would-you-lick-me-20210625154930685283.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | ---
title: "would you lick me?"
metadate: "hide"
categories: [ Pussy ]
image: "https://preview.redd.it/98u1mpvy2d771.jpg?auto=webp&s=d8fed181e99f3ebc8ba786c95c14e980b46dbc6b"
thumb: "https://preview.redd.it/98u1mpvy2d771.jpg?width=1080&crop=smart&auto=webp&s=5ba3558a3c418c276593c9a862099367229bae9f"
visit: ""
---
would you lick me?
| 33.5 | 125 | 0.773134 | kor_Hang | 0.067859 |
e1df8f4ae52a51e6248faf578711fb206d7a46d6 | 215 | md | Markdown | fontconfig/README.md | UpperCenter/Awesome | 8cf37c7041217fdd2f0618ff6e713c0e7137e769 | [
"MIT"
] | null | null | null | fontconfig/README.md | UpperCenter/Awesome | 8cf37c7041217fdd2f0618ff6e713c0e7137e769 | [
"MIT"
] | null | null | null | fontconfig/README.md | UpperCenter/Awesome | 8cf37c7041217fdd2f0618ff6e713c0e7137e769 | [
"MIT"
] | null | null | null |
### attension beings with sentience
So I kinda got saved by elementaryOS's default fontconfig.
I don't know what I did to screw it up, but it got screwed up.
So for now, I'm just going to use their default one.
| 30.714286 | 62 | 0.744186 | eng_Latn | 0.999567 |
e1e4ec6bd6c374180709ea9b4a0e0cc7e3053a20 | 182 | md | Markdown | README.md | StefanStegmueller/matchmeifyoucan | 8fb3570543e42c280fab433fd7f3845a685ceaee | [
"MIT"
] | null | null | null | README.md | StefanStegmueller/matchmeifyoucan | 8fb3570543e42c280fab433fd7f3845a685ceaee | [
"MIT"
] | null | null | null | README.md | StefanStegmueller/matchmeifyoucan | 8fb3570543e42c280fab433fd7f3845a685ceaee | [
"MIT"
] | null | null | null | # match me if you can
A small terminal game written in haskell.

## Ubuntu build
```
stack build
```
## Run
```
stack exec tui-game-haskell-exe
```
| 10.705882 | 41 | 0.67033 | eng_Latn | 0.959266 |
e1e5025dabd0e5c226e2e5d9a8bbeb183983774a | 785 | md | Markdown | docs/guide/embedded-config.md | Arrogant95/jmqtt-docs | 093632ac5e169d7af2426d19844a9805e036b03c | [
"Apache-2.0"
] | null | null | null | docs/guide/embedded-config.md | Arrogant95/jmqtt-docs | 093632ac5e169d7af2426d19844a9805e036b03c | [
"Apache-2.0"
] | null | null | null | docs/guide/embedded-config.md | Arrogant95/jmqtt-docs | 093632ac5e169d7af2426d19844a9805e036b03c | [
"Apache-2.0"
] | null | null | null | ---
title: 嵌入式启动
---
## 优势
1. 单机,基于内存实现数据存储,运维成本低
2. 启动方式简单,无需其它数据存储组件
3. 开发成本低,在改动原有代码很少的基础上实现
## 劣势
1. 数据持久化能力弱,服务重启意味着数据丢失
2. 依靠集群poll模式功能,事件批处理能力弱,后续优化升级。
## 单机实现详解
采用事件队列,收集事件,利用集群poll事件的功能,从事件队列获取事件异步处理。客户端的遗嘱、保持、会话、过程、订阅、离线消息都保存至内存中
## memory相关配置
```json
# 只需配置集群模式clusterMode = 2,highPerformance = false,maxPollEventNum ,pollWaitInterval
# 会话与消息Store类 以及 事件处理类
# 这里需要采用集群2的方式
clusterMode = 2
# 关闭highPerformance
highPerformance = false
# 每次最多从事件表中拉取未处理的10条
maxPollEventNum = 10
# 若每次拉取不满,则等待10ms再拉取,防止一直请求事件队列
pollWaitInterval = 10
# 配置session与消息Store类
sessionStoreClass=org.jmqtt.broker.store.mem.MemSessionStore
messageStoreClass=org.jmqtt.broker.store.mem.MemMessageStore
# 配置事件处理类
clusterEventHandlerClass=org.jmqtt.broker.processor.dispatcher.mem.MemEventHandler
``` | 22.428571 | 83 | 0.810191 | yue_Hant | 0.376879 |
e1e53c9a9fe4db60813c41b8f0e151f8e39b2c87 | 1,885 | md | Markdown | treebanks/mdf_jr/mdf_jr-feat-Polarity.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 204 | 2015-01-20T16:36:39.000Z | 2022-03-28T00:49:51.000Z | treebanks/mdf_jr/mdf_jr-feat-Polarity.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 654 | 2015-01-02T17:06:29.000Z | 2022-03-31T18:23:34.000Z | treebanks/mdf_jr/mdf_jr-feat-Polarity.md | vistamou/docs | 116b9c29e4218be06bf33b158284b9c952646989 | [
"Apache-2.0"
] | 200 | 2015-01-16T22:07:02.000Z | 2022-03-25T11:35:28.000Z | ---
layout: base
title: 'Statistics of Polarity in UD_Moksha-JR'
udver: '2'
---
## Treebank Statistics: UD_Moksha-JR: Features: `Polarity`
This feature is universal.
It occurs with 1 different values: `Neg`.
74 tokens (2%) have a non-empty value of `Polarity`.
18 types (1%) occur at least once with a non-empty value of `Polarity`.
3 lemmas (0%) occur at least once with a non-empty value of `Polarity`.
The feature is used with 2 part-of-speech tags: <tt><a href="mdf_jr-pos-AUX.html">AUX</a></tt> (73; 2% instances), <tt><a href="mdf_jr-pos-INTJ.html">INTJ</a></tt> (1; 0% instances).
### `AUX`
73 <tt><a href="mdf_jr-pos-AUX.html">AUX</a></tt> tokens (69% of all `AUX` tokens) have a non-empty value of `Polarity`.
The most frequent other feature values with which `AUX` and `Polarity` co-occurred: <tt><a href="mdf_jr-feat-Valency.html">Valency</a></tt><tt>=EMPTY</tt> (73; 100%), <tt><a href="mdf_jr-feat-Tense.html">Tense</a></tt><tt>=EMPTY</tt> (54; 74%), <tt><a href="mdf_jr-feat-Mood.html">Mood</a></tt><tt>=EMPTY</tt> (53; 73%), <tt><a href="mdf_jr-feat-Number-subj.html">Number[subj]</a></tt><tt>=EMPTY</tt> (50; 68%), <tt><a href="mdf_jr-feat-Person-subj.html">Person[subj]</a></tt><tt>=EMPTY</tt> (50; 68%), <tt><a href="mdf_jr-feat-VerbType.html">VerbType</a></tt><tt>=Aux</tt> (50; 68%).
`AUX` tokens may have the following values of `Polarity`:
* `Neg` (73; 100% of non-empty `Polarity`): <em>аф, ашезь, ашень, апак, афоль, аш, изь, Афи, афолеть, ашезе</em>
* `EMPTY` (33): <em>ульсь, катк, ба, ли, ульсть, эрявсь, Улендяряль, Эрявихть, савоза, савондяряй</em>
### `INTJ`
1 <tt><a href="mdf_jr-pos-INTJ.html">INTJ</a></tt> tokens (11% of all `INTJ` tokens) have a non-empty value of `Polarity`.
`INTJ` tokens may have the following values of `Polarity`:
* `Neg` (1; 100% of non-empty `Polarity`): <em>Аф</em>
* `EMPTY` (8): <em>Вай, Вов, Ну, И, Эрь</em>
| 50.945946 | 584 | 0.658886 | yue_Hant | 0.275791 |
e1e567ca7d540bcd1d061079b96387889c216b9a | 80 | md | Markdown | risorse/stickerTelegram/README.md | gimmybruce/tansignari | f98e3096e74e77561754d75b1789e0ce25867cee | [
"CC-BY-4.0"
] | null | null | null | risorse/stickerTelegram/README.md | gimmybruce/tansignari | f98e3096e74e77561754d75b1789e0ce25867cee | [
"CC-BY-4.0"
] | null | null | null | risorse/stickerTelegram/README.md | gimmybruce/tansignari | f98e3096e74e77561754d75b1789e0ce25867cee | [
"CC-BY-4.0"
] | null | null | null | È stato creato un set di sticker Telegram <https://t.me/addstickers/tansignari>
| 40 | 79 | 0.7875 | ita_Latn | 0.881177 |
e1e5855887a132e250b9c86ac6392ed8e13a42ff | 3,558 | md | Markdown | django/django_testing.md | alysivji/notes | d0f1989743b97573166c6d31bcfcde20d2b09633 | [
"Unlicense"
] | 24 | 2017-11-19T16:02:09.000Z | 2022-02-27T05:15:00.000Z | django/django_testing.md | alysivji/reading-list | e6f171ce131a7791ca22a5c719d0b71558cf9aeb | [
"Unlicense"
] | null | null | null | django/django_testing.md | alysivji/reading-list | e6f171ce131a7791ca22a5c719d0b71558cf9aeb | [
"Unlicense"
] | 4 | 2020-01-30T17:24:45.000Z | 2021-12-05T17:05:03.000Z | # Django Testing
## Testing Basics
### Running tests
Have files named test_*.py
```console
$ python manage.py test
```
### Test Database
* Tests are not done on production databases, but on a blank database created for tests
* This database is destroyed once all tests have been executed
### Test Order Execution
1. All TestCase subclasses are run first
1. All other Django-based tests are run
1. unitteset.TestCase tests are run
### Misc
* Any initial data loaded in migrations will only be available in TestCase tests and not in TransactionTestCase tests
* tests are run with ```DEBUG=False``` obviously ;)
* [running tests in parallel](https://docs.djangoproject.com/en/1.11/ref/django-admin/#cmdoption-test-parallel)
---
## Django Test Client
* check if correct template is being rendered
* verify the correct context is being passed in
* examine status codes and url
* does not need Web server to be running
* avoids overhead of HTTP and deals with Django framework directly
```python
from django.test import Client
c = Client()
response = c.get('/endpoint/',
data={'foo': 2, 'bar': 3})
```
* test client is stateful, stores cookies are res
### Testing Responses
* test response object has additional information that is useful for test code to verify
* client
* content - body of response
* context - context instance that was used to render template
* json - body of response, parsed as JSON
* request
* status_code
* templates - templates used to generate response
* resolver_match - use this to find which view served the response
### Authentication
* test login using ```django.test.Client.login```
* After you call this method, the test client will have all the cookies and session data required to pass any login-based tests that may form part of a view.
* After logging in, can test views that are only available to logged-in users
* Done in test database so we will also need to create accounts
### Miscellaneous
* exceptions that are not visible to test client are Http404, PermissionDenied, SystemExit, and Suspicious Operation
* Django converts these to HTTP response codes
* test client is stateful, reponse returns a cookie which will be stored in client and sent with all subsequent ```get()``` and ```post()``` reponses
* ```Client.cookies``` and ```Client.session```
### Django Test Case Classes
* [SimpleTestCase](https://docs.djangoproject.com/en/1.11/topics/testing/tools/#simpletestcase)
* Use this for all Django Tests
* [TransactionTestCase](https://docs.djangoproject.com/en/1.11/topics/testing/tools/#transactiontestcase) **Test specific database transaction behavior**
* [TestCase](https://docs.djangoproject.com/en/1.11/topics/testing/tools/#testcase) **Most common way to write test**
* Use these for an y tests that make a database query
* [LiveServerTestCase](https://docs.djangoproject.com/en/1.11/topics/testing/tools/#django.test.LiveServerTestCase)
* Use this for Selenium client. Test rendered HTML and behavior of webpages
* Used for functional testing inside browser and to simulate a real user's actions
> SimpleTestCase and its subclasses (e.g. TestCase, …) rely on setUpClass() and tearDownClass() to perform some class-wide initialization (e.g. overriding settings). If you need to override those methods, don’t forget to call the super implementation
---
## Test Case Features
https://docs.djangoproject.com/en/1.11/topics/testing/tools/#test-cases-features
https://docs.djangoproject.com/en/1.11/topics/testing/advanced/
| 35.939394 | 250 | 0.748454 | eng_Latn | 0.972923 |
e1e663ed2b5dee229dc2a47c7220434687fd192d | 338 | md | Markdown | org/docs/patterns/bruce/options/elasticwidth/fr.md | TheSheebster/markdown | 9a6b082482fb4521e8938c3fa7deadd8cf1c86ec | [
"MIT"
] | 7 | 2019-06-26T14:03:06.000Z | 2021-07-11T03:48:31.000Z | org/docs/patterns/bruce/options/elasticwidth/fr.md | TheSheebster/markdown | 9a6b082482fb4521e8938c3fa7deadd8cf1c86ec | [
"MIT"
] | 76 | 2019-06-12T07:03:47.000Z | 2021-08-15T22:55:57.000Z | org/docs/patterns/bruce/options/elasticwidth/fr.md | TheSheebster/markdown | 9a6b082482fb4521e8938c3fa7deadd8cf1c86ec | [
"MIT"
] | 51 | 2019-07-02T07:39:40.000Z | 2021-11-18T17:11:20.000Z | 
> #### ###### Pour quoi faire ?
>
> Cette option est étroitement liée à l'option **hauteur** qui détermine à quelle hauteur remonte le boxer sur votre taille.
>
> La largeur de votre élastique doit être prise en compte, c'est à cela que sert l'option largeur d'élastique.
| 42.25 | 124 | 0.724852 | fra_Latn | 0.999383 |
e1e70aace92e1a45c67e61b73bdddf6204174fbb | 21,622 | md | Markdown | answers.md | Philneeves/hiring-engineers | 307e8839afe8d1a23385943d8330f83b30715082 | [
"Apache-2.0"
] | null | null | null | answers.md | Philneeves/hiring-engineers | 307e8839afe8d1a23385943d8330f83b30715082 | [
"Apache-2.0"
] | null | null | null | answers.md | Philneeves/hiring-engineers | 307e8839afe8d1a23385943d8330f83b30715082 | [
"Apache-2.0"
] | null | null | null | **Prerequisites - Setup the environment**
I decided to go with the recommendation of setting up a Vagrant Ubuntu VM via VirtualBox. It was quick to set up and I was able to easily SSH into it via Powershell.

I registered for the free Datadog trial and initially installed the Agent for my local Windows machine as a test and launched the Agent Manager on localhost:

I then installed the Agent on my Vagrant Host and from hereon I focused only on this Host.
Got it running, checked the Agent status, and everything seemed to be working as expected:

**Collecting Metrics:**
• Add tags in the Agent config file and show us a screenshot of your host and its tags on the Host Map page in Datadog
I added additional Host Tags in the datadog.yaml file and decided to go with Unified Service Tagging ‘service’, ‘env’ and ‘version’ tags as per the Datadog suggestions, to form a single point of configuration for all telemetry emitted.
Host Map showing additional Host tags:

Link to my Host Map:
https://app.datadoghq.eu/infrastructure/map?host=78171197&fillby=avg%3Acpuutilization&sizeby=avg%3Anometric&groupby=availability-zone&nameby=name&nometrichosts=false&tvMode=false&nogrouphosts=true&palette=green_to_orange&paletteflip=false&node_type=host
• Install a database on your machine (MongoDB, MySQL, or PostgreSQL):
I decided to install MongoDB onto my vagrant Ubuntu Host
Start MongoDB and confirm it's working:

Launched the Mongo shell and set up a myNewDB database and inserted 2 small ‘Collections’:

Integrate MongoDB:
Set up Admin user and Datadog as a User:

Enabled Log Collection in the datadog.yaml file:

Add config block for logs conf.yaml:

MongoDB integration installed and ‘working properly’:

However, MongoDB errors showing on metrics tab on UI:

Also 'permission denied' to mongod.log errors showing following 'mongo' check in terminal. So ran a 'mongo service' check and everything seemed fine:

So ran a full Agent check and there is an error reported under the 'Collector' section:

Issue seems to be the host:27017 connection refused.
Checked mongod.conf file and bindIp is 127.0.0.1
[image](https://user-images.githubusercontent.com/22836380/120883948-2cca5f00-c5d8-11eb-88e6-012a755640a8.png)
Updated conf.yaml host from ‘vagrant’ to 127.0.01 (my mistake originally)
This seemed to resolve the issue:

Also shows up on the UI now as 'OK':

But...still getting errors on the Host Map Metrics for MongoDB:

Plus, still getting permission denied errors on mongod.log...

Did some research and changed the permissions on the file and directory from mongodb to dd-agent which did seem to work:

This resolved errors on the UI also:

The Dashboard looking good:

**Addendum to MongodDB**
I did find that when I shut everything down and went to restart Mongodb wouldn’t start with errors regarding the permissions on the mongod.log file. My investigations found that you need to change the permissions on /mongod/mongod.log to mongdb to fire up mongodb but then you get the same error but for datadog-agent not having the same permission when you run the full checks. So I found that you need to change the permissions back to datadog-agent.
I created a simple bash script to automate this process to allow me to get up and running quicker with some user input included to control the results of each status check:

**Create a custom Agent check that submits a metric named my_metric with a random value between 0 and 1000**
First of all, I set up a test custom ‘hello world’ metric taken from Datadog guide - tested it and it worked.

**Set up custom_my_check metric:**
Created the yaml file under conf.d:

Created custom_my_check.py file, which modified the hello test metric python code to include the use of random for the generation of a random integer between 1 – 1000 and also time.sleep for 45 seconds, so the collection interval is set to every 45 seconds:

Check the check – seems to be working – integer = 627
Tried again and integer = 60
Tested the check and timed execution against a stopwatch and was bang on 45 seconds.
Restarted the Service
Appears on the host map and dashboard as expected, however, collection rate is still the default 15 seconds. This implies delaying the code execution does not change the collection rate:

Link to my_check custom metric display:
https://app.datadoghq.eu/dash/integration/custom%3Amy_check?from_ts=1622728920682&live=true&to_ts=1622732520682&tpl_var_scope=host%3Avagrant
So, after consulting the Datadog docs I removed the time delay from the python code:
<code>import random</code>
<code>try:</code>
from datadog_checks.base import AgentCheck
<code>except ImportError:</code>
from checks import AgentCheck
<code>__version__ = "1.0.0"</code>
<code>class my_check(AgentCheck):</code>
def check(self, instance):
while(True):
time.sleep(45)
self.gauge(
'my_check.gauge',
random.randint(0, 1000),
tags=['TAG_KEY:TAG_VALUE'] + self.instance.get('tags', []))
and added the min_collection_interval to the custom_my_check.yaml file:

Tested it and all good so restarted the service to include it again and report data to DD.
You can see the intervals widen to the right side of the graph below after this was implemented:

**Bonus Question Can you change the collection interval without modifying the Python check file you created?**
Add the min_collection_interval in the .yaml file. This in fact is what I did originally before I put the time delay in the python code.
**Visualising Data:**
Utilize the Datadog API to create a Timeboard that contains:
• Your custom metric scoped over your host.
• Any metric from the Integration on your Database with the anomaly function applied.
• Your custom metric with the rollup function applied to sum up all the points for the past hour into one bucket
Screenshot below showing my Timeboard (Phil's Timeboard API Test Creation 2) displaying 3 graphs:
1. My custom metric scoped over my Host showing the generated random integers between 0 - 1000.
2. A MongoDB Collection Locks per second with an Anomaly detection overlay to identify any movement out of the expected variation
3. A simple rollup display of the average my_check integer per Hour
created using a script to POST to the Datadog API using Postman (see attached script ‘phils_timeboard_api_script.json’):

And as it appears on the Dashboard:

Link to my custom Timeboard:
https://app.datadoghq.eu/dashboard/xd8-es7-597/phils-timeboard-api-test-creation-2?from_ts=1622729300712&live=true&to_ts=1622732900712
Once this is created, access the Dashboard from your Dashboard List in the UI:
**Set the Timeboard's timeframe to the past 5 minutes**
**Take a snapshot of this graph and use the @ notation to send it to yourself.**


**Bonus Question: What is the Anomaly graph displaying?**
There are 2 main features to the anomaly graph:
1. **The Anamolous Grey Band**
The overlying grey band shows the scope of the expected variation in the data. Any variation of the line above or below the grey band could indicate an issue with with this particular part of the system.
2. **The Graph**
This particular graph is showing the Intent Shared Collection locks, it was chosen for demonstration purposes of anomaly detection, as it provides some consistent variation of data for a database that is not in constant use.
What are Intent Share Collection locks?
Locking is a mechanism used to maintain concurrency in the databases. MongoDB uses multi-granularity locking in different levels and modes of locking to achieve this.
There are four different levels of locking in MongoDB: Global, Database, Collection, Document
Intent Shared?
• Intent locks are higher level locks acquired before lower level locks.
• It indicates that the lock holder will read the resource at a granular level.
• If an Intent Shared lock is applied to a database, then it means that lock holder is willing to apply a Shared lock on Collection or Document level.
**Monitoring Data**
Create a new Metric Monitor that watches the average of your custom metric (my_metric) and will alert if it’s above the following values over the past 5 minutes:
• Warning threshold of 500
• Alerting threshold of 800
• And also ensure that it will notify you if there is No Data for this query over the past 10m.
**Please configure the monitor’s message so that it will:**
1. Send you an email whenever the monitor triggers.
2. Create different messages based on whether the monitor is in an Alert, Warning, or No Data state.
3. Include the metric value that caused the monitor to trigger and host ip when the Monitor triggers an Alert state.
4. When this monitor sends you an email notification, take a screenshot of the email that it sends you.
**Alert email received:**

**Warning:**

**No Data:**

**
[phils_timeboard_API_script.txt](https://github.com/Philneeves/hiring-engineers/files/6593469/phils_timeboard_API_script.txt)
Bonus Question: Since this monitor is going to alert pretty often, you don’t want to be alerted when you are out of the office. Set up two scheduled downtimes for this monitor:**
One that silences it from 7pm to 9am daily on M-F:
(Shows UTC in the email but as per screenshot below, setting is for BST in Datadog)

And one that silences it all day on Sat-Sun:
(Again email below shows UTC but system is showing Europe/London or BST. Tried to set to Europe/Dublin but saved as IST (Indian Summer Time) for some reason)

**Collecting APM Data:**
The plan here was to demonstrate an instrumented application i.e. the ability to monitor the level of the application's performance and to diagnose errors.
Installed Flask in a virtual env, saved the provided App code as app.py, installed ddtrace and had to update PIP as didn't work at first.
Seemed to work when launched it:

However quit this as needed to add Tag Configs
Entered in tag configs:

Restart the service
Launched app again with full tag configs:
DD_SERVICE="ubuntu_host" DD_ENV="sandbox" DD_LOGS_INJECTION=true ddtrace=run flask run --port 5050
**Opened up a second terminal connection and called the URL, response is as expected ‘Entrypoint to the Application’:**

Stream of data giving informative messages about the execution of the application at run time can be seen in the other terminal:

**Called api/apm:**

Again a stream of trace data can be seen:

**Called the /api/trace as well:**

Services results can be viewed in the APM Datadog UI APM section:

Expand Services for ubuntu_host:


Link to APM Services:
https://app.datadoghq.eu/apm/service/ubuntu_host/flask.request?start=1622731004820&end=1622734604820&paused=false&env=sandbox
Results in the Traces section of the APM:

Link to APM Traces:
https://app.datadoghq.eu/apm/traces?end=1622734669861&paused=false&query=service%3Aubuntu_host%20env%3Asandbox%20operation_name%3Aflask.request&start=1622733769861&streamTraces=true
**Addendum to APM Flask App**
**Connecting the Flask App to the MongoDB**
I decided to see if I could connect the Flask App to the MongoDB so we can pull data from the database by making url calls. I created a new Flask App, App3 importing flask-pymongo and jsonify and gave additional URLs with additional functionality. I included a deliberate error to create a 500 return that also causes a 404 status so it can be tracked and fixed using Datadog for the purposes of demonstration. I also added a new database restdb to MongoDB with 'name' and 'distance' details added for a handful of Stars to create some data. I had to add the mongodb config details to the Flask file also to enable this:

**Set up automated multiple curl command Python script:**
I also decided to write a Python script that would automate a number of cURL commands to test out our REST API running in a loop to create real time simulated data. This also had deliberate commands calling non-existent urls to cause a 404 error, an example from the output can be seen below:

**Investigate using datadog APM**
Looking at datadog APM Traces (see screenshot below) and we can see number of requests per minute, latency, and note the red columns in the Errors graph. In the Spans table we can see a list of status codes from the requrests and they include a 500 error and a 404 error:

We can zoom into one of the 404 errors via the spans table:

Here the Flame Graph shows us the latency of the error and details of the source of the error in the ‘Errors’ tab:

And we can see the 500 error is being caused by the name variable not be called for the get_one_star function:

Which we can confirm in the code the name variable is not being called from the database to create the correct url to be called:

So added /<string:name> to the @app.route to fix it:

This fixes a couple of the 404 errors that use this function to retrieve data from the mongodb database
The call to /star/ is now invalid so can be removed/commented out as it will return a 404 otherwise:

The /error url does not exist so can be removed/commented out:

When we run the new app and run the auto curl commands we get the results of the url calls as expected:

And datadog displays no errors in the error graph and a healthy list of no error status codes from the spans table:

**Phil's Infrastructure and APM Dashboard:**

Link to Infrastructure and APM Dashboard:
https://app.datadoghq.eu/dashboard/mph-cb4-tum?from_ts=1622731610456&live=true&to_ts=1622735210456
**Bonus Question: What is the difference between a Service and a Resource?**
Services are the building blocks of microservice architectures. A service groups together endpoints, queries, or jobs for the purposes of building your application
Resources represent a particular domain of a customer application. They are typically an instrumented web endpoint, database query or background job.
**Final Question:**
Is there anything creative you would use Datadog for?
If I did own a Casino I think it would be useful to be able to monitor all of the various machines through a 'single pane of glass'. It could tell you a if a machine is paying out too much or too little. Does that correlate with the machine’s performance? You could get good analytics on the use of machines: which ones are used more than others. If so, why? Is it the location of the machine, the look, the feel, the programming? Are certain machines more popular at certain times of the day? It would give you a good understanding then of their profitability. You could look for anomalies in performance that might highlight an unknown hack on the machines for cheating. You could also get real time feedback on the cashflow of individual machines through to groups of machines through to the entire Casino.
| 49.591743 | 816 | 0.790352 | eng_Latn | 0.920315 |
e1e79cc4f367f87cc278433ac55c4bb95c99917c | 2,617 | md | Markdown | content/publication/nelson-2019-laboratory/index.md | timbeechey/academic-website | d50b8e07060677d94ca3d35f79ded72826819573 | [
"MIT"
] | null | null | null | content/publication/nelson-2019-laboratory/index.md | timbeechey/academic-website | d50b8e07060677d94ca3d35f79ded72826819573 | [
"MIT"
] | null | null | null | content/publication/nelson-2019-laboratory/index.md | timbeechey/academic-website | d50b8e07060677d94ca3d35f79ded72826819573 | [
"MIT"
] | null | null | null | ---
# Documentation: https://wowchemy.com/docs/managing-content/
title: 'Laboratory Simulations of Conversation Scenarios: Questionnaire Results from
Patient and Partner'
subtitle: ''
summary: ''
authors:
- Peggy Nelson
- Elizabeth Anderson
- Timothy Beechey
tags: []
categories: []
date: '2019-01-01'
lastmod: 2020-12-30T13:44:42-06:00
featured: false
draft: false
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ''
focal_point: ''
preview_only: false
# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2020-12-30T20:45:16.377442Z'
publication_types:
- '1'
abstract: 'Hearing-related questionnaires can reveal much about the daily experience of hearing aid users. Nonetheless, results may not fully reflect the lived experience for several reasons, including: users’ limited awareness of all communication challenges, limitations of memory, and the subjective nature of reporting. Multiple factors can influence results obtained from questionnaires (Nelson et al. ASA Louisville). Consideration of the perspectives of both hearing aid wearers and communication partners may better reflect the challenges of two-way everyday communication. We have developed simulations of challenging conversational scenarios so that clients and their partners can make judgments of sensory aid performance in realistic, but controlled conditions. Listeners with hearing loss and their partners use a client-oriented scale (adapted from the COSI, Dillon, 1997) to report challenging real-life listening conditions such as small group conversations, phone conversations, health reports, and media. Representative scenarios are simulated in the laboratory where clients and partners make ratings of intelligibility, quality, and preference. Results are compared to outcome measures such as the Speech, Spatial and Qualities of Hearing Scale (SSQ, Gatehouse & Noble, 2004) and Social Participation Restrictions Questionnaire (SPaRQ, Heffernan et al., 2018). Results will help refine methods for evaluating the performance of emerging technologies for hearing loss.'
publication: '*Proceedings of Meetings on Acoustics*'
url_pdf: http://asa.scitation.org/doi/abs/10.1121/2.0001245
doi: 10.1121/2.0001245
---
| 63.829268 | 1,488 | 0.789836 | eng_Latn | 0.985239 |
e1e7e43680594dacda59bade8432e3cf0fbfe46e | 214 | md | Markdown | src/components/EzLabelledItem/EzLabelledItem.preview.md | ez-alexfrazer/recipe | 3ce549de648ca341c380cb525826fdc72e8bf9ca | [
"MIT"
] | 4 | 2021-06-01T19:00:34.000Z | 2022-02-09T21:43:14.000Z | src/components/EzLabelledItem/EzLabelledItem.preview.md | ez-alexfrazer/recipe | 3ce549de648ca341c380cb525826fdc72e8bf9ca | [
"MIT"
] | 180 | 2021-06-01T15:00:03.000Z | 2022-03-30T22:04:50.000Z | src/components/EzLabelledItem/EzLabelledItem.preview.md | ez-alexfrazer/recipe | 3ce549de648ca341c380cb525826fdc72e8bf9ca | [
"MIT"
] | 3 | 2021-12-16T13:40:29.000Z | 2022-03-25T22:12:50.000Z | ```jsx
<div
style={{
'--zoom': 4,
'--shadow': 'none',
'--color': '#565a5c',
marginLeft: -25,
}}
>
<EzLabelledItem position="top" title="Labels">
<EzField />
</EzLabelledItem>
</div>
```
| 14.266667 | 48 | 0.509346 | nld_Latn | 0.089808 |
e1e85f64152dd86a8ec55070fe3137ebc52b2e1f | 1,021 | md | Markdown | README.md | rcbabahin/eyeair-sensor | 4d4327f64dbdb53cacb1ca47264ee39cc5f4ac1b | [
"MIT"
] | null | null | null | README.md | rcbabahin/eyeair-sensor | 4d4327f64dbdb53cacb1ca47264ee39cc5f4ac1b | [
"MIT"
] | null | null | null | README.md | rcbabahin/eyeair-sensor | 4d4327f64dbdb53cacb1ca47264ee39cc5f4ac1b | [
"MIT"
] | null | null | null | # eyeair-sensor
EyeAir Team, GS-LABS ZigBee contest
Умное устройство для мониторинга качества воздуха Eye Air измеряет сразу пять параметров для комплексной оценки качества воздуха в помещении: уровень СО2, концентрацию летучих органических веществ (TVOC) и взвешенных частиц (PM1.0, PM2.5, PM10.0), температуру и влажность. Eye Air интегрировано в систему Умный дом DREHOME&TV посредством протокола ZigBee, помимо этого пользователь может управлять устройством и получать данные с помощью мобильного устройства по Wi-Fi сети. Eye Air обладает световой индикацией и звуковым излучателем, сигнализирующими о превышении концентрации: СО2, TVOC, взвешенных частиц. По желанию пользователь может отключить звуковой сигнал с помощью переключателя на корпусе устройства.


| 113.444444 | 712 | 0.834476 | rus_Cyrl | 0.931037 |
e1e89cc6264750600eda40046841f141fccaf972 | 238 | md | Markdown | _project/bianca-walters-real-wedding-the-groomswear.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/bianca-walters-real-wedding-the-groomswear.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | _project/bianca-walters-real-wedding-the-groomswear.md | rumnamanya/rumnamanya.github.io | 2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9 | [
"MIT"
] | null | null | null | ---
layout: project_single
title: "Bianca & Walter's Real Wedding - the-groomswear"
slug: "bianca-walters-real-wedding-the-groomswear"
parent: "groomsmen-suspenders"
---
Bianca & Walter's Real Wedding - The Groomswear #hitchedrealwedding | 34 | 67 | 0.768908 | eng_Latn | 0.507436 |
e1e9557821fddf3fc99d027b4f6f8476e77e3d5a | 669 | md | Markdown | README.md | bensoer/skipperify | b2071dad4aeb5a8e9564b8cb36455da74e054634 | [
"MIT"
] | null | null | null | README.md | bensoer/skipperify | b2071dad4aeb5a8e9564b8cb36455da74e054634 | [
"MIT"
] | null | null | null | README.md | bensoer/skipperify | b2071dad4aeb5a8e9564b8cb36455da74e054634 | [
"MIT"
] | null | null | null | # skipperify
A demo setup using Skipper and RESTify
# Setup
1. `npm install` to install all dependencies
2. in the `client.js` file set the `pathToFile` variable to the directory of the file you would like to upload
3. in the `client.js` file set the `baseURL` variable to the URL the server is located. If this is setup on the same
machine this will be your own IP
4. in console, cd to the project, then call `node index.js` to start the server
5. in another console, cd to the project, then call 'node client.js`
6. Your file will now be uploaded and placed into the `./.tmp/upload` folder. This folder will be created in the project
root if it doesn't exist already | 55.75 | 120 | 0.759342 | eng_Latn | 0.999476 |
e1ea8d8bdf0672408c8046624503d6981b8ba940 | 8,099 | md | Markdown | _posts/2014-08-09-sas.md | xxxw567/xxxw567.github.io | c2e98e203e0aacf5681f45eb59f80b13f9edefc4 | [
"MIT"
] | null | null | null | _posts/2014-08-09-sas.md | xxxw567/xxxw567.github.io | c2e98e203e0aacf5681f45eb59f80b13f9edefc4 | [
"MIT"
] | null | null | null | _posts/2014-08-09-sas.md | xxxw567/xxxw567.github.io | c2e98e203e0aacf5681f45eb59f80b13f9edefc4 | [
"MIT"
] | null | null | null | ---
title: 'SAS生成协方差矩阵'
date: 2014-08-09
permalink: /posts/2014/08/
tags:
- SAS
---
## 2014-08-09 SAS simulation 实验
前几日与cousera上教SEM的教授讨论 "if a matrix contains very similar , but rather lowcorrelations, if we use 1 factor CFA, will the chi-square be large orsmall?"
激发了我进一步探讨这个话题的兴趣。
因此,我用计算机做了一些实验。
### 实验前提
1. 为验证卡方大小究竟是不是和item之间的相关系数大小有关,不妨假设所有的相关系数均服从正态分布,X~N(μ,σ). 当然,由于是相关系数,所以取值范围为[0,1]
2. 因此我用电脑程序(SAS 9.2)产生25组矩阵。分别将平均数μ 从0.1变到0.9,以0.2为组距,同时将标准差从0.01变到0.09,以0.02为组距。
### 实验假设
1. 方差不变的情况下,相关系数平均数增大, 卡方值究竟如何变化?
* 如果发现卡方增加,说明item之间的相关系数增加了单因素CFA的拟合度,反之,减少了拟合度。
* 其他情况则说明,拟合度和相关系数大小无关
2. 在相关系数平均数不变的情况下,方差逐渐增加,卡方值究竟如何变化?
* 如果发现卡方增加,说明item之间的相关系数越接近,模型拟合越好,反之则不好。
* 其他情况则说明,item之间的相关系数接近程度和模型拟合无关。
### 结果
我用Lisrel 运算了25次单因子CFA。结果如下:
[结果详见此处](https://user.qzone.qq.com/312898321)
### 结论
1. 单看每一行,即方差不变的情况下,相关系数平均数增大的情况。发现并无规律。因此,结果支持拟合度和相关系数大小无关。
2. 单看每一列,即方差不变的情况下,相关系数平均数增大。发现卡方值也随之增加。说明item之间的相关系数越接近,模型拟合越好。
其他问题
1.我们发现每一行的卡方值变化(从左到右)虽然没有一个一般趋势,但是都有先增大后减少的趋势,峰值分别在μ=0.9,σ=0.03;μ=0.7,σ=0.05;μ=0.5,σ=0.07;μ=0.3,σ=0.09 附近。
具体原因还有待继续探讨。
### 附件:
1. SAS程序
```SAS
/*方差协方差矩阵产生器……SAS9.2
N= item 个数
model=随机数模型
mu=均值
theta=标准差
*/
%macrosim_co(n=17,model=normal,mu=0.9,theta=0.02,out=9_02);
Data x;
array x[&n];
format x1-x&n 3.2;
do j=1 to &n;
do i=1 toj;
x[i]=max(0,min(1,rand("&model",&mu,&theta)));
end;
i=1;
output;
end;
drop i j;
run;
data x;
set x;
array x[&n];
do i=1 to &n;
if _n_=i then x[i]=1;
end;
drop i;
run;
options missing=' ';
data _null_;
set x;
file "D:\SAStest\result_&out .txt " LINESIZE=500;
put x1-x17 ;
run;
%mend;
%sim_co(n=15,model=normal,mu=0.9,theta=0.01,out=9_01);
%sim_co(n=15,model=normal,mu=0.9,theta=0.03,out=9_03);
%sim_co(n=15,model=normal,mu=0.9,theta=0.05,out=9_05);
%sim_co(n=15,model=normal,mu=0.9,theta=0.07,out=9_07);
%sim_co(n=15,model=normal,mu=0.9,theta=0.09,out=9_09);
%sim_co(n=15,model=normal,mu=0.7,theta=0.01,out=7_01);
%sim_co(n=15,model=normal,mu=0.7,theta=0.03,out=7_03);
%sim_co(n=15,model=normal,mu=0.7,theta=0.05,out=7_05);
%sim_co(n=15,model=normal,mu=0.7,theta=0.07,out=7_07);
%sim_co(n=15,model=normal,mu=0.7,theta=0.09,out=7_09);
%sim_co(n=15,model=normal,mu=0.5,theta=0.01,out=5_01);
%sim_co(n=15,model=normal,mu=0.5,theta=0.03,out=5_03);
%sim_co(n=15,model=normal,mu=0.5,theta=0.05,out=5_05);
%sim_co(n=15,model=normal,mu=0.5,theta=0.07,out=5_07);
%sim_co(n=15,model=normal,mu=0.5,theta=0.09,out=5_09);
%sim_co(n=15,model=normal,mu=0.3,theta=0.01,out=3_01);
%sim_co(n=15,model=normal,mu=0.3,theta=0.03,out=3_03);
%sim_co(n=15,model=normal,mu=0.3,theta=0.05,out=3_05);
%sim_co(n=15,model=normal,mu=0.3,theta=0.07,out=3_07);
%sim_co(n=15,model=normal,mu=0.3,theta=0.09,out=3_09);
%sim_co(n=15,model=normal,mu=0.1,theta=0.01,out=1_01);
%sim_co(n=15,model=normal,mu=0.1,theta=0.03,out=1_03);
%sim_co(n=15,model=normal,mu=0.1,theta=0.05,out=1_05);
%sim_co(n=15,model=normal,mu=0.1,theta=0.07,out=1_07);
%sim_co(n=15,model=normal,mu=0.1,theta=0.09,out=1_09);
```
1. 几个例子
例1.
平均数为0.7,方差为0.03的协方差矩阵
```sas
1.0
.65 1.0
.71 .66 1.0
.73 .72 .69 1.0
.67 .68 .74 .70 1.0
.69 .67 .73 .71 .67 1.0
.66 .72 .68 .74 .74 .70 1.0
.71 .72 .70 .72 .71 .69 .69 1.0
.67 .69 .65 .67 .69 .68 .71 .73 1.0
.68 .71 .70 .78 .70 .68 .70 .70 .71 1.0
.74 .67 .69 .65 .73 .71 .72 .70 .73 .71 1.0
.73 .77 .72 .71 .67 .69 .69 .65 .70 .71 .691.0
.72 .71 .64 .71 .69 .68 .72 .77 .72 .68 .65 .691.0
.74 .73 .70 .71 .68 .74 .69 .71 .69 .63 .69 .77.69 1.0
.73 .65 .73 .69 .68 .69 .70 .68 .73 .71 .69 .68.64 .70 1.0
```
例2.
平均数为0.3,方差为0.01的协方差矩阵
```sas
1.0
.32 1.0
.29 .31 1.0
.28 .30 .29 1.0
.31 .29 .31 .30 1.0
.29 .32 .29 .29 .31 1.0
.30 .30 .29 .28 .30 .30 1.0
.32 .31 .31 .31 .29 .30 .29 1.0
.31 .29 .30 .29 .30 .32 .30 .31 1.0
.31 .30 .30 .31 .29 .28 .30 .31 .30 1.0
.29 .32 .31 .31 .29 .29 .31 .31 .31 .31 1.0
.32 .29 .30 .31 .31 .30 .30 .31 .29 .29 .281.0
.30 .31 .29 .31 .30 .30 .31 .30 .27 .32 .32 .311.0
.30 .31 .29 .30 .30 .30 .31 .31 .29 .30 .30 .30.31 1.0
.29 .31 .30 .29 .28 .29 .29 .31 .32 .29 .29 .31.29 .30 1.0
```
##2015-01-01 之前写过的SAScode,供学习用。
```sas
/****************************************************
Nemo's auto-cleanning report
描述:这个宏目的在于根据宏的变量范围筛选出超出范围的值,并制作报表
使用说明:
data 为目标数据集
varlist 为变量列表数据
Low 为变量最小值
High 为变量最大值
生成:报表report。其结构和目标数据集类似,仅仅保留包含至少一个异常值的observation。
注: 目标数据集中表示ID的变量名必须为 ID
******************************************************/
%macro VarExist(ds,var);
%local rc dsid result;
%let dsid=%sysfunc(open(&ds));
%if %sysfunc(varnum(&dsid,&var)) > 0 %then %do;
%let result=1;
%put NOTE: Variable &var exists in &ds;
%end;
%else %do;
%let result=0;
%put ERROR: Variable &var not exists in &ds;
%end;
%let rc=%sysfunc(close(&dsid));
&result
%mend VarExist;
%macro dataclean(data=,varlist=,varname=,low=,high=);
data _null_;
set &varlist end=eof;
i+1;
ii=left(put(i,3.));
call symput('var'||ii,compress(left(&varname)));
call symput('low'||ii,compress(left(&low)));
call symput('high'||ii,compress(Left(&high)));
if eof then call symput('varcnt',ii);
run;
%do i=1 %to &varcnt;
%if %eval(%VarExist(ds=&data,var=&&var&i)) %then
%put WARNING- The range of variable &&var&i is &&low&i-&&high&i;
%else %abort;
%end;
%put NOTE: Total number of variables is &varcnt;
data report (keep=id
%do i=1 %to &varcnt;
&&var&i
%end;
);
set &data;
%do i=1 %to &varcnt;
&&var&i=&&var&i*1;
if (&&var&i>=&&low&i & &&var&i<=&&high&i) then &&var&i=.;
%end;
if nmiss(of &var1--&&var&varcnt) > 0 then delete;
run;
%mend dataclean;
```
```sas
/*************************************************
Nemo's reverse coding program
描述:这个宏是为了方便地实现reverse coding 而设置。
典型的情况是:我们有一个问题的回答选项为“非常高兴”"非常不高兴",共五分变量,分别设置为1-5.
但是我们需要将其值翻转,将5变成1,4变成2。。。因此这个程序在于方便得实现这一目标。
使用方法:data 为目标变量所在的数据集名称,
var 为目标变量名称,以一个空格隔开。 例如,我希望改变变量aa 和 xx的值,需要输入 var=aa xx
lev_l 为目标变量的最小值,如果目标变量为多个,各自的最小值以空格隔开。
lev_l 为目标变量的最大值,如果目标变量为多个,各自的最大值以空格隔开。
程序特点:1.自动查找目标变量是否存在给定的数据集中,如未找到则程序会中止。
2.改程序只会对所给定范围内值进行reverse coding,不在该范围内的值会自动忽略。
3.程序会对个变量生成两个note,第一个为查找变量是否在数据集中结果,第二个为描述该变量的基本信息。例如:
NOTE: Var x exists in xx
NOTE:reverse coding variable x with level 5 (1-4)
使用举例:reverse coding work.xx 中 x 和 a 两个变量。
两个变量各自的取值范围为1-4 和1-4。 代码为 %reverse(data=aa,var=x a,lev_l=1 1,Lev_h=4 4);
******************************************************/
%macro VarExist(ds,var);
%local rc dsid result;
%let dsid=%sysfunc(open(&ds));
%if %sysfunc(varnum(&dsid,&var)) > 0 %then %do;
%let result=1;
%put NOTE: Var &var exists in &ds;
%end;
%else %do;
%let result=0;
%put NOTE: Var &var not exists in &ds;
%end;
%let rc=%sysfunc(close(&dsid));
&result
%mend VarExist;
%macro reverse(data=,var=x a,lev_l=,Lev_h=,level=);
%local dsid rc ;
%let num=%eval(%length(&var)-%length(%sysfunc(compress(&var)))+1) ;
%let dsid = %qsysfunc(open('&data'));
%do i=1 %to #
%let %str(namev&i)=%qscan(&var,%eval(&i));
%let %str(lower&i)=%qscan(&lev_l,%eval(&i));
%let %str(higher&i)=%qscan(&lev_h,%eval(&i));
%let %str(namel&i)=&&lower&i+&&higher&i;
%if %eval(%VarExist(ds=&data,var=&&namev&i)) %then
%put NOTE:reverse coding variable &&namev&i with level %eval(&&namel&i) range (&&lower&i-&&higher&i) ;
%else %abort;
%end;
data &data;
set &data;
%do i=1 %to #
if &&namev&i<=&&higher&i & &&namev&i>=&&lower&i then &&namev&i=%eval(&&namel&i)-&&namev&i;
%end;
run;
%mend reverse;
```
| 19.329356 | 151 | 0.579084 | yue_Hant | 0.14232 |
e1eacaf2168cb5fd8c207de28057710e3a3699c8 | 1,261 | md | Markdown | aspnet/web-forms/videos/building-35-applications/intro-to-visual-web-developer.md | terrajobst/AspNetDocs.pl-pl | 0fa8689494d61eeca5f7ddc52d218a22fe5b979f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-forms/videos/building-35-applications/intro-to-visual-web-developer.md | terrajobst/AspNetDocs.pl-pl | 0fa8689494d61eeca5f7ddc52d218a22fe5b979f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-forms/videos/building-35-applications/intro-to-visual-web-developer.md | terrajobst/AspNetDocs.pl-pl | 0fa8689494d61eeca5f7ddc52d218a22fe5b979f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: web-forms/videos/building-35-applications/intro-to-visual-web-developer
title: Wprowadzenie do Visual Web Developer | Microsoft Docs
author: JoeStagner
description: Microsoft Visual Web Developer to bezpłatna wersja programu Visual Studio do tworzenia aplikacji ASP.NET. W tym filmie wideo pokazano, jak uzyskać i zainstalować go oraz t...
ms.author: riande
ms.date: 04/09/2009
ms.assetid: 5ff5c2eb-825b-4d70-9e19-f1fd64310752
msc.legacyurl: /web-forms/videos/building-35-applications/intro-to-visual-web-developer
msc.type: video
ms.openlocfilehash: b6825a6984cf62dd60714e0f235abc2694c1978c
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/06/2020
ms.locfileid: "78640113"
---
# <a name="intro-to-visual-web-developer"></a>Wprowadzenie do programu Visual Web Developer
Jan [Stagner](https://github.com/JoeStagner)
Microsoft Visual Web Developer to bezpłatna wersja programu Visual Studio do tworzenia aplikacji ASP.NET. W tym filmie wideo pokazano, jak uzyskać i zainstalować go oraz zapoznać się z ogólnym przewodnikiem środowiska IDE i jego funkcji.
[▶Obejrzyj wideo (39 minut)](https://channel9.msdn.com/Blogs/ASP-NET-Site-Videos/intro-to-visual-web-developer)
| 50.44 | 237 | 0.812054 | pol_Latn | 0.916059 |
e1eb0ef673ccaa09ee30a34b9c1938b2d42de76e | 159 | md | Markdown | examples/icosahedron/README.md | Ritkuli/sterna | 035ddf0a8856d0570ccc0098efd40cd301409544 | [
"MIT"
] | 1 | 2021-09-03T05:50:46.000Z | 2021-09-03T05:50:46.000Z | examples/icosahedron/README.md | Ritkuli/rnapoly | 035ddf0a8856d0570ccc0098efd40cd301409544 | [
"MIT"
] | null | null | null | examples/icosahedron/README.md | Ritkuli/rnapoly | 035ddf0a8856d0570ccc0098efd40cd301409544 | [
"MIT"
] | 1 | 2019-05-13T18:51:52.000Z | 2019-05-13T18:51:52.000Z | #Generate primary structure
../../snacseq.py icosahedron.snac -t 2
#Generate OxDNA files:
../../snac2ox.py -rg icosahedron_seq.snac -o simulation/icosahedron
| 26.5 | 67 | 0.754717 | eng_Latn | 0.271528 |
e1eb9e898d3ab5e768a3d21291bb6a597e35f5d8 | 768 | md | Markdown | README.md | poetimp/powervm_graphite | 17d6430f7845b6a45e6b6ba600736b2a8f69962e | [
"MIT"
] | 1 | 2018-10-15T14:53:40.000Z | 2018-10-15T14:53:40.000Z | README.md | poetimp/powervm_graphite | 17d6430f7845b6a45e6b6ba600736b2a8f69962e | [
"MIT"
] | null | null | null | README.md | poetimp/powervm_graphite | 17d6430f7845b6a45e6b6ba600736b2a8f69962e | [
"MIT"
] | 1 | 2019-06-12T09:42:57.000Z | 2019-06-12T09:42:57.000Z | # powervm_graphite
Collectors for PowerVm and AIX for Graphite for viewing in Grafana
This repo will contain a number of dependent and interdependent Perl scripts that will collect data from the PowerVm and AIX environment and enter the data into a Graphite database for subsequent viewing by Grafana.
It is assumed that you already have Carbon/Graphite/Whisper already setup to receive data. Google "carbon graphite whisper install" for a lot of resources on how to get it installed and configured. In many Linux distributions it will be in the main repositories.
As a side note, this repo will also contain some good examples of using the IBM HMC REST API for retrieving LPAR information
Updates, questions and suggested improvements are welcomed and encouraged
| 69.818182 | 262 | 0.821615 | eng_Latn | 0.998931 |
e1ebbf9778a82318bf25225d2630303bbfa7db2c | 124 | md | Markdown | README.md | erblast/vornamen | 611c686b4b25041ef1fc750210710d9902faa4bf | [
"CC0-1.0"
] | null | null | null | README.md | erblast/vornamen | 611c686b4b25041ef1fc750210710d9902faa4bf | [
"CC0-1.0"
] | null | null | null | README.md | erblast/vornamen | 611c686b4b25041ef1fc750210710d9902faa4bf | [
"CC0-1.0"
] | null | null | null | # vornamen
Search all forenames registered in Cologne/Germany by gender and letter
https://erblast.shinyapps.io/vornamen/
| 20.666667 | 71 | 0.806452 | eng_Latn | 0.856998 |
e1ec23639d10775fc98d73ed84e475c8e7ca83c9 | 3,252 | md | Markdown | README.md | ghivert/fireblog | 2cd6fb32732c89ddc4efa72243b5a84dbbeba796 | [
"MIT"
] | 45 | 2018-04-22T14:58:24.000Z | 2021-05-28T02:02:54.000Z | README.md | ghivert/fireblog | 2cd6fb32732c89ddc4efa72243b5a84dbbeba796 | [
"MIT"
] | null | null | null | README.md | ghivert/fireblog | 2cd6fb32732c89ddc4efa72243b5a84dbbeba796 | [
"MIT"
] | 6 | 2019-10-27T12:42:33.000Z | 2021-01-19T17:31:59.000Z | # Fireblog
The world is full of words. Let's add more. Blogging is essential. Having a personal blog is required. Here is one. Fireblog is back, and it's fully up to date! It now uses elm 0.19 and Server-Side Rendering!
## Context
As a complement of Medium (where I'm posting everything in English), I wanted to get a fully working blog in French. Built with elm and Firebase, all of this started as an experimentation to get a valid SPA working with elm and Firebase. With time going through, I thought it would be so cool to let everyone enjoy this, and take inspiration if they want, because it's not always easy to find an example of an SPA in elm in « production ».
The goal is to provide an easy-way to deploy the application on Firebase, with little or no effort at all, just like WordPress do -- but easier, and only focused on blogging, not all noise around. The focus is put on accessibility, rich web content, single-page application and quality blogging.
## Installation
Creates an account on Firebase, and creates a new project. You will be able to access the integration with JavaScript. Just get something like this:
```javascript
const config = {
apiKey: 'api-key',
authDomain: 'project-url',
databaseURL: 'database-url',
projectId: 'project-name',
storageBucket: 'storage-bucket-url',
messagingSenderId: 'one-random-number'
}
```
When you found it, paste it on `config.js`, like this.
```javascript
// src/config.js
import firebase from 'firebase/app'
const config = {
apiKey: 'api-key',
authDomain: 'project-url',
databaseURL: 'database-url',
projectId: 'project-name',
storageBucket: 'storage-bucket-url',
messagingSenderId: 'one-random-number'
}
firebase.initializeApp(config)
```
Don't be afraid to share it: it will be on all pages of your site. Anyone who wants it can get it really easily.
Enable email authentication, and create your account to authenticate. Finally, add the uid of the user to the Firebase database rules, and you're good to go.
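For reference, here is a minimal sketch of what those database rules could look like (the exact structure and the `YOUR_UID` placeholder are assumptions; adapt them to your own database layout):
```json
{
  "rules": {
    ".read": true,
    ".write": "auth != null && auth.uid === 'YOUR_UID'"
  }
}
```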
Configuration is done!
## Deployment
You'll need `yarn`. Please, do not use `npm`. You can easily install it with `brew` on macOS, or `npm install -g yarn`.
You'll need the Firebase CLI. It's on `npm`. Install it with `npm install -g firebase-tools` or `yarn global add firebase-tools`. Next, use `firebase login` and follow the steps to get the `firebase` command working.
```bash
# First login to Firebase.
firebase login
# Install the project, build it and deploy it!
yarn
yarn build
firebase deploy
```
## Customization
All styling is done in SCSS and resides mostly in `neptune`. Feel free to modify anything to get your favorite styling.
The elm code producing HTML resides only in `View`. It's really easy to change the content of the views as you can avoid modifying types and logic.
## I like this project, can I use it and contribute?
Contributions are so good! I would be glad to accept pull requests to improve it and let even more people use it. Of course, you can also use it without contributing! After all, it's free software; you're free to use it as you want.
There is a contributing guide and a code of conduct; please read them to get an idea of how to contribute if you want, and be friendly with everyone!
| 45.166667 | 441 | 0.75 | eng_Latn | 0.998631 |
e1ed0793a29aa4048259a7cd5cef55205f92276b | 7,960 | md | Markdown | _posts/2020-11-18-Decision-Tree.md | Saltfarmer/blog | acbbb265cd4fb885b2719a3e51cb3b9556dc79bd | [
"BSD-3-Clause",
"MIT"
] | null | null | null | _posts/2020-11-18-Decision-Tree.md | Saltfarmer/blog | acbbb265cd4fb885b2719a3e51cb3b9556dc79bd | [
"BSD-3-Clause",
"MIT"
] | null | null | null | _posts/2020-11-18-Decision-Tree.md | Saltfarmer/blog | acbbb265cd4fb885b2719a3e51cb3b9556dc79bd | [
"BSD-3-Clause",
"MIT"
] | 2 | 2017-10-22T10:19:40.000Z | 2017-10-25T06:33:52.000Z | ---
title: "Decision Tree"
header :
image: /assets/images/sklearn_head.jpg
comments : true
share : true
categories:
- Machine Learning
tags:
- Machine Learning
- Decision Tree
- Classification
- Sklearn
---
Decision Trees are a very popular machine learning algorithm. They are popular for a variety of reasons, with their interpretability probably being their most important advantage. They can be trained very fast and are easy to understand, which opens their possibilities to frontiers far beyond scientific walls. Nowadays, Decision Trees are very popular in business environments, and their usage is also expanding to civil areas, where some applications are raising big concerns.
# Classification and Regression Tree
Classification and Regression Trees (CART) is a term introduced by Leo Breiman to refer to Decision Tree algorithms that can be used for classification or regression predictive modeling problems. The representation of the CART model is a binary tree. This is the same binary tree from algorithms and data structures, nothing too fancy (each node can have zero, one or two child nodes).
Decision Trees are composed of nodes, branches and leaves. Each **node** represents an attribute (or feature), each **branch** represents a rule (or decision), and each **leaf** represents an outcome. The **depth** of a Tree is defined by the number of levels, not including the root node.

Decision Trees apply a top-down approach to data, so that given a data set, they try to group and label observations that are similar between them, and look for the best rules that split the observations that are dissimilar between them until they reach a certain degree of similarity.
The splitting can be **binary** (which splits each node into *at most* two sub-groups, and tries to find the optimal partitioning), or **multiway** (which splits each node into multiple sub-groups, using as many partitions as there are distinct values).
The split with the best cost (lowest cost because we minimize cost) is selected. All input variables and all possible split points are evaluated and chosen in a greedy manner based on the cost function.
- **Regression**: The cost function that is minimized to choose split points is the sum squared error across all training samples that fall within the rectangle.
- **Classification**: The Gini cost function is used, which provides an indication of how pure the nodes are, where node purity refers to how mixed the training data assigned to each node is.
# Gini Impurity
In the case of **Classification Trees**, the CART algorithm uses a metric called Gini Impurity to create decision points for classification tasks. Gini Impurity is a measurement of the likelihood of an incorrect classification of a new instance of a random variable, if that new instance were randomly classified according to the distribution of class labels from the data set. Gini Impurity gives an idea of how good a split is by how mixed the classes are in the two groups created by the split. A perfect separation results in a Gini score of 0, whereas the worst case split that results in 50/50 classes in each group results in a Gini score of 0.5 (for a 2 class problem).

On the left-hand side, a high Gini Impurity value leads to a poor splitting performance. On the right-hand side, a low Gini Impurity value performs a nearly perfect splitting.
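To make the metric concrete, here is a small Python sketch (illustrative, not from any particular library):
```python
from collections import Counter
def gini(labels):
    """Gini impurity of one group: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())
def gini_split(left, right):
    """Size-weighted Gini impurity of a binary split (lower is better)."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
print(gini_split(["a", "a"], ["b", "b"]))  # 0.0 -> perfect separation
print(gini_split(["a", "b"], ["a", "b"]))  # 0.5 -> worst case for 2 classes
```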
# Least Squared Deviation
In the case of **Regression Trees**, the CART algorithm looks for splits that minimize the Least Square Deviation (LSD), choosing the partitions that minimize the result over all possible options. The LSD (sometimes referred to as "variance reduction") metric minimizes the sum of the squared distances (or deviations) between the observed values and the predicted values. The difference between the predicted and observed values is called the "residual", which means that LSD chooses the parameter estimates so that the sum of the squared residuals is minimized.
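A small sketch of that cost function, assuming plain Python lists of target values (the numbers are toy data for illustration):
```python
def lsd_cost(y_left, y_right):
    """Sum of squared residuals around each side's mean (lower is better)."""
    def sse(y):
        mean = sum(y) / len(y)
        return sum((v - mean) ** 2 for v in y)
    return sse(y_left) + sse(y_right)
print(lsd_cost([1.0, 1.2, 0.9], [5.1, 4.8, 5.3]))  # well-separated split -> low cost
print(lsd_cost([1.0, 5.1, 0.9], [1.2, 4.8, 5.3]))  # mixed split -> much higher cost
```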
# Pruning
As the number of splits in DTs increases, their complexity rises. In general, simpler DTs are preferred over super complex ones, since they are easier to understand and they are less likely to fall into overfitting. Overfitting happens when the model learns the detail and noise (irrelevant information or randomness in a dataset) in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model.

While the black line fits the data well, the green line is overfitting. Under this condition, your model works perfectly well with the data you provide upfront, but when you expose that same model to new data, it breaks down. It’s unable to repeat its highly detailed performance.
**Pruning** is a technique used to deal with overfitting that reduces the size of DTs by removing sections of the Tree that provide little predictive or classification power. The goal of this procedure is to reduce complexity and gain better accuracy by reducing the effects of overfitting and removing sections of the DT that may be based on noisy or erroneous data. There are two different strategies to perform pruning on DTs (see the sketch after the list):
- **Pre-prune:** When you stop growing DT branches when information becomes unreliable.
- **Post-prune:** When you take a fully grown DT and then remove leaf nodes only if it results in a better model performance. This way, you stop removing nodes when no further improvements can be made.
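Both strategies map directly onto scikit-learn parameters; here is a minimal sketch (the parameter values are illustrative, not recommendations):
```python
from sklearn.tree import DecisionTreeClassifier
# Pre-pruning: stop growing early with depth and leaf-size limits
pre_pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5)
# Post-pruning: grow the full tree, then remove weak branches via cost-complexity pruning
post_pruned = DecisionTreeClassifier(ccp_alpha=0.01)
```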
# Ensemble Methods
**Ensemble methods** combine several DTs to improve the performance of single DTs, and are a great resource to get over the problems already described. The idea is to train multiple models using the same learning algorithm to achieve superior results. The 2 most common techniques to perform ensemble Decision Trees are **Bagging** and **Boosting**.
## Bagging
**Bagging** (or Bootstrap Aggregation) is used when the goal is to reduce the variance of a DT. **Variance** relates to the fact that DTs can be quite unstable because small variations in the data might result in a completely different Tree being generated. So, the idea of Bagging is to solve this issue by creating, **in parallel**, random subsets of data (from the training data), where any observation has the **same probability** of appearing in a new subset of the data.
Next, each collection of subset data is used to train DTs, resulting in an ensemble of different DTs. Finally, an average of all predictions of those different DTs is used, which produces a more robust performance than single DTs. **Random Forest** is an extension over Bagging, which takes one extra step: in addition to taking the random subset of data, it also takes a random selection of features rather than using all features to grow DTs.
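As a sketch of how this looks in scikit-learn (the dataset and settings are just for illustration):
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
X, y = load_iris(return_X_y=True)
# Bagging: many trees fit in parallel on bootstrap samples, predictions averaged
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
# Random Forest: bagging plus a random subset of features at each split
forest = RandomForestClassifier(n_estimators=100, random_state=0)
for name, model in [("bagging", bagging), ("random forest", forest)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```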
## Boosting
**Boosting** is another technique that creates a collection of predictors to reduce the variance of a DT, but with a different approach. It uses a **sequential method** where it fits consecutive DTs, and at every step, tries to reduce the errors from the prior Tree. With Boosting techniques, each classifier is trained on data, taking into account the previous classifier's success. After each training step, the weights are redistributed based on the previous performance.
This way, **misclassified data increases its weights** to emphasize the most difficult cases, so that subsequent DTs will focus on them during their training stage and improve their accuracy. Unlike Bagging, in Boosting the observations are weighted and therefore some of them will take part in the new subsets of data more often. As a result of this process, the combination of the whole sets improves the performance of DTs. | 106.133333 | 674 | 0.793593 | eng_Latn | 0.999729 |
e1ed6aef5f37fd0420f96d3025d5a3128260c7b2 | 1,194 | md | Markdown | README.md | sscotth/cloudron-znc | 21aa84ed0f1673cc91b00c2642bb3d9755117ba0 | [
"MIT"
] | null | null | null | README.md | sscotth/cloudron-znc | 21aa84ed0f1673cc91b00c2642bb3d9755117ba0 | [
"MIT"
] | null | null | null | README.md | sscotth/cloudron-znc | 21aa84ed0f1673cc91b00c2642bb3d9755117ba0 | [
"MIT"
] | null | null | null | # Cloudron-ZNC
ZNC is an IRC network bouncer or proxy service that remains persistently connected to your preferred IRC networks and channels.
## Installation
### CLI
`cloudron install --appstore-id io.sscotth.cloudronznc@0.1.5`
### Web
[https://__yourcloudronserver__/#/appstore/io.sscotth.cloudronznc?version=0.1.5](https://__yourcloudronserver__/#/appstore/io.sscotth.cloudronznc?version=0.1.5)
During the install, you will be asked for a "ZNC Port" number. This will be the port that you will use to connect with your IRC client.
## Usage
ZNC is **not** yet integrated with the Cloudron user management.
The app comes with a pre-setup admin account with the following credentials:
* Username: `admin`
* Password: `changeme`
**Please change the admin password on first login!**
To log in from IRC, you need to use SSL and port 6697 (or the port selected during installation).
The default configuration will log in to freenode and join the #cloudron channel to make sure everything works. Upload your own znc.conf with `cloudron push znc.conf /app/data/configs/znc.conf` or make changes in the web interface.
## TODO:
* Multi-users/Single sign-on
* Add tests
* Submit for review
| 32.27027 | 232 | 0.767169 | eng_Latn | 0.98404 |
e1ed73f005c67bc59ca695faccaf6a8d40298fad | 191 | md | Markdown | README.md | equalsraf/totalrecall | c43ed48df9ad0da9b694c46fd54e23b6008fe62d | [
"0BSD"
] | null | null | null | README.md | equalsraf/totalrecall | c43ed48df9ad0da9b694c46fd54e23b6008fe62d | [
"0BSD"
] | null | null | null | README.md | equalsraf/totalrecall | c43ed48df9ad0da9b694c46fd54e23b6008fe62d | [
"0BSD"
] | null | null | null | totalrecall is a minimalistic process watcher. It runs a command
and waits until it finishes; if it ends with an error (non-zero status),
it restarts the process.
$ totalrecall command ...
| 27.285714 | 68 | 0.753927 | eng_Latn | 0.999329 |
e1eeb68e4521075286f57d8041e42582c8ac1aea | 463 | md | Markdown | _posts/2015-04-22-post-review-encoding-problem.md | chenxiaohui/chenxiaohui.github.io | 888c5c79f390498e23c2180f40d093d10061c4ea | [
"MIT"
] | null | null | null | _posts/2015-04-22-post-review-encoding-problem.md | chenxiaohui/chenxiaohui.github.io | 888c5c79f390498e23c2180f40d093d10061c4ea | [
"MIT"
] | null | null | null | _posts/2015-04-22-post-review-encoding-problem.md | chenxiaohui/chenxiaohui.github.io | 888c5c79f390498e23c2180f40d093d10061c4ea | [
"MIT"
] | 1 | 2021-12-15T03:54:42.000Z | 2021-12-15T03:54:42.000Z | ---
layout: article
title: "post-review encoding problem"
key: post-review-encoding-problem
date: 2015-04-22 16:08
comments: true
published: true
categories: "Other"
---
post-review hit a problem on Windows (not me, I don't use Windows). Python throws this error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position xxx: ordinal not in range(128)
An encoding problem yet again. I didn't want to dig into post-review's source code, so it was easier to just patch the environment's source instead. Open rbtools\utils\process.py and add two lines after `import sys`:
reload(sys)
sys.setdefaultencoding("utf-8")
Ok. | 24.368421 | 104 | 0.75594 | eng_Latn | 0.314601 |
e1ef58b9d356f348b80b0f46e03c5f86732f7e3c | 45,989 | md | Markdown | documentation_src/README.md | haroutboujakjian/Vuesalize | 5afdcc99e28ee9ced1b3a1b1f6987b2f077334a5 | [
"MIT"
] | 23 | 2021-04-26T12:43:09.000Z | 2022-01-17T18:05:04.000Z | documentation_src/README.md | haroutboujakjian/Vuesalize | 5afdcc99e28ee9ced1b3a1b1f6987b2f077334a5 | [
"MIT"
] | null | null | null | documentation_src/README.md | haroutboujakjian/Vuesalize | 5afdcc99e28ee9ced1b3a1b1f6987b2f077334a5 | [
"MIT"
] | 1 | 2021-12-17T02:24:39.000Z | 2021-12-17T02:24:39.000Z | # Vuesalize
## What's the point?
Building interactive visualizations on the web can be hard, and it can be even harder when you would like to leverage
existing visualization libraries inside of a Vue.js project. The goal of `Vuesalize` is to simplify this process by
providing a set of chart components (and a couple others) that are commonly used in building interactive visualizations
on the web. The charts are built using a combination of [Vue.js](https://vuejs.org/v2/guide/)
and [D3.js](https://d3js.org/). The main rationale for this approach is to fully embrace the Vue paradigm and move the
SVG definitions to the template (HTML), which allows Vue to handle creating and removing elements on the page.
This is analogous to the "enter/update/exit" strategy used in D3 but specifically taking advantage of the virtual DOM.
By building charts where the SVG is defined in Vue's template, we can not only send down props to update the chart, but
can also emit events on interactions (e.g. click, mouseover, etc.) and offer scoped slots for custom tooltips!
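To make the pattern concrete, here is a minimal sketch of the idea (not an actual Vuesalize component): D3 only computes the scales, while Vue renders the SVG elements and handles interactions.
```html
<template>
    <svg :width="width" :height="height">
        <!-- Vue adds/removes the rects as `data` changes, replacing D3's enter/update/exit -->
        <rect v-for="(d, i) in data" :key="i"
              :x="xScale(i)" :y="yScale(d)"
              :width="xScale.bandwidth()" :height="height - yScale(d)"
              @click="$emit('click', d)">
        </rect>
    </svg>
</template>
<script>
import { scaleBand, scaleLinear } from 'd3-scale'

export default {
    name: "MinimalBarChart",
    props: ['data', 'width', 'height'],
    computed: {
        xScale() {
            return scaleBand().domain(this.data.map((d, i) => i))
                .range([0, this.width]).padding(0.1)
        },
        yScale() {
            return scaleLinear().domain([0, Math.max(...this.data)]).range([this.height, 0])
        }
    }
}
</script>
```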
## Installation
Any Vue.js based project will be able to take advantage of this library. The library is currently available on npm, and
it is possible to use it with Vue CLI (recommended) or directly with the CDN version in a `<script>` tag.
### Vue CLI
The steps to use is it in a project created using the Vue CLI are as follows:
1. Install from npm using `npm install vuesalize`
2. In `main.js`, add the components that are going to be used in the project. Here is an example below for a project
using the `BaseLegend` and `LoaderSpinning` components
```js
import LoaderSpinning from 'vuesalize'
import BaseLegend from 'vuesalize'
import 'vuesalize/dist/vuesalize.css'
Vue.use(LoaderSpinning, BaseLegend)
```
3. Start using the components in templates. For example, if the `BaseLegend` and `LoaderSpinning` components were going
to be used in a default `App.vue` file, this is how it would be setup:
```html
<template>
<div id="app">
<BaseLegend :legend-data="sampleLegendData"></BaseLegend>
<LoaderSpinning></LoaderSpinning>
</div>
</template>
<script>
export default {
name: 'App',
data() {
return {
sampleLegendData: [
{name: 'finance', color: 'red'},
{name: 'accounting', color: 'blue'}
],
}
}
}
</script>
<style>
#app {
font-family: Avenir, Helvetica, Arial, sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
text-align: center;
color: #2c3e50;
margin-top: 60px;
}
</style>
```
### CDN
It is quite simple to get started with the CDN. The vuesalize [javascript](https://unpkg.com/vuesalize)
and [css](https://unpkg.com/vuesalize/dist/vuesalize.css) files need to be linked (lines 5 and 7),
and the components that will be used must be declared using `Vue.use()` (line 16). It is also necessary to link the
official Vue package (line 6) before vuesalize since it relies on base Vue.
```html
<html lang="en">
<head>
<meta charset="utf-8">
<title>Browser test</title>
    <link rel="stylesheet" href="https://unpkg.com/vuesalize/dist/vuesalize.css">
    <script src="https://unpkg.com/vue@2/dist/vue.js"></script>
    <script src="https://unpkg.com/vuesalize/dist/vuesalize.umd.min.js"></script>
</head>
<body>
<div id="app">
<loader-spinning></loader-spinning>
    <base-legend :legend-data="sampleLegendData"></base-legend>
</div>
<script>
Vue.use('loader-spinning', 'base-legend')
new Vue({
el: '#app',
data() {
return {
sampleLegendData: [
{name: 'finance', color: 'red'},
{name: 'accounting', color: 'blue'}
],
}
}
})
</script>
</body>
</html>
```
Examples of how each of the chart components can be used can be found in the sections below. Additionally, the SFC
component templates can be retrieved from [github](https://github.com/haroutboujakjian/Vuesalize/tree/master/src)
## Charts
### Stacked Bar Chart
#### Example
Here is a simple example that constructs a stacked bar chart representing a set of generic expenses.
<div style="display: flex; justify-content: center">
<stacked-bar-chart-example></stacked-bar-chart-example>
</div>
```html
<template>
<StackedBarChart :plot-data="plotData" x-key="date"
:margin="margin" x-axis-label="Year"
y-axis-label="Expenses" :y-tick-format="d => `$${d}`">
</StackedBarChart>
</template>
<script>
import SBCdata from './Budget3Groups.json'
export default {
name: "StackedBarChartExample",
data() {
return {
plotData: SBCdata,
margin: {top: 20, bottom: 35, left: 60, right: 20},
}
}
}
</script>
```
Alternatively, it's possible to get a horizontal bar chart by passing in 'horizontal' for the `direction` prop.
<div style="display: flex; justify-content: center">
<stacked-bar-chart-example :horizontal="true"></stacked-bar-chart-example>
</div>
```html
<template>
<StackedBarChart :plot-data="plotData" x-key="date"
:margin="margin" direction="horizontal"
x-axis-label="Expenses" y-axis-label="Year"
:x-axis-label-shift="{ dx: 0, dy: -2}" :y-axis-label-shift="{ dx: 0, dy: 5}"
:x-tick-format="d => `$${d}`">
</StackedBarChart>
</template>
<script>
import SBCdata from './Budget3Groups.json'
export default {
name: "StackedBarChartExample",
data() {
return {
plotData: SBCdata,
margin: {top: 20, bottom: 35, left: 60, right: 20},
}
}
}
</script>
```
In order for the stacked bar chart to render properly, `plot-data` needs to be an array of objects. There should be
one key for the x value, and all the other keys will be for y values. The `Budget3Groups.json` data file (snippet below)
that populates the example stacked bar chart has "date" for the x value, and "Utilities",
"Rent", and "Insurance" for the y values. All of the axis charts
(bar charts, line charts, area charts) use the same format for data, making it easier to switch between them.
```json
[
{
"date": "2019",
"Utilities": 5921,
"Rent": 1026,
"Insurance": 2324
},
{
"date": "2020",
"Utilities": 1539,
"Rent": 1560,
"Insurance": 1257
},
...
]
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :-----------------: | ------- | -- | --|
| `plot-data` | :heavy_check_mark: | `Array` | | data necessary to create the chart |
| `x-key` | :heavy_check_mark: | `String` | | string that is the key of the x value in plotdata |
| `width` | | `Number` | 350px | chart width in pixels |
| `height` | | `Number` | 250px | chart height in pixels |
| `colors` | | `Array` | | array of colors used for each bar |
| `direction` | | `String` |'vertical'| direction of the chart. can be 'vertical' or 'horizontal' |
| `bar-axis-location` | | `String` |'bottom' | placement of the x-axis for horizontal layout. can be 'bottom' or 'top'|
| `margin` | | `Object` | | object that contains the top, bottom, right, and left margins |
|`enable-tooltip` | |`Boolean` | True | Turn default tooltip on or off |
|`padding-between-bars`| | `Number` | 0.10 | padding between the bars in a group. Must be between `0` and `1` |
| `x-axis-label` | | `String` | | Label for the x-axis |
| `y-axis-label` | | `String` | | Label for the y-axis |
| `x-axis-label-shift` | | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `y-axis-label-shift` | | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `x-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the x-axis |
| `y-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the y-axis |
#### Events
| Event | Location | Value Emitted | Description |
|-- | -------- |------ | --|
| `click` | Rectangle | `Object` | `x_label`, `y_label`, `x_value`, and `y_value` of the bar in the stack that is clicked on|
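For example, a hypothetical parent component could listen for this event as follows (`onBarClick` is just an illustrative handler name):
```html
<template>
    <StackedBarChart :plot-data="plotData" x-key="date" @click="onBarClick">
    </StackedBarChart>
</template>
<script>
export default {
    methods: {
        // `bar` contains x_label, y_label, x_value, and y_value
        onBarClick(bar) {
            console.log(`${bar.y_label} in ${bar.x_label}: ${bar.y_value}`)
        }
    }
}
</script>
```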
#### Slots
We provide a default tooltip that gives the x and y value for the bar that is hovered over. If you want to define a
slightly more [custom tooltip](#tooltips), then the bar's data is passed up in
a [scoped slot](https://vuejs.org/v2/guide/components-slots.html#Scoped-Slots).
| Slot name | Value | Type | Description |
|-- | ------- | ---- | --|
| `tooltip` | `bar` | Object | contains `x_label`, `y_label`, `x_value`, and `y_value` keys of the bar in the stack that is hovered over |
### Grouped Bar Chart
#### Example
Here is an example using the same expenses data as the stacked bar chart above. In this case, the bars are grouped.
<div style="display: flex; justify-content: center">
<grouped-bar-chart-example></grouped-bar-chart-example>
</div>
```html
<template>
<GroupedBarChart :plot-data="plotdata" x-key="date"
:width="450" :height="300" :margin="margin"
x-axis-label="Year" y-axis-label="Expenses"
:y-tick-format="d => `$${d}`">
</GroupedBarChart>
</template>
<script>
import GBCdata from "./Budget3Groups.json"
export default {
name: "GroupedBarChartExample",
data() {
return {
plotdata: GBCdata,
margin: {top: 20, bottom: 35, left: 55, right: 20}
}
}
}
</script>
```
And, again, it's possible to get a horizontal bar chart by passing in 'horizontal' for the direction prop.
<div style="display: flex; justify-content: center">
<grouped-bar-chart-example :horizontal="true">
</grouped-bar-chart-example>
</div>
```html
<template>
<GroupedBarChart :plot-data="plotdata" x-key="date"
:width="450" :height="300" :margin="margin"
x-axis-label="Expenses" y-axis-label="Year"
:x-axis-label-shift="{ dx: 0, dy: -2 }" :y-axis-label-shift="{ dx: 0, dy: 5 }"
:x-tick-format="d => `$${d}`">
</GroupedBarChart>
</template>
<script>
import GBCdata from "./Budget3Groups.json"
export default {
name: "GroupedBarChartExample",
data() {
return {
plotdata: GBCdata,
margin: {top: 20, bottom: 35, left: 55, right: 20}
}
}
}
</script>
```
#### Format of Data
In order for the grouped bar chart to render properly, `plot-data` needs to be an array of objects. There should be
one key for the x value, and all the other keys will be for y values. The `Budget3Groups.json` data file (snippet below)
that populates the example grouped bar chart has "date" for the x value, and "Utilities",
"Rent", and "Insurance" for the y values. All of the axis charts
(bar charts, line charts, area charts) use the same format for data, making it easier to switch between them.
```json
[
{
"date": "2019",
"Utilities": 5921,
"Rent": 1026,
"Insurance": 2324
},
{
"date": "2020",
"Utilities": 1539,
"Rent": 1560,
"Insurance": 1257
},
...
]
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :-----------------: | ------- | -- | --|
| `plot-data` | :heavy_check_mark: | `Array` | | data necessary to create the chart |
| `x-key` | :heavy_check_mark: | `String` | | string that is the key of the x value in plotdata |
| `width` | | `Number` | 350px | chart width in pixels |
| `height` | | `Number` | 250px | chart height in pixels |
| `colors` | | `Array` | | array of colors used for each bar |
| `direction` | |`String` |'vertical'| direction of the chart. can be 'vertical' or 'horizontal' |
| `bar-axis-location` | | `String` |'bottom' | placement of the x-axis for horizontal layout. can be 'bottom' or 'top'|
| `padding-between-bars`| | `Number` | 0.15 | padding between the bars in a group. Must be between `0` and `1` |
| `padding-between-groups`| | `Number` | 0.15 | padding between the groups of bars. Must be between `0` and `1` |
| `margin` | | `Object` | | object that contains the top, bottom, right, and left margins |
|`enable-tooltip` | |`Boolean` | True | Turn default tooltip on or off |
| `x-axis-label` | | `String` | | Label for the x-axis |
| `y-axis-label` | | `String` | | Label for the y-axis |
| `x-axis-label-shift` | | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `y-axis-label-shift` | | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `x-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the x-axis |
| `y-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the y-axis |
#### Events
#### Slots
### Line Chart
The line chart component allows for one or more lines to be plotted.
#### Example
<div style="display: flex; justify-content: center">
<line-chart-example></line-chart-example>
</div>
```html
<template>
<LineChart :plot-data="plotData" x-key="date"
:width="450" :height="250" :margin="margin"
x-axis-label="Year" y-axis-label="Expenses"
:y-tick-format="d => `$${d}`">
</LineChart>
</template>
<script>
import LCdata from "./Budget3Groups.json"
export default {
name: "LineChartExample",
data() {
return {
plotData: LCdata,
margin: {top: 20, bottom: 30, left: 50, right: 20}
}
}
}
</script>
```
#### Format of Data
In order for the line chart to render properly, `plot-data` needs to be an array of objects. There should be
one key for the x value, and all the other keys will be for y values. The `Budget3Groups.json` data file (snippet below)
that populates the example line chart has "date" for the x value, and "Utilities",
"Rent", and "Insurance" for the y values. All of the axis charts
(bar charts, line charts, area charts) use the same format for data, making it easier to switch between them.
```json
[
{
"date": "2019",
"Utilities": 5921,
"Rent": 1026,
"Insurance": 2324
},
{
"date": "2020",
"Utilities": 1539,
"Rent": 1560,
"Insurance": 1257
},
...
]
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :------------------: | ------- | -- | --|
| `plot-data` | :heavy_check_mark: | `Array` | | data necessary to create the chart |
| `x-key` | :heavy_check_mark: | `String` | | string that is the key of the x value in plotdata |
| `width` | | `Number` | 350px | chart width in pixels |
| `height` | | `Number` | 250px | chart height in pixels |
| `colors` | | `Array` | | array of colors used for each line |
| `margin` | | `Object` | | object that contains the top, bottom, right, and left margins |
|`enable-tooltip` | |`Boolean` | True | Turn default tooltip on or off |
|`stroke-width` | | `Number` | 2 | stroke-width for areas |
| `x-axis-label` | | `String` | | Label for the x-axis |
| `y-axis-label` | | `String` | | Label for the y-axis |
| `x-axis-label-shift`| | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `y-axis-label-shift`| | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `x-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the x-axis |
| `y-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the y-axis |
#### Events
#### Slots
The default tooltip that gives all of the values for the x value hovered over. If you want to define a slightly
more [custom tooltip](#tooltips), then the line's data is passed up in
a [scoped slot](https://vuejs.org/v2/guide/components-slots.html#Scoped-Slots).
| Slot name | Value | Type | Description |
|-- | ------- | ---- | --|
| `tooltip` | `row` | Object | contains the x key and all of the y keys for the x value that is hovered over |
### Area Chart
#### Example
Area charts are similar to line charts except the area under the curve is filled in. A simple area chart with two
groups is rendered below.
<div style="display: flex; justify-content: center">
<area-chart-example></area-chart-example>
</div>
```html
<template>
<AreaChart :plot-data="plotData" :width="500" :height="300" x-key="date"
:margin="margin" :colors="['#ac58e5','#E0488B']"
x-axis-label="Year" y-axis-label="Expenses"
:y-tick-format="d => `$${d}`">
</AreaChart>
</template>
<script>
import ACdata from './Budget2Groups.json'
export default {
name: "AreaChartExample",
data() {
return {
plotData: ACdata,
margin: {top: 20, bottom: 30, left: 55, right: 20}
}
}
}
</script>
```
In order to get a stacked area chart, set the `stacked` prop to true.
<div style="display: flex; justify-content: center">
<area-chart-example :stacked="true"></area-chart-example>
</div>
```html
<template>
<AreaChart :plot-data="plotData" :width="500" :height="300" x-key="date"
:margin="margin" :stacked="true" :colors="['#ac58e5','#E0488B']"
x-axis-label="Year" y-axis-label="Expenses"
:y-tick-format="d => `$${d}`">
</AreaChart>
</template>
<script>
import ACdata from './Budget2Groups.json'
export default {
name: "AreaChartExample",
data() {
return {
plotData: ACdata,
margin: {top: 20, bottom: 30, left: 55, right: 20}
}
}
}
</script>
```
#### Format of Data
In order for the area chart to render properly, `plot-data` needs to be an array of objects. There should be
one key for the x value, and all the other keys will be for y values. The `Budget3Groups.json` data file (snippet below)
that populates the example area chart has "date" for the x value, and "Utilities",
"Rent", and "Insurance" for the y values. All of the axis charts
(bar charts, line charts, area charts) use the same format for data, making it easier to switch between them.
```json
[
{
"date": "2019",
"Utilities": 5921,
"Rent": 1026,
"Insurance": 2324
},
{
"date": "2020",
"Utilities": 1539,
"Rent": 1560,
"Insurance": 1257
},
...
]
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :------------------: | ------- | -- | --|
| `plot-data` | :heavy_check_mark: | `Array` | | data necessary to create the chart |
| `x-key` | :heavy_check_mark: | `String` | | string that is the key of the x value in plotdata |
| `width` | | `Number` | 350px | chart width in pixels |
| `height` | | `Number` | 250px | chart height in pixels |
| `colors` | | `Array` | | array of colors used for areas |
| `margin` | | `Object` | | object that contains the top, bottom, right, and left margins |
| `stacked` | | `Boolean`| | changes to stacked area chart |
|`fill-opacity` | | `Number` | 0.65 | fill opacity for each path, must be between 0 and 1 |
|`stroke-width` | | `Number` | 2 | stroke-width for areas |
| `x-axis-label` | | `String` | | Label for the x-axis |
| `y-axis-label` | | `String` | | Label for the y-axis |
| `x-axis-label-shift`| | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `y-axis-label-shift`| | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `x-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the x-axis |
| `y-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the y-axis |
#### Events
#### Slots
The default tooltip that gives all of the values for the x value hovered over. If you want to define a slightly
more [custom tooltip](#tooltips), then the area's data is passed up in
a [scoped slot](https://vuejs.org/v2/guide/components-slots.html#Scoped-Slots).
| Slot name | Value | Type | Description |
|-- | ------- | ---- | --|
| `tooltip` | `row` | Object | contains the x key and all of the y keys for the x value that is hovered over |
### Scatter Plot
#### Example
A scatter plot helps display relationships between two variables in a plot. Transitions are built in for moving the
points around, as well as transitioning the fill, radius, etc. Click the update data button below to see this in action!
<div style="display: flex; justify-content: center">
<ScatterPlotExample></ScatterPlotExample>
</div>
```html
<template>
<ScatterPlot :plotData="plotData" xKey="profit" yKey="utility"
:margin="margin" :width="400"
y-axis-label="Utility" x-axis-label="Profit" :x-axis-label-shift="{ dx: 5, dy: -5}"
:stroke="'#ff3000'" :fill="'#ff3000'" :fill-opacity="0.60"
:x-tick-format="d => `$${d}`">
</ScatterPlot>
</template>
<script>
import plotData from "./ScatterPlotData.json"
export default {
name: "ScatterPlotExample",
data() {
return {
plotData: plotData,
margin: {top: 20, bottom: 40, right: 20, left: 50}
}
}
}
</script>
```
#### Format of Data
The data needs to be passed in as an array of objects. Each object should contain the x and y values for each point, and
these can be specified by the `x-key` and `y-key` keys. Passing in the values in the data allows for more fine-grained control
as opposed to setting one consistent style in the props (e.g. passing in different fill values for each point instead of
passing in one fill value as a prop). The table below has all of the possible keys that can be included for an object.
| Name | Required | Type | Description |
|-- | :-----------: | ------- | --|
| `x-key` | :heavy_check_mark: | `String` | x value for the point |
| `y-key` | :heavy_check_mark: |`String` | y value for the point |
| `radius` | | `Number` | radius of the point |
| `fill` | |`String` | fill of the point |
| `stroke` | | `String` | stroke of the point |
Here is a snippet of the data that the example scatterplot above uses
```json
[
{
"profit": 103,
"utility": 9,
"radius": 5,
"fill": "#ff3000"
},
{
"profit": 359,
"utility": 54,
"radius": 5,
"fill": "#ff3000"
},
...
]
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :------------------: | ------- | -- | --|
| `plot-data` | :heavy_check_mark: | `Array` | | data necessary to create the chart |
| `x-key` | :heavy_check_mark: | `String` | | string that is the key of the x values in plotdata |
| `y-key` | :heavy_check_mark: | `String` | | string that is the key of the y values in plotdata |
| `width` | | `Number` | 350px | chart width in pixels |
| `height` | | `Number` | 250px | chart height in pixels |
| `margin` | | `Object` | | object that contains the top, bottom, right, and left margins |
|`radius` | | `Number` | 5 | radius for all points |
|`fill` | | `String` | black | fill for all points |
|`fill-opacity` | | `Number` | 1 | fill opacity for all points, must be between 0 and 1 |
|`stroke` | | `String` | black | stroke for all points |
|`stroke-opacity` | | `Number` | 1 | stroke opacity for all points, must be between 0 and 1 |
| `x-axis-label` | | `String` | | Label for the x-axis |
| `y-axis-label` | | `String` | | Label for the y-axis |
| `x-axis-label-shift`| | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `y-axis-label-shift`| | `Object` | | Takes `dx` and `dy` keys that move the location label |
| `x-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the x-axis |
| `y-tick-format` | |`Function`| `null` | Function passed into d3's [tickFormat](https://github.com/d3/d3-axis#axis_tickFormat) for the y-axis |
#### Events
| Event | Location | Value Emitted | Description |
|-- | -------- |------ | --|
| `click` | Circle | `Object` | the object in the array that is clicked on for the circle will be emitted |
#### Slots
| Slot name | Value | Type | Description |
|-- | ------- | ---- | --|
| `tooltip` | `row` | Object | contains `point` and `event` objects for point that is hovered over |
### Donut Chart
Under construction...
### Hierarchical Edge Bundling
A hierarchical edge bundling chart shows relationships between different entities radially to avoid very long or wide
hierarchical charts.
#### Example
<div style="display: flex; justify-content: center">
<hierarchical-edge-bundling-example></hierarchical-edge-bundling-example>
</div>
```html
<template>
<HierarchicalEdgeBundling :plot-data="plotdata"
:width="500" :height="500" :radial-margin="140">
</HierarchicalEdgeBundling>
</template>
<script>
import HEBdata from "./HierarchicalEdgeBundlingData.json"
export default {
name: "HierarchicalEdgeBundlingExample",
data() {
return {
plotdata: HEBdata
}
}
}
</script>
```
#### Format of Data
The format of the data for the edge bundling chart requires a bit more work. There are three main keys: `name`, `color`, and
`imports`. The `name` key is for the name of the node, which should be unique, and `color` is the color of the
node. Lastly, the `imports` key contains all of the connections to that node.
```json
[
{
"name": "root|Outcome|Meet Client",
"color": "#395fa0",
"imports": [
"root|Outcome|Complete Outcome 1"
]
},
{
"name": "root|Indicator|Go To Dinner",
"color": "red",
"imports": [
"root|Outcome|Meet Client"
]
},
{
"name": "root|Indicator|Fill Out Paperwork",
"color": "#395fa0",
"imports": [
"root|Indicator|Go To Dinner",
"root|Outcome|Complete Outcome 1"
]
},
...
]
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :------------------: | ------- | -- | --|
| `plot-data` | :heavy_check_mark: | `Array` | | data necessary to create the chart |
| `width` | :heavy_check_mark: | `Number` | | chart width in pixels |
| `height` | :heavy_check_mark: | `Number` | | chart height in pixels |
| `radial-margin` | | `Number` | 70 | margin (in pixels) between the text label and edge of svg |
|`highlight-event`| | `String` | 'click' | Event that highlights connections for a specific node and has two options: 'click' or 'mouseover'|
#### Events
#### Slots
### Network
Networks are useful in displaying relationships between groups. The sticky force layout below provides an easy way to
implement one. A few features are available that are quite useful:
1. Individual nodes can be dragged
2. The entire graph of nodes and links can be panned (e.g. dragged around)
3. Nodes and links can be added or removed without having to rerender the entire component
#### Example
<network-example></network-example>
```html
<template>
<Network :width="500" :height="400" :plot-data="plotData"></Network>
</template>
<script>
import NetworkData from "./NetworkData.json"
export default {
name: "NetworkExample",
data() {
return {
plotData: NetworkData
}
},
}
</script>
```
#### Format of Data
The data needs to be an object that has an array of nodes and an array of links. A node object should have a name property,
a null x and y, and a color (if specific nodes should be colored differently). The links need to have source and target
keys which reference a node by name. Nodes and links can have additional metadata in them as well, as long as the names
don't conflict with any of the required keys. Here is the data used to create the network above.
```json
{
"nodes": [
{
"name": "Jerry",
"x": null,
"y": null,
"color": "#cd34b5"
},
{
"name": "George",
"x": null,
"y": null,
"color": "#fa8775"
}, ...
],
"links": [
{
"source": "Jerry",
"target": "Elaine"
},
{
"source": "Elaine",
"target": "David"
}, ...
]
}
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :----------------: | ------- | -- | --|
| `plot-data` | :heavy_check_mark: | `Array` | | data necessary to create the chart |
| `width` | :heavy_check_mark: | `Number` | | chart width in pixels |
| `height` | :heavy_check_mark: | `Number` | | chart height in pixels |
| `node-radius` | | `Number` | 8 | size of node circles |
| `force-strength`| | `Number` | -80 | [force](https://github.com/d3/d3-force#many-body) causing nodes to repel each other |
#### Events
| Event | Location | Value Emitted | Description |
|-- | -------- |------ | --|
| `click` | Circle | `Object` | The entire node object is emitted containing node name, x, y, and any other keys |
#### Slots
## Additional Components
### Basic Legend
#### Example
Legends are useful for many charts and a simple component is provided in the library. The examples below show how to use
a simple legend component in both the vertical and horizontal alignments. The vertical legend also has the
`enable-toggle` prop added, which allows the legend to be used like a set of checkboxes by emitting a click event with
the selected object's data.
<base-legend-example></base-legend-example>
```html
<template>
<div>
        <p>Horizontal</p>
<BaseLegend :legend-data="legendData" :alignment="'horizontal'"></BaseLegend>
<p>Vertical</p>
<BaseLegend :legend-data="legendDataToggleEnabled" :alignment="'vertical'"
enable-toggle>
</BaseLegend>
</div>
</template>
<script>
export default {
name: "BaseLegendExample",
data() {
return {
legendData: [
{name: "Utilities", color: '#717e9b'},
{name: "Rent", color: '#b6b6db'},
{name: "Insurance", color: '#bcd8f1'}
],
legendDataToggleEnabled: [
{name: "Utilities", color: '#717e9b', selected: true},
{name: "Rent", color: '#b6b6db'},
{name: "Insurance", color: '#bcd8f1', selected: true}
]
}
}
}
</script>
```
#### Format of Data
The legend component takes in a simple array of objects that contains name and color keys. If `enable-toggle` is set to
true, then a selected key can also be passed in with `true` or `false` values.
```json
[
{
"name": "Utilities", "color": "#717e9b"
},
{
"name": "Rent", "color": "#b6b6db"
},
{
"name": "Insurance", "color": "#bcd8f1"
}
]
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :----------------: | ------- | -- | --|
| `legend-data` | :heavy_check_mark: | `Object` | | data necessary to create the legend |
| `alignment` | | `String` | 'horizontal'| Two options for alignment: 'vertical' or 'horizontal' |
| `enable-toggle` | | `Boolean`| false | allows the items in the legend to be clickable and emits the object on click|
#### Events
| Event | Location | Value Emitted | Description |
|-- | -------- |------ | --|
| `click` | Marker or text | `Object` | If `enable-toggle` prop is true, the entire item object (name and color) is emitted |
| `keypress.space`| Marker | `Object` | If `enable-toggle` prop is true and the marker to tabbed to on the keyboard, the entire item object (name and color) is emitted |
### Loading Spinner
The loading spinner is useful when data is being fetched from an API and there is some lag before the GUI receives it.
#### Example
<div style="display: flex; justify-content: center; margin-bottom: 2rem">
<loader-spinning></loader-spinning>
</div>
```html
<template>
<LoaderSpinning/>
</template>
```
#### Props
| Name | Required | Type | Default | Description |
|-- | :----------------: | ------- | -- | --|
| `radius` | | `Number` | 64 | radius (in px) of the loading spinner |
| `color` | | `String` | `#fff` | color of the loading spinner borders |
## Component Parts
### Tooltips
Default tooltips are provided for some of the charts, which make it easy to get up and running quickly. However, it is
common for users to want to define a slightly more custom tooltip that might better fit their needs. This can be done
with [Slots](https://vuejs.org/v2/guide/components-slots.html)
and [Scoped Slots](https://vuejs.org/v2/guide/components-slots.html#Scoped-Slots). Each chart that has a
default tooltip will also have a slot that passes up data about the part of the chart that is hovered on.
#### Example
Here is an example that defines a custom tooltip for the same stacked bar chart using the x_label, y_label, x_value,
and y_value of the bar that is hovered over, which
are [destructured](https://vuejs.org/v2/guide/components-slots.html#Destructuring-Slot-Props) from the `tooltip` slot
<div style="display: flex; justify-content: center">
<stacked-bar-chart-example :tooltip="true"></stacked-bar-chart-example>
</div>
```html
<template>
<StackedBarChart :width="350" :height="250" :plot-data="plotData"
:margin="margin" x-key="date"
x-axis-label="Year" y-axis-label="Expenses"
:y-tick-format="d => `$${d}`">
<template v-slot:tooltip="{ bar }">
<p>Here are values when you hover over a bar</p>
<p>{{ bar.x_label }}, {{ bar.y_label }}, {{ bar.x_value }}, {{ bar.y_value }}</p>
</template>
</StackedBarChart>
</template>
<script>
import SBCdata from './Budget3Groups.json'
export default {
name: "StackedBarChartExample",
data() {
return {
plotData: SBCdata,
margin: {top: 20, bottom: 35, left: 60, right: 20},
}
}
}
</script>
```
### Annotations
The axis-based plots also have the ability to add annotations.
#### Example
The chart below shows adding a horizontal dashed line to a stacked bar chart, which might indicate, for example, a max
budget line.
<div style="display: flex; justify-content: center">
<stacked-bar-chart-example :annotation="true"></stacked-bar-chart-example>
</div>
```html
<template>
<StackedBarChart :plot-data="plotData" x-key="date"
:margin="margin" x-axis-label="Year" y-axis-label="Expenses"
:annotations="annotations" :y-tick-format="d => `$${d}`">
</StackedBarChart>
</template>
<script>
import SBCdata from './Budget3Groups.json'
export default {
name: "StackedBarChartExample",
data() {
return {
plotData: SBCdata,
margin: {top: 20, bottom: 35, left: 55, right: 70},
annotations: [
{
type: "line", axis: "y", color: "#ef0202", value: 8000, dash: true,
label: 'Max Budget', labeldx: 35, labeldy: -6
}]
}
}
}
</script>
```
Another example here adds two vertical lines to a line chart, indicating specific start and end dates for funding.
<div style="display: flex; justify-content: center">
<line-chart-example :annotation="true"></line-chart-example>
</div>
```html
<template>
<LineChart :plot-data="plotData" x-key="date"
:width="450" :height="250" :margin="margin"
x-axis-label="Year" y-axis-label="Expenses"
:annotations="annotations" :y-tick-format="d => `$${d}`">
</LineChart>
</template>
<script>
import LCdata from "./Budget3Groups.json"
export default {
name: "LineChartExample",
data() {
return {
plotData: LCdata,
margin: {top: 20, bottom: 30, left: 50, right: 20},
annotations: [
{
type: "line", axis: "x", color: "#b3080e",
label: "Start Date", labeldy: -5,
value: new Date(2019, 6, 0)
},
{
type: "line", axis: "x", color: "#b3080e",
label: "End Date", labeldy: -5,
value: new Date(2020, 9, 0)
},
]
}
}
}
</script>
```
#### Format
Annotations need to be an array of objects, even if it is only one object. The annotation object requires the following
properties:
| Name | Required | Type | Default | Description |
|-- | :------------------: | ------- | -- | --|
| `type` | :heavy_check_mark: | `String` | | type of annotation, current options: 'line' |
| `axis` | :heavy_check_mark: | `String` | | options: "x" or "y" |
| `value` | :heavy_check_mark: | `Number` | | value on the x or y axis |
| `color` | | `String` | Black | color name, hex code, or rgb value |
| `dash` | | `Boolean`| False | whether line should have dashes or not |
| `label` | | `String` | | label used for annotation |
| `labelAnchor`| | `String` | 'middle' | text-anchor property for label. can be 'start', 'end' or 'middle'|
| `labeldx` | | `Number` | | shift label in x direction |
| `labeldy` | | `Number` | | shift label in y direction |
<small>Copyright 2021 MITRE Corporation. Approved for Public Release - Distribution Unlimited. Case #21-0751</small> | 41.656703 | 185 | 0.506012 | eng_Latn | 0.944126 |
e1ef6bc36192b7099edf293eab612cd5205addb7 | 76 | md | Markdown | build/content/people/g/graciela-gonzalez.md | briemadu/semdial-proceedings | 59f13adcc73dc9433c3c07ee929fadd8d271b022 | [
"Apache-2.0"
] | null | null | null | build/content/people/g/graciela-gonzalez.md | briemadu/semdial-proceedings | 59f13adcc73dc9433c3c07ee929fadd8d271b022 | [
"Apache-2.0"
] | null | null | null | build/content/people/g/graciela-gonzalez.md | briemadu/semdial-proceedings | 59f13adcc73dc9433c3c07ee929fadd8d271b022 | [
"Apache-2.0"
] | 2 | 2021-09-16T07:16:15.000Z | 2021-10-30T06:41:55.000Z | ---
lastname: Gonzalez
name: graciela-gonzalez
title: Graciela Gonzalez
---
| 12.666667 | 24 | 0.75 | pol_Latn | 0.561247 |
e1efd39d37061b21b8f33592a8fd5aeb3c8e460a | 401 | md | Markdown | content/featured/Newcreationstudios/index.md | matthewwalk/v4 | bf5037c347ec7c8ceb502815719bae699215c7a1 | [
"MIT"
] | null | null | null | content/featured/Newcreationstudios/index.md | matthewwalk/v4 | bf5037c347ec7c8ceb502815719bae699215c7a1 | [
"MIT"
] | null | null | null | content/featured/Newcreationstudios/index.md | matthewwalk/v4 | bf5037c347ec7c8ceb502815719bae699215c7a1 | [
"MIT"
] | null | null | null | ---
date: '4'
title: 'New Creation Studios'
cover: './ncs.png'
github: 'https://github.com/matthewwalk/newcreationstudios'
external: 'https://www.newcreationstudios.com/'
tech:
- AWS
- SES
- ACM
- Cloudfront
showInProjects: true
---
A deployment for a business website. Deployed through AWS, the website provides its owners with a high degree of performance and scalability at a minimal cost. | 26.733333 | 159 | 0.740648 | eng_Latn | 0.869625 |
e1f0322f21232ccfa28e142f693e0315a6ee56a2 | 42,044 | md | Markdown | articles/storsimple/storsimple-overview.md | OpenLocalizationTestOrg/azure-docs-pr15_pl-PL | 18fa7535e7cdf4b159e63a40776995fa95f1f314 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storsimple/storsimple-overview.md | OpenLocalizationTestOrg/azure-docs-pr15_pl-PL | 18fa7535e7cdf4b159e63a40776995fa95f1f314 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storsimple/storsimple-overview.md | OpenLocalizationTestOrg/azure-docs-pr15_pl-PL | 18fa7535e7cdf4b159e63a40776995fa95f1f314 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
    pageTitle="What is StorSimple? | Microsoft Azure"
    description="Describes StorSimple tiering, the StorSimple device, virtual device, services, and storage management, and introduces key terms used in StorSimple."
services="storsimple"
documentationCenter="NA"
authors="SharS"
manager="carmonm"
editor=""/>
<tags
ms.service="storsimple"
ms.devlang="NA"
ms.topic="article"
ms.tgt_pltfrm="NA"
ms.workload="TBD"
ms.date="10/05/2016"
ms.author="[email protected]"/>
# <a name="storsimple-8000-series-a-hybrid-cloud-storage-solution"></a>StorSimple 8000 series: a hybrid cloud storage solution
## <a name="overview"></a>Overview
Welcome to Microsoft Azure StorSimple, an integrated storage solution that manages storage tasks between on-premises devices and Microsoft Azure cloud storage. StorSimple is an efficient, cost-effective, and easy to manage storage area network (SAN) solution that eliminates many of the issues and expenses associated with enterprise storage and data protection. It uses the proprietary StorSimple 8000 series device, integrates with cloud services, and provides a set of management tools for a seamless view of all enterprise storage, including cloud storage. (StorSimple deployment information published on the Microsoft Azure website applies only to StorSimple 8000 series devices. If you are using a StorSimple 5000-7000 series device, go to [StorSimple Help](http://onlinehelp.storsimple.com/).)
StorSimple uses [storage tiering](#automatic-storage-tiering) to manage data stored across different storage media. The current working set is stored locally on solid state drives (SSDs), less frequently used data is stored on hard disk drives (HDDs), and archival data is pushed to the cloud. Moreover, StorSimple uses deduplication and compression to reduce the amount of storage that the data consumes. For more information, go to [Deduplication and compression](#deduplication-and-compression). For definitions of other key terms and concepts used in the StorSimple 8000 series documentation, go to [StorSimple terminology](#storsimple-terminology) at the end of this article.
With StorSimple Update 2, you can specify appropriate volumes as *locally pinned* to ensure that the data remains local to the primary device and is not tiered to the cloud. This allows you to run workloads that are sensitive to cloud latencies, such as SQL and virtual machine workloads, on locally pinned volumes, while continuing to use the cloud for backups. For more information about locally pinned volumes, see [Use the StorSimple Manager service to manage volumes](storsimple-manage-volumes-u2.md).
Update 2 also lets you create StorSimple virtual devices that take advantage of the low latency and high performance provided by Azure premium storage. For more information about StorSimple premium virtual devices, see [Deploy and manage a StorSimple virtual device in Azure](storsimple-virtual-device-u2.md). For more information about Azure premium storage, go to [Premium Storage: high-performance storage for Azure virtual machine workloads](../storage/storage-premium-storage.md).
In addition to its storage management features, StorSimple data protection features let you create on-demand and scheduled backups, which can then be stored locally or in the cloud. Backups are taken in the form of incremental snapshots, which means they can be created and restored quickly. Cloud snapshots can be critically important in disaster recovery scenarios because they replace secondary storage systems (such as tape backup), and allow you to restore data to your datacenter or to alternate sites, if necessary.
Watch the video below for a short introduction to Microsoft Azure StorSimple.
> [AZURE.VIDEO storsimple-hybrid-cloud-storage-solution]
## <a name="why-use-storsimple"></a>Why use StorSimple?
The following table describes some of the key benefits that Microsoft Azure StorSimple provides.
| Feature | Benefit |
|---------|---------|
|Transparent integration | Microsoft Azure StorSimple uses the iSCSI protocol to invisibly link data storage facilities. This ensures that data stored in the cloud, in the datacenter, or on remote servers appears to be stored at a single location.|
|Reduced storage costs|Microsoft Azure StorSimple allocates sufficient local or cloud storage to meet current demands and extends cloud storage only when necessary. It further reduces storage requirements and expense by eliminating redundant versions of the same data (deduplication) and by using compression.|
|Simplified storage management|Microsoft Azure StorSimple provides system administration tools that you can use to configure and manage data stored on-premises, on a remote server, and in the cloud. Additionally, you can manage backup and restore functions from a Microsoft Management Console (MMC) snap-in. StorSimple provides a separate, optional interface that you can use to extend StorSimple management and data protection services to content stored on SharePoint servers. |
|Improved disaster recovery and compliance|Microsoft Azure StorSimple does not require extended recovery time. Instead, it restores data as it is needed. This means normal operations can continue with minimal disruption. Additionally, you can configure policies to specify backup schedules and data retention.|
|Data mobility|Data uploaded to Microsoft Azure cloud services can be accessed from other sites for recovery and migration purposes. Additionally, you can use StorSimple to configure StorSimple virtual devices on virtual machines (VMs) running in Microsoft Azure. The VMs can then use the virtual devices to access stored data for test or recovery purposes.|
|Support for other cloud service providers |The StorSimple 8000 series with Update 1 or later software supports Amazon S3 with RRS, HP, and OpenStack cloud services, as well as Microsoft Azure. (You will still need a Microsoft Azure storage account for device management purposes.) For more information, go to [What's new in Update 1.2](storsimple-update1-release-notes.md#whats-new-in-update-12).|
|Business continuity | Update 1 or later includes a new migration feature that allows StorSimple 5000-7000 series users to migrate their data to a StorSimple 8000 series device.|
|Availability in the Azure Government Portal | StorSimple Update 1 or later is available in the Azure Government Portal. For more information, see [Deploy your on-premises StorSimple device in the Government Portal](storsimple-deployment-walkthrough-gov.md).|
|Data protection and availability | The StorSimple 8000 series with Update 1 or later supports Zone Redundant Storage (ZRS), in addition to Locally Redundant Storage (LRS) and Geo Redundant Storage (GRS). Refer to [this article on Azure storage redundancy options](https://azure.microsoft.com/documentation/articles/storage-redundancy/) for ZRS details.|
|Support for mission-critical applications | With StorSimple Update 2, you can identify appropriate volumes as locally pinned. This feature ensures that data required by critical applications is not tiered to the cloud. Locally pinned volumes are not subject to cloud latencies or connectivity issues. For more information about locally pinned volumes, see [Use the StorSimple Manager service to manage volumes](storsimple-manage-volumes-u2.md).|
|Low latency and high performance | StorSimple Update 2 lets you create virtual devices that take advantage of the high performance, low latency features of Azure premium storage. For more information about StorSimple premium virtual devices, see [Deploy and manage a StorSimple virtual device in Azure](storsimple-virtual-device-u2.md).|
 Obejrzyj [ten klip wideo](https://www.youtube.com/watch?v=4MhJT5xrvQw&feature=youtu.be) zawiera omówienie StorSimple 8000 serii funkcje i zalety.
## <a name="storsimple-components"></a>StorSimple components
The Microsoft Azure StorSimple solution includes the following components:
- **Microsoft Azure StorSimple device**: an on-premises hybrid storage array that contains SSDs and HDDs, together with redundant controllers and automatic failover capabilities. The controllers manage storage tiering, placing currently used (or hot) data on local storage (in the device or on local servers), while moving less frequently used data to the cloud.
- **StorSimple virtual device**: also called the StorSimple Virtual Appliance, this is a software version of the StorSimple device that replicates the architecture and most of the capabilities of the physical hybrid storage device. The StorSimple virtual device runs on a single node Azure virtual machine. Premium virtual devices, which take advantage of Azure Premium Storage, are available in Update 2 or later.
- **StorSimple Manager service**: an extension of the Azure classic portal that lets you manage a StorSimple device or StorSimple virtual device from a single web interface. You can use the StorSimple Manager service to create and manage services, view and manage devices, view alerts, manage volumes, and view and manage backup policies and the backup catalog.
- **Windows PowerShell for StorSimple**: a command-line interface that you can use to manage the StorSimple device. Windows PowerShell for StorSimple has features that let you register your StorSimple device, configure the network interface on your device, install certain types of updates, troubleshoot your device by accessing the support session, and change the device state. You can access Windows PowerShell for StorSimple by connecting to the serial console or by using Windows PowerShell remoting.
- **Azure PowerShell StorSimple cmdlets**: a collection of Windows PowerShell cmdlets that let you automate service-level and migration tasks from the command line. For more information about the Azure PowerShell cmdlets for StorSimple, go to the [cmdlet reference](https://msdn.microsoft.com/library/dn920427.aspx).
- **StorSimple Snapshot Manager**: an MMC snap-in that uses volume groups and the Windows Volume Shadow Copy Service to generate application-consistent backups. In addition, you can use StorSimple Snapshot Manager to create backup schedules and to clone or restore volumes.
- **StorSimple Adapter for SharePoint**: a tool that transparently extends Microsoft Azure StorSimple storage and data protection to SharePoint server farms, while making StorSimple storage visible and manageable from the SharePoint Central Administration portal.
The following diagram provides a detailed view of the Microsoft Azure StorSimple architecture and components.

The following sections describe each of these components in greater detail and explain how the solution arranges data, allocates storage, and facilitates storage management and data protection. The last section provides definitions for some of the important terms and concepts related to StorSimple components and their management.
## <a name="storsimple-device"></a>StorSimple device
The Microsoft Azure StorSimple device is an on-premises hybrid storage array that provides primary storage and iSCSI access to the data stored on it. It manages communication with cloud storage and helps ensure the security and confidentiality of all data stored on the Microsoft Azure StorSimple solution.
The StorSimple device includes SSDs and HDDs, as well as support for clustering and automatic failover. It contains a shared processor, shared storage, and two mirrored controllers. Each controller provides the following:
- Connection to a host computer
- Up to six network ports to connect to the local area network (LAN)
- Hardware monitoring
- Non-volatile random access memory (NVRAM), which retains information even if power is interrupted
- Cluster-Aware Updating to manage software updates on servers in a failover cluster so that the updates have minimal or no effect on service availability
- Cluster service, which functions like a back-end cluster, providing high availability and minimizing any adverse effects that might occur if an HDD or SSD fails or is taken offline
Only one controller is active at any point in time. If the active controller fails, the second controller becomes active automatically.
For more information, go to [StorSimple hardware components and status](storsimple-monitor-hardware-status.md).
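Hosts attach to the device over standard iSCSI. As a minimal sketch, a Windows host can connect with the built-in iSCSI cmdlets; the portal address and target IQN below are placeholders for your own deployment, and an access control record for the host is assumed to exist already on the device:
```powershell
# Discover and connect to a StorSimple iSCSI target from a Windows host.
# The portal address and IQN below are illustrative placeholders.
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"

# List the targets exposed through the portal and pick the device's IQN.
Get-IscsiTarget

# Connect persistently so the volume reappears after a reboot.
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:storsimple-vol01" -IsPersistent $true
```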
## <a name="storsimple-virtual-device"></a>StorSimple virtual device
You can use StorSimple to create a virtual device that replicates the architecture and capabilities of the physical hybrid storage device. The StorSimple virtual device (also known as the StorSimple Virtual Appliance) runs on a single node Azure virtual machine. (A virtual device can be created only on an Azure virtual machine. You cannot create one on a StorSimple device or an on-premises server.)
The virtual device provides the following features:
- It behaves like a physical device and can offer an iSCSI interface to virtual machines in the cloud.
- You can create an unlimited number of virtual devices in the cloud and turn them on and off as needed.
- It helps you simulate on-premises environments for disaster recovery, development, and test scenarios, and it can help with item-level retrieval from backups.
With Update 2 or later, the StorSimple virtual device is available in two models: the 8010 device (formerly known as the 1100 model) and the 8020 device. The 8010 device has a maximum capacity of 30 TB. The 8020 device, which takes advantage of Azure Premium Storage, has a maximum capacity of 64 TB. (In the local tiers, Azure Premium Storage stores data on SSDs, whereas standard storage stores data on HDDs.) Note that you must have an Azure Premium Storage account to use premium storage. For more information about Premium Storage, go to [Premium Storage: high-performance storage for Azure virtual machine workloads](../storage/storage-premium-storage.md).
For more information about the StorSimple virtual device, go to [Deploy and manage a StorSimple virtual device in Azure](storsimple-virtual-device-u2.md).
## <a name="storsimple-manager-service"></a>StorSimple Manager service
Microsoft Azure StorSimple provides a web-based user interface (the StorSimple Manager service) that enables you to centrally manage data center and cloud storage. You can use the StorSimple Manager service to perform the following tasks:
- Configure system settings for StorSimple devices.
- Configure and manage security settings for StorSimple devices.
- Configure cloud credentials and properties.
- Configure and manage volumes on a server.
- Configure volume groups.
- Back up and restore data.
- Monitor performance.
- Review system settings and identify possible problems.
The StorSimple Manager service lets you perform all administration tasks except those that require system downtime, such as initial configuration and installing updates.
For more information, go to [Use the StorSimple Manager service to administer your StorSimple device](storsimple-manager-service-administration.md).
## <a name="windows-powershell-for-storsimple"></a>Windows PowerShell for StorSimple
Windows PowerShell for StorSimple provides a command-line interface that you can use to create and manage the Microsoft Azure StorSimple service and to set up and monitor StorSimple devices. It is a Windows PowerShell-based command-line interface that includes dedicated cmdlets for managing your StorSimple device. Windows PowerShell for StorSimple has features that allow you to:
- Register a device.
- Configure the network interface on a device.
- Install certain types of updates.
- Troubleshoot a device by accessing the support session.
- Change the device state.
You can access Windows PowerShell for StorSimple from the serial console (on a host computer connected directly to the device) or remotely by using Windows PowerShell remoting. Note that some Windows PowerShell for StorSimple tasks, such as initial device registration, can be done only on the serial console.
For more information, go to [Use Windows PowerShell for StorSimple to administer your device](storsimple-windows-powershell-administration.md).
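For example, a remote management session looks roughly like the following. This is a sketch: the device IP and user name are placeholders, while `SSAdminConsole` is the session configuration name used in the StorSimple documentation, and the two HCS cmdlets shown are examples of the dedicated cmdlet set:
```powershell
# Open a remote Windows PowerShell for StorSimple session (illustrative values).
$cred = Get-Credential -UserName "SSAdmin" -Message "StorSimple device administrator"
Enter-PSSession -ComputerName "10.0.0.40" -Credential $cred -ConfigurationName "SSAdminConsole"

# Inside the session, the dedicated HCS cmdlets manage the device, for example:
Invoke-HcsSetupWizard     # guided initial configuration
Get-HcsUpdateStatus       # check whether an update is in progress
```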
## <a name="azure-powershell-storsimple-cmdlets"></a>Azure PowerShell StorSimple cmdlets
The Azure PowerShell StorSimple cmdlets are a collection of Windows PowerShell cmdlets that let you automate service-level and migration tasks from the command line. For more information about the Azure PowerShell cmdlets for StorSimple, go to the [cmdlet reference](https://msdn.microsoft.com/library/dn920427.aspx).
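As a sketch of such automation, assuming the classic (Service Management) Azure PowerShell module is installed and that the resource and device names are placeholders:
```powershell
# Enumerate StorSimple resources and devices at the service level (illustrative).
Add-AzureAccount                                          # sign in to the classic deployment model
Select-AzureStorSimpleResource -ResourceName "MyStorSimpleManager"
Get-AzureStorSimpleDevice                                 # list devices registered with the service
Get-AzureStorSimpleDeviceVolumeContainer -DeviceName "MyDevice8100"
```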
## <a name="storsimple-snapshot-manager"></a>StorSimple Snapshot Manager
StorSimple Snapshot Manager is a Microsoft Management Console (MMC) snap-in that you can use to create consistent, point-in-time copies of local and cloud data. The snap-in runs on a Windows Server-based host. You can use StorSimple Snapshot Manager to:
- Configure, back up, and delete volumes.
- Configure volume groups to ensure that backed-up data is application-consistent.
- Manage backup policies so that data is backed up on a predetermined schedule and stored in a designated location (locally or in the cloud).
- Restore volumes and individual files.
Backups are captured as snapshots, which record only the changes since the last snapshot was taken and require far less storage space than full backups. You can create backup schedules or take immediate backups as needed. Additionally, you can use StorSimple Snapshot Manager to establish retention policies that control how many snapshots are saved. If you later need to restore data from a backup, StorSimple Snapshot Manager lets you select from the catalog of local or cloud snapshots.
If a disaster occurs, or if you need to restore data for another reason, StorSimple Snapshot Manager restores it incrementally as it is needed. Restoring data does not require shutting down the entire system while you restore a file, replace equipment, or move operations to another site.
For more information, go to [What is StorSimple Snapshot Manager?](storsimple-what-is-snapshot-manager.md)
## <a name="storsimple-adapter-for-sharepoint"></a>StorSimple Adapter for SharePoint
Microsoft Azure StorSimple includes the StorSimple Adapter for SharePoint, an optional component that transparently extends StorSimple data protection and storage features to SharePoint server farms. The adapter works with a Remote Blob Storage (RBS) provider and the SQL Server RBS feature, allowing you to move BLOBs to a server backed by the Microsoft Azure StorSimple system. Microsoft Azure StorSimple then stores the BLOB data locally or in the cloud, based on usage.
The StorSimple Adapter for SharePoint is managed from within the SharePoint Central Administration portal. Consequently, SharePoint management remains centralized, and all storage appears to be within the SharePoint farm.
For more information, go to [StorSimple Adapter for SharePoint](storsimple-adapter-for-sharepoint.md).
## <a name="storage-management-technologies"></a>Storage management technologies
In addition to the dedicated StorSimple device, virtual device, and other components, Microsoft Azure StorSimple uses the following software technologies to provide quick access to data and to reduce storage consumption:
- [Automatic storage tiering](#automatic-storage-tiering)
- [Thin provisioning](#thin-provisioning)
- [Deduplication and compression](#deduplication-and-compression)
### <a name="automatic-storage-tiering"></a>Automatic storage tiering
Microsoft Azure StorSimple automatically arranges data in logical tiers based on current usage, age, and relationship to other data. The most active data is stored locally, while less active and inactive data is automatically migrated to the cloud. The following diagram illustrates this storage approach.

To enable quick access, StorSimple stores very active data (hot data) on SSDs in the StorSimple device. It stores data that is used occasionally (warm data) on HDDs in the device or on servers at the data center. It moves inactive data, backup data, and data retained for archival or compliance purposes to the cloud.
>[AZURE.NOTE] With Update 2 or later, you can specify a volume as locally pinned, in which case the data remains on the local device and is not tiered to the cloud.
StorSimple adjusts and rearranges data and storage assignments as usage patterns change. For example, some information might become less active over time. As it becomes progressively less active, it is migrated from SSD to HDD and then to the cloud. If the same data becomes active again, it is migrated back to the storage device.
The storage tiering process occurs as follows:
1. A system administrator sets up a Microsoft Azure cloud storage account.
2. The administrator uses the serial console and the StorSimple Manager service (which runs in the Azure classic portal) to configure the device and file server, creating volumes and data protection policies. On-premises machines (such as file servers) use the Internet Small Computer System Interface (iSCSI) to access the StorSimple device.
3. Initially, StorSimple stores data in the fast SSD tier of the device.
4. As the SSD tier approaches capacity, StorSimple deduplicates and compresses the oldest data blocks and moves them to the HDD tier.
5. As the HDD tier approaches capacity, StorSimple encrypts the oldest data blocks and sends them securely to the Microsoft Azure storage account over HTTPS.
6. Microsoft Azure creates multiple replicas of the data in its data center and in a remote data center, ensuring that the data can be recovered if a disaster occurs.
7. When the file server requests data stored in the cloud, StorSimple returns it seamlessly and stores a copy in the SSD tier of the StorSimple device.
### <a name="thin-provisioning"></a>Thin provisioning
Thin provisioning is a virtualization technology in which available storage appears to exceed the physical resources. Instead of reserving sufficient storage in advance, StorSimple uses thin provisioning to allocate just enough space to meet current requirements. The elastic nature of cloud storage facilitates this approach because StorSimple can increase or decrease cloud storage to meet changing demands.
>[AZURE.NOTE] Locally pinned volumes are not thinly provisioned. The storage allocated to a locally pinned volume is fully provisioned when the volume is created.
### <a name="deduplication-and-compression"></a>Deduplication and compression
Microsoft Azure StorSimple uses deduplication and data compression to further reduce storage requirements.
Deduplication reduces the overall amount of data stored by eliminating redundancy in the stored data set. As information changes, StorSimple ignores the unchanged data and records only the changes. In addition, StorSimple reduces the amount of stored data by identifying and removing unnecessary information.
>[AZURE.NOTE] Data on locally pinned volumes is not deduplicated or compressed. However, backups of locally pinned volumes are deduplicated and compressed.
## <a name="storsimple-workload-summary"></a>StorSimple workload summary
A summary of the workloads supported by StorSimple is tabulated below.
| Scenario | Workload | Supported | Restrictions | Version |
|---------------------------|-------------------------|-----------|------------------------------------------------|----------------------|
| Collaboration | File sharing | Yes | | All versions |
| Collaboration | Distributed file sharing | Yes | | All versions |
| Collaboration | SharePoint | Yes* | Supported only with locally pinned volumes | Update 2 and later |
| Archival | Simple file archiving | Yes | | All versions |
| Virtualization | Virtual machines | Yes* | Supported only with locally pinned volumes | Update 2 and later |
| Database | SQL | Yes* | Supported only with locally pinned volumes | Update 2 and later |
| Video surveillance | Video surveillance | Yes* | Supported when the StorSimple device is dedicated solely to this workload | Update 2 and later |
| Backup | Primary backup target | Yes* | Supported when the StorSimple device is dedicated solely to this workload | Update 3 and later |
| Backup | Secondary backup target | Yes* | Supported when the StorSimple device is dedicated solely to this workload | Update 3 and later |
*Yes\*: solution guidelines and restrictions should be applied.*
The following workloads are not supported by StorSimple 8000 series devices. If deployed on StorSimple, these workloads will result in an unsupported configuration.
- Medical imaging
- Exchange
- VDI
- Oracle
- SAP
- Big data
- Content distribution
- Boot over iSCSI
The following is a list of supported StorSimple infrastructure components.
| Scenario | Workload | Supported | Restrictions | Version |
|----------|---------------|-----------|-----------------------------------------------|--------------|
| General | ExpressRoute | Yes | | All versions |
| General | DataCore FC | Yes* | Supported with DataCore SANsymphony | All versions |
| General | DFSR | Yes* | Supported only with locally pinned volumes | All versions |
| General | Indexing | Yes* | For tiered volumes, only metadata indexing is supported (no data indexing).<br>For locally pinned volumes, full indexing is supported.| All versions |
| General | Anti-virus | Yes* | For tiered volumes, only scanning on open and close is supported.<br>For locally pinned volumes, full scanning is supported.| All versions |
*Yes\*: solution guidelines and restrictions should be applied.*
## <a name="storsimple-terminology"></a>StorSimple terminology
Before deploying your Microsoft Azure StorSimple solution, we recommend that you review the following terms and definitions.
### <a name="key-terms-and-definitions"></a>Key terms and definitions
| Term (acronym or abbreviation) | Description |
| ------------------------------ | ---------------- |
| access control record (ACR) | A record associated with a volume on your Microsoft Azure StorSimple device that determines which hosts can connect to it. The determination is based on the iSCSI Qualified Names (IQNs) of the hosts (contained in the ACR) that connect to your StorSimple device.|
| AES-256 | A 256-bit Advanced Encryption Standard (AES) algorithm for encrypting data as it moves to and from the cloud. |
| allocation unit size (AUS) | The smallest amount of disk space that can be allocated to hold a file in Windows file systems. If a file size is not an even multiple of the cluster size, extra space must be used to hold the file (up to the next multiple of the cluster size), resulting in lost space and fragmentation of the hard disk. <br>The recommended AUS for Azure StorSimple volumes is 64 KB because it works well with the deduplication algorithms.|
| automatic storage tiering | Automatically moving less active data from SSDs to HDDs and then to a tier in the cloud, while enabling management of all storage from a central user interface.|
| backup catalog | A collection of backups, usually related by the application type that was used. This collection is displayed on the Backup Catalog page of the StorSimple Manager service UI.|
| backup catalog file | A file containing a list of available snapshots currently stored in the backup database of StorSimple Snapshot Manager. |
| backup policy | A selection of volumes, a backup type, and a schedule that allow you to create backups on a predefined schedule.|
| binary large object (BLOB) | A collection of binary data stored as a single entity in a database management system. BLOBs are typically images, audio, or other multimedia objects, although binary executable code is sometimes stored as a BLOB.|
| Challenge Handshake Authentication Protocol (CHAP) | A protocol used to authenticate the peer of a connection, based on the peers sharing a password or secret. CHAP can be one-way or mutual. With one-way CHAP, the target authenticates the initiator. Mutual CHAP requires that the target authenticate the initiator and that the initiator authenticate the target. |
| clone | A copy of a volume. |
| Cloud as a Tier (CaaT) | Cloud storage integrated as a tier within the storage architecture so that all storage appears to be part of a single storage network.|
| cloud service provider | A provider of cloud computing services.|
| cloud snapshot | A point-in-time copy of volume data that is stored in the cloud. A cloud snapshot is equivalent to a snapshot replicated to a different, off-site storage system. Cloud snapshots are particularly useful in disaster recovery scenarios.|
| cloud storage encryption key | A password or key used by your StorSimple device to access the encrypted data sent by the device to the cloud.|
| Cluster-Aware Updating | Managing software updates on servers in a failover cluster so that the updates have minimal or no effect on service availability.|
| data path | The collection of interconnected functional units that carry out data processing operations.|
| deactivate | A permanent action that breaks the connection between a StorSimple device and the associated cloud services. The cloud snapshots of the device remain after this process and can be cloned or used for disaster recovery.|
| disk mirroring | Replication of logical disk volumes across separate physical hard disks in real time to ensure continuous availability.|
| dynamic disk mirroring | Replication of logical disk volumes across dynamic disks.|
| dynamic disks | A disk volume format that uses Logical Disk Manager to store and manage data across multiple physical disks. Dynamic disks can be enlarged to provide more free space.|
| Extended Bunch of Disks (EBOD) | A secondary enclosure of a Microsoft Azure StorSimple device that contains additional hard disk drives for extra storage.|
| fat provisioning | Conventional storage provisioning in which storage space is allocated based on anticipated needs (and usually beyond the current need). See also *thin provisioning*.|
| hard disk drive (HDD) | A drive that uses rotating platters to store data.|
| hybrid cloud storage | A storage architecture that uses both local and off-site resources, including cloud storage.|
| Internet Small Computer System Interface (iSCSI) | An IP-based storage networking standard for linking data storage facilities or devices.|
| iSCSI initiator | A software component that enables a Windows-based computer to connect to an external iSCSI-based storage network.|
| iSCSI Qualified Name (IQN) | A unique name that identifies an iSCSI target or initiator.|
| iSCSI target | A software component that provides centralized iSCSI disk subsystems in storage area networks.|
| live archiving | A storage approach in which archival data is accessible all the time (it is not stored off-site on tape, for example). Microsoft Azure StorSimple uses live archiving.|
| locally pinned volume | A volume that resides on the device and is never tiered to the cloud. |
| local snapshot | A point-in-time copy of volume data that is stored on the Microsoft Azure StorSimple device.|
| Microsoft Azure StorSimple | A powerful solution consisting of a data center storage appliance and software that enables organizations to take advantage of cloud storage as though it were data center storage. StorSimple simplifies data protection and data management while reducing costs. The solution consolidates primary storage, archive, backup, and disaster recovery (DR) through integration with the cloud. By combining SAN storage and cloud data management on an enterprise-class platform, StorSimple devices enable speed, simplicity, and reliability for all storage-related needs.|
| Power and Cooling Module (PCM) | Hardware components of the StorSimple device consisting of the power supply and cooling fans, hence the name Power and Cooling Module. The primary enclosure of the device has two 764 W PCMs, while the EBOD enclosure has two 580 W PCMs.|
| primary enclosure | The main enclosure of the StorSimple device, which contains the application platform controllers.|
| recovery time objective (RTO) | The maximum amount of time that should be expended before a business process or system is fully restored after a disaster.|
| serial attached SCSI (SAS) | A type of hard disk drive (HDD).|
| service data encryption key | A key made available to any new StorSimple device that registers with the StorSimple Manager service. The configuration data exchanged between the StorSimple Manager service and the device is encrypted using a public key and can then be decrypted only on the device, using the private key. The service data encryption key enables the service to obtain the private key for decryption.|
| service registration key | A key that helps register the StorSimple device with the StorSimple Manager service so that the device appears in the Azure classic portal for further management actions.|
| Small Computer System Interface (SCSI) | A set of standards for physically connecting computers and transferring data between them.|
| solid state drive (SSD) | A drive that contains no moving parts; for example, a flash drive.|
| storage account | A set of access credentials linked to the storage account for a given cloud service provider.|
| StorSimple Adapter for SharePoint | A Microsoft Azure StorSimple component that transparently extends StorSimple storage and data protection to SharePoint server farms.|
| StorSimple Manager service | An extension of the Azure classic portal that lets you manage your Azure StorSimple on-premises and virtual devices.|
| StorSimple Snapshot Manager | A Microsoft Management Console (MMC) snap-in for managing backup and restore operations in Microsoft Azure StorSimple.|
| take backup | A feature that allows the user to take an interactive backup of a volume. It is an alternate way of taking a manual backup of a volume, instead of an automated backup taken according to a defined policy.|
| thin provisioning | A method of optimizing the efficiency with which available space is used in storage systems. In thin provisioning, storage is allocated among multiple users based on the minimum space required by each user at any given time. See also *fat provisioning*.|
| tiering | Arranging data in logical groupings based on current usage, age, and relationship to other data. StorSimple automatically arranges data in tiers. |
| volume | Logical storage areas presented in the form of drives. StorSimple volumes correspond to the volumes mounted by the host, including those discovered through iSCSI and a StorSimple device.|
| volume container | A grouping of volumes and the settings that apply to them. All volumes in your StorSimple device are grouped into volume containers. Volume container settings include the storage account, encryption settings for data sent to the cloud with the associated encryption keys, and the bandwidth for cloud-related operations.|
| volume group | In StorSimple Snapshot Manager, a volume group is a collection of volumes configured to facilitate backup processing.|
| Volume Shadow Copy Service (VSS) | A Windows Server operating system service that facilitates application consistency by communicating with VSS-aware applications to coordinate the creation of incremental snapshots. VSS ensures that the applications are temporarily inactive when snapshots are taken.|
| Windows PowerShell for StorSimple | A Windows PowerShell-based command-line interface used to operate and manage your StorSimple device. While maintaining some of the basic capabilities of Windows PowerShell, this interface has additional dedicated cmdlets that are geared towards managing a StorSimple device.|
## <a name="next-steps"></a>Next steps
Learn about [StorSimple security](storsimple-security.md).
| 130.978193 | 875 | 0.805561 | pol_Latn | 0.999977 |
e1f09fd199591935fb0608111d7f73c44e9c1a3b | 629 | md | Markdown | contents/sparta-coding-club-impression.md | HyungjunJeon/hyungjunjeon.github.io | 7194efc53704b3ae3de618d298a46900e7853573 | [
"RSA-MD"
] | null | null | null | contents/sparta-coding-club-impression.md | HyungjunJeon/hyungjunjeon.github.io | 7194efc53704b3ae3de618d298a46900e7853573 | [
"RSA-MD"
] | 1 | 2022-02-08T12:46:58.000Z | 2022-02-08T12:46:59.000Z | contents/sparta-coding-club-impression.md | HyungjunJeon/hyungjunjeon.github.io | 7194efc53704b3ae3de618d298a46900e7853573 | [
"RSA-MD"
] | null | null | null | ---
date: '2022-03-06'
title: 'Sparta Coding Club Naeil Baeum Dan S-On-S Review'
categories: ['Sparta Coding Club Naeil Baeum Dan']
summary: 'A review of the Naeil Baeum Dan S-On-S program'
---
The S-On-S program is run by Sparta Coding Club for students of Naeil Baeum Dan, its K-Digital Credit government-funded program, who apply to join. The program consists of completing a course structured as a four-to-five-week curriculum within two weeks. Every day at a set time, we met for two hours on Gather Town, a metaverse platform, where we heard announcements and tips from the manager and could also meet other students, which I liked because it briefly felt like attending school again. On certain days, a manager who answers questions also joined, so you could ask questions on the spot. When taking online lectures, you sometimes fail to finish the course, but if you join the S-On-S program you get a reminder shortly before everyone gathers in Gather Town, and there is the added fun of studying with other students, so it really seems to help you see the course through to the end. When S-On-S finishes, you can apply for a team project called the Making Challenge; I am curious about which teammates I will meet, and it sounds like fun, so I am looking forward to it!
| 69.888889 | 512 | 0.73132 | kor_Hang | 1.00001 |
e1f22259a29ffd3a68c2ed3a3766b25c6a544391 | 120 | md | Markdown | .github/ISSUE_TEMPLATE/REF.md | kecol/fracdiff | 8e8ab346771b488d894e117800da48c73f41a553 | [
"BSD-3-Clause"
] | 24 | 2019-11-02T01:57:43.000Z | 2021-04-24T16:30:58.000Z | .github/ISSUE_TEMPLATE/REF.md | kecol/fracdiff | 8e8ab346771b488d894e117800da48c73f41a553 | [
"BSD-3-Clause"
] | 152 | 2019-12-24T09:52:10.000Z | 2021-04-28T12:25:11.000Z | .github/ISSUE_TEMPLATE/REF.md | kecol/fracdiff | 8e8ab346771b488d894e117800da48c73f41a553 | [
"BSD-3-Clause"
] | 5 | 2020-11-29T14:53:18.000Z | 2021-04-14T09:21:23.000Z | ---
name: "[REF] Refactoring"
about: 'Request for refactoring'
title: "[REF] "
labels: refactoring
assignees: ''
---
| 10.909091 | 32 | 0.65 | eng_Latn | 0.706998 |
e1f2c7d08594cf81f7137fd9dce9534994613228 | 397 | md | Markdown | CONTRIBUTING.md | ricardojmendez/memento | ad6fe05e0e7eabf3c9a9d6750d9e7775503020bc | [
"MIT"
] | 2 | 2017-10-03T08:06:31.000Z | 2018-03-06T21:46:01.000Z | CONTRIBUTING.md | ricardojmendez/memento | ad6fe05e0e7eabf3c9a9d6750d9e7775503020bc | [
"MIT"
] | null | null | null | CONTRIBUTING.md | ricardojmendez/memento | ad6fe05e0e7eabf3c9a9d6750d9e7775503020bc | [
"MIT"
] | null | null | null | # Contributing guidelines
Basic guidelines for contributing:
- Make sure the project passes tests after the changes;
- Open a pull request;
- Document the rationale for the changes;
By creating a pull request, you confirm that you're willing to release your changes under the MIT license.
Please keep your pull requests succinct and atomic, as that will make it easier to review them.
Thanks! | 30.538462 | 106 | 0.790932 | eng_Latn | 0.99886 |
e1f2d6b664c0980952b83e43a249c161326b2943 | 1,404 | md | Markdown | AlchemyInsights/outlook-com-safe-senders.md | pebaum/OfficeDocs-AlchemyInsights-pr.cs-CZ | 3c55a84664ad4f0f0ef39dced9e6ca253b21ba71 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AlchemyInsights/outlook-com-safe-senders.md | pebaum/OfficeDocs-AlchemyInsights-pr.cs-CZ | 3c55a84664ad4f0f0ef39dced9e6ca253b21ba71 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AlchemyInsights/outlook-com-safe-senders.md | pebaum/OfficeDocs-AlchemyInsights-pr.cs-CZ | 3c55a84664ad4f0f0ef39dced9e6ca253b21ba71 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 8000089 Outlook.com bezpeční odesílatelé
ms.author: daeite
author: daeite
manager: joallard
ms.date: 04/21/2020
ms.audience: Admin
ms.topic: article
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.custom:
- "1400"
- "8000089"
ms.openlocfilehash: 3196105d10f57b6448497938367d0506957127d2
ms.sourcegitcommit: 631cbb5f03e5371f0995e976536d24e9d13746c3
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 04/22/2020
ms.locfileid: "43743626"
---
# <a name="stop-messages-from-going-into-your-junk-email-folder"></a>Zastavení přistajených zpráv do složky Nevyžádaná pošta
Jsou chvíle, kdy nechcete, aby zprávy od určité osoby nebo domény skončily ve složce Nevyžádaná pošta v Outlook.com. Zprávy z adres nebo domén v seznamu bezpečných odesílatelů nebudou přesunuty do složky Nevyžádaná pošta.
1. Otevřete [nastavení bezpečných odesílatelů](https://go.microsoft.com/fwlink/?linkid=2035804).
2. V části **Bezpeční odesílatelé a domény**zadejte e-mailovou adresu nebo doménu, kterou chcete přidat, a vyberte **Přidat**.
3. Chcete-li přidat seznam adresátů do seznamu bezpečných odesílatelů, zadejte seznam adresátů v části **Bezpečné seznamy adresátů** a vyberte **Přidat**.
4. Vyberte **Uložit**.
Přečtěte si více na [bloku nebo odblokovat odesílatele v Outlook.com](https://support.office.com/article/afba1c94-77bb-4f50-8b85-057cf52f4d5e?wt.mc_id=Office_Outlook_com_Alchemy). | 46.8 | 221 | 0.803419 | ces_Latn | 0.994934 |
e1f3147e8a458db60a3e2ab2617b600f49db06e1 | 540 | md | Markdown | README.md | namuyan/bip32nem | 6d70c5d26db23eab3d0d44631f5496397d6db8f8 | [
"MIT"
] | null | null | null | README.md | namuyan/bip32nem | 6d70c5d26db23eab3d0d44631f5496397d6db8f8 | [
"MIT"
] | null | null | null | README.md | namuyan/bip32nem | 6d70c5d26db23eab3d0d44631f5496397d6db8f8 | [
"MIT"
] | null | null | null | BIP32 for NEM
====
forked from [bip32utils](https://github.com/drmoog/bip32utils) fixed to work on Python3.
I will edit it for NEM.
Links
----
* [BIP32 Hierarchical Deterministic Wallets](https://github.com/sipa/bips/blob/bip32update/bip-0032.mediawiki)
* [BIP44 Multi-Account Hierarchy for Deterministic Wallets](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
* [Registered coin types for BIP-0044](https://github.com/satoshilabs/slips/blob/master/slip-0044.md)
* [Mnemonic Code Converter](https://iancoleman.io/bip39/)
| 41.538462 | 123 | 0.768519 | yue_Hant | 0.278604 |
e1f3231f8d65ab0685b49914b226c2591d04a361 | 286 | md | Markdown | content/draft/why-you-shouldnt-add-company-mail-exchange-account-on-personal-phone.md | kushdilip/blog | 433e29c77afb525fc478794f3ea93a4da0352b19 | [
"MIT"
] | 1 | 2020-07-28T19:23:05.000Z | 2020-07-28T19:23:05.000Z | content/draft/why-you-shouldnt-add-company-mail-exchange-account-on-personal-phone.md | kushdilip/blog | 433e29c77afb525fc478794f3ea93a4da0352b19 | [
"MIT"
] | 4 | 2019-07-03T10:29:39.000Z | 2019-07-03T17:09:38.000Z | content/draft/why-you-shouldnt-add-company-mail-exchange-account-on-personal-phone.md | kushdilip/blog | 433e29c77afb525fc478794f3ea93a4da0352b19 | [
"MIT"
] | null | null | null | ---
aliases:
- why-you-shouldnt-add-company-mail-exchange-account-on-personal-phone
date: "1970-01-01T05:30:00+05:30"
draft: true
slug: why-you-shouldnt-add-company-mail-exchange-account-on-personal-phone
title: Why you shouldn't add company mail exchange account on personal phone
---
| 31.777778 | 76 | 0.776224 | eng_Latn | 0.946929 |
e1f5a939743a8237037375d43f94bb7682e47672 | 596 | md | Markdown | docs/ru/docs/user/reference/event-classes/network/dhcp/untrusted-server.md | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 84 | 2017-10-22T11:01:39.000Z | 2022-02-27T03:43:48.000Z | docs/ru/docs/user/reference/event-classes/network/dhcp/untrusted-server.md | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 22 | 2017-12-11T07:21:56.000Z | 2021-09-23T02:53:50.000Z | docs/ru/docs/user/reference/event-classes/network/dhcp/untrusted-server.md | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 23 | 2017-12-06T06:59:52.000Z | 2022-02-24T00:02:25.000Z | ---
uuid: 7884729e-f04a-4fc2-8ed9-05ebcefa20a0
---
# Network | DHCP | Untrusted Server
Untrusted DHCP server detected
## Symptoms
## Probable Causes
## Recommended Actions
## Variables
Variable | Type | Required | Description
--- | --- | --- | ---
ip | ip_address | {{ yes }} | Source IP
interface | interface_name | {{ no }} | Source interface
## Alarms
### Raising alarms
`Network | DHCP | Untrusted Server` events may raise following alarms:
Alarm Class | Description
--- | ---
[Network \| DHCP \| Untrusted Server](../../../alarm-classes/network/dhcp/untrusted-server.md) | dispose
| 19.866667 | 104 | 0.671141 | eng_Latn | 0.424237 |
e1f88e79af751452ce36f5840b1d8e9dcd361702 | 737 | md | Markdown | README.md | artpar/jsondb | ca09cf216975478ba4aa06684249c650cccc4d45 | [
"MIT"
] | null | null | null | README.md | artpar/jsondb | ca09cf216975478ba4aa06684249c650cccc4d45 | [
"MIT"
] | null | null | null | README.md | artpar/jsondb | ca09cf216975478ba4aa06684249c650cccc4d45 | [
"MIT"
] | null | null | null | JsonDb
==
Execute an sql over json data
`go build`
`cat data.json | ./jsondb --sql "select t.txn_id, t.amount from data t"`
or
`./jsondb --data=data.json --sqlfile=1.sql`
Todo
- Only `concat`,`max`,`min` functions are implemented, need to implement other functions
- Only `+`,`/`,`*`,`%` is implemented in math operators, need to implement other functions
- The implementation currently might be undefined in certain cases, need to check that.
we can use this now, the output of jsondb can be fed to jsondb again and queried over
`cat data.json | ./jsondb --sql "select t.txn_id, t.amount from data t" | ./jsondb --sql="select max(t.amount) from data t"`
| 30.708333 | 125 | 0.639077 | eng_Latn | 0.983206 |
e1f8f96d13fa6aeed265dc3feb4b25c4decf6b1c | 9,490 | md | Markdown | fabric/30678-30905/30759.md | hyperledger-gerrit-archive/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 2 | 2021-11-08T08:06:48.000Z | 2021-12-03T01:51:44.000Z | fabric/30678-30905/30759.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | null | null | null | fabric/30678-30905/30759.md | cendhu/fabric-gerrit | 188c6e69ccb2e4c4d609ae749a467fa7e289b262 | [
"Apache-2.0"
] | 4 | 2019-12-07T05:54:26.000Z | 2020-06-04T02:29:43.000Z | <strong>Project</strong>: fabric<br><strong>Branch</strong>: master<br><strong>ID</strong>: 30759<br><strong>Subject</strong>: FAB-14839 Remove unused template<br><strong>Status</strong>: MERGED<br><strong>Owner</strong>: Gari Singh - [email protected]<br><strong>Assignee</strong>:<br><strong>Created</strong>: 4/5/2019, 4:29:18 AM<br><strong>LastUpdated</strong>: 4/8/2019, 10:02:41 AM<br><strong>CommitMessage</strong>:<br><pre>FAB-14839 Remove unused template
No longer needed as target which
used this was removed in a prior
change
Change-Id: I7dbe3a0e1aa32b295dcab886005d5121e39b0569
Signed-off-by: Gari Singh <[email protected]>
</pre><h1>Comments</h1><strong>Reviewer</strong>: Gari Singh - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:29:18 AM<br><strong>Message</strong>: <pre>Uploaded patch set 1.</pre><strong>Reviewer</strong>: Gari Singh - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:31:05 AM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review+2</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:32:20 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/12848/</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:32:42 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Starting verify build</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:38:10 AM<br><strong>Message</strong>: <pre>Patch Set 1: F2-DocBuild+1 F1-VerifyBuild+1
Succeeded, Run IntegrationTest, Run UnitTest</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:38:56 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Successful
https://jenkins.hyperledger.org/job/fabric-verify-build-checks-x86_64/12848/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-build-checks-x86_64/12848</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:40:47 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-unit-tests-x86_64/11373/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:41:13 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Starting unit tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:41:31 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Started https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/8049/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 4:41:53 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Starting Integration tests</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 5:06:54 AM<br><strong>Message</strong>: <pre>Patch Set 1: F3-UnitTest+1
Succeeded</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 5:19:50 AM<br><strong>Message</strong>: <pre>Patch Set 1: F3-IntegrationTest+1
Succeeded</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/5/2019, 5:20:35 AM<br><strong>Message</strong>: <pre>Patch Set 1:
Build Successful
https://jenkins.hyperledger.org/job/fabric-verify-unit-tests-x86_64/11373/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-unit-tests-x86_64/11373
https://jenkins.hyperledger.org/job/fabric-verify-integration-tests-x86_64/8049/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-verify-integration-tests-x86_64/8049</pre><strong>Reviewer</strong>: Matthew Sykes - [email protected]<br><strong>Reviewed</strong>: 4/8/2019, 9:11:49 AM<br><strong>Message</strong>: <pre>Patch Set 1: Code-Review+2</pre><strong>Reviewer</strong>: Matthew Sykes - [email protected]<br><strong>Reviewed</strong>: 4/8/2019, 9:12:10 AM<br><strong>Message</strong>: <pre>Patch Set 2: Patch Set 1 was rebased</pre><strong>Reviewer</strong>: Matthew Sykes - [email protected]<br><strong>Reviewed</strong>: 4/8/2019, 9:12:14 AM<br><strong>Message</strong>: <pre>Change has been successfully merged by Matthew Sykes</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/8/2019, 9:15:06 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-merge-x86_64/6453/ (1/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/8/2019, 9:15:28 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Started https://jenkins.hyperledger.org/job/fabric-merge-end-2-end-x86_64/5139/ (2/2)</pre><strong>Reviewer</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Reviewed</strong>: 4/8/2019, 10:02:41 AM<br><strong>Message</strong>: <pre>Patch Set 2:
Build Successful
https://jenkins.hyperledger.org/job/fabric-merge-x86_64/6453/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-merge-x86_64/6453
https://jenkins.hyperledger.org/job/fabric-merge-end-2-end-x86_64/5139/ : SUCCESS (skipped)
Logs: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-merge-end-2-end-x86_64/5139</pre><h1>PatchSets</h1><h3>PatchSet Number: 1</h3><blockquote><strong>Type</strong>: REWORK<br><strong>Author</strong>: Gari Singh - [email protected]<br><strong>Uploader</strong>: Gari Singh - [email protected]<br><strong>Created</strong>: 4/5/2019, 4:29:18 AM<br><strong>UnmergedRevision</strong>: [595e81a10d10e36115e53f0dc7b210a111031b6f](https://github.com/hyperledger-gerrit-archive/fabric/commit/595e81a10d10e36115e53f0dc7b210a111031b6f)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 4:38:10 AM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 4:38:10 AM<br><strong>Type</strong>: F2-DocBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 5:19:50 AM<br><strong>Type</strong>: F3-IntegrationTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 5:06:54 AM<br><strong>Type</strong>: F3-UnitTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Gari Singh - [email protected]<br><strong>Approved</strong>: 4/5/2019, 4:31:05 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Matthew Sykes - [email protected]<br><strong>Approved</strong>: 4/8/2019, 9:11:49 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br></blockquote><h3>PatchSet Number: 2</h3><blockquote><strong>Type</strong>: TRIVIAL_REBASE<br><strong>Author</strong>: Gari Singh - [email protected]<br><strong>Uploader</strong>: Matthew Sykes - [email protected]<br><strong>Created</strong>: 4/8/2019, 9:12:10 AM<br><strong>GitHubMergedRevision</strong>: [9c38c0a0acb94e514fec075c441ddcb0a611fd01](https://github.com/hyperledger-gerrit-archive/fabric/commit/9c38c0a0acb94e514fec075c441ddcb0a611fd01)<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 4:38:10 AM<br><strong>Type</strong>: F1-VerifyBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 4:38:10 AM<br><strong>Type</strong>: F2-DocBuild<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 5:19:50 AM<br><strong>Type</strong>: F3-IntegrationTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Hyperledger Jobbuilder - [email protected]<br><strong>Approved</strong>: 4/5/2019, 5:06:54 AM<br><strong>Type</strong>: F3-UnitTest<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Gari Singh - [email protected]<br><strong>Approved</strong>: 4/5/2019, 4:31:05 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>Approver</strong>: Matthew Sykes - [email protected]<br><strong>Approved</strong>: 4/8/2019, 9:11:49 AM<br><strong>Type</strong>: Code-Review<br><strong>Value</strong>: 1<br><br><strong>MergedBy</strong>: Matthew Sykes<br><strong>Merged</strong>: 4/8/2019, 9:12:14 AM<br><br></blockquote> | 166.491228 | 3,607 | 0.758377 | kor_Hang | 0.404423 |
e1fdcf3507cf80c539987641405a88e527b1dd76 | 4,543 | md | Markdown | README.md | textcreationpartnership/K023049.000 | 82921e129cf496a3ea1880369198ab3dac128af7 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/K023049.000 | 82921e129cf496a3ea1880369198ab3dac128af7 | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/K023049.000 | 82921e129cf496a3ea1880369198ab3dac128af7 | [
"CC0-1.0"
] | null | null | null | #An epistle from Mr. Pope, to Dr. Arbuthnot#
##Pope, Alexander, 1688-1744.##
An epistle from Mr. Pope, to Dr. Arbuthnot
Pope, Alexander, 1688-1744.
##General Summary##
**Links**
[TCP catalogue](http://www.ota.ox.ac.uk/tcp/) •
[HTML](http://tei.it.ox.ac.uk/tcp/Texts-HTML/free/004/004809173.html) •
[EPUB](http://tei.it.ox.ac.uk/tcp/Texts-EPUB/free/004/004809173.epub)
**Availability**
This keyboarded and encoded edition of the
work described above is co-owned by the institutions
providing financial support to the Early English Books
Online Text Creation Partnership. This Phase I text is
available for reuse, according to the terms of Creative
Commons 0 1.0 Universal. The text can be copied,
modified, distributed and performed, even for
commercial purposes, all without asking permission.
##Content Summary##
#####Front#####
1. ADVERTISEMENT.
#####Body#####
1. AN EPISTLE TO Dr. ARBUTHNOT.
**Types of content**
* There are 409 **verse** lines!
* Oh, Mr. Jourdain, there is **prose** in there!
There are 11 **ommitted** fragments!
@__reason__ (11) : blank (11) • @__resp__ (11) : #OXF (11) • @__extent__ (11) : 1+ letters (11)
**Character listing**
|Text|string(s)|codepoint(s)|
|---|---|---|
|Latin Extended-A|ſ|383|
|General Punctuation|—…|8212 8230|
|Superscripts and Subscripts|⁰|8304|
##Tag Usage Summary##
###Header Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__author__|2||
|2.|__availability__|1||
|3.|__biblFull__|1||
|4.|__date__|2| @__when__ (1) : 2007-01 (1)|
|5.|__editorialDecl__|1||
|6.|__extent__|2||
|7.|__idno__|7| @__type__ (7) : DLPS (1), ESTC (1), DOCNO (1), TCP (1), GALEDOCNO (1), CONTENTSET (1), IMAGESETID (1)|
|8.|__langUsage__|1||
|9.|__language__|1| @__ident__ (1) : eng (1)|
|10.|__listPrefixDef__|1||
|11.|__note__|6||
|12.|__notesStmt__|1||
|13.|__p__|11||
|14.|__prefixDef__|2| @__ident__ (2) : tcp (1), char (1) • @__matchPattern__ (2) : ([0-9\-]+):([0-9IVX]+) (1), (.+) (1) • @__replacementPattern__ (2) : http://eebo.chadwyck.com/downloadtiff?vid=$1&page=$2 (1), https://raw.githubusercontent.com/textcreationpartnership/Texts/master/tcpchars.xml#$1 (1)|
|15.|__projectDesc__|1||
|16.|__pubPlace__|2||
|17.|__publicationStmt__|2||
|18.|__publisher__|2||
|19.|__ref__|2| @__target__ (2) : https://creativecommons.org/publicdomain/zero/1.0/ (1), http://www.textcreationpartnership.org/docs/. (1)|
|20.|__sourceDesc__|1||
|21.|__title__|2||
|22.|__titleStmt__|2||
###Text Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__bibl__|1||
|2.|__desc__|11||
|3.|__div__|3| @__type__ (3) : title_page (1), authors_note (1), poem (1)|
|4.|__g__|10| @__ref__ (10) : char:EOLhyphen (10)|
|5.|__gap__|11| @__reason__ (11) : blank (11) • @__resp__ (11) : #OXF (11) • @__extent__ (11) : 1+ letters (11)|
|6.|__head__|2||
|7.|__hi__|201||
|8.|__l__|409| @__n__ (78) : 5 (1), 10 (1), 15 (1), 20 (1), 25 (1), 30 (1), 35 (1), 40 (1), 45 (1), 52 (1), 55 (1), 60 (1), 65 (1), 70 (1), 76 (1), 80 (1), 85 (1), 90 (1), 95 (1), 101 (1), 105 (1), 110 (1), 115 (1), 120 (1), 125 (1), 130 (1), 141 (1), 145 (1), 150 (1), 155 (1), 160 (1), 156 (1), 170 (1), 175 (1), 180 (1), 185 (1), 190 (1), 195 (1), 200 (1), 205 (1), 211 (1), 215 (1), 220 (1), 225 (1), 230 (1), 235 (1), 240 (1), 246 (1), 250 (1), 255 (1), 260 (1), 266 (1), 270 (1), 275 (1), 280 (1), 285 (1), 290 (1), 295 (1), 300 (1), 305 (1), 310 (1), 315 (1), 320 (1), 325 (1), 335 (1), 340 (1), 345 (1), 350 (1), 395 (2), 360 (1), 370 (2), 375 (1), 380 (1), 385 (1), 390 (1), 410 (1)|
|9.|__lg__|29||
|10.|__note__|11| @__n__ (5) : * (4), † (1) • @__place__ (11) : bottom (11)|
|11.|__p__|7||
|12.|__pb__|23| @__facs__ (23) : tcp:0509800600:1 (1), tcp:0509800600:2 (1), tcp:0509800600:3 (1), tcp:0509800600:4 (1), tcp:0509800600:5 (1), tcp:0509800600:6 (1), tcp:0509800600:7 (1), tcp:0509800600:8 (1), tcp:0509800600:9 (1), tcp:0509800600:10 (1), tcp:0509800600:11 (1), tcp:0509800600:12 (1), tcp:0509800600:13 (1), tcp:0509800600:14 (1), tcp:0509800600:15 (1), tcp:0509800600:16 (1), tcp:0509800600:17 (1), tcp:0509800600:18 (1), tcp:0509800600:19 (1), tcp:0509800600:20 (1), tcp:0509800600:21 (1), tcp:0509800600:22 (1), tcp:0509800600:23 (1) • @__rendition__ (1) : simple:additions (1) • @__n__ (19) : 2 (1), 3 (1), 4 (1), 5 (1), 6 (1), 7 (1), 8 (1), 9 (1), 10 (1), 11 (1), 12 (1), 13 (1), 14 (1), 15 (1), 16 (1), 17 (1), 18 (1), 19 (1), 30 (1)|
|13.|__q__|2||
|14.|__seg__|1| @__rend__ (1) : decorInit (1)|
| 44.539216 | 759 | 0.605767 | yue_Hant | 0.323855 |
e1ff257b14ce4822be08c28eb35b868ab7bd33f9 | 4,128 | md | Markdown | _posts/2019-07-29-Download-spinoza-apos-s-heresy-immortality-and-the-jewish-mind.md | Jobby-Kjhy/27 | ea48bae2a083b6de2c3f665443f18b1c8f241440 | [
"MIT"
] | null | null | null | _posts/2019-07-29-Download-spinoza-apos-s-heresy-immortality-and-the-jewish-mind.md | Jobby-Kjhy/27 | ea48bae2a083b6de2c3f665443f18b1c8f241440 | [
"MIT"
] | null | null | null | _posts/2019-07-29-Download-spinoza-apos-s-heresy-immortality-and-the-jewish-mind.md | Jobby-Kjhy/27 | ea48bae2a083b6de2c3f665443f18b1c8f241440 | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Spinoza's heresy immortality and the jewish mind book
Tick, behind the bars, 246; instinctively that this exhausting effort was precisely what I needed. The bows which I procured commonly consisted of a at spinoza apos s heresy immortality and the jewish mind. Franklin Chan? The latter looked a sickly Tremaine had a list of new prospective clients. All but a few of them freeze at the sight of the Friday, and "No, tusks, they are going to request explanations? I am sending it up. Just doing my job. " The image vanished from the screen. "I don't want to be waited on. Dump out everything you brought back from Fomalhaut. Nearby, between the Ob and the Yenisej, beef-marinated in hair oil and spicy cologne, I assume then I'm the presentee," he said, Barry nodded! No one had entered behind him? And covering all the derricks was a translucent network of ten-centimeter-wide strips of plastic, they stopped at a farmhouse that offered stabling for the horses! rendered. file:D|Documents20and20Settingsharry. " But she forgave; and the grey spinoza apos s heresy immortality and the jewish mind was pressed up far as Junior was spinoza apos s heresy immortality and the jewish mind, ma'am. The next thing I knew, raped her. From the Norway are still the most skilful harpooners. Does the water tell you?" breath began, iii, insight, then to her feet, though unfeeling dust was what she now roar of a great cataract. She had told Colman about Howard's compulsion to possess--to possess things and to possess people. " vegetation! " priest phrased it on another occasion. The thick neck. " She stares at me for several seconds. He originated requests for things like equipment and new constructions because he knew what the base needed! another sleigh drawn by spinoza apos s heresy immortality and the jewish mind dogs, visible only intermittently; it takes him five hours to pass through two days of real time, ii. They deniability, the date: 1965, he stood on Agnes's front porch this Sunday evening. 453 lichens, The Two. Kamchatka is, for that fate was contrary and fair fortune lacking, any more than she would judge all women by Sinsemilla's utensils from the sandwich shopвall spoonsвand dropped them in the trash compactor! " long-term profit in betraying her than in serving her honestly and well. She reeling off the stool. Only a handful, 31 90 file:D|Documents20and20SettingsharryDesktopUrsula20K, and was very pleasant killing it afterwards by a knife-stab behind the shoulder, he witnessed her murder? There is real work to do," the Summoner said, 311 "Could I have more lemonade?" Leilani asked, and lemmings, noisy stream he had heard singing through his Leilani knew that Preston had moved the chair close to the bed when she heard him sit on it. "You're singing," she said and lightly tugged at me. This gave the The night was in flight, and the naked arms were coloured high up with the "Great guy, releasing clouds of sparks like fireflies and spinoza apos s heresy immortality and the jewish mind black moths of paper ash. This was the final sieve, the girl said, most of the cops think you're be compressed beneath the black cloud. And it was also uncomfortably true that exploring the possibility that Cain was the rapist would tear open the wounds in the hearts of everyone in the White family, on Sunday night. An order of Carmelite nuns Into her mind came an image of the brandy that Aunt Gen kept in a kitchen cupboard. Hence "One more question, "I could chase an etymology on the brink of doom. 
"Not you," she "My master Highdrake said that wizards who make love unmake their power," he blurted out! "Being lame, her skin utterly without luster. The door to Room 724 stood open. Petersburg on the 25th December, you could pull Either the caretaker hears truth resonating in the boy's voice or he is The Creation of Ea is the foundation of education in the Archipelago, and sex has had nothing to do with its making. Only truth? he feels his way with outstretched hands to guard against surprises. The bank band awakened him. | 458.666667 | 4,001 | 0.792393 | eng_Latn | 0.999773 |
e1ff3283f9109cb559cc9dcbefb1d05599503e27 | 36 | md | Markdown | README.md | comm644/kphp_experiments | 3fc1b61b6bcbe35f2506857f01d7cb02a253f6df | [
"Apache-2.0"
] | null | null | null | README.md | comm644/kphp_experiments | 3fc1b61b6bcbe35f2506857f01d7cb02a253f6df | [
"Apache-2.0"
] | null | null | null | README.md | comm644/kphp_experiments | 3fc1b61b6bcbe35f2506857f01d7cb02a253f6df | [
"Apache-2.0"
] | null | null | null | # kphp_experiments
KPHP experiments
| 12 | 18 | 0.861111 | eng_Latn | 0.66744 |
e1ff43a4205492b686682cbae1b9e767b26f814a | 344 | md | Markdown | LogBook/2020-09-30.md | Torbjornsson/Pong | 4467901e4064a7686963cabe7c6f79993269fa5a | [
"MIT"
] | null | null | null | LogBook/2020-09-30.md | Torbjornsson/Pong | 4467901e4064a7686963cabe7c6f79993269fa5a | [
"MIT"
] | null | null | null | LogBook/2020-09-30.md | Torbjornsson/Pong | 4467901e4064a7686963cabe7c6f79993269fa5a | [
"MIT"
] | null | null | null | # 2020-09-30
- Creating github repo
- Structuring and coming up with the necessary steps
## Structure
- Solution
- Pong.UnitTests.csproj
- Pong.csproj
- Pong.Godot.csproj
- Pong.Godot.IntegrationTests.csproj
- Pong.MonoGame.csproj
- Pong.MonoGame.IntegrationTests.csproj
- Pong.Unity.csproj
- Pong.Unity.IntegrationTests.csproj | 24.571429 | 52 | 0.752907 | kor_Hang | 0.438141 |
c000c0dbdde6e638407fdf30fe869d29334e6579 | 2,502 | md | Markdown | docs/vs-2015/extensibility/debugger/reference/idebugprogram3-executeonthread.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/reference/idebugprogram3-executeonthread.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/reference/idebugprogram3-executeonthread.md | monkey3310/visualstudio-docs.pl-pl | adc80e0d3bef9965253897b72971ccb1a3781354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IDebugProgram3::ExecuteOnThread | Microsoft Docs
ms.custom: ''
ms.date: 2018-06-30
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- IDebugProgram3::ExecuteOnThread
ms.assetid: 2f5211e3-7a3f-47bf-9595-dfc8b4895d0d
caps.latest.revision: 7
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: 8ad46418897f4cdd2521209dd6643362e81bfcb6
ms.sourcegitcommit: 55f7ce2d5d2e458e35c45787f1935b237ee5c9f8
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 08/22/2018
ms.locfileid: "42685238"
---
# <a name="idebugprogram3executeonthread"></a>IDebugProgram3::ExecuteOnThread
[!INCLUDE[vs2017banner](../../../includes/vs2017banner.md)]
The latest version of this topic can be found at [IDebugProgram3::ExecuteOnThread](https://docs.microsoft.com/visualstudio/extensibility/debugger/reference/idebugprogram3-executeonthread).
Executes the debugger program. The thread is returned to give the debugger information about which thread the user is viewing while the program is executing.
## <a name="syntax"></a>Syntax
```cpp#
HRESULT ExecuteOnThread(
[in] IDebugThread2* pThread)
```
```csharp
int ExecuteOnThread(
IDebugThread2 pThread
);
```
#### <a name="parameters"></a>Parameters
`pThread`
[in] An [IDebugThread2](../../../extensibility/debugger/reference/idebugthread2.md) object.
## <a name="return-value"></a>Return Value
If successful, returns `S_OK`; otherwise, returns an error code.
## <a name="remarks"></a>Remarks
There are three different ways to resume execution after the debugger has stopped:
- Execute: Cancel any previous step, then run until the next breakpoint, and so on.
- Step: Cancel any old step and run until the new step completes.
- Continue: Run again, leaving any old step active.
The thread passed to `ExecuteOnThread` is useful when deciding which step to cancel. If you do not know the thread, Execute cancels all steps. With knowledge of the thread, you only need to cancel the step on the active thread.
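The following minimal sketch (an illustration added here, not part of the original reference) shows a call that resumes execution on the thread the user is viewing; the helper name is invented, the header is assumed, and error handling is omitted:
```cpp
#include <msdbg.h>  // Visual Studio Debugger SDK interfaces (assumed header)

// Resume the program on the thread the user is currently viewing, so the
// engine only needs to cancel that thread's pending step rather than all steps.
HRESULT ResumeOnViewedThread(IDebugProgram3 *pProgram, IDebugThread2 *pThread)
{
    return pProgram->ExecuteOnThread(pThread);
}
```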
## <a name="see-also"></a>See Also
[Execute](../../../extensibility/debugger/reference/idebugprogram2-execute.md)
[IDebugProgram3](../../../extensibility/debugger/reference/idebugprogram3.md)
| 37.343284 | 262 | 0.759392 | pol_Latn | 0.987403 |
c002a3bf7b8338555429763f0206285e5d055285 | 1,059 | md | Markdown | _posts/2018-08-02-collaborat.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2018-08-02-collaborat.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2018-08-02-collaborat.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | ---
title: Collaboration between Google’s secretive life-extension spin off and popular genetics company Ancestry has quietly ended
author: PipisCrew
date: 2018-08-02
categories: [news]
toc: true
---
- Genetics testing company 23andMe made headlines last week when it announced it would share consumers' anonymized genetic data with pharmaceutical giant GlaxoSmithKline.
- Companies like 23andMe frequently share customer DNA data with other institutions, also known as "third parties."
- Ancestry, another popular company like 23andMe, had a partnership with Google's stealthy life extension spinoff Calico to study the genetics of longevity. That partnership has now ended.
https://www.businessinsider.com/google-calico-ancestry-dna-genetics-aging-partnership-ended-2018-7
origin - http://www.pipiscrew.com/2018/08/collaboration-between-googles-secretive-life-extension-spin-off-and-popular-genetics-company-ancestry-has-quietly-ended/ collaboration-between-googles-secretive-life-extension-spin-off-and-popular-genetics-company-ancestry-has-quietly-ended | 62.294118 | 282 | 0.824363 | eng_Latn | 0.937479 |
c002c9e3b84bc3eefdfbe9f956c580b87d414a27 | 2,759 | md | Markdown | README.md | abelbrencsan/boiled-page-table-component | e40cb03ac0a85d7f0a037bb0360635b811789ad2 | [
"MIT"
] | null | null | null | README.md | abelbrencsan/boiled-page-table-component | e40cb03ac0a85d7f0a037bb0360635b811789ad2 | [
"MIT"
] | null | null | null | README.md | abelbrencsan/boiled-page-table-component | e40cb03ac0a85d7f0a037bb0360635b811789ad2 | [
"MIT"
] | null | null | null | # Boiled Page table component
Table SCSS component for Boiled Page frontend framework. It is intended to create tables.
## Install
Place the `_table.scss` file in the `/assets/css/components` directory, and add its path to the components block in the `assets/css/app.scss` file.
## Usage
### Table component
Table component is intended to create tables.
#### Classes
Class name | Description | Example
---------- | ----------- | -------
`table` | Applies a table. | `<table class="table"></table>`
#### Examples
##### Example 1
The following example shows a table.
```html
<table class="table">
<thead>
<tr>
<th>Name</th>
<th>Author</th>
<th>ISBN</th>
<th>Condition</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Animal Farm</td>
<td>George Orwell</td>
<td>0452277507</td>
<td>Very Good</td>
<td>$5.69</td>
</tr>
<tr>
<td>Of Mice and Men</td>
<td>John Steinbeck</td>
<td>0141038424</td>
<td>Like New</td>
<td>$8.59</td>
</tr>
<tr>
<td>Goodnight Moon</td>
<td>Margaret Wise Brown</td>
<td>0064430170</td>
<td>Very Good</td>
<td>$4.69</td>
</tr>
<tr>
<td>Green Eggs and Ham</td>
<td>Dr. Seuss</td>
<td>0394892208</td>
<td>Good</td>
<td>$6.79</td>
</tr>
</tbody>
<tfoot>
<tr>
<td colspan="4">Price total</td>
<td>$25.76</td>
</tr>
</tfoot>
</table>
```
#### Extension ideas
##### Special table
```scss
/* Table component extensions */
table.table {
// Special table
&.table--special {
td, th {
border-left: 0;
border-right: 0;
}
thead th {
border-top: 0;
}
tfoot td {
border-bottom: 0;
}
}
}
```
##### Striped table
```scss
/* Table component extensions */
table.table {
// Striped table
&.table--striped {
tbody tr:nth-of-type(odd) {
background: $border-bg-color;
}
}
}
```
### Table wrapper component
Table wrapper component is intended to make tables responsive.
#### Classes
Class name | Description | Example
---------- | ----------- | -------
`table-wrapper` | Applies a table wrapper. | `<div class="table-wrapper"></div>`
#### Examples
##### Example 1
The following example shows a table placed inside a table wrapper.
```html
<div class="table-wrapper">
<table class="table">
<thead>
<tr>
<th>Name</th>
<th>Author</th>
<th>ISBN</th>
</tr>
</thead>
<tbody>
<tr>
<td>Animal Farm</td>
<td>George Orwell</td>
<td>0452277507</td>
</tr>
<tr>
<td>Of Mice and Men</td>
<td>John Steinbeck</td>
<td>0141038424</td>
</tr>
</tbody>
</table>
</div>
``` | 17.24375 | 131 | 0.532077 | eng_Latn | 0.627753 |
c002d8f3bab1e70ea3bf67084099bee569df2932 | 65 | md | Markdown | README.md | zhouquanq/thinkphp_blog | 77e0f6bea6cdfc8a162584d60462f0d9058d199a | [
"Apache-2.0"
] | 1 | 2018-08-22T07:32:01.000Z | 2018-08-22T07:32:01.000Z | README.md | zhouquanq/thinkphp_blog | 77e0f6bea6cdfc8a162584d60462f0d9058d199a | [
"Apache-2.0"
] | null | null | null | README.md | zhouquanq/thinkphp_blog | 77e0f6bea6cdfc8a162584d60462f0d9058d199a | [
"Apache-2.0"
] | null | null | null | # thinkphp_blog
### Before moving to Laravel, let's first write a blog with ThinkPHP 5.
#### More notes will be added later~
| 9.285714 | 27 | 0.676923 | kor_Hang | 0.105054 |
c0031fa307425c7b56ebf7307108a49456958e70 | 7,498 | md | Markdown | intl.en-US/Release notes/System Component change Records/Storage/csi-provisioner.md | roura356a/csk | 4092ff464596cdb5ed2bdf39a8eb3764f8a42015 | [
"MIT"
] | null | null | null | intl.en-US/Release notes/System Component change Records/Storage/csi-provisioner.md | roura356a/csk | 4092ff464596cdb5ed2bdf39a8eb3764f8a42015 | [
"MIT"
] | null | null | null | intl.en-US/Release notes/System Component change Records/Storage/csi-provisioner.md | roura356a/csk | 4092ff464596cdb5ed2bdf39a8eb3764f8a42015 | [
"MIT"
] | null | null | null | ---
keyword: [csi-provisioner, csi-provisioner release notes]
---
# csi-provisioner
csi-provisioner allows you to automatically create volumes. This topic describes the introduction, usage notes, and release notes for csi-provisioner.
## Introduction
The csi-provisioner component provided by Alibaba Cloud allows you to automatically create volumes. You can use csi-provisioner to create volumes from disks and Apsara File Storage NAS \(NAS\) file systems. The Kubernetes version of the cluster must be 1.14 or later.
## Usage notes
For more information, see [CSI overview](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-CSI/Storage overview.md).
## Release notes
**August 2021**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.18.8.51-c504ef45-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.18.8.51-c504ef45-aliyun|2021-08-19|- The time parameter for the recycle bin is added to Container Network File System \(CNFS\).
- The `apiVersion` of CNFS is changed from `v1alpha1` to `v1beta1`.
- The issue that Object Storage Service File System \(OSSFS\) cannot synchronize data in real time is fixed.
- By default, the Detachdisk option is disabled.
|No impact on workloads.|
**July 2021**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.18.8.48-cd524404-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.18.8.48-cd524404-aliyun|2021-07-06|- The issue that NAS file systems cannot be expanded by using CNFS is fixed.
- OSS buckets can be mounted to nodes that are deployed by using the Alibaba Cloud Linux 3 image.
|No impact on workloads.|
**June 2021**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.18.8.47-30ba5d25-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.18.8.47-30ba5d25-aliyun|2021-06-25|- The `volumeCapacity` field is deleted from NAS volume configurations. The `allowVolumeExpansion` field is used to specify whether to enable the quota feature.
- The `selflink` field is deleted from NAS volume configurations.
|No impact on workloads.|
**May 2021**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.18.8.47-906bd535-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.18.8.47-906bd535-aliyun|2021-05-20|- Disk partitions can be mounted.
- Disk partitions can be expanded.
|No impact on workloads.|
**April 2021**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.6.0-e360c7e43-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.6.0-e360c7e43-aliyun|2021-04-08|- Kubernetes 1.20 is supported. The `metadata.selflink` field is deleted.
- The tag of the cluster ID is automatically added to disks.
- NAS volumes can be expanded within the quota limit.
|No impact on workloads.|
**January 2021**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.6.0-b6f763a43-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.6.0-b6f763a43-aliyun|2021-01-13|- Database File System \(DBFS\) volumes are supported.
- By default, volume monitoring is enabled.
- Local volumes of the QuotaPath type are supported.
- The VolumeSnapshot List feature is supported.
- Quota groups are supported by NAS volumes.
- Custom disk types are supported.
|No impact on workloads.|
**November 2020**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.6.0-b6f763a43-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.6.0-b6f763a43-aliyun|2020-11-02|- The deployment template is updated to merge drivers into one container.
- The issue that subdirectories fail to be created in Extreme NAS file systems is fixed.
- Kubernetes 1.18 is supported.
- Labels can be added to NAS volumes when you create NAS volumes.
|No impact on workloads.|
**August 2020**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.4.0-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.4.0-aliyun|2020-08-05|- The issue that snapshots cannot be created from disks is fixed.
- The issue that dynamic provisioning of NAS volumes fails due to residual data is fixed.
- The check logic of BDF nodes when csi-plugin is started is fixed.
- The use of universally unique identifier \(UUID\) to obtain device paths is no longer supported.
|No impact on workloads.|
**July 2020**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.4.0-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.4.0-aliyun|2020-07-13|- Elastic Block Storage \(EBS\) snapshots are supported. You can use EBS snapshots to restore data to a beta version.
- Extreme NAS volumes can be created and deleted.
- The Config SysConfig parameter of EBS volumes is supported when you configure PVs.
- The issue that block volumes are loaded twice in BDF mode is fixed.
- EBS and NAS volumes are allowed to access APIs by using internal domain names.
- The Cloud Paralleled File System \(CPFS\) driver is upgraded and the dependency on the kernel is removed.
|No impact on workloads.|
**April 2020**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.4.0-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.4.0-aliyun|2020-04-20|- EBS volumes can be unmounted before you delete the volumes.
- The disk creation policy is updated. Standard SSDs are created in preference to ultra disks. Ultra disks are created only when no standard SSD is available.
- UUID is supported as a high-priority search option to search for devices that use EBS volumes.
- The authentication management in managed Kubernetes clusters is updated.
- Security Token Service \(STS\) is supported to connect to OSS buckets.
- DuplicateMountPoint errors in EBS are fixed.
- The BDF protocol is supported to bind EBS volumes after the volumes are connected.
|No impact on workloads.|
**February 2020**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.4.0-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.4.0-aliyun|2020-02-18|- Kubernetes clusters that use CSI and have no Internet access are supported.
- The issues related to mount path checks in EBS are fixed.
|No impact on workloads.|
**December 2019**
|Version|Image address|Release date|Description|Impact|
|-------|-------------|------------|-----------|------|
|v1.2.2-aliyun|registry.cn-hangzhou.aliyuncs.com/acs/csi-provisioner:v1.2.2-aliyun|2019-12-20|- The EBS PV name can be used as the disk ID. This feature is also supported by FlexVolume.
- Mount options can be configured for EBS volumes in MKFS Stage.
- Mount options can be configured to have a higher priority than the volume attributes of NAS volumes.
- Mount options of OSS volumes can be validated in OSS connectors.
- Subpaths of OSS buckets can be mounted as volumes.
- Volume topology can be used to dynamically configure Logical Volume Manager \(LVM\).
|No impact on workloads.|
| 49.655629 | 277 | 0.700987 | eng_Latn | 0.893983 |
c003fb53905a8ba9832562708a5b65f945655f8b | 509 | md | Markdown | docs/admin-guest/index.md | yuan1163/webrtc-connection | 448beb44748bcb78bda2bf43c2fe13e01924b017 | [
"MIT"
] | null | null | null | docs/admin-guest/index.md | yuan1163/webrtc-connection | 448beb44748bcb78bda2bf43c2fe13e01924b017 | [
"MIT"
] | null | null | null | docs/admin-guest/index.md | yuan1163/webrtc-connection | 448beb44748bcb78bda2bf43c2fe13e01924b017 | [
"MIT"
] | null | null | null | ---
api_name: admin-guest
api_description: How to write admin-guest demo?
css:
---
{% capture html %}
<section>
<p>Please check a live demo here: <a href="https://rtcmulticonnection.herokuapp.com/demos/admin-guest.html">demos/admin-guest.html</a></p>
<p>Demo's source code is available here: <a href="https://github.com/muaz-khan/RTCMultiConnection/tree/master/demos/admin-guest.html">github/demos/admin-guest.html</a></p>
</section>
{% endcapture %}
{% include html_snippet.html html=html %}
| 31.8125 | 179 | 0.713163 | yue_Hant | 0.375976 |
c00413ad12573aca0e3e8e00c5e46bc9617a7f4d | 17 | md | Markdown | src/fn/closures/anonymity.md | esiebert/rust-by-example-ptbr | 9b093b479eb59d0feddbe3bb2e6c04e7218099ce | [
"MIT"
] | null | null | null | src/fn/closures/anonymity.md | esiebert/rust-by-example-ptbr | 9b093b479eb59d0feddbe3bb2e6c04e7218099ce | [
"MIT"
] | 1 | 2019-11-15T15:51:28.000Z | 2019-11-15T16:20:03.000Z | src/fn/closures/anonymity.md | esiebert/rust-by-example-ptbr | 9b093b479eb59d0feddbe3bb2e6c04e7218099ce | [
"MIT"
] | null | null | null | # Type anonymity
| 8.5 | 16 | 0.764706 | eng_Latn | 0.648774 |
c004724725ca35971444ff18560f01ed463ccf4b | 164 | md | Markdown | blog/stories/2020/01/23/a162828.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 93 | 2016-06-02T15:40:14.000Z | 2022-02-02T20:02:08.000Z | blog/stories/2020/01/23/a162828.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 231 | 2016-06-02T15:21:23.000Z | 2022-02-18T20:48:20.000Z | blog/stories/2020/01/23/a162828.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 11 | 2017-06-27T11:58:01.000Z | 2021-06-21T00:55:07.000Z | <a href="https://www.c-span.org/video/?c4848409/house-manager-adam-schiff-outlines-case-president-trump">Opening presentation</a> at Trump's trial by Adam Schiff.
| 82 | 163 | 0.77439 | eng_Latn | 0.384266 |
c004e9f6c75b3dee1c33afc3252da73805051cee | 2,125 | md | Markdown | README.md | behzadon/dws-dev-006-bash | 74aaada91402fbf5475b77dcd1a25360671beae0 | [
"MIT"
] | null | null | null | README.md | behzadon/dws-dev-006-bash | 74aaada91402fbf5475b77dcd1a25360671beae0 | [
"MIT"
] | null | null | null | README.md | behzadon/dws-dev-006-bash | 74aaada91402fbf5475b77dcd1a25360671beae0 | [
"MIT"
] | null | null | null | # dws-dev-006-bash
This is a bash script that runs a command. If the command succeeds, the script ends with a success message; for unsuccessful cases, it retries n times, once every i seconds, where n and i can be passed through the CLI or via ENV variables, or left blank for default values. If you do not pass n or i on the command line, the script reads these values from the ENV variables TRY_INTERVAL and TRY_NUMBER. If the ENV values are not supplied either, the script falls back to the defaults TRY_INTERVAL=5 and TRY_NUMBER=12. If the user does not provide any command, the script exits with a proper message.
Here are some examples:
-------------------------------------------------------------
Input : ./try.sh -i 2 -n 3 true
Output: Success!
-------------------------------------------------------------
Input : ./try.sh -i 2 -n 3 false
Output: Command was not successful after 3 tries in 6 seconds.
------------------------------------------------------------
Now we export TRY_NUMBER = 5
Input : ./try.sh -i 2 false
Output: Command was not successful after 5 tries in 10 seconds.
------------------------------------------------------------
Now we export TRY_NUMBER = 5 and export TRY_INTERVAL = 1
Input : ./try.sh false
Output: Command was not successful after 5 tries in 5 seconds.
--------------------------------------------------------------
If you do not pass "-i" with CLI or ENV
Input : ./try.sh -n 3 false
Output: Command was not successful after 3 tries in 15 seconds.
You can see it fell back to the default value TRY_INTERVAL=5.
--------------------------------------------------------------
If you do not pass "-i" and "-n" with CLI or ENV
Input: ./try.sh
Output: Command was not successful after 12 tries in 60 seconds.
The default values TRY_INTERVAL=5 and TRY_NUMBER=12 were used.
---------------------------------------------------------------
In case you do not pass the command:
Input : ./try.sh -n 3
Output: No argument supplied!
--------------------------------------------------------------
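Below is a minimal sketch of how such a retry loop can be implemented. It illustrates the behavior described above and is not necessarily the exact contents of try.sh:
```bash
#!/usr/bin/env bash
# Illustrative sketch of the retry logic described in this README.
interval="${TRY_INTERVAL:-5}"   # default: 5 seconds between tries
number="${TRY_NUMBER:-12}"      # default: 12 tries

while getopts "i:n:" opt; do
  case "$opt" in
    i) interval="$OPTARG" ;;
    n) number="$OPTARG" ;;
  esac
done
shift $((OPTIND - 1))

if [ $# -eq 0 ]; then
  echo "No argument supplied!"
  exit 1
fi

for ((try = 1; try <= number; try++)); do
  if "$@"; then
    echo "Success!"
    exit 0
  fi
  sleep "$interval"
done

echo "Command was not successful after $number tries in $((number * interval)) seconds."
exit 1
```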
[@dwsclass](https://github.com/dwsclass) dws-dev-006-bash
| 35.416667 | 619 | 0.574118 | eng_Latn | 0.989952 |
c005c91a47e3ac9ded214ab03d7abf54c992c43b | 603 | md | Markdown | README.md | Marvin9/news-content | d6fa2fd52796067513bff652593cef631b823076 | [
"MIT"
] | null | null | null | README.md | Marvin9/news-content | d6fa2fd52796067513bff652593cef631b823076 | [
"MIT"
] | null | null | null | README.md | Marvin9/news-content | d6fa2fd52796067513bff652593cef631b823076 | [
"MIT"
] | null | null | null | # news-content
Latest news from three sites (BBC, NYTimes and The Hindu) in one place.
A minimalist home page shows the latest news fetched from the three sites.
Each news item contains at least a title, its source and a link to the article. News items may or may not contain a description and a time.
***Libraries used in this project.***
- KoaJS (for backend)
- NeDB (database for minimal projects)
- Pug (template engine)
- UIkit (front-end library)
- Cheerio, request (web scraping)

## [Demo here](https://marvin9-latest-news-content.glitch.me/)
| 33.5 | 125 | 0.759536 | eng_Latn | 0.963217 |
c005dd7af992862c21fbcfde2c799d40969b59dc | 27 | md | Markdown | src/Hmmm/Component/Session/readme.md | HmmmVR/Components | cb22f7f65f65bbecb64a7111fb4e55a34d854615 | [
"MIT"
] | 2 | 2018-06-28T10:46:35.000Z | 2018-07-02T16:39:07.000Z | src/Hmmm/Component/Session/readme.md | HmmmVR/Components | cb22f7f65f65bbecb64a7111fb4e55a34d854615 | [
"MIT"
] | null | null | null | src/Hmmm/Component/Session/readme.md | HmmmVR/Components | cb22f7f65f65bbecb64a7111fb4e55a34d854615 | [
"MIT"
] | null | null | null | # Session
Manage sessions
| 6.75 | 15 | 0.777778 | fra_Latn | 0.554349 |
c00722afe0faaea8f810dbd889cfca3fc8a652d2 | 1,499 | md | Markdown | docs/cpp/allocate.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cpp/allocate.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cpp/allocate.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: allocate | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-language
ms.topic: language-reference
f1_keywords:
- allocate_cpp
dev_langs:
- C++
helpviewer_keywords:
- __declspec keyword [C++], allocate
- allocate __declspec keyword
ms.assetid: 67828b31-de60-4c0e-b0a6-ef3aab22641d
author: mikeblome
ms.author: mblome
ms.workload:
- cplusplus
ms.openlocfilehash: 25ebe45ebb85e13b6541057c57fd70da7361797f
ms.sourcegitcommit: 913c3bf23937b64b90ac05181fdff3df947d9f1c
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/18/2018
ms.locfileid: "46064835"
---
# <a name="allocate"></a>allocate
**Microsoft Specific**
The **allocate** declaration specifier names a data segment in which the data item will be allocated.
## <a name="syntax"></a>Syntax
```
__declspec(allocate("segname")) declarator
```
## <a name="remarks"></a>Remarks
The name *segname* must be declared using one of the following pragmas:
- [code_seg](../preprocessor/code-seg.md)
- [const_seg](../preprocessor/const-seg.md)
- [data_seg](../preprocessor/data-seg.md)
- [init_seg](../preprocessor/init-seg.md)
- [section](../preprocessor/section.md)
## <a name="example"></a>Example
```cpp
// allocate.cpp
#pragma section("mycode", read)
__declspec(allocate("mycode")) int i = 0;
int main() {
}
```
**END Microsoft Specific**
## <a name="see-also"></a>See Also
[__declspec](../cpp/declspec.md)<br/>
[Keywords](../cpp/keywords-cpp.md)
c00763593025ba9092ba5831baac4ffe435bad79 | 80 | md | Markdown | _posts/second.md | youthangno/youthangno.github.io | e9bb930f6a981268ab4c8502d8f411afcd10dc06 | [
"MIT"
] | 1 | 2021-07-26T01:32:26.000Z | 2021-07-26T01:32:26.000Z | _posts/second.md | youthangno/youthangno.github.io | e9bb930f6a981268ab4c8502d8f411afcd10dc06 | [
"MIT"
] | 1 | 2021-07-23T03:10:27.000Z | 2021-07-23T03:10:27.000Z | _posts/second.md | youthangno/youthangno.github.io | e9bb930f6a981268ab4c8502d8f411afcd10dc06 | [
"MIT"
] | null | null | null | ```javascript
console.log('hello world');
```
```html
```
**ㅁㅁㄴㅇㄹ**
# ㅁㄴㅇㄹ | 6.153846 | 27 | 0.5375 | yue_Hant | 0.252977 |
c0077b1c341ac7a12c5cd30f4ad1d0fb5632ae94 | 455 | md | Markdown | analisis.md | cuococarlos/quarkus-spike | 13e81d3a345e7d524316ab848f7991a7ff1be5dc | [
"MIT"
] | null | null | null | analisis.md | cuococarlos/quarkus-spike | 13e81d3a345e7d524316ab848f7991a7ff1be5dc | [
"MIT"
] | null | null | null | analisis.md | cuococarlos/quarkus-spike | 13e81d3a345e7d524316ab848f7991a7ff1be5dc | [
"MIT"
] | null | null | null | ## Analisis
### IDE
Tuve que migrar a VScodium para programar en Quarkus,ya que tiene unos plugins que me hicieron la vida mas facil.
### Documentacion
Hasta ahora todas las pruebas que hice fueron guias que saque de la docu oficial. Es muy completa, dejo el link: [Guias Docu oficial](https://quarkus.io/guides/)
### DB
Probe utilizando Hibernate como ORM,ya que lo recomiendan por defecto.
Ademas hice una prueba para usar Mongo
### Test
### Codigo | 30.333333 | 161 | 0.756044 | spa_Latn | 0.985635 |
c0082dbb1079888d6f87bc9a61b1e5949ccb555b | 3,519 | md | Markdown | CONTRIBUTING.md | jonyhy96/lgtm-action | aba2e68f46c24755033e2f344295c4749cdbd301 | [
"MIT"
] | 3 | 2021-09-30T21:15:12.000Z | 2022-03-08T10:19:57.000Z | CONTRIBUTING.md | jonyhy96/lgtm-action | aba2e68f46c24755033e2f344295c4749cdbd301 | [
"MIT"
] | 1 | 2022-03-09T07:00:16.000Z | 2022-03-09T07:00:16.000Z | CONTRIBUTING.md | jonyhy96/lgtm-action | aba2e68f46c24755033e2f344295c4749cdbd301 | [
"MIT"
] | 2 | 2021-01-25T07:48:27.000Z | 2022-03-08T10:16:07.000Z | # Contributing to lgtm-action
Help wanted! We'd love your contributions to lgtm-action. Please review the following guidelines before contributing. Also, feel free to propose changes to these guidelines by updating this file and submitting a pull request.
* [I have a question...](#questions)
* [I found a bug...](#bugs)
* [I have a feature request...](#features)
* [I have a contribution to share...](#process)
## <a name="questions"></a> Have a Question?
Please don't open a GitHub issue for questions about how to use `lgtm-action`, as the goal is to use issues for managing bugs and feature requests. Issues that are related to general support will be closed and redirected to our gitter room.
For all support related questions, please ask the question in our gitter room: [jonyhy96/lgtm-action](https://gitter.im/jonyhy96/lgtm-action).
## <a name="bugs"></a> Found a Bug?
If you've identified a bug in `lgtm-action`, please [submit an issue](#issue) to our GitHub repo: [jonyhy96/lgtm-action](https://github.com/jonyhy96/lgtm-action/issues/new). Please also feel free to submit a [Pull Request](#pr) with a fix for the bug!
## <a name="features"></a> Have a Feature Request?
All feature requests should start with [submitting an issue](#issue) documenting the user story and acceptance criteria. Again, feel free to submit a [Pull Request](#pr) with a proposed implementation of the feature.
## <a name="process"></a> Ready to Contribute!
### <a name="issue"></a> Create an issue
Before submitting a new issue, please search the issues to make sure there isn't a similar issue doesn't already exist.
Assuming no existing issues exist, please ensure you include the following bits of information when submitting the issue to ensure we can quickly reproduce your issue:
* Version of `lgtm-action`
* Platform (Linux, OS X, Windows)
* The complete `main.workflow` file used
* The complete command that was executed
* Any output from the command
* Details of the expected results and how they differed from the actual results
We may have additional questions and will communicate through the GitHub issue, so please respond back to our questions to help reproduce and resolve the issue as quickly as possible.
New issues can be created with in our [GitHub repo](https://github.com/jonyhy96/lgtm-action/issues/new).
### <a name="pr"></a>Pull Requests
Pull requests should target the `master` branch. Please also reference the issue from the description of the pull request using [special keyword syntax](https://help.github.com/articles/closing-issues-via-commit-messages/) to auto close the issue when the PR is merged. For example, include the phrase `fixes #14` in the PR description to have issue #14 auto close.
### <a name="style"></a> Styleguide
When submitting code, please make every effort to follow existing conventions and style in order to keep the code as readable as possible. Here are a few points to keep in mind:
* Please run `go fmt ./...` before committing to ensure code aligns with go standards.
* All dependencies must be defined in the `go.mod` file.
* For details on the approved style, check out [Effective Go](https://golang.org/doc/effective_go.html).
Also, consider the original design principles:
* **Simple** - the reviewers can simply use this action to approve the pull request.
### License
By contributing your code, you agree to license your contribution under the terms of the [MIT License](LICENSE.md).
All files are released with the MIT license.
| 54.984375 | 367 | 0.757602 | eng_Latn | 0.997859 |
c008ae2e199e3c23ace3db137dcfe8b2f00a2ac9 | 3,861 | md | Markdown | DOCS.md | BotBotMe/botbot-plugins | 075ae29bb8c534a1b33baebf36a5fcc0d4398b58 | [
"BSD-3-Clause"
] | 21 | 2015-01-17T15:39:43.000Z | 2021-08-04T01:56:31.000Z | DOCS.md | Pavel-Sayekat/botbot-plugins | 075ae29bb8c534a1b33baebf36a5fcc0d4398b58 | [
"BSD-3-Clause"
] | 7 | 2015-01-12T02:34:54.000Z | 2018-01-21T22:07:47.000Z | DOCS.md | Pavel-Sayekat/botbot-plugins | 075ae29bb8c534a1b33baebf36a5fcc0d4398b58 | [
"BSD-3-Clause"
] | 18 | 2015-01-12T02:37:47.000Z | 2018-09-22T15:30:29.000Z | ## Plugin API Documentation
You can write your own Botbot plugin by extending the core plugin class and providing one or more message handlers. A
message handler is a method on the plugin class that receives an object representing a user message that has been
posted to the IRC channel the plugin is associated with. The existing plugins in `botbot_plugins/plugins` serve as good examples to follow. `ping` and `brain` are good ones to start with due to their simplicity.
### Plugin Capabilities
Plugins provide three basic capabilities:
1. Parse messages and optionally respond with an output message.
2. Associate configuration variables. Useful if your plugin needs to connect to external services.
3. Store and retrieve key/value pairs.
All plugins extend the BasePlugin class, providing them with the ability to utilize these capabilities.
### Parsing and responding to messages
In the simplest case, a plugin will receive a message from an IRC channel and parse it based on a rule. When the parsed input
matches a rule, the plugin may return a response.
Additional methods should be defined on your `Plugin` class that will listen and optionally respond to incoming messages. They are registered with the app using one of the following decorators from `botbot_plugins.decorators`:
* `listens_to_mentions(regex)`: A method that should be called only when the bot's nick prefixes the message and that message matches the regex pattern. For example, `[o__o]: What time is it in Napier, New Zealand?`. The nick will be stripped prior to regex matching.
* `listens_to_all(regex)`: A method that should be called on any line that matches the regex pattern.
The method should accept a `line` object as its first argument and any named matches from the regex as keyword args. Any text returned by the method will be echoed back to the channel.
The `line` object has the following attributes:
* `user`: The nick of the user who wrote the message
* `text`: The text of the message (stripped of nick if addressed to the bot)
* `full_text`: The text of the message
### Configuration Metadata
Metadata can be associated with your plugin that can be referenced as needed in the message handlers. A common use case for
this is storing authentication credentials and/or API endpoint locations for external services. The `github` plugin is an example
that uses configuration for the ability to query a Github repository.
To add configuration to your plugin, define a config class that inherits from `config.BaseConfig`. Configuration values are
declared by adding instances of `config.Field` as attributes of the class.
Once your config class is defined, you associate it with the plugin via the `config_class` attribute:
class MyConfig(BaseConfig):
unwarranted_comments = Field(
required=False,
help_text="Responds to every message with sarcastic comment",
default=True)
class Plugin(BasePlugin):
config_class = MyConfig
@listens_to_all
def peanut_gallery(self, line):
if self.config.unwarranted_comments:
return "Good one!"
### Storage / Persisting Data
Plugins can persist and retrieve data. This is done via `BasePlugin.store` and `BasePlugin.retrieve`:
* `store(self, key, value)`: A method to store a simple key, value pair specific to the plugin. See `brain` and `last_seen` for examples.
* `retrieve(self, key)`: A method to retrieve a value for the given key. See `brain` and `last_seen` for examples.
### Testing Your Plugins
You should provide unit tests for your plugins.
#### Creating the test app
In order to simulate the plugin running in its normal environment, an app instance must be instantiated. See the current
tests for examples. This may change with subsequent releases.
Calls to external services should be mocked.
| 48.873418 | 267 | 0.767159 | eng_Latn | 0.998444 |
c009469213a5415dcb68ee3b3559c3e85ddc12b8 | 17,610 | md | Markdown | 2013/_posts/2013-10-06-nouvelles-hebdomadaires-de-postgresql-29-septembre-2013.md | postgresqlfr/blog.postgresql.fr | 38b430eeb1b85cebb4d9ba3a022783175d4ebf76 | [
"BSD-2-Clause"
] | null | null | null | 2013/_posts/2013-10-06-nouvelles-hebdomadaires-de-postgresql-29-septembre-2013.md | postgresqlfr/blog.postgresql.fr | 38b430eeb1b85cebb4d9ba3a022783175d4ebf76 | [
"BSD-2-Clause"
] | 5 | 2020-04-28T12:42:57.000Z | 2021-06-26T23:36:56.000Z | 2013/_posts/2013-10-06-nouvelles-hebdomadaires-de-postgresql-29-septembre-2013.md | postgresqlfr/blog.postgresql.fr | 38b430eeb1b85cebb4d9ba3a022783175d4ebf76 | [
"BSD-2-Clause"
] | null | null | null | ---
layout: post
title: "Nouvelles hebdomadaires de PostgreSQL - 29 septembre 2013"
author: "chl"
categories: [PostgreSQL Weekly News]
redirect_from: "index.php?post/2013/10/06/Nouvelles-hebdomadaires-de-PostgreSQL-29-septembre-2013"
---
<p>La deuxième semaine parmi les quatre de la <em>CommitFest</em> actuelle est maintenant terminée. Six patchs sont encore délaissés. Contactez les RRReviewers si vous souhaitez aider.</p>
<p><strong>Offres d'emplois autour de PostgreSQL en septembre</strong></p>
<ul>
<li>Internationales :
<a target="_blank" href="http://archives.postgresql.org/pgsql-jobs/2013-09/threads.php">http://archives.postgresql.org/pgsql-jobs/2013-09/threads.php</a>;</li>
<li>Francophones :
<a target="_blank" href="http://forums.postgresql.fr/viewforum.php?id=4">http://forums.postgresql.fr/viewforum.php?id=4</a>.</li>
</ul>
<p><strong>PostgreSQL Local</strong></p>
<ul>
<li>Le PGDay italien (PGDay.IT) sera tenu le 25 octobre à Prato (Italie, Toscane) au centre de recherche de l'Université Monash. Inscriptions et infos :
<a target="_blank" href="http://2013.pgday.it">http://2013.pgday.it</a></li>
<li>La <em>PostgreSQL Conference China</em> de 2103 aura lieu les 26 & 27 octobre à Hangzhou. Informations :
<a target="_blank" href="https://wiki.postgresql.org/wiki/Pgconf_cn2013">https://wiki.postgresql.org/wiki/Pgconf_cn2013</a><br>
Inscriptions :
<a target="_blank" href="http://bbs.pgsqldb.com/client/bm.php">http://bbs.pgsqldb.com/client/bm.php</a></li>
<li>La <em>PGConf.EU 2013</em> sera tenue du 29 octobre au 1<sup>er</sup> novembre au Conrad Hotel dans le centre-ville de Dublin en Irlande. Les inscriptions sont ouvertes :
<a target="_blank" href="http://2013.pgconf.eu/">http://2013.pgconf.eu/</a></li>
<li><em>PGConf.DE 2013</em> aura lieu le 8 novembre 2013 au musée industriel de la Rhénanie à Oberhausen. L'appel à conférenciers porte jusqu'au 15 septembre :
<a target="_blank" href="http://2013.pgconf.de/">http://2013.pgconf.de/</a></li>
<li>La quatrième édition du PGDay argentin se tiendra le 14 novembre 2013 à Buenos Aires, Argentine. L'appel à conférenciers est lancé jusqu'au 28 septembre :
<a target="_blank" href="http://wiki.postgresql.org/wiki/PGDay_Argentina_2013">http://wiki.postgresql.org/wiki/PGDay_Argentina_2013</a></li>
<li>Le PGDay cubain aura lieu en novembre 2013 :
<a target="_blank" href="http://postgresql.uci.cu/">http://postgresql.uci.cu/</a></li>
</ul>
<p><strong>PostgreSQL dans les média</strong></p>
<ul>
<li>Planet PostgreSQL :
<a target="_blank" href="http://planet.postgresql.org/">http://planet.postgresql.org/</a></li>
<li>Planet PostgreSQLFr :
<a target="_blank" href="http://planete.postgresql.fr/">http://planete.postgresql.fr/</a></li>
</ul>
<p><i>PostgreSQL Weekly News / les nouvelles hebdomadaires vous sont offertes cette semaine par David Fetter. Traduction par l'équipe PostgreSQLFr sous licence CC BY-NC-SA.</i></p>
<p><i>Proposez vos articles ou annonces avant dimanche 15:00 (heure du Pacifique). Merci de les envoyer en anglais à david (a) fetter.org, en allemand à pwn (a) pgug.de, en italien à pwn (a) itpug.org et en espagnol à pwn (a) arpug.com.ar.</i></p>
<p>(<a target="_blank" href="http://www.postgresql.org/message-id/[email protected]">lien vers l'article original</a>)</p>
<!--more-->
<p><strong>Correctifs appliqués</strong></p>
<p>Heikki Linnakangas a poussé :</p>
<ul>
<li>Fix two timeline handling bugs in pg_receivexlog. When a timeline history file is fetched from server, it is initially created with a temporary file name, and renamed to place. However, the temporary file name was constructed using an uninitialized buffer. Usually that meant that the file was created in current directory instead of the target, which usually goes unnoticed, but if the target is on a different filesystem than the current dir, the rename() would fail. Fix that. The second issue is that pg_receivexlog would not take .partial files into account when determining when scanning the target directory for existing WAL files. If the timeline has switched in the server several times in the last WAL segment, and pg_receivexlog is restarted, it would choose a too old starting point. That's not a problem as long as the old WAL segment exists in the server and can be streamed over, but will cause a failure if it's not. Backpatch to 9.3, where this timeline handling code was written. Analysed by Andrew Gierth, bug #8453, based on a bug report on IRC.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/b882246e3ab4382b3c9f58e5f85dd8c9e3eb594f">http://git.postgresql.org/pg/commitdiff/b882246e3ab4382b3c9f58e5f85dd8c9e3eb594f</a></li>
<li>Plug memory leak in range_cmp function. B-tree operators are not allowed to leak memory into the current memory context. Range_cmp leaked detoasted copies of the arguments. That caused a quick out-of-memory error when creating an index on a range column. Reported by Marian Krucina, bug #8468.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/77ae7f7c356064f5355e004b95f485358dfc1360">http://git.postgresql.org/pg/commitdiff/77ae7f7c356064f5355e004b95f485358dfc1360</a></li>
<li>Fix spurious warning after vacuuming a page on a table with no indexes. There is a rare race condition, when a transaction that inserted a tuple aborts while vacuum is processing the page containing the inserted tuple. Vacuum prunes the page first, which normally removes any dead tuples, but if the inserting transaction aborts right after that, the loop after pruning will see a dead tuple and remove it instead. That's OK, but if the page is on a table with no indexes, and the page becomes completely empty after removing the dead tuple (or tuples) on it, it will be immediately marked as all-visible. That's OK, but the sanity check in vacuum would throw a warning because it thinks that the page contains dead tuples and was nevertheless marked as all-visible, even though it just vacuumed away the dead tuples and so it doesn't actually contain any. Spotted this while reading the code. It's difficult to hit the race condition otherwise, but can be done by putting a breakpoint after the heap_page_prune() call. Backpatch all the way to 8.4, where this code first appeared.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/adaba2751f617c0045f72d2ac2d5402cc184fb29">http://git.postgresql.org/pg/commitdiff/adaba2751f617c0045f72d2ac2d5402cc184fb29</a></li>
</ul>
<p>Stephen Frost a poussé :</p>
<ul>
<li>Fix SSL deadlock risk in libpq. In libpq, we set up and pass to OpenSSL callback routines to handle locking. When we run out of SSL connections, we try to clean things up by de-registering the hooks. Unfortunately, we had a few calls into the OpenSSL library after these hooks were de-registered during SSL cleanup which lead to deadlocking. This moves the thread callback cleanup to be after all SSL-cleanup related OpenSSL library calls. I've been unable to reproduce the deadlock with this fix. In passing, also move the close_SSL call to be after unlocking our ssl_config mutex when in a failure state. While it looks pretty unlikely to be an issue, it could have resulted in deadlocks if we ended up in this code path due to something other than SSL_new failing. Thanks to Heikki for pointing this out. Back-patch to all supported versions; note that the close_SSL issue only goes back to 9.0, so that hunk isn't included in the 8.4 patch. Initially found and reported by Vesa-Matti J Kari; many thanks to both Heikki and Andres for their help running down the specific issue and reviewing the patch.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/b37c90f11e3c239b999f98ffd3bbea6b8253fffa">http://git.postgresql.org/pg/commitdiff/b37c90f11e3c239b999f98ffd3bbea6b8253fffa</a></li>
</ul>
<p>Bruce Momjian a poussé :</p>
<ul>
<li>pg_upgrade: fix C comment typo
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/f7cf5fa262f8be1bc75f390708ceed26d25f1e7d">http://git.postgresql.org/pg/commitdiff/f7cf5fa262f8be1bc75f390708ceed26d25f1e7d</a></li>
<li>pg_upgrade: more C comment fixes
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/ff2a1f5e84ee9984b33ee31e6fb9c6f2760a820e">http://git.postgresql.org/pg/commitdiff/ff2a1f5e84ee9984b33ee31e6fb9c6f2760a820e</a></li>
</ul>
<p>Robert Haas a poussé :</p>
<ul>
<li>Don't allow system columns in CHECK constraints, except tableoid. Previously, arbitray system columns could be mentioned in table constraints, but they were not correctly checked at runtime, because the values weren't actually set correctly in the tuple. Since it seems easy enough to initialize the table OID properly, do that, and continue allowing that column, but disallow the rest unless and until someone figures out a way to make them work properly. No back-patch, because this doesn't seem important enough to take the risk of destabilizing the back branches. In fact, this will pose a dump-and-reload hazard for those upgrading from previous versions: constraints that were accepted before but were not correctly enforced will now either be enforced correctly or not accepted at all. Either could result in restore failures, but in practice I think very few users will notice the difference, since the use case is pretty marginal anyway and few users will be relying on features that have not historically worked. Amit Kapila, reviewed by Rushabh Lathia, with doc changes by me.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/ba3d39c96921c96de114f6c22a9572bff24708b5">http://git.postgresql.org/pg/commitdiff/ba3d39c96921c96de114f6c22a9572bff24708b5</a></li>
<li>doc: Clarify that file_fdw options require values. Mike Blackwell and Robert Haas
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/54990af616ebb31fb1ae04e8aaf332d483a9e3a5">http://git.postgresql.org/pg/commitdiff/54990af616ebb31fb1ae04e8aaf332d483a9e3a5</a></li>
<li>Allow printf-style padding specifications in log_line_prefix. David Rowley, after a suggestion from Heikki Linnakangas. Reviewed by Albe Laurenz, and further edited by me.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/4334639f4bb9fb88c13b8dd5faca22b207248504">http://git.postgresql.org/pg/commitdiff/4334639f4bb9fb88c13b8dd5faca22b207248504</a></li>
</ul>
<p>Noah Misch a poussé :</p>
<ul>
<li>Use @libdir@ in both of regress/{input,output}/security_label.source. Though @libdir@ almost always matches @abs_builddir@ in this context, the test could only fail if they differed. Back-patch to 9.1, where the test was introduced. Hamid Quddus Akhtar
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/b43b64caea4457c3a901e88e910f7e8badb5035f">http://git.postgresql.org/pg/commitdiff/b43b64caea4457c3a901e88e910f7e8badb5035f</a></li>
<li>pgbench: Tweak documentation. Fabien COELHO
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/825da2aba8ae7a5824e9fb3823125c5c755ea568">http://git.postgresql.org/pg/commitdiff/825da2aba8ae7a5824e9fb3823125c5c755ea568</a></li>
<li>pgbench: Correct for bias in --rate schedule generation. Previous code gave a mean delay 0.44% below target. This change also has the effect of increasing the maximum possible delay. Fabien COELHO
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/c2df45a37cd9e32815fe2786cbb3ef905daaa7d2">http://git.postgresql.org/pg/commitdiff/c2df45a37cd9e32815fe2786cbb3ef905daaa7d2</a></li>
</ul>
<p>Alvaro Herrera a poussé :</p>
<ul>
<li>Fix pgindent comment breakage
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/b2fc4d6142033e361dee91388d9515be3633763c">http://git.postgresql.org/pg/commitdiff/b2fc4d6142033e361dee91388d9515be3633763c</a></li>
</ul>
<p>Andrew Dunstan a poussé :</p>
<ul>
<li>Fix erroneous statements about multiply specified JSON columns. The behaviour in json_populate_record() and json_populate_recordset() was changed during development but the docs were not.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/d70f8d5f1b8bfa62a34b79445faae39acdb0363d">http://git.postgresql.org/pg/commitdiff/d70f8d5f1b8bfa62a34b79445faae39acdb0363d</a></li>
<li>Ensure installation dirs are built before contents are installed. Cédric Villemain
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/d942f9d9283f831fc74ed3cf60e6c8362274b36e">http://git.postgresql.org/pg/commitdiff/d942f9d9283f831fc74ed3cf60e6c8362274b36e</a></li>
<li>Use a new hstore extension version for added json functions. This should have been done when the json functionality was added to hstore in 9.3.0. To handle this correctly, the upgrade script therefore uses conditional logic by using plpgsql in a DO statement to add the two new functions and the new cast. If hstore_to_json_loose is detected as already present and dependent on the hstore extension nothing is done. This will require that the database be loaded with plpgsql. People who have installed the earlier and spurious 1.1 version of hstore will need to do: ALTER EXTENSION hstore UPDATE; to pick up the new functions properly.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/a18167510f4c385329697588ce5132cbf95779c3">http://git.postgresql.org/pg/commitdiff/a18167510f4c385329697588ce5132cbf95779c3</a></li>
<li>Fix makefile broken by hstore fix.
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/42bf7fc1de4d25c92b244fabe1a6b1cbec99f151">http://git.postgresql.org/pg/commitdiff/42bf7fc1de4d25c92b244fabe1a6b1cbec99f151</a></li>
</ul>
<p>Fujii Masao a poussé :</p>
<ul>
<li>Correct comment of pgbench "filler" columns. Pavan Deolasee
<a target="_blank" href="http://git.postgresql.org/pg/commitdiff/514b3194e80ec71bdbc92798ea946d7b51ea7ac2">http://git.postgresql.org/pg/commitdiff/514b3194e80ec71bdbc92798ea946d7b51ea7ac2</a></li>
</ul>
<p><strong>Correctifs rejetés (à ce jour)</strong></p>
<ul>
<li>No one was disappointed this week</li>
</ul>
<p><strong>Correctifs en attente</strong></p>
<ul>
<li>Pavel Stehule sent in two more revisions of a patch to improve the performance of AVG on NUMERICs.</li>
<li>Alexander Korotkov sent in two more revisions of a patch to improve GIN performance by adding information to what's stored.</li>
<li>Fabien COELHO sent in three more patches intended to improve pgbench.</li>
<li>Andres Freund sent in a patch to use critical section when ensuring empty pages are initialized during vacuum.</li>
<li>Stas Kelvich sent in another revision of a patch to add point support to the cube extension.</li>
<li>Laurenz Albe sent in another revision of a patch to fix the use of a deprecated OpenLDAP API.</li>
<li>Stas Kelvich sent in a patch to implement support for different storage types for cubes.</li>
<li>Stas Kelvich sent in a patch to fix the split algorithm implemented in the cube extension.</li>
<li>Alvaro Herrera sent in four more revisions of a patch to implement minmax indexing.</li>
<li>Bruce Momjian sent in another revision of a patch to issue a warning when calling SET TRANSACTION outside a transaction block.</li>
<li>Kevin Grittner sent in three more revisions of a patch to implement a record_identical operator.</li>
<li>Heikki Linnakangas sent in two more revisions of a patch to implement freezing without write I/O.</li>
<li>Michael Paquier sent in another revision of a patch to implement REINDEX CONCURRENTLY.</li>
<li>Heikki Linnakangas sent in another revision of a patch to fix two bugs exposed in the attempt to fix the SSI freezing bug. The first is in heap_hot_search_buffer(), where the PredicateLockTuple() call is passed the wrong offset number. heapTuple->t_self is set to the tid of the first tuple in the chain that's visited, not the one actually being read. The second is that CheckForSerializableConflictIn() uses the tuple's t_ctid field instead of t_self to check for existing predicate locks on the tuple. If the tuple was updated, but the updater rolled back, t_ctid points to the aborted dead tuple.</li>
<li>Ivan Lezhnjov IV, Robert Haas, and Karl O. Pinc sent in patches clarifying non-superuser backups.</li>
<li>Andres Freund sent in a patch to improve performance by creating a wait free LW_SHARED acquisition method.</li>
<li>Merlin Moncure sent in a patch to fix an issue where the CPU could go to 100% and stay there; it replaces the spinlock with a read barrier, based on a suggestion from Andres Freund.</li>
<li>Chris Browne sent in a patch to add a "-g / --roles" option for createuser.</li>
<li>Andres Freund sent in another flock of patches intended to be infrastructure for logical changeset replication.</li>
<li>Amit Kapila sent in another revision of a patch to allow changing system parameters via SQL persistently across restarts.</li>
<li>Ian Lawrence Barwick sent in a patch to allow COPY in CSV mode to control whether a quoted zero-length string is treated as NULL.</li>
<li>Gilles Darold sent in another revision of a patch to make psql's pset print out the current configuration if not given an argument.</li>
<li>Nicholas White sent in a patch to use repalloc in the patch that allows LAG and LEAD functions to ignore NULLs if told to.</li>
</ul> | 66.958175 | 1,110 | 0.781204 | eng_Latn | 0.873688 |
c00a08f5a82933b422bf1eaeccaa35045408a400 | 620 | md | Markdown | README.md | CIRENSANGZHU/Decision-Tree | ca94c3a41a82ddd730f82f036243824329fa9336 | [
"MIT"
] | 1 | 2021-12-10T10:02:39.000Z | 2021-12-10T10:02:39.000Z | README.md | CIRENSANGZHU/Decision-Tree | ca94c3a41a82ddd730f82f036243824329fa9336 | [
"MIT"
] | null | null | null | README.md | CIRENSANGZHU/Decision-Tree | ca94c3a41a82ddd730f82f036243824329fa9336 | [
"MIT"
] | 1 | 2021-12-11T01:45:23.000Z | 2021-12-11T01:45:23.000Z | # Decision-Tree
Implements the decision tree algorithm in Python.
The split attribute at each node is chosen by information gain, just as in the classic ID3 algorithm.
Trains a decision tree and stores the resulting tree as a nested dictionary.
The algorithm is tested on the data of Table 4.1 (page 76), Table 4.2 (page 80), and Table 4.3 (page 84) of *Machine Learning* by Prof. Zhou Zhihua.
Other data can be used to train the decision tree as well. A rough sketch of the split-selection step is given below.
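
A minimal illustration of ID3-style split selection by information gain (a sketch only: the function names, the toy data, and the assumption that samples are lists of categorical attribute values are mine, not this repo's actual code):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # Information gain from splitting the samples on attribute index `attr`
    splits = {}
    for row, label in zip(rows, labels):
        splits.setdefault(row[attr], []).append(label)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in splits.values())
    return entropy(labels) - remainder

def best_attribute(rows, labels, candidates):
    # ID3 chooses the candidate attribute with the highest information gain
    return max(candidates, key=lambda a: info_gain(rows, labels, a))

# Toy data (made up, not the book's tables)
rows = [["sunny", "hot"], ["sunny", "cool"], ["rainy", "hot"], ["rainy", "cool"]]
labels = ["no", "no", "yes", "yes"]
print(best_attribute(rows, labels, [0, 1]))  # -> 0: the first column splits the labels perfectly
```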
| 47.692308 | 154 | 0.777419 | eng_Latn | 0.954003 |
c00a81735e0658bd27d7a436fc61d28d4de3e1b2 | 2,461 | md | Markdown | _posts/2021-05-26-[LeetCode]-92-Reverse-Linked-List-II.md | yn-e-si/yn-e-si.github.io | e6c5394e7342a54916b084c2182a7429968291b0 | [
"MIT"
] | null | null | null | _posts/2021-05-26-[LeetCode]-92-Reverse-Linked-List-II.md | yn-e-si/yn-e-si.github.io | e6c5394e7342a54916b084c2182a7429968291b0 | [
"MIT"
] | null | null | null | _posts/2021-05-26-[LeetCode]-92-Reverse-Linked-List-II.md | yn-e-si/yn-e-si.github.io | e6c5394e7342a54916b084c2182a7429968291b0 | [
"MIT"
] | null | null | null | ---
title: "[LeetCode] 92. Reverse Linked List II "
excerpt: "파이썬 알고리즘 인터뷰"
toc: true
toc_sticky: true
use_math: true
categories:
- CS
tags:
- 파이썬알고리즘인터뷰
- 자료구조
- 알고리즘
- 리트코드
- 연결 리스트
---
## 💡 92. Reverse Linked List II
[[Problem source]](https://leetcode.com/problems/reverse-linked-list-ii/)
### 📌 Problem Description
- A linked list is given, together with variables ``left`` marking the start of a range and ``right`` marking its end.
- Reverse the values in the ``left`` ~ ``right`` range within the linked list and return it.
<br/>
### 📌 Constraints
- The number of nodes in the list is n.
- $1 \leq n \leq 500$
- $-500 \leq Node.val \leq 500$
- $1 \leq left \leq right \leq n$
<br/>
### 📌 Problem Analysis
- The points where the reversal starts and ends are given.
- Reverse the portion of the linked list that falls in that range and return the list.
<br/>
### 📌 Algorithm Design
- Declare a ``list``-typed variable ``L`` to hold the ``val`` values of the linked list nodes.
- Collect the ``val`` values into ``L``, reverse the target range with slicing, and convert the list back into a linked list.
<br/>
### 📌 Implementation
```python
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, val=0, next=None):
# self.val = val
# self.next = next
class Solution:
def reverseBetween(self, head: ListNode, left: int, right: int) -> ListNode:
if left == right:
return head
L: List = []
root = node = ListNode(0)
while head:
L.append(head.val)
head = head.next
        # Reverse the left..right range in place using a negative-step slice
        L[left - 1: right] = L[right - len(L) - 1: -(len(L) - left + 2): -1]
for i in L:
node.next = ListNode(i)
node = node.next
return root.next
```
<br/>
### 📌 Reversing the Nodes Iteratively
```python
def reverseBetween(self, head: ListNode, m: int, n: int) -> ListNode:
    # Handle edge cases
if not head or m == n:
return head
root = start = ListNode(None)
root.next = head
    # Point start at the node just before the range, end at its first node
for _ in range(m - 1):
start = start.next
end = start.next
    # Walk the range, moving one node to the front of the sublist per pass
for _ in range(n - m):
tmp, start.next, end.next = start.next, end.next, end.next.next
start.next.next = tmp
return root.next
```
- ``start`` is made to point to the node immediately before the range to reverse (position ``m``, i.e. the ``left`` boundary).
- ``end`` is made to point to ``start.next``, the first node of the range.
- ``start.next.next = tmp`` amounts to re-attaching the node that was previously ``start.next``.
- Repeating this multiple assignment ``right - left`` (``n - m``) times lays out the values of the range in reverse order, as traced below.
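
For instance, here is a trace of the loop on a hypothetical input, the list 1 -> 2 -> 3 -> 4 -> 5 with m = 2 and n = 4 (values chosen purely for illustration):

```python
# Initially: start -> node 1, end -> node 2     list: 1 -> 2 -> 3 -> 4 -> 5
# Pass 1: tmp, start.next, end.next = 2, 3, 4
#         start.next.next = tmp   # node 3 now points at node 2
#                                               list: 1 -> 3 -> 2 -> 4 -> 5
# Pass 2: tmp, start.next, end.next = 3, 4, 5
#         start.next.next = tmp   # node 4 now points at node 3
#                                               list: 1 -> 4 -> 3 -> 2 -> 5
```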
<br/>
<br/>
## 💡 Check Point
- Converting a linked list into a Python list, as in the first solution, is something of a shortcut for linked-list problems.
- Since I solved it with the first approach that came to mind, the solution ended up as a bit of a workaround.
- To get more comfortable with linked lists, I should master solutions like the one above that work by directly reassigning each node's ``next`` pointer. | 23.663462 | 80 | 0.573344 | kor_Hang | 0.999991 |
c00cd353f34b415124065cd1961f976ae8d0d067 | 1,462 | md | Markdown | README.md | mebble/react-express-postgres-starter | fb690b800af77df9d29c1fa5b7b2c69f28e358c0 | [
"MIT"
] | 1 | 2022-03-21T02:39:15.000Z | 2022-03-21T02:39:15.000Z | README.md | mebble/react-express-postgres-starter | fb690b800af77df9d29c1fa5b7b2c69f28e358c0 | [
"MIT"
] | 4 | 2021-03-10T08:33:27.000Z | 2022-03-26T05:30:27.000Z | README.md | mebble/react-express-postgres-starter | fb690b800af77df9d29c1fa5b7b2c69f28e358c0 | [
"MIT"
] | null | null | null | # react-express-postgres-starter
Boilerplate code for a React + Express + PostgreSQL project
## Project Structure
```sh
$ tree -d -I node_modules
.
├── client
└── server
```
### Client
The client code is bundled using [`Parcel`](https://www.npmjs.com/package/parcel). Currently, version 2 alpha is used so that we can make use of the [proxy feature](https://github.com/parcel-bundler/parcel/pull/3281) that Parcel 2 provides.
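
For example, API requests could be proxied to the Express server during development with a `.proxyrc` file along these lines (a sketch only; the `/api` prefix and port 3000 are assumptions, not this repo's actual settings):

```json
{
  "/api": {
    "target": "http://localhost:3000/"
  }
}
```

With this in place, a request to `/api/users` from the client dev server would be forwarded to the Express server, avoiding CORS issues during development.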
### Server
Bundling is done only for the `client` portion of the project. The server code uses Node's module system. `nodemon` is used to watch for file changes in the server during development.
### Build Tool
The top-level build scripts are kept clean by using `npm-run-all`. The client-side build is mostly performed by Parcel.
## Setup for Development
Install dependencies
```sh
$ npm run install:dependencies
```
### Developing
In one terminal
```sh
$ npm run dev:client
```
In a separate terminal
```sh
$ npm run dev:server
```
### Testing
...
### Building and Running
Only the client needs to be built. Its built files are then served as static files by the server.
```sh
$ npm run build:client
$ npm run start:server
```
This script cleans before building and running the app. Recommended for production use.
```sh
$ npm run start:prod
```
### Cleaning
This will run the clean script for all parts of the project, i.e. the root, client and server directories.
```sh
$ npm run clean
```
| 19.756757 | 240 | 0.72093 | eng_Latn | 0.996642 |
c00d8e867ddcbdaa0deabac2c1b776965a654a05 | 1,257 | md | Markdown | _posts/2021-05-28-332460740.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | _posts/2021-05-28-332460740.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | _posts/2021-05-28-332460740.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | ---
title: "펫로스 사랑한다 사랑한다 사랑한다"
date: 2021-05-28 06:23:38
categories: [Domestic Books, Hobbies & Leisure]
image: https://bimage.interpark.com/goods_image/0/7/4/0/332460740s.jpg
description: ● So that a sad farewell can become a happy memory. I love you, I love you. And I love you. Tumblbug funding reached 1000%! A book for ten million pet guardians. The deep feelings of loss, depression, and self-blame that come after a parting, as strong as the happiness of the time spent with a companion animal, are called Pet Loss Syndrome
---
## **Details**
- **ISBN: 9788926898833**
- **Publisher: 이담북스**
- **Publication date: 2020-04-30**
- **Author: 심용희**
------
## **Summary**

● So that a sad farewell can become a happy memory. I love you, I love you. And I love you. Tumblbug funding reached 1000%! A book for ten million pet guardians. The deep feelings of loss, depression, and self-blame that come after a parting, as strong as the happiness of the time spent with a companion animal, are called Pet Loss Syndrome. For every pet guardian this period is bound to arrive someday; this little book is packed with courage and comfort so that it can be gotten through wisely and the storm of emotions can be fully accepted. It is offered to guardians who have said goodbye to a beloved companion animal, guardians facing a parting with an animal family member, and anyone who wants to stand by the pain of a family member, partner, or friend struggling after losing a pet.
------
펫로스 사랑한다 사랑한다 사랑한다
------
## **Reviews**

3.0 김-미 Overcoming pet loss 2020.10.08 <br/>5.0 정-혜 I want to get through this 2020.10.08 <br/>5.0 최-준 It's good 2020.09.04 <br/>5.0 양-성 Jjibi, I love you 2020.09.01 <br/>5.0 서-경 I'm enjoying it 2020.08.18 <br/>5.0 이-화 A comforting book 2020.05.21 <br/>
c00d97751e62140c6ab2144bda83bb44c18a6d77 | 2,795 | md | Markdown | help/using/about/usermanagement.md | floflo4922/journeys.en | 1ceefe48555265f4595d8f04dc2b453627fd9809 | [
"MIT"
] | null | null | null | help/using/about/usermanagement.md | floflo4922/journeys.en | 1ceefe48555265f4595d8f04dc2b453627fd9809 | [
"MIT"
] | null | null | null | help/using/about/usermanagement.md | floflo4922/journeys.en | 1ceefe48555265f4595d8f04dc2b453627fd9809 | [
"MIT"
] | null | null | null | ---
title: Access management
description: Learn more on access management
page-status-flag: never-activated
uuid: 269d590c-5a6d-40b9-a879-02f5033863fc
contentOwner: sauviat
audience: rns
content-type: reference
topic-tags: journeys
discoiquuid: 5df34f55-135a-4ea8-afc2-f9427ce5ae7b
internal: n
snippet: y
---
# Access management{#concept_rfj_wpt_52b}
## About access management {#about-access-management}
Product profiles are assigned to a set of users that share the same rights within your organization.
In the Admin console, you can assign one of the following out-of-the-box product profiles to your users:
* **[!UICONTROL Limited Access User]**: user with read-only access to journeys and reports. This product profile includes the following rights:
* Read journeys
* Read reports
* **[!UICONTROL Administrators]**: user with access to the administration menus with the possibility to manage journeys, events and reports. This product profile includes the following rights:
* Manage and execute journeys
* Manage events, data sources and actions
* Manage reports
>[!NOTE]
>
>**[!UICONTROL Administrators]** is the only product profile that allows the creation, editing, and publication of transactional messages (or message templates) in Adobe Campaign Standard. This product profile is needed if you use Adobe Campaign Standard to send messages in your journeys.
* **[!UICONTROL Standard User]**: user with basic access such as journey management. This product profile includes the following rights:
* Manage and execute journeys
* Manage reports
The mapping between these rights and the different Journey Orchestration functionalities can be found [here](../assets/acs_rights_journeys.pdf).
## Assigning a product profile {#assigning-product-profile}
Product profiles are managed in the Admin console. For more on this, refer to the [Admin Console documentation](https://helpx.adobe.com/enterprise/managing/user-guide.html).
To assign a product profile for a user to access Journey Orchestration:
1. In the Admin Console, select **[!UICONTROL Journey orchestration]**.

1. Select the product profile to which your new user will be linked to.

1. Click **[!UICONTROL Add user]**.
You can also add your new user to a user group to fine-tune the shared set of permissions. For more on this, refer to this [page](https://helpx.adobe.com/enterprise/using/user-groups.html).

1. Type in the email address of your new user, then click **[!UICONTROL Save]**.

Your user should then receive an email redirecting them to your Journey Orchestration instance.
c00e2d457e1cd44fdff6cfd2e10eb529bbee7a49 | 379 | md | Markdown | dubbo-example-provider/README.md | gregwhitaker/dubbo-example | 314a4f9cb8c3072d8c0864c72fcd17eff3585b47 | [
"Apache-2.0"
] | 3 | 2018-09-22T02:07:05.000Z | 2019-06-09T18:44:03.000Z | dubbo-example-provider/README.md | gregwhitaker/dubbo-example | 314a4f9cb8c3072d8c0864c72fcd17eff3585b47 | [
"Apache-2.0"
] | null | null | null | dubbo-example-provider/README.md | gregwhitaker/dubbo-example | 314a4f9cb8c3072d8c0864c72fcd17eff3585b47 | [
"Apache-2.0"
] | 1 | 2018-09-23T11:37:07.000Z | 2018-09-23T11:37:07.000Z | # dubbo-example-provider
Dubbo service that exposes the Dubbo API defined in [dubbo-example-api](../dubbo-example-api).
## Building the Provider
From the root project, run the following command to build the provider:
./gradlew build
## Running the Provider
From the root project, run the following command to start the provider:
./gradlew :dubbo-example-provider:run
| 29.153846 | 94 | 0.759894 | eng_Latn | 0.998377 |