| Column | Type | Range / values |
| ------------------------------------------ | ------- | ----------------- |
| hexsha | string | length 40–40 |
| size | int64 | 5 – 1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3–344 |
| max_stars_repo_name | string | length 5–125 |
| max_stars_repo_head_hexsha | string | length 40–78 |
| max_stars_repo_licenses | sequence | length 1–11 |
| max_stars_count | int64 | 1 – 368k |
| max_stars_repo_stars_event_min_datetime | string | length 24–24 |
| max_stars_repo_stars_event_max_datetime | string | length 24–24 |
| max_issues_repo_path | string | length 3–344 |
| max_issues_repo_name | string | length 5–125 |
| max_issues_repo_head_hexsha | string | length 40–78 |
| max_issues_repo_licenses | sequence | length 1–11 |
| max_issues_count | int64 | 1 – 116k |
| max_issues_repo_issues_event_min_datetime | string | length 24–24 |
| max_issues_repo_issues_event_max_datetime | string | length 24–24 |
| max_forks_repo_path | string | length 3–344 |
| max_forks_repo_name | string | length 5–125 |
| max_forks_repo_head_hexsha | string | length 40–78 |
| max_forks_repo_licenses | sequence | length 1–11 |
| max_forks_count | int64 | 1 – 105k |
| max_forks_repo_forks_event_min_datetime | string | length 24–24 |
| max_forks_repo_forks_event_max_datetime | string | length 24–24 |
| content | string | length 5 – 1.04M |
| avg_line_length | float64 | 1.14 – 851k |
| max_line_length | int64 | 1 – 1.03M |
| alphanum_fraction | float64 | 0 – 1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01 – 1 |
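The schema above matches a Hugging Face `datasets`-style dump. Assuming such a dataset, a minimal sketch of loading it and filtering rows by language-ID confidence follows; the dataset ID is a placeholder, not the actual source of this dump:

```python
# Minimal sketch: load a dataset with the schema above and keep
# high-confidence English rows. The dataset ID is a hypothetical
# placeholder, not the real source of this dump.
from datasets import load_dataset

ds = load_dataset("user/markdown-dump", split="train")  # hypothetical ID

# Keep rows whose language ID is English with probability >= 0.9.
english = ds.filter(lambda row: row["lid"] == "eng_Latn" and row["lid_prob"] >= 0.9)

print(len(english), "rows retained")
```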
b983d19c9ee9bf8ea95033b113523547aec44707
2,474
md
Markdown
_posts/2015-11-15-battleship-an-online-pygame.md
shiv19/shiv19.github.io
6cacaa2f33b0808e8d03d29ef78b25e16213c28b
[ "MIT" ]
null
null
null
_posts/2015-11-15-battleship-an-online-pygame.md
shiv19/shiv19.github.io
6cacaa2f33b0808e8d03d29ef78b25e16213c28b
[ "MIT" ]
null
null
null
_posts/2015-11-15-battleship-an-online-pygame.md
shiv19/shiv19.github.io
6cacaa2f33b0808e8d03d29ef78b25e16213c28b
[ "MIT" ]
1
2017-05-03T04:02:25.000Z
2017-05-03T04:02:25.000Z
---
layout: post
title: BattleShip Game
date: 2015-11-15 05:09
author: multishiv19
comments: true
category: [projects]
tags: [Online Game, Projects, PyGame, Python]
description: I revived this classic old game using CodeSkulptor
---

![Post Header Image]({{ site.baseurl }}/assets/img/battleship/BattleShip-1024x307.png "BattleShip")

<p>BattleShip is a simple game where the player gets to find and sink all 3 battleships of his/her opponent. This game depends on the luck of the player, as it is purely a guessing game. The player can choose to play against the AI, against another player, or against the bonus AI options that I've provided. First, the player places 3 battleships on the 5x5 game board. Then he/she tries to guess the locations of the opponent's battleships in a turn-based manner.</p>

<p>This game was built using the SimpleGUI module available in <a href="http://www.codeskulptor.org/" target="_blank">CodeSkulptor</a> Python. I had initially built a console version of this game while taking a course at <a href="https://www.codecademy.com/learn" target="_blank">Codecademy</a>. Later, I learnt to make GUI-based programs using SimpleGUI while taking a course named <a href="https://www.coursera.org/course/interactivepython1" target="_blank">Introduction to Interactive Programming in Python</a> by <a href="http://www.rice.edu/" target="_blank">Rice University</a> at <a href="https://www.coursera.org/" target="_blank">Coursera</a>.</p>

<p>You can view the source code of the game by viewing the page source of the game page.</p>

<p>This game was last modified at 11:13 AM IST, 26/10/2014.</p>

<p><a href="{{ site.baseurl }}/assets/battleship/" target="_blank">CLICK HERE</a> to play the game.</p>

<h2>Screenshots:</h2>

![Front-end]({{ site.baseurl }}/assets/img/battleship/Frontend1-300x201.png "Front-end")<br/>Front-end<br/><br/>
![Menu]({{ site.baseurl }}/assets/img/battleship/Menu1-300x202.png "Menu")<br/>Menu<br/><br/>
![Place battleship]({{ site.baseurl }}/assets/img/battleship/Place-battleship1-300x200.png "Place battleship")<br/>Place battleship<br/><br/>
![Find the Ship]({{ site.baseurl }}/assets/img/battleship/Find-the-Ship1-300x200.png "Find the Ship")<br/>Find the Ship<br/><br/>
![Multiplayer]({{ site.baseurl }}/assets/img/battleship/Multiplayer1-300x200.png "Multiplayer")<br/>Multiplayer Ship Placement<br/><br/>
![Winner]({{ site.baseurl }}/assets/img/battleship/Winner1-300x200.png "Winner")<br/>Winner<br/><br/>
77.3125
655
0.739693
eng_Latn
0.889687
b98433f2be7b17972c51ae4946af5cdfddeb7309
383
md
Markdown
windows-driver-docs-pr/install/cm-create-devnode-ex.md
hugmyndakassi/windows-driver-docs
aa56990cc71e945465bd4d4f128478b8ef5b3a1a
[ "CC-BY-4.0", "MIT" ]
1
2022-02-07T12:25:23.000Z
2022-02-07T12:25:23.000Z
windows-driver-docs-pr/install/cm-create-devnode-ex.md
hugmyndakassi/windows-driver-docs
aa56990cc71e945465bd4d4f128478b8ef5b3a1a
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/install/cm-create-devnode-ex.md
hugmyndakassi/windows-driver-docs
aa56990cc71e945465bd4d4f128478b8ef5b3a1a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: CM_Create_DevNode_Ex
description: CM_Create_DevNode_Ex
keywords: ["CM_Create_DevNode_ExA", "CM_Create_DevNode_ExW", "CM_Create_DevNode_Ex Device and Driver Installation"]
topic_type:
- apiref
api_name:
- CM_Create_DevNode_Ex
- CM_Create_DevNode_ExW
api_type:
- NA
ms.date: 10/17/2018
---

# CM_Create_DevNode_Ex

This function is reserved for system use.
21.277778
116
0.772846
yue_Hant
0.780301
b984fdd36062523d028e05ce79177611ea952b5f
11,812
md
Markdown
bangumi/info.md
zxc135781/bilibili-API-collect
a59cc11a923b6ce7726a272702c11426ae40d1a5
[ "CC-BY-4.0" ]
1
2020-12-30T18:37:23.000Z
2020-12-30T18:37:23.000Z
bangumi/info.md
summerthread/bilibili-API-collect
e6de156df7d3abfafc86ea1bb2d98a6fce5c9657
[ "CC-BY-4.0" ]
null
null
null
bangumi/info.md
summerthread/bilibili-API-collect
e6de156df7d3abfafc86ea1bb2d98a6fce5c9657
[ "CC-BY-4.0" ]
null
null
null
# Bangumi Basic Information

> http://api.bilibili.com/pgc/view/web/season

*Request method: GET*

**Parameters:**

| Field | Type | Description | Required | Notes |
| --------- | ---- | ------------ | -------- | --------------------------------- |
| season_id | url | Season ssID | Optional | Provide either season_id or ep_id |
| ep_id | url | Episode epID | Optional | Provide either season_id or ep_id |

**JSON response:**

Root object:

| Field | Type | Description | Notes |
| ------- | ---- | ------------- | --------------------------- |
| code | num | Return code | 0: success<br />-404: error |
| message | str | Error message | Defaults to success |
| ttl | num | 1 | Purpose unclear |
| result | obj | Response body | |

`result` object:

| Field | Type | Description | Notes |
| --------------- | ----- | ------------------------------------------------- | ------------------------ |
| activity | obj | Associated activity | |
| alias | str | Empty | Purpose unclear |
| bkg_cover | str | Page background image URL | Empty if none |
| cover | str | Series cover image URL | |
| episodes | array | List of main episodes | |
| evaluate | str | Synopsis | |
| jp_title | str | Empty | Purpose unclear |
| link | str | Synopsis page URL | |
| media_id | num | Series mdID | |
| mode | num | 2 | Purpose unclear |
| new_ep | obj | Update info | |
| payment | obj | Membership & payment info | Absent if not applicable |
| positive | obj | | |
| publish | obj | Release info | |
| rating | obj | Rating info | Absent if not applicable |
| record | str | Registration number | Empty if none |
| rights | obj | Attribute flags | |
| season_id | num | Series ssID | |
| season_title | str | Season title | |
| seasons | array | Info on all seasons in the same series | |
| section | array | Non-feature content such as extras, PVs, specials | Absent if not applicable |
| series | obj | Series info | |
| share_copy | str | 《{title}》 + {note} | |
| share_sub_title | str | Note | |
| share_url | str | Playback page URL | |
| show | obj | Web full-screen flag | |
| square_cover | str | Square cover image URL | |
| stat | obj | Statistics | |
| status | num | | |
| subtitle | str | Season subtitle | |
| title | str | Season title | |
| total | num | Total number of main episodes | Ongoing: usually -1<br />Finished: positive integer |
| type | num | Series type | 1: bangumi (anime)<br />2: movie<br />3: documentary<br />4: guochuang (Chinese animation)<br />5: TV drama<br />7: variety show |
| up_info | obj | Uploader info | Absent if not applicable |

`activity` object in `result`:

| Field | Type | Description | Notes |
| ----------- | ---- | -------------- | --------------- |
| head_bg_url | str | Empty | Purpose unclear |
| id | num | Activity ID | |
| title | str | Activity title | |

`episodes` array in `result`:

| Index | Type | Description | Notes |
| ----- | ---- | ------------------ | --------------- |
| 0 | obj | Main episode 1 | |
| n | obj | Main episode (n+1) | Listed in order |
| … | obj | | |

Objects in the `episodes` array:

| Field | Type | Description | Notes |
| ------------ | ---- | --------------------------------------------- | ------------------------------------------- |
| aid | num | avID of the episode's video | |
| badge | str | Badge text | e.g. 会员 (member), 限免 (limited-time free) |
| badge_info | obj | | |
| badge_type | num | | |
| bvid | str | bvID of the episode's video | |
| cid | num | Video CID | |
| cover | str | Episode cover URL | |
| dimension | obj | Resolution info | |
| from | str | | |
| id | num | Episode epID | |
| link | str | Episode page URL | |
| long_title | str | Full episode title | |
| pub_time | num | Release time | Unix timestamp |
| pv | num | 0 | Purpose unclear |
| release_date | str | Empty | Purpose unclear |
| rights | obj | | |
| share_copy | str | 《{title}》 + episode n + {full episode title} | |
| share_url | str | Episode page URL | |
| short_link | str | Short link to the episode page | |
| status | num | | |
| subtitle | str | Episode subtitle | View-count text |
| title | str | Episode title | |
| vid | str | Episode vID | vupload_ + {CID} |

`new_ep` object in `result`:

| Field | Type | Description | Notes |
| ------ | ---- | --------------------------- | ----------------- |
| desc | str | Update note | |
| id | num | epID of the latest episode | |
| is_new | num | Whether it is a new release | 0: no<br />1: yes |
| title | str | Title of the latest episode | |

`payment` object in `result`:

| Field | Type | Description | Notes |
| ------------------- | ---- | ----------- | ----- |
| discount | num | | |
| pay_type | obj | | |
| price | str | | |
| promotion | str | | |
| tip | str | | |
| vip_discount | num | | |
| vip_first_promotion | str | | |
| vip_promotion | str | | |

`positive` object in `result`:

| Field | Type | Description | Notes |
| ----- | ---- | ----------- | ----- |
| id | num | | |
| title | str | | |

`publish` object in `result`:

| Field | Type | Description | Notes |
| --------------- | ---- | ------------------------- | -------------------------------- |
| is_finish | num | Completion status | 0: ongoing<br />1: finished |
| is_started | num | Whether released | 0: not released<br />1: released |
| pub_time | str | Release time | YYYY-MM-DD hh:mm:ss |
| pub_time_show | str | Release time display text | |
| unknow_pub_date | num | 0 | Purpose unclear |
| weekday | num | 0 | Purpose unclear |

`rating` object in `result`:

| Field | Type | Description | Notes |
| ----- | ---- | ----------------------- | ----- |
| count | num | Total number of ratings | |
| score | num | Score | |

`rights` object in `result`:

| Field | Type | Description | Notes |
| ----------------- | ---- | -------------- | ---------------------------------------- |
| allow_bp | num | | |
| allow_bp_rank | num | | |
| allow_download | num | | |
| allow_review | num | | |
| area_limit | num | | |
| ban_area_show | num | | |
| can_watch | num | | |
| copyright | str | Copyright flag | bilibili: licensed<br />dujia: exclusive |
| forbid_pre | num | | |
| is_cover_show | num | | |
| is_preview | num | | |
| only_vip_download | num | | |
| resource | str | | |
| watch_platform | num | | |

`seasons` array in `result`:

| Index | Type | Description | Notes |
| ----- | ---- | ------------------------------- | --------------- |
| 0 | obj | Season 1 of the same series | |
| n | obj | Season (n+1) of the same series | Listed in order |
| … | obj | | |

Objects in the `seasons` array:

| Field | Type | Description | Notes |
| ------------ | ---- | ----------- | ----- |
| badge | str | | |
| badge_info | obj | | |
| badge_type | num | | |
| cover | str | | |
| media_id | str | | |
| new_ep | num | | |
| season_id | obj | | |
| season_title | num | | |
| season_type | str | | |
| stat | obj | | |

`section` array in `result`:

| Index | Type | Description | Notes |
| ----- | ---- | ------------------------- | --------------- |
| 0 | obj | Other content block 1 | |
| n | obj | Other content block (n+1) | Listed in order |
| … | obj | | |

Objects in the `section` array:

| Field | Type | Description | Notes |
| ---------- | ----- | ------------- | ----- |
| episode_id | num | 0 | |
| episodes | array | Block content | |
| id | num | Block ID? | |
| title | str | Block title | |
| type | num | ? | |

`series` object in `result`:

| Field | Type | Description | Notes |
| ------------ | ---- | ----------- | ----- |
| series_id | num | Series ID | |
| series_title | str | Series name | |

`show` object in `result`:

| Field | Type | Description | Notes |
| ----------- | ---- | ------------------- | ----------------------------- |
| wide_screen | num | Whether full-screen | 0: normal<br />1: full-screen |

`stat` object in `result`:

| Field | Type | Description | Notes |
| --------- | ---- | -------------- | ----- |
| coins | num | Coin count | |
| danmakus | num | Danmaku count | |
| favorites | num | Favorite count | |
| likes | num | Like count | |
| reply | num | Reply count | |
| share | num | Share count | |
| views | num | View count | |

`up_info` object in `result`:

| Field | Type | Description | Notes |
| ----------- | ---- | ----------------- | ----- |
| avatar | str | Avatar image URL | |
| follower | num | Follower count | |
| is_follow | num | 0 | |
| mid | num | Uploader UID | |
| pendant | obj | | |
| theme_type | num | 0 | |
| uname | str | Uploader nickname | |
| verify_type | num | | |
| vip_status | num | | |
| vip_type | num | | |
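Since the document only describes the response shape, a minimal Python sketch of querying the endpoint may help; the `season_id` value below is an arbitrary example, not one taken from the document:

```python
# Minimal sketch: fetch basic bangumi info from the endpoint documented
# above. The season_id used here is an arbitrary example value.
import requests

resp = requests.get(
    "http://api.bilibili.com/pgc/view/web/season",
    params={"season_id": 12548},  # or pass {"ep_id": ...} instead
    timeout=10,
)
data = resp.json()

if data["code"] == 0:
    result = data["result"]
    print(result["title"], "-", result["new_ep"]["desc"])
else:
    print("error:", data["code"], data["message"])
```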
42.336918
102
0.234931
eng_Latn
0.091593
b9851e0e0d31103f0f6c698173d0195f6ed2ce1c
831
md
Markdown
docs/guide/README.md
lenarx/yii2-elasticsearch
52a5e327012244d66a574bbfd6911c3edbedddef
[ "BSD-3-Clause" ]
1
2019-12-20T12:07:50.000Z
2019-12-20T12:07:50.000Z
docs/guide/README.md
lenarx/yii2-elasticsearch
52a5e327012244d66a574bbfd6911c3edbedddef
[ "BSD-3-Clause" ]
null
null
null
docs/guide/README.md
lenarx/yii2-elasticsearch
52a5e327012244d66a574bbfd6911c3edbedddef
[ "BSD-3-Clause" ]
2
2020-05-21T18:20:28.000Z
2020-05-21T18:32:57.000Z
Elasticsearch Extension for Yii 2
=================================

This extension provides the [elasticsearch](https://www.elastic.co/products/elasticsearch) integration for the Yii2 framework. It includes basic querying/search support and also implements the `ActiveRecord` pattern that allows you to store active records in elasticsearch.

Getting Started
---------------

* [Installation](installation.md)

Usage
-----

* [Data Mapping & Indexing](mapping-indexing.md)
* [Using the Query](usage-query.md)
* [Using the ActiveRecord](usage-ar.md)
* [Working with data providers](usage-data-providers.md)

Additional topics
-----------------

* [Using the elasticsearch DebugPanel](topics-debug.md)
* Relation definitions with records whose primary keys are not part of attributes
* Fetching records from different indexes/types
31.961538
126
0.726835
eng_Latn
0.929111
b9855d7e19420afd7889fbe10bcb1f3c397a29bc
916
md
Markdown
docs/error-messages/compiler-errors-1/compiler-error-c2021.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-1/compiler-error-c2021.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-1/compiler-error-c2021.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
1
2020-05-28T15:54:57.000Z
2020-05-28T15:54:57.000Z
---
title: Compiler error C2021
ms.date: 11/04/2016
f1_keywords:
- C2021
helpviewer_keywords:
- C2021
ms.assetid: 064f32e2-3794-48d5-9767-991003dcb36a
ms.openlocfilehash: 24463abcf123fda285356c86e3394d7274f2f6c8
ms.sourcegitcommit: 16fa847794b60bf40c67d20f74751a67fccb602e
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 12/03/2019
ms.locfileid: "74751040"
---

# <a name="compiler-error-c2021"></a>Compiler error C2021

expected exponent value. Found 'character'

The character used as the exponent of a floating-point constant is not a valid number. Make sure you use an exponent that is within range.

## <a name="example"></a>Example

The following sample generates C2021:

```cpp
// C2021.cpp
float test1=1.175494351E;   // C2021
```

## <a name="example"></a>Example

Possible resolution:

```cpp
// C2021b.cpp
// compile with: /c
float test2=1.175494351E8;
```
22.9
155
0.768559
ita_Latn
0.720601
b985777f2422685556aa88e91dd83aca807d122b
8,440
md
Markdown
src/content/SG-Compliance-Edition.md
exceleratesystems/Excelerate-Systems-Blog-
6f5b975698f41cd14fe30dd0d93af309127715df
[ "MIT" ]
1
2020-11-03T11:10:55.000Z
2020-11-03T11:10:55.000Z
src/content/SG-Compliance-Edition.md
exceleratesystems/Excelerate-Systems-Blog-
6f5b975698f41cd14fe30dd0d93af309127715df
[ "MIT" ]
null
null
null
src/content/SG-Compliance-Edition.md
exceleratesystems/Excelerate-Systems-Blog-
6f5b975698f41cd14fe30dd0d93af309127715df
[ "MIT" ]
1
2020-12-05T02:31:15.000Z
2020-12-05T02:31:15.000Z
---
layout: post
title: 'Search Guard - Compliance Edition'
author: [Aadel Benyoussef]
tags:
  - Compliance
  - Search-guard
  - Data privacy
  - Data protection
  - Elasticsearch
image: img/sg.jpg
date: '2018-06-13T23:46:37.121Z'
draft: false
excerpt: We are very proud to announce Search Guard™ Compliance Edition!
---

We are very proud to announce Search Guard™ Compliance Edition! It adds features on top of Search Guard Enterprise Edition that will help you comply with regulations such as GDPR, SOX, and ISO.

## Integrity of the Elasticsearch installation and the Search Guard configuration

The first feature, and not the least: Search Guard lets you monitor and guarantee the integrity of your Elasticsearch installation and your Search Guard configuration.

### Elasticsearch installation integrity

When a node starts, Search Guard can emit an event that lists:

- the settings in elasticsearch.yml
- the environment variables used in elasticsearch.yml
- the Java properties
- the files used by Search Guard, for example PEM certificates, keystores, or Kerberos keytabs

In addition, Search Guard computes a hash of these settings, so you can immediately detect changes made to your Elasticsearch installation.

### Search Guard configuration integrity

Similarly, integrity tracking of the Search Guard configuration records read and write access to the Search Guard configuration. You can see who accessed the configuration and which changes were made. Want to know when a particular role was added, or what permissions a particular role had at a given point in time? Our new configuration tracking lets you do exactly that. If you store these events in an immutable index, you will always stay in control of your security configuration.

## Read history

This feature tells you exactly which fields in your documents were accessed, and by which user. It helps you answer several GDPR questions. Under the GDPR, you are obliged to inform your users or customers precisely who in your company can access their personal details, and for what purposes. With the read history you can do exactly that: you can monitor and trace access to confidential details, for example first name, last name, email address, etc., and store the data in Elasticsearch, Kafka, Cassandra, or any other system of your choice.

## Write history

Similarly, the write history lets you trace the creation and deletion of documents, and the changes made to documents in your cluster. Search Guard records the changes in JSON Patch format. This way you can tell how a document was modified over time, which is extremely useful, because if you store personally identifiable information (PII), your customer can:

- request information about which personally identifiable data is stored
- request information about when the data was created
- request that changes be made to the personally identifiable data
- request that the data be deleted ("right to be forgotten")

With the write history, you can keep an audit trail for all of these events and make it available to your customer.

Complying with the GDPR is now much easier thanks to the read and write history, but we do not stop there. In the Compliance Edition we have added many new features to help you comply.

## Field anonymization

One of Search Guard's features is field-level security, which lets you filter out sensitive fields in your documents. If you configure a blacklist of fields, Search Guard filters them accordingly. In some cases, however, you may need to anonymize fields rather than remove them. To do so, the plaintext value is replaced with a consistent, secure hash. Until now, this required storing a field in two forms: as plaintext and as a hash value. The usual way to replace a value with a hash at ingest time is to use the Logstash "fingerprint" plugin, for example. Thanks to the field anonymization feature offered by Search Guard, this is no longer necessary. You can store the plaintext value, and Search Guard replaces it with a hash when you run a query.

Field anonymization is configured per role: you can, for example, specify that a user with administrator or manager rights sees the plaintext value, while all other users only see the hash.

Search Guard uses Blake2bDigest to compute the hash. This algorithm strikes a good balance between speed and security, and has built-in support for salted hashing, which is configurable in elasticsearch.yml with a minimum salt length of 32 characters for security reasons.

Anonymization is very easy to use and configure, but it also has an impact on the GDPR, since anonymized data is not personally identifiable information (PII). In other words, if a user only sees the hashed version of the plaintext data, several compliance rules no longer apply. Consequently, anonymized fields are automatically excluded from Search Guard's read history!

## Immutable indices: data integrity

Search Guard offers two key features to protect the integrity of your data. With TLS, you can ensure that your data is not modified in transit. And by implementing role-based access control, you can specify which users are allowed to create, modify, or delete data.

With immutable indices, data integrity takes on a new dimension. If an index is marked as immutable, you can create documents in the index, but you can never modify them again. This follows the "write once, read many" technique and is very useful if you must ensure that data cannot be changed once written. All kinds of audit and compliance events fall into this category: once written, you do not want them to be modified.

Marking an index as immutable is easy: just list it in elasticsearch.yml (see the sketch at the end of this post).

In addition, Search Guard makes sure your indices are not modified directly. The following operations are forbidden on an immutable index:

- deleting the index
- opening and closing the index
- reindexing the index
- restoring from snapshots

## Audit event routing

You can store audit events in several endpoints, including Elasticsearch, webhooks, Kafka, Cassandra, and many others. Until now, the audit logging module let you configure and use a single endpoint for all events. The Compliance Edition now offers flexible event routing and multi-endpoint targeting. For example, you can store the read- and write-history audit events in Kafka while sending all security events, such as failed login attempts or missing privileges, to a SIEM system. Routing is based on the event category, and events can be stored in several endpoints at once. The audit trail and compliance features are therefore extremely flexible with regard to the storage endpoint. The new configuration is fully backward compatible, which means that if you were using the previous audit module, you can introduce the new targeting options step by step.
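The post's original elasticsearch.yml snippet for immutable indices (an image) did not survive extraction; below is a plausible reconstruction. The setting name and index patterns are assumptions based on Search Guard's compliance configuration style, so check the official documentation before relying on them:

```yaml
# Hypothetical reconstruction of the lost snippet: marking indices as
# immutable in elasticsearch.yml. The setting name and patterns are
# assumptions, not taken from the original post.
searchguard.compliance.immutable_indices:
  - "compliance-events-*"
  - "audit-*"
```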
80.380952
849
0.810782
fra_Latn
0.995289
b985cef33b6e8c481555aab78a19553f295cef0a
27,339
markdown
Markdown
_posts/2007-10-31-adaptive-content-platform-and-method-of-using-same.markdown
api-evangelist/patents-2007
da723589b6977a05c0119d5476325327da6c5a5c
[ "Apache-2.0" ]
1
2017-11-15T11:20:53.000Z
2017-11-15T11:20:53.000Z
_posts/2007-10-31-adaptive-content-platform-and-method-of-using-same.markdown
api-evangelist/patents-2007
da723589b6977a05c0119d5476325327da6c5a5c
[ "Apache-2.0" ]
null
null
null
_posts/2007-10-31-adaptive-content-platform-and-method-of-using-same.markdown
api-evangelist/patents-2007
da723589b6977a05c0119d5476325327da6c5a5c
[ "Apache-2.0" ]
2
2019-10-31T13:03:32.000Z
2020-08-13T12:57:02.000Z
--- title: Adaptive content platform and method of using same abstract: An adaptive content platform includes one or more content-enabled, dependent applications, each of which includes a user interface and business logic. A services layer, which is interfaced with the dependent applications and a software infrastructure, provides one or more services that are usable by the dependent applications. url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=07971144&OS=07971144&RS=07971144 owner: OpenPages number: 07971144 owner_city: Waltham owner_country: US publication_date: 20071031 --- This application is a continuation of application Ser. No. 10 256 613 now U.S. Pat. No. 7 356 771 B2 filed Sep. 26 2002 and claims priority from U.S. Provisional Patent Application Ser. No. 60 394 441 entitled Adaptive Content Platform and filed on Jul. 9 2002. The above are incorporated by reference herein for all purposes. This invention relates to software development and more particularly to software development platforms. For a suite of applications developed for a common software platform e.g. an application server platform each application within the suite typically includes a group of services e.g. content management services workflow services publishing services search and query services that are common amongst the applications in the suite. As the software platform does not allow the services of one application to be shared by another these services must be coded for and included in each application written for the software platform even though multiple applications use common services. According to an aspect of this invention an adaptive content platform includes one or more content enabled dependent applications thus forming an application layer each of which includes a user interface and business logic. A services layer which is interfaced with the content enabled dependent applications and a software infrastructure e.g. an application server provides one or more services that are usable by the content enabled dependent applications. One or more of the following features may be included. The services layer includes a unique application programming interface for each of the one or more services such that dependent applications using a specific service make requests through the application programming interface assigned to that service. The services include a content management service for storing and managing files which includes a repository service for storing files and a search service for allowing users to search files stored by the repository service for example. The services also include a workflow and collaboration service for managing projects and users which includes a workflow service for managing the workflow of files a user management and authentication service for managing the users and user groups and an events and notification service for managing and broadcasting notifications to the users that were generated by the services for example. Additionally the services include a multi modal content creation service for manual creation and automated importation and conversion of files which includes a transformation and content handling service for converting the formats of files a desktop integration service for manual contribution of content and an import service for facilitating file importation from external systems for example. 
The services further include a multi channel deployment service for publishing files to one or more publishing channels which includes a publishing service for publishing files to one or more publishing channels such as a web site an email broadcast a wireless broadcast a syndication stream or a printed publication for example a dynamic publishing service and a static publishing service for example. A data layer is interfaced with the software infrastructure such that the data layer includes one or more databases. The above described adaptive content platform may be implemented in a distributed computing system in that a first or local computing device executes the content enabled dependent applications and a second or remote computing device executes the services layer. This second computer may be a web server. In a distributed computing system the first and second computing devices are interconnected by a network such as a local area network the Internet or an intranet for example. The above described adaptive content platform and distributed computing system may be implemented as a method or a sequence of instructions executed by a processor. One or more advantages can be provided from the above. By providing the programmer with a common set of reusable services applications are no longer required to include stand alone services. Since these applications are not required to include services application development is significantly accelerated and deployment is simplified. Additionally by using a common set of services compatibility issues are minimized. Further as applications share a common set of services application size is reduced. In addition by separating an application s services from the application s business logic user interface distributed computing is possible leading to superior scalability and availability. The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features objects and advantages of the invention will be apparent from the description and drawings and from the claims. Referring to an adaptive content platform includes one or more dependent applications each of which includes a user interface e.g. user interface and business logic e.g. business logic . This group of dependent applications may be referred to as an application layer . Each user interface allows a user of the dependent application to access and use the functionality of the dependent application. Business logic performs the functions native to the dependent application. Note that while dependent applications typically include some form of user interface this is not required. Typically the dependent applications are content enabled dependent applications in that they manage and process content e.g. documents images audio clips video clips . Examples of content enabled applications are web content management systems shareholder and regulatory reporting applications corporate marketing and communications systems newspaper publishing systems and so forth. Content enabled applications typically include a combination of content management workflow management and publishing capabilities. Adaptive content platform is a multi tiered software architecture that includes a services layer for interfacing the application layer and a software infrastructure e.g. an application server . Examples of an application server are BEA Weblogic and IBM Websphere both of which implement the Java 2 Enterprise Edition standard J2EE . 
Services layer provides a group of services which are available for use by dependent applications . Examples of these services include content management services search services and file conversion services for example. These services which will be discussed below in greater detail are shared services common to the dependent applications as opposed to each dependent application s native functions which are handled by the dependent application s business logic. Dependent applications may be J2EE Java 2 Enterprise Edition compliant dependent applications that adhere to v1.3 standards and are compatible with and run on a Java 2 Enterprise Edition application server. A data layer is interfaced to the software infrastructure and provides data services for platform . Data layer may provide access to database servers such as Oracle IBM DB2 and Microsoft SQL Server . Further data layer may provide access to file servers such as Microsoft Windows 2000 Servers Microsoft Windows NT Servers and Unix Servers . Additionally data layer may allow access to legacy systems i.e. applications and data that have been inherited from languages platforms and techniques earlier than current technology . Data layer is typically interfaced with an operating system OS layer which includes the operating system that manages the above described layers infrastructures and dependent applications. Examples of compatible operating systems are Windows Solaris and Linux . Typically a web server layer is interfaced with the application layer i.e. dependent applications and allows a user not shown to use and access the functionality of the individual dependent applications from with a web browser e.g. Microsoft Internet Explorer Netscape Navigator . Examples of web server layer are Microsoft Internet Information Server and Apache Web Server . By combining the user interface and business logic of a dependent application with one or more of the services offered by the services layer the functionality of a stand alone independent application can be emulated without the application having to include dedicated services . Concerning the services offered by services layer these services can typically be loosely described as four groups of services namely content management workflow and collaboration multi modal content creation and multi channel deployment each of which will be discussed below in greater detail. Referring to the content management group which stores and manages files and content used by the adaptive content platform may include a repository service and a search service . Repository service works in conjunction with the data layer generally and the database servers the file servers and the legacy systems specifically to store organize and manage files and content hereinafter files . Repository service allows for the production organization and management of numerous content types that define the specific type of files being produced and managed. Additionally repository service allows users administrators to define numerous property fields or meta data fields e.g. release date revision number production date revision date and approval date for example that define and refine the files stored by the data layer. Access to the files managed by repository service can be controlled by regulating the users who can view check out edit print and save a particular file for example. Additionally the data structure in which the files are stored e.g. the directory tree structure is defined and controlled using repository service . 
Typically repository service works in conjunction with a relational database e.g. database that is accessed through data layer . The search service allows a user to search the files stored by the repository service . Searches may be performed on either file properties or content. If the files are stored in a structured database as described above search service may be an SQL structured query language database query engine. Alternatively if the files are stored as HTML or XML Extensible Markup Language based documents search service may use search engine technology to generate a list of relevant documents. The dependent applications described above may access each service offered by content management group e.g. repository service and search service by making the appropriate request of and establishing a connection through the API application programming interface assigned to that particular service. For example API is assigned to repository service and API is assigned to search service . Therefore if a user of a dependent application e.g. dependent application wanted to execute a search for a particular file dependent application would make the appropriate request from API . Referring to the workflow and collaboration group which manages projects and users of the adaptive content platform may include a workflow service a user management and authentication service and an events and notification service . The workflow service allows the administrator or user to control the workflow of files through the adaptive content platform. For example if a file is produced for publishing purposes that file might need to be approved by a midlevel manager prior to it being sent to an upper level manager. Further the upper level manager might have to approve the file prior to it being published or otherwise disseminated. Therefore workflow service could mandate that the file be approved by a midlevel manager prior to it being sent to the higher level manager who approves it prior to publication. Further workflow service may assign time limits for the completion of certain tasks such as the midlevel or upper level review and approval process. The user management and authentication service provides a set of tools to the user administrator that allows them to manage users and user groups. Individual users can be produced and deleted using user management and authentication service . Further the rights and privileges of these individual users also can be controlled and regulated. Additionally these users can be assigned to moved between and deleted from various users groups which are also maintained using user management and authentication service . Further as rights and privileges can be assigned to a user group by adding an individual user to a user group the rights or privileges of an individual user can be efficiently defined. The events and notification service allows for the delivery of notification events generated by the services offered by the applet service layer . These message can be delivered to individual users of the system broadcast to entire user groups or delivered to the various services offered by the applet service layer . As above the dependent applications described above may access each service offered by workflow and collaboration group e.g. workflow service user management and authentication service and the events and notification service by making the appropriate request of and establishing a connection through the API assigned to that particular service. 
For this particular group API is assigned to the workflow service API is assigned to the user management and authentication service and API is assigned to the events and notification service . Referring to the multi modal content creation group which imports and converts files for the adaptive content platform may include a transformation and content handling service an import service and a desktop integration service . The transformation and content handling service provides file format conversion services thus allowing the user to import files of various types and convert them over into a common format e.g. XML and HTML . Converter templates are available for popular applications such as Microsoft Word Microsoft Excel Adobe PDF and Microsoft PowerPoint for example. The import service allows for automated import of files from external systems. Import service is configured to monitor on a periodic basis the files located on a network drive an FTP file transfer protocol site and an HTTP site. When new files are detected on one of these sources the files are automatically imported into the system. Further if a format conversion is required import service will work in conjunction with transformation service to import and convert the file. The desktop integration service allows content to be contributed by users via standard desktop creation tools. These tools include the Microsoft Office suite as well as Adobe and Macromedia applications. The service uses the WEBDAV protocol WEB based Distributed Authoring and Versioning which is an extension of the HTTP protocol to communicate with the desktop tools. As above the dependent applications described above may access each service offered by multi modal content creation group e.g. transformation service import service and desktop integration service by making the appropriate request of and establishing a connection through the API assigned to that particular service. For this particular group API is assigned to the transformation service API is assigned to the import service and API is assigned to the desktop integration service . Referring to the multi channel deployment group which publishes files to one or more publishing channels may include a static publishing service and a dynamic publishing service . The static publishing service allows for proactive publishing of files based on predefined templates. Therefore the structure and format of the file published and the document produced is defined ahead of time and is not varied depending on the content of the document. Additionally the content itself is semi dynamic in that it changes periodically e.g. a few times a week . An example of static documents generated using a static publishing service is a newsroom home page of a corporate web site in which one hundred press releases are currently being displayed. The home page is a collection of one hundred summary links and each link leads to a press release detail page. A corporate communications officer can publish the home page and the one hundred detail pages by invoking the static publishing service which merges the appropriate press release content with the detail page template to generate HTML. By generating the pages using static publishing the communications officer ensures that web site visitors have fast page retrieval since the content is already in HTML format and does not need to be regenerated for every website visitor. 
The dynamic publishing service allows for reactive publishing of files that are dynamically altered based on current conditions user preferences and query results for example. In an online auction house that has one hundred items for sale a dynamic document may be created in response to a user query. For example while one hundred items may be offered the user an avid World War II buff may only be interested in those items that relate to World War II. Therefore the user would enter their search criteria and a dynamic document would be generated that includes fourteen items each of which is related to World War II. This dynamically generated list which itemizes the fourteen items may also specify the starting bid the current bid and the auction ending time for each item. By generating this document with dynamic publishing services documents can be generated that more accurately reflect current conditions. Regardless of whether the static publishing service or dynamic publishing service produce the file the file can be published over various channels such as a web site an email broadcast a wireless broadcast a syndication stream and a printed publication for example. The file can also be published in various formats such as HTML XML and PDF for example. For web site publishing the file being published may be posted to a website so that the file is accessible by various users and guests. If security or access is a concern the file may be published on an intranet which is not remotely accessible or within a restricted access user section of a website. For email broadcasts the file can be published as an attachment to an email that is sent out to a distribution list of individual users. Alternatively the file may be converted into a format e.g. ASCII text and HTML that is easily incorporated into the body of an email. For wireless broadcasts the file can be transmitted to users over a wireless network. This file may be text based such as an email attachment sent to a wireless email device or multimedia based such as a sound file sent to a cellular telephone . For syndication streams the file may be published on data streams that are text based such as streaming messages audio based such as streaming audio video based such as streaming video or multimedia based such as streaming audio video for example. For printed publications the file being published may be printed on traditional printing systems laser printers and distributed using traditional distribution paths e.g. interoffice mail courier or the postal service for example . As above the dependent applications described above access each service offered by multi channel deployment group e.g. static publishing service and dynamic publishing service by making the appropriate request of and establishing a connection through the API assigned to that particular service. For this particular group API is assigned to the static publishing service and API is assigned to the dynamic publishing service . Referring to a distributed computing system is shown which incorporates the adaptive content platform described above. Distributed computing system includes a local or first computing device that executes one or more content enabled dependent applications . As described above each of the dependent applications includes business logic and a user interface . A storage device stores the individual instruction sets and subroutines of dependent applications . 
Storage devices may be a hard disk drive, a tape drive, an optical drive, a RAID array, a random access memory (RAM), or a read-only memory (ROM), for example. Local computing device includes at least one processing unit (not shown) and main memory system (not shown). A remote or second computing device, e.g. a web server, executes the services layer as described above. Typically, services layer is interfaced with a software infrastructure (not shown), which is interfaced with a data layer (not shown), which is interfaced with an OS layer (not shown). A storage device stores the individual instruction sets and subroutines of services layer and any additional required layers or infrastructure. Storage device may be a hard disk drive, a tape drive, an optical drive, a RAID array, a random access memory (RAM), or a read-only memory (ROM), for example. Remote computing device includes at least one processing unit (not shown) and main memory system (not shown). Local computing device and remote computing device are interconnected with a network, such as a LAN (local area network), a WAN (wide area network), the Internet, or an intranet, for example. While the above-described embodiment discusses the deployment of the services layer on a single second computer, other configurations are possible, such as those in which each service or a group of services is deployed on its own dedicated computer. While the above-described embodiment describes a local and a remote computing device, this is not intended to define the physical location of either computing device, and is merely intended to indicate that the second computing device is remote, i.e. separate, from the first computing device. While the above-described embodiment discusses the use of content-enabled dependent applications, other configurations are possible, such as data-enabled dependent applications, i.e. those designed to manage data as opposed to content. While the above-described embodiment specifies a software infrastructure that is an application server, other configurations are possible, such as a general-purpose operating system (e.g. UNIX, Windows 2000) or a special-purpose operating system (e.g. an embedded OS, a real-time OS). While the above-described embodiment illustrates the availability of three services and three dependent applications, the actual number of services and dependent applications can be adjusted based on system requirements. A distributed computing method is shown. One or more content-enabled dependent applications are executed on a local computing device. Each dependent application includes a user interface and business logic. A services layer, which is interfaced with the dependent applications and a software infrastructure, is executed on a remote computing device. The services layer provides one or more services that are usable by the content-enabled dependent applications. A data layer, which includes one or more databases, is interfaced with the software infrastructure. A unique application programming interface is assigned to each of the services. Dependent applications using a specific service make requests through the application programming interface assigned to that service. A multi-tier software development method is shown. One or more content-enabled dependent applications are provided, each of which includes a user interface and business logic.
A services layer is provided which is interfaced with the one or more content enabled dependent applications and provides one or more services that are usable by the content enabled dependent applications. The services layer is interfaced with a software infrastructure interfaced with a data layer. The data layer includes one or more databases. The embodiments described herein are not limited to the embodiments described above it may find applicability in any computing or processing environment. The embodiments may be implemented in hardware software or a combination of the two. For example the embodiments may be implemented using circuitry such as one or more of programmable logic e.g. an ASIC logic gates a processor and a memory. The embodiments may be implemented in computer programs executing on programmable computers that each includes a processor and a storage medium readable by the processor including volatile and non volatile memory and or storage elements . Each such program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However the programs can be implemented in assembly or machine language. The language may be a compiled or an interpreted language. Each computer program may be stored on an article of manufacture such as a storage medium e.g. CD ROM hard disk or magnetic diskette or device e.g. computer peripheral that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer to perform the functions of the embodiments. The embodiments may also be implemented as a machine readable storage medium configured with a computer program where upon execution instructions in the computer program cause a machine to operate to perform the functions of the embodiments described above. The embodiments described above may be used in a variety of applications. Although the embodiments are not limited in this respect the embodiments may be implemented with memory devices in microcontrollers general purpose microprocessors digital signal processors DSPs reduced instruction set computing RISC and complex instruction set computing CISC among other electronic components. The embodiments described above may also be implemented using integrated circuit blocks referred to as main memory cache memory or other types of memory that store electronic instructions to be executed by a microprocessor or store data that may be used in arithmetic operations. A number of embodiments of the invention have been described. Nevertheless it will be understood that various modifications may be made without departing from the spirit and scope of the embodiments described above.
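The patent's central idea, dependent applications sharing one services layer through a per-service API instead of bundling their own copies of each service, can be summarized in code. The sketch below is illustrative only; every class and method name is hypothetical and not taken from the patent:

```python
# Illustrative sketch of the patent's core idea: dependent applications
# share common services through one API per service, rather than each
# application bundling its own copy. All names here are hypothetical.
class ServicesLayer:
    def __init__(self):
        self._apis = {}  # one API object registered per service

    def register(self, name, service):
        self._apis[name] = service

    def api(self, name):
        # Dependent applications request a specific service's API here.
        return self._apis[name]


class RepositoryService:
    def __init__(self):
        self._files = {}

    def store(self, path, content):
        self._files[path] = content

    def items(self):
        return self._files.items()


class SearchService:
    def __init__(self, repo):
        self._repo = repo

    def search(self, term):
        return [path for path, content in self._repo.items() if term in content]


layer = ServicesLayer()
repo = RepositoryService()
layer.register("repository", repo)
layer.register("search", SearchService(repo))

# A dependent application's business logic uses the shared services:
layer.api("repository").store("press/release1.md", "Q3 earnings update")
print(layer.api("search").search("earnings"))  # -> ['press/release1.md']
```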
179.861842
1,150
0.828304
eng_Latn
0.99992
b985f406550ee1b7f92e0af3bab148f1062664ac
2,598
md
Markdown
clients/swift5/generated/docs/CharacteristicAPI.md
cliffano/pokeapi-clients
92af296c68c3e94afac52642ae22057faaf071ee
[ "MIT" ]
null
null
null
clients/swift5/generated/docs/CharacteristicAPI.md
cliffano/pokeapi-clients
92af296c68c3e94afac52642ae22057faaf071ee
[ "MIT" ]
null
null
null
clients/swift5/generated/docs/CharacteristicAPI.md
cliffano/pokeapi-clients
92af296c68c3e94afac52642ae22057faaf071ee
[ "MIT" ]
null
null
null
# CharacteristicAPI

All URIs are relative to *https://pokeapi.co*

Method | HTTP request | Description
------------- | ------------- | -------------
[**characteristicList**](CharacteristicAPI.md#characteristiclist) | **GET** /api/v2/characteristic/ |
[**characteristicRead**](CharacteristicAPI.md#characteristicread) | **GET** /api/v2/characteristic/{id}/ |

# **characteristicList**

```swift
open class func characteristicList(limit: Int? = nil, offset: Int? = nil, completion: @escaping (_ data: String?, _ error: Error?) -> Void)
```

### Example

```swift
// The following code samples are still beta. For any issue, please report via http://github.com/OpenAPITools/openapi-generator/issues/new
import OpenAPIClient

let limit = 987 // Int | (optional)
let offset = 987 // Int | (optional)

CharacteristicAPI.characteristicList(limit: limit, offset: offset) { (response, error) in
    guard error == nil else {
        print(error)
        return
    }

    if let response = response {
        dump(response)
    }
}
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**limit** | **Int** | | [optional]
**offset** | **Int** | | [optional]

### Return type

**String**

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: Not defined
- **Accept**: text/plain

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

# **characteristicRead**

```swift
open class func characteristicRead(id: Int, completion: @escaping (_ data: String?, _ error: Error?) -> Void)
```

### Example

```swift
// The following code samples are still beta. For any issue, please report via http://github.com/OpenAPITools/openapi-generator/issues/new
import OpenAPIClient

let id = 987 // Int |

CharacteristicAPI.characteristicRead(id: id) { (response, error) in
    guard error == nil else {
        print(error)
        return
    }

    if let response = response {
        dump(response)
    }
}
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**id** | **Int** | |

### Return type

**String**

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: Not defined
- **Accept**: text/plain

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
24.280374
180
0.615473
eng_Latn
0.418416
b986b6db8ad3a196d4cba82b68af09701b73a2fe
529
md
Markdown
de_DE/plugins/programming/htmldisplay/index.md
AlexCrp/documentations
c87e0cd060d00dd7d872d7a2ff232f13a86487d9
[ "MIT" ]
1
2021-01-22T20:02:12.000Z
2021-01-22T20:02:12.000Z
de_DE/plugins/programming/htmldisplay/index.md
ERICRAC/documentations
57f89b977d52e1da9edcab798dc919d22c961700
[ "MIT" ]
null
null
null
de_DE/plugins/programming/htmldisplay/index.md
ERICRAC/documentations
57f89b977d52e1da9edcab798dc919d22c961700
[ "MIT" ]
null
null
null
# HTML Display Plugin

A very simple plugin that lets you put any HTML/JavaScript/CSS code you want on the device tile.

>**Important**
>
>Using this plugin requires knowing how to code in HTML, JavaScript, and CSS. The Jeedom team does not provide support for your widget's code.

Use cases can include:

- a custom menu for designs
- pulling external information into Jeedom

>**Important**
>
>As a reminder: designs ALWAYS use the dashboard version of the code (whether on mobile or on desktop)
31.117647
127
0.778828
deu_Latn
0.997662
b987996a4f5ab7caf16c33cd3336c7ff9b0aeaec
1,884
md
Markdown
desktop-src/VML/src-attribute--stroke--vml.md
KrupalJoshi/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
3
2020-04-24T13:02:42.000Z
2021-07-17T15:32:03.000Z
desktop-src/VML/src-attribute--stroke--vml.md
KrupalJoshi/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/VML/src-attribute--stroke--vml.md
KrupalJoshi/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
1
2022-01-01T04:19:14.000Z
2022-01-01T04:19:14.000Z
---
title: Src Attribute (Stroke)(VML)
description: Src Attribute (Stroke)(VML)
ms.assetid: dac6b5b7-2038-4534-97e9-a1340102777e
ms.topic: article
ms.date: 05/31/2018
---

# Src Attribute (Stroke)(VML)

This topic describes VML, a feature that is deprecated as of Windows Internet Explorer 9. Webpages and applications that rely on VML should be [migrated to SVG](https://go.microsoft.com/fwlink/p/?LinkID=236964) or other widely supported standards.

> [!Note]
> As of December 2011, this topic has been archived. As a result, it is no longer actively maintained. For more information, see [Archived Content](https://docs.microsoft.com/previous-versions/windows/internet-explorer/ie-developer/). For information, recommendations, and guidance regarding the current version of Windows Internet Explorer, see [Internet Explorer Developer Center](https://go.microsoft.com/fwlink/p/?linkid=204313).

Defines the source image to load for a stroke fill. Read/write. **String**.

**Applies To**

[Stroke](msdn-online-vml-stroke-element.md)

**Tag Syntax**

<v: *element* src=" *expression* ">

**Script Syntax**

*element* .src="*expression*"

*expression*=*element*.src

**Remarks**

URL to an image to load for image and pattern fills. This attribute must always be present and point to valid image data for a picture to appear. If this attribute appears alone, that is, with no **HRef** or **Title**, then the image is linked.

*VML Standard Attribute*

**Example**

The stroke is created with the image specified by the cylinder.gif file.

```HTML
<v:shape id="rect01"
   strokecolor="red" fillcolor="red"
   style="top:20;left:20;width:30;height:30"
   path="m 1,1 l 1,200, 200,200, 200,1 x e">
   <v:stroke src="cylinder.gif" filltype="tile" width="10pt"/>
</v:shape>
```

[Show Me](https://samples.msdn.microsoft.com/workshop/samples/vml/shape/stroke/x_stroke.md)
28.984615
433
0.730361
eng_Latn
0.893354
b9879b2829abcdab90ba283102a6d97e6c987da3
854
md
Markdown
docs/Model/Call.md
AdrianFX/swaggerclient-php
881dab6af519149dac997c4c31d6eb17fb2d69ac
[ "Apache-2.0" ]
null
null
null
docs/Model/Call.md
AdrianFX/swaggerclient-php
881dab6af519149dac997c4c31d6eb17fb2d69ac
[ "Apache-2.0" ]
null
null
null
docs/Model/Call.md
AdrianFX/swaggerclient-php
881dab6af519149dac997c4c31d6eb17fb2d69ac
[ "Apache-2.0" ]
null
null
null
# Call ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **id** | **int** | | **external_id** | **string** | | [optional] **type** | **string** | | **account_id** | **int** | | **user_id** | **int** | | **uci_id** | **int** | | **direction** | **string** | | **caller_id** | **string** | Remote caller ID (if present) | [optional] **phone_number** | **string** | | **duration** | **int** | Duration in milliseconds | **state** | **string** | | **start_time** | [**\DateTime**](Date.md) | | **answer_time** | [**\DateTime**](Date.md) | | [optional] **end_time** | [**\DateTime**](Date.md) | | [optional] [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
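For orientation, a usage sketch could look like the following. The `Swagger\Client\Model` namespace and the camelCase accessor names are assumptions based on typical swagger-codegen PHP output, not something this page confirms.

```php
<?php
// Hypothetical usage sketch. The namespace and setter/getter names below are
// assumptions based on common swagger-codegen PHP models, not this page.
require_once __DIR__ . '/vendor/autoload.php';

$call = new Swagger\Client\Model\Call();
$call->setId(42);
$call->setType('voice');
$call->setDirection('outbound');
$call->setPhoneNumber('+15551234567');
$call->setDuration(120000); // duration is expressed in milliseconds
$call->setState('completed');
$call->setStartTime(new \DateTime('2021-01-01T12:00:00Z'));

echo $call->getPhoneNumber(), PHP_EOL;
```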
35.583333
161
0.508197
yue_Hant
0.335839
b9887704af44f1384741f5fbbca1dd7b0261250f
37
md
Markdown
README.md
borthne/borthne.github.io
ed1a0295afc88bd1930bbdddac28b16248d56fa2
[ "MIT" ]
null
null
null
README.md
borthne/borthne.github.io
ed1a0295afc88bd1930bbdddac28b16248d56fa2
[ "MIT" ]
null
null
null
README.md
borthne/borthne.github.io
ed1a0295afc88bd1930bbdddac28b16248d56fa2
[ "MIT" ]
null
null
null
# borthne.github.io Personal website
12.333333
19
0.810811
eng_Latn
0.759052
b988f3ab5c76b03548817797df25713d62ee71b7
118
md
Markdown
translations/ru-RU/data/reusables/shortdesc/creating_custom_badges_github_apps.md
kyawburma/docs
0ff7de03be7c2432ced123aca17bfbf444bee1bf
[ "CC-BY-4.0", "MIT" ]
11,698
2020-10-07T16:22:18.000Z
2022-03-31T18:54:47.000Z
translations/ru-RU/data/reusables/shortdesc/creating_custom_badges_github_apps.md
kyawburma/docs
0ff7de03be7c2432ced123aca17bfbf444bee1bf
[ "CC-BY-4.0", "MIT" ]
8,317
2020-10-07T16:26:58.000Z
2022-03-31T23:24:25.000Z
translations/ru-RU/data/reusables/shortdesc/creating_custom_badges_github_apps.md
kyawburma/docs
0ff7de03be7c2432ced123aca17bfbf444bee1bf
[ "CC-BY-4.0", "MIT" ]
48,204
2020-10-07T16:15:45.000Z
2022-03-31T23:50:42.000Z
You can replace the default badge on your GitHub App by uploading your own logo image and customizing the background.
59
117
0.822034
eng_Latn
0.999712
b988f62723ef8da212249067aadd6a662c1f23c2
2,304
md
Markdown
news/salesforce_enters_iot_market.md
lijiangsheng1/infoQ
86df0e334388fed4a9c9c35652506455b9e69aad
[ "CC0-1.0" ]
2
2016-01-23T18:17:02.000Z
2020-05-24T02:48:16.000Z
news/salesforce_enters_iot_market.md
lijiangsheng1/infoQ
86df0e334388fed4a9c9c35652506455b9e69aad
[ "CC0-1.0" ]
null
null
null
news/salesforce_enters_iot_market.md
lijiangsheng1/infoQ
86df0e334388fed4a9c9c35652506455b9e69aad
[ "CC0-1.0" ]
null
null
null
# Salesforce Enters the IoT Market

## Summary:

At its recent Dreamforce conference, Salesforce announced that its Internet of Things platform will soon be available. The platform will be able to integrate real-time data and turn that data into actionable tasks across a cloud-based environment.

--------------------------------------------------

At its recent Dreamforce conference, Salesforce [announced](http://www.salesforce.com/company/news-press/press-releases/2015/09/150915-2.jsp) that its Internet of Things platform will soon be available. The platform will be able to integrate real-time data and turn that data into actionable tasks across a cloud-based environment.

The platform, named Thunder, brings together several open-source technologies and lets users ingest, process, and orchestrate events across systems. Its core technologies include:

* Kafka (messaging)
* Storm (stream processing)
* Spark (in-memory data)
* Cassandra (highly scalable database)

Besides serving as an IoT technology stack, Thunder also acts as an input channel for many other Salesforce services. Adam Bosworth, Salesforce's EVP of IoT Cloud, [envisions](https://www.youtube.com/watch?v=GjSNMJLM3WM) using the IoT Cloud to "gather and aggregate big data, so you can build extraordinarily smart business rules and do it in real time using the Salesforce cloud."

The business-rule and orchestration capabilities are what set Salesforce apart from other IoT vendors: they let end users simply configure workflows in the Salesforce cloud, without any coding at all. As an example of user orchestration provided by Salesforce, the diagram below shows the speed of a wind turbine being changed after a wind-speed threshold is detected.

![iot-demo](http://cdn.infoq.com/statics_s2_20150922-0305u1/resource/news/2015/10/salesforce-iot/en/resources/salesforce-iot.jpg)

*Image source: https://www.salesforce.com/form/conf/iot-demo.jsp*

Salesforce also published several customer scenarios, including:

* Notifying passengers in ride-sharing services, which is common for easing traffic problems.
* Alerting sales representatives to potential sales opportunities based on how existing products are being used.
* Letting airline agents rebook flights for customers who have been notified of delays.

Bosworth also sees opportunities in "managing products that have already been sold, making sure the sales process goes smoothly, making sure customers keep using your product, and, if they return it, getting them to buy again."

Salesforce today has a huge ecosystem, and partners such as ARM, Informatica, Xively, LogMeIn, and Microsoft have shown strong interest in the upcoming platform. In a customer case-study [video](https://www.youtube.com/watch?v=edZKf-k5JJ0), Steve Guggenheimer, Microsoft vice president and chief evangelist, stated that their customers connect real-time data from Azure Event Hubs to Thunder and the Marketing Cloud, "enabling marketing teams to move from after-the-fact analytics to predictive analytics, in real time." Microsoft's presence here is worth noting, because Microsoft competes with Salesforce on many fronts.

In the IoT space, Salesforce faces many competitors, including Amazon, Microsoft, Google, GE, and Intel. Although customers can run the Thunder platform on its own, Salesforce is betting on value-added services, namely plugging other Salesforce services into Thunder. What differentiates Salesforce is how easily this data can be consumed through its cloud services and applications, letting companies understand their customers more deeply. By introducing the IoT Cloud, Salesforce gives its customers an additional input channel for gaining that insight. Salesforce's CEO and chairman reiterated this view: "The IoT Cloud lets companies create 1:1, real-time analytics and act proactively in sales, service, marketing, and other business processes, providing a new way for customers to succeed."

Salesforce has not yet announced a specific release date for the IoT platform. Pilot testing is expected to start sometime in the first half of 2016. Research analyst Doug Henschen [predicts](https://vimeo.com/139542538) that, if Salesforce follows the pattern it set with the recently released Lightning, customers will see the IoT Cloud at next year's Salesforce Dreamforce summit. As he put it: "Thunder will follow in Lightning's footsteps, because Lightning was announced at last year's Dreamforce and took about a year before it actually became available."

Although Salesforce presented examples of how customers use the platform, these are all early adopters. Doug Henschen believes the platform still has many details left to develop: "Conservatively speaking, at this point Thunder is still a development-roadmap positioning rather than a real product."

Read the original English article: [Salesforce enters IoT Market](http://www.infoq.com/news/2015/10/salesforce-iot)
54.857143
329
0.84809
yue_Hant
0.491498
b98aea046339e44d8a9e2d86b75bd76f85383f65
4,749
md
Markdown
powerapps-docs/developer/common-data-service/virtual-entities/sample-generic-ve-plugin.md
Miguel-byte/powerapps-docs.es-es
1edcf434a8e8f096b67510fa3c423ca3c5d1ede4
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/developer/common-data-service/virtual-entities/sample-generic-ve-plugin.md
Miguel-byte/powerapps-docs.es-es
1edcf434a8e8f096b67510fa3c423ca3c5d1ede4
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/developer/common-data-service/virtual-entities/sample-generic-ve-plugin.md
Miguel-byte/powerapps-docs.es-es
1edcf434a8e8f096b67510fa3c423ca3c5d1ede4
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Sample: Generic virtual entity data provider plug-in (Common Data Service | MicrosoftDocs)'
description: The sample shows how to implement a generic custom virtual entity plug-in in Dynamics 365.
ms.custom: ''
ms.date: 10/31/2018
ms.reviewer: ''
ms.service: powerapps
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: samples
applies_to:
- Dynamics 365 (online)
ms.assetid: d329dade-16c5-46e9-8dec-4b8efb996d24
author: mayadumesh
ms.author: jdaly
manager: amyla
search.audienceType:
- developer
search.app:
- PowerApps
- D365CE
---

# <a name="sample-generic-virtual-entity-data-provider-plug-in"></a>Sample: Generic virtual entity data provider plug-in

## <a name="demonstrates"></a>Demonstrates

This sample shows a minimal implementation of a generic Common Data Service virtual entity data provider plug-in, **DropboxRetrieveMultiplePlugin**, for the [Dropbox](https://www.dropbox.com/) file sharing service. It uses the "basic" method of translating the <xref:Microsoft.Xrm.Sdk.Query.QueryExpression> expression by creating the custom visitor class **DropBoxExpressionVisitor**. It returns the collection of files that meet the search criteria as an <xref:Microsoft.Xrm.Sdk.EntityCollection>.

## <a name="getting-started"></a>Getting started

To build this sample, you must first install the [Dropbox.Api](https://www.nuget.org/packages/Dropbox.Api/) and [Microsoft.CrmSdk.Data](https://www.nuget.org/packages/Microsoft.CrmSdk.Data/) NuGet packages in the solution. You will also need a Dropbox account, and you will pass a real access token when creating an instance of **DropboxClient**.

Add the following using statements to your code:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Dropbox.Api;
using Dropbox.Api.Files;
```

## <a name="sample-code"></a>Sample code

```csharp
public class DropBoxExpressionVisitor : QueryExpressionVisitorBase
{
    public string SearchKeyWords { get; private set; }

    public override QueryExpression Visit(QueryExpression query)
    {
        // Very simple visitor that extracts search keywords.
        var filter = query.Criteria;
        if (filter.Conditions.Count > 0)
        {
            foreach (ConditionExpression condition in filter.Conditions)
            {
                if (condition.Operator == ConditionOperator.Like && condition.Values.Count > 0)
                {
                    string exprVal = (string)condition.Values[0];
                    if (exprVal.Length > 2)
                    {
                        // Strip the leading and trailing wildcard characters.
                        this.SearchKeyWords += " " + exprVal.Substring(1, exprVal.Length - 2);
                    }
                }
            }
        }

        // Return the query on every path. (The original sample returned only
        // inside the if block, which does not compile.)
        return query;
    }
}

public class DropboxRetrieveMultiplePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var qe = (QueryExpression)context.InputParameters["Query"];
        if (qe != null)
        {
            var visitor = new DropBoxExpressionVisitor();
            qe.Accept(visitor);

            // AccessToken is the Dropbox access token described in Getting started.
            using (var dbx = new DropboxClient(AccessToken))
            {
                // SearchKeyWords stays null until the visitor finds a keyword.
                if (!string.IsNullOrEmpty(visitor.SearchKeyWords))
                {
                    var searchCriteria = new SearchArg(string.Empty, visitor.SearchKeyWords);
                    var task = Task.Run(() => this.SearchFile(dbx, searchCriteria));
                    context.OutputParameters["BusinessEntityCollection"] = task.Result;
                }
            }
        }
    }

    public async Task<EntityCollection> SearchFile(DropboxClient dbx, SearchArg arg)
    {
        EntityCollection ec = new EntityCollection();
        var list = await dbx.Files.SearchAsync(arg);
        foreach (var item in list.Matches)
        {
            if (item.Metadata.IsFile)
            {
                Entity e = new Entity("new_dropbox");
                e.Attributes.Add("new_dropboxid", Guid.NewGuid());
                e.Attributes.Add("new_filename", item.Metadata.AsFile.Name);
                e.Attributes.Add("new_filesize", item.Metadata.AsFile.Size);
                e.Attributes.Add("new_modifiedon", item.Metadata.AsFile.ServerModified);
                ec.Entities.Add(e);
            }
        }

        return ec;
    }
}
```

### <a name="see-also"></a>See also

[Get started with virtual entities](get-started-ve.md)<br />
[API considerations for virtual entities](api-considerations-ve.md)<br />
[Custom virtual entity data providers](custom-ve-data-providers.md)
38.92623
555
0.662876
spa_Latn
0.350906
b98c174b4eb1d541bdb14f55083340e0cbdd4aeb
389
md
Markdown
docs/api-reference/magento.usecontent.md
hopvu/magento2-1
6a5af4ea9265815a47ec0fdb58c8e95202743e14
[ "MIT" ]
1
2021-11-30T11:27:21.000Z
2021-11-30T11:27:21.000Z
docs/api-reference/magento.usecontent.md
hopvu/magento2-1
6a5af4ea9265815a47ec0fdb58c8e95202743e14
[ "MIT" ]
null
null
null
docs/api-reference/magento.usecontent.md
hopvu/magento2-1
6a5af4ea9265815a47ec0fdb58c8e95202743e14
[ "MIT" ]
null
null
null
<!-- Do not edit this file. It is automatically generated by API Documenter. --> [Home](./index.md) &gt; [@vue-storefront/magento](./magento.md) &gt; [useContent](./magento.usecontent.md) ## useContent variable <b>Signature:</b> ```typescript _default: (ssrKey?: string) => import("../../types/composables").UseContent<import("@vue-storefront/magento-api").CmsPage, CmsBlock, any> ```
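A usage sketch follows; it is not part of the generated reference. The import paths, the `onSSR` helper, and the shape of the `search` parameters are assumptions based on common vue-storefront composable patterns.

```typescript
// Hypothetical usage. Import paths, `onSSR`, and the search-params shape are
// assumptions based on typical vue-storefront composables, not this page.
import { onSSR } from '@vue-storefront/core';
import { useContent } from '@vue-storefront/magento';

export default {
  setup() {
    // The optional ssrKey keeps server-side and client-side state in sync.
    const { search, content, loading, error } = useContent('cms-home');

    onSSR(async () => {
      // Fetch a CMS page by identifier during server-side rendering.
      await search({ identifier: 'home' });
    });

    return { content, loading, error };
  },
};
```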
32.416667
137
0.691517
eng_Latn
0.318641
b98c491a8eb11b188e3746acebc8962f248dc906
4,197
md
Markdown
src/pages/blog/2019-08-21-reflections-on-failure.md
madvib/recreate-landing-page
4738e1c6a9bb63d7519f30ddc1ab374d7c21b8bc
[ "MIT" ]
null
null
null
src/pages/blog/2019-08-21-reflections-on-failure.md
madvib/recreate-landing-page
4738e1c6a9bb63d7519f30ddc1ab374d7c21b8bc
[ "MIT" ]
null
null
null
src/pages/blog/2019-08-21-reflections-on-failure.md
madvib/recreate-landing-page
4738e1c6a9bb63d7519f30ddc1ab374d7c21b8bc
[ "MIT" ]
null
null
null
---
templateKey: blog-post
title: Reflections on Failure
date: 2019-08-21T18:16:25.212Z
description: |
  What we learn from coming up short
featuredpost: false
featuredimage: /img/jetski.jpg
tags:
  - Failure
  - Motivation
  - Goal Setting
---

![jetski fail](/img/jetski.jpg "ouch")

By definition, failure represents something other than the desired outcome, and, in a culture that puts success above all else, it is often looked down upon or ignored altogether. In reality, __failure is common (read: completely normal) and offers much to learn from.__ By handling failure appropriately, we can take these experiences as opportunities to grow rather than something to be ashamed of. Here are 3 ideas to change the way you think about failure:

## Failure is normal

Failure is a function of goal setting that serves to motivate us and help us improve. Using goals and milestones allows you to track progress in a motivating and informative way. The best goals are crafted with two possible outcomes, either success or failure. This way we can learn from what went right in the case of success and what went wrong in the event of failure. Appropriate goal setting is a central theme here at ReCreate.

The most important thing is to recognize that goals are nothing more than tools used to monitor and guide progression. The likeliest indicator of failure is how difficult a particular goal is. We most often recommend setting goals that are within reach but require you to push yourself, meaning that success should not be guaranteed. There is no shame in failing at something if you know from the start that it's one of two possible outcomes. By shifting toward a growth mindset, **failure exists only so that we can learn from it and move forward**.

## The worst thing failure can do is undermine motivation

There are 3 possible outcomes after a failure. You can bounce back, reassess your goals, or allow the failure to demoralize you. Sometimes a failure can re-ignite determination to achieve a goal, and other times you will realize that a particular goal may not be right for you. Disappointment is perfectly normal after a failure, but when you allow a failure to demoralize you, it can cripple your motivation for future action. Remember, failure is the result of reaching for goals that are designed to help us push ourselves, and it serves as an opportunity to reassess and learn. **Failure is NOT a reflection of your value or worth.** Ultimately, you have nothing that you need to prove.

I recently participated in a mountain bike race across Georgia. 90 miles into the 350-mile course, I decided to pack it in and end my race. I had been dealing with some health issues going into the race and realized early on that I was not as recovered as I had hoped to be. While I had really hoped to go further, the experience allowed me to reassess some of my goals, and moving forward I am focused solely on my health, with no aspirations to race ultras for the time being. Rather than focusing on the past, it's important to think forward.

## Failure should be talked about openly

There are few things as difficult or sensitive to discuss as failure. Simply acknowledging it can sting, but the reality is that failures provide the best examples from which we can learn. **By embracing a culture of shame surrounding failure, we rob ourselves of the opportunity to learn and grow from these experiences.** Instead, we should embrace failures as experiences to learn from. By speaking openly, we help to remove the stigma and shame associated with failure.

It may feel uncomfortable at first, but it becomes easier and easier with practice. Everyone fails at something, often quite frequently. Speaking with others is not just a reminder that failure is everywhere; you may even hear from someone who has been in a similar situation. Many of our best insights and connections come from speaking with other people, and talking with them can help you process and extract the positives from an experience.

That's all from me. I hope you were able to pull some insights on failing with grace from this piece. Failure is no fun, but it happens, and changing your mindset makes all the difference in the world.
91.23913
542
0.793662
eng_Latn
0.999957
b98c8f8d6c29a8caaddfa101ff95b74bba4dbdc1
12,463
md
Markdown
articles/site-recovery/vmware-azure-install-linux-master-target.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/site-recovery/vmware-azure-install-linux-master-target.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/site-recovery/vmware-azure-install-linux-master-target.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Install a master target server for Linux VM failback with Azure Site Recovery
description: Learn how to install a Linux master target server for failback to an on-premises site during disaster recovery of VMware VMs to Azure with Azure Site Recovery.
author: mayurigupta13
services: site-recovery
manager: rochakm
ms.service: site-recovery
ms.topic: conceptual
ms.date: 03/06/2019
ms.author: mayg
ms.openlocfilehash: 281743268364b0e9d39c7bea28afc17d753db2f6
ms.sourcegitcommit: e995f770a0182a93c4e664e60c025e5ba66d6a45
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 07/08/2020
ms.locfileid: "86130150"
---

# <a name="install-a-linux-master-target-server-for-failback"></a>Install a Linux master target server for failback

After you fail over virtual machines to Azure, you can fail the virtual machines back to the on-premises site. To fail back, you need to reprotect the virtual machine from Azure to the on-premises site. For this process, you need an on-premises master target server to receive the traffic. If the protected virtual machine is a Windows virtual machine, you need a Windows master target. For a Linux virtual machine, you need a Linux master target. Read the following steps to learn how to create and install a Linux master target.

> [!IMPORTANT]
> Starting with the master target server 9.10.0 release, the latest master target server can only be installed on an Ubuntu 16.04 server. New installations are not allowed on CentOS 6.6 servers. However, you can continue to upgrade your old master target servers by using the 9.10.0 version.
> Master target servers on LVM are not supported.

## <a name="overview"></a>Overview

This article provides instructions for how to install a Linux master target. Post comments or questions at the end of this article or on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html).

## <a name="prerequisites"></a>Prerequisites

* To choose the host on which to deploy the master target, determine whether the failback is going to be to an existing on-premises virtual machine or to a new virtual machine.
  * For an existing virtual machine, the host of the master target should have access to the data stores of the virtual machine.
  * If the on-premises virtual machine does not exist (in the case of alternate location recovery), the failback virtual machine is created on the same host as the master target. You can choose any ESXi host to install the master target.
* The master target should be on a network that can communicate with the process server and the configuration server.
* The version of the master target must be the same as or earlier than the versions of the process server and the configuration server. For example, if the version of the configuration server is 9.4, the version of the master target can be 9.4 or 9.3, but not 9.5.
* The master target can only be a VMware virtual machine and not a physical server.

> [!NOTE]
> Make sure that Storage vMotion is not turned on for management components such as the master target. If the master target moves after a successful reprotect, the virtual machine disks (VMDKs) cannot be detached. In this case, failback fails.

## <a name="sizing-guidelines-for-creating-master-target-server"></a>Sizing guidelines for creating the master target server

Create the master target in accordance with the following sizing guidelines:

- **RAM**: 6 GB or more
- **OS disk size**: 100 GB or more (needed to install the OS)
- **Additional disk size for the retention drive**: 1 TB
- **CPU cores**: 4 cores or more

The following Ubuntu kernels are supported:

|Kernel series |Maximum supported |
|---------|---------|
|4.4 |4.4.0-81-generic |
|4.8 |4.8.0-56-generic |
|4.10 |4.10.0-24-generic |

## <a name="deploy-the-master-target-server"></a>Deploy the master target server

### <a name="install-ubuntu-16042-minimal"></a>Install Ubuntu 16.04.2 Minimal

Use the following steps to install the Ubuntu 16.04.2 64-bit operating system.

1. Go to the [download link](http://old-releases.ubuntu.com/releases/16.04.2/ubuntu-16.04.2-server-amd64.iso), choose the closest mirror, and download the Ubuntu 16.04.2 minimal 64-bit ISO. Keep the Ubuntu 16.04.2 minimal 64-bit ISO in the DVD drive and start the system.

1. Select **English** as your preferred language, and then select **Enter**.

   ![Select language](./media/vmware-azure-install-linux-master-target/image1.png)

1. Select **Install Ubuntu Server**, and then select **Enter**.

   ![Select Install Ubuntu Server](./media/vmware-azure-install-linux-master-target/image2.png)

1. Select **English** as your preferred language, and then select **Enter**.

   ![Select English as your preferred language](./media/vmware-azure-install-linux-master-target/image3.png)

1. Select the appropriate option from the **Time zone** options list, and then select **Enter**.

   ![Select the correct time zone](./media/vmware-azure-install-linux-master-target/image4.png)

1. Select **No** (the default option), and then select **Enter**.

   ![Configure the keyboard](./media/vmware-azure-install-linux-master-target/image5.png)

1. Select **English (US)** as the country of origin for the keyboard, and then select **Enter**.

1. Select **English (US)** as the keyboard layout, and then select **Enter**.

1. In the **Hostname** box, enter the host name for the server, and then select **Continue**.

1. To create a user account, enter the user name, and then select **Continue**.

   ![Create a user account](./media/vmware-azure-install-linux-master-target/image9.png)

1. Enter the password for the new user account, and then select **Continue**.

1. Confirm the password for the new user, and then select **Continue**.

   ![Confirm the password](./media/vmware-azure-install-linux-master-target/image11.png)

1. In the next selection for encrypting your home directory, select **No** (the default option), and then select **Enter**.

1. If the time zone displayed is correct, select **Yes** (the default option), and then select **Enter**. To reconfigure the time zone, select **No**.

1. From the partitioning method options, select **Guided - use entire disk**, and then select **Enter**.

   ![Select the partitioning method option](./media/vmware-azure-install-linux-master-target/image14.png)

1. Select the appropriate disk from the **Select disk to partition** options, and then select **Enter**.

   ![Select the disk](./media/vmware-azure-install-linux-master-target/image15.png)

1. Select **Yes** to write the changes to the disk, and then select **Enter**.

   ![Select the default option](./media/vmware-azure-install-linux-master-target/image16-ubuntu.png)

1. In the configure proxy selection, select the default option, select **Continue**, and then select **Enter**.

   ![Choose how to manage upgrades](./media/vmware-azure-install-linux-master-target/image17-ubuntu.png)

1. In the selection for managing system upgrades, select the **No automatic updates** option, and then select **Enter**.

   ![Choose how to manage upgrades](./media/vmware-azure-install-linux-master-target/image18-ubuntu.png)

   > [!WARNING]
   > Because the Azure Site Recovery master target server requires a very specific version of Ubuntu, you need to make sure that kernel upgrades are disabled for the virtual machine. If they are enabled, regular upgrades cause the master target server to malfunction. Make sure you select the **No automatic updates** option.

1. Select the default options. If you want openSSH for SSH connections, select the **OpenSSH server** option, and then select **Continue**.

   ![Select software](./media/vmware-azure-install-linux-master-target/image19-ubuntu.png)

1. In the selection for installing the GRUB boot loader, select **Yes**, and then select **Enter**.

   ![GRUB boot installer](./media/vmware-azure-install-linux-master-target/image20.png)

1. Select the appropriate device for the boot loader installation (preferably **/dev/sda**), and then select **Enter**.

   ![Select the appropriate device](./media/vmware-azure-install-linux-master-target/image21.png)

1. Select **Continue**, and then select **Enter** to finish the installation.

   ![Finish the installation](./media/vmware-azure-install-linux-master-target/image22.png)

1. After the installation has finished, sign in to the VM with the new user credentials. (Refer to **step 10** for more information.)

1. Use the steps described in the following screenshot to set the root user password. Then sign in as the root user.

   ![Set the root user password](./media/vmware-azure-install-linux-master-target/image23.png)

### <a name="configure-the-machine-as-a-master-target-server"></a>Configure the machine as a master target server

To get the ID of each SCSI hard disk on the Linux virtual machine, the **disk.EnableUUID = TRUE** parameter needs to be enabled. To enable this parameter, use the following steps:

1. Shut down the virtual machine.

2. Right-click the entry for the virtual machine in the left pane, and then select **Edit Settings**.

3. Select the **Options** tab.

4. In the left pane, select **Advanced** > **General**, and then select the **Configuration Parameters** button on the lower-right part of the screen.

   ![Open Configuration Parameters](./media/vmware-azure-install-linux-master-target/image24-ubuntu.png)

   The **Configuration Parameters** option is not available when the machine is running. To make this tab active, shut down the virtual machine.

5. See whether a row with **disk.EnableUUID** already exists.

   - If the value exists and is set to **False**, change the value to **True**. (The values are not case-sensitive.)
   - If the value exists and is set to **True**, select **Cancel**.
   - If the value does not exist, select **Add Row**.
   - In the name column, add **disk.EnableUUID**, and then set the value to **TRUE**.

   ![Check whether disk.EnableUUID exists](./media/vmware-azure-install-linux-master-target/image25.png)

#### <a name="disable-kernel-upgrades"></a>Disable kernel upgrades

The Azure Site Recovery master target server requires a very specific version of Ubuntu. Disable kernel upgrades for the virtual machine. If kernel upgrades are enabled, the master target server can malfunction.

#### <a name="download-and-install-additional-packages"></a>Download and install additional packages

> [!NOTE]
> Make sure that you have Internet connectivity so that you can download and install the additional packages. If you don't have Internet connectivity, you need to manually find these Deb packages and install them.

`apt-get install -y multipath-tools lsscsi python-pyasn1 lvm2 kpartx`

### <a name="get-the-installer-for-setup"></a>Get the installer for setup

If the master target has Internet connectivity, you can use the following steps to download the installer. Otherwise, you can copy the installer from the process server and then install it.

#### <a name="download-the-master-target-installation-packages"></a>Download the master target installation packages

[Download the latest Linux master target installation bits](https://aka.ms/latestlinuxmobsvc). To download it by using Linux, type:

`wget https://aka.ms/latestlinuxmobsvc -O latestlinuxmobsvc.tar.gz`

> [!WARNING]
> Make sure that you download and unzip the installer in your home directory. If you unzip to **/usr/Local**, the installation fails.

#### <a name="access-the-installer-from-the-process-server"></a>Access the installer from the process server

1. On the process server, go to **C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository**.

2. Copy the required installer file from the process server, and save it as **latestlinuxmobsvc.tar.gz** in your home directory.

### <a name="apply-custom-configuration-changes"></a>Apply custom configuration changes

To apply custom configuration changes, use the following steps as a root user:

1. Run the following command to untar the binary.

   `tar -xvf latestlinuxmobsvc.tar.gz`

   ![Screenshot of the command to run](./media/vmware-azure-install-linux-master-target/image16.png)

2. Run the following command to give permission.

   `chmod 755 ./ApplyCustomChanges.sh`

3. Run the script by using the following command.

   `./ApplyCustomChanges.sh`

> [!NOTE]
> Run the script only once on the server. Then shut down the server. Restart the server after you add a disk, as described in the following section.

### <a name="add-a-retention-disk-to-the-linux-master-target-virtual-machine"></a>Add a retention disk to the Linux master target virtual machine

Use the following steps to create a retention disk:

1. Attach a new 1-TB disk to the Linux master target virtual machine, and then start the machine.

2. Use the **multipath -ll** command to learn the multipath ID of the retention disk: **multipath -ll**

   ![Multipath ID](./media/vmware-azure-install-linux-master-target/image27.png)

3. Format the drive, and then create a file system on the new drive: **mkfs.ext4 /dev/mapper/\<Retention disk's multipath id>**.

   ![File system](./media/vmware-azure-install-linux-master-target/image23-centos.png)

4. After you create the file system, mount the retention disk.

   ```
   mkdir /mnt/retention
   mount /dev/mapper/<Retention disk's multipath id> /mnt/retention
   ```

5. Create the **fstab** entry to mount the retention drive every time the system starts.

   `vi /etc/fstab`

   Select **Insert** to begin editing the file. Create a new line, and then insert the following text. Edit the disk multipath ID based on the highlighted multipath ID from the previous command.

   **/dev/mapper/\<Retention disks multipath id> /mnt/retention ext4 rw 0 0**

   Select **Esc**, and then type **:wq** (write and quit) to close the editor window.

### <a name="install-the-master-target"></a>Install the master target

> [!IMPORTANT]
> The version of the master target server must be the same as or earlier than the versions of the process server and the configuration server. If this condition is not met, reprotect succeeds, but replication fails.

> [!NOTE]
> Before you install the master target server, check that the **/etc/hosts** file on the virtual machine contains entries that map the local host name to the IP addresses associated with all network adapters.

1. Run the following command to install the master target.

   ```
   ./install -q -d /usr/local/ASR -r MT -v VmWare
   ```

2. Copy the passphrase from **C:\ProgramData\Microsoft Azure Site Recovery\private\connection.passphrase** on the configuration server. Then save it as **passphrase.txt** in the same local directory by running the following command:

   `echo <passphrase> >passphrase.txt`

   Example:

   `echo itUx70I47uxDuUVY >passphrase.txt`

3. Note down the IP address of the configuration server. Run the following command to register the server with the configuration server.

   ```
   /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <ConfigurationServer IP Address> -P passphrase.txt
   ```

   Example:

   ```
   /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i 104.40.75.37 -P passphrase.txt
   ```

   Wait until the script finishes. If the master target registers successfully, the master target is listed on the **Site Recovery Infrastructure** page of the portal.

#### <a name="install-the-master-target-by-using-interactive-installation"></a>Install the master target by using interactive installation

1. Run the following command to install the master target. For the agent role, choose **master target**.

   ```
   ./install
   ```

2. Choose the default location for the installation, and then select **Enter** to continue.

   ![Choose the default installation location for the master target](./media/vmware-azure-install-linux-master-target/image17.png)

After the installation has finished, register the configuration server by using the command line.

1. Note the IP address of the configuration server. You need it in the next step.

2. Run the following command to register the server with the configuration server.

   ```
   /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh
   ```

   Wait until the script finishes. If the master target is registered successfully, the master target is listed on the **Site Recovery Infrastructure** page of the portal.

### <a name="install-vmware-tools--open-vm-tools-on-the-master-target-server"></a>Install VMware tools / open-vm-tools on the master target server

You need to install VMware tools or open-vm-tools on the master target so that it can discover the data stores. If the tools are not installed, the reprotect screen does not list the data stores. After the installation of the VMware tools, you need to restart.

### <a name="upgrade-the-master-target-server"></a>Upgrade the master target server

Run the installer. It automatically detects that the agent is installed on the master target. To upgrade, select **Y**. After the setup is complete, check the version of the master target installed by using the following command:

`cat /usr/local/.vx_version`

You will see that the **Version** field gives the version number of the master target.

## <a name="common-issues"></a>Common issues

* Make sure that Storage vMotion is not turned on for management components such as the master target. If the master target moves after a successful reprotect, the virtual machine disks (VMDKs) cannot be detached. In this case, failback fails.
* The master target should not have any snapshots on the virtual machine. If there are snapshots, failback fails.
* Due to some custom NIC configurations, the network interface is unavailable during startup, and the master target agent cannot initialize. Make sure that the following properties are correctly set. Check these properties in the Ethernet card file /etc/network/interfaces.
  * auto eth0
  * iface eth0 inet dhcp

  Then restart the networking service by using the following command:

  `sudo systemctl restart networking`

## <a name="next-steps"></a>Next steps

After the installation and registration of the master target have finished, you can see the master target appear in the **Master Target** section in **Site Recovery Infrastructure**, under the configuration server overview. You can now proceed with [reprotection](vmware-azure-reprotect.md), followed by failback.
33.866848
171
0.678729
kor_Hang
1.00001
b98cf9f0679cbbb246e61f22b4d5ace14596a124
1,961
md
Markdown
locobot/README.md
Dhiraj100892/droidlet
e4ea578672531524552b6ff021165fc9371b0ec8
[ "MIT" ]
null
null
null
locobot/README.md
Dhiraj100892/droidlet
e4ea578672531524552b6ff021165fc9371b0ec8
[ "MIT" ]
null
null
null
locobot/README.md
Dhiraj100892/droidlet
e4ea578672531524552b6ff021165fc9371b0ec8
[ "MIT" ]
null
null
null
<p align="center">
  <img src="locobot_readme.gif" />
</p>

## Setup

The Locobot Assistant is currently set up using a client-server architecture, with a thin layer on the locobot and a devserver that handles all of the heavy computation.

**On the Locobot**

* Set up pyrobot on the locobot using the [python 3 setup](https://github.com/facebookresearch/pyrobot/blob/master/README.md). Copy [remote_locobot.py](./remote_locobot.py) and [launch_pyro.sh](./launch_pyro.sh) to the locobot and launch the environment.

```
chmod +x launch_pyro.sh
./launch_pyro.sh
```

**On the Devserver**

```
conda create -n droidlet_env python==3.7.4 pip numpy scikit-learn==0.19.1 pytorch torchvision -c conda-forge -c pytorch
conda activate droidlet_env
cd ~/droidlet/locobot
pip install -r requirements.txt
export LOCOBOT_IP=<IP of the locobot>
```

Run with the default behavior, in which the agent explores the environment:

```
python locobot_agent.py
```

This will download models and datasets and spawn the dashboard, which is served on `localhost:8000`. Results should look something like this with the `habitat` backend:

<p align="center">
  <img src="https://media.giphy.com/media/XwmXCvoGHBXBqYUdMe/giphy.gif" width="960" height="192">
</p>

To turn off the default behavior:

```
python locobot_agent.py --no_default_behavior
```

## ROS cheatsheet

A set of commands to be run on the locobot to sanity-check basic functionalities.

* rosrun tf view_frames - creates a PDF with the graph of the current transform tree to help identify the different frames. These can then be used to compute the transformation matrices between any two frames.
* rostopic echo <topic name> (http://wiki.ros.org/rostopic) - ROS publishes a stream for each activity as a topic (for example, one for the raw camera stream, one for the depth stream, etc.). This is a useful debugging command to sanity-check that the basic functionalities are working on the locobot, and it can help identify issues like loose cables. See the example after this list.
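For instance, a quick sanity check could look like the following sketch. The topic names here are hypothetical; the actual names depend on the locobot's launch files.

```
# List camera-related topics (names vary by setup).
rostopic list | grep camera

# Print a single message from a hypothetical depth-info topic to confirm data is flowing.
rostopic echo /camera/depth/camera_info -n 1
```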
37.711538
332
0.759306
eng_Latn
0.972876
b98d72cbe0e93aee076880253956a5ba4372e98d
19,360
md
Markdown
README.md
Shin-Tachibana/capacitor-fs
bfc7c16466a9489a2b1fbdb04b18c157b10302eb
[ "MIT" ]
3
2021-10-17T11:54:41.000Z
2021-11-07T13:02:21.000Z
README.md
Shin-Tachibana/capacitor-fs
bfc7c16466a9489a2b1fbdb04b18c157b10302eb
[ "MIT" ]
1
2021-09-20T14:01:42.000Z
2021-09-21T16:18:37.000Z
README.md
tachibana-shin/capacitor-fs
bfc7c16466a9489a2b1fbdb04b18c157b10302eb
[ "MIT" ]
null
null
null
# capacitor-fs

This is a lightning-fs based library created to support using the filesystem with the Capacitor framework.

> Important: I fixed `.startsWith()` in `@capacitor/filesystem@^1.0.3`, so we don't need `fixStartsWith()` anymore. If you are using `@capacitor/filesystem@^1.0.3`, you can safely update to `capacitor-fs^0.0.40`. If you use `@capacitor/filesystem` < 1.0.3, you must install `capacitor-fs` < 0.0.39-b1.

## Usage

### `createFilesystem(Filesystem, opts?)`

First, create or open a "filesystem".

```
import { createFilesystem } from "capacitor-fs";
import { Filesystem, Directory } from "@capacitor/filesystem";

const fs = createFilesystem(Filesystem, {
   rootDir: "/",
   directory: Directory.Documents,
   base64Alway: false,
})
```

**Note: It is better not to create multiple `FS` instances using the same name in a single thread.** Memory usage will be higher as each instance maintains its own cache, and throughput may be lower as each instance will have to compete over the mutex for access to the IndexedDb store.

Options object:

| Param | Type [= default] | Description |
| --------------- | ------------------ | ----- |
| `rootDir` | string = "/" | Top-level directory where it will work |
| `directory` | Directory = Directory.Documents | What kind of directory `rootDir` is in. [View it](https://capacitorjs.com/docs/apis/filesystem#directory) |
| `base64Alway` | boolean = false | Allow fs to work fully in base64. This option takes care of all the silly errors about saving text files and buffers (image, audio, video, PDF...) in capacitor/filesystem, but it makes the `encoding` option of the `writeFile` function useless. When true, it saves all data types in base64 with `encoding = void 0` and preserves their encoding |

#### Advanced usage

### `fs.mkdir(path: string, { recursive?: boolean }): Promise<void>`

Make directory

### `fs.rmdir(path: string, { recursive?: boolean }): Promise<void>`

Remove directory

Options object:

| Param | Type [= default] | Description |
| ------ | ---------------- | ---------------------- |
| `recursive` | boolean = false | Whether to recursively remove the contents of the directory |

### `fs.readdir(path: string): Promise<string[]>`

Read directory

The callback return value is an Array of strings. NOTE: _To save time, it is NOT SORTED._ (Fun fact: Node.js' `readdir` output is not guaranteed to be sorted either. I learned that the hard way.)

### `fs.writeFile(path: string, data: ArrayBuffer | Uint8Array | Blob | string, { encoding?: Encoding | "buffer", recursive: boolean }): Promise<void>`

Options object:

| Param | Type [= default] | Description |
| ---------- | ------------------ | -------------------------------- |
| `recursive` | boolean = false | Whether to create any missing parent directories. |
| `encoding` | string = Encoding.UTF8 | The encoding to write the file in. If not provided, data is written as base64 encoded. Pass Encoding.UTF8 to write data as a string. If `base64Alway = true`, this option is useless. |

### `fs.readFile(path: string, { encoding?: Encoding | "buffer" }): Promise<string | ArrayBuffer>`

The result value will be a Uint8Array (if `encoding` is `'buffer'`) or (if `encoding` is `Encoding`) a string. If `opts` is a string, it is interpreted as `{ encoding: opts }`.

Options object:

| Param | Type [= default] | Description |
| ---------- | ------------------ | -------------------------------- |
| `encoding` | Encoding \| "buffer" = Encoding.UTF8 | The encoding to read the file in. If not provided, data is read as binary and returned as base64 encoded. Pass Encoding.UTF8 to read data as a string |

### `fs.unlink(path: string): Promise<void>`

Delete a file

### `fs.rename(oldPath: string, newPath: string): Promise<void>`

Rename a file or directory

### `fs.stat(path: string, { bigint?: boolean }): Promise<Stat | StatBigInt>`

The result is a Stat object similar to the one used by Node but with fewer and slightly different properties and methods. The included properties are:

- `type` ("file" or "dir")
- `mode`
- `size`
- `ino`
- `mtimeMs`
- `ctimeMs`
- `uid` (fixed value of 1)
- `gid` (fixed value of 1)
- `dev` (fixed value of 1)

The included methods are:

- `isFile()`
- `isDirectory()`
- `isSymbolicLink()`

Options object:

| Param | Type [= default] | Description |
| ---------- | ------------------ | -------------------------------- |
| `bigint` | boolean = false | result StatBigInt |

### `fs.exists(path: string): Promise<boolean>`

Check whether a file exists

### `fs.lstat(path: string): Promise<Stat | StatBigInt>`

Like `fs.stat` except that paths to symlinks return the symlink stats, not the file stats of the symlink's target.

### `fs.symlink(target: string, path: string): Promise<void>`

Create a symlink at `path` that points to `target`.

### `fs.readlink(path: string, opts?)`

Read the target of a symlink.

### `fs.backFile(filepath)`

Create or change the stat data for a file backed by HTTP. Size is fetched with a HEAD request. Useful when using an HTTP backend without `urlauto` set, as then files will only be readable if they have stat data. Note that stat data is made automatically from the file `/.superblock.txt` if found on the server. `/.superblock.txt` can be generated or updated with the [included standalone script](src/superblocktxt.js).

Options object:

| Param | Type [= default] | Description |
| ------ | ---------------- | ---------------------- |
| `mode` | number = 0o666 | Posix mode permissions |

### `fs.du(path: string): Promise<number>`

Returns the size of a file or directory in bytes.

### `fs.promises`

All the same functions as above, but instead of passing a callback they return a promise. `fs.promises = fs`

## Other methods

### `fs.init(autofix?: boolean): Promise<void>`

Initializes the `rootDir` directory if it does not exist. The `autofix` option removes `rootDir` if it is a file.

### `fs.clear(): Promise<void>`

Empty `rootDir`

### `fs.relatively(path: string): string`

Returns the canonical path of `path`; same as `path.resolve` but for `createFilesystem`

### `fs.relative(from: string, to: string): string`

Returns the relative path of `to` relative to `from`; same as `path.relative` but for `createFilesystem`

### `fs.isEqual(path1: string, path2: string): boolean`

Compare whether 2 paths are the same, based on `fs.relative`. Example:

``` ts
fs.isEqual("src/index.ts", "/src/index.ts") // true
fs.isEqual("src/index.ts", "src/posix/index.ts") // false
fs.isEqual("src/index.ts", "src/posix/../index.ts") // true
```

### `fs.isParentDir(parent: string, path: string): boolean`

Compare whether `path` is a child of `parent`. Example:

``` ts
fs.isParentDir("src", "src/index.ts") // true
fs.isParentDir("src", "src/posix/../index.ts") // true
```

### `fs.replaceParentDir(path: string, parent: string, replace: string): string`

Replace the parent path, based on `fs.isParentDir`

### `fs.isDirectory(path: string): Promise<boolean>`

Return `true` if `path` exists and is a `directory`.

### `fs.isFile(path: string): Promise<boolean>`

Return `true` if `path` exists and is a `file`.

### `fs.appendFile(path: string, data: ArrayBuffer | Uint8Array | Blob | string, { encoding?: Encoding | "buffer", recursive: boolean }): Promise<void>`

Same as `fs.writeFile` but appends to the file instead.

### `fs.on(type: Type, cb: (param: Events[Type]) => void) => () => void`

Listen for file system interaction events like `write:file`, `remove:file`, `create:dir`, `remove:dir`. Returns a function that cancels the listener.

### `fs.watch(path, cb, options?: WatchOptions) => () => void)`

A listener function like `fs.on` but more powerful and versatile.

* `path`: `string | string[] | () => string | string[]` What to listen to. The input is a `shell`-style path pattern or an absolute path. Example: `projects/*/.git/index`
* `cb`: a function that accepts `{ path: string, action: string }` as its parameter

Options object:

| Param and type | Description |
|------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `mode?: "absolute" | "relative" | "abstract"` | Listening mode. * `absolute` will treat path as an absolute path using `isEqual` * `relative` will treat path as relative using `isPathParent` * `abstract` is a mixture of `absolute` and `relative` * `void 0` will treat path as a `shell` uri expression |
| `type: ("file" | "dir" | "*") \| keyof MainEvents` = "*" | Specify which object to track |
| `miniOpts?: minimatch.IOptions` = { dot: true } | Options for minimatch. **Only works if `options.mode = void 0`** |
| `immediate?: boolean` | if set to `true`, `cb` will be called as soon as tracking is registered |
| `exists?: boolean` | if set to `true`, `cb` will only be called when the tracked objects exist |
| `dir?: null | string | () => null | string` | will track the path of which directory.
This option is useful when `path` is a pattern | `exclude?: string[] | (() => string[])` | Exclude ## Typescript ```ts import type { Filesystem as CFS, Directory } from "@capacitor/filesystem"; import minimatch from "minimatch"; import { Stat, StatBigInt } from "./Stat"; declare type EncodingBuffer = "buffer"; declare type EncodingString = "utf8" | "utf16" | "ascii" | "base64"; declare type Encoding = EncodingString | EncodingBuffer; export declare type Events = { readonly "write:file": string; readonly "remove:file": string; readonly "create:dir": string; readonly "remove:dir": string; readonly "*": string; readonly "move:file": { readonly from: string; readonly to: string; }; readonly "move:dir": { readonly from: string; readonly to: string; }; }; declare type OptionsConstructor = { readonly rootDir?: string; readonly directory: Directory; readonly base64Alway?: boolean; readonly watcher?: boolean; }; export declare function createFilesystem(Filesystem: typeof CFS, options: OptionsConstructor): { promises: { init: (autofix?: boolean) => Promise<void>; clear: () => Promise<void>; relatively: (path: string) => string; relative: (from: string, to: string) => string; isEqual: (path1: string, path2: string) => boolean; isParentDir: (parent: string, path: string) => boolean; replaceParentDir: (path: string, from: string, to: string) => string; mkdir: (path: string, options?: { readonly recursive?: boolean | undefined; } | undefined) => Promise<void>; rmdir: (path: string, options?: { readonly recursive?: boolean | undefined; } | undefined) => Promise<void>; readdir: (path: string) => Promise<readonly string[]>; writeFile: (path: string, data: ArrayBuffer | Blob | string, options?: Encoding | { readonly recursive?: boolean | undefined; readonly encoding?: Encoding | undefined; } | undefined) => Promise<void>; readFile: { (path: string, options?: "buffer" | { readonly encoding?: "buffer" | undefined; } | undefined): Promise<ArrayBuffer>; (path: string, options: { readonly encoding: EncodingString; } | EncodingString): Promise<string>; (path: string, options: { readonly encoding: Encoding; } | Encoding): Promise<string | ArrayBuffer>; }; unlink: (path: string) => Promise<void>; rename: (oldPath: string, newPath: string) => Promise<void>; copy: (oldPath: string, newPath: string) => Promise<void>; stat: { (path: string): Promise<Stat>; (path: string, options: { readonly bigint: false; }): Promise<Stat>; (path: string, options: { readonly bigint: true; }): Promise<StatBigInt>; (path: string, options: { readonly bigint: boolean; }): Promise<Stat | StatBigInt>; }; exists: (path: string) => Promise<boolean>; isDirectory: (path: string) => Promise<boolean>; isFile: (path: string) => Promise<boolean>; lstat: (path: string, options?: { readonly bigint: boolean; } | undefined) => Promise<Stat | StatBigInt>; symlink: (target: string, path: string) => Promise<void>; readlink: (path: string) => Promise<string>; backFile: (filepath: string) => Promise<number>; du: (path: string) => Promise<number>; getUri: (path: string) => Promise<string>; appendFile: (path: string, data: ArrayBuffer | Blob | string, options?: Encoding | { readonly encoding?: Encoding | undefined; } | undefined) => Promise<void>; on: <Type extends keyof Events>(type: Type, cb: (param: Events[Type]) => void) => { (): void; }; watch: (path: string | readonly string[] | (() => string | readonly string[]), cb: (param: { readonly path: string; readonly action: keyof Events; }) => void | Promise<void>, { mode, type, miniOpts, immediate, exists, 
dir, }?: { readonly mode?: "absolute" | "relative" | "abstract" | undefined; readonly type?: "*" | "file" | "dir" | undefined; readonly miniOpts?: minimatch.IOptions | undefined; readonly immediate?: boolean | undefined; readonly exists?: boolean | undefined; readonly dir?: string | (() => string | null) | null | undefined; }) => { (): void; }; }; init: (autofix?: boolean) => Promise<void>; clear: () => Promise<void>; relatively: (path: string) => string; relative: (from: string, to: string) => string; isEqual: (path1: string, path2: string) => boolean; isParentDir: (parent: string, path: string) => boolean; replaceParentDir: (path: string, from: string, to: string) => string; mkdir: (path: string, options?: { readonly recursive?: boolean | undefined; } | undefined) => Promise<void>; rmdir: (path: string, options?: { readonly recursive?: boolean | undefined; } | undefined) => Promise<void>; readdir: (path: string) => Promise<readonly string[]>; writeFile: (path: string, data: ArrayBuffer | Blob | string, options?: Encoding | { readonly recursive?: boolean | undefined; readonly encoding?: Encoding | undefined; } | undefined) => Promise<void>; readFile: { (path: string, options?: "buffer" | { readonly encoding?: "buffer" | undefined; } | undefined): Promise<ArrayBuffer>; (path: string, options: { readonly encoding: EncodingString; } | EncodingString): Promise<string>; (path: string, options: { readonly encoding: Encoding; } | Encoding): Promise<string | ArrayBuffer>; }; unlink: (path: string) => Promise<void>; rename: (oldPath: string, newPath: string) => Promise<void>; copy: (oldPath: string, newPath: string) => Promise<void>; stat: { (path: string): Promise<Stat>; (path: string, options: { readonly bigint: false; }): Promise<Stat>; (path: string, options: { readonly bigint: true; }): Promise<StatBigInt>; (path: string, options: { readonly bigint: boolean; }): Promise<Stat | StatBigInt>; }; exists: (path: string) => Promise<boolean>; isDirectory: (path: string) => Promise<boolean>; isFile: (path: string) => Promise<boolean>; lstat: (path: string, options?: { readonly bigint: boolean; } | undefined) => Promise<Stat | StatBigInt>; symlink: (target: string, path: string) => Promise<void>; readlink: (path: string) => Promise<string>; backFile: (filepath: string) => Promise<number>; du: (path: string) => Promise<number>; getUri: (path: string) => Promise<string>; appendFile: (path: string, data: ArrayBuffer | Blob | string, options?: Encoding | { readonly encoding?: Encoding | undefined; } | undefined) => Promise<void>; on: <Type extends keyof Events>(type: Type, cb: (param: Events[Type]) => void) => { (): void; }; watch: (path: string | readonly string[] | (() => string | readonly string[]), cb: (param: { readonly path: string; readonly action: keyof Events; }) => void | Promise<void>, { mode, type, miniOpts, immediate, exists, dir, }?: { readonly mode?: "absolute" | "relative" | "abstract" | undefined; readonly type?: "*" | "file" | "dir" | undefined; readonly miniOpts?: minimatch.IOptions | undefined; readonly immediate?: boolean | undefined; readonly exists?: boolean | undefined; readonly dir?: string | (() => string | null) | null | undefined; }) => { (): void; }; }; export default createFilesystem; export { Stat, StatBigInt }; ``` ## License MIT (c) 2021 [Tachibana Shin](https://github.com/tachibana-shin)
46.990291
396
0.551498
eng_Latn
0.88906
b98e1f18cb3de5d90ebc8fac8ea623a3b6dc67b1
4,258
md
Markdown
content/en/docs/ops/deployment/architecture/index.md
ipuustin/istio.io
e9a89c879f6925593e2147b7778baeff85ac9bba
[ "Apache-2.0" ]
1
2020-09-18T05:38:50.000Z
2020-09-18T05:38:50.000Z
content/en/docs/ops/deployment/architecture/index.md
ipuustin/istio.io
e9a89c879f6925593e2147b7778baeff85ac9bba
[ "Apache-2.0" ]
null
null
null
content/en/docs/ops/deployment/architecture/index.md
ipuustin/istio.io
e9a89c879f6925593e2147b7778baeff85ac9bba
[ "Apache-2.0" ]
null
null
null
---
title: Architecture
description: Describes Istio's high-level architecture and design goals.
weight: 10
aliases:
  - /docs/concepts/architecture
  - /docs/ops/architecture
owner: istio/wg-environments-maintainers
test: n/a
---

An Istio service mesh is logically split into a **data plane** and a **control plane**.

* The **data plane** is composed of a set of intelligent proxies ([Envoy](https://www.envoyproxy.io/)) deployed as sidecars. These proxies mediate and control all network communication between microservices. They also collect and report telemetry on all mesh traffic.

* The **control plane** manages and configures the proxies to route traffic.

The following diagram shows the different components that make up each plane:

{{< image width="80%" link="./arch.svg" alt="The overall architecture of an Istio-based application." caption="Istio Architecture" >}}

## Components

The following sections provide a brief overview of each of Istio's core components.

### Envoy

Istio uses an extended version of the [Envoy](https://envoyproxy.github.io/envoy/) proxy. Envoy is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh. Envoy proxies are the only Istio components that interact with data plane traffic.

Envoy proxies are deployed as sidecars to services, logically augmenting the services with Envoy's many built-in features, for example:

* Dynamic service discovery
* Load balancing
* TLS termination
* HTTP/2 and gRPC proxies
* Circuit breakers
* Health checks
* Staged rollouts with %-based traffic split
* Fault injection
* Rich metrics

This sidecar deployment allows Istio to extract a wealth of signals about traffic behavior as [attributes](/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes). Istio can use these attributes to enforce policy decisions, and send them to monitoring systems to provide information about the behavior of the entire mesh.

The sidecar proxy model also allows you to add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code. You can read more about why we chose this approach in our [Design Goals](#design-goals).

Some of the Istio features and tasks enabled by Envoy proxies include:

* Traffic control features: enforce fine-grained traffic control with rich routing rules for HTTP, gRPC, WebSocket, and TCP traffic.
* Network resiliency features: set up retries, failovers, circuit breakers, and fault injection.
* Security and authentication features: enforce security policies and enforce access control and rate limiting defined through the configuration API.
* Pluggable extensions model based on WebAssembly that allows for custom policy enforcement and telemetry generation for mesh traffic.

### Istiod

Istiod provides service discovery, configuration, and certificate management.

Istiod converts high-level routing rules that control traffic behavior into Envoy-specific configurations, and propagates them to the sidecars at runtime. Pilot abstracts platform-specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming with the [Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api/api) can consume.

Istio can support discovery for multiple environments such as Kubernetes, Consul, or VMs.
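For example, a high-level routing rule can look like the following sketch; the `reviews` service and its `v1`/`v2` subsets are purely illustrative. Istiod compiles rules like this into Envoy-specific route configuration and pushes it to the sidecars.

```yaml
# Illustrative VirtualService: send 90% of traffic to subset v1 and 10% to v2.
# The service name and subsets are placeholders, not taken from this page.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```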
You can use Istio's [Traffic Management API](/docs/concepts/traffic-management/#introducing-istio-traffic-management) to instruct Istiod to refine the Envoy configuration to exercise more granular control over the traffic in your service mesh. Istiod [security](/docs/concepts/security/) enables strong service-to-service and end-user authentication with built-in identity and credential management. You can use Istio to upgrade unencrypted traffic in the service mesh. Using Istio, operators can enforce policies based on service identity rather than on relatively unstable layer 3 or layer 4 network identifiers. Starting from release 0.5, you can use [Istio's authorization feature](/docs/concepts/security/#authorization) to control who can access your services. Istiod maintains a CA and generates certificates to allow secure mTLS communication in the data plane.
39.06422
97
0.797088
eng_Latn
0.996547
b98eaf6c6c2f0b771f1a3a9363d429c20c27a82b
1,296
md
Markdown
README.md
inkydragon/gkd
556693e002caa8021222705d7329a7689457ae1a
[ "MIT" ]
null
null
null
README.md
inkydragon/gkd
556693e002caa8021222705d7329a7689457ae1a
[ "MIT" ]
null
null
null
README.md
inkydragon/gkd
556693e002caa8021222705d7329a7689457ae1a
[ "MIT" ]
null
null
null
## gkd (搞快点, roughly "make it quick")

alpha version, do not use.

A set of tools to help programming in LaTeX.

`pip install wisepy2 gkd`, and append the contents of `gkd.tex` to your TeX sources.

### 1. GKDBNF: The best LaTeX BNF package you've ever seen?

This relies on [paperbnf](https://github.com/thautwarm/paperbnf).

**Usage**

```tex
\begin{GKDBNF}{some_unique_id}
!Expressions! <e> ::= <e> ( <e> )
    | let <n> = <e> in <e>
    | !$\lambda$! <n> . <e>
    | <\mathtt{atom}>
\end{GKDBNF}
```

![capture](Capture.PNG)

**Remember to place a blank line at the end of the GKDBNF block**.

How do you write this BNF? Follow the syntax and lexer rules.

Valid BNF syntax:

```bnf
<atom> ::= NONTERMINAL | TERMINAL | TERMINAL2 | '|'
<prod> ::= NONTERMINAL '::=' <atom>+ NEWLINE
         | TERMINAL NONTERMINAL '::=' <atom>+ NEWLINE
         | TERMINAL2 NONTERMINAL '::=' <atom>+ NEWLINE
         | '|' <atom>+ <NEWLINE>
```

Lexer rules by regex:

```
NEWLINE = [\r\n]+
NONTERMINAL = <.*?>
TERMINAL2 = !.*?!
TERMINAL = \S+
```

Whitespace tokens are ignored.

### 2. Utilities

```tex
\GKDSet{a}{1}
\GKDGet{a} % gives "1"

\GKDPush{xs}{1}
\GKDPush{xs}{2}
\GKDPop{xs}
\GKDPop{xs} % gives "1"

\newcommand{\addone}[1]{
    \GKDPyCall{"lambda x: int(x) + 1"}{#1}
}
\addone{2} % 3
\addone{2.0} % 3.0
```
16
80
0.591821
yue_Hant
0.73766
b98fe687eba6aeab65503475218ebbb20536898c
1,296
md
Markdown
intune/device-profile-monitor.md
Lauragra/IntuneDocs
806ad7ddaf558a030088e42562d657060336c63f
[ "CC-BY-4.0", "MIT" ]
3
2019-08-23T06:03:08.000Z
2022-03-15T12:50:23.000Z
intune/device-profile-monitor.md
DeployWindowsCom/IntuneDocs
0882d7b2672af12a1d4dab4f8b23e0c9610773b1
[ "CC-BY-4.0", "MIT" ]
null
null
null
intune/device-profile-monitor.md
DeployWindowsCom/IntuneDocs
0882d7b2672af12a1d4dab4f8b23e0c9610773b1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
# required metadata

title: How to monitor device profiles with Intune
titlesuffix: "Azure portal"
description: Learn how to monitor assigned Intune device profiles.
keywords:
author: arob98
ms.author: angrobe
manager: angrobe
ms.date: 03/16/2017
ms.topic: article
ms.prod:
ms.service: microsoft-intune
ms.technology:
ms.assetid: 9deaed87-fb4b-4689-ba88-067bc61686d7

# optional metadata

#ROBOTS:
#audience:
#ms.devlang:
ms.reviewer: heenamac
ms.suite: ems
#ms.tgt_pltfrm:
ms.custom: intune-azure
---

# How to monitor device profiles in Microsoft Intune

[!INCLUDE[azure_portal](./includes/azure_portal.md)]

You can monitor the assignment progress of Intune device profiles in two ways, as shown in the final step below:

1. Sign in to the Azure portal.
2. Choose **More Services** > **Monitoring + Management** > **Intune**.
3. On the **Intune** blade, choose **Device configuration**.
4. On the **Device Configuration** blade, choose **Manage** > **Profiles**.
5. On the list of profiles blade, choose the profile you want to manage, and then do either of the following:
   - On the <*profile name*> **Reports** blade, choose **Overview** to see basic information about the profile and its assignments.
   - On the <*profile name*> **Reports** blade, choose **Reports** to see more detailed information about the profile and its assignments.
29.454545
136
0.746142
eng_Latn
0.872407
b990779b5f0eb8969f017184e6e9f037029b16fe
8,519
md
Markdown
sccm/develop/core/understand/connecting-to-configuration-manager-with-windows-powershell.md
nunoarias/SCCMdocs
0506129c4b3b0e88264e4648b6c03c02da91d938
[ "CC-BY-4.0", "MIT" ]
1
2019-11-19T15:45:13.000Z
2019-11-19T15:45:13.000Z
sccm/develop/core/understand/connecting-to-configuration-manager-with-windows-powershell.md
nunoarias/SCCMdocs
0506129c4b3b0e88264e4648b6c03c02da91d938
[ "CC-BY-4.0", "MIT" ]
null
null
null
sccm/develop/core/understand/connecting-to-configuration-manager-with-windows-powershell.md
nunoarias/SCCMdocs
0506129c4b3b0e88264e4648b6c03c02da91d938
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "Connecting with Windows PowerShell" titleSuffix: "Configuration Manager" ms.date: "09/20/2016" ms.prod: "configuration-manager" ms.technology: configmgr-sdk ms.topic: conceptual ms.assetid: 1d466a0b-bb4a-4648-8f16-9b6c897934f5 author: aczechowski ms.author: aaroncz manager: dougeby ms.collection: M365-identity-device-management --- # Connecting to Configuration Manager with Windows PowerShell In the [Configuration Manager Windows PowerShell Basics](../../../develop/core/understand/windows-powershell-basics.md) topic, you tried a few basic Windows PowerShell cmdlets. This topic helps you connect to Configuration Manager from your Windows PowerShell environment. ## Loading Windows PowerShell from the Configuration Manager Console The easiest method to load Windows PowerShell is directly from the Configuration Manager console. 1. Start by launching the Configuration Manager console. In the upper left corner, there’s a blue rectangle. Click the white arrow in the blue rectangle, and choose **Connect via Windows PowerShell**. ![PowerShell Menu](../../../develop/core/understand/media/cmpowershellmenucb.PNG "CMPowerShellMenuCB") 2. Once Windows PowerShell loads, you’ll see a prompt that contains your site code. For example, if the site code is “ABC��?, the prompt looks like: ``` PS ABC:\> ``` 3. Let’s just verify everything is working fine. The first cmdlet you’ll try is `Get-CMSite`. This cmdlet will return information about the Configuration Manager site we’re currently connected to. Go to your Windows PowerShell window, and type in `Get-CMSite`: ``` PS ABC:\> get-cmsite BuildNumber : 7958 Features : 0000000000000000000000000000000000000000000000000000000000000000 InstallDir : C:\Program Files\Microsoft Configuration Manager Mode : 0 ReportingSiteCode : RequestedStatus : 110 ServerName : SDKTESTLAB.test.lab SiteCode : ABC SiteName : ABC Test Site Status : 1 TimeZoneInfo : 000001E0 0000 000B 0000 0001 0002 0000 0000 0000 00000000 0000 0003 0000 0002 0002 0000 0000 0000 FFFFFFC4 Type : 2 Version : 5.00.7958.1000 ``` ## Importing the Configuration Manager PowerShell Module Another method of connecting to Configuration Manager from your Windows PowerShell environment is to load the Configuration Manager module manually. 1. Hit your Windows Key and type “PowerShell��? – then right-click **Windows PowerShell** and choose “Run as administrator��?. You should now see your PowerShell environment. ``` Windows PowerShell Copyright (C) 2016 Microsoft Corporation. All rights reserved. PS C:\WINDOWS\system32> ``` 2. Now, you’ll need to import the Configuration Manager module using the built-in Windows PowerShell cmdlet `Import-Module`. To import the Configuration Manager module, you will have to specify the path to the Configuration Manager module or change to the directory that contains the module. 
Go to your Windows PowerShell window, and type in `CD ‘C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin’`: ``` PS C:\> PS C:\> CD ‘C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin’ PS C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin> ``` Go to your Windows PowerShell window, and type in `import-module .\ConfigurationManager.psd1 -verbose`: ``` PS C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin> PS C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin> import-module .\ConfigurationManager.psd1 -verbose Note: The ‘-verbose’ switch displays a list of the cmdlets being imported (quite a long list in the case of Configuration Manager). ``` 3. Confirm that the Configuration Manager module has been loaded using the `Get-CMSite` cmdlet. Go to your Windows PowerShell window, and type in `Get-CMSite`: ``` PS C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin>Get-CMSite get-cmsite : This command cannot be run from the current drive. To run this command you must first connect to a Configuration Manager drive. at line:1 char:1 get-cmsite ~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Get-CMSite], InvalidOperationException + FullyQualifiedErrorId : System.InvalidOperationException,Microsoft.ConfigurationManagement.Cmdlets.HS.Commands.GetSiteCommand PS C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin> ``` The error was caused by our current path pointing to the local hard drive (our Configuration Manager Console path) not the Configuration Manager site. > [!IMPORTANT] > To run the Configuration Manager cmdlets, you need to switch the path to the Configuration Manager site. 4. Go to your Windows PowerShell window, and type in `CD <site code>:`, replacing \<site code> with your site code (the site code “ABC��? is used below): ``` PS C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin> CD ABC: PS ABC:\> ``` 5. Now, try to again to confirm that the Configuration Manager module has been loaded using the `Get-CMSite` cmdlet. Go to your Windows PowerShell window, and type in `Get-CMSite`: ``` PS ABC:\> Get-CMSite BuildNumber : 7958 Features : 0000000000000000000000000000000000000000000000000000000000000000 InstallDir : C:\Program Files\Microsoft Configuration Manager Mode : 0 ReportingSiteCode : RequestedStatus : 110 ServerName : SDKTESTLAB.test.lab SiteCode : ABC SiteName : ABC Test Site Status : 1 TimeZoneInfo : 000001E0 0000 000B 0000 0001 0002 0000 0000 0000 00000000 0000 0003 0000 0002 0002 0000 0000 0000 FFFFFFC4 Type : 2 Version : 5.00.7958.1000 ``` 6. Success! You are connected to the Configuration Manager site and the Configuration Manager module is loaded. ## Update Help! PowerShell 3.0 introduced a new feature to update your Windows PowerShell help over the Internet. 1. You can update Windows PowerShell help (and specifically the help for the Configuration Manager cmdlets) using the `Update-Help` cmdlet. If your computer is connected to the Internet, go to your Windows PowerShell window, and type in `Update-Help –Module configurationmanager`. ``` PS ABC:\> Update-Help –Module configurationmanager PS ABC:\> ``` 2. You can get help about Windows PowerShell cmdlets by using the `Get-Help` cmdlet. Go to your Windows PowerShell window, and type in `Get-Help Get-CMSite`: ``` PS ABC:\> Get-Help Get-CMSite NAME Get-CMSite SYNOPSIS Gets one or more Configuration Manager sites. 
SYNTAX Get-CMSite [-Name <string>] [<CommonParameters>] Get-CMSite -SiteCode <string> [<CommonParameters>] DESCRIPTION The Get-CMSite cmdlet gets one or more Microsoft System Center Configuration Manager sites. A SystemCenter Configuration Manager site is a server that has clients assigned to it and that processes client-generated data. You can get a Configuration Manager site by using either a site name or a site code. RELATED LINKS Online Version: http://go.microsoft.com/fwlink/?LinkID=263855 Set-CMSite REMARKS To see the examples, type: "get-help Get-CMSite -examples". For more information, type: "get-help Get-CMSite -detailed". For technical information, type: "get-help Get-CMSite -full". For online help, type: "get-help Get-CMSite -online" ``` ## See Also [Getting Started with Configuration Manager Windows PowerShell](../../../develop/core/understand/getting-started-with-configuration-manager-and-windows-powershell.md)
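As a closing aside, the steps above are interactive. If you want to drive the same import-module / site-drive / `Get-CMSite` sequence from another tool, a rough Python sketch is shown below. The paths and the "ABC" site code are the examples used in this topic; treat this as an illustration, not a supported interface:

```python
# Minimal sketch: run the same module import and site-drive switch
# non-interactively from Python. Windows-only; requires the Configuration
# Manager console to be installed at the path used in this topic.
import subprocess

ps_script = r"""
Import-Module 'C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1'
Set-Location ABC:
Get-CMSite | Select-Object SiteCode, SiteName, Version
"""

result = subprocess.run(
    ["powershell.exe", "-NoProfile", "-Command", ps_script],
    capture_output=True, text=True)
print(result.stdout or result.stderr)
```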
44.369792
295
0.673084
eng_Latn
0.782951
b9909de6dbc48af977d9476e2092e0fb76a4aea0
550
md
Markdown
doc/nano-editor.md
devel0/linux-knowledge
c5692dc56cd3b338cb10407fd8e662d257f4dc99
[ "MIT" ]
null
null
null
doc/nano-editor.md
devel0/linux-knowledge
c5692dc56cd3b338cb10407fd8e662d257f4dc99
[ "MIT" ]
null
null
null
doc/nano-editor.md
devel0/linux-knowledge
c5692dc56cd3b338cb10407fd8e662d257f4dc99
[ "MIT" ]
null
null
null
# nano editor ## enable syntax highlight - include csharp syntax highlight ( using java file ) ```sh nanorcdir=/usr/share/nano if [ ! -e "$nanorcdir" ]; then echo "not found nano resource files in [$nanorcdir]" else if [ ! -e "$nanorcdir"/csharp.nanorc ]; then cat "$nanorcdir"/java.nanorc | sed 's/\.java/\.cs/g' | sed 's/java/csharp/I' > "$nanorcdir"/csharp.nanorc fi ls -1 "$nanorcdir"/*.nanorc | sed 's/^\//include \//' >> ~/.nanorc fi ``` ## disable ^T^Z terminal suspend add to `~/.nanorc` follow ``` bind ^Z suspend main ```
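For reference, the same derivation can be scripted outside the shell. Below is a minimal Python sketch of the `sed` pipeline above, under the assumption that the nano resource files live in the same paths; run it with enough privileges to write to `/usr/share/nano`:

```python
# Minimal sketch of the same derivation in Python: clone java.nanorc into
# csharp.nanorc, retargeting it at .cs files, then append include lines
# for every syntax file to ~/.nanorc, as the shell one-liner does.
import re
from pathlib import Path

nanorc_dir = Path("/usr/share/nano")
java, csharp = nanorc_dir / "java.nanorc", nanorc_dir / "csharp.nanorc"

if java.exists() and not csharp.exists():
    text = java.read_text()
    text = text.replace(".java", ".cs")                      # s/\.java/\.cs/g
    text = re.sub("java", "csharp", text, flags=re.IGNORECASE)  # s/java/csharp/I
    csharp.write_text(text)

with open(Path.home() / ".nanorc", "a") as rc:
    for f in sorted(nanorc_dir.glob("*.nanorc")):
        rc.write(f"include {f}\n")
```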
21.153846
109
0.632727
eng_Latn
0.743152
b990d81753cc43c0b53f6093ee5ddaa6f630620f
6,691
md
Markdown
products/firewall/src/content/cf-firewall-rules/index.md
sweethuman/cloudflare-docs
33e21108fe03be8f16d8fe17d7a934c31ac1a9dd
[ "MIT" ]
915
2020-10-02T21:29:22.000Z
2022-03-31T21:41:30.000Z
products/firewall/src/content/cf-firewall-rules/index.md
sweethuman/cloudflare-docs
33e21108fe03be8f16d8fe17d7a934c31ac1a9dd
[ "MIT" ]
1,394
2020-09-28T21:23:45.000Z
2022-03-31T19:27:58.000Z
products/firewall/src/content/cf-firewall-rules/index.md
sweethuman/cloudflare-docs
33e21108fe03be8f16d8fe17d7a934c31ac1a9dd
[ "MIT" ]
1,440
2020-09-18T16:31:31.000Z
2022-03-31T18:14:21.000Z
--- title: About pcx-content-type: concept order: 200 --- # About Cloudflare Firewall Rules ## Flexibility and control **Cloudflare Firewall Rules** is a flexible and intuitive framework for filtering HTTP requests. It gives you fine-grained control over which requests reach your applications. Firewall Rules complements existing Cloudflare tools by allowing you to create rules that combine a variety of techniques. For example, rather than managing 3 independent rules in 3 different places, you can easily create a single firewall rule that blocks traffic to a URI when the request comes from a particular IP and the user-agent matches a specific string or a pattern. Once you are satisfied with the rule, you can deploy it yourself, immediately. Fundamentally, Firewall Rules gives you the power to proactively inspect incoming site traffic and automatically respond to threats. You define **expressions** that tell Cloudflare what to look for and specify the appropriate **action** to take when those criteria are satisfied. It is a simple concept, but like the Wireshark Display Filter language that inspired our own expression language, the Firewall Rules language is a powerful tool that allows organizations to rapidly adapt to a constantly evolving threat landscape. ## Working with Firewall Rules To configure Firewall Rules from the Cloudflare dashboard, use the **Firewall Rules** tab in the **Firewall** app. For more, see [_Manage rules in the Cloudflare dashboard_](/cf-dashboard). To configure Firewall Rules with the Cloudflare API, use the Firewall Rules API. Use the Cloudflare Filters API to manage expressions. For more, see [_Manage rules via the APIs_](/api). You can also manage Firewall Rules through Terraform. For more, see [_Getting Started with Terraform_](https://blog.cloudflare.com/getting-started-with-terraform-and-cloudflare-part-1/). ### Firewall Rules tab The **Rules List** gives you a snapshot of recent activity and allows you to manage firewall rules in a single convenient location (see image below). ![Firewall Rules tab](../images/cf-firewall-rules-panel.png) #### Challenge Solve Rate (CSR) The **Rules List** displays each rule's **CSR** (Challenge Solve Rate), which is the percentage of issued challenges that were solved. This metric applies to rules configured with _Challenge (Captcha)_ or _JS Challenge_ actions, and it is calculated as follows: <p><var>CSR</var> = <var>number of challenges solved</var> / <var>number of challenges issued</var></p> Hover over the CSR to reveal the number of issued and solved CAPTCHA challenges: ![Revealing the number of issued vs. solved CAPTCHA challenges](../images/firewall-rules-csr-hover.png) A low CSR means that Cloudflare is issuing a low number of CAPTCHA challenges to actual humans, since these are the solved challenges. You should aim for a low Challenge Solve Rate. Review the CSR of your CAPTCHA rules periodically and adjust them if necessary: * If the rate is higher than expected, for example regarding a Bot Management rule, consider relaxing the rule criteria so that you issue fewer challenges to human visitors. * If the rate is 0%, no CAPTCHA challenges are being solved. This means that you have no human visitors whose requests match the rule filter. Consider changing the rule action to _Block_. <Aside type="warning" header="Important"> Currently, Cloudflare does not calculate the CSR of Managed Challenges. For customers on a Free plan, any rules configured with the _Challenge (Captcha)_ action now use Managed Challenges. 
For more information, see [Understanding Cloudflare Captchas and Challenge Passage](https://support.cloudflare.com/hc/articles/200170136#managed-challenge). </Aside> ### Expression Builder Both the **Create Firewall** and **Edit Firewall** panels include the visual **Expression Builder** (outlined below, in orange), which is an excellent tool to start with. ![Expression Builder](../images/firewall-rules-intro-exp-builder.png) ### Expression Editor Advanced users will appreciate the **Expression Editor** (shown below), which trades the visual simplicity of the builder for the raw power of the [Cloudflare Firewall Rules language](https://developers.cloudflare.com/firewall/cf-firewall-language). The editor also supports advanced features, such as grouping symbols, for constructing highly sophisticated, targeted rules. ![Expression Editor](../images/firewall-rules-intro-exp-editor.png) ### Firewall Rules APIs Power users, particularly those who develop large numbers of firewall rules, can use the Cloudflare API to programmatically manage Firewall Rules (see [_Manage rules via the API_](https://developers.cloudflare.com/firewall/api)). ## Entitlements Cloudflare Firewall Rules is available to all customers. Keep in mind that the number of firewall rules you can have active on your account is based on your type of plan, as is support for the _Log_ action and support for regular expressions. This table outlines the Firewall Rules features and entitlements available with each customer plan: <TableWrap> <table> <thead> <tr> <td></td> <td colspan="4" style="text-align:center"><strong>Cloudflare plan</strong></td> </tr> <tr> <td><strong>Feature</strong></td> <td><strong>Free</strong></td> <td><strong>Pro</strong></td> <td><strong>Business</strong></td> <td><strong>Enterprise</strong></td> </tr> </thead> <tbody> <tr> <td>Active rules</td> <td>5</td> <td>20</td> <td>100</td> <td>1000</td> </tr> <tr> <td>Supported actions</td> <td>All except <em>Log</em></td> <td>All except <em>Log</em></td> <td>All except <em>Log</em></td> <td>All</td> </tr> <tr> <td>Regular expression support</td> <td>No</td> <td>No</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>Number of <a href='https://developers.cloudflare.com/firewall/cf-firewall-rules/rules-lists'>Rules Lists</a></td> <td>1</td> <td>10</td> <td>10</td> <td>10</td> </tr> </tbody> </table> </TableWrap> ## Get started Unless you are already an advanced user, review [expressions](/cf-firewall-rules/fields-and-expressions/) and [actions](/cf-firewall-rules/actions/), which form the foundation of Firewall Rules. To get started building your own firewall rules, see [_Manage Firewall Rules in the dashboard_](/cf-dashboard/create-edit-delete-rules/). Those eager to dive straight into the technical details can refer to these topics: * [_Common use cases_](https://developers.cloudflare.com/firewall/recipes) * [_Firewall Rules language_](https://developers.cloudflare.com/firewall/cf-firewall-language) * [_Manage rules via the APIs_](https://developers.cloudflare.com/firewall/api/)
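As an aside, the Challenge Solve Rate formula defined earlier on this page is simple enough to sanity-check in a few lines. A minimal worked example (the challenge counts are made up for illustration):

```python
# Worked example of the Challenge Solve Rate formula from this page:
# CSR = number of challenges solved / number of challenges issued.
def challenge_solve_rate(solved: int, issued: int) -> float:
    if issued == 0:
        return 0.0  # no challenges issued yet, so nothing to measure
    return solved / issued

# A rule that issued 400 CAPTCHAs, of which 6 were solved:
print(f"{challenge_solve_rate(6, 400):.1%}")  # -> 1.5%

# A 0% rate means no humans are matching the rule; per the guidance
# above, consider changing the rule action to Block.
```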
48.136691
455
0.763862
eng_Latn
0.988649
b99215d85b99c674cd369b41589554c967524953
4,585
md
Markdown
README.md
npmtest/node-npmtest-caniuse-api
cae01c42cfc4820c2fe156d4bde91681d6f09d10
[ "MIT" ]
null
null
null
README.md
npmtest/node-npmtest-caniuse-api
cae01c42cfc4820c2fe156d4bde91681d6f09d10
[ "MIT" ]
null
null
null
README.md
npmtest/node-npmtest-caniuse-api
cae01c42cfc4820c2fe156d4bde91681d6f09d10
[ "MIT" ]
null
null
null
# npmtest-caniuse-api #### basic test coverage for [caniuse-api (v1.6.1)](https://github.com/nyalab/caniuse-api#readme) [![npm package](https://img.shields.io/npm/v/npmtest-caniuse-api.svg?style=flat-square)](https://www.npmjs.org/package/npmtest-caniuse-api) [![travis-ci.org build-status](https://api.travis-ci.org/npmtest/node-npmtest-caniuse-api.svg)](https://travis-ci.org/npmtest/node-npmtest-caniuse-api) #### request the caniuse data to check browsers compatibilities [![NPM](https://nodei.co/npm/caniuse-api.png?downloads=true&downloadRank=true&stars=true)](https://www.npmjs.com/package/caniuse-api) | git-branch : | [alpha](https://github.com/npmtest/node-npmtest-caniuse-api/tree/alpha)| |--:|:--| | coverage : | [![istanbul-coverage](https://npmtest.github.io/node-npmtest-caniuse-api/build/coverage.badge.svg)](https://npmtest.github.io/node-npmtest-caniuse-api/build/coverage.html/index.html)| | test-report : | [![test-report](https://npmtest.github.io/node-npmtest-caniuse-api/build/test-report.badge.svg)](https://npmtest.github.io/node-npmtest-caniuse-api/build/test-report.html)| | build-artifacts : | [![build-artifacts](https://npmtest.github.io/node-npmtest-caniuse-api/glyphicons_144_folder_open.png)](https://github.com/npmtest/node-npmtest-caniuse-api/tree/gh-pages/build)| - [https://npmtest.github.io/node-npmtest-caniuse-api/build/coverage.html/index.html](https://npmtest.github.io/node-npmtest-caniuse-api/build/coverage.html/index.html) [![istanbul-coverage](https://npmtest.github.io/node-npmtest-caniuse-api/build/screenCapture.buildCi.browser.%252Ftmp%252Fbuild%252Fcoverage.lib.html.png)](https://npmtest.github.io/node-npmtest-caniuse-api/build/coverage.html/index.html) - [https://npmtest.github.io/node-npmtest-caniuse-api/build/test-report.html](https://npmtest.github.io/node-npmtest-caniuse-api/build/test-report.html) [![test-report](https://npmtest.github.io/node-npmtest-caniuse-api/build/screenCapture.buildCi.browser.%252Ftmp%252Fbuild%252Ftest-report.html.png)](https://npmtest.github.io/node-npmtest-caniuse-api/build/test-report.html) - [https://npmdoc.github.io/node-npmdoc-caniuse-api/build/apidoc.html](https://npmdoc.github.io/node-npmdoc-caniuse-api/build/apidoc.html) [![apidoc](https://npmdoc.github.io/node-npmdoc-caniuse-api/build/screenCapture.buildCi.browser.%252Ftmp%252Fbuild%252Fapidoc.html.png)](https://npmdoc.github.io/node-npmdoc-caniuse-api/build/apidoc.html) ![npmPackageListing](https://npmtest.github.io/node-npmtest-caniuse-api/build/screenCapture.npmPackageListing.svg) ![npmPackageDependencyTree](https://npmtest.github.io/node-npmtest-caniuse-api/build/screenCapture.npmPackageDependencyTree.svg) # package.json ```json { "authors": [ "nyalab", "MoOx" ], "babel": { "presets": [ "babel-preset-latest" ] }, "bugs": { "url": "https://github.com/nyalab/caniuse-api/issues" }, "dependencies": { "browserslist": "^1.3.6", "caniuse-db": "^1.0.30000529", "lodash.memoize": "^4.1.2", "lodash.uniq": "^4.5.0" }, "description": "request the caniuse data to check browsers compatibilities", "devDependencies": { "babel-cli": "^6.22.2", "babel-eslint": "^5.0.0", "babel-preset-latest": "^6.22.0", "babel-tape-runner": "^2.0.1", "jshint": "^2.5.10", "npmpub": "^3.1.0", "tap-spec": "^4.1.1", "tape": "^4.6.0" }, "directories": {}, "dist": { "shasum": "b534e7c734c4f81ec5fbe8aca2ad24354b962c6c", "tarball": "https://registry.npmjs.org/caniuse-api/-/caniuse-api-1.6.1.tgz" }, "files": [ "dist" ], "gitHead": "a0d94a2d08b7d5de48e8404d63182764999734cc", "homepage": 
"https://github.com/nyalab/caniuse-api#readme", "keywords": [ "caniuse", "browserslist" ], "license": "MIT", "main": "dist/index.js", "maintainers": [ { "name": "nyalab" } ], "name": "caniuse-api", "optionalDependencies": {}, "repository": { "type": "git", "url": "git+https://github.com/nyalab/caniuse-api.git" }, "scripts": { "build": "babel src --out-dir dist", "lint": "jshint src", "prepublish": "npm run build", "release": "npmpub", "test": "npm run lint && babel-tape-runner test/*.js | tap-spec" }, "version": "1.6.1", "bin": {} } ``` # misc - this document was created with [utility2](https://github.com/kaizhu256/node-utility2)
41.681818
391
0.659978
yue_Hant
0.175079
b99240e468e94cc28b85d968a80b1cee79950c6e
9,717
md
Markdown
articles/terraform/terraform-hub-spoke-hub-nva.md
changeworld/azure-docs.tr-tr
a6c8b9b00fe259a254abfb8f11ade124cd233fcb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/terraform/terraform-hub-spoke-hub-nva.md
changeworld/azure-docs.tr-tr
a6c8b9b00fe259a254abfb8f11ade124cd233fcb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/terraform/terraform-hub-spoke-hub-nva.md
changeworld/azure-docs.tr-tr
a6c8b9b00fe259a254abfb8f11ade124cd233fcb
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Tutorial - Create a hub virtual network appliance in Azure using Terraform
description: This tutorial implements the creation of the hub virtual network appliance, which acts as a common connection point between all other networks
ms.topic: tutorial
ms.date: 10/26/2019
ms.openlocfilehash: 28ccb89d237cbe21dd0433da5f7fbb32883f6550
ms.sourcegitcommit: 0947111b263015136bca0e6ec5a8c570b3f700ff
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/24/2020
ms.locfileid: "74159255"
---
# <a name="tutorial-create-a-hub-virtual-network-appliance-in-azure-using-terraform"></a>Tutorial: Create a hub virtual network appliance in Azure using Terraform

A **VPN device** is a device that provides external connectivity to an on-premises network. The VPN device may be a hardware device or a software solution. One example of a software solution is Routing and Remote Access Service (RRAS) in Windows Server 2012. For more information about VPN devices, see [About VPN devices for Site-to-Site VPN Gateway connections](/azure/vpn-gateway/vpn-gateway-about-vpn-devices).

Azure supports a broad variety of network virtual appliances to select from. For this tutorial, an Ubuntu image is used. To learn more about the broad variety of device solutions supported in Azure, see the [Network Appliances home page](https://azure.microsoft.com/solutions/network-appliances/).

This tutorial covers the following tasks:

> [!div class="checklist"]
> * Use HCL (HashiCorp Language) to implement the Hub VNet in a hub-and-spoke topology
> * Use Terraform to create the hub network virtual machine that acts as the appliance
> * Use Terraform to enable routes using CustomScript extensions
> * Use Terraform to create the hub and spoke gateway route tables

## <a name="prerequisites"></a>Prerequisites

1. [Create a hub and spoke hybrid network topology with Terraform in Azure](./terraform-hub-spoke-introduction.md).
1. [Create an on-premises virtual network with Terraform in Azure](./terraform-hub-spoke-on-prem.md).
1. [Create a hub virtual network with Terraform in Azure](./terraform-hub-spoke-hub-network.md).

## <a name="create-the-directory-structure"></a>Create the directory structure

1. Browse to the [Azure portal](https://portal.azure.com).

1. Open [Azure Cloud Shell](/azure/cloud-shell/overview). If you didn't select an environment previously, select **Bash** as your environment.

    ![Cloud Shell prompt](./media/terraform-common/azure-portal-cloud-shell-button-min.png)

1. Change directories to the `clouddrive` directory.

    ```bash
    cd clouddrive
    ```

1. Change directories to the new directory:

    ```bash
    cd hub-spoke
    ```

## <a name="declare-the-hub-network-appliance"></a>Declare the hub network appliance

Create the Terraform configuration file that declares the hub network virtual appliance.

1. In Cloud Shell, create a new file named `hub-nva.tf`.

    ```bash
    code hub-nva.tf
    ```

1. Paste the following code into the editor:

    ```hcl
    locals {
      prefix-hub-nva = "hub-nva"
      hub-nva-location = "CentralUS"
      hub-nva-resource-group = "hub-nva-rg"
    }

    resource "azurerm_resource_group" "hub-nva-rg" {
      name = "${local.prefix-hub-nva}-rg"
      location = local.hub-nva-location

      tags {
        environment = local.prefix-hub-nva
      }
    }

    resource "azurerm_network_interface" "hub-nva-nic" {
      name = "${local.prefix-hub-nva}-nic"
      location = azurerm_resource_group.hub-nva-rg.location
      resource_group_name = azurerm_resource_group.hub-nva-rg.name
      enable_ip_forwarding = true

      ip_configuration {
        name = local.prefix-hub-nva
        subnet_id = azurerm_subnet.hub-dmz.id
        private_ip_address_allocation = "Static"
        private_ip_address = "10.0.0.36"
      }

      tags {
        environment = local.prefix-hub-nva
      }
    }

    resource "azurerm_virtual_machine" "hub-nva-vm" {
      name = "${local.prefix-hub-nva}-vm"
      location = azurerm_resource_group.hub-nva-rg.location
      resource_group_name = azurerm_resource_group.hub-nva-rg.name
      network_interface_ids = [azurerm_network_interface.hub-nva-nic.id]
      vm_size = var.vmsize

      storage_image_reference {
        publisher = "Canonical"
        offer = "UbuntuServer"
        sku = "16.04-LTS"
        version = "latest"
      }

      storage_os_disk {
        name = "myosdisk1"
        caching = "ReadWrite"
        create_option = "FromImage"
        managed_disk_type = "Standard_LRS"
      }

      os_profile {
        computer_name = "${local.prefix-hub-nva}-vm"
        admin_username = var.username
        admin_password = var.password
      }

      os_profile_linux_config {
        disable_password_authentication = false
      }

      tags {
        environment = local.prefix-hub-nva
      }
    }

    resource "azurerm_virtual_machine_extension" "enable-routes" {
      name = "enable-iptables-routes"
      location = azurerm_resource_group.hub-nva-rg.location
      resource_group_name = azurerm_resource_group.hub-nva-rg.name
      virtual_machine_name = azurerm_virtual_machine.hub-nva-vm.name
      publisher = "Microsoft.Azure.Extensions"
      type = "CustomScript"
      type_handler_version = "2.0"

      settings = <<SETTINGS
    {
      "fileUris": [
        "https://raw.githubusercontent.com/mspnp/reference-architectures/master/scripts/linux/enable-ip-forwarding.sh"
      ],
      "commandToExecute": "bash enable-ip-forwarding.sh"
    }
    SETTINGS

      tags {
        environment = local.prefix-hub-nva
      }
    }

    resource "azurerm_route_table" "hub-gateway-rt" {
      name = "hub-gateway-rt"
      location = azurerm_resource_group.hub-nva-rg.location
      resource_group_name = azurerm_resource_group.hub-nva-rg.name
      disable_bgp_route_propagation = false

      route {
        name = "toHub"
        address_prefix = "10.0.0.0/16"
        next_hop_type = "VnetLocal"
      }

      route {
        name = "toSpoke1"
        address_prefix = "10.1.0.0/16"
        next_hop_type = "VirtualAppliance"
        next_hop_in_ip_address = "10.0.0.36"
      }

      route {
        name = "toSpoke2"
        address_prefix = "10.2.0.0/16"
        next_hop_type = "VirtualAppliance"
        next_hop_in_ip_address = "10.0.0.36"
      }

      tags {
        environment = local.prefix-hub-nva
      }
    }

    resource "azurerm_subnet_route_table_association" "hub-gateway-rt-hub-vnet-gateway-subnet" {
      subnet_id = azurerm_subnet.hub-gateway-subnet.id
      route_table_id = azurerm_route_table.hub-gateway-rt.id
      depends_on = ["azurerm_subnet.hub-gateway-subnet"]
    }

    resource "azurerm_route_table" "spoke1-rt" {
      name = "spoke1-rt"
      location = azurerm_resource_group.hub-nva-rg.location
      resource_group_name = azurerm_resource_group.hub-nva-rg.name
      disable_bgp_route_propagation = false

      route {
        name = "toSpoke2"
        address_prefix = "10.2.0.0/16"
        next_hop_type = "VirtualAppliance"
        next_hop_in_ip_address = "10.0.0.36"
      }

      route {
        name = "default"
        address_prefix = "0.0.0.0/0"
        next_hop_type = "vnetlocal"
      }

      tags {
        environment = local.prefix-hub-nva
      }
    }

    resource "azurerm_subnet_route_table_association" "spoke1-rt-spoke1-vnet-mgmt" {
      subnet_id = azurerm_subnet.spoke1-mgmt.id
      route_table_id = azurerm_route_table.spoke1-rt.id
      depends_on = ["azurerm_subnet.spoke1-mgmt"]
    }

    resource "azurerm_subnet_route_table_association" "spoke1-rt-spoke1-vnet-workload" {
      subnet_id = azurerm_subnet.spoke1-workload.id
      route_table_id = azurerm_route_table.spoke1-rt.id
      depends_on = ["azurerm_subnet.spoke1-workload"]
    }

    resource "azurerm_route_table" "spoke2-rt" {
      name = "spoke2-rt"
      location = azurerm_resource_group.hub-nva-rg.location
      resource_group_name = azurerm_resource_group.hub-nva-rg.name
      disable_bgp_route_propagation = false

      route {
        name = "toSpoke1"
        address_prefix = "10.1.0.0/16"
        next_hop_in_ip_address = "10.0.0.36"
        next_hop_type = "VirtualAppliance"
      }

      route {
        name = "default"
        address_prefix = "0.0.0.0/0"
        next_hop_type = "vnetlocal"
      }

      tags {
        environment = local.prefix-hub-nva
      }
    }

    resource "azurerm_subnet_route_table_association" "spoke2-rt-spoke2-vnet-mgmt" {
      subnet_id = azurerm_subnet.spoke2-mgmt.id
      route_table_id = azurerm_route_table.spoke2-rt.id
      depends_on = ["azurerm_subnet.spoke2-mgmt"]
    }

    resource "azurerm_subnet_route_table_association" "spoke2-rt-spoke2-vnet-workload" {
      subnet_id = azurerm_subnet.spoke2-workload.id
      route_table_id = azurerm_route_table.spoke2-rt.id
      depends_on = ["azurerm_subnet.spoke2-workload"]
    }
    ```

1. Save the file and exit the editor.

## <a name="next-steps"></a>Next steps

> [!div class="nextstepaction"]
> [Create spoke virtual networks with Terraform in Azure](./terraform-hub-spoke-spoke-network.md)
35.463504
444
0.640836
tur_Latn
0.617416
b992412fd5b84e6d9293eb1fe37ddf9c46689ec5
11,468
md
Markdown
docs/cl/azure.md
DatePerfect/typhoon
43aad1a267d306fa102da4fdb63bb31df310d85e
[ "MIT" ]
null
null
null
docs/cl/azure.md
DatePerfect/typhoon
43aad1a267d306fa102da4fdb63bb31df310d85e
[ "MIT" ]
null
null
null
docs/cl/azure.md
DatePerfect/typhoon
43aad1a267d306fa102da4fdb63bb31df310d85e
[ "MIT" ]
null
null
null
# Azure !!! danger Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check [errata](https://github.com/poseidon/typhoon/wiki/Errata) for known shortcomings. In this tutorial, we'll create a Kubernetes v1.13.3 cluster on Azure with Container Linux. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets. Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster. ## Requirements * Azure account * Azure DNS Zone (registered Domain Name or delegated subdomain) * Terraform v0.11.x and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally ## Terraform Setup Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system. ```sh $ terraform version Terraform v0.11.11 ``` Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name. ```sh wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.3.0/terraform-provider-ct-v0.3.0-linux-amd64.tar.gz tar xzf terraform-provider-ct-v0.3.0-linux-amd64.tar.gz mv terraform-provider-ct-v0.3.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.0 ``` Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`). ``` cd infra/clusters ``` ## Provider [Install](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) the Azure `az` command line tool to [authenticate with Azure](https://www.terraform.io/docs/providers/azurerm/authenticating_via_azure_cli.html). ``` az login ``` Configure the Azure provider in a `providers.tf` file. ```tf provider "azurerm" { version = "1.16.0" alias = "default" } provider "ct" { version = "0.3.0" } provider "local" { version = "~> 1.0" alias = "default" } provider "null" { version = "~> 1.0" alias = "default" } provider "template" { version = "~> 1.0" alias = "default" } provider "tls" { version = "~> 1.0" alias = "default" } ``` Additional configuration options are described in the `azurerm` provider [docs](https://www.terraform.io/docs/providers/azurerm/). ## Cluster Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`. ```tf module "azure-ramius" { source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.13.3" providers = { azurerm = "azurerm.default" local = "local.default" null = "null.default" template = "template.default" tls = "tls.default" } # Azure cluster_name = "ramius" region = "centralus" dns_zone = "azure.example.com" dns_zone_group = "example-group" # configuration ssh_authorized_key = "ssh-rsa AAAAB3Nz..." 
asset_dir = "/home/user/.secrets/clusters/ramius" # optional worker_count = 2 host_cidr = "10.0.0.0/20" } ``` Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/azure/container-linux/kubernetes/variables.tf) source. ## ssh-agent Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`. ```sh ssh-add ~/.ssh/id_rsa ssh-add -L ``` ## Apply Initialize the config directory if this is the first use with Terraform. ```sh terraform init ``` Plan the resources to be created. ```sh $ terraform plan Plan: 86 to add, 0 to change, 0 to destroy. ``` Apply the changes to create the cluster. ```sh $ terraform apply ... module.azure-ramius.null_resource.bootkube-start: Still creating... (6m50s elapsed) module.azure-ramius.null_resource.bootkube-start: Still creating... (7m0s elapsed) module.azure-ramius.null_resource.bootkube-start: Creation complete after 7m8s (ID: 3961816482286168143) Apply complete! Resources: 86 added, 0 changed, 0 destroyed. ``` In 4-8 minutes, the Kubernetes cluster will be ready. ## Verify [Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes. ``` $ export KUBECONFIG=/home/user/.secrets/clusters/ramius/auth/kubeconfig $ kubectl get nodes NAME STATUS ROLES AGE VERSION ramius-controller-0 Ready controller,master 24m v1.13.3 ramius-worker-000001 Ready node 25m v1.13.3 ramius-worker-000002 Ready node 24m v1.13.3 ``` List the pods. ``` $ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m kube-system flannel-bwf24 2/2 Running 2 26m kube-system flannel-ks5qb 2/2 Running 0 26m kube-system flannel-tq2wg 2/2 Running 0 26m kube-system kube-apiserver-hxgsx 1/1 Running 3 26m kube-system kube-controller-manager-5ff9cd7bb6-b942n 1/1 Running 0 26m kube-system kube-controller-manager-5ff9cd7bb6-bbr6w 1/1 Running 0 26m kube-system kube-proxy-j4vpq 1/1 Running 0 26m kube-system kube-proxy-jxr5d 1/1 Running 0 26m kube-system kube-proxy-lbdw5 1/1 Running 0 26m kube-system kube-scheduler-5f76d69686-s4fbx 1/1 Running 0 26m kube-system kube-scheduler-5f76d69686-vgdgn 1/1 Running 0 26m kube-system pod-checkpointer-cnqdg 1/1 Running 0 26m kube-system pod-checkpointer-cnqdg-ramius-controller-0 1/1 Running 0 25m ``` ## Going Further Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/). !!! note On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot. ## Variables Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/azure/container-linux/kubernetes/variables.tf) source. ### Required | Name | Description | Example | |:-----|:------------|:--------| | cluster_name | Unique cluster name (prepended to dns_zone) | "ramius" | | region | Azure region | "centralus" | | dns_zone | Azure DNS zone | "azure.example.com" | | dns_zone_group | Resource group where the Azure DNS zone resides | "global" | | ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." | | asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/ramius" | !!! 
tip Regions are shown in [docs](https://azure.microsoft.com/en-us/global-infrastructure/regions/) or with `az account list-locations --output table`. #### DNS Zone Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `ramius.azure.example.com`. You'll need a registered domain name or delegated subdomain on Azure DNS. You can set this up once and create many clusters with unique names. ```tf # Azure resource group for DNS zone resource "azurerm_resource_group" "global" { name = "global" location = "centralus" } # DNS zone for clusters resource "azurerm_dns_zone" "clusters" { resource_group_name = "${azurerm_resource_group.global.name}" name = "azure.example.com" zone_type = "Public" } ``` Reference the DNS zone with `"${azurerm_dns_zone.clusters.name}"` and its resource group with `"${azurerm_resource_group.global.name}"`. !!! tip "" If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Azure DNS (e.g. azure.mydomain.com) and [update nameservers](https://docs.microsoft.com/en-us/azure/dns/dns-delegate-domain-azure-dns). ### Optional | Name | Description | Default | Example | |:-----|:------------|:--------|:--------| | controller_count | Number of controllers (i.e. masters) | 1 | 1 | | worker_count | Number of workers | 1 | 3 | | controller_type | Machine type for controllers | "Standard_DS1_v2" | See below | | worker_type | Machine type for workers | "Standard_F1" | See below | | os_image | Channel for a Container Linux derivative | coreos-stable | coreos-stable, coreos-beta, coreos-alpha | | disk_size | Size of the disk in GB | "40" | "100" | | worker_priority | Set priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Low | | controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) | | worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) | | host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" | | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" | | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" | | cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" | Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier. !!! warning Unlike AWS and GCP, Azure requires its *virtual* networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Instead of each cluster just using `10.0.0.0/16` for instances, each Azure cluster's `host_cidr` must be non-overlapping (e.g. 10.0.0.0/20 for the 1st cluster, 10.0.16.0/20 for the 2nd cluster, etc). !!! warning Do not choose a `controller_type` smaller than `Standard_DS1_v2`. Smaller instances are not sufficient for running a controller. 
#### Low Priority

Add `worker_priority=Low` to use [Low Priority](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-use-low-priority) workers that run on Azure's surplus capacity at lower cost, but with the tradeoff that they can be deallocated at random. Low priority VMs are Azure's analog to AWS spot instances or GCP preemptible instances.
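The non-overlapping `host_cidr` warning above is easy to trip over when adding clusters. As a rough illustration (not part of the module), Python's standard `ipaddress` module can pre-carve a pool of non-overlapping /20 ranges; the cluster names below are placeholders:

```python
# Sketch: carve non-overlapping /20 host_cidr values for successive
# clusters out of 10.0.0.0/16, per the warning above.
from ipaddress import ip_network

pool = ip_network("10.0.0.0/16")
candidates = pool.subnets(new_prefix=20)  # 16 disjoint /20 ranges

for name, cidr in zip(["cluster1", "cluster2", "cluster3"], candidates):
    print(f'{name}: host_cidr = "{cidr}"')
# -> cluster1: host_cidr = "10.0.0.0/20"
#    cluster2: host_cidr = "10.0.16.0/20"
#    cluster3: host_cidr = "10.0.32.0/20"
```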
42.161765
415
0.695152
eng_Latn
0.741608
b992b5cc7fcf4b8e8b46b7608b2f59b67bc7faae
753
md
Markdown
docat/README.md
dinakar29/docat
5a5b3c64c77955d9a43030914fc6d807f6856570
[ "MIT" ]
194
2019-11-08T15:33:41.000Z
2021-10-16T16:14:00.000Z
docat/README.md
dinakar29/docat
5a5b3c64c77955d9a43030914fc6d807f6856570
[ "MIT" ]
88
2019-11-08T15:30:00.000Z
2021-10-16T16:20:06.000Z
docat/README.md
lukasweber/docat
bbd1a77ef89a20951209845883816acddb3413b5
[ "MIT" ]
21
2019-11-08T18:00:04.000Z
2021-10-02T13:46:02.000Z
# docat backend

The backend hosts the documentation and an API to push documentation and tag versions of it.

## development environment

You will need to install [poetry](https://python-poetry.org/docs/#installation): `pip install poetry==1.1.5`.

Install the dependencies and run the application:

```sh
# install dependencies
poetry install

# run the app
[DOCAT_SERVE_FILES=1] [FLASK_DEBUG=1] [PORT=8888] poetry run python -m docat
```

### Config Options

* **DOCAT_SERVE_FILES**: Serve static documentation instead of nginx (for testing)
* **DOCAT_DOC_PATH**: Upload directory for static files (needs to match the nginx config)
* **FLASK_DEBUG**: Start flask in debug mode

## Usage

See [getting-started.md](../doc/getting-started.md)
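For reference, here's a minimal sketch of how flags like the above are typically consumed in a Flask app. The variable names come from this README; the parsing logic and the fallback path are assumptions for illustration, not docat's actual code:

```python
# Minimal sketch: read the config flags documented above from the
# environment. DOCAT_DOC_PATH's default here is hypothetical.
import os

serve_files = os.environ.get("DOCAT_SERVE_FILES", "0") == "1"
doc_path = os.environ.get("DOCAT_DOC_PATH", "/var/docat/doc")  # hypothetical default
flask_debug = os.environ.get("FLASK_DEBUG", "0") == "1"
port = int(os.environ.get("PORT", "8888"))

print(f"serve_files={serve_files}, doc_path={doc_path}, "
      f"debug={flask_debug}, port={port}")
```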
26.892857
108
0.754316
eng_Latn
0.841167
b992c6e0659cdf147acdfd869e0a46ad8585d848
3,269
md
Markdown
azps-6.2.0/Az.Network/New-AzApplicationGatewayIdentity.md
carsonruebel/azure-docs-powershell
f067c0254dce93665a97eebbba5bfde8694226fb
[ "CC-BY-4.0", "MIT" ]
126
2019-01-26T06:47:25.000Z
2022-03-21T20:24:45.000Z
azps-6.2.0/Az.Network/New-AzApplicationGatewayIdentity.md
carsonruebel/azure-docs-powershell
f067c0254dce93665a97eebbba5bfde8694226fb
[ "CC-BY-4.0", "MIT" ]
1,140
2019-01-17T02:44:36.000Z
2022-03-31T22:16:36.000Z
azps-6.2.0/Az.Network/New-AzApplicationGatewayIdentity.md
carsonruebel/azure-docs-powershell
f067c0254dce93665a97eebbba5bfde8694226fb
[ "CC-BY-4.0", "MIT" ]
217
2019-01-18T00:49:16.000Z
2022-03-21T20:24:48.000Z
---
external help file: Microsoft.Azure.PowerShell.Cmdlets.Network.dll-Help.xml
Module Name: Az.Network
online version: https://docs.microsoft.com/powershell/module/az.network/new-azapplicationgatewayidentity
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/New-AzApplicationGatewayIdentity.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Network/Network/help/New-AzApplicationGatewayIdentity.md
---

# New-AzApplicationGatewayIdentity

## SYNOPSIS
Creates an identity object for an application gateway. This will hold a reference to the user assigned identity.

## SYNTAX

```
New-AzApplicationGatewayIdentity -UserAssignedIdentityId <String> [-DefaultProfile <IAzureContextContainer>]
 [-WhatIf] [-Confirm] [<CommonParameters>]
```

## DESCRIPTION
The **New-AzApplicationGatewayIdentity** cmdlet creates an application gateway identity object.

## EXAMPLES

### Example 1
```powershell
PS C:\> $identity = New-AzUserAssignedIdentity -Name $identityName -ResourceGroupName $rgName -Location $location
PS C:\> $appgwIdentity = New-AzApplicationGatewayIdentity -UserAssignedIdentity $identity.Id
PS C:\> $gateway = New-AzApplicationGateway -Name "AppGateway01" -ResourceGroupName "ResourceGroup01" -Location "West US" -Identity $appgwIdentity <..>
```

In this example, we create a user assigned identity and then reference it in the identity object used with the Application Gateway.

## PARAMETERS

### -DefaultProfile
The credentials, account, tenant, and subscription used for communication with Azure.

```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer
Parameter Sets: (All)
Aliases: AzContext, AzureRmContext, AzureCredential

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -UserAssignedIdentityId
ResourceId of the user assigned identity to be assigned to the Application Gateway.

```yaml
Type: System.String
Parameter Sets: (All)
Aliases: UserAssignedIdentity

Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### -Confirm
Prompts you for confirmation before running the cmdlet.

```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: cf

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -WhatIf
Shows what would happen if the cmdlet runs. The cmdlet is not run.

```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: wi

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### System.String

## OUTPUTS

### Microsoft.Azure.Commands.Network.Models.PSManagedServiceIdentity

## NOTES

## RELATED LINKS
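As a loosely related aside, a similar identity assignment can be sketched with the Azure SDK for Python. The model shape below is an assumption based on the `azure-mgmt-network` track 2 SDK and may differ between versions; treat it as an illustration, not a supported recipe:

```python
# Rough Python-SDK analog of the PowerShell example above: attach a
# user-assigned identity to an existing application gateway. Resource
# names mirror Example 1; the identity dict shape is an assumption.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

identity_id = (
    "/subscriptions/<sub>/resourceGroups/ResourceGroup01/providers/"
    "Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>"
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
gateway = client.application_gateways.get("ResourceGroup01", "AppGateway01")
gateway.identity = {
    "type": "UserAssigned",
    "user_assigned_identities": {identity_id: {}},
}
client.application_gateways.begin_create_or_update(
    "ResourceGroup01", "AppGateway01", gateway).result()
```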
28.929204
314
0.795044
yue_Hant
0.587556
b992cd727f3a2796649bd4d8e1fa1a939cb9ef74
39
md
Markdown
README.md
tehfedaykin/HackingYourWorkLife__Balance
4a6e4d2ca324645c15c257d5045d4c73a43325a4
[ "MIT" ]
9
2018-07-15T00:14:54.000Z
2020-05-09T17:31:46.000Z
README.md
tehfedaykin/HackingYourWorkLife__Balance
4a6e4d2ca324645c15c257d5045d4c73a43325a4
[ "MIT" ]
5
2020-07-07T19:24:27.000Z
2022-02-12T02:45:36.000Z
README.md
tehfedaykin/NoSnowDaysWhenYouWorkRemote
2e653d00090b268a329dda4049687797df66ffe5
[ "MIT" ]
1
2018-11-20T14:57:29.000Z
2018-11-20T14:57:29.000Z
# Hacking Your Work Life Blank Balance
19.5
38
0.794872
eng_Latn
0.676439
b993389c95b93ca80a689e3a6193f65d278c8b0d
1,246
md
Markdown
docs/description/Performance_Count.md
Booster-Apps/codacy-rubocop
22694000eca23fbd2b32d4d9cd0079b0028a0c6e
[ "Apache-2.0" ]
3
2019-08-15T17:54:27.000Z
2021-05-11T22:03:33.000Z
docs/description/Performance_Count.md
Booster-Apps/codacy-rubocop
22694000eca23fbd2b32d4d9cd0079b0028a0c6e
[ "Apache-2.0" ]
95
2015-10-21T10:37:30.000Z
2022-03-28T22:20:25.000Z
docs/description/Performance_Count.md
Booster-Apps/codacy-rubocop
22694000eca23fbd2b32d4d9cd0079b0028a0c6e
[ "Apache-2.0" ]
13
2016-03-23T15:17:46.000Z
2021-07-20T20:25:54.000Z
This cop is used to identify usages of `count` on an `Enumerable` that follow calls to `select`, `find_all`, `filter` or `reject`. Querying logic can instead be passed to the `count` call. `ActiveRecord` compatibility: `ActiveRecord` will ignore the block that is passed to `count`. Other methods, such as `select`, will convert the association to an array and then run the block on the array. A simple work around to make `count` work with a block is to call `to_a.count {...}`. Example: `Model.where(id: [1, 2, 3]).select { |m| m.method == true }.size` becomes: `Model.where(id: [1, 2, 3]).to_a.count { |m| m.method == true }` # Examples ```ruby # bad [1, 2, 3].select { |e| e > 2 }.size [1, 2, 3].reject { |e| e > 2 }.size [1, 2, 3].select { |e| e > 2 }.length [1, 2, 3].reject { |e| e > 2 }.length [1, 2, 3].select { |e| e > 2 }.count { |e| e.odd? } [1, 2, 3].reject { |e| e > 2 }.count { |e| e.even? } array.select(&:value).count # good [1, 2, 3].count { |e| e > 2 } [1, 2, 3].count { |e| e < 2 } [1, 2, 3].count { |e| e > 2 && e.odd? } [1, 2, 3].count { |e| e < 2 && e.even? } Model.select('field AS field_one').count Model.select(:value).count ``` [Source](http://www.rubydoc.info/gems/rubocop/RuboCop/Cop/Performance/Count)
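For comparison, the same single-pass principle expressed in Python (illustration only; the cop itself applies to Ruby):

```python
# Count matches in one pass instead of materializing the filtered
# collection first, mirroring the select { ... }.size vs count { ... }
# distinction above.
data = [1, 2, 3]

# analogous to select { |e| e > 2 }.size - builds a throwaway list
n_slow = len([e for e in data if e > 2])

# analogous to count { |e| e > 2 } - single pass, no intermediate list
n_fast = sum(1 for e in data if e > 2)

assert n_slow == n_fast == 1
```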
31.15
89
0.614767
eng_Latn
0.960294
b9935b9f6a4d7a00461ead57745c8bd6a2fc389f
1,067
md
Markdown
README.md
colin-nolan/fail2ban-ansible-modules
e20f61f8a1eebffbc113ba8f86358096e1b48faf
[ "MIT" ]
null
null
null
README.md
colin-nolan/fail2ban-ansible-modules
e20f61f8a1eebffbc113ba8f86358096e1b48faf
[ "MIT" ]
null
null
null
README.md
colin-nolan/fail2ban-ansible-modules
e20f61f8a1eebffbc113ba8f86358096e1b48faf
[ "MIT" ]
null
null
null
[![Build Status](https://travis-ci.org/colin-nolan/fail2ban-ansible-modules.svg?branch=master)](https://travis-ci.org/colin-nolan/fail2ban-ansible-modules) [![codecov](https://codecov.io/gh/colin-nolan/fail2ban-ansible-modules/branch/master/graph/badge.svg)](https://codecov.io/gh/colin-nolan/fail2ban-ansible-modules) # Fail2ban Ansible Modules _Ansible modules for configuring Fail2ban_ ## Requirements - Python 3.5+ - Ansible 2.5+ ## Modules ### Jails The `fail2ban_jail.py` module manages Fail2ban jails. #### Examples ##### Add Jail ```yaml - name: add ssh jail fail2ban_jail: name: ssh enabled: true port: ssh filter: sshd logpath: /var/log/auth.log maxretry: 6 notify: restart_fail2ban ``` Note: `enabled: false` does not remove the jail's configuration file. See [Remove Jail](#remove-jail) for details on how to do this. ##### Remove Jail ```yaml - name: remove ssh jail fail2ban_jail: name: ssh present: false jail_directory: /etc/fail2ban/jail.d notify: restart_fail2ban ``` ## License [MIT](LICENSE.txt).
24.813953
162
0.714152
eng_Latn
0.358884
b9940a5524f30700c081c63f490f48bd2091bc64
538
md
Markdown
.cache/typescript/3.9/node_modules/@types/jsbn/README.md
Andersonlima1/allflix0
9403c0a2995bf9930daa795b12aafc527fcf895b
[ "MIT" ]
null
null
null
.cache/typescript/3.9/node_modules/@types/jsbn/README.md
Andersonlima1/allflix0
9403c0a2995bf9930daa795b12aafc527fcf895b
[ "MIT" ]
2
2021-03-11T04:07:09.000Z
2022-02-27T09:28:21.000Z
.cache/typescript/3.9/node_modules/@types/jsbn/README.md
Andersonlima1/allflix0
9403c0a2995bf9930daa795b12aafc527fcf895b
[ "MIT" ]
null
null
null
# Installation > `npm install --save @types/jsbn` # Summary This package contains type definitions for jsbn (http://www-cs-students.stanford.edu/%7Etjw/jsbn/). # Details Files were exported from https://www.github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/jsbn Additional Details * Last updated: Sat, 04 Nov 2017 05:34:19 GMT * Dependencies: none * Global values: jsbn # Credits These definitions were written by Eugene Chernyshov <https://github.com/Evgenus>, Al Tabayoyon <https://github.com/al2xed>.
31.647059
124
0.741636
yue_Hant
0.384691
b99483a02e9cdb353b4790ba3548c5815a3dcdbf
36,738
md
Markdown
Exchange-Deployments-2013/exchange-server-hybrid-deployments-exchange-2013-help.md
v-kents/OfficeDocs-Exchange-Test-pr.ru-ru
369e193682e71d5edffee0d10a840b4967668b93
[ "CC-BY-4.0", "MIT" ]
4
2018-07-20T08:47:21.000Z
2021-05-26T10:59:17.000Z
Exchange-Deployments-2013/exchange-server-hybrid-deployments-exchange-2013-help.md
v-kents/OfficeDocs-Exchange-Test-pr.ru-ru
369e193682e71d5edffee0d10a840b4967668b93
[ "CC-BY-4.0", "MIT" ]
24
2018-06-19T08:37:04.000Z
2018-09-26T16:37:08.000Z
Exchange-Deployments-2013/exchange-server-hybrid-deployments-exchange-2013-help.md
v-kents/OfficeDocs-Exchange-Test-pr.ru-ru
369e193682e71d5edffee0d10a840b4967668b93
[ "CC-BY-4.0", "MIT" ]
12
2018-06-19T07:21:50.000Z
2021-11-15T11:19:10.000Z
--- title: 'Гибридные развертывания Exchange Server: Exchange 2013 Help' TOCTitle: '@NoTitle' ms:assetid: 59e32000-4fcf-417f-a491-f1d8f9aeef9b ms:mtpsurl: https://technet.microsoft.com/ru-ru/library/JJ200581(v=EXCHG.150) ms:contentKeyID: 50489594 ms.date: 05/05/2018 mtps_version: v=EXCHG.150 ms.translationtype: HT --- # Гибридные развертывания Exchange Server   _<strong>Применимо к:</strong>Exchange Online, Exchange Server 2013, Exchange Server 2016_ _<strong>Последнее изменение раздела:</strong>2018-04-16_ **Сводка**. Что необходимо знать для планирования гибридного развертывания Exchange. Гибридное развертывание позволяет организациям распространять многофункциональные возможности администрирования с существующего локального развертывания Microsoft Exchange на облако. Гибридное развертывание обеспечивает единую среду с общими функциональными возможностями Exchange для локальной организации Exchange и Exchange Online Online в Microsoft Office 365. Кроме того, гибридное развертывание позволяет в любое время и без проблем выполнить полный переход к использованию организации Exchange Online. **Содержание** Возможности гибридного развертывания Exchange Рекомендации по гибридному развертыванию Exchange Компоненты гибридного развертывания Exchange Пример гибридного развертывания Exchange Факторы, которые необходимо учитывать перед настройкой гибридного развертывания Exchange Основные термины Документация по гибридному развертыванию Exchange ## Возможности гибридного развертывания Exchange Гибридное развертывание обеспечивает следующие возможности: - Обеспечение безопасной маршрутизации почты между локальной организацией Exchange и организацией Exchange Online. - Маршрутизация с общим доменным пространством имен. Например, локальная организация и организация Exchange Online используют домен SMTP @contoso.com. - Единый глобальный список адресов (GAL), или общая адресная книга. - Обмен сведениями о доступности и данными календаря между локальной организацией Exchange и организацией Exchange Online. - Централизованное управление исходящим и входящим потоком почты. Можно настроить маршрутизацию всех входящих и исходящих сообщений Exchange Online через локальную организацию Exchange. - Один URL-адрес Outlook в Интернете для локальной организации и организации Exchange Online. - Возможность перемещения существующих локальных почтовых ящиков в организацию Exchange Online. При необходимости можно переместить почтовые ящики Exchange Online обратно в локальную организацию. - Централизованное управление почтовыми ящиками с помощью Центра администрирования (EAC) локальной Exchange. - Отслеживание сообщений, подсказки и поиск в нескольких почтовых ящиках в локальной организации Exchange и организации Exchange Online. - Архивация сообщений в облаке для локальных почтовых ящиков Exchange. Exchange Online Archiving может использоваться при гибридном развертывании. Дополнительные сведения об Exchange Online Archiving см. в статье [Дополнительные службы Microsoft Office 365](https://go.microsoft.com/fwlink/p/?linkid=233231). ## Рекомендации по гибридному развертыванию Exchange Перед реализацией гибридного развертывания Exchange необходимо рассмотреть следующие вопросы: - **Требования к гибридному развертыванию** Прежде чем настраивать гибридное развертывание, убедитесь, что ваша локальная организация соответствует всем требованиям для успешного развертывания. Дополнительные сведения см. в разделе [Предварительные условия для гибридного развертывания](hybrid-deployment-prerequisites-exchange-2013-help.md). 
- **Клиенты Exchange ActiveSync**. При переносе почтового ящика из локальной организации Exchange в Exchange Online необходимо обновить все клиенты, имеющие к нему доступ, в том числе устройства Exchange ActiveSync, чтобы они могли использовать Exchange Online. При перемещении почтового ящика в Exchange Online большинство клиентов Exchange ActiveSync будут автоматически перенастроены, но некоторые старые устройства могут обновиться неправильно. Дополнительные сведения см. в статье [Параметры устройств Exchange ActiveSync при гибридных развертываниях Exchange](exchange-activesync-device-settings-with-exchange-hybrid-deployments-exchange-2013-help.md). - **Перенос разрешений почтовых ящиков**. Разрешения локальных почтовых ящиков, такие как "Отправить как", "Полный доступ", "Отправить от имени", а также разрешения для папок, которые явно применяются к почтовому ящику, переносятся в Exchange Online. При этом не переносятся унаследованные (неявные) разрешения почтовых ящиков и разрешения, предоставленные объектам, для которых не включена поддержка почты в Exchange Online. Перед переносом необходимо убедиться, что все разрешения предоставлены в явном виде, а для всех объектов включена поддержка почты. Поэтому необходимо запланировать настройку этих разрешений в Office 365, если это применимо к вашей организации. В случае разрешений "Отправить как", если пользователь и отправляемый ресурс перемещаются не одновременно, необходимо явно добавить разрешение "Отправить как" в Exchange Online с помощью командлета **Add-RecipientPermission**. - **Поддержка разрешений для почтовых ящиков в гибридном развертывании.** Почтовые ящики в локальной организации Exchange могут предоставлять разрешения почтовым ящикам в Office 365 на полный доступ и отправку от имени, и наоборот. Для предоставления разрешений "Отправить как" требуются дополнительные действия. Кроме того, дополнительная настройка может потребоваться для поддержки разрешений в гибридном развертывании, в зависимости от установленной в локальной организации версии Exchange. Дополнительные сведения см. в разделе [Делегирование разрешений для почтовых ящиков](permissions-in-exchange-hybrid-deployments-exchange-2013-help.md) статьи [Разрешения при гибридных развертываниях Exchange](permissions-in-exchange-hybrid-deployments-exchange-2013-help.md) и в статье [Настройка Exchange для поддержки делегированных разрешений почтовых ящиков в гибридном развертывании](configure-exchange-to-support-delegated-mailbox-permissions-in-a-hybrid-deployment-exchange-2013-help.md). > [!NOTE] > В феврале 2018 г. функция поддержки разрешений для папок, а также разрешений &quot;Полный доступ&quot; и &quot;Отправить от имени&quot; в нескольких лесах подготавливается. Ее выпуск планируется на апрель 2018 г. - **Исходящая миграция**.  В процессе текущего управления получателем вам может потребоваться переместить почтовые ящики Exchange Online назад в локальную среду. Дополнительные сведения о том, как перемещать почтовые ящики в гибридной среде Exchange 2010, см. в разделе [Перемещение почтовых ящиков Exchange Online в локальную организацию](https://technet.microsoft.com/ru-ru/library/hh882527\(v=exchg.150\)). Дополнительные сведения о том, как перемещать почтовые ящики в гибридных развертываниях Exchange 2013 или более поздних версий, см. 
в разделе [Перемещение почтовых ящиков между локальными организациями и организациями Exchange Online в случаях гибридного развертывания](move-mailboxes-between-on-premises-and-exchange-online-organizations-in-hybrid-deployments-exchange-2013-help.md). - **Параметры переадресации для почтовых ящиков.** Почтовые ящики можно настраивать на автоматическую переадресацию отправляемых им писем в другой почтовый ящик. В Exchange Online поддерживается переадресация почтовых ящиков, но конфигурация переадресации не копируется в Exchange Online при переносе почтового ящика в эту среду. Прежде чем переносить почтовый ящик в Exchange Online, обязательно экспортируйте конфигурацию переадресации для каждого почтового ящика. Конфигурация переадресации хранится в свойствах `DeliverToMailboxAndForward`, `ForwardingAddress` и `ForwardingSmtpAddress` каждого почтового ящика. ## Компоненты гибридного развертывания Exchange Гибридное развертывание предполагает использование различных служб и компонентов. - **Серверы Exchange**. Для настройки гибридного развертывания в локальной организации необходимо настроить по крайней мере один сервер Exchange. Если вы используете Exchange 2013 или более ранней версии, необходимо установить по крайней мере один сервер с ролями сервера почтовых ящиков и клиентского доступа. Если вы используете Exchange 2016 или более поздней версии, необходимо установить по крайней мере один сервер с ролью сервера почтовых ящиков. При необходимости пограничные транспортные серверы Exchange также можно установить в сети периметра для поддержки защищенного потока почты Office 365. > [!NOTE] > Установка серверов Exchange с ролью сервера почтовых ящиков или клиентского доступа в сети периметра не поддерживается. - **Microsoft Office 365**. Подписка на службу Office 365 включает организацию Exchange Online. Организациям, настраивающим гибридную среду, необходимо приобрести лицензию для каждого почтового ящика, который создается в организации Exchange Online или переносится в нее. - **Мастер настройки гибридной конфигурации**Exchange включает мастер настройки гибридной конфигурации, который упрощает настройку гибридного развертывания для локальной организации Exchange и организации Exchange Online. Дополнительные сведения см. в разделе [Мастер гибридной конфигурации](hybrid-configuration-wizard-exchange-2013-help.md). - **Система проверки подлинности Azure AD**Система проверки подлинности Azure Active Directory (AD) — это бесплатная облачная служба, которая выступает в качестве брокера доверия между локальной организацией Exchange 2016 и организацией Exchange Online. Локальные организации, которые настраивают гибридное развертывание, должны иметь доверие федерации с системой проверки подлинности Azure AD. Доверие федерации можно создать вручную, в рамках настройки федеративного общего доступа между локальной организацией Exchange и другими федеративными организациями Exchange, или в рамках настройки гибридного развертывания с помощью мастера гибридной конфигурации. Доверие федерации с системой проверки подлинности Azure AD для клиента Office 365 настраивается автоматически при активации учетной записи службы Office 365. Дополнительные сведения см. в статье, посвященной [системе проверки подлинности Azure AD](https://go.microsoft.com/fwlink/p/?linkid=135986). - **Синхронизация Azure Active Directory**. 
Azure AD synchronization, using Azure AD Connect, replicates on-premises Active Directory data for mail-enabled objects to the Office 365 organization to support a unified global address list (GAL) and user authentication. Organizations configuring a hybrid deployment need to deploy Azure AD Connect on a separate, on-premises server to synchronize their on-premises Active Directory with Office 365. For more information, see the [Azure AD Connect overview](https://go.microsoft.com/fwlink/p/?linkid=203007).

## Exchange hybrid deployment example

Consider the following scenario, an example topology of a typical Exchange 2016 deployment. Contoso Ltd. is a single-forest, single-domain organization with two domain controllers and one Exchange 2016 server installed. Remote Contoso users use Outlook on the web to connect to Exchange 2016 over the Internet to check their mail and view their Outlook calendars.

![On-premises Exchange deployment before a hybrid deployment is configured with Office 365](images/JJ200581.dad133ae-d18a-42ec-8f0a-dd1de391200e(EXCHG.150).png "On-premises Exchange deployment before a hybrid deployment is configured with Office 365")

Suppose you're the network administrator for Contoso and you're interested in configuring a hybrid deployment. You've deployed and configured the required Azure AD Connect server, and you've decided to use the Azure AD Connect password synchronization feature so that users can use the same credentials for their on-premises network account and their Office 365 account. After you complete the hybrid deployment prerequisites and choose hybrid deployment options with the Hybrid Configuration wizard, the new topology has the following configuration:

- Users use the same username and password to sign in to both the on-premises organization and the Exchange Online organization ("single sign-on").
- User mailboxes located in the on-premises and Exchange Online organizations use the same email address domain. For example, mailboxes located both on-premises and in Exchange Online use @contoso.com in their users' email addresses.
- All outbound mail is delivered to the Internet by the on-premises organization. The on-premises organization controls all messaging transport and serves as a relay for the Exchange Online organization ("centralized mail transport").
- On-premises and Exchange Online organization users can share free/busy calendar information with each other. The organization relationships configured for both organizations also enable cross-organization message tracking, MailTips, and message search.
- On-premises and Exchange Online users use the same URL to connect to their mailboxes over the Internet.
![On-premises Exchange deployment after a hybrid deployment is configured with Office 365](images/JJ200581.e8681849-f15d-4d0e-b77e-6105b6096c4b(EXCHG.150).png "On-premises Exchange deployment after a hybrid deployment is configured with Office 365")

If you compare Contoso's existing configuration with the hybrid deployment configuration, you can see that configuring a hybrid deployment added servers and services that support additional features and connectivity between the on-premises and Exchange Online organizations.

Here's an overview of how the original on-premises Exchange organization changed as a result of deploying the hybrid scenario.

<table>
<colgroup>
<col style="width: 33%" />
<col style="width: 33%" />
<col style="width: 33%" />
</colgroup>
<thead>
<tr class="header">
<th>Configuration</th>
<th>Before the hybrid deployment</th>
<th>After the hybrid deployment</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p>Mailbox location</p></td>
<td><p>Mailboxes on-premises only.</p></td>
<td><p>Mailboxes on-premises and in Office 365.</p></td>
</tr>
<tr class="even">
<td><p>Message transport</p></td>
<td><p>On-premises Mailbox servers handle all inbound and outbound message routing.</p></td>
<td><p>On-premises Mailbox servers handle internal message routing between the on-premises and Office 365 organizations.</p></td>
</tr>
<tr class="odd">
<td><p>Outlook on the web</p></td>
<td><p>On-premises Mailbox servers receive all Outlook on the web requests and display mailbox information.</p></td>
<td><p>On-premises Mailbox servers redirect Outlook on the web requests to on-premises Exchange 2016 Mailbox servers or provide a link for signing in to the Office 365 organization.</p></td>
</tr>
<tr class="even">
<td><p>Unified global address list for both organizations</p></td>
<td><p>Not applicable; single organization only.</p></td>
<td><p>The on-premises Active Directory synchronization server replicates Active Directory data for mail-enabled objects to Office 365.</p></td>
</tr>
<tr class="odd">
<td><p>Single sign-on used in both organizations</p></td>
<td><p>Not applicable; single organization only.</p></td>
<td><p>Mailboxes located in the on-premises Active Directory and in Office 365 use the same username and password.</p></td>
</tr>
<tr class="even">
<td><p>Organization relationship established and a federation trust with the Azure AD authentication system</p></td>
<td><p>A trust with the Azure AD authentication system and organization relationships with other federated Exchange organizations can be configured.</p></td>
<td><p>A trust with the Azure AD authentication system is required. Organization relationships are established between the on-premises organization and Office 365.</p></td>
</tr>
<tr class="odd">
<td><p>Free/busy sharing</p></td>
<td><p>Free/busy sharing between on-premises users only.</p></td>
<td><p>Free/busy sharing between both on-premises and Office 365 users.</p></td>
</tr>
</tbody>
</table>

## Things to consider before configuring a hybrid deployment

Now that you've had a brief introduction to hybrid deployments, there are some important issues to consider in more detail. Deploying a hybrid scenario can affect multiple aspects of how your current network and Exchange organization operate.
## Directory synchronization and single sign-on

A hybrid deployment requires Active Directory synchronization between the on-premises organization and the Office 365 organization, performed every three hours by a server running Azure Active Directory Connect. Directory synchronization lets recipients in either organization see each other in the global address list. It also synchronizes usernames and passwords, which lets users sign in to both the on-premises organization and the Office 365 organization with the same credentials.

> [!NOTE]
> If you configure Azure AD Connect with Active Directory Federation Services (AD FS), on-premises usernames and passwords are still synchronized with Office 365 by default. However, the primary authentication method is authentication against the on-premises Active Directory through AD FS. If AD FS can't reach the on-premises Active Directory for any reason, clients will try to authenticate using the usernames and passwords that were synchronized with Office 365.

By default, all Azure Active Directory and Office 365 tenants are limited to 50,000 objects (users, mail-enabled contacts, and groups). This limit determines how many objects you can create in your Office 365 organization. After you verify your first domain, the limit is automatically increased to 300,000 objects. If you have a verified domain and need to synchronize more than 300,000 objects, or if you have no domains to verify and need to synchronize more than 50,000 objects, submit a request to Azure Active Directory support to increase your object quota.

If you configure AD FS, you also need to deploy a Web Application Proxy server in addition to the Azure AD Connect server. The Web Application Proxy server needs to be placed in a perimeter network, where it acts as an intermediary between the internal Azure AD Connect server and the Internet. It must accept connections from clients and servers on the Internet over TCP port 443.

## Hybrid deployment management

You manage a hybrid deployment in Exchange 2016 through a single unified management console that lets you manage both the on-premises organization and the Exchange Online organization. The *Exchange admin center* (EAC), which replaces the Exchange Management Console and the Exchange Control Panel, lets you connect to and configure features for both organizations. The first time you run the Hybrid Configuration wizard, you're prompted to connect to the Exchange Online organization. To connect the EAC to Exchange Online, you need to use an Office 365 account that's a member of the Organization Management role group.

## Certificates

Secure Sockets Layer (SSL) digital certificates play a significant role in a hybrid deployment. They secure communications between the on-premises hybrid server and the Exchange Online organization. Certificates are a requirement for configuring several types of services. If your Exchange organization already uses digital certificates, you might need to modify them to include additional domains, or purchase additional certificates from a trusted certification authority. If you don't already use certificates, you need to purchase one or more certificates from a trusted certification authority. For more information, see 
[Certificate requirements for hybrid deployments](certificate-requirements-for-hybrid-deployments-exchange-2013-help.md).

## Bandwidth

Your network connection to the Internet directly affects how well communication between your on-premises organization and the Office 365 organization performs. This is particularly true when moving mailboxes from your on-premises Exchange 2016 server to the Office 365 organization. How long a mailbox move takes depends on the available network bandwidth, as well as on the number of mailboxes moved in parallel and their sizes. In addition, other Office 365 services, such as SharePoint Server 2016 and Skype for Business, may also affect the network bandwidth available to messaging services. Before moving mailboxes to Office 365, you should do the following (a worked example of the arithmetic follows this list):

- Determine the average size of the mailboxes that will be moved to Office 365.
- Determine the average connection and throughput speed between your on-premises organization and the Internet.
- Calculate the expected average data transfer rate and plan your mailbox moves accordingly.

For more information, see [Networking](https://go.microsoft.com/fwlink/p/?linkid=280178).
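As a rough back-of-the-envelope sketch of that planning step, all numbers below are hypothetical and should be replaced with your own measurements:

```powershell
# Estimate total migration time from average mailbox size and measured throughput.
$mailboxCount  = 500      # mailboxes to move (hypothetical)
$avgMailboxGB  = 2.0      # average mailbox size in GB (hypothetical)
$effectiveMbps = 50       # measured effective throughput to Office 365 (hypothetical)

$totalGB    = $mailboxCount * $avgMailboxGB        # 1000 GB of data in this example
$totalMbits = $totalGB * 8 * 1024                  # convert GB to megabits
$hours      = [math]::Round($totalMbits / $effectiveMbps / 3600, 1)

"Estimated transfer time: $hours hours"            # ~45.5 hours in this example
```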
## Unified Messaging

Unified Messaging (UM) is supported in a hybrid deployment between your on-premises organization and Office 365. Your on-premises telephony solution must be able to communicate with Office 365, which may require purchasing additional hardware and software. If you want to move UM-enabled mailboxes from your on-premises organization to Office 365, you need to configure UM in your hybrid deployment before moving them. If you move the mailboxes before configuring UM in the hybrid deployment, they won't have access to UM features. For more information, see [Set up Unified Messaging in a hybrid deployment](https://go.microsoft.com/fwlink/p/?linkid=842271).

## Information Rights Management

Information Rights Management (IRM) lets users apply Active Directory Rights Management Services (AD RMS) templates to the messages they send. AD RMS templates help prevent information leakage by controlling who can access a rights-protected message and what they can do with it. IRM in a hybrid deployment requires planning, manual configuration of the Office 365 organization, and an understanding of how clients use AD RMS servers depending on whether a mailbox is located on-premises or in Exchange Online. For more information, see [IRM in Exchange hybrid deployments](irm-in-exchange-hybrid-deployments-exchange-2013-help.md).

## Mobile devices

Mobile devices are supported in a hybrid deployment. If Exchange ActiveSync is already enabled on your existing servers, they will continue to redirect requests from mobile devices to mailboxes located on the on-premises Mailbox server. For mobile devices connecting to existing mailboxes that are moved from the on-premises organization to Office 365, most Exchange ActiveSync profiles will be updated automatically to connect to Office 365. All mobile devices that support Exchange ActiveSync should be compatible with a hybrid deployment. For more information, see [Mobile phones](https://go.microsoft.com/fwlink/p/?linkid=206387).

## Client requirements

We recommend using Outlook 2016 or Outlook 2013 clients in a hybrid deployment. Clients older than Outlook 2010 aren't supported in hybrid deployments or with Office 365.

## Licensing for Office 365

To create mailboxes in, or move mailboxes to, the Office 365 organization, you need to sign up for Office 365 for enterprises and have the required number of licenses. When you sign up for Office 365, you receive a specific number of licenses that you can assign to new mailboxes or to mailboxes moved from the on-premises organization. Each mailbox in Office 365 must have a license.

## Antivirus and anti-spam services

Mailboxes in Office 365 are automatically protected against viruses and spam by Exchange Online Protection (EOP). If you choose to route all inbound Internet mail through EOP, you may need to purchase additional EOP licenses for your on-premises users. We recommend that you carefully evaluate whether EOP protection in Office 365 meets the needs of your on-premises organization. If you already have protection in place on-premises, you may need to upgrade or configure your on-premises antivirus and anti-spam solutions for maximum protection across your organization. For more information, see [Anti-spam and anti-malware protection](https://technet.microsoft.com/ru-ru/library/jj200731\(v=exchg.150\)).

## Public folders

Public folders are supported in Office 365, and on-premises public folders can also be moved there. Public folders in Office 365 can likewise be moved to the on-premises Exchange 2016 organization. Public folders in either organization are accessible to on-premises and Office 365 users in Outlook on the web, Outlook 2016, Outlook 2013, and Outlook 2010 SP2 or later. The existing on-premises public folder configuration, and access for on-premises mailboxes, doesn't change when you configure a hybrid deployment. For more information, see [Public folders](https://technet.microsoft.com/ru-ru/library/jj150538\(v=exchg.150\)).

## Accessibility

For information about keyboard shortcuts that may apply to the procedures in this checklist, see [Keyboard shortcuts in the Exchange admin center](https://technet.microsoft.com/ru-ru/library/jj150484\(v=exchg.150\)).

## Key terminology

The following list provides definitions of the core components associated with hybrid deployments in Exchange 2013.

- **Centralized mail transport**: The hybrid configuration option in which all inbound and outbound Exchange Online messages to and from the Internet are routed through the on-premises Exchange organization. This routing option is configured in the Hybrid Configuration wizard. For more information, see [Transport options in Exchange hybrid deployments](transport-options-in-exchange-hybrid-deployments-exchange-2013-help.md).
- **Coexistence domain**: An accepted domain added to the on-premises organization for hybrid mail flow and Autodiscover requests for the Office 365 service. This domain is added as a secondary proxy domain to any email address policies that have *PrimarySmtpAddress* templates for the domains selected in the Hybrid Configuration wizard. By default, this domain is \<domain\>.mail.onmicrosoft.com.

- ***HybridConfiguration* Active Directory object**: The Active Directory object in the on-premises organization that contains the desired hybrid deployment configuration parameters, as defined by the selections made in the Hybrid Configuration wizard. The Hybrid Configuration Engine uses these parameters when configuring the on-premises and Exchange Online settings of the hybrid configuration. The contents of the *HybridConfiguration* object are reset each time the Hybrid Configuration wizard is run.

- **Hybrid Configuration Engine**: The Hybrid Configuration Engine runs the core actions necessary to configure and update a hybrid deployment. It compares the state of the *HybridConfiguration* Active Directory object with the current configuration settings of the on-premises Exchange organization and Exchange Online, and then brings the deployment's configuration in line with the parameters defined in the *HybridConfiguration* object in Active Directory. For more information, see [Hybrid Configuration Engine](hybrid-configuration-wizard-exchange-2013-help.md).

- **Hybrid Configuration wizard (HCW)**: An adaptive tool in Exchange that helps administrators configure a hybrid deployment between their on-premises organization and Exchange Online. The wizard defines the hybrid deployment configuration parameters in the *HybridConfiguration* object and instructs the Hybrid Configuration Engine to enable the defined hybrid features. For more information, see [Hybrid Configuration wizard](hybrid-configuration-wizard-exchange-2013-help.md).

- **Exchange 2010-based hybrid deployment**: A hybrid deployment configured with on-premises Exchange Server 2010 SP3 servers as the connecting endpoints for the Office 365 and Exchange Online services. A hybrid deployment option for on-premises Exchange 2010, Exchange Server 2007, and Exchange Server 2003 organizations.

- **Exchange 2013-based hybrid deployment**: A hybrid deployment configured with on-premises Exchange 2013 servers as the connecting endpoints for the Office 365 and Exchange Online services. A hybrid deployment option for on-premises Exchange 2013, Exchange 2010, and Exchange 2007 organizations.

- **Exchange 2016-based hybrid deployment**: A hybrid deployment configured with on-premises Exchange 2016 servers as the connecting endpoints for the Office 365 and Exchange Online services. A hybrid deployment option for on-premises Exchange 2016, Exchange 2013, and Exchange 2010 organizations.

- **Secure mail transport**: An automatically configured feature of a hybrid deployment that enables secure messaging between the on-premises and Exchange Online organizations. Messages are encrypted and authenticated using Transport Layer Security (TLS) with the certificate selected in the Hybrid Configuration wizard. The Office 365 tenant is the endpoint for hybrid transport connections originating from the on-premises organization, and the source of hybrid transport connections to the on-premises organization from Exchange Online.
## Exchange hybrid deployment documentation

The following table contains links to topics with more information about managing hybrid deployments in Microsoft Exchange.

<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="header">
<th>Topic</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p><a href="hybrid-configuration-wizard-exchange-2013-help.md">Hybrid Configuration wizard</a></p></td>
<td><p>Learn more about how the Hybrid Configuration wizard and the Hybrid Configuration Engine configure a hybrid deployment.</p></td>
</tr>
<tr class="even">
<td><p><a href="hybrid-deployment-prerequisites-exchange-2013-help.md">Hybrid deployment prerequisites</a></p></td>
<td><p>Learn more about the prerequisites for a hybrid deployment, including compatible Exchange Server organizations, Office 365 requirements, and other on-premises configuration requirements.</p></td>
</tr>
<tr class="odd">
<td><p><a href="certificate-requirements-for-hybrid-deployments-exchange-2013-help.md">Certificate requirements for hybrid deployments</a></p></td>
<td><p>Learn more about the digital certificate requirements in hybrid deployments.</p></td>
</tr>
<tr class="even">
<td><p><a href="transport-options-in-exchange-hybrid-deployments-exchange-2013-help.md">Transport options in Exchange hybrid deployments</a></p></td>
<td><p>Learn more about the inbound and outbound message transport options in hybrid deployments.</p></td>
</tr>
<tr class="odd">
<td><p><a href="transport-routing-in-exchange-hybrid-deployments-exchange-2013-help.md">Transport routing in Exchange hybrid deployments</a></p></td>
<td><p>Learn more about the inbound and outbound message routing options in hybrid deployments.</p></td>
</tr>
<tr class="even">
<td><p><a href="hybrid-management-in-exchange-hybrid-deployments-exchange-2013-help.md">Hybrid management in Exchange hybrid deployments</a></p></td>
<td><p>Learn more about managing a hybrid deployment with the Exchange admin center and the Exchange Management Console.</p></td>
</tr>
<tr class="odd">
<td><p><a href="shared-free-busy-in-exchange-hybrid-deployments-exchange-2013-help.md">Shared free/busy in Exchange hybrid deployments</a></p></td>
<td><p>Learn more about sharing free/busy information between the on-premises and Exchange Online organizations in a hybrid deployment.</p></td>
</tr>
<tr class="even">
<td><p><a href="server-roles-in-exchange-hybrid-deployments-exchange-2013-help.md">Server roles in Exchange hybrid deployments</a></p></td>
<td><p>Learn more about how Exchange server roles function in a hybrid deployment.</p></td>
</tr>
<tr class="odd">
<td><p><a href="irm-in-exchange-hybrid-deployments-exchange-2013-help.md">IRM in Exchange hybrid deployments</a></p></td>
<td><p>Learn more about how Information Rights Management functions in a hybrid deployment.</p></td>
</tr>
<tr class="even">
<td><p><a href="permissions-in-exchange-hybrid-deployments-exchange-2013-help.md">Permissions in Exchange hybrid deployments</a></p></td>
<td><p>Learn more about how role-based access control (RBAC) is used to control permissions in a hybrid deployment.</p></td>
</tr>
<tr class="odd">
<td><p><a href="edge-transport-servers-with-hybrid-deployments-exchange-2013-help.md">Пограничные транспортные серверы в гибридных развертываниях</a></p></td> <td><p>Дополнительные сведения о пограничных транспортных серверах Exchange, их развертывании и работе в гибридном развертывании.</p></td> </tr> <tr class="even"> <td><p><a href="single-sign-on-with-hybrid-deployments-exchange-2013-help.md">Единый вход в гибридных развертываниях</a></p></td> <td><p>Дополнительные сведения о едином входе с синхронизацией паролей и AD FS в гибридном развертывании.</p></td> </tr> <tr class="odd"> <td><p><a href="hybrid-deployment-procedures-exchange-2013-help.md">Процедуры гибридного развертывания</a></p></td> <td><p>Инструкции по созданию и изменению гибридных развертываний для локальной организации Exchange и организации Exchange Online.</p></td> </tr> <tr class="even"> <td><p><a href="hybrid-deployments-with-exchange-2013-and-exchange-2010-exchange-2013-help.md">Гибридные развертывания с Exchange 2013 и Exchange 2010</a></p></td> <td><p>Дополнительные сведения о гибридных развертываниях на основе Exchange 2013 с организациями Exchange 2010.</p></td> </tr> <tr class="odd"> <td><p><a href="hybrid-deployments-with-exchange-2013-and-exchange-2007-exchange-2013-help.md">Гибридные развертывания с Exchange 2013 и Exchange 2007</a></p></td> <td><p>Дополнительные сведения о гибридных развертываниях на основе Exchange 2013 с организациями Exchange 2007.</p></td> </tr> </tbody> </table> ## Никогда не работали с Office 365? <table> <colgroup> <col style="width: 100%" /> </colgroup> <tbody> <tr class="odd"> <td><p><img src="images/JJ200581.eac8a413-9498-4220-8544-1e37d1aaea13(EXCHG.150).png" title="Небольшой значок LinkedIn Learning" alt="Небольшой значок LinkedIn Learning" /> <strong>Никогда не работали с Office 365?</strong><br /> <a href="https://support.office.com/ru-ru/article/office-365-admin-and-it-pro-courses-68cc9b95-0bdc-491e-a81f-ee70b3ec63c5">Office 365 admins and IT pros</a> будут заинтересованы бесплатными видеокурсами, предоставленными платформой LinkedIn Learning.</p></td> </tr> </tbody> </table>
87.263658
991
0.81937
rus_Cyrl
0.967667
b9949c9c91f790ac5b4ed0738a91bfffee948b03
6,114
md
Markdown
source/R-Portable-Win/library/treeio/NEWS.md
kant/Cerebro
5083b9d11c4618b9d3bbdc6354040e37a106223e
[ "MIT" ]
null
null
null
source/R-Portable-Win/library/treeio/NEWS.md
kant/Cerebro
5083b9d11c4618b9d3bbdc6354040e37a106223e
[ "MIT" ]
null
null
null
source/R-Portable-Win/library/treeio/NEWS.md
kant/Cerebro
5083b9d11c4618b9d3bbdc6354040e37a106223e
[ "MIT" ]
null
null
null
# treeio 1.5.3

+ `read.jplace` compatible with output of [TIPars](https://github.com/id-bioinfo/TIPars) (2018-08-07, Tue)

# treeio 1.5.2

+ bug fixed of `as.phylo.ggtree` and `as.treedata.ggtree` (2018-07-19, Thu)
+ fixed R check for `tree_subset` by using `rlang::quo` and importing `utils::head` and `utils::tail` (2018-05-24, Thu)
+ `tree_subset` methods contributed by [@tbradley1013](https://github.com/tbradley1013)
+ `drop.tip` works with `tree@extraInfo` (2018-05-23, Wed)
  - <https://github.com/GuangchuangYu/tidytree/pull/6#issuecomment-390259901>

# treeio 1.5.1

+ bug fixed of `groupOTU.treedata` (2018-05-23, Wed)
  - <https://github.com/GuangchuangYu/treeio/issues/7>

# treeio 1.4.0

+ Bioconductor 3.7 release

# treeio 1.3.15

+ Support converting an edge list (matrix, data.frame or tibble) to a `phylo` or `treedata` object; `ggtree` can now be used to visualize all tree-like graphs. (2018-04-23, Mon)

# treeio 1.3.14

+ rename_taxa (2018-04-19, Thu)
  - <https://guangchuangyu.github.io/2018/04/rename-phylogeny-tip-labels-in-treeio/>
+ read.astral (2018-04-17, Tue)
+ read.iqtree

# treeio 1.3.13

+ mv project website to <https://guangchuangyu.github.io/software/treeio>
+ update for rOpenSci acceptance
  - <https://github.com/ropensci/onboarding/issues/179#issuecomment-372127781>

# treeio 1.3.12

+ read.beast now compatible with taxa labels that contain ', " and space (2018-02-27, Wed)
+ update according to rOpenSci comments (2018-02-26, Mon)
  - <https://github.com/ropensci/onboarding/issues/179#issuecomment-365144565>
  - <https://github.com/ropensci/onboarding/issues/179#issuecomment-366800716>

# treeio 1.3.11

+ deprecate read.phyloT, as read.tree in ape v5 now supports phyloT newick text <2018-01-11, Thu>
+ fixed goodpractice check <2018-01-10, Wed>
  - <https://github.com/ropensci/onboarding/issues/179#event-1416196637>
  - avoid using = for assignment
  - avoid code lines > 80 characters
  - avoid sapply, instead using vapply and lapply
  - avoid using 1:length, 1:nrow and 1:ncol, use `seq_len` and `seq_along`
  - more unit tests

# treeio 1.3.10

* added 'Parsing jtree format' session in Importer vignette <2017-12-20, Wed>
* added 'Exporting tree data to JSON format' in Exporter vignette
* `read.jtree` and `write.jtree` functions
* added 'Combining tree with external data' and 'Merging tree data from different sources' sessions in Exporter vignette
* added 'Combining tree data' and 'Manipulating tree data using tidytree' sessions in Importer vignette
* full_join method for treedata object and added 'Linking external data to phylogeny' session in Importer vignette <2017-12-15, Fri>

# treeio 1.3.9

* move treedata class, show, get.fields methods to tidytree <2017-12-14, Thu>
* Exporter.Rmd vignette <2017-12-13, Wed>

# treeio 1.3.8

* mv treeio.Rmd vignette to Importer.Rmd and update the contents <2017-12-13, Wed>
* write.beast for treedata object <2017-12-12, Tue>
* add "connect" parameter in groupOTU <2017-12-12, Tue>
  + <https://groups.google.com/forum/#!msg/bioc-ggtree/Q4LnwoTf1DM/yEe95OFfCwAJ>

# treeio 1.3.7

* export groupClade.phylo method <2017-12-11, Mon>

# treeio 1.3.6

* re-defined groupOTU and groupClade generic using S3 <2017-12-11, Mon>

# treeio 1.3.5

* parent, ancestor, child, offspring, rootnode and sibling generic and method for phylo <2017-12-11, Mon>
* update mask and merge_tree function according to the treedata object <2017-12-11, Mon>

# treeio 1.3.4

* support tbl_tree object defined in tidytree <2017-12-08, Fri>

# treeio 1.3.3

* read.codeml output treedata, remove codeml class and clean up code <2017-12-07, Thu>

# treeio 1.3.2

* read.codeml_mlc output treedata object and remove codeml_mlc class <2017-12-06, Wed>
* read.paml_rst output treedata and remove paml_rst class <2017-12-06, Wed>
* read.phylip.tree and read.phylip.seq
* read.phylip output treedata object and phylip class definition was removed
* read.hyphy output treedata object; hyphy class definition was removed
* remove r8s class, read.r8s now output multiPhylo object
* jplace class inherits treedata <2017-12-05, Tue>
* using treedata object to store beast and mrbayes tree
* export read.mrbayes

# treeio 1.3.1

* compatible to parse beast output that only contains HPD range <2017-11-01, Wed>
  + https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/bioc-ggtree/RF2Ly52U_gc/jEP97nNPAwAJ

# treeio 1.2.0

* BioC 3.6 release <2017-11-01, Wed>

# treeio 1.1.2

* new project site using blogdown <2017-09-28, Thu>

# treeio 1.1.1

* parse mlc file without dNdS <2017-08-31, Thu>
  + https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/bioc-ggtree/hTRj-uldgAg
* better implementation of merge_tree <2017-08-31, Thu>

# treeio 0.99.11

* bug fixed in get.fields method for paml_rst <2017-03-20, Mon>
* fixed raxml2nwk for using treedata as output of read.raxml <2017-03-17, Fri>
* taxa_rename function <2017-03-15, Wed>
* phyPML method moved from ggtree <2017-03-06, Mon>

# treeio 0.99.10

* remove raxml class, now read.raxml output treedata object <2017-02-28, Tue>
* bug fixed of read.beast <2017-02-27, Mon>

# treeio 0.99.9

* read.newick for parsing node.label as support values <2017-01-03, Tue>
* read.beast support MrBayes output <2016-12-30, Fri>
* export as.phylo.ggtree <2016-12-30, Fri>

# treeio 0.99.8

* as.treedata.ggtree <2016-12-30, Fri>
* as.treedata.phylo4 & as.treedata.phylo4d <2016-12-28, Wed>

# treeio 0.99.7

* groupOTU, groupClade, gzoom methods from ggtree <2016-12-21, Wed>

# treeio 0.99.6

* add unit test of NHX (move from ggtree) <2016-12-14, Wed>

# treeio 0.99.3

* fixed BiocCheck by adding examples <2016-12-07, Wed>

# treeio 0.99.1

* fixed link in DESCRIPTION <2016-12-06, Tue>

# treeio 0.99.0

* add vignette <2016-12-06, Tue>
* move parser functions from ggtree <2016-12-06, Tue>

# treeio 0.0.1

* read.nhx from ggtree <2016-12-06, Tue>
* as.phylo.treedata to access phylo from treedata object <2016-12-06, Tue>
* as.treedata.phylo to convert phylo to tree data object <2016-12-06, Tue>
* treedata class definition <2016-12-06, Tue>
32.870968
132
0.729964
eng_Latn
0.659521
b994cc5a4c638b754d4f538175a82eef58b2dcd2
3,422
md
Markdown
rtos-docs/guix/about-guix-studio.md
MicrosoftDocs/rtos-docs-pr.es-es
f597c369e81c584b3dba613e73da7f88c44ca434
[ "CC-BY-4.0", "MIT" ]
null
null
null
rtos-docs/guix/about-guix-studio.md
MicrosoftDocs/rtos-docs-pr.es-es
f597c369e81c584b3dba613e73da7f88c44ca434
[ "CC-BY-4.0", "MIT" ]
null
null
null
rtos-docs/guix/about-guix-studio.md
MicrosoftDocs/rtos-docs-pr.es-es
f597c369e81c584b3dba613e73da7f88c44ca434
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure RTOS GUIX Studio user guide
description: This guide provides comprehensive information about Azure RTOS GUIX Studio, the Microsoft Windows-based rapid UI development environment designed specifically for the Microsoft Azure RTOS GUIX runtime library.
author: philmea
ms.author: philmea
ms.date: 5/19/2020
ms.service: rtos
ms.topic: article
ms.openlocfilehash: 63b3f17aae95cb00a338db423c94e4846c589787027401d3e33a29bbfafdd966
ms.sourcegitcommit: 93d716cf7e3d735b18246d659ec9ec7f82c336de
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/07/2021
ms.locfileid: "116784723"
---

# <a name="about-this-guix-studio-user-guide"></a>About this Azure RTOS GUIX Studio user guide

This guide provides comprehensive information about Azure RTOS GUIX Studio, the Microsoft Windows-based rapid UI development environment designed specifically for the Microsoft Azure RTOS GUIX runtime library. It is intended for the embedded real-time software developer who uses the ThreadX real-time operating system (RTOS) and the Azure RTOS GUIX UI runtime library. The developer should be familiar with standard Azure RTOS ThreadX and Azure RTOS GUIX concepts.

## <a name="organization"></a>Organization

- [**Chapter 1**](guix-studio-1.md): provides a basic introduction to Azure RTOS GUIX Studio and its relationship to real-time development.
- [**Chapter 2**](guix-studio-2.md): gives the basic steps to install and use Azure RTOS GUIX Studio to analyze your application right out of the box.
- [**Chapter 3**](guix-studio-3.md): describes the main features of Azure RTOS GUIX Studio.
- [**Chapter 4**](guix-studio-4.md): explains how to use Azure RTOS GUIX Studio to create and manage your application's resources.
- [**Chapter 5**](guix-studio-5.md): explains how to use the Azure RTOS GUIX WYSIWYG screen designer.
- [**Chapter 6**](guix-studio-6.md): explains how the output files and API functions generated by Azure RTOS GUIX Studio are used by your application.
- [**Chapter 7**](guix-studio-7.md): explains how to configure screen flow.
- [**Chapter 8**](guix-studio-8.md): describes the use of the command-line tool.
- [**Chapter 9**](guix-studio-9.md): describes a simple but complete UI application created with Azure RTOS GUIX Studio.

## <a name="customer-support-center"></a>Customer Support Center

Submit a support ticket through the Azure portal if you have questions or need help with these steps. Provide the following information in an email message so we can resolve your support request as efficiently as possible:

- A detailed description of the problem, including how often it occurs and how it can be reliably reproduced.
- The trace file that causes the problem, attached to the message.
- The version of Azure RTOS GUIX Studio you are using (shown at the top left of the screen).
- The version of Azure RTOS GUIX you are using, including the **_gx_version_idstring** and **_gx_build_options** variables.
- The version of Azure RTOS ThreadX you are using, including **_tx_version_idstring**.
79.581395
302
0.791642
spa_Latn
0.980457
b994cd557c25e0a2eeed0727f373ce21ea92b530
553
md
Markdown
docs/error-messages/tool-errors/project-build-error-prj0005.md
psimn/cpp-docs.zh-cn
0f8c59315e1753eb94b113dac7c38b3b70486ad7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/tool-errors/project-build-error-prj0005.md
psimn/cpp-docs.zh-cn
0f8c59315e1753eb94b113dac7c38b3b70486ad7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/tool-errors/project-build-error-prj0005.md
psimn/cpp-docs.zh-cn
0f8c59315e1753eb94b113dac7c38b3b70486ad7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Project build error PRJ0005
ms.date: 11/04/2016
f1_keywords:
- PRJ0005
helpviewer_keywords:
- PRJ0005
ms.assetid: 00c1821b-16aa-4bd9-9cf6-a778e5ed4ad9
ms.openlocfilehash: b77c029b77d48d35ff1a4ea1508ed81cf31fa531
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "62359639"
---

# <a name="project-build-error-prj0005"></a>Project build error PRJ0005

Could not create a temporary file in directory 'directory'.

The call to create a temporary file failed. Possible causes of the failure include:

- The temporary file names have been exhausted.
- The temporary directory is read-only.
- There is no temporary directory or TMP environment variable.
- Your computer is low on available disk space.
19.75
60
0.793852
yue_Hant
0.359499
b997139c37f759677a187e1328518baffbdb29b1
158
md
Markdown
analyzer-comments/elixir/rpg-character-sheet/ends_with_IO_inspect.md
IsaacG/website-copy
0fc5edbf6567733d8d762f1c61775880c3b03c17
[ "MIT" ]
215
2018-06-17T22:51:08.000Z
2022-03-29T11:42:17.000Z
analyzer-comments/elixir/rpg-character-sheet/ends_with_IO_inspect.md
IsaacG/website-copy
0fc5edbf6567733d8d762f1c61775880c3b03c17
[ "MIT" ]
805
2017-08-19T18:17:23.000Z
2022-03-28T06:15:23.000Z
analyzer-comments/elixir/rpg-character-sheet/ends_with_IO_inspect.md
IsaacG/website-copy
0fc5edbf6567733d8d762f1c61775880c3b03c17
[ "MIT" ]
1,359
2017-08-18T23:04:31.000Z
2022-03-30T06:52:45.000Z
# ends with IO inspect

The function `run/0` should not explicitly return the result, but return whatever `IO.inspect` returns (which is its first argument).
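A minimal sketch of the point (the module and data below are hypothetical, not taken from the exercise): `IO.inspect/1` prints its argument *and* returns it unchanged, so when it is the last expression of a function, repeating the value afterwards is redundant.

```elixir
defmodule Sketch do
  # Redundant: the trailing `character` repeats what IO.inspect already returns.
  def run_verbose do
    character = %{name: "hero", level: 1}
    IO.inspect(character)
    character
  end

  # Idiomatic: IO.inspect/1 returns its first argument, so the inspected
  # value is already the function's return value.
  def run do
    IO.inspect(%{name: "hero", level: 1})
  end
end
```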
39.5
133
0.772152
eng_Latn
0.99916
b997332615e0a75c6878f6ca9b7bab2116e534b4
983
md
Markdown
desktop-src/RRAS/multicast-group-manager-reference.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
552
2019-08-20T00:08:40.000Z
2022-03-30T18:25:35.000Z
desktop-src/RRAS/multicast-group-manager-reference.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
1,143
2019-08-21T20:17:47.000Z
2022-03-31T20:24:39.000Z
desktop-src/RRAS/multicast-group-manager-reference.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
1,287
2019-08-20T05:37:48.000Z
2022-03-31T20:22:06.000Z
---
title: Multicast Group Manager Reference
description: The following documentation describes the functions, callbacks, structures, and enumeration types to use when working with the multicast group manager
ms.assetid: bd32d8e9-e2f0-4406-8e9c-979dc6e85221
keywords:
- Routing and Remote Access Service RRAS , Multicast Group Manager, reference
- Multicast Group Manager RRAS
- Multicast Group Manager RRAS , reference
ms.topic: article
ms.date: 05/31/2018
---

# Multicast Group Manager Reference

The following documentation describes the functions, callbacks, structures, and enumeration types to use when working with the multicast group manager:

- [Multicast Group Manager Functions](multicast-group-manager-functions.md)
- [Multicast Group Manager Callbacks](multicast-group-manager-callbacks.md)
- [Multicast Group Manager Structures](multicast-group-manager-structures.md)
- [Multicast Group Manager Enumerations](multicast-group-manager-enumerations.md)
33.896552
163
0.80061
eng_Latn
0.898359
b997a303f9145815b616b1925d9f10209e1e169e
1,666
md
Markdown
deeplinking/deeplinking.md
cryptobuks/developer
bc66273ace32f4703ffe0e4ee530cc2ce7a419ac
[ "MIT" ]
3
2020-09-21T15:13:20.000Z
2022-03-13T00:14:03.000Z
deeplinking/deeplinking.md
oleksiivinogradov/developer
89373ea9033036c2921a04cb6f448be15bd1b4e7
[ "MIT" ]
null
null
null
deeplinking/deeplinking.md
oleksiivinogradov/developer
89373ea9033036c2921a04cb6f448be15bd1b4e7
[ "MIT" ]
1
2020-04-20T13:40:39.000Z
2020-04-20T13:40:39.000Z
# Deep Linking

# Usage

## DApp Browser

### Open dapp browser with a specific url and network

- `coin` - slip44 index
- `url` - website url

https://link.trustwallet.com/open_url?coin_id=60&url=https://compound.finance

## Payments

### Activate coin

- `coin_id` - slip44 index

https://link.trustwallet.com/activate_coin?coin_id=60

### Redeem Code:

- `code` unique code
- `provider` provider url

https://link.trustwallet.com/redeem?code=abc123

### Send Payment:

- `coin` slip44 index
- `token_id` Optional. Token identifier (as smart contract address or unique token ID)
- `address` Recipient address
- `amount` Optional. Payment amount
- `memo` Optional. Memo
- `data` Optional. Data

https://link.trustwallet.com/send?coin=60&token_id=0x6B175474E89094C44Da98b954EedeAC495271d0F&address=0x650b5e446edabad7eba7fa7bb2f6119b2630bfbb&amount=1&memo=test

### Add custom token:

- `token_id` token identifier on the blockchain.

https://link.trustwallet.com/add_token?token_id=0x514910771af9ca656af840dff83e8264ecf986ca

### Referral:

https://link.trustwallet.com/referral

## Staking

### Stake details:

- `coin` slip44 index

https://link.trustwallet.com/stake?coin=118

### Stake / Delegate:

- `coin` slip44 index

https://link.trustwallet.com/stake_delegate?coin=118

### Unstake / Undelegate:

- `coin` slip44 index

https://link.trustwallet.com/stake_undelegate?coin=118

### Claim Rewards:

- `coin` slip44 index

https://link.trustwallet.com/stake_claim_rewards?coin=118

#### Available domain links:

- `https://link.trustwallet.com`
- `trust://`

#### Definition of slip44 index

- https://github.com/satoshilabs/slips/blob/master/slip-0044.md
19.149425
163
0.741897
yue_Hant
0.3391
b9998bc058e7d5ecb060ef1a655952b88fb39086
1,531
md
Markdown
windows-driver-docs-pr/install/device-identification-strings.md
i35010u/windows-driver-docs.zh-cn
e97bfd9ab066a578d9178313f802653570e21e7d
[ "CC-BY-4.0", "MIT" ]
1
2021-02-04T01:49:58.000Z
2021-02-04T01:49:58.000Z
windows-driver-docs-pr/install/device-identification-strings.md
i35010u/windows-driver-docs.zh-cn
e97bfd9ab066a578d9178313f802653570e21e7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/install/device-identification-strings.md
i35010u/windows-driver-docs.zh-cn
e97bfd9ab066a578d9178313f802653570e21e7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Device identification strings
description: The Plug and Play (PnP) manager and other device installation components use device identification strings to identify the devices that are installed in a computer.
keywords:
- compatible IDs WDK device installations
- device IDs WDK device installations
- device instance IDs WDK device installations
- driver nodes WDK device installations
- hardware IDs WDK device installations
- instance IDs WDK device installations
- device settings WDK device installations , device identification strings
- device installations WDK , device identification strings
- installing devices WDK , device identification strings
ms.date: 04/20/2017
ms.localizationpriority: medium
ms.openlocfilehash: 5bb42f14f13fbf52ef23e6bbfb2471abad3febfb
ms.sourcegitcommit: 418e6617e2a695c9cb4b37b5b60e264760858acd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/07/2020
ms.locfileid: "96782871"
---

# <a name="device-identification-strings"></a>Device identification strings

The Plug and Play (PnP) manager and other [device installation components](./overview-of-device-and-driver-installation.md) use device identification strings to identify the devices that are installed in a computer.

Windows uses the following device identification strings to locate the information (INF) file that best matches the device. These strings are reported by a device's enumerator, a system component that discovers PnP devices based on a PnP hardware standard. These tasks are carried out by PnP bus drivers in partnership with the PnP manager. A device is typically enumerated by its parent bus driver, such as the PCI or PCMCIA bus driver. Some devices are enumerated by a bus filter driver, such as the ACPI driver.

- [Hardware IDs](hardware-ids.md)
- [Compatible IDs](compatible-ids.md)

Windows tries to find a match for one of the hardware IDs or compatible IDs. For more information about how Windows uses these IDs to match a device to an INF file, and about how to specify IDs in an INF file, see [How Windows selects a driver](./how-windows-selects-a-driver-for-a-device.md).

In addition to using the preceding IDs to identify devices, the PnP manager also uses the following IDs to uniquely identify each instance of each device that is installed in a computer:

- [Instance IDs](instance-ids.md)
- [Device instance IDs](device-instance-ids.md)

Starting with Windows 7, the PnP manager uses the [container ID](container-ids.md) device identification string to group one or more device nodes (devnodes) that were enumerated from each instance of a physical device installed in a computer.

Each enumerator customizes its device IDs, hardware IDs, and compatible IDs to uniquely identify the devices that it enumerates. In addition, each enumerator has its own policy for identifying hardware IDs and compatible IDs. For more information about the format of hardware IDs and compatible IDs for most system buses, see [Device identifier formats](./generic-identifiers.md).

> [!NOTE]
> Device identification strings should not be parsed. They are intended only for string comparisons and should be treated as opaque strings.
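For illustration only, here is roughly what such identifiers look like for a PCI device. The vendor, device, and instance values below are hypothetical examples, not values from this article:

```
PCI\VEN_8086&DEV_1916&SUBSYS_06DE1028&REV_07                  hardware ID (most specific)
PCI\VEN_8086&DEV_1916                                         hardware ID (less specific)
PCI\CC_030000                                                 compatible ID (class code)
PCI\VEN_8086&DEV_1916&SUBSYS_06DE1028&REV_07\3&11583659&0&10  device instance ID
```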
32.574468
185
0.777923
yue_Hant
0.857543
b99a0b7654cf8c5f0d0bfd5b10491a5b979d5f58
2,744
md
Markdown
README.md
fedasaro62/transner
1422784b54bfd9b4afcccf6ca244f95d3cba8579
[ "Apache-2.0" ]
null
null
null
README.md
fedasaro62/transner
1422784b54bfd9b4afcccf6ca244f95d3cba8579
[ "Apache-2.0" ]
null
null
null
README.md
fedasaro62/transner
1422784b54bfd9b4afcccf6ca244f95d3cba8579
[ "Apache-2.0" ]
null
null
null
# transner

NER with transformer

## route: /transner/v0.3/ner

* input: JSON object containing a list of strings {"strings": [...]}
* This interface expects sentences, possibly taken from longer documents or from records in a table. Please check with Pipple if they are willing to contribute to the provision of a sentence splitter for longer documents. Otherwise, we will implement it ourselves
* output: JSON object containing the extracted entities
* example of usage:

```console
$ curl -i -H "Content-Type: application/json" -X POST -d '{"strings": ["Mario Rossi è nato a Busto Arsizio", "Il signor Di Marzio ha effettuato un pagamento a Matteo", "Marco e Luca sono andati a Magenta"]}' http://localhost:5000/transner/v0.7/ner
$ curl -d '{"strings": ["Mario Rossi è nato a Busto Arsizio", "Il signor Di Marzio ha effettuato un pagamento a Matteo", "Marco e Luca sono andati a Magenta"]}' http://localhost:6000/transner/v0.7/ner -H "Content-Type: application/json"
{
  "results": [
    {
      "entities": [
        {
          "offset": 0,
          "type": "PERSON",
          "value": "mario rossi"
        },
        {
          "offset": 21,
          "type": "LOCATION",
          "value": "busto arsizio"
        }
      ],
      "sentence": "Mario Rossi è nato a Busto Arsizio"
    },
    {
      "entities": [
        {
          "offset": 0,
          "type": "PERSON",
          "value": "il signor di marzio"
        },
        {
          "offset": 49,
          "type": "PERSON",
          "value": "matteo"
        }
      ],
      "sentence": "Il signor Di Marzio ha effettuato un pagamento a Matteo"
    },
    {
      "entities": [
        {
          "offset": 0,
          "type": "PERSON",
          "value": "marco"
        },
        {
          "offset": 8,
          "type": "PERSON",
          "value": "luca"
        },
        {
          "offset": 27,
          "type": "LOCATION",
          "value": "magenta"
        }
      ],
      "sentence": "Marco e Luca sono andati a Magenta"
    }
  ],
  "timestamp": 1581065432.7972977
}
```

## HOW TO USE:

clone the repository and then do:

```
git submodule init
git submodule update
```

pretrained models link: https://istitutoboella-my.sharepoint.com/:f:/g/personal/matteo_senese_linksfoundation_com/EvhOF23tja5Nuo3mw03v24oB7D14q9cjk16Ca7xF3nTm-A?e=AWpuiu

```
conda create --name mediaverse_transner python=3.8
conda activate mediaverse_transner
pip install -r requirements.txt
```

### DOCKER

###### build the image

docker build -t transner-api .

###### run the image

docker run -d --network host --name mediaverse_transner transner-api

###### remove container and image

docker ps
docker stop mediaverse_transner
docker rm mediaverse_transner
docker rmi transner-api
28.28866
265
0.604227
eng_Latn
0.348676
b99a69adfd24d843c47fc5dc7e28cf09b3ea8a59
843
md
Markdown
2016_Curriculum_Expansion/Section_Introductions/d3js.md
HKuz/freeCodeCamp_Contributions
e0d0619dd0c08c7013ce793b16196365605e259b
[ "BSD-3-Clause" ]
null
null
null
2016_Curriculum_Expansion/Section_Introductions/d3js.md
HKuz/freeCodeCamp_Contributions
e0d0619dd0c08c7013ce793b16196365605e259b
[ "BSD-3-Clause" ]
null
null
null
2016_Curriculum_Expansion/Section_Introductions/d3js.md
HKuz/freeCodeCamp_Contributions
e0d0619dd0c08c7013ce793b16196365605e259b
[ "BSD-3-Clause" ]
null
null
null
# D3.js Challenges Introduction

![XKCD comic showing movie narrative data visualization](http://imgs.xkcd.com/comics/movie_narrative_charts.png)

D3.js, or D3, stands for Data Driven Documents. D3 is a JavaScript library to create dynamic and interactive data visualizations in the browser. It's built to work with common web standards, namely HTML, CSS, and Scalable Vector Graphics (SVG).

D3 takes input data and maps it into a visual representation of that data. It supports many different data formats. D3 lets you bind (or attach) the data to the Document Object Model (DOM). You use HTML or SVG elements with D3's built-in methods to transform the data into a visualization.

D3 gives you a lot of control over the presentation of data. This section covers the basic functionality and how to create visualizations with the D3 library.
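As a minimal sketch of that data-binding idea, assuming the D3 library is loaded on the page (the data values and element choices here are illustrative, not from the challenges):

```js
// Bind an array of numbers to <p> elements: D3 creates one paragraph per datum.
const data = [4, 8, 15, 16, 23, 42];

d3.select("body")          // select the container element
  .selectAll("p")          // an (initially empty) selection to join data against
  .data(data)              // bind the data to the selection
  .enter()                 // placeholders for data without matching elements
  .append("p")             // create a <p> for each unbound datum
  .text(d => "Value: " + d);
```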
84.3
289
0.797153
eng_Latn
0.995264
b99a8927b5220fbf3fe1d0d62207d1667ee1a661
1,350
md
Markdown
README.md
Danielg212/vue-doc-reader
2215781ceee844205bdf7cfe69e023f473c910ab
[ "MIT" ]
4
2020-09-14T23:16:18.000Z
2020-11-06T09:51:49.000Z
README.md
Danielg212/vue-doc-reader
2215781ceee844205bdf7cfe69e023f473c910ab
[ "MIT" ]
null
null
null
README.md
Danielg212/vue-doc-reader
2215781ceee844205bdf7cfe69e023f473c910ab
[ "MIT" ]
null
null
null
<h1 align="center">Welcome to vue-doc-reader 👋</h1> <p> <img src="https://img.shields.io/npm/types/vue-doc-reader" /> <img alt="Version" src="https://img.shields.io/badge/version-0.1.0-blue.svg?cacheSeconds=2592000" /> <img src="https://img.shields.io/badge/node-%3E%3D10-blue.svg" /> <a href="#" target="_blank"> <img alt="License: MIT" src="https://img.shields.io/badge/License-MIT-yellow.svg" /> </a> </p> > file input reader for docs ## Supported Formats .xlsx .xlsb .xlsm .xls .xml .csv ## Prerequisites - node >=10 ## Install ```sh npm i vue-doc-reader ``` ## Usage In main.ts ```ts import VueDocReader from 'vue-doc-reader' Vue.component('vue-doc-reader',VueDocReader) ``` In parent component template ```html <vue-doc-reader @onLoad="onLoad" /> ``` ### Props * (optional) label:string - input label. * (optional) includeRows:boolean - if ``true`` the table data will return with ``rowIndex`` property for each row. ### Callback onLoad event callback return ``{data:Array<any>,headers:Array<string>}`` * ``data`` represents array of row objects ``[columnName:string]:value`` * ``headers`` represents array of columns name ```ts onLoad(results:any){ this.data = results.data; this.headers = results.headers } ``` ## Author 👤 **daniel212** ## Show your support Give a ⭐️ if this project helped you!
19.565217
114
0.671111
eng_Latn
0.491489
b99a9b86fe2c720ca84dec6b958f437eec69723f
6,051
md
Markdown
RELEASE_NOTES.md
rayzhb/DotNetty
54fd364a9319c4b888296b4a6a20eb8f9d28285f
[ "MIT" ]
null
null
null
RELEASE_NOTES.md
rayzhb/DotNetty
54fd364a9319c4b888296b4a6a20eb8f9d28285f
[ "MIT" ]
null
null
null
RELEASE_NOTES.md
rayzhb/DotNetty
54fd364a9319c4b888296b4a6a20eb8f9d28285f
[ "MIT" ]
null
null
null
#### 0.7.2 February 14, 2022

- Start threads as background in HashedWheelTimer, LoopExecutor, ThreadDeathWatcher
- Google.Protobuf 3.19.4 (latest)

#### 0.7.1 December 15, 2021

- Revert to use background threads

#### 0.7.0 June 11, 2021

- Target net472 and netstandard2.0
- Microsoft.Extensions.Logging 5.0.0
- Microsoft.Extensions.Configuration 5.0.0

#### 0.6.0 October 9, 2018

- Clearly marks Unsafe Buffer management routines as `unsafe`
- Changes defaults for Unpooled and Pooled buffer allocators to safe versions
- Fixes write buffer handling (#423)

#### 0.5.0 August 14, 2018

- Web Socket support
- Aligned execution service model
- Fix for synchronous socket connection establishment on .NET Core 2.1
- TlsHandler fixes
- Fix to scheduled task cancellation
- XML Doc updates

#### 0.4.8 April 24, 2018

- Unsafe direct buffers
- HTTP 1.1 codec
- FlowControlHandler
- Channel pool
- Better Buffer-String integration
- Better shutdown handling for sockets
- Realigned Redis codec
- Fixes to LengthFieldPrepender, LengthFieldBasedDecoder
- Fixes to libuv-based transport
- Fixes to buffer management on flush for .NET Core
- Fixes to ResourceLeakDetector

#### 0.4.6 August 2 2017

- Small fixes (#259, #260, #264, #266)
- Properly handling handshake with AutoRead = false when Read is not issued by upstream handlers in pipeline (#263)
- Proper exception handling in TcpServerSocketChannel to retry accept instead of closing (#272)

#### 0.4.5 May 15 2017

- Support for Medium and Unsigned Medium types (#244)
- Support for Float (Single) type and Zeroing API (#209)
- Hashed Wheel Timer (#242)
- Fix for unintended concurrent flush (#218), silent failures during TLS handshake (#225)

#### 0.4.4 March 31 2017

- Added SNI support
- Fixed assembly metadata

#### 0.4.3 March 21 2017

- Extended support for .NET 4.5
- Fix to PooledByteBufferAllocator to promptly release freed chunks for GC
- Ability to limit overall PooledByteBufferAllocator capacity
- Updated dependencies

#### 0.4.2 February 9 2017

- Better alignment with .NET Standard and portability (esp UWP support)
- New tooling

#### 0.4.1 January 26 2017

- Introduced Platform allowing for alternative implementations of platform-specific concepts.
- STEE and others use Task-based "thread" abstraction.

#### 0.4.0 November 25 2016

- .NET Standard 1.3 support.
- Libraries are strong-named by default.
- Redis codec.
- Protocol Buffers 2 and 3 codecs.
- Socket Datagram Channel.
- Base64 encoder and decoder.
- STEE uses ConcurrentQueue by default (queue impl is pluggable now).

#### 0.3.2 June 22 2016

- Better API alignment with final version of netty 4.1 (#125).
- Exposed API for flexible TlsHandler initialization (#132, #134).

#### 0.3.1 June 01 2016

- Port of IdleStateHandler, ReadTimeoutHandler, WriteTimeoutHandler (#98).
- Fixes and optimization in TlsHandler (#116).
- Port of AdaptiveRecvByteBufAllocator enabling flexible sizing of read buffer (#117).
- Support for adding Attributes on Channel (#114).
- Proper xml-doc configuration (#120).

#### 0.3.0 May 13 2016

- BREAKING CHANGE: default byte buffer is now PooledByteBufferAllocator (unless overridden through environment variable).
- Port of PooledByteBuffer (support for flexible buffer sizes).
- Enables sending of multiple buffers in a single socket call.
- Refreshed DefaultChannelPipeline, AbstractChannelHandlerContext.
- Port of JsonObjectDecoder, DelimeterBasedFrameDecoder.
- Fixes to async sending in TcpSocketChannel.
- IoBufferCount, GetIoBuffer(s) introduced in IByteBuffer.

#### 0.2.6 April 27 2016

- TlsHandler negotiates TLS 1.0+ on server side (#89).
- STEE properly supports graceful shutdown (#7).
- UnpooledHeapByteBuffer.GetBytes honors received index and length (#88).
- Port of MessageToMessageDecoder, LineBasedFrameDecoder, StringDecoder, StringEncoder, ByteProcessor and ForEachByte family of methods on Byte Buffers (#86).

#### 0.2.5 April 14 2016

- Fixes regression in STEE where evaluation of idling timeout did not account for immediately pending scheduled tasks (#83).

#### 0.2.4 April 07 2016

- Proper handling of pooled buffer growth beyond max capacity of buffer in pool (fixing #71).
- Improved pooling of buffers when a buffer was released in another thread (#73).
- Introduction of IEventExecutor.Schedule and proper cancellation of scheduled tasks (#80).
- Better handling of wake-ups for scheduled tasks (#81).
- Default internal logging initialization is deferred to allow overriding it completely (#80 extra).
- Honoring `IByteBuffer.ArrayOffset` in `IByteBuffer.ToString(Encoding)` (#80 extra).

#### 0.2.3 February 10 2016

- Critical fix to handling of async operations when initiated from outside the event loop (#66).
- Fix to enable setting socket-related options through SetOption on Bootstrap (#68).
- Build changes to allow signing assemblies

#### 0.2.2 January 30 2016

- `ResourceLeakDetector` fix (#64)
- Assigned GUID on default internal logger `EventSource`
- `IByteBuffer.ToString(..)` for efficient string decoding directly from Byte Buffer

#### 0.2.1 December 08 2015

- Fixes to EmptyByteBuffer
- Ported LoggingHandler

#### 0.2.0 November 17 2015

- Proper Event Executor model port
- EmbeddedChannel
- Better test coverage for executor model and basic channel functionality
- Channel groups support
- Channel ID
- Complete `LengthFieldBasedFrameDecoder` and `LengthFieldPrepender`
- Resource leak detection support (basic is on by default for pooled byte buffers)
- Proper internal logging
- Richer byte buffer API
- Proper utilities set for byte buffers, strings, system properties
- Performance improvements in SingleThreadEventExecutor

#### 0.1.3 September 21 2015

- Fixed `TcpSocketChannel` closure on graceful socket closure
- Better alignment of IChannel implementations to netty's expected behavior for `Open`, `Active`, `LocalAddress`, `RemoteAddress`
- Proper port of `Default/IChannelPipeline` and `AbstractChannelHandlerContext` to enable channel handlers to run on a different invoker.

#### 0.1.2 August 09 2015

First public release
41.163265
158
0.769294
eng_Latn
0.935459
b99b283a8b5849a4615130e360beab391cfad419
1,147
md
Markdown
tests/unit/test_data/en_tw-wa/en_tw/bible/names/ashkelon.md
linearcombination/DOC
4478e55ec81426c15a2c402cb838e76d79741c03
[ "MIT" ]
1
2022-01-10T21:03:26.000Z
2022-01-10T21:03:26.000Z
tests/unit/test_data/en_tw-wa/en_tw/bible/names/ashkelon.md
linearcombination/DOC
4478e55ec81426c15a2c402cb838e76d79741c03
[ "MIT" ]
1
2022-03-28T17:44:24.000Z
2022-03-28T17:44:24.000Z
tests/unit/test_data/en_tw-wa/en_tw/bible/names/ashkelon.md
linearcombination/DOC
4478e55ec81426c15a2c402cb838e76d79741c03
[ "MIT" ]
3
2022-01-14T02:55:44.000Z
2022-02-23T00:17:51.000Z
# Ashkelon

## Facts:

In Bible times, Ashkelon was a major Philistine city located on the coast of the Mediterranean Sea. It still exists in Israel today.

* Ashkelon was one of the five most important Philistine cities, along with Ashdod, Ekron, Gath, and Gaza.
* The Israelites did not completely conquer the people of Ashkelon, even though the kingdom of Judah occupied its hill country.
* Ashkelon remained occupied by the Philistines for hundreds of years.

(Translation suggestions: [[rc://en/ta/man/jit/translate-names]])

(See also: [Ashdod](../names/ashdod.md), [Canaan](../names/canaan.md), [Ekron](../names/ekron.md), [Gath](../names/gath.md), [Gaza](../names/gaza.md), [Philistines](../names/philistines.md), [Mediterranean](../names/mediterranean.md))

## Bible References:

* [1 Samuel 06:17-18](rc://en/tn/help/1sa/06/17)
* [Amos 01:8](rc://en/tn/help/amo/01/08)
* [Jeremiah 25:19-21](rc://en/tn/help/jer/25/19)
* [Joshua 13:2-3](rc://en/tn/help/jos/13/02)
* [Judges 01:18-19](rc://en/tn/help/jdg/01/18)
* [Zechariah 09:05](rc://en/tn/help/zec/09/05)

## Word Data:

* Strong's: H831

## Forms Found in the English ULB:

Ashkelon
35.84375
234
0.701831
eng_Latn
0.87027
b99bbaa4b865d45cc87f26b7c5dc9018755214c0
998
md
Markdown
linux.md
tarrow/CMInstall
a50b1af8e5e82811d47d2d32e880b1c4138feedc
[ "MIT" ]
null
null
null
linux.md
tarrow/CMInstall
a50b1af8e5e82811d47d2d32e880b1c4138feedc
[ "MIT" ]
null
null
null
linux.md
tarrow/CMInstall
a50b1af8e5e82811d47d2d32e880b1c4138feedc
[ "MIT" ]
null
null
null
---
layout: default
title: Linux Installation
---

# Linux Installation Procedure

## getpapers

1. Download npm and node using your package manager.
1. Run `npm install --global getpapers` either as root or with sudo enabled.

## norma

### With .deb

Install the .deb from [github](https://github.com/ContentMine/norma/releases) if on Debian or Ubuntu.

### Alternate method

We are in the process of preparing rpms. You can also install manually using a zip file:

{% include norma-from-zip.md %}

---

# Linux Operating Procedure

## getpapers

{% include run-getpapers.md %}

## norma

{% include check-java-generic.md %}

You can either download and install the packages from their website or, preferably, use your package manager, for example `apt-get` or `yum`.

{% include run-norma.md %}

## ami

{% include check-java-generic.md %}

You can either download and install the packages from their website or, preferably, use your package manager, for example `apt-get` or `yum`.

{% include run-ami.md %}
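For a quick end-to-end reference, the install steps above amount to the following on a Debian-based system. The final line is a usage sketch only: the `--query`, `--outdir`, and `--xml` flags are recalled from the getpapers README rather than taken from this page, so verify them with `getpapers --help` before relying on them.

```shell
# Install node and npm from the distribution's repositories (Debian/Ubuntu shown)
sudo apt-get install nodejs npm

# Install getpapers globally, as described above
sudo npm install --global getpapers

# Example run (flags are assumptions; check `getpapers --help`):
# fetch XML full text matching a query into ./aardvark
getpapers --query 'aardvark' --outdir aardvark --xml
```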
26.263158
140
0.733467
eng_Latn
0.994405
b99c4ff96d18d878f91e923d1d52d0ecb9cae3ac
129
md
Markdown
README.md
tamireinhorn/Dice-Roller
5311531cb9abe1c90267241c17f4049f190c09a0
[ "MIT" ]
null
null
null
README.md
tamireinhorn/Dice-Roller
5311531cb9abe1c90267241c17f4049f190c09a0
[ "MIT" ]
null
null
null
README.md
tamireinhorn/Dice-Roller
5311531cb9abe1c90267241c17f4049f190c09a0
[ "MIT" ]
null
null
null
# Dice-Roller

This is my first Android app, built by following the Udacity tutorial. I've just modified it slightly to learn more :)
64.5
115
0.782946
eng_Latn
0.999813
b99c7acd3422e1b417a67b0c16080f3a3c609a65
624
md
Markdown
docs/ContainerTemplateReference.md
roliveri/topological_inventory-ingress_api-client-ruby
6e059242bba66abacd3a6ae35cea96180e59355d
[ "Apache-2.0" ]
null
null
null
docs/ContainerTemplateReference.md
roliveri/topological_inventory-ingress_api-client-ruby
6e059242bba66abacd3a6ae35cea96180e59355d
[ "Apache-2.0" ]
30
2018-10-16T17:39:25.000Z
2019-10-18T14:54:28.000Z
docs/ContainerTemplateReference.md
roliveri/topological_inventory-ingress_api-client-ruby
6e059242bba66abacd3a6ae35cea96180e59355d
[ "Apache-2.0" ]
5
2018-10-10T20:16:59.000Z
2019-02-08T09:03:20.000Z
# TopologicalInventoryIngressApiClient::ContainerTemplateReference

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**inventory_collection_name** | **String** | |
**reference** | [**ClusterReferenceReference**](ClusterReferenceReference.md) | |
**ref** | **String** | |

## Code Sample

```ruby
require 'TopologicalInventoryIngressApiClient'

instance = TopologicalInventoryIngressApiClient::ContainerTemplateReference.new(inventory_collection_name: nil,
                                                                                reference: nil,
                                                                                ref: nil)
```
28.363636
112
0.602564
yue_Hant
0.626049
b99c90219cb144e2eff12e6e9f461613e76a5fb6
24,665
md
Markdown
docs/advanced-analytics/install/sql-r-services-windows-install.md
drake1983/sql-docs.es-es
d924b200133b8c9d280fc10842a04cd7947a1516
[ "CC-BY-4.0", "MIT" ]
1
2020-04-25T17:50:01.000Z
2020-04-25T17:50:01.000Z
docs/advanced-analytics/install/sql-r-services-windows-install.md
drake1983/sql-docs.es-es
d924b200133b8c9d280fc10842a04cd7947a1516
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/advanced-analytics/install/sql-r-services-windows-install.md
drake1983/sql-docs.es-es
d924b200133b8c9d280fc10842a04cd7947a1516
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Install SQL Server 2016 R Services (In-Database) | Microsoft Docs
ms.prod: sql
ms.technology: machine-learning
ms.date: 04/15/2018
ms.topic: conceptual
author: HeidiSteen
ms.author: heidist
manager: cgronlun
ms.openlocfilehash: 5d8cf1c6bb2ac59a2745aceb979c5f566917548a
ms.sourcegitcommit: 808d23a654ef03ea16db1aa23edab496b73e5072
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 06/02/2018
ms.locfileid: "34585597"
---
# <a name="install-sql-server-2016-r-services-in-database"></a>Install SQL Server 2016 R Services (In-Database)
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md-winonly](../../includes/appliesto-ss-xxxx-xxxx-xxx-md-winonly.md)]

This article explains how to install and configure **SQL Server 2016 R Services (In-Database)**. If you have SQL Server 2016, install this feature to enable execution of R code in SQL Server.

## <a name="bkmk_prereqs"> </a> Pre-installation checklist

+ SQL Server 2016 setup is required if you want to install R Services. If you have SQL Server 2017 installation media instead, you must install [SQL Server 2017 Machine Learning Services (In-Database)](sql-machine-learning-services-windows-install.md) to get R integration for that version of SQL Server.

+ A database engine instance is required. You cannot install just R, although you can add it incrementally to an existing instance.

+ Do not install R Services on a failover cluster. The security mechanism used to isolate R processes is not compatible with a Windows Server failover cluster environment.

+ Do not install R Services on a domain controller. The R Services portion of setup will fail.

+ Do not install **Shared Features** > **R Server (Standalone)** on the same computer that runs a database instance.

+ Side-by-side installation with other versions of R and Python is possible because the SQL Server instance uses its own copies of the open-source R and Anaconda distributions. However, running code that uses R and Python on the SQL Server computer outside SQL Server can lead to several problems:

  + You use a different library and a different executable, and get different results than when running in SQL Server.
  + R and Python scripts running against external libraries cannot be managed by SQL Server, leading to resource contention.

If you used any earlier version of the RevoScaleR packages or the Revolution Analytics development environment, or if you installed pre-release versions of SQL Server 2016, you must uninstall them. Running older and newer versions of RevoScaleR and other proprietary packages side by side is not supported. To remove earlier versions, see [Upgrade and installation FAQ for SQL Server Machine Learning Services](../r/upgrade-and-installation-faq-sql-server-r-services.md).

> [!IMPORTANT]
> After installation completes, be sure to finish the additional post-configuration steps described in this article. These include enabling SQL Server to use external scripts, and adding the accounts required for SQL Server to run R jobs on your behalf. Configuration changes typically require a restart of the instance, or a restart of the Launchpad service.

## <a name="get-the-installation-media"></a>Get the installation media

[!INCLUDE[GetInstallationMedia](../../includes/getssmedia.md)]

### <a name="bkmk_ga_instalpatch"></a> Install patch requirement

Microsoft has identified a problem with the specific version of the Microsoft VC++ 2013 runtime binaries that SQL Server installs as a prerequisite. If this update to the VC++ runtime binaries is not installed, SQL Server may experience stability issues in certain scenarios. Before you install SQL Server, follow the instructions in the [SQL Server release notes](../../sql-server/sql-server-2016-release-notes.md#bkmk_ga_instalpatch) to see whether your computer requires a patch for the VC runtime binaries.

## <a name="bkmk2016top"></a>Run setup

For local installations, you must run setup as an administrator. If you install [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] from a remote share, you must use a domain account that has read and execute permissions on that share.

1. Start the SQL Server 2016 setup wizard.

2. On the **Installation** tab, select **New SQL Server stand-alone installation or add features to an existing installation**.

   ![Install R Services (In-Database)](media/2016-setup-installation-rsvcs.png "Start installation of the database engine with R Services")

3. On the **Feature Selection** page, select the following options:

   - Select **Database Engine Services**. The database engine is required on every instance that uses machine learning.
   - Select **R Services (In-Database)**. This installs support for in-database use of R.

   ![Feature selection for R Services](media/2016setup-rsvcs-features.png "Select these features for R Services In-Database")

   > [!IMPORTANT]
   > Do not install R Services and R Server at the same time. Typically, you would install R Server (Standalone) to create an environment that a data scientist or developer uses to connect to SQL Server and deploy R solutions, so there is no need to install both on the same computer.

4. On the **Consent to install Microsoft R Open** page, click **Accept**. This license agreement is required to download Microsoft R Open, which includes a distribution of the open-source R base packages and tools, together with enhanced R packages and connectivity providers from the Microsoft R development team.

5. After you have accepted the license agreement, there is a brief pause while setup prepares. Click **Next** when the button becomes available.

6. On the **Ready to Install** page, verify that the following items are included, and then select **Install**.

   + Database Engine Services
   + R Services (In-Database)

   Note the location of the folder under the path `..\Setup Bootstrap\Log` where the configuration files are stored. When setup is complete, you can review the installed components in the summary file.

7. After setup completes, restart the computer.

## <a name="bkmk_enableFeature"></a>Enable external script execution

1. Open [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)].

   > [!TIP]
   > You can download and install the appropriate version from this page: [Download SQL Server Management Studio (SSMS)](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms).
   >
   > You can also try the preview release of [SQL Operations Studio](https://docs.microsoft.com/sql/sql-operations-studio/what-is), which supports administrative tasks and queries against SQL Server.

2. Connect to the instance where you installed Machine Learning Services, click **New Query** to open a query window, and run the following command:

   ```SQL
   sp_configure
   ```

   The value of the property `external scripts enabled` should be **0** at this point. That is because the feature is off by default. The feature must be explicitly enabled by an administrator before you can run R or Python scripts.

3. To enable the external scripting feature, run the following statement:

   ```SQL
   EXEC sp_configure 'external scripts enabled', 1
   RECONFIGURE WITH OVERRIDE
   ```

## <a name="restart-the-service"></a>Restart the service

After installation completes, restart the database engine before continuing to the next step, enabling script execution. Restarting the engine also automatically restarts the related [!INCLUDE[rsql_launchpad](../../includes/rsql-launchpad-md.md)] service.

You can restart the service by using the right-click **Restart** command for the instance in SSMS, by using the **Services** panel in Control Panel, or by using [SQL Server Configuration Manager](../../relational-databases/sql-server-configuration-manager.md).

## <a name="verify-installation"></a>Verify installation

Use the following steps to verify that all the components used to launch external scripts are running.

1. In SQL Server Management Studio, open a new query window and run the following command:

   ```SQL
   EXEC sp_configure 'external scripts enabled'
   ```

   **run_value** should now be set to 1.

2. Open the **Services** panel or SQL Server Configuration Manager, and verify that the **SQL Server Launchpad service** is running. You should have one service for every database engine instance that has R or Python installed. If the service is not running, restart it. For more information, see [Components to support Python integration](../python/new-components-in-sql-server-to-support-python-integration.md).

3. If Launchpad is running, you should be able to run simple R to verify that the external script runtimes can communicate with SQL Server. Open a new **Query** window in [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)], and then run a script such as the following:

   ```SQL
   EXEC sp_execute_external_script  @language =N'R',
   @script=N'
   OutputDataSet <- InputDataSet;
   ',
   @input_data_1 =N'SELECT 1 AS hello'
   WITH RESULT SETS (([hello] int not null));
   GO
   ```

   The script can take a little while to run the first time, while the external script runtime is loaded.

   The results should look something like this:

   | hello |
   |----|
   | 1|

## <a name="bkmk_FollowUp"></a> Additional configuration

If the external script verification step succeeded, you can run R commands from SQL Server Management Studio, Visual Studio Code, or any other client that can send T-SQL statements to the server.

If you got an error when running the command, review the additional configuration steps in this section. You may need to make additional, appropriate configurations to the service or the database. Common scenarios that require additional changes include:

* [Configure Windows firewall for inbound connections](../../database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access.md)
* [Enable additional network protocols](../../database-engine/configure-windows/enable-or-disable-a-server-network-protocol.md)
* [Enable remote connections](../../database-engine/configure-windows/configure-the-remote-access-server-configuration-option.md)
* [Extend built-in permissions to remote users](#bkmk_configureAccounts)
* [Grant permission to run external scripts](#bkmk_AllowLogon)
* [Grant access to individual databases](#permissions-db)

> [!NOTE]
> Not all of the changes listed are required, and none may be required. Requirements depend on your security scheme, where you installed SQL Server, and how you expect users to connect to the database and run external scripts.

Additional troubleshooting tips can be found here: [Upgrade and installation FAQ](../r/upgrade-and-installation-faq-sql-server-r-services.md)

### <a name="bkmk_configureAccounts"></a>Enable implied authentication for the Launchpad account group

During setup, some Windows user accounts are created for running tasks under the security token of the [!INCLUDE[rsql_launchpad_md](../../includes/rsql-launchpad-md.md)] service. When a user sends an R script from an external client, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] activates an available worker account, maps it to the identity of the calling user, and runs the R script on that user's behalf. This database engine service in support of secure execution of external scripts is called *implied authentication*.

You can view these accounts in the Windows user group **SQLRUserGroup**. By default, 20 worker accounts are created, which is usually more than enough for running R jobs.

However, if you need to run R scripts from a remote data science client and are using Windows authentication, you must grant these worker accounts permission to log in to the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] instance on your behalf.

1. In [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)], in Object Explorer, expand **Security**, right-click **Logins**, and select **New Login**.
2. In the **Login - New** dialog box, select **Search**.
3. Select the **Object Types** and **Groups** check boxes, and clear all other check boxes.
4. Click **Advanced**, verify that the search location is the current computer, and then click **Find Now**.
5. Scroll through the list of group accounts on the server until you find one beginning with `SQLRUserGroup`.

   + The name of the group associated with the Launchpad service for the _default instance_ is always **SQLRUserGroup**. Select this account only for the default instance.
   + If you use a _named instance_, the instance name is appended to the default name, `SQLRUserGroup`. Hence, if your instance is named "MLTEST", the default user group name for this instance would be **SQLRUserGroupMLTest**.

6. Click **OK** to close the advanced search dialog box, and verify that you have selected the correct account for the instance. Each instance can use only its own Launchpad service and the group created for that service.
7. Click **OK** once more to close the **Select User or Group** dialog box.
8. In the **Login - New** dialog box, click **OK**. By default, the login is assigned to the **public** role and has permission to connect to the database engine.

### <a name="bkmk_AllowLogon"></a>Give users permission to run external scripts

> [!NOTE]
> If you use a SQL login to run R scripts in a SQL Server compute context, this step is not necessary.

If you installed [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] as your own instance, you typically run scripts as an administrator, or at least as a database owner, and therefore have implicit permissions for a variety of operations, all data in the database, and the ability to install new packages as needed.

However, in an enterprise scenario, most users, including those who access the database using SQL logins, do not have such elevated permissions. Therefore, for each user who will run R scripts, you must grant that user permission to run scripts in each database where external scripts will be used.

```SQL
USE <database_name>
GO
GRANT EXECUTE ANY EXTERNAL SCRIPT TO [UserName]
```

> [!TIP]
> Need help with setup? Not sure you have run all the steps? Use these custom reports to check installation status and run additional steps.
>
> [Monitor machine learning services using custom reports](../r/monitor-r-services-using-custom-reports-in-management-studio.md).

### <a name="permissions-db"></a> Give users read, write, or DDL permissions on the database

The user account used to run R may need to read data from other databases, create new tables to store results, and write data into tables. Therefore, for each user who will run R scripts, make sure the user has the appropriate permissions on the database: *db_datareader*, *db_datawriter*, or *db_ddladmin*.

For example, the following [!INCLUDE[tsql](../../includes/tsql-md.md)] statement gives the SQL login *MySQLLogin* the rights needed to run T-SQL queries in the *RSamples* database. To run this statement, the SQL login must already exist in the security context of the server.

```SQL
USE RSamples
GO
EXEC sp_addrolemember 'db_datareader', 'MySQLLogin'
```

For more information about the permissions included in each role, see [Database-level roles](../../relational-databases/security/authentication-access/database-level-roles.md).

### <a name="create-an-odbc-data-source-for-the-instance-on-your-data-science-client"></a>Create an ODBC data source for the instance on your data science client

If you create an R solution on a data science client computer and need to run code using the SQL Server computer as the compute context, you can use either a SQL login or integrated Windows authentication.

* For SQL logins: make sure the login has appropriate permissions on the database where you will be reading data. You can do so by adding *Connect to* and *SELECT* permissions, or by adding the login to the *db_datareader* role. For logins that need to create objects, add *DDL_admin* rights. For logins that must save data to tables, add the login to the *db_datawriter* role.

* For Windows authentication: you may need to configure an ODBC data source on the data science client that specifies the instance name and other connection information. For more information, see [ODBC Data Source Administrator](https://docs.microsoft.com/sql/odbc/admin/odbc-data-source-administrator).

## <a name="suggested-optimizations"></a>Suggested optimizations

Now that you have everything up and running, you may want to optimize the server to support machine learning, or install pretrained models.

### <a name="add-more-worker-accounts"></a>Add more worker accounts

If you expect to use R heavily, or if you expect many users to run scripts concurrently, you can increase the number of worker accounts assigned to the Launchpad service. For more information, see [Modify the user account pool for SQL Server Machine Learning Services](../r/modify-the-user-account-pool-for-sql-server-r-services.md).

### <a name="bkmk_optimize"></a>Optimize the server for external script execution

The default settings for [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] setup are intended to optimize the balance of the server across a variety of services supported by the database engine, which may include extract, transform, and load (ETL) processes, reporting, auditing, and applications that use [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] data. Hence, under the default settings, resources for machine learning may sometimes be restricted or throttled, particularly in memory-intensive operations.

To ensure that machine learning jobs are prioritized and resourced appropriately, we recommend that you use SQL Server Resource Governor to configure an external resource pool. You may also want to change the amount of memory allocated to the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] database engine, or increase the number of accounts that run under the [!INCLUDE[rsql_launchpad](../../includes/rsql-launchpad-md.md)] service.

- To configure a resource pool for managing external resources, see [Create an external resource pool](../../t-sql/statements/create-external-resource-pool-transact-sql.md); a hedged sketch is included at the end of this article.
- To change the amount of memory reserved for the database, see [Server memory configuration options](../../database-engine/configure-windows/server-memory-server-configuration-options.md).
- To change the number of R accounts that can be started by [!INCLUDE[rsql_launchpad](../../includes/rsql-launchpad-md.md)], see [Modify the user account pool for machine learning](../r/modify-the-user-account-pool-for-sql-server-r-services.md).

If you are using Standard Edition and do not have Resource Governor, you can use dynamic management views (DMVs) and Extended Events, as well as Windows event monitoring, to help manage the server resources used by R. For more information, see [Monitoring and managing R Services](../r/managing-and-monitoring-r-solutions.md).

### <a name="install-additional-r-packages"></a>Install additional R packages

R solutions you create for SQL Server can call basic R functions, functions from the proprietary packages installed with SQL Server, and third-party R packages compatible with the version of open-source R installed by SQL Server.

Packages that you want to use from SQL Server must be installed in the default library used by the instance. If you have a separate installation of R on the computer, or if you installed packages to user libraries, you will not be able to use those packages from T-SQL.

The process for installing and managing R packages is different between SQL Server 2016 and SQL Server 2017. In SQL Server 2016, a database administrator must install the R packages that users need. In SQL Server 2017, you can set up user groups to share packages at the database level, or configure database roles to allow users to install their own packages. For more information, see [Install new R packages](../r/install-additional-r-packages-on-sql-server.md).

## <a name="get-help"></a>Get help

Need help with installation or upgrade? For answers to common questions and known issues, see the following article:

* [Upgrade and installation FAQ - Machine Learning Services](../r/upgrade-and-installation-faq-sql-server-r-services.md)

To check installation status of the instance and troubleshoot common issues, try these custom reports.

* [Custom reports for SQL Server R Services](../r/monitor-r-services-using-custom-reports-in-management-studio.md)

## <a name="next-steps"></a>Next steps

R developers can get started with some simple examples and learn the basics of how R works with SQL Server. For your next step, see the following links:

+ [Tutorial: Run R in T-SQL](../tutorials/rtsql-using-r-code-in-transact-sql-quickstart.md)
+ [Tutorial: In-database analytics for R developers](../tutorials/sqldev-in-database-r-for-sql-developers.md)

For machine learning examples based on real-world scenarios, see [Machine learning tutorials](../tutorials/machine-learning-services-tutorials.md).
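As promised in the Suggested optimizations section above, here is a minimal T-SQL sketch of creating an external resource pool. The pool name and percentage caps are illustrative assumptions, not recommendations from this article; see the CREATE EXTERNAL RESOURCE POOL link above for the full option list.

```SQL
-- Illustrative only: cap the external (R) runtime at 40% CPU and 40% memory.
-- The pool name and values are example choices, not recommendations.
CREATE EXTERNAL RESOURCE POOL ml_external_pool
WITH (
    MAX_CPU_PERCENT = 40,
    MAX_MEMORY_PERCENT = 40
);

-- Apply the configuration change.
ALTER RESOURCE GOVERNOR RECONFIGURE;
```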
86.848592
670
0.782972
spa_Latn
0.986449
b99cdf67f65f021cd6615f99e7cf007f8d652870
306
md
Markdown
README.md
hsadler/BlockBuilder
0173b4dc4f74fb5d0995292085c46d796b27e8c1
[ "MIT" ]
null
null
null
README.md
hsadler/BlockBuilder
0173b4dc4f74fb5d0995292085c46d796b27e8c1
[ "MIT" ]
null
null
null
README.md
hsadler/BlockBuilder
0173b4dc4f74fb5d0995292085c46d796b27e8c1
[ "MIT" ]
null
null
null
# BlockBuilder

Unity 3D block building game starter project. This starter was used to build [Block Engineer](https://hsadler.itch.io/block-engineer).

## Requirements

* Unity 3D version: `2018.3.11f1`
* git
* git lfs

## Notes

This is a block building sandbox with basic functionality. It is free to use.
25.5
88
0.751634
eng_Latn
0.98955
b99d030d4ab1641d30e8624303595f6af7684422
2,208
md
Markdown
scripting-docs/winscript/reference/idebugapplicationnode100-interface.md
tommorris/visualstudio-docs.cs-cz
92c436dbc75020bc5121cc2c9e4976f62c9b13ca
[ "CC-BY-4.0", "MIT" ]
null
null
null
scripting-docs/winscript/reference/idebugapplicationnode100-interface.md
tommorris/visualstudio-docs.cs-cz
92c436dbc75020bc5121cc2c9e4976f62c9b13ca
[ "CC-BY-4.0", "MIT" ]
null
null
null
scripting-docs/winscript/reference/idebugapplicationnode100-interface.md
tommorris/visualstudio-docs.cs-cz
92c436dbc75020bc5121cc2c9e4976f62c9b13ca
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: IDebugApplicationNode100 Interface | Microsoft Docs
ms.custom: ''
ms.date: 01/18/2017
ms.prod: windows-script-interfaces
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: reference
helpviewer_keywords:
- IDebugApplicationNode100 Interface
ms.assetid: 43966d4e-5f89-4a04-a08d-782347d00c2d
caps.latest.revision: 5
author: mikejo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: af79614d38ef55776b660329f51931be70b7f52e
ms.sourcegitcommit: aadb9588877418b8b55a5612c1d3842d4520ca4c
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 10/27/2017
ms.locfileid: "24793896"
---
# <a name="idebugapplicationnode100-interface"></a>IDebugApplicationNode100 Interface

The `IDebugApplicationNode100` interface extends the functionality of the [IDebugApplicationNode interface](../../winscript/reference/idebugapplicationnode-interface.md). You can obtain an instance of this interface by calling QueryInterface on an implementation of the [IDebugApplicationNode interface](../../winscript/reference/idebugapplicationnode-interface.md), as sketched below.

> [!IMPORTANT]
> This interface is implemented by PDM v10.0 and greater. It is found in the file activdbg100.h.

## <a name="methods"></a>Methods

The `IDebugApplicationNode100` interface provides the following methods.

|Method|Description|
|------------|-----------------|
|[IDebugApplicationNode100::GetExcludedDocuments](../../winscript/reference/idebugapplicationnode100-getexcludeddocuments.md)|Gets the text documents that are hidden based on the specified filter.|
|[IDebugApplicationNode100::QueryIsChildNode](../../winscript/reference/idebugapplicationnode100-queryischildnode.md)|Determines whether the specified document belongs to one of the child nodes of this node.|
|[IDebugApplicationNode100::SetFilterForEventSink](../../winscript/reference/idebugapplicationnode100-setfilterforeventsink.md)|Sets the filter for a given [IDebugApplicationNodeEvents interface](../../winscript/reference/idebugapplicationnodeevents-interface.md) implementation. This allows script debuggers to filter out compiler-generated child application nodes, so that the PDM no longer sends events when those nodes are created or removed. By default, all nodes are sent.|
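As noted above, an instance is obtained via QueryInterface on an `IDebugApplicationNode` implementation. The following is a minimal C++ sketch of that call; it assumes the interface (with its uuid) is declared by the activdbg headers in your SDK, and the helper function name is illustrative:

```cpp
#include <activdbg.h>     // IDebugApplicationNode
// Assumption: IDebugApplicationNode100 is declared in activdbg100.h
// (PDM v10.0+); header availability depends on your SDK, verify locally.

// pNode is an IDebugApplicationNode* obtained elsewhere (e.g., from the PDM).
HRESULT GetNode100(IDebugApplicationNode *pNode,
                   IDebugApplicationNode100 **ppNode100)
{
    // Standard COM pattern: ask the object whether it also implements the
    // extended v10.0 interface. If __uuidof is unavailable for this type in
    // your toolchain, the MIDL-generated IID_IDebugApplicationNode100 is the
    // usual alternative.
    return pNode->QueryInterface(__uuidof(IDebugApplicationNode100),
                                 reinterpret_cast<void **>(ppNode100));
    // On S_OK, *ppNode100 holds an AddRef'ed pointer; Release() it when done.
}
```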
59.675676
494
0.793478
ces_Latn
0.877716
b99d30f9eb572317075c3580625eb428dfe688c1
11,553
md
Markdown
clients/javascript/generated/README.md
ub1k24/swagger-aem
c9ae0cf8b57d27658527982d6d6653790d3acf80
[ "Apache-2.0" ]
null
null
null
clients/javascript/generated/README.md
ub1k24/swagger-aem
c9ae0cf8b57d27658527982d6d6653790d3acf80
[ "Apache-2.0" ]
null
null
null
clients/javascript/generated/README.md
ub1k24/swagger-aem
c9ae0cf8b57d27658527982d6d6653790d3acf80
[ "Apache-2.0" ]
null
null
null
# node-swagger-aem

NodeSwaggerAem - JavaScript client for node-swagger-aem

Swagger AEM is an OpenAPI specification for Adobe Experience Manager (AEM) API

This SDK is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

- API version: 3.5.0-pre.0
- Package version: 0.9.0
- Build package: org.openapitools.codegen.languages.JavascriptClientCodegen

For more information, please visit [http://shinesolutions.com](http://shinesolutions.com)

## Installation

### For [Node.js](https://nodejs.org/)

#### npm

To publish the library as an [npm](https://www.npmjs.com/) package, please follow the procedure in ["Publishing npm packages"](https://docs.npmjs.com/getting-started/publishing-npm-packages).

Then install it via:

```shell
npm install node-swagger-aem --save
```

Finally, you need to build the module:

```shell
npm run build
```

##### Local development

To use the library locally without publishing to a remote npm registry, first install the dependencies by changing into the directory containing `package.json` (and this README). Let's call this `JAVASCRIPT_CLIENT_DIR`. Then run:

```shell
npm install
```

Next, [link](https://docs.npmjs.com/cli/link) it globally in npm with the following, also from `JAVASCRIPT_CLIENT_DIR`:

```shell
npm link
```

To use the link you just defined in your project, switch to the directory you want to use your node-swagger-aem from, and run:

```shell
npm link /path/to/<JAVASCRIPT_CLIENT_DIR>
```

Finally, you need to build the module:

```shell
npm run build
```

#### git

If the library is hosted at a git repository, e.g. https://github.com/shinesolutions/swagger-aem, then install it via:

```shell
npm install shinesolutions/swagger-aem --save
```

### For browser

The library also works in the browser environment via npm and [browserify](http://browserify.org/). After following the above steps with Node.js and installing browserify with `npm install -g browserify`, perform the following (assuming *main.js* is your entry file):

```shell
browserify main.js > bundle.js
```

Then include *bundle.js* in the HTML pages.

### Webpack Configuration

Using Webpack you may encounter the following error: "Module not found: Error: Cannot resolve module". In that case you most likely need to disable the AMD loader. Add/merge the following section to your webpack config:

```javascript
module: {
  rules: [
    {
      parser: {
        amd: false
      }
    }
  ]
}
```

## Getting Started

Please follow the [installation](#installation) instruction and execute the following JS code:

```javascript
var NodeSwaggerAem = require('node-swagger-aem');

var defaultClient = NodeSwaggerAem.ApiClient.instance;

// Configure HTTP basic authorization: aemAuth
var aemAuth = defaultClient.authentications['aemAuth'];
aemAuth.username = 'YOUR USERNAME';
aemAuth.password = 'YOUR PASSWORD';

var api = new NodeSwaggerAem.ConsoleApi();

var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};

api.getAemProductInfo(callback);
```

## Documentation for API Endpoints

All URIs are relative to *http://localhost*

Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*NodeSwaggerAem.ConsoleApi* | [**getAemProductInfo**](docs/ConsoleApi.md#getAemProductInfo) | **GET** /system/console/status-productinfo.json |
*NodeSwaggerAem.ConsoleApi* | [**getConfigMgr**](docs/ConsoleApi.md#getConfigMgr) | **GET** /system/console/configMgr |
*NodeSwaggerAem.ConsoleApi* | [**postBundle**](docs/ConsoleApi.md#postBundle) | **POST** /system/console/bundles/{name} |
*NodeSwaggerAem.ConsoleApi* | [**postJmxRepository**](docs/ConsoleApi.md#postJmxRepository) | **POST** /system/console/jmx/com.adobe.granite:type&#x3D;Repository/op/{action} |
*NodeSwaggerAem.ConsoleApi* | [**postSamlConfiguration**](docs/ConsoleApi.md#postSamlConfiguration) | **POST** /system/console/configMgr/com.adobe.granite.auth.saml.SamlAuthenticationHandler |
*NodeSwaggerAem.CqApi* | [**getLoginPage**](docs/CqApi.md#getLoginPage) | **GET** /libs/granite/core/content/login.html |
*NodeSwaggerAem.CqApi* | [**postCqActions**](docs/CqApi.md#postCqActions) | **POST** /.cqactions.html |
*NodeSwaggerAem.CrxApi* | [**getCrxdeStatus**](docs/CrxApi.md#getCrxdeStatus) | **GET** /crx/server/crx.default/jcr:root/.1.json |
*NodeSwaggerAem.CrxApi* | [**getInstallStatus**](docs/CrxApi.md#getInstallStatus) | **GET** /crx/packmgr/installstatus.jsp |
*NodeSwaggerAem.CrxApi* | [**getPackageManagerServlet**](docs/CrxApi.md#getPackageManagerServlet) | **GET** /crx/packmgr/service/script.html |
*NodeSwaggerAem.CrxApi* | [**postPackageService**](docs/CrxApi.md#postPackageService) | **POST** /crx/packmgr/service.jsp |
*NodeSwaggerAem.CrxApi* | [**postPackageServiceJson**](docs/CrxApi.md#postPackageServiceJson) | **POST** /crx/packmgr/service/.json/{path} |
*NodeSwaggerAem.CrxApi* | [**postPackageUpdate**](docs/CrxApi.md#postPackageUpdate) | **POST** /crx/packmgr/update.jsp |
*NodeSwaggerAem.CrxApi* | [**postSetPassword**](docs/CrxApi.md#postSetPassword) | **POST** /crx/explorer/ui/setpassword.jsp |
*NodeSwaggerAem.CustomApi* | [**getAemHealthCheck**](docs/CustomApi.md#getAemHealthCheck) | **GET** /system/health |
*NodeSwaggerAem.CustomApi* | [**postConfigAemHealthCheckServlet**](docs/CustomApi.md#postConfigAemHealthCheckServlet) | **POST** /apps/system/config/com.shinesolutions.healthcheck.hc.impl.ActiveBundleHealthCheck |
*NodeSwaggerAem.CustomApi* | [**postConfigAemPasswordReset**](docs/CustomApi.md#postConfigAemPasswordReset) | **POST** /apps/system/config/com.shinesolutions.aem.passwordreset.Activator |
*NodeSwaggerAem.GraniteApi* | [**sslSetup**](docs/GraniteApi.md#sslSetup) | **POST** /libs/granite/security/post/sslSetup.html |
*NodeSwaggerAem.SlingApi* | [**deleteAgent**](docs/SlingApi.md#deleteAgent) | **DELETE** /etc/replication/agents.{runmode}/{name} |
*NodeSwaggerAem.SlingApi* | [**deleteNode**](docs/SlingApi.md#deleteNode) | **DELETE** /{path}/{name} |
*NodeSwaggerAem.SlingApi* | [**getAgent**](docs/SlingApi.md#getAgent) | **GET** /etc/replication/agents.{runmode}/{name} |
*NodeSwaggerAem.SlingApi* | [**getAgents**](docs/SlingApi.md#getAgents) | **GET** /etc/replication/agents.{runmode}.-1.json |
*NodeSwaggerAem.SlingApi* | [**getAuthorizableKeystore**](docs/SlingApi.md#getAuthorizableKeystore) | **GET** /{intermediatePath}/{authorizableId}.ks.json |
*NodeSwaggerAem.SlingApi* | [**getKeystore**](docs/SlingApi.md#getKeystore) | **GET** /{intermediatePath}/{authorizableId}/keystore/store.p12 |
*NodeSwaggerAem.SlingApi* | [**getNode**](docs/SlingApi.md#getNode) | **GET** /{path}/{name} |
*NodeSwaggerAem.SlingApi* | [**getPackage**](docs/SlingApi.md#getPackage) | **GET** /etc/packages/{group}/{name}-{version}.zip |
*NodeSwaggerAem.SlingApi* | [**getPackageFilter**](docs/SlingApi.md#getPackageFilter) | **GET** /etc/packages/{group}/{name}-{version}.zip/jcr:content/vlt:definition/filter.tidy.2.json |
*NodeSwaggerAem.SlingApi* | [**getQuery**](docs/SlingApi.md#getQuery) | **GET** /bin/querybuilder.json |
*NodeSwaggerAem.SlingApi* | [**getTruststore**](docs/SlingApi.md#getTruststore) | **GET** /etc/truststore/truststore.p12 |
*NodeSwaggerAem.SlingApi* | [**getTruststoreInfo**](docs/SlingApi.md#getTruststoreInfo) | **GET** /libs/granite/security/truststore.json |
*NodeSwaggerAem.SlingApi* | [**postAgent**](docs/SlingApi.md#postAgent) | **POST** /etc/replication/agents.{runmode}/{name} |
*NodeSwaggerAem.SlingApi* | [**postAuthorizableKeystore**](docs/SlingApi.md#postAuthorizableKeystore) | **POST** /{intermediatePath}/{authorizableId}.ks.html |
*NodeSwaggerAem.SlingApi* | [**postAuthorizables**](docs/SlingApi.md#postAuthorizables) | **POST** /libs/granite/security/post/authorizables |
*NodeSwaggerAem.SlingApi* | [**postConfigAdobeGraniteSamlAuthenticationHandler**](docs/SlingApi.md#postConfigAdobeGraniteSamlAuthenticationHandler) | **POST** /apps/system/config/com.adobe.granite.auth.saml.SamlAuthenticationHandler.config |
*NodeSwaggerAem.SlingApi* | [**postConfigApacheFelixJettyBasedHttpService**](docs/SlingApi.md#postConfigApacheFelixJettyBasedHttpService) | **POST** /apps/system/config/org.apache.felix.http |
*NodeSwaggerAem.SlingApi* | [**postConfigApacheHttpComponentsProxyConfiguration**](docs/SlingApi.md#postConfigApacheHttpComponentsProxyConfiguration) | **POST** /apps/system/config/org.apache.http.proxyconfigurator.config |
*NodeSwaggerAem.SlingApi* | [**postConfigApacheSlingDavExServlet**](docs/SlingApi.md#postConfigApacheSlingDavExServlet) | **POST** /apps/system/config/org.apache.sling.jcr.davex.impl.servlets.SlingDavExServlet |
*NodeSwaggerAem.SlingApi* | [**postConfigApacheSlingGetServlet**](docs/SlingApi.md#postConfigApacheSlingGetServlet) | **POST** /apps/system/config/org.apache.sling.servlets.get.DefaultGetServlet |
*NodeSwaggerAem.SlingApi* | [**postConfigApacheSlingReferrerFilter**](docs/SlingApi.md#postConfigApacheSlingReferrerFilter) | **POST** /apps/system/config/org.apache.sling.security.impl.ReferrerFilter |
*NodeSwaggerAem.SlingApi* | [**postConfigProperty**](docs/SlingApi.md#postConfigProperty) | **POST** /apps/system/config/{configNodeName} |
*NodeSwaggerAem.SlingApi* | [**postNode**](docs/SlingApi.md#postNode) | **POST** /{path}/{name} |
*NodeSwaggerAem.SlingApi* | [**postNodeRw**](docs/SlingApi.md#postNodeRw) | **POST** /{path}/{name}.rw.html |
*NodeSwaggerAem.SlingApi* | [**postPath**](docs/SlingApi.md#postPath) | **POST** /{path}/ |
*NodeSwaggerAem.SlingApi* | [**postQuery**](docs/SlingApi.md#postQuery) | **POST** /bin/querybuilder.json |
*NodeSwaggerAem.SlingApi* | [**postTreeActivation**](docs/SlingApi.md#postTreeActivation) | **POST** /etc/replication/treeactivation.html |
*NodeSwaggerAem.SlingApi* | [**postTruststore**](docs/SlingApi.md#postTruststore) | **POST** /libs/granite/security/post/truststore |
*NodeSwaggerAem.SlingApi* | [**postTruststorePKCS12**](docs/SlingApi.md#postTruststorePKCS12) | **POST** /etc/truststore |

## Documentation for Models

- [NodeSwaggerAem.InlineObject](docs/InlineObject.md)
- [NodeSwaggerAem.InlineObject1](docs/InlineObject1.md)
- [NodeSwaggerAem.InlineObject2](docs/InlineObject2.md)
- [NodeSwaggerAem.InlineObject3](docs/InlineObject3.md)
- [NodeSwaggerAem.InlineObject4](docs/InlineObject4.md)
- [NodeSwaggerAem.InlineObject5](docs/InlineObject5.md)
- [NodeSwaggerAem.InstallStatus](docs/InstallStatus.md)
- [NodeSwaggerAem.InstallStatusStatus](docs/InstallStatusStatus.md)
- [NodeSwaggerAem.KeystoreChainItems](docs/KeystoreChainItems.md)
- [NodeSwaggerAem.KeystoreInfo](docs/KeystoreInfo.md)
- [NodeSwaggerAem.KeystoreItems](docs/KeystoreItems.md)
- [NodeSwaggerAem.SamlConfigurationInfo](docs/SamlConfigurationInfo.md)
- [NodeSwaggerAem.SamlConfigurationProperties](docs/SamlConfigurationProperties.md)
- [NodeSwaggerAem.SamlConfigurationPropertyItemsArray](docs/SamlConfigurationPropertyItemsArray.md)
- [NodeSwaggerAem.SamlConfigurationPropertyItemsBoolean](docs/SamlConfigurationPropertyItemsBoolean.md)
- [NodeSwaggerAem.SamlConfigurationPropertyItemsLong](docs/SamlConfigurationPropertyItemsLong.md)
- [NodeSwaggerAem.SamlConfigurationPropertyItemsString](docs/SamlConfigurationPropertyItemsString.md)
- [NodeSwaggerAem.TruststoreInfo](docs/TruststoreInfo.md)
- [NodeSwaggerAem.TruststoreItems](docs/TruststoreItems.md)

## Documentation for Authorization

### aemAuth

- **Type**: HTTP basic authentication
242
0.753657
yue_Hant
0.784034
b99d859ce8a9bd5042c3ccac7ab770c863f4add5
1,499
md
Markdown
_posts/2017-08-10-getting-the-physical-print-size-in-millimetres-of-an-image-in-c.md
benbhall/benbhall.github.io
a9d011034fccfda15c8abf72745f554ceeb2a580
[ "MIT" ]
null
null
null
_posts/2017-08-10-getting-the-physical-print-size-in-millimetres-of-an-image-in-c.md
benbhall/benbhall.github.io
a9d011034fccfda15c8abf72745f554ceeb2a580
[ "MIT" ]
6
2020-02-07T08:20:17.000Z
2021-09-24T09:23:02.000Z
_posts/2017-08-10-getting-the-physical-print-size-in-millimetres-of-an-image-in-c.md
benbhall/benbhall.github.io
a9d011034fccfda15c8abf72745f554ceeb2a580
[ "MIT" ]
null
null
null
---
title: C# Image class extension methods for physical print size in millimetres
date: 2017-08-10
excerpt: A couple of extension methods that use the width and height in pixels, along with the DPI, to return an Image object's width and height in mm.
permalink: /getting-the-physical-print-size-in-millimetres-of-an-image-in-c/
redirect_from:
  - /2017/08/10/getting-the-physical-print-size-in-millimetres-of-an-image-in-c/
categories:
  - 'C# .NET'
tags:
  - .NET
  - 'C#'
---

We had a business need for the actual print size of an image in millimetres. The `PhysicalDimension` property on `System.Drawing.Image` was unfortunately not what I had hoped for.

Here's a couple of extension methods that use the width and height in pixels, along with the DPI (as `HorizontalResolution` and `VerticalResolution`), to return an `Image` object's width and height in mm. Since there are 25.4 mm to an inch, the conversion is simply pixels / DPI * 25.4; for example, a 600-pixel-wide image at 300 DPI prints at 600 / 300 * 25.4 = 50.8 mm wide.

```csharp
public static class ImageExtensions
{
    public static double WidthInMm(this Image img)
    {
        const double mmPerInch = 25.4;
        return Convert.ToDouble(img.Width / img.HorizontalResolution * mmPerInch);
    }

    public static double HeightInMm(this Image img)
    {
        const double mmPerInch = 25.4;
        return Convert.ToDouble(img.Height / img.VerticalResolution * mmPerInch);
    }
}
```

Usage example for those new to using extension methods:

```csharp
var bitmapImage = new Bitmap(filename);
var bitmapWidthInMm = bitmapImage.WidthInMm();
```
32.586957
203
0.707138
eng_Latn
0.916063
b99d8ca40a2029cf6955475b1b9fb14f8be14f19
14,329
md
Markdown
2020-05/2020-05-08.md
admariner/trending_archive
ffca6522c4fdaa5203c3f18d9a5a7d7da098169c
[ "MIT" ]
80
2015-02-13T16:52:22.000Z
2022-03-10T20:13:08.000Z
2020-05/2020-05-08.md
admariner/trending_archive
ffca6522c4fdaa5203c3f18d9a5a7d7da098169c
[ "MIT" ]
65
2021-10-02T05:54:01.000Z
2021-12-28T22:50:23.000Z
2020-05/2020-05-08.md
admariner/trending_archive
ffca6522c4fdaa5203c3f18d9a5a7d7da098169c
[ "MIT" ]
16
2015-10-08T11:06:28.000Z
2021-06-30T07:26:49.000Z
### 2020-05-08

#### python

* [manchenkoff/skillbox-async-messenger](https://github.com/manchenkoff/skillbox-async-messenger): - Python Skillbox
* [shmilylty/OneForAll](https://github.com/shmilylty/OneForAll): OneForAll
* [kubernetes-client/python](https://github.com/kubernetes-client/python): Official Python client library for kubernetes
* [willmcgugan/rich](https://github.com/willmcgugan/rich): Rich is a Python library for rich text and beautiful formatting in the terminal.
* [luigifreda/pyslam](https://github.com/luigifreda/pyslam): pySLAM contains a monocular Visual Odometry (VO) pipeline in Python. It supports many modern local features based on Deep Learning.
* [shengqiangzhang/examples-of-web-crawlers](https://github.com/shengqiangzhang/examples-of-web-crawlers): Some interesting examples of python crawlers that are friendly to beginners.
* [shenweichen/DeepCTR](https://github.com/shenweichen/DeepCTR): Easy-to-use, Modular and Extendible package of deep-learning based CTR models.
* [Avik-Jain/100-Days-Of-ML-Code](https://github.com/Avik-Jain/100-Days-Of-ML-Code): 100 Days of ML Coding
* [Azure/azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python): This repository is for active development of the Azure SDK for Python. For consumers of the SDK we recommend visiting our public developer docs at https://docs.microsoft.com/en-us/python/azure/ or our versioned developer docs at https://azure.github.io/azure-sdk-for-python.
* [tensorflow/models](https://github.com/tensorflow/models): Models and examples built with TensorFlow
* [hankcs/HanLP](https://github.com/hankcs/HanLP): Natural Language Processing for the next decade. Tokenization, Part-of-Speech Tagging, Named Entity Recognition, Syntactic & Semantic Dependency Parsing, Document Classification
* [stellargraph/stellargraph](https://github.com/stellargraph/stellargraph): StellarGraph - Machine Learning on Graphs
* [0voice/interview_internal_reference](https://github.com/0voice/interview_internal_reference): 2019
* [wkentaro/labelme](https://github.com/wkentaro/labelme): Image Polygonal Annotation with Python (polygon, rectangle, circle, line, point and image-level flag annotation).
* [clovaai/CRAFT-pytorch](https://github.com/clovaai/CRAFT-pytorch): Official implementation of Character Region Awareness for Text Detection (CRAFT)
* [bojone/bert4keras](https://github.com/bojone/bert4keras): light reimplement of bert for keras
* [kritiksoman/GIMP-ML](https://github.com/kritiksoman/GIMP-ML): Set of Machine Learning Python plugins for GIMP
* [MVIG-SJTU/AlphaPose](https://github.com/MVIG-SJTU/AlphaPose): Real-Time and Accurate Multi-Person Pose Estimation&Tracking System
* [automl/auto-sklearn](https://github.com/automl/auto-sklearn): Automated Machine Learning with scikit-learn
* [milesial/Pytorch-UNet](https://github.com/milesial/Pytorch-UNet): PyTorch implementation of the U-Net for image semantic segmentation with high quality images
* [cloud-custodian/cloud-custodian](https://github.com/cloud-custodian/cloud-custodian): Rules engine for cloud security, cost optimization, and governance, DSL in yaml for policies to query, filter, and take actions on resources
* [mbadry1/DeepLearning.ai-Summary](https://github.com/mbadry1/DeepLearning.ai-Summary): This repository contains my personal notes and summaries on DeepLearning.ai specialization courses. I've enjoyed every little bit of the course hope you enjoy my notes too.
* [lyhue1991/eat_tensorflow2_in_30_days](https://github.com/lyhue1991/eat_tensorflow2_in_30_days): Tensorflow2.0 is delicious, just eat it!
* [irahorecka/YouTube2Audio](https://github.com/irahorecka/YouTube2Audio): A desktop application to download YouTube videos as annotated MP3 or MP4 files.
* [frappe/frappe](https://github.com/frappe/frappe): Low Code Open Source Framework in Python and JS

#### go

* [keybase/client](https://github.com/keybase/client): Keybase Go Library, Client, Service, OS X, iOS, Android, Electron
* [evanw/esbuild](https://github.com/evanw/esbuild): An extremely fast JavaScript bundler and minifier
* [casbin/casbin](https://github.com/casbin/casbin): An authorization library that supports access control models like ACL, RBAC, ABAC in Golang
* [coreos/prometheus-operator](https://github.com/coreos/prometheus-operator): Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes
* [iikira/BaiduPCS-Go](https://github.com/iikira/BaiduPCS-Go): - Go
* [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes): Production-Grade Container Scheduling and Management
* [nektos/act](https://github.com/nektos/act): Run your GitHub Actions locally
* [GoogleCloudPlatform/buildpacks](https://github.com/GoogleCloudPlatform/buildpacks): Builders and buildpacks designed to run on Google Cloud's container platforms
* [ehang-io/nps](https://github.com/ehang-io/nps): A lightweight, high-performance, powerful intranet penetration proxy server (tcp/udp/socks5/http/ssh/dns/web), with a powerful web management terminal.
* [stretchr/testify](https://github.com/stretchr/testify): A toolkit with common assertions and mocks that plays nicely with the standard library
* [helm/helm](https://github.com/helm/helm): The Kubernetes Package Manager
* [gruntwork-io/terragrunt](https://github.com/gruntwork-io/terragrunt): Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules.
* [aws/aws-sdk-go](https://github.com/aws/aws-sdk-go): AWS SDK for the Go programming language.
* [astaxie/beego](https://github.com/astaxie/beego): beego is an open-source, high-performance web framework for the Go programming language.
* [esrrhs/pingtunnel](https://github.com/esrrhs/pingtunnel): A tool that advertises tcp/udp/socks5 traffic as icmp traffic for forwarding.
* [terraform-providers/terraform-provider-aws](https://github.com/terraform-providers/terraform-provider-aws): Terraform AWS provider
* [spf13/cobra](https://github.com/spf13/cobra): A Commander for modern Go CLI interactions
* [go-redis/redis](https://github.com/go-redis/redis): Type-safe Redis client for Golang
* [gin-gonic/gin](https://github.com/gin-gonic/gin): Gin is a HTTP web framework written in Go (Golang). It features a Martini-like API with much better performance -- up to 40 times faster. If you need smashing performance, get yourself some Gin.
* [unknwon/the-way-to-go_ZH_CN](https://github.com/unknwon/the-way-to-go_ZH_CN): The Way to Go
* [istio/istio](https://github.com/istio/istio): Connect, secure, control, and observe services.
* [cdr/sshcode](https://github.com/cdr/sshcode): Run VS Code on any server over SSH.
* [google/cadvisor](https://github.com/google/cadvisor): Analyzes resource usage and performance characteristics of running containers.
* [helm/charts](https://github.com/helm/charts): Curated applications for Kubernetes
* [sirupsen/logrus](https://github.com/sirupsen/logrus): Structured, pluggable logging for Go.

#### cpp

* [mrc-ide/covid-sim](https://github.com/mrc-ide/covid-sim):
* [ouyanghuiyu/chineseocr_lite](https://github.com/ouyanghuiyu/chineseocr_lite): ocr, ncnn, psenet(8.5M) + crnn(6.3M) + anglenet(1.5M) 17M
* [grpc/grpc](https://github.com/grpc/grpc): The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)
* [weolar/miniblink49](https://github.com/weolar/miniblink49): a lighter, faster browser kernel of blink to integrate HTML UI in your app.
* [PCSX2/pcsx2](https://github.com/PCSX2/pcsx2): PCSX2 - The Playstation 2 Emulator
* [protocolbuffers/protobuf](https://github.com/protocolbuffers/protobuf): Protocol Buffers - Google's data interchange format
* [apache/arrow](https://github.com/apache/arrow): Apache Arrow is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware. It also provides computational libraries and zero-copy streaming messaging and interprocess communication
* [google/flatbuffers](https://github.com/google/flatbuffers): FlatBuffers: Memory Efficient Serialization Library
* [ossrs/srs](https://github.com/ossrs/srs): SRS is a RTMP/HLS/WebRTC/SRT/GB28181 streaming cluster, high efficiency, stable and simple.
* [apache/thrift](https://github.com/apache/thrift): Apache Thrift
* [Tencent/ncnn](https://github.com/Tencent/ncnn): ncnn is a high-performance neural network inference framework optimized for the mobile platform
* [nccgroup/SocksOverRDP](https://github.com/nccgroup/SocksOverRDP): Socks5 Proxy support for Remote Desktop Protocol / Terminal Services
* [electron/electron](https://github.com/electron/electron): Build cross-platform desktop apps with JavaScript, HTML, and CSS
* [PaddlePaddle/Paddle](https://github.com/PaddlePaddle/Paddle): PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice
* [google/googletest](https://github.com/google/googletest): Googletest - Google Testing and Mocking Framework
* [vnpy/vnpy](https://github.com/vnpy/vnpy): Python
* [Tencent/mars](https://github.com/Tencent/mars): Mars is a cross-platform network component developed by WeChat.
* [dmlc/xgboost](https://github.com/dmlc/xgboost): Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Flink and DataFlow
* [Tencent/rapidjson](https://github.com/Tencent/rapidjson): A fast JSON parser/generator for C++ with both SAX/DOM style API
* [yhirose/cpp-httplib](https://github.com/yhirose/cpp-httplib): A C++ header-only HTTP/HTTPS server and client library
* [HKUST-Aerial-Robotics/VINS-Mono](https://github.com/HKUST-Aerial-Robotics/VINS-Mono): A Robust and Versatile Monocular Visual-Inertial State Estimator
* [PX4/Firmware](https://github.com/PX4/Firmware): PX4 Autopilot Software
* [IntelRealSense/realsense-ros](https://github.com/IntelRealSense/realsense-ros): Intel(R) RealSense(TM) ROS Wrapper for D400 series, SR300 Camera and T265 Tracking Module
* [googleprojectzero/SkCodecFuzzer](https://github.com/googleprojectzero/SkCodecFuzzer): Fuzzing harness for testing proprietary image codecs supported by Skia on Android
* [pybind/pybind11](https://github.com/pybind/pybind11): Seamless operability between C++11 and Python

#### javascript

* [sudheerj/reactjs-interview-questions](https://github.com/sudheerj/reactjs-interview-questions): List of top 500 ReactJS Interview Questions & Answers....Coding exercise questions are coming soon!!
* [ljianshu/Blog](https://github.com/ljianshu/Blog): []
* [umijs/umi](https://github.com/umijs/umi): Pluggable enterprise-level react application framework.
* [MicrosoftDocs/office-docs-powershell](https://github.com/MicrosoftDocs/office-docs-powershell): PowerShell Reference for Office Products - Short URL: aka.ms/office-powershell
* [swagger-api/swagger-ui](https://github.com/swagger-api/swagger-ui): Swagger UI is a collection of HTML, Javascript, and CSS assets that dynamically generate beautiful documentation from a Swagger-compliant API.
* [amejiarosario/dsa.js-data-structures-algorithms-javascript](https://github.com/amejiarosario/dsa.js-data-structures-algorithms-javascript): Data Structures and Algorithms explained and implemented in JavaScript
* [jhu-ep-coursera/fullstack-course4](https://github.com/jhu-ep-coursera/fullstack-course4): Example code for HTML, CSS, and Javascript for Web Developers Coursera Course
* [apache/incubator-echarts](https://github.com/apache/incubator-echarts): A powerful, interactive charting and visualization library for browser
* [bvaughn/react-virtualized](https://github.com/bvaughn/react-virtualized): React components for efficiently rendering large lists and tabular data
* [gpuweb/gpuweb](https://github.com/gpuweb/gpuweb): Where the GPU for the Web work happens!
* [oldj/SwitchHosts](https://github.com/oldj/SwitchHosts): Switch hosts quickly!
* [goldbergyoni/nodebestpractices](https://github.com/goldbergyoni/nodebestpractices): The Node.js best practices list (April 2020)
* [Fugiman/google-meet-grid-view](https://github.com/Fugiman/google-meet-grid-view): Userscript to offer a grid-view layout in Google Meets
* [MarkerHub/eblog](https://github.com/MarkerHub/eblog): eblog: Springboot 2.1 + Freemarker + shiro + redis + t-io + websocket + rabbitmq + elasticsearch
* [cypress-io/cypress](https://github.com/cypress-io/cypress): Fast, easy and reliable testing for anything that runs in a browser.
* [d3/d3](https://github.com/d3/d3): Bring data to life with SVG, Canvas and HTML.
* [atlassian/react-beautiful-dnd](https://github.com/atlassian/react-beautiful-dnd): Beautiful and accessible drag and drop for lists with React
* [plotly/falcon](https://github.com/plotly/falcon): Free, open-source SQL client for Windows and Mac
* [alvarotrigo/fullPage.js](https://github.com/alvarotrigo/fullPage.js): fullPage plugin by Alvaro Trigo. Create full screen pages fast and simple
* [openlayers/openlayers](https://github.com/openlayers/openlayers): OpenLayers
* [mozilla/pdf.js](https://github.com/mozilla/pdf.js): PDF Reader in JavaScript
* [travist/jsencrypt](https://github.com/travist/jsencrypt): A Javascript library to perform OpenSSL RSA Encryption, Decryption, and Key Generation.
* [ansible/awx](https://github.com/ansible/awx): AWX Project
* [renovatebot/renovate](https://github.com/renovatebot/renovate): Universal dependency update tool that fits into your workflows.
* [Hacker0x01/react-datepicker](https://github.com/Hacker0x01/react-datepicker): A simple and reusable datepicker component for React

#### coffeescript

* [nicolaskruchten/pivottable](https://github.com/nicolaskruchten/pivottable): Open-source Javascript Pivot Table (aka Pivot Grid, Pivot Chart, Cross-Tab) implementation with drag'n'drop.
* [sparanoid/chinese-copywriting-guidelines](https://github.com/sparanoid/chinese-copywriting-guidelines): Chinese copywriting guidelines for better written communication
* [atom/spell-check](https://github.com/atom/spell-check): Spell check Atom package
* [yakyak/yakyak](https://github.com/yakyak/yakyak): Desktop chat client for Google Hangouts
* [tractical/hubot](https://github.com/tractical/hubot): A Hubot clone trained to interact in our Campfire rooms.
122.470085
398
0.788052
eng_Latn
0.347606
b99dd1fc43cce712ee0d98a4b296aaa0b773aab2
32,797
md
Markdown
docs/standard/microservices-architecture/multi-container-microservice-net-applications/data-driven-crud-microservice.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/microservices-architecture/multi-container-microservice-net-applications/data-driven-crud-microservice.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/microservices-architecture/multi-container-microservice-net-applications/data-driven-crud-microservice.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Creating a simple data-driven CRUD microservice
description: .NET Microservices Architecture for Containerized .NET Applications | Learn how to create a simple CRUD (data-driven) microservice in the context of a microservices application.
author: CESARDELATORRE
ms.author: wiwagn
ms.date: 10/02/2018
ms.openlocfilehash: c6316717f78dffb672afdf79e919fd1bd7874b52
ms.sourcegitcommit: ccd8c36b0d74d99291d41aceb14cf98d74dc9d2b
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 12/10/2018
ms.locfileid: "53149580"
---
# <a name="creating-a-simple-data-driven-crud-microservice"></a>Creating a simple data-driven CRUD microservice

This section outlines how to create a simple microservice that performs create, read, update, and delete (CRUD) operations on a data source.

## <a name="designing-a-simple-crud-microservice"></a>Designing a simple CRUD microservice

From a design point of view, this type of containerized microservice is very simple. Perhaps the problem to solve is simple, or the implementation is only a proof of concept.

![A simple CRUD microservice is an internal design pattern.](./media/image4.png)

**Figure 6-4**. Internal design of simple CRUD microservices

An example of this kind of simple data-driven service is the catalog microservice from the eShopOnContainers sample application. This type of service implements all its functionality in a single ASP.NET Core Web API project that includes classes for its data model, its business logic, and its data access code. It also stores its related data in a database running in SQL Server (as another container for dev/test purposes), but it could also be any regular SQL Server host, as shown in Figure 6-5.

![The logical Catalog microservice includes its Catalog database, which can be in or out of the same Docker host. Having the database in the same Docker host is good for development, but not for production.](./media/image5.png)

**Figure 6-5**. Simple data-driven/CRUD microservice design

To develop this type of service, you only need [ASP.NET Core](https://docs.microsoft.com/aspnet/core/) and an ORM or data-access API such as [Entity Framework Core](https://docs.microsoft.com/ef/core/index). You could also generate [Swagger](https://swagger.io/) metadata automatically through [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) to provide a description of what your service offers, as explained in the next section.

Note that running a database server like SQL Server within a Docker container is great for development environments, because you can have all your dependencies up and running without needing to provision a database locally or in the cloud. This is very convenient when running integration tests. However, running a database server in a container is not recommended for production environments, because you usually do not get high availability with that approach. For a production environment in Azure, we recommend that you use Azure SQL Database or any other database technology that can provide high availability and high scalability. For example, for a NoSQL approach, you might choose CosmosDB.
Finally, by editing the Dockerfile and docker-compose.yml metadata files, you can configure how the image of this container will be created, that is, which base image it will use, plus design settings such as internal and external names and TCP ports.

## <a name="implementing-a-simple-crud-microservice-with-aspnet-core"></a>Implementing a simple CRUD microservice with ASP.NET Core

To implement a simple CRUD microservice using .NET Core and Visual Studio, you start by creating a simple ASP.NET Core Web API project (running on .NET Core so it can run on a Linux Docker host), as shown in Figure 6-6.

![To create an ASP.NET Core Web API project, first select an ASP.NET Core Web Application and then select the API type.](./media/image6.png)

**Figure 6-6**. Creating an ASP.NET Core Web API project in Visual Studio

After creating the project, you can implement your MVC controllers as you would in any other Web API project, using the Entity Framework API or another API. In a new Web API project, you can see that the only dependency you have in that microservice is on ASP.NET Core itself. Internally, within the *Microsoft.AspNetCore.All* dependency, it is referencing Entity Framework and many other .NET Core NuGet packages, as shown in Figure 6-7.

![The API project includes references to the Microsoft.AspNetCore.App NuGet package, which includes references to all essential packages. It could include some other packages as well.](./media/image8.png)

**Figure 6-7**. Dependencies in a simple CRUD Web API microservice

### <a name="implementing-crud-web-api-services-with-entity-framework-core"></a>Implementing CRUD Web API services with Entity Framework Core

Entity Framework (EF) Core is a lightweight, extensible, and cross-platform version of the popular Entity Framework data access technology. EF Core is an object-relational mapper (ORM) that enables .NET developers to work with a database using .NET objects.

The catalog microservice uses EF and the SQL Server provider because its database is running in a container with the SQL Server for Linux Docker image. However, the database could be deployed into any SQL Server, such as Windows on-premises or Azure SQL Database. The only thing you would need to change is the connection string in the ASP.NET Web API microservice.

#### <a name="the-data-model"></a>The data model

With EF Core, data access is performed by using a model. A model is made up of entity classes (the domain model) and a derived context (DbContext) that represents a session with the database, allowing you to query and save data. You can generate a model from an existing database, manually code a model to match your database, or use EF migrations to create a database from your model, using the Code First approach (which makes it easy to evolve the database as your model changes over time). For the catalog microservice, we are using the last approach. You can see an example of the CatalogItem entity class in the following code example, which is a standard Plain Old CLR Object ([POCO](https://en.wikipedia.org/wiki/Plain_Old_CLR_Object)) entity class.
```csharp
public class CatalogItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
    public string PictureFileName { get; set; }
    public string PictureUri { get; set; }
    public int CatalogTypeId { get; set; }
    public CatalogType CatalogType { get; set; }
    public int CatalogBrandId { get; set; }
    public CatalogBrand CatalogBrand { get; set; }
    public int AvailableStock { get; set; }
    public int RestockThreshold { get; set; }
    public int MaxStockThreshold { get; set; }
    public bool OnReorder { get; set; }

    public CatalogItem() { }

    // Additional code ...
}
```

You also need a DbContext that represents a session with the database. For the catalog microservice, the CatalogContext class derives from the DbContext base class, as shown in the following example:

```csharp
public class CatalogContext : DbContext
{
    public CatalogContext(DbContextOptions<CatalogContext> options) : base(options)
    {
    }

    public DbSet<CatalogItem> CatalogItems { get; set; }
    public DbSet<CatalogBrand> CatalogBrands { get; set; }
    public DbSet<CatalogType> CatalogTypes { get; set; }

    // Additional code ...
}
```

You can have additional `DbContext` implementations. For example, in the sample Catalog.API microservice, there's a second `DbContext` named `CatalogContextSeed`, which automatically populates the sample data the first time the database is accessed. This method is useful for demo data and for automated testing scenarios, too.

Within the `DbContext`, you use the `OnModelCreating` method to customize object/database entity mappings and other [EF extensibility points](https://blogs.msdn.microsoft.com/dotnet/2016/09/29/implementing-seeding-custom-conventions-and-interceptors-in-ef-core-1-0/).
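To make that extensibility point concrete, the following is a minimal `OnModelCreating` sketch. The table name and the column constraints shown here are assumptions for illustration only; they are not the actual mappings used by eShopOnContainers.

```csharp
public class CatalogContext : DbContext
{
    // ... constructor and DbSet properties as shown above ...

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<CatalogItem>(item =>
        {
            item.ToTable("Catalog");          // hypothetical table name
            item.HasKey(ci => ci.Id);         // primary key

            item.Property(ci => ci.Name)
                .IsRequired()
                .HasMaxLength(50);            // illustrative constraint

            item.Property(ci => ci.Price)
                .IsRequired();
        });
    }
}
```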
##### <a name="querying-data-from-web-api-controllers"></a>Querying data from Web API controllers

Instances of your entity classes are typically retrieved from the database using Language Integrated Query (LINQ), as shown in the following example:

```csharp
[Route("api/v1/[controller]")]
public class CatalogController : ControllerBase
{
    private readonly CatalogContext _catalogContext;
    private readonly CatalogSettings _settings;
    private readonly ICatalogIntegrationEventService _catalogIntegrationEventService;

    public CatalogController(
        CatalogContext context,
        IOptionsSnapshot<CatalogSettings> settings,
        ICatalogIntegrationEventService catalogIntegrationEventService)
    {
        _catalogContext = context ?? throw new ArgumentNullException(nameof(context));
        _catalogIntegrationEventService = catalogIntegrationEventService ??
            throw new ArgumentNullException(nameof(catalogIntegrationEventService));
        _settings = settings.Value;

        ((DbContext)context).ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
    }

    // GET api/v1/[controller]/items[?pageSize=3&pageIndex=10]
    [HttpGet]
    [Route("[action]")]
    [ProducesResponseType(typeof(PaginatedItemsViewModel<CatalogItem>), (int)HttpStatusCode.OK)]
    public async Task<IActionResult> Items([FromQuery]int pageSize = 10, [FromQuery]int pageIndex = 0)
    {
        var totalItems = await _catalogContext.CatalogItems
            .LongCountAsync();

        var itemsOnPage = await _catalogContext.CatalogItems
            .OrderBy(c => c.Name)
            .Skip(pageSize * pageIndex)
            .Take(pageSize)
            .ToListAsync();

        itemsOnPage = ChangeUriPlaceholder(itemsOnPage);

        var model = new PaginatedItemsViewModel<CatalogItem>(
            pageIndex, pageSize, totalItems, itemsOnPage);

        return Ok(model);
    }
    //...
}
```

##### <a name="saving-data"></a>Saving data

Data is created, deleted, and modified in the database using instances of your entity classes. You could add code like the following hard-coded example (mock data, in this case) to your Web API controllers.

```csharp
var catalogItem = new CatalogItem()
    { CatalogTypeId = 2, CatalogBrandId = 2, Name = "Roslyn T-Shirt", Price = 12 };
_context.Catalog.Add(catalogItem);
_context.SaveChanges();
```
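Putting the querying and saving pieces together, a complete create endpoint could look like the following sketch. This action is hypothetical (the actual Catalog.API controller differs in its route and return value); it simply illustrates adding an entity through the injected context and saving it asynchronously.

```csharp
// Hypothetical POST action for the controller shown above (sketch only).
[HttpPost]
[Route("items")]
public async Task<IActionResult> CreateProduct([FromBody] CatalogItem product)
{
    var item = new CatalogItem
    {
        CatalogBrandId = product.CatalogBrandId,
        CatalogTypeId = product.CatalogTypeId,
        Description = product.Description,
        Name = product.Name,
        Price = product.Price
    };

    _catalogContext.CatalogItems.Add(item);

    // SaveChangesAsync avoids blocking a request thread on database I/O.
    await _catalogContext.SaveChangesAsync();

    return Ok(item.Id);
}
```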
##### <a name="dependency-injection-in-aspnet-core-and-web-api-controllers"></a>Dependency injection in ASP.NET Core and Web API controllers

In ASP.NET Core, you can use dependency injection out of the box. You do not need to set up a third-party inversion of control (IoC) container, although you can plug your preferred IoC container into the ASP.NET Core infrastructure if you want. In this case, you can directly inject the required EF DbContext, or additional repositories, through the controller constructor.

In the earlier `CatalogController` class example, we are injecting an object of type `CatalogContext` plus other objects through the `CatalogController()` constructor.

An important configuration to set up in the Web API project is the DbContext class registration into the service's IoC container. You typically do so in the `Startup` class by calling the `services.AddDbContext<DbContext>()` method inside the `ConfigureServices()` method, as shown in the following example:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Additional code...

    services.AddDbContext<CatalogContext>(options =>
    {
        options.UseSqlServer(Configuration["ConnectionString"],
            sqlServerOptionsAction: sqlOptions =>
            {
                sqlOptions.MigrationsAssembly(
                    typeof(Startup).GetTypeInfo().Assembly.GetName().Name);

                // Configuring Connection Resiliency:
                sqlOptions.EnableRetryOnFailure(
                    maxRetryCount: 5,
                    maxRetryDelay: TimeSpan.FromSeconds(30),
                    errorNumbersToAdd: null);
            });

        // Changing default behavior when client evaluation occurs to throw.
        // Default in EF Core would be to log a warning when client evaluation is done.
        options.ConfigureWarnings(warnings => warnings.Throw(
            RelationalEventId.QueryClientEvaluationWarning));
    });

    //...
}
```

### <a name="additional-resources"></a>Additional resources

- **Querying data** \
  [*https://docs.microsoft.com/ef/core/querying/index*](https://docs.microsoft.com/ef/core/querying/index)

- **Saving data** \
  [*https://docs.microsoft.com/ef/core/saving/index*](https://docs.microsoft.com/ef/core/saving/index)

## <a name="the-db-connection-string-and-environment-variables-used-by-docker-containers"></a>The DB connection string and environment variables used by Docker containers

You can use the ASP.NET Core settings and add a ConnectionString property to your settings.json file, as shown in the following example:

```json
{
    "ConnectionString": "Server=tcp:127.0.0.1,5433;Initial Catalog=Microsoft.eShopOnContainers.Services.CatalogDb;User Id=sa;Password=Pass@word",
    "ExternalCatalogBaseUrl": "http://localhost:5101",
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    }
}
```

The settings.json file can have default values for the ConnectionString property or for any other property. However, those properties will be overridden by the values of the environment variables that you specify in the docker-compose.override.yml file when using Docker.

From your docker-compose.yml or docker-compose.override.yml files, you can initialize those environment variables so that Docker will set them up as OS environment variables for you, as shown in the following docker-compose.override.yml file (the connection string and other lines wrap in this example, but they would not wrap in your own file).

```yml
# docker-compose.override.yml

catalog.api:
  environment:
    - ConnectionString=Server=sql.data;Database=Microsoft.eShopOnContainers.Services.CatalogDb;User Id=sa;Password=Pass@word
    # Additional environment variables for this service
  ports:
    - "5101:80"
```

The docker-compose.yml files at the solution level are not only more flexible than configuration files at the project or microservice level, but also more secure if you override the environment variables declared in the docker-compose files with values set from your deployment tools, like Azure DevOps Services Docker deployment tasks.

Finally, you can get that value from your code by using Configuration\["ConnectionString"\], as shown in the ConfigureServices method in an earlier code example.

However, for production environments, you might want to explore additional ways to store secrets such as connection strings. An excellent way to manage application secrets is using [Azure Key Vault](https://azure.microsoft.com/services/key-vault/).

Azure Key Vault helps store and safeguard cryptographic keys and secrets used by your cloud applications and services. A secret is anything you want to keep strict control of, like API keys, connection strings, passwords, and so on, where strict control includes usage logging, setting expiration, managing access, *among other aspects*.

Azure Key Vault allows a very detailed level of control of the usage of application secrets without needing to let anyone know them. The secrets can even be rotated for enhanced security without disrupting development or operations.
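As a rough sketch of how that can look in ASP.NET Core, secrets can be pulled into configuration at startup with the Microsoft.Extensions.Configuration.AzureKeyVault package. The `KeyVaultName`, `AzureAD:ClientId`, and `AzureAD:ClientSecret` setting names below are assumptions for this example, not part of the eShopOnContainers sample.

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                // Build the configuration gathered so far (appsettings, env vars)
                // to read the vault name and the Azure AD app credentials.
                var builtConfig = config.Build();

                config.AddAzureKeyVault(
                    $"https://{builtConfig["KeyVaultName"]}.vault.azure.net/",
                    builtConfig["AzureAD:ClientId"],       // app registered in Azure AD
                    builtConfig["AzureAD:ClientSecret"]);  // secret for that app
            })
            .UseStartup<Startup>()
            .Build();
}
```

With this in place, `Configuration["ConnectionString"]` resolves from Key Vault exactly as it would from settings.json or an environment variable, so the rest of the code is unchanged.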
Applications have to be registered with the organization's Active Directory instance so that they can use the Key Vault. You can check the *Key Vault concepts documentation* for more details.

### <a name="implementing-versioning-in-aspnet-web-apis"></a>Implementing versioning in ASP.NET Web APIs

As business requirements change, new collections of resources may be added, the relationships between resources might change, and the structure of the data in resources might be amended. Updating a Web API to handle new requirements is a relatively straightforward process, but you must consider the effects that such changes will have on client applications consuming the Web API. Although the developer designing and implementing a Web API has full control over that API, the developer does not have the same degree of control over client applications built by third-party organizations operating remotely.

Versioning enables a Web API to indicate the features and resources that it exposes. A client application can then submit requests to a specific version of a feature or resource. There are several approaches to implement versioning:

- URI versioning
- Query string versioning
- Header versioning

Query string and URI versioning are the simplest to implement. Header versioning is a good approach, but it is not as explicit and straightforward as URI versioning. Because URI versioning is the simplest and most explicit, the eShopOnContainers sample application uses it.

With URI versioning, as in the eShopOnContainers sample application, each time you modify the Web API or change the schema of resources, you add a version number to the URI for each resource. Existing URIs should continue to operate as before, returning resources that conform to the schema that matches the requested version.

As shown in the following code example, the version can be set by using the Route attribute in the Web API controller, which makes the version explicit in the URI (v1 in this case).

```csharp
[Route("api/v1/[controller]")]
public class CatalogController : ControllerBase
{
    // Implementation ...
```

This versioning mechanism is simple and depends on the server routing the request to the appropriate endpoint. However, for a more sophisticated versioning approach and the best method when using REST, you should use hypermedia and implement [HATEOAS (Hypertext as the Engine of Application State)](https://docs.microsoft.com/azure/architecture/best-practices/api-design#use-hateoas-to-enable-navigation-to-related-resources).
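If you prefer not to hard-code the version segment by hand, the same URI scheme can be automated. The following sketch uses the Microsoft.AspNetCore.Mvc.Versioning package; this package is an assumption for the example and is not used by the eShopOnContainers sample itself.

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

// In Startup.ConfigureServices — registers the API versioning services.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddApiVersioning(options =>
    {
        options.DefaultApiVersion = new ApiVersion(1, 0);
        options.AssumeDefaultVersionWhenUnspecified = true; // unversioned requests get v1.0
    });
}

// The version segment in the route template replaces the hard-coded "v1".
[ApiVersion("1.0")]
[Route("api/v{version:apiVersion}/[controller]")]
public class CatalogController : ControllerBase
{
    // Implementation ...
}
```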
### <a name="additional-resources"></a>Additional resources

- **Scott Hanselman. ASP.NET Core RESTful Web API versioning made easy** \
  [*https://www.hanselman.com/blog/ASPNETCoreRESTfulWebAPIVersioningMadeEasy.aspx*](https://www.hanselman.com/blog/ASPNETCoreRESTfulWebAPIVersioningMadeEasy.aspx)

- **Versioning a RESTful web API** \
  [*https://docs.microsoft.com/azure/architecture/best-practices/api-design#versioning-a-restful-web-api*](https://docs.microsoft.com/azure/architecture/best-practices/api-design#versioning-a-restful-web-api)

- **Roy Fielding. Versioning, Hypermedia, and REST** \
  [*https://www.infoq.com/articles/roy-fielding-on-versioning*](https://www.infoq.com/articles/roy-fielding-on-versioning)

## <a name="generating-swagger-description-metadata-from-your-aspnet-core-web-api"></a>Generating Swagger description metadata from your ASP.NET Core Web API

[Swagger](https://swagger.io/) is a commonly used open source framework backed by a large ecosystem of tools that helps you design, build, document, and consume your RESTful APIs. It is becoming the standard for the API description metadata domain. You should include Swagger description metadata with any kind of microservice, either data-driven microservices or more advanced domain-driven microservices (as explained in the following section).

The heart of Swagger is the Swagger specification, which is API description metadata in a JSON or YAML file. The specification creates the RESTful contract for your API, detailing all its resources and operations in both human- and machine-readable formats for easy development, discovery, and integration.

The specification is the basis of the OpenAPI Specification (OAS) and is developed in an open, transparent, and collaborative community to standardize the way RESTful interfaces are defined.

The specification defines the structure for how a service can be discovered and how its capabilities understood. For more information, including a web editor and examples of Swagger specifications from companies like Spotify, Uber, Slack, and Microsoft, see the Swagger site ([https://swagger.io](https://swagger.io)).

### <a name="why-use-swagger"></a>Why use Swagger?

The main reasons to generate Swagger metadata for your APIs are the following.

**Ability for other products to automatically consume and integrate your APIs**. Swagger is supported by dozens of products and [commercial tools](https://swagger.io/commercial-tools/), as well as many [libraries and frameworks](https://swagger.io/open-source-integrations/). Microsoft has high-level products and tools that can automatically consume Swagger-based APIs, such as the following:

- [AutoRest](https://github.com/Azure/AutoRest). You can automatically generate .NET client classes for calling Swagger. This tool can be used from the CLI, and it also integrates with Visual Studio for easy use through the GUI.

- [Microsoft Flow](https://flow.microsoft.com/en-us/). You can automatically [use and integrate your API](https://flow.microsoft.com/en-us/blog/integrating-custom-api/) into a high-level Microsoft Flow workflow, with no programming skills required.
- [Microsoft PowerApps](https://powerapps.microsoft.com/). You can automatically consume your API from [PowerApps mobile apps](https://powerapps.microsoft.com/blog/register-and-use-custom-apis-in-powerapps/) built with [PowerApps Studio](https://powerapps.microsoft.com/build-powerapps/), with no programming skills required.

- [Azure App Service Logic Apps](https://docs.microsoft.com/azure/app-service-logic/app-service-logic-what-are-logic-apps). You can automatically [use and integrate your API into an Azure App Service Logic App](https://docs.microsoft.com/azure/app-service-logic/app-service-logic-custom-hosted-api), with no programming skills required.

**Ability to automatically generate API documentation**. When you create large-scale RESTful APIs, such as complex microservice-based applications, you need to handle many endpoints with different data models used in the request and response payloads. Having proper documentation and a solid API explorer, as you get with Swagger, is key for the success of your API and its adoption by developers.

Swagger's metadata is what Microsoft Flow, PowerApps, and Azure Logic Apps use to learn how to use APIs and connect to them.

There are several options to automate Swagger metadata generation for ASP.NET Core REST API applications, in the form of functional API help pages based on *swagger-ui*.

Probably the best known is [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore), which is currently used in [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers) and which we cover in some detail in this guide. There is also the option to use [NSwag](https://github.com/RSuter/NSwag), which can generate Typescript and C\# API clients, as well as C\# controllers, from an OpenAPI or Swagger specification, and even by scanning the .dll that contains the controllers, using [NSwagStudio](https://github.com/RSuter/NSwag/wiki/NSwagStudio).

### <a name="how-to-automate-api-swagger-metadata-generation-with-the-swashbuckle-nuget-package"></a>How to automate API Swagger metadata generation with the Swashbuckle NuGet package

Generating Swagger metadata manually (in a JSON or YAML file) can be tedious work. However, you can automate API discovery of ASP.NET Web API services by using the [Swashbuckle NuGet package](https://aka.ms/swashbuckledotnetcore) to dynamically generate Swagger API metadata.

Swashbuckle automatically generates Swagger metadata for your ASP.NET Web API projects. It supports ASP.NET Core Web API projects, traditional ASP.NET Web API projects, and any other flavor, such as Azure API App, Azure Mobile App, or Azure Service Fabric microservices based on ASP.NET. It also supports plain Web APIs deployed on containers, as in the case of the reference application.

Swashbuckle combines API Explorer and Swagger or [swagger-ui](https://github.com/swagger-api/swagger-ui) to provide a rich discovery and documentation experience for your API consumers. In addition to its Swagger metadata generator engine, Swashbuckle also contains an embedded version of swagger-ui, which it will automatically serve up once Swashbuckle is installed.
This means you can complement your API with a nice discovery UI to help developers use your API. It requires a very small amount of code and maintenance because it is automatically generated, allowing you to focus on building your API. The result for the API Explorer looks like Figure 6-8.

![Swagger UI API documentation generated by Swashbuckle includes all published actions.](./media/image9.png)

**Figure 6-8**. Swashbuckle API Explorer based on Swagger metadata — eShopOnContainers catalog microservice

The API explorer is not the most important thing here, however. Once you have a Web API that can describe itself in Swagger metadata, your API can be used seamlessly from Swagger-based tools, including client proxy-class code generators that can target many platforms. For example, as mentioned, [AutoRest](https://github.com/Azure/AutoRest) automatically generates .NET client classes. But additional tools like [swagger-codegen](https://github.com/swagger-api/swagger-codegen) are also available, which allow code generation of API client libraries, server stubs, and documentation automatically.

Currently, Swashbuckle consists of five internal NuGet packages under the high-level metapackage [Swashbuckle.AspNetCore](https://www.nuget.org/packages/Swashbuckle.AspNetCore) for ASP.NET Core applications.

After you have installed these NuGet packages in your Web API project, you need to configure Swagger in the Startup class, as in the following (simplified) code:

```csharp
public class Startup
{
    public IConfigurationRoot Configuration { get; }

    // Other startup code...

    public void ConfigureServices(IServiceCollection services)
    {
        // Other ConfigureServices() code...

        // Add framework services.
        services.AddSwaggerGen(options =>
        {
            options.DescribeAllEnumsAsStrings();
            options.SwaggerDoc("v1", new Swashbuckle.AspNetCore.Swagger.Info
            {
                Title = "eShopOnContainers - Catalog HTTP API",
                Version = "v1",
                Description = "The Catalog Microservice HTTP API. This is a Data-Driven/CRUD microservice sample",
                TermsOfService = "Terms Of Service"
            });
        });

        // Other ConfigureServices() code...
    }

    public void Configure(IApplicationBuilder app,
        IHostingEnvironment env,
        ILoggerFactory loggerFactory)
    {
        // Other Configure() code...
        // ...
        app.UseSwagger()
            .UseSwaggerUI(c =>
            {
                c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
            });
    }
}
```

Once this is done, you can start your application and browse the following Swagger JSON and UI endpoints using URLs like these:

```url
http://<your-root-url>/swagger/v1/swagger.json

http://<your-root-url>/swagger/
```
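As an optional enrichment, the generated metadata can also carry your `///` XML documentation comments. The following variation of the `AddSwaggerGen` call above is a sketch (the sample project does not necessarily do this); it assumes the project is configured to emit an XML documentation file.

```csharp
using System;
using System.IO;
using System.Reflection;

// Variation of the AddSwaggerGen call shown above (sketch only):
services.AddSwaggerGen(options =>
{
    // ...same DescribeAllEnumsAsStrings() and SwaggerDoc(...) calls as above...

    // Feed the compiler-generated XML documentation file to Swashbuckle so
    // summaries on controllers and actions appear in the Swagger UI.
    var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    options.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFile));
});
```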
You previously saw the generated UI created by Swashbuckle for a URL like http://\<your-root-url\>/swagger. In Figure 6-9 you can also see how you can test any API method.

![Swagger UI API detail shows a sample of the response and can be used to execute the real API, which is great for developer discovery.](./media/image10.png)

**Figure 6-9**. Swashbuckle UI testing the Catalog/Items API method

Figure 6-10 shows the Swagger JSON metadata generated from the eShopOnContainers microservice (which is what the tools use underneath) when you request \<your-root-url\>/swagger/v1/swagger.json using [Postman](https://www.getpostman.com/).

![Example Postman UI showing Swagger JSON metadata](./media/image11.png)

**Figure 6-10**. Swagger JSON metadata

It is that simple. And because the metadata is automatically generated, it will grow as you add more functionality to your API.

### <a name="additional-resources"></a>Additional resources

- **ASP.NET Web API Help Pages using Swagger** \
  [*https://docs.microsoft.com/aspnet/core/tutorials/web-api-help-pages-using-swagger*](https://docs.microsoft.com/aspnet/core/tutorials/web-api-help-pages-using-swagger)

- **Get started with Swashbuckle and ASP.NET Core** \
  [*https://docs.microsoft.com/aspnet/core/tutorials/getting-started-with-swashbuckle?view=aspnetcore-2.1&tabs=visual-studio%2Cvisual-studio-xml*](https://docs.microsoft.com/aspnet/core/tutorials/getting-started-with-swashbuckle?view=aspnetcore-2.1&tabs=visual-studio%2Cvisual-studio-xml)

- **Get started with NSwag and ASP.NET Core** \
  [*https://docs.microsoft.com/aspnet/core/tutorials/getting-started-with-nswag?view=aspnetcore-2.1&tabs=visual-studio%2Cvisual-studio-xml*](https://docs.microsoft.com/aspnet/core/tutorials/getting-started-with-nswag?view=aspnetcore-2.1&tabs=visual-studio%2Cvisual-studio-xml)

>[!div class="step-by-step"]
>[Previous](microservice-application-design.md)
>[Next](multi-container-applications-docker-compose.md)
75.222477
858
0.769125
spa_Latn
0.97995
b99dd59deedd6d83d70078fcd1916b21ee29f261
933
md
Markdown
src/pages/blog/2013-02-14-love-matters.md
crhallen/amanda-brookfield
5732b892f16b5aa8f5dfb29fd441f7caf31ca0b9
[ "MIT" ]
null
null
null
src/pages/blog/2013-02-14-love-matters.md
crhallen/amanda-brookfield
5732b892f16b5aa8f5dfb29fd441f7caf31ca0b9
[ "MIT" ]
6
2020-05-19T18:24:41.000Z
2021-05-10T19:07:40.000Z
src/pages/blog/2013-02-14-love-matters.md
crhallen/amanda-brookfield
5732b892f16b5aa8f5dfb29fd441f7caf31ca0b9
[ "MIT" ]
1
2020-07-06T17:50:46.000Z
2020-07-06T17:50:46.000Z
---
templateKey: blog-post
title: Love Matters
date: 2013-02-14T11:29:53.000Z
tags:
  - newsletters
---
Every so often (not just on Valentine's Day) I am struck, all over again, by the astonishing reach and drive of human love. The Technological Revolution, Wars, Famines, Rockets On The Moon and Melting Ice-Caps... this non-stop backdrop of epic, game-changing events continues, often at breakneck speed, changing our world. And yet it is still the abstraction we call Love that beams most fiercely in our daily lives, the thing most of us seek to experience and understand. It brings down Generals, Politicians, smitten teachers, smitten teenagers, and everything in between. It is the engine of all that is saintly and selfless, as well as much that is satanic and selfish. I know I sound dramatic, but it is dramatic. The endless human quest to love and be loved in return. It trips us up but is also what keeps us going.
93.3
827
0.773848
eng_Latn
0.999607
b99e62b5ae3ffc4a893f219f9cf1f23a61e65dc1
2,788
md
Markdown
content/reviews/voivod-—-war-and-pain.md
innereq/morkerfyr
8e9f4026fe5c0d3f60396eb2a456db4941a87096
[ "Unlicense" ]
null
null
null
content/reviews/voivod-—-war-and-pain.md
innereq/morkerfyr
8e9f4026fe5c0d3f60396eb2a456db4941a87096
[ "Unlicense" ]
7
2020-11-22T18:00:32.000Z
2020-12-30T06:31:18.000Z
content/reviews/voivod-—-war-and-pain.md
innereq/morkerfyr
8e9f4026fe5c0d3f60396eb2a456db4941a87096
[ "Unlicense" ]
null
null
null
---
title: Voivod — War and Pain
author: Fuerlee
date: 2021-05-02T11:02:09.113Z
tags:
  - thrash metal
  - speed metal
  - canada
country: Canada
---
{{< spotify 6GWY2BAseXWPser0aVHUlp >}}

Installment I. The beginning of a long journey across the galaxy.

Having begun their career with the kind of furious racket that would earn the name thrash metal, Voivod's first few albums are barely recognizable next to the later progressive metal masterpieces, but that doesn't mean War and Pain isn't a particularly good listen. Somewhere between Discharge and Venom, with Motörhead drumming, the band plays with more enthusiasm than skill, yet it's easy to tell this is the core of something quite special. Every member is clearly having a great time, but the guitarist nicknamed Piggy is the highlight: his varied, unhinged playing drives the music and meshes well with Blacky's distinctly audible bass (which does an excellent job in its own right; the bass is often an integral part of the songs). He was always Voivod's only guitarist and always held his own beautifully, supplying plenty of mini-solos over the riffs to keep the music moving; a second guitarist would have upset the balance.

Interestingly, where other bands (Anthrax, I'm looking at you) were content to repeat whatever was on their playlists at the time they recorded their debut albums, Voivod wanted to push themselves and their sound, and there is already a slightly avant-garde air to their tunes, especially on the longer tracks such as "Black City". You can hear where Gojira clearly took good influence from the central riff of the atmospheric "Nuclear War", while even relatively simple cuts like "Voivod" are messy barrages of punk-thrash that are scientifically impossible not to enjoy. Snake's vocals deserve a review of their own, since he doesn't even try to sing here but shrieks hysterically throughout, making the vocals on early Megadeth albums sound like a model of restraint by comparison.

The songs get better as the album progresses; the slower interlude in "Warriors of Ice" is just one example. It's pure entertainment: the speed metal of "Suck Your Bone" is hugely fun, while the varied riffs of "Iron Gang" and a slight doom element are taken to the extreme on the title track that follows. Few thrash bands could have managed something as gripping as "Live for Violence" across their entire careers, let alone before 1985, and the ominous percussive passage heading toward the three-minute mark shows more intelligence than you would find elsewhere. If you want raw, ragged, but smart thrash drive, Voivod were always more than capable, and even at this early stage the Canadians managed to stand head and shoulders above the competition.
146.736842
919
0.810976
rus_Cyrl
0.994651
b99eb702857239356b70180a9eaa8b55316a5cab
595
md
Markdown
docs/reference/errors-and-warnings/NU1801.md
NuGet/docs.microsoft.com-nuget.zh-tw
65d15533e1563de8527c96f2f3f25226ea9054b5
[ "MIT" ]
3
2017-08-28T06:09:40.000Z
2019-10-31T07:12:29.000Z
docs/reference/errors-and-warnings/NU1801.md
NuGet/docs.microsoft.com-nuget.zh-tw
65d15533e1563de8527c96f2f3f25226ea9054b5
[ "MIT" ]
10
2018-01-16T09:10:38.000Z
2019-11-06T09:22:03.000Z
docs/reference/errors-and-warnings/NU1801.md
NuGet/docs.microsoft.com-nuget.zh-tw
65d15533e1563de8527c96f2f3f25226ea9054b5
[ "MIT" ]
5
2018-01-19T00:09:45.000Z
2021-01-08T12:28:57.000Z
---
title: NuGet Warning NU1801
description: NU1801 warning code
author: zhili1208
ms.author: lzhi
ms.date: 06/25/2018
ms.topic: reference
ms.reviewer: anangaur
f1_keywords:
  - NU1801
ms.openlocfilehash: 33fc5ccb6644f98f09cc2c59292e84a5c59e2281
ms.sourcegitcommit: 1d1406764c6af5fb7801d462e0c4afc9092fa569
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 09/04/2018
ms.locfileid: "43549298"
---
# <a name="nuget-warning-nu1801"></a>NuGet Warning NU1801

### <a name="issue"></a>Issue

An error occurred while reading a feed and, because `IgnoreFailedSources` is set to true, it was converted into a non-fatal warning. The warning is generic and may contain any message.

### <a name="solution"></a>Solution

Edit your configuration to specify valid sources.
24.791667
69
0.781513
yue_Hant
0.317551
b99f0e8c87fc6ce9dee4b8c853e1d03d9a3d4b0a
1,473
md
Markdown
docs/visual-basic/misc/bc30666.md
v-cakoll/docs.fr-fr
89917d581843816397481ef5c92689b0cfe37042
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30666.md
v-cakoll/docs.fr-fr
89917d581843816397481ef5c92689b0cfe37042
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30666.md
v-cakoll/docs.fr-fr
89917d581843816397481ef5c92689b0cfe37042
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: "'Throw' statement cannot omit operand outside a 'Catch' statement or inside a 'Finally' statement"
ms.date: 07/20/2015
f1_keywords:
  - vbc30666
  - bc30666
helpviewer_keywords:
  - BC30666
ms.assetid: a208a6ea-0e36-4bf1-8984-4de1a0e38a2a
ms.openlocfilehash: c4ab115b01e78c245cbd9f564b50573d6e30c5e7
ms.sourcegitcommit: f8c270376ed905f6a8896ce0fe25b4f4b38ff498
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 06/04/2020
ms.locfileid: "84414931"
---
# <a name="throw-statement-cannot-omit-operand-outside-a-catch-statement-or-inside-a-finally-statement"></a>'Throw' statement cannot omit operand outside a 'Catch' statement or inside a 'Finally' statement

`Throw` statements that are located outside a `Catch` statement must supply the name of an exception object.

**Error ID:** BC30666

## <a name="to-correct-this-error"></a>To correct this error

1. Supply the name of an exception object that derives from `System.Exception`.

2. Restructure your code so that the `Throw` statement is inside a `Catch` block.

## <a name="see-also"></a>See also

- [Throw Statement](../language-reference/statements/throw-statement.md)
- [Try...Catch...Finally Statement](../language-reference/statements/try-catch-finally-statement.md)
- <xref:System.Exception?displayProperty=nameWithType>
- [Exception handling and throwing in .NET](../../standard/exceptions/index.md)
43.323529
230
0.76239
fra_Latn
0.58879
b99f1f82c118d3bd8388440796ce929114dcc1f6
1,222
md
Markdown
trainers/_posts/2015-07-09-pengweixiong.md
youpeiban/youpeiban.github.io
62eb1461e90bb0647f2a5d63e4cde3c6fbf97271
[ "CC-BY-3.0", "BSD-2-Clause-FreeBSD" ]
null
null
null
trainers/_posts/2015-07-09-pengweixiong.md
youpeiban/youpeiban.github.io
62eb1461e90bb0647f2a5d63e4cde3c6fbf97271
[ "CC-BY-3.0", "BSD-2-Clause-FreeBSD" ]
1
2015-08-20T21:14:21.000Z
2017-01-25T17:33:46.000Z
trainers/_posts/2015-07-09-pengweixiong.md
youpeiban/youpeiban.github.io
62eb1461e90bb0647f2a5d63e4cde3c6fbf97271
[ "CC-BY-3.0", "BSD-2-Clause-FreeBSD" ]
null
null
null
---
name: 彭伟雄
avatar_url: /images/founder/pengweixiong.jpg
weibo:
email:
testimonial: We are lucky, because what we are chasing is something great and meaningful! I look forward to our YOU Companion family, in a self-organizing way, uniting every force with passion and dreams, letting love and dreams accompany the growth of our dream-chasing angels with goodwill and kindness, so that every dream-chasing angel is amazed by their own growth and witnesses the growth of YOU Companion.
---
<font color=#0099ff>【Education】</font> Central South University (Master's)

<font color=#0099ff>【Employer】</font> Huawei

<font color=#0099ff>【Based in】</font> Beijing, Shanghai

<font color=#0099ff>【Learning experience】</font>

I was lucky to have the support of my family, teachers, and friends, which let me make my way out of the countryside, where studying was the only way to change one's fate. I was also lucky to learn early what hardship tastes like, which gave me a character that can endure hardship, accept losses, and fight tough battles. I am grateful to the mentors who pointed out the road ahead and gave me so much help and support; I believe the best way to repay them is not only to return their kindness many times over, but to pass this great love on by helping more students who know gratitude and long to grow. That is the source of my motivation for public service. I admire the students of Peking University and Tsinghua, and I too carried a Peking-Tsinghua dream: I envied them for enjoying China's best educational resources and platforms, a gulf that no amount of personal effort alone can close. Drifting north to sit in on lectures (at Tsinghua and Peking University) was something I only learned about later, to my chagrin; it was not that I did not dare to go, I simply did not know it could be done, and a deep regret remains. By attending one lecture a week (regardless of subject), I broadened my horizons and knowledge. Over the past 12 years I have climbed three big steps: 1) seven years of study and learning (Chongqing University, bachelor's; Central South University, master's); 2) five years of tempering at the most wolf-spirited company in China (Huawei); 3) more than a year of enlightenment (admitted to the 6th class of Hejun Business School while at Huawei), where there are so many legends.

<font color=#0099ff>【Public-welfare footprint】</font>

A. The iJiangzuo lecture-sharing platform. University lectures are an enormous treasure: they grant broad vision, dialectical thinking, and motivation, like doors, one after another, leading to higher levels of thought. On May 4, 2010, I founded the iJiangzuo public-welfare platform with the vision "Sharing, the power to change the world", aggregating lecture information from 985 and 211 universities, breaking the information barriers between universities and departments, and letting more students get timely, effective, and comprehensive lecture information so they can enjoy the baptism of knowledge and the collision of ideas. I kept it going for 28 months; I could not carry it further on my own, but the original dream remains, and I now sponsor the XX lecture site to carry that dream on. B. Hometown education activities. From 2007 to 2009, I organized and planned: 1) the first and second speaking tours of Shuangfeng-born students from key national universities returning to their alma maters, using what we had learned to give back to our hometown; 2) supporting hometown education with concrete action through a college-application counseling service for the gaokao; 3) the "Love Changes Fate" Mid-Autumn calligraphy and painting charity sale, which funded six impoverished students of the class of 2009.
76.375
451
0.827332
yue_Hant
0.394375
b99f97551fbe1373e59ea31b12611023be83f5b6
2,109
md
Markdown
docs/framework/data/adonet/sql/linq/user-defined-functions.md
Ming77/docs.zh-cn
dd4fb6e9f79320627d19c760922cb66f60162607
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/sql/linq/user-defined-functions.md
Ming77/docs.zh-cn
dd4fb6e9f79320627d19c760922cb66f60162607
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/data/adonet/sql/linq/user-defined-functions.md
Ming77/docs.zh-cn
dd4fb6e9f79320627d19c760922cb66f60162607
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "用户定义的函数" ms.custom: ms.date: 03/30/2017 ms.prod: .net-framework ms.reviewer: ms.suite: ms.technology: dotnet-ado ms.tgt_pltfrm: ms.topic: article ms.assetid: 3304c9b2-5c7a-4a95-9d45-4f260dcb606e caps.latest.revision: "3" author: douglaslMS ms.author: douglasl manager: craigg ms.workload: dotnet ms.openlocfilehash: ad269ddaa7d7c3995672398a24de06f57f7122e2 ms.sourcegitcommit: ed26cfef4e18f6d93ab822d8c29f902cff3519d1 ms.translationtype: MT ms.contentlocale: zh-CN ms.lasthandoff: 01/17/2018 --- # <a name="user-defined-functions"></a>用户定义的函数 [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] 在您的对象模型中使用方法来表示用户定义的函数。 您可以通过应用 <xref:System.Data.Linq.Mapping.FunctionAttribute> 属性和 <xref:System.Data.Linq.Mapping.ParameterAttribute> 属性(如果需要)将方法指定为函数。 有关详细信息,请参阅[LINQ to SQL 对象模型](../../../../../../docs/framework/data/adonet/sql/linq/the-linq-to-sql-object-model.md)。 为避免出现 <xref:System.InvalidOperationException>,[!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] 中用户定义的函数必须采用以下形式之一: - 包装为具有正确映射属性的方法调用的函数。 有关详细信息,请参阅[基于属性的映射](../../../../../../docs/framework/data/adonet/sql/linq/attribute-based-mapping.md)。 - 特定于 [!INCLUDE[vbtecdlinq](../../../../../../includes/vbtecdlinq-md.md)] 的静态 SQL 方法。 - [!INCLUDE[dnprdnshort](../../../../../../includes/dnprdnshort-md.md)] 方法支持的函数。 本节中的主题说明了在您自行编写代码的情况下,如何在您的应用程序中构建和调用这些方法。 使用 [!INCLUDE[vs_current_short](../../../../../../includes/vs-current-short-md.md)] 的开发人员通常会使用 [!INCLUDE[vs_ordesigner_long](../../../../../../includes/vs-ordesigner-long-md.md)] 来映射用户定义的函数。 ## <a name="in-this-section"></a>本节内容 [如何:使用标量值用户定义的函数](../../../../../../docs/framework/data/adonet/sql/linq/how-to-use-scalar-valued-user-defined-functions.md) 介绍如何实现返回标量值的函数。 [如何:使用表值用户定义的函数](../../../../../../docs/framework/data/adonet/sql/linq/how-to-use-table-valued-user-defined-functions.md) 介绍如何实现返回表值的函数。 [如何:以内联方式调用用户定义的函数](../../../../../../docs/framework/data/adonet/sql/linq/how-to-call-user-defined-functions-inline.md) 介绍如何对函数进行内联调用,以及进行内联调用时在执行方面的差异。
46.866667
341
0.697013
yue_Hant
0.414485
b9a004218b2c022fff8c02487e727b05801479c0
2,594
md
Markdown
docs/csharp/language-reference/keywords/volatile.md
drvoss/docs.ko-kr
108d884ebe03f99edfd57e1d9a20b3334fa3a0fe
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/language-reference/keywords/volatile.md
drvoss/docs.ko-kr
108d884ebe03f99edfd57e1d9a20b3334fa3a0fe
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/language-reference/keywords/volatile.md
drvoss/docs.ko-kr
108d884ebe03f99edfd57e1d9a20b3334fa3a0fe
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: volatile - C# Reference
ms.date: 10/24/2018
f1_keywords:
  - volatile_CSharpKeyword
  - volatile
helpviewer_keywords:
  - volatile keyword [C#]
ms.assetid: 78089bc7-7b38-4cfd-9e49-87ac036af009
ms.openlocfilehash: c7a6c442c33ac2b41f652805837f455a957819de
ms.sourcegitcommit: 5f236cd78cf09593c8945a7d753e0850e96a0b80
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 01/07/2020
ms.locfileid: "75712847"
---
# <a name="volatile-c-reference"></a>volatile (C# Reference)

The `volatile` keyword indicates that a field might be modified by multiple threads that are executing at the same time. The compiler, the runtime system, and even hardware may reorder reads and writes to memory locations for performance reasons. Fields that are declared `volatile` are excluded from these optimizations. Adding the `volatile` modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed. There is no guarantee of a single total ordering of volatile writes as seen from all threads of execution.

The `volatile` keyword can be applied to fields of the following types:

- Reference types.
- Pointer types (in an unsafe context). Although the pointer itself can be volatile, the object that it points to cannot. In other words, you cannot declare a "pointer to volatile."
- Simple types such as `sbyte`, `byte`, `short`, `ushort`, `int`, `uint`, `char`, `float`, and `bool`.
- An `enum` type with one of the following base types: `byte`, `sbyte`, `short`, `ushort`, `int`, or `uint`.
- Generic type parameters known to be reference types.
- <xref:System.IntPtr> and <xref:System.UIntPtr>.

Other types, including `double` and `long`, cannot be marked `volatile` because reads and writes to fields of those types cannot be guaranteed to be atomic. To protect multi-threaded access to those types of fields, use the <xref:System.Threading.Interlocked> class members or protect access using the [`lock`](lock-statement.md) statement.

The `volatile` keyword can only be applied to fields of a `class` or `struct`. Local variables cannot be declared `volatile`.

## <a name="example"></a>Example

The following example shows how to declare a public field variable as `volatile`.

[!code-csharp[declareVolatile](~/samples/snippets/csharp/language-reference/keywords/volatile/Program.cs#Declaration)]

The following example demonstrates how an auxiliary or worker thread can be created and used to perform processing in parallel with the primary thread. For more information about multithreading, see [Managed Threading](../../../standard/threading/index.md). (A self-contained variation of this example appears at the end of this topic.)

[!code-csharp[declareVolatile](~/samples/snippets/csharp/language-reference/keywords/volatile/Program.cs#Volatile)]

With the `volatile` modifier added to the declaration of `_shouldStop`, you'll always get the same results (similar to what is shown in the preceding code). However, without that modifier on the `_shouldStop` member, the behavior is unpredictable. The `DoWork` method may optimize the member access, resulting in reading stale data. Because of the nature of multi-threaded programming, the number of stale reads is unpredictable; different runs of the program will produce somewhat different results.

## <a name="c-language-specification"></a>C# language specification

[!INCLUDE[CSharplangspec](~/includes/csharplangspec-md.md)]

## <a name="see-also"></a>See also

- [C# language specification: The volatile keyword](../../../../_csharplang/spec/classes.md#volatile-fields)
- [C# Reference](../index.md)
- [C# Programming Guide](../../programming-guide/index.md)
- [C# Keywords](index.md)
- [Modifiers](index.md)
- [lock statement](lock-statement.md)
- <xref:System.Threading.Interlocked>
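Because the code snippets above are pulled in from external include files, the following is a small self-contained sketch in the same spirit. The class and member names mirror the `_shouldStop` field described above, but the exact code is illustrative rather than copied from the sample sources.

```csharp
using System;
using System.Threading;

public class Worker
{
    // volatile ensures the loop below observes a stop request made by
    // another thread instead of reading a cached (stale) value.
    private volatile bool _shouldStop;

    public void DoWork()
    {
        while (!_shouldStop)
        {
            // Simulated work...
        }
        Console.WriteLine("Worker thread: terminating gracefully.");
    }

    public void RequestStop() => _shouldStop = true;
}

public static class Program
{
    public static void Main()
    {
        var worker = new Worker();
        var thread = new Thread(worker.DoWork);
        thread.Start();

        Thread.Sleep(500);        // let the worker run briefly
        worker.RequestStop();     // signal it to stop
        thread.Join();            // wait for it to finish
    }
}
```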
43.966102
270
0.717039
kor_Hang
1.000005
b9a04bdf486ff01d740d58f20f7a600a1cd4de72
11,378
md
Markdown
articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
kedMertens/azure-docs.ru-ru
6fd8c58d1385c59d1c3889d6d2b855cd1c6dfd95
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
kedMertens/azure-docs.ru-ru
6fd8c58d1385c59d1c3889d6d2b855cd1c6dfd95
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
kedMertens/azure-docs.ru-ru
6fd8c58d1385c59d1c3889d6d2b855cd1c6dfd95
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Configure the Azure-SSIS Integration Runtime for high performance | Microsoft Docs
description: Learn how to configure the properties of the Azure-SSIS Integration Runtime for high performance
services: data-factory
ms.date: 01/10/2018
ms.topic: conceptual
ms.service: data-factory
ms.workload: data-services
author: swinarko
ms.author: sawinark
ms.reviewer: douglasl
manager: craigg
ms.openlocfilehash: 2592c81947f48c10891fe920647612d5c30af64f
ms.sourcegitcommit: 32d218f5bd74f1cd106f4248115985df631d0a8c
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 09/24/2018
ms.locfileid: "46989090"
---
# <a name="configure-the-azure-ssis-integration-runtime-for-high-performance"></a>Configure the Azure-SSIS Integration Runtime for high performance

This article explains how to configure the Azure-SSIS Integration Runtime (IR) for high performance. The Azure-SSIS IR lets you deploy and run SQL Server Integration Services (SSIS) packages in Azure. For more information about the Azure-SSIS IR, see [this section](concepts-integration-runtime.md#azure-ssis-integration-runtime). For information about deploying and running SSIS packages in Azure, see [Lift and shift SQL Server Integration Services workloads to the cloud](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview).

> [!IMPORTANT]
> This article contains performance results and observations from in-house testing done by members of the SSIS development team. Your results may vary. Do your own testing before you finalize configuration settings, which affect both cost and performance.

## <a name="properties-to-configure"></a>Properties to configure

The following portion of a configuration script shows the properties that you can configure when you create an Azure-SSIS Integration Runtime. For the complete PowerShell script and its description, see [Deploy SQL Server Integration Services packages to Azure](tutorial-deploy-ssis-packages-azure-powershell.md).

```powershell
$SubscriptionName = "<Azure subscription name>"
$ResourceGroupName = "<Azure resource group name>"
# Data factory name. Must be globally unique
$DataFactoryName = "<Data factory name>"
$DataFactoryLocation = "EastUS"

# Azure-SSIS integration runtime information. This is a Data Factory compute resource for running SSIS packages
$AzureSSISName = "<Specify a name for your Azure-SSIS IR>"
$AzureSSISDescription = "<Specify description for your Azure-SSIS IR>"
# Only EastUS, NorthEurope, and WestEurope are supported.
$AzureSSISLocation = "EastUS"
# Only Standard_A4_v2, Standard_A8_v2, Standard_D1_v2, Standard_D2_v2, Standard_D3_v2, Standard_D4_v2 are supported
$AzureSSISNodeSize = "Standard_D3_v2"
# Only 1-10 nodes are supported.
$AzureSSISNodeNumber = 2
# For a Standard_D1_v2 node, 1-4 parallel executions per node are supported. For other nodes, it's 1-8.
$AzureSSISMaxParallelExecutionsPerNode = 2

# SSISDB info
$SSISDBServerEndpoint = "<Azure SQL server name>.database.windows.net"
$SSISDBServerAdminUserName = "<Azure SQL server - user name>"
$SSISDBServerAdminPassword = "<Azure SQL server - user password>"
# Remove the SSISDBPricingTier variable if you are using Azure SQL Database Managed Instance
# This parameter applies only to Azure SQL Database. For the basic pricing tier, specify "Basic", not "B".
# For standard tiers, specify "S0", "S1", "S2", "S3", etc.
$SSISDBPricingTier = "<pricing tier of your Azure SQL server. Examples: Basic, S0, S1, S2, S3, etc.>"
```

## <a name="azuressislocation"></a>AzureSSISLocation

**AzureSSISLocation** is the location of the integration runtime worker node. The worker node maintains a constant connection to the SSIS Catalog database (SSISDB) in Azure SQL Database. Set **AzureSSISLocation** to the same location as the SQL Database server that hosts SSISDB, which lets the integration runtime work as efficiently as possible.

## <a name="azuressisnodesize"></a>AzureSSISNodeSize

Azure Data Factory, including the Azure-SSIS Integration Runtime, supports the following options:

- Standard\_A4\_v2
- Standard\_A8\_v2
- Standard\_D1\_v2
- Standard\_D2\_v2
- Standard\_D3\_v2
- Standard\_D4\_v2

In unofficial, in-house testing by the SSIS engineering team, D-series machines (rather than A-series) turned out to be the better fit for SSIS package execution:

- The performance/price ratio of the D series is higher than that of the A series.
- The throughput of the D series is higher than that of the A series at the same price.

### <a name="configure-for-execution-speed"></a>Configure for execution speed

If you don't have many packages to run and you want them to run quickly, use the information in the following chart to choose a virtual machine type suitable for your scenario. This data shows a single package execution on a single worker node. The package loads 10 million records with first-name and last-name columns from Azure Blob Storage, generates a full-name column, and writes the records whose full name is longer than 20 characters back to Azure Blob Storage.

![Package execution speed of the SSIS Integration Runtime](media/configure-azure-ssis-integration-runtime-performance/ssisir-execution-speed.png)

### <a name="configure-for-overall-throughput"></a>Configure for overall throughput

If you have many packages to run and overall throughput matters most, use the information in the following chart to choose a virtual machine type suitable for your scenario.

![Maximum overall throughput of the SSIS Integration Runtime](media/configure-azure-ssis-integration-runtime-performance/ssisir-overall-throughput.png)

## <a name="azuressisnodenumber"></a>AzureSSISNodeNumber

**AzureSSISNodeNumber** adjusts the scalability of the integration runtime. The throughput of the integration runtime is proportional to **AzureSSISNodeNumber**. Set **AzureSSISNodeNumber** to a small value at first, monitor the throughput of the integration runtime, then adjust the value for your scenario. To reconfigure the worker node count, see [Manage an Azure-SSIS integration runtime](manage-azure-ssis-integration-runtime.md); an illustrative scale-out snippet also appears at the end of this article.

## <a name="azuressismaxparallelexecutionspernode"></a>AzureSSISMaxParallelExecutionsPerNode

If you're already using a powerful worker node to run packages, increasing **AzureSSISMaxParallelExecutionsPerNode** may increase the overall throughput of the integration runtime. For Standard_D1_v2 nodes, 1-4 parallel executions per node are supported. For other node types, 1-8 are supported. You can estimate an appropriate value based on the cost of your package and the following configurations of the worker nodes. For more information, see [General-purpose virtual machine sizes](../virtual-machines/windows/sizes-general.md).

| Size | vCPUs | Memory: GiB | Temp storage (SSD): GiB | Max temp storage throughput: IOPS / read MBps / write MBps | Max data disks / throughput: IOPS | Max NICs / expected network bandwidth (Mbps) |
|------------------|------|-------------|------------------------|------------------------------------------------------------|-----------------------------------|------------------------------------------------|
| Standard\_D1\_v2 | 1 | 3.5 | 50 | 3000 / 46 / 23 | 2 / 2x500 | 2 / 750 |
| Standard\_D2\_v2 | 2 | 7 | 100 | 6000 / 93 / 46 | 4 / 4x500 | 2 / 1500 |
| Standard\_D3\_v2 | 4 | 14 | 200 | 12000 / 187 / 93 | 8 / 8x500 | 4 / 3000 |
| Standard\_D4\_v2 | 8 | 28 | 400 | 24000 / 375 / 187 | 16 / 16x500 | 8 / 6000 |
| Standard\_A4\_v2 | 4 | 8 | 40 | 4000 / 80 / 40 | 8 / 8x500 | 4 / 1000 |
| Standard\_A8\_v2 | 8 | 16 | 80 | 8000 / 160 / 80 | 16 / 16x500 | 8 / 2000 |

Here are the guidelines for setting the right value for the **AzureSSISMaxParallelExecutionsPerNode** property:

1. Set it to a small value at first.
2. Increase it by a small amount to check whether the overall throughput improves.
3. Stop increasing the value when the overall throughput reaches its maximum.

## <a name="ssisdbpricingtier"></a>SSISDBPricingTier

**SSISDBPricingTier** is the pricing tier for the SSIS Catalog database (SSISDB) in Azure SQL Database. This setting affects the maximum number of workers in the IR instance, the speed of queuing a package execution, and the speed of loading the execution log.

- If you don't care about the speed of queuing package executions and loading execution logs, you can choose the lowest database pricing tier. Azure SQL Database with the Basic pricing tier supports 8 workers in an integration runtime instance.
- Choose a database more powerful than Basic if you need more than 8 workers or more than 50 cores. Otherwise the database becomes the bottleneck of the integration runtime instance and hurts overall performance.

You can also adjust the database pricing tier based on [database transaction unit](../sql-database/sql-database-what-is-a-dtu.md) (DTU) usage information available in the Azure portal.

## <a name="design-for-high-performance"></a>Design for high performance

Designing an SSIS package to run in Azure is different from designing a package for on-premises execution. Instead of combining several independent tasks in the same package, split them across multiple packages for more efficient execution in the Azure-SSIS IR. Create an execution for each package, so that you don't have to wait for each one to finish before starting the next. This approach benefits from the scalability of the Azure-SSIS Integration Runtime and improves overall throughput.

## <a name="next-steps"></a>Next steps

Learn more about the Azure-SSIS Integration Runtime in [this section](concepts-integration-runtime.md#azure-ssis-integration-runtime).
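As a hedged illustration of the scale-out step described under **AzureSSISNodeNumber**, the snippet below reconfigures the worker node count of an existing IR. It is a sketch only, not part of the original article: it assumes the AzureRM Data Factory cmdlets used in the companion tutorial (`Stop-`/`Set-`/`Start-AzureRmDataFactoryV2IntegrationRuntime`) and assumes the IR must be stopped before reconfiguration; verify the cmdlet names and parameters against your installed module version.

```powershell
# Illustrative sketch only: scale an existing Azure-SSIS IR from 2 to 5 nodes.
# Cmdlet and parameter names assume the AzureRM.DataFactoryV2 module.
$ResourceGroupName = "<Azure resource group name>"
$DataFactoryName   = "<Data factory name>"
$AzureSSISName     = "<name of your Azure-SSIS IR>"

# Stop the IR before changing its configuration (assumption: reconfiguration
# requires a stopped IR, as in the management tutorial).
Stop-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName -Name $AzureSSISName -Force

# Raise the node count; monitor throughput and adjust iteratively.
Set-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName -Name $AzureSSISName -NodeCount 5

# Start the IR again with the new configuration.
Start-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
    -DataFactoryName $DataFactoryName -Name $AzureSSISName
```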
88.890625
704
0.729478
rus_Cyrl
0.846034
b9a058f213a0bcd8e419639f1c6c114bf4811849
958
md
Markdown
api/Excel.ProtectedViewWindow.Height.md
RichardCory/VBA-Docs
1240462311fb77ee051d4e8b7d7a434d7d020dd3
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Excel.ProtectedViewWindow.Height.md
RichardCory/VBA-Docs
1240462311fb77ee051d4e8b7d7a434d7d020dd3
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Excel.ProtectedViewWindow.Height.md
RichardCory/VBA-Docs
1240462311fb77ee051d4e8b7d7a434d7d020dd3
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: ProtectedViewWindow.Height property (Excel) keywords: vbaxl10.chm914076 f1_keywords: - vbaxl10.chm914076 ms.prod: excel api_name: - Excel.ProtectedViewWindow.Height ms.assetid: 32d5baad-2c78-02ad-7814-f703889f8a36 ms.date: 06/08/2017 localization_priority: Normal --- # ProtectedViewWindow.Height property (Excel) Returns or sets a value that represents the height, in points, of the Protected View window. Read/write ## Syntax _expression_.**Height** _expression_ A variable that represents a **[ProtectedViewWindow](Excel.ProtectedViewWindow.md)** object. ## Return value **Double** ## Remarks You cannot set this property if the Protected View window is maximized or minimized. Use the **[WindowState](Excel.ProtectedViewWindow.WindowState.md)** property to determine the window state. ## See also [ProtectedViewWindow Object](Excel.ProtectedViewWindow.md) [!include[Support and feedback](~/includes/feedback-boilerplate.md)]
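The original reference page has no example section, so as an illustration only, here is a minimal VBA sketch of the read/write behavior together with the **WindowState** guard described in the Remarks. It assumes at least one workbook is currently open in Protected View.

```vb
Sub ResizeProtectedViewWindow()
    Dim pvw As ProtectedViewWindow
    ' Assumes at least one Protected View window exists.
    Set pvw = Application.ProtectedViewWindows(1)

    ' Height can only be set while the window is in the normal state.
    If pvw.WindowState = xlProtectedViewWindowNormal Then
        Debug.Print "Current height (points): " & pvw.Height
        pvw.Height = pvw.Height + 50   ' Grow the window by 50 points.
    End If
End Sub
```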
22.809524
192
0.782881
eng_Latn
0.840378
b9a0a7bfa46ba0b2f8c71402fa437fe7e1d4e247
4,779
md
Markdown
_posts/2014-12-07-review-monuments-to-an-elegy.md
aria42/aria42.github.io
8da6dead4c8f51d4c52c5cbad995baee6e1d6f1f
[ "MIT" ]
1
2018-03-07T18:40:58.000Z
2018-03-07T18:40:58.000Z
_posts/2014-12-07-review-monuments-to-an-elegy.md
aria42/aria42.github.io
8da6dead4c8f51d4c52c5cbad995baee6e1d6f1f
[ "MIT" ]
null
null
null
_posts/2014-12-07-review-monuments-to-an-elegy.md
aria42/aria42.github.io
8da6dead4c8f51d4c52c5cbad995baee6e1d6f1f
[ "MIT" ]
null
null
null
---
layout: post
type: post
title: "Album Review: Monuments To An Elegy (Smashing Pumpkins)"
latex: false
date: 2014-12-07
excerpt: "My review of the new Smashing Pumpkins record."
---

<img class="half-right no-bottom-margin" src="/images/elegy-cover.jpg">

I can't be objective about the [Smashing Pumpkins]. I was so deep in the throes of adolescence when [Mellon Collie and the Infinite Sadness] was released that the intended irony of the album's name was lost upon me. Billy Corgan's voice will always make me feel thirteen, angsty, happy, misunderstood, and hopeful. That is to say that no matter how crazy Corgan's [exploits] have gotten or how poor his [musical offerings], I've held out hope that he still had great music left in him. After 2007's disastrous [Zeitgeist], 2012's [Oceania] presented a faint glimmer of musical relevance. The start of the first track [Quasar] sounds like Corgan clearing the cobwebs that have been in the way of him writing good material. While Oceania had several terrific tracks and solid moments, most of the arrangements were unfocused and meandering. It wasn't an album I would've recommended for someone's enjoyment without the handicap of Pumpkins nostalgia.

<iframe class="half-right" height="260px" src="https://rd.io/i/QV5bTjdeQjPa/" frameborder="0"></iframe>

On [Monuments to An Elegy], Corgan has decided not to fuck around; at thirty-three minutes and nine tracks, Monuments represents the tightest Pumpkins offering to date. No interminable prog-rock solos or fading instrumental ambience. This is a straight-ahead pop/rock record, and it's the best since 1998's [Adore]. The traditional Pumpkins' _wall-of-sound_ is still there, but its layers are trimmed and supplemented with a heavy dose of synth.

Even at the height of the Pumpkins, Corgan's lyrics could veer towards hollow mysticism and self-indulgence. The later records almost exclusively occupy that space. On Monuments, Corgan does something he's never tried before: straightforward lyrics. I would've had a hard time picturing Corgan on a different album confidently singing "_I will bang this drum to my dying day_" as he does on _Drum and Fife_. At times on the album, you do wish Corgan would try for something more ambitious than some of the light love songs on Monuments, but it's definitely different from his past, more oblique efforts. It lets him focus on something that hasn't received enough attention lately: writing a tight, catchy song.

<iframe class="third-right" src="http://youtube.com/embed/UHMSDYtxsu4"></iframe>

The first track [Tiberius] is probably my vote for best Pumpkins track of the 2000s. The key message of the song, in my view, is that Corgan has become comfortable with his various musical personalities and managed to blend them together. The track leads with synths that might've felt at home on [Adore], bridges that have the metallic crunch of [Zeitgeist], and some of the quiet-then-loud dynamics that made Mellon Collie feel epic. All wrapped around a tight melodic core that is most reminiscent of [Siamese Dream]. If you have some fondness -- or nostalgia at least -- for every era of the Pumpkins, as I do, then Tiberius might get stuck in your head for a while.

If I had to liken Monuments to other Pumpkins records or tracks, I would say the closest are [Pisces Iscariot] or _1979_ from Mellon Collie. To get a little more geeky, many of the tracks remind me of the New Wave covers done as B-sides on [Aeroplane Flies High]. And if the Smashing Pumpkins were also the soundtrack of your adolescence, then the thought of hearing more of _that_ Pumpkins should surprise and delight you. It might never be as good or mean as much to you as when you were fifteen, but at least this time you'll be in on the joke in the album's name.

<!-- Footnotes and Links -->
[Smashing Pumpkins]: http://en.wikipedia.org/wiki/The_Smashing_Pumpkins
[Mellon Collie and the Infinite Sadness]: http://en.wikipedia.org/wiki/Mellon_Collie_and_the_Infinite_Sadness
[exploits]: https://www.youtube.com/watch?v=ESMCx0KNVkw
[musical offerings]: http://www.allmusic.com/album/zeitgeist-mw0000475412
[Zeitgeist]: http://www.allmusic.com/album/zeitgeist-mw0000475412
[Oceania]: http://www.allmusic.com/album/oceania-mw0002232972
[Quasar]: http://rd.io/x/QV5bTjdeQjPa/
[Tiberius]: https://www.youtube.com/watch?v=UHMSDYtxsu4
[Adore]: http://www.allmusic.com/album/adore-mw0000035035
[Siamese Dream]: http://www.allmusic.com/album/siamese-dream-mw0000099414
[Aeroplane Flies High]: http://www.allmusic.com/album/the-aeroplane-flies-high-mw0000080285
[Pisces Iscariot]: http://www.allmusic.com/album/pisces-iscariot-mw0000626353
[Monuments to An Elegy]: https://itunes.apple.com/us/album/monuments-to-an-elegy/id929790535
111.139535
1,149
0.779452
eng_Latn
0.990248
b9a0c5844eef8b5abf0a0c7845e1d6bc70e20546
3,737
md
Markdown
Hands-on lab/Before the HOL - Title xxx.md
Mmodarre/MCW-Template-Cloud-Workshop
e1ef552d49cb6c064dd4eb9f23f3ea14f60113c3
[ "MIT" ]
null
null
null
Hands-on lab/Before the HOL - Title xxx.md
Mmodarre/MCW-Template-Cloud-Workshop
e1ef552d49cb6c064dd4eb9f23f3ea14f60113c3
[ "MIT" ]
null
null
null
Hands-on lab/Before the HOL - Title xxx.md
Mmodarre/MCW-Template-Cloud-Workshop
e1ef552d49cb6c064dd4eb9f23f3ea14f60113c3
[ "MIT" ]
1
2020-07-11T00:30:18.000Z
2020-07-11T00:30:18.000Z
![](https://github.com/Microsoft/MCW-Template-Cloud-Workshop/raw/master/Media/ms-cloud-workshop.png "Microsoft Cloud Workshops") <div class="MCWHeader1"> [Insert workshop name here] </div> <div class="MCWHeader2"> Before the hands-on lab setup guide </div> <div class="MCWHeader3"> [Insert date here Month Year] </div> Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. The names of manufacturers, products, or URLs are provided for informational purposes only and Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a manufacturer or product does not imply endorsement of Microsoft of the manufacturer or product. Links may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission received from any linked site. Microsoft is providing these links to you only as a convenience, and the inclusion of any link does not imply endorsement of Microsoft of the site or the products contained therein. © 2018 Microsoft Corporation. All rights reserved. Microsoft and the trademarks listed at <https://www.microsoft.com/en-us/legal/intellectualproperty/Trademarks/Usage/General.aspx> are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners. **Contents** <!-- TOC --> - [\[insert workshop name here\] before the hands-on lab setup guide](#\insert-workshop-name-here\-before-the-hands-on-lab-setup-guide) - [Requirements](#requirements) - [Before the hands-on lab](#before-the-hands-on-lab) - [Task 1: Task name](#task-1-task-name) - [Task 2: Task name](#task-2-task-name) <!-- /TOC --> # \[insert workshop name here\] before the hands-on lab setup guide ## Requirements 1. Number and insert your custom workshop content here . . . ## Before the hands-on lab Duration: X minutes \[insert your custom workshop content here . . . ### Task 1: Task name 1. Number and insert your custom workshop content here . . . a. Insert content here i. ### Task 2: Task name 1. Number and insert your custom workshop content here . . . a. 
Insert content here i. You should follow all steps provided *before* performing the Hands-on lab.
54.955882
926
0.769601
eng_Latn
0.998454
b9a11e2d1d80d2e36386954617e6c1c6ee2481f2
53
md
Markdown
application/WeatherService/DataAccessLayer/Database/ReadMe.md
FrancisDinh/Smart-Energy-Project
16b021e127d9ac5c01653abc31d8cc5d0a7a05c6
[ "MIT" ]
null
null
null
application/WeatherService/DataAccessLayer/Database/ReadMe.md
FrancisDinh/Smart-Energy-Project
16b021e127d9ac5c01653abc31d8cc5d0a7a05c6
[ "MIT" ]
4
2021-06-02T00:34:13.000Z
2021-06-02T00:35:28.000Z
application/WeatherService/DataAccessLayer/Database/ReadMe.md
FrancisDinh/Smart-Energy-Project
16b021e127d9ac5c01653abc31d8cc5d0a7a05c6
[ "MIT" ]
null
null
null
This is the database folder containing data in SQLite format.
53
53
0.849057
eng_Latn
0.999944
b9a1239761b3a1e7f67ef23a7093a1906509cd05
37
md
Markdown
README.md
AleFiorucci/Falcon-9
4c68132bb433ab7b56ec0ae648b10ecdaaa71fbe
[ "MIT" ]
null
null
null
README.md
AleFiorucci/Falcon-9
4c68132bb433ab7b56ec0ae648b10ecdaaa71fbe
[ "MIT" ]
null
null
null
README.md
AleFiorucci/Falcon-9
4c68132bb433ab7b56ec0ae648b10ecdaaa71fbe
[ "MIT" ]
null
null
null
# Falcon-9

Mod for the game Factorio
12.333333
25
0.756757
eng_Latn
0.995084
b9a206ac0eba94b4d8f84e2ee88ca64d4ff83bc8
50
md
Markdown
index.md
0jk/0jk.github.com
946f9fa51963e7173bf6ccf716e313b83f5d17d2
[ "Apache-2.0" ]
null
null
null
index.md
0jk/0jk.github.com
946f9fa51963e7173bf6ccf716e313b83f5d17d2
[ "Apache-2.0" ]
null
null
null
index.md
0jk/0jk.github.com
946f9fa51963e7173bf6ccf716e313b83f5d17d2
[ "Apache-2.0" ]
null
null
null
https://0jk.github.com/

{% include embed.html %}
12.5
24
0.66
yue_Hant
0.542144
b9a2737138ae1350d456f2b3648b9232c4339cfe
2,219
md
Markdown
ItsATrap_theme_Pathfinder/README.md
florianbeisel/roll20-api-scripts
ea7ceddb11cf74580518c96a0a1220ddcf9edb49
[ "MIT" ]
null
null
null
ItsATrap_theme_Pathfinder/README.md
florianbeisel/roll20-api-scripts
ea7ceddb11cf74580518c96a0a1220ddcf9edb49
[ "MIT" ]
null
null
null
ItsATrap_theme_Pathfinder/README.md
florianbeisel/roll20-api-scripts
ea7ceddb11cf74580518c96a0a1220ddcf9edb49
[ "MIT" ]
null
null
null
# It's A Trap! - Pathfinder theme _3.1 Updates_ * The trap theme now supports the Rogue Trap Spotter ability. This is a Pathfinder trap theme built to support Samuel Marino, Nibrodooh, Vince, Samuel Terrazas, chris-b, Magik, and James W.'s Pathfinder character sheet. ## Trap Spotter ability This Trap Theme supports the Rogue's Trap Spotter talent. It works for any character that has Trap Spotter in the Class Abilities section of their character sheet. When the character approaches within 10' of a trap, they will automatically get a perception check to try to notice the trap. The results of this check are sent to the GM. If the Perception check is successful, the players are also alerted about the trap's presence. This ability only works with traps whose type is 'trap'. For the character's Perception check, it uses their Perception skill total on their character sheet, so it doesn't take into account any situational bonuses. It is the GM's job to account for any situational bonuses that might contribute to the hidden Perception check when the result is displayed to them. ## Help Due to complications with the API reading attributes from certain character sheets, there have been issues in the past with things such as saving throws or passive perception not being correct. If this happens, first try adjusting the values for these on your character sheet or try re-creating the character sheet from scratch to see if that resolves the problem. If you continue to experience any issues while using this script, need help using it, or if you have a neat suggestion for a new feature, please reply to this thread: https://app.roll20.net/forum/post/3280344/script-its-a-trap-v2-dot-3 or shoot me a PM: https://app.roll20.net/users/46544/stephen-l ## Show Support If you would like to show your appreciation and support for the work I do in writing, updating, and maintaining my API scripts, consider buying one of my art packs from the Roll20 marketplace (https://marketplace.roll20.net/browse/search/?keywords=&sortby=newest&type=all&genre=all&author=Stephen%20Lindberg) or, simply leave a thank you note in the script's thread on the Roll20 forums. Either is greatly appreciated! Happy gaming!
48.23913
222
0.794051
eng_Latn
0.998641
b9a2a53f2ad89684467cf25d9c55235b5f2c6768
9,299
md
Markdown
README.md
Azganoth/unisul-machine-learning
c5c8dd65b0084521e4f5f679f53fedb03207a9a2
[ "MIT" ]
null
null
null
README.md
Azganoth/unisul-machine-learning
c5c8dd65b0084521e4f5f679f53fedb03207a9a2
[ "MIT" ]
null
null
null
README.md
Azganoth/unisul-machine-learning
c5c8dd65b0084521e4f5f679f53fedb03207a9a2
[ "MIT" ]
null
null
null
# unisul-machine-learning

A collection of scripts used for the Machine Learning course at UNISUL.

## 📜 Scripts

### Assignment 1

#### Problem statement

- Develop a program that performs feature extraction from images (as presented in class).
- The program must be able to analyze an image dataset and create an **\*.arff** file with the features of every image in the dataset.
- To generate the file and for the remaining steps of the assignment, you must use the **[Simpsons characters dataset](https://www.kaggle.com/alexattia/the-simpsons-characters-dataset)**.
- Pick two distinct characters from this dataset to use in your work. Keep a directory with only the characters you picked.
- The features to extract from each character are up to you.
- The program must allow selecting any image and displaying the features of the selected image.
- The program must allow selecting an image and inferring the probability that the selected image is one character or the probability that it is the other character.
- Compress into an archive named after you:
    - A document stating which characters were used and which features of each character were chosen for the feature-extraction step.
    - The **\*.arff** file with the extracted features.
    - The confusion matrix produced by the Naive Bayes algorithm.
    - The source code (may be a link to GitHub).

#### Decisions

For this assignment the characters Marge Simpson and Principal Skinner were chosen. The features chosen for each character were:

- **Marge Simpson:** the blue hair and the green dress
- **Principal Skinner:** the gray hair and the blue suit

#### Confusion matrix

![Confusion Matrix](/docs/test_1_confusion_matrix.png)

#### Run

```sh
python test_1.py
```

### Marge Simpson vs. Principal Skinner classification

Assignment 1 with the addition of the Decision Tree algorithm.

#### Run

```sh
python marge_skinner.py
```

### Assignment 2

#### Problem statement

- Develop a program.
- The program must perform feature extraction from sounds.
- The program must create an *\*.arff* file containing the extracted features.
- The program must train a multilayer perceptron neural network with the extracted features.
- The program must let the user choose a sound file (*.wav*) and report the score that sound obtains from the trained neural network.

##### Deliverables

- The program's source code.
- A description of the features.
- The neural network configuration.
- The *\*.arff* feature file.

##### Dataset details

The [Audio Cats and Dogs](https://www.kaggle.com/mmoreaux/audio-cats-and-dogs) dataset consists of:

- 164 WAV files of cat meows, totaling 1323 seconds of audio;
- 113 WAV files of dog barks, totaling 598 seconds of audio.

All WAV files have a 16 kHz sampling rate and variable duration.

#### Feature description

An illustrative extraction sketch appears at the end of this README.

`chroma_stft_mean` `chroma_stft_var`
Mean and variance of the chromagram values of each audio frame.

> A chromagram is the projection of an audio frame's spectrum onto 12 bins representing the 12 distinct semitones (or chroma) of the musical octave (the interval between one musical note and another with half or double its frequency).

`rms_mean` `rms_var`
Mean and variance of the root-mean-square values of each audio frame.

`spectral_centroid_mean` `spectral_centroid_var`
Mean and variance of the spectral centroid of each audio frame.

> The spectral centroid indicates where the center of mass of the audio is located, computing the weighted mean of the frequencies present in the audio.

`spectral_bandwidth_mean` `spectral_bandwidth_var`
Mean and variance of the spectral bandwidth of each audio frame.

#### Neural network configuration

- The hidden layers consist of **2** (two) layers with **5** (five) neurons each.
- The hyperbolic tangent function showed the best results as the activation function.
- A learning rate of `0.2` proved the most effective.
- A momentum of `0.15` gave good results.
- Although training converged at around 500 iterations, a maximum of 1000 iterations was allowed.

#### Run

```sh
python test_2.py
```

### Assignment 3

#### Problem statement

Discover "information" that is not visible in the dataset, for example:

- Are there associations between sales?
- Can a shift in the sales profile be observed over time?
- Is it possible to cluster the data based on sales?
- Is it possible to discover a player profile based on where the sale happened?
- Is a game's name associated with its genre?
- Is it possible to predict whether a publisher's sales are declining or improving over time?
- Are there associations between genres and platforms? Or between genres and sales?
- Other discoveries.

##### Deliverables

- A description of every technique, algorithm, and parameter used to probe the dataset, including those that discovered absolutely nothing.
- A description of the technique, algorithm, and parameters that yielded a discovery.
- A description of the discovery or discoveries obtained.
- Information about how the tests were done (a custom application or an existing application such as WEKA).
- If custom-built, the source code.
- A concluding paragraph relating the work to the topics covered in the learning unit.

##### Dataset details

The [Game sales](/samples/vendas_de_jogos.csv) dataset consists of:

- 16,598 entries with game sales information.
- Each entry has the following fields about a sale:
    - **Rank:** position in the sales ranking;
    - **Name:** name of the game;
    - **Platform:** platform the game was released on (PC, PS4, XBOX, etc.);
    - **Year:** release year of the game;
    - **Genre:** game genre;
    - **Publisher:** company that published the game;
    - **North America sales:** sales in North America (in millions of dollars);
    - **EU sales:** sales in Europe (in millions of dollars);
    - **Japan sales:** sales in Japan (in millions of dollars);
    - **Other-countries sales:** sales in the rest of the world (in millions of dollars);
    - **Total sales:** worldwide sales (in millions of dollars).

#### Description

The dataset went through preprocessing in which all rows containing null values were removed. During testing, several functions were used to group the data by genre, platform, year, and sales volume. All tests were done in Python in a Jupyter environment via Google Colab. [Link to the source code](https://colab.research.google.com/drive/1nCuulQNKRdRmHcuxgvoocyh4g0HXwIpL?usp=sharing).

#### Discoveries

##### Discovery 1:

![Discovery 1](/docs/test_3_desc_1.png)

The chart shows that most games belong to the Action genre, followed by the Sports genre.

##### Discovery 2:

![Discovery 2](/docs/test_3_desc_2.png)

The chart shows that the best-selling genre globally is Action, followed by Sports. Deviating from the pattern elsewhere, Japan shows a stronger preference for RPGs, which outsell both Action and Sports games there.

##### Discovery 3:

![Discovery 3](/docs/test_3_desc_3.png)

The chart shows that the Nintendo DS and PS2 platforms have the largest numbers of games. While the PS2 has more Sports games (391), the Nintendo DS has more Misc games (389).

##### Discovery 4:

![Discovery 4](/docs/test_3_desc_4.png)

The chart shows that the platform with the most games sold is the PS2, followed by the X360, PS3, Wii, and Nintendo DS.

##### Discovery 5:

![Discovery 5](/docs/test_3_desc_5.png)

The chart shows a huge increase in sales in recent years, mainly for games in the Action, Sports, and Misc genres.

##### Discovery 6:

![Discovery 6](/docs/test_3_desc_6.png)

The chart shows that sales, despite growing considerably worldwide, grew disproportionately in North America in recent years.

##### Discovery 7:

![Discovery 7](/docs/test_3_desc_7.png)

The chart shows that the X360 platform has the highest sales of FPS games, while the PS3 has the highest sales of Action games.

#### Conclusion

Although no algorithm covered in the course was used, data preprocessing was applied to focus on useful information.

## 🚀 How to use

**Requirements:**

- Python 3.8

Create a virtual environment:

```sh
python -m venv venv
```

Activate the environment:

```sh
# bash
venv/Scripts/activate

# cmd
venv\Scripts\activate.bat

# powershell
venv/Scripts/Activate.ps1
```

Install the project dependencies:

```sh
pip install -r requirements.txt
```

Run a script:

```sh
python script_path.py
```

## 🔑 License

This project is under the [MIT license](LICENSE.md).
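Not part of the original coursework: purely as an illustration of how the Assignment 2 features map onto code, here is a minimal Python sketch. It assumes the `librosa` library, whose `chroma_stft`, `rms`, `spectral_centroid`, and `spectral_bandwidth` feature functions match the names above; the file name in the usage line is hypothetical.

```python
import librosa

def extract_features(path: str) -> dict:
    """Compute the mean/variance features described above for one WAV file."""
    y, sr = librosa.load(path, sr=16000)  # dataset files are 16 kHz

    frames = {
        "chroma_stft": librosa.feature.chroma_stft(y=y, sr=sr),
        "rms": librosa.feature.rms(y=y),
        "spectral_centroid": librosa.feature.spectral_centroid(y=y, sr=sr),
        "spectral_bandwidth": librosa.feature.spectral_bandwidth(y=y, sr=sr),
    }
    row = {}
    for name, values in frames.items():
        row[f"{name}_mean"] = float(values.mean())  # mean over all frames
        row[f"{name}_var"] = float(values.var())    # variance over all frames
    return row

print(extract_features("cat_1.wav"))  # hypothetical file name
```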
36.610236
255
0.761372
por_Latn
0.999893
b9a2fd6f9140085790beac88d35a1276c9ba29f2
366
md
Markdown
ru/_includes/datalens/datalens-connection-note.md
leksuss/docs
11c22f0f3967d84fa451bae4e9bab65bc2187d91
[ "CC-BY-4.0" ]
null
null
null
ru/_includes/datalens/datalens-connection-note.md
leksuss/docs
11c22f0f3967d84fa451bae4e9bab65bc2187d91
[ "CC-BY-4.0" ]
null
null
null
ru/_includes/datalens/datalens-connection-note.md
leksuss/docs
11c22f0f3967d84fa451bae4e9bab65bc2187d91
[ "CC-BY-4.0" ]
null
null
null
{% note warning %}

When connecting to an external data source (one that is not a {{ yandex-cloud }} resource), you must grant the source access for the following IP address ranges of the DataLens service:

- `178.154.242.176/28`
- `178.154.242.192/28`
- `178.154.242.208/28`
- `178.154.242.128/28`
- `178.154.242.144/28`
- `178.154.242.160/28`

{% endnote %}
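As an illustration only (not part of the original note), the following Python sketch checks whether a given client address belongs to one of the ranges above, using the standard-library `ipaddress` module:

```python
import ipaddress

# DataLens service ranges listed in the note above.
DATALENS_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in (
        "178.154.242.176/28",
        "178.154.242.192/28",
        "178.154.242.208/28",
        "178.154.242.128/28",
        "178.154.242.144/28",
        "178.154.242.160/28",
    )
]

def is_datalens_address(addr: str) -> bool:
    """Return True if `addr` falls inside any published DataLens range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DATALENS_RANGES)

print(is_datalens_address("178.154.242.130"))  # True
print(is_datalens_address("8.8.8.8"))          # False
```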
28.153846
191
0.710383
rus_Cyrl
0.892164
b9a32f9148eda3ee38d2c1ee1cdad392d4a96b1f
1,845
md
Markdown
SOURCE/input/keyboard-input/commands.md
zpy807/android-training-course-in-chinese
bceec94a5c37e73b49acf849ed067748018d70a2
[ "Apache-2.0" ]
1
2015-11-05T18:07:17.000Z
2015-11-05T18:07:17.000Z
SOURCE/input/keyboard-input/commands.md
lanffy/android-training-course-in-chinese
99bdb00ae150907b195484f7a53beffea964562a
[ "Apache-2.0" ]
null
null
null
SOURCE/input/keyboard-input/commands.md
lanffy/android-training-course-in-chinese
99bdb00ae150907b195484f7a53beffea964562a
[ "Apache-2.0" ]
null
null
null
> Written by: [zhaochunqi](https://github.com/zhaochunqi)
> Proofread by:

# Handling Keyboard Actions

When an editable text field such as an EditText element has focus and the user has a hardware keyboard attached, all input is handled by the system. If, however, you'd like to intercept or directly handle the keyboard input yourself, do so by implementing callback methods from the KeyEvent.Callback interface, such as onKeyDown() and onKeyMultiple(). Both the Activity and View classes implement the KeyEvent.Callback interface, so you should generally override the callback methods in your extension of these classes as appropriate.

> **Note:** When handling keyboard events with the KeyEvent class and related APIs, you should expect that such keyboard events come only from a hardware keyboard. You should never rely on receiving key events from a soft input method (an on-screen keyboard).

## Handle single key events

To handle an individual key press, implement onKeyDown() or onKeyUp() as appropriate. Usually, you should use onKeyUp() to be sure that you receive only one event. If the user presses and holds a button, onKeyDown() is called multiple times.

For example, this implementation responds to some keyboard keys to control a game:

```java
@Override
public boolean onKeyUp(int keyCode, KeyEvent event) {
    switch (keyCode) {
        case KeyEvent.KEYCODE_D:
            moveShip(MOVE_LEFT);
            return true;
        case KeyEvent.KEYCODE_F:
            moveShip(MOVE_RIGHT);
            return true;
        case KeyEvent.KEYCODE_J:
            fireMachineGun();
            return true;
        case KeyEvent.KEYCODE_K:
            fireMissile();
            return true;
        default:
            return super.onKeyUp(keyCode, event);
    }
}
```

## Handle modifier keys

To respond to modifier key events, such as when a key is combined with Shift or Control, you can query the KeyEvent that is passed to the callback method. Several methods provide information about modifier keys, such as getModifiers() and getMetaState(). However, the simplest solution is to check whether the exact modifier key you care about is being pressed, with methods such as isShiftPressed() and isCtrlPressed().

For example, here is an onKeyUp() implementation with some extra handling for when the Shift key is held down together with one of the keys:

```java
@Override
public boolean onKeyUp(int keyCode, KeyEvent event) {
    switch (keyCode) {
        ...
        case KeyEvent.KEYCODE_J:
            if (event.isShiftPressed()) {
                fireLaser();
            } else {
                fireMachineGun();
            }
            return true;
        case KeyEvent.KEYCODE_K:
            if (event.isShiftPressed()) {
                fireSeekingMissle();
            } else {
                fireMissile();
            }
            return true;
        default:
            return super.onKeyUp(keyCode, event);
    }
}
```
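One detail the examples above do not show: while a hardware key is held down, onKeyDown() fires repeatedly. A minimal, illustrative addition (mine, not from the original page) filters out those auto-repeat events with KeyEvent's getRepeatCount(), reusing the fireMachineGun() helper from the game example above:

```java
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    // React only to the initial press; ignore the auto-repeat events
    // generated while the key stays held down.
    if (keyCode == KeyEvent.KEYCODE_J && event.getRepeatCount() == 0) {
        fireMachineGun();
        return true;
    }
    return super.onKeyDown(keyCode, event);
}
```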
25.985915
168
0.631978
yue_Hant
0.867884
b9a419a488a933ed4c4d5bbeb7eab3bee1694e58
3,279
md
Markdown
README.md
hunterInt/connector-ipfs
70c8bc26ece081c05d0a0e838f670450fc4101ff
[ "Apache-2.0" ]
null
null
null
README.md
hunterInt/connector-ipfs
70c8bc26ece081c05d0a0e838f670450fc4101ff
[ "Apache-2.0" ]
null
null
null
README.md
hunterInt/connector-ipfs
70c8bc26ece081c05d0a0e838f670450fc4101ff
[ "Apache-2.0" ]
null
null
null
## connector-ipfs (uplink v1.0.5)

[![Codacy Badge](https://api.codacy.com/project/badge/Grade/ac7bbc539c0a45a5a2140b3e6b7c823d)](https://app.codacy.com/gh/storj-thirdparty/connector-IPFS?utm_source=github.com&utm_medium=referral&utm_content=storj-thirdparty/connector-IPFS&utm_campaign=Badge_Grade_Dashboard)
[![Go Report Card](https://goreportcard.com/badge/github.com/storj-thirdparty/connector-ipfs)](https://goreportcard.com/report/github.com/storj-thirdparty/connector-ipfs)
![Cloud Build](https://storage.googleapis.com/storj-utropic-services-badges/builds/connector-ipfs/branches/master.svg)

## Overview

The IPFS Connector connects to an IPFS server, takes a backup of the specified files, and uploads the backup data to the Storj network.

```bash
Usage:
  connector-ipfs [command] <flags>

Available Commands:
  help        Help about any command
  store       Command to upload data to a Storj V3 network.
  version     Prints the version of the tool
```

`store` - Connects to the IPFS instance specified in the configuration file (default: `ipfs_property.json`). A backup of the IPFS data is generated using tooling provided by IPFS and then uploaded to the Storj network. The connection to a Storj V3 network uses the access specified in the Storj configuration file (default: `storj_config.json`).

Sample configuration files are provided in the `./config` folder.

## Requirements and Install

To build from scratch, [install the latest Go](https://golang.org/doc/install#install).

> Note: Ensure go modules are enabled (GO111MODULE=on)

#### Option #1: clone this repo (most common)

To clone the repo:

```
git clone https://github.com/storj-thirdparty/connector-ipfs.git
```

Then build the project:

```
cd connector-ipfs
go build
```

#### Option #2: ``go get`` into your gopath

To download the project inside your GOPATH, use the following command:

```
go get github.com/storj-thirdparty/connector-ipfs
```

## Connect to the IPFS Server

Make sure you are connected to an IPFS server. If not, run the IPFS daemon in another terminal to join your node to the public network:

```
$ ipfs daemon
```

## Run (short version)

Once you have built the project, run the following commands as needed:

##### Get help

```
$ ./connector-ipfs --help
```

##### Check version

```
$ ./connector-ipfs --version
```

##### Create a backup from IPFS and upload it to Storj

```
$ ./connector-ipfs store
```

## Documentation

* To access the documentation on your local system:
    1) Install [docsify](https://www.npmjs.com/package/docsify-cli)
    2) Run the following command at the root directory of the cloned project:
    ```
    $ docsify serve docs
    ```
* For more information on runtime flags, configuration, testing, and diagrams, check out the [Detail](//github.com/storj-thirdparty/wiki/Detail) page or jump to:
    * [Config Files](//github.com/storj-thirdparty/connector-ipfs/wiki/#config-files)
    * [Run (long version)](//github.com/storj-thirdparty/connector-ipfs/wiki/#run)
    * [Testing](//github.com/storj-thirdparty/connector-ipfs/wiki/#testing)
    * [Flow Diagram](//github.com/storj-thirdparty/connector-ipfs/wiki/#flow-diagram)
    * [Video](//github.com/storj-thirdparty/connector-ipfs/docs/videos)
33.121212
299
0.719427
eng_Latn
0.893073
b9a4887d28bd5546e8546b9521941ae96f8ea8ac
36
md
Markdown
_includes/03-links.md
hexzha/markdown-portfolio
44e09e85065a003c4758debc618a5f054b1bd622
[ "MIT" ]
null
null
null
_includes/03-links.md
hexzha/markdown-portfolio
44e09e85065a003c4758debc618a5f054b1bd622
[ "MIT" ]
8
2018-08-04T03:42:21.000Z
2020-10-05T22:04:12.000Z
_includes/03-links.md
hexzha/markdown-portfolio
44e09e85065a003c4758debc618a5f054b1bd622
[ "MIT" ]
null
null
null
[hexzha](https://github.com/hexzha)
18
35
0.722222
yue_Hant
0.253244
b9a4aab7804ff1a7b2afb388b1e90e708e86fef3
3,213
md
Markdown
electives/kkd/lab/lista-1/readme.md
jerry-sky/academic-notebook
be2d350289441b99168ea40412891bc65b9cb431
[ "Unlicense" ]
4
2020-12-28T21:53:00.000Z
2022-03-22T19:24:47.000Z
electives/kkd/lab/lista-1/readme.md
jerry-sky/academic-notebook
be2d350289441b99168ea40412891bc65b9cb431
[ "Unlicense" ]
3
2022-02-13T18:07:10.000Z
2022-02-13T18:16:07.000Z
electives/kkd/lab/lista-1/readme.md
jerry-sky/academic-notebook
be2d350289441b99168ea40412891bc65b9cb431
[ "Unlicense" ]
4
2020-12-28T16:05:35.000Z
2022-03-08T16:20:00.000Z
---
lang: 'pl'
title: 'Lista-1'
author: 'Jerry Sky'
---

---

## Lab assignment

> For discrete random variables $X$ and $Y$, the entropy of $Y$ conditioned on $X$ is given by
> $$
> H(Y|X) = \sum_{x\in X}P(x) \cdot H(Y|x)
> $$
> where
> $$
> H(Y|x) = \sum_{y\in Y}P(y|x) \cdot I(y|x)
> $$
> and $P(z)$ denotes the probability of $z$, while $I(z)$ denotes the information associated with $z$.
>
> Write a program which, for a given file treated as a sequence of 8-bit symbols, computes the frequency of those symbols as well as the frequency of symbols following a given symbol (the frequency conditioned on the preceding character being known; for the first character, assume it is preceded by the character with code $0$). Add functions which, treating the computed frequencies as random variables, compute the entropy and the conditional entropy (conditioned on knowing the preceding symbol), and report the difference between them.
>
> The program should print its results in a way that is readable and easy to process further.
>
> Analyze the output of your program for sample text files and `doc`, `pdf`, `mp4`, or `jpg` files (use files of at least 1MB).

## Running the program

To run the program, first compile it with `make` and then run `./main.out <file to open>` (for example, `./main.out pan-tadeusz.txt`). The entire [code](main.cpp) is contained in the file `main.cpp`.

---

Below are additional simplifications of the formulas, used so that the program can work more accurately.

## Conditional entropy

$P(y|x) = \frac{P(y \cap x)}{P(x)} = \frac{|y \cap x|}{|x|}$

$I(y|x) = -\log_2 P(y|x)$

$H(Y|x) = \sum_{y \in Y} P(y|x) \cdot I(y|x)$

$H(Y|X) = \sum_{x \in X} P(x) \cdot H(Y|x)$

Substituting:
$H(Y|X) = \sum_{x \in X} P(x) \cdot \sum_{y \in Y} P(y|x) \cdot I(y|x)$
$H(Y|X) = \sum_{x \in X} P(x) \cdot \sum_{y \in Y} P(y|x) \cdot \left(-\log_2 P(y|x)\right)$

After all, we have:
$$
P(x) = \frac{|x|}{|\Omega|}
$$
and:
$$
P(y|x) = \frac{P(y \cap x)}{P(x)} = \frac{|y \cap x|}{|x|}
$$
then:
$$
H(Y|X) = \sum_{x \in X} \frac{|x|}{|\Omega|} \cdot \sum_{y \in Y} \frac{|y \cap x|}{|x|} \cdot \left(-\log_2 \frac{|y \cap x|}{|x|}\right)
$$
$$
H(Y|X) = \sum_{x \in X} \sum_{y \in Y} \frac{|x|}{|\Omega|} \cdot \frac{|y \cap x|}{|x|} \cdot \left(-\log_2 \frac{|y \cap x|}{|x|}\right)
$$
$$
H(Y|X) = \sum_{x \in X} \sum_{y \in Y} \frac{|y \cap x|}{|\Omega|} \cdot \left(-\log_2 \frac{|y \cap x|}{|x|}\right)
$$
$$
H(Y|X) = \sum_{x \in X} \sum_{y \in Y} \frac{|y \cap x|}{|\Omega|} \cdot \left(\log_2 |x| - \log_2 |y \cap x|\right)
$$

## Plain entropy

$$
H(X) = \sum_{x \in X} P(x) \cdot I(x)
$$
where $I(x) = -\log_2 P(x)$

$$
H(X) = \sum_{x \in X} \frac{|x|}{|\Omega|} \cdot \left(-\log_2 \frac{|x|}{|\Omega|}\right)
$$
$$
H(X) = \frac{1}{|\Omega|} \cdot \sum_{x \in X} |x| \cdot \left(-\log_2 \frac{|x|}{|\Omega|}\right)
$$
$$
H(X) = \frac{1}{|\Omega|} \cdot \sum_{x \in X} |x| \cdot \left(\log_2 |\Omega| - \log_2 |x|\right)
$$
$$
H(X) = \frac{1}{|\Omega|} \cdot \sum_{x \in X} |x| \cdot \left(-\log_2 |x|\right) + \log_2 |\Omega|
$$
using $\sum_{x \in X} |x| = |\Omega|$.
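The repository's `main.cpp` is not reproduced here, so purely as an illustration, a minimal C++ sketch of the two computations (under the same conventions: 8-bit symbols, and the first byte is preceded by code 0) might look like this:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) {
        std::perror("fopen");
        return 1;
    }

    std::vector<double> count(256, 0);        // occurrences of each symbol x
    std::vector<double> pairs(256 * 256, 0);  // occurrences of y right after x
    int prev = 0;                             // first byte: predecessor is code 0
    int c;
    double total = 0;
    while ((c = std::fgetc(f)) != EOF) {
        count[c] += 1;
        pairs[prev * 256 + c] += 1;
        prev = c;
        total += 1;
    }
    std::fclose(f);
    if (total == 0) return 0;

    // H(X) = log2|Omega| - (1/|Omega|) * sum |x| * log2|x|  (see derivation above)
    double hx = 0;
    for (int x = 0; x < 256; ++x)
        if (count[x] > 0) hx += count[x] * std::log2(count[x]);
    hx = std::log2(total) - hx / total;

    // H(Y|X) = (1/|Omega|) * sum over (x,y) of |y^x| * (log2|x| - log2|y^x|),
    // where |x| here counts x as a *predecessor*.
    std::vector<double> pred(256, 0);
    for (int x = 0; x < 256; ++x)
        for (int y = 0; y < 256; ++y) pred[x] += pairs[x * 256 + y];
    double hyx = 0;
    for (int x = 0; x < 256; ++x)
        for (int y = 0; y < 256; ++y) {
            double n = pairs[x * 256 + y];
            if (n > 0) hyx += n * (std::log2(pred[x]) - std::log2(n));
        }
    hyx /= total;

    std::printf("H(X)   = %f\nH(Y|X) = %f\ndifference = %f\n", hx, hyx, hx - hyx);
    return 0;
}
```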
33.123711
511
0.603797
pol_Latn
0.996127
b9a4c7b5c9dd5405d9e8a0a6d3cf799b3cd1b771
2,557
md
Markdown
_posts/2021/2021-10-07-twickenham-resident-to-host-cop26-event.md
barrymcgee/stmgrts
53fadf731262e07b748cad455563c2f45fb80db5
[ "CC0-1.0" ]
null
null
null
_posts/2021/2021-10-07-twickenham-resident-to-host-cop26-event.md
barrymcgee/stmgrts
53fadf731262e07b748cad455563c2f45fb80db5
[ "CC0-1.0" ]
null
null
null
_posts/2021/2021-10-07-twickenham-resident-to-host-cop26-event.md
barrymcgee/stmgrts
53fadf731262e07b748cad455563c2f45fb80db5
[ "CC0-1.0" ]
null
null
null
--- layout: post title: "Twickenham resident to host COP26 event" permalink: /archives/2021/10/twickenham-resident-to-host-cop26-event.html commentfile: 2021-10-07-twickenham-resident-to-host-cop26-event category: news date: 2021-10-07 10:00:00 image: "https://www.richmond.gov.uk/media/22658/cop26_conference.jpg" excerpt: | Twickenham resident Nikita Patel has been chosen to co-host an event at the United Nations Climate Change Conference (COP26) in Glasgow next week. The event 'Countdown to Planet Zero: combating climate change with chemistry' is a youth panel which gives the next generation of scientists and inventors an opportunity to showcase their work and its impact on climate change. --- <img src="https://www.richmond.gov.uk/media/22658/cop26_conference.jpg" alt="image - Twickenham resident to host COP26 event" width="250" class="photo right" > Twickenham resident Nikita Patel has been chosen to co-host an event at the United Nations Climate Change Conference (COP26) in Glasgow next week. The event 'Countdown to Planet Zero: combating climate change with chemistry' is a youth panel which gives the next generation of scientists and inventors an opportunity to showcase their work and its impact on climate change. Nikita, 25, has lived in TW2 since 1996 and attended Chase Bridge Primary School. She is currently a PhD student at the Centre for Translational Medicine and Therapeutics at Queen Mary University of London. > "I'm really excited to be a part of COP26 which I see as a moment for all of us to get involved - we can all make some simple changes like using public transport or cycling more to create a different path and limit our impact on the climate. Whilst these individual steps are necessary and achievable, we must take our personal knowledge and actions to the collective today to change our world's tomorrow. This is what COP26 is about. The youth panel event is designed to start to address the significant concerns of the next generation about the planet they will inherit and showcase some of the important work being carried out by our scientists. I believe it is important to highlight how fundamental collaboration between industry and academia is to speedily develop solutions for society. By the end of the event, I hope the audience will feel more positive that there is hope in science." If you are interested in registering to virtually attend the event on Thursday 4 November, 5 to 6pm, please [register online](https://www.soci.org/events/hq-events/2021/cop26-countdown-to-planet-zero).
116.227273
896
0.799374
eng_Latn
0.998136
b9a50f839253a89ab381c6bf8dfacfe5f27f1c62
8,668
md
Markdown
op_practice_book-master/doc/store/gfs.md
benniao1996/1996
d95b647db326d261a77794823d257d24a77559ac
[ "MIT" ]
12
2020-07-03T12:55:41.000Z
2021-02-05T10:47:57.000Z
op_practice_book-master/doc/store/gfs.md
thinszx/1996
d95b647db326d261a77794823d257d24a77559ac
[ "MIT" ]
null
null
null
op_practice_book-master/doc/store/gfs.md
thinszx/1996
d95b647db326d261a77794823d257d24a77559ac
[ "MIT" ]
7
2020-07-07T12:06:55.000Z
2021-07-09T08:18:45.000Z
# GFS

<!-- vim-markdown-toc GFM -->

* [Requirements for a distributed file system](#requirements-for-a-distributed-file-system)
* [Assumptions GFS is based on](#assumptions-gfs-is-based-on)
* [Architecture](#architecture)
    * [Chunk size](#chunk-size)
    * [Metadata](#metadata)
    * [Operation Log](#operation-log)
    * [Fault tolerance](#fault-tolerance)
        * [Master fault tolerance](#master-fault-tolerance)
    * [Consistency model](#consistency-model)
    * [Lease mechanism](#lease-mechanism)
    * [Version numbers](#version-numbers)
    * [Load balancing](#load-balancing)
* [Basic operations](#basic-operations)
    * [Read](#read)
    * [Overwrite](#overwrite)
    * [Record Append](#record-append)
    * [Snapshot](#snapshot)
    * [Delete](#delete)

<!-- vim-markdown-toc -->

As a distributed file system, GFS not only has to satisfy the requirements of an ordinary file system; its overall design is also driven by some specific application scenarios (the `application workloads and technological environment` that the original paper repeatedly mentions).

### Requirements for a distributed file system

A typical distributed file system must satisfy four requirements:

* Performance: high performance, i.e., low response time and high throughput
* Scalability: easy to scale; capacity can be increased simply by adding machines
* Reliability: the system should make as few errors as possible
* Availability: the system should stay available as much as possible

(Note: on the difference between reliability and availability, see [this post](http://unfolding-mirror.blogspot.com/2009/06/reliability-vs-availability.html).)

### Assumptions GFS is based on

Based on studies of real application scenarios, GFS makes the following assumptions about how it will be used:

1. GFS runs on thousands of cheap machines, which means node failures happen frequently. There must be fault-tolerance mechanisms to deal with these failures.
2. The files to be stored are usually large, around 100MB or more; GB-scale files are also common. The system must handle such large files efficiently and optimize for them.
3. The workloads' reads are mainly of two kinds:
    * Large streaming reads, usually reading hundreds of KB at a time, and more commonly 1MB or more at a time. Successive operations from the same client usually read a contiguous region of the same file.
    * Small random reads, usually reading a few KB at some random position in a file. Performance-sensitive applications often batch and sort their random reads and then read in order, which avoids seeking back and forth within a file. (As we will see later, this also reduces the number of metadata fetches, and hence the interaction with the master.)
4. The workloads' writes consist mainly of large, sequential append operations. Once a file is written, it is rarely modified, so random writes are rare, and GFS mainly optimizes for appends.
5. The system must have reasonable mechanisms to handle multiple clients concurrently writing to the same file. Files are often used as producer-consumer queues, so contention among many clients must be handled efficiently. It is exactly for this scenario that GFS implements a lock-free concurrent append.
6. High sustained bandwidth is more important than low latency. Based on this assumption, read and write work can be distributed across nodes, keeping each node's load balanced even though some individual requests incur extra latency.

<!--more-->

## Architecture

Let's look at GFS's overall architecture in detail. GFS consists of three different parts: the `master`, the `client`, and the `chunkserver`.

> * The `master` manages the whole system (including metadata management, garbage collection, and so on); a system has exactly one `master`.
> * The `chunkserver`s store the data; a system has many `chunkserver`s.
> * The `client` accepts requests from applications and completes reads, writes, and other operations by talking to the `master` and the `chunkserver`s.

Since the system has only one `master`, `client` requests to the `master` involve only metadata; data itself is exchanged directly with the `chunkserver`s, which reduces the load on the `master`. A file consists of multiple chunks, and each chunk has multiple replicas on different `chunkserver`s.

Creating files or directories only changes metadata, so these operations need to interact only with the `master`. Note that, unlike in a Linux file system, a directory is no longer stored as an inode; that is, it is not stored as data on a `chunkserver`.

To read or write file contents, the `chunkserver`s get involved. The `client` converts the byte offset of the file it wants to access into the corresponding `chunk index` and sends a request with the file name and `chunk index` to the `master`. From the file name and `chunk index`, the `master` derives a global `chunk handle`; each chunk is identified by a unique `chunk handle`. The `master` returns this `chunk handle` together with the locations of the `chunkserver`s that hold the chunk. (More than one, since a chunk has multiple replicas distributed across different `chunkserver`s. If necessary, the `master` may create a new chunk and return only after the `chunkserver`s have prepared replicas of it.)

After the `client` receives the `chunk handle` and the `chunkserver` list, it first caches this information, keyed by the file name and `chunk index`, and then issues data read or write requests to the appropriate `chunkserver`. This is just a rough outline; the concrete operations are analyzed below.

### Chunk size

Chunk size is a question worth considering. In GFS the chunk size is 64MB, which is much larger than the block size of an ordinary file system. On a `chunkserver`, each chunk replica is stored as a file, so it occupies only the space it needs, preventing waste.

A large chunk size has several advantages:

* It reduces the number of interactions between `client` and `master`.
* It reduces network overhead: since a client is likely to operate on the same chunk repeatedly, it can maintain a long-lived TCP connection to the `chunkserver`.
* With fewer chunks, the metadata is smaller, which saves memory on the `master`.

A large chunk size also brings a problem: a small file may occupy only a single chunk, so if many `client`s operate on that file concurrently, they all operate on the same chunk, and the `chunkserver`s holding that chunk become hotspots. This problem would not exist with small chunks, because a file would then span multiple chunks and operations on one file would be spread across multiple `chunkserver`s. In practice, the chance of operating on one file simultaneously can be reduced by staggering application start times.

### Metadata

The GFS `master` stores three kinds of metadata:

1. The file and chunk namespaces -- the directory structure of the whole file system and basic chunk information
2. The mapping from files to chunks
3. The concrete location of each chunk

Metadata is kept in memory and can be fetched quickly. The first two kinds are persisted through the operation log. The third kind need not be persisted, because at startup the `master` asks the `chunkserver`s for chunk locations, and chunk locations change constantly anyway, for example when new `chunkserver`s join. The new location information reaches the `master` through the routine `HeartBeat` messages sent by the `chunkserver`s.

Keeping metadata in memory guarantees that it can be fetched quickly during the `master`'s day-to-day processing. To keep the system running properly, the `master` must periodically do maintenance work, such as removing deleted chunks and migrating or re-replicating chunks, and all of these operations need metadata. A drawback of in-memory metadata is that the amount of metadata is limited by the `master`'s memory, but a sufficiently large chunk size together with prefix compression keeps the metadata footprint small.

Modifications to metadata are serialized with locks. Note that for directories, locks are acquired a bit differently than in a Linux file system: creating a file under a directory takes only a read lock on that directory, while taking a snapshot of a directory takes a write lock on it. In addition, any operation that involves a file takes read locks on all of its ancestor directories. A nice property of this scheme is that two files can be created concurrently under the same directory without conflict, because both only take read locks on the directory.

### Operation Log

The operation log persistently stores the first two kinds of metadata, so that at startup the `master` can recover the metadata from it. The log also records the order of metadata modifications, which is very helpful for reproducing concurrent operations. The log must therefore be stored reliably, and a response is returned to the `client` only after the operation log has been stored. Moreover, the operation log is kept not only on the `master`'s local disk but also replicated on remote machines, so that even if the `master` fails, another machine can serve as the `master`.

Recovering state from the operation log is time-consuming, so checkpoints are used to bound the size of the log. Each recovery starts from a checkpoint and processes only the operation log entries after that checkpoint. When taking a checkpoint, a new thread performs the checkpointing while the original thread keeps handling metadata modification requests, writing the operation log to a separate file in the meantime.

### Fault tolerance

#### Master fault tolerance

Achieved through the operation log plus checkpoints, together with a real-time hot standby called the "Shadow Master":

> * A GFS Master modification always records the operation log first and then modifies memory.
> * The Master periodically dumps its in-memory data to disk in the form of checkpoint files.
> * Real-time hot standby: a metadata modification succeeds only after it has been sent to the hot standby.

### Consistency model

Regarding consistency, first a few definitions. A file region can be in the following states:

* consistent: every replica contains the same data.
* defined: defined implies consistent, and in addition the region reflects the effect of one mutation in its entirety.
* undefined: undefined still implies consistent, but it mingles multiple mutations together. For example, suppose mutation a wants to append A1, A2 to a file and mutation b wants to append B1, B2. If the final result is A1, A2, B1, B2, the region is defined; if it is A1, B1, A2, B2, it is undefined.
* inconsistent: different replicas contain different data.

In GFS, different mutations can result in different states. For file appends (the main write operation in GFS), a certain amount of consistency is relaxed to better support concurrency; the detailed behavior is described with the concrete operations below.

### Lease mechanism

The `master` hands control to a `chunkserver` through a lease mechanism. When a chunk is written, the `master` designates one `chunkserver` holding a replica of the chunk as the `primary replica`, which controls the writes to that chunk. A lease expires after 60 seconds; if the write has not finished, the `primary replica` can extend the lease. The `primary replica` orders the writes to the chunk with serial numbers, which guarantees that all replicas execute the same operations in the same order, and hence guarantees consistency.

### Version numbers

For every mutation of a chunk, the chunk is assigned a new version number. If a replica was not properly mutated (for example, its `chunkserver` was down at the time of the mutation), it becomes a `stale replica`. When a `client` requests a chunk, `stale replica`s are ignored by the `master`, and during the `master`'s routine maintenance, `stale replica`s are deleted.

### Load balancing

To keep all `chunkserver`s under roughly equal load, the `master` uses the following mechanisms:

* First, when creating a chunk or replicating a chunk replica, it tries to keep the load balanced.
* When the replica count of a chunk drops below a threshold, it tries to create more replicas of that chunk.
* It scans the distribution across the whole system and, if it is not balanced enough, moves some replicas to rebalance.

Note that the `master` considers not only the load balance across `chunkserver`s but also the balance across racks.

## Basic operations

### Read

The read operation is already clearly described in Figure 1 of the paper and proceeds as follows (a small illustrative sketch of the offset conversion appears at the end of this document):

1. Based on the chunk size, the `client` converts `(filename, byte offset)` into `(filename, chunk index)` and sends `(filename, chunk index)` to the `master`.
2. The `master` returns `(chunk handle, locations of all up-to-date replicas)`; the `client` caches this information with `(filename, chunk index)` as the key.
3. The `client` sends `(chunk handle, byte range)` to one of the `chunkserver`s, usually the closest one.
4. The `chunkserver` returns the chunk data.

### Overwrite

Assume the `client` already knows which chunk to write (Figure 2 in the paper). The concrete steps are:

1. The `client` asks the `master` which `primary replica` holds the lease for this chunk; if no `primary replica` currently exists, the `master` grants the lease to one of the replicas.
2. The `master` returns the location of the `primary replica` and the locations of the other `chunkserver`s holding replicas of this chunk (the `secondary replica`s); the `client` caches this information.
3. The `client` pushes the data to all replicas in a pipelined fashion; pipelining is the most efficient way to utilize the bandwidth. Each replica stores the data in an LRU buffer and acknowledges receipt to the `client`.
4. The `client` sends a write request to the `primary replica`, which assigns serial numbers according to the order of the requests.
5. The `primary replica` applies the mutations to its own replica in serial-number order and asks the other `secondary replica`s to apply the same mutations; the shared serial numbers guarantee that all replicas apply the same mutations in the same order.
6. Once all `secondary replica`s have applied the mutations, they report completion to the `primary replica`.
7. The `primary replica` reports completion to the `client`. If any `secondary replica` failed to apply a mutation, that information is also sent to the `client`, which then retries the mutation, re-running steps 3-7.

If a mutation is large or crosses a chunk boundary, the client splits it into multiple writes, so other writes may be interleaved between them; this is where the undefined state can arise.

### Record Append

Record append differs from overwrite in its error handling. When a write fails, the `client` retries, but since it does not know the offset, it can only append again, which can leave duplicate records on some replicas, and different replicas may end up holding different data. To cope with this, applications must ensure data correctness through some validation mechanism of their own; for a producer-consumer queue, for instance, a consumer can filter out duplicate records using unique ids.

### Snapshot

Snapshot is a "snapshot" operation on a file or a directory: it quickly copies a file or directory. GFS implements snapshot with *copy-on-write*. First, the `master` revokes the leases on all affected chunks, so that any operation modifying those files has to contact the `master` again. It then copies the relevant metadata; the copied file points to the same chunks as the original, but the reference count of each chunk becomes greater than 1.

When a `client` later needs to write such a chunk C, the `master` notices that its reference count is greater than 1. The `master` delays its reply to the `client`, first creates a new `chunk handle` C', then asks every `chunkserver` that holds a replica of C to create a local copy of it as a replica of C', grants a lease on one replica of C', and returns C' to the `client` for the mutation.

### Delete

When a `client` requests a file deletion, GFS does not reclaim the file's space immediately. That is, the file's metadata remains, and the file's chunks are not removed from the `chunkserver`s. GFS simply records the deletion in the operation log and renames the file to a hidden name that contains the deletion time. During the `master`'s routine maintenance, files whose deletion time is more than 3 days old are removed from the metadata, together with the metadata of the corresponding chunks. Those chunks then become orphan chunks and are deleted from the `chunkserver`s during the `HeartBeat` exchanges between the `chunkserver`s and the `master`.

The benefits of this deferred deletion (the paper calls it garbage collection) are:

* In a distributed system, making sure an action executes correctly is hard, so deleting all replicas of a chunk on the spot would require complex error checking and retries. With deferred deletion, as long as the metadata is handled correctly, the last replica is guaranteed to be deleted eventually, which is very simple.
* Folding these deletions into the `master`'s routine maintenance allows them to be batched, so the amortized overhead is small.
* It guards against accidental deletion, similar to a recycle bin.

The drawback of deferred deletion is wasted space. If space is tight, a `client` can force deletion, or designate certain directories whose files are deleted immediately.
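To make step 1 of the read path concrete, here is a small illustrative Python sketch (mine, not from the paper) of how a client might translate a byte offset into a chunk index plus an intra-chunk offset, given GFS's 64MB chunk size:

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, as described above

def to_chunk_request(filename, byte_offset):
    """Map (filename, byte offset) to (filename, chunk index, offset in chunk).

    The client sends (filename, chunk_index) to the master, and later uses
    the intra-chunk offset in its (chunk handle, byte range) request to a
    chunkserver.
    """
    chunk_index = byte_offset // CHUNK_SIZE
    offset_in_chunk = byte_offset % CHUNK_SIZE
    return filename, chunk_index, offset_in_chunk

# Example: byte 200,000,000 of a hypothetical file falls in chunk 2.
print(to_chunk_request("/logs/web.log", 200_000_000))
# ('/logs/web.log', 2, 65782272)
```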
28.05178
98
0.78311
yue_Hant
0.606366
b9a547e5063b2a42c026c6c81f9ca86521a66823
3,148
md
Markdown
TODO.md
bugs84/keb
134593c027930d6b84a36d5f3d095b7aff3751d6
[ "WTFPL" ]
1
2022-01-05T07:10:56.000Z
2022-01-05T07:10:56.000Z
TODO.md
bugs84/keb
134593c027930d6b84a36d5f3d095b7aff3751d6
[ "WTFPL" ]
null
null
null
TODO.md
bugs84/keb
134593c027930d6b84a36d5f3d095b7aff3751d6
[ "WTFPL" ]
null
null
null
# TODOs

## Core functionality

- Prepare support for basic elements via out-of-the-box modules
    - for example, because this: ```assertThat(pageWithModulesPage.surname.textInput.getAttribute("value")).isEqualTo("Doe")``` is really awful.
    - `.getAttribute("value")` for an input is something I really don't like
    - We realized that `WebElement.getAttribute("value")` is not nice, and that we can solve it with ordinary prepared modules: `val input by content { module(InputModule(css("#selector"))) }`
- IMPORTANT FOR MULTIPLE TESTS: lazily start the browser inside tests
    - not when the test starts, but on first browser access
- Browser should have the possibility to directly get and set the url. This should not be required: ```browser.driver.get("localhost:8080")```
    - something like `browser.url` and `browser.setUrl()` or something similar...
- Keb should provide support for obtaining a WebDriver (Firefox, Chrome)
    - There is a library (something like "web driver manager") which provides this
    - so that the user doesn't have to investigate how to obtain a driver
- By default we don't want to close the browser after each test
    - Share the browser between tests (and make it configurable)
    - An implementation of "KebTest" will probably be needed.
    - Leave some possibility to write tests which use multiple browsers...
- We want to capture images on failed tests (maybe even on every test)
- In the at() verifier use, for example, AssertJ
    - so that we can have a nice error message when it fails
- at() - waiting
    - in at() (and to()) it should be possible to override the waiting timeout
    - at(::LongLoadingPage, wait: 30, retryInterval: 200)
- lateinit browser not initialized
    - When you create a page and content is selected into it directly, "lateinit browser has not been initialized" is thrown.
    - It should be possible to hook into the getter there and return a more explanatory message.

## Keb configuration

- in the config, the driver has to be a closure
    - to start the browser lazily
    - to be able to start the browser before each test
- KebConfig can be defined globally, or locally overridden by a test
    - "probably not so important for the beginning"
- Load KebConfig from an external file.
    - Preferably in Kotlin Script format

## Test semantics

## Other

- publish a Maven artifact
- verify that setting up a project and writing the first working tests is as simple and prepared as possible
    - samples and onboarding have to be super easy
- closing the browser: register onJVMExit()
    - fewer browsers will be left hanging as processes
    - a "weak reference" to the browser will probably be needed - to release memory

## Ideas

- Consider using a "concept of CurrentPage"
    - e.g. the browser will remember which page it is on; this can be set by the "at" method
- Consider a possibility where everything waits by default
    - even WebElement.click() - it is retried multiple times until it succeeds (with a max timeout, of course)
    - e.g. the element is covered by a loader, which is why the click fails; once the loader disappears, the click will succeed
    - to achieve this, some kind of WebElement proxy will be needed (or use a custom KebElement)
41.973333
125
0.721093
eng_Latn
0.996519
b9a5ccdbbdda5bf8c58fb0117c0ab1eab7bf3d3e
807
md
Markdown
_people/README.md
lsh/cmudig.github.io
4608e97a510c1602304ef41da5de4733a0378a45
[ "Apache-2.0" ]
null
null
null
_people/README.md
lsh/cmudig.github.io
4608e97a510c1602304ef41da5de4733a0378a45
[ "Apache-2.0" ]
null
null
null
_people/README.md
lsh/cmudig.github.io
4608e97a510c1602304ef41da5de4733a0378a45
[ "Apache-2.0" ]
null
null
null
# Adding a person

Create a new `*.md` in `_people` with a unique name for yourself. Then fill it with the following content.

```md
---
name: ...
website: ...
image: /assets/people/....jpg
role: PhD Student
advisors:
  - AdvisorFirstName AdvisorLastName
---

Fun fact.
```

We have the following roles: `Professor`, `Postdoc`, `PhD Student`, `Visiting PhD Student`, `Masters Student`, `Undergraduate Student`, and `Collaborator`. You can also add a new role if it makes sense.

To add a picture, add it to [the assets directory](../assets/people) as a JPEG image of around `400x400` pixels. Aim for about 40kb and adjust the compression if necessary.

Once someone leaves the group, add `alumni_since: XXXX` to mark them as alumni.

Please send a pull request with the changes and an admin will merge it.
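If you want to automate the resizing, here is a minimal sketch using the Pillow library (an assumption — the repo does not prescribe any tool; the file names are made up). Adjust the `quality` value until the output lands near the ~40kb target:

```python
from PIL import Image, ImageOps  # pip install Pillow

src = "photo.jpg"                   # hypothetical source image
dst = "assets/people/yourname.jpg"  # hypothetical target path

# Crop/scale to a 400x400 square, then save as JPEG with tunable compression.
img = ImageOps.fit(Image.open(src), (400, 400))
img.convert("RGB").save(dst, "JPEG", quality=80)
```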
32.28
202
0.726146
eng_Latn
0.988093
b9a681b1e370ea8db0cb7f2676120815a62bdcc3
22,612
md
Markdown
node_modules/bookshelf/CHANGELOG.md
andrewcz/dreamingofbeinganengineer
dce8bfd18a422700ae28b73c13a74315a9ab1e1c
[ "MIT" ]
2
2018-04-11T09:24:36.000Z
2018-04-30T21:33:51.000Z
node_modules/bookshelf/CHANGELOG.md
andrewcz/dreamingofbeinganengineer
dce8bfd18a422700ae28b73c13a74315a9ab1e1c
[ "MIT" ]
null
null
null
node_modules/bookshelf/CHANGELOG.md
andrewcz/dreamingofbeinganengineer
dce8bfd18a422700ae28b73c13a74315a9ab1e1c
[ "MIT" ]
1
2021-09-05T08:28:45.000Z
2021-09-05T08:28:45.000Z
## Change Log

**0.10.1** — <small>_Jun 29, 2016_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.10.0...0.10.1)

* Allows using knex 0.12 as a peerDependency
* knex instance used by bookshelf may be swapped out

**0.10.0** — <small>_Jun 29, 2016_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.9.5...0.10.0)

**Breaking Changes:**

* Removal/renaming of certain lodash functions from Model and Collection that were removed in lodash 4.
  * Collection Methods
    * removed `CollectionBase#collect` => use `CollectionBase#map` instead
    * removed `CollectionBase#foldl` => use `CollectionBase#reduce` instead
    * removed `CollectionBase#inject` => use `CollectionBase#reduce` instead
    * removed `CollectionBase#foldr` => use `CollectionBase#reduceRight` instead
    * removed `CollectionBase#detect` => use `CollectionBase#find` instead
    * removed `CollectionBase#select` => use `CollectionBase#filter` instead
    * removed `CollectionBase#all` => use `CollectionBase#every` instead
    * removed `CollectionBase#any` => use `CollectionBase#some` instead
    * removed `CollectionBase#include` => use `CollectionBase#includes` instead
    * removed `CollectionBase#contains` => use `CollectionBase#includes` instead
    * removed `CollectionBase#rest` => use `CollectionBase#tail` instead
    * renamed `CollectionBase#invoke` => `CollectionBase#invokeMap`
    * split `CollectionBase#max` into `CollectionBase#maxBy` - see the [lodash docs](https://lodash.com/docs/#max) for more explanation
    * split `CollectionBase#min` into `CollectionBase#minBy` - see the [lodash docs](https://lodash.com/docs/#min) for more explanation
  * Model Methods
    * renamed `ModelBase#pairs` => `ModelBase#toPairs`

**Other Changes:**

* Update to Lodash 4. #1287
* Registry plugin: Better support for custom relations. #1294

**0.9.5** — <small>_May 15, 2016_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.9.4...0.9.5)

* Add pagination plugin. #1183
* Fire {@link Model#event:fetched} on eagerly loaded relations. #1206
* Correct cloning of {@link Model#belongsToMany} decorated relations. #1222
* Update Knex to 0.11.x. #1227
* Update minimum lodash version. #1230

**0.9.4** — <small>_April 3, 2016_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.9.3...0.9.4)

* Include `babel-runtime` as a dependency. #1188

**0.9.3** — <small>_April 3, 2016_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.9.2...0.9.3)

* Bugfix: Restore support for `camelCase` and `colon:separated` event names. #1184

**0.9.2** — <small>_February 17, 2016_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.9.1...0.9.2)

* Permit up to Knex 0.11.0 via `peerDependencies`.
* `Model.forge` works for ES6 classes. #924
* Fix `Collection#count` for `hasMany` relations. #1115

**0.9.1** — <small>_November 4, 2015_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.9.0...0.9.1)

* {@link Events#off} can now deregister multiple methods at once. #983
* Permit Knex 0.10.0 via `peerDependencies`. #998

**0.9.0** — <small>_November 1, 2015_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.8.2...0.9.0)

* Repo no longer includes built source or generated documentation. Release script updated to include these only in the tagged release commit. #950
* {@link Model#previous} returned `undefined` instead of `null` for non-existent attributes.
* Update tests and documentation to confirm that `null` (rather than `undefined`) is returned from {@link Model#fetch} and {@link Collection#fetchOne}.
* Fix error in virtual plugin - #936
* Correct error updating parsed/formatted {@link Model#idAttribute} after successful `insert` operation. #955
* Many documentation fixes.

**0.8.2** — <small>_August 20, 2015_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.8.1...0.8.2)

* ES6/7: Move code base to `/src` — code is now compiled into `/lib` via [Babel](https://babeljs.io/).
* Add `collection.count`, `model.count` and `Model.count`.
* Add `model.refresh`. #796
* Prevent `fetch` and `refresh` from trying to add JSON attributes to a `where` clause. #550 #778
* Virtuals plugin now supports `{patch: true}` argument to `model.save`. #542
* Restored `model.clone` and `collection.clone`, which were not previously working. #744
* Allow `bookshelf.Collection` to be modified and extended by plugins (so that relations and `fetchAll` operations will return the extended instance). #681 #688
* Fix `model.timestamps` behaviour which deviated from documentation. Also ensure that `createdAt` is set when `{method: "insert"}` is passed explicitly. #787
* Calling `create` on a `through` relationship no longer tries to make a pivot object. Previously this would attempt to create an object with invalid foreign keys. #768
* Parse foreign keys set during `create` in a relation. #770

**0.8.1** — <small>_May 12, 2015_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.8.0...0.8.1)

* Fix for regression in `initialize` not being called in Collection constructor, #737.
* Fix for regression removing `omitPivot` in 0.8, #721.
* Added `serialize`, a method which contains toJSON logic for easier customization.

**0.8.0** — <small>_May 1, 2015_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.9...0.8.0)

* Dropped Backbone dependency
* More specific errors throughout, #522
* Support {require: true} for model.destroy #617
* Add lifecycle events on pivot models for belongsToMany, .through #578
* Allows for select/column calls in the query builder closure, #633.
* Added per-constructor error classes #694 (note: this will not work in CoffeeScript).

**Breaking Changes:**

* Removed the `__super__` internal property on the constructor; this shouldn't have been something you were relying on anyway.

**0.7.9** — <small>_Oct 28, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.8...0.7.9)

* Fix for regression in columns / eager fetch query constraints (#510).

**0.7.8** — <small>_Oct 28, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.7...0.7.8)

* Timestamp `created_at` is now saved with any insert.
* Fix for regression created by #429.
* New events, `attaching`, `attached`, `detaching`, `detached` #452.
* Ability to specify custom column names in morphTo, #454
* Fix for stack overflow with model list as arguments, #482
* Modified location of eager fetch query constraints internally.

**0.7.7** — <small>_July 23, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.6...0.7.7)

* Fix for formatting on polymorphic keys (#429).
* Added a resolve method for specifying a custom resolver function for the registry plugin.

**0.7.6** — <small>_June 29, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.5...0.7.6)

Add `omitPivot` flag on toJSON options for omitting the `_pivot_` keys in `through` and `belongsToMany` relations (#404).
**0.7.5** — <small>_June 23, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.4...0.7.5)

Fix missing NotFoundError & EmptyError on Model & Collection, respectively (#389, #399).

**0.7.4** — <small>_June 18, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.3...0.7.4)

Added `bookshelf.model(name, protoProps, [staticProps])` syntax for registry plugin.

**0.7.3** — <small>_June 17, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.2...0.7.3)

Fix for collection dropping models early in set, #376.

**0.7.2** — <small>_June 12, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.1...0.7.2)

Pass a cloned copy of the model's attributes to `format` rather than the original, related to #315.

**0.7.1** — <small>_June 10, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.7.0...0.7.1)

Ensure the knex version >= 0.6.10, where a major regression affecting column names was fixed.

**0.7.0** — <small>_June 9, 2014_</small>

* Added {@link Model#fetchAll}, for fetching a collection of models from a model.
* Added {@link Model#where}, as a shortcut for the most commonly used {@linkplain Model#query query method}.
* Initializing via a plain options object is deprecated; you must now pass in an initialized knex instance.
* Adding typed errors (#221).
* Upgrade to support knex 0.6.x

**0.6.12** — <small>_June 5, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.11...0.6.12)

Fix for eager loaded belongsTo relation bug with custom parse/format (#377).

**0.6.11** — <small>_June 4, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.10...0.6.11)

Temporarily add knex to `peerDependencies` until 0.7 is released to support knex 0.6 and there exists a better internal method of doing a semver check. Fix for belongsTo relation bug (#353).

**0.6.10** — <small>_April 3, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.9...0.6.10)

* Bumping dependencies, including upgrading to Bluebird 1.2, trigger-then 0.3, fixing an erroneous "unhandledRejection" (#310).
* `fetchOne` properly resets the query on the collection (#300).

**0.6.9** — <small>_April 3, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.8...0.6.9)

Only prefix model fields with the "tableName" after format has been called (#308).

**0.6.8** — <small>_March 6, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.7...0.6.8)

* Virtuals plugin may now accept a hash of attributes to set.
* Properly fix issue addressed in 0.6.7.

**0.6.7** — <small>_March 2, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.6...0.6.7)

Bugfix for edge case for eager loaded relations and `relatedData` settings.

**0.6.6** — <small>_March 1, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.5...0.6.6)

Bugfix for registry plugin, resolving correct models for "through" relations. (#260)

**0.6.5** — <small>_February 28, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.4...0.6.5)

* Added {@link Collection#reduceThen} as a passthrough to Bluebird's "reduce" method with models.
* Options are now passed to "plugin" method. (#254)
* Bugfix for registry plugin.
(#259)

**0.6.4** — <small>_February 11, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.3...0.6.4)

Adds static method {@link Model.collection Model.collection()} as a shortcut for creating a collection with the current model.

**0.6.3** — <small>_February 9, 2014_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.2...0.6.3)

* Added a `Relation#updatePivot` method for updating tables on a "belongsToMany" relation. (#134, #230)
* Allow mutating the options for passing constraints to eager loaded relations. (#151)
* All keys of an object passed into sync are properly prefixed before sync. (#177)
* Clearer error messages for debugging. (#204, #197)
* Fixed error message for updates that don't affect rows. (#228)
* Group by the correct key on "belongsTo.through" relations. (#214)
* Ability to only use `created_at` or `updated_at` as timestamp properties. (#158)
* Numerous documentation corrections, clarifications, enhancements.
* Bumped Bluebird dependency to ~1.0.0.

**Plugins:**

* Added the `registry` plugin for registering models as strings, helping with the circular dependency problem.
* Added the `virtuals` plugin for getting/setting virtual (computed) properties on the model.
* Added the `visibility` plugin for specifying a whitelist/blacklist of keys when a model is serialized with toJSON.

**0.6.2** — <small>_December 18, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.1...0.6.2)

* Debug may now be passed as an option in any sync method, to log queries, including relations.
* Save now triggers an error in updates with no affected rows. (#119)
* The `model.id` attribute is only set on insert if it's empty. (#130)
* Ensure eager loaded relations can use attach/detach. (#120)

**0.6.1** — <small>_November 26, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.6.0...0.6.1)

Fixes bug with promise code and saving event firing, where promises are not properly resolved with ".all" during saving events.

**0.6.0** — <small>_November 25, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.8...0.6.0)

* Updating dependency to knex.js 0.5.x
* Switched from when.js to [bluebird](https://github.com/petkaantonov/bluebird) for promise implementation, with shim for backward compatibility.
* Switched from underscore to lodash, for semver reliability.

**0.5.8** — <small>_November 24, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.7...0.5.8)

* Parse models after all relations have been eager loaded, for appropriate column name matching (thanks [@benesch](https://github.com/benesch)) (#97)
* Specify table for `withRelated` fetches to prevent column naming conflicts (#96).
* Fix for polymorphic relation loading (#95).
* Other documentation tweaks and internal code cleanup.

**0.5.7** — <small>_October 11, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.6...0.5.7)

The "fetching" event is now fired on eager loaded relation fetches.

**0.5.6** — <small>_October 10, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.5...0.5.6)

The `options.query` now contains the appropriate `knex` instance during the "fetching" event handler.

**0.5.5** — <small>_October 1, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.4...0.5.5)

An eager loaded [morphTo](#Model-morphTo) relation may now have child relations nested beneath it that are properly eager loaded, depending on the parent.
**0.5.4** — <small>_October 1, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.3...0.5.4)

* Fix issue where the `relatedData` context was not appropriately maintained for subsequent {@link Collection#create} calls after an eager load (#77).
* Documentation improvements, encouraging the use of {@link Model#related} rather than calling a relation method directly, to keep association with the parent model's {@link Model#relations relations} hash.

**0.5.3** — <small>_September 26, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.2...0.5.3)

The `columns` explicitly specified in a fetch are no longer passed along when eager loading relations, fixes (#70).

**0.5.2** — <small>_September 22, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.1...0.5.2)

Fixed incorrect eager loading in `belongsTo` relations (#65).

**0.5.1** — <small>_September 21, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.5.0...0.5.1)

Fixed incorrect eager loading in `hasOne` relations (#63).

**0.5.0** — <small>_September 20, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.3.1...0.5.0)

**Major Breaking Changes:**

* Global state is no longer stored in the library; an instance is returned from `Bookshelf.initialize`, so you will need to call this once and then reference this `Bookshelf` client elsewhere in your application.
* Lowercasing of `bookshelf.initialize`, `bookshelf.knex`, `bookshelf.transaction`.
* During the lifecycle events, such as "fetching", "saving", or "destroying", the model no longer contains the active query builder instance at the time it is saved. If you need to modify the query builder chain inside of an event handler, you may now use `options.query` inside the event handlers.

**Other Changes:**

* Added `tableName` for all queries, so joins use the correct id (#61).
* The `attach` & `detach` now remove models from the associated collection, as appropriate (#59).
* A `withPivot` no longer accepts an object to specify the keys for the returned pivot items; if you wish to specify how these pivot objects are defined on the object, a custom [toJSON](#Model-toJSON) is your best bet.
* Added {@link Collection#invokeThen} and {@link Collection#mapThen} as convenience helpers for `Promise.all(collection.invoke(method, args*))` and `Promise.all(collection.map(method, iterator, [context]))`, respectively.
* Added a `Bookshelf.plugin` method, for a standard way to extend Bookshelf instances.
* A re-written modular architecture, to move the library toward becoming a database agnostic "data mapper" foundation, with the ability to form relations between different data stores and types, not just SQL (although SQL is the primary focus for now). Also, support for AMD, for eventual use outside of the Node.js runtime (with WebSQL and the like).

**0.3.1** — <small>_August 29, 2013_</small> — [Diff](https://github.com/tgriesser/bookshelf/compare/0.3.0...0.3.1) — [Docs](http://htmlpreview.github.com/?https://raw.github.com/tgriesser/bookshelf/0.3.0/index.html)

Fixed regression in `belongsToMany` custom column name order.

**0.3.0** — <small>_August 28, 2013_</small>

Support for a {@link Model#through} clause on various model relations. Creating a model from a related collection maintains the appropriate relation data (#35). Support for a `{patch: true}` flag on save, to only update specific saved attributes.
Added a `fetchOne` method, for pulling out a single model from a collection, mostly useful for related collection instances.

Updated to Knex "0.2.x" syntax for insert / returning.

Ability to specify a `morphValue` on {@link Model#morphOne} or {@link Model#morphMany} relations.

Internal refactor of relations for more consistent behavior.

**0.2.8** — <small>_August 26, 2013_</small>

Some minor fixes to make the `Sync` methods more consistent in their behavior when called directly (#53).

**0.2.7** — <small>_August 21, 2013_</small>

Timestamp for `created_at` is not set during an "update" query, and the update where clause does not include the `idAttribute` if it isn't present (#51).

**0.2.6** — <small>_August 21, 2013_</small>

Fixes bug with query function feature added in `0.2.5`, mentioned in (#51).

**0.2.5** — <small>_August 19, 2013_</small>

The {@link Model#query} method may now accept a function, for even more dynamic query building (#45).

Fix for relations not allowing "0" as a valid foreign key value (#49).

**0.2.4** — <small>_July 30, 2013_</small>

More consistent query resetting, fixing query issues on post-query event handlers.

The `toJSON` is only called on a related model if the method exists, allowing for objects or arrays to be manually specified on the `relations` hash and serialized properly on the parent's `toJSON`.

**0.2.3** — <small>_July 7, 2013_</small>

Fixing bug where `triggerThen` wasn't actually being used for several of the events as noted in the 0.2.1 release.

**0.2.2** — <small>_July 2, 2013_</small>

The Model's `related` method is now a no-op if the model doesn't have the related method.

Any `withPivot` columns on many-to-many relations are now prefixed with `_pivot` rather than `pivot` unless named otherwise, for consistency.

The `_reset` is not called until after all triggered events so that `hasChanged` can be used on the current model state in the "created", "updated", "saved", and "destroyed" events.

Eager queries may be specified as an object with a function, to constrain the eager queries:

    user.fetch({withRelated: ['accounts', {
      'accounts.settings': function(qb) { qb.where('status', 'enabled'); }
    }, 'other_data']}).then(...

**0.2.1** — <small>_June 26, 2013_</small>

Using `triggerThen` instead of `trigger` for "created", "updated", "saved", "destroyed", and "fetched" events - if any async operations are needed _after_ the model is created but before resolving the original promise.

**0.2.0** — <small>_June 24, 2013_</small>

Resolve Model's `fetch` promise with `null` rather than undefined.

An object of `query` constraints (e.g. `{where: {...}, orWhere: {...}}`) may be passed to the query method (#30).

Fix for empty eager relation responses not providing an empty model or collection instance on the `model.relations` object.

**0.1.9** — <small>_June 19, 2013_</small>

Resolve Model's `fetch` promise with `undefined` if no model was returned.

An array of "created at" and "updated at" values may be used for `hasTimestamps`.

Format is called on the `Model#fetch` method.

Added an `exec` plugin to provide a node callback style interface for any of the promise methods.

**0.1.8** — <small>_June 18, 2013_</small>

Added support for polymorphic associations, with `morphOne`, `morphMany`, and `morphTo` model methods.

**0.1.7** — <small>_June 15, 2013_</small>

Bugfix where `detach` may be used with no parameters to detach all related items (#19).
**0.1.6** — <small>_June 15, 2013_</small>

Fixing bug allowing custom `idAttribute` values to be used in eager loaded many-to-many relations (#18).

**0.1.5** — <small>_June 11, 2013_</small>

Ensuring each of the `_previousAttribute` and `changed` values are properly reset on related models after sync actions.

**0.1.4** — <small>_June 10, 2013_</small>

Fixing issue with `idAttribute` not being assigned after database inserts.

Removing various aliases for {@link Events} methods for clarity.

**0.1.3** — <small>_June 10, 2013_</small>

Added {@link Model#hasChanged}, {@link Model#previous}, and {@link Model#previousAttributes} methods, for getting the previous value of the model since the last sync.

Using `Object.create(null)` for various internal model objects dealing with user values.

Calling {@link Model#related} on a model will now create an empty related object if one is not present on the `relations` object.

Removed the `{patch: true}` option on save, instead only applying defaults if the object `isNew`, or if `{defaults: true}` is passed.

Fix for `model.clone`'s relation responses.

**0.1.2** — <small>_May 17, 2013_</small>

Added `triggerThen` and `emitThen` for promise based events, used internally in the "creating", "updating", "saving", and "destroying" events.

Docs updates, fixing `{patch: true}` on `update` to have intended functionality.

A model's `toJSON` is now correctly called on any related properties.

**0.1.1** — <small>_May 16, 2013_</small>

Fixed bug with eager loaded `belongsTo` relations (#14).

**0.1.0** — <small>_May 13, 2013_</small>

Initial Bookshelf release.
68.521212
586
0.723554
eng_Latn
0.949041
b9a6b1539eae19c283d0cd5a794c634d4326e926
23,119
md
Markdown
playlists/cumulative/37i9dQZF1DX3nNRJvSufrk.md
mackorone/spotify-playlist-archive
1c49db2e79d0dfd02831167616e4997363382c20
[ "MIT" ]
179
2019-05-30T14:38:31.000Z
2022-03-27T15:10:20.000Z
playlists/cumulative/37i9dQZF1DX3nNRJvSufrk.md
mackorone/spotify-playlist-archive
1c49db2e79d0dfd02831167616e4997363382c20
[ "MIT" ]
78
2019-06-16T21:38:29.000Z
2022-03-01T18:36:42.000Z
playlists/cumulative/37i9dQZF1DX3nNRJvSufrk.md
mackorone/spotify-playlist-archive
1c49db2e79d0dfd02831167616e4997363382c20
[ "MIT" ]
96
2019-06-16T15:04:25.000Z
2022-03-01T17:38:47.000Z
[pretty](/playlists/pretty/37i9dQZF1DX3nNRJvSufrk.md) - cumulative - [plain](/playlists/plain/37i9dQZF1DX3nNRJvSufrk) - [githistory](https://github.githistory.xyz/mackorone/spotify-playlist-archive/blob/main/playlists/plain/37i9dQZF1DX3nNRJvSufrk) ### [Footwork/Juke](https://open.spotify.com/playlist/37i9dQZF1DX3nNRJvSufrk) > Can you keep up? Get moving to the rapid rhythms of Footwork & Juke, originating in 1990s Chicago\. Cover: DJ Manny | Title | Artist(s) | Album | Length | Added | Removed | |---|---|---|---|---|---| | [+3 \(feat\. DJ Rashad, DJ Paypal & Nasty Nigel\)](https://open.spotify.com/track/7LjIWJdk5Z70ZoclGOSOat) | [Nick Hook](https://open.spotify.com/artist/4ICbI408d4uYagVEL3xf7S), [DJ Rashad](https://open.spotify.com/artist/4zGBj9dI63YIWmZkPl3o7V), [DJ Paypal](https://open.spotify.com/artist/4hH4fEXPg3qpTDlmdNOO01), [Nasty Nigel](https://open.spotify.com/artist/42W0OUrWwVQXuHfKan5R49) | [Relationships](https://open.spotify.com/album/0x1yjRNnoer1e1H3bJOUiI) | 3:32 | 2020-12-03 | | | [147](https://open.spotify.com/track/60I8YrrHXJnfPmo9U8yBnu) | [Slick Shoota](https://open.spotify.com/artist/2P1OqKNHmAOg9RfAufNNkR) | [VIP VAULTS](https://open.spotify.com/album/7dQjTAus3GtAzuFhAiEPHz) | 4:31 | 2020-12-03 | | | [1luv](https://open.spotify.com/track/2VpNyk111vQhDyHlDwv6qM) | [DJ Chap](https://open.spotify.com/artist/3WbV0kiFoU7G5uLkAdV1YA) | [Footwork Frenzy Ep](https://open.spotify.com/album/6kyXFMA96qp261FU2pnksI) | 4:43 | 2020-12-03 | | | [8 Bit Shit](https://open.spotify.com/track/1Fugmy42LXbsW0IsJJcr0m) | [Heavee](https://open.spotify.com/artist/3bTrwZAKTLYI9zozCH6zxw) | [Next Life](https://open.spotify.com/album/2OVxb7gOjFHj7vN7oAt7qC) | 4:37 | 2020-12-03 | | | [Afrika Jungle Them](https://open.spotify.com/track/6FHGAzQKr7CnrmlRwWnjZd) | [DSS](https://open.spotify.com/artist/2T4IqxlbDbMsjHF0kljX0f) | [Afrika Jungle Them EP](https://open.spotify.com/album/3ktYpr3qQwRxWmVpKlqkNZ) | 3:12 | 2020-12-03 | | | [All I See Is Red](https://open.spotify.com/track/0KVlbLCYP90RqR0PKd9YMS) | [DJ Clent](https://open.spotify.com/artist/5GcEUbBsdWf1Jf7jQEA5Mv) | [All I See Is Red](https://open.spotify.com/album/2yd17ccvJmGS9MRAjceY0z) | 5:17 | 2020-12-03 | | | [Animosty](https://open.spotify.com/track/2uO6V4OOr08LfU5DSQDchL) | [Teklife](https://open.spotify.com/artist/1GoZKzpOpwfoZLj1W6sjeg), [DJ Earl](https://open.spotify.com/artist/3Y6Xd3ZOlhkroMrz1Bmo0Y) | [ON LIFE, Vol\. 
2](https://open.spotify.com/album/6PUO9tb1HaXV7uzka0VtMu) | 3:28 | 2020-12-03 | | | [Another Chance \- SBF17](https://open.spotify.com/track/1XHBrciEOk88g5toN30LNS) | [DJ Earl](https://open.spotify.com/artist/3Y6Xd3ZOlhkroMrz1Bmo0Y) | [Another Chance \(SBF17\)](https://open.spotify.com/album/7ddsu7Hd6aRxDYG0KTYIy9) | 3:03 | 2021-02-12 | | | [Baaaaaa \- Taken from Bass + Funk & Soul](https://open.spotify.com/track/0ltuVwIisCsCSbdro4pdUb) | [DJ Earl](https://open.spotify.com/artist/3Y6Xd3ZOlhkroMrz1Bmo0Y) | [Baaaaaa \(Taken from Bass + Funk & Soul\)](https://open.spotify.com/album/4TY7Zqenlhjmsb7ZHyqnfO) | 2:30 | 2020-12-03 | | | [Bangs & Works](https://open.spotify.com/track/32QRnB5xxdfYnrtc0e3tYK) | [DJ Trouble](https://open.spotify.com/artist/33KsPfISa6inRnrPvWutSy) | [Bangs & Works, Vol.1: A Chicago Footwork Compilation](https://open.spotify.com/album/7jm8RDfHY8ezdu5zEn4I2y) | 3:00 | 2020-12-03 | | | [Battle](https://open.spotify.com/track/78Mwf97KESI1Ls0FPD5Wil) | [DJ Orange Julius](https://open.spotify.com/artist/4DiPpabfaBSsHYvjlPkazH) | [Some Pulp \- EP](https://open.spotify.com/album/5JychXyvRnufCvGYLKid6s) | 3:52 | 2020-12-03 | | | [Below Zero](https://open.spotify.com/track/3I4bWAn64bYRlNankRD511) | [DJ Nate](https://open.spotify.com/artist/5tefnddMVyra0vGqyFVEjM) | [Da Trak Genious](https://open.spotify.com/album/08fSEliSx03pGtS0C9RMkl) | 2:49 | 2020-12-03 | | | [Big Booty Savage](https://open.spotify.com/track/7fQ6AGVEcds2qwnQeAdNyZ) | [EQ Why](https://open.spotify.com/artist/2XEjbBHqhnBlfydDBUp1Rf) | [Juke Pack Vol.1](https://open.spotify.com/album/0gMUmMf0xpZjctgUUmhj0p) | 2:37 | 2020-12-03 | | | [Bounce](https://open.spotify.com/track/2by9dE80FvGCBjyQxd2zJo) | [Starski and Clutch](https://open.spotify.com/artist/0jGUP5FLOFgZ0nbuQhVGCe) | [Players, Ballers, Rollers](https://open.spotify.com/album/3TCqiFkNsX6O9NfJwKcgu1) | 2:25 | 2020-12-03 | | | [Bounce That Booty \- Jackmaster Werks Remix](https://open.spotify.com/track/6Ta2yIK2q5GAs2tqTLRLX5) | [DJ Deeon](https://open.spotify.com/artist/5wY9R35VmZOg7NxQvKJXdH), [Jackmaster Werks](https://open.spotify.com/artist/2eRGQtdPsIbK1HwWdWMPJN) | [Bounce That Booty \(Jackmaster Werks Remix\)](https://open.spotify.com/album/77SNTASc4iAYl3y5WfAZOS) | 2:44 | 2020-12-03 | | | [Braxe Traxe \- SBF13](https://open.spotify.com/track/7nv1TaJwLtUdbYbRQP95oG) | [Big Dope P](https://open.spotify.com/artist/0eebKLG13kCWzqNI1LItJe) | [Braxe Traxe \(SBF13\)](https://open.spotify.com/album/4XxVcPT88m47hDjVmAWQyr) | 2:05 | 2020-12-03 | | | [Bring It Back](https://open.spotify.com/track/0NesKGMUOHjOoPI6KWNLKT) | [DJ Orange Julius](https://open.spotify.com/artist/4DiPpabfaBSsHYvjlPkazH) | [The Grove](https://open.spotify.com/album/0KRdcfl8C4iPEhmw0Qgwce) | 3:37 | 2020-12-03 | | | [BS6](https://open.spotify.com/track/6TvNg91oTRyHX6zN676dK5) | [Hyroglifics](https://open.spotify.com/artist/6hNELDwN2cBEdL74cpXKc0), [Sinistarr](https://open.spotify.com/artist/1AqybHsTw984feND8RwcCe) | [BS6](https://open.spotify.com/album/6NeoDlP2hzdBFRQdG8hLQF) | 3:45 | 2020-12-03 | | | [Burnin Shoes](https://open.spotify.com/track/2Ce4WzaMgZ2CexSmkMfRfy) | [BSN Posse](https://open.spotify.com/artist/1fnlGaoXeWH8RMPVKR2gBU), [Jon1st](https://open.spotify.com/artist/4rkqsUdw0a2YaadfuNM7zF) | [Rituals EP](https://open.spotify.com/album/653gnuoXpGfwuaVcHX0J5N) | 5:16 | 2020-12-03 | | | [Burnin Ya Boa \(feat\. 
DJ Manny\)](https://open.spotify.com/track/3CJhotfLdzJbbjAWlK7YFZ) | [DJ Taye](https://open.spotify.com/artist/4T1sY4aibm24hxfz9JnI7c) | [Move Out EP](https://open.spotify.com/album/4NgpWCSva4J1jkTUz2gn7v) | 3:30 | 2020-12-03 | | | [Can't Fake My Feelings \(Part\. 2\)](https://open.spotify.com/track/2rptAk77V1e0SXaPjweRba) | [Seimei](https://open.spotify.com/artist/7zZ9WviThpO5AVYmigIf3V) | [MODUS OPERANDI 005 \(mixed by TRAXMAN\)](https://open.spotify.com/album/6yHBIYcEQaKzafljkBvJia) | 4:38 | 2020-12-03 | | | [Chicago \(feat\. Lefty & Nikes\)](https://open.spotify.com/track/7iSYeJOtNFuPwNg0gEStEL) | [BSN Posse](https://open.spotify.com/artist/1fnlGaoXeWH8RMPVKR2gBU), [Lefty](https://open.spotify.com/artist/40VHJIhcpp7hQh4a2fGo4W), [Nikes](https://open.spotify.com/artist/33H4iqgsqXDDWY6FC9NMLc) | [\#UOAI](https://open.spotify.com/album/03PPB3GsJaP65KjNBOKEm2) | 4:20 | 2020-12-03 | | | [Divine](https://open.spotify.com/track/34MzzTrgINt8x5tGS7zRUT) | [Jana Rush](https://open.spotify.com/artist/3lyUIhz1kSCrlqsaMlXAZr) | [Pariah](https://open.spotify.com/album/2KLD67ZxiIL1cwNPaMz7q9) | 5:44 | 2020-12-03 | | | [Dnb Spaceout](https://open.spotify.com/track/6nAOS8cst8SYqzMAyrRtMv) | [DJ Tre](https://open.spotify.com/artist/6so1AgXg57ZYwyhe9dhhYS) | [Next Life](https://open.spotify.com/album/2OVxb7gOjFHj7vN7oAt7qC) | 3:13 | 2020-12-03 | | | [Do Right](https://open.spotify.com/track/7lMjqRbsChqroWRRkhw3ge) | [Slick Shoota](https://open.spotify.com/artist/2P1OqKNHmAOg9RfAufNNkR) | [Do Right](https://open.spotify.com/album/79oVcdOfST9iLRlqenoJdl) | 3:27 | 2020-12-03 | | | [DON'T JUST STAND THERE](https://open.spotify.com/track/3tR8VBrDIU6emD1coGfg3I) | [Teklife](https://open.spotify.com/artist/1GoZKzpOpwfoZLj1W6sjeg), [DJ Big Hank](https://open.spotify.com/artist/1d7okU7Ufb7pYnCdVT6CnK), [Sirr TMO](https://open.spotify.com/artist/7wMCA0Cx8O1adCSiTV1IMY) | [ON LIFE](https://open.spotify.com/album/3NYdL4BxJhHiA0IjBK9NbR) | 3:26 | 2020-12-03 | | | [Drank, Kush, Barz](https://open.spotify.com/track/5xFy8h9eEJUjOIXGKRE19C) | [DJ Rashad](https://open.spotify.com/artist/4zGBj9dI63YIWmZkPl3o7V), [Spinn](https://open.spotify.com/artist/5gmgJUPTu5ApaV6Swjfb20) | [Double Cup](https://open.spotify.com/album/4J7qkorMbPmJQy79SntDA8) | 3:36 | 2020-12-03 | | | [Elevate](https://open.spotify.com/track/11n4LmYCiLsXPfAO6t5Srd) | [Druguse](https://open.spotify.com/artist/7cacQtmSGJSf7HtEslj0xW) | [Hood Rich Life](https://open.spotify.com/album/2U51TV6gM2Sm3mknG4Bf1P) | 3:28 | 2020-12-03 | | | [Erotic Heat](https://open.spotify.com/track/7cw7ln9b7LNWNSb5sf1mx9) | [Jlin](https://open.spotify.com/artist/23QKqAkKwti9zBiac6RFBA) | [Dark Energy](https://open.spotify.com/album/2yHEmN9QGuFbFpWt21BJtd) | 4:23 | 2020-12-03 | | | [Feel](https://open.spotify.com/track/1iKW2yrouUd2E3tK4tj8Tu) | [Xyla](https://open.spotify.com/artist/7CmkZcKpESltjho1LZJgnb) | [Ways](https://open.spotify.com/album/2PqGGOZm4IPKiXzLZpJULH) | 4:37 | 2020-12-03 | | | [Fire Man](https://open.spotify.com/track/2yyJvnqLkcX87CovivBUos) | [DJ Orange Julius](https://open.spotify.com/artist/4DiPpabfaBSsHYvjlPkazH) | [Pleasure](https://open.spotify.com/album/2ciPFYPEsMR8zbswFr82hy) | 3:12 | 2020-12-03 | | | [Flicka](https://open.spotify.com/track/7BMJLV7eWICgcb1ouJAhq3) | [Dj Spaldin](https://open.spotify.com/artist/2TWLvBrhwsGGnd10vmtIBj) | [Ww3](https://open.spotify.com/album/4QY4yD4BeJPqJPskLtbEDB) | 3:34 | 2020-12-03 | | | [Footworkin On Air](https://open.spotify.com/track/5YyTiraUCPWYDke6iwkMaz) | 
[Traxman](https://open.spotify.com/artist/0KyFKunOclAI5jah1T55lh) | [Da Mind Of Traxman](https://open.spotify.com/album/20gJfqZovoaiO5AmN1hoSV) | 4:01 | 2020-12-03 | | | [Footwurk Homicide](https://open.spotify.com/track/26weQsYyIquMBNbU9uQTa7) | [DJ Nate](https://open.spotify.com/artist/22vxwGMACJbp5JOQ2G6OpT) | [Da Trak Genious](https://open.spotify.com/album/7dWfzFNIGLhFBt0KRrPkyB) | 2:48 | 2020-12-03 | | | [Forgotten](https://open.spotify.com/track/69L3zcmAXAW09qTz7HoQlX) | [DJ FLP](https://open.spotify.com/artist/7mcrgPUbytCnGDjt7PYXCA) | [Intuition](https://open.spotify.com/album/7rgcAQIrNsHCPRp8p7CWpM) | 6:03 | 2020-12-03 | | | [fucK iT uP](https://open.spotify.com/track/5mZa0ucjIfMPpa2XFhdF16) | [Teklife](https://open.spotify.com/artist/1GoZKzpOpwfoZLj1W6sjeg), [DJ Manny](https://open.spotify.com/artist/5whJkWAzwCYfeetVpUJKn7) | [ON LIFE, Vol\. 2](https://open.spotify.com/album/6PUO9tb1HaXV7uzka0VtMu) | 2:42 | 2020-12-03 | | | [Get Down Lil Bit \- Street Bangers Factory 16](https://open.spotify.com/track/1KzKhPENPl7U9Mi25Qo7ir) | [Traxman](https://open.spotify.com/artist/0KyFKunOclAI5jah1T55lh) | [Get Down Lil Bit \(Street Bangers Factory 16\)](https://open.spotify.com/album/75vio1xSvsEGN3oC9veBzn) | 4:50 | 2020-12-03 | | | [Get Off Me \(Betta Get Back\) \- Basic Rhythm Remix](https://open.spotify.com/track/4v016wLhvKXHjQk2jG4ZQe) | [DJ Nate](https://open.spotify.com/artist/5tefnddMVyra0vGqyFVEjM), [Basic Rhythm](https://open.spotify.com/artist/3L3DtTvIVJ9yiQIOEeGCF2) | [Get Off Me \(Betta Get Back\) \[Basic Rhythm Remix\]](https://open.spotify.com/album/3VA9StGsg25mohWjRqHUr2) | 3:06 | 2020-12-03 | | | [Get the Money \- SBF14](https://open.spotify.com/track/5nvE83ieA4Qd74wU6cYtE0) | [DJ Manny](https://open.spotify.com/artist/5whJkWAzwCYfeetVpUJKn7) | [Get the Money \(SBF14\)](https://open.spotify.com/album/6CjXD6nyCZCEP867ZiqwJg) | 3:13 | 2020-12-03 | | | [Gimme Some Mo](https://open.spotify.com/track/1Ah9GZskjhqGdrOMN4zWwj) | [DJ Taye](https://open.spotify.com/artist/4T1sY4aibm24hxfz9JnI7c), [UNIIQU3](https://open.spotify.com/artist/5aR8qSaApKChlZvzB0Jfpx) | [Still Trippin'](https://open.spotify.com/album/7xDFC6tA0ADGOv3NmIM7rE) | 4:35 | 2020-12-03 | | | [Give It to Me](https://open.spotify.com/track/75MGUYTxhledkSiuOmjmU0) | [DJ Manny](https://open.spotify.com/artist/5whJkWAzwCYfeetVpUJKn7) | [Trackaholic, Vol\. 1 \- EP](https://open.spotify.com/album/3W8TE3M9M5QPsnm4wj06wZ) | 3:49 | 2020-12-03 | | | [Good Days](https://open.spotify.com/track/6vAMZ2r4RwgoC6L47owv1U) | [DJ Swisha](https://open.spotify.com/artist/3rnWXUmpJQJzzP3TIoqp8H) | [Perfecto](https://open.spotify.com/album/3SGneAajBWiCia1mHvVs2w) | 2:49 | 2020-12-03 | | | [Grateful](https://open.spotify.com/track/0gSaVbrG4PIrmvyL5EurFE) | [Kush Jones](https://open.spotify.com/artist/5ifmtTvKK5Pfk6K1b0eHZm) | [Sleep](https://open.spotify.com/album/1tDaiSq1p9ZLrlnT4QGkIg) | 3:15 | 2020-12-03 | | | [Heavy Heat](https://open.spotify.com/track/0EoOfrsL0YyA0DIbkBlk0M) | [RP Boo](https://open.spotify.com/artist/678aHai0twQ5ZJcqO1KYWl) | [Bangs & Works Vol\. 
2 \(The Best Of Chicago Footwork\)](https://open.spotify.com/album/6EShA1FkOyix9LWx3Qnupt) | 3:43 | 2020-12-03 | | | [Hip Thruster](https://open.spotify.com/track/2lsLDBItN7XZ1e92K5mzBc) | [NameBrandSound](https://open.spotify.com/artist/65kgJ8N0DY3S5XcMAtOSmD) | [Nowadays Pressure](https://open.spotify.com/album/2iM9f33JfLhqJofIu2K1mD) | 3:48 | 2020-12-03 | | | [HNADFUL OF SAND](https://open.spotify.com/track/4qpbXynEqm71wahVnnYYRP) | [x.nte](https://open.spotify.com/artist/3dwRwPEStsaBVjdZp9PllA), [Elevation](https://open.spotify.com/artist/61zHgRKaKFtSxh8kNGIICY), [Empress Lex](https://open.spotify.com/artist/7q7s8rwydoB8ZmcTut10Bm), [NO EYES](https://open.spotify.com/artist/17kvA3T2g0xqLYb67vuCEm) | [HNADFUL OF SAND](https://open.spotify.com/album/0VvFwyjGybjRMlzKvRn3HV) | 2:30 | 2020-12-11 | | | [ICE CREAM](https://open.spotify.com/track/5Bs3ufnc5SfXGPYQdeLrGx) | [Teklife](https://open.spotify.com/artist/1GoZKzpOpwfoZLj1W6sjeg), [DJ Chap](https://open.spotify.com/artist/3WbV0kiFoU7G5uLkAdV1YA) | [ON LIFE](https://open.spotify.com/album/3NYdL4BxJhHiA0IjBK9NbR) | 3:14 | 2020-12-03 | | | [Imma Dog](https://open.spotify.com/track/524AOf3Zmp93608e38RukD) | [bastienGOAT](https://open.spotify.com/artist/55GgSmZm0TR5qvTRcRwq6B) | [Aspects](https://open.spotify.com/album/41TVmUMGU9ngfPb350XfwC) | 1:48 | 2020-12-03 | | | [In Da Club Before Eleven O' Clock](https://open.spotify.com/track/2JsQdDbuwFYNwykifahaYL) | [DJ Rashad](https://open.spotify.com/artist/4zGBj9dI63YIWmZkPl3o7V) | [Juke Trax Online Vol\. 13](https://open.spotify.com/album/6tBqpKzYDAiIQxanFOZ0FU) | 5:38 | 2020-12-03 | | | [Juice](https://open.spotify.com/track/4p8e7TRZUOu1sy2LGnaMiN) | [DJ T\-Why](https://open.spotify.com/artist/52nwgb3EZSQrv9Q6MltYye) | [Bangs & Works Vol\. 2 \(The Best Of Chicago Footwork\)](https://open.spotify.com/album/6EShA1FkOyix9LWx3Qnupt) | 2:37 | 2020-12-03 | | | [Juke It Up](https://open.spotify.com/track/7FY0rPkPYlRlSmczXl9iQh) | [EQ Why](https://open.spotify.com/artist/2XEjbBHqhnBlfydDBUp1Rf) | [Juke Pack Vol.1](https://open.spotify.com/album/0gMUmMf0xpZjctgUUmhj0p) | 2:48 | 2020-12-03 | | | [Juking For Live \(I.C.T.W\)](https://open.spotify.com/track/0x8GxrGOhZLJJWve0qMryZ) | [DJ Slugo](https://open.spotify.com/artist/1cdLR0Fz14MLkWY78hNTYT) | [Juking For Live \(I.C.T.W\)](https://open.spotify.com/album/6nUlE97QCne6C6AYAdFY3F) | 3:56 | 2020-12-03 | | | [Keep the Drug$](https://open.spotify.com/track/7tzfOnSTtrr8Gzl00240Ik) | [Stayhigh](https://open.spotify.com/artist/2d51ltzSq7hB3viB1DTBEn) | [Kush, Rhodes & 808's](https://open.spotify.com/album/4xW3MYeu3Gwc224UHhNAfV) | 3:30 | 2020-12-03 | | | [L](https://open.spotify.com/track/5NhmNspdkZwdMYeebLY1Hi) | [CRZKNY](https://open.spotify.com/artist/1FGPwtHOMV3xV8qtoci5po) | [T3 TRAXX, Vol\. 
2](https://open.spotify.com/album/0VXeDXXyazcUW5kbzSOWSH) | 3:06 | 2020-12-03 | | | [Miau](https://open.spotify.com/track/6kEIRclfzZfraQY9Cn3F01) | [Aylu](https://open.spotify.com/artist/1MDwlFKprZOcAzi83BF1VG) | [Transgenre](https://open.spotify.com/album/6VdPebiwMhQqLmCAG7f1tO) | 2:18 | 2020-12-03 | | | [Narcissus](https://open.spotify.com/track/3MEYgmIqsjW4qlW1SoflGQ) | [Xyla](https://open.spotify.com/artist/7CmkZcKpESltjho1LZJgnb) | [Ways](https://open.spotify.com/album/2PqGGOZm4IPKiXzLZpJULH) | 3:41 | 2020-12-04 | | | [Never Created, Never Destroyed](https://open.spotify.com/track/6YhSkzyhPBK0N1xulfWlPV) | [Jlin](https://open.spotify.com/artist/23QKqAkKwti9zBiac6RFBA) | [Black Origami](https://open.spotify.com/album/7526bnJCkFFnAMSQ9fsva9) | 3:31 | 2020-12-03 | | | [New Start](https://open.spotify.com/track/6lEFi0B4E7tYgN7ahY7HlB) | [Taso](https://open.spotify.com/artist/0zN0VIGQs6bYKzrB7EQYhC), [DJ Rashad](https://open.spotify.com/artist/4zGBj9dI63YIWmZkPl3o7V), [DJ Spinn](https://open.spotify.com/artist/0ZGOz1bQgvsT4KSzHB1dg9) | [New Start](https://open.spotify.com/album/7hhPEiSLGSFdHJEheB8IL3) | 4:30 | 2020-12-03 | | | [Next Subject](https://open.spotify.com/track/6KnzZDxeaW7BLHXep6roA4) | [DJ Curt](https://open.spotify.com/artist/7fhA2N9tmCmCnmiQrkDqyA) | [Footwork Frenzy Ep](https://open.spotify.com/album/6kyXFMA96qp261FU2pnksI) | 5:03 | 2020-12-03 | | | [No Cap 4 2020 \- SBF14](https://open.spotify.com/track/2MPzyS96YOHV8789Bk8j9p) | [DJ Earl](https://open.spotify.com/artist/3Y6Xd3ZOlhkroMrz1Bmo0Y) | [No Cap 4 2020 \(SBF14\)](https://open.spotify.com/album/5MgKGMeW2RO46jiF0R1v2R) | 2:37 | 2020-12-03 | | | [Nycfw](https://open.spotify.com/track/3EQA4h2oX2HsqUXwmb9EMs) | [Dj Spaldin](https://open.spotify.com/artist/2TWLvBrhwsGGnd10vmtIBj) | [Heavy on DA Footwork](https://open.spotify.com/album/3cwJ07tqWWspBoeWQvWBYJ) | 4:06 | 2020-12-03 | | | [One Blood](https://open.spotify.com/track/1lw1zYHWWl7sXlQgqqJE4Y) | [DJ Roc](https://open.spotify.com/artist/3M5fbUWlySs9LximfJj5Da) | [Bangs & Works, Vol.1: A Chicago Footwork Compilation](https://open.spotify.com/album/7jm8RDfHY8ezdu5zEn4I2y) | 2:48 | 2020-12-03 | | | [Overture of Spaldin](https://open.spotify.com/track/764tyXjLix45eNDYsWIFxw) | [Dj Spaldin](https://open.spotify.com/artist/2TWLvBrhwsGGnd10vmtIBj) | [Heavy on DA Footwork](https://open.spotify.com/album/3cwJ07tqWWspBoeWQvWBYJ) | 3:52 | 2020-12-03 | | | [Pass That](https://open.spotify.com/track/4YKjrt7cGTQnu97OgKaFgC) | [DJ Rashad](https://open.spotify.com/artist/4zGBj9dI63YIWmZkPl3o7V), [Tripletrain](https://open.spotify.com/artist/47UATnEOiiEMa2OFvZjv6i), [DJ Spinn](https://open.spotify.com/artist/0ZGOz1bQgvsT4KSzHB1dg9) | [Afterlife](https://open.spotify.com/album/0lFoUvjrmqtll233XwCyko) | 3:13 | 2020-12-03 | | | [Pop Drop](https://open.spotify.com/track/1IrZcCsY92EhaqCA2QVSsH) | [DJ Paypal](https://open.spotify.com/artist/4hH4fEXPg3qpTDlmdNOO01), [DJ Taye](https://open.spotify.com/artist/4T1sY4aibm24hxfz9JnI7c) | [Computers Smarter Than People](https://open.spotify.com/album/3Hj551CuSFPhth9J6ffqcG) | 2:37 | 2020-12-03 | | | [Samba Focused](https://open.spotify.com/track/2AusXnxTndZvmYz5ygFXo8) | [Kush Jones](https://open.spotify.com/artist/5ifmtTvKK5Pfk6K1b0eHZm) | [Strictly 4 My Cdjz 7](https://open.spotify.com/album/2rYIMDR48AbLrkbZ78ly6M) | 3:15 | 2020-12-03 | | | [Set It](https://open.spotify.com/track/1T7BgOvzf2pNKs6vTGfw99) | [Dream Continuum](https://open.spotify.com/artist/4VRT7JNcqS3yMV0rBxxUlV) | [Reworkz 
E.P.](https://open.spotify.com/album/3zDaBTJKnxWGIp4lOhud33) | 5:12 | 2020-12-03 | 2021-12-29 | | [Set It](https://open.spotify.com/track/72paK2vF9UTopN6XAsijRX) | [Dream Continuum](https://open.spotify.com/artist/4VRT7JNcqS3yMV0rBxxUlV) | [Reworkz E.P.](https://open.spotify.com/album/4eTOHLnXdr5WK7UYWnaYXU) | 5:12 | 2020-12-03 | | | [She a Go](https://open.spotify.com/track/3RWDJd7eh6Scfoz94sJbsX) | [DJ Rashad](https://open.spotify.com/artist/4zGBj9dI63YIWmZkPl3o7V), [Spinn](https://open.spotify.com/artist/5gmgJUPTu5ApaV6Swjfb20), [Taso](https://open.spotify.com/artist/0zN0VIGQs6bYKzrB7EQYhC) | [Double Cup](https://open.spotify.com/album/4J7qkorMbPmJQy79SntDA8) | 3:37 | 2020-12-03 | | | [Skkrtt](https://open.spotify.com/track/0zEgbC1lVNEYUJCv0jbUfc) | [DJ Orange Julius](https://open.spotify.com/artist/4DiPpabfaBSsHYvjlPkazH) | [The Grove](https://open.spotify.com/album/0KRdcfl8C4iPEhmw0Qgwce) | 4:49 | 2020-12-03 | | | [Slanted \- BSN Posse Remix](https://open.spotify.com/track/2bZty9jSWZiv4rWQqeQJAA) | [Tim Parker](https://open.spotify.com/artist/3LBR9DFhfM9nUjdu1gi7lI), [BSN Posse](https://open.spotify.com/artist/1fnlGaoXeWH8RMPVKR2gBU) | [Slanted](https://open.spotify.com/album/0lP2c6I9kE1rULPnvACaSA) | 3:46 | 2020-12-03 | | | [Stick Em'](https://open.spotify.com/track/1Pr37UKvEs9EWlO70gTPhF) | [Caidance](https://open.spotify.com/artist/3bWNdjdWiWNdv1xPOnFU1r) | [Guaranteed](https://open.spotify.com/album/1HMCwgdVCktmbzoKPxAzzX) | 3:40 | 2020-12-03 | | | [Still Geekin'](https://open.spotify.com/track/0TnVZ2ObWc4TeRZiFUEIoL) | [DJ Orange Julius](https://open.spotify.com/artist/4DiPpabfaBSsHYvjlPkazH) | [The Grove](https://open.spotify.com/album/0KRdcfl8C4iPEhmw0Qgwce) | 2:30 | 2020-12-03 | | | [Tear It Up](https://open.spotify.com/track/44UIC6KMStyShoIvRyNSHP) | [Turk Turkelton](https://open.spotify.com/artist/1RJ1QHgaJoiAd1czuhS00d) | [Worldwidejuke Vol.2](https://open.spotify.com/album/5bsOhjBEJ7O3rQGzlmyWPs) | 4:39 | 2020-12-03 | | | [Trippin'](https://open.spotify.com/track/0mmde5walUcEMrhlCWKsrq) | [DJ Taye](https://open.spotify.com/artist/4T1sY4aibm24hxfz9JnI7c) | [Still Trippin'](https://open.spotify.com/album/7xDFC6tA0ADGOv3NmIM7rE) | 3:13 | 2020-12-03 | | | [Turn Up](https://open.spotify.com/track/2r9821O4VvvqaRQles7VlL) | [Nangdo](https://open.spotify.com/artist/3q8bo1K12TtwbvsPiwJWDi) | [High On Clouds](https://open.spotify.com/album/3ackwT9BASzdzkoTgqBS67) | 3:14 | 2020-12-03 | | | [Turn Up](https://open.spotify.com/track/5Qdcx5xvBBckTVyYiHdQuN) | [Nangdo](https://open.spotify.com/artist/3q8bo1K12TtwbvsPiwJWDi) | [High on Clouds](https://open.spotify.com/album/7daVVI2wd8FIhzWyWtpq8N) | 3:14 | 2020-12-03 | 2021-12-28 | | [Way You Move](https://open.spotify.com/track/4HJADIxeKfoDyE3HVLzxif) | [DJ Manny](https://open.spotify.com/artist/5whJkWAzwCYfeetVpUJKn7), [DJ Chap](https://open.spotify.com/artist/09SMSRxhT4hiqiEAtIv69G) | [Greenlight](https://open.spotify.com/album/39Ir1QmErVs5OlVthfAoFq) | 3:13 | 2020-12-03 | | | [WE GON DANCE](https://open.spotify.com/track/3MI20cDoa9zWn0c70ikDq4) | [Teklife](https://open.spotify.com/artist/1GoZKzpOpwfoZLj1W6sjeg), [DJ Tre](https://open.spotify.com/artist/6so1AgXg57ZYwyhe9dhhYS) | [ON LIFE](https://open.spotify.com/album/3NYdL4BxJhHiA0IjBK9NbR) | 3:39 | 2020-12-03 | | | [WFM](https://open.spotify.com/track/4V7vv7lvro6yMyJZm9Z96F) | [Heavee](https://open.spotify.com/artist/3bTrwZAKTLYI9zozCH6zxw), [Gant\-Man](https://open.spotify.com/artist/0FRf7YoRwTB4L2HVVwzks4), [DJ Phil](https://open.spotify.com/artist/4L2n1xvdqgPgQjYxLHUAbG), [Sirr 
TMO](https://open.spotify.com/artist/7wMCA0Cx8O1adCSiTV1IMY) | [WFM](https://open.spotify.com/album/7jlA2sFirrhCJamcIvaEmy) | 3:03 | 2020-12-03 | | | [What You Need](https://open.spotify.com/track/1oGd6koqefNYVJ0Rn6EKOW) | [DJ Spinn](https://open.spotify.com/artist/0ZGOz1bQgvsT4KSzHB1dg9), [DJ Manny](https://open.spotify.com/artist/5whJkWAzwCYfeetVpUJKn7) | [TEKLIFE Vol.2: What You Need](https://open.spotify.com/album/3gp6LlL3363zr3Jz9rjH0G) | 5:41 | 2020-12-03 | | | [Wouldn't Get Far](https://open.spotify.com/track/24GUxWledUuwzCm4XEUatd) | [Young Smoke](https://open.spotify.com/artist/0ehqzJzgBzhTRcOd0BqgCs) | [Bangs & Works Vol\. 2 \(The Best Of Chicago Footwork\)](https://open.spotify.com/album/6EShA1FkOyix9LWx3Qnupt) | 3:00 | 2020-12-03 | | \*This playlist was first scraped on 2021-12-21. Prior content cannot be recovered.
251.293478
484
0.749254
yue_Hant
0.620213
b9a7553ec3a365733746e42efb1320a918a7ca63
7,989
md
Markdown
pages/Tutorial_Rennspiel.md
python4kids-ba/Python4kids
84cfe09c6ad0c954f9a197ba3424fcff043b0e79
[ "CC0-1.0" ]
null
null
null
pages/Tutorial_Rennspiel.md
python4kids-ba/Python4kids
84cfe09c6ad0c954f9a197ba3424fcff043b0e79
[ "CC0-1.0" ]
null
null
null
pages/Tutorial_Rennspiel.md
python4kids-ba/Python4kids
84cfe09c6ad0c954f9a197ba3424fcff043b0e79
[ "CC0-1.0" ]
null
null
null
# Tutorial: Racing game

In this tutorial we build a racing game step by step. The Python we need for it: conditions, loops, lists, functions and tuples. On top of that we have movement, a high score and a start screen.

## Base game

As with the shooter, we begin by listing the complete program together with some empty functions that we will fill in over the course of the tutorial. (Python will not run a program with completely empty functions, so for now they contain `pass` to tell Python that they do nothing.)

![Race game](https://i.imgur.com/zXMuFWY.png)

As with the shooter, we start with three things:

1. Defining global variables.
2. A `draw()` function.
3. An `update()` function.

These functions check the boolean variable `playing`. Boolean is a data type that can only hold the values `True` or `False`. When the variable is `False`, the start screen is shown instead of the game screen.

The only really complicated part of this program is how we store the shape of the tunnel the player drives along. `lines` is a list of *tuples*. A tuple is similar to a list, but *cannot be changed*. It can, however, be *unpacked* into individual variables. Each tuple stands for one horizontal line on the screen. It holds three values, `x`, `x2` and `color`, which stand for the position of the left wall, the gap between the left and right wall, and the colour of the wall.

```python=
import random
import math

WIDTH = 600
HEIGHT = 600

player = Actor("alien", (300, 580))
player.vx = 0        # horizontal movement
player.vy = 1        # vertical movement

lines = []           # list of tuples for the horizontal lines
wall_gradient = -3   # steepness of the wall
left_wall_x = 200    # x coordinate of the wall
distance = 0         # how far the player has driven
time = 15            # time until the game ends
playing = False      # True while playing, False on the start screen
best_distance = 0    # remembers the best distance score


def draw():
    screen.clear()
    if playing:  # we are in the game
        for i in range(0, len(lines)):  # draw the walls
            x, x2, color = lines[i]
            screen.draw.line((0, i), (x, i), color)
            screen.draw.line((x + x2, i), (WIDTH, i), color)
        player.draw()
    else:  # we are on the start screen
        screen.draw.text("PRESS SPACE TO START", (150, 300), color="green", fontsize=40)
        screen.draw.text("BEST DISTANCE: " + str(int(best_distance / 10)), (170, 400), color="green", fontsize=40)
    screen.draw.text("SPEED: " + str(int(player.vy)), (0, 0), color="green", fontsize=40)
    screen.draw.text("DISTANCE: " + str(int(distance / 10)), (200, 0), color="green", fontsize=40)
    screen.draw.text("TIME: " + str(int(time)), (480, 0), color="green", fontsize=40)


def update(delta):
    global playing, distance, time
    if playing:
        wall_collisions()
        scroll_walls()
        generate_lines()
        player_input()
        timer(delta)
    elif keyboard.space:
        playing = True
        distance = 0
        time = 15


def player_input():
    pass


def generate_lines():
    pass
generate_lines()


def scroll_walls():
    pass


def wall_collisions():
    pass


def timer(delta):
    pass


def on_mouse_move(pos):
    pass
```

### Exercise

+ Run the program. Check that it has a start screen and that you can start the game and see the player. (That is all it does until we fill in the remaining functions.)
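As a quick aside (not part of the game's source), this is what unpacking one of the wall tuples described above looks like; the values are made up:

```python
line = (200, 300, (255, 0, 0))  # (left wall x, gap width, colour) - invented values
x, x2, color = line             # unpack the tuple into three variables
print(x, x2, color)             # 200 300 (255, 0, 0)
```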
## Player input

Replace `player_input()` with the following:

```python
def player_input():
    if keyboard.up:
        player.vy += 0.1
    if keyboard.down:
        player.vy -= 0.1
        if player.vy < 1:
            player.vy = 1
    if keyboard.right:
        player.vx += 0.4
    if keyboard.left:
        player.vx -= 0.4
    player.x += player.vx
```

### Exercise

+ Run the game. Check that the player can move left and right and drifts. Try adjusting the speed, or set a limit so that it cannot move too fast.

## Generating the walls

We already have the code to put the walls on the playing field, but `lines` is empty, so nothing happens at the moment. Replace the `generate_lines()` function with the code below. Note that `generate_lines()` is called immediately after we define the function, so that the walls appear right at the start of the game.

```python
def generate_lines():
    global wall_gradient, left_wall_x
    gap_width = 300 + math.sin(distance / 3000) * 100
    while len(lines) < HEIGHT:
        pretty_colour = (255, 0, 0)
        lines.insert(0, (left_wall_x, gap_width, pretty_colour))
        left_wall_x += wall_gradient
        if left_wall_x < 0:
            left_wall_x = 0
            wall_gradient = random.random() * 2 + 0.1
        elif left_wall_x + gap_width > WIDTH:
            left_wall_x = WIDTH - gap_width
            wall_gradient = -random.random() * 2 - 0.1
generate_lines()
```

### Exercise

+ Change the colour of the walls from red to green.

## Make the walls colourful

Change the line that sets the colour to the following:

```python
pretty_colour = (255, min(left_wall_x, 255), min(time * 20, 255))
```

## Scrolling

Change the `scroll_walls()` function so that it removes lines at the bottom of the screen to match the player's vertical movement.

```python
def scroll_walls():
    global distance
    for i in range(0, int(player.vy)):
        lines.pop()
        distance += 1
```

### Exercise

+ Change `scroll_walls()` as above and check that the player can now accelerate.
+ (Advanced): Change the acceleration value to make the game faster or slower.

## Wall collisions

At the moment the player can still drive through the walls, which is not the idea. We also want the player to lose speed as a penalty for colliding with a wall.

```python
def wall_collisions():
    a, b, c = lines[-1]
    if player.x < a:
        player.x += 5
        player.vx = player.vx * -0.5
        player.vy = 0
    if player.x > a + b:
        player.x -= 5
        player.vx = player.vx * -0.5
        player.vy = 0
```

### Exercise

+ Change `wall_collisions()` as above and check that the player now bounces off the walls.
+ (Advanced): Make the player bounce off the walls harder.

## Timer

At the moment the player still has unlimited time. We want the `time` variable to track how much time is left and the game to end when the time runs out.

```python
def timer(delta):
    global time, playing, best_distance
    time -= delta
    if time < 0:
        playing = False
        if distance > best_distance:
            best_distance = distance
```

### Exercise

+ Check that the game ends after 15 seconds.
+ Make the game last 30 seconds.

## Mouse movement

The game is easier, but perhaps also more fun, when you can play it with the mouse. Pygame calls the following function automatically.

```python
def on_mouse_move(pos):
    x, y = pos
    player.x = x
    player.vy = (HEIGHT - y) / 20
```

### Exercise

+ How does the player accelerate with the mouse?

## Ideas for extensions

* Draw a new image for the player.
Show different images depending on whether the player is steering left or right (see the sketch after this list).
* Set a target distance that has to be reached. When the player reaches the distance, they get more time, letting them keep playing.
* Add music and sound effects.
* If you have a big screen, make the game window bigger (and make sure the alien still appears at the bottom edge of the screen).
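One possible approach to the first idea is sketched here (not part of the tutorial; the image names `alien_left` and `alien_right` are assumptions and would have to exist in your `images` folder). You could call this once per frame, for example at the end of `update()`:

```python
def update_player_image():
    # Choose a sprite based on the steering direction.
    if player.vx < -0.5:
        player.image = "alien_left"   # hypothetical image
    elif player.vx > 0.5:
        player.image = "alien_right"  # hypothetical image
    else:
        player.image = "alien"        # the image the tutorial already uses
```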
33.708861
535
0.689072
deu_Latn
0.984754
b9a7e923712c89bf50476c7c320e9900770e07da
1,043
md
Markdown
i18n/cn/docusaurus-plugin-content-docs/current/democracy_council.md
thomas-frayer/HydraDX-docs
62327808c1febb209167f85755798c011a9ae614
[ "Apache-2.0" ]
4
2021-03-24T11:00:49.000Z
2021-09-17T07:38:29.000Z
i18n/cn/docusaurus-plugin-content-docs/current/democracy_council.md
thomas-frayer/HydraDX-docs
62327808c1febb209167f85755798c011a9ae614
[ "Apache-2.0" ]
51
2021-03-11T22:46:37.000Z
2022-03-18T12:28:39.000Z
i18n/cn/docusaurus-plugin-content-docs/current/democracy_council.md
thomas-frayer/HydraDX-docs
62327808c1febb209167f85755798c011a9ae614
[ "Apache-2.0" ]
27
2021-03-07T22:03:14.000Z
2022-03-08T00:52:39.000Z
---
id: democracy_council
title: The HydraDX Council
---

The HydraDX Council is an on-chain entity that plays a key role in the governance of the protocol. This article provides information about the [composition](#composition) of the Council, its [main tasks](#tasks), and the [election of Council members](#elections).

For step-by-step guidance on how to participate in Council elections, please refer to **[this guide](/participate_in_council_elections)**.

## Composition {#composition}

The HydraDX Council currently consists of **13 members**. The founding team and investors (4 founders + 2 investors) hold a minority of 6 seats. The remaining 7 seats are elected by the broader community of HDX holders.

## Tasks and responsibilities {#tasks}

The tasks of the HydraDX Council cover a broad range of day-to-day governance activities. First of all, the Council controls the Treasury and approves or rejects Treasury proposals.

The HydraDX Council also plays a role in the referenda mechanism. The Council can initiate a referendum, provided that at least 60% of its members support it (an absolute majority) and no member exercises their veto. In the case of a veto, the proposal can be resubmitted once a cool-down period has passed. A vetoing member cannot veto the same proposal twice.

Furthermore, any proposed referendum can be cancelled by a 2/3 absolute majority vote of the Council. This can be used as a last resort to block malicious proposals or changes that would introduce bugs into the code.

Finally, the HydraDX Council is responsible for electing the Technical Committee.

## Elections {#elections}

Any holder of HDX tokens can **[apply](/participate_in_council_elections#become_candidate)** as a candidate for one of the 7 non-permanent seats of the HydraDX Council.

Council elections take place every 7 days; they fill the 7 non-permanent Council seats for the following 7 days. The democracy module uses the same Phragmén algorithm which is used to elect the **[active validator set](/staking#validators)**.

All community members can **[vote](/participate_in_council_elections#vote)** in the Council elections by choosing to lock a certain amount of their HDX tokens. Locked tokens are non-transferable and are carried over into subsequent elections (until unlocked). Voters can, and should, select more than one candidate in order of preference. The election algorithm then distributes all votes among the candidates with the highest community support in order to determine the optimal allocation of the available Council seats.
32.59375
184
0.784276
yue_Hant
0.488489
b9a8dd1e77860e4e683241d331d55e14b0223eff
1,151
md
Markdown
_posts/2016-01-08-git-rebase-vs-git-pull.md
loverszhaokai/loverszhaokai.github.io
7b3fce6cee61374e9a1505ad36cf93fc296aacf1
[ "MIT" ]
1
2015-08-29T16:13:15.000Z
2015-08-29T16:13:15.000Z
_posts/2016-01-08-git-rebase-vs-git-pull.md
loverszhaokai/loverszhaokai.github.io
7b3fce6cee61374e9a1505ad36cf93fc296aacf1
[ "MIT" ]
null
null
null
_posts/2016-01-08-git-rebase-vs-git-pull.md
loverszhaokai/loverszhaokai.github.io
7b3fce6cee61374e9a1505ad36cf93fc296aacf1
[ "MIT" ]
null
null
null
---
layout: post
title: "git pull VS git fetch and git rebase"
date: 2016-01-08 20:01 IST
categories: git
tags: git
comments: true
analytics: true
---

reference: [git-pull-vs-git-fetch-git-rebase](http://stackoverflow.com/questions/3357122/git-pull-vs-git-fetch-git-rebase)

## 1. I have done something on the master branch

~~~
- o - o - o - H - A - B - C  (master)
               \
                P - Q - R  (origin/master)
~~~

## 2. Git pull from origin/master, if there aren't any conflicts

~~~
- o - o - o - H - A - B - C - X  (master)
               \             /
                P - Q - R ---  (origin/master)
~~~

## 3. Git rebase

~~~
- o - o - o - H - P - Q - R - A' - B' - C'  (master)
                          |
                          (origin/master)
~~~

## 4. Conclusion:

The content of your work tree should end up the same in both cases; you've just created a different history leading up to it. The `rebase` **rewrites your history**, making it look as if you had committed on top of origin's new master branch (`R`), instead of where you originally committed (`H`).

![Image description](/images/git_pull_vs_fetch.png)
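For reference, here are the two workflows as concrete commands (assuming a remote named `origin` and a local branch `master`, as in the diagrams above):

~~~
# Merge-based update: produces the merge commit X from diagram 2
git pull origin master

# Rebase-based update: replays A, B, C on top of R as A', B', C'
git fetch origin
git rebase origin/master

# One-step shorthand for fetch + rebase
git pull --rebase origin master
~~~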
28.073171
169
0.578627
eng_Latn
0.907601
b9a9590e78c3023b75db0bc63a24d8e1d805182d
1,877
md
Markdown
locale/en/blog/vulnerability/jan-2018-spectre-meltdown.md
jalfaro1/nodejs.org
97de8dc8a97c3cbedb77f26a627853cccdab8e97
[ "MIT" ]
5
2021-03-27T17:47:52.000Z
2021-11-17T12:21:32.000Z
locale/en/blog/vulnerability/jan-2018-spectre-meltdown.md
jalfaro1/nodejs.org
97de8dc8a97c3cbedb77f26a627853cccdab8e97
[ "MIT" ]
96
2019-06-04T19:46:14.000Z
2021-07-27T04:31:48.000Z
locale/en/blog/vulnerability/jan-2018-spectre-meltdown.md
jalfaro1/nodejs.org
97de8dc8a97c3cbedb77f26a627853cccdab8e97
[ "MIT" ]
1
2019-03-22T19:29:19.000Z
2019-03-22T19:29:19.000Z
---
date: 2018-01-08T17:30:00.617Z
category: vulnerability
title: Meltdown and Spectre - Impact On Node.js
slug: jan-2018-spectre-meltdown
layout: blog-post.hbs
author: Michael Dawson
---

# Summary

Project Zero has recently announced some new attacks that have received a lot of attention: https://googleprojectzero.blogspot.ca/2018/01/reading-privileged-memory-with-side.html.

The risk from these attacks to systems running Node.js resides in the systems in which your Node.js applications run, as opposed to the Node.js runtime itself. The trust model for Node.js assumes you are running trusted code and does not provide any separation between code running within the runtime itself. Therefore, untrusted code that would be necessary to execute these attacks in Node.js could already affect the execution of your Node.js applications in ways that are more severe than possible through these new attacks.

This does not mean that you don't need to protect yourself from these new attacks when running Node.js applications. If an attacker manages to run malicious code on an unpatched OS (whether using JavaScript or something else), they may be able to access memory and/or data that they should not have access to. In order to protect yourself from these cases, apply the security patches for your operating system. You do not need to update the Node.js runtime.

# Contact and future updates

The current Node.js security policy can be found at https://nodejs.org/en/security/. Please contact [email protected] if you wish to report a vulnerability in Node.js.

Subscribe to the low-volume announcement-only nodejs-sec mailing list at https://groups.google.com/forum/#!forum/nodejs-sec to stay up to date on security vulnerabilities and security-related releases of Node.js and the projects maintained in the [nodejs GitHub organisation](https://github.com/nodejs/).
43.651163
88
0.801811
eng_Latn
0.996686
b9a9a41bd836b4593b56fa1d2f2961a7f78ef210
15,206
md
Markdown
README.md
matounet51/test
9d4c2bdfa1125143b62f09fed1c128a32356aec3
[ "MIT" ]
null
null
null
README.md
matounet51/test
9d4c2bdfa1125143b62f09fed1c128a32356aec3
[ "MIT" ]
null
null
null
README.md
matounet51/test
9d4c2bdfa1125143b62f09fed1c128a32356aec3
[ "MIT" ]
null
null
null
# themeparks

An unofficial API library for accessing ride wait times and park opening times for many theme parks around the world, including Disney, Universal and SeaWorld parks.

[![Build Status](https://travis-ci.org/cubehouse/themeparks.svg?branch=master)](https://travis-ci.org/cubehouse/themeparks)
[![npm version](https://badge.fury.io/js/themeparks.svg)](https://badge.fury.io/js/themeparks)
[![Dependency Status](https://beta.gemnasium.com/badges/github.com/cubehouse/themeparks.svg)](https://beta.gemnasium.com/projects/github.com/cubehouse/themeparks)
[![GitHub stars](https://img.shields.io/github/stars/cubehouse/themeparks.svg)](https://github.com/cubehouse/themeparks/stargazers)
![Downloads](https://img.shields.io/npm/dt/themeparks.svg)
![Monthly Downloads](https://img.shields.io/npm/dm/themeparks.svg)

[Roadmap](https://github.com/cubehouse/themeparks/projects/1) | [Documentation](https://cubehouse.github.io/themeparks/) | [Change Log](CHANGELOG.md) | [Supported Parks](#supported-park-features)

## Install

    npm install themeparks --save

## Migrate from themeparks 4.x

If you have been using themeparks 4.x, please follow this guide to [migrate from themeparks 4.x to themeparks 5.x](https://github.com/cubehouse/themeparks/wiki/Migrating-from-4.x-to-5.x).

## Example Use

    // include the Themeparks library
    const Themeparks = require("themeparks");

    // configure where SQLite DB sits
    // optional - will be created in node working directory if not configured
    // Themeparks.Settings.Cache = __dirname + "/themeparks.db";

    // access a specific park
    // Create this *ONCE* and re-use this object for the lifetime of your application
    // re-creating this every time you require access is very slow, and will fetch data repeatedly for no purpose
    const DisneyWorldMagicKingdom = new Themeparks.Parks.WaltDisneyWorldMagicKingdom();

    // Access wait times by Promise
    DisneyWorldMagicKingdom.GetWaitTimes().then((rideTimes) => {
        for(var i=0, ride; ride=rideTimes[i++];) {
            console.log(`${ride.name}: ${ride.waitTime} minutes wait (${ride.status})`);
        }
    }).catch((error) => {
        console.error(error);
    });

    // Get park opening times
    DisneyWorldMagicKingdom.GetOpeningTimes().then((times) => {
        for(var i=0, time; time=times[i++];) {
            if (time.type == "Operating") {
                console.log(`[${time.date}] Open from ${time.openingTime} until ${time.closingTime}`);
            }
        }
    }).catch((error) => {
        console.error(error);
    });

### Using Promises or callbacks

Both GetWaitTimes and GetOpeningTimes work either through callbacks or Promises. This is the same as the above example, but using a callback instead of a Promise.

    // access wait times via callback
    DisneyWorldMagicKingdom.GetWaitTimes((err, rides) => {
        if (err) return console.error(err);

        // print each wait time
        for(var i=0, ride; ride=rides[i++];) {
            console.log(`${ride.name}: ${ride.waitTime} minutes wait (${ride.status})`);
        }
    });

### Proxy

If you wish to use themeparks with a proxy, you can pass a proxy agent when you construct the park object.

    // include the Themeparks library
    const Themeparks = require("themeparks");

    // include whichever proxy library you want to use (must provide an http.Agent object)
    const SocksProxyAgent = require('socks-proxy-agent');

    // create your proxy agent object
    const MyProxy = new SocksProxyAgent("socks://socks-proxy-host", true);

    // create your park object, passing in proxyAgent as an option
    const DisneyWorldMagicKingdom = new Themeparks.Parks.WaltDisneyWorldMagicKingdom({
        proxyAgent: MyProxy
    });

## Change Log

[View themeparks Change Log](CHANGELOG.md)

## Parks available

<!-- START_SUPPORTED_PARKS_LIST -->

**22** Parks Supported

* Magic Kingdom - Walt Disney World Florida (ThemeParks.Parks.WaltDisneyWorldMagicKingdom)
* Epcot - Walt Disney World Florida (ThemeParks.Parks.WaltDisneyWorldEpcot)
* Hollywood Studios - Walt Disney World Florida (ThemeParks.Parks.WaltDisneyWorldHollywoodStudios)
* Animal Kingdom - Walt Disney World Florida (ThemeParks.Parks.WaltDisneyWorldAnimalKingdom)
* Disneyland Resort - Magic Kingdom (ThemeParks.Parks.DisneylandResortMagicKingdom)
* Disneyland Resort - California Adventure (ThemeParks.Parks.DisneylandResortCaliforniaAdventure)
* Disneyland Paris - Magic Kingdom (ThemeParks.Parks.DisneylandParisMagicKingdom)
* Walt Disney Studios - Disneyland Paris (ThemeParks.Parks.DisneylandParisWaltDisneyStudios)
* Hong Kong Disneyland (ThemeParks.Parks.HongKongDisneyland)
* Magic Kingdom - Shanghai Disney Resort (ThemeParks.Parks.ShanghaiDisneyResortMagicKingdom)
* Tokyo Disney Resort - Magic Kingdom (ThemeParks.Parks.TokyoDisneyResortMagicKingdom)
* Tokyo Disney Resort - Disney Sea (ThemeParks.Parks.TokyoDisneyResortDisneySea)
* Europa Park (ThemeParks.Parks.EuropaPark)
* Parc-Asterix (ThemeParks.Parks.AsterixPark)
* California's Great America (ThemeParks.Parks.CaliforniasGreatAmerica)
* Canada's Wonderland (ThemeParks.Parks.CanadasWonderland)
* Carowinds (ThemeParks.Parks.Carowinds)
* Cedar Point (ThemeParks.Parks.CedarPoint)
* Kings Island (ThemeParks.Parks.KingsIsland)
* Knott's Berry Farm (ThemeParks.Parks.KnottsBerryFarm)
* Dollywood (ThemeParks.Parks.Dollywood)
* Silver Dollar City (ThemeParks.Parks.SilverDollarCity)

<!-- END_SUPPORTED_PARKS_LIST -->

## Supported Park Features

<!-- START_PARK_FEATURES_SUPPORTED -->
|Park|Wait Times|Park Opening Times|Ride Opening Times|
|:---|:---------|:-----------------|:-----------------|
|Magic Kingdom - Walt Disney World Florida|&#10003;|&#10003;|&#10007;|
|Epcot - Walt Disney World Florida|&#10003;|&#10003;|&#10007;|
|Hollywood Studios - Walt Disney World Florida|&#10003;|&#10003;|&#10007;|
|Animal Kingdom - Walt Disney World Florida|&#10003;|&#10003;|&#10007;|
|Disneyland Resort - Magic Kingdom|&#10003;|&#10003;|&#10007;|
|Disneyland Resort - California Adventure|&#10003;|&#10003;|&#10007;|
|Disneyland Paris - Magic Kingdom|&#10003;|&#10003;|&#10007;|
|Walt Disney Studios - Disneyland Paris|&#10003;|&#10003;|&#10007;|
|Hong Kong Disneyland|&#10003;|&#10003;|&#10007;|
|Magic Kingdom - Shanghai Disney Resort|&#10003;|&#10003;|&#10007;|
|Tokyo Disney Resort - Magic Kingdom|&#10003;|&#10003;|&#10007;|
|Tokyo Disney Resort - Disney Sea|&#10003;|&#10003;|&#10007;|
|Europa Park|&#10003;|&#10003;|&#10007;|
|Parc-Asterix|&#10003;|&#10003;|&#10003;|
|California's Great America|&#10003;|&#10003;|&#10007;|
|Canada's Wonderland|&#10003;|&#10003;|&#10007;|
|Carowinds|&#10003;|&#10003;|&#10007;|
|Cedar Point|&#10003;|&#10003;|&#10007;|
|Kings Island|&#10003;|&#10003;|&#10007;|
|Knott's Berry Farm|&#10003;|&#10003;|&#10007;|
|Dollywood|&#10003;|&#10003;|&#10007;|
|Silver Dollar City|&#10003;|&#10003;|&#10007;|
<!-- END_PARK_FEATURES_SUPPORTED -->

## Result Objects

### Ride Wait Times

    [
        {
            id: (string or number: uniquely identifying a ride),
            name: (string: ride name),
            waitTime: (number: current wait time in minutes),
            active: (bool: is the ride currently active?),
            fastPass: (bool: is fastpass available for this ride?),
            fastPassReturnTime: { (object containing current return times, parks supporting this will set FastPassReturnTimes to true - entire field may be null for unsupported rides or when fastPass has run out for the day)
                startTime: (string return time formatted as "HH:mm": start of the current return time period),
                endTime: (string return time formatted as "HH:mm": end of the current return time period),
                lastUpdate: (JavaScript Date object: last time the fastPass return time changed),
            },
            status: (string: will either be "Operating", "Closed", or "Down"),
            lastUpdate: (JavaScript Date object: last time this ride had new data),
            schedule: { **schedule will only be present if park.SupportsRideSchedules is true**
                openingTime: (timeFormat timestamp: opening time of ride),
                closingTime: (timeFormat timestamp: closing time of ride),
                type: (string: "Operating" or "Closed"),
                special: [ (array of "special" ride times, usually Disney Extra Magic Hours or similar at other parks - field may be null)
                    openingTime: (timeFormat timestamp: opening time for ride),
                    closingTime: (timeFormat timestamp: closing time for ride),
                    type: (string: type of schedule eg. "Extra Magic Hours", but can be "Event" or "Special Ticketed Event" or other)
                ]
            },
        },
        ...
    ]

### Schedules

    [
        {
            date: (dateFormat timestamp: day this schedule applies),
            openingTime: (timeFormat timestamp: opening time for requested park - can be null if park is closed),
            closingTime: (timeFormat timestamp: closing time for requested park - can be null if park is closed),
            type: (string: "Operating" or "Closed"),
            special: [ (array of "special" times for this day, usually Disney Extra Magic Hours or similar at other parks - field may be null)
                openingTime: (timeFormat timestamp: opening time for requested park),
                closingTime: (timeFormat timestamp: closing time for requested park),
                type: (string: type of schedule eg. "Extra Magic Hours", but can be "Event" or "Special Ticketed Event" or other)
            ],
        },
        ...
    ]

## Park Object values

There are some values available on each park object that may be useful.

|Variable|Description|
|:-------|:----------|
|Name|Name of the park|
|Timezone|The park's local timezone|
|LocationString|This park's location as a geolocation string|
|SupportsWaitTimes|Does this park's API support ride wait times?|
|SupportsOpeningTimes|Does this park's API support opening hours?|
|SupportsRideSchedules|Does this park return schedules for rides?|
|FastPass|Does this park have FastPass (or a FastPass-style service)?|
|FastPassReturnTimes|Does this park tell you the FastPass return times?|
|Now|Current date/time at this park (returned as a Moment object)|
|UserAgent|The HTTP UserAgent this park is using to make API requests (usually randomly generated per-park at runtime)|

    const ThemeParks = require("themeparks");

    // construct our park objects and keep them in memory for fast access later
    const Parks = {};
    for (const park in ThemeParks.Parks) {
        Parks[park] = new ThemeParks.Parks[park]();
    }

    // print each park's name, current location, and timezone
    for (const park in Parks) {
        console.log(`* ${Parks[park].Name} [${Parks[park].LocationString}]: (${Parks[park].Timezone})`);
    }

Prints:

<!-- START_PARK_TIMEZONE_LIST -->
* Magic Kingdom - Walt Disney World Florida [(28°23′6.72″N, 81°33′50.04″W)]: (America/New_York)
* Epcot - Walt Disney World Florida [(28°22′28.92″N, 81°32′57.84″W)]: (America/New_York)
* Hollywood Studios - Walt Disney World Florida [(28°21′27.00″N, 81°33′29.52″W)]: (America/New_York)
* Animal Kingdom - Walt Disney World Florida [(28°21′19.08″N, 81°35′24.36″W)]: (America/New_York)
* Disneyland Resort - Magic Kingdom [(33°48′36.39″N, 117°55′8.30″W)]: (America/Los_Angeles)
* Disneyland Resort - California Adventure [(33°48′31.39″N, 117°55′8.36″W)]: (America/Los_Angeles)
* Disneyland Paris - Magic Kingdom [(48°52′13.16″N, 2°46′46.82″E)]: (Europe/Paris)
* Walt Disney Studios - Disneyland Paris [(48°52′5.78″N, 2°46′50.59″E)]: (Europe/Paris)
* Hong Kong Disneyland [(22°18′47.52″N, 114°2′40.20″E)]: (Asia/Hong_Kong)
* Magic Kingdom - Shanghai Disney Resort [(31°8′35.88″N, 121°39′28.80″E)]: (Asia/Shanghai)
* Tokyo Disney Resort - Magic Kingdom [(35°38′5.45″N, 139°52′45.46″E)]: (Asia/Tokyo)
* Tokyo Disney Resort - Disney Sea [(35°37′37.40″N, 139°53′20.75″E)]: (Asia/Tokyo)
* Europa Park [(48°16′8.15″N, 7°43′17.61″E)]: (Europe/Berlin)
* Parc-Asterix [(49°8′9.75″N, 2°34′21.96″E)]: (Europe/Paris)
* California's Great America [(37°23′52.08″N, 121°58′28.98″W)]: (America/Los_Angeles)
* Canada's Wonderland [(43°50′34.80″N, 79°32′20.40″W)]: (America/Toronto)
* Carowinds [(35°6′16.20″N, 80°56′21.84″W)]: (America/New_York)
* Cedar Point [(41°28′42.24″N, 82°40′45.48″W)]: (America/New_York)
* Kings Island [(39°20′40.92″N, 84°16′6.96″W)]: (America/New_York)
* Knott's Berry Farm [(33°50′39.12″N, 117°59′54.96″W)]: (America/Los_Angeles)
* Dollywood [(35°47′43.18″N, 83°31′51.19″W)]: (America/New_York)
* Silver Dollar City [(36°40′5.44″N, 93°20′18.84″W)]: (America/Chicago)
<!-- END_PARK_TIMEZONE_LIST -->

## Development

### Running Tests

themeparks supports mocha unit tests. Install mocha with

    npm install -g mocha

Run the following to test the library's unit tests (this will build the library and then run functional offline unit tests):

    npm test

You can also run unit tests against the source js files using ```npm run testdev```.

There is a separate test for checking the library still connects to park APIs correctly. This is the "online test".

    npm run testonline

You can also test an individual park using the PARKID environment variable, for example:

    PARKID=UniversalStudiosFlorida npm run testonline

### Debug Mode

Themeparks supports the standard NODE_DEBUG environment variable. Pass the name of the library into NODE_DEBUG to turn on debug mode:

    NODE_DEBUG=themeparks npm run testonline

Environment variables can be combined:

    NODE_DEBUG=themeparks PARKID=UniversalStudiosFlorida npm run testonline

### Contributing

Each park inherits its core logic from lib/park.js.

For each set of parks, a base object should be made with all the core logic for that API/park group. Then, for each park, a basic shell object should be implemented that just configures the park's base object (and overrides anything in unusual setups).

Throughout the API, please make use of the this.Log() function so debugging parks when things break is easier.

Please raise issues and make pull requests with new features :)

See the full contribution guide at [Themeparks Contribution Guide](https://github.com/cubehouse/themeparks/wiki/Contributing).

A rough guide for adding new parks is also available at [Adding New Parks to the ThemeParks API](https://github.com/cubehouse/themeparks/wiki/Adding-New-Parks).

## People using themeparks

If you're using themeparks for a project, please let me know! I'd love to see what people are doing!

### Websites and Mobile Apps

* [My Disney Visit](http://www.mydisneyvisit.com/) - Walt Disney World
* [ChronoPass](https://play.google.com/store/apps/details?id=fr.dechriste.android.attractions&hl=en_GB) - Walt Disney World, Disneyland Paris, Parc Asterix, EuropaPark

### Pebble Apps

* [Disneyland California Wait Times](https://apps.getpebble.com/en_US/application/5656424b4431a2ce6c00008d)
* [Disneyland Paris Wait Times](https://apps.getpebble.com/en_US/application/55e25e8d3ea1fb6fa30000bd)
* [Disney World Wait Times](https://apps.getpebble.com/en_US/application/54bdb77b54845b1bf40000bb)
381
0.704788
eng_Latn
0.545724
b9a9ee94649570216939402bd417581f9a6ccfa8
261
md
Markdown
README.md
pallavi192k/EviSecure
5465075c8cd4c638486b42d2fb50bdcb4e413802
[ "MIT" ]
null
null
null
README.md
pallavi192k/EviSecure
5465075c8cd4c638486b42d2fb50bdcb4e413802
[ "MIT" ]
null
null
null
README.md
pallavi192k/EviSecure
5465075c8cd4c638486b42d2fb50bdcb4e413802
[ "MIT" ]
null
null
null
## EviSecure

### Instructions to run

- Use *npm install* to install dependencies
- Use *npm start* to start react-scripts
- Install Metamask and Ganache and create a free account
- Import Ganache into Metamask
- Visit *localhost:3000* to view the app running
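The npm steps above, as a terminal session (assuming Node.js and npm are already installed; the Metamask and Ganache setup happens in the browser and cannot be scripted here):

```
npm install   # install dependencies
npm start     # start react-scripts, then open http://localhost:3000
```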
26.1
56
0.758621
eng_Latn
0.973029
b9aa3e3e789d33925e379392bc54d3d70d906b78
2,491
md
Markdown
README.md
Abe-Rybeck/Supreme-Grabber
76f5a3692f067266983c2ebfe984d9268038ef24
[ "Apache-2.0" ]
4
2017-11-29T21:53:11.000Z
2018-03-28T15:51:03.000Z
README.md
Abe-Rybeck/Supreme-Grabber
76f5a3692f067266983c2ebfe984d9268038ef24
[ "Apache-2.0" ]
8
2017-11-29T22:31:33.000Z
2018-01-09T23:22:27.000Z
README.md
Abe-Rybeck/Supreme-Grabber
76f5a3692f067266983c2ebfe984d9268038ef24
[ "Apache-2.0" ]
null
null
null
[![Build Status](https://travis-ci.org/Abe-Rybeck/Supreme-Grabber.svg?branch=master)](https://travis-ci.org/Abe-Rybeck/Supreme-Grabber)

**Visit "Releases" tab for compiled one-click-run download**

```
  _____                                          _____           _     _
 / ____|                                        / ____|         | |   | |
| (___  _   _ _ __  _ __ ___ _ __ ___   ___    | |  __ _ __ __ _| |__ | |__   ___ _ __
 \___ \| | | | '_ \| '__/ _ \ '_ ` _ \ / _ \   | | |_ | '__/ _` | '_ \| '_ \ / _ \ '__|
 ____) | |_| | |_) | | |  __/ | | | | |  __/   | |__| | | | (_| | |_) | |_) |  __/ |
|_____/ \__,_| .__/|_|  \___|_| |_| |_|\___|    \_____|_|  \__,_|_.__/|_.__/ \___|_|
             | |
             |_|
```

A lightweight assistance tool for purchasing Supreme clothing and accessories.

Author: Abe Rybeck
https://github.com/Abe-Rybeck

**This Readme and the associated files may not be altered in any way without the written consent of the author stated above**

Unlike most other Supreme bots, Supreme Grabber does not send your info through any 3rd-party server, making your sensitive information no more likely to be stolen than if you had typed it into the Supreme website yourself.

Requirements:

- 64-bit JRE version 8.0.0
- Firefox version 56.0 or higher

To Test:

1. Double-click Supreme Grabber
2. Enter *FAKE* billing and credit card info
3. Enter KeyWORD to search for
4. Enter size (numbers or letters, depending on item)
5. Set the time to the current time + 1 minute (time layout is military HH:MM)
6. Click "Save and Run"

To Run:

1. Double-click Supreme Grabber
2. Enter billing and credit card info
3. Enter KeyWORD to search for
4. Enter size (numbers or letters, depending on item)
5. Click "Save and Run"
6. At the time specified, the bot will take effect
7. Once the bot reaches the reCAPTCHA, it will prompt you for input
8. Hopefully you've got your item!

*Download links are provided in the "Requirements" folder*

*During the alpha stage of development, the access code will be "Supreme Grabber"*

*Supreme Grabber executes when it reaches the time entered in YOUR timezone, so convert from Supreme's timezone (EST) accordingly*

*IF YOU ENTER LEGITIMATE CREDIT CARD AND BILLING INFO WHILE TESTING, THE ITEM WILL BE PURCHASED*
40.177419
223
0.592533
eng_Latn
0.920522
b9ab68e19a3a93a160facd1011432af126662289
9,009
md
Markdown
articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
Microsoft/azure-docs.cs-cz
1e2621851bc583267d783b184f52dc4b853a058c
[ "CC-BY-4.0", "MIT" ]
6
2017-08-28T07:43:21.000Z
2022-01-04T10:32:24.000Z
articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
MicrosoftDocs/azure-docs.cs-cz
1e2621851bc583267d783b184f52dc4b853a058c
[ "CC-BY-4.0", "MIT" ]
428
2018-08-23T21:35:37.000Z
2021-03-03T10:46:43.000Z
articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
Microsoft/azure-docs.cs-cz
1e2621851bc583267d783b184f52dc4b853a058c
[ "CC-BY-4.0", "MIT" ]
16
2018-03-03T16:52:06.000Z
2021-12-22T09:52:44.000Z
---
title: Enable Web Application Firewall - Azure CLI
description: Learn how to restrict web traffic with a web application firewall on an application gateway using the Azure CLI.
services: web-application-firewall
author: vhorne
ms.service: web-application-firewall
ms.date: 03/29/2021
ms.author: victorh
ms.topic: how-to
ms.custom: devx-track-azurecli
ms.openlocfilehash: 390fdd4d9e9d0bc62589484ab0c4ba7468bcaf4b
ms.sourcegitcommit: 4b0e424f5aa8a11daf0eec32456854542a2f5df0
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 04/20/2021
ms.locfileid: "107773094"
---

# <a name="enable-web-application-firewall-using-the-azure-cli"></a>Enable Web Application Firewall using the Azure CLI

You can restrict traffic on an application gateway with a [web application firewall](ag-overview.md) (WAF). The WAF uses [OWASP](https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project) rules to protect your application. These rules include protection against attacks such as SQL injection, cross-site scripting, and session hijacking.

In this article, you learn how to:

* Set up the network
* Create an application gateway with WAF enabled
* Create a virtual machine scale set
* Create a storage account and configure diagnostics

![Web application firewall example](../media/tutorial-restrict-web-traffic-cli/scenario-waf.png)

If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-restrict-web-traffic-powershell.md).

[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]

[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]

- This article requires version 2.0.4 or later of the Azure CLI. If you are using Azure Cloud Shell, the latest version is already installed.

## <a name="create-a-resource-group"></a>Create a resource group

A resource group is a logical container into which Azure resources are deployed and managed. Create an Azure resource group named *myResourceGroupAG* with [az group create](/cli/azure/group#az_group_create).

```azurecli-interactive
az group create --name myResourceGroupAG --location eastus
```

## <a name="create-network-resources"></a>Create network resources

A virtual network and subnets provide network connectivity to the application gateway and its associated resources. Create a virtual network named *myVNet* and a subnet named *myAGSubnet*. Then create a public IP address named *myAGPublicIPAddress*.

```azurecli-interactive
az network vnet create \
  --name myVNet \
  --resource-group myResourceGroupAG \
  --location eastus \
  --address-prefix 10.0.0.0/16 \
  --subnet-name myBackendSubnet \
  --subnet-prefix 10.0.1.0/24

az network vnet subnet create \
  --name myAGSubnet \
  --resource-group myResourceGroupAG \
  --vnet-name myVNet \
  --address-prefix 10.0.2.0/24

az network public-ip create \
  --resource-group myResourceGroupAG \
  --name myAGPublicIPAddress \
  --allocation-method Static \
  --sku Standard
```

## <a name="create-an-application-gateway-with-a-waf"></a>Create an application gateway with a WAF

Use [az network application-gateway create](/cli/azure/network/application-gateway) to create an application gateway named *myAppGateway*. When you create an application gateway with the Azure CLI, you specify configuration information such as capacity, SKU, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress*.

```azurecli-interactive
az network application-gateway create \
  --name myAppGateway \
  --location eastus \
  --resource-group myResourceGroupAG \
  --vnet-name myVNet \
  --subnet myAGSubnet \
  --capacity 2 \
  --sku WAF_v2 \
  --http-settings-cookie-based-affinity Disabled \
  --frontend-port 80 \
  --http-settings-port 80 \
  --http-settings-protocol Http \
  --public-ip-address myAGPublicIPAddress

az network application-gateway waf-config set \
  --enabled true \
  --gateway-name myAppGateway \
  --resource-group myResourceGroupAG \
  --firewall-mode Detection \
  --rule-set-version 3.0
```

It may take several minutes for the application gateway to be created. After the application gateway is created, you can see these new features:

- *appGatewayBackendPool* - An application gateway must have at least one back-end address pool.
- *appGatewayBackendHttpSettings* - Specifies that port 80 and the HTTP protocol are used for communication.
- *appGatewayHttpListener* - The default listener associated with *appGatewayBackendPool*.
- *appGatewayFrontendIP* - Assigns *myAGPublicIPAddress* to the *appGatewayHttpListener* listener.
- *rule1* - The default routing rule associated with the *appGatewayHttpListener* listener.

## <a name="create-a-virtual-machine-scale-set"></a>Create a virtual machine scale set

In this example, you create a virtual machine scale set that provides two servers for the back-end pool in the application gateway. The virtual machines in the scale set are associated with the *myBackendSubnet* subnet. Use [az vmss create](/cli/azure/vmss#az_vmss_create) to create the scale set. Replace \<username> and \<password> with your own values before you run the command.

```azurecli-interactive
az vmss create \
  --name myvmss \
  --resource-group myResourceGroupAG \
  --image UbuntuLTS \
  --admin-username <username> \
  --admin-password <password> \
  --instance-count 2 \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --vm-sku Standard_DS2 \
  --upgrade-policy-mode Automatic \
  --app-gateway myAppGateway \
  --backend-pool-name appGatewayBackendPool
```

### <a name="install-nginx"></a>Install NGINX

```azurecli-interactive
az vmss extension set \
  --publisher Microsoft.Azure.Extensions \
  --version 2.0 \
  --name CustomScript \
  --resource-group myResourceGroupAG \
  --vmss-name myvmss \
  --settings '{ "fileUris": ["https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/application-gateway/iis/install_nginx.sh"],"commandToExecute": "./install_nginx.sh" }'
```

## <a name="create-a-storage-account-and-configure-diagnostics"></a>Create a storage account and configure diagnostics

In this article, the application gateway uses a storage account to store data for detection and prevention purposes. You could also use Azure Monitor logs or Event Hubs to record data.

### <a name="create-a-storage-account"></a>Create a storage account

Use [az storage account create](/cli/azure/storage/account#az_storage_account_create) to create a storage account named *myagstore1*.

```azurecli-interactive
az storage account create \
  --name myagstore1 \
  --resource-group myResourceGroupAG \
  --location eastus \
  --sku Standard_LRS \
  --encryption-services blob
```

### <a name="configure-diagnostics"></a>Configure diagnostics

Configure diagnostics to record data into the ApplicationGatewayAccessLog, ApplicationGatewayPerformanceLog, and ApplicationGatewayFirewallLog logs. Replace `<subscriptionId>` with your subscription identifier, and then configure diagnostics with [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az_monitor_diagnostic_settings_create).

```azurecli-interactive
appgwid=$(az network application-gateway show --name myAppGateway --resource-group myResourceGroupAG --query id -o tsv)

storeid=$(az storage account show --name myagstore1 --resource-group myResourceGroupAG --query id -o tsv)

az monitor diagnostic-settings create --name appgwdiag --resource $appgwid \
  --logs '[ { "category": "ApplicationGatewayAccessLog", "enabled": true, "retentionPolicy": { "days": 30, "enabled": true } }, { "category": "ApplicationGatewayPerformanceLog", "enabled": true, "retentionPolicy": { "days": 30, "enabled": true } }, { "category": "ApplicationGatewayFirewallLog", "enabled": true, "retentionPolicy": { "days": 30, "enabled": true } } ]' \
  --storage-account $storeid
```

## <a name="test-the-application-gateway"></a>Test the application gateway

Use [az network public-ip show](/cli/azure/network/public-ip#az_network_public_ip_show) to get the public IP address of the application gateway. Copy the public IP address, and then paste it into the address bar of your browser.

```azurecli-interactive
az network public-ip show \
  --resource-group myResourceGroupAG \
  --name myAGPublicIPAddress \
  --query [ipAddress] \
  --output tsv
```

![Test base URL in application gateway](../media/tutorial-restrict-web-traffic-cli/application-gateway-nginxtest.png)

## <a name="clean-up-resources"></a>Clean up resources

When you no longer need the resource group, the application gateway, and the other related resources, remove them.

```azurecli-interactive
az group delete --name myResourceGroupAG
```

## <a name="next-steps"></a>Next steps

[Customize web application firewall rules](application-gateway-customize-waf-rules-portal.md)
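As an illustrative extra check (not one of the original steps): a quick way to confirm that the WAF is actually inspecting traffic is to send a request with an obviously suspicious query string and then look for a matching entry in the firewall log. Because the WAF here runs in Detection mode, the request is logged rather than blocked; with the diagnostics configured above, entries typically land in the storage account in a container named after the log category (for example *insights-logs-applicationgatewayfirewalllog*).

```console
# Replace <public-ip> with the address returned by az network public-ip show.
curl "http://<public-ip>/?id=1%27%20or%20%271%27=%271"
```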
45.5
377
0.77467
ces_Latn
0.987802
b9ab8be9668c8cf62bc821ac1780ed36ecde9148
338
md
Markdown
windows-driver-docs-pr/image/wia-samples-and-tools.md
hugmyndakassi/windows-driver-docs
aa56990cc71e945465bd4d4f128478b8ef5b3a1a
[ "CC-BY-4.0", "MIT" ]
1
2022-02-07T12:25:23.000Z
2022-02-07T12:25:23.000Z
windows-driver-docs-pr/image/wia-samples-and-tools.md
hugmyndakassi/windows-driver-docs
aa56990cc71e945465bd4d4f128478b8ef5b3a1a
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/image/wia-samples-and-tools.md
hugmyndakassi/windows-driver-docs
aa56990cc71e945465bd4d4f128478b8ef5b3a1a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: WIA Samples and Tools
description: WIA Samples and Tools
ms.date: 04/20/2017
---

# WIA Samples and Tools

This section discusses the WIA sample code and the WIA tools that are available to aid in the driver development and testing process.

[Sample WIA Drivers](sample-wia-drivers.md)

[WIA Tools](wia-tools.md)
13
133
0.727811
eng_Latn
0.981346
b9abac8044f3eaf6e3f3c64a16ce11bb11b2f101
3,544
md
Markdown
windows-driver-docs-pr/stream/oem-guidance-on-registry-keys-for-video-stabilization.md
i35010u/windows-driver-docs.zh-cn
e97bfd9ab066a578d9178313f802653570e21e7d
[ "CC-BY-4.0", "MIT" ]
1
2021-02-04T01:49:58.000Z
2021-02-04T01:49:58.000Z
windows-driver-docs-pr/stream/oem-guidance-on-registry-keys-for-video-stabilization.md
i35010u/windows-driver-docs.zh-cn
e97bfd9ab066a578d9178313f802653570e21e7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/stream/oem-guidance-on-registry-keys-for-video-stabilization.md
i35010u/windows-driver-docs.zh-cn
e97bfd9ab066a578d9178313f802653570e21e7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Video stabilization registry settings
description: The OEM-set MaxPixelsPerSecond value in the VideoStabilization registry key enables OEMs to configure video stabilization settings on a device and apply video stabilization to video at capture time.
ms.date: 04/20/2017
ms.localizationpriority: medium
ms.openlocfilehash: 3a375c3aa94838f580787e1bfd471f0c3b7e5ddd
ms.sourcegitcommit: 418e6617e2a695c9cb4b37b5b60e264760858acd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/07/2020
ms.locfileid: "96840523"
---

# <a name="video-stabilization-registry-settings"></a>Video stabilization registry settings

The OEM-set **MaxPixelsPerSecond** value in the **VideoStabilization** registry key enables OEMs to configure video stabilization settings on a device and apply video stabilization to video at capture time. The configuration takes into account the device's recording resolutions as well as its hardware and software capabilities.

## <a name="overview"></a>Overview

In the best case, the **MaxPixelsPerSecond** value of the **VideoStabilization** registry key specifies the maximum video stabilization performance on the device. All applications can read the registry key and avoid attempting video stabilization beyond reasonable use. The value entered in **MaxPixelsPerSecond** sets the limit beyond which the MFT will not attempt to enable video stabilization, even if an application enables it. The registry key needs to indicate the maximum resolution and frame rate at which the device can run video stabilization.

If the **MaxPixelsPerSecond** value is not set, the video stabilization MFT uses a fallback value. Finally, if that fails, video stabilization uses its internal logic to turn itself off in order to maintain the best possible user experience.

## <a name="video-stabilization-requirements"></a>Video stabilization requirements

A device is considered capable of running video stabilization when all of the following are true:

- Video stabilization is on and not in pass-through mode
- Recording is on
- Preview is active
- No glitches or dropped frames appear in the preview
- No glitched or dropped frames appear in the recorded video

## <a name="set-the-video-stabilization-registry-key"></a>Set the video stabilization registry key

**VideoStabilization registry key format:**

- OEMs should set the **MaxPixelsPerSecond** QWORD value, which defines the pixels-per-second cutoff beyond which video stabilization is forced to run in pass-through mode (even if an application has enabled it).
- **MaxPixelsPerSecond** is defined as follows:

    `MaxPixelsPerSecond = width * height * frame-rate`

    For example, for 1080p resolution at 30 fps, **MaxPixelsPerSecond** is defined as 1920 \* 1080 \* 30 = 62208000.

**VideoStabilization registry key location:**

- OEMs should create and set the **VideoStabilization** registry key for video stabilization at the following location:

    **HKEY\_LOCAL\_MACHINE \\SOFTWARE \\Microsoft \\Windows Media Foundation \\Platform \\VideoStabilization**

    To set the **VideoStabilization** registry key's **MaxPixelsPerSecond** value on a 32-bit machine, use the following command from an elevated command prompt:

    ```console
    reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Media Foundation\Platform\VideoStabilization" /v "MaxPixelsPerSecond" /t REG_QWORD /d 62208000 /f
    ```

- On 64-bit machines, OEMs should also create and set the same key on the Wow6432Node path:

    **HKEY\_LOCAL\_MACHINE \\SOFTWARE \\Wow6432Node \\Microsoft \\Windows Media Foundation \\Platform \\VideoStabilization**

    To set the **VideoStabilization** registry key's **MaxPixelsPerSecond** value on a 64-bit machine, use the following command from an elevated command prompt:

    ```console
    reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows Media Foundation\Platform\VideoStabilization" /v "MaxPixelsPerSecond" /t REG_QWORD /d 62208000 /f
    ```

Once set, the **VideoStabilization** registry key is visible to the video stabilization MFT and to first- and third-party apps. If the **MaxPixelsPerSecond** value is set, the video stabilization MFT never attempts to stabilize a frame rate or resolution beyond the limit. Instead, it goes into pass-through mode, even if the application requests video stabilization. The video stabilization MFT has a mechanism for recommending a frame rate and resolution to applications for a given device. Apps can opt in to the recommendation to avoid such pass-through behavior on devices where the registry key is populated.

If the **MaxPixelsPerSecond** value is not set, the video stabilization MFT tries to stabilize up to its default value, but no higher. The default value is 62208000 pixels per second, which is 1920 pixels x 1080 pixels x 30 fps.

When video stabilization attempts to stabilize but cannot keep up with stabilizing video frames in real time, its internal logic switches video stabilization to pass-through mode (turning video stabilization off) rather than dropping frames. If video stabilization switched itself off in a previous session, the MFT still tries to start video stabilization in regular mode for each new session before deciding to switch to pass-through mode. This is because it cannot rely on the previous mode to make future decisions, since the device may have been under stress the last time it ran.

## <a name="video-stabilization-test-requirements"></a>Video stabilization test requirements

OEMs need to verify that the end-to-end functionality of their device works properly. They need to validate an acceptable experience at the given maximum pixels per second. OEMs must verify the following:

- The Microsoft-provided registry key location disables the video stabilization internal logic. Disabling the internal logic guarantees that video stabilization does not enter pass-through mode when it encounters stress conditions.
- Video stabilization can run on its own, without background tasks or other features
- Smooth preview rendering with video stabilization enabled and internal logic disabled
- Smooth video recording with video stabilization enabled and internal logic disabled
- The desired pixels per second is achieved in the stabilized recording
- No overheating

**Note** Retail systems should not carry the registry key that disables the video stabilization internal logic described in this section. However, retail systems should have the **VideoStabilization** registry key with the **MaxPixelsPerSecond** value determined through this test process.

**Note** The **VideoStabilization** registry key **MaxPixelsPerSecond** value only works when the [MF\_LOW\_LATENCY](/windows/desktop/medfound/mf-low-latency) attribute is set on the effect. The attribute is set automatically when the provided video stabilization effect is added to the MediaCapture pipeline. However, the registry key has no effect if the video stabilization effect is inserted into a custom pipeline, or into a pipeline that does not set the **MF\_LOW\_LATENCY** attribute.
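As an additional worked example of the formula above (a hypothetical device, not part of the original guidance): a device validated for 4K (3840 x 2160) at 30 fps would use MaxPixelsPerSecond = 3840 \* 2160 \* 30 = 248832000, so the 32-bit command would be:

```console
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Media Foundation\Platform\VideoStabilization" /v "MaxPixelsPerSecond" /t REG_QWORD /d 248832000 /f
```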
32.814815
246
0.773702
yue_Hant
0.751581
b9ad9c8ca3aa3566e9d817f0deee619c1f1ec2c2
11,045
md
Markdown
_posts/2020-09-04-photographer.md
darrynb89/darrynb89.github.io
5187831057c67f40f10fe371c297aa31170beb10
[ "MIT" ]
1
2021-01-30T15:37:13.000Z
2021-01-30T15:37:13.000Z
_posts/2020-09-04-photographer.md
darrynb89/darrynb89.github.io
5187831057c67f40f10fe371c297aa31170beb10
[ "MIT" ]
null
null
null
_posts/2020-09-04-photographer.md
darrynb89/darrynb89.github.io
5187831057c67f40f10fe371c297aa31170beb10
[ "MIT" ]
null
null
null
---
layout: post
current: post
cover: 'assets/images/photographer.png'
navigation: True
title: Photographer Write Up
date: 2020-09-05 00:00:00
tags: [vulnhub, ctf, oscp]
class: post-template
subclass: 'post'
author: darryn
---

![Photographer](/assets/images/photographer.png)

### Overview

Photographer was the last machine I did before I took my OSCP exam, so it seemed fitting for it to be the first write-up on my new blog. Photographer is a great OSCP-like machine created by [v1n1v131r4](https://twitter.com/v1n1v131r4).

### Nmap

Starting with an Nmap scan, let's see which ports are open. I got the IP of the machine by checking the DHCP server on my network; however, I could have used arp-scan to find the IP address.

```highlight
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $nmap -sC -sV -oA nmap/initial 192.168.1.77
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-01 17:54 BST
Nmap scan report for 192.168.1.77
Host is up (0.0012s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE     VERSION
80/tcp   open  http        Apache httpd 2.4.18 ((Ubuntu))
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Photographer by v1n1v131r4
139/tcp  open  netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
445/tcp  open  netbios-ssn Samba smbd 4.3.11-Ubuntu (workgroup: WORKGROUP)
8000/tcp open  http        Apache httpd 2.4.18 ((Ubuntu))
|_http-generator: Koken 0.22.24
|_http-open-proxy: Proxy might be redirecting requests
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: daisa ahomi
|_http-trane-info: Problem with XML parsing of /evox/about
Service Info: Host: PHOTOGRAPHER

Host script results:
|_clock-skew: mean: 1h19m59s, deviation: 2h18m33s, median: 0s
|_nbstat: NetBIOS name: PHOTOGRAPHER, NetBIOS user: <unknown>, NetBIOS MAC: <unknown> (unknown)
| smb-os-discovery:
|   OS: Windows 6.1 (Samba 4.3.11-Ubuntu)
|   Computer name: photographer
|   NetBIOS computer name: PHOTOGRAPHER\x00
|   Domain name: \x00
|   FQDN: photographer
|_  System time: 2020-09-01T12:54:28-04:00
| smb-security-mode:
|   account_used: guest
|   authentication_level: user
|   challenge_response: supported
|_  message_signing: disabled (dangerous, but default)
| smb2-security-mode:
|   2.02:
|_    Message signing enabled but not required
| smb2-time:
|   date: 2020-09-01T16:54:28
|_  start_date: N/A

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 12.80 seconds
```

The scan reveals four open ports: Samba and two web servers. Based on the HTTP banners it looks to be a Linux Ubuntu machine; Googling "apache 2.4.18 ubuntu" suggests the OS is probably Ubuntu Xenial 16.04 LTS.

### Enumeration

I started with port 80 but didn't find anything interesting. I ran Gobuster and Nikto and both came up blank, so I decided to move on for now. On port 8000 I'm presented with a CMS-type page. Looking at the footer indicates 'Built with Koken'.

![Port 8000](/assets/images/photographer/Port-8000.png)

A quick Google shows [Koken](http://koken.me/) is a CMS for photographers. An [exploit](https://www.exploit-db.com/exploits/48706) is also available, by the same author as the machine, which would indicate this is the intended path. However, the exploit requires authentication.

![Exploit](/assets/images/photographer/exploit.png)

Looking at the exploit, the POST request makes a call to **/admin/**. Going to the URL does provide a login page requiring an email address and password. I will take a look at Samba before going any further on the web ports.

![Login](/assets/images/photographer/login.png)

Using smbclient and logging in anonymously shows one share in particular that looks interesting: 'sambashare'.

```highlight
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $smbclient -L \\\\192.168.1.77\\

	Sharename       Type      Comment
	---------       ----      -------
	print$          Disk      Printer Drivers
	sambashare      Disk      Samba on Ubuntu
	IPC$            IPC       IPC Service (photographer server (Samba, Ubuntu))
SMB1 disabled -- no workgroup available
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $smbclient \\\\192.168.1.77\\sambashare\\
Enter WORKGROUP\daz's password:
Try "help" to get a list of possible commands.
smb: \> ls
  .                                   D        0  Tue Jul 21 02:30:07 2020
  ..                                  D        0  Tue Jul 21 10:44:25 2020
  mailsent.txt                        N      503  Tue Jul 21 02:29:40 2020
  wordpress.bkp.zip                   N 13930308  Tue Jul 21 02:22:23 2020

		278627392 blocks of size 1024. 264268400 blocks available
smb: \> mget *
Get file mailsent.txt? y
getting file \mailsent.txt of size 503 as mailsent.txt (245.6 KiloBytes/sec) (average 245.6 KiloBytes/sec)
Get file wordpress.bkp.zip? y
getting file \wordpress.bkp.zip of size 13930308 as wordpress.bkp.zip (67013.8 KiloBytes/sec) (average 66362.5 KiloBytes/sec)
smb: \>
```

Two files are on the share: the first is an email from Agi to Daisa advising that the site is ready, and the other appears to be a backup zip of the site.

```highlight
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $cat mailsent.txt
Message-ID: <[email protected]>
Date: Mon, 20 Jul 2020 11:40:36 -0400
From: Agi Clarence <[email protected]>
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.0.1) Gecko/20020823 Netscape/7.0
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: Daisa Ahomi <[email protected]>
Subject: To Do - Daisa Website's
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit

Hi Daisa!
Your site is ready now.
Don't forget your secret, my babygirl ;)
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $
```

'babygirl' looks to be a hint at the password, and I now have two users and their email addresses:

- Agi Clarence - [email protected]
- Daisa Ahomi - [email protected]

### Foothold

I go back to port **8000/admin/** and try them out. I get straight in with **[email protected]** and **babygirl**.

![CMS](/assets/images/photographer/cms.png)

Going back to the exploit from earlier, it looks like I can upload a PHP file by saving the file as .jpg and then using Burp to rename it. I'm going to try to upload a PHP reverse shell script. If you're using Kali or ParrotOS, the script can be found in /usr/share/webshells/php/, or it can be downloaded from [pentestmonkey](http://pentestmonkey.net/tools/web-shells/php-reverse-shell).

```highlight
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $cp /usr/share/webshells/php/php-reverse-shell.php .
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $mv php-reverse-shell.php shell.php.jpg
```

I update the script with my local IP and port details.

```highlight
$ip = '127.0.0.1';  // CHANGE THIS
$port = 1234;       // CHANGE THIS
```

Start a netcat listener ready to catch the shell.

```highlight
┌─[daz@parrot]─[~/Documents/Vulnhub/Photographer]
└──╼ $sudo nc -nvlp 443
listening on [any] 443 ...
```

Going back to the admin page, I upload the file using 'Import content' and select the PHP file. With Burp open and proxy intercept on, I set Burp as a proxy in my browser and select 'Import'.

![Content](/assets/images/photographer/content.png)

In Burp I can now remove the .jpg extension from the file name and forward the request.

![Burp](/assets/images/photographer/burp.png)

With the file selected, right-clicking on 'Download File' and choosing 'Open Link in New Tab' should run our PHP script.

![DownloadFile](/assets/images/photographer/downloadfile.png)

I have a shell as www-data! The first thing I always do is upgrade it to a more stable TTY using Python (for example, `python -c 'import pty; pty.spawn("/bin/bash")'`).

![Shell](/assets/images/photographer/shell.png)

The first flag can be found in Daisa's user directory.

```highlight
www-data@photographer:/$ cd home/daisa/
www-data@photographer:/home/daisa$ ls
Desktop    Downloads  Pictures  Templates  examples.desktop
Documents  Music      Public    Videos     user.txt
www-data@photographer:/home/daisa$ cat user.txt
d41d8cd98f00{REDACTED}
www-data@photographer:/home/daisa$
```

### Privilege Escalation

I now need to escalate out of www-data to either a user or root. I'm going to use [linpeas](https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite) to enumerate the machine for possible local privilege escalation paths. First I'll use Python's built-in web server to copy the script over from my machine.

![linpeas](/assets/images/photographer/linpeas.png)

Linpeas produces a lot of output; looking through it, **/usr/bin/php7.2** jumps out.

> Linpeas will colour-code the output based on severity, but notice /usr/bin/php7.2 is green. It's important to review all the output and not rely on the scripts/tools to identify potential attack vectors.

![suid](/assets/images/photographer/suid.png)

First I will check [GTFOBins](https://gtfobins.github.io), searching for PHP.

![gtfobins](/assets/images/photographer/phpsuid.png)

Let's give it a go.

```highlight
www-data@photographer:/tmp$ CMD="/bin/sh"
www-data@photographer:/tmp$ /usr/bin/php7.2 -r "pcntl_exec('/bin/sh', ['-p']);"
# id
uid=33(www-data) gid=33(www-data) euid=0(root) groups=33(www-data)
# whoami
root
#
```

We have root! Let's grab the flag.

```highlight
#
# cd /root
# cat proof.txt

                .:/://::::///:-`
            -/++:+`:--:o:  oo.-/+/:`
         -++-.`o++s-y:/s:  `sh:hy`:-/+:`
       :o:``oyo/o`.        `  ```/-so:+--+/`
      -o:-`yh//.              `./ys/-.o/
     ++.-ys/:/y-              /s-:/+/:/o`
    o/ :yo-:hNN              .MNs./+o--s`
   ++ soh-/mMMN--.`       `.-/MMMd-o:+
   -s .y /++:NMMMy-.``  ``-:hMMMmoss:
   +/ s- hMMMN` shyo+:.  -/+syd+ :MMMMo h
   h `MMMMMy./MMMMMd:  +mMMMMN--dMMMMd s.
   y `MMMMMMd`/hdh+..+/.-ohdy--mMMMMMm +-
   h  dMMMMd:````   `mmNh  ```./NMMMMs o.
   y. /MMMMNmmmmd/  `s-:o  sdmmmmMMMMN. h`
   :o  sMMMMMMMMs.  -hMMMMMMMM/ :o
   s:  `sMMMMMMMo  -  .    `.  . hMMMMMMN+ `y`
   `s-  +mMMMMMNhd+h/+h+dhMMMMMMd: `s-
    `s:  --.sNMMMMMMMMMMMMMMMMMMmo/. -s.
      /o.`ohd:`.odNMMMMMMMMMMMMNh+.:os/
       `/o` .++-`+y+/:`/ssdmmNNmNds+-/o-hh:-/o-
          ./+:`:yh:dso/.+-++++ss+h++.:++-
             -/+/-:-/y+/d:yh-o:+--/+/:`
                 `-///////////////:`


Follow me at: http://v1n1v131r4.com


d41d8cd98f00{REDACTED}
#
```
39.873646
368
0.640923
eng_Latn
0.795528
b9ade3e5041e61b53c0f517fc513f5d648211924
165
md
Markdown
README.md
wmcooper2/lyricsearch
0aff7a32d240f6ba2ba1e21ae46d3ce79d13edd5
[ "MIT" ]
null
null
null
README.md
wmcooper2/lyricsearch
0aff7a32d240f6ba2ba1e21ae46d3ce79d13edd5
[ "MIT" ]
null
null
null
README.md
wmcooper2/lyricsearch
0aff7a32d240f6ba2ba1e21ae46d3ce79d13edd5
[ "MIT" ]
null
null
null
# Lyric Search

A CLI tool for searching through more than 600,000 lyric text files.

For more info see the [wiki](https://github.com/wmcooper2/lyricsearch/wiki).
41.25
76
0.763636
eng_Latn
0.868965
b9ae3d1b2bb9a148800c705ea24a32d815cdd132
5,017
md
Markdown
docs/2014/analysis-services/multidimensional-models/scripting-language-assl/assl-objects-and-object-characteristics.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/analysis-services/multidimensional-models/scripting-language-assl/assl-objects-and-object-characteristics.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/analysis-services/multidimensional-models/scripting-language-assl/assl-objects-and-object-characteristics.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: ASSL Objects and Object Characteristics | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: analysis-services
ms.topic: reference
helpviewer_keywords:
- reference exceptions [Analysis Services Scripting Language]
- ASSL, objects
- inheritance [Analysis Services Scripting Language]
- localized names [Analysis Services Scripting Language]
- objects [Analysis Services Scripting Language]
- names [Analysis Services Scripting Language]
- Analysis Services Scripting Language, objects
- expansion [Analysis Services Scripting Language]
ms.assetid: 6e5c28b5-c0bc-4ccd-82e5-e174bbb71386
author: minewiskan
ms.author: owend
manager: craigg
ms.openlocfilehash: aee5e7b94aaaca2b35e34f8c4d49c2834189f114
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/15/2019
ms.locfileid: "62736610"
---

# <a name="assl-objects-and-object-characteristics"></a>ASSL Objects and Object Characteristics

Objects in the Analysis Services Scripting Language (ASSL) follow specific guidelines regarding object groups, inheritance, naming, expansion, and processing.

## <a name="object-groups"></a>Object groups

All [!INCLUDE[msCoName](../../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] objects have an XML representation. The objects are divided into two groups:

**Major objects**

Major objects can be created, altered, and deleted independently. They include:

- Servers
- Databases
- Dimensions
- Cubes
- Measure groups
- Partitions
- Perspectives
- Mining models
- Roles
- Commands associated with a server or database
- Data sources

Major objects have the following properties for tracking their history and status:

- `CreatedTimestamp`
- `LastSchemaUpdate`
- `LastProcessed` (where appropriate)

> [!NOTE]
> The classification of an object as a major object affects how an instance of [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] treats that object and how the object is handled in the object definition language. However, this classification does not guarantee that [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] development and management tools will allow the independent creation, modification, or deletion of these objects.

**Minor objects**

Minor objects can only be created, altered, or deleted as part of creating, altering, or deleting the parent major object. They include:

- Hierarchies and levels
- Attributes
- Measures
- Mining model columns
- Commands associated with a cube
- Aggregations

## <a name="object-expansion"></a>Object expansion

The `ObjectExpansion` restriction can be used to control the degree of expansion of the ASSL XML returned by the server. The options for this restriction are listed in the following table.

|Enumeration value|Allowed for `<Alter>`|Description|
|-----------------|---------------------|-----------|
|*ReferenceOnly*|no|Returns only the name, ID, and timestamp of the requested object and of all major objects contained recursively.|
|*ObjectProperties*|yes|Expands the requested object and contained minor objects, but does not return contained major objects.|
|*ExpandObject*|no|Same as *ObjectProperties*, but also returns the name, ID, and timestamp of contained major objects.|
|*ExpandFull*|yes|Fully expands the requested object and all contained objects recursively.|

This ASSL reference section describes the *ExpandFull* representation. All other `ObjectExpansion` levels are derived from this level.

## <a name="object-processing"></a>Object processing

ASSL includes read-only elements or properties (for example, `LastProcessed`) that can be read from the [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] instance, but are omitted when command scripts are submitted to the instance. [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] ignores modified values for read-only elements without warning or error.

[!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] also ignores inappropriate or irrelevant properties without raising validation errors. For example, element X should only be present when element Y has a specific value. The [!INCLUDE[ssASnoversion](../../../includes/ssasnoversion-md.md)] instance ignores element X instead of validating that element against the value of element Y.
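A hedged illustration of the `ObjectExpansion` restriction described above: a sketch of an XMLA `Discover` request using the `DISCOVER_XML_METADATA` request type, with a hypothetical database ID that you would replace with your own.

```xml
<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
  <RequestType>DISCOVER_XML_METADATA</RequestType>
  <Restrictions>
    <RestrictionList>
      <!-- Hypothetical database; replace with a real DatabaseID -->
      <DatabaseID>Adventure Works DW</DatabaseID>
      <!-- Request the fully expanded ASSL representation -->
      <ObjectExpansion>ExpandFull</ObjectExpansion>
    </RestrictionList>
  </Restrictions>
  <Properties>
    <PropertyList />
  </Properties>
</Discover>
```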
48.240385
490
0.74128
por_Latn
0.996213
b9af759d559f3b1086e85a5d5baf087506ec8938
13,205
md
Markdown
articles/storage/files/storage-files-quick-create-use-windows.md
NikoMix/azure-docs.de-de
357aca84dfe4bb69cc9c376d62d7b4c81da38b42
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/files/storage-files-quick-create-use-windows.md
NikoMix/azure-docs.de-de
357aca84dfe4bb69cc9c376d62d7b4c81da38b42
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/files/storage-files-quick-create-use-windows.md
NikoMix/azure-docs.de-de
357aca84dfe4bb69cc9c376d62d7b4c81da38b42
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Erstellen und Verwenden einer Azure Files-Freigabe auf Windows-VMs description: Erstellen und verwenden Sie eine Azure Files-Freigabe im Azure-Portal. Stellen Sie eine Verbindung mit einem virtuellen Windows-Computer und eine Verbindung mit der Files-Freigabe her, und laden Sie eine Datei in die Files-Freigabe hoch. author: roygara ms.service: storage ms.topic: quickstart ms.date: 02/01/2019 ms.author: rogarana ms.subservice: files ms.openlocfilehash: 4c5629f80c37c9f79dc9a39c4d8304acbee9679d ms.sourcegitcommit: 3bcce2e26935f523226ea269f034e0d75aa6693a ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 10/23/2020 ms.locfileid: "92489573" --- # <a name="quickstart-create-and-manage-azure-files-share-with-windows-virtual-machines"></a>Schnellstart: Erstellen und Verwalten einer Azure Files-Freigabe mit virtuellen Windows-Computern Der Artikel zeigt die grundlegenden Schritte zur Erstellung und Verwendung einer Azure Files-Freigabe. Der Schwerpunkt dieser Schnellstartanleitung liegt auf der schnellen Einrichtung einer Azure Files-Freigabe, damit Sie sich mit der Funktionsweise des Diensts vertraut machen können. Sollten Sie eine ausführlichere Anleitung für die Erstellung und Verwendung von Azure-Dateifreigaben in Ihrer Umgebung benötigen, finden Sie diese unter [Verwenden einer Azure-Dateifreigabe mit Windows](storage-how-to-use-files-windows.md). Wenn Sie kein Azure-Abonnement besitzen, können Sie ein [kostenloses Konto](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) erstellen, bevor Sie beginnen. ## <a name="sign-in-to-azure"></a>Anmelden bei Azure Melden Sie sich beim [Azure-Portal](https://portal.azure.com) an. ## <a name="prepare-your-environment"></a>Vorbereiten der Umgebung In dieser Schnellstartanleitung wird Folgendes eingerichtet: - Ein Azure-Speicherkonto und eine Azure-Dateifreigabe - Ein virtueller Computer mit Windows Server 2016 Datacenter ### <a name="create-a-storage-account"></a>Erstellen eines Speicherkontos Um eine Azure-Dateifreigabe verwenden zu können, müssen Sie zunächst ein Azure-Speicherkonto erstellen. Ein Speicherkonto vom Typ „Allgemein v2“ bietet Zugriff auf sämtliche Azure Storage-Dienste: Blobs, Dateien, Warteschlangen und Tabellen. In dieser Schnellstartanleitung wird ein universelles v2-Speicherkonto erstellt. Die Schritte für die Erstellung einer anderen Art von Speicherkonto sind jedoch ähnlich. Ein Speicherkonto kann eine unbegrenzte Anzahl von Freigaben enthalten. Auf einer Freigabe kann eine unbegrenzte Anzahl von Dateien gespeichert werden, bis die Kapazitätsgrenzen des Speicherkontos erreicht sind. [!INCLUDE [storage-create-account-portal-include](../../../includes/storage-create-account-portal-include.md)] ### <a name="create-an-azure-file-share"></a>Erstellen einer Azure-Dateifreigabe Als Nächstes erstellen Sie eine Dateifreigabe. 1. Wählen Sie nach Abschluss der Bereitstellung des Azure-Speicherkontos die Option **Zu Ressource wechseln** aus. 1. Wählen Sie im Speicherkontobereich die Option **Dateien**. ![Auswählen von Dateien](./media/storage-files-quick-create-use-windows/click-files.png) 1. Wählen Sie **Dateifreigabe** aus. ![Auswählen der Schaltfläche „Dateifreigabe hinzufügen“](./media/storage-files-quick-create-use-windows/create-file-share.png) 1. Nennen Sie die neue Dateifreigabe *qsfileshare* , geben Sie unter **Kontingent** den Wert „1“ ein, und wählen Sie anschließend **Erstellen** aus. Das Kontingent kann auf bis zu 5 TiB festgelegt werden. Für diese Schnellstartanleitung ist jedoch 1 GiB ausreichend. 1. 
You've now created an Azure storage account and a file share holding a single file in Azure. Next, you'll create the Azure virtual machine with Windows Server 2016 Datacenter that represents the on-premises server in this quickstart.

### <a name="deploy-a-vm"></a>Deploy a VM

1. Next, expand the menu on the left side of the portal and select **Create a resource** in the upper left of the Azure portal.
1. In the search box above the list of **Azure Marketplace** resources, search for **Windows Server 2016 Datacenter**, select that entry, and then select **Create**.
1. On the **Basics** tab, under **Project details**, select the resource group you created for this quickstart.

    ![Enter basic information about your VM on the portal blade](./media/storage-files-quick-create-use-windows/vm-resource-group-and-subscription.png)

1. Under **Instance details**, name the VM *qsVM*.
1. Keep the default settings for **Region**, **Availability options**, **Image**, and **Size**.
1. Under **Administrator account**, add *VMadmin* as the **Username** and enter a password for the VM under **Password**.
1. Under **Inbound port rules**, choose **Allow selected ports**, and then select **RDP (3389)** and **HTTP** from the drop-down list.
1. Select **Review + create**.
1. Select **Create**. Creating a new VM takes a few minutes.
1. Once your VM deployment is complete, select **Go to resource**.

You've now created a new VM. Next, you connect to it.

### <a name="connect-to-your-vm"></a>Connect to your VM

1. Select **Connect** on the VM's properties page.

    ![Connect to an Azure VM from the portal](./media/storage-files-quick-create-use-windows/connect-vm.png)

1. On the **Connect to virtual machine** page, keep the default options to connect by **IP address** over **port number** *3389*, and then select **Download RDP file**.
1. Open the downloaded RDP file and select **Connect** when prompted.
1. In the **Windows Security** window, select **More choices** and then **Use a different account**. Enter the username as *localhost\username*, where *username* is the VM admin username you created for the virtual machine. Enter the password you created for the virtual machine, and then select **OK**.

    ![More choices](./media/storage-files-quick-create-use-windows/local-host2.png)

1. You may receive a certificate warning during the sign-in process. Select **Yes** or **Continue** to establish the connection.

## <a name="map-the-azure-file-share-to-a-windows-drive"></a>Map the Azure file share to a Windows drive

1. In the Azure portal, navigate to the *qsfileshare* file share and select **Connect**.
1. Copy the contents of the second box and paste it into **Notepad**.

    ![Screenshot of the contents of the second box, which you copy and paste into Notepad](./media/storage-files-quick-create-use-windows/portal_netuse_connect2.png)

1. On the VM, open **File Explorer** and select **This PC**. Doing so changes the menus available on the ribbon. On the **Computer** menu, select **Map network drive**.
1. Select the drive letter and enter the UNC path. Copy *\\qsstorageacct.file.core.windows.net\qsfileshare* from **Notepad** (if you followed the naming suggestions in this quickstart). Make sure both checkboxes are selected.

    ![Screenshot of the Map network drive dialog](./media/storage-files-quick-create-use-windows/mountonwindows10.png)

1. Select **Finish**.
1. In the **Windows Security** dialog:
    - From Notepad, copy the storage account name prefixed with "AZURE\" and paste it into the **Windows Security** dialog as the username. That is, copy *AZURE\qsstorageacct* if you followed the naming suggestions in this quickstart.
    - From Notepad, copy the storage account key and paste it into the **Windows Security** dialog as the password.

    ![UNC path from the Azure Files connect pane](./media/storage-files-quick-create-use-windows/portal_netuse_connect3.png)
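The connect script the portal generates boils down to two commands: store the credential, then map the drive. Here is a minimal PowerShell sketch of the same mapping, assuming the account and share names used in this quickstart; `<storage-account-key>` is a placeholder for the key shown in the portal's connect pane.

```powershell
# Persist the storage account credential so the mapping survives reboots.
cmdkey /add:qsstorageacct.file.core.windows.net /user:AZURE\qsstorageacct /pass:<storage-account-key>

# Map the share to drive Z: and make the mapping persistent.
New-PSDrive -Name Z -PSProvider FileSystem `
    -Root "\\qsstorageacct.file.core.windows.net\qsfileshare" -Persist
```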
## <a name="create-a-share-snapshot"></a>Create a share snapshot

Now that the drive is mapped, you can create a snapshot.

1. In the portal, navigate to your file share and select **Create snapshot**.

    ![Create a snapshot](./media/storage-files-quick-create-use-windows/create-snapshot.png)

1. On the VM, open the *qstestfile.txt* file, type "this file has been modified", then save and close the file.
1. Create another snapshot.

## <a name="browse-a-share-snapshot"></a>Browse a share snapshot

1. On your file share, select **View snapshots**.
1. On the **File share snapshots** pane, select the first snapshot in the list.

    ![Selected snapshot in the timestamp list](./media/storage-files-quick-create-use-windows/snapshot-list.png)

1. On the pane for that snapshot, select *qsTestFile.txt*.

## <a name="restore-from-a-snapshot"></a>Restore from a snapshot

1. On the file share snapshot blade, right-click *qsTestFile* and select **Restore**.
1. Select **Overwrite original file**.

    ![Download and Restore buttons](./media/storage-files-quick-create-use-windows/snapshot-download-restore-portal.png)

1. On the VM, open the file. The unmodified version has been restored.
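Snapshots can also be taken from PowerShell rather than the portal. The sketch below follows the older Az.Storage object model; treat the `CloudFileShare.Snapshot()` call as an assumption and verify it against your module version, since newer releases expose a different share client. `$acct` is the storage account object from the earlier sketch.

```powershell
# Look up the share, then take a point-in-time snapshot of it.
$share    = Get-AzStorageShare -Name "qsfileshare" -Context $acct.Context
$snapshot = $share.CloudFileShare.Snapshot()   # assumed API; newer module versions may differ

# The timestamp is how you identify the snapshot when browsing or deleting it later.
$snapshot.SnapshotTime
```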
## <a name="delete-a-share-snapshot"></a>Delete a share snapshot

1. On your file share, select **View snapshots**.
1. On the **File share snapshots** pane, select the last snapshot in the list and select **Delete**.

    ![Delete button](./media/storage-files-quick-create-use-windows/portal-snapshots-delete.png)

## <a name="use-a-share-snapshot-in-windows"></a>Use a share snapshot in Windows

Just as with on-premises VSS snapshots, you can view the snapshots of your mounted Azure file share on the **Previous Versions** tab.

1. In File Explorer, navigate to the mounted share.

    ![Mounted share in File Explorer](./media/storage-files-quick-create-use-windows/snapshot-windows-mount.png)

1. Right-click *qsTestFile.txt* and select **Properties** from the menu.

    ![Context menu for a selected directory](./media/storage-files-quick-create-use-windows/snapshot-windows-previous-versions.png)

1. Select **Previous Versions** to see the list of share snapshots for this directory.
1. Select **Open** to open the snapshot.

    ![Previous Versions tab](./media/storage-files-quick-create-use-windows/snapshot-windows-list.png)

## <a name="restore-from-a-previous-version"></a>Restore from a previous version

1. Select **Restore**. This recursively copies the contents of the entire directory back to the original location it had when the share snapshot was created.

    ![Restore button in the warning message](./media/storage-files-quick-create-use-windows/snapshot-windows-restore.png)

    Note: If your file hasn't changed, you won't see a previous version for it, because that version of the file is identical to the snapshot. This matches the behavior of a Windows file server.

## <a name="clean-up-resources"></a>Clean up resources

[!INCLUDE [storage-files-clean-up-portal](../../../includes/storage-files-clean-up-portal.md)]

## <a name="next-steps"></a>Next steps

> [!div class="nextstepaction"]
> [Use an Azure file share with Windows](storage-how-to-use-files-windows.md)
70.994624
623
0.791291
deu_Latn
0.995832
b9b04cc0da07c5f145fbfb50f82383e26095577a
3,584
md
Markdown
packages/types/README.md
jaywcjlove/province-city-china
7f5ccc45baf6e7758f0259de465ffaaf50d19419
[ "MIT" ]
65
2017-03-17T02:20:16.000Z
2019-11-19T11:30:06.000Z
packages/types/README.md
jaywcjlove/province-city-china
7f5ccc45baf6e7758f0259de465ffaaf50d19419
[ "MIT" ]
3
2018-03-26T07:31:03.000Z
2018-11-18T04:18:35.000Z
packages/types/README.md
jaywcjlove/province-city-china
7f5ccc45baf6e7758f0259de465ffaaf50d19419
[ "MIT" ]
14
2017-07-02T09:37:24.000Z
2019-10-19T13:58:35.000Z
Type definitions

```bash
npm i @province-city-china/types
```

```typescript
/// <reference types="@province-city-china/types" />
```

| Package | Description | Version | Size |
| ---- | ---- | ---- | ---- |
| [province-city-china](https://github.com/uiwjs/province-city-china) | Contains the contents of all packages | [![npm package](https://img.shields.io/npm/v/province-city-china.svg)](https://www.npmjs.com/package/province-city-china) | - |
| [@province-city-china/country](https://github.com/uiwjs/province-city-china/tree/master/packages/country) | List of country and region codes | [![npm package](https://img.shields.io/npm/v/@province-city-china/country.svg)](https://www.npmjs.com/package/@province-city-china/country) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/country) |
| [@province-city-china/data](https://github.com/uiwjs/province-city-china/tree/master/packages/data) | Complete data (province/prefecture/county/township) | [![npm package](https://img.shields.io/npm/v/@province-city-china/data.svg)](https://www.npmjs.com/package/@province-city-china/data) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/data) |
| [@province-city-china/province](https://github.com/uiwjs/province-city-china/tree/master/packages/province) | Province level (provinces/municipalities/special administrative regions) | [![npm package](https://img.shields.io/npm/v/@province-city-china/province.svg)](https://www.npmjs.com/package/@province-city-china/province) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/province) |
| [@province-city-china/city](https://github.com/uiwjs/province-city-china/tree/master/packages/city) | Prefecture level (cities) | [![npm package](https://img.shields.io/npm/v/@province-city-china/city.svg)](https://www.npmjs.com/package/@province-city-china/city) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/city) |
| [@province-city-china/area](https://github.com/uiwjs/province-city-china/tree/master/packages/area) | County level (districts and counties) | [![npm package](https://img.shields.io/npm/v/@province-city-china/area.svg)](https://www.npmjs.com/package/@province-city-china/area) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/area) |
| [@province-city-china/town](https://github.com/uiwjs/province-city-china/tree/master/packages/town) | Township level (townships/streets) | [![npm package](https://img.shields.io/npm/v/@province-city-china/town.svg)](https://www.npmjs.com/package/@province-city-china/town) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/town) |
| [@province-city-china/level](https://github.com/uiwjs/province-city-china/tree/master/packages/level) | Complete data (province/prefecture/county/township) in hierarchical form | [![npm package](https://img.shields.io/npm/v/@province-city-china/level.svg)](https://www.npmjs.com/package/@province-city-china/level) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/level) |
| [@province-city-china/utils](https://github.com/uiwjs/province-city-china/tree/master/packages/utils) | Utility methods for working with the data | [![npm package](https://img.shields.io/npm/v/@province-city-china/utils.svg)](https://www.npmjs.com/package/@province-city-china/utils) | ![](https://img.shields.io/bundlephobia/min/@province-city-china/utils) |
| [@province-city-china/types](https://github.com/uiwjs/province-city-china/tree/master/packages/types) | Type definitions | [![npm package](https://img.shields.io/npm/v/@province-city-china/types.svg)](https://www.npmjs.com/package/@province-city-china/types) | - |
| [@province-city-china/district-code](https://github.com/uiwjs/province-city-china/tree/master/packages/district-code) | Domestic (China) long-distance telephone area codes | [![npm package](https://img.shields.io/npm/v/@province-city-china/district-code.svg)](https://www.npmjs.com/package/@province-city-china/district-code) | - |
155.826087
350
0.724051
yue_Hant
0.558393