The dataset's column schema, reconstructed from the flattened viewer header (one dataset row corresponds to one source file; "⌀" in the original viewer marked nullable columns):

| Column | Type | Range / notes |
| --- | --- | --- |
| hexsha | string | length 40 |
| size | int64 | 5 to 1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3 to 344 |
| max_stars_repo_name | string | length 5 to 125 |
| max_stars_repo_head_hexsha | string | length 40 to 78 |
| max_stars_repo_licenses | sequence | length 1 to 11 |
| max_stars_count | int64 | 1 to 368k, nullable |
| max_stars_repo_stars_event_min_datetime | string | length 24, nullable |
| max_stars_repo_stars_event_max_datetime | string | length 24, nullable |
| max_issues_repo_path | string | length 3 to 344 |
| max_issues_repo_name | string | length 5 to 125 |
| max_issues_repo_head_hexsha | string | length 40 to 78 |
| max_issues_repo_licenses | sequence | length 1 to 11 |
| max_issues_count | int64 | 1 to 116k, nullable |
| max_issues_repo_issues_event_min_datetime | string | length 24, nullable |
| max_issues_repo_issues_event_max_datetime | string | length 24, nullable |
| max_forks_repo_path | string | length 3 to 344 |
| max_forks_repo_name | string | length 5 to 125 |
| max_forks_repo_head_hexsha | string | length 40 to 78 |
| max_forks_repo_licenses | sequence | length 1 to 11 |
| max_forks_count | int64 | 1 to 105k, nullable |
| max_forks_repo_forks_event_min_datetime | string | length 24, nullable |
| max_forks_repo_forks_event_max_datetime | string | length 24, nullable |
| content | string | length 5 to 1.04M |
| avg_line_length | float64 | 1.14 to 851k |
| max_line_length | int64 | 1 to 1.03M |
| alphanum_fraction | float64 | 0 to 1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01 to 1 |

The entries below are the individual files contained in the sampled rows; each is preceded by a short provenance line recovered from the flattened row metadata.
---

**File:** `README.md` · **Repo:** `mengwenlu4617/BRV` · **License:** Apache-2.0 · **Size:** 2,209 bytes

## BRV
<p align="center"><img src="https://i.imgur.com/S0IjjHS.jpg" alt="1600" width="25%"/></p>
<p align="center">
<strong>Possibly the most powerful RecyclerView framework</strong>
<br>
<br>
<a href="http://liangjingkanji.github.io/BRV/">使用文档</a>
<br>
<img src="https://i.imgur.com/G7WYYXb.jpg" width="50%"/>
</p>
<br>
<p align="center">
<a href="https://jitpack.io/#liangjingkanji/BRV"><img src="https://jitpack.io/v/liangjingkanji/BRV.svg"/></a>
<img src="https://img.shields.io/badge/language-kotlin-orange.svg"/>
<img src="https://img.shields.io/badge/license-Apache-blue"/>
<a href="https://jq.qq.com/?_wv=1027&k=vWsXSNBJ"><img src="https://img.shields.io/badge/QQ群-752854893-blue"/></a>
</p>
<p align="center"><img src="https://i.imgur.com/lZXNqXE.jpg" align="center" width="30%;" /></p>
### Features
- Concise code
- Comprehensive functionality
- Non-invasive
- Creates no extra files
- No screen flicker on refresh
- Two-way data binding (DataBinding)
- DSL scopes
- Highly extensible
- Detailed documentation
- Simple demo
### Functionality
- [x] Multiple item types
- [x] One-to-many for a single data model
- [x] Multiple data models
- [x] Add header and footer layouts
- [x] Click (debounced) / long-press events
- [x] Grouping (expand/collapse, recursive hierarchy, pin expanded group to top, single-expand mode)
- [x] Sticky headers
- [x] Dividers / evenly distributed spacing (supports all official `LayoutManager`s)
- [x] Toggle mode
- [x] Selection mode (multi-select / single-select / select all / deselect all / invert selection)
- [x] Drag to reorder
- [x] Swipe to delete
- [x] Pull-to-refresh | load more; extends SmartRefreshLayout, so all of its features remain compatible
- [x] Prefetch index (UpFetch) | preload index (Preload)
- [x] Placeholder (empty-state) pages
- [x] Automatic paged loading
- [x] Flexbox layout ([FlexboxLayoutManager](https://github.com/google/flexbox-layout))
- [x] Extensible automated network requests ([Net](https://github.com/liangjingkanji/Net)), a coroutine-based library for automated concurrent network requests
Add the repository to your project's root `build.gradle`:
```groovy
allprojects {
repositories {
// ...
maven { url 'https://jitpack.io' }
}
}
```
Add the dependency to your module's `build.gradle`:
```groovy
implementation 'com.github.liangjingkanji:BRV:1.3.11'
```
## License
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
---

**File:** `_posts/0000-01-02-etni.md` · **Repo:** `etni/github-slideshow` · **License:** MIT · **Size:** 164 bytes

---
layout: slide
title: "Welcome to our second slide!"
---
I refuse to join any club that would have me as a member.
-Groucho Marx
Use the left arrow to go back!
---

**File:** `docs/ssdt/how-to-use-microsoft-sql-server-2012-objects-in-your-project.md` · **Repo:** `huanwnn/sql-docs.zh-tw` · **Licenses:** CC-BY-4.0, MIT · **Size:** 2,477 bytes

---
title: SQL Server 2012 objects in your project
ms.prod: sql
ms.technology: ssdt
ms.topic: conceptual
ms.assetid: 9baf122f-cf22-4860-98db-ef782cd972fc
author: markingmyname
ms.author: maghan
manager: jroth
ms.reviewer: ""
ms.custom: seo-lt-2019
ms.date: 02/09/2017
ms.openlocfilehash: c9ea326bc37d4843b6cb7e3bc4e21fa356af5435
ms.sourcegitcommit: ff82f3260ff79ed860a7a58f54ff7f0594851e6b
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 03/29/2020
ms.locfileid: "75244246"
---
# <a name="how-to-use-microsoft-sql-server-2012-objects-in-your-project"></a>如何:在專案中使用 Microsoft SQL Server 2012 物件
在這個範例中,您會將一個序列物件加入至以 Microsoft SQL Server 2012 為目標的資料庫專案。
序列在 Microsoft SQL Server 2012 中導入。 序列是使用者定義的結構描述繫結物件,該物件會根據建立順序所使用的規格產生數值序列。 數值序列是在定義的間隔依照遞增或遞減順序來產生,而且可依照要求循環 (重複)。 如需有關序列物件的詳細資訊,請參閱[序號](htttp://msdn.microsoft.com/library/ff878058(SQL.110).aspx)。 如需 Microsoft SQL Server 2012 新功能的資訊,請參閱 [SQL Server 2012 的新功能](https://msdn.microsoft.com/library/bb500435(SQL.110).aspx)。
> [!WARNING]
> 下列程序利用先前在[連接的資料庫開發](../ssdt/connected-database-development.md)和[專案導向的離線資料庫開發](../ssdt/project-oriented-offline-database-development.md)小節中的程序所建立的實體。
### <a name="to-add-a-new-sequence-object-to-your-project"></a>若要將新的序列物件加入至專案
1. 以滑鼠右鍵按一下 [方案總管] 中的 [TradeDev] 資料庫專案,再依序選取 [加入] 和 [新增項目]。
2. 按一下左窗格中的 [可程式性] ,然後選取 [序列] 。 按一下 [加入] 將新的物件加入至專案 。
3. 使用下列程式碼來取代預設程式碼。
```
CREATE SEQUENCE [dbo].[Seq1]
AS INT
START WITH 1
INCREMENT BY 1
MAXVALUE 1000
NO CYCLE
CACHE 10
```
4. If the project's target platform is not set to Microsoft SQL Server 2012, the **Error List** shows a syntax error for the `CREATE SEQUENCE` statement. To fix this, follow [How to: Change Target Platform and Publish a Database Project](../ssdt/how-to-change-target-platform-and-publish-a-database-project.md) and change the target platform accordingly.

5. Follow [How to: Change Target Platform and Publish a Database Project](../ssdt/how-to-change-target-platform-and-publish-a-database-project.md) to publish the project to a database on a connected Microsoft SQL Server 2012 server.

### <a name="to-use-the-new-sequence-object"></a>To use the new sequence object

1. In **SQL Server Object Explorer**, right-click the target database you published in the previous procedure and select **New Query**.

2. Paste the following code into the query window.
```
DECLARE @counter INT
SET @counter=0
WHILE @counter<10
BEGIN
SET @counter = @counter +1
INSERT dbo.Products (Id, Name, CustomerId) VALUES (NEXT VALUE FOR dbo.Seq1, 'ProductItem'+cast(@counter as varchar), 1)
END
GO
```
3. Click the **Execute Query** button.

4. In **SQL Server Object Explorer**, navigate to the database's **Products** table. Right-click it and select **View Data** to inspect the newly added rows.
---

**File:** `README.md` · **Repo:** `SweetOnion/KCFinderBundle` · **License:** MIT · **Size:** 1,537 bytes

# README
The bundle provides [KCFinder](http://kcfinder.sunhater.com/) integration with [CKEditor](http://ckeditor.com/) for your Symfony project.
## Documentation
### Installation
This package requires [KCFinder](http://kcfinder.sunhater.com/), but Composer cannot resolve it from this bundle's own `require` section, so add it manually to your `composer.json` file:
``` php
// composer.json
"sunhater/kcfinder": "dev-master"
```
Require the bundle in your composer.json file:
```
$ composer require ikadoc/kcfinder-bundle --no-update
```
Register the bundle:
``` php
// app/AppKernel.php
public function registerBundles()
{
return array(
new Ikadoc\KCFinderBundle\IkadocKCFinderBundle(),
// ...
);
}
```
Install the bundle:
```
$ composer update ikadoc/kcfinder-bundle
```
Add routing:
```
// app/config/routing.yml
kcfinder:
resource: "@IkadocKCFinderBundle/Resources/config/routing.yml"
prefix: /admin
```
### Configuration
The bundle allows you to change `base_path` to point at the KCFinder folder, and you can define as many config options as you want. The full list of config options is available
[here](http://kcfinder.sunhater.com/install).
``` yaml
ikadoc_kc_finder:
base_path : "%kernel.root_dir%/../vendor/sunhater/kcfinder"
config:
disabled : false
uploadURL: "/uploads/"
uploadDir: "%kernel.root_dir%/../web/uploads/"
```
## License
The Ikadoc KCFinder Bundle is under the MIT license. For the full copyright and license information, please read the
[LICENSE](/LICENSE) file that was distributed with this source code.
---

**File:** `signatory-ledger-tm/README.md` · **Repo:** `baajur/signatory` · **Licenses:** Apache-2.0, MIT · **Size:** 1,039 bytes

# signatory-ledger-tm [![crate][crate-image]][crate-link] [![Docs][docs-image]][docs-link] [![Build Status][build-image]][build-link] ![MIT/Apache2 licensed][license-image]
[Signatory] provider for Ledger Tendermint Validator app.
[Documentation](https://docs.rs/signatory/)
[Signatory]: https://github.com/iqlusioninc/signatory
## License
**Signatory** is distributed under the terms of either the MIT license or the
Apache License (Version 2.0), at your option.
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
[crate-image]: https://img.shields.io/crates/v/signatory-ledger-tm.svg
[crate-link]: https://crates.io/crates/signatory-ledger-tm
[docs-image]: https://docs.rs/signatory-ledger-tm/badge.svg
[docs-link]: https://docs.rs/signatory-ledger-tm/
[build-image]: https://github.com/iqlusioninc/signatory/workflows/Rust/badge.svg?branch=develop&event=push
[build-link]: https://github.com/iqlusioninc/signatory/actions
[license-image]: https://img.shields.io/badge/license-MIT/Apache2.0-blue.svg
---

**File:** `_posts/2007-6-6-2007-06-06.md` · **Repo:** `gt-2003-2017/gt-2003-2017.github.io` · **License:** MIT · **Size:** 2,095 bytes

1 Samuel 14:13 - 14:23
June 6, 2007 (Wednesday)

The Faith of Jonathan and the Faith of Saul

1 Samuel 14:13 - 14:23 / New Hymnal No. 389

Observation questions

What happens in the Philistine camp when Jonathan appears?
How does Saul act when he sees the Philistine camp?
13 Jonathan climbed up, using his hands and feet, with his armor-bearer behind him; the Philistines fell before Jonathan, and his armor-bearer followed and killed behind him. 14 In that first attack Jonathan and his armor-bearer struck down about twenty men within an area of about half a day's plowing. 15 The camp in the field and all the people were seized with terror; the garrison and the raiders also trembled, and the earth quaked, so that it was a very great panic. 16 Saul's watchmen at Gibeah of Benjamin looked, and behold, the multitude of Philistines melted away and scattered in every direction. 17 Then Saul said to the people who were with him, "Call the roll and see who has gone from us." When they had called the roll, behold, Jonathan and his armor-bearer were missing. 18 Saul said to Ahijah, "Bring the ark of God here," for at that time the ark of God was with the people of Israel. 19 While Saul was talking to the priest, the tumult in the camp of the Philistines kept increasing, so Saul said to the priest, "Withdraw your hand." 20 Then Saul and all the people who were with him rallied and went to the battle, and behold, every Philistine's sword was against his fellow, and there was very great confusion. 21 The Hebrews who had previously been with the Philistines and had gone up with them into the camp turned and joined the Israelites who were with Saul and Jonathan. 22 Likewise, when all the men of Israel who had hidden themselves in the hill country of Ephraim heard that the Philistines were fleeing, they came out and pursued them in the battle. 23 So the LORD saved Israel that day, and the battle passed beyond Beth Aven.
15 Then panic struck the whole army-those in the camp and field, and those in the outposts and raiding parties-and the ground shook. It was a panic sent by God.
Interpretation help

The Philistines trembling in fear

Jonathan, who had climbed into the Philistine camp, began fighting the Philistine soldiers within a narrow area that a yoke of oxen could plow in half a day. In this fight Jonathan and his armor-bearer killed about twenty Philistines (v. 14). Then the troops in the camp, in the field, in the outposts, and in the raiding parties were all gripped by fear and trembled. On top of that, God shook the very ground, throwing them into even greater terror (v. 15). Because of this the Philistine soldiers lost their senses and turned their swords on one another, and the Hebrews who had been serving in the Philistine army began to defect to Israel (v. 21). Not only that, many Israelite soldiers who had been hiding in the hill country heard the news and rushed out to the battlefield (v. 22). Jonathan's daring faith produced a great miracle. When we begin a spiritual battle in faith, our adversaries are bound to tremble, because we who have received the victorious Jesus as Lord are people the world cannot overcome, and God himself helps us with great power and majesty.

Saul, who has no time to pray

When Saul saw the Philistine camp in an uproar and the soldiers scattering in all directions, he finally realized that Jonathan and his armor-bearer were missing (v. 17). So Saul had the priest Ahijah bring the ark of God to ask God's will about the battle (v. 18). But when he saw the Philistine camp growing more and more chaotic, Saul made the priest stop inquiring of God and threw himself straight into the battle (v. 19). Acting this way relegated God to second place and made prayer an afterthought. If Saul had taken the time to properly seek God's will for this battle, he would not have committed the foolish mistake that appears in the passage that follows, and he could also have won a complete victory over the Philistines. Believers must pray all the more when they are busy. Time spent seeking God's will is not a detour; it is the quickest way forward.
4751666f29c4c326e2dc428acd3e92ecb33ea29c | 2,082 | md | Markdown | windows.networking.sockets/controlchanneltrigger_controlchanneltrigger_2092736614.md | tossnet/winrt-api | 42ebcbca22e88f3ed0f273629d7cdf290dfe2133 | [
"CC-BY-4.0",
"MIT"
] | 199 | 2017-02-09T23:13:51.000Z | 2022-03-28T15:56:12.000Z | windows.networking.sockets/controlchanneltrigger_controlchanneltrigger_2092736614.md | tossnet/winrt-api | 42ebcbca22e88f3ed0f273629d7cdf290dfe2133 | [
"CC-BY-4.0",
"MIT"
] | 2,093 | 2017-02-09T21:52:45.000Z | 2022-03-25T22:23:18.000Z | windows.networking.sockets/controlchanneltrigger_controlchanneltrigger_2092736614.md | tossnet/winrt-api | 42ebcbca22e88f3ed0f273629d7cdf290dfe2133 | [
"CC-BY-4.0",
"MIT"
] | 620 | 2017-02-08T19:19:44.000Z | 2022-03-29T11:38:25.000Z | ---
-api-id: M:Windows.Networking.Sockets.ControlChannelTrigger.#ctor(System.String,System.UInt32,Windows.Networking.Sockets.ControlChannelTriggerResourceType)
-api-type: winrt method
---
<!-- Method syntax
public ControlChannelTrigger(System.String channelId, System.UInt32 serverKeepAliveIntervalInMinutes, Windows.Networking.Sockets.ControlChannelTriggerResourceType resourceRequestType)
-->
# Windows.Networking.Sockets.ControlChannelTrigger.ControlChannelTrigger
## -description
Creates a new [ControlChannelTrigger](controlchanneltrigger.md) object with a control channel trigger ID, a value for the server keep-alive interval, and the resource type requested for the control channel trigger.
> [!NOTE]
> The ControlChannelTrigger class is not supported on Windows Phone.
## -parameters
### -param channelId
A string used to differentiate various control channel triggers on the local computer. The maximum length allowed for this string is 64 characters.
### -param serverKeepAliveIntervalInMinutes
The keep-alive interval, in minutes, registered with the system to indicate when the app and network connections used should wake up.
The minimum value that can be set for this parameter is 15 minutes. The maximum value that can be set is 1439 minutes (approximately 24 hours).
### -param resourceRequestType
The resource type requested for the control channel trigger.
## -remarks
The [ControlChannelTrigger(String, UInt32, ControlChannelTriggerResourceType) constructor allows an app to create a [ControlChannelTrigger](controlchanneltrigger.md) object with a specific the resource type requested for the control channel trigger. If an app needs a hardware slot to support connected standby, then the *resourceRequestType* should be set to **RequestHardwareSlot**.
## -examples
## -see-also
[How to set background connectivity options](/previous-versions/windows/apps/hh771189(v=win.10)), [ControlChannelTrigger(String, UInt32)](controlchanneltrigger_controlchanneltrigger_381309182.md), [ControlChannelTriggerResourceType](controlchanneltriggerresourcetype.md)
| 56.27027 | 384 | 0.821326 | eng_Latn | 0.877589 |
4751dc39755e68f89b6184c6af3e8b26ce33ce85 | 9,425 | md | Markdown | node_modules/shaka-player/docs/tutorials/upgrade-v2-2.md | CocoaZX/RNVideo | 4989b82ec17d83a3499d1a941314cb79fbdf7877 | [
"MIT"
] | 1 | 2019-12-11T02:29:22.000Z | 2019-12-11T02:29:22.000Z | node_modules/shaka-player/docs/tutorials/upgrade-v2-2.md | fanlilinSaber/MSTRnLoader | 500ef138cb5ab5b511fa7182e3017dd74b0ae42e | [
"MIT"
] | null | null | null | node_modules/shaka-player/docs/tutorials/upgrade-v2-2.md | fanlilinSaber/MSTRnLoader | 500ef138cb5ab5b511fa7182e3017dd74b0ae42e | [
"MIT"
] | null | null | null | # Shaka Upgrade Guide, v2.2 => v2.4
This is a detailed guide for upgrading from Shaka Player v2.2 to v2.4.
Feel free to skim or to search for the class and method names you are using in
your application.
#### What's New in v2.4?
Shaka v2.4 introduces several improvements over v2.2, including:
- Support for HLS live streams
- Support for HLS VOD streams that do not start at t=0
- MPEG-2 TS content can be transmuxed to MP4 for playback on all browsers
- Captions are not streamed until they are shown
- Use NetworkInformation API to get initial bandwidth estimate
- The demo app is now a Progressive Web App (PWA) and can be used offline
- Support for CEA captions in TS content
- Support for TTML and VTT regions
- A video element is no longer required when `Player` is constructed
- New `attach()` and `detach()` methods have been added to `Player` to manage
attachment to video elements
- Fetch is now preferred over XHR when available
- Network requests are now abortable
- Live stream playback can begin at a negative offset from the live edge
#### HLS start time configuration
For VOD HLS content which does not start at t=0, v2.2 had a configuration called
`manifest.hls.defaultTimeOffset` which applications could use to inform us of
the correct start time for content.
This has been removed in v2.4. The start time of HLS content can now be
automatically extracted from the segments themselves. No configuration is
necessary.
#### Text parser API changes
The text-parsing plugin API has changed. Plugins now take a `Uint8Array` instead
of an `ArrayBuffer` like in v2.2. This allowed us to optimize and prevent buffer
copies.
```js
// v2.2
/**
* @param {!ArrayBuffer} data
* @param {shakaExtern.TextParser.TimeContext} timeContext
* @return {!Array.<!shaka.text.Cue>}
*/
MyTextParser.prototype.parseMedia = function(data, timeContext) {};
// v2.4
/**
* @param {!Uint8Array} data
* @param {shakaExtern.TextParser.TimeContext} timeContext
* @return {!Array.<!shaka.text.Cue>}
*/
MyTextParser.prototype.parseMedia = function(data, timeContext) {};
```
Additionally, the `segmentStart` attribute of `timeContext` is now
nullable. Your plugin will receive a `segmentStart` of `null` if the
information is not available, as is the case in HLS.
Text-parsing plugins that produce region information will need to be updated to
use the new `shaka.text.CueRegion` class. The new structure allows for more
accurate representation of both TTML and VTT regions.
All application-specific text-parsing plugins MUST be updated.
v2.4 does not have backward compatibility for this!
See {@link shakaExtern.TextParser.prototype.parseInit} and
{@link shakaExtern.TextParser.prototype.parseMedia} for details.
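As a rough migration sketch (not from the Shaka docs), a v2.2 plugin that still needs an `ArrayBuffer` internally can copy the bytes out of the `Uint8Array`, and should now guard against a `null` `segmentStart`; the fallback value of `0` here is an assumption for illustration:

```js
MyTextParser.prototype.parseMedia = function(data, timeContext) {
  // v2.4 passes a Uint8Array; copy out the bytes if legacy code
  // expects an ArrayBuffer (note this costs one buffer copy).
  var buffer = data.buffer.slice(
      data.byteOffset, data.byteOffset + data.byteLength);

  // segmentStart can now be null (e.g. in HLS).
  var segmentStart = (timeContext.segmentStart == null) ?
      0 : timeContext.segmentStart;

  // ... parse `buffer` and return an Array of shaka.text.Cue ...
  return [];
};
```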
#### Text displayer plugin API changes
The TextDisplayer plugin API has changed.
TextDisplayer plugins need to handle changes to the `shakaExtern.CueRegion`
structure. The new structure allows for more accurate representation of both
TTML and VTT regions.
All application-specific TextDisplayer plugins MUST be updated.
v2.4 does not have backward compatibility for this!
See {@link shakaExtern.CueRegion} for details.
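As an illustration only, a custom displayer's cue-handling path might read the region attached to each cue; this sketch assumes cues expose their region via a `region` field, and `positionOverlay()` and `renderCue()` are hypothetical helpers:

```js
MyTextDisplayer.prototype.append = function(cues) {
  cues.forEach(function(cue) {
    var region = cue.region;  // a shakaExtern.CueRegion, when present
    if (region) {
      // The new structure carries the region's anchor and extent,
      // which map naturally onto a positioned overlay element.
      positionOverlay(region);
    }
    renderCue(cue);
  });
};
```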
#### Offline storage API changes
In v2.2, the `remove()` method on `shaka.offline.Storage` took an instance of
`StoredContent` as an argument. Now, in v2.4, it takes the `offlineUri` field
from `StoredContent` as an argument.
All applications which use offline storage MUST update to the new API.
The old argument was deprecated in v2.3 and has been removed in v2.4.
```js
// v2.2:
storage.list().then(function(storedContentList) {
var someContent = storedContentList[someIndex];
storage.remove(someContent);
});
// v2.4:
storage.list().then(function(storedContentList) {
var someContent = storedContentList[someIndex];
storage.remove(someContent.offlineUri);
});
```
#### Retry after streaming failure
In v2.1.3, we introduced a new config called
`streaming.infiniteRetriesForLiveStreams` to control retry behavior for live
streams. In v2.2, we added a more flexible callback mechanism to specify retry
behavior for all kinds of streams.
```js
// v2.1
player.configure({
streaming: {
infiniteRetriesForLiveStreams: true // the default
}
});
// v2.4
player.configure({
streaming: {
failureCallback: function(error) {
// Always retry live streams:
if (player.isLive()) player.retryStreaming();
}
}
});
// v2.1
player.configure({
streaming: {
infiniteRetriesForLiveStreams: false // do not retry live
}
});
// v2.4
player.configure({
streaming: {
failureCallback: function(error) {
// Do nothing, and we will stop trying to stream the content.
}
}
});
```
The `streaming.infiniteRetriesForLiveStreams` config was deprecated in v2.2 and
removed in v2.3.
The `player.retryStreaming()` method can be used to retry after a failure.
You can base the decision on `player.isLive()`, `error.code`, or anything else.
Because you can call `retryStreaming()` at any time, you can also delay the
decision until you get feedback from the user, the browser is back online, etc.
A few more examples of possible failure callbacks:
```js
function neverRetryCallback(error) {}
function alwaysRetryCallback(error) {
player.retryStreaming();
}
function retryOnSpecificHttpErrorsCallback(error) {
if (error.code == shaka.util.Error.Code.BAD_HTTP_STATUS) {
var statusCode = error.data[1];
var retryCodes = [ 502, 503, 504, 520 ];
if (retryCodes.indexOf(statusCode) >= 0) {
player.retryStreaming();
}
}
}
```
If you choose to react to `error` events instead of the failure callback, you
can use `event.preventDefault()` to avoid the callback completely:
```js
player.addEventListener('error', function(event) {
// Custom logic for error events
if (player.isLive() &&
event.error.code == shaka.util.Error.Code.BAD_HTTP_STATUS) {
player.retryStreaming();
}
// Do not invoke the failure callback for this event
event.preventDefault();
});
```
#### Language and role selection
In addition to the language methods introduced in v2.1, v2.4 adds additional
methods for dealing with roles: `getAudioLanguagesAndRoles()` and
`getTextLanguagesAndRoles()`. These return language/role combinations in an
object. You can specify a role in an optional second argument to the language
selection methods.
```js
// v2.4:
var languagesAndRoles = player.getAudioLanguagesAndRoles();
for (var i = 0; i < languagesAndRoles.length; ++i) {
var combo = languagesAndRoles[i];
if (someSelector(combo)) {
player.selectAudioLanguage(combo.language, combo.role);
break;
}
}
```
#### NetworkingEngine API changes
In v2.2, the `request()` method on `shaka.net.NetworkingEngine` returned a
Promise. Now, in v2.4, it returns an instance of
`shakaExtern.IAbortableOperation`, which contains a Promise.
All applications which make application-level requests via `NetworkingEngine`
SHOULD update to the new API. Support for the old API will be removed in v2.5.
```js
// v2.2:
player.getNetworkingEngine().request(type, request).then((response) => {
// ...
});
// v2.4:
let operation = player.getNetworkingEngine().request(type, request);
// Use operation.promise to get the response.
operation.promise.then((response) => {
// ...
});
// The operation can also be aborted on some condition.
onSomeOtherCondition(() => {
operation.abort();
});
```
Backward compatibility is provided in the v2.4 releases by adding `.then` and
`.catch` methods to the return value from `request()`.
#### Network scheme plugin API changes
In v2.4, we changed the API for network scheme plugins.
These plugins now return an instance of `shakaExtern.IAbortableOperation`.
We suggest using the utility `shaka.util.AbortableOperation` for convenience.
We also introduced an additional parameter for network scheme plugins to
identify the request type.
All applications which have application-level network scheme plugins SHOULD
update to the new API. Support for the old API will be removed in v2.5.
```js
// v2.2
function fooPlugin(uri, request) {
return new Promise((resolve, reject) => {
// ...
});
}
shaka.net.NetworkingEngine.registerScheme('foo', fooPlugin);
// v2.4
function fooPlugin(uri, request, requestType) {
let rejectCallback = null;
const promise = new Promise((resolve, reject) => {
rejectCallback = reject;
// Use this if you have a need for it. Ignore it otherwise.
if (requestType == shaka.net.NetworkingEngine.RequestType.MANIFEST) {
// ...
} else {
// ...
}
// ...
});
const abort = () => {
// Abort the operation.
// ...
// Reject the Promise.
rejectCallback(new shaka.util.Error(
shaka.util.Error.Severity.RECOVERABLE,
shaka.util.Error.Category.NETWORK,
shaka.util.Error.Code.OPERATION_ABORTED));
};
return new shaka.util.AbortableOperation(promise, abort);
}
shaka.net.NetworkingEngine.registerScheme('foo', fooPlugin);
```
#### Manifest parser plugin API changes
The API for `shaka.media.PresentationTimeline` has changed. `ManifestParser`
plugins that use these methods MUST be updated:
- `setAvailabilityStart()` was renamed to `setUserSeekStart()`.
- `notifySegments()` now takes a reference array and a boolean called
`isFirstPeriod`, instead of a period start time and a reference array.
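Based on the renames above, the upgrade is mechanical; in this minimal before/after sketch, `timeline`, `availabilityStartTime`, `periodStartTime`, and `references` are illustrative names:

```js
// v2.2
timeline.setAvailabilityStart(availabilityStartTime);
timeline.notifySegments(periodStartTime, references);

// v2.4
timeline.setUserSeekStart(availabilityStartTime);
timeline.notifySegments(references, /* isFirstPeriod */ true);
```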
---

**File:** `README.md` · **Repo:** `WeepingRaven/Magnifie` · **License:** MIT · **Size:** 36 bytes

# Magnifie
Image detail magnifier.
---

**File:** `contribute.md` · **Repo:** `GBKS/Guide` · **License:** MIT · **Size:** 2,367 bytes

---
layout: page
title: Contribute
permalink: /contribute/
main_nav: true
nav_order: 5
image: https://pedromvpg.github.io/bitcoin-design-guide/assets/bitcoin-design-custom-og.png
projets:
- name: CoinSwap
site: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html
description: CoinSwap is a way of trading one coin for another coin in a non-custodial way. It is closely related to the idea of an atomic swap.
image: /assets/images/contribute/coinswap.png
- name: photon-lib
site: https://github.com/photon-sdk/photon-lib
description: A high level library for building bitcoin wallets with react native.
image: /assets/images/contribute/photon-lib.png
- name: BTCPay Server
site: https://btcpayserver.org/
description: BTCPay Server is proudly free and open-source, built and maintained by a world-wide community of passionate contributors.
image: /assets/images/contribute/btcpay.png
- name: Bisq
site: https://bisq.network/
description: Bisq is an open-source, peer-to-peer application that allows you to buy and sell cryptocurrencies in exchange for national currencies. No registration required.
image: /assets/images/contribute/bisq.png
- name: Electrum
site: https://electrum.org/
description: Electrum was created by Thomas Voegtlin in November 2011. Since then, various developers have contributed to its source code.
image: /assets/images/contribute/electrum.png
- name: Samurai
site: https://samouraiwallet.com/
description: A modern bitcoin wallet hand forged to keep your transactions private your identity masked and your funds secured.
image: /assets/images/contribute/samurai.png
- name: Wasabi
site: https://wasabiwallet.io/
description: Wasabi is an open-source, non-custodial, privacy-focused Bitcoin wallet for Desktop, that implements trustless CoinJoin.
image: /assets/images/contribute/wasabi.png
---
Help build this guide, find an exciting project to help scale, or join the community and start something new.
<div class="grid projects">
{% for item in page.projets %}
<div class="grid-item">
<img src="{{ item.image | relative_url }}" />
<h3>{{- item.name -}}</h3>
{{- item.description -}}
<a href="{{- item.site -}}" target="_blank">Website</a>
</div>
{% endfor %}
</div>
| 43.833333 | 177 | 0.726236 | eng_Latn | 0.858467 |
47528ae59834be41be4768c3ad5d95609e3f8c62 | 5,150 | md | Markdown | articles/data-factory/v1/data-factory-map-columns.md | ebibibi/azure-docs.ja-jp | ef49e9766dd3dc5a825ea3b19b9c526e2c37faba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/v1/data-factory-map-columns.md | ebibibi/azure-docs.ja-jp | ef49e9766dd3dc5a825ea3b19b9c526e2c37faba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/v1/data-factory-map-columns.md | ebibibi/azure-docs.ja-jp | ef49e9766dd3dc5a825ea3b19b9c526e2c37faba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Azure Data Factory のデータセット列のマッピング | Microsoft Docs"
description: "ソース列を変換先列にマップする方法について説明します。"
services: data-factory
documentationcenter:
author: linda33wj
manager: jhubbard
editor: monicar
ms.service: data-factory
ms.workload: data-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 01/10/2018
ms.author: jingwang
robots: noindex
ms.openlocfilehash: b2cb55bc0c3888fa05b2dd49df10ec157c8bf95e
ms.sourcegitcommit: 9cc3d9b9c36e4c973dd9c9028361af1ec5d29910
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 01/23/2018
---
# <a name="map-source-dataset-columns-to-destination-dataset-columns"></a>ソース データセット列を変換先のデータセット列にマップする
> [!NOTE]
> この記事は、一般公開 (GA) されている Data Factory のバージョン 1 に適用されます。
列マッピングは、ソース テーブル マップの “structure” 列を、シンク テーブルの “structure” 列に対し指定するために使用できます。 **columnMapping** プロパティは、コピー アクティビティの **typeProperties** セクションにあります。
列マッピングは、次のシナリオをサポートしています。
* ソース データセット構造のすべての列が、シンク データセット構造のすべての列にマップされる。
* ソース データセット構造の列のサブセットが、シンク データセット構造のすべての列にマップされる。
次のエラー状態では例外が発生します。
* シンク テーブルの “structure” の列数が、マッピングの指定数より多いか少ない。
* 重複したマッピング。
* SQL クエリの結果に、マッピングで指定されている列名がない。
> [!NOTE]
> 以下の例は Azure SQL と Azure BLOB 向けですが、四角形のデータセットをサポートする任意のデータ ストアにも適用できます。 例で適切なデータ ソース内のデータを示すには、データセットとリンクされているサービスの定義を調整します。
## <a name="sample-1--column-mapping-from-azure-sql-to-azure-blob"></a>例 1 – Azure SQL から Azure BLOB への列マッピング
この例の入力テーブルには構造体が 1 つあり、Azure SQL データベース内の SQL テーブルをポイントしています。
```json
{
"name": "AzureSQLInput",
"properties": {
"structure":
[
{ "name": "userid"},
{ "name": "name"},
{ "name": "group"}
],
"type": "AzureSqlTable",
"linkedServiceName": "AzureSqlLinkedService",
"typeProperties": {
"tableName": "MyTable"
},
"availability": {
"frequency": "Hour",
"interval": 1
},
"external": true,
"policy": {
"externalData": {
"retryInterval": "00:01:00",
"retryTimeout": "00:10:00",
"maximumRetry": 3
}
}
}
}
```
この例の出力テーブルには構造体が 1 つあり、Azure BLOB ストレージ内の BLOB をポイントしています。
```json
{
"name": "AzureBlobOutput",
"properties":
{
"structure":
[
{ "name": "myuserid"},
{ "name": "myname" },
{ "name": "mygroup"}
],
"type": "AzureBlob",
"linkedServiceName": "StorageLinkedService",
"typeProperties": {
"folderPath": "mycontainer/myfolder",
"fileName":"myfile.csv",
"format":
{
"type": "TextFormat",
"columnDelimiter": ","
}
},
"availability":
{
"frequency": "Hour",
"interval": 1
}
}
}
```
次の JSON は、パイプラインのコピー アクティビティを定義します。 ソースの列は、**Translator** プロパティを使用して、シンク (**columnMappings**) の列にマップされます。
```json
{
"name": "CopyActivity",
"description": "description",
"type": "Copy",
"inputs": [ { "name": "AzureSQLInput" } ],
"outputs": [ { "name": "AzureBlobOutput" } ],
"typeProperties": {
"source":
{
"type": "SqlSource"
},
"sink":
{
"type": "BlobSink"
},
"translator":
{
"type": "TabularTranslator",
"ColumnMappings": "UserId: MyUserId, Group: MyGroup, Name: MyName"
}
},
"scheduler": {
"frequency": "Hour",
"interval": 1
}
}
```
**列マッピングのフロー:**

## <a name="sample-2--column-mapping-with-sql-query-from-azure-sql-to-azure-blob"></a>例 2 – SQL クエリを使用した Azure SQL から Azure BLOB への列マッピング
この例では、“structure” セクションでテーブル名と列名を単純に指定する代わりに、SQL クエリを Azure SQL からデータを抽出するために使用します。
```json
{
"name": "CopyActivity",
"description": "description",
"type": "CopyActivity",
"inputs": [ { "name": " AzureSQLInput" } ],
"outputs": [ { "name": " AzureBlobOutput" } ],
"typeProperties":
{
"source":
{
"type": "SqlSource",
"SqlReaderQuery": "$$Text.Format('SELECT * FROM MyTable WHERE StartDateTime = \\'{0:yyyyMMdd-HH}\\'', WindowStart)"
},
"sink":
{
"type": "BlobSink"
},
"Translator":
{
"type": "TabularTranslator",
"ColumnMappings": "UserId: MyUserId, Group: MyGroup,Name: MyName"
}
},
"scheduler": {
"frequency": "Hour",
"interval": 1
}
}
```
この場合、クエリの結果は、ソースの “structure” で指定された列に最初にマップされます。 次に、“structure” ソースからの列が、columnMappings で指定した規則によって、シンク"structure"内の列にマップされます。 クエリは 5 つの列、すなわち、ソースの “structure” で指定されている列よりも 2 つ多い列を返しています。
**列マッピングのフロー**

## <a name="next-steps"></a>次の手順
コピー アクティビティの使用に関するチュートリアルは、次の記事をご覧ください。
- [Blob Storage から SQL データベースにデータをコピーする](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
| 27.540107 | 189 | 0.587379 | yue_Hant | 0.68492 |
475326f55fe785688462a546bbb85168f0780c49 | 430 | md | Markdown | .github/ISSUE_TEMPLATE/feature-request.md | Bonifacio2/flask-wtf | 7fc4a618ecc7880bcb7b03f69fe58d340be986c7 | [
"BSD-3-Clause"
] | 1,064 | 2015-01-08T09:39:47.000Z | 2021-05-25T09:56:13.000Z | .github/ISSUE_TEMPLATE/feature-request.md | Bonifacio2/flask-wtf | 7fc4a618ecc7880bcb7b03f69fe58d340be986c7 | [
"BSD-3-Clause"
] | 257 | 2015-01-18T12:41:06.000Z | 2021-05-24T17:35:04.000Z | .github/ISSUE_TEMPLATE/feature-request.md | Bonifacio2/flask-wtf | 7fc4a618ecc7880bcb7b03f69fe58d340be986c7 | [
"BSD-3-Clause"
] | 299 | 2015-01-20T20:14:17.000Z | 2021-05-17T10:37:00.000Z | ---
name: Feature request
about: Suggest a new feature for Flask-WTF
---
<!--
Replace this comment with a description of what the feature should do.
Include details such as links to relevant specs or previous discussions.
-->
<!--
Replace this comment with an example of the problem which this feature
would resolve. Is this problem solvable without changes to Flask-WTF,
such as by subclassing or using existing utilities?
-->
| 26.875 | 72 | 0.767442 | eng_Latn | 0.999929 |
4753393a25c7f3be2dcbc38e37541ade0a942d80 | 721 | md | Markdown | README.md | mdebarros/bulk-api-adapter | 9922cda60dd775f9e75604cc84ea5124fb1abda3 | [
"Apache-2.0"
] | null | null | null | README.md | mdebarros/bulk-api-adapter | 9922cda60dd775f9e75604cc84ea5124fb1abda3 | [
"Apache-2.0"
] | null | null | null | README.md | mdebarros/bulk-api-adapter | 9922cda60dd775f9e75604cc84ea5124fb1abda3 | [
"Apache-2.0"
] | null | null | null | # bulk-api-adapter
[](https://github.com/mojaloop/bulk-api-adapter/commits/master)
[](https://github.com/mojaloop/bulk-api-adapter/releases)
[](https://hub.docker.com/r/mojaloop/bulk-api-adapter)
[](https://circleci.com/gh/mojaloop/bulk-api-adapter)
Bulk Transfers API and notifications
## API Definition
Swagger API [location](./src/interface/swagger.yaml)
| 65.545455 | 160 | 0.771151 | yue_Hant | 0.476484 |
475347eb34ff03daad68ce7115e4d0667e56457c | 15,825 | md | Markdown | docs/content/components/markdown.md | Muflhi01/css | f544ef8574a4d6e12cccf94d534ad66df3e9249a | [
"MIT"
] | 7,431 | 2015-03-23T19:10:53.000Z | 2019-02-08T08:05:59.000Z | docs/content/components/markdown.md | Muflhi01/css | f544ef8574a4d6e12cccf94d534ad66df3e9249a | [
"MIT"
] | 382 | 2015-03-23T19:52:09.000Z | 2019-02-05T21:02:31.000Z | docs/content/components/markdown.md | Muflhi01/css | f544ef8574a4d6e12cccf94d534ad66df3e9249a | [
"MIT"
] | 742 | 2015-03-23T19:36:27.000Z | 2019-02-07T23:17:10.000Z | ---
title: Markdown
path: components/markdown
status: Stable
source: 'https://github.com/primer/css/tree/main/src/markdown'
bundle: markdown
---
```html live
<div class="markdown-body">
<p>Text can be <b>bold</b>, <i>italic</i>, or <s>strikethrough</s>. <a href="https://github.com">Links </a> should be blue with no underlines (unless hovered over).</p>
<p>There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.</p>
<p>There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.</p>
<blockquote>
<p>There should be no margin above this first sentence.</p>
<p>Blockquotes should be a lighter gray with a gray border along the left side.</p>
<p>There should be no margin below this final sentence.</p>
</blockquote>
<h1>Header 1</h1>
<p>This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.</p>
<h2>Header 2</h2>
<blockquote>This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.</blockquote>
<h3>Header 3</h3>
<pre><code>This is a code block following a header.</code></pre>
<h4>Header 4</h4>
<ul>
<li>This is an unordered list following a header.</li>
<li>This is an unordered list following a header.</li>
<li>This is an unordered list following a header.</li>
</ul>
<h5>Header 5</h5>
<ol>
<li>This is an ordered list following a header.</li>
<li>This is an ordered list following a header.</li>
<li>This is an ordered list following a header.</li>
</ol>
<h6>Header 6</h6>
<table>
<thead>
<tr>
<th>What</th>
<th>Follows</th>
</tr>
</thead>
<tbody>
<tr>
<td>A table</td>
<td>A header</td>
</tr>
<tr>
<td>A table</td>
<td>A header</td>
</tr>
<tr>
<td>A table</td>
<td>A header</td>
</tr>
</tbody>
</table>
<hr />
<p>There's a horizontal rule above and below this.</p>
<hr />
<p>Here is an unordered list:</p>
<ul>
<li>Salt-n-Pepa</li>
<li>Bel Biv DeVoe</li>
<li>Kid 'N Play</li>
</ul>
<p>And an ordered list:</p>
<ol>
<li>Michael Jackson</li>
<li>Michael Bolton</li>
<li>Michael Bublé</li>
</ol>
<p>And an unordered task list:</p>
<ul>
<li><input type="checkbox" checked> Create a sample markdown document</li>
<li><input type="checkbox"> Add task lists to it</li>
<li><input type="checkbox"> Take a vacation</li>
</ul>
<p>And a "mixed" task list:</p>
<ul>
<li><input type="checkbox"> Steal underpants</li>
<li>?</li>
<li><input type="checkbox"> Profit!</li>
</ul>
And a nested list:
<ul>
<li>Jackson 5
<ul>
<li>Michael</li>
<li>Tito</li>
<li>Jackie</li>
<li>Marlon</li>
<li>Jermaine</li>
</ul>
</li>
<li>TMNT
<ul>
<li>Leonardo</li>
<li>Michelangelo</li>
<li>Donatello</li>
<li>Raphael</li>
</ul>
</li>
</ul>
<p>Definition lists can be used with HTML syntax. Definition terms are bold and italic.</p>
<dl>
<dt>Name</dt>
<dd>Godzilla</dd>
<dt>Born</dt>
<dd>1952</dd>
<dt>Birthplace</dt>
<dd>Japan</dd>
<dt>Color</dt>
<dd>Green</dd>
</dl>
<hr />
<p>Tables should have bold headings and alternating shaded rows.</p>
<table>
<thead>
<tr>
<th>Artist</th>
<th>Album</th>
<th>Year</th>
</tr>
</thead>
<tbody>
<tr>
<td>David Bowie</td>
<td>Scary Monsters</td>
<td>1980</td>
</tr>
<tr>
<td>Prince</td>
<td>Purple Rain</td>
<td>1982</td>
</tr>
<tr>
<td>Beastie Boys</td>
<td>License to Ill</td>
<td>1986</td>
</tr>
<tr>
<td>Janet Jackson</td>
<td>Rhythm Nation 1814</td>
<td>1989</td>
</tr>
</tbody>
</table>
<p>If a table is too wide, it should condense down and/or scroll horizontally.</p>
<table>
<thead>
<tr>
<th>Artist</th>
<th>Album</th>
<th>Year</th>
<th>Label</th>
<th>Songs</th>
</tr>
</thead>
<tbody>
<tr>
<td>David Bowie</td>
<td>Scary Monsters</td>
<td>1980</td>
<td>RCA Records</td>
<td>It's No Game (No. 1), Up the Hill Backwards, Scary Monsters (And Super Creeps), Ashes to Ashes, Fashion, Teenage Wildlife, Scream Like a Baby, Kingdom Come, Because You're Young, It's No Game (No. 2)</td>
</tr>
<tr>
<td>Prince</td>
<td>Purple Rain</td>
<td>1982</td>
<td>Warner Brothers Records</td>
<td>Let's Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I'm a Star, Purple Rain</td>
</tr>
<tr>
<td>Beastie Boys</td>
<td>License to Ill</td>
<td>1986</td>
<td>Def Jam</td>
<td>Rhymin & Stealin, The New Style, She's Crafty, Posse in Effect, Slow Ride, Girls, Fight for Your Right, No Sleep till Brooklyn, Paul Revere, "Hold It Now, Hit It", Brass Monkey, Slow and Low, Time to Get Ill</td>
</tr>
<tr>
<td>Janet Jackson</td>
<td>Rhythm Nation 1814</td>
<td>1989</td>
<td>A&M</td>
<td>Interlude: Pledge, Rhythm Nation, Interlude: T.V., State of the World, Interlude: Race, The Knowledge, Interlude: Let's Dance, Miss You Much, Interlude: Come Back, Love Will Never Do (Without You), Livin' in a World (They Didn't Make), Alright, Interlude: Hey Baby, Escapade, Interlude: No Acid, Black Cat, Lonely, Come Back to Me, Someday Is Tonight, Interlude: Livin'...In Complete Darkness</td>
</tr>
</tbody>
</table>
<hr />
<p>Code snippets like <code>var foo = "bar";</code> can be shown inline.</p>
<p>Also, <code>this should vertically align</code> <s><code>with this</code></s> <s>and this</s>.</p>
<p>Code can also be shown in a block element.</p>
<pre><code>var foo = "bar";</code></pre>
<p>Code can also use syntax highlighting.</p>
<pre><code class="prism-code language-javascript">var foo = "bar";</code></pre>
<pre><code>Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.</code></pre>
<pre><code class="prism-code language-javascript">var foo = "The same thing is true for code with syntax highlighting. A single line of code should horizontally scroll if it is really long.";</code></pre>
<p>Inline code inside table cells should still be distinguishable.</p>
<table>
<thead>
<tr>
<th>Language</th>
<th>Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>JavasScript</td>
<td><code>var foo = "bar";</code></td>
</tr>
<tr>
<td>Ruby</td>
<td><code>foo = "bar"</code></td>
</tr>
</tbody>
</table>
<hr />
<p>The <code>samp</code> HTML element is used to enclose inline text which represents sample (or quoted) output from a computer program. Here an example of an error message: <samp>File not found</samp></p>
<hr />
<p>Small images should be shown at their actual size.</p>
<p><img src="http://placekitten.com/g/300/200/"></p>
<p>Large images should always scale down and fit in the content container.</p>
<p><img src="http://placekitten.com/g/1200/800/"></p>
<p>
Here's a simple footnote,<sup><a href="#user-content-fn-1-12345" id="user-content-fnref-1-12345" data-footnote-ref="" aria-describedby="footnote-label">1</a></sup> and here's a longer one.<sup><a href="#user-content-fn-bignote-12345" id="user-content-fnref-bignote-12345" data-footnote-ref="" aria-describedby="footnote-label">2</a></sup>
</p>
<section data-footnotes="" class="footnotes">
<h2 id="footnote-label" class="sr-only">Footnotes</h2>
<ol>
<li id="user-content-fn-1-12345">
<p>This is the first footnote. <a href="#user-content-fnref-1-12345" data-footnote-backref="" class="data-footnote-backref" aria-label="Back to content"><g-emoji class="g-emoji" alias="leftwards_arrow_with_hook" fallback-src="https://github.githubassets.com/images/icons/emoji/unicode/21a9.png">↩</g-emoji></a></p>
</li>
<li id="user-content-fn-bignote-12345">
<p>Here's one with multiple paragraphs and code.</p>
<p>Indent paragraphs to include them in the footnote.</p>
<p><code>{ my code }</code></p>
<p>Add as many paragraphs as you like. <a href="#user-content-fnref-bignote-12345" data-footnote-backref="" class="data-footnote-backref" aria-label="Back to content"><g-emoji class="g-emoji" alias="leftwards_arrow_with_hook" fallback-src="https://github.githubassets.com/images/icons/emoji/unicode/21a9.png">↩</g-emoji></a></p>
</li>
</ol>
</section>
<pre><code>This is the final element on the page and there should be no margin below this.</code></pre>
</div>
```
<!-- Source MD (update when making changes to the HTML) -->
<!--
Text can be **bold**, _italic_, or ~~strikethrough~~. [Links](https://github.com) should be blue with no underlines (unless hovered over).
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs.
> There should be no margin above this first sentence.
>
> Blockquotes should be a lighter gray with a gray border along the left side.
>
> There should be no margin below this final sentence.
# Header 1
This is a normal paragraph following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
## Header 2
> This is a blockquote following a header. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.
### Header 3
```
This is a code block following a header.
```
#### Header 4
* This is an unordered list following a header.
* This is an unordered list following a header.
* This is an unordered list following a header.
##### Header 5
1. This is an ordered list following a header.
2. This is an ordered list following a header.
3. This is an ordered list following a header.
###### Header 6
| What | Follows |
|-----------|-----------------|
| A table | A header |
| A table | A header |
| A table | A header |
----------------
There's a horizontal rule above and below this.
----------------
Here is an unordered list:
* Salt-n-Pepa
* Bel Biv DeVoe
* Kid 'N Play
And an ordered list:
1. Michael Jackson
2. Michael Bolton
3. Michael Bublé
And an unordered task list:
- [x] Create a sample markdown document
- [x] Add task lists to it
- [ ] Take a vacation
And a "mixed" task list:
- [ ] Steal underpants
- ?
- [ ] Profit!
And a nested list:
* Jackson 5
* Michael
* Tito
* Jackie
* Marlon
* Jermaine
* TMNT
* Leonardo
* Michelangelo
* Donatello
* Raphael
Definition lists can be used with HTML syntax. Definition terms are bold and italic.
<dl>
<dt>Name</dt>
<dd>Godzilla</dd>
<dt>Born</dt>
<dd>1952</dd>
<dt>Birthplace</dt>
<dd>Japan</dd>
<dt>Color</dt>
<dd>Green</dd>
</dl>
----------------
Tables should have bold headings and alternating shaded rows.
| Artist | Album | Year |
|-------------------|-----------------|------|
| Michael Jackson | Thriller | 1982 |
| Prince | Purple Rain | 1984 |
| Beastie Boys | License to Ill | 1986 |
If a table is too wide, it should condense down and/or scroll horizontally.
| Artist | Album | Year | Label | Awards | Songs |
|-------------------|-----------------|------|-------------|----------|-----------|
| Michael Jackson | Thriller | 1982 | Epic Records | Grammy Award for Album of the Year, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Selling Album, Grammy Award for Best Engineered Album, Non-Classical | Wanna Be Startin' Somethin', Baby Be Mine, The Girl Is Mine, Thriller, Beat It, Billie Jean, Human Nature, P.Y.T. (Pretty Young Thing), The Lady in My Life |
| Prince | Purple Rain | 1984 | Warner Brothers Records | Grammy Award for Best Score Soundtrack for Visual Media, American Music Award for Favorite Pop/Rock Album, American Music Award for Favorite Soul/R&B Album, Brit Award for Best Soundtrack/Cast Recording, Grammy Award for Best Rock Performance by a Duo or Group with Vocal | Let's Go Crazy, Take Me With U, The Beautiful Ones, Computer Blue, Darling Nikki, When Doves Cry, I Would Die 4 U, Baby I'm a Star, Purple Rain |
| Beastie Boys | License to Ill | 1986 | Mercury Records | noawardsbutthistablecelliswide | Rhymin & Stealin, The New Style, She's Crafty, Posse in Effect, Slow Ride, Girls, (You Gotta) Fight for Your Right, No Sleep Till Brooklyn, Paul Revere, Hold It Now, Hit It, Brass Monkey, Slow and Low, Time to Get Ill |
----------------
Code snippets like `var foo = "bar";` can be shown inline.
Also, `this should vertically align` ~~`with this`~~ ~~and this~~.
Code can also be shown in a block element.
```
var foo = "bar";
```
Code can also use syntax highlighting.
```javascript
var foo = "bar";
```
```
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
```
```javascript
var foo = "The same thing is true for code with syntax highlighting. A single line of code should horizontally scroll if it is really long.";
```
Inline code inside table cells should still be distinguishable.
| Language | Code |
|-------------|--------------------|
| JavaScript | `var foo = "bar";` |
| Ruby | `foo = "bar"` |
----------------
The `<samp>` HTML element is used to enclose inline text which represents sample (or quoted) output from a computer program. Here an example of an error message: <samp>File not found</samp>
----------------
Small images should be shown at their actual size.

Large images should always scale down and fit in the content container.

Here's a simple footnote,[^1] and here's a longer one.[^bignote]
[^1]: This is the first footnote.
[^bignote]: Here's one with multiple paragraphs and code.
Indent paragraphs to include them in the footnote.
`{ my code }`
Add as many paragraphs as you like.
```
This is the final element on the page and there should be no margin below this.
```
-->
---

**File:** `datasets/README.md` · **Repo:** `XN137/nessie-demos` · **License:** Apache-2.0 · **Size:** 537 bytes

# Datasets for demos
Since some notebook environments like Google Colaboratory only support uploading a single `.ipynb`
file, which cannot access "files next to it", the `pydemolib` code allows downloading
named datasets to the local file system. Files in a dataset can then be identified by name via a
Python `dict`.
# Uploading a new Dataset
A new dataset can be uploaded to a sub-directory of `datasets`. Please make sure that there is a `ls.txt`
file so that the code in `pydemolib` can quickly scan what files it should look at.
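For example, a hypothetical dataset directory `datasets/my-dataset/` holding two files would contain those files plus an `ls.txt` naming them, one per line (the file names here are made up for illustration):

```
region-codes.csv
weather-2020.csv
```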
---

**File:** `articles/virtual-machines/extensions/agent-dependency-linux.md` · **Repo:** `jrafaelsantana/azure-docs.pt-br` · **Licenses:** CC-BY-4.0, MIT · **Size:** 6,828 bytes

---
title: Azure Monitor Dependency virtual machine extension for Linux | Microsoft Docs
description: Deploy the Azure Monitor Dependency agent on a Linux virtual machine by using a virtual machine extension.
services: virtual-machines-linux
documentationcenter: ''
author: mgoedtel
manager: carmonm
editor: ''
tags: azure-resource-manager
ms.assetid: ''
ms.service: virtual-machines-linux
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure-services
ms.date: 03/29/2019
ms.author: magoedte
ms.openlocfilehash: 416b0c89105f97514efdfcc859a630d78f7ba7f5
ms.sourcegitcommit: 44e85b95baf7dfb9e92fb38f03c2a1bc31765415
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/28/2019
ms.locfileid: "70084844"
---
# <a name="azure-monitor-dependency-virtual-machine-extension-for-linux"></a>Extensão de máquina virtual de dependência Azure Monitor para Linux
O recurso Mapa do Azure Monitor para VMs obtém seus dados do Microsoft Dependency Agent. A extensão da máquina virtual do agente de dependência de VM do Azure para Linux é publicada e suportada pela Microsoft. A extensão instala o agente de dependência em máquinas virtuais do Azure. Este documento detalha as plataformas com suporte, as configurações e as opções de implantação para a extensão de máquina virtual do agente de dependência de VM do Azure para Linux.
## <a name="prerequisites"></a>Pré-requisitos
### <a name="operating-system"></a>Sistema operacional
A extensão do agente de dependência de VM do Azure para Linux pode ser executada nos sistemas operacionais com suporte listados na seção [sistemas operacionais com suporte](../../azure-monitor/insights/vminsights-enable-overview.md#supported-operating-systems) no artigo Azure monitor para VMs implantação.
## <a name="extension-schema"></a>Esquema de extensão
O JSON a seguir mostra o esquema para a extensão do agente de dependência de VM do Azure em uma VM Linux do Azure.
```json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vmName": {
"type": "string",
"metadata": {
"description": "The name of existing Linux Azure VM."
}
}
},
"variables": {
"vmExtensionsApiVersion": "2017-03-30"
},
"resources": [
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'),'/DAExtension')]",
"apiVersion": "[variables('vmExtensionsApiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
],
"properties": {
"publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
"type": "DependencyAgentLinux",
"typeHandlerVersion": "9.5",
"autoUpgradeMinorVersion": true
}
}
],
"outputs": {
}
}
```
### <a name="property-values"></a>Valores de propriedade
| Nome | Valor/exemplo |
| ---- | ---- |
| apiVersion | 2015-01-01 |
| publisher | Microsoft.Azure.Monitoring.DependencyAgent |
| type | DependencyAgentLinux |
| typeHandlerVersion | 9.5 |
## <a name="template-deployment"></a>Implantação de modelo
Você pode implantar extensões de VM do Azure com os modelos do Azure Resource Manager. Você pode usar o esquema JSON detalhado na seção anterior em um modelo de Azure Resource Manager para executar a extensão do agente de dependência de VM do Azure durante uma implantação de modelo de Azure Resource Manager.
O JSON para uma extensão de máquina virtual pode ser aninhado dentro do recurso de máquina virtual. Ou você pode colocá-lo no nível raiz ou superior de um modelo JSON do Resource Manager. O posicionamento do JSON afeta o valor do tipo e nome do recurso. Para obter mais informações, consulte [Definir o nome e o tipo de recursos filho](../../azure-resource-manager/child-resource-name-type.md).
O exemplo a seguir pressupõe que a extensão do agente de dependência está aninhada dentro do recurso de máquina virtual. Quando você Aninha o recurso de extensão, o JSON é colocado no `"resources": []` objeto da máquina virtual.
```json
{
"type": "extensions",
"name": "DAExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
"type": "DependencyAgentLinux",
"typeHandlerVersion": "9.5",
"autoUpgradeMinorVersion": true
}
}
```
When you place the extension JSON at the root of the template, the resource name includes a reference to the parent virtual machine, and the type reflects the nested configuration.
```json
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "<parentVmResource>/DAExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
"type": "DependencyAgentLinux",
"typeHandlerVersion": "9.5",
"autoUpgradeMinorVersion": true
}
}
```
## <a name="azure-cli-deployment"></a>Azure CLI deployment
You can use the Azure CLI to deploy the Dependency agent VM extension to an existing virtual machine.
```azurecli
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name DependencyAgentLinux \
--publisher Microsoft.Azure.Monitoring.DependencyAgent \
--version 9.5
```
## <a name="troubleshoot-and-support"></a>Troubleshoot and support
### <a name="troubleshoot"></a>Troubleshoot
Data about the state of extension deployments can be retrieved from the Azure portal and by using the Azure CLI. To see the deployment state of extensions for a given VM, run the following command using the Azure CLI:
```azurecli
az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
```
Extension execution output is logged to the following file:
```
/opt/microsoft/dependency-agent/log/install.log
```
### <a name="support"></a>Support
If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Get support**. For information about using Azure support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
# News
## Post1:
### What is Markdown?
Markdown is a lightweight markup language that you can use to add formatting elements to plaintext text documents. Created by John Gruber in 2004, Markdown is now one of the world’s most popular markup languages.
Using Markdown is different than using a WYSIWYG editor. In an application like Microsoft Word, you click buttons to format words and phrases, and the changes are visible immediately. Markdown isn’t like that. When you create a Markdown-formatted file, you add Markdown syntax to the text to indicate which words and phrases should look different.
For instance, to denote a heading, you add a number sign before it (e.g., # Heading One). Or to make a phrase bold, you add two asterisks before and after it (e.g., **this text is bold**). It may take a while to get used to seeing Markdown syntax in your text, especially if you’re accustomed to WYSIWYG applications. The screenshot below shows a Markdown file displayed in the Atom text editor.
[Back to Home page](./index.html)
# Peoples' Friendship University of Russia
## Faculty of Physics, Mathematics and Natural Sciences
***
# Lab work No. 11
## Programming in the UNIX OS command processor. Command files
**Discipline:** Operating systems
**Student:** Orazgeldiyeva Ogulnur
**Group:** NPIbd-02-20
2021, Moscow
***
## Presentation structure:
1. Pragmatics of the lab work
2. Goal of the lab work
3. Tasks of the lab work
4. Results of the lab work and conclusion
***
## Pragmatics
Completing this lab work makes it possible to learn the basics of programming in the command processor of a Unix OS and to create command files.
***
## Goal of the lab work
* study the basics of programming in the UNIX/Linux OS shell
* learn to write small command files
* become familiar with the Linux operating system
***
## Tasks
1. Study the theoretical material.
2. Write a script that, when run, makes a backup copy of itself into a backup directory in your home folder. The file must be archived with one of the archivers of your choice: zip, bzip2, or tar.
3. Write an example command file that handles an arbitrary number of command-line arguments, including more than ten.
4. Write a command file that is an analog of the ls command (without using that command itself or the dir command). It must output information about the requested directory and report the access permissions for the files in that directory.
5. Write a command file that takes a file format (.txt, .doc, .jpg, .pdf, and so on) as a command-line argument and counts the number of files of that format in the specified directory.
***
## Results and conclusion
As a result of this lab work, I studied the basics of programming in the UNIX/Linux OS shell and learned to write small command files.
47558c1ea2fb14df0e6d0f0e1f819147e71aed34 | 1,716 | md | Markdown | src/lv/2020-04/03/05.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/lv/2020-04/03/05.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/lv/2020-04/03/05.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Likuma Ievērotāju Pūles Un Cīņas
date: 14/10/2020
---
Ievērojot Dieva likumu, ir daudz ieguvumu, kā tas atklājas cilvēkos, kam Dievs ir ļāvis uzplaukt. Jozua cieši sekoja Dieva priekšrakstiem un labi vadīja Israēla tautu. Laiku pa laikam Kungs teica Israēlam: ja viņi paklausīs likumam, viņi piedzīvos labklājību.
`Izlasi 2L 31:20, 21. Kādi bija galvenie iemesli šajos pantos, kāpēc Hiskija piedzīvoja labklājību?`
Lai arī kādu izglītības ceļu mēs ietu, mums ir jāuzsver paklausības svarīgums. Tomēr mūsu skolēni nav muļķi. Viņi agri vai vēlu pamanīs skarbo faktu, ka daži cilvēki ir uzticīgi, mīloši un paklausīgi, bet – kas notiek? Arī viņus skar nelaimes! Kā to izskaidrot?
Patiesība ir tāda, ka mēs to nevaram. Mēs dzīvojam grēka, ļaunuma pasaulē, kurā plosās lielā cīņa, un neviens no mums nav imūns pret to.
`Ko šie panti mums māca par šo grūto jautājumu? Mk 6:25–27, Īj 1. un 2. nod.; 2kor 11:23–29.`
Bez šaubām, labi un uzticīgi cilvēki, likumam paklausīgi kristieši ne vienmēr ir piedzīvojuši labklājību, vismaz tādu, kā to saprot šī pasaule. Un arī šī var būt daļēji atbilde uz šo grūto jautājumu – jautājumu, kas, mācot par likuma svarīgumu, noteikti tiks uzdots. Ko tieši mēs domājam ar “labklājību”? Ko saka dziesminieks? “Laba ir diena tavos pagalmos, starp tūkstoš to izvēlos – sēdēt pie mana Dieva sliekšņa nekā mājot neganto teltīs” (Ps 84:11). Nav šaubu, ka pēc pasaules standarta pat tie, kas ir uzticīgi Dievam un paklausīgi Viņa likumam, ne vienmēr “uzplaukst”, vismaz šobrīd ne. Mēs saviem skolēniem izdarīsim lāča pakalpojumu, ja apgalvosim pretējo.
`Izlasi Ebr 11:13–16. Kā šie panti palīdz mums saprast, kāpēc tie, kas ir uzticīgi, šajā dzīvē vienalga piedzīvo ciešanas?` | 95.333333 | 664 | 0.7838 | lvs_Latn | 1.000007 |
4755ad72d42aa00eb9e5ec9be897656b19ff4e0e | 1,553 | md | Markdown | node_modules/domain-browser/HISTORY.md | firojkabir/lsg | ff8b5edc02b2d45f6bc602c7a2aa592706009345 | [
"MIT"
] | null | null | null | node_modules/domain-browser/HISTORY.md | firojkabir/lsg | ff8b5edc02b2d45f6bc602c7a2aa592706009345 | [
"MIT"
] | 7 | 2020-03-10T07:47:34.000Z | 2022-02-12T00:20:30.000Z | node_modules/domain-browser/HISTORY.md | firojkabir/lsg | ff8b5edc02b2d45f6bc602c7a2aa592706009345 | [
"MIT"
] | null | null | null | # History
## v1.2.0 2018 January 26
- `index.js` is now located at `source/index.js`
- Updated base files
## v1.1.7 2015 December 12
- Revert minimum node version from 0.12 back to 0.4
- Thanks to [Alexander Sorokin](https://github.com/syrnick) for [this comment](https://github.com/bevry/domain-browser/commit/c66ee3445e87955e70d0d60d4515f2d26a81b9c4#commitcomment-14938325)
## v1.1.6 2015 December 12
- Fixed `assert-helpers` sneaking into `dependencies`
- Thanks to [Bogdan Chadkin](https://github.com/TrySound) for [Pull Request #8](https://github.com/bevry/domain-browser/pull/8)
## v1.1.5 2015 December 9
- Updated internal conventions
- Added better jspm support
- Thanks to [Guy Bedford](https://github.com/guybedford) for [Pull Request #7](https://github.com/bevry/domain-browser/pull/7)
## v1.1.4 2015 February 3
- Added
- `domain.enter()`
- `domain.exit()`
- `domain.bind()`
- `domain.intercept()`
## v1.1.3 2014 October 10
- Added
- `domain.add()`
- `domain.remove()`
## v1.1.2 2014 June 8
- Added `domain.createDomain()` alias
- Thanks to [James Halliday](https://github.com/substack) for [Pull Request #1](https://github.com/bevry/domain-browser/pull/1)
## v1.1.1 2013 December 27
- Fixed `domain.create()` not returning anything
## v1.1.0 2013 November 1
- Dropped component.io and bower support, just use ender or browserify
## v1.0.1 2013 September 18
- Now called `domain-browser` everywhere
## v1.0.0 2013 September 18
- Initial release
# CTGUI
You can run a command and edit recipes from within the game using a graphical interface. It will generate a file named recipes.zs in your local scripts folder.
<details><summary>Backstory</summary> Some people don't have fancy text editors. Even syntax-highlighting templates can't satisfy them. They want a GUI (graphical user interface).
In this matter Jared, humble servant of Lord Ellpeck of House Penguin, rightful heir to the Milkshake Throne, King of the Seven Kingdoms of Germany, the Rhoynar and the First Men, Mother of Penguins, modder of the great frozen plains, the unbroken and breaker of mods, descended upon us from Maven, the glorious library of forbidden wisdom and blasphemy, to share his great knowledge with humankind, after being summoned by BBoldt, traveler of the world of Realms, slayer of the great unknown, writer of the Necrochód. Unfortunately, since we have not yet been able to decipher the seemingly random croaking that flowed from him, he decided instead to resort to a more simplified means of help, giving the people who have the power to manipulate the laws of the universe (also called `OPs` or `Admins`) access to a magical window inside the game they wanted to play, changing the foundations of this false reality from within. </details>
## Calling the command
You call the command using
    /CTGUI id
These IDs are currently implemented:
| ID             | Added by     | Notes |
| -------------- | ------------ | ----- |
| crafting table | CraftTweaker |       |
| furnace        | CraftTweaker |       |
Note: This command only works in singleplayer. It will return an unknown command error if run on a server. Edit locally and merge/upload the file to your server.
<properties title="Troubleshooting Azure Websites in Visual Studio" pageTitle="Troubleshooting Azure Websites in Visual Studio" metaKeywords="troubleshoot debug azure web site tracing logging" description="Learn how to troubleshoot an Azure Website by using remote debugging, tracing, and logging tools that are built in to Visual Studio 2013." metaCanonical="" services="web-sites" documentationCenter=".NET" authors="tdykstra" manager="wpickett" solutions="" />
<tags ms.service="web-sites" ms.workload="web" ms.tgt_pltfrm="na" ms.devlang="dotnet" ms.topic="article" ms.date="11/13/2014" ms.author="tdykstra" />
# Troubleshooting Azure Websites in Visual Studio
This tutorial shows how to use Visual Studio tools that help debug an application while it runs in an Azure Website, by running in [debug mode](http://www.visualstudio.com/en-us/get-started/debug-your-app-vs.aspx) remotely or by viewing application logs and web server logs.
You'll learn:
* Which Azure site management functions are available in Visual Studio.
* How to use Visual Studio remote view to make quick changes in a remote website.
* How to run debug mode remotely while a project is running in Azure.
* How to create application trace logs and view them while the application is creating them.
* How to view web server logs, including detailed error messages and failed request tracing.
* How to send diagnostic logs to an Azure Storage account and view them there.
If you have Visual Studio Ultimate, you can also use [IntelliTrace](http://msdn.microsoft.com/library/vstudio/dd264915.aspx) for debugging. IntelliTrace is not covered in this tutorial.
### Tutorial segments
- [Prerequisites](#prerequisites)
- [Site configuration and management](#sitemanagement)
- [Access website files in Server Explorer](#remoteview)
- [Remote debugging](#remotedebug)
- Remote debugging websites
- Remote debugging WebJobs
- Notes about remote debugging
- [Diagnostic logs overview](#logsoverview)
- [Create and view application trace logs](#apptracelogs)
- [View web server logs](#webserverlogs)
- [View detailed error message logs](#detailederrorlogs)
- [Download file system logs](#downloadlogs)
- [View storage logs](#storagelogs)
- [View failed request logs](#failedrequestlogs)
- [Next steps](#nextsteps)
<h2><a name="prerequisites"></a>Prerequisites</h2>
This tutorial works with the development environment, web project, and Azure Website that you set up in [Get started with Azure and ASP.NET][GetStarted]. For the WebJobs sections, you'll need the application that you create in [Get Started with the Azure WebJobs SDK][GetStartedWJ].
The code samples shown in this tutorial are for a C# MVC web application, but the troubleshooting procedures are the same for Visual Basic and Web Forms applications.
Remote debugging requires Visual Studio 2013 or Visual Studio 2012 with Update 4. The remote debugging and **Server Explorer** features for WebJobs require [Visual Studio 2013 Update 4](http://go.microsoft.com/fwlink/?LinkID=510314) or later. The other features shown in the tutorial also work in Visual Studio 2013 Express for Web, and Visual Studio 2012 Express for Web.
The streaming logs feature only works for applications that target .NET Framework 4 or later.
<h2><a name="sitemanagement"></a>Site configuration and management</h2>
Visual Studio provides access to a subset of the site management functions and configuration settings available in the management portal. In this section you'll see what's available.
1. If you aren't already signed in to Azure in Visual Studio, click the **Connect to Azure** button in **Server Explorer**.
An alternative is to install a management certificate that enables access to your account. The management certificate gives **Server Explorer** access to additional Azure services (SQL Database and Mobile Services). If you choose to install a certificate, right-click the **Azure** node in **Server Explorer**, and then click **Manage Subscriptions** in the context menu. In the **Manage Azure Subscriptions** dialog box, click the **Certificates** tab, and then click **Import**. Follow the directions to download and then import a subscription file (also called a *.publishsettings* file) for your Azure account.
> [AZURE.NOTE]
> If you download a subscription file, save it to a folder outside your source code directories (for example, in the Downloads folder), and then delete it once the import has completed. A malicious user who gains access to the subscription file can edit, create, and delete your Azure services.
For more information about connecting to Azure resources from Visual Studio, see [Manage Accounts, Subscriptions, and Administrative Roles](http://go.microsoft.com/fwlink/?LinkId=324796#BKMK_AccountVCert).
2. In **Server Explorer**, expand **Azure**, and then expand **Websites**.
3. Right-click the node for the website that you created in [Getting started with Azure and ASP.NET][GetStarted], and then click **View Settings**.

The **Azure Website** tab appears, and you can see there the site management and configuration tasks that are available in Visual Studio.

In this tutorial you'll be using the logging and tracing drop-downs. You'll also use remote debugging but you'll use a different method to enable it.
For information about the App Settings and Connection Strings boxes in this window, see [Azure Web Sites: How Application Strings and Connection Strings Work](http://blogs.msdn.com/b/windowsazure/archive/2013/07/17/windows-azure-web-sites-how-application-strings-and-connection-strings-work.aspx).
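As a quick illustration, application code typically reads those values through `System.Configuration`. The following minimal sketch assumes names of your own choosing; "DefaultConnection" is only a placeholder:
    using System.Configuration;
    public static class SiteConfig
    {
        // Reads an app setting. In an Azure Website, a value entered in the
        // portal overrides the Web.config value at runtime without redeploying.
        public static string GetAppSetting(string key)
        {
            return ConfigurationManager.AppSettings[key];
        }
        // Reads a connection string the same way. The name passed in (for
        // example "DefaultConnection") must match an entry in your configuration.
        public static string GetConnectionString(string name)
        {
            var entry = ConfigurationManager.ConnectionStrings[name];
            return entry == null ? null : entry.ConnectionString;
        }
    }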
If you want to perform a site management task that can't be done in this window, you can click **Full Website Settings** to open a browser window to the management portal. For more information, see [How to Configure Web Sites](/en-us/manage/services/web-sites/how-to-configure-websites/#howtochangeconfig).
<h2><a name="remoteview"></a>Access website files in Server Explorer</h2>
You typically deploy a site with the `customErrors` flag in the Web.config file set to `On` or `RemoteOnly`, which means you don't get a helpful error message when something goes wrong. For many errors all you get is a page like one of the following ones.
**Server Error in '/' Application:**

**An error occurred:**

**The website cannot display the page**

Frequently the easiest way to find the cause of the error is to enable detailed error messages, which the first of the preceding screenshots explains how to do. That requires a change in the deployed Web.config file. You could edit the *Web.config* file in the project and redeploy the project, or create a [Web.config transform](http://www.asp.net/mvc/tutorials/deployment/visual-studio-web-deployment/web-config-transformations) and deploy a debug build, but there's a quicker way: in **Solution Explorer** you can directly view and edit files on the remote site by using the *remote view* feature.
1. In **Server Explorer**, expand **Azure**, expand **Websites**, and expand the node for the website you're deploying to.
You see nodes that give you access to the website's content files and log files.

2. Expand the **Files** node, and double-click the *Web.config* file.

Visual Studio opens the Web.config file from the remote site and shows [Remote] next to the file name in the title bar.
3. Add the following line to the `system.web` element:
`<customErrors mode="off"></customErrors>`

4. Refresh the browser that is showing the unhelpful error message, and now you get a detailed error message, such as the following example:

(The error shown was created by adding the line shown in red to *Views\Home\Index.cshtml*.)
Editing the Web.config file is only one example of scenarios in which the ability to read and edit files on your Azure website make troubleshooting easier.
<h2><a name="remotedebug"></a>Remote debugging</h2>
If the detailed error message doesn't provide enough information, and you can't re-create the error locally, another way to troubleshoot is to run in debug mode remotely. You can set breakpoints, manipulate memory directly, step through code, and even change the code path.
Remote debugging does not work in Express editions of Visual Studio.
### Remote debugging websites
This section shows how to debug remotely using the project you create in [Getting started with Azure and ASP.NET][GetStarted].
1. Open the web project that you created in [Getting started with Azure and ASP.NET][GetStarted].
1. Open *Controllers\HomeController.cs*.
2. Delete the `About()` method and insert the following code in its place.
public ActionResult About()
{
string currentTime = DateTime.Now.ToLongTimeString();
ViewBag.Message = "The current time is " + currentTime;
return View();
}
2. [Set a breakpoint](http://www.visualstudio.com/en-us/get-started/debug-your-app-vs.aspx) on the `ViewBag.Message` line.
1. In **Solution Explorer**, right-click the project, and click **Publish**.
2. In the **Profile** drop-down list, select the same profile that you used in [Getting started with Azure and ASP.NET][GetStarted].
3. Click the **Settings** tab, and change **Configuration** to **Debug**, and then click **Publish**.

4. After deployment finishes and your browser opens to the Azure URL of your site, close the browser.
5. For Visual Studio 2013: In **Server Explorer** expand **Azure**, expand **Websites**, right-click your website, and click **Attach Debugger**.

The browser automatically opens to your home page running in Azure. You might have to wait 20 seconds or so while Azure sets up the server for debugging. This delay only happens the first time you run in debug mode on a website. Subsequent times within the next 48 hours when you start debugging again there won't be a delay.
6. For Visual Studio 2012 with Update 4:<a id="vs2012"></a>
* In the Azure Management Portal, go to the **Configure** tab for your website, and then scroll down to the **Site Diagnostics** section.
* Set **Remote Debugging** to **On**, and set **Remote Debugging Visual Studio Version** to **2012**.

* In the Visual Studio **Debug** menu, click **Attach to Process**.
* In the **Qualifier** box, enter the URL for your website, without the `http://` prefix.
* Select **Show processes from all users**.
* When you're prompted for credentials, enter the user name and password that has permissions to publish the website. To get these credentials, go to the Dashboard tab for your website in the management portal and click **Download the publish profile**. Open the file in a text editor, and you'll find the user name and password after the first occurrences of **userName=** and **userPWD=**.
* When the processes appear in the **Available Processes** table, select **w3wp.exe**, and then click **Attach**.
* Open a browser to your site URL.
You might have to wait 20 seconds or so while Azure sets up the server for debugging. This delay only happens the first time you run in debug mode on a website. Subsequent times within the next 48 hours when you start debugging again there won't be a delay.
6. Click **About** in the menu.
Visual Studio stops on the breakpoint, and the code is running in Azure, not on your local computer.
7. Hover over the `currentTime` variable to see the time value.

The time you see is the Azure server time, which may be in a different time zone than your local computer.
8. Enter a new value for the `currentTime` variable, such as "Now running in Azure".
5. Press F5 to continue running.
The About page running in Azure displays the new value that you entered into the currentTime variable.

### <a name="remotedebugwj"></a> Remote debugging WebJobs
This section shows how to debug remotely using the project and website you create in [Get Started with the Azure WebJobs SDK](../websites-dotnet-webjobs-sdk). The features shown in this section are available only in Visual Studio 2013 with Update 4.
1. Open the web project that you created in [Get Started with the Azure WebJobs SDK][GetStartedWJ].
1. In the ContosoAdsWebJob project, open *Functions.cs*.
2. [Set a breakpoint](http://www.visualstudio.com/en-us/get-started/debug-your-app-vs.aspx) on the first statement in the `GenerateThumbnail` method.

1. In **Solution Explorer**, right-click the web project (not the WebJob project), and click **Publish**.
2. In the **Profile** drop-down list, select the same profile that you used in [Get Started with the Azure WebJobs SDK](../websites-dotnet-webjobs-sdk).
3. Click the **Settings** tab, and change **Configuration** to **Debug**, and then click **Publish**.
Visual Studio deploys the web and WebJob projects, and your browser opens to the Azure URL of your site.
5. In **Server Explorer** expand **Azure** > **Websites** > your website > **WebJobs** > **Continuous**, and then right-click **ContosoAdsWebJob**.
7. Click **Attach Debugger**.

The browser automatically opens to your home page running in Azure. You might have to wait 20 seconds or so while Azure sets up the server for debugging. This delay only happens the first time you run in debug mode on a website. The next time you attach the debugger there won't be a delay, if you do it within 48 hours.
6. In the web browser that is opened to the Contoso Ads home page, create a new ad.
Creating an ad causes a queue message to be created, which will be picked up by the WebJob and processed. When the WebJobs SDK calls the function to process the queue message, the code will hit your breakpoint.
7. When the debugger breaks at your breakpoint, you can examine and change variable values while the program is running the cloud. In the following illustration the debugger shows the contents of the blobInfo object that was passed to the GenerateThumbnail method.

5. Press F5 to continue running.
The GenerateThumbnail method finishes creating the thumbnail.
6. In the browser, refresh the Index page and you see the thumbnail.
6. In Visual Studio, press SHIFT+F5 to stop debugging.
7. In **Server Explorer**, right-click the ContosoAdsWebJob node and click **View Dashboard**.
8. Sign in with your Azure credentials, and then click the WebJob name to go to the page for your WebJob.

The Dashboard shows that the GenerateThumbnail function executed recently.
(The next time you click **View Dashboard**, you don't have to sign in, and the browser goes directly to the page for your WebJob.)
9. Click the function name to see details about the function execution.

If your function [wrote logs](../websites-dotnet-webjobs-sdk-storage-queues-how-to/#logs), you could click **ToggleOutput** to see them.
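For reference, a queue-triggered function that writes logs can be as simple as the following minimal sketch. The queue name, message type, and function body here are assumptions for illustration rather than the tutorial project's actual code; the `TextWriter` parameter is supplied by the WebJobs SDK, and anything written to it appears when you click **ToggleOutput**.
    using System.IO;
    using Microsoft.Azure.WebJobs;
    public class Functions
    {
        // Runs whenever a message lands on the (hypothetical) "thumbnailrequest"
        // queue. Text written to the logger shows up on the WebJobs Dashboard.
        public static void GenerateThumbnail(
            [QueueTrigger("thumbnailrequest")] string blobName,
            TextWriter logger)
        {
            logger.WriteLine("Processing queue message for blob: " + blobName);
            // ... thumbnail creation would go here ...
            logger.WriteLine("Finished processing " + blobName);
        }
    }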
### Notes about remote debugging
* Running in debug mode in production is not recommended. If your production site is not scaled out to multiple server instances, debugging will prevent the web server from responding to other requests. If you do have multiple web server instances, when you attach to the debugger you'll get a random instance, and you have no way to ensure that subsequent browser requests will go to that instance. Also, you typically don't deploy a debug build to production, and compiler optimizations for release builds might make it impossible to show what is happening line by line in your source code. For troubleshooting production problems, your best resource is application tracing and web server logs.
* Avoid long stops at breakpoints when remote debugging. Azure treats a process that is stopped for longer than a few minutes as an unresponsive process, and shuts it down.
* While you're debugging, the server is sending data to Visual Studio, which could affect bandwidth charges. For information about bandwidth rates, see [Azure Pricing](/en-us/pricing/calculator/).
* Make sure that the `debug` attribute of the `compilation` element in the *Web.config* file is set to true. It is set to true by default when you publish a debug build configuration.
<system.web>
<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5" />
</system.web>
* If you find that the debugger won't step into code that you want to debug, you might have to change the Just My Code setting. For more information, see [Restrict stepping to Just My Code](http://msdn.microsoft.com/en-us/library/vstudio/y740d9d3.aspx#BKMK_Restrict_stepping_to_Just_My_Code).
* A timer starts on the server when you enable the remote debugging feature, and after 48 hours the feature is automatically turned off. This 48 hour limit is done for security and performance reasons. You can easily turn the feature back on as many times as you like. We recommend leaving it disabled when you are not actively debugging.
* You can manually attach the debugger to any process, not only the website process (w3wp.exe). For more information about how to use debug mode in Visual Studio, see [Debugging in Visual Studio](http://msdn.microsoft.com/en-us/library/vstudio/sc65sadd.aspx).
<h2><a name="logsoverview"></a>Diagnostic logs overview</h2>
An ASP.NET application that runs in an Azure Website can create the following kinds of logs:
* **Application tracing logs**<br/>
The application creates these logs by calling methods of the [System.Diagnostics.Trace](http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.aspx) class.
* **Web server logs**<br/>
The web server creates a log entry for every HTTP request to the site.
* **Detailed error message logs**<br/>
The web server creates an HTML page with some additional information for failed HTTP requests (those that result in status code 400 or greater).
* **Failed request tracing logs**<br/>
The web server creates an XML file with detailed tracing information for failed HTTP requests. The web server also provides an XSL file to format the XML in a browser.
Logging affects site performance, so Azure gives you the ability to enable or disable each type of log as needed. For application logs, you can specify that only logs above a certain severity level should be written. When you create a new website, by default all logging is disabled.
Logs are written to files in a *LogFiles* folder in the file system of your website and are accessible via FTP. Web server logs and application logs can also be written to an Azure Storage account. You can retain a greater volume of logs in a storage account than is possible in the file system. You're limited to a maximum of 100 megabytes of logs when you use the file system. (File system logs are only for short-term retention. Azure deletes old log files to make room for new ones after the limit is reached.)
<h2><a name="apptracelogs"></a>Create and view application trace logs</h2>
In this section you'll do the following tasks:
* Add tracing statements to the web project that you created in [Get started with Azure and ASP.NET][GetStarted].
* View the logs when you run the project locally.
* View the logs as they are generated by the application running in Azure.
For information about how to create application logs in WebJobs, see [How to work with Azure queue storage using the WebJobs SDK - How to write logs](../websites-dotnet-webjobs-sdk-storage-queues-how-to/#logs). The following instructions for viewing logs and controlling how they're stored in Azure apply also to application logs created by WebJobs.
### Add tracing statements to the application
1. Open *Controllers\HomeController.cs*, and replace the file contents with the following code in order to add `Trace` statements and a `using` statement for `System.Diagnostics`:
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Diagnostics;
using System.Linq;
using System.Web;
using System.Web.Configuration;
using System.Web.Mvc;
namespace MyExample.Controllers
{
public class HomeController : Controller
{
public ActionResult Index()
{
Trace.WriteLine("Entering Index method");
ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";
Trace.TraceInformation("Displaying the Index page at " + DateTime.Now.ToLongTimeString());
Trace.WriteLine("Leaving Index method");
return View();
}
public ActionResult About()
{
Trace.WriteLine("Entering About method");
ViewBag.Message = "Your app description page.";
Trace.TraceWarning("Transient error on the About page at " + DateTime.Now.ToShortTimeString());
Trace.WriteLine("Leaving About method");
return View();
}
public ActionResult Contact()
{
Trace.WriteLine("Entering Contact method");
ViewBag.Message = "Your contact page.";
Trace.TraceError("Fatal error on the Contact page at " + DateTime.Now.ToLongTimeString());
Trace.WriteLine("Leaving Contact method");
return View();
}
}
}
### View the tracing output locally
3. Press F5 to run the application in debug mode.
The default trace listener writes all trace output to the **Output** window, along with other Debug output. The following illustration shows the output from the trace statements that you added to the `Index` method.

The following steps show how to view trace output in a web page, without compiling in debug mode.
2. Open the application Web.config file (the one located in the project folder) and add a `<system.diagnostics>` element at the end of the file just before the closing `</configuration>` element:
<system.diagnostics>
<trace>
<listeners>
<add name="WebPageTraceListener"
type="System.Web.WebPageTraceListener,
System.Web,
Version=4.0.0.0,
Culture=neutral,
PublicKeyToken=b03f5f7f11d50a3a" />
</listeners>
</trace>
</system.diagnostics>
The `WebPageTraceListener` lets you view trace output by browsing to `/trace.axd`.
3. Add a <a href="http://msdn.microsoft.com/en-us/library/vstudio/6915t83k(v=vs.100).aspx">trace element</a> under `<system.web>` in the Web.config file, such as the following example:
<trace enabled="true" writeToDiagnosticsTrace="true" mostRecent="true" pageOutput="false" />
3. Press CTRL+F5 to run the application.
4. In the address bar of the browser window, add *trace.axd* to the URL, and then press Enter (the URL will be similar to http://localhost:53370/trace.axd).
5. On the **Application Trace** page, click **View Details** on the first line (not the BrowserLink line).

The **Request Details** page appears, and in the **Trace Information** section you see the output from the trace statements that you added to the `Index` method.

By default, `trace.axd` is only available locally. If you wanted to make it available from a remote site, you could add `localOnly="false"` to the `trace` element in the *Web.config* file, as shown in the following example:
<trace enabled="true" writeToDiagnosticsTrace="true" localOnly="false" mostRecent="true" pageOutput="false" />
However, enabling `trace.axd` in a production site is generally not recommended for security reasons, and in the following sections you'll see an easier way to read tracing logs in an Azure Website.
### View the tracing output in Azure
1. In **Solution Explorer**, right-click the web project and click **Publish**.
2. In the **Publish Web** dialog box, click **Publish**.
After Visual Studio publishes your update, it opens a browser window to your home page (assuming you didn't clear **Destination URL** on the **Connection** tab).
3. In **Server Explorer**, right-click your website and select **View Streaming Logs**.

The **Output** window shows that you are connected to the log-streaming service, and adds a notification line each minute that goes by without a log to display.

4. In the browser window that shows your application home page, click **Contact**.
Within a few seconds the output from the error-level trace you added to the `Contact` method appears in the **Output** window.

Visual Studio is only showing error-level traces because that is the default setting when you enable the log monitoring service. When you create a new Azure Website, all logging is disabled by default, as you saw when you opened the site settings page earlier:

However, when you selected **View Streaming Logs**, Visual Studio automatically changed **Application Logging(File System)** to **Error**, which means error-level logs get reported. In order to see all of your tracing logs, you can change this setting to **Verbose**. When you select a severity level lower than error, all logs for higher severity levels are also reported. So when you select verbose, you also see information, warning, and error logs.
4. In **Server Explorer**, right-click the website, and then click **View Settings** as you did earlier.
5. Change **Application Logging (File System)** to **Verbose**, and then click **Save**.

6. In the browser window that is now showing your **Contact** page, click **Home**, then click **About**, and then click **Contact**.
Within a few seconds, the **Output** window shows all of your tracing output.

In this section you enabled and disabled logging by using Azure Website settings. You can also enable and disable trace listeners by modifying the Web.config file. However, modifying the Web.config file causes the app domain to recycle, while enabling logging via the website doesn't do that. If the problem takes a long time to reproduce, or is intermittent, recycling the app domain might "fix" it and force you to wait until it happens again. Enabling diagnostics in Azure doesn't do this, so you can start capturing error information immediately.
### Output window features
The **Azure Logs** tab of the **Output** Window has several buttons and a text box:

These perform the following functions:
* Clear the **Output** window.
* Enable or disable word wrap.
* Start or stop monitoring logs.
* Specify which logs to monitor.
* Download logs.
* Filter logs based on a search string or a regular expression.
* Close the **Output** window.
If you enter a search string or regular expression, Visual Studio filters logging information at the client. That means you can enter the criteria after the logs are displayed in the **Output** window and you can change filtering criteria without having to regenerate the logs.
<h2><a name="webserverlogs"></a>View web server logs</h2>
Web server logs record all HTTP activity on the site. In order to see them in the **Output** window you have to enable them on the site and tell Visual Studio that you want to monitor them.
1. In the **Azure Website Configuration** tab that you opened from **Server Explorer**, change Web Server Logging to **On**, and then click **Save**.

2. In the **Output** Window, click the **Specify which Azure logs to monitor** button.

3. In the **Azure Logging Options** dialog box, select **Web server logs**, and then click **OK**.

4. In the browser window that shows the website, click **Home**, then click **About**, and then click **Contact**.
The application logs generally appear first, followed by the web server logs. You might have to wait a while for the logs to appear.

By default, when you first enable web server logs by using Visual Studio, Azure writes the logs to the file system. As an alternative, you can use the management portal to specify that web server logs should be written to a blob container in a storage account. For more information, see the **site diagnostics** section in [How to Configure Web Sites](/en-us/manage/services/web-sites/how-to-configure-websites/#howtochangeconfig).
If you use the management portal to enable web server logging to an Azure storage account, and then disable logging in Visual Studio, when you re-enable logging in Visual Studio your storage account settings are restored.
<h2><a name="detailederrorlogs"></a>View detailed error message logs</h2>
Detailed error logs provide some additional information about HTTP requests that result in error response codes (400 or above). In order to see them in the **Output** window, you have to enable them on the site and tell Visual Studio that you want to monitor them.
1. In the **Azure Website Configuration** tab that you opened from **Server Explorer**, change **Detailed Error Messages** to **On**, and then click **Save**.

2. In the **Output** Window, click the **Specify which Azure logs to monitor** button.
3. In the **Azure Logging Options** dialog box, click **All logs**, and then click **OK**.

4. In the address bar of the browser window, add an extra character to the URL to cause a 404 error (for example, `http://localhost:53370/Home/Contactx`), and press Enter.
After several seconds the detailed error log appears in the Visual Studio **Output** window.

Control+click the link to see the log output formatted in a browser:

<h2><a name="downloadlogs"></a>Download file system logs</h2>
Any logs that you can monitor in the **Output** window can also be downloaded as a *.zip* file.
1. In the **Output** window, click **Download Streaming Logs**.

File Explorer opens to your *Downloads* folder with the downloaded file selected.

2. Extract the *.zip* file, and you see the following folder structure:

* Application tracing logs are in *.txt* files in the *LogFiles\Application* folder.
* Web server logs are in *.log* files in the *LogFiles\http\RawLogs* folder. You can use a tool such as [Log Parser](http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=24659) to view and manipulate these files.
* Detailed error message logs are in *.html* files in the *LogFiles\DetailedErrors* folder.
(The *deployments* folder is for files created by source control publishing; it doesn't have anything related to Visual Studio publishing. The *Git* folder is for traces related to source control publishing and the log file streaming service.)
<h2><a name="storagelogs"></a>View storage logs</h2>
Application tracing logs can also be sent to an Azure storage account, and you can view them in Visual Studio. To do that you'll create a storage account, enable storage logs in the management portal, and view them in the **Logs** tab of the **Azure Website** window.
You can send logs to any or all of three destinations:
* The file system.
* Storage account tables.
* Storage account blobs.
You can specify a different severity level for each destination.
Tables make it easy to view details of logs online, and they support streaming; you can query logs in tables and see new logs as they are being created. Blobs make it easy to download logs in files and to analyze them using HDInsight, because HDInsight knows how to work with blob storage. For more information, see **Hadoop and MapReduce** in [Data Storage Options (Building Real-World Cloud Apps with Azure)](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/data-storage-options).
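If you later want to read those table logs programmatically instead of through Server Explorer, a minimal sketch with the Azure Storage client library looks like the following. The connection string, the table name (use the one you pick in the **Manage table storage** dialog later in this section), and the "Message" property name are assumptions to adjust for your own configuration:
    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;
    class LogReader
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>");
            var table = account.CreateCloudTableClient()
                               .GetTableReference("WAWSAppLogTable");
            // Pull the first 20 log entities; each row comes back as a
            // DynamicTableEntity whose Properties dictionary holds the fields.
            var query = new TableQuery<DynamicTableEntity>().Take(20);
            foreach (var entity in table.ExecuteQuery(query))
            {
                EntityProperty message;
                entity.Properties.TryGetValue("Message", out message);
                Console.WriteLine("{0} {1}", entity.Timestamp,
                    message == null ? "(no message)" : message.StringValue);
            }
        }
    }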
You currently have file system logs set to verbose level; the following steps walk you through setting up information level logs to go to storage account tables. Information level means all logs created by calling `Trace.TraceInformation`, `Trace.TraceWarning`, and `Trace.TraceError` will be displayed, but not logs created by calling `Trace.WriteLine`.
Storage accounts offer more storage and longer-lasting retention for logs compared to the file system. Another advantage of sending application tracing logs to storage is that you get some additional information with each log that you don't get from file system logs.
5. Right-click **Storage** under the Azure node, and then click **Create Storage Account**.

3. In the **Create Storage Account** dialog, enter a name for the storage account.
The name must be unique (no other Azure storage account can have the same name). If the name you enter is already in use, you'll get a chance to change it.
The URL to access your storage account will be *{name}*.core.windows.net.
5. Set the **Region or Affinity Group** drop-down list to the region closest to you.
This setting specifies which Azure datacenter will host your storage account. For this tutorial your choice won't make a noticeable difference, but for a production site you want your web server and your storage account to be in the same region to minimize latency and data egress charges. The website (which you'll create later) should be as close as possible to the browsers accessing your site in order to minimize latency.
6. Set the **Replication** drop-down list to **Locally redundant**.
When geo-replication is enabled for a storage account, the stored content is replicated to a secondary datacenter to enable failover to that location in case of a major disaster in the primary location. Geo-replication can incur additional costs. For test and development accounts, you generally don't want to pay for geo-replication. For more information, see [Create, manage, or delete a storage account](../storage-create-storage-account/#replication-options).
5. Click **Create**.

1. In the Visual Studio **Azure Website** window, click the **Logs** tab, and then click **Configure Logging in Management Portal**.

This opens the **Configure** tab in the management portal for your website. Another way to get here is to click the **Websites** tab, click your website, and then click the **Configure** tab.
2. In the management portal **Configure** tab, scroll down to the application diagnostics section, and then change **Application Logging (Table Storage)** to **On**.
3. Change **Logging Level** to **Information**.
4. Click **Manage Table Storage**.

In the **Manage table storage for application diagnostics** box, you can choose your storage account if you have more than one. You can create a new table or use an existing one.

6. In the **Manage table storage for application diagnostics** box click the check mark to close the box.
6. In the management portal **Configure** tab, click **Save**.
7. In the browser window that displays the application website, click **Home**, then click **About**, and then click **Contact**.
The logging information produced by browsing these web pages will be written to the storage account.
8. In the **Logs** tab of the **Azure Website** window in Visual Studio, click **Refresh** under **Diagnostic Summary**.

The **Diagnostic Summary** section shows logs for the last 15 minutes by default. You can change the period to see more logs.
(If you get a "table not found" error, verify that you browsed to the pages that do the tracing after you enabled **Application Logging (Storage)** and after you clicked **Save**.)

Notice that in this view you see **Process ID** and **Thread ID** for each log, which you don't get in the file system logs. You can see additional fields by viewing the Azure storage table directly.
8. Click **View all application logs**.
The trace log table appears in the Azure storage table viewer.
(If you get a "sequence contains no elements" error, open **Server Explorer**, expand the node for your storage account under the **Azure** node, and then right-click **Tables** and click **Refresh**.)


This view shows additional fields you don't see in any other views. This view also enables you to filter logs by using special Query Builder UI for constructing a query. For more information, see Working with Table Resources - Filtering Entities in [Browsing Storage Resources with Server Explorer](http://msdn.microsoft.com/en-us/library/windowsazure/ff683677.aspx).
7. To look at the details for a single row, double-click one of the rows.

<h2><a name="failedrequestlogs"></a>View failed request tracing logs</h2>
Failed request tracing logs are useful when you need to understand the details of how IIS is handling an HTTP request, in scenarios such as URL rewriting or authentication problems.
Azure Websites use the same failed request tracing functionality that has been available with IIS 7.0 and later. You don't have access to the IIS settings that configure which errors get logged, however. When you enable failed request tracing, all errors are captured.
You can enable failed request tracing by using Visual Studio, but you can't view them in Visual Studio. These logs are XML files. The streaming log service only monitors files that are deemed readable in plain text mode: *.txt*, *.html*, and *.log* files.
You can view failed request tracing logs in a browser directly via FTP or locally after using an FTP tool to download them to your local computer. In this section you'll view them in a browser directly.
1. In the **Configuration** tab of the **Azure Website** window that you opened from **Server Explorer**, change **Failed Request Tracing** to **On**, and then click **Save**.

4. In the address bar of the browser window that shows the website, add an extra character to the URL and click Enter to cause a 404 error.
This causes a failed request tracing log to be created, and the following steps show how to view or download the log.
2. In Visual Studio, in the **Configuration** tab of the **Azure Website** window, click **Open in Management Portal**.
3. In the management portal, click **Dashboard**, and then click **Reset your deployment credentials** in the **Quick Glance** section.

4. Enter a new user name and password.

5. In the management portal **Dashboard** tab press F5 to refresh the page, and then scroll down to where you see **Deployment / FTP User**. Notice that the user name has the site name prefixed to it. **When you log in, you have to use this full user name with the site name prefixed to it as shown here.**
5. In a new browser window, go to the URL that is shown under **FTP Host Name** in the **Dashboard** tab of the management portal page for your website. **FTP Host Name** is located near **Deployment / FTP User** in the **Quick Glance** section.
6. Log in using the FTP credentials that you created earlier (including the site name prefix for the user name).
The browser shows the root folder of the site.
6. Open the *LogFiles* folder.

7. Open the folder that is named W3SVC plus a numeric value.

The folder contains XML files for any errors that have been logged after you enabled failed request tracing, and an XSL file that a browser can use to format the XML.

8. Click the XML file for the failed request that you want to see tracing information for.
The following illustration shows part of the tracing information for a sample error.

<h2><a name="nextsteps"></a>Next Steps</h2>
You've seen how Visual Studio makes it easy to view logs created by an Azure Website. The following sections provide links to more resources on related topics:
* Azure Website troubleshooting
* Debugging in Visual Studio
* Remote debugging in Azure
* Tracing in ASP.NET applications
* Analyzing web server logs
* Analyzing failed request tracing logs
* Debugging Cloud Services
### Azure Website troubleshooting
For more information about troubleshooting Azure Websites (WAWS), see the following resources:
* [How to Monitor Web Sites](/en-us/manage/services/web-sites/how-to-monitor-websites/)
* [Investigating Memory Leaks in Azure Web Sites with Visual Studio 2013](http://blogs.msdn.com/b/visualstudioalm/archive/2013/12/20/investigating-memory-leaks-in-azure-web-sites-with-visual-studio-2013.aspx). Microsoft ALM blog post about Visual Studio features for analyzing managed memory issues.
* [Windows Azure Websites online tools you should know about](/blog/2014/03/28/windows-azure-websites-online-tools-you-should-know-about-2/). Blog post by Amit Apple.
For help with a specific troubleshooting question, start a thread in one of the following forums:
* [The Azure forum on the ASP.NET site](http://forums.asp.net/1247.aspx/1?Azure+and+ASP+NET).
* [The Azure forum on MSDN](http://social.msdn.microsoft.com/Forums/windowsazure/).
* [StackOverflow.com](http://www.stackoverflow.com).
### Debugging in Visual Studio
For more information about how to use debug mode in Visual Studio, see the [Debugging in Visual Studio](http://msdn.microsoft.com/en-us/library/vstudio/sc65sadd.aspx) MSDN topic and [Debugging Tips with Visual Studio 2010](http://weblogs.asp.net/scottgu/archive/2010/08/18/debugging-tips-with-visual-studio-2010.aspx).
### Remote debugging in Azure
For more information about remote debugging for Azure Websites and WebJobs, see the following resources:
* [Introduction to Remote Debugging on Azure Web Sites](/blog/2014/05/06/introduction-to-remote-debugging-on-azure-web-sites/).
* [Introduction to Remote Debugging Azure Web Sites part 2 - Inside Remote debugging](/blog/2014/05/07/introduction-to-remote-debugging-azure-web-sites-part-2-inside-remote-debugging/)
* [Introduction to Remote Debugging on Azure Web Sites part 3 - Multi-Instance environment and GIT](/blog/2014/05/08/introduction-to-remote-debugging-on-azure-web-sites-part-3-multi-instance-environment-and-git/)
* [WebJobs Debugging (video)](https://www.youtube.com/watch?v=ncQm9q5ZFZs&list=UU_SjTh-ZltPmTYzAybypB-g&index=1)
If your website uses an Azure Web API or Mobile Services back-end and you need to debug that, see [Debugging .NET Backend in Visual Studio](http://blogs.msdn.com/b/azuremobile/archive/2014/03/14/debugging-net-backend-in-visual-studio.aspx).
### Tracing in ASP.NET applications
There are no thorough and up-to-date introductions to ASP.NET tracing available on the Internet. The best you can do is get started with old introductory materials written for Web Forms because MVC didn't exist yet, and supplement that with newer blog posts that focus on specific issues. Some good places to start are the following resources:
* [Monitoring and Telemetry (Building Real-World Cloud Apps with Azure)](http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/monitoring-and-telemetry).<br>
E-book chapter with recommendations for tracing in Azure cloud applications.
* [ASP.NET Tracing](http://msdn.microsoft.com/en-us/library/ms972204.aspx)<br/>
Old but still a good resource for a basic introduction to the subject.
* [Trace Listeners](http://msdn.microsoft.com/en-us/library/4y5y10s7.aspx)<br/>
Information about trace listeners but doesn't mention the [WebPageTraceListener](http://msdn.microsoft.com/en-us/library/system.web.webpagetracelistener.aspx).
* [Walkthrough: Integrating ASP.NET Tracing with System.Diagnostics Tracing](http://msdn.microsoft.com/en-us/library/b0ectfxd.aspx)<br/>
This too is old, but includes some additional information that the introductory article doesn't cover.
* [Tracing in ASP.NET MVC Razor Views](http://blogs.msdn.com/b/webdev/archive/2013/07/16/tracing-in-asp-net-mvc-razor-views.aspx)<br/>
Besides tracing in Razor views, the post also explains how to create an error filter in order to log all unhandled exceptions in an MVC application; a minimal sketch of such a filter appears after this list. For information about how to log all unhandled exceptions in a Web Forms application, see the Global.asax example in [Complete Example for Error Handlers](http://msdn.microsoft.com/en-us/library/bb397417.aspx) on MSDN. In either MVC or Web Forms, if you want to log certain exceptions but let the default framework handling take effect for them, you can catch and rethrow as in the following example:
try
{
// Your code that might cause an exception to be thrown.
}
catch (Exception ex)
{
Trace.TraceError("Exception: " + ex.ToString());
throw;
}
* [Streaming Diagnostics Trace Logging from the Azure Command Line (plus Glimpse!)](http://www.hanselman.com/blog/StreamingDiagnosticsTraceLoggingFromTheAzureCommandLinePlusGlimpse.aspx)<br/>
How to use the command line to do what this tutorial shows how to do in Visual Studio. [Glimpse](http://www.hanselman.com/blog/IfYoureNotUsingGlimpseWithASPNETForDebuggingAndProfilingYoureMissingOut.aspx) is a tool for debugging ASP.NET applications.
* [Using Azure Web Site Logging and Diagnostics - with David Ebbo](http://www.windowsazure.com/en-us/documentation/videos/azure-web-site-logging-and-diagnostics/) and [Streaming Logs from Azure Web Sites - with David Ebbo](http://www.windowsazure.com/en-us/documentation/videos/log-streaming-with-azure-web-sites/)<br>
Videos by Scott Hanselman and David Ebbo.
For error logging, an alternative to writing your own tracing code is to use an open-source logging framework such as [ELMAH](http://nuget.org/packages/elmah/). For more information, see [Scott Hanselman's blog posts about ELMAH](http://www.hanselman.com/blog/NuGetPackageOfTheWeek7ELMAHErrorLoggingModulesAndHandlersWithSQLServerCompact.aspx).
Also, note that you don't have to use ASP.NET or System.Diagnostics tracing if you want to get streaming logs from Azure. The Azure Website streaming log service will stream any *.txt*, *.html*, or *.log* file that it finds in the *LogFiles* folder. Therefore, you could create your own logging system that writes to the file system of the website, and your file will be automatically streamed and downloaded. All you have to do is write application code that creates files in the *d:\home\logfiles* folder.
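As an illustration, here is a minimal sketch of that approach (the helper name, file name, and message format are arbitrary choices, not part of the Azure contract; assumes `using System;` and `using System.IO;`):

```csharp
// Appends a timestamped line to a .log file in the folder that the
// Azure Website streaming log service watches.
public static void LogToFile(string message)
{
    // HOME is set by Azure Websites (D:\home); fall back to the current folder locally.
    string home = Environment.GetEnvironmentVariable("HOME") ?? ".";
    string folder = Path.Combine(home, "LogFiles");
    Directory.CreateDirectory(folder);
    string path = Path.Combine(folder, "custom-app.log");
    File.AppendAllText(path, DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
}
```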
### Analyzing web server logs
For more information about analyzing web server logs, see the following resources:
* [LogParser](http://www.microsoft.com/en-us/download/details.aspx?id=24659)<br/>
A tool for viewing data in web server logs (*.log* files).
* [Troubleshooting IIS Performance Issues or Application Errors using LogParser](http://www.iis.net/learn/troubleshoot/performance-issues/troubleshooting-iis-performance-issues-or-application-errors-using-logparser)<br/>
An introduction to the Log Parser tool that you can use to analyze web server logs.
* [Blog posts by Robert McMurray on using LogParser](http://blogs.msdn.com/b/robert_mcmurray/archive/tags/logparser/)<br/>
* [The HTTP status code in IIS 7.0, IIS 7.5, and IIS 8.0](http://support.microsoft.com/kb/943891)
### Analyzing failed request tracing logs
The Microsoft TechNet website includes a [Using Failed Request Tracing](http://www.iis.net/learn/troubleshoot/using-failed-request-tracing) section which may be helpful for understanding how to use these logs. However, this documentation focuses mainly on configuring failed request tracing in IIS, which you can't do in Azure Websites.
### Debugging Cloud Services
If you want to debug an Azure Cloud Service rather than a Website, see [Debugging Cloud Services](http://msdn.microsoft.com/en-us/library/windowsazure/ee405479.aspx).
| 68.205584 | 697 | 0.752223 | eng_Latn | 0.97839 |
4756ad58800ce238f5032c425d8c47af623951fb | 1,424 | md | Markdown | introduction/navigators/filtering/filter-editor.md | ErpNetDocs/winclient | 2740b35c7b478e2afcacba32f1412dcbc97f1aba | [
"CC-BY-4.0"
] | null | null | null | introduction/navigators/filtering/filter-editor.md | ErpNetDocs/winclient | 2740b35c7b478e2afcacba32f1412dcbc97f1aba | [
"CC-BY-4.0"
] | null | null | null | introduction/navigators/filtering/filter-editor.md | ErpNetDocs/winclient | 2740b35c7b478e2afcacba32f1412dcbc97f1aba | [
"CC-BY-4.0"
] | 8 | 2020-12-14T10:22:48.000Z | 2021-03-29T09:27:01.000Z |
# Filter editor
You can use the <b>filter editor</b> in the navigators of @@winclientfull to refine the filter you are using to limit the number of records shown. In order to use it, you must first apply a filter to the data. To bring up the filter editor, click the row with the filter description and then click Edit Filter.

In the filter editor, specify the conditions that have to be true for a record to appear in the results. Conditions can be defined for every field.

Every condition consists of a field name and a value (or another field) to compare against. <br>
Edit the filters by using the following buttons:
-  - add a condition;
-  - edit the comparison field or add a value;
-  - remove the filter;
-  - change the field;
-  - change the condition.
**Combining conditions:** <br>
You can combine conditions with the keywords AND and OR. If you want the results to satisfy both conditions simultaneously (read it with the connective word “<b>and</b>”), use <b>AND</b>. If you want to see results that satisfy either one of the two conditions (read it with the connective word “<b>or</b>”), use <b>OR</b>.
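For instance, a combined filter might read like this (a hypothetical condition tree; the field names are placeholders, not actual @@winclientfull fields):

```
(Status = 'Open') AND ((Amount > 100) OR (Priority = 'High'))
```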
| 64.727273 | 316 | 0.747893 | eng_Latn | 0.997523 |
4756d257cfd601da8daf12f4f37b49e8528f7df3 | 32 | md | Markdown | README.md | akshah/iodb | 80fbad1cb639e2cad304d6565cf4918ee5b4e4c0 | [
"Apache-2.0"
] | null | null | null | README.md | akshah/iodb | 80fbad1cb639e2cad304d6565cf4918ee5b4e4c0 | [
"Apache-2.0"
] | null | null | null | README.md | akshah/iodb | 80fbad1cb639e2cad304d6565cf4918ee5b4e4c0 | [
"Apache-2.0"
] | null | null | null |
# iodb
Internet Outage Database
| 10.666667 | 24 | 0.8125 | kor_Hang | 0.80358 |
47571e7af3ca4de55c0f3c1d5d99c4408265dd63 | 1,968 | md | Markdown | src/ta/2020-01/09/07.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/ta/2020-01/09/07.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/ta/2020-01/09/07.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z |
---
title: Further Study
date: 28/02/2020
---
`Question: The sequence of the kingdoms that we saw in Daniel 2, 7, and 8 is laid out in the table below. What does it help us understand about the cleansing of the sanctuary?`
|Daniel 2|Daniel 7|Daniel 8|
|---|---|---|
|Babylon|Babylon|-----|
|Medo-Persia|Medo-Persia|Medo-Persia|
|Greece|Greece|Greece|
|Pagan Rome|Pagan Rome|Pagan Rome|
|Papal Rome|Papal Rome|Papal Rome|
|-----|Judgment in heaven|Cleansing of the sanctuary|
|Second Coming|Second Coming|-----|
These chapters are related to one another. The same nations appear in them. Moreover, Daniel 8 links the cleansing of the sanctuary to the heavenly judgment scene of Daniel 7, which comes after the 1,260-year period of persecution by papal Rome. The judgment spoken of in Daniel 7, the judgment that leads to the end of the world, is what Daniel 8 calls the cleansing of the sanctuary. The same event is described here in two different ways. Both take place after the 1,260-year period of persecution by the little-horn power.
**Discussion Questions**
`1 How does the table above help us understand that the judgment in Daniel 7 is the cleansing of the sanctuary, and that it is set up after the 1,260-year prophetic period of the little horn but before God's kingdom is finally established?`
`2 The prophecy of Daniel 8 points to history as being full of violence and evil. The beasts that represent two world empires fight each other (Dan. 8:8-12). The little-horn power that rises after them is a cruel, persecuting power (Dan. 8:23-25). Thus, the Bible makes no attempt to hide the suffering found in this world. Even though evil surrounds us, what can we do to remain confident in God and His goodness?`
47572a58ee85480027455ef3af6011e2950bb82e | 6,118 | md | Markdown | _extras/guide.md | mfernandes61/wrangling-genomics | 83b4f9c15756030121e40f32c1b920daaba5b36d | [
"CC-BY-4.0"
] | null | null | null | _extras/guide.md | mfernandes61/wrangling-genomics | 83b4f9c15756030121e40f32c1b920daaba5b36d | [
"CC-BY-4.0"
] | null | null | null | _extras/guide.md | mfernandes61/wrangling-genomics | 83b4f9c15756030121e40f32c1b920daaba5b36d | [
"CC-BY-4.0"
] | null | null | null |
---
layout: page
title: "Instructor Notes"
---
# Instructor Notes for Wrangling Genomics
## Issues with Macs vs Windows
This lesson currently uses the `open` command to view FastQC output in the learner's local browser. The `open` command is great for Macs, but there is currently no command listed in the lesson that works on Windows. The `explore` command may be useful here. If a solution is found, it's worth adding to the lesson.
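One option to suggest to Windows learners is the sketch below (an assumption on our part, not an official part of the lesson; `explorer.exe` is reachable from Git Bash and opens the file with the default browser):

~~~
explorer.exe SRR2584863_1_fastqc.html
~~~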
## SAMtools or IGV?
Some instructors choose to use SAMtools tview for visualization of variant calling results, while others prefer IGV. SAMtools is the default because installing IGV can take up additional instruction time, and SAMtools tview is sufficient to visualize results. However, episode 04-variant_calling includes instructions for installing and using IGV.
## Commands with Lengthy Run Times
#### Raw Data Downloads
The fastq files take about 15 minutes to download. This would be a good time to discuss the overall workflow of this lesson as illustrated by the graphic integrated on the page. It is recommended to start this lesson with the commands to create and move into the /data/untrimmed-fastq directory and begin the download, and while files download, cover the "Bioinformatics Workflows" and "Starting with Data" texts. Beware that the last fastq file in the list takes the longest to download (~6-8 mins).
#### Running FastQC
The FastQC analysis on all raw reads takes about 10 minutes to run. It is a good idea to have learners start this command and cover the FastQC background material and images while FastQC runs.
#### Trimmomatic
The trimmomatic for loop will take about 10 minutes to run. Perhaps this would be a good time for a coffee break or a discussion about trimming.
#### bcftools mpileup
The bcftools mpileup command will take about 5 minutes to run. It is:
~~~
bcftools mpileup -O b -o results/bcf/SRR2584866_raw.bcf \
-f data/ref_genome/ecoli_rel606.fasta results/bam/SRR2584866.aligned.sorted.bam
~~~
## Commands that must be modified
There are several commands in the lesson that serve only as examples and will not run correctly if copied and pasted directly into the terminal. They need to be modified to fit each user. There is text around the commands outlining how they need to be changed, but it's helpful to be aware of them ahead of time as an instructor so you can set them up properly.
#### scp Command to Download FastQC to local machines
In the FastQC section, learners will download FastQC output files in order to open `.html` summary files on their local machines in a web browser. The scp command currently contains a public DNS (for example, `ec2-34-238-162-94.compute-1.amazonaws.com`), but this will need to be replaced with the public DNS of the machine used by each learner. The Public DNS for each learner will be the same one they use to log in. The password is again `data4Carp`.
Command as is:
~~~
scp dcuser@ec2-34-238-162-94.compute-1.amazonaws.com:~/dc_workshop/results/fastqc_untrimmed_reads/*.html ~/Desktop/fastqc_html
~~~
Command for learners to use:
~~~
scp dcuser@<Public_DNS>:~/dc_workshop/results/fastqc_untrimmed_reads/*.html ~/Desktop/fastqc_html
~~~
#### The unzip for loop
The for loop to unzip FastQC output will not work if copied and pasted directly:
~~~
$ for filename in *.zip
> do
> unzip $filename
> done
~~~
This is because the `>` symbol will cause a syntax error when copied. The command will work correctly when typed at the command line! Learners may be surprised that a for loop takes multiple lines on the terminal.
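If it helps, you can also show learners the equivalent single-line form of the same loop, which can be copied and pasted safely:

~~~
for filename in *.zip; do unzip $filename; done
~~~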
#### open in View FastQC Results
The command `open SRR2584863_1_fastqc.html` will present an error in the lesson to illustrate that the AWS instance cannot run the open command because it has no web browser. There is text around this command explaining this issue.
#### unzip in Working with FastQC Output
The command `unzip *.zip` in the Working with FastQC Output section will run successfully for the first file, but fail for subsequent files. This error introduces the need for a for loop.
#### Example Trimmomatic Command
The first trimmomatic command serves as an explanation of trimmomatic parameters and is not meant to be run. The command is:
~~~
$ trimmomatic PE -threads 4 SRR_1056_1.fastq SRR_1056_2.fastq \
SRR_1056_1.trimmed.fastq SRR_1056_1un.trimmed.fastq \
SRR_1056_2.trimmed.fastq SRR_1056_2un.trimmed.fastq \
ILLUMINACLIP:SRR_adapters.fa SLIDINGWINDOW:4:20
~~~
The correct syntax is outlined in the next section, Running Trimmomatic.
#### Actual Trimmomatic Command
The actual trimmomatic command is a complicated for loop. It will need to be typed out by learners because the `>` symbols will raise an error if copied and pasted.
For reference, this command is:
~~~
$ for infile in *_1.fastq.gz
> do
> base=$(basename ${infile} _1.fastq.gz)
> trimmomatic PE ${infile} ${base}_2.fastq.gz \
> ${base}_1.trim.fastq.gz ${base}_1un.trim.fastq.gz \
> ${base}_2.trim.fastq.gz ${base}_2un.trim.fastq.gz \
> SLIDINGWINDOW:4:20 MINLEN:25 ILLUMINACLIP:NexteraPE-PE.fa:2:40:15
> done
~~~
#### bwa mem Example Command
The first bwa mem command is an example and is not meant to be run. It is:
~~~
# bwa mem ref_genome.fasta input_file_R1.fastq input_file_R2.fastq > output.sam
~~~
The correct command follows:
~~~
$ bwa mem data/ref_genome/ecoli_rel606.fasta data/trimmed_fastq_small/SRR2584866_1.trim.sub.fastq data/trimmed_fastq_small/SRR2584866_2.trim.sub.fastq > results/sam/SRR2584866.aligned.sam
~~~
#### The Automation Episode
The code blocks at the beginning of the automation episode (05-automation.md) are examples of for loops and scripts and are not meant to be run by learners. The first code chunks that should be run are under Analyzing Quality with FastQC.
Also, after the first code chunk that is meant to be run, there is a line that reads only `read_qc.sh`, which will yield a message saying that this command wasn't found. Once the script has been created, this command will run it.
| 54.141593 | 497 | 0.764629 | eng_Latn | 0.996981 |
475741b16e6fca8c5e5e4cfac6731071d04b909b | 5,384 | md | Markdown | _posts/2019-12-25-kastraciya-sobaki-peremeny-v-povedenii-pitomca.md | hubuhub/pznat | c7841df7c13815935c12ff90c8e446faee903a65 | [
"MIT"
] | null | null | null | _posts/2019-12-25-kastraciya-sobaki-peremeny-v-povedenii-pitomca.md | hubuhub/pznat | c7841df7c13815935c12ff90c8e446faee903a65 | [
"MIT"
] | 1 | 2021-03-29T23:00:37.000Z | 2021-03-29T23:00:37.000Z | _posts/2019-12-25-kastraciya-sobaki-peremeny-v-povedenii-pitomca.md | hubuhub/pznat | c7841df7c13815935c12ff90c8e446faee903a65 | [
"MIT"
] | null | null | null |
---
title: "Кастрация собаки - перемены в поведении питомца и правила ухода"
metadate: "hide"
categories: [ Home, Animals ]
image: "/assets/images/sobaka-posle-kastracii.jpg"
---
Castration is an operation recommended for all pets that are not used for breeding, carry no breeding value, and do not have a show career.
If you want to have your dog castrated, be sure to find a good veterinarian who will assess how safe and appropriate such an operation is in your particular case (it may be refused because of age, chronic diseases, and so on).
For example, you can book a dog castration at an affordable price here: [https://doctorpanda24.ru/sterilizatsiya-sobaki](https://doctorpanda24.ru/sterilizatsiya-sobaki).
From our article you will learn how a dog behaves after castration, how to prepare, and what may change in your pet's behavior after the operation.
## How a dog changes after castration
First of all, the dog will no longer be interested in the opposite sex. He will still play with them, but he will no longer show mating drive. This removes many problems:
* aggression connected with competing for a mate disappears;
* the dog stops running away (in search of a partner);
* he will no longer mark territory at home (if he did so before).
At the same time, castration does not affect breed qualities. A guard dog will not become a coward and will still protect its territory from strangers, a herding dog will keep helping you, and a hunting dog can still be used for its intended purpose.
## How to care for a dog after castration
Remember that your animal has undergone serious surgery, so treat it as gently and carefully as possible. Your pet needs your care, help, and affection. The first day is the hardest and largely determines how things will develop from there.
For the first 24 hours the dog will be disoriented and may urinate under itself. If the pet whines, it means that something hurts and it needs a painkiller. Make the dog a comfortable bed on the floor. Do not place it up high, as the animal may fall. Choose the most comfortable spot, away from drafts and heating appliances.
Give the animal as much rest as possible (keep it away from children and other pets). Check its pulse, breathing, and mucous membranes every half hour. If the dog does not drink well on its own, you can give it water from a syringe without a needle.
### Caring for the stitches
Throughout the rehabilitation period, keep an eye on the pet's stitches. In some cases swelling and puffiness are possible. The veterinarian will give recommendations on caring for the stitches, and they must be followed.
You may have to bring the animal to the clinic every day to have the stitches treated. In males the recovery process is much easier than in females. As a rule, the stitches are removed on day 10-14. They can also be removed earlier if the pet's condition is stable and there are indications for it.
### When you can walk the dog after castration
Everything is individual, and your veterinarian will give you precise recommendations. If the stitches have healed and the animal is lively, asks to go outside, and the doctor does not forbid taking it out, then of course you can go for a walk. Otherwise, it is better to let the pet relieve itself on a pad indoors.
### When you can bathe the dog after castration
If the dog's paws or coat get dirty, you can carefully wash just that part of the body. You will be able to bathe your pet fully once the wound has healed completely.
### What to feed the dog after castration, and when
During the first day, only plenty of water is needed. The animal is unlikely to ask for food, as it will be very weak. You should not give food even if the dog asks for it: the swallowing reflex will still be very weak, so pieces of food may get into the bronchi, leading to a lack of oxygen and suffocation.
On the second day you can give a little liquid or pureed food. Do not worry if at first the pet refuses food or eats little; such a reaction is possible.
Later on, it is best to use a special food for neutered animals or follow a diet with a reduced amount of high-calorie products.
### How long the dog should wear a collar after castration
The period is set individually by the veterinarian and can be up to two weeks. It depends on many factors, above all on the animal's condition and how quickly the stitches heal.
### How long it takes a dog to come out of anesthesia after castration
No veterinarian can give an exact answer. It depends on the type of anesthesia used and on the individual characteristics of the animal's body. Most animals tolerate inhalation anesthesia well; combined or intravenous anesthesia may also be used.
Watch your pet's health carefully. If any of the following symptoms appear, contact a veterinarian immediately:
* labored breathing;
* strong trembling of the body;
* wheezing in the chest;
* an uneven pulse;
* a very high or very low temperature;
* mucous membranes that have become very pale.
The best solution is to leave the animal at the clinic for a few hours after the operation, so that it stays under the supervision of specialists.
Now you know what happens to a dog after castration and can prepare for the operation. Remember, care and attention are very important for the animal at this time, and its condition largely depends on how you look after it during the rehabilitation period.
| 73.753425 | 358 | 0.817608 | rus_Cyrl | 0.996044 |
475780461b7343f70726aedfd49156536a9ab0f3 | 2,196 | md | Markdown | README.md | MaineC/elasticsearch-query-templates | bfaf5551a90c70329cc60a0752793297a48a7136 | [
"Apache-2.0"
] | null | null | null | README.md | MaineC/elasticsearch-query-templates | bfaf5551a90c70329cc60a0752793297a48a7136 | [
"Apache-2.0"
] | null | null | null | README.md | MaineC/elasticsearch-query-templates | bfaf5551a90c70329cc60a0752793297a48a7136 | [
"Apache-2.0"
] | 1 | 2020-02-11T06:12:57.000Z | 2020-02-11T06:12:57.000Z |
Query templating plugin for ElasticSearch
==================================
Installation
------------
To install the latest master, clone the repository and run:

```
mvn package
mkdir $ES_DIR/plugins/elasticsearch-query-templates
cp target/releases/elasticsearch-query-templates-1.0-SNAPSHOT.zip $ES_DIR/plugins/elasticsearch-query-templates
unzip $ES_DIR/plugins/elasticsearch-query-templates/elasticsearch-query-templates.zip
```
Usage
-----
In the simplest case, submit both `template_string` and `template_vars` as part of
your search request:
```json
GET _search
{
"query": {
"template": {
"template_string": "{\"match_{{template}}\": {}}\"",
"template_vars" : {
"template" : "all"
}
}
}
}
```
You register a template by storing it in the conf/scripts directory of
Elasticsearch. To execute the stored template, send a request like so:
```json
GET _search
{
"query": {
"template": {
"template_string": "storedTemplate",
"template_vars" : {
"template" : "all"
}
}
}
}
```
Template language
-----------------
Templating is based on Mustache. Some simple usage examples:
Substitution of tokens:
```json
"template_string": "{\"match_{{template}}\": {}}\"",
"template_vars" : {
"template" : "all"
```
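For clarity, once Mustache substitutes the variables, the example above expands to the following query body (shown here for illustration):

```json
{"match_all": {}}
```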
License
-------
Licensed to Elasticsearch under one or more contributor
license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright
ownership. Elasticsearch licenses this file to you under
the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
| 23.115789 | 115 | 0.660291 | eng_Latn | 0.905304 |
4757b3c099d842d9969bae4f3f7062deac772260 | 477 | md | Markdown | CHANGELOG.md | slai11/goto | 5d6d60ddf0d094e2ac1e08f4e4847ce403242891 | [
"MIT"
] | 32 | 2020-04-25T10:16:20.000Z | 2022-03-12T11:41:12.000Z | CHANGELOG.md | slai11/goto | 5d6d60ddf0d094e2ac1e08f4e4847ce403242891 | [
"MIT"
] | 8 | 2020-04-21T01:32:10.000Z | 2020-08-06T22:13:52.000Z | CHANGELOG.md | slai11/goto | 5d6d60ddf0d094e2ac1e08f4e4847ce403242891 | [
"MIT"
] | 3 | 2020-08-06T01:46:29.000Z | 2021-02-19T21:22:11.000Z |
# CHANGELOG
## `v0.3.0`
- Search considers sparsity of search term-alias subsequence.
## `v0.2.4`
- Clean up error message.
- Refine tree printing.
## `v0.2.3`
- Fix `rm` bug in cli.
## `v0.2.2`
- Fix db initialisation process.
## `v0.2.1`
- Add space to tree print.
- Add demo screenshot to docs.
## `v0.2.0`
- Fix bug of clashing aliases.
- An `rm` feature to delete aliases.
- Lists the index in tree format.
## `v0.1.0`
- Initial implementation with simple fuzzy jump-to.
| 17.666667 | 61 | 0.66457 | eng_Latn | 0.767458 |
47583043a7a8a93454de2f30e8c059e83ea7f2a8 | 1,638 | md | Markdown | AlchemyInsights/commercial-dialogues/assign-an-audit-log-role-in-office-365-security-and-compliance-center.md | isabella232/OfficeDocs-AlchemyInsights-pr.id-ID | a378cb115ca9ee2ef20ad097a08471f925d505a9 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-05-19T19:06:50.000Z | 2021-03-06T00:35:09.000Z | AlchemyInsights/commercial-dialogues/assign-an-audit-log-role-in-office-365-security-and-compliance-center.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.id-ID | 95d1cef182a766160dba451d9c5027f04a6dbe06 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:33:41.000Z | 2022-02-09T07:00:30.000Z | AlchemyInsights/commercial-dialogues/assign-an-audit-log-role-in-office-365-security-and-compliance-center.md | isabella232/OfficeDocs-AlchemyInsights-pr.id-ID | a378cb115ca9ee2ef20ad097a08471f925d505a9 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:33:21.000Z | 2021-10-09T10:42:11.000Z |
---
title: Assign an audit log role in the Office 365 Security & Compliance Center
ms.author: v-smandalika
author: v-smandalika
manager: dansimp
ms.date: 02/21/2021
audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "7363"
- "9000722"
ms.openlocfilehash: 0eb470b6c17def5517db2f866ef40898b36662ed
ms.sourcegitcommit: 6312ee31561db36104f32282d019d069ede69174
ms.translationtype: MT
ms.contentlocale: id-ID
ms.lasthandoff: 03/11/2021
ms.locfileid: "50746170"
---
# <a name="assign-an-audit-log-role-in-the-office-365-security--compliance-center"></a>Menetapkan peran log audit di pusat kepatuhan & keamanan Office 365
Untuk mencari log audit Office 365, administrator harus ditetapkan peran **log audit hanya Tampilkan** atau peran **log audit** di Exchange Online. Peran ini ditetapkan ke grup peran manajemen kepatuhan dan manajemen organisasi secara default. Administrator global di Office 365 dan Microsoft 365 secara otomatis ditambahkan sebagai anggota grup peran manajemen organisasi.
Untuk mengaktifkan pengguna untuk mencari dengan tingkat hak istimewa minimum, membuat grup peran kustom di Exchange Online, menambahkan peran log audit atau **log audit** **saja** , lalu menambahkan pengguna sebagai anggota grup peran baru.
Untuk informasi selengkapnya, lihat [mengelola grup peran di Exchange Online](https://docs.microsoft.com/Exchange/permissions-exo/role-groups) dan [mencari log audit di pusat kepatuhan & keamanan](https://docs.microsoft.com/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance). | 56.482759 | 373 | 0.811966 | ind_Latn | 0.897632 |
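As an illustration, the custom role group described above could be created in Exchange Online PowerShell roughly as follows (a sketch; the group name and member are arbitrary examples):

```powershell
# Create a custom role group with only the View-Only Audit Logs role,
# then add a user as a member.
New-RoleGroup -Name "Audit Log Readers" -Roles "View-Only Audit Logs"
Add-RoleGroupMember -Identity "Audit Log Readers" -Member pradeepg
```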
475897672bb95bdab2f0e7d13a9efe3e464b9371 | 2,315 | md | Markdown | docs/atl/reference/adding-an-atl-control.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/atl/reference/adding-an-atl-control.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/atl/reference/adding-an-atl-control.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Adding an ATL Control
ms.date: 11/04/2016
helpviewer_keywords:
- ATL projects, adding controls
- controls [ATL], adding to projects
ms.assetid: 10223e7e-fdb7-4163-80c6-44aeafa8e6ce
ms.openlocfilehash: 26667c2ad3bb2cedb42767fe42ff0ad358fa6d66
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/31/2018
ms.locfileid: "50487690"
---
# <a name="adding-an-atl-control"></a>Agregar un Control ATL
Use este asistente para agregar un objeto de interfaz de usuario a un proyecto que admita las interfaces para todos los contenedores potenciales. Para admitir estas interfaces, el proyecto debe se ha creado como una aplicación ATL o como una aplicación MFC con compatibilidad con ATL. Puede usar el [Asistente para proyectos ATL](../../atl/reference/atl-project-wizard.md) para crear una aplicación ATL, o bien [agregar un objeto ATL a una aplicación MFC](../../mfc/reference/adding-atl-support-to-your-mfc-project.md) para implementar la compatibilidad con ATL para una aplicación MFC.
## <a name="to-add-an-atl-control-to-your-project"></a>Para agregar un control ATL al proyecto
1. En el **el Explorador de soluciones** o [vista de clases](/visualstudio/ide/viewing-the-structure-of-code), haga clic en el nombre del proyecto al que desea agregar el objeto simple ATL.
1. Haga clic en **agregar** desde el menú contextual y, a continuación, haga clic en **Agregar clase**.
1. En el [Agregar clase](../../ide/add-class-dialog-box.md) cuadro de diálogo, en el panel Plantillas, haga clic en **Control ATL**y, a continuación, haga clic en **agregar** para mostrar el [Asistente para controles ATL](../../atl/reference/atl-control-wizard.md).
Mediante el **Asistente para controles ATL**, puede crear uno de los tres tipos de controles:
- Un control estándar
- Un control compuesto
- Un control DHTML
Además, puede reducir el tamaño del control y quitar las interfaces que no se usan por la mayoría de los contenedores seleccionando **control mínimo** en el **opciones** página del asistente.
## <a name="see-also"></a>Vea también
[Agregar funciones al control compuesto](../../atl/adding-functionality-to-the-composite-control.md)<br/>
[Aspectos básicos de los objetos ATL COM](../../atl/fundamentals-of-atl-com-objects.md)
| 56.463415 | 586 | 0.768898 | spa_Latn | 0.951649 |
4759193b2f973bc04c5300dc46f719a07582a4d0 | 2,883 | md | Markdown | README.md | AlexOAnder/LearningSite | 616c2698a9a7d1a67c9d2265b1be1e10c6be432e | [
"MIT"
] | null | null | null | README.md | AlexOAnder/LearningSite | 616c2698a9a7d1a67c9d2265b1be1e10c6be432e | [
"MIT"
] | 1 | 2016-01-15T18:18:03.000Z | 2016-01-15T18:18:03.000Z | README.md | AlexOAnder/LearningSite | 616c2698a9a7d1a67c9d2265b1be1e10c6be432e | [
"MIT"
] | null | null | null | # LearningSite
Проект образовательного сайта
Выполнен AlexOAnder
-----
Данный проект был создан для выполнения контрольной работы #1 по предмету ПСП.
Представляет собой страничку, которая поделена на функциональные области:
- Header(верхняя часть) - содержит название сайта и логотип
- Navigation (левый фрэйм) - содержит навигационные ссылки, котоыре позволяют перемещаться по контенту
- Body ( правый фрэйм) - тело сайта, который содержи в себе необходимую информацию
Сайт в своей работе использует CSS и JS. Сss проявляется в оформлении сайта - цвет фона,
кнопок, некоторые эффекты в навигационной панели. Описывается в файле /css/styles.css.
JS проявляется в обработке события нажатия кнопки на странице заказа, а также для загрузки xsl обработчика.
Может быть встроенным на самой странице, так и внешне поддгружаемым сриптом .js .
Основная часть данного задания сосредоточена на странице order.html ( кнопка "Записаться online" в навигационной панели ).
На данной странице есть несколько особенностей :
Подгрузка библиотеки/скрипта fileSave.js
--------------------
```js
<script type="text/javascript" src="../js/fileSave.js"></script>
```
Подгружает стороннюю библиотеку(скрипт) для сохранения результатов. https://github.com/eligrey/FileSaver.js
Делается это в блоке
```js
<script type="text/javascript">
```
в 43 строке. Через поиск id с помощью библиотеки Jqery мы находим
- кнопку, по которой нажали;
- Добавляем событие на нажатие ;
- Записываем в переменные значения полей ввода и передаем их сторонней библиотеке для формирования blob массива, в котором мы поместим наши данные.
Загрузка xsl/xml
--------------------
В html мы можем увидеть данную запись :
```html
<body onLoad="initXML()">
```
Она (запись) означает, что после загрузки содержимого страницы, будет вызвана функция initXML().
Cама функция представляет собой использование особенность ActiveObject для загрузки и обработки XMLDOM.
Данный метод работает исключительно в браузере InternetExplorer.
Поэтому в функции предусмотрена обработка ошибки - она выведет на экран сообщение, что xml загрузка не поддерживается
Xml/xsl
--------------------
Файлы данного формата находятся в папке /xml/.
Всего их три :
- 1.xml;
- 1a.xsl;
- 1b.xsl
Файл XML - это данные, которые будут выведены
XSL - это то, как мы должны вывести эти данные. В нашем случае, это будет таблица.
В построенной таблице есть две кликабельные ссылки - Курс и Цена. Нажатие на них приводит к вызову orderByName() и orderByCost() соответственно. Делается это подгрузкой нужного xsl файла
UML
------------------------
UML диаграмма вариантов использования

UML диаграмма последовательности

| 43.681818 | 186 | 0.769684 | rus_Cyrl | 0.952496 |
47598183ac0523b473a2f119635edd60042cc2f4 | 5,546 | md | Markdown | README.md | NikhilMJ/Micro-Grid-Power-Management | 9f9e68781c17ca33a8b2fb71be12c0e829998491 | [
"MIT"
] | 13 | 2020-02-18T12:53:59.000Z | 2022-03-15T11:45:44.000Z | README.md | NikhilMJ/Micro-Grid-Power-Management | 9f9e68781c17ca33a8b2fb71be12c0e829998491 | [
"MIT"
] | null | null | null | README.md | NikhilMJ/Micro-Grid-Power-Management | 9f9e68781c17ca33a8b2fb71be12c0e829998491 | [
"MIT"
] | 7 | 2019-12-31T13:38:27.000Z | 2022-01-28T11:27:42.000Z | # Micro Grid Power Management
This project investigates an optimal control strategy that efficiently manages an electric grid comprising of various renewable and non-renewable energy sources for a medium sized community. The grid consists of a solar farm, a conventional fossil fuel energy plant and a grid level energy storage. A community comprising of residential and commercial consumers is modeled using an expected demands. Power flow and battery storage is managed using expected power demands and expected solar production capacities. Goal is meet demand, minimize use of fossil fuels and ensure the energy storage is always maintained around a nominal point and ensure it isn't over-depleted.
This project was conceived and developed in Fall of 2014, and the motivation was to allow for a rural community to independently and efficiently manage power sources.
The project features a behavior model developed from first principles and modeled after power flows and losses is formulated. The model is hybrid in nature, i.e. consists of switched states. A hybrid model predictive control scheme is implemented for choosing an optimal mode and set of inputs for the system for tracking both a constant and load-varying power demand profile. The entire compiled report is included in the pdf included, [Micro Grid using MPC](Micro-Grid-Power-Management/Micro%20Grid%20Energy%20Optimization%20using%20MPC.pdf)
## Grid Model
The below diagram shows the overall system layout of the micro-grid. Three power plant types(explained later) are included. The power transmission lines transmit this to residential and commercial plants. The equations of the model are attached in the attached pddf.

## Power Plant
The grid model can be expanded to include behavioral models of multiple energy producer and storage types. The main characteristics required at the grid level are time constant to meet reference targets, efficiency(or inversely cost) of production. Once a power demand is set for each power plant, it is assumed that specialized control schemes built for each power plant would perform control actions required to meet the requested power levels. In this micro-grid project, the three types of power plants are considered in order to build the proof of concept:
*Traditional: diesel generator(included in model). Other examples: hydel, nuclear power plants, thermal plants
*Renewable: solar farm(included in model). Other include wind energy, tidal energy, etc.
*Storage: battery(included in mode). Others examples for storage plants are pumped hydro, molten salt, pressurized gas, etc.
The key here is to maximize the use of renewable sources and minimizing the use of the traditional sources. In a real world, constraints are added to ensure basic operation of these plants such as providing for spinning reserves, ensuring dam water levels are within maximum levels. These can all be included into the behavioral model as state bounds and cost per energy unit for each power plant can be adjusted for the control scheme to handle.
## Power Demand Curves
The diagram below is the reference power demand curve that would need to be met by the energy management unit. The power demand profile is known 24hrs in advance.

Having a reference power profile to meet 24hrs in advance is a standard model that is currently in use by energy exchanges where 15min intervals that are bid, reserved and sold 24hrs in advance. While this allows for an efficient overall strategy, near term load demands are mostly met by the surplus that is passed to the grid as a factor of safety. The proposed model should be able to efficiently manage power demands that are made known few minutes to hours in advance. Meeting such immediate demands is limited to the response rate of the fastest energy producer which in this case is the energy storage.
While the Tesla powerwall concept was not known at the time of conceiving this project, it is to be noted that the response rate of energy storage solutions based on battery technologies are in the 100msec range [See link](https://reneweconomy.com.au/speed-of-tesla-big-battery-leaves-rule-makers-struggling-to-catch-up-36135/). This allows the control scheme to optimize for and meet power demands that are not known well ahead of time and allows handling of high frequency arbitrages. A behavioral model of this type allows operation and management of a system of systems and captures enough information to allow management of the grid. Additional constraints of resilience to failures, robustness and factor of safety can be easily added to the behavioral model. Furthermore, daily weather prediction data can be fed to this behavioral model to build better long term optimization goals whereas hourly prediction data can be fed and used to provide efficient control command generation.
## Results
Kindly refer to the [pdf](Micro%20Grid%20Energy%20Optimization%20using%20MPC.pdf) above for a detailed discussion on the motivation, problem formulation, MPC setup and results. Also results of several weight balances, constraints and other limits in [results](Code/Results). Below is a sample state trajectory from solving the MPC

#### Keywords:
Energy, Model, Predictive, Control, MPC, Powerwall, grid, storage
| 135.268293 | 989 | 0.812838 | eng_Latn | 0.999555 |
4759b1f27009f2033805eba6fb6a1ae83663d2f9 | 5,586 | md | Markdown | docs/atl/reference/iconnectionpointimpl-class.md | baruchiro/cpp-docs | 6012887526a505e334e9f7ec73c5a84a59562177 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-18T12:54:41.000Z | 2021-04-18T12:54:41.000Z | docs/atl/reference/iconnectionpointimpl-class.md | Mikejo5000/cpp-docs | 4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/atl/reference/iconnectionpointimpl-class.md | Mikejo5000/cpp-docs | 4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "IConnectionPointImpl Class | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.technology: ["cpp-atl"]
ms.topic: "reference"
f1_keywords: ["IConnectionPointImpl", "ATLCOM/ATL::IConnectionPointImpl", "ATLCOM/ATL::IConnectionPointImpl::Advise", "ATLCOM/ATL::IConnectionPointImpl::EnumConnections", "ATLCOM/ATL::IConnectionPointImpl::GetConnectionInterface", "ATLCOM/ATL::IConnectionPointImpl::GetConnectionPointContainer", "ATLCOM/ATL::IConnectionPointImpl::Unadvise", "ATLCOM/ATL::IConnectionPointImpl::m_vec"]
dev_langs: ["C++"]
helpviewer_keywords: ["connection points [C++], implementing", "IConnectionPointImpl class"]
ms.assetid: 27992115-3b86-45dd-bc9e-54f32876c557
author: "mikeblome"
ms.author: "mblome"
ms.workload: ["cplusplus"]
---
# IConnectionPointImpl Class
This class implements a connection point.
## Syntax
```
template<class T, const IID* piid, class CDV = CComDynamicUnkArray>
class ATL_NO_VTABLE IConnectionPointImpl : public _ICPLocator<piid>
```
#### Parameters
`T`
Your class, derived from `IConnectionPointImpl`.
`piid`
A pointer to the IID of the interface represented by the connection point object.
`CDV`
A class that manages the connections. The default value is [CComDynamicUnkArray](../../atl/reference/ccomdynamicunkarray-class.md), which allows unlimited connections. You can also use [CComUnkArray](../../atl/reference/ccomunkarray-class.md), which specifies a fixed number of connections.
## Members
### Public Methods
|Name|Description|
|----------|-----------------|
|[IConnectionPointImpl::Advise](#advise)|Establishes a connection between the connection point and a sink.|
|[IConnectionPointImpl::EnumConnections](#enumconnections)|Creates an enumerator to iterate through the connections for the connection point.|
|[IConnectionPointImpl::GetConnectionInterface](#getconnectioninterface)|Retrieves the IID of the interface represented by the connection point.|
|[IConnectionPointImpl::GetConnectionPointContainer](#getconnectionpointcontainer)|Retrieves an interface pointer to the connectable object.|
|[IConnectionPointImpl::Unadvise](#unadvise)|Terminates a connection previously established through `Advise`.|
### Public Data Members
|Name|Description|
|----------|-----------------|
|[IConnectionPointImpl::m_vec](#m_vec)|Manages the connections for the connection point.|
## Remarks
`IConnectionPointImpl` implements a connection point, which allows an object to expose an outgoing interface to the client. The client implements this interface on an object called a sink.
ATL uses [IConnectionPointContainerImpl](../../atl/reference/iconnectionpointcontainerimpl-class.md) to implement the connectable object. Each connection point within the connectable object represents an outgoing interface, identified by `piid`. Class *CDV* manages the connections between the connection point and a sink. Each connection is uniquely identified by a "cookie."
For more information about using connection points in ATL, see the article [Connection Points](../../atl/atl-connection-points.md).
## Inheritance Hierarchy
`_ICPLocator`
`IConnectionPointImpl`
## Requirements
**Header:** atlcom.h
## <a name="advise"></a> IConnectionPointImpl::Advise
Establishes a connection between the connection point and a sink.
```
STDMETHOD(Advise)(
IUnknown* pUnkSink,
DWORD* pdwCookie);
```
### Remarks
Use [Unadvise](#unadvise) to terminate the connection call.
See [IConnectionPoint::Advise](http://msdn.microsoft.com/library/windows/desktop/ms678815) in the Windows SDK.
## <a name="enumconnections"></a> IConnectionPointImpl::EnumConnections
Creates an enumerator to iterate through the connections for the connection point.
```
STDMETHOD(EnumConnections)(IEnumConnections** ppEnum);
```
### Remarks
See [IConnectionPoint::EnumConnections](http://msdn.microsoft.com/library/windows/desktop/ms680755) in the Windows SDK.
## <a name="getconnectioninterface"></a> IConnectionPointImpl::GetConnectionInterface
Retrieves the IID of the interface represented by the connection point.
```
STDMETHOD(GetConnectionInterface)(IID* piid2);
```
### Remarks
See [IConnectionPoint::GetConnectionInterface](http://msdn.microsoft.com/library/windows/desktop/ms693468) in the Windows SDK.
## <a name="getconnectionpointcontainer"></a> IConnectionPointImpl::GetConnectionPointContainer
Retrieves an interface pointer to the connectable object.
```
STDMETHOD(GetConnectionPointContainer)(IConnectionPointContainer** ppCPC);
```
### Remarks
See [IConnectionPoint::GetConnectionPointContainer](http://msdn.microsoft.com/library/windows/desktop/ms679669) in the Windows SDK.
## <a name="m_vec"></a> IConnectionPointImpl::m_vec
Manages the connections between the connection point object and a sink.
```
CDV m_vec;
```
### Remarks
By default, `m_vec` is of type [CComDynamicUnkArray](../../atl/reference/ccomdynamicunkarray-class.md).
## <a name="unadvise"></a> IConnectionPointImpl::Unadvise
Terminates a connection previously established through [Advise](#advise).
```
STDMETHOD(Unadvise)(DWORD dwCookie);
```
### Remarks
See [IConnectionPoint::Unadvise](http://msdn.microsoft.com/library/windows/desktop/ms686608) in the Windows SDK.
## See Also
[IConnectionPoint](http://msdn.microsoft.com/library/windows/desktop/ms694318)
[Class Overview](../../atl/atl-class-overview.md)
| 41.377778 | 384 | 0.740243 | eng_Latn | 0.66179 |
4759cc7f0d219f2f47ea7e5467f9cf87204b176f | 3,483 | md | Markdown | contracts/README.md | Beovolytics-Inc/0x-monorepo | 8a20cc682cd620cc5bed3df9db7b654aa6e02dbf | [
"Apache-2.0"
] | 1,075 | 2018-03-04T13:18:52.000Z | 2022-03-29T06:33:59.000Z | contracts/README.md | Beovolytics-Inc/0x-monorepo | 8a20cc682cd620cc5bed3df9db7b654aa6e02dbf | [
"Apache-2.0"
] | 1,873 | 2018-03-03T14:37:53.000Z | 2021-06-26T03:02:12.000Z | contracts/README.md | Beovolytics-Inc/0x-monorepo | 8a20cc682cd620cc5bed3df9db7b654aa6e02dbf | [
"Apache-2.0"
] | 500 | 2018-03-03T20:39:43.000Z | 2022-03-21T21:01:55.000Z | #### Deployed Contract Packages
| Contract | Package | Version | Git Tag |
| --------------- | ------------------------------------------------------------------- | -------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| AssetProxyOwner | [`@0x/contracts-multisig`](/contracts/multisig) | [v1.0.2](https://www.npmjs.com/package/@0x/contracts-multisig/v/1.0.2) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
| ERC20Proxy | [`@0x/contracts-asset-proxy`](/contracts/asset-proxy) | [v1.0.1](https://www.npmjs.com/package/@0x/contracts-asset-proxy/v/1.0.1) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
| ERC721Proxy | [`@0x/contracts-asset-proxy`](/contracts/asset-proxy) | [v1.0.1](https://www.npmjs.com/package/@0x/contracts-asset-proxy/v/1.0.1) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
| Exchange | [`@0x/contracts-exchange`](/contracts/exchange) | [v1.0.1](https://www.npmjs.com/package/@0x/contracts-exchange/v/1.0.1) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
| DutchAuction | [`@0x/contracts-extensions`](/contracts/extensions) | [v1.0.2](https://www.npmjs.com/package/@0x/contracts-extensions/v/1.0.2) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
| Forwarder | [`@0x/contracts-exchange-forwarder`](/contracts/exchange-forwarder) | [v1.0.1](https://www.npmjs.com/package/@0x/contracts-exchange-forwarder/v/1.0.1) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
| MultiAssetProxy | [`@0x/contracts-asset-proxy`](/contracts/asset-proxy) | [v1.0.1](https://www.npmjs.com/package/@0x/contracts-asset-proxy/v/1.0.1) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
| ZRXToken | [`@0x/contracts-erc20`](/contracts/erc20) | [v1.0.1](https://www.npmjs.com/package/@0x/contracts-erc20/v/1.0.1) | [@0x/[email protected]](https://github.com/0xProject/0x-monorepo/releases/tag/@0x/[email protected]) |
#### Development
Building solidity files will update the contract artifact in `{package-name}/generated-artifacts/{contract}.json`, but does not automatically update the `contract-artifacts` or `contract-wrappers` packages, which are generated from the artifact JSON. See `contract-artifacts/README.md` for instructions on updating these packages.
| 204.882353 | 330 | 0.540626 | yue_Hant | 0.370727 |
475a1f98759839adf0c944fe90cddf5da0da4974 | 307 | md | Markdown | docs/slf4j/slf4j/org.slf4j.impl/-android-logger-adapter/is-warn-enabled.md | danbrough/util | 2a3ae340114d1cf8b96aceeddcc62a0057d4ef3e | [
"Apache-2.0"
] | null | null | null | docs/slf4j/slf4j/org.slf4j.impl/-android-logger-adapter/is-warn-enabled.md | danbrough/util | 2a3ae340114d1cf8b96aceeddcc62a0057d4ef3e | [
"Apache-2.0"
] | null | null | null | docs/slf4j/slf4j/org.slf4j.impl/-android-logger-adapter/is-warn-enabled.md | danbrough/util | 2a3ae340114d1cf8b96aceeddcc62a0057d4ef3e | [
"Apache-2.0"
] | null | null | null | //[slf4j](../../index.md)/[org.slf4j.impl](../index.md)/[AndroidLoggerAdapter](index.md)/[isWarnEnabled](is-warn-enabled.md)
# isWarnEnabled
[androidJvm]
Content
open fun [isWarnEnabled](is-warn-enabled.md)(): [Boolean](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-boolean/index.html)
| 25.583333 | 132 | 0.703583 | yue_Hant | 0.322192 |
475a50650681156a76d124e006ddf608aec38518 | 1,009 | md | Markdown | docs/v2/friendly_directives_and_sources.md | aidantwoods/SecureHeaders | 93d833156fee711b14321a835548540e261eefb5 | [
"MIT"
] | 408 | 2017-01-07T01:34:13.000Z | 2022-03-07T08:41:53.000Z | docs/supplements/friendly_directives_and_sources.md | aidantwoods/SecureHeaders | 93d833156fee711b14321a835548540e261eefb5 | [
"MIT"
] | 55 | 2016-11-11T13:00:27.000Z | 2019-03-01T23:55:09.000Z | docs/v2/friendly_directives_and_sources.md | aidantwoods/SecureHeaders | 93d833156fee711b14321a835548540e261eefb5 | [
"MIT"
] | 29 | 2016-11-11T17:20:24.000Z | 2021-11-01T01:41:33.000Z | ## Description
Friendly directives and sources are included as shorthands for some of the CSP directive and source keyword values. For example, it is possible to omit `-src` from most directives, and the surrounding single quotes from most source keywords.
The full array translation is below:
### Directives
```php
array(
'default' => 'default-src',
'script' => 'script-src',
'style' => 'style-src',
'image' => 'img-src',
'img' => 'img-src',
'font' => 'font-src',
'child' => 'child-src',
'base' => 'base-uri',
'connect' => 'connect-src',
'form' => 'form-action',
'object' => 'object-src',
'report' => 'report-uri',
'reporting' => 'report-uri'
);
```
### Sources
```php
array(
'self' => "'self'",
'none' => "'none'",
'unsafe-inline' => "'unsafe-inline'",
'unsafe-eval' => "'unsafe-eval'",
'strict-dynamic' => "'strict-dynamic'",
);
```
| 28.828571 | 241 | 0.531219 | eng_Latn | 0.893608 |
475a805275711c6319819baec56ad2cde959462f | 4,575 | md | Markdown | articles/java/learning-resources/fundamentals.md | kyliel/azure-dev-docs | 4503668ec251efe6165f4ffad72e1a11e9d7e8a3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/java/learning-resources/fundamentals.md | kyliel/azure-dev-docs | 4503668ec251efe6165f4ffad72e1a11e9d7e8a3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/java/learning-resources/fundamentals.md | kyliel/azure-dev-docs | 4503668ec251efe6165f4ffad72e1a11e9d7e8a3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Java learning path
description: Java - learning paths for Java developers.
author: sadigopu
ms.author: sreea
ms.topic: article
ms.date: 03/19/2021
ms.custom: devx-track-java
---
# Java learning path
This article provides a list of curated resources for learning Java.
## Java fundamentals
The following sections provide links to resources that can help you learn Java fundamental concepts with a hands-on approach.
### Language
- [Core Platform - Part 1](https://learning.oreilly.com/videos/core-java/9780134540603)
- [Core Platform - Part 2](https://www.linkedin.com/learning/advanced-java-programming-2/learn-advanced-java-programming?u=3322)
### IDE features
- [IDE IntelliJ Fundamentals](https://www.linkedin.com/learning/intellij-idea-community-edition-essential-training/welcome?u=3322)
- [IDE Eclipse Fundamentals](https://www.linkedin.com/learning/eclipse-essential-training/welcome?u=3322)
- [Maven Fundamentals](https://learning.oreilly.com/videos/getting-started-with/9781782165729/)
### Software development life cycle
- [JDBC and Databases](https://www.linkedin.com/learning/learning-jdbc/get-going-with-data-access-in-java?u=3322)
- [Logging](https://www.youtube.com/watch?v=oiaEP57nsmI)
- [Debugging and Testing](https://learning.oreilly.com/library/view/java-for-absolute/9781484237786/html/463938_1_En_9_Chapter.xhtml)
- [Unit testing with JUnit5](https://learning.oreilly.com/videos/introduction-to-junit/9781788292290/)
### Frameworks
- [Java EE](https://www.linkedin.com/learning/learning-java-enterprise-edition?u=3322)
- [Spring Framework](https://learning.oreilly.com/videos/spring-framework/9780133477252/)
- [Java EE vs Spring](https://www.quora.com/What-are-the-differences-between-Java-EE-and-Spring)
- [Java EE vs Spring : A Dev perspective](https://dzone.com/articles/developers-perspective-spring)
### Messaging
- [Java Message Service (JMS)](https://learning.oreilly.com/videos/enterprise-messaging-with/9781491917671/)
### Web
- [Java Web Fundamentals](https://learning.oreilly.com/videos/beginning-java-web/9781771376051/)
- [Spring MVC application](https://www.linkedin.com/learning/spring-spring-mvc-2/spring-mvc-for-robust-applications?u=3322)
- [Spring Boot - Your first app](https://www.linkedin.com/learning/learning-spring-with-spring-boot-2?u=3322)
- [Modern Web apps with Spring Boot 2.0](https://learning.oreilly.com/videos/modern-java-web/9781788993241/)
### Microservices
- [Spring Cloud Microservices](https://www.linkedin.com/learning/spring-spring-cloud-2?u=3322)
- [Mastering microservices with Spring](https://www.linkedin.com/learning/mastering-microservices-with-java?u=3322)
## Java advanced
The following sections provide links to videos that can help you learn Java advanced concepts with a hands-on approach.
### Scalability
- [Multithreading and Concurrency](https://www.linkedin.com/learning/learning-java-threads/welcome?u=3322)
- [Understanding Concurrency](https://learning.oreilly.com/playlists/d44bf7e8-56c4-415d-8d76-b621373d44ee/)
- [Optimizing Java](https://learning.oreilly.com/videos/optimizing-java/9781771374866/)
### Performance
- [Performance](/archive/blogs/azureossds/profiling-java-process-on-azure-web-apps)
- [Memory Issues](https://www.linkedin.com/learning/java-memory-management?u=3322)
- [Dump Analysis](https://www.linkedin.com/learning/java-concurrency-troubleshooting-latency-and-throughput?u=3322)
- [Testing - sampler of methods](https://learning.oreilly.com/playlists/e1ec94ab-a912-4455-b8a7-eccb024d3c55/)
## Java on Azure fundamentals
The following sections provide links to resources that can help you understand hosting options and Azure services. You can use this information to help you migrate your Java applications to Azure.
### Azure SDK
- [Java API on Azure](/java/api/overview/azure)
### Application migration
- [Host Spring Web app on Azure](/azure/app-service/quickstart-java?tabs=javase&pivots=platform-linux)
- [Authenticate with Azure](/java/api/overview/azure)
- [Monitor with AppInsights](/azure/application-insights/app-insights-java-quick-start)
### Profiling on Azure
- [Configure app for JDK Flight Recorder](/azure/app-service/configure-language-java?pivots=platform-linux#flight-recorder)
- [Profiling with New Relic](/azure/app-service/configure-language-java?pivots=platform-linux#configure-new-relic)
- [Configure New Relic for Azure Spring Cloud](https://github.com/selvasingh/spring-petclinic-microservices)
### Support on Azure
- [Java support on Azure and Azure Stack](../fundamentals/java-support-on-azure.md)
| 45.75 | 196 | 0.780109 | yue_Hant | 0.229153 |
475ae564ac9a9bb69df3becde1e93fbff5d376a7 | 5,525 | md | Markdown | docs/framework/wpf/advanced/how-to-create-a-text-decoration.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/how-to-create-a-text-decoration.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/how-to-create-a-text-decoration.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Instrukcje: Tworzenie dekoracji tekstu'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- fonts [WPF], baseline
- text [WPF], decorations
- fonts [WPF], underline
- fonts [WPF], overline
- strikethrough type [WPF]
- fonts [WPF], strikethrough
- overline type [WPF]
- underline type [WPF]
- typography [WPF], text decorations
- baseline type [WPF]
ms.assetid: cf3cb4e7-782a-4be7-b2d4-e0935e21e4e0
ms.openlocfilehash: d586eef8d1308070da38a0a54c63c3ba64d30c8b
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/23/2019
ms.locfileid: "61776636"
---
# <a name="how-to-create-a-text-decoration"></a>Instrukcje: Tworzenie dekoracji tekstu
A <xref:System.Windows.TextDecoration> obiekt jest ornamentacji visual, można dodać do tekstu. Istnieją cztery typy dekoracje tekstu: podkreślenie, linii bazowej, przekreślenia i nadkreślenia. Poniższy przykład przedstawia lokalizacje dekoracje tekstu względem tekstu.

Aby dodać dekoracji tekstu do tekstu, należy utworzyć <xref:System.Windows.TextDecoration> obiektów i modyfikowania jego właściwości. Użyj <xref:System.Windows.TextDecoration.Location%2A> właściwości w celu określenia, gdzie dekoracji tekstu pojawi się, takie jak podkreślenie. Użyj <xref:System.Windows.TextDecoration.Pen%2A> właściwości w celu określenia wyglądu dekoracji tekstu, takiego jak wypełnienia kryjącego lub kolor gradientu. Jeśli nie określisz wartości <xref:System.Windows.TextDecoration.Pen%2A> właściwości wartość domyślna dekoracje, to ten sam kolor jak tekst. Po zdefiniowaniu <xref:System.Windows.TextDecoration> obiektu, dodaj ją do <xref:System.Windows.TextDecorations> kolekcji obiektu odpowiedni tekst.
Dekorację tekstu, który został wstawiony w pędzel gradientów liniowych i Pióro przerywaną, co można znaleźć w poniższym przykładzie.

<xref:System.Windows.Documents.Hyperlink> Obiekt jest element zawartości śródwierszowy przepływ, który pozwala na hosta hiperlinki w dowolnej zawartości. Domyślnie <xref:System.Windows.Documents.Hyperlink> używa <xref:System.Windows.TextDecoration> obiektu, aby wyświetlić podkreślenie. <xref:System.Windows.TextDecoration> obiekty mogą być intensywnie do utworzenia wystąpienia, wydajność, zwłaszcza, jeśli dostępnych jest wiele <xref:System.Windows.Documents.Hyperlink> obiektów. Jeśli wprowadzisz zwiększone użycie <xref:System.Windows.Documents.Hyperlink> elementów, warto wziąć pod uwagę przedstawiający podkreślenie, tylko wtedy, gdy wyzwalanie zdarzenia, takie jak <xref:System.Windows.ContentElement.MouseEnter> zdarzeń.
W poniższym przykładzie jest dynamiczny podkreślenie dla linku "Mój MSN" — pojawia się tylko, gdy <xref:System.Windows.ContentElement.MouseEnter> zdarzenie jest wyzwalane.

Aby uzyskać więcej informacji, zobacz [określ czy hiperłącze jest podkreślone](how-to-specify-whether-a-hyperlink-is-underlined.md).
## <a name="example"></a>Przykład
W poniższym przykładzie kodu podkreślenia dekoracji tekstu używa wartość domyślna czcionka.
[!code-csharp[TextDecorationSnippets#TextDecorationSnippets1](~/samples/snippets/csharp/VS_Snippets_Wpf/TextDecorationSnippets/CSharp/Window1.xaml.cs#textdecorationsnippets1)]
[!code-vb[TextDecorationSnippets#TextDecorationSnippets1](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TextDecorationSnippets/visualbasic/window1.xaml.vb#textdecorationsnippets1)]
[!code-xaml[TextDecorationSnippets#TextDecorationSnippets1](~/samples/snippets/csharp/VS_Snippets_Wpf/TextDecorationSnippets/CSharp/Window1.xaml#textdecorationsnippets1)]
W poniższym przykładzie kodu podkreślenia Dekoracja tekstu jest tworzony z pędzel pełnego koloru pióra.
[!code-csharp[TextDecorationSnippets#TextDecorationSnippets2](~/samples/snippets/csharp/VS_Snippets_Wpf/TextDecorationSnippets/CSharp/Window1.xaml.cs#textdecorationsnippets2)]
[!code-vb[TextDecorationSnippets#TextDecorationSnippets2](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TextDecorationSnippets/visualbasic/window1.xaml.vb#textdecorationsnippets2)]
[!code-xaml[TextDecorationSnippets#TextDecorationSnippets2](~/samples/snippets/csharp/VS_Snippets_Wpf/TextDecorationSnippets/CSharp/Window1.xaml#textdecorationsnippets2)]
W poniższym przykładzie kodu dekoracyjną podkreślenie jest tworzony z pędzel gradientów liniowych kreskowane pióra.
[!code-csharp[TextDecorationSnippets#TextDecorationSnippets3](~/samples/snippets/csharp/VS_Snippets_Wpf/TextDecorationSnippets/CSharp/Window1.xaml.cs#textdecorationsnippets3)]
[!code-vb[TextDecorationSnippets#TextDecorationSnippets3](~/samples/snippets/visualbasic/VS_Snippets_Wpf/TextDecorationSnippets/visualbasic/window1.xaml.vb#textdecorationsnippets3)]
[!code-xaml[TextDecorationSnippets#TextDecorationSnippets3](~/samples/snippets/csharp/VS_Snippets_Wpf/TextDecorationSnippets/CSharp/Window1.xaml#textdecorationsnippets3)]
## <a name="see-also"></a>Zobacz także
- <xref:System.Windows.TextDecoration>
- <xref:System.Windows.Documents.Hyperlink>
- [Określanie, czy hiperlink jest podkreślony](how-to-specify-whether-a-hyperlink-is-underlined.md)
| 80.072464 | 731 | 0.823891 | pol_Latn | 0.981857 |
475b30d18493cf14d5154dfc7d4f1837b48bd46b | 118 | md | Markdown | studio/working-with-data/browse/Studio-Edit-Vertex.md | orientechnologies/orientdb-docs | ca99704389868d6f9be5cc615b2cfbb6fcb04da4 | [
"Apache-2.0"
] | 63 | 2015-03-13T17:14:11.000Z | 2022-01-07T05:24:47.000Z | studio/working-with-data/browse/Studio-Edit-Vertex.md | orientechnologies/orientdb-docs | ca99704389868d6f9be5cc615b2cfbb6fcb04da4 | [
"Apache-2.0"
] | 331 | 2015-03-14T09:33:06.000Z | 2021-05-12T17:17:21.000Z | studio/working-with-data/browse/Studio-Edit-Vertex.md | orientechnologies/orientdb-docs | ca99704389868d6f9be5cc615b2cfbb6fcb04da4 | [
"Apache-2.0"
] | 250 | 2015-03-17T06:07:12.000Z | 2022-03-21T13:57:22.000Z | ---
search:
keywords: ['Studio', 'edit vertex']
---
# Edit Vertex

| 11.8 | 45 | 0.601695 | kor_Hang | 0.506559 |
475ba4ad2f601467d897ca2047b08ba5342669f2 | 2,916 | md | Markdown | docs/modeling/multiple-dsls-in-one-solution.md | tommorris/visualstudio-docs.cs-cz | 92c436dbc75020bc5121cc2c9e4976f62c9b13ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/modeling/multiple-dsls-in-one-solution.md | tommorris/visualstudio-docs.cs-cz | 92c436dbc75020bc5121cc2c9e4976f62c9b13ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/modeling/multiple-dsls-in-one-solution.md | tommorris/visualstudio-docs.cs-cz | 92c436dbc75020bc5121cc2c9e4976f62c9b13ca | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Vícesouborové DSL v jediném řešení
ms.date: 11/04/2016
ms.topic: conceptual
author: gewarren
ms.author: gewarren
manager: douge
ms.workload:
- multiple
ms.prod: visual-studio-dev15
ms.technology: vs-ide-modeling
ms.openlocfilehash: ce16cba80962c68d2480e934e2816be4fe77ab1f
ms.sourcegitcommit: 6944ceb7193d410a2a913ecee6f40c6e87e8a54b
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 09/05/2018
ms.locfileid: "43775874"
---
# <a name="multiple-dsls-in-one-solution"></a>Vícesouborové DSL v jediném řešení
Jako součást jediného řešení můžete zabalit několik DSL, tak, že jsou nainstalovány společně.
Můžete použít několik technik k integraci vícesouborové DSL. Další informace najdete v tématu [integrace modelů pomocí Visual Studio Modelbus](../modeling/integrating-models-by-using-visual-studio-modelbus.md) a [postupy: přidání obslužné rutiny operace přetažení myší](../modeling/how-to-add-a-drag-and-drop-handler.md) a [přizpůsobení chování kopírování](../modeling/customizing-copy-behavior.md).
### <a name="to-build-more-than-one-dsl-in-the-same-solution"></a>K vytvoření více než jednom DSL ve stejném řešení
1. Vytvořte dvě nebo více řešení DSL a projekt VSIX a přidejte všechny projekty do jediného řešení.
- Chcete-li vytvořit nový projekt VSIX: V **nový projekt** dialogového okna, vyberte **Visual C#**, **rozšiřitelnost**, **projekt VSIX**.
- Vytvoření dvou nebo více řešení DSL v adresáři řešení VSIX.
Pro každý DSL otevřete novou instanci sady Visual Studio. Vytvořte nový DSL a zadejte stejnou složku řešení jako řešení VSIX.
Ujistěte se, že vytvoření každé DSL s příponou jiný název souboru.
- Změna názvů **Dsl** a **DslPackage** projekty tak, aby byly všechny různé. Příklad: `Dsl1`, `DslPackage1`, `Dsl2`, `DslPackage2`.
- V každém **DslPackage\*\source.extension.tt**, aktualizujte tohoto řádku na správný název projektu Dsl:
`string dslProjectName = "Dsl2";`
- Do řešení VSIX, přidejte Dsl * a DslPackage\* projekty.
Můžete chtít umístit každý pár ve vlastní složce řešení.
2. Kombinovat manifestu VSIX DSL:
1. Otevřít _YourVsixProject_**\source.extension.manifest**.
2. Pro každý DSL, zvolte **přidat obsah** a přidejte:
- `Dsl*` projekt jako **Komponenta MEF**
- `DslPackage*` projekt jako **Komponenta MEF**
- `DslPackage*` projekt jako **balíček VS**
3. Sestavte řešení.
Výsledný VSIX nainstaluje i DSL. Můžete otestovat pomocí klávesy F5 nebo nasadit _YourVsixProject_**\bin\Debug\\\*VSIX**.
## <a name="see-also"></a>Viz také
- [Integrace modelů pomocí Visual Studio Modelbus](../modeling/integrating-models-by-using-visual-studio-modelbus.md)
- [Postupy: Přidání obslužné rutiny operace přetažení myší](../modeling/how-to-add-a-drag-and-drop-handler.md)
- [Přizpůsobení chování kopírování](../modeling/customizing-copy-behavior.md) | 44.181818 | 400 | 0.74177 | ces_Latn | 0.999737 |
475c06d6c296c212b2e469a7a72510d1a71f661e | 1,130 | md | Markdown | docs/led-rgb-BLINKM.md | mattp94/johnny-five | c81033f4795838c7132114cfe9d0a8993c39ab5f | [
"MIT"
] | 2 | 2021-04-30T18:08:48.000Z | 2021-11-08T09:19:05.000Z | docs/led-rgb-BLINKM.md | mattp94/johnny-five | c81033f4795838c7132114cfe9d0a8993c39ab5f | [
"MIT"
] | 1 | 2020-09-12T19:24:12.000Z | 2020-09-12T19:24:12.000Z | docs/led-rgb-BLINKM.md | mattp94/johnny-five | c81033f4795838c7132114cfe9d0a8993c39ab5f | [
"MIT"
] | null | null | null | <!--remove-start-->
# LED - Rainbow BlinkM
<!--remove-end-->
Demonstrates use of a BlinkM by cycling through rainbow colors.
##### Tessel - BlinkM Basic
Demonstrates use of a BlinkM, with a Tessel 2, by cycling through rainbow colors.
<br>
Run this example from the command line with:
```bash
node eg/led-rgb-BLINKM.js
```
```javascript
const { Board, Led } = require("johnny-five");
const board = new Board();
board.on("ready", () => {
// Initialize the RGB LED
const rgb = new Led.RGB({
controller: "BLINKM"
});
let index = 0;
const rainbow = ["FF0000", "FF7F00", "FFFF00", "00FF00", "0000FF", "4B0082", "8F00FF"];
board.loop(1000, () => {
if (index + 1 === rainbow.length) {
index = 0;
}
rgb.color(rainbow[index++]);
});
});
```
<!--remove-start-->
## License
Copyright (c) 2012-2014 Rick Waldron <[email protected]>
Licensed under the MIT license.
Copyright (c) 2015-2020 The Johnny-Five Contributors
Licensed under the MIT license.
<!--remove-end-->
| 15.479452 | 90 | 0.642478 | eng_Latn | 0.627912 |
475cb606947cfed63befcd96529bb53862d0a3b1 | 1,354 | md | Markdown | docs/standard-library/remove-reference-class.md | baruchiro/cpp-docs | 6012887526a505e334e9f7ec73c5a84a59562177 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-18T12:54:41.000Z | 2021-04-18T12:54:41.000Z | docs/standard-library/remove-reference-class.md | Mikejo5000/cpp-docs | 4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/remove-reference-class.md | Mikejo5000/cpp-docs | 4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "remove_reference Class | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.technology: ["cpp-standard-libraries"]
ms.topic: "reference"
f1_keywords: ["type_traits/std::remove_reference"]
dev_langs: ["C++"]
helpviewer_keywords: ["remove_reference class", "remove_reference"]
ms.assetid: 294e1965-3ae3-46ee-bc42-4fdf60c24717
author: "corob-msft"
ms.author: "corob"
ms.workload: ["cplusplus"]
---
# remove_reference Class
Makes a non-reference type from a type.
## Syntax
```cpp
template <class T>
struct remove_reference;
template <class T>
using remove_reference_t = typename remove_reference<T>::type;
```
### Parameters
`T`
The type to modify.
## Remarks
An instance of `remove_reference<T>` holds a modified type that is `T1` when `T` is of the form `T1&`, and otherwise `T`.
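For instance, the following minimal sketch (not part of the original article) uses `static_assert` with `std::is_same` to illustrate the transformation at compile time:

```cpp
#include <type_traits>

// remove_reference strips one level of reference, if present.
static_assert(std::is_same<std::remove_reference_t<int&>, int>::value, "lvalue reference removed");
static_assert(std::is_same<std::remove_reference_t<int&&>, int>::value, "rvalue reference removed");
static_assert(std::is_same<std::remove_reference_t<int>, int>::value, "non-reference left unchanged");

int main() { return 0; } // compiles only if all assertions hold
```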
## Example
```cpp
#include <type_traits>
#include <iostream>
int main()
{
int *p = (std::remove_reference_t<int&> *)0;
p = p; // to quiet "unused" warning
std::cout << "remove_reference_t<int&> == "
<< typeid(*p).name() << std::endl;
return (0);
}
```
```Output
remove_reference_t<int&> == int
```
## Requirements
**Header:** \<type_traits>
**Namespace:** std
## See also
[<type_traits>](../standard-library/type-traits.md)<br/>
[add_lvalue_reference Class](../standard-library/add-lvalue-reference-class.md)<br/>
| 19.342857 | 117 | 0.679468 | eng_Latn | 0.689068 |
475cbbf3975b0bd122e0b12afa6b1208481bfd2b | 5,153 | md | Markdown | README.md | pombredanne/reppy2 | 757dc5e86ceb647b5bd27a2467e38dd860f5bf0e | [
"MIT"
] | null | null | null | README.md | pombredanne/reppy2 | 757dc5e86ceb647b5bd27a2467e38dd860f5bf0e | [
"MIT"
] | 1 | 2015-10-13T12:48:23.000Z | 2015-10-13T12:48:23.000Z | README.md | pombredanne/reppy2 | 757dc5e86ceb647b5bd27a2467e38dd860f5bf0e | [
"MIT"
] | null | null | null | Robots Exclusion Protocol Parser for Python
===========================================
[](https://travis-ci.org/seomoz/reppy)
This started out of a lack of memoization support in other robots.txt parsers
I've encountered, and the lack of support for `Crawl-delay` and `Sitemap` in
the built-in `robotparser`.
Features
--------
- Configurable caching of robots.txt files
- Expiration based on the `Expires` and `Cache-Control` headers
- Configurable automatic refetching based on expiration
- Support for Crawl-delay
- Support for Sitemaps
- Wildcard matching
Matching
========
This package supports the
[1996 RFC](http://www.robotstxt.org/norobots-rfc.txt), as well as additional
commonly-implemented features, like wildcard matching, crawl-delay, and
sitemaps. There are varying approaches to matching `Allow` and `Disallow`. One
approach is to use the longest match. Another is to use the most specific.
This package chooses to follow the directive that is longest, the assumption
being that it's the one that is most specific -- a term that is a little
difficult to define in this context.
Usage
=====
The easiest way to use `reppy` is through a cache object. This object
controls how fetched `robots.txt` files are stored and cached, and provides an
interface to issue queries about urls.
```python
from reppy.cache import RobotsCache
# Any args and kwargs provided here are given to `requests.get`. You can use
# this to set request headers and the like
robots = RobotsCache()
# Now ask if a particular url is allowed
robots.allowed('http://example.com/hello', 'my-agent')
```
By default, it fetches `robots.txt` for you using `requests`. If you're
interested in getting a `Rules` object (which is a parsed `robots.txt` file),
you can ask for it without it being cached:
```python
# This returns a Rules object, which is not cached, but can answer queries
rules = robots.fetch('http://example.com/')
rules.allowed('http://example.com/foo', 'my-agent')
```
If automatic fetching doesn't suit your needs (perhaps you have your own way of
fetching pages), you can still make use of the cache object. To do this,
you'll need to make your own `Rules` object:
```python
from reppy.parser import Rules
# Do some fetching here
# ...
robots.add(Rules('http://example.com/robots.txt',
status_code, # What status code did we get back?
content, # The content of the fetched page
expiration)) # When is this page set to expire?
```
Expiration
----------
A `RobotsCache` object can track the expiration times for fetched `robots.txt`
files. This is taken from either the `Cache-Control` or `Expires` headers if
they are present, or defaults to an hour. At some point, I would like this to
be configurable, but I'm still trying to think of the best interface.
```python
rules = RobotsCache.find('http://example.com/robots.txt')
# How long before it expires?
rules.ttl
# When does it expire?
rules.expires
# Has it expired?
rules.expired
```
Caching
=======
The default caching policy is to cache everything until it expires. At some
point, we'll add other caching policies (probably LRU), but you can also extend
the `RobotsCache` object to implement your own. Override the `cache` method and
you're on your way!
```python
class MyCache(RobotsCache):
def cache(self, url, *args, **kwargs):
fetched = self.fetch(url, *args, **kwargs)
self._cache[Utility.hostname(url)] = fetched
# Figure out any cached items that need eviction
return fetched
```
You may want to explicitly clear the cache, too, which can be done either with
the `clear` method, or it's done automatically when used as a context manager:
```python
# Store some results
robots.allowed('http://example.com/foo')
# Now we'll get rid of the cache
robots.clear()
# Now as a context manager
with robots:
robots.allowed('http://example.com/foo')
# Now there's nothing cached in robots
```
Queries
=======
Allowed / Disallowed
--------------------
Each of these takes a url and a short user agent string (for example,
'my-agent').
```python
robots.allowed('http://example.com/allowed.html', 'my-agent')
# True
robots.disallowed('http://example.com/allowed.html', 'my-agent')
# False
```
Alternatively, a rules object provides the same interface:
```python
rules = robots.find('http://example.com/allowed')
rules.allowed('http://example.com/allowed', 'my-agent')
```
Crawl-Delay
-----------
Crawl delay can be specified on a per-agent basis, so when checking the crawl
delay for a site, you must provide an agent.
```python
robots.delay('http://example.com/foo', 'my-agent')
```
If there is no crawl delay specified for the provided agent /or/ for the `*`
agent, then `delay` returns `None`.
Sitemaps
--------
A `robots.txt` file can also specify sitemaps, accessible through a `Rules`
object or a `RobotsCache` object:
```python
robots.sitemaps('http://example.com/')
```
Path-Matching
-------------
Path matching supports both the `*` wildcard and the `$` end-of-path anchor.
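For illustration, here is a sketch using the `Rules` constructor shown earlier. The robots.txt content is made up, the expiration argument is assumed to accept a `datetime`, and the commented results are what the longest-match rule described above would lead you to expect:

```python
from datetime import datetime, timedelta
from reppy.parser import Rules

content = '''
User-agent: *
Disallow: /private/*
Allow: /private/news$
'''

# url, status code, fetched content, and expiration, as in the example above
rules = Rules('http://example.com/robots.txt', 200, content,
    datetime.now() + timedelta(hours=1))

rules.allowed('http://example.com/private/secret', 'my-agent')  # expected: False
rules.allowed('http://example.com/private/news', 'my-agent')    # expected: True
```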
Running Tests
=============
In order to run tests, run:
```bash
./configure --dev
source venv/bin/activate
py.test -vvs .
```
| 29.786127 | 107 | 0.719969 | eng_Latn | 0.992679 |
03d057125531e36389fb0195cca41b130892e5cd | 2,556 | md | Markdown | website/translated_docs/en-US/version-0.20/cita/system/system.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 7 | 2019-12-26T08:38:06.000Z | 2020-12-17T09:29:01.000Z | website/translated_docs/en-US/version-0.20/cita/system/system.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 38 | 2019-12-03T09:51:40.000Z | 2020-12-01T02:49:42.000Z | website/translated_docs/en-US/version-0.20/cita/system/system.md | ruizhaoz1/citahub-docs | 881a3ed093a1d5b53c9899f22c31eafb42c932a5 | [
"MIT"
] | 16 | 2019-12-03T06:15:55.000Z | 2022-02-20T12:04:01.000Z | ---
id: version-0.20-system
title: System Contracts
original_id: system
---
CITA 链生成时,通过系统合约来生成创世块,并作为链的最基本配置。拥有权限的管理员可以发送交易修改创世块的部分配置,所以了解系统合约至关重要。 你可以在 `/scripts/contracts/src` 目录下查看所有的系统合约,当然,接下来我们会一一解释。
## 合约说明
### 节点管理系统合约
按照快速搭链的步骤,生成的链默认包含四个节点。如果你需要增加或是删除节点的话,管理员可以通过发送交易来做自定义配置。
节点管理合约存放在`/scripts/contracts/src/system/node_manager.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff020001`
节点管理的相关描述及方法介绍见 [node_manager](./node)
### 配额管理系统合约
和以太坊消耗 `gas` 不一样, CITA 中消耗 `quota`, 我们把它称作 `配额`,但是作用是类似的, 你可以把 `quota` 视为 `CITA` 的 `gas`。 CITA 中关于配额,有两个限制 BQL(BlockQuotaLimit) 和 AQL(AccountQuotaLimit),分别表示区块的配额上限和用户配额上限。管理员可以通过发交易给 配额管理合约来做自定义修改。
配额管理合约存放在 `/scripts/contracts/src/system/quota_manager.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff020003`
配额管理的相关描述及方法介绍见 [quota_manager](./quota)
### 配额价格管理系统合约
通过上边的讲述,我们已经清楚的知道配额的含义。CITA 类似于高速行驶的汽车,那么 `quota` 就是消耗的汽油,当然 `quota` 也是有价格的,我们用 `quota_price` 来表示它。幸运的是,管理员可以通过发送交易给配额价格管理系统合约来做自定义修改。
配额价格管理合约存放在 `/scripts/contracts/src/system/price_management.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff020010`
配额价格管理的相关描述及方法介绍见 [price_manager](./price)
### 权限管理系统合约
CITA 是一个面向企业级应用的区块链平台,严格的权限管理必不可少。我们提供了完整的权限管理接口,覆盖了企业级应用最常见的权限场景。
权限管理合约存放在 `/scripts/contracts/src/system/permission_management.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff020004`
权限管理的相关描述及方法介绍见 [permission_management](./permission)
### 用户管理系统合约
CITA 为了方便对用户的管理, 我们采用基于组的管理方式,管理员可以选择对组,对组内用户进行增删改查的灵活管理。
组管理合约存放在 `/scripts/contracts/src/user_management/group_management.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff02000a`
组用户管理合约存放在 `/scripts/contracts/src/user_management/group.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff020009`
用户管理的相关描述及方法介绍见 [user_manager](./user)
### 批量转发系统合约
CITA 支持批量调用合约。
批量转发合约存放在 `/scripts/contracts/src/system/batch-tx.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff02000e`
批量转发的相关描述及方法介绍见 [batch-tx](./batch-tx)
### 紧急制动系统合约
在极端情况下,管理员可以通过发送交易到紧急制动系统合约,开启紧急制动模式,只接受管理员发送的交易,屏蔽掉其他所有交易。
紧急制动合约存放在 `/scripts/contracts/src/system/emergency-brake.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff02000f`
紧急制动相关描述及方法介绍见 [emergency-brake](./emg-brake)
### 协议号管理系统合约
自 CITA 诞生以来,我们致力于研发成熟稳定,功能健全的区块链平台。CITA 的性能,功能上依旧在快速迭代,考虑到未来可能存在的兼容性问题,减少对现有客户的影响,我们增加了协议号管理系统合约。
协议号管理系统合约存放在 `/scripts/contracts/src/system/version_manager.sol`, 地址是 `0xffffffffffffffffffffffffffffffffff020011`
协议号管理的相关描述及方法介绍见 [version_manager](./version)
## 合约函数签名
在 `test-chain/template/contracts/docs` 目录(`test-chain` 为默认链名称)提供了所有系统合约函数签名,感兴趣的朋友可以自行查阅。 | 32.35443 | 198 | 0.820031 | yue_Hant | 0.865834 |
03d05d545912ede7b7dccea86c6e45617a1d4c28 | 2,052 | md | Markdown | README.md | TrsNium/digdag-worker-manager | 32522eab723f2a9818203ac110000fddd50dfd26 | [
"Apache-2.0"
] | 5 | 2020-02-17T15:23:44.000Z | 2021-12-10T11:56:33.000Z | README.md | TrsNium/digdag-worker-manager | 32522eab723f2a9818203ac110000fddd50dfd26 | [
"Apache-2.0"
] | 1 | 2020-06-11T06:38:38.000Z | 2020-06-11T06:38:48.000Z | README.md | TrsNium/digdag-worker-manager | 32522eab723f2a9818203ac110000fddd50dfd26 | [
"Apache-2.0"
] | null | null | null | # digdag-worker-crd

Digdag is a workflow engine.
`digdag-worker-crd` is made for the Digdag project.
This project aims to make Digdag workers scalable on Kubernetes.
## Usage
```yaml
apiVersion: horizontalpodautoscalers.autoscaling.digdag-worker-crd/v1
kind: HorizontalDigdagWorkerAutoscaler
metadata:
name: horizontaldigdagworkerautoscaler
spec:
deployment:
name: digdag-worker
namespace: default
maxTaskThreads: 3
postgresql:
host:
value: <YOUR_POSTGRESQL_HOST>
port:
value: "5432"
database:
value: postgres
user:
value: postgres
password:
valueFromSecretKeyRef:
name: postgres
namespace: default
key: password
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: digdag-worker
name: digdag-worker
spec:
progressDeadlineSeconds: 600
revisionHistoryLimit: 10
selector:
matchLabels:
run: digdag-worker
containers:
- args:
- -cx
- digdag server --disable-scheduler --config /etc/config/digdag.properties --max-task-threads 3
command:
- /bin/bash
image: yourImage
imagePullPolicy: Always
name: digdag-worker
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
volumeMounts:
- mountPath: /etc/config
name: digdag-config-volume
volumes:
- configMap:
name: digdag-config
name: digdag-config-volume
```
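To deploy, you would apply the manifests above with `kubectl`; the file name below is a placeholder:

```bash
# Apply the autoscaler resource and the worker deployment (placeholder file name).
kubectl apply -f digdag-worker.yaml

# Watch the autoscaler adjust the worker replicas.
kubectl get deployment digdag-worker --watch
```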
## Detail
`HorizontalDigdagWorkerAutoscaler` looks at the PostgreSQL task queue used by the Digdag workers and adjusts the Digdag worker deployment's replicas.
If the task queue is empty, it sets the replicas of the Digdag worker to 1.
It increases the replicas of the Digdag worker while the following condition holds:
`TotalQueuedTasks - DigdagMaxTaskThreads * DigdagWorkerReplicas > 0`
Digdag workers are also not scaled in until the task queue has been emptied.
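As a purely illustrative sketch (the numbers are hypothetical; the autoscaler performs this check internally), the scale-out condition amounts to:

```bash
QUEUED_TASKS=10      # tasks currently queued in PostgreSQL (hypothetical)
MAX_TASK_THREADS=3   # --max-task-threads per worker, as in the manifest above
REPLICAS=2           # current number of Digdag worker replicas

# Scale out while queued work exceeds the total available task threads.
if [ $((QUEUED_TASKS - MAX_TASK_THREADS * REPLICAS)) -gt 0 ]; then
  echo "scale out"
fi
```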
| 26.307692 | 139 | 0.690058 | eng_Latn | 0.577537 |
03d0c4a36a00269fcadb45237b4f36004db94b5f | 3,407 | md | Markdown | docs/visual-basic/language-reference/statements/handles-clause.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/statements/handles-clause.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/statements/handles-clause.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Cláusula Handles
ms.date: 07/20/2015
f1_keywords:
- Handles
- vb.Handles
helpviewer_keywords:
- Handles keyword [Visual Basic]
ms.assetid: 1b051c0e-f499-42f6-acb5-6f4f27824b40
ms.openlocfilehash: 347f521267d4fd954ac359ab25ed5810cfd71d34
ms.sourcegitcommit: d2db216e46323f73b32ae312c9e4135258e5d68e
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/22/2020
ms.locfileid: "90873254"
---
# <a name="handles-clause-visual-basic"></a>Cláusula Handles (Visual Basic)
Declara que um procedimento manipula um evento especificado.
## <a name="syntax"></a>Sintaxe
```vb
proceduredeclaration Handles eventlist
```
## <a name="parts"></a>Partes
`proceduredeclaration`
A `Sub` declaração de procedimento para o procedimento que manipulará o evento.
`eventlist`
Lista dos eventos `proceduredeclaration` a serem tratados, separados por vírgulas. Os eventos devem ser gerados pela classe base para a classe atual ou por um objeto declarado usando a `WithEvents` palavra-chave.
## <a name="remarks"></a>Comentários
Use a `Handles` palavra-chave no final de uma declaração de procedimento para fazer com que ela manipule eventos gerados por uma variável de objeto declarada usando a `WithEvents` palavra-chave. A `Handles` palavra-chave também pode ser usada em uma classe derivada para manipular eventos de uma classe base.
A `Handles` palavra-chave e a `AddHandler` instrução permitem que você especifique que procedimentos específicos manipulam eventos específicos, mas há diferenças. Use a `Handles` palavra-chave ao definir um procedimento para especificar que ele trata de um evento específico. A `AddHandler` instrução conecta procedimentos a eventos em tempo de execução. Para obter mais informações, consulte [instrução AddHandler](addhandler-statement.md).
Para eventos personalizados, o aplicativo invoca o `AddHandler` acessador do evento quando ele adiciona o procedimento como um manipulador de eventos. Para obter mais informações sobre eventos personalizados, consulte [Event Statement](event-statement.md).
## <a name="example"></a>Exemplo
[!code-vb[VbVbalrEvents#2](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrEvents/VB/Class1.vb#2)]
O exemplo a seguir demonstra como uma classe derivada pode usar a `Handles` instrução para manipular um evento de uma classe base.
[!code-vb[VbVbalrEvents#3](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrEvents/VB/Class1.vb#3)]
## <a name="example"></a>Exemplo
O exemplo a seguir contém dois manipuladores de eventos de botão para um projeto de **aplicativo WPF** .
[!code-vb[VbVbalrEvents#41](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrEvents/VB/class3.vb#41)]
## <a name="example"></a>Exemplo
O exemplo a seguir é equivalente ao exemplo anterior. O `eventlist` na `Handles` cláusula contém os eventos para ambos os botões.
[!code-vb[VbVbalrEvents#42](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrEvents/VB/class3.vb#42)]
## <a name="see-also"></a>Confira também
- [WithEvents](../modifiers/withevents.md)
- [Instrução AddHandler](addhandler-statement.md)
- [Instrução RemoveHandler](removehandler-statement.md)
- [Instrução Event](event-statement.md)
- [Instrução RaiseEvent](raiseevent-statement.md)
- [Eventos](../../programming-guide/language-features/events/index.md)
| 47.985915 | 444 | 0.769592 | por_Latn | 0.980425 |
03d1379f7c9a7ed11ed36cb7e0fa8dd4fb9c3550 | 28,759 | md | Markdown | docs/build/reference/TOC_BACKUP_11848.md | morra1026/cpp-docs.ko-kr | 77706f3bffb88ed46b244c46184289a3b683f661 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/reference/TOC_BACKUP_11848.md | morra1026/cpp-docs.ko-kr | 77706f3bffb88ed46b244c46184289a3b683f661 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/reference/TOC_BACKUP_11848.md | morra1026/cpp-docs.ko-kr | 77706f3bffb88ed46b244c46184289a3b683f661 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | # [C/C++ 빌드 참조](c-cpp-building-reference.md)
# [C++ 프로젝트용 MSBuild 참조](msbuild-reference-cpp.md)
## [C++ 프로젝트용 MSBuild 내부](msbuild-visual-cpp-overview.md)
## [빌드 명령 및 속성에 대한 일반 매크로](common-macros-for-build-commands-and-properties.md)
## [C++ 프로젝트에 사용되는 파일 형식](file-types-created-for-visual-cpp-projects.md)
### [프로젝트 및 솔루션 파일](project-and-solution-files.md)
### [C++ 프로젝트 템플릿](visual-cpp-project-types.md)
### [C++ 새 항목 템플릿](using-visual-cpp-add-new-item-templates.md)
### [리소스 파일](resource-files-cpp.md)
### [CLR 프로젝트용 파일](files-created-for-clr-projects.md)
### [ATL 프로그램 또는 컨트롤 소스 및 헤더 파일](atl-program-or-control-source-and-header-files.md)
### [MFC 프로그램 또는 컨트롤 소스 및 헤더 파일](mfc-program-or-control-source-and-header-files.md)
### [HTML 도움말 파일](help-files-html-help.md)
### [Winhelp 도움말 파일](help-files-winhelp.md)
### [IntelliSense용 힌트 파일](hint-files.md)
### [속성 페이지 XML 파일](property-page-xml-files.md)
### [.vcxproj 및 .props 파일 구조](vcxproj-file-structure.md)
### [예제 프로젝트 파일](project-files.md)
## [C++ 프로젝트 속성 참조](property-pages-visual-cpp.md)
### [일반 속성 페이지](general-property-page-project.md)
### [일반 속성 페이지](general-property-page-file.md)
### [VC++ 디렉터리 속성 페이지](vcpp-directories-property-page.md)
### [명령줄 속성 페이지](command-line-property-pages.md)
### [NMake 속성 페이지](nmake-property-page.md)
### [링커 속성 페이지](linker-property-pages.md)
### [매니페스트 도구 속성 페이지](manifest-tool-property-pages.md)
#### [<Projectname> 속성 페이지 대화 상자, 구성 속성, 매니페스트 도구, 일반](general-manifest-tool-configuration-properties.md)
#### [<Projectname> 속성 페이지 대화 상자, 구성 속성, 매니페스트 도구, 입력 및 출력](input-and-output-manifest-tool.md)
#### [<Projectname> 속성 페이지 대화 상자, 구성 속성, 매니페스트 도구, 격리 COM](isolated-com-manifest-tool.md)
#### [<Projectname> 속성 페이지 대화 상자, 구성 속성, 매니페스트 도구, 고급](advanced-manifest-tool.md)
### [리소스 속성 페이지](resources-property-pages.md)
### [관리되는 리소스 속성 페이지](managed-resources-property-page.md)
### [MIDL 속성 페이지](midl-property-pages.md)
#### [MIDL 속성 페이지: 일반](midl-property-pages-general.md)
#### [MIDL 속성 페이지: 출력](midl-property-pages-output.md)
#### [MIDL 속성 페이지: 고급](midl-property-pages-advanced.md)
### [웹 참조 속성 페이지](web-references-property-page.md)
### [XML 데이터 생성기 도구 속성 페이지](xml-data-generator-tool-property-page.md)
### [XML 문서 생성기 도구 속성 페이지](xml-document-generator-tool-property-pages.md)
### [사용자 지정 빌드 단계 속성 페이지: 일반](custom-build-step-property-page-general.md)
### [HLSL 속성 페이지](hlsl-property-pages.md)
#### [HLSL 속성 페이지: 일반](hlsl-property-pages-general.md)
#### [HLSL 속성 페이지: 고급](hlsl-property-pages-advanced.md)
#### [HLSL 속성 페이지: 출력 파일](hlsl-property-pages-output-files.md)
# [MSVC 컴파일러 참조](compiling-a-c-cpp-program.md)
## [MSVC 컴파일러 명령줄 구문](compiler-command-line-syntax.md)
### [CL 파일 이름 구문](cl-filename-syntax.md)
### [CL 옵션 순서](order-of-cl-options.md)
### [cl.exe의 반환 값](return-value-of-cl-exe.md)
## [CL 환경 변수](cl-environment-variables.md)
## [CL 명령 파일](cl-command-files.md)
## [CL에서의 링커 호출](cl-invokes-the-linker.md)
## [MSVC 컴파일러 옵션](compiler-options.md)
### [컴파일러 옵션 범주별 목록](compiler-options-listed-by-category.md)
### [컴파일러 옵션 사전순 목록](compiler-options-listed-alphabetically.md)
#### [@(컴파일러 지시 파일 지정)](at-specify-a-compiler-response-file.md)
#### [/AI(메타데이터 디렉터리 지정)](ai-specify-metadata-directories.md)
#### [/analyze(코드 분석)](analyze-code-analysis.md)
#### [/arch(최소 CPU 아키텍처)](arch-minimum-cpu-architecture.md)
##### [-arch(x86)](arch-x86.md)
##### [-arch(x64)](arch-x64.md)
##### [-arch(ARM)](arch-arm.md)
#### [/await(코루틴 지원 사용)](await-enable-coroutine-support.md)
#### [/bigobj(.Obj 파일의 섹션 수 늘리기)](bigobj-increase-number-of-sections-in-dot-obj-file.md)
#### [/C(전처리 중에 주석 유지)](c-preserve-comments-during-preprocessing.md)
#### [/c(링크 없이 컴파일)](c-compile-without-linking.md)
#### [/cgthreads(코드 생성 스레드)](cgthreads-code-generation-threads.md)
#### [/clr(공용 언어 런타임 컴파일)](clr-common-language-runtime-compilation.md)
##### [/clr 제한](clr-restrictions.md)
#### [/constexpr(컨트롤 constexpr 평가)](constexpr-control-constexpr-evaluation.md)
#### [/D(전처리기 정의)](d-preprocessor-definitions.md)
#### [/diagnostics(컴파일러 진단 옵션)](diagnostics-compiler-diagnostic-options.md)
#### [/doc(문서 주석 처리)(C/C++)](doc-process-documentation-comments-c-cpp.md)
#### [/E(stdout으로 전처리)](e-preprocess-to-stdout.md)
#### [/EH(예외 처리 모델)](eh-exception-handling-model.md)
#### [/EP(#line 지시문 없이 stdout으로 전처리)](ep-preprocess-to-stdout-without-hash-line-directives.md)
#### [/errorReport(내부 컴파일러 오류 보고)](errorreport-report-internal-compiler-errors.md)
#### [/execution/charset(실행 문자 집합 설정)](execution-charset-set-execution-character-set.md)
#### [/F(스택 크기 설정)](f-set-stack-size.md)
#### [출력 파일(/F) 옵션](output-file-f-options.md)
##### [/FA, /Fa(목록 파일)](fa-fa-listing-file.md)
##### [경로 이름 지정](specifying-the-pathname.md)
##### [/FD(IDE 최소 재빌드)](fd-ide-minimal-rebuild.md)
##### [/Fd(프로그램 데이터베이스 파일 이름)](fd-program-database-file-name.md)
##### [/Fe(EXE 파일 이름 지정)](fe-name-exe-file.md)
##### [/Fi(출력 파일 이름 전처리)](fi-preprocess-output-file-name.md)
##### [/FI(강제 포함 파일 이름 지정)](fi-name-forced-include-file.md)
##### [/Fm(맵 파일 이름 지정)](fm-name-mapfile.md)
##### [/Fo(개체 파일 이름)](fo-object-file-name.md)
##### [/Fp(.Pch 파일 이름 지정)](fp-name-dot-pch-file.md)
##### [/FR, /Fr(.Sbr 파일 만들기)](fr-fr-create-dot-sbr-file.md)
##### [/FU(강제 #using 파일 이름 지정)](fu-name-forced-hash-using-file.md)
##### [/Fx(삽입된 코드 병합)](fx-merge-injected-code.md)
#### [/favor(아키텍처에 맞게 최적화)](favor-optimize-for-architecture-specifics.md)
#### [/FC(진단 소스 코드 파일의 전체 경로)](fc-full-path-of-source-code-file-in-diagnostics.md)
#### [/fp(부동 소수점 동작 지정)](fp-specify-floating-point-behavior.md)
##### [MSVC 부동 소수점 최적화](floating-point-optimization.md)
#### [/FS(동기 PDB 쓰기 적용)](fs-force-synchronous-pdb-writes.md)
#### [/GA(Windows 애플리케이션 최적화)](ga-optimize-for-windows-application.md)
#### [/Gd, /Gr, /Gv, /Gz(호출 규칙)](gd-gr-gv-gz-calling-convention.md)
#### [/Ge(스택 조사 사용)](ge-enable-stack-probes.md)
#### [/GF(중복 문자열 제거)](gf-eliminate-duplicate-strings.md)
#### [/GH(_pexit 후크 함수 사용)](gh-enable-pexit-hook-function.md)
#### [/Gh(_penter 후크 함수 사용)](gh-enable-penter-hook-function.md)
#### [/GL(전체 프로그램 최적화)](gl-whole-program-optimization.md)
#### [/Gm(최소 다시 빌드 사용)](gm-enable-minimal-rebuild.md)
#### [/GR(런타임 형식 정보 사용)](gr-enable-run-time-type-information.md)
#### [/GS(버퍼 보안 검사)](gs-buffer-security-check.md)
#### [/Gs(스택 검사 호출 제어)](gs-control-stack-checking-calls.md)
#### [/guard(제어 흐름 보호 사용)](guard-enable-control-flow-guard.md)
#### [/GT(파이버 안전 스레드 로컬 스토리지 지원)](gt-support-fiber-safe-thread-local-storage.md)
#### [/Gw(전역 데이터 최적화)](gw-optimize-global-data.md)
#### [/GX(예외 처리 사용)](gx-enable-exception-handling.md)
#### [/Gy(함수 수준 링크 사용)](gy-enable-function-level-linking.md)
#### [/GZ(스택 프레임 런타임 오류 검사 사용)](gz-enable-stack-frame-run-time-error-checking.md)
#### [/H(외부 이름 길이 제한)](h-restrict-length-of-external-names.md)
#### [/HELP(컴파일러 명령줄 도움말)](help-compiler-command-line-help.md)
#### [/homeparams(스택에 레지스터 매개 변수 복사)](homeparams-copy-register-parameters-to-stack.md)
#### [/hotpatch(핫 패치 가능 이미지 만들기)](hotpatch-create-hotpatchable-image.md)
#### [/I(추가 포함 디렉터리)](i-additional-include-directories.md)
#### [/J(부호 없는 기본 문자 형식)](j-default-char-type-is-unsigned.md)
#### [/JMC(내 코드만 디버깅)](jmc.md)
#### [/kernel(커널 모드 이진 만들기)](kernel-create-kernel-mode-binary.md)
#### [/link(옵션을 링커로 전달)](link-pass-options-to-linker.md)
#### [/LN(MSIL 모듈 만들기)](ln-create-msil-module.md)
#### [/MD, /MT, /LD(런타임 라이브러리 사용)](md-mt-ld-use-run-time-library.md)
#### [/MP(여러 프로세스로 빌드)](mp-build-with-multiple-processes.md)
#### [/nologo(시작 배너 표시 안 함)(C/C++)](nologo-suppress-startup-banner-c-cpp.md)
#### [/O 옵션(코드 최적화)](o-options-optimize-code.md)
##### [/O1, /O2(크기 최소화, 속도 최대화)](o1-o2-minimize-size-maximize-speed.md)
##### [/Ob(인라인 함수 확장)](ob-inline-function-expansion.md)
##### [/Od(디버그 사용 안 함)](od-disable-debug.md)
##### [/Og(전역 최적화)](og-global-optimizations.md)
##### [/Oi(내장 함수 만들기)](oi-generate-intrinsic-functions.md)
##### [/Os, /Ot(크기 우선 코드, 속도 우선 코드)](os-ot-favor-small-code-favor-fast-code.md)
##### [/Ox(대부분의 속도 최적화 사용)](ox-full-optimization.md)
##### [/Oy(프레임/포인터 생략)](oy-frame-pointer-omission.md)
#### [/openmp(OpenMP 2.0 지원 사용)](openmp-enable-openmp-2-0-support.md)
#### [/P(파일로 전처리)](p-preprocess-to-a-file.md)
#### [/permissive-(표준 준수)](permissive-standards-conformance.md)
#### [/Q 옵션(하위 수준 작업)](q-options-low-level-operations.md)
##### [/Qfast_transcendentals(빠른 초월수 강제 적용)](qfast-transcendentals-force-fast-transcendentals.md)
##### [/QIfist(_ftol 사용 안 함)](qifist-suppress-ftol.md)
##### [/Qimprecise_fwaits(Try 블록 내의 fwait 제거)](qimprecise-fwaits-remove-fwaits-inside-try-blocks.md)
##### [/Qpar(자동 평행화 도우미)](qpar-auto-parallelizer.md)
##### [/Qpar-report(자동 평행화 도우미 보고 수준)](qpar-report-auto-parallelizer-reporting-level.md)
##### [/Qsafe_fp_loads](qsafe-fp-loads.md)
##### [/Qspectre](qspectre.md)
##### [/Qvec-report(자동 벡터화 도우미 보고 수준)](qvec-report-auto-vectorizer-reporting-level.md)
#### [/RTC(런타임 오류 검사)](rtc-run-time-error-checks.md)
#### [/sdl(추가 보안 검사 사용)](sdl-enable-additional-security-checks.md)
#### [/showIncludes(포함 파일 나열)](showincludes-list-include-files.md)
#### [/source-charset(소스 문자 집합 설정)](source-charset-set-source-character-set.md)
#### [/std(언어 표준 버전 지정)](std-specify-language-standard-version.md)
#### [/Tc, /Tp, /TC, /TP(소스 파일 형식 지정)](tc-tp-tc-tp-specify-source-file-type.md)
#### [/U, /u(기호 정의 해제)](u-u-undefine-symbols.md)
#### [/utf-8(소스 및 실행 파일 문자 집합을 UTF-8로 설정)](utf-8-set-source-and-executable-character-sets-to-utf-8.md)
#### [/V(버전 번호)](v-version-number.md)
#### [/validate-charset(호환 문자에 대한 유효성 검사)](validate-charset-validate-for-compatible-characters.md)
#### [/vd(생성 치환 사용 안 함)](vd-disable-construction-displacements.md)
#### [/vmb, /vmg(표시 메서드)](vmb-vmg-representation-method.md)
#### [/vmm, /vms, /vmv(일반적인 용도 표시)](vmm-vms-vmv-general-purpose-representation.md)
#### [/volatile(volatile 키워드 해석)](volatile-volatile-keyword-interpretation.md)
#### [/w, /W0, /W1, /W2, /W3, /W4, /w1, /w2, /w3, /w4, /Wall, /wd, /we, /wo, /Wv, /WX(경고 수준)](compiler-option-warning-level.md)
#### [/WL(1줄 진단 사용)](wl-enable-one-line-diagnostics.md)
#### [/Wp64(64비트 이식성 문제 검색)](wp64-detect-64-bit-portability-issues.md)
#### [/X(표준 포함 경로 무시)](x-ignore-standard-include-paths.md)
#### [/Y(미리 컴파일된 헤더)](y-precompiled-headers.md)
##### [/Y-(미리 컴파일된 헤더 옵션 무시)](y-ignore-precompiled-header-options.md)
##### [/Yc(미리 컴파일된 헤더 파일 만들기)](yc-create-precompiled-header-file.md)
##### [/Yd(개체 파일에 디버그 정보 삽입)](yd-place-debug-information-in-object-file.md)
##### [/Yl(디버그 라이브러리에 PCH 참조 넣기)](yl-inject-pch-reference-for-debug-library.md)
##### [/Yu(미리 컴파일된 헤더 파일 사용)](yu-use-precompiled-header-file.md)
#### [/Z7, /Zi, /ZI(디버깅 정보 형식)](z7-zi-zi-debug-information-format.md)
#### [/Za, /Ze(언어 확장 사용 안 함)](za-ze-disable-language-extensions.md)
##### [C 및 C++에 대한 Microsoft 확장](microsoft-extensions-to-c-and-cpp.md)
#### [/Zc(규칙)](zc-conformance.md)
##### [/Zc:alignedNew(C++17 과다 정렬된 할당)](zc-alignednew.md)
##### [/Zc:auto(변수 형식 추론)](zc-auto-deduce-variable-type.md)
##### [/Zc:__cplusplus (업데이트된 __cplusplus 매크로 사용)](zc-cplusplus.md)
##### [/Zc:externConstexpr(extern constexpr 변수 사용)](zc-externconstexpr.md)
##### [/Zc:forScope(for 루프 범위의 강제 규칙)](zc-forscope-force-conformance-in-for-loop-scope.md)
##### [/Zc:implicitNoexcept(암시적 예외 지정자)](zc-implicitnoexcept-implicit-exception-specifiers.md)
##### [/Zc:inline(참조되지 않은 COMDAT 제거)](zc-inline-remove-unreferenced-comdat.md)
##### [/Zc:noexceptTypes(C++17 noexcept 규칙)](zc-noexcepttypes.md)
##### [/Zc:referenceBinding(참조 바인딩 규칙 적용)](zc-referencebinding-enforce-reference-binding-rules.md)
##### [/Zc:rvalueCast(형식 변환 규칙 적용)](zc-rvaluecast-enforce-type-conversion-rules.md)
##### [/Zc:sizedDealloc(전역으로 크기가 지정된 할당 해제 함수 사용)](zc-sizeddealloc-enable-global-sized-dealloc-functions.md)
##### [/Zc:strictStrings(문자열 리터럴 형식 변환 사용 안 함)](zc-strictstrings-disable-string-literal-type-conversion.md)
##### [/Zc:ternary(조건 연산자 규칙 적용)](zc-ternary.md)
##### [/Zc:threadSafeInit(스레드로부터 안전한 로컬 정적 초기화)](zc-threadsafeinit-thread-safe-local-static-initialization.md)
##### [/Zc:throwingNew(operator new throw 가정)](zc-throwingnew-assume-operator-new-throws.md)
##### [/Zc:trigraphs(삼중자 대체)](zc-trigraphs-trigraphs-substitution.md)
##### [/Zc:twoPhase-(2단계 이름 조회를 사용하지 않도록 설정)](zc-twophase.md)
##### [/Zc:wchar_t(wchar_t를 네이티브 형식으로 인식)](zc-wchar-t-wchar-t-is-native-type.md)
#### [/Zf(더 빠른 PDB 생성)](zf.md)
#### [/Zg(함수 프로토타입 생성)](zg-generate-function-prototypes.md)
#### [/Zl(기본 라이브러리 이름 생략)](zl-omit-default-library-name.md)
#### [/Zm(미리 컴파일된 헤더 메모리 할당 제한 지정)](zm-specify-precompiled-header-memory-allocation-limit.md)
#### [/Zo(최적화된 디버깅 향상)](zo-enhance-optimized-debugging.md)
#### [/Zp(구조체 멤버 맞춤)](zp-struct-member-alignment.md)
#### [/Zs(구문만 검사)](zs-syntax-check-only.md)
#### [/ZW(Windows 런타임 컴파일)](zw-windows-runtime-compilation.md)
## [컴파일러 및 링커에서의 유니코드 지원](unicode-support-in-the-compiler-and-linker.md)
# [MSVC 링커 참조](linking.md)
## [MSVC 링커 옵션](linker-options.md)
### [컴파일러 제어 LINK 옵션](compiler-controlled-link-options.md)
### [LINK 입력 파일](link-input-files.md)
#### [링커 입력 파일로 사용하는 .Obj 파일](dot-obj-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .netmodule 파일](netmodule-files-as-linker-input.md)
##### [.netmodule 입력 파일 형식 선택](choosing-the-format-of-netmodule-input-files.md)
#### [링커 입력 파일로 사용하는 .Lib 파일](dot-lib-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .Exp 파일](dot-exp-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .Def 파일](dot-def-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .Pdb 파일](dot-pdb-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .Res 파일](dot-res-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .Exe 파일](dot-exe-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .Txt 파일](dot-txt-files-as-linker-input.md)
#### [링커 입력 파일로 사용하는 .Ilk 파일](dot-ilk-files-as-linker-input.md)
### [LINK 출력](link-output.md)
### [예약어](reserved-words.md)
### [@(링커 지시 파일 지정)](at-specify-a-linker-response-file.md)
### [/ALIGN(섹션 맞춤)](align-section-alignment.md)
### [/ALLOWBIND(DLL 바인딩 방지)](allowbind-prevent-dll-binding.md)
### [/ALLOWISOLATION(매니페스트 조회)](allowisolation-manifest-lookup.md)
### [/APPCONTAINER(UWP/Microsoft Store 앱)](appcontainer-windows-store-app.md)
### [/ASSEMBLYDEBUG(DebuggableAttribute 추가)](assemblydebug-add-debuggableattribute.md)
### [/ASSEMBLYLINKRESOURCE(.NET Framework 리소스에 대한 링크)](assemblylinkresource-link-to-dotnet-framework-resource.md)
### [/ASSEMBLYMODULE(MSIL 모듈을 어셈블리에 추가)](assemblymodule-add-a-msil-module-to-the-assembly.md)
### [/ASSEMBLYRESOURCE(관리되는 리소스 포함)](assemblyresource-embed-a-managed-resource.md)
### [/BASE(기준 주소)](base-base-address.md)
### [/CETCOMPAT(제어 흐름 적용 기술 호환 가능)](cetcompat.md)
### [/CGTHREADS(컴파일러 스레드)](cgthreads-compiler-threads.md)
### [/CLRIMAGETYPE(CLR 이미지 형식 지정)](clrimagetype-specify-type-of-clr-image.md)
### [/CLRSUPPORTLASTERROR(PInvoke 호출의 마지막 오류 코드 유지)](clrsupportlasterror-preserve-last-error-code-for-pinvoke-calls.md)
### [/CLRTHREADATTRIBUTE(CLR 스레드 특성 설정)](clrthreadattribute-set-clr-thread-attribute.md)
### [/CLRUNMANAGEDCODECHECK(SuppressUnmanagedCodeSecurityAttribute 제거)](clrunmanagedcodecheck-add-suppressunmanagedcodesecurityattribute.md)
### [/DEBUG(디버깅 정보 생성)](debug-generate-debug-info.md)
### [/DEBUGTYPE(디버그 정보 옵션)](debugtype-debug-info-options.md)
### [/DEF(모듈 정의 파일 지정)](def-specify-module-definition-file.md)
### [/DEFAULTLIB(기본 라이브러리 지정)](defaultlib-specify-default-library.md)
### [/DELAY(가져오기 설정 로드 지연)](delay-delay-load-import-settings.md)
### [/DELAYLOAD(가져오기 로드 지연)](delayload-delay-load-import.md)
### [/DELAYSIGN(어셈블리에 부분적으로 서명)](delaysign-partially-sign-an-assembly.md)
### [/DEPENDENTLOADFLAG(기본 종속 로드 플래그 설정)](dependentloadflag.md)
### [/DLL(DLL 빌드)](dll-build-a-dll.md)
### [/DRIVER(Windows NT 커널 모드 드라이버)](driver-windows-nt-kernel-mode-driver.md)
### [/DYNAMICBASE(주소 공간 레이아웃을 임의로 지정)](dynamicbase-use-address-space-layout-randomization.md)
### [/ENTRY(진입점 기호)](entry-entry-point-symbol.md)
### [/ERRORREPORT(내부 링커 오류 보고)](errorreport-report-internal-linker-errors.md)
### [/EXPORT(함수 내보내기)](export-exports-a-function.md)
### [/FILEALIGN(파일의 섹션 맞춤)](filealign.md)
### [/FIXED(고정 기준 주소)](fixed-fixed-base-address.md)
### [/FORCE(파일 출력 강제)](force-force-file-output.md)
### [/FUNCTIONPADMIN(핫 패치 가능 이미지 만들기)](functionpadmin-create-hotpatchable-image.md)
### [/GENPROFILE, /FASTGENPROFILE(계측된 빌드 프로파일링 생성)](genprofile-fastgenprofile-generate-profiling-instrumented-build.md)
### [/GUARD(보호 검사 사용)](guard-enable-guard-checks.md)
### [/HEAP(힙 크기 설정)](heap-set-heap-size.md)
### [/HIGHENTROPYVA(64비트 ASLR 지원)](highentropyva-support-64-bit-aslr.md)
### [/IDLOUT(MIDL 출력 파일 이름 지정)](idlout-name-midl-output-files.md)
### [/IGNORE(특정 경고 무시)](ignore-ignore-specific-warnings.md)
### [/IGNOREIDL(특성을 MIDL로 처리하지 않음)](ignoreidl-don-t-process-attributes-into-midl.md)
### [/IMPLIB(가져오기 라이브러리 이름 지정)](implib-name-import-library.md)
### [/INCLUDE(강제 기호 참조)](include-force-symbol-references.md)
### [/INCREMENTAL(증분 링크)](incremental-link-incrementally.md)
### [/INTEGRITYCHECK(시그니처 확인 필요)](integritycheck-require-signature-check.md)
### [/KEYCONTAINER(어셈블리에 서명할 키 컨테이너 지정)](keycontainer-specify-a-key-container-to-sign-an-assembly.md)
### [/KEYFILE(어셈블리에 서명할 키 또는 키 쌍 지정)](keyfile-specify-key-or-key-pair-to-sign-an-assembly.md)
### [/LARGEADDRESSAWARE(큰 주소 처리)](largeaddressaware-handle-large-addresses.md)
### [/LIBPATH(추가 Libpath)](libpath-additional-libpath.md)
### [/LTCG(링크 타임 코드 생성)](ltcg-link-time-code-generation.md)
### [/MACHINE(대상 플랫폼 지정)](machine-specify-target-platform.md)
### [/MANIFEST(side-by-side 어셈블리 매니페스트 만들기)](manifest-create-side-by-side-assembly-manifest.md)
### [/MANIFESTDEPENDENCY(매니페스트 종속성 지정)](manifestdependency-specify-manifest-dependencies.md)
### [/MANIFESTFILE(매니페스트 파일 이름 지정)](manifestfile-name-manifest-file.md)
### [/MANIFESTINPUT(매니페스트 입력 지정)](manifestinput-specify-manifest-input.md)
### [/MANIFESTUAC(매니페스트에 UAC 정보 포함)](manifestuac-embeds-uac-information-in-manifest.md)
### [/MAP(맵파일 생성)](map-generate-mapfile.md)
### [/MAPINFO(맵파일에 정보 포함)](mapinfo-include-information-in-mapfile.md)
### [/MERGE(섹션 결합)](merge-combine-sections.md)
### [/MIDL(MIDL 명령줄 옵션 지정)](midl-specify-midl-command-line-options.md)
### [/NATVIS(PDB에 Natvis 추가)](natvis-add-natvis-to-pdb.md)
### [/NOASSEMBLY(MSIL 모듈 만들기)](noassembly-create-a-msil-module.md)
### [/NODEFAULTLIB(라이브러리 무시)](nodefaultlib-ignore-libraries.md)
### [/NOENTRY(진입점 없음)](noentry-no-entry-point.md)
### [/NOLOGO(시작 배너 표시 안 함)(링커)](nologo-suppress-startup-banner-linker.md)
### [/NXCOMPAT(데이터 실행 방지 기능과 호환)](nxcompat-compatible-with-data-execution-prevention.md)
### [/OPT(최적화)](opt-optimizations.md)
### [/ORDER(함수에 순서 지정)](order-put-functions-in-order.md)
### [/OUT(출력 파일 이름)](out-output-file-name.md)
### [/PDB(프로그램 데이터베이스 사용)](pdb-use-program-database.md)
### [/PDBALTPATH(대체 PDB 경로 사용)](pdbaltpath-use-alternate-pdb-path.md)
### [/PDBSTRIPPED(전용 기호 제거)](pdbstripped-strip-private-symbols.md)
### [/PGD(프로필 기반 최적화를 위한 데이터베이스 지정)](pgd-specify-database-for-profile-guided-optimizations.md)
### [/POGOSAFEMODE](pogosafemode-linker-option.md)
### [/PROFILE(성능 도구 프로파일러)](profile-performance-tools-profiler.md)
### [/RELEASE(체크섬 설정)](release-set-the-checksum.md)
### [/SAFESEH(이미지에 안전한 예외 처리기 포함)](safeseh-image-has-safe-exception-handlers.md)
### [/SECTION(섹션 특성 지정)](section-specify-section-attributes.md)
### [/SOURCELINK(PDB에 Sourcelink 파일 포함)](sourcelink.md)
### [/STACK(스택 할당)](stack-stack-allocations.md)
### [/STUB(MS-DOS 스텁 파일 이름)](stub-ms-dos-stub-file-name.md)
### [/SUBSYSTEM(하위 시스템 지정)](subsystem-specify-subsystem.md)
### [/SWAPRUN(링커 출력을 스왑 파일로 로드)](swaprun-load-linker-output-to-swap-file.md)
### [/TLBID(TypeLib의 리소스 ID 지정)](tlbid-specify-resource-id-for-typelib.md)
### [/TLBOUT(.TLB 파일 이름 지정)](tlbout-name-dot-tlb-file.md)
### [/TSAWARE(터미널 서버 인식 애플리케이션 만들기)](tsaware-create-terminal-server-aware-application.md)
### [/USEPROFILE](useprofile.md)
### [/VERBOSE(진행 메시지 표시)](verbose-print-progress-messages.md)
### [/VERSION(버전 정보)](version-version-information.md)
### [/WHOLEARCHIVE(모든 라이브러리 개체 파일 포함)](wholearchive-include-all-library-object-files.md)
### [/WINMD(Windows 메타데이터 생성)](winmd-generate-windows-metadata.md)
### [/WINMDFILE(winmd 파일 지정)](winmdfile-specify-winmd-file.md)
### [/WINMDKEYFILE(winmd 키 파일 지정)](winmdkeyfile-specify-winmd-key-file.md)
### [/WINMDKEYCONTAINER(키 컨테이너 지정)](winmdkeycontainer-specify-key-container.md)
### [/WINMDDELAYSIGN(winmd에 부분적으로 서명)](winmddelaysign-partially-sign-a-winmd.md)
### [/WX(링커 경고를 오류로 처리)](wx-treat-linker-warnings-as-errors.md)
## [데코레이팅된 이름](decorated-names.md)
## [모듈 정의(.Def) 파일](module-definition-dot-def-files.md)
### [모듈 정의 문의 규칙](rules-for-module-definition-statements.md)
#### [EXPORTS](exports.md)
#### [LIBRARY](library.md)
#### [HEAPSIZE](heapsize.md)
#### [NAME(C/C++)](name-c-cpp.md)
#### [SECTIONS(C/C++)](sections-c-cpp.md)
#### [STACKSIZE](stacksize.md)
#### [STUB](stub.md)
#### [VERSION(C/C++)](version-c-cpp.md)
## [링커의 지연 로드된 DLL 지원](linker-support-for-delay-loaded-dlls.md)
### [지연 로드할 DLL 지정](specifying-dlls-to-delay-load.md)
### [지연 로드된 DLL의 명시적 언로드](explicitly-unloading-a-delay-loaded-dll.md)
### [가져오기 바인딩](binding-imports.md)
### [지연 로드된 DLL에 대한 모든 가져오기 로드](loading-all-imports-for-a-delay-loaded-dll.md)
### [오류 처리 및 알림](error-handling-and-notification.md)
#### [알림 후크](notification-hooks.md)
#### [오류 후크](failure-hooks.md)
#### [예외(C/C++)](exceptions-c-cpp.md)
### [지연 로드된 가져오기 덤프](dumping-delay-loaded-imports.md)
### [DLL 지연 로드의 제약 조건](constraints-of-delay-loading-dlls.md)
### [도우미 함수 이해](understanding-the-helper-function.md)
#### [Visual C++ 6.0 이후 DLL 지연 로드 도우미 함수의 변경 내용](changes-in-the-dll-delayed-loading-helper-function-since-visual-cpp-6-0.md)
#### [호출 규칙, 매개 변수, 반환 형식](calling-conventions-parameters-and-return-type.md)
#### [구조체 및 상수 정의](structure-and-constant-definitions.md)
#### [필요한 값 계산](calculating-necessary-values.md)
#### [지연 로드된 DLL 언로드](unloading-a-delay-loaded-dll.md)
### [사용자 도우미 함수 개발](developing-your-own-helper-function.md)
# [추가 MSVC 빌드 도구](c-cpp-build-tools.md)
## [NMAKE 참조](nmake-reference.md)
### [Visual Studio의 NMAKE 프로젝트](creating-a-makefile-project.md)
### [NMAKE 실행](running-nmake.md)
#### [NMAKE 옵션](nmake-options.md)
#### [Tools.ini 및 NMake](tools-ini-and-nmake.md)
#### [NMAKE의 종료 코드](exit-codes-from-nmake.md)
### [메이크파일의 내용](contents-of-a-makefile.md)
#### [와일드카드와 NMAKE](wildcards-and-nmake.md)
#### [메이크파일의 긴 파일 이름](long-filenames-in-a-makefile.md)
#### [메이크파일의 주석](comments-in-a-makefile.md)
#### [메이크파일의 특수 문자](special-characters-in-a-makefile.md)
#### [샘플 메이크파일](sample-makefile.md)
### [설명 블록](description-blocks.md)
#### [대상](targets.md)
##### [의사 대상](pseudotargets.md)
##### [대상이 여러 개인 경우](multiple-targets.md)
##### [누적되는 종속 줄](cumulative-dependencies.md)
##### [여러 개의 설명 블록에 포함된 대상](targets-in-multiple-description-blocks.md)
##### [종속 줄의 부수적 효과](dependency-side-effects.md)
#### [종속 파일](dependents.md)
##### [유추된 종속 파일](inferred-dependents.md)
##### [종속 파일의 경로 검색](search-paths-for-dependents.md)
### [메이크파일의 명령](commands-in-a-makefile.md)
#### [명령 한정자](command-modifiers.md)
#### [파일 이름 부분 구문](filename-parts-syntax.md)
#### [메이크파일의 인라인 파일](inline-files-in-a-makefile.md)
##### [인라인 파일 지정](specifying-an-inline-file.md)
##### [인라인 파일 텍스트 만들기](creating-inline-file-text.md)
##### [인라인 파일 다시 사용](reusing-inline-files.md)
##### [다중 인라인 파일](multiple-inline-files.md)
### [매크로와 NMake](macros-and-nmake.md)
#### [NMake 매크로 정의](defining-an-nmake-macro.md)
##### [매크로의 특수 문자](special-characters-in-macros.md)
##### [null 및 정의되지 않은 매크로](null-and-undefined-macros.md)
##### [매크로를 정의할 위치](where-to-define-macros.md)
##### [매크로 정의의 우선 순위](precedence-in-macro-definitions.md)
#### [NMAKE 매크로 사용](using-an-nmake-macro.md)
##### [매크로 대체](macro-substitution.md)
#### [특수 NMake 매크로](special-nmake-macros.md)
##### [파일 이름 매크로](filename-macros.md)
##### [재귀 매크로](recursion-macros.md)
##### [명령 매크로와 옵션 매크로](command-macros-and-options-macros.md)
##### [환경 변수 매크로](environment-variable-macros.md)
### [유추 규칙](inference-rules.md)
#### [규칙 정의](defining-a-rule.md)
##### [규칙에서 경로 검색](search-paths-in-rules.md)
#### [일괄 처리 모드 규칙](batch-mode-rules.md)
#### [미리 정의된 규칙](predefined-rules.md)
#### [유추된 종속 파일과 규칙](inferred-dependents-and-rules.md)
#### [유추 규칙의 우선 순위](precedence-in-inference-rules.md)
### [점 지시문](dot-directives.md)
### [메이크파일 전처리](makefile-preprocessing.md)
#### [메이크파일 전처리 지시문](makefile-preprocessing-directives.md)
#### [메이크파일 전처리 식](expressions-in-makefile-preprocessing.md)
##### [메이크파일 전처리 연산자](makefile-preprocessing-operators.md)
##### [전처리에서 프로그램 실행](executing-a-program-in-preprocessing.md)
## [LIB 참조](lib-reference.md)
### [LIB 개요](overview-of-lib.md)
#### [방법: Visual Studio 개발 환경에서 LIB.EXE 옵션 설정](how-to-set-lib-exe-options-in-the-visual-studio-development-environment.md)
#### [LIB 입력 파일](lib-input-files.md)
#### [LIB 출력 파일](lib-output-files.md)
#### [기타 LIB 출력](other-lib-output.md)
#### [라이브러리 구조](structure-of-a-library.md)
### [LIB 실행](running-lib.md)
### [라이브러리 관리](managing-a-library.md)
### [라이브러리 멤버 추출](extracting-a-library-member.md)
### [가져오기 라이브러리 및 내보내기 파일을 사용한 작업](working-with-import-libraries-and-export-files.md)
#### [가져오기 라이브러리 및 내보내기 파일 빌드](building-an-import-library-and-export-file.md)
#### [가져오기 라이브러리 및 내보내기 파일 사용](using-an-import-library-and-export-file.md)
## [EDITBIN 참조](editbin-reference.md)
### [EDITBIN 명령줄](editbin-command-line.md)
### [EDITBIN 옵션](editbin-options.md)
#### [/ALLOWISOLATION](allowisolation.md)
#### [/ALLOWBIND](allowbind.md)
#### [/APPCONTAINER](appcontainer.md)
#### [/BIND](bind.md)
#### [/DYNAMICBASE](dynamicbase.md)
#### [/ERRORREPORT(editbin.exe)](errorreport-editbin-exe.md)
#### [/HEAP](heap.md)
#### [/HIGHENTROPYVA](highentropyva.md)
#### [/INTEGRITYCHECK](integritycheck.md)
#### [/LARGEADDRESSAWARE](largeaddressaware.md)
#### [/NOLOGO (EDITBIN)](nologo-editbin.md)
#### [/NXCOMPAT](nxcompat.md)
#### [/REBASE](rebase.md)
#### [/RELEASE](release.md)
#### [/SECTION(EDITBIN)](section-editbin.md)
#### [/STACK](stack.md)
#### [/SUBSYSTEM](subsystem.md)
#### [/SWAPRUN](swaprun.md)
#### [/TSAWARE](tsaware.md)
#### [/VERSION](version.md)
## [DUMPBIN 참조](dumpbin-reference.md)
### [DUMPBIN 명령줄](dumpbin-command-line.md)
### [DUMPBIN 옵션](dumpbin-options.md)
#### [/ALL](all.md)
#### [/ARCHIVEMEMBERS](archivemembers.md)
#### [/CLRHEADER](clrheader.md)
#### [/DEPENDENTS](dependents.md)
#### [/DIRECTIVES](directives.md)
#### [/DISASM](disasm.md)
#### [/ERRORREPORT(dumpbin.exe)](errorreport-dumpbin-exe.md)
#### [/EXPORTS](dash-exports.md)
#### [/FPO](fpo.md)
#### [/HEADERS](headers.md)
#### [/IMPORTS (DUMPBIN)](imports-dumpbin.md)
#### [/LINENUMBERS](linenumbers.md)
#### [/LINKERMEMBER](linkermember.md)
#### [/LOADCONFIG](loadconfig.md)
#### [/OUT (DUMPBIN)](out-dumpbin.md)
#### [/PDATA](pdata.md)
#### [/PDBPATH](pdbpath.md)
#### [/RANGE](range.md)
#### [/RAWDATA](rawdata.md)
#### [/RELOCATIONS](relocations.md)
#### [/SECTION (DUMPBIN)](section-dumpbin.md)
#### [/SUMMARY](summary.md)
#### [/SYMBOLS](symbols.md)
#### [/TLS](tls.md)
## [ERRLOOK 참조](errlook-reference.md)
### [값 편집 컨트롤](value-edit-control.md)
### [오류 메시지 편집 컨트롤](error-message-edit-control.md)
### [모듈 단추](modules-button.md)
### [찾기 단추](look-up-button.md)
## [XDCMake 참조](xdcmake-reference.md)
## [BSCMAKE 참조](bscmake-reference.md)
### [찾아보기 정보 파일 빌드: 개요](building-browse-information-files-overview.md)
### [.Bsc 파일 빌드](building-a-dot-bsc-file.md)
#### [.Sbr 파일 만들기](creating-an-dot-sbr-file.md)
#### [BSCMAKE에서 .Bsc 파일을 빌드하는 방법](how-bscmake-builds-a-dot-bsc-file.md)
### [BSCMAKE 명령줄](bscmake-command-line.md)
### [BSCMAKE 명령 파일(지시 파일)](bscmake-command-file-response-file.md)
### [BSCMAKE 옵션](bscmake-options.md)
### [BSCMAKE 종료 코드](bscmake-exit-codes.md)
# [컴파일러 오류 C999 ~ C2499](../../error-messages/compiler-errors-1/TOC.md)
# [컴파일러 오류 C2500 ~ C3999](../../error-messages/compiler-errors-2/TOC.md)
# [컴파일러 경고](../../error-messages/compiler-warnings/TOC.md)
# [도구 오류](../../error-messages/tool-errors/TOC.md)
# [C++용 XML 설명서](xml-documentation-visual-cpp.md)
## [C++ 문서 주석에 대한 권장 태그](recommended-tags-for-documentation-comments-visual-cpp.md)
### [<c>](c-visual-cpp.md)
### [<code>](code-visual-cpp.md)
### [<example>](example-visual-cpp.md)
### [<exception>](exception-visual-cpp.md)
### [<include>](include-visual-cpp.md)
### [<list>](list-visual-cpp.md)
### [<para>](para-visual-cpp.md)
### [<param>](param-visual-cpp.md)
### [<paramref>](paramref-visual-cpp.md)
### [<permission>](permission-visual-cpp.md)
### [<remarks>](remarks-visual-cpp.md)
### [<returns>](returns-visual-cpp.md)
### [<see>](see-visual-cpp.md)
### [<seealso>](seealso-visual-cpp.md)
### [<summary>](summary-visual-cpp.md)
### [<value>](value-visual-cpp.md)
## [.Xml 파일 처리](dot-xml-file-processing.md)
## [C++ 문서 태그의 구분 기호](delimiters-for-visual-cpp-documentation-tags.md)
| 56.060429 | 140 | 0.679787 | kor_Hang | 0.881654 |
03d16dd243d8810e5bf042de91ff9cfbfa67747f | 1,923 | md | Markdown | linearReg/linear_reg.md | 3upperm2n/ml | 37811d4c8e41aa877960e5b8c8d5cf3335ba20a3 | [
"MIT"
] | null | null | null | linearReg/linear_reg.md | 3upperm2n/ml | 37811d4c8e41aa877960e5b8c8d5cf3335ba20a3 | [
"MIT"
] | null | null | null | linearReg/linear_reg.md | 3upperm2n/ml | 37811d4c8e41aa877960e5b8c8d5cf3335ba20a3 | [
"MIT"
] | null | null | null |
```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model
import pandas as pd
```
/home/leiming/anaconda2/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```python
def load_data(filename):
cols=["N", "Time_in_us"]
return pd.read_csv(filename, names=cols)
df = load_data('dev0_h2d_stepsize4.csv')
X = df["N"]
y = df["Time_in_us"]
X = np.array(X)
y = np.array(y)
X = X.reshape(-1,1)
y = y.reshape(-1,1)
```
```python
lr_model = linear_model.LinearRegression()
lr_model.fit(X, y)
```
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
```python
print lr_model.coef_
print lr_model.intercept_
```
[[ 0.00062165]]
[ 1.8122791]
```python
print "Mean squared error: %.6f" % np.mean((lr_model.predict(X) - y) ** 2)
print('Variance score: %.6f' % lr_model.score(X, y))
```
Mean squared error: 0.001865
Variance score: 0.999422
### permutation
```python
np.random.seed(42)
sample = np.random.choice(df.index, size= int(len(df) * 0.9), replace=False)
data, test_data = df.ix[sample], df.drop(sample)
```
```python
X_train, y_train = data["N"], data["Time_in_us"]
X_test, y_test = test_data["N"], test_data["Time_in_us"]
X_train = X_train.reshape(-1,1)
y_train = y_train.reshape(-1,1)
X_test = X_train.reshape(-1,1)
y_test = y_train.reshape(-1,1)
lr_model.fit(X_train, y_train)
print lr_model.coef_
print lr_model.intercept_
```
[[ 0.0006215]]
[ 1.81329683]
```python
print "Mean squared error: %.6f" % np.mean((lr_model.predict(X_test) - y_test) ** 2)
print('Variance score: %.6f' % lr_model.score(X_test, y_test))
```
Mean squared error: 0.001904
Variance score: 0.999411
| 17.971963 | 177 | 0.680187 | eng_Latn | 0.260087 |
03d1d562c04e91ed03b5b45ce11fd264c501b473 | 4,406 | md | Markdown | docs/dev/src/maintenance/unreviewed-code-triage.md | stephandruskat/hexatomic | 8b8ea0feb105ccc3ca44048f05716843a8fca179 | [
"Apache-2.0"
] | 7 | 2020-08-28T19:51:18.000Z | 2021-11-10T21:05:13.000Z | docs/dev/src/maintenance/unreviewed-code-triage.md | stephandruskat/hexatomic | 8b8ea0feb105ccc3ca44048f05716843a8fca179 | [
"Apache-2.0"
] | 323 | 2018-11-08T13:39:09.000Z | 2022-03-24T09:52:32.000Z | docs/dev/src/maintenance/unreviewed-code-triage.md | stephandruskat/hexatomic | 8b8ea0feb105ccc3ca44048f05716843a8fca179 | [
"Apache-2.0"
] | 13 | 2019-08-01T11:49:46.000Z | 2021-05-26T17:52:36.000Z | # Periodic unreviewed code triage
Hotfixes can be merged without code review if there is only one maintainer available.
To still be able to perform some kind of review, periodic triages of yet unreviewed code are performed by one of the maintainers.
These triages must be done *at the beginning of each quarter of the calendar year*.
The time of the last triage and any findings are logged in the file `UNREVIEWED_CODE_TRIAGE.md` in the main repository.
## Determining changes since the last triage
Open the list of all pull requests (PRs) in GitHub, filter them by the `unreviewed` label, and sort them to show the oldest ones first.
If there are no pull requests with such a label, the triage is finished and you can proceed to the "Documenting the results" section.
Next, open the file `UNREVIEWED_CODE_TRIAGE.md` and check which commit of the master branch was last triaged.
Create a new branch from this commit.
> **This branch must never be merged into another branch.**
For each of the pull requests with the `unreviewed` label, merge the tip commit of the pull request into your branch.
At this stage, conflicts may appear, which may, for example, be due to the addition of reviewed PRs with larger features in between non-reviewed hotfix PRs.
If there is any conflict, abort the merge, review the current changes up to this point, and start a new triage with the last commit before the merge commit of the problematic PR as a starting point.
For example, in the following history of pull requests, there would be two triages: one triage for PRs 1 and 2, starting from the last triaged master commit, and one triage from the PR X commit (which was reviewed) that includes PRs 4 and 5.
```plain
last triaged commit
|
v
+ <-- PR 1 (not reviewed)
|
v
+ <-- PR 2 (not reviewed)
|
v
+ <-- PR X (reviewed)
|
v
conflict! <-- PR 4 (not reviewed)
|
v <-- PR 5 (not reviewed)
```
When you have merged all the pull requests included in this triage, compare the changes of the branch with the previously triaged commit and review the code changes.
To compare the changes, you can use the git command line (`git diff <last-triaged-commit> HEAD`), a graphical git diff tool, or the GitHub "compare branches" feature.
For each problem you find, create a new issue in the GitHub issue tracker and add it to the `UNREVIEWED_CODE_TRIAGE.md` file.
At the end of the triage and after all issues have been documented, remove the `unreviewed` label from the triaged pull requests.
## Documenting the results
Each triage should have its own section in the `UNREVIEWED_CODE_TRIAGE.md` file.
New triage results are added to the top of the file.
Use the date on which the triage has been completed as the header of the triage section.
If a conflict occurred during the merge of the PRs, and you have had to split the set of PRs that you need to triage as explained above, also add "Set" and the consecutive number of the set in the order of processing, e.g., `# 2020-08-27 (Set 1)`, `# 2020-08-27 (Set 2)`, etc.
Below the heading, record
- a link to the GitHub user page of the person who did the triage,
- the revision hash range **on the master branch** of the first and last commit you have triaged in the triaged set (or the whole triage if you didn't have to split it),
- a list of PRs that where included in the triage.
Then, add a list entry for each issue you found, with the issue title and number URL, for example:
```markdown
- Resolve settings for running from within Eclipse in docs #197
- TestGraphEditor.testShowSaltExample times out #202
```
### Examples for triage results
First part of a triage which was split due to conflicts.
```markdown
## 2020-08-07 (Set 1)
Triage done by [thomaskrause](github.com/thomaskrause).
Revison range: 758884326b3e3a6b29f54418158e4cc0204be525..874720de8fd6d5b415edad9a66412719abf49279
PRs: #199, #127, #138
### Issues
- Resolve settings for running from within Eclipse in docs #197
- TestGraphEditor.testShowSaltExample times out #202
```
Example file section for a triage where no unreviewed pull requests have been found. You can omit the start revision hash in this case.
```markdown
## 2020-04-01
Triage done by [thomaskrause](github.com/thomaskrause).
PRs: none
Revision range: ..fbc424bc0918e16fc78d07307c4cca47bc1620f1
```
| 46.378947 | 276 | 0.748071 | eng_Latn | 0.999251 |
03d1d99a86216f9706e26cd81ba5ca3b3db18b25 | 1,520 | md | Markdown | catboost/docs/en/references/text-processing__specification-example.md | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | catboost/docs/en/references/text-processing__specification-example.md | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | catboost/docs/en/references/text-processing__specification-example.md | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z | # Text processing JSON specification example
```json
"text_processing_options" : {
"tokenizers" : [{
"tokenizer_id" : "Space",
"delimiter" : " ",
"lowercasing" : "true"
}],
"dictionaries" : [{
"dictionary_id" : "BiGram",
"gram_order" : "2"
}, {
"dictionary_id" : "Word",
"gram_order" : "1"
}],
"feature_processing" : {
"default" : [{
"dictionaries_names" : ["Word"],
"feature_calcers" : ["BoW"],
"tokenizers_names" : ["Space"]
}],
"1" : [{
"tokenizers_names" : ["Space"],
"dictionaries_names" : ["BiGram", "Word"],
"feature_calcers" : ["BoW"]
}, {
"tokenizers_names" : ["Space"],
"dictionaries_names" : ["Word"],
"feature_calcers" : ["NaiveBayes"]
}]
}
}
```
In this example:
- A single split-by-delimiter tokenizer is specified. It lowercases tokens after splitting.
- Two dictionaries: unigram (identified <q>Word</q>) and bigram (identified <q>BiGram</q>).
- Two feature calcers are specified for the second text feature:
- {{ dictionary__feature-calcers__BoW }}, which uses the <q>BiGram</q> and <q>Word</q> dictionaries.
- {{ dictionary__feature-calcers__NaiveBayes }}, which uses the <q>Word</q> dictionary.
- A single feature calcer is specified for all other text features: {{ dictionary__feature-calcers__BoW }}, which uses the <q>Word</q> dictionary.
| 31.666667 | 146 | 0.568421 | eng_Latn | 0.724631 |
03d21580e461d1b75850d2a863e910c2a6d99cc1 | 2,419 | md | Markdown | Help-for-Mac.md | meiwanlanjun/paddle-mobile | b63dfd244afd88c491098a2038b26cb8e4572138 | [
"MIT"
] | 48 | 2017-05-05T13:18:22.000Z | 2022-01-13T08:23:38.000Z | Help-for-Mac.md | ImageMetrics/paddle-mobile | 8edfa0ddfc542962ba4530410bd2d2c9b8d77cc4 | [
"MIT"
] | 1 | 2019-03-05T07:48:52.000Z | 2019-03-07T06:09:06.000Z | Help-for-Mac.md | ImageMetrics/paddle-mobile | 8edfa0ddfc542962ba4530410bd2d2c9b8d77cc4 | [
"MIT"
] | 18 | 2018-06-02T07:19:16.000Z | 2022-02-05T09:10:43.000Z | ## OS X Installation
We highly recommend using [HomeBrew](https://brew.sh/) as the package manager,and [CMake](https://cmake.org/) as the build tool, in the following, we assume you have installed HomeBrew and CMake already.
### General dependencies
**ProtoBuffer**
Install ProtoBuffer via Homebrew:
```
brew install protobuffer
```
the caffe.pb.cc & caffe.pb.h in tools, are gernerated by proto3.4.0, if the version is incompatible with the protoc you just installed, you can regenerate the source code and replcace the caff.pb.cc and caffe.pb.h
Generate source code with protoc:
```bash
cd tools
protoc --proto_path=. --cpp_out=. caffe.proto
```
**NDK**
To install the Android NDK, simply expand the archive in the folder where you want to install it.
After installing the NDK, define an environment variable identifying the path to the NDK. For example,
on OS X, you might add the following to your .bash_profile:
```bash
export NDK_ROOT=//path to your NDK
```
Make sure the environment variable name is 'NDK_ROOT', or the build.sh can't find NDK tools while compiling for Android.
### Model Converting
* To get the caffe2mdl binary file:
build the tools dir alone:
```bash
cd tools
cmake .
make
```
or You can build the whole project:
```bash
sh build.sh mac
```
the executable file will be created in the build tree directory corresponding to the source tree directory.
* To convert caffe model to MDL
`./caffe2mdl deploy.prototxt full.caffemodel`
the third para is optional, if you want to test the model produced by this script, provide color value array of an image as the third parameter ,like this:
```bash
./caffe2mdl model.prototxt model.caffemodel data
```
* How to generate the data file
The data file is an plain text file, numbers are seperated with a space, the numbers is organized in the order of RGB,like this:
```
RRRRRRRR…GGGGGGGGGG……BBBBBBBBBBB……
```
It should be noted that the color value has been preprocessed according to the model, take googlenet in our directory for example, each value of RGB array has been substracted by 148 (the mean value of the model).
### Test on Mac
```bash
./build.sh mac
cd build/release/x86/build
./mdlTest
```
* For obect detection task, the result array indicates coordinate of the rect
* For classification task, the result array indicates the probability of classification
| 25.734043 | 214 | 0.739562 | eng_Latn | 0.997148 |
03d29ca7c10bcd5bee75c9e4a3fbbed4a39de5ba | 2,734 | md | Markdown | docs/ambassador.md | tuxerrante/kubespray | 4d20521ab2bc84114fc553d1de135f86b4499b62 | [
"Apache-2.0"
] | null | null | null | docs/ambassador.md | tuxerrante/kubespray | 4d20521ab2bc84114fc553d1de135f86b4499b62 | [
"Apache-2.0"
] | null | null | null | docs/ambassador.md | tuxerrante/kubespray | 4d20521ab2bc84114fc553d1de135f86b4499b62 | [
"Apache-2.0"
] | null | null | null |
# Ambassador
The [Ambassador API Gateway](https://github.com/datawire/ambassador) provides all the functionality of a traditional ingress controller
(e.g., path-based routing) while exposing many additional capabilities such as authentication,
URL rewriting, CORS, rate limiting, and automatic metrics collection.
## Installation
### Configuration
* `ingress_ambassador_namespace` (default `ambassador`): namespace for installing Ambassador.
* `ingress_ambassador_update_window` (default `0 0 * * SUN`): _crontab_-like expression
for specifying when the Operator should try to update the Ambassador API Gateway.
* `ingress_ambassador_version` (defaulkt: `*`): SemVer rule for versions allowed for
installation/updates.
* `ingress_ambassador_secure_port` (default: 443): HTTPS port to listen at.
* `ingress_ambassador_insecure_port` (default: 80): HTTP port to listen at.
### Ambassador Operator
This Ambassador addon deploys the Ambassador Operator, which in turn will install
the [Ambassador API Gateway](https://github.com/datawire/ambassador) in
a Kubernetes cluster.
The Ambassador Operator is a Kubernetes Operator that controls Ambassador's complete lifecycle
in your cluster, automating many of the repeatable tasks you would otherwise have to perform
yourself. Once installed, the Operator will complete installations and seamlessly upgrade to new
versions of Ambassador as they become available.
## Usage
The following example creates simple http-echo services and an `Ingress` object
to route to these services.
Note well that the [Ambassador API Gateway](https://github.com/datawire/ambassador) will automatically load balance `Ingress` resources
that include the annotation `kubernetes.io/ingress.class=ambassador`. All the other
resources will be just ignored.
```yaml
kind: Pod
apiVersion: v1
metadata:
name: foo-app
labels:
app: foo
spec:
containers:
- name: foo-app
image: hashicorp/http-echo
args:
- "-text=foo"
---
kind: Service
apiVersion: v1
metadata:
name: foo-service
spec:
selector:
app: foo
ports:
# Default port used by the image
- port: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: ambassador
spec:
rules:
- http:
paths:
- path: /foo
backend:
serviceName: foo-service
servicePort: 5678
```
Now you can test that the ingress is working with curl:
```console
$ export AMB_IP=$(kubectl get service ambassador -n ambassador -o 'go-template={{range .status.loadBalancer.ingress}}{{print .ip "\n"}}{{end}}')
$ curl $AMB_IP/foo
foo
```
| 31.068182 | 145 | 0.730066 | eng_Latn | 0.908128 |
03d2d37599f0dc5c1f63579dd7cdf2cdfd12db87 | 6,146 | md | Markdown | POST/hr2day/README.md | Simeng-unique/CMAQ-changed | cb83401728ed7ea1bb19a6986c0acc84dabe11a4 | [
"CC0-1.0"
] | 2 | 2019-06-30T18:27:13.000Z | 2020-12-01T15:52:05.000Z | POST/hr2day/README.md | Simeng-unique/CMAQ-changed | cb83401728ed7ea1bb19a6986c0acc84dabe11a4 | [
"CC0-1.0"
] | 1 | 2019-08-20T20:10:27.000Z | 2019-08-20T20:10:27.000Z | POST/hr2day/README.md | Simeng-unique/CMAQ-changed | cb83401728ed7ea1bb19a6986c0acc84dabe11a4 | [
"CC0-1.0"
] | 5 | 2017-11-03T14:32:33.000Z | 2020-09-28T11:55:42.000Z | hr2day
========
This Fortran program creates gridded IOAPI files with daily values from gridded IOAPI files containing hourly values.
## Environment Run Time Variables:
```
USELOCAL use local time when computing daily values (default N)
USEDST use daylight savings time when computing daily values (default N)
TZFILE location of time zone data file, tz.csv (this is a required input file)
PARTIAL_DAY allow use of partial days when computing daily values. If this is set to N,
the program will require at least 18 out of 24 values to be present in the
time zone of interest to compute a daily value (default N)
HROFFSET constant hour offset between desired time zone and GMT to use when computing
daily values. For example, if one wants to compute daily values with respect
to Eastern Standard Time (EST) and the time zone for the IOAPI input file is
GMT, this should be set to 5 (default 0).
START_HOUR starting hour to use when computing daily values (default 0)
END_HOUR ending hour to use when computing daily values (default 23)
HOURS_8HRMAX Number of 8hr values to use when computing daily maximum 8hr ozone.
Allowed values are 24 (use all 8-hr averages with starting hours
from 0 - 23 hr local time) and 17 (use only the 17 8-hr averages
with starting hours from 7 - 23 hr local time) (default is 24)
M3_FILE_# List of input IOAPI file names with hourly values.
If only a single input file is provided, INFILE can be used instead of
M3_FILE_1.
The program will concatenate time steps from all input files to construct the
longest possible time record which can be processed. Duplicate time steps are
eliminated.
The maximum number of IOAPI files is set to be one less than the global IOAPI parameter MXFILE3.
Since this parameter is currently set to 64 (https://www.cmascenter.org/ioapi/documentation/all_versions/html/TUTORIAL.html),
the maximum number of IOAPI input files is 63.
Supported map projections are Lambert conformal, polar
stereographic, and lat/lon
OUTFILE output IOAPI file name with computed daily values
SPECIES_# Defines the name, units, expression and daily operation for each variable in OUTFILE. For configuration options see below.
```
## Environment Run Time Variables (not required):
```
IOAPI_ISPH projection sphere type (use type #20 to match WRF/CMAQ)
(ioapi default is 8)
START_DATE Optional desired first and last processing date.
END_DATE The program will adjust the requested dates if the desired range is not covered
by the input file(s). If these dates are not specified, the processing will be
performed for the longest possible time record that can be derived from the
model input file(s).
```
## Species and operator definitions:
Defines the name, units, expression and daily operation for each variable in OUTFILE. These definitions are specified by environment variables SPECIES_[n]
```
format: SPECIES_1 = "[variable1_name], [variable1_units], [model_expression1], [operation1]"
SPECIES_2 = "[variable2_name], [variable2_units], [model_expression2], [operation2]"
variable[n]_name: desired name of the daily output variable, maximum 16 characters
variable[n]_units: units of the daily output variable, maximum 16 characters
model_expression[n]: Formular expressions supports operators +-*/ and are evaluated from
left to right using precedence order of */+-. Order of evaluation can
be forced by use of parentheses. When part of an expression is enclosed
in parentheses, that part is evaluated first. Other supported functions
include "LOG", "EXP", "SQRT", and "ABS". In addition, expresssions can be
combined to create conditional statements of the form:
"expression_for_condition ? expresssion_if_true : expression_if_false".
operation[n]: daily operation to perform. Options are
SUM - sums the 24 hour values
AVG- sums the 24 values and divides by 24
MIN- uses the minimum hourly value
MAX- uses the maximum hourly value
HR@MIN - hour of the minimum hourly value
HR@MAX - hour of the maximum hourly value
@MAXT - uses the hourly value at maximum temperature
MAXDIF - uses the maximum hourly change
8HRMAX - uses the maximum 8 hour period
W126 - computes the secondary ozone standard, weighted average between 8am & 7pm
@8HRMAXO3 - averages the value within the 8-hr-max ozone period
HR@8HRMAX - Starting hour of the 8-hr-max period
SUM06 - computes the SUM06 ozone value
examples:
setenv SPECIES_1 "O3,ppbV,1000*O3,8HRMAX" (computes the 8-hr daily maximum value of 1000 * O3 from INFILE
(assumed to be in ppmV) and writes the result to OUTFILE as O3
with units ppbV)
setenv SPECIES_2 "ASO4J_AVG,ug/m3,ASO4J,AVG" (computes the 24-hr average value of ASO4J from INFILE
(assumed to be in ug/m3) and writes the result to OUTFILE as
ASO4J_AVG with units ug/m3)
setenv SPECIES_3 "ASO4J_MAX,ug/m3,ASO4J,MAX" (computes the daily maximum value of ASO4J from INFILE
(assumed to be in ug/m3) and writes the result to OUTFILE as
ASO4J_MAX with units ug/m3)
```
## Compile hr2day source code:
Execute the build script to compile hr2day:
```
cd $CMAQ_HOME/POST/hr2day/scripts
./bldit_hr2day.csh [compiler] [version] |& tee build_hr2day.log
```
## Run hr2day:
Edit the sample run script (run.hr2day.make8hrmax), then run:
```
./run.hr2day |& tee hr2day.log
```
Check the log file to ensure complete and correct execution without errors.
| 52.529915 | 154 | 0.675887 | eng_Latn | 0.993173 |
03d408b49e0ddd24cf328a9ad08d05905f034ecc | 8,608 | md | Markdown | container_files/public_html/doc/builtin/Url.md | kstepanmpmg/mldb | f78791cd34d01796705c0f173a14359ec1b2e021 | [
"Apache-2.0"
] | 665 | 2015-12-09T17:00:14.000Z | 2022-03-25T07:46:46.000Z | container_files/public_html/doc/builtin/Url.md | tomzhang/mldb | a09cf2d9ca454d1966b9e49ae69f2fe6bf571494 | [
"Apache-2.0"
] | 797 | 2015-12-09T19:48:19.000Z | 2022-03-07T02:19:47.000Z | container_files/public_html/doc/builtin/Url.md | matebestek/mldb | f78791cd34d01796705c0f173a14359ec1b2e021 | [
"Apache-2.0"
] | 103 | 2015-12-25T04:39:29.000Z | 2022-02-03T02:55:22.000Z | # Files and URLs
MLDB gives users full control over where and how data is persisted. Datasets can be saved and loaded from files. Procedures can create files and Functions can load up parameters from files.
If data is stored in a system such as S3 or HDFS, it is possible to run multiple instances of MLDB so as to distribute computational load. For example, one powerful and expensive machine could handle model training, storing model files on S3, while a load-balanced cluster of smaller, cheaper machines could load these model files and do real-time scoring over HTTP.
## Protocol Handlers
The following protocols are available for URLs:
- `http://` and `https://`: standard HTTP, to get a file from an HTTP server on the public
internet or a private intranet.
- `s3://`: Refers to a file on Amazon's S3 service. If the file is not public, then
credentials must be added.
- `sftp://` Refers to a file on an SFTP server. Credentials must be added. If a custom port is used,
it must simply be part of the url. (For example, `sftp://host.com:1234/`.) The same is true
for the credentials location parameter. (To continue with the same example, `host.com:12345`.)
- `file://`: Refers to a file inside the MLDB container. These resources are only
accessible from the same container that created them. A relative path (for example
`file://filename.txt`) has two slashes after `file:`, and will create a file in the
working directory of MLDB, that is the `mldb_data` directory. An absolute path has
three slashes after the `file:`, and will create a path relative to the root
directory of the MLDB container (for example, `file:///mldb_data/filename.txt`).
- `mem://` Refers to an object that will be kept in memory, and will be lost
when the container stops. These are normally used for testing or to hold
temporary data.
A URL that is passed without a protocol will cause an error.
For examples of how to use the above protocol handlers, take a look at the
 and the .
## Compression support
MLDB supports compression and decompression of files using the `gzip`, `bzip2`,
`xz`, `zstd` and `lz4` algorithms. These are chosen and transparently applied
based upon the file extension.
In addition, it supports most compression schemes including `zip` and `rar` when
opening archives (see below). This decompression is performed transparently
to the user.
It is recommended that `zstd` be used for most purposes, as it provides a
very good tradeoff between speed and compression ratio. See
[the Zstandard page](https://facebook.github.io/zstd/)
for more information and a comparison table.
## Accessing files inside archives
It is possible to extract files from archives, by using the `archive+` scheme
prefix. This is done by:
1. Obtain a URI to the archive itself, for example `http://site.com/files.zip'
2. Add the prefix `archive+` to the file, to show that we should be entering
in to the archive. So our example becomes `archive+http://site.com/files.zip`.
3. Add a `#` character and the path of the file within the archive to extract.
So the final URI to load becomes `archive+http://site.com/files.zip#path/file.txt`
Most archive formats, including compressed, are supported with this scheme.
Note that extracting a file from an archive may require loading the entire
archive, so for complex manipulation of archives it's still better to use
external tools.
## Credentials
MLDB can store credentials and supply them whenever required.
MLDB exposes a REST API which is accessible at the route `/v1/credentials`. It
is also possible to specify credentials when starting MLDB. Please refer
to the [BatchMode](BatchMode.md) Section for details.
### Credential rules
Credentials are specified by giving rules. A credential rule contains two parts:
1. The credentials, which gives the actual secret information to access a resource as well
as the location of that resource and other metadata required for the service
hosting the resource to complete the request;
2. A rule which includes the resource type (e.g. S3) and a resource path prefix.
The rule allows a user to store many credentials for the same service
and control when the credentials is used.
When a resource that requires credentials is requested, MLDB will scan the stored
rules for matching credentials. The matching is performed on the resource path
using prefixes. For example, suppose that two sets of credentials are stored
in MLDB for the AWS S3 service, one set with read-only access
```
{
"store":{
"resource":"s3://public.example.com",
"resourceType":"aws:s3",
"credential":{
"protocol":"http",
"location":"s3.amazonaws.com",
"id":"<READ ACCESS KEY ID>",
"secret":"<READ ACCESS KEY>",
"validUntil":"2030-01-01T00:00:00Z"
}
}
}
```
and one set with read-write access
```
{
"store":{
"resource":"s3://public.example.com/mystuff",
"resourceType":"aws:s3",
"credential":{
"protocol":"http",
"location":"s3.amazonaws.com",
"id":"<READ-WRITE ACCESS KEY ID>",
"secret":"<READ-WRITE ACCESS KEY>",
"validUntil":"2030-01-01T00:00:00Z"
}
}
}
```
When requesting this resource `s3://public.example.com/text.csv`, MLDB will match the first
rule only because its `resource` field is a prefix of the resource path and it will therefore
use the read-only credentials. On the other-hand, if the resource
`s3://public.example.com/mystuff/text.csv` is requested, both rules will match but
MLDB will use the second one because it matches a longer prefix of the resource path.
### Credentials storage
Unlike other resources stored in MLDB, the credentials are persisted to disk under the `.mldb_credentials`
sub-folder of the `mldb_data` folder. The credentials are persisted in clear so it is important to
protect them and users are encourage to put in place the proper safeguards on that location.
Deleting a credential entity will also delete its persisted copy.
### Credentials objects
Credentials are represented as JSON objects with the following fields.

with the `credential` field (which define the actual credentials and the resource to
access) as follows:

### Storing credentials
You can `PUT` (without an ID) or `POST` (with an ID) the following object to
`/v1/credentials` in order to store new credentials:

### Example: storing Amazon Web Services S3 credentials
The first thing that you will probably want to do is to post some AWS S3
credentials into MLDB, as otherwise you won't be able to
do anything on Amazon.
The way to do this is to `PUT` to `/v1/credentials/<id>`:
```python
mldb.put("/v1/credentials/mys3creds", {
"store":
{
"resource":"s3://",
"resourceType":"aws:s3",
"credential":
{
"protocol": "http",
"location":"s3.amazonaws.com",
"id":<ACCESS KEY ID>,
"secret":<ACCESS KEY>,
"validUntil":"2030-01-01T00:00:00Z"
}
}
})
```
That will be stored in the `mldb_data/.mldb_credentials/mys3creds` file, and
automatically reloaded next time that the container gets loaded up.
Be very careful about the firewall rules on your machine; credentials
are stored in plain text on the filesystem inside the container, and
it is possible to run arbitrary Python code that can access that filesystem
from plugins.
It is dangerous to use credentials that
have more permission than is strictly required (usually, read and possibly write
access to a specific path on a bucket). See, for example, the [Amazon Web Services](http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html)
page on best practices with access keys.
## Credentials Resources
This section lists, per resource type, the type of credentials calls that are made
for that resource.
### Amazon Web Services
Requests to AWS all start with `resourceType` of `aws:`.
#### Amazon S3
Requests to read from an Amazon S3 account, under `s3://` will result in a
credentials request with the following parameters:
- `resourceType`: `aws:s3`
- `resource`: `s3://...`
The `extra` parameters that can be returned are:
- `bandwidthToServiceMbps`: if this is set, then it indicates the available total
bandwidth to the S3 service in mbps (default 20). This influences the timeouts
that are calculated on S3 requests.
| 39.851852 | 366 | 0.735827 | eng_Latn | 0.997308 |
03d451aa8debdaacb9c13b92be3c79d8ab58c2a0 | 624 | md | Markdown | PULL_REQUEST_TEMPLATE.md | rubenbase/rufrontgen | 5efb3a6059b27faf6bcb168685e4c670250c1633 | [
"MIT"
] | 3 | 2018-07-19T22:32:40.000Z | 2018-09-26T11:15:58.000Z | PULL_REQUEST_TEMPLATE.md | rubenbase/rufrontgen | 5efb3a6059b27faf6bcb168685e4c670250c1633 | [
"MIT"
] | 28 | 2018-08-07T14:45:25.000Z | 2022-02-26T12:49:16.000Z | PULL_REQUEST_TEMPLATE.md | rubenbase/rufrontgen | 5efb3a6059b27faf6bcb168685e4c670250c1633 | [
"MIT"
] | 3 | 2018-10-10T09:28:00.000Z | 2018-11-16T16:39:38.000Z | **Before submitting a pull request,** please make sure the following is done:
Flow (current branch) action
(master) git pull origin master
(master) git checkout -b X
(X) hacemos los cambios que ibamos a hacer
(X) git add .
(X) git commit -m "He arreglado tal cosa"
(X) git checkout master
(master) git pull origin master
(master) git checkout X
(X) git merge master
(X) git push origin X
Vas a github y abres un pull request
una vez hecho el pull request, vas a master, y borras la rama en la que estabas: git branch -D X
y a continuación: git pull origin master
No hay que pushear la rama sino has acabado con tu tarea.
| 31.2 | 96 | 0.746795 | spa_Latn | 0.726027 |
03d46ee880acef7c116a16ee25b8ce6245607a5a | 30 | md | Markdown | docs/state-management-in-socless.md | Kerl1310/socless | b638f7a77bb300657e5d91ad0569b4746777440c | [
"Apache-2.0"
] | null | null | null | docs/state-management-in-socless.md | Kerl1310/socless | b638f7a77bb300657e5d91ad0569b4746777440c | [
"Apache-2.0"
] | null | null | null | docs/state-management-in-socless.md | Kerl1310/socless | b638f7a77bb300657e5d91ad0569b4746777440c | [
"Apache-2.0"
] | null | null | null | # State Management in SOCless
| 15 | 29 | 0.8 | eng_Latn | 0.9567 |
03d4b1c641e6d0d55ea5a7725205329ea5ef01d2 | 1,082 | md | Markdown | AlchemyInsights/creating-inbox-rules-for-shared-mailboxes.md | isabella232/OfficeDocs-AlchemyInsights-pr.ro-RO | 03015a129e62a25629e7430f7fa7208c74a08ad7 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-05-19T19:07:36.000Z | 2021-11-10T22:45:21.000Z | AlchemyInsights/creating-inbox-rules-for-shared-mailboxes.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.ro-RO | 7e915e888b922fd3272e8c1a0781a3b3373d8082 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:26:32.000Z | 2022-02-09T06:55:22.000Z | AlchemyInsights/creating-inbox-rules-for-shared-mailboxes.md | isabella232/OfficeDocs-AlchemyInsights-pr.ro-RO | 03015a129e62a25629e7430f7fa7208c74a08ad7 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-09T20:32:20.000Z | 2020-06-02T23:26:01.000Z | ---
title: Crearea regulilor de Inbox pentru cutiile poștale partajate
ms.author: pebaum
author: pebaum
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.custom:
- "930"
- "820"
- "1800021"
- "3500003"
ms.assetid: fd97c1c7-fc0a-466d-87d4-cbdaf6310ca1
ms.openlocfilehash: 6e5e4a0aabb76123ea98b91f84a76d56132695c2361f125b769a6f7fff7bdbaa
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: ro-RO
ms.lasthandoff: 08/05/2021
ms.locfileid: "53929281"
---
# <a name="creating-inbox-rules-for-shared-mailboxes"></a>Crearea regulilor de Inbox pentru cutiile poștale partajate
Puteți să adăugați reguli pentru o cutie poștală partajată la care aveți acces într-un mod similar cu modul în care adăugați reguli la propriul cont.
Asigurați-vă că faceți parte din cutia poștală partajată respectivă, apoi urmați pașii din acest articol: Adăugarea de reguli la [o cutie poștală partajată](https://support.office.com/article/b0963400-2a51-4c64-afc7-b816d737d164)
| 38.642857 | 229 | 0.81146 | ron_Latn | 0.987824 |
03d50000821557cddecc6e421463c51c97ab8483 | 44 | md | Markdown | README.md | Ashwin-op/QSTP_Personal_Blog | 6e7a94609891e072376faaf7d507e5e85234538f | [
"MIT"
] | 1 | 2020-07-10T15:25:22.000Z | 2020-07-10T15:25:22.000Z | README.md | Ashwin-op/QSTP_Personal_Blog | 6e7a94609891e072376faaf7d507e5e85234538f | [
"MIT"
] | null | null | null | README.md | Ashwin-op/QSTP_Personal_Blog | 6e7a94609891e072376faaf7d507e5e85234538f | [
"MIT"
] | 1 | 2020-07-16T02:48:09.000Z | 2020-07-16T02:48:09.000Z | # QSTP_Personal_Blog
A MERN Full-stack blog
| 14.666667 | 22 | 0.818182 | kor_Hang | 0.930659 |
03d59ac5ab8db9994f563bc7651b3e9808709cd6 | 961 | md | Markdown | _posts/2014-08-26-no30-back-from-the-dead.md | berlin-hack-and-tell/berlinhackandtell.rocks | 7ad6b6e5304573ed4e233b8f931bcfaadb5ca7ac | [
"CC0-1.0"
] | 16 | 2018-04-01T17:31:26.000Z | 2022-01-29T23:51:23.000Z | _posts/2014-08-26-no30-back-from-the-dead.md | berlin-hack-and-tell/berlinhackandtell.rocks | 7ad6b6e5304573ed4e233b8f931bcfaadb5ca7ac | [
"CC0-1.0"
] | 85 | 2017-10-08T23:14:56.000Z | 2021-11-19T13:08:37.000Z | _posts/2014-08-26-no30-back-from-the-dead.md | berlin-hack-and-tell/berlinhackandtell.rocks | 7ad6b6e5304573ed4e233b8f931bcfaadb5ca7ac | [
"CC0-1.0"
] | 14 | 2018-03-01T13:24:10.000Z | 2021-11-03T09:49:19.000Z | ---
layout: post
title: Berlin Hack & Tell \#30 - BACK FROM THE DEAD
date: 2014-08-26 13:37:42 +0100
time: '18:30'
location: '[c-base](https://www.c-base.org)'
meetupUrl: https://www.meetup.com/Berlin-Hack-and-Tell/events/181266562/
photosUrl: https://www.meetup.com/Berlin-Hack-and-Tell/photos/24105492/
---
* Deface by Chris
* [Memory hacking / ASLR](https://github.com/coolwanglu/scanmem_/pull/25) by [Sebastian](https://github.com/sriemer)
* Bitcoin millionaire by Michael
* freesound by Ivan
* [AJShA](https://github.com/ligi/AJSHA) by [ligi](http://ligi.de)
* [3DGPSTracker](http://michaelkreil.github.io/Track3D/web/) ([code](https://github.com/MichaelKreil/Track3D)) by [Michael Kreil](https://github.com/MichaelKreil)
* [QTrek](https://github.com/strathausen/qtrek.bas) - QBasic game by [Philipp](https://stratha.us) - **Hack of the month**
* [Rainbow Slice](https://github.com/SimonHFrost/rainbow-slice) by [Simon](https://twitter.com/simonhfrost)
| 50.578947 | 162 | 0.724246 | yue_Hant | 0.641273 |
03d5d77cb271f99ad442840a89cf4073b750fa85 | 1,006 | md | Markdown | src/pages/posts/2015-02-20-sitecore-7-1-unable-to-create-document-builder-please-check-your-configuration.md | Wesley-Lomax/blog-wesleylomax | cd075b0be0ad20c93a539d02945d46442c503207 | [
"MIT"
] | null | null | null | src/pages/posts/2015-02-20-sitecore-7-1-unable-to-create-document-builder-please-check-your-configuration.md | Wesley-Lomax/blog-wesleylomax | cd075b0be0ad20c93a539d02945d46442c503207 | [
"MIT"
] | null | null | null | src/pages/posts/2015-02-20-sitecore-7-1-unable-to-create-document-builder-please-check-your-configuration.md | Wesley-Lomax/blog-wesleylomax | cd075b0be0ad20c93a539d02945d46442c503207 | [
"MIT"
] | null | null | null | ---
templateKey: blog-post
title: Sitecore Lucene Exception – Unable to create document builder
date: 2015-02-20T12:15:00.000Z
description: Sitecore Lucene Exception – Unable to create document builder
featuredpost: false
featuredimage: /img/search.png
tags:
- '7.1'
- Lucene
- Sitecore
categories:
- Lucene
- Search
- Sitecore
---
Ran it to an Error message today flooding a Sitecore 7.1 instance log files with thousands of
> Unable to create document builder (). Please check your configuration. We will fallback to the default for now.
Happily <a href="https://sitecoreblog.marklowe.ch/2014/05/unable-to-create-document-builder/" target="_blank">this </a>post pointed me in the right direction but my solution was slightly different I needed to add line below to the**\App_Config\Include\Sitecore.ContentSearch.Lucene.DefaultIndexConfiguration.config** must have been missed out in a previous upgrade.
<script src="https://gist.github.com/Wesley-Lomax/1d5341450eb7e1f38665.js"></script>
| 41.916667 | 365 | 0.776342 | eng_Latn | 0.929431 |
03d5d8f429c358e59d924afcf33b34ab6e4fa9e1 | 254 | md | Markdown | spring-aop/README.md | lribeiros/tutorials | da97e4c5ac400f89be453e2bf3d6fa33a3c15e03 | [
"MIT"
] | 1 | 2019-12-05T13:05:51.000Z | 2019-12-05T13:05:51.000Z | spring-aop/README.md | lribeiros/tutorials | da97e4c5ac400f89be453e2bf3d6fa33a3c15e03 | [
"MIT"
] | null | null | null | spring-aop/README.md | lribeiros/tutorials | da97e4c5ac400f89be453e2bf3d6fa33a3c15e03 | [
"MIT"
] | 1 | 2018-06-06T15:41:53.000Z | 2018-06-06T15:41:53.000Z | ### Relevant articles
- [Implementing a Custom Spring AOP Annotation](http://www.baeldung.com/spring-aop-annotation)
- [Intro to AspectJ](http://www.baeldung.com/aspectj)
- [Spring Performance Logging](http://www.baeldung.com/spring-performance-logging) | 50.8 | 94 | 0.771654 | yue_Hant | 0.677081 |
03d625f575be2d43fb55b45a987b911d88af4176 | 34,743 | md | Markdown | docs/user/openstack/install_upi.md | bjhuangr/installer | 0fb93e8503e16c2996786ee9d58bdfdc9d1061c7 | [
"Apache-2.0"
] | 1 | 2019-11-28T06:40:54.000Z | 2019-11-28T06:40:54.000Z | docs/user/openstack/install_upi.md | bjhuangr/installer | 0fb93e8503e16c2996786ee9d58bdfdc9d1061c7 | [
"Apache-2.0"
] | 1 | 2019-10-16T18:57:53.000Z | 2019-10-24T15:10:04.000Z | docs/user/openstack/install_upi.md | bjhuangr/installer | 0fb93e8503e16c2996786ee9d58bdfdc9d1061c7 | [
"Apache-2.0"
] | 3 | 2019-09-09T18:00:48.000Z | 2019-11-05T17:42:16.000Z | # Installing OpenShift on OpenStack User-Provisioned Infrastructure
The User-Provisioned Infrastructure (UPI) process installs OpenShift in stages, providing opportunities for modifications or integrating with existing infrastructure.
It contrasts with the fully-automated Installer-Provisioned Infrastructure (IPI) which creates everything in one go.
With UPI, creating the cloud (OpenStack) resources (e.g. Nova servers, Neutron ports, security groups) is the responsibility of the person deploying OpenShift.
The installer is still used to generate the ignition files and monitor the installation process.
This provides a greater flexibility at the cost of a more explicit and interactive process.
Below is a step-by-step guide to a UPI installation that mimics an automated IPI installation; prerequisites and steps described below should be adapted to the constraints of the target infrastructure.
Please be aware of the [Known Issues](known-issues.md#known-issues-specific-to-user-provisioned-installations)
of this method of installation.
## Table of Contents
* [Prerequisites](#prerequisites)
* [Install Ansible](#install-ansible)
* [OpenShift Configuration Directory](#openshift-configuration-directory)
* [Red Hat Enterprise Linux CoreOS (RHCOS)](#red-hat-enterprise-linux-coreos-rhcos)
* [API and Ingress Floating IP Addresses](#api-and-ingress-floating-ip-addresses)
* [Install Config](#install-config)
* [Fix the Node Subnet](#fix-the-node-subnet)
* [Empty Compute Pools](#empty-compute-pools)
* [Modify NetworkType (Required for Kuryr SDN)](#modify-networktype-required-for-kuryr-sdn)
* [Edit Manifests](#edit-manifests)
* [Remove Machines and MachineSets](#remove-machines-and-machinesets)
* [Make control-plane nodes unschedulable](#make-control-plane-nodes-unschedulable)
* [Ignition Config](#ignition-config)
* [Infra ID](#infra-id)
* [Bootstrap Ignition](#bootstrap-ignition)
* [Master Ignition](#master-ignition)
* [Network Topology](#network-topology)
* [Security Groups](#security-groups)
* [Network, Subnet and external router](#network-subnet-and-external-router)
* [Subnet DNS (optional)](#subnet-dns-optional)
* [Bootstrap](#bootstrap)
* [Control Plane](#control-plane)
* [Control Plane Trunks (Kuryr SDN)](#control-plane-trunks-kuryr-sdn)
* [Wait for the Control Plane to Complete](#wait-for-the-control-plane-to-complete)
* [Access the OpenShift API](#access-the-openshift-api)
* [Delete the Bootstrap Resources](#delete-the-bootstrap-resources)
* [Compute Nodes](#compute-nodes)
* [Compute Nodes Trunks (Kuryr SDN)](#compute-nodes-trunks-kuryr-sdn)
* [Approve the worker CSRs](#approve-the-worker-csrs)
* [Wait for the OpenShift Installation to Complete](#wait-for-the-openshift-installation-to-complete)
* [Destroy the OpenShift Cluster](#destroy-the-openshift-cluster)
## Prerequisites
The file `inventory.yaml` contains the variables most likely to need customisation.
**NOTE**: some of the default pods (e.g. the `openshift-router`) require at least two nodes so that is the effective minimum.
The requirements for UPI are broadly similar to the [ones for OpenStack IPI][ipi-reqs]:
[ipi-reqs]: ./README.md#openstack-requirements
- OpenStack account with `clouds.yaml`
- input in the `openshift-install` wizard
- Nova flavors
- inventory: `os_flavor_master` and `os_flavor_worker`
- An external subnet you want to use for floating IP addresses
- inventory: `os_external_network`
- The `openshift-install` binary
- A subnet range for the Nova servers / OpenShift Nodes, that does not conflict with your existing network
- inventory: `os_subnet_range`
- A cluster name you will want to use
- input in the `openshift-install` wizard
- A base domain
- input in the `openshift-install` wizard
- OpenShift Pull Secret
- input in the `openshift-install` wizard
- A DNS zone you can configure
- it must be the resolver for the base domain, for the installer and for the end-user machines
- it will host two records: for API and apps access
For an installation with Kuryr SDN on UPI, you should also check the requirements which are the same
needed for [OpenStack IPI with Kuryr][ipi-reqs-kuryr]. Please also note that **RHEL 7 nodes are not
supported on deployments configured with Kuryr**. This is because Kuryr container images are based on
RHEL 8 and may not work properly when run on RHEL 7.
[ipi-reqs-kuryr]: ./kuryr.md#requirements-when-enabling-kuryr
## Install Ansible
This repository contains [Ansible playbooks][ansible-upi] to deploy OpenShift on OpenStack.
**Requirements:**
* Python
* Ansible
* Python modules required in the playbooks. Namely:
* openstackclient
* openstacksdk
* netaddr
### RHEL
From a RHEL 8 box, make sure that the repository origins are all set:
```sh
sudo subscription-manager register # if not done already
sudo subscription-manager attach --pool=$YOUR_POOLID # if not done already
sudo subscription-manager repos --disable=* # if not done already
sudo subscription-manager repos \
--enable=rhel-8-for-x86_64-baseos-rpms \
--enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
--enable=ansible-2.8-for-rhel-8-x86_64-rpms \
--enable=rhel-8-for-x86_64-appstream-rpms
```
Then install the packages:
```sh
sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr
```
Make sure that `python` points to Python3:
```sh
sudo alternatives --set python /usr/bin/python3
```
### Fedora
This command installs all required dependencies on Fedora:
```sh
sudo dnf install python-openstackclient ansible python-openstacksdk python-netaddr
```
[ansible-upi]: ../../../upi/openstack "Ansible Playbooks for Openstack UPI"
## OpenShift Configuration Directory
All the configuration files, logs and installation state are kept in a single directory:
```sh
$ mkdir -p openstack-upi
$ cd openstack-upi
```
## Red Hat Enterprise Linux CoreOS (RHCOS)
A proper [RHCOS][rhcos] image in the OpenStack cluster or project is required for successful installation.
Get the RHCOS image for your OpenShift version [here][rhcos-image]. You should download images with the highest version that is less than or equal to the OpenShift version that you install. Use the image versions that match your OpenShift version if they are available.
The OpenStack QCOW2 image is delivered in compressed format and therefore has the `.gz` extension. Unfortunately, compressed image support is not supported in OpenStack. So, you have to decompress the data before uploading it into Glance. The following command will unpack the image and create `rhcos-${RHCOSVERSION}-openstack.qcow2` file without `.gz` extension.
```sh
$ gunzip rhcos-${RHCOSVERSION}-openstack.qcow2.gz
```
Next step is to create a Glance image.
**NOTE:** *This document* will use `rhcos` as the Glance image name, but it's not mandatory.
```sh
$ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOSVERSION}-openstack.qcow2 rhcos
```
**NOTE:** Depending on your OpenStack environment you can upload the RHCOS image as `raw` or `qcow2`. See [Disk and container formats for images](https://docs.openstack.org/image-guide/introduction.html#disk-and-container-formats-for-images) for more information.
Finally validate that the image was successfully created:
```sh
$ openstack image show rhcos
```
[rhcos]: https://www.openshift.com/learn/coreos/
[rhcos-image]: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/
## API and Ingress Floating IP Addresses
**NOTE**: throughout this document, we will use `203.0.113.23` as the public IP address for the OpenShift API endpoint and `203.0.113.19` as the public IP for the ingress (`*.apps`) endpoint.
```sh
$ openstack floating ip create --description "OpenShift API" <external>
=> 203.0.113.23
$ openstack floating ip create --description "OpenShift Ingress" <external>
=> 203.0.113.19
```
The OpenShift API (for the OpenShift administrators and app developers) will be at `api.<cluster name>.<cluster domain>` and the Ingress (for the apps' end users) at `*.apps.<cluster name>.<cluster domain>`.
Create these two records in your DNS zone:
```plaintext
api.openshift.example.com. A 203.0.113.23
*.apps.openshift.example.com. A 203.0.113.19
```
They will need to be available to your developers, end users as well as the OpenShift installer process later in this guide.
## Install Config
Run the `create install-config` subcommand and fill in the desired entries:
```sh
$ openshift-install create install-config
? SSH Public Key </home/user/.ssh/id_rsa.pub>
? Platform <openstack>
? Cloud <openstack>
? ExternalNetwork <external>
? APIFloatingIPAddress <203.0.113.23>
? FlavorName <m1.xlarge>
? Base Domain <example.com>
? Cluster Name <openshift>
```
Most of these are self-explanatory. `Cloud` is the cloud name in your `clouds.yaml` i.e. what's set as your `OS_CLOUD` environment variable.
*Cluster Name* and *Base Domain* will together form the fully qualified domain name which the API interface will expect to the called, and the default name with which OpenShift will expose newly created applications.
Given the values above, the OpenShift API will be available at:
```plaintext
https://api.openshift.example.com:6443/
```
Afterwards, you should have `install-config.yaml` in your current directory:
```sh
$ tree
.
└── install-config.yaml
```
### Fix the Node Subnet
The installer added a default IP range for the OpenShift nodes. It must match the range for the Neutron subnet you'll create later on.
We're going to use a custom subnet to illustrate how that can be done.
Our range will be `192.0.2.0/24` so we need to add that value to
`install-config.yaml`. Look under `networking` -> `machineNetwork` -> network -> `cidr`.
This command will do it for you:
```sh
$ python -c 'import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"][0]["cidr"] = "192.0.2.0/24";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
**NOTE**: All the scripts in this guide work with Python 3 as well as Python 2.
### Empty Compute Pools
UPI will not rely on the Machine API for node creation. Instead, we will create the compute nodes ("workers") manually.
We will set their count to `0` in `install-config.yaml`. Look under `compute` -> (first entry) -> `replicas`.
This command will do it for you:
```sh
$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
### Modify NetworkType (Required for Kuryr SDN)
By default the `networkType` is set to `OpenShiftSDN` on the `install-config.yaml`.
If an installation with Kuryr is desired, you must modify the `networkType` field.
This command will do it for you:
```sh
$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["networkType"] = "Kuryr";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
Also set `os_networking_type` to `Kuryr` in `inventory.yaml`.
## Edit Manifests
We are not relying on the Machine API so we can delete the control plane Machines and compute MachineSets from the manifests.
**WARNING**: The `install-config.yaml` file will be automatically deleted in the next section. If you want to keep it around, copy it elsewhere now!
First, let's turn the install config into manifests:
```sh
$ openshift-install create manifests
$ tree
.
├── manifests
│ ├── 04-openshift-machine-config-operator.yaml
│ ├── cloud-provider-config.yaml
│ ├── cluster-config.yaml
│ ├── cluster-dns-02-config.yml
│ ├── cluster-infrastructure-02-config.yml
│ ├── cluster-ingress-02-config.yml
│ ├── cluster-network-01-crd.yml
│ ├── cluster-network-02-config.yml
│ ├── cluster-proxy-01-config.yaml
│ ├── cluster-scheduler-02-config.yml
│ ├── cvo-overrides.yaml
│ ├── etcd-ca-bundle-configmap.yaml
│ ├── etcd-client-secret.yaml
│ ├── etcd-host-service-endpoints.yaml
│ ├── etcd-host-service.yaml
│ ├── etcd-metric-client-secret.yaml
│ ├── etcd-metric-serving-ca-configmap.yaml
│ ├── etcd-metric-signer-secret.yaml
│ ├── etcd-namespace.yaml
│ ├── etcd-service.yaml
│ ├── etcd-serving-ca-configmap.yaml
│ ├── etcd-signer-secret.yaml
│ ├── kube-cloud-config.yaml
│ ├── kube-system-configmap-root-ca.yaml
│ ├── machine-config-server-tls-secret.yaml
│ └── openshift-config-secret-pull-secret.yaml
└── openshift
├── 99_cloud-creds-secret.yaml
├── 99_kubeadmin-password-secret.yaml
├── 99_openshift-cluster-api_master-machines-0.yaml
├── 99_openshift-cluster-api_master-machines-1.yaml
├── 99_openshift-cluster-api_master-machines-2.yaml
├── 99_openshift-cluster-api_master-user-data-secret.yaml
├── 99_openshift-cluster-api_worker-machineset-0.yaml
├── 99_openshift-cluster-api_worker-user-data-secret.yaml
├── 99_openshift-machineconfig_master.yaml
├── 99_openshift-machineconfig_worker.yaml
├── 99_rolebinding-cloud-creds-secret-reader.yaml
└── 99_role-cloud-creds-secret-reader.yaml
2 directories, 38 files
```
### Remove Machines and MachineSets
Remove the control-plane Machines and compute MachineSets, because we'll be providing those ourselves and don't want to involve the
[machine-API operator][mao]:
```sh
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml
```
Leave the compute MachineSets in if you want to create compute machines via the machine API. However, some references must be updated in the machineset spec (`openshift/99_openshift-cluster-api_worker-machineset-0.yaml`) to match your environment:
* The OS image: `spec.template.spec.providerSpec.value.image`
[mao]: https://github.com/openshift/machine-api-operator
### Make control-plane nodes unschedulable
Currently [emptying the compute pools][empty-compute-pools] makes control-plane nodes schedulable. But due to a [Kubernetes limitation][kubebug], router pods running on control-plane nodes will not be reachable by the ingress load balancer. Update the scheduler configuration to keep router pods and other workloads off the control-plane nodes:
```sh
$ python -c '
import yaml;
path = "manifests/cluster-scheduler-02-config.yml"
data = yaml.safe_load(open(path));
data["spec"]["mastersSchedulable"] = False;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
[empty-compute-pools]: #empty-compute-pools
[kubebug]: https://github.com/kubernetes/kubernetes/issues/65618
## Ignition Config
Next, we will turn these manifests into [Ignition][ignition] files. These will be used to configure the Nova servers on boot (Ignition performs a similar function as cloud-init).
```sh
$ openshift-install create ignition-configs
$ tree
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
```
[ignition]: https://coreos.com/ignition/docs/latest/
### Infra ID
The OpenShift cluster has been assigned an identifier in the form of `<cluster name>-<random string>`. You do not need this for anything, but it is a good idea to keep it around.
You can see the various metadata about your future cluster in `metadata.json`.
The Infra ID is under the `infraID` key:
```sh
$ export INFRA_ID=$(jq -r .infraID metadata.json)
$ echo $INFRA_ID
openshift-qlvwv
```
We'll use the `infraID` as the prefix for all the OpenStack resources we'll create. That way, you'll be able to have multiple deployments in the same OpenStack project without name conflicts.
Make sure your shell session has the `$INFRA_ID` environment variable set when you run the commands later in this document.
### Bootstrap Ignition
#### Edit the Bootstrap Ignition
We need to set the bootstrap hostname explicitly, and in the case of OpenStack using self-signed certificate, the CA cert file. The IPI installer does this automatically, but for now UPI does not.
We will update the ignition file (`bootstrap.ign`) to create the following files:
**`/etc/hostname`**:
```plaintext
openshift-qlvwv-bootstrap
```
(using the `infraID`)
**`/opt/openshift/tls/cloud-ca-cert.pem`** (if applicable).
**NOTE**: We recommend you back up the Ignition files before making any changes!
You can edit the Ignition file manually or run this Python script:
```python
import base64
import json
import os
with open('bootstrap.ign', 'r') as f:
ignition = json.load(f)
files = ignition['storage'].get('files', [])
infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
files.append(
{
'path': '/etc/hostname',
'mode': 420,
'contents': {
'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64,
'verification': {}
},
'filesystem': 'root',
})
ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
with open(ca_cert_path, 'r') as f:
ca_cert = f.read().encode()
ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()
files.append(
{
'path': '/opt/openshift/tls/cloud-ca-cert.pem',
'mode': 420,
'contents': {
'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64,
'verification': {}
},
'filesystem': 'root',
})
ignition['storage']['files'] = files;
with open('bootstrap.ign', 'w') as f:
json.dump(ignition, f)
```
Feel free to make any other changes.
#### Upload the Boostrap Ignition
The generated boostrap ignition file tends to be quite large (around 300KB -- it contains all the manifests, master and worker ignitions etc.). This is generally too big to be passed to the server directly (the OpenStack Nova user data limit is 64KB).
To boot it up, we will create a smaller Ignition file that will be passed to Nova as user data and that will download the main ignition file upon execution.
The main file needs to be uploaded to an HTTP(S) location the Bootstrap node will be able to access.
Choose the storage that best fits your needs and availability.
**IMPORTANT**: The `bootstrap.ign` contains sensitive information such as your `clouds.yaml` credentials. It should not be accessible by the public! It will only be used once during the Nova boot of the Bootstrap server. We strongly recommend you restrict the access to that server only and delete the file afterwards.
Possible choices include:
* Swift (see Example 1 below);
* Glance (see Example 2 below);
* Amazon S3;
* Internal web server inside your organisation;
* A throwaway Nova server in `$INFRA_ID-nodes` hosting a static web server exposing the file.
In this guide, we will assume the file is at the following URL:
https://static.example.com/bootstrap.ign
##### Example 1: Swift
The `swift` client is needed for enabling listing on the container.
Create the `<container_name>` container and upload the `bootstrap.ign` file:
```sh
$ swift upload <container_name> bootstrap.ign
```
Make the container accessible:
```sh
$ swift post <container_name> --read-acl ".r:*,.rlistings"
```
Get the `storage_url` from the output:
```sh
$ swift stat -v
```
The URL to be put in the `source` property of the Ignition Shim (see below) will have the following format: `<storage_url>/<container_name>/bootstrap.ign`.
##### Example 2: Glance image service
Create the `<image_name>` image and upload the `bootstrap.ign` file:
```sh
$ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>
```
**NOTE**: Make sure the created image has `active` status.
Copy and save `file` value of the output, it should look like `/v2/images/<image_id>/file`.
Get Glance public URL:
```sh
$ openstack catalog show image
```
By default, the Glance service doesn't allow anonymous access to the data. So, if you use Glance to store the ignition config, you also need to provide a valid auth token in the `ignition.config.append.httpHeaders` field.
To obtain the token execute:
```sh
openstack token issue -c id -f value
```
The command will return the token to be added to the `ignition.config.append[0].httpHeaders` property in the Bootstrap Ignition Shim (see [below](#create-the-bootstrap-ignition-shim)):
```json
"httpHeaders": [
{
"name": "X-Auth-Token",
"value": "<token>"
}
]
```
Combine the public URL with the `file` value to get the link to your bootstrap ignition, in the format `<glance_public_url>/v2/images/<image_id>/file`.
Example of the link to be put in the `source` property of the Ignition Shim (see below): `https://public.glance.example.com:9292/v2/images/b7e2b84e-15cf-440a-a113-3197518da024/file`.
### Create the Bootstrap Ignition Shim
As mentioned before, due to the Nova user data size limit, we will need to create a new Ignition file that will load the bulk of the Bootstrap node configuration. This will be similar to the existing `master.ign` and `worker.ign` files.
Create a file called `$INFRA_ID-bootstrap-ignition.json` (fill in your `infraID`) with the following contents:
```json
{
"ignition": {
"config": {
"append": [
{
"source": "https://static.example.com/bootstrap.ign",
"verification": {},
"httpHeaders": []
}
]
},
"security": {},
"timeouts": {},
"version": "2.4.0"
},
"networkd": {},
"passwd": {},
"storage": {},
"systemd": {}
}
```
Change the `ignition.config.append.source` field to the URL hosting the `bootstrap.ign` file you've uploaded previously.
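If you prefer to script this step, a small `jq` invocation can set the field. This assumes `jq` is available; any JSON editor works just as well:
```sh
$ jq '.ignition.config.append[0].source = "https://static.example.com/bootstrap.ign"' \
    "$INFRA_ID-bootstrap-ignition.json" > shim.tmp && mv shim.tmp "$INFRA_ID-bootstrap-ignition.json"
```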
#### Ignition file served by server using self-signed certificate
In order for the bootstrap node to retrieve the ignition file when it is served by a server using self-signed certificate, it is necessary to add the CA certificate to the `ignition.security.tls.certificateAuthorities` in the ignition file. For instance:
```json
{
"ignition": {
"config": {},
"security": {
"tls": {
"certificateAuthorities": [
{
"source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>",
"verification": {}
}
]
}
},
"timeouts": {},
"version": "2.4.0"
},
"networkd": {},
"passwd": {},
"storage": {},
"systemd": {}
}
```
### Master Ignition
Similar to bootstrap, we need to make sure the hostname is set to the expected value (it must match the name of the Nova server exactly).
Since that value will be different for every master node, we will need to create multiple Ignition files: one for every node.
We will deploy three Control Plane (master) nodes. Their Ignition configs can be created like so:
```sh
$ for index in $(seq 0 2); do
MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
files = ignition['storage'].get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
ignition['storage']['files'] = files;
json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
done
```
This should create files `openshift-qlvwv-master-0-ignition.json`, `openshift-qlvwv-master-1-ignition.json` and `openshift-qlvwv-master-2-ignition.json`.
If you look inside, you will see that they contain very little. In fact, most of the master Ignition is served by the Machine Config Server running on the bootstrap node and the masters contain only enough to know where to look for the rest.
You can make your own changes here.
**NOTE**: The worker nodes do not require any changes to their Ignition, but you can make your own by editing `worker.ign`.
## Network Topology
In this section we'll create all the networking pieces necessary to host the OpenShift cluster: security groups, network, subnet, router, ports.
### Security Groups
```sh
$ ansible-playbook -i inventory.yaml security-groups.yaml
```
The playbook creates one Security group for the Control Plane and one for the Compute nodes, then attaches rules for enabling communication between the nodes.
### Network, Subnet and external router
```sh
$ ansible-playbook -i inventory.yaml network.yaml
```
The playbook creates a network and a subnet. The subnet obeys `os_subnet_range`; however the first ten IP addresses are removed from the allocation pool. These addresses will be used for the VRRP addresses managed by keepalived for high availability. For more information, read the [networking infrastructure design document][net-infra].
Outside connectivity will be provided by attaching the floating IP addresses (IPs in the inventory) to the corresponding routers.
[net-infra]: https://github.com/openshift/installer/blob/master/docs/design/openstack/networking-infrastructure.md
### Subnet DNS (optional)
**NOTE**: This step is optional and only necessary if you want to control the default resolvers your Nova servers will use.
During deployment, the OpenShift nodes will need to be able to resolve public name records to download the OpenShift images and so on. They will also need to resolve the OpenStack API endpoint.
The default resolvers are often set up by the OpenStack administrator in Neutron. However, some deployments do not have default DNS servers set, meaning the servers are not able to resolve any records when they boot.
If you are in this situation, you can add resolvers to your Neutron subnet (`openshift-qlvwv-nodes`). These will be put into `/etc/resolv.conf` on your servers post-boot.
For example, if you want to add the following nameservers: `198.51.100.86` and `198.51.100.87`, you can run this command:
```sh
$ openstack subnet set --dns-nameserver 198.51.100.86 --dns-nameserver 198.51.100.87 "$INFRA_ID-nodes"
```
## Bootstrap
```sh
$ ansible-playbook -i inventory.yaml bootstrap.yaml
```
The playbook sets the *allowed address pairs* on each port attached to our OpenShift nodes.
Since the keepalived-managed IP addresses are not attached to any specific server, Neutron would block their traffic by default. By passing them to `--allowed-address` the traffic can flow freely through.
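The playbook handles this for the ports it manages. For reference, setting an allowed address pair manually looks roughly like the following (the port name and VIP below are placeholders for your actual values):
```sh
$ openstack port set --allowed-address ip-address=<vrrp_vip> "$INFRA_ID-bootstrap-port"
```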
An additional Floating IP is also attached to the bootstrap port. This is not necessary for the deployment (and we will delete the bootstrap resources afterwards). However, if the bootstrapping phase fails for any reason, the installer will try to SSH in and download the bootstrap log. That will only succeed if the node is reachable (which in general means a floating IP).
After the bootstrap server is active, you can check the console log to see that it is getting the ignition correctly:
```sh
$ openstack console log show "$INFRA_ID-bootstrap"
```
You can also SSH into the server (using its floating IP address) and check on the bootstrapping progress:
```sh
$ ssh core@<bootstrap_floating_ip>
[core@openshift-qlvwv-bootstrap ~]$ journalctl -b -f -u bootkube.service
```
## Control Plane
```sh
$ ansible-playbook -i inventory.yaml control-plane.yaml
```
Our control plane will consist of three nodes. The servers will be passed the `master-?-ignition.json` files prepared earlier.
The playbook places the Control Plane in a Server Group with "soft anti-affinity" policy.
The master nodes should load the initial Ignition and then keep waiting until the bootstrap node stands up the Machine Config Server which will provide the rest of the configuration.
### Control Plane Trunks (Kuryr SDN)
If `os_networking_type` is set to `Kuryr` in the Ansible inventory, the playbook creates the Trunks for Kuryr to plug the containers into the OpenStack SDN.
### Wait for the Control Plane to Complete
When that happens, the masters will start running their own pods, run etcd and join the "bootstrap" cluster. Eventually, they will form a fully operational control plane.
You can monitor this via the following command:
```sh
$ openshift-install wait-for bootstrap-complete
```
Eventually, it should output the following:
```plaintext
INFO API v1.14.6+f9b5405 up
INFO Waiting up to 30m0s for bootstrapping to complete...
```
This means the masters have come up successfully and are joining the cluster.
Eventually, the `wait-for` command should end with:
```plaintext
INFO It is now safe to remove the bootstrap resources
```
### Access the OpenShift API
You can use the `oc` or `kubectl` commands to talk to the OpenShift API. The admin credentials are in `auth/kubeconfig`:
```sh
$ export KUBECONFIG="$PWD/auth/kubeconfig"
$ oc get nodes
$ oc get pods -A
```
**NOTE**: Only the API will be up at this point. The OpenShift UI will run on the compute nodes.
### Delete the Bootstrap Resources
```sh
$ ansible-playbook -i inventory.yaml down-bootstrap.yaml
```
The teardown playbook deletes the bootstrap port, server and floating IP address.
If you haven't done so already, you should also disable the bootstrap Ignition URL.
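For example, if you used Swift (Example 1 above), removing the uploaded file could look like this, using the container name you chose earlier:
```sh
$ swift delete <container_name> bootstrap.ign
```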
## Compute Nodes
```sh
$ ansible-playbook -i inventory.yaml compute-nodes.yaml
```
This process is similar to the masters, but the workers need to be approved before they're allowed to join the cluster.
The workers need no ignition override.
### Compute Nodes Trunks (Kuryr SDN)
If `os_networking_type` is set to `Kuryr` in the Ansible inventory, the playbook creates the Trunks for Kuryr to plug the containers into the OpenStack SDN.
### Approve the worker CSRs
Even after they've booted up, the workers will not show up in `oc get nodes`.
Instead, they will create certificate signing requests (CSRs) which need to be approved. You can watch for the CSRs here:
```sh
$ watch oc get csr -A
```
Eventually, you should see `Pending` entries looking like this:
```sh
$ oc get csr -A
NAME AGE REQUESTOR CONDITION
csr-2scwb 16m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-5jwqf 16m system:node:openshift-qlvwv-master-0 Approved,Issued
csr-88jp8 116s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-9dt8f 15m system:node:openshift-qlvwv-master-1 Approved,Issued
csr-bqkw5 16m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-dpprd 6s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-dtcws 24s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-lj7f9 16m system:node:openshift-qlvwv-master-2 Approved,Issued
csr-lrtlk 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-wkm94 16m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
```
You should inspect each pending CSR and verify that it comes from a node you recognise:
```sh
$ oc describe csr csr-88jp8
Name: csr-88jp8
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 23 Oct 2019 13:22:51 +0200
Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper
Status: Pending
Subject:
Common Name: system:node:openshift-qlvwv-worker-0
Serial Number:
Organization: system:nodes
Events: <none>
```
If it does (this one is for `openshift-qlvwv-worker-0` which we've created earlier), you can approve it:
```sh
$ oc adm certificate approve csr-88jp8
```
Approved nodes should now show up in `oc get nodes`, but they will be in the `NotReady` state. They will create a second CSR which you should also review:
```sh
$ oc get csr -A
NAME AGE REQUESTOR CONDITION
csr-2scwb 17m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-5jwqf 17m system:node:openshift-qlvwv-master-0 Approved,Issued
csr-7mv4d 13s system:node:openshift-qlvwv-worker-1 Pending
csr-88jp8 3m29s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-9dt8f 17m system:node:openshift-qlvwv-master-1 Approved,Issued
csr-bqkw5 18m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-bx7p4 28s system:node:openshift-qlvwv-worker-0 Pending
csr-dpprd 99s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-dtcws 117s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-lj7f9 17m system:node:openshift-qlvwv-master-2 Approved,Issued
csr-lrtlk 17m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-wkm94 18m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-wqpfd 21s system:node:openshift-qlvwv-worker-2 Pending
```
(we see the CSR approved earlier as well as a new `Pending` one for the same node: `openshift-qlvwv-worker-0`)
And approve:
```sh
$ oc adm certificate approve csr-bx7p4
```
Once this CSR is approved, the node should switch to `Ready` and pods will be scheduled on it.
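Once you have verified that every pending CSR belongs to a node you recognise, you can also approve them in bulk rather than one at a time. Note that this approves *all* currently pending requests, so use it with care:
```sh
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
```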
### Wait for the OpenShift Installation to Complete
Run the following command to verify the OpenShift cluster is fully deployed:
```sh
$ openshift-install --log-level debug wait-for install-complete
```
Upon success, it will print the URL to the OpenShift Console (the web UI) as well as admin username and password to log in.
## Destroy the OpenShift Cluster
```sh
$ ansible-playbook -i inventory.yaml \
down-bootstrap.yaml \
down-control-plane.yaml \
down-compute-nodes.yaml \
down-load-balancers.yaml \
down-network.yaml \
down-security-groups.yaml
```
The playbook `down-load-balancers.yaml` idempotently deletes the load balancers created by the Kuryr installation, if any.
**NOTE:** The deletion of load balancers with `provisioning_status` `PENDING-*` is skipped. Make sure to retry the
`down-load-balancers.yaml` playbook once the load balancers have transitioned to `ACTIVE`.
Then, remove the `api` and `*.apps` DNS records.
| 38.263216 | 374 | 0.731658 | eng_Latn | 0.957663 |
03d687df95bbf67f1bfeaa94f6ae9f5fd1b99f8f | 1,045 | md | Markdown | README.md | Xliff/Method-Also | d1bd4f671adc7614147f769f1a76c88097ac4b26 | [
"Artistic-2.0"
] | null | null | null | README.md | Xliff/Method-Also | d1bd4f671adc7614147f769f1a76c88097ac4b26 | [
"Artistic-2.0"
] | null | null | null | README.md | Xliff/Method-Also | d1bd4f671adc7614147f769f1a76c88097ac4b26 | [
"Artistic-2.0"
] | null | null | null | NAME
====
Method::Also - add "is also" trait to Methods
SYNOPSIS
========
use Method::Also;
class Foo {
has $.foo;
method foo() is also<bar bazzy> { $!foo }
}
Foo.new(foo => 42).bar; # 42
Foo.new(foo => 42).bazzy; # 42
# separate multi methods can have different aliases
class Bar {
multi method foo() is also<bar> { 42 }
multi method foo($foo) is also<bazzy> { $foo }
}
Bar.foo; # 42
Bar.foo(666); # 666
Bar.bar; # 42
Bar.bazzy(768); # 768
DESCRIPTION
===========
This module adds a `is also` trait to `Method`s, allowing you to specify other names for the same method.
AUTHOR
======
Elizabeth Mattijsen <[email protected]>
Source can be located at: https://github.com/lizmat/Method-Also . Comments and Pull Requests are welcome.
COPYRIGHT AND LICENSE
=====================
Copyright 2018-2019 Elizabeth Mattijsen
This library is free software; you can redistribute it and/or modify it under the Artistic License 2.0.
| 21.326531 | 105 | 0.602871 | eng_Latn | 0.950116 |
03d6ad5ae2482302ea2639cc3612cf6f7df929e3 | 3,548 | md | Markdown | Mathematics/DiscreteMathematics/Permutation/LC46Permutations.md | wtsanshou/Coding | 451738297f7249fe8a12849d7dcacda0095bc2e9 | [
"RSA-MD"
] | 16 | 2019-07-26T14:40:28.000Z | 2022-02-17T01:26:44.000Z | Mathematics/DiscreteMathematics/Permutation/LC46Permutations.md | wtsanshou/Coding | 451738297f7249fe8a12849d7dcacda0095bc2e9 | [
"RSA-MD"
] | null | null | null | Mathematics/DiscreteMathematics/Permutation/LC46Permutations.md | wtsanshou/Coding | 451738297f7249fe8a12849d7dcacda0095bc2e9 | [
"RSA-MD"
] | 6 | 2020-05-06T17:14:14.000Z | 2021-09-23T06:51:48.000Z | # LC46. Permutations
### Question
Given a collection of distinct numbers, return all possible permutations.
**For example**
```
[1,2,3] have the following permutations:
[
[1,2,3],
[1,3,2],
[2,1,3],
[2,3,1],
[3,1,2],
[3,2,1]
]
```
## Solutions
### Solution 1
* C++1 (13ms)
```
class Solution {
public:
void permutaion(vector<vector<int>>& res, vector<int>& aRow, vector<int>& nums, int start)
{
if(start == nums.size()) res.push_back(aRow);
else
{
for(int i=start; i<nums.size(); ++i)
{
swap(nums[start], nums[i]);
aRow.push_back(nums[start]);
permutaion(res, aRow, nums, start+1);
swap(nums[start], nums[i]);
aRow.pop_back();
}
}
}
vector<vector<int>> permute(vector<int>& nums) {
vector<vector<int>> res;
vector<int> aRow;
permutaion(res, aRow, nums, 0);
return res;
}
};
```
* C++2 (16ms)
```
class Solution {
public:
void permuteHelper(vector<vector<int>>& res, vector<int>& aRow, vector<int>& nums)
{
if(aRow.size() == nums.size())
res.push_back(aRow);
else
{
for(int i=0; i<nums.size(); ++i)
{
if(find(aRow.begin(), aRow.end(), nums[i]) == aRow.end())
{
aRow.push_back(nums[i]);
permuteHelper(res, aRow, nums);
aRow.pop_back();
}
}
}
}
vector<vector<int>> permute(vector<int>& nums) {
vector<vector<int>> res;
vector<int> aRow;
permuteHelper(res, aRow, nums);
return res;
}
};
```
* C++3
```
class Solution {
public:
void permutaion(vector<vector<int>>& res,vector<int>& nums, int start)
{
if(start == nums.size()) res.push_back(nums);
else
{
for(int i=start; i<nums.size(); ++i)
{
swap(nums[start], nums[i]);
permutaion(res, nums, start+1);
swap(nums[start], nums[i]);
}
}
}
vector<vector<int>> permute(vector<int>& nums) {
vector<vector<int>> res;
permutaion(res, nums, 0);
return res;
}
};
```
* Java1 (6ms)
```
public class Solution {
private List<List<Integer>> res;
public List<List<Integer>> permute(int[] nums) {
res = new ArrayList<>();
List<Integer> arow = new LinkedList<Integer>();
createPermute(arow, nums, 0);
return res;
}
private void createPermute(List<Integer> arow, int[] nums, int start)
{
if(arow.size() == nums.length)
{
List<Integer> ans = new LinkedList<Integer>(arow);
res.add(ans);
return;
}
for(int i=start; i<nums.length; ++i)
{
mySwap(nums, start, i);
arow.add(nums[start]);
createPermute(arow, nums, start+1);
mySwap(nums, start, i);
arow.remove(arow.size()-1);
}
}
private void mySwap(int[] nums, int i, int j)
{
int temp = nums[i];
nums[i] = nums[j];
nums[j] = temp;
}
}
```
#### Explanation
Using the `backtracking` algorithm: fix one position at a time by swapping each remaining element into it, recurse on the rest, then swap back to restore the array.
* **worst-case time complexity:** O(N × N!), where `N` is the length of the input `nums` (there are N! permutations, and copying each one into the result costs O(N)).
* **worst-case space complexity:** O(N × N!) to store all permutations in the output (the recursion itself only needs O(N) auxiliary space), where `N` is the length of the input `nums`.
03d7590313928549a35cac9500dfccad76330056 | 1,707 | md | Markdown | docs/mdx/mdx-data-definition-create-measure.md | PiJoCoder/sql-docs | 128b255506c47eb05e19770a6bf5edfbdaa817ec | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-04T18:16:27.000Z | 2021-03-04T18:16:27.000Z | docs/mdx/mdx-data-definition-create-measure.md | PiJoCoder/sql-docs | 128b255506c47eb05e19770a6bf5edfbdaa817ec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mdx/mdx-data-definition-create-measure.md | PiJoCoder/sql-docs | 128b255506c47eb05e19770a6bf5edfbdaa817ec | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-15T10:07:40.000Z | 2021-01-15T10:07:40.000Z | ---
title: "CREATE MEASURE statement (MDX) | Microsoft Docs"
ms.custom: ""
ms.date: "03/02/2016"
ms.prod: "sql-server-2016"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "analysis-services"
ms.tgt_pltfrm: ""
ms.topic: "language-reference"
ms.assetid: f264ba96-cbbe-488b-8ac9-b3056a6e997b
caps.latest.revision: 5
author: "Minewiskan"
ms.author: "owend"
manager: "erikre"
---
# MDX Data Definition - CREATE MEASURE
[!INCLUDE[tsql-appliesto-ss2012-xxxx-xxxx-xxx_md](../includes/tsql-appliesto-ss2012-xxxx-xxxx-xxx-md.md)]
Creates a measure in a Tabular Model.
## Syntax
```
CREATE MEASURE Table_Name[Measure_Name] = DAX_Expression
[; CREATE MEASURE ...n]
```
## Arguments
*Table_Name*
A valid string literal that provides the name of the table where the measure will be created.
*Measure_Name*
A valid string literal that provides a measure name.
*DAX_Expression*
A valid DAX expression that returns a single scalar value.
## Remarks
The *Measure_Name* must be enclosed in square brackets.
The CREATE MEASURE statement can only be used inside of a MDX script definition; see [MdxScript Element (ASSL)](../analysis-services/scripting/objects/mdxscript-element-assl.md).
You can also define a calculated member for use by a single query. To define a calculated member that is limited to a single query, you use the WITH clause in the SELECT statement. For more information, see [Building Measures in MDX](../analysis-services/multidimensional-models/mdx/mdx-building-measures.md).
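For example, the following statements (the table and column names here are purely illustrative) define two measures on a `Sales` table:
```
CREATE MEASURE Sales[Total Sales] = SUM(Sales[SalesAmount])
; CREATE MEASURE Sales[Order Count] = COUNTROWS(Sales)
```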
## See Also
[MDX Data Definition Statements (MDX)](../mdx/mdx-data-definition-statements-mdx.md)
| 32.207547 | 312 | 0.712361 | eng_Latn | 0.592267 |
03d8907dc309ae0b2880dd6c9681ed1f4e65062f | 5,180 | md | Markdown | _posts/2009-12-29-how-disabling-viewstate-on-asp-net-2-0-website-effects-youre-code.md | roelvanlisdonk/blogposts | e32acd82617dcdc7a3e6733c8fc880e06d9a5475 | [
"MIT"
] | null | null | null | _posts/2009-12-29-how-disabling-viewstate-on-asp-net-2-0-website-effects-youre-code.md | roelvanlisdonk/blogposts | e32acd82617dcdc7a3e6733c8fc880e06d9a5475 | [
"MIT"
] | null | null | null | _posts/2009-12-29-how-disabling-viewstate-on-asp-net-2-0-website-effects-youre-code.md | roelvanlisdonk/blogposts | e32acd82617dcdc7a3e6733c8fc880e06d9a5475 | [
"MIT"
] | null | null | null | ---
ID: 893
post_title: 'How disabling viewstate on ASP .NET 2.0 website affects your code'
author: Roel van Lisdonk
post_excerpt: ""
layout: post
permalink: >
https://www.roelvanlisdonk.nl/2009/12/29/how-disabling-viewstate-on-asp-net-2-0-website-effects-youre-code/
published: true
post_date: 2009-12-29 10:18:24
---
<p>You can disable viewstate on an ASP .NET 2.0 website by setting the enableViewState=”false” on the configuration > system.web > pages node in the Web.config: <br /><span style="color: blue"> <br /><</span><span style="color: #a31515">pages </span><span style="color: red">enableViewState</span><span style="color: blue">=</span>"<span style="color: blue">false</span>"<span style="color: blue">></span></p> <p>Then you will have to fill dropdownlists etc. on every page postback in the OnInit event of the page and sometimes have to use the good old: this.Request.Form <br /><span style="color: blue"> <br />string </span>selectedValue = <span style="color: blue">this</span>.Request.Form[<span style="color: #a31515">"ctl00$MainContentPlaceHolder$selectBatchDropDownList"</span>]; <br />The <span style="color: #a31515">"ctl00$MainContentPlaceHolder$selectBatchDropDownList"</span>  is the control UniqueID (<strong>selectedBatchDropDownList.UniqueID</strong>) <br /> <br /> <br /><strong>However, after reading some articles: <br /></strong><a title="http://dotneteers.net/blogs/petersm/archive/2006/11/09/how-to-put-controlstate-into-viewstate-and-how-to-put-viewstate-into-session.aspx" href="http://dotneteers.net/blogs/petersm/archive/2006/11/09/how-to-put-controlstate-into-viewstate-and-how-to-put-viewstate-into-session.aspx">http://dotneteers.net/blogs/petersm/archive/2006/11/09/how-to-put-controlstate-into-viewstate-and-how-to-put-viewstate-into-session.aspx</a> <br /><a title="http://weblogs.asp.net/ngur/archive/2004/03/08/85876.aspx" href="http://weblogs.asp.net/ngur/archive/2004/03/08/85876.aspx">http://weblogs.asp.net/ngur/archive/2004/03/08/85876.aspx</a>  <br /><a title="http://blog.tatham.oddie.com.au/2008/12/18/how-i-learned-to-stop-worrying-and-love-the-view-state/" href="http://blog.tatham.oddie.com.au/2008/12/18/how-i-learned-to-stop-worrying-and-love-the-view-state/">http://blog.tatham.oddie.com.au/2008/12/18/how-i-learned-to-stop-worrying-and-love-the-view-state/</a> <br /> <br />I re-enabled ViewState and started filling dropdownlists on the page “OnInit” event and saved the viewstate in the sessionstate. <br />This allowed me to use the normal code and reduced the viewstate size to 50 characters. <br /> <br /><strong>To persist viewstate to sessionstate use:</strong></p> <pre class="code"><span style="color: gray"> /// <summary>
/// </span><span style="color: green">Set viewstate to sessionstate
</span><span style="color: gray">/// </summary>
</span><span style="color: blue">protected override </span><span style="color: #2b91af">PageStatePersister </span>PageStatePersister
{
<span style="color: blue">get
</span>{
<span style="color: blue">return new </span><span style="color: #2b91af">SessionPageStatePersister</span>(<span style="color: blue">this</span>);
}
}</pre>
<p>In the web.config</p>
<pre class="code"><span style="color: blue"><</span><span style="color: #a31515">system.web</span><span style="color: blue">>
<</span><span style="color: #a31515">browserCaps</span><span style="color: blue">>
<</span><span style="color: #a31515">case</span><span style="color: blue">>
</span>RequiresControlStateInSession=true
<span style="color: blue"></</span><span style="color: #a31515">case</span><span style="color: blue">>
</</span><span style="color: #a31515">browserCaps</span><span style="color: blue">></span></pre>
<p>
<br /><strong>To completely remove viewstate information use:</strong></p>
<pre class="code"><span style="color: gray">/// <summary>
/// </span><span style="color: green">Completly disable viewstate for performance
</span><span style="color: gray">/// </summary>
/// <param name="viewState"></param>
</span><span style="color: blue">protected override void </span>SavePageStateToPersistenceMedium(<span style="color: blue">object </span>viewState)
{
}
<span style="color: gray">/// <summary>
/// </span><span style="color: green">Completly disable viewstate for performance
</span><span style="color: gray">/// </summary>
/// <returns></returns>
</span><span style="color: blue">protected override object </span>LoadPageStateFromPersistenceMedium()
{
<span style="color: blue">return null</span>;
}</pre>
03d90f1d1b9d4534e4a687a761b4598d86a90466 | 1,726 | md | Markdown | docs/client/task-4-5-js-game-environment.md | roelofr/ClientSideTechnology | 615c30586f79897efd5548bda8c2868046507184 | [
"AFL-3.0"
] | null | null | null | docs/client/task-4-5-js-game-environment.md | roelofr/ClientSideTechnology | 615c30586f79897efd5548bda8c2868046507184 | [
"AFL-3.0"
] | null | null | null | docs/client/task-4-5-js-game-environment.md | roelofr/ClientSideTechnology | 615c30586f79897efd5548bda8c2868046507184 | [
"AFL-3.0"
] | null | null | null | **Taakgroep: Javascript AJAX**
# Taak: Environments in de game
Het is gebruikelijk te werken met verschillende environments, bijvoorbeeld: development, test, staging, production.
Afhankelijk van de environment wordt er door `Game.Data.get` mock data teruggegeven of een request uitgevoerd.
In deze taak bouw je de environment parameter in.
## Aanpak
De `Game.Data` krijgt een variabele `stateMap`. In deze variabele worden waarden opgeslagen die kunnen veranderen, zoals de environment.
In de function `Game.Data.init` wordt bepaald met welke environment gewerkt wordt.
De function `Game.Data.get` bepaalt op basis van de environment of er mock data of een request wordt uitegevoerd.
- Voeg een variabele `stateMap` toe met als default waarde 'development':
```javascript
//Game.Data
let stateMap = {
environment : 'development'
}
```
- Pass `Game.Data.init` an environment parameter; in the function, the received value is stored in `stateMap.environment`.
- Check in `Game.Data.init` whether:
- `environment` has the value `production` or `development`. If not, throw an error: `new Error(<error message>)`.
- if `environment` has the value `production`, a request to the server is performed.
- if `environment` has the value `development`, data is returned from the `configMap` based on the URL, via `getMockData`.
- Add a check on the environment in the `get` function. If the environment is `development`, use `getMockData` to return a result. In the case of a production environment a request is performed (remove the comments :-)). A minimal sketch is shown below.
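A minimal sketch of how `init` and `get` could look (the names `getMockData` and `doRequest` follow the assumptions above and are not prescribed):
```javascript
//Game.Data
const init = function (environment) {
    if (environment !== 'production' && environment !== 'development') {
        throw new Error('Unknown environment: ' + environment);
    }
    stateMap.environment = environment;
};

const get = function (url, callback) {
    if (stateMap.environment === 'development') {
        // development: answer from the configMap, based on the url
        callback(getMockData(url));
    } else {
        // production: perform a real request to the server
        doRequest(url, callback);
    }
};
```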
## Supporting information
none
03d93294e099645e4922b54ad1abfc378060cc87 | 2,128 | md | Markdown | README.md | shlomif/biblegatewayquick | 9282dcec13f3f3c9870f7cfa487dd32627fa9f18 | [
"BSD-3-Clause"
] | null | null | null | README.md | shlomif/biblegatewayquick | 9282dcec13f3f3c9870f7cfa487dd32627fa9f18 | [
"BSD-3-Clause"
] | null | null | null | README.md | shlomif/biblegatewayquick | 9282dcec13f3f3c9870f7cfa487dd32627fa9f18 | [
"BSD-3-Clause"
] | null | null | null | # biblegatewayquick
Make a quick search for a Bible verse on the most awesome Protestant [biblegateway.com](https://www.biblegateway.com/)


## What's the idea
There already is a Firefox plugin to add biblegateway.com as a search engine, created by *Narwhals Dermal*, which is duper cool.
However, the idea of this Firefox plugin is to allow you a quick, context menu search (i.e. when you right click a selected text)
getting a new tab opened to your bible verse at biblegateway.com, rather than changing your search engine.
## Why Biblegateway.com
Because Protestants love the Bible, and biblegateway is duper awesome, having translations that I love (and no, I have not read the entire bible, yet...) such as NET (New English Translation) that offer tons of explicative footnotes, explaining historical context, language subtleties and other things. Or NIV (New International Version) that is super easy to read and understand.
## Licensing
Idk what I'm doing, in the past I've read the *MIT license* that I consider super short and useful, I just wanted to make this software free and open source, with the *Revised BSD license*
## Appreciations
Thanks to [aruljohn](https://github.com/aruljohn) for providing the Protestant Canon list :)
<p> </p>
### Am I a duper pious person?
Not at all, in fact I haven't even read the whole Bible, for some people that tells mountains about me, I love things like South Park, I am very skeptical at times and have my dark nights of the soul. But, I like to hope and think that by the grace of God, *I am what I am* (1 Corinthians 15:10), says St. Paul, echoed by the Protestant reformer Martin Luther.
### I have a dream...
Maybe one day the Romanian Orthodox Church will collaborate with biblegateway.com and offer their "approved" version for their faithful to have it *for free* (Protestants rock).
| 56 | 380 | 0.772086 | eng_Latn | 0.994201 |
03da8986dcce212b947da551e9199a9a425a71e7 | 6,945 | md | Markdown | docs/custom_node_development.md | Xaenalt/model_server | f977dbf1246ebf85e960ca058e814deac7c6a16c | [
"Apache-2.0"
] | 234 | 2020-04-24T22:09:49.000Z | 2022-03-30T10:40:04.000Z | docs/custom_node_development.md | Xaenalt/model_server | f977dbf1246ebf85e960ca058e814deac7c6a16c | [
"Apache-2.0"
] | 199 | 2020-04-29T08:43:21.000Z | 2022-03-29T09:05:52.000Z | docs/custom_node_development.md | Xaenalt/model_server | f977dbf1246ebf85e960ca058e814deac7c6a16c | [
"Apache-2.0"
] | 80 | 2020-04-29T14:54:41.000Z | 2022-03-30T14:50:29.000Z | # Custom Node Development Guide
## Overview
Custom nodes in OpenVINO Model Server simplify linking deep learning models into complete pipelines even if the inputs and outputs
of the sequential models do not fit. In many cases, the output of one model can not be directly passed to another one.
The data might need to be analysed, filtered or converted to a different format. Those operations can not be easily implemented
in AI frameworks or are simply not supported. Custom nodes address this challenge. They allow employing a dynamic library
developed in C++ or C to perform arbitrary data transformations.
## Custom Node API
The custom node library must implement the API interface defined in [custom_node_interface.h](../src/custom_node_interface.h).
The interface is defined in `C` to simplify compatibility with various compilers. The library could use third party components
linked statically or dynamically. OpenCV is a built in component in OVMS which could be used to perform manipulation on the image
data.
Below are explained the data structures and functions defined in the API header.
### "CustomNodeTensor" struct
The CustomNodeTensor struct consists of several fields describing the data in the inputs and outputs of the node execution.
A custom node can generate results based on multiple inputs from one or more other nodes.
A CustomNodeTensor object can store multiple inputs to be processed in the execute function.
Each input can be referenced using an index or you can search by name:
```
inputTensor0 = &(inputs[0])
```
Every CustomNodeTensor struct include the following fields:
`const char* name` - pointer to the string representing the input name
`uint8_t* data` - pointer to data buffer. Data is stored as bytes.
`uint64_t dataBytes` - the size of the data allocation in bytes
`uint64_t* dims` - pointer to the buffer storing array shape size. Size of each dimension consumes 8 bytes
`uint64_t dimsCount` - number of dimension in the data array
`CustomNodeTensorPrecision precision` - data precision enumeration
### CustomNodeTensorInfo struct
The fields in struct CustomNodeTensorInfo are similar to CustomNodeTensor. It just holds information about
the metadata of the custom node interfaces: inputs and outputs.
### "CustomNodeParam" struct
Struct CustomNodeParam stores a list of pairs with pointers to parameter key and value strings.
Each parameter can be referenced in such an object using an index, or you can search by key name by iterating over the structure.
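For example, a node can look up a parameter value by key with a simple scan. This is only a sketch and assumes the struct exposes `key` and `value` members:
```c
#include <string.h>

const char* get_param(const struct CustomNodeParam* params, int paramsCount, const char* key) {
    for (int i = 0; i < paramsCount; i++) {
        if (strcmp(params[i].key, key) == 0) {
            return params[i].value;
        }
    }
    return NULL;  // parameter not found
}
```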
### "execute" function
```
int execute(const struct CustomNodeTensor* inputs, int inputsCount, struct CustomNodeTensor** outputs, int* outputsCount, const struct CustomNodeParam* params, int paramsCount);
```
This function implements the data transformation of the custom node. The input data for the function is passed in the form of
a pointer to a CustomNodeTensor struct object. It includes all the data and pointers to buffers for all custom node inputs.
The parameter inputsCount passes info about the number of inputs passed that way.
Note that the execute function should not modify the buffers storing the input data because that would alter the data
which potentially might be used in other pipeline nodes.
The behaviour of the custom node execute function can depend on the node parameters set in the OVMS configuration.
They are passed to the execute function in the `params` argument. paramsCount passes info about the number of parameters configured.
The results of the data transformation should be returned via the outputs pointer to a pointer, which stores the address of
a CustomNodeTensor struct. The number of outputs is to be set during the function execution in the `outputsCount` argument.
Note that during the function execution all the output data buffers need to be allocated. They will be released by OVMS after
the request processing is completed and returned to the user. The cleanup is triggered by calling the `release` function,
which also needs to be implemented in the custom library.
The execute function returns an integer whose value indicates success (`0`) or failure (a value greater than 0). When the function
reports an error, the pipeline execution is stopped and an error is returned to the user.
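A minimal skeleton of `execute` that simply copies its first input to a single output might look like the sketch below. It is illustrative only: a real node should validate `inputsCount`, check every allocation, and free partial allocations on error:
```c
#include <stdlib.h>
#include <string.h>
#include "custom_node_interface.h"

int execute(const struct CustomNodeTensor* inputs, int inputsCount,
            struct CustomNodeTensor** outputs, int* outputsCount,
            const struct CustomNodeParam* params, int paramsCount) {
    const struct CustomNodeTensor* in = &inputs[0];

    *outputsCount = 1;
    *outputs = (struct CustomNodeTensor*)malloc(sizeof(struct CustomNodeTensor));
    struct CustomNodeTensor* out = &(*outputs)[0];

    out->name = "output";
    out->dataBytes = in->dataBytes;
    out->data = (uint8_t*)malloc(out->dataBytes);  // released later via release()
    memcpy(out->data, in->data, in->dataBytes);

    out->dimsCount = in->dimsCount;
    out->dims = (uint64_t*)malloc(out->dimsCount * sizeof(uint64_t));
    memcpy(out->dims, in->dims, out->dimsCount * sizeof(uint64_t));
    out->precision = in->precision;

    return 0;  // 0 signals success; a value greater than 0 signals failure
}
```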
### "getInputsInfo" function
This function returns information about the metadata of the expected inputs. The returned CustomNodeTensorInfo object is used
to create a response for getModelMetadata calls. It is also used in user request validation and pipeline
configuration validation.
Custom nodes can generate results whose size is dynamic, depending on the input data and the custom node parameters.
In such a case, the function getInputsInfo should return the value `0` for the dimension with dynamic size. It could be an input with
variable resolution or a variable batch size.
### "getOutputInfo" function
Similar to the previous function, but defining the outputs metadata.
### "release" function
This function is called by OVMS at the end of the pipeline processing. It clears all memory allocations used during the
node execution. This function should only call `free`. OVMS decides when to free and what to free.
## Using OpenCV
The custom node library can use any third-party dependencies which could be linked statically or dynamically.
For simplicity OpenCV libraries included in the OVMS docker image can be used.
Just add include statement like:
```c++
#include "opencv2/core.hpp"
```
## Building
Custom node library can be compiled using any tool. It is recommended to follow the example based
a docker container with all build dependencies included. It is described in this [Makefile](../src/custom_nodes/east_ocr/Makefile).
## Testing
Recommended method for testing the custom library is via OVMS execution:
- Compile the library using the docker container configured in the Makefile. It will be exported to the `lib` folder.
- Prepare a pipeline configuration with the path to the custom node library compiled in the previous step
- Start OVMS docker container
- Submit a request to OVMS endpoint using a gRPC or REST client
- Analyse the logs on the OVMS server
For debugging steps refer to the OVMS [developer guide](developer_guide.md)
## Custom node examples
The best starting point for developing new custom nodes is exploring or copying the examples.
The fully functional custom nodes are in:
- [east-resnet50 OCR custom node](../src/custom_nodes/east_ocr)
- [model zoo intel object detection custom node](../src/custom_nodes/model_zoo_intel_object_detection)
- [image transformation custom node](../src/custom_nodes/image_transformation)
Other examples are included in the unit tests:
- [node_add_sub.c](../src/test/custom_nodes/node_add_sub.c)
- [node_choose_maximum.cpp](../src/test/custom_nodes/node_choose_maximum.cpp)
- [node_missing_implementation.c](../src/test/custom_nodes/node_missing_implementation.c)
- [node_perform_different_operations.cpp](../src/test/custom_nodes/node_perform_different_operations.cpp)
| 55.119048 | 177 | 0.803024 | eng_Latn | 0.997857 |
03dac2f8ce2b495e5743fae09284895590b52334 | 2,670 | md | Markdown | articles/sql-database/sql-database-operate-query-store.md | lettucebo/azure-content-zhtw | f9aa6105a8900d47dbf27a51fd993c44a9af5b63 | [
"CC-BY-3.0"
] | null | null | null | articles/sql-database/sql-database-operate-query-store.md | lettucebo/azure-content-zhtw | f9aa6105a8900d47dbf27a51fd993c44a9af5b63 | [
"CC-BY-3.0"
] | null | null | null | articles/sql-database/sql-database-operate-query-store.md | lettucebo/azure-content-zhtw | f9aa6105a8900d47dbf27a51fd993c44a9af5b63 | [
"CC-BY-3.0"
] | null | null | null | <properties
	pageTitle="Operating the Query Store in Azure SQL Database"
	description="Learn how to operate the Query Store in Azure SQL Database"
keywords=""
services="sql-database"
documentationCenter=""
authors="CarlRabeler"
manager="jhubbard"
editor=""/>
<tags
ms.service="sql-database"
ms.devlang="NA"
ms.topic="article"
ms.tgt_pltfrm="sqldb-performance"
ms.workload="data-management"
ms.date="08/16/2016"
ms.author="carlrab"/>
# Operating the Query Store in Azure SQL Database
The Query Store in Azure is a fully managed database feature that continuously collects and presents detailed historical information about all queries. You can think of the Query Store as being similar to a flight data recorder that dramatically simplifies query performance troubleshooting for both cloud and on-premises customers. This article explains the specific aspects of operating the Query Store in Azure. Using this pre-collected query data, you can quickly diagnose and resolve performance problems and thus spend more time focusing on your business.
The Query Store has been [generally available](https://azure.microsoft.com/updates/general-availability-azure-sql-database-query-store/) in Azure SQL Database since November 2015. The Query Store is the foundation for performance analysis and tuning features, such as [SQL Database Advisor and the Performance Dashboard](https://azure.microsoft.com/updates/sqldatabaseadvisorga/). At the time this article was published, the Query Store was running in more than 200,000 user databases in Azure, collecting query-related information for several months without interruption.
> [AZURE.IMPORTANT] Microsoft is in the process of activating the Query Store in all Azure SQL Databases (existing and new).
## Optimal Query Store configuration
This section describes optimal configuration defaults that are designed to ensure reliable operation of the Query Store and dependent features, such as [SQL Database Advisor and the Performance Dashboard](https://azure.microsoft.com/updates/sqldatabaseadvisorga/). The default configuration is optimized for continuous data collection, that is, minimal time spent in the OFF/READ\_ONLY states.
| Configuration | Description | Default | Comment |
| ------------- | ----------- | ------- | ------- |
| MAX\_STORAGE\_SIZE\_MB | Specifies the limit for the data space that the Query Store takes inside the customer database | 100 | Enforced for new databases |
| INTERVAL\_LENGTH\_MINUTES | Defines the size of the time window during which collected runtime statistics for query plans are aggregated and persisted. Every active query plan has at most one row for a period of time defined with this configuration | 60 | Enforced for new databases |
| STALE\_QUERY\_THRESHOLD\_DAYS | Time-based cleanup policy that controls the retention period of persisted runtime statistics and inactive queries | 30 | Enforced for new databases and databases with the previous default (367) |
| SIZE\_BASED\_CLEANUP\_MODE | Specifies whether automatic data cleanup takes place when the Query Store data size approaches the limit | AUTO | Enforced for all databases |
| QUERY\_CAPTURE\_MODE | Specifies whether all queries are tracked or only a subset of queries | AUTO | Enforced for all databases |
| FLUSH\_INTERVAL\_SECONDS | Specifies the maximum period during which captured runtime statistics are kept in memory before being flushed to disk | 900 | Enforced for new databases |
||||||
> [AZURE.IMPORTANT] These defaults are automatically applied in the final stage of Query Store activation in all Azure SQL Databases (see the important note above). After this rollout, Azure SQL Database won't change configuration values that were set by customers, unless they negatively impact the primary workload or the reliable operation of the Query Store.
If you want to stay with your custom settings, use [ALTER DATABASE with Query Store options](https://msdn.microsoft.com/library/bb522682.aspx) to revert the configuration to the previous state. Check out [Best Practices with the Query Store](https://msdn.microsoft.com/library/mt604821.aspx) to learn how to choose optimal configuration parameters.
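For example, the following illustrative statement (substitute your own database name and option values) reverts two of the options:
```
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE = ALL
);
```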
## Next steps
[SQL Database performance insight](sql-database-performance.md)
## Additional resources
For more information, see the following articles:
- [A flight data recorder for your database](https://azure.microsoft.com/blog/query-store-a-flight-data-recorder-for-your-database)
- [Monitoring performance by using the Query Store](https://msdn.microsoft.com/library/dn817826.aspx)
- [Query Store usage scenarios](https://msdn.microsoft.com/library/mt614796.aspx)
<!---HONumber=AcomDC_0817_2016--> | 43.064516 | 310 | 0.748689 | yue_Hant | 0.972095 |
03db2976dac55b6f566b9ffaaab01f4de476aa41 | 3,126 | md | Markdown | spider9.md | Landers1037/articles | bb7cf0e0be39ed75fdc139c9363a1a506f1bbf96 | [
"Apache-2.0"
] | null | null | null | spider9.md | Landers1037/articles | bb7cf0e0be39ed75fdc139c9363a1a506f1bbf96 | [
"Apache-2.0"
] | null | null | null | spider9.md | Landers1037/articles | bb7cf0e0be39ed75fdc139c9363a1a506f1bbf96 | [
"Apache-2.0"
] | null | null | null | ---
title: Scrapy爬取烂番茄的年度电影榜单
name: spider9
date: 2019-03-08
id: 0
tags: [python,爬虫,scrapy]
categories: []
abstract: ""
---
After analyzing the page structure, I found that the annual best-movie data goes back as far as 1950, so a for loop is used to construct the URLs.
Source code: [github](https://github.com/Landers1037/PySpider/tree/master/%E7%83%82%E7%95%AA%E8%8C%84%E7%94%B5%E5%BD%B1%E7%BD%91%E7%AB%99%E5%B9%B4%E5%BA%A6%E6%9C%80%E4%BD%B3%E7%94%B5%E5%BD%B1%E6%A6%9C%E5%8D%95/rottentomatoes)
<!--more-->
## Spider
```python
# -*- coding: utf-8 -*-
import scrapy
from rottentomatoes.items import RottentomatoesItem
class TomatoSpider(scrapy.Spider):
name = 'tomato'
allowed_domains = ['www.rottentomatoes.com']
start_urls = ['https://www.rottentomatoes.com/']
def parse(self, response):
datas = response.css('.table tr')
for data in datas:
item = RottentomatoesItem()
item['movieyear'] = response.url.replace('https://www.rottentomatoes.com/top/bestofrt/?year=','')
item['rank'] = data.css('td:nth-child(1)::text').extract_first()
item['movie'] = str(data.css('td a::text').extract_first()).strip()
item['tomatometer'] = str(data.css('.tMeterScore::text').extract_first()).replace('\xa0','')
yield item
def start_requests(self):
base_url = 'https://www.rottentomatoes.com/top/bestofrt/?year='
for page in range(1950,2019):
url = base_url + str(page)
yield scrapy.Request(url,callback=self.parse)
```
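Assuming the standard Scrapy project layout, the spider can then be run with:
```
scrapy crawl tomato
```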
## Pipeline
```python
import psycopg2  # PostgreSQL client library used by this pipeline

class pgPipeline():
def __init__(self,host,database,user,password,port):
self.host = host
self.database = database
self.user = user
self.password = password
self.port = port
self.db = psycopg2.connect(host=self.host, user=self.user, password=self.password, database=self.database, port=self.port)
self.cursor = self.db.cursor()
@classmethod
def from_crawler(cls,crawler):
return cls(
host=crawler.settings.get('SQL_HOST'),
database=crawler.settings.get('SQL_DATABASE'),
user=crawler.settings.get('SQL_USER'),
password=crawler.settings.get('SQL_PASSWORD'),
port=crawler.settings.get('SQL_PORT'),
)
def open_spider(self,spider):
self.db = psycopg2.connect(host=self.host, user=self.user, password=self.password, database=self.database, port=self.port)
self.cursor = self.db.cursor()
def process_item(self,item,spider):
data = item
sql = 'insert into tomato(movieyear,rank,movie,tomatometer) values (%s,%s,%s,%s)'
try:
self.cursor.execute(sql,(data['movieyear'],data['rank'],data['movie'],data['tomatometer']))
self.db.commit()
except:
self.db.rollback()
return item
def close_spider(self,spider):
self.db.close()
```
| 32.905263 | 228 | 0.638196 | yue_Hant | 0.126373 |
03db2cc50c3d6a139023719a83815457d018c514 | 3,322 | md | Markdown | _posts/2014-12-10-the-hobbit-the-battle-of-the-five-armies.md | breaktimetv2/breaktimetv2.github.io | 486dafc58b6e6c7e3fd3aaf38f334466bb9651a7 | [
"MIT"
] | null | null | null | _posts/2014-12-10-the-hobbit-the-battle-of-the-five-armies.md | breaktimetv2/breaktimetv2.github.io | 486dafc58b6e6c7e3fd3aaf38f334466bb9651a7 | [
"MIT"
] | null | null | null | _posts/2014-12-10-the-hobbit-the-battle-of-the-five-armies.md | breaktimetv2/breaktimetv2.github.io | 486dafc58b6e6c7e3fd3aaf38f334466bb9651a7 | [
"MIT"
] | null | null | null | ---
layout: peliculas
title: "El Hobbit 3: La batalla de los cinco ejércitos"
titulo_original: "The Hobbit: The Battle of the Five Armies"
image_carousel: 'https://res.cloudinary.com/imbriitneysam/image/upload/v1543723090/hobbit3-poster-min.jpg'
image_banner: 'https://res.cloudinary.com/imbriitneysam/image/upload/v1543723092/hobbit3-banner-min.jpg'
description: After having reclaimed the kingdom from the Dragon Smaug in the mountain, the Company has unwittingly unleashed an evil power. An enraged Smaug flies toward Lake-town to wipe out any remaining life. Obsessed with the enormous riches in his possession, the dwarf king Thorin turns greedy, while Bilbo tries to make him see reason by doing something desperate and dangerous.
description_corta: After having reclaimed the kingdom from the Dragon Smaug in the mountain, the Company has unwittingly unleashed an evil power. An enraged Smaug flies toward Lake-town to wipe out any remaining life. Obsessed with...
category: 'peliculas'
descargas: 'yes'
idioma: 'Latino'
descargas2:
descarga-1: ["1", "https://oload.fun/f/U89kNlzwgQk/El_Hobbit_3-_La_batalla_de_los_cinco_ej%C3%A9rcitos.mp4", "https://www.google.com/s2/favicons?domain=openload.co","OpenLoad","https://res.cloudinary.com/imbriitneysam/image/upload/v1541473684/mexico.png", "Latino", "Full HD"]
descarga-3: ["2", "https://mega.nz/#!46IUGBLa!iwCzj-H9mnGzddeJ2BefNzqxcneHmQS3CN67EjnEqq0", "https://www.google.com/s2/favicons?domain=mega.nz","Mega","https://res.cloudinary.com/imbriitneysam/image/upload/v1541473684/mexico.png", "Latino", "Full HD"]
anio: '2014'
duracion: '2h 24 min'
estrellas: '5'
genero: Acción, Ciencia Ficción, Aventura
trailer: https://www.youtube.com/embed/clRX9RXOb64
embed: https://www.youtube.com/embed/clRX9RXOb64?autoplay=1&rel=0&hd=1&border=0&wmode=opaque&enablejsapi=1&modestbranding=1&controls=1&showinfo=0
sandbox: allow-same-origin allow-forms
reproductor: fembed
clasificacion: '+7'
calidad: 'Full HD'
reproductores_otros: ["https://van.gigastream.xyz/player/index.php?data=4c5bde74a8f110656874902f07378009","Latino","https://streampelis.info/public/dist/index.html?id=772ade1edaec4d068f3f0a48a0c3c7bb","Latino","https://gdriveplayer.me/embed2.php?link=NmUVb0IT6oQSOYCj2UGhSg1y10PPUbgMPtlFESeyjMtJGGKTjRcHPEkOlBni4EE%252F038J2ZP1ijZ4v3NdvJYfHOi6QvLsdYHGZKSUEoEwnKpFgN8DUOiAUcjmIxU88jBAFHcrY6crHHY3iAYhFAVbqXqJPyHIQlnsgSAnpeMZOvSkKH60QFWfwPhZGQpFLZIHy3zeiOWflqrYmRNAQ7Zmnp","Latino","https://movcloud.net/embed/wo-7mQqpY-RM","Latino","https://api.cuevana3.io/stream/index.php?file=ek5lbm9xYWNrS0xYMTZLa2xNbkdvY3ZTb3BtZng4TGp6ZFpobGFMUGtOelcwcUZmbWRIVzRkakVuS0JnbEplcG1KUnNZSlRTMGViVTBxZGdsdEhPb3QzSW1wbDlyZG5ocWNpZ1lLRFNsUT09","Latino","https://mstream.press/b3g8zqtbfydx","Latino","https://mstream.press/8h8rmhywpand","Latino"]
reproductores_fembed: ["https://feurl.com/v/80oe6lr179j","Latino","https://feurl.com/v/3q917p1g829","Latino"]
tags:
- Ciencia-Ficcion
twitter_text: El Hobbit 3, La batalla de los cinco ejércitos
introduction: After having reclaimed the kingdom from the Dragon Smaug in the mountain, the Company has unwittingly unleashed an evil power. An enraged Smaug flies toward Lake-town to wipe out any remaining life. Obsessed with
---
| 77.255814 | 825 | 0.807044 | spa_Latn | 0.512821 |
03db877ce6568d551b755f93005657b225e60578 | 1,484 | md | Markdown | README.md | davops/ansible-haproxy | e639410469c4d120c5872a4bcee65bc22c41e478 | [
"Apache-2.0"
] | 1 | 2021-03-08T00:14:26.000Z | 2021-03-08T00:14:26.000Z | README.md | davops/ansible-haproxy | e639410469c4d120c5872a4bcee65bc22c41e478 | [
"Apache-2.0"
] | null | null | null | README.md | davops/ansible-haproxy | e639410469c4d120c5872a4bcee65bc22c41e478 | [
"Apache-2.0"
] | 2 | 2016-06-21T19:40:34.000Z | 2020-02-20T21:36:05.000Z | haproxy
========
Installs and configures [HAProxy 1.5](http://www.haproxy.org/).
Versions
--------
**WARNING:** This is the README for the `master` branch, which tracks the development of version 2 and targets Ansible 2.x. This branch is under active development and will include breaking changes.
The most recent release in the 1.x series is [1.1.0](https://github.com/devops-coop/ansible-haproxy/tree/v1.1.0).
Features
--------
* Supports Alpine, CentOS, Debian, and Ubuntu.
* Installs HAProxy 1.5 from official repositories on Debian and Ubuntu.
* Installs EPEL repository on CentOS.
Role Variables
--------------
* `haproxy_global`
Global HAProxy settings.
* `haproxy_defaults`
Default settings for frontends, backends, and listen proxies.
* `haproxy_backends`
A list of HAProxy backends.
* `haproxy_frontends`
A list of HAProxy frontends.
* `haproxy_listen`
A list of listen proxies.
See [`vars/main.yml`](vars/main.yml) for a complete list of configurable variables.
Example
-------
```yaml
- hosts: loadbalancers
roles:
- role: haproxy
haproxy_frontends:
- name: 'fe-mysupersite'
ip: '123.123.123.120'
port: '80'
maxconn: '1000'
default_backend: 'be-mysupersite'
haproxy_backends:
- name: 'be-mysupersite'
description: 'mysupersite is really cool'
servers:
- name: 'be-mysupersite-01'
ip: '192.168.1.100'
```
License
-------
Apache v2
| 22.484848 | 198 | 0.656334 | eng_Latn | 0.895241 |
03dbdd6062d6782d4db5f699dec8f2fe11e38534 | 9,231 | md | Markdown | Week4_lecture2_2020115002.md | Vithesh-Reddy/Algorithms_2020115002 | 70868560347e7b7b444e51aa3d26c7a9963665c0 | [
"Apache-2.0"
] | null | null | null | Week4_lecture2_2020115002.md | Vithesh-Reddy/Algorithms_2020115002 | 70868560347e7b7b444e51aa3d26c7a9963665c0 | [
"Apache-2.0"
] | null | null | null | Week4_lecture2_2020115002.md | Vithesh-Reddy/Algorithms_2020115002 | 70868560347e7b7b444e51aa3d26c7a9963665c0 | [
"Apache-2.0"
] | null | null | null | # Week 4, Lecture 2
## Activity Selection
<pre>
Suppose we have a set S = {a<sub>1</sub>, a<sub>2</sub>, ...., a<sub>n</sub>} of n proposed activities that wish to use a resource, such as a lecture hall, which can serve only one activity at a time.
Each activity a<sub>i</sub> has a start time s<sub>i</sub> and a finish time f<sub>i</sub>, where 0 <= s<sub>i</sub> < f<sub>i</sub> < ∞. If selected, activity a<sub>i</sub> takes place during the half-open time interval [s<sub>i</sub>, f<sub>i</sub>). Activities a<sub>i</sub> and a<sub>j</sub> are compatible if the intervals [s<sub>i</sub>, f<sub>i</sub>) and [s<sub>j</sub>, f<sub>j</sub>) do not overlap. That is, a<sub>i</sub> and a<sub>j</sub> are compatible if s<sub>i</sub> >= f<sub>j</sub> or s<sub>j</sub> >= f<sub>i</sub>.
In the activity-selection problem, we wish to select a maximum-size subset of mutually compatible activities.
We assume that the activities are sorted in monotonically increasing order of finish time:
f<sub>1</sub> <= f<sub>2</sub> <= f<sub>3</sub> .... <= f<sub>n-1</sub> <= f<sub>n</sub>
</pre>
### Optimal Substructure Property:
<pre>
We now verify that the activity-selection problem exhibits optimal substructure. Let us denote by S<sub>ij</sub> the set of activities that start after activity a<sub>i</sub> finishes and that finish before activity a<sub>j</sub> starts.
Suppose that we wish to find a maximum set of mutually compatible activities in S<sub>ij</sub> , and suppose further that such a maximum set is A<sub>ij</sub> , which includes some activity a<sub>k</sub>.
By including a<sub>k</sub> in an optimal solution, we are left with two subproblems: finding mutually compatible activities in the set S<sub>ik</sub> and finding mutually compatible activities in the set S<sub>kj</sub>.
Therefore, the activity-selection problem exhibits optimal substructure property.
</pre>
### Greedy Choice:
Consider any nonempty subproblem S<sub>k</sub>, and let a<sub>m</sub> be an activity in S<sub>k</sub> with the earliest finish time. Then a<sub>m</sub> is included in some maximum-size subset of mutually compatible activities of S<sub>k</sub>.
#### Proof:
<pre>
Let A<sub>k</sub> be a maximum-size subset of mutually compatible activities in S<sub>k</sub>,
and let a<sub>j</sub> be the activity in A<sub>k</sub> with the earliest finish time.
If a<sub>j</sub> = a<sub>m</sub>, we are done, since we have shown that a<sub>m</sub> is in some maximum-size subset of mutually compatible activities of S<sub>k</sub>. If a<sub>j</sub> != a<sub>m</sub>, let the set A'<sub>k</sub> = A<sub>k</sub> - {a<sub>j</sub>} U {a<sub>m</sub>} be A<sub>k</sub> but substituting a<sub>m</sub> for a<sub>j</sub>.
The activities in A'<sub>k</sub> are disjoint, which follows because the activities in A<sub>k</sub> are disjoint, a<sub>j</sub> is the first activity in A<sub>k</sub> to finish, and f<sub>m</sub> <= f<sub>j</sub>.
Since |A'<sub>k</sub>| = |A<sub>k</sub>|, we conclude that A'<sub>k</sub> is a maximum-size subset of mutually compatible activities of S<sub>k</sub>, and it includes a<sub>m</sub>.
</pre>
### Greedy Algorithm:
<pre>
The recursive procedure takes the start and finish times of the activities, represented as arrays s and f, the index k that defines the subproblem S<sub>k</sub> it is to solve, and the size n of the original problem.
It returns a maximum-size set of mutually compatible activities in S<sub>k</sub>. We assume that the n input activities are already ordered by monotonically increasing finish time. If not, we can sort them into this order in O(nlog(n)) time, breaking ties arbitrarily.
In order to start, we add the fictitious activity a<sub>0</sub> with f<sub>0</sub> = 0, so that subproblem S<sub>0</sub> is the entire set of activities S.
</pre>
```python
RECURSIVE-ACTIVITY-SELECTOR(s, f, k, n)
1 m = k + 1
2 while m <= n and s[m] < f[k]:
3     m = m + 1
4 if m <= n:
5     return {a_m} U RECURSIVE-ACTIVITY-SELECTOR(s, f, m, n)
6 else:
7     return NULL_SET
```
Assuming that the activities have already been sorted by finish times, the running time of the call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n) is O(n).
The procedure GREEDY-ACTIVITY-SELECTOR is an iterative version of the procedure RECURSIVE-ACTIVITY-SELECTOR. It also assumes that the input activities are ordered by monotonically increasing finish time. It collects selected activities into a set A and returns this set when it is done.
```python
GREEDY-ACTIVITY-SELECTOR(s, f)
n = s.length
A = {a_1}
k = 1
for m = 2 to n:
    if s[m] >= f[k]:
A = A U {a_m}
k = m
return A
```
Like the recursive version, GREEDY-ACTIVITY-SELECTOR schedules a set of n activities in O(n) time, assuming that the activities were already sorted initially by their finish times.
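A runnable Python version of the iterative procedure, using 0-indexed lists that are already sorted by finish time:
```python
def greedy_activity_selector(s, f):
    n = len(s)
    A = [0]               # always select the first activity
    k = 0
    for m in range(1, n):
        if s[m] >= f[k]:  # compatible with the last selected activity
            A.append(m)
            k = m
    return A

# Activities sorted by finish time
s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selector(s, f))  # [0, 3, 7, 10]
```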
## Huffman Codes
<pre>
We consider the data to be a sequence of characters. Huffman’s greedy algorithm uses a table giving how often each character occurs (frequency) to build up an optimal way of representing each character as a binary string.
We consider the problem of designing a binary character code in which each character is represented by a unique binary string, which we call a codeword. If we use a fixed-length code, we need 3 bits to represent 6 characters: a = 000, b = 001, ..., f = 101. But this way of coding is inefficient.
A variable-length code can do considerably better than a fixed-length code, by giving frequent characters short codewords and infrequent characters long codewords.
</pre>
### Prefix Codes:
<pre>
Codes in which no codeword is a prefix of some other codeword are called prefix codes. A prefix code can always achieve the optimal data compression among any character code.
Prefix codes are desirable because they simplify decoding. Since no codeword is a prefix of any other, the codeword that begins an encoded file is unambiguous. We can simply identify the initial codeword, translate it back to the original character, and repeat the decoding process on the remainder of the encoded file.
For example, the string 001011101 parses uniquely as 0 0 101 1101, which decodes to aabe.
An optimal code for a file is always represented by a full binary tree, in which every nonleaf node has two children.
We can say that if C is the alphabet from which the characters are drawn and all character frequencies are positive, then the tree for an optimal prefix code has exactly |C| leaves, one for each letter of the alphabet, and exactly |C| - 1 internal nodes.
Given a tree T corresponding to a prefix code, we can compute the number of bits required to encode a file:
For each character c in the alphabet C, let the attribute c.freq denote the frequency of c in the file and let d<sub>T</sub>(c) denote the depth of c’s leaf in the tree. d<sub>T</sub>(c) is also the length of the codeword for character c.
Hence, the number of bits required to encode a file is:
</pre>

which is the cost of tree T.
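As a small illustration of both points (unambiguous decoding and the cost B(T)), here is a Python sketch. The variable-length code table is the standard CLRS example (a = 0, b = 101, c = 100, d = 111, e = 1101, f = 1100) with its usual frequencies; the notes above do not spell this table out, so treat it as an assumed example.

```python
# Decode a bit string with a prefix code: scan left to right and emit a
# character as soon as the accumulated bits match a codeword. Because no
# codeword is a prefix of another, the first match is always correct.
code = {'a': '0', 'b': '101', 'c': '100', 'd': '111', 'e': '1101', 'f': '1100'}

def decode(bits, code):
    inverse = {w: ch for ch, w in code.items()}
    out, word = [], ''
    for bit in bits:
        word += bit
        if word in inverse:
            out.append(inverse[word])
            word = ''
    return ''.join(out)

print(decode('001011101', code))  # aabe

# Cost of the corresponding tree: B(T) = sum of c.freq * d_T(c), where the
# depth d_T(c) equals the codeword length.
freq = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}  # frequencies in thousands
print(sum(freq[c] * len(code[c]) for c in freq))  # 224 (thousand bits)
```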
### Huffman Algorithm:
<pre>
We assume that C is a set of n characters and that each character c ∈ C is an object with an attribute c.freq giving its frequency.
The algorithm builds the tree T corresponding to the optimal code in a bottom-up manner. It begins with a set of |C| leaves and performs a sequence of |C| - 1 'merging' operations to create the final tree.
The algorithm uses a min-priority queue Q, keyed on the freq attribute, to identify the two least-frequent objects to merge together. When we merge two objects, the result is a new object whose frequency is the sum of the frequencies of the two objects that were merged.
</pre>
```python
HUFFMAN(C)
1   n = |C|
2   Q = C
3   for i = 1 to n - 1:
4       allocate a new node z
5       z.left = x = EXTRACT-MIN(Q)
6       z.right = y = EXTRACT-MIN(Q)
7       z.freq = x.freq + y.freq
8       INSERT(Q, z)
9   return EXTRACT-MIN(Q)
```
#### Time complexity:
<pre>
We assume that Q is implemented as a binary min-heap.
For a set C of n characters, we can initialize Q in line 2 in O(n) time using the BUILD-MIN-HEAP procedure. The for loop in lines 3–8 executes exactly n - 1 times, and since each heap operation requires time O(log(n)), the loop contributes O(nlog(n)) to the running time.
Thus, the total running time of HUFFMAN on a set of n characters is O(nlog(n)).
</pre>
- Note - The problem of determining an optimal prefix code exhibits the greedy-choice and optimal substructure properties.
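Here is a runnable Python sketch of HUFFMAN, using `heapq` as the binary min-heap. The tie-breaking counter in each heap entry is an implementation detail added here (so entries with equal frequencies never compare the node objects); the resulting code is optimal, but the exact bit patterns can differ between implementations.

```python
import heapq
from itertools import count

def huffman(freqs):
    tiebreak = count()
    q = [(f, next(tiebreak), {'char': c}) for c, f in freqs.items()]
    heapq.heapify(q)                 # line 2: build the min-priority queue in O(n)
    for _ in range(len(freqs) - 1):  # lines 3-8: exactly n - 1 merges
        fx, _, x = heapq.heappop(q)  # line 5: EXTRACT-MIN
        fy, _, y = heapq.heappop(q)  # line 6: EXTRACT-MIN
        heapq.heappush(q, (fx + fy, next(tiebreak), {'left': x, 'right': y}))
    return q[0][2]                   # line 9: root of the code tree

def codewords(node, prefix=''):
    if 'char' in node:               # leaf: emit the accumulated path
        return {node['char']: prefix or '0'}
    table = {}
    table.update(codewords(node['left'], prefix + '0'))
    table.update(codewords(node['right'], prefix + '1'))
    return table

tree = huffman({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})
print(codewords(tree))  # one optimal prefix code for these frequencies
```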
### Entropy:
<pre>
The inherent unpredictability of a probability distribution can be measured by the extent to which it is possible to compress data drawn from that distribution.
more compressible ≡ less random ≡ more predictable
Suppose there are n possible outcomes, with probabilities p<sub>1</sub>, p<sub>2</sub>,..., p<sub>n</sub>. If a sequence of m values is drawn from the distribution, then the i<sup>th</sup> outcome will pop up approximately
mp<sub>i</sub> times (when m is large).
Assume that the p<sub>i</sub>’s are of the form 1/2<sup>k</sup>.
By induction, the number of bits needed to encode the sequence is:
</pre>

Thus the average number of bits needed to encode a single draw from the distribution is:

This is the entropy of the distribution - a measure of how much randomness it contains.
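A minimal Python sketch of this quantity, summing p<sub>i</sub> log<sub>2</sub>(1/p<sub>i</sub>) over the outcomes:

```python
from math import log2

def entropy(ps):
    # Average number of bits per draw; outcomes with p = 0 contribute nothing.
    return sum(p * log2(1 / p) for p in ps if p > 0)

print(entropy([0.5, 0.25, 0.25]))      # 1.5 bits per draw
print(entropy([1/2, 1/4, 1/8, 1/8]))   # 1.75 bits per draw
```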
| 67.379562 | 533 | 0.736323 | eng_Latn | 0.99889 |
03dbe46198e043706db843eacf120d3c9faf98d6 | 4,666 | md | Markdown | docs/LibraryItem.md | UltraCart/rest_api_v2_sdk_java | aff5ac367c25a663209d32e8ec6d4c069917b1dc | [
"Apache-2.0"
] | null | null | null | docs/LibraryItem.md | UltraCart/rest_api_v2_sdk_java | aff5ac367c25a663209d32e8ec6d4c069917b1dc | [
"Apache-2.0"
] | 1 | 2021-02-06T14:31:50.000Z | 2021-02-06T14:31:50.000Z | docs/LibraryItem.md | UltraCart/rest_api_v2_sdk_java | aff5ac367c25a663209d32e8ec6d4c069917b1dc | [
"Apache-2.0"
] | null | null | null |
# LibraryItem
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**assets** | [**List<LibraryItemAsset>**](LibraryItemAsset.md) | | [optional]
**attributes** | [**List<LibraryItemAttribute>**](LibraryItemAttribute.md) | | [optional]
**categories** | **List<String>** | | [optional]
**content** | **String** | | [optional]
**contentType** | **String** | | [optional]
**description** | **String** | | [optional]
**industries** | **List<String>** | | [optional]
**libraryItemOid** | **Integer** | | [optional]
**merchantId** | **String** | | [optional]
**myPurchasedVersion** | **Integer** | If this is a public item and the merchant has already purchased it, this is their version. If not yet purchased, this will be zero. This value will only be populated during a searchPublicItems() call. | [optional]
**originalObjectId** | **String** | This id points to the original object that was added to the library. For flows and campaigns, this is a uuid string. For upsells, it is an oid integer. For transactional_emails, it is an email name. | [optional]
**price** | [**BigDecimal**](BigDecimal.md) | The price of the published item. Null for any private library items. | [optional]
**priceFormatted** | **String** | The formatted price of the published item. Null for any private library items. | [optional]
**published** | **Boolean** | True if this library item is a published item (not source) | [optional]
**publishedDts** | **Object** | The timestamp of the last published version | [optional]
**publishedFromLibraryItemOid** | **Integer** | The source item used to publish this item. This allows for comparisons between source and published | [optional]
**publishedMeta** | [**LibraryItemPublishedMeta**](LibraryItemPublishedMeta.md) | | [optional]
**publishedVersion** | **Integer** | The source version when this item was published. This allows for out-of-date alerts to be shown when there is a difference between source and published | [optional]
**purchased** | **Boolean** | True if this library item has been purchased | [optional]
**purchasedFromLibraryItemOid** | **Integer** | The published item that was purchased to make this item. This allows for comparisons between published and purchased | [optional]
**purchasedMeta** | [**LibraryItemPurchasedMeta**](LibraryItemPurchasedMeta.md) | | [optional]
**purchasedVersion** | **Integer** | The published version when this item was purchased. This allows for out-of-date alerts to be shown when there is a difference between published and purchased | [optional]
**rejected** | **Boolean** | Any published library item reviewed by UltraCart staff for malicious or inappropriate content will have this flag set to true. This is always false for non-published items | [optional]
**rejectedReason** | **String** | Any rejected published item will have this field populated with the reason. | [optional]
**releaseNotes** | **String** | Release notes specific to each published version and only appearing on public items. | [optional]
**releaseVersion** | **Integer** | This counter records how many times a library item has been published. This is used to show version history. | [optional]
**reviewed** | **Boolean** | Any published library items must be reviewed by UltraCart staff for malicious content. This flag shows the status of that review. This is always false for non-published items | [optional]
**reviewedDts** | **Object** | This is the timestamp for a published items formal review by UltraCart staff for malicious content. | [optional]
**screenshots** | [**List<LibraryItemScreenshot>**](LibraryItemScreenshot.md) | | [optional]
**shareWithAccounts** | [**List<LibraryItemAccount>**](LibraryItemAccount.md) | | [optional]
**shareWithOtherEmails** | [**List<LibraryItemEmail>**](LibraryItemEmail.md) | | [optional]
**shared** | **Boolean** | True if this item is shared from another merchant account | [optional]
**source** | **Boolean** | True if this library item has been published | [optional]
**sourceToLibraryItemOid** | **Integer** | This oid points to the published library item, if there is one. | [optional]
**sourceVersion** | **Integer** | The version of this item. Increment every time the item is saved. | [optional]
**style** | **String** | | [optional]
**timesPurchased** | **Integer** | | [optional]
**title** | **String** | | [optional]
**type** | **String** | | [optional]
**underReview** | **Boolean** | True if this library item was published but is awaiting review from UltraCart staff. | [optional]
| 93.32 | 255 | 0.699314 | eng_Latn | 0.973786 |
03dd12ce643c136a73bd2e5f9c8aacd3530ea74a | 2,435 | md | Markdown | README.md | leizongmin/leizm-async-throttle | eed8169cb3282d46732f4d55702337a1928ab525 | [
"MIT"
] | null | null | null | README.md | leizongmin/leizm-async-throttle | eed8169cb3282d46732f4d55702337a1928ab525 | [
"MIT"
] | null | null | null | README.md | leizongmin/leizm-async-throttle | eed8169cb3282d46732f4d55702337a1928ab525 | [
"MIT"
] | null | null | null | # @leizm/async-throttle
Throttling for asynchronous operations.
## Installation
```bash
npm install @leizm/async-throttle --save
```
## Usage
```typescript
const t = new AsyncThrottle({
  concurrent: 10, // concurrency limit
  tps: 1000, // maximum number of requests processed per second
  timeout: 1000, // timeout
});
// run an asynchronous task
const ret = await t.run(async () => {
return 12345;
});
// result: 12345
// get the current runtime status
const stats = t.stat();
// result: { running: 1, waitting: 0, tps: 1 }
```
## Performance
```
------------------------------------------------------------------------
@leizm/async-throttle Benchmark
------------------------------------------------------------------------
Platform info:
- Darwin 18.5.0 x64
- Node.JS: 10.15.3
- V8: 6.8.275.32-node.51
Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz × 8
3 tests success:
┌───────────────────────┬──────────┬─────────┬────────┐
│ test │ rps │ ns/op │ spent │
├───────────────────────┼──────────┼─────────┼────────┤
│ #run concurrent=100 │ 53252.1 │ 18778.6 │ 2.221s │
├───────────────────────┼──────────┼─────────┼────────┤
│ #run concurrent=1000 │ 143878.0 │ 6950.3 │ 2.099s │
├───────────────────────┼──────────┼─────────┼────────┤
│ #run concurrent=10000 │ 136674.3 │ 7316.7 │ 2.195s │
└───────────────────────┴──────────┴─────────┴────────┘
```
Note: when `capacity` is unrestricted, executing `setTimeout(fn, 0)` async tasks can reach about 130,000 per second, i.e. the performance impact is basically negligible.
## License
```
MIT License
Copyright (c) 2019 Zongmin Lei <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
| 29.337349 | 78 | 0.584805 | yue_Hant | 0.654727 |
03de8308eaa162758ce5760c514ee64811f3d98d | 1,291 | md | Markdown | Lesson/hints.md | Intro-Quantitative-Geology/Lesson-12-Viscous-flows | d4ea3c71b94c5e21edc113d7c586201ce23574a6 | [
"MIT"
] | null | null | null | Lesson/hints.md | Intro-Quantitative-Geology/Lesson-12-Viscous-flows | d4ea3c71b94c5e21edc113d7c586201ce23574a6 | [
"MIT"
] | 1 | 2016-12-02T11:47:50.000Z | 2016-12-02T11:47:50.000Z | Lesson/hints.md | Intro-Quantitative-Geology/Lesson-12-Viscous-flows | d4ea3c71b94c5e21edc113d7c586201ce23574a6 | [
"MIT"
] | null | null | null | # Hints for Exercise 12
## Contents
- [General hints for Exercise 12](#general-hints)
- [Problem 1 hints](#problem-1)
- [Problem 2 hints](#problem-2)
## General hints
## Problem 1
### Solving for the unknown pressures in Equation 10
In Part 2 of Problem 1 you're asked to solve for the term (*p*<sub>1</sub> - *p*<sub>0</sub>) / *L* in Equation 10.
To help you know whether or not you've done this properly, I've given a table of the values you should get below.
| *n* | (*p*<sub>1</sub> - *p*<sub>0</sub>) / *L* |
| --- | ----------------------------------------- |
| 2 | 4215743440.23 |
| 3 | 8029987.50521 |
| 4 | 14339.2634022 |
| 5 | 24.5815944037 |
## Problem 2
### Solving for the unknown term in Equation 19
Similar to Problem 1, you're again asked to solve for an unknown term in Equation 19, in this case *ɣ*<sub>*x*</sub>.
As above, you have a table of values below to help you check whether or not you've properly solved your equation for *ɣ*<sub>*x*</sub>.
| *n* | *ɣ*<sub>*x*</sub> |
| --- | ----------------- |
| 1 | 8.33314255626e+12 |
| 2 | 51263351337.2 |
| 3 | 327038924.001 |
| 4 | 1955974.42585 |
| 5 | 11230.4751723 |
| 39.121212 | 135 | 0.545314 | eng_Latn | 0.964675 |
03de8b9027b1634dc88258d23ec6bfd0cba21874 | 2,254 | md | Markdown | pkg/analyzer_cli/README.md | zhaoxuyang/sdk | ecd82166c31297db61f062c9ba509a2d498d6cef | [
"BSD-3-Clause"
] | 1 | 2020-05-14T02:23:11.000Z | 2020-05-14T02:23:11.000Z | pkg/analyzer_cli/README.md | onuryildriim/sdk | f99631b12c4ae3f1771ef68b1ae8c57a0dd9434d | [
"BSD-3-Clause"
] | null | null | null | pkg/analyzer_cli/README.md | onuryildriim/sdk | f99631b12c4ae3f1771ef68b1ae8c57a0dd9434d | [
"BSD-3-Clause"
] | null | null | null | # dartanalyzer
Use _dartanalyzer_ to statically analyze your code at the command line,
checking for errors and warnings that are specified in the
[Dart Language Specification](https://dart.dev/guides/language/spec).
DartPad, code editors, and IDEs such as Android Studio and VS Code
use the same analysis engine that dartanalyzer uses.
## Basic usage
Run the analyzer from the top directory of the package.
Here's an example of analyzing a Dart file.
```
dartanalyzer bin/test.dart
```
## Options
The following are the most commonly used options for dartanalyzer:
* `--packages=`
Specify the path to the package resolution configuration file.
For more information see
[Package Resolution Configuration File](https://github.com/lrhn/dep-pkgspec/blob/master/DEP-pkgspec.md).
This option cannot be used with `--package-root`.
* `--package-warnings`
Show warnings not only for code in the specified .dart file and
others in its library, but also for libraries imported with `package:`.
* `--options=`
Specify the path to an analysis options file.
* `--[no-]lints`
Show the results from the linter.
* `--[no-]hints`
Don't show hints for improving the code.
* `--version`
Show the analyzer version.
* `-h` _or_ `--help`
Show all of the command-line options.
See the [Customizing static analysis
guide](https://dart.dev/guides/language/analysis-options) for further ways to
customize how dartanalyzer performs static analysis, and how it reports its
findings.
The following are advanced options to use with dartanalyzer:
* `--dart-sdk=`
Specify the directory that contains the Dart SDK.
* `--fatal-warnings`
Except for type warnings, treat warnings as fatal.
* `--format=machine`
Produce output in a format suitable for parsing.
* `--ignore-unrecognized-flags`
Rather than printing the help message, ignore any unrecognized command-line
flags.
* `--url-mapping=libraryUri,/path/to/library.dart`
Use the specified library as the source for that particular import.
The following options are deprecated:
* `--package-root=`<br>
**Deprecated.** Specify the directory to search for any libraries that are
imported using `package:`. _This option is replaced as of Dart 1.12 with
`--packages`._
| 25.613636 | 106 | 0.746673 | eng_Latn | 0.993447 |
03deb1b8bfa30cf70f5baa595da7a23abe8c70f6 | 486 | md | Markdown | tests/integration-tests/src/test/resources/test-app-1/BUTTERFLY_MANUAL_INSTRUCTIONS.md | myohn-paypal/butterfly | c71434d1b2633bcb2006782d45ad0425b97ad633 | [
"MIT"
] | 38 | 2017-11-11T06:44:04.000Z | 2022-02-19T17:54:45.000Z | butterfly-core/src/main/resources/MANUAL_INSTRUCTIONS_BASELINE.md | DalavanCloud/butterfly | 16c86fc2602ceb5a5a4d327ab083b0786d6e8351 | [
"MIT"
] | 235 | 2017-10-19T20:57:02.000Z | 2022-03-25T16:14:22.000Z | butterfly-core/src/main/resources/MANUAL_INSTRUCTIONS_BASELINE.md | DalavanCloud/butterfly | 16c86fc2602ceb5a5a4d327ab083b0786d6e8351 | [
"MIT"
] | 69 | 2017-10-25T12:39:12.000Z | 2021-12-29T16:39:20.000Z | # Pending manual instructions
This application has been transformed by Butterfly. However, to complete the transformation, a few manual steps still need to be performed.
Follow the list of instructions below and complete each one of them in order. Notice that, in the case of upgrade paths, there are not necessarily manual instructions for every upgrade step.
After you finish all of them, don't forget to remove this document and all manual instruction documents.
<br/>
<br/>
| 37.384615 | 191 | 0.796296 | eng_Latn | 0.999773 |
03df3192d4614005201fc6e71d8bf978450a33b5 | 347 | md | Markdown | changelog/unreleased/mentix-prom-ext.md | KarthikSundar2002/reva | 2f8da47177c1ad39cf679df1be1e01ffc30f45b0 | [
"Apache-2.0"
] | null | null | null | changelog/unreleased/mentix-prom-ext.md | KarthikSundar2002/reva | 2f8da47177c1ad39cf679df1be1e01ffc30f45b0 | [
"Apache-2.0"
] | 31 | 2022-01-10T04:27:49.000Z | 2022-03-28T04:17:41.000Z | changelog/unreleased/mentix-prom-ext.md | KarthikSundar2002/reva | 2f8da47177c1ad39cf679df1be1e01ffc30f45b0 | [
"Apache-2.0"
] | null | null | null | Enhancement: Mentix PromSD extensions
The Mentix Prometheus SD scrape targets are now split into one file per service type, making health check configuration easier. Furthermore, the local file connector for mesh data and the site registration endpoint have been dropped, as they aren't needed anymore.
https://github.com/cs3org/reva/pull/2560
| 57.833333 | 265 | 0.818444 | eng_Latn | 0.978619 |
03df8311cffedfdd740a95bee9efa838980793e5 | 3,478 | md | Markdown | _posts/2017-07-13-DataStructue-Map.md | YumiChen/Blog | 45a84fd20127defe1bc5426a144313eb7a95af72 | [
"MIT"
] | 1 | 2019-04-15T09:32:36.000Z | 2019-04-15T09:32:36.000Z | _posts/2017-07-13-DataStructue-Map.md | YumiChen/Blog | 45a84fd20127defe1bc5426a144313eb7a95af72 | [
"MIT"
] | 3 | 2018-07-05T09:23:56.000Z | 2018-07-22T06:55:33.000Z | _posts/2017-07-13-DataStructue-Map.md | YumiChen/Blog | 45a84fd20127defe1bc5426a144313eb7a95af72 | [
"MIT"
] | null | null | null | ---
layout: post
title: "[學習筆記 | 基礎資料結構] Map - 使用Javascript"
author: "Yumi Chen"
categories: dataStructure
tags: [Javascript,data structure]
---
<iframe height="315" src="https://www.youtube.com/embed/_1BPrCHcjhs" frameborder="0" allowfullscreen></iframe>
[Source code / tutorial code](https://codepen.io/beaucarnes/pen/jBjobG?editors=0012)
## Concept overview
- A Map stores a series of key-value pairs; a value is retrieved through its specific key. As a real-life analogy, think of student numbers in an elementary school class: number 1 is Xiao-Ming, number 2 is Xiao-Jian, and so on. During roll call, when number 1 is called, Xiao-Ming answers "here!". The numbers in this example act like keys, and the corresponding students are the values.
- The example from the video: looking up the desired value by month name:
{:class="img-responsive"}
## Basic structure
JavaScript already provides a built-in Map object, but we can still implement our own Map.
1. Usage example of JavaScript's built-in Map object
```javascript
let map2 = new Map();
map2.has('hands');
map2.entries();
let keyObj = {},
keyFunc = function() {};
map2.set('hello', 'string value');
map2.set(keyObj, 'obj value');
map2.set(keyFunc, 'func value');
map2.set(NaN, 'NaN value')
console.log(map2.size);
console.log(map2.get('hello'));
console.log(map2.get(keyObj));
console.log(map2.get(keyFunc));
console.log(map2.get(NaN));
// 4
// "string value"
// "obj value"
// "func value"
// "NaN value"
```
2. Map methods
- size: total number of elements in the Map
- set: adds a new element
- has: checks whether a specific element exists
- get: returns a specific element
- delete: removes a specific element
- values: returns the values of all elements
- clear: removes all elements
## Implementation steps
1. Create the object with `count` and `collection` properties
```javascript
var myMap = function() {
  this.collection = {}; // Map elements
  this.count = 0; // Map size
}
```
2. Implement the `size` method
```javascript
...
this.size = function() {
return this.count;
};
...
```
3. Implement the `set` method
```javascript
...
this.set = function(key, value) {
this.collection[key] = value;
this.count++;
};
...
```
4. Implement the `has` method
```javascript
...
this.has = function(key) {
return (key in this.collection);
};
...
```
5. Implement the `get` method
```javascript
...
this.get = function(key) {
return (key in this.collection) ? this.collection[key] : null;
};
...
```
6. Implement the `delete` method
```javascript
...
this.delete = function(key) {
if (key in this.collection) {
delete this.collection[key];
this.count--;
}
};
...
```
7. Implement the `values` method
```javascript
...
this.values = function() {
let result = new Array();
for (let key of Object.keys(this.collection)) {
result.push(this.collection[key]);
};
return (result.length > 0) ? result : null;
};
...
```
8. Implement the `clear` method
```javascript
...
this.clear = function() {
this.collection = {};
this.count = 0;
};
...
```
9. And we're done. The complete code is as follows:
```javascript
let myMap = function() {
this.collection = {};
this.count = 0;
this.size = function() {
return this.count;
};
this.set = function(key, value) {
this.collection[key] = value;
this.count++;
};
this.has = function(key) {
return (key in this.collection);
};
this.get = function(key) {
return (key in this.collection) ? this.collection[key] : null;
};
this.delete = function(key) {
if (key in this.collection) {
delete this.collection[key];
this.count--;
}
};
this.values = function() {
let result = new Array();
for (let key of Object.keys(this.collection)) {
result.push(this.collection[key]);
};
return (result.length > 0) ? result : null;
};
this.clear = function() {
this.collection = {};
this.count = 0;
};
};
```
## Usage example
```javascript
let map = new myMap();
map.set('arms', 2);
map.set('fingers', 10);
map.set('eyes', 2);
map.set('belly button', 1);
console.log(map.get('fingers'));
console.log(map.size());
console.log(map.values());
// 10
// 4
// [2, 10, 2, 1]
```
| 17.477387 | 152 | 0.628235 | yue_Hant | 0.490776 |
03dfd1c80b13126f1dd33eac063f6e45784e0a64 | 3,408 | md | Markdown | articles/virtual-machines/disk-bursting.md | FrankBoylan92/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/disk-bursting.md | FrankBoylan92/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/disk-bursting.md | FrankBoylan92/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Managed disk bursting
description: Learn about disk bursting for Azure disks and Azure virtual machines.
author: albecker1
ms.author: albecker
ms.date: 03/02/2021
ms.topic: conceptual
ms.service: virtual-machines
ms.subservice: disks
ms.custom: references_regions
---
# Managed disk bursting
[!INCLUDE [managed-disks-bursting](../../includes/managed-disks-bursting.md)]
Azure [premium SSDs](disks-types.md#premium-ssd) offer two models of bursting:
- An on-demand bursting model (preview), where the disk bursts whenever its needs exceed its current capacity. This model incurs additional charges anytime the disk bursts. On-demand bursting is only available on disks greater than 512 GiB in size.
- A credit-based model, where the disk will burst only if it has burst credits accumulated in its credit bucket. This model does not incur additional charges when the disk bursts. Credit-based bursting is only available on disks 512 GiB and smaller.
Additionally, the [performance tier of managed disks can be changed](disks-change-performance.md), which could be ideal if your workload would otherwise be running in burst.
| |Credit-based bursting |On-demand bursting |Changing performance tier |
|---------|---------|---------|---------|
| Scenarios|Ideal for short-term scaling (30 minutes or less).|Ideal for short-term scaling (not time restricted).|Ideal if your workload would otherwise continually be running in burst.|
|Cost |Free |Cost is variable, see the [Billing](#billing) section for details. |The cost of each performance tier is fixed, see [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for details. |
|Availability |Only available for premium SSDs 512 GiB and smaller. |Only available for premium SSDs larger than 512 GiB. |Available to all premium SSD sizes. |
|Enablement |Enabled by default on eligible disks. |Must be enabled by user. |User must manually change their tier. |
## Common scenarios
The following scenarios can benefit greatly from bursting:
- **Improve startup times** – With bursting, your instance will start up significantly faster. For example, the default OS disk for premium enabled VMs is the P4 disk, which has a provisioned performance of up to 120 IOPS and 25 MB/s. With bursting, the P4 can go up to 3500 IOPS and 170 MB/s, allowing startup to accelerate by up to 6X.
- **Handle batch jobs** – Some application workloads are cyclical in nature. They require a baseline performance most of the time, and higher performance for short periods of time. An example of this is an accounting program that processes daily transactions that require a small amount of disk traffic. At the end of the month this program would complete reconciling reports that need a much higher amount of disk traffic.
- **Traffic spikes** – Web servers and their applications can experience traffic surges at any time. If your web server is backed by VMs or disks that use bursting, the servers would be better equipped to handle traffic spikes.
[!INCLUDE [managed-disks-bursting](../../includes/managed-disks-bursting-2.md)]
## Next steps
To enable on-demand bursting, see [Enable on-demand bursting](disks-enable-bursting.md).
To learn how to gain insight into your bursting resources, see [Disk bursting metrics](disks-metrics.md).
| 83.121951 | 423 | 0.757042 | eng_Latn | 0.994811 |
03e0c5245d82d56af51662075264c20f5c5d7a61 | 2,370 | md | Markdown | README.md | Kriechi/http2-frame-test-case | 5c67db0d4d68e1fb7d3a241d6e01fc04d981f465 | [
"MIT"
] | 11 | 2015-04-10T15:48:32.000Z | 2020-11-02T03:45:16.000Z | README.md | Kriechi/http2-frame-test-case | 5c67db0d4d68e1fb7d3a241d6e01fc04d981f465 | [
"MIT"
] | 9 | 2015-03-24T14:51:39.000Z | 2018-01-10T16:39:32.000Z | README.md | Kriechi/http2-frame-test-case | 5c67db0d4d68e1fb7d3a241d6e01fc04d981f465 | [
"MIT"
] | 6 | 2015-02-28T16:13:28.000Z | 2020-07-23T09:00:04.000Z | # http2-frame-test-case
Test cases to encode/decode [http2](http://tools.ietf.org/html/draft-ietf-httpbis-http2) frame.
## Test case files
Each directory has test cases of its corresponding frame.
For example, "data" directory has test cases of the DATA frame.
Each JSON file is named #{n}.json, where n is 0-origin (numbering starts at 0).
## JSON Format
Each JSON file has:
- description: general description of implementation.
- wire: encoded wire data in hex string.
- frame:
- length: length property of frame header.
- type: type property of frame header.
- flags: flags property of frame header.
- stream_identifier: stream identifier property of frame header.
- frame_payload: see frame payload section
- error: a list of possible error codes
Each property name is the lower snake_case form (words connected by underscores) of the original name.
### Example: a data frame
```js
{
"error": null,
"wire": "0000140008000000020648656C6C6F2C20776F726C6421486F77647921",
"frame": {
"length": 20,
"frame_payload": {
"data": "Hello, world!",
"padding_length": 6,
"padding": "Howdy!"
},
"flags": 8,
"stream_identifier": 2,
"type": 0
},
"description": "normal data frame"
}
```
## Test algorithm
First, decode a JSON file to data structures in your programming language.
For simplicity, we refer to such data structures just by JSON names, such as
"frame".
To decode "wire", you MUST use default setting values of HTTP2 and an empty dynamic table of HPACK.
### Normal cases
"frame" is not null and "error" is null.
1. Decode "wire" to obtain a decoded frame. Then, compare the decoded frame with "frame".
2. Encode the decoded frame to obtain an encoded frame. Then, compare the encoded frame with "wire".
For the HEADERS frame,
"header_block_fragment" is carefully created just by using the static table.
If you decode "header_block_fragment" in step 1 and encode again in step 2,
you may obtain a different "wire",
since the HPACK encoding algorithm is not unique.
### Error cases
"frame" is null and "error" is not null.
1. Decode "wire" to obtain an error code. Then check if the error code is a member of "error".
## Contribution
- Please send a pull request with #{non-number}.json
- We will merge it by renaming it to #{latest}.json
## License
See the [LICENSE](LICENSE) file.
| 28.214286 | 99 | 0.703376 | eng_Latn | 0.981647 |
03e109e059c492a0331dd33905cb911ec27be407 | 777 | md | Markdown | content/blog/image-resizing/index.md | adamem1/gatsby-blog | 813f3df9def07524935f4387c305f89739b6f8b1 | [
"RSA-MD"
] | null | null | null | content/blog/image-resizing/index.md | adamem1/gatsby-blog | 813f3df9def07524935f4387c305f89739b6f8b1 | [
"RSA-MD"
] | null | null | null | content/blog/image-resizing/index.md | adamem1/gatsby-blog | 813f3df9def07524935f4387c305f89739b6f8b1 | [
"RSA-MD"
] | null | null | null | ---
title: Image Resizing
date: "2021-07-21T22:12:03.284Z"
description: "Working with image resizing"
---
Let's demo some image resizing logic here.
# The car
Here's the original picture of the car.

# Resized car
Here's the resized version of the car.

# AVIF car
Here's the AVIF version of the car.

# AVIF car big
Here's the large AVIF version of the car.
 | 24.28125 | 109 | 0.769627 | eng_Latn | 0.571859 |
03e157133919dc920b663291c99d4fa043f0e741 | 1,990 | md | Markdown | echo-verify-adler32/README.md | pjcon/ral-ceph-tools | ca97e3cea192727d81c924a7bb134e3738c9bc73 | [
"Apache-2.0"
] | null | null | null | echo-verify-adler32/README.md | pjcon/ral-ceph-tools | ca97e3cea192727d81c924a7bb134e3738c9bc73 | [
"Apache-2.0"
] | null | null | null | echo-verify-adler32/README.md | pjcon/ral-ceph-tools | ca97e3cea192727d81c924a7bb134e3738c9bc73 | [
"Apache-2.0"
] | null | null | null | ## Task
Write a script utility that compares the stored adler32 checksum with the actual checksum for a file on any pool given a set of command line arguments. This script cannot be reliant on potentially inaccessible elements like the 'xrd' command.
## Issues
There has been difficulty in the past in creating this script, owing to the binary format of the adler32 xattribute, and existing manual methods have reduced the incentive. Erasure code integrity checking (scrubbing) is able to check for changes in object data while the file objects are stored, but this does not guarantee that the file on echo is equivalent to the version kept by the service user.
A script has been written to check the adler32 checksum of a single large object file on echo, but does not use the striper option since the python library does not support it. The current process turns out to be slow to check large numbers of objects (~20s per GB) using the basic non-striper method, and cannot check files that are split into objects since it cannot piece them together. Striper draws objects from multiple disk drives, so theoretically it should obtain a large file faster in proportion to the number of disk drives providing the data.
Storage is a limiting factor on the machines performing the check which means that a rolling checksum on segments of the streamed file may be the preferred method.
## Suggestions
- Check ral-ceph-tools github repo for the latest version of the current scripts
- Use the python file striper-adler32.py as a starting point, or a pointer.
- Use subprocess to implement the striper functionality (see the commented code in the `get_cluster_object` function)
- Write a short script which accepts a file streamed through stdin and outputs its adler32 checksum (rather than operating on a file on disk); see the sketch after this list. rados can output to stdout by adding a '-' in place of the output filename.
- It might be entirely reasonable to use subprocess commands in place of librados python commands.
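Below is a minimal sketch of the stdin-based suggestion, assuming the object bytes are piped in (for example from `rados ... get <object> -`, or the striper variant via subprocess). The chunk size and hex output format are arbitrary choices for illustration, not project conventions.

```python
# adler32 of a stream: read chunks from stdin and keep a rolling checksum,
# so the full object never has to be stored on local disk.
import sys
import zlib

def adler32_of_stream(stream, chunk_size=4 * 1024 * 1024):
    checksum = zlib.adler32(b'')          # initial adler32 value (1)
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        checksum = zlib.adler32(chunk, checksum)
    return checksum & 0xFFFFFFFF          # force an unsigned 32-bit result

if __name__ == '__main__':
    print(format(adler32_of_stream(sys.stdin.buffer), '08x'))
```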
| 117.058824 | 555 | 0.803518 | eng_Latn | 0.999703 |
03e1b1fcf8eddadf6042b2f3b6b6b1808b003998 | 4,466 | md | Markdown | packages/web-components/docs/markdown-contents.md | guilhermelMoraes/ibm-dotcom-library | 699b93b7dee34021ea39f2aadf212f9513b446bd | [
"Apache-2.0"
] | 50 | 2020-10-20T06:30:51.000Z | 2022-03-21T19:59:37.000Z | packages/web-components/docs/markdown-contents.md | guilhermelMoraes/ibm-dotcom-library | 699b93b7dee34021ea39f2aadf212f9513b446bd | [
"Apache-2.0"
] | 4,721 | 2020-10-09T14:52:53.000Z | 2022-03-31T22:03:39.000Z | packages/web-components/docs/markdown-contents.md | guilhermelMoraes/ibm-dotcom-library | 699b93b7dee34021ea39f2aadf212f9513b446bd | [
"Apache-2.0"
] | 64 | 2020-10-09T18:19:37.000Z | 2022-03-15T22:19:49.000Z | <!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
## Table of contents
- [Markdown contents](#markdown-contents)
- [Using `<dds-content-*-copy>`](#using-dds-content--copy)
- [Rendering markdown on server](#rendering-markdown-on-server)
- [Use raw HTML](#use-raw-html)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Markdown contents
In most cases, `<dds-content-*>` is used with its "copy" content. There are several ways to specify the copy content.
### Using `<dds-content-*-copy>`
`<dds-content-*-copy>` are Web Components that automatically render the given markdown content to HTML. They take either the `content` property (_not_ attribute) or a child text node as the markdown content. The markdown content will be converted to HTML with sanitization.
Given the nature of child text nodes, some extra caution is required when using a child text node as the markdown content:
- After-initialization change of markdown content via child text node is not supported.
- Don't put any extra whitespace, e.g. line feeds, between the start/end tag and the markdown content.
- HTML-escape the content as needed.
### Rendering markdown on server
While `<dds-content-*-copy>` provides an easy way to use markdown for `<dds-content-*>`, it requires a markdown parser and an HTML sanitizer to be downloaded and run in the browser. To reduce the cost of downloading and running them, especially if the target network and device environment is limited, rendering markdown on the server is often helpful.
For example, a Handlebars helper that works with the Carbon for IBM.com [`markdownToHtml` utility](https://github.com/carbon-design-system/carbon-for-ibm-dotcom/blob/v1.12.0/packages/utilities/src/utilities/markdownToHtml/markdownToHtml.js) can be defined to convert markdown content to HTML:
```javascript
const Handlebars = require('handlebars');
const { default: markdownToHtml } = require('@carbon/ibmdotcom-utilities/lib/utilities/markdownToHtml/markdownToHtml');
...
Handlebars.registerHelper('markdown', options => {
return new Handlebars.SafeString(markdownToHtml(options.fn(this)));
});
```
Such a Handlebars helper can be used like this:
```handlebars
<div class="bx--content-item__copy">
{{{{markdown}}}}
Lorem ipsum *dolor* sit amet, consectetur adipiscing elit. Aenean et ultricies est.
Mauris iaculis eget dolor nec hendrerit. Phasellus at elit sollicitudin, sodales
nulla quis, *consequat* libero. Here are
some common categories:
- [list item](https://www.ibm.com)
- list item 1a
1. list item 2
1. list item 2a
{{{{/markdown}}}}
</div>
```
> 💡 Make sure loading the Sass code for the corresponding CSS class for the "copy" content. For example, one for `bx--content-item__copy` class is defined in `@carbon/ibmdotcom-styles/scss/internal/content-item/content-item`.
> 💡 Check our
> [CodeSandbox](https://githubbox.com/carbon-design-system/carbon-for-ibm-dotcom/tree/master/packages/web-components/examples/codesandbox/usage/markdown-handlebars)
> example implementation.
[](https://githubbox.com/carbon-design-system/carbon-for-ibm-dotcom/tree/master/packages/web-components/examples/codesandbox/usage/markdown-handlebars)
### Use raw HTML
Another way to define the "copy" content is using raw HTML:
```html
<div class="bx--content-item__copy">
<p>
Lorem ipsum <em>dolor</em> sit amet, consectetur adipiscing elit. Aenean et ultricies est. Mauris iaculis eget dolor nec
hendrerit. Phasellus at elit sollicitudin, sodales nulla quis, <em>consequat</em> libero. Here are some common categories:
</p>
<ul class="bx--list--unordered">
<li class="bx--list__item">
<a href="https://www.ibm.com" class="bx--link">list item</a>
<ol class="bx--list--ordered">
<li class="bx--list__item">list item 1a</li>
</ol>
</li>
</ul>
<ol class="bx--list--ordered">
<li class="bx--list__item">
list item 2
<ul class="bx--list--unordered">
<li class="bx--list__item">list item 2a</li>
</ul>
</li>
</ol>
</div>
```
> 💡 Make sure loading the Sass code for the corresponding CSS class for the "copy" content. For example, one for `bx--content-item__copy` class is defined in `@carbon/ibmdotcom-styles/scss/internal/content-item/content-item`.
| 45.111111 | 339 | 0.735558 | eng_Latn | 0.802039 |
03e2359ec3dfd947d217d4757a8ba5a506729916 | 6,942 | md | Markdown | articles/communication-services/concepts/call-flows.md | FrankBoylan92/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/communication-services/concepts/call-flows.md | FrankBoylan92/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/communication-services/concepts/call-flows.md | FrankBoylan92/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Call flows in Azure Communication Services
titleSuffix: An Azure Communication Services concept document
description: Learn about call flows in Azure Communication Services.
author: mikben
manager: jken
services: azure-communication-services
ms.author: mikben
ms.date: 09/30/2020
ms.topic: overview
ms.service: azure-communication-services
---
# Call flow basics
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
The section below gives an overview of the call flows in Azure Communication Services. Signaling and media flows depend on the types of calls your users are making. Examples of call types include one-to-one VoIP, one-to-one PSTN, and group calls containing a combination of VoIP and PSTN-connected participants. Review [Call types](./voice-video-calling/about-call-types.md).
## About signaling and media protocols
When you establish a peer-to-peer or group call, two protocols are used behind the scenes - HTTP (REST) for signaling and SRTP for media.
Signaling between the client libraries or between client libraries and Communication Services Signaling Controllers is handled with HTTP REST (TLS). For Real-Time Media Traffic (RTP), the User Datagram Protocol (UDP) is preferred. If the use of UDP is prevented by your firewall, the client library will use the Transmission Control Protocol (TCP) for media.
Let's review the signaling and media protocols in various scenarios.
## Call flow cases
### Case 1: VoIP where a direct connection between two devices is possible
In one-to-one VoIP or video calls, traffic prefers the most direct path. "Direct path" means that if two client libraries can reach each other directly, they'll establish a direct connection. This is usually possible when two client libraries are in the same subnet (for example, in a subnet 192.168.1.0/24) or two when the devices each live in subnets that can see each other (client libraries in subnet 10.10.0.0/16 and 192.168.1.0/24 can reach out each other).
:::image type="content" source="./media/call-flows/about-voice-case-1.png" alt-text="Diagram showing a Direct VOIP call between users and Communication Services.":::
### Case 2: VoIP where a direct connection between devices is not possible, but where connection between NAT devices is possible
If two devices are located in subnets that can't reach each other (for example, Alice works from a coffee shop and Bob works from his home office) but the connection between the NAT devices is possible, the client side client libraries will establish connectivity via NAT devices.
For Alice it will be the NAT of the coffee shop and for Bob it will be the NAT of the home office. Alice's device will send the external address of her NAT and Bob's will do the same. The client libraries learn the external addresses from a STUN (Session Traversal Utilities for NAT) service that Azure Communication Services provides free of charge. The logic that handles the handshake between Alice and Bob is embedded within the Azure Communication Services provided client libraries. (You don't need any additional configuration)
:::image type="content" source="./media/call-flows/about-voice-case-2.png" alt-text="Diagram showing a VOIP call which utilizes a STUN connection.":::
### Case 3: VoIP where neither a direct nor NAT connection is possible
If one or both client devices are behind a symmetric NAT, a separate cloud service to relay the media between the two client libraries is required. This service is called TURN (Traversal Using Relays around NAT) and is also provided by the Communication Services. The Communication Services calling client library automatically uses TURN services based on detected network conditions. Use of Microsoft's TURN service is charged separately.
:::image type="content" source="./media/call-flows/about-voice-case-3.png" alt-text="Diagram showing a VOIP call which utilizes a TURN connection.":::
### Case 4: Group calls with PSTN
Both signaling and media for PSTN calls use the Azure Communication Services telephony resource. This resource is interconnected with other carriers.
PSTN media traffic flows through a component called Media Processor.
:::image type="content" source="./media/call-flows/about-voice-pstn.png" alt-text="Diagram showing a PSTN Group Call with Communication Services.":::
> [!NOTE]
> For those familiar with media processing, our Media Processor is also a Back to Back User Agent, as defined in [RFC 3261 SIP: Session Initiation Protocol](https://tools.ietf.org/html/rfc3261), meaning it can translate codecs when handling calls between Microsoft and Carrier networks. The Azure Communication Services Signaling Controller is Microsoft's implementation of an SIP Proxy per the same RFC.
For group calls, media and signaling always flow via the Azure Communication Services backend. The audio and/or video from all participants is mixed in the Media Processor component. All members of a group call send their audio and/or video streams to the media processor, which returns mixed media streams.
By default, real-time media traffic (RTP) for group calls is carried over the User Datagram Protocol (UDP).
> [!NOTE]
> The Media Processor can act as a Multipoint Control Unit (MCU) or Selective Forwarding Unit (SFU).
:::image type="content" source="./media/call-flows/about-voice-group-calls.png" alt-text="Diagram showing UDP media process flow within Communication Services.":::
If the client library can't use UDP for media due to firewall restrictions, an attempt will be made to use the Transmission Control Protocol (TCP). Note that the Media Processor component requires UDP, so when this happens, the Communication Services TURN service will be added to the group call to translate TCP to UDP. TURN charges will be incurred in this case unless TURN capabilities are manually disabled.
:::image type="content" source="./media/call-flows/about-voice-group-calls-2.png" alt-text="Diagram showing TCP media process flow within Communication Services.":::
### Case 5: Communication Services client library and Microsoft Teams in a scheduled Teams meeting
Signaling flows through the signaling controller. Media flows through the Media Processor. The signaling controller and Media Processor are shared between Communication Services and Microsoft Teams.
:::image type="content" source="./media/call-flows/teams-communication-services-meeting.png" alt-text="Diagram showing Communication Services client library and Teams Client in a scheduled Teams meeting.":::
## Next steps
> [!div class="nextstepaction"]
> [Get started with calling](../quickstarts/voice-video-calling/getting-started-with-calling.md)
The following documents may be interesting to you:
- Learn more about [call types](../concepts/voice-video-calling/about-call-types.md)
- Learn about [Client-server architecture](./client-and-server-architecture.md)
- Learn about [Call flow topologies](./detailed-call-flows.md)
| 74.645161 | 534 | 0.791703 | eng_Latn | 0.9953 |
03e2ff6635cb43a99fe4ebaf067251da6d294b55 | 7,153 | md | Markdown | README.md | bachhoan88/CleanArchitecture | fbfbcfc2cae9317996852281dc765e4c9346bbae | [
"Apache-2.0"
] | 187 | 2018-06-29T07:40:03.000Z | 2022-03-29T09:52:22.000Z | README.md | PibeDx/CleanArchitecture | 10cb85635a7269fe2adff4da85ee905474aa4b97 | [
"Apache-2.0"
] | 6 | 2020-03-14T15:55:42.000Z | 2021-08-05T23:35:02.000Z | README.md | PibeDx/CleanArchitecture | 10cb85635a7269fe2adff4da85ee905474aa4b97 | [
"Apache-2.0"
] | 62 | 2018-06-27T05:59:32.000Z | 2022-02-27T14:38:46.000Z | Android Kotlin Clean Architecture & Components Example
===========================================================
[](https://circleci.com/gh/bachhoan88/CleanArchitecture/tree/androidx)
[](https://codecov.io/github/bachhoan88/CleanArchitecture)
This is a sample app and base codebase that uses Clean Architecture and Architecture Components. It is part of a blog post
I have written about how to architect an Android application using Uncle Bob's Clean Architecture approach.
**NOTE** It is a relatively complex and complete example, so if you are not familiar
with [Architecture Components][arch], you are highly recommended to check other examples
in this repository first.
Introduction
-------------
### Data-Flow

### Work-Flow

### Handler-Error-Flow

#### Domain Layer
- Contains the business models
- Contains the business rules
- Declares repository interfaces (implemented by the data layer)
#### Data Layer
- Implements the repository interfaces
- Executes API requests
- Stores data locally: shared preferences, database, external storage
- Maps data models to domain models
- Contains data services and third-party data services
#### Presentation Layer
- View (Activity/Fragment/Layout): adapts data to the view
- Validates/submits data input from the view via UseCases
### Base Code
The base code is designed around one `Activity` and multiple `Fragment`s, using the `Navigation Component` for UI navigation.
It uses Dagger2 (version 2.23.2) for dependency injection; you can easily switch to `Koin` (I suggest `Dagger` for big projects).
The base code:
- Provides a flow that handles all of the corner cases; you can easily customize them via `CleanException`, which `extends Throwable`
- Adds `Authorization` and `Interceptor` handlers, with `implementations` you can adapt for your project if needed
- Uses `ktlint` with the `kotlin-official` style to check code conventions; you can run `./gradlew ktlint`
- Uses `jacoco` for full unit and instrumented test coverage
- Adds basic `circle-ci` and `gitlab-ci` configurations with some workflows
- Reports bugs to `Crashlytics` via `Timber.e`
### Building
Works with Android Studio 3.2 and above.
### Unit Test
You can easily reach up to 70% unit-test line coverage (LOC) if you focus your tests on:
- Data: API service, Local (database, share preferences), RepositoryImpl, Model Mapper
- Domain: UseCases, Repository, Exception handlers
- Presentation: ViewModel, Model Mapper
### Libraries used
--------------
* [Foundation][0] - Components for core system capabilities, Kotlin extensions and support for
multidex and automated testing.
* [AppCompat][1] - Degrade gracefully on older versions of Android.
* [Android KTX][2] - Write more concise, idiomatic Kotlin code.
* [Test][4] - An Android testing framework for unit and runtime UI tests.
* [Architecture][10] - A collection of libraries that help you design robust, testable, and
maintainable apps. Start with classes for managing your UI component lifecycle and handling data
persistence.
* [Data Binding][11] - Declaratively bind observable data to UI elements.
* [Lifecycles][12] - Create a UI that automatically responds to lifecycle events.
* [LiveData][13] - Build data objects that notify views when the underlying database changes.
* [Navigation][14] - Handle everything needed for in-app navigation.
* [Room][16] - Access your app's SQLite database with in-app objects and compile-time checks.
* [ViewModel][17] - Store UI-related data that isn't destroyed on app rotations. Easily schedule
asynchronous tasks for optimal execution.
* [WorkManager][18] - Manage your Android background jobs.
* [UI][30] - Details on why and how to use UI Components in your apps - together or separate
* [Animations & Transitions][31] - Move widgets and transition between screens.
* [Fragment][34] - A basic unit of composable UI.
* [Layout][35] - Lay out widgets using different algorithms.
* Third party
* [Glide][90] for image loading
* [Kotlin Coroutines][91] for managing background threads with simplified code and reducing needs for callbacks
* [ReactiveX][92] library for composing asynchronous and event-based programs by using observable sequences.
* [Dagger2][93] for dependencies injection
* [Retrofit][94] Type-safe HTTP client for Android
* [EasyPermission][95] is a wrapper library to simplify basic system permissions logic when targeting Android M or higher.
[0]: https://developer.android.com/jetpack/components
[1]: https://developer.android.com/topic/libraries/support-library/packages#v7-appcompat
[2]: https://developer.android.com/kotlin/ktx
[4]: https://developer.android.com/training/testing/
[10]: https://developer.android.com/jetpack/arch/
[11]: https://developer.android.com/topic/libraries/data-binding/
[12]: https://developer.android.com/topic/libraries/architecture/lifecycle
[13]: https://developer.android.com/topic/libraries/architecture/livedata
[14]: https://developer.android.com/topic/libraries/architecture/navigation/
[16]: https://developer.android.com/topic/libraries/architecture/room
[17]: https://developer.android.com/topic/libraries/architecture/viewmodel
[18]: https://developer.android.com/topic/libraries/architecture/workmanager
[30]: https://developer.android.com/guide/topics/ui
[31]: https://developer.android.com/training/animation/
[34]: https://developer.android.com/guide/components/fragments
[35]: https://developer.android.com/guide/topics/ui/declaring-layout
[90]: https://bumptech.github.io/glide/
[91]: https://kotlinlang.org/docs/reference/coroutines-overview.html
[92]: https://github.com/ReactiveX
[93]: https://github.com/google/dagger
[94]: https://github.com/square/retrofit
[95]: https://github.com/googlesamples/easypermissions
Upcoming features
-----------------
- Build a layer for library `aar` artifacts
- Add design support for the whole application: `styles`, `fonts`, `theme`
- Interested in seeing a particular feature of the Architecture or Base Code implemented in this
app? Please open a new [issue](https://github.com/bachhoan88/CleanArchitecture/issues).
License
--------
Copyright 2017 The Android Open Source Project, Inc.
Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for
additional information regarding copyright ownership. The ASF licenses this
file to you under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
| 49.673611 | 166 | 0.757584 | eng_Latn | 0.867137 |
03e321aa673b1816cd36f0c65196e5f4435538a7 | 1,196 | md | Markdown | README.md | kebiro/pasture | ae2d579414013583a1a1f23401dd6720622462b0 | [
"Apache-2.0"
] | null | null | null | README.md | kebiro/pasture | ae2d579414013583a1a1f23401dd6720622462b0 | [
"Apache-2.0"
] | null | null | null | README.md | kebiro/pasture | ae2d579414013583a1a1f23401dd6720622462b0 | [
"Apache-2.0"
] | null | null | null | # pasture
A Rust library for working with point cloud data. It features:
- Fine-grained support for arbitrary point attributes, similar to [PDAL](https://pdal.io/), but with added type safety
- A very flexible memory model, natively supporting both Array-of-Structs (AoS) and Struct-of-Arrays (SoA) memory layouts
- Support for reading and writing various point cloud formats with the `pasture-io` crate
- A growing set of algorithms with the `pasture-algorithms` crate
To this end, `pasture` chooses flexibility over simplicity. If you are looking for something small and simple, for example to work with LAS files, try a crate like [`las`](https://crates.io/crates/las). If you are planning to implement high-performance tools and services that will work with very large point cloud data, `pasture` is what you are looking for!
# Usage
Add this to your `Cargo.toml`:
```
[dependencies]
pasture-core = "0.1.0"
# You probably also want I/O support
pasture-io = "0.1.0"
```
# Development
`pasture` is in the early stages of development and is not yet stable.
# License
`pasture` is distributed under the terms of the Apache License (Version 2.0). See [LICENSE](LICENSE) for details. | 44.296296 | 359 | 0.754181 | eng_Latn | 0.997689 |
03e3553717253c9de810f09777d1bc21e681fc35 | 2,146 | md | Markdown | _posts/2020-03-01-turning_grumble_into_actions.md | pgum/blogtest | a3e3341abecf36f118aa205ca95502760bed82b8 | [
"MIT"
] | null | null | null | _posts/2020-03-01-turning_grumble_into_actions.md | pgum/blogtest | a3e3341abecf36f118aa205ca95502760bed82b8 | [
"MIT"
] | 2 | 2021-09-28T01:31:09.000Z | 2022-02-26T06:46:00.000Z | _posts/2020-03-01-turning_grumble_into_actions.md | pgum/blogtest | a3e3341abecf36f118aa205ca95502760bed82b8 | [
"MIT"
] | null | null | null | ---
layout: post
title: Turning grumble into actions
subtitle: This is where I will tell my friends way too much about me
image: /img/daoudi-aissa-absT1BNRDAI-unsplash.jpg
category: scenarios
tags: retro
duration: 60
outcome: improvement
---
## Goals
* The group understands the negative effects of grumbling
* The group can identify the underlying needs in its current work
* The group assigns working groups or individuals to the most important actions identified
## Time and place
* Around 1 hour long
* Room to sit in circle
## Participants
* Group of interest
* Moderator
## Prerequisites
* Group members know each other fairly well
* Group members understand requirements for empirical process
## Room and items needed
* Room with round table preferably
* Flipchart & markers
* HD display or projector
## Duration
* 60 mins
### Outcome
* Education; actions added to the improvement backlog
## Scenario description
![grumpy cat into smart action][grumpy-cat-into-smart-action]
### 10 mins - Set the stage - Explain meeting goals to the Team
* How do we distinguish between being grumpy and giving constructive criticism?
* What are the consequences of being grumpy?
### 15 mins - Explain method
* From materials, display examples from [grumble_to_need_example_file]
* Work on examples until the whole group understands the principles behind the method
### 25 mins - Real life application
* Write up the most notorious grumbles heard in your group
* Sort and possibly group the results into categories, then prioritize
* What need is vocalized by each?
* For the top 2-3, try to identify low-hanging fruit that can be grabbed at low cost
### 10 mins - Define Review Date and next steps
* Define a way to synchronize status information
* Define a follow-up meeting for backlog refinement; add generated items to the appropriate backlogs
* Gather feedback on methodology used
## Helpful information
## Reference
* "Photo by Daoudi Aissa on Unsplash https://unsplash.com/photos/absT1BNRDAI"
[grumble_to_need_example_file]: ./stash/grumble_to_need_example_file.md
[grumpy-cat-into-smart-action]: ../imgs/daoudi-aissa-absT1BNRDAI-unsplash.jpg "Photo by Daoudi Aissa on Unsplash https://unsplash.com/photos/absT1BNRDAI" | 26.493827 | 153 | 0.775396 | eng_Latn | 0.986004 |
03e40ea21fddb647e5ab6e904ac86bca406f5adb | 4,866 | md | Markdown | docs/connect/jdbc/understanding-java-ee-support.md | real-napster/sql-docs.de-de | 0cf3bb09a98541d4ee3fd6e4635b008c94019b6a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/connect/jdbc/understanding-java-ee-support.md | real-napster/sql-docs.de-de | 0cf3bb09a98541d4ee3fd6e4635b008c94019b6a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/connect/jdbc/understanding-java-ee-support.md | real-napster/sql-docs.de-de | 0cf3bb09a98541d4ee3fd6e4635b008c94019b6a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Understanding Java EE support | Microsoft Docs
ms.custom: ''
ms.date: 04/16/2019
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: conceptual
ms.assetid: a9448b80-b7a3-49cf-8bb4-322c73676005
author: MightyPen
ms.author: genemi
manager: craigg
ms.openlocfilehash: 7d7d1867c8c6d9311736124cf74e30b748a9de68
ms.sourcegitcommit: e2d65828faed6f4dfe625749a3b759af9caa7d91
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 04/17/2019
ms.locfileid: "59671226"
---
# <a name="understanding-java-ee-support"></a>Grundlegendes zur Java EE-Unterstützung
[!INCLUDE[Driver_JDBC_Download](../../includes/driver_jdbc_download.md)]
In den folgenden Abschnitten wird dokumentiert, wie [!INCLUDE[jdbcNoVersion](../../includes/jdbcnoversion_md.md)] die optionalen API-Features für die Java-Plattform, Enterprise Edition (Java EE) und JDBC 3.0 unterstützt. Die Quellcodebeispiele in diesem Hilfesystem stellen eine gute Referenz für erste Schritte mit diesen Funktionen dar.
Stellen Sie zunächst sicher, dass die Java-Umgebung (JDK, JRE) das Paket javax.sql einschließt. Dies ist ein erforderliches Paket für alle JDBC-Anwendungen, die die optionale API verwenden. JDK 1.5 und höhere Versionen umfassen dieses Paket bereits, sodass Sie es nicht separat installieren müssen.
## <a name="driver-name"></a>Treibername
Der Treiberklassenname lautet **com.microsoft.sqlserver.jdbc.SQLServerDriver**. Für die JDBC-Treiber 4.1, 4.2 und 6.0 ist der Treiber in einer der folgenden Dateien enthalten: **sqljdbc.jar**, **sqljdbc4.jar**, **sqljdbc41.jar** oder **sqljdbc42.jar**.
JDBC-Treiber 6.2: Der Treiber ist in der Datei **mssql-jdbc-6.2.2.jre7.jar** oder **mssql-jdbc-6.2.2.jre8.jar** enthalten.
JDBC-Treiber 6.4: Der Treiber ist in der Datei **mssql-jdbc-6.4.0.jre7.jar**, **mssql-jdbc-6.4.0.jre8.jar** oder **mssql-jdbc-6.4.0.jre9.jar** enthalten.
JDBC-Treiber 7.0: Der Treiber ist in der Datei **mssql-jdbc-7.0.0.jre8.jar** oder **mssql-jdbc-7.0.0.jre10.jar** enthalten.
JDBC-Treiber 7.2: Der Treiber ist in der Datei **mssql-jdbc-7.2.2.jre8.jar** oder **mssql-jdbc-7.2.2.jre11.jar** enthalten.
Der Klassenname wird immer dann verwendet, wenn Sie den Treiber mit der JDBC-Klasse „DriverManager“ laden. Er wird außerdem verwendet, wenn Sie den Klassennamen des Treibers in einer Treiberkonfiguration angeben müssen. Für das Konfigurieren einer Datenquelle in einem Java EE-Anwendungsserver kann es beispielsweise erforderlich sein, den Treiberklassennamen einzugeben.
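As a quick illustration (a minimal sketch, not part of the original article; the server, database, and credentials are placeholders), the class name comes into play when connecting through the JDBC `DriverManager`:
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DriverNameExample {
    public static void main(String[] args) throws Exception {
        // Explicit loading is only needed on very old JVMs (pre-JDBC 4.0);
        // newer drivers are discovered automatically from the JAR.
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");

        String url = "jdbc:sqlserver://localhost:1433;databaseName=AdventureWorks;user=<user>;password=<password>";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT SUSER_SNAME()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```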
## <a name="data-sources"></a>Projektmappen-Explorer
Der JDBC-Treiber unterstützt Java EE-/JDBC 3.0-Datenquellen. Die JDBC-Treiberklasse [SQLServerXADataSource](../../connect/jdbc/reference/sqlserverxadatasource-class.md) wird von `com.microsoft.sqlserver.jdbc.SQLServerXADataSource` implementiert.
### <a name="datasource-names"></a>Datenquellennamen
Sie können Datenbankverbindungen mithilfe von Datenquellen herstellen. Die mit dem JDBC-Treiber verfügbaren Datenquellen werden in der folgenden Tabelle beschrieben:
|DataSource-Typ|Klassenname und Beschreibung|
|---------------|--------------------------|
|DataSource|`com.microsoft.sqlserver.jdbc.SQLServerDataSource` <br/> <br/> Die Datenquelle ohne Pooling.|
|ConnectionPoolDataSource|`com.microsoft.sqlserver.jdbc.SQLServerConnectionPoolDataSource` <br/> <br/> Die Datenquelle zum Konfigurieren von Verbindungspools für Java EE-Anwendungsserver. Wird normalerweise verwendet, wenn die Anwendung innerhalb eines Java EE-Anwendungsservers ausgeführt wird.|
|XADataSource|`com.microsoft.sqlserver.jdbc.SQLServerXADataSource` <br/> <br/> Die Datenquelle zum Konfigurieren von Java EE-XA-Datenquellen. Wird normalerweise verwendet, wenn die Anwendung innerhalb eines Java EE-Anwendungsservers und eines XA-Transaktionsmanagers ausgeführt wird.|
### <a name="data-source-properties"></a>Datenquelleneigenschaften
Alle Datenquellen unterstützen die Möglichkeit zum Festlegen und Abrufen aller Eigenschaften, die dem Eigenschaftenset des zugrunde liegenden Treibers zugeordnet sind.
Beispiele:
`setServerName("localhost");`
`setDatabaseName("AdventureWorks");`
Im Folgenden wird veranschaulicht, wie eine Anwendung mit einer Datenquelle eine Verbindung herstellt:
```java
//initialize JNDI ..
Context ctx = new InitialContext(System.getProperties());
...
DataSource ds = (DataSource) ctx.lookup("MyDataSource");
Connection c = ds.getConnection("user", "pwd");
```
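For comparison, a data source can also be instantiated and configured directly, without JNDI (a minimal sketch; the connection values are placeholders):
```java
import com.microsoft.sqlserver.jdbc.SQLServerDataSource;
import java.sql.Connection;

public class DataSourceExample {
    public static void main(String[] args) throws Exception {
        // Configure the non-pooling data source directly via its setters.
        SQLServerDataSource ds = new SQLServerDataSource();
        ds.setServerName("localhost");
        ds.setPortNumber(1433);
        ds.setDatabaseName("AdventureWorks");
        ds.setUser("<user>");
        ds.setPassword("<password>");

        try (Connection con = ds.getConnection()) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}
```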
For more information about the data source properties, see [Setting the data source properties](../../connect/jdbc/setting-the-data-source-properties.md).
## <a name="see-also"></a>See also
[Overview of the JDBC driver](../../connect/jdbc/overview-of-the-jdbc-driver.md)
| 60.074074 | 373 | 0.775997 | deu_Latn | 0.943115 |
03e44f93d654ee5df23ddcf840bb089f4e19a024 | 31,541 | md | Markdown | docs/changelog.md | jekoule/zMobile | b4b62f180f98429b88c972fdfb3907c0bf68b760 | [
"Apache-2.0"
] | null | null | null | docs/changelog.md | jekoule/zMobile | b4b62f180f98429b88c972fdfb3907c0bf68b760 | [
"Apache-2.0"
] | 3 | 2021-09-02T06:00:18.000Z | 2022-03-02T17:19:28.000Z | docs/changelog.md | jekoule/zMobile | b4b62f180f98429b88c972fdfb3907c0bf68b760 | [
"Apache-2.0"
] | null | null | null | # Version History
The date on a release generally reflects when the source commit was
tagged and the release build was first posted to GitHub or our alpha
channels on the Play Store and Apple's App Store. The main rollout
to users in general on the app stores is typically a few days later.
#### Version numbering
We number our versions like so:
* Android requires a monotonically increasing integer version ID.
Call that N. We increment it on each release, including those that
only go to alpha or beta.
* Then we use version numbers A.B.N, where N is that same N, and...
* A is incremented for "a major release", whatever that means. (It's
a bit of an arbitrary choice we make when making the release.)
* B is reset to zero on "a major release", and otherwise incremented.
#### "Resolved issues"
The list of "Resolved issues" on each release is intended to be
* comprehensive,
* as a list of issues we believe were resolved by changes in that
release -- a bug fixed, feature implemented, or desired change made.
It doesn't include
* duplicate issues
* issues we determined had been fixed, but don't know when
* issues we closed because we decided not to make the requested
change, or couldn't make use of the bug report
## Unreleased
### Highlights for users
* Animated GIFs now animate, even when shown full-screen.
* When you type a very long message, the input box no longer overflows
the screen.
Plus, like every release, other fixes and improvements for your Zulip
experience.
### Highlights for developers
* Resolved issues, user-facing:
* #3497: animated GIFs in lightbox
* #3551: show in user profile when user is deactivated
* #3760: UI glitch in "create stream" flow
* #3614: keep compose box appropriately sized when message is long
* #3528: drop "Reply" in message action sheet for PM or topic narrow
* Resolved issues, developer-facing:
* #3768: Flow bug affecting `connect`
* #3801: document how to use React DevTools
* #3827: type fixes for upcoming Flow upgrade
* #3783: build failure on macOS
* #3777: build failure on Windows
## 26.20.143 (2020-01-07)
### Highlights for users
* When a topic's name is too long to fit in the UI, you can long-press
the topic to show it in full.
* Links to conversations now work correctly for streams and topics
with names that go beyond ASCII characters.
Plus, like every release, other fixes and improvements for your Zulip
experience.
### Highlights for developers
This is a regular release from the master branch following 26.18.141.
In addition to the changes mentioned here, it includes the changes
that were cherry-picked for 26.19.142.
* New test suite `pirlo` (#3669), which runs an end-to-end smoketest
of an Android release build in the cloud using pirlo.io.
* Improvements to Sentry logging (#3733): instead of interpolating
details of an event into the message string, we now typically use
the Sentry "extras" mechanism to attach the data, and leave the
message string constant. This causes Sentry to keep the events
grouped as a single issue even when the data varies.
* Resolved issues: #3570, #3711, #3715, #3631 (showing long topic
names), #3752, #3739 (decoding non-ASCII in narrow-links)
## (iOS) 26.19.142 (2019-12-11)
### Highlights for users
(iOS-only release.)
Fixes and improvements for your Zulip experience.
### Highlights for developers
This is a cherry-pick release atop 26.17.140, with selected small
changes. It does not include the changes made in 26.18.141.
* Resolved issues: 30018d7d7 (on welcome-help text)
## (Android) 26.18.141 (2019-12-05)
### Highlights for users
(Android-only release.)
Fixes and improvements for your Zulip experience.
### Highlights for developers
* Upgraded Sentry from v0.x to v1.x, take 2.
* Resolved issues: #3585
## 26.17.140 (2019-12-04)
### Highlights for users
* You can now see who left emoji reactions on a message! Just
long-press on the message or reaction.
* If your Zulip server uses SAML authentication, the app now supports
it.
Plus, like every release, other fixes and improvements for your Zulip
experience.
### Highlights for developers
* Resolved issues: #2252, #3670, e438f82b2
## 26.16.139 (2019-11-22)
### Highlights for users
Fixes and improvements for your Zulip experience.
### Highlights for developers
(Some important fixes were backported for a cherry-pick release
26.15.138, and are described there.)
* We tried upgrading Sentry from v0.x to v1.x, but reverted the
upgrade for now. See issue #3585, PR #3676, and commit 57e08f789.
* New convention and lint rule: props types for our React components
are read-only. See 821aa44fd^..760cfa9cf, aka PR #3682.
* New (tiny) test suite: `tools/test deps`, which runs
`yarn-deduplicate`. See 8b155e92b.
* Resolved issues: #3689
## 26.15.138 (2019-11-19)
### Highlights for users
* Fixed an issue that affected notifications if you reset your API key.
Plus, like every release, other fixes and improvements for your Zulip
experience.
### Highlights for developers
This is a bugfix release atop 26.14.137, with small cherry-picked
changes.
* Resolved issues: #3695, 7caa4d08e
## (iOS) 26.14.137 (2019-11-07)
### Highlights for users
(iOS-only release.)
* Fixed an issue affecting certain models of iPad, which caused
messages not to promptly appear.
### Highlights for developers
* Resolved issues: #3657
## 26.13.136 (2019-11-05)
(This release went to prod on Android but on iOS only to beta.)
### Highlights for users
* When the app hasn't been able to reach the server, the
PM-conversations tab now shows cached data like most of the app,
rather than a loading spinner.
Plus, like every release, other fixes and improvements for your Zulip
experience.
### Highlights for developers
* Bumped minimum Android version to Android 5 Lollipop, API 21;
dropped support for Android 4.4 KitKat. (b3eced058)
* Resolved issues: #3602, #3242
## 26.12.135 (2019-10-22)
### Highlights for users
(The last release supporting Android 4.4 KitKat.)
Fixes and improvements for your Zulip experience.
### Highlights for developers
* Bumped targetSdkVersion to 28! Aka Android 9 Pie. (#3563)
* Started importing certain code directly from the webapp: see #3638
and its companion zulip/zulip-mobile#13253. (This also fixed some
quirks in our sending of typing-status events.)
* Resolved issues: #3563 (modulo beta feedback).
## 26.11.134 (2019-10-17)
### Highlights for users
* If you use multiple Zulip accounts, the app now switches to the
right one when you open a notification.
Plus, like every release, other fixes and improvements for your Zulip
experience.
### Highlights for developers
* Resolved issues: much of #2295 (via PR #3648); and issue described
in PR #3084.
## (Android) 26.10.133 (2019-10-14)
### Highlights for users
(Android-only release.)
Fixes and improvements for your Zulip experience.
### Highlights for developers
* In f65b50c85 (#3644), fixed an issue affecting the message list on
very old Chrome versions. (Found on Android K, L, and M on the
small fraction of devices where the WebView implementation hasn't
been getting updated.)
## 26.9.132 (2019-10-10)
### Highlights for users
* (Android) Fixed issue where opening a notification wouldn't go to
the specific conversation if the app was already running in the
background.
* Fixed issue where we didn't set your availability to "active" until
one minute after launching the app.
Plus, like every release, other fixes and improvements for your Zulip
experience.
### Highlights for developers
* Resolved issues: #3582, #3590, #2902
## 26.8.131 (2019-09-25)
### Highlights for users
* Fixed issue where search results would be based on an incomplete
version of your query.
Plus, like every release, many other improvements for your Zulip
experience.
### Highlights for developers
* Resolved issues: #3591, #3592, #2209, #3058
* Started sending typing "stop" events when message sent.
## 26.7.130 (2019-08-27)
### Highlights for users
Bugfixes and other improvements for your Zulip experience.
### Highlights for developers
* Reverted the client-side fix for #3594; it's now fixed on the server
side, and this keeps us compatible with servers running Zulip
versions from before the original change.
* Resolved issues: #3369, #3509
## (beta) 26.6.129 (2019-08-26)
### Highlights for users
* Updated Google auth process to match a recent change in the
Zulip server.
### Highlights for developers
* Resolved issues: #3594
## 26.5.128 (2019-08-22)
### Highlights for users (since 26.1.124 / 26.2.125)
* Highlight colors for code blocks now match the webapp and
offer more contrast, especially in night mode.
Plus, like every release, many other improvements for your Zulip
experience.
### Highlights for developers
* Logging to the device log (via `console`) is now enabled in release
builds as well as debug.
* Resolved issues: d6f497bd6
## (beta) 26.4.127 (2019-08-21)
### Highlights for users (since 26.1.124 / 26.2.125)
* Highlight colors for code blocks now match the webapp and
offer more contrast, especially in night mode.
Plus, like every release, many other improvements for your Zulip
experience.
### Highlights for developers
* Logging added on connection failure at RealmScreen.
* Resolved issues: #3568, #3515, #3524
## (beta) 26.3.126 (2019-08-19)
### Highlights for users
* (iOS) Fixed issue where new users couldn't log in (yikes!)
### Highlights for developers
* Upgraded to react-navigation v2 (part of #3573).
* Resolved issues: #3588
## (iOS) 26.2.125 (2019-08-16)
### Highlights for users
* Fixed issue where new users couldn't log in (yikes!)
### Highlights for developers
This release is identical to 25.8.122, except for the version number.
It was released for iOS only, as a stopgap fix for #3588.
Reintroduces two issues (excluding Android-only issues): #2760, #3176.
Also returns us to RN v0.57.
## 26.1.124 (2019-08-09)
### Highlights for users
* Links to other Zulip conversations were broken; now they work again.
* On Android you can now upload any file from your device, in addition
to photos.
### Highlights for developers
* Resolved issues: #2760, #3184
## 26.0.123 (2019-07-26)
### Highlights for users (since 25.6.120)
* Upgrades across most of the third-party software we use to help
make the app.
Plus, like every release, many other improvements for your Zulip
experience.
### Highlights for developers
* Upgraded React Native to v0.59! (#3399)
* Resolved issues: #3399, #3323, #3176, #3574
## (beta) 25.8.122 (2019-07-24)
### Highlights for users
Bugfixes and other improvements for your Zulip experience.
### Highlights for developers
* Upgrades to lots of dependencies, and other changes in preparation
for the RN v0.59 upgrade #3399.
* Dropped iOS 9 support; now iOS 10.3+.
* Resolved issues: #3106, #3565, #3550, #3518
## (beta) 25.7.121 (2019-07-19)
(This was a beta-only, and Android-only, release.)
### Highlights for users
Bugfixes and other improvements for your Zulip experience.
### Details
Resolved issues: #3553, #3539, #3196
## 25.6.120 (2019-06-18)
### Highlights for users
* You can now see when any message was sent by tapping on it. (#3491)
Plus, like every release, many other improvements for your Zulip
experience.
## Details
Resolved issues: #3264, #3526, #3516
## (beta) 25.5.119 (2019-06-13)
### Highlights for users
This is a beta-only release for testing an in-development feature:
* The time each message was sent is now tucked in at the end,
rather than sliding in when tapped. (#3488, replacing #3491)
Like every release, it comes with many other improvements for your
Zulip experience.
## (beta) 25.4.118 (2019-06-11)
(This was a beta-only release.)
### Highlights for users
* You can now see when any message was sent by tapping on it. (#3491)
Plus, like every release, many other improvements for your Zulip
experience.
### Details
Resolved issues: #3375
## 25.3.117 (2019-06-06)
Incremental release following 25.2.116, with several bugfixes.
## 25.2.116 (2019-06-05)
Incremental release following 25.0.114, with several bugfixes.
## (alpha) 25.1.115 (2019-06-04)
Alpha release; no release notes. See Git log for detailed changes.
## 25.0.114 (2019-05-31)
### Highlights for users
Special highlights:
* Just like Zulip on the desktop and web, we now highlight
messages that you're reading for the first time. (#3125)
* Fixed a bug that caused the app to miss some messages. (#3441)
Like every release, this contains many other improvements for
your Zulip experience.
## 24.0.113 (2019-03-29)
### Highlights for users
Many fixes and improvements, including:
* The app now fetches more messages more eagerly when scrolling
through the message list, so you'll less often have to wait.
* Long-pressing a link in a message copies the link.
* The special `:zulip:` emoji.
* A new complete translation for Romanian, and updates for Czech,
Turkish, and Italian.
### Full changes for users
* Touching a user's avatar or name in the app bar above a message list
leads to their profile.
* Fetch more messages sooner, to reduce user waiting. (6ccf3a297)
* Adjusted text on "Switch account" and "Log out" buttons.
* The special `:zulip:` emoji is now supported. (#2375)
* User status / "unavailable" feature more fully supported. (#3417)
* Azure AD authentication backend supported, when enabled on server. (#3227)
* On iOS when displaying the message list, switched a major
system-provided component ("WebView") to a newer version offering
performance and stability improvements. (#3296)
* Complete translation into Romanian from scratch; translation updates
for Czech, Turkish, and Italian.
* Fixed a bug on iOS where long-pressing something in the message list
could act like a normal press after the long-press. (#3429)
* When a message contains a link, you can now copy the link by
long-pressing it. (#3385)
### Highlights for developers
* The Android app now supports Kotlin! In fact we're migrating to
Kotlin -- new code will be in Kotlin. See
[doc](architecture/android.md#kotlin-and-java).
* The Android app now has unit tests! Just a few so far -- but now
that we have a model to follow, we'll write them for other code
as appropriate. See [doc](howto/testing.md#unit-tests-android).
* We've begun using a "crunchy shell, soft center" pattern in handling
data from the server. This means all parsing of messy
data-from-the-network happens at the edge (the "crunchy shell") --
and constructs new, clean data structures with an exactly known
format. Then the rest of the app can be a "soft center", with many
fewer boring checks, so the real application logic it's expressing
is easier to read and the code is less prone to bugs.
So far this is demonstrated in parsing of notification data
(i.e. FCM messages) in our Android code, in [FcmMessage.kt];
a minimal sketch of the pattern follows below.
See also discussion in the commit message of f85d3250f.
The same pattern works great in JS too, and we may gradually
also move to it there.
[FcmMessage.kt]: ../android/app/src/main/java/com/zulipmobile/notifications/FcmMessage.kt
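To make the pattern concrete, here's a minimal sketch in Kotlin (illustrative only; the real parsing lives in [FcmMessage.kt], and the names below are simplified for the example):
```kotlin
// Crunchy shell: all validation of the messy map-from-the-network happens
// here, at the edge, and either throws or produces a clean, typed value.
class FcmMessageParseException(message: String) : RuntimeException(message)

data class ExampleFcmMessage(val senderId: Int, val content: String) {
    companion object {
        fun fromFcmData(data: Map<String, String>): ExampleFcmMessage {
            val senderId = data["sender_id"]?.toIntOrNull()
                ?: throw FcmMessageParseException("missing or malformed sender_id")
            val content = data["content"]
                ?: throw FcmMessageParseException("missing content")
            return ExampleFcmMessage(senderId, content)
        }
    }
}

// Soft center: code past this point can rely on the types and skip the
// boring presence/format checks entirely.
```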
* We've begun to put small single-use helper React components in the
same file where they're used, and in general to put several React
components in a file when that makes the code clearer. Disabled the
lint rule that would complain about that. Discussion in cb418f134,
examples in 7f7620811 and parent.
### Other changes for developers
* Types for the server API significantly reorganized: moved
initial-data types in from app's `types.js`, and separated out model
types and generic protocol/transport types in their own files.
(12bc3e801^..958bc2b7e)
* Types for our Redux state separated out and organized.
(cc945867a^..4bc77bdab)
* `Touchable` revised to more effectively cover up a subtle,
regrettable difference between upstream RN's interfaces for the
implementations we use on Android vs. iOS. (ec32af1a4, f44114b2b)
* Fixed most remaining violations of Flow's `strict-local`; just 8
files remain without it. (dd03939cb^..d298669ef)
## 23.3.112 (2019-03-04)
Small Android-only bugfix release following 23.2.111.
* Fixed regression in 23.2.111 that broke autocomplete on Android.
## 23.2.111 (2019-02-28)
### Highlights for users (since 22.1.108)
Many fixes and improvements, including:
* Support for setting yourself as away/unavailable, or setting
a status message.
* Fixed several issues in message compose and autocomplete.
* Fixed several issues in sending messages under bad network
conditions.
* Translation updates for Portuguese, Italian, Hindi, Turkish, French,
German, and Czech.
### Full changes for users
* Support for the new "availability" or "user status" feature (#3344;
7d16af845^..f37856207, 130fde9fd^..7bbd09896)
* Distinct nav icons "inbox" and "world" for the unreads and
all-messages screens, rather than both "home". (#3232)
* Fixed issue causing stuttering animation on lightbox. (#3334)
* Fixed background color below compose box on notched
displays. (#3329)
* Fixed color of user-group icon in @-mention autocomplete in dark
mode. (#3366)
* Support batched remove-notification events, on Android. (#3343)
* Translation updates for Turkish. (a6b548999)
### Full changes for developers
* Improved documentation for developing against a dev server.
(e62f84f2d)
* Small improvements to Git documentation. (f018461d4)
* Almost all selectors are now annotated with types. (#3360, #3364)
* Fixed ineffective caching in many selectors. (#3015;
2e898e745^..414e48cc6)
* New script `tools/ios` to build for iOS and upload to the App Store,
entirely from the command line. (38f8b5da1)
* New, more streamlined and secure workflow for signing Android
release builds. (06b53639b^..23a3c705b)
## (beta) 23.1.110 (2019-02-20)
This was a beta-only release.
### Highlights for users
Many fixes and improvements, including:
* Fixed several issues in message compose and autocomplete.
* Fixed several issues in sending messages under bad network
conditions.
* Translation updates for Portuguese, Italian, Hindi, French,
German, and Czech.
### Detailed changes
A terse and incomplete list:
* Numerous type improvements: actions, events, strict-local
* Fixed #3274, lightbox action sheet
* Fixed #3259, outbox reordering
* Fixed #3120 by retrying outbox
* Reducer refactor
* Fixed #3280, iOS layout at top
* Android build updates
* Compose box simpler, and fixed some latency
* Fixed double autocomplete popups
* Fixed #3295 in compose box
* Make WebViews debuggable
* Buffer thunk actions
* Fixed #2128, spamming server with notif signups
* android notif: Completely cut out wix library
* android notif: Upgrade to FCM from GCM
* Fixed #3338, by using server's `found_newest`/`found_oldest`
* Fixed caching in some selectors (#3015)
* Fixed some inefficient data structures (#3339)
* Fixed #3289, `@`-autocomplete following newline
* Cleaned up CI in several ways
* Fixed #2693, emoji cut off at bottom
* Translation updates for Portuguese, Italian, Hindi, French, German, and Czech
## (alpha) 23.0.109 (2019-02-20)
This was an alpha-only release, followed closely by 23.1.110.
See above for details.
## 22.1.108 (2019-01-15)
* Fixed regression in 22.0.107: launching the app from a notification
would lead to a "No messages" screen. (#3284, 5d1b5b0d8)
## 22.0.107 (2019-01-09)
### Highlights for users
Many fixes and improvements, including:
* Fixed bug: a successfully-sent message would stick around as a
zombie, with "sending" animation.
* Evaded bug in React Native: the message list and nav bar sometimes
failed to display.
* Redesigned language-settings screen uses each language's own name,
drops flag images, and has search.
* Translation updates in Korean, Hindi, Ukrainian, and Chinese.
### Full changes for users
* Fixed bug: a successfully-sent message would stick around as a
zombie, with "sending" animation. (#3203)
* Evaded bug in React Native: the message list and nav bar sometimes
failed to display. (#3089)
* Redesigned language-settings screen uses each language's own name,
drops flag images, and has search. (#2611, #3231)
* Don't (attempt to) stop notifications on switching accounts.
(23e01e850)
* Fix broken layout on account details screen. (#3228)
* Paint "safe area" with appropriate background color. (#3236)
* Translation updates in Korean, Hindi, Ukrainian, and Chinese.
(7cc9950c6, 6b4ce281c)
* Keep presence info up to date. (#3207)
### Changes for developers
#### Highlights of important updates to know
All active developers will benefit from knowing about these. More
details on each in subsections below.
* Major typing upgrades, including:
* Exact object types -- use them in most cases.
Discussion in 61d2e3426.
* Intersection types -- probably never use them.
Discussion in ff515bc9d and 124a2f39a.
* Read-only arrays -- use them in most cases.
Discussion in 4c3aaa0b1.
* New patterns for getting styles: static where possible, and
otherwise using new React context API instead of legacy one.
All new code should follow. Examples in a2bfcb41b, 51dd1b3b2,
f6ddc2dba.
* The type `Account` is no longer the same as `Auth`.
In either case, `Identity` is preferred where it suffices.
Changed in 5738ccb6f, as part of notifications changes.
* `getAuth` and other account-related selectors no longer return
malformed data. Some throw; others explicitly can return
`undefined`. Interfaces in jsdoc in `accountsSelectors.js`;
discussion in 33a4df218.
* We no longer lie to Redux through `areStatesEqual`! See #3163.
* Automated refactoring is pretty great! Discussion in e566058bf of
one approach. Lower-tech approaches already helped powerfully for
migrating to exact types, and to new `styles` API.
#### Workflow improvements
* Experimented with automated refactoring: an AST-based tool
`jscodeshift`, and lower-tech Perl one-liners. (`jscodeshift`
discussed in e566058bf, used in 47365203f. Perl one-liners on
several occasions; see `git log --grep perl`.)
* `tools/test` accepts `--diff COMMIT`: run only on files changed
since `COMMIT` (vs. default of files changed in current branch.)
(1fe380e1a)
* Reactotron disabled by default, because it broke basic app
functionality. :'-( (170ed2a32, 598386524)
* New script `tools/changelog` streamlines some steps of making a
release. (593d38d06^..9dfb52e24)
#### Architecture, interface, and quality improvements
* Most object types are now exact. Let's do more of that.
(Discussion in 61d2e3426; additional changes in
a15c00e1a^..b9b48657f, 703739338, e5e57abe3^..9c1898242)
* Intersection types nearly all replaced with object spread.
(Discussion in ff515bc9d and 124a2f39a; additional changes in
eb3783b1a^..47365203f)
* New patterns for getting styles: static where possible, and
otherwise using new React context API instead of legacy one. Most
existing code migrated; all new code should follow. (examples in
a2bfcb41b, 51dd1b3b2, f6ddc2dba; fuller changes in
112f99be9^..8dad2d191, 1f71edad9^..a4e0f23b3)
* Major parts of notifications code rewritten, others refactored;
the wix `react-native-notifications` library reduced to a small
role. (Context in #2877. Changes in 410041dfa^..2ed116267,
dcbe2ac86^..d6454eb50, 034e25be8^..3a2076e0f, f1eae82d8^..233d68c40)
* Rewrote `accountsSelectors.js`. Now `getAuth` can only return a
real, authentication-bearing value. (Discussion in 33a4df218;
changes in 3706965d3^..614f56bd2, f1eae82d8)
* Removed the `connectPreserveOnBackOption` hack, where we told lies
to Redux via `areStatesEqual`. (#3163, da6c43d4b^..cd7b25757)
* Server API bindings describe more routes (even that the app doesn't
use); route bindings have a more uniform signature, and link to API
docs. (1acf7d96a^..8170045d8, 0af4af22b^..6becc6e91
* We subscribe to all server events with our queue.
(d8b36412c^..6c7fffc76)
* Logic fixes in Android notification UI code for sound and vibration;
no visible changes yet. (125dc0806^..458ef8832)
* Don't run old migrations on first install. (863bca711)
* Don't use `console.warn`. (21f64aad7)
* More read-only array types. (4c3aaa0b1)
* Translated-message files moved out of `src/`, to `static`, to avoid
spamming grep results. (1fc26a512)
* Upgraded RN to v0.57.8, from v0.57.1. (c03c85684^..ca759b106,
329dd67f0)
* New script `tools/upgrade` to help systematize upgrading
dependencies. (b64ce0023^..eb130c631)
## 21.2.106 (2018-12-12)
### Highlights for users (since 20.0.103)
Many fixes and improvements, including:
* Full support for custom emoji, including in composing messages and
in reactions.
* Fetch updates much sooner when reopened after several minutes idle.
* Fixed bug: a message view seen shortly after starting the app could
show "No messages".
* Fixed bug: uploading an image while viewing a stream would go to the
wrong topic.
* Fixed bug: a draft message typed just after starting the app was
lost.
* Complete translations for Italian and Korean.
### Full changes for users (since 21.1.105)
* Fixed a regression in 21.0.104: the autocomplete popup would sometimes
not respond when touched. (#3209)
## (beta) 21.1.105 (2018-12-11)
This was a beta version that did not become a production release;
see above.
### Full changes for users (since 21.0.104)
* Fixed issue where a message view seen shortly after starting the app
could show "No messages". (#3162)
* Fixed issue where uploading an image while viewing a stream would go
to the wrong topic. (#3130)
* Fixed a regression in 21.0.104: the password input for logging into
a server was rendered in a broken way, looking empty. (#3182)
### Full changes for developers (since 21.0.104)
#### Workflow improvements
* `tools/test` accepts a `--fix` option. (177d3eaa9)
#### Architecture, interface, and quality improvements
* New internal API `withGetText` for acquiring a handy
string-translating function, to use in any part of the app that
isn't a React component. (#2812; c22dfee9b^..9eaa05c27)
* New experimental internal API for the (server) API bindings:
`import api from ...`, then `api.sendMessage(...)` etc.
(63ae59808^..acb979cf5)
* We no longer write `props: Props`, or where applicable
`state: State`, at the top of each React component; the type
arguments to `PureComponent` or `Component` express that already.
(7e3becfba, c5df77962)
* A good swath of our uses of `any` and `Object` are replaced with
real types, and 20 more files are marked strict-local; 60 to go.
(9a0df7416^..60f14ed83)
## (beta) 21.0.104 (2018-12-05)
This was a beta version that did not become a production release;
see the regression fix above.
### Full changes for users
* Added full support for custom emoji ("realm emoji"), including in
composing messages and in reactions. (#2129, #2846)
* The app now fetches updates much sooner when reopened after several
minutes idle. (#3190)
* Fixed issue where a draft message typed just after starting the app
was lost. (#2861)
* Complete translations for Italian and Korean. (62c8d92d8)
* Fixed missing line that made switching to Indonesian language not
work. (d92329bb4)
* Messages pending send can now be deleted in long-press menu, like
other messages. (#3189)
* Force-upgrade screen provides helpful App Store or Play Store
deep-link. (#3158)
* Fixed handling of old reactions with emoji that have changed
name. (#3169)
* Fixed misrendering of "keypad" emoji like `:zero:`. (#3129)
* Group PM conversations now show combined avatars with rounded
corners, like individual avatars. (#3167)
* Fixed bugs causing top bar to sometimes be white instead of
stream-colored. (#2797, #3139)
* Long-pressing a recipient bar now offers "Unmute topic" when
appropriate. (8b60314e0 / #3156)
* Alert words are now highlighted in the message list. (#3082)
* Fixed fetching of explicit avatars (`!avatar(...)`) in messages. (#3047)
* Overflow menu in lightbox is now properly aligned. (#3024)
* Send button has larger touch target. (#2945)
* Error banners in message list show as red, rather than gray.
* Fixed oversizing of images in Dropbox inline previews. (#3136)
* Various improvements across the app for latency and performance.
### Full changes for developers
#### Workflow improvements
* Tests and linters run fast by default (<5s on a fast desktop for
small changes, <1s for no changes), by running only on files changed
in the current branch. (977596d9e^..bd24bd1be)
* Spell-checker results are now pure warnings, free to ignore. (ff7bc2992)
* Configuration for Reactotron, and expanded developer documentation
on debugging. (#3109, 0e5d03631^..59967fc23)
* One-step release-mode Android builds without signing keys or
Sentry. (#2883; 8d55447be^..ee40b3c7b)
* Detailed step-by-step instructions for setting up dev environment on
WSL. (#3193)
#### Architecture, interface, and quality improvements
* Extensive refactoring of the message list and rendering to
HTML. (#3156, #3170)
* New `caseNarrow` abstraction for working with narrow objects.
(fa6134aa6^..e9fe1e801)
* Explain `Auth` vs. `Account` types, and introduce distinct
`Identity`. (f5a2603a4^..28b1177d3)
* Applied `@flow strict-local` to most files and `@flow strict` to
many files, fixing newly-exposed type issues. (#3164, 6efa7980c,
2a96ede50, fa1b8a85c; 5ec1d3f9d^..597c51f6e; 5a2d49f85^..da5d519bf)
* Began to use more Flow "exact types". (01003e619, 24211fb55, others)
* Flow types on many more areas of code.
* Enable ESLint in most places where it was disabled, fixing issues.
(ddd51e5eb^..a533fa8d8)
* Scripts run on Bash, and are moved out of package.json to their own
files. (6c25beeb0, 3119ec697, 8d3e8ade5^..4d58c11d8)
## 20.0.103 (2018-11-12)
### Highlights
Many fixes and improvements, including:
* Mark messages you see as read, even in a short thread.
* Tapping an emoji reaction works again to add/remove your own.
* Messages you send no longer flicker when they reach the server.
* Translation updates. Complete translations for Polish and
Portuguese, the latter nearly from scratch!
### Full
* Mark messages you see as read, even in a short thread. (#2988)
* Tapping an emoji reaction works again to add/remove your own. (#2784)
* Messages you send no longer flicker when they reach the server. (#2483)
* Translation updates. Complete translations for Polish and
Portuguese, the latter nearly from scratch!
* (iOS) Downloading a shared image works again. (#2618)
* (iOS) Fix multiple bugs affecting autocorrect when typing a message.
(#3052, #3053)
* (iOS) New React Native version 0.57 no longer breaks typing in
Chinese or Japanese. (#2434)
* (Android) New React Native version 0.57 no longer crashes when
typing an astral-plane Unicode character, including post-2009
emoji. (#2787)
* (Android) Fix crash when downloading a file, by requesting needed
permissions. (#3115)
* SSO login was broken. (#3126)
* (Android, infra) Client-side support for removing notifications when
you read the messages elsewhere. (#2634)
* (infra) Updated to React Native v0.57 (from v0.55). (#2789)
## 19.2.102 and earlier
TODO: backfill some of this information from notes in other places.
| 29.896682 | 89 | 0.741638 | eng_Latn | 0.996488 |
03e480dd01396084e234a61cc5e15f5d2d1c6dfc | 3,243 | md | Markdown | content/faq/otherobservatories.md | antaldaniel/dataandlyrics | f96878b84d35cc25c4eb905de7cc16a73e656734 | [
"MIT"
] | null | null | null | content/faq/otherobservatories.md | antaldaniel/dataandlyrics | f96878b84d35cc25c4eb905de7cc16a73e656734 | [
"MIT"
] | null | null | null | content/faq/otherobservatories.md | antaldaniel/dataandlyrics | f96878b84d35cc25c4eb905de7cc16a73e656734 | [
"MIT"
] | null | null | null | +++
title = "Existing Observatories"
date = 2020-07-10T10:00:00
lastmod = 2020-09-01T07:10:00
draft = false
toc = true
type = "docs" # Do not modify.
# Add menu entry to sidebar.
linktitle = "Exsisting Observatories"
[menu.reproducible]
parent = "Observatories"
weight = 13
+++
## Existing observatories - independent of us
Observatories are created to permanently collect data and information and to create a knowledge base for research and development, science, and evidence-based policymaking, usually by consortia of business, NGO, scientific, and public bodies.
We are aiming to create similar observatories, but we are in no way affiliated with or connected to the following existing observatories, which we see as role models. Our mission is to serve similar observatories with research automation, making an observatory's services less costly and more timely, with a higher level of quality control.
* The [European Digital Media Observatory](https://ec.europa.eu/digital-single-market/en/european-digital-media-observatory) was launched on 1 June 2020.
* The [European Alternative Fuels Observatory](https://www.eafo.eu/) is an European Commission funded initiative which provides open and free information, amongst others to support Member States with the implementation of EU Directive 2014/94 on the deployment of alternative fuels infrastructure. The EAFO is maintained by the [EAFO Consortium](https://www.eafo.eu/knowledge-center/consortium).
* The [EU Energy Poverty Observatory](https://www.energypoverty.eu/about/about-observatory) (EPOV) is an initiative by the European Commission to help Member States in their efforts to combat energy poverty. It exists to improve the measuring, monitoring and sharing of knowledge and best practice on energy poverty. The EU Energy Poverty Observatory (EPOV) has been developed by a consortium of 13 organisations, including universities, think tanks, and the business sector.
* The [European Observatory for Clusters and Industrial Change](https://www.clustercollaboration.eu/eu-initiatives/european-cluster-observatory) to help Europe's regions and countries in designing better and more evidence-based cluster policies and initiatives. This platform is funded by the EU programme for the Competitiveness of Enterprises and SMEs (COSME).
* The [European Observatory on Health Systems and Policies](http://www.euro.who.int/en/about-us/partners/observatory)
* The [European Observatory on Homelessness](https://www.feantsaresearch.org/)
* The [European Observatory on Infringements of Intellectual Property](https://euipo.europa.eu/ohimportal/en/web/observatory/home)
* The [Cluster Observatory](http://www.clusterobservatory.eu/) of the Center for Strategy and Competitiveness at the Stockholm School of Economics, which started out originally as the European Cluster Observatory.
* The [European Audiovisual Observatory](https://www.obs.coe.int/en/web/observatoire/) of the Council of Europe works in a more complicated environment, as it was created by EU member states and non-member states before the EU was born with the Maastricht Treaty.
* The [Eastern Partnership Cultural Observatory](http://observatory.culturepartnership.eu/en/page/observatory) has a reach beyond the EU.
| 75.418605 | 475 | 0.798952 | eng_Latn | 0.969885 |
03e4ce5ee813ce8bc48cd29ab7aa9448c24b3d7a | 4,922 | md | Markdown | src/posts/master-node.md | rmorabia/radhika.dev | a5498c58c59116710949944d94c7b9f3a323ca8c | [
"MIT"
] | 17 | 2019-06-11T18:32:40.000Z | 2022-02-05T19:56:35.000Z | src/posts/master-node.md | rmorabia/radhika.dev | a5498c58c59116710949944d94c7b9f3a323ca8c | [
"MIT"
] | 16 | 2020-01-24T23:41:38.000Z | 2022-02-28T03:37:47.000Z | src/posts/master-node.md | rmorabia/radhika.dev | a5498c58c59116710949944d94c7b9f3a323ca8c | [
"MIT"
] | 6 | 2019-08-02T05:54:19.000Z | 2020-03-18T17:38:02.000Z | ---
title: "How I'm Mastering Node"
date: '2020-01-24'
---
Node is weird.
If you've built any significant app with JavaScript, you've used Node. You've used npm or yarn, used `import` syntax and are familiar with `require` syntax.
But... does that really mean you know Node?
I've built [plenty](https://github.com/rmorabia/timeline) [of](https://github.com/rmorabia/highestscores) [apps](https://github.com/rmorabia/conway) with Node. I still don't feel like I understand it, though. I just wrote browser-based JavaScript and didn't open a browser, basically.
I've been trying to find the missing links with Node (and back-end programming in general) and have been going through the extremely good [Node course from Andrew Mead](https://www.udemy.com/course/the-complete-nodejs-developer-course-2/). I highly recommend it if you've never written Node before.
However, I'm at a weird middle ground with my knowledge where I am familiar with a lot of Node concepts; I just haven't put them together in a unified Node-first way yet.
Let's list what I consider myself familiar with:
- fs & the Node API in general
- how to make API endpoints (although not without Express)
- npm/yarn
- command line arguments
- promises & async/await (although I have never used this outside of an AJAX environment)
- event loop / callback queue
Here's what I'm not familiar with:
- streams
- buffers
- how to publish a package on npm and make it importable & usable
- callbacks without the context of AJAX, aka how to make a callback for callback's sake
- how http works in Node???
- testing Node
- connecting to databases???
So, I know a lot of individual concepts; I think I'm missing a lot of the connections, though.
I _could_ get those connections if I sit through the 30 hours of Node that Andrew is presenting in his excellent Udemy course, but I no longer want to. I know too much to feel satisfied with spending more time there. I have to force myself to go through it every day and I don't feel like I'm learning at a fast enough pace given my prior experience.
So, let's not.
I need to figure out a better way to learn this stuff.
This is how I know I'm getting to the level where Udemy isn't as appealing and faster-paced resources like [Egghead](https://egghead.io) & [PluralSight](https://pluralsight.com) are. Give me an overview of the concepts, structure, and vocabulary in less than 5 hours, so I can work on something myself. That might even be direct work tickets.
## But how do we get that list of essential concepts to learn?
Especially if we're skipping large, comprehensive courses like Udemy's from now on.
My big issue with Node is that I don't know what I don't know. I have no idea what streams are and why I need them. So, if specific Googling about Node ([or an Egghead course](https://egghead.io/courses/introduction-to-node-the-fundamentals)) doesn't cover it, I'll still feel like I'm missing something.
Right now, the only answer I have is to ask folks. Twitter is good for this.
Otherwise, I'd say if Twitter didn't exist, I'd just find the top 10 books on the topic and summarize all the commonalities that show up again and again. Don't skip anything even if it sounds obscure.
## Resources
So, here's what I'm using for resources for Node, now that I have a good list of things I need to learn from Twitter and reading around.
- [nodejs.dev](https://nodejs.dev) is a great overview of most essential Node topics without getting into Express or testing.
- [Andrew Mead's Node Course](https://www.udemy.com/course/the-complete-nodejs-developer-course-2/) includes a 125-page PDF that covers the essential lesson of each video.
- [Introduction to Node: The Fundamentals](https://egghead.io/courses/introduction-to-node-the-fundamentals) on Egghead
- [Test Node Backends](https://egghead.io/courses/test-node-js-backends) on Egghead
- [Anthony Alicea's Node Course](https://www.udemy.com/course/understand-nodejs/) on Udemy
- YouTubeing and reading wherever my gaps are
**The most important resource on this list has been Anthony Alicea's Node course.** It's the best course to understand Node deeply.
## Projects
The most important way to make progress with this is to make projects that slowly but surely allow me to practice all of these concepts at least 3x. This was covered before in [How I Learn](https://radhika.dev/how-i-learn/).
I built two capstone projects that cover most of the topics that I felt I was lacking.
Firstly is [rashee](https://npmjs.com/package/rashee), a CLI app that grabs your horoscope for the day. This uses CLI arguments, HTTP, npm, etc..
Secondly is [radhikaisms](https://radhikaisms.herokuapp.com), a tiny end-to-end Node app. This only uses one library (to import the Postgres database), and otherwise builds an API that connects to a database + a front-end to add new stuff to do the database using only Node (and HTML/JS for the front-end). | 63.102564 | 350 | 0.764527 | eng_Latn | 0.996962 |
03e567a438bc83de4ba85711318886ee5fd7e27b | 207 | md | Markdown | src/views/v2docs/apireferences.md | noaione/ihaapi-ts | efff3ed1d2a82291d004d2a5739a9cf2f6c60ade | [
"MIT"
] | 1 | 2021-01-22T16:26:02.000Z | 2021-01-22T16:26:02.000Z | src/views/v2docs/apireferences.md | ihateani-me/ihaapi-ts | 5cbdd3d0289e0a0e04e62b1e75b31353d89f9939 | [
"MIT"
] | 11 | 2020-12-03T11:18:39.000Z | 2021-06-29T10:49:36.000Z | src/views/v2docs/apireferences.md | ihateani-me/ihaapi-ts | 5cbdd3d0289e0a0e04e62b1e75b31353d89f9939 | [
"MIT"
] | 1 | 2020-11-04T07:14:48.000Z | 2020-11-04T07:14:48.000Z | # API References
?> **You can also see it here: [API References](https://api.ihateani.me/v2/gql-docs/api-references)**
[API References](/api-references ':include :type=iframe width=100% height=500px') | 41.4 | 106 | 0.729469 | eng_Latn | 0.768125 |
03e594e105d7f5cb198ac0e409150088a6a91ebc | 1,672 | md | Markdown | README.md | sh1dow3r/ForenWare | 16fd9151ca76337859efaaedb8252523dc368ac2 | [
"MIT"
] | 5 | 2020-10-30T05:20:32.000Z | 2022-01-25T00:50:43.000Z | README.md | sh1dow3r/ForenWare | 16fd9151ca76337859efaaedb8252523dc368ac2 | [
"MIT"
] | null | null | null | README.md | sh1dow3r/ForenWare | 16fd9151ca76337859efaaedb8252523dc368ac2 | [
"MIT"
] | null | null | null | # ForenWare
## Overview
ForenWare is an Ansible-based script built to automate data acquisition (memory and disk) from the VMware vSphere platform.
In VMware, a virtual machine can have a few files, depending on the tasks performed on it:
| file | Description | Usage |
|--------|------------------------------------------------|--------------------------------------------|
| **.vmem** | **Virtual Machine volatile memory file** | Will be used for memory analysis |
| **.vmss** | **Virtual machine suspend file** | Will be used to extract metadata of memory |
| **.vmdk** | **Virtual machine storage disk file** | Will be used for disk analysis |
## How to get started
- Make sure you have ansible and python3 installed
- You will want to run the script `dependencies.sh`
- `bash dependencies.sh`
- Edit the file `vars.yml`
- Run the ansible playbook
- `ansible-playbook site.yml`
## Analysis
After the playbook runs, you will have a new directory, `ForenWare_Data`, or whatever you set the variable to in `vars.yml`.
Inside the `ForenWare_Data` folder, you will have two folders:
- Disks: You can convert the vmdk file to raw using the following command:
- `qemu-img convert -f vmdk -O raw Demo_01_Ubuntu_20.vmdk Demo_01_Ubuntu_20.raw`
- Then use [The Sleuthkit Framework](https://github.com/sleuthkit/sleuthkit)
- Memory: You can use the [Volatility Framework](https://github.com/volatilityfoundation/volatility); a sample analysis session is sketched below
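As a rough example session (a sketch, not part of ForenWare itself; file names follow the demo above, and the Volatility plugins shown assume a Windows guest -- Linux guests need a matching `linux_*` profile):
```bash
# Disk analysis with The Sleuth Kit: partition layout, then a file listing.
mmls Demo_01_Ubuntu_20.raw                 # print the partition table
fls -o 2048 Demo_01_Ubuntu_20.raw          # list files; 2048 = start sector reported by mmls

# Memory analysis with Volatility 2.
python vol.py -f Demo_01_Ubuntu_20.vmem imageinfo                      # suggest profiles (Windows images)
python vol.py -f Demo_01_Ubuntu_20.vmem --profile=<Profile> pslist     # list processes
```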
## Demo
[You can watch a demonstration of how the tool works here](https://www.youtube.com/watch?v=SsAYqglwGvo&t=1215s)
| 45.189189 | 120 | 0.639354 | eng_Latn | 0.941048 |
03e5972351345ea4fc64ad57266e896c3990d1be | 2,804 | md | Markdown | docs/Model/OAuthClient.md | DoDSoftware/platform-client-sdk-php | 73643f1f9154cf41e665dc8dd3f788c546b4e656 | [
"MIT"
] | 1 | 2020-10-30T14:40:53.000Z | 2020-10-30T14:40:53.000Z | docs/Model/OAuthClient.md | DoDSoftware/platform-client-sdk-php | 73643f1f9154cf41e665dc8dd3f788c546b4e656 | [
"MIT"
] | null | null | null | docs/Model/OAuthClient.md | DoDSoftware/platform-client-sdk-php | 73643f1f9154cf41e665dc8dd3f788c546b4e656 | [
"MIT"
] | 3 | 2021-03-28T22:30:06.000Z | 2022-03-01T19:59:13.000Z | # OAuthClient
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**id** | **string** | The globally unique identifier for the object. | [optional]
**name** | **string** | The name of the OAuth client. |
**accessTokenValiditySeconds** | **int** | The number of seconds, between 5mins and 48hrs, until tokens created with this client expire. If this field is omitted, a default of 24 hours will be applied. | [optional]
**description** | **string** | | [optional]
**registeredRedirectUri** | **string[]** | List of allowed callbacks for this client. For example: https://myap.example.com/auth/callback | [optional]
**secret** | **string** | System created secret assigned to this client. Secrets are required for code authorization and client credential grants. | [optional]
**roleIds** | **string[]** | Deprecated. Use roleDivisions instead. | [optional]
**dateCreated** | [**\DateTime**](\DateTime.md) | Date this client was created. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss.SSSZ | [optional]
**dateModified** | [**\DateTime**](\DateTime.md) | Date this client was last modified. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss.SSSZ | [optional]
**createdBy** | [**\PureCloudPlatform\Client\V2\Model\DomainEntityRef**](DomainEntityRef.md) | User that created this client | [optional]
**modifiedBy** | [**\PureCloudPlatform\Client\V2\Model\DomainEntityRef**](DomainEntityRef.md) | User that last modified this client | [optional]
**authorizedGrantType** | **string** | The OAuth Grant/Client type supported by this client. Code Authorization Grant/Client type - Preferred client type where the Client ID and Secret are required to create tokens. Used where the secret can be secured. Implicit grant type - Client ID only is required to create tokens. Used in browser and mobile apps where the secret can not be secured. SAML2-Bearer extension grant type - SAML2 assertion provider for user authentication at the token endpoint. Client Credential grant type - Used to created access tokens that are tied only to the client. |
**scope** | **string[]** | The scope requested by this client. Scopes only apply to clients not using the client_credential grant | [optional]
**roleDivisions** | [**\PureCloudPlatform\Client\V2\Model\RoleDivision[]**](RoleDivision.md) | Set of roles and their corresponding divisions associated with this client. Roles and divisions only apply to clients using the client_credential grant | [optional]
**selfUri** | **string** | The URI for this object | [optional]
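As a rough usage sketch (not part of the generated reference; it assumes the associative-array constructor and getters/setters that swagger-codegen emits for PHP models, and the grant-type value shown is a placeholder):
```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

// Build the model from an array keyed by property name, then adjust
// individual fields through the generated setters.
$client = new \PureCloudPlatform\Client\V2\Model\OAuthClient([
    'name' => 'Example Integration Client',          // required
    'authorizedGrantType' => 'CLIENT-CREDENTIALS',   // required; placeholder value
]);
$client->setDescription('Server-to-server integration');
$client->setAccessTokenValiditySeconds(24 * 60 * 60); // 24 hours
$client->setRegisteredRedirectUri(['https://myapp.example.com/auth/callback']);

echo $client->getName();
```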
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 112.16 | 595 | 0.721469 | eng_Latn | 0.958331 |
03e5a2bd5f63e30865711fbe31b0cf59d264e380 | 19,802 | md | Markdown | windows-apps-src/gaming/gamepad-and-vibration.md | eltociear/windows-uwp.ja-jp | 4689c028134a43061aa9a867df5273b19ac7e949 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/gaming/gamepad-and-vibration.md | eltociear/windows-uwp.ja-jp | 4689c028134a43061aa9a867df5273b19ac7e949 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/gaming/gamepad-and-vibration.md | eltociear/windows-uwp.ja-jp | 4689c028134a43061aa9a867df5273b19ac7e949 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Gamepad and vibration
description: Use the Windows.Gaming.Input gamepad APIs to detect and read gamepads, and to send vibration and impulse commands to them.
ms.assetid: BB03BB8E-255F-4AE8-AC43-1E519CA860FE
ms.date: 09/06/2018
ms.topic: article
keywords: windows 10, uwp, games, gamepad, vibration
ms.localizationpriority: medium
ms.openlocfilehash: e65b22039c381bd333516bd9f98c60bbddb9621c
ms.sourcegitcommit: ca1b5c3ab905ebc6a5b597145a762e2c170a0d1c
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 03/13/2020
ms.locfileid: "79210598"
---
# <a name="gamepad-and-vibration"></a>ゲームパッドと振動
このページでは、[Windows.Gaming.Input.Gamepad][gamepad] とユニバーサル Windows プラットフォーム (UWP) 用の関連 API を使った、Xbox One ゲームパッドを対象にしたプログラミングの基礎について説明します。
ここでは、次の項目について紹介します。
* 接続されているゲームパッドとそのユーザーの一覧を収集する方法
* ゲームパッドが追加または削除されたことを検出する方法
* 1 つ以上のゲームパッドの入力を読み取る方法
* 振動とリアル コマンドを送信する方法
* ゲームパッドが UI ナビゲーションデバイスとして動作するしくみ
## <a name="gamepad-overview"></a>ゲームパッドの概要
Xbox ワイヤレス コントローラーや Xbox ワイヤレス コントローラー S などのゲームパッドは、汎用のゲーム入力デバイスです。 ゲームパッドは Xbox One の標準入力デバイスです。一般的に、キーボードやマウスを好まない Windows のゲーマーが選びます。 ゲームパッドは、Windows 10 および Xbox UWP アプリで [Windows. ゲーム. 入力][] 名前空間によってサポートされています。
Xbox One ゲームパッドには、方向パッド (または D パッド) が搭載されています。**A**、 **B**、 **X**、 **Y**、**ビュー**、および**メニュー**ボタン左および右の thumbsticks、バンパー、およびトリガー合計4つの振動モーター。 どちらのサムスティックも、X 軸と Y 軸のデュアル アナログの読み取り値を提供し、内側に押すとボタンとしても機能します。 各トリガーは、どれだけの距離を示すアナログ読み取りを提供します。
<!-- > [!NOTE]
> The Xbox Elite Wireless Controller is equipped with four additional **Paddle** buttons on its underside. These can be used to provide redundant access to game commands that are difficult to use together (such as the right thumbstick together with any of the **A**, **B**, **X**, or **Y** buttons) or to provide dedicated access to additional commands. -->
> [!NOTE]
> `Windows.Gaming.Input.Gamepad` also supports Xbox 360 gamepads, which have the same control layout as the standard Xbox One gamepad.
### <a name="vibration-and-impulse-triggers"></a>振動とリアル トリガー
Xbox One ゲームパッドには、強弱のゲームパッドの振動を生むための独立した 2 つのモーターと、トリガーごとに鋭い振動を生む 2 つの専用のモーターがあります (この独自の機能のために、Xbox One ゲームパッドのトリガーは_リアル トリガー_と呼ばれています)。
> [!NOTE]
> Xbox 360 ゲームパッドには、_インパルストリガー_が搭載されていません。
詳しくは、「[振動とリアル トリガーの概要](#vibration-and-impulse-triggers-overview)」をご覧ください。
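In the meantime, here's a minimal sketch (not from the original sample set; `mainGamepad` stands in for whichever gamepad you're tracking). Vibration is controlled by writing a `GamepadVibration` structure, whose members take values from 0.0 (off) to 1.0 (full intensity), to the gamepad's `Vibration` property:
```cs
GamepadVibration vibration = new GamepadVibration
{
    LeftMotor = 0.50,    // strong, low-frequency rumble
    RightMotor = 0.25,   // subtle, high-frequency rumble
    LeftTrigger = 0.75,  // impulse trigger motors; no effect on Xbox 360 gamepads
    RightTrigger = 0.75
};
mainGamepad.Vibration = vibration;

// Writing a default (all-zero) structure stops the vibration.
mainGamepad.Vibration = new GamepadVibration();
```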
### <a name="thumbstick-deadzones"></a>サムスティックのデッドゾーン
中央の位置で待機中のサムスティックは、常に安定してニュートラルな X 軸と Y 軸の読み取り値を生成することが理想的ですが、 機械的な力とサムスティックの感度のために、中央の位置での実際の読み取り値は、理想的なニュートラルの値の近似値でしかなく、読み取りごとに異なる可能性があります。 このため、製造の違い、機械的な磨耗、またはその他のゲームパッドの問題を補正するために—無視される、理想的な中心位置の近くにある小さな_deadzone_—使用する必要があります。
デッドゾーンを大きくすることは、意図する入力と意図しない入力とを分ける簡単な方法です。
詳しくは、「[サムスティックの読み取り](#reading-the-thumbsticks)」をご覧ください。
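A minimal radial-deadzone sketch (illustrative; `reading` is a `GamepadReading` like the ones gathered in [Polling the gamepad](#polling-the-gamepad), and the radius is a tuning value, not an official constant):
```cs
const double deadzoneRadius = 0.1; // tune for your game and hardware

double x = reading.LeftThumbstickX;
double y = reading.LeftThumbstickY;

// Treat readings inside the circular deadzone around center as no input.
if (Math.Sqrt(x * x + y * y) < deadzoneRadius)
{
    x = 0.0;
    y = 0.0;
}
```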
### <a name="ui-navigation"></a>UI のナビゲーション
ユーザー インターフェイスの操作に異なる入力デバイスをサポートする負担を軽くし、ゲームとデバイス間の整合性を高めるため、ほとんどの物理入力デバイスは、[UI ナビゲーション コントローラー](ui-navigation-controller.md)と呼ばれる個別の論理入力デバイスとして同時に機能します。 UI ナビゲーション コントローラーは、各種入力デバイスに共通の UI ナビゲーション コマンドのボキャブラリを提供します。
UI ナビゲーションコントローラーとして、ゲームパッドは、[必要な一連](ui-navigation-controller.md#required-set)のナビゲーションコマンドを左スティック、D パッド、**ビュー**、**メニュー**、 **a**、 **B**の各ボタンにマップします。
| ナビゲーション コマンド | ゲームパッド入力 |
| ------------------:| ----------------------------------- |
| [上へ] | 左スティックを上/方向パッドを上 |
| [下へ] | 左スティックを下/方向パッドを下 |
| Left | 左スティックを左/方向パッドを左 |
| 右 | 左スティックを右/方向パッドを右 |
| 表示 | 表示ボタン |
| メニュー | メニュー ボタン |
| 同意する | A ボタン |
| キャンセル | B ボタン |
また、ゲームパッドはナビゲーション コマンドのすべての[オプション セット](ui-navigation-controller.md#optional-set)をその他の入力にマップします。
| ナビゲーション コマンド | ゲームパッド入力 |
| ------------------:| ---------------------- |
| PageUp | 左トリガー |
| PageDown | 右トリガー |
| Page Left | L ボタン |
| Page Right | R ボタン |
| Scroll Up | 右スティックを上 |
| Scroll Down | 右スティックを下 |
| Scroll Left | 右スティックを左 |
| Scroll Right | 右スティックを右 |
| Context 1 | X ボタン |
| Context 2 | Y ボタン |
| Context 3 | 左スティックを押す |
| Context 4 | 右スティックを押す |
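As a rough sketch of how this looks in code (the details are covered on the [UI navigation controller](ui-navigation-controller.md) page; the class, property, and enum names below follow that API):
```cs
// Poll the first UI navigation controller; on a gamepad, Accept maps to the A button.
UINavigationController navController = UINavigationController.UINavigationControllers[0];
UINavigationReading navReading = navController.GetCurrentReading();

if (navReading.RequiredButtons.HasFlag(RequiredUINavigationButtons.Accept))
{
    // Handle the Accept command.
}
```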
## <a name="detect-and-track-gamepads"></a>ゲームパッドの検出と追跡
ゲームパッドはシステムによって管理されるため、ゲームパッドを自分で作成したり初期化する必要はありません。 接続されているゲームパッドとイベントの一覧はシステムによって提供され、ゲームパッドが追加または削除されると通知されます。
### <a name="the-gamepads-list"></a>ゲームパッドの一覧
[Gamepad][] クラスには静的プロパティである [Gamepad][] が用意されています。これは、現在接続されているゲームパッドの読み取り専用リストです。 接続されているゲームパッドの一部にのみ関心があるため、`Gamepads` プロパティを使用してアクセスするのではなく、独自のコレクションを維持することをお勧めします。
次の例では、接続されているすべてのゲームパッドを新しいコレクションにコピーします。 バックグラウンドの他のスレッドがこのコレクションにアクセスすることに注意してください。これは、(ユーザーが[追加][]した、または作成した[padpad)][]イベントで、コレクションを読み込んだ、または更新するコードをロックする必要があることに注意してください。
```cpp
auto myGamepads = ref new Vector<Gamepad^>();
critical_section myLock{};
for (auto gamepad : Gamepad::Gamepads)
{
// Check if the gamepad is already in myGamepads; if it isn't, add it.
critical_section::scoped_lock lock{ myLock };
auto it = std::find(begin(myGamepads), end(myGamepads), gamepad);
if (it == end(myGamepads))
{
// This code assumes that you're interested in all gamepads.
myGamepads->Append(gamepad);
}
}
```
```cs
private readonly object myLock = new object();
private List<Gamepad> myGamepads = new List<Gamepad>();
private Gamepad mainGamepad;
private void GetGamepads()
{
lock (myLock)
{
foreach (var gamepad in Gamepad.Gamepads)
{
// Check if the gamepad is already in myGamepads; if it isn't, add it.
bool gamepadInList = myGamepads.Contains(gamepad);
if (!gamepadInList)
{
// This code assumes that you're interested in all gamepads.
myGamepads.Add(gamepad);
}
}
}
}
```
### <a name="adding-and-removing-gamepads"></a>ゲームパッドの追加と削除
ゲームパッドが追加または削除さ[padpad)][]、そのイベントが発生した場合は[追加][]が発生します。 これらのイベントハンドラーを登録することで、現在接続されているゲームパッドを追跡できます。
次の例では、追加されたゲームパッドの追跡を開始します。
```cpp
Gamepad::GamepadAdded += ref new EventHandler<Gamepad^>([] (Platform::Object^, Gamepad^ args)
{
// Check if the just-added gamepad is already in myGamepads; if it isn't, add
// it.
critical_section::scoped_lock lock{ myLock };
auto it = std::find(begin(myGamepads), end(myGamepads), args);
if (it == end(myGamepads))
{
// This code assumes that you're interested in all new gamepads.
myGamepads->Append(args);
}
});
```
```cs
Gamepad.GamepadAdded += (object sender, Gamepad e) =>
{
// Check if the just-added gamepad is already in myGamepads; if it isn't, add
// it.
lock (myLock)
{
bool gamepadInList = myGamepads.Contains(e);
if (!gamepadInList)
{
myGamepads.Add(e);
}
}
};
```
The following example stops tracking a gamepad that's been removed. You'll also need to handle the behavior of any gamepad you're tracking when it's removed; for example, this code tracks input from only one gamepad and simply sets it to `nullptr` when that gamepad is removed. You'll need to check each frame whether a gamepad is active, and update which gamepad you're gathering input from as controllers are connected and disconnected.
```cpp
Gamepad::GamepadRemoved += ref new EventHandler<Gamepad^>([] (Platform::Object^, Gamepad^ args)
{
unsigned int indexRemoved;
critical_section::scoped_lock lock{ myLock };
if(myGamepads->IndexOf(args, &indexRemoved))
{
if (m_gamepad == myGamepads->GetAt(indexRemoved))
{
m_gamepad = nullptr;
}
myGamepads->RemoveAt(indexRemoved);
}
});
```
```cs
Gamepad.GamepadRemoved += (object sender, Gamepad e) =>
{
lock (myLock)
{
int indexRemoved = myGamepads.IndexOf(e);
if (indexRemoved > -1)
{
if (mainGamepad == myGamepads[indexRemoved])
{
mainGamepad = null;
}
myGamepads.RemoveAt(indexRemoved);
}
}
};
```
For more information, see [Input practices for games](input-practices-for-games.md).
### <a name="users-and-headsets"></a>ユーザーとヘッドセット
各ゲームパッドは、ユーザー アカウントと関連付けることでユーザーの ID をユーザーのゲームプレイにリンクできます。また、ボイス チャットやゲーム内機能を円滑化するためにヘッドセットをアタッチすることもできます。 ユーザーとの連携およびヘッドセット操作について詳しくは、[ユーザーおよびそのデバイスの追跡](input-practices-for-games.md#tracking-users-and-their-devices)と、[ヘッドセット](headset.md)に関するページをご覧ください。
## <a name="reading-the-gamepad"></a>ゲームパッドの読み取り
目的のゲームパッドを特定したら、入力を収集する準備は完了です。 ただし、なじみのある一部の他の種類の入力とは違い、ゲームパッドはイベントを発生することによって状態の変化を伝えるわけではありません。 代わりに、イベントを_ポーリング_することで現在の状態を定期的に読み取ります。
### <a name="polling-the-gamepad"></a>ゲームパッドのポーリング
ポーリングでは、明確な時点におけるナビゲーション デバイスのスナップショットをキャプチャします。 入力を収集するこのアプローチはほとんどのゲームに最適です。ゲームのロジックはイベント駆動型ではなく、確定的なループの中で実行されることが一般的なためです。また通常は、徐々に集めた多数の単一の入力によるコマンドを解釈するより、一度に集めた入力によるゲーム コマンドを解釈する方が簡単になります。
ゲームパッドをポーリングするには、[GetCurrentReading][] を呼び出します。この関数はゲームパッドの状態が格納された [GamepadReading][] を返します。
次の例では、ゲームパッドをポーリングして現在の状態を確認します。
```cpp
auto gamepad = myGamepads->GetAt(0);
GamepadReading reading = gamepad->GetCurrentReading();
```
```cs
Gamepad gamepad = myGamepads[0];
GamepadReading reading = gamepad.GetCurrentReading();
```
In addition to the gamepad state, each reading includes a timestamp that indicates precisely when the state was retrieved. The timestamp is useful for relating the reading to the timing of previous readings or to the timing of the game simulation.
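The following C# sketch illustrates one way to use the timestamp to relate two polls; `previousReading` is a variable this sketch assumes you maintain between frames, and the unit of the `Timestamp` counter should be confirmed against the [GamepadReading][] reference rather than assumed from this example.
```cs
// Compare the current reading's timestamp against the previous one.
GamepadReading current = gamepad.GetCurrentReading();
ulong elapsed = current.Timestamp - previousReading.Timestamp;
// ... use `elapsed` to relate this reading to the game simulation's timing.
previousReading = current;
```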
### <a name="reading-the-thumbsticks"></a>サムスティックの読み取り
各サムスティックは、X 軸と Y 軸で -1.0 ~ +1.0 のアナログの読み取り値を提供します。 X 軸では、-1.0 の値はサムスティックを最も左に移動した位置に対応し、+1.0 の値はサムスティックを最も右に移動した位置に対応します。 Y 軸では、-1.0 の値はサムスティックを最も下に移動した位置に対応し、+1.0 の値はサムスティックを最も上に移動した位置に対応します。 どちらの軸でも、スティックが中央の位置にある場合、値は約0.0 ですが、後続の読み取りの間でも正確な値は変化します。このバリエーションを緩和するための戦略については、このセクションの後半で説明します。
左のサムスティックの X 軸の値は、[GamepadReading][] 構造体の `LeftThumbstickX` プロパティから読み取られ、Y 軸の値は `LeftThumbstickY` プロパティから読み取られます。 右のサムスティックの X 軸の値は、`RightThumbstickX` プロパティから読み取られ、Y 軸の値は `RightThumbstickY` プロパティから読み取られます。
```cpp
float leftStickX = reading.LeftThumbstickX; // returns a value between -1.0 and +1.0
float leftStickY = reading.LeftThumbstickY; // returns a value between -1.0 and +1.0
float rightStickX = reading.RightThumbstickX; // returns a value between -1.0 and +1.0
float rightStickY = reading.RightThumbstickY; // returns a value between -1.0 and +1.0
```
```cs
double leftStickX = reading.LeftThumbstickX; // returns a value between -1.0 and +1.0
double leftStickY = reading.LeftThumbstickY; // returns a value between -1.0 and +1.0
double rightStickX = reading.RightThumbstickX; // returns a value between -1.0 and +1.0
double rightStickY = reading.RightThumbstickY; // returns a value between -1.0 and +1.0
```
When reading the thumbstick values, you'll notice that a thumbstick at rest in its center position doesn't reliably produce a neutral reading of 0.0; instead, it produces varying values near 0.0 each time it's moved and returned to the center position. To mitigate these variations, you can implement a small _deadzone_, which is a range of values near the ideal center position that are ignored. One way to implement a deadzone is to determine how far the thumbstick has moved from the center and ignore readings that are nearer than some distance you choose. You can compute the distance roughly (it's not exact, because thumbstick readings are essentially polar values, not planar ones) just by using the Pythagorean theorem. This produces a radial deadzone.
The following example demonstrates a basic radial deadzone using the Pythagorean theorem.
```cpp
float leftStickX = reading.LeftThumbstickX; // returns a value between -1.0 and +1.0
float leftStickY = reading.LeftThumbstickY; // returns a value between -1.0 and +1.0
// choose a deadzone -- readings inside this radius are ignored.
const float deadzoneRadius = 0.1;
const float deadzoneSquared = deadzoneRadius * deadzoneRadius;
// Pythagorean theorem -- for a right triangle, hypotenuse^2 = (opposite side)^2 + (adjacent side)^2
auto oppositeSquared = leftStickY * leftStickY;
auto adjacentSquared = leftStickX * leftStickX;
// accept and process input if true; otherwise, reject and ignore it.
if ((oppositeSquared + adjacentSquared) > deadzoneSquared)
{
// input accepted, process it
}
```
```cs
double leftStickX = reading.LeftThumbstickX; // returns a value between -1.0 and +1.0
double leftStickY = reading.LeftThumbstickY; // returns a value between -1.0 and +1.0
// choose a deadzone -- readings inside this radius are ignored.
const double deadzoneRadius = 0.1;
const double deadzoneSquared = deadzoneRadius * deadzoneRadius;
// Pythagorean theorem -- for a right triangle, hypotenuse^2 = (opposite side)^2 + (adjacent side)^2
double oppositeSquared = leftStickY * leftStickY;
double adjacentSquared = leftStickX * leftStickX;
// accept and process input if true; otherwise, reject and ignore it.
if ((oppositeSquared + adjacentSquared) > deadzoneSquared)
{
// input accepted, process it
}
```
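Beyond simply rejecting readings inside the deadzone, games often rescale the remaining range so that movement ramps up smoothly from the deadzone's edge instead of jumping. The following C# sketch shows one common way to do this; the rescaling scheme is an illustration, not part of the API.
```cs
double magnitude = Math.Sqrt(leftStickX * leftStickX + leftStickY * leftStickY);
if (magnitude > deadzoneRadius)
{
    // Remap the magnitude from [deadzoneRadius, 1.0] to [0.0, 1.0] so input
    // ramps up smoothly from the edge of the deadzone instead of jumping.
    double normalized = Math.Min((magnitude - deadzoneRadius) / (1.0 - deadzoneRadius), 1.0);
    double scale = normalized / magnitude;
    double adjustedX = leftStickX * scale; // effective stick input
    double adjustedY = leftStickY * scale;
}
```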
Each thumbstick also acts as a button when pressed inward; for more information on reading this input, see [Reading the buttons](#reading-the-buttons).
### <a name="reading-the-triggers"></a>トリガーの読み取り
トリガーは、0.0 (完全に離された状態) ~ 1.0 (完全に押された状態) の浮動小数点値として表されます。 左のトリガーの値は、[GamepadReading][] 構造体の `LeftTrigger` プロパティから、右のトリガーの値は `RightTrigger` プロパティから読み取られます。
```cpp
float leftTrigger = reading.LeftTrigger; // returns a value between 0.0 and 1.0
float rightTrigger = reading.RightTrigger; // returns a value between 0.0 and 1.0
```
```cs
double leftTrigger = reading.LeftTrigger; // returns a value between 0.0 and 1.0
double rightTrigger = reading.RightTrigger; // returns a value between 0.0 and 1.0
```
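Because the trigger values are analog, a game that only needs a binary signal (fire or don't fire, for example) typically compares the reading against a threshold. The 0.5 threshold in this C# sketch is an arbitrary illustrative value, not something defined by the API.
```cs
const double triggerThreshold = 0.5; // tune per game; illustrative only
if (reading.RightTrigger >= triggerThreshold)
{
    // Treat the right trigger as a digital "fire" button.
}
```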
### <a name="reading-the-buttons"></a>ボタンの読み取り
各ゲームパッドの各—ボタンは、D パッド、左と右のバンパー、左と右のサムスティックの押下、 **A**、 **B**、 **X**、 **Y**、**ビュー**、**メニュー**—によって、押された状態 (下方向) または解放された (上の) かを示すデジタル読み取りを提供します。 効率を上げるために、ボタンの読み取りは個々のブール値として表されません。これらはすべて、ビットフィールド[Padbuttons][]列挙体によって表される1つのにパックされています。
<!-- > [!NOTE]
> The Xbox Elite Wireless Controller is equipped with four additional **paddle** buttons on its underside. These buttons are also represented in the `GamepadButtons` enumeration and their values are read in the same way as the standard gamepad buttons. -->
The button values are read from the `Buttons` property of the [GamepadReading][] structure. Because this property is a bitfield, bitwise masking is used to isolate the value of the button that you're interested in. The button is pressed (down) when the corresponding bit is set; otherwise, it's released (up).
The following example determines whether the A button is pressed.
```cpp
if (GamepadButtons::A == (reading.Buttons & GamepadButtons::A))
{
// button A is pressed
}
```
```cs
if (GamepadButtons.A == (reading.Buttons & GamepadButtons.A))
{
// button A is pressed
}
```
The following example determines whether the A button is released.
```cpp
if (GamepadButtons::None == (reading.Buttons & GamepadButtons::A))
{
// button A is released
}
```
```cs
if (GamepadButtons.None == (reading.Buttons & GamepadButtons.A))
{
// button A is released
}
```
Sometimes you might want to determine when a button transitions from pressed to released or from released to pressed, whether multiple buttons are pressed or released, or whether a set of buttons is arranged in a particular way (some pressed, some not). For information on how to detect each of these conditions, see [Detecting button transitions](input-practices-for-games.md#detecting-button-transitions) and [Detecting complex button arrangements](input-practices-for-games.md#detecting-complex-button-arrangements).
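As a quick illustration of the first case, a game can keep the previous frame's reading and compare the two bitfields; this C# sketch is a simplified version of the approach described in the linked article, and assumes you store `previousReading` between frames.
```cs
// A button counts as "just pressed" when its bit is set now but wasn't before.
GamepadButtons justPressed = reading.Buttons & ~previousReading.Buttons;
if (GamepadButtons.A == (justPressed & GamepadButtons.A))
{
    // The A button transitioned from released to pressed this frame.
}
```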
## <a name="run-the-gamepad-input-sample"></a>ゲームパッド入力のサンプルの実行
[GamepadUWP サンプル _(github)_ ](https://github.com/Microsoft/Xbox-ATG-Samples/tree/master/UWPSamples/System/GamepadUWP) は、ゲームパッドに接続して、その状態を読み取る方法を示しています。
## <a name="vibration-and-impulse-triggers-overview"></a>振動とリアル トリガーの概要
ゲームパッド内の振動モーターは、触覚的なフィードバックをユーザーに提供することを目的としています。 ゲームではこの機能を、より高い没入感を生み出す、状態情報 (ダメージを受けたなど) の伝達を助ける、重要なアイテムに近接信号を送るなど、クリエイティブな用途に利用します。
Xbox One ゲームパッドには、独立した振動モーターが合計 4 つ搭載されています。 2つはゲームパッドの本文にある大きなモータです。左側のモーターは、粗い振動を備えています。一方、右モータは、穏やか、より微妙な振動を提供します。 残り 2 つは小型のモーターで、各トリガー内に 1 つずつ組み込まれていて、トリガーを操作しているユーザーの指に直接、鋭い弾けるような振動を伝えます。Xbox One ゲームパッドのトリガーは、この独自の機能のために、_リアル トリガー_と呼ばれます。 これらのモーターが協調することで、幅広い種類の触感を生成できます。
## <a name="using-vibration-and-impulse"></a>振動とリアル トリガーの使用
ゲームパッドの振動は、[Gamepad][] クラスの [Vibration][] プロパティによって制御されます。 `Vibration` は、4つの浮動小数点値で構成される[GamepadVibration][]構造体のインスタンスです。各値は、いずれかのモーターの強度を表します。
`Gamepad.Vibration` プロパティのメンバーを直接変更することもできますが、別の `GamepadVibration` インスタンスを必要な値に初期化し、`Gamepad.Vibration` プロパティにコピーして、実際のモータの輝度をすべて一度に変更することをお勧めします。
次の例は、モーターの強さを一度に変更する方法を示しています。
```cpp
// get the first gamepad
Gamepad^ gamepad = Gamepad::Gamepads->GetAt(0);
// create an instance of GamepadVibration
GamepadVibration vibration;
// ... set vibration levels on vibration struct here
// copy the GamepadVibration struct to the gamepad
gamepad->Vibration = vibration;
```
```cs
// get the first gamepad
Gamepad gamepad = Gamepad.Gamepads[0];
// create an instance of GamepadVibration
GamepadVibration vibration = new GamepadVibration();
// ... set vibration levels on vibration struct here
// copy the GamepadVibration struct to the gamepad
gamepad.Vibration = vibration;
```
### <a name="using-the-vibration-motors"></a>振動モーターの使用
左右の振動モーターの値は、0.0 (振動なし) ~ 1.0 (最も強い振動) の浮動小数点値になります。 左のモーターの強さは、[GamepadVibration][] 構造体の `LeftMotor` プロパティによって設定され、右のモーターの強さは `RightMotor` プロパティによって設定されます。
次の例では両方の振動モーターの強さを設定し、ゲームパッドの振動を有効にします。
```cpp
GamepadVibration vibration;
vibration.LeftMotor = 0.80; // sets the intensity of the left motor to 80%
vibration.RightMotor = 0.25; // sets the intensity of the right motor to 25%
gamepad->Vibration = vibration;
```
```cs
GamepadVibration vibration = new GamepadVibration();
vibration.LeftMotor = 0.80; // sets the intensity of the left motor to 80%
vibration.RightMotor = 0.25; // sets the intensity of the right motor to 25%
mainGamepad.Vibration = vibration;
```
Remember that the two motors are not identical, so setting these properties to the same value doesn't produce the same vibration in one motor as in the other. For any value, the left motor produces a stronger vibration at a lower frequency than the right motor, which, for the same value, produces a gentler vibration at a higher frequency. Even at the maximum value, the left motor can't produce the high frequencies of the right motor, nor can the right motor produce the strong force of the left one. Still, because the motors are rigidly connected by the gamepad body, players don't experience the vibrations fully independently, even though the motors have different characteristics and can vibrate with different intensities. This arrangement allows a wider, more expressive range of sensations to be produced than if the motors were identical.
### <a name="using-the-impulse-triggers"></a>リアル トリガーの使用
各リアル トリガーのモーターの値は、0.0 (振動なし) ~ 1.0 (最も強い振動) の浮動小数点値になります。 左のトリガー モーターの強さは、[GamepadVibration][] 構造体の `LeftTrigger` プロパティによって設定され、右のトリガー の強さは `RightTrigger` プロパティによって設定されます。
次の例では両方のリアル トリガーの強さを設定し、有効にします。
```cpp
GamepadVibration vibration;
vibration.LeftTrigger = 0.75; // sets the intensity of the left trigger to 75%
vibration.RightTrigger = 0.50; // sets the intensity of the right trigger to 50%
gamepad->Vibration = vibration;
```
```cs
GamepadVibration vibration = new GamepadVibration();
vibration.LeftTrigger = 0.75; // sets the intensity of the left trigger to 75%
vibration.RightTrigger = 0.50; // sets the intensity of the right trigger to 50%
mainGamepad.Vibration = vibration;
```
Remember that unlike the body motors, the two motors inside the triggers are identical, so setting these properties to the same value produces the same vibration in both motors. However, because these motors aren't rigidly connected in any way, players experience the vibrations independently. This arrangement allows fully independent sensations to be directed to both triggers at once, and helps the triggers convey more specific information than the motors in the gamepad body can.
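Note that the motors keep vibrating at the intensities you last set until you change them, so a game typically writes an all-zero `GamepadVibration` when an effect should end. A minimal C# sketch of this zeroing pattern follows (the pattern itself is a convention, not a dedicated API call).
```cs
// Stop all four motors at once. GamepadVibration is a struct, so a freshly
// constructed instance has all four intensities at 0.0.
GamepadVibration stopVibration = new GamepadVibration();
mainGamepad.Vibration = stopVibration;
```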
## <a name="run-the-gamepad-vibration-sample"></a>ゲームパッドの振動のサンプルの実行
[GamepadVibrationUWP サンプル _(github)_ ](https://github.com/Microsoft/Xbox-ATG-Samples/tree/master/UWPSamples/System/GamepadVibrationUWP) では、ゲームパッドの振動モーターとリアル トリガーを使用して、さまざまな効果を生む方法を示しています。
## <a name="see-also"></a>関連項目
* [Windows.Gaming.Input.UINavigationController][]
* [IGameController を入力します。][]
* [ゲームの入力方法](input-practices-for-games.md)
[Windows.Gaming.Input]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.aspx
[Windows.Gaming.Input.UINavigationController]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.uinavigationcontroller.aspx
[Windows.Gaming.Input.IGameController]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.igamecontroller.aspx
[Gamepad]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepad.aspx
[Gamepads]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepad.gamepads.aspx
[GamepadAdded]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepad.gamepadadded.aspx
[GamepadRemoved]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepad.gamepadremoved.aspx
[GetCurrentReading]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepad.getcurrentreading.aspx
[Vibration]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepad.vibration.aspx
[GamepadReading]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepadreading.aspx
[GamepadButtons]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepadbuttons.aspx
[GamepadVibration]: https://msdn.microsoft.com/library/windows/apps/windows.gaming.input.gamepadvibration.aspx
permalink: /1.15/core/v1/storageOSPersistentVolumeSource/
---
# core.v1.storageOSPersistentVolumeSource
Represents a StorageOS persistent volume resource.
## Index
* [`fn withFsType(fsType)`](#fn-withfstype)
* [`fn withReadOnly(readOnly)`](#fn-withreadonly)
* [`fn withVolumeName(volumeName)`](#fn-withvolumename)
* [`fn withVolumeNamespace(volumeNamespace)`](#fn-withvolumenamespace)
* [`obj secretRef`](#obj-secretref)
* [`fn withFieldPath(fieldPath)`](#fn-secretrefwithfieldpath)
* [`fn withKind(kind)`](#fn-secretrefwithkind)
* [`fn withName(name)`](#fn-secretrefwithname)
* [`fn withNamespace(namespace)`](#fn-secretrefwithnamespace)
* [`fn withResourceVersion(resourceVersion)`](#fn-secretrefwithresourceversion)
* [`fn withUid(uid)`](#fn-secretrefwithuid)
## Fields
### fn withFsType
```ts
withFsType(fsType)
```
Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
### fn withReadOnly
```ts
withReadOnly(readOnly)
```
Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
### fn withVolumeName
```ts
withVolumeName(volumeName)
```
VolumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
### fn withVolumeNamespace
```ts
withVolumeNamespace(volumeNamespace)
```
VolumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to 'default' if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.
## obj secretRef
ObjectReference contains enough information to let you inspect or modify the referred object.
### fn secretRef.withFieldPath
```ts
withFieldPath(fieldPath)
```
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object.
### fn secretRef.withKind
```ts
withKind(kind)
```
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
### fn secretRef.withName
```ts
withName(name)
```
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
### fn secretRef.withNamespace
```ts
withNamespace(namespace)
```
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
### fn secretRef.withResourceVersion
```ts
withResourceVersion(resourceVersion)
```
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
### fn secretRef.withUid
```ts
withUid(uid)
```
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
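For illustration, here is a minimal Jsonnet sketch of composing these setters; the import path and all values (`example-vol`, `storageos-api`, and so on) are assumptions for the example, not defaults of the library.
```jsonnet
local k = import 'k.libsonnet';
local sos = k.core.v1.storageOSPersistentVolumeSource;

// Each withX function returns a partial object; merge them with +.
sos.withVolumeName('example-vol')
+ sos.withVolumeNamespace('default')
+ sos.withFsType('ext4')
+ sos.withReadOnly(false)
+ sos.secretRef.withName('storageos-api')
+ sos.secretRef.withNamespace('kube-system')
```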