Dataset schema (one row per source file):

| column | type | range / classes |
| --- | --- | --- |
| hexsha | string | length 40 |
| size | int64 | 5 to 1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3 to 344 |
| max_stars_repo_name | string | length 5 to 125 |
| max_stars_repo_head_hexsha | string | length 40 to 78 |
| max_stars_repo_licenses | sequence | length 1 to 11 |
| max_stars_count | int64 | 1 to 368k, nullable |
| max_stars_repo_stars_event_min_datetime | string | length 24, nullable |
| max_stars_repo_stars_event_max_datetime | string | length 24, nullable |
| max_issues_repo_path | string | length 3 to 344 |
| max_issues_repo_name | string | length 5 to 125 |
| max_issues_repo_head_hexsha | string | length 40 to 78 |
| max_issues_repo_licenses | sequence | length 1 to 11 |
| max_issues_count | int64 | 1 to 116k, nullable |
| max_issues_repo_issues_event_min_datetime | string | length 24, nullable |
| max_issues_repo_issues_event_max_datetime | string | length 24, nullable |
| max_forks_repo_path | string | length 3 to 344 |
| max_forks_repo_name | string | length 5 to 125 |
| max_forks_repo_head_hexsha | string | length 40 to 78 |
| max_forks_repo_licenses | sequence | length 1 to 11 |
| max_forks_count | int64 | 1 to 105k, nullable |
| max_forks_repo_forks_event_min_datetime | string | length 24, nullable |
| max_forks_repo_forks_event_max_datetime | string | length 24, nullable |
| content | string | length 5 to 1.04M |
| avg_line_length | float64 | 1.14 to 851k |
| max_line_length | int64 | 1 to 1.03M |
| alphanum_fraction | float64 | 0 to 1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01 to 1 |
[record 8a5a4087: README.md in kergeodeta/spring-camel-cluster-singleton @ 690dbc40, 3,337 bytes, md (Markdown), license: Apache-2.0, stars: n/a, issues: n/a, forks: 1]

# spring-camel-cluster-singleton
Camel-based cluster singleton service using Camel's core FileLockClusterService or KubernetesClusterService.
## Camel Leader Election using config based ClusterService
Based on the blog post: https://www.nicolaferraro.me/2017/10/17/creating-clustered-singleton-services-on-kubernetes/
Original GitHub link: [https://github.com/nicolaferraro/spring-camel-cluster-singleton.git](https://github.com/nicolaferraro/spring-camel-cluster-singleton.git)
The app currently supports file-based and Kubernetes-based Cluster Service implementations.
## Enable File Cluster Service
Service: ```FileLockClusterService```
Config: [Camel Cluster Service](https://camel.apache.org/manual/latest/clustering.html)
First, comment out the Kubernetes-based config lines in ```application.properties``` if they are present, then set/uncomment the file-lock-based clustering like this:
```
camel.component.file.cluster.service.enabled = true
camel.component.file.cluster.service.id = ${random.uuid}
camel.component.file.cluster.service.root = ${java.io.tmpdir}
camel.component.file.cluster.service.cluster-labels[group]=${project.groupId}
camel.component.file.cluster.service.cluster-labels[app]=${project.artifactId}
```
## Enable Kubernetes Cluster Service
First, comment out the file-based clustering lines in ```application.properties``` if they are present, then set/uncomment the Kubernetes-based clustering like this:
```
camel.component.kubernetes.cluster.service.enabled=true
camel.component.kubernetes.cluster.service.cluster-labels[group]=${project.groupId}
camel.component.kubernetes.cluster.service.cluster-labels[app]=${project.artifactId}
```
Deploy to Kubernetes with the following commands.
Check whether minikube is already running, and start it if it is not:
```minikube start```
```kubectl apply -f src/main/fabric8/raw/rb.yml ```
```kubectl apply -f src/main/fabric8/raw/role.yml```
```kubectl apply -f src/main/fabric8/raw/sa.yml ```
```eval $(minikube docker-env)```
```mvn fabric8:deploy```
Run the above commands in the same terminal session, otherwise you will be facing an image pull error in Kubernetes.
Check the logs to verify that the cluster singleton service is running.
## To configure endpoints
The master service endpoint URI format is the following:
```master:namespace:delegateUri```
where `delegateUri` can be any endpoint, such as a queue or a timer.
The example below consumes from the JMS `foo` endpoint and delegates to another queue:
```from("master:lock1:jms:foo").to("activemq:wine")```
See: ```ClasterRoutes.java```
## Choose Cluster Service based on environment
By default, file-based cluster leader election is enabled, with the config set to:
```camel.component.file.cluster.service.enabled = true```
To enable the Kubernetes cluster service you should disable the file-based cluster config
and enable the Kubernetes config:
```camel.component.kubernetes.cluster.service.enabled=true```
## Testing
Set the environment variables for either the file-lock or the Kubernetes setup.
### Testing on kubernetes
If you are using minikube, execute ```minikube start```,
then ```eval $(minikube docker-env)```.
Configure kubectl, then run ```mvn fabric8:deploy```.
Useful kubectl commands (see the [kubectl cheat sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)):
* kubectl get pods
* kubectl logs
* kubectl delete pod
* kubectl get deployments
* kubectl delete deployment
[previous record stats: avg line length 32.09, max line length 160, alphanumeric fraction 0.78, language id eng_Latn (0.63)]

[record 8a5b1382: api/PowerPoint.Tags.Value.md in ryanmajidi/VBA-Docs @ 8b07050f, 1,871 bytes, md (Markdown), licenses: CC-BY-4.0 + MIT, stars/issues/forks: n/a]

---
title: Tags.Value Method (PowerPoint)
keywords: vbapp10.chm611009
f1_keywords:
- vbapp10.chm611009
ms.prod: powerpoint
api_name:
- PowerPoint.Tags.Value
ms.assetid: 8d7507d2-6533-5d63-c6ff-fec9581fb44f
ms.date: 06/08/2017
---
# Tags.Value Method (PowerPoint)
Returns the value of the specified tag as a **String**.
## Syntax
_expression_. `Value`( `_Index_` )
_expression_ A variable that represents a [Tags](./PowerPoint.Tags.md) object.
## Parameters
|Name|Required/Optional|Data type|Description|
|:-----|:-----|:-----|:-----|
| _Index_|Required|**Long**|The tag number.|
## Return value
String
## Example
This example displays the name and value for each tag associated with slide one in the active presentation.
```vb
With Application.ActivePresentation.Slides(1).Tags
For i = 1 To .Count
MsgBox "Tag #" & i & ": Name = " & .Name(i)
MsgBox "Tag #" & i & ": Value = " & .Value(i)
Next
End With
```
This example searches through the tags for each slide in the active presentation. If there's a tag named "PRIORITY," a message box displays the tag value. If there isn't a tag named "PRIORITY," the example adds this tag that has the value "Unknown."
```vb
For Each s In Application.ActivePresentation.Slides
With s.Tags
found = False
For i = 1 To .Count
If .Name(i) = "PRIORITY" Then
found = True
slNum = .Parent.SlideIndex
MsgBox "Slide " & slNum & " priority: " & .Value(i)
End If
Next
If Not found Then
slNum = .Parent.SlideIndex
.Add "Name", "New Figures"
.Add "Priority", "Unknown"
MsgBox "Slide " & slNum & _
" priority tag added: Unknown"
End If
End With
Next
```
## See also
[Tags Object](PowerPoint.Tags.md)
| 19.694737 | 249 | 0.61892 | eng_Latn | 0.692135 |
8a5b28ac66fcec2ca777ed0443b8cc8ea0fc4262 | 6,257 | md | Markdown | articles/stream-analytics/stream-analytics-job-diagram-with-metrics.md | Myhostings/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 16 | 2017-08-28T08:29:36.000Z | 2022-01-02T16:46:30.000Z | articles/stream-analytics/stream-analytics-job-diagram-with-metrics.md | Ahmetmaman/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 470 | 2017-11-11T20:59:16.000Z | 2021-04-10T17:06:28.000Z | articles/stream-analytics/stream-analytics-job-diagram-with-metrics.md | Ahmetmaman/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 25 | 2017-11-11T19:39:08.000Z | 2022-03-30T13:47:56.000Z | ---
title: Azure Stream Analytics veri odaklı hata ayıklama
description: Bu makalede, Azure portal iş diyagramını ve ölçümleri kullanarak Azure Stream Analytics işinizin nasıl giderileceği açıklanmaktadır.
author: jseb225
ms.author: jeanb
ms.service: stream-analytics
ms.topic: how-to
ms.date: 05/01/2017
ms.openlocfilehash: 6d20454515088ccca87665d9b3b27c0d82c3cdf9
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/29/2021
ms.locfileid: "98020409"
---
# <a name="data-driven-debugging-by-using-the-job-diagram"></a>İş diyagramını kullanarak veri odaklı hata ayıklama
Azure portal **izleme** dikey penceresindeki iş diyagramı, iş işlem hattınızı görselleştirmenize yardımcı olabilir. Girişleri, çıkışları ve sorgu adımlarını gösterir. İş diyagramını kullanarak her adımda ölçümleri inceleyebilir, sorunları giderirken sorunun kaynağını daha hızlı yalıtabilirsiniz.
## <a name="using-the-job-diagram"></a>İş diyagramını kullanma
Azure portal, bir Stream Analytics işinde, **destek + sorun giderme** altında **iş diyagramı**' nı seçin:

Sorgu düzenlemesi bölmesinde ilgili bölümü görmek için her bir sorgu adımını seçin. Adım için bir ölçüm grafiği, sayfada alt bölmede görüntülenir.

Azure Event Hubs girişi bölümlerini görmek için... seçeneğini belirleyin **.** Bağlam menüsü görüntülenir. Ayrıca, giriş merkli öğesini de görebilirsiniz.

Ölçüm grafiğini yalnızca tek bir bölüm için görmek üzere bölüm düğümünü seçin. Ölçümler sayfanın alt kısmında gösterilir.

Birleşme için ölçüm grafiğini görmek için birleşme düğümünü seçin. Aşağıdaki grafikte hiçbir olayın bırakılmadığı veya ayarlandığı gösterilmektedir.

Ölçüm değeri ve saatinin ayrıntılarını görmek için grafiğin üzerine gelin.

## <a name="troubleshoot-by-using-metrics"></a>Ölçümleri kullanarak sorun giderme
**Querylastprocessedtime** ölçümü, belirli bir adımın veri aldığını gösterir. Topolojiyi inceleyerek, hangi adımın veri almadığını görmek için çıkış işlemcisinden geriye gidebilirsiniz. Bir adım veri almıyorsanız, hemen önceki sorgu adımına gidin. Yukarıdaki sorgu adımında bir zaman penceresi olup olmadığını ve veri çıkışı için yeterince zaman geçtiğini denetleyin. (Zaman pencerelerinin saate götürüldiğini unutmayın.)
Yukarıdaki sorgu adımı bir giriş işlemcisidir, aşağıdaki hedeflenen soruların yanıtlanmasına yardımcı olması için giriş ölçümlerini kullanın. Bu kişiler, bir işin giriş kaynaklarından veri alma olup olmadığını belirlemenize yardımcı olabilirler. Sorgu bölümlendirilmişse her bir bölümü inceleyin.
### <a name="how-much-data-is-being-read"></a>Ne kadar veri okunmakta?
* **Inputeventssourcestotal** , okunan veri birimlerinin sayısıdır. Örneğin, Blobların sayısı.
* **Inputeventstotal** , okunan olay sayısıdır. Bu ölçüm her bölüm için kullanılabilir.
* **Inputeventsinbytestotal** , okunan bayt sayısıdır.
* **Inputeventslastarrivaltime** , alınan her olayın sıraya alınma zamanına göre güncelleştirilir.
### <a name="is-time-moving-forward-if-actual-events-are-read-punctuation-might-not-be-issued"></a>Zaman ileri taşınıyor mı? Gerçek olaylar okunuyorsa noktalama işaretleri verilmeyebilir.
* **InputEventsLastPunctuationTime**, zamanın ilerlemesini sağlamak için bir noktalama işaretinin ne zaman verildiğini gösterir. Noktalama işareti verilmemişse, veri akışı engellenebilir.
### <a name="are-there-any-errors-in-the-input"></a>Girişte herhangi bir hata var mı?
* **Inputeventseventdadtanulltotal** , null veri içeren bir olay sayısıdır.
* **Inputeventsserializererrorstotal** , doğru bir şekilde seri durumdan çıkarılamıyor bir olay sayısıdır.
* **Inputeventsdüşürüldedtotal** , seri durumdan çıkarma dışında bir sorunu olan olay sayısıdır.
### <a name="are-events-being-dropped-or-adjusted"></a>Olaylar bırakılıyor veya düzeltildi mi?
* **Inputeventsearlytotal** , yüksek filigrandan önce bir uygulama zaman damgasına sahip olan olay sayısıdır.
* **Inputeventslatetotal** , yüksek filigrandan sonra uygulama zaman damgasına sahip olan olay sayısıdır.
* **Inputeventsdroppedbeforeapplicationstarttimetotal** , iş başlangıç zamanından önce bırakılan olay sayısıdır.
### <a name="are-we-falling-behind-in-reading-data"></a>Verileri okurken geride düşeceğiz mı?
* **Biriktirme listesindeki giriş olayları (Toplam)** Event Hubs ve Azure IoT Hub girişleri için kaç tane daha fazla ileti okunması gerektiğini söyler. Bu sayı 0 ' dan büyükse, işinizin verileri geldiği kadar hızlı işleyemediği anlamına gelir. Bu durumda, akış birimlerinin sayısını artırmanız ve/veya işinizin paralelleştirilmesine emin olmanız gerekebilir. [Sorgu paralelleştirme sayfasında](./stream-analytics-parallelization.md)bu konuda daha fazla bilgi görebilirsiniz.
## <a name="get-help"></a>Yardım alın
Ek Yardım için, [Azure Stream Analytics Için Microsoft Q&soru sayfasını](/answers/topics/azure-stream-analytics.html)deneyin.
## <a name="next-steps"></a>Sonraki adımlar
* [Stream Analytics giriş](stream-analytics-introduction.md)
* [Akış Analizi ile çalışmaya başlama](stream-analytics-real-time-fraud-detection.md)
* [Stream Analytics işlerini ölçeklendirme](stream-analytics-scale-jobs.md)
* [Stream Analytics sorgu dili başvurusu](/stream-analytics-query/stream-analytics-query-language-reference)
* [Stream Analytics yönetim REST API başvurusu](/rest/api/streamanalytics/) | 71.102273 | 477 | 0.805977 | tur_Latn | 0.998049 |
8a5b3ec5a30c9bb907d17b67b57e9552653345e8 | 3,508 | md | Markdown | docs/standard/exceptions/index.md | TyounanMOTI/docs.ja-jp | 72947e02a15d5396c2ee514246023a4ab24abc77 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-01-29T12:31:08.000Z | 2019-01-29T12:31:08.000Z | docs/standard/exceptions/index.md | TyounanMOTI/docs.ja-jp | 72947e02a15d5396c2ee514246023a4ab24abc77 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/exceptions/index.md | TyounanMOTI/docs.ja-jp | 72947e02a15d5396c2ee514246023a4ab24abc77 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: .NET での例外の処理とスロー
ms.date: 06/19/2018
ms.technology: dotnet-standard
helpviewer_keywords:
- exceptions [.NET], handling
- runtime, exceptions
- filtering exceptions
- errors [.NET], exceptions
- exceptions [.NET], throwing
- exceptions [.NET]
- common language runtime, exceptions
ms.assetid: f99a1d29-a2a8-47af-9707-9909f9010735
author: mairaw
ms.author: mairaw
ms.openlocfilehash: 263e6394a57ec3e7ef00eb79671d9b8ac47e724f
ms.sourcegitcommit: ea00c05e0995dae928d48ead99ddab6296097b4c
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 10/02/2018
ms.locfileid: "48046714"
---
# <a name="handling-and-throwing-exceptions-in-net"></a>.NET での例外の処理とスロー
アプリケーションは、実行中に発生するエラーを一貫した方法で処理できなければなりません。 .NET では、一貫した方法でアプリケーションにエラーを通知するためのモデルが用意されています。 .NET 操作では、例外をスローすることによって障害の発生を示します。
## <a name="exceptions"></a>例外
例外とは、プログラムを実行することによって発生するエラー状態または予期しない動作のことです。 例外がスローされる原因として、コードまたは呼び出したコード (たとえば共有ライブラリ) 内に障害がある、オペレーティング システム リソースを使用できない、予期しない状態 (たとえば検証できないコード) をランタイムが検出したなどがあります。 アプリケーションは、他の状態からではなく、これらの状態のうちのいくつかから回復できます。 ほとんどのアプリケーション例外から回復できますが、ほとんどのランタイム例外からは回復できません。
.NET では、例外は、<xref:System.Exception?displayProperty=nameWithType> クラスから継承されるオブジェクトです。 例外は問題が発生したコード領域からスローされます。 例外は、アプリケーションが処理するかプログラムが終了するまで、スタックに渡されます。
## <a name="exceptions-vs-traditional-error-handling-methods"></a>例外:従来のエラー処理メソッド
言語のエラー処理モデルは従来、エラーを検出してそれに対応したハンドラーを見つける言語固有の方法か、オペレーティング システムが備えているエラー処理機構のいずれかを使用していました。 .NET が例外処理を実装する方法は、次の利点をもたらします。
- 例外のスローと処理は、.NET プログラミング言語では同じように機能します。
- 例外を処理するための特定の言語構文を必要とせず、各言語が独自の構文を定義できます。
- 例外は、プロセス間、さらにはコンピューターの境界を越えてスローできます。
- プログラムの信頼性を高めるための例外処理コードをアプリケーションに追加できます。
例外には、リターン コードなどの他のエラー通知メソッドに優る利点があります。 例外がスローされ、それを処理しないと、ランタイムによってアプリケーションが終了されるため、エラーが見過ごされることはありません。 無効な値は、エラーのリターン コードの確認に失敗したコードの結果として、システムを経由した伝達を続行しません。
## <a name="common-exceptions"></a>一般的な例外
次の表は、一般的な例外とそれらの原因の例をいくつか示しています。
| 例外の種類 | 説明 | 例 |
| -------------- | ----------- | ------- |
| <xref:System.Exception> | すべての例外の基底クラスです。 | なし (この例外の派生クラスを使用)。 |
| <xref:System.IndexOutOfRangeException> | 配列のインデックスが誤っている場合にのみ、ランタイムによってスローされます。 | 次のように、配列に対して配列の有効範囲外のインデックスを付けた場合。 <br /> `arr[arr.Length+1]` |
| <xref:System.NullReferenceException> | null オブジェクトが参照された場合にのみ、ランタイムによってスローされます。 | `object o = null;` <br /> `o.ToString();` |
| <xref:System.InvalidOperationException> | 無効な状態の場合にメソッドによってスローされます。 | 基になるコレクションから項目を削除した後での、`Enumerator.MoveNext()` の呼び出しです。 |
| <xref:System.ArgumentException> | すべての引数の例外の基底クラスです。 | なし (この例外の派生クラスを使用)。 |
| <xref:System.ArgumentNullException> | null の引数を許可しないメソッドによってスローされます。 | `String s = null;` <br /> `"Calculate".IndexOf(s);`|
| <xref:System.ArgumentOutOfRangeException> | 引数が特定の範囲内にあることを検査するメソッドによってスローされます。 | `String s = "string";` <br /> `s.Substring(s.Length+1);` |
## <a name="see-also"></a>関連項目
- [Exception クラスとプロパティ](exception-class-and-properties.md)
- [方法: Try ブロックと Catch ブロックを使用して例外をキャッチする](how-to-use-the-try-catch-block-to-catch-exceptions.md)
- [方法: catch ブロックで特定の例外を使用する](how-to-use-specific-exceptions-in-a-catch-block.md)
- [方法: 例外を明示的にスローする](how-to-explicitly-throw-exceptions.md)
- [方法: ユーザー定義の例外を作成する](how-to-create-user-defined-exceptions.md)
- [ユーザー フィルター例外ハンドラーの使用](using-user-filtered-exception-handlers.md)
- [方法: finally ブロックを使用する](how-to-use-finally-blocks.md)
- [COM 相互運用の例外の処理](handling-com-interop-exceptions.md)
- [例外の推奨事項](best-practices-for-exceptions.md)
- [ランタイム時の例外についてすべての開発者が知っておくべきこと](https://github.com/dotnet/coreclr/blob/master/Documentation/botr/exceptions.md)
| 48.054795 | 262 | 0.785348 | yue_Hant | 0.739429 |
8a5baa3a127a1e5f89a31d2ccc83394b7e6d5846 | 1,808 | md | Markdown | _posts/2019-03-01-mais-dry-no-seu-yaml.md | tinogomes/tinogomes.github.com | ae77ff5d96979aaed8b9dfdb9564482890e034cb | [
"MIT"
] | null | null | null | _posts/2019-03-01-mais-dry-no-seu-yaml.md | tinogomes/tinogomes.github.com | ae77ff5d96979aaed8b9dfdb9564482890e034cb | [
"MIT"
] | null | null | null | _posts/2019-03-01-mais-dry-no-seu-yaml.md | tinogomes/tinogomes.github.com | ae77ff5d96979aaed8b9dfdb9564482890e034cb | [
"MIT"
] | null | null | null | ---
layout: post
title: Mais DRY no seu YAML
tags:
- dry
- yaml
status: publish
type: post
published: true
---
{% include JB/setup %}
Olha quem está vivo? Sim, sou eu. Sem mais enrolação.
Como já publicado no post do [DRY config/database.yml](/2008/03/23/dry-configdatabaseyml), é possível usar _aliases_ para evitar duplicidade de valores, mas como fazer isso para chaves simples? E porque eu tive essa necessidade?
Em uma _view_ onde eu preciso exibir uma mensagem de disponibilidade de expedição de um produto, tenho algo como:
<p style="text-align: center"><strong>app/views/products/show.html</strong></p>
{% highlight erb linenos %}
...
<dl>
<dt>Prazo para expição</dt>
<dd><%= I18n.t('.delivery_days', count: product.delivery_days) %></dd>
</dl>
...
{% endhighlight %}
<p style="text-align: center"><strong>config/locales/pt-BR.yml</strong> with repetition</p>
{% highlight yaml linenos %}
pt-BR:
products:
show:
delivery_days:
one: Pronta Entrega
other: Disponível em %{count} dias úteis
zero: Pronta Entrega
{% endhighlight %}
Suppose I also have to show the same message when the delivery availability is zero or one day, but I don't want to duplicate the message. What can we do?
Before the key we want to "copy", we create the alias `&alias_name`, and in place of the value we want to repeat we use `*alias_name`.
<p style="text-align: center"><strong>config/locales/pt-BR.yml</strong> without repetition</p>
{% highlight yaml linenos %}
pt-BR:
products:
show:
delivery_days:
&oneDeliveryDay one: Pronta Entrega
other: Disponível em %{count} dias úteis
zero: *oneDeliveryDay
{% endhighlight %}
In the example above, we identify the `:one` key with the `oneDeliveryDay` alias and apply it to the `:zero` key.
[previous record stats: avg line length 30.13, max line length 228, alphanumeric fraction 0.71, language id por_Latn (0.99)]

[record 8a5bcd6b: readme.md in mmcgrana/hammockdb @ f8dda047, 1,037 bytes, md (Markdown), license: MIT, stars: n/a, issues: n/a, forks: 2]

# HammockDB
A very incomplete implementation of the [CouchDB](http://couchdb.apache.org/) [HTTP API](http://wiki.apache.org/couchdb/Complete_HTTP_API_Reference) in Clojure, inspired by the more complete Ruby equivalent [Booth](http://github.com/jchris/booth).
## Why
Clojure's support for immutable data structures, functional programming, and sane concurrency aligns perfectly with CouchDB's design and enables a clean HammockDB implementation.
HammockDB also serves as a non-trivial example of a data-based web application composed with [Ring](http://github.com/mmcgrana/ring) and taking advantage of that library's synchronous and experimental asynchronous modes.
The name "HammockDB" comes from [Rich Hickey](http://p.hagelb.org/hammock.jpg)'s talk at the first [clojure-conj](http://clojure-conj.org), where he emphasized the importance of hard thinking on hard problems and suggested that hammocks might facilitate such thinking.
## Usage
$ lein deps
$ lein run -m hammockdb.server
$ curl http://localhost:5984/
| 57.611111 | 268 | 0.780135 | eng_Latn | 0.947026 |
8a5be163b041a97a5265a18a67988f9b577f1a01 | 2,579 | md | Markdown | doc/2/api/controllers/realtime/subscribe/index.md | Fr0zenSide/kuzzle | 1863b4e9994029571b4091235e6d423bd3a4d044 | [
"Apache-2.0"
] | 1 | 2021-03-16T11:03:26.000Z | 2021-03-16T11:03:26.000Z | doc/2/api/controllers/realtime/subscribe/index.md | vkey/kuzzle | 42d52fb0af8eb6582ba7fdca3a6b4f2c20808567 | [
"Apache-2.0"
] | 2 | 2021-09-02T19:55:06.000Z | 2022-01-22T18:22:42.000Z | doc/2/api/controllers/realtime/subscribe/index.md | IO1337/kuzzle | 551014854b2b620196e27a7e61a3e2f350382a45 | [
"Apache-2.0"
] | null | null | null | ---
code: true
type: page
title: subscribe
---
# subscribe
Subscribes by providing a set of filters: messages, document changes and, optionally, user events matching the provided filters will generate [real-time notifications](/core/2/api/essentials), sent to you in real-time by Kuzzle.
---
## Query Syntax
### HTTP
Due to the synchronous nature of the HTTP protocol, real-time notifications are not supported.
### Other protocols
```js
{
"index": "<index>",
"collection": "<collection>",
"controller": "realtime",
"action": "subscribe",
"body": {
// subscription filters
},
"volatile": {
// query volatile data
},
"scope": "<all|in|out|none>",
"users": "<all|in|out|none>"
}
```
---
## Arguments
- `collection`: watched collection
- `index`: watched index
### Optional:
- `scope`: accepted values: `all`, `in`, `out`, `none` (default: `all`). Subscribe to either new documents entering the scope of the subscription filters (`in`), to documents leaving it (`out`), or both (`all`). Alternatively, document notifications can be ignored entirely (`none`)
- `users`: accepted values: `all`, `in`, `out`, `none` (default: `none`). Receive real-time notifications about users subscribing to the same filters (`in`), about users leaving the subscription (`out`), or both (`all`). If set to `none`, no notifications are sent about users
- `volatile`: subscription information, used in [user join/leave notifications](/core/2/api/essentials/volatile-data)
---
## Body properties
Subscription filters, following the [Koncorde syntax](/core/2/guides/cookbooks/realtime-api)
An empty filter subscribes to any change occuring on the selected index-collection pair.
---
## Response
Returns an object detailing the new subscription properties:
- `channel`: unique channel identifier. A channel acts as a subscription configuration ID, allowing multiple subscriptions to occur with the same filters, but different notification options.
- `roomId`: unique subscription identifier.
Notifications include the `room` property, which indicates to what channel the notification is for. This is how notifications can be linked to subscriptions by front-end applications (our SDK perform these operations automatically).
### Example
```js
{
"status": 200,
"error": null,
"index": "<index>",
"collection": "<collection>",
"controller": "realtime",
"action": "subscribe",
"requestId": "<unique request identifier>",
"result": {
"roomId": "<unique Kuzzle room identifier>",
"channel": "<unique channel identifier>"
}
}
```
| 28.977528 | 282 | 0.708414 | eng_Latn | 0.971954 |
8a5bfa37bf6ca8c2991d7abf0218429a81bf857a | 7,092 | md | Markdown | framework/drawing/pdf-output.md | dandv/kendo-docs | 536d6abbe39b377d57de3ced112d79ad4cebe8dd | [
"MIT",
"Unlicense"
] | null | null | null | framework/drawing/pdf-output.md | dandv/kendo-docs | 536d6abbe39b377d57de3ced112d79ad4cebe8dd | [
"MIT",
"Unlicense"
] | null | null | null | framework/drawing/pdf-output.md | dandv/kendo-docs | 536d6abbe39b377d57de3ced112d79ad4cebe8dd | [
"MIT",
"Unlicense"
] | null | null | null | ---
title: PDF drawing backend
page_title: Export a drawing as a PDF file
position: 50
---
# PDF drawing backend
The Kendo UI Drawing API can export your drawing to a PDF file. However, because PDF-s can't be displayed by a browser inside an HTML element, you cannot create a `Surface` object for this kind of output; instead, you will use a few functions exported into `kendo.drawing.pdf` to generate the binary data. Example usage:
var drawing = kendo.drawing;
var geo = kendo.geometry;
// this will contain all our drawing
var group = new drawing.Group();
// draw a circle
var circleGeometry = new geo.Circle([ 100, 100 ], 50);
var circle = new drawing.Circle(circleGeometry).stroke("red", 1);
// and add it to the group
group.append(circle);
// add some text
var text = new drawing.Text("Hello World", new geo.Point(100, 200));
group.append(text);
// set PDF arguments (optional, see the "PDF options" section below)
group.options.set("pdf", {
paperSize: "A4",
margin: {
left : "20mm",
top : "40mm",
right : "20mm",
bottom : "40mm"
}
});
// you can offer the file for download now
drawing.pdf.saveAs(group, "filename.pdf", proxyUrl, callback);
The `proxyUrl` and `callback` arguments are optional. `proxyUrl` is necessary for the download to work with Internet Explorer 9 and Safari; it won't be used for other browsers. See [kendo.saveAs](/api/javascript/kendo.html#methods-saveAs) for more information about the `proxyUrl`. The `callback` will be invoked when the file has been successfully generated (generation could be asynchronous).
// or, you can get the PDF as Blob object in browsers that support it
// (all except IE < 10).
drawing.pdf.toBlob(group, function(blob){
// you can now upload it to a server.
// this form simulates an <input type="file" name="pdfFile" />
var form = new FormData();
form.append("pdfFile", blob);
var xhr = new XMLHttpRequest();
xhr.open("POST", "/posturl", true);
xhr.send(form);
});
// or, you can get it as a data URL
drawing.pdf.toDataURL(group, function(dataURL){ ... });
## PDF options
The following options are currently supported:
- `paperSize` — can be either a paper name (i.e. "A4"), an array of two numbers (paper width and height), or "auto". By default it's "auto" which means the paper size will be just enough to fit the drawing. If numbers are specified they are assumed to be in typographic points unit. A point is 1/72 of an inch. Strings of the form "297mm" can also be used. The supported units are: "mm", "cm", "in" and "pt".
- `margin` — paper margins. Must be an object containing `top`, `left`, `right` and `bottom`, numbers which specify the paper margins. Again, if numbers are passed they will be assumed to be in points; with strings you can specify units. When `paperSize` is "auto", the dimensions will be adjusted to include the margin.
- `landscape` — (boolean, default `false`). If `true` is specified the paper dimensions will be rotated if needed such that the width is the larger edge.
- `title`, `author`, `subject`, `keywords`, `creator` — optional strings to be included in the PDF information dictionary.
- `date` — optional `Date` object to specify the creation date of the document. Default is current date/time (`new Date()`).
## Using custom fonts
The drawing API allows you to specify fonts with the `font` option of `Text` elements:
var text = new drawing.Text("Hello World", new geo.Point(100, 100));
text.options.set("font", "30px Verdana");
In order for this to render correctly as PDF, our code must have access to the TTF files. Ideally they must be the same fonts that the browser uses to render on screen. However, we cannot access the fonts from client-side JavaScript on the machine where the browser runs, so they must be provided on the server, and the paths to them must be declared as follows:
kendo.pdf.defineFont({
"Verdana" : "/fonts/Verdana.ttf", // this is a URL
"Verdana|Bold" : "/fonts/Verdana_Bold.ttf",
"Verdana|Bold|Italic" : "/fonts/Verdana_Bold_Italic.ttf",
"Verdana|Italic" : "/fonts/Verdana_Italic.ttf"
});
This code must run before a PDF is requested; you could simply include it into a `<script>` tag in your page.
The object passed to `kendo.pdf.defineFont` must map font name/style combinations to URLs of the TrueType files. The "same domain policy" applies; you can't specify URLs to different hosts.
Fonts are loaded on-demand, so you can declare more fonts than might be needed without worrying that data will be needlessly downloaded or parsed. On the other hand, they will be cached so if you are building a "SPA" (Single-Page Application) the overhead will occur only once.
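
Putting the pieces together, here is a minimal end-to-end sketch that declares a custom font and then exports a drawing that uses it. The font URLs and the output file name are placeholders; everything else uses only the APIs shown on this page.

    // declare where the TTF files for the browser fonts live on your server
    kendo.pdf.defineFont({
        "Verdana"      : "/fonts/Verdana.ttf",
        "Verdana|Bold" : "/fonts/Verdana_Bold.ttf"
    });

    var drawing = kendo.drawing;
    var geo = kendo.geometry;

    var group = new drawing.Group();
    group.options.set("pdf", {
        paperSize: "A4",
        margin: { left: "15mm", top: "20mm", right: "15mm", bottom: "20mm" }
    });

    // this text is rendered with the Verdana TTF declared above
    var text = new drawing.Text("Hello World", new geo.Point(100, 100));
    text.options.set("font", "30px Verdana");
    group.append(text);

    drawing.pdf.saveAs(group, "custom-font.pdf");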
Currently only TTF fonts having a Unicode mapping are supported.
If you do not declare any fonts, our PDF generator will fall back to the following standard PDF fonts:
"serif" : "Times-Roman",
"serif|bold" : "Times-Bold",
"serif|italic" : "Times-Italic",
"serif|bold|italic" : "Times-BoldItalic",
"sans-serif" : "Helvetica",
"sans-serif|bold" : "Helvetica-Bold",
"sans-serif|italic" : "Helvetica-Oblique",
"sans-serif|bold|italic" : "Helvetica-BoldOblique",
"monospace" : "Courier",
"monospace|bold" : "Courier-Bold",
"monospace|italic" : "Courier-Oblique",
"monospace|bold|italic" : "Courier-BoldOblique"
The font names above (on the right) are reserved and cannot be used as URLs to TrueType fonts with `kendo.pdf.defineFont`.
Note that non-ASCII characters are unsupported with the standard PDF fonts.
### Unicode notes
Unicode is supported only if the fonts you provide contain glyphs for the referenced characters. Otherwise, a default glyph will be displayed (it depends on the font, but it's usually a blank rectangle). Currently we don't do font substitution, so if the text contains glyphs that are not available in the current font, but are perhaps available in another font that was declared, the default glyph will still be used.
## Compression
The PDF generator supports compression via the JavaScript [pako library](https://github.com/nodeca/pako). Just load pako with a `<script>` tag (window.pako should be available) and compression will be automatically enabled.
Compression can make a big difference in the output file size when you're using custom TTF fonts or images with alpha channel (i.e. PNGs with transparency).
## Supported browsers
Kendo PDF Generator has been tested in recent versions of Chrome, Firefox, Safari, Blink-based Opera, Internet Explorer 9 or later. We use [typed arrays](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays) where available to improve speed (all browsers except IE9).
Internet Explorer <= 8 is not supported.
| 52.147059 | 420 | 0.697829 | eng_Latn | 0.97573 |
8a5cc8b518e5086db67d29aef620673778100e8c | 701 | md | Markdown | README.md | ipa-lth/rviz_utils | c6a9d1c096a081a68dc83c1475ba6974dbe211dc | [
"MIT"
] | 2 | 2017-04-08T16:12:47.000Z | 2021-12-26T07:00:01.000Z | README.md | ipa-lth/rviz_utils | c6a9d1c096a081a68dc83c1475ba6974dbe211dc | [
"MIT"
] | null | null | null | README.md | ipa-lth/rviz_utils | c6a9d1c096a081a68dc83c1475ba6974dbe211dc | [
"MIT"
] | null | null | null | # rviz_utils
Useful tools and utils for visualisation based on ROS Visualisation Tools like Rviz, rqt, image_view...
- [interactive_pose_publisher](interactive_pose_publisher/README.md) Let you control a tf frame using an interactive marker in rviz
- [tf_trace_publisher](tf_trace_publisher/README.md) Provides a trace in form of a visualisation marker in rviz behind a define tf frame
- [publish_media](publish_media/README.md) Publishes images and video, for you to embed them into your visualisation
- [rviz_text_publisher](rviz_text_publisher/README.md) Publishes a text attached to a tf frame in rviz
- [rviz_overlays](rviz_overlays/README.md) Rviz Plugins to overlay data on top of rviz views
| 77.888889 | 137 | 0.813124 | eng_Latn | 0.931592 |
8a5d87ecd83c9f1d2deb1e66906d60af869f8815 | 3,034 | md | Markdown | src/in/2018-04/01/06.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/in/2018-04/01/06.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/in/2018-04/01/06.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Umat Pilihan Allah
date: 04/10/2018
---
Dalam memanggil Abraham untuk menjadi hamba-Nya, Allah memilih bagi diri-Nya suatu umat untuk mewakili Dia bagi dunia. Panggilan dan pemilihan ini merupakan tindakan kasih dan anugerah Allah. Panggilan Allah kepada Israel sangat penting bagi rencana-Nya untuk pemulihan semua umat manusia setelah kehancuran dan perpecahan yang disebabkan oleh kejatuhan. Sejarah suci adalah studi tentang pekerjaan Allah bagi pemulihan ini, dan komponen utama dari rencana tersebut adalah bangsa perjanjian Israel.
`Menurut Ulangan 7:6-11, mengapakah Tuhan menyebut Israel umat-Nya? Mengapakah Dia memilih keturunan Abraham sebagai umat-Nya?`
Kasih Tuhan bagi umat manusia merupakan penyebab utama pemilihan Israel sebagai umat-Nya. Allah mengadakan suatu perjanjian dengan Abraham dan keturunannya untuk menjaga pengetahuan tentang Allah melalui umat-Nya dan untuk mewujudkan penebusan umat manusia (Mzm 67:2). Namun, adalah suatu tindakan kasih tertinggi yang membuat Allah memiih Israel. Keturunan Abraham tidak memiliki apa pun untuk dibanggakan, mereka menuntut kasih Allah yang tidak layak mereka terima. "Bukan karena lebih banyak jumlahmu dari bangsa mana pun juga, maka hati TUHAN terpikat olehmu dan memilih kamu--bukankah kamu ini yang paling kecil dari segala bangsa?" (Ul. 7:7).
Ini adalah pembalikan nilai yang aneh yang Tuhan gunakan untuk memilih umat-Nya. Pada saat manusia melihat pada kekuatan, kebijaksanaan, dan kepercayaan diri untuk memilih pemimpin, Allah tidak memilih yang kuat dan hebat untuk melayani Dia, tetapi orang-orang yang merasakan atau mengakui kelemahan, kebodohan, dan ketiadaan mereka, sehingga tidak ada yang bisa memegahkan diri dihadapan-Nya (1 Kor. 1:26-31).
Namun, lihat keistimewaan yang mereka miliki: "Allah ingin menjadikan umat-Nya bangsa Israel suatu pujian dan kemuliaan. Setiap kesempatan rohani diberikan kepada mereka. Allah tidak menahan apa-apa bagi mereka yang dapat menyumbang pembentukan tabiat yang akan menjadikan mereka wakil-wakil-Nya sendiri."
"Penurutan mereka kepada undang-undang Allah akan menjadikan mereka keajaiban dari kemakmuran di hadapan bangsa-bangsa di dunia. Ia dapat memberikan mereka hikmat dan kepandaian dalam segala pekerjaan yang suulit yang akan terus menjadi guru mereka dan akan memuliakan dan mengangkat mereka melalui penurutan kepada undang-undang-Nya. Jika menurut, mereka akan dilindungi dari penyakit-penyakit yang menimbulkan bencana kepada bangsa-bangsa lain dan kuasa-Nya, harus ditunjukkan dalam segala kemajuan. Mereka harus menjadi sebuah kerajaan imam dan raja-raja. Allah melengkapi mereka dengan setiap perlengkapan supaya menjadi bangsa yang termulia dalam dunia ini."--Ellen G. White, Membina Kehidupan Abadi, hlm. 221.
`Kesamaan apakah yang dapat kita temukan antara apa yang Tuhan lakukan terhadap Israel kuno, dan panggilan-Nya untuk mereka, dan apa yang telah Dia lakukan untuk kita, dan panggilan-Nya bagi kita sebagai umat Advent? Bawalah jawaban Anda ke kelas pada hari Sabat.` | 168.555556 | 715 | 0.824654 | ind_Latn | 0.831438 |
8a5dda82609a4e407466f13c354e005a1501a84c | 212 | md | Markdown | tests/tutorial-test/cases/singleStep.md | pnigos/pxt | 7b827d687d55d47c69b4ceddda38474ad5ca7084 | [
"MIT"
] | 3 | 2021-03-14T08:26:26.000Z | 2021-11-11T05:55:34.000Z | tests/tutorial-test/cases/singleStep.md | pnigos/pxt | 7b827d687d55d47c69b4ceddda38474ad5ca7084 | [
"MIT"
] | null | null | null | tests/tutorial-test/cases/singleStep.md | pnigos/pxt | 7b827d687d55d47c69b4ceddda38474ad5ca7084 | [
"MIT"
] | 1 | 2020-11-13T13:09:16.000Z | 2020-11-13T13:09:16.000Z | # Getting started
### @diffs false
## Introduction @unplugged
Welcome! Place the ``||basic:show string||`` block in the ``||basic:on start||`` slot to scroll your name.
```blocks
basic.showString("Micro!")
``` | 21.2 | 106 | 0.674528 | eng_Latn | 0.871883 |
8a5e248ccc36eeb5f69b90fa94256257d8370699 | 8,668 | md | Markdown | articles/media-services/video-indexer/customize-language-model-with-api.md | AFoolPig/azure-docs.zh-cn | 0c0a914fd4b7b8c73cd183514bb9ddac1ffdcd64 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/media-services/video-indexer/customize-language-model-with-api.md | AFoolPig/azure-docs.zh-cn | 0c0a914fd4b7b8c73cd183514bb9ddac1ffdcd64 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/media-services/video-indexer/customize-language-model-with-api.md | AFoolPig/azure-docs.zh-cn | 0c0a914fd4b7b8c73cd183514bb9ddac1ffdcd64 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 使用视频索引器 Api 自定义语言模型-Azure
titlesuffix: Azure Media Services
description: 本文介绍如何使用视频索引器 API 自定义语言模型。
services: media-services
author: anikaz
manager: johndeu
ms.service: media-services
ms.subservice: video-indexer
ms.topic: article
ms.date: 02/04/2020
ms.author: anzaman
ms.openlocfilehash: 01ea4d9ef943183f09baa86b729ec69344d4309e
ms.sourcegitcommit: 57669c5ae1abdb6bac3b1e816ea822e3dbf5b3e1
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 02/06/2020
ms.locfileid: "77049028"
---
# <a name="customize-a-language-model-with-the-video-indexer-apis"></a>使用视频索引器 API 自定义语言模型
视频索引器可以让你通过上传自适应文本(即其词汇需要引擎来适应的领域中的文本)创建自定义语言模型来自定义语音识别。 训练模型后,可以识别自适应文本中显示的新单词。
有关自定义语言模型的详细概述和最佳做法,请参阅[使用视频索引器自定义语言模型](customize-language-model-overview.md)。
可按本主题所述,使用视频索引器 API 在帐户中创建和编辑自定义语言模型。 也可以按[使用视频索引器网站自定义语言模型](customize-language-model-with-api.md)中所述使用网站。
## <a name="create-a-language-model"></a>创建语言模型
[创建语言模型](https://api-portal.videoindexer.ai/docs/services/Operations/operations/Create-Language-Model?)API 将在指定的帐户中创建一个新的自定义语言模型。 可以在此调用中上传语言模型的文件。 或者,可以在此处创建语言模型,稍后再通过更新语言模型上传模型的文件。
> [!NOTE]
> 仍必须使用模型的已启用文件来训练该模型,以学习其文件的内容。 下一部分提供了有关训练语言的指导。
若要上传要添加到语言模型的文件,必须使用表单数据在正文中上传文件,此外,必须为上述所需参数提供值。 可通过两种方式实现此目的:
1. 密钥是文件名,值是 txt 文件
2. 密钥是文件名,值是 txt 文件的 URL
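
For illustration only, a minimal sketch of such a form-data upload is shown below. The endpoint URL pattern, query parameters, and placeholders are assumptions based on the linked API portal rather than something specified in this article; check the portal for the exact signature.

```js
// Hypothetical sketch of creating a language model with one adaptation-text file.
// The endpoint URL, query parameters and placeholders below are assumptions taken
// from the linked API portal, not from this article; verify them there.
async function createLanguageModel(location, accountId, accessToken) {
  const form = new FormData();
  // Key = file name, value = a .txt file (a URL string pointing to a .txt file also works).
  form.append("hellofile", new Blob(["hello\nworld"], { type: "text/plain" }), "hellofile.txt");

  const url = `https://api.videoindexer.ai/${location}/Accounts/${accountId}` +
              `/Customization/Language?modelName=TestModel&language=En-US` +
              `&accessToken=${accessToken}`;

  const response = await fetch(url, { method: "POST", body: form });
  return response.json(); // metadata shaped like the sample response below
}
```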
### <a name="response"></a>响应
响应提供有关新建的语言模型的元数据,以及有关每个模型的文件的元数据(遵循示例 JSON 输出的格式)。
```json
{
"id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
"name": "TestModel",
"language": "En-US",
"state": "None",
"languageModelId": "00000000-0000-0000-0000-000000000000",
"files": [
{
"id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
"name": "hellofile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-28T11:55:34.6733333"
},
{
"id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
"name": "worldfile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-28T11:55:34.86"
}
]
}
```
## <a name="train-a-language-model"></a>训练语言模型
[训练语言模型](https://api-portal.videoindexer.ai/docs/services/operations/operations/Train-Language-Model?&pattern=train)API 使用上传到并在语言模型中启用的文件中的内容训练指定帐户中的自定义语言模型。
> [!NOTE]
> 必须先创建语言模型并上传其文件。 可以在创建语言模型时或者通过更新语言模型来上传文件。
### <a name="response"></a>响应
响应提供有关新训练的语言模型的元数据,以及有关每个模型的文件的元数据(遵循示例 JSON 输出的格式)。
```json
{
"id": "41464adf-e432-42b1-8e09-f52905d7e29d",
"name": "TestModel",
"language": "En-US",
"state": "Waiting",
"languageModelId": "531e5745-681d-4e1d-b124-12e5ab57a891",
"files": [
{
"id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
"name": "RenamedFile",
"enable": false,
"creator": "John Doe",
"creationTime": "2018-04-27T20:10:10.5233333"
},
{
"id": "9ac35b4b-1381-49c4-9fe4-8234bfdd0f50",
"name": "hellofile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-27T20:10:10.68"
}
]
}
```
返回的**id**是用于区分语言模型的唯一 id,而**languageModelId**用于[上传视频来索引](https://api-portal.videoindexer.ai/docs/services/operations/operations/Upload-video?)和重新索引[视频](https://api-portal.videoindexer.ai/docs/services/operations/operations/Re-index-video?)api (在视频索引器上传/重新索引 api 中也称为**linguisticModelId** )。
## <a name="delete-a-language-model"></a>删除语言模型
[删除语言模型](https://api-portal.videoindexer.ai/docs/services/operations/operations/Delete-Language-Model?&pattern=delete)API 将从指定的帐户中删除自定义语言模型。 使用已删除语言模型的任何视频会保留相同的索引,直到为该视频重新编制索引为止。 如果重新为视频编制索引,可为视频分配新的语言模型。 否则,视频索引器会使用其默认模型重新为视频编制索引。
### <a name="response"></a>响应
成功删除语言模型后不会返回内容。
## <a name="update-a-language-model"></a>更新语言模型
[更新语言模型](https://api-portal.videoindexer.ai/docs/services/operations/operations/Update-Language-Model?&pattern=update)API 会更新指定帐户中的自定义语言用户模型。
> [!NOTE]
> 必须事先创建一个语言模型。 可以使用此调用来启用或禁用模型下的所有文件、更新语言模型的名称,以及上传要添加到语言模型的文件。
若要上传要添加到语言模型的文件,必须使用表单数据在正文中上传文件,此外,必须为上述所需参数提供值。 可通过两种方式实现此目的:
1. 密钥是文件名,值是 txt 文件
2. 密钥是文件名,值是 txt 文件的 URL
### <a name="response"></a>响应
响应提供有关新训练的语言模型的元数据,以及有关每个模型的文件的元数据(遵循示例 JSON 输出的格式)。
```json
{
"id": "41464adf-e432-42b1-8e09-f52905d7e29d",
"name": "TestModel",
"language": "En-US",
"state": "Waiting",
"languageModelId": "531e5745-681d-4e1d-b124-12e5ab57a891",
"files": [
{
"id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
"name": "RenamedFile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-27T20:10:10.5233333"
},
{
"id": "9ac35b4b-1381-49c4-9fe4-8234bfdd0f50",
"name": "hellofile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-27T20:10:10.68"
}
]
}
```
使用响应中返回的文件的**id**下载文件的内容。
## <a name="update-a-file-from-a-language-model"></a>更新语言模型中的文件
使用[更新文件](https://api-portal.videoindexer.ai/docs/services/operations/operations/Update-Language-Model-file?&pattern=update)可以更新指定帐户的自定义语言模型中的文件的名称和**启用**状态。
### <a name="response"></a>响应
响应提供有关已更新的文件的元数据(遵循以下示例 JSON 输出的格式)。
```json
{
"id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
"name": "RenamedFile",
"enable": false,
"creator": "John Doe",
"creationTime": "2018-04-27T20:10:10.5233333"
}
```
使用在响应中返回的文件的**id**下载文件的内容。
## <a name="get-a-specific-language-model"></a>获取特定的语言模型
[Get](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Language-Model?&pattern=get) API 返回指定帐户中指定语言模型的信息,如语言和语言模型中的文件。
### <a name="response"></a>响应
响应提供有关指定的语言模型的元数据,以及有关每个模型的文件的元数据(遵循以下示例 JSON 输出的格式)。
```json
{
"id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
"name": "TestModel",
"language": "En-US",
"state": "None",
"languageModelId": "00000000-0000-0000-0000-000000000000",
"files": [
{
"id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
"name": "hellofile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-28T11:55:34.6733333"
},
{
"id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
"name": "worldfile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-28T11:55:34.86"
}
]
}
```
使用在响应中返回的文件的**id**下载文件的内容。
## <a name="get-all-the-language-models"></a>获取所有语言模型
"[获取所有](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Language-Models?&pattern=get)API" 返回列表中指定帐户的所有自定义语言模型。
### <a name="response"></a>响应
响应提供一个列表,其中包含帐户中的所有语言模型,以及这些模型的每个元数据和文件(遵循以下示例 JSON 输出的格式)。
```json
[
{
"id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
"name": "TestModel",
"language": "En-US",
"state": "None",
"languageModelId": "00000000-0000-0000-0000-000000000000",
"files": [
{
"id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
"name": "hellofile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-28T11:55:34.6733333"
},
{
"id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
"name": "worldfile",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-28T11:55:34.86"
}
]
},
{
"id": "dfae5745-6f1d-4edd-b224-42e1ab57a892",
"name": "AnotherTestModel",
"language": "En-US",
"state": "None",
"languageModelId": "00000000-0000-0000-0000-000000000001",
"files": []
}
]
```
## <a name="delete-a-file-from-a-language-model"></a>从语言模型中删除文件
[删除](https://api-portal.videoindexer.ai/docs/services/operations/operations/Delete-Language-Model-File?&pattern=delete)API 将从指定帐户中的指定语言模型删除指定的文件。
### <a name="response"></a>响应
成功从语言模型中删除文件后不会返回内容。
## <a name="get-metadata-on-a-file-from-a-language-model"></a>获取有关语言模型中的文件的元数据
文件 API 的[get 元数据](https://api-portal.videoindexer.ai/docs/services/operations/operations/Get-Language-Model-File-Data?&pattern=get%20language%20model)从你的帐户中所选的语言模型返回指定文件上的内容和元数据。
### <a name="response"></a>响应
响应提供文件的 JSON 格式内容和元数据,如下所示:
```json
{
"content": "hello\r\nworld",
"id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
"name": "Hello",
"enable": true,
"creator": "John Doe",
"creationTime": "2018-04-27T20:10:10.5233333"
}
```
> [!NOTE]
> 此示例文件的内容是分两行显示的单词“hello”和“world”。
## <a name="download-a-file-from-a-language-model"></a>从语言模型下载文件
"[下载文件](https://api-portal.videoindexer.ai/docs/services/operations/operations/Download-Language-Model-File-Content?)API" 下载一个文本文件,其中包含指定帐户中指定语言模型的指定文件的内容。 此文本文件应与最初上传的文本文件的内容相匹配。
### <a name="response"></a>响应
响应是下载的文本文件,其中包含该文件的 JSON 格式内容。
## <a name="next-steps"></a>后续步骤
[使用网站自定义语言模型](customize-language-model-with-website.md)
[previous record stats: avg line length 28.61, max line length 290, alphanumeric fraction 0.67, language id yue_Hant (0.64)]

[record 8a5e24dc: topic_folders/communications/tools/slack-and-email.md in moonhacker/handbook @ 598ac0f2, 569 bytes, md (Markdown), license: CC-BY-4.0, stars: n/a, issues: n/a, forks: 1]

### Slack and Mailing Lists
---
PKPM中自定义截面、焊接截面、标准型钢截面、薄壁型钢截面均无法导入。
同时,SAUSAGE中不支持变截面,因而在PKPM中定义的变截面导入后会按照变截面中较小一端截面取用。
---
| 13.1 | 51 | 0.763359 | yue_Hant | 0.875115 |
8a5e99e34d6a038b1d41faf7b2e2d87e058babbe | 569 | md | Markdown | topic_folders/communications/tools/slack-and-email.md | moonhacker/handbook | 598ac0f20eb506601219930a0fbd79092ad1c9ea | [
"CC-BY-4.0"
] | null | null | null | topic_folders/communications/tools/slack-and-email.md | moonhacker/handbook | 598ac0f20eb506601219930a0fbd79092ad1c9ea | [
"CC-BY-4.0"
] | null | null | null | topic_folders/communications/tools/slack-and-email.md | moonhacker/handbook | 598ac0f20eb506601219930a0fbd79092ad1c9ea | [
"CC-BY-4.0"
] | 1 | 2019-05-11T16:42:39.000Z | 2019-05-11T16:42:39.000Z | ### Slack and Mailing Lists
There are many ways in which you can join our conversations:
- The [Discuss email list](http://carpentries.topicbox.com/groups/discuss), which community members are welcome to join and post to.
- Our [other email lists](https://carpentries.org/community/#mailing-lists) are a combination of regional and topic-specific lists.
- Our [Slack channel](https://swcarpentry.slack.com/), which community members are welcome to [join](https://swc-slack-invite.herokuapp.com/). Click on "Channels" in the left panel to view all existing channels.
| 71.125 | 212 | 0.769772 | eng_Latn | 0.981148 |
8a603281a0003f4b7f347863130c411a8cf1f462 | 113 | md | Markdown | README.md | williambout/clear-sky | b247c883e26402489cf84c4c9cefdcfcee936580 | [
"MIT"
] | 1 | 2020-01-11T17:51:40.000Z | 2020-01-11T17:51:40.000Z | README.md | williambout/clear-sky | b247c883e26402489cf84c4c9cefdcfcee936580 | [
"MIT"
] | null | null | null | README.md | williambout/clear-sky | b247c883e26402489cf84c4c9cefdcfcee936580 | [
"MIT"
] | null | null | null | # Clear Sky

## License
MIT © [William Bout](http://williambout.me)
| 14.125 | 43 | 0.699115 | kor_Hang | 0.228274 |
8a60faa4fcf51c2f38e86ee80046fba98c8d7c94 | 2,047 | md | Markdown | windows-driver-docs-pr/stream/bda-filter-category-guids.md | k-takai/windows-driver-docs.ja-jp | f28c3b8e411a2502e6378eaeef88cbae054cd745 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/stream/bda-filter-category-guids.md | k-takai/windows-driver-docs.ja-jp | f28c3b8e411a2502e6378eaeef88cbae054cd745 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/stream/bda-filter-category-guids.md | k-takai/windows-driver-docs.ja-jp | f28c3b8e411a2502e6378eaeef88cbae054cd745 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: BDA フィルター カテゴリの GUID
description: BDA フィルター カテゴリの GUID
ms.assetid: fbd4bf91-8309-423a-97ea-7e4f90cd3b68
ms.date: 11/28/2017
ms.localizationpriority: medium
ms.openlocfilehash: c7f9d1e033ac03dc8b6e124a5568efbe18ed20a5
ms.sourcegitcommit: fb7d95c7a5d47860918cd3602efdd33b69dcf2da
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 06/25/2019
ms.locfileid: "67386689"
---
# <a name="bda-filter-category-guids"></a>BDA フィルター カテゴリの GUID
## <span id="ddk_bda_filter_category_guids_ks"></span><span id="DDK_BDA_FILTER_CATEGORY_GUIDS_KS"></span>
BDA ミニドライバーでは、BDA フィルター カテゴリ Guid を使用して、作成する BDA フィルターの種類を指定します。 BDA ミニドライバーこれら Guid の割り当て先の配列、**カテゴリ**のメンバー、 [ **KSFILTER\_記述子**](https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ks/ns-ks-_ksfilter_descriptor)ポイントを構造体します。 *Bdamedia.h*ヘッダー ファイルには、これらの Guid が定義されています。
次のフィルターのカテゴリの Guid を BDA 使用できます。
<span id="KSCATEGORY_BDA_RECEIVER_COMPONENT"></span><span id="kscategory_bda_receiver_component"></span>KSCATEGORY\_BDA\_受信者\_コンポーネント
BDA ミニドライバーは、BDA 受信者フィルターを作成するを指定するには、この GUID を割り当てます。
<span id="KSCATEGORY_BDA_NETWORK_TUNER"></span><span id="kscategory_bda_network_tuner"></span>KSCATEGORY\_BDA\_ネットワーク\_チューナー
BDA ミニドライバーは、BDA ネットワーク チューナー フィルターを作成するを指定するには、この GUID を割り当てます。
<span id="KSCATEGORY_BDA_NETWORK_EPG"></span><span id="kscategory_bda_network_epg"></span>KSCATEGORY\_BDA\_ネットワーク\_EPG
BDA ミニドライバーは、BDA 電子番組ガイド (EPG) フィルターを作成するを指定するには、この GUID を割り当てます。
<span id="KSCATEGORY_BDA_IP_SINK"></span><span id="kscategory_bda_ip_sink"></span>KSCATEGORY\_BDA\_IP\_シンク
BDA ミニドライバーは、シンク BDA IP フィルターを作成するを指定するには、この GUID を割り当てます。
<span id="KSCATEGORY_BDA_NETWORK_PROVIDER"></span><span id="kscategory_bda_network_provider"></span>KSCATEGORY\_BDA\_ネットワーク\_プロバイダー
BDA ミニドライバーは、BDA ネットワーク プロバイダーのフィルターを作成するを指定するには、この GUID を割り当てます。
<span id="KSCATEGORY_BDA_TRANSPORT_INFORMATION"></span><span id="kscategory_bda_transport_information"></span>KSCATEGORY\_BDA\_トランスポート\_情報
BDA ミニドライバーは、フィルターを作成する BDA トランスポート情報 (TIF) を指定するには、この GUID を割り当てます。
| 40.94 | 282 | 0.806058 | yue_Hant | 0.849317 |
8a613b5b434d51fb5ec2e0d536bbda348fe9e45a | 1,376 | markdown | Markdown | _posts/2019-02-25-Setting-up-microk8s-environment.markdown | spadeq/spadeq.github.io | c4f7d7d4974e341ac0b8566c954bdce42691c259 | [
"Apache-2.0"
] | null | null | null | _posts/2019-02-25-Setting-up-microk8s-environment.markdown | spadeq/spadeq.github.io | c4f7d7d4974e341ac0b8566c954bdce42691c259 | [
"Apache-2.0"
] | 3 | 2020-02-25T20:43:37.000Z | 2022-02-26T04:45:30.000Z | _posts/2019-02-25-Setting-up-microk8s-environment.markdown | spadeq/spadeq.github.io | c4f7d7d4974e341ac0b8566c954bdce42691c259 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: 设置 microk8s 环境
date: 2019-02-25 14:56:00
categories:
- Container
tags:
- Kubernetes
- microk8s
- docker
- Container
---
真实的生产 Kubernetes 集群需要多节点,而且部署较为复杂。在学习和开发测试环节中,我们更加倾向于部署一个单机版 Kubernetes。这就是本文使用的 microk8s。
## 基本软件包安装
推荐使用 Ubuntu,原生支持 snap。安装命令:
```shell
snap info microk8s
sudo snap install microk8s --classic
microk8s.status
microk8s.enable dns dashboard
sudo snap alias microk8s.kubectl kubectl
sudo snap alias microk8s.docker docker
```
注意不需要另行安装 docker,microk8s 已经自带包含了。`snap info` 命令可以查看当前 microk8s 所对应的版本,默认安装最新的 stable 版。最后两条命令是将 microk8s 开头的命令映射到通常使用的命令,简化操作。
## 镜像源优化
由于众所周知的原因,国内如果要好好的玩容器和 k8s,是必须对网络进行优化的。。
### Docker 镜像
修改 `~/snap/microk8s/current/etc/docker/daemon.json` 文件,加入以下内容。由于 snap 的特殊文件系统机制,不可以直接修改 /snap 目录下的文件,也不使用系统自身的 /etc/docker 目录作为配置。
```json
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
```
然后重启 microk8s:
```shell
sudo snap restart microk8s
```
可以随便 pull 一个镜像,测试是不是切换到国内源。
### k8s 容器
如果这时候你直接进行 `kubectl create` 等操作,会发现容器创建不了,始终处于 ContainerCreating 状态。用 `kubectl describe pod <podname>` 查看报错信息可以发现,是因为无法从 k8s.gcr.io 上拉取镜像。解决方法有多种,我个人是采用 docker 中央源 + 改 tag 的办法。
首先,将所需要的容器从中央源拉取下来(注意将版本号和镜像名替换为 describe 报错中对应的内容):
```shell
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
```
当然全套 kubernetes 需要的镜像有很多,需要一个一个手工操作。(蛋疼)
| 21.169231 | 176 | 0.773256 | yue_Hant | 0.567252 |
8a613cdff1e230a170351d25b7721371e0574c25 | 576 | md | Markdown | README.md | albanow/etherscan-sales-bot | 092108030b8c1c95d6b6a544522343c3bba97444 | [
"MIT"
] | 4 | 2021-12-23T05:49:38.000Z | 2022-02-12T03:53:45.000Z | README.md | albanow/etherscan-sales-bot | 092108030b8c1c95d6b6a544522343c3bba97444 | [
"MIT"
] | null | null | null | README.md | albanow/etherscan-sales-bot | 092108030b8c1c95d6b6a544522343c3bba97444 | [
"MIT"
] | 2 | 2022-01-10T16:47:45.000Z | 2022-02-07T23:11:27.000Z | # Etherscan sales bot for Python!
A bot :robot: to post NFT/Token sales from etherscan transactions to Twitter.
## Installation
Clone the repo
```
git clone https://github.com/albanow/etherscan_sales_bot.git
```
Move to a virtual environment
```
pipenv shell
```
Install dependencies
```
pip install -r requirements.txt
```
Run the bot
```
python bot.py
```
## Configuration
### Donations
ETH: 0x8143978e687066F635515BD28E0d9D070FAcEb4B
Feel free to use the bot and if you need any help with your bot you can contact me
Twitter: [albaknow](https://twitter.com/albaknow)
| 18.580645 | 82 | 0.751736 | eng_Latn | 0.734265 |
8a61c211cd0ef3e6c1c68560b6ef3c6d2acc0ef5 | 4,492 | md | Markdown | README.md | villelahdenvuo/OpenAPI-Specification | d599df756d5f3d9c7af403ca12937c0f4cf3a9ab | [
"Apache-2.0"
] | 1 | 2021-12-07T14:09:15.000Z | 2021-12-07T14:09:15.000Z | README.md | a0s/OpenAPI-Specification | 49f4c8b3c01fc5180b3cd46afe4f1a7ff049eefe | [
"Apache-2.0"
] | null | null | null | README.md | a0s/OpenAPI-Specification | 49f4c8b3c01fc5180b3cd46afe4f1a7ff049eefe | [
"Apache-2.0"
] | null | null | null | # The OpenAPI Specification (fka The Swagger Specification)
[](https://travis-ci.org/OAI/OpenAPI-Specification)

**Looking for the next version of the OpenAPI Specification? [See here](https://github.com/OAI/OpenAPI-Specification/tree/OpenAPI.next).**
The goal of The OpenAPI Specification is to define a standard, language-agnostic interface to REST APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined via OpenAPI, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. Similar to what interfaces have done for lower-level programming, OpenAPI removes the guesswork in calling the service.
Use cases for machine-readable API interfaces include interactive documentation, code generation for documentation, client, and server, as well as automated test cases. OpenAPI-enabled APIs expose JSON files that correctly adhere to the OpenAPI Specification, documented in this repository. These files can either be produced and served statically, or be generated dynamically from your application.
Without going into a long history of interfaces to Web Services, this is not the first attempt to do so. We can learn from CORBA, WSDL and WADL. These specifications had good intentions but were limited by proprietary vendor-specific implementations, being bound to a specific programming language, and goals which were too open-ended. In the end, they failed to gain traction.
OpenAPI does not require you to rewrite your existing API. It does not require binding any software to a service--the service being described may not even be yours. It does, however, require the capabilities of the service be described in the structure of the OpenAPI Specification. Not all services can be described by OpenAPI--this specification is not intended to cover every possible use-case of a REST-ful API. OpenAPI does not define a specific development process such as design-first or code-first. It does facilitate either technique by establishing clear interactions with a REST API.
This GitHub project is the starting point for OpenAPI.
Here you will find the information you need about the OpenAPI Specification, a simple static sample of what it looks like,
and some general information regarding the project.
## Current Version - 2.0
The current version of the OpenAPI specification is 2.0 - and you can find it [here](versions/2.0.md).
### [OpenAPI 2.0 Specification](versions/2.0.md)
This repository contains the existing Swagger 1.2 specification as well as proposals for the 2.0 version.
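To give a sense of the format, here is a minimal, illustrative definition that follows the 2.0 structure (the API title and path below are invented for this example and are not part of the specification itself):
```json
{
  "swagger": "2.0",
  "info": {
    "title": "Example API",
    "version": "1.0.0"
  },
  "paths": {
    "/pets": {
      "get": {
        "summary": "List pets",
        "responses": {
          "200": {
            "description": "A successful response."
          }
        }
      }
    }
  }
}
```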
## Structure
Each section should contain v1.2 and v2.0 folders to avoid confusion between the versions.
Please keep in mind that the other projects under OpenAPI use an independent version system.
As such, don't confuse the version of the OpenAPI Specification they support and the version of that given library.
## The Wiki
Check out the [wiki](https://github.com/OAI/OpenAPI-Specification/wiki) for additional and relevant information about the project.
This includes:
- Static sample tutorial.
- List of known deployments.
- Revision history.
## See it in Action
If you just want to see it work, check out the [pet store sample](http://petstore.swagger.io/).
## Tools and Libraries
Looking to see how you can create your own OpenAPI definition, present it or otherwise use it? Check out our [list of tools](http://swagger.io/open-source-integrations/) over at [http://swagger.io](http://swagger.io/open-source-integrations/).
(Yes, there used to be a really long list here, we just moved it to the main website)
## License
Copyright 2016 The Linux Foundation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 65.101449 | 599 | 0.794078 | eng_Latn | 0.995886 |
8a62203da29ce7428c95e7b85db1779c4ce535fc | 303 | md | Markdown | backend/README.md | samuelfagundez/chat-app | c53311c9e0cfa18392e14396efc83732bfd785e1 | [
"MIT"
] | null | null | null | backend/README.md | samuelfagundez/chat-app | c53311c9e0cfa18392e14396efc83732bfd785e1 | [
"MIT"
] | null | null | null | backend/README.md | samuelfagundez/chat-app | c53311c9e0cfa18392e14396efc83732bfd785e1 | [
"MIT"
] | null | null | null | # Socket Server
This backend contains everything needed to set up an Express + Socket.io server.
Any additional socket connections can be added in the ```models/sockets.js``` file, and any additional Express middleware can be added in the ```models/server.js``` file. | 60.6 | 193 | 0.785479 | spa_Latn | 0.989882
8a62209ba3a9019c6a7dbd6d8ca952660e491515 | 9,158 | md | Markdown | articles/data-factory/data-factory-service-identity.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/data-factory-service-identity.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/data-factory-service-identity.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Data Factory service identity | Microsoft Docs
description: Learn about the data factory service identity in Azure Data Factory.
services: data-factory
author: linda33wj
manager: craigg
editor: ''
ms.service: data-factory
ms.workload: data-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.date: 08/17/2018
ms.author: jingwang
ms.openlocfilehash: ffe7337282d06dd9a7e22d6750ac98b3a56964bd
ms.sourcegitcommit: 974c478174f14f8e4361a1af6656e9362a30f515
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/20/2018
ms.locfileid: "42143190"
---
# <a name="azure-data-factory-service-identity"></a>Identidad de servicio de Azure Data Factory
Este artículo le ayudará a comprender qué es la identidad de servicio de Data Factory y cómo funciona.
## <a name="overview"></a>Información general
Al crear una factoría de datos, se puede crear también una identidad de servicio. La identidad de servicio es una aplicación administrada registrada en Azure Active Directory y representa esta factoría de datos específica.
La identidad de servicio de Data Factory beneficia a las características siguientes:
- [Almacenamiento de credenciales en Azure Key Vault](store-credentials-in-key-vault.md), en cuyo caso se utiliza la identidad de servicio de Data Factory para la autenticación de Azure Key Vault.
- Conectores incluidos [Azure Blob Storage](connector-azure-blob-storage.md), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md), [Azure SQL Database](connector-azure-sql-database.md), y [Azure SQL Data Warehouse](connector-azure-sql-data-warehouse.md).
## <a name="generate-service-identity"></a>Generación de la identidad de servicio
La identidad de servicio de Data Factory se genera de la manera siguiente:
- Cuando crea una factoría de datos mediante **Azure Portal o PowerShell**, la identidad de servicio siempre se creará automáticamente.
- Cuando crea una factoría de datos mediante **SDK**, la identidad de servicio se creará solo si especifica "Identity = new FactoryIdentity()" en el objeto de la factoría para la creación. Vea el ejemplo que aparece en el [Inicio rápido de .NET: Crear una factoría de datos](quickstart-create-data-factory-dot-net.md#create-a-data-factory).
- Cuando crea una factoría de datos mediante la **API de REST**, la identidad de servicio solo se creará si especifica la sección "identity" en el cuerpo de la solicitud. Vea el ejemplo que aparece en el [Inicio rápido de REST: Crear una factoría de datos](quickstart-create-data-factory-rest-api.md#create-a-data-factory).
Si observa que la factoría de datos no tiene una identidad de servicio asociada después de seguir la instrucción de [recuperación de la identidad de servicio](#retrieve-service-identity), puede generar explícitamente una si actualiza la factoría de datos con el iniciador de identidades mediante programación:
- [Generar una identidad de servicio con PowerShell](#generate-service-identity-using-powershell)
- [Generar una identidad de servicio con la API de REST](#generate-service-identity-using-rest-api)
- [Generar una identidad de servicio con el SDK](#generate-service-identity-using-sdk)
>[!NOTE]
>- The service identity cannot be modified. Updating a data factory that already has a service identity won't have any impact; the service identity is kept unchanged.
>- If you update a data factory that already has a service identity, without specifying the "identity" parameter in the factory object or without specifying the "identity" section in the REST request body, you will get an error.
>- When you delete a data factory, the associated service identity is deleted as well.
### <a name="generate-service-identity-using-powershell"></a>Generate service identity using PowerShell
Call the **Set-AzureRmDataFactoryV2** command again, and you will see the "Identity" fields being newly generated:
```powershell
PS C:\WINDOWS\system32> Set-AzureRmDataFactoryV2 -ResourceGroupName <resourceGroupName> -Name <dataFactoryName> -Location <region>
DataFactoryName : ADFV2DemoFactory
DataFactoryId : /subscriptions/<subsID>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/ADFV2DemoFactory
ResourceGroupName : <resourceGroupName>
Location : East US
Tags : {}
Identity : Microsoft.Azure.Management.DataFactory.Models.FactoryIdentity
ProvisioningState : Succeeded
```
### <a name="generate-service-identity-using-rest-api"></a>Generar una identidad de servicio con la API de REST
Llame a la siguiente API con la sección "identity" en el cuerpo de la solicitud:
```
PATCH https://management.azure.com/subscriptions/<subsID>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<data factory name>?api-version=2017-09-01-preview
```
**Request body**: add "identity": { "type": "SystemAssigned" }.
```json
{
"name": "<dataFactoryName>",
"location": "<region>",
"properties": {},
"identity": {
"type": "SystemAssigned"
}
}
```
**Response**: the service identity is created automatically, and the "identity" section is populated accordingly.
```json
{
"name": "ADFV2DemoFactory",
"tags": {},
"properties": {
"provisioningState": "Succeeded",
"loggingStorageAccountKey": "**********",
"createTime": "2017-09-26T04:10:01.1135678Z",
"version": "2017-09-01-preview"
},
"identity": {
"type": "SystemAssigned",
"principalId": "765ad4ab-XXXX-XXXX-XXXX-51ed985819dc",
"tenantId": "72f988bf-XXXX-XXXX-XXXX-2d7cd011db47"
},
"id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/ADFV2DemoFactory",
"type": "Microsoft.DataFactory/factories",
"location": "EastUS"
}
```
### <a name="generate-service-identity-using-sdk"></a>Generar una identidad de servicio con el SDK
Llame a la función create_or_update de la factoría de datos con Identity=new FactoryIdentity(). Código de ejemplo mediante .NET:
```csharp
Factory dataFactory = new Factory
{
Location = <region>,
Identity = new FactoryIdentity()
};
client.Factories.CreateOrUpdate(resourceGroup, dataFactoryName, dataFactory);
```
## <a name="retrieve-service-identity"></a>Recuperación de la identidad de servicio
Puede recuperar la identidad de servicio desde Azure Portal o mediante programación. Las secciones siguientes le muestran algunos ejemplos.
>[!TIP]
> If you don't see the service identity, [generate it](#generate-service-identity) by updating your factory.
### <a name="retrieve-service-identity-using-azure-portal"></a>Retrieve service identity using the Azure portal
You can find the service identity information in the Azure portal -> your data factory -> Settings -> Properties:
- SERVICE IDENTITY ID
- SERVICE IDENTITY TENANT
- **SERVICE IDENTITY APPLICATION ID** > copy this value

### <a name="retrieve-service-identity-using-powershell"></a>Retrieve service identity using PowerShell
The service identity principal ID and tenant ID will be returned when you get a specific data factory as follows:
```powershell
PS C:\WINDOWS\system32> (Get-AzureRmDataFactoryV2 -ResourceGroupName <resourceGroupName> -Name <dataFactoryName>).Identity
PrincipalId TenantId
----------- --------
765ad4ab-XXXX-XXXX-XXXX-51ed985819dc 72f988bf-XXXX-XXXX-XXXX-2d7cd011db47
```
Copy the principal ID, and then run the following Azure Active Directory command with the principal ID as a parameter to get the **ApplicationId** value, which you will use to grant access:
```powershell
PS C:\WINDOWS\system32> Get-AzureRmADServicePrincipal -ObjectId 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc
ServicePrincipalNames : {76f668b3-XXXX-XXXX-XXXX-1b3348c75e02, https://identity.azure.net/P86P8g6nt1QxfPJx22om8MOooMf/Ag0Qf/nnREppHkU=}
ApplicationId : 76f668b3-XXXX-XXXX-XXXX-1b3348c75e02
DisplayName : ADFV2DemoFactory
Id : 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc
Type : ServicePrincipal
```
## <a name="next-steps"></a>Pasos siguientes
Consulte los siguientes temas que presentan cuándo y cómo se utiliza la identidad de servicio de Data Factory:
- [Almacenamiento de credenciales en Azure Key Vault](store-credentials-in-key-vault.md)
- [Copia de datos con Azure Data Lake Store como origen o destino mediante Azure Data Factory](connector-azure-data-lake-store.md)
Consulte el tema de [información general sobre MSI](~/articles/active-directory/msi-overview.md) para obtener más información sobre la identidad de servicio administrada en la que se basa la identidad de servicio de factoría de datos. | 52.632184 | 340 | 0.762175 | spa_Latn | 0.917029 |
8a62526e137f6b8083e92c62ef6686515201f9ab | 2,183 | md | Markdown | README.md | PhilippMaxx/hotelbot | 4cb0e4dfc5d883d8b38f9cb0d4b0cb7e1717f0d3 | [
"MIT"
] | null | null | null | README.md | PhilippMaxx/hotelbot | 4cb0e4dfc5d883d8b38f9cb0d4b0cb7e1717f0d3 | [
"MIT"
] | null | null | null | README.md | PhilippMaxx/hotelbot | 4cb0e4dfc5d883d8b38f9cb0d4b0cb7e1717f0d3 | [
"MIT"
] | null | null | null | # Hotel Chatbot
This is an example chatbot based on the open-source framework [Rasa](https://rasa.com/). I have chosen the domain of hotels as a showcase, so that advanced functionalities are easier to understand. The setup includes a Docker version for server-level tests.
## Install
First you have to grant permissions for the initialization shell script. This will then set the required outstanding permissions for the other shell scripts and create the required database folder ./db.
`sudo chmod -R 777 initRun.sh`
`./initRun.sh`
For hosting the bot, you will also need to install docker on your system. Used system stats:
* Ubuntu 18.04.5
* Docker 19.03.12
## Host the Bot on a local server
To host the bot on a local server, you can run the shell script `./startRun.sh`. This will build and run the Rasa server and action server from the two docker images `./DockerfileRasa`, `./DockerfileAction` and host them together with a PostgreSQL server via the `./docker-compose.yml` script. If you don't want to use the server side, but run it directly in your shell, you can refer to the documentation of Rasa, where you will find a manual to do so.
To stop the server again, run the shell script `./stopRun.sh`. This will also ask you if you want to prune your Docker containers and volumes. If you don't want to lose other Docker containers, do not accept this and edit it in the shell script by deleting the last line in `./stopRun.sh`. However, I personally find it most practical to prune the system after hosting, so that Docker containers don't eat up my storage space.
## Bot API
With the bot hosted on a local server, you can call it via local API requests. To execute a POST request, use this form:
```shell
curl -XPOST http://localhost:5005/webhooks/rest/webhook \
  -H "Content-type: application/json" \
  -d '{"sender": "test", "message": "hello"}'
```
## SQL API
With the bot hosted, you will have access to all the chat history and metadata through a PostgreSQL server. To access it, you can run the shell script `./startSQL.sh`. The username and keys are the same as defined in `./endpoints.yml` and `./docker-compose.yml`.
| 60.638889 | 456 | 0.752634 | eng_Latn | 0.997053 |
8a626c2108cfc26a03d24d809b90cde0871f803c | 115 | md | Markdown | README.md | AddisonFreeman/nanoc_filter_yaml | d4facb5f3937446fbd1fcdcb20e5b3d283d9dfea | [
"MIT"
] | null | null | null | README.md | AddisonFreeman/nanoc_filter_yaml | d4facb5f3937446fbd1fcdcb20e5b3d283d9dfea | [
"MIT"
] | null | null | null | README.md | AddisonFreeman/nanoc_filter_yaml | d4facb5f3937446fbd1fcdcb20e5b3d283d9dfea | [
"MIT"
] | null | null | null | # nanoc_filter_yaml
Parses a structurally known yaml file and allows layout (html) access to content as variables.
| 38.333333 | 94 | 0.817391 | eng_Latn | 0.991128 |
8a6373d69f85fd3eabd3a42e95987a028e9fb913 | 377 | md | Markdown | _posts/2016-09-13-mysql-cve-.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2016-09-13-mysql-cve-.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2016-09-13-mysql-cve-.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | ---
title: MySQL CVE-2016-6662
author: PipisCrew
date: 2016-09-13
categories: [news]
toc: true
---
Release date: 12.09.2016 - Severity: Critical
https://www.percona.com/blog/2016/09/12/database-affected-cve-2016-6662/
https://www.psce.com/blog/2016/09/12/how-to-quickly-patch-mysql-server-against-cve-2016-6662/
origin - http://www.pipiscrew.com/?p=6080 mysql-cve-2016-6662 | 25.133333 | 93 | 0.742706 | kor_Hang | 0.245425 |
8a639541f6d562a972b8b6a22388ba82bcf79133 | 32,593 | md | Markdown | docs/analysis-services/tabular-models-scripting-language-objects/tables-object-tmsl.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/tabular-models-scripting-language-objects/tables-object-tmsl.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/tabular-models-scripting-language-objects/tables-object-tmsl.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Tables object (TMSL) | Microsoft Docs
ms.date: 05/07/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: tmsl
ms.topic: reference
ms.author: owend
ms.reviewer: owend
author: minewiskan
manager: kfile
ms.openlocfilehash: eb8948ca9c51bebc39bb93fbe44bed1afc5be38b
ms.sourcegitcommit: c12a7416d1996a3bcce3ebf4a3c9abe61b02fb9e
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/10/2018
ms.locfileid: "34044636"
---
# <a name="tables-object-tmsl"></a>Tables オブジェクト (TMSL)
[!INCLUDE[ssas-appliesto-sqlas-aas](../../includes/ssas-appliesto-sqlas-aas.md)]
Defines the tables contained in a model. The tables in a model are either bound to tables in the source database from which data is imported or queried, or they are calculated tables constructed from a DAX expression. Within a table, one or more **Partition** objects describe the source of the data. Between tables, a **Relationship** object specifies the cardinality, filter direction, and other properties of the relationship.
## <a name="object-definition"></a>オブジェクトの定義
すべてのオブジェクトは、共通の名前、型、説明、プロパティのコレクション、および注釈を含むプロパティのセットを持ちます。 **テーブル**オブジェクトでは、次のプロパティもがあります。
dataCategory
通常のままにして、テーブルの型指定を指定します。 有効な値は 0 - 不明、1 時に、2 - メジャー、3 - OTHER, 5 - 定量的、6-アカウント、7 - 顧客、製品-8、9 - シナリオでは、10 ユーティリティ、11、通貨、12 - 率、13 - チャネル、4 - プロモーション、15 - 組織、16 の部品表、17 – GEOGRAPHY です。
IsHidden
テーブルを扱うかどうかを示すブール値の視覚エフェクトのクライアント ツールで非表示として。
テーブルを非表示扱いとする場合は true、それ以外の場合は false です。
列
テーブル内の列を表します。 Table オブジェクトの子です。 各列は、さまざまなクライアント アプリケーションが、列のデータを視覚化する方法に影響を与えることで定義されたプロパティを持っています。
メジャー
式に基づいて計算される値を表します。 Table オブジェクトの子です。
階層
クライアント アプリケーションの論理階層ドリルダウン パスを提供するレベルのコレクションを表します。 Table オブジェクトの子です。
## <a name="usage"></a>使用方法
テーブル オブジェクトを使用[Alter コマンド(TMSL)](../../analysis-services/tabular-models-scripting-language-commands/alter-command-tmsl.md)、[コマンドを作成して(TMSL)](../../analysis-services/tabular-models-scripting-language-commands/create-command-tmsl.md)、 [CreateOrReplace コマンド(TMSL) ](../../analysis-services/tabular-models-scripting-language-commands/createorreplace-command-tmsl.md)、 [Delete コマンド(TMSL)](../../analysis-services/tabular-models-scripting-language-commands/delete-command-tmsl.md)、 [Refresh コマンド(TMSL)](../../analysis-services/tabular-models-scripting-language-commands/refresh-command-tmsl.md)、および[MergePartitions コマンド(TMSL)](../../analysis-services/tabular-models-scripting-language-commands/mergepartitions-command-tmsl.md).
作成する場合、置換、またはテーブル オブジェクトを変更することは、オブジェクト定義のすべての読み取り/書き込みプロパティを指定します。 読み取り/書き込みプロパティの省略は、削除であると見なされます。
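As an illustration only, a small table definition inside a createOrReplace command might look like the following. The database, table, column, and data source names here are hypothetical and are not part of this reference.
```
{
  "createOrReplace": {
    "object": {
      "database": "SalesModel",
      "table": "Customer"
    },
    "table": {
      "name": "Customer",
      "columns": [
        { "name": "CustomerKey", "dataType": "int64", "sourceColumn": "CustomerKey" },
        { "name": "FullName", "dataType": "string", "sourceColumn": "FullName" }
      ],
      "partitions": [
        {
          "name": "Customer-Partition",
          "source": {
            "query": "SELECT [CustomerKey], [FullName] FROM [dbo].[Customer]",
            "dataSource": "SqlServer localhost SalesDW"
          }
        }
      ]
    }
  }
}
```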
## <a name="condensed-syntax"></a>圧縮の構文
オブジェクトのテーブルの定義は複雑です。 この構文は、内部プロパティと主要な部分の概略図を提供するオブジェクトを折りたたみます。
```
"tables": {
"type": "array",
"items": {
"description": "Table object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"dataCategory": { },
"description": { },
"isHidden": { },
"partitions": { },
"annotations": { },
"columns": { },
"measures": { },
"hierarchies": { },
```
## <a name="full-syntax"></a>完全な構文
モデルのテーブル オブジェクトのスキーマ表現を以下に示します。 この定義のサイズを減らすためには、パーティションのオブジェクトは他の場所で説明します。 参照してください[パーティション オブジェクト(TMSL)](../../analysis-services/tabular-models-scripting-language-objects/partitions-object-tmsl.md)です。
```
"tables": {
"type": "array",
"items": {
"description": "Table object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"dataCategory": {
"type": "string"
},
"description": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"isHidden": {
"type": "boolean"
},
"partitions": {
},
"columns": {
"type": "array",
"items": {
"anyOf": [
{
"description": "DataColumn object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"dataType": {
"enum": [
"automatic",
"string",
"int64",
"double",
"dateTime",
"decimal",
"boolean",
"binary",
"unknown",
"variant"
]
},
"dataCategory": {
"type": "string"
},
"description": {
"type": "string"
},
"isHidden": {
"type": "boolean"
},
"isUnique": {
"type": "boolean"
},
"isKey": {
"type": "boolean"
},
"isNullable": {
"type": "boolean"
},
"alignment": {
"enum": [
"default",
"left",
"right",
"center"
]
},
"tableDetailPosition": {
"type": "integer"
},
"isDefaultLabel": {
"type": "boolean"
},
"isDefaultImage": {
"type": "boolean"
},
"summarizeBy": {
"enum": [
"default",
"none",
"sum",
"min",
"max",
"count",
"average",
"distinctCount"
]
},
"type": {
"enum": [
"data",
"calculated",
"rowNumber",
"calculatedTableColumn"
]
},
"formatString": {
"type": "string"
},
"isAvailableInMdx": {
"type": "boolean"
},
"keepUniqueRows": {
"type": "boolean"
},
"displayOrdinal": {
"type": "integer"
},
"sourceProviderType": {
"type": "string"
},
"displayFolder": {
"type": "string"
},
"sourceColumn": {
"type": "string"
},
"sortByColumn": {
"type": "string"
},
"annotations": {
"type": "array",
"items": {
"description": "Annotation object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
{
"description": "CalculatedTableColumn object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"dataType": {
"enum": [
"automatic",
"string",
"int64",
"double",
"dateTime",
"decimal",
"boolean",
"binary",
"unknown",
"variant"
]
},
"dataCategory": {
"type": "string"
},
"description": {
"type": "string"
},
"isHidden": {
"type": "boolean"
},
"isUnique": {
"type": "boolean"
},
"isKey": {
"type": "boolean"
},
"isNullable": {
"type": "boolean"
},
"alignment": {
"enum": [
"default",
"left",
"right",
"center"
]
},
"tableDetailPosition": {
"type": "integer"
},
"isDefaultLabel": {
"type": "boolean"
},
"isDefaultImage": {
"type": "boolean"
},
"summarizeBy": {
"enum": [
"default",
"none",
"sum",
"min",
"max",
"count",
"average",
"distinctCount"
]
},
"type": {
"enum": [
"data",
"calculated",
"rowNumber",
"calculatedTableColumn"
]
},
"formatString": {
"type": "string"
},
"isAvailableInMdx": {
"type": "boolean"
},
"keepUniqueRows": {
"type": "boolean"
},
"displayOrdinal": {
"type": "integer"
},
"sourceProviderType": {
"type": "string"
},
"displayFolder": {
"type": "string"
},
"isNameInferred": {
"type": "boolean"
},
"isDataTypeInferred": {
"type": "boolean"
},
"sourceColumn": {
"type": "string"
},
"sortByColumn": {
"type": "string"
},
"columnOriginTable": {
"type": "string"
},
"columnOriginColumn": {
"type": "string"
},
"annotations": {
"type": "array",
"items": {
"description": "Annotation object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
{
"description": "CalculatedColumn object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"dataType": {
"enum": [
"automatic",
"string",
"int64",
"double",
"dateTime",
"decimal",
"boolean",
"binary",
"unknown",
"variant"
]
},
"dataCategory": {
"type": "string"
},
"description": {
"type": "string"
},
"isHidden": {
"type": "boolean"
},
"isUnique": {
"type": "boolean"
},
"isKey": {
"type": "boolean"
},
"isNullable": {
"type": "boolean"
},
"alignment": {
"enum": [
"default",
"left",
"right",
"center"
]
},
"tableDetailPosition": {
"type": "integer"
},
"isDefaultLabel": {
"type": "boolean"
},
"isDefaultImage": {
"type": "boolean"
},
"summarizeBy": {
"enum": [
"default",
"none",
"sum",
"min",
"max",
"count",
"average",
"distinctCount"
]
},
"type": {
"enum": [
"data",
"calculated",
"rowNumber",
"calculatedTableColumn"
]
},
"formatString": {
"type": "string"
},
"isAvailableInMdx": {
"type": "boolean"
},
"keepUniqueRows": {
"type": "boolean"
},
"displayOrdinal": {
"type": "integer"
},
"sourceProviderType": {
"type": "string"
},
"displayFolder": {
"type": "string"
},
"isDataTypeInferred": {
"type": "boolean"
},
"expression": {
"type": "string"
},
"sortByColumn": {
"type": "string"
},
"annotations": {
"type": "array",
"items": {
"description": "Annotation object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
]
}
},
"measures": {
"type": "array",
"items": {
"description": "Measure object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"description": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"expression": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"formatString": {
"type": "string"
},
"isHidden": {
"type": "boolean"
},
"isSimpleMeasure": {
"type": "boolean"
},
"displayFolder": {
"type": "string"
},
"kpi": {
"description": "KPI object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"description": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"targetDescription": {
"type": "string"
},
"targetExpression": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"targetFormatString": {
"type": "string"
},
"statusGraphic": {
"type": "string"
},
"statusDescription": {
"type": "string"
},
"statusExpression": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"trendGraphic": {
"type": "string"
},
"trendDescription": {
"type": "string"
},
"trendExpression": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"annotations": {
"type": "array",
"items": {
"description": "Annotation object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"annotations": {
"type": "array",
"items": {
"description": "Annotation object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
},
"hierarchies": {
"type": "array",
"items": {
"description": "Hierarchy object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"description": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"isHidden": {
"type": "boolean"
},
"displayFolder": {
"type": "string"
},
"annotations": {
"type": "array",
"items": {
"description": "Annotation object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
},
"additionalProperties": false
}
},
"levels": {
"type": "array",
"items": {
"description": "Level object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"ordinal": {
"type": "integer"
},
"name": {
"type": "string"
},
"description": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
},
"column": {
"type": "string"
},
"annotations": {
"type": "array",
"items": {
"description": "Annotation object of Tabular Object Model (TOM)",
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"anyOf": [
{
"type": "string"
},
{
"type": "array",
"items": {
"type": "string"
}
}
]
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
}
```
## <a name="see-also"></a>参照
[表形式モデルのスクリプト言語 (TMSL) リファレンス](../../analysis-services/tabular-model-scripting-language-tmsl-reference.md)
| 39.942402 | 772 | 0.228976 | yue_Hant | 0.326337 |
8a63a2a246b4faaf03c1a68700553660440376d0 | 137 | md | Markdown | content/blog/algorithm/2021-11-17-TLI-codingtest.md.md | camiyoung/camiyoung.github.com | e54a0b2c48d563b3f72aeb30dddb77568fd56c73 | [
"MIT"
] | null | null | null | content/blog/algorithm/2021-11-17-TLI-codingtest.md.md | camiyoung/camiyoung.github.com | e54a0b2c48d563b3f72aeb30dddb77568fd56c73 | [
"MIT"
] | 1 | 2021-12-27T18:10:07.000Z | 2021-12-27T18:10:08.000Z | content/blog/algorithm/2021-11-17-TLI-codingtest.md.md | camiyoung/camiyoung.github.com | e54a0b2c48d563b3f72aeb30dddb77568fd56c73 | [
"MIT"
] | null | null | null | ---
date: 2021-11-17 16:23:13
category: 'algorithm'
draft: false
title: '[Programmers] Rotating the Matrix Border, Menu Renewal, Maximum and Minimum (C++)'
emoji: 📝
---
To be revised.
| 13.7 | 51 | 0.635036 | kor_Hang | 0.981528 |
8a63adabd784773606a9f10434a63eb910e2f819 | 2,026 | md | Markdown | 流行/Get Me-Justin Bieber ft Kehlani/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | 17 | 2020-12-01T05:27:50.000Z | 2022-03-28T05:03:34.000Z | 流行/Get Me-Justin Bieber ft Kehlani/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | null | null | null | 流行/Get Me-Justin Bieber ft Kehlani/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | 2 | 2021-08-24T08:58:58.000Z | 2022-02-08T08:22:52.000Z |
**The Get Me two-hand numbered notation (jianpu)** corresponds exactly to the staff notation.
_Get Me_ is a single by Justin Bieber in collaboration with the R&B talent Kehlani. The hazy R&B atmosphere of the whole track is very reminiscent of Justin in his Journals period, and it seems he is determined to move toward R&B on this new album. Justin's languid, alluring voice is a perfect match for Kehlani's highly recognizable vocals, and the few high notes later in the song carry a flavor of the Purpose era.
R&B always brings a stronger sense of immersion; the lingering affection written into the lyrics permeates every heavy bass beat. This style rewards repeated listening and careful savoring.
The site also provides the [Love Yourself](Music-6606-Love-Yourself-Justin-Bieber.html "Love Yourself") sheet music for download.
Below the lyrics is the _Get Me piano sheet music_, which everyone can download and study for free.
### Get Me lyrics:
Justin Bieber:
Ooh you don't compare don't fit in with 'em do you get me
Judgin' by the way you open up you get me
Ooh out of this world hands on baby now you see me
Lookin' at the way we're blendin' in you get me
Justin Bieber:
Ha-ha-ha you get me
Ha-ha-ha you get me
Justin Bieber:
See you're lookin' beyond the surface
Can tell by the questions you're asking
You got me low-key nervous
It feels like we're on the same wave yeah
Never intended to relate I mean what are the chances
Never thought I'd connect with you not any circumstances
Justin Bieber:
Ooh you don't compare don't fit in with 'em do you get me
Judgin' by the way you open up you get me
Ooh out of this world hands on baby now you see me
Lookin' at the way we're blendin' in you get me
Justin Bieber:
Ha-ha-ha you get me
Ha-ha-ha you get me
Kehlani:
Ooh there's so much chemistry
Like you came inside you're finishin' my sentences
And they're sayin' no we can't deny this energy
How 'bout reapin' all the benefits yeah
Never intended to relate I mean what are the chances
Never thought I'd connect with you not any circumstances no-o-o-oh
Justin Bieber/Kehlani:
Ooh you don't compare don't fit in with 'em do you get me
Judgin' by the way you open up you get me
Yeah you really get me
Ooh out of this world hands on baby now you send me
Ooh that's why you send me
Lookin' at the way we're blendin' in you get me
Ooh you really get me
Ha-ha-ha you get me
Oo-oo-oo-oo-oo-ooh
Ha-ha-ha you get me
Ye-e-e-e-e-e-ah
Oh you get me yeah
You get me yeah yeah
You get me yeah
You get me yeah yeah
| 32.15873 | 134 | 0.737907 | eng_Latn | 0.980607 |
8a6413b8969cee900d301b1656b55d31ff58630d | 820 | md | Markdown | README.md | HanpyBin/Obsidian-Templater-Templates | 031c847e6a560d660d32a401d386b52ff476fc31 | [
"MIT"
] | null | null | null | README.md | HanpyBin/Obsidian-Templater-Templates | 031c847e6a560d660d32a401d386b52ff476fc31 | [
"MIT"
] | null | null | null | README.md | HanpyBin/Obsidian-Templater-Templates | 031c847e6a560d660d32a401d386b52ff476fc31 | [
"MIT"
] | null | null | null | # Obsidian-Templater-Templates
 This repo is used to store all the templates I design for various situation like essay reading,diary,academic report and so on.
## Requirements
- Obsidian
- Obsidian plugins
- Templater
- Admonition
## Templates until now
> Attention:
> - For a better code view, see the raw code rather than the rendered markdown.
> - All the template files ending with '-admonition' need to be used with the Obsidian plugin `admonition` to show their original appearance.
> - All the demonstration graphs below are "admonition-based".
### Essay Reading

## Reference
- [Templater official document](https://silentvoid13.github.io/Templater/)
- [Admonition official document](https://squidfunk.github.io/mkdocs-material/) | 39.047619 | 145 | 0.769512 | eng_Latn | 0.875198 |
8a642295cc18fefa17659327492ef3bdbaec5f85 | 1,588 | md | Markdown | BingMaps/v8-web-control/modules/heat-map-module/index.md | tushar-nitave/bingmaps-docs | 38a5c4e364f62a8f5c9e1e48d14f6f960ed7d130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | BingMaps/v8-web-control/modules/heat-map-module/index.md | tushar-nitave/bingmaps-docs | 38a5c4e364f62a8f5c9e1e48d14f6f960ed7d130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | BingMaps/v8-web-control/modules/heat-map-module/index.md | tushar-nitave/bingmaps-docs | 38a5c4e364f62a8f5c9e1e48d14f6f960ed7d130 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Heat Map Module | Microsoft Docs"
ms.custom: ""
ms.date: "02/28/2018"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: c6c93a23-4c32-462c-9e86-87e648c006d5
caps.latest.revision: 5
author: "rbrundritt"
ms.author: "richbrun"
manager: "stevelom"
ms.service: "bing-maps"
---
# Heat Map Module
**Module name**: Microsoft.Maps.HeatMap
**Namespace**: Microsoft.Maps
Heat maps, also known as density maps, are a type of overlay on a map used to represent the density of data using different colors. Heat maps are often used to show the data “hot spots” on a map. Heat maps are useful when you have a lot of data you want to look at on the map. If you were to display tens of thousands of pushpins on the map, you would likely find that the performance of the map degrades and, due to the number of pushpins, most if not all of the map is covered, making it unusable. Rendering the data as a heat map offers much better performance and helps make better sense of the density of the data while still being able to see and use the map.
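The sketch below shows the typical loading pattern. It is illustrative only: it assumes a `map` variable referencing an existing `Microsoft.Maps.Map` instance, and the coordinates are made up. See the linked examples for complete samples.
```javascript
// Assumes a Map instance named "map" already exists on the page.
Microsoft.Maps.loadModule('Microsoft.Maps.HeatMap', function () {
    // Sample data: locations to visualize as a heat map.
    var locations = [
        new Microsoft.Maps.Location(47.6149, -122.1941),
        new Microsoft.Maps.Location(47.6131, -122.1967)
    ];
    // Create the heat map layer and add it to the map.
    var heatMap = new Microsoft.Maps.HeatMapLayer(locations);
    map.layers.insert(heatMap);
});
```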
## API Reference
* [HeatMapLayer Class](../v8-web-control/heatmaplayer-class.md)
* [HeatMapLayerOptions Object](../v8-web-control/heatmaplayeroptions-object.md)
## Concepts
* [Heat Map Color Gradients](../v8-web-control/heat-map-color-gradients.md)
## Examples
* [Basic Heat Map Example](../v8-web-control/basic-heat-map-example.md)
* [Customized Heat Map Example](../v8-web-control/customized-heat-map-example.md)
* [Render GeoJSON as Heat Map](../v8-web-control/render-geojson-as-heat-map.md)
| 45.371429 | 669 | 0.741184 | eng_Latn | 0.958321 |
8a652d2831f2665b43e701dc28fb6d56fb71ecbb | 2,890 | md | Markdown | README.md | theta-skunkworks/theta-plugin-tfl-od-tracking | 67e6003b85f17f0a6b00bab5bb330b04d98b86f2 | [
"Apache-2.0"
] | 1 | 2020-05-22T06:20:12.000Z | 2020-05-22T06:20:12.000Z | README.md | dun933/theta-plugin-tfl-od-tracking | d3f50d46032eaf50e718b2a84aca11838ed68d5b | [
"Apache-2.0"
] | null | null | null | README.md | dun933/theta-plugin-tfl-od-tracking | d3f50d46032eaf50e718b2a84aca11838ed68d5b | [
"Apache-2.0"
] | 3 | 2020-05-22T06:20:02.000Z | 2021-07-20T11:31:57.000Z | # Object tracking sample using TF-Lite Object Detection and equirectangular rotation processing
[Click here](https://qiita.com/KA-2/items/6f8395e4ea0dc378cc7a) for a more detailed explanation in Japanese.
This software includes work that is distributed under the Apache License 2.0.
## Overview
This is a sample program that tracks bananas, recognized using TensorFlow Lite Object Detection, in all directions.
It includes the following technical elements:
- How to use TensorFlow Lite Object Detection
- Equirectangular rotation process written using NDK's OpenCV
- Use of THETA posture information

## Build
To build this set of project files, you need to do the following things yourself.
### Download and deploy [OpenCV Android pack 3.4.9](https://sourceforge.net/projects/opencvlibrary/files/3.4.9/opencv-3.4.9-android-sdk.zip/download)
The example of the deployment destination is as follows
```
C:/opencv/OpenCV-3.4.9-android-sdk
```
Please rewrite the "Path to OpenCV.mk" described in Android.mk according to the result of expanding the file.
### Android Studio settings (setting to enable NDK build)
Open "Tools"-> "SDK Manager"-> "SDK Tools" and make sure the following items are checked.
- Android SDK Build-Tools
- Android SDK Platform-Tools
- Android SDK Tools
- Google USB Driver
- NDK
### Download and install "[Java SE Development Kit 8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)"
### Copy of OpenCV library (.so file)
```
Source : C:/(The location of OpenCV Android pack)/sdk/native/libs/arm64-v8a
Destination: C:/(The location of the project file)/app/jniLibs/arm64-v8a
```
## Development Environment
### Camera
* RICOH THETA V Firmware ver.3.30.1 and above
* RICOH THETA Z1 Firmware ver.1.40.1 and above
### SDK/Library
* RICOH THETA Plug-in SDK ver.2.0.10
### OpenCV Android pack
* opencv-3.4.9-android-sdk
### TensorFlow Lite Object Detection model
* coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
### Development Software
* Android Studio ver.3.5.3
* gradle ver.5.1.1
## License
```
Copyright 2018 Ricoh Company, Ltd.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## Contact

| 27.52381 | 149 | 0.76436 | eng_Latn | 0.893842 |
8a65ba142f693f321d0035f8aab53fe0ebc8d5aa | 7,830 | md | Markdown | README.md | Senth/minecraft-mod-updater | 96e1614361365e3e612b19ac56b8b6669af2d537 | [
"MIT"
] | null | null | null | README.md | Senth/minecraft-mod-updater | 96e1614361365e3e612b19ac56b8b6669af2d537 | [
"MIT"
] | null | null | null | README.md | Senth/minecraft-mod-updater | 96e1614361365e3e612b19ac56b8b6669af2d537 | [
"MIT"
] | null | null | null | # mcman/mmm (minecraft-mod-manager)
[](https://pypi.python.org/pypi/minecraft-mod-manager)
[](https://pypi.python.org/pypi/minecraft-mod-manager)
[](https://lgtm.com/projects/g/Senth/minecraft-mod-manager/alerts/)
[](https://lgtm.com/projects/g/Senth/minecraft-mod-manager/context:python)
Install and update mods from ~~CurseForge~~ and Modrinth through a simple command.
## News — CurseForge support disabled (hopefully only for now) (2022-05-08)
Hi everyone!
I'm not sure if you're aware, but Overwolf will tomorrow disable the old API for CurseForge, which is used by mcman.
The old API that mcman used is sort of in a gray area legally.
But on a positive note, Overwolf has decided to open up the new API.
Albeit it comes with some limitations; not all mods can be downloaded from 3rd party apps.
I just applied for an API key for the new API, so hopefully it gets accepted.
For the mods that can't be downloaded I plan to link directly to the CurseForge page for easier manual download.
The Overwolf client has also become a lot better with more support, but still lacks official linux and OSX support.
As a server owner though, it requires a few changes to how you update the mods.
A tip is to sync your mods folder with Dropbox; that makes it a lot easier.
This will mean that CurseForge mods will be unavailable for some time.
The change in mcman will only take ~4 hours with updating tests.
The issue is keeping the API key safe.
I have some ideas but it will take time to develop and I also need to check with the
Overwolf team that it's legally possible.
Anyway, thanks for all the support!
Hopefully we can get mcman up and running again with CurseForge support 🙂
If it's not accepted, thank you for all the support so far!
Cheers,
Senth
_[(News Archive)](./NEWS.md)_
## Features
- Install mods with `minecraft-mod-manager install mod_name`
- Update all mods with `minecraft-mod-manager update`, `mcman update` or `mmm update`
- Searches on CurseForge and Modrinth for updates on installed mods
- Filter updates by
- Stable (default), beta `--beta`, or alpha `--alpha` releases
- Minecraft version `-v 1.16.4`
- Fabric/Forge mod `--mod-loader fabric`
## Installation/Upgrade & Requirements
1. Requires at least python 3.8
1. Install/Upgrade with `$ pip install --user --upgrade minecraft-mod-manager`
## Examples
**Note!** All examples start with `minecraft-mod-manager`, `mcman` or `mmm`
(shorthand commands) then comes the arguments.
| Arguments | Description |
| ----------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| `install jei` | Searches for jei on all sites and installs the latest version. |
| `install sodium=modrinth` | Install Sodium specifically from modrinth. |
| `install dynmap=curse:dynmapforge` | Install dynmap with slug dynmapforge on Curse. |
| `install sodium=modrinth --mod-loader fabric` | Install fabric version of sodium. Generally not necessary to specify `mod-loader` |
| `install carpet fabric-api sodium lithium` | Easily install many mods. |
| `update` | Update all mods. |
| `update --pretend` | Check what will be updated. Does not change anything. |
| `update sodium lithium phosphor` | Update specific mods. |
| `update -v "1.16.5"` | Updates to latest mod version which works with specified MC version. |
| `update -v "1.16.1"` | If you upgraded the mods, to a higher version (e.g. snapshot), you can easily downgrade them again. |
| `configure sodium=modrinth` | Change the download site for a mod. |
| `configure sodium=` | Doesn't work, known bug! Reset download sites (downloads from all sites again) |
| `configure carpet=curse:fabric-carpet` | Change site slug for a mod. |
| `configure carpet=curse` | If you don't define a slug, you will reset the slug for that mod. |
| `configure sodium=modrinth carpet=curse` | Easily configure multiple mods at the same time. |
| `configure carpet=modrinth,curse:fabric-carpet` | Configure different slugs for different sites. |
| `list` | List all installed mods. |
## Full usage
```none
positional arguments:
{install,update,configure,list}
Install, update, configure, or list mods
mods
The mods to update or configure.
If no mods are specified during an update, all mods will be updated.
You can specify download sites and slugs for each mod (if necessary)
dynmap=curse
dynmap=curse:dynmapforge
dynmap=curse:dynmapforge,modrinth
dynmap=curse:dynmapforge,modrinth:dynmap
minecraft:
-d DIR, --dir DIR Location of the mods folder. By default it's the current directory
-v MINECRAFT_VERSION, --minecraft-version MINECRAFT_VERSION
Only update mods to this Minecraft version. Example: -v 1.16.4
--beta Allow beta releases of mods
--alpha Allow alpha and beta releases of mods
--mod-loader {fabric,forge}
Only install mods that use this mod loader. You rarely need to be
this specific. The application figures out for itself which type
you'll likely want to install.
logging & help:
-h, --help show this help message and exit
--version Print application version
--verbose Print more messages
--debug Turn on debug messages
--pretend Only pretend to install/update/configure. Does not change anything
--no-color Disable color output
```
## Alternatives
### GUI
- [Overwolf](https://www.overwolf.com/)
- [kaniol-lck/modmanager](https://github.com/kaniol-lck/modmanager)
- [ReviversMC/modget-minecraft](https://github.com/ReviversMC/modget-minecraft)
- [4JX/mCubed](https://github.com/4JX/mCubed)
### CLI
- [sargunv/modsman](https://github.com/sargunv/modsman)
- [tyra314/modweaver](https://github.com/tyra314/modweaver)
## Authors
- Matteus Magnusson, [email protected]
| 57.573529 | 198 | 0.573946 | eng_Latn | 0.930065 |
8a65bb5bd32fa3b1bc60e8d4629b0e34cf849bfd | 476 | md | Markdown | README.md | geek-dc/laravel-sensitive | 86dd2d493d6cea6effcc5d06802161e408801534 | [
"MIT"
] | 1 | 2019-05-09T01:37:28.000Z | 2019-05-09T01:37:28.000Z | README.md | geek-dc/laravel-sensitive | 86dd2d493d6cea6effcc5d06802161e408801534 | [
"MIT"
] | null | null | null | README.md | geek-dc/laravel-sensitive | 86dd2d493d6cea6effcc5d06802161e408801534 | [
"MIT"
] | null | null | null | # Laravel-sensitive
Sensitive word filter for Laravel 5, based on [geek-dc/laravel-sensitive](https://github.com/geek-dc/laravel-sensitive).
## Install
```shell
composer require yankewei/laravel-sensitive
```
## For Laravel
Add config
```shell
php artisan vendor:publish --provider="GeekDC\Sensitive\LaravelSensitiveProvider"
```
Execute database migration
```shell
php artisan migrate
```
## Usage
Using facade:
```php
Sensitive::match($content);
```
## License
MIT
| 12.526316 | 113 | 0.731092 | eng_Latn | 0.594673 |
8a6776e1035f289185a5da15308e00765880215a | 4,016 | md | Markdown | README.md | fredfung007/vehicle-nl-retrieval | bad888ddab4f16fc705ce821336561da8421bcbd | [
"Apache-2.0"
] | 28 | 2021-02-02T16:54:54.000Z | 2022-03-22T02:45:30.000Z | README.md | fredfung007/vehicle-nl-retrieval | bad888ddab4f16fc705ce821336561da8421bcbd | [
"Apache-2.0"
] | 3 | 2021-10-03T06:56:08.000Z | 2022-01-07T22:56:36.000Z | README.md | fredfung007/vehicle-nl-retrieval | bad888ddab4f16fc705ce821336561da8421bcbd | [
"Apache-2.0"
] | 4 | 2021-02-03T02:46:47.000Z | 2022-03-25T08:13:19.000Z | # Natural Language-Based Vehicle Retrieval
This dataset is curated for the Natural Language (NL) Based Vehicle Retrieval
Challenge Track of the 2021 AI City Workshop.
The workshop summary paper is available on ArXiv: https://arxiv.org/abs/2104.12233
```
@inproceedings{naphade2021AIC21,
author = {Milind Naphade and Shuo Wang and David C. Anastasiu and Zheng Tang and Ming-Ching Chang and Xiaodong Yang and Yue Yao and Liang Zheng
and Pranamesh Chakraborty and Anuj Sharma and Qi Feng and Vitaly Ablavsky and Stan Sclaroff},
title = {The 5th AI City Challenge},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2021},
}
```
## Contents in this repository
`data/extract_vdo_frms.py` is a Python script that is used to extract frames
from videos provided in Challenge Track 3 (MTMC). Please use this script to
extract frames, so that the path configurations in JSON files are consistent.
`data/train-tracks.json` is a dictionary of all 2,498 vehicle tracks in the
training split. Each vehicle track is annotated with three natural language
descriptions of the target and is assigned a universally unique identifier
(UUID). The file is structured as
```json
{
"track-uuid-1": {
"frames": ["file-1.jpg", ..., "file-n.jpg"],
"boxes": [[742, 313, 171, 129], ..., [709, 304, 168, 125]],
"nl": [
"A gray pick-up truck goes ...",
"A dark pick-up runs ...",
"Pick-up truck that goes ..."
]
},
"track-uuid-2": ...
}
```
The files under the `frames` attribute are paths in the CityFlow Benchmark [2]
used in Challenge Track 2 of the 2021 AI City Challenge.
`data/test-tracks.json` contains 530 tracks of target vehicles. The structure of
this file is identical to the training split, except that the natural language
descriptions are removed.
`data/test-queries.json` contains 530 queries. Each consists of three natural
language descriptions of the vehicle target annotated by different annotators.
Each query is assigned a UUID that is later used in results submission. The
structure of this file is as follows:
```json
{
"query-uuid-1": [
"A dark red SUV drives straight through an intersection.",
"A red MPV crosses an empty intersection.",
"A red SUV going straight down the street."
],
"query-uuid-2": ...
}
```
The `baseline/` directory contains a baseline model that measures the similarity
between language descriptions and frame crops in a track. Details of this model
can be found in [1].
## Problem Definition
Teams should retrieve and rank the provided vehicle tracks for each of the
queries. A baseline retrieval model is provided as a demo and starting point
for participating teams.
## Submission Format
For each query, teams should submit a list of the testing tracks ranked by their
retrieval model. One JSON file should be submitted containing a dictionary in
the following format:
```json
{
"query-uuid-1": ["track-uuid-i", ..., "track-uuid-j"],
"query-uuid-2": ["track-uuid-m", ..., "track-uuid-n"],
...
}
```
A sample JSON file of submission for the baseline model is available in
`baseline/baseline-results.json`.
## Evaluation Metrics
The Vehicle Retrieval by NL Descriptions task is evaluated using standard
metrics for retrieval tasks. We use the Mean Reciprocal Rank (MRR) [3] as the
main evaluation metric. Recall @ 5, Recall @ 10, and Recall @ 25 are also
evaluated for all submissions.
The provided baseline model’s MRR is 0.0269, Recall @ 5 is 0.0264, Recall @ 10
is 0.0491, Recall @ 25 is 0.1113.
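For reference, both metrics can be computed from a submission dictionary with a short script. The sketch below is illustrative only (it is not the official evaluation code) and assumes `submission` maps each query UUID to its ranked list of track UUIDs, while `ground_truth` maps each query UUID to the correct track UUID:
```python
def evaluate(submission, ground_truth, ks=(5, 10, 25)):
    """Compute MRR and Recall@k for a ranked retrieval submission."""
    reciprocal_ranks = []
    hits = {k: 0 for k in ks}
    for query_id, correct_track in ground_truth.items():
        ranking = submission[query_id]
        if correct_track in ranking:
            rank = ranking.index(correct_track) + 1  # ranks are 1-based
            reciprocal_ranks.append(1.0 / rank)
            for k in ks:
                if rank <= k:
                    hits[k] += 1
        else:
            reciprocal_ranks.append(0.0)
    n = len(ground_truth)
    mrr = sum(reciprocal_ranks) / n
    recalls = {k: hits[k] / n for k in ks}
    return mrr, recalls
```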
## Citations
Please cite this work:
[1] Feng, Qi, et al. "CityFlow-NL: Tracking and Retrieval of Vehicles at City
Scale by Natural Language Descriptions." arXiv preprint. arXiv:2101.04741.
## References
[2] Tang, Zheng, et al. "CityFlow: A city-scale benchmark for multi-target
multi-camera vehicle tracking and re-identification." CVPR. 2019.
[3] Voorhees, Ellen M. "The TREC-8 question answering track report." Trec. Vol.
99. 1999.
| 34.033898 | 143 | 0.736305 | eng_Latn | 0.985727 |
8a686b3d381f130fa52c42377693f85ebb364b39 | 1,853 | md | Markdown | _posts/2019-03-14-google-trouble.md | quorten/quorten-blog1 | f14f1c2d20a66f36cd083d5044635aacd782ff90 | [
"Unlicense"
] | null | null | null | _posts/2019-03-14-google-trouble.md | quorten/quorten-blog1 | f14f1c2d20a66f36cd083d5044635aacd782ff90 | [
"Unlicense"
] | 1 | 2021-01-19T23:42:48.000Z | 2021-02-03T04:02:20.000Z | _posts/2019-03-14-google-trouble.md | quorten/quorten-blog1 | f14f1c2d20a66f36cd083d5044635aacd782ff90 | [
"Unlicense"
] | null | null | null | ---
layout: post
title: Trouble with Google ReCAPTCHA
date: 2019-03-14 23:09 -0500
author: quorten
categories: [random-software]
tags: [random-software]
---
So, recent brands of Google ReCAPTCHA sometimes deliberately feed you
humanly impossible challenges to solve. Yes, indeed the complaints on
the Internet are widespread, but today I tried an experiment to see if
I could file any reasonable feedback with Google on the issue. My
attempt ended in failure.
What did I do to try? Well, first, ReCAPTCHA has some feedback link
you can click on. I clicked on that, and I got brought to a marketing
website advertising the new ReCAPTCHA service to web developers. Not
helpful from an end-user standpoint. But, anyways, I continued to
play along with their game. Okay, if I can't get end-user support, I
can get developer support on their Google group for developers, right?
Okay, treading further, I pretty quickly realize that the Google group
for developers was set up by Google as an attempt to not have to do any
work answering developer questions. Oh, come on, we've set up this
community where all users can chat together, hopefully some more
experienced users will already know the answer to the question, answer
it for us, and we won't have to do any work.
Treading further into that forum, I found some interesting evidence
that I was not the only one. On the forum, there were some recent
posts by clear end-users, rather than developers, asking about their
difficulty in using ReCAPTCHA.
So, my summary of the experience:
* Google has virtually zero customer support. I can affirm that.
<!-- more -->
* Google sells their services to web developers and advertisers.
Unfortunately, the other customer-facing services did not evolve.
* If you're not paying, you're not the customer, you are the product
being sold.
| 41.177778 | 70 | 0.778737 | eng_Latn | 0.99954 |
8a689cdbfb204544f9dd8e1c8388debe8cfae7c3 | 354 | markdown | Markdown | _posts/2014-07-16-project-3.markdown | anandiamy/anandia234.github.io | d42f56b140c2efb538aedc229217e927b196f9d6 | [
"Apache-2.0"
] | null | null | null | _posts/2014-07-16-project-3.markdown | anandiamy/anandia234.github.io | d42f56b140c2efb538aedc229217e927b196f9d6 | [
"Apache-2.0"
] | null | null | null | _posts/2014-07-16-project-3.markdown | anandiamy/anandia234.github.io | d42f56b140c2efb538aedc229217e927b196f9d6 | [
"Apache-2.0"
] | null | null | null | ---
title: Bbvet Wates
subtitle: Laboratorium Information System
layout: default
modal-id: 3
date: 2017-02-10
img: bbvetwates.png
thumbnail: bbvetwates-thumbnail.png
alt: image-alt
project-date: February 2017
client: Bbvet Wates Yogyakarta
category: Web Development
description: Website for Laboratorium Information System in Bbvet Wates, Yogyakarta.
--- | 25.285714 | 84 | 0.80791 | hun_Latn | 0.165369 |
8a691c3591cbed3a1778196107f3cb13fd72296d | 1,634 | md | Markdown | Documentation/Release_Notes/Release_1.1.0.md | vulogov/Bund2 | d6edeb9cca6575fdb8acffd7e643d68ffecf3fbd | [
"Apache-2.0"
] | 3 | 2021-08-05T21:59:05.000Z | 2021-08-19T18:03:41.000Z | Documentation/Release_Notes/Release_1.1.0.md | vulogov/Bund | 0f2e4122a9fe67c38c5c8d8de4de1c3912bc43e8 | [
"Apache-2.0"
] | null | null | null | Documentation/Release_Notes/Release_1.1.0.md | vulogov/Bund | 0f2e4122a9fe67c38c5c8d8de4de1c3912bc43e8 | [
"Apache-2.0"
] | null | null | null | # Features
- Add !"" unixcmd datatype. Content of the string will be considered as UNIX command which will be executed and STDOUT will be placed as string object to the stack
- Add j"" json datatype. Content of the string will be considered as JSON which will be parsed and stored as the value of the object of type json, placed on the stack.
- str/Strip and str/Lines functions
- fs/File/* functions
- Functors are now "first-class citizens".
```
/* Converting string to json with functor json */
( '{"answer": 42}':(json) )
```
- JSON is now a datatype and has a functor for converting strings
- JSON querying and generation functions as json/*
- new functor "all" - executing operations over all content of the stack
- new loop "loop" - running loop until externally interrupted
- new functions in args* tree. They will query what you pass through --args
- new loop function 'times' which calls a function n times
- new loop function 'over' which pushes data from (* ) to the stack and calls a function
- new function 'type' returns the name of the type of the data element on the stack
- new function 'seq' - generating sequence of the values in DBLOCK
- new function 'rotateFront'/'rotate', 'rotateBack' - rotating the stack front to back or back to front
- new datatype and opcode http
- string datatype supports pre-function feature
- data retrieval from remote http/https endpoints
- http* functions for retrieval from remote http/https endpoints
# Bugfixes
- Fixing typos in MAT matrix files
- Fix bug in sleep function causing overflow if sleep is specified in integers
- Fix the fact that \` references were declared, but not implemented
| 51.0625 | 167 | 0.760098 | eng_Latn | 0.991896 |
8a69fd5b1e1c1303aa962e040cccfbea894ee908 | 1,146 | md | Markdown | README.md | nkb-bd/ci-ready-player-one | 1f2f19a9c113f07ab044a43a64d2e66abd622197 | [
"MIT"
] | null | null | null | README.md | nkb-bd/ci-ready-player-one | 1f2f19a9c113f07ab044a43a64d2e66abd622197 | [
"MIT"
] | null | null | null | README.md | nkb-bd/ci-ready-player-one | 1f2f19a9c113f07ab044a43a64d2e66abd622197 | [
"MIT"
] | null | null | null | # CI3 ready player one
A ready-to-use admin panel with a ready-made landing page.
**[VIEW THE DEMO](https://lukman-nakib.000webhostapp.com/ci-ready-player-one/admin)**
Inspired by:
### CI3 Fire Starter (ci3-fire-starter)
by JasonBaier
### Template from Colorlib
https://github.com/JasonBaier/ci3-fire-starter
## Installation
Create a database named 'ci_voyager_junior' and update application/config/database.php with your database credentials.
Main admin login:
URL: your main URL with '/login'
Username: admin
Password: admin
* CodeIgniter 3.x
* Base controllers for Public, Private, Admin and API classes
* Basic admin tool with dashboard, user management, settings and Contact Us message list
* Database sessions
<a name="system-requirements"></a>
## SYSTEM REQUIREMENTS
* PHP version 5.6+ (successfully tested on PHP 7.0.x)
* MySQL 5.1+ (successfully tested on MySQL 5.7)
* PHP Mcrypt extension if you want to use the Encryption class
See CodeIgniter's [Server Requirements](https://codeigniter.com/user_guide/general/requirements.html)
for the complete list.
**Note:** This is my first GitHub repo; pardon any mistakes.
| 30.157895 | 117 | 0.745201 | eng_Latn | 0.847089 |
8a6a9311d43219e49e5ff1570c048bc7453f74b4 | 1,343 | md | Markdown | docs/apigeecli_envoy-bindings_create.md | srinandan/apigeecli | 40599ad07a8499baf7c24b2d2b69363f676bb7ec | [
"Apache-2.0"
] | 16 | 2020-01-01T17:49:05.000Z | 2022-03-29T05:00:11.000Z | docs/apigeecli_envoy-bindings_create.md | srinandan/apigeecli | 40599ad07a8499baf7c24b2d2b69363f676bb7ec | [
"Apache-2.0"
] | 15 | 2020-06-17T18:12:34.000Z | 2022-02-25T15:58:39.000Z | docs/apigeecli_envoy-bindings_create.md | srinandan/apigeecli | 40599ad07a8499baf7c24b2d2b69363f676bb7ec | [
"Apache-2.0"
] | 7 | 2020-12-25T23:21:29.000Z | 2021-11-30T09:54:58.000Z | ## apigeecli envoy-bindings create
Create a new Envoy binding; Binds an Envoy service to an API Product
### Synopsis
Create a new Envoy binding; Binds an Envoy service to an API Product
```
apigeecli envoy-bindings create [flags]
```
### Options
```
-f, --approval string Approval type
--attrs stringToString Custom attributes (default [])
-d, --desc string Description for the API Product
-m, --displayname string Display Name of the API Product
-e, --envs stringArray Environments to enable
-h, --help help for create
-i, --interval string Quota Interval
-n, --name string Name of the API Product
-q, --quota string Quota Amount
-r, --remote-svcs stringArray Envoy Service names. Ex: -s service1:port1 -s service2:port2
-s, --scopes stringArray OAuth scopes
-u, --unit string Quota Unit
```
### Options inherited from parent commands
```
-a, --account string Path Service Account private key in JSON
--disable-check Disable check for newer versions
-t, --token string Google OAuth Token
```
### SEE ALSO
* [apigeecli envoy-bindings](apigeecli_envoy-bindings.md) - Manage Envoy API Product Bindings Apigee
###### Auto generated by spf13/cobra on 2-Dec-2021
| 31.232558 | 101 | 0.643336 | eng_Latn | 0.596645 |
8a6aceffeea8e82825ed05bf1a126a1a1ede139a | 8,320 | md | Markdown | README.md | csonuryilmaz/appcircle-cache-push-component | 1fb39e84e8c2a8f3a82a01829957efc3be52bb5a | [
"MIT"
] | null | null | null | README.md | csonuryilmaz/appcircle-cache-push-component | 1fb39e84e8c2a8f3a82a01829957efc3be52bb5a | [
"MIT"
] | null | null | null | README.md | csonuryilmaz/appcircle-cache-push-component | 1fb39e84e8c2a8f3a82a01829957efc3be52bb5a | [
"MIT"
] | null | null | null | # Appcircle Cache Push
Uploads user-selected files and folders to the Appcircle cache.
### Required Input Variables
- `AC_CACHE_LABEL`: User defined cache label to identify one cache from others. Both cache push and pull steps should have the same value to match.
- `AC_CACHE_INCLUDED_PATHS`: Specifies the files and folders which should be in cache. Multiple glob patterns can be provided as a colon-separated list. For example; .gradle:app/build
- `AC_TOKEN_ID`: System generated token used for getting signed url. Zipped cache file is uploaded to signed url.
- `ASPNETCORE_CALLBACK_URL`: System generated callback url for signed url web service. It's different for various environments.
### Optional Input Variables
- `AC_CACHE_EXCLUDED_PATHS`: Specifies the files and folders which should be ignored from cache. Multiple glob patterns can be provided as a colon-separated list. For example; .gradle/*.lock:*.apk
- `AC_REPOSITORY_DIR`: Cloned git repository path. Included and excluded paths are defined relative to cloned repository, except `~/`, `/` or environment variable prefixed paths. See following sections for more details.
## Included & Excluded Paths
The cache step uses a pattern in order to select files and folders. Note that the pattern is not a regexp; it's closer to a shell glob. (_The verb "glob" is an old Unix term for filename matching in a shell._)
We also have some keywords or characters for special use cases, especially for system folders. The following sections summarize the cache step's supported patterns for included and excluded paths.
### System vs. Repository
In order to distinguish between a repository resource and a system resource, the cache step checks the prefix of each given pattern.
The word "resource", as used in this document, means files or folders in this context.
Repository resources begin directly with the glob pattern. They shouldn't be prefixed with `/` or other folder tree characters.
For example:
- `.gradle/` is .gradle folder in project repository.
- `local.properties` is single file in project repository.
- `*.apk` is related with .apk files in project repository.
Repository resources are generated by git clone in most cases. For this reason, pay attention to step order while using cache and git clone for repository resources.
On the other hand, system resources under `$HOME` begin with the `~/` pattern. These resources are generated during the build, and in most cases they're not included in the repository.
For example:
- `~/.gradle/` is .gradle folder at $HOME.
- `~/Library/Caches/CocoaPods` is Cocoapods caches folder at $HOME.
Also other system-wide resources are reachable with prefix `/`.
---
**Note:** We should be careful with dynamic folder paths, which are temporary per build.
From build to build, their absolute path changes. For example, the `_appcircle_temp` folder's absolute path is `/Volumes/agent-disk/agent/workflow_data/xjr1walp.ikh/_appcircle_temp` on one build and `/Volumes/agent-disk/agent/workflow_data/y3jdjln4.0kj/_appcircle_temp` on another build.
So, for those kinds of resources, we should prefix include and exclude paths with Appcircle-specific (reserved) environment variables.
For example:
- `$AC_TEMP_DIR/appcircle_build_ios_simulator/SimulatorDerivedData` is simulator derived data resources at `_appcircle_temp`.
See full list of environment variables at [Appcircle docs](https://docs.appcircle.io/environment-variables/appcircle-specific-environment-variables/).
---
### Glob Patterns
#### `*`
Match zero or more characters. A glob consisting of only the asterisk and no other characters or wildcards will match all files or folders in that folder.
Examples:
- `AC_CACHE_INCLUDED_PATHS=*`: All files and folders in repository included.
We can also focus on subfolders by prefixing parent folders relative to the repository.
- `AC_CACHE_INCLUDED_PATHS=app/*`: All files and folders in "app" folder included.
The asterisk is usually combined with a file extension and other characters for prefix or instring matches. See examples below:
- `AC_CACHE_INCLUDED_PATHS=*.properties`: All files with .properties extension in repository root. (_no subfolders included_)
- `AC_CACHE_INCLUDED_PATHS=gradle*`: All files and folders beginning with "gradle" in the repository root. (_no subfolders included_)
- `AC_CACHE_INCLUDED_PATHS=*release*`: All files and folders that contain "release" in the repository root. (_no subfolders included_)
We can also focus on subfolders by prefixing parent folders relative to the repository.
- `AC_CACHE_INCLUDED_PATHS=app/*.properties`: All files with .properties extension in "app" folder. (_no subfolders included_)
- `AC_CACHE_INCLUDED_PATHS=app/gradle*`: All files beginning with "gradle" in the "app" folder. (_no subfolders included_)
- `AC_CACHE_INCLUDED_PATHS=app/*release*`: All files that contain "release" in the "app" folder. (_no subfolders included_)
Including subfolders requires recursion. See the following section for details of `**` usage.
For the examples above, if you need to exclude the relevant folders and select only files, use the `/\*`-suffixed version of the same pattern in AC_CACHE_EXCLUDED_PATHS. It will discard all matched folders for the related include.
#### `**`
Match all folders recursively. This is used to descend into the folder tree and find all files and folders in subfolders of the current folder.
- `AC_CACHE_INCLUDED_PATHS=**/*.properties`: All files with .properties extension and folders ending with ".properties" in repository.
- `AC_CACHE_INCLUDED_PATHS=**/gradle*`: All files and folders begin with "gradle" in repository.
- `AC_CACHE_INCLUDED_PATHS=**/*release*`: All files and folders that contain "release" in repository.
We can also focus on subfolders, so that the recursion starts from there, by prefixing parent folders relative to the repository.
- `AC_CACHE_INCLUDED_PATHS=app/**/*.properties`: All files with .properties extension and folders ending with ".properties" in "app" folder.
- `AC_CACHE_INCLUDED_PATHS=app/**/gradle*`: All files and folders begin with "gradle" in "app" folder.
- `AC_CACHE_INCLUDED_PATHS=app/**/*release*`: All files and folders that contain "release" in "app" folder.
For all the examples above, if you need to exclude the relevant folders and select only files, use the `/\*`-suffixed version of the same pattern in AC_CACHE_EXCLUDED_PATHS. It will discard all matched folders for the related include.
#### Notice
We should be careful while using excluded paths, especially for cases where the defined pattern matches both a file and a folder in the same path. Let's explain the situation with an example.
Assume that we want to select all files beginning with `gradle*`.
- `AC_CACHE_INCLUDED_PATHS=**/gradle*`
With above definition we get the following files and folders:
```txt
**_gradle*.zip:
app/gradle/
app/gradle/b.txt
app/gradle/c.txt
app/gradle/a.txt
app/gradle.x
app/src/gradle.y
gradle/
gradle/wrapper/
gradle/wrapper/gradle-wrapper.properties
gradle/wrapper/gradle-wrapper.jar
gradle.properties
gradlew
gradlew.bat
```
Since we want only files, we basically add below pattern to excludes.
- `AC_CACHE_EXCLUDED_PATHS=**/gradle*/\*`
After the modification, we get the following files:
```txt
**_gradle*.zip:
app/gradle.x
app/src/gradle.y
gradle.properties
gradlew
gradlew.bat
```
But now we have two missing "gradle*"-prefixed files under the "gradle/wrapper" folder. Our folder exclude removes them from the parent folder.
In order to add those missing files, we need to define an additional include pattern for the "gradle" subfolder which selects "gradle*"-prefixed files like before. Since we want only files, we also define an exclude pattern specific to that subfolder.
- `AC_CACHE_INCLUDED_PATHS=**/gradle*:gradle/**/gradle*`
- `AC_CACHE_EXCLUDED_PATHS=**/gradle*/\*:gradle/**/gradle*/\*`
Now we have all "gradle*" prefixed files.
```txt
**_gradle*.zip:
app/gradle.x
app/src/gradle.y
gradle.properties
gradlew
gradlew.bat
gradle_**_gradle*.zip:
gradle/wrapper/gradle-wrapper.jar
gradle/wrapper/gradle-wrapper.properties
```
As an alternative method, other "gradle*" prefixed files can be added with a specific include pattern like `**/gradle-wrapper.*` without using any extra exclude.
```txt
**_gradle-wrapper.*.zip
gradle/wrapper/gradle-wrapper.jar
gradle/wrapper/gradle-wrapper.properties
```
| 45.966851 | 290 | 0.776322 | eng_Latn | 0.995216 |
8a6b0eb0874b70bda066d831fc1a7af4908ca249 | 19,690 | md | Markdown | docs/en/query_language/alter.md | maximdanilchenko/ClickHouse | 7a5c3986288ca21511843014fa2225a9a8036118 | [
"Apache-2.0"
] | 1 | 2018-10-13T15:32:07.000Z | 2018-10-13T15:32:07.000Z | docs/en/query_language/alter.md | maximdanilchenko/ClickHouse | 7a5c3986288ca21511843014fa2225a9a8036118 | [
"Apache-2.0"
] | null | null | null | docs/en/query_language/alter.md | maximdanilchenko/ClickHouse | 7a5c3986288ca21511843014fa2225a9a8036118 | [
"Apache-2.0"
] | null | null | null | <a name="query_language_queries_alter"></a>
## ALTER
  The `ALTER` query is only supported for `*MergeTree` tables, as well as `Merge` and `Distributed`. The query has several variations.
### Column Manipulations
Changing the table structure.
```sql
ALTER TABLE [db].name [ON CLUSTER cluster] ADD|DROP|MODIFY COLUMN ...
```
In the query, specify a list of one or more comma-separated actions.
Each action is an operation on a column.
The following actions are supported:
```sql
ADD COLUMN name [type] [default_expr] [AFTER name_after]
```
Adds a new column to the table with the specified name, type, and `default_expr` (see the section "Default expressions"). If you specify `AFTER name_after` (the name of another column), the column is added after the specified one in the list of table columns. Otherwise, the column is added to the end of the table. Note that there is no way to add a column to the beginning of a table. For a chain of actions, 'name_after' can be the name of a column that is added in one of the previous actions.
 Adding a column just changes the table structure, without performing any actions with data. The data doesn't appear on the disk after ALTER. If the data is missing for a column when reading from the table, it is filled in with default values (by performing the default expression if there is one, or using zeros or empty strings). The column appears on the disk after merging data parts (see MergeTree).
This approach allows us to complete the ALTER query instantly, without increasing the volume of old data.
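 For example, a minimal sketch (the column name, default, and position are illustrative, assuming the `visits` table used elsewhere on this page):
```sql
-- Hypothetical column; the query completes almost instantly because existing data is not rewritten.
ALTER TABLE visits ADD COLUMN browser String DEFAULT '' AFTER CounterID
```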
```sql
DROP COLUMN name
```
Deletes the column with the name 'name'.
Deletes data from the file system. Since this deletes entire files, the query is completed almost instantly.
```sql
MODIFY COLUMN name [type] [default_expr]
```
Changes the 'name' column's type to 'type' and/or the default expression to 'default_expr'. When changing the type, values are converted as if the 'toType' function were applied to them.
If only the default expression is changed, the query doesn't do anything complex, and is completed almost instantly.
Changing the column type is the only complex action – it changes the contents of files with data. For large tables, this may take a long time.
There are several processing stages:
- Preparing temporary (new) files with modified data.
- Renaming old files.
- Renaming the temporary (new) files to the old names.
- Deleting the old files.
Only the first stage takes time. If there is a failure at this stage, the data is not changed.
If there is a failure during one of the successive stages, data can be restored manually. The exception is if the old files were deleted from the file system but the data for the new files did not get written to the disk and was lost.
There is no support for changing the column type in arrays and nested data structures.
The `ALTER` query lets you create and delete separate elements (columns) in nested data structures, but not whole nested data structures. To add a nested data structure, you can add columns with a name like `name.nested_name` and the type `Array(T)`. A nested data structure is equivalent to multiple array columns with a name that has the same prefix before the dot.
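 For instance, a hypothetical nested structure named `goals` could be added as two array columns that share the same prefix (the names are illustrative only):
```sql
ALTER TABLE visits
    ADD COLUMN goals.id Array(UInt32),
    ADD COLUMN goals.price Array(Float64)
```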
There is no support for deleting columns in the primary key or the sampling key (columns that are in the `ENGINE` expression). Changing the type for columns that are included in the primary key is only possible if this change does not cause the data to be modified (for example, it is allowed to add values to an Enum or change a type with `DateTime` to `UInt32`).
If the `ALTER` query is not sufficient for making the table changes you need, you can create a new table, copy the data to it using the `INSERT SELECT` query, then switch the tables using the `RENAME` query and delete the old table.
The `ALTER` query blocks all reads and writes for the table. In other words, if a long `SELECT` is running at the time of the `ALTER` query, the `ALTER` query will wait for it to complete. At the same time, all new queries to the same table will wait while this `ALTER` is running.
For tables that don't store data themselves (such as `Merge` and `Distributed`), `ALTER` just changes the table structure, and does not change the structure of subordinate tables. For example, when running ALTER for a `Distributed` table, you will also need to run `ALTER` for the tables on all remote servers.
The `ALTER` query for changing columns is replicated. The instructions are saved in ZooKeeper, then each replica applies them. All `ALTER` queries are run in the same order. The query waits for the appropriate actions to be completed on the other replicas. However, a query to change columns in a replicated table can be interrupted, and all actions will be performed asynchronously.
### Manipulations With Partitions and Parts
It only works for tables in the `MergeTree` family. The following operations are available:
- `DETACH PARTITION` – Move a partition to the 'detached' directory and forget it.
- `DROP PARTITION` – Delete a partition.
- `ATTACH PART|PARTITION` – Add a new part or partition from the `detached` directory to the table.
- `FREEZE PARTITION` – Create a backup of a partition.
- `FETCH PARTITION` – Download a partition from another server.
Each type of query is covered separately below.
A partition in a table is data for a single calendar month. This is determined by the values of the date key specified in the table engine parameters. Each month's data is stored separately in order to simplify manipulations with this data.
A "part" in the table is part of the data from a single partition, sorted by the primary key.
You can use the `system.parts` table to view the set of table parts and partitions:
```sql
SELECT * FROM system.parts WHERE active
```
`active` – Only count active parts. Inactive parts are, for example, source parts remaining after merging to a larger part – these parts are deleted approximately 10 minutes after merging.
Another way to view a set of parts and partitions is to go into the directory with table data.
 Data directory: `/var/lib/clickhouse/data/database/table/`, where `/var/lib/clickhouse/` is the path to the ClickHouse data, 'database' is the database name, and 'table' is the table name. Example:
```bash
$ ls -l /var/lib/clickhouse/data/test/visits/
total 48
drwxrwxrwx 2 clickhouse clickhouse 20480 May 5 02:58 20140317_20140323_2_2_0
drwxrwxrwx 2 clickhouse clickhouse 20480 May 5 02:58 20140317_20140323_4_4_0
drwxrwxrwx 2 clickhouse clickhouse 4096 May 5 02:55 detached
-rw-rw-rw- 1 clickhouse clickhouse 2 May 5 02:58 increment.txt
```
 Here, `20140317_20140323_2_2_0` and `20140317_20140323_4_4_0` are the directories of data parts.
Let's break down the name of the first part: `20140317_20140323_2_2_0`.
- `20140317` is the minimum date of the data in the chunk.
- `20140323` is the maximum date of the data in the chunk.
- `2` is the minimum number of the data block.
- `2` is the maximum number of the data block.
- `0` is the chunk level (the depth of the merge tree it is formed from).
Each piece relates to a single partition and contains data for just one month.
`201403` is the name of the partition. A partition is a set of parts for a single month.
On an operating server, you can't manually change the set of parts or their data on the file system, since the server won't know about it.
 For non-replicated tables, you can do this when the server is stopped, but we don't recommend it.
For replicated tables, the set of parts can't be changed in any case.
The `detached` directory contains parts that are not used by the server - detached from the table using the `ALTER ... DETACH` query. Parts that are damaged are also moved to this directory, instead of deleting them. You can add, delete, or modify the data in the 'detached' directory at any time – the server won't know about this until you make the `ALTER TABLE ... ATTACH` query.
```sql
ALTER TABLE [db.]table DETACH PARTITION 'name'
```
Move all data for partitions named 'name' to the 'detached' directory and forget about them.
The partition name is specified in YYYYMM format. It can be indicated in single quotes or without them.
After the query is executed, you can do whatever you want with the data in the 'detached' directory — delete it from the file system, or just leave it.
The query is replicated – data will be moved to the 'detached' directory and forgotten on all replicas. The query can only be sent to a leader replica. To find out if a replica is a leader, perform SELECT to the 'system.replicas' system table. Alternatively, it is easier to make a query on all replicas, and all except one will throw an exception.
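 For example, using the `test.visits` table and the `201403` partition from the listing above:
```sql
ALTER TABLE test.visits DETACH PARTITION '201403'
```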
```sql
ALTER TABLE [db.]table DROP PARTITION 'name'
```
The same as the `DETACH` operation. Deletes data from the table. Data parts will be tagged as inactive and will be completely deleted in approximately 10 minutes. The query is replicated – data will be deleted on all replicas.
```sql
ALTER TABLE [db.]table ATTACH PARTITION|PART 'name'
```
Adds data to the table from the 'detached' directory.
It is possible to add data for an entire partition or a separate part. For a part, specify the full name of the part in single quotes.
The query is replicated. Each replica checks whether there is data in the 'detached' directory. If there is data, it checks the integrity, verifies that it matches the data on the server that initiated the query, and then adds it if everything is correct. If not, it downloads data from the query requestor replica, or from another replica where the data has already been added.
So you can put data in the 'detached' directory on one replica, and use the ALTER ... ATTACH query to add it to the table on all replicas.
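 For example, to attach either the whole partition or a single part from the earlier listing:
```sql
ALTER TABLE test.visits ATTACH PARTITION '201403'
ALTER TABLE test.visits ATTACH PART '20140317_20140323_2_2_0'
```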
```sql
ALTER TABLE [db.]table FREEZE PARTITION 'name'
```
Creates a local backup of one or multiple partitions. The name can be the full name of the partition (for example, 201403), or its prefix (for example, 2014): then the backup will be created for all the corresponding partitions.
The query does the following: for a data snapshot at the time of execution, it creates hardlinks to table data in the directory `/var/lib/clickhouse/shadow/N/...`
`/var/lib/clickhouse/` is the working ClickHouse directory from the config.
`N` is the incremental number of the backup.
The same structure of directories is created inside the backup as inside `/var/lib/clickhouse/`.
It also performs 'chmod' for all files, forbidding writes to them.
The backup is created almost instantly (but first it waits for current queries to the corresponding table to finish running). At first, the backup doesn't take any space on the disk. As the system works, the backup can take disk space, as data is modified. If the backup is made for old enough data, it won't take space on the disk.
After creating the backup, data from `/var/lib/clickhouse/shadow/` can be copied to the remote server and then deleted on the local server.
The entire backup process is performed without stopping the server.
The `ALTER ... FREEZE PARTITION` query is not replicated. A local backup is only created on the local server.
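 For example, the first query below backs up a single month, while the second uses a prefix to back up every partition of 2014 (both use the table from the earlier listing):
```sql
ALTER TABLE test.visits FREEZE PARTITION '201403'
ALTER TABLE test.visits FREEZE PARTITION '2014'
```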
As an alternative, you can manually copy data from the `/var/lib/clickhouse/data/database/table` directory.
But if you do this while the server is running, race conditions are possible when copying directories with files being added or changed, and the backup may be inconsistent. You can do this if the server isn't running – then the resulting data will be the same as after the `ALTER TABLE t FREEZE PARTITION` query.
`ALTER TABLE ... FREEZE PARTITION` only copies data, not table metadata. To make a backup of table metadata, copy the file `/var/lib/clickhouse/metadata/database/table.sql`
To restore from a backup:
 - Use the CREATE query to create the table if it doesn't exist. The query can be taken from an .sql file (replace `ATTACH` in it with `CREATE`).
 - Copy the data from the data/database/table/ directory inside the backup to the `/var/lib/clickhouse/data/database/table/detached/` directory.
 - Run `ALTER TABLE ... ATTACH PARTITION YYYYMM` queries, where `YYYYMM` is the month, for every month.
In this way, data from the backup will be added to the table.
Restoring from a backup doesn't require stopping the server.
### Backups and Replication
Replication provides protection from device failures. If all data disappeared on one of your replicas, follow the instructions in the "Restoration after failure" section to restore it.
For protection from device failures, you must use replication. For more information about replication, see the section "Data replication".
Backups protect against human error (accidentally deleting data, deleting the wrong data or in the wrong cluster, or corrupting data).
For high-volume databases, it can be difficult to copy backups to remote servers. In such cases, to protect from human error, you can keep a backup on the same server (it will reside in `/var/lib/clickhouse/shadow/`).
```sql
ALTER TABLE [db.]table FETCH PARTITION 'name' FROM 'path-in-zookeeper'
```
This query only works for replicatable tables.
It downloads the specified partition from the shard that has its `ZooKeeper path` specified in the `FROM` clause, then puts it in the `detached` directory for the specified table.
Although the query is called `ALTER TABLE`, it does not change the table structure, and does not immediately change the data available in the table.
Data is placed in the `detached` directory. You can use the `ALTER TABLE ... ATTACH` query to attach the data.
The ` FROM` clause specifies the path in ` ZooKeeper`. For example, `/clickhouse/tables/01-01/visits`.
Before downloading, the system checks that the partition exists and the table structure matches. The most appropriate replica is selected automatically from the healthy replicas.
The `ALTER ... FETCH PARTITION` query is not replicated. The partition will be downloaded to the 'detached' directory only on the local server. Note that if after this you use the `ALTER TABLE ... ATTACH` query to add data to the table, the data will be added on all replicas (on one of the replicas it will be added from the 'detached' directory, and on the rest it will be loaded from neighboring replicas).
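 For example, using the ZooKeeper path mentioned above and then attaching the downloaded data:
```sql
ALTER TABLE test.visits FETCH PARTITION '201403' FROM '/clickhouse/tables/01-01/visits'
ALTER TABLE test.visits ATTACH PARTITION '201403'
```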
### Synchronicity of ALTER Queries
For non-replicatable tables, all `ALTER` queries are performed synchronously. For replicatable tables, the query just adds instructions for the appropriate actions to `ZooKeeper`, and the actions themselves are performed as soon as possible. However, the query can wait for these actions to be completed on all the replicas.
For `ALTER ... ATTACH|DETACH|DROP` queries, you can use the `replication_alter_partitions_sync` setting to set up waiting.
Possible values: `0` – do not wait; `1` – only wait for own execution (default); `2` – wait for all.
<a name="query_language_queries_show_databases"></a>
### Mutations
Mutations are an ALTER query variant that allows changing or deleting rows in a table. In contrast to standard `UPDATE` and `DELETE` queries that are intended for point data changes, mutations are intended for heavy operations that change a lot of rows in a table.
The functionality is in beta stage and is available starting with the 1.1.54388 version. Currently *MergeTree table engines are supported (both replicated and unreplicated).
Existing tables are ready for mutations as-is (no conversion necessary), but after the first mutation is applied to a table, its metadata format becomes incompatible with previous server versions and falling back to a previous version becomes impossible.
Currently available commands:
```sql
ALTER TABLE [db.]table DELETE WHERE filter_expr
```
The `filter_expr` must be of type UInt8. The query deletes rows in the table for which this expression takes a non-zero value.
```sql
ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] WHERE filter_expr
```
The command is available starting with the 18.12.14 version. The `filter_expr` must be of type UInt8. This query updates values of specified columns to the values of corresponding expressions in rows for which the `filter_expr` takes a non-zero value. Values are casted to the column type using the `CAST` operator. Updating columns that are used in the calculation of the primary or the partition key is not supported.
One query can contain several commands separated by commas.
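 For illustration, the following sketches assume hypothetical `CounterID`, `UserID`, and `Duration` columns in a `visits` table:
```sql
-- Delete all rows for a hypothetical counter.
ALTER TABLE visits DELETE WHERE CounterID = 42
-- Rewrite a column for the matching rows.
ALTER TABLE visits UPDATE Duration = Duration * 2 WHERE UserID = 0
```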
For *MergeTree tables mutations execute by rewriting whole data parts. There is no atomicity - parts are substituted for mutated parts as soon as they are ready and a `SELECT` query that started executing during a mutation will see data from parts that have already been mutated along with data from parts that have not been mutated yet.
Mutations are totally ordered by their creation order and are applied to each part in that order. Mutations are also partially ordered with INSERTs - data that was inserted into the table before the mutation was submitted will be mutated and data that was inserted after that will not be mutated. Note that mutations do not block INSERTs in any way.
A mutation query returns immediately after the mutation entry is added (in case of replicated tables to ZooKeeper, for nonreplicated tables - to the filesystem). The mutation itself executes asynchronously using the system profile settings. To track the progress of mutations you can use the `system.mutations` table. A mutation that was successfully submitted will continue to execute even if ClickHouse servers are restarted. There is no way to roll back the mutation once it is submitted.
Entries for finished mutations are not deleted right away (the number of preserved entries is determined by the `finished_mutations_to_keep` storage engine parameter). Older mutation entries are deleted.
#### system.mutations Table
The table contains information about mutations of MergeTree tables and their progress. Each mutation command is represented by a single row. The table has the following columns:
**database**, **table** - The name of the database and table to which the mutation was applied.
**mutation_id** - The ID of the mutation. For replicated tables these IDs correspond to znode names in the `<table_path_in_zookeeper>/mutations/` directory in ZooKeeper. For unreplicated tables the IDs correspond to file names in the data directory of the table.
**command** - The mutation command string (the part of the query after `ALTER TABLE [db.]table`).
**create_time** - When this mutation command was submitted for execution.
 **block_numbers.partition_id**, **block_numbers.number** - A Nested column. For mutations of replicated tables it contains one record for each partition: the partition ID and the block number that was acquired by the mutation (in each partition only parts that contain blocks with numbers less than the block number acquired by the mutation in that partition will be mutated). Because in non-replicated tables block numbers in all partitions form a single sequence, for mutations of non-replicated tables the column will contain one record with a single block number acquired by the mutation.
**parts_to_do** - The number of data parts that need to be mutated for the mutation to finish.
**is_done** - Is the mutation done? Note that even if `parts_to_do = 0` it is possible that a mutation of a replicated table is not done yet because of a long-running INSERT that will create a new data part that will need to be mutated.
| 71.6 | 593 | 0.778771 | eng_Latn | 0.999483 |
8a6b174de48e7d12e4063baf04dc7b42b0fa8b3e | 4,309 | md | Markdown | articles/cognitive-services/text-analytics/includes/verify-language-detection-container.md | ebarbosahsi/azure-docs.es-es | b6dbec832e5dccd7118e05208730a561103b357e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/text-analytics/includes/verify-language-detection-container.md | ebarbosahsi/azure-docs.es-es | b6dbec832e5dccd7118e05208730a561103b357e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/text-analytics/includes/verify-language-detection-container.md | ebarbosahsi/azure-docs.es-es | b6dbec832e5dccd7118e05208730a561103b357e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Comprobación de la instancia del contenedor de detección de idioma
titleSuffix: Azure Cognitive Services
description: Aprenda a comprobar la instancia del contenedor de detección de idioma.
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: text-analytics
ms.topic: include
ms.date: 04/01/2020
ms.author: aahi
ms.openlocfilehash: 543a4d85982adadc86435819679351c8ffaa9814
ms.sourcegitcommit: 910a1a38711966cb171050db245fc3b22abc8c5f
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 03/19/2021
ms.locfileid: "96009939"
---
### <a name="verify-the-language-detection-container-instance"></a>Verify the language detection container instance
1. Select the **Overview** tab and copy the IP address.
1. Open a new browser tab and enter the IP address. For example, enter `http://<IP-address>:5000` (such as `http://55.55.55.55:5000`). The container's home page is displayed, which lets you know that the container is running.
    
1. Select the **Service API Description** link to go to the container's Swagger page.
1. Choose any of the **POST** APIs and select **Try it out**. The parameters are displayed, including the input for this example:
```json
{
"documents": [
{
"language": "en",
"id": "1",
"text": "Hello world. This is some input text that I love."
},
{
"language": "fr",
"id": "2",
"text": "Bonjour tout le monde"
},
{
"language": "es",
"id": "3",
"text": "La carretera estaba atascada. Había mucho tráfico el día de ayer."
}
]
}
```
1. Set **showStats** to `true`.
1. Select **Execute** to detect the language of the text.
    The model packaged in the container generates a score ranging from 0 to 1, where 1 indicates the highest confidence in the detected language.
    The returned JSON response includes the detected languages for the input text:
```json
{
"documents": [
{
"id": "1",
"detectedLanguages": [
{
"name": "English",
"iso6391Name": "en",
"score": 1
}
],
"statistics": {
"charactersCount": 11,
"transactionsCount": 1
}
},
{
"id": "2",
"detectedLanguages": [
{
"name": "French",
"iso6391Name": "fr",
"score": 1
}
],
"statistics": {
"charactersCount": 21,
"transactionsCount": 1
}
},
{
"id": "3",
"detectedLanguages": [
{
"name": "Spanish",
"iso6391Name": "es",
"score": 1
}
],
"statistics": {
"charactersCount": 65,
"transactionsCount": 1
}
},
{
"id": "4",
"detectedLanguages": [
{
"name": "French",
"iso6391Name": "fr",
"score": 1
}
],
"statistics": {
"charactersCount": 8,
"transactionsCount": 1
}
}
],
"errors": [],
"statistics": {
"documentsCount": 4,
"validDocumentsCount": 4,
"erroneousDocumentsCount": 0,
"transactionsCount": 4
}
}
```
Now we can correlate the documents in the response payload JSON with the original request payload documents by their corresponding `id`. Each document is treated independently, with various statistics such as `characterCount` and `transactionCount`. In addition, each resulting document has the `detectedLanguages` array with the `name`, `iso6391Name`, and `score` values for each detected language. When multiple languages are detected, `score` is used to determine the most likely one. | 32.89313 | 515 | 0.581573 | spa_Latn | 0.875356
8a6b2fabd315f7a258fb7d52802bbdb76e4126e0 | 1,589 | md | Markdown | README.md | itning/admin-server | 8cef9e63e15a161f9558c02b2d8497761ec8b569 | [
"Apache-2.0"
] | 1 | 2019-04-11T07:16:41.000Z | 2019-04-11T07:16:41.000Z | README.md | itning/admin-server | 8cef9e63e15a161f9558c02b2d8497761ec8b569 | [
"Apache-2.0"
] | null | null | null | README.md | itning/admin-server | 8cef9e63e15a161f9558c02b2d8497761ec8b569 | [
"Apache-2.0"
] | null | null | null | # Spring Boot Admin Server
[](https://github.com/itning/admin-server/stargazers)
[](https://github.com/itning/admin-server/network/members)
[](https://github.com/itning/admin-server/watchers)
[](https://github.com/itning?tab=followers)
[](https://github.com/itning/admin-server/issues)
[](https://github.com/itning/admin-server/blob/master/LICENSE)
[](https://github.com/itning/admin-server/commits)
[](https://github.com/itning/admin-server/releases)
[](https://github.com/itning/admin-server)
[](http://hits.dwyl.com/itning/admin-server)
[](https://github.com/itning/admin-server)
ADMIN_SERVER_USERNAME >> username
ADMIN_SERVER_PASSWORD >> password
| 88.277778 | 158 | 0.770925 | yue_Hant | 0.297562 |
8a6b4d0fed6d72f4591ef3a83892e530000f25ca | 44 | md | Markdown | README.md | genzi/blockchain | 4fc221440281b77b6b8aaaeffbbe52ade168ddf9 | [
"MIT"
] | null | null | null | README.md | genzi/blockchain | 4fc221440281b77b6b8aaaeffbbe52ade168ddf9 | [
"MIT"
] | null | null | null | README.md | genzi/blockchain | 4fc221440281b77b6b8aaaeffbbe52ade168ddf9 | [
"MIT"
] | null | null | null | # blockchain
Repository for learning golang
| 14.666667 | 30 | 0.840909 | eng_Latn | 0.937327 |
8a6b51c63fdfb6473904457f4b3e11c4fcf87e72 | 6,608 | md | Markdown | docs/2014/integration-services/change-data-capture/perform-an-incremental-load-of-multiple-tables.md | allenwux/sql-docs | d50119e47506b388a3e24a70ebeb249ec63b8b2a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/integration-services/change-data-capture/perform-an-incremental-load-of-multiple-tables.md | allenwux/sql-docs | d50119e47506b388a3e24a70ebeb249ec63b8b2a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/integration-services/change-data-capture/perform-an-incremental-load-of-multiple-tables.md | allenwux/sql-docs | d50119e47506b388a3e24a70ebeb249ec63b8b2a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-05T00:11:33.000Z | 2021-04-05T00:11:33.000Z | ---
title: "Perform an Incremental Load of Multiple Tables | Microsoft Docs"
ms.custom: ""
ms.date: "03/06/2017"
ms.prod: "sql-server-2014"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "integration-services"
ms.tgt_pltfrm: ""
ms.topic: conceptual
helpviewer_keywords:
- "incremental load [Integration Services],multiple tables"
ms.assetid: 39252dd5-09c3-46f9-a17b-15208cfd336d
caps.latest.revision: 25
author: douglaslMS
ms.author: douglasl
manager: craigg
---
# Perform an Incremental Load of Multiple Tables
In the topic, [Improving Incremental Loads with Change Data Capture](change-data-capture-ssis.md), the diagram illustrates a basic package that performs an incremental load on just one table. However, loading one table is not as common as having to perform an incremental load of multiple tables.
When you perform an incremental load of multiple tables, some steps have to be performed once for all the tables, and other steps have to be repeated for each source table. You have more than one option for implementing these steps in [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)]:
- Use a parent package and child packages.
- Use multiple Data Flow tasks in a single package.
## Loading Multiple Tables by Using a Parent Package and Multiple Child Packages
You can use a parent package to perform those steps that only have to be done once. The child packages will perform those steps that have to be done for each source table.
#### To create a parent package that performs those steps that only have to be done once
1. Create a parent package.
2. In the control flow, use an Execute SQL task or [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] expressions to calculate the endpoints.
     For an example of how to calculate endpoints, see [Specify an Interval of Change Data](specify-an-interval-of-change-data.md). A brief sketch is also shown after this procedure.
3. If needed, use a For Loop container to delay execution until change data for the selected period is ready.
For an example of such a For Loop container, see [Determine Whether the Change Data Is Ready](determine-whether-the-change-data-is-ready.md).
4. Use multiple Execute Package tasks to execute child packages for each table to be loaded. Pass the endpoints calculated in the parent package to each child package by using Parent Package Variable configurations.
For more information, see [Execute Package Task](../control-flow/execute-package-task.md) and [Use the Values of Variables and Parameters in a Child Package](../use-the-values-of-variables-and-parameters-in-a-child-package.md).
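As a rough illustration of step 2 in the preceding procedure, an Execute SQL task could compute the endpoints for a change data capture interval with a statement similar to the following sketch. The variable names and the 24-hour window are placeholders, not part of the referenced sample.
```sql
DECLARE @begin_time datetime = DATEADD(day, -1, GETDATE());
DECLARE @end_time   datetime = GETDATE();
-- Map the datetime interval to the log sequence numbers (LSNs) that bound the change data.
SELECT
    sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', @begin_time) AS FromLsn,
    sys.fn_cdc_map_time_to_lsn('largest less than or equal', @end_time)       AS ToLsn;
```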
#### To create child packages to perform those steps that have to be done for each source table
1. For each source table, create a child package.
2. In the control flow, use a Script task or an Execute SQL task to assemble the SQL statement that will be used to query for changes.
     For an example of how to assemble the query, see [Prepare to Query for the Change Data](prepare-to-query-for-the-change-data.md). A brief sketch is also shown after this procedure.
3. Use a single Data Flow task in each child package to load the change data and apply it to the destination. Configure the Data Flow as described in the following steps:
1. In the data flow, use a source component to query the change tables for the changes that fall within the selected endpoints.
For an example of how to query the change tables, see [Retrieve and Understand the Change Data](retrieve-and-understand-the-change-data.md).
2. Use a Conditional Split transformation to direct inserts, updates, and deletes to different outputs for appropriate processing.
For an example of how to configure this transformation to direct output, see [Process Inserts, Updates, and Deletes](process-inserts-updates-and-deletes.md).
3. Use a destination component to apply the inserts to the destination. Use OLE DB Command transformations with parameterized UPDATE and DELETE statements to apply updates and deletes to the destination.
For an example of how to use this transformation to apply updates and deletes, see [Apply the Changes to the Destination](apply-the-changes-to-the-destination.md).
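As a rough illustration of steps 2 and 3 in the preceding procedure, the assembled statement in a child package might resemble the following sketch. The capture instance `dbo_Customer` and the variable names are placeholders; in practice the endpoints are passed in from the parent package.
```sql
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Customer');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
-- The __$operation column (1 = delete, 2 = insert, 4 = update) feeds the Conditional Split.
SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_Customer(@from_lsn, @to_lsn, N'all');
```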
## Loading Multiple Tables by Using Multiple Data Flow Tasks in a Single Package
Alternatively, you can use a single package that contains a separate Data Flow task for each source table to be loaded.
#### To load multiple tables by using multiple Data Flow tasks in a single package
1. Create a single package.
2. In the control flow, use an Execute SQL Task or [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] expressions to calculate the endpoints.
For an example of how to calculate endpoints, see [Specify an Interval of Change Data](specify-an-interval-of-change-data.md).
3. If needed, use a For Loop container to delay execution until the change data for the selected interval is ready.
For an example of such a For Loop container, see [Determine Whether the Change Data Is Ready](determine-whether-the-change-data-is-ready.md).
4. Use a Script task or an Execute SQL task to assemble the SQL statement that will be used to query for changes.
For an example of how to assemble the query, see [Prepare to Query for the Change Data](prepare-to-query-for-the-change-data.md).
5. Use multiple Data Flow tasks to load the change data from each source table and apply it to the destination. Configure each Data Flow task as described in the following steps.
1. In each data flow, use a source component to query the change tables for the changes that fall within the selected endpoints.
For an example of how to query the change tables, see [Retrieve and Understand the Change Data](retrieve-and-understand-the-change-data.md).
2. Use a Conditional Split transformation to direct inserts, updates, and deletes to different outputs for appropriate processing.
For an example of how to configure this transformation to direct output, see [Process Inserts, Updates, and Deletes](process-inserts-updates-and-deletes.md).
3. Use a destination component to apply the inserts to the destination. Use OLE DB Command transformations with parameterized UPDATE and DELETE statements to apply updates and deletes to the destination.
For an example of how to use this transformation to apply updates and deletes, see [Apply the Changes to the Destination](apply-the-changes-to-the-destination.md).
| 63.538462 | 300 | 0.750454 | eng_Latn | 0.99134 |
8a6bbe76dc28666c1bdff25ae5ead2ebe54c9f80 | 7,822 | md | Markdown | docs/relational-databases/native-client-odbc-date-time/metadata-parameter-and-result.md | strikersree/sql-docs | 9ece10c2970a4f0812647149d3de2c6b75713e14 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/native-client-odbc-date-time/metadata-parameter-and-result.md | strikersree/sql-docs | 9ece10c2970a4f0812647149d3de2c6b75713e14 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/native-client-odbc-date-time/metadata-parameter-and-result.md | strikersree/sql-docs | 9ece10c2970a4f0812647149d3de2c6b75713e14 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Parameter and Result Metadata | Microsoft Docs"
ms.custom: ""
ms.date: "03/04/2017"
ms.prod: sql
ms.prod_service: "database-engine, sql-database, sql-data-warehouse, pdw"
ms.reviewer: ""
ms.technology: native-client
ms.topic: "reference"
helpviewer_keywords:
- "metadata [ODBC]"
ms.assetid: 1518e6e5-a6a8-4489-b779-064c5624df53
author: MightyPen
ms.author: genemi
manager: craigg
monikerRange: ">=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current"
---
# Metadata - Parameter and Result
[!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../../includes/appliesto-ss-asdb-asdw-pdw-md.md)]
[!INCLUDE[SNAC_Deprecated](../../includes/snac-deprecated.md)]
This topic describes what is returned in the implementation parameter descriptor (IPD) and implementation row descriptor (IRD) fields for date and time data types.
## Information Returned in IPD Fields
The following information is returned in the IPD fields:
|Parameter type|date|time|smalldatetime|datetime|datetime2|datetimeoffset|
|--------------------|----------|----------|-------------------|--------------|---------------|--------------------|
|SQL_DESC_CASE_SENSITIVE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|
|SQL_DESC_CONCISE_TYPE|SQL_TYPE_DATE|SQL_SS_TIME2|SQL_TYPE_TIMESTAMP|SQL_TYPE_TIMESTAMP|SQL_TYPE_TIMESTAMP|SQL_SS_TIMESTAMPOFFSET|
|SQL_DESC_DATETIME_INTERVAL_CODE|SQL_CODE_DATE|0|SQL_CODE_TIMESTAMP|SQL_CODE_TIMESTAMP|SQL_CODE_TIMESTAMP|0|
|SQL_DESC_DATETIME_INTERVAL_PRECISION|10|8,10..16|16|23|19, 21..27|26, 28..34|
|SQL_DESC_FIXED_PREC_SCALE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|
|SQL_DESC_LENGTH|10|8,10..16|16|23|19, 21..27|26, 28..34|
|SQL_DESC_OCTET_LENGTH|6|12|4|8|16|20|
|SQL_DESC_PRECISION|0|0..7|0|3|0..7|0..7|
|SQL_DESC_SCALE|0|0..7|0|3|0..7|0..7|
|SQL_DESC_TYPE|SQL_TYPE_DATE|SQL_SS_TYPE_TIME2|SQL_DATETIME|SQL_DATETIME|SQL_DATETIME|SQL_SS_TIMESTAMPOFFSET|
|SQL_DESC_TYPE_NAME|**date**|**time**|**smalldatetime** in IRD, **datetime2** in IPD|**datetime** in IRD, **datetime2** in IPD|**datetime2**|datetimeoffset|
|SQL_CA_SS_VARIANT_TYPE|SQL_C_TYPE_DATE|SQL_C_TYPE_BINARY|SQL_C_TYPE_TIMESTAMP|SQL_C_TYPE_TIMESTAMP|SQL_C_TYPE_TIMESTAMP|SQL_C_TYPE_BINARY|
|SQL_CA_SS_VARIANT_SQL_TYPE|SQL_TYPE_DATE|SQL_SS_TIME2|SQL_TYPE_TIMESTAMP|SQL_TYPE_TIMESTAMP|SQL_TYPE_TIMESTAMP|SQL_SS_TIMESTAMPOFFSET|
|SQL_CA_SS_SERVER_TYPE|N/A|N/A|SQL_SS_TYPE_SMALLDATETIME|SQL_SS_TYPE_DATETIME|SQL_SS_TYPE_DEFAULT|N/A|
Sometimes there are discontinuities in value ranges. For example, 9 is missing in 8,10..16. This is due to the addition of a decimal point when fractional precision is greater than zero.
**datetime2** is returned as the typename for **smalldatetime** and **datetime** because the driver uses this as a common type for transmitting all **SQL_TYPE_TIMESTAMP** values to the server.
SQL_CA_SS_VARIANT_SQL_TYPE is a new descriptor field. This field was added to the IRD and IPD to enable applications to specify the value type associated with **sqlvariant** (SQL_SSVARIANT) columns and parameters
SQL_CA_SS_SERVER_TYPE is a new IPD-only field to enable applications to control how values for parameters bound as SQL_TYPE_TYPETIMESTAMP (or as SQL_SS_VARIANT with a C type of SQL_C_TYPE_TIMESTAMP) are sent to the server. If SQL_DESC_CONCISE_TYPE is SQL_TYPE_TIMESTAMP (or is SQL_SS_VARIANT and the C type is SQL_C_TYPE_TIMESTAMP) when SQLExecute or SQLExecDirect is called, the value of SQL_CA_SS_SERVER_TYPE determines the tabular data stream (TDS) type of the parameter value, as follows:
|Value of SQL_CA_SS_SERVER_TYPE|Valid values for SQL_DESC_PRECISION|Valid values for SQL_DESC_LENGTH|TDS type|
|----------------------------------------|-------------------------------------------|----------------------------------------|--------------|
|SQL_SS_TYPE_DEFAULT|0..7|19, 21..27|**datetime2**|
|SQL_SS_TYPE_SMALLDATETIME|0|19|**smalldatetime**|
|SQL_SS_TYPE_DATETIME|3|23|**datetime**|
 The default setting of SQL_CA_SS_SERVER_TYPE is SQL_SS_TYPE_DEFAULT. The settings of SQL_DESC_PRECISION and SQL_DESC_LENGTH are validated with the setting of SQL_CA_SS_SERVER_TYPE as described in the table above. If this validation fails, SQL_ERROR is returned and a diagnostic record is logged with SQLState 07006 and the message "Restricted data type attribute violation". This error is also returned if SQL_CA_SS_SERVER_TYPE is set to a value other than SQL_SS_TYPE_DEFAULT and SQL_DESC_CONCISE_TYPE is not SQL_TYPE_TIMESTAMP. These validations are performed when descriptor consistency validation occurs, for example:
- When SQL_DESC_DATA_PTR is changed.
- At prepare or execute time (when SQLExecute, SQLExecDirect, SQLSetPos, or SQLBulkOperations is called).
- When an application forces a non-deferred prepare by calling SQLPrepare with deferred prepare disabled, or by calling SQLNumResultCols, SQLDescribeCol, or SQLDescribeParam for a statement that is prepared but not executed.
When SQL_CA_SS_SERVER_TYPE is set by a call to SQLSetDescField, its value must be SQL_SS_TYPE_DEFAULT, SQL_SS_TYPE_SMALLDATETIME, or SQL_SS_TYPE_DATETIME. If this is not the case, SQL_ERROR is returned and a diagnostic record is logged with SQLState HY092 and the message "Invalid attribute/option identifier".
The SQL_CA_SS_SERVER_TYPE attribute can be used by applications that depend on functionality supported by **datetime** and **smalldatetime**, but not **datetime2**. For example, **datetime2** requires the use of the **dateadd** and **datediif** functions, whereas **datetime** and **smalldatetime** also allow arithmetic operators. Most applications will not need to use this attribute and its use should be avoided.
## Information Returned in IRD Fields
The following information is returned in the IRD fields:
|Column Type|date|time|smalldatetime|datetime|datetime2|datetimeoffset|
|-----------------|----------|----------|-------------------|--------------|---------------|--------------------|
|SQL_DESC_AUTO_UNIQUE_VALUE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|
|SQL_DESC_CASE_SENSITIVE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|
|SQL_DESC_CONCISE_TYPE|SQL_TYPE_DATE|SQL_SS_TIME2|SQL_TYPE_TIMESTAMP|SQL_TYPE_TIMESTAMP|SQL_TYPE_TIMESTAMP|SQL_SS_TIMESTAMPOFFSET|
|SQL_DESC_DATETIME_INTERVAL_CODE|SQL_CODE_DATE|0|SQL_CODE_TIMESTAMP|SQL_CODE_TIMESTAMP|SQL_CODE_TIMESTAMP|0|
|SQL_DESC_DATETIME_INTERVAL_PRECISION|10|8,10..16|16|23|19, 21..27|26, 28..34|
|SQL_DESC_DISPLAY_SIZE|10|8,10..16|16|23|19, 21..27|26, 28..34|
|SQL_DESC_FIXED_PREC_SCALE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|SQL_FALSE|
|SQL_DESC_LENGTH|10|8,10..16|16|23|19, 21..27|26, 28..34|
|SQL_DESC_LITERAL_PREFIX|‘|‘|‘|‘|‘|‘|
|SQL_DESC_LITERAL_SUFFIX|‘|‘|‘|‘|‘|‘|
|SQL_DESC_LOCAL_TYPE_NAME|**date**|**time**|**smalldatetime**|**datetime**|**datetime2**|datetimeoffset|
|SQL_DESC_OCTET_LENGTH|6|12|4|8|16|20|
|SQL_DESC_PRECISION|0|0..7|0|3|0..7|0..7|
|SQL_DESC_SCALE|0|0..7|0|3|0..7|0..7|
|SQL_DESC_SEARCHABLE|SQL_PRED_SEARCHABLE|SQL_PRED_SEARCHABLE|SQL_PRED_SEARCHABLE|SQL_PRED_SEARCHABLE|SQL_PRED_SEARCHABLE|SQL_PRED_SEARCHABLE|
|SQL_DESC_TYPE|SQL_DATETIME|SQL_SS_TIME2|SQL_DATETIME|SQL_DATETIME|SQL_DATETIME|SQL_SS_TIMESTAMPOFFSET|
|SQL_DESC_TYPE_NAME|**date**|**time**|**smalldatetime**|**datetime**|**datetime2**|datetimeoffset|
|SQL_DESC_UNSIGNED|SQL_TRUE|SQL_TRUE|SQL_TRUE|SQL_TRUE|SQL_TRUE|SQL_TRUE|
## See Also
[Metadata (ODBC)](https://msdn.microsoft.com/library/99133efc-b1f2-46e9-8203-d90c324a8e4c)
| 79.816327 | 621 | 0.747379 | eng_Latn | 0.468324 |
8a6bd8bf9c6ff07254b0990550b27c5af0c59b7e | 28,474 | md | Markdown | docs/t-sql/statements/drop-index-transact-sql.md | B4V/sql-docs.ru-ru | 775c201cc90e5754b2d3748a80d220a8ba693758 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/statements/drop-index-transact-sql.md | B4V/sql-docs.ru-ru | 775c201cc90e5754b2d3748a80d220a8ba693758 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/statements/drop-index-transact-sql.md | B4V/sql-docs.ru-ru | 775c201cc90e5754b2d3748a80d220a8ba693758 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: DROP INDEX (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 05/11/2017
ms.prod: sql
ms.prod_service: database-engine, sql-database, sql-data-warehouse, pdw
ms.reviewer: ''
ms.technology: t-sql
ms.topic: language-reference
f1_keywords:
- DROP_INDEX_TSQL
- DROP INDEX
dev_langs:
- TSQL
helpviewer_keywords:
- nonclustered indexes [SQL Server], removing
- MAXDOP index option, DROP INDEX statement
- index removal [SQL Server]
- spatial indexes [SQL Server], dropping
- removing indexes
- deleting indexes
- dropping indexes
- MOVE TO clause
- clustered indexes, removing
- indexes [SQL Server], dropping
- filtered indexes [SQL Server], dropping
- ONLINE option
- indexes [SQL Server], moving
- XML indexes [SQL Server], dropping
- DROP INDEX statement
ms.assetid: 2b1464c8-934c-405f-8ef7-2949346b5372
author: CarlRabeler
ms.author: carlrab
manager: craigg
monikerRange: '>=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current'
ms.openlocfilehash: 3b84acd01f7291ad420cf2a643ffb9bc350e0a6a
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 10/01/2018
ms.locfileid: "47777972"
---
# <a name="drop-index-transact-sql"></a>DROP INDEX (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-all-md](../../includes/tsql-appliesto-ss2008-all-md.md)]
  Removes one or more relational, spatial, filtered, or XML indexes from the current database. You can drop a clustered index and move the resulting table to another filegroup or partition scheme in a single transaction by specifying the MOVE TO option.
  The DROP INDEX statement does not apply to indexes created by specifying PRIMARY KEY or UNIQUE constraints. To remove the constraint and the corresponding index, use [ALTER TABLE](../../t-sql/statements/alter-table-transact-sql.md) with the DROP CONSTRAINT clause.
> [!IMPORTANT]
>  The syntax defined in `<drop_backward_compatible_index>` will not be supported in a future version of [!INCLUDE[msCoName](../../includes/msconame-md.md)][!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Avoid using this syntax in new development work, and plan to modify applications that currently use this feature. Use the syntax described under `<drop_relational_or_xml_index>` instead. XML indexes cannot be dropped using the backward compatible syntax.
 [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Syntax
```
-- Syntax for SQL Server (All options except filegroup and filestream apply to Azure SQL Database.)
DROP INDEX [ IF EXISTS ]
{ <drop_relational_or_xml_or_spatial_index> [ ,...n ]
| <drop_backward_compatible_index> [ ,...n ]
}
<drop_relational_or_xml_or_spatial_index> ::=
index_name ON <object>
[ WITH ( <drop_clustered_index_option> [ ,...n ] ) ]
<drop_backward_compatible_index> ::=
[ owner_name. ] table_or_view_name.index_name
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}
<drop_clustered_index_option> ::=
{
MAXDOP = max_degree_of_parallelism
| ONLINE = { ON | OFF }
| MOVE TO { partition_scheme_name ( column_name )
| filegroup_name
| "default"
}
[ FILESTREAM_ON { partition_scheme_name
| filestream_filegroup_name
| "default" } ]
}
```
```
-- Syntax for Azure SQL Database
DROP INDEX
{ <drop_relational_or_xml_or_spatial_index> [ ,...n ]
}
<drop_relational_or_xml_or_spatial_index> ::=
index_name ON <object>
<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}
```
```
-- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse
DROP INDEX index_name ON [ database_name . [schema_name ] . | schema_name . ] table_name
[;]
```
## <a name="arguments"></a>Аргументы
*IF EXISTS*
**Applies to**: [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] ([!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] through [current version](http://go.microsoft.com/fwlink/p/?LinkId=299658)).
Conditionally drops the index only if it already exists.
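For illustration, a minimal sketch of this argument, assuming a hypothetical index `IX_Example` on a hypothetical table `dbo.MyTable` (neither is part of the sample database); the statement succeeds even if the index has already been removed:
```
-- Drops the index only if it exists; no error is raised otherwise.
DROP INDEX IF EXISTS IX_Example
    ON dbo.MyTable;
GO
```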
*index_name*
Is the name of the index to be dropped.
*database_name*
Is the name of the database.
*schema_name*
Is the name of the schema to which the table or view belongs.
*table_or_view_name*
Is the name of the table or view associated with the index. Spatial indexes are supported only on tables.
To display a report of the indexes on an object, use the [sys.indexes](../../relational-databases/system-catalog-views/sys-indexes-transact-sql.md) catalog view.
Windows Azure SQL Database supports the three-part name format database_name.[schema_name].object_name when database_name is the current database, or when database_name is tempdb and object_name starts with #.
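As a sketch, such a report could be produced for a hypothetical table `dbo.MyTable` as follows (the table name is an assumption used only for illustration):
```
-- List the indexes defined on one table.
SELECT name, type_desc, is_unique
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.MyTable', N'U');
```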
\<drop_clustered_index_option>
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], [!INCLUDE[sqldbesa](../../includes/sqldbesa-md.md)].
Controls clustered index options. These options cannot be used with other index types.
MAXDOP = *max_degree_of_parallelism*
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], [!INCLUDE[sqldbesa](../../includes/sqldbesa-md.md)] (Performance Levels P2 and P3).
Overrides the **max degree of parallelism** configuration option for the duration of the index operation. For more information, see [Configure the max degree of parallelism Server Configuration Option](../../database-engine/configure-windows/configure-the-max-degree-of-parallelism-server-configuration-option.md). Use MAXDOP to limit the number of processors used in a parallel plan execution. The maximum is 64 processors.
> [!IMPORTANT]
> MAXDOP cannot be used for spatial indexes or XML indexes.
*max_degree_of_parallelism* can be one of the following values:
1
Suppresses parallel plan generation.
\>1
Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see [Configure Parallel Index Operations](../../relational-databases/indexes/configure-parallel-index-operations.md).
> [!NOTE]
> Parallel index operations are not available in every edition of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. For a list of features that are supported by the editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], see [Editions and Supported Features for SQL Server 2016](../../sql-server/editions-and-supported-features-for-sql-server-2016.md).
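For example, a drop of a hypothetical clustered index `CIX_MyTable` could be limited to four processors as sketched below (the object names are assumptions, not sample-database objects):
```
-- Limit the index operation to at most four processors.
DROP INDEX CIX_MyTable
    ON dbo.MyTable
    WITH (MAXDOP = 4);
GO
```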
ONLINE = ON | **OFF**
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)].
Determines whether underlying tables and associated indexes are available for queries and data modification during the index operation. The default is OFF.
ON
Long-term table locks are not held. This allows queries and updates to the underlying tables to continue.
OFF
Table locks are applied and the tables are unavailable for the duration of the index operation.
The ONLINE option can only be specified when you drop clustered indexes. For more information, see the Remarks section.
> [!NOTE]
> Online index operations are not available in every edition of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. For a list of features that are supported by the editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], see [Editions and Supported Features for SQL Server 2016](../../sql-server/editions-and-supported-features-for-sql-server-2016.md).
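A minimal sketch of an online drop, again with hypothetical object names and assuming an edition that supports online index operations (a full example against the sample database appears in Example C):
```
-- The underlying table remains available for queries and updates.
DROP INDEX CIX_MyTable
    ON dbo.MyTable
    WITH (ONLINE = ON);
GO
```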
MOVE TO { _partition\_scheme\_name_**(**_column\_name_**)** | _filegroup\_name_ | **"** default **"** }
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)]. [!INCLUDE[sqldbesa](../../includes/sqldbesa-md.md)] supports "default" as the filegroup name.
Specifies a location to move the data rows that currently are in the leaf level of the clustered index. The data is moved to the new location in the form of a heap. You can specify either an existing filegroup or partition scheme as the new location, but the filegroup or partition scheme must already exist. MOVE TO is not valid for indexed views or nonclustered indexes. If a partition scheme or filegroup is not specified, the resulting table is placed in the same partition scheme or filegroup as defined for the clustered index.
If a clustered index is dropped by using MOVE TO, any nonclustered indexes on the base table are rebuilt, but they remain in their original filegroups or partition schemes. If the base table is moved to a different filegroup or partition scheme, the nonclustered indexes are not moved to coincide with the new location of the base table (heap). Therefore, the nonclustered indexes can become unaligned with the heap even if they were previously aligned with the clustered index. For more information about partitioned index alignment, see [Partitioned Tables and Indexes](../../relational-databases/partitions/partitioned-tables-and-indexes.md).
*partition_scheme_name* **(** *column_name* **)**
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], [!INCLUDE[sqldbesa](../../includes/sqldbesa-md.md)].
Specifies the partition scheme as the location for the resulting table. The partition scheme must have already been created by executing either [CREATE PARTITION SCHEME](../../t-sql/statements/create-partition-scheme-transact-sql.md) or [ALTER PARTITION SCHEME](../../t-sql/statements/alter-partition-scheme-transact-sql.md). If no location is specified and the table is partitioned, the table is included in the same partition scheme as the existing clustered index.
The column name in the scheme is not restricted to the columns in the index definition. Any column in the base table can be specified.
*filegroup_name*
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)].
Specifies the filegroup as the location for the resulting table. If no location is specified and the table is not partitioned, the resulting table is included in the same filegroup as the existing clustered index. The filegroup must already exist.
**"** default **"**
Specifies the default location for the resulting table.
> [!NOTE]
> In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE TO **"** default **"** or MOVE TO **[** default **]**. If **"** default **"** is specified, the QUOTED_IDENTIFIER option must be ON for the current session. This is the default setting. For more information, see [SET QUOTED_IDENTIFIER (Transact-SQL)](../../t-sql/statements/set-quoted-identifier-transact-sql.md).
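A sketch of the delimited identifier, using hypothetical object names:
```
-- "default" must be delimited, and QUOTED_IDENTIFIER must be ON.
SET QUOTED_IDENTIFIER ON;
GO
DROP INDEX CIX_MyTable
    ON dbo.MyTable
    WITH (MOVE TO "default");
GO
```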
FILESTREAM_ON { *partition_scheme_name* | *filestream_filegroup_name* | **"** default **"** }
**Applies to**: [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)].
Specifies a location to move the FILESTREAM table that currently is in the leaf level of the clustered index. The data is moved to the new location in the form of a heap. You can specify either an existing filegroup or partition scheme as the new location, but the filegroup or partition scheme must already exist. FILESTREAM ON is not valid for indexed views or nonclustered indexes. If a partition scheme is not specified, the data is placed in the same partition scheme or filegroup as defined for the clustered index.
*partition_scheme_name*
Specifies the partition scheme for the FILESTREAM data. The partition scheme must have already been created by executing either [CREATE PARTITION SCHEME](../../t-sql/statements/create-partition-scheme-transact-sql.md) or [ALTER PARTITION SCHEME](../../t-sql/statements/alter-partition-scheme-transact-sql.md). If no location is specified and the table is partitioned, the table is included in the same partition scheme as the existing clustered index.
If you specify a partition scheme for MOVE TO, you must use the same partition scheme for FILESTREAM ON.
*filestream_filegroup_name*
Specifies a FILESTREAM filegroup for FILESTREAM data. If no location is specified and the table is not partitioned, the data is included in the default FILESTREAM filegroup.
**"** default **"**
Specifies the default location for the FILESTREAM data.
> [!NOTE]
> In this context, default is not a keyword. It is an identifier for the default filegroup and must be delimited, as in MOVE TO **"** default **"** or MOVE TO **[** default **]**. If "default" is specified, the QUOTED_IDENTIFIER option must be ON for the current session. This is the default setting. For more information, see [SET QUOTED_IDENTIFIER (Transact-SQL)](../../t-sql/statements/set-quoted-identifier-transact-sql.md).
## <a name="remarks"></a>Remarks
When a nonclustered index is dropped, the index definition is removed from metadata and the index data pages (the B-tree) are removed from the database files. When a clustered index is dropped, the index definition is removed from metadata, and the data rows that were stored in the leaf level of the clustered index are kept in the resulting unordered table, a heap. All the space previously occupied by the index is regained. It can then be used by any database object.
An index cannot be dropped if the filegroup in which it is located is offline or set to read-only.
When the clustered index of an indexed view is dropped, all nonclustered indexes and auto-created statistics on the same view are automatically dropped. Manually created statistics are not dropped.
The syntax _table\_or\_view\_name_**.**_index\_name_ is maintained for backward compatibility. A spatial index or an XML index cannot be dropped by using the backward compatible syntax.
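For reference only, the backward compatible form looks like the following sketch (hypothetical names); prefer the `index_name ON <object>` form shown in the Syntax section:
```
-- Backward compatible syntax: [owner_name.]table_name.index_name. Avoid in new development.
DROP INDEX dbo.MyTable.IX_Example;
GO
```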
When indexes with 128 or more extents are dropped, the [!INCLUDE[ssDE](../../includes/ssde-md.md)] defers the actual page deallocations, and their associated locks, until after the transaction commits.
Sometimes indexes are dropped and re-created to reorganize or rebuild the index, for example to apply a new fill factor value or to reorganize data after a bulk load. For these tasks, using [ALTER INDEX](../../t-sql/statements/alter-index-transact-sql.md) is more efficient, especially for clustered indexes. ALTER INDEX REBUILD performs optimizations that avoid the overhead of rebuilding the nonclustered indexes.
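For instance, changing the fill factor is usually better done with an in-place rebuild than with a drop and re-create; a sketch with hypothetical names:
```
-- Rebuild in place instead of DROP INDEX followed by CREATE INDEX.
ALTER INDEX CIX_MyTable
    ON dbo.MyTable
    REBUILD WITH (FILLFACTOR = 80);
GO
```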
## <a name="using-options-with-drop-index"></a>Использование параметров инструкции DROP INDEX
При удалении кластеризованного индекса можно установить следующие параметры: MAXDOP, ONLINE и MOVE TO.
Используйте параметр MOVE TO, чтобы удалить кластеризованный индекс и переместить результирующую таблицу в другую файловую группу или схему секционирования в одной транзакции.
При присвоении параметру ONLINE значения ON запросы и изменения базовых данных и связанных некластеризованных индексов не блокируются во время выполнения транзакции DROP INDEX. В режиме в сети одновременно может удаляться только один кластеризованный индекс. Полное описание параметра ONLINE см. в разделе [CREATE INDEX (Transact-SQL)](../../t-sql/statements/create-index-transact-sql.md).
Кластеризованный индекс нельзя удалить в режиме в сети, если индекс недоступен в представлении или содержит столбцы типа **text**, **ntext**, **image**, **varchar(max)**, **nvarchar(max)**, **varbinary(max)** или **xml** в строках данных конечного уровня.
Использование параметров ONLINE = ON и MOVE TO требует дополнительного временного места на диске.
После удаления индекса результирующая куча появляется в представлении каталога **sys.indexes** со значением NULL в столбце **name**. Для просмотра имени таблицы выполните соединение **sys.indexes** с **sys.tables** по **object_id**. Пример запроса см. в примере Г.
На многопроцессорных компьютерах под управлением [!INCLUDE[ssEnterpriseEd2005](../../includes/ssenterpriseed2005-md.md)] или выше инструкция DROP INDEX может использовать больше процессоров для операций просмотра и сортировки, связанных с удалением кластеризованного индекса, как и в случаях с другими инструкциями. Можно вручную настроить число процессоров, применяемых для запуска инструкции DROP INDEX, указав параметр индекса MAXDOP. Дополнительные сведения см. в статье [Настройка параллельных операций с индексами](../../relational-databases/indexes/configure-parallel-index-operations.md).
При удалении кластеризованного индекса соответствующие секции кучи сохраняют настройки сжатия данных, если только не была изменена схема секционирования. Если схема секционирования подверглась изменениям, все секции перестраиваются в распакованное состояние (DATA_COMPRESSION = NONE). Чтобы удалить кластеризованный индекс и изменить схему секционирования, необходимо выполнить следующие шаги.
1. Удалить кластеризованный индекс.
2. Изменить таблицу с помощью инструкции ALTER TABLE ... Параметр REBUILD ..., определяющий параметр сжатия.
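A sketch of these two steps, assuming a hypothetical clustered index `CIX_MyTable` on `dbo.MyTable`, a hypothetical partition scheme `ps_MyNewScheme`, and page compression as the desired setting:
```
-- Step 1: drop the clustered index and move the resulting heap to the new partition scheme.
DROP INDEX CIX_MyTable
    ON dbo.MyTable
    WITH (MOVE TO ps_MyNewScheme (PartitionKeyCol));
GO
-- Step 2: rebuild the heap, restoring the desired compression setting.
ALTER TABLE dbo.MyTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);
GO
```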
Dropping a clustered index offline removes only the upper levels of the clustered index, so the operation is quite fast. When a clustered index is dropped ONLINE, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] rebuilds the heap two times, once for step 1 and once for step 2. For more information about data compression, see [Data Compression](../../relational-databases/data-compression/data-compression.md).
## <a name="xml-indexes"></a>XML-индексы
При удалении XML-индекса нельзя указывать параметры. Кроме того, нельзя использовать синтаксис _table\_or\_view\_name_**.**_index\_name_. При удалении первичного XML-индекса все связанные вторичные XML-индексы удаляются автоматически. Дополнительные сведения см в разделе [XML-индексы (SQL Server)](../../relational-databases/xml/xml-indexes-sql-server.md).
## <a name="spatial-indexes"></a>Пространственные индексы
Пространственные индексы поддерживаются только для таблиц. При удалении пространственного индекса нельзя указывать любые параметры или использовать **.**_index\_name_. Правильный синтаксис:
DROP INDEX *spatial_index_name* ON *spatial_table_name*;
For more information about spatial indexes, see [Spatial Indexes Overview](../../relational-databases/spatial/spatial-indexes-overview.md).
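A minimal sketch of this form, with hypothetical names for the spatial index and table:
```
-- No WITH options are allowed when dropping a spatial index.
DROP INDEX SIdx_MyTable_geom
    ON dbo.MySpatialTable;
GO
```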
## <a name="permissions"></a>Разрешения
Для выполнения инструкции DROP INDEX как минимум требуется разрешение ALTER для таблицы или представления. По умолчанию это разрешение предоставляется предопределенной роли сервера **sysadmin** и предопределенным ролям базы данных **db_ddladmin** и **db_owner** .
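For illustration, the minimum required permission could be granted as sketched below, assuming a hypothetical table `dbo.MyTable` and a hypothetical user `MyUser`:
```
-- Grant the minimum permission needed to run DROP INDEX on this table.
GRANT ALTER ON OBJECT::dbo.MyTable TO MyUser;
GO
```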
## <a name="examples"></a>Примеры
### <a name="a-dropping-an-index"></a>A. Удаление индекса
В следующем примере показано, как удалить индекс `IX_ProductVendor_VendorID` в таблице `ProductVendor` базы данных [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)].
```
DROP INDEX IX_ProductVendor_BusinessEntityID
ON Purchasing.ProductVendor;
GO
```
### <a name="b-dropping-multiple-indexes"></a>Б. Удаление нескольких индексов
В следующем примере показано, как удалить два индекса в одной транзакции в базе данных [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)].
```
DROP INDEX
IX_PurchaseOrderHeader_EmployeeID ON Purchasing.PurchaseOrderHeader,
IX_Address_StateProvinceID ON Person.Address;
GO
```
### <a name="c-dropping-a-clustered-index-online-and-setting-the-maxdop-option"></a>В. Удаление кластеризованного индекса в режиме в сети и установка параметра MAXDOP
В следующем примере удаляется кластеризованный индекс с параметром `ONLINE`, установленным в значение `ON` и параметром `MAXDOP`, установленным в значение `8`. Поскольку параметр MOVE TO не был указан, результирующая таблица сохраняется в той же файловой группе, что и индекс. В этих примерах используется база данных [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)]
**Применимо к**: с [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] до [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], [!INCLUDE[sqldbesa](../../includes/sqldbesa-md.md)].
```
DROP INDEX AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials WITH (ONLINE = ON, MAXDOP = 2);
GO
```
### <a name="d-dropping-a-clustered-index-online-and-moving-the-table-to-a-new-filegroup"></a>Г. Удаление кластеризованного индекса в режиме в сети и перемещение таблицы в другую файловую группу
В следующем примере кластеризованный индекс удаляется в режиме в сети и результирующая таблица (куча) перемещается в файловую группу `NewGroup` с использованием предложения `MOVE TO` . Представления каталога `sys.indexes`, `sys.tables`и `sys.filegroups` запрашиваются для проверки размещения индекса и таблицы в файловых группах до и после перемещения. (Начиная с версии [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] можно использовать синтаксис DROP INDEX IF EXISTS.)
**Применимо к**: с [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] до [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)].
```
--Create a clustered index on the PRIMARY filegroup if the index does not exist.
CREATE UNIQUE CLUSTERED INDEX
AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials (ProductAssemblyID, ComponentID,
StartDate)
ON 'PRIMARY';
GO
-- Verify filegroup location of the clustered index.
SELECT t.name AS [Table Name], i.name AS [Index Name], i.type_desc,
i.data_space_id, f.name AS [Filegroup Name]
FROM sys.indexes AS i
JOIN sys.filegroups AS f ON i.data_space_id = f.data_space_id
JOIN sys.tables as t ON i.object_id = t.object_id
AND i.object_id = OBJECT_ID(N'Production.BillOfMaterials','U')
GO
--Create filegroup NewGroup if it does not exist.
IF NOT EXISTS (SELECT name FROM sys.filegroups
WHERE name = N'NewGroup')
BEGIN
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP NewGroup;
ALTER DATABASE AdventureWorks2012
ADD FILE (NAME = File1,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\File1.ndf')
TO FILEGROUP NewGroup;
END
GO
--Verify new filegroup
SELECT * from sys.filegroups;
GO
-- Drop the clustered index and move the BillOfMaterials table to
-- the Newgroup filegroup.
-- Set ONLINE = OFF to execute this example on editions other than Enterprise Edition.
DROP INDEX AK_BillOfMaterials_ProductAssemblyID_ComponentID_StartDate
ON Production.BillOfMaterials
WITH (ONLINE = ON, MOVE TO NewGroup);
GO
-- Verify filegroup location of the moved table.
SELECT t.name AS [Table Name], i.name AS [Index Name], i.type_desc,
i.data_space_id, f.name AS [Filegroup Name]
FROM sys.indexes AS i
JOIN sys.filegroups AS f ON i.data_space_id = f.data_space_id
JOIN sys.tables as t ON i.object_id = t.object_id
AND i.object_id = OBJECT_ID(N'Production.BillOfMaterials','U');
GO
```
### <a name="e-dropping-a-primary-key-constraint-online"></a>Д. Удаление ограничения PRIMARY KEY в режиме в сети
Индексы, созданные в результате создания ограничений параметров PRIMARY KEY или UNIQUE, нельзя удалить с помощью инструкции DROP INDEX. Они удаляются с помощью инструкции ALTER TABLE DROP CONSTRAINT. Дополнительные сведения см. в разделе [ALTER TABLE](../../t-sql/statements/alter-table-transact-sql.md).
Следующий пример иллюстрирует удаление кластеризованного индекса с ограничением PRIMARY KEY путем удаления ограничения. У таблицы `ProductCostHistory` нет ограничений FOREIGN KEY. Если бы они были, необходимо было бы сначала удалить их.
```
-- Set ONLINE = OFF to execute this example on editions other than Enterprise Edition.
ALTER TABLE Production.TransactionHistoryArchive
DROP CONSTRAINT PK_TransactionHistoryArchive_TransactionID
WITH (ONLINE = ON);
```
### <a name="f-dropping-an-xml-index"></a>Е. Удаление XML-индекса
В следующем примере показано, как удалить XML-индекс в таблице `ProductModel` базы данных [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)].
```
DROP INDEX PXML_ProductModel_CatalogDescription
ON Production.ProductModel;
```
### <a name="g-dropping-a-clustered-index-on-a-filestream-table"></a>Ж. Удаление кластеризованного индекса для таблицы FILESTREAM
В следующем примере кластеризованный индекс удаляется в режиме в сети и результирующая таблица (куча) вместе с данными FILESTREAM перемещается в схему секционирования `MyPartitionScheme` с использованием предложений `MOVE TO` и `FILESTREAM ON`.
**Применимо к**: с [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)] до [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)].
```
DROP INDEX PK_MyClusteredIndex
ON dbo.MyTable
WITH (MOVE TO MyPartitionScheme,
FILESTREAM_ON MyPartitionScheme);
GO
```
## <a name="see-also"></a>См. также:
[ALTER INDEX (Transact-SQL)](../../t-sql/statements/alter-index-transact-sql.md)
[ALTER PARTITION SCHEME (Transact-SQL)](../../t-sql/statements/alter-partition-scheme-transact-sql.md)
[ALTER TABLE (Transact-SQL)](../../t-sql/statements/alter-table-transact-sql.md)
[CREATE INDEX (Transact-SQL)](../../t-sql/statements/create-index-transact-sql.md)
[CREATE PARTITION SCHEME (Transact-SQL)](../../t-sql/statements/create-partition-scheme-transact-sql.md)
[CREATE SPATIAL INDEX (Transact-SQL)](../../t-sql/statements/create-spatial-index-transact-sql.md)
[CREATE XML INDEX (Transact-SQL)](../../t-sql/statements/create-xml-index-transact-sql.md)
[EVENTDATA (Transact-SQL)](../../t-sql/functions/eventdata-transact-sql.md)
[sys.indexes (Transact-SQL)](../../relational-databases/system-catalog-views/sys-indexes-transact-sql.md)
[sys.tables (Transact-SQL)](../../relational-databases/system-catalog-views/sys-tables-transact-sql.md)
[sys.filegroups (Transact-SQL)](../../relational-databases/system-catalog-views/sys-filegroups-transact-sql.md)
[sp_spaceused (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-spaceused-transact-sql.md)
| 69.11165 | 721 | 0.759992 | rus_Cyrl | 0.757121 |
8a6cdb3ca71646f0ec068ffcafbdbbfcd402b972 | 114,260 | md | Markdown | tensorflow/g3doc/api_docs/python/train.md | sergejsrk/tensorflow | c5983f87f0402f2cb8c627807917ebdf8e4d4bb6 | [
"Apache-2.0"
] | 2 | 2020-07-30T05:06:30.000Z | 2020-08-28T05:10:49.000Z | tensorflow/g3doc/api_docs/python/train.md | alainrk/tensorflow | 314d9cd9b607460f8bfea80fc828b1521ca18443 | [
"Apache-2.0"
] | null | null | null | tensorflow/g3doc/api_docs/python/train.md | alainrk/tensorflow | 314d9cd9b607460f8bfea80fc828b1521ca18443 | [
"Apache-2.0"
] | 2 | 2018-03-14T03:10:40.000Z | 2018-09-13T13:59:40.000Z | <!-- This file is machine generated: DO NOT EDIT! -->
# Training
[TOC]
This library provides a set of classes and functions that helps train models.
## Optimizers
The Optimizer base class provides methods to compute gradients for a loss and
apply gradients to variables. A collection of subclasses implement classic
optimization algorithms such as GradientDescent and Adagrad.
You never instantiate the Optimizer class itself, but instead instantiate one
of the subclasses.
- - -
### `class tf.train.Optimizer` {#Optimizer}
Base class for optimizers.
This class defines the API to add Ops to train a model. You never use this
class directly, but instead instantiate one of its subclasses such as
`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.
### Usage
```python
# Create an optimizer with the desired parameters.
opt = GradientDescentOptimizer(learning_rate=0.1)
# Add Ops to the graph to minimize a cost by updating a list of variables.
# "cost" is a Tensor, and the list of variables contains tf.Variable
# objects.
opt_op = opt.minimize(cost, var_list=<list of variables>)
```
In the training program you will just have to run the returned Op.
```python
# Execute opt_op to do one step of training:
opt_op.run()
```
### Processing gradients before applying them.
Calling `minimize()` takes care of both computing the gradients and
applying them to the variables. If you want to process the gradients
before applying them you can instead use the optimizer in three steps:
1. Compute the gradients with `compute_gradients()`.
2. Process the gradients as you wish.
3. Apply the processed gradients with `apply_gradients()`.
Example:
```python
# Create an optimizer.
opt = GradientDescentOptimizer(learning_rate=0.1)
# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(loss, <list of variables>)
# grads_and_vars is a list of tuples (gradient, variable). Do whatever you
# need to the 'gradient' part, for example cap them, etc.
capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
# Ask the optimizer to apply the capped gradients.
opt.apply_gradients(capped_grads_and_vars)
```
- - -
#### `tf.train.Optimizer.__init__(use_locking, name)` {#Optimizer.__init__}
Create a new Optimizer.
This must be called by the constructors of subclasses.
##### Args:
* <b>`use_locking`</b>: Bool. If True apply use locks to prevent concurrent updates
to variables.
* <b>`name`</b>: A non-empty string. The name to use for accumulators created
for the optimizer.
##### Raises:
* <b>`ValueError`</b>: If name is malformed.
- - -
#### `tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#Optimizer.minimize}
Add operations to minimize `loss` by updating `var_list`.
This method simply combines calls `compute_gradients()` and
`apply_gradients()`. If you want to process the gradient before applying
them call `compute_gradients()` and `apply_gradients()` explicitly instead
of using this function.
##### Args:
* <b>`loss`</b>: A `Tensor` containing the value to minimize.
* <b>`global_step`</b>: Optional `Variable` to increment by one after the
variables have been updated.
* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
`loss`. Defaults to the list of variables collected in the graph
under the key `GraphKeys.TRAINABLE_VARIABLES`.
* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
Valid values are defined in the class `AggregationMethod`.
* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
the corresponding op.
* <b>`name`</b>: Optional name for the returned operation.
* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
##### Returns:
An Operation that updates the variables in `var_list`. If `global_step`
was not `None`, that operation also increments `global_step`.
##### Raises:
* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
- - -
#### `tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#Optimizer.compute_gradients}
Compute gradients of `loss` for the variables in `var_list`.
This is the first part of `minimize()`. It returns a list
of (gradient, variable) pairs where "gradient" is the gradient
for "variable". Note that "gradient" can be a `Tensor`, an
`IndexedSlices`, or `None` if there is no gradient for the
given variable.
##### Args:
* <b>`loss`</b>: A Tensor containing the value to minimize.
* <b>`var_list`</b>: Optional list of tf.Variable to update to minimize
`loss`. Defaults to the list of variables collected in the graph
under the key `GraphKey.TRAINABLE_VARIABLES`.
* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
`GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
Valid values are defined in the class `AggregationMethod`.
* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
the corresponding op.
* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
##### Returns:
A list of (gradient, variable) pairs.
##### Raises:
* <b>`TypeError`</b>: If `var_list` contains anything else than `Variable` objects.
* <b>`ValueError`</b>: If some arguments are invalid.
- - -
#### `tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#Optimizer.apply_gradients}
Apply gradients to variables.
This is the second part of `minimize()`. It returns an `Operation` that
applies gradients.
##### Args:
* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
`compute_gradients()`.
* <b>`global_step`</b>: Optional `Variable` to increment by one after the
variables have been updated.
* <b>`name`</b>: Optional name for the returned operation. Default to the
name passed to the `Optimizer` constructor.
##### Returns:
An `Operation` that applies the specified gradients. If `global_step`
was not None, that operation also increments `global_step`.
##### Raises:
* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
* <b>`ValueError`</b>: If none of the variables have gradients.
### Gating Gradients
Both `minimize()` and `compute_gradients()` accept a `gate_gradient` argument
that controls the degree of parallelism during the application of the
gradients.
The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.
<b>`GATE_NONE`</b>: Compute and apply gradients in parallel. This provides
the maximum parallelism in execution, at the cost of some non-reproducibility
in the results. For example the two gradients of `matmul` depend on the input
values: With `GATE_NONE` one of the gradients could be applied to one of the
inputs _before_ the other gradient is computed resulting in non-reproducible
results.
<b>`GATE_OP`</b>: For each Op, make sure all gradients are computed before
they are used. This prevents race conditions for Ops that generate gradients
for multiple inputs where the gradients depend on the inputs.
<b>`GATE_GRAPH`</b>: Make sure all gradients for all variables are computed
before any one of them is used. This provides the least parallelism but can
be useful if you want to process all gradients before applying any of them.
### Slots
Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`
allocate and manage additional variables associated with the variables to
train. These are called <i>Slots</i>. Slots have names and you can ask the
optimizer for the names of the slots that it uses. Once you have a slot name
you can ask the optimizer for the variable it created to hold the slot value.
This can be useful if you want to log debug a training algorithm, report stats
about the slots, etc.
- - -
#### `tf.train.Optimizer.get_slot_names()` {#Optimizer.get_slot_names}
Return a list of the names of slots created by the `Optimizer`.
See `get_slot()`.
##### Returns:
A list of strings.
- - -
#### `tf.train.Optimizer.get_slot(var, name)` {#Optimizer.get_slot}
Return a slot named `name` created for `var` by the Optimizer.
Some `Optimizer` subclasses use additional variables. For example
`Momentum` and `Adagrad` use variables to accumulate updates. This method
gives access to these `Variable` objects if for some reason you need them.
Use `get_slot_names()` to get the list of slot names created by the
`Optimizer`.
##### Args:
* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
* <b>`name`</b>: A string.
##### Returns:
The `Variable` for the slot if it was created, `None` otherwise.
- - -
### `class tf.train.GradientDescentOptimizer` {#GradientDescentOptimizer}
Optimizer that implements the gradient descent algorithm.
- - -
#### `tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')` {#GradientDescentOptimizer.__init__}
Construct a new gradient descent optimizer.
##### Args:
* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning
rate to use.
* <b>`use_locking`</b>: If True use locks for update operations.
* <b>`name`</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "GradientDescent".
- - -
### `class tf.train.AdadeltaOptimizer` {#AdadeltaOptimizer}
Optimizer that implements the Adadelta algorithm.
See [M. D. Zeiler](http://arxiv.org/abs/1212.5701)
([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))
- - -
#### `tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')` {#AdadeltaOptimizer.__init__}
Construct a new Adadelta optimizer.
##### Args:
* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
* <b>`rho`</b>: A `Tensor` or a floating point value. The decay rate.
* <b>`epsilon`</b>: A `Tensor` or a floating point value. A constant epsilon used
to better conditioning the grad update.
* <b>`use_locking`</b>: If `True` use locks for update operations.
* <b>`name`</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "Adadelta".
- - -
### `class tf.train.AdagradOptimizer` {#AdagradOptimizer}
Optimizer that implements the Adagrad algorithm.
See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).
- - -
#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__}
Construct a new Adagrad optimizer.
##### Args:
* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
* <b>`initial_accumulator_value`</b>: A floating point value.
Starting value for the accumulators, must be positive.
* <b>`use_locking`</b>: If `True` use locks for update operations.
* <b>`name`</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "Adagrad".
##### Raises:
* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
- - -
### `class tf.train.MomentumOptimizer` {#MomentumOptimizer}
Optimizer that implements the Momentum algorithm.
- - -
#### `tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum')` {#MomentumOptimizer.__init__}
Construct a new Momentum optimizer.
##### Args:
* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
* <b>`momentum`</b>: A `Tensor` or a floating point value. The momentum.
* <b>`use_locking`</b>: If `True` use locks for update operations.
* <b>`name`</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "Momentum".
- - -
### `class tf.train.AdamOptimizer` {#AdamOptimizer}
Optimizer that implements the Adam algorithm.
See [Kingma et. al., 2014](http://arxiv.org/abs/1412.6980)
([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
- - -
#### `tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')` {#AdamOptimizer.__init__}
Construct a new Adam optimizer.
Initialization:
```
m_0 <- 0 (Initialize initial 1st moment vector)
v_0 <- 0 (Initialize initial 2nd moment vector)
t <- 0 (Initialize timestep)
```
The update rule for `variable` with gradient `g` uses an optimization
described at the end of section2 of the paper:
```
t <- t + 1
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g
v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g
variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
```
The default value of 1e-8 for epsilon might not be a good default in
general. For example, when training an Inception network on ImageNet a
current good choice is 1.0 or 0.1.
Note that in dense implement of this algorithm, m_t, v_t and variable will
update even if g is zero, but in sparse implement, m_t, v_t and variable
will not update in iterations g is zero.
##### Args:
* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning rate.
* <b>`beta1`</b>: A float value or a constant float tensor.
The exponential decay rate for the 1st moment estimates.
* <b>`beta2`</b>: A float value or a constant float tensor.
The exponential decay rate for the 2nd moment estimates.
* <b>`epsilon`</b>: A small constant for numerical stability.
* <b>`use_locking`</b>: If True use locks for update operations.
* <b>`name`</b>: Optional name for the operations created when applying gradients.
Defaults to "Adam".
- - -
### `class tf.train.FtrlOptimizer` {#FtrlOptimizer}
Optimizer that implements the FTRL algorithm.
See this [paper](
https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
- - -
#### `tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl')` {#FtrlOptimizer.__init__}
Construct a new FTRL optimizer.
##### Args:
* <b>`learning_rate`</b>: A float value or a constant float `Tensor`.
* <b>`learning_rate_power`</b>: A float value, must be less or equal to zero.
* <b>`initial_accumulator_value`</b>: The starting value for accumulators.
Only positive values are allowed.
* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
equal to zero.
* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
equal to zero.
* <b>`use_locking`</b>: If `True` use locks for update operations.
* <b>`name`</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "Ftrl".
##### Raises:
* <b>`ValueError`</b>: If one of the arguments is invalid.
- - -
### `class tf.train.RMSPropOptimizer` {#RMSPropOptimizer}
Optimizer that implements the RMSProp algorithm.
See the [paper]
(http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf).
- - -
#### `tf.train.RMSPropOptimizer.__init__(learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False, name='RMSProp')` {#RMSPropOptimizer.__init__}
Construct a new RMSProp optimizer.
Note that in dense implement of this algorithm, m_t and v_t will
update even if g is zero, but in sparse implement, m_t and v_t
will not update in iterations g is zero.
##### Args:
* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning rate.
* <b>`decay`</b>: Discounting factor for the history/coming gradient
* <b>`momentum`</b>: A scalar tensor.
* <b>`epsilon`</b>: Small value to avoid zero denominator.
* <b>`use_locking`</b>: If True use locks for update operation.
* <b>`name`</b>: Optional name prefix for the operations created when applying
gradients. Defaults to "RMSProp".
## Gradient Computation
TensorFlow provides functions to compute the derivatives for a given
TensorFlow computation graph, adding operations to the graph. The
optimizer classes automatically compute derivatives on your graph, but
creators of new Optimizers or expert users can call the lower-level
functions below.
- - -
### `tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#gradients}
Constructs symbolic partial derivatives of sum of `ys` w.r.t. x in `xs`.
`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys`
is a list of `Tensor`, holding the gradients received by the
`ys`. The list must be the same length as `ys`.
`gradients()` adds ops to the graph to output the partial
derivatives of `ys` with respect to `xs`. It returns a list of
`Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`
for y in `ys`.
`grad_ys` is a list of tensors of the same length as `ys` that holds
the initial gradients for each y in `ys`. When `grad_ys` is None,
we fill in a tensor of '1's of the shape of y for each y in `ys`. A
user can provide their own initial `grad_ys` to compute the
derivatives using a different initial gradient for each y (e.g., if
one wanted to weight the gradient differently for each value in
each y).
##### Args:
* <b>`ys`</b>: A `Tensor` or list of tensors to be differentiated.
* <b>`xs`</b>: A `Tensor` or list of tensors to be used for differentiation.
* <b>`grad_ys`</b>: Optional. A `Tensor` or list of tensors the same size as
`ys` and holding the gradients computed for each y in `ys`.
* <b>`name`</b>: Optional name to use for grouping all the gradient ops together.
defaults to 'gradients'.
* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
the corresponding op.
* <b>`gate_gradients`</b>: If True, add a tuple around the gradients returned
for an operations. This avoids some race conditions.
* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
Accepted values are constants defined in the class `AggregationMethod`.
##### Returns:
A list of `sum(dy/dx)` for each x in `xs`.
##### Raises:
* <b>`LookupError`</b>: if one of the operations between `x` and `y` does not
have a registered gradient function.
* <b>`ValueError`</b>: if the arguments are invalid.
- - -
### `class tf.AggregationMethod` {#AggregationMethod}
A class listing aggregation methods used to combine gradients.
Computing partial derivatives can require aggregating gradient
contributions. This class lists the various methods that can
be used to combine gradients in the graph:
* `ADD_N`: All of the gradient terms are summed as part of one
operation using the "AddN" op. It has the property that all
gradients must be ready before any aggregation is performed.
* `DEFAULT`: The system-chosen default aggregation method.
- - -
### `tf.stop_gradient(input, name=None)` {#stop_gradient}
Stops gradient computation.
When executed in a graph, this op outputs its input tensor as-is.
When building ops to compute gradients, this op prevents the contribution of
its inputs to be taken into account. Normally, the gradient generator adds ops
to a graph to compute the derivatives of a specified 'loss' by recursively
finding out inputs that contributed to its computation. If you insert this op
in the graph it inputs are masked from the gradient generator. They are not
taken into account for computing gradients.
This is useful any time you want to compute a value with TensorFlow but need
to pretend that the value was a constant. Some examples include:
* The *EM* algorithm where the *M-step* should not involve backpropagation
through the output of the *E-step*.
* Contrastive divergence training of Boltzmann machines where, when
differentiating the energy function, the training must not backpropagate
through the graph that generated the samples from the model.
* Adversarial training, where no backprop should happen through the adversarial
example generation process.
##### Args:
* <b>`input`</b>: A `Tensor`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A `Tensor`. Has the same type as `input`.
## Gradient Clipping
TensorFlow provides several operations that you can use to add clipping
functions to your graph. You can use these functions to perform general data
clipping, but they're particularly useful for handling exploding or vanishing
gradients.
- - -
### `tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)` {#clip_by_value}
Clips tensor values to a specified min and max.
Given a tensor `t`, this operation returns a tensor of the same type and
shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.
Any values less than `clip_value_min` are set to `clip_value_min`. Any values
greater than `clip_value_max` are set to `clip_value_max`.
##### Args:
* <b>`t`</b>: A `Tensor`.
* <b>`clip_value_min`</b>: A 0-D (scalar) `Tensor`. The minimum value to clip by.
* <b>`clip_value_max`</b>: A 0-D (scalar) `Tensor`. The maximum value to clip by.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A clipped `Tensor`.
- - -
### `tf.clip_by_norm(t, clip_norm, axes=None, name=None)` {#clip_by_norm}
Clips tensor values to a maximum L2-norm.
Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
normalizes `t` so that its L2-norm is less than or equal to `clip_norm`,
along the dimensions given in `axes`. Specifically, in the default case
where all dimensions are used for calculation, if the L2-norm of `t` is
already less than or equal to `clip_norm`, then `t` is not modified. If
the L2-norm is greater than `clip_norm`, then this operation returns a
tensor of the same type and shape as `t` with its values set to:
`t * clip_norm / l2norm(t)`
In this case, the L2-norm of the output tensor is `clip_norm`.
As another example, if `t` is a matrix and `axes == [1]`, then each row
of the output will have L2-norm equal to `clip_norm`. If `axes == [0]`
instead, each column of the output will be clipped.
This operation is typically used to clip gradients before applying them with
an optimizer.
##### Args:
* <b>`t`</b>: A `Tensor`.
* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
* <b>`axes`</b>: A 1-D (vector) `Tensor` of type int32 containing the dimensions
to use for computing the L2-norm. If `None` (the default), uses all
dimensions.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A clipped `Tensor`.
- - -
### `tf.clip_by_average_norm(t, clip_norm, name=None)` {#clip_by_average_norm}
Clips tensor values to a maximum average L2-norm.
Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
normalizes `t` so that its average L2-norm is less than or equal to
`clip_norm`. Specifically, if the average L2-norm is already less than or
equal to `clip_norm`, then `t` is not modified. If the average L2-norm is
greater than `clip_norm`, then this operation returns a tensor of the same
type and shape as `t` with its values set to:
`t * clip_norm / l2norm_avg(t)`
In this case, the average L2-norm of the output tensor is `clip_norm`.
This operation is typically used to clip gradients before applying them with
an optimizer.
##### Args:
* <b>`t`</b>: A `Tensor`.
* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A clipped `Tensor`.
- - -
### `tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)` {#clip_by_global_norm}
Clips values of multiple tensors by the ratio of the sum of their norms.
Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`,
this operation returns a list of clipped tensors `list_clipped`
and the global norm (`global_norm`) of all tensors in `t_list`. Optionally,
if you've already computed the global norm for `t_list`, you can specify
the global norm with `use_norm`.
To perform the clipping, the values `t_list[i]` are set to:
t_list[i] * clip_norm / max(global_norm, clip_norm)
where:
global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))
If `clip_norm > global_norm` then the entries in `t_list` remain as they are,
otherwise they're all shrunk by the global ratio.
Any of the entries of `t_list` that are of type `None` are ignored.
This is the correct way to perform gradient clipping (for example, see
[Pascanu et al., 2012](http://arxiv.org/abs/1211.5063)
([pdf](http://arxiv.org/pdf/1211.5063.pdf))).
However, it is slower than `clip_by_norm()` because all the parameters must be
ready before the clipping operation can be performed.
##### Args:
* <b>`t_list`</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. The clipping ratio.
* <b>`use_norm`</b>: A 0-D (scalar) `Tensor` of type `float` (optional). The global
norm to use. If not provided, `global_norm()` is used to compute the norm.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
* <b>`list_clipped`</b>: A list of `Tensors` of the same type as `list_t`.
* <b>`global_norm`</b>: A 0-D (scalar) `Tensor` representing the global norm.
##### Raises:
* <b>`TypeError`</b>: If `t_list` is not a sequence.
- - -
### `tf.global_norm(t_list, name=None)` {#global_norm}
Computes the global norm of multiple tensors.
Given a tuple or list of tensors `t_list`, this operation returns the
global norm of the elements in all tensors in `t_list`. The global norm is
computed as:
`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`
Any entries in `t_list` that are of type None are ignored.
##### Args:
* <b>`t_list`</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A 0-D (scalar) `Tensor` of type `float`.
##### Raises:
* <b>`TypeError`</b>: If `t_list` is not a sequence.
## Decaying the learning rate
- - -
### `tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#exponential_decay}
Applies exponential decay to the learning rate.
When training a model, it is often recommended to lower the learning rate as
the training progresses. This function applies an exponential decay function
to a provided initial learning rate. It requires a `global_step` value to
compute the decayed learning rate. You can just pass a TensorFlow variable
that you increment at each training step.
The function returns the decayed learning rate. It is computed as:
```python
decayed_learning_rate = learning_rate *
decay_rate ^ (global_step / decay_steps)
```
If the argument `staircase` is `True`, then `global_step /decay_steps` is an
integer division and the decayed learning rate follows a staircase function.
Example: decay every 100000 steps with a base of 0.96:
```python
...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
100000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
tf.GradientDescentOptimizer(learning_rate)
.minimize(...my loss..., global_step=global_step)
)
```
##### Args:
* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
Python number. The initial learning rate.
* <b>`global_step`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
Global step to use for the decay computation. Must not be negative.
* <b>`decay_steps`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
Must be positive. See the decay computation above.
* <b>`decay_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
Python number. The decay rate.
* <b>`staircase`</b>: Boolean. It `True` decay the learning rate at discrete intervals.
* <b>`name`</b>: String. Optional name of the operation. Defaults to 'ExponentialDecay'
##### Returns:
A scalar `Tensor` of the same type as `learning_rate`. The decayed
learning rate.
## Moving Averages
Some training algorithms, such as GradientDescent and Momentum often benefit
from maintaining a moving average of variables during optimization. Using the
moving averages for evaluations often improve results significantly.
- - -
### `class tf.train.ExponentialMovingAverage` {#ExponentialMovingAverage}
Maintains moving averages of variables by employing an exponential decay.
When training a model, it is often beneficial to maintain moving averages of
the trained parameters. Evaluations that use averaged parameters sometimes
produce significantly better results than the final trained values.
The `apply()` method adds shadow copies of trained variables and add ops that
maintain a moving average of the trained variables in their shadow copies.
It is used when building the training model. The ops that maintain moving
averages are typically run after each training step.
The `average()` and `average_name()` methods give access to the shadow
variables and their names. They are useful when building an evaluation
model, or when restoring a model from a checkpoint file. They help use the
moving averages in place of the last trained values for evaluations.
The moving averages are computed using exponential decay. You specify the
decay value when creating the `ExponentialMovingAverage` object. The shadow
variables are initialized with the same initial values as the trained
variables. When you run the ops to maintain the moving averages, each
shadow variable is updated with the formula:
`shadow_variable -= (1 - decay) * (shadow_variable - variable)`
This is mathematically equivalent to the classic formula below, but the use
of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless
updates to the variables:
`shadow_variable = decay * shadow_variable + (1 - decay) * variable`
Reasonable values for `decay` are close to 1.0, typically in the
multiple-nines range: 0.999, 0.9999, etc.
Example usage when creating a training model:
```python
# Create variables.
var0 = tf.Variable(...)
var1 = tf.Variable(...)
# ... use the variables to build a training model...
...
# Create an op that applies the optimizer. This is what we usually
# would use as a training op.
opt_op = opt.minimize(my_loss, [var0, var1])
# Create an ExponentialMovingAverage object
ema = tf.train.ExponentialMovingAverage(decay=0.9999)
# Create the shadow variables, and add ops to maintain moving averages
# of var0 and var1.
maintain_averages_op = ema.apply([var0, var1])
# Create an op that will update the moving averages after each training
# step. This is what we will use in place of the usual training op.
with tf.control_dependencies([opt_op]):
training_op = tf.group(maintain_averages_op)
...train the model by running training_op...
```
There are two ways to use the moving averages for evaluations:
* Build a model that uses the shadow variables instead of the variables.
For this, use the `average()` method which returns the shadow variable
for a given variable.
* Build a model normally but load the checkpoint files to evaluate by using
the shadow variable names. For this use the `average_name()` method. See
the [Saver class](../../api_docs/python/train.md#Saver) for more
information on restoring saved variables.
Example of restoring the shadow variable values:
```python
# Create a Saver that loads variables from their saved shadow values.
shadow_var0_name = ema.average_name(var0)
shadow_var1_name = ema.average_name(var1)
saver = tf.train.Saver({shadow_var0_name: var0, shadow_var1_name: var1})
saver.restore(...checkpoint filename...)
# var0 and var1 now hold the moving average values
```
- - -
#### `tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, name='ExponentialMovingAverage')` {#ExponentialMovingAverage.__init__}
Creates a new ExponentialMovingAverage object.
The `apply()` method has to be called to create shadow variables and add
ops to maintain moving averages.
The optional `num_updates` parameter allows one to tweak the decay rate
dynamically. . It is typical to pass the count of training steps, usually
kept in a variable that is incremented at each step, in which case the
decay rate is lower at the start of training. This makes moving averages
move faster. If passed, the actual decay rate used is:
`min(decay, (1 + num_updates) / (10 + num_updates))`
##### Args:
* <b>`decay`</b>: Float. The decay to use.
* <b>`num_updates`</b>: Optional count of number of updates applied to variables.
* <b>`name`</b>: String. Optional prefix name to use for the name of ops added in
`apply()`.
- - -
#### `tf.train.ExponentialMovingAverage.apply(var_list=None)` {#ExponentialMovingAverage.apply}
Maintains moving averages of variables.
`var_list` must be a list of `Variable` or `Tensor` objects. This method
creates shadow variables for all elements of `var_list`. Shadow variables
for `Variable` objects are initialized to the variable's initial value.
They will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection.
For `Tensor` objects, the shadow variables are initialized to 0.
Shadow variables are created with `trainable=False` and added to the
`GraphKeys.ALL_VARIABLES` collection. They will be returned by calls to
`tf.all_variables()`.
Returns an op that updates all shadow variables as described above.
Note that `apply()` can be called multiple times with different lists of
variables.
##### Args:
* <b>`var_list`</b>: A list of Variable or Tensor objects. The variables
and Tensors must be of types float32 or float64.
##### Returns:
An Operation that updates the moving averages.
##### Raises:
* <b>`TypeError`</b>: If the arguments are not all float32 or float64.
* <b>`ValueError`</b>: If the moving average of one of the variables is already
being computed.
- - -
#### `tf.train.ExponentialMovingAverage.average_name(var)` {#ExponentialMovingAverage.average_name}
Returns the name of the `Variable` holding the average for `var`.
The typical scenario for `ExponentialMovingAverage` is to compute moving
averages of variables during training, and restore the variables from the
computed moving averages during evaluations.
To restore variables, you have to know the name of the shadow variables.
That name and the original variable can then be passed to a `Saver()` object
to restore the variable from the moving average value with:
`saver = tf.train.Saver({ema.average_name(var): var})`
`average_name()` can be called whether or not `apply()` has been called.
##### Args:
* <b>`var`</b>: A `Variable` object.
##### Returns:
A string: The name of the variable that will be used or was used
by the `ExponentialMovingAverage class` to hold the moving average of
`var`.
- - -
#### `tf.train.ExponentialMovingAverage.average(var)` {#ExponentialMovingAverage.average}
Returns the `Variable` holding the average of `var`.
##### Args:
* <b>`var`</b>: A `Variable` object.
##### Returns:
A `Variable` object or `None` if the moving average of `var`
is not maintained.
- - -
#### `tf.train.ExponentialMovingAverage.variables_to_restore(moving_avg_variables=None)` {#ExponentialMovingAverage.variables_to_restore}
Returns a map of names to `Variables` to restore.
If a variable has a moving average, use the moving average variable name as
the restore name; otherwise, use the variable name.
For example,
```python
variables_to_restore = ema.variables_to_restore()
saver = tf.train.Saver(variables_to_restore)
```
Below is an example of such mapping:
```
conv/batchnorm/gamma/ExponentialMovingAverage: conv/batchnorm/gamma,
conv_4/conv2d_params/ExponentialMovingAverage: conv_4/conv2d_params,
global_step: global_step
```
##### Args:
* <b>`moving_avg_variables`</b>: a list of variables that require the use of the
moving average variable name to be restored. If None, it will default to
variables.moving_average_variables() + variables.trainable_variables()
##### Returns:
A map from restore_names to variables. The restore_name can be the
moving_average version of the variable name if it exists, or the original
variable name.
## Coordinator and QueueRunner
See [Threading and Queues](../../how_tos/threading_and_queues/index.md)
for how to use threads and queues. For documentation on the Queue API,
see [Queues](../../api_docs/python/io_ops.md#queues).
- - -
### `class tf.train.Coordinator` {#Coordinator}
A coordinator for threads.
This class implements a simple mechanism to coordinate the termination of a
set of threads.
#### Usage:
```python
# Create a coordinator.
coord = Coordinator()
# Start a number of threads, passing the coordinator to each of them.
...start thread 1...(coord, ...)
...start thread N...(coord, ...)
# Wait for all the threads to terminate.
coord.join(threads)
```
Any of the threads can call `coord.request_stop()` to ask for all the threads
to stop. To cooperate with the requests, each thread must check for
`coord.should_stop()` on a regular basis. `coord.should_stop()` returns
`True` as soon as `coord.request_stop()` has been called.
A typical thread running with a coordinator will do something like:
```python
while not coord.should_stop():
...do some work...
```
#### Exception handling:
A thread can report an exception to the coordinator as part of the
`should_stop()` call. The exception will be re-raised from the
`coord.join()` call.
Thread code:
```python
try:
while not coord.should_stop():
...do some work...
except Exception as e:
coord.request_stop(e)
```
Main code:
```python
try:
...
coord = Coordinator()
# Start a number of threads, passing the coordinator to each of them.
...start thread 1...(coord, ...)
...start thread N...(coord, ...)
# Wait for all the threads to terminate.
coord.join(threads)
except Exception as e:
...exception that was passed to coord.request_stop()
```
To simplify the thread implementation, the Coordinator provides a
context handler `stop_on_exception()` that automatically requests a stop if
an exception is raised. Using the context handler the thread code above
can be written as:
```python
with coord.stop_on_exception():
while not coord.should_stop():
...do some work...
```
#### Grace period for stopping:
After a thread has called `coord.request_stop()` the other threads have a
fixed time to stop; this is called the 'stop grace period' and defaults to 2
minutes. If any of the threads is still alive after the grace period expires,
`coord.join()` raises a `RuntimeError` reporting the laggards.
```python
try:
...
coord = Coordinator()
# Start a number of threads, passing the coordinator to each of them.
...start thread 1...(coord, ...)
...start thread N...(coord, ...)
# Wait for all the threads to terminate, give them 10s grace period
coord.join(threads, stop_grace_period_secs=10)
except RuntimeError:
...one of the threads took more than 10s to stop after request_stop()
...was called.
except Exception:
...exception that was passed to coord.request_stop()
```
- - -
#### `tf.train.Coordinator.__init__(clean_stop_exception_types=None)` {#Coordinator.__init__}
Create a new Coordinator.
##### Args:
* <b>`clean_stop_exception_types`</b>: Optional tuple of Exception types that should
cause a clean stop of the coordinator. If an exception of one of these
types is reported to `request_stop(ex)` the coordinator will behave as
if `request_stop(None)` was called. Defaults to
`(tf.errors.OutOfRangeError,)` which is used by input queues to signal
the end of input. When feeding training data from a Python iterator it
is common to add `StopIteration` to this list.
- - -
#### `tf.train.Coordinator.clear_stop()` {#Coordinator.clear_stop}
Clears the stop flag.
After this is called, calls to `should_stop()` will return `False`.
- - -
#### `tf.train.Coordinator.join(threads, stop_grace_period_secs=120)` {#Coordinator.join}
Wait for threads to terminate.
Blocks until all `threads` have terminated or `request_stop()` is called.
After the threads stop, if an `exc_info` was passed to `request_stop`, that
exception is re-raised.
Grace period handling: When `request_stop()` is called, threads are given
'stop_grace_period_secs' seconds to terminate. If any of them is still
alive after that period expires, a `RuntimeError` is raised. Note that if
an `exc_info` was passed to `request_stop()` then it is raised instead of
that `RuntimeError`.
##### Args:
* <b>`threads`</b>: List of `threading.Threads`. The started threads to join.
* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
`request_stop()` has been called.
##### Raises:
* <b>`RuntimeError`</b>: If any thread is still alive after `request_stop()`
is called and the grace period expires.
- - -
#### `tf.train.Coordinator.request_stop(ex=None)` {#Coordinator.request_stop}
Request that the threads stop.
After this is called, calls to `should_stop()` will return `True`.
Note: If an exception is being passed in, it must be in the context of
handling the exception (i.e. `try: ... except Exception as ex: ...`) and not
a newly created one.
##### Args:
* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
`sys.exc_info()`. If this is the first call to `request_stop()` the
corresponding exception is recorded and re-raised from `join()`.
- - -
#### `tf.train.Coordinator.should_stop()` {#Coordinator.should_stop}
Check if stop was requested.
##### Returns:
True if a stop was requested.
- - -
#### `tf.train.Coordinator.stop_on_exception()` {#Coordinator.stop_on_exception}
Context manager to request stop when an Exception is raised.
Code that uses a coordinator must catch exceptions and pass
them to the `request_stop()` method to stop the other threads
managed by the coordinator.
This context handler simplifies the exception handling.
Use it as follows:
```python
with coord.stop_on_exception():
# Any exception raised in the body of the with
# clause is reported to the coordinator before terminating
# the execution of the body.
...body...
```
This is completely equivalent to the slightly longer code:
```python
try:
...body...
except Exception as ex:
coord.request_stop(ex)
```
##### Yields:
nothing.
- - -
#### `tf.train.Coordinator.wait_for_stop(timeout=None)` {#Coordinator.wait_for_stop}
Wait till the Coordinator is told to stop.
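A minimal sketch of a periodic background task driven by this method (the work body and the 5-second timeout are placeholders):
```python
import threading
import tensorflow as tf

coord = tf.train.Coordinator()

def report_progress():
  # Do some periodic work every 5 seconds until a stop is requested.
  while not coord.wait_for_stop(timeout=5.0):
    print("still training...")

thread = threading.Thread(target=report_progress)
thread.start()
# ...later, from another thread:
coord.request_stop()
coord.join([thread])
```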
##### Args:
* <b>`timeout`</b>: Float. Sleep for up to that many seconds waiting for
should_stop() to become True.
##### Returns:
True if the Coordinator is told to stop, False if the timeout expired.
- - -
### `class tf.train.QueueRunner` {#QueueRunner}
Holds a list of enqueue operations for a queue, each to be run in a thread.
Queues are a convenient TensorFlow mechanism to compute tensors
asynchronously using multiple threads. For example in the canonical 'Input
Reader' setup one set of threads generates filenames in a queue; a second set
of threads read records from the files, processes them, and enqueues tensors
on a second queue; a third set of threads dequeues these input records to
construct batches and runs them through training operations.
There are several delicate issues when running multiple threads that way:
closing the queues in sequence as the input is exhausted, correctly catching
and reporting exceptions, etc.
The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.
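As a sketch of how the two classes fit together (the queue and enqueue op below are illustrative, not prescriptive):
```python
import tensorflow as tf

# A queue of scalar floats and an op that enqueues a random value.
queue = tf.FIFOQueue(capacity=100, dtypes=[tf.float32])
enqueue_op = queue.enqueue(tf.random_normal([]))
dequeue_op = queue.dequeue()

# Four threads will run the same enqueue op in parallel.
qr = tf.train.QueueRunner(queue, [enqueue_op] * 4)

with tf.Session() as sess:
  coord = tf.train.Coordinator()
  threads = qr.create_threads(sess, coord=coord, start=True)
  for _ in range(10):
    print(sess.run(dequeue_op))
  coord.request_stop()
  coord.join(threads)
```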
- - -
#### `tf.train.QueueRunner.__init__(queue=None, enqueue_ops=None, close_op=None, cancel_op=None, queue_runner_def=None)` {#QueueRunner.__init__}
Create a QueueRunner.
On construction the `QueueRunner` adds an op to close the queue. That op
will be run if the enqueue ops raise exceptions.
When you later call the `create_threads()` method, the `QueueRunner` will
create one thread for each op in `enqueue_ops`. Each thread will run its
enqueue op in parallel with the other threads. The enqueue ops do not have
to all be the same op, but it is expected that they all enqueue tensors in
`queue`.
##### Args:
* <b>`queue`</b>: A `Queue`.
* <b>`enqueue_ops`</b>: List of enqueue ops to run in threads later.
* <b>`close_op`</b>: Op to close the queue. Pending enqueue ops are preserved.
* <b>`cancel_op`</b>: Op to close the queue and cancel pending enqueue ops.
* <b>`queue_runner_def`</b>: Optional `QueueRunnerDef` protocol buffer. If specified,
recreates the QueueRunner from its contents. `queue_runner_def` and the
other arguments are mutually exclusive.
##### Raises:
* <b>`ValueError`</b>: If both `queue_runner_def` and `queue` are specified.
* <b>`ValueError`</b>: If `queue` or `enqueue_ops` are not provided when not
restoring from `queue_runner_def`.
- - -
#### `tf.train.QueueRunner.cancel_op` {#QueueRunner.cancel_op}
- - -
#### `tf.train.QueueRunner.close_op` {#QueueRunner.close_op}
- - -
#### `tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False)` {#QueueRunner.create_threads}
Create threads to run the enqueue ops.
This method requires a session in which the graph was launched. It creates
a list of threads, optionally starting them. There is one thread for each
op passed in `enqueue_ops`.
The `coord` argument is an optional coordinator, that the threads will use
to terminate together and report exceptions. If a coordinator is given,
this method starts an additional thread to close the queue when the
coordinator requests a stop.
This method may be called again as long as all threads from a previous call
have stopped.
##### Args:
* <b>`sess`</b>: A `Session`.
* <b>`coord`</b>: Optional `Coordinator` object for reporting errors and checking
stop conditions.
* <b>`daemon`</b>: Boolean. If `True` make the threads daemon threads.
* <b>`start`</b>: Boolean. If `True` starts the threads. If `False` the
caller must call the `start()` method of the returned threads.
##### Returns:
A list of threads.
##### Raises:
* <b>`RuntimeError`</b>: If threads from a previous call to `create_threads()` are
still running.
- - -
#### `tf.train.QueueRunner.enqueue_ops` {#QueueRunner.enqueue_ops}
- - -
#### `tf.train.QueueRunner.exceptions_raised` {#QueueRunner.exceptions_raised}
Exceptions raised but not handled by the `QueueRunner` threads.
Exceptions raised in queue runner threads are handled in one of two ways
depending on whether or not a `Coordinator` was passed to
`create_threads()`:
* With a `Coordinator`, exceptions are reported to the coordinator and
forgotten by the `QueueRunner`.
* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and
made available in this `exceptions_raised` property.
##### Returns:
A list of Python `Exception` objects. The list is empty if no exception
was captured. (No exceptions are captured when using a Coordinator.)
- - -
#### `tf.train.QueueRunner.from_proto(queue_runner_def)` {#QueueRunner.from_proto}
Returns a `QueueRunner` object created from `queue_runner_def`.
- - -
#### `tf.train.QueueRunner.name` {#QueueRunner.name}
The string name of the underlying Queue.
- - -
#### `tf.train.QueueRunner.queue` {#QueueRunner.queue}
- - -
#### `tf.train.QueueRunner.to_proto()` {#QueueRunner.to_proto}
Converts this `QueueRunner` to a `QueueRunnerDef` protocol buffer.
##### Returns:
A `QueueRunnerDef` protocol buffer.
- - -
### `tf.train.add_queue_runner(qr, collection='queue_runners')` {#add_queue_runner}
Adds a `QueueRunner` to a collection in the graph.
When building a complex model that uses many queues it is often difficult to
gather all the queue runners that need to be run. This convenience function
allows you to add a queue runner to a well known collection in the graph.
The companion method `start_queue_runners()` can be used to start threads for
all the collected queue runners.
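A minimal sketch of this pattern (the queue and enqueue op are illustrative):
```python
import tensorflow as tf

queue = tf.FIFOQueue(capacity=10, dtypes=[tf.int32])
enqueue_op = queue.enqueue(tf.constant(1))
dequeue_op = queue.dequeue()
qr = tf.train.QueueRunner(queue, [enqueue_op] * 2)

# Register the runner in the well-known collection in the graph.
tf.train.add_queue_runner(qr)

with tf.Session() as sess:
  coord = tf.train.Coordinator()
  # Starts threads for every queue runner collected in the graph.
  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
  print(sess.run(dequeue_op))
  coord.request_stop()
  coord.join(threads)
```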
##### Args:
* <b>`qr`</b>: A `QueueRunner`.
* <b>`collection`</b>: A `GraphKey` specifying the graph collection to add
the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.
- - -
### `tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')` {#start_queue_runners}
Starts all queue runners collected in the graph.
This is a companion method to `add_queue_runner()`. It just starts
threads for all queue runners collected in the graph. It returns
the list of all threads.
##### Args:
* <b>`sess`</b>: `Session` used to run the queue ops. Defaults to the
default session.
* <b>`coord`</b>: Optional `Coordinator` for coordinating the started threads.
* <b>`daemon`</b>: Whether the threads should be marked as `daemons`, meaning
they don't block program exit.
* <b>`start`</b>: Set to `False` to only create the threads, not start them.
* <b>`collection`</b>: A `GraphKey` specifying the graph collection to
get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
##### Returns:
A list of threads.
## Distributed execution
See [Distributed TensorFlow](../../how_tos/distributed/index.md) for
more information about how to configure a distributed TensorFlow program.
- - -
### `class tf.train.Server` {#Server}
An in-process TensorFlow server, for use in distributed training.
A `tf.train.Server` instance encapsulates a set of devices and a
[`tf.Session`](../../api_docs/python/client.md#Session) target that
can participate in distributed training. A server belongs to a
cluster (specified by a [`tf.train.ClusterSpec`](#ClusterSpec)), and
corresponds to a particular task in a named job. The server can
communicate with any other server in the same cluster.
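For example, a sketch of constructing a server for one task of a two-job cluster (the addresses, job name, and task index below are hypothetical):
```python
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"]})

# This process acts as task 0 of the "worker" job.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Block so the process keeps serving requests from other tasks.
server.join()
```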
- - -
#### `tf.train.Server.__init__(server_or_cluster_def, job_name=None, task_index=None, protocol=None, config=None, start=True)` {#Server.__init__}
Creates a new server with the given definition.
The `job_name`, `task_index`, and `protocol` arguments are optional, and
override any information provided in `server_or_cluster_def`.
##### Args:
* <b>`server_or_cluster_def`</b>: A `tf.train.ServerDef` or
`tf.train.ClusterDef` protocol buffer, or a
`tf.train.ClusterSpec` object, describing the server to be
created and/or the cluster of which it is a member.
* <b>`job_name`</b>: (Optional.) Specifies the name of the job of which the server
is a member. Defaults to the value in `server_or_cluster_def`, if
specified.
* <b>`task_index`</b>: (Optional.) Specifies the task index of the server in its
job. Defaults to the value in `server_or_cluster_def`, if specified.
Otherwise defaults to 0 if the server's job has only one task.
* <b>`protocol`</b>: (Optional.) Specifies the protocol to be used by the server.
Acceptable values include `"grpc"`. Defaults to the value in
`server_or_cluster_def`, if specified. Otherwise defaults to `"grpc"`.
* <b>`config`</b>: (Optional.) A `tf.ConfigProto` that specifies default
configuration options for all sessions that run on this server.
* <b>`start`</b>: (Optional.) Boolean, indicating whether to start the server
after creating it. Defaults to `True`.
##### Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
creating the TensorFlow server.
- - -
#### `tf.train.Server.create_local_server(config=None, start=True)` {#Server.create_local_server}
Creates a new single-process cluster running on the local host.
This method is a convenience wrapper for creating a
`tf.train.Server` with a `tf.train.ServerDef` that specifies a
single-process cluster containing a single task in a job called
`"local"`.
##### Args:
* <b>`config`</b>: (Optional.) A `tf.ConfigProto` that specifies default
configuration options for all sessions that run on this server.
* <b>`start`</b>: (Optional.) Boolean, indicating whether to start the server after
creating it. Defaults to `True`.
##### Returns:
A local `tf.train.Server`.
- - -
#### `tf.train.Server.target` {#Server.target}
Returns the target for a `tf.Session` to connect to this server.
To create a
[`tf.Session`](../../api_docs/python/client.md#Session) that
connects to this server, use the following snippet:
```python
server = tf.train.Server(...)
with tf.Session(server.target):
# ...
```
##### Returns:
A string containing a session target for this server.
- - -
#### `tf.train.Server.server_def` {#Server.server_def}
Returns the `tf.train.ServerDef` for this server.
##### Returns:
A `tf.train.ServerDef` protocol buffer that describes the configuration
of this server.
- - -
#### `tf.train.Server.start()` {#Server.start}
Starts this server.
##### Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
starting the TensorFlow server.
- - -
#### `tf.train.Server.join()` {#Server.join}
Blocks until the server has shut down.
This method currently blocks forever.
##### Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
joining the TensorFlow server.
- - -
### `class tf.train.Supervisor` {#Supervisor}
A training helper that checkpoints models and computes summaries.
The Supervisor is a small wrapper around a `Coordinator`, a `Saver`,
and a `SessionManager` that takes care of common needs of TensorFlow
training programs.
#### Use for a single program
```python
with tf.Graph().as_default():
...add operations to the graph...
# Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
sv = Supervisor(logdir='/tmp/mydir')
# Get a TensorFlow session managed by the supervisor.
with sv.managed_session(FLAGS.master) as sess:
# Use the session to train the graph.
while not sv.should_stop():
sess.run(<my_train_op>)
```
Within the `with sv.managed_session()` block all variables in the graph have
been initialized. In addition, a few services have been started to
checkpoint the model and add summaries to the event log.
If the program crashes and is restarted, the managed session automatically
reinitializes variables from the most recent checkpoint.
The supervisor is notified of any exception raised by one of the services.
After an exception is raised, `should_stop()` returns `True`. In that case
the training loop should also stop. This is why the training loop has to
check for `sv.should_stop()`.
Exceptions that indicate that the training inputs have been exhausted,
`tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True`
but are not re-raised from the `with` block: they indicate a normal
termination.
#### Use for multiple replicas
To train with replicas you deploy the same program in a `Cluster`.
One of the tasks must be identified as the *chief*: the task that handles
initialization, checkpoints, summaries, and recovery. The other tasks
depend on the *chief* for these services.
The only change you have to do to the single program code is to indicate
if the program is running as the *chief*.
```python
# Choose a task as the chief. This could be based on server_def.task_index,
# or job_def.name, or job_def.tasks. It's entirely up to the end user.
# But there can be only one *chief*.
is_chief = (server_def.task_index == 0)
server = tf.train.Server(server_def)
with tf.Graph().as_default():
...add operations to the graph...
# Create a Supervisor that uses log directory on a shared file system.
# Indicate if you are the 'chief'
sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
# Get a Session in a TensorFlow server on the cluster.
with sv.managed_session(server.target) as sess:
# Use the session to train the graph.
while not sv.should_stop():
sess.run(<my_train_op>)
```
In the *chief* task, the `Supervisor` works exactly as in the first example
above. In the other tasks `sv.managed_session()` waits for the Model to have
been initialized before returning a session to the training code. The
non-chief tasks depend on the chief task for initializing the model.
If one of the tasks crashes and restarts, `managed_session()`
checks if the Model is initialized. If yes, it just creates a session and
returns it to the training code that proceeds normally. If the model needs
to be initialized, the chief task takes care of reinitializing it; the other
tasks just wait for the model to have been initialized.
NOTE: This modified program still works fine as a single program.
The single program marks itself as the chief.
#### What `master` string to use
Whether you are running on your machine or in the cluster you can use the
following values for the --master flag:
* Specifying `''` requests an in-process session that does not use RPC.
* Specifying `'local'` requests a session that uses the RPC-based
"Master interface" to run TensorFlow programs. See
[`tf.train.Server.create_local_server()`](#Server.create_local_server) for
details.
* Specifying `'grpc://hostname:port'` requests a session that uses
the RPC interface to a specific host, and also allows the in-process
master to access remote TensorFlow workers. Often, it is
appropriate to pass `server.target` (for some `tf.train.Server`
named `server`).
#### Advanced use
##### Launching additional services
`managed_session()` launches the Checkpoint and Summary services (threads).
If you need more services to run you can simply launch them in the block
controlled by `managed_session()`.
Example: Start a thread to print losses. We want this thread to run
every 60 seconds, so we launch it with `sv.loop()`.
```python
...
sv = Supervisor(logdir='/tmp/mydir')
with sv.managed_session(FLAGS.master) as sess:
sv.loop(60, print_loss, (sess,))
while not sv.should_stop():
sess.run(my_train_op)
```
##### Launching fewer services
`managed_session()` launches the "summary" and "checkpoint" threads which use
either the optional `summary_op` and `saver` passed to the constructor, or
default ones created automatically by the supervisor. If you want to run
your own summary and checkpointing logic, disable these services by passing
`None` to the `summary_op` and `saver` parameters.
Example: Create summaries manually every 100 steps in the chief.
```python
# Create a Supervisor with no automatic summaries.
sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)
# As summary_op was None, managed_session() does not start the
# summary thread.
with sv.managed_session(FLAGS.master) as sess:
for step in xrange(1000000):
if sv.should_stop():
break
if is_chief and step % 100 == 0:
# Create the summary every 100 chief steps.
sv.summary_computed(sess, sess.run(my_summary_op))
else:
# Train normally
sess.run(my_train_op)
```
##### Custom model initialization
`managed_session()` only supports initializing the model by running an
`init_op` or restoring from the latest checkpoint. If you have special
initialization needs, see how to specify a `local_init_op` when creating the
supervisor. You can also use the `SessionManager` directly to create a
session and check if it could be initialized automatically.
- - -
#### `tf.train.Supervisor.__init__(graph=None, ready_op=0, is_chief=True, init_op=0, init_feed_dict=None, local_init_op=0, logdir=None, summary_op=0, saver=0, global_step=0, save_summaries_secs=120, save_model_secs=600, recovery_wait_secs=30, stop_grace_secs=120, checkpoint_basename='model.ckpt', session_manager=None, summary_writer=0, init_fn=None)` {#Supervisor.__init__}
Create a `Supervisor`.
##### Args:
* <b>`graph`</b>: A `Graph`. The graph that the model will use. Defaults to the
default `Graph`. The supervisor may add operations to the graph before
creating a session, but the graph should not be modified by the caller
after passing it to the supervisor.
* <b>`ready_op`</b>: 1-D string `Tensor`. This tensor is evaluated by supervisors in
`prepare_or_wait_for_session()` to check if the model is ready to use.
The model is considered ready if it returns an empty array. Defaults to
the tensor returned from `tf.report_uninitialized_variables()`. If
`None`, the model is not checked for readiness.
* <b>`is_chief`</b>: If True, create a chief supervisor in charge of initializing
and restoring the model. If False, create a supervisor that relies
on a chief supervisor for inits and restore.
* <b>`init_op`</b>: `Operation`. Used by chief supervisors to initialize the model
when it can not be recovered. Defaults to an `Operation` that
initializes all variables. If `None`, no initialization is done
automatically unless you pass a value for `init_fn`, see below.
* <b>`init_feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
This feed dictionary will be used when `init_op` is evaluated.
* <b>`local_init_op`</b>: `Operation`. Used by all supervisors to run initializations
that should run for every new supervisor instance. By default these
are table initializers and initializers for local variables.
If `None`, no further per supervisor-instance initialization is
done automatically.
* <b>`logdir`</b>: A string. Optional path to a directory where to checkpoint the
model and log events for the visualizer. Used by chief supervisors.
The directory will be created if it does not exist.
* <b>`summary_op`</b>: An `Operation` that returns a Summary for the event logs.
Used by chief supervisors if a `logdir` was specified. Defaults to the
operation returned from merge_all_summaries(). If `None`, summaries are
not computed automatically.
* <b>`saver`</b>: A Saver object. Used by chief supervisors if a `logdir` was
specified. Defaults to the saver returned by Saver().
If `None`, the model is not saved automatically.
* <b>`global_step`</b>: An integer Tensor of size 1 that counts steps. The value
from 'global_step' is used in summaries and checkpoint filenames.
Defaults to the op named 'global_step' in the graph if it exists, is of
rank 1, size 1, and of type tf.int32 or tf.int64. If `None` the global
step is not recorded in summaries and checkpoint files. Used by chief
supervisors if a `logdir` was specified.
* <b>`save_summaries_secs`</b>: Number of seconds between the computation of
summaries for the event log. Defaults to 120 seconds. Pass 0 to
disable summaries.
* <b>`save_model_secs`</b>: Number of seconds between the creation of model
checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints.
* <b>`recovery_wait_secs`</b>: Number of seconds between checks that the model
is ready. Used by supervisors when waiting for a chief supervisor
to initialize or restore the model. Defaults to 30 seconds.
* <b>`stop_grace_secs`</b>: Grace period, in seconds, given to running threads to
stop when `stop()` is called. Defaults to 120 seconds.
* <b>`checkpoint_basename`</b>: The basename for checkpoint saving.
* <b>`session_manager`</b>: `SessionManager`, which manages Session creation and
recovery. If it is `None`, a default `SessionManager` will be created
with the set of arguments passed in for backwards compatibility.
* <b>`summary_writer`</b>: `SummaryWriter` to use or `USE_DEFAULT`. Can be `None`
to indicate that no summaries should be written.
* <b>`init_fn`</b>: Optional callable used to initialize the model. Called
after the optional `init_op` is called. The callable must accept one
argument, the session being initialized.
##### Returns:
A `Supervisor`.
- - -
#### `tf.train.Supervisor.managed_session(master='', config=None, start_standard_services=True, close_summary_writer=True)` {#Supervisor.managed_session}
Returns a context manager for a managed session.
This context manager creates and automatically recovers a session. It
optionally starts the standard services that handle checkpoints and
summaries. It monitors exceptions raised from the `with` block or from the
services and stops the supervisor as needed.
The context manager is typically used as follows:
```python
def train():
sv = tf.train.Supervisor(...)
with sv.managed_session(<master>) as sess:
for step in xrange(..):
if sv.should_stop():
break
sess.run(<my training op>)
...do other things needed at each training step...
```
An exception raised from the `with` block or one of the service threads is
raised again when the block exits. This is done after stopping all threads
and closing the session. For example, an `AbortedError` exception, raised
in case of preemption of one of the workers in a distributed model, is
raised again when the block exits.
If you want to retry the training loop in case of preemption you can do it
as follows:
```python
def main(...):
while True:
try:
train()
except tf.errors.Aborted:
pass
```
As a special case, exceptions used for control flow, such as
`OutOfRangeError` which reports that input queues are exhausted, are not
raised again from the `with` block: they indicate a clean termination of
the training loop and are considered normal termination.
##### Args:
* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
constructor for how this is interpreted.
* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
Passed as-is to create the session.
* <b>`start_standard_services`</b>: Whether to start the standard services,
such as checkpoint, summary and step counter.
* <b>`close_summary_writer`</b>: Whether to close the summary writer when
closing the session. Defaults to True.
##### Returns:
A context manager that yields a `Session` restored from the latest
checkpoint or initialized from scratch if no checkpoint exists. The
session is closed when the `with` block exits.
- - -
#### `tf.train.Supervisor.prepare_or_wait_for_session(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.prepare_or_wait_for_session}
Make sure the model is ready to be used.
Create a session on 'master', recovering or initializing the model as
needed, or wait for a session to be ready. If running as the chief
and `start_standard_services` is set to True, also call the session
manager to start the standard services.
##### Args:
* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
constructor for how this is interpreted.
* <b>`config`</b>: Optional ConfigProto proto used to configure the session,
which is passed as-is to create the session.
* <b>`wait_for_checkpoint`</b>: Whether we should wait for the availability of a
checkpoint before creating Session. Defaults to False.
* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
* <b>`start_standard_services`</b>: Whether to start the standard services and the
queue runners.
##### Returns:
A Session object that can be used to drive the model.
- - -
#### `tf.train.Supervisor.start_standard_services(sess)` {#Supervisor.start_standard_services}
Start the standard services for 'sess'.
This starts services in the background. The services started depend
on the parameters to the constructor and may include:
- A Summary thread computing summaries every save_summaries_secs.
- A Checkpoint thread saving the model every save_model_secs.
- A StepCounter thread measuring step time.
##### Args:
* <b>`sess`</b>: A Session.
##### Returns:
A list of threads that are running the standard services. You can use
the Supervisor's Coordinator to join these threads with:
sv.coord.join(<list of threads>)
##### Raises:
* <b>`RuntimeError`</b>: If called with a non-chief Supervisor.
* <b>`ValueError`</b>: If no `logdir` was passed to the constructor as the
services need a log directory.
- - -
#### `tf.train.Supervisor.start_queue_runners(sess, queue_runners=None)` {#Supervisor.start_queue_runners}
Start threads for `QueueRunners`.
Note that the queue runners collected in the graph key `QUEUE_RUNNERS`
are already started automatically when you create a session with the
supervisor, so unless you have non-collected queue runners to start
you do not need to call this explicitly.
##### Args:
* <b>`sess`</b>: A `Session`.
* <b>`queue_runners`</b>: A list of `QueueRunners`. If not specified, we'll use the
list of queue runners gathered in the graph under the key
`GraphKeys.QUEUE_RUNNERS`.
##### Returns:
The list of threads started for the `QueueRunners`.
- - -
#### `tf.train.Supervisor.summary_computed(sess, summary, global_step=None)` {#Supervisor.summary_computed}
Indicate that a summary was computed.
##### Args:
* <b>`sess`</b>: A `Session` object.
* <b>`summary`</b>: A Summary proto, or a string holding a serialized summary proto.
* <b>`global_step`</b>: Int. global step this summary is associated with. If `None`,
it will try to fetch the current step.
##### Raises:
* <b>`TypeError`</b>: if 'summary' is not a Summary proto or a string.
* <b>`RuntimeError`</b>: if the Supervisor was created without a `logdir`.
- - -
#### `tf.train.Supervisor.stop(threads=None, close_summary_writer=True)` {#Supervisor.stop}
Stop the services and the coordinator.
This does not close the session.
##### Args:
* <b>`threads`</b>: Optional list of threads to join with the coordinator. If
`None`, defaults to the threads running the standard services, the
threads started for `QueueRunners`, and the threads started by the
`loop()` method. To wait on additional threads, pass the
list in this parameter.
* <b>`close_summary_writer`</b>: Whether to close the `summary_writer`. Defaults to
`True` if the summary writer was created by the supervisor, `False`
otherwise.
- - -
#### `tf.train.Supervisor.request_stop(ex=None)` {#Supervisor.request_stop}
Request that the coordinator stop the threads.
See `Coordinator.request_stop()`.
##### Args:
* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
`sys.exc_info()`. If this is the first call to `request_stop()` the
corresponding exception is recorded and re-raised from `join()`.
- - -
#### `tf.train.Supervisor.should_stop()` {#Supervisor.should_stop}
Check if the coordinator was told to stop.
See `Coordinator.should_stop()`.
##### Returns:
True if the coordinator was told to stop, False otherwise.
- - -
#### `tf.train.Supervisor.stop_on_exception()` {#Supervisor.stop_on_exception}
Context handler to stop the supervisor when an exception is raised.
See `Coordinator.stop_on_exception()`.
##### Returns:
A context handler.
- - -
#### `tf.train.Supervisor.wait_for_stop()` {#Supervisor.wait_for_stop}
Block waiting for the coordinator to stop.
#### Other Methods
- - -
#### `tf.train.Supervisor.Loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.Loop}
Start a LooperThread that calls a function periodically.
If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)`
repeatedly. Otherwise it calls it every `timer_interval_secs`
seconds. The thread terminates when a stop is requested.
The started thread is added to the list of threads managed by the supervisor
so it does not need to be passed to the `stop()` method.
##### Args:
* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
* <b>`target`</b>: A callable object.
* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
##### Returns:
The started thread.
- - -
#### `tf.train.Supervisor.PrepareSession(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.PrepareSession}
Make sure the model is ready to be used.
Create a session on 'master', recovering or initializing the model as
needed, or wait for a session to be ready. If running as the chief
and `start_standard_services` is set to True, also call the session
manager to start the standard services.
##### Args:
* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
constructor for how this is interpreted.
* <b>`config`</b>: Optional ConfigProto proto used to configure the session,
which is passed as-is to create the session.
* <b>`wait_for_checkpoint`</b>: Whether we should wait for the availability of a
checkpoint before creating Session. Defaults to False.
* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
* <b>`start_standard_services`</b>: Whether to start the standard services and the
queue runners.
##### Returns:
A Session object that can be used to drive the model.
- - -
#### `tf.train.Supervisor.RequestStop(ex=None)` {#Supervisor.RequestStop}
Request that the coordinator stop the threads.
See `Coordinator.request_stop()`.
##### Args:
* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
`sys.exc_info()`. If this is the first call to `request_stop()` the
corresponding exception is recorded and re-raised from `join()`.
- - -
#### `tf.train.Supervisor.ShouldStop()` {#Supervisor.ShouldStop}
Check if the coordinator was told to stop.
See `Coordinator.should_stop()`.
##### Returns:
True if the coordinator was told to stop, False otherwise.
- - -
#### `tf.train.Supervisor.StartQueueRunners(sess, queue_runners=None)` {#Supervisor.StartQueueRunners}
Start threads for `QueueRunners`.
Note that the queue runners collected in the graph key `QUEUE_RUNNERS`
are already started automatically when you create a session with the
supervisor, so unless you have non-collected queue runners to start
you do not need to call this explicitly.
##### Args:
* <b>`sess`</b>: A `Session`.
* <b>`queue_runners`</b>: A list of `QueueRunners`. If not specified, we'll use the
list of queue runners gathered in the graph under the key
`GraphKeys.QUEUE_RUNNERS`.
##### Returns:
The list of threads started for the `QueueRunners`.
- - -
#### `tf.train.Supervisor.StartStandardServices(sess)` {#Supervisor.StartStandardServices}
Start the standard services for 'sess'.
This starts services in the background. The services started depend
on the parameters to the constructor and may include:
- A Summary thread computing summaries every save_summaries_secs.
- A Checkpoint thread saving the model every save_model_secs.
- A StepCounter thread measuring step time.
##### Args:
* <b>`sess`</b>: A Session.
##### Returns:
A list of threads that are running the standard services. You can use
the Supervisor's Coordinator to join these threads with:
sv.coord.join(<list of threads>)
##### Raises:
* <b>`RuntimeError`</b>: If called with a non-chief Supervisor.
* <b>`ValueError`</b>: If no `logdir` was passed to the constructor as the
services need a log directory.
- - -
#### `tf.train.Supervisor.Stop(threads=None, close_summary_writer=True)` {#Supervisor.Stop}
Stop the services and the coordinator.
This does not close the session.
##### Args:
* <b>`threads`</b>: Optional list of threads to join with the coordinator. If
`None`, defaults to the threads running the standard services, the
threads started for `QueueRunners`, and the threads started by the
`loop()` method. To wait on additional threads, pass the
list in this parameter.
* <b>`close_summary_writer`</b>: Whether to close the `summary_writer`. Defaults to
`True` if the summary writer was created by the supervisor, `False`
otherwise.
- - -
#### `tf.train.Supervisor.StopOnException()` {#Supervisor.StopOnException}
Context handler to stop the supervisor when an exception is raised.
See `Coordinator.stop_on_exception()`.
##### Returns:
A context handler.
- - -
#### `tf.train.Supervisor.SummaryComputed(sess, summary, global_step=None)` {#Supervisor.SummaryComputed}
Indicate that a summary was computed.
##### Args:
* <b>`sess`</b>: A `Session` object.
* <b>`summary`</b>: A Summary proto, or a string holding a serialized summary proto.
* <b>`global_step`</b>: Int. global step this summary is associated with. If `None`,
it will try to fetch the current step.
##### Raises:
* <b>`TypeError`</b>: if 'summary' is not a Summary proto or a string.
* <b>`RuntimeError`</b>: if the Supervisor was created without a `logdir`.
- - -
#### `tf.train.Supervisor.WaitForStop()` {#Supervisor.WaitForStop}
Block waiting for the coordinator to stop.
- - -
#### `tf.train.Supervisor.coord` {#Supervisor.coord}
Return the Coordinator used by the Supervisor.
The Coordinator can be useful if you want to run multiple threads
during your training.
##### Returns:
A Coordinator object.
- - -
#### `tf.train.Supervisor.global_step` {#Supervisor.global_step}
Return the global_step Tensor used by the supervisor.
##### Returns:
An integer Tensor for the global_step.
- - -
#### `tf.train.Supervisor.init_feed_dict` {#Supervisor.init_feed_dict}
Return the feed dictionary used when evaluating the `init_op`.
##### Returns:
A feed dictionary or `None`.
- - -
#### `tf.train.Supervisor.init_op` {#Supervisor.init_op}
Return the Init Op used by the supervisor.
##### Returns:
An Op or `None`.
- - -
#### `tf.train.Supervisor.is_chief` {#Supervisor.is_chief}
Return True if this is a chief supervisor.
##### Returns:
A bool.
- - -
#### `tf.train.Supervisor.loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.loop}
Start a LooperThread that calls a function periodically.
If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)`
repeatedly. Otherwise it calls it every `timer_interval_secs`
seconds. The thread terminates when a stop is requested.
The started thread is added to the list of threads managed by the supervisor
so it does not need to be passed to the `stop()` method.
##### Args:
* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
* <b>`target`</b>: A callable object.
* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
##### Returns:
The started thread.
- - -
#### `tf.train.Supervisor.ready_op` {#Supervisor.ready_op}
Return the Ready Op used by the supervisor.
##### Returns:
An Op or `None`.
- - -
#### `tf.train.Supervisor.save_model_secs` {#Supervisor.save_model_secs}
Return the delay between checkpoints.
##### Returns:
A timestamp.
- - -
#### `tf.train.Supervisor.save_path` {#Supervisor.save_path}
Return the save path used by the supervisor.
##### Returns:
A string.
- - -
#### `tf.train.Supervisor.save_summaries_secs` {#Supervisor.save_summaries_secs}
Return the delay between summary computations.
##### Returns:
A timestamp.
- - -
#### `tf.train.Supervisor.saver` {#Supervisor.saver}
Return the Saver used by the supervisor.
##### Returns:
A Saver object.
- - -
#### `tf.train.Supervisor.session_manager` {#Supervisor.session_manager}
Return the SessionManager used by the Supervisor.
##### Returns:
A SessionManager object.
- - -
#### `tf.train.Supervisor.summary_op` {#Supervisor.summary_op}
Return the Summary Tensor used by the chief supervisor.
##### Returns:
A string Tensor for the summary or `None`.
- - -
#### `tf.train.Supervisor.summary_writer` {#Supervisor.summary_writer}
Return the SummaryWriter used by the chief supervisor.
##### Returns:
A SummaryWriter.
- - -
### `class tf.train.SessionManager` {#SessionManager}
Training helper that restores from checkpoint and creates session.
This class is a small wrapper that takes care of session creation and
checkpoint recovery. It also provides functions to facilitate
coordination among multiple training threads or processes.
* Checkpointing trained variables as the training progresses.
* Initializing variables on startup, restoring them from the most recent
checkpoint after a crash, or waiting for checkpoints to become available.
### Usage:
```python
with tf.Graph().as_default():
...add operations to the graph...
# Create a SessionManager that will checkpoint the model in '/tmp/mydir'.
sm = SessionManager()
sess = sm.prepare_session(master, init_op, saver, checkpoint_dir)
# Use the session to train the graph.
while True:
sess.run(<my_train_op>)
```
`prepare_session()` initializes or restores a model. It requires `init_op`
and `saver` as arguments.
A second process could wait for the model to be ready by doing the following:
```python
with tf.Graph().as_default():
...add operations to the graph...
# Create a SessionManager that will wait for the model to become ready.
sm = SessionManager()
sess = sm.wait_for_session(master)
# Use the session to train the graph.
while True:
sess.run(<my_train_op>)
```
`wait_for_session()` waits for a model to be initialized by other processes.
- - -
#### `tf.train.SessionManager.__init__(local_init_op=None, ready_op=None, graph=None, recovery_wait_secs=30)` {#SessionManager.__init__}
Creates a SessionManager.
The `local_init_op` is an `Operation` that is always run after a new session
is created. If `None`, this step is skipped.
The `ready_op` is an `Operation` used to check if the model is ready. The
model is considered ready if that operation returns an empty string tensor.
If the operation returns a non-empty string tensor, the elements are
concatenated and used to indicate to the user why the model is not ready.
If `ready_op` is `None`, the model is not checked for readiness.
`recovery_wait_secs` is the number of seconds between checks that
the model is ready. It is used by processes to wait for a model to
be initialized or restored. Defaults to 30 seconds.
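As a sketch (argument values are illustrative), a manager that considers the model ready once all variables report as initialized:
```python
import tensorflow as tf

sm = tf.train.SessionManager(
    ready_op=tf.report_uninitialized_variables(),
    recovery_wait_secs=5)
```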
##### Args:
* <b>`local_init_op`</b>: An `Operation` run immediately after session creation.
Usually used to initialize tables and local variables.
* <b>`ready_op`</b>: An `Operation` to check if the model is initialized.
* <b>`graph`</b>: The `Graph` that the model will use.
* <b>`recovery_wait_secs`</b>: Seconds between checks for the model to be ready.
- - -
#### `tf.train.SessionManager.prepare_session(master, init_op=None, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None)` {#SessionManager.prepare_session}
Creates a `Session`. Makes sure the model is ready to be used.
Creates a `Session` on 'master'. If a `saver` object is passed in, and
`checkpoint_dir` points to a directory containing valid checkpoint
files, then it will try to recover the model from checkpoint. If
no checkpoint files are available, and `wait_for_checkpoint` is
`True`, then the process would check every `recovery_wait_secs`,
up to `max_wait_secs`, for recovery to succeed.
If the model cannot be recovered successfully then it is initialized by
either running the provided `init_op`, or calling the provided `init_fn`.
It is an error if the model cannot be recovered and neither an `init_op`
or an `init_fn` are passed.
This is a convenient function for the following, with a few error checks
added:
```python
sess, initialized = self.recover_session(master)
if not initialized:
if init_op:
sess.run(init_op, feed_dict=init_feed_dict)
if init_fn:
init_fn(sess)
return sess
```
##### Args:
* <b>`master`</b>: `String` representation of the TensorFlow master to use.
* <b>`init_op`</b>: Optional `Operation` used to initialize the model.
* <b>`saver`</b>: A `Saver` object used to restore a model.
* <b>`checkpoint_dir`</b>: Path to the checkpoint files.
* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
* <b>`init_feed_dict`</b>: Optional dictionary that maps `Tensor` objects to feed
values. This feed dictionary is passed to the session `run()` call when
running the init op.
* <b>`init_fn`</b>: Optional callable used to initialize the model. Called after the
optional `init_op` is called. The callable must accept one argument,
the session being initialized.
##### Returns:
A `Session` object that can be used to drive the model.
##### Raises:
* <b>`RuntimeError`</b>: If the model cannot be initialized or recovered.
- - -
#### `tf.train.SessionManager.recover_session(master, saver=None, checkpoint_dir=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None)` {#SessionManager.recover_session}
Creates a `Session`, recovering if possible.
Creates a new session on 'master'. If the session is not initialized
and can be recovered from a checkpoint, recover it.
##### Args:
* <b>`master`</b>: `String` representation of the TensorFlow master to use.
* <b>`saver`</b>: A `Saver` object used to restore a model.
* <b>`checkpoint_dir`</b>: Path to the checkpoint files.
* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
##### Returns:
A pair (sess, initialized) where 'initialized' is `True` if
the session could be recovered, `False` otherwise.
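A minimal sketch of the recover-then-initialize pattern (the variable, empty master string, and checkpoint directory are placeholders):
```python
import tensorflow as tf

v = tf.Variable(0, name="v")
init_op = tf.initialize_all_variables()  # initializer from this API generation
saver = tf.train.Saver()

sm = tf.train.SessionManager()
sess, initialized = sm.recover_session(
    "", saver=saver, checkpoint_dir="/tmp/mydir")
if not initialized:
  # No usable checkpoint was found, so initialize from scratch.
  sess.run(init_op)
```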
- - -
#### `tf.train.SessionManager.wait_for_session(master, config=None, max_wait_secs=inf)` {#SessionManager.wait_for_session}
Creates a new `Session` and waits for model to be ready.
Creates a new `Session` on 'master'. Waits for the model to be
initialized or recovered from a checkpoint. It's expected that
another thread or process will make the model ready, and that this
is intended to be used by threads/processes that participate in a
distributed training configuration where a different thread/process
is responsible for initializing or recovering the model being trained.
NB: The amount of time this method waits for the session is bounded
by max_wait_secs. By default, this function will wait indefinitely.
##### Args:
* <b>`master`</b>: `String` representation of the TensorFlow master to use.
* <b>`config`</b>: Optional ConfigProto proto used to configure the session.
* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
##### Returns:
A `Session`. May be None if the operation exceeds the timeout
specified by config.operation_timeout_in_ms.
##### Raises:
tf.DeadlineExceededError: if the session is not available after
max_wait_secs.
- - -
### `class tf.train.ClusterSpec` {#ClusterSpec}
Represents a cluster as a set of "tasks", organized into "jobs".
A `tf.train.ClusterSpec` represents the set of processes that
participate in a distributed TensorFlow computation. Every
[`tf.train.Server`](#Server) is constructed in a particular cluster.
To create a cluster with two jobs and five tasks, you specify the
mapping from job names to lists of network addresses (typically
hostname-port pairs).
```
cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222",
"worker1.example.com:2222",
"worker2.example.com:2222"],
"ps": ["ps0.example.com:2222",
"ps1.example.com:2222"]})
```
- - -
#### `tf.train.ClusterSpec.as_cluster_def()` {#ClusterSpec.as_cluster_def}
Returns a `tf.train.ClusterDef` protocol buffer based on this cluster.
- - -
#### `tf.train.ClusterSpec.as_dict()` {#ClusterSpec.as_dict}
Returns a dictionary from job names to lists of network addresses.
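For example (addresses are hypothetical; the order of `jobs` may vary):
```python
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"]})

cluster.jobs                 # e.g. ["ps", "worker"]
cluster.job_tasks("worker")  # ["worker0.example.com:2222", "worker1.example.com:2222"]
cluster.as_dict()            # the job-name-to-address mapping shown above
```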
#### Other Methods
- - -
#### `tf.train.ClusterSpec.__init__(cluster)` {#ClusterSpec.__init__}
Creates a `ClusterSpec`.
##### Args:
* <b>`cluster`</b>: A dictionary mapping one or more job names to lists of network
addresses, or a `tf.train.ClusterDef` protocol buffer.
##### Raises:
* <b>`TypeError`</b>: If `cluster` is not a dictionary mapping strings to lists
of strings, and not a `tf.train.ClusterDef` protobuf.
- - -
#### `tf.train.ClusterSpec.job_tasks(job_name)` {#ClusterSpec.job_tasks}
Returns a list of tasks in the given job.
##### Args:
* <b>`job_name`</b>: The string name of a job in this cluster.
##### Returns:
A list of strings, corresponding to the network addresses of tasks in
the given job, ordered by task index.
##### Raises:
* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster.
- - -
#### `tf.train.ClusterSpec.jobs` {#ClusterSpec.jobs}
Returns a list of job names in this cluster.
##### Returns:
A list of strings, corresponding to the names of jobs in this cluster.
- - -
### `tf.train.replica_device_setter(ps_tasks=0, ps_device='/job:ps', worker_device='/job:worker', merge_devices=True, cluster=None, ps_ops=None)` {#replica_device_setter}
Return a `device function` to use when building a Graph for replicas.
Device functions are used in the `with tf.device(device_function):` statement to
automatically assign devices to `Operation` objects as they are constructed.
Device constraints are added from the inner-most context first, working
outwards. The merging behavior adds constraints to fields that are still unset
by a more inner context. Currently the fields are (job, task, cpu/gpu).
If `cluster` is `None`, and `ps_tasks` is 0, the returned function is a no-op.
For example,
```python
# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
# jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
"ps": ["ps0:2222", "ps1:2222"],
"worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.replica_device_setter(cluster=cluster_spec)):
# Build your graph
v1 = tf.Variable(...) # assigned to /job:ps/task:0
v2 = tf.Variable(...) # assigned to /job:ps/task:1
v3 = tf.Variable(...) # assigned to /job:ps/task:0
# Run compute
```
##### Args:
* <b>`ps_tasks`</b>: Number of tasks in the `ps` job.
* <b>`ps_device`</b>: String. Device of the `ps` job. If empty no `ps` job is used.
Defaults to `ps`.
* <b>`worker_device`</b>: String. Device of the `worker` job. If empty no `worker`
job is used.
* <b>`merge_devices`</b>: `Boolean`. If `True`, device specifications are merged
rather than overridden; a field is only set if the corresponding device
constraint is completely unset.
* <b>`cluster`</b>: `ClusterDef` proto or `ClusterSpec`.
* <b>`ps_ops`</b>: List of `Operation` objects that need to be placed on `ps` devices.
##### Returns:
A function to pass to `tf.device()`.
##### Raises:
TypeError if `cluster` is not a dictionary or `ClusterDef` protocol buffer.
## Summary Operations
The following ops output
[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
protocol buffers as serialized string tensors.
You can fetch the output of a summary op in a session, and pass it to
a [SummaryWriter](../../api_docs/python/train.md#SummaryWriter) to append it
to an event file. Event files contain
[`Event`](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)
protos that can contain `Summary` protos along with the timestamp and
step. You can then use TensorBoard to visualize the contents of the
event files. See [TensorBoard and
Summaries](../../how_tos/summaries_and_tensorboard/index.md) for more
details.
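A minimal sketch tying a summary op to a `SummaryWriter` (the log directory and constant loss value are placeholders, and this assumes a `SummaryWriter` constructor that accepts the session graph as its second argument):
```python
import tensorflow as tf

loss = tf.constant(0.5)
loss_summary = tf.scalar_summary("loss", loss)

with tf.Session() as sess:
  writer = tf.train.SummaryWriter("/tmp/train_logs", sess.graph)
  # Fetch the serialized Summary proto and append it to the event file.
  summary_str = sess.run(loss_summary)
  writer.add_summary(summary_str, global_step=0)
  writer.close()
```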
- - -
### `tf.scalar_summary(tags, values, collections=None, name=None)` {#scalar_summary}
Outputs a `Summary` protocol buffer with scalar values.
The input `tags` and `values` must have the same shape. The generated
summary has a summary value for each tag-value pair in `tags` and `values`.
##### Args:
* <b>`tags`</b>: A `string` `Tensor`. Tags for the summaries.
* <b>`values`</b>: A real numeric Tensor. Values for the summaries.
* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
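For example, a minimal sketch (the constant stands in for whatever scalar you actually track, such as a loss tensor):
```python
import tensorflow as tf
# Hypothetical scalar value; in practice this is usually a loss or accuracy tensor.
loss = tf.constant(0.25)
loss_summary = tf.scalar_summary('loss', loss)
with tf.Session() as sess:
    serialized = sess.run(loss_summary)  # a serialized `Summary` proto string
```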
- - -
### `tf.image_summary(tag, tensor, max_images=3, collections=None, name=None)` {#image_summary}
Outputs a `Summary` protocol buffer with images.
The summary has up to `max_images` summary values containing images. The
images are built from `tensor` which must be 4-D with shape `[batch_size,
height, width, channels]` and where `channels` can be:
* 1: `tensor` is interpreted as Grayscale.
* 3: `tensor` is interpreted as RGB.
* 4: `tensor` is interpreted as RGBA.
The images have the same number of channels as the input tensor. For float
input, the values are normalized one image at a time to fit in the range
`[0, 255]`. `uint8` values are unchanged. The op uses two different
normalization algorithms:
* If the input values are all positive, they are rescaled so the largest one
is 255.
* If any input value is negative, the values are shifted so input value 0.0
is at 127. They are then rescaled so that either the smallest value is 0,
or the largest one is 255.
The `tag` argument is a scalar `Tensor` of type `string`. It is used to
build the `tag` of the summary values:
* If `max_images` is 1, the summary value tag is '*tag*/image'.
* If `max_images` is greater than 1, the summary value tags are
generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.
##### Args:
* <b>`tag`</b>: A scalar `Tensor` of type `string`. Used to build the `tag`
of the summary values.
* <b>`tensor`</b>: A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height,
width, channels]` where `channels` is 1, 3, or 4.
* <b>`max_images`</b>: Max number of batch elements to generate images for.
* <b>`collections`</b>: Optional list of ops.GraphKeys. The collections to add the
summary to. Defaults to [ops.GraphKeys.SUMMARIES]
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
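A minimal sketch, using random data in place of real images:
```python
import tensorflow as tf
# Hypothetical batch of four 28x28 grayscale images, 4-D: [batch, height, width, channels].
images = tf.random_uniform([4, 28, 28, 1])
image_summ = tf.image_summary('inputs', images, max_images=3)
with tf.Session() as sess:
    serialized = sess.run(image_summ)
```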
- - -
### `tf.audio_summary(tag, tensor, sample_rate, max_outputs=3, collections=None, name=None)` {#audio_summary}
Outputs a `Summary` protocol buffer with audio.
The summary has up to `max_outputs` summary values containing audio. The
audio is built from `tensor` which must be 3-D with shape `[batch_size,
frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are
assumed to be in the range of `[-1.0, 1.0]` with a sample rate of
`sample_rate`.
The `tag` argument is a scalar `Tensor` of type `string`. It is used to
build the `tag` of the summary values:
* If `max_outputs` is 1, the summary value tag is '*tag*/audio'.
* If `max_outputs` is greater than 1, the summary value tags are
generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.
##### Args:
* <b>`tag`</b>: A scalar `Tensor` of type `string`. Used to build the `tag`
of the summary values.
* <b>`tensor`</b>: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]`
or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`.
* <b>`sample_rate`</b>: The sample rate of the signal in hertz.
* <b>`max_outputs`</b>: Max number of batch elements to generate audio for.
* <b>`collections`</b>: Optional list of ops.GraphKeys. The collections to add the
summary to. Defaults to [ops.GraphKeys.SUMMARIES]
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
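A minimal sketch, again with random data standing in for real audio:
```python
import tensorflow as tf
# Hypothetical batch of two one-second mono clips at 16 kHz, values in [-1.0, 1.0].
audio = tf.random_uniform([2, 16000], minval=-1.0, maxval=1.0)
audio_summ = tf.audio_summary('waveform', audio, 16000)
```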
- - -
### `tf.histogram_summary(tag, values, collections=None, name=None)` {#histogram_summary}
Outputs a `Summary` protocol buffer with a histogram.
The generated
[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
has one summary value containing a histogram for `values`.
This op reports an `InvalidArgument` error if any value is not finite.
##### Args:
* <b>`tag`</b>: A `string` `Tensor`. 0-D. Tag to use for the summary value.
* <b>`values`</b>: A real numeric `Tensor`. Any shape. Values to use to
build the histogram.
* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer.
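For instance, to track the distribution of a hypothetical weight matrix:
```python
import tensorflow as tf
weights = tf.Variable(tf.truncated_normal([784, 10]))
weights_summary = tf.histogram_summary('weights', weights)
```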
- - -
### `tf.nn.zero_fraction(value, name=None)` {#zero_fraction}
Returns the fraction of zeros in `value`.
If `value` is empty, the result is `nan`.
This is useful in summaries to measure and report sparsity. For example,
    z = tf.nn.relu(...)
    summ = tf.scalar_summary('sparsity', tf.nn.zero_fraction(z))
##### Args:
* <b>`value`</b>: A tensor of numeric type.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
The fraction of zeros in `value`, with type `float32`.
- - -
### `tf.merge_summary(inputs, collections=None, name=None)` {#merge_summary}
Merges summaries.
This op creates a
[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
protocol buffer that contains the union of all the values in the input
summaries.
When the Op is run, it reports an `InvalidArgument` error if multiple values
in the summaries to merge use the same tag.
##### Args:
* <b>`inputs`</b>: A list of `string` `Tensor` objects containing serialized `Summary`
protocol buffers.
* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
* <b>`name`</b>: A name for the operation (optional).
##### Returns:
A scalar `Tensor` of type `string`. The serialized `Summary` protocol
buffer resulting from the merging.
- - -
### `tf.merge_all_summaries(key='summaries')` {#merge_all_summaries}
Merges all summaries collected in the default graph.
##### Args:
* <b>`key`</b>: `GraphKey` used to collect the summaries. Defaults to
`GraphKeys.SUMMARIES`.
##### Returns:
If no summaries were collected, returns None. Otherwise returns a scalar
`Tensor` of type `string` containing the serialized `Summary` protocol
buffer resulting from the merging.
## Adding Summaries to Event Files
See [Summaries and
TensorBoard](../../how_tos/summaries_and_tensorboard/index.md) for an
overview of summaries, event files, and visualization in TensorBoard.
- - -
### `class tf.train.SummaryWriter` {#SummaryWriter}
Writes `Summary` protocol buffers to event files.
The `SummaryWriter` class provides a mechanism to create an event file in a
given directory and add summaries and events to it. The class updates the
file contents asynchronously. This allows a training program to call methods
to add data to the file directly from the training loop, without slowing down
training.
- - -
#### `tf.train.SummaryWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)` {#SummaryWriter.__init__}
Creates a `SummaryWriter` and an event file.
On construction the summary writer creates a new event file in `logdir`.
This event file will contain `Event` protocol buffers constructed when you
call one of the following functions: `add_summary()`, `add_session_log()`,
`add_event()`, or `add_graph()`.
If you pass a `Graph` to the constructor it is added to
the event file. (This is equivalent to calling `add_graph()` later).
TensorBoard will pick the graph from the file and display it graphically so
you can interactively explore the graph you built. You will usually pass
the graph from the session in which you launched it:
```python
...create a graph...
# Launch the graph in a session.
sess = tf.Session()
# Create a summary writer, add the 'graph' to the event file.
writer = tf.train.SummaryWriter(<some-directory>, sess.graph)
```
The other arguments to the constructor control the asynchronous writes to
the event file:
* `flush_secs`: How often, in seconds, to flush the added summaries
and events to disk.
* `max_queue`: Maximum number of summaries or events pending to be
  written to disk before one of the 'add' calls blocks.
##### Args:
* <b>`logdir`</b>: A string. Directory where event file will be written.
* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
* <b>`max_queue`</b>: Integer. Size of the queue for pending events and summaries.
* <b>`flush_secs`</b>: Number. How often, in seconds, to flush the
pending events and summaries to disk.
* <b>`graph_def`</b>: DEPRECATED: Use the `graph` argument instead.
- - -
#### `tf.train.SummaryWriter.add_summary(summary, global_step=None)` {#SummaryWriter.add_summary}
Adds a `Summary` protocol buffer to the event file.
This method wraps the provided summary in an `Event` protocol buffer
and adds it to the event file.
You can pass the result of evaluating any summary op, using
[`Session.run()`](client.md#Session.run) or
[`Tensor.eval()`](framework.md#Tensor.eval), to this
function. Alternatively, you can pass a `tf.Summary` protocol
buffer that you populate with your own data. The latter is
commonly done to report evaluation results in event files.
##### Args:
* <b>`summary`</b>: A `Summary` protocol buffer, optionally serialized as a string.
* <b>`global_step`</b>: Number. Optional global step value to record with the
summary.
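A common pattern, sketched here with a toy graph and a hypothetical `/tmp/example-logs` directory, is to evaluate a merged summary op inside the training loop and pass each result to `add_summary()` together with the step number:
```python
import tensorflow as tf
x = tf.constant(3.0)
tf.scalar_summary('x', x)
merged = tf.merge_all_summaries()
with tf.Session() as sess:
    writer = tf.train.SummaryWriter('/tmp/example-logs', sess.graph)
    for step in range(10):
        summary_str = sess.run(merged)
        writer.add_summary(summary_str, global_step=step)
    writer.close()
```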
- - -
#### `tf.train.SummaryWriter.add_session_log(session_log, global_step=None)` {#SummaryWriter.add_session_log}
Adds a `SessionLog` protocol buffer to the event file.
This method wraps the provided session log in an `Event` protocol buffer
and adds it to the event file.
##### Args:
* <b>`session_log`</b>: A `SessionLog` protocol buffer.
* <b>`global_step`</b>: Number. Optional global step value to record with the
summary.
- - -
#### `tf.train.SummaryWriter.add_event(event)` {#SummaryWriter.add_event}
Adds an event to the event file.
##### Args:
* <b>`event`</b>: An `Event` protocol buffer.
- - -
#### `tf.train.SummaryWriter.add_graph(graph, global_step=None, graph_def=None)` {#SummaryWriter.add_graph}
Adds a `Graph` to the event file.
The graph described by the protocol buffer will be displayed by
TensorBoard. Most users pass a graph in the constructor instead.
##### Args:
* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
* <b>`global_step`</b>: Number. Optional global step counter to record with the
graph.
* <b>`graph_def`</b>: DEPRECATED. Use the `graph` parameter instead.
##### Raises:
* <b>`ValueError`</b>: If both graph and graph_def are passed to the method.
- - -
#### `tf.train.SummaryWriter.add_run_metadata(run_metadata, tag, global_step=None)` {#SummaryWriter.add_run_metadata}
Adds metadata information for a single session.run() call.
##### Args:
* <b>`run_metadata`</b>: A `RunMetadata` protobuf object.
* <b>`tag`</b>: The tag name for this metadata.
* <b>`global_step`</b>: Number. Optional global step counter to record with the
StepStats.
##### Raises:
* <b>`ValueError`</b>: If the provided tag was already used for this type of event.
- - -
#### `tf.train.SummaryWriter.flush()` {#SummaryWriter.flush}
Flushes the event file to disk.
Call this method to make sure that all pending events have been written to
disk.
- - -
#### `tf.train.SummaryWriter.close()` {#SummaryWriter.close}
Flushes the event file to disk and closes the file.
Call this method when you do not need the summary writer anymore.
#### Other Methods
- - -
#### `tf.train.SummaryWriter.reopen()` {#SummaryWriter.reopen}
Reopens the summary writer.
Can be called after `close()` to add more events in the same directory.
The events will go into a new events file.
Does nothing if the summary writer was not closed.
- - -
### `tf.train.summary_iterator(path)` {#summary_iterator}
An iterator for reading `Event` protocol buffers from an event file.
You can use this function to read events written to an event file. It returns
a Python iterator that yields `Event` protocol buffers.
Example: Print the contents of an events file.
```python
for e in tf.train.summary_iterator("path/to/events.file"):
print(e)
```
Example: Print selected summary values.
```python
# This example supposes that the events file contains summaries with a
# summary value tag 'loss'. These could have been added by calling
# `add_summary()`, passing the output of a scalar summary op created
# with: `tf.scalar_summary(['loss'], loss_tensor)`.
for e in tf.train.summary_iterator("path/to/events.file"):
for v in e.summary.value:
if v.tag == 'loss':
print(v.simple_value)
```
See the protocol buffer definitions of
[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)
and
[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
for more information about their attributes.
##### Args:
* <b>`path`</b>: The path to an event file created by a `SummaryWriter`.
##### Yields:
`Event` protocol buffers.
## Training utilities
- - -
### `tf.train.global_step(sess, global_step_tensor)` {#global_step}
Small helper to get the global step.
```python
# Creates a variable to hold the global_step.
global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
# Creates a session.
sess = tf.Session()
# Initializes the variable.
sess.run(global_step_tensor.initializer)
print('global_step: %s' % tf.train.global_step(sess, global_step_tensor))
# global_step: 10
```
##### Args:
* <b>`sess`</b>: A TensorFlow `Session` object.
* <b>`global_step_tensor`</b>: `Tensor` or the `name` of the operation that contains
the global step.
##### Returns:
The global step value.
- - -
### `tf.train.write_graph(graph_def, logdir, name, as_text=True)` {#write_graph}
Writes a graph proto to a file.
The graph is written as a binary proto unless `as_text` is `True`.
```python
v = tf.Variable(0, name='my_variable')
sess = tf.Session()
tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')
```
##### Args:
* <b>`graph_def`</b>: A `GraphDef` protocol buffer.
* <b>`logdir`</b>: Directory where to write the graph. This can refer to remote
filesystems, such as Google Cloud Storage (GCS).
* <b>`name`</b>: Filename for the graph.
* <b>`as_text`</b>: If `True`, writes the graph as an ASCII proto.
## Other Functions and Classes
- - -
### `class tf.train.LooperThread` {#LooperThread}
A thread that runs code repeatedly, optionally on a timer.
This thread class is intended to be used with a `Coordinator`. It repeatedly
runs code specified either as `target` and `args` or by the `run_loop()`
method.
Before each run the thread checks if the coordinator has requested stop. In
that case the looper thread terminates immediately.
If the code being run raises an exception, that exception is reported to the
coordinator and the thread terminates. The coordinator will then request all
the other threads it coordinates to stop.
You typically pass looper threads to the supervisor `Join()` method.
- - -
#### `tf.train.LooperThread.__init__(coord, timer_interval_secs, target=None, args=None, kwargs=None)` {#LooperThread.__init__}
Create a LooperThread.
##### Args:
* <b>`coord`</b>: A Coordinator.
* <b>`timer_interval_secs`</b>: Time boundaries at which to call Run(), or None
if it should be called back to back.
* <b>`target`</b>: Optional callable object that will be executed in the thread.
* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
##### Raises:
* <b>`ValueError`</b>: If one of the arguments is invalid.
- - -
#### `tf.train.LooperThread.daemon` {#LooperThread.daemon}
A boolean value indicating whether this thread is a daemon thread (True) or not (False).
This must be set before start() is called, otherwise RuntimeError is
raised. Its initial value is inherited from the creating thread; the
main thread is not a daemon thread and therefore all threads created in
the main thread default to daemon = False.
The entire Python program exits when no alive non-daemon threads are
left.
- - -
#### `tf.train.LooperThread.getName()` {#LooperThread.getName}
- - -
#### `tf.train.LooperThread.ident` {#LooperThread.ident}
Thread identifier of this thread or None if it has not been started.
This is a nonzero integer. See the thread.get_ident() function. Thread
identifiers may be recycled when a thread exits and another thread is
created. The identifier is available even after the thread has exited.
- - -
#### `tf.train.LooperThread.isAlive()` {#LooperThread.isAlive}
Return whether the thread is alive.
This method returns True just before the run() method starts until just
after the run() method terminates. The module function enumerate()
returns a list of all alive threads.
- - -
#### `tf.train.LooperThread.isDaemon()` {#LooperThread.isDaemon}
- - -
#### `tf.train.LooperThread.is_alive()` {#LooperThread.is_alive}
Return whether the thread is alive.
This method returns True just before the run() method starts until just
after the run() method terminates. The module function enumerate()
returns a list of all alive threads.
- - -
#### `tf.train.LooperThread.join(timeout=None)` {#LooperThread.join}
Wait until the thread terminates.
This blocks the calling thread until the thread whose join() method is
called terminates -- either normally or through an unhandled exception
or until the optional timeout occurs.
When the timeout argument is present and not None, it should be a
floating point number specifying a timeout for the operation in seconds
(or fractions thereof). As join() always returns None, you must call
isAlive() after join() to decide whether a timeout happened -- if the
thread is still alive, the join() call timed out.
When the timeout argument is not present or None, the operation will
block until the thread terminates.
A thread can be join()ed many times.
join() raises a RuntimeError if an attempt is made to join the current
thread as that would cause a deadlock. It is also an error to join() a
thread before it has been started and attempts to do so raises the same
exception.
- - -
#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop}
Start a LooperThread that calls a function periodically.
If `timer_interval_secs` is None the thread calls `target(args)`
repeatedly. Otherwise `target(args)` is called every `timer_interval_secs`
seconds. The thread terminates when a stop of the coordinator is
requested.
##### Args:
* <b>`coord`</b>: A Coordinator.
* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
* <b>`target`</b>: A callable object.
* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
##### Returns:
The started thread.
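A minimal sketch (the `heartbeat` function is hypothetical):
```python
import tensorflow as tf
coord = tf.train.Coordinator()
def heartbeat():
    print('still running...')
# Calls heartbeat() every 10 seconds until the coordinator is asked to stop.
looper = tf.train.LooperThread.loop(coord, 10, target=heartbeat)
# ... do other work on the main thread ...
coord.request_stop()
coord.join([looper])
```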
- - -
#### `tf.train.LooperThread.name` {#LooperThread.name}
A string used for identification purposes only.
It has no semantics. Multiple threads may be given the same name. The
initial name is set by the constructor.
- - -
#### `tf.train.LooperThread.run()` {#LooperThread.run}
- - -
#### `tf.train.LooperThread.run_loop()` {#LooperThread.run_loop}
Called at 'timer_interval_secs' boundaries.
- - -
#### `tf.train.LooperThread.setDaemon(daemonic)` {#LooperThread.setDaemon}
- - -
#### `tf.train.LooperThread.setName(name)` {#LooperThread.setName}
- - -
#### `tf.train.LooperThread.start()` {#LooperThread.start}
Start the thread's activity.
It must be called at most once per thread object. It arranges for the
object's run() method to be invoked in a separate thread of control.
This method will raise a RuntimeError if called more than once on the
same thread object.
- - -
#### `tf.train.LooperThread.start_loop()` {#LooperThread.start_loop}
Called when the thread starts.
- - -
#### `tf.train.LooperThread.stop_loop()` {#LooperThread.stop_loop}
Called when the thread stops.
- - -
### `tf.train.do_quantize_training_on_graphdef(input_graph, num_bits)` {#do_quantize_training_on_graphdef}
- - -
### `tf.train.generate_checkpoint_state_proto(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None)` {#generate_checkpoint_state_proto}
Generates a checkpoint state proto.
##### Args:
* <b>`save_dir`</b>: Directory where the model was saved.
* <b>`model_checkpoint_path`</b>: The checkpoint file.
* <b>`all_model_checkpoint_paths`</b>: List of strings. Paths to all not-yet-deleted
checkpoints, sorted from oldest to newest. If this is a non-empty list,
the last element must be equal to model_checkpoint_path. These paths
are also saved in the CheckpointState proto.
##### Returns:
CheckpointState proto with model_checkpoint_path and
all_model_checkpoint_paths updated to either absolute paths or
relative paths to the current save_dir.
| 31.065797 | 375 | 0.723797 | eng_Latn | 0.99063 |
8a6f18c0d3f463d781359c70ab49ddeba94f8b3f | 7,741 | md | Markdown | docs/t-sql/functions/text-and-image-functions-textptr-transact-sql.md | masahiko-sotta/sql-docs.ja-jp | f9e587be8d74ad47d0cc2c31a1670e2190a0aab7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/functions/text-and-image-functions-textptr-transact-sql.md | masahiko-sotta/sql-docs.ja-jp | f9e587be8d74ad47d0cc2c31a1670e2190a0aab7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/functions/text-and-image-functions-textptr-transact-sql.md | masahiko-sotta/sql-docs.ja-jp | f9e587be8d74ad47d0cc2c31a1670e2190a0aab7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: TEXTPTR (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 10/23/2017
ms.prod: sql
ms.prod_service: sql-database
ms.reviewer: ''
ms.technology: t-sql
ms.topic: language-reference
f1_keywords:
- TEXTPTR_TSQL
- TEXTPTR
dev_langs:
- TSQL
helpviewer_keywords:
- TEXTPTR function
- viewing text pointer values
- text-pointer values
- displaying text pointer values
ms.assetid: 2672b8cb-f747-46f3-9358-9b49b3583b8e
author: MikeRayMSFT
ms.author: mikeray
ms.openlocfilehash: d0e511e34b782c444bcdf6c778bb89dfebd4fab4
ms.sourcegitcommit: b2464064c0566590e486a3aafae6d67ce2645cef
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/15/2019
ms.locfileid: "68099035"
---
# <a name="text-and-image-functions---textptr-transact-sql"></a>テキスト関数とイメージ関数 - TEXTPTR (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-xxxx-xxxx-xxx-md.md)]
**text**、**ntext**、または **image** 列に対応するテキスト ポインターの値を **varbinary** 形式で返します。 取得したテキスト ポインターの値は、READTEXT、WRITETEXT、および UPDATETEXT ステートメントで使用します。
> [!IMPORTANT]
> [!INCLUDE[ssNoteDepFutureAvoid](../../includes/ssnotedepfutureavoid-md.md)]代替機能を使用することはできません。
 [Transact-SQL 構文表記規則](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>構文
```
TEXTPTR ( column )
```
## <a name="arguments"></a>引数
*column*
使われる **text**、**ntext**、または **image** の列です。
## <a name="return-types"></a>戻り値の型
**varbinary**
## <a name="remarks"></a>Remarks
行内テキストがあるテーブルの場合、TEXTPTR では、処理するテキストのハンドルが返されます。 テキストの値が NULL である場合も、有効なテキスト ポインターを取得できます。
ビューの列に対して TEXTPTR 関数を使用することはできません。 それはテーブルの列に対してのみ使用できます。 ビューの列に対して TEXTPTR 関数を使用するには、[ALTER DATABASE 互換性レベル](../../t-sql/statements/alter-database-transact-sql-compatibility-level.md)を使用して互換性レベルを 80 に設定する必要があります。 テーブルに行内テキストがなく、**text**、**ntext**、または **image** 列が UPDATETEXT ステートメントで初期化されていない場合、TEXTPTR は NULL ポインターを返します。
テキスト ポインターが存在するかどうかをテストするには、TEXTVALID を使用します。 有効なテキスト ポインターがないと、UPDATETEXT、WRITETEXT、READTEXT は使用できません。
これらの関数とステートメントは、**text**、**ntext**、**image** データを操作する場合にも役立ちます。
|関数またはステートメント|[説明]|
|---------------------------|-----------------|
|PATINDEX<b>('</b> _%pattern%_ **' ,** _expression_ **)**|**text** または **ntext** 列で指定された文字列の文字位置を返します。|
|DATALENGTH<b>(</b>_expression_ **)**|**text**、**ntext**、**image** 列のデータの長さを返します。|
|[SET TEXTSIZE]|SELECT ステートメントで返される **text**、**ntext**、または **image** データの制限値をバイト単位で返します。|
|SUBSTRING<b>(</b>_text_column_, _start_, _length_ **)**|指定された *start* オフセットと *length* で指定される **varchar** 文字列を返します。 長さは 8 KB 未満で指定してください。|
## <a name="examples"></a>使用例
> [!NOTE]
> 次の例を実行するには、**pubs** データベースをインストールする必要があります。
### <a name="a-using-textptr"></a>A. TEXTPTR を使用する
次の例では、`TEXTPTR` 関数を使用して、`pubs` データベースの `pub_info` テーブル内の `New Moon Books` に関連付けられている **image** 列 `logo` を検索します。 テキスト ポインターは、ローカル変数 `@ptrval.` に格納されます。
```
USE pubs;
GO
DECLARE @ptrval varbinary(16);
SELECT @ptrval = TEXTPTR(logo)
FROM pub_info pr, publishers p
WHERE p.pub_id = pr.pub_id
AND p.pub_name = 'New Moon Books';
GO
```
### <a name="b-using-textptr-with-in-row-text"></a>B. TEXTPTR を行内テキストと使用する
[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] では、行内テキスト ポインターは、次の例に示すように、トランザクション内で使用する必要があります。
```
CREATE TABLE t1 (c1 int, c2 text);
EXEC sp_tableoption 't1', 'text in row', 'on';
INSERT t1 VALUES ('1', 'This is text.');
GO
BEGIN TRAN;
DECLARE @ptrval VARBINARY(16);
SELECT @ptrval = TEXTPTR(c2)
FROM t1
WHERE c1 = 1;
READTEXT t1.c2 @ptrval 0 1;
COMMIT;
```
### <a name="c-returning-text-data"></a>C. テキスト データを返す
次の例では、`pub_info` テーブルから `pub_id` 列、および `pr_info` 列の 16 バイトのテキスト ポインターを選択します。
```
USE pubs;
GO
SELECT pub_id, TEXTPTR(pr_info)
FROM pub_info
ORDER BY pub_id;
GO
```
[!INCLUDE[ssResult](../../includes/ssresult-md.md)]
```
pub_id
------ ----------------------------------
0736 0x6c0000000000feffb801000001000100
0877 0x6d0000000000feffb801000001000300
1389 0x6e0000000000feffb801000001000500
1622 0x700000000000feffb801000001000900
1756 0x710000000000feffb801000001000b00
9901 0x720000000000feffb801000001000d00
9952 0x6f0000000000feffb801000001000700
9999 0x730000000000feffb801000001000f00
(8 row(s) affected)
```
次の例では、TEXTPTR を使用せずにテキストの最初の `8000` バイトを返す方法を示します。
```
USE pubs;
GO
SET TEXTSIZE 8000;
SELECT pub_id, pr_info
FROM pub_info
ORDER BY pub_id;
GO
```
[!INCLUDE[ssResult](../../includes/ssresult-md.md)]
```
pub_id pr_info
------ -----------------------------------------------------------------
0736 New Moon Books (NMB) has just released another top ten publication. With the latest publication this makes NMB the hottest new publisher of the year!
0877 This is sample text data for Binnet & Hardley, publisher 0877 in the pubs database. Binnet & Hardley is located in Washington, D.C.
This is sample text data for Binnet & Hardley, publisher 0877 in the pubs database. Binnet & Hardley is located in Washi
1389 This is sample text data for Algodata Infosystems, publisher 1389 in the pubs database. Algodata Infosystems is located in Berkeley, California.
9999 This is sample text data for Lucerne Publishing, publisher 9999 in the pubs database. Lucerne publishing is located in Paris, France.
This is sample text data for Lucerne Publishing, publisher 9999 in the pubs database. Lucerne publishing is located in
(8 row(s) affected)
```
### <a name="d-returning-specific-text-data"></a>D. 特定のテキスト データを返す
次の例では、`pubs` データベースの `pub_info` テーブル内の `pub_id``0736` に関連付けられた `text` 列 (`pr_info`) を検索します。 まず、ローカル変数 `@val` が宣言されます。 次に、テキスト ポインター (長いバイナリ文字列) が `@val` に挿入され、`READTEXT` ステートメントにパラメーターとして指定されます。 これによって、5 番目のバイト (オフセットは 4) から開始して、10 バイトのデータが返されます。
```
USE pubs;
GO
DECLARE @val varbinary(16);
SELECT @val = TEXTPTR(pr_info)
FROM pub_info
WHERE pub_id = '0736';
READTEXT pub_info.pr_info @val 4 10;
GO
```
[!INCLUDE[ssResult](../../includes/ssresult-md.md)]
```
pr_info
-----------------------------------------------------------------------
is sample
(1 row(s) affected)
```
## <a name="see-also"></a>参照
[DATALENGTH (Transact-SQL)](../../t-sql/functions/datalength-transact-sql.md)
[PATINDEX (Transact-SQL)](../../t-sql/functions/patindex-transact-sql.md)
[READTEXT (Transact-SQL)](../../t-sql/queries/readtext-transact-sql.md)
[SET TEXTSIZE (Transact-SQL)](../../t-sql/statements/set-textsize-transact-sql.md)
[テキスト関数とイメージ関数 (Transact-SQL)](https://msdn.microsoft.com/library/b9c70488-1bf5-4068-a003-e548ccbc5199)
[UPDATETEXT (Transact-SQL)](../../t-sql/queries/updatetext-transact-sql.md)
[WRITETEXT (Transact-SQL)](../../t-sql/queries/writetext-transact-sql.md)
| 39.09596 | 325 | 0.620075 | yue_Hant | 0.692373 |
8a6f2678f17da6a79d2cf6ad5c3aba3ac54e7ab4 | 872 | md | Markdown | recipes/components/dashi-stock.md | phronmophobic/dinner | a94d403741954171900731109dc78019a4480582 | [
"MIT"
] | 1 | 2022-01-23T22:41:26.000Z | 2022-01-23T22:41:26.000Z | recipes/components/dashi-stock.md | phronmophobic/dinner | a94d403741954171900731109dc78019a4480582 | [
"MIT"
] | null | null | null | recipes/components/dashi-stock.md | phronmophobic/dinner | a94d403741954171900731109dc78019a4480582 | [
"MIT"
] | 3 | 2022-01-23T22:44:15.000Z | 2022-01-26T04:40:16.000Z | # Dashi Stock
The most essential Japanese cooking stock that becomes the base of many other sauces and soups.
## Ingredients
- [ ] 30g konbu
- [ ] 30g katsuobushi
- [ ] 8 cups of water (filtered if possible)
## Recipe
1. Place konbu and water in a sauce pan or stock pot over high heat
1. Keep an eye on the pot and as soon as you start to see the water come to a low simmer, remove from the heat
1. Add katsuobushi to the water, set a timer for five minutes
1. When the five minutes are up, strain the liquid into another vessel and discard the used katsuobushi and konbu. If you want to, you can use these same ingredients to make a second batch, much like you might brew a second batch of tea with the same leaves. It won't be as strong but will still have really great flavor
1. Store in your fridge/freezer in deli containers or use immediately to make miso soup | 51.294118 | 320 | 0.762615 | eng_Latn | 0.999752 |
8a6fa2de7840f6991bcf3ebdb92c45c3b90bc1e3 | 431 | markdown | Markdown | _posts/2017/12/2017-12-11-2017-12-11-til.markdown | Kirade/kirade.github.io | e523ff8ddba8349f4b3970ed0aebe6a2091c92a0 | [
"MIT"
] | null | null | null | _posts/2017/12/2017-12-11-2017-12-11-til.markdown | Kirade/kirade.github.io | e523ff8ddba8349f4b3970ed0aebe6a2091c92a0 | [
"MIT"
] | null | null | null | _posts/2017/12/2017-12-11-2017-12-11-til.markdown | Kirade/kirade.github.io | e523ff8ddba8349f4b3970ed0aebe6a2091c92a0 | [
"MIT"
] | null | null | null | ---
layout: "post"
title: "2017-12-11 TIL"
date: "2017-12-11 23:08"
tag: TIL
category: TIL
---
## Today I Learned
### What I Did
* Django
- 회원가입 폼 UI 제작
- 로그인 폼 UI 제작
- 회원가입 및 로그인 테스트, 모델 수정
* 프로젝트 외
- 컴퓨터 보안 TA 활동
- 입사지원서 작성
### To-Do
* Django
- 게시판 제작
- 회원 정보 수정 페이지
* Blog
- 배운 내용 중 포스팅하며 정리할 내용 생각
### Reference
* [부트스트랩](http://bootstrapk.com/css/)
* [Django Docs](http://docs.djangoproject.com/en/2.0)
| 13.060606 | 53 | 0.593968 | kor_Hang | 0.999573 |
8a703bc03d43a8e3ece2ee912b281b6f96402173 | 3,373 | md | Markdown | vendor/phpunit/phpunit/ChangeLog-6.5.md | Faizanq/Laravel_API_Full | 6f07549f8cba3151bb4c95efd0d0266600558324 | [
"MIT"
] | 272 | 2015-05-09T16:20:33.000Z | 2022-03-13T08:58:07.000Z | vendor/phpunit/phpunit/ChangeLog-6.5.md | Andres1496/lavanderia | 778de6877339b8517f84f503ef79560bbc50eeaf | [
"MIT"
] | 18 | 2015-08-26T01:59:36.000Z | 2019-11-12T06:31:46.000Z | vendor/phpunit/phpunit/ChangeLog-6.5.md | Andres1496/lavanderia | 778de6877339b8517f84f503ef79560bbc50eeaf | [
"MIT"
] | 156 | 2015-05-03T15:00:43.000Z | 2021-04-13T02:00:43.000Z | # Changes in PHPUnit 6.5
All notable changes of the PHPUnit 6.5 release series are documented in this file using the [Keep a CHANGELOG](http://keepachangelog.com/) principles.
## [6.5.8] - 2018-04-10
### Fixed
* Fixed [#2830](https://github.com/sebastianbergmann/phpunit/issues/2830): `@runClassInSeparateProcess` does not work for tests that use `@dataProvider`
## [6.5.7] - 2018-02-26
### Fixed
* Fixed [#2974](https://github.com/sebastianbergmann/phpunit/issues/2974): JUnit XML logfile contains invalid characters when test output contains binary data
## [6.5.6] - 2018-02-01
### Fixed
* Fixed [#2236](https://github.com/sebastianbergmann/phpunit/issues/2236): Exceptions in `tearDown()` do not affect `getStatus()`
* Fixed [#2950](https://github.com/sebastianbergmann/phpunit/issues/2950): Class extending `PHPUnit\Framework\TestSuite` does not extend `PHPUnit\FrameworkTestCase`
* Fixed [#2972](https://github.com/sebastianbergmann/phpunit/issues/2972): PHPUnit crashes when test suite contains both `.phpt` files and unconventionally named tests
## [6.5.5] - 2017-12-17
### Fixed
* Fixed [#2922](https://github.com/sebastianbergmann/phpunit/issues/2922): Test class is not discovered when there is a test class with `@group` and provider throwing exception in it, tests are run with `--exclude-group` for that group, there is another class called later (after the class from above), and the name of that another class does not match its filename
## [6.5.4] - 2017-12-10
### Changed
* Require version 5.0.5 of `phpunit/phpunit-mock-objects` for [phpunit-mock-objects#394](https://github.com/sebastianbergmann/phpunit-mock-objects/issues/394)
## [6.5.3] - 2017-12-06
### Fixed
* Fixed an issue with PHPT tests when `forceCoversAnnotation="true"` is configured
## [6.5.2] - 2017-12-02
### Changed
* Require version 5.0.4 of `phpunit/phpunit-mock-objects` for [phpunit-mock-objects#388](https://github.com/sebastianbergmann/phpunit-mock-objects/issues/388)
## [6.5.1] - 2017-12-01
* Fixed [#2886](https://github.com/sebastianbergmann/phpunit/pull/2886): Forced environment variables do not affect `getenv()`
## [6.5.0] - 2017-12-01
### Added
* Implemented [#2286](https://github.com/sebastianbergmann/phpunit/issues/2286): Optional `$exit` parameter for `PHPUnit\TextUI\TestRunner::run()`
* Implemented [#2496](https://github.com/sebastianbergmann/phpunit/issues/2496): Allow shallow copy of dependencies
### Fixed
* Fixed [#2654](https://github.com/sebastianbergmann/phpunit/issues/2654): Problems with `assertJsonStringEqualsJsonString()`
* Fixed [#2810](https://github.com/sebastianbergmann/phpunit/pull/2810): Code Coverage for PHPT tests does not work
[6.5.8]: https://github.com/sebastianbergmann/phpunit/compare/6.5.7...6.5.8
[6.5.7]: https://github.com/sebastianbergmann/phpunit/compare/6.5.6...6.5.7
[6.5.6]: https://github.com/sebastianbergmann/phpunit/compare/6.5.5...6.5.6
[6.5.5]: https://github.com/sebastianbergmann/phpunit/compare/6.5.4...6.5.5
[6.5.4]: https://github.com/sebastianbergmann/phpunit/compare/6.5.3...6.5.4
[6.5.3]: https://github.com/sebastianbergmann/phpunit/compare/6.5.2...6.5.3
[6.5.2]: https://github.com/sebastianbergmann/phpunit/compare/6.5.1...6.5.2
[6.5.1]: https://github.com/sebastianbergmann/phpunit/compare/6.5.0...6.5.1
[6.5.0]: https://github.com/sebastianbergmann/phpunit/compare/6.4...6.5.0
| 44.973333 | 365 | 0.735843 | eng_Latn | 0.399568 |
8a704c6661e582d1691e50cc472f3b1b0724cbc2 | 1,538 | md | Markdown | scripting-docs/winscript/reference/iscriptentry-setitemname.md | mavasani/visualstudio-docs | 4aa6fed75c395bb654dc884441ebb2d9b88bfd30 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-08-19T19:51:53.000Z | 2021-03-17T18:30:52.000Z | scripting-docs/winscript/reference/iscriptentry-setitemname.md | mavasani/visualstudio-docs | 4aa6fed75c395bb654dc884441ebb2d9b88bfd30 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-04-17T23:46:44.000Z | 2019-04-18T00:09:37.000Z | scripting-docs/winscript/reference/iscriptentry-setitemname.md | mavasani/visualstudio-docs | 4aa6fed75c395bb654dc884441ebb2d9b88bfd30 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-01T12:08:27.000Z | 2022-03-01T12:08:27.000Z | ---
title: "IScriptEntry::SetItemName | Microsoft Docs"
ms.custom: ""
ms.date: "01/18/2017"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "reference"
apiname:
- "IScriptEntry.SetItemName"
apilocation:
- "scrobj.dll"
helpviewer_keywords:
- "IScriptEntry::SetItemName"
ms.assetid: 9551a7ec-38f8-466a-9722-09367763f380
caps.latest.revision: 11
author: "mikejo5000"
ms.author: "mikejo"
manager: "ghogen"
---
# IScriptEntry::SetItemName
Sets the item name that identifies an `IScriptEntry` object.
## Syntax
```cpp
HRESULT SetItemName(
LPCOLESTR psz
);
```
#### Parameters
`psz`
[in] The address of a buffer that contains the item name. The item name is used by the host to identify the entry.
## Return Value
An `HRESULT`. Possible values include, but are not limited to, those in the following table.
|Value|Description|
|-----------|-----------------|
|`S_OK`|The method succeeded.|
|`E_FAIL`|The method did not succeed.|
## Remarks
For `IScriptEntry` objects, this method returns `S_OK`.
For `IScriptScriptlet` objects (which derive from `IScriptEntry`), this method returns `E_FAIL`. For `IScriptScriptlet` objects, the item name is set by [IActiveScriptAuthor::AddScriptlet](../../winscript/reference/iactivescriptauthor-addscriptlet.md) and cannot be changed.
## See Also
[IScriptEntry Interface](../../winscript/reference/iscriptentry-interface.md)
[IScriptEntry::GetItemName](../../winscript/reference/iscriptentry-getitemname.md) | 30.156863 | 277 | 0.695709 | eng_Latn | 0.410339 |
8a7055cfb42741788aa16269ca8238678408d4e1 | 338 | md | Markdown | content/docs/tools-ui/ux-patterns/error/components/_index.md | LB-KacperBieniek/o3de.org | 9dedfce2509d1b9ecfaae6b8b6c212598d2ea080 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | content/docs/tools-ui/ux-patterns/error/components/_index.md | LB-KacperBieniek/o3de.org | 9dedfce2509d1b9ecfaae6b8b6c212598d2ea080 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | content/docs/tools-ui/ux-patterns/error/components/_index.md | LB-KacperBieniek/o3de.org | 9dedfce2509d1b9ecfaae6b8b6c212598d2ea080 | [
"Apache-2.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
linktitle: Types of UI Messages
title: Types of UI Messages in BlueJay Design System
description: Learn how to craft various types of UI messages using the BlueJay Design System in Open 3D Engine (O3DE), including console loga, dialogs, inline notifications, log tables, text input validations, and toasts.
weight: 100
toc: true
---
| 42.25 | 222 | 0.781065 | eng_Latn | 0.955429 |
8a70a0fce66bb86f7f5d8aa450e92d7b5c36b57a | 11,476 | md | Markdown | src/fr/2018-02/10/teacher-comments.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/fr/2018-02/10/teacher-comments.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/fr/2018-02/10/teacher-comments.md | OsArts/Bible-study | cfcefde42e21795e217d192a8b7a703ebb7a6c01 | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Commentaires Moniteurs
date: 08/06/2018
---
### La leçon en bref
**Texte clé:** Apocalypse 13:1-18
#### Objectifs:
**Savoir:** Identifier les puissances historiques représentées par les deux bêtes et reconnaitre les évènements qui leur sont associés.
**Ressentir:** Évaluer la gravité de l’enjeu et contrôler ses sentiments dans l’expérience de l’adoration.
**Agir:** Documenter les preuves à l’appui de l’accomplissement des prophéties et trouver davantage de raisons de faire confiance à Dieu qui contrôle l’histoire.
#### Plan de l’étude:
**I. Savoir: Babylone et les États-Unis**
A. Quels indices montrent que la bête de la mer est l’église catholique romaine?
B. Quels indices donnent à penser que la bête de la terre représente les États-Unis d’Amérique?
**II. Ressentir: Contrôle émotionnel**
A. Pourquoi avoir un bon sentiment n’est pas une indication fiable de la présence de Dieu?
B. Comment pouvez-vous être surs que vos sentiments d’adoration sont en harmonie avec le vrai Dieu?
C. Pourquoi aimez-vous les gens malgré le fait qu’ils peuvent appartenir au camp de la bête?
**III. Agir: être connecté à Dieu**
A. Quelle est la tentation la plus courante de la fausse adoration?
B. Comment peut-on sortir de Babylone?
C. Pourquoi sortir de Babylone ne suffit-il pas pour éviter son influence?
#### Résumé:
L’ambition de Babylone est d’être adorée par le monde entier.
### Cycle d’apprentissage
#### ÉTAPE 1—Motiver
**Pleins feux sur l’Écriture:** Apocalypse 13:10
**Concept clé de croissance spirituelle:** La vie spirituelle n’est pas égoïste; elle implique plutôt une expérience d’adoration centrée sur Dieu et une vie fondée sur la confiance en Dieu. La véritable adoration n’est pas seulement la compagnie et le fait de passer de bons moments ensemble. L’appartenance au camp de véritables croyants ne suffit non plus à constituer la véritable adoration. Plutôt, une véritable adoration doit conduire à une dévotion sincère à Dieu.
**Coin du moniteur:** L’identification des puissances derrière les deux bêtes d’Apocalypse 13 devrait nous aider à situer la période prophétique dans laquelle nous vivons et nous instruire à mener notre vie en conséquence. Examinez les preuves historiques qui soutiennent l’identification des bêtes. Autant que possible, sélectionnez les sources bien connues, même des livres historiques, afin d’assurer l’objectivité et la crédibilité. L’objectif principal de cette leçon est de nous inspirer à nous engager complètement et à adorer le vrai Dieu.
**Discussion d’ouverture:** Il n’y avait jamais eu autant de religions et de confessions qui prétendent être la véritable église de Dieu. Discutez de comment le réflexe de cette prolifération des sectes est de se réfugier dans l’église traditionnelle dans laquelle nous avons grandi à cause de sa légitimité historique.
**Discussion:**
1. Comment peut-on expliquer cette prolifération des sectes?
2. Examinez plus loin l’erreur de se réfugier dans les églises traditionnelles, simplement parce qu’elles ont une légitimité historique. Comment pouvons-nous dénoncer cette illusion sans tomber dans le piège de l’orgueil et de l’autonomie qui caractérise l’église de Laodicée (Apo. 3:17)?
#### ÉTAPE 2—Explorer
**Coin du moniteur:** Il est important d’être aussi sensible que possible dans votre présentation du matériel de cette leçon. Tout d’abord, dévoilez l’identité des deux bêtes à la lumière des évènements contemporains. En second lieu, identifiez le principal enjeu et discutez pourquoi le fait de dénoncer l’erreur de retomber dans une église traditionnelle est important pour notre vie spirituelle. Soyez créatifs et pertinents dans votre présentation. Renforcez le fondement de votre démarche en utilisant les meilleurs éléments de preuve et d’arguments pour présenter cette leçon.
##### Commentaire biblique
**I. Babylone et ses alliés** *(Examinez Apocalypse 13:12 avec la classe).*
Les caractéristiques de la bête de la mer évoquent les quatre animaux de Daniel 7 – les trois premiers étant le léopard, l’ours et le lion (Apo. 13:2; comparez à Dan. 7:2, 3). Mais l’accent est mis particulièrement sur la quatrième bête (Apocalypse 13:1, comparez à Dan. 7:7). L’élément caractéristique de ce quatrième animal qui retient particulièrement l’attention de Jean est la petite corne. Tout comme la petite corne, la bête de la mer usurpe le pouvoir de Dieu, et elle veut être adorée. L’expression « qui est semblable à la bête », prononcée par ses adorateurs (Apo. 13:4, LSG) est calquée sur l’expression traditionnelle qui caractérise l’adoration de Dieu dans l’ancien Israël: « qui est semblable à Toi, ô Éternel? » (Exode 15:11; Ps. 35:10). En outre, tout comme la petite corne, cette bête persécute le peuple de Dieu pendant la même durée de temps, 42 mois, ce qui correspond au temps, des temps et la moitié d’un temps de la petite corne (Apo. 13:5, Dan. 7:25), à partir de 538 apr. JC et se terminant en 1798 apr. JC. La bête de la mer représente alors la même puissance que la petite corne: autrement dit, l’église catholique romaine en tant qu’institution.
La vision de l’Apocalypse ajoute une marque d’identification supplémentaire à notre compréhension de la petite corne: la bête de la mer (petite corne) sera blessée et perdra son prestige pendant un certain temps, après quoi elle récupèrera et recevra des éloges à nouveau (Apo. 13:8). La plaie se réfère à la pression de la révolution française, et plus particulièrement, au coup de Napoléon contre l’église romaine lorsqu’il a capturé et emprisonné le pape en 1798. La guérison de la plaie se réfère à la récupération de l’église, à partir du XIXe siècle lorsque, entre autres choses, le dogme de l’infaillibilité du pape a été prononcé (1870). La popularité et l’influence politique de la papauté n’ont jamais été plus grandes dans les temps modernes, qu’elles le sont aujourd’hui.
**Considérez ceci:** Selon Apocalypse, quelles sont les caractéristiques qui font de la quatrième bête une puissance persécutrice?
**II. L’adoration est en jeu** *(Examinez Apocalypse 13:16, 17, avec la classe.)*
Après la vision de la bête de la mer, Jean voit une bête s’élevant de la terre. Cette bête de la terre supportera la bête de la mer et encouragera même les gens à l’adorer (Apo. 13:12), tout comme le dragon avait déjà encouragé l’adoration de la bête (Apo. 13:4).
Maintenant, avec la venue de la bête de la terre, cette revendication de la bête de la mer à être adorée est réaffirmée. La bête de la terre fait tout en son pouvoir politique pour favoriser l’adoration de la bête de la mer. Le langage de la vision de Jean rappelle l’histoire de Daniel 3 où Nabuchodonosor a érigé une statue qui était la réplique de celle de son rêve, dans Daniel 2, et puis il a ordonné à tous les peuples d’adorer cette image. Ceux qui refusaient étaient tués (Dan. 3:4, 7). De même, la bête de la terre « fît que tous ceux qui n’adoreraient pas l’image de la bête fussent tués » (Apo. 13:15, LSG).
Le texte biblique d’Apocalypse spécifie comment cette adoration de la bête de la mer se manifestera: l’adorateur de la bête reçoit la marque sur la main et sur le front (Apo. 13:16). Pour le juif fidèle, ce langage évoque la vieille coutume (Deut. 6:8) de lier les commandements sur la main et sur le front pour symboliser sa soumission totale aux commandements de Dieu (voir Prov. 3:3, 6:21, 7:3), impliquant les deux actions (la main) et la pensée (le front). Le même symbole apparait dans Apocalypse 14:9, où il était associé à la foi en la création, ce qui suggère une indication plus précise du sabbat (voir la leçon 6). Un certain nombre d’indices donnent à penser que la bête de la terre se réfère aux États-Unis d’Amérique. Cette prophétie n’a pas encore été complètement accomplie. Les indices suivants aideront à confirmer l’identité de la bête de la terre:
1. Cette puissance est différente de la bête de la mer: elle n’est pas religieuse; elle n’est pas adorée (Apo. 13:12, 15). C’est une puissance politique; elle peut tuer (Apo. 13:15) et fonctionne comme une puissance économique; elle détermine qui peut acheter ou vendre (Apo. 13:17).
2. Cette puissance entre en importance après la bête de la mer, et elle commence à agir immédiatement après que la bête de la mer ait reçu sa blessure (Apo. 13:12); par conséquent, avant la fin du XVIIIe siècle.
3. Cette puissance a un caractère rassurant. Elle ressemble à l’agneau (Apo. 13:11) qui est le symbole de Jésus-Christ dans sa vulnérabilité. Pourtant, elle parle comme un dragon; elle a un immense pouvoir. En outre, il s’agit de la « terre » – une partie peu peuplée de la terre, contrairement à la bête de la mer (voir Apo.17:15).
4. Cette puissance exerce une grande influence politique et culturelle sur le monde; c’est une superpuissance.
Le prophète biblique n’accuse pas seulement ces puissances maléfiques. L’intention spirituelle derrière la révélation des mouvements de l’histoire n’est pas de jouer le juge et de pointer du doigt contre les personnes.
Au contraire, l’intention est de nous pousser à sortir de Babylone (Apo. 18:2) et de fortifier notre foi et notre espérance (Apo. 13:10). C’est pour instaurer la confiance en la parole de Dieu et en Sa maitrise de l’histoire et de nous exhorter à adorer le seul vrai Dieu.
**Discussion:** Quelles caractéristiques de la bête de la terre répondent aux caractéristiques des États-Unis d’Amérique? Quels sont les évènements contemporains pointant dans la direction des États-Unis d’Amérique dans l’accomplissement de son rôle prophétique comme indiqué dans l’Apocalypse? Qu’est-ce qui fait du sabbat le test idéal de l’adoration? Que signifie sortir de Babylone? Quel est l’effet de l’accomplissement des prophéties sur votre vie spirituelle? Comment l’association paradoxale de l’agneau et du dragon caractérise-t-elle les États-Unis dans la prophétie? Comment cette association paradoxale rappelle-t-elle la petite corne avec des caractéristiques humaines?
#### ÉTAPE 3—Appliquer
**Coin du moniteur:** L’internet est plein de leçons sur la prophétie. Paradoxalement, les gens ne croient pas en Dieu parce qu’ils pensent que c’est une foi naïve; mais ils s’enfoncent dans les horoscopes et consultent avidement les diseurs de bonne aventure. Pourquoi?
**Application:**
- Comment pouvons-nous nous protéger contre les interprétations très éloignées du livre de l’Apocalypse?
- Pourquoi avons-nous tant d’interprétations différentes et même contradictoires du livre de l’Apocalypse de nos jours?
#### ÉTAPE 4—Créer
**Coin du moniteur:** Discutez du document suivant rapportant le voyage du pape François aux États-Unis:
« Il est venu comme un berger et est allé partout paitre son troupeau, avec le contact humain qui a captivé les sceptiques … Nous avons vu des éléments de ce spectacle avant: Paul VI fut le premier pape à se rendre aux États-Unis, en 1965, quand Vatican II venait juste de commencer... Jean Paul II a fait sept visites aux États-Unis pendant son règne de 27 ans… Mais rien de cela n’a eu lieu à l’ère de l’Instagram, lorsque chacun des millions qui sont venus le voir pouvaient partager l’expérience avec des millions d’autres… Il est le premier pape à faire un lieu de rencontre sur Google et le premier à réunir plus de 20 millions d’adeptes et de fans sur Twitter » – Time, October 5, 2015, pp. 36, 40).
**Activités:** Cherchez des documents récents des magazines populaires qui parlent de la prophétie d’Apocalypse 13 | 100.666667 | 1,175 | 0.781806 | fra_Latn | 0.993111 |
8a70a92c35dc8b71757924f17bb0a8fd10c6d260 | 32,505 | md | Markdown | source/documentation/0.14.0/reference/configuration.md | aurora-scheduler/website-tools | 7eb86e4e2a3be95c340593b4b6a7358189c902ab | [
"Apache-2.0"
] | 3 | 2017-09-04T00:59:54.000Z | 2021-11-10T19:10:47.000Z | source/documentation/0.14.0/reference/configuration.md | apache/aurora-website | dbcc57376595e62dadbe730fa44eb93cbe7dc78c | [
"Apache-2.0"
] | 1 | 2021-02-24T03:39:40.000Z | 2021-02-24T03:39:40.000Z | source/documentation/0.14.0/reference/configuration.md | apache/aurora-website | dbcc57376595e62dadbe730fa44eb93cbe7dc78c | [
"Apache-2.0"
] | 9 | 2015-05-08T16:11:16.000Z | 2021-11-10T19:10:37.000Z | Aurora Configuration Reference
==============================
Don't know where to start? The Aurora configuration schema is very
powerful, and configurations can become quite complex for advanced use
cases.
For examples of simple configurations to get something up and running
quickly, check out the [Tutorial](../../getting-started/tutorial/). When you feel comfortable with the basics, move
on to the [Configuration Tutorial](../configuration-tutorial/) for more in-depth coverage of
configuration design.
- [Process Schema](#process-schema)
- [Process Objects](#process-objects)
- [Task Schema](#task-schema)
- [Task Object](#task-object)
- [Constraint Object](#constraint-object)
- [Resource Object](#resource-object)
- [Job Schema](#job-schema)
- [Job Objects](#job-objects)
- [UpdateConfig Objects](#updateconfig-objects)
- [HealthCheckConfig Objects](#healthcheckconfig-objects)
- [Announcer Objects](#announcer-objects)
- [Container Objects](#container)
- [LifecycleConfig Objects](#lifecycleconfig-objects)
- [Specifying Scheduling Constraints](#specifying-scheduling-constraints)
- [Template Namespaces](#template-namespaces)
- [mesos Namespace](#mesos-namespace)
- [thermos Namespace](#thermos-namespace)
Process Schema
==============
Process objects consist of required `name` and `cmdline` attributes. You can customize Process
behavior with its optional attributes. Remember, Processes are handled by Thermos.
### Process Objects
**Attribute Name** | **Type** | **Description**
------------------- | :---------: | ---------------------------------
**name** | String | Process name (Required)
**cmdline** | String | Command line (Required)
**max_failures** | Integer | Maximum process failures (Default: 1)
**daemon** | Boolean | When True, this is a daemon process. (Default: False)
**ephemeral** | Boolean | When True, this is an ephemeral process. (Default: False)
**min_duration** | Integer | Minimum duration between process restarts in seconds. (Default: 15)
**final** | Boolean | When True, this process is a finalizing one that should run last. (Default: False)
**logger** | Logger | Struct defining the log behavior for the process. (Default: Empty)
#### name
The name is any valid UNIX filename string (specifically no
slashes, NULLs or leading periods). Within a Task object, each Process name
must be unique.
#### cmdline
The command line run by the process. The command line is invoked in a bash
subshell, so can involve fully-blown bash scripts. However, nothing is
supplied for command-line arguments so `$*` is unspecified.
#### max_failures
The maximum number of failures (non-zero exit statuses) this process can
have before being marked permanently failed and not retried. If a
process permanently fails, Thermos looks at the failure limit of the task
containing the process (usually 1) to determine if the task has
failed as well.
Setting `max_failures` to 0 makes the process retry
indefinitely until it achieves a successful (zero) exit status.
It retries at most once every `min_duration` seconds to prevent
an effective denial of service attack on the coordinating Thermos scheduler.
#### daemon
By default, Thermos processes are non-daemon. If `daemon` is set to True, a
successful (zero) exit status does not prevent future process runs.
Instead, the process reinvokes after `min_duration` seconds.
However, the maximum failure limit still applies. A combination of
`daemon=True` and `max_failures=0` causes a process to retry
indefinitely regardless of exit status. This should be avoided
for very short-lived processes because of the accumulation of
checkpointed state for each process run. When running in Mesos
specifically, `max_failures` is capped at 100.
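For instance, a sketch of a hypothetical monitoring sidecar that is restarted whenever it exits, but at most once per minute:
    monitor = Process(
      name = 'monitor',
      cmdline = './monitor.sh',
      daemon = True,
      max_failures = 0,
      min_duration = 60)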
#### ephemeral
By default, Thermos processes are non-ephemeral. If `ephemeral` is set to
True, the process' status is not used to determine if its containing task
has completed. For example, consider a task with a non-ephemeral
webserver process and an ephemeral logsaver process
that periodically checkpoints its log files to a centralized data store.
The task is considered finished once the webserver process has
completed, regardless of the logsaver's current status.
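A sketch of such a logsaver (the checkpointing script is hypothetical):
    logsaver = Process(
      name = 'logsaver',
      cmdline = './checkpoint_logs.sh',
      daemon = True,
      ephemeral = True)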
#### min_duration
Processes may succeed or fail multiple times during a single task's
duration. Each of these is called a *process run*. `min_duration` is
the minimum number of seconds the scheduler waits before running the
same process.
#### final
Processes can be grouped into two classes: ordinary processes and
finalizing processes. By default, Thermos processes are ordinary. They
run as long as the task is considered healthy (i.e., no failure
limits have been reached.) But once all regular Thermos processes
finish or the task reaches a certain failure threshold, it
moves into a "finalization" stage and runs all finalizing
processes. These are typically processes necessary for cleaning up the
task, such as log checkpointers, or perhaps e-mail notifications that
the task completed.
Finalizing processes may not depend upon ordinary processes or
vice-versa, however finalizing processes may depend upon other
finalizing processes and otherwise run as a typical process
schedule.
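For example, a hypothetical cleanup step that runs only during finalization:
    cleanup = Process(
      name = 'cleanup',
      cmdline = 'rm -rf /tmp/scratch',
      final = True)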
#### logger
The default behavior of Thermos is to store stderr/stdout logs in files which grow unbounded.
In the event that you have large log volume, you may want to configure Thermos to automatically rotate logs
after they grow to a certain size, which can prevent your job from using more than its allocated
disk space.
A Logger union consists of a destination enum, a mode enum and a rotation policy.
It's to set where the process logs should be sent using `destination`. Default
option is `file`. Its also possible to specify `console` to get logs output
to stdout/stderr, `none` to suppress any logs output or `both` to send logs to files and
console output. In case of using `none` or `console` rotation attributes are ignored.
Rotation policies only apply to loggers whose mode is `rotate`. The acceptable values
for the LoggerMode enum are `standard` and `rotate`. The rotation policy applies to both
stderr and stdout.
By default, all processes use the `standard` LoggerMode.
**Attribute Name** | **Type** | **Description**
------------------- | :---------------: | ---------------------------------
**destination** | LoggerDestination | Destination of logs. (Default: `file`)
**mode** | LoggerMode | Mode of the logger. (Default: `standard`)
**rotate** | RotatePolicy | An optional rotation policy.
A RotatePolicy describes log rotation behavior for when `mode` is set to `rotate`. It is ignored
otherwise.
**Attribute Name** | **Type** | **Description**
------------------- | :----------: | ---------------------------------
**log_size** | Integer | Maximum size (in bytes) of an individual log file. (Default: 100 MiB)
**backups** | Integer | The maximum number of backups to retain. (Default: 5)
An example process configuration is as follows:
process = Process(
name='process',
logger=Logger(
destination=LoggerDestination('both'),
mode=LoggerMode('rotate'),
rotate=RotatePolicy(log_size=5*MB, backups=5)
)
)
Task Schema
===========
Tasks fundamentally consist of a `name` and a list of Process objects stored as the
value of the `processes` attribute. Processes can be further constrained with
`constraints`. By default, `name`'s value inherits from the first Process in the
`processes` list, so for simple `Task` objects with one Process, `name`
can be omitted. In Mesos, `resources` is also required.
### Task Object
**param** | **type** | **description**
--------- | :---------: | ---------------
```name``` | String | Process name (Required) (Default: ```processes0.name```)
```processes``` | List of ```Process``` objects | List of ```Process``` objects bound to this task. (Required)
```constraints``` | List of ```Constraint``` objects | List of ```Constraint``` objects constraining processes.
```resources``` | ```Resource``` object | Resource footprint. (Required)
```max_failures``` | Integer | Maximum process failures before being considered failed (Default: 1)
```max_concurrency``` | Integer | Maximum number of concurrent processes (Default: 0, unlimited concurrency.)
```finalization_wait``` | Integer | Amount of time allocated for finalizing processes, in seconds. (Default: 30)
#### name
`name` is a string denoting the name of this task. It defaults to the name of the first Process in
the list of Processes associated with the `processes` attribute.
#### processes
`processes` is an unordered list of `Process` objects. To constrain the order
in which they run, use `constraints`.
##### constraints
A list of `Constraint` objects. Currently it supports only one type,
the `order` constraint. `order` is a list of process names
that should run in the order given. For example,
    process = Process(cmdline = "echo hello {{name}}")
    task = Task(name = "echoes",
                processes = [process(name = "jim"), process(name = "bob")],
                constraints = [Constraint(order = ["jim", "bob"])])
Constraints can be supplied ad-hoc and in duplicate. Not all
Processes need be constrained, however Tasks with cycles are
rejected by the Thermos scheduler.
Use the `order` function as shorthand to generate `Constraint` lists.
The following:
order(process1, process2)
is shorthand for
[Constraint(order = [process1.name(), process2.name()])]
The `order` function accepts Process name strings `('foo', 'bar')` or the processes
themselves, e.g. `foo=Process(name='foo', ...)`, `bar=Process(name='bar', ...)`,
`constraints=order(foo, bar)`.
#### resources
Takes a `Resource` object, which specifies the amounts of CPU, memory, and disk space resources
to allocate to the Task.
#### max_failures
`max_failures` is the number of failed processes needed for the `Task` to be
marked as failed.
For example, assume a Task has two Processes and a `max_failures` value of `2`:
template = Process(max_failures=10)
task = Task(
name = "fail",
processes = [
template(name = "failing", cmdline = "exit 1"),
template(name = "succeeding", cmdline = "exit 0")
],
max_failures=2)
The `failing` Process could fail 10 times before being marked as permanently
failed, and the `succeeding` Process could succeed on the first run. However,
the task would succeed despite only allowing for two failed processes. To be more
specific, there would be 10 failed process runs yet 1 failed process. Both processes
would have to fail for the Task to fail.
#### max_concurrency
For Tasks with a number of expensive but otherwise independent
processes, you may want to limit the amount of concurrency
the Thermos scheduler provides rather than artificially constraining
it via `order` constraints. For example, a test framework may
generate a task with 100 test-run processes but may want to run it on
a machine with only 4 cores. You can limit the amount of parallelism to
4 by setting `max_concurrency=4` in your task configuration.
For example, the following task spawns 180 Processes ("mappers")
to compute individual elements of a 180 degree sine table, all dependent
upon one final Process ("reducer") to tabulate the results:
    def make_mapper(id):
      return Process(
        name = "mapper%03d" % id,
        cmdline = "echo 'scale=50;s(%d*4*a(1)/180)' | bc -l > temp.sine_table.%03d" % (id, id))

    def make_reducer():
      return Process(name = "reducer",
                     cmdline = "cat temp.* | nl > sine_table.txt && rm -f temp.*")

    processes = map(make_mapper, range(180))
    task = Task(
      name = "mapreduce",
      processes = processes + [make_reducer()],
      constraints = [Constraint(order = [mapper.name(), 'reducer']) for mapper in processes],
      max_concurrency = 8)
#### finalization_wait
Process execution is organized into three active stages: `ACTIVE`,
`CLEANING`, and `FINALIZING`. The `ACTIVE` stage is when ordinary processes run.
This stage lasts as long as Processes are running and the Task is healthy.
The moment either all Processes have finished successfully or the Task has reached a
maximum Process failure limit, it goes into the `CLEANING` stage and sends
SIGTERMs to all currently running Processes and their process trees.
Once all Processes have terminated, the Task goes into `FINALIZING` stage
and invokes the schedule of all Processes with the "final" attribute set to True.
This whole process from the end of `ACTIVE` stage to the end of `FINALIZING`
must happen within `finalization_wait` seconds. If it does not
finish during that time, all remaining Processes are sent SIGKILLs
(or if they depend upon uncompleted Processes, are
never invoked.)
When running on Aurora, the `finalization_wait` is capped at 60 seconds.
### Constraint Object
Current constraint objects only support a single ordering constraint, `order`,
which specifies that its processes run sequentially in the order given. By
default, all processes run in parallel when bound to a `Task` without
ordering constraints.
param | type | description
----- | :----: | -----------
order | List of String | List of processes by name (String) that should be run serially.
### Resource Object
Specifies the amount of CPU, RAM, and disk resources the task needs. See the
[Resource Isolation document](../../features/resource-isolation/) for suggested values and to understand how
resources are allocated.
param | type | description
----- | :----: | -----------
```cpu``` | Float | Fractional number of cores required by the task.
```ram``` | Integer | Bytes of RAM required by the task.
```disk``` | Integer | Bytes of disk required by the task.
```gpu``` | Integer | Number of GPU cores required by the task.
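A sketch of a resource footprint is shown below. The constructor is assumed to be named `Resources` here (the exact name may vary between releases), and the `GB` unit helper follows the same convention as the `MB` helper used in the logger example above.

    resources = Resources(
      cpu = 2.0,     # fractional number of cores
      ram = 4*GB,    # bytes of RAM
      disk = 8*GB)   # bytes of disk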
Job Schema
==========
### Job Objects
*Note: Specifying a ```Container``` object as the value of the ```container``` property is
deprecated in favor of setting its value directly to the appropriate ```Docker``` or ```Mesos```
container type*
name | type | description
------ | :-------: | -------
```task``` | Task | The Task object to bind to this job. Required.
```name``` | String | Job name. (Default: inherited from the task attribute's name)
```role``` | String | Job role account. Required.
```cluster``` | String | Cluster in which this job is scheduled. Required.
```environment``` | String | Job environment, default ```devel```. Must be one of ```prod```, ```devel```, ```test``` or ```staging<number>```.
```contact``` | String | Best email address to reach the owner of the job. For production jobs, this is usually a team mailing list.
```instances```| Integer | Number of instances (sometimes referred to as replicas or shards) of the task to create. (Default: 1)
```cron_schedule``` | String | Cron schedule in cron format. May only be used with non-service jobs. See [Cron Jobs](../../features/cron-jobs/) for more information. Default: None (not a cron job.)
```cron_collision_policy``` | String | Policy to use when a cron job is triggered while a previous run is still active. `KILL_EXISTING`: kill the previous run and schedule the new run. `CANCEL_NEW`: let the previous run continue and cancel the new run. (Default: KILL_EXISTING)
```update_config``` | ```UpdateConfig``` object | Parameters for controlling the rate and policy of rolling updates.
```constraints``` | dict | Scheduling constraints for the tasks. See the section on the [constraint specification language](#specifying-scheduling-constraints)
```service``` | Boolean | If True, restart tasks regardless of success or failure. (Default: False)
```max_task_failures``` | Integer | Maximum number of failures after which the task is considered to have failed (Default: 1) Set to -1 to allow for infinite failures
```priority``` | Integer | Preemption priority to give the task (Default 0). Tasks with higher priorities may preempt tasks at lower priorities.
```production``` | Boolean | Whether or not this is a production task that may [preempt](../../features/multitenancy/#preemption) other tasks (Default: False). Production job role must have the appropriate [quota](../../features/multitenancy/#preemption).
```health_check_config``` | ```HealthCheckConfig``` object | Parameters for controlling a task's health checks. HTTP health check is only used if a health port was assigned with a command line wildcard.
```container``` | Choice of ```Container```, ```Docker``` or ```Mesos``` object | An optional container to run all processes inside of.
```lifecycle``` | ```LifecycleConfig``` object | An optional task lifecycle configuration that dictates commands to be executed on startup/teardown. HTTP lifecycle is enabled by default if the "health" port is requested. See [LifecycleConfig Objects](#lifecycleconfig-objects) for more information.
```tier``` | String | Task tier type. The default scheduler tier configuration allows for 3 tiers: `revocable`, `preemptible`, and `preferred`. The `revocable` tier requires the task to run with Mesos revocable resources. Setting the task's tier to `preemptible` allows for the possibility of that task being preempted by other tasks when cluster is running low on resources. The `preferred` tier prevents the task from using revocable resources and from being preempted. Since it is possible that a cluster is configured with a custom tier configuration, users should consult their cluster administrator to be informed of the tiers supported by the cluster. Attempts to schedule jobs with an unsupported tier will be rejected by the scheduler.
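To tie the pieces together, below is a hedged sketch of a Job definition; the role, cluster, and contact values are placeholders, and `task` stands for a Task object like the ones defined earlier.

    hello_world = Job(
      name = 'hello_world',
      role = 'www-data',                 # placeholder role account
      cluster = 'cluster1',              # placeholder cluster name
      environment = 'devel',
      contact = '[email protected]',     # placeholder contact address
      instances = 3,
      service = True,                    # restart tasks regardless of exit status
      task = task)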
### UpdateConfig Objects
Parameters for controlling the rate and policy of rolling updates.
| object | type | description
| ---------------------------- | :------: | ------------
| ```batch_size``` | Integer | Maximum number of shards to be updated in one iteration (Default: 1)
| ```watch_secs``` | Integer | Minimum number of seconds a shard must remain in ```RUNNING``` state before considered a success (Default: 45)
| ```max_per_shard_failures``` | Integer | Maximum number of restarts per shard during update. Increments total failure count when this limit is exceeded. (Default: 0)
| ```max_total_failures``` | Integer | Maximum number of shard failures to be tolerated in total during an update. Cannot be greater than or equal to the total number of tasks in a job. (Default: 0)
| ```rollback_on_failure``` | boolean | When False, prevents auto rollback of a failed update (Default: True)
| ```wait_for_batch_completion```| boolean | When True, all threads from a given batch will be blocked from picking up new instances until the entire batch is updated. This essentially simulates the legacy sequential updater algorithm. (Default: False)
| ```pulse_interval_secs``` | Integer | Indicates a [coordinated update](../../features/job-updates/#coordinated-job-updates). If no pulses are received within the provided interval the update will be blocked. Beta-updater only. Will fail on submission when used with client updater. (Default: None)
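For example, an update policy that rolls out one shard at a time and tolerates no failures might look like the following sketch.

    update_config = UpdateConfig(
      batch_size = 1,              # update one shard per iteration
      watch_secs = 45,             # shard must stay RUNNING this long to count as a success
      max_per_shard_failures = 0,
      max_total_failures = 0)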
### HealthCheckConfig Objects
Parameters for controlling a task's health checks via HTTP or a shell command.
| param | type | description
| ------- | :-------: | --------
| ```health_checker``` | HealthCheckerConfig | Configure what kind of health check to use.
| ```initial_interval_secs``` | Integer | Initial delay for performing a health check. (Default: 15)
| ```interval_secs``` | Integer | Interval on which to check the task's health. (Default: 10)
| ```max_consecutive_failures``` | Integer | Maximum number of consecutive failures that will be tolerated before considering a task unhealthy (Default: 0)
| ```timeout_secs``` | Integer | Health check timeout. (Default: 1)
### HealthCheckerConfig Objects
| param | type | description
| ------- | :-------: | --------
| ```http``` | HttpHealthChecker | Configure health check to use HTTP. (Default)
| ```shell``` | ShellHealthChecker | Configure health check via a shell command.
### HttpHealthChecker Objects
| param | type | description
| ------- | :-------: | --------
| ```endpoint``` | String | HTTP endpoint to check (Default: /health)
| ```expected_response``` | String | If not empty, fail the HTTP health check if the response differs. Case insensitive. (Default: ok)
| ```expected_response_code``` | Integer | If not zero, fail the HTTP health check if the response code differs. (Default: 0)
### ShellHealthChecker Objects
| param | type | description
| ------- | :-------: | --------
| ```shell_command``` | String | An alternative to HTTP health checking. Specifies a shell command that will be executed. Any non-zero exit status will be interpreted as a health check failure.
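A sketch combining these objects with a shell-based health check; the command itself is a placeholder.

    health_check_config = HealthCheckConfig(
      health_checker = HealthCheckerConfig(
        shell = ShellHealthChecker(
          shell_command = 'test -f /var/run/myapp.pid')),  # placeholder check
      initial_interval_secs = 15,
      interval_secs = 10,
      max_consecutive_failures = 3,
      timeout_secs = 1)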
### Announcer Objects
If the `announce` field in the Job configuration is set, each task will be
registered in the ServerSet `/aurora/role/environment/jobname` in the
zookeeper ensemble configured by the executor (which can optionally be overridden by specifying the
`zk_path` parameter). If no Announcer object is specified,
no announcement will take place. For more information about ServerSets, see the [Service Discovery](../../features/service-discovery/)
documentation.
By default, the hostname in the registered endpoints will be the `--hostname` parameter
that is passed to the mesos agent. To override the hostname value, the executor can be started
with `--announcer-hostname=<overridden_value>`. If you decide to use `--announcer-hostname` and if
the overridden value needs to change for every executor, then the executor has to be started inside a wrapper, see [Executor Wrapper](../../operations/configuration/#thermos-executor-wrapper).
For example, if you want the hostname in the endpoint to be an IP address instead of the hostname,
the `--hostname` parameter to the mesos agent can be set to the machine IP or the executor can
be started with `--announcer-hostname=<host_ip>` while wrapping the executor inside a script.
| object | type | description
| ------- | :-------: | --------
| ```primary_port``` | String | Which named port to register as the primary endpoint in the ServerSet (Default: `http`)
| ```portmap``` | dict | A mapping of additional endpoints to be announced in the ServerSet (Default: `{ 'aurora': '{{primary_port}}' }`)
| ```zk_path``` | String | Zookeeper serverset path override (executor must be started with the `--announcer-allow-custom-serverset-path` parameter)
#### Port aliasing with the Announcer `portmap`
The primary endpoint registered in the ServerSet is the one allocated to the port
specified by the `primary_port` in the `Announcer` object, by default
the `http` port. This port can be referenced from anywhere within a configuration
as `{{thermos.ports[http]}}`.
Without the port map, each named port would be allocated a unique port number.
The `portmap` allows two different named ports to be aliased together. The default
`portmap` aliases the `aurora` port (i.e. `{{thermos.ports[aurora]}}`) to
the `http` port. Even though the two ports can be referenced independently,
only one port is allocated by Mesos. Any port referenced in a `Process` object
but which is not in the portmap will be allocated dynamically by Mesos and announced as well.
It is possible to use the portmap to alias names to static port numbers, e.g.
`{'http': 80, 'https': 443, 'aurora': 'http'}`. In this case, referencing
`{{thermos.ports[aurora]}}` would look up `{{thermos.ports[http]}}` then
find a static port 80. No port would be requested of or allocated by Mesos.
Static ports should be used cautiously as Aurora does nothing to prevent two
tasks with the same static port allocations from being co-scheduled.
External constraints such as agent attributes should be used to enforce such
guarantees should they be needed.
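A sketch of an Announcer that uses the portmap aliasing described above:

    announce = Announcer(
      primary_port = 'http',
      # Alias the 'aurora' name to the 'http' port and pin 'https' to a static port.
      portmap = {'aurora': 'http', 'https': 443})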
### Container Objects
*Note: Both Docker and Mesos unified-container support are currently EXPERIMENTAL.*
*Note: In order to correctly execute processes inside a job, the Docker container must have python 2.7 installed.*
*Note: For private docker registry, mesos mandates the docker credential file to be named as `.dockercfg`, even though docker may create a credential file with a different name on various platforms. Also, the `.dockercfg` file needs to be copied into the sandbox using the `-thermos_executor_resources` flag, specified while starting Aurora.*
Describes the container the job's processes will run inside. If not using Docker or the Mesos
unified-container, the container can be omitted from your job config.
param | type | description
----- | :----: | -----------
```docker``` | Docker | A docker container to use.
```mesos``` | Mesos | A mesos container to use.
### Docker Object
param | type | description
----- | :----: | -----------
```image``` | String | The name of the docker image to execute. If the image does not exist locally it will be pulled with ```docker pull```.
```parameters``` | List(Parameter) | Additional parameters to pass to the docker containerizer.
### Docker Parameter Object
Docker CLI parameters. This needs to be enabled by the scheduler `allow_docker_parameters` option.
See [Docker Command Line Reference](https://docs.docker.com/reference/commandline/run/) for valid parameters.
param | type | description
----- | :----: | -----------
```name``` | String | The name of the docker parameter. E.g. volume
```value``` | String | The value of the parameter. E.g. /usr/local/bin:/usr/bin:rw
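A sketch of a Docker container specification with an additional parameter; the image name and volume path are placeholders, and the `Parameter` constructor name is assumed from the table above.

    container = Docker(
      image = 'python:2.7',   # placeholder image; must include Python 2.7
      parameters = [Parameter(name = 'volume', value = '/host/data:/data:ro')])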
### Mesos Object
param | type | description
----- | :----: | -----------
```image``` | Choice(AppcImage, DockerImage) | An optional filesystem image to use within this container.
### AppcImage
*Note: In order to correctly execute processes inside a job, the filesystem image must include python 2.7.*
Describes an AppC filesystem image.
param | type | description
----- | :----: | -----------
```name``` | String | The name of the appc image.
```image_id``` | String | The [image id](https://github.com/appc/spec/blob/master/spec/aci.md#image-id) of the appc image.
### DockerImage
*Note: In order to correctly execute processes inside a job, the filesystem image must include python 2.7.*
Describes a Docker filesystem image.
param | type | description
----- | :----: | -----------
```name``` | String | The name of the docker image.
```tag``` | String | The tag that identifies the docker image.
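A sketch of the equivalent Mesos unified-container configuration using a Docker filesystem image; the image name and tag are placeholders.

    container = Mesos(
      image = DockerImage(name = 'myorg/myapp', tag = 'latest'))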
### LifecycleConfig Objects
*Note: The only lifecycle configuration supported is the HTTP lifecycle via the HttpLifecycleConfig.*
param | type | description
----- | :----: | -----------
```http``` | HttpLifecycleConfig | Configure the lifecycle manager to send lifecycle commands to the task via HTTP.
### HttpLifecycleConfig Objects
param | type | description
----- | :----: | -----------
```port``` | String | The named port to send POST commands (Default: health)
```graceful_shutdown_endpoint``` | String | Endpoint to hit to indicate that a task should gracefully shutdown. (Default: /quitquitquit)
```shutdown_endpoint``` | String | Endpoint to hit to give a task its final warning before being killed. (Default: /abortabortabort)
#### graceful_shutdown_endpoint
If the Job is listening on the port as specified by the HttpLifecycleConfig
(default: `health`), an HTTP POST request will be sent over localhost to this
endpoint to request that the task gracefully shut itself down. This is a
courtesy call before the `shutdown_endpoint` is invoked a fixed amount of
time later.
#### shutdown_endpoint
If the Job is listening on the port as specified by the HttpLifecycleConfig
(default: `health`), an HTTP POST request will be sent over localhost to this
endpoint as a final warning before the task is shut down. If the task
does not shut down on its own after this, it will be forcefully killed.
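A sketch of a lifecycle configuration that spells out the documented defaults explicitly:

    lifecycle = LifecycleConfig(
      http = HttpLifecycleConfig(
        port = 'health',
        graceful_shutdown_endpoint = '/quitquitquit',
        shutdown_endpoint = '/abortabortabort'))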
Specifying Scheduling Constraints
=================================
In the `Job` object there is a map `constraints` from String to String
allowing the user to tailor the schedulability of tasks within the job.
The constraint map's key is the name of the attribute on which we
constrain Tasks within our Job. The value specifies how we constrain them.
There are two types of constraints: *limit constraints* and *value
constraints*.
| constraint | description
| ------------- | --------------
| Limit | A string that specifies a limit for a constraint. Starts with ```'limit:``` followed by an Integer and a closing single quote, such as ```'limit:1'```.
| Value | A string that specifies a value for a constraint. To include a list of values, separate the values using commas. To negate the values of a constraint, start with a ```!```.
Further details can be found in the [Scheduling Constraints](../../features/constraints/) feature
description.
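As a sketch, a constraint map on a Job might look like the following; the attribute names (`host`, `rack`) are examples of agent attributes and depend on how the cluster is configured.

    constraints = {
      'host': 'limit:1',   # at most one active task per host
      'rack': '!rack1',    # avoid agents whose rack attribute is rack1
    }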
Template Namespaces
===================
Currently, a few Pystachio namespaces have special semantics. Using them
in your configuration allows you to tailor application behavior
through environment introspection or interact in special ways with the
Aurora client or Aurora-provided services.
### mesos Namespace
The `mesos` namespace contains variables which relate to the `mesos` agent
which launched the task. The `instance` variable can be used
to distinguish between Task replicas.
| variable name | type | description
| --------------- | :--------: | -------------
| ```instance``` | Integer | The instance number of the created task. A job with 5 replicas has instance numbers 0, 1, 2, 3, and 4.
| ```hostname``` | String | The instance hostname that the task was launched on.
Please note, there is no uniqueness guarantee for `instance` in the presence of
network partitions. If that is required, it should be baked in at the application
level using a distributed coordination service such as Zookeeper.
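For example, a process can reference these variables directly in its command line (a sketch):

    announce_instance = Process(
      name = 'announce_instance',
      cmdline = 'echo "I am instance {{mesos.instance}} on {{mesos.hostname}}"')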
### thermos Namespace
The `thermos` namespace contains variables that work directly on the
Thermos platform in addition to Aurora. This namespace is fully
compatible with Tasks invoked via the `thermos` CLI.
| variable | type | description |
| :----------: | --------- | ------------ |
| ```ports``` | map of string to Integer | A map of names to port numbers |
| ```task_id``` | string | The task ID assigned to this task. |
The `thermos.ports` namespace is automatically populated by Aurora when
invoking tasks on Mesos. When running the `thermos` command directly,
these ports must be explicitly mapped with the `-P` option.
For example, if `{{thermos.ports[http]}}` is specified in a `Process`
configuration, it is automatically extracted and populated by
Aurora, but it must be specified explicitly with, for example, `thermos -P http:12345`
to map `http` to port 12345 when running via the CLI.
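A sketch of a process that binds a named port is shown below; the server command is a placeholder. When launched through Aurora the port is allocated automatically, while the `thermos` CLI requires an explicit mapping such as `-P http:12345`.

    http_server = Process(
      name = 'http_server',
      cmdline = 'python -m SimpleHTTPServer {{thermos.ports[http]}}')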
| 53.638614 | 746 | 0.68411 | eng_Latn | 0.993236 |
8a70f93a5f661f2e783a99bcbbe07dcbd949d315 | 6,048 | md | Markdown | src/pages/2020-04-06 post-one/index.md | megdadkins7/personal-blog | 82d8434cd5daa62caadb11ade92e6fd63c7030b2 | [
"MIT"
] | null | null | null | src/pages/2020-04-06 post-one/index.md | megdadkins7/personal-blog | 82d8434cd5daa62caadb11ade92e6fd63c7030b2 | [
"MIT"
] | null | null | null | src/pages/2020-04-06 post-one/index.md | megdadkins7/personal-blog | 82d8434cd5daa62caadb11ade92e6fd63c7030b2 | [
"MIT"
] | null | null | null | ---
path: "/5-resources"
date: "04/06/2020"
title: "My Top 5 Resources For Learning Code"
author: "Meghan Adkins"
---
You've taken a big step and decided you wanted to learn how to code. Even better, you've decided to commit to front-end development. Smile, celebrate, and do a little dance.
Now what?
Where do you even start?
You've looked up "learn to code" and there's endless ads, resources, and sites competing for your attention.
There really are plenty of resources to choose from. The best advice I can give is to take a look at a bunch of different sites, do some research, and test a few out to see what you personally like. However, because there are so many, sometimes it feels overwhelming to have to go through all of them.
So without further ado, here are some of *my* favorite resources that I've used to begin and continue my own learning.
1. <a href='https://www.freecodecamp.org/' target='_blank'>freeCodeCamp</a>
freeCodeCamp is where I started my journey. They made learning HTML and CSS a breeze. They split the screen into three parts: the lesson and task all the way to the left, the code editor to write answers in the middle, and to the right shows the result of the code written. After completing their lessons on basic html, basic css, visual design, accessibility, responsive web design principles, css flexbox, and css grid there are 5 responsive web design projects to complete. After this it's time to submit projects for a certification in web design. They also offer certifications in javascript algorithms and data structures, front end libraries, data visualization, APIs and microservices, and information security and quality assurance. Beyond that, there are coding challenges for interview preparation. And yes, it's all free.
2. <a href='https://www.codecademy.com/learn' target='_blank'>Codecademy</a>
While freeCodeCamp offers hours of free coding lessons that helped me get started, Codecademy is where I really began to understand javascript. Codecademy comes with a price tag, but (depending on how quickly you learn) it isn't too steep. The Pro version of Codecademy is $19.99 a month and there is a free trial as well. The Pro version has access to the full Web Development path. At the start, this path works through html and css (which I used as a refresher) and javascript. Codecademy helped flesh out javascript and provided context for each task. After javascript they have units on the command line and on git. These are essential tools for any developer. By unit 10 Codecademy introduces React (one of the most popular front end libraries available). The React unit focuses on piecemeal leassons and ties it altogether with a full project at the end. Codecademy also offers a skill path for React. This can be completed in conjunction with the Web Development Path and add to an understanding of the library and more projects for a portfolio. From here, Codecademy moves to back end development using Express and SQL. They wrap up the path with teaching Test-Driven Development, which is a process on how to approach code.
3. <a href='https://developer.mozilla.org/en-US/' target='_blank'>Mozilla Developer Network</a>
MDN is a consistently used network by many developers. Whereas freeCodeCamp and Codecademy give paths and lessons to follow, MDN is more of a database of web docs to search when a developer needs some reminding on how a given method, style, or other piece of code works. A common search on google would be "mdn reduce" or just "reduce" on the MDN site. The MDN page gives syntax, description, examples, and more. Personally, I've used it mostly for javascript especially when I forget how array methods work, but their network covers html, css, http, graphics, api/dom, browser extensions, and MathML as well.
4. <a href='https://css-tricks.com/' target='_blank'>CSS Tricks</a>
CSS Tricks has probably become one of my favorite resources in learning CSS. I've found their visuals on flexbox and grid invaluable because they actually *show* the results of code in graphic form. And even better for something like grid or flexbox they show every property and value, explain what they do, and show what they do. Like MDN there are no direct paths or lessons to complete and you can search the actual website for what you're looking for. Or tell google "css tricks flexbox". There are plenty of guides and videos to help with all styling needs. Their guides are my favorite and I almost always have one pulled up while I'm working on a project.
5. <a href='https://stackoverflow.com/' target='_blank'>Stack Overflow</a>
Stack Overflow is a little scary as a newbie. It's kind of like reddit for developers where you can post a question for the community and other developers will answer. Or you can answer someone else's question. However before you post a question I'd caution you to look and see if a similar one has been posted before. Stack Overflow is great for questions that web docs from MDN or guides from CSS Tricks doesn't address because they're too specific. For searching Stack Overflow some of the best advice I can give is to format your searches as "language/library your problem/question" (i.e. "javascript how to check if string contains substring"). I've found many of the questions I have are already answered and if not, there are similar scenarios. Do your best to pull from those before you ask a question. If you do ask a question provide context *and* the code you are asking about so other developers can see it. Stack Overflow is a community where you develop a reputation, unlock badges, create a profile, and can even receive job recommendations so make sure you put your best foot forward here.
Learning to code is exciting especially when there are so many are great resources to help you get started. You don't have to be overwhelmed by the sheer volume of options. Take a deep breath, take some time to look at some resources, and don't forget to think about your learning style. You'll be on your way to coding in no time! | 147.512195 | 1,233 | 0.786376 | eng_Latn | 0.999559 |
8a718a14f342f5fd242b15e6566a54f366e244fa | 37 | md | Markdown | README.md | JonathanHarada/about-jonathan | 02dd7932b489ac7404c4be5aa901bfe540ad9ddd | [
"Apache-2.0"
] | null | null | null | README.md | JonathanHarada/about-jonathan | 02dd7932b489ac7404c4be5aa901bfe540ad9ddd | [
"Apache-2.0"
] | null | null | null | README.md | JonathanHarada/about-jonathan | 02dd7932b489ac7404c4be5aa901bfe540ad9ddd | [
"Apache-2.0"
] | null | null | null | # about-jonathan
assignment about me
| 12.333333 | 19 | 0.810811 | eng_Latn | 0.992007 |
8a71db2a360eb076d833cbec32e25aeafe953399 | 8,520 | md | Markdown | docs/standard/collections/thread-safe/when-to-use-a-thread-safe-collection.md | rscprof/docs.ru-ru | 9c2a47b4b444efb88ed2c2d943b09721415d5ed0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/collections/thread-safe/when-to-use-a-thread-safe-collection.md | rscprof/docs.ru-ru | 9c2a47b4b444efb88ed2c2d943b09721415d5ed0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/collections/thread-safe/when-to-use-a-thread-safe-collection.md | rscprof/docs.ru-ru | 9c2a47b4b444efb88ed2c2d943b09721415d5ed0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Преимущества использования потокобезопасных коллекций
ms.date: 03/30/2017
ms.technology: dotnet-standard
helpviewer_keywords:
- thread-safe collections, when to upgrade
ms.assetid: a9babe97-e457-4ff3-b528-a1bc940d5320
author: mairaw
ms.author: mairaw
ms.openlocfilehash: b224e758eb5b0e07c76f055f22bfe827789f07ab
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 05/04/2018
ms.locfileid: "33574843"
---
# <a name="when-to-use-a-thread-safe-collection"></a>Преимущества использования потокобезопасных коллекций
В [!INCLUDE[net_v40_long](../../../../includes/net-v40-long-md.md)] представлено пять новых типов коллекций, специально разработанных для поддержки многопотоковых операций добавления и удаления. Для достижения потокобезопасности эти новые типы используют различные типы эффективных механизмов синхронизации с блокировкой и без блокировки. Синхронизация добавляет к операции издержки. Значения издержек зависят от используемого типа синхронизации, выполняемого типа операции и других факторов, например количества потоков, которые одновременно пытаются получить доступ к коллекции.
В некоторых сценариях издержки синхронизации незначительны и позволяют многопотоковым вариантам выполняться значительно быстрее и обеспечивают лучшую масштабируемость, чем в случае потоконебезопасного эквивалента при защите с помощью внешней блокировки. В других сценариях издержки могут вызвать ситуацию, когда потокобезопасный вариант выполняется и масштабируется примерно так же и даже более медленно, чем потоконебезопасная версия типа с внешней блокировкой.
В следующих подразделах приводятся общие рекомендации по использованию потокобезопасной коллекции и потоконебезопасного эквивалента, который содержит заданную пользователем блокировку для операций чтения и записи. Так как производительность может зависеть от множества факторов, рекомендации нехарактерны и необязательно являются допустимыми во всех обстоятельствах. Если производительность имеет важное значение, то лучшим способом для определения используемого типа коллекции является измерение производительности на основе обычной конфигурации компьютера и нагрузке. В данном документе используются следующие термины.
*Чистый сценарий "производитель — потребитель"*
Все заданные потоки либо добавляют элементы, либо удаляют их, но не то и другое одновременно.
*Смешанный сценарий "производитель — потребитель"*
Все заданные потоки как добавляют элементы, так и удаляют их.
*Ускорение*
Ускорение производительности алгоритма одного типа относительно другого типа в рамках одного сценария.
*Масштабируемость*
Увеличение в производительности, которое пропорционально числу ядер в компьютере. Масштабируемый алгоритм выполняется быстрее на компьютере, у которого восемь ядер, чем на компьютере, у которого два ядра.
## <a name="concurrentqueuet-vs-queuet"></a>ConcurrentQueue(T) и Queue(T)
В чистых сценариях "производитель-получатель", когда время обработки каждого элемента очень мало (несколько инструкций), класс <xref:System.Collections.Concurrent.ConcurrentQueue%601?displayProperty=nameWithType> может дать незначительный рост производительности по сравнению с классом <xref:System.Collections.Generic.Queue%601?displayProperty=nameWithType>, который использует внешнюю блокировку. В этом сценарии класс <xref:System.Collections.Concurrent.ConcurrentQueue%601> выполняется лучше, когда один выделенный поток помещается в очередь, а другой выделенный поток удаляется из очереди. Если это правило не применяется, класс <xref:System.Collections.Generic.Queue%601> может даже выполняться немного быстрее, чем класс <xref:System.Collections.Concurrent.ConcurrentQueue%601> на компьютерах с многоядерными процессорами.
Когда время обработки составляет 500 FLOPS (операций с плавающей запятой) или больше, то правило двух потоков не применяется к классу <xref:System.Collections.Concurrent.ConcurrentQueue%601>, который имеет очень хорошую масштабируемость. <xref:System.Collections.Generic.Queue%601> в этой ситуации не обладает хорошей масштабируемостью.
В смешанных сценариях "производитель-получатель", когда время обработки очень мало, класс <xref:System.Collections.Generic.Queue%601>, который имеет внешнюю блокировку, масштабируется лучше, чем класс<xref:System.Collections.Concurrent.ConcurrentQueue%601>. Однако, если время обработки имеет значение приблизительно равное 500 FLOPS и выше, то класс <xref:System.Collections.Concurrent.ConcurrentQueue%601> масштабируется лучше.
## <a name="concurrentstack-vs-stack"></a>ConcurrentStack и Стек
В чистых сценариях "производитель-получатель", когда время обработки каждого элемента очень мало, класс <xref:System.Collections.Concurrent.ConcurrentStack%601?displayProperty=nameWithType> и класс <xref:System.Collections.Generic.Stack%601?displayProperty=nameWithType>, который использует внешнюю блокировку, обычно выполняются с одинаковой скоростью при одном выделенном потоке на добавление и одном выделенном потоке на извлечение. Однако по мере увеличения числа потоков производительность снижается у обоих типов, так как увеличивается число конфликтных ситуаций, и класс <xref:System.Collections.Generic.Stack%601> может выполняться лучше, чем класс <xref:System.Collections.Concurrent.ConcurrentStack%601>. Если время обработки имеет значение приблизительно равное 500 FLOPS и выше, то оба типа масштабируются примерно одинаково.
В смешанных сценариях "производитель-получатель" класс <xref:System.Collections.Concurrent.ConcurrentStack%601> имеет большее ускорение для небольших и больших рабочих нагрузок.
Использование методов <xref:System.Collections.Concurrent.ConcurrentStack%601.PushRange%2A> и <xref:System.Collections.Concurrent.ConcurrentStack%601.TryPopRange%2A> может значительно снизить время доступа.
## <a name="concurrentdictionary-vs-dictionary"></a>ConcurrentDictionary и Словарь
Как правило, лучше использовать класс <xref:System.Collections.Concurrent.ConcurrentDictionary%602?displayProperty=nameWithType> в любой ситуации, когда вы одновременно добавляете и обновляете ключи или значения из множества потоков. В сценариях, которые включают частные операции обновления и относительно редкие операции чтения, класс <xref:System.Collections.Concurrent.ConcurrentDictionary%602>, в общем случае, обеспечивает немного лучшую производительность. В сценариях, которые включают частные операции чтения и относительно редкие операции обновления, класс <xref:System.Collections.Concurrent.ConcurrentDictionary%602>, в общем случае, имеет значительно большее ускорение на компьютерах с многоядерными процессорами.
В сценариях, которые включают частые обновления, можно увеличить степень параллелизма в классе <xref:System.Collections.Concurrent.ConcurrentDictionary%602> и затем провести оценку, чтобы увидеть, увеличилась ли производительность на компьютерах с многоядерными процессорами. При изменении уровня параллелизма исключите, насколько это возможно, глобальные операции.
Если выполняются только операции чтения ключа или значений, класс <xref:System.Collections.Generic.Dictionary%602> работает быстрее, так как он не требует синхронизации, пока словарь не изменяется каким-либо потоком.
## <a name="concurrentbag"></a>ConcurrentBag
В чистых сценариях "производитель — получатель" класс <xref:System.Collections.Concurrent.ConcurrentBag%601?displayProperty=nameWithType> может выполняться более медленно, чем другие типы параллельных коллекций.
В смешанных сценариях "производитель-получатель" класс <xref:System.Collections.Concurrent.ConcurrentBag%601> в общем случае имеет большее ускорение и большую масштабируемость, чем все остальные типы параллельных коллекций для небольших и больших рабочих нагрузок.
## <a name="blockingcollection"></a>BlockingCollection
Если вы хотите использовать семантику границ и блокировок, класс <xref:System.Collections.Concurrent.BlockingCollection%601?displayProperty=nameWithType> может работать быстрее, чем любые пользовательские реализации. Он также поддерживает гибкую обработку исключений и операций отмены, перечисления.
## <a name="see-also"></a>См. также
<xref:System.Collections.Concurrent?displayProperty=nameWithType>
[Потокобезопасные коллекции](../../../../docs/standard/collections/thread-safe/index.md)
[Параллельное программирование](../../../../docs/standard/parallel-programming/index.md)
| 123.478261 | 840 | 0.82723 | rus_Cyrl | 0.961453 |
8a720baa38c476e2803dece5d8bc14ca19a00f90 | 1,232 | md | Markdown | docs/00_index.md | kingdavid72/NectarJS | 1e54ae5edab62115c556de60db7225aac02d07a2 | [
"MIT"
] | 1 | 2019-10-20T08:58:28.000Z | 2019-10-20T08:58:28.000Z | docs/00_index.md | arjndr/nectarjs | 1e54ae5edab62115c556de60db7225aac02d07a2 | [
"MIT"
] | null | null | null | docs/00_index.md | arjndr/nectarjs | 1e54ae5edab62115c556de60db7225aac02d07a2 | [
"MIT"
] | null | null | null | # NectarJS
Javascript's God Mode : one language to rule them all. Code everything, everywhere, for everything, in JS, TS, CS and more.
[](http://nectar-lang.com/key) [](http://api.nectarjs.com:3000/) [](http://nectar-lang.com/contribute/) [](http://nectar-lang.com) <a class="github-button" href="https://github.com/seraum/nectarjs" data-icon="octicon-star" data-show-count="true" aria-label="Star seraum/nectarjs on GitHub">Star</a>
[](https://nodei.co/npm/nectarjs/)
Join us on Slack : [NectarJS' Slack](http://api.nectarjs.com:3000/)
Get your free key here : [NectarJS free Key](http://nectar-lang.com/key/)
<style>
a.external:after
{
content: none;
}
a.external
{
display:inline-block;
}
</style>
<script async defer src="https://buttons.github.io/buttons.js"></script>
| 47.384615 | 658 | 0.730519 | yue_Hant | 0.456534 |
8a72376e6cf00cb54cc808d3ec68174d2d9c2d70 | 1,217 | md | Markdown | docs/index.md | simon-mo/terraform-provider-buildkite | 143d54d5d30f871944d4285ab2ef9722b54c54a4 | [
"MIT"
] | null | null | null | docs/index.md | simon-mo/terraform-provider-buildkite | 143d54d5d30f871944d4285ab2ef9722b54c54a4 | [
"MIT"
] | null | null | null | docs/index.md | simon-mo/terraform-provider-buildkite | 143d54d5d30f871944d4285ab2ef9722b54c54a4 | [
"MIT"
] | null | null | null | # Buildkite Provider
This provider can be used to manage resources on [buildkite.com](https://buildkite.com) via Terraform.
Two configuration values are required:
* An API token, generated at https://buildkite.com/user/api-access-tokens. The
token must have the `write_pipelines` REST API scope and be enabled for GraphQL
* A Buildkite organization slug, available by signing into buildkite.com and
examining the URL: https://buildkite.com/<org-slug>
## Example Usage
```hcl
terraform {
required_providers {
buildkite = {
source = "buildkite/buildkite"
version = "0.0.17"
}
}
}
provider "buildkite" {
api_token = "token" # can also be set from env: BUILDKITE_API_TOKEN
organization = "slug" # can also be set from env: BUILDKITE_ORGANIZATION
}
```
## Argument Reference
* `api_token` - (Required) This is the Buildkite API Access Token. It must be provided but can also be sourced from the `BUILDKITE_API_TOKEN` environment variable.
* `organization` - (Required) This is the Buildkite organization slug. It must be provided, but can also be sourced from the `BUILDKITE_ORGANIZATION` environment variable. The token requires GraphQL access and the `write_pipelines` scope.
| 35.794118 | 238 | 0.746097 | eng_Latn | 0.970468 |
8a7399df528a931c8b9b4c8ea1ea4f71e8f483b0 | 147 | md | Markdown | content/010_introduction/eks/eks_customers.md | shogo2022/eks-workshop-jp | 9b180868610045021ebaa64994bd2b1d196218d8 | [
"MIT-0"
] | null | null | null | content/010_introduction/eks/eks_customers.md | shogo2022/eks-workshop-jp | 9b180868610045021ebaa64994bd2b1d196218d8 | [
"MIT-0"
] | null | null | null | content/010_introduction/eks/eks_customers.md | shogo2022/eks-workshop-jp | 9b180868610045021ebaa64994bd2b1d196218d8 | [
"MIT-0"
] | null | null | null | ---
title: "EKSクラスタの作成フロー"
date: 2018-10-03T10:23:24-07:00
draft: false
weight: 130
---

| 14.7 | 56 | 0.714286 | kor_Hang | 0.086851 |
8a739fdc5fbeddb77b4b0d48df86f57fb0763f65 | 6,240 | md | Markdown | README.md | javierAraluce/zed-ros-wrapper | 7f29630644f17d4b4edc421b228c7fbad0b5b7c7 | [
"MIT"
] | null | null | null | README.md | javierAraluce/zed-ros-wrapper | 7f29630644f17d4b4edc421b228c7fbad0b5b7c7 | [
"MIT"
] | null | null | null | README.md | javierAraluce/zed-ros-wrapper | 7f29630644f17d4b4edc421b228c7fbad0b5b7c7 | [
"MIT"
] | null | null | null | 
#Install SDK on docker
```
export USER=insert_user
sudo mkdir -p /etc/udev/rules.d/
sudo apt install udev
```
# Stereolabs ZED Camera - ROS Integration
This package lets you use the ZED stereo camera with ROS. It outputs the camera left and right images, depth map, point cloud, pose information and supports the use of multiple ZED cameras.
[More information](https://www.stereolabs.com/documentation/guides/using-zed-with-ros/introduction.html)
## Getting started
- First, download the latest version of the ZED SDK on [stereolabs.com](https://www.stereolabs.com/developers/)
- [Install](#build-the-program) the ZED ROS wrapper
- For more information, check out our [ROS documentation](https://www.stereolabs.com/documentation/guides/using-zed-with-ros/introduction.html). If you want to customize the wrapper, check the [ZED API documentation](https://www.stereolabs.com/developers/documentation/API/)
### Prerequisites
- Ubuntu 16.04 or newer (Ubuntu 18 recommended)
- [ZED SDK **≥ 3.0**](https://www.stereolabs.com/developers/) and its dependency [CUDA](https://developer.nvidia.com/cuda-downloads)
- [ROS Kinetic](http://wiki.ros.org/kinetic/Installation/Ubuntu) or [ROS Melodic](http://wiki.ros.org/melodic/Installation/Ubuntu)
*Note:* an older version of the wrapper compatible with the **SDK v2.8.x** is available [here](https://github.com/stereolabs/zed-ros-wrapper/releases/tag/v2.x)
### Build the program
The zed_ros_wrapper is a catkin package. It depends on the following ROS packages:
- nav_msgs
- tf2_geometry_msgs
- message_runtime
- catkin
- roscpp
- stereo_msgs
- rosconsole
- robot_state_publisher
- urdf
- sensor_msgs
- image_transport
- roslint
- diagnostic_updater
- dynamic_reconfigure
- tf2_ros
- message_generation
- nodelet
Open a terminal, clone the repository, update the dependencies and build the packages:
$ cd ~/catkin_ws/src
$ git clone https://github.com/stereolabs/zed-ros-wrapper.git
$ cd ../
$ rosdep install --from-paths src --ignore-src -r -y
$ catkin_make -DCMAKE_BUILD_TYPE=Release
$ source ./devel/setup.bash
### Run the program
To launch the ZED node use
ZED camera:
$ roslaunch zed_wrapper zed.launch
ZED Mini camera:
$ roslaunch zed_wrapper zedm.launch
ZED2 camera:
$ roslaunch zed_wrapper zed2.launch
To select the ZED from its serial number:
$ roslaunch zed_wrapper zed.launch serial_number:=1010 #replace 1010 with the actual SN
### Rviz visualization
Example launch files to start a pre-configured Rviz environment to visualize the data of ZED, ZED Mini and ZED 2 cameras are provided in the [`zed-ros-examples` repository](https://github.com/stereolabs/zed-ros-examples/tree/master/zed_display_rviz)
### SVO recording
[SVO recording](https://www.stereolabs.com/docs/video/#video-recording) can be started and stopped while the ZED node is running using the service `start_svo_recording` and the service `stop_svo_recording`.
[More information](https://www.stereolabs.com/docs/ros/zed_node/#services)
### Object Detection
The SDK v3.0 introduces the Object Detection and Tracking module. **The Object Detection module is available only with a ZED 2 camera**.
The Object Detection can be enabled *automatically* when the node start setting the parameter `object_detection/od_enabled` to `true` in the file `zed2.yaml`.
The Object Detection can be enabled/disabled *manually* calling the services `start_object_detection` and `stop_object_detection`.
### Spatial Mapping
The Spatial Mapping can be enabled automatically when the node start setting the parameter `mapping/mapping_enabled` to `true` in the file `common.yaml`.
The Spatial Mapping can be enabled/disabled manually calling the services `start_3d_mapping` and `stop_3d_mapping`.
### Diagnostic
The ZED node publishes diagnostic information that can be used by the robotic system using a [diagnostic_aggregator node](http://wiki.ros.org/diagnostic_aggregator).
With the `rqt` plugin `Runtime monitor`, it is possible to retrieve all the diagnostic information, checking that the node
is working as expected.
### 2D mode
For robots moving on a planar surface it is possible to activate the "2D mode" (parameter `tracking/two_d_mode` in `common.yaml`).
The value of the coordinate Z for odometry and pose will have a fixed value (parameter `tracking/fixed_z_value` in `common.yaml`).
Roll and pitch and relative velocities will be fixed to zero.
## Examples and Tutorials
Examples and tutorials are provided to better understand how to use the ZED wrapper and how to integrate it in the ROS framework.
See the [`zed-ros-examples` repository](https://github.com/stereolabs/zed-ros-examples)
### Examples
Alongside the wrapper itself and the Rviz display, a few examples are provided to interface the ZED with other ROS packages :
- [RTAB-Map](http://introlab.github.io/rtabmap/): See [zed_rtabmap_example](https://github.com/stereolabs/zed-ros-examples/tree/master/examples/zed_rtabmap_example/README.md)
- ROS Nodelet, `depthimage_to_laserscan`: See [zed_nodelet_example](https://github.com/stereolabs/zed-ros-examples/tree/master/examples/zed_nodelet_example/README.md)
- AR Track Alvar: See [zed_ar_track_alvar_example](https://github.com/stereolabs/zed-ros-examples/tree/master/examples/zed_ar_track_alvar_example/README.md)
### Tutorials
A few tutorials are provided to understand how to use the ZED node in the ROS environment :
- [Image subscription tutorial](https://github.com/stereolabs/zed-ros-examples/tree/master/tutorials/zed_video_sub_tutorial/README.md)
- [Depth subscription tutorial](https://github.com/stereolabs/zed-ros-examples/tree/master/tutorials/zed_depth_sub_tutorial/README.md)
- [Tracking subscription tutorial](https://github.com/stereolabs/zed-ros-examples/tree/master/tutorials/zed_tracking_sub_tutorial/README.md)
- [Sensors data subscription tutorial](https://github.com/stereolabs/zed-ros-examples/blob/master/tutorials/zed_sensors_sub_tutorial/README.md)
- [Object detection subscription tutorial](https://github.com/stereolabs/zed-ros-examples/blob/master/tutorials/zed_obj_det_sub_tutorial/README.md)
| 48 | 274 | 0.773718 | eng_Latn | 0.836726 |
8a7437408a4628d1070b1ddf848e5c4d9a780c49 | 2,451 | md | Markdown | windows-driver-docs-pr/install/device-manager-details-tab.md | AmadeusW/windows-driver-docs | 6d272f80814969bbb5ec836cbbebdf5cae52ee35 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2017-05-30T18:13:16.000Z | 2021-09-26T19:45:08.000Z | windows-driver-docs-pr/install/device-manager-details-tab.md | AmadeusW/windows-driver-docs | 6d272f80814969bbb5ec836cbbebdf5cae52ee35 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/install/device-manager-details-tab.md | AmadeusW/windows-driver-docs | 6d272f80814969bbb5ec836cbbebdf5cae52ee35 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-02-25T05:28:44.000Z | 2020-02-25T05:28:44.000Z | ---
title: Device Manager Details Tab
description: Device Manager Details Tab
ms.assetid: 5f1e345f-72c6-4bd4-a0fa-304e5d0d91be
keywords:
- Device Manager WDK , Details tab
- firmware revision numbers WDK Device Manager
- revision numbers WDK Device Manager
- Details tab WDK Device Manager
ms.author: windowsdriverdev
ms.date: 04/20/2017
ms.topic: article
ms.prod: windows-hardware
ms.technology: windows-devices
---
# Device Manager Details Tab
## <a href="" id="ddk-device-manager-details-tab-dg"></a>
For Windows XP and later versions of Windows, Device Manager provides a **Details** tab for each device. This tab displays lots of information useful to driver developers and testers, and aids Microsoft Customer Support Services (CSS) in diagnosing customer problems. The tab's page displays [device identification strings](device-identification-strings.md), together with device and driver configuration information that can be useful when you debug drivers.
### <a href="" id="ddk-viewing-a-device-s-details-tab-dg"></a>Viewing a Device's Details Tab
Starting with Windows Server 2003 SP1 and Windows XP SP2, the details tab is enabled by default.
On Windows Server 2003, Windows XP SP1, Windows XP, and Windows 2000, the details tab is disabled by default.
To enable this tab, set the user environment variable DEVMGR_SHOW_DETAILS to 1. After you set this environment variable, the **Details** tab of the device will be available in Device Manager. To permanently set a user environment variable, use the **Advanced** tab of the system property sheet. For information about how to set environment variables, see "Setting environment variables" in the Help and Support Center.
### <a href="" id="ddk-providing-firmware-revision-numbers-for-the-details-tab-dg"></a>Providing Firmware Revision Numbers for the Details Tab
Device Manager's **Details** tab can display a device's firmware revision number, if available. A driver can supply a firmware revision number by responding to a WMI request. Specifically, the driver's [**DpWmiQueryDataBlock**](https://msdn.microsoft.com/library/windows/hardware/ff544096) routine should support **MSDeviceUI_FirmwareRevision_GUID** by returning a DEVICE_UI_FIRMWARE_REVISION structure (defined in Wmidata.h). The structure must contain the firmware revision number as a NULL-terminated WCHAR string, preceded by a USHORT value that contains the string length (including the **NULL**).
| 54.466667 | 602 | 0.787842 | eng_Latn | 0.934 |
8a75327b693a7a4c8fc2f4458e2ca9ae7811602c | 2,325 | md | Markdown | 2017/CVE-2017-10061.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 2,340 | 2022-02-10T21:04:40.000Z | 2022-03-31T14:42:58.000Z | 2017/CVE-2017-10061.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 19 | 2022-02-11T16:06:53.000Z | 2022-03-11T10:44:27.000Z | 2017/CVE-2017-10061.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 280 | 2022-02-10T19:58:58.000Z | 2022-03-26T11:13:05.000Z | ### [CVE-2017-10061](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-10061)


%20of%20PeopleSoft%20Enterprise%20PeopleTools.&color=brighgreen)
### Description
Vulnerability in the PeopleSoft Enterprise PeopleTools component of Oracle PeopleSoft Products (subcomponent: Integration Broker). Supported versions that are affected are 8.54 and 8.55. Easily exploitable vulnerability allows unauthenticated attacker with network access via HTTP to compromise PeopleSoft Enterprise PeopleTools. While the vulnerability is in PeopleSoft Enterprise PeopleTools, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of PeopleSoft Enterprise PeopleTools accessible data as well as unauthorized read access to a subset of PeopleSoft Enterprise PeopleTools accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of PeopleSoft Enterprise PeopleTools. CVSS 3.0 Base Score 8.3 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:L).
### POC
#### Reference
- http://www.oracle.com/technetwork/security-advisory/cpujul2017-3236622.html
#### Github
No PoCs found on GitHub currently.
| 129.166667 | 964 | 0.831828 | eng_Latn | 0.630934 |
8a75b4e190682b5d5e4b3ae3efce4015948f21af | 4,064 | md | Markdown | docs/relational-databases/system-stored-procedures/sp-replqueuemonitor-transact-sql.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-stored-procedures/sp-replqueuemonitor-transact-sql.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-stored-procedures/sp-replqueuemonitor-transact-sql.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: sp_replqueuemonitor (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 03/14/2017
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: replication
ms.topic: language-reference
f1_keywords:
- sp_replqueuemonitor
- sp_replqueuemonitor_TSQL
helpviewer_keywords:
- sp_replqueuemonitor
ms.assetid: 6909a3f1-43a2-4df5-a6a5-9e6f347ac841
author: stevestein
ms.author: sstein
manager: craigg
ms.openlocfilehash: e8eb21085625c7f2f0071c18da80501774088fdc
ms.sourcegitcommit: ceb7e1b9e29e02bb0c6ca400a36e0fa9cf010fca
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 12/03/2018
ms.locfileid: "52789368"
---
# <a name="spreplqueuemonitor-transact-sql"></a>sp_replqueuemonitor (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-xxxx-xxxx-xxx-md.md)]
Lista as mensagens de fila de uma [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] fila ou [!INCLUDE[msCoName](../../includes/msconame-md.md)] enfileiramento de mensagens para assinaturas de atualização em fila para uma publicação especificada. Se as filas do [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] forem usadas, esse procedimento armazenado será executado no banco de dados de assinatura. Se o Enfileiramento de Mensagens for usado, esse procedimento armazenado será executado no Distribuidor, no banco de dados de distribuição.
 [Convenções de sintaxe de Transact-SQL](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Sintaxe
```
sp_replqueuemonitor [ @publisher = ] 'publisher'
[ , [ @publisherdb = ] 'publisher_db' ]
[ , [ @publication = ] 'publication' ]
[ , [ @tranid = ] 'tranid' ]
[ , [ @queuetype = ] 'queuetype' ]
```
## <a name="arguments"></a>Argumentos
[ **@publisher** =] **'***publisher***'**
É o nome do Publicador. *Publisher* está **sysname**, com um padrão NULL. O servidor deve ser configurado para publicação. NULL para todos os Publicadores.
[ **@publisherdb** =] **'***publisher_db***'** ]
É o nome do banco de dados de publicação. *publisher_db* está **sysname**, com um padrão NULL. NULL para todos os bancos de dados de publicação.
[ **@publication** =] **'***publicação***'** ]
É o nome da publicação. *publicação*está **sysname**, com um padrão NULL. NULL para todas as publicações.
[ **@tranid** =] **'***tranid***'** ]
É a ID da transação. *tranid*está **sysname**, com um padrão NULL. NULL para todas as transações.
[**@queuetype=** ] **'***queuetype***'** ]
É o tipo de fila que armazena transações. *queuetype* está **tinyint** com um padrão de **0**, e pode ser um destes valores.
|Valor|Descrição|
|-----------|-----------------|
|**0**|Todos os tipos de filas|
|**1**|Enfileiramento de Mensagens|
|**2**|Fila do [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] |
## <a name="return-code-values"></a>Valores do código de retorno
**0** (êxito) ou **1** (falha)
## <a name="remarks"></a>Comentários
**sp_replqueuemonitor** é usado em replicação de instantâneo ou replicação transacional com assinaturas de atualização enfileirada. As mensagens em fila que não contêm comandos SQL ou são parte de um comando SQL abrangente não são exibidas.
## <a name="permissions"></a>Permissões
Somente os membros dos **sysadmin** função de servidor fixa ou **db_owner** banco de dados fixa podem executar **sp_replqueuemonitor**.
## <a name="see-also"></a>Consulte também
[Updatable Subscriptions for Transactional Replication](../../relational-databases/replication/transactional/updatable-subscriptions-for-transactional-replication.md)
[Procedimentos armazenados do sistema (Transact-SQL)](../../relational-databases/system-stored-procedures/system-stored-procedures-transact-sql.md)
| 50.8 | 613 | 0.706693 | por_Latn | 0.932527 |
8a75c4ba0ef9fbe4d7cea1c89530decbc224b641 | 1,148 | md | Markdown | README.md | NickLeon92/Ecommerce-Back-End | 9ef5d59bcccdf27affc1fa2a705068c45d361074 | [
"MIT"
] | null | null | null | README.md | NickLeon92/Ecommerce-Back-End | 9ef5d59bcccdf27affc1fa2a705068c45d361074 | [
"MIT"
] | 4 | 2021-08-31T16:12:13.000Z | 2021-08-31T21:03:15.000Z | README.md | NickLeon92/Ecommerce-Back-End | 9ef5d59bcccdf27affc1fa2a705068c45d361074 | [
"MIT"
] | null | null | null | # Ecommerce-Back-End
# Ecommerce Back End
<img src="https://img.shields.io/badge/MIT-license-green">
## DESCRIPTION
This repository serves as the database back end for an e-commerce site. The data is handled through Express routes, with Sequelize used as the ORM for the MySQL database.
## TABLE OF CONTENTS
- [Installation](#installation)
- [Usage](#usage)
- [Contributing](#contributing)
- [Tests](#tests)
- [Questions](#questions)
## Installation
Run `npm i`, then log in to MySQL with `mysql -u root -p`. Source the schema file, then seed the database by running `node seeds/index`.
## Usage
https://user-images.githubusercontent.com/83552236/131538400-96d15033-8a26-46e3-a621-8ae15fb73841.mp4
https://user-images.githubusercontent.com/83552236/131575105-c0fae1fd-c338-4d2e-8f00-58fa39c1482a.mp4
https://user-images.githubusercontent.com/83552236/131575498-355c4bdc-8b54-4b53-9dbf-5b2531605cf7.mp4
Use Insomnia or a similar program to read JSON data from the MySQL database and to post to it as well.
## Contributing
N/A
## Tests
none
## Questions
For further questions reach to me on [GitHub](https://github.com/NickLeon92)
or email: [email protected]
| 24.425532 | 151 | 0.754355 | eng_Latn | 0.55319 |
8a75e9028d7194b229080a6be1d222131fd839aa | 1,489 | md | Markdown | CONTRIBUTING.md | sphh/RPLCD | c4dc451623da5e02292046388be5201cbc25321e | [
"MIT"
] | 231 | 2015-02-04T16:12:52.000Z | 2022-01-18T22:03:10.000Z | CONTRIBUTING.md | sphh/RPLCD | c4dc451623da5e02292046388be5201cbc25321e | [
"MIT"
] | 106 | 2015-03-31T15:34:26.000Z | 2022-03-04T14:23:50.000Z | CONTRIBUTING.md | sphh/RPLCD | c4dc451623da5e02292046388be5201cbc25321e | [
"MIT"
] | 74 | 2015-03-28T12:17:40.000Z | 2021-11-15T08:14:51.000Z | # Contributing
Thanks a lot for any contribution!
To keep code quality high and maintenance work low, please adhere to the
following guidelines when creating a pull request:
- Please follow the [coding
guidelines](https://github.com/dbrgn/RPLCD#coding-guidelines).
- Use meaningful commit messages: Please follow the advice in [this
blogpost](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
First line of your commit message should be a very short summary (ideally 50
characters or less) in the imperative mood. After the first line of the commit
message, add a blank line and then a more detailed explanation (when relevant).
The following items make my life easier, but are optional:
- If you know how to use `git rebase`, please rebase/squash your commits so that
unnecessary noise in the commit history is avoided.
- If you have previously filed a GitHub issue and want to contribute code
that addresses that issue, I prefer it if you use
[hub](https://github.com/github/hub) to convert your existing issue to a pull
request. To do that, first push the changes to a separate branch in your fork
and then issue the following command:
hub pull-request -b dbrgn:master -i <issue-number> -h <your-github-username>:<your-branch-name>
This is no strict requirement though, if you don't have hub installed or
prefer to use the web interface, then feel free to post a traditional pull
request.
Thanks for your contribution!
| 45.121212 | 103 | 0.766286 | eng_Latn | 0.999133 |
8a7621a35d30d417a47bf9fa34e8dc99b7256d06 | 4,009 | markdown | Markdown | _posts/2016-02-10-icli.markdown | thormagnusson/sonicwriting | 8406b6ce2432bf0c725f234a26746e7f4a451c73 | [
"Apache-2.0"
] | 1 | 2019-05-28T18:38:59.000Z | 2019-05-28T18:38:59.000Z | _posts/2016-02-10-icli.markdown | thormagnusson/sonicwriting | 8406b6ce2432bf0c725f234a26746e7f4a451c73 | [
"Apache-2.0"
] | null | null | null | _posts/2016-02-10-icli.markdown | thormagnusson/sonicwriting | 8406b6ce2432bf0c725f234a26746e7f4a451c73 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: "ICLI 2016: The 3rd International Conference on Live Interfaces"
date: 2016-02-10 15:00:00
author: "Thor Magnusson"
header-img: "img/icli.jpg"
twitter: True
comments: True
---
<p>Members of the <a href="http://www.miptl.org">Music Informatics and Performance Technologies Lab</a> at the University of Sussex are organising the 3rd <a href="http://www.liveinterfaces.org">International Conference on Live Interfaces</a>. This biennial conference began at the University of Leeds in 2012, and was then held in Lisbon in 2014. We are very excited to be hosting the third instance of this fantastic conference in Brighton this summer, at the University of Sussex.</p>
<p>The conference website can be found here: <a href="http://www.liveinterfaces.org">http://www.liveinterfaces.org</a></p>
<p>The ICLI conference is an interdisciplinary summit that focusses on performance technologies in all artistic human activities. There are more bespoke conferences on new interfaces in music (such as <a href="http://www.nime.org">NIME</a>), but ICLI is unique in that it aims to understand how the different art forms engage with concepts of interface, embodiment, live performance, improvisation and technical skill. We can learn much from each other as practitioners in the diverse art forms, and there was a bespoke need in creating this platform.</p>
<p>In order to stress the interdisciplinary nature of the conference, we have invited keynote speakers from three distinct fields of practice: <a href="http://tinything.com/">Kristina Andersen</a> (music), <a href="http://stuartnolan.com">Stuart Nolan</a> (magic), and <a href="http://watermillcenter.org/residency/roman-paska">Roman Paska</a> (puppetry). We are hoping the diversity of topics listed in the call will encourage people from performing art forms as diverse as dance, puppetry, music, performance art, magic, computer games, comedy, visual arts, video art, and many more. The boundaries are fuzzy, of course, as some might argue that live interfaces are also used in expressive ways in martial arts, religious ceremonies, or gastronomy, to name but a few examples, but we will await decisions from our excellent reviewer team to define what kind of conference this will be.</p>
<p>The conference will include the potential for a strong public involvement, where people will be able to attend special events during the conference or getting a day ticket for selected days. The conference will be one of the first public engagement activities of the newly refurbished <a href="http://www.sussex.ac.uk/acca/">ACCA</a> (Attenborough Centre for Creative Arts), and the event benefits from the strong support by its creative director, Paula McDermott.</p>
<img src="{{ site.baseurl }}/img/acca.jpg" alt="inspecting ACCA">
<span class="caption text-muted">Some members of the ICLI team inspecting the conference location at ACCA (Attenborough Centre for Creative Arts).</span>
<p>We are very excited about two resonating instruments workshops that will take place before the conference starts. During these workshops, composers and performers will be teamed up to study, compose and perform on two quite unique instruments: the <a href="http://www.halldorulfarsson.info/halldorophones/about-halldorophones">halldorophone</a> and the <a href="http://www.eecs.qmul.ac.uk/~andrewm/mrp.html">Magnetic Resonator Piano</a>. These sessions will be lead by the instrument designers Halldor Ulfarsson and Andrew McPherson, together with skilled performers <a href="http://www.ecila.org">Alice Eldridge</a> and <a href="http://sarahnicolls.com">Sarah Nicolls</a>. There will be other workshops during the conference, but those will be proposed through the submissions and go through our peer review process.</p>
<p>The deadline for submissions has been extended to February 21st, so please consider submitting a performance proposal, a paper, a workshop, an installation or participate in the doctoral colloquium. </p> | 154.192308 | 891 | 0.781492 | eng_Latn | 0.99365 |
8a766ad7466d31b2e04bda8e33de5b283cb6c7c2 | 14,766 | md | Markdown | 2008-11-07/README.md | fronx/homoiconic | 25f077da9a7a784f672299048cafb44b45114210 | [
"MIT"
] | 1 | 2015-11-05T07:59:54.000Z | 2015-11-05T07:59:54.000Z | 2008-11-07/README.md | fronx/homoiconic | 25f077da9a7a784f672299048cafb44b45114210 | [
"MIT"
] | null | null | null | 2008-11-07/README.md | fronx/homoiconic | 25f077da9a7a784f672299048cafb44b45114210 | [
"MIT"
] | null | null | null | Aspect-Oriented Programming in Ruby using Combinator Birds
---
In [Combinatory Logic](http://en.wikipedia.org/wiki/Combinatory_logic), the bluebird is one of the most important and fundamental combinators, because the bluebird *composes* two other combinators. Although this is usually discussed as part of [functional programming style](http://weblog.raganwald.com/2007/03/why-why-functional-programming-matters.html "Why Why Functional Programming Matters Matters"), it is just as valuable when writing object-oriented programs. In this post, we will develop an [aspect-oriented programming](http://en.wikipedia.org/wiki/Aspect-oriented_programming "") (or "AOP") module that adds before methods and after methods to Ruby programs, with the implementation inspired by the bluebird.
> As explained in [Kestrels](http://github.com/raganwald/homoiconic/tree/master/2008-10-29/kestrel.markdown#readme), the practice of nicknaming combinators after birds was established in Raymond Smullyan's amazing book [To Mock a Mockingbird](http://www.amazon.com/gp/product/0192801422?ie=UTF8&tag=raganwald001-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0192801422). In this book, Smullyan explains combinatory logic and derives a number of important results by presenting the various combinators as songbirds in a forest. Since the publication of the book more than twenty years ago, the names he gave the birds have become standard nicknames for the various combinators.
[](http://www.flickr.com/photos/dagberg/2451392973/ "Eastern bluebird (c) 2008 Doug Greenberg, some rights reserved")
The bluebird is written `Bxyz = x(yz)`. In Ruby, we can express the bluebird like this:
bluebird.call(proc1).call(proc2).call(value)
=> proc1.call(proc2.call(value))
If this seems a little arcane, consider a simple Ruby expression `(x * 2) + 1`: This expression *composes* multiplication and addition. Composition is so pervasive in programming languages that it becomes part of the syntax, something we take for granted. We don't have to think about it until someone like Oliver Steele writes a library like [functional javascript](http://osteele.com/sources/javascript/functional/) that introduces a `compose` function, then we have to ask what it does.
Before we start using bluebirds, let's be clear about something. We wrote that `bluebird.call(proc1).call(proc2).call(value)` is equivalent to `proc1.call(proc2.call(value))`. We want to be very careful that we understand what is special about `proc1.call(proc2.call(value))`. How is it different from `proc1.call(proc2).call(value)`?
The answer is:
proc1.call(proc2.call(value))
=> puts value into proc2, then puts the result of that into proc1
proc1.call(proc2).call(value)
=> puts proc2 into proc1, getting a function out, then puts value into the new function
So with a bluebird you can chain functions together in series, while if you didn't have a bluebird all you could do is write functions that transform other functions. Not that there's anything wrong with that, we used that to great effect with [cardinals](http://github.com/raganwald/homoiconic/tree/master/2008-10-31/songs_of_the_cardinal.markdown#readme) and [quirky birds](http://github.com/raganwald/homoiconic/tree/master/2008-11-04/quirky_birds_and_meta_syntactic_programming.markdown#readme).
**giving methods advice**
We're not actually going to [Greenspun](http://en.wikipedia.org/wiki/Greenspun%27s_Tenth_Rule "Greenspun's Tenth Rule - Wikipedia, the free encyclopedia") an entire aspect-oriented layer on top of Ruby, but we will add a simple feature, we are going to add *before and after methods*. You already know what a normal method is. A before method simply specifies some behaviour you want executed before the method is called, while an after method specifies some behaviour you want executed after the method is called. In AOP, before and after methods are called "advice."
Ruby on Rails programmers are familiar with method advice. If you have ever written any of the following, you were using Rails' built-in aspect-oriented programming support:
after_save
validates_each
alias_method_chain
before_filter
These and other features of Rails implement method advice, albeit in a very specific way tuned to portions of the Rails framework. We're going to implement method advice in a module that you can use in any of your classes, on any method or methods you choose. We'll start with before methods. Here's the syntax we want:
def something(parameter)
# do stuff...
end
before :something do |parameter|
# stuff to do BEFORE we do stuff...
end
before :something do |parameter|
# stuff to do BEFORE stuff to do BEFORE we do stuff...
end
As we can see, the before methods get chained together before the method. To keep this nice and clean, we are going to make them work just like composable functions: whatever our before method's block returns will be passed as a parameter up the chain. We also won't fool around with altering the order of before methods, we'll just take them as they come.
This is really simple, we are composing methods. To compare to the bluebird above, we are writing `before`, then the name of a method, then a function. I'll rewrite it like this:
bluebird.call(something).call(stuff_to_do_before_we_do_stuff).call(value)
=> something.call(stuff_to_do_before_we_do_stuff.call(value))
Now we can see that this newfangled aspect-oriented programming stuff was figured out nearly a century ago by people like [Alonzo Church](http://en.wikipedia.org/wiki/Alonzo_Church).
Okay, enough history, let's get started. First, we are not going to write any C, so there is no way to actually force the Ruby VM to call our before methods. So instead, we are going to have to rewrite our method. We'll use a [trick](http://blog.jayfields.com/2006/12/ruby-alias-method-alternative.html "Jay Fields' Thoughts: Ruby: Alias method alternative") I found on Jay Fields' blog:
module NaiveBeforeMethods
module ClassMethods
def before(method_sym, &block)
old_method = self.instance_method(method_sym)
if old_method.arity == 0
define_method(method_sym) do
block.call
old_method.bind(self).call
end
else
define_method(method_sym) do |*params|
old_method.bind(self).call(*block.call(*params))
end
end
end
end
def self.included(receiver)
receiver.extend ClassMethods
end
end
As you can see, we have a special case for methods with no parameters, and when we have a method with multiple parameters, our before method must answer an array of parameters. And the implementation relies on a "flock of bluebirds:" Our before methods and the underlying base method are composed with each other to define the method that is actually executed at run time.
Using it is very easy:
class SuperFoo
def one_parameter(x)
x + 1
end
def two_parameters(x, y)
x * y
end
end
class Foo < SuperFoo
include NaiveBeforeMethods
before :one_parameter do |x|
x * 2
end
before :two_parameters do |x, y|
[x + y, x - y]
end
end
Foo.new.one_parameter(5)
=> 11
Foo.new.two_parameters(3,1)
=> 8
> This could be even more useful if it supported methods with blocks. Adventurous readers may want to combine this code with the tricks in [cardinal.rb](http://github.com/raganwald/homoiconic/tree/master/2008-10-31/cardinal.rb) and see if they can build a version of `before` that supports methods that take blocks.
**the super keyword, perhaps you've heard of it?**
Of course, Ruby provides a means of 'decorating' methods like this by overriding a method and calling `super` within it. So we might have written:
class Foo < SuperFoo
def one_parameter(x)
super(x * 2)
end
def two_parameters(x, y)
super(x + y, x - y)
end
end
On a trivial example, the two techniques seem equivalent, so why bother with the extra baggage? The answer is that using `super` is a little low level. When you see a method definition in a language like Ruby, you don't know whether you are defining a new method, overriding an existing method with entirely new functionality, or "decorating" a method with before advice. Using advice can be useful when you want to signal exactly what you are trying to accomplish.
Another reason to prefer method advice is when you want to share some functionality:
class LoggingFoo < SuperFoo
def one_parameter(x)
log_entry
returning(super) do
log_exit
end
end
def two_parameters(x, y)
log_entry
returning(super) do
log_exit
end
end
end
This could be written as:
class LoggingFoo < SuperFoo
include NaiveBeforeMethods
before :one_parameter, :two_parameters do # see below
log_entry
end
after :one_parameter, :two_parameters do
log_exit
end
end
This cleanly separates the concern of logging from the mechanism of what the methods actually do.
> Although this is not the main benefit, method advice also works with methods defined in modules and the current class, not just superclasses. So in some ways it is even more flexible than Ruby's `super` keyword.
**the queer bird**
That looks handy. But we also want an _after method_, a way to compose methods in the other order. Good news, the queer bird combinator is exactly what we want.
[](http://www.flickr.com/photos/penguincakes/2891197379/ "happy pride (c) 2008 penguincakes, some rights reserved")
Written `Qxyz = y(xz)`, the Ruby equivalent is:
queer_bird.call(something).call(stuff_to_do_after_we_do_stuff).call(value)
=> stuff_to_do_after_we_do_stuff.call(something.call(value))
Which is, of course:
def something(parameter)
# do stuff...
end
after :something do |return_value|
# stuff to do AFTER we do stuff...
end
The difference between before and after advice is that after advice consumes and transforms whatever the method returns, while before advice consumes and transforms the parameters to the method.
We _could_ copy, paste and modify our bluebird code for the before methods to create after methods. But before you rush off to implement that, you might want to think about a few interesting "real world" requirements:
1. If you define before and after methods in any order, the final result should be that all of the before methods are run before the main method, then all of the after methods. This is not part of combinatory logic, but it's the standard behaviour people expect from before and after methods.
2. You should be able to apply the same advice to more than one method, for example by writing `after :foo, :bar do ... end`
3. If you declare parameters for before advice, whatever it returns will be used by the next method, just like the example above. If you do not declare parameters for before advice, whatever it returns should be ignored. The same goes for after advice.
4. If you override the main method, the before and after methods should still work.
5. The blocks provided should execute in the receiver's scope, like method bodies.
One implementation meeting these requirements is here: [before\_and\_after\_advice.rb](http://github.com/raganwald/homoiconic/tree/master/2008-11-07/before_and_after_advice.rb "before_and_after_advice.rb"). Embedded in a lot of extra moving parts, the basic pattern of composing methods is still evident:
# ...
define_method(method_sym) do |*params|
composition.after.inject(
old_method.bind(self).call(
*composition.before.inject(params) do |acc_params, block|
self.instance_exec(*acc_params, &block)
end
)
) do |ret_val, block|
self.instance_exec(ret_val, &block)
end
end
# ...
That is why we looked at supporting just before methods first. If you are comfortable with the [naïve implementation of before advice](http://github.com/raganwald/homoiconic/tree/master/2008-11-07/naive_before_advice.rb) discussed above, the mechanism is easy to understand. The complete version is considerably more powerful. As mentioned, it supports before and after advice. It also uses `instance_exec` to evaluate the blocks in the receiver's scope, providing access to private methods and instance variables. And it works properly even when you override the method being advised.
Please give it a try and let me know what you think.
p.s. If the sample code gives an error, it could be [a known bug in Ruby 1.8](http://github.com/raganwald/homoiconic/tree/master/2008-11-09/proc_arity.markdown "Proc#arity"). Try declaring your advice with an empty parameter list, e.g. `do || ... end`.
p.p.s. [A comment on implementing method advice](http://github.com/raganwald/homoiconic/tree/master/2008-11-07/comment_on_implementing_advice.markdown#readme).
_More on combinators_: [Kestrels](http://github.com/raganwald/homoiconic/tree/master/2008-10-29/kestrel.markdown#readme), [The Thrush](http://github.com/raganwald/homoiconic/tree/master/2008-10-30/thrush.markdown#readme), [Songs of the Cardinal](http://github.com/raganwald/homoiconic/tree/master/2008-10-31/songs_of_the_cardinal.markdown#readme), [Quirky Birds and Meta-Syntactic Programming](http://github.com/raganwald/homoiconic/tree/master/2008-11-04/quirky_birds_and_meta_syntactic_programming.markdown#readme), [Aspect-Oriented Programming in Ruby using Combinator Birds](http://github.com/raganwald/homoiconic/tree/master/2008-11-07/from_birds_that_compose_to_method_advice.markdown#readme), [The Enchaining and Obdurate Kestrels](http://github.com/raganwald/homoiconic/tree/master/2008-11-12/the_obdurate_kestrel.md#readme), [Finding Joy in Combinators](http://github.com/raganwald/homoiconic/tree/master/2008-11-16/joy.md#readme), [Refactoring Methods with Recursive Combinators](http://github.com/raganwald/homoiconic/tree/master/2008-11-23/recursive_combinators.md#readme), [Practical Recursive Combinators](http://github.com/raganwald/homoiconic/tree/master/2008-11-26/practical_recursive_combinators.md#readme), [The Hopelessly Egocentric Blog Post](http://github.com/raganwald/homoiconic/tree/master/2009-02-02/hopeless_egocentricity.md#readme), and [Wrapping Combinators](http://github.com/raganwald/homoiconic/tree/master/2009-06-29/wrapping_combinators.md#readme).
**(more)**
Follow [me](http://reginald.braythwayt.com) on [Twitter](http://twitter.com/raganwald). I work with [Unspace Interactive](http://unspace.ca), and I like it. | 59.301205 | 1,482 | 0.768793 | eng_Latn | 0.993742 |
8a769faa6d4277e5d77b90a9e67815898e1543c2 | 3,559 | md | Markdown | README.md | santiagopemo/platformer_videogame | 0eb82c44ff2f0dd8fafb1b4cf60fe34e11378342 | [
"CC-BY-3.0"
] | 1 | 2021-08-05T17:24:13.000Z | 2021-08-05T17:24:13.000Z | README.md | santiagopemo/platformer_videogame | 0eb82c44ff2f0dd8fafb1b4cf60fe34e11378342 | [
"CC-BY-3.0"
] | null | null | null | README.md | santiagopemo/platformer_videogame | 0eb82c44ff2f0dd8fafb1b4cf60fe34e11378342 | [
"CC-BY-3.0"
] | 1 | 2021-11-27T01:37:47.000Z | 2021-11-27T01:37:47.000Z | # Platformer Videogame
## Description
Platform video games, or simply platformers, are a genre of video games characterized by having to walk, run, jump or climb on a series of platforms and cliffs, with enemies, while collecting objects to complete the game. This repository contains the source code for a simple 3-level 3D platformer.
[Play It Now](https://santiagopemo.github.io/platformer_videogame/)
<p align="center"><img src="readme_images/platformer_level2.gif"></p>
### Attributions
The realization of this project was possible, thanks to the following resources:
* Kenney: https://kenney.nl/
* Oculus Audio Pack: https://developer.oculus.com/downloads/package/oculus-audio-pack-1/
* Mindful Audio: https://mindful-audio.com/
* “Wallpaper”, “Cheery Monday” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0
http://creativecommons.org/licenses/by/3.0/
## Installation
* If you want to edit the game:
* Download untity from its official site https://unity3d.com/es/get-unity/download
* Download Blender from official site https://www.blender.org/
* Clone this repository `git clone https://github.com/santiagopemo/platformer_videogame/`
* When you have opened the project in Unity go to file -> Build Settings -> Build
* If you only want to play, download any of the following desktop versions depending on your operating system and click on the executable
* **[Platformer WebGL](https://santiagopemo.github.io/platformer_videogame/)**
* **[Platformer Linux](https://drive.google.com/file/d/1AFmC0SUztxUnFXIXbTtAMPave7cSwd67/view?usp=sharing)**
* **[Platformer Windows](https://drive.google.com/file/d/16qK7EBQsbgkPI6Oj15_8A6D3nZWk19Em/view?usp=sharing)**
* **[Platformer Mac](https://drive.google.com/file/d/1x8ge4lPgs6VJ43tAdMkSnUDW-W075YFO/view?usp=sharing)**
## Usage
The game begins in the main menu, where you can select level **1**, **2**, or **3**, modify your settings in the **Options** menu, or **Exit** the game.
<p align="center"><img src="readme_images/main_menu.gif"></p>
In the Options menu you can change the volume of the background music, the volume of the sound effects, or invert the movement of the camera.
<p align="center"><img src="readme_images/options_menu.gif"></p>
Once the level has started you can move the character with the **arrow keys** or with the keys **A**, **W**, **S**, **D**, and jump with the **Space Bar**.
<p align="center"><img src="readme_images/movement.gif"></p>
You can also control the camera by holding right-click and moving the mouse.
<p align="center"><img src="readme_images/cam_movement.gif"></p>
While playing you can pause the game by pressing the **Esc** key; this will pop up a pause menu where you can choose any of the options.
<p align="center"><img width="500px" src="readme_images/pause_menu.PNG"></p>
To complete each level you must collide with the seahorse flag; this will display a menu showing the time it took to complete the level.
<p align="center"><img src="readme_images/win.gif"></p>
## Author :pencil:
### Santiago Peña Mosquera
Mechatronics engineer and software development student at Holberton School. I love building new things from scratch, which is why I am passionate about programming: starting from an empty sheet and turning it into a solution for real problems.
<a href="https://www.linkedin.com/in/santiago-pe%C3%B1a-mosquera-abaa20196/" target="_blank">LinkedIn</a>
<a href="https://twitter.com/santiagopemo" target="_blank">Twitter</a>
| 63.553571 | 300 | 0.750211 | eng_Latn | 0.958384 |
8a76ea17aae92d8ee626a1c8bb87d471e12565b3 | 9,636 | md | Markdown | docs/pipelines/tasks/build/dotnet-core-cli.md | EliotSeattle/vsts-docs | 2a86689abef3e26d1f0cdbb485a3575bd8c50c75 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-06-21T23:57:33.000Z | 2019-06-21T23:57:37.000Z | docs/pipelines/tasks/build/dotnet-core-cli.md | EliotSeattle/vsts-docs | 2a86689abef3e26d1f0cdbb485a3575bd8c50c75 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/pipelines/tasks/build/dotnet-core-cli.md | EliotSeattle/vsts-docs | 2a86689abef3e26d1f0cdbb485a3575bd8c50c75 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: .NET Core CLI task
ms.custom: seodec18
description: Build, test, package, or publish a dotnet application, or run a custom dotnet command. For package commands, supports NuGet.org and authenticated feeds like Package Management and MyGet.
ms.topic: reference
ms.prod: devops
ms.technology: devops-cicd
ms.assetid: 5541a522-603c-47ad-91fc-a4b1d163081b
ms.manager: jillfra
ms.author: puagarw
author: pulkitaggarwl
ms.date: 05/29/2019
monikerRange: 'azure-devops'
---
# .NET Core CLI task
**Azure Pipelines**
Use this task in a build or release pipeline to build, test, package, or publish a dotnet application, or to run a custom dotnet command. For package commands, this task supports NuGet.org and authenticated feeds like Package Management and MyGet.
## Task Inputs
<table><thead><tr><th>Parameters</th><th>Description</th></tr></thead>
<tr><td><code>command</code><br/>Command</td><td>(Required) The dotnet command to run. Select 'Custom' to add arguments or use a command not listed here <br/>Default value: build</td></tr>
<tr><td><code>publishWebProjects</code><br/>Publish Web Projects</td><td>(Required) If true, the task will try to find the web projects in the repository and run the publish command on them. Web projects are identified by presence of either a web.config file or wwwroot folder in the directory <br/>Default value: true</td></tr>
<tr><td><code>projects</code><br/>Path to project(s)</td><td>(Optional) The path to the csproj file(s) to use. You can use wildcards (e.g. **/.csproj for all .csproj files in all subfolders).</td></tr>
<tr><td><code>custom</code><br/>Custom command</td><td>(Required) The command to pass to dotnet.exe for execution</td></tr>
<tr><td><code>arguments</code><br/>Arguments</td><td>(Optional) Arguments to the selected command. For example, build configuration, output folder, runtime. The arguments depend on the command selected</td></tr>
<tr><td><code>publishTestResults</code><br/>Publish test results</td><td>(Optional) Enabling this option will generate a test results TRX file in `$(Agent.TempDirectory)` and results will be published to the server. <br>This option appends `--logger trx --results-directory $(Agent.TempDirectory)` to the command line arguments. <br/>Default value: true</td></tr>
<tr><td><code>testRunTitle</code><br/>Test run title</td><td>(Optional) Provide a name for the test run</td></tr>
<tr><td><code>zipAfterPublish</code><br/>Zip Published Projects</td><td>(Optional) If true, folder created by the publish command will be zipped. <br/>Default value: true</td></tr>
<tr><td><code>modifyOutputPath</code><br/>Add project name to publish path</td><td>(Optional) If true, folders created by the publish command will have project file name prefixed to their folder names when output path is specified explicitly in arguments. This is useful if you want to publish multiple projects to the same folder. <br/>Default value: true</td></tr>
<tr><td><code>selectOrConfig</code><br/>Feeds to use</td><td>(Required) You can either select a feed from Azure Artifacts and/or NuGet.org here, or commit a nuget.config file to your source code repository and set its path here. <br/>Default value: select</td></tr>
<tr><td><code>feedRestore</code><br/>Use packages from this Azure Artifacts/TFS feed</td><td>(Optional) Include the selected feed in the generated NuGet.config. You must have Package Management installed and licensed to select a feed here.</td></tr>
<tr><td><code>includeNuGetOrg</code><br/>Use packages from NuGet.org</td><td>(Optional) Include NuGet.org in the generated NuGet.config. <br/>Default value: true</td></tr>
<tr><td><code>nugetConfigPath</code><br/>Path to NuGet.config</td><td>(Optional) The NuGet.config in your repository that specifies the feeds from which to restore packages.</td></tr>
<tr><td><code>externalEndpoints</code><br/>Credentials for feeds outside this organization/collection</td><td>(Optional) Credentials to use for external registries located in the selected NuGet.config. For feeds in this organization/collection, leave this blank; the build’s credentials are used automatically</td></tr>
<tr><td><code>noCache</code><br/>Disable local cache</td><td>(Optional) Prevents NuGet from using packages from local machine caches <br/>Default value: false</td></tr>
<tr><td><code>packagesDirectory</code><br/>Destination directory</td><td>(Optional) Specifies the folder in which packages are installed. If no folder is specified, packages are restored into the default NuGet package cache</td></tr>
<tr><td><code>verbosityRestore</code><br/>Verbosity</td><td>(Optional) Specifies the amount of detail displayed in the output <br/>Default value: Detailed</td></tr>
<tr><td><code>searchPatternPush</code><br/>Path to NuGet package(s) to publish</td><td>(Required) The pattern to match or path to nupkg files to be uploaded. Multiple patterns can be separated by a semicolon, and you can make a pattern negative by prefixing it with '-:'. Example: **/*.nupkg;-:**/*.Tests.nupkg <br/>Default value: $(Build.ArtifactStagingDirectory)/*.nupkg
</td></tr>
<tr><td><code>nuGetFeedType</code><br/>Target feed location</td><td>(Required) Specifies whether the target feed is internal to this organization/collection or an external NuGet server <br/>Default value: internal</td></tr>
<tr><td><code>feedPublish</code><br/>Target feed</td><td>(Required) Select a feed hosted in this organization. You must have Package Management installed and licensed to select a feed here</td></tr>
<tr><td><code>publishPackageMetadata</code><br/>Publish pipeline metadata</td><td>Associate this build/release pipeline’s metadata (run #, source code information) with the package <br/>Default value: true</td></tr>
<tr><td><code>externalEndpoint</code><br/>NuGet server</td><td>(Required) The NuGet service connection that contains the external NuGet server’s credentials.</td></tr>
<tr><td><code>searchPatternPack</code><br/>Path to csproj or nuspec file(s) to pack</td><td>(Required) Pattern to search for csproj or nuspec files to pack.
You can separate multiple patterns with a semicolon, and you can make a pattern negative by prefixing it with '-:'. Example: **/*.csproj;-:**/*.Tests.csproj <br/>Default value: **/*.csproj</td></tr>
<tr><td><code>configurationToPack</code><br/>Configuration to Package</td><td>(Optional) When using a csproj file this specifies the configuration to package <br/>Default value: $(BuildConfiguration)</td></tr>
<tr><td><code>outputDir</code><br/>Package Folder</td><td>(Optional) Folder where packages will be created. If empty, packages will be created alongside the csproj file <br/>Default value: $(Build.ArtifactStagingDirectory)</td></tr>
<tr><td><code>nobuild</code><br/>Do not build</td><td>(Optional) Don't build the project before packing. Corresponds to the --no-build command line parameter.</td></tr>
<tr><td><code>includesymbols</code><br/>Include Symbols</td><td>(Optional) Additionally creates symbol NuGet packages. Corresponds to the --include-symbols command line parameter <br/>Default value: false</td></tr>
<tr><td><code>includesource</code><br/>Include Source</td><td>(Optional) Includes source code in the package. Corresponds to the --include-source command line parameter <br/>Default value: false</td></tr>
<tr><td><code>versioningScheme</code><br/>Automatic package versioning</td><td>(Required) Cannot be used with include referenced projects. If you choose 'Use the date and time', this will generate a [SemVer](http://semver.org/spec/v1.0.0.html)-compliant version formatted as `X.Y.Z-ci-datetime` where you choose X, Y, and Z.
If you choose 'Use an environment variable', you must select an environment variable and ensure it contains the version number you want to use.
If you choose 'Use the build number', this will use the build number to version your package. **Note:** Under Options set the build number format to be '[$(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)](https://go.microsoft.com/fwlink/?LinkID=627416)' <br/>Default value: off</td></tr>
<tr><td><code>versionEnvVar</code><br/>Environment variable</td><td>(Required) Enter the variable name without $, $env, or %</td></tr>
<tr><td><code>requestedMajorVersion</code><br/>Major</td><td>(Required) The 'X' in version [X.Y.Z](http://semver.org/spec/v1.0.0.html) <br/>Default value: 1</td></tr>
<tr><td><code>requestedMinorVersion</code><br/>Minor</td><td>(Required) The 'Y' in version [X.Y.Z](http://semver.org/spec/v1.0.0.html) <br/>Default value: 0</td></tr>
<tr><td><code>requestedPatchVersion</code><br/>Patch</td><td>(Required) The 'Z' in version [X.Y.Z](http://semver.org/spec/v1.0.0.html) <br/>Default value: 0</td></tr>
<tr><td><code>buildProperties</code><br/>Additional build properties</td><td>(Optional) Specifies a list of token = value pairs, separated by semicolons, where each occurrence of $token$ in the .nuspec file will be replaced with the given value. Values can be strings in quotation marks</td></tr>
<tr><td><code>verbosityPack</code><br/>Verbosity</td><td>(Optional) Specifies the amount of detail displayed in the output <br/>Default value: Detailed</td></tr>
<tr><td><code>workingDirectory</code><br/>Working Directory</td><td>(Optional) Current working directory where the script is run. Empty is the root of the repo (build) or artifacts (release), which is $(System.DefaultWorkingDirectory)</td></tr>
</table>
## Example
The following example builds a project.
```YAML
steps:
- task: DotNetCoreCLI@2
inputs:
command: build
```
## Open source
This task is open source [on GitHub](https://github.com/Microsoft/azure-pipelines-tasks). Feedback and contributions are welcome. | 116.096386 | 396 | 0.747198 | eng_Latn | 0.930509 |
8a774d8d459c7f13c861aa3dd8eac6fa280f3ab9 | 2,489 | md | Markdown | help/xdm/ui/fields/enum.md | AdobeDocs/experience-platform.de-DE | a7f35357257b57fa5674cc0b82e2668f516c7394 | [
"MIT"
] | null | null | null | help/xdm/ui/fields/enum.md | AdobeDocs/experience-platform.de-DE | a7f35357257b57fa5674cc0b82e2668f516c7394 | [
"MIT"
] | null | null | null | help/xdm/ui/fields/enum.md | AdobeDocs/experience-platform.de-DE | a7f35357257b57fa5674cc0b82e2668f516c7394 | [
"MIT"
] | null | null | null | ---
keywords: Experience Platform; Startseite; beliebte Themen; API; XDM; XDM; XDM-System; Experience-Datenmodell; Datenmodell; ui; Workspace; Enum; Feld;
solution: Experience Platform
title: Definieren von Enum-Feldern in der Benutzeroberfläche
description: Erfahren Sie, wie Sie in der Experience Platform-Benutzeroberfläche ein Enum-Feld definieren.
topic-legacy: user guide
exl-id: 67ec5382-31de-4f8d-9618-e8919bb5a472
source-git-commit: 5d449c1ca174cafcca988e9487940eb7550bd5cf
workflow-type: tm+mt
source-wordcount: '251'
ht-degree: 0%
---
# Definieren von Aufzählungsfeldern in der Benutzeroberfläche
Im Experience-Datenmodell (XDM) stellt ein Enum-Feld ein Feld dar, das auf eine vordefinierte Liste zulässiger Werte beschränkt ist.
Wenn Sie [ein neues Feld](./overview.md#define) in der Adobe Experience Platform-Benutzeroberfläche definieren, können Sie es als Enum-Feld festlegen, indem Sie in der rechten Leiste das Kontrollkästchen **[!UICONTROL Enum]** aktivieren.

Nach Auswahl des Kontrollkästchens werden zusätzliche Steuerelemente angezeigt, mit denen Sie die Wertbegrenzungen für die Aufzählung festlegen können. Unter der Spalte **[!UICONTROL Wert]** müssen Sie den genauen Wert angeben, auf den Sie das Feld beschränken möchten. Dieser Wert muss dem [!UICONTROL Typ] entsprechen, den Sie für das Enum-Feld ausgewählt haben. Sie können optional auch eine benutzerfreundliche **[!UICONTROL Beschriftung]** für die Beschränkung angeben.
Um dem Enum zusätzliche Einschränkungen hinzuzufügen, wählen Sie **[!UICONTROL Zeile hinzufügen]** aus.

Fügen Sie der Enum weiterhin die gewünschten Einschränkungen und optionalen Beschriftungen hinzu. Wenn Sie fertig sind, wählen Sie **[!UICONTROL Anwenden]** aus, um die Änderungen auf das Schema anzuwenden.

Die Arbeitsfläche wird entsprechend den Änderungen aktualisiert. Wenn Sie dieses Schema zukünftig untersuchen, können Sie die Begrenzungen für das Enum-Feld in der rechten Leiste anzeigen und bearbeiten.

## Nächste Schritte
In diesem Handbuch wurde beschrieben, wie Sie in der Benutzeroberfläche ein Enum-Feld definieren. Informationen zum Definieren anderer XDM-Feldtypen im [!DNL Schema Editor] finden Sie in der Übersicht zu [Definieren von Feldern in der Benutzeroberfläche](./overview.md#special).
| 62.225 | 474 | 0.797911 | deu_Latn | 0.996264 |
8a7789e277a91e8babacb113a72bf142f71d87ba | 1,910 | md | Markdown | _listings/square/location-iditemsitem-idmodifierlistsmodifier-list-id-delete-postman.md | streamdata-gallery-organizations/square | c1747bc046c78736da21b89882be1d6af74ec4c2 | [
"CC-BY-3.0"
] | null | null | null | _listings/square/location-iditemsitem-idmodifierlistsmodifier-list-id-delete-postman.md | streamdata-gallery-organizations/square | c1747bc046c78736da21b89882be1d6af74ec4c2 | [
"CC-BY-3.0"
] | null | null | null | _listings/square/location-iditemsitem-idmodifierlistsmodifier-list-id-delete-postman.md | streamdata-gallery-organizations/square | c1747bc046c78736da21b89882be1d6af74ec4c2 | [
"CC-BY-3.0"
] | null | null | null | {
"info": {
"name": "Square Connect API Delete Location Items Item Modifier Lists Modifier List",
"_postman_id": "a1d7ec38-405a-4025-865a-c7a63023859c",
"description": "Removes a modifier list association from an item, meaning modifier options from the list can no longer be applied to the item.",
"schema": "https://schema.getpostman.com/json/collection/v2.0.0/"
},
"item": [
{
"name": "location",
"item": [
{
"id": "741a2681-5dea-440b-a489-c182ca1539e2",
"name": "deleteLocationItemsItemModifierListsModifierList",
"request": {
"url": {
"protocol": "http",
"host": "connect.squareup.com",
"path": [
"v1",
":location_id/items/:item_id/modifier-lists/:modifier_list_id"
],
"variable": [
{
"id": "item_id",
"value": "{}",
"type": "string"
},
{
"id": "location_id",
"value": "{}",
"type": "string"
},
{
"id": "modifier_list_id",
"value": "{}",
"type": "string"
}
]
},
"method": "DELETE",
"body": {
"mode": "raw"
},
"description": "Removes a modifier list association from an item, meaning modifier options from the list can no longer be applied to the item"
},
"response": [
{
"status": "OK",
"code": 200,
"name": "Response_200",
"id": "6981fa41-8940-4827-9540-ba3a383ae4d1"
}
]
}
]
}
]
} | 32.372881 | 155 | 0.412565 | eng_Latn | 0.412269 |
8a77c7edf0b2a20be968af28ee6ce0905bc3c296 | 6,203 | md | Markdown | docs-archive-a/2014/reporting-services/report-server/register-a-service-principal-name-spn-for-a-report-server.md | v-alji/sql-docs-archive-pr.es-es | 410a49b0a08c22fd4bc973078b563238d69c8b44 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-25T21:09:51.000Z | 2021-11-25T21:09:51.000Z | docs-archive-a/2014/reporting-services/report-server/register-a-service-principal-name-spn-for-a-report-server.md | v-alji/sql-docs-archive-pr.es-es | 410a49b0a08c22fd4bc973078b563238d69c8b44 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-25T02:22:05.000Z | 2021-11-25T02:27:15.000Z | docs-archive-a/2014/reporting-services/report-server/register-a-service-principal-name-spn-for-a-report-server.md | v-alji/sql-docs-archive-pr.es-es | 410a49b0a08c22fd4bc973078b563238d69c8b44 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-29T08:53:04.000Z | 2021-09-29T08:53:04.000Z | ---
title: Registrar un nombre de entidad de seguridad de servicio (SPN) para un servidor de informes | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: reporting-services-native
ms.topic: conceptual
ms.assetid: dda91d4f-77cc-4898-ad03-810ece5f8e74
author: maggiesMSFT
ms.author: maggies
manager: kfile
ms.openlocfilehash: 502170f16aad66757e8f44419ccbac3017072449
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 08/04/2020
ms.locfileid: "87677426"
---
# <a name="register-a-service-principal-name-spn-for-a-report-server"></a>Registrar un nombre principal de servicio (SPN) para un servidor de informes
Si está implementando [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] en una red que usa el protocolo Kerberos para la autenticación mutua, debe crear un nombre principal de servicio (SPN) para el servicio Servidor de informes si lo configura para que se ejecute como una cuenta de usuario de dominio.
## <a name="about-spns"></a>Acerca de los nombres principales de servicio
Un SPN es un identificador único para un servicio en una red que utiliza la autenticación Kerberos. Está compuesto de una clase de servicio, un nombre de host y un puerto. En una red que utiliza la autenticación Kerberos, un SPN para el servidor se debe registrar en una cuenta de equipo integrada (como NetworkService o LocalSystem) o en una cuenta de usuario. Los SPN se registran automáticamente para las cuentas integradas. Sin embargo, al ejecutar un servicio en una cuenta de usuario de dominio, debe registrar manualmente el SPN para la cuenta que desea utilizar.
Para crear un SPN, puede emplear la utilidad de línea de comandos **SetSPN** . Para obtener más información, vea lo siguiente:
- [Setspn](https://technet.microsoft.com/library/cc731241\(WS.10\).aspx) ( https://technet.microsoft.com/library/cc731241(WS.10).aspx) .
- [Sintaxis de los nombres de entidad de seguridad de servicio (SPN) setspn (Setspn.exe)](https://social.technet.microsoft.com/wiki/contents/articles/717.service-principal-names-spns-setspn-syntax-setspn-exe.aspx) ( https://social.technet.microsoft.com/wiki/contents/articles/717.service-principal-names-spns-setspn-syntax-setspn-exe.aspx) .
Debe ser administrador de dominio si desea ejecutar la utilidad en el controlador de dominio.
## <a name="syntax"></a>Sintaxis
La sintaxis de comandos para utilizar la utilidad SetSPN con el fin de crear un SPN para el servidor de informes es similar a la siguiente:
```
Setspn -s http/<computername>.<domainname>:<port> <domain-user-account>
```
**SetSPN** está disponible con Windows Server. El argumento `-s` agrega un SPN después de validar que no existe ningún duplicado. **NOTA: -s** está disponible en Windows Server a partir de Windows Server 2008.
`HTTP` es la clase de servicio. El servicio web del servidor de informes se ejecuta en HTTP.SYS. Una consecuencia de la creación de un SPN para HTTP es que a todas las aplicaciones web del mismo equipo que se ejecutan en HTTP.SYS (incluidas las que se hospedan en IIS) se les concederán vales en función de la cuenta de usuario de dominio. Si esos servicios se ejecutan en una cuenta diferente, se producirá un error en las solicitudes de autenticación. Para evitar este problema, asegúrese de configurar todas las aplicaciones HTTP para ejecutarse en la misma cuenta, o considere la posibilidad de crear los encabezados de host para cada aplicación y crear después SPN independientes para cada encabezado de host. Al configurar los encabezados de host, se requieren cambios de DNS con independencia de la configuración de [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] .
Los valores que se especifican para \<*computername*> , \<*domainname*> e \<*port*> identifican la dirección de red única del equipo que hospeda el servidor de informes. Puede ser un nombre de host local o un nombre de dominio completo (FQDN). Si solo tiene un dominio y usa el puerto 80, puede omitir \<*domainname*> y \<*port*> desde la línea de comandos. \<*domain-user-account*>es la cuenta de usuario con la que se ejecuta el servicio del servidor de informes y para la que se debe registrar el SPN.
## <a name="register-an-spn-for-domain-user-account"></a>Registrar un nombre principal de servicio para la cuenta de usuario de dominio
#### <a name="to-register-an-spn-for-a-report-server-service-running-as-a-domain-user"></a>Para registrar un SPN para un servicio Servidor de informes que se ejecute como un usuario de dominio
1. Instale [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] y configure el servicio Servidor de informes para ejecutarse como una cuenta de usuario de dominio. Observe que los usuarios no podrán conectarse al servidor de informes hasta que complete los pasos siguientes.
2. Inicie sesión en el controlador de dominio como administrador de dominio.
3. Abra una ventana de símbolo del sistema.
4. Copie el comando siguiente, reemplazando los valores de los marcadores de posición con valores reales que sean válidos para su red:
```
Setspn -s http/<computer-name>.<domain-name>:<port> <domain-user-account>
```
Por ejemplo: `Setspn -s http/MyReportServer.MyDomain.com:80 MyDomainUser`
5. Ejecute el comando.
6. Abra el archivo **RsReportServer.config** y busque la sección `<AuthenticationTypes>` .
7. Agregue `<RSWindowsNegotiate/>` como primera entrada en esta sección para habilitar NTLM.
## <a name="see-also"></a>Consulte también
[Configurar una cuenta de servicio (SSRS Configuration Manager)](../../sql-server/install/configure-a-service-account-ssrs-configuration-manager.md)
[Configurar la cuenta de servicio del servidor de informes (Administrador de configuración de SSRS)](../install-windows/configure-the-report-server-service-account-ssrs-configuration-manager.md)
[Administración de un servidor de informes en modo nativo de Reporting Services](manage-a-reporting-services-native-mode-report-server.md)
| 80.558442 | 889 | 0.765436 | spa_Latn | 0.969146 |
8a796692fac8d29fb35905fe9e2d53ef47b1e444 | 15,720 | md | Markdown | docs/t-sql/queries/select-for-clause-transact-sql.md | peterkarman1/sql-docs | 7569551402944b31cf4d0059f7793903f9546722 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-03T02:37:57.000Z | 2020-01-03T02:37:57.000Z | docs/t-sql/queries/select-for-clause-transact-sql.md | peterkarman1/sql-docs | 7569551402944b31cf4d0059f7793903f9546722 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/queries/select-for-clause-transact-sql.md | peterkarman1/sql-docs | 7569551402944b31cf4d0059f7793903f9546722 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "FOR Clause (Transact-SQL) | Microsoft Docs"
ms.custom: ""
ms.date: "08/09/2017"
ms.prod: sql
ms.prod_service: "database-engine, sql-database"
ms.reviewer: ""
ms.technology: t-sql
ms.topic: "language-reference"
f1_keywords:
- "FOR"
- "FOR CLAUSE"
- "FOR_TSQL"
- "FOR_CLAUSE_TSQL"
dev_langs:
- "TSQL"
helpviewer_keywords:
- "XML option [SQL Server]"
- "BROWSE option"
- "FOR clause [Transact-SQL]"
ms.assetid: 08a6f084-8f73-4f2a-bae4-3c7513dc99b9
author: "douglaslMS"
ms.author: "douglasl"
manager: craigg
---
# SELECT - FOR Clause (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-asdb-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-asdb-xxxx-xxx-md.md)]
Use the FOR clause to specify one of the following options for query results.
- Allow updates while viewing query results in a browse mode cursor by specifying **FOR BROWSE**.
- Format query results as XML by specifying **FOR XML**.
- Format query results as JSON by specifying **FOR JSON**.
 [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## Syntax
```
[ FOR { BROWSE | <XML> | <JSON>} ]
<XML> ::=
XML
{
{ RAW [ ( 'ElementName' ) ] | AUTO }
[
<CommonDirectivesForXML>
[ , { XMLDATA | XMLSCHEMA [ ( 'TargetNameSpaceURI' ) ] } ]
[ , ELEMENTS [ XSINIL | ABSENT ]
]
| EXPLICIT
[
<CommonDirectivesForXML>
[ , XMLDATA ]
]
| PATH [ ( 'ElementName' ) ]
[
<CommonDirectivesForXML>
[ , ELEMENTS [ XSINIL | ABSENT ] ]
]
}
<CommonDirectivesForXML> ::=
[ , BINARY BASE64 ]
[ , TYPE ]
[ , ROOT [ ( 'RootName' ) ] ]
<JSON> ::=
JSON
{
{ AUTO | PATH }
[
[ , ROOT [ ( 'RootName' ) ] ]
[ , INCLUDE_NULL_VALUES ]
[ , WITHOUT_ARRAY_WRAPPER ]
]
}
```
## FOR BROWSE
BROWSE
Specifies that updates be allowed while viewing the data in a DB-Library browse mode cursor. A table can be browsed in an application if the table includes a **timestamp** column, the table has a unique index, and the FOR BROWSE option is at the end of the SELECT statements sent to an instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].
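 For example, a minimal browse-mode query might look like the following sketch. The `dbo.Orders` table and its columns are hypothetical; the sketch assumes the table has a **timestamp** column and a unique index, as browse mode requires.
```
-- dbo.Orders is a hypothetical table, used only for illustration
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
FOR BROWSE ;
```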
> [!NOTE]
> You cannot use the \<lock_hint> HOLDLOCK in a SELECT statement that includes the FOR BROWSE option.
FOR BROWSE cannot appear in SELECT statements that are joined by the UNION operator.
> [!NOTE]
> When the unique index key columns of a table are nullable, and the table is on the inner side of an outer join, the index is not supported by browse mode.
The browse mode lets you scan the rows in your [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] table and update the data in your table one row at a time. To access a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] table in your application in the browse mode, you must use one of the following two options:
- The SELECT statement that you use to access the data from your [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] table must end with the keywords **FOR BROWSE**. When you turn on the **FOR BROWSE** option to use browse mode, temporary tables are created.
- You must run the following [!INCLUDE[tsql](../../includes/tsql-md.md)] statement to turn on the browse mode by using the **NO_BROWSETABLE** option:
```
SET NO_BROWSETABLE ON
```
When you turn on the **NO_BROWSETABLE** option, all the SELECT statements behave as if the **FOR BROWSE** option is appended to the statements. However, the **NO_BROWSETABLE** option does not create the temporary tables that the **FOR BROWSE** option generally uses to send the results to your application.
When you try to access the data from [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] tables in browse mode by using a SELECT query that involves an outer join statement, and when a unique index is defined on the table that is present on the inner side of an outer join statement, the browse mode does not support the unique index. The browse mode supports the unique index only when all the unique index key columns can accept null values. The browse mode does not support the unique index if the following conditions are true:
- You try to access the data from [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] tables in browse mode by using a SELECT query that involves an outer join statement.
- A unique index is defined on the table that is present on the inner side of an outer join statement.
To reproduce this behavior in the browse mode, follow these steps:
1. In [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)], create a database, named SampleDB.
2. In the SampleDB database, create a tleft table and a tright table that both contain a single column that is named c1. Define a unique index on the c1 column in the tleft table, and set the column to accept null values. To do this, run the following [!INCLUDE[tsql](../../includes/tsql-md.md)] statements in an appropriate query window:
```
CREATE TABLE tleft(c1 INT NULL UNIQUE) ;
GO
CREATE TABLE tright(c1 INT NULL) ;
GO
```
3. Insert several values in the tleft table and the tright table. Make sure that you insert a null value in the tleft table. To do this, run the following [!INCLUDE[tsql](../../includes/tsql-md.md)] statements in the query window:
```
INSERT INTO tleft VALUES(2) ;
INSERT INTO tleft VALUES(NULL) ;
INSERT INTO tright VALUES(1) ;
INSERT INTO tright VALUES(3) ;
INSERT INTO tright VALUES(NULL) ;
GO
```
4. Turn on the **NO_BROWSETABLE** option. To do this, run the following [!INCLUDE[tsql](../../includes/tsql-md.md)] statements in the query window:
```
SET NO_BROWSETABLE ON ;
GO
```
5. Access the data in the tleft table and the tright table by using an outer join statement in the SELECT query. Make sure that the tleft table is on the inner side of the outer join statement. To do this, run the following [!INCLUDE[tsql](../../includes/tsql-md.md)] statements in the query window:
```
SELECT tleft.c1
FROM tleft
RIGHT JOIN tright
ON tleft.c1 = tright.c1
WHERE tright.c1 <> 2 ;
```
Notice the following output in the Results pane:
c1
---\-
NULL
NULL
After you run the SELECT query to access the tables in the browse mode, the result set of the SELECT query contains two null values for the c1 column in the tleft table because of the definition of the right outer join statement. Therefore, in the result set, you cannot distinguish between the null values that came from the table and the null values that the right outer join statement introduced. You might receive incorrect results if you must ignore the null values from the result set.
> [!NOTE]
> If the columns that are included in the unique index do not accept null values, all the null values in the result set were introduced by the right outer join statement.
## FOR XML
XML
Specifies that the results of a query are to be returned as an XML document. One of the following XML modes must be specified: RAW, AUTO, EXPLICIT. For more information about XML data and [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], see [FOR XML (SQL Server)](../../relational-databases/xml/for-xml-sql-server.md).
RAW [ **('***ElementName***')** ]
Takes the query result and transforms each row in the result set into an XML element with a generic identifier \<row /> as the element tag. You can optionally specify a name for the row element. The resulting XML output uses the specified *ElementName* as the row element generated for each row. For more information, see [Use RAW Mode with FOR XML](../../relational-databases/xml/use-raw-mode-with-for-xml.md).
AUTO
Returns query results in a simple, nested XML tree. Each table in the FROM clause, for which at least one column is listed in the SELECT clause, is represented as an XML element. The columns listed in the SELECT clause are mapped to the appropriate element attributes. For more information, see [Use AUTO Mode with FOR XML](../../relational-databases/xml/use-auto-mode-with-for-xml.md).
EXPLICIT
Specifies that the shape of the resulting XML tree is defined explicitly. Using this mode, queries must be written in a particular way so that additional information about the desired nesting is specified explicitly. For more information, see [Use EXPLICIT Mode with FOR XML](../../relational-databases/xml/use-explicit-mode-with-for-xml.md).
XMLDATA
Returns inline XDR schema, but does not add the root element to the result. If XMLDATA is specified, XDR schema is appended to the document.
> [!IMPORTANT]
> The XMLDATA directive is deprecated. Use XSD generation in the case of RAW and AUTO modes. There is no replacement for the XMLDATA directive in EXPLICIT mode. [!INCLUDE[ssNoteDepFutureAvoid](../../includes/ssnotedepfutureavoid-md.md)]
XMLSCHEMA [ **('***TargetNameSpaceURI***')** ]
Returns inline XSD schema. You can optionally specify a target namespace URI when you specify this directive, which returns the specified namespace in the schema. For more information, see [Generate an Inline XSD Schema](../../relational-databases/xml/generate-an-inline-xsd-schema.md).
ELEMENTS
Specifies that the columns are returned as subelements. Otherwise, they are mapped to XML attributes. This option is supported in RAW, AUTO and PATH modes only. For more information, see [Use RAW Mode with FOR XML](../../relational-databases/xml/use-raw-mode-with-for-xml.md).
XSINIL
Specifies that an element with **xsi:nil** attribute set to **True** be created for NULL column values. This option can only be specified with ELEMENTS directive. For more information, see [Generate Elements for NULL Values with the XSINIL Parameter](../../relational-databases/xml/generate-elements-for-null-values-with-the-xsinil-parameter.md).
ABSENT
Indicates that for null column values, corresponding XML elements will not be added in the XML result. Specify this option only with ELEMENTS.
PATH [ **('***ElementName***')** ]
Generates a \<row> element wrapper for each row in the result set. You can optionally specify an element name for the \<row> element wrapper. If an empty string is provided, such as FOR XML PATH (**''**) ), a wrapper element is not generated. Using PATH may provide a simpler alternative to queries written using the EXPLICIT directive. For more information, see [Use PATH Mode with FOR XML](../../relational-databases/xml/use-path-mode-with-for-xml.md).
BINARY BASE64
Specifies that the query returns the binary data in binary base64-encoded format. When you retrieve binary data by using RAW and EXPLICIT mode, this option must be specified. This is the default in AUTO mode.
TYPE
Specifies that the query returns results as **xml** type. For more information, see [TYPE Directive in FOR XML Queries](../../relational-databases/xml/type-directive-in-for-xml-queries.md).
ROOT [ **('***RootName***')** ]
Specifies that a single top-level element be added to the resulting XML. You can optionally specify the root element name to generate. If the optional root name is not specified, the default \<root> element is added.
For more info, see [FOR XML (SQL Server)](../../relational-databases/xml/for-xml-sql-server.md).
**FOR XML Example**
The following example specifies `FOR XML AUTO` with the `TYPE` and `XMLSCHEMA` options. Because of the `TYPE` option, the result set is returned to the client as an **xml** type. The `XMLSCHEMA` option specifies that the inline XSD schema is included in the XML data returned, and the `ELEMENTS` option specifies that the XML result is element-centric.
```
USE AdventureWorks2012;
GO
SELECT p.BusinessEntityID, FirstName, LastName, PhoneNumber AS Phone
FROM Person.Person AS p
JOIN Person.PersonPhone AS pph ON p.BusinessEntityID = pph.BusinessEntityID
WHERE LastName LIKE 'G%'
ORDER BY LastName, FirstName
FOR XML AUTO, TYPE, XMLSCHEMA, ELEMENTS XSINIL;
```
## FOR JSON
JSON
Specify FOR JSON to return the results of a query formatted as JSON text. You also have to specify one of the following JSON modes : AUTO or PATH. For more information about the **FOR JSON** clause, see [Format Query Results as JSON with FOR JSON (SQL Server)](../../relational-databases/json/format-query-results-as-json-with-for-json-sql-server.md).
AUTO
Format the JSON output automatically based on the structure of the **SELECT** statement
by specifying **FOR JSON AUTO**. For more info and examples, see [Format JSON Output Automatically with AUTO Mode (SQL Server)](../../relational-databases/json/format-json-output-automatically-with-auto-mode-sql-server.md).
PATH
Get full control over the format of the JSON output by specifying
**FOR JSON PATH**. **PATH** mode lets you create wrapper objects and nest complex properties. For more info and examples, see [Format Nested JSON Output with PATH Mode (SQL Server)](../../relational-databases/json/format-nested-json-output-with-path-mode-sql-server.md).
INCLUDE_NULL_VALUES
Include null values in the JSON output by specifying the **INCLUDE_NULL_VALUES** option with the **FOR JSON** clause. If you don't specify this option, the output does not include JSON properties for null values in the query results. For more info and examples, see [Include Null Values in JSON Output with the INCLUDE_NULL_VALUES Option (SQL Server)](../../relational-databases/json/include-null-values-in-json-include-null-values-option.md).
ROOT [ **('***RootName***')** ]
Add a single, top-level element to the JSON output by specifying the **ROOT** option with the **FOR JSON** clause. If you don't specify the **ROOT** option, the JSON output doesn't have a root element. For more info and examples, see [Add a Root Node to JSON Output with the ROOT Option (SQL Server)](../../relational-databases/json/add-a-root-node-to-json-output-with-the-root-option-sql-server.md).
WITHOUT_ARRAY_WRAPPER
Remove the square brackets that surround the JSON output by default by specifying the **WITHOUT_ARRAY_WRAPPER** option with the **FOR JSON** clause. If you don't specify this option, the JSON output is enclosed within square brackets. Use the **WITHOUT_ARRAY_WRAPPER** option to generate a single JSON object as output. For more info, see [Remove Square Brackets from JSON Output with the WITHOUT_ARRAY_WRAPPER Option (SQL Server)](../../relational-databases/json/remove-square-brackets-from-json-without-array-wrapper-option.md).
For more info, see [Format Query Results as JSON with FOR JSON (SQL Server)](../../relational-databases/json/format-query-results-as-json-with-for-json-sql-server.md).
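**FOR JSON Example**
The following is a minimal sketch of a `FOR JSON` query combining a few of the options described above. It reuses the `AdventureWorks2012` tables from the FOR XML example; the `ROOT` and `INCLUDE_NULL_VALUES` options are optional and shown only for illustration.
```
USE AdventureWorks2012;
GO
SELECT p.BusinessEntityID, FirstName, LastName, PhoneNumber AS Phone
FROM Person.Person AS p
JOIN Person.PersonPhone AS pph ON p.BusinessEntityID = pph.BusinessEntityID
WHERE LastName LIKE 'G%'
ORDER BY LastName, FirstName
FOR JSON PATH, ROOT('People'), INCLUDE_NULL_VALUES;
```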
## See Also
[SELECT (Transact-SQL)](../../t-sql/queries/select-transact-sql.md)
| 60.930233 | 543 | 0.698664 | eng_Latn | 0.961085 |
8a7975bb5c91dedcb23b2c0a0eb2dc333a419e37 | 2,857 | md | Markdown | README.md | mhdawson/HomeAlarm | 0ff9c28d16142d3f157e48c0e88196391251c137 | [
"MIT"
] | 25 | 2015-09-11T13:09:29.000Z | 2020-03-04T23:19:43.000Z | README.md | mhdawson/HomeAlarm | 0ff9c28d16142d3f157e48c0e88196391251c137 | [
"MIT"
] | null | null | null | README.md | mhdawson/HomeAlarm | 0ff9c28d16142d3f157e48c0e88196391251c137 | [
"MIT"
] | 4 | 2017-02-21T22:24:20.000Z | 2019-01-23T01:42:52.000Z | # MQTT/Node base home alarm system - OBSOLETE
OBSOLETE, OBSOLETE, OBSOLETE
Replaced by: https://github.com/mhdawson/micro-app-home-alarm
The same home alarm functionality but based on the micro-app-framework
so you can also get the look and feel of native desktop and
mobile applications
----------------------------------------------------------
This project provides a home-based alarm system using Node and
MQTT. It provides a GUI that allows you to:
* arm/disarm the alarm
* see the status of the zones
* ask that a picture be taken
* view the last 4 pictures taken
* view pictures from multiple cameras
* view the log of alarm events
When the alarm is triggered it will take pictures every 10 second for 5 minutes, pushing them to a remote webserver.
It can also be configured to send SMS messages to notify the owner that an alarm has occurred.

The following projects can be used to connect sensors such as
motion detectors, door contacts and webcams.
* [PI433WirelessRecvManager](https://github.com/mhdawson/PI433WirelessRecvManager)
* [PIWebcamServer](https://github.com/mhdawson/PIWebcamServer)
Additional views when alarm is active and triggered


View when using GUI to display pictures taken by camera

The server requires Node with the following modules installed:
* basic-auth
* mqtt
* twilio
* websocket
It also requires:
* an mqtt server
* a remote webserver to serve up the pictures taken
* a Twilio account (if you want SMS notifications)
Most configuration is done in the config.txt file, which supports the following configuration options (a sample file is shown after the list):
* alarmSite=\<Name assigned to this instance of the alarm\>
* username=\<username that must be provided to access the alarm\>
* password=\<password that must be provided to access the alarm\>
* port=\<port on which the alarm GUI is served\>
* mqttServerIP=\<IP of mqtt server\>
* mqttRootTopic=\<root topic under which events are published/subscribed\>
* zone=\<topic for sensor\>:\<zone\>:\<name assigned to zone\> (one or more of these)
* twilioAccountSID=\<Twilio account SID for SMS notifications\>
* twilioAccountAuthToken=\<authentication token for the Twilio account\>
* twilioFromNumber=\<Twilio from number\>
* twilioToNumber=\<Twilio to number\>
* cameraTopic=\<topic to which to publish requests that a picture be taken\>
* eventLogPrefix=\<directory in which log for alarm will be written\>
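For illustration, a minimal config.txt might look like the following. All values here are made-up examples (topics, credentials and numbers are placeholders, not defaults):
```
alarmSite=cottage
username=alarmuser
password=alarmpassword
port=3000
mqttServerIP=10.0.0.20
mqttRootTopic=house
zone=house/sensor/A12B34:zone1:front door
zone=house/sensor/C56D78:zone2:garage motion
twilioAccountSID=ACXXXXXXXXXXXXXXXX
twilioAccountAuthToken=XXXXXXXXXXXXXXXX
twilioFromNumber=+15555550100
twilioToNumber=+15555550199
cameraTopic=house/camera
eventLogPrefix=/home/alarmuser/logs
```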
## TODOs
- Add more doc on how to configure, setup and run, including the required mqtt server
- Add support for day/night sensor
- Mobile app for gui would be nice.
| 36.628205 | 116 | 0.763738 | eng_Latn | 0.993226 |
8a7997e6e8c9f514b116917dc3eaaf913d6eaf03 | 3,243 | md | Markdown | docs/vs-2015/extensibility/internals/managing-configuration-options.md | IgorRozani/visualstudio-docs.pt-br | 7b46a758c4a0ba4a00d5b63332f235ee227a0042 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/internals/managing-configuration-options.md | IgorRozani/visualstudio-docs.pt-br | 7b46a758c4a0ba4a00d5b63332f235ee227a0042 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/internals/managing-configuration-options.md | IgorRozani/visualstudio-docs.pt-br | 7b46a758c4a0ba4a00d5b63332f235ee227a0042 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Managing Configuration Options | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- configuration options
ms.assetid: 596c28ee-f48d-4252-a5c4-f730c43a39e6
caps.latest.revision: 13
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: b114948ad662b9c027e208609dc1e48a6bec8979
ms.sourcegitcommit: 9ceaf69568d61023868ced59108ae4dd46f720ab
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/12/2018
ms.locfileid: "49243224"
---
# <a name="managing-configuration-options"></a>Gerenciando opções de configuração
[!INCLUDE[vs2017banner](../../includes/vs2017banner.md)]
Quando você cria um novo tipo de projeto, você deve gerenciar definições de configuração do projeto e solução que determinam como o projeto será compilado, empacotado, implantados e execução. Os tópicos a seguir abordam a configuração de projeto e solução.
## <a name="in-this-section"></a>Nesta seção
[Visão geral](../../extensibility/internals/configuration-options-overview.md)
Descreve como projetos em [!INCLUDE[vsprvs](../../includes/vsprvs-md.md)] pode dar suporte a várias configurações.
[Páginas de propriedade](../../extensibility/internals/property-pages.md)
Explica o que os usuários podem exibir e alterar propriedades de configuração de projeto dependentes e independentes propriedades por meio de páginas de propriedade.
[Configuração da solução](../../extensibility/internals/solution-configuration.md)
Fornece informações sobre o que é armazenado em configurações da solução e como as configurações da solução direcionam o comportamento do **inicie** e **Build** comandos.
[Objeto de configuração do projeto](../../extensibility/internals/project-configuration-object.md)
Explica como o objeto de configuração do projeto gerencia a exibição de informações de configuração para a interface do usuário.
[Configuração do projeto para compilação](../../extensibility/internals/project-configuration-for-building.md)
Explica como uma lista de configurações da solução para uma determinada solução é gerenciada pelo **configurações da solução** caixa de diálogo.
[Configuração do projeto para gerenciar a implantação](../../extensibility/internals/project-configuration-for-managing-deployment.md)
Define a ação de implantação e as duas formas de [!INCLUDE[vsprvs](../../includes/vsprvs-md.md)] dá suporte a projetos que dão suporte à implantação.
[Configuração do projeto para saída](../../extensibility/internals/project-configuration-for-output.md)
Explica os processos de compilação que podem dar suporte a todas as configurações e as interfaces e métodos por qual saída itens podem ser disponibilizados.
## <a name="related-sections"></a>Seções relacionadas
[Tipos de projeto](../../extensibility/internals/project-types.md)
Fornece uma visão geral dos projetos como blocos de construção básicos do [!INCLUDE[vsprvs](../../includes/vsprvs-md.md)] o ambiente de desenvolvimento integrado (IDE). São fornecidos links para tópicos adicionais que explicam como projetos de controle de criação e compilação de código.
| 57.910714 | 288 | 0.779217 | por_Latn | 0.99698 |
8a79a741d4d0e49047525f669973a9772d6b9705 | 5,139 | md | Markdown | windows-apps-src/packaging/create-certificate-package-signing.md | hyoshioka0128/windows-uwp.ja-jp | 2eec6ca0e23e4c841ab51bfa6e811a80de7352ed | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/packaging/create-certificate-package-signing.md | hyoshioka0128/windows-uwp.ja-jp | 2eec6ca0e23e4c841ab51bfa6e811a80de7352ed | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/packaging/create-certificate-package-signing.md | hyoshioka0128/windows-uwp.ja-jp | 2eec6ca0e23e4c841ab51bfa6e811a80de7352ed | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Create a certificate for package signing
description: Create and export a certificate for app package signing with PowerShell tools.
ms.date: 09/30/2018
ms.topic: article
keywords: windows 10, uwp
ms.assetid: 7bc2006f-fc5a-4ff6-b573-60933882caf8
ms.localizationpriority: medium
ms.openlocfilehash: 1b9a538dc36818c065e790170f693576650f5024
ms.sourcegitcommit: 34671182c26f5d0825c216a6cededc02b0059a9e
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 06/20/2019
ms.locfileid: "67286925"
---
# <a name="create-a-certificate-for-package-signing"></a>パッケージ署名用証明書を作成する
この記事では、PowerShell ツールを使用して、アプリ パッケージ署名用の証明書を作成してエクスポートする方法について説明します。 Visual Studio を使用して [UWP アプリをパッケージ化する](https://docs.microsoft.com/windows/uwp/packaging/packaging-uwp-apps)ことをお勧めしますが、Visual Studio を使用してアプリを開発していない場合は、ストア対応アプリを手動でパッケージ化することができます。
> [!IMPORTANT]
> Visual Studio を使用してアプリを開発する場合は、Visual Studio のウィザードを使って証明書をインポートし、アプリ パッケージに署名することをお勧めします。 詳しくは、「[Visual Studio での UWP アプリのパッケージ化](https://docs.microsoft.com/windows/uwp/packaging/packaging-uwp-apps)」をご覧ください。
## <a name="prerequisites"></a>前提条件
- **パッケージまたはパッケージ化されていないアプリ**
AppxManifest.xml ファイルを含むアプリ。 マニフェスト ファイルを参照して、最終的なアプリ パッケージの署名に使われる証明書を作成する必要があります。 手動でアプリをパッケージ化する方法について詳しくは、「[MakeAppx.exe ツールを使ってアプリ パッケージを作成する](https://docs.microsoft.com/windows/uwp/packaging/create-app-package-with-makeappx-tool)」をご覧ください。
- **公開キー基盤 (PKI) のコマンドレット**
署名証明書を作成およびエクスポートするには、PKI コマンドレットが必要です。 詳しくは、「[公開キー基盤コマンドレット](https://docs.microsoft.com/powershell/module/pkiclient/)」をご覧ください。
## <a name="create-a-self-signed-certificate"></a>自己署名証明書を作成する
自己署名証明書は、ストアに発行する準備ができた前に、アプリのテストに便利です。 自己署名証明書を作成するには、このセクションで説明されている手順に従います。
> [!NOTE]
> 自己署名証明書は厳密にテストします。 ストア、またはその他の会場からアプリを発行する準備ができたら、証明書を信頼できるソースに切り替えます。 これに失敗すると、アプリが、顧客にインストールすることができない可能性があります。
### <a name="determine-the-subject-of-your-packaged-app"></a>パッケージ アプリのサブジェクトを決定する
証明書を使ってアプリ パッケージに署名するには、証明書の「サブジェクト」がアプリのマニフェストの [Publisher] セクションと**一致する必要**があります。
たとえば、アプリの AppxManifest.xml ファイルの [Identity] セクションは、次のようになります。
```xml
<Identity Name="Contoso.AssetTracker"
Version="1.0.0.0"
Publisher="CN=Contoso Software, O=Contoso Corporation, C=US"/>
```
In this example, the Publisher is "CN=Contoso Software, O=Contoso Corporation, C=US", and this value must be used when creating your certificate.
### <a name="use-new-selfsignedcertificate-to-create-a-certificate"></a>Use **New-SelfSignedCertificate** to create a certificate
Use the **New-SelfSignedCertificate** PowerShell cmdlet to create a self-signed certificate. **New-SelfSignedCertificate** has several parameters for customization, but this article focuses on creating a simple certificate that works with **SignTool**. For more examples and uses of this cmdlet, see [New-SelfSignedCertificate](https://docs.microsoft.com/powershell/module/pkiclient/New-SelfSignedCertificate).
Based on the AppxManifest.xml file from the previous example, you should create the certificate with the following syntax. In an elevated PowerShell prompt, run the following command:
```powershell
New-SelfSignedCertificate -Type Custom -Subject "CN=Contoso Software, O=Contoso Corporation, C=US" -KeyUsage DigitalSignature -FriendlyName "Your friendly name goes here" -CertStoreLocation "Cert:\CurrentUser\My" -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.3", "2.5.29.19={text}")
```
Note the following details about some of the parameters:
- **KeyUsage**: This parameter defines what the certificate may be used for. For a self-signed certificate, this parameter should be set to **DigitalSignature**.
- **TextExtension**: This parameter includes settings for the following extensions:
  - Extended Key Usage (EKU): This extension indicates additional purposes for which the certified public key may be used. For a self-signed certificate, this parameter should include the extension string **"2.5.29.37={text}1.3.6.1.5.5.7.3.3"**, which indicates that the certificate is to be used for code signing.
  - Basic Constraints: This extension indicates whether or not the certificate is a Certificate Authority (CA). For a self-signed certificate, this parameter should include the extension string **"2.5.29.19={text}"**, which indicates that the certificate is an end entity (not a CA).
After running this command, the certificate is added to the local certificate store, as specified by the "-CertStoreLocation" parameter. The command also outputs the certificate's thumbprint.
You can view your certificate in a PowerShell window with the following commands:
```powershell
Set-Location Cert:\CurrentUser\My
Get-ChildItem | Format-Table Subject, FriendlyName, Thumbprint
```
This displays all of the certificates in your local store.
## <a name="export-a-certificate"></a>Export a certificate
To export the certificate in the local store to a Personal Information Exchange (.pfx) file, use the **Export-PfxCertificate** cmdlet.
When using **Export-PfxCertificate**, you must either create and use a password, or use the "-ProtectTo" parameter to specify which users or groups can access the file without a password. An error is displayed if you don't use either the "-Password" or "-ProtectTo" parameter.
### <a name="password-usage"></a>Password usage
```powershell
$pwd = ConvertTo-SecureString -String <Your Password> -Force -AsPlainText
Export-PfxCertificate -cert "Cert:\CurrentUser\My\<Certificate Thumbprint>" -FilePath <FilePath>.pfx -Password $pwd
```
### <a name="protectto-usage"></a>ProtectTo を使用
```powershell
Export-PfxCertificate -cert Cert:\CurrentUser\My\<Certificate Thumbprint> -FilePath <FilePath>.pfx -ProtectTo <Username or group name>
```
After you create and export your certificate, you're ready to sign your app package with **SignTool**. For the next step in the manual packaging process, see [Sign an app package using SignTool](https://docs.microsoft.com/windows/uwp/packaging/sign-app-package-using-signtool).
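As a rough preview of that next step (the exact options are covered in the linked article), signing with a password-protected PFX typically looks like the following; the file names and password here are placeholders:
```powershell
SignTool sign /fd SHA256 /a /f MyCertificate.pfx /p MyPassword MyApp.appx
```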
## <a name="security-considerations"></a>セキュリティの考慮事項
[ローカル コンピューターの証明書ストア](https://docs.microsoft.com/windows-hardware/drivers/install/local-machine-and-current-user-certificate-stores)に証明書を追加することによって、コンピューター上のすべてのユーザーの証明書の信頼に影響します。 システムの信頼性を損なうのを防ぐために、これらの証明書が不要になったときには、削除することをお勧めします。
| 47.583333 | 318 | 0.802491 | yue_Hant | 0.865941 |
8a7a3343cc7f702494d7cab688e54c7e7a470b3d | 3,756 | md | Markdown | packages/jobboard/website/content/jobs/remote-software-project-manager-scopic--JbEEe.md | crocoder-dev/monorepo | 71a80351110128a30b6bce54cb927adea6a44e3a | [
"MIT"
] | 3 | 2020-12-07T21:35:08.000Z | 2020-12-26T08:40:16.000Z | packages/jobboard/website/content/jobs/remote-software-project-manager-scopic--JbEEe.md | crocoder-dev/monorepo | 71a80351110128a30b6bce54cb927adea6a44e3a | [
"MIT"
] | 18 | 2020-11-10T11:55:26.000Z | 2022-03-07T08:55:00.000Z | packages/jobboard/website/content/jobs/remote-software-project-manager-scopic--JbEEe.md | crocoder-dev/monorepo | 71a80351110128a30b6bce54cb927adea6a44e3a | [
"MIT"
] | 3 | 2020-12-08T08:48:36.000Z | 2021-11-25T11:05:56.000Z | ---
title: "Software Project Manager"
location: "Anywhere"
host: "https://scopicsoftware.recruiterbox.com/?q=&limit=100"
companyName: "Scopic"
url: "https://scopicsoftware.recruiterbox.com/jobs/fk0345b/"
applyUrl: "https://scopicsoftware.recruiterbox.com/jobs/fk0345b/?apply=true"
timestamp: 1616371200000
hashtags: "#management,#optimization,#English,#ui/ux"
jobType: "other"
logoUrl: "https://jobboard-logos-bucket.s3.eu-central-1.amazonaws.com/scopic"
companyWebsite: "https://scopicsoftware.com/"
summary: "Scopic is searching for a remote software project manager that has experience with project management software tools."
summaryBackup: "To apply as a remote software project manager at Scopic, you preferably need to have some knowledge of: #management, #ui/ux, #English."
featured: 10
archived: "true"
---
Are you on the hunt for exciting new challenges that boost your professional growth? If you’re an innovator by nature and a Remote Software Project Manager by trade, we’d love to hear from you! Read on to see if you’d be a good fit for the Scopic team of 250+ professionals from over 40 countries.
At Scopic, the virtual world is our home so this is a full-time remote position. Only apply if you’re prepared for the zero-hour commute and the thought of collaborating with colleagues from around the globe excites you!
The skills and traits we’re looking for:
* 2+ years working as a software project manager
* Strong English reading, writing, and speaking skills
* Very good communication skills, with the ability to work directly with clients and development teams
* Ability to read and write software application specifications
* A good sense of software user interface design and ability to create quick wireframe mockups
* Knowledge of the software quality assurance process
* Experience with project management software tools (e.g., scheduling software, and bug trackers)
* Strong organizational skills and attention to detail
* Energy and passion for your work
* Bachelor's degree or higher preferred
* Stable internet connection and home computer
* Interest, dedication, and discipline to work remotely from home
The secret ingredients that make us special:
* Your growth is our growth. We invest in your future with paid training and other professional opportunities.
* We’re industry innovators at the forefront of change. Equipped with the latest technologies and a team of knowledgeable colleagues by your side, you’ll embrace new and interesting challenges.
* Your location. Your schedule — Pick your time-zone, choose your preferred hours, and work from the place where you feel most at home.
* Flexibility and freedom are in our DNA! As long as you have a stable internet connection and the drive to thrive, you can travel and work from anywhere you like.
* A workload you can rely on. We’ll set you enough tasks to keep that mind busy! At Scopic, we’ll ensure you always have a consistent flow of engaging, challenging work to do.
* Recognition and reward. We acknowledge diligence and hard work through annual pay increases for good performance.
Down to business!
* Salary Range: $ 10 - $ 20/hour
* Your starting salary is negotiable depending on your skills and experience.
* Both hourly and salary positions are available.
* Employees are paid monthly via wire transfer.
Our values:
Scopic is an equal opportunity employer. We value diversity and do not discriminate on the basis of race, religion, color, marital status, national origin, gender, veteran status, sexual orientation, age, or disability status.
Have the skills, the drive, and the passion to join the Scopic family?
Apply today to join our growing team of remote professionals from around the world.
| 61.57377 | 297 | 0.78328 | eng_Latn | 0.998762 |
8a7a3a9096b8f489008ef18f2e22040565826f8a | 1,666 | md | Markdown | azps-3.8.0/Az.Dns/Az.DNS.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-12-05T17:58:35.000Z | 2020-12-05T17:58:35.000Z | azps-3.8.0/Az.Dns/Az.DNS.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azps-3.8.0/Az.Dns/Az.DNS.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
Module Name: Az.Dns
Module Guid: 5e5ed8bc-27bf-4380-9de1-4b22ba0920b2
Download Help Link: https://docs.microsoft.com/en-us/powershell/module/az.dns
Help Version: 4.1.2.0
Locale: en-US
content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Dns/Dns/help/Az.DNS.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Dns/Dns/help/Az.DNS.md
ms.openlocfilehash: 2f741e911f7118f06d15e7caf1807822ecd13b76
ms.sourcegitcommit: 6a91b4c545350d316d3cf8c62f384478e3f3ba24
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/21/2020
ms.locfileid: "94097373"
---
# Az.DNS Module
## Description
This topic displays the help topics for the Azure DNS cmdlets.
## Az.DNS Cmdlets
### [Add-AzDnsRecordConfig](Add-AzDnsRecordConfig.md)
Adds a DNS record to a local record set object.
### [Get-AzDnsRecordSet](Get-AzDnsRecordSet.md)
Gets a DNS record set.
### [Get-AzDnsZone](Get-AzDnsZone.md)
Gets a DNS zone.
### [New-AzDnsRecordConfig](New-AzDnsRecordConfig.md)
Creates a new DNS record local object.
### [New-AzDnsRecordSet](New-AzDnsRecordSet.md)
Creates a DNS record set.
### [New-AzDnsZone](New-AzDnsZone.md)
Creates a new DNS zone.
### [Remove-AzDnsRecordConfig](Remove-AzDnsRecordConfig.md)
Removes a DNS record from a local record set object.
### [Remove-AzDnsRecordSet](Remove-AzDnsRecordSet.md)
Deletes a record set.
### [Remove-AzDnsZone](Remove-AzDnsZone.md)
Removes a DNS zone from a resource group.
### [Set-AzDnsRecordSet](Set-AzDnsRecordSet.md)
Updates a DNS record set.
### [Set-AzDnsZone](Set-AzDnsZone.md)
Updates the properties of a DNS zone.
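As a quick sketch of how a few of these cmdlets fit together (the resource group and zone names below are hypothetical; see each cmdlet's page above for full parameter details):
```powershell
# Create a zone, then add an A record set pointing www at a test address.
New-AzDnsZone -Name contoso.com -ResourceGroupName MyResourceGroup
$records = New-AzDnsRecordConfig -Ipv4Address "10.0.0.4"
New-AzDnsRecordSet -Name www -RecordType A -ZoneName contoso.com -ResourceGroupName MyResourceGroup -Ttl 3600 -DnsRecords $records
# List the record sets in the zone to confirm.
Get-AzDnsRecordSet -ZoneName contoso.com -ResourceGroupName MyResourceGroup
```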
| 30.851852 | 106 | 0.786315 | tur_Latn | 0.817144 |
8a7a69efe8d5cebc40260d6a0d955a7f3c8063d8 | 5,074 | md | Markdown | userManual.md | MelvynHerzig/projet-berney_forestier_herzig | 2106ed2ecf33657d801a0c1b566c96ff2bbe7ab6 | [
"Unlicense"
] | null | null | null | userManual.md | MelvynHerzig/projet-berney_forestier_herzig | 2106ed2ecf33657d801a0c1b566c96ff2bbe7ab6 | [
"Unlicense"
] | null | null | null | userManual.md | MelvynHerzig/projet-berney_forestier_herzig | 2106ed2ecf33657d801a0c1b566c96ff2bbe7ab6 | [
"Unlicense"
] | null | null | null | # User Manual
## Installation
### 1) Download the archive
[get the statique.zip archive](https://github.com/gen-classroom/projet-berney_forestier_herzig/releases)
### 2) Unzip the archive
**MacOS/Linux:**
```
unzip -o statique.zip
```
**Windows:**
unzip the *target/statique.zip* archive manually.
### 3) Add the application to the path
from inside the unzipped archive
**MacOS/Linux:**
```
export PATH=$PATH:`pwd`/bin
```
**Windows:**
In cmd.exe
```
SET PATH=%PATH%;%cd%\bin
```
## Usage
### General usage information
>The application is used through a command-line interface.
#### Commands
Running <i>$statique</i> on its own will give you the list of commands, producing the following output
```
Usage: statique [-version] [COMMAND]
A brand new static site generator.
-version Print software version
Commands:
init Initialize a static site directory
clean Clean a static site
build Build a static site
serve Open the site in a web browser.
```
The 4 commands take directory paths. If the path is missing, the command works in the current
directory.
Here is what each of these commands does:
* init : initializes everything needed to create the site
* build : builds the static site from the markdown files
* serve : displays the site in a browser
* clean : deletes the generated build output
#### Arguments for the commands
Arguments can be used with some commands; here is how to add them:
>$ statique --[argument] build /cheminSite
Here are the arguments that can be passed to some commands:
* watch : regenerates the static site on the fly whenever a change occurs on the file system
Here is a list of the arguments that can be passed to each command:
* init : none
* build : --watch
* serve : --watch
* clean : none
#### Logical order of use
1. statique init /monSite
2. configure and create the site content
3. statique build /monSite
4. statique serve /monSite
5. statique clean /monSite
### Configuration
In the config.yaml file it is possible to define the title of your site. This title
will appear in browser tabs along with the page title.
```
---
site_titre: titre du site
---
```
### Creating a page
To create a page, copy and paste the following template into each markdown file.
```
---
page_titre: titre de la page
---
# C'est un titre
C'est du contenu!
```
Here is an example showing many of the content possibilities for the index.md file:
```
---
page_titre: titre de la page
---
# Un titre
## Un sous-titre
Notre projet github [ici](https://github.com/gen-classroom/projet-berney_forestier_herzig)
Une autre page de mon site [ici](./subdir/subdirfile.html)
| col1 | col2 |
|---|---|
|cell1|cell2|
~~du texte barré~~
++du texte souligné++
Qu'est-ce qui est mieux ?
- [ ] Ne pas avoir des checkbox
- [x] Avoir des checkbox
---\
Une jolie image pour vos beaux yeux.

La même image en plus petite
{width=200}
```
> page_titre is the only page metadata field.
More content can be added by following the CommonMark specification:
https://commonmark.org/help/
#### Adding links to the menu
It is possible to create a menu by editing the <i>/template/menu.html</i> file.
For each new link you want to create, add the following between the existing <i>ul</i> tags:
```
<li><a href="chemin/vers_un_fichier_md/depuis_le_dossier_init/index.html">nom du lien</a></li>
````
> The file targeted by the link goes inside href="<file here>". The file must be referenced from
> the folder created by init, replacing the md extension with html.

If you want to create a link to anotherPage.md, you must write "aFolder/anotherPage.html" in the href attribute.
### Usage scenario
If you are working in the <i>C:\user</i> directory and want to create your site in the <i>C:\user\mySite</i> directory:
>$ statique init /mySite
Configure, prepare and build out your site as you wish.
To do this, see the *Configuration* and *Creating a page* sections; you can also consult the *Adding links to the menu* section.
To launch the creation / generation of the static site:
>$ statique build /mySite
The result of the generation is deployed in <i>C:\user\mySite\build</i>
Want to remove the deployment?
The command looks for a folder named build and deletes it. Be careful not to point it at the wrong folder.
>$ statique clean /mySite
To view your site in a browser:
>$ statique serve /mySite
Want your changes to be applied and displayed directly in the browser?
>$ statique build --watch /mySite
>$ statique serve --watch /mySite
If you want to exit *watch* mode, use the shortcut:
>$ Ctrl + c
Finally, type *O* and *Enter*:
>$Terminer le programme de commandes (O/N) ? O
| 26.427083 | 148 | 0.737879 | fra_Latn | 0.979074 |
8a7aa678e40f148c31c9ceaf2f10df2d1f4a1c30 | 2,183 | md | Markdown | README.md | ryan-alfi/flutter-html-editor | 2fcb8f7fba1031ec465ceccfe1fc40cff2bc326b | [
"MIT"
] | null | null | null | README.md | ryan-alfi/flutter-html-editor | 2fcb8f7fba1031ec465ceccfe1fc40cff2bc326b | [
"MIT"
] | null | null | null | README.md | ryan-alfi/flutter-html-editor | 2fcb8f7fba1031ec465ceccfe1fc40cff2bc326b | [
"MIT"
] | null | null | null | # Flutter Html Editor
Flutter HTML Editor is a text editor for Android and iOS that helps you write WYSIWYG HTML code, based on the Summernote JavaScript wrapper. With this library you can also insert images into the text editor.
## Setup
add ```html_editor: ^1.0.1``` as a dependency to pubspec.yaml
### iOS
Add the following keys to your Info.plist file, located in <project root>/ios/Runner/Info.plist:
```
<key>io.flutter.embedded_views_preview</key>
<true/>
<key>NSCameraUsageDescription</key>
<string>Used to demonstrate image picker plugin</string>
<key>NSMicrophoneUsageDescription</key>
<string>Used to capture audio for image picker plugin</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Used to demonstrate image picker plugin</string>
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
```
### Usage
1. import flutter html editor
```
import 'package:html_editor/html_editor.dart';
```
2. Create Global key from HTML Editor State
```
GlobalKey<HtmlEditorState> keyEditor = GlobalKey();
```
3. Add HTML Editor to widget
```
HtmlEditor(
hint: "Your text here...",
//value: "text content initial, if any",
key: keyEditor,
height: 400,
),
```
4. Get text from Html Editor
```
final txt = await keyEditor.currentState.getText();
```
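Putting steps 1–4 together, a minimal page could look like this. The surrounding Scaffold/button code is ordinary Flutter boilerplate added for illustration, not part of this package's API; only `HtmlEditor`, the `GlobalKey<HtmlEditorState>` and `getText()` come from the steps above.
```
import 'package:flutter/material.dart';
import 'package:html_editor/html_editor.dart';

class EditorDemo extends StatefulWidget {
  @override
  _EditorDemoState createState() => _EditorDemoState();
}

class _EditorDemoState extends State<EditorDemo> {
  // Global key used to reach the editor state (step 2).
  final GlobalKey<HtmlEditorState> keyEditor = GlobalKey();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('HTML editor demo')),
      body: ListView(
        children: <Widget>[
          // The editor widget itself (step 3).
          HtmlEditor(
            hint: 'Your text here...',
            key: keyEditor,
            height: 400,
          ),
          RaisedButton(
            child: Text('Get HTML'),
            onPressed: () async {
              // Read the current HTML content back (step 4).
              final txt = await keyEditor.currentState.getText();
              print(txt);
            },
          ),
        ],
      ),
    );
  }
}
```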
### Available option parameters
Parameter | Type | Default | Description
------------ | ------------- | ------------- | -------------
**key** | GlobalKey<HtmlEditorState> | **required** | used by the get method & reset
**value** | String | empty | initial text content for the editor
**height** | double | 380 | height of the text editor
**decoration** | BoxDecoration | | decoration for the editor container
**useBottomSheet** | bool | true | if true, the image picker opens in a bottom sheet; otherwise in a dialog
**widthImage** | String | 100% | width of the picked image
**showBottomToolbar** | bool | true | show or hide the bottom toolbar
**hint** | String | empty | placeholder hint text
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
`#MajalengkaExoticSundaland`
| 26.950617 | 193 | 0.675218 | eng_Latn | 0.669236 |
8a7af71d96fd51f6b67339fa06656473e48b4530 | 54 | md | Markdown | source/includes/zh/_introduction.md | fb996de/jp-api-docs | 5dc7451918b3b2629b2ccf054348247c0555d75d | [
"Apache-2.0"
] | 11 | 2018-06-08T09:21:51.000Z | 2019-03-28T08:40:06.000Z | source/includes/zh/_introduction.md | fb996de/jp-api-docs | 5dc7451918b3b2629b2ccf054348247c0555d75d | [
"Apache-2.0"
] | 10 | 2018-06-24T05:44:29.000Z | 2022-02-26T03:57:40.000Z | source/includes/zh/_introduction.md | fb996de/jp-api-docs | 5dc7451918b3b2629b2ccf054348247c0555d75d | [
"Apache-2.0"
] | 10 | 2018-06-08T13:50:27.000Z | 2020-03-19T14:00:57.000Z | # 介绍
By reviewing the information below, you can easily use the API provided by FCoin to connect to the FCoin trading platform.
| 13.5 | 47 | 0.759259 | yue_Hant | 0.998824 |
8a7b00fc1d1ad5832ba0ec96c502b9fcefe4ad8d | 2,065 | md | Markdown | src/history/Thin-People-Video.md | ToastedBuns/website_gat | 8c1c7d16ac1010b0510743d6696e0772f0034997 | [
"MIT"
] | null | null | null | src/history/Thin-People-Video.md | ToastedBuns/website_gat | 8c1c7d16ac1010b0510743d6696e0772f0034997 | [
"MIT"
] | null | null | null | src/history/Thin-People-Video.md | ToastedBuns/website_gat | 8c1c7d16ac1010b0510743d6696e0772f0034997 | [
"MIT"
] | null | null | null | ---
title: Its Okay To Be Thin
date: 2019-03-20 21:00:23
author: 'Toasted Buns'
image: ../../images/thin_privilege.png
tags:
- Youtube
- Political Correctness
- SJW
- Fat Acceptance
---
BBC Sesh and Michaela dismantle Thin Privilege.
BBC continues to push fat acceptance down our throats and we are funding this!
Is it the dumbest thing you've heard or have you seen worse?
If you enjoyed it, give it a like and subscribe for more!
Let me know how you feel about this video in the comments below!
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script><ins class="adsbygoogle" style="display:block; text-align:center;" data-ad-layout="in-article" data-ad-format="fluid" data-ad-client="ca-pub-2164900147810573" data-ad-slot="8817307412"></ins><script>(adsbygoogle = window.adsbygoogle || []).push({});</script>
Thanks for watching!
You can also join us on our weekly Podcast every Sunday @ 7pm live here on this channel!
Let's talk!
Gab: https://gab.ai/ToastedBuns
<iframe width="560" height="315" src="https://www.youtube.com/embed/6flMkaehUoM" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Sources:
https://www.youtube.com/watch?v=dfyyQY-uLlE
https://twitter.com/mixon_bridget/status/1094456903030042624
https://www.nytimes.com/1999/07/29/us/people-of-size-gather-to-promote-fat-acceptance.html
https://twitter.com/bbcsesh/status/1093194261636485120
https://www.nhs.uk/news/obesity/obesity-now-a-leading-cause-of-death-especially-in-men/
https://www.webmd.com/diet/obesity/obesity-health-risks#1
https://i1.kym-cdn.com/entries/icons/facebook/000/016/737/8516201_orig.jpeg
https://www.youtube.com/watch?v=260ou4_9mYI -- Will Smith
https://i1.wp.com/scienceblog.com/wp-content/uploads/2016/05/louie-plane2.jpg.824x0_q71.jpg?fit=824%2C462&ssl=1
Background song: https://www.youtube.com/watch?v=Rt_dBpG5iu8
Ending Song: http://freemusicarchive.org/music/BrokeForFree/SomethingEP/BrokeForFree-SomethingEP-05SomethingElated
| 35.603448 | 348 | 0.76707 | yue_Hant | 0.298985 |
8a7b288e5ae36bd10f079e98abc9909510a7d152 | 1,471 | md | Markdown | bsabi32/README.md | PrzemekWirkus/abi-aa | 358b22fbcfa2b85ee93a477d9e632c3d98a404af | [
"Apache-2.0"
] | 302 | 2020-02-13T13:26:27.000Z | 2022-03-30T01:26:49.000Z | bsabi32/README.md | PrzemekWirkus/abi-aa | 358b22fbcfa2b85ee93a477d9e632c3d98a404af | [
"Apache-2.0"
] | 72 | 2020-02-13T20:03:55.000Z | 2022-03-25T07:09:49.000Z | bsabi32/README.md | PrzemekWirkus/abi-aa | 358b22fbcfa2b85ee93a477d9e632c3d98a404af | [
"Apache-2.0"
] | 86 | 2020-02-19T22:20:47.000Z | 2022-02-12T22:29:33.000Z | <div align="center">
<img src="Arm_logo_blue_RGB.svg" />
</div>
# ABI for the Arm® Architecture Base Standard (AArch32)
## About this document
This document is the [ABI for the Arm® Architecture Base
Standard](bsabi32.rst). It describes the overall structure of the ABI
and how the collection of documents fit into it.
## About the license
As identified more fully in the [LICENSE](LICENSE) file, this project
is licensed under CC-BY-SA-4.0 along with an additional patent
license. The language in the additional patent license is largely
identical to that in Apache-2.0 (specifically, Section 3 of Apache-2.0
as reflected at https://www.apache.org/licenses/LICENSE-2.0) with two
exceptions.
First, several changes were made related to the defined terms so as to
reflect the fact that such defined terms need to align with the
terminology in CC-BY-SA-4.0 rather than Apache-2.0 (e.g., changing
“Work” to “Licensed Material”).
Second, the defensive termination clause was changed such that the
scope of defensive termination applies to “any licenses granted to
You” (rather than “any patent licenses granted to You”). This change
is intended to help maintain a healthy ecosystem by providing
additional protection to the community against patent litigation
claims.
## Defects report
Please report defects in the [ABI for the Arm® Architecture Base
Standard](bsabi32.rst) to the [issue tracker page on
GitHub](https://github.com/ARM-software/abi-aa/issues).
| 36.775 | 70 | 0.777702 | eng_Latn | 0.998307 |
8a7c3f2f33586ab2f71331c66aa826b4fa8f68cc | 6,918 | md | Markdown | README_grahamedgecombe.md | cpantel/evilCodeSequence | e8ef647f98ebb2d4ca0a5e7493ed3bdddb64c32e | [
"0BSD"
] | 2 | 2020-10-25T18:40:28.000Z | 2021-09-09T16:20:50.000Z | README_grahamedgecombe.md | cpantel/evilCodeSequence | e8ef647f98ebb2d4ca0a5e7493ed3bdddb64c32e | [
"0BSD"
] | null | null | null | README_grahamedgecombe.md | cpantel/evilCodeSequence | e8ef647f98ebb2d4ca0a5e7493ed3bdddb64c32e | [
"0BSD"
] | 4 | 2021-06-06T02:17:47.000Z | 2021-11-20T11:50:37.000Z | # Icicle
## Introduction
Icicle is a 32-bit [RISC-V][riscv] system on chip for [iCE40 HX8K][ice40],
[iCE40 UP5K][ice40-up5k] and [ECP5][ecp5] FPGAs. It can be built with the
open-source [SymbiFlow][symbiflow] toolchain and currently targets several
development boards.
## Current features
* RV32I core with a [classic 5-stage RISC pipeline][classic-risc], static branch
prediction, bypassing and interlocking. It currently implements the entire
[user ISA][riscv-user] parts of the [privileged ISA][riscv-priv].
* Shared instruction and data memory (8 KiB, implemented with FPGA block RAM).
* Memory-mapped UART and LEDs.
* Memory-mapped SPI flash.
## Dependencies
* [GNU Make][make]
* [GNU RISC-V toolchain][riscv-gnu]
* [Icarus Verilog][iverilog] (`master` branch)
* [nextpnr][nextpnr] or [arachne-pnr][arachne-pnr]
* [Project IceStorm][icestorm] or [Project Trellis][trellis]
* [vim][vim] (for `xxd`)
* [Yosys][yosys] (`master` branch)
## Building and testing
### Supported boards
Icicle supports several development boards:
* `blackice-ii`: [BlackIce II][blackice-ii-board]
* `ecp5-evn`: [ECP5 evaluation board][ecp5-evn]
* `edufpga`: [EDU-CIAA-FPGA iCE40-HX4k board](./EDU-FPGA.md)
* `ice40hx8k-b-evn`: [iCE40-HX8K breakout board][ice40-hx8k-breakout]
* `icebreaker`: [iCEBreaker][icebreaker]
* `upduino`: [UPduino][upduino]
`<board>` should be replaced with the internal name of your development board in
the rest of the instructions (e.g. `ice40hx8k-b-evn` for the iCE40-HX8K breakout
board).
### Building
* Run `make BOARD=<board> syntax` to check the syntax with [Icarus][iverilog],
which has a stricter parser than [Yosys][yosys]. At the time of writing the
`master` branch of Icarus is required as there isn't a stable release with
`always_comb`/`always_ff` support yet.
* Run `make BOARD=<board>` to synthesize the design, place and route, compile
the demo program in `progmem.c` and create the bitstream.
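Putting the steps together for one concrete board (the iCE40-HX8K breakout board, using only the make targets described in this README; substitute your own board name):
```sh
make BOARD=ice40hx8k-b-evn syntax   # optional syntax check with Icarus
make BOARD=ice40hx8k-b-evn          # synthesize, place and route, build the bitstream
make BOARD=ice40hx8k-b-evn flash    # program the board (jumpers set for flash programming)
```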
### Programming
#### BlackIce II
* Configure jumper on board for [DFU Mode][dfu-mode] and connect both USB1 and
USB2 on the board to host USB ports.
* Run `make BOARD=blackice-ii dfu-flash` to flash the bitstream.
#### ECP5 evaluation board
* Remove R22, R23 and R24 to disconnect the channel B of the FTDI chip from the
I2C bus.
* Populate R34 and R35 with zero-ohm resistors to connect channel B of the FTDI
chip to the UART RX and TX pins.
* Optionally populate R21 with a zero-ohm resistor to enable the UART TX
indicator LED.
#### EDU-FPGA
* Run `make BOARD=edufpga flash` to flash the bitstream.
#### iCE40-HX8K breakout board
* Configure the jumpers for flash programming.
* Run `make BOARD=ice40hx8k-b-evn flash` to flash the bitstream.
### Testing
* If your chosen board has built-in LEDs, some of the LEDs should turn on and blink.
* Run `picocom -b 9600 /dev/ttyUSBn` (replacing `ttyUSBn` with the name of the
serial port) to connect to the serial port. `Hello, world!` should be printed
once per second.
### Other targets
The `make BOARD=<board> stat` target runs `icebox_stat` and the
`make BOARD=<board> time` target prints the `icetime` report.
The `Makefile` runs the [IceStorm][icestorm] toolchain in quiet mode. Unset the
`QUIET` variable to run the toolchain in verbose mode - e.g.
`make BOARD=<board> QUIET= ...`.
Set the `PNR` variable to `arachne-pnr` to use [arachne-pnr][arachne-pnr]
instead of [nextpnr][nextpnr] (the default) - e.g. `make PNR=arachne-pnr`.
## Formal verification
Icicle supports the RISC-V Formal Interface (RVFI), allowing it to be formally
verified with [SymbiYosys][symbiyosys] and [riscv-formal][riscv-formal]:
* Run `git clone https://github.com/SymbioticEDA/riscv-formal` to clone
riscv-formal.
* Run `cd riscv-formal/cores && git clone https://github.com/grahamedgecombe/icicle`
to clone Icicle in the `cores` subdirectory.
* Run ``cd icicle && python ../../checks/genchecks.py && make -C checks -j `nproc```
to verify the core.
## Planned features
* Use remaining block RAM tiles to eke out as much memory as possible.
* Use the SPRAM tiles on UP5K devices.
* Implement remaining bits of the user ISA.
* Implement machine mode from the privileged ISA.
* Interrupts/exceptions.
* Unaligned memory access support.
* Memory-mapped GPIOs.
* Add XIP, DDR, DSPI and QSPI support to the SPI flash controller.
* Improved reset support (a reset signal + boot ROM to zero all the registers).
* Automated tests.
* Multiply/divide support.
* Compressed instruction support.
* Add flags to disable certain features (e.g. privileged mode) to save LUTs on
smaller devices (e.g. the UP5K).
* Investigate using DSP tiles on the UP5K.
## Size and performance
The entire system on chip currently occupies around 3,000 LUTs on an iCE40 when
synthesized with [Yosys][yosys].
If bypassing and branch prediction are disabled [nextpnr][nextpnr] estimates it
can be clocked at around 50 MHz on a HX series device and 20 MHz on a UP series
device.
The core is capable of issuing and retiring one instruction per clock cycle,
although the actual number of instructions per cycle will be slightly less than
this in practice due to interlocking, branch mispredictions and the shared
memory bus.
## License
This project is available under the terms of the ISC license, which is similar
to the 2-clause BSD license. See the `LICENSE` file for the copyright
information and licensing terms.
[arachne-pnr]: https://github.com/cseed/arachne-pnr#readme
[blackice-ii-board]: https://github.com/mystorm-org/BlackIce-II#readme
[classic-risc]: https://en.wikipedia.org/wiki/Classic_RISC_pipeline
[dfu-mode]: https://github.com/mystorm-org/BlackIce-II/wiki/DFU-operations-on-the-BlackIce-II
[ecp5-evn]: https://www.latticesemi.com/en/Products/DevelopmentBoardsAndKits/ECP5EvaluationBoard.aspx
[ecp5]: https://www.latticesemi.com/Products/FPGAandCPLD/ECP5.aspx
[ice40-hx8k-breakout]: https://www.latticesemi.com/Products/DevelopmentBoardsAndKits/iCE40HX8KBreakoutBoard.aspx
[ice40-up5k]: https://www.latticesemi.com/Products/FPGAandCPLD/iCE40Ultra.aspx
[ice40]: https://www.latticesemi.com/Products/FPGAandCPLD/iCE40.aspx
[icebreaker]: https://github.com/icebreaker-fpga/
[icestorm]: http://www.clifford.at/icestorm/
[iverilog]: http://iverilog.icarus.com/
[make]: https://www.gnu.org/software/make/
[nextpnr]: https://github.com/YosysHQ/nextpnr#readme
[riscv-formal]: https://github.com/SymbioticEDA/riscv-formal
[riscv-gnu]: https://github.com/riscv/riscv-gnu-toolchain#readme
[riscv-priv]: https://riscv.org/specifications/privileged-isa/
[riscv-user]: https://riscv.org/specifications/
[riscv]: https://riscv.org/risc-v-isa/
[symbiflow]: https://symbiflow.github.io/
[symbiyosys]: https://symbiyosys.readthedocs.io/
[trellis]: https://github.com/SymbiFlow/prjtrellis#readme
[upduino]: http://gnarlygrey.atspace.cc/development-platform.html#upduino
[vim]: https://www.vim.org/
[yosys]: http://www.clifford.at/yosys/
| 39.531429 | 112 | 0.75094 | eng_Latn | 0.776499 |
8a7c79a4e91636e0e39a69a3b0a920048bb388ec | 2,003 | md | Markdown | _posts/AI/2012-08-01-bringing-the-singularity-closer.md | theclue/braindump | 10e072a4a5c32e1ebbf988bacaef6bb5d87ff2a7 | [
"MIT"
] | 5 | 2015-08-13T13:30:04.000Z | 2019-02-11T03:25:56.000Z | _posts/AI/2012-08-01-bringing-the-singularity-closer.md | theclue/braindump | 10e072a4a5c32e1ebbf988bacaef6bb5d87ff2a7 | [
"MIT"
] | null | null | null | _posts/AI/2012-08-01-bringing-the-singularity-closer.md | theclue/braindump | 10e072a4a5c32e1ebbf988bacaef6bb5d87ff2a7 | [
"MIT"
] | 4 | 2015-08-13T13:38:07.000Z | 2019-09-06T09:19:46.000Z | ---
date: 2012-08-19
category: AI
tags: [Philosophy, Fundamentals]
title: Bringing the Singularity Closer
subtitle: Designing single purpose machines is relatively easier than generalized AI
layout: post
from_mdda_blog: true
---
{% include JB/setup %}
Designing single purpose machines is relatively easier than generalized AI.
One-Dimensional Talents
------------------------------------------
IBM's Watson machine, in its Jeopardy playing mode,
was never designed to be a "fully rounded individual".
Because Watson was designed for a particular goal,
it is easy (in retrospect) for people to accept that it would excel at the
'1-dimensional task' that it was built for. After the initial euphoria of success,
the audience readily placed it on the same spectrum of clever/smart machines on which
computer chess Grandmasters had already been assigned a spot.
Humans are faced with a very nebulous task ahead when building a generalized AI.
In contrast, the task of building bigger, more powerful or more efficient machines
sounds like something that could be characterised as a '1-dimensional task'. Given the available precedents,
it's relatively easy to accept that creating a machine that could do this specific task
better than the best available human is a practical goal. After all,
humans already have to use computer assistance to build microprocessors.
Simple proposal : Create a Watson that is good at designing better Watsons.
When does the Singularity Arrive?
------------------------------------------
Humans may not be able to design AIs - but the singularity doesn't have to wait until we can.
We can accelerate pace of development by building a better developer first.
The singularity may be closer than it first appears : It doesn't happen
when we can surpass an average human's generalized intelligence. The singularity occurs when
there's a machine-designing Watson that can design its own successor (and financiers are willing to invest in its designs).
| 42.617021 | 123 | 0.761358 | eng_Latn | 0.999661 |
8a7c8c8cf0cf9a0cc09e8de65cfa25a74d79c6ec | 82 | md | Markdown | README.md | ekmartens/css-r2d2 | d43bc37391e4b8b856da3467a2b499cdfde4b3fa | [
"MIT"
] | null | null | null | README.md | ekmartens/css-r2d2 | d43bc37391e4b8b856da3467a2b499cdfde4b3fa | [
"MIT"
] | null | null | null | README.md | ekmartens/css-r2d2 | d43bc37391e4b8b856da3467a2b499cdfde4b3fa | [
"MIT"
] | null | null | null | # css-r2d2
This project can be viewed here: https://ekmartens.github.io/css-r2d2/
| 27.333333 | 70 | 0.756098 | eng_Latn | 0.606247 |
8a7ce26ed96c90b4647e80f490fc499daae5a1a8 | 229 | md | Markdown | README.md | YuzeZhang/dubbo-samples | be7157235ea141242ccac3613cdb2e685d9da4fd | [
"Apache-2.0"
] | 2 | 2021-08-05T06:48:37.000Z | 2021-08-05T06:48:45.000Z | README.md | YuzeZhang/dubbo-samples | be7157235ea141242ccac3613cdb2e685d9da4fd | [
"Apache-2.0"
] | 7 | 2022-01-12T23:04:36.000Z | 2022-01-12T23:08:47.000Z | README.md | YuzeZhang/dubbo-samples | be7157235ea141242ccac3613cdb2e685d9da4fd | [
"Apache-2.0"
] | 1 | 2019-12-24T10:21:47.000Z | 2019-12-24T10:21:47.000Z | # Dubbo samples
* Java samples are kept in [java](https://github.com/apache/dubbo-samples/tree/master/java) subdirectory
* Go samples are kept in [golang](https://github.com/apache/dubbo-samples/tree/master/golang) subdirectory
| 45.8 | 106 | 0.777293 | eng_Latn | 0.54001 |
8a7d242ee2698bab1911495963efb39f7875dae9 | 787 | md | Markdown | README.md | jestillore/checkout | 37fc6c3f4f6c5ea1b2c78ef12c23c67c226f600b | [
"MIT"
] | null | null | null | README.md | jestillore/checkout | 37fc6c3f4f6c5ea1b2c78ef12c23c67c226f600b | [
"MIT"
] | null | null | null | README.md | jestillore/checkout | 37fc6c3f4f6c5ea1b2c78ef12c23c67c226f600b | [
"MIT"
] | null | null | null | ## A very simple git checkout tool
### Installation
You can move `checkout.sh` to `/usr/local/bin/checkout` or anywhere as long as it's on your PATH.
### Usage
There's really not much on this script, so I'll just show you how it's used.
Suppose you have branches like these:
- `feature/create-users-module`
- `feature/1234-create-tasks-module`
To check out the `feature/create-users-module` branch, you just have to run `checkout users`. `users` is a keyword that's unique to your branch name. Do not use the keyword `create`, as it's shared among other branches and the script will fail.
To check out `feature/1234-create-tasks-module`, you can run `checkout tasks` or you can run `checkout 1234`.
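Putting it together, a session might look like this (branch names as above; the comments just note where each command lands):
```sh
$ git branch
  feature/1234-create-tasks-module
  feature/create-users-module
$ checkout users   # switches to feature/create-users-module
$ checkout 1234    # switches to feature/1234-create-tasks-module
$ checkout tasks   # also switches to feature/1234-create-tasks-module
```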
## TODO
- [ ] Checkout remote branch
- [ ] Better management of branch conflicts
| 41.421053 | 245 | 0.7446 | eng_Latn | 0.993527 |
8a7d26200d21a4cb2c92ac504c76322de39ed898 | 693 | md | Markdown | extensions/igniter/payregister/vendor/square/connect/docs/Model/CustomerPreferences.md | themepress360/Tasti_Updated | 434c2674c83d489770db693356968535b32a769a | [
"MIT"
] | 130 | 2016-03-29T23:37:00.000Z | 2021-12-30T09:24:37.000Z | extensions/igniter/payregister/vendor/square/connect/docs/Model/CustomerPreferences.md | themepress360/Tasti_Updated | 434c2674c83d489770db693356968535b32a769a | [
"MIT"
] | 49 | 2016-12-01T04:27:23.000Z | 2020-07-06T17:13:32.000Z | extensions/igniter/payregister/vendor/square/connect/docs/Model/CustomerPreferences.md | themepress360/Tasti_Updated | 434c2674c83d489770db693356968535b32a769a | [
"MIT"
] | 63 | 2016-04-06T09:10:45.000Z | 2021-12-20T11:25:58.000Z | # CustomerPreferences
### Description
Represents communication preferences for the customer profile.
## Properties
Name | Getter | Setter | Type | Description | Notes
------------ | ------------- | ------------- | ------------- | ------------- | -------------
**email_unsubscribed** | getEmailUnsubscribed() | setEmailUnsubscribed($value) | **bool** | The customer has unsubscribed from receiving marketing campaign emails. | [optional]
Note: All properties are protected and only accessed via getters and setters.
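For illustration, reading and writing this model through the generated getter and setter might look like the following sketch. The `SquareConnect\Model` namespace and autoload path are assumptions about the SDK layout rather than something stated on this page; only the getter/setter names come from the table above.
```php
<?php
require 'vendor/autoload.php';

// Assumed namespace; the getter/setter below are the ones documented above.
$preferences = new \SquareConnect\Model\CustomerPreferences();
$preferences->setEmailUnsubscribed(true);

// true: the customer has opted out of marketing campaign emails.
var_dump($preferences->getEmailUnsubscribed());
```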
[[Back to Model list]](../../README.md#documentation-for-models) [[Back to API list]](../../README.md#documentation-for-api-endpoints) [[Back to README]](../../README.md)
| 43.3125 | 177 | 0.643579 | eng_Latn | 0.823446 |
8a7d5480a5ab572a0ff6e3257972247504fe459d | 31,621 | md | Markdown | docs/framework/wcf/diagnostics/exceptions-reference/identitymodel-exceptions.md | rprouse/docs-microsoft | af49757a2295db7fab1a4aea118fbb896861dba8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/exceptions-reference/identitymodel-exceptions.md | rprouse/docs-microsoft | af49757a2295db7fab1a4aea118fbb896861dba8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/exceptions-reference/identitymodel-exceptions.md | rprouse/docs-microsoft | af49757a2295db7fab1a4aea118fbb896861dba8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "IdentityModel Exceptions"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-clr"
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: 4ef34497-8ff5-4621-b773-7731cc721231
caps.latest.revision: 7
author: "dotnet-bot"
ms.author: "dotnetcontent"
manager: "wpickett"
---
# IdentityModel Exceptions
This topic lists all exceptions generated by IdentityModel.
## Exception List
|Resource Code|Current String|
|-------------------|--------------------|
|ValueMustBeOf2Types|The value of this argument must be one of these two types.|
|SAMLSubjectNameIdentifierRequiresNameValue|The 'Name' specified for a SamlNameIdentifier cannot be null or of length 0.|
|TraceCodeIssuanceTokenProviderEndSecurityNegotiation|The IssuanceTokenProvider has completed the security negotiation.|
|TraceCodeSecurityNewServerSessionKeyIssued|A new security session key was issued by the server.|
|SAMLAttributeMissingNameAttributeOnRead|The 'Name' for the SamlAttribute being read is missing or is of length 0.|
|UnknownICryptoType|The ICrypto implementation is not supported.|
|TraceCodeSecurityTokenProviderClosed|Security Token Provider was closed.|
|SAMLUnableToLoadAdvice|Failed to load the \<saml:advice> element.|
|SAMLAuthenticationStatementMissingAuthenticationMethodOnRead|The 'AuthenticationMethod' attribute being read for a SamlAuthenticationStatement is missing or of length 0.|
|UnsupportedTransformAlgorithm|Unsupported transform or canonicalization algorithm.|
|SAMLAudienceRestrictionShouldHaveOneAudience|A SamlAudienceRestrictionCondition must contain at least one Audience (URI).|
|SAMLEvidenceShouldHaveOneAssertion|SamlEvidence must reference at least one SamlAssertion either by Id or reference.|
|SAMLAudienceRestrictionInvalidAudienceValueOnRead|The SamlAudienceRestrictionCondition being read is missing a value in the 'Audience' element.|
|X509ChainBuildFail|The specific X.509 certificate chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode.|
|XDCannotFindValueInDictionaryString|The specific value id not found in the dictionary string.|
|TraceCodeImportSecurityChannelBindingEntry|Starting Security ImportChannelBinding.|
|PrivateKeyExchangeNotSupported|The private key does not support the exchange KeySpec.|
|TokenProviderUnableToGetToken|The specific token provider was unable to provide a security token.|
|SAMLEntityCannotBeNullOrEmpty|The specific SamlAssertion entity cannot be null or empty.|
|SAMLAssertionRequireOneStatement|A SamlAssertion requires at least one statement. Ensure that you have added at least one SamlStatement to the SamlAssertion you are creating.|
|AESInvalidInputBlockSize|The input size must be a multiple of specific bytes.|
|AESCryptAcquireContextFailed|Failed to acquire the CSP context.|
|SAMLAssertionRequireOneStatementOnRead|The SamlAssertion being read did not contain any SamlStatement. A SamlAssertion must contain at least one SamlStatement.|
|TraceCodeSecuritySessionClosedFaultReceived|The client security session received a session closed fault from the server.|
|TraceCodeIssuanceTokenProviderRedirectApplied|IssuanceTokenProvider applied a redirection header.|
|TraceCodeSecuritySessionClosedFaultSendFailure|A failure occurred when sending a security session closed fault to the client.|
|ValueMustBeZero|The value of this argument must be 0.|
|SAMLUnableToResolveSignatureKey|Unable to resolve SecurityKeyIdentifier found in the SamlAssertion signature. The SamlAssertion signature cannot be validated for the specific Issuer.|
|X509IsNotInTrustedStore|The specific X.509 certificate is not in the trusted people store.|
|SAMLElementNotRecognized|The specific element is not supported.|
|SAMLAuthorizationDecisionStatementMissingResourceAttributeOnRead|The 'Resource' attribute for the SamlAuthorizationDecisionStatement being read is missing or of length 0.|
|SamlTokenMissingSignature|The SamlAssertion is not signed. SamlAssertions can be signed by setting the SigningCredentials.|
|ExpectedElementMissing|The expected element with the specific namespace is missing.|
|NoKeyIdentifierClauseFound|No clause of the specific type was found in the SecurityKeyIdentifier.|
|MissingPrivateKey|The private key is not present in the X.509 certificate.|
|UnexpectedEOFFromReader|Unexpected EOF from XML reader.|
|UnsupportedKeyDerivationAlgorithm|The specific key derivation algorithm is not supported.|
|TokenDoesNotSupportKeyIdentifierClauseCreation|The specific token does not support the specific key identifier clause creation.|
|LastMessageNumberExceeded|A violation of sequence number protocol has been detected.|
|SymmetricKeyLengthTooShort|The length of the symmetric key specified is too short.|
|SAMLAuthorityBindingMissingAuthorityKindOnRead|The SamlAuthorityBinding being read was found to contain an 'AuthorityKind' that was missing or of length 0. This is not allowed.|
|XmlTokenBufferIsEmpty|XmlTokenBuffer is empty.|
|InvalidXmlQualifiedName|An Xml qualified name was expected, but an invalid name was found.|
|SAMLAuthorityKindMissingName|The XmlQualifiedName that represents the 'AuthorityKind' in the SamlAuthorityBinding cannot be null or of length 0.|
|AESCryptEncryptFailed|Failed to encrypt the specific data.|
|AuthorizationContextCreated|Authorization Context with the specific id is created.|
|SamlSerializerUnableToReadSecurityKeyIdentifier|The SamlSerializer does not contain a SecurityTokenSerializer capable of reading the SecurityKeyIdentifier. If you are using a custom SecurityKeyIdentifier, you must provide a custom SecurityTokenSerializer.|
|TraceCodeIssuanceTokenProviderServiceTokenCacheFull|IssuanceTokenProvider reduced the service token cache.|
|TraceCodeSecurityTokenProviderOpened|Security Token Provider was opened.|
|PublicKeyNotRSA|The public key is not an RSA key.|
|InvalidReaderState|The specific state is invalid for the supplied input reader.|
|UnableToResolveReferenceUriForSignature|Unable to resolve the specific URI in the signature to compute the digest.|
|EmptyBase64Attribute|An empty value was found for the required base64 attribute name and namespace.|
|SAMLSubjectRequiresConfirmationMethodWhenConfirmationDataOrKeyInfoIsSpecified|The SAML SubjectConfirmation requires a Confirmation method when the Confirmation Data or KeyInfo is specified.|
|SAMLAudienceRestrictionShouldHaveOneAudienceOnRead|The SamlAudienceRestrictionCondition being read must contain at least one 'Audience' value. None were found.|
|TokenProviderUnableToRenewToken|The specific token provider was unable to renew the security token.|
|AESIVLengthNotSupported|The specific bits IV is not supported. Only 128 bits IV is supported.|
|SAMLAuthorityBindingMissingAuthorityKind|A SamlAuthorityBinding must contain an 'AuthorityKind' that is not null.|
|TraceCodeSecuritySessionDemuxFailure|The incoming message is not part of an existing security session.|
|TokenRenewalNotSupported|The specific token provider does not support token renewal.|
|AtLeastOneReferenceRequired|At least one reference is required in a signature.|
|SAMLSignatureAlreadyRead|The signature is already read in the SAML assertion.|
|AlgorithmAndPrivateKeyMisMatch|The algorithm specified and the private key do not match.|
|EmptyTransformChainNotSupported|The empty transform chain is not supported.|
|SspiWrapperEncryptDecryptAssert1|SSPIWrapper::EncryptDecryptHelper\|'offset' is out of range.|
|SspiWrapperEncryptDecryptAssert2|SSPIWrapper::EncryptDecryptHelper\|'size' is out of range.|
|SecurityTokenManagerCannotCreateAuthenticatorForRequirement|The security token manager cannot create a token authenticator for the specific requirement.|
|UnableToCreateKeyedHashAlgorithm|Unable to create a KeyedHashAlgorithm from the specific value for the specific signature algorithm.|
|SAMLUnableToLoadAssertion|The \<saml:assertion> element failed to load.|
|X509FindValueMismatchMulti|The specific X509FindType requires the type of the argument findValue to be one of the 2 values. The argument findValue is of another type.|
|TraceCodeSecurityIdentityDeterminationSuccess|Identity was determined for an EndpointAddress.|
|UndefinedUseOfPrefixAtElement|The specific prefix that is used at the element has no namespace defined.|
|TraceCodeSecuritySessionResponderOperationFailure|Security session operation failed at the server.|
|CannotFindCert|Unable to find the X.509 certificate using the specific search criteria: StoreName , StoreLocation, FindType, FindValue.|
|X509InvalidUsageTime|The specific X.509 certificate usage time is invalid. The usage time does not fall between the required NotBefore time and NotAfter time.|
|TraceCodeSecurityIdentityDeterminationFailure|Identity cannot be determined for an EndpointAddress.|
|AsyncObjectAlreadyEnded|The End method has already been called on this asynchronous result object.|
|ExternalDictionaryDoesNotContainAllRequiredStrings|The external dictionary does not contain definitions for all the required strings. The specific string is not available in the remote dictionary.|
|TraceCodeSecuritySessionKeyRenewalFaultReceived|The client security session received a key renewal fault from the server.|
|SAMLActionNameRequired|The string that represents the SamlAction cannot be null or of length 0.|
|SignatureVerificationFailed|The signature verification failed.|
|TraceCodeSecurityContextTokenCacheFull|The SecurityContextSecurityToken cache is full.|
|SAMLAssertionMissingMajorVersionAttributeOnRead|The MajorVersion for the SamlAssertion being read is missing or is of length 0.|
|SamlAttributeClaimRightShouldBePossessProperty|This SamlAttribute constructor requires that the Right of the Claim have the value System.IdentityModel.Claims.Rights.PossessProperty.|
|AuthorizationPolicyEvaluated|Policy with the specific id is evaluated.|
|SAMLUnableToLoadCondtions|The \<saml:conditions> element failed to load.|
|AESKeyLengthNotSupported|The specific bits key is not supported. Only 128, 192 and 256 bits key is supported.|
|UserNameCannotBeEmpty|The username cannot be empty.|
|AlgorithmAndPublicKeyMisMatch|The algorithm specified and the public key do not match.|
|SAMLUnableToLoadCondtion|The \<saml:conditions> element failed to load.|
|SamlAssertionMissingSigningCredentials|SigningCredentials have not been set on the SamlAssertion. SamlAssertions must be signed, please set a valid SigningCredentials on the SamlAssertion to proceed.|
|SspiPayloadNotEncrypted|The binary data was not encrypted with the SSPI security context.|
|SAMLAuthorizationDecisionShouldHaveOneActionOnRead|The SamlAuthorizationDecisionStatement that is being read does not contain any SamlAction.|
|TraceCodeSecurityBindingSecureOutgoingMessageFailure|The security protocol cannot secure the outgoing message.|
|UndefinedUseOfPrefixAtAttribute|The specific prefix used at the specific attribute has no namespace defined.|
|NoInputIsSetForCanonicalization|No input is set for writing canonicalized output.|
|TraceCodeSecurityPendingServerSessionAdded|A pending security session is added to the server.|
|AsyncCallbackException|An AsyncCallback threw an exception.|
|PrivateKeyNotRSA|The private key is not a RSA key.|
|TraceCodeSecurityClientSessionKeyRenewed|The client security session renewed the session key.|
|SAMLAuthorizationDecisionStatementMissingDecisionAttributeOnRead|The 'Decision' for the SamlAuthorizationDecisionStatement being read is missing or of length 0.|
|SAMLAttributeNameAttributeRequired|The 'Name' specified for a SamlAttribute cannot be null or of length 0.|
|SamlSerializerRequiresExternalSerializers|The SamlSerializer requires a SecurityTokenSerializer to serialize the SecurityKeyIdentifier present in the token.|
|UnableToResolveKeyReference|The token resolver is unable to resolve the specific security key reference.|
|UnsupportedKeyWrapAlgorithm|The specific key wrap algorithm is not supported.|
|SAMLAssertionMissingIssuerAttributeOnRead|The 'Issuer' for the SamlAssertion being read is missing or is of length 0.|
|TraceCodeIssuanceTokenProviderUsingCachedToken|The IssuanceTokenProvider used the cached service token.|
|AESCryptGetKeyParamFailed|Failed to get the specific key parameter.|
|InvalidNamespaceForEmptyPrefix|The namespace is invalid for the empty prefix.|
|AESCipherModeNotSupported|The specific cipher mode is not supported. Only CBC is supported.|
|ArgumentCannotBeEmptyString|The argument must be a non-empty string.|
|SAMLAssertionMissingMinorVersionAttributeOnRead|The MinorVersion for the SamlAssertion being read is missing or is of length 0.|
|SpecifiedStringNotAvailableInDictionary|The specified string is not an entry in the current dictionary.|
|KerberosApReqInvalidOrOutOfMemory|The AP-REQ is invalid or the system does not have enough memory.|
|FailLogonUser|The LogonUser failed for the specified user. Ensure that the user has a valid Windows account.|
|ValueMustBeNonNegative|The value of this argument must be non-negative.|
|X509ValidationFail|The specified X.509 certificate validation failed.|
|TraceCodeSecuritySessionRequestorOperationSuccess|The security session operation completed successfully at the client.|
|SAMLActionNameRequiredOnRead|The string that is read for the SamlAction is missing or is of length 0.|
|KerberosMultilegsNotSupported|Identity is specified as UPN. Authenticating a service running under a user account requires Kerberos multi-legs, which is unsupported.|
|SAMLAssertionIdRequired|The 'assertionId' for a SamlAssertion cannot be null or empty.|
|InvalidOperationForWriterState|The specified operation is invalid in the specified XmlWriter state.|
|CannotValidateSecurityTokenType|The specified security token authenticator cannot validate a token of the specified type.|
|X509FindValueMismatch|The specified X509FindType requires the type of the argument findValue to be the specified value. The argument findValue is of another type.|
|TraceCodeSecurityClientSessionCloseSent|A Close message was sent by the client security session.|
|SuiteDoesNotAcceptAlgorithm|The specified algorithm is not accepted for the specified operation by the specified algorithm suite|
|TraceCodeSecuritySessionRequestorOperationFailure|The client security session operation failed.|
|SAMLUnableToLoadStatement|Failed to load a SamlStatement.|
|InnerReaderMustBeAtElement|The inner reader must be at the element.|
|UnableToCreateTokenReference|Unable to create a security token reference.|
|TraceCodeSecurityBindingIncomingMessageVerified|The security protocol verified the incoming message.|
|ObjectIsReadOnly|The object is read-only.|
|TraceCodeSecurityClientSessionPreviousKeyDiscarded|The client security session discarded the previous session key.|
|SAMLTokenTimeInvalid|The SamlToken is not time valid. The current time is outside the Effective and Expiration time of the token.|
|TraceCodeSecurityIdentityVerificationSuccess|Identity verification succeeded.|
|SigningTokenHasNoKeys|The specified signing token has no keys.|
|TraceCodeSecurityIdentityVerificationFailure|Identity verification failed.|
|AESCryptImportKeyFailed|Failed to import the key material.|
|FailInitializeSecurityContext|InitializeSecurityContent failed. Ensure the service principal name is correct.|
|TraceCodeStreamSecurityUpgradeAccepted|The stream security upgrade was accepted successfully.|
|SAMLAuthorityBindingRequiresLocation|The 'Location' attribute that is specified on the SamlAuthorityBinding cannot be null or of length 0.|
|PublicKeyNotDSA|The public key is not a DSA key.|
|ImpersonationLevelNotSupported|The authentication modes using Kerberos do not support the specified impersonation level. Specify a valid identification or impersonation level.|
|RequiredTargetNotSigned|The element with the specified id is required to be signed, but was not.|
|SAMLAuthenticationStatementMissingAuthenticationInstanceOnRead|The 'AuthenticationInstant' attribute being read for a SamlAuthenticationStatement is missing or of length 0.|
|SAMLEvidenceShouldHaveOneAssertionOnRead|The SamlEvidence being read did not contain either a reference to or an embedded SamlAssertion.|
|LengthOfArrayToConvertMustGreaterThanZero|The length of the array to convert to an integer must be greater than 0.|
|InvalidAsyncResult|Invalid AsyncResult.|
|TraceCodeIssuanceTokenProviderRemovedCachedToken|The IssuanceTokenProvider removed the expired service token.|
|IncorrectUserNameFormat|The username is in an invalid format. The username format must be in the form of 'username' or 'domain\\username'.|
|TraceCodeExportSecurityChannelBindingEntry|Starting Security ExportChannelBinding.|
|UnsupportedInputTypeForTransform|The specified input type is not supported for the transform.|
|CannotFindDocumentRoot|Cannot find the root of the document.|
|XmlBufferQuotaExceeded|The size necessary to buffer the XML content exceeded the buffer quota.|
|TraceCodeSecuritySessionClosedResponseSendFailure|A failure occurred when sending a security session Close response to the client.|
|UnableToResolveReferenceInSamlSignature|Unable to resolve the specified reference in the SAML signature with AssertionID.|
|SAMLSubjectRequiresNameIdentifierOrConfirmationMethod|A SamlSubject requires that a 'NameIdentifier' or 'ConfirmationMethod' be specified. Both were missing.|
|SAMLAttributeMissingNamespaceAttributeOnRead|The 'Namespace' for the SamlAttribute being read is missing or of length 0.|
|SAMLSubjectConfirmationClauseMissingConfirmationMethodOnRead|A 'ConfirmationMethod' cannot be found on the SamlSubjectConfirmation being read.|
|SecurityTokenRequirementHasInvalidTypeForProperty|The token requirement has an unexpected type for the specified property. The expected property type is of another value.|
|TraceCodeNegotiationTokenProviderAttached|NegotiationTokenProvider was attached.|
|TraceCodeSpnegoClientNegotiationCompleted|SpnegoTokenProvider completed SSPI negotiation.|
|SAMLUnableToLoadUnknownElement|The selected SamlSerializer is unable to deserialize this element. Please register a custom SamlSerializer to deserialize custom elements.|
|CreateSequenceRefused|The create sequence request has been refused by the RM Destination.|
|TraceCodeSecuritySessionRedirectApplied|The client security session was redirected.|
|SecurityTokenRequirementDoesNotContainProperty|The token requirement does not contain the specified property.|
|SAMLAttributeValueCannotBeNull|One of the attributeValues found in the SamlAttribute was found to be a null value. Ensure that lists are not null when creating the SamlAttribute.|
|ValueMustBeGreaterThanZero|The value of this argument must be greater than 0.|
|TraceCodeNegotiationAuthenticatorAttached|NegotiationTokenAuthenticator was attached.|
|ValueMustBePositive|The value of this argument must be positive.|
|SAMLAuthorizationDecisionShouldHaveOneAction|A SamlAuthorizationDecisionStatement must have at least one SamlAction.|
|TraceCodeSecurityTokenAuthenticatorClosed|Security Token Authenticator was closed.|
|TraceCodeSecurityAuditWrittenSuccess|The security audit log is written successfully.|
|PrivateKeyNotDSA|The private key is not a DSA key.|
|MessageNumberRollover|The maximum sequence number for this sequence has been exceeded.|
|AESPaddingModeNotSupported|The specified padding mode is not supported. Only PKCS7 and ISO10126 is supported.|
|SAMLSubjectRequiresNameIdentifierOrConfirmationMethodOnRead|The required 'NameIdentifier' and the 'ConfirmationMethod' elements are not found for the SamlSubject being read.|
|TraceCodeSecurityAuditWrittenFailure|A failure occurred while writing to the security audit log.|
|UnsupportedCryptoAlgorithm|The specified crypto algorithm is not supported in this context.|
|SigningTokenHasNoKeysSupportingTheAlgorithmSuite|The signing token has no key that supports the specified algorithm suite.|
|SAMLNameIdentifierMissingIdentifierValueOnRead|The 'Identifier' string for the SamlNameIdentifier being read is missing.|
|SAMLSubjectStatementRequiresSubject|The SAML Subject Statement requires a SAML subject to be specified.|
|TraceCodeSslClientCertMissing|The remote SSL client failed to provide a required certificate.|
|SAMLTokenVersionNotSupported|The specified major version and minor version are not supported.|
|TraceCodeConfigurationIsReadOnly|The configuration is read-only.|
|TraceCodeSecuritySessionRenewFaultSendFailure|A failure occurred when sending a renewal fault on the security session key to the client.|
|TraceCodeSecurityInactiveSessionFaulted|An inactive security session was faulted by the server.|
|SAMLUnableToLoadAttribute|Failed to load a SamlAttribute.|
|Psha1KeyLengthInvalid|The specified PSHA1 key length is invalid.|
|KeyIdentifierCannotCreateKey|This SecurityKeyIdentifier does not have any clause that can create a key.|
|X509IsInUntrustedStore|The specified X.509 certificate is in an untrusted certificate store.|
|UnexpectedXmlChildNode|The specified XML child node of specified type is unexpected for the specified element.|
|TokenDoesNotMeetKeySizeRequirements|The key size requirements for the specified algorithm suite are not met by the specified token.|
|TraceCodeSecuritySessionRequestorStartOperation|A security session operation was started at the client.|
|InvalidHexString|Invalid hexadecimal string format.|
|SamlAttributeClaimResourceShouldBeAString|This SamlAttribute constructor requires that the resource of the claim is of type 'string'.|
|SamlSigningTokenNotFound|The SamlAssertion is signed but the token that signed the SamlAssertion cannot be found. Ensure that the SecurityTokenResolver contains the token that signed the SamlAssertion.|
|TraceCodeSecuritySpnToSidMappingFailure|The ServicePrincipalName could not be mapped to a SecurityIdentifier.|
|UnableToCreateSignatureFormatterFromAsymmetricCrypto|Unable to create a signature formatter for the specified algorithm from the specified asymmetric crypto.|
|TraceCodeSecurityServerSessionClosedFaultSent|The server security session sent a session closed fault to the client.|
|UnableToFindPrefix|Unable to find the prefix for the specified visibly used prefix at the specified element.|
|TraceCodeSecurityTokenAuthenticatorOpened|Security Token Authenticator was opened.|
|RequiredAttributeMissing|The specified attribute is required on the specified element.|
|LocalIdCannotBeEmpty|The localId cannot be empty. Specify a valid 'localId'.|
|ValueMustBeInRange|The value of this argument must fall within the specified range.|
|TraceCodeIssuanceTokenProviderBeginSecurityNegotiation|IssuanceTokenProvider started a new security negotiation.|
|InvalidNtMapping|The specified X.509 certificate cannot be mapped to a Windows account. The UPN subject alternate name is required.|
|AESCryptSetKeyParamFailed|Failed to set the specified key parameter.|
|TraceCodeSecuritySessionClosedResponseReceived|The client security session received a closed response from the server.|
|UnableToCreateSignatureDeformatterFromAsymmetricCrypto|Unable to create a signature deformatter for the specified algorithm from the specified asymmetric crypto.|
|TraceCodeIdentityModelAsyncCallbackThrewException|An asynchronous callback threw an exception.|
|LengthMustBeGreaterThanZero|The length of this argument must be greater than 0.|
|FoundMultipleCerts|Found multiple X.509 certificates using the specified search criteria: StoreName, StoreLocation, FindType, FindValue. Provide a more specific find value.|
|AtLeastOneTransformRequired|The Transforms element must contain at least one transform.|
|SAMLTokenNotSerialized|The SamlAssertion could not be serialized to XML. Please see inner exception for details.|
|TraceCodeSecurityBindingOutgoingMessageSecured|The security protocol secured the outgoing message.|
|KeyIdentifierClauseDoesNotSupportKeyCreation|This SecurityKeyIdentifierClause does not support key creation.|
|UnableToResolveTokenReference|The token resolver is unable to resolve the specified token reference.|
|UnsupportedEncryptionAlgorithm|The specified encryption algorithm is not supported.|
|SamlSerializerUnableToWriteSecurityKeyIdentifier|The SamlSerializer does not contain a SecurityTokenSerializer capable of serializing the given SecurityKeyIdentifier. If you are using a custom SecurityKeyIdentifier, you must provide a custom SecurityTokenSerializer.|
|SAMLAttributeShouldHaveOneValue|No attribute values were found. A SamlAttribute attribute must have at least one attribute value.|
|TraceCodeSecurityBindingVerifyIncomingMessageFailure|Security protocol cannot verify the incoming message.|
|SamlSigningTokenMissing|The SamlAssertion passed to the SamlSecurityTokenAuthenticator does not contain a signing token.|
|NoPrivateKeyAvailable|No private key is available.|
|ValueMustBeOne|The value of this argument must be 1.|
|TraceCodeSecurityPendingServerSessionRemoved|A pending security session was made active by the server.|
|TraceCodeImportSecurityChannelBindingExit|Finished Security ImportChannelBinding.|
|X509CertStoreLocationNotValid|The StoreLocation must be either LocalMachine or CurrentUser.|
|SettingdMayBeModifiedOnlyWhenTheWriterIsInStartState|The writer settings may be modified only when the writer is in the Start state.|
|ArgumentInvalidCertificate|The certificate is invalid.|
|DigestVerificationFailedForReference|Digest verification failed for the specified Reference.|
|SAMLAuthorityBindingRequiresBinding|The 'Binding' attribute specified on the SamlAuthorityBinding cannot be null or of length 0.|
|AESInsufficientOutputBuffer|The output buffer must be greater than the specified bytes.|
|SAMLAuthorityBindingMissingBindingOnRead|The 'Binding' attribute for the SamlAuthorityBinding being read is missing or of length 0.|
|SAMLAuthorityBindingInvalidAuthorityKind|The SamlAuthorityBinding being read has an invalid AuthorityKind. The format of the AuthorityKind must be a QName.|
|ProvidedNetworkCredentialsForKerberosHasInvalidUserName|The NetworkCredentials provided for the Kerberos Token does not have a valid UserName.|
|SSPIPackageNotSupported|The specified SSPI package is not supported.|
|TokenCancellationNotSupported|The specified token provider does not support token cancellation.|
|UnboundPrefixInQName|An unbound prefix is used in the specified qualified name.|
|SAMLAuthorizationDecisionResourceRequired|The 'resource' specified to the SamlAuthorizationDecisionStatement cannot be null or of length 0.|
|TraceCodeSecurityNegotiationProcessingFailure|Service security negotiation processing failure.|
|SAMLAssertionIssuerRequired|The 'Issuer' specified for a SamlAssertion cannot be null or empty.|
|UnableToCreateHashAlgorithmFromAsymmetricCrypto|Unable to create a HashAlgorithm for the specified algorithm from the specified asymmetric crypto.|
|SamlUnableToExtractSubjectKey|The SecurityKeyIdentifier that was found in the SamlSubject cannot be resolved to a SecurityToken. The SecurityTokenResolver must contain a SecurityToken that the SecurityKeyIdentifier resolves to.|
|ChildNodeTypeMissing|The specified XML element does not have a child of the specified type.|
|TraceCodeSecurityPendingServerSessionClosed|The pending security session was closed by the server.|
|TraceCodeSecuritySessionCloseResponseSent|The server security session sent a Close response to the client.|
|TraceCodeSecurityIdentityHostNameNormalizationFailure|The HostName portion of an endpoint address cannot be normalized.|
|FailAcceptSecurityContext|The AcceptSecurityContext failed.|
|EmptyXmlElementError|The specified element cannot be empty.|
|PrefixNotDefinedForNamespace|A prefix for the specified namespace is not defined in this context and cannot be declared.|
|SAMLAuthorizationDecisionHasMoreThanOneEvidence|The SamlAuthorizationDecisionStatement being read was found to contain more than one Evidence. This is not allowed.|
|SamlTokenAuthenticatorCanOnlyProcessSamlTokens|The SamlSecurityTokenAuthenticator can only process SamlSecurityTokens. The specified SecurityTokenType was received .|
|SAMLAttributeStatementMissingAttributeOnRead|The SamlAttributeStatement being read does not contain any 'SamlAttribute' elements. This is not allowed.|
|CouldNotFindNamespaceForPrefix|Cannot look up the namespace for the specified prefix.|
|TraceCodeExportSecurityChannelBindingExit|Finished Security ExportChannelBinding.|
|AESCryptDecryptFailed|Failed to decrypt the specified data.|
|SAMLAttributeNamespaceAttributeRequired|The 'Namespace' specified for a SamlAttribute cannot be null or of length 0.|
|TraceCodeSpnegoServiceNegotiationCompleted|SpnegoTokenAuthenticator completed SSPI negotiation.|
|TraceCodeSecurityServerSessionRenewalFaultSent|The server security session sent a key renewal fault to the client.|
|AlgorithmMismatchForTransform|A mismatch occurred on the algorithm for the transform.|
|UserNameAuthenticationFailed|Authentication of a username/password using the specified mechanism failed. User is not authenticated.|
|SamlInvalidSigningToken|The SamlAssertion has been signed with a token that was not validated according to the protocol. If you are using X.509 certificates, examine your validation semantics.|
|TraceCodeSecurityServerSessionKeyUpdated|The security session key was updated by the server.|
|TraceCodeSecurityServerSessionCloseReceived|The server security session received a Close message from the client.|
|SAMLAuthenticationStatementMissingSubject|The SamlAuthenticationStatement is missing the required SamlSubjectStatement.|
|UnexpectedEndOfFile|Unexpected end of file.|
|UnsupportedAlgorithmForCryptoOperation|The specified algorithm is not supported for the specified operation.|
|XmlLangAttributeMissing|The required xml:lang attribute is missing.|
|TraceCodeSecurityImpersonationSuccess|Security Impersonation succeeded at the server.|
|SAMLAuthorityBindingMissingLocationOnRead|The 'Location' attribute for the SamlAuthorityBinding being read is missing or of length 0.|
|SAMLAttributeStatementMissingSubjectOnRead|The 'SamlSubject' element for the SamlAttributeStatement is missing.|
|SAMLAuthorizationDecisionStatementMissingSubjectOnRead|The 'SamlSubject' element for SamlAuthorizationDecisionStatement being read is missing.|
|SAMLBadSchema|While reading a SamlAssertion this specified element was found not to comply with the schema.|
|SAMLAssertionIDIsInvalid|The specified 'assertionId' for a SamlAssertion must start with a letter or '_'.|
|TraceCodeSecurityActiveServerSessionRemoved|An active security session was removed by the server.|
|UnableToCreateKeyedHashAlgorithmFromSymmetricCrypto|Unable to create a keyedHashAlgorithm for the specified algorithm from the specified symmetric crypto.|
|SAMLAuthenticationStatementMissingAuthenticationMethod|The 'AuthenticationMethod' specified for a SamlAuthenticationStatement cannot be null or of length 0.|
|TraceCodeSecurityImpersonationFailure|Security impersonation failed at the server.|
|Default|(Default)|
|UnsupportedNodeTypeInReader|The specified node type with the specified name is not supported.|
| 103.67541 | 270 | 0.840834 | eng_Latn | 0.906361 |
8a7dac0e2490af37bbee966573f0d0d791582abe | 671 | md | Markdown | README.md | keedio/openshift-kafka | f816e5082f76ac7d5fbe3acfc2190b103de3796e | [
"Apache-2.0"
] | 4 | 2017-03-06T15:58:49.000Z | 2018-07-09T15:56:49.000Z | README.md | keedio/openshift-kafka | f816e5082f76ac7d5fbe3acfc2190b103de3796e | [
"Apache-2.0"
] | 1 | 2017-05-24T13:09:50.000Z | 2017-05-24T13:25:23.000Z | README.md | keedio/openshift-kafka | f816e5082f76ac7d5fbe3acfc2190b103de3796e | [
"Apache-2.0"
] | 3 | 2017-04-24T12:59:07.000Z | 2020-01-21T16:04:14.000Z | # OpenShift Kafka bucket
Proof of concept with Apache Kafka and Apache ZooKeeper on OpenShift v3.4
Architecture:
* 2 Kafka pods (brokers)
* 1 Zookeeper pod
## Quick start
First of all, you should have persistent storage assigned.
1. Create a new OpenShift project
2. Import the templates into your OpenShift project through the UI. Once you have done this, the build and deployment should start automatically.
3. Once the deployment has finished, create a route through the UI (a CLI sketch is shown below).
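The same quick start can also be approximated from the command line. This is only a sketch: the project name, template file name, and service name below are placeholders, not names taken from this repository.
```bash
oc new-project kafka-poc                                      # 1. create a new OpenShift project
oc process -f openshift-kafka-template.yaml | oc create -f -  # 2. instantiate the imported template objects
oc expose service kafka                                       # 3. create a route once the deployment has finished
```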
## Openshift UI
From the UI, we can test our solution:

| 25.807692 | 137 | 0.791356 | eng_Latn | 0.920516 |
8a7e2a4f5a96903e5a2088cba49cd2cc2c469862 | 2,322 | markdown | Markdown | _posts/2016-02-29-sea-hag.markdown | beaushinkle/bestiary | 03e8bfe28bf72038a2735d8bada10eefaf7dfe9d | [
"MIT"
] | 4 | 2019-10-29T13:09:20.000Z | 2022-02-02T03:57:17.000Z | _posts/2016-02-29-sea-hag.markdown | beaushinkle/bestiary | 03e8bfe28bf72038a2735d8bada10eefaf7dfe9d | [
"MIT"
] | null | null | null | _posts/2016-02-29-sea-hag.markdown | beaushinkle/bestiary | 03e8bfe28bf72038a2735d8bada10eefaf7dfe9d | [
"MIT"
] | 6 | 2019-09-02T00:13:18.000Z | 2021-08-01T15:30:19.000Z | ---
layout: post
title: "Sea Hag"
date: 2016-02-29
tags: [medium, fey, cr2]
---
**Medium fey, chaotic evil**
**Armor Class** 14 (natural armor)
**Hit Points** 52 (7d8 + 21)
**Speed** 30 ft., swim 40 ft.
| STR | DEX | CON | INT | WIS | CHA |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| 16 (+3) | 13 (+1) | 16 (+3) | 12 (+1) | 12 (+1) | 13 (+1) |
**Senses** darkvision 60 ft., passive Perception 11
**Languages** Aquan, Common, Giant
**Challenge** 2 (450 XP)
***Amphibious.*** The hag can breathe air and water.
***Horrific Appearance.*** Any humanoid that starts its turn within 30 feet of the hag and can see the hag’s true form must make a DC 11 Wisdom saving throw. On a failed save, the creature is frightened for 1 minute. A creature can repeat the saving throw at the end of each of its turns, with disadvantage if the hag is within line of sight, ending the effect on itself on a success. If a creature’s saving throw is successful or the effect ends for it, the creature is immune to the hag’s Horrific Appearance for the next 24 hours. Unless the target is surprised or the revelation of the hag’s true form is sudden, the target can avert its eyes and avoid making the initial saving throw. Until the start of its next turn, a creature that averts its eyes has disadvantage on attack rolls against the hag.
**Actions**
***Claws.*** Melee Weapon Attack: +5 to hit, reach 5 ft., one target. Hit: 10 (2d6 + 3) slashing damage.
***Death Glare.*** The hag targets one frightened creature she can see within 30 feet of her. If the target can see the hag, it must succeed on a DC 11 Wisdom saving throw against this magic or drop to 0 hit points.
***Illusory Appearance.*** The hag covers herself and anything she is wearing or carrying with a magical illusion that makes her look like an ugly creature of her general size and humanoid shape. The effect ends if the hag takes a bonus action to end it or if she dies. The changes wrought by this effect fail to hold up to physical inspection. For example, the hag could appear to have no claws, but someone touching her hand might feel the claws. Otherwise, a creature must take an action to visually inspect the illusion and succeed on a DC 16 Intelligence (Investigation) check to discern that the hag is disguised.
| 62.756757 | 806 | 0.708441 | eng_Latn | 0.998846 |
8a7eec73625052ebd88f26bcebae9fd623ce93ae | 4,840 | md | Markdown | docs/odbc/reference/develop-driver/odbc-driver-architecture.md | MRGRD56/sql-docs.ru-ru | 4994c363fd2f95812769d48d881fd877abe35738 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-09-23T01:19:32.000Z | 2020-09-29T15:21:34.000Z | docs/odbc/reference/develop-driver/odbc-driver-architecture.md | MRGRD56/sql-docs.ru-ru | 4994c363fd2f95812769d48d881fd877abe35738 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/odbc/reference/develop-driver/odbc-driver-architecture.md | MRGRD56/sql-docs.ru-ru | 4994c363fd2f95812769d48d881fd877abe35738 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Архитектура драйвера ODBC
title: Архитектура драйвера ODBC | Документация Майкрософт
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: conceptual
helpviewer_keywords:
- ODBC drivers [ODBC], architecture
ms.assetid: 21a62c7c-192e-4718-a16e-aa12b0de4419
author: David-Engel
ms.author: v-daenge
ms.openlocfilehash: 1789d5799ed9eb15ace7ea263d1a5804c8e86e74
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 08/17/2020
ms.locfileid: "88476250"
---
# <a name="odbc-driver-architecture"></a>Архитектура драйвера ODBC
Модули записи драйверов должны знать, что архитектура драйвера может повлиять на то, может ли приложение использовать специфический для СУБД SQL.

[Драйверы на основе файлов](../../../odbc/reference/file-based-drivers.md)
Когда драйвер обращается непосредственно к физическим данным, драйвер действует как драйвер, так и источник данных. Драйвер должен обрабатывать как вызовы ODBC, так и инструкции SQL. Разработчики файловых драйверов должны писать собственные ядра СУБД.
[Драйверы на основе СУБД](../../../odbc/reference/dbms-based-drivers.md)
Если для доступа к физическим данным используется отдельное ядро СУБД, драйвер обрабатывает только вызовы ODBC. Он передает инструкции SQL ядру СУБД для обработки.
[Сетевая архитектура](../../../odbc/reference/network-example.md)
Конфигурации файлов и СУБД ODBC могут существовать в одной сети.
[Другие архитектуры драйверов](../../../odbc/reference/other-driver-architectures.md)
Если драйвер необходим для работы с различными источниками данных, его можно использовать по промежуточного слоя. Архитектура разнородного механизма объединения может привести к отображению драйвера в качестве диспетчера драйверов. Драйверы также могут быть установлены на серверах, где они могут совместно использоваться несколькими клиентами.
Дополнительные сведения об архитектуре драйвера см. в разделе Архитектура [диспетчера драйверов](../../../odbc/reference/the-driver-manager.md) и [драйвера](../../../odbc/reference/driver-architecture.md) раздела об [архитектуре ODBC](../../../odbc/reference/odbc-architecture.md).
Дополнительные сведения о проблемах с драйверами можно найти в расположениях, описанных в следующей таблице.
|Проблема|Раздел|Расположение|
|-----------|-----------|--------------|
|Проблемы совместимости с приложениями и драйверами|[Совместимость приложений и драйверов](../../../odbc/reference/develop-app/application-and-driver-compatibility.md)|[Рекомендации по программированию](../../../odbc/reference/develop-app/programming-considerations.md)в справочнике программиста по ODBC|
|Написание драйверов ODBC|[Написание драйверов ODBC 3.x](../../../odbc/reference/develop-app/writing-odbc-3-x-drivers.md)|[Рекомендации по программированию](../../../odbc/reference/develop-app/programming-considerations.md)в справочнике программиста по ODBC|
|Рекомендации по драйверам для обеспечения обратной совместимости|[Рекомендации по обеспечению обратной совместимости с драйвером](../../../odbc/reference/appendixes/appendix-g-driver-guidelines-for-backward-compatibility.md)|[Приложение ж. рекомендации по использованию драйверов для обеспечения обратной совместимости](../../../odbc/reference/appendixes/appendix-g-driver-guidelines-for-backward-compatibility.md)в справочнике программиста по ODBC|
|Подключение к драйверу|[Выбор источника данных или драйвера](../../../odbc/reference/develop-app/choosing-a-data-source-or-driver.md)|[Подключение к источнику данных или драйверу](../../../odbc/reference/develop-app/connecting-to-a-data-source-or-driver.md), Справочник программиста по ODBC|
|Определение драйверов|[Просмотр драйверов](../../../odbc/admin/viewing-drivers.md)|[Просмотр драйверов](../../../odbc/admin/viewing-drivers.md)в интерактивной справке администратора источников данных Microsoft ODBC|
|Включение пулов соединений|[Объединение соединений ODBC](../../../odbc/reference/develop-app/driver-manager-connection-pooling.md)|[Подключение к источнику данных или драйверу](../../../odbc/reference/develop-app/connecting-to-a-data-source-or-driver.md), Справочник программиста по ODBC|
|Проблемы с подключением и драйверами Юникода/ANSI|[Драйверы Юникода](../../../odbc/reference/develop-app/unicode-drivers.md)|[Рекомендации по программированию](../../../odbc/reference/develop-app/programming-considerations.md)в справочнике программиста по ODBC|
## <a name="see-also"></a>См. также
[Разработка драйвера ODBC](../../../odbc/reference/develop-driver/developing-an-odbc-driver.md)
| 80.666667 | 452 | 0.776033 | rus_Cyrl | 0.682179 |
8a7f2d2012eee636dde1cc7b226f7d2457d76fdc | 104 | md | Markdown | _drafts/template.md | liangddyy/liangddyy.github.io | 00d61f17d0dac1df9658ea45ea32b59720cdb6b0 | [
"MIT"
] | 1 | 2020-09-01T07:59:06.000Z | 2020-09-01T07:59:06.000Z | _drafts/template.md | liangddyy/liangddyy.github.io | 00d61f17d0dac1df9658ea45ea32b59720cdb6b0 | [
"MIT"
] | 9 | 2017-08-24T14:24:58.000Z | 2018-09-22T13:47:47.000Z | _drafts/template.md | liangddyy/liangddyy.github.io | 00d61f17d0dac1df9658ea45ea32b59720cdb6b0 | [
"MIT"
] | null | null | null | ---
layout: post
title: title
categories: [cate1, cate2]
description:
keywords: 开发,
---
Content here
| 10.4 | 26 | 0.692308 | eng_Latn | 0.785998 |
8a7f4996d189fe3c52eb224ce4b7195a91cd1805 | 2,143 | md | Markdown | wdk-ddi-src/content/d3dumddi/ne-d3dumddi-d3dddi_markertype.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3dumddi/ne-d3dumddi-d3dddi_markertype.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3dumddi/ne-d3dumddi-d3dddi_markertype.md | DeviceObject/windows-driver-docs-ddi | be6b8ddad4931e676fb6be20935b82aaaea3a8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NE:d3dumddi.D3DDDI_MARKERTYPE
title: D3DDDI_MARKERTYPE (d3dumddi.h)
description: Indicates the type of Event Tracing for Windows (ETW) marker event that the user-mode display driver supports.
old-location: display\d3dddi_markertype.htm
tech.root: display
ms.assetid: 55A48F87-B96C-42E7-B9B4-3C829097CAE9
ms.date: 05/10/2018
keywords: ["D3DDDI_MARKERTYPE enumeration"]
ms.keywords: D3DDDIMT_NONE, D3DDDIMT_PROFILE, D3DDDI_MARKERTYPE, D3DDDI_MARKERTYPE enumeration [Display Devices], d3dumddi/D3DDDIMT_NONE, d3dumddi/D3DDDIMT_PROFILE, d3dumddi/D3DDDI_MARKERTYPE, display.d3dddi_markertype
f1_keywords:
- "d3dumddi/D3DDDI_MARKERTYPE"
- "D3DDDI_MARKERTYPE"
req.header: d3dumddi.h
req.include-header: D3d10umddi.h
req.target-type: Windows
req.target-min-winverclnt: Windows 8.1
req.target-min-winversvr: Windows Server 2012 R2
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- D3dumddi.h
api_name:
- D3DDDI_MARKERTYPE
targetos: Windows
req.typenames: D3DDDI_MARKERTYPE
---
# D3DDDI_MARKERTYPE enumeration
## -description
Indicates the type of Event Tracing for Windows (ETW) marker event that the user-mode display driver supports.
## -enum-fields
### -field D3DDDIMT_NONE
No marker type is supported. This type is set on creation of the display device.
### -field D3DDDIMT_PROFILE
Profile mode, where the driver estimates the length of time the GPU takes to execute certain operations. The context submits GPU work for single-threaded user-mode DDIs. In this case, each time stamp denotes the end of GPU work.
See Remarks of the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/d3dumddi/nc-d3dumddi-pfnd3dddi_setmarkermode">pfnSetMarkerMode</a> function for more info.
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/d3dumddi/nc-d3dumddi-pfnd3dddi_setmarkermode">pfnSetMarkerMode</a>
| 26.7875 | 229 | 0.759683 | eng_Latn | 0.318697 |
8a801935df4bfa6b1b0589548ee786048f2f097e | 1,954 | md | Markdown | includes/active-directory-develop-guidedsetup-aspnetwebapp-configure.md | gitruili/azure-docs.zh-cn | 4853c7dd56dcb4f2609e927196d2e25b6026a5f8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/active-directory-develop-guidedsetup-aspnetwebapp-configure.md | gitruili/azure-docs.zh-cn | 4853c7dd56dcb4f2609e927196d2e25b6026a5f8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/active-directory-develop-guidedsetup-aspnetwebapp-configure.md | gitruili/azure-docs.zh-cn | 4853c7dd56dcb4f2609e927196d2e25b6026a5f8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: include 文件
description: include 文件
services: active-directory
documentationcenter: dev-center-name
author: andretms
manager: mtillman
editor: ''
ms.assetid: 820acdb7-d316-4c3b-8de9-79df48ba3b06
ms.service: active-directory
ms.devlang: na
ms.topic: include
ms.tgt_pltfrm: na
ms.workload: identity
ms.date: 05/04/2018
ms.author: andret
ms.custom: include file
ms.openlocfilehash: f1dc23729f32a7a9535b887acf638cf5464c24bd
ms.sourcegitcommit: c851842d113a7078c378d78d94fea8ff5948c337
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/18/2018
ms.locfileid: "36206109"
---
## <a name="register-your-application"></a>注册应用程序
若要注册应用程序并将应用程序注册信息添加到解决方案,有两个选项:
### <a name="option-1-express-mode"></a>选项 1:快速模式
可以通过执行以下操作快速注册应用程序:
1. 通过 [Microsoft 应用程序注册门户](https://apps.dev.microsoft.com/portal/register-app?appType=serverSideWebApp&appTech=aspNetWebAppOwin&step=configure)注册应用程序
2. 输入应用程序的名称和电子邮件
3. 确保选中“指导式设置”选项
4. 按照说明向应用程序添加重定向 URL
### <a name="option-2-advanced-mode"></a>选项 2:高级模式
若要注册应用程序并将应用程序注册信息添加到解决方案,请执行以下操作:
1. 转到 [Microsoft 应用程序注册门户](https://apps.dev.microsoft.com/portal/register-app)注册应用程序
2. 输入应用程序的名称和电子邮件
3. 确保取消选中“指导式设置”选项
4. 单击 `Add Platform`,并选择 `Web`
5. 返回 Visual Studio,在解决方案资源管理器中选择项目并查看“属性”窗口(如果看不到“属性”窗口,请按 F4)
6. 将“已启用 SSL”更改为 `True`
7. 在 Visual Studio 中右键单击该项目,然后选择“属性”和“Web”选项卡。在服务器部分中,将项目 URL 更改为 SSL URL
8. 复制 SSL URL 并将此 URL 添加到注册门户重定向列表中的重定向 URL 列表:<br/><br/><br />
9. 在根文件夹内 `web.config` 中的 `configuration\appSettings` 部分之下添加以下内容:
```xml
<add key="ClientId" value="Enter_the_Application_Id_here" />
<add key="redirectUri" value="Enter_the_Redirect_URL_here" />
<add key="Tenant" value="common" />
<add key="Authority" value="https://login.microsoftonline.com/{0}/v2.0" />
```
10. 将 `ClientId` 替换为刚注册的应用程序 ID
11. 用项目的 SSL URL 替换 `redirectUri`
| 31.516129 | 161 | 0.755374 | yue_Hant | 0.482698 |
8a80ab91a2ec2a6d6d427d7237fe501e3662c8d7 | 1,996 | md | Markdown | content/series/redis/04-advanced/improve-03.md | szthanatos/academic-kickstart | ad63dd837d5e47788e946a6f53c72dd5a6f00938 | [
"MIT"
] | null | null | null | content/series/redis/04-advanced/improve-03.md | szthanatos/academic-kickstart | ad63dd837d5e47788e946a6f53c72dd5a6f00938 | [
"MIT"
] | null | null | null | content/series/redis/04-advanced/improve-03.md | szthanatos/academic-kickstart | ad63dd837d5e47788e946a6f53c72dd5a6f00938 | [
"MIT"
] | null | null | null | ---
title: "优化指南 - 运维"
linktitle: "运维"
toc: true
type: book
date: 2019-03-18T15:04:35+08:00
draft: false
weight: 63
---
## 监控
为了发现前面所说的问题,需要开发 / 运维人员不断的监控 redis 运行情况。
### redis-cli 查询
部分信息无法通过 redis 命令直接获取,但是可以通过 `redis-cli [参数]` 获取:
`–-bigkeys`
后台 scan 出每种数据类型中较大的 key
`--latency`
服务端响应延时
### slowlog 命令
在客户端执行 `slowlog get [n]` 可以获取最慢的 n 条执行命令的记录
### info 命令
返回服务器信息,性能监测的时候注意其中的几个部分:
**memory**:`mem_fragmentation_ratio`
内存碎片率,`used_memory_rss`(系统分配内存总量) 和 `used_memory`(Redis 分配器分配的内存总量) 的比值。
在 1-1.5 之间都是合理值,<1 则说明内存已经占满,正在和硬盘进行内存交换,性能下降严重,>1.5 则说明碎片过多需要清理了。
**stats**:`latest_fork_usec`
最近一次 fork 操作耗时
**persistence**:`aof_delayed_fsync`
被延迟的 fsync 调用数量
**clients**:`connected_clients`,`blocked_clients`
已连接客户端的数量和正在等待阻塞命令的客户端的数量
### monitor 命令
可以用来监测一个节点一段时间内执行的命令,从而统计出热点 key。但是 monitor 自己也是有内存占用的,所以不能频繁、持续的使用。
## 部署
### 网络
影响 redis 性能的最主要因素是网络。
按官方基准测试来说,对于 10kb 以内的数据,redis 的处理能力在 100000q/s 以上。
那么假设每次 set/get 的 4kb 大小的字符串,这时占用的带宽就有 3.2 Gbit/s ,千兆网卡 (1 Gbit/s) 就不够用了,得换万兆网卡 (10 Gbit/s) 才能满足需求,可见想跑满 redis 的 CPU 计算力对网络的要求是很夸张的。
当然,这个例子比较极端,redis 官方推荐的网络环境下每次传输的包最好不超过一个 `MTU`(大约 1500 bytes)。
如果完全抛开网络因素,客户端服务端都在单机上时,使用 Unix 域套接字 (`Unix domain sockets`,也叫 `IPC(inter-precess communication) socket` 进程间通信套接字) 替换默认的 TCP/IP 连接方式,能额外再有 50% 的吞吐量提升(不过在大量使用 pipeline 的情况下就没差这么多了)。
启用 Unix 域套接字需要在配置文件中取消注释:
```bash
# unixsocket 路径
unixsocket /tmp/redis.sock
# unixsocket 权限
unixsocketperm 700
```
之后就可以在客户端使用指定方式连接了,以 python 客户端为例:
```python
import redis
redis_connect = redis.Redis(unix_socket_path='/tmp/redis.sock')
pass
```
### CPU
redis 更倾向于具有更大缓存而不是更多核的 CPU,在多核的情况下,redis 性能会受 NUMA 配置和进程所处位置的影响,指定客户端和服务器使用同一 CPU 的两个不同核心可以使从 L3 缓存获得的收益最大化。
另外,redis 在 Inter 和 AMD 的 CPU 上的表现也有差别,在某些情况下在 AMD 的 CPU 上性能可能只有 Inter 的一半。
### 内存
只有在面对大于 10KB 的数据的时候,内存频率 / 带宽才会影响 redis 性能,所以一般不用去考虑。内存大小只会影响能存放的数据量。
### 连接数
redis 可以在 60000 多个连接时维持 50000 q/s 的性能,但是根据官方测试,具有 30000 个连接的 redis 实例只能处理 100 个连接实例可实现的吞吐量的一半。
### 虚拟化
虚拟机中的 redis 性能肯定是低于实机上的,系统调用和中断上面浪费的太多。
| 18.654206 | 180 | 0.754509 | yue_Hant | 0.467358 |
8a80c622274b843143ae8e973892e1a75149408b | 6,433 | md | Markdown | support/windows-server/group-policy/force-authoritative-non-authoritative-synchronization.md | 0-SamboNZ-0/SupportArticles-docs | 1bd812354cf0e0f42aa5dedd0252e415ddade6bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | support/windows-server/group-policy/force-authoritative-non-authoritative-synchronization.md | 0-SamboNZ-0/SupportArticles-docs | 1bd812354cf0e0f42aa5dedd0252e415ddade6bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | support/windows-server/group-policy/force-authoritative-non-authoritative-synchronization.md | 0-SamboNZ-0/SupportArticles-docs | 1bd812354cf0e0f42aa5dedd0252e415ddade6bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Force synchronization for Distributed File System Replication (DFSR) replicated sysvol replication
description: Describes how to restart DFSR on a server, either authoritatively or non-authoritative.
ms.date: 09/08/2020
author: Deland-Han
ms.author: delhan
manager: dscontentpm
audience: ITPro
ms.topic: troubleshooting
ms.prod: windows-server
localization_priority: medium
ms.reviewer: kaushika, nedpyle
ms.prod-support-area-path: Sysvol access or replication issues
ms.technology: windows-server-group-policy
---
# How to force authoritative and non-authoritative synchronization for DFSR-replicated sysvol replication
This article introduces how to force an authoritative and non-authoritative synchronization for DFSR-replicated sysvol replication.
_Applies to:_ Windows Server 2012 R2
_Original KB number:_ 2218556
## Summary
Consider the following scenario:
You want to force the non-authoritative synchronization of sysvol replication on a domain controller (DC). In the File Replication Service (FRS), this was controlled through the **D2** and **D4** data values for the `BurFlags` registry value, but these values don't exist for the Distributed File System Replication (DFSR) service. You can't use the DFS Management snap-in (Dfsmgmt.msc) or the Dfsradmin.exe command-line tool to achieve this. Unlike custom DFSR replicated folders, sysvol replication is intentionally protected from any editing through its management interfaces to prevent accidents.
## How to perform a non-authoritative synchronization of DFSR-replicated sysvol replication (like D2 for FRS)
1. In the ADSIEDIT.MSC tool, modify the following distinguished name (DN) value and attribute on each of the domain controllers (DCs) that you want to make non-authoritative:
```console
CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<the server name>,OU=Domain Controllers,DC=<domain>
msDFSR-Enabled=FALSE
```
2. Force Active Directory replication throughout the domain.
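    For example (one possible way, assuming the Repadmin tool is available), you can run the following from an elevated command prompt on a domain controller:
    ```console
    repadmin /syncall /AdeP
    ```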
3. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
```console
DFSRDIAG POLLAD
```
4. You'll see Event ID 4114 in the DFSR event log indicating sysvol replication is no longer being replicated.
5. On the same DN from Step 1, set **msDFSR-Enabled=TRUE**.
6. Force Active Directory replication throughout the domain.
7. Run the following command from an elevated command prompt on the same servers that you set as non-authoritative:
```console
DFSRDIAG POLLAD
```
8. You'll see Event IDs 4614 and 4604 in the DFSR event log indicating sysvol replication has been initialized. That domain controller has now done a **D2** of sysvol replication.
## How to perform an authoritative synchronization of DFSR-replicated sysvol replication (like D4 for FRS)
1. Set the DFS Replication service Startup Type to Manual, and stop the service on all domain controllers in the domain.
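    For example (one possible way), this can be done on each domain controller from an elevated command prompt; DFSR is the short name of the DFS Replication service:
    ```console
    sc config dfsr start= demand
    net stop dfsr
    ```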
2. In the ADSIEDIT.MSC tool, modify the following DN and two attributes on the domain controller you want to make authoritative (preferably the PDC Emulator, which is usually the most up-to-date for sysvol replication contents):
```console
CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<the server name>,OU=Domain Controllers,DC=<domain>
msDFSR-Enabled=FALSE
msDFSR-options=1
```
3. Modify the following DN and single attribute on **all** other domain controllers in that domain:
```console
CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<each other server name>,OU=Domain Controllers,DC=<domain>
msDFSR-Enabled=FALSE
```
4. Force Active Directory replication throughout the domain and validate its success on all DCs.
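    For example, after forcing replication (for instance with `repadmin /syncall /AdeP`, as shown earlier), `repadmin /replsummary` can help validate replication health, assuming the Repadmin tool is available:
    ```console
    repadmin /replsummary
    ```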
5. Start the DFSR service on the domain controller that was set as authoritative in Step 2.
6. You'll see Event ID 4114 in the DFSR event log indicating sysvol replication is no longer being replicated.
7. On the same DN from Step 1, set **msDFSR-Enabled=TRUE**.
8. Force Active Directory replication throughout the domain and validate its success on all DCs.
9. Run the following command from an elevated command prompt on the same server that you set as authoritative:
```console
DFSRDIAG POLLAD
```
10. You'll see Event ID 4602 in the DFSR event log indicating sysvol replication has been initialized. That domain controller has now done a **D4** of sysvol replication.
11. Start the DFSR service on the other non-authoritative DCs. You'll see Event ID 4114 in the DFSR event log indicating sysvol replication is no longer being replicated on each of them.
12. Modify the following DN and single attribute on **all** other domain controllers in that domain:
```console
CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=<each other server name>,OU=Domain Controllers,DC=<domain>
msDFSR-Enabled=TRUE
```
13. Run the following command from an elevated command prompt on all non-authoritative DCs (that is, all but the formerly authoritative one):
```console
DFSRDIAG POLLAD
```
14. Return the DFSR service to its original Startup Type (Automatic) on all DCs.
## More information
If setting the authoritative flag on one DC, you must non-authoritatively synchronize all other DCs in the domain. Otherwise you'll see conflicts on DCs, originating from any DCs where you did not set auth/non-auth and restarted the DFSR service. For example, if all logon scripts were accidentally deleted and a manual copy of them was placed back on the PDC Emulator role holder, making that server authoritative and all other servers non-authoritative would guarantee success and prevent conflicts.
If making any DC authoritative, the PDC Emulator as authoritative is preferable, since its sysvol replication contents are most up to date.
The use of the authoritative flag is only necessary if you need to force synchronization of all DCs. If only repairing one DC, make it non-authoritative and don't touch other servers.
This article is designed with a 2-DC environment in mind, for simplicity of description. If you have more than one affected DC, expand the steps to include ALL of them as well. It also assumes that, if this is a disaster recovery scenario on all DCs in the domain, you have the ability to restore data that was previously deleted, overwritten, damaged, and so on.
| 55.456897 | 600 | 0.78175 | eng_Latn | 0.993014 |