---
title: Install PowerPivot from the Command Prompt | Microsoft Docs
ms.custom: ''
ms.date: 03/07/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- database-engine
ms.topic: conceptual
ms.assetid: 7f1f2b28-c9f5-49ad-934b-02f2fa6b9328
author: markingmyname
ms.author: maghan
manager: craigg
ms.openlocfilehash: e84b6e148774fc9b48b6174fa8be87579290fec4
ms.sourcegitcommit: 1ab115a906117966c07d89cc2becb1bf690e8c78
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 11/27/2018
ms.locfileid: "52393419"
---
# <a name="install-powerpivot-from-the-command-prompt"></a>Установка PowerPivot из командной строки
Программу установки SQL Server PowerPivot для SharePoint можно запустить из командной строки. В команду необходимо включить параметр `/ROLE` и исключить из нее параметр `/FEATURES`.
## <a name="prerequisites"></a>предварительные требования
Необходимо установить выпуск SharePoint Server 2010 Enterprise Edition с пакетом обновления 1 (SP1).
Для подготовки к работе служб Analysis Services необходимо использовать учетные записи пользователей домена.
Компьютер должен быть присоединен к тому же домену, что и ферма SharePoint.
## <a name="Commands"></a> / Параметры установки на основе ROLE
При развертывании PowerPivot для SharePoint используется параметр `/ROLE` вместо параметра `/FEATURES`. Допустимы следующие значения.
- `SPI_AS_ExistingFarm`
- `SPI_AS_NewFarm`
При использовании обеих ролей устанавливаются файлы приложения, конфигурации и развертывания, позволяющие PowerPivot для SharePoint работать в ферме SharePoint. При выборе любой роли программа установки проверит соответствие требованиям к программному и аппаратному обеспечению, которые должны быть удовлетворены для интеграции с SharePoint.
Параметр «Существующая ферма» предполагает, что ферма SharePoint уже сконфигурирована. Параметр новой фермы предполагается, что вы создадите новую ферму; он поддерживает добавление экземпляр компонента Database Engine в синтаксисе командной строки, можно использовать экземпляр компонента Database Engine как сервера базы данных фермы.
В отличие от предыдущих выпусков, все задачи настройки сервера выполняются как задачи после установки. В целях автоматизации шагов по установке и настройке для настройки сервера можно использовать PowerShell. Дополнительные сведения см. в разделе [Настройка PowerPivot с помощью Windows PowerShell](../../analysis-services/power-pivot-sharepoint/power-pivot-configuration-using-windows-powershell.md).
## <a name="example-commands"></a>Примеры команд
В следующих примерах демонстрируется применение каждого из вариантов. В примере 1 `SPI_AS_ExistingFarm`.
```
Setup.exe /q /IAcceptSQLServerLicenseTerms /ACTION=install /ROLE=SPI_AS_ExistingFarm /INSTANCENAME=PowerPivot /INDICATEPROGRESS /ASSVCACCOUNT=<DomainName\UserName> /ASSVCPASSWORD=<StrongPassword> /ASSYSADMINACCOUNTS=<DomainName\UserName>
```
 Example 2 illustrates `SPI_AS_NewFarm`. Note that it includes parameters for provisioning the Database Engine.
```
Setup.exe /q /IAcceptSQLServerLicenseTerms /ACTION=install /ROLE=SPI_AS_NewFarm /INSTANCENAME=PowerPivot /INDICATEPROGRESS /SQLSVCACCOUNT=<DomainName\UserName> /SQLSVCPASSWORD=<StrongPassword> /SQLSYSADMINACCOUNTS=<DomainName\UserName> /AGTSVCACCOUNT=<DomainName\UserName> /AGTSVCPASSWORD=<StrongPassword> /ASSVCACCOUNT=<DomainName\UserName> /ASSVCPASSWORD=<StrongPassword> /ASSYSADMINACCOUNTS=<DomainName\UserName>
```
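 If Setup completes successfully, the PowerPivot instance of Analysis Services runs as a Windows service. As a quick check from the same command prompt, you can query the service state (the service name below is an assumption based on the standard naming convention for named Analysis Services instances):
```
sc query "MSOLAP$POWERPIVOT"
```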
## <a name="Join"></a> Изменение синтаксиса команды
Используйте следующие шаги для изменения примера синтаксиса команды.
1. Скопируйте следующую команду в Блокнот:
```
Setup.exe /q /IAcceptSQLServerLicenseTerms /ACTION=install /ROLE=SPI_AS_ExistingFarm /INSTANCENAME=PowerPivot /INDICATEPROGRESS /ASSVCACCOUNT=<DomainName\UserName> /ASSVCPASSWORD=<StrongPassword> /ASSYSADMINACCOUNTS=<DomainName\UserName>
```
 The `/q` parameter runs Setup in quiet mode, which suppresses the user interface.
 The `/IAcceptSQLServerLicenseTerms` parameter is required when `/q` or `/qs` is specified for unattended installations.
 The `/action` parameter instructs Setup to perform an installation.
 The `/role` parameter instructs Setup to install the Analysis Services server and the configuration files required for PowerPivot for SharePoint to run. This role also detects and uses the connection information of the existing farm to access the SharePoint configuration database. This parameter is required. Use it instead of the `/features` parameter to specify the components to install.
 The `/instancename` parameter specifies 'PowerPivot' as a named instance. This value is hardcoded and cannot be changed. It is included in the command for instructional purposes, so that you know how the service is installed.
 The `/indicateprogress` parameter lets you monitor progress in the command prompt window.
2. The `PID` parameter is omitted from the command, which causes the Evaluation edition to be installed. If you want to install the Enterprise edition, add the PID parameter to the Setup command and specify a valid product key.
```
/PID=<product key for an Enterprise installation>
```
3. Replace the placeholders \<DomainName\UserName> and \<StrongPassword> with valid user accounts and passwords.
 The `/assvcaccount` and `/assvcpassword` parameters are used to provision the [!INCLUDE[ssGeminiSrv](../../includes/ssgeminisrv-md.md)] instance on the application server. Replace these placeholders with valid account information.
 The `/assysadminaccounts` parameter should be set to the identity of the user running SQL Server Setup. You must specify at least one system administrator for the service. Note that SQL Server Setup no longer automatically grants sysadmin permissions to members of the built-in Administrators group.
4. Remove the line breaks.
5. Select the entire command and click **Copy** on the Edit menu.
6. Open an administrator command prompt. To do this, click **Start**, right-click Command Prompt, and select **Run as administrator**.
7. Navigate to the folder on the drive, or the network share, that contains the SQL Server installation media.
8. Paste the modified command into the command prompt. To do this, click the icon in the upper-left corner of the command prompt window, point to **Edit**, and then click **Paste**.
9. Press **Enter** to run the command. Wait for Setup to complete. You can monitor progress in the command prompt window.
10. To verify the installation, open the summary.txt file in the \Program Files\SQL Server\120\Setup Bootstrap\Log folder. If the server installed without errors, the final result should read "Passed".
11. Configure the server. At a minimum, you must deploy the solutions, create a service application, and enable the feature for each site collection. For more information, see [Configure or Repair PowerPivot for SharePoint 2010 (PowerPivot Configuration Tool)](../../../2014/analysis-services/configure-repair-powerpivot-sharepoint-2010.md) or [PowerPivot Server Administration and Configuration in Central Administration](../../analysis-services/power-pivot-sharepoint/power-pivot-server-administration-and-configuration-in-central-administration.md).
## <a name="see-also"></a>See also
 [Configure PowerPivot Service Accounts](../../analysis-services/power-pivot-sharepoint/configure-power-pivot-service-accounts.md)
 [Install PowerPivot for SharePoint 2010](../../../2014/sql-server/install/powerpivot-for-sharepoint-2010-installation.md)
# Matplot-Homework
Matplotlib Homework - Pyber
---
title: How to update Azure Monitor for containers for metrics | Microsoft Docs
description: This article describes how to update Azure Monitor for containers to enable the custom metrics feature, which supports exploring and alerting on aggregated metrics.
ms.topic: conceptual
ms.date: 11/11/2019
ms.openlocfilehash: a7f40cb0523c2366c47da228e49311c2f9579212
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/27/2020
ms.locfileid: "76715906"
---
# <a name="how-to-update-azure-monitor-for-containers-to-enable-metrics"></a>Jak zaktualizować usługę Azure Monitor dla kontenerów w celu włączenia metryk
Usługa Azure Monitor dla kontenerów wprowadza obsługę zbierania metryk z węzłów i zasobników klastrów usług Azure Kubernetes (AKS) i zapisywania ich w magazynie metryk usługi Azure Monitor. Ta zmiana ma na celu zapewnienie ulepszonych terminowości podczas prezentacji zagregowanych obliczeń (średnia, liczba, maks., min, suma) na wykresach wydajności, obsługa przypinania wykresów wydajności w pulpitach nawigacyjnych portalu Azure i alerty metryki pomocy technicznej.
>[!NOTE]
>Ta funkcja nie obsługuje obecnie klastrów OpenShift red hat platformy Azure.
>
W ramach tej funkcji włączono następujące metryki:
| Metryczna przestrzeń nazw | Metryka | Opis |
|------------------|--------|-------------|
| insights.container/nodes | cpuUsageMillicores, cpuUsagePercentage, memoryRssBytes, memoryRssPercentage, memoryWorkingSetBytes, memoryWorkingSetPercentage, nodesCount | Są to metryki *węzłów* i obejmują *hosta* jako wymiar, a także<br> nazwę węzła jako wartość dla wymiaru *hosta.* |
| insights.container/zasobniki | liczba podCount | Są to metryki *zasobników* i obejmują następujące jako wymiary — ControllerName, Obszar nazw Kubernetes, nazwa, faza. |
Aktualizowanie klastra w celu obsługi tych nowych możliwości można wykonać z witryny Azure Portal, Azure PowerShell lub za pomocą interfejsu wiersza polecenia platformy Azure. Za pomocą programu Azure PowerShell i interfejsu wiersza polecenia można włączyć ten na klaster lub dla wszystkich klastrów w ramach subskrypcji. Nowe wdrożenia usługi AKS automatycznie uwzględnią tę zmianę konfiguracji i możliwości.
Proces przypisuje monitorowanie **metryk roli wydawcy** do jednostki usługi klastra, tak aby dane zebrane przez agenta mogą być publikowane do zasobu klastrów. Monitorowanie Metryki Wydawca ma uprawnienia tylko do wypychania metryki do zasobu, nie może zmienić żadnego stanu, zaktualizować zasób lub odczytać żadnych danych. Aby uzyskać więcej informacji na temat roli, zobacz [Monitorowanie roli wydawcy metryk](../../role-based-access-control/built-in-roles.md#monitoring-metrics-publisher).
## <a name="prerequisites"></a>Wymagania wstępne
Przed rozpoczęciem upewnij się, że:
* Metryki niestandardowe są dostępne tylko w podzbiorze regionów platformy Azure. Lista obsługiwanych regionów jest udokumentowana [w tym miejscu](../platform/metrics-custom-overview.md#supported-regions).
* Jesteś członkiem roli **[Właściciel](../../role-based-access-control/built-in-roles.md#owner)** zasobu klastra AKS, aby włączyć zbieranie metryk wydajności niestandardowej węzła i zasobu.
Jeśli zdecydujesz się korzystać z interfejsu wiersza polecenia platformy Azure, najpierw należy zainstalować i używać interfejsu wiersza polecenia lokalnie. Musi być uruchomiony interfejsu wiersza polecenia platformy Azure w wersji 2.0.59 lub nowszej. Aby zidentyfikować swoją `az --version`wersję, uruchom polecenie . Jeśli chcesz zainstalować lub uaktualnić platformę Azure CLI, zobacz [Instalowanie interfejsu wiersza polecenia platformy Azure](https://docs.microsoft.com/cli/azure/install-azure-cli).
## <a name="upgrade-a-cluster-from-the-azure-portal"></a>Uaktualnianie klastra z witryny Azure portal
W przypadku istniejących klastrów AKS monitorowanych przez usługę Azure Monitor dla kontenerów, po wybraniu klastra, aby wyświetlić jego kondycję z widoku wielu klastrów w usłudze Azure Monitor lub bezpośrednio z klastra, wybierając **pozycję Insights** z lewego okienka, w górnej części portalu powinien zostać wyświetlony baner.

Kliknięcie **przycisku Włącz** spowoduje zainicjowanie procesu uaktualniania klastra. Ten proces może potrwać kilka sekund, aby zakończyć i można śledzić jego postępy w obszarze Powiadomienia z menu.
## <a name="upgrade-all-clusters-using-bash-in-azure-command-shell"></a>Uaktualnianie wszystkich klastrów przy użyciu funkcji Bash w usłudze Azure Command Shell
Wykonaj następujące kroki, aby zaktualizować wszystkie klastry w ramach subskrypcji przy użyciu bash w usłudze Azure Command Shell.
1. Uruchom następujące polecenie przy użyciu interfejsu wiersza polecenia platformy Azure. Edytuj wartość **identyfikatora subskrypcji** przy użyciu wartości ze strony **Przegląd usługi AKS** dla klastra AKS.
```azurecli
az login
az account set --subscription "Subscription Name"
curl -sL https://aka.ms/ci-md-onboard-atscale | bash -s subscriptionId
```
 The configuration change can take a few seconds to complete. When it finishes, a message similar to the following is displayed, including the result:
```azurecli
completed role assignments for all AKS clusters in subscription: <subscriptionId>
```
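 If you prefer not to pipe a remote script into Bash, the following sketch gives a rough idea of the work the onboarding script performs (this is an assumption about its behavior, not its exact contents): it grants the role to the service principal of every AKS cluster in the current subscription.
```azurecli
az aks list --query "[].[id, servicePrincipalProfile.clientId]" -o tsv |
while read -r clusterResourceId clientIdOfSPN; do
    az role assignment create --assignee "$clientIdOfSPN" --scope "$clusterResourceId" --role "Monitoring Metrics Publisher"
done
```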
## <a name="upgrade-per-cluster-using-azure-cli"></a>Uaktualnianie na klaster przy użyciu interfejsu wiersza polecenia platformy Azure
Wykonaj następujące kroki, aby zaktualizować określony klaster w ramach subskrypcji przy użyciu interfejsu wiersza polecenia platformy Azure.
1. Uruchom następujące polecenie przy użyciu interfejsu wiersza polecenia platformy Azure. Edytuj wartości **identyfikatora,** **zasobuGroupName**i **clusterName** przy użyciu wartości na stronie **Przegląd usługi AKS** dla klastra AKS. Aby uzyskać wartość **clientIdOfSPN**, jest zwracany `az aks show` po uruchomieniu polecenia, jak pokazano w poniższym przykładzie.
```azurecli
az login
az account set --subscription "Subscription Name"
az aks show -g <resourceGroupName> -n <clusterName>
az role assignment create --assignee <clientIdOfSPN> --scope <clusterResourceId> --role "Monitoring Metrics Publisher"
```
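 Optionally, you can verify that the role assignment now exists on the cluster resource:
```azurecli
az role assignment list --scope <clusterResourceId> --role "Monitoring Metrics Publisher" -o table
```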
## <a name="upgrade-all-clusters-using-azure-powershell"></a>Uaktualnianie wszystkich klastrów przy użyciu programu Azure PowerShell
Wykonaj następujące kroki, aby zaktualizować wszystkie klastry w ramach subskrypcji przy użyciu programu Azure PowerShell.
1. Skopiuj i wklej następujący skrypt do pliku:
```powershell
<#
.DESCRIPTION
Adds the Monitoring Metrics Publisher role assignment to the all AKS clusters in specified subscription
.PARAMETER SubscriptionId
Subscription Id that the AKS cluster is in
#>
param(
[Parameter(mandatory = $true)]
[string]$SubscriptionId
)
# checks the required Powershell modules exist and if not exists, request the user permission to install
$azAccountModule = Get-Module -ListAvailable -Name Az.Accounts
$azAksModule = Get-Module -ListAvailable -Name Az.Aks
$azResourcesModule = Get-Module -ListAvailable -Name Az.Resources
if (($null -eq $azAccountModule) -or ( $null -eq $azAksModule ) -or ($null -eq $azResourcesModule)) {
$currentPrincipal = New-Object Security.Principal.WindowsPrincipal([Security.Principal.WindowsIdentity]::GetCurrent())
if ($currentPrincipal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
Write-Host("Running script as an admin...")
Write-Host("")
}
else {
Write-Host("Please run the script as an administrator") -ForegroundColor Red
Stop-Transcript
exit
}
$message = "This script will try to install the latest versions of the following Modules : `
Az.Resources, Az.Accounts and Az.Aks using the command if not installed already`
`'Install-Module {Insert Module Name} -Repository PSGallery -Force -AllowClobber -ErrorAction Stop -WarningAction Stop'
`If you do not have the latest version of these Modules, this troubleshooting script may not run."
$question = "Do you want to Install the modules and run the script or just run the script?"
$choices = New-Object Collections.ObjectModel.Collection[Management.Automation.Host.ChoiceDescription]
$choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Yes, Install and run'))
$choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Continue without installing the Module'))
$choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Quit'))
$decision = $Host.UI.PromptForChoice($message, $question, $choices, 0)
switch ($decision) {
0 {
if ($null -eq $azResourcesModule) {
try {
Write-Host("Installing Az.Resources...")
Install-Module Az.Resources -Repository PSGallery -Force -AllowClobber -ErrorAction Stop
}
catch {
Write-Host("Close other powershell logins and try installing the latest modules forAz.Accounts in a new powershell window: eg. 'Install-Module Az.Accounts -Repository PSGallery -Force'") -ForegroundColor Red
exit
}
}
if ($null -eq $azAccountModule) {
try {
Write-Host("Installing Az.Accounts...")
Install-Module Az.Accounts -Repository PSGallery -Force -AllowClobber -ErrorAction Stop
}
catch {
Write-Host("Close other powershell logins and try installing the latest modules forAz.Accounts in a new powershell window: eg. 'Install-Module Az.Accounts -Repository PSGallery -Force'") -ForegroundColor Red
exit
}
}
if ($null -eq $azAksModule) {
try {
Write-Host("Installing Az.Aks...")
Install-Module Az.Aks -Repository PSGallery -Force -AllowClobber -ErrorAction Stop
}
catch {
Write-Host("Close other powershell logins and try installing the latest modules for Az.Aks in a new powershell window: eg. 'Install-Module Az.Aks -Repository PSGallery -Force'") -ForegroundColor Red
exit
}
}
}
1 {
if ($null -eq $azResourcesModule) {
try {
Import-Module Az.Resources -ErrorAction Stop
}
catch {
Write-Host("Could not import Az.Resources...") -ForegroundColor Red
Write-Host("Close other powershell logins and try installing the latest modules for Az.Resources in a new powershell window: eg. 'Install-Module Az.Resources -Repository PSGallery -Force'") -ForegroundColor Red
Stop-Transcript
exit
}
}
if ($null -eq $azAccountModule) {
try {
Import-Module Az.Accounts -ErrorAction Stop
}
catch {
Write-Host("Could not import Az.Accounts...") -ForegroundColor Red
Write-Host("Close other powershell logins and try installing the latest modules for Az.Accounts in a new powershell window: eg. 'Install-Module Az.Accounts -Repository PSGallery -Force'") -ForegroundColor Red
Stop-Transcript
exit
}
}
if ($null -eq $azAksModule) {
try {
Import-Module Az.Aks -ErrorAction Stop
}
catch {
Write-Host("Could not import Az.Aks... Please reinstall this Module") -ForegroundColor Red
Stop-Transcript
exit
}
}
}
2 {
Write-Host("")
Stop-Transcript
exit
}
}
}
try {
Write-Host("")
Write-Host("Trying to get the current Az login context...")
$account = Get-AzContext -ErrorAction Stop
Write-Host("Successfully fetched current AzContext context...") -ForegroundColor Green
Write-Host("")
}
catch {
Write-Host("")
Write-Host("Could not fetch AzContext..." ) -ForegroundColor Red
Write-Host("")
}
if ($account.Account -eq $null) {
try {
Write-Host("Please login...")
Connect-AzAccount -subscriptionid $SubscriptionId
}
catch {
Write-Host("")
Write-Host("Could not select subscription with ID : " + $SubscriptionId + ". Please make sure the ID you entered is correct and you have access to the cluster" ) -ForegroundColor Red
Write-Host("")
Stop-Transcript
exit
}
}
else {
if ($account.Subscription.Id -eq $SubscriptionId) {
Write-Host("Subscription: $SubscriptionId is already selected. Account details: ")
$account
}
else {
try {
Write-Host("Current Subscription:")
$account
Write-Host("Changing to subscription: $SubscriptionId")
Set-AzContext -SubscriptionId $SubscriptionId
}
catch {
Write-Host("")
Write-Host("Could not select subscription with ID : " + $SubscriptionId + ". Please make sure the ID you entered is correct and you have access to the cluster" ) -ForegroundColor Red
Write-Host("")
Stop-Transcript
exit
}
}
}
#
# get all the AKS clusters in specified subscription
#
Write-Host("getting all aks clusters in specified subscription ...")
$allClusters = Get-AzAks -ErrorVariable notPresent -ErrorAction SilentlyContinue
if ($notPresent) {
Write-Host("")
Write-Host("Failed to get Aks clusters in specified subscription. Please make sure that you have access to the existing clusters") -ForegroundColor Red
Write-Host("")
Stop-Transcript
exit
}
Write-Host("Successfully got all aks clusters ...") -ForegroundColor Green
$clustersCount = $allClusters.Id.Length
Write-Host("Adding role assignment for the clusters ...")
for ($index = 0 ; $index -lt $clustersCount ; $index++) {
#
# Add Monitoring Metrics Publisher role assignment to the AKS cluster resource
#
$servicePrincipalClientId = $allClusters.ServicePrincipalProfile[$index].ClientId
$clusterResourceId = $allClusters.Id[$index]
$clusterName = $allClusters.Name[$index]
Write-Host("Adding role assignment for the cluster: $clusterResourceId, servicePrincipalClientId: $servicePrincipalClientId ...")
New-AzRoleAssignment -ApplicationId $servicePrincipalClientId -scope $clusterResourceId -RoleDefinitionName "Monitoring Metrics Publisher" -ErrorVariable assignmentError -ErrorAction SilentlyContinue
if ($assignmentError) {
$roleAssignment = Get-AzRoleAssignment -scope $clusterResourceId -RoleDefinitionName "Monitoring Metrics Publisher" -ErrorVariable getAssignmentError -ErrorAction SilentlyContinue
if ($assignmentError.Exception -match "role assignment already exists" -or ( $roleAssignment -and $roleAssignment.ObjectType -like "ServicePrincipal" )) {
Write-Host("Monitoring Metrics Publisher role assignment already exists on the cluster resource : '" + $clusterName + "'") -ForegroundColor Green
}
else {
Write-Host("Failed to add Monitoring Metrics Publisher role assignment to cluster : '" + $clusterName + "' , error : $assignmentError") -ForegroundColor Red
}
}
else {
Write-Host("Successfully added Monitoring Metrics Publisher role assignment to cluster : '" + $clusterName + "'") -ForegroundColor Green
}
Write-Host("Completed adding role assignment for the cluster: $clusterName ...")
}
Write-Host("Completed adding role assignment for the aks clusters in subscriptionId :$SubscriptionId")
```
2. Save this file as **onboard_metrics_atscale.ps1** to a local folder.
3. Run the following command using Azure PowerShell. Edit the value for **subscriptionId** using the value from the **AKS Overview** page for the AKS cluster.
```powershell
.\onboard_metrics_atscale.ps1 subscriptionId
```
 The configuration change can take a few seconds to complete. When it finishes, a message similar to the following is displayed, including the result:
```powershell
Completed adding role assignment for the aks clusters in subscriptionId :<subscriptionId>
```
## <a name="upgrade-per-cluster-using-azure-powershell"></a>Uaktualnianie na klaster przy użyciu programu Azure PowerShell
Wykonaj następujące kroki, aby zaktualizować określony klaster przy użyciu programu Azure PowerShell.
1. Skopiuj i wklej następujący skrypt do pliku:
```powershell
<#
.DESCRIPTION
Adds the Monitoring Metrics Publisher role assignment to the specified AKS cluster
.PARAMETER SubscriptionId
Subscription Id that the AKS cluster is in
.PARAMETER resourceGroupName
Resource Group name that the AKS cluster is in
.PARAMETER clusterName
Name of the AKS cluster.
#>
param(
[Parameter(mandatory = $true)]
[string]$SubscriptionId,
[Parameter(mandatory = $true)]
[string]$resourceGroupName,
[Parameter(mandatory = $true)]
[string] $clusterName
)
# checks the required Powershell modules exist and if not exists, request the user permission to install
$azAccountModule = Get-Module -ListAvailable -Name Az.Accounts
$azAksModule = Get-Module -ListAvailable -Name Az.Aks
$azResourcesModule = Get-Module -ListAvailable -Name Az.Resources
if (($null -eq $azAccountModule) -or ($null -eq $azAksModule) -or ($null -eq $azResourcesModule)) {
$currentPrincipal = New-Object Security.Principal.WindowsPrincipal([Security.Principal.WindowsIdentity]::GetCurrent())
if ($currentPrincipal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
Write-Host("Running script as an admin...")
Write-Host("")
}
else {
Write-Host("Please run the script as an administrator") -ForegroundColor Red
Stop-Transcript
exit
}
$message = "This script will try to install the latest versions of the following Modules : `
Az.Resources, Az.Accounts and Az.Aks using the command`
`'Install-Module {Insert Module Name} -Repository PSGallery -Force -AllowClobber -ErrorAction Stop -WarningAction Stop'
`If you do not have the latest version of these Modules, this troubleshooting script may not run."
$question = "Do you want to Install the modules and run the script or just run the script?"
$choices = New-Object Collections.ObjectModel.Collection[Management.Automation.Host.ChoiceDescription]
$choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Yes, Install and run'))
$choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Continue without installing the Module'))
$choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Quit'))
$decision = $Host.UI.PromptForChoice($message, $question, $choices, 0)
switch ($decision) {
0 {
if ($null -eq $azResourcesModule) {
try {
Write-Host("Installing Az.Resources...")
Install-Module Az.Resources -Repository PSGallery -Force -AllowClobber -ErrorAction Stop
}
catch {
Write-Host("Close other powershell logins and try installing the latest modules forAz.Accounts in a new powershell window: eg. 'Install-Module Az.Accounts -Repository PSGallery -Force'") -ForegroundColor Red
exit
}
}
if ($null -eq $azAccountModule) {
try {
Write-Host("Installing Az.Accounts...")
Install-Module Az.Accounts -Repository PSGallery -Force -AllowClobber -ErrorAction Stop
}
catch {
Write-Host("Close other powershell logins and try installing the latest modules forAz.Accounts in a new powershell window: eg. 'Install-Module Az.Accounts -Repository PSGallery -Force'") -ForegroundColor Red
exit
}
}
if ($null -eq $azAksModule) {
try {
Write-Host("Installing Az.Aks...")
Install-Module Az.Aks -Repository PSGallery -Force -AllowClobber -ErrorAction Stop
}
catch {
Write-Host("Close other powershell logins and try installing the latest modules for Az.Aks in a new powershell window: eg. 'Install-Module Az.Aks -Repository PSGallery -Force'") -ForegroundColor Red
exit
}
}
}
1 {
if ($null -eq $azResourcesModule) {
try {
Import-Module Az.Resources -ErrorAction Stop
}
catch {
Write-Host("Could not import Az.Resources...") -ForegroundColor Red
Write-Host("Close other powershell logins and try installing the latest modules for Az.Resources in a new powershell window: eg. 'Install-Module Az.Resources -Repository PSGallery -Force'") -ForegroundColor Red
Stop-Transcript
exit
}
}
if ($null -eq $azAccountModule) {
try {
Import-Module Az.Accounts -ErrorAction Stop
}
catch {
Write-Host("Could not import Az.Accounts...") -ForegroundColor Red
Write-Host("Close other powershell logins and try installing the latest modules for Az.Accounts in a new powershell window: eg. 'Install-Module Az.Accounts -Repository PSGallery -Force'") -ForegroundColor Red
Stop-Transcript
exit
}
}
if ($null -eq $azAksModule) {
try {
Import-Module Az.Aks -ErrorAction Stop
}
catch {
Write-Host("Could not import Az.Aks... Please reinstall this Module") -ForegroundColor Red
Stop-Transcript
exit
}
}
}
2 {
Write-Host("")
Stop-Transcript
exit
}
}
}
try {
Write-Host("")
Write-Host("Trying to get the current Az login context...")
$account = Get-AzContext -ErrorAction Stop
Write-Host("Successfully fetched current AzContext context...") -ForegroundColor Green
Write-Host("")
}
catch {
Write-Host("")
Write-Host("Could not fetch AzContext..." ) -ForegroundColor Red
Write-Host("")
}
if ($account.Account -eq $null) {
try {
Write-Host("Please login...")
Connect-AzAccount -subscriptionid $SubscriptionId
}
catch {
Write-Host("")
Write-Host("Could not select subscription with ID : " + $SubscriptionId + ". Please make sure the ID you entered is correct and you have access to the cluster" ) -ForegroundColor Red
Write-Host("")
Stop-Transcript
exit
}
}
else {
if ($account.Subscription.Id -eq $SubscriptionId) {
Write-Host("Subscription: $SubscriptionId is already selected. Account details: ")
$account
}
else {
try {
Write-Host("Current Subscription:")
$account
Write-Host("Changing to subscription: $SubscriptionId")
Set-AzContext -SubscriptionId $SubscriptionId
}
catch {
Write-Host("")
Write-Host("Could not select subscription with ID : " + $SubscriptionId + ". Please make sure the ID you entered is correct and you have access to the cluster" ) -ForegroundColor Red
Write-Host("")
Stop-Transcript
exit
}
}
}
#
# Check AKS cluster existence and access check
#
Write-Host("Checking aks cluster exists...")
$cluster = Get-AzAks -ResourceGroupName $resourceGroupName -Name $clusterName -ErrorVariable notPresent -ErrorAction SilentlyContinue
if ($notPresent) {
Write-Host("")
Write-Host("Could not find Aks cluster. Please make sure that specified cluster exists: '" + $clusterName + "'is correct and you have access to the cluster") -ForegroundColor Red
Write-Host("")
Stop-Transcript
exit
}
Write-Host("Successfully checked specified cluster exists details...") -ForegroundColor Green
$servicePrincipalClientId = $cluster.ServicePrincipalProfile.clientId
$clusterResourceId = $cluster.Id
#
# Add Monitoring Metrics Publisher role assignment to the AKS cluster resource
#
New-AzRoleAssignment -ApplicationId $servicePrincipalClientId -scope $clusterResourceId -RoleDefinitionName "Monitoring Metrics Publisher" -ErrorVariable assignmentError -ErrorAction SilentlyContinue
if ($assignmentError) {
$roleAssignment = Get-AzRoleAssignment -scope $clusterResourceId -RoleDefinitionName "Monitoring Metrics Publisher" -ErrorVariable getAssignmentError -ErrorAction SilentlyContinue
if ($assignmentError.Exception -match "role assignment already exists" -or ( $roleAssignment -and $roleAssignment.ObjectType -like "ServicePrincipal" )) {
Write-Host("Monitoring Metrics Publisher role assignment already exists on the cluster resource : '" + $clusterName + "'") -ForegroundColor Green
}
else {
Write-Host("Failed to add Monitoring Metrics Publisher role assignment to cluster : '" + $clusterName + "' , error : $assignmentError") -ForegroundColor Red
}
}
else {
Write-Host("Successfully added Monitoring Metrics Publisher role assignment to cluster : '" + $clusterName + "'") -ForegroundColor Green
}
```
2. Save this file as **onboard_metrics.ps1** to a local folder.
3. Run the following command using Azure PowerShell. Edit the values for **subscriptionId**, **resourceGroupName**, and **clusterName** using the values on the **AKS Overview** page for the AKS cluster.
```powershell
.\onboard_metrics.ps1 subscriptionId <subscriptionId> resourceGroupName <resourceGroupName> clusterName <clusterName>
```
 The configuration change can take a few seconds to complete. When it finishes, a message similar to the following is displayed, including the result:
```powershell
Successfully added Monitoring Metrics Publisher role assignment to cluster : <clusterName>
```
## <a name="verify-update"></a>Weryfikowanie aktualizacji
Po zainicjowaniu aktualizacji przy użyciu jednej z metod opisanych wcześniej, można użyć Eksploratora metryk usługi Azure Monitor i sprawdzić z **obszaru nazw metryki,** że **szczegółowe informacje** są wymienione. Jeśli tak jest, oznacza to, że możesz iść dalej i rozpocząć [konfigurowanie alertów metrycznych](../platform/alerts-metric.md) lub przypinanie wykresów do [pulpitów nawigacyjnych](../../azure-portal/azure-portal-dashboards.md).
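 If you prefer the command line, a quick way to check that data is arriving is to query one of the new metrics with the Azure CLI (a sketch; support for custom metric namespaces may depend on your CLI version):
```azurecli
az monitor metrics list --resource <clusterResourceId> --namespace "insights.container/nodes" --metric "nodesCount" -o table
```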
---
meta.Title: "Configure and customize the Login screen"
meta.Description: "In this article you can learn the various ways of customizing the Umbraco backoffice login screen and form."
versionFrom: 7.0.0
---
# Login screen
To access the backoffice, you will need to login. You can do this by adding `/umbraco` to the end of your website URL, e.g. http://mywebsite.com/umbraco.
You will be presented with a login form simular to this:

*The login screen has a greeting, username/password field and optionally a 'Forgotten password' link*
Below, you will find instructions on how to customise the login screen.
## Greeting
The login screen features a greeting, which you can personalize by changing the language file of your choice. For example for en-US you would add the following keys to: `~/Config/Lang/en-US.user.xml`
```xml
<area alias="login">
<key alias="greeting0">Sunday greeting</key>
<key alias="greeting1">Monday greeting</key>
<key alias="greeting2">Tuesday greeting</key>
<key alias="greeting3">Wednesday greeting</key>
<key alias="greeting4">Thursday greeting</key>
<key alias="greeting5">Friday greeting</key>
<key alias="greeting6">Saturday greeting</key>
</area>
```
You can customize other text in the login screen as well; grab the default values from `~/Umbraco/Config/Lang/en.xml` and copy the keys you want to translate into your `~/Config/Lang/MYLANGUAGE.user.xml` file.
## Password reset
The "Forgot password?" link allows your backoffice users to reset their password. For this feature to work properly you will need to configure an SMTP server in your web.config file and the "from" address needs to be specified. An example:
```xml
<system.net>
<mailSettings>
<smtp from="[email protected]">
<network host="127.0.0.1" userName="username" password="password" />
</smtp>
</mailSettings>
</system.net>
```
This feature can be turned off completely using the `allowPasswordReset` configuration, see: [UmbracoSettings - Security](../../../Reference/Config/umbracoSettings/#security)
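For example, a minimal sketch of what that looks like in `umbracoSettings.config` (this assumes your version places the setting under the `security` section; check the linked reference for the exact location):
```xml
<settings>
    <security>
        <allowPasswordReset>false</allowPasswordReset>
    </security>
</settings>
```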
## Background image
You can customise the background image for the backoffice login screen. In [`~/Config/umbracoSettings.config`](../../../Reference/Config/umbracoSettings/), find the `loginBackgroundImage` setting and change the path to the image you want to use.
```xml
<settings>
<content>
...
<loginBackgroundImage>/images/myCustomImage.jpg</loginBackgroundImage>
</content>
</settings>
```
---
layout: post
title: "Delayed Blast Fireball"
date: 2015-01-11
source: PHB.230
tags: [sorcerer, wizard, level7, evocation]
---
**7th-level evocation**
**Casting Time**: 1 action
**Range**: 150 feet
**Components**: V, S, M (a tiny ball of bat guano and sulfur)
**Duration**: Concentration, up to 1 minute
**Saving Throw**: Dexterity
**Save For**: half damage
**Damage**: [ { "dice": "12d6", "label": "fire", "damagebonus": 0, "addstat": false, "prof": false, "higherlevels": "" } ]
A beam of yellow light flashes from your pointing finger, then condenses to linger at a chosen point within range as a glowing bead for the duration. When the spell ends, either because your concentration is broken or because you decide to end it, the bead blossoms with a low roar into an explosion of flame that spreads around corners. Each creature in a 20-foot-radius sphere centered on that point must make a Dexterity saving throw. A creature takes fire damage equal to the total accumulated damage on a failed save, or half as much damage on a successful one.
The spell's base damage is 12d6. If at the end of your turn the bead has not yet detonated, the damage increases by 1d6.
If the glowing bead is touched before the interval has expired, the creature touching it must make a Dexterity saving throw. On a failed save, the spell ends immediately, causing the bead to erupt in flame. On a successful save, the creature can throw the bead up to 40 feet. When it strikes a creature or a solid object, the spell ends, and the bead explodes.
The fire damages objects in the area and ignites flammable objects that aren't being worn or carried.
**At Higher Levels.** When you cast this spell using a spell slot of 8th level or higher, the base damage increases by 1d6 for each slot level above 7th.
# Example Webhook Autoscaler Service
This service provides an example of the webhook fleetautoscaler service which is used to control the number of GameServers in a Fleet (`Replica` count).
## Autoscaler Service
The service exposes an endpoint which allows client calls to custom scaling logic.
When this endpoint is called, the target Replica count is calculated. If the fleet does not need to scale, we return `Scale` equal to `false`. The endpoint receives and returns the JSON-encoded [`FleetAutoscaleReview`](../../docs/fleetautoscaler_spec.md#webhook-endpoint-specification) every SyncPeriod, which is 30 seconds.
Note that the scale-up logic is based on the percentage of allocated gameservers in the fleet. If this fraction is above the threshold (i.e. 0.7), the `Scale` parameter in `FleetAutoscaleResponse` is set to `true` and the returned `Replica` value is increased by the `scaleFactor` (in this example, doubled), which results in creating more `Ready` GameServers. If the fraction is below the lower threshold (i.e. 0.3), we decrease the count of gameservers in the fleet. A `minReplicasCount` parameter defines the lower limit on the number of gameservers in a fleet. A sketch of this decision logic appears below.
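The following Go sketch shows that decision logic in isolation. It uses simplified stand-in types rather than the real Agones `FleetAutoscaleReview` structs, and the threshold, scale factor, and minimum values simply mirror the numbers described above:
```go
package main

import "fmt"

// fleetStatus is a simplified stand-in for the fleet status carried
// in a FleetAutoscaleReview request.
type fleetStatus struct {
	Replicas          int32
	AllocatedReplicas int32
}

// decideScale returns the desired replica count and whether to scale at all.
func decideScale(s fleetStatus) (int32, bool) {
	const (
		upperThreshold   = 0.7 // scale up when the allocated fraction exceeds this
		lowerThreshold   = 0.3 // scale down when the allocated fraction falls below this
		scaleFactor      = 2   // double (or halve) the fleet when scaling
		minReplicasCount = 100 // lower limit on the fleet size
	)
	if s.Replicas == 0 {
		return minReplicasCount, true
	}
	allocatedPart := float64(s.AllocatedReplicas) / float64(s.Replicas)
	if allocatedPart > upperThreshold {
		return s.Replicas * scaleFactor, true
	}
	if allocatedPart < lowerThreshold && s.Replicas > minReplicasCount {
		target := s.Replicas / scaleFactor
		if target < minReplicasCount {
			target = minReplicasCount
		}
		return target, true
	}
	return s.Replicas, false // no scaling needed
}

func main() {
	fmt.Println(decideScale(fleetStatus{Replicas: 100, AllocatedReplicas: 71})) // 200 true
	fmt.Println(decideScale(fleetStatus{Replicas: 200, AllocatedReplicas: 59})) // 100 true
}
```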
To learn how to deploy the fleet to GKE, please see the tutorial [Create a Fleet (Go)](https://agones.dev/site/docs/getting-started/create-fleet/).
## Example flow
1. Fleet with 100 Replicas (gameservers) was created.
2. 70 gameservers got allocated -> No scaling for now.
3. One more server gets allocated; with 71 allocated gameservers, the fraction in the fleet is over 0.7 (`AllocatedPart > 0.7`), so we scale by the `scaleFactor`, which results in doubling the fleet size.
4. Fleet now has 200 Replicas.
5. `AllocatedPart = 71/200 = 0.355` so no downscaling for now.
6. 12 gameservers were shut down and are no longer in the Allocated state.
7. `AllocatedPart = 59/200 = 0.295`, thus `AllocatedPart < 0.3`, and the fleet gets scaled down.
The fleet now returns to its original size of 100 gameservers.
---
title: "Friday Hacks #141, October 20"
date: 2017-10-10 18:33:25.812044
author: Herbert
url: /2017/10/friday-hacks-141
---
Hey everyone! This week, we have Feng Yuan from GovTech who will be talking about data science for the public good and Mattheus who will be talking about C++ Template Metaprogramming. See you there!
{{% friday_hack_header venue="The HANGAR by NUS Enterprise" date="October 20" %}}
### Data Science for the Public Good
#### Talk Description:
Feng-Yuan will talk a bit more about the work that the Data Science Division does in enabling data to influence policy and operational decision making.
#### Speaker Profile
Feng-Yuan is Director of the Data Science and Artificial Intelligence Division at GovTech. He runs a multidisciplinary team of social scientists, data scientists, designers and software engineers to apply data-driven methods to solving public sector problems.
### C++ Template Metaprogramming
#### Talk Description:
For some, the term "Template Metaprogramming" conjures up thoughts of complex and headache-inducing code. But it doesn't have to be. This talk will provide a breadth-focused introduction of several commonly-used ideas in Template Metaprogramming through the following use cases:
1. Compile-time evaluation/calculations
2. Automatic selection of type of object to instantiate
3. Overloading functions based on type groups/traits instead of individual types
This talk is aimed at users with experience in C++.
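As a taste of the first use case, here is a small illustrative example (not taken from the talk itself) comparing a classic template-recursion compile-time factorial with its modern `constexpr` equivalent:
```cpp
#include <iostream>

// Classic template metaprogramming: the compiler computes the factorial
// through recursive template instantiation.
template <unsigned N>
struct Factorial {
    static constexpr unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static constexpr unsigned value = 1;
};

// The modern alternative: a constexpr function evaluated at compile time.
constexpr unsigned factorial(unsigned n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}

int main() {
    static_assert(Factorial<5>::value == 120, "computed at compile time");
    static_assert(factorial(5) == 120, "also computed at compile time");
    std::cout << Factorial<5>::value << '\n';
}
```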
#### Speaker Profile
Mattheus is a Year 4 Mechanical Engineering Major and Computer Science Minor undergraduate. He enjoys discovering (and abusing) new features in programming languages, and then passing on that knowledge to others. He currently enjoys teaching beginner/intermediate programming, and is a TA for CS1010E (Programming Methodology) and CS1020E (Data Structures and Algorithms I) in NUS.
# Changelog
## Future?
- Mobile-compatible drag and drop (#21)
- Handles to drag events to make them longer or shorter
- Add month name to the 1st of the month when viewing multiple months (probably using classes to hide/show)
## Future: Full GCal theme
I originally intended for this to be part of v5.0, but it was holding things up, so I've deferred the full implementation for the future. There is a CSS file and some options in `App.vue` of this repo to serve as a starting point. Here's a short to-do list for creating a reasonable facsimile:
- Option for short month label in day number for 1st of each month
- `span-1` events with start time should have a color dot and should not have a `background-color`
- Events that cross the `displayPeriod` boundary should have a pointed edge
- Create a sample app with all of the appropriate fonts, overrides, etc. and events using the Google Calendar palette
- If possible, drag event selecting dates should create pseudo-event using CSS before/after content rather than highlighting the entire date block. Probably not possible but may be able to approximate.
- CSS swipe transition between periods
- Header button tooltips
- Document new button properties, move to headerProps
- Add and document header slot for additional buttons, etc. on the same flex row
## 6.0.1 (2021-04-27)
- Just minor dependency updates
## 6.0.0 (2021-03-27)
- I've lost the ability (and the will) to test on IE11, so it is no longer targeted.
- Upgraded to Vue 3. This shouldn't cause an issue for people using it with Vue 2.
- Migrated to TypeScript. Needed an excuse to learn it.
- Now using Vite instead of vue-cli for the development and built process.
- `CalendarMathMixin` is now `CalendarMath`, a normal class
- Added St. Valentine's Day to the US Traditional Holiday theme
## 5.0.0 (2020-09-04)
## Main breaking change: Events to Items
Any reference to a "thing that is scheduled on the calendar" is now called an "item" rather than an "event" due to the confusion possible with DOM and Vue-emitted events. (#129)
- The `click-event` event is now called `click-item`
- The `events` property is now called `items`.
- The `event` slot is now the `item` slot, and the calendar item is passed as `value`
- The `cv-event` CSS class is now `cv-item`
- The `normalizeEvent` function is now `normalizeItem`
- For all normalized calendar items, `originalEvent` is now `originalItem`
- `showEventTimes` is now `showTimes`
- `eventTop`, `eventContentHeight`, and `eventBorderHeight` are now `item*`
- `eventRow` is now `itemRow`
- The header prop `fixedEvents` is now `fixedItems`
- The `wrap-event-title-on-hover` CSS class is now `wrap-item-title-on-hover`
- Due to the addition of optional week numbers, there's a `cv-weekdays` DIV now between `cv-week` and `cv-day`. This could impact some custom CSS theming (for example, if you used something like `.cv-weekdays > .cv-day` as a selector).
## Enhancements
- The DIVs for dates that have at least one calendar item crossing into them now have a `hasItems` class. (#143, thanks @SwithFr!)
- Breaking: the `click-date` event now passes the list of calendar items falling on that date as the second argument (`windowEvent` is pushed to argument 3); see the sketch after this list. #143
- The `dragStart` event for an item now passes the item's `id` (stringified) into the `dataTransfer` data. This should make it easier to create custom drag/drop functionality where someone could drag a calendar item outside this component.
- Now supports date range selection, and user drag-select! Enable with the `enable-date-selection` prop.
- Now supports an optional "week number" column using the `displayWeekNumbers` property. This has a named slot to allow full control.
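A minimal sketch of a `click-date` handler using the new argument order (the handler name and body here are illustrative):
```JavaScript
// Template: <calendar-view :items="items" :show-date="dt" @click-date="onClickDate" />
methods: {
	onClickDate(date, items, windowEvent) {
		// date: the clicked Date; items: the calendar items falling on that date;
		// windowEvent: the native DOM click event, now the third argument.
		console.log(`${items.length} item(s) on ${date.toDateString()}`)
	},
},
```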
## 4.4.0 (2020-05-24)
- Fix events showing incorrectly during the week of a DST change in the UK (#135 and #150, thanks @ghost and @robert-joscelyne)
- Update dependencies
- Clean up to use arrow functions for some functions
- Now passes second argument `$event` (the native click event) to the `onClickDay` handler (#142, thanks @GiboMac)
- Pass drag/drop events on days back to calling app even if the dragged item is not one of the calendar's items. This allows developers to drag in elements from other controls, or to handle drag/drop if they use scoped slots for items. (#134, thanks @vykimo!)
## 4.3.2 (2019-11-24)
- Fix calendar layout for RTL languages (#138)
## 4.3.1 (2019-11-24)
- Method rename bug
## 4.3.0 (2019-11-24)
- The `click-date` and `click-event` events are emitted with the DOM click event as the second argument
- Added the `doEmitItemMouseEvents` option to emit `item-mouseover` and `item-mouseout` events when the mouse hovers over a calendar item (#136)
- Began renaming calendar "events" to calendar "items" in documentation and code where possible without breaking compatibility. Using the term "event" is confusing when the component also deals with a number of _DOM_ events. In version 5.0, there will be some breaking changes around this, but it'll just be renaming some props, slots, etc.
- Updated to Vue CLI 4, updated all dependencies. Still stuck on ESLint 5 since `cli-plugin-eslint` isn't caught up yet.
## 4.2.2 (2019-05-01)
- Fix CSS precedence for "today" class over "outsideOfMonth" (#126)
- Update various dependencies
## 4.2.1 (2019-01-25)
- Fix issue with button click event propagation in default header (#117)
- Update dependencies
- Update minimal test app to use the default header
- Changes due to the hell that is Vetur's available HTML template auto-formatters
## 4.2.0 (2018-12-18)
- Updated dependencies
- Removed auto-defaulting of event `id`, was causing update issues (#108)
- Removed auto-registration of the Vue components in the webpack bundle, it was causing issues for people with multiple Vue instances (#106)
- Replaced "Today" button in the default header with a `headerProps.currentPeriodLabel` property (set by the parent calendar) (#101)
- Moved said button between the arrow pairs (if you need to move it back, you can use the CSS flexbox `order` property)
- Added corresponding optional `currentPeriodLabel` property to the calendar component. If undefined, uses the localized date range. If "icons", uses an icon pair (`⇤` or `⇥`). Otherwise uses the literal value (_e.g._, use "Today" to mimic the old functionality).
- Added a `currentPeriodLabelIcons` property for advanced swapping of said icons
- Auto-formatting of HTML in template source code created some diff noise in the repo
- Added sample CSS to disable sticky positioning in Edge (disabled by default) (#109)
## 4.1.0 (2018-10-05)
- Fix where dowX class improperly assigned when startingDayOfWeek != 0 (#93)
- Add pseudo-hover class (`isHovered`) to all event elements in the view whose id matches the event being hovered (#95)
- Renamed prop `onPeriodChange` to `periodChangedCallback` to resolve issue for JSX users (#94) **BREAKING CHANGE**
- Hopefully fix where the initial call of `periodChangedCallback` was not firing for all users, or was firing duplicates (could not reproduce, #94, #98)
## 4.0.1, 4.0.2 (2018-08-25)
- Fix the "main" setting
- Fix exports
## 4.0.0 (2018-08-25)
### Breaking changes
#### Upgraded to vue-cli 3
This involved some changes to the source folder structure, as well as to the compiled files in the "dist" folder. Webpack-based imports of the components should be unaffected, same with CSS files. However, if you reference files directly in the dist folder (such as working in a non-webpack environment), the filenames have changed. Also, CalendarMathMixin is now part of the exported module, so if you're using webpack, you shouldn't need to reference the distribution file directly anymore. (And due to other changes, you may not need it anyway!)
#### A header component is _REQUIRED_ if you want a header
This is the biggest change. Many users want to heavily customize the header using slots, and to ensure feature parity and minimize edge cases, I decided to make the default header component _opt-in_. This library still includes the same default header component (CalendarViewHeader), but to see it, you should put it in the `header` named slot. Then, if you decide to create your own header, you can swap it out with ease.
Here's a minimal example:
```HTML
<calendar-view :show-date="dt">
  <template #header="{ headerProps }">
    <calendar-view-header
      :header-props="headerProps"
      @input="setShowDate" />
  </template>
</calendar-view>
```
```JavaScript
import { CalendarView, CalendarViewHeader } from "vue-simple-calendar"
export default {
name: "App",
data: () => ({ dt: new Date() }),
components: {
CalendarView,
CalendarViewHeader,
},
methods: {
setShowDate(newValue) {
this.dt = newValue
}
}
[...]
}
```
#### The show-date-change event no longer exists
The `show-date-change` event was emitted by `CalendarView` _on behalf of_ the default header. Since `CalendarView` is now decoupled from the default header, the purpose of this event is now moot. If you're using the default header component, put an `@input` listener on it instead. If you are using your own header component, you can decide how it should communicate with your app if the header includes UI components the user can interact with.
#### New property: onPeriodChange (NOTE: renamed in 4.1 to periodChangedCallback)
This property sounds like an event, and it is, _sort of_. It's an optional `prop` that takes a _function_ as its argument. The function is invoked whenever the date range being shown by the calendar has changed (and also when the component is initialized). It passes a single argument to the function, an object with the keys `periodStart` and `periodEnd` (the dates that fall within the range of the months being shown) and `displayFirstDate` / `displayLastDate` (the dates shown on the calendar, including those that fall outside the period). The intent of this property is to let the parent know what is being shown on the calendar, which the parent can then use to, for example, query a back-end database to update the event list, or update another component with information about the same period.
This was the solution I landed on in response to the question I had on issue #69. While `watch` with `immediate` can be used to see when the period changes, a watch handler will _not_ emit an event during initialization (`this.$emit` does nothing). I also tried using `mounted` or `updated` as well, but they have the same problem--you can't emit events during component initialization. `Function` props are an oft-ignored feature of Vue, but in this case I believe it was the right call. It is fired once after the component updates the first time, and anytime thereafter that a change to any of the other props results in a change in the calendar's current date range.
Note: these date ranges are also passed to the header slot as part of `headerProps`, so you don't need to wire them to your header.
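A minimal usage sketch (the prop is shown under its 4.0 name; from 4.1 onward bind it as `:period-changed-callback`, and `fetchItems` here is a hypothetical method that queries your back end):
```JavaScript
// Template: <calendar-view :show-date="dt" :on-period-change="periodChanged" />
methods: {
	periodChanged({ periodStart, periodEnd, displayFirstDate, displayLastDate }) {
		// Invoked once on initialization and again whenever the displayed range changes.
		this.fetchItems(displayFirstDate, displayLastDate)
	},
},
```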
#### No default export
The `vue-simple-calendar` main bundle now includes `CalendarView`, `CalendarViewHeader`, and `CalendarViewMixin`. This makes more sense, particularly since the header will need to be imported in any apps that want to use it, but it also means your `import` statements need to specify which module you're importing (there is no default export). In most cases, your import should look like this:
```JavaScript
import { CalendarView, CalendarViewHeader } from "vue-simple-calendar"
```
In short, the curly braces are important.
### Bug fixes and non-breaking changes
- Added `style` property to events to allow pass-through of arbitrary CSS attributes (thanks @apalethorpe!)
- Added optional behavior to show the entire event title (wrapped) on hover (thanks @jiujiuwen!)
- Fixed an issue where events on the first day of the week of a one-week calendar were not shown if the passed showDate value had a time component. #80 (thanks @MrSnoozles!)
- Added `previousFullPeriod`/`nextFullPeriod` to `headerProps` to provide more flexibility in how the previous/next buttons operate. #79 (thanks @lochstar!)
- Added `label` to `headerProps`, as part of the move to get rid of the default header and require a header slot.
## 3.0.2 (2018-05-16)
- Added `top` scoped property to the `event` slot (#66, thanks @lochstar!)
- Tweak CSS for scrolling when ancestor uses `flex-direction: column` (#71)
- Ensure keys used internally for weeks and days don't collide with numeric event `id` values (#65)
## 3.0.1 (2018-05-08)
- Added the `eventTop`, `eventContentHeight`, and `eventBorderHeight` props to allow better theming (#66)
## 3.0.0 (2018-05-05)
- Added `dateClasses` prop to allow easy dynamic styling of specific dates (#55, thanks @LTroya!). See the sketch after this list.
- Massive CSS reorganization to rely less on complex cascading for easier theming (#45, #52)
- Removed need for complex z-index on week and event elements (`zIndex` no longer passed in event slot)
- Removed need for eventRowX class. Top position CSS is now computed dynamically based on the row and passed to the event slot as "top".
- Removed limitation of 20 event rows per week
- Default header buttons no longer use CSS content for their labels
- The `dayContent` slot now does **not** contain the `cv-date-number` div. This makes it easier to provide your own content without having to duplicate the day number.
- The `content` element within each day has been removed, as it is no longer needed. The default theme now uses `box-shadow` instead of `border` to highlight the date when dragging an event.
- Fixed drag and drop issue in Firefox (#57)
- Implemented new custom header capability, and refactored the default header as a separate component with the same interface
- Upgraded to webpack 4.7
- Refactored `periodLabel` from CSS logic into a reusable function
- Transpilation to ES5 appears to be functioning properly
- Activating the default theme now requires a class ("theme-default") (#45)
- Fixed and tested polyfill in the same application in IE11
- Fixed flexbox rendering issue in IE11
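As promised above, a sketch of the new `dateClasses` prop. This assumes the documented shape for this release (keys are dates in ISO form, values are one CSS class or an array of classes), so double-check against the README of the version you're running:
```JavaScript
export default {
	// Bound in the template as :date-classes="dateClasses"
	// The date keys and class names here are made up for illustration.
	computed: {
		dateClasses() {
			return {
				"2018-05-05": "release-day",
				"2018-12-25": ["holiday", "office-closed"],
			}
		},
	},
}
```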
### Migration guide for 3.0.0
- IE11 support has been fixed! However, you will need to include `babel-polyfill` in your app's entry point, webpack config, or via a script tag, if you aren't already doing so.
- Any custom themes or other CSS overrides will need to be modified for the upgraded classes and structure.
- If you use the `dayContent` slot, you no longer need to render the day number, it's outside the slot now.
- There was a problem with the default height of the calendar (it wasn't 100% of the parent); that has been resolved.
- Webpack 4 is required to build the component or sample app. If you're just installing and using the component, this won't impact you.
- Creating and slotting a custom header is now very simple.
- To activate the default theme, your `<calendar-view>` element will need the `theme-default` class (in addition to importing the CSS file, of course).
## 2.2.1 (2018.03.19)
- Fix where babel was not transpiling appropriately for IE11 (#46)
- Fix where "sticky" content was causing issues for IE11, which doesn't support sticky
- Never published on npm due to continued IE11 issues
## 2.2.0 (2018.03.18)
- Removed the events deprecated in 2.1.0
- Upgraded to Webpack 4
- Moved version history to this CHANGELOG file
- Moved some opinionated styles from the baseline (SFC) to the default theme.
- Fixed event slot issue reported in #42 and #50 (thanks @lexuzieel!).
- Added `zIndex` prop to event scoped slot properties.
- Formatted to meet newer eslint rules.
- Corrected some minor positioning issues with events (including removing remaining em-based borders)
- The `click-event` and `drag-*` events now pass the **normalized** event (same as the "event" named slot). You can access your original event (_which is the one you should modify_) using the `originalItem` attribute. While this is a minor breaking change, I wasn't quite ready to move up to 3.0, and this does make the API more consistent in how it passes events back to the caller.
- **Known issue:** Babel is not currently transpiling correctly to provide IE11 support. Looking for assistance.
## 2.1.2 / 2.1.3 (2018.01.27)
- Prevent `click-date` events for future dates when `disableFuture` is `true` (feature parity with `disablePast`). Fixes #40.
## 2.1.0 / 2.1.1 (2018.01.25)
The events below were renamed to make them kebab-case (for DOM template compatibility) and to refine the wording. The old event names, shown here, were deprecated in this version and removed in 2.2:
- `clickDay`
- `clickEvent`
- `setShowDate`
- `dragEventStart`
- `dragEventEnterDate`
- `dragEventLeaveDate`
- `dragEventOverDate`
- `dropEventOnDate`
## 2.0.1 (2018.01.23)
- Fixed `outsideOfMonth` logic bug, #38
## 2.0.0 (2018.01.01)
Version 2.0 includes some major upgrades! Here are the new features:
- Dates passed as strings are interpreted using _browser local_ time, not UTC, which prevents the event from showing up on an unexpected date.
- Optional display of start and/or end times of events, with options for formatting
- Ability to view more than one month at a time
- Week view (including multi-week)
- Year view (including, but not necessarily sanely, multi-year support)
- New named slot for `event`
- All slots now pass back useful properties the caller can bring into their scope
- The main grid is scrollable if it is too tall for the component
- Each week is scrollable if its events are too tall for the week's row in the component
This means there are some breaking changes:
- The component is now called **calendar-view** rather than **calendar-month**, to better reflect the flexibility of the period shown. (The package is still `vue-simple-calendar`.)
- Because of the above, the CSS class of the root element has also changed to **calendar-view**.
- The CSS class of the element containing the body of the view has changed from **month** to **weeks**, since periods other than a single month can be shown.
- If you pass dates as strings, they MUST be in ISO form (`yyyy-mm-dd hh:mm:ss`). The time portion is optional, and within time, the minutes and seconds are also optional.
- The header has been refactored to take better advantage of flexbox, increase the header text size, and group the buttons. This should make it easier to customize, but if you have a custom theme, it may need some updates.
- If the calendar is too short to view the entire period, the calendar body is scrollable (scroll bars are hidden, use touch or scroll wheel).
- If an individual week is too short to view all events in the week, the week's events are scrollable (scroll bars are hidden, use touch or scroll wheel).
- The minimum cell height is now 3em, to ensure that at least one event shows vertically, and if there are others to scroll to, a small part of the next one is visible.
- Emitted drag and drop events pass the original calendar event, not just its id.
- The `dragEventDragOverDate` event (undocumented) has been renamed as `dragEventOverDate`. Prior to 2.0, user events emitted the calendar event's _id_ as the first argument rather than the calendar event itself. Since not all calendar events will have an ID and the parent will probably want access to the actual calendar event, I changed these Vue events to emit the original calendar event, not just its id.
- The `dayList` slot has been replaced with `dayHeader`, and slot `day` has been renamed as `dayContent`.
- The word `slot` in the sense of an event display row has been renamed as `eventRow` in the code and CSS to avoid confusion with Vue slots.
- Up to 20 events per day are now supported (up from 10).
- Some basic colors, borders, etc. have been moved from the default theme into the component's core CSS, allowing the component to have a more appealing look with no theme in place and a better starting point for custom themes.
- Reversed the circle-arrow labels used to return to the current period: the arrows now point clockwise to "go forward" to the current period and counter-clockwise to "go back" to it.
#### Props Added in 2.0
- `showEventTimes` - If true, shows the start and/or end time of an event beside the event title. Midnight is not shown; a midnight time is assumed to indicate an all-day or indeterminate time. (If you want to show midnight, use `00:00:01` and don't choose to show seconds.) The default is `false`.
- `timeFormatOptions` - This takes an object containing `Intl.DateTimeFormat` options to be used to format the event times. The `locale` setting is automatically used. This option is ignored for browsers that don't support `Intl` (they will see the 24-hour, zero-padded time).
- `displayPeriodUom` - The period type to show. By default this is `month`, _i.e._, it shows a calendar in month-sized chunks. Other allowed values are `year` and `week`.
- `displayPeriodCount` - The _number_ of periods to show within the view. For example, if `displayPeriodUom` is `week` and `displayPeriodCount` is 2, the view will show a two-week period.
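Putting the four new props together, here is a hypothetical two-week view that shows event times in a short 12-hour format. The named-export import shown matches the current packaging described earlier (adjust it for the version you actually run), and the `template` string requires the compiler-included Vue build:
```JavaScript
import Vue from "vue"
import { CalendarView } from "vue-simple-calendar"

new Vue({
	el: "#app",
	components: { CalendarView },
	data: () => ({ dt: new Date() }),
	// hour/minute are standard Intl.DateTimeFormat options
	template: `
		<calendar-view
			:show-date="dt"
			:show-event-times="true"
			:time-format-options="{ hour: 'numeric', minute: '2-digit' }"
			display-period-uom="week"
			:display-period-count="2" />`,
})
```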
## 1.8.2 (2017.12.30)
- A `dayList` slot was added.
- A `day` slot was added.
- A `header` slot was added (#32)
- Fixed display issue (#33)
## Older changes
| Date | Version | Notes |
| ---------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2017.05.11 | 1.0.0 | First version |
| 2017.05.15 | 1.1.0 | Better demo styling; refactor code; add basic drag/drop capability; fix display issue when events not sorted by start date |
| 2017.05.20 | 1.2.0 | Redesigned to work around z-index context issue with multi-day events (events now positioned above days, weeks rendered individually). Significant improvements to handling of event slots and clipping when event content exceeds height/width. |
| 2017.05.21 | 1.3.0 | Fixed IE. Bad IE. Fixed CSS references to emoji. Default style adjustments. Clean up some old code. Add previous/next year buttons. |
| 2017.05.22 | 1.3.1 | Improved demo, first published to npm. |
| 2017.05.27 | 1.4.0 | Add new classes, move a few classes up to `calendar` node, rename a few classes to pascalCase for consistency. |
| 2017.07.16 | 1.5.0 | Clean up code, move date math to a mixin; allow `endDate`, `title`, and `id` to be optional; change so only core CSS (mostly position / metrics) is in the component, a separate CSS file contains the default theme. Reorganized and updated optional US holiday theme CSS file. Tweaked default theme and metrics for consistency and cleaner look. NOTE: the default component name is now `calendar-month`, as is the primary container's CSS class. This was done for possible future expansion to support other views (such as a week view) and to give the CSS a slightly more unique name without resorting to scoped CSS. The name of the npm package, repository, etc. remains vue-simple-calendar. |
| 2017.10.03 | 1.5.1 | Fix issue where months ending in Saturday did not show their last week. Moved mixin to component folder. |
| 2017.10.04 | 1.5.2 | Fix webpack issue with mixin import and Vue warning about non-primitive keys. |
| 2017.11.11 | 1.5.3 | Fix date differences over DST and toBeContinued logic (thanks @houseoftech and @sean-atomized!) |
| 2017.11.12 | 1.6.0 | Fix future/past classes. Tweaks to CSS to fix border render issue, simplify. Change height from aspect ratio to the height of the container (the reason for the minor version increment). |
| 2017.11.12 | 1.6.1 | Fix issues when events have a time other than midnight (they should be ignored). Add stylelint and vue lint, clean up package.json, other minor tweaks. Set browser compatibility to a minimum of IE10. Prevent issues from caching "today" value. |
| 2017.12.12 | 1.7.0 | Add `startingDayOfWeek` property to allow the calendar to optionally start on any desired day of the week |
| 2017.12.15 | 1.7.1 | Hopefully resolve reported babel preset error |
| 2017.12.17 | 1.8.0 | Split sample app to another repo, rebuild build/config scripts from scratch |
| 2017.12.17 | 1.8.1 | Add build for mixin |
# Ansible Collection - ansible.ansible_navigator
http://github.com/ansible/ansible-navigator
### Developer note
Changes to this collection will likely result in the fixtures needing to be recreated and running with `tox --recreate`, since `tox` does not track files outside the source tree.
# Elona foobar [![AppVeyor Build Status][appveyor-build-status-svg]][appveyor-build-status] [![Travis CI Build Status][travis-build-status-svg]][travis-build-status]
* One of the Elona variants.
* It is written in C++.
* It is derived from Elona v1.22.
* It is still in alpha.
* It ~~has~~ will have the highest extensibility with Lua. [####+]
* It ~~achieves~~ will achieve true internationalization. [####+]
* It works on multiple platforms. [####+]
# How To Build
## Requirements
* `make`
* CMake 3.2 or later
* C++ compiler which supports C++14
* Boost
* Lua 5.3
* SDL2, SDL2_image, SDL2_ttf and SDL2_mixer
* `clang-format`, `find`, and `xargs` (optional)
### Additional requirements for Windows
* Visual Studio 2017 x64
* 7-Zip or similar (for automated dependency extraction using `7z.exe`)
* `patch.exe` (for patching Lua with UTF-16 filename support. It comes with Git for Windows. Make sure it's on your path.)
### Additional requirements for Linux
* SMPEG
* Timidity++
## Steps
1. Extract `elona122.zip` (from [here](http://ylvania.style.coocan.jp/file/elona122.zip)) to the `deps` directory, so `deps\elona` exists. This will allow for automatically copying the required assets.
2. Follow the platform-specific instructions below.
### macOS
1. Install the required dependencies.
```
brew install cmake boost sdl2 sdl2_ttf sdl2_mixer sdl2_image lua
```
2. `cd path/to/Elona_foobar; make build`
### Linux
1. Install the required dependencies. For Arch Linux:
```
sudo pacman -S cmake sdl2 sdl2_ttf sdl2_image sdl2_mixer gtk3 smpeg lua boost timidity++
```
For systems with `apt`:
```
sudo apt-get install cmake liblua5.3-dev libboost-all-dev libsdl2-dev libsdl2-image-dev libsdl2-mixer-dev libsdl2-ttf-dev gtk+-3.0 smpeg timidity
```
2. `cd path/to/Elona_foobar; make build`
### Windows
1. Download and install the binaries for Boost `1.66` from [here](https://dl.bintray.com/boostorg/release/1.66.0/binaries/boost_1_66_0-msvc-14.1-64.exe).
2. Edit `Makefile.win` to point to your Boost install directory.
3. Run `download.bat` inside the `deps` folder to download and extract the other dependencies to `deps\include`, `deps\lib` and `thirdparty\lib` (you have to have `7z.exe` and `patch.exe` on your `PATH`). This will also patch Lua for UTF-16 filename support.
4. Open the `Developer Command Prompt for VS 2017`.
5. `cd path/to/Elona_foobar & nmake build -f Makefile.win`
To debug with Visual Studio, open `bin\Elona_foobar.sln`.
### Android
Building has only been tested on Linux so far.
1. Copy `android/local.properties.sample` to `android/local.properties` and edit it to point to your Android SDK and NDK installation paths.
2. `cd path/to/Elona_foobar; make android` (for release, run `make android_release`)
A standalone APK will be output in `bin/`.
By default, assets from vanilla Elona in `deps/` are not bundled with the APK, to respect the original content authors. To bundle these assets, pass the flag `-DANDROID_BUNDLE_ASSETS` to `cmake`. If original assets are not detected on startup, the app will prompt for the location of `elona122.zip`.
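For example, a direct CMake invocation might look like the sketch below. This is hypothetical: the `make android` targets above normally drive CMake for you, and `ON` is an assumed value for the switch.
```
# Hypothetical direct invocation; the Makefile targets normally handle this.
# "=ON" is an assumed value for the bundling switch.
cmake -DANDROID_BUNDLE_ASSETS=ON path/to/Elona_foobar
```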
# How To Play
1. Copy the `data`, `graphic`, `sound` and `user` folders from vanilla v1.22 to the directory containing the executable. Make sure not to overwrite any files that already exist, as they have been updated in this version.
2. Execute `bin/Elona_foobar`, `bin/Elona_foobar.app` or `bin/Debug/Elona_foobar.exe`.
# How To Contribute
See [CONTRIBUTING.md](.github/CONTRIBUTING.md) for details.
# License
MIT License. See [LICENSE.txt](LICENSE.txt) for details. This license is applied for the
contents in this repository. Note that images, sounds and fonts are not included.
For files under [runtime/graphic](runtime/graphic/) folder of the repository, see
[runtime/graphic/LICENSE.txt](runtime/graphic/LICENSE.txt).
## Thirdparty libraries
* microhcl: see [src/thirdparty/microhcl/LICENSE](src/thirdparty/microhcl/LICENSE).
* microhil: see [src/thirdparty/microhil/LICENSE](src/thirdparty/microhil/LICENSE).
* Catch2: see [src/thirdparty/catch2/LICENSE](src/thirdparty/catch2/LICENSE).
* hayai: see [src/thirdparty/hayai/LICENSE](src/thirdparty/hayai/LICENSE).
* sol2: see [src/thirdparty/sol2/LICENSE](src/thirdparty/sol2/LICENSE).
* ordered_map: see [src/thirdparty/ordered_map/LICENSE](src/thirdparty/ordered_map/LICENSE).
* boostrandom: see [src/thirdparty/boostrandom/LICENSE_1_0.txt](src/thirdparty/LICENSE_1_0.txt).
* cmake/FindXXX.cmake: see [cmake/LICENSE](cmake/LICENSE).
* nativefiledialog: see [src/thirdparty/nfd/LICENSE](src/thirdparty/nfd/LICENSE).
* nlohmann/json: see [src/thirdparty/nlohmannjson/LICENSE.MIT](src/thirdparty/nlohmannjson/LICENSE.MIT).
## Lua libraries
* [inspect.lua](https://github.com/kikito/inspect.lua) (MIT)
<!-- Badges -->
[appveyor-build-status]: https://ci.appveyor.com/project/ki-foobar/elonafoobar/branch/develop
[appveyor-build-status-svg]: https://ci.appveyor.com/api/projects/status/jqhbtdkx86lool4t/branch/develop?svg=true
[travis-build-status]: https://travis-ci.org/ElonaFoobar/ElonaFoobar?branch=develop
[travis-build-status-svg]: https://travis-ci.org/ElonaFoobar/ElonaFoobar.svg?branch=develop
---
title: INTERSECTELEMENTS
---
**INTERSECT\_ELEMENTS**
This command takes two meshes and creates an element-based attribute
in mesh1 that contains the number of elements in mesh2 that
intersected the respective element in mesh1.
We define intersection as two elements sharing any common point.
**FORMAT:**
**intersect\_elements** / mesh1 / mesh2 / [attrib\_name]
**NOTES:**
[attrib\_name] specifies the name of the element based attribute in
mesh1 that is created by this command. The default name for this
attribute is in\_<mesh2>. For example, if the command syntax was:
**intersect\_elements**/cmo\_strat/cmo\_well/
the element based attribute that stores the number of intersections
would be named in\_cmo\_well. It is worth noting that GMV does not
take kindly to names that are longer than eight characters and will
truncate them without even thinking twice, resulting in the name used
in our example being changed to in\_cmo\_w. Therefore, it is good
practice to use your own attribute names less than eight characters if
possible.
This code has been slightly modified to work with AMR grids produced
in X3D. This modification depends on an element based attribute that
X3D creates called **itetkid**. If this attribute is not present,
**intersect\_elements** will **NOT** be able to recognize the AMR
grid, and will intersect all elements of the octree. With the itetkid
attribute present, only leaves of the octree which intersect will be
flagged.
**intersect\_elements** is not designed to work with every
element-element combination, but it is pretty thorough. The following
table shows what element/element intersection capabilities are
available. An **X** in the box means that the intersection is
supported.
|       | point | line  | tri   | quad  | tet   | pyr   | hex   |
| ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| point | **X** | **X** | **X** | **X** | **X** | **X** | **X** |
| line  | **X** | **X** | **X** | **X** | **X** |       | **X** |
| tri   | **X** | **X** | **X** | **X** | **X** |       | **X** |
| quad  | **X** | **X** | **X** | **X** | **X** |       | **X** |
| tet   | **X** | **X** | **X** | **X** | **X** |       | **X** |
| pyr   | **X** |       |       |       |       |       |       |
| hex   | **X** | **X** | **X** | **X** | **X** |       | **X** |
For example, this means that if you have a mesh that has hexes and
tets in it, you could intersect it with a mesh that has anything but
pyramids in it.
Finally, **intersect\_elements** is based on a k-D-R tree
implementation to improve performance in many circumstances.
Unfortunately, there is no way to improve performance if the elements
being intersected have many candidate elements in their bounding
boxes. As such, there are situations where running time may be
improved by refining mesh2 such that its elements are of comparable
size with those of mesh1.
**EXAMPLES:**
**intersect\_elements**/cmo\_grid/cmo\_sphere/
**intersect\_elements**/cmo\_grid/cmo\_well/obswell
| 41.466667 | 71 | 0.642444 | eng_Latn | 0.999194 |
763328cbb9384151d4e49e0f8b4dc0f55be563fd | 337 | md | Markdown | src/functions/usdrepeatmessage.md | stickman1199/documentation | d7321b281dba45b1700abc0426fe021f5eeefb66 | [
"Apache-2.0"
] | 38 | 2021-06-21T02:38:04.000Z | 2022-03-23T06:42:49.000Z | src/functions/usdrepeatmessage.md | stickman1199/documentation | d7321b281dba45b1700abc0426fe021f5eeefb66 | [
"Apache-2.0"
] | 22 | 2021-06-22T19:18:55.000Z | 2022-02-06T17:00:00.000Z | src/functions/usdrepeatmessage.md | stickman1199/documentation | d7321b281dba45b1700abc0426fe021f5eeefb66 | [
"Apache-2.0"
] | 68 | 2021-06-21T02:46:43.000Z | 2022-02-12T06:12:05.000Z | ---
description: Repeats the message for X amount of times
---
# $repeatMessage
This function repeats the <message> X amount of times
```javascript
$repeatMessage[times;message]
```
```javascript
bot.command({
name: "repeatMessage",
code: `$repeatMessage[5;Aoi.js is awesome]`
})
//Will resend 'Aoi.js is awesome' 5 times
```
| 16.047619 | 59 | 0.709199 | eng_Latn | 0.890728 |
763373d38696a0bed56ed15eb17c14a0f713237a | 1,110 | md | Markdown | data/markdown/2019/125a.md | munichrocker/grundgesetz_changes | 79fcf71f3f8d89f93cbea960e411d253c4939610 | [
"MIT"
] | 3 | 2019-08-06T14:42:02.000Z | 2021-04-01T08:31:53.000Z | data/markdown/2019/125a.md | munichrocker/grundgesetz_changes | 79fcf71f3f8d89f93cbea960e411d253c4939610 | [
"MIT"
] | null | null | null | data/markdown/2019/125a.md | munichrocker/grundgesetz_changes | 79fcf71f3f8d89f93cbea960e411d253c4939610 | [
"MIT"
] | null | null | null | ## Artikel 125a
(1) Recht, das als Bundesrecht erlassen worden ist, aber wegen der Änderung des [Artikels 74 Abs. 1](#artikel-74), der Einfügung des [Artikels 84 Abs. 1 Satz 7](#artikel-84), des [Artikels 85 Abs. 1 Satz 2](#artikel-85) oder des [Artikels 105 Abs. 2 a Satz 2](#artikel-105) oder wegen der Aufhebung der [Artikel 74 a](#artikel-74a), [75](#artikel-75) oder [98 Abs. 3 Satz 2](#artikel-98) nicht mehr als Bundesrecht erlassen werden könnte, gilt als Bundesrecht fort. Es kann durch Landesrecht ersetzt werden.
(2) Recht, das aufgrund des [Artikels 72 Abs. 2](#artikel-72) in der bis zum 15. November 1994 geltenden Fassung erlassen worden ist, aber wegen Änderung des [Artikels 72 Abs. 2](#artikel-72) nicht mehr als Bundesrecht erlassen werden könnte, gilt als Bundesrecht fort. Durch Bundesgesetz kann bestimmt werden, dass es durch Landesrecht ersetzt werden kann.
(3) Recht, das als Landesrecht erlassen worden ist, aber wegen Änderung des [Artikels 73 GG](#artikel-73) nicht mehr als Landesrecht erlassen werden könnte, gilt als Landesrecht fort. Es kann durch Bundesrecht ersetzt werden.
| 158.571429 | 507 | 0.767568 | deu_Latn | 0.97527 |
7633b44e6d1b2be5bb19939aa3b5c1f5dd996401 | 1,706 | md | Markdown | rules/production-do-you-know-how-to-record-live-video-interviews-on-location/rule.md | bradystroud/SSW.Rules.Content | 82b957e2f00194fbf9fef57d640f82e6d16f8bfd | [
"CC0-1.0"
] | null | null | null | rules/production-do-you-know-how-to-record-live-video-interviews-on-location/rule.md | bradystroud/SSW.Rules.Content | 82b957e2f00194fbf9fef57d640f82e6d16f8bfd | [
"CC0-1.0"
] | null | null | null | rules/production-do-you-know-how-to-record-live-video-interviews-on-location/rule.md | bradystroud/SSW.Rules.Content | 82b957e2f00194fbf9fef57d640f82e6d16f8bfd | [
"CC0-1.0"
] | null | null | null | ---
type: rule
title: Production - Do you know how to record live video interviews on location?
uri: production-do-you-know-how-to-record-live-video-interviews-on-location
authors: []
related: []
redirects: []
created: 2011-08-30T18:02:28.000Z
archivedreason: null
guid: 20ba212f-4b83-4be8-9051-07e6c54b9a30
---
Recording live video interviews on location can be difficult. The key to success is to make the process as simple as possible, so you continue to record and release interviews.
<!--endintro-->
As there may be many variables during a shoot on location, you need to be able to keep track of multiple things during the interview.
The most important things to focus on are:
* Audio
* Is each microphone coming through clearly
* Are there any sounds that would make it hard to hear the interview (e.g. passing trucks, air conditioning on, machinery)
* Lighting
* Are your subjects bright enough to see clearly
* Are there any harsh or awkward shadows blocking their face or body?
* Framing
* Can you see everything you need to?
If any of these are not right, you will probably need to record again!
### Tips to simplify the process:
1. The interviewer should hold the camera and interview the subject at the same time.
2. Keep a tight frame - don’t have lots of empty space around the subject.
3. Use the rule of thirds. See:
`youtube: https://www.youtube.com/embed/iH3Z-3SeWiM`
4. Don't zoom erratically – Ease in and ease out of zooms
5. If someone starts speaking off camera, move to them slowly and smoothly without rushing (it is OK for them to talk off camera for a short time)
6. To record both voices use a single shotgun microphone for both interviewer and subject
| 40.619048 | 176 | 0.762603 | eng_Latn | 0.998661 |
7633d0de805352e678a5d556cf12b7dc38aef72f | 145 | md | Markdown | README.md | josemm97/invie-githhub | 7ee98ad4640ac5fc3a33d1062400e356b8c78679 | [
"Apache-2.0"
] | null | null | null | README.md | josemm97/invie-githhub | 7ee98ad4640ac5fc3a33d1062400e356b8c78679 | [
"Apache-2.0"
] | null | null | null | README.md | josemm97/invie-githhub | 7ee98ad4640ac5fc3a33d1062400e356b8c78679 | [
"Apache-2.0"
] | null | null | null | # invie-githhub
Proyecto que se realizo durante el curso de GIT profesional utilizando imagenes como partes del proyeecto como ejemplo solamente
| 48.333333 | 128 | 0.841379 | spa_Latn | 0.999122 |
7633e0e0506bafaf4cdb6a9d10e813917d60552d | 145 | md | Markdown | _dependencies/material-design-web-css.md | Cover-UI/Cover-UI.github.io | 8330db2505329cd31e08065c7e0e2f313149eb68 | [
"MIT"
] | null | null | null | _dependencies/material-design-web-css.md | Cover-UI/Cover-UI.github.io | 8330db2505329cd31e08065c7e0e2f313149eb68 | [
"MIT"
] | null | null | null | _dependencies/material-design-web-css.md | Cover-UI/Cover-UI.github.io | 8330db2505329cd31e08065c7e0e2f313149eb68 | [
"MIT"
] | 1 | 2021-06-22T16:01:47.000Z | 2021-06-22T16:01:47.000Z | ---
label: material-design-web-css
src:https://unpkg.com/material-components-web@latest/dist/material-components-web.min.css
version: latest
---
| 24.166667 | 89 | 0.772414 | deu_Latn | 0.125502 |
76340a8b2d6c2ee854756460c0085edff09cdf63 | 2,680 | md | Markdown | README.md | tigertext/prometheus-cowboy | 9ea655cdeee11f47cd230b238b35568110a5e121 | [
"MIT"
] | null | null | null | README.md | tigertext/prometheus-cowboy | 9ea655cdeee11f47cd230b238b35568110a5e121 | [
"MIT"
] | null | null | null | README.md | tigertext/prometheus-cowboy | 9ea655cdeee11f47cd230b238b35568110a5e121 | [
"MIT"
] | 1 | 2020-03-03T07:17:07.000Z | 2020-03-03T07:17:07.000Z |
# prometheus_cowboy #
Copyright (c) 2017 Ilya Khaprov <<[email protected]>>.
__Version:__ 0.1.4
[![Hex.pm][Hex badge]][Hex link]
[![Hex.pm Downloads][Hex downloads badge]][Hex link]
[![Build Status][Travis badge]][Travis link]
## Exporting metrics with handlers
Cowboy 1:
```erlang
Routes = [
{'_', [
{"/metrics/[:registry]", prometheus_cowboy1_handler, []},
{"/", toppage_handler, []}
]}
]
```
Cowboy 2:
```erlang
Routes = [
{'_', [
{"/metrics/[:registry]", prometheus_cowboy2_handler, []},
{"/", toppage_handler, []}
]}
]
```
## Exporting Cowboy2 metrics
```erlang
{ok, _} = cowboy:start_clear(http, [{port, 0}],
#{env => #{dispatch => Dispatch},
metrics_callback => fun prometheus_cowboy2_instrumenter:observe/1,
stream_handlers => [cowboy_metrics_h, cowboy_stream_h]})
```
## Contributing
Section order:
- Types
- Macros
- Callbacks
- Public API
- Deprecations
- Private Parts
Install the `git` pre-commit hook:
```bash
./bin/pre-commit.sh install
```
The pre-commit check can be skipped by passing `--no-verify` to `git commit`.
## License
MIT
[Hex badge]: https://img.shields.io/hexpm/v/prometheus_cowboy.svg?maxAge=2592000?style=plastic
[Hex link]: https://hex.pm/packages/prometheus_cowboy
[Hex downloads badge]: https://img.shields.io/hexpm/dt/prometheus_cowboy.svg?maxAge=2592000
[Travis badge]: https://travis-ci.org/deadtrickster/prometheus-cowboy.svg?branch=version-3
[Travis link]: https://travis-ci.org/deadtrickster/prometheus-cowboy
[Coveralls badge]: https://coveralls.io/repos/github/deadtrickster/prometheus-cowboy/badge.svg?branch=master
[Coveralls link]: https://coveralls.io/github/deadtrickster/prometheus-cowboy?branch=master
## Modules ##
<table width="100%" border="0" summary="list of modules">
<tr><td><a href="https://github.com/deadtrickster/prometheus-cowboy/blob/master/doc/prometheus_cowboy.md" class="module">prometheus_cowboy</a></td></tr>
<tr><td><a href="https://github.com/deadtrickster/prometheus-cowboy/blob/master/doc/prometheus_cowboy1_handler.md" class="module">prometheus_cowboy1_handler</a></td></tr>
<tr><td><a href="https://github.com/deadtrickster/prometheus-cowboy/blob/master/doc/prometheus_cowboy2_handler.md" class="module">prometheus_cowboy2_handler</a></td></tr>
<tr><td><a href="https://github.com/deadtrickster/prometheus-cowboy/blob/master/doc/prometheus_cowboy2_instrumenter.md" class="module">prometheus_cowboy2_instrumenter</a></td></tr></table>
| 28.210526 | 188 | 0.667164 | yue_Hant | 0.527148 |
76345b1f1330c5bb38bcd21b9a381f4708c5de92 | 3,445 | md | Markdown | sdk-api-src/content/gdiplusheaders/nf-gdiplusheaders-cachedbitmap-getlaststatus.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/gdiplusheaders/nf-gdiplusheaders-cachedbitmap-getlaststatus.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/gdiplusheaders/nf-gdiplusheaders-cachedbitmap-getlaststatus.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:gdiplusheaders.CachedBitmap.GetLastStatus
title: CachedBitmap::GetLastStatus (gdiplusheaders.h)
description: The CachedBitmap::GetLastStatus method returns a value that indicates whether this CachedBitmap object was constructed successfully.
helpviewer_keywords: ["CachedBitmap class [GDI+]","GetLastStatus method","CachedBitmap.GetLastStatus","CachedBitmap::GetLastStatus","GetLastStatus","GetLastStatus method [GDI+]","GetLastStatus method [GDI+]","CachedBitmap class","_gdiplus_CLASS_CachedBitmap_GetLastStatus_","gdiplus._gdiplus_CLASS_CachedBitmap_GetLastStatus_"]
old-location: gdiplus\_gdiplus_CLASS_CachedBitmap_GetLastStatus_.htm
tech.root: gdiplus
ms.assetid: VS|gdicpp|~\gdiplus\gdiplusreference\classes\cachedbitmapclass\getlaststatus_27.htm
ms.date: 12/05/2018
ms.keywords: CachedBitmap class [GDI+],GetLastStatus method, CachedBitmap.GetLastStatus, CachedBitmap::GetLastStatus, GetLastStatus, GetLastStatus method [GDI+], GetLastStatus method [GDI+],CachedBitmap class, _gdiplus_CLASS_CachedBitmap_GetLastStatus_, gdiplus._gdiplus_CLASS_CachedBitmap_GetLastStatus_
req.header: gdiplusheaders.h
req.include-header: Gdiplus.h
req.target-type: Windows
req.target-min-winverclnt: Windows XP, Windows 2000 Professional [desktop apps only]
req.target-min-winversvr: Windows 2000 Server [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Gdiplus.lib
req.dll: Gdiplus.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
req.product: GDI+ 1.0
ms.custom: 19H1
f1_keywords:
- CachedBitmap::GetLastStatus
- gdiplusheaders/CachedBitmap::GetLastStatus
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- Gdiplus.dll
api_name:
- CachedBitmap.GetLastStatus
---
# CachedBitmap::GetLastStatus
## -description
The <b>CachedBitmap::GetLastStatus</b> method returns a value that indicates whether this
<a href="/windows/desktop/api/gdiplusheaders/nl-gdiplusheaders-cachedbitmap">CachedBitmap</a> object was constructed successfully.
## -returns
Type: <b><a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a></b>
If this
<a href="/windows/desktop/api/gdiplusheaders/nl-gdiplusheaders-cachedbitmap">CachedBitmap</a> object was constructed successfully, the
<b>CachedBitmap::GetLastStatus</b> method returns <a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Ok</a>, which is an element of the
<b>Status</b> enumeration.
If this
<a href="/windows/desktop/api/gdiplusheaders/nl-gdiplusheaders-cachedbitmap">CachedBitmap</a> object was not constructed successfully, the <b>CachedBitmap::GetLastStatus</b> method returns an element of the
<a href="/windows/desktop/api/gdiplustypes/ne-gdiplustypes-status">Status</a> enumeration that indicates the nature of the failure.
## -see-also
<a href="/windows/desktop/api/gdiplusheaders/nl-gdiplusheaders-bitmap">Bitmap</a>
<a href="/windows/desktop/api/gdiplusheaders/nl-gdiplusheaders-cachedbitmap">CachedBitmap</a>
<a href="/windows/desktop/api/gdiplusgraphics/nl-gdiplusgraphics-graphics">Graphics</a>
<a href="/windows/desktop/api/gdiplusheaders/nl-gdiplusheaders-image">Image</a>
<a href="/windows/desktop/gdiplus/-gdiplus-using-a-cached-bitmap-to-improve-performance-use">Using a Cached Bitmap to Improve Performance</a>
```matlab
clear all;close all
import casadi.*
Xs = SX.sym('x',2,1);
Us = SX.sym('u',2,1);
ts = SX.sym('t');
%
A = [ -2 +1;
+1 -2];
B = [1 0;
0 1];
```
```matlab
EvolutionFcn = Function('f',{ts,Xs,Us},{ A*Xs + B*Us });
%
tspan = linspace(0,2,10);
iode = ode(EvolutionFcn,Xs,Us,tspan);
SetIntegrator(iode,'RK4')
iode.InitialCondition = [1;2];
```
```matlab
epsilon = 1e4;
PathCost = Function('L' ,{ts,Xs,Us},{ Us'*Us });
FinalCost = Function('Psi',{Xs} ,{ epsilon*(Xs'*Xs) });
iocp = ocp(iode,PathCost,FinalCost)
```
```
iocp =
ocp with properties:
DynamicSystem: [1x1 ode]
CostFcn: [1x1 CostFcn]
VariableTime: 0
constraints: [1x1 constraints]
TargetState: []
Hamiltonian: [0x0 casadi.Function]
AdjointStruct: [1x1 AdjointStruct]
ControlGradient: [1x1 casadi.Function]
```
---
title: Troubleshooting commits on your timeline
intro: 'You can view details for individual commits in the timeline on your profile. If details for an expected commit do not appear on your profile page, the author date and the commit date of the commit may differ.'
redirect_from:
- /articles/troubleshooting-commits-on-your-timeline
- /github/setting-up-and-managing-your-github-profile/troubleshooting-commits-on-your-timeline
- /github/setting-up-and-managing-your-github-profile/managing-contribution-graphs-on-your-profile/troubleshooting-commits-on-your-timeline
versions:
fpt: '*'
ghes: '*'
ghae: '*'
ghec: '*'
topics:
- Profiles
shortTitle: Troubleshoot commits
---
## Expected behavior for viewing commit details from your timeline
If you click the number of commits next to a repository in the timeline on your profile page, you'll see details for your commits from that time period, including a diff of the specific changes made to the repository.


## Missing commit details in the timeline
If you click a commit link on your profile page and don't find all of the expected commits on the repository's commits page, the commit history may have been rewritten in Git and the author date and commit date of the commits may differ.

## How GitHub uses the Git author date and commit date
In Git, the author date is the date a commit was originally created with the `git commit` command. The commit date is identical to that date unless the original commit, and with it the commit date, was later changed by `git commit --amend`, a force push, a rebase, or another Git command.
On your profile page, the author date is used to calculate the commit date. In a repository, by contrast, the commit date counts as the date the commit was created in the repository.
Usually the author date and the commit date are identical, but occasionally the commit sequence gets mixed up when the commit history is changed. For more information, see "[Why are my contributions not showing up on my profile?](/articles/why-are-my-contributions-not-showing-up-on-my-profile)"
## Viewing missing commit details in the timeline
You can use the `git show` command with the `--pretty=fuller` flag to check whether the author date and the commit date of a commit are identical.
```shell
$ git show <em>Your commit SHA number</em> --pretty=fuller
commit <em>Your commit SHA number</em>
Author: octocat <em>user email</em>
AuthorDate: Tue Apr 03 02:02:30 2018 +0900
Commit: Sally Johnson <em>user email</em>
CommitDate: Tue Apr 10 06:25:08 2018 +0900
```
If the author date and the commit date differ, you can manually change the commit date in the URL to view the commit details.
For example:
- The following URL uses the author date `2018-04-03`:
`https://github.com/your-organization-or-personal-account/your-repository/commits?author=octocat&since=2018-04-03T00:00:00Z&until=2018-04-03T23:59:59Z`
- The following URL uses the commit date `2018-04-10`:
`https://github.com/your-organization-or-personal-account/your-repository/commits?author=octocat&since=2018-04-10T00:00:00Z&until=2018-04-10T23:59:59Z`
When you visit the URL with the corrected commit date, the commit details are displayed.

## When expected commits are missing from your timeline
If you don't see all of the expected commits in your timeline, the commit history may have been rewritten in Git and the author date and commit date of the commits may differ. For other possible causes of this behavior, see "[Why are my contributions not showing up on my profile?](/articles/why-are-my-contributions-not-showing-up-on-my-profile)"
---
title: Saving an HTML5 form as a draft
seo-title: Saving an HTML5 form as a draft
description: Save an HTML5 form as a draft and resume filling the form at a later stage.
seo-description: Save an HTML5 form as a draft and resume filling the form at a later stage.
uuid: 70cd5f6f-f125-470c-8cee-ee14d2127713
content-type: reference
products: SG_EXPERIENCEMANAGER/6.5/FORMS
topic-tags: hTML5_forms
discoiquuid: 445e24af-cd1a-414d-bd01-9feb6631bbef
feature: Mobile Forms
exl-id: a9879445-d626-4279-8a95-a9009294b483
---
# Saving an HTML5 form as a draft {#saving-an-html-form-as-a-draft}
You can save an HTML5 form as a draft and resume filling the form at a later stage. Forms Portal allows any user to save and restore an HTML5 form. To enable the Save as Draft functionality, add the following configurations to the profile node:
## Custom Profile to allow Save as Draft feature {#custom-profile-to-allow-save-as-draft-feature}
Out of the box, AEM Forms provides a **Save as Draft** profile. You can render a form with the Save as Draft profile to enable draft functionality for an HTML5 form. You can specify the HTML render profile for a form in [Forms Manager](/help/forms/using/introduction-managing-forms.md).
To enable Save as Draft functionality for your existing [custom profile](/help/forms/using/custom-profile.md), add the following properties to your custom profile node:
<table>
<tbody>
<tr>
<td><strong>Property Name</strong></td>
<td><strong>Type</strong></td>
<td><strong>Value</strong></td>
<td><strong>Description</strong></td>
</tr>
<tr>
<td>mfAllowFPDraft</td>
<td>String</td>
<td>true</td>
<td><p>Enables save as draft feature</p> <p>for this profile.</p> </td>
</tr>
<tr>
<td>mfAllowAttachments</td>
<td>String</td>
<td>true</td>
<td><p>Allows uploading of attachments</p> <p>with this profile.</p> </td>
</tr>
</tbody>
</table>
## Drafts storage and listing {#drafts-storage-and-listing}
After you enable Save as Draft functionality for a form, the saved form is listed in the [Drafts and Submission component](/help/forms/using/draft-submission-component.md). You can retrieve the saved form from the Drafts and Submission component and resume filling it.
To enable forms listing for the Draft and Submission component, add the following property to the profile node:
<table>
<tbody>
<tr>
<td><strong>Property Name</strong></td>
<td><strong>Type</strong></td>
<td><strong>Value</strong></td>
<td><strong>Description</strong></td>
</tr>
<tr>
<td>fp.enablePortalSubmit</td>
<td>String</td>
<td>true</td>
<td>To enable drafts and forms to get listed in<br /> Forms Portal Drafts & Submissions component after submission</td>
</tr>
</tbody>
</table>
By default, AEM Forms stores the user data associated with the draft and submission of a form in the /content/forms/fp node on the Publish instance. You can add your custom storage provider, for details see [Custom storage for Drafts and Submissions component](/help/forms/using/adding-custom-storage-provider-forms.md).
camel-cbr: Demonstrates the Camel CBR Pattern
======================================================
Author: Fuse Team
Level: Beginner
Technologies: Camel, Blueprint, ActiveMQ
Summary: This quickstart demonstrates how to use Apache Camel to route messages using the Content Based Router (cbr) pattern.
Target Product: Fuse
Source: <https://github.com/jboss-fuse/quickstarts>
What is it?
-----------
This quick start shows how to use Apache Camel and its OSGi integration to dynamically route messages to new or updated OSGi bundles. This allows you to route to newly deployed services at runtime without impacting running services.
This quick start combines use of the Camel Recipient List, which allows you to at runtime specify the Camel Endpoint to route to, and use of the Camel VM Component, which provides a SEDA queue that can be accessed from different OSGi bundles running in the same Java virtual machine.
In studying this quick start you will learn:
* how to define a Camel route using the Blueprint XML syntax
* how to build and deploy an OSGi bundle in JBoss Fuse
* how to use the CBR enterprise integration pattern
For more information see:
* http://www.enterpriseintegrationpatterns.com/ContentBasedRouter.html for more information about the CBR EIP
* https://access.redhat.com/site/documentation/red-hat-jboss-fuse/ for more information about using JBoss Fuse
Note: Extra steps, like use of Camel VM Component, need to be taken when accessing Camel Routes in different Camel Contexts, and in different OSGi bundles, as you are dealing with classes in different ClassLoaders.
System requirements
-------------------
Before building and running this quick start you need:
* Maven 3.1.1 or higher
* JDK 1.7 or 1.8
* JBoss Fuse 6
Build and Deploy the Quickstart
-------------------------
1. Change your working directory to the `camel-cbr` directory.
* Run `mvn clean install` to build the quickstart.
* Start JBoss Fuse 6 by running bin/fuse (on Linux) or bin\fuse.bat (on Windows).
* In the JBoss Fuse console, enter the following command:
osgi:install -s mvn:org.jboss.quickstarts.fuse/beginner-camel-cbr/${project.version}
* Fuse should give you an id when the bundle is deployed
* You can check that everything is ok by issuing the command:
osgi:list
your bundle should be present at the end of the list
Use the bundle
---------------------
To use the application be sure to have deployed the quickstart in Fuse as described above.
1. As soon as the Camel route has been started, you will see a directory `work/cbr/input` in your JBoss Fuse installation.
2. Copy the files you find in this quick start's `src/main/fabric8/data` directory to the newly created `work/cbr/input`
directory.
3. Wait a few moments and you will find the same files organized by country under the `work/cbr/output` directory.
* `order1.xml` in `work/cbr/output/others`
* `order2.xml` and `order4.xml` in `work/cbr/output/uk`
* `order3.xml` and `order5.xml` in `work/cbr/output/us`
4. Use `log:display` to check out the business logging.
Receiving order order1.xml
Sending order order1.xml to another country
Done processing order1.xml
Undeploy the Archive
--------------------
To stop and undeploy the bundle in Fuse:
1. Enter `osgi:list` command to retrieve your bundle id
2. To stop and uninstall the bundle enter
osgi:uninstall <id>
### `boolean-style`
_The `--fix` option on the command line automatically fixes problems reported by this rule._
Enforces a particular style for boolean type annotations. This rule takes one argument.
If it is `'boolean'` then a problem is raised when using `bool` instead of `boolean`.
If it is `'bool'` then a problem is raised when using `boolean` instead of `bool`.
The default value is `'boolean'`.
<!-- assertions booleanStyle -->
# How to contribute to mathlib
Principally mathlib uses the fork-and-branch workflow. See
https://blog.scottlowe.org/2015/01/27/using-fork-branch-git-workflow/
for a good introduction.
Here are some tips and tricks
to make the process of contributing as smooth as possible.
1. Use Zulip: https://leanprover.zulipchat.com/
Discuss your contribution while you are working on it.
2. Adhere to the guidelines:
- The [style guide](/docs/style.md) for contributors.
- The explanation of [naming conventions](/docs/naming.md).
- The [git commit conventions](https://github.com/leanprover/lean/blob/master/doc/commit_convention.md).
3. Create a pull request from a feature branch on your personal fork,
as explained in the link above, or from a branch of the main repository if you have commit access (you can ask for access on Zulip).
## The nursery
Finally, https://github.com/leanprover-community/mathlib-nursery
makes it possible to have early access to work in progress.
See [its README](https://github.com/leanprover-community/mathlib-nursery/blob/master/README.md)
for more details.
## Caching compilation
In the `mathlib` git repository, run the following in a terminal:
```sh
$ scripts/setup-dev-scripts.sh
$ source ~/.profile
$ setup-lean-git-hooks
```
It will install scripts including `update-mathlib` and `cache-olean`
and set up git hooks that will call `cache-olean` when making a commit
and `cache-olean --fetch` and `update-mathlib` when checking out a
branch. `update-mathlib` will fetch a compiled version of `mathlib`
and `cache-olean` will store and fetch the compiled binaries of the
branches you work on.
---
title: Microsoft Azure StorSimple Virtual Array system requirements | Microsoft Docs
description: Learn about the software and networking requirements for the StorSimple Virtual Array
services: storsimple
documentationcenter: NA
author: alkohli
manager: jeconnoc
editor: ''
ms.assetid: ea1d3bca-e71b-453d-aa82-440d2638f5e3
ms.service: storsimple
ms.devlang: NA
ms.topic: article
ms.tgt_pltfrm: NA
ms.workload: NA
ms.date: 07/25/2019
ms.author: alkohli
ms.openlocfilehash: 65d2a21a9f40470cee1dd9d713f9f9cb5431a245
ms.sourcegitcommit: f5cc71cbb9969c681a991aa4a39f1120571a6c2e
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 07/26/2019
ms.locfileid: "68516684"
---
# <a name="storsimple-virtual-array-system-requirements"></a>StorSimple Virtual Array system requirements
[!INCLUDE [storsimple-virtual-array-eol-banner](../../includes/storsimple-virtual-array-eol-banner.md)]
## <a name="overview"></a>Overview
This article describes the important system requirements for your Microsoft Azure StorSimple Virtual Array and for the storage clients accessing the array. We recommend that you review this information carefully before you deploy your StorSimple system, and then refer back to it as needed during deployment and subsequent operation.
The system requirements include:
* **Software requirements for storage clients** - describes the supported virtualization platforms, web browsers, iSCSI initiators, SMB clients, the minimum requirements for the virtual device, and the additional requirements for specific operating systems.
* **Networking requirements for the StorSimple device** - provides information about the ports that need to be open in your firewall to allow iSCSI, cloud, or management traffic.
The StorSimple system requirements information published in this article applies to StorSimple Virtual Arrays only.
* For 8000 series devices, go to the [StorSimple 8000 series device system requirements](storsimple-system-requirements.md).
* For 7000 series devices, go to the [StorSimple 5000-7000 series device system requirements](http://onlinehelp.storsimple.com/1_StorSimple_System_Requirements).
## <a name="software-requirements"></a>Szoftverkövetelmények
A szoftverre vonatkozó követelmények tartalmazzák a támogatott webböngészőkkel, az SMB-verziókkal, a virtualizációs platformokkal és a virtuális eszközök minimális követelményeivel kapcsolatos információkat.
### <a name="supported-virtualization-platforms"></a>Támogatott virtualizációs platformok
| **Hypervisor** | **Verzió** |
| --- | --- |
| Hyper-V |Windows Server 2008 R2 SP1 és újabb verziók |
| VMware ESXi |5,0, 5,5, 6,0 és 6,5. |
> [!IMPORTANT]
> Ne telepítse a VMware-eszközöket a StorSimple virtuális tömbbe; Ez nem támogatott konfigurációt eredményez.
### <a name="virtual-device-requirements"></a>Virtuális eszközre vonatkozó követelmények
| **Összetevő** | **Követelmény** |
| --- | --- |
| Virtuális processzorok minimális száma (magok) |4 |
| Minimális memória (RAM) |8 GB <br> Fájlkiszolgálón, 8 GB-nál kevesebb, mint 2 000 000 fájl és 16 GB 2-4 millió fájl esetén|
| <sup>1</sup> . lemezterület |OPERÁCIÓSRENDSZER-lemez – 80 GB <br></br>Adatlemez – 500 GB – 8 TB |
| Hálózati adapterek minimális száma |1 |
| Internetes sávszélesség<sup>2</sup> |Minimális sávszélesség szükséges: 5 Mbps <br> Ajánlott sávszélesség: 100 Mbps <br> Az adatátviteli sebesség az Internet sávszélességének megfelelően méretezhető. Például 100 GB adat 2 nap alatt átvihető 5 Mbps-ra, ami biztonsági mentési hibákhoz vezethet, mert a napi biztonsági mentések nem fejeződnek be naponta. 100 Mbps sávszélességgel 100 GB adatforgalom 2,5 órán belül átvihető. |
<sup>1</sup> – dinamikus kiépítve
<sup>2</sup> – a hálózati követelmények a napi adatváltozási aránytól függően változhatnak. Ha például egy eszköznek egy nap alatt 10 GB-nyi vagy több módosítást kell biztonsági mentést végeznie, akkor a napi biztonsági mentés 5 Mbps-kapcsolaton keresztül akár 4,25 órát is igénybe vehet (ha az adat nem tömöríthető vagy deduplikált).
### <a name="supported-web-browsers"></a>Supported web browsers
| **Component** | **Version** | **Additional requirements/notes** |
| --- | --- | --- |
| Microsoft Edge |Latest version | |
| Internet Explorer |Latest version |Tested with Internet Explorer 11 |
| Google Chrome |Latest version |Tested with Chrome 46 |
### <a name="supported-storage-clients"></a>Supported storage clients
The following software requirements apply to iSCSI initiators that access the StorSimple Virtual Array (configured as an iSCSI server).
| **Supported operating systems** | **Required version** | **Additional requirements/notes** |
| --- | --- | --- |
| Windows Server |2008R2 SP1, 2012, 2012R2 |StorSimple can create thinly provisioned and fully provisioned volumes. It cannot create partially provisioned volumes. StorSimple iSCSI volumes are supported only with: <ul><li>Simple volumes on Windows basic disks.</li><li>Windows NTFS formatting for the volumes.</li> |
The following software requirements apply to SMB clients that access the StorSimple Virtual Array (configured as a file server).
| **SMB version** |
| --- |
| SMB 2.x |
| SMB 3.0 |
| SMB 3.02 |
> [!IMPORTANT]
> Do not copy or store files protected by Windows Encrypting File System (EFS) to the StorSimple Virtual Array file server; doing so results in an unsupported configuration.
### <a name="supported-storage-format"></a>Supported storage format
Only Azure block blob storage is supported. Page blobs are not supported. For more information, see [Understanding block blobs, append blobs, and page blobs](https://docs.microsoft.com/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs).
## <a name="networking-requirements"></a>Networking requirements
The following table lists the ports that need to be opened in your firewall to allow iSCSI, SMB, cloud, or management traffic. In this table, *In* or *inbound* refers to the direction from which incoming client requests access your device. *Out* or *outbound* refers to the direction in which your StorSimple device sends data externally, beyond the deployment: for example, outbound to the internet.
| **Port number<sup>1</sup>** | **In or out** | **Port scope** | **Required** | **Notes** |
| --- | --- | --- | --- | --- |
| TCP 80 (HTTP) |Out |WAN |No |The outbound port is used for internet access to retrieve updates. <br></br>The outbound web proxy is user configurable. |
| TCP 443 (HTTPS) |Out |WAN |Yes |The outbound port is used for accessing data in the cloud. <br></br>The outbound web proxy is user configurable. |
| UDP 53 (DNS) |Out |WAN |In some cases; see notes. |This port is required only if you are using an internet-based DNS server. <br></br> Note that if you are deploying a file server, we recommend using a local DNS server. |
| UDP 123 (NTP) |Out |WAN |In some cases; see notes. |This port is required only if you are using an internet-based NTP server.<br></br> Note that if you are deploying a file server, we recommend synchronizing time with your Active Directory domain controllers. |
| TCP 80 (HTTP) |In |LAN |Yes |This is the inbound port for the local UI on the StorSimple device, used for local management. <br></br> Note that access to the local UI over HTTP is automatically redirected to HTTPS. |
| TCP 443 (HTTPS) |In |LAN |Yes |This is the inbound port for the local UI on the StorSimple device, used for local management. |
| TCP 3260 (iSCSI) |In |LAN |No |This port is used to access data over iSCSI. |
<sup>1</sup> No inbound ports need to be opened on the public internet.
> [!IMPORTANT]
> Ensure that your firewall does not modify or decrypt any SSL traffic between your StorSimple device and Azure.
>
>
### <a name="url-patterns-for-firewall-rules"></a>URL patterns for firewall rules
Network administrators can often configure advanced firewall rules based on URL patterns to filter inbound and outbound traffic. Your Virtual Array and the StorSimple Device Manager service depend on other Microsoft applications such as Azure Service Bus, Azure Active Directory Access Control, storage accounts, and Microsoft Update servers. The URL patterns associated with these applications can be used to configure firewall rules. It is important to understand that the URL patterns associated with these applications can change, which requires the network administrator to monitor and update the firewall rules for StorSimple as and when needed.
We recommend that in most cases you set your firewall rules for outbound traffic based on StorSimple fixed IP addresses. However, you can use the information below to set the advanced firewall rules that are needed to create secure environments.
> [!NOTE]
>
> * The device (source) IP addresses should always be set to all the cloud-enabled network interfaces.
> * The destination IP addresses should be set to the [Azure datacenter IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=41653).
>
>
| URL pattern | Component/Functionality |
| --- | --- |
| `https://*.storsimple.windowsazure.com/*`<br>`https://*.accesscontrol.windows.net/*`<br>`https://*.servicebus.windows.net/*` <br>`https://login.windows.net`|StorSimple Device Manager service<br>Access Control service<br>Azure Service Bus<br>Authentication service|
| `http://*.backup.windowsazure.com` |Device registration |
| `https://crl.microsoft.com/pki/*`<br>`https://www.microsoft.com/pki/*` |Certificate revocation |
| `https://*.core.windows.net/*`<br>`https://*.data.microsoft.com`<br>`http://*.msftncsi.com` |Azure storage accounts and monitoring |
| `https://*.windowsupdate.microsoft.com`<br>`https://*.windowsupdate.microsoft.com`<br>`https://*.update.microsoft.com`<br> `https://*.update.microsoft.com`<br>`http://*.windowsupdate.com`<br>`https://download.microsoft.com`<br>`http://wustat.windows.com`<br>`https://ntservicepack.microsoft.com` |Microsoft Update servers<br> |
| `http://*.deploy.akamaitechnologies.com` |Akamai CDN |
| `https://*.partners.extranet.microsoft.com/*` |Support package |
| `https://*.data.microsoft.com` |Telemetry service in Windows: see [Update for customer experience and diagnostic telemetry](https://support.microsoft.com/en-us/kb/3068708) |
## <a name="next-steps"></a>Next steps
* [Prepare the portal for the StorSimple Virtual Array deployment](storsimple-virtual-array-deploy1-portal-prep.md)
| 77.539007 | 695 | 0.78039 | hun_Latn | 0.999998 |
763a02336d643cc99c638e2dee74d97aef06beff | 82,122 | md | Markdown | windows/security/threat-protection/auditing/event-4741.md | andronwas/windows-itpro-docs | 70a0c62a43d0e3a531282482ae084f7d86714a28 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-07-12T17:10:22.000Z | 2019-07-12T17:11:31.000Z | windows/security/threat-protection/auditing/event-4741.md | ooo-criminal/windows-itpro-docs | 70a0c62a43d0e3a531282482ae084f7d86714a28 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows/security/threat-protection/auditing/event-4741.md | ooo-criminal/windows-itpro-docs | 70a0c62a43d0e3a531282482ae084f7d86714a28 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-07-12T17:11:47.000Z | 2019-07-12T17:11:47.000Z | ---
title: 4741(S) A computer account was created. (Windows 10)
description: Describes security event 4741(S) A computer account was created.
ms.pagetype: security
ms.prod: w10
ms.mktglfcycl: deploy
ms.sitesec: library
ms.localizationpriority: none
author: dansimp
ms.date: 04/19/2017
ms.reviewer:
manager: dansimp
ms.author: dansimp
---
# 4741(S): A computer account was created.
**Applies to**
- Windows 10
- Windows Server 2016
<img src="images/event-4741.png" alt="Event 4741 illustration" width="449" height="837" hspace="10" align="left" />
***Subcategory:*** [Audit Computer Account Management](audit-computer-account-management.md)
***Event Description:***
This event generates every time a new computer object is created.
This event generates only on domain controllers.
> **Note** For recommendations, see [Security Monitoring Recommendations](#security-monitoring-recommendations) for this event.
<br clear="all">
***Event XML:***
```
- <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
<Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
<EventID>4741</EventID>
<Version>0</Version>
<Level>0</Level>
<Task>13825</Task>
<Opcode>0</Opcode>
<Keywords>0x8020000000000000</Keywords>
<TimeCreated SystemTime="2015-08-12T18:41:39.201898100Z" />
<EventRecordID>170254</EventRecordID>
<Correlation />
<Execution ProcessID="520" ThreadID="1096" />
<Channel>Security</Channel>
<Computer>DC01.contoso.local</Computer>
<Security />
</System>
- <EventData>
<Data Name="TargetUserName">WIN81$</Data>
<Data Name="TargetDomainName">CONTOSO</Data>
<Data Name="TargetSid">S-1-5-21-3457937927-2839227994-823803824-6116</Data>
<Data Name="SubjectUserSid">S-1-5-21-3457937927-2839227994-823803824-1104</Data>
<Data Name="SubjectUserName">dadmin</Data>
<Data Name="SubjectDomainName">CONTOSO</Data>
<Data Name="SubjectLogonId">0xc88b2</Data>
<Data Name="PrivilegeList">-</Data>
<Data Name="SamAccountName">WIN81$</Data>
<Data Name="DisplayName">-</Data>
<Data Name="UserPrincipalName">-</Data>
<Data Name="HomeDirectory">-</Data>
<Data Name="HomePath">-</Data>
<Data Name="ScriptPath">-</Data>
<Data Name="ProfilePath">-</Data>
<Data Name="UserWorkstations">-</Data>
<Data Name="PasswordLastSet">8/12/2015 11:41:39 AM</Data>
<Data Name="AccountExpires">%%1794</Data>
<Data Name="PrimaryGroupId">515</Data>
<Data Name="AllowedToDelegateTo">-</Data>
<Data Name="OldUacValue">0x0</Data>
<Data Name="NewUacValue">0x80</Data>
<Data Name="UserAccountControl">%%2087</Data>
<Data Name="UserParameters">-</Data>
<Data Name="SidHistory">-</Data>
<Data Name="LogonHours">%%1793</Data>
<Data Name="DnsHostName">Win81.contoso.local</Data>
<Data Name="ServicePrincipalNames">HOST/Win81.contoso.local RestrictedKrbHost/Win81.contoso.local HOST/WIN81 RestrictedKrbHost/WIN81</Data>
</EventData>
</Event>
```
***Required Server Roles:*** Active Directory domain controller.
***Minimum OS Version:*** Windows Server 2008.
***Event Versions:*** 0.
***Field Descriptions:***
**Subject:**
- **Security ID** \[Type = SID\]**:** SID of account that requested the “create Computer object” operation. Event Viewer automatically tries to resolve SIDs and show the account name. If the SID cannot be resolved, you will see the source data in the event.
> **Note** A **security identifier (SID)** is a unique value of variable length used to identify a trustee (security principal). Each account has a unique SID that is issued by an authority, such as an Active Directory domain controller, and stored in a security database. Each time a user logs on, the system retrieves the SID for that user from the database and places it in the access token for that user. The system uses the SID in the access token to identify the user in all subsequent interactions with Windows security. When a SID has been used as the unique identifier for a user or group, it cannot ever be used again to identify another user or group. For more information about SIDs, see [Security identifiers](/windows/access-protection/access-control/security-identifiers).
- **Account Name** \[Type = UnicodeString\]**:** the name of the account that requested the “create Computer object” operation.
- **Account Domain** \[Type = UnicodeString\]**:** subject’s domain name. Formats vary, and include the following:
- Domain NETBIOS name example: CONTOSO
- Lowercase full domain name: contoso.local
- Uppercase full domain name: CONTOSO.LOCAL
- For some [well-known security principals](https://support.microsoft.com/kb/243330), such as LOCAL SERVICE or ANONYMOUS LOGON, the value of this field is “NT AUTHORITY”.
- **Logon ID** \[Type = HexInt64\]**:** hexadecimal value that can help you correlate this event with recent events that might contain the same Logon ID, for example, “[4624](event-4624.md): An account was successfully logged on.”
**New Computer Account:**
- **Security ID** \[Type = SID\]**:** SID of created computer account. Event Viewer automatically tries to resolve SIDs and show the account name. If the SID cannot be resolved, you will see the source data in the event.
- **Account Name** \[Type = UnicodeString\]**:** the name of the computer account that was created. For example: WIN81$
- **Account Domain** \[Type = UnicodeString\]**:** domain name of created computer account. Formats vary, and include the following:
- Domain NETBIOS name example: CONTOSO
- Lowercase full domain name: contoso.local
- Uppercase full domain name: CONTOSO.LOCAL
**Attributes:**
- **SAM Account Name** \[Type = UnicodeString\]: logon name for account used to support clients and servers from previous versions of Windows (pre-Windows 2000 logon name). The value of **sAMAccountName** attribute of new computer object. For example: WIN81$.
- **Display Name** \[Type = UnicodeString\]: the value of **displayName** attribute of new computer object. It is a name displayed in the address book for a particular account (typically – user account). This is usually the combination of the user's first name, middle initial, and last name. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **User Principal Name** \[Type = UnicodeString\]: internet-style login name for the account, based on the Internet standard RFC 822. By convention this should map to the account's email name. This parameter contains the value of **userPrincipalName** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **Home Directory** \[Type = UnicodeString\]: user's home directory. If **homeDrive** attribute is set and specifies a drive letter, **homeDirectory** should be a UNC path. The path must be a network UNC of the form \\\\Server\\Share\\Directory. This parameter contains the value of **homeDirectory** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **Home Drive** \[Type = UnicodeString\]**:** specifies the drive letter to which to map the UNC path specified by **homeDirectory** account’s attribute. The drive letter must be specified in the form “DRIVE\_LETTER:”. For example – “H:”. This parameter contains the value of **homeDrive** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **Script Path** \[Type = UnicodeString\]**:** specifies the path of the account's logon script. This parameter contains the value of **scriptPath** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **Profile Path** \[Type = UnicodeString\]: specifies a path to the account's profile. This value can be a null string, a local absolute path, or a UNC path. This parameter contains the value of **profilePath** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **User Workstations** \[Type = UnicodeString\]: contains the list of NetBIOS or DNS names of the computers from which the user can log on. Each computer name is separated by a comma. The name of a computer is the **sAMAccountName** property of a computer object. This parameter contains the value of **userWorkstations** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **Password Last Set** \[Type = UnicodeString\]**:** last time the account’s password was modified. For a manually created computer account, using the Active Directory Users and Computers snap-in, this field typically has the value “**<never>**”. For a computer account created during the standard domain join procedure, this field contains the time when the computer object was created, because a password is created during the domain join procedure. For example: 8/12/2015 11:41:39 AM. This parameter contains the value of **pwdLastSet** attribute of new computer object.
- **Account Expires** \[Type = UnicodeString\]: the date when the account expires. This parameter contains the value of **accountExpires** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. This parameter might not be captured in the event, and in that case appears as “-”.
- **Primary Group ID** \[Type = UnicodeString\]: Relative Identifier (RID) of computer’s object primary group.
> **Note** **Relative identifier (RID)** is a variable length number that is assigned to objects at creation and becomes part of the object's Security Identifier (SID) that uniquely identifies an account or group within a domain.
Typically, **Primary Group** field for new computer accounts has the following values:
- 516 (Domain Controllers) – for domain controllers.
- 521 (Read-only Domain Controllers) – for read-only domain controllers (RODC).
- 515 (Domain Computers) – for member servers and workstations.
See this article <https://support.microsoft.com/kb/243330> for more information. This parameter contains the value of **primaryGroupID** attribute of new computer object.
<!-- -->
- **AllowedToDelegateTo** \[Type = UnicodeString\]: the list of SPNs to which this account can present delegated credentials. Can be changed using Active Directory Users and Computers management console in **Delegation** tab of computer account. Typically it is set to “**-“** for new computer objects. This parameter contains the value of **AllowedToDelegateTo** attribute of new computer object. See description of **AllowedToDelegateTo** field for “[4742](event-4742.md): A computer account was changed” event for more details.
> **Note** **Service Principal Name (SPN)** is the name by which a client uniquely identifies an instance of a service. If you install multiple instances of a service on computers throughout a forest, each instance must have its own SPN. A given service instance can have multiple SPNs if there are multiple names that clients might use for authentication. For example, an SPN always includes the name of the host computer on which the service instance is running, so a service instance might register an SPN for each name or alias of its host.
- **Old UAC Value** \[Type = UnicodeString\]: specifies flags that control password, lockout, disable/enable, script, and other behavior for the user or computer account. **Old UAC value** always **“0x0”** for new computer accounts. This parameter contains the previous value of **userAccountControl** attribute of computer object.
- **New UAC Value** \[Type = UnicodeString\]: specifies flags that control password, lockout, disable/enable, script, and other behavior for the user or computer account. This parameter contains the value of **userAccountControl** attribute of new computer object.
To decode this value, you can go through the property value definitions in the “Table 7. User’s or Computer’s account UAC flags.” from largest to smallest. Compare each property value to the flags value in the event. If the flags value in the event is greater than or equal to the property value, then the property is "set" and applies to that event. Subtract the property value from the flags value in the event and note that the flag applies and then go on to the next flag.
Here's an example: Flags value from event: 0x15
Decoding:
• PASSWD\_NOTREQD 0x0020
• LOCKOUT 0x0010
• HOMEDIR\_REQUIRED 0x0008
• (undeclared) 0x0004
• ACCOUNTDISABLE 0x0002
• SCRIPT 0x0001
0x0020 > 0x15, so PASSWD\_NOTREQD does not apply to this event
0x10 < 0x15, so LOCKOUT applies to this event. 0x15 - 0x10 = 0x5
0x8 > 0x5, so HOMEDIR\_REQUIRED does not apply to this event
0x4 < 0x5, so the undeclared value is set. We'll pretend it doesn't mean anything. 0x5 - 0x4 = 0x1
0x2 > 0x1, so ACCOUNTDISABLE does not apply to this event
0x1 = 0x1, so SCRIPT applies to this event. 0x1 - 0x1 = 0x0, we're done.
So this UAC flags value decodes to: LOCKOUT and SCRIPT
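The same walkthrough can be scripted with bitwise tests. Below is a minimal shell sketch: the flag table is abbreviated to the values used in this example, and the `uac` value stands in for the **New UAC Value** field read from an actual event.

```sh
# Decode a userAccountControl value into flag names (abbreviated flag table).
uac=0x15
for entry in 0x0020:PASSWD_NOTREQD 0x0010:LOCKOUT 0x0008:HOMEDIR_REQUIRED \
             0x0002:ACCOUNTDISABLE 0x0001:SCRIPT; do
  flag=${entry%%:*}; name=${entry#*:}
  # A flag applies when its bit is set in the value.
  if (( uac & flag )); then
    echo "$name applies"
  fi
done
# Prints: "LOCKOUT applies" followed by "SCRIPT applies".
```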
- **User Account Control** \[Type = UnicodeString\]**:** shows the list of changes in **userAccountControl** attribute. You will see a line of text for each change. For new computer accounts, when the object for this account was created, the **userAccountControl** value was considered to be **“0x0”**, and then it was changed from **“0x0”** to the real value for the account's **userAccountControl** attribute. See possible values in the table below. In the “User Account Control field text” column, you can see the text that will be displayed in the **User Account Control** field in 4741 event.
| <span id="User_or_Computer_account_UAC_flags" class="anchor"></span>Flag Name | userAccountControl in hexadecimal | userAccountControl in decimal | Description | User Account Control field text |
|-------------------------------------------------------------------------------|-----------------------------------|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------|
| SCRIPT | 0x0001 | 1 | The logon script will be run. | Changes of this flag do not show in 4741 events. |
| ACCOUNTDISABLE | 0x0002 | 2 | The user account is disabled. | Account Disabled<br>Account Enabled |
| Undeclared | 0x0004 | 4 | This flag is undeclared. | Changes of this flag do not show in 4741 events. |
| HOMEDIR\_REQUIRED | 0x0008 | 8 | The home folder is required. | 'Home Directory Required' - Enabled<br>'Home Directory Required' - Disabled |
| LOCKOUT | 0x0010 | 16 | | Changes of this flag do not show in 4741 events. |
| PASSWD\_NOTREQD | 0x0020 | 32 | No password is required. | 'Password Not Required' - Enabled<br>'Password Not Required' - Disabled |
| PASSWD\_CANT\_CHANGE | 0x0040 | 64 | The user cannot change the password. This is a permission on the user's object. | Changes of this flag do not show in 4741 events. |
| ENCRYPTED\_TEXT\_PWD\_ALLOWED | 0x0080 | 128 | The user can send an encrypted password.<br>Can be set using “Store password using reversible encryption” checkbox. | 'Encrypted Text Password Allowed' - Disabled<br>'Encrypted Text Password Allowed' - Enabled |
| TEMP\_DUPLICATE\_ACCOUNT | 0x0100 | 256 | This is an account for users whose primary account is in another domain. This account provides user access to this domain, but not to any domain that trusts this domain. This is sometimes referred to as a local user account. | Cannot be set for computer account. |
| NORMAL\_ACCOUNT | 0x0200 | 512 | This is a default account type that represents a typical user. | 'Normal Account' - Disabled<br>'Normal Account' - Enabled |
| INTERDOMAIN\_TRUST\_ACCOUNT | 0x0800 | 2048 | This is a permit to trust an account for a system domain that trusts other domains. | Cannot be set for computer account. |
| WORKSTATION\_TRUST\_ACCOUNT | 0x1000 | 4096 | This is a computer account for a computer that is running Microsoft Windows NT 4.0 Workstation, Microsoft Windows NT 4.0 Server, Microsoft Windows 2000 Professional, or Windows 2000 Server and is a member of this domain. | 'Workstation Trust Account' - Disabled<br>'Workstation Trust Account' - Enabled |
| SERVER\_TRUST\_ACCOUNT | 0x2000 | 8192 | This is a computer account for a domain controller that is a member of this domain. | 'Server Trust Account' - Enabled<br>'Server Trust Account' - Disabled |
| DONT\_EXPIRE\_PASSWORD | 0x10000 | 65536 | Represents the password, which should never expire on the account.<br>Can be set using “Password never expires” checkbox. | 'Don't Expire Password' - Disabled<br>'Don't Expire Password' - Enabled |
| MNS\_LOGON\_ACCOUNT | 0x20000 | 131072 | This is an MNS logon account. | 'MNS Logon Account' - Disabled<br>'MNS Logon Account' - Enabled |
| SMARTCARD\_REQUIRED | 0x40000 | 262144 | When this flag is set, it forces the user to log on by using a smart card. | 'Smartcard Required' - Disabled<br>'Smartcard Required' - Enabled |
| TRUSTED\_FOR\_DELEGATION | 0x80000 | 524288 | When this flag is set, the service account (the user or computer account) under which a service runs is trusted for Kerberos delegation. Any such service can impersonate a client requesting the service. To enable a service for Kerberos delegation, you must set this flag on the userAccountControl property of the service account.<br>If you enable Kerberos constraint or unconstraint delegation or disable these types of delegation in Delegation tab you will get this flag changed. | 'Trusted For Delegation' - Enabled<br>'Trusted For Delegation' - Disabled |
| NOT\_DELEGATED | 0x100000 | 1048576 | When this flag is set, the security context of the user is not delegated to a service even if the service account is set as trusted for Kerberos delegation.<br>Can be set using “Account is sensitive and cannot be delegated” checkbox. | 'Not Delegated' - Disabled<br>'Not Delegated' - Enabled |
| USE\_DES\_KEY\_ONLY | 0x200000 | 2097152 | Restrict this principal to use only Data Encryption Standard (DES) encryption types for keys.<br>Can be set using “Use Kerberos DES encryption types for this account” checkbox. | 'Use DES Key Only' - Disabled<br>'Use DES Key Only' - Enabled |
| DONT\_REQ\_PREAUTH | 0x400000 | 4194304 | This account does not require Kerberos pre-authentication for logging on.<br>Can be set using “Do not require Kerberos preauthentication” checkbox. | 'Don't Require Preauth' - Disabled<br>'Don't Require Preauth' - Enabled |
| PASSWORD\_EXPIRED | 0x800000 | 8388608 | The user's password has expired. | Changes of this flag do not show in 4741 events. |
| TRUSTED\_TO\_AUTH\_FOR\_DELEGATION | 0x1000000 | 16777216 | The account is enabled for delegation. This is a security-sensitive setting. Accounts that have this option enabled should be tightly controlled. This setting lets a service that runs under the account assume a client's identity and authenticate as that user to other remote servers on the network.<br>If you enable Kerberos protocol transition delegation or disable this type of delegation in Delegation tab you will get this flag changed. | 'Trusted To Authenticate For Delegation' - Disabled<br>'Trusted To Authenticate For Delegation' - Enabled |
| PARTIAL\_SECRETS\_ACCOUNT | 0x04000000 | 67108864 | The account is a read-only domain controller (RODC). This is a security-sensitive setting. Removing this setting from an RODC compromises security on that server. | No information. |
> <span id="_Ref433117054" class="anchor"></span>Table 7. User’s or Computer’s account UAC flags.
- **User Parameters** \[Type = UnicodeString\]: if you change any setting using Active Directory Users and Computers management console in Dial-in tab of computer’s account properties, then you will see **<value changed, but not displayed>** in this field in “[4742](event-4742.md)(S): A computer account was changed.” This parameter might not be captured in the event, and in that case appears as “-”.
- **SID History** \[Type = UnicodeString\]: contains previous SIDs used for the object if the object was moved from another domain. Whenever an object is moved from one domain to another, a new SID is created and becomes the objectSID. The previous SID is added to the **sIDHistory** property. This parameter contains the value of **sIDHistory** attribute of new computer object. This parameter might not be captured in the event, and in that case appears as “-”.
- **Logon Hours** \[Type = UnicodeString\]: hours that the account is allowed to logon to the domain. The value of **logonHours** attribute of new computer object. For computer objects, it is optional, and typically is not set. You can change this attribute by using Active Directory Users and Computers, or through a script, for example. You will see **<value not set>** value for new created computer accounts in event 4741.
- **DNS Host Name** \[Type = UnicodeString\]: name of computer account as registered in DNS. The value of **dNSHostName** attribute of new computer object. For manually created computer account objects this field has value “**-**“.
- **Service Principal Names** \[Type = UnicodeString\]**:** The list of SPNs, registered for computer account. For new computer accounts it will typically contain HOST SPNs and RestrictedKrbHost SPNs. The value of **servicePrincipalName** attribute of new computer object. For manually created computer objects it typically equals “**-**”. This is an example of the **Service Principal Names** field for a new domain-joined workstation:
HOST/Win81.contoso.local
RestrictedKrbHost/Win81.contoso.local
HOST/WIN81
RestrictedKrbHost/WIN81
**Additional Information:**
- **Privileges** \[Type = UnicodeString\]: the list of user privileges which were used during the operation, for example, SeBackupPrivilege. This parameter might not be captured in the event, and in that case appears as “-”. See full list of user privileges in the table below:
| Privilege Name | User Right Group Policy Name | Description |
|---------------------------------|----------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| SeAssignPrimaryTokenPrivilege | Replace a process-level token | Required to assign the [*primary token*](https://msdn.microsoft.com/library/windows/desktop/ms721603(v=vs.85).aspx#_security_primary_token_gly) of a process. <br>With this privilege, the user can initiate a process to replace the default token associated with a started subprocess. |
| SeAuditPrivilege | Generate security audits | With this privilege, the user can add entries to the security log. |
| SeBackupPrivilege | Back up files and directories | - Required to perform backup operations. <br>With this privilege, the user can bypass file and directory, registry, and other persistent object permissions for the purposes of backing up the system.<br>This privilege causes the system to grant all read access control to any file, regardless of the [*access control list*](https://msdn.microsoft.com/library/windows/desktop/ms721532(v=vs.85).aspx#_security_access_control_list_gly) (ACL) specified for the file. Any access request other than read is still evaluated with the ACL. The following access rights are granted if this privilege is held:<br>READ\_CONTROL<br>ACCESS\_SYSTEM\_SECURITY<br>FILE\_GENERIC\_READ<br>FILE\_TRAVERSE |
| SeChangeNotifyPrivilege | Bypass traverse checking | Required to receive notifications of changes to files or directories. This privilege also causes the system to skip all traversal access checks. <br>With this privilege, the user can traverse directory trees even though the user may not have permissions on the traversed directory. This privilege does not allow the user to list the contents of a directory, only to traverse directories. |
| SeCreateGlobalPrivilege | Create global objects | Required to create named file mapping objects in the global namespace during Terminal Services sessions. |
| SeCreatePagefilePrivilege | Create a pagefile | With this privilege, the user can create and change the size of a pagefile. |
| SeCreatePermanentPrivilege | Create permanent shared objects | Required to create a permanent object. <br>This privilege is useful to kernel-mode components that extend the object namespace. Components that are running in kernel mode already have this privilege inherently; it is not necessary to assign them the privilege. |
| SeCreateSymbolicLinkPrivilege | Create symbolic links | Required to create a symbolic link. |
| SeCreateTokenPrivilege | Create a token object | Allows a process to create a token which it can then use to get access to any local resources when the process uses NtCreateToken() or other token-creation APIs.<br>When a process requires this privilege, we recommend using the LocalSystem account (which already includes the privilege), rather than creating a separate user account and assigning this privilege to it. |
| SeDebugPrivilege | Debug programs | Required to debug and adjust the memory of a process owned by another account.<br>With this privilege, the user can attach a debugger to any process or to the kernel. Developers who are debugging their own applications do not need this user right. Developers who are debugging new system components need this user right. This user right provides complete access to sensitive and critical operating system components. |
| SeEnableDelegationPrivilege | Enable computer and user accounts to be trusted for delegation | Required to mark user and computer accounts as trusted for delegation.<br>With this privilege, the user can set the **Trusted for Delegation** setting on a user or computer object.<br>The user or object that is granted this privilege must have write access to the account control flags on the user or computer object. A server process running on a computer (or under a user context) that is trusted for delegation can access resources on another computer using the delegated credentials of a client, as long as the account of the client does not have the **Account cannot be delegated** account control flag set. |
| SeImpersonatePrivilege | Impersonate a client after authentication | With this privilege, the user can impersonate other accounts. |
| SeIncreaseBasePriorityPrivilege | Increase scheduling priority | Required to increase the base priority of a process.<br>With this privilege, the user can use a process with Write property access to another process to increase the execution priority assigned to the other process. A user with this privilege can change the scheduling priority of a process through the Task Manager user interface. |
| SeIncreaseQuotaPrivilege | Adjust memory quotas for a process | Required to increase the quota assigned to a process. <br>With this privilege, the user can change the maximum memory that can be consumed by a process. |
| SeIncreaseWorkingSetPrivilege | Increase a process working set | Required to allocate more memory for applications that run in the context of users. |
| SeLoadDriverPrivilege | Load and unload device drivers | Required to load or unload a device driver.<br>With this privilege, the user can dynamically load and unload device drivers or other code in to kernel mode. This user right does not apply to Plug and Play device drivers. |
| SeLockMemoryPrivilege | Lock pages in memory | Required to lock physical pages in memory. <br>With this privilege, the user can use a process to keep data in physical memory, which prevents the system from paging the data to virtual memory on disk. Exercising this privilege could significantly affect system performance by decreasing the amount of available random access memory (RAM). |
| SeMachineAccountPrivilege | Add workstations to domain | With this privilege, the user can create a computer account.<br>This privilege is valid only on domain controllers. |
| SeManageVolumePrivilege | Perform volume maintenance tasks | Required to run maintenance tasks on a volume, such as remote defragmentation. |
| SeProfileSingleProcessPrivilege | Profile single process | Required to gather profiling information for a single process. <br>With this privilege, the user can use performance monitoring tools to monitor the performance of non-system processes. |
| SeRelabelPrivilege | Modify an object label | Required to modify the mandatory integrity level of an object. |
| SeRemoteShutdownPrivilege | Force shutdown from a remote system | Required to shut down a system using a network request. |
| SeRestorePrivilege | Restore files and directories | Required to perform restore operations. This privilege causes the system to grant all write access control to any file, regardless of the ACL specified for the file. Any access request other than write is still evaluated with the ACL. Additionally, this privilege enables you to set any valid user or group SID as the owner of a file. The following access rights are granted if this privilege is held:<br>WRITE\_DAC<br>WRITE\_OWNER<br>ACCESS\_SYSTEM\_SECURITY<br>FILE\_GENERIC\_WRITE<br>FILE\_ADD\_FILE<br>FILE\_ADD\_SUBDIRECTORY<br>DELETE<br>With this privilege, the user can bypass file, directory, registry, and other persistent objects permissions when restoring backed up files and directories and determines which users can set any valid security principal as the owner of an object. |
| SeSecurityPrivilege | Manage auditing and security log | Required to perform a number of security-related functions, such as controlling and viewing audit events in security event log.<br>With this privilege, the user can specify object access auditing options for individual resources, such as files, Active Directory objects, and registry keys.<br>A user with this privilege can also view and clear the security log. |
| SeShutdownPrivilege | Shut down the system | Required to shut down a local system. |
| SeSyncAgentPrivilege | Synchronize directory service data | This privilege enables the holder to read all objects and properties in the directory, regardless of the protection on the objects and properties. By default, it is assigned to the Administrator and LocalSystem accounts on domain controllers. <br>With this privilege, the user can synchronize all directory service data. This is also known as Active Directory synchronization. |
| SeSystemEnvironmentPrivilege | Modify firmware environment values | Required to modify the nonvolatile RAM of systems that use this type of memory to store configuration information. |
| SeSystemProfilePrivilege | Profile system performance | Required to gather profiling information for the entire system. <br>With this privilege, the user can use performance monitoring tools to monitor the performance of system processes. |
| SeSystemtimePrivilege | Change the system time | Required to modify the system time.<br>With this privilege, the user can change the time and date on the internal clock of the computer. Users that are assigned this user right can affect the appearance of event logs. If the system time is changed, events that are logged will reflect this new time, not the actual time that the events occurred. |
| SeTakeOwnershipPrivilege | Take ownership of files or other objects | Required to take ownership of an object without being granted discretionary access. This privilege allows the owner value to be set only to those values that the holder may legitimately assign as the owner of an object.<br>With this privilege, the user can take ownership of any securable object in the system, including Active Directory objects, files and folders, printers, registry keys, processes, and threads. |
| SeTcbPrivilege | Act as part of the operating system | This privilege identifies its holder as part of the trusted computer base.<br>This user right allows a process to impersonate any user without authentication. The process can therefore gain access to the same local resources as that user. |
| SeTimeZonePrivilege | Change the time zone | Required to adjust the time zone associated with the computer's internal clock. |
| SeTrustedCredManAccessPrivilege | Access Credential Manager as a trusted caller | Required to access Credential Manager as a trusted caller. |
| SeUndockPrivilege | Remove computer from docking station | Required to undock a laptop.<br>With this privilege, the user can undock a portable computer from its docking station without logging on. |
| SeUnsolicitedInputPrivilege | Not applicable | Required to read unsolicited input from a [*terminal*](https://msdn.microsoft.com/library/windows/desktop/ms721627(v=vs.85).aspx#_security_terminal_gly) device. |
> <span id="_Ref433296229" class="anchor"></span>Table 8. User Privileges.
## Security Monitoring Recommendations
For 4741(S): A computer account was created.
> **Important** For this event, also see [Appendix A: Security monitoring recommendations for many audit events](appendix-a-security-monitoring-recommendations-for-many-audit-events.md).
- If your information security monitoring policy requires you to monitor computer account creation, monitor this event.
- Consider whether to track the following fields and values:
| **Field and value to track** | **Reason to track** |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **SAM Account Name**: empty or - | This field must contain the computer account name. If it is empty or **-**, it might indicate an anomaly. |
| **Display Name** is not -<br>**User Principal Name** is not -<br>**Home Directory** is not -<br>**Home Drive** is not -<br>**Script Path** is not -<br>**Profile Path** is not -<br>**User Workstations** is not -<br>**AllowedToDelegateTo** is not - | Typically these fields are **-** for new computer accounts. Other values might indicate an anomaly and should be monitored. |
| **Password Last Set** is **<never>** | This typically means this is a manually created computer account, which you might need to monitor. |
| **Account Expires** is not **<never>** | Typically this field is **<never>** for new computer accounts. Other values might indicate an anomaly and should be monitored. |
| **Primary Group ID** is any value other than 515. | Typically, the **Primary Group ID** value is one of the following:<br>**516** for domain controllers<br>**521** for read only domain controllers (RODCs)<br>**515** for servers and workstations (domain computers)<br>If the **Primary Group ID** is 516 or 521, it is a new domain controller or RODC, and the event should be monitored.<br>If the value is not 516, 521, or 515, it is not a typical value and should be monitored. |
| **Old UAC Value** is not 0x0 | Typically this field is **0x0** for new computer accounts. Other values might indicate an anomaly and should be monitored. |
| **SID History** is not - | This field will always be set to - unless the account was migrated from another domain. |
| **Logon Hours** value other than **<value not set>** | This should always be **<value not set>** for new computer accounts. |
- Consider whether to track the following account control flags:
| **User account control flag to track** | **Information about the flag** |
|--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **'Encrypted Text Password Allowed'** – Enabled | Should not be set for computer accounts. By default, it will not be set, and it cannot be set in the account properties in Active Directory Users and Computers. |
| **'Server Trust Account'** – Enabled | Should be enabled **only** for domain controllers. |
| **'Don't Expire Password'** – Enabled | Should not be enabled for new computer accounts, because the password automatically changes every 30 days by default. For computer accounts, this flag cannot be set in the account properties in Active Directory Users and Computers. |
| **'Smartcard Required'** – Enabled | Should not be enabled for new computer accounts. |
| **'Trusted For Delegation'** – Enabled | Should not be enabled for new member servers and workstations. It is enabled by default for new domain controllers. |
| **'Not Delegated'** – Enabled | Should not be enabled for new computer accounts. |
| **'Use DES Key Only'** – Enabled | Should not be enabled for new computer accounts. For computer accounts, it cannot be set in the account properties in Active Directory Users and Computers. |
| **'Don't Require Preauth'** – Enabled | Should not be enabled for new computer accounts. For computer accounts, it cannot be set in the account properties in Active Directory Users and Computers. |
| **'Trusted To Authenticate For Delegation'** – Enabled | Should not be enabled for new computer accounts by default. |
| 245.140299 | 940 | 0.351112 | eng_Latn | 0.990404 |
763a0c6de1590b4923506b19242f68174d2e66b1 | 251 | md | Markdown | README.md | nealey/convulse | 5b578d532f1e0325c282b652fdda5f5e65d956e3 | [
"MIT"
] | null | null | null | README.md | nealey/convulse | 5b578d532f1e0325c282b652fdda5f5e65d956e3 | [
"MIT"
] | null | null | null | README.md | nealey/convulse | 5b578d532f1e0325c282b652fdda5f5e65d956e3 | [
"MIT"
] | 1 | 2021-04-20T05:32:49.000Z | 2021-04-20T05:32:49.000Z | This is a HTML5 doodad that lets you screencast with your face on top.
It saves videos locally as `.webm` files.
You can move your face around and zoom in to a region of the screen.
I guess this is enough for my needs.
Maybe it'll be enough for yours.
| 31.375 | 70 | 0.756972 | eng_Latn | 0.999979 |
763a166b2114e7977ca7fd00b5f6b3f26f85235f | 2,680 | md | Markdown | docs/build/reference/how-to-merge-multiple-pgo-profiles-into-a-single-profile.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/reference/how-to-merge-multiple-pgo-profiles-into-a-single-profile.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/reference/how-to-merge-multiple-pgo-profiles-into-a-single-profile.md | Erikarts/cpp-docs.es-es | 9fef104c507e48ec178a316218e1e581753a277c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'How to: Merge multiple PGO profiles into a single profile'
ms.date: 03/14/2018
helpviewer_keywords:
- merging profiles
- profile-guided optimizations, merging profiles
ms.assetid: aab686b5-59dd-40d1-a04b-5064690f65a6
ms.openlocfilehash: 04730524fa756b0c6f1e505f3610609bdec6262a
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/31/2018
ms.locfileid: "50476549"
---
# <a name="how-to-merge-multiple-pgo-profiles-into-a-single-profile"></a>How to: Merge multiple PGO profiles into a single profile
Profile-guided optimization (PGO) is a great tool for creating optimized binaries based on a scenario that you profile. But what if you have an application that has several important yet distinct scenarios? How do you create a single profile that PGO can use across several different scenarios? In Visual Studio, the PGO Manager, [pgomgr.exe](pgomgr.md), does this job.
The syntax for merging profiles is:
`pgomgr /merge[:num] [.pgc_files] .pgd_files`
where `num` is an optional weight to apply to the .pgc files added by this merge. Weights are commonly used when some scenarios are more important than others, or when some scenarios are to be run multiple times.
> [!NOTE]
> The PGO Manager does not work with stale profile data. To merge a .pgc file into a .pgd file, the .pgc file must be generated by an executable that was created by the same link invocation that produced the .pgd file.
## <a name="examples"></a>Examples
In this example, the PGO Manager adds pgcFile.pgc to pgdFile.pgd six times:
`pgomgr /merge:6 pgcFile.pgc pgdFile.pgd`
In this example, the PGO Manager adds pgcFile1.pgc and pgcFile2.pgc to pgdFile.pgd, twice for each .pgc file:
`pgomgr /merge:2 pgcFile1.pgc pgcFile2.pgc pgdFile.pgd`
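Putting the weights to work in a two-scenario workflow might look like the sketch below. The application name `myapp` and its scenario switches are hypothetical; an instrumented executable writes a numbered .pgc file (myapp!1.pgc, myapp!2.pgc, and so on) each time it exits:

```sh
# Exercise the instrumented binary once per scenario; each run emits myapp!N.pgc.
myapp.exe --scenario browse
myapp.exe --scenario checkout

# Fold both captures into the profile database, counting checkout three times.
pgomgr /merge:1 "myapp!1.pgc" myapp.pgd
pgomgr /merge:3 "myapp!2.pgc" myapp.pgd
```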
If the PGO Manager is run without any .pgc file arguments, it searches the local directory for all .pgc files that have the same base name as the .pgd file followed by an exclamation point (!) and then one or more arbitrary characters. For example, if the local directory has the files test.pgd, test!1.pgc, test2.pgc, and test!hello.pgc, and the following command is run from the local directory, then **pgomgr** merges test!1.pgc and test!hello.pgc into test.pgd.
`pgomgr /merge test.pgd`
## <a name="see-also"></a>See also
[Profile-guided optimizations](../../build/reference/profile-guided-optimizations.md)
| 59.555556 | 520 | 0.790672 | spa_Latn | 0.980818 |
763a40777d2b1674a9e17d8895afd8fbee0f1bda | 2,411 | md | Markdown | articles/virtual-machines/linux/flatcar-create-upload-vhd.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/flatcar-create-upload-vhd.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/flatcar-create-upload-vhd.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Create and upload a VHD with Flatcar Container Linux for use in Azure
description: Learn how to create and upload a VHD that contains the Flatcar Container Linux operating system.
author: marga-kinvolk
ms.author: danis
ms.service: virtual-machines-linux
ms.workload: infrastructure-services
ms.topic: how-to
ms.date: 07/16/2020
ms.reviewer: cynthn
ms.openlocfilehash: 555e53899ed78a5200009d04659e974f8157057e
ms.sourcegitcommit: dccb85aed33d9251048024faf7ef23c94d695145
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 07/28/2020
ms.locfileid: "87268238"
---
# <a name="using-a-prebuilt-flatcar-image-for-azure"></a>Using a prebuilt Flatcar image for Azure
You can download prebuilt Azure virtual hard disk images of Flatcar Container Linux for each of the channels that Flatcar supports:
- [stable](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2)
- [beta](https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2)
- [alpha](https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2)
- [edge](https://edge.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2)
This image is already fully set up and optimized to run on Azure. You only need to decompress it.
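For example, fetching and decompressing the stable channel image could look like this (the URL is the stable link from the list above; `bunzip2` is assumed to be available):

```sh
# Download the stable image and decompress it in place, leaving a raw .vhd.
wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2
bunzip2 flatcar_production_azure_image.vhd.bz2
```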
## <a name="building-your-own-flatcar-based-virtual-machine-for-azure"></a>Building your own Flatcar-based virtual machine for Azure
Alternatively, you can build your own Flatcar Container Linux image.
On a Linux machine, follow the instructions in the [Flatcar Container Linux developer SDK guide](https://docs.flatcar-linux.org/os/sdk-modifying-flatcar/). When you run the `image_to_vm.sh` script, make sure you pass `--format=azure` to create an Azure virtual hard disk.
## <a name="next-steps"></a>Next steps
Once you have the VHD file, you can use it to create new virtual machines in Azure. If this is the first time you are uploading a VHD to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
| 63.447368 | 369 | 0.8146 | deu_Latn | 0.868446 |
763a8d2329c2d91fed45128012c8c43ddd0057e6 | 2,569 | md | Markdown | doc/lightning-listinvoices.7.md | denis2342/lightning | 549af782f1b9c37bc3c35c636753e4ced5047c71 | [
"MIT"
] | 2,288 | 2015-06-13T04:01:10.000Z | 2022-03-30T16:23:20.000Z | doc/lightning-listinvoices.7.md | denis2342/lightning | 549af782f1b9c37bc3c35c636753e4ced5047c71 | [
"MIT"
] | 4,311 | 2015-06-13T23:39:10.000Z | 2022-03-31T23:23:29.000Z | doc/lightning-listinvoices.7.md | denis2342/lightning | 549af782f1b9c37bc3c35c636753e4ced5047c71 | [
"MIT"
] | 750 | 2015-06-13T16:34:01.000Z | 2022-03-31T15:50:39.000Z | lightning-listinvoices -- Command for querying invoice status
=============================================================
SYNOPSIS
--------
**listinvoices** \[*label*\] \[*invstring*\] \[*payment_hash*\] \[*offer_id*\]
DESCRIPTION
-----------
The **listinvoices** RPC command gets the status of a specific invoice,
if it exists, or the status of all invoices if given no argument.
A specific invoice can be queried by providing either the `label`
provided when creating the invoice, the `invstring` string representing
the invoice, the `payment_hash` of the invoice, or the local `offer_id`
this invoice was issued for. Only one of the query parameters can be used at once.
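For example, to fetch a single invoice by its label (a sketch; it assumes an invoice was created with label `inv-1`):

```
lightning-cli listinvoices label=inv-1
```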
RETURN VALUE
------------
[comment]: # (GENERATE-FROM-SCHEMA-START)
On success, an object containing **invoices** is returned. It is an array of objects, where each object contains:
- **label** (string): unique label supplied at invoice creation
- **description** (string): description used in the invoice
- **payment_hash** (hex): the hash of the *payment_preimage* which will prove payment (always 64 characters)
- **status** (string): Whether it's paid, unpaid or unpayable (one of "unpaid", "paid", "expired")
- **expires_at** (u64): UNIX timestamp of when it will become / became unpayable
- **amount_msat** (msat, optional): the amount required to pay this invoice
- **bolt11** (string, optional): the BOLT11 string (always present unless *bolt12* is)
- **bolt12** (string, optional): the BOLT12 string (always present unless *bolt11* is)
- **local_offer_id** (hex, optional): the *id* of our offer which created this invoice (**experimental-offers** only). (always 64 characters)
- **payer_note** (string, optional): the optional *payer_note* from invoice_request which created this invoice (**experimental-offers** only).
If **status** is "paid":
- **pay_index** (u64): Unique incrementing index for this payment
- **amount_received_msat** (msat): the amount actually received (could be slightly greater than *amount_msat*, since clients may overpay)
- **paid_at** (u64): UNIX timestamp of when it was paid
- **payment_preimage** (hex): proof of payment (always 64 characters)
[comment]: # (GENERATE-FROM-SCHEMA-END)
AUTHOR
------
Rusty Russell <<[email protected]>> is mainly responsible.
SEE ALSO
--------
lightning-waitinvoice(7), lightning-delinvoice(7), lightning-invoice(7).
RESOURCES
---------
Main web site: <https://github.com/ElementsProject/lightning>
[comment]: # ( SHA256STAMP:3dc5d5b8f7796d29e0d174d96e93915cbc7131b173a1547de022e021c55e8db6)
| 42.816667 | 142 | 0.709615 | eng_Latn | 0.956869 |
763adc69b9afacfc8d7532a7829dd8ed7a3f20f8 | 454 | md | Markdown | clap_complete/CONTRIBUTING.md | plaflamme/clap | 2bd3dd7d0ccffa45b2cc30a384cab920ebd6a4c2 | [
"Apache-2.0",
"MIT"
] | 2,157 | 2015-03-01T05:04:30.000Z | 2018-07-30T03:46:32.000Z | clap_complete/CONTRIBUTING.md | marcospb19/clap | fb39216cafe16cffe52594f45200d7be67691b7e | [
"Apache-2.0",
"MIT"
] | 1,215 | 2015-03-01T10:24:03.000Z | 2018-07-25T03:39:29.000Z | clap_complete/CONTRIBUTING.md | marcospb19/clap | fb39216cafe16cffe52594f45200d7be67691b7e | [
"Apache-2.0",
"MIT"
] | 318 | 2015-03-16T15:37:13.000Z | 2018-07-25T05:52:02.000Z | # How to Contribute
See the [clap-wide CONTRIBUTING.md](../CONTRIBUTING.md). This will contain `clap_complete` specific notes.
### Scope
`clap_complete` contains the core completion generators, meaning ones
maintained by the clap maintainers that get priority for features and fixes.
Additional, including contributor-maintained generators can also be contributed
to the clap repo and sit alongside `clap_complete` in a `clap_complete_<name>`
crate.
| 37.833333 | 107 | 0.799559 | eng_Latn | 0.998272 |
763b3c8ede9bb7e656040e3cee67be6f2597b0a5 | 8,085 | md | Markdown | ce/marketing/entity-mapping.md | mairaw/dynamics-365-customer-engagement | 18b45fa62f6af559f6f272575878c21ab279638c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/marketing/entity-mapping.md | mairaw/dynamics-365-customer-engagement | 18b45fa62f6af559f6f272575878c21ab279638c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/marketing/entity-mapping.md | mairaw/dynamics-365-customer-engagement | 18b45fa62f6af559f6f272575878c21ab279638c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Map form data to entities with custom Workflows (Dynamics 365 Marketing) | Microsoft Docs"
description: "Learn how to map form data to entities with custom Workflows."
ms.date: 09/22/2020
ms.service: dynamics-365-marketing
ms.custom:
- dyn365-marketing
ms.topic: article
author: alfergus
ms.author: alfergus
manager: shellyha
search.audienceType:
- admin
- customizer
- enduser
search.app:
- D365CE
- D365Mktg
---
# Map form data to entities with custom Workflows
If you have configured a marketing form to collect form submissions without updating contacts or leads, you can create a Workflow to map the form submission data to any entity.
- Learn more about collecting form data without updating contacts or leads: [Create, view, and manage marketing forms](marketing-forms.md#do-not-createupdate-contacts-or-leads)
- Learn more about building workflows: [Use Workflow processes to automate processes that don't require user interaction](../customerengagement/on-premises/customize/workflow-processes.md)
## Creating a Workflow
Create a Workflow to extract the values from a form submission. You can use this data to create a custom entity or to create or update any existing entity.
To create a Workflow:
1. In the navigation bar, go to **Settings** > **Process Center** > **Processes**.

1. Create new blank process, set the **Category** to **Workflow**, and add the entity that triggers your workflow. In this case, we'll add the Marketing form submission.

1. Next, you will start adding steps to your Workflow. You will find flexible options to handle entities under **Add Step** > **Dynamics 365 Marketing**.
For example, you can start with **Extract a submitted value by field** to find a value inside a submission that you would like to store. Add a **Match entity** step to match the marketing form submission to the entity that you want to update. Add a **Json set property** step to be used in the other steps’ JSON properties.

## Example Workflow: Collecting credit card applications
In this example, we'll create a Workflow to update a custom entity called “Credit card applications.” The Workflow will allow a user to collect credit card applications through a Marketing form and store the data under the new custom entity.
The credit card application Workflow requires the following general processes:
- Check if the submission is coming from a form the workflow can handle. The simplest method to do this is selecting submissions from a specific form.
- Extract the submitted values so that they available in the Workflow (**extract value**).
- Combine multiple values into a single structure that is suitable for entity matching or creation (**set JSON property**).
- Create an entity with properties that were set in the previous step. Alternatively, the Workflow can search for a matching entity and either update the found entity or create a new one if not found (**create entity**, **update entity**, and **match entity actions**).
The following steps detail the actions required to create the credit card application Workflow:
1. To create a custom entity, in the navigation bar, go to **Settings** > **Customize the System** > **Entities**.
1. Create a marketing form for the credit card applications containing the fields you want to use. Create fields under the new custom entity to use inside the form. Make sure the form is set to [not update contacts or leads](marketing-forms.md#do-not-createupdate-contacts-or-leads).
1. Next, you will create a Workflow to process the custom entities. Go to **Settings** > **Processes** and create a new **Workflow** process. In the **Entity** field, select the entity that triggers your Workflow. In this case, we'll select **Marketing form submission**. Then select **OK**.
1. To add a step, select **Add Step**, then go to **Dynamics 365 Marketing** > **Extract submitted value by field**. This will allow you to extract a value from a form submission.
1. Add a name for the step. We will name our step "Extract value from form submission (E-mail)."

1. Select the **Set Properties** button.
1. Next, we'll extract the email address from a submitted form. To extract the email address, go to the **Operator** pane and select **Marketing form submission** in the **Look for:** drop down menu. Then, select the **Add** button in the **Operator** pane. To add the dynamic value to the form submission property, select the **OK** button.

1. To select the desired field to extract from the lookup, select **E-mail** under the **Value** column for the **Marketing form field** property.

1. Next, to match the result of the extracted e-mail value to the logical name of the e-mail entity in the CRM database, go to **Add Step** > **Dynamics 365 Marketing** > **Json set property**.
To find the logical name of the entity, go to **Customize the System** > **Entities**, and open the relevant entity. The logical name is the **Name** field of the entity.

1. Continue adding the previously set JSON values one by one (a hypothetical sketch of the combined JSON appears after these steps).
1. Insert the logical name.
1. Insert the result from the **Extracted value from** field.
1. Choose a previous JSON value to add on top of the extracted value. This ensures that you will combine all of the JSON entries into a combined value that will be used at the end.

1. Select the **Save and Close** button.
1. To create a credit card application record that results from each form submission, select **Dynamics 365 Marketing** > **Create Entity**. Set the **JSON properties** value column to **Result of the last JSON set property**.
1. To insert an initial step into your process to filter submissions to only those coming from the form that is collecting the credit card applications, select **Add Step** > **Check condition**.

1. Open the dropdown menu and select **Primary Entity** > **Marketing form submission**.
1. Go to the marketing form you are using for the credit card application and find the **Form ID** in the form page URL.

1. Place the **Form ID** into the condition step in the Workflow.

1. To find the specific form, you can set the condition logic to **marketing form ID** to check if it is equal to the specific form ID. If yes, run the Workflow. If not, add a step to stop the Workflow with the reason for cancellation.

<!-- 1. You can find the submissions related to your custom entity by selecting the **Advanced find** button  on the top ribbon in the Marketing app. In each submission, you can find submission values under the **Form** > **Submissions** tab. -->
### See also
[Create, view, and manage marketing forms](marketing-forms.md)
[!INCLUDE[footer-include](../includes/footer-banner.md)] | 70.921053 | 345 | 0.747928 | eng_Latn | 0.994208 |
763b4895813fd1605125d7d45170ce004aa1de3c | 6,381 | md | Markdown | README.md | UMM-CSci-3601-S18/iteration-3-the-underdawgz | 631bd7e0f628b487e8f5ee6ccd3086039c2180f2 | [
"MIT"
] | null | null | null | README.md | UMM-CSci-3601-S18/iteration-3-the-underdawgz | 631bd7e0f628b487e8f5ee6ccd3086039c2180f2 | [
"MIT"
] | 24 | 2018-04-03T19:45:41.000Z | 2018-04-16T18:17:51.000Z | README.md | UMM-CSci-3601-S18/iteration-3-the-underdawgz | 631bd7e0f628b487e8f5ee6ccd3086039c2180f2 | [
"MIT"
] | null | null | null | # Friendly Panda Emotion Tracker
An emotion tracking web application which can be used by health practioners and their consumers.
Provides ability for consumers to quickly log data about emotion and mood, and provides convenient, helpful links.
[](https://travis-ci.org/UMM-CSci-3601-S18/iteration-3-the-underdawgz)
<!-- TOC depthFrom:1 depthTo:5 withLinks:1 updateOnSave:1 orderedList:0 -->
## Table of Contents
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Running your project](#running-your-project)
- [Deploying Project for Production](#deploying-project-for-production)
- [Testing and Continuous Integration](#testing-and-continuous-integration)
- [Authors](#authors)
<!-- /TOC -->
## Prerequisites
Versions used for required packages:
- OpenJDK v1.8.0_141
- Node v6.11.2
- Npm v3.10.10
- @angular/cli v1.0.0
- Yarn v1.0.2
- IntelliJ Gradle Plugin
## Setup
For IntelliJ users, you should be able to clone this repository inside IntelliJ
- When prompted to create a new IntelliJ project, select **yes**.
- Select **import project from existing model** and select **Gradle.**
- Make sure **Use default Gradle wrapper** is selected.
- Click **Finish.**
- If IDEA asks you if you want to compile TypeScript to JavaScript :fire: DO NOT :fire: – if you do it will break your project.
:warning: IDEA will sometimes decide to "help" you by offering
"Compile TypeScript to JavaScript?" :bangbang: *Never* say "OK" to this
offer -- if you do it will make a complete mess of your project. We're
using other tools (`gradle`, `yarn`, and `ng`) to do that compilation.
If you let IDEA do it, you'll
have a ton of JavaScript files cluttering up your project and confusing other
tools.
## Running your project
- The **build** task will _build_ the entire project (but not run it)
- The **run** Gradle task will still run your SparkJava server.
(which is available at ``localhost:4567``)
- The **runClient** task will build and run the client side of your project (available at ``localhost:9000``)
- The **runAllTests** task will run both the Java (server) tests and the `karma` (client-side, Angular) tests
- The **runServerTests** task will run the Java (server) tests
- The **runClientTests** task will run the `karma` (client-side, Angular) tests.
* The **runClientTestWithCoverage** task will run the `karma` tests and generate test coverage data which will be placed in `client/coverage`; open the `index.html` in that directory in a browser and you'll get a web interface to that coverage data.
* The **runClientTestsAndWatch** task will run the `karma` tests, but leave the testing browser open and the tests in "watch" mode. This means that any changes you make will cause the code to recompile and the tests to be re-run in the background. This can give you continuous feedback on the health of your tests.
- The **runE2ETest** task runs the E2E (end-to-end, Protractor) tests. For this to work you _must_ make sure you have your server running, and you may need to re-seed the database to make sure it's in a predictable state.
- The **seedMongoDB** task will load the "demo" data into the Mongo database. If you want/need to change what gets loaded, the `seedMongoDB` command is defined in the top level `build.gradle` and currently loads four files, `emotions.seed.json`, `goals.seed.json`, `resources.seed.json`, and `journals.seed.json`, all of which are also in the top level of the project. To load new/different data you should create the necessary JSON data files, and then update `build.gradle` to load those files.
**build.sh** is a script that calls `gradle build` to build the entire project, which creates an executable able to launch the
project in production mode. To run **build.sh**, go to your project directory in a terminal and enter: ``./build.sh``
When **build.sh** is run, the script **.3601_run.sh** is copied to ~/**3601.sh**. Launching it, for example with ``./3601.sh``, will run your project in production mode. The API_URL in _environment.prod.ts_ needs to be
the actual URL of the server. If your server is deployed on a droplet or virtual machine, for example, then you want something like
`http://192.168.0.1:4567` where you replace that IP with the IP of your droplet. If you've set up a domain name for your system, you can use that instead, like `http://acooldomainname.com`.
## Deploying Project for Production
Instructions on setting up the project for production can be found here:
[UMM CSCI 3601 Droplet Setup Instructions](https://gist.github.com/pluck011/d968c2280cc9dc190a294eaf149b1c6e)
## Testing and Continuous Integration
Testing the client:
* `runAllTests` runs both the server tests and the clients tests once.
* `runClientTests` runs the client tests once.
* `runClientTestsAndWatch` runs the client tests every time that the code changes after a save.
* `runClientTestsWithCoverage` runs the client tests and deposits code coverage statistics into a new directory within `client` called `coverage`. In there you will find an `index.html`. Right click on `index.html` and select `Open in Browser` with your browser of choice. For Chrome users, you can drag and drop index.html onto chrome and it will open it.
* `runE2ETest` runs end-to-end tests for the client side. NOTE: Two Gradle tasks _must_ be run before you can run the e2e tests.
The server (`run`) needs to be running for this test to work, and you need to
have data in the `dev` database before running the e2e tests!
* `runServerTests` runs the server tests.
## Authors
- **Jubair Hassan**
- **Abenezer Monjor**
- **Hunter Welch**
- **Yukai Zang**
- **Sunjae Park**
[angular-karma-jasmine]: https://codecraft.tv/courses/angular/unit-testing/jasmine-and-karma/
[e2e-testing]: https://coryrylan.com/blog/introduction-to-e2e-testing-with-the-angular-cli-and-protractor
[environments]: http://tattoocoder.com/angular-cli-using-the-environment-option/
[spark-documentation]: http://sparkjava.com/documentation.html
[status-codes]: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
[lab2]: https://github.com/UMM-CSci-3601/3601-lab2_client-server/blob/master/README.md#resources
[mongo-jdbc]: https://docs.mongodb.com/ecosystem/drivers/java/
[labtasks]: LABTASKS.md
[travis]: https://travis-ci.org/
| 60.198113 | 496 | 0.754897 | eng_Latn | 0.982281 |
763c46c7ed93472eab2ff392aaeedb397c1d90ee | 2,099 | md | Markdown | _posts/2018-12-08-transition.md | iamjichao/iamjichao.github.io | d9525c83a302bb42b5ae66b8186e4263a884b565 | [
"MIT"
] | 2 | 2019-02-22T15:05:40.000Z | 2022-01-22T09:50:21.000Z | _posts/2018-12-08-transition.md | iamjichao/iamjichao.github.io | d9525c83a302bb42b5ae66b8186e4263a884b565 | [
"MIT"
] | 50 | 2019-02-23T14:35:23.000Z | 2019-07-22T02:57:10.000Z | _posts/2018-12-08-transition.md | iamjichao/iamjichao.github.io | d9525c83a302bb42b5ae66b8186e4263a884b565 | [
"MIT"
] | null | null | null | ---
layout: post
title: The CSS3 transition property
categories: [CSS]
description: The CSS3 transition property
keywords: [transition]
---
The `transition` property is a shorthand used to set four transition-related properties:
```css
transition: property duration timing-function delay;
```
Default value: `all 0 ease 0`
### transition-property
The `transition-property` property specifies the names of the CSS properties to which the transition effect applies. The transition starts when a specified CSS property changes, which typically happens when the user hovers the pointer over the element.
```css
transition-property: none|all|property;
```
Default value: `all`
**none**: no property will get a transition effect.
**all**: all properties will get a transition effect.
**property**: a comma-separated list of the CSS property names to which the transition effect applies.
### transition-duration
The `transition-duration` property specifies how long a transition takes to complete (in seconds or milliseconds). Always set `transition-duration`; otherwise the duration is 0 and no transition occurs.
```css
transition-duration: time;
```
Default value: `0`
**time**: the time needed to complete the transition, in seconds or milliseconds. The default is 0, meaning there will be no effect.
### transition-timing-function
The `transition-timing-function` property specifies the speed curve of the transition. It allows the transition to change speed over its duration.
```css
transition-timing-function: linear|ease|ease-in|ease-out|ease-in-out|cubic-bezier(n,n,n,n);
```
Default value: `ease`
**linear**: the transition runs at the same speed from start to end (equivalent to cubic-bezier(0,0,1,1)).
**ease**: the transition starts slowly, speeds up, then ends slowly (equivalent to cubic-bezier(0.25,0.1,0.25,1)).
**ease-in**: the transition starts slowly (equivalent to cubic-bezier(0.42,0,1,1)).
**ease-out**: the transition ends slowly (equivalent to cubic-bezier(0,0,0.58,1)).
**ease-in-out**: the transition starts and ends slowly (equivalent to cubic-bezier(0.42,0,0.58,1)).
**cubic-bezier(n,n,n,n)**: define your own values in the cubic-bezier function. Possible values are numbers between 0 and 1.
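For example, a custom curve that starts quickly and settles slowly (the selector and values are illustrative):

```css
.banner {
  transition: opacity 400ms cubic-bezier(0.1, 0.7, 0.3, 1);
}
```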
### transition-delay
The `transition-delay` property specifies when the transition effect starts. Values are given in seconds or milliseconds.
```css
transition-delay: time;
```
Default value: `0`
**time**: the time to wait before the transition effect starts, in seconds or milliseconds.
### Application: animating expand/collapse for an element of unknown height
The first idea for implementing an expand/collapse animation in CSS is usually `height` plus `overflow: hidden`. That approach generally requires hard-coding a fixed height, because a height of `auto` cannot produce a transition: `auto` is a keyword value, not a number, so the transition from 0 to `auto` cannot be computed. In this case `max-height` can be used instead.
Give the element the `box` class; add the `box-unfolded` class when expanding and remove it when collapsing:
```css
.box {
max-height: 0;
overflow: hidden;
-webkit-transition: max-height 600ms;
-moz-transition: max-height 600ms;
-o-transition: max-height 600ms;
transition: max-height 600ms;
}
.box-unfolded {
max-height: 2000px;
}
```
`max-height` is more flexible than `height`. The difference between them is how the height is computed: one is worked out by hand, the other adapts to the height of the box's content. With this technique you must allow enough height to hold the content, so set `max-height` generously. [Live demo](https://lab.iamjichao.com)
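For completeness, the class can be toggled with a few lines of JavaScript (a sketch; the `.box` element and a `.toggle` trigger are assumed):

```js
const box = document.querySelector('.box')
document.querySelector('.toggle').addEventListener('click', () => {
  box.classList.toggle('box-unfolded')
})
```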
That's all.
| 20.578431 | 172 | 0.743211 | yue_Hant | 0.70521 |
763d20ff4bf5d5e2feb38aded543ba4bebbc8081 | 6,527 | md | Markdown | support/windows-server/windows-security/mbam-secure-network-communication.md | 0-SamboNZ-0/SupportArticles-docs | 1bd812354cf0e0f42aa5dedd0252e415ddade6bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | support/windows-server/windows-security/mbam-secure-network-communication.md | 0-SamboNZ-0/SupportArticles-docs | 1bd812354cf0e0f42aa5dedd0252e415ddade6bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | support/windows-server/windows-security/mbam-secure-network-communication.md | 0-SamboNZ-0/SupportArticles-docs | 1bd812354cf0e0f42aa5dedd0252e415ddade6bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: MBAM and Secure Network Communication
description: Discusses how to configure Microsoft's BitLocker Administration and Monitoring (MBAM) with Secure Network Communication.
ms.date: 09/07/2020
author: Deland-Han
ms.author: delhan
manager: dscontentpm
audience: itpro
ms.topic: troubleshooting
ms.prod: windows-server
localization_priority: medium
ms.reviewer: manojse, nacan, kaushika
ms.prod-support-area-path: Bitlocker
ms.technology: windows-server-security
---
# MBAM and Secure Network Communication
This article discusses how to configure Microsoft's BitLocker Administration and Monitoring (MBAM) with Secure Network Communication.
_Applies to:_ Windows Server 2012 R2, Windows 10 – all editions
_Original KB number:_ 2754259
## Summary
MBAM can encrypt the communication between the MBAM Recovery and Hardware Database, the Administration and Monitoring servers, and the MBAM clients. If you decide to encrypt the communication, you're asked to select the certification authority-provisioned certificate that will be used for encryption.
The channel between MBAM Administration & Monitoring Server and SQL SSRS can also be encrypted. An Administrator needs a certificate approved from CA (Certificate Authority) or a Self-Signed Certificate before deploying MBAM.
> [!Note]
> If you decide to go with SSL, make sure you have the correct certificate to configure SSL before running MBAM Setup on your server.
Step 1: Encrypt Channel between MBAM Client and Administration & Monitoring Server.
1. Using a Self-Signed Certificate.
1. Connect to Server where MBAM Administration & Monitoring Role will be installed.
2. Make sure you have installed IIS.
3. Open Server Manager and Click on Roles.
4. Select webserver and click on IIS.
5. In Feature View, Double-Click Server Certificates.
6. Under Actions Pane, Select Self-Signed Certificate.
7. On the Create Self-Signed Certificate page, type a friendly name for the certificate in the Specify a friendly name for the certificate box, and then click OK.
> [!Note]
>
> - This procedure generates a self-signed certificate that doesn't originate from a generally trusted source; therefore, you shouldn't use this certificate to help secure data transfers between Internet clients and your server.
> - Self-signed certificates may cause your Web browser to issue phishing warnings.
2. Using a Certificate Approved by a Certificate Authority
There are two ways to import a certificate
1. Request or Import a certificate from a CA using IIS:
- [Request an Internet Server Certificate in IIS](https://technet.microsoft.com/library/cc732906%28v=ws.10%29.aspx)
- [Import a Server Certificate in IIS](https://technet.microsoft.com/library/cc732785%28v=ws.10%29.aspx)
2. Request or Import a certificate into the Personal Certificate Store using Certificate Manager:
[Windows help and learning](https://windows.microsoft.com/windows-vista/request-or-renew-a-certificate)
Certificate Templates to be used:
MBAM Client to MBAM Administration & Monitoring Server: Use Standard Web Server Template.
After you have the certificate ready, when you execute MBAM Setup you will see the thumbprint of the certificate in the "Configure Network Communication Security" wizard.

Step 2: Encrypt Channel between MBAM Administration & Monitoring Server and MBAM Recovery & Hardware SQL DB.
MBAM can encrypt the communication between the Recovery and Hardware Database and the Administration and Monitoring servers. If you choose the option to encrypt communication, you're asked to select the Certificate Authority-provisioned certificate that is used for encryption.
Certificate Templates to be used:
MBAM SQL DB Server to Admin & Monitoring Server: Standard Server Authentication Template
When you execute the MBAM Setup program on a server where you'll install the MBAM Recovery & Hardware DB role, you can see the certificate thumbprint in the "Configure Network Communication Security" wizard.

Step 3: How to configure SSL for SQL Compliance and Audit DB Server.
> [!Note]
> You'll have to configure SSL for SQL before you run MBAM Setup on your server.
1. Open SQL Reporting Services Configuration Manager on Server where you installed MBAM Audit Reports Role.
2. Connect to your Server and click Web Service URL.

3. Click Advanced and then select your certificate. See image below:

4. Repeat "Step 3" for Report Manager URL in SQL Reporting Services Configuration Manager.
5. Now when you open MBAM Reports it will use SSL to connect to SQL SSRS.
Step 4: Configure SQL to force encryption on all protocols.
1. Log in to SQL Server and Open SQL Server Configuration Manager.
2. Expand SQL Server Network Configuration and select "Protocols for MSSQLSERVER".
3. Right click on "Protocols for MSSQLSERVER" and select Yes for Force Encryption.

4. Select Certificates tab and choose your certificate from drop-down.
5. Click Apply and restart your SQL Services.
6. When you try to restart SQL Services, you'll hit an error message "The request failed or the service didn't respond in a timely fashion. Consult the event log or other applicable error logs for details."
7. It fails because the SQL account doesn't have rights to the private keys of the certificate.

8. Open the Certificate Manager MMC console and give the SQL account that is used for the SQL services Full access to the certificate.

9. Restart the SQL Server Services now and it should be successful.
## More information
- [Create a Self-Signed Server Certificate in IIS](https://technet.microsoft.com/library/cc753127%28v=ws.10%29.aspx)
- [Configuring a Report Server for Secure Sockets Layer (SSL) Connections](https://msdn.microsoft.com/library/ms345223%28v=sql.105%29.aspx)
| 55.786325 | 301 | 0.779838 | eng_Latn | 0.958848 |
763d4fa0d87da7d48a64443eeef6a9895d6ebcc0 | 223 | md | Markdown | AUTHORS.md | deleyva/obsidian_cli | 5e26bebc72bb56239a03e47ca791dc6732b5918e | [
"MIT"
] | 11 | 2020-10-02T00:09:29.000Z | 2022-01-05T10:06:53.000Z | AUTHORS.md | tallguyjenks/template | 505acd2f291c16935d67629b3f6c8e0de954dc6c | [
"MIT"
] | 8 | 2021-09-14T03:13:31.000Z | 2022-03-15T03:23:02.000Z | AUTHORS.md | deleyva/obsidian_cli | 5e26bebc72bb56239a03e47ca791dc6732b5918e | [
"MIT"
] | 5 | 2020-10-02T00:09:52.000Z | 2021-09-26T02:01:37.000Z | # Authors
## Core Contributor
---
[Bryan Jenks](https://github.com/tallguyjenks) -- <[email protected]>
---
## Logo Designer
---
## Contributors
---
None yet. [Why not be the first?](CONTRIBUTING.md)
| 11.15 | 83 | 0.654709 | yue_Hant | 0.193369 |
763d97df856a83573c3a8d4dbb22be38e96b002c | 1,209 | md | Markdown | CONTRIBUTING.md | HendrikPetertje/vim-devicons | ea5bbf0e2a960965accfa50a516773406a5b6b26 | [
"MIT"
] | 23 | 2015-02-22T06:00:03.000Z | 2021-09-09T03:09:56.000Z | CONTRIBUTING.md | HendrikPetertje/vim-devicons | ea5bbf0e2a960965accfa50a516773406a5b6b26 | [
"MIT"
] | 1 | 2018-04-26T16:10:04.000Z | 2018-04-26T16:10:04.000Z | CONTRIBUTING.md | HendrikPetertje/vim-devicons | ea5bbf0e2a960965accfa50a516773406a5b6b26 | [
"MIT"
] | 1 | 2015-07-01T12:51:55.000Z | 2015-07-01T12:51:55.000Z | # Contributing Guide
## How to contribute
Work In Progress, for now the minimum:
* Fork the project and submit a Pull Request (PR)
* Explain what the PR fixes or improves
* Screenshots for bonus points
* Use sensible commit messages
* If your PR fixes a separate issue number, include it in the commit message
## Things to keep in mind
* Smaller PRs are likely to be merged more quickly than bigger changes
* If it is a useful PR it **will** get merged in eventually
* [E.g. see how many have already been merged vs. still open](https://github.com/ryanoasis/vim-devicons/pulls)
* This project is using [Semantic Versioning 2.0.0](http://semver.org/)
* I try to group fixes into milestones/versions
* If your bug or PR is *not* trivial it will likely end up in the next **MINOR** version
* If your bug or PR *is* trivial *or* critical it will likely end up in the next **PATCH** version
* Most of the time PRs and fixes are *not* merged directly into master without being present on a new versioned branch
** Sometimes for small items I will make exceptions to get the fix or readme change on master sooner but even after there will *always* be a versioned branch to keep track of each release
| 52.565217 | 187 | 0.74938 | eng_Latn | 0.999383 |
763de46ad4af3fca617b4d5a1e777ecb494116b7 | 535 | md | Markdown | Algorithm/0307 (Medium)Range Sum Query - Mutable/notebook.md | ZexinLi0w0/LeetCode | cf3988620ccdcc3d54b9beafd04c517c96f01bb9 | [
"MIT"
] | 1 | 2020-12-03T10:10:15.000Z | 2020-12-03T10:10:15.000Z | Algorithm/0307 (Medium)Range Sum Query - Mutable/notebook.md | ZexinLi0w0/LeetCode | cf3988620ccdcc3d54b9beafd04c517c96f01bb9 | [
"MIT"
] | null | null | null | Algorithm/0307 (Medium)Range Sum Query - Mutable/notebook.md | ZexinLi0w0/LeetCode | cf3988620ccdcc3d54b9beafd04c517c96f01bb9 | [
"MIT"
] | null | null | null | # Brute-Force
Learn how to use the `this` pointer in C++.
A very straightforward and naive implementation.
Runtime: 148 ms, faster than 24.84% of C++ online submissions for Range Sum Query - Mutable.
Memory Usage: 19.1 MB, less than 58.54% of C++ online submissions for Range Sum Query - Mutable.
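A minimal sketch of the brute-force solution (LeetCode 307 interface):

```cpp
#include <vector>
using namespace std;

class NumArray {
    vector<int> data;
public:
    NumArray(vector<int>& nums) : data(nums) {}

    // O(1) update: just overwrite the stored value
    void update(int index, int val) { data[index] = val; }

    // O(n) query: sum the range on demand
    int sumRange(int left, int right) {
        int sum = 0;
        for (int i = left; i <= right; ++i) sum += data[i];
        return sum;
    }
};
```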
# Thinking Further
Use a tree to accelerate the queries: a segment tree brings both update and range-sum down to O(log n).
Runtime: 40 ms, faster than 89.42% of C++ online submissions for Range Sum Query - Mutable.
Memory Usage: 19.9 MB, less than 19.51% of C++ online submissions for Range Sum Query - Mutable.
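A sketch of the iterative (bottom-up) segment tree version:

```cpp
#include <vector>
using namespace std;

// Leaves tree[n..2n-1] hold the numbers; internal node i sums its
// children 2i and 2i+1.
class NumArray {
    int n;
    vector<long long> tree;
public:
    NumArray(vector<int>& nums) : n(nums.size()), tree(2 * nums.size()) {
        for (int i = 0; i < n; ++i) tree[n + i] = nums[i];
        for (int i = n - 1; i > 0; --i) tree[i] = tree[2 * i] + tree[2 * i + 1];
    }

    void update(int index, int val) {            // O(log n)
        int i = n + index;
        tree[i] = val;
        for (i /= 2; i >= 1; i /= 2) tree[i] = tree[2 * i] + tree[2 * i + 1];
    }

    int sumRange(int left, int right) {          // O(log n), inclusive bounds
        long long sum = 0;
        for (int l = n + left, r = n + right + 1; l < r; l /= 2, r /= 2) {
            if (l & 1) sum += tree[l++];
            if (r & 1) sum += tree[--r];
        }
        return (int)sum;
    }
};
```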
| 44.583333 | 96 | 0.753271 | eng_Latn | 0.936809 |
763e19fcca16ddddc8f4cdf46462ae64b548a742 | 2,373 | md | Markdown | sdk/storage/Azure.Storage.Common/CHANGELOG.md | asrosent/azure-sdk-for-net | 4a678fda37af180229a4056fef09ceecff7b2069 | [
"MIT"
] | null | null | null | sdk/storage/Azure.Storage.Common/CHANGELOG.md | asrosent/azure-sdk-for-net | 4a678fda37af180229a4056fef09ceecff7b2069 | [
"MIT"
] | null | null | null | sdk/storage/Azure.Storage.Common/CHANGELOG.md | asrosent/azure-sdk-for-net | 4a678fda37af180229a4056fef09ceecff7b2069 | [
"MIT"
] | null | null | null | # Release History
## 12.6.0-preview.1 (Unreleased)
## 12.5.1 (2020-08-18)
- Fixed bug in TaskExtensions.EnsureCompleted method that causes it to unconditionally throw an exception in the environments with synchronization context
## 12.5.0 (2020-08-13)
- Includes all features from 12.5.0-preview.1 through 12.5.0-preview.6.
- This release contains bug fixes to improve quality.
## 12.5.0-preview.6 (2020-07-27)
- This release contains bug fixes to improve quality.
## 12.5.0-preview.5 (2020-07-03)
- This release contains bug fixes to improve quality.
## 12.5.0-preview.4 (2020-06)
- This preview contains bug fixes to improve quality.
## 12.5.0-preview.1 (2020-06)
- This preview adds support for client-side encryption, compatible with data uploaded in previous major versions.
## 12.4.3 (2020-06)
- This release contains bug fixes to improve quality.
## 12.4.2 (2020-06)
- This release contains bug fixes to improve quality.
## 12.4.1 (2020-05)
- This release contains bug fixes to improve quality.
## 12.4.0 (2020-04)
- This release contains bug fixes to improve quality.
## 12.3.0 (2020-03)
- Added InitialTransferLength to StorageTransferOptions
## 12.2.0 (2020-02)
- Added support for service version 2019-07-07.
- Update StorageSharedKeyPipelinePolicy to update the request date header on each retry.
- Sanitized header values in exceptions.
## 12.1.1 (2020-01)
- Fixed issue where SAS content headers were not URL encoded when using Sas builders.
- Fixed bug where using a SAS connection string from the portal would throw an exception if it included
a table endpoint.
## 12.1.0
- Add support for populating AccountName properties of the UriBuilders
for non-IP style Uris.
## 12.0.0-preview.4 (2019-10)
- Bug fixes
## 12.0.0-preview.3 (2019-09)
- Support new for Blobs/Files features
- Bug fixes
For more information, please visit: https://aka.ms/azure-sdk-preview3-net.
## 12.0.0-preview.2 (2019-08)
- Credential rolling
- Bug fixes
## 12.0.0-preview.1 (2019-07)
This preview is the first release of a ground-up rewrite of our client
libraries to ensure consistency, idiomatic design, productivity, and an
excellent developer experience. It was created following the Azure SDK Design
Guidelines for .NET at https://azuresdkspecs.z5.web.core.windows.net/DotNetSpec.html.
For more information, please visit: https://aka.ms/azure-sdk-preview1-net.
| 32.506849 | 154 | 0.749684 | eng_Latn | 0.982229 |
763ec7c30379806174eb8734ca4e4de0d53a129d | 2,733 | md | Markdown | wdk-ddi-src/content/wiamindr_lh/nf-wiamindr_lh-iwiaminidrv-drvanalyzeitem.md | Acidburn0zzz/windows-driver-docs-ddi | c16b5cd6c3f59ca5f963ff86f3f0926e3b910231 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-11-06T01:28:38.000Z | 2018-11-06T01:28:38.000Z | wdk-ddi-src/content/wiamindr_lh/nf-wiamindr_lh-iwiaminidrv-drvanalyzeitem.md | Acidburn0zzz/windows-driver-docs-ddi | c16b5cd6c3f59ca5f963ff86f3f0926e3b910231 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/wiamindr_lh/nf-wiamindr_lh-iwiaminidrv-drvanalyzeitem.md | Acidburn0zzz/windows-driver-docs-ddi | c16b5cd6c3f59ca5f963ff86f3f0926e3b910231 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:wiamindr_lh.IWiaMiniDrv.drvAnalyzeItem
title: IWiaMiniDrv::drvAnalyzeItem
author: windows-driver-content
description: The IWiaMiniDrv::drvAnalyzeItem method inspects an item, and creates subitems, if necessary.
old-location: image\iwiaminidrv_drvanalyzeitem.htm
tech.root: image
ms.assetid: e742f898-e663-431d-870e-bb0fe7e89b5a
ms.date: 05/03/2018
ms.keywords: IWiaMiniDrv interface [Imaging Devices],drvAnalyzeItem method, IWiaMiniDrv.drvAnalyzeItem, IWiaMiniDrv::drvAnalyzeItem, MiniDrv_dfa93eeb-ea39-44b6-b465-5bff0f056763.xml, drvAnalyzeItem, drvAnalyzeItem method [Imaging Devices], drvAnalyzeItem method [Imaging Devices],IWiaMiniDrv interface, image.iwiaminidrv_drvanalyzeitem, wiamindr_lh/IWiaMiniDrv::drvAnalyzeItem
ms.topic: method
req.header: wiamindr_lh.h
req.include-header: Wiamindr.h
req.target-type: Desktop
req.target-min-winverclnt: Available in Windows Me and in Windows XP and later.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- wiamindr_lh.h
api_name:
- IWiaMiniDrv.drvAnalyzeItem
product:
- Windows
targetos: Windows
req.typenames:
---
# IWiaMiniDrv::drvAnalyzeItem
## -description
The <b>IWiaMiniDrv::drvAnalyzeItem</b> method inspects an item and creates subitems, if necessary.
## -parameters
### -param __MIDL__IWiaMiniDrv0036
### -param __MIDL__IWiaMiniDrv0037
### -param __MIDL__IWiaMiniDrv0038
#### - lFlags [in]
Is currently unused.
#### - pWiasContext [in]
Pointer to a WIA item context.
#### - plDevErrVal [out]
Points to a memory location that will receive a status code for this method. If this method returns S_OK, the value stored will be zero. Otherwise, a minidriver-specific error code will be stored at the location pointed to by this parameter.
## -returns
On success, the method should return S_OK and clear the device error value pointed to by <i>plDevErrVal</i>. If the method is not fully implemented, it can return E_NOTIMPL. If the method fails, it should return a standard COM error code and place a minidriver-specific error code value in the memory pointed to by <i>plDevErrVal</i>.
The value pointed to by <i>plDevErrVal</i> can be converted to a string by calling <a href="https://msdn.microsoft.com/library/windows/hardware/ff543982">IWiaMiniDrv::drvGetDeviceErrorStr</a>.
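A minimal skeleton is sketched below. The class name is hypothetical, the parameter order is an assumption (consult wiamindr.h for the exact signature), and a real driver would inspect the item and create child items (for example, detected photo regions) before returning.

```cpp
// Sketch of a minidriver that performs no analysis (CWiaDevice is hypothetical).
HRESULT CWiaDevice::drvAnalyzeItem(
    BYTE *pWiasContext,   // WIA item context
    LONG lFlags,          // currently unused
    LONG *plDevErrVal)    // receives a minidriver-specific status code
{
    if (pWiasContext == NULL || plDevErrVal == NULL)
        return E_INVALIDARG;

    *plDevErrVal = 0;     // clear the device error value

    // Analysis is not implemented in this sketch; a full driver would
    // create subitems for anything it detects in the item here.
    return E_NOTIMPL;
}
```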
## -see-also
<a href="https://msdn.microsoft.com/15068d10-5e24-427c-9684-24ce67b75ada">IWiaMiniDrv</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff543982">IWiaMiniDrv::drvGetDeviceErrorStr</a>
| 23.765217 | 376 | 0.773509 | eng_Latn | 0.484062 |
763ed861e0533fcd1af95d6cb3a12788d00190cd | 21,039 | md | Markdown | README.md | varunjain0606/telegraf | c676bada8b69b80081704b16176c70530a66bf72 | [
"MIT"
] | 1 | 2022-03-30T23:09:20.000Z | 2022-03-30T23:09:20.000Z | README.md | varunjain0606/telegraf | c676bada8b69b80081704b16176c70530a66bf72 | [
"MIT"
] | 24 | 2021-09-06T11:33:14.000Z | 2022-03-28T11:14:32.000Z | README.md | shreyashsoni1/telegraf | 34565a303db841c359c5a20fd5909ea58837fa1c | [
"MIT"
] | 2 | 2021-01-14T01:38:56.000Z | 2021-01-16T13:09:35.000Z |
# Telegraf

[](https://circleci.com/gh/influxdata/telegraf) [](https://hub.docker.com/_/telegraf/) [](https://lgtm.com/projects/g/influxdata/telegraf/alerts/)
[](https://www.influxdata.com/slack)
Telegraf is an agent for collecting, processing, aggregating, and writing metrics.
Design goals are to have a minimal memory footprint with a plugin system so
that developers in the community can easily add support for collecting
metrics.
Telegraf is plugin-driven and has the concept of 4 distinct plugin types:
1. [Input Plugins](#input-plugins) collect metrics from the system, services, or 3rd party APIs
2. [Processor Plugins](#processor-plugins) transform, decorate, and/or filter metrics
3. [Aggregator Plugins](#aggregator-plugins) create aggregate metrics (e.g. mean, min, max, quantiles, etc.)
4. [Output Plugins](#output-plugins) write metrics to various destinations
New plugins are designed to be easy to contribute, pull requests are welcomed
and we work to incorporate as many pull requests as possible.
If none of the internal plugins fit your needs, you could have a look at the
[list of external plugins](EXTERNAL_PLUGINS.md).
## Try in Browser :rocket:
You can try Telegraf right in your browser in the [Telegraf playground](https://rootnroll.com/d/telegraf/).
## Contributing
There are many ways to contribute:
- Fix and [report bugs](https://github.com/influxdata/telegraf/issues/new)
- [Improve documentation](https://github.com/influxdata/telegraf/issues?q=is%3Aopen+label%3Adocumentation)
- [Review code and feature proposals](https://github.com/influxdata/telegraf/pulls)
- Answer questions and discuss here on github and on the [Community Site](https://community.influxdata.com/)
- [Contribute plugins](CONTRIBUTING.md)
- [Contribute external plugins](docs/EXTERNAL_PLUGINS.md)
## Minimum Requirements
Telegraf shares the same [minimum requirements][] as Go:
- Linux kernel version 2.6.23 or later
- Windows 7 or later
- FreeBSD 11.2 or later
- MacOS 10.11 El Capitan or later
[minimum requirements]: https://github.com/golang/go/wiki/MinimumRequirements#minimum-requirements
## Installation:
You can download the binaries directly from the [downloads](https://www.influxdata.com/downloads) page
or from the [releases](https://github.com/influxdata/telegraf/releases) section.
### Ansible Role:
Ansible role: https://github.com/rossmcdonald/telegraf
### From Source:
Telegraf requires Go version 1.14 or newer; the Makefile requires GNU make.
1. [Install Go](https://golang.org/doc/install) >=1.14 (1.15 recommended)
2. Clone the Telegraf repository:
```
cd ~/src
git clone https://github.com/influxdata/telegraf.git
```
3. Run `make` from the source directory
```
cd ~/src/telegraf
make
```
### Changelog
View the [changelog](/CHANGELOG.md) for the latest updates and changes by
version.
### Nightly Builds
These builds are generated from the master branch:
FreeBSD - .tar.gz
- [telegraf-nightly_freebsd_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_amd64.tar.gz)
- [telegraf-nightly_freebsd_armv7.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_armv7.tar.gz)
- [telegraf-nightly_freebsd_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_i386.tar.gz)
Linux - .rpm
- [telegraf-nightly.arm64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.arm64.rpm)
- [telegraf-nightly.armel.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armel.rpm)
- [telegraf-nightly.armv6hl.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armv6hl.rpm)
- [telegraf-nightly.i386.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.i386.rpm)
- [telegraf-nightly.ppc64le.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.ppc64le.rpm)
- [telegraf-nightly.s390x.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.s390x.rpm)
- [telegraf-nightly.x86_64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.x86_64.rpm)
Linux - .deb
- [telegraf_nightly_amd64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_amd64.deb)
- [telegraf_nightly_arm64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_arm64.deb)
- [telegraf_nightly_armel.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armel.deb)
- [telegraf_nightly_armhf.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armhf.deb)
- [telegraf_nightly_i386.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_i386.deb)
- [telegraf_nightly_ppc64el.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_ppc64el.deb)
- [telegraf_nightly_s390x.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_s390x.deb)
Linux - .tar.gz
- [telegraf-nightly_linux_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_amd64.tar.gz)
- [telegraf-nightly_linux_arm64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_arm64.tar.gz)
- [telegraf-nightly_linux_armel.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armel.tar.gz)
- [telegraf-nightly_linux_armhf.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armhf.tar.gz)
- [telegraf-nightly_linux_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_i386.tar.gz)
- [telegraf-nightly_linux_s390x.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_s390x.tar.gz)
- [telegraf-static-nightly_linux_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-static-nightly_linux_amd64.tar.gz)
OSX - .tar.gz
- [telegraf-nightly_darwin_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_darwin_amd64.tar.gz)
Windows - .zip
- [telegraf-nightly_windows_i386.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_i386.zip)
- [telegraf-nightly_windows_amd64.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_amd64.zip)
## How to use it:
See usage with:
```
telegraf --help
```
#### Generate a telegraf config file:
```
telegraf config > telegraf.conf
```
#### Generate config with only cpu input & influxdb output plugins defined:
```
telegraf --section-filter agent:inputs:outputs --input-filter cpu --output-filter influxdb config
```
#### Run a single telegraf collection, outputting metrics to stdout:
```
telegraf --config telegraf.conf --test
```
#### Run telegraf with all plugins defined in config file:
```
telegraf --config telegraf.conf
```
#### Run telegraf, enabling the cpu & memory input, and influxdb output plugins:
```
telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb
```
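A minimal `telegraf.conf` for the last example might look like the following sketch (point `urls` at your own InfluxDB server):

```
[agent]
  interval = "10s"                    # collect metrics every 10 seconds

[[inputs.cpu]]                        # gather CPU metrics

[[inputs.mem]]                        # gather memory metrics

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"
```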
## Documentation
[Latest Release Documentation][release docs].
For documentation on the latest development code see the [documentation index][devel docs].
[release docs]: https://docs.influxdata.com/telegraf
[devel docs]: docs
## Input Plugins
* [activemq](./plugins/inputs/activemq)
* [aerospike](./plugins/inputs/aerospike)
* [amqp_consumer](./plugins/inputs/amqp_consumer) (rabbitmq)
* [apache](./plugins/inputs/apache)
* [apcupsd](./plugins/inputs/apcupsd)
* [aurora](./plugins/inputs/aurora)
* [aws cloudwatch](./plugins/inputs/cloudwatch) (Amazon Cloudwatch)
* [azure_storage_queue](./plugins/inputs/azure_storage_queue)
* [bcache](./plugins/inputs/bcache)
* [beanstalkd](./plugins/inputs/beanstalkd)
* [bind](./plugins/inputs/bind)
* [bond](./plugins/inputs/bond)
* [burrow](./plugins/inputs/burrow)
* [cassandra](./plugins/inputs/cassandra) (deprecated, use [jolokia2](./plugins/inputs/jolokia2))
* [ceph](./plugins/inputs/ceph)
* [cgroup](./plugins/inputs/cgroup)
* [chrony](./plugins/inputs/chrony)
* [cisco_telemetry_gnmi](./plugins/inputs/cisco_telemetry_gnmi) (deprecated, renamed to [gnmi](/plugins/inputs/gnmi))
* [cisco_telemetry_mdt](./plugins/inputs/cisco_telemetry_mdt)
* [clickhouse](./plugins/inputs/clickhouse)
* [cloud_pubsub](./plugins/inputs/cloud_pubsub) Google Cloud Pub/Sub
* [cloud_pubsub_push](./plugins/inputs/cloud_pubsub_push) Google Cloud Pub/Sub push endpoint
* [conntrack](./plugins/inputs/conntrack)
* [consul](./plugins/inputs/consul)
* [couchbase](./plugins/inputs/couchbase)
* [couchdb](./plugins/inputs/couchdb)
* [cpu](./plugins/inputs/cpu)
* [DC/OS](./plugins/inputs/dcos)
* [diskio](./plugins/inputs/diskio)
* [disk](./plugins/inputs/disk)
* [disque](./plugins/inputs/disque)
* [dmcache](./plugins/inputs/dmcache)
* [dns query time](./plugins/inputs/dns_query)
* [docker](./plugins/inputs/docker)
* [docker_log](./plugins/inputs/docker_log)
* [dovecot](./plugins/inputs/dovecot)
* [dpdk](./plugins/inputs/dpdk)
* [aws ecs](./plugins/inputs/ecs) (Amazon Elastic Container Service, Fargate)
* [elasticsearch](./plugins/inputs/elasticsearch)
* [ethtool](./plugins/inputs/ethtool)
* [eventhub_consumer](./plugins/inputs/eventhub_consumer) (Azure Event Hubs \& Azure IoT Hub)
* [exec](./plugins/inputs/exec) (generic executable plugin, support JSON, influx, graphite and nagios)
* [execd](./plugins/inputs/execd) (generic executable "daemon" processes)
* [fail2ban](./plugins/inputs/fail2ban)
* [fibaro](./plugins/inputs/fibaro)
* [file](./plugins/inputs/file)
* [filestat](./plugins/inputs/filestat)
* [filecount](./plugins/inputs/filecount)
* [fireboard](/plugins/inputs/fireboard)
* [fluentd](./plugins/inputs/fluentd)
* [github](./plugins/inputs/github)
* [gnmi](./plugins/inputs/gnmi)
* [graylog](./plugins/inputs/graylog)
* [haproxy](./plugins/inputs/haproxy)
* [hddtemp](./plugins/inputs/hddtemp)
* [httpjson](./plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
* [http_listener](./plugins/inputs/influxdb_listener) (deprecated, renamed to [influxdb_listener](/plugins/inputs/influxdb_listener))
* [http_listener_v2](./plugins/inputs/http_listener_v2)
* [http](./plugins/inputs/http) (generic HTTP plugin, supports using input data formats)
* [http_response](./plugins/inputs/http_response)
* [icinga2](./plugins/inputs/icinga2)
* [infiniband](./plugins/inputs/infiniband)
* [influxdb](./plugins/inputs/influxdb)
* [influxdb_listener](./plugins/inputs/influxdb_listener)
* [influxdb_v2_listener](./plugins/inputs/influxdb_v2_listener)
* [intel_powerstat](plugins/inputs/intel_powerstat)
* [intel_rdt](./plugins/inputs/intel_rdt)
* [internal](./plugins/inputs/internal)
* [interrupts](./plugins/inputs/interrupts)
* [ipmi_sensor](./plugins/inputs/ipmi_sensor)
* [ipset](./plugins/inputs/ipset)
* [iptables](./plugins/inputs/iptables)
* [ipvs](./plugins/inputs/ipvs)
* [jenkins](./plugins/inputs/jenkins)
* [jolokia2](./plugins/inputs/jolokia2) (java, cassandra, kafka)
* [jolokia](./plugins/inputs/jolokia) (deprecated, use [jolokia2](./plugins/inputs/jolokia2))
* [jti_openconfig_telemetry](./plugins/inputs/jti_openconfig_telemetry)
* [kafka_consumer](./plugins/inputs/kafka_consumer)
* [kapacitor](./plugins/inputs/kapacitor)
* [aws kinesis](./plugins/inputs/kinesis_consumer) (Amazon Kinesis)
* [kernel](./plugins/inputs/kernel)
* [kernel_vmstat](./plugins/inputs/kernel_vmstat)
* [kibana](./plugins/inputs/kibana)
* [knx_listener](./plugins/inputs/knx_listener)
* [kubernetes](./plugins/inputs/kubernetes)
* [kube_inventory](./plugins/inputs/kube_inventory)
* [lanz](./plugins/inputs/lanz)
* [leofs](./plugins/inputs/leofs)
* [linux_sysctl_fs](./plugins/inputs/linux_sysctl_fs)
* [logparser](./plugins/inputs/logparser) (deprecated, use [tail](/plugins/inputs/tail))
* [logstash](./plugins/inputs/logstash)
* [lustre2](./plugins/inputs/lustre2)
* [mailchimp](./plugins/inputs/mailchimp)
* [marklogic](./plugins/inputs/marklogic)
* [mcrouter](./plugins/inputs/mcrouter)
* [memcached](./plugins/inputs/memcached)
* [mem](./plugins/inputs/mem)
* [mesos](./plugins/inputs/mesos)
* [minecraft](./plugins/inputs/minecraft)
* [modbus](./plugins/inputs/modbus)
* [mongodb](./plugins/inputs/mongodb)
* [monit](./plugins/inputs/monit)
* [mqtt_consumer](./plugins/inputs/mqtt_consumer)
* [multifile](./plugins/inputs/multifile)
* [mysql](./plugins/inputs/mysql)
* [nats_consumer](./plugins/inputs/nats_consumer)
* [nats](./plugins/inputs/nats)
* [neptune_apex](./plugins/inputs/neptune_apex)
* [net](./plugins/inputs/net)
* [net_response](./plugins/inputs/net_response)
* [netstat](./plugins/inputs/net)
* [nfsclient](./plugins/inputs/nfsclient)
* [nginx](./plugins/inputs/nginx)
* [nginx_plus_api](./plugins/inputs/nginx_plus_api)
* [nginx_plus](./plugins/inputs/nginx_plus)
* [nginx_sts](./plugins/inputs/nginx_sts)
* [nginx_upstream_check](./plugins/inputs/nginx_upstream_check)
* [nginx_vts](./plugins/inputs/nginx_vts)
* [nsd](./plugins/inputs/nsd)
* [nsq_consumer](./plugins/inputs/nsq_consumer)
* [nsq](./plugins/inputs/nsq)
* [nstat](./plugins/inputs/nstat)
* [ntpq](./plugins/inputs/ntpq)
* [nvidia_smi](./plugins/inputs/nvidia_smi)
* [opcua](./plugins/inputs/opcua)
* [openldap](./plugins/inputs/openldap)
* [openntpd](./plugins/inputs/openntpd)
* [opensmtpd](./plugins/inputs/opensmtpd)
* [opentelemetry](./plugins/inputs/opentelemetry)
* [openweathermap](./plugins/inputs/openweathermap)
* [pf](./plugins/inputs/pf)
* [pgbouncer](./plugins/inputs/pgbouncer)
* [phpfpm](./plugins/inputs/phpfpm)
* [phusion passenger](./plugins/inputs/passenger)
* [ping](./plugins/inputs/ping)
* [postfix](./plugins/inputs/postfix)
* [postgresql_extensible](./plugins/inputs/postgresql_extensible)
* [postgresql](./plugins/inputs/postgresql)
* [powerdns](./plugins/inputs/powerdns)
* [powerdns_recursor](./plugins/inputs/powerdns_recursor)
* [processes](./plugins/inputs/processes)
* [procstat](./plugins/inputs/procstat)
* [prometheus](./plugins/inputs/prometheus) (can be used for [Caddy server](./plugins/inputs/prometheus/README.md#usage-for-caddy-http-server))
* [proxmox](./plugins/inputs/proxmox)
* [puppetagent](./plugins/inputs/puppetagent)
* [rabbitmq](./plugins/inputs/rabbitmq)
* [raindrops](./plugins/inputs/raindrops)
* [ras](./plugins/inputs/ras)
* [ravendb](./plugins/inputs/ravendb)
* [redfish](./plugins/inputs/redfish)
* [redis](./plugins/inputs/redis)
* [rethinkdb](./plugins/inputs/rethinkdb)
* [riak](./plugins/inputs/riak)
* [salesforce](./plugins/inputs/salesforce)
* [sensors](./plugins/inputs/sensors)
* [sflow](./plugins/inputs/sflow)
* [smart](./plugins/inputs/smart)
* [snmp_legacy](./plugins/inputs/snmp_legacy)
* [snmp](./plugins/inputs/snmp)
* [snmp_trap](./plugins/inputs/snmp_trap)
* [socket_listener](./plugins/inputs/socket_listener)
* [solr](./plugins/inputs/solr)
* [sql](./plugins/inputs/sql) (generic SQL query plugin)
* [sql server](./plugins/inputs/sqlserver) (microsoft)
* [stackdriver](./plugins/inputs/stackdriver) (Google Cloud Monitoring)
* [sql](./plugins/outputs/sql) (SQL generic output)
* [statsd](./plugins/inputs/statsd)
* [suricata](./plugins/inputs/suricata)
* [swap](./plugins/inputs/swap)
* [synproxy](./plugins/inputs/synproxy)
* [syslog](./plugins/inputs/syslog)
* [sysstat](./plugins/inputs/sysstat)
* [systemd_units](./plugins/inputs/systemd_units)
* [system](./plugins/inputs/system)
* [tail](./plugins/inputs/tail)
* [temp](./plugins/inputs/temp)
* [tcp_listener](./plugins/inputs/socket_listener)
* [teamspeak](./plugins/inputs/teamspeak)
* [tengine](./plugins/inputs/tengine)
* [tomcat](./plugins/inputs/tomcat)
* [twemproxy](./plugins/inputs/twemproxy)
* [udp_listener](./plugins/inputs/socket_listener)
* [unbound](./plugins/inputs/unbound)
* [uwsgi](./plugins/inputs/uwsgi)
* [varnish](./plugins/inputs/varnish)
* [vsphere](./plugins/inputs/vsphere) VMware vSphere
* [webhooks](./plugins/inputs/webhooks)
* [filestack](./plugins/inputs/webhooks/filestack)
* [github](./plugins/inputs/webhooks/github)
* [mandrill](./plugins/inputs/webhooks/mandrill)
* [papertrail](./plugins/inputs/webhooks/papertrail)
* [particle](./plugins/inputs/webhooks/particle)
* [rollbar](./plugins/inputs/webhooks/rollbar)
* [win_eventlog](./plugins/inputs/win_eventlog)
* [win_perf_counters](./plugins/inputs/win_perf_counters) (windows performance counters)
* [win_services](./plugins/inputs/win_services)
* [wireguard](./plugins/inputs/wireguard)
* [wireless](./plugins/inputs/wireless)
* [x509_cert](./plugins/inputs/x509_cert)
* [zfs](./plugins/inputs/zfs)
* [zipkin](./plugins/inputs/zipkin)
* [zookeeper](./plugins/inputs/zookeeper)
## Parsers
- [InfluxDB Line Protocol](/plugins/parsers/influx)
- [Collectd](/plugins/parsers/collectd)
- [CSV](/plugins/parsers/csv)
- [Dropwizard](/plugins/parsers/dropwizard)
- [FormUrlencoded](/plugins/parser/form_urlencoded)
- [Graphite](/plugins/parsers/graphite)
- [Grok](/plugins/parsers/grok)
- [JSON](/plugins/parsers/json)
- [Logfmt](/plugins/parsers/logfmt)
- [Nagios](/plugins/parsers/nagios)
- [Value](/plugins/parsers/value), ie: 45 or "booyah"
- [Wavefront](/plugins/parsers/wavefront)
## Serializers
- [InfluxDB Line Protocol](/plugins/serializers/influx)
- [Carbon2](/plugins/serializers/carbon2)
- [Graphite](/plugins/serializers/graphite)
- [JSON](/plugins/serializers/json)
- [MessagePack](/plugins/serializers/msgpack)
- [ServiceNow](/plugins/serializers/nowmetric)
- [SplunkMetric](/plugins/serializers/splunkmetric)
- [Wavefront](/plugins/serializers/wavefront)
## Processor Plugins
* [clone](/plugins/processors/clone)
* [converter](/plugins/processors/converter)
* [date](/plugins/processors/date)
* [dedup](/plugins/processors/dedup)
* [defaults](/plugins/processors/defaults)
* [enum](/plugins/processors/enum)
* [execd](/plugins/processors/execd)
* [ifname](/plugins/processors/ifname)
* [filepath](/plugins/processors/filepath)
* [override](/plugins/processors/override)
* [parser](/plugins/processors/parser)
* [pivot](/plugins/processors/pivot)
* [port_name](/plugins/processors/port_name)
* [printer](/plugins/processors/printer)
* [regex](/plugins/processors/regex)
* [rename](/plugins/processors/rename)
* [reverse_dns](/plugins/processors/reverse_dns)
* [s2geo](/plugins/processors/s2geo)
* [starlark](/plugins/processors/starlark)
* [strings](/plugins/processors/strings)
* [tag_limit](/plugins/processors/tag_limit)
* [template](/plugins/processors/template)
* [topk](/plugins/processors/topk)
* [unpivot](/plugins/processors/unpivot)
## Aggregator Plugins
* [basicstats](./plugins/aggregators/basicstats)
* [final](./plugins/aggregators/final)
* [histogram](./plugins/aggregators/histogram)
* [merge](./plugins/aggregators/merge)
* [minmax](./plugins/aggregators/minmax)
* [valuecounter](./plugins/aggregators/valuecounter)
## Output Plugins
* [influxdb](./plugins/outputs/influxdb) (InfluxDB 1.x)
* [influxdb_v2](./plugins/outputs/influxdb_v2) ([InfluxDB 2.x](https://github.com/influxdata/influxdb))
* [amon](./plugins/outputs/amon)
* [amqp](./plugins/outputs/amqp) (rabbitmq)
* [application_insights](./plugins/outputs/application_insights)
* [aws kinesis](./plugins/outputs/kinesis)
* [aws cloudwatch](./plugins/outputs/cloudwatch)
* [azure_monitor](./plugins/outputs/azure_monitor)
* [bigquery](./plugins/outputs/bigquery)
* [cloud_pubsub](./plugins/outputs/cloud_pubsub) Google Cloud Pub/Sub
* [cratedb](./plugins/outputs/cratedb)
* [datadog](./plugins/outputs/datadog)
* [discard](./plugins/outputs/discard)
* [dynatrace](./plugins/outputs/dynatrace)
* [elasticsearch](./plugins/outputs/elasticsearch)
* [exec](./plugins/outputs/exec)
* [execd](./plugins/outputs/execd)
* [file](./plugins/outputs/file)
* [graphite](./plugins/outputs/graphite)
* [graylog](./plugins/outputs/graylog)
* [health](./plugins/outputs/health)
* [http](./plugins/outputs/http)
* [instrumental](./plugins/outputs/instrumental)
* [kafka](./plugins/outputs/kafka)
* [librato](./plugins/outputs/librato)
* [logz.io](./plugins/outputs/logzio)
* [mqtt](./plugins/outputs/mqtt)
* [nats](./plugins/outputs/nats)
* [newrelic](./plugins/outputs/newrelic)
* [nsq](./plugins/outputs/nsq)
* [opentelemetry](./plugins/outputs/opentelemetry)
* [opentsdb](./plugins/outputs/opentsdb)
* [prometheus](./plugins/outputs/prometheus_client)
* [riemann](./plugins/outputs/riemann)
* [riemann_legacy](./plugins/outputs/riemann_legacy)
* [sensu](./plugins/outputs/sensu)
* [signalfx](./plugins/outputs/signalfx)
* [socket_writer](./plugins/outputs/socket_writer)
* [stackdriver](./plugins/outputs/stackdriver) (Google Cloud Monitoring)
* [sumologic](./plugins/outputs/sumologic)
* [syslog](./plugins/outputs/syslog)
* [tcp](./plugins/outputs/socket_writer)
* [udp](./plugins/outputs/socket_writer)
* [warp10](./plugins/outputs/warp10)
* [wavefront](./plugins/outputs/wavefront)
* [websocket](./plugins/outputs/websocket)
* [yandex_cloud_monitoring](./plugins/outputs/yandex_cloud_monitoring)
import { useState } from 'react'
import { AuthForm, LogoutButton, onLogout } from '../'
import { AuthButton as GoogleAuthButton } from '@startupjs/auth-google'
import { AuthButton as LinkedinAuthButton } from '@startupjs/auth-linkedin/client'
import { LoginForm, RegisterForm, RecoverForm } from '@startupjs/auth-local'
import { Button } from '@startupjs/ui'
# Authorization
## Requirements
```
@react-native-cookies/cookies: >= 6.0.6
@startupjs/ui: >= 0.33.0
startupjs: >= 0.33.0
```
## Description

This module adds authorization to a startupjs app: ready-made microfrontend pages (sign in, sign up, password recovery), a combined `AuthForm` component, logout helpers, and server-side hooks around local and social (OAuth) strategies.
## Init on server
```js
import { initAuth } from '@startupjs/auth/server'
```
In the body of the **startupjsServer** function, the module is initialized, and strategies are added to **strategies**, each with its own set of options:
```js
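// Note: the strategy classes used below come from their respective packages.
// The exact import paths here are an assumption and may differ per version:
// import { Strategy as LocalStrategy } from '@startupjs/auth-local/server'
// import { Strategy as FacebookStrategy } from '@startupjs/auth-facebook/server'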
initAuth(ee, {
strategies: [
new LocalStrategy({}),
new FacebookStrategy({
clientId: conf.get('FACEBOOK_CLIENT_ID'),
clientSecret: conf.get('FACEBOOK_CLIENT_SECRET')
})
]
})
```
## Micro frontend
[Test example](/auth/sign-in)
(apple and azure keys are hidden from public access and need to be added manually)
These are ready-made pages with forms that can be connected to the site
To use them, in the file Root/index.js (or wherever startupjs/app is used), initialize the microfrontend and pass it into the App component.
First, you need the initializer function, which accepts the necessary options:
```js
import { initAuthApp } from '@startupjs/auth'
```
Its main options are **socialButtons** and **localForms**, which collect the components needed for the combined form. Since it isn't known in advance which strategies should be connected, you have to connect them yourself. There are buttons for almost every strategy except the local one. To see which components exist for **auth-local**, refer to that strategy's own documentation (the same applies to all the other strategies).
Import the necessary components:
```js
import { AuthButton as AzureadAuthButton } from '@startupjs/auth-azuread'
import { AuthButton as LinkedinAuthButton } from '@startupjs/auth-linkedin'
import { LoginForm, RegisterForm, RecoverForm } from '@startupjs/auth-local'
```
Everything is based on the local strategy. It has 3 standard forms: login, registration, and password recovery. The logic for switching between these forms lives inside the module.

The microfrontend has 3 mandatory routes for the local strategy: 'sign-in', 'sign-up' and 'recover'. So when using the local form, you always need to provide form components under these keys.
```jsx
const auth = initAuthApp({
localForms: {
'sign-in': <LoginForm />,
'sign-up': <RegisterForm />,
'recover': <RecoverForm />
},
socialButtons: [
<AzureadAuthButton />,
<LinkedinAuthButton />
]
})
```
When the microfrontend is generated, you just need to pass it to the App
```pug
App(apps={ auth, main })
```
And also add routes for the server
```js
import { getAuthRoutes } from '@startupjs/auth/isomorphic'
//...
appRoutes: [
...getAuthRoutes()
]
//...
```
### Microfrontend customization
You can change the `Layout`. For example, the site may have its own header, logo, background, etc.

To do this, you can pass a custom Layout in the microfrontend options:
```jsx
const auth = initAuthApp({
Layout,
localForms: {
'sign-in': <LoginForm />,
'sign-up': <RegisterForm />,
'recover': <RecoverForm />
}
})
```
Since JSX is passed in **localForms** and **socialButtons**, all components can be modified as usual:
```jsx
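// Assumption: Joi used in validateSchema below is the 'joi' validation
// library, imported elsewhere in your app (e.g. import Joi from 'joi')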
const auth = initAuthApp({
Layout,
localForms: {
'sign-in': <LoginForm
properties={{
age: {
input: 'number',
label: 'Age',
placeholder: 'Enter your age'
}
}}
validateSchema={{
age: Joi.string().required().messages({
'any.required': 'Fill in the field',
'string.empty': 'Fill in the field'
})
}}
/>,
'sign-up': <RegisterForm />,
'recover': <RecoverForm />
},
socialButtons: [
<GoogleAuthButton
label='Sign in with Google'
/>,
<FacebookAuthButton
label='Sign in with Facebook'
/>
]
})
```
More information about customizing these components can be found on the pages of the corresponding strategies.
If you need to change the standard headers and make your own markup, you can use the **renderForm** function:
```jsx
const auth = initAuthApp({
Layout,
localForms: {
'sign-in': <LoginForm />,
'sign-up': <RegisterForm />,
'recover': <RecoverForm />
},
socialButtons: [
<GoogleAuthButton />,
<FacebookAuthButton />
],
renderForm: function ({
slide,
socialButtons,
localActiveForm,
onChangeSlide
}) {
return pug`
Div
H5= getCaptionForm(slide)
= socialButtons
Div(style={ marginTop: 16 })
= localActiveForm
`
}
})
```
It receives the declared forms and the current slide.
## Common component
A combined form with the required kinds of authorization.

Everything is the same as for the microfrontend, but there are no routes, and you need to configure the slide switching yourself.
```js
import { AuthForm } from '@startupjs/auth'
```
```jsx example
const [slide, setSlide] = useState('sign-in')
return (
<AuthForm
slide={slide}
localForms={{
'sign-in': <LoginForm />,
'sign-up': <RegisterForm />,
'recover': <RecoverForm />
}}
socialButtons={[
<GoogleAuthButton label='Sign in with Google' />,
<LinkedinAuthButton />
]}
onChangeSlide={slide => setSlide(slide)}
/>
)
```
## Helpers and server hooks
### Logout button
```jsx
import { LogoutButton } from '@startupjs/auth'
...
return <LogoutButton />
```
### Logout helper
```jsx
import { onLogout } from '@startupjs/auth'
...
return <Button onPress={onLogout}>Logout</Button>
```
### onBeforeLoginHook
Helper-middleware, called before authorization
```jsx
initAuth(ee, {
// ...
onBeforeLoginHook: ({ userId }, req, res, next) => {
console.log('onBeforeLoginHook')
next()
},
// ...
}
```
### onAfterUserCreationHook
Helper-middleware, called after the user is created
```jsx
initAuth(ee, {
// ...
onAfterUserCreationHook: ({ userId }, req) => {
console.log('onAfterUserCreationHook')
},
// ...
})
```
### onAfterLoginHook
Helper-middleware, called after authorization
```jsx
initAuth(ee, {
// ...
onAfterLoginHook: ({ userId }, req) => {
console.log('onAfterLoginHook')
},
// ...
})
```
### onBeforeLogoutHook
Helper-middleware, called before exiting
```jsx
initAuth(ee, {
// ...
onBeforeLogoutHook: (req, res, next) => {
console.log('onBeforeLogoutHook')
next()
},
// ...
})
```
## Redirect after authorization
To set up a redirect, you need to pass the redirectUrl prop to initAuthApp, to AuthForm, or to an individual button or form, for example:
`<GoogleAuthButton redirectUrl='/profile/google' />`
`<LoginForm redirectUrl='/profile/local' />`
The redirect works through cookies: if you put a value into the `authRedirectUrl` cookie before authorization, a redirect to the value from the cookie will happen after it:
```js
import moment from 'moment'
import { CookieManager } from '@startupjs/auth'

// baseUrl and redirectUrl are assumed to come from your app context
CookieManager.set({
baseUrl,
name: 'authRedirectUrl',
value: redirectUrl,
expires: moment().add(5, 'minutes')
})
```
You can also override the redirect on the server (for example, in the onBeforeLoginHook hook):
```js
onBeforeLoginHook: ({ userId }, req, res, next) => {
// req.cookies.authRedirectUrl = '/custom-redirect-path'
next()
}
```
| 25.331104 | 422 | 0.672564 | eng_Latn | 0.944169 |
763fc3877f08040731875f254c98be82715bbc07 | 3,380 | md | Markdown | wxWidgets/docs/osx/install.md | VonaInc/lokl | 83fbaa9c73d3112490edd042da812ceeb3cc9e53 | [
"MIT"
] | 15 | 2019-05-19T23:10:41.000Z | 2021-08-06T14:02:09.000Z | wxWidgets/docs/osx/install.md | VonaInc/lokl | 83fbaa9c73d3112490edd042da812ceeb3cc9e53 | [
"MIT"
] | 3 | 2019-12-05T08:08:50.000Z | 2020-05-14T20:31:39.000Z | wxWidgets/docs/osx/install.md | VonaInc/lokl | 83fbaa9c73d3112490edd042da812ceeb3cc9e53 | [
"MIT"
] | 4 | 2020-02-27T00:57:21.000Z | 2021-08-06T14:02:11.000Z | wxWidgets for OS X installation {#plat_osx_install}
-----------------------------------
[TOC]
wxWidgets can be compiled using Apple's Cocoa toolkit.
Most OS X developers should start by downloading and installing Xcode
from the App Store. It is a free IDE from Apple that provides
all of the tools you need for working with wxWidgets.
After Xcode is installed, download wxWidgets-{version}.tar.bz2 and then
double-click on it to unpack it to create a wxWidgets directory.
Next, use Terminal (under Applications, Utilities, Terminal) to access a command
prompt. Use cd to change into your wxWidgets directory and execute
the following sets of commands from there.
mkdir build-cocoa-debug
cd build-cocoa-debug
../configure --enable-debug
make
Build the samples and demos
    cd samples; make; cd ..
    cd demos; make; cd ..
After the compilation completes, use Finder to run the samples and demos
* Go to build-cocoa-debug/samples to experiment with the Cocoa samples.
* Go to build-cocoa-debug/demos to experiment with the Cocoa demos.
* Double-click on the executables which have an icon showing three small squares.
* The source code for the samples is in wxWidgets/samples
* The source code for the demos is in wxWidgets/demos
More information about building on OS X is available in the wxWiki.
Here are two useful links:
* https://wiki.wxwidgets.org/Guides_%26_Tutorials
* https://wiki.wxwidgets.org/Development:_wxMac
Advanced topics {#osx_advanced}
===============
Installing library {#osx_install}
------------------
It is rarely desirable to install non-Apple software into system directories,
so the recommended way of using wxWidgets under macOS is to skip the `make
install` step and simply use the full path to `wx-config` under the build
directory when building applications using the library.
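For example, building an application against an uninstalled build tree could look like this (the paths are illustrative):

    g++ -o myapp myapp.cpp `~/wxWidgets/build-cocoa-debug/wx-config --cxxflags --libs`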
If you want to install the library into the system directories you'll need
to do this as root. The accepted way of running commands as root is to
use the built-in sudo mechanism. First of all, you must be using an
account marked as a "Computer Administrator". Then
sudo make install
type \<YOUR OWN PASSWORD\>
Distributing applications using wxWidgets
-----------------------------------------
If you build wxWidgets as static libraries, i.e. pass `--disable-shared` option
to configure, you don't need to do anything special to distribute them, as all
the required code is linked into your application itself. When using shared
libraries (which is the default), you need to copy the libraries into your
application bundle and change their paths using `install_name_tool` so that
they are loaded from their new locations.
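A rough sketch of that relocation for a single library follows; the library name and version are illustrative, and the steps have to be repeated for each wx dylib your application links against:

    otool -L MyApp.app/Contents/MacOS/MyApp        # inspect the current install names
    cp /usr/local/lib/libwx_osx_cocoau-3.1.dylib MyApp.app/Contents/Frameworks/
    install_name_tool -change /usr/local/lib/libwx_osx_cocoau-3.1.dylib \
        @executable_path/../Frameworks/libwx_osx_cocoau-3.1.dylib \
        MyApp.app/Contents/MacOS/MyApp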
Apple Developer Tools: Xcode {#osx_xcode}
----------------------------
You can use the project in build/osx/wxcocoa.xcodeproj to build the Cocoa
version of wxWidgets (wxOSX/Cocoa). There are also sample
projects supplied with the minimal sample.
Notice that the command line build above builds not just the library itself but
also the wxrc tool, which doesn't have its own Xcode project. If you need this tool,
the simplest possibility is to build it from the command line after installing
the libraries, using commands like this:
$ cd utils/wxrc
$ g++ -o wxrc wxrc.cpp `wx-config --cxxflags --libs base,xml`
| 39.764706 | 81 | 0.734024 | eng_Latn | 0.997789 |
764043352edbe291ecf2d0aa3b82e8769b6c4ab8 | 25,896 | md | Markdown | pages/mesosphere/dcos/services/edge-lb/1.5/concepts/cloud-connector/index.md | rjurney/dcos-docs-site | 1486f18ce70efbccef496d533856a38797810494 | [
"Apache-2.0"
] | 57 | 2017-12-21T00:51:03.000Z | 2021-12-28T19:46:12.000Z | pages/mesosphere/dcos/services/edge-lb/1.5/concepts/cloud-connector/index.md | rjurney/dcos-docs-site | 1486f18ce70efbccef496d533856a38797810494 | [
"Apache-2.0"
] | 1,303 | 2017-12-14T22:43:03.000Z | 2022-03-31T21:32:50.000Z | pages/mesosphere/dcos/services/edge-lb/1.5/concepts/cloud-connector/index.md | rjurney/dcos-docs-site | 1486f18ce70efbccef496d533856a38797810494 | [
"Apache-2.0"
] | 140 | 2017-12-22T11:51:30.000Z | 2022-03-07T09:46:58.000Z | ---
layout: layout.pug
navigationTitle: Integrating with cloud providers
title: Integrating with cloud providers
menuWeight: 16
excerpt: Describes how you can integrate Edge-LB with cloud provider load balancers
enterprise: true
---
When you define the configuration settings for an Edge-LB pool, you have the option to support automatic provisioning and lifecycle management of cloud provider load balancers.
There are several benefits to having a public cloud load balancer - such as the AWS® Network Load Balancer® (NLB®) - deployed in front of an Edge-LB pool and managed by the Edge-LB server.
For example, using the public cloud load balancer in combination with Edge-LB:
- Ensures higher availability of the Edge-LB pool by providing a second tier of failover and fault-tolerance.
- Enables better load distribution across multiple instances of an Edge-LB pool.
- Provides automated scale-up and scale-down adjustments for the Edge-LB pool and its load balancer instances.
- Enables you to configure load balancing across multiple availability zones.
The following diagram provides a simplified view of the architecture with a cloud provider load balancer deployed between the Edge-LB API server and an Edge-LB pool.

You should note that, currently, Edge-LB supports only AWS Network Load Balancers (NLB) for integrated cloud provider load balancing. For information about deploying and configuring AWS Network Load Balancers (NLB), see the AWS documentation for [Network Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html).
# Configuring cloud provider settings
Cloud provider load balancers such as the AWS Network Load Balancer are configured with a top-level `cloudProvider` field in the Edge-LB pool configuration file.
The following code excerpt illustrates how you can define the properties required for the `cloudProvider` field in the Edge-LB pool configuration file:
```json
"cloudProvider": {
"aws": {
"elbs": [
{
"name": "echo",
"type": "NLB",
"internal": false,
"listeners": [
{
"port": 80,
"linkFrontend": "echo"
}
],
"tags": [
{
"key": "Protocol",
"value": "HTTP"
}
]
}
]
}
}
```
As illustrated in this example, the `cloudProvider` field includes a subfield that identifies a specific cloud provider. In this case, the cloud provider subfield is `aws` to specify integration with an AWS Network Load Balancer.
The `aws` cloud provider field also has a subfield of `elbs`. The `elbs` field contains the configuration settings for a particular load balancer. The settings in this example define an AWS NLB configuration using the following fields and values:
- `name` - Specifies the name that Edge-LB uses to generate names of the load balancers and their resources.
- `type` - Specifies a load balancer type. Currently, only NLB is supported.
- `internal` - Indicates whether the corresponding load balancer is for internal requests (`true`) or not (`false`).
If the `internal` setting is `true`, the load balancer routes requests from internal clients running within the same cluster.
If a load balancer is internet-facing with the `internal` field set to `false`, the load balancer can route external requests that are received from clients over the internet.
- `listeners` - Defines the following configuration details for each listener that receives inbound requests for the Edge-LB pool:
- `port` specifies a port number on which the respective load balancer is to listen for incoming connections from clients.
- `linkFrontend` specifies the name of the individual HAProxy frontend of the pool to which inbound requests are routed.
- `tags` - Specifies an array of user-defined tag name and value pairs. The tags you specify using this field are applied to load balancers and target groups in addition to the [internal tags](#internal-tagging) that are automatically defined by the Edge-LB API server.
## Specifying subnets
You can define the subnets that are associated with an AWS Network Load Balancer either manually or automatically.
To specify subnets manually, you can add a `subnets` field to the cloud provider configuration details in the Edge-LB pool configuration file. For example, the following code snippet illustrates how to specify `subnet-1234567890abcdefgi` manually for the cloud provider load balancer:
```json
{
"name": "echo",
"type": "NLB",
"subnets": [
"subnet-1234567890abcdefgi"
],
"listeners": [
{
"port": 80,
"linkFrontend": "echo"
}
]
}
```
If you don't specify the subnets you want attached to the AWS Network Load Balancer manually, then subnets are discovered automatically based on the **metadata** provided by Edge-LB pool instances. For more information about Edge-LB pool metadata, see [Viewing pool metadata](#viewing-pool-metadata).
## Specifying Elastic IP addresses
You can associate Elastic IP addresses with any AWS Network Load Balancer that Edge-LB creates. For example, the following code snippet illustrates how to specify two Elastic IP addresses for the cloud provider load balancer:
```json
{
"name": "echo",
"type": "NLB",
"elasticIPs": [
"1.1.1.1",
"eipalloc-12345678"
],
"listeners": [
{
"port": 80,
"linkFrontend": "echo"
}
]
}
```
As this example illustrates, you can specify the Elastic network addresses using:
- an IPv4 address, like the value `1.1.1.1` in the example.
- an allocation ID, like the value `eipalloc-12345678` in the example.
If you use an IPv4 address, the address is resolved into the corresponding allocation ID before it is associated with the NLB.
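For reference, you can look up the allocation ID behind an Elastic IP address yourself with the same API call Edge-LB relies on (`ec2:DescribeAddresses` is part of the required permissions listed later on this page):

```bash
aws ec2 describe-addresses --public-ips 1.1.1.1
```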
## Enabling Transport Layer Security (TLS)
By default, AWS Network Load Balancer listeners use the TCP protocol. If you want to enable secure encrypted communication using Transport Layer Security and Secure Socket Layer (SSL) certificates, you should do the following in the Edge-LB pool configuration file:
- Set the `protocol` field to TLS.
- Set the `policy` field to specify a [secure socket layer (SSL) policy](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#describe-ssl-policies).
- Set the `certificates` field to specify one or more [SSL certificates](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#tls-listener-certificates).
For example, the following code snippet illustrates how to specify configuration properties for secure communication:
```json
"listeners": [
{
"port": 80,
"protocol": "TLS",
"tls": {
"policy": "ELBSecurityPolicy-2016-08",
"certificates": [
"arn:aws:acm:us-west-2:123456789012:certificate/12345678-efgi-abcd-efgi-12345678abcd"
]
},
"linkFrontend": "echo"
}
]
```
As this example illustrates, you can specify multiple certificates using the certificate [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
## Enabling access logging
You can log requests made to the TLS listeners by setting the `accessLogS3Location` field to `<bucket-name>/prefix` or just `<bucket-name>`.
For example, the following code snippet illustrates how to enable logging for inbound requests that use the secure TLS protocol:
```json
{
"name": "echo",
"type": "NLB",
"listeners": [
{
"port": 80,
"protocol": "TLS",
"tls": {
"policy": "ELBSecurityPolicy-2016-08",
"certificates": [
"arn:aws:acm:us-west-2:123456789012:certificate/12345678-efgi-abcd-efgi-12345678abcd"
]
},
"linkFrontend": "echo"
}
],
"accessLogS3Location": "access-logs/echo"
}
```
For more information about the format of access log files, see [Access Logs for Your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-access-logs.html#access-log-file-format) in the AWS documentation.
## Preventing accidental deletion
You can prevent accidental deletion of AWS Network Load Balancers by enabling the deletion protection flag in the `cloudProvider` section of the Edge-LB pool configuration file.
The following code excerpt illustrates how you can set the `deletionProtection` field to prevent AWS Network Load Balancers for an Edge-LB pool from being accidentally deleted:
```json
{
"name": "echo",
"type": "NLB",
"deletionProtection": true,
"listeners": [
{
"port": 80,
"linkFrontend": "echo"
}
]
}
```
Accidental deletion protection is disabled by default. If you enable deletion protection, you should keep in mind that deleting an Edge-LB pool associated with the AWS Network Load Balancer will not delete the AWS Network Load Balancer. If you want to delete all of the AWS Network Load Balancers associated with an Edge-LB pool when you delete the Edge-LB pool, you should set the `deletionProtection` field to `false` for all AWS Network Load Balancers **before** you delete the Edge-LB pool.
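A typical sequence for safely removing a protected load balancer together with its pool is sketched below; the pool file and name are taken from the example later in this page, and the exact CLI subcommands may vary by Edge-LB version:

```bash
# 1. set "deletionProtection": false in the pool configuration file
# 2. push the updated configuration, then delete the pool
dcos edgelb update pool-http-with-aws-nlb.json
dcos edgelb delete pool-http-with-aws-nlb
```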
### Using existing Amazon Resource Names
Edge-LB can manage an existing AWS Network Load Balancer by specifying the AWS Network Load Balancer resource names in the `cloudProvider` section of the Edge-LB pool configuration file.
The following code excerpt illustrates how you can use Amazon Resource Names (ARNs) to associate an existing AWS Network Load Balancer with an Edge-LB pool in the Edge-LB pool configuration file:
```json
{
"name": "echo",
"type": "NLB",
"arn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/1234567890abcfge",
"listeners": [
{
"port": 80,
"linkFrontend": "echo"
}
]
}
```
If you specify a custom ARN identifier in the pool configuration file, Edge-LB assumes that there is an existing Network Load Balancer for the pool to use. In this scenario, the Edge-LB API server does not attempt to create or delete a Network Load Balancer for the pool. Instead, Edge-LB attempts to manage resources for the specified Network Load Balancer. Therefore, if you want to use an existing Network Load Balancer, you should be sure that the existing Network Load Balancer configuration aligns with the configuration settings specified in the Edge-LB pool configuration file. For example, if you enable access logging for the existing Network Load Balancer, you should also enable access logging in the corresponding pool configuration to ensure that setting is used.
If there is a conflict between the configuration of the existing Network Load Balancer and the settings defined in the pool configuration file that use that Network Load Balancer, the pool configuration settings override the existing Network Load Balancer configuration.
## Internal tagging
Load balancers are automatically tagged with the following information in addition to any user-defined tags:
- `DC/OS:EdgeLB:ClusterID` - specifies a cluster identifier for the DC/OS cluster on which Edge-LB is running. For example: 18f21a68-058f-4d14-8055-e61ed91e3794.
- `DC/OS:EdgeLB:ApplicationID` - specifies the Marathon application identifier for the Edge-LB API server. For example, /dcos-edgelb/api.
- `DC/OS:EdgeLB:PoolName` - specifies a name of pool the load balancer belongs to. For example: test-http-pool-with-aws-nlb.
- `DC/OS:EdgeLB:LoadBalancerName` - specifies the original load balancer name. For example: echo.
Target groups have two additional tags:
- `DC/OS:EdgeLB:FrontendName` - specifies the name of a corresponding frontend. For example: echo.
- `DC/OS:EdgeLB:ListenerPort` - specifies a port number. For example: 80.
## Required permissions
To manage Network Load Balancers, the following AWS API permissions must be granted to one of these two identities:
- the instance on which the Edge-LB API server is running, or...
- the IAM user that is specified using an AWS access key at the time of Edge-LB installation.
```
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DescribeListeners
elasticloadbalancing:CreateListener
elasticloadbalancing:DeleteListener
elasticloadbalancing:ModifyListener
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:RegisterTargets
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:DescribeTags
elasticloadbalancing:AddTags
elasticloadbalancing:RemoveTags
ec2:DescribeAddresses
```
For more information about required permissions, see [Elastic Load Balancing API Permissions](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/elb-api-permissions.html) in the AWS documentation. For more information about using Edge-LB configuration properties to provide AWS credentials, see the [Edge-LB configuration reference](https://github.com/mesosphere/dcos-edge-lb/blob/master/framework/edgelb/universe/config.json).
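For reference, a minimal identity-based policy granting exactly the actions above might look like the following sketch (scoping `Resource` to specific ARNs instead of `*` is advisable in production):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:CreateLoadBalancer",
                "elasticloadbalancing:DeleteLoadBalancer",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:CreateListener",
                "elasticloadbalancing:DeleteListener",
                "elasticloadbalancing:ModifyListener",
                "elasticloadbalancing:CreateTargetGroup",
                "elasticloadbalancing:DeleteTargetGroup",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:ModifyTargetGroup",
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "elasticloadbalancing:DescribeTags",
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:RemoveTags",
                "ec2:DescribeAddresses"
            ],
            "Resource": "*"
        }
    ]
}
```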
# Deploying with the cloud provider load balancer
To illustrate how you can deploy an Edge-LB pool that uses an Amazon Network Load Balancer, you can deploy a sample application that uses an Edge-LB pool to route incoming requests to the app.
1. Open a text editor, then copy and paste the following sample app definition to create the `host-echo.json` file:
```json
{
"id": "/host-echo",
"cmd": "/start $PORT0",
"instances": 1,
"cpus": 0.1,
"mem": 32,
"constraints": [["public_ip", "UNLIKE", "true"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "mesosphere/echo-http"
}
},
"portDefinitions": [
{
"name": "web",
"protocol": "tcp",
"port": 0
}
],
"healthChecks": [
{
"portIndex": 0,
"path": "/",
"protocol": "HTTP"
}
]
}
```
1. Deploy the `host-echo` service using the `host-echo.json` app definition by running the following command:
```bash
dcos marathon app add examples/apps/host-echo.json
```
1. Open a text editor, then copy and paste the following Edge-LB pool configuration settings to define the `pool-http-with-aws-nlb` Edge-LB pool:
```json
{
"apiVersion": "V2",
"name": "pool-http-with-aws-nlb",
"count": 1,
"cloudProvider": {
"aws": {
"elbs": [
{
"name": "echo",
"type": "NLB",
"internal": false,
"listeners": [
{
"port": 80,
"linkFrontend": "echo"
}
],
"tags": [
{
"key": "Protocol",
"value": "HTTP"
}
]
}
]
}
},
"haproxy": {
"frontends": [
{
"name": "echo",
"bindPort": 80,
"protocol": "HTTP",
"linkBackend": {
"defaultBackend": "echo"
}
}
],
"backends": [
{
"name": "echo",
"protocol": "HTTP",
"services": [
{
"marathon": {
"serviceID": "/host-echo"
},
"endpoint": {
"portName": "web"
}
}
]
}
]
}
}
```
1. Deploy the `pool-http-with-aws-nlb` pool configuration file to create the pool instance for load balancing access to the `host-echo` service by running the following command:
```bash
dcos edgelb create examples/config/pool-http-with-aws-nlb.json
```
1. List the cloud provider load balancers managed by Edge-LB running in the attached DC/OS cluster by running the following command:
```bash
dcos edgelb ingresslb pool-http-with-aws-nlb
```
The command returns output similar to the following:
```bash
NAME DNS
echo dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv-f0f10cfccfa7d5a8.elb.us-west-2.amazonaws.com
AVAILABILITY ZONE ELASTIC IP
LISTENER PROTOCOL LISTENER PORT FRONTEND
TCP 80 echo
```
In this example, the DNS name returned for the generated AWS Network Load Balancer is:

`dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv-f0f10cfccfa7d5a8.elb.us-west-2.amazonaws.com`

The load balancer name itself is the leading `dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv` label. You can use this name to retrieve details about the Network Load Balancer and its associated resources.
1. Use the generated AWS Network Load Balancer name to describe the Network Load Balancer by running the following command:
```bash
aws elbv2 describe-load-balancers --names dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv
```
The command returns information similar to the following:
```bash
{
"LoadBalancers": [
{
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:273854932432:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8",
"DNSName": "dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv-f0f10cfccfa7d5a8.elb.us-west-2.amazonaws.com",
"CanonicalHostedZoneId": "Z18D5FSROUN65G",
"CreatedTime": "2019-04-26T16:20:20.071Z",
"LoadBalancerName": "dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv",
"Scheme": "internet-facing",
"VpcId": "vpc-0d745ccca41eb2da1",
"State": {
"Code": "active"
},
"Type": "network",
"AvailabilityZones": [
{
"ZoneName": "us-west-2c",
"SubnetId": "subnet-05e8d3ea6fbad165a"
}
],
"IpAddressType": "ipv4"
}
]
}
```
1. List the load balancer's listeners by running the following command:
```bash
aws elbv2 describe-listeners --load-balancer-arn arn:aws:elasticloadbalancing:us-west-2:273854932432:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8
```
The command returns information similar to the following:
```bash
{
"Listeners": [
{
"ListenerArn": "arn:aws:elasticloadbalancing:us-west-2:273854932432:listener/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8/df8377617abd7bf2",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:273854932432:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8",
"Port": 80,
"Protocol": "TCP",
"DefaultActions": [
{
"Type": "forward",
"TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:273854932432:targetgroup/dcos-tg-Gt8X3J46KQWg-80-PVvAO/d5a676c635d7e437"
}
]
}
]
}
```
1. Get the load balancer's tags by running the following command:
```bash
aws elbv2 describe-tags --resource-arns arn:aws:elasticloadbalancing:us-west-2:273854932432:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8
```
The command returns information similar to the following:
```bash
{
"TagDescriptions": [
{
"ResourceArn": "arn:aws:elasticloadbalancing:us-west-2:273854932432:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8",
"Tags": [
{
"Key": "DC/OS:EdgeLB:ApplicationID",
"Value": "/dcos-edgelb/api"
},
{
"Key": "DC/OS:EdgeLB:ClusterID",
"Value": "18f21a68-058f-4d14-8055-e61ed91e3794"
},
{
"Key": "DC/OS:EdgeLB:PoolName",
"Value": "test-http-pool-with-aws-nlb"
},
{
"Key": "Protocol",
"Value": "HTTP"
},
{
"Key": "DC/OS:EdgeLB:LoadBalancerName",
"Value": "echo"
}
]
}
]
}
```
1. Get a list of target groups associated with the load balancer by running the following command:
```bash
aws elbv2 describe-target-groups --load-balancer-arn arn:aws:elasticloadbalancing:us-west-2:273854932432:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8
```
The command returns information similar to the following:
```bash
{
"TargetGroups": [
{
"TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:273854932432:targetgroup/dcos-tg-Gt8X3J46KQWg-80-PVvAO/d5a676c635d7e437",
"TargetGroupName": "dcos-tg-Gt8X3J46KQWg-80-PVvAO",
"Protocol": "TCP",
"Port": 65535,
"VpcId": "vpc-0d745ccca41eb2da1",
"HealthCheckProtocol": "TCP",
"HealthCheckPort": "traffic-port",
"HealthCheckEnabled": true,
"HealthCheckIntervalSeconds": 30,
"HealthCheckTimeoutSeconds": 10,
"HealthyThresholdCount": 3,
"UnhealthyThresholdCount": 3,
"LoadBalancerArns": [
"arn:aws:elasticloadbalancing:us-west-2:273854932432:loadbalancer/net/dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv/f0f10cfccfa7d5a8"
],
"TargetType": "instance"
}
]
}
```
1. Get the target group's tags by running the following command:
```bash
aws elbv2 describe-tags --resource-arns arn:aws:elasticloadbalancing:us-west-2:273854932432:targetgroup/dcos-tg-Gt8X3J46KQWg-80-PVvAO/d5a676c635d7e437
```
The command returns information similar to the following:
```bash
{
"TagDescriptions": [
{
"ResourceArn": "arn:aws:elasticloadbalancing:us-west-2:273854932432:targetgroup/dcos-tg-Gt8X3J46KQWg-80-PVvAO/d5a676c635d7e437",
"Tags": [
{
"Key": "DC/OS:EdgeLB:ListenerPort",
"Value": "80"
},
{
"Key": "DC/OS:EdgeLB:ApplicationID",
"Value": "/dcos-edgelb/api"
},
{
"Key": "DC/OS:EdgeLB:ClusterID",
"Value": "18f21a68-058f-4d14-8055-e61ed91e3794"
},
{
"Key": "DC/OS:EdgeLB:PoolName",
"Value": "test-http-pool-with-aws-nlb"
},
{
"Key": "Protocol",
"Value": "HTTP"
},
{
"Key": "DC/OS:EdgeLB:FrontendName",
"Value": "echo"
},
{
"Key": "DC/OS:EdgeLB:LoadBalancerName",
"Value": "echo"
}
]
}
]
}
```
# Viewing pool metadata
Pool metadata contains additional information about cloud provider load balancers, if any are defined.
To get load balancer metadata for a pool, you can make a request to the `/service/edgelb/v2/pools/<pool-name>/metadata` endpoint.
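For example, assuming an authenticated DC/OS CLI session, the request could be made like this (the token/URL lookups shown are one common pattern, not the only one; `<pool-name>` is a placeholder):

```bash
curl -sk \
  -H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
  "$(dcos config show core.dcos_url)/service/edgelb/v2/pools/<pool-name>/metadata"
```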
Here is an example of a response:
```json
{
"aws": {
"elbs": [
{
"availabilityZones": [
{
"elasticIP": "34.212.99.119",
"name": "us-west-2a"
}
],
"dns": "dcos-lb-JOQ3wMJ7Z-Q8ayl-XzLP-PVv-f0f10cfccfa7d5a8.elb.us-west-2.amazonaws.com",
"listeners": [
{
"linkFrontend": "echo",
"port": 80
}
],
"name": "echo",
"state": {
"status": "active"
}
}
]
},
"frontends": [
{
"endpoints": [
{
"port": 80,
"private": [
"10.0.4.103"
],
"public": [
"34.211.112.50"
]
}
],
"name": "echo"
}
],
"name": "test-http-pool-with-aws-nlb",
"stats": [
{
"port": 9090,
"private": [
"10.0.4.103"
],
"public": [
"34.211.112.50"
]
}
]
}
```
In this example, `aws.elbs` is an array of entries where each entry corresponds to a respective AWS load balancer configuration in the pool definition.
Each `aws.elbs` entry has the following fields:
- `name` is a user-defined load balancer name.
- `dns` is a DNS name of load balancer.
- `listeners` lists the listeners as defined in the respective pool configuration.
- `state` specifies the status of the load balancer, together with a description for unexpected statuses.
- `availabilityZones` specify the availability zones that identify where load balancer nodes are located.
For other details on metadata format, see the pool [metadata reference](../../reference/pool-configuration-reference/metadata/) section.
| 38.766467 | 777 | 0.636392 | eng_Latn | 0.921073 |
7640ca1bf6ce1167f79d97f12711e8898e6fb6d6 | 645 | md | Markdown | README.md | leandrorichard/esb-connector-msazurestorage | f4be120ae24a1cb8c27b41b87d265b1a2fbb5b84 | [
"Apache-2.0"
] | null | null | null | README.md | leandrorichard/esb-connector-msazurestorage | f4be120ae24a1cb8c27b41b87d265b1a2fbb5b84 | [
"Apache-2.0"
] | null | null | null | README.md | leandrorichard/esb-connector-msazurestorage | f4be120ae24a1cb8c27b41b87d265b1a2fbb5b84 | [
"Apache-2.0"
] | null | null | null | ### Microsoft Azure Storage ESB Connector
The Microsoft Azure Storage Connector allows you to access the Azure Storage services using Microsoft Azure Storage Java SDK through WSO2 Enterprise Integrator (WSO2 EI). Microsoft Azure Storage is a Microsoft-managed cloud service that provides storage that is highly available and secure. Azure Storage consists of three data services: Blob storage, File storage, and Queue storage.
### Build
mvn clean install
### How You Can Contribute
You can create a third party connector and publish in WSO2 Store.
https://docs.wso2.com/display/ESBCONNECTORS/Creating+and+Publishing+a+Third+Party+Connector
| 49.615385 | 384 | 0.809302 | eng_Latn | 0.97482 |
7640de182a4a958d88232b29c296f52619128a3d | 14,505 | md | Markdown | tutorials/js/01-basic-module.md | exjohn/rawblock | b2585303bf9a290067200ca46f622b2ddf70e591 | [
"MIT"
] | null | null | null | tutorials/js/01-basic-module.md | exjohn/rawblock | b2585303bf9a290067200ca46f622b2ddf70e591 | [
"MIT"
] | null | null | null | tutorials/js/01-basic-module.md | exjohn/rawblock | b2585303bf9a290067200ca46f622b2ddf70e591 | [
"MIT"
] | null | null | null | # How to create a rawblock component I
As an example component we will create a "slim header". As soon as the user scrolls down a certain threshold the header gets slim. A full demo can be seen at [codesandbox (SlimHeader with rawblock)](https://codesandbox.io/s/3q51m1pxjm).
## HTML of our slim header component
A component markup always has to have a `data-module` attribute with the name of your component and in general a `js-rb-live` class to indicate, that rawblock should create the UI component immediately.
### Excursion: Initializing components
In general rawblock components have the class `js-rb-live` to be automatically created, if they are first seen in the document. In case a component only reacts to a `click` event and only needs to be created at this time, the author can add a `js-rb-click` class instead.
There is also the possibility to use the rb_lazymodules module to lazily create modules as soon as they become visible in the viewport using the `js-rb-lazylive` class.
Or the class can be fully omitted and the component can be initialized from JS using the [`rb.getComponent`](rb.html#.getComponent__anchor) or `this.component` method.
The functional children should have a class prefixed with the component name (htmlName).
```html
<div class="js-rb-live" data-module="slimheader"></div>
```
## JS-Boilerplate of a rawblock component
```js
import { Component } from 'rawblock';
class SlimHeader extends Component {
// The static defaults getter defines the default options of our component.
static get defaults(){
return {
};
}
// The static events getter defines the events rawblock should attach to the DOM.
static get events(){
return {
};
}
// The constructor is invoked to create the rb component
constructor(/*element, initialDefaults*/){
super(...arguments);
}
// attached is invoked either after the constructor or if the DOM element is added to the DOM
// Normally the attached method should be used as the counterpart to the detached method.
attached(){
}
//detached is invoked after the element was removed from the document and should be used to clean up (i.e.: clear timeouts, unbind global events etc.)
detached(){
}
}
//Component.register (rb.live.register) registers the Component class and defines the component name. The class is then added to the `rb.components` namespace (i.e. `rb.components.slimheader`).
Component.register('slimheader', SlimHeader);
//in your app.js include core files...
//import 'rawblock/$';
//import rb from 'rawblock';
//... and your component
//import 'slimheader';
//call rb.live.init in your main file to start rawblock.
//rb.live.init();
```
## SlimHeader JS Class
### Working with options
First we define the threshold as `topThreshold` in our `defaults` getter.
```js
import { Component } from 'rawblock';
class SlimHeader extends Component {
static get defaults(){
return {
topThreshold: 60
};
}
constructor(/*element, initialDefaults*/){
super(...arguments);
this.log(this.options.topThreshold); //outputs 60
}
}
Component.register('slimheader', SlimHeader);
```
A rawblock component can be configured in multiple ways.
With JS:
```js
import rb from 'rawblock';
//change the default itself (Note: rawblock changes the defaults getter to a defaults object value.)
rb.components.slimheader.defaults.topThreshold = 70;
//create a new component with changed options and use this instead (Note: rawblock will automatically merge/mixin your defaults object)
class SuperSlimHeader extends rb.components.slimheader {
static get defaults(){
return {
topThreshold: 80
};
}
}
//change the specific component option
rb.$('.slimheader').rbComponent().setOption('topThreshold', 90);
```
With HTML:
```html
<!-- As one option -->
<div data-module="slimheader" data-slimheader-top-threshold="100"></div>
<!-- As an option object (with multiple options) -->
<div data-module="slimheader" data-slimheader-options='{"topThreshold": 110}'></div>
```
With CSS/SCSS
```scss
.slimheader {
@include rb-js-export((
topThreshold: 120,
));
}
```
Because the threshold is heavily style/layout related, it can make sense to configure it with SCSS. This gets really helpful if you plan to change it responsively.
```scss
.slimheader {
@include rb-js-export((
topThreshold: 120,
));
@media (min-width: 120px) {
@include rb-js-export((
topThreshold: 180,
));
}
}
```
By extending/overriding the `setOption` method it is possible to react to option changes:
```js
class SlimHeader extends Component {
setOption(name, value, isSticky){
super.setOption(name, value, isSticky);
if(name == 'topThreshold'){
//do the work
}
}
}
```
### Events
JS events can be bound manually or by using the events object. In our case we need to bind the `scroll` event to the `window` object.

Because the `window` object outlives our component, it makes sense to use the `attached`/`detached` lifecycle callbacks.
```js
class SuperSlimHeader extends rb.components.slimheader {
constructor(/*element, initialDefaults*/){
super(...arguments);
this.handleScroll = this.handleScroll.bind(this);
}
handleScroll(){
}
attached(){
window.addEventListener('scroll', this.handleScroll);
}
detached(){
window.removeEventListener('scroll', this.handleScroll);
}
}
```
But you can also use the static `events` object of your component class. Normally rawblock binds all events to the component element itself and gives you some options to help with event delegation.

But because the `scroll` event happens outside of the component, event delegation does not help here. For this you can use the `@` event option. Every event option is prefixed with a `:`.
```js
class SlimHeader extends Component {
static get events(){
return {
'scroll:@(window)': 'handleScroll',
};
}
constructor(/*element, initialDefaults*/){
super(...arguments);
}
handleScroll(){
}
}
```
#### About event options:
There are 4 different kinds of event options:
* The `@` option allows you to bind listeners to other elements than the component element. These elements are retrieved by the `this.getElementsByString` method, which not only allows you to use predefined values like `"window"` or `"document"`, but also jQuery-like traversing methods to select an element (i.e.: `"submit:@(closest(form))"`, `"click:@(next(.item))"` etc.).
* Native event options: `capture`, `once` and `passive`.
* proxy functions: `closest`, `matches`, `keycodes` and some more.
    * `closest` can be used for event delegation. For example, `'click:closest(.button)'` means that when a click happens, the proxy function calls the `closest` method on the `event.target`; if it finds a matching element, it sets the `event.delegatedTarget` property to this element and calls the declared event handler.
    * `matches` can also be used for event delegation. For example, `'change:matches(.input)'` means that when a change happens, the proxy function calls the `matches` method on the `event.target`; if it returns `true`, it sets the `event.delegatedTarget` property to this element and calls the declared event handler.
* Different options for custom events.
### Adding some logic
Now we can fill in some logic to add a class as soon as the header reaches our threshold.
```js
class SlimHeader extends Component {
static get defaults(){
return {
topThreshold: 80
};
}
static get events(){
return {
'scroll:@(window)': 'handleScroll',
};
}
constructor(/*element, initialDefaults*/){
super(...arguments);
//call handleScroll to get the initial state right
this.handleScroll();
}
handleScroll(){
this.element.classList.toggle('is-slim', this.options.topThreshold < document.scrollingElement.scrollTop);
}
}
Component.register('slimheader', SlimHeader);
```
### Improvements to our current component
The code above will give us a full functional rawblock component. But it can be improved in multiple ways.
Performance considerations
1. `classList.toggle` is called with a very high frequency, even if the threshold expression has the same result as before.
2. `classList.toggle` mutates the DOM (= layout write) outside of a `requestAnimationFrame`, which is likely to produce layout thrashing in a complex application.
3. `handleScroll` could also be throttled.
To fix the first point we will add a `isSlim` property and only change the class if it has changed
```js
class SlimHeader extends Component {
//.....
constructor(/*element, initialDefaults*/){
super(...arguments);
this.isSlim = false;
this.handleScroll();
}
handleScroll(){
const shouldBeSlim = this.options.topThreshold < document.scrollingElement.scrollTop;
if(this.isSlim != shouldBeSlim){
this.isSlim = shouldBeSlim;
this.element.classList.toggle('is-slim', this.isSlim);
}
}
}
```
To fix the second point rawblock offers a method called `this.rAFs`. This method can take an unlimited number of method names and will make sure that these methods are called inside a `rAF`.
Because getting `document.scrollingElement.scrollTop` is a layout read, we need to separate it from our layout write `classList.toggle`.
```js
class SlimHeader extends Component {
//.....
constructor(/*element, initialDefaults*/){
super(...arguments);
this.isSlim = false;
this.rAFs('changeState');
this.calculateState();
}
changeState(){
this.element.classList.toggle('is-slim', this.isSlim);
}
calculateState(){
const shouldBeSlim = this.options.topThreshold < document.scrollingElement.scrollTop;
if(this.isSlim != shouldBeSlim){
this.isSlim = shouldBeSlim;
this.changeState();
}
}
}
```
For small DOM changes rawblock also supports jQuery-like functions that are called inside a rAF; the string "Raf" is appended to these method names. In our case the method `toggleClassRaf` can be used:
```js
class SlimHeader extends Component {
//...
calculateState(){
const shouldBeSlim = this.options.topThreshold < document.scrollingElement.scrollTop;
if(this.isSlim != shouldBeSlim){
this.isSlim = shouldBeSlim;
this.$element.toggleClassRaf('is-slim', this.isSlim);
}
}
}
```
Point 3 could be fixed by using `rb.throttle`. But I assume that in our refactored case `calculateState` is so light that throttling won't affect performance much.
```js
import rb, { Component } from 'rawblock';
class SlimHeader extends Component {
//.....
constructor(/*element, initialDefaults*/){
super(...arguments);
this.isSlim = false;
this.calculateState = rb.throttle(this.calculateState, {read: true});
this.calculateState();
}
}
```
The slimheader component can be further improved under the following, more rawblock-specific, aspects:
1. A good component should dispatch an event as soon as its state changes (see the sketch after this list).
This can be realized using the `trigger` method. The method accepts an event name as well as a detail object for further event-specific information. The event name is automatically prefixed by the component name (jsName). If a component only has one state that changes, or one state can be seen as the main state, the component should dispatch a `changed` event.
In case no event name is given, `trigger` will automatically generate this `changed` event prefixed by the component name. (In our case `"slimheaderchanged"`.)
If you are using a `*Raf` method to alter your DOM, so that your `trigger` call is not done inside a rAF, and you want to dispatch the event after the DOM changes are done, you can use the `this.triggerRaf` method.
2. rawblock allows you to configure how state classes are generated (prefixed by `is-`, `modifier__` etc.).
To support this feature we need to use `rb.$.fn.rbToggleState` instead of `classList.toggle`.
3. Building responsive JS components often reveals that you need to disable/switch off a component under certain media conditions and turn others on.
rawblock uses the convention of the `switchedOff` option for those cases. If `options.switchedOff` is `true`, no event listener bound by the events object is called. (Keep in mind that this won't help if your bound listeners are invoked asynchronously due to throttling or debouncing.)
Often the developer still has to do some work to react to those option changes (clean up changed markup). If your specific project needs this, or you want to build a generally re-usable component, you should do it.
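A quick sketch of the `trigger` calls mentioned in point 1 (the custom event name and detail are illustrative; the exact overloads are an assumption based on the description above):

```js
this.trigger(); // dispatches 'slimheaderchanged' on this.element
this.trigger('slim', { isSlim: this.isSlim }); // dispatches 'slimheaderslim' with event.detail
this.triggerRaf(); // same as trigger, but dispatched inside the next animation frame
```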
Our [final improved code](https://codesandbox.io/s/3q51m1pxjm) could look like this:
```js
import rb, { Component } from 'rawblock';
class SlimHeader extends Component {
static get defaults(){
return {
topThreshold: 80,
};
}
static get events(){
return {
'scroll:@(window)': 'calculateState',
};
}
constructor(/*element, initialDefaults*/){
super(...arguments);
this.isSlim = false;
this.calculateState = rb.throttle(this.calculateState, {read: true});
this.calculateState();
}
setOption(name/*, value, isSticky*/){
super.setOption(...arguments);
//if topThreshold or switchedOff option changed re-calculate with these options.
if(name == 'topThreshold' || name == 'switchedOff'){
this.calculateState();
}
}
calculateState(){
const {switchedOff, topThreshold} = this.options;
//if it is switchedOff it is never slim
const shouldBeSlim = !switchedOff && topThreshold < document.scrollingElement.scrollTop;
if(this.isSlim != shouldBeSlim){
this.isSlim = shouldBeSlim;
this.$element.rbToggleStateRaf('slim', this.isSlim);
this.triggerRaf();
}
}
}
Component.register('slimheader', SlimHeader);
```
---
title: Continuous backup with point-in-time restore feature in Azure Cosmos DB
description: Azure Cosmos DB's point-in-time restore feature helps you recover data from accidental write or delete operations, or restore data into any region. Learn about pricing and current limitations.
author: kanshiG
ms.service: cosmos-db
ms.topic: conceptual
ms.date: 02/01/2021
ms.author: govindk
ms.reviewer: sngun
ms.custom: references_regions
---
# <a name="continuous-backup-with-point-in-time-restore-preview-feature-in-azure-cosmos-db"></a>Continuous backup with point-in-time restore (preview) feature in Azure Cosmos DB
[!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
> [!IMPORTANT]
> The point-in-time restore feature (continuous backup mode) for Azure Cosmos DB is currently in public preview.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Azure Cosmos DB's point-in-time restore feature (preview) helps in multiple scenarios, such as the following:

* To recover from an accidental write or delete operation within a container.
* To restore a deleted account, database, or container.
* To restore into any region (where backups existed) at the restore point in time.
Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. Continuous backups are taken in every region where the account exists. The following image shows how a container with a write region in West US and read regions in East US and East US 2 is backed up to an Azure Blob Storage account in the respective regions. By default, each region stores the backup in locally redundant storage accounts. If the region has [availability zones](high-availability.md#availability-zone-support) enabled, the backup is stored in zone-redundant storage accounts.

:::image type="content" source="./media/continuous-backup-restore-introduction/continuous-backup-restore-blob-storage.png" alt-text="Azure Cosmos DB data backup to Azure Blob Storage." lightbox="./media/continuous-backup-restore-introduction/continuous-backup-restore-blob-storage.png" border="false":::

The time window available for restore (also known as the retention period) is the lower of the following two values: *the last 30 days from now* or *back to the resource creation time*. The point in time for restore can be any timestamp within the retention period.

In the public preview, you can restore an Azure Cosmos DB account for SQL API or MongoDB contents to a point in time in another account using the [Azure portal](continuous-backup-restore-portal.md), the [Azure command-line interface (CLI)](continuous-backup-restore-command-line.md), [Azure PowerShell](continuous-backup-restore-powershell.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
## <a name="what-is-restored"></a>¿Qué se restaura?
En un estado estable, se realiza una copia de seguridad de todas las mutaciones realizadas en la cuenta de origen (lo que incluye las bases de datos, los contenedores y los elementos) de forma asincrónica en 100 segundos. Si el medio de copia de seguridad (es decir, el almacenamiento de Azure) está fuera de servicio o no está disponible, las mutaciones se conservan localmente hasta que el medio vuelve a estar disponible y, a continuación, se vacían para evitar cualquier pérdida de fidelidad de las operaciones que se pueden restaurar.
Puede optar por restaurar cualquier combinación de contenedores de rendimiento aprovisionados, una base de datos de rendimiento compartida o toda la cuenta. La acción de restauración restaura todos los datos y sus propiedades de índice en una cuenta nueva. El proceso de restauración garantiza que todos los datos restaurados en una cuenta, una base de datos o un contenedor tienen la garantía de que el tiempo de restauración especificado es coherente. La duración de la restauración dependerá de la cantidad de datos que sea necesario restaurar.
> [!NOTE]
> Con el modo de copia de seguridad continua, las copias de seguridad se crean en todas las regiones en las que está disponible su cuenta de Azure Cosmos DB. De manera predeterminada, las copias de seguridad creadas para cada cuenta de región son localmente redundantes y tienen redundancia de zona si la cuenta tiene habilitada la característica de [zona de disponibilidad](high-availability.md#availability-zone-support) para esa región. La acción de restauración siempre restaura los datos en una cuenta nueva.
## <a name="what-is-not-restored"></a>¿Qué no se restaura?
Las configuraciones siguientes no se restauran después de la recuperación a un momento dado:
* Configuración de firewall, red virtual y punto de conexión privado.
* Configuración de coherencia. De manera predeterminada, la cuenta se restaura con coherencia de sesión.
* Regiones.
* Procedimientos almacenados, desencadenadores y UDF.
Puede agregar estas configuraciones a la cuenta restaurada una vez completada la restauración.
## <a name="restore-scenarios"></a>Restore scenarios

The following are some of the key scenarios that are addressed by the point-in-time restore feature. Scenarios [a] through [c] demonstrate how to trigger a restore if the restore timestamp is known beforehand.

However, there could be scenarios where you don't know the exact time of accidental deletion or corruption. Scenarios [d] and [e] demonstrate how to _discover_ the restore timestamp by using the new event feed APIs on the restorable database or containers.

:::image type="content" source="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" alt-text="Lifecycle events with timestamps for a restorable account." lightbox="./media/continuous-backup-restore-introduction/restorable-account-scenario.png" border="false":::
a. **Restore a deleted account**: All deleted accounts that can be restored are visible in the **Restore** pane. For example, suppose *Account A* is deleted at timestamp T3. In this case, the timestamp just before T3, the location, the target account name, and the resource group are sufficient to restore from the [Azure portal](continuous-backup-restore-portal.md#restore-deleted-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or the [CLI](continuous-backup-restore-command-line.md#trigger-restore).

:::image type="content" source="./media/continuous-backup-restore-introduction/restorable-container-database-scenario.png" alt-text="Lifecycle events with timestamps for a restorable database and container." lightbox="./media/continuous-backup-restore-introduction/restorable-container-database-scenario.png" border="false":::

b. **Restore account data to a particular region**: For example, suppose *Account A* exists in two regions, *East US* and *West US*, at timestamp T3. If you need a copy of account A in *West US*, you can do a point-in-time restore from the [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or the [CLI](continuous-backup-restore-command-line.md#trigger-restore) with West US as the target location.

c. **Recover from an accidental write or delete operation within a container, with a known restore timestamp**: For example, suppose you **know** that the contents of *Container 1* within *Database 1* were accidentally modified at timestamp T3. You can do a point-in-time restore from the [Azure portal](continuous-backup-restore-portal.md#restore-live-account), [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), or the [CLI](continuous-backup-restore-command-line.md#trigger-restore) into another account at timestamp T3 to recover the desired state of the container.

d. **Restore an account to a point in time before the accidental deletion of a database**: In the [Azure portal](continuous-backup-restore-portal.md#restore-live-account), you can use the event feed pane to determine when a database was deleted and to find the restore time. Similarly, with the [Azure CLI](continuous-backup-restore-command-line.md#trigger-restore) and [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), you can discover the database deletion event by enumerating the database event feed, and then trigger the restore command with the required parameters.

e. **Restore an account to a point in time before the accidental deletion or modification of container properties**: In the [Azure portal](continuous-backup-restore-portal.md#restore-live-account), you can use the event feed pane to determine when a container was created, modified, or deleted to find the restore time. Similarly, with the [Azure CLI](continuous-backup-restore-command-line.md#trigger-restore) and [PowerShell](continuous-backup-restore-powershell.md#trigger-restore), you can discover all the container events by enumerating the container event feed, and then trigger the restore command with the required parameters.
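As a rough sketch of scenario [d] with the Azure CLI (the command and parameter names below are assumptions that may differ while the feature is in preview — check the CLI article linked above for the exact syntax):

```azurecli
# List restorable accounts; their event feeds help you discover the timestamp.
az cosmosdb restorable-database-account list

# Trigger the restore into a new account (all values are placeholders).
az cosmosdb restore \
    --resource-group <resource-group> \
    --account-name <source-account> \
    --target-database-account-name <restored-account> \
    --restore-timestamp 2021-01-01T00:00:00+0000 \
    --location "West US"
```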
## <a name="permissions"></a>Permissions

Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous-backup account to a specific role or principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. To learn more, see the [Permissions](continuous-backup-restore-permissions.md) article.
## <a name="pricing"></a><a id="continuous-backup-pricing"></a>Pricing

Azure Cosmos DB accounts that have continuous backup enabled incur an additional monthly charge to *store the backup* and to *restore your data*. The restore cost is added every time the restore operation is initiated. If you configure an account with continuous backup but don't restore the data, only the backup storage cost is included in your bill.

The following example is based on the price for an Azure Cosmos account deployed in a non-government region in the US. The pricing and calculation can vary depending on the region you're using; see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for up-to-date pricing information.

* All accounts enabled with the continuous backup policy incur an additional monthly charge for backup storage, calculated as follows:

    $0.20/GB * data size in GB in the account * number of regions

* Every restore API invocation incurs a one-time charge. The charge is a function of the amount of data restored, calculated as follows:

    $0.15/GB * data size in GB

For example, if you have 1 TB of data in two regions:

* Backup storage cost is calculated as (1,000 * $0.20 * 2) = $400 per month

* Restore cost is calculated as (1,000 * $0.15) = $150 per restore
## <a name="current-limitations-public-preview"></a>Current limitations (public preview)

Currently, the point-in-time restore functionality is in public preview and has the following limitations:

* Only the Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. The Cassandra, Table, and Gremlin APIs aren't yet supported.

* An existing account with the default periodic backup policy can't be converted to use the continuous backup mode.

* The Azure sovereign cloud and Azure Government regions aren't yet supported.

* Accounts with customer-managed keys aren't supported for continuous backup.

* Multi-region write accounts aren't supported.

* Accounts with Synapse Link enabled aren't supported.

* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account doesn't exist.

* The restore window is only 30 days, and it can't be changed.

* The backups aren't automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup.

* While a restore is in progress, don't modify or delete the identity and access management (IAM) policies that grant the permissions for the account, and don't change any firewall or virtual network settings.

* Azure Cosmos DB API for SQL or MongoDB accounts that create a unique index after the container is created aren't supported for continuous backup. Only containers that create a unique index as part of the initial container creation are supported. For MongoDB accounts, you can create a unique index by using [extension commands](mongodb-custom-commands.md).

* The point-in-time restore functionality always restores into a new Azure Cosmos account. Restoring into an existing account is currently not supported. If you're interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative or [UserVoice](https://feedback.azure.com/forums/263030-azure-cosmos-db).

* All the new APIs exposed for listing `RestorableDatabaseAccount`, `RestorableSqlDatabases`, `RestorableSqlContainer`, `RestorableMongodbDatabase`, and `RestorableMongodbCollection` are subject to change while the feature is in preview.

* After a restore, it is possible that for certain collections the consistent index may be rebuilding. You can check the status of the rebuild operation via the [IndexTransformationProgress](how-to-manage-indexing-policy.md) property.

* The restore process restores all the properties of a container, including its TTL configuration. As a result, the restored data may be deleted immediately if you configured it that way. To prevent this situation, the restore timestamp must be before the TTL properties were added to the container.
## <a name="next-steps"></a>Next steps

* Configure and manage continuous backup by using the [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), the [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
* [Manage the permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
| 117.06338 | 872 | 0.802142 | spa_Latn | 0.988027 |
7641d5de50ae7fb2eeae8b4c9048193e50c7b7d0 | 11,996 | md | Markdown | articles/azure-resource-manager/templates/deploy-cli.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-resource-manager/templates/deploy-cli.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-resource-manager/templates/deploy-cli.md | sergibarca/azure-docs.es-es | dabecf2b983b0b41215571b8939077861f0c2667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Deploy resources with templates and Azure CLI
description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Resource Manager template.
ms.topic: conceptual
ms.date: 10/09/2019
ms.openlocfilehash: 242b9f2a4bc39f8aa083d9c89d3dd7ed850b3489
ms.sourcegitcommit: 276c1c79b814ecc9d6c1997d92a93d07aed06b84
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 01/16/2020
ms.locfileid: "76154302"
---
# <a name="deploy-resources-with-resource-manager-templates-and-azure-cli"></a>Deploy resources with Resource Manager templates and Azure CLI

This article explains how to use Azure CLI with Resource Manager templates to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [Template deployment overview](overview.md).
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)]
If you don't have Azure CLI installed, you can use [Cloud Shell](#deploy-template-from-cloud-shell).
## <a name="deployment-scope"></a>Deployment scope

You can target your deployment to either an Azure subscription or a resource group within a subscription. In most cases, you'll target the deployment to a resource group. Use subscription deployments to apply policies and role assignments across the subscription. You also use subscription deployments to create a resource group and deploy resources to it. Depending on the scope of the deployment, you use different commands.

To deploy to a **resource group**, use [az group deployment create](/cli/azure/group/deployment?view=azure-cli-latest#az-group-deployment-create):
```azurecli
az group deployment create --resource-group <resource-group-name> --template-file <path-to-template>
```
To deploy to a **subscription**, use [az deployment create](/cli/azure/deployment?view=azure-cli-latest#az-deployment-create):
```azurecli
az deployment create --location <location> --template-file <path-to-template>
```
For more information about subscription-level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md).

Currently, management group deployments are only supported through the REST API. For more information about management-group-level deployments, see [Create resources at the management group level](deploy-to-management-group.md).

The examples in this article use resource group deployments.
## <a name="deploy-local-template"></a>Deploy a local template

When deploying resources to Azure, you:

1. Sign in to your Azure account.
2. Create a resource group that serves as the container for the deployed resources. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parentheses. It can be up to 90 characters, and it can't end in a period.
3. Deploy to the resource group the template that defines the resources to create.

A template can include parameters that enable you to customize the deployment. For example, you can provide values that are tailored for a particular environment (such as dev, test, and production). The sample template defines a parameter for the storage account SKU.

The following example creates a resource group and deploys a template from your local machine:
```azurecli-interactive
az group create --name ExampleGroup --location "Central US"
az group deployment create \
--name ExampleDeployment \
--resource-group ExampleGroup \
--template-file storage.json \
--parameters storageAccountType=Standard_GRS
```
The deployment can take a few minutes to complete. When it finishes, you see a message that includes the result:
```azurecli
"provisioningState": "Succeeded",
```
## <a name="deploy-remote-template"></a>Deploy a remote template

Instead of storing Resource Manager templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub), or in an Azure storage account for shared access in your organization.

To deploy an external template, use the **template-uri** parameter. Use the URI in the example to deploy the sample template from GitHub.
```azurecli-interactive
az group create --name ExampleGroup --location "Central US"
az group deployment create \
--name ExampleDeployment \
--resource-group ExampleGroup \
--template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json" \
--parameters storageAccountType=Standard_GRS
```
The preceding example requires a publicly accessible URI for the template, which works for most scenarios because your template shouldn't include sensitive data. If you need to specify sensitive data (like an admin password), pass that value as a secure parameter. However, if you don't want your template to be publicly accessible, you can protect it by storing it in a private storage container. For information about deploying a template that requires a shared access signature (SAS) token, see [Deploy private template with SAS token](secure-template-with-sas-token.md).
[!INCLUDE [resource-manager-cloud-shell-deploy.md](../../../includes/resource-manager-cloud-shell-deploy.md)]
In Cloud Shell, use the following commands:
```azurecli-interactive
az group create --name examplegroup --location "South Central US"
az group deployment create --resource-group examplegroup \
--template-uri <copied URL> \
--parameters storageAccountType=Standard_GRS
```
## <a name="parameters"></a>Parameters

To pass parameter values, you can use either inline parameters or a parameter file.

### <a name="inline-parameters"></a>Inline parameters

To pass inline parameters, provide the values in `parameters`. For example, to pass a string and an array to a template in a Bash shell, use:
```azurecli
az group deployment create \
--resource-group testgroup \
--template-file demotemplate.json \
--parameters exampleString='inline string' exampleArray='("value1", "value2")'
```
If you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, pass the array in the format `exampleArray="['value1','value2']"`.
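For example, the same deployment as the earlier Bash example might look like the following sketch when run from PowerShell — only the array quoting changes:

```azurecli
az group deployment create --resource-group testgroup --template-file demotemplate.json --parameters exampleString='inline string' exampleArray="['value1','value2']"
```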
You can also get the contents of a file and provide that content as an inline parameter.
```azurecli
az group deployment create \
--resource-group testgroup \
--template-file demotemplate.json \
  --parameters exampleString=@stringContent.txt exampleArray=@arrayContent.json
```
Getting a parameter value from a file is helpful when you need to provide configuration values. For example, you can provide [cloud-init values for a Linux virtual machine](../../virtual-machines/linux/using-cloud-init.md).

The format of arrayContent.json is:
```json
[
"value1",
"value2"
]
```
### <a name="parameter-files"></a>Parameter files

Rather than passing parameters as inline values in your script, you may find it easier to use a JSON file that contains the parameter values. The parameter file must be a local file; external parameter files aren't supported with Azure CLI.

For more information about the parameter file, see [Create a Resource Manager parameter file](parameter-files.md).

To pass a local parameter file, use `@` to specify a local file named storage.parameters.json.
```azurecli-interactive
az group deployment create \
--name ExampleDeployment \
--resource-group ExampleGroup \
--template-file storage.json \
--parameters @storage.parameters.json
```
## <a name="handle-extended-json-format"></a>Handle extended JSON format

To deploy a template with multi-line strings or comments, you must use the `--handle-extended-json-format` switch. For example:
```json
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-10-01",
"name": "[variables('vmName')]", // to customize name, change it in variables
"location": "[
parameters('location')
]", //defaults to resource group location
/*
storage account and network interface
must be deployed first
*/
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
```
## <a name="test-a-template-deployment"></a>Test a template deployment

To test your template and parameter values without actually deploying any resources, use [az group deployment validate](/cli/azure/group/deployment#az-group-deployment-validate).
```azurecli-interactive
az group deployment validate \
--resource-group ExampleGroup \
--template-file storage.json \
--parameters @storage.parameters.json
```
If no errors are detected, the command returns information about the test deployment. In particular, notice that the **error** value is null.
```azurecli
{
"error": null,
"properties": {
...
```
If an error is detected, the command returns an error message. For example, passing an incorrect value for the storage account SKU returns the following error:
```azurecli
{
"error": {
"code": "InvalidTemplate",
"details": null,
"message": "Deployment template validation failed: 'The provided value 'badSKU' for the template parameter
'storageAccountType' at line '13' and column '20' is not valid. The parameter value is not part of the allowed
value(s): 'Standard_LRS,Standard_ZRS,Standard_GRS,Standard_RAGRS,Premium_LRS'.'.",
"target": null
},
"properties": null
}
```
If your template has a syntax error, the command returns an error indicating that it couldn't parse the template. The message indicates the line number and position of the parsing error.
```azurecli
{
"error": {
"code": "InvalidTemplate",
"details": null,
"message": "Deployment template parse failed: 'After parsing a value an unexpected character was encountered:
\". Path 'variables', line 31, position 3.'.",
"target": null
},
"properties": null
}
```
## <a name="next-steps"></a>Next steps

- To roll back to a successful deployment when you get an error, see [Roll back on error to successful deployment](rollback-on-error.md).
- To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md).
- To understand how to define parameters in your template, see [Understand the structure and syntax of Azure Resource Manager templates](template-syntax.md).
- For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
- For information about deploying a template that requires a SAS token, see [Deploy private template with SAS token](secure-template-with-sas-token.md).
- To safely roll out your service to more than one region, see [Azure Deployment Manager](deployment-manager-overview.md).
| 51.706897 | 713 | 0.773758 | spa_Latn | 0.950957 |
76425b9b8007ca688489d415ea5698b8852c614e | 2,144 | md | Markdown | src/catkin_projects/rlg_simulation/README.md | mpetersen94/spartan | 6998c959d46a475ff73ef38d3e83f8a9f3695e6d | [
"BSD-3-Clause-Clear"
] | null | null | null | src/catkin_projects/rlg_simulation/README.md | mpetersen94/spartan | 6998c959d46a475ff73ef38d3e83f8a9f3695e6d | [
"BSD-3-Clause-Clear"
] | null | null | null | src/catkin_projects/rlg_simulation/README.md | mpetersen94/spartan | 6998c959d46a475ff73ef38d3e83f8a9f3695e6d | [
"BSD-3-Clause-Clear"
] | null | null | null | Simulation Tools
=================
Provides tools for simulation in Spartan.
## Config files
This package provides configuration files describing
combinations of URDF files and their initial configurations,
as a way of specifying a simulation initial condition.
See `config/*` for examples. These are currently only
used by the passive simulation scripts in this directory, but
ought to eventually be piped into things like the IIWA simulation
(i.e. be able to support controlled robots too).
Each config file must list `models`, which correlates model
names (keys) to URDF files (values), with URDF filenames
resolved relative to `DRAKE_RESOURCE_ROOT` (due to a limitation
in Drake's `FindResource`).
Each config file must specify `with_ground`, i.e. whether the
sim should add its own ground plane.
Each config file must list `instances`, which correspond to
an instance of the specific model, at a given `q0`. Instances
may be `fixed` to indicate that they should not be dynamic.
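As a purely hypothetical illustration (the model names, file paths, and joint values below are invented — see `config/*` for the real examples), a config file might look like:

```yaml
models:
  table: "models/table.urdf"           # resolved under DRAKE_RESOURCE_ROOT
  mug: "models/mug.urdf"

with_ground: true

instances:
  - model: table
    q0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    fixed: true                        # static, not simulated dynamically
  - model: mug
    q0: [0.0, 0.0, 0.8, 0.0, 0.0, 0.0]
```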
## pybullet simulation
`scripts/pybullet_passive_simulation_from_config.py` takes
a configuration file, plus a timestep (-t) and/or sim rate (-r),
and simulates it in bullet. It'll pop up a view window and print
out the simulation rate. Invoke it directly or with `rosrun` --
it doesn't care about ROS or have many dependencies yet.
## drake simulation
`src/drake_passive_simulation_from_config.cc` links Drake to provide
a rigid body simulation. Run it with `--help` to see many arguments you
can provide.
## IIWA simulation (w/ Drake)
`src/iiwa_rlg_simulation/iiwa_rlg_simulation.cc` creates an IIWA simulation
using Drake and lots of support code (based on the KUKA IIWA simulation in
Drake examples). It is pretty hard-coded right now, but has some neat features:
- It simulates the ROS interface of the Schunk WSG50 gripper.
- It generates simulated RGBD images and publishes them on ROS topics. (It only
does images right now.)
- If you run it alongside a `kuka_plan_runner` and the IIWA manip app, you can
drive it. Launch the Director app first, then launch the sim, and you can use
the sim visualization to make the robot pick the object up.
| 38.285714 | 79 | 0.778451 | eng_Latn | 0.995948 |
7642920dc5c1b39b473683a788ceed6766d718c7 | 1,440 | md | Markdown | jekyll/_docs/installing-airbrake/installing-airbrake-in-an-angularjs-app.md | Mattlk13/airbrake-docs | 661b278e8ef0a81902ca8b017ed81f19ef793103 | [
"MIT"
] | null | null | null | jekyll/_docs/installing-airbrake/installing-airbrake-in-an-angularjs-app.md | Mattlk13/airbrake-docs | 661b278e8ef0a81902ca8b017ed81f19ef793103 | [
"MIT"
] | null | null | null | jekyll/_docs/installing-airbrake/installing-airbrake-in-an-angularjs-app.md | Mattlk13/airbrake-docs | 661b278e8ef0a81902ca8b017ed81f19ef793103 | [
"MIT"
] | null | null | null | ---
layout: classic-docs
title: Installing Airbrake in an AngularJS application
short-title: AngularJS
language: angularjs
categories: [installing-airbrake]
description: Installing Airbrake in an AngularJS application
---

{% include_relative airbrake-js/features.md %}
{% include_relative airbrake-js/installation.md %}
## Configuration
After you have installed the [airbrake-js notifier](https://github.com/airbrake/airbrake-js),
the next step is configuring Airbrake in your app. This involves initializing
an `airbrakeJs.Client` with your `projectId` and `projectKey` and adding an
[$exceptionHandler](https://docs.angularjs.org/api/ng/service/$exceptionHandler).
The following config snippet should be added to your `app.js`
file. Be sure to replace the values for `projectId` and `projectKey` with the
real values from your project's settings page.
```js
angular
.module('app')
.factory('$exceptionHandler', function ($log) {
var airbrake = new airbrakeJs.Client({
projectId: 1,
projectKey: 'FIXME'
});
airbrake.addFilter(function (notice) {
notice.context.environment = 'production';
return notice;
});
return function (exception, cause) {
$log.error(exception);
airbrake.notify({error: exception, params: {angular_cause: cause}});
};
});
```
{% include_relative airbrake-js/going-further.md %}
| 29.387755 | 92 | 0.729167 | eng_Latn | 0.736822 |
7642aae492e7c129a5e74efb2e8bafaefe0426dd | 1,122 | md | Markdown | dev-itpro/developer/methods/devenv-padleft-method-text.md | henrikwh/dynamics365smb-devitpro-pb | 340cc093a0c159bb33aae9f9bc3d3be1dc61b975 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dev-itpro/developer/methods/devenv-padleft-method-text.md | henrikwh/dynamics365smb-devitpro-pb | 340cc093a0c159bb33aae9f9bc3d3be1dc61b975 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dev-itpro/developer/methods/devenv-padleft-method-text.md | henrikwh/dynamics365smb-devitpro-pb | 340cc093a0c159bb33aae9f9bc3d3be1dc61b975 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "PADLEFT Method"
ms.author: SusanneWindfeldPedersen
ms.custom: na
ms.date: 10/01/2018
ms.reviewer: na
ms.suite: na
ms.tgt_pltfrm: na
ms.topic: article
ms.service: "dynamics365-business-central"
ms.assetid: 620f0e32-eadc-43e9-8f6e-8fc0b12c3aaf
caps.latest.revision: 1
manager: edupont
author: SusanneWindfeldPedersen
redirect_url: /dynamics365/business-central/dev-itpro/developer/methods-auto/library
---
# PADLEFT Method
Returns a new Text that right-aligns the characters in this instance by padding them on the left, for a specified total length.
```
Result := Text.PADLEFT(Count, [Char])
```
## Parameters
*Count*
 Type: Integer
The number of characters in the resulting string, equal to the number of original characters plus any additional padding characters.
*Char*
 Type: Char
A padding character.
## Return Value
*Result*
 Type: Text
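A minimal sketch of a call in AL (the procedure, variable names, and values are illustrative only):

```al
procedure PadExample()
var
    Amount: Text;
begin
    Amount := '42';
    // Pads on the left with '0' to a total length of 8: '00000042'
    Message(Amount.PADLEFT(8, '0'));
end;
```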
## See Also
[Text Data Type](../datatypes/devenv-text-data-type.md)
[Getting Started with AL](../devenv-get-started.md)
[Developing Extensions Using the New Development Environment](../devenv-dev-overview.md)
| 28.05 | 134 | 0.745098 | eng_Latn | 0.595394 |
7643360acca2dcb9219ba154d93f2e566a374e85 | 1,056 | md | Markdown | examples/multiple_nat/README.md | boldlink/terraform-aws-vpc | ea85cda97b0ea5b4e86adcf80176c1128846bb5b | [
"Apache-2.0"
] | null | null | null | examples/multiple_nat/README.md | boldlink/terraform-aws-vpc | ea85cda97b0ea5b4e86adcf80176c1128846bb5b | [
"Apache-2.0"
] | 1 | 2022-03-31T14:02:53.000Z | 2022-03-31T14:02:53.000Z | examples/multiple_nat/README.md | boldlink/terraform-aws-vpc | ea85cda97b0ea5b4e86adcf80176c1128846bb5b | [
"Apache-2.0"
] | null | null | null | # multiple_nat
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements
No requirements.
## Providers
| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | 4.7.0 |
## Modules
| Name | Source | Version |
|------|--------|---------|
| <a name="module_single_nat_vpc"></a> [single\_nat\_vpc](#module\_single\_nat\_vpc) | ./../.. | n/a |
## Resources
| Name | Type |
|------|------|
| [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
| [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source |
## Inputs
No inputs.
## Outputs
| Name | Description |
|------|-------------|
| <a name="output_outputs"></a> [outputs](#output\_outputs) | n/a |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
| 27.789474 | 151 | 0.650568 | yue_Hant | 0.573433 |
7643a1f8eaa58bf42c7d0844d3016b992dccfa46 | 836 | md | Markdown | api/Excel.XlLinkStatus.md | ryanmajidi/VBA-Docs | 8b07050f4ff38fcabda606284ec5f6f6634e9569 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Excel.XlLinkStatus.md | ryanmajidi/VBA-Docs | 8b07050f4ff38fcabda606284ec5f6f6634e9569 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Excel.XlLinkStatus.md | ryanmajidi/VBA-Docs | 8b07050f4ff38fcabda606284ec5f6f6634e9569 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: xlLinkStatus enumeration (Excel)
ms.prod: excel
api_name:
- Excel.XlLinkStatus
ms.assetid: 3bcf7b71-bd31-4073-2e4c-aa8643e5be2b
ms.date: 06/08/2017
---
# xlLinkStatus enumeration (Excel)
Specifies the status of a link.
|Name|Value|Description|
|:-----|:-----|:-----|
| **xlLinkStatusCopiedValues**|10|Copied values.|
| **xlLinkStatusIndeterminate**|5|Unable to determine status.|
| **xlLinkStatusInvalidName**|7|Invalid name.|
| **xlLinkStatusMissingFile**|1|File missing.|
| **xlLinkStatusMissingSheet**|2|Sheet missing.|
| **xlLinkStatusNotStarted**|6|Not started.|
| **xlLinkStatusOK**|0|No errors.|
| **xlLinkStatusOld**|3|Status may be out of date.|
| **xlLinkStatusSourceNotCalculated**|4|Not yet calculated.|
| **xlLinkStatusSourceNotOpen**|8|Not open.|
| **xlLinkStatusSourceOpen**|9|Source document is open.|
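The following VBA sketch shows one way these constants might be used; it assumes a workbook with a link to a file named "Report.xlsx" (the link name, and querying **Workbook.LinkInfo** with **xlLinkInfoStatus**, are assumptions to adapt to your workbook):

```vb
Sub CheckLinkStatus()
    Dim status As Long
    ' "Report.xlsx" is a placeholder for an actual linked workbook name.
    status = ActiveWorkbook.LinkInfo("Report.xlsx", xlLinkInfoStatus)

    If status = xlLinkStatusMissingFile Then
        MsgBox "The linked file is missing."
    ElseIf status = xlLinkStatusOK Then
        MsgBox "The link has no errors."
    End If
End Sub
```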
| 26.967742 | 62 | 0.726077 | yue_Hant | 0.722576 |
76444d1dcb6bee5ea9d03f1a50a14a9babfc4ec2 | 1,242 | md | Markdown | design_docs.md | Ocete/ECS-153-Final-project | b72ce2a109abbf4c0693f909681cb5dfa2e3b88f | [
"MIT"
] | 2 | 2020-05-02T13:22:19.000Z | 2020-05-13T16:47:53.000Z | design_docs.md | Ocete/ECS-153-Final-project | b72ce2a109abbf4c0693f909681cb5dfa2e3b88f | [
"MIT"
] | null | null | null | design_docs.md | Ocete/ECS-153-Final-project | b72ce2a109abbf4c0693f909681cb5dfa2e3b88f | [
"MIT"
] | 1 | 2020-06-11T03:31:47.000Z | 2020-06-11T03:31:47.000Z | # Torzela Design Documents
## General Overview Design Doc
This design doc provides a basic overview of the project as well as some basic background on how Vuvuzela works.
https://docs.google.com/document/d/1VZZLnERFplL6Zhgi-mZC8WDIzLizYSsjkA2ecijUYr0/edit?usp=sharing
## Networking Subsystem (Matthew Donnelly)
The networking subsystem handles the exchange of information between hosts.
https://docs.google.com/document/d/1ahEHM-SRMf0zXW8gQfC6h4ikLVPCOtSuqnsOtEQ36Nk/edit?usp=sharing
## Conversation Protocol Subsystem (Jose Antonio Alvarez Ocete)
This subsystem handles the encryption within onion routing, mixing networks on each chain and noise generation.
https://docs.google.com/document/d/1JHVycZ2zotkthxrbSYLup1sxdetJZ5CLNzX9JL5hSAc/edit?usp=sharing
## Dead Drops Subsystem (Edric Tom)
This subsystem handles the message exchange process, where one client deposits a message and another client picks it up. This ensures that adversaries are unable to identify who is talking to whom by looking at access patterns.
https://docs.google.com/document/d/1UwTi2pzKUIhbZcCzO_i3td9yvJVwlYcYBza1qwFD8DY/edit?usp=sharing
## Dialing Protocol Subsystem (Skyler Bala)
[TODO]: complete with basic description of the subsystem.
| 47.769231 | 231 | 0.81723 | eng_Latn | 0.854873 |
7644e3b48b19115022bf40ef601dd2ee96ab0324 | 1,611 | md | Markdown | README.md | Godswilly/study-tracker | 7a03fc52c03edfad41139be8c3deb766ac648df6 | [
"Apache-1.1"
] | 1 | 2021-08-16T14:12:56.000Z | 2021-08-16T14:12:56.000Z | README.md | Godswilly/study-tracker | 7a03fc52c03edfad41139be8c3deb766ac648df6 | [
"Apache-1.1"
] | null | null | null | README.md | Godswilly/study-tracker | 7a03fc52c03edfad41139be8c3deb766ac648df6 | [
"Apache-1.1"
] | null | null | null | # Study Tracker
> A backend API built to store and dispatch data to the frontend UI
## Features
1. User can keep track of the number of hours spent studying
2. User can identify if the time spent studying is less than or more than their target
## Future Feature (v1.12)
1. Add more functionalities
## Built With
- Ruby
- Ruby on Rails API
- Heroku
- Postgresql
- Rspec
## Getting Started
To get a local copy up and running follow these simple example steps.
1. `git clone https://github.com/Godswilly/study-tracker.git`
2. `cd study-tracker`
3. `bundle install`
4. `rails db:create`
5. `rails db:migrate`
6. `rails s`
### Prerequisites
- Ruby v2.7.0
- Heroku
- Rails v6.0.3
## RSpec Test
- run `rspec`
### Deployment
- `heroku create`
- `git push heroku master`
- `heroku run rake db:migrate`
- `heroku open`
## Author
👤 **Kalu Agu Kalu**
- [Github](https://github.com/Godswilly)
- [Twitter](https://twitter.com/KaluAguKalu17)
- [Linkedin](https://www.linkedin.com/in/kaluagukalu/)
## 🤝 Contributing
Contributions, issues, and feature requests are welcome!
Feel free to check the [issues page](https://github.com/Godswilly/study-tracker/issues).
1. [Fork it](https://github.com/Godswilly/study-tracker/fork)
2. Create your feature branch (git checkout -b my-new-feature)
3. Commit your changes (git commit -am 'Add some feature')
4. Push to the branch (git push origin my-new-feature)
5. Create a new Pull Request
## Show your support
Give a ⭐️ if you like this project!
## Acknowledgments
- [Microverse](https://www.microverse.org/)
## 📝 License
This project is [Apache](lic.url) licensed.
| 19.178571 | 88 | 0.715705 | eng_Latn | 0.824956 |
764586cb30f5c24b22fa3c61bc63466aa2f5d12a | 1,440 | md | Markdown | _includes/templates/install/rhel-db-postgresql.md | shryu84/thingsboard.github.io | c6ed44e0b34113aa2d797cc2101f7463cdde2672 | [
"Apache-2.0"
] | null | null | null | _includes/templates/install/rhel-db-postgresql.md | shryu84/thingsboard.github.io | c6ed44e0b34113aa2d797cc2101f7463cdde2672 | [
"Apache-2.0"
] | 1 | 2021-05-20T19:03:05.000Z | 2021-05-20T19:03:05.000Z | _includes/templates/install/rhel-db-postgresql.md | shryu84/thingsboard.github.io | c6ed44e0b34113aa2d797cc2101f7463cdde2672 | [
"Apache-2.0"
] | 1 | 2019-11-22T09:41:07.000Z | 2019-11-22T09:41:07.000Z | {% capture postgresql-info %}
The ThingsBoard team recommends using PostgreSQL for development and production environments with reasonable load (< 5000 msg/sec).
Many cloud vendors offer managed PostgreSQL servers, which is a cost-effective solution for most ThingsBoard instances.
{% endcapture %}
{% include templates/info-banner.md content=postgresql-info %}
##### PostgreSQL Installation
{% include templates/install/postgres-install-rhel.md %}
{% include templates/install/create-tb-db-rhel.md %}
##### ThingsBoard Configuration
Edit the ThingsBoard configuration file:
```bash
sudo nano /etc/thingsboard/conf/thingsboard.conf
```
{: .copy-code}
Add the following lines to the configuration file. Don't forget **to replace** "PUT_YOUR_POSTGRESQL_PASSWORD_HERE" with your **real postgres user password**:
```bash
# DB Configuration
export DATABASE_ENTITIES_TYPE=sql
export DATABASE_TS_TYPE=sql
export SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.PostgreSQLDialect
export SPRING_DRIVER_CLASS_NAME=org.postgresql.Driver
export SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/thingsboard
export SPRING_DATASOURCE_USERNAME=postgres
export SPRING_DATASOURCE_PASSWORD=PUT_YOUR_POSTGRESQL_PASSWORD_HERE
export SPRING_DATASOURCE_MAXIMUM_POOL_SIZE=5
# Specify partitioning size for timestamp key-value storage. Allowed values: DAYS, MONTHS, YEARS, INDEFINITE.
export SQL_POSTGRES_TS_KV_PARTITIONING=MONTHS
```
{: .copy-code} | 38.918919 | 157 | 0.817361 | yue_Hant | 0.35282 |
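Once ThingsBoard is installed as a service, you can apply the new settings with a restart and verify that the service comes up. This sketch assumes the systemd unit is named `thingsboard`:

```bash
sudo systemctl restart thingsboard
sudo systemctl status thingsboard
```
{: .copy-code}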
7645e501b418c907bd557fd0883b6681befa7ec7 | 2,009 | md | Markdown | content/en/docs/managing-jx/common-tasks/helm3.md | juan131/jx-docs | 606845f37576ec7c1a1731cfe6cdb5a2cfafd686 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | content/en/docs/managing-jx/common-tasks/helm3.md | juan131/jx-docs | 606845f37576ec7c1a1731cfe6cdb5a2cfafd686 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | content/en/docs/managing-jx/common-tasks/helm3.md | juan131/jx-docs | 606845f37576ec7c1a1731cfe6cdb5a2cfafd686 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | ---
title: Helm 3
linktitle: Helm 3
description: Using Helm 3 with Jenkins X
weight: 110
---
Jenkins X uses [Helm](https://www.helm.sh/) both to install Jenkins X itself and to install the applications you create in each of the [Environments](/docs/concepts/features/#environments) (like `Staging` and `Production`)
**NOTE** until Helm 3 is GA we highly recommend folks use [Helm 2.x without Tiller](/news/helm-without-tiller/)
Helm 3 is currently under development and brings a number of great improvements:
* it removes the server-side component, Tiller, so that `helm install` uses the current user/ServiceAccount's RBAC
* releases become namespace-aware, avoiding the need to come up with globally unique release names
At the time of writing helm 3 is still early in its development but to improve feedback we've added support for Helm 2 and Helm 3 into Jenkins X.
You can use either helm 2 or helm 3 to do either of these things:
* install Jenkins X itself
* install your apps into your `Staging` and `Production` environments
e.g. you could use helm 2 to install Jenkins X then use helm 3 for your `Staging` and `Production` environments.
Here's how to specify which helm to use.
## Using helm 3 to install Jenkins X
When installing Jenkins X via `jx create cluster ...` or `jx install` you can specify `--helm3` to use helm 3 instead of helm 2.x.
If you install with helm 2 then your team will default to using helm 2 for its releases. If you install with helm 3 then your team will default to also use helm 3.
To change the version of helm used by your team use [jx edit helmbin](/commands/jx_edit_helmbin/) :
```
jx edit helmbin helm3
```
or to switch to helm 2:
```
jx edit helmbin helm
```
You can view the current settings for your team via [jx get helmbin](/commands/jx_get_helmbin/):
```
jx get helmbin
```
Basically the [pod templates](/docs/managing-jx/common-tasks/pod-templates/) contain both the binaries:
* `helm` which is a 2.x distro of helm
* `helm3` which is a 3.x distro of helm | 35.875 | 215 | 0.750124 | eng_Latn | 0.999172 |
764618407819d42b11aec319303483950d55f243 | 3,631 | md | Markdown | vendor/github.com/hashicorp/vault/website/source/docs/audit/index.html.md | goavro/wednesday | 5d9cc312d1ea1bbaeed086191ab94338573e094f | [
"Apache-2.0"
] | 5 | 2016-05-10T15:11:29.000Z | 2017-07-11T11:56:46.000Z | vendor/github.com/hashicorp/vault/website/source/docs/audit/index.html.md | yanzay/wednesday | 5d9cc312d1ea1bbaeed086191ab94338573e094f | [
"Apache-2.0"
] | 1 | 2016-05-31T12:56:52.000Z | 2016-05-31T12:57:35.000Z | vendor/github.com/hashicorp/vault/website/source/docs/audit/index.html.md | yanzay/wednesday | 5d9cc312d1ea1bbaeed086191ab94338573e094f | [
"Apache-2.0"
] | 7 | 2016-05-10T15:12:12.000Z | 2021-01-27T01:12:34.000Z | ---
layout: "docs"
page_title: "Audit Backends"
sidebar_current: "docs-audit"
description: |-
Audit backends are mountable backends that log requests and responses in Vault.
---
# Audit Backends
Audit backends are the components in Vault that keep a detailed log
of all requests and responses to Vault. Because _every_ operation with
Vault is an API request/response, the audit log contains _every_ interaction
with Vault, including errors.
Vault ships with multiple audit backends, depending on the location you want
the logs sent to. Multiple audit backends can be enabled, and Vault will send
the audit logs to all of them. This gives you not only a redundant copy,
but also a second copy in case the first is tampered with.
## Sensitive Information
The audit logs contain the full request and response objects for every
interaction with Vault. The data in the request and the data in the
response (including secrets and authentication tokens) will be hashed
with a salt using HMAC-SHA256.
The purpose of the hash is so that secrets aren't in plaintext within your
audit logs. However, you're still able to check the value of secrets by
generating HMACs yourself; this can be done with the audit backend's hash
function and salt by using the `/sys/audit-hash` API endpoint (see the
documentation for more details).
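For example, with a `file` audit backend enabled (as shown in the next section), a sketch of hashing a value to compare against the log might look like this — the mount path and input are placeholders:

```
$ vault write sys/audit-hash/file input=my-secret-value
```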
## Enabling/Disabling Audit Backends
When a Vault server is first initialized, no auditing is enabled. Audit
backends must be enabled by a root user using `vault audit-enable`.
When enabling an audit backend, options can be passed to it to configure it.
For example, the command below enables the file audit backend:
```
$ vault audit-enable file path=/var/log/vault_audit.log
...
```
In the command above, we passed the "path" parameter to specify the path
where the audit log will be written to. Each audit backend has its own
set of parameters. See the documentation to the left for more details.
When an audit backend is disabled, it will stop receiving logs immediately.
The existing logs that it did store are untouched.
## Blocked Audit Backends
If there are any audit backends enabled, Vault requires that at least
one be able to persist the log before completing a Vault request.
If you have only one audit backend enabled, and it is blocking (network
block, etc.), then Vault will be _unresponsive_. Vault _will not_ complete
any requests until the audit backend can write.
If you have more than one audit backend, then Vault will complete the request
as long as one audit backend persists the log.
Vault will not respond to requests if audit backends are blocked because
audit logs are critically important and ignoring blocked requests opens
an avenue for attack. Be absolutely certain that your audit backends cannot
block.
## API
### /sys/audit/[path]
#### POST
<dl class="api">
<dt>Description</dt>
<dd>
Enables audit backend at the specified path.
</dd>
<dt>Method</dt>
  <dd>POST</dd>

  <dt>Parameters</dt>
  <dd>
    <ul>
      <li>
        <span class="param">type</span>
        <span class="param-flags">required</span>
        The type of the audit backend to enable (for example, `file`
        or `syslog`).
      </li>
</li>
<li>
<span class="param">description</span>
<span class="param-flags">optional</span>
A description.
</li>
<li>
<span class="param">options</span>
<span class="param-flags">optional</span>
Configuration options of the backend in JSON format.
Refer to `syslog` and `file` audit backend options.
</li>
</ul>
</dd>
</dl>
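As a sketch, the HTTP API equivalent of the earlier `vault audit-enable file` command might look like the following — the address, token, and option names are placeholders to adapt to your deployment:

```
$ curl \
    -H "X-Vault-Token: $VAULT_TOKEN" \
    -X POST \
    -d '{"type": "file", "options": {"path": "/var/log/vault_audit.log"}}' \
    https://vault.example.com:8200/v1/sys/audit/file
```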
| 33.934579 | 81 | 0.73561 | eng_Latn | 0.997262 |
764656eacee1048e73fe8f1b660d233ed62751ac | 1,727 | md | Markdown | user-guide/queries/querying_jira_jql.md | cloudron-io/website | 167a2ae1bd31177073d368e2c8a1cd018bba7349 | [
"BSD-2-Clause"
] | null | null | null | user-guide/queries/querying_jira_jql.md | cloudron-io/website | 167a2ae1bd31177073d368e2c8a1cd018bba7349 | [
"BSD-2-Clause"
] | null | null | null | user-guide/queries/querying_jira_jql.md | cloudron-io/website | 167a2ae1bd31177073d368e2c8a1cd018bba7349 | [
"BSD-2-Clause"
] | 1 | 2020-05-09T21:26:22.000Z | 2020-05-09T21:26:22.000Z | ---
description: Connect JQL to Redash easily and query in JQL, visualize and share it in moments.
---
# Querying JIRA (JQL)
Simple query, just return issues with no filtering:
```
{
}
```
Return only specific fields:
```
{
"fields": "summary,priority"
}
```
Return only specific fields and filter by priority:
```
{
"fields": "summary,priority",
"jql": "priority=medium"
}
```
Count number of issues with `priority=medium`:
```
{
"queryType": "count",
"jql": "priority=medium"
}
```
You can also use the field mapping to rename a field for the result - this is useful when working with custom fields:
```
{
"fields": "summary,priority,customfield_10672",
"jql": "priority=medium",
"fieldMapping": {
"customfield_10672": "my_custom_field_name"
}
}
```
Some fields returned by JIRA are JSON objects with multiple properties. You can define a field mapping to pick a specific member property you want to return (in this example, the 'id' member of the 'priority' field):
```
{
"fields": "summary,priority",
"jql": "priority=medium",
"fieldMapping": {
"priority.id": "priority"
}
}
```
More complex example combining the different filter options:
```
{
"fields": "summary,priority,customfield_10672,resolutiondate,fixVersions,watches,labels",
"jql": "project = MYPROJ AND resolution = unresolved ORDER BY priority DESC, key ASC",
"maxResults": 30,
"fieldMapping": {
"customfield_10672": "my_custom_field_name",
"priority.id": "priority",
"fixVersions.name": "my_fix_version",
"fixVersions.id": "my_fix_version_id"
}
}
```
If a field contains a list of values, all of them are returned, concatenated with ",".
| 21.320988 | 211 | 0.665316 | eng_Latn | 0.968886 |
7647570e89f257e4e740ad0b3304626c12a2e5cd | 413 | md | Markdown | _site/shorts/2021-09-27-11-55-MilkDown.md | planetoftheweb/raybo.org | fbc2226bd8f47ebb9760f69186fa52284e0112de | [
"MIT"
] | 81 | 2015-01-14T00:27:24.000Z | 2022-02-19T11:03:49.000Z | _site/shorts/2021-09-27-11-55-MilkDown.md | planetoftheweb/raybo.org | fbc2226bd8f47ebb9760f69186fa52284e0112de | [
"MIT"
] | 29 | 2017-12-26T17:45:33.000Z | 2019-08-26T22:15:55.000Z | _site/shorts/2021-09-27-11-55-MilkDown.md | planetoftheweb/raybo.org | fbc2226bd8f47ebb9760f69186fa52284e0112de | [
"MIT"
] | 48 | 2015-02-06T21:15:44.000Z | 2022-03-25T15:31:34.000Z | ---
layout: post.njk
title: "MilkDown-Pluggable Markdown Editor"
summary: "Lightweight and powerful, I didn't think I would like this markdown editor, but it's pretty terrific. LaTeX math equations, table support, copy-and-paste markdown, collaboration, and emoji support."
thumb: "http://pixelprowess.com/i/2021-09-27_11-56-37.png"
links:
- website: "https://milkdown.dev"
category: shorts
tags:
- external
--- | 37.545455 | 207 | 0.755448 | eng_Latn | 0.838058 |
76476688e017ff2f20986303cfaa74cde60613a7 | 2,888 | md | Markdown | README.md | johnthethird/TutorialSinatra | 85d288ff9796161cf44a3921c3d307cf4418b0e0 | [
"MIT"
] | null | null | null | README.md | johnthethird/TutorialSinatra | 85d288ff9796161cf44a3921c3d307cf4418b0e0 | [
"MIT"
] | null | null | null | README.md | johnthethird/TutorialSinatra | 85d288ff9796161cf44a3921c3d307cf4418b0e0 | [
"MIT"
] | null | null | null |
# TutorialSinatra
This tutorial will step through a simple Sinatra Ruby app, with each commit adding features and functionality. So start from the first commit and then checkout each commit to see the progress. Notes will be added to this README file as we go along.
## Step 1
- Install Linux for Windows (WSL VERSION 2!) https://docs.microsoft.com/en-us/windows/wsl/install-win10
- Then download Ubuntu 20.04 LTS from Windows Store https://www.microsoft.com/store/productId/9N6SVWS3RX71
So now you have Ubuntu Linux running on Windows. Double-clicking on the Ubuntu icon will put you "inside" the Ubuntu virtual machine, so it's kind of "separate" from Windows. When Googling for help, anything about Ubuntu should work.
- Download Windows Terminal https://www.microsoft.com/store/productId/9N0DX20HK701
When you run Windows Terminal, use the arrow drop down to select Ubuntu
- Inside Ubuntu, run these commands:
```
sudo apt update
sudo apt upgrade -y
sudo apt install git curl libssl-dev libreadline-dev zlib1g-dev autoconf bison build-essential libyaml-dev libreadline-dev libncurses5-dev libffi-dev libgdbm-dev
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
exec $SHELL
rbenv install 2.7.2
rbenv global 2.7.2
ruby -v
```
Ruby uses "gems" to package and distribute code. To see a list of installed gems, `gem list` or `gem info` for more detail. If you `gem install whatever` it will install the code into the global gem list. But each project you are working on can also have its own private gem list, which is a best practice. So for this we use `bundler`.
Bundler works by having a file called Gemfile in a directory, in which you declare what gems you want to use; then you run `bundle` to install them. Go to the next commit to see this happen.
## Step 2
OK so we added a Gemfile, added the Sinatra gem, and ran `bundle`, which installed the code and created a Gemfile.lock file, which specifies the exact versions used. At any time you can delete the Gemfile.lock and re-run bundle to get the latest versions of everything.
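If you want to reproduce that commit from scratch, the Gemfile is only a couple of lines (a sketch — check the actual commit for the real file):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'sinatra'
```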
## Step 3
Now we added the main `app.rb` and some views. You can run the app with
`ruby app.rb`, then go to "http://localhost:4567" in a browser.
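A minimal `app.rb` for this step looks roughly like the following sketch (the route and view names are assumptions — the actual commit is the source of truth):

```ruby
# app.rb
require 'sinatra'

get '/' do
  erb :index   # renders views/index.erb
end
```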
# Resources
Here is a page with a lot of Ruby / Dev resources to go through
[Ruby Resources](https://www.notion.so/Ruby-Resources-80758ead48694352b68d93e6bba205ef)
# GitPod
Click the button to launch the project on GitPod
[](https://gitpod.io/#https://github.com/johnthethird/TutorialSinatra)
| 44.430769 | 336 | 0.755194 | eng_Latn | 0.930839 |
76478b4bc946c2774a61cd7d7d303e0f26585efa | 1,101 | md | Markdown | api/qsharp/microsoft.quantum.math.argcomplexpolar.md | MicrosoftDocs/quantum-docs-pr.zh-CN | 1806cbf948b8a33a63155ed8e2735ef00972ecc4 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2020-03-30T04:57:50.000Z | 2021-01-18T00:09:20.000Z | api/qsharp/microsoft.quantum.math.argcomplexpolar.md | MicrosoftDocs/quantum-docs-pr.zh-CN | 1806cbf948b8a33a63155ed8e2735ef00972ecc4 | [
"CC-BY-4.0",
"MIT"
] | 8 | 2020-03-12T23:57:52.000Z | 2021-01-17T23:51:11.000Z | api/qsharp/microsoft.quantum.math.argcomplexpolar.md | MicrosoftDocs/quantum-docs-pr.zh-CN | 1806cbf948b8a33a63155ed8e2735ef00972ecc4 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-01-17T23:24:50.000Z | 2021-11-15T09:23:22.000Z | ---
uid: Microsoft.Quantum.Math.ArgComplexPolar
title: ArgComplexPolar function
ms.date: 1/23/2021 12:00:00 AM
ms.topic: article
qsharp.kind: function
qsharp.namespace: Microsoft.Quantum.Math
qsharp.name: ArgComplexPolar
qsharp.summary: Returns the phase of a complex number of type `ComplexPolar`.
ms.openlocfilehash: 5809b5463e6bf8ee73d095dfe4eafedfbb5e0702
ms.sourcegitcommit: 71605ea9cc630e84e7ef29027e1f0ea06299747e
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 01/26/2021
ms.locfileid: "98852899"
---
# <a name="argcomplexpolar-function"></a>ArgComplexPolar function

Namespace: [Microsoft.Quantum.Math](xref:Microsoft.Quantum.Math)

Package: [Microsoft.Quantum.Standard](https://nuget.org/packages/Microsoft.Quantum.Standard)

Returns the phase of a complex number of type `ComplexPolar`.
```qsharp
function ArgComplexPolar (input : Microsoft.Quantum.Math.ComplexPolar) : Double
```
## <a name="input"></a>Input

### <a name="input--complexpolar"></a>input : [ComplexPolar](xref:Microsoft.Quantum.Math.ComplexPolar)

A complex number $c = r e^{i t}$.

## <a name="output--double"></a>Output : [Double](xref:microsoft.quantum.lang-ref.double)

The phase $\text{Arg}[c] = t$.
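A minimal usage sketch (this assumes the standard `ComplexPolar` constructor taking a magnitude and an argument):

```qsharp
open Microsoft.Quantum.Math;

function ArgComplexPolarDemo() : Double {
    let c = ComplexPolar(2.0, 0.5); // r = 2.0, t = 0.5
    return ArgComplexPolar(c);      // returns 0.5
}
```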
7647a129fd2a329ddbae2b11d20b9ace3fc8845f | 1,582 | md | Markdown | docs/visual-basic/language-reference/modifiers/assembly.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/modifiers/assembly.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/modifiers/assembly.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Assembly
ms.date: 07/20/2015
f1_keywords:
- vb.Assembly
- vb.AssemblyAttribute
- Assembly
helpviewer_keywords:
- Assembly modifier
- Assembly keyword [Visual Basic]
- attribute blocks, Assembly keyword
ms.assetid: 925e7471-3bdf-4b51-bb93-cbcfc6efc52f
ms.openlocfilehash: 7d313dee1015362bd0215ed98ab7e898312cfbcd
ms.sourcegitcommit: f8c270376ed905f6a8896ce0fe25b4f4b38ff498
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/04/2020
ms.locfileid: "84373154"
---
# <a name="assembly-visual-basic"></a>Assembly (Visual Basic)
Specifies that an attribute at the beginning of a source file applies to the entire assembly.

## <a name="remarks"></a>Remarks

Many attributes pertain to an individual programming element, such as a class or property. You apply such an attribute by attaching the attribute block, between angle brackets (`< >`), directly to the declaration statement.

If an attribute pertains not just to the following element but to the entire assembly, you place the attribute block at the beginning of the source file and identify the attribute with the `Assembly` keyword. If it applies to the current assembly module, you use the [Module](module-keyword.md) keyword.

You can also apply an attribute to an assembly in the AssemblyInfo.vb file, in which case you don't have to use an attribute block in your main source-code file.
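For example, a minimal sketch of an assembly-level attribute at the top of a source file (the title string is arbitrary):

```vb
Imports System.Reflection

<Assembly: AssemblyTitle("MyAssembly")>
```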
## <a name="see-also"></a>See also

- [Module \<keyword>](module-keyword.md)
- [Attributes overview](../../programming-guide/concepts/attributes/index.md)
| 46.529412 | 307 | 0.774968 | por_Latn | 0.992255 |
7647ee437865ae5bbcfd3ee7bbf508908a8c6fc9 | 1,830 | md | Markdown | plots/parameters/README.md | dmvandam/V928tau | 537112bc296a29a1c6849123f81e05ff75cb70f7 | [
"BSD-2-Clause"
] | null | null | null | plots/parameters/README.md | dmvandam/V928tau | 537112bc296a29a1c6849123f81e05ff75cb70f7 | [
"BSD-2-Clause"
] | null | null | null | plots/parameters/README.md | dmvandam/V928tau | 537112bc296a29a1c6849123f81e05ff75cb70f7 | [
"BSD-2-Clause"
] | null | null | null | # Parameters
This folder contains all the plots produced by orbital analysis.
It shows the available masses and periods for the companion based on the results of the eclipse modelling.
The most constraining parameter is the transverse velocity.
Also of significant importance is whether the companion orbits V928 Tau A or B (due to the radius) and whether we use the mass obtained from the standard or magnetic models.
### Restrictions
Some of the considerations are listed below:
1. the apastron passage must be less than 10% of the binary separation (3.2 au)
This is for stability of the orbit.
If the companion exceeds this distance, the orbit would become unstable due to the binary (we assume a stable orbit).
2. the total disk radius must be less than 0.3 times the Hill radius
This is for stability of the disk.
If the disk were larger, it would fall apart after far fewer orbits around the star.
Note that we evaluate the Hill radius at periastron passage.
3. the mass of the companion must be less than the substellar mass limit (80 Mjup)
This is due to the spectral energy distribution / imaging.
If the companion were larger than 80 Mjup it would start glowing significantly causing it to be visible in high-resolution imaging (depending on its distance), but it would also have an influence on the SED.
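Written compactly, the three restrictions above are (with $a_{\mathrm{bin}}$ the binary separation and the Hill radius evaluated at periastron):

$$r_{\mathrm{apo}} < 0.1\, a_{\mathrm{bin}} = 3.2~\mathrm{au}, \qquad R_{\mathrm{disk}} < 0.3\, R_{\mathrm{Hill}}(r_{\mathrm{peri}}), \qquad M_{\mathrm{comp}} < 80~M_{\mathrm{Jup}}$$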
### Notes
From these restrictions we see that the curve at the bottom of each parameter map is carved out by the Hill radius requirement.
We also see that the apastron passage requirement carves out the right side of the parameter map.
### Folders
opaque_fast: the subset of chains that ended up with velocities greater than 8 $R_*$ day$^{-1}$
opaque_slow: the subset of chains that ended up with velocities lower than 8 $R_*$ day$^{-1}$ and with disk radii less than 1.5 $R_*$
| 45.75 | 207 | 0.779781 | eng_Latn | 0.999809 |
7647f3895bbabf8affb6dfc179d30668bfdfdb95 | 865 | md | Markdown | README.md | pplonski/nlp-apps-mercury | 988bd680aab6dd23d077a483afb4471ba66c09c9 | [
"MIT"
] | 12 | 2022-02-18T10:22:25.000Z | 2022-03-24T15:41:54.000Z | README.md | pplonski/nlp-apps-mercury | 988bd680aab6dd23d077a483afb4471ba66c09c9 | [
"MIT"
] | null | null | null | README.md | pplonski/nlp-apps-mercury | 988bd680aab6dd23d077a483afb4471ba66c09c9 | [
"MIT"
] | null | null | null | # 👋 NLP Apps with Mercury

Natural Language Processing (NLP) web apps built with the [Mercury](https://github.com/mljar/mercury) framework, Jupyter Notebook, and [SpaCy](https://github.com/explosion/spaCy).
Mercury allows you to convert your notebooks into web apps by adding a YAML header. Based on that YAML, widgets are generated for the notebook.
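As a sketch, such a header (placed in the first raw cell of the notebook) can look like this; the field names follow Mercury's documented pattern, but treat the exact schema as an assumption:

```yaml
---
title: My first app
description: Notebook converted to a web app
params:
    greeting:
        input: text
        label: Who should we greet?
        value: World
---
```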
## 🖥️ Web Application
The application is running on Heroku at [https://nlp-apps-mercury.herokuapp.com/](https://nlp-apps-mercury.herokuapp.com/).
## 🔗 Useful links
- Mercury framework code at [GitHub](https://github.com/mljar/mercury)
- Mercury [website](https://mljar.com/mercury)
## 🚀 Demo

| 41.190476 | 178 | 0.765318 | eng_Latn | 0.545098 |
7648d331806fe81d2c14c1183ae591f7f5f46334 | 4,705 | md | Markdown | docs/extensibility/how-to-troubleshoot-services.md | angelobreuer/visualstudio-docs.de-de | f553469c026f7aae82b7dc06ba7433dbde321350 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/how-to-troubleshoot-services.md | angelobreuer/visualstudio-docs.de-de | f553469c026f7aae82b7dc06ba7433dbde321350 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-24T14:57:38.000Z | 2020-07-24T14:57:38.000Z | docs/extensibility/how-to-troubleshoot-services.md | angelobreuer/visualstudio-docs.de-de | f553469c026f7aae82b7dc06ba7433dbde321350 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Gewusst wie: Troubleshoot Services | Microsoft Docs'
ms.date: 11/04/2016
ms.topic: conceptual
helpviewer_keywords:
- services, troubleshooting
ms.assetid: 001551da-4847-4f59-a0b2-fcd327d7f5ca
author: acangialosi
ms.author: anthc
manager: jillfra
ms.workload:
- vssdk
ms.openlocfilehash: 49560acdf57f5dad2c57f2a8e4649f194d6d8298
ms.sourcegitcommit: 16a4a5da4a4fd795b46a0869ca2152f2d36e6db2
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/06/2020
ms.locfileid: "80710750"
---
# <a name="how-to-troubleshoot-services"></a>Gewusst wie: Fehlerbehebungsdienste
Es gibt mehrere häufige Probleme, die auftreten können, wenn Sie versuchen, einen Dienst abzurufen:
- Der Dienst ist nicht bei [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)] registriert.
- Der Dienst wird nach Schnittstellentyp und nicht nach Diensttyp angefordert.
- Das VSPackage, das den Dienst anfordert, wurde noch nicht eingebunden (sited).
- Es wird der falsche Dienstanbieter verwendet.
Wenn der angeforderte Dienst nicht abgerufen werden kann, gibt der Aufruf von <xref:Microsoft.VisualStudio.Shell.Package.GetService%2A> null zurück. Sie sollten immer auf NULL testen, nachdem Sie einen Dienst angefordert haben:
```csharp
IVsActivityLog log =
GetService(typeof(SVsActivityLog)) as IVsActivityLog;
if (log == null) return;
```
## <a name="to-troubleshoot-a-service"></a>So beheben Sie einen Dienst
1. Überprüfen Sie die Systemregistrierung, um festzustellen, ob der Dienst ordnungsgemäß registriert wurde. Weitere Informationen finden Sie unter [Gewusst wie: Bereitstellen eines Dienstes](../extensibility/how-to-provide-a-service.md).
Das folgende *.reg-Dateifragment* zeigt, wie der SVsTextManager-Dienst registriert werden kann:
```
[HKEY_LOCAL_MACHINE\Software\Microsoft\VisualStudio\<version number>\Services\{F5E7E71D-1401-11d1-883B-0000F87579D2}]
@="{F5E7E720-1401-11d1-883B-0000F87579D2}"
"Name"="SVsTextManager"
```
    Im obigen Beispiel ist "version number" die Versionsnummer von [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)], z. B. 12.0 oder 14.0; der Schlüssel "F5E7E71D-1401-11d1-883B-0000F87579D2" ist die Dienstkennung (SID) des Diensts SVsTextManager, und der Standardwert "F5E7E720-1401-11d1-883B-0000F87579D2" ist die Paket-GUID des Text-Manager-VSPackage, das den Dienst bereitstellt.
2. Verwenden Sie den Diensttyp und nicht den Schnittstellentyp, wenn Sie GetService aufrufen. Beim Anfordern eines Dienstes extrahiert <xref:Microsoft.VisualStudio.Shell.Package> in [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)] die GUID aus dem Typ. Ein Dienst wird nicht gefunden, wenn die folgenden Bedingungen vorhanden sind:
1. Anstelle des Diensttyps wird ein Schnittstellentyp an GetService übergeben.
2. Der Schnittstelle ist keine GUID explizit zugewiesen. Daher erstellt das System bei Bedarf eine Standard-GUID für ein Objekt.
3. Stellen Sie sicher, dass das VSPackage, das den Dienst anfordert, eingebunden (sited) wurde. [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)] bindet ein VSPackage nach dem Erstellen und vor dem Aufruf von <xref:Microsoft.VisualStudio.Shell.Package.Initialize%2A> ein.
Wenn Sie Code in einem VSPackage-Konstruktor haben, `Initialize` der einen Dienst benötigt, verschieben Sie ihn in die Methode.
4. Stellen Sie sicher, dass Sie den richtigen Dienstanbieter verwenden.
    Nicht alle Dienstanbieter sind gleich. Der Dienstanbieter, den [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)] an ein Toolfenster übergibt, unterscheidet sich von dem, den es an ein VSPackage übergibt. Der Toolfensterdienstanbieter kennt <xref:Microsoft.VisualStudio.Shell.Interop.STrackSelection>, aber nicht <xref:Microsoft.VisualStudio.Shell.Interop.SVsRunningDocumentTable>. Sie können <xref:Microsoft.VisualStudio.Shell.Package.GetGlobalService%2A> aufrufen, um einen VSPackage-Dienstanbieter innerhalb eines Toolfensters abzurufen.
Wenn ein Toolfenster ein Benutzersteuerelement oder einen anderen Steuerelementcontainer hostet, wird der Container vom [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)] Windows-Komponentenmodell erstellt und hat keinen Zugriff auf Dienste. Sie können <xref:Microsoft.VisualStudio.Shell.Package.GetGlobalService%2A> aufrufen, um einen VSPackage-Dienstanbieter innerhalb eines Steuerelementcontainers abzurufen.
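A minimal sketch of that pattern (the exact call shape below is my assumption based on the API names in this article):

```csharp
// Hypothetical usage: fetch a global service from a tool window or control container.
var rdt = Package.GetGlobalService(typeof(SVsRunningDocumentTable))
          as IVsRunningDocumentTable;
if (rdt == null) return;  // always test for null after requesting a service
```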
## <a name="see-also"></a>Weitere Informationen
- [Liste der verfügbaren Dienste](../extensibility/internals/list-of-available-services.md)
- [Nutzung und Bereitstellung von Dienstleistungen](../extensibility/using-and-providing-services.md)
- [Service-Essentials](../extensibility/internals/service-essentials.md)
| 64.452055 | 554 | 0.802976 | deu_Latn | 0.969613 |
76494e6e676f3f657e61c45a3f8205547bff4192 | 428 | md | Markdown | docs/index.md | buschtoens/qunit | 20eb3a81a9098abe223554eeb18f3a9219c0048d | [
"MIT"
] | null | null | null | docs/index.md | buschtoens/qunit | 20eb3a81a9098abe223554eeb18f3a9219c0048d | [
"MIT"
] | null | null | null | docs/index.md | buschtoens/qunit | 20eb3a81a9098abe223554eeb18f3a9219c0048d | [
"MIT"
] | null | null | null | ---
layout: default
excerpt: API reference documentation for QUnit.
redirect_from:
- "/category/all/"
---
<p>QUnit is a powerful, easy-to-use JavaScript unit test suite.
If you're new to QUnit, check out <a href="https://qunitjs.com/intro/">Getting Started with QUnit</a> on the <a href="https://qunitjs.com/">main site</a>.</p>
<p>QUnit has no dependencies and supports Node.js, SpiderMonkey, and all major browsers.</p>
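<p>As a quick illustration (a sketch, not taken from this page), a minimal test looks like this:</p>

```js
QUnit.module('math');

QUnit.test('addition works', assert => {
  assert.equal(1 + 2, 3, '1 + 2 should equal 3');
});
```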
| 35.666667 | 160 | 0.71729 | eng_Latn | 0.834185 |
7649a64fbe0b7965f0e832a1bc796e35c348ac5f | 36 | md | Markdown | tags/pkgmgr.md | p0rkjello/p0rkjello.github.io | 5b90c65d21e9f06c5f073b09c72e64e97a626221 | [
"BSD-2-Clause"
] | null | null | null | tags/pkgmgr.md | p0rkjello/p0rkjello.github.io | 5b90c65d21e9f06c5f073b09c72e64e97a626221 | [
"BSD-2-Clause"
] | 1 | 2020-05-28T16:33:42.000Z | 2020-05-29T11:30:20.000Z | tags/pkgmgr.md | p0rkjello/p0rkjello.github.io | 5b90c65d21e9f06c5f073b09c72e64e97a626221 | [
"BSD-2-Clause"
] | null | null | null | ---
layout: tagpage
tag: pkgmgr
---
| 7.2 | 15 | 0.611111 | ceb_Latn | 0.327498 |
7649d4281727cd5d7ce68cac50dbb25748e2e993 | 4,143 | md | Markdown | server-2013/lync-server-2013-deploying-external-user-access.md | Nealpatil/OfficeDocs-SkypeforBusiness-Test-pr.fr-fr | ffafeec779e64b7e0ffb234faac78364d8e3eedd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | server-2013/lync-server-2013-deploying-external-user-access.md | Nealpatil/OfficeDocs-SkypeforBusiness-Test-pr.fr-fr | ffafeec779e64b7e0ffb234faac78364d8e3eedd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | server-2013/lync-server-2013-deploying-external-user-access.md | Nealpatil/OfficeDocs-SkypeforBusiness-Test-pr.fr-fr | ffafeec779e64b7e0ffb234faac78364d8e3eedd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Lync Server 2013 : Déploiement de l’accès des utilisateurs externes'
TOCTitle: Déploiement de l’accès des utilisateurs externes
ms:assetid: d40c9574-c16b-4fe6-b848-21ae0b7e4f0e
ms:mtpsurl: https://technet.microsoft.com/fr-fr/library/Gg398918(v=OCS.15)
ms:contentKeyID: 49298955
ms.date: 05/20/2016
mtps_version: v=OCS.15
ms.translationtype: HT
---
# Déploiement de l’accès des utilisateurs externes dans Lync Server 2013
_**Dernière rubrique modifiée :** 2013-09-23_
Le déploiement des composants Edge pour Microsoft Lync Server 2013 permet aux utilisateurs externes qui ne sont pas connectés au réseau interne de votre entreprise, notamment les utilisateurs distants anonymes et authentifiés, les partenaires fédérés (dont les partenaires XMPP), les clients mobile et les utilisateurs de services de messagerie instantanée publique, de communiquer avec d’autres utilisateurs de votre entreprise à l’aide de Lync Server. Les processus de déploiement et de configuration de Lync Server 2013 ne sont pas très différents de Lync Server 2010. Les outils d’installation et d’administration sont très similaires à ceux de Lync Server 2010.
<table>
<thead>
<tr class="header">
<th><img src="images/Gg425917.important(OCS.15).gif" title="important" alt="important" />Important :</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>L’installation et la configuration du serveur EdgeMicrosoft Lync Server 2013 peuvent représenter un processus complexe nécessitant une planification et une coordination potentiellement importantes avec vos équipes internes, notamment – mais sans s’y limiter – pour la sécurité, la gestion de réseau, le pare-feu, le DNS (domain name system), le programme d’équilibrage de la charge, et l’infrastructure de clé publique (PKI). Il vous est fortement recommandé de consulter et d’utiliser le processus et la documentation de planification fournis avant de déployer vos composants d’accès externe. Cela vous aidera à limiter le nombre et la fréquence de changements non désirés et de problèmes au cours du processus de déploiement. Pour plus d’informations sur la planification de votre accès d’utilisateur externe, reportez-vous à <a href="lync-server-2013-planning-for-external-user-access.md">Planification de l’accès des utilisateurs externes dans Lync Server 2013</a>.</td>
</tr>
</tbody>
</table>
## Dans cette section
- [Liste de vérification du déploiement pour l’accès des utilisateurs externes dans Lync Server 2013](lync-server-2013-deployment-checklist-for-external-user-access.md)
- [Configuration système requise pour les composants d’accès des utilisateurs externes pour Lync Server 2013](lync-server-2013-system-requirements-for-external-user-access-components.md)
- [Préparation de l’installation des serveurs sur le réseau de périmètre pour Lync Server 2013](lync-server-2013-preparing-for-installation-of-servers-in-the-perimeter-network.md)
- [Création d’une topologie de serveurs Edge et directeurs dans Lync Server 2013](lync-server-2013-building-an-edge-and-director-topology.md)
- [Configuration du directeur dans Lync Server 2013](lync-server-2013-setting-up-the-director.md) (facultatif)
- [Configuration des serveurs Edge dans Lync Server 2013](lync-server-2013-setting-up-edge-servers.md)
- [Configuration des serveurs proxy inverses pour Lync Server 2013](lync-server-2013-setting-up-reverse-proxy-servers.md)
- [Configuration de la prise en charge de l’accès des utilisateurs externes dans Lync Server 2013](lync-server-2013-configuring-support-for-external-user-access.md)
- [Guide d’approvisionnement pour la connectivité Lync-Skype dans Lync Server 2013](lync-server-2013-provisioning-guide-for-lync-skype-connectivity.md)
- [Configuration des fédérations SIP et XMPP et de la messagerie instantanée publique dans Lync Server 2013](lync-server-2013-configuring-sip-federation-xmpp-federation-and-public-instant-messaging.md)
- [Déploiement de la mobilité dans Lync Server 2013](lync-server-2013-deploying-mobility.md)
- [Vérification de votre déploiement Edge dans Lync Server 2013](lync-server-2013-verifying-your-edge-deployment.md)
| 69.05 | 978 | 0.797007 | fra_Latn | 0.88056 |
764a7d013654b91bb1a74974c50ddea505bcc390 | 2,356 | md | Markdown | docs-archive-a/2014/tutorials/task-6-verify-domain-based-attribute-master-data-manager.md | v-alji/sql-docs-archive-pr.zh-tw | f71209265d9571de143831056c9aa7d2af7906bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/tutorials/task-6-verify-domain-based-attribute-master-data-manager.md | v-alji/sql-docs-archive-pr.zh-tw | f71209265d9571de143831056c9aa7d2af7906bc | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-10-11T06:38:37.000Z | 2021-11-25T02:24:15.000Z | docs-archive-a/2014/tutorials/task-6-verify-domain-based-attribute-master-data-manager.md | v-alji/sql-docs-archive-pr.zh-tw | f71209265d9571de143831056c9aa7d2af7906bc | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-07-06T09:11:22.000Z | 2022-03-16T18:11:38.000Z | ---
title: 工作6:確認已使用主資料管理員建立網域屬性(Attribute) |Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: data-quality-services
ms.topic: conceptual
ms.assetid: 6e90517a-910c-4c33-8f11-92ac3cff4fdc
author: lrtoyou1223
ms.author: lle
ms.openlocfilehash: ea26202ca9e607058b1e385957be3ea3d04038b3
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 08/04/2020
ms.locfileid: "87584879"
---
# <a name="task-6-verify-that-the-domain-based-attribute-is-created-using-master-data-manager"></a>工作 6:確認已使用主資料管理員建立定義域屬性
在這項工作中,您會透過 [主資料管理員]**** 來確認 **MDS** 中已建立 **State** 實體,而且 **Supplier** 實體的 **State** 屬性為相依於 **State** 實體的網域屬性。
1. 切換到 [主資料管理員]**** Web 應用程式。
2. 按一下頂端的 [SQL Server 2012 Master Data Services]****,回到首頁。
3. 確定已選取 [Supplier]**** 模型,然後按一下 [總管]****。 如果您已經開啟 [總管]****,可以重新整理頁面。
4. 將滑鼠停留在功能表列的 [實體]**** 上方,並注意現在有兩個實體:**Supplier** 和 **State**。
![含 [省/市] 和 [供應商] 的 [實體] 功能表](../../2014/tutorials/media/et-verifythatthedbaiscreatedusingmdm-01.jpg "含 [省/市] 和 [供應商] 的 [實體] 功能表")
5. 按一下 [State]**** (如果尚未開啟此實體)。
6. 從清單中選取 [GA]****。
7. 在右邊的 [詳細資料]**** 窗格中,將 [名稱]**** 變更為右窗格**** 的 **Georgia**,然後按一下 [確定]****。
8. 針對其他州重複上述步驟。
|程式碼|名稱|
|----------|----------|
|CA|California|
|CO|Colorado|
|IL|Illinois|
|DC|District of Columbia|
|FL|Florida|
|AL|Alabama|
|KY|Kentucky|
|MA|Massachusetts|
|AZ|Arizona|
|MI|Michigan|
|MN|Minnesota|
|NJ|New Jersey|
|NV|Nevada|
|NY|New York|
|OH|Ohio|
|OK|Oklahoma|
|OR|Oregon|
|PA|Pennsylvania|
|SC|South Carolina|
|KS|Kansas|
|TN|Tennessee|
|TX|Texas|
|UT|Utah|
|VA|Virginia|
|WA|Washington|
|WI|Wisconsin|
|HI|Hawaii|
|MD|Maryland|
|CT|Connecticut|
9. 選取任何州,並從工具列按一下 [檢視交易]****。 您應該會看到您剛才更新的交易出現在交易清單中。
10. 將滑鼠停留在 [實體]**** 功能表上方,並按一下 [Supplier]****。
11. 現在,請注意可以使用下拉式清單在 [詳細資料]**** 窗格中變更 [State]**** 欄位的值。 您也可以看到,在左邊的清單中以及 [詳細資料]**** 窗格的下拉式清單中,都會先顯示代碼,然後是大括號中的名稱。 您也可以在 [詳細資料]**** 窗格中變更其他任何值。
![含 [更新代碼] 和 [名稱] 的屬性狀態](../../2014/tutorials/media/et-verifythatthedbaiscreatedusingmdm-02.jpg "含 [更新代碼] 和 [名稱] 的屬性狀態")
## <a name="next-step"></a>後續步驟
[工作 7:檢視您在 Excel 中使用主資料管理員所做的更新](../../2014/tutorials/task-7-viewing-updates-made-using-master-data-manager-in-excel.md)
| 28.047619 | 142 | 0.647708 | yue_Hant | 0.740923 |
764acab7a7df8b5c08b0745bd1721f5bc0e4cf6c | 5,480 | md | Markdown | source/_posts/JS.md | qhx0807/my-blog | 435422d877d2ee052c2ec3aec7d9ad75975b336a | [
"MIT"
] | 2 | 2018-01-30T01:41:29.000Z | 2018-03-07T08:24:37.000Z | source/_posts/JS.md | qhx0807/my-blog | 435422d877d2ee052c2ec3aec7d9ad75975b336a | [
"MIT"
] | 3 | 2021-03-01T21:15:49.000Z | 2022-02-26T01:56:39.000Z | source/_posts/JS.md | qhx0807/my-blog | 435422d877d2ee052c2ec3aec7d9ad75975b336a | [
"MIT"
] | null | null | null | ---
title: Useful JavaScript Code Snippets (JavaScript programming tricks)
date: 2018-02-20 09:54:59
tags: [javascript, hacks, trick]
categories: javascript
---
A collection of useful JavaScript code snippets.
#### Truncating a float to an integer
```javascript
const x = 123.4567
x >> 0 // 123
~~x // 123
x | 0 // 123
Math.floor(x) //123
// Note: bitwise truncation and Math.floor differ for negative numbers
Math.floor(-12.53) // -13
-12.53 | 0 // -12
```
#### Generating a random hex color code
```javascript
'#' + ('00000' + ((Math.random() * 0x1000000) << 0).toString(16)).slice(-6)
```
#### Converting camelCase to snake_case
```javascript
const str = 'componentMapModelRegistry'
str
.match(/^[a-z][a-z0-9]+|[A-Z][a-z0-9]*/g)
.join('_')
.toLowerCase()
// component_map_model_registry
```
<!-- more -->
#### Converting URL query parameters to a JSON object
```javascript
// ES6
const query = (search = '') =>
((querystring = '') =>
(q => (
querystring
.split('&')
.forEach(item => (kv => kv[0] && (q[kv[0]] = kv[1]))(item.split('='))),
q
))({}))(search.split('?')[1])
// ES5 implementation
var query = function(search) {
if (search === void 0) {
search = ''
}
return (function(querystring) {
if (querystring === void 0) {
querystring = ''
}
return (function(q) {
return (
querystring.split('&').forEach(function(item) {
return (function(kv) {
return kv[0] && (q[kv[0]] = kv[1])
})(item.split('='))
}),
q
)
})({})
})(search.split('?')[1])
}
query('?key1=value1&key2=value2') // {key1: "value1", key2: "value2"}
```
#### 获取 URL 参数
```javascript
function getQueryString(key) {
var reg = new RegExp('(^|&)' + key + '=([^&]*)(&|$)')
var r = window.location.search.substr(1).match(reg)
  if (r != null) {
    return unescape(r[2])
  }
return null
}
```
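For example, assuming the current page URL ends in `?from=app`, usage looks like this:

```javascript
getQueryString('from') // "app"
getQueryString('missing') // null
```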
#### Flattening an n-dimensional array into a one-dimensional array
```javascript
var arr = [1, [2, 3], ['4', 5, ['6',7,[8]]], [9], 10]
// Method 1
// Limitation: array items must not contain ',', and every item becomes a string
arr.toString().split(','); // ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]
// Method 2
// After conversion every numeric item becomes a number
eval('[' + arr + ']'); // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
// Method 3
JSON.parse(`[${JSON.stringify(arr).replace(/\[|]/g, '')}]`); // [1, 2, 3, "4", 5, "6", 7, 8, 9, 10]
// Method 4
const flatten = (ary) => ary.reduce((a, b) => a.concat(Array.isArray(b) ? flatten(b) : b), [])
flatten(arr) // [1, 2, 3, "4", 5, "6", 7, 8, 9, 10]
// Method 5
function flatten(a) {
return Array.isArray(a) ? [].concat(...a.map(flatten)) : a
}
flatten(arr) // [1, 2, 3, "4", 5, "6", 7, 8, 9, 10]
```
#### Counting words and CJK characters (ignoring punctuation)
```javascript
function wordCount(data) {
var pattern = /[a-zA-Z0-9_\u0392-\u03c9]+|[\u4E00-\u9FFF\u3400-\u4dbf\uf900-\ufaff\u3040-\u309f\uac00-\ud7af]+/g;
var m = data.match(pattern);
var count = 0;
if( m === null ) return count;
for (var i = 0; i < m.length; i++) {
if (m[i].charCodeAt(0) >= 0x4E00) {
count += m[i].length;
} else {
count += 1;
}
}
return count;
}
var text = '呵呵!'
wordCount(text) // 2
```
#### Testing for prime numbers
```javascript
function isPrime(n) {
return !(/^.?$|^(..+?)\1+$/).test('1'.repeat(n))
}
```
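A quick sanity check of the regex trick:

```javascript
[2, 3, 4, 9, 11].map(isPrime) // [true, true, false, false, true]
```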
#### Counting occurrences of each character in a string
```javascript
var str = 'abcdaabc'
var info = str.split('').reduce((p, k) => (p[k]++ || (p[k] = 1), p), {})
console.log(info) // { a: 3, b: 2, c: 2, d: 1 }
```
#### Using ~x.indexOf('y') to simplify x.indexOf('y') > -1
```javascript
var str = 'hello world'
if (str.indexOf('lo') > -1) {
// ...
}
if (~str.indexOf('lo')) {
// ...
}
```
#### Ways to write an immediately-invoked function expression (IIFE)
```javascript
( function() {}() );
( function() {} )();
[ function() {}() ];
~ function() {}();
! function() {}();
+ function() {}();
- function() {}();
delete function() {}();
typeof function() {}();
void function() {}();
new function() {}();
new function() {};
var f = function() {}();
1, function() {}();
1 ^ function() {}();
1 > function() {}();
```
#### 两个整数交换数值
```javascript
// Method 1: XOR swap
var a = 20, b = 30;
a ^= b;
b ^= a;
a ^= b;
// Method 2: destructuring assignment
[a, b] = [b, a]
```
#### Validating a Chinese ID card number
```javascript
function checkCHNCardId(sNo) {
  var reg = /^[0-9]{17}[X0-9]$/
  if (!reg.test(sNo)) {
return false;
}
sNo = sNo.toString();
var a, b, c;
a = parseInt(sNo.substr(0, 1)) * 7 + parseInt(sNo.substr(1, 1)) * 9 + parseInt(sNo.substr(2, 1)) * 10;
a = a + parseInt(sNo.substr(3, 1)) * 5 + parseInt(sNo.substr(4, 1)) * 8 + parseInt(sNo.substr(5, 1)) * 4;
a = a + parseInt(sNo.substr(6, 1)) * 2 + parseInt(sNo.substr(7, 1)) * 1 + parseInt(sNo.substr(8, 1)) * 6;
a = a + parseInt(sNo.substr(9, 1)) * 3 + parseInt(sNo.substr(10, 1)) * 7 + parseInt(sNo.substr(11, 1)) * 9;
a = a + parseInt(sNo.substr(12, 1)) * 10 + parseInt(sNo.substr(13, 1)) * 5 + parseInt(sNo.substr(14, 1)) * 8;
  a = a + parseInt(sNo.substr(15, 1)) * 4 + parseInt(sNo.substr(16, 1)) * 2;
b = a % 11;
if (b == 2) {
c = sNo.substr(17, 1).toUpperCase();
} else {
c = parseInt(sNo.substr(17, 1));
}
switch (b) {
case 0:
if (c != 1) {
return false;
}
break;
case 1:
if (c != 0) {
return false;
}
break;
case 2:
if (c != "X") {
return false;
}
break;
case 3:
if (c != 9) {
return false;
}
break;
case 4:
if (c != 8) {
return false;
}
break;
case 5:
if (c != 7) {
return false;
}
break;
case 6:
if (c != 6) {
return false;
}
break;
case 7:
if (c != 5) {
return false;
}
break;
case 8:
if (c != 4) {
return false;
}
break;
case 9:
if (c != 3) {
return false;
}
break;
case 10:
if (c != 2) {
return false;
};
}
return true;
}
```
| 19.22807 | 114 | 0.495073 | yue_Hant | 0.486082 |
764aedf1eaaf58643ab3df190b2f5378760d6909 | 2,580 | md | Markdown | pages/content/amp-dev/documentation/guides-and-tutorials/develop/style_and_layout/placeholders@zh_CN.md | mrjoro/amp.dev | 73f0b80620dd4aadf5753125880aa2611d6a7b1c | [
"Apache-2.0"
] | 1 | 2020-12-27T18:30:40.000Z | 2020-12-27T18:30:40.000Z | pages/content/amp-dev/documentation/guides-and-tutorials/develop/style_and_layout/placeholders@zh_CN.md | mrjoro/amp.dev | 73f0b80620dd4aadf5753125880aa2611d6a7b1c | [
"Apache-2.0"
] | null | null | null | pages/content/amp-dev/documentation/guides-and-tutorials/develop/style_and_layout/placeholders@zh_CN.md | mrjoro/amp.dev | 73f0b80620dd4aadf5753125880aa2611d6a7b1c | [
"Apache-2.0"
] | 6 | 2019-06-19T20:04:43.000Z | 2020-01-05T08:21:14.000Z | ---
$title: 占位符和后备行为
---
为了提高用户感知的性能并实现渐进增强效果,AMP 中的最佳做法是尽可能提供占位符和后备行为。
一些元素甚至通过放宽限制来鼓励您这样做。例如,如果您为 [`<amp-iframe>`](../../../../documentation/components/reference/amp-iframe.md#iframe-with-placeholder) 提供占位符,则可以将该组件用在网页顶部附近(如果不使用占位符,网页将无法正常运行)。
## 占位符
标记有 `placeholder` 属性的元素充当
父级 AMP 元素的占位符。
如果指定,则 `placeholder` 元素必须是 AMP 元素的直接子级。
标记为 `placeholder` 的元素将始终 `fill`(填充)父级 AMP 元素。
[example preview="inline" playground="true" imports="amp-anim:0.1"]
```html
<amp-anim src="{{server_for_email}}/static/inline-examples/images/wavepool.gif"
layout="responsive"
width="400"
height="300">
<amp-img placeholder
src="{{server_for_email}}/static/inline-examples/images/wavepool.png"
layout="fill">
</amp-img>
</amp-anim>
```
[/example]
默认情况下,即使 AMP 元素的资源尚未下载或初始化,
与该 AMP 元素对应的占位符也会立即显示。
准备就绪后,AMP 元素通常会隐藏其占位符并显示相关内容。
[tip type="note"]
占位符不必是 AMP 元素;
任何 HTML 元素都可充当占位符。
[/tip]
## 后备行为
您可以在某元素上指定 `fallback` 属性,以便指明出现以下情况时采取的后备行为:
* 浏览器不支持某个元素
* 内容未能加载(例如,Twitter 微博被删除)
* 图片类型不受支持(例如,并非所有浏览器都支持 WebP)
您可以在任何 HTML 元素(而不仅仅是 AMP 元素)上设置 `fallback` 属性。如果指定,则 `fallback` 元素必须是 AMP 元素的直接子级。
##### 示例:不支持的功能
在以下示例中,我们使用 `fallback` 属性告知用户,浏览器不支持特定功能:
[example preview="inline" playground="true" imports="amp-video:0.1"]
```html
<amp-video {% if format=='stories'%}autoplay {% endif %}controls
width="640"
height="360"
src="{{server_for_email}}/static/inline-examples/videos/kitten-playing.mp4"
poster="{{server_for_email}}/static/inline-examples/images/kitten-playing.png">
<div fallback>
<p>This browser does not support the video element.</p>
</div>
</amp-video>
```
[/example]
##### 示例:提供不同格式的图片
在以下示例中,我们使用 `fallback` 属性告知浏览器,在 WebP 格式不受支持时使用 JPEG 文件。
[example preview="inline" playground="true"]
```html
<amp-img alt="Mountains"
width="550"
height="368"
layout="responsive"
src="{{server_for_email}}/static/inline-examples/images/mountains.webp">
<amp-img alt="Mountains"
fallback
width="550"
height="368"
layout="responsive"
src="{{server_for_email}}/static/inline-examples/images/mountains.jpg"></amp-img>
</amp-img>
```
[/example]
## 占位符和后备行为的互动
对于依赖于动态内容的 AMP 组件(例如 [`amp-twitter`](../../../../documentation/components/reference/amp-twitter.md)、[`amp-list`](../../../../documentation/components/reference/amp-list.md)),后备行为和占位符的互动方式如下:
<ol>
<li>在加载内容时显示占位符。</li>
<li>如果内容加载成功,则隐藏占位符并显示内容。</li>
<li>如果内容未能加载:
<ol>
<li>如果有后备元素,则显示该后备元素。</li>
<li>否则,继续显示占位符。</li>
</ol>
</li>
</ol>
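A sketch that combines both children on one element (this combined markup is my own illustration based on the examples above):

```html
<amp-video controls
           width="640"
           height="360"
           src="{{server_for_email}}/static/inline-examples/videos/kitten-playing.mp4">
  <amp-img placeholder
           src="{{server_for_email}}/static/inline-examples/images/kitten-playing.png"
           layout="fill"></amp-img>
  <div fallback>
    <p>This browser does not support the video element.</p>
  </div>
</amp-video>
```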
## 隐藏加载指示器
许多 AMP 元素已列入白名单,可以显示“加载指示器”,
这是一个基本动画,用于表明元素尚未加载完毕。
只需添加 `noloading` 属性,元素即可停用此行为。
| 23.243243 | 190 | 0.70155 | yue_Hant | 0.37015 |
764af17e827c92e42e144f7fab32b8028398c31f | 18,926 | md | Markdown | content/portfolio/2021-07-25-akanksha-bhushan/index.md | arvindvenkatadri/creativeportfolio | bdff39311a6ac5b014ab67aea3883ff1e1229223 | [
"MIT"
] | null | null | null | content/portfolio/2021-07-25-akanksha-bhushan/index.md | arvindvenkatadri/creativeportfolio | bdff39311a6ac5b014ab67aea3883ff1e1229223 | [
"MIT"
] | null | null | null | content/portfolio/2021-07-25-akanksha-bhushan/index.md | arvindvenkatadri/creativeportfolio | bdff39311a6ac5b014ab67aea3883ff1e1229223 | [
"MIT"
] | null | null | null | ---
title: Akanksha Bhushan
author: Akanksha Bhushan
date: '2021-07-25'
slug: []
categories:
- R
tags:
- R Markdown
image: https://www.alice-in-wonderland.net/wp-content/uploads/181.jpg
caption: ''
preview: yes
output:
blogdown::html_page:
toc: no
fig_width: 5
fig_height: 5
keep_md: yes
description: These are some of the best works from this course.
---
## Graph 1
For this graph I took the starwars dataset, which has the variables name, height, mass, hair color, skin color, eye color, birth year, sex, gender, homeworld, species, films, vehicles and starships.
My intent in this graph was to find out which species have lived on the planet of Naboo.
```
## # A tibble: 87 x 14
## name height mass hair_color skin_color eye_color birth_year sex gender
## <chr> <int> <dbl> <chr> <chr> <chr> <dbl> <chr> <chr>
## 1 Luke S~ 172 77 blond fair blue 19 male mascu~
## 2 C-3PO 167 75 <NA> gold yellow 112 none mascu~
## 3 R2-D2 96 32 <NA> white, bl~ red 33 none mascu~
## 4 Darth ~ 202 136 none white yellow 41.9 male mascu~
## 5 Leia O~ 150 49 brown light brown 19 fema~ femin~
## 6 Owen L~ 178 120 brown, grey light blue 52 male mascu~
## 7 Beru W~ 165 75 brown light blue 47 fema~ femin~
## 8 R5-D4 97 32 <NA> white, red red NA none mascu~
## 9 Biggs ~ 183 84 black light brown 24 male mascu~
## 10 Obi-Wa~ 182 77 auburn, wh~ fair blue-gray 57 male mascu~
## # ... with 77 more rows, and 5 more variables: homeworld <chr>, species <chr>,
## # films <list>, vehicles <list>, starships <list>
```
```
## Rows: 87
## Columns: 14
## $ name <chr> "Luke Skywalker", "C-3PO", "R2-D2", "Darth Vader", "Leia Or~
## $ height <int> 172, 167, 96, 202, 150, 178, 165, 97, 183, 182, 188, 180, 2~
## $ mass <dbl> 77.0, 75.0, 32.0, 136.0, 49.0, 120.0, 75.0, 32.0, 84.0, 77.~
## $ hair_color <chr> "blond", NA, NA, "none", "brown", "brown, grey", "brown", N~
## $ skin_color <chr> "fair", "gold", "white, blue", "white", "light", "light", "~
## $ eye_color <chr> "blue", "yellow", "red", "yellow", "brown", "blue", "blue",~
## $ birth_year <dbl> 19.0, 112.0, 33.0, 41.9, 19.0, 52.0, 47.0, NA, 24.0, 57.0, ~
## $ sex <chr> "male", "none", "none", "male", "female", "male", "female",~
## $ gender <chr> "masculine", "masculine", "masculine", "masculine", "femini~
## $ homeworld <chr> "Tatooine", "Tatooine", "Naboo", "Tatooine", "Alderaan", "T~
## $ species <chr> "Human", "Droid", "Droid", "Human", "Human", "Human", "Huma~
## $ films <list> <"The Empire Strikes Back", "Revenge of the Sith", "Return~
## $ vehicles <list> <"Snowspeeder", "Imperial Speeder Bike">, <>, <>, <>, "Imp~
## $ starships <list> <"X-wing", "Imperial shuttle">, <>, <>, "TIE Advanced x1",~
```
<img src="unnamed-chunk-2-1.png" width="480" />
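The table behind this graph can be sketched with dplyr like this (my own reconstruction, not the exact assignment code):

```r
library(dplyr)

starwars %>%
  filter(homeworld == "Naboo") %>%
  count(species, sort = TRUE)
```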
## Graph 2
In the first week we were also told to make a graph using the penguins dataset. The graph I'm showing below shows the flipper length in millimeters of the male and female populations.
The variables for this table are: Species, Island, Bill length in millimeters, Bill depth in millimeters, Flipper length in millimeters, Body mass in grams, Sex and Year.
```
## [1] "species" "island" "bill_length_mm"
## [4] "bill_depth_mm" "flipper_length_mm" "body_mass_g"
## [7] "sex" "year"
```
```
## # A tibble: 6 x 8
## species island bill_length_mm bill_depth_mm flipper_length_~ body_mass_g sex
## <fct> <fct> <dbl> <dbl> <int> <int> <fct>
## 1 Adelie Torge~ 39.1 18.7 181 3750 male
## 2 Adelie Torge~ 39.5 17.4 186 3800 fema~
## 3 Adelie Torge~ 40.3 18 195 3250 fema~
## 4 Adelie Torge~ NA NA NA NA <NA>
## 5 Adelie Torge~ 36.7 19.3 193 3450 fema~
## 6 Adelie Torge~ 39.3 20.6 190 3650 male
## # ... with 1 more variable: year <int>
```
```
## # A tibble: 6 x 8
## species island bill_length_mm bill_depth_mm flipper_length_~ body_mass_g sex
## <fct> <fct> <dbl> <dbl> <int> <int> <fct>
## 1 Chinst~ Dream 45.7 17 195 3650 fema~
## 2 Chinst~ Dream 55.8 19.8 207 4000 male
## 3 Chinst~ Dream 43.5 18.1 202 3400 fema~
## 4 Chinst~ Dream 49.6 18.2 193 3775 male
## 5 Chinst~ Dream 50.8 19 210 4100 male
## 6 Chinst~ Dream 50.2 18.7 198 3775 fema~
## # ... with 1 more variable: year <int>
```
```
## [1] 344 8
```
```
## [1] TRUE
```
<img src="unnamed-chunk-4-1.png" width="480" />
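A minimal sketch of how such a comparison can be drawn (assuming the palmerpenguins data used above):

```r
library(palmerpenguins)
library(tidyr)
library(ggplot2)

penguins %>%
  drop_na(sex) %>%
  ggplot(aes(x = sex, y = flipper_length_mm)) +
  geom_boxplot()
```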
## Graph 3.
In this assignment we were told to make a CSV file with our own dataset on a particular episode of a TV show, movie, book, etc. I chose the show Blood Of Zeus, which is on Netflix, as that was the last show I watched during the holidays.
The graph below shows us the number of characters, the race of the characters and the weight of dialogues between the characters. The variables for this graph were Name, Sex, Race, Eye colour, Hair colour, Status and Dialogue weight.
```
## # A tibble: 22 x 4
## From To Weight Type
## <dbl> <dbl> <dbl> <chr>
## 1 3 16 3 Associates
## 2 13 6 1 Family
## 3 1 13 6 Brothers
## 4 1 17 4 Family
## 5 6 1 7 Family
## 6 2 6 5 Brothers
## 7 1 2 17 Family
## 8 7 2 2 Brothers
## 9 10 2 1 Friends
## 10 9 2 1 Friends
## # ... with 12 more rows
```
```
## # A tibble: 17 x 7
## Name `Node id` Sex Race `Eye colour` `Hair colour` Status
## <chr> <dbl> <chr> <chr> <chr> <chr> <chr>
## 1 Zeus 1 Male God Blue Dark Grey Dead
## 2 Heron 2 Male Demigod Blue Brown Alive
## 3 Hera 3 Female God Blue Aubergine Alive
## 4 Electra 4 Female Human Brown Brown Alive
## 5 Seraphim 5 Male Human/ ~ Brown/Red White Dead
## 6 Hermes 6 Male God Blue Light Brown Alive
## 7 Apollo 7 Male God Yellow Blonde Alive
## 8 Hephaestus 8 Male God Brown Brown Alive
## 9 Kofi 9 Male Human Dark Brown Brown Alive
## 10 Evios 10 Male Human Brown Light Brown Alive
## 11 Alexia 11 Female Human Hazel Blonde Alive
## 12 Ares 12 Male God Red Black Alive
## 13 Poseidon 13 Male God Yellow Blue Alive
## 14 Hades 14 Male God Yellow Black Alive
## 15 Robots 15 <NA> Robot <NA> <NA> Alive
## 16 Giants 16 <NA> Giants Black/ Red <NA> Dead
## 17 Gods 17 Male & Female Gods <NA> <NA> Alive
```
```
## # A tbl_graph: 17 nodes and 22 edges
## #
## # An undirected multigraph with 3 components
## #
## # Node Data: 17 x 7 (active)
## Name `Node id` Sex Race `Eye colour` `Hair colour` Status
## <chr> <dbl> <chr> <chr> <chr> <chr> <chr>
## 1 Zeus 1 Male God Blue Dark Grey Dead
## 2 Heron 2 Male Demigod Blue Brown Alive
## 3 Hera 3 Female God Blue Aubergine Alive
## 4 Electra 4 Female Human Brown Brown Alive
## 5 Seraphim 5 Male Human/ Demon Brown/Red White Dead
## 6 Hermes 6 Male God Blue Light Brown Alive
## # ... with 11 more rows
## #
## # Edge Data: 22 x 4
## from to Weight Type
## <int> <int> <dbl> <chr>
## 1 3 16 3 Associates
## 2 6 13 1 Family
## 3 1 13 6 Brothers
## # ... with 19 more rows
```
<img src="unnamed-chunk-7-1.png" width="480" />
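The outputs above can be reproduced with tidygraph and plotted with ggraph roughly like this (the file names and layout are my own assumptions):

```r
library(readr)
library(dplyr)
library(tidygraph)
library(ggraph)

edges <- read_csv("blood_of_zeus_edges.csv") %>%
  rename(from = From, to = To)
nodes <- read_csv("blood_of_zeus_nodes.csv")

g <- tbl_graph(nodes = nodes, edges = edges, directed = FALSE)

ggraph(g, layout = "fr") +
  geom_edge_link(aes(width = Weight), alpha = 0.4) +
  geom_node_point(aes(colour = Race), size = 4) +
  geom_node_text(aes(label = Name), repel = TRUE) +
  theme_void()
```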
## Reflections
This workshop really opened me up to the world of coding. I had a cycle before this that was based on some coding software as well, but it never came close to being this stressful whenever I got an error, or this freeing when I got the result I wanted.
Coming into the workshop I thought I would not like it much, given how much I complained about coding in the previous cycle, but it actually turned out to be fun. Having Arvind as a facilitator was a real plus point, as he was patient enough to explain to my classmates and me where we were going wrong and to rectify our mistakes.
All in all, I learnt a lot from this workshop and it was a great experience.
| 48.904393 | 343 | 0.509405 | eng_Latn | 0.906437 |
764b2fa815730b8ded717e7bcfafe54993b91f1a | 511 | md | Markdown | GAP/README.md | CMUAbstract/Graph-Reordering-IISWC18 | da80dc557bc3eb830acd7e4b9942448fe33e4870 | [
"MIT"
] | 14 | 2019-11-07T10:25:35.000Z | 2022-03-29T11:34:42.000Z | GAP/README.md | CMUAbstract/Graph-Reordering-IISWC18 | da80dc557bc3eb830acd7e4b9942448fe33e4870 | [
"MIT"
] | 3 | 2019-01-25T03:52:31.000Z | 2020-02-15T12:22:34.000Z | GAP/README.md | CMUAbstract/Graph-Reordering-IISWC18 | da80dc557bc3eb830acd7e4b9942448fe33e4870 | [
"MIT"
] | 4 | 2019-11-25T03:09:09.000Z | 2021-11-28T12:29:14.000Z | # Lightweight Graph Reordering techniques for GAP
The directories contain implementations of the following LWRs:
1. Degree Sorting
2. Hub Sorting
3. Hub Clustering
The reordering function is in `*/builder.h`
Each directory contains `preprocessor-<appname>.cc` files that measure
the minimum preprocessing cost incurred for each application. For example,
GAP's implementation of pull-based PageRank does not need to build the
complete outCSR (the application uses only out-degrees and not outgoing-neighbors).
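As a sketch of the Degree Sorting idea (illustrative only; the actual implementation lives in `*/builder.h`):

```cpp
// Relabel vertices so that higher-degree vertices receive smaller new IDs.
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

std::vector<int64_t> DegreeSortIds(const std::vector<int64_t>& degree) {
  std::vector<int64_t> order(degree.size());
  std::iota(order.begin(), order.end(), 0);  // 0, 1, ..., n-1
  std::stable_sort(order.begin(), order.end(),
                   [&](int64_t a, int64_t b) { return degree[a] > degree[b]; });
  std::vector<int64_t> new_id(degree.size());
  for (size_t rank = 0; rank < order.size(); ++rank)
    new_id[order[rank]] = static_cast<int64_t>(rank);  // old ID -> new ID
  return new_id;
}
```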
| 36.5 | 82 | 0.812133 | eng_Latn | 0.997506 |
764b4dfd728f62a33d7e7ceca1b16896c5b949e1 | 42 | md | Markdown | README.md | RaoudhaHamdi/MyStore | a10cf8215ab08865a048123c04fcf16e4927ef34 | [
"MIT"
] | null | null | null | README.md | RaoudhaHamdi/MyStore | a10cf8215ab08865a048123c04fcf16e4927ef34 | [
"MIT"
] | null | null | null | README.md | RaoudhaHamdi/MyStore | a10cf8215ab08865a048123c04fcf16e4927ef34 | [
"MIT"
] | null | null | null | # MyStore
Hackathon Oyez_Go My Code 12/18
| 14 | 31 | 0.785714 | kor_Hang | 0.898285 |
764b6bf2f7205b5960f75e5bb34edddd36fcdddf | 9,534 | md | Markdown | docs/relational-databases/sqlxml-annotated-xsd-schemas-using/identifying-key-columns-using-sql-key-fields-sqlxml-4-0.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/sqlxml-annotated-xsd-schemas-using/identifying-key-columns-using-sql-key-fields-sqlxml-4-0.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/sqlxml-annotated-xsd-schemas-using/identifying-key-columns-using-sql-key-fields-sqlxml-4-0.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Identificando colunas de chave usando SQL: Key-campos (SQLXML 4.0) | Microsoft Docs'
ms.custom:
ms.date: 03/16/2017
ms.prod: sql-non-specified
ms.prod_service: database-engine, sql-database
ms.service:
ms.component: sqlxml
ms.reviewer:
ms.suite: sql
ms.technology:
- dbe-xml
ms.tgt_pltfrm:
ms.topic: reference
helpviewer_keywords:
- nesting XML results
- proper nesting in results [SQLXML]
- sql:key-fields
- XSD schemas [SQLXML], key columns
- identifying key columns [SQLXML]
- annotated XSD schemas, key columns
- key columns [SQLXML]
- relationships [SQLXML], key columns
- hierarchical relationships [SQLXML]
- key-fields annotation
ms.assetid: 1a5ad868-8602-45c4-913d-6fbb837eebb0
caps.latest.revision:
author: douglaslMS
ms.author: douglasl
manager: craigg
ms.workload: Inactive
ms.openlocfilehash: ac42ee657dd46f070eccf5d63ae9a454c3306e95
ms.sourcegitcommit: 37f0b59e648251be673389fa486b0a984ce22c81
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 02/12/2018
---
# <a name="identifying-key-columns-using-sqlkey-fields-sqlxml-40"></a>Identificando colunas de chave usando campos sql:key (SQLXML 4.0)
[!INCLUDE[appliesto-ss-asdb-xxxx-xxx-md](../../includes/appliesto-ss-asdb-xxxx-xxx-md.md)]
Quando uma consulta XPath é especificada em um esquema XSD, as informações de chave são necessárias, na maioria das vezes, para obter o aninhamento adequado no resultado. Especificando o **SQL: Key-campos** anotação é uma maneira de garantir que a hierarquia apropriada seja gerada.
> [!NOTE]
> Para assegurar o aninhamento adequado, é recomendável que você especifique **SQL: Key-campos** para elementos que são mapeados para tabelas. O XML gerado é sensível à ordenação do conjunto de resultados subjacente. Se **SQL: Key-campos** não for especificado, o XML gerado não pode ser formado corretamente.
O valor de **SQL: Key-campos** identifica as colunas que identificam exclusivamente as linhas na relação. Se mais de uma coluna for necessária para identificar uma linha de forma exclusiva, os valores de coluna serão delimitados por espaços.
Você deve usar o **SQL: Key-campos** anotação quando um elemento contém um **\<SQL: Relationship >** que é definida entre o elemento e um elemento filho, mas não fornece a chave primária da tabela que é especificado no elemento pai.
## <a name="examples"></a>Exemplos
Para criar exemplos de funcionamento usando os exemplos a seguir, é necessário atender a determinados requisitos. Para obter mais informações, consulte [requisitos para executar exemplos do SQLXML](../../relational-databases/sqlxml/requirements-for-running-sqlxml-examples.md).
### <a name="a-producing-the-appropriate-nesting-when-sqlrelationship-does-not-provide-sufficient-information"></a>A. Produzir o aninhamento adequado quando \<SQL: Relationship > não fornecer informações suficientes
Este exemplo mostra onde **SQL: Key-campos** deve ser especificado.
Considere o esquema a seguir. O esquema especifica uma hierarquia entre o **\<ordem >** e **\<cliente >** elementos no qual o **\<ordem >**elemento é o pai e o **\<cliente >** elemento é um filho.
O **\<SQL: Relationship >** marca é usada para especificar a relação pai-filho. Ela identifica CustomerID na tabela Sales.SalesOrderHeader como a chave pai que faz referência à chave filho CustomerID na tabela Sales.Customer. As informações fornecidas no **\<SQL: Relationship >** não são suficientes para identificar exclusivamente as linhas na tabela pai (Sales. SalesOrderHeader). Portanto, sem o **SQL: Key-campos** anotação, a hierarquia gerada é imprecisa.
Com **SQL: Key-campos** especificado em **\<ordem >**, a anotação identifica exclusivamente as linhas no pai (tabela Sales. SalesOrderHeader) e seus elementos filhos aparecem abaixo seu pai.
Este é o esquema:
```
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
<xsd:annotation>
<xsd:appinfo>
<sql:relationship name="OrdCust"
parent="Sales.SalesOrderHeader"
parent-key="CustomerID"
child="Sales.Customer"
child-key="CustomerID" />
</xsd:appinfo>
</xsd:annotation>
<xsd:element name="Order" sql:relation="Sales.SalesOrderHeader"
sql:key-fields="SalesOrderID">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Customer" sql:relation="Sales.Customer"
sql:relationship="OrdCust" >
<xsd:complexType>
<xsd:attribute name="CustID" sql:field="CustomerID" />
<xsd:attribute name="SoldBy" sql:field="SalesPersonID" />
</xsd:complexType>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="SalesOrderID" type="xsd:integer" />
<xsd:attribute name= "CustomerID" type="xsd:string" />
</xsd:complexType>
</xsd:element>
</xsd:schema>
```
##### <a name="to-create-a-working-sample-of-this-schema"></a>Para criar um exemplo de funcionamento deste esquema
1. Copie o código de esquema acima e cole-o em um arquivo de texto. Salve o arquivo como KeyFields1.xml.
2. Copie o modelo a seguir e cole-o em um arquivo de texto. Salve o arquivo como KeyFields1T.xml no mesmo diretório em que você salvou KeyFields1.xml. A consulta XPath no modelo retorna todos os **\<ordem >** elementos com uma CustomerID menor que 3.
```
<ROOT xmlns:sql="urn:schemas-microsoft-com:xml-sql">
<sql:xpath-query mapping-schema="KeyFields1.xml">
/Order[@CustomerID < 3]
</sql:xpath-query>
</ROOT>
```
O caminho de diretório especificado para o esquema de mapeamento (KeyFields1.xml) é relativo ao diretório onde o modelo está salvo. Também é possível especificar um caminho absoluto, por exemplo:
```
mapping-schema="C:\MyDir\KeyFields1.xml"
```
3. Crie e use o script de teste SQLXML 4.0 (Sqlxml4test.vbs) para executar o modelo.
Para obter mais informações, consulte [usando o ADO para executar consultas SQLXML](../../relational-databases/sqlxml/using-ado-to-execute-sqlxml-4-0-queries.md).
Este é o conjunto de resultados parcial:
```
<ROOT xmlns:sql="urn:schemas-microsoft-com:xml-sql">
<Order SalesOrderID="43860" CustomerID="1">
<Customer CustID="1" SoldBy="280"/>
</Order>
<Order SalesOrderID="44501" CustomerID="1">
<Customer CustID="1" SoldBy="280"/>
</Order>
<Order SalesOrderID="45283" CustomerID="1">
<Customer CustID="1" SoldBy="280"/>
</Order>
.....
</ROOT>
```
### <a name="b-specifying-sqlkey-fields-to-produce-proper-nesting-in-the-result"></a>B. Especificar sql:key-fields para produzir o aninhamento adequado no resultado
No esquema a seguir, há uma hierarquia especificada usando **\<SQL: Relationship >**. O esquema ainda requer a especificação de **SQL: Key-campos** anotação para identificar exclusivamente os funcionários na tabela HumanResources. Employee.
```
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
<xsd:element name="HumanResources.Employee" sql:key-fields="EmployeeID" >
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Title">
<xsd:complexType>
<xsd:simpleContent>
<xsd:extension base="xsd:string">
<xsd:attribute name="EmployeeID" type="xsd:integer" />
</xsd:extension>
</xsd:simpleContent>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
```
##### <a name="to-create-a-working-sample-of-this-schema"></a>Para criar um exemplo de funcionamento deste esquema
1. Copie o código de esquema acima e cole-o em um arquivo de texto. Salve o arquivo como KeyFields2.xml.
2. Copie o modelo a seguir e cole-o em um arquivo de texto. Salve o arquivo como KeyFields2T.xml no mesmo diretório em que você salvou KeyFields2.xml. A consulta XPath no modelo retorna todos os **\<HumanResources. Employee >** elementos:
```
<ROOT xmlns:sql="urn:schemas-microsoft-com:xml-sql">
<sql:xpath-query mapping-schema="KeyFields2.xml">
/HumanResources.Employee
</sql:xpath-query>
</ROOT>
```
O caminho de diretório especificado para o esquema de mapeamento (KeyFields2.xml) é relativo ao diretório onde o modelo está salvo. Também é possível especificar um caminho absoluto, por exemplo:
```
mapping-schema="C:\MyDir\KeyFields2.xml"
```
3. Crie e use o script de teste SQLXML 4.0 (Sqlxml4test.vbs) para executar o modelo.
Para obter mais informações, consulte [usando o ADO para executar consultas SQLXML](../../relational-databases/sqlxml/using-ado-to-execute-sqlxml-4-0-queries.md).
Este é o resultado:
```
<ROOT xmlns:sql="urn:schemas-microsoft-com:xml-sql">
<HumanResources.Employee>
<Title EmployeeID="1">Production Technician - WC60</Title>
</HumanResources.Employee>
<HumanResources.Employee>
<Title EmployeeID="2">Marketing Assistant</Title>
</HumanResources.Employee>
<HumanResources.Employee>
<Title EmployeeID="3">Engineering Manager</Title>
</HumanResources.Employee>
...
</ROOT>
```
| 47.432836 | 467 | 0.691525 | por_Latn | 0.95415 |
764b7f56ca0f7e3002e893d81f77e4256a1abaa5 | 2,438 | md | Markdown | README.md | zerocoolElite/simple-bitly-app | eea8e08b8307df26d2720f85dc0f681290a369c7 | [
"MIT"
] | null | null | null | README.md | zerocoolElite/simple-bitly-app | eea8e08b8307df26d2720f85dc0f681290a369c7 | [
"MIT"
] | null | null | null | README.md | zerocoolElite/simple-bitly-app | eea8e08b8307df26d2720f85dc0f681290a369c7 | [
"MIT"
] | null | null | null |
# Bitly App
> Simple application for shortening URLs using Bitly API.
## Getting Started
### Install
Extract `bitly-app.zip` to any destination. Open the root directory in command line and execute this command:
```sh
npm install
```
### Usage
To start the application just execute this command:
```sh
npm run start
```
A message will display in the command line: `App listening on port 3000!`
Open a browser chrome/firefox and enter this url: `http://localhost:3000/`.
Copy any valid url and paste it in the text box `URL` and click the button `Create`.
A result will display like this `http://bit.ly/30atsq6` if the request is successful otherwise an error message will display such as `INVALID_URI`. You can hightlight and copy or click button `copy` to copy the result link and test it. You can test more URL.
### Unit Testing
In your command line, execute this code:
```sh
npm run test
```
Example output of unit testing:
```
PASS local_modules/logger/logger.test.js
PASS local_modules/bitlyClient/bitlyClient.test.js
PASS ./app.test.js
Test Suites: 3 passed, 3 total
Tests: 16 passed, 16 total
Snapshots: 0 total
Time: 2.539s
Ran all test suites.
```
## About
### Logs
Every successful requests or conversions with bitly API are being log by the application in `/temp` directory with the log format:
```
<timestamp> - <longURL> <shortURL>
```
## Development
To start the development just execute this command:
```
npm run dev
```
### Local Modules
- biltyClient - a bitly client API that I developed to communicate with bitly REST API
- logger - a simple application logging library intended for keeping a copy of successful request with bitly
### Dependencies
- body-parser : Parse incoming request bodies
- bootstrap: Use in GUI of the application
- clipboard: Copy the text in the clipboard using a button
- dateformat: Helper for date formatting
- express: Http Server
- jquery: Use for DOM manipulation
- request: Make http calls with NodeJS
### Development Dependencies
- @babel/plugin-transform-modules-commonjs : transform es6 import/export to to commonjs
- babel-cli : JavaScript compiler that convert es6 to backwards compatible version of JavaScript
- babel-preset-es2015 : babel present of es2015
- jest : used for unit testing
- nodemon : allows to restart the server if there is changes in the codes
- valid-url : used to validate URL
- supertest : used for HTTP assertions
| 30.098765 | 258 | 0.748154 | eng_Latn | 0.980356 |
764bba5dd94e468ebe7aa55aaaa07bd5f945e8c7 | 19,000 | md | Markdown | source/includes/static/industries.md | arlopezg/slate | 1bef891e71e1e9da05db5113cdf464812569aa0a | [
"Apache-2.0"
] | null | null | null | source/includes/static/industries.md | arlopezg/slate | 1bef891e71e1e9da05db5113cdf464812569aa0a | [
"Apache-2.0"
] | 8 | 2020-11-27T19:29:23.000Z | 2021-11-19T22:08:23.000Z | source/includes/static/industries.md | admetricks/slay | fbc25bd35764f877e1ea419b695ed94e66d31b62 | [
"Apache-2.0"
] | null | null | null | | ID | Nombre |
| --- | ----------------------------------------------------------------------------------------- |
| 133 | alimentación - aceites y aliños |
| 144 | alimentación - alimentación infantil |
| 134 | alimentación - alimentos congelados |
| 136 | alimentación - alimentos dietéticos y adelgazantes |
| 135 | alimentación - alimentos frescos |
| 137 | alimentación - aperitivos |
| 141 | alimentación - arroz y legumbres secas |
| 138 | alimentación - cafés e infusiones |
| 139 | alimentación - caldos, sopas y platos preparados |
| 150 | alimentación - caramelos y golosinas |
| 140 | alimentación - cereales |
| 142 | alimentación - chocolates |
| 143 | alimentación - conservas |
| 151 | alimentación - dulces navidad |
| 145 | alimentación - galletas |
| 147 | alimentación - helados |
| 149 | alimentación - lácteos |
| 146 | alimentación - panadería y pastelería |
| 152 | alimentación - pastas alimenticias |
| 153 | alimentación - quesos |
| 154 | alimentación - salsas |
| 132 | alimentación - varios |
| 148 | alimentación - yogures y postres |
| 161 | automoción - accesorios y mantenimiento de automóviles |
| 156 | automoción - automóviles |
| 163 | automoción - camiones y autobuses |
| 162 | automoción - concesionarias |
| 157 | automoción - motocicletas |
| 158 | automoción - náutica y aeronáutica |
| 159 | automoción - neumáticos |
| 155 | automoción - varios |
| 160 | automoción - vehículos industriales |
| 165 | bebidas - aguas |
| 166 | bebidas - bebidas alcohólicas varias |
| 168 | bebidas - cervezas |
| 170 | bebidas - gaseosas |
| 169 | bebidas - isotónicas/energéticas |
| 171 | bebidas - jugos |
| 164 | bebidas - varios |
| 167 | bebidas - vinos y espumantes |
| 174 | belleza e higiene - adelgazantes |
| 400 | belleza e higiene - centros de belleza |
| 175 | belleza e higiene - colonias y perfumes |
| 176 | belleza e higiene - cuidados del cuerpo |
| 182 | belleza e higiene - depilatorios |
| 177 | belleza e higiene - desodorantes |
| 180 | belleza e higiene - higiene de la boca |
| 183 | belleza e higiene - higiene de los pies |
| 184 | belleza e higiene - higiene infantil |
| 181 | belleza e higiene - higiene íntima |
| 185 | belleza e higiene - jabones y geles |
| 178 | belleza e higiene - maquillaje y desmaquillaje |
| 179 | belleza e higiene - productos afeitado |
| 187 | belleza e higiene - productos capilares |
| 188 | belleza e higiene - productos y protectores solares |
| 186 | belleza e higiene - tratamientos de belleza faciales |
| 173 | belleza e higiene - varios |
| 193 | construcción - arquitectura e ingeniería |
| 190 | construcción - empresas inmobiliarias |
| 191 | construcción - materiales de construcción |
| 192 | construcción - sanitarios |
| 189 | construcción - varios |
| 199 | cultura - cine |
| 195 | cultura - coleccionables |
| 202 | cultura - conciertos |
| 198 | cultura - editoriales audiovisuales |
| 201 | cultura - espectáculos deportivos |
| 200 | cultura - museos y galerías |
| 197 | cultura - productos editoriales de música |
| 196 | cultura - productos editoriales impresos y online |
| 194 | cultura - varios |
| 210 | deportes y tiempo libre - amor online |
| 204 | deportes y tiempo libre - artículos deportivos |
| 206 | deportes y tiempo libre - camping - caravaning / outdoor |
| 205 | deportes y tiempo libre - clubes y asociaciones tiempo libre |
| 209 | deportes y tiempo libre - consolas y videojuegos |
| 208 | deportes y tiempo libre - juegos y juguetes |
| 211 | deportes y tiempo libre - parques temáticos y atracciones |
| 207 | deportes y tiempo libre - piscinas y equipamiento |
| 212 | deportes y tiempo libre - pubs y discoteques |
| 203 | deportes y tiempo libre - varios |
| 318 | educación y formación - cursos completos |
| 320 | educación y formación - cursos de idiomas |
| 322 | educación y formación - educación pre escolar |
| 321 | educación y formación - educación primaria y secundaria |
| 319 | educación y formación - universidades y enseñanza superior |
| 317 | educación y formación - varios |
| 222 | energía - combustible, lubricantes y carburantes |
| 221 | energía - energía domestica |
| 220 | energía - varios |
| 314 | eventos - ferias y congresos |
| 315 | eventos - tickets |
| 313 | eventos - varios |
| 232 | finanzas - bancos |
| 233 | finanzas - inversiones |
| 236 | finanzas - seguros y previsión |
| 234 | finanzas - servicios financieros |
| 235 | finanzas - tarjetas y cheques |
| 237 | finanzas - transacciones |
| 231 | finanzas - varios |
| 250 | hogar - bricolaje |
| 240 | hogar - climatizacion |
| 239 | hogar - decoración |
| 241 | hogar - electrodomésticos |
| 244 | hogar - embalaje y celulosa doméstico |
| 242 | hogar - menaje de hogar |
| 243 | hogar - muebles |
| 245 | hogar - productos infantiles |
| 246 | hogar - sistemas de seguridad |
| 248 | hogar - sonido |
| 249 | hogar - televisión / video / dvd |
| 247 | hogar - textil hogar |
| 238 | hogar - varios |
| 257 | industrial, material de trabajo, agropecuario - agropecuario |
| 253 | industrial, material de trabajo, agropecuario - baterías y acumuladores industriales |
| 255 | industrial, material de trabajo, agropecuario - envases y embalajes industriales |
| 256 | industrial, material de trabajo, agropecuario - material de trabajo |
| 259 | industrial, material de trabajo, agropecuario - minería |
| 258 | industrial, material de trabajo, agropecuario - seguridad de trabajo |
| 254 | industrial, material de trabajo, agropecuario - tratamiento depuración aguas industriales |
| 252 | industrial, material de trabajo, agropecuario - varios |
| 225 | informática y equipos de oficina - equipos informáticos |
| 226 | informática y equipos de oficina - máquinas de oficina |
| 227 | informática y equipos de oficina - material de escritorio |
| 229 | informática y equipos de oficina - redes sociales |
| 230 | informática y equipos de oficina - servicios informáticos |
| 228 | informática y equipos de oficina - software y aplicaciones |
| 224 | informática y equipos de oficina - varios |
| 306 | juegos y apuestas - casinos y establecimientos de juego |
| 307 | juegos y apuestas - lotería y apuestas |
| 305 | juegos y apuestas - varios |
| 262 | limpieza - higiene del hogar |
| 261 | limpieza - útiles de limpieza |
| 260 | limpieza - varios |
| 311 | mascotas - accesorios de mascotas |
| 309 | mascotas - alimentación animal |
| 310 | mascotas - clínicas veterinarias |
| 312 | mascotas - servicios para mascotas |
| 308 | mascotas - varios |
| 337 | medios de comunicación |
| 267 | objetos personales - artículos viaje y marroquinería |
| 266 | objetos personales - fotografia |
| 265 | objetos personales - instrumentos musicales |
| 264 | objetos personales - varios |
| 325 | religión y esoterismo - esoterismo |
| 324 | religión y esoterismo - religión |
| 323 | religión y esoterismo - varios |
| 274 | salud - clínicas, hospitales y centros médicos |
| 271 | salud - equipos clínicos y ortopedia |
| 272 | salud - farmacias |
| 273 | salud - laboratorios farmacéuticos |
| 269 | salud - medicamentos |
| 270 | salud - productos de-para farmacia |
| 268 | salud - varios |
| 277 | servicios públicos y privados - campañas de interés público |
| 280 | servicios públicos y privados - consultorías y servicios empresariales |
| 278 | servicios públicos y privados - corporaciones y asociaciones |
| 282 | servicios públicos y privados - elecciones políticas |
| 285 | servicios públicos y privados - empresa multimarca |
| 281 | servicios públicos y privados - fundaciones y organizaciones |
| 283 | servicios públicos y privados - infraestructura urbana |
| 279 | servicios públicos y privados - servicios de empresas |
| 284 | servicios públicos y privados - transporte público |
| 276 | servicios públicos y privados - varios |
| 288 | telecomunicaciones e internet - empresas de telecomunicaciones |
| 289 | telecomunicaciones e internet - equipos y terminales |
| 287 | telecomunicaciones e internet - varios |
| 291 | textil y vestimenta - calzado |
| 292 | textil y vestimenta - moda y complementos |
| 293 | textil y vestimenta - moda y complementos online |
| 296 | textil y vestimenta - prendas deportivas |
| 294 | textil y vestimenta - relojería y joyería |
| 295 | textil y vestimenta - ropa interior |
| 290 | textil y vestimenta - varios |
| 215 | tiendas y restaurantes - centros comerciales |
| 216 | tiendas y restaurantes - ópticas |
| 217 | tiendas y restaurantes - restaurantes |
| 219 | tiendas y restaurantes - supermercados y minimarkets |
| 214 | tiendas y restaurantes - tiendas de productos al por menor |
| 218 | tiendas y restaurantes - tiendas online |
| 213 | tiendas y restaurantes - varios |
| 304 | transporte, viajes y turismo - aerolíneas |
| 302 | transporte, viajes y turismo - agencias y operadores turísticos |
| 301 | transporte, viajes y turismo - alquiler de vehículos |
| 303 | transporte, viajes y turismo - hoteles y alojamientos |
| 300 | transporte, viajes y turismo - servicios de transporte de pasajeros/pasaje |
| 299 | transporte, viajes y turismo - transporte mercancía |
| 298 | transporte, viajes y turismo - varios |
| 99.47644 | 99 | 0.338947 | spa_Latn | 0.989829 |
764bbd952d6e5dce567bdb8191631b7b43005223 | 1,123 | md | Markdown | README.md | Laboratory/country-mapper | 81bba85a836b688845ff0838fa41c2e7ee85f9e9 | [
"MIT"
] | 2 | 2016-09-17T10:35:40.000Z | 2022-02-14T08:55:09.000Z | README.md | Laboratory/country-mapper | 81bba85a836b688845ff0838fa41c2e7ee85f9e9 | [
"MIT"
] | 4 | 2016-10-01T10:39:02.000Z | 2021-05-06T19:52:14.000Z | README.md | Laboratory/country-mapper | 81bba85a836b688845ff0838fa41c2e7ee85f9e9 | [
"MIT"
] | null | null | null | # country-mapper
[](https://travis-ci.org/Laboratory/country-mapper)
Country naming approach might be very unique from org to org. The reason for that is that decision makers do not pay much attention to existing industry standards while just stop on the nice looking names, which will lead to errors when different systems should talk to each other. We notice that country should have aligned with ISO 3166 name. To not have you to change your way, we offer you to introduce a country mapping as below:
Sint Eustatius -> Bonaire, Saint Eustatius And Saba
Bonaire -> Bonaire, Saint Eustatius And Saba
## Installation
Via [npm](https://www.npmjs.com/package/country-mapper):
npm install country-mapper
### Usage
-----
```javascript
const mapper = require('country-mapper');
mapper.convert('Russian Federation'); // RU
mapper.convert('Russia'); // RU
```
```javascript
const mapper = require('country-mapper');
mapper.iso('Russia'); // Russian Federation
```
## License
Formidable is licensed under the MIT license.
| 34.030303 | 434 | 0.746215 | eng_Latn | 0.962702 |
764ccb9e231b4820c7f288b06bd7e157d8ea330e | 3,960 | md | Markdown | docs/2014/relational-databases/databases/estimate-the-size-of-a-heap.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/relational-databases/databases/estimate-the-size-of-a-heap.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/relational-databases/databases/estimate-the-size-of-a-heap.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ヒープ サイズの見積もり | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- database-engine
ms.topic: conceptual
helpviewer_keywords:
- disk space [SQL Server], indexes
- estimating heap size
- size [SQL Server], heap
- space [SQL Server], indexes
- heaps
ms.assetid: 81fd5ec9-ce0f-4c2c-8ba0-6c483cea6c75
author: stevestein
ms.author: sstein
manager: craigg
ms.openlocfilehash: 0464304a23e53762b3e2eb887383b111764379fb
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 10/02/2018
ms.locfileid: "48192062"
---
# <a name="estimate-the-size-of-a-heap"></a>ヒープ サイズの見積もり
ヒープにデータを格納するために必要な領域は、次の手順で見積もることができます。
1. 次のように、テーブル内の行数を指定します。
***Num_Rows*** = テーブル内の行数
2. 次のように、固定長列と可変長列の数を指定し、それらの列を格納するために必要な領域を計算します。
固定長列のグループと可変長列のグループがデータ行内で使用する領域を計算します。 列のサイズは、データ型と長さの指定によって異なります。
***Num_Cols*** = 列 (固定長および可変長) の総数
***Fixed_Data_Size*** = すべての固定長列の合計バイト サイズ
***Num_Variable_Cols*** = 可変長列の数
***Max_Var_Size*** = すべての可変長列の最大合計バイト サイズ
3. NULL ビットマップと呼ばれる行の部分は、列の NULL 値の許容を管理するために予約されています。 このサイズは次のように計算します。
***Null_Bitmap*** = 2 + ((***Num_Cols*** + 7) / 8)
この式の計算結果は、整数部分だけを使用します。 小数部分は無視してください。
4. 次の式で、可変長のデータ サイズを計算します。
テーブル内に可変長列が存在する場合、次の式を使用して、行内でそれらの列を格納するために使用する領域を計算します。
***Variable_Data_Size*** = 2 + (***Num_Variable_Cols*** x 2) + ***Max_Var_Size***
***Max_Var_Size*** に追加されたバイトは、それぞれの可変長列を追跡するためのものです。 この式は、すべての可変長列がいっぱいになることを前提としています。 可変長列の格納領域の使用率が 100% 以下になることが予想される場合、その使用率に基づいて ***Max_Var_Size*** の値を調整し、テーブルの全体サイズをより正確に見積もることができます。
> [!NOTE]
> 組み合わせることができます`varchar`、 `nvarchar`、 `varbinary`、または`sql_variant`8,060 バイトを超える定義されているテーブルの合計幅となる列にします。 これらの列のそれぞれの長さ 8,000 バイトの制限内に収まる必要があります、 `varchar`、 `nvarchar,``varbinary`、または`sql_variant`列。 ただし、これらの列を連結したサイズは、テーブルの制限である 8,060 バイトを超過してもかまいません。
可変長列が存在しない場合は、 ***Variable_Data_Size*** に 0 を設定します。
5. 次の式で行サイズの合計を計算します。
***Row_Size*** = ***Fixed_Data_Size*** + ***Variable_Data_Size*** + ***Null_Bitmap*** + 4
上記の式の 4 という値は、データ行の行ヘッダー オーバーヘッドです。
6. 次の式で、1 ページあたりの行数を計算します (1 ページあたりの空きバイト数は 8,096 です)。
***Rows_Per_Page*** = 8096 / (***Row_Size*** + 2)
行は複数のページにまたがらないので、計算結果の端数は切り捨ててください。 上記の式の 2 という値は、ページのスロット配列内の行のエントリのためのものです。
7. 次の式で、すべての行を格納するために必要なページ数を計算します。
***Num_Pages*** = ***Num_Rows*** / ***Rows_Per_Page***
算出したページ数の端数は切り上げてください。
8. 次の式で、ヒープにデータを格納するために必要な領域を計算します (1 ページあたりの総バイト数は 8,192 です)。
ヒープのサイズ (バイト) = 8192 x ***Num_Pages***
この計算では、次のことは考慮されていません。
- [パーティション分割]
パーティション分割による領域のオーバーヘッドはわずかですが、計算が複雑になります。 これは、計算に含めるほど重要なことではありません。
- アロケーション ページ
ヒープに割り当てられたページの追跡に使用する IAM ページが少なくとも 1 ページありますが、使用される IAM ページ数を正確に計算できるアルゴリズムはなく、領域のオーバーヘッドはわずかです。
- ラージ オブジェクト (LOB) の値
LOB データ型を格納する領域の量を使用する正確に特定するアルゴリズム`varchar(max)`、 `varbinary(max)`、 `nvarchar(max)`、 `text`、 **ntextxml**、および`image`値は複雑です。 LOB データ型の値で使用される領域の計算は、予想される LOB 値の平均サイズを合計し、ヒープの合計サイズに加算するだけで十分です。
- 圧縮
圧縮されたヒープのサイズを事前に計算することはできません。
- スパース列
スパース列の領域要件については、「 [スパース列の使用](../tables/use-sparse-columns.md)」を参照してください。
## <a name="see-also"></a>関連項目
[ヒープ (クラスター化インデックスなしのテーブル)](../indexes/heaps-tables-without-clustered-indexes.md)
[クラスター化インデックスと非クラスター化インデックスの概念](../indexes/clustered-and-nonclustered-indexes-described.md)
[クラスター化インデックスの作成](../indexes/create-clustered-indexes.md)
[非クラスター化インデックスの作成](../indexes/create-nonclustered-indexes.md)
[テーブル サイズの見積もり](estimate-the-size-of-a-table.md)
[クラスター化インデックスのサイズの見積もり](estimate-the-size-of-a-clustered-index.md)
[非クラスター化インデックスのサイズの算出](estimate-the-size-of-a-nonclustered-index.md)
[データベース サイズの見積もり](estimate-the-size-of-a-database.md)
| 33 | 256 | 0.70202 | yue_Hant | 0.606442 |
764cf70a8c78fda577e1176f7fb5194153abaed9 | 3,242 | md | Markdown | docs/ado/reference/ado-api/requery-method.md | stummsft/sql-docs | b7007397eb9bab405f87ed60fb4ce06e5835dd14 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-02-02T17:51:23.000Z | 2020-10-17T02:37:15.000Z | docs/ado/reference/ado-api/requery-method.md | stummsft/sql-docs | b7007397eb9bab405f87ed60fb4ce06e5835dd14 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/requery-method.md | stummsft/sql-docs | b7007397eb9bab405f87ed60fb4ce06e5835dd14 | [
"CC-BY-4.0",
"MIT"
] | 6 | 2021-02-01T23:45:50.000Z | 2021-02-04T21:16:27.000Z | ---
title: "Requery Method | Microsoft Docs"
ms.prod: sql
ms.prod_service: connectivity
ms.component: "ado"
ms.technology: connectivity
ms.custom: ""
ms.date: "01/19/2017"
ms.reviewer: ""
ms.suite: "sql"
ms.tgt_pltfrm: ""
ms.topic: conceptual
apitype: "COM"
f1_keywords:
- "Recordset15::Requery"
- "Recordset15::raw_Requery"
helpviewer_keywords:
- "Requery method [ADO]"
ms.assetid: d81ab76f-1aa8-4ccf-92ec-b65254dc3ea1
caps.latest.revision: 14
author: MightyPen
ms.author: genemi
manager: craigg
---
# Requery Method
Updates the data in a [Recordset](../../../ado/reference/ado-api/recordset-object-ado.md) object by re-executing the query on which the object is based.
## Syntax
```
recordset.Requery Options
```
#### Parameters
*Options*
Optional. A bitmask that contains [ExecuteOptionEnum](../../../ado/reference/ado-api/executeoptionenum.md) and [CommandTypeEnum](../../../ado/reference/ado-api/commandtypeenum.md) values affecting this operation.
> [!NOTE]
> If *Options* is set to **adAsyncExecute**, this operation will execute asynchronously and a [RecordsetChangeComplete](../../../ado/reference/ado-api/willchangerecordset-and-recordsetchangecomplete-events-ado.md) event will be issued when it concludes. The **ExecuteOpenEnum** values of **adExecuteNoRecords** or **adExecuteStream** should not be used with **Requery**.
## Remarks
Use the **Requery** method to refresh the entire contents of a **Recordset** object from the data source by reissuing the original command and retrieving the data a second time. Calling this method is equivalent to calling the [Close](../../../ado/reference/ado-api/close-method-ado.md) and [Open](../../../ado/reference/ado-api/open-method-ado-recordset.md) methods in succession. If you are editing the current record or adding a new record, an error occurs.
While the **Recordset** object is open, the properties that define the nature of the cursor ([CursorType](../../../ado/reference/ado-api/cursortype-property-ado.md), [LockType](../../../ado/reference/ado-api/locktype-property-ado.md), [MaxRecords](../../../ado/reference/ado-api/maxrecords-property-ado.md), and so forth) are read-only. Thus, the **Requery** method can only refresh the current cursor. To change any of the cursor properties and view the results, you must use the [Close](../../../ado/reference/ado-api/close-method-ado.md) method so that the properties become read/write again. You can then change the property settings and call the [Open](../../../ado/reference/ado-api/open-method-ado-recordset.md) method to reopen the cursor.
## Applies To
[Recordset Object (ADO)](../../../ado/reference/ado-api/recordset-object-ado.md)
## See Also
[Execute, Requery, and Clear Methods Example (VB)](../../../ado/reference/ado-api/execute-requery-and-clear-methods-example-vb.md)
[Execute, Requery, and Clear Methods Example (VBScript)](../../../ado/reference/ado-api/execute-requery-and-clear-methods-example-vbscript.md)
[Execute, Requery, and Clear Methods Example (VC++)](../../../ado/reference/ado-api/execute-requery-and-clear-methods-example-vc.md)
[CommandText Property (ADO)](../../../ado/reference/ado-api/commandtext-property-ado.md)
| 58.945455 | 750 | 0.723936 | eng_Latn | 0.884329 |
764d39b06fed97629464f30a8244c1b40e0f4e78 | 63 | md | Markdown | _portfolio/p6.md | emilymliau/emilymliau.github.io | a54322b3cd39dec10471aa33e0faedd84c769ce9 | [
"MIT"
] | 2 | 2021-12-19T05:20:54.000Z | 2021-12-19T05:22:22.000Z | _portfolio/p7.md | emilymliau/emilymliau.github.io | a54322b3cd39dec10471aa33e0faedd84c769ce9 | [
"MIT"
] | null | null | null | _portfolio/p7.md | emilymliau/emilymliau.github.io | a54322b3cd39dec10471aa33e0faedd84c769ce9 | [
"MIT"
] | null | null | null | ---
title: " "
excerpt: <br/>
collection: portfolio
---
<br>
| 7.875 | 21 | 0.555556 | eng_Latn | 0.721795 |
764d6092759483f682ddca45863d32d1ff68e96f | 285 | md | Markdown | content/blog/hawk-tree/index.md | pclauring/pl-blog | 95cace3f82738cfbb132ea73f0cb83ff428af236 | [
"MIT"
] | null | null | null | content/blog/hawk-tree/index.md | pclauring/pl-blog | 95cace3f82738cfbb132ea73f0cb83ff428af236 | [
"MIT"
] | 8 | 2021-03-01T21:24:22.000Z | 2022-02-27T08:26:46.000Z | content/blog/hawk-tree/index.md | pclauring/pl-blog | 95cace3f82738cfbb132ea73f0cb83ff428af236 | [
"MIT"
] | null | null | null | ---
title: Spooky Hawk
date: "2019-10-05T11:26:03.284Z"
description: "Spooky hawk in a tree."
featuredImage: "./hawk-tree-preview.jpg"
---

A **Spooky** hawk appropriate for the cool fall season coming on quick in Michigan. Drawn with mechanical pencil.
| 25.909091 | 113 | 0.722807 | eng_Latn | 0.746505 |
764d7de5de4e83863ddf8bc4ede554765ab09569 | 4,866 | md | Markdown | docs/ado/reference/ado-api/ado-objects-and-interfaces.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/ado-objects-and-interfaces.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/ado-objects-and-interfaces.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Interfaces y objetos ADO | Microsoft Docs
ms.prod: sql
ms.prod_service: connectivity
ms.technology: connectivity
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.topic: conceptual
helpviewer_keywords:
- ADO, objects and interfaces
- objects [ADO]
ms.assetid: d0b7e254-c89f-4406-b846-a060ef038c30
author: MightyPen
ms.author: genemi
manager: craigg
ms.openlocfilehash: f0bba8402bb49d481886e4c81071443873834c8c
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/01/2018
ms.locfileid: "47660653"
---
# <a name="ado-objects-and-interfaces"></a>Interfaces y objetos ADO
Las relaciones entre estos objetos se representan en el [modelo de objetos ADO](../../../ado/reference/ado-api/ado-object-model.md).
Cada objeto puede estar contenido en su colección correspondiente. Por ejemplo, un [Error](../../../ado/reference/ado-api/error-object.md) objeto puede incluirse en un [errores](../../../ado/reference/ado-api/errors-collection-ado.md) colección. Para obtener más información, consulte [colecciones de ADO](../../../ado/reference/ado-api/ado-collections.md) o un tema de la colección específica.
|||
|-|-|
|[IADOCommandConstruction](https://msdn.microsoft.com/library/windows/desktop/aa965677.aspx)|Se usa para recuperar el comando de OLE DB subyacentes de un objeto ADOCommand.|
|[ADORecordConstruction](../../../ado/reference/ado-api/adorecordconstruction-interface.md)|Construye un ADO **registro** objeto de OLE DB **fila** objeto en una aplicación de C o C++.|
|[ADORecordsetConstruction](../../../ado/reference/ado-api/adorecordsetconstruction-interface.md)|Construye un ADO **Recordset** objeto de OLE DB **conjunto de filas** objeto en una aplicación de C o C++.|
|[Interfaz ADOStreamConstruction](../../../ado/reference/ado-api/adostreamconstruction-interface.md)|Construye un ADO **Stream** objeto de OLE DB **IStream** objeto en una aplicación de C o C++.|
|[Command](../../../ado/reference/ado-api/command-object-ado.md)|Define un comando específico que se va a ejecutar en un origen de datos.<br /><br /> El **comando** objeto no es seguro para scripting.|
|[Conexión](../../../ado/reference/ado-api/connection-object-ado.md)|Representa una conexión abierta a un origen de datos.<br /><br /> El **conexión** objeto es seguro para scripting.|
|[Interfaz IDSOShapeExtensions](../../../ado/reference/ado-api/idsoshapeextensions-interface.md)|Obtiene el objeto de origen de datos OLEDB subyacente para el proveedor de formas.|
|[Error](../../../ado/reference/ado-api/error-object.md)|Contiene detalles sobre los errores de acceso de datos que pertenecen a una única operación que implica al proveedor.<br /><br /> El **Error** objeto no es seguro para scripting.|
|[Campo](../../../ado/reference/ado-api/field-object.md)|Representa una columna de datos con un tipo de datos común.|
|[Parámetro](../../../ado/reference/ado-api/parameter-object.md)|Representa un parámetro o un argumento asociado a un **comando** objeto basado en un procedimiento almacenado o una consulta parametrizada.<br /><br /> El **parámetro** objeto no es seguro para scripting.|
|[Propiedad](../../../ado/reference/ado-api/property-object-ado.md)|Representa una característica dinámica de un objeto ADO que está definida por el proveedor.|
|[Registro](../../../ado/reference/ado-api/record-object-ado.md)|Representa una fila de un **Recordset**, o un directorio o archivo en un sistema de archivos. El **registro** objeto es seguro para scripting.|
|[Conjunto de registros](../../../ado/reference/ado-api/recordset-object-ado.md)|Representa el conjunto de registros de una tabla base o los resultados de un comando ejecutado. En cualquier momento, el **Recordset** objeto hace referencia a solo un único registro dentro del conjunto como el registro actual.<br /><br /> El **Recordset** objeto es seguro para scripting.|
|[secuencia](../../../ado/reference/ado-api/stream-object-ado.md)|Representa un flujo binario de datos.<br /><br /> El **Stream** objeto es seguro para scripting.|
## <a name="see-also"></a>Vea también
[Referencia de API de ADO](../../../ado/reference/ado-api/ado-api-reference.md)
[Colecciones de ADO](../../../ado/reference/ado-api/ado-collections.md)
[Propiedades dinámicas de ADO](../../../ado/reference/ado-api/ado-dynamic-properties.md)
[Constantes enumeradas de ADO](../../../ado/reference/ado-api/ado-enumerated-constants.md)
[Apéndice B: errores de ADO](../../../ado/guide/appendixes/appendix-b-ado-errors.md)
[Eventos de ADO](../../../ado/reference/ado-api/ado-events.md)
[Métodos de ADO](../../../ado/reference/ado-api/ado-methods.md)
[Modelo de objetos ADO](../../../ado/reference/ado-api/ado-object-model.md)
[Propiedades de ADO](../../../ado/reference/ado-api/ado-properties.md)
| 86.892857 | 397 | 0.729963 | spa_Latn | 0.891918 |
764dc74454366c45a64a8c626126979f6d2e2fae | 573 | md | Markdown | CHANGELOG.md | fruitwasp/FHMultiSiteBundle | f590cab3c8ea3ea751044778be0a95faeae92e94 | [
"MIT"
] | null | null | null | CHANGELOG.md | fruitwasp/FHMultiSiteBundle | f590cab3c8ea3ea751044778be0a95faeae92e94 | [
"MIT"
] | 1 | 2021-07-30T07:46:28.000Z | 2021-07-30T07:46:28.000Z | CHANGELOG.md | fruitwasp/FHMultiSiteBundle | f590cab3c8ea3ea751044778be0a95faeae92e94 | [
"MIT"
] | 6 | 2019-05-31T11:08:04.000Z | 2020-05-29T13:43:05.000Z | CHANGELOG
=========
This changelog references the relevant changes (bug/security fixes, enhancements) done in minor versions.
To get the diff between two versions, go to: https://github.com/freshheads/FHMultiSiteBundle/compare/v1.0.0...v1.0.1
1.1
* Drops support for Symfony below version 4.4
* Adds support for Symfony 5.x
* Adds support for Twig 3
* Marks bundle config classes as final in docblock; will be final in next major:
* FH\Bundle\MultiSiteBundle\DependencyInjection\Configuration
* FH\Bundle\MultiSiteBundle\DependencyInjection\FHMultiSiteExtension
| 38.2 | 116 | 0.780105 | eng_Latn | 0.704249 |
764de19695e2f3d3788982b2b53bd4154e2f372c | 2,129 | md | Markdown | website/docs/source/v2/getting-started/teardown.html.md | iNecas/vagrant | ff987b19ec4bed99fe6e4e8d2aea865e4d54f0df | [
"MIT"
] | 1 | 2016-08-18T09:35:19.000Z | 2016-08-18T09:35:19.000Z | website/docs/source/v2/getting-started/teardown.html.md | iNecas/vagrant | ff987b19ec4bed99fe6e4e8d2aea865e4d54f0df | [
"MIT"
] | null | null | null | website/docs/source/v2/getting-started/teardown.html.md | iNecas/vagrant | ff987b19ec4bed99fe6e4e8d2aea865e4d54f0df | [
"MIT"
] | 2 | 2018-01-15T04:53:24.000Z | 2020-05-11T22:15:31.000Z | ---
page_title: "Teardown - Getting Started"
sidebar_current: "gettingstarted-teardown"
---
# Teardown
We now have a fully functional virtual machine we can use for basic
web development. But now let's say it is time to switch gears, maybe work
on another project, maybe go out to lunch, or maybe just time to go home.
How do we clean up our development environment?
With Vagrant, you _suspend_, _halt_, or _destroy_ the guest machine.
Each of these options have pros and cons. Choose the method that works
best for you.
**Suspending** the virtual machine by calling `vagrant suspend` will
save the current running state of the machine and stop it. When you're
ready to begin working again, just run `vagrant up`, and it will be
resumed from where you left off. The main benefit of this method is that it
is super fast, usually taking only 5 to 10 seconds to stop and start your
work. The downside is that the virtual machine still eats up your disk space,
and requires even more disk space to store all the state of the virtual
machine RAM on disk.
**Halting** the virtual machine by calling `vagrant halt` will gracefully
shut down the guest operating system and power down the guest machine.
You can use `vagrant up` when you're ready to boot it again. The benefit of
this method is that it will cleanly shut down your machine, preserving the
contents of disk, and allowing it to be cleanly started again. The downside is
that it'll take some extra time to start from a cold boot, and the guest machine
still consumes disk space.
**Destroying** the virtual machine by calling `vagrant destroy` will remove
all traces of the guest machine from your system. It'll stop the guest machine,
power it down, and remove all of the guest hard disks. Again, when you're ready to
work again, just issue a `vagrant up`. The benefit of this is that _no cruft_
is left on your machine. The disk space and RAM consumed by the guest machine
is reclaimed and your host machine is left clean. The downside is that
`vagrant up` to get working again will take some extra time since it
has to reimport the machine and reprovision it.
| 50.690476 | 83 | 0.779709 | eng_Latn | 0.999847 |
764e39f77eb97c0652a31798509a7339b76ea155 | 6,438 | md | Markdown | azure-go-sdk-conceptual/azure-sdk-go-install.md | MicrosoftDocs/azure-docs-sdk-go.sv-SE | bdb0ba2944911f131210573e888bf2fceb52fb8f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-18T12:01:40.000Z | 2020-05-18T12:01:40.000Z | azure-go-sdk-conceptual/azure-sdk-go-install.md | MicrosoftDocs/azure-docs-sdk-go.sv-SE | bdb0ba2944911f131210573e888bf2fceb52fb8f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azure-go-sdk-conceptual/azure-sdk-go-install.md | MicrosoftDocs/azure-docs-sdk-go.sv-SE | bdb0ba2944911f131210573e888bf2fceb52fb8f | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-13T19:29:04.000Z | 2021-11-18T11:58:29.000Z | ---
title: Installera Azure SDK för Go
description: Anvisningar om hur du installerar, utför vendoring och konfigurerar Azure SDK för Go.
author: sptramer
ms.author: sttramer
manager: carmonm
ms.date: 03/14/2018
ms.topic: conceptual
ms.prod: azure
ms.technology: azure-sdk-go
ms.devlang: go
ms.openlocfilehash: 2799e3a6c637036eeaf7b20adf8aa55a8a4ab400
ms.sourcegitcommit: 4db332f5e43a5b43032ff9017805d5fd5a650d86
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 01/28/2019
ms.locfileid: "55145540"
---
# <a name="install-the-azure-sdk-for-go"></a>Installera Azure SDK för Go
Välkommen till Azure SDK för Go! Med detta SDK kan du hantera och interagera med Azure-tjänster från Go-program.
## <a name="get-the-azure-sdk-for-go"></a>Hämta Azure SDK för Go
[!INCLUDE [azure-sdk-go-get](includes/azure-sdk-go-get.md)]
Vissa Azure-tjänster har sitt eget Go-SDK och ingår inte i Azure SDK for Go-grundpaketet. Följande tabell innehåller en lista över tjänsterna med egna SDK:er och deras paketnamn. Alla dessa paket anses utgöra en förhandsversion.
| Tjänst | Paket |
|---------|---------|
| Blob Storage | [github.com/Azure/azure-storage-blob-go](https://github.com/Azure/azure-storage-blob-go) |
| File Storage | [github.com/Azure/azure-storage-file-go](https://github.com/Azure/azure-storage-file-go) |
| Lagringskö | [github.com/Azure/azure-storage-queue-go](https://github.com/Azure/azure-storage-queue-go) |
| Händelsehubb | [github.com/Azure/azure-event-hubs-go](https://github.com/Azure/azure-event-hubs-go) |
| Service Bus | [github.com/Azure/azure-service-bus-go](https://github.com/Azure/azure-service-bus-go) |
| Application Insights | [github.com/Microsoft/ApplicationInsights-go](https://github.com/Microsoft/ApplicationInsights-go) |
## <a name="vendor-the-azure-sdk-for-go"></a>Vendoring i Azure SDK för Go
Du kan utföra vendoring för Azure SDK för Go via [dep](https://github.com/golang/dep). Vendoring rekommenderas av stabilitetsskäl. Om du vill använda `dep` i ditt projekt lägger du till `github.com/Azure/azure-sdk-for-go` i ett `[[constraint]]`-avsnitt i din `Gopkg.toml`. Om du till exempel vill utföra vendoring för version `14.0.0` lägger du till följande post:
```toml
[[constraint]]
name = "github.com/Azure/azure-sdk-for-go"
version = "14.0.0"
```
## <a name="include-the-azure-sdk-for-go-in-your-project"></a>Ta med Azure SDK för Go i ditt projekt
Om du vill använda Azure-tjänster från din Go-kod importerar du alla tjänster som du interagerar med samt de nödvändiga `autorest`-modulerna.
Du får en fullständig lista över tillgängliga moduler från GoDoc för [tillgängliga tjänster](https://godoc.org/github.com/Azure/azure-sdk-for-go) och [AutoRest-paket](https://godoc.org/github.com/Azure/go-autorest). De vanligaste paketen som du behöver från `go-autorest` är:
| Paket | Beskrivning |
|---------|-------------|
| [github.com/Azure/go-autorest/autorest][autorest] | Objekt för hantering av tjänstklientautentisering |
| [github.com/Azure/go-autorest/autorest/azure][autorest/azure] | Konstanter för interaktioner med Azure-tjänster |
| [github.com/Azure/go-autorest/autorest/adal][autorest/adal] | Autentiseringsmekanismer för åtkomst till Azure-tjänster |
| [github.com/Azure/go-autorest/autorest/to][autorest/to] | Ange kontrollhjälp för att arbeta med Azure SDK-datastrukturer |
[autorest]: https://godoc.org/github.com/Azure/go-autorest/autorest
[autorest/azure]: https://godoc.org/github.com/Azure/go-autorest/autorest/azure
[autorest/adal]: https://godoc.org/github.com/Azure/go-autorest/autorest/adal
[autorest/to]: https://godoc.org/github.com/Azure/go-autorest/autorest/to
Versioner av Go-paket och Azure-tjänster är oberoende av varandra. Tjänstversionerna är en del av importsökvägen för modulen nedanför modulen `services`. Den fullständiga sökvägen för modulen är namnet på tjänsten, följt av versionen i formatet `YYYY-MM-DD`, följt av namnet på tjänsten igen. Så här importerar du till exempel versionen `2017-03-30` av beräkningstjänsten:
```go
import "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-03-30/compute"
```
Vi rekommenderar att du använder den senaste versionen av en tjänst när du börjar utveckla och är konsekvent.
Tjänstekrav kan ändras från en version till nästa och det kan bryta din kod, även om det inte förekommer Go SDK-uppdateringar under den tiden.
Du kan även välja en enskild profilversion om du behöver en kollektiv ögonblicksbild av tjänsterna. Just nu är den enda låsta profilen version `2017-03-09`, som kanske inte har de senaste funktionerna för tjänsterna. Profilerna finns under modulen `profiles` med versionerna i formatet `YYYY-MM-DD`. Tjänsterna är grupperade under profilversionerna. Så här importerar du till exempel hanteringsmodulen för Azure-resurser från profilen `2017-03-09`:
```go
import "github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/resources/mgmt/resources"
```
> [!WARNING]
> Även profilerna `preview` och `latest` är tillgängliga. Vi rekommenderar inte att du använder dem. De här profilerna är löpande versioner och tjänstbeteendet kan därför ändras när som helst.
## <a name="next-steps"></a>Nästa steg
Testa att använda en snabbstart om du vill börja använda Azure SDK för Go.
* [Distribuera en virtuell dator från en mall](azure-sdk-go-qs-vm.md)
* [Överföra objekt till Azure Blob Storage med hjälp av Azure Blob SDK för Go](/azure/storage/blobs/storage-quickstart-blobs-go?toc=%2fgo%2fazure%2ftoc.json)
* [Ansluta till Azure Database for PostgreSQL](/azure/postgresql/connect-go?toc=%2fgo%2fazure%2ftoc.json)
Om du vill komma igång med andra tjänster i Go SDK direkt så kan du ta en titt på några av de tillgängliga exempelkoderna.
* [Autentisera med Azure-tjänster](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/master/internal/iam)
* [Distribuera nya virtuella datorer med SSH-autentisering](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/master/compute)
* [Distribuera en behållaravbildning till Azure Container Instances](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/master/compute)
* [Skapa ett kluster i Azure Kubernetes Service](https://github.com/Azure-Samples/azure-sdk-for-go-samples/blob/master/compute)
* [Arbeta med Azure Storage-tjänster](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/master/storage)
* [Alla exempel för Azure SDK för Go](https://github.com/azure-samples/azure-sdk-for-go-samples)
| 65.030303 | 448 | 0.773843 | swe_Latn | 0.982964 |
764ef719930991a8a77ac13c8bb25a32740a7335 | 1,664 | md | Markdown | docs/cli/newrelic_apiAccess.md | olsonso/newrelic-cli | 631e9aa633b74ccc89ec549e1cf44473a97f4983 | [
"Apache-2.0"
] | 70 | 2020-02-27T22:17:52.000Z | 2022-03-02T17:15:57.000Z | docs/cli/newrelic_apiAccess.md | olsonso/newrelic-cli | 631e9aa633b74ccc89ec549e1cf44473a97f4983 | [
"Apache-2.0"
] | 326 | 2020-02-27T20:41:22.000Z | 2022-03-31T22:34:55.000Z | docs/cli/newrelic_apiAccess.md | olsonso/newrelic-cli | 631e9aa633b74ccc89ec549e1cf44473a97f4983 | [
"Apache-2.0"
] | 44 | 2020-04-09T16:37:45.000Z | 2022-03-15T15:36:03.000Z | ## newrelic apiAccess
Manage New Relic API access keys
### Examples
```
newrelic apiaccess apiAccess --help
```
### Options
```
-h, --help help for apiAccess
```
### Options inherited from parent commands
```
-a, --accountId int the account ID to use. Can be overridden by setting NEW_RELIC_ACCOUNT_ID
--debug debug level logging
--format string output text format [JSON, Text, YAML] (default "JSON")
--plain output compact text
--profile string the authentication profile to use
--trace trace level logging
```
### SEE ALSO
* [newrelic](newrelic.md) - The New Relic CLI
* [newrelic apiAccess apiAccessCreateKeys](newrelic_apiAccess_apiAccessCreateKeys.md) - Create keys. You can create keys for multiple accounts at once. You can read more about managing keys on [this documentation page](https://docs.newrelic.com/docs/apis/nerdgraph/examples/use-nerdgraph-manage-license-keys-personal-api-keys).
* [newrelic apiAccess apiAccessDeleteKeys](newrelic_apiAccess_apiAccessDeleteKeys.md) - A mutation to delete keys.
* [newrelic apiAccess apiAccessGetKey](newrelic_apiAccess_apiAccessGetKey.md) - Fetch a single key by ID and type. (NR Internal: [#help-unified-api](https://newrelic.slack.com/archives/CBHJRSPSA), visibility(customer))
* [newrelic apiAccess apiAccessUpdateKeys](newrelic_apiAccess_apiAccessUpdateKeys.md) - Update keys. You can update keys for multiple accounts at once. You can read more about managing keys on [this documentation page](https://docs.newrelic.com/docs/apis/nerdgraph/examples/use-nerdgraph-manage-license-keys-personal-api-keys).
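Because every command above accepts `--format JSON` (the default), it is easy to drive the CLI from a script and parse its output. A minimal sketch, assuming the `newrelic` binary is on your `PATH` and a profile is already configured; the commented invocation at the bottom is hypothetical, since the exact subcommand arguments depend on your account and keys:

```python
import json
import subprocess
from typing import Optional

def run_newrelic(*args: str, account_id: Optional[int] = None):
    """Invoke the newrelic CLI and parse its JSON output."""
    cmd = ["newrelic", *args, "--format", "JSON"]
    if account_id is not None:
        cmd += ["--accountId", str(account_id)]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return json.loads(out)

# Hypothetical example; supply the flags your subcommand requires:
# keys = run_newrelic("apiAccess", "apiAccessGetKey", account_id=1234567)
```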
| 40.585366 | 328 | 0.73738 | eng_Latn | 0.567297 |
764f22e66b33e482bd97c62ccddae3d250526922 | 2,982 | md | Markdown | docs/cpp/override-specifier.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | 14 | 2018-01-28T18:10:55.000Z | 2021-11-16T13:21:18.000Z | docs/cpp/override-specifier.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cpp/override-specifier.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | 2 | 2018-11-01T12:33:08.000Z | 2021-11-16T13:21:19.000Z | ---
title: "override Specifier | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.reviewer: ""
ms.suite: ""
ms.technology: ["cpp-language"]
ms.tgt_pltfrm: ""
ms.topic: "language-reference"
dev_langs: ["C++"]
helpviewer_keywords: ["override Identifier"]
ms.assetid: b286fb46-9374-4ad8-b2e7-4607119b6133
caps.latest.revision: 8
author: "mikeblome"
ms.author: "mblome"
manager: "ghogen"
---
# override Specifier
You can use the `override` keyword to designate member functions that override a virtual function in a base class.
## Syntax
```
function-declaration override;
```
## Remarks
`override` is context-sensitive and has special meaning only when it's used after a member function declaration; otherwise, it's not a reserved keyword.
## Example
Use `override` to help prevent inadvertent inheritance behavior in your code. The following example shows where, without using `override`, the member function behavior of the derived class may not have been intended. The compiler doesn't emit any errors for this code.
```cpp
class BaseClass
{
virtual void funcA();
virtual void funcB() const;
virtual void funcC(int = 0);
void funcD();
};
class DerivedClass: public BaseClass
{
virtual void funcA(); // ok, works as intended
virtual void funcB(); // DerivedClass::funcB() is non-const, so it does not
// override BaseClass::funcB() const and it is a new member function
virtual void funcC(double = 0.0); // DerivedClass::funcC(double) has a different
// parameter type than BaseClass::funcC(int), so
// DerivedClass::funcC(double) is a new member function
};
```
When you use `override`, the compiler generates errors instead of silently creating new member functions.
```cpp
class BaseClass
{
virtual void funcA();
virtual void funcB() const;
virtual void funcC(int = 0);
void funcD();
};
class DerivedClass: public BaseClass
{
virtual void funcA() override; // ok
virtual void funcB() override; // compiler error: DerivedClass::funcB() does not
// override BaseClass::funcB() const
virtual void funcC( double = 0.0 ) override; // compiler error:
// DerivedClass::funcC(double) does not
// override BaseClass::funcC(int)
void funcD() override; // compiler error: DerivedClass::funcD() does not
// override the non-virtual BaseClass::funcD()
};
```
To specify that functions cannot be overridden and that classes cannot be inherited, use the [final](../cpp/final-specifier.md) keyword.
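For comparison only (this aside is an addition, not part of the original article): other languages have grown similar guards against accidental non-overrides. Python 3.12's `typing.override` decorator, sketched below, catches the same class of mistake at static-analysis time (for example, with mypy or pyright) rather than at compile time:

```python
from typing import override  # Python 3.12+

class BaseClass:
    def func_b(self) -> None: ...

class DerivedClass(BaseClass):
    @override
    def funcb(self) -> None:  # static checker error: this overrides nothing (typo)
        ...
```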
## See Also
[final Specifier](../cpp/final-specifier.md)
[Keywords](../cpp/keywords-cpp.md)
| 32.769231 | 271 | 0.620389 | eng_Latn | 0.938648 |
764f654d7eedf73acec86b75c0c41dc6373253bf | 7,824 | md | Markdown | articles/active-directory/develop/howto-reactivate-disabled-acs-namespaces.md | MartinFuhrmann/azure-docs.de-de | 64a46d56152f41aad992d28721a8707649bde76a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/develop/howto-reactivate-disabled-acs-namespaces.md | MartinFuhrmann/azure-docs.de-de | 64a46d56152f41aad992d28721a8707649bde76a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/develop/howto-reactivate-disabled-acs-namespaces.md | MartinFuhrmann/azure-docs.de-de | 64a46d56152f41aad992d28721a8707649bde76a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Reactivate disabled Azure Access Control Service (ACS) namespaces
description: Learn how to find and enable your Azure Access Control Service (ACS) namespaces and request an extension to keep them enabled until February 4, 2019.
services: active-directory
author: rwike77
manager: CelesteDG
ms.service: active-directory
ms.subservice: develop
ms.workload: identity
ms.topic: conceptual
ms.date: 01/21/2019
ms.author: ryanwi
ms.reviewer: jlu
ms.custom: aaddev
ms.collection: M365-identity-device-management
ms.openlocfilehash: 9cc038e67e5528a52b0b98ea1698da07e8120242
ms.sourcegitcommit: 5ab4f7a81d04a58f235071240718dfae3f1b370b
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 12/10/2019
ms.locfileid: "74966944"
---
# <a name="how-to-reactivate-disabled-access-control-service-namespaces"></a>How to: Reactivate disabled Access Control Service namespaces
In November 2017, we announced that Microsoft Azure Access Control Service (ACS), a service of Azure Active Directory (Azure AD), is being retired on November 7, 2018.
We sent emails about the ACS retirement to the administrative email addresses of ACS subscriptions 12 months, 9 months, 6 months, 3 months, 1 month, 2 weeks, 1 week, and 1 day before the retirement date of November 7, 2018.
On October 3, 2018, we announced (by email and in [a blog post](https://azure.microsoft.com/blog/one-month-retirement-notice-access-control-service/)) an extension offer for customers who cannot complete their migration before November 7, 2018. The announcement also contained instructions for requesting the extension.
## <a name="why-your-namespace-is-disabled"></a>Why your namespace is disabled
If you did not opt in to the extension, we began disabling ACS namespaces on November 7, 2018. You must have requested the extension by February 4, 2019; otherwise, you will not be able to enable the namespaces by using PowerShell.
> [!NOTE]
> You must be a service administrator or co-administrator of the subscription to run the PowerShell commands and request an extension.
## <a name="find-and-enable-your-acs-namespaces"></a>Find and enable your ACS namespaces
You can use ACS PowerShell to list all of your ACS namespaces and reactivate any that have been disabled.
1. Download and install ACS PowerShell:
   1. Go to the PowerShell Gallery and download [Acs.Namespaces](https://www.powershellgallery.com/packages/Acs.Namespaces/1.0.2).
   1. Install the module:
      ```powershell
      Install-Module -Name Acs.Namespaces
      ```
   1. Get a list of the available commands:
      ```powershell
      Get-Command -Module Acs.Namespaces
      ```
      To get help on a specific command, run:
      ```powershell
      Get-Help [Command-Name] -Full
      ```
      where `[Command-Name]` is the name of the ACS command.
1. Connect to ACS by using the **Connect-AcsAccount** cmdlet.
   You may need to change your execution policy by running **Set-ExecutionPolicy** before you can run the command.
1. List your available Azure subscriptions by using the **Get-AcsSubscription** cmdlet.
1. List your ACS namespaces by using the **Get-AcsNamespace** cmdlet.
1. Confirm that the namespaces are disabled by verifying that `State` is `Disabled`.
[](./media/howto-reactivate-disabled-acs-namespaces/confirm-disabled-namespace.png#lightbox)
   You can also use `nslookup {your-namespace}.accesscontrol.windows.net` to check whether the domain is still active.
1. Enable your ACS namespaces by using the **Enable-AcsNamespace** cmdlet.
Once you have enabled your namespaces, you can request an extension so that they are not disabled again before February 4, 2019. After that date, all requests to ACS will fail.
## <a name="request-an-extension"></a>Request an extension
We will accept new extension requests starting on January 21, 2019.
Namespaces of customers who have requested an extension will be disabled starting on February 4, 2019. You can still enable namespaces by using PowerShell, but they will be disabled again after 48 hours.
After March 4, 2019, customers will no longer be able to enable namespaces by using PowerShell.
Additional extensions will no longer be approved automatically. If you need more time for your migration, contact [Azure support](https://portal.azure.com/#create/Microsoft.Support) and provide a detailed migration timeline.
### <a name="to-request-an-extension"></a>To request an extension
1. Sign in to the Azure portal and create a [new support request](https://portal.azure.com/#create/Microsoft.Support).
1. Fill out the new support request form as shown in the following example.
   | Support request field | Value |
   |-----------------------|--------------------|
   | **Issue type** | `Technical` |
   | **Subscription** | your subscription |
   | **Service** | `All services` |
   | **Resource** | `General question/Resource not available` |
   | **Problem type** | `ACS to SAS Migration` |
   | **Subject** | a description of the issue |

<!--
1. Navigate to your ACS namespace's management portal by going to `https://{your-namespace}.accesscontrol.windows.net`.
1. Select the **Read Terms** button to read the [updated Terms of Use](https://azure.microsoft.com/support/legal/access-control/), which will direct you to a page with the updated Terms of Use.
[](./media/howto-reactivate-disabled-acs-namespaces/read-terms-button-expanded.png#lightbox)
1. Select **Request Extension** on the banner at the top of the page. The button will only be enabled after you read the [updated Terms of Use](https://azure.microsoft.com/support/legal/access-control/).
[](./media/howto-reactivate-disabled-acs-namespaces/request-extension-button-expanded.png#lightbox)
1. After the extension request is registered, the page will refresh with a new banner at the top of the page.
[](./media/howto-reactivate-disabled-acs-namespaces/updated-banner-expanded.png#lightbox)
-->
## <a name="help-and-support"></a>Help and support
- If you run into problems after following this guidance, contact [Azure support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
- If you have questions or feedback about the ACS retirement, contact us at [email protected].
## <a name="next-steps"></a>Next steps
- See [Migrate from the Azure Access Control Service](active-directory-acs-migration.md) for details about the ACS retirement.
| 61.125 | 340 | 0.771856 | deu_Latn | 0.923084 |
764f72108c2963328ff5b25964c8e17d9c4906ed | 7,058 | md | Markdown | README.md | wangcongcong123/MLP-TC | fcdc3f57727cfe8c7d8ea628db5548b1d1d48351 | [
"MIT"
] | 3 | 2020-02-16T17:06:35.000Z | 2020-03-26T18:54:20.000Z | README.md | wangcongcong123/MLP-TC | fcdc3f57727cfe8c7d8ea628db5548b1d1d48351 | [
"MIT"
] | null | null | null | README.md | wangcongcong123/MLP-TC | fcdc3f57727cfe8c7d8ea628db5548b1d1d48351 | [
"MIT"
] | null | null | null | # Machine Learning Package for Text Classification
This repository contains implementations of machine learning algorithms for text classification, abbreviated MLP-TC (**M**achine **L**earning **P**ackage for **T**ext **C**lassification).
This package was born out of the poor reproducibility of classification experiments modeled ad hoc in Jupyter notebooks across different datasets and algorithms. It is designed to be especially suitable for researchers conducting comparison experiments and benchmarking analyses for text classification.
The package empowers you to explore the performance differences that different ML techniques exhibit on your specific datasets. _Updated: 2019/12/09._
## Highlights
- Well logged for the whole process of training a model for text classification.
- Feed different datasets into models quickly as long as they are formatted as required.
- Support single or multi label (only binary relevance at this stage) classification.
- Support model save, load, train, predict, eval, etc.
## Dependencies
In order to use the package, clone the repository first and then install the following dependencies if you have not got it ready.
- scikit-learn
- seaborn
- pandas
- numpy
- matplotlib
## Steps of Usage
1. **Data preparation**: format your classification datasets as shown below. For the label attribute, labels are separated by "," when a sample has multiple labels.
Have a look at the [dataset/tweet_sentiment_3](dataset/tweet_sentiment_3) dataset provided with this package to see the required format.
```python
{"content":"this is the content of a sample in dataset","label":"label1,label2,..."}
```
2. **Configuration for model training**: Below is an example of configuring model training (the script is in [main.py](main.py)). The important parts are commented below.
```python
import pprint
from sklearn.svm import LinearSVC
# from sklearn.linear_model import LogisticRegression
# from sklearn.svm import SVC
# from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
print("=========configuration for model training======")
configs = {}
configs["relative_path"] = "./" # the path relative to dataset
configs["data"] = "tweet_sentiment_3/json" # specify the path of your data that is under the dataset dir
configs["data_mapping"] = {"content": "content",
"label": "label"} # this is the mapping from the package required attribute names to your json dataset attributes
configs["stemming"] = "true" # specify whether you want to stem or not in preprocessing
configs["tokenizer"] = "tweet" # if it is a tweet-related dataset, it is suggested to use tweet tokenizer, or "string"
configs["vectorizer"] = "count" # options: count, tf-idf, embeddings/glove.twitter.27B.100d.txt.gz
configs["type"] = "single" # single or multi label classification?
configs[
"model"] = LinearSVC(C=0.1) # Options: LinearSVC(C=0.1),SVC, LogisticRegression(solver='ibfgs'),GaussianNB(),RandomForest, etc.
```
3. **Train and save**: Below is an example of training and saving a model (the script is in [main.py](main.py)). The important parts are commented below.
```python
import ml_package.model_handler as mh
print("=========model training and save======")
model = mh.get_model(
configs) # get the specified LinearSVC model from the model handler with configs passed as the parameter
model.train() # train a model
mh.save_model(model, configs) # you can save a model after it is trained
```
4. **Eval and predict**: Below is an example of evaluating on the train, dev, and test sets, and of predicting without ground truth (the script is in [main.py](main.py)). The important parts are commented below.
```python
print("=========evaluate on train, dev and test set======")
model.eval("train") # classification report for train set
model.eval("dev") # classification report for dev set
model.eval("test", confusion_matrix=True) # we can let confusion_matrix=True so as to report confusion matrix as well
print("=========predict a corpus without ground truth======")
corpus2predict = ["i love you", "i hate you"] # get ready for two documents
data_processor = model.get_data_processor()
to_predict = data_processor.raw2predictable(["i love you", "i hate you"])
predicted = model.predict_in_batch(
to_predict["features"].toarray() if hasattr(to_predict["features"], "toarray") else to_predict["features"])
print("Make predictions for:\n ", to_predict["content"])
print("The predicted results are: ", predicted)
```
The output:
Make predictions for:
['i love you', 'i hate you']
The predicted results are: ['positive' 'negative']
Predict with ground truth, e.g. the first three examples from test set.
```python
print("=========predict a corpus with ground truth======")
train_data, _, test_data = model.get_fit_dataset()
data = test_data
to_predict_first = 0
to_predict_last = 3
if configs["type"] == "multi":
mlb = model.get_multi_label_binarizer()
predicted = model.predict_in_batch(data["features"][to_predict_first:to_predict_last].toarray() if hasattr(
data["features"][to_predict_first:to_predict_last], "toarray") else data["features"][
to_predict_first:to_predict_last])
print("Make predictions for:\n ", "\n".join(data["content"][to_predict_first:to_predict_last]))
print("Ground truth are:\n ")
pprint.pprint(data["labels"][to_predict_first:to_predict_last])
print("The predicted results are: ")
pprint.pprint(mlb.inverse_transform(predicted) if configs["type"] == "multi" else predicted)
```
The output:
=========predict a corpus with ground truth======
Make predictions for:
Clearwire Is Sought by Sprint for Spectrum: Sprint disclosed on Thursday that it had offered to buy a stake in C... http://t.co/5Ais2S9j
5 time @usopen champion Federer defeats 29th seed Kohlschreiber and he's through to the last 16 to play 13th seed Isner or Vesely! #USOpen
Old radio commercials for Grateful Dead albums may just be the best thing I've discovered
Ground truth are:
['neutral', 'neutral', 'positive']
The predicted results are:
array(['neutral', 'negative', 'positive'], dtype='<U8')
* After running [main.py](main.py), you will find `output.log` and `LinearSVCdc26f10760747d1c6d94b3a9679d28cf.pkl` under the root of the repository. When you rerun the experiment, the model will be loaded locally instead of re-training from scratch as long as your configurations are not changed.
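Returning to the data format from step 1: if your raw data lives in CSV rather than JSON lines, a small converter is enough. The helper below is a hypothetical sketch, not part of the package; it assumes your CSV has a `content` column and a `label` column, and the paths are placeholders.

```python
import csv
import json

def csv_to_dataset(csv_path: str, json_path: str) -> None:
    """Convert a CSV with `content` and `label` columns to the required JSON-lines format."""
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(json_path, "w", encoding="utf-8") as dst:
        for row in csv.DictReader(src):
            record = {"content": row["content"], "label": row["label"]}
            dst.write(json.dumps(record) + "\n")

# csv_to_dataset("tweets.csv", "dataset/my_corpus/json")  # placeholder paths
```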
## Others
- More extensions of this package will be covered in a tutorial (planned). Feedback is welcome, and reporting any errors or bugs is much appreciated.
| 56.464 | 298 | 0.695664 | eng_Latn | 0.981816 |
764ff585dab229e0c43d88a2bb1b684f602fb22e | 3,525 | md | Markdown | linking_to_external_code/private.md | dirkncl/deno-manual | 221994be2028d810962f00d531a3a10ba6abbfb6 | [
"MIT"
] | null | null | null | linking_to_external_code/private.md | dirkncl/deno-manual | 221994be2028d810962f00d531a3a10ba6abbfb6 | [
"MIT"
] | null | null | null | linking_to_external_code/private.md | dirkncl/deno-manual | 221994be2028d810962f00d531a3a10ba6abbfb6 | [
"MIT"
] | null | null | null | ## Private modules and repositories
There may be instances where you want to load a remote module that is located
in a _private_ repository, such as a private repository on GitHub.
Deno supports sending bearer tokens when requesting a remote module. Bearer
tokens are the predominant type of access token used with OAuth 2.0 and is
broadly supported by hosting services (e.g. GitHub, Gitlab, BitBucket,
Cloudsmith, etc.).
### DENO_AUTH_TOKENS
The Deno CLI looks for an environment variable named `DENO_AUTH_TOKENS` to
determine which authentication tokens it should consider using when requesting
remote modules. The value of the environment variable is a semicolon-delimited
(`;`) list of tokens, where each token is either:
- a bearer token in the format of `{token}@{hostname[:port]}`
- basic auth data in the format of `{username}:{password}@{hostname[:port]}`
For example, a single token would look something like this:
```sh
[email protected]
```
or
```sh
DENO_AUTH_TOKENS=username:[email protected]
```
And multiple tokens would look like this:
```sh
[email protected];[email protected]:8080;username:[email protected]
```
When Deno fetches a remote module whose hostname matches the hostname of one of
these tokens, it sets the `Authorization` header of the request to the value of
`Bearer {token}` or `Basic {base64EncodedData}`. This allows the remote server
to recognize that the request is an authorized request tied to a specific
authenticated user, and to provide access to the appropriate resources and
modules on the server.
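The matching rule is easy to emulate. The sketch below mirrors it in Python purely for illustration; the real logic lives inside Deno itself, and this parsing is only an approximation of the documented format:

```python
import base64
from typing import Optional

def auth_header(env_value: str, host: str) -> Optional[str]:
    """Pick the Authorization header value for `host` from a DENO_AUTH_TOKENS-style string."""
    for token in env_value.split(";"):
        creds, _, hostname = token.rpartition("@")
        if hostname != host:
            continue
        if ":" in creds:  # basic auth data: {username}:{password}
            encoded = base64.b64encode(creds.encode()).decode()
            return f"Basic {encoded}"
        return f"Bearer {creds}"  # bearer token
    return None

print(auth_header("[email protected];[email protected]:8080", "deno.land"))
# Bearer a1b2c3d4e5f6
```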
### GitHub
To be able to access private repositories on GitHub, you would need to issue
yourself a _personal access token_. You do this by logging into GitHub and going
under _Settings -> Developer settings -> Personal access tokens_:

You would then choose to _Generate new token_ and give your token a description
and appropriate access:

And once created GitHub will display the new token a single time, the value of
which you would want to use in the environment variable:

In order to access modules that are contained in a private repository on GitHub,
you would want to use the generated token in the `DENO_AUTH_TOKENS` environment
variable scoped to the `raw.githubusercontent.com` hostname. For example:
```sh
DENO_AUTH_TOKENS=a1b2c3d4e5f6@raw.githubusercontent.com
```
This should allow Deno to access any modules that the user who the token was
issued for has access to.
When the token is incorrect, or the user does not have access to the module,
GitHub will issue a `404 Not Found` status, instead of an unauthorized status.
So if you are getting errors that the modules you are trying to access are not
found on the command line, check the environment variable settings and the
personal access token settings.
In addition, `deno run -L debug` should print out a debug message about the
number of tokens that are parsed out of the environment variable. It will print
an error message if it feels any of the tokens are malformed. It won't print any
details about the tokens for security purposes.
[Import maps →](./linking_to_external_code/import_maps.md)[← Proxies](./linking_to_external_code/proxies.md)
| 40.517241 | 108 | 0.790638 | eng_Latn | 0.996877 |
765033a71f6d772f2a4f9dcac22cb1a975ba9e7c | 1,381 | md | Markdown | README.md | zakolenko/jooq4s | be76f1dc9cd55b1fafbf62bb25d84fd96a6b47e5 | ["Apache-2.0"] | 2 | 2020-02-15T18:47:39.000Z | 2020-02-16T14:57:08.000Z | README.md | zakolenko/jooq4s | be76f1dc9cd55b1fafbf62bb25d84fd96a6b47e5 | ["Apache-2.0"] | null | null | null | README.md | zakolenko/jooq4s | be76f1dc9cd55b1fafbf62bb25d84fd96a6b47e5 | ["Apache-2.0"] | null | null | null |
# jooq4s
[](https://github.com/zakolenko/jooq4s/actions?query=branch%3Amaster+workflow%3Abuild) [](https://maven-badges.herokuapp.com/maven-central/io.github.zakolenko/jooq4s-core_2.13)
Scala wrapper for jOOQ
## Usage
The packages are published on Maven Central.
```scala
libraryDependencies += "io.github.zakolenko" %% "jooq4s-core" % "<version>"
libraryDependencies += "io.github.zakolenko" %% "jooq4s-hikari" % "<version>"
```
## Contributing
The jooq4s project welcomes contributions from anybody wishing to participate. All code or documentation that is provided must be licensed with the same license that jooq4s is licensed with (Apache 2.0, see LICENSE.txt).
People are expected to follow the [Scala Code of Conduct](./CODE_OF_CONDUCT.md) when discussing jooq4s on GitHub, Gitter channel, or other venues.
Feel free to open an issue if you notice a bug, have an idea for a feature, or have a question about the code. Pull requests are also gladly accepted. For more information, check out the [contributor guide](./CONTRIBUTING.md).
## License
All code in this repository is licensed under the Apache License, Version 2.0. See [LICENSE.md](./LICENSE.md).
| 51.148148 | 381 | 0.769732 | eng_Latn | 0.910256 |
765044e40a61b50b483aae8af66d84d40b69551d | 4,527 | md | Markdown | session3.md | amirrosein/azmayesh-4 | 4da9b9ddc1a60befc8fa2e3c8537d52692ac5489 | ["MIT"] | null | null | null | session3.md | amirrosein/azmayesh-4 | 4da9b9ddc1a60befc8fa2e3c8537d52692ac5489 | ["MIT"] | 4 | 2021-07-21T19:02:33.000Z | 2021-11-13T19:31:22.000Z | session3.md | amirrosein/azmayesh-4 | 4da9b9ddc1a60befc8fa2e3c8537d52692ac5489 | ["MIT"] | 4 | 2021-07-30T12:43:00.000Z | 2021-08-10T05:48:15.000Z |
<div align='justify'>
# Experiment 3 - Observing the Behavior of the Kernel and the Operating System
## 3.1 Introduction
In this lab session we will learn how to observe the behavior of the kernel in the Linux operating system and how to extract information about processes and the kernel.
### 3.1.1 Prerequisites
Students are expected to be familiar with the following:
* Programming in C/C++
* The Linux shell commands covered in the previous sessions
## 3.2 The /proc File System
In the Linux operating system, a mechanism known as the /proc file system is provided for inspecting the state of the kernel, viewing running processes, and obtaining information of this kind. In reality, /proc is not an ordinary file system; rather, it is an interface for accessing the address space of running processes. This makes it possible to use the ordinary system calls open, read, and write to extract the information you need about processes, or to apply changes to them.
## 3.3 Description of the Experiment
### 3.3.1 Viewing the /proc File System
1. Boot into the virtual operating system you created in the previous session.
1. Enter the appropriate command to change into the /proc directory.
1. Use the ls command to see the list of files in this directory.
1. As you can see, this directory contains a number of files whose names are numbers. These names are in fact the Process IDs (PIDs) of the processes currently running on the system. Note that these files do not actually exist as traditional files; they are interfaces created by the kernel for accessing process information.
### 3.3.2 Viewing the Contents of a File in /proc
1. As mentioned before, the files in /proc look like ordinary files. In reality, however, each of the files and subdirectories in this area is a program that reads certain variables from the kernel and returns them as ASCII text.
1. Use the cat command to print the contents of /proc/version to the output. What do you see in the output?
1. Print the contents of a few other files (files with non-numeric names) in this directory. What does each of these files show?
1. Write a simple C++ program that uses the <fstream> facilities to read /proc/version and write its contents to a file named Linux Version.txt. As you will see, the file-handling functions make it easy to work with the files under /proc. (A minimal sketch follows after this list.)
1. Try to write an arbitrary sentence into /proc/version. What happens?
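As a hint for the C++ task above, a minimal sketch (an illustration added here, not part of the original handout) could look like this:
```cpp
#include <fstream>
#include <string>
int main() {
    std::ifstream version("/proc/version");  // procfs exposes the kernel version as a text file
    std::ofstream out("Linux Version.txt");  // ordinary output file on disk
    std::string line;
    while (std::getline(version, line)) {    // /proc/version is normally a single line
        out << line << '\n';
    }
    return 0;
}
```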
### 3.3.3 Observing the State of Processes
1. For every process there is a directory in /proc named after that process's number. Pick one of these directories, enter it, and then list the files it contains with the ls command.
1. Each of these files provides particular information about the process. Use the cat command to display the contents of each of the files below and determine what each of them contains (the list follows). For more information about each item, use the man 5 proc command.
The files to be examined:
cmdline/ environ/ stat/ status/ statm/ cwd/ exe/ root
1. Write a simple script that prints the numbers of the running processes together with their names (a sketch follows below).
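One possible sketch for such a script (an assumption of this rewrite, not the official solution) reads the `comm` entry of every numeric directory under /proc:
```bash
#!/bin/bash
# Print the PID and the name of every running process via /proc
for dir in /proc/[0-9]*; do
    pid=${dir#/proc/}
    name=$(cat "$dir/comm" 2>/dev/null)  # 'comm' holds the command name
    echo "$pid $name"
done
```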
**Exercise 3.1**: Using what you have learned above, write a program that takes a process number and prints information about it, including the name of its executable file, the amount of memory it uses (in bytes), its launch parameters, and its environment variables.
### 3.3.4 Viewing Information About the Kernel
* Similar to the way you can view information about processes, the /proc file system also lets you view information related to the kernel. Among this information are details of I/O devices, the state of interrupts, processor information, and so on. These files are located in the root of /proc (the files whose names are not numbers).
1. Change into the /proc directory.
1. Use the ls command to see the list of files in this directory once more.
1. Examine each of the following files or directories and determine what information each of them provides.
The items to be examined:
meminfo/version/uptime/stat/mount/net/loadavg/
interrupts/ioports/filesystem/cpuinfo/cmdline
1. Write a program that prints the processor's model name, its frequency, and the size of its cache memory.
1. Write a program that prints the total memory, the used memory, and the free memory.
**Exercise 3.2**: Answer the following questions.
* Research five of the most important files in /proc/sys/kernel and explain what each is used for.
* Explain the self entry in the /proc directory and its use.
</div>
| 33.286765 | 61 | 0.733819 | pes_Arab | 0.998787 |
7650e95860d4f1b834e604278cbbfbfdd122c273 | 17,143 | md | Markdown | learning/classes.md | frankschae/www.julialang.org | ad37b738a44098a9ad694cb1b0e4ab5fee8b30e7 | ["MIT"] | null | null | null | learning/classes.md | frankschae/www.julialang.org | ad37b738a44098a9ad694cb1b0e4ab5fee8b30e7 | ["MIT"] | null | null | null | learning/classes.md | frankschae/www.julialang.org | ad37b738a44098a9ad694cb1b0e4ab5fee8b30e7 | ["MIT"] | null | null | null |
# Julia in the classroom
Julia is now being used in several universities and online courses.
If you know of other classes using Julia for teaching, please consider updating this list.
@@tight-list
* AGH University of Science and Technology, Poland
* [Signal processing in medical diagnostic systems](http://home.agh.edu.pl/~pieciak/en/dydaktyka/przetwarzanie-sygnalow-w-systemach-diagnostyki-medycznej) (Tomasz Pieciak), Spring 2015
* Arizona State University
* MAT 423, Numerical Analysis (Prof. Clemens Heitzinger), Fall 2014
* Azad University, Science and Research Branch
* CE 3820, Modeling and Evaluation (Dr. Arman Shokrollahi), Fall 2014
* Brown University
* [CSCI 1810](https://cs.brown.edu/courses/csci1810/), Computational Molecular Biology (Prof. Benjamin J. Raphael), Fall 2014
* [Budapest University of Technology and Economics](https://www.bme.hu/)
* [Applications of Differential Equations and Vector Analysis for Engineers II.] ([Brigitta Szilágyi](https://sites.google.com/site/brszilagyi/))
* City University of New York
* [MTH 229](https://www.math.csi.cuny.edu/abhijit/229/), Calculus Computer Laboratory (Prof. John Verzani), Spring 2014. Also see the [MTH 229 Projects](https://mth229.github.io) page.
* Cornell University
* [CS 5220](https://www.cs.cornell.edu/~bindel/class/cs5220-s14/), Applications of Parallel Computers (Prof. David Bindel), Spring 2014
* École Polytechnique Fédérale de Lausanne
* [CIVIL 557] Decision-aid methodologies in transportation (Mor Kaspi, Virginie Lurkin), Spring 2017
* [Einaudi Institute for Economics and Finance, Rome](http://www.eief.it/eief/)
* [Econometrics of DSGE Models](http://gragusa.org/teaching/eief-dsge/) ([Giuseppe Ragusa](http://gragusa.org))
* Emory University
* [MATH 346](https://www.mathcs.emory.edu/~lruthot/courses/sp15-math346.html), Introduction to Optimization Theory (Prof. Lars Ruthotto), Spring 2015
* [MATH 516](https://www.mathcs.emory.edu/~lruthot/courses/math516.html), Numerical Analysis II (Prof. Lars Ruthotto), Spring 2015
* Federal Rural University of Rio de Janeiro (UFRRJ)
* TM429, Introduction to Recommender Systems (Prof. [Filipe Braida](https://github.com/filipebraida)), Fall 2016, Spring 2017
* [Federal University of Alagoas](https://ufal.br) (_Universidade Federal de Alagoas_, UFAL)
* COMP272, Distributed Systems ([Prof. André Lage-Freitas](https://sites.google.com/a/ic.ufal.br/andrelage)): 2015, 2016, and 2017
* [Federal University of Paraná](https://www.ufpr.br/) (_Universidade Federal do Paraná_, UFPR)
* [CM103](https://abelsiqueira.github.io/cm103-2019s2/), Laboratório de Matemática Aplicada (Prof. Abel Soares Siqueira), 2016, 2017, and 2018, 2019
* [CMM014](https://abelsiqueira.github.io/cmm014-2019s1/), Cálculo Numérico (Prof. Abel Soares Siqueira), 2019
* [CM106/CMI043/CMM204/MNUM7079](https://www.youtube.com/playlist?list=PLOOY0eChA1uyk_01nGJVmcQGvcJq9h6_6), Otimização Não Linear (Prof. Abel Soares Siqueira), 2018, 2020
* Federal University of Uberlândia, Institute of Physics
* [GFM050](https://www.infis.ufu.br/gerson), Física Computacional (Prof. Gerson J. Ferreira), Fall 2016
* Hadsel High School, Stokmarknes, Nordland, Norway
* [AnsattOversikt](https://www.hadsel.vgs.no/AnsattOversikt.aspx?personid=16964&mid1=15925), [REA3034] Programmering og modellering (Programming and modeling with Julia and Snap), 2018 / 19 (High school lecturer Olav A Marschall, M.sc. Computer Science)
* IIT Indore
* [ApplNLA](https://github.com/ivanslapnicar/GIAN-Applied-NLA-Course), Modern Applications of Numerical Linear Algebra (Prof. [Ivan Slapnicar](http://marjan.fesb.hr/~slap/)), June 2016
* Iowa State University
* [STAT 590F](https://github.com/heike/stat590f), Topics in Statistical Computing: Julia Seminar (Prof. Heike Hofmann), Fall 2014
* [Luiss University Rome](https://www.luiss.edu/), [Department of Economics and Finance](https://economiaefinanza.luiss.it)
* [Econometric Theory](http://gragusa.org/teaching/grad-et/) ([Giuseppe Ragusa](http://gragusa.org))
* [Lund University, Sweden](https://www.lunduniversity.lu.se/), [Department of Automatic Control](http://control.lth.se/)
* [Julia for Scientific Computing](http://www.control.lth.se/education/doctorate-program/julia-course/julia-course-2019/)
* [Optimization for Learning](https://www.control.lth.se/education/engineering-program/frtn50-optimization-for-learning/)
* Massachusetts Institute of Technology (MIT)
* [6.251 / 15.081](https://stellar.mit.edu/courseguide/course/6/fa15/6.251/), Introduction to Mathematical Programming (Prof. Dimitris J. Bertsimas), Fall 2015
* [18.06](https://web.mit.edu/18.06/www/), Linear Algebra: Fall 2015, Dr. [Alex Townsend](https://github.com/ajt60gaibb); Fall 2014, Prof. Alexander Postnikov; Fall [2013](https://stellar.mit.edu/S/course/18/fa13/18.06), Prof. Alan Edelman
* [18.303](https://math.mit.edu/~stevenj/18.303/), Linear Partial Differential Equations: Analysis and Numerics (Prof. [Steven G. Johnson](https://github.com/stevengj)), Fall 2013–2016.
* [18.337 / 6.338](http://courses.csail.mit.edu/18.337/2016/), Numerical Computing with Julia (Prof. [Alan Edelman](https://github.com/alanedelman)). [Fall 2015](https://courses.csail.mit.edu/18.337/2015) ([IJulia notebooks](https://github.com/alanedelman/18.337_2015)). Fall 2013–
* [18.085 / 0851](https://math.mit.edu/classes/18.085/2015FA/index.html), Computational Science And Engineering I (Prof. Pedro J. Sáenz)
* [18.330](https://homerreid.dyndns.org/teaching/18.330/), Introduction to Numerical Analysis (Dr. Homer Reid), Spring 2013–2015
* [18.335](https://math.mit.edu/~stevenj/18.335/), Introduction to Numerical Methods (Prof. Steven G. Johnson), Fall 2013, Spring 2015
* [18.338](https://web.mit.edu/18.338/www/), Eigenvalues Of Random Matrices (Prof. Alan Edelman), Spring 2015
* [18.S096](https://math.mit.edu/classes/18.S096/iap17/), Performance Computing in a High Level Language (Steven G. Johnson, Alan Edelman, David Sanders, Jeff Bezanson), January 2017.
* 15.093 / 6.255, Optimization Methods (Prof. Dimitris Bertsimas and Dr. Phebe Vayanos), Fall 2014
* [15.S60](https://github.com/IainNZ/ORSoftwareTools2014), Software Tools for Operations Research (Iain Dunning), Spring 2014
* [15.083](https://stellar.mit.edu/S/course/15/sp14/15.083/), Integer Programming and Combinatorial Optimization (Prof. Juan Pablo Vielma), Spring 2014
* Northeastern University, Fall 2016
* MTH3300: Applied Probability & Statistics
* [Óbuda University](https://www.uni-obuda.hu), [John von Neumann Faculty of Informatics, Institute of Applied Mathematics](https://nik.uni-obuda.hu)
* [Intelligent Development Tools (Hungarian)]
* [Intelligent Development Tools (English)]
* [Fundamental Mathematical Methods (English)]
* Pennsylvania State University
* [ASTRO 585](https://www.personal.psu.edu/~ebf11/teach/astro585/), High-Performance Scientific Computing for Astrophysics (Prof. Eric B. Ford), Spring 2014 - [github repo](https://github.com/eford/Astro585_2014_Spring)
* [ASTRO 585](https://www.personal.psu.edu/~ebf11/teach/astro585/), High-Performance Scientific Computing for Astrophysics (Prof. Eric B. Ford), Fall 2015 - [github repo](https://github.com/eford/Astro585_2015_Fall_Public)
* Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
* Programming in Julia (Prof. [Thuener Silva](https://github.com/Thuener)), Summer 2017
* Linear Optimization (Prof. [Alexandre Street](https://alexandrestreet.wordpress.com/)), Spring 2017
* Decision and Risk Analysis (Prof. [Davi Valladão](https://www.ind.puc-rio.br/en/equipe/davi-michel-valladao/)), Fall 2015
* Purdue University
* [CS51400](https://www.cs.purdue.edu/homes/dgleich/cs514-2016/), Numerical Analysis (Prof. [David Gleich](https://www.cs.purdue.edu/homes/dgleich/)), Spring 2016
* Royal Military Academy (Brussels)
* ES123, Computer Algorithms and Programming Project (Prof. [Ben Lauwens](https://github.com/BenLauwens)), Spring 2018
* ES313, Mathematical modelling and Computer Simulation (Prof. [Ben Lauwens](https://github.com/BenLauwens)), Fall 2018
* “Sapienza” University of Rome, Italy
* [Operations Research](http://www.iasi.cnr.it/~liuzzi/teachita.htm) (Giampaolo Liuzzi), Spring 2015
* [Optimization for Complex Systems](http://www.iasi.cnr.it/~liuzzi/teachita.htm) (Giampaolo Liuzzi), Spring 2016
* [Sciences Po Paris](https://www.sciencespo.fr), [Department of Economics](https://www.sciencespo.fr/department-economics/en), Spring 2016.
* [Computational Economics for PhDs](https://github.com/ScPo-CompEcon/Syllabus) ([Florian Oswald](https://floswald.github.io))
* SGH Warsaw School of Economics, Poland
* 223490-0286, Statistical Learning Methods ([Bogumił Kamiński](http://bogumilkaminski.pl/about/)): Fall 2017, Spring 2018, Fall 2018
* 234900-0286, Agent-Based Modeling ([Bogumił Kamiński](http://bogumilkaminski.pl/about/)): Fall 2017, Spring 2018, Fall 2018
* 239420-0553, _Introduction to Deep Learning_ module ([Bogumił Kamiński](http://bogumilkaminski.pl/about/)): Spring 2018
* Southcentral Kentucky Community and Technical College
* CIT 120 Computational Thinking (Inst. [Bryan Knowles](https://github.com/snotskie/)), Online, Fall 2017
* Stanford University
* [AA222](https://www.stanford.edu/class/aa222/), Introduction to Multidisciplinary Design Optimization (Prof. Mykel J. Kochenderfer), Spring 2014
* [AA228/CS238](https://www.stanford.edu/class/aa228/), Decision Making under Uncertainty (Prof. Mykel J. Kochenderfer), Fall 2014
* [EE103](https://stanford.edu/class/ee103/), Introduction to Matrix Methods (Prof. Stephen Boyd), Fall 2014, Fall 2015
* [CME 257](https://github.com/icme/cme257-advanced-julia/), Advanced Topics in Scientific Computing with Julia (Mr. [Brad Nelson](https://github.com/bnels)), Fall 2015
* [EE266](http://ee266.stanford.edu/), Stochastic Control (Prof. Sanjay Lall), Spring 2016
* Tec de Monterrey, Santa Fe Campus, Mexico City
* [IN2022](https://samp.itesm.mx/Materias/VistaPreliminarMateria?clave=IN2022&lang=ES), Modelos de Optimización (Prof. [Marzieh Khakifirooz](https://www.linkedin.com/in/marzieh-khakifirooz-a3b85643/)), Spring 2020
* Tokyo Metropolitan University, Tokyo, Japan
* L0407, [Exercises in Programming I for Mechanical Systems Engineering](https://github.com/hsugawa8651/mseprogOne) (Hiroharu Sugawara): Fall 2018, Fall 2019
* [TU Dortmund / SFB 823](https://www.statistik.tu-dortmund.de/sfb823.html), Germany
* One week introductory course into Julia with applications in statistics and economics ([Tileman Conring](https://www.statistik.tu-dortmund.de/conring.html)): Spring 2018
* Universidad Nacional Autónoma de México
* [Física computacional](https://github.com/dpsanders/fisica_computacional) (Prof. David P. Sanders), Fall 2014
* Métodos numéricos para sistemas dinámicos (Prof. Luis Benet), Fall 2014
* [Métodos numéricos avanzados](https://github.com/dpsanders/MetodosNumericosAvanzados) (Prof. David P. Sanders and Prof. Luis Benet), Spring 2015
* [Métodos computacionales para la física estadística](https://github.com/dpsanders/metodos-monte-carlo) (Prof. David P. Sanders), Spring 2015
* Universidad Nacional Pedro Ruiz Gallo, Lambayeque, Perú
* Julia: el lenguaje del futuro, [Semana de Integración de Ingeniería Electrónica](https://www.slideshare.net/Ownv94/lenguaje-julia-el-lenguaje-del-futuro), (Oscar William Neciosup Vera), Spring 2015
* Universidad Veracruzana, México
* [Algoritmos Evolutivos y de Inteligencia Colectiva](https://github.com/jmejia8/julia-python) (Jesús A. Mejía-de-Dios), Fall 2019
* University at Buffalo
* [IE 572](https://www.chkwon.net/teaching/#ie-572-linear-programming/) Linear Programming (Prof. Changhyun Kwon), Fall 2014
* University of Antwerp, Faculty of Pharmaceutical, Biomedical, Veterinary Sciences, October 2016
* Computational Neuroscience (2070FBDBMW), Master of Biomedical Sciences, of Biochemistry, of Physics ([Michele Giugliano](https://www.uantwerpen.be/popup/opleidingsonderdeel.aspx?catalognr=2070FBDBMW&taal=nl&aj=2016))
* University of California, Los Angeles (UCLA)
* [Stat M230/Biomath 280/Biostat M280](https://hua-zhou.github.io/teaching/biostatm280-2017spring/), Statistical Computing, Spring 2017 (Prof. [Hua Zhou](https://github.com/Hua-Zhou))
* University of Cologne, Institute for Theoretical Physics
* [Computational Physics](https://www.thp.uni-koeln.de/trebst/Lectures/2016-CompPhys.shtml) (Prof. Simon Trebst), Summer 2016
* [Computational Physics](https://www.thp.uni-koeln.de/~bulla/cp-ss17.html) (Prof. Ralf Bulla), Summer 2017
* [Statistical Physics](https://www.thp.uni-koeln.de/trebst/Lectures/2017-StatPhys.shtml) (Prof. Simon Trebst), Winter 2017
* [Computational Many-Body Physics](https://www.thp.uni-koeln.de/trebst/Lectures/2018-CompManyBody.shtml) (Prof. Simon Trebst), Summer 2018
* [Advanced Julia Workshop](https://github.com/crstnbr/JuliaWorkshop18) (MSc. Carsten Bauer), Fall 2018
* [Computational Physics](https://www.thp.uni-koeln.de/trebst/Lectures/2019-CompPhys.shtml) (Prof. Simon Trebst), Summer 2019
* [Advanced Julia Workshop](https://github.com/crstnbr/JuliaWorkshop19) (MSc. Carsten Bauer), Fall 2019
* University of Connecticut, Storrs
* CHEG 5395, Metaheuristic and Heuristic Methods in Chemical Engineering (Prof. Ranjan Srivastava), Spring 2018
* University of Edinburgh
* Spring 2017, [MATH11146](https://www.drps.ed.ac.uk/16-17/dpt/cxmath11146.htm), Modern optimization methods for big data problems (Prof. [Peter Richtarik](https://www.maths.ed.ac.uk/~prichtar/index.html))
* Spring 2016, [MATH11146](http://www.drps.ed.ac.uk/15-16/dpt/cxmath11146.htm), Modern optimization methods for big data problems (Prof. [Peter Richtarik](https://www.maths.ed.ac.uk/~prichtar/index.html))
* University of Glasgow, School of Mathematics and Statistics
* An Introduction to Julia, course of Online Master of Science (MSc) in Data Analytics ([Theodore Papamarkou](https://www.gla.ac.uk/postgraduate/taught/dataanalytics/)), September 2017
* University of Oulu
* Invited [Advanced Julia Workshop](https://github.com/crstnbr/JuliaOulu20) (MSc. Carsten Bauer, University of Cologne), Spring 2020
* University of South Florida
* [ESI 6491](https://www.chkwon.net/teaching/esi-6491/), Linear Programming and Network Optimization (Prof. Changhyun Kwon), Fall 2015
* [EIN 6945](https://www.chkwon.net/teaching/ein-6935/), Nonlinear Optimization and Game Theory (Prof. [Changhyun Kwon](https://www.chkwon.net/)), Spring 2016
* University of Sydney
* [MATH3076/3976](https://www.maths.usyd.edu.au/u/olver/teaching/MATH3976/), Mathematical Computing (Assoc. Prof. [Sheehan Olver](https://www.maths.usyd.edu.au/u/olver/)), Fall 2016
* Université Paul Sabatier, Toulouse
* [Optimization in Machine Learning](https://www.irit.fr/cimi-machine-learning/node/15.html), (Prof. [Peter Richtarik](https://www.maths.ed.ac.uk/~prichtar/)), Fall 2015
* [Université de Liège](https://www.ulg.ac.be/)
* [MATH0462](http://www.tcuvelier.be/teaching-2016-2017-discrete-optimisation), Discrete Optimization (Prof. [Quentin Louveaux](http://www.montefiore.ulg.ac.be/~louveaux/)), Fall 2016
* [MATH0461](https://www.programmes.uliege.be/cocoon/20192020/cours/MATH0461-2.html), Introduction to Numerical Optimization (Prof. [Quentin Louveaux](http://www.montefiore.ulg.ac.be/~louveaux/)), Fall 2016
* [MATH0462](http://www.montefiore.ulg.ac.be/~tcuvelier/teaching/2015-2016-discrete-optimisation), Discrete Optimization (Prof. [Quentin Louveaux](http://www.montefiore.ulg.ac.be/~louveaux/)), Fall 2015
* Université de Montréal
* [IFT1575](https://admission.umontreal.ca/cours-et-horaires/cours/IFT-1575/), Modèles de recherche opérationnelle (Prof. [Bernard Gendron](https://www.iro.umontreal.ca/~gendron/)), Fall 2017
* [IFT3245](https://admission.umontreal.ca/cours-et-horaires/cours/IFT-3245/), Simulation et modèles (Prof. [Fabian Bastin](https://www.iro.umontreal.ca/~bastin/)), Fall 2017
* [IFT3515](https://admission.umontreal.ca/cours-et-horaires/cours/IFT-3515/), Optimisation non linéaire (Prof. [Fabian Bastin](https://www.iro.umontreal.ca/~bastin/)), Winter 2017-2018
* [IFT6512](https://admission.umontreal.ca/cours-et-horaires/cours/IFT-6512/), Programmation stochastique (Prof. [Fabian Bastin](https://www.iro.umontreal.ca/~bastin/)), Winter 2018
* University of Washington
* [AMATH 586](https://trogdoncourses.github.io/amath-586-2020/), Numerical analysis of time-dependent problems (Prof. [Tom Trogdon](http://faculty.washington.edu/trogdon/)), Spring 2020
* Western University Canada
* [CS 2101A](https://www.csd.uwo.ca/~moreno/cs2101a_moreno/index.html), Foundations of Programming for High Performance Computing. (Prof. Marc Moreno Maza), Fall 2013
@@
Have a Julia class you want added to this list? Please [open an issue or pull request](https://github.com/JuliaLang/www.julialang.org/issues).
| 108.5 | 285 | 0.749285 | yue_Hant | 0.434799 |
7651a5499c0d8a49ceb9dbbefabb452a9779a70d | 6,187 | md | Markdown | docs/framework/wcf/feature-details/securing-messages-using-message-security.md | TomekLesniak/docs.pl-pl | 3373130e51ecb862641a40c5c38ef91af847fe04 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/framework/wcf/feature-details/securing-messages-using-message-security.md | TomekLesniak/docs.pl-pl | 3373130e51ecb862641a40c5c38ef91af847fe04 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/framework/wcf/feature-details/securing-messages-using-message-security.md | TomekLesniak/docs.pl-pl | 3373130e51ecb862641a40c5c38ef91af847fe04 | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: Securing messages using message security
ms.date: 03/30/2017
ms.assetid: a17ebe67-836b-4c52-9a81-2c3d58e225ee
ms.openlocfilehash: 6aae16b766889f402f774451338ae2cd30162437
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 11/26/2020
ms.locfileid: "96288606"
---
# <a name="securing-messages-using-message-security"></a>Securing Messages Using Message Security
This section discusses WCF message security when you are using the <xref:System.ServiceModel.NetMsmqBinding>.
> [!NOTE]
> Before reading through this topic, it is recommended that you read [Security Concepts](security-concepts.md).
The following illustration provides a conceptual model of queued communication using WCF. This illustration and its terminology are used to explain transport security considerations.

When sending queued messages using WCF, the WCF message is attached as the body of a Message Queuing (MSMQ) message. While transport security secures the entire MSMQ message, message (or SOAP) security secures only the body of the MSMQ message.
The key concept of message security is that the client secures the message for the receiving application (the service), as opposed to transport security, where the client secures the message for the target queue. The MSMQ service therefore plays no part in securing a WCF message when message security is used.
The WCF message security feature adds security headers to the WCF message, which integrate with existing security infrastructures such as a certificate or the Kerberos protocol.
## <a name="message-credential-type"></a>Message Credential Type
Using message security, the service and the client can present credentials to authenticate each other. You can choose message security by setting the <xref:System.ServiceModel.NetMsmqBinding.Security%2A> mode to `Message` or `Both` (which means that both transport security and message security are used).
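For illustration, a minimal sketch of configuring this in code might look as follows (the credential type shown is just one of the options discussed below):
```csharp
// Minimal sketch: choose message security on a NetMsmqBinding.
var binding = new NetMsmqBinding();
binding.Security.Mode = NetMsmqSecurityMode.Message;   // or NetMsmqSecurityMode.Both
binding.Security.Message.ClientCredentialType =
    MessageCredentialType.Certificate;                 // Windows, UserName, IssuedToken, ...
```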
The service can use the <xref:System.ServiceModel.ServiceSecurityContext.Current%2A> property to inspect the credential that was used to authenticate the client. It can also use it for any further authorization checks that the service chooses to implement.
This section explains the different credential types and how to use them with queues.
### <a name="certificate"></a>Certificate
The certificate credential type uses an X.509 certificate to identify the service and the client.
In a typical scenario, the client and the service are granted a valid certificate by a trusted certification authority. The connection is then established, and the client validates the service using the service's certificate to decide whether it can trust the service. Similarly, the service uses the client's certificate to validate the client's trustworthiness.
Because of the disconnected nature of queues, the client and the service may not be online at the same time. Therefore, the client and the service have to exchange certificates out of band. In particular, the client, by virtue of holding the service's certificate (which may chain to a certification authority) in its trusted store, must trust that it is communicating with the correct service. To authenticate the client, the service uses the X.509 certificate attached to the message and matches it against a certificate in its store to validate the authenticity of the client. Again, the certificate must chain to a certification authority.
On a Windows machine, certificates are kept in several kinds of stores. For more information about the different stores, see [Certificate Stores](/previous-versions/windows/it-pro/windows-server-2003/cc757138(v=ws.10)).
### <a name="windows"></a>Windows
The Windows message credential type uses the Kerberos protocol.
The Kerberos protocol is a security mechanism that authenticates users on a domain and allows authenticated users to establish secure contexts with other entities on the domain.
The problem with using the Kerberos protocol for queued communication is that the tickets carrying the client identity, which are distributed by the Key Distribution Center (KDC), are relatively short-lived. A *lifetime* is associated with each Kerberos ticket and indicates the validity of the ticket. Given high latencies, you therefore cannot be sure that the token is still valid by the time the service authenticates the client.
Note that when this credential type is used, the service must be running under a service account.
The Kerberos protocol is the default used when message credentials are chosen.
### <a name="username-password"></a>User Name Password
Using this property, the client can authenticate itself to the server with a user name and password in the message's security header.
### <a name="issuedtoken"></a>IssuedToken
The client can use a security token service to issue a token that can then be attached to the message so that the service can authenticate the client.
## <a name="using-transport-and-message-security"></a>Using Transport and Message Security
When using both transport security and message security, the certificate used to secure the message at both the transport level and the SOAP message level must be the same.
## <a name="see-also"></a>See also
- [Securing Messages Using Transport Security](securing-messages-using-transport-security.md)
- [Message Security over Message Queuing](../samples/message-security-over-message-queuing.md)
- [Security Concepts](security-concepts.md)
- [Securing Services and Clients](securing-services-and-clients.md)
| 78.316456 | 624 | 0.815419 | pol_Latn | 0.999983 |
7651bcd60d085759e60944605ff231df4073e4a8 | 8,482 | md | Markdown | docs/src/pages/quasar-cli/boot-files.md | adwinyang/quasar | 1af3cb2d998d4aac13e55f24284bf1a74e2226e4 | ["MIT"] | 1 | 2022-01-17T14:39:26.000Z | 2022-01-17T14:39:26.000Z | docs/src/pages/quasar-cli/boot-files.md | adwinyang/quasar | 1af3cb2d998d4aac13e55f24284bf1a74e2226e4 | ["MIT"] | null | null | null | docs/src/pages/quasar-cli/boot-files.md | adwinyang/quasar | 1af3cb2d998d4aac13e55f24284bf1a74e2226e4 | ["MIT"] | null | null | null |
---
title: Boot files
desc: Managing your startup code in a Quasar app.
related:
- /quasar-cli/quasar-conf-js
---
A common use case for Quasar applications is to **run code before the root Vue app instance is instantiated**, like injecting and initializing your own dependencies (examples: Vue components, libraries...) or simply configuring some startup code of your app.
Since you won't have access to any `/main.js` file (so that Quasar CLI can seamlessly initialize and build the same codebase for SPA/PWA/SSR/Cordova/Electron), Quasar provides an elegant solution to this problem by allowing users to define so-called boot files.
In earlier Quasar versions, to run code before the root Vue instance was instantiated, you could alter the `/src/main.js` file and add any code you needed to execute.
There is a major problem with this approach: as your project grows, your `main.js` file is very likely to become cluttered and hard to maintain, which goes against the Quasar philosophy of encouraging developers to write maintainable, elegant cross-platform applications.
With boot files, it is possible to split each of your dependencies into self-contained, easy-to-maintain files. Through the `quasar.conf.js` configuration it is also trivial to disable any of the boot files, or even decide contextually which boot files make it into the build.
## Anatomy of a boot file
A boot file is a simple JavaScript file which can optionally export a function. Quasar will then call the exported function when it boots the application and additionally pass an **Object** to the function with the following properties:
| Property name | Description |
| --- | --- |
| `app` | The Vue app instance |
| `router` | The Vue Router instance from 'src/router/index.js' |
| `store` | The instance of the app's Vuex store - **store is passed only if your project uses Vuex (you have src/store)** |
| `ssrContext` | Available on the server side only, when building for SSR. [More info](/quasar-cli/developing-ssr/ssr-context) |
| `urlPath` | The pathname (path + search) part of the URL. On the client side it also contains the hash. |
| `publicPath` | The configured public path. |
| `redirect` | A function to call to redirect to another URL. Accepts a String (full URL) or a Vue Router location String or Object. |
```js
export default ({ app, router, store }) => {
  // something to do
}
```
Boot files can also be async:
```js
export default async ({ app, router, store }) => {
  // something to do
  await something()
}
```
You can wrap the returned function with the `boot` helper to get a better IDE autocomplete experience (through Typescript):
```js
import { boot } from 'quasar/wrappers'
export default boot(async ({ app, router, store }) => {
  // something to do
  await something()
})
```
Notice that we are using [ES6 destructuring assignment](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment). Only assign what you actually need/use.
You may ask yourself why we need to export a function. This is actually optional, but before you decide to remove the default export, you need to understand when you would need it:
```js
// Outside of the default export:
// - Code here gets executed immediately.
// - A good place for import statements.
// - No access to the router, Vuex store, ...
export default async ({ app, router, store }) => {
  // Code here has access to the Object param above, connecting
  // with other parts of your app.
  // Code here can be async (use async/await or directly return a Promise).
  // Code here gets executed by Quasar CLI at the right time in the app's lifecycle:
  // - We have an instantiated Router.
  // - We have the (optional) Vuex store instantiated.
  // - We have the root app component ["app" prop in the Object param] with which
  //   Quasar is going to instantiate the Vue app
  //   ("new Vue(app)" -- do not call this yourself).
  // - ...
}
```
## When to use boot files
::: warning
Please make sure you understand what problem boot files solve and when it is appropriate to use them, to avoid applying them in cases where they are not needed.
:::
Boot files fulfill one special purpose: they run code **before the app's root Vue component is instantiated**, while giving you access to certain variables. This is required if you need to initialize a library, interfere with Vue Router, inject into the Vue prototype, or inject the root instance of the Vue app.
### Examples of appropriate usage of boot files
* Your Vue plugin has installation instructions, like needing to call `app.use()` on it.
* Your Vue plugin requires instantiation of data that is added to the root instance - an example is [vue-i18n](https://github.com/kazupon/vue-i18n/).
* You want to add a global mixin using `app.mixin()`.
* You want to add something to the Vue app's globalProperties for convenient access - an example is conveniently using `this.$axios` (for the Options API) in your Vue files instead of importing Axios in each file.
* You want to interfere with the router - an example is using `router.beforeEach` for authentication.
* You want to interfere with the Vuex store instance - an example is using the `vuex-router-sync` package.
* Configure aspects of libraries - an example is creating an instance of Axios with a base URL; you can then inject it into the Vue prototype and/or export it (so you can import the instance from anywhere else in your app).
### Examples of when not to use boot files
* For plain JavaScript libraries like Lodash, which don't need any initialization prior to their usage. Lodash, for example, only makes sense as a boot file if you want to use it to inject into the Vue prototype, e.g. so you can use `this.$_` in your Vue files.
## Usage of boot files
The first step is to generate a new boot file using Quasar CLI:
```bash
$ quasar new boot <name> [--format ts]
```
Where `<name>` should be replaced by a suitable name for your boot file.
This command creates a new file, `/src/boot/<name>.js`, with the following content:
```js
// import something here
// "async" is optional!
// remove it if you don't need it
export default async ({ /* app, router, store */ }) => {
  // something to do
}
```
You can also return a Promise:
```js
// import something here
export default ({ /* app, router, store */ }) => {
  return new Promise((resolve, reject) => {
    // do something
  })
}
```
::: tip
The default export can be left out of the boot file if you don't need it. These are the cases where you don't need access to the "app", "router", "store" and so on.
:::
You can now add content to that file depending on the intended use of your boot file.
> Do not forget that your default export needs to be a function.
> However, you can have as many named exports as you want, should the boot file expose something for later usage. In that case, you can import any of those named exports anywhere in your app.
The last step is to tell Quasar to use your new boot file. To do this you need to add the file in `/quasar.conf.js`:
```js
boot: [
  // references /src/boot/<name>.js
  '<name>'
]
```
When building a SSR app, you may want some boot files to run only on the server or only on the client, in which case you can do so like below:
```js
boot: [
  {
    server: false, // run on client-side only!
    path: '<name>' // references /src/boot/<name>.js
  },
  {
    client: false, // run on server-side only!
    path: '<name>' // references /src/boot/<name>.js
  }
]
```
If you want to specify boot files from node_modules, you can do so by prepending the path with `~` (tilde):
```js
boot: [
  // boot file from an npm package
  '~my-npm-package/some/file'
]
```
If you want a boot file to be injected into your app only for a specific build type:
```js
boot: [
  ctx.mode.electron ? 'some-file' : ''
]
```
### Redirecting to another page
::: warning
Please be mindful when redirecting, as you might configure the app to enter an infinite redirect loop.
:::
```js
export default ({ urlPath, redirect }) => {
  // ...
  const isAuthorized = // ...
  if (!isAuthorized && !urlPath.startsWith('/login')) {
    redirect({ path: '/login' })
    return
  }
  // ...
}
```
The `redirect()` method accepts a String (a full URL) or a Vue Router location String or Object. On SSR it can accept a second parameter, which should be a Number representing the HTTP STATUS code (a 3xx code) with which to redirect the browser.
```js
// Examples of redirect() with a Vue Router location:
redirect('/1') // Vue Router location as String
redirect({ path: '/1' }) // Vue Router location as Object
// Example of redirect() with a URL:
redirect('https://quasar.dev')
```
::: warning Important!
A Vue Router location (in String or Object form) does not refer to the URL path (and hash), but to the actual Vue Router routes that you have defined.
So **do not add the publicPath to it**, and if you are using Vue Router hash mode, do not add the hash to it either.
<br>Say we have this Vue Router route defined:<br><br>
```js
{
  path: '/one',
  component: PageOne
}
```
<br>Then, **regardless of our publicPath**, we can call `redirect()` like this:<br><br>
```js
// publicPath: /wiki; vueRouterMode: history
redirect('/one') // good way
redirect({ path: '/one' }) // good way
redirect('/wiki/one') // WRONG!
// publicPath: /wiki; vueRouterMode: hash
redirect('/one') // good way
redirect({ path: '/one' }) // good way
redirect('/wiki/#/one') // WRONG!
// no publicPath; vueRouterMode: hash
redirect('/one') // good way
redirect({ path: '/one' }) // good way
redirect('/#/one') // WRONG!
```
:::
As mentioned in earlier sections, the default export of a boot file can return a Promise. If this Promise gets rejected with an Object that contains a "url" property, then Quasar CLI will redirect the user to that URL:
```js
export default ({ urlPath }) => {
  return new Promise((resolve, reject) => {
    // ...
    const isAuthorized = // ...
    if (!isAuthorized && !urlPath.startsWith('/login')) {
      // the "url" param here is of the same type
      // as for "redirect" above
      reject({ url: '/login' })
      return
    }
    // ...
  })
}
```
Or a simpler equivalent:
```js
export default () => {
  // ...
  const isAuthorized = // ...
  if (!isAuthorized && !urlPath.startsWith('/login')) {
    return Promise.reject({ url: '/login' })
  }
  // ...
}
```
### Quasar App Flow
In order to better understand how a boot file works and what it does, you need to understand how your website/app boots:
1. Quasar is initialized (components, directives, plugins, Quasar i18n, Quasar icon sets)
2. Quasar Extras get imported (Roboto font -- if used, icons, animations, ...)
3. Quasar CSS and your app's global CSS are imported
4. App.vue is loaded (not yet used)
5. The store is imported (if using the Vuex store in src/store)
6. The router is imported (in src/router)
7. Boot files are imported
8. The router's default export function gets executed
9. Boot files get their default export function executed
10. (if on Electron mode) Electron is imported and injected into the Vue prototype
11. (if on Cordova mode) waiting for the "deviceready" event, and only then proceeding with the following steps
12. Instantiating Vue with the root component and attaching it to the DOM
## Examples of boot files
### Axios
```js
import { boot } from 'quasar/wrappers'
import axios from 'axios'
const api = axios.create({ baseURL: 'https://api.example.com' })
export default boot(({ app }) => {
  // for use inside Vue files (Options API) through this.$axios and this.$api
  app.config.globalProperties.$axios = axios
  // ^ ^ ^ this will allow you to use this.$axios (for the Vue Options API form),
  //       so you won't necessarily have to import axios in each vue file
  app.config.globalProperties.$api = api
  // ^ ^ ^ this will allow you to use this.$api (for the Vue Options API form),
  //       so you can easily perform requests against your app's API
})
export { axios, api }
```
### vue-i18n
```js
import { createI18n } from 'vue-i18n'
import messages from 'src/i18n'
export default ({ app }) => {
  // Create the I18n instance
  const i18n = createI18n({
    locale: 'en-US',
    messages
  })
  // Tell the app to use the I18n instance
  app.use(i18n)
}
```
### Router authentication
Some boot files might need to interfere with the Vue Router configuration:
```js
export default boot(({ router, store }) => {
  router.beforeEach((to, from, next) => {
    // Now you need to add your authentication logic here, like calling an API endpoint
  })
})
```
## Accessing data from boot files
Sometimes you want to access data that you configure in your boot file in files where you don't have access to the root Vue instance.
Fortunately, because boot files are just plain JavaScript files, you can add as many named exports to your boot file as you want.
Let's take Axios as an example. Sometimes you want to access your Axios instance inside your JavaScript files, but you cannot access the root Vue instance. To solve this you can export the Axios instance in your boot file and import it elsewhere.
Consider the following boot file for axios:
```js
// axios boot file (src/boot/axios.js)
import axios from 'axios'
// We create our own axios instance and set a custom base URL.
// Note that if we did not set any config here, we would not need
// a named export, as we could simply "import axios from 'axios'".
const api = axios.create({
  baseURL: 'https://api.example.com'
})
// for use inside Vue files through this.$axios and this.$api
// (only in the Vue Options API form)
export default ({ app }) => {
  app.config.globalProperties.$axios = axios
  app.config.globalProperties.$api = api
}
// Here we define a named export
// that we can later use inside .js files:
export { axios, api }
```
Inside any JavaScript file, you will be able to import the axios instance like this:
```js
// We import one of the named exports from src/boot/axios.js
import { api } from 'boot/axios'
```
Further reading on the syntax: [ES6 import](https://developer.mozilla.org/en-US/docs/web/javascript/reference/statements/import), [ES6 export](https://developer.mozilla.org/en-US/docs/web/javascript/reference/statements/export).
| 21.693095 | 210 | 0.690757 | yue_Hant | 0.632924 |
7651c7ca9add5a1ed3a111faf3ca10afc03fe80f | 3,641 | md | Markdown | docs/code-quality/ca2236.md | MicrosoftDocs/visualstudio-docs.ko-kr | 367344fed1f3d162b028af8a41a785a2137598e8 | ["CC-BY-4.0", "MIT"] | 13 | 2019-10-02T05:47:05.000Z | 2022-03-09T07:28:28.000Z | docs/code-quality/ca2236.md | MicrosoftDocs/visualstudio-docs.ko-kr | 367344fed1f3d162b028af8a41a785a2137598e8 | ["CC-BY-4.0", "MIT"] | 115 | 2018-01-17T01:43:25.000Z | 2021-02-01T07:27:06.000Z | docs/code-quality/ca2236.md | MicrosoftDocs/visualstudio-docs.ko-kr | 367344fed1f3d162b028af8a41a785a2137598e8 | ["CC-BY-4.0", "MIT"] | 33 | 2018-01-17T01:25:13.000Z | 2022-02-14T05:28:44.000Z |
---
title: 'CA2236: Call base class methods on ISerializable types'
description: A type derives from a type that implements ISerializable, and the type implements the serialization constructor but does not call the base type's serialization constructor, or the type implements GetObjectData but does not call the base type's GetObjectData method.
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- CA2236
- CallBaseClassMethodsOnISerializableTypes
helpviewer_keywords:
- CA2236
- CallBaseClassMethodsOnISerializableTypes
ms.assetid: 5a15b20d-769c-4640-b31a-36e07077daae
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.technology: vs-ide-code-analysis
dev_langs:
- CSharp
- VB
ms.workload:
- multiple
ms.openlocfilehash: 5dc2b6d17a680b0cec054b7a3d1003d4be031a6b
ms.sourcegitcommit: b12a38744db371d2894769ecf305585f9577792f
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 09/13/2021
ms.locfileid: "126603697"
---
# <a name="ca2236-call-base-class-methods-on-iserializable-types"></a>CA2236: Call base class methods on ISerializable types
|Item|Value|
|-|-|
|RuleId|CA2236|
|Category|Microsoft.Usage|
|Breaking change|Non-breaking|
## <a name="cause"></a>Cause
A type derives from a type that implements the <xref:System.Runtime.Serialization.ISerializable?displayProperty=fullName> interface, and one of the following conditions is true:
- The type implements the serialization constructor, that is, a constructor with the <xref:System.Runtime.Serialization.SerializationInfo?displayProperty=fullName>, <xref:System.Runtime.Serialization.StreamingContext?displayProperty=fullName> parameter signature, but does not call the serialization constructor of the base type.
- The type implements the <xref:System.Runtime.Serialization.ISerializable.GetObjectData%2A?displayProperty=fullName> method but does not call the <xref:System.Runtime.Serialization.ISerializable.GetObjectData%2A> method of the base type.
## <a name="rule-description"></a>Rule description
In a custom serialization process, a type implements the <xref:System.Runtime.Serialization.ISerializable.GetObjectData%2A> method to serialize its fields and the serialization constructor to deserialize the fields. If the type derives from a type that implements the <xref:System.Runtime.Serialization.ISerializable> interface, the base type's <xref:System.Runtime.Serialization.ISerializable.GetObjectData%2A> method and serialization constructor must be called in order to serialize/deserialize the fields of the base type. Otherwise, the type is not serialized and deserialized correctly. Note that if the derived type does not add any new fields, it does not need to implement the <xref:System.Runtime.Serialization.ISerializable.GetObjectData%2A> method or the serialization constructor, nor call the base type's equivalents.
## <a name="how-to-fix-violations"></a>How to fix violations
To fix a violation of this rule, call the base type's <xref:System.Runtime.Serialization.ISerializable.GetObjectData%2A> method or serialization constructor from the corresponding derived-type method or constructor.
## <a name="when-to-suppress-warnings"></a>When to suppress warnings
Do not suppress a warning from this rule.
## <a name="example"></a>Example
The following example shows a derived type that satisfies the rule by calling the serialization constructor and the <xref:System.Runtime.Serialization.ISerializable.GetObjectData%2A> method of the base class.
:::code language="vb" source="../snippets/visualbasic/VS_Snippets_CodeAnalysis/FxCop.Usage.CallBaseISerializable/vb/FxCop.Usage.CallBaseISerializable.vb" id="Snippet1":::
:::code language="csharp" source="../snippets/csharp/VS_Snippets_CodeAnalysis/FxCop.Usage.CallBaseISerializable/cs/FxCop.Usage.CallBaseISerializable.cs" id="Snippet1":::
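The referenced snippet is not inlined here; as a minimal sketch of the pattern (the `BaseType`/`DerivedType` names are hypothetical, and `BaseType` is assumed to implement ISerializable with a virtual GetObjectData):
```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class DerivedType : BaseType
{
    private readonly int _extraField;

    // Serialization constructor: forwards to the base type's constructor
    // so that the base class fields are deserialized as well.
    protected DerivedType(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
        _extraField = info.GetInt32("extraField");
    }

    public override void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        // Call the base implementation so that base class fields are serialized.
        base.GetObjectData(info, context);
        info.AddValue("extraField", _extraField);
    }
}
```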
## <a name="related-rules"></a>Related rules
[CA2240: Implement ISerializable correctly](../code-quality/ca2240.md)
[CA2229: Implement serialization constructors](/dotnet/fundamentals/code-analysis/quality-rules/ca2229)
[CA2238: Implement serialization methods correctly](../code-quality/ca2238.md)
[CA2235: Mark all non-serializable fields](/dotnet/fundamentals/code-analysis/quality-rules/ca2235)
[CA2237: Mark ISerializable types with SerializableAttribute](/dotnet/fundamentals/code-analysis/quality-rules/ca2237)
[CA2239: Provide deserialization methods for optional fields](../code-quality/ca2239.md)
[CA2120: Secure serialization constructors](../code-quality/ca2120.md)
| 49.876712 | 566 | 0.795661 | kor_Hang | 0.998224 |
76521282844aa0354c5ff7e06ad4c14444439c0a | 1,235 | md | Markdown | TypeScript/README.md | makejun168/Front-end-Architecture | 89411d529897880a48889fb4b2eacd6fcf5a3818 | ["MIT"] | null | null | null | TypeScript/README.md | makejun168/Front-end-Architecture | 89411d529897880a48889fb4b2eacd6fcf5a3818 | ["MIT"] | 3 | 2020-07-19T14:47:09.000Z | 2021-09-18T10:56:15.000Z | TypeScript/README.md | makejun168/Front-end-Architecture | 89411d529897880a48889fb4b2eacd6fcf5a3818 | ["MIT"] | null | null | null |
# TypeScript
1. Install TypeScript:
```shell
npm install typescript -g
```
2. Initialize a TypeScript project:
```shell
tsc --init
```
## strictNullChecks
1. The two "empty" types, undefined and null, are awkward to work with by design.
2. Turning on strict checking via strictNullChecks makes the code safer.
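As a small illustration (added here, not from the original README), with `strictNullChecks` enabled the compiler rejects using a possibly-null value without a check:
```ts
// With "strictNullChecks": true in tsconfig.json
function len(s: string | null): number {
  // return s.length;               // error: 's' is possibly 'null'
  return s === null ? 0 : s.length; // narrowing satisfies the compiler
}
```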
## moduleResolution
1. This controls the module resolution strategy the TypeScript compiler uses during compilation.
2. There are two modes to choose from: node and classic.
3. In node mode, modules are looked up inside node_modules.
4. In classic mode, the compiler looks in the same directory first; if the module is not found there, it keeps looking in the parent directories all the way up to the root.
5. classic mode is generally only used in legacy projects.
## JSX settings
1. The jsx option has three possible values: 'preserve', 'react-native', or 'react'.
2. preserve emits files with a .jsx extension, react emits .js files, and react-native also emits .js files.
3. react emits output such as React.createElement('div').
## esModuleInterop
1. esModuleInterop deals with module.exports-style exports: in TS syntax such a module would otherwise have to be accessed through module.exports.default, i.e. with a default property appended after the config.
2. When enabled, if the import requires a default property, it is added automatically.
3. It checks whether the imported module is an ES module: if it is, the default property is added; if not, it is left alone.
Reference code:
```js
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
```
## noImplicitAny
1. true: implicit assignments of the any type are not allowed and will raise an error.
2. false: implicit conversions to the any type are allowed in the code.
## target
1. The ECMAScript target version of the emitted output; for example, es6 means the output uses ES6 syntax.
2. Possible values: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'.
3. The differences between ES5 and ES2015 are large; you can see the details by compiling with each of the two versions.
4. Setting it to ES5 is usually fine.
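Pulling the options above together, a minimal `tsconfig.json` sketch (values chosen for illustration, matching the recommendations in this README) might be:
```json
{
  "compilerOptions": {
    "target": "ES5",
    "moduleResolution": "node",
    "strictNullChecks": true,
    "noImplicitAny": true,
    "esModuleInterop": true,
    "jsx": "react"
  }
}
```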
| 25.729167 | 99 | 0.719838 | yue_Hant | 0.624765 |
765228586ac4c61d4600924b56761f987d85248f | 522 | md | Markdown | README.md | step63r/SocialTimer | 963e6b51d2096e6d7004a859e248f2a13eeed11a | ["MIT"] | null | null | null | README.md | step63r/SocialTimer | 963e6b51d2096e6d7004a859e248f2a13eeed11a | ["MIT"] | null | null | null | README.md | step63r/SocialTimer | 963e6b51d2096e6d7004a859e248f2a13eeed11a | ["MIT"] | null | null | null |
# SocialTimer
## Description
Simple timer suitable for social game.
## Requirement
- Windows 10 Pro (64-bit)
- .NET Framework 4.5
## Usage
Build this with Visual Studio 2019 and run.
## Install
Fork and clone this repository.
```
$ git clone https://github.com/yourname/SocialTimer.git
```
## Contribution
1. Fork this repository
2. Create your feature branch
3. Commit your changes
4. Push to the branch
5. Create new Pull Request
## License
MIT License
## Author
[minato](https://blog.minatoproject.com/)
| 13.384615 | 55 | 0.722222 | eng_Latn | 0.766678 |