hexsha (stringlengths 40-40) | size (int64 5-1.04M) | ext (stringclasses 6 values) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 3-344) | max_stars_repo_name (stringlengths 5-125) | max_stars_repo_head_hexsha (stringlengths 40-78) | max_stars_repo_licenses (sequencelengths 1-11) | max_stars_count (int64 1-368k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 3-344) | max_issues_repo_name (stringlengths 5-125) | max_issues_repo_head_hexsha (stringlengths 40-78) | max_issues_repo_licenses (sequencelengths 1-11) | max_issues_count (int64 1-116k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 3-344) | max_forks_repo_name (stringlengths 5-125) | max_forks_repo_head_hexsha (stringlengths 40-78) | max_forks_repo_licenses (sequencelengths 1-11) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | content (stringlengths 5-1.04M) | avg_line_length (float64 1.14-851k) | max_line_length (int64 1-1.03M) | alphanum_fraction (float64 0-1) | lid (stringclasses 191 values) | lid_prob (float64 0.01-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
760db28416e4a4988c9cbeeaa68f6a8a5c76ace2 | 2,067 | markdown | Markdown | _posts/2020-07-19-smooth_seas_do_not_make_skillful_sailors_-_african_proverb.markdown | kowalix-pl/kowalix-pl.github.io | 3c12ba0cd36aa2d4193db890fd75d51369a5f20f | ["MIT"] | null | null | null | _posts/2020-07-19-smooth_seas_do_not_make_skillful_sailors_-_african_proverb.markdown | kowalix-pl/kowalix-pl.github.io | 3c12ba0cd36aa2d4193db890fd75d51369a5f20f | ["MIT"] | null | null | null | _posts/2020-07-19-smooth_seas_do_not_make_skillful_sailors_-_african_proverb.markdown | kowalix-pl/kowalix-pl.github.io | 3c12ba0cd36aa2d4193db890fd75d51369a5f20f | ["MIT"]
] | null | null | null | ---
layout: post
title: "*Smooth seas do not make skillful sailors - African Proverb*"
date: 2020-07-19 08:59:57 +0000
permalink: smooth_seas_do_not_make_skillful_sailors_-_african_proverb
---
Learning a new foreign language is a daunting task even for those who consider themselves well prepared for it, but learning a new coding language from scratch is a totally different ball game. There are new alien words to learn, language syntaxes out of this world, and methods, iterations, and loops that can make your head spin. In the beginning everything seems simple, but as our program develops and we get introduced to ever more advanced topics, we may find ourselves in front of the proverbial wall. During those dark moments, we cannot lose sight of our ultimate goal; we have to push forward and look for new explanations of the topic, online and offline, so that we can finally get a good grasp of the subject and make it to the finish line of the course.
There are a few daily rituals of mine that I would like to share with you that helped me on this challenging, yet rewarding journey of learning to code in the FLS program and moving closer to my lifelong dream of becoming a full-stack software engineer:
* Dedicate a set amount of time to your study every day.
* Prepare your learning space: make sure that there is no clutter on your desk and that there are no other distractions that could negatively impact your daily learning goals.
* Use the Pomodoro Technique to schedule and manage breaks.
* Take good notes (I recommend mind-mapping here), and break down complex ideas into small components.
* Keep track of your progress in a diary, and make notes of the coding solutions that were difficult to get a grasp on.
* Do not be afraid to utilize the AAQ feature that FLS provides for you, and ask questions of your cohort via Slack. This community rocks!
* Engage in some sport and meditate daily in order to keep your blood pressure low.
* And last, but not least, have fun doing it :)
See you all on the other side, future full-stack developers.
| 76.555556 | 775 | 0.785196 | eng_Latn | 0.999801 |
760e2b6d58eac99d6fa85c55433e0113e988bf77 | 3,256 | md | Markdown | v2/clickstream/destinations/lytics.md | mbest1813/documentation | 3dc43e5d2fc17bdafbcd3227e30995c19afb4fc8 | ["Apache-2.0"] | null | null | null | v2/clickstream/destinations/lytics.md | mbest1813/documentation | 3dc43e5d2fc17bdafbcd3227e30995c19afb4fc8 | ["Apache-2.0"] | null | null | null | v2/clickstream/destinations/lytics.md | mbest1813/documentation | 3dc43e5d2fc17bdafbcd3227e30995c19afb4fc8 | ["Apache-2.0"] | null | null | null |
---
title: Lytics
sidebar: platform_sidebar
---
MetaRouter makes it easy to send your data to Lytics. Once you follow the steps below, your data will be routed through our platform and pushed to Lytics in the appropriate format.
## What is Lytics and how does it work?
Lytics is a customer data platform that helps brands orchestrate relevant marketing with built-in data science. Their customer platform makes it easy to build user profiles, build cross-channel campaigns, and predict behaviors with built-in machine learning.
Lytics crawls website text and images, automating digital catalogue and scoring of web content so that marketers can monitor and identify the most popular content and measure the likelihood of user return.
Lytics also enables drag-and-drop segmentation, which allows teams to target customers based on channel, revenue, growth, or at-risk status. Plus, Lytics independently integrates with more than 65 popular platforms like Twitter, AdRoll, and Google Ads.
***Notes:***
- Lytics collects its user data (name, email address) from a JavaScript library that you'd need to load on all pages of your site. This will power an in-app messaging feature and ensure you're not asking already-known users to opt in.
- All data is collected using the Lytics API, which will require a developer to learn and implement its methods and data structure.
[Learn more about Lytics](https://www.getlytics.com/)
## Why send data to Lytics using MetaRouter?
Integrating Lytics with MetaRouter means that you won't have to write any custom code on top of your standard MetaRouter integration. Enabling Lytics in your MetaRouter UI automatically loads the JavaScript library onto your site without making code changes.
MetaRouter automatically maps `page`, `identify`, and `track` calls directly to Lytics, which then uses that customer data to power its implementations.
***Note:** With MetaRouter you can also easily push data from mobile apps or servers to Lytics.*
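To make that mapping concrete, here is a minimal stand-in for an analytics.js-style client showing the shape of the `page`, `identify`, and `track` calls that get forwarded. The object, field names, and values below are illustrative assumptions, not MetaRouter's actual client API:

```javascript
// Minimal queueing stub that mimics the three call types MetaRouter maps to Lytics.
const analytics = {
  queue: [],
  page(name, props = {}) { this.queue.push({ type: "page", name, props }); },
  identify(userId, traits = {}) { this.queue.push({ type: "identify", userId, traits }); },
  track(event, props = {}) { this.queue.push({ type: "track", event, props }); },
};

// Typical calls made from a page on your site:
analytics.page("Pricing");
analytics.identify("user-123", { email: "jane@example.com" }); // powers Lytics user profiles
analytics.track("Plan Selected", { plan: "growth" });

console.log(analytics.queue.length); // 3
```

In a real deployment the library loaded by MetaRouter batches and delivers these events; the point here is only the call shapes and their payloads.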
## Getting Started with Lytics and MetaRouter
### Lytics Side
To get started sending events to Lytics, you'll need your:
- Data API Key
- Account ID
Once you have a Lytics account, you'll be dropped into a welcome page asking you to connect a few integrations.

MetaRouter won't show up on that page, but go ahead and click on your email address at the top right, and into the `Manage Accounts` option.
Once there, you'll see your ID on the left hand side of your account and your Data API Key on the right.

### MetaRouter Side
Add your Account ID and Data API Key into the Lytics connector on your MetaRouter dashboard and give your new connection a unique name.
Now, just click `Save` to activate your pipeline.

Lytics updates their incoming data stream every two minutes, so give it a couple of minutes for your events to show up.
See the image below - you can check back in on the status of your project by going to the `Data` tab at the top.

### Additional Features
* `Stream` - Allows you to organize your data in Lytics. This is only necessary when you are tracking multiple websites | 49.333333 | 258 | 0.77457 | eng_Latn | 0.998209 |
760e82b87a398195dd2ab6d7a1e745637d101882 | 3,358 | md | Markdown | PoseTrack/README.md | singhnarotam1997/Detectron | ecc6b25fc8869486126f1384b4e6e042a718bd5b | ["Apache-2.0"] | 60 | 2021-08-07T09:16:52.000Z | 2022-03-14T09:09:00.000Z | PoseTrack/README.md | singhnarotam1997/Detectron | ecc6b25fc8869486126f1384b4e6e042a718bd5b | ["Apache-2.0"] | 4 | 2021-10-14T02:44:49.000Z | 2022-03-14T08:18:20.000Z | PoseTrack/README.md | singhnarotam1997/Detectron | ecc6b25fc8869486126f1384b4e6e042a718bd5b | ["Apache-2.0"] | 11 | 2021-11-01T00:30:37.000Z | 2021-12-08T10:01:52.000Z |
# DensePose-PoseTrack
We introduce the DensePose-Posetrack dataset, which consists of videos of multiple persons containing rapid motions, occlusions and scale variations which leads to a very challenging correspondence task. DensePose-PoseTrack will be a part of the [ECCV 2018 - POSETRACK CHALLENGE](https://posetrack.net/workshops/eccv2018/).
<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1fed2Xvy2G6t4V_ICsEJIm-PaJ8o-e0Ws" width="700px" />
</div>
Please first follow [INSTALL.md](https://github.com/facebookresearch/DensePose/blob/master/INSTALL.md) and [GETTING_STARTED.md](https://github.com/facebookresearch/DensePose/blob/master/GETTING_STARTED.md) to install and run DensePose inference and training. Herein, we provide instructions to download and evaluate on the DensePose-PoseTrack dataset.
### Fetch DensePose-PoseTrack dataset
To download the images of the original PoseTrack dataset, please refer to the posetrack webpage: https://posetrack.net. Note that we have used the keypoints provided in the PoseTrack dataset to form the DensePose-PoseTrack dataset. Our dense correspondence annotations are distributed under [NonCommercial Creative Commons](https://creativecommons.org/licenses/by-nc/2.0/) license.
To download, run:
```
cd $DENSEPOSE/PoseTrack
bash get_DensePose_PoseTrack.sh
```
This script downloads *.json files that contain all annotations, along with files that contain only the annotations for images with DensePose annotations. The latter are used during evaluation.
Visualization of the DensePose-PoseTrack annotations is demonstrated in the [DensePose-PoseTrack-Visualize.ipynb](https://github.com/facebookresearch/DensePose/blob/master/PoseTrack/DensePose-PoseTrack-Visualize.ipynb):
<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1jUNl07Rw_Y7IRvZimaChfQPDIDkWqxzc" width="600px" />
</div>
## Setting-up the PoseTrack dataset.
Create a symlink for the PoseTrack dataset in your `datasets/data` folder.
```
ln -s /path/to/posetrack $DENSEPOSE/detectron/datasets/data/posetrack
```
Create symlinks for the DensePose-PoseTrack annotations
```
ln -s $DENSEPOSE/PoseTrack/DensePose_PoseTrack/densepose_only_posetrack_train2017.json $DENSEPOSE/detectron/datasets/data/posetrack/
ln -s $DENSEPOSE/PoseTrack/DensePose_PoseTrack/densepose_only_posetrack_val2017.json $DENSEPOSE/detectron/datasets/data/posetrack/
ln -s $DENSEPOSE/PoseTrack/DensePose_PoseTrack/densepose_posetrack_test2017.json $DENSEPOSE/detectron/datasets/data/posetrack/
```
Your local PoseTrack dataset copy at `/path/to/posetrack` should have the following directory structure:
```
posetrack
|_ images
| |_ <im-folder-1>
| |_ ...
| |_ <im-folder-N>.
|_ densepose_only_posetrack_train2017.json
|_ densepose_only_posetrack_val2017.json
|_ densepose_posetrack_test2017.json
```
### Evaluation on DensePose-PoseTrack dataset
To demonstrate the evaluation, we use a DensePose-RCNN with a ResNet-50 trunk that is trained on the DensePose-COCO dataset.
```
cd $DENSEPOSE
python2 tools/test_net.py \
--cfg PoseTrack/configs/DensePose_ResNet50_FPN_s1x-e2e.yaml \
TEST.WEIGHTS https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet50_FPN_s1x-e2e.pkl \
NUM_GPUS 1
```
The evaluation of this baseline network should yield `Bounding Box AP: 0.4438` and `DensePose AP: 0.2698`.
| 51.661538 | 381 | 0.801668 | eng_Latn | 0.419624 |
760e8711714e5e3343a07fd1be2f4c4a6827acd9 | 4,713 | md | Markdown | docs/t-sql/database-console-commands/dbcc-updateusage-transact-sql.md | fanck0605/sql-docs.zh-cn | 17c5cf325ad19a067fe159c4911ff47de810bb8a | ["CC-BY-4.0", "MIT"] | null | null | null | docs/t-sql/database-console-commands/dbcc-updateusage-transact-sql.md | fanck0605/sql-docs.zh-cn | 17c5cf325ad19a067fe159c4911ff47de810bb8a | ["CC-BY-4.0", "MIT"] | null | null | null | docs/t-sql/database-console-commands/dbcc-updateusage-transact-sql.md | fanck0605/sql-docs.zh-cn | 17c5cf325ad19a067fe159c4911ff47de810bb8a | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: DBCC UPDATEUSAGE (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 11/14/2017
ms.prod: sql
ms.prod_service: sql-database
ms.reviewer: ''
ms.technology: t-sql
ms.topic: language-reference
f1_keywords:
- UPDATEUSAGE
- UPDATEUSAGE_TSQL
- DBCC_UPDATEUSAGE_TSQL
- DBCC UPDATEUSAGE
dev_langs:
- TSQL
helpviewer_keywords:
- inaccurate page or row counts [SQL Server]
- space [SQL Server], usage reports
- updating space usage information
- updating row counts
- disk space [SQL Server], inaccurate counts
- counting pages
- reporting count inaccuracies
- updating page counts
- synchronization [SQL Server], inaccurate counts
- incorrect space usage reports [SQL Server]
- DBCC UPDATEUSAGE statement
- integrity [SQL Server], database objects
- counting rows
- row count accuracy [SQL Server]
- page count accuracy [SQL Server]
ms.assetid: b8752ecc-db45-4e23-aee7-13b8bc3cbae2
author: pmasl
ms.author: umajay
ms.openlocfilehash: 7d983f2e7e370ec9fe385e6d46602c4703ca6d1e
ms.sourcegitcommit: 58158eda0aa0d7f87f9d958ae349a14c0ba8a209
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 03/30/2020
ms.locfileid: "68040461"
---
# <a name="dbcc-updateusage-transact-sql"></a>DBCC UPDATEUSAGE (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-asdb-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-asdb-xxxx-xxx-md.md)]
Reports and corrects pages and row count inaccuracies in the catalog views. These inaccuracies may cause incorrect space usage reports to be returned by the sp_spaceused system stored procedure.
[Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Syntax
```sql
DBCC UPDATEUSAGE
( { database_name | database_id | 0 }
[ , { table_name | table_id | view_name | view_id }
[ , { index_name | index_id } ] ]
) [ WITH [ NO_INFOMSGS ] [ , ] [ COUNT_ROWS ] ]
```
## <a name="arguments"></a>Arguments
database_name | database_id | 0
Is the name or ID of the database for which to report and correct space usage statistics. If 0 is specified, the current database is used. Database names must comply with the rules for [identifiers](../../relational-databases/databases/database-identifiers.md).
*table_name* | *table_id* | *view_name* | *view_id*
Is the name or ID of the table or indexed view for which to report and correct space usage statistics. Table and view names must comply with the rules for identifiers.
*index_id* | *index_name*
Is the ID or name of the index to use. If not specified, the statement processes all indexes for the specified table or view.
WITH
Allows options to be specified.
NO_INFOMSGS
Suppresses all informational messages.
COUNT_ROWS
Specifies that the row count column is updated with the current count of the number of rows in the table or view.
## <a name="remarks"></a>Remarks
DBCC UPDATEUSAGE corrects the rows, used pages, reserved pages, leaf pages, and data page counts for each partition in a table or index. If there are no inaccuracies in the system tables, DBCC UPDATEUSAGE returns no data. If inaccuracies are found and corrected, and WITH NO_INFOMSGS is not used, DBCC UPDATEUSAGE returns the rows and columns being updated in the system tables.
DBCC CHECKDB has been enhanced to detect when page or row counts become negative. When this condition is detected, the DBCC CHECKDB output contains a warning and a recommendation to run DBCC UPDATEUSAGE to address the issue.
## <a name="best-practices"></a>Best practices
Consider the following recommendations:
- Do not run DBCC UPDATEUSAGE routinely. Because DBCC UPDATEUSAGE can take some time to run on large tables or databases, it should not be used unless you suspect that sp_spaceused is returning incorrect values.
- Consider running DBCC UPDATEUSAGE routinely (for example, weekly) only if the database undergoes frequent Data Definition Language (DDL) modifications, such as CREATE, ALTER, or DROP statements.
## <a name="result-sets"></a>Result sets
DBCC UPDATEUSAGE returns (values may vary):
`DBCC execution completed. If DBCC printed error messages, contact your system administrator.`
## <a name="permissions"></a>Permissions
Requires membership in the **sysadmin** fixed server role or the **db_owner** fixed database role.
## <a name="examples"></a>Examples
### <a name="a-updating-page-or-row-counts-or-both-for-all-objects-in-the-current-database"></a>A. Updating page or row counts, or both, for all objects in the current database
The following example specifies `0` for the database name, and `DBCC UPDATEUSAGE` reports updated page or row count information for the current database.
```sql
DBCC UPDATEUSAGE (0);
GO
```
### <a name="b-updating-page-or-row-counts-or-both-for-adventureworks-and-suppressing-informational-messages"></a>B. Updating page or row counts, or both, for AdventureWorks, and suppressing informational messages
The following example specifies [!INCLUDE[ssSampleDBobject](../../includes/sssampledbobject-md.md)] as the database name and suppresses all informational messages.
```sql
DBCC UPDATEUSAGE (AdventureWorks2012) WITH NO_INFOMSGS;
GO
```
### <a name="c-updating-page-or-row-counts-or-both-for-the-employee-table"></a>C. Updating page or row counts, or both, for the Employee table
The following example reports updated page or row count information for the `Employee` table in the [!INCLUDE[ssSampleDBobject](../../includes/sssampledbobject-md.md)] database.
```sql
DBCC UPDATEUSAGE (AdventureWorks2012,'HumanResources.Employee');
GO
```
### <a name="d-updating-page-or-row-counts-or-both-for-a-specific-index-in-a-table"></a>D. Updating page or row counts, or both, for a specific index in a table
The following example specifies `IX_Employee_OrganizationLevel_OrganizationNode` as the index name.
```sql
DBCC UPDATEUSAGE (AdventureWorks2012, 'HumanResources.Employee', IX_Employee_OrganizationLevel_OrganizationNode);
GO
```
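Because the usual symptom of stale counts is `sp_spaceused` reporting incorrect figures, you can bracket the correction with two `sp_spaceused` calls to see its effect. This is a sketch against the same sample table used above:

```sql
-- Space usage as currently recorded (may be stale).
EXEC sp_spaceused @objname = N'HumanResources.Employee';
GO
-- Correct the page counts and refresh the row count as well.
DBCC UPDATEUSAGE (AdventureWorks2012, 'HumanResources.Employee') WITH COUNT_ROWS;
GO
-- Space usage after the correction.
EXEC sp_spaceused @objname = N'HumanResources.Employee';
GO
```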
## <a name="see-also"></a>See also
[DBCC (Transact-SQL)](../../t-sql/database-console-commands/dbcc-transact-sql.md)
[sp_spaceused (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-spaceused-transact-sql.md)
[UPDATE STATISTICS (Transact-SQL)](../../t-sql/statements/update-statistics-transact-sql.md)
| 34.40146 | 181 | 0.736474 | yue_Hant | 0.857096 |
760e8908df2be0e4c38945f00305b80891090cb7 | 15,097 | md | Markdown | WindowsServerDocs/identity/ad-ds/manage/AD-DS-Simplified-Administration.md | MasahikoSada/windowsserverdocs.ja-jp | afaa88d24841b699fdbf6955048b00797d063d8d | ["CC-BY-4.0", "MIT"] | null | null | null | WindowsServerDocs/identity/ad-ds/manage/AD-DS-Simplified-Administration.md | MasahikoSada/windowsserverdocs.ja-jp | afaa88d24841b699fdbf6955048b00797d063d8d | ["CC-BY-4.0", "MIT"] | null | null | null | WindowsServerDocs/identity/ad-ds/manage/AD-DS-Simplified-Administration.md | MasahikoSada/windowsserverdocs.ja-jp | afaa88d24841b699fdbf6955048b00797d063d8d | ["CC-BY-4.0", "MIT"] | null | null | null |
---
ms.assetid: f74eec9a-2485-4ee0-a0d8-cce01250a294
title: AD DS の簡略化された管理
description: ''
ms.author: joflore
author: MicrosoftGuyJFlo
manager: mtillman
ms.date: 08/09/2018
ms.topic: article
ms.prod: windows-server-threshold
ms.technology: identity-adds
ms.openlocfilehash: 863e5352253d53941e64b52d1ca58d565a3aa8b1
ms.sourcegitcommit: 0d0b32c8986ba7db9536e0b8648d4ddf9b03e452
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/17/2019
ms.locfileid: "59890593"
---
# <a name="ad-ds-simplified-administration"></a>AD DS Simplified Administration
>Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
This topic explains the capabilities and benefits of Windows Server 2012 domain controller deployment and administration, and the differences between DC deployment on previous operating systems and the new Windows Server 2012 implementation.
Windows Server 2012 introduced the next generation of Active Directory Domain Services Simplified Administration, the most radical re-imagining of the domain since Windows 2000 Server. AD DS Simplified Administration takes lessons learned from twelve years of Active Directory and makes a more supportable, more flexible, more intuitive administrative experience for architects and administrators. This meant creating new versions of existing technologies, as well as extending the capabilities of components released in Windows Server 2008 R2.
AD DS Simplified Administration is a re-imagining of domain deployment:
- AD DS role deployment is now part of the new Server Manager architecture and allows remote installation.
- The AD DS deployment and configuration engine is now Windows PowerShell, even when using the new AD DS Configuration Wizard.
- Schema extension, forest preparation, and domain preparation are automatically part of domain controller promotion and no longer require separate tasks on special servers, such as the schema master.
- Promotion now includes prerequisite checking that validates forest and domain readiness for the new domain controller, lowering the chance of failed promotions.
- The Active Directory module for Windows PowerShell now includes cmdlets for replication topology management, Dynamic Access Control, and other operations.
- The Windows Server 2012 forest functional level does not implement new features, and the domain functional level is required only for a subset of new Kerberos features, relieving administrators of the frequent need for a homogeneous domain controller environment.
- Full support for virtualized domain controllers has been added, including automated deployment and rollback protection.
- For more information about virtualized domain controllers, see [Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100)](../../ad-ds/Introduction-to-Active-Directory-Domain-Services-AD-DS-Virtualization-Level-100.md).
- Active Directory 管理センターには、GUI による Active Directory のごみ箱、細かい設定が可能なパスワード ポリシー管理、および Windows PowerShell 履歴ビューアーが含まれます。
- 新しいサーバー マネージャーには、パフォーマンス監視、ベスト プラクティス分析、重要サービス、およびイベント ログのための AD DS 固有のインターフェイスがあります。
- グループ管理サービス アカウントは、同じセキュリティ プリンシパルを使っている複数のコンピューターをサポートします。
- 稼働期間の長い Active Directory ドメインの管理状態を向上させるため、相対識別子 (RID) の発行と監視に関して機能を強化しました。
AD DS など、Windows Server 2012 に含まれる他の新機能から利益を上げます。
- NIC チーミングおよびデータ センター ブリッジング
- DNS セキュリティ、および起動後に AD 統合ゾーンを使用できるまでの時間の短縮化
- Hyper-V の信頼性とスケーラビリティの向上
- BitLocker ネットワーク ロック解除
- その他の Windows PowerShell コンポーネント管理モジュール
## <a name="adprep-integration"></a>ADPREP の統合
Active Directory フォレストのスキーマ拡張とドメインの準備は、ドメイン コントローラー構成プロセスに統合されるようになりました。 新しいドメイン コントローラーを既存のフォレスト内に昇格する場合は、構成プロセスでアップグレードの状態が検出され、スキーマ拡張とドメインの準備のフェーズが同時に行われます。 最初の Windows Server 2012 ドメイン コントローラーをインストールするユーザーは、Enterprise Admins グループおよび Schema Admins グループのメンバーであるか、有効な代替資格情報を提供する必要があります。
フォレストとドメインの別々の準備のために、Adprep.exe は DVD に収録されています。 Windows Server 2012 に付属するツールのバージョンは、Windows Server 2008 x64 および Windows Server 2008 R2 と下位互換性があります。 Adprep.exe は、ADDSDeployment ベースのドメイン コントローラー構成ツールと同様に、リモートでの forestprep と domainprep もサポートしています。
Adprep および以前のオペレーティング システムでのフォレストの準備については、「 [Adprep の実行 (Windows Server 2008 R2)](https://technet.microsoft.com/library/dd464018(WS.10).aspx)」を参照してください。
## <a name="server-manager-ad-ds-integration"></a>サーバー マネージャー AD DS の統合

サーバー マネージャーは、サーバー管理タスクのハブとして機能します。 そのダッシュボードスタイルのパネルでは、インストールされている役割とリモート サーバー グループのビューが定期的に更新されます。 サーバー マネージャーでは、コンソールにアクセスしなくても、ローカル サーバーとリモート サーバーの集中管理が可能です。
Active Directory Domain Services は、それらのハブ ロール; の 1 つです。ドメイン コント ローラーまたは Windows 8 のリモート サーバー管理ツールでサーバー マネージャーを実行して、フォレスト内のドメイン コント ローラーで重要な最新の問題が発生します。
次のようなビューがあります。
- サーバーの可用性
- CPU とメモリの高い使用率に関するパフォーマンス モニターの警告
- AD DS に固有の Windows サービスの状態
- ディレクトリ サービス関連の最新の警告とイベント ログ内のエラー エントリ
- Microsoft が推奨する一連の規則に対するドメイン コントローラーのベスト プラクティス分析
## <a name="active-directory-administrative-center-recycle-bin"></a>Active Directory 管理センターのごみ箱

Windows Server 2008 R2 では Active Directory のごみ箱が導入されました。このごみ箱は、バックアップからの復元、AD DS サービスの再開、またはドメイン コントローラーの再起動を行わずに、削除された Active Directory オブジェクトを回復します。
Windows Server 2012 では、Active Directory 管理センターの新しいグラフィカル インターフェイスを使って、Windows PowerShell ベースの既存の復元機能を強化しています。 これにより管理者は、ごみ箱を有効にして、削除されたオブジェクトをフォレストのドメイン コンテキスト内で特定または復元できます。これらすべての操作を Windows PowerShell コマンドレットを直接実行せずに行うことができます。 Active Directory 管理センターと Active Directory のごみ箱はバックグラウンドで Windows PowerShell を使用します。したがって、以前のスクリプトや手順は今も有効です。
Active Directory のごみ箱については、「 [Active Directory のごみ箱の手順ガイド (Windows Server 2008 R2)](https://technet.microsoft.com/library/dd392261(WS.10).aspx)」を参照してください。
## <a name="active-directory-administrative-center-fine-grained-password-policy"></a>Active Directory 管理センターの細かい設定が可能なパスワード ポリシー

Windows Server 2008 では、細かい設定が可能なパスワード ポリシー (FGPP) が導入されました。管理者は、ドメインごとに複数のパスワード ポリシーとアカウント ロックアウト ポリシーを構成できます。 この機能によってドメインは、ユーザーとグループに基づいてパスワード規則の制限を強めたり弱めたりする柔軟性のあるソリューションになることができます。 管理用のインターフェイスはなく、管理者が Ldp.exe または Adsiedit.msc を使って構成する必要がありました。 Windows Server 2008 R2 では、Windows PowerShell の Active Directory モジュールが導入され、管理者は細かい設定が可能なパスワード ポリシー (FGPP) のためのコマンドライン インターフェイスを利用できるようになりました。
Windows Server 2012 では、細かい設定が可能なパスワード ポリシーのためのグラフィカル インターフェイスが用意されています。 Active Directory 管理センターは、この新しいダイアログのホームであり、すべての管理者が簡素化された FGPP 管理を利用できます。
細かい設定が可能なパスワード ポリシーの詳細については、「 [ステップ バイ ステップ ガイド - 細かい設定が可能なパスワードおよびアカウント ロックアウトのポリシー設定 (Windows Server 2008 R2)](https://technet.microsoft.com/library/cc770842(WS.10).aspx)」を参照してください。
## <a name="active-directory-administrative-center-windows-powershell-history-viewer"></a>Active Directory 管理センターの Windows PowerShell 履歴ビューアー

Windows Server 2008 R2 では、Active Directory 管理センターが導入されました。これは、Windows 2000 で作成された、古い Active Directory ユーザーとコンピューター スナップインに代わるものでした。 Active Directory 管理センターは、当時新しかった Windows PowerShell の Active Directory モジュールのためのグラフィカルな管理インターフェイスを提供します。
Active Directory モジュールには 100 を超えるコマンドレットがありますが、その習得は管理者にとって困難となる場合もあります。 Windows PowerShell は Windows 管理の戦略に深く統合されていることから、Active Directory 管理センターにはコマンドレットの実行をグラフィカル インターフェイスで見ることができるビューアーが含まれるようになりました。 履歴の検索、コピー、消去、およびメモの追加を、シンプルなインターフェイスを使って行うことができます。 その目的は、管理者がグラフィカル インターフェイスを使ってオブジェクトを作成または変更した後、それを履歴ビューアー内で確認することで、Windows PowerShell スクリプトに関する知識を深め、スクリプト サンプルを変更できるようにすることです。
## <a name="ad-replication-windows-powershell"></a>AD レプリケーションのための Windows PowerShell

Windows Server 2012 では、Windows PowerShell の Active Directory モジュールに Active Directory レプリケーション用のコマンドレットが追加されています。 これらのコマンドレットにより、新しい、または既存のサイト、サブネット、接続、サイト リンク、およびブリッジの構成ができるようになります。 また、Active Directory レプリケーションのメタデータ、レプリケーションの状態、キュー、および最新のベクター情報を返すこともできます。 新たに導入されたレプリケーションのコマンドレットと、展開のコマンドレットおよび他の既存の AD DS コマンドレットとが組み合わさったことによって、管理者は Windows PowerShell だけでフォレストを管理できるようになりました。 グラフィカル インターフェイスを使わずに Windows Server 2012 のプロビジョニングと管理を行いたいと考えている管理者にとっては新しい機会が生まれます。それによって、オペレーティング システムの攻撃の要件とサービスの要件が低くなります。 このことは、Secret Internet Protocol Router (SIPR) や企業 DMZ など、セキュリティ レベルの高いネットワークにサーバーを展開するときに特に重要となります。
AD DS のサイト トポロジとレプリケーションの詳細については、「 [Windows Server テクニカル リファレンス](https://technet.microsoft.com/library/cc739127(WS.10).aspx)」を参照してください。
## <a name="rid-management-and-issuance-improvements"></a>RID の管理と発行の機能強化
Windows 2000 Active Directory では、RID マスターが導入されました。これは、ユーザー、グループ、コンピューターといったセキュリティ トラスティのセキュリティ識別子 (SID) を作成するために、相対識別子のプールをドメイン コントローラーに対して発行します。 既定では、このグローバル RID 空間は、ドメイン内で作成される合計 2<sup>30</sup> (つまり 1,073,741,823) 個の SID に制限されています。 SID をプールに戻したり、再発行したりすることはできません。 時間の経過と共に、大規模なドメインでは RID の残数が少なくなったり、何らかのアクシデントによって RID が無駄に減り、最終的に枯渇したりする場合があります。
Windows Server 2012 では、RID の発行と管理に関する多数の問題に対処しています。それらの問題は、1999 年に最初の Active Directory ドメインが作成されて以降、AD DS が進化を続ける過程で、お客様と Microsoft カスタマー サポートによって発見されたものです。 次のようなクラスがあります。
- RID 消費の警告が定期的にイベント ログに書き込まれます。
- 管理者が RID プールを無効にすると、イベントがログに記録されます。
- RID ポリシーの RID ブロック サイズの上限が適用されるようになりました。
- グローバル ID 空間が少なくなると、RID の人為的シーリングが適用され、ログに記録されるようになりました。これによって管理者は、グローバル空間が枯渇する前に対策を講じることができます。
- グローバル RID 空間は 1 ビット増加できるようになり、そのサイズは 2 倍の 2<sup>31</sup> (2,147,483,648 個の SID) になります。
RID および RID マスターの詳細については、「 [セキュリティ識別子のしくみ](https://technet.microsoft.com/library/cc778824(WS.10).aspx)」を参照してください。
## <a name="ad-ds-role-deployment-and-management-architecture"></a>AD DS の役割の展開と管理のためのアーキテクチャ
サーバー マネージャーおよび Windows PowerShell の ADDSDeployment モジュールは、AD DS の役割を展開または管理するときの機能に関して、次のコア アセンブリに依存します。
- Microsoft.ADroles.Aspects.dll
- Microsoft.ADroles.Instrumentation.dll
- Microsoft.ADRoles.ServerManager.Common.dll
- Microsoft.ADRoles.UI.Common.dll
- Microsoft.DirectoryServices.Deployment.Types.dll
- Microsoft.DirectoryServices.ServerManager.dll
- Addsdeployment.psm1
- Addsdeployment.psd1
どちらも、リモートからの役割のインストールと構成に関して、Windows PowerShell およびそのリモート呼び出しコマンドに依存します。

また、Windows Server 2012 では、次のサービスの一部として、以前の昇格操作の多くを LSASS.EXE からリファクターします。
- DS 役割サーバー サービス (DsRoleSvc)
- DSRoleSvc.dll (DsRoleSvc サービスによって読み込まれる)
仮想ドメイン コントローラーの昇格、降格、または複製を行うためには、このサービスが存在して実行されている必要があります。 AD DS の役割のインストールでは、このサービスが追加され、開始の種類として [手動] が既定で設定されます。 このサービスは無効にしないでください。
## <a name="adprep-and-prerequisite-checking-architecture"></a>ADPrep および前提条件チェックのためのアーキテクチャ
Adprep をスキーマ マスター上で実行する必要はなくなりました。 Windows Server 2008 x64 以降を搭載したコンピューターであればリモートで実行できます。
> [!NOTE]
> Adprep は LDAP で Schxx.ldf ファイルをインポートし、インポート中にスキーマ マスターへの接続が失われた場合は自動的に再接続しません。 インポート プロセスの一環として、スキーマ マスターは特定のモードに設定され、自動再接続は無効にされます。その理由は、接続が失われた後に LDAP が再接続する場合、再確立された接続が特定のモードに入らないためです。 その場合、スキーマは正しく更新されません。
前提条件チェックによって、特定の条件が満たされることが保証されます。 そのような条件は AD DS のインストールが成功するために必要とされます。 満たされない必須条件がある場合は、それらを解決したうえで、インストールを続行できます。 フォレストまたはドメインの準備ができていないことも検出されるので、その場合は Adprep 展開コードが自動的に実行されます。
### <a name="adprep-executables-dlls-ldfs-files"></a>ADPrep の実行可能ファイル、DLL、LDF、およびファイル
- ADprep.dll
- Ldifde.dll
- Csvde.dll
- Sch14.ldf ~ Sch56.ldf
- Schupgrade.cat
- *dcpromo.csv
以前 ADprep.exe に格納されていた AD 準備コードは、adprep.dll 内にリファクターされています。 これによって、ADPrep.exe と Windows PowerShell の ADDSDeployment モジュールの両方が同じタスクのライブラリを使用し、同じ機能を持つことができます。 Adprep.exe はインストール メディアに収録されていますが、自動プロセスがこれを直接呼び出すことはありません。管理者だけが手動で実行します。 Windows Server 2008 x64 以降のオペレーティング システム上でのみ実行できます。 Ldifde.exe と csvde.exe も準備プロセスによって読み込まれる DLL としてリファクターされたバージョンを持ちます。 スキーマ拡張では、以前のオペレーティング システム バージョンと同様、署名確認される LDF ファイルを使用します。

> [!IMPORTANT]
> Windows Server 2012 用の 32 ビット Adprep32.exe ツールはありません。 フォレストとドメインを準備するには、ドメイン コントローラーとして実行される、メンバー サーバーとして実行される、またはワークグループ内で実行される、1 台以上の Windows Server 2008 x64、Windows Server 2008 R2、または Windows Server 2012 コンピューターが必要です。 Adprep.exe は Windows Server 2003 x64 上では実行されません。
## <a name="BKMK_PrereuisiteChecking"></a>前提条件のチェック
Windows PowerShell の ADDSDeployment マネージ コードに組み込まれている前提条件チェック システムは、操作に基づいてさまざまなモードで動作します。 次の表に、各テストについて、それがいつ使用され、何がどのような方法で検証されるのかについて説明します。 検証が失敗し、エラー情報だけでは問題のトラブルシューティングを行えない場合に、この表が役立ちます。
これらのテストのログは、 **DirectoryServices-Deployment** 操作イベント ログ チャネルのタスク カテゴリ **Core**に、常にイベント ID **103**で記録されます。
### <a name="prerequisite-windows-powershell"></a>前提条件のための Windows PowerShell
ドメイン コントローラー展開のすべてのコマンドレットには、対応する Windows PowerShell の ADDSDeployment コマンドレットがあります。 これらのコマンドレットは、その関連するコマンドレットとほぼ同じ引数を持ちます。
- Test-ADDSDomainControllerInstallation
- Test-ADDSDomainControllerUninstallation
- Test-ADDSDomainInstallation
- Test-ADDSForestInstallation
- Test-ADDSReadOnlyDomainControllerAccountCreation
通常、これらのコマンドレットを実行する必要はありません。既定で、展開コマンドレットと一緒に自動的に実行されます。
#### <a name="BKMK_ADDSInstallPrerequisiteTests"></a>前提条件のテスト
||||
|-|-|-|
|テスト名|プロトコル<br /><br />使用される|説明と注意事項|
|VerifyAdminTrusted<br /><br />ForDelegationProvider|LDAP|既存のパートナー ドメイン コントローラーに対する "コンピューターとユーザー アカウントに委任時の信頼を付与" (SeEnableDelegationPrivilege) 特権がユーザーにあることを検証します。 構成された tokenGroups 属性へのアクセスが必要になります。<br /><br />Windows Server 2003 ドメイン コントローラーに接続するときは使用されません。 昇格の前にこの特権を手動で確認する必要があります。|
|VerifyADPrep<br /><br />Prerequisites (フォレスト)|LDAP|rootDSE namingContexts 属性およびスキーマ名前付けコンテキストの fsmoRoleOwner 属性を使って、スキーマ マスターを検出して接続します。 AD DS のインストールにとってどの準備操作 (forestprep、domainprep、または rodcprep) が必要なのかを判断します。 スキーマ objectVersion が想定されていることと、それがさらに拡張を必要としているかどうかを検証します。|
|VerifyADPrep<br /><br />Prerequisites (ドメインおよび RODC)|LDAP|rootDSE namingContexts 属性およびインフラストラクチャ コンテナーの fsmoRoleOwner 属性を使って、インフラストラクチャ マスターを検出して接続します。 RODC のインストールの場合、このテストはドメイン名前付けマスターを検出し、それがオンラインであることを確認します。|
|CheckGroup<br /><br />メンバーシップ|LDAP、<br /><br />SMB 経由の RPC (LSARPC)|操作に応じて、ユーザーが Domain Admins グループまたは Enterprise Admins グループのメンバーであることを検証します (ドメイン コントローラーの追加または降格の場合は DA、ドメインの追加または削除の場合は EA)。|
|CheckForestPrep<br /><br />GroupMembership|LDAP、<br /><br />SMB 経由の RPC (LSARPC)|ユーザーが Schema Admins グループおよび Enterprise Admins グループのメンバーであることと、既存のドメイン コントローラーに対する "監査とセキュリティ ログの管理" (SeSecurityPrivilege) 特権を持っていることを検証します。|
|CheckDomainPrep<br /><br />GroupMembership|LDAP、<br /><br />SMB 経由の RPC (LSARPC)|ユーザーが Domain Admins グループのメンバーであることと、既存のドメイン コントローラーに対する "監査とセキュリティ ログの管理" (SeSecurityPrivilege) 特権を持っていることを検証します。|
|CheckRODCPrep<br /><br />GroupMembership|LDAP、<br /><br />SMB 経由の RPC (LSARPC)|ユーザーが Enterprise Admins グループのメンバーであることと、既存のドメイン コントローラーに対する "監査とセキュリティ ログの管理" (SeSecurityPrivilege) 特権を持っていることを検証します。|
|VerifyInitSync<br /><br />AfterReboot|LDAP|スキーマ マスターが rootDSE 属性 becomeSchemaMaster に対してダミー値を設定して再起動してから 1 回以上レプリケートしていることを検証します。|
|VerifySFUHotFix<br /><br />Applied|LDAP|既存のフォレスト スキーマに既知の問題、"OID が 1.2.840.113556.1.4.7000.187.102 の UID 属性に対する SFU2 拡張" が含まれていないことを検証します。<br /><br />([https://support.microsoft.com/kb/821732](https://support.microsoft.com/kb/821732))|
|VerifyExchange<br /><br />SchemaFixed|LDAP、WMI、DCOM、RPC|既存のフォレストを検証スキーマにない問題 Exchange 2000 の拡張機能の ms がまだ含まれて-こと-アシスタント-名前、ms-こと-LabeledURI と ms こと家識別子 ([https://support.microsoft.com/kb/314649](https://support.microsoft.com/kb/314649))|
|VerifyWin2KSchema<br /><br />一貫性|LDAP|既存のフォレスト スキーマに一貫性のある (サード パーティによって間違って変更されていない) コアの属性とクラスがあることを検証します。|
|DCPromo|RPC 経由の DRSR<br /><br />LDAP、<br /><br />DNS<br /><br />SMB 経由の RPC (SAMR)|プロモーション コードに渡されるコマンド ライン構文を検証し、昇格をテストします。 フォレストまたはドメインを新規に作成する場合、既存のフォレストまたはドメインがないことを検証します。|
|VerifyOutbound<br /><br />ReplicationEnabled|LDAP、SMB 経由の DRSR、SMB 経由の RPC (LSARPC)|レプリケーション パートナーとして指定された既存のドメイン コントローラーで出力方向のレプリケーションが有効であることを検証します。そのために、NTDS 設定オブジェクトの NTDSDSA_OPT_DISABLE_OUTBOUND_REPL (0x00000004) のオプション属性を確認します。|
|VerifyMachineAdmin<br /><br />パスワード|RPC 経由の DRSR<br /><br />LDAP、<br /><br />DNS<br /><br />SMB 経由の RPC (SAMR)|DSRM のセーフ モードのパスワード セットがドメインの複雑さの要件を満たしていることを検証します。|
|VerifySafeModePassword|*該当なし*|ローカルの Administrator パスワード セットが、コンピューター セキュリティ ポリシーの複雑さの要件を満たしていることを検証します。|
| 70.546729 | 606 | 0.821223 | yue_Hant | 0.680623 |
760f4db19f95360090ed10515bb11f42bca060bd | 1,716 | md | Markdown | _posts/2020-09-26-gates-mugshot.md | quorten/quorten-blog1 | f14f1c2d20a66f36cd083d5044635aacd782ff90 | ["Unlicense"] | null | null | null | _posts/2020-09-26-gates-mugshot.md | quorten/quorten-blog1 | f14f1c2d20a66f36cd083d5044635aacd782ff90 | ["Unlicense"] | 1 | 2021-01-19T23:42:48.000Z | 2021-02-03T04:02:20.000Z | _posts/2020-09-26-gates-mugshot.md | quorten/quorten-blog1 | f14f1c2d20a66f36cd083d5044635aacd782ff90 | ["Unlicense"] | null | null | null |
---
layout: post
title: Bill Gates famous Mugshot photo
date: 2020-09-26 19:51 -0500
author: quorten
categories: [misc, vintage-computing, mac-classic]
tags: [misc, vintage-computing, mac-classic]
---
Ha, this is interesting. A graphics designer working at Microsoft
took the liberty of using the Bill Gates mugshot photo as the
template for the default Outlook contact silhouette.
20200926/https://skeptics.stackexchange.com/questions/48565/was-bill-gates-mugshot-photo-used-in-2010-as-a-template-for-outlook-contacts
20200926/https://library.stanford.edu/areas/apple-computer-collections/access-apple-collections
Really, is there more information online surrounding the famous Bill
Gates mugshot photo? It sure comes up a lot of times, but the
mentions are always that the details aren't clear around the cause of
the arrest, other than it was for some minor traffic violation. Well,
yeah, that's it. But there are other traffic citations of Bill Gates
where the details are more clear.
So, what it seems it could be is something like this. Well, Bill
Gates was known to be very evasive in court... he doesn't play well
with authority so some would say. And from the later police report
that is better understood, it wouldn't be too far a stretch to say
that Bill Gates may have lashed out too much at the previous police
encounter, thus leading to his arrest.
Yeah, and apparently Bill Gates likes racing/speeding.
20200928/DuckDuckGo why is the bill gates mugshot photo so popular
20200928/https://corruptico.com/2017/08/31/bill-gates-arrested-cop-fired/
20200928/https://www.topspeed.com/cars/car-news/bill-gates-famous-mugshot-due-to-a-speed-ticket-in-a-porsche-ar25588.html
20200928/https://www.quora.com/What-did-Bill-Gates-do-to-go-to-jail?share=1
---
title: Behavior changes to Integration Services features in SQL Server 2014 | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: integration-services
ms.topic: conceptual
helpviewer_keywords:
- behavior changes [Integration Services]
- Integration Services, backward compatibility
ms.assetid: 611d22fa-5ac7-485e-9a40-7131e852f794
author: janinezhang
ms.author: janinez
manager: craigg
ms.openlocfilehash: 30e1e0d882d249130cbd72ca62088ba3f7978ad3
ms.sourcegitcommit: 6fd8c1914de4c7ac24900fe388ecc7883c740077
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/26/2020
ms.locfileid: "66061255"
---
# <a name="behavior-changes-to-integration-services-features-in-sql-server-2014"></a>Behavior changes to Integration Services features in SQL Server 2014
  This topic describes behavior changes in [!INCLUDE[ssISnoversion](../includes/ssisnoversion-md.md)]. Behavior changes affect how features work or interact in the current release of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] [!INCLUDE[ssISnoversion](../includes/ssisnoversion-md.md)] compared with earlier versions of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)].

  There are no behavior changes to Integration Services features in SQL Server 2014.
---
# required metadata
title: Create a kanban rule using a kanban line event
description: This procedure creates a kanban rule by using the kanban line event setting to trigger pull from a process activity.
author: ChristianRytt
manager: AnnBe
ms.date: 08/24/2016
ms.topic: business-process
ms.prod:
ms.service: dynamics-ax-applications
ms.technology:
# optional metadata
# ms.search.form:
audience: Application User
# ms.devlang:
ms.reviewer: yuyus
ms.search.scope: Operations
# ms.tgt_pltfrm:
# ms.custom:
ms.search.region: Global
ms.search.industry: Manufacturing
ms.author: crytt
ms.search.validFrom: 2016-06-30
ms.dyn365.ops.version: AX 7.0.0
---
# Create a kanban rule using a kanban line event
[!include [task guide banner](../../includes/task-guide-banner.md)]
This procedure creates a kanban rule by using the kanban line event setting to trigger pull from a process activity. The kanban rule is triggered by a kanban process activity, with a quantity equal to or greater than 25 each. The demo data company used to create this task is USMF. This task is intended for the process engineer or the value stream manager, as they prepare production of a new or modified product in a lean environment.
## Create a kanban rule
1. Go to Product information management > Lean manufacturing > Kanban rules.
2. Click New.
3. In the Replenishment strategy field, select 'Event'.
* This generates kanbans directly from demand. It is used to set up rules that define a make-to-order scenario.
4. In the First plan activity field, enter or select a value.
* Enter or select SpeakerAssemblyAndPolish. The first activity of a manufacturing kanban rule is a process activity in the production flow. When you select the activity, the validity dates of the activity are copied to the validity dates of the kanban rule.
5. Expand the Details section.
6. In the Product field, type 'L0001'.
7. Expand the Events section.
8. In the Kanban line event field, select 'Automatic'.
* This generates event kanbans on demand. The field is used to configure kanban rules that replenish material that is required for a downstream process activity. When you select Automatic, the event kanbans are created with the demand. This setting is recommended if you expect to execute production on the same day.
9. Set Minimum event quantity to '25'.
* Event kanbans are generated when the demand quantity is equal to or more than this field. This is useful if you want to produce an order quantity less than this field on one machine and more than this field on another machine.
10. Click Save.
## Create sales order and trigger kanban chain
1. Go to Sales and marketing > Sales orders > All sales orders.
2. Click New.
3. In the Customer account field, enter or select a value.
* Select Customer account US-003, Forest Wholesales.
4. Click OK.
5. In the Item number field, type 'L0001'.
* L0001 is the item for which you created the kanban rule.
6. Set Quantity to '27'.
* Because 27 is higher than the minimum quantity of 25 on the kanban rule, this will trigger an event kanban.
7. In the Site field, type '1'.
8. In the Warehouse field, enter or select a value.
* Select warehouse 13.
9. Click Save.
## View the kanban generated by the kanban rule
1. Go to Product information management > Lean manufacturing > Kanban rules.
2. In the list, find and select the desired record.
3. Expand the Kanbans section.
* Notice that a kanban for 27 was created to process the activity based on the created kanban rule.
* This is the last step.
| 48.743243 | 436 | 0.756307 | eng_Latn | 0.997408 |
761048fd1895849dd0d5dc3ee9e332610953753b | 3,091 | md | Markdown | help/data-science-workspace/jupyterlab/using-git-for-collaboration.md | AdobeDocs/experience-platform.pt-BR | 36d4d1ea8372fc5685742fbd1dd400537c766cfe | [
"MIT"
] | null | null | null | help/data-science-workspace/jupyterlab/using-git-for-collaboration.md | AdobeDocs/experience-platform.pt-BR | 36d4d1ea8372fc5685742fbd1dd400537c766cfe | [
"MIT"
] | null | null | null | help/data-science-workspace/jupyterlab/using-git-for-collaboration.md | AdobeDocs/experience-platform.pt-BR | 36d4d1ea8372fc5685742fbd1dd400537c766cfe | [
"MIT"
] | null | null | null | ---
keywords: Experience Platform; JupyterLab; notebooks; Data Science Workspace; popular topics; Git; Github
solution: Experience Platform
title: Collaborate in JupyterLab using Git
topic-legacy: tutorial
type: Tutorial
description: Git is a distributed version control system for tracking changes in source code during software development. Git comes pre-installed in the Data Science Workspace JupyterLab environment.
exl-id: d7b766f7-b97d-4007-bc53-b83742425047
translation-type: tm+mt
source-git-commit: 5d449c1ca174cafcca988e9487940eb7550bd5cf
workflow-type: tm+mt
source-wordcount: '281'
ht-degree: 1%
---
# Collaborate in [!DNL JupyterLab] using [!DNL Git]

[!DNL Git] is a distributed version control system for tracking changes in source code during software development. Git comes pre-installed in the [!DNL Data Science Workspace JupyterLab] environment.

## Prerequisites

>[!NOTE]
>
> The Git server you intend to use needs to be accessible from the internet.

The [!DNL Data Science Workspace JupyterLab] environment is a hosted environment and is not deployed behind your corporate firewall, so the Git server you connect to must be accessible on the public internet. This can be a public or private repository on [GitHub](https://github.com/) or another instance of a [!DNL Git] server that you have chosen to host.

## Connect [!DNL Git] to the [!DNL Data Science Workspace JupyterLab Notebooks] environment

Start by launching [!DNL Adobe Experience Platform] and navigating to the [[!DNL JupyterLabs Notebooks]](https://platform.adobe.com/notebooks/jupyterLab) environment.

In [!DNL JupyterLab], select **[!UICONTROL File]** and hover over **[!UICONTROL New]**. In the dropdown list that appears, select **[!UICONTROL Terminal]**.



Next, in the *Terminal*, navigate to your workspace using the following command: `cd my-workspace`.



>[!TIP]
>
> To see a list of available git commands, run the command `git -help` in the terminal.

Next, clone the repository you want to use with the `git clone` command. Clone your project using an `https://` URL rather than `ssh://`.

**Example**:

`git clone https://github.com/adobe/experience-platform-dsw-reference.git`



>[!NOTE]
>
> To perform any write operation (`git push`, for example), the following configuration commands need to be run for each new session. Also note that any push command prompts you for a username and password.
>
>`git config --global user.email "[email protected]"`
>
>`git config --global user.name "Your Name"`

## Next steps

Once you have finished cloning your repository, you can use Git as you normally would on your local machine to collaborate with others on notebooks. For more information on what you can do in [!DNL JupyterLab], see the [[!DNL JupyterLab user guide]](./overview.md).
# HDF<a name="EN-US_TOPIC_0000001078041442"></a>
- [Introduction](#section11660541593)
- [Directory Structure](#section161941989596)
- [Usage](#section1312121216216)
- [HDF](#section129654513264)
- [Sensor](#section188637474417)
- [Display](#section161502341317)
- [Input](#section12629164020115)
- [WLAN](#section11408103183114)
- [Repositories Involved](#section1371113476307)
## Introduction<a name="section11660541593"></a>
This repository stores the core source code information of the OpenHarmony driver subsystem, including the driver framework, configuration management, configuration parsing, universal framework model, and unified hardware driver interfaces. It is designed to provide a more precise and efficient development environment, where you can perform one-time development for multi-system deployment.
**Figure 1** Architecture of the HDF<a name="fig19330181162816"></a>

## Directory Structure<a name="section161941989596"></a>
```
/drivers/framework
├── ability # Capabilities for the driver development, such as the message model libraries
│ ├── config # Parsing code of the configuration
│ └── sbuf # Data serialization code
├── core # Core code for implementing the HDF
│ ├── adapter # Kernel adaptation layer
│ ├── common # Common basic code
│ ├── host # Driver host environment module
│ ├── manager # Management module
│ └── shared # Code shared by the host and manager modules
├── include # Header files for the HDF to provide capabilities externally
│ ├── config # Header files declaring capabilities for parsing configuration
│ ├── core # Header files exposed externally
│ ├── net # Header files related to network operations
│ ├── osal # Header files of the OS adaptation layer
│ ├── platform # Header files declaring platform APIs
│ ├── utils # Header files declaring common capabilities
│ └── wifi # Header files for the WLAN module to provide capabilities externally
├── model # Universal framework module for drivers
│ ├── display # Display framework module
│ ├── input # Input framework module
│ ├── network # WLAN framework module
│ └── sensor # Sensor driver module
├── support # Basic capabilities
│ └── platform # Platform driver framework and APIs, including GPIO, I2C, and SPI
├── tools # Source code related to the tools of the HDF
│ └── hc-gen # Source code of the configuration management tool
└── utils # Basic data structures and algorithms
```
## Usage<a name="section1312121216216"></a>
### HDF<a name="section129654513264"></a>
To develop a driver based on the HDF, you only need to register and configure required APIs. The driver framework will load and initialize the driver based on the parsing content.
Driver development based on the HDF consists of the following three parts:
- Driver: Develop the functions.
- Information configuration: Present the loading information of the driver.
- Resource configuration: Configure the hardware information of the driver.
You need to complete the logic code for the functions of a driver by the following APIs.
The first part that catches your eye is the driver entry, which is described through **DriverEntry**.
Three APIs are available, namely **Bind**, **Init**, and **Release**.
```
struct HdfDriverEntry g_deviceSample = {
.moduleVersion = 1,
.moduleName = "sample_driver",
.Bind = SampleDriverBind,
.Init = SampleDriverInit,
.Release = SampleDriverRelease,
};
```
**Bind**: This API is used to bind driver devices and its functions.
```
int32_t SampleDriverBind(struct HdfDeviceObject *deviceObject)
{
// TODO: Bind device service to device object.
// And you can also initialize device resources here.
return HDF_SUCCESS;
}
```
**Init**: When devices are successfully bound, the HDF calls **Init** to initialize the driver. After initialization is complete, the HDF will determine whether to create external service interfaces based on the configuration file. If the driver fails to be initialized, the driver framework will automatically release the created device interface.
```
int32_t SampleDriverInit(struct HdfDeviceObject *deviceObject)
{
// TODO: Init hardware or other resources here.
    return HDF_SUCCESS;
}
```
**Release**: When you need to uninstall a driver, the HDF calls this function to release the driver resources. Then, other internal resources will be released.
```
void SampleDriverRelease(struct HdfDeviceObject *deviceObject)
{
// Release all resources.
return;
}
```
For details, see [HDF Overview](en-us_topic_0000001051611604.md).
### Sensor<a name="section188637474417"></a>
The sensor driver module is developed based on the HDF and supports functions such as cross-OS migration and differentiated device configuration.
- APIs for implementing sensor driver module capabilities: Implement the capabilities of registering, loading, and deregistering sensor drivers as well as detecting sensor device depending on the HDF, normalize APIs for sensor devices of the same type, and offer APIs for parsing register configurations, abstract APIs for bus access, and abstract platform APIs.
- APIs to be implemented by developers: Based on the HDF Configuration Source \(HCS\), implement differentiated configuration for sensors of the same type and serialized configuration of sensor device parameters, and offer APIs for some sensor device operations to simplify the sensor driver development.
For details, see [Sensor Driver Overview](en-us_topic_0000001078401780.md).
### Display<a name="section161502341317"></a>
The display driver model that is developed based on the HDF shields the differences among chip platforms, achieving cross-platform migration of the OS. It also abstracts the common service logic of peripherals and configures differentiated adaptation APIs so that a driver model can be compatible with different peripherals. In this way, third-party vendors can efficiently access the OpenHarmony driver ecosystem.
- APIs for implementing display driver module capabilities: Implement the Hardware Driver Interfaces \(HDIs\) and their adaptation with the chip platform. In addition, the kernel-mode driver abstracts the common services of the panel driver and provides capabilities of initializing the panel, obtaining the panel configuration, powering on/off the panel, and implementing the backlight control.
- APIs to be implemented by developers: Complete the board-level HCS configuration and private data configuration of the panel, or offer differentiated APIs for some components to ensure efficient development of the display driver.
For details, see [LCD Overview](en-us_topic_0000001052857284.md).
### Input<a name="section12629164020115"></a>
The input driver model is developed based on the HDF, provides unified driver APIs for upper-layer input services, and is decoupled from the chip platform. In addition, it abstracts several types of common platform drivers based on different input devices and is compatible with those input devices through configuration and differentiated peripheral APIs.
- APIs for implementing input driver module capabilities: Implement the HDIs and provide capabilities of managing devices, controlling services, and reporting data. Besides, the input driver model provides a unified driver for different input devices and the capabilities of registering/unregistering an input device, reporting event data, parsing configuration, and loading a common driver.
- APIs to be implemented by developers: Based on the provided platform driver, add the device descriptions as well as private configuration of the input device and implement differentiated APIs to greatly shorten the time required for developing input drivers.
For details, see [Touchscreen Overview](en-us_topic_0000001052857350.md).
### WLAN<a name="section11408103183114"></a>
The WLAN module is developed based on the HDF and supports cross-OS migration, component adaptation, and modular assembly and compilation. Based on the unified APIs provided by the WLAN module, driver developers of WLAN vendors can adapt their driver code and developers of the HarmonyOS Driver Interfaces \(HDIs\) are capable of creating, disabling, scanning, and connecting to WLAN hotspots.
- APIs for implementing WLAN driver module capabilities: Implement the APIs of the WLAN HDI layer and provide capabilities of setting/obtaining the MAC address, obtaining the feature type, and setting the transmit power for upper-layer input services, as well as the capabilities of creating/releasing a **WifiModule**, connecting to/disconnecting from a WLAN hotspot, and applying for/releasing a **NetBuf** for developers.
- APIs to be implemented by developers: Based on the provided platform driver, complete the board-level HCS configuration as well as the differentiated WLAN configuration, and offer APIs for initializing, deregistering, enabling, and disabling a network device.
For details, see [WLAN Overview](en-us_topic_0000001051643558.md).
## Repositories Involved<a name="section1371113476307"></a>
[Driver subsystem](https://gitee.com/openharmony/docs/blob/master/en/readme/driver-subsystem.md)
drivers\_framework
[drivers\_adapter\_uhdf](https://gitee.com/openharmony/drivers_adapter_uhdf/blob/master/README.md)
[drivers\_adapter\_khdf\_linux](https://gitee.com/openharmony/drivers_adapter_uhdf/blob/master/README.md)
[drivers\_adapter\_khdf\_liteos](https://gitee.com/openharmony/drivers_adapter_uhdf/blob/master/README.md)
---
layout: post
title: Predicting the Direction of Kospi200 Prices with Gradient Boosting
date: 2020-11-01
image: computer.png
tags: Data Finance
---
## What is Gradient Boosting?

The **Gradient Boosting** model is also a tree-ensemble model, but it differs from Random Forest. The `Boosting` technique combines weak learners to build a strong learner: a model is trained first even if its accuracy is low, and the following models then compensate for its weaknesses. In Gradient Boosting specifically, the observations with large residuals are learned more heavily, which minimizes the loss function.

`XGBoost`, used here, stands for Extreme Gradient Boosting and is one of the gradient boosting algorithms. Its advantages are fast training and prediction and built-in regularization against overfitting; its drawback is that it is very sensitive to hyperparameters.<BR/><BR/><BR/><BR/>
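The residual-fitting loop at the heart of gradient boosting can be sketched in a few lines. This is a toy illustration with depth-1 threshold stumps and squared loss, not the actual XGBoost algorithm (which adds second-order gradients, regularization, and more):

```python
import numpy as np

def fit_stump(x, y):
    """Best single-threshold split minimizing squared error; each leaf predicts its mean."""
    best = None
    for t in np.unique(x)[:-1]:                    # exclude max so the right leaf is non-empty
        left, right = y[x <= t], y[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lm, rm = best
    return lambda q: np.where(q <= t, lm, rm)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)

learning_rate = 0.3
pred = np.full_like(y, y.mean())                   # start from a constant model
for _ in range(50):
    residual = y - pred                            # negative gradient of squared loss
    stump = fit_stump(x, residual)                 # weak learner fitted to the residuals
    pred += learning_rate * stump(x)               # each stump corrects the previous ensemble

mse_const = ((y - y.mean()) ** 2).mean()
mse_boost = ((y - pred) ** 2).mean()
print(mse_const, mse_boost)                        # the boosted ensemble's loss is far lower
```

XGBoost follows the same additive scheme, but fits each tree with regularization controlled by parameters such as `gamma` and `min_child_weight`, which appear in the hyperparameter search below.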
## Purpose of the Analysis

I want to examine how effectively the **direction of the Kospi200 price can be predicted** from overseas stock indices and major economic and financial indicators, and to identify the **factors that most influence** the prediction. Furthermore, I compare the results against the `6-month-ahead direction prediction`, which showed the highest accuracy with the Random Forest model in the [previous post](https://yut-a.github.io/2020/10/25/RandomForest-Kospi200/).<BR/><BR/><BR/><BR/>
## Data Overview

For the Kospi200 target, I used the price of the `Kodex200 ETF`. Because predictions are made from daily data, I computed the return by comparing the next day's close with the close `6 months` later, and labeled it `1` if **return > 0** and `0` if **return <= 0**.

Excluding the date, a total of 18 features were used for prediction.

* `Composite Leading Indicator`: gauges the direction of the economy 3–6 months ahead.
* `Export growth rate`: computed from the export value index.
* `Call rate`: the short-term rate on transactions between financial institutions.
* `CD rate`: the rate applied when certificates of deposit (CDs) are issued and traded in the secondary market.
* `DAX futures`: futures on Germany's benchmark DAX index.
* `DOW futures`: futures on the US benchmark Dow index.
* `FTSE futures`: futures on the world equity index published by the UK's FTSE.
* `Nikkei225 futures`: futures on Japan's benchmark Nikkei index.
* `Gold futures` `WTI futures`: futures prices of gold and crude oil, respectively.
* `Korea 3Y bond`: the price of Korea's 3-year government bond.
* `US 10Y bond` `US 3Y bond`
* `EUR/USD` `USD/KRW`
* `USD futures index`: the dollar index measures the value of the US dollar against six currencies of large, stable economies (euro, yen, pound, Canadian dollar, Swedish krona, Swiss franc); this is its futures index.
* `VIX`: the market's expectation of 30-day volatility implied by S&P500 index options.
* `VKOSPI`: a measure of the future volatility implied by KOSPI200 index options.

Given the target definition, the data span roughly April 2009 through January 2020.<BR/><BR/><BR/><BR/>
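As a toy sketch of this labeling rule (hypothetical prices and a 3-step horizon, instead of the roughly 6-month horizon used in the actual analysis):

```python
# label each day 1 if the close H steps after the next close is higher, else 0
prices = [100, 102, 101, 105, 99, 104, 98]   # hypothetical closes
H = 3                                        # horizon; the post uses ~180 trading days

labels = []
for t in range(len(prices) - H - 1):
    nxt = prices[t + 1]                      # next day's close
    future = prices[t + 1 + H]               # close H steps after that
    labels.append(1 if (future - nxt) / nxt > 0 else 0)

print(labels)  # → [0, 1, 0]
```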
## Analysis Process
{% highlight python %}
# load the data
import pandas as pd

CLI = pd.read_csv("CLI.csv", encoding = "cp949")      # composite leading indicator
EX = pd.read_csv("Export.csv", encoding = "cp949")    # export value index
IR = pd.read_csv("IR.csv", encoding = "cp949")        # call rate, CD rate
DAX = pd.read_csv("DAX_futures.csv")                  # German DAX futures
DOW = pd.read_csv("dow_futures.csv")                  # US DOW futures
EUR_USD = pd.read_csv("EUR_USD.csv")                  # EUR/USD exchange rate
FTSE = pd.read_csv("FTSE_futures.csv")                # UK FTSE futures
Gold = pd.read_csv("gold_futures.csv")                # gold futures
Korea_3Y_bond = pd.read_csv("Korea_3Y_bond.csv")      # Korea 3Y government bond
Nikkei = pd.read_csv("Nikkei225_futures.csv")         # Japan Nikkei225 futures
US_3Y_bond = pd.read_csv("US_3Y_bond.csv")            # US 3Y Treasury
US_10Y_bond = pd.read_csv("US_10Y_bond.csv")          # US 10Y Treasury
USD_index = pd.read_csv("USD_futures.csv")            # USD futures index
USD_KRW = pd.read_csv("USD_KRW.csv")                  # USD/KRW exchange rate
VIX = pd.read_csv("VIX.csv")                          # VIX
VKOSPI = pd.read_csv("VKOSPI.csv")                    # KOSPI volatility
WTI = pd.read_csv("WTI_futures.csv")                  # WTI futures
kodex200 = pd.read_csv("kodex200_price.csv",          # Kodex200 ETF
                      skiprows = 4, engine = "python")
{% endhighlight %}
First, I cleaned the `composite leading indicator`, `export growth rate`, `call rate`, and `CD rate` data. The export growth rate was derived from the export value index. Because the leading indicator and the export growth rate are monthly series, the value for a given month was carried forward to fill the remaining days of that month. Also, since the September and October 2020 figures had not yet been released, they were filled with the August values.
{% highlight python %}
# preprocess the CLI, IR, and EX data
def change(df):
    if len(df.index) == 1:
        df = df.T.reset_index()
        df = df.drop([0, 1, 2, 3])
    else:
        df = df.drop([0, 1, 2])
        df = df.reset_index(drop = True)
    df = df.rename(columns = {df.columns[0] : "date"})
    df["date"] = pd.to_datetime(df["date"], infer_datetime_format = True)
    return df

CLI = change(CLI)
IR = change(IR)
EX = change(EX)

CLI = CLI.rename(columns = {CLI.columns[1] : "CLI"})
IR = IR.rename(columns = {IR.columns[1] : "CD_rate", IR.columns[2] : "call_rate"})
EX = EX.rename(columns = {EX.columns[1] : "EX"})

# compute the export growth rate
EX["EX(%)"] = round(EX["EX"].pct_change() * 100, 2)
EX = EX.drop(["EX"], axis = 1)
EX = EX.dropna(axis = 0)
{% endhighlight %}
{% highlight python %}
# merge and fill missing values
from_bok = pd.merge(IR, CLI, on = "date", how = "outer")
from_bok = pd.merge(from_bok, EX, on = "date", how = "outer")
from_bok = from_bok.sort_values(by = "date", ascending = True)
from_bok = from_bok.reset_index(drop = True)

from_bok["CLI"] = from_bok["CLI"].fillna(method = "ffill")
from_bok["EX(%)"] = from_bok["EX(%)"].fillna(method = "ffill")

# sort by date in descending order
from_bok = from_bok.sort_values(by = "date", ascending = False).reset_index(drop = True)
from_bok
{% endhighlight %}
<img width="328" alt="스크린샷 2020-10-25 오후 7 04 26" src="https://user-images.githubusercontent.com/70478154/97104147-f13bee00-16f4-11eb-91af-27233670b0d8.png">
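The `fillna(method = "ffill")` call above relies on forward-fill semantics: the last observed monthly value is carried across the following daily rows. A minimal sketch with toy values:

```python
# monthly figures are observed only on release days; carry the last value forward
daily = [("06-01", None), ("06-02", 101.3), ("06-03", None),
         ("06-04", None), ("06-05", 101.9)]

filled, last = [], None
for date, value in daily:
    if value is not None:
        last = value                 # remember the newest observation
    filled.append((date, last))      # rows before any observation keep None

print(filled)  # 06-03 and 06-04 inherit 101.3; 06-05 switches to 101.9
```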
Next, I cleaned the remaining data other than the target.
{% highlight python %}
# preprocess the remaining data
VKOSPI = VKOSPI.sort_values(by = "일자", ascending = False).reset_index(drop = True)

name_list = ["DAX", "DOW", "FTSE", "Gold", "Nikkei", "USD_index", "VIX", "VKOSPI", "WTI",
            "EUR_USD", "Korea_3Y_bond", "US_3Y_bond", "US_10Y_bond", "USD_KRW"]

for i in name_list:

    # keep only the first two columns and rename them
    globals()["{}".format(i)] = globals()["{}".format(i)].iloc[:,:2]
    globals()["{}".format(i)] = globals()["{}".format(i)].rename(columns = {globals()["{}".format(i)].columns[0] : "date",
                                                                           globals()["{}".format(i)].columns[1] : i})

    # prepare the date strings for datetime conversion
    globals()["{}".format(i)]["date"] = globals()["{}".format(i)]["date"].str.replace("년", "-")
    globals()["{}".format(i)]["date"] = globals()["{}".format(i)]["date"].str.replace(" ", "")
    globals()["{}".format(i)]["date"] = globals()["{}".format(i)]["date"].str.replace("월", "-")
    globals()["{}".format(i)]["date"] = globals()["{}".format(i)]["date"].str.replace("일", "")

    # convert to datetime
    globals()["{}".format(i)]["date"] = pd.to_datetime(globals()["{}".format(i)]["date"], infer_datetime_format = True)
{% endhighlight %}

{% highlight python %}
# merge the data
all_data = from_bok.copy()

from_invest = [DAX, DOW, FTSE, Gold, Nikkei, USD_index, VIX, VKOSPI, WTI,
              EUR_USD, Korea_3Y_bond, US_3Y_bond, US_10Y_bond, USD_KRW]

for i in from_invest:
    all_data = pd.merge(all_data, i, on = "date", how = "outer")

all_data
{% endhighlight %}
<img width="984" alt="스크린샷 2020-10-25 오후 7 07 00" src="https://user-images.githubusercontent.com/70478154/97104203-542d8500-16f5-11eb-8a27-c02d6b400d03.png">
Next, I cleaned the target data, merged all of the data together, and then removed every missing row: gaps caused by the holiday differences between overseas markets and Korea, gaps where monthly data fell on weekends or holidays, and, because some series only begin in 2009, all rows before then.
{% highlight python %}
# preprocess the target data
kodex200 = kodex200.drop(kodex200.iloc[:,1:4], axis = 1)
kodex200 = kodex200.drop(kodex200.iloc[:,2:], axis = 1)
kodex200.columns = ["date", "price"]
kodex200 = kodex200.sort_values(by = "date", ascending = False).reset_index(drop = True)
kodex200["date"] = pd.to_datetime(kodex200["date"], infer_datetime_format = True)
kodex200.head()
{% endhighlight %}
<img width="149" alt="스크린샷 2020-10-25 오후 7 14 52" src="https://user-images.githubusercontent.com/70478154/97104330-678d2000-16f6-11eb-90b9-7b300a2d4a98.png">
{% highlight python %}
# merge all the data
finance = pd.merge(all_data, kodex200, on = "date", how = "outer")

# drop missing values
finance = finance.dropna(axis = 0).reset_index(drop = True)
finance
{% endhighlight %}
<img width="985" alt="스크린샷 2020-10-25 오후 7 15 53" src="https://user-images.githubusercontent.com/70478154/97104345-8f7c8380-16f6-11eb-8c74-dfc7a112dc86.png">
To make the analysis easier, I converted the data types to numeric.
{% highlight python %}
# convert data types to numeric
for i in range(0, len(finance.columns)):
    if finance.dtypes[i] == "object":
        finance.iloc[:,i] = finance.iloc[:,i].str.replace(",", "")

cols = list(finance.select_dtypes(include = "object").columns)[:8]

for col in cols:
    finance = finance.astype({col : "float"})

finance = finance.astype({"price" : "int"})
finance.dtypes
{% endhighlight %}
<img width="274" alt="스크린샷 2020-10-25 오후 7 17 20" src="https://user-images.githubusercontent.com/70478154/97104373-c05cb880-16f6-11eb-9187-129e1590bbb4.png">
I computed the return of the close 6 months later relative to the next day's close and converted the target to `1` or `0` according to the rule.
{% highlight python %}
# build the target (6 months ahead)
finance_180 = finance.copy()
finance_180["lag_1"] = finance_180["price"].shift()
finance_180["lag_181"] = finance_180["lag_1"].shift(180)
finance_180["price_pred(%)"] = round((finance_180["lag_181"] - finance_180["lag_1"]) / finance_180["lag_1"] * 100, 2)

def rate_180(x):
    if x > 0:
        return 1
    elif x <= 0:
        return 0

finance_180["predict"] = finance_180["price_pred(%)"].apply(rate_180)
finance_180 = finance_180.dropna(axis = 0).reset_index(drop = True)
finance_180 = finance_180.astype({"predict" : "int"})
finance_180 = finance_180.drop(["price", "lag_1", "lag_181", "price_pred(%)"], axis = 1)
finance_180
{% endhighlight %}
<img width="986" alt="스크린샷 2020-11-02 오후 4 26 06" src="https://user-images.githubusercontent.com/70478154/97840900-34ffaa80-1d28-11eb-8cd4-2a7b134c80d5.png">
#### Predicting the price direction 6 months ahead with Gradient Boosting
After splitting the train and test sets, I checked the baseline. `1` had the highest frequency, so the baseline was set at `59.7%`. I then separated the features and the target.
{% highlight python %}
# split train/test sets and drop the date column (6-month target)
train_180 = finance_180[finance_180["date"] < "2017.11.01"].drop("date", axis = 1)
test_180 = finance_180[finance_180["date"] >= "2017.11.01"].drop("date", axis = 1)

train_180.shape, test_180.shape
{% endhighlight %}
<img width="200" alt="스크린샷 2020-11-02 오후 4 29 12" src="https://user-images.githubusercontent.com/70478154/97841148-9758ab00-1d28-11eb-8b11-25d7652efd51.png">
{% highlight python %}
# baseline
train_180["predict"].value_counts(normalize = True)
{% endhighlight %}
<img width="256" alt="스크린샷 2020-11-02 오후 4 30 16" src="https://user-images.githubusercontent.com/70478154/97841224-bbb48780-1d28-11eb-94f7-ef0acfa70e8a.png">
{% highlight ruby %}
# Split features and target
target_180 = "predict"
features_180 = train_180.drop(columns = [target_180]).columns
X_train_180 = train_180[features_180]
y_train_180 = train_180[target_180]
X_test_180 = test_180[features_180]
y_test_180 = test_180[target_180]
{% endhighlight %}
To find the optimal hyperparameters of the XGBClassifier model, I applied RandomizedSearchCV. After obtaining the best hyperparameters and the mean cross-validation accuracy, I refit a model with those hyperparameters on the train set and computed the accuracy on the train and test sets.
{% highlight ruby %}
# XGBClassifier with optimized hyperparameters
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
standard = StandardScaler()
X_train_180_s = standard.fit_transform(X_train_180)
model = XGBClassifier(random_state = 12)
params = {
"max_depth" : [5, 10, 15, 20],
"min_child_weight" : [1, 5, 10, 15, 20],
"learning_rate" : [0.01, 0.05, 0.1, 0.2, 0.3],
"subsample" : [0.1, 0.2, 0.3, 0.4, 0.5],
"n_estimators" : randint(100, 1000),
"gamma" : [0, 0.25, 0.5, 0.7, 1.0]
}
clf = RandomizedSearchCV(
model,
params,
n_iter = 100,
cv = 15,
n_jobs = -1,
scoring = "accuracy",
verbose = 1,
random_state = 12
)
clf.fit(X_train_180_s, y_train_180);
{% endhighlight %}
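The same RandomizedSearchCV pattern can be sketched in a self-contained way (synthetic data, a tiny search space, and a RandomForest stand-in so the sketch runs without the xgboost package — these are not the post's settings):

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic classification data as a stand-in for the finance features
X, y = make_classification(n_samples=200, random_state=12)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=12),
    {"n_estimators": randint(10, 50), "max_depth": [2, 3, None]},
    n_iter=5, cv=3, scoring="accuracy", random_state=12,
)
search.fit(X, y)

# best_params_ / best_score_ are what the post prints in the next block
print(search.best_params_, round(search.best_score_, 3))
```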
{% highlight ruby %}
# Best hyperparameters / CV score
print("Best hyperparameters: ", clf.best_params_, "\n")
print("CV accuracy score: ", clf.best_score_)
{% endhighlight %}
<img width="980" alt="스크린샷 2020-11-02 오후 4 34 12" src="https://user-images.githubusercontent.com/70478154/97841503-4ac19f80-1d29-11eb-8afe-fa47ced13e12.png">
{% highlight ruby %}
# Train/test accuracy of the model with the best hyperparameters
model_GB = XGBClassifier(
n_estimators = 323,
max_depth = 10,
min_child_weight = 20,
subsample = 0.5,
learning_rate = 0.01,
gamma = 1.0,
random_state = 12,
n_jobs = -1,
)
X_test_180_s = standard.transform(X_test_180)  # transform only: reuse the scaler fitted on the train set
model_GB.fit(X_train_180_s, y_train_180)
print("Train set accuracy score: ", model_GB.score(X_train_180_s, y_train_180))
print("Test set accuracy score: ", model_GB.score(X_test_180_s, y_test_180))
{% endhighlight %}
<img width="400" alt="스크린샷 2020-11-02 오후 4 37 23" src="https://user-images.githubusercontent.com/70478154/97841710-be63ac80-1d29-11eb-84b3-9155f088a79e.png">
According to the results, although an overfitting problem exists, the test set shows a high accuracy of `77%`.
Next, I computed the confusion matrix and the ROC-AUC score.
{% highlight ruby %}
# confusion matrix
from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
pcm = plot_confusion_matrix(model_GB, X_test_180_s, y_test_180,
cmap = plt.cm.Blues,
ax = ax);
plt.title(f"Confusion matrix, n = {len(y_test_180)}", fontsize = 15)
plt.show()
{% endhighlight %}
<img width="320" alt="스크린샷 2020-11-02 오후 4 48 25" src="https://user-images.githubusercontent.com/70478154/97842642-48f8db80-1d2b-11eb-8128-b488f509c694.png">
{% highlight ruby %}
# ROC-AUC score
from sklearn.metrics import roc_auc_score
GB_pred_proba = model_GB.predict_proba(X_test_180_s)[:,1]
print("ROC-AUC score: ", roc_auc_score(y_test_180, GB_pred_proba))
{% endhighlight %}
<img width="306" alt="스크린샷 2020-11-02 오후 4 50 00" src="https://user-images.githubusercontent.com/70478154/97842774-82314b80-1d2b-11eb-8789-4f01fdb1f5c3.png"><BR/><BR/>
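ROC-AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one; a tiny illustrative example with hypothetical scores:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # hypothetical predicted probabilities
auc = roc_auc_score(y_true, y_score)
print(auc)  # 0.75: 3 of the 4 positive/negative pairs are ranked correctly
```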
#### Predicting the stock price direction 6 months ahead with RandomForest
This time, to find the optimal hyperparameters of the RandomForest model, I applied RandomizedSearchCV. After obtaining the best hyperparameters and the mean cross-validation accuracy, I refit a model with those hyperparameters on the train set and computed the accuracy on the train and test sets.
{% highlight ruby %}
# RandomForestClassifier with optimized hyperparameters
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
pipe_180 = make_pipeline(
StandardScaler(),
RandomForestClassifier(random_state = 12)
)
dists = {
"randomforestclassifier__n_estimators" : randint(50, 1000),
"randomforestclassifier__max_depth" : [5, 10, 15, 20, 25, None],
"randomforestclassifier__max_leaf_nodes" : [10, 20, 30, 40],
"randomforestclassifier__max_features" : randint(1, 10),
"randomforestclassifier__min_samples_leaf" : randint(1, 10)
}
clf_180 = RandomizedSearchCV(
pipe_180,
param_distributions = dists,
n_iter = 100,
cv = 15,
scoring = "accuracy",
verbose = 1,
n_jobs = -1,
random_state = 12
)
clf_180.fit(X_train_180, y_train_180);
{% endhighlight %}
{% highlight ruby %}
# Best hyperparameters / CV score
print("Best hyperparameters: ", clf_180.best_params_, "\n")
print("CV accuracy score: ", clf_180.best_score_)
{% endhighlight %}
<img width="977" alt="스크린샷 2020-11-02 오후 5 01 48" src="https://user-images.githubusercontent.com/70478154/97843729-25cf2b80-1d2d-11eb-8481-c103a7a24d04.png">
{% highlight ruby %}
# Train/test accuracy of the model with the best hyperparameters
fi_pipe_180 = make_pipeline(
StandardScaler(),
RandomForestClassifier(n_estimators = 695, max_depth = 25, max_features = 1, max_leaf_nodes = 10,
min_samples_leaf = 4, n_jobs = -1, random_state = 12)
)
fi_pipe_180.fit(X_train_180, y_train_180)
print("Train set accuracy score: ", fi_pipe_180.score(X_train_180, y_train_180))
print("Test set accuracy score: ", fi_pipe_180.score(X_test_180, y_test_180))
{% endhighlight %}
<img width="400" alt="스크린샷 2020-11-02 오후 5 02 47" src="https://user-images.githubusercontent.com/70478154/97843830-48f9db00-1d2d-11eb-8b92-aa32313071a3.png">
According to the results, the overfitting problem decreased slightly compared with XGBoost, and the test-set accuracy increased slightly to `78.9%`.
Likewise, I computed the confusion matrix and the ROC-AUC score.
{% highlight ruby %}
# confusion matrix
from sklearn.metrics import plot_confusion_matrix
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
pcm = plot_confusion_matrix(fi_pipe_180, X_test_180, y_test_180,
cmap = plt.cm.Blues,
ax = ax);
plt.title(f"Confusion matrix, n = {len(y_test_180)}", fontsize = 15)
plt.show()
{% endhighlight %}
<img width="322" alt="스크린샷 2020-11-02 오후 5 06 19" src="https://user-images.githubusercontent.com/70478154/97844105-c58cb980-1d2d-11eb-86d6-d0e4a18a3309.png">
{% highlight ruby %}
# ROC-AUC score
from sklearn.metrics import roc_auc_score
RF_pred_proba = fi_pipe_180.predict_proba(X_test_180)[:,1]
print("ROC-AUC score: ", roc_auc_score(y_test_180, RF_pred_proba))
{% endhighlight %}
<img width="300" alt="스크린샷 2020-11-02 오후 5 07 22" src="https://user-images.githubusercontent.com/70478154/97844190-ee14b380-1d2d-11eb-8d95-8616a9558301.png">
Comparing the confusion matrices of the XGBoost and RandomForest models, the RandomForest model shows a relatively more ideal prediction composition, and its ROC-AUC score was also slightly higher.<BR/><BR/><BR/><BR/>
## Conclusion
Putting the results together, both the `XGBoost` and `RandomForest` models show similar prediction accuracy, with no large difference in the degree of overfitting or in ROC-AUC score. However, the RandomForest model was very slightly ahead in overall model performance.
While carrying out the analysis, I found that the XGBoost model's performance reacts very sensitively to changes in its hyperparameters. Therefore, tuning the hyperparameters at a finer granularity could be expected to improve performance.<BR/><BR/><BR/><BR/>
## Limitations
* A sufficient amount of data is needed for the results to generalize.
* Features that can further improve model performance need to be found and applied.
* The overfitting problem needs to be addressed.
* Both models need improvement in the precision for `class 1`.
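One way to attack the overfitting limitation — early stopping on a held-out slice of the training data — can be sketched with scikit-learn's gradient boosting (synthetic data and illustrative settings, not tuned for this post's dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; shuffle=False mimics a time-ordered split
X, y = make_classification(n_samples=500, random_state=12)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

# Stop adding trees once a 10% held-out validation slice stops improving for 10 rounds
model = GradientBoostingClassifier(
    n_estimators=1000, validation_fraction=0.1, n_iter_no_change=10, random_state=12
)
model.fit(X_tr, y_tr)
print("trees actually fitted:", model.n_estimators_)
print("test accuracy:", model.score(X_te, y_te))
```

The fitted tree count is typically far below the 1000 cap, which caps model complexity automatically instead of relying only on the search grid.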
| 38.38512 | 283 | 0.676491 | kor_Hang | 0.967616 |
761239cdab2a1342f977ace568c02cc1f835a77d | 3,269 | md | Markdown | _posts/2016-11-22-Mill-Crest-Vintage-1940-Liquid-Silk-Leaf-Vintage-Wedding-Gown.md | promsome/promsome.github.io | 69236f5c8f4d9591eec55dafa47ce21914b51851 | [
"MIT"
] | null | null | null | _posts/2016-11-22-Mill-Crest-Vintage-1940-Liquid-Silk-Leaf-Vintage-Wedding-Gown.md | promsome/promsome.github.io | 69236f5c8f4d9591eec55dafa47ce21914b51851 | [
"MIT"
] | null | null | null | _posts/2016-11-22-Mill-Crest-Vintage-1940-Liquid-Silk-Leaf-Vintage-Wedding-Gown.md | promsome/promsome.github.io | 69236f5c8f4d9591eec55dafa47ce21914b51851 | [
"MIT"
] | null | null | null | ---
layout: post
date: 2016-11-22
title: "Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown"
category: Mill Crest Vintage
tags: [Mill Crest Vintage]
---
### Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown
Just **$299.99**
###
<table><tr><td>BRANDS</td><td>Mill Crest Vintage</td></tr></table>
<a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220425/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220426/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220427/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220428/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220429/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220430/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220431/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html"><img src="//img.readybrides.com/220424/mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.jpg" alt="Mill Crest Vintage 1940 Liquid Silk Leaf Vintage Wedding Gown" style="width:100%;" /></a>
Buy it: [https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html](https://www.readybrides.com/en/mill-crest-vintage/84722-mill-crest-vintage-1940-liquid-silk-leaf-vintage-wedding-gown.html)
| 148.590909 | 347 | 0.768431 | yue_Hant | 0.236347 |
76126b6f089308966562e73e1db1e5691b81c7e5 | 547 | md | Markdown | add/metadata/System.Windows.Forms.Design/ShortcutKeysEditor.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-16T22:24:36.000Z | 2020-06-16T22:24:36.000Z | add/metadata/System.Windows.Forms.Design/ShortcutKeysEditor.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/System.Windows.Forms.Design/ShortcutKeysEditor.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-02T13:31:28.000Z | 2020-05-02T13:31:28.000Z | ---
uid: System.Windows.Forms.Design.ShortcutKeysEditor
---
---
uid: System.Windows.Forms.Design.ShortcutKeysEditor.EditValue
---
---
uid: System.Windows.Forms.Design.ShortcutKeysEditor.GetEditStyle
---
---
uid: System.Windows.Forms.Design.ShortcutKeysEditor.#ctor
---
---
uid: System.Windows.Forms.Design.ShortcutKeysEditor.EditValue(System.ComponentModel.ITypeDescriptorContext,System.IServiceProvider,System.Object)
---
---
uid: System.Windows.Forms.Design.ShortcutKeysEditor.GetEditStyle(System.ComponentModel.ITypeDescriptorContext)
---
| 22.791667 | 145 | 0.793419 | yue_Hant | 0.81996 |
76129cfef8509bb90f8f2306720ef4b707418d3b | 1,576 | md | Markdown | test/expected/labels_null/markdown.default.md | kynikos/report-todo | 2810a1c84f4ae81c0eccf8480d180ea4c60e4f36 | [
"MIT"
] | 2 | 2020-05-29T21:17:49.000Z | 2021-08-12T15:03:07.000Z | test/expected/labels_null/markdown.default.md | kynikos/report-todo | 2810a1c84f4ae81c0eccf8480d180ea4c60e4f36 | [
"MIT"
] | 4 | 2021-03-10T10:23:45.000Z | 2021-09-21T07:58:31.000Z | test/expected/labels_null/markdown.default.md | kynikos/report-todo | 2810a1c84f4ae81c0eccf8480d180ea4c60e4f36 | [
"MIT"
] | null | null | null | # Table of contents
1. [[NO LABEL]](#1-0)
2. [label1](#1-1)
3. [label2](#1-2)
4. [label3](#1-3)
# [NO LABEL]<a id="1-0"></a>
| file path | line # | tag | labels | comment
|:----------|:-------|:----|:-------|:-------
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L2) | 2 | TODO | | included
# label1<a id="1-1"></a>
| file path | line # | tag | labels | comment
|:----------|:-------|:----|:-------|:-------
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L3) | 3 | TODO | label1 | included
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L6) | 6 | TODO | label1,label2 | included
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L7) | 7 | TODO | label1,label3 | included
# label2<a id="1-2"></a>
| file path | line # | tag | labels | comment
|:----------|:-------|:----|:-------|:-------
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L4) | 4 | TODO | label2 | included
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L6) | 6 | TODO | label1,label2 | included
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L8) | 8 | TODO | label3,label2 | included
# label3<a id="1-3"></a>
| file path | line # | tag | labels | comment
|:----------|:-------|:----|:-------|:-------
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L7) | 7 | TODO | label1,label3 | included
| [test/fixtures/labels_null/main.rs](test/fixtures/labels_null/main.rs#L8) | 8 | TODO | label3,label2 | included
| 43.777778 | 113 | 0.60533 | eng_Latn | 0.34085 |
7612afe7878d32ce4254c0405ea346974af5f1f6 | 2,954 | md | Markdown | articles/environment/lite-apply-demo-setup-config-data.md | MicrosoftDocs/dynamics-365-project-operations-pr.ru-RU | fb3d9ddd366b8074a97b1265e1f8d374f8d88e32 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-18T17:16:05.000Z | 2021-04-20T21:13:46.000Z | articles/environment/lite-apply-demo-setup-config-data.md | MicrosoftDocs/dynamics-365-project-operations-pr.ru-RU | fb3d9ddd366b8074a97b1265e1f8d374f8d88e32 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/environment/lite-apply-demo-setup-config-data.md | MicrosoftDocs/dynamics-365-project-operations-pr.ru-RU | fb3d9ddd366b8074a97b1265e1f8d374f8d88e32 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Применение демонстрационных данных настройки и конфигурации
description: Эта тема предоставляет информацию о том, как применить демонстрационные данные настройки и конфигурации для Project Operations.
author: sigitac
manager: Annbe
ms.date: 10/01/2020
ms.topic: article
ms.service: dynamics-365-customerservice
ms.reviewer: kfend
ms.author: sigitac
ms.openlocfilehash: 42e02f393e89d20b2a462645f519a3792bee8f2f
ms.sourcegitcommit: b9d8bf00239815f31686e9b28998ac684fd2fca4
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 10/02/2020
ms.locfileid: "3949037"
---
# <a name="apply-demo-setup-and-configuration-data-for-project-operations-lite-deployment---deal-to-proforma-invoicing"></a>Применение демонстрационных данных настройки и конфигурации для развертывания Project Operations Lite — от сделки до счетов-проформ
_**Облегченное развертывание — от сделки до счетов-проформ_
1. Загрузите [пакет основных данных](https://download.microsoft.com/download/3/4/1/341bf279-a64f-4baa-af31-ce624859b518/ProjOpsSampleSetupData%20-%20CE%20only%20CMT.zip).
2. Перейдите в папку *ProjOpsDemoDataSetupAndMaster — интегрированный CMT* и запустите исполняемый файл, *DataMigrationUtility*.
3. На странице 1 мастера настройки Common Data Service (CMT) выберите **Импортировать данные**, затем выберите **Продолжить**.

4. На странице 2 мастера CMT выберите **Office 365** как **Тип развертывания**.
5. Установите флажки **Показать список доступных организаций** и **Показать расширенный**.
6. Выберите регион своего клиента, введите свои учетные данные, затем выберите **Войти**.

7. На странице 3 из списка организаций в клиенте выберите организацию, в которую вы хотите импортировать демонстрационные данные, затем выберите **Войти**.
8. На странице 4 выберите ZIP-файл *MasterAndSetupData* из распакованной папки, *ProjOpsDemoDataSetupAndMaster — интегрированный CMT*.


9. После выбора ZIP-файла выберите **Импортировать данные**.

10. Импорт будет выполняться примерно от двух до десяти минут в зависимости от скорости вашей сети. После его завершения выйдите из мастера CMT.
11. Проверьте свою организацию на наличие данных по следующим 20 сущностям:
- Валюта
- Подразделение
- Контактные сведения
- Налоговая группа
- Группа клиентов
- Единица
- Группа единиц измерения
- Прайс-лист
- Прайс-лист параметров проекта
- Периодичность выставления счетов
- Сведения периодичности выставления счета
- Категория резервируемого ресурса
- Категория проводки
- Категория расходов
- Цена роли
- Цена категории проводки
- Характеристика
- Резервируемый ресурс
- Назначение категории резервируемого ресурса
- Характеристика резервируемого ресурса

| 42.2 | 255 | 0.80738 | rus_Cyrl | 0.889258 |
7612be91849ba6072e8bb3aceebf55eb903f6493 | 3,723 | md | Markdown | documents/amazon-lumberyard-user-guide/doc_source/cloud-canvas-ui-select-deployment.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | 5 | 2021-08-13T09:20:58.000Z | 2021-12-16T22:13:54.000Z | documents/amazon-lumberyard-user-guide/doc_source/cloud-canvas-ui-select-deployment.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | documents/amazon-lumberyard-user-guide/doc_source/cloud-canvas-ui-select-deployment.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | # Making a Cloud Canvas Deployment Active<a name="cloud-canvas-ui-select-deployment"></a>
You can select the deployment that you want Lumberyard Editor to consider active\. The active deployment is the deployment that you work with in Lumberyard Editor\. Lumberyard Editor uses the active deployment's resources when you launch your game\. When you select the [Working with Resource Groups](cloud-canvas-ui-rm-resource-groups.md) node or an [Managing Individual Resource Groups](cloud-canvas-ui-rm-resource-groups.md#cloud-canvas-ui-rm-individual-resource-group) node in the **Cloud Canvas ****Resource Manager** navigation pane, the status information that appears corresponds to the active deployment\.
You can also select the deployment that you want to be active by default for all team members\.
**Note**
To select a deployment, you must have initialized **Cloud Canvas ****Resource Manager** to work with your AWS account and created a deployment\. For more information, see [Initializing Cloud Canvas Resource Manager](cloud-canvas-ui-rm-initialize.md) and [Create Deployment ](cloud-canvas-ui-rm-deployments.md#cloud-canvas-ui-rm-create-deployment)\.
## Making a Deployment Active<a name="cloud-canvas-ui-select-deployment-active"></a>
You have several ways to make a deployment active in **Cloud Canvas Resource Manager**\.
**To make a deployment active**
+ To make a deployment active, do one of the following:
+ In Lumberyard Editor, click **AWS**, **Cloud Canvas**, **Select a deployment**\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/lumberyard/latest/userguide/images/cloud_canvas/cloud-canvas-ui-select-deployment.png)
+ In the **Cloud Canvas Resource Manager** toolbar, click the name of the current deployment, or click **\(none\)** if none is configured:
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/lumberyard/latest/userguide/images/cloud_canvas/cloud-canvas-ui-rm-current-deployment-none.png)
When prompted, choose the deployment that you want to make active:
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/lumberyard/latest/userguide/images/cloud_canvas/cloud-canvas-ui-rm-select-deployment-dev.png)
One or more of the deployments may be marked **protected**\. For more information, see [Using Protected Deployments ](cloud-canvas-protected-deployments.md)\.
+ In the **Cloud Canvas Resource Manager** navigation pane, right\-click the deployment that you want to make active, and then click **Make active deployment**:
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/lumberyard/latest/userguide/images/cloud_canvas/cloud-canvas-ui-select-deployment-rm-active.png)
## Making a Deployment the Default<a name="cloud-canvas-ui-select-deployment-default"></a>
You can use the **Cloud Canvas Resource Manager** to make a deployment the default\.
**To make a deployment active by default for all team members**
1. In Lumberyard Editor, click **AWS**, **Cloud Canvas**, **Cloud Canvas Resource Manager**\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/lumberyard/latest/userguide/images/cloud_canvas/cloud-canvas-ui-rm-open.png)
1. In the **Cloud Canvas configuration** navigation tree, expand **Administration \(advanced\)**, and then expand **Deployments**\.
1. Right\-click the deployment that you want to make the default, and then click **Make default deployment**:
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/lumberyard/latest/userguide/images/cloud_canvas/cloud-canvas-ui-select-deployment-rm-default.png)
**To use the command line to make a deployment the default**
+ To use the command line to make a deployment the default, enter the following command:
```
lmbr_aws deployment default --set <deployment name>
``` | 79.212766 | 614 | 0.759871 | eng_Latn | 0.918529 |
7612ef565b8e12194d47b8f8019937585cb2101d | 2,728 | md | Markdown | _pages/cv.md | SnehaShukla937/SnehaShukla937.github.io | 068fca10ce504449c9eebd8f5f03dde924425dfe | [
"MIT"
] | null | null | null | _pages/cv.md | SnehaShukla937/SnehaShukla937.github.io | 068fca10ce504449c9eebd8f5f03dde924425dfe | [
"MIT"
] | null | null | null | _pages/cv.md | SnehaShukla937/SnehaShukla937.github.io | 068fca10ce504449c9eebd8f5f03dde924425dfe | [
"MIT"
] | null | null | null | ---
layout: archive
title: "CV"
permalink: /cv/
author_profile: true
redirect_from:
- /resume
---
{% include base_path %}
<!-- <embed src="../files/Fellowship_CV_Anup_Kumar_Gupta.pdf" type="application/pdf" /> -->
Education
=========
* Ph.D. in Computer Science and Engineering, Indian Institute of Technology Indore (Aug. 2021 - present)
* M.Tech. in Information Technology, National Institute of Technology Raipur (July 2016- July 2018)
* B.E. in Electronics and Telecommunication, Chhattisgarh Swami Vivekananda Technical University, Bhilai (Aug. 2011 - June 2015)
Projects
=========
* Speech Recognition System
* To tackle the problem of pronunciation in the English language, we have designed a Convolution Neural Network (CNN) & Long Short Term Memory (LSTM) based speech recognition system that can recognise recorded mispronounced words.
  * Stages Involved: Data preprocessing, MFCC (Mel Frequency Cepstral Coefficients) feature extraction, model training and testing.
* Electroencephalogram (EEG) based Brain-Computer Interface (BCI) systems using machine learning techniques
  * Enhancing the performance of EEG based BCI systems for the motor imagery task by classifying the EEG data and analysing their performance using various machine learning techniques such as bagging, boosting, k-nearest neighbour (KNN), and naive Bayes.
  * Stages involved: Data preprocessing, short-time Fourier transform and wavelet feature extraction, rank-based feature selection, classification.
* Face recognition based real-time attendance system
* A computer vision system that can recognise real time faces and record the information to mark attendance.
* Stages involved: Image loading, finding a face in the image, compare training and testing face, record information.
Publications
======
<ul>{% for post in site.publications %}
{% include archive-single-cv.html %}
{% endfor %}</ul>
Work experience
===============
* Indian Institute of Technology Indore (Indore, Madhya Pradesh, Jan. 2021 - Present)
* Project Fellow (JRF)
* Working in the project entitled **Heart rate monitoring from non-contact face videos using deep learning** under the supervision of Dr. Puneet Gupta.
* National Informatics Centre (NIC) (Raipur, Chhattisgarh, India July 2018 - Dec. 2019)
* Involved in the research and development group of a school education project, where I have designed a speech recognition system utilising deep learning techniques. The vision of this project is to tackle the problem of pronunciation in the English language.
* Worked on data analytics and report generation on state education data using SQL queries, SQLite3, pandas, reportlab, matplotlib and several other python libraries.
| 52.461538 | 261 | 0.768328 | eng_Latn | 0.977262 |
7613a04542d509594e5492d6536616739019443c | 1,770 | md | Markdown | docs/cpp/logical-negation-operator-exclpt.md | yoichinak/cpp-docs.ja-jp | 50048c3d1101537497403efb4e7b550108f3a8f0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cpp/logical-negation-operator-exclpt.md | yoichinak/cpp-docs.ja-jp | 50048c3d1101537497403efb4e7b550108f3a8f0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-01T04:17:07.000Z | 2021-04-01T04:17:07.000Z | docs/cpp/logical-negation-operator-exclpt.md | yoichinak/cpp-docs.ja-jp | 50048c3d1101537497403efb4e7b550108f3a8f0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: '論理否定演算子: !'
description: C++ 標準言語の論理否定演算子の構文とを使用します。
ms.date: 07/23/2020
f1_keywords:
- '!'
helpviewer_keywords:
- '! operator'
- NOT operator
- logical negation
ms.assetid: 650add9f-a7bc-426c-b01d-5fc6a81c8b62
ms.openlocfilehash: fdd2e7a71b791375f898372d058a5eeb2afc59f1
ms.sourcegitcommit: 1f009ab0f2cc4a177f2d1353d5a38f164612bdb1
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/27/2020
ms.locfileid: "87223682"
---
# <a name="logical-negation-operator-"></a>論理否定演算子: !
## <a name="syntax"></a>構文
> **`!`***キャスト式*
## <a name="remarks"></a>解説
論理否定演算子 () は、 **`!`** オペランドの意味を反転させます。 オペランドは、数値型またはポインター型 (または、数値型またはポインター型に評価される式) である必要があります。 オペランドは型に暗黙的に変換され **`bool`** ます。 変換されたオペランドがの場合、結果はになります。変換されたオペランドがの場合、結果はになり **`true`** **`false`** **`false`** **`true`** ます。 結果は型 **`bool`** です。
式の場合、 `e` 単項式 `!e` は、 `(e == 0)` オーバーロードされた演算子が関係する場合を除き、式に相当します。
## <a name="operator-keyword-for-"></a>! の演算子キーワード
C++ **`not`** では、の代替スペルとしてを指定し **`!`** ます。 C では、代替のスペルは、ヘッダーにマクロとして指定され \<iso646.h> ます。 C++ では、代替のスペルはキーワードです。\<iso646.h>または C++ と同等のの使用 \<ciso646> は非推奨とされます。 Microsoft C++ では、 [`/permissive-`](../build/reference/permissive-standards-conformance.md) またはコンパイラオプションを使用して、 [`/Za`](../build/reference/za-ze-disable-language-extensions.md) 別のスペルチェックを有効にする必要があります。
## <a name="example"></a>例
```cpp
// expre_Logical_NOT_Operator.cpp
// compile with: /EHsc
#include <iostream>
using namespace std;
int main() {
int i = 0;
if (!i)
cout << "i is zero" << endl;
}
```
## <a name="see-also"></a>関連項目
[単項演算子を含む式](../cpp/expressions-with-unary-operators.md)<br/>
[C++ の組み込み演算子、優先順位、および結合規則](../cpp/cpp-built-in-operators-precedence-and-associativity.md)<br/>
[単項算術演算子](../c-language/unary-arithmetic-operators.md)<br/>
| 32.181818 | 358 | 0.69548 | yue_Hant | 0.164629 |
761423fdbe0b18df20d875ba6d2384690754cdd9 | 1,387 | md | Markdown | 2020/10/03/2020-10-03 22:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | 3 | 2020-07-14T14:54:15.000Z | 2020-08-21T06:48:24.000Z | 2020/10/03/2020-10-03 22:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020/10/03/2020-10-03 22:35.md | zhzhzhy/WeiBoHot_history | 32ce4800e63f26384abb17d43e308452c537c902 | [
"MIT"
] | null | null | null | 2020年10月03日22时数据
Status: 200
1.李希侃鼻子断了
微博热度:4017574
2.特朗普接受抗体鸡尾酒疗法
微博热度:2042397
3.快乐大本营
微博热度:1977101
4.唐一菲怼演员请就位剪辑
微博热度:1709822
5.朱一龙比了个反向心
微博热度:1673680
6.张月
微博热度:1328196
7.小花生离谱
微博热度:1136756
8.民警巡逻时突然被一只小手牵住
微博热度:928163
9.0.5元可买到匹配身份的人脸数据
微博热度:880036
10.这就是街舞总决赛
微博热度:859762
11.我已经不是以前的刘洋了
微博热度:808569
12.刘雨昕卡点引发舒适
微博热度:783860
13.我和我的家乡票房破8亿
微博热度:745725
14.26岁女孩峨眉山失联
微博热度:715918
15.陈一鸣卖车
微博热度:694005
16.易烊千玺说小朝太狠了
微博热度:691387
17.敦煌壁画可以有多精美
微博热度:689382
18.英特尔将协助美国军方生产先进芯片
微博热度:683573
19.中国首座海上球场
微博热度:679550
20.节后月饼去哪了
微博热度:679051
21.邓超兑现和援鄂医生承诺
微博热度:668226
22.犬夜叉
微博热度:614916
23.有些快递走着走着就没了
微博热度:533530
24.小学生捡起车上掉落国旗仔细贴好
微博热度:487975
25.莉莉娅
微博热度:447363
26.陈宥维发长文回应
微博热度:443038
27.儿子用擀面杖陪八旬母亲打球
微博热度:438254
28.NASA发布深空超新星影像
微博热度:435588
29.特朗普竞选经理新冠检测阳性
微博热度:429784
30.头发有点儿杵脖子
微博热度:425986
31.陆朝阳孤独症
微博热度:417266
32.在一起
微博热度:388897
33.如何回怼遭遇家暴后劝你大度的人
微博热度:380843
34.特朗普发视频说状态良好
微博热度:364495
35.彭昱畅我还可以更好一点
微博热度:363137
36.阿水胖了
微博热度:335418
37.长沙橘子洲景区已近最大承载量
微博热度:329867
38.金曲奖
微博热度:320669
39.LGD输了
微博热度:279200
40.中牌树莓红茶
微博热度:239771
41.马戏团黑熊失控攻击驯兽师
微博热度:224909
42.五条人好像哲学家
微博热度:210023
43.吴刚发长文谈夺冠幕后
微博热度:209353
44.乐队的夏天
微博热度:205123
45.不是余霜采访欧成
微博热度:202850
46.彭坦春晓好甜
微博热度:202136
47.四胞胎小狮子国国庆庆中中秋秋
微博热度:201684
48.网友用玉米抖字祝福祖国
微博热度:176922
49.超10万亿元资产因脱欧撤离英国
微博热度:176381
50.王诗龄挂李湘电话
微博热度:174068
| 6.79902 | 19 | 0.77938 | yue_Hant | 0.342698 |
76143a13fff95d3dba2970b0d9ffad31dd0ef539 | 145 | md | Markdown | src/main/paradox/index.md | commercetools/scraml | 88db80678f5ef982eb7369e32a6ea9af09b65321 | [
"Apache-2.0"
] | 5 | 2021-08-11T13:41:19.000Z | 2022-01-12T22:34:12.000Z | src/main/paradox/index.md | commercetools/scraml | 88db80678f5ef982eb7369e32a6ea9af09b65321 | [
"Apache-2.0"
] | 10 | 2021-08-02T15:33:25.000Z | 2022-03-28T19:12:37.000Z | src/main/paradox/index.md | commercetools/scraml | 88db80678f5ef982eb7369e32a6ea9af09b65321 | [
"Apache-2.0"
] | 1 | 2021-08-16T08:06:22.000Z | 2021-08-16T08:06:22.000Z | @@include[README](../../../README.md)
@@@ index
* [Setup](setup.md)
* [Create an API](api.md)
* [Run](run.md)
* [Library Support](libs.md)
@@@ | 14.5 | 37 | 0.565517 | yue_Hant | 0.821734 |
7614450043123edb68858dd58c92ed667fef1350 | 566 | markdown | Markdown | _posts/2017-10-23-project-8.markdown | datbos/datbos.github.io | b7f48743522b33c2ac95179ec4b4a540121d84a9 | [
"Apache-2.0"
] | null | null | null | _posts/2017-10-23-project-8.markdown | datbos/datbos.github.io | b7f48743522b33c2ac95179ec4b4a540121d84a9 | [
"Apache-2.0"
] | null | null | null | _posts/2017-10-23-project-8.markdown | datbos/datbos.github.io | b7f48743522b33c2ac95179ec4b4a540121d84a9 | [
"Apache-2.0"
] | null | null | null | ---
layout: default
modal-id: 8
date: 2017-10-23
img: BPL autocor.png
img1:
alt: image-alt
project-date: Git Energy Consumption
project-link: "https://github.com/marcmeijer/BPL/blob/master/BPL%20City%20Hall%20Electricity%20Analysis.ipynb"
client: Start Bootstrap
client-link:
category: Time Series Analysis
service-link:
description: The energy consumption of the Boston Public Library (BPL), sampled every 5 minutes, is analyzed with time series analysis (TSA). The final result is a prediction of future energy consumption from past consumption trends and patterns.
--- | 37.733333 | 241 | 0.798587 | eng_Latn | 0.599331 |
7614793584ea6d8a5b2dac2f6d340fdcd87d13ec | 1,043 | md | Markdown | includes/migration-guide/runtime/wpf/wpf-textbox-defaults-undo-limit-100.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/migration-guide/runtime/wpf/wpf-textbox-defaults-undo-limit-100.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/migration-guide/runtime/wpf/wpf-textbox-defaults-undo-limit-100.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.openlocfilehash: 9d960774161fc44810f90ca30f56eb98f98de3ff
ms.sourcegitcommit: cbacb5d2cebbf044547f6af6e74a9de866800985
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/05/2020
ms.locfileid: "89496109"
---
### <a name="wpf-textbox-defaults-to-undo-limit-of-100"></a>O padrão da Caixa de Texto do WPF é o limite de 100 na ação desfazer
#### <a name="details"></a>Detalhes
No .NET Framework 4.5, o limite padrão de desfazer para uma caixa de texto do WPF é 100 (em vez de ser ilimitado como no .NET Framework 4.0)
#### <a name="suggestion"></a>Sugestão
Se um limite de 100 da ação desfazer for muito baixo, o limite poderá ser definido explicitamente com <xref:System.Windows.Controls.Primitives.TextBoxBase.UndoLimit>.
| Nome | Valor |
|:--------|:------------|
| Escopo |Microsoft Edge|
|Versão|4.5|
|Tipo|Runtime|
#### <a name="affected-apis"></a>APIs afetadas
- <xref:System.Windows.Controls.TextBox?displayProperty=nameWithType>
<!--
#### Affected APIs
- `T:System.Windows.Controls.TextBox`
-->
| 28.972222 | 166 | 0.720038 | por_Latn | 0.800979 |
7615a3b8a7cd4029c680fdff584aa83281620013 | 15 | md | Markdown | _includes/01-name.md | mrzachnugent/markdown-portfolio | 110ed9c217dc372ff015259aee659fcfda4f496f | [
"MIT"
] | null | null | null | _includes/01-name.md | mrzachnugent/markdown-portfolio | 110ed9c217dc372ff015259aee659fcfda4f496f | [
"MIT"
] | 2 | 2020-05-22T00:24:02.000Z | 2020-05-23T01:31:17.000Z | _includes/01-name.md | mrzachnugent/markdown-portfolio | 110ed9c217dc372ff015259aee659fcfda4f496f | [
"MIT"
] | null | null | null | # Zach Nugent
| 7.5 | 14 | 0.666667 | pol_Latn | 0.645065 |
7615c8191595a2f2e061f0303cca377985033e9d | 1,255 | md | Markdown | docs/7_0_0/components/columntoggler.md | iBlocksLimited/primefaces | 57c263d6ca355ada5081370e919072529a6992e5 | [
"MIT"
] | null | null | null | docs/7_0_0/components/columntoggler.md | iBlocksLimited/primefaces | 57c263d6ca355ada5081370e919072529a6992e5 | [
"MIT"
] | null | null | null | docs/7_0_0/components/columntoggler.md | iBlocksLimited/primefaces | 57c263d6ca355ada5081370e919072529a6992e5 | [
"MIT"
] | null | null | null | # ColumnToggler
ColumnToggler is a helper component for datatable to toggle visibility of columns.
## Info
| Name | Value |
| - | - |
| Tag | columnToggler |
| Component Class | org.primefaces.component.columntoggler.ColumnToggler |
| Component Type | org.primefaces.component.ColumnToggler |
| Component Family | org.primefaces.component |
| Renderer Type | org.primefaces.component.ColumnTogglerRenderer |
| Renderer Class | org.primefaces.component.columntoggler.ColumnTogglerRenderer |
## Attributes
| Name | Default | Type | Description |
| --- | --- | --- | --- |
| id | null | String | Unique identifier of the component
| rendered | true | Boolean | Boolean value to specify the rendering of the component, when set to false component will not be rendered.
| binding | null | Object | An el expression that maps to a server side UIComponent instance in a backing bean
| widgetVar | null | String | Name of the client side widget.
| trigger | null | String | A search expression resolving to a component to get attached to.
| datasource | null | String | A search expression resolving to a DataTable component whose columns to be toggled.
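As a sketch (the ids, bean name, and columns here are illustrative assumptions, not taken from this page), `trigger` points at the component that opens the toggler and `datasource` at the table whose columns it toggles:

```xhtml
<h:form>
    <p:dataTable id="cars" var="car" value="#{carBean.cars}">
        <p:column headerText="Model">#{car.model}</p:column>
        <p:column headerText="Year">#{car.year}</p:column>
    </p:dataTable>

    <p:commandButton id="toggler" type="button" value="Columns" />
    <p:columnToggler datasource="cars" trigger="toggler" />
</h:form>
```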
## Getting Started with ColumnToggler
See column toggler section in datatable documentation for detailed information. | 44.821429 | 136 | 0.756972 | eng_Latn | 0.951009 |
76162aa1c7e4f8710c1d33be4dedd72776fccf7c | 17,874 | md | Markdown | articles/aks/open-service-mesh-azure-application-gateway-ingress.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:16.000Z | 2021-03-12T23:37:16.000Z | articles/aks/open-service-mesh-azure-application-gateway-ingress.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/aks/open-service-mesh-azure-application-gateway-ingress.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Uso de la entrada de Azure Application Gateway
description: Uso de la entrada de Azure Application Gateway con Open Service Mesh
services: container-service
ms.topic: article
ms.date: 8/26/2021
ms.custom: mvc, devx-track-azurecli
ms.author: pgibson
ms.openlocfilehash: 70eaa03f3a10e01e9e3f17963f355117890a510d
ms.sourcegitcommit: 106f5c9fa5c6d3498dd1cfe63181a7ed4125ae6d
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/02/2021
ms.locfileid: "131066812"
---
# <a name="deploy-an-application-managed-by-open-service-mesh-osm-using-azure-application-gateway-ingress-aks-add-on"></a>Implementación de una aplicación administrada por Open Service Mesh (OSM) mediante el complemento de AKS de entrada de Azure Application Gateway
En este tutorial va a:
> [!div class="checklist"]
>
> - Ver la configuración actual del clúster de OSM
> - Crear los espacios de nombres de OSM para administrar las aplicaciones implementadas en los espacios de nombres
> - Incorporar los espacios de nombres que se van a administrar mediante OSM
> - Implementación de la aplicación de ejemplo
> - Comprobar que la aplicación se ejecuta en el clúster de AKS
> - Crear una instancia de Azure Application Gateway para usarla como controlador de entrada de la aplicación
> - Exponer un servicio a través de la entrada de Azure Application Gateway a Internet
## <a name="before-you-begin"></a>Antes de empezar
En los pasos que se detallan en este tutorial se supone que ya ha habilitado el complemento OSM de AKS para el clúster de AKS. Si no es así, revise el artículo [Implementación del complemento OSM de AKS](./open-service-mesh-deploy-addon-az-cli.md) antes de continuar. Además, el clúster de AKS debe ser de la versión Kubernetes `1.19+` o posterior, tener habilitado RBAC de Kubernetes y haber establecido una conexión `kubectl` con el clúster (si necesita ayuda con cualquiera de estos elementos, consulte el [inicio rápido de AKS](./kubernetes-walkthrough.md)), además de haber instalado el complemento OSM de AKS.
Debe tener instalados los siguientes recursos:
- CLI de Azure, versión 2.20.0 o posterior
- Versión 0.11.1 o posterior de OSM
- Procesador JSON "jq" versión 1.6+
## <a name="view-and-verify-the-current-osm-cluster-configuration"></a>Visualización y comprobación de la configuración actual del clúster de OSM
Una vez que el complemento OSM para AKS se ha habilitado en el clúster de AKS, puede ver los parámetros de configuración actuales del recurso osm-mesh-config. Ejecute el siguiente comando para ver las propiedades:
```azurecli-interactive
kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
```
La salida muestra la configuración actual de OSM para el clúster.
```
apiVersion: config.openservicemesh.io/v1alpha1
kind: MeshConfig
metadata:
creationTimestamp: "0000-00-00A00:00:00A"
generation: 1
name: osm-mesh-config
namespace: kube-system
resourceVersion: "2494"
uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
spec:
certificate:
serviceCertValidityDuration: 24h
featureFlags:
enableEgressPolicy: true
enableMulticlusterMode: false
enableWASMStats: true
observability:
enableDebugServer: true
osmLogLevel: info
tracing:
address: jaeger.osm-system.svc.cluster.local
enable: false
endpoint: /api/v2/spans
port: 9411
sidecar:
configResyncInterval: 0s
enablePrivilegedInitContainer: false
envoyImage: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
initContainerImage: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
logLevel: error
maxDataPlaneConnections: 0
resources: {}
traffic:
enableEgress: true
enablePermissiveTrafficPolicyMode: true
inboundExternalAuthorization:
enable: false
failureModeAllow: false
statPrefix: inboundExtAuthz
timeout: 1s
useHTTPSIngress: false
```
Observe que **enablePermissiveTrafficPolicyMode** está configurado en **true**. El modo de la directiva de tráfico permisivo en OSM es un modo en el que se omite la aplicación de directivas de tráfico de [SMI](https://smi-spec.io/). En este modo, OSM detecta automáticamente los servicios que forman parte de la malla de servicio y programa reglas de la directiva de tráfico en cada sidecar del proxy de Envoy para comunicarse con estos servicios.
## <a name="create-namespaces-for-the-application"></a>Creación de espacios de nombres para la aplicación
En este tutorial, vamos a usar la aplicación bookstore de OSM que tiene los siguientes componentes:
- `bookbuyer`
- `bookthief`
- `bookstore`
- `bookwarehouse`
Cree espacios de nombres para cada uno de estos componentes de la aplicación.
```azurecli-interactive
for i in bookstore bookbuyer bookthief bookwarehouse; do kubectl create ns $i; done
```
Debería ver la siguiente salida:
```Output
namespace/bookstore created
namespace/bookbuyer created
namespace/bookthief created
namespace/bookwarehouse created
```
## <a name="onboard-the-namespaces-to-be-managed-by-osm"></a>Incorporación de los espacios de nombres que se van a administrar mediante OSM
La adición de los espacios de nombres a la malla de OSM permitirá que el controlador de OSM inserte automáticamente los contenedores del proxy de sidecar de Envoy en la aplicación. Ejecute el siguiente comando para incorporar los espacios de nombres de la aplicación de librería de OSM.
```azurecli-interactive
osm namespace add bookstore bookbuyer bookthief bookwarehouse
```
Debería ver la siguiente salida:
```Output
Namespace [bookstore] successfully added to mesh [osm]
Namespace [bookbuyer] successfully added to mesh [osm]
Namespace [bookthief] successfully added to mesh [osm]
Namespace [bookwarehouse] successfully added to mesh [osm]
```
## <a name="deploy-the-bookstore-application"></a>Implementación de la aplicación Bookstore
```azurecli-interactive
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookbuyer.yaml
```
```azurecli-interactive
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookthief.yaml
```
```azurecli-interactive
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookstore.yaml
```
```azurecli-interactive
kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.9/docs/example/manifests/apps/bookwarehouse.yaml
```
A continuación se resumen todas las salidas de la implementación.
```Output
serviceaccount/bookbuyer created
service/bookbuyer created
deployment.apps/bookbuyer created
serviceaccount/bookthief created
service/bookthief created
deployment.apps/bookthief created
service/bookstore created
serviceaccount/bookstore created
deployment.apps/bookstore created
serviceaccount/bookwarehouse created
service/bookwarehouse created
deployment.apps/bookwarehouse created
```
## <a name="update-the-bookbuyer-service"></a>Actualización del servicio `Bookbuyer`
Actualice el servicio `bookbuyer` a la configuración de puerto de entrada correcta con el siguiente manifiesto de servicio.
```azurecli-interactive
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: bookbuyer
namespace: bookbuyer
labels:
app: bookbuyer
spec:
ports:
- port: 14001
name: inbound-port
selector:
app: bookbuyer
EOF
```
## <a name="verify-the-bookstore-application"></a>Verificación de la aplicación Bookstore
Ahora, hemos implementado la aplicación bookstore de varios contenedores, pero solo se puede acceder a ella desde el clúster de AKS. Más adelante se agregará el controlador de entrada de Azure Application Gateway para exponer la aplicación fuera del clúster de AKS. Para comprobar que la aplicación se ejecuta dentro del clúster, se va a usar un reenvío de puertos para ver la interfaz de usuario del componente `bookbuyer`.
En primer lugar, se obtiene el nombre del pod de `bookbuyer`.
```azurecli-interactive
kubectl get pod -n bookbuyer
```
Debería ver una salida similar a la siguiente. El pod de `bookbuyer` tendrá un nombre único anexado.
```Output
NAME READY STATUS RESTARTS AGE
bookbuyer-7676c7fcfb-mtnrz 2/2 Running 0 7m8s
```
Una vez que tenemos el nombre del pod, podemos usar el comando de reenvío de puerto para configurar un túnel desde nuestro sistema local a la aplicación dentro del clúster de AKS. Ejecute el siguiente comando para configurar el reenvío de puertos para el puerto del sistema local 8080. Vuelva a usar su nombre de pod de `bookbuyer` específico.
```azurecli-interactive
kubectl port-forward bookbuyer-7676c7fcfb-mtnrz -n bookbuyer 8080:14001
```
Debería mostrarse una salida similar a esta.
```Output
Forwarding from 127.0.0.1:8080 -> 14001
Forwarding from [::1]:8080 -> 14001
```
Mientras la sesión de reenvío de puertos está en marcha, navegue a la siguiente URL desde un navegador `http://localhost:8080`. Ahora debería poder ver la interfaz de usuario de la aplicación `bookbuyer` en el explorador, que es similar a la imagen siguiente.

## <a name="create-an-azure-application-gateway-to-expose-the-bookbuyer-application"></a>Creación de una instancia de Azure Application Gateway para exponer la aplicación `bookbuyer`
> [!NOTE]
> Las instrucciones siguientes crearán una nueva instancia de Azure Application Gateway que se usará para la entrada. Si ya tiene una instancia de Azure Application Gateway que desea usar, vaya a la sección para habilitar el complemento del controlador de entrada de Application Gateway.
### <a name="deploy-a-new-application-gateway"></a>Implementación de una nueva instancia de Application Gateway
> [!NOTE]
> Nos estamos refiriendo a la documentación existente para habilitar el complemento de Controlador de entrada de Application Gateway para un clúster de AKS existente. Se han realizado algunas modificaciones para adaptarse a los materiales de OSM. [Aquí](../application-gateway/tutorial-ingress-controller-add-on-existing.md)encontrará documentación más detallada sobre el tema.
Ahora implementará una nueva instancia de Application Gateway para simular una instancia de Application Gateway existente que quiera usar para equilibrar la carga del tráfico en el clúster de AKS, _myCluster_. El nombre de la instancia de Application Gateway será _myApplicationGateway_, pero tendrá que crear primero un recurso de dirección IP pública, denominado _myPublicIp_, y una nueva red virtual denominada _myVnet_ con el espacio de direcciones 11.0.0.0/8 y una subred con el espacio de direcciones 11.1.0.0/16 llamada _mySubnet_ e implementar la instancia de Application Gateway en _mySubnet_ con _myPublicIp_.
Cuando se usa un clúster de AKS y una instancia de Application Gateway en redes virtuales independientes, los espacios de direcciones de las dos redes virtuales no deben superponerse. El espacio de direcciones predeterminado que implementa un clúster de AKS es 10.0.0.0/8, por lo que establecemos el prefijo de dirección de red virtual de la instancia de Application Gateway en 11.0.0.0/8.
```azurecli-interactive
az group create --name myResourceGroup --location eastus2
az network public-ip create -n myPublicIp -g MyResourceGroup --allocation-method Static --sku Standard
az network vnet create -n myVnet -g myResourceGroup --address-prefix 11.0.0.0/8 --subnet-name mySubnet --subnet-prefix 11.1.0.0/16
az network application-gateway create -n myApplicationGateway -l eastus2 -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet
```
> [!NOTE]
> El complemento del controlador de entrada de Application Gateway (AGIC) admite **solo** las SKU de Application Gateway v2 (estándar y WAF) y **no** las SKU de Application Gateway v1.
### <a name="enable-the-agic-add-on-for-an-existing-aks-cluster-through-azure-cli"></a>Habilitación del complemento AGIC para un clúster de AKS existente a través de la CLI de Azure
Si quiere seguir usando la CLI de Azure, puede seguir habilitando el complemento AGIC en el clúster de AKS que creó, _myCluster_, y especificar el complemento AGIC para usar la instancia de Application Gateway existente que creó, _myApplicationGateway_.
```azurecli-interactive
appgwId=$(az network application-gateway show -n myApplicationGateway -g myResourceGroup -o tsv --query "id")
az aks enable-addons -n myCluster -g myResourceGroup -a ingress-appgw --appgw-id $appgwId
```
Puede comprobar que el complemento de AKS de Azure Application Gateway se ha habilitado mediante el siguiente comando.
```azurecli-interactive
az aks list -g osm-aks-rg -o json | jq -r .[].addonProfiles.ingressApplicationGateway.enabled
```
Este comando debería mostrar la salida como `true`.
### <a name="peer-the-two-virtual-networks-together"></a>Emparejamiento de dos redes virtuales juntas
Dado que hemos implementado el clúster de AKS en su propia red virtual y la instancia de Application Gateway en otra red virtual, deberá emparejar las dos redes virtuales juntas para que el tráfico fluya de la instancia de Application Gateway a los pods del clúster. Emparejar las dos redes virtuales requiere ejecutar el comando de la CLI de Azure dos veces independientes para asegurarse de que la conexión sea bidireccional. El primer comando creará una conexión de emparejamiento desde la red virtual de Application Gateway a la red virtual de AKS; el segundo comando creará una conexión de emparejamiento en la otra dirección.
```azurecli-interactive
nodeResourceGroup=$(az aks show -n myCluster -g myResourceGroup -o tsv --query "nodeResourceGroup")
aksVnetName=$(az network vnet list -g $nodeResourceGroup -o tsv --query "[0].name")
aksVnetId=$(az network vnet show -n $aksVnetName -g $nodeResourceGroup -o tsv --query "id")
az network vnet peering create -n AppGWtoAKSVnetPeering -g myResourceGroup --vnet-name myVnet --remote-vnet $aksVnetId --allow-vnet-access
appGWVnetId=$(az network vnet show -n myVnet -g myResourceGroup -o tsv --query "id")
az network vnet peering create -n AKStoAppGWVnetPeering -g $nodeResourceGroup --vnet-name $aksVnetName --remote-vnet $appGWVnetId --allow-vnet-access
```
## <a name="expose-the-bookbuyer-service-to-the-internet"></a>Exposición del servicio `bookbuyer` a Internet
Aplique el siguiente manifiesto de entrada al clúster de AKS para exponer el servicio `bookbuyer` a Internet a través de la instancia de Azure Application Gateway.
```azurecli-interactive
kubectl apply -f - <<EOF
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: bookbuyer-ingress
namespace: bookbuyer
annotations:
kubernetes.io/ingress.class: azure/application-gateway
spec:
rules:
- host: bookbuyer.contoso.com
http:
paths:
- path: /
backend:
serviceName: bookbuyer
servicePort: 14001
backend:
serviceName: bookbuyer
servicePort: 14001
EOF
```
Debería ver la siguiente salida.
```Output
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/bookbuyer-ingress created
```
Dado que el nombre de host en el manifiesto de entrada es un pseudonombre que se usa para las pruebas, el nombre DNS no estará disponible en Internet. También podemos usar el programa de cURL y después el encabezado de nombre de host para la dirección IP pública de Azure Application Gateway y recibir un código 200 que nos conecta correctamente con el servicio `bookbuyer`.
```azurecli-interactive
appGWPIP=$(az network public-ip show -g MyResourceGroup -n myPublicIp -o tsv --query "ipAddress")
curl -H 'Host: bookbuyer.contoso.com' http://$appGWPIP/
```
Debería ver la siguiente salida.
```Output
<!doctype html>
<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
<head>
<meta content="Bookbuyer" name="description">
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
<title>Bookbuyer</title>
<style>
#navbar {
width: 100%;
height: 50px;
display: table;
border-spacing: 0;
white-space: nowrap;
line-height: normal;
background-color: #0078D4;
background-position: left top;
background-repeat-x: repeat;
background-image: none;
color: white;
font: 2.2em "Fira Sans", sans-serif;
}
#main {
padding: 10pt 10pt 10pt 10pt;
font: 1.8em "Fira Sans", sans-serif;
}
li {
padding: 10pt 10pt 10pt 10pt;
font: 1.2em "Consolas", sans-serif;
}
</style>
<script>
setTimeout(function(){window.location.reload(1);}, 1500);
</script>
</head>
<body bgcolor="#fff">
<div id="navbar">
📖 Bookbuyer
</div>
<div id="main">
<ul>
<li>Total books bought: <strong>5969</strong>
<ul>
<li>from bookstore V1: <strong>277</strong>
<li>from bookstore V2: <strong>5692</strong>
</ul>
</li>
</ul>
</div>
<br/><br/><br/><br/>
<br/><br/><br/><br/>
<br/><br/><br/><br/>
Current Time: <strong>Fri, 26 Mar 2021 16:34:30 UTC</strong>
</body>
</html>
```
## <a name="troubleshooting"></a>Solución de problemas
- [Documentación de solución de problemas de AGIC](../application-gateway/ingress-controller-troubleshoot.md)
- [Hay otras herramientas de solución de problemas disponibles en el repositorio de GitHub de AGIC](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/troubleshootings/troubleshooting-installing-a-simple-application.md).
| 44.909548 | 631 | 0.757693 | spa_Latn | 0.844276 |
7616a6750645e5ecedd58cd4021df41792bfccc8 | 847 | md | Markdown | AlchemyInsights/become-an-admin.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:06:02.000Z | 2020-09-17T11:26:05.000Z | AlchemyInsights/become-an-admin.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2022-02-09T06:59:12.000Z | 2022-02-09T06:59:36.000Z | AlchemyInsights/become-an-admin.md | isabella232/OfficeDocs-AlchemyInsights-pr.da-DK | a907697f48db2dc57c19d7e003d92831c111566e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-11T18:36:50.000Z | 2021-10-09T10:49:57.000Z | ---
title: Bliv administrator
ms.author: pebaum
author: CrystalThomasMS
ms.date: 04/21/2020
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.assetid: acff9f3e-e5d9-4eee-b1b3-9895a7cb27fc
ms.custom:
- "3"
- "71"
- "13"
ms.openlocfilehash: db534de825d9b77882d4b37396b266ba6a28e49d4287ab1555500b4e54d8c10b
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: da-DK
ms.lasthandoff: 08/05/2021
ms.locfileid: "53969279"
---
# <a name="become-an-admin"></a>Bliv administrator
For at antage rollen som administrator for din organisation skal du overtage lejeren.
Følg instruktionerne i denne vejledning: [Administratorovertagelse](https://docs.microsoft.com/azure/active-directory/users-groups-roles/domains-admin-takeover) | 31.37037 | 160 | 0.813459 | dan_Latn | 0.149128 |
76173b22938de5cfeff7fe22c8474b8655ded7ac | 918 | md | Markdown | 20170620.md | liushengxian/somediary | 6cc242092ac01d0ba2b032fca0900e58b7b00a1d | [
"MIT"
] | null | null | null | 20170620.md | liushengxian/somediary | 6cc242092ac01d0ba2b032fca0900e58b7b00a1d | [
"MIT"
] | null | null | null | 20170620.md | liushengxian/somediary | 6cc242092ac01d0ba2b032fca0900e58b7b00a1d | [
"MIT"
] | null | null | null | # About Git
To push a new local branch to a remote for the first time, add `-u` when pushing; `-u` is simply shorthand for `--set-upstream`, so `git push -u origin [branch]` and `git push --set-upstream origin [branch]` are equivalent — both push the branch and make it track the remote one.
Reference: https://stackoverflow.com/questions/2765421/how-do-i-push-a-new-local-branch-to-a-remote-git-repository-and-track-it-too
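A minimal sketch of that first push (throwaway repos under a temp dir; the branch name `feature` is illustrative). After `git push -u`, the local branch tracks the remote one, so later `git push`/`git pull` need no arguments:

```shell
set -e
remote=$(mktemp -d); git init -q --bare "$remote"    # stands in for the remote
work=$(mktemp -d);   git -C "$work" init -q
git -C "$work" -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "init"
git -C "$work" checkout -q -b feature                # the new local branch
git -C "$work" remote add origin "$remote"
git -C "$work" push -qu origin feature               # -u == --set-upstream
git -C "$work" rev-parse --abbrev-ref 'feature@{upstream}'
```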
# About CSS padding
Padding with percentage is determined by the width of its parent node.
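A minimal illustration (the class names are hypothetical): even *vertical* percentage padding resolves against the parent's width, which is what makes tricks like fixed-aspect-ratio boxes work.

```css
.parent { width: 400px; height: 100px; }
.child  { padding-top: 10%; } /* computes to 40px — 10% of the parent's 400px width, not of its height */
```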
# About Nice-cabbage Section-title Font-family
I tried several approaches to make the section titles render identically, but none of them worked. In the end, it is the font-family that determines this feature — neither word-spacing nor letter-spacing.
# How to solve css problems on different browsers?
First, reset the browsers' default styles.
# A Javascript Problem
```javascript
var foo = 1;
function bar(){
  foo = 10;          // assigns to the hoisted local `foo` below, not the global
  return;
  function foo(){}   // this declaration is hoisted to the top of bar(), shadowing the global foo
}
bar();
alert(foo);
```

What is alerted?
| 25.5 | 163 | 0.734205 | eng_Latn | 0.976722 |
76177ac5cada736eddc59604e8c74aeb5a893901 | 8,387 | md | Markdown | articles/active-directory/active-directory-saas-slack-tutorial.md | OpenLocalizationTestOrg/azure-docs-pr15_et-EE | bc69bd1a9d45d7abd2a3d01806d74e2b0848a808 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-saas-slack-tutorial.md | OpenLocalizationTestOrg/azure-docs-pr15_et-EE | bc69bd1a9d45d7abd2a3d01806d74e2b0848a808 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-saas-slack-tutorial.md | OpenLocalizationTestOrg/azure-docs-pr15_et-EE | bc69bd1a9d45d7abd2a3d01806d74e2b0848a808 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
pageTitle="Õpetus: Azure'i Active Directory integreerimine vaikne | Microsoft Azure'i"
description="Saate teada, kuidas lubada ühekordse sisselogimise, automatiseeritud ettevalmistamine ja muud Azure Active Directory vaikne kasutamine!"
services="active-directory"
authors="jeevansd"
documentationCenter="na"
manager="femila"/>
<tags
ms.service="active-directory"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="identity"
ms.date="09/19/2016"
ms.author="jeedes" />
#<a name="tutorial-azure-active-directory-integration-with-slack"></a>Õpetus: Azure'i Active Directory integreerimine vaikne
Selle õpetuse eesmärk on integreerimine Azure ja vaikne kuvamiseks.
Stsenaarium, mis on kirjeldatud selles õpetuses eeldab, et teil on juba järgmised üksused:
- Azure'i kehtiv tellimus
- Vaikne ühekordse sisselogimise lubatud tellimus
Pärast selle õpetuse, saab Azure AD kasutajate olete määranud vaikne ühekordse sisselogimise taotluse saidile vaikne ettevõtte (teenuse pakkuja algatatud Logi sisse) või [Sissejuhatus Accessi juhtpaneeli](active-directory-saas-access-panel-introduction.md)kaudu.
Stsenaarium, mis on kirjeldatud selles õpetuses koosneb järgmistest koosteüksused.
1. Rakenduste integreerimise jaoks vaikne lubamine
2. Ühekordse sisselogimise konfigureerimine
3. Kasutaja ettevalmistamise konfigureerimine
4. Kasutajate määramine
![Stsenaarium] (./media/active-directory-saas-slack-tutorial/IC794980.png "Stsenaarium")
##<a name="enabling-the-application-integration-for-slack"></a>Rakenduste integreerimise jaoks vaikne lubamine
Selle jaotise eesmärk on liigendamine jaoks vaikne rakenduse integreerimise lubamise kohta.
###<a name="to-enable-the-application-integration-for-slack-perform-the-following-steps"></a>Rakenduste integreerimise jaoks vaikne lubamiseks tehke järgmist.
1. Klassikaline portaalis, klõpsake vasakpoolsel navigeerimispaanil, klõpsake soovitud Azure **Active Directory**.
![Active Directory] (./media/active-directory-saas-slack-tutorial/IC700993.png "Active Directory")
2. Valige loendist **Directory** kataloogi, mille jaoks soovite lubada kataloogi integreerimise.
3. Kuva rakendused directory vaate avamiseks klõpsake ülemises menüüs **rakendused** .
![Rakenduste] (./media/active-directory-saas-slack-tutorial/IC700994.png "Rakenduste")
4. Klõpsake lehe allosas **Lisa** .
![Rakenduse lisamine] (./media/active-directory-saas-slack-tutorial/IC749321.png "Rakenduse lisamine")
5. Klõpsake dialoogiboksis **soovitud teha,** klõpsake nuppu **Lisa rakendus galeriist**.
![Rakenduse kaudu gallerry lisamine] (./media/active-directory-saas-slack-tutorial/IC749322.png "Rakenduse kaudu gallerry lisamine")
6. Tippige **väljale Otsi** **vaikne**.
![Rakenduse Galerii] (./media/active-directory-saas-slack-tutorial/IC794981.png "Rakenduse Galerii")
7. Tulemuste paanil valige **vaikne**, ja klõpsake rakenduse lisamiseks **lõpuleviimine** .
![Stsenaarium] (./media/active-directory-saas-slack-tutorial/IC796925.png "Stsenaarium")
##<a name="configuring-single-sign-on"></a>Ühekordse sisselogimise konfigureerimine
Selle jaotise eesmärk on liigendamine kasutajate autentimiseks vaikne oma konto abil SAML protokolli federation Azure AD lubamise kohta.
Selle toimingu käigus saate vajalike base-64-kodeeritud sertifikaat faili loomine.
Kui te pole seda toimingut juba tuttav, vaadake, [Kuidas teisendada teksti faili kahendarvu sert](http://youtu.be/PlgrzUZ-Y1o)
###<a name="to-configure-single-sign-on-perform-the-following-steps"></a>Ühekordse sisselogimise konfigureerimiseks tehke järgmist.
1. Azure'i klassikaline portaalis lehel **vaikne** rakenduse integreerimise nuppu **Konfigureeri ühekordse sisselogimise** **Konfigureerimine ühekordse sisselogimise** dialoogiboksi avamiseks.
![Ühekordse sisselogimise konfigureerimine] (./media/active-directory-saas-slack-tutorial/IC794982.png "Ühekordse sisselogimise konfigureerimine")
2. **Kuidas kas soovite kasutajad logida vaikne** lehel Valige **Microsoft Azure AD ühekordse sisselogimise**ning seejärel klõpsake nuppu **edasi**.
![Ühekordse sisselogimise konfigureerimine] (./media/active-directory-saas-slack-tutorial/IC794983.png "Ühekordse sisselogimise konfigureerimine")
3. Lehel **Rakenduse URL-i konfigureerimine** **Vaikne sisselogimise URL** väljale Tippige oma vaikne rentniku URL (nt: "*https://azuread.slack.com*"), ja seejärel klõpsake nuppu **edasi**.
![Rakenduse URL-i konfigureerimine] (./media/active-directory-saas-slack-tutorial/IC794984.png "Rakenduse URL-i konfigureerimine")
4. Lehel **Konfigureeri ühekordse sisselogimise vaikne veebisaidil** oma sertifikaadi allalaadimiseks nuppu **Laadi alla serdi**ja seejärel salvestage serdi fail teie arvutile.
![Ühekordse sisselogimise konfigureerimine] (./media/active-directory-saas-slack-tutorial/IC794985.png "Ühekordse sisselogimise konfigureerimine")
5. Erinevate web brauseriaknas, logige sisse saidil vaikne ettevõtte administraatorina.
6. Minge **Microsoft Azure AD \> meeskonnatöö sätted**.
![Meeskonnatöö sätted] (./media/active-directory-saas-slack-tutorial/IC794986.png "Meeskonnatöö sätted")
7. **Meeskonnatöö sätted** jaotises **autentimine** vahekaarti ning seejärel klõpsake käsku **Muuda sätteid**.
![Meeskonnatöö sätted] (./media/active-directory-saas-slack-tutorial/IC794987.png "Meeskonnatöö sätted")
8. Klõpsake dialoogiboksis **SAML autentimissätted** tehke järgmist.
![SAML sätted] (./media/active-directory-saas-slack-tutorial/IC794988.png "SAML sätted")
1. Azure'i klassikaline portaalis lehel **Konfigureeri ühekordse sisselogimise veebisaidil vaikne** dialoogiboksi kopeerige **SAML SSO URL-i** väärtus ja seejärel kleepige **SAML 2.0 lõpp-punkti (HTTP)** tekstiväli.
2. Azure'i klassikaline portaalis lehel **Konfigureeri ühekordse sisselogimise veebisaidil vaikne** dialoogiboksi kopeerige **Väljaandja URL-i** väärtus ja seejärel kleepige **Identiteedi pakkuja väljaandja** tekstiväli.
3. Teie allalaaditud serdi **base-64-kodeeritud** faili loomine.
>[AZURE.TIP] Lisateabe saamiseks vaadake, [Kuidas teisendada teksti faili kahendarvu sert](http://youtu.be/PlgrzUZ-Y1o)
4. Avage oma base-64-kodeeritud sertifikaat Notepadis, kopeerige see sisu teie lõikelauale ja kleepige see **Avaliku serdi** tekstiväli.
5. Tühjendage ruut **Luba kasutajatel muuta oma meiliaadress**.
6. Valige **Luba kasutajatel valida oma kasutajanimi**.
7. **Autentimise teie meeskond peab kasutama**, valige **see pole kohustuslik**.
8. Klõpsake nuppu **Salvesta konfigureerimine**.
9. Azure'i klassikaline portaalis valige kinnituse ühekordse sisselogimise konfigureerimine ja klõpsake **lõpuleviimine** **Konfigureerimine ühekordse sisselogimise** dialoogiboksi sulgemiseks.
![Ühekordse sisselogimise konfigureerimine] (./media/active-directory-saas-slack-tutorial/IC794989.png "Ühekordse sisselogimise konfigureerimine")
##<a name="configuring-user-provisioning"></a>Kasutaja ettevalmistamise konfigureerimine
Selleks, et Azure AD kasutajate vaikne sisse logida, nad peavad olema ettevalmistatud vaikne.
On teil konfigureerida kasutaja ettevalmistamine vaikne toimingu üksusi pole.
Kui mõni määratud kasutaja üritab vaikne sisse logida, luuakse automaatselt vaikne konto vajaduse korral.
##<a name="assigning-users"></a>Kasutajate määramine
Teie konfiguratsiooni testimiseks peate Azure AD kasutajatele anda soovite lubada abil oma rakenduse juurdepääsu, määrates neile.
###<a name="to-assign-users-to-slack-perform-the-following-steps"></a>Vaikne kasutajate määramiseks tehke järgmist.
1. Azure'i klassikaline portaalis testi konto loomine.
2. Klõpsake lehel **vaikne **rakenduse integreerimise **määrata kasutajatele**.
![Kasutajate määramine] (./media/active-directory-saas-slack-tutorial/IC794990.png "Kasutajate määramine")
3. Valige oma testkasutaja, klõpsake nuppu **Määra**, ja seejärel nuppu **Jah** ülesande kinnitamiseks.
![Jah] (./media/active-directory-saas-slack-tutorial/IC767830.png "Jah")
Kui soovite ühekordse sisselogimise sätete testimiseks, avage Accessi paneel. Accessi paani kohta leiate lisateavet teemast [Sissejuhatus Accessi paani](active-directory-saas-access-panel-introduction.md). | 57.841379 | 262 | 0.782401 | est_Latn | 0.998842 |
7617ce92f676ecc029bd975f9ed212c74ab2b30b | 24,613 | md | Markdown | aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application.md | AlexanderUsmanov/Docs.ru-ru | 5e5ce086955ef8e41e97d524a6f1141be2b60d8e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application.md | AlexanderUsmanov/Docs.ru-ru | 5e5ce086955ef8e41e97d524a6f1141be2b60d8e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application.md | AlexanderUsmanov/Docs.ru-ru | 5e5ce086955ef8e41e97d524a6f1141be2b60d8e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
title: Обработка параллелизма с платформой Entity Framework в приложении ASP.NET MVC (7, 10) | Документы Microsoft
author: tdykstra
description: Contoso университета примера веб-приложения демонстрирует создание приложения ASP.NET MVC 4, с помощью Entity Framework 5 Code First и Visual Studio...
ms.author: aspnetcontent
manager: wpickett
ms.date: 07/30/2013
ms.topic: article
ms.assetid: b83f47c4-8521-4d0a-8644-e8f77e39733e
ms.technology: dotnet-mvc
ms.prod: .net-framework
msc.legacyurl: /mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
msc.type: authoredcontent
ms.openlocfilehash: 609f493845f1d00a47d175a1b623a7f4866d191e
ms.sourcegitcommit: f8852267f463b62d7f975e56bea9aa3f68fbbdeb
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 04/06/2018
---
<a name="handling-concurrency-with-the-entity-framework-in-an-aspnet-mvc-application-7-of-10"></a>Обработка параллелизма с платформой Entity Framework в приложении ASP.NET MVC (7, 10)
====================
по [Tom Dykstra](https://github.com/tdykstra)
[Загрузка завершенного проекта](http://code.msdn.microsoft.com/Getting-Started-with-dd0e2ed8)
> Contoso университета примера веб-приложения демонстрирует создание приложения ASP.NET MVC 4, с помощью Entity Framework 5 Code First и Visual Studio 2012. Сведения о серии руководств см. в [первом руководстве серии](creating-an-entity-framework-data-model-for-an-asp-net-mvc-application.md). Учебник рядов можно запустить с самого начала или [загрузить начальный проект для этой главы](building-the-ef5-mvc4-chapter-downloads.md) и начните здесь.
>
> > [!NOTE]
> >
> > Если возникли проблемы, не удается устранить, [загрузить завершенного главе](building-the-ef5-mvc4-chapter-downloads.md) и попробуйте воспроизвести проблему. Обычно можно найти решение проблемы путем сравнения код для завершения кода. Некоторые распространенные ошибки и способы их устранения см. в разделе [ошибок и способы их устранения.](advanced-entity-framework-scenarios-for-an-mvc-web-application.md#errors)
В двух предыдущих занятий вы работали с соответствующими данными. Этот учебник демонстрирует обработки параллелизма. Вы создадите веб-страниц, которые работают с `Department` сущности и страниц, изменение и удаление `Department` сущностей будет обрабатывать ошибки параллелизма. На следующих рисунках страницы индекса и Delete, включая некоторые сообщения, которые отображаются, если возникает конфликт параллелизма.


## <a name="concurrency-conflicts"></a>Конфликты параллелизма
Конфликт параллелизма возникает, когда один пользователь отображает данные сущности, чтобы изменить их, а другой пользователь обновляет данные той же сущности до того, как изменение первого пользователя будет записано в базу данных. Если не включить обнаружение таких конфликтов, то пользователь, обновляющий базу данных последним, перезаписывает изменения другого пользователя. Во многих приложениях такой риск допустим: при небольшом числе пользователей или обновлений, а также в случае, если перезапись некоторых изменений не является критической, стоимость реализации параллелизма может перевесить его преимущества. В этом случае вам не нужно настраивать приложение для обработки конфликтов параллелизма.
### <a name="pessimistic-concurrency-locking"></a>Пессимистическая блокировка (блокировки)
Если приложению нужно предотвратить случайную потерю данных в сценариях параллелизма, одним из способов сделать это являются блокировки базы данных. Это называется *пессимистичный параллелизм*. Например, перед чтением строки из базы данных вы запрашиваете блокировку для доступа для обновления или только для чтения. Если заблокировать строку для обновления, другие пользователи не могут заблокировать ее для обновления или только для чтения, так как получат копию данных, которые находятся в процессе изменения. Если заблокировать строку только для чтения, другие пользователи также могут заблокировать ее только для чтения, но не для обновления.
Управление блокировками имеет недостатки. Оно может оказаться сложным с точки зрения программирования. Она требует значительные ресурсы базы данных управления, и он может вызвать снижение производительности как число пользователей приложения увеличивает (то есть, он плохо масштабируется). Поэтому не все системы управления базами данных поддерживают пессимистичный параллелизм. Платформа Entity Framework предоставляет нет встроенная поддержка и учебнике не показано, как для его реализации.
### <a name="optimistic-concurrency"></a>Оптимистическая блокировка
Вместо него следует использовать для пессимистичный параллелизм *оптимистичного параллелизма*. Оптимистическая блокировка допускает появление конфликтов параллелизма, а затем обрабатывает их соответствующим образом. Например, Джон запускает отделов изменить страницу, изменения **бюджета** сумма для английского языка отдела из 350,000.00 $ $ 0,00.

Прежде чем Джон выбирает **Сохранить**, Мария запускает одну и ту же страницу и изменения **Дата начала** поля из 9/1/2007 8/8/2013.

Джон выбирает **Сохранить** первым и его изменение браузер по возвращении на страницу индекса, а затем Мария щелкает видит **Сохранить**. Дальнейший ход событий определяется порядком обработки конфликтов параллелизма. Некоторые параметры перечислены ниже:
- Вы можете отслеживать, для какого свойства пользователь изменил и обновил только соответствующие столбцы в базе данных. В этом примере сценария данные не будут потеряны, так как эти два пользователя обновляли разные свойства. Далее время кто-то просматривает отделе английского языка, они видят изменения Джон и Джейн — Начальная дата 8/8/2013 и бюджета ноль долларов.
Этот метод обновления помогает снизить число конфликтов, которые могут привести к потере данных, но не позволяет избежать такой потери, когда конкурирующие изменения вносятся в одно свойство сущности. То, работает ли Entity Framework в таком режиме, зависит от того, как вы реализуете код обновления. В веб-приложении это часто нецелесообразно, так как может потребоваться обрабатывать большой объем состояний, чтобы отслеживать все исходные значения свойств для сущности, а также новые значения. Обслуживание больших объемов состояния может повлиять на производительность приложения, так как он требует ресурсы сервера или должны быть включены на странице веб-(например, в скрытых полях).
- Можно перезаписать изменения Джона изменение Джейн. Далее время кто-то просматривает отделе английского языка, они видят 8/8/2013 и восстановленное значение $350,000.00. Такой подход называется *победой клиента* или *сохранением последнего внесенного изменения*. (Значения клиента имеют приоритет над возможности хранилища данных). Как отмечено во введении к этому разделу, если вы не пишете код для обработки параллелизма, она выполняется автоматически.
- Можно запретить изменение Джейн обновляется в базе данных. Обычно сообщение об ошибке, Показать его текущее состояние данных и разрешить его для повторного применения своих изменений, если она по-прежнему хочет сделать их. Это называется *победой хранилища*. (Значения в хранилище имеют приоритет над данными, передаваемыми клиентом.) В этом руководстве вы реализуете сценарий победы хранилища. Данный метод гарантирует, что никакие изменения не перезаписываются без оповещения пользователя о случившемся.
### <a name="detecting-concurrency-conflicts"></a>Обнаружение конфликтов параллелизма
Конфликты можно разрешать путем обработки [OptimisticConcurrencyException](https://msdn.microsoft.com/library/system.data.optimisticconcurrencyexception.aspx) исключения, которые выдает Entity Framework. Чтобы определить, когда именно нужно выдавать исключения, платформа Entity Framework должна быть в состоянии обнаруживать конфликты. Поэтому нужно соответствующим образом настроить базу данных и модель данных. Ниже приведены некоторые варианты для реализации обнаружения конфликтов:
- Включите в таблицу базы данных столбец отслеживания, который позволяет определять, когда была изменена строка. Затем можно настроить для включения этого столбца в Entity Framework `Where` предложение SQL `Update` или `Delete` команд.
Тип данных столбца отслеживания обычно является [rowversion](https://msdn.microsoft.com/library/ms182776(v=sql.110).aspx). [Rowversion](https://msdn.microsoft.com/library/ms182776(v=sql.110).aspx) значение — это число, увеличивающееся каждый раз при обновлении строки. В `Update` или `Delete` команды `Where` предложение содержит исходное значение столбца отслеживания (версии). Если обновляемой строке был изменен другим пользователем, значение в `rowversion` столбца отличается от исходного значения, поэтому `Update` или `Delete` инструкции не удается найти строку для обновления из-за `Where` предложения. Когда Entity Framework находит, что строки не были обновлены с `Update` или `Delete` команды (то есть, когда количество задействованных строк равно нулю), он интерпретирует как конфликт параллелизма.
- Настройка включения исходных значений для каждого из столбцов в таблице в платформе Entity Framework `Where` предложения `Update` и `Delete` команд.
Как и первый вариант, если что-либо в строке изменилась с момента прочтите строки `Where` предложение не возвращает строки для обновления, который Entity Framework интерпретирует как конфликт параллелизма. Для таблиц базы данных, имеющие много столбцов, этот подход может привести к очень больших `Where` предложений и может потребоваться поддерживать большие объемы состояния. Как отмечалось ранее, обслуживание больших объемов состояния могут ухудшить производительность приложения, так как он требует ресурсы сервера или должен быть включен в саму веб-страницу. Поэтому этот подход обычно не рекомендуется, а не метод, используемый в этом учебнике.
Если необходимо реализовать этот подход к параллелизма, необходимо пометить все свойства первичного ключа в сущность, которую необходимо отслеживать параллелизм для добавляя [ConcurrencyCheck](https://msdn.microsoft.com/library/system.componentmodel.dataannotations.concurrencycheckattribute.aspx) к ним атрибут. То, что изменение позволяет платформе Entity Framework включают все столбцы в инструкции SQL, `WHERE` предложения `UPDATE` инструкции.
В оставшейся части этого учебника вам предстоит добавить [rowversion](https://msdn.microsoft.com/library/ms182776(v=sql.110).aspx) свойство для отслеживания `Department` сущности, создать контроллер и представления и проверить, что все работает правильно.
## <a name="add-an-optimistic-concurrency-property-to-the-department-entity"></a>Добавить свойство оптимистичного параллелизма для сущности «отдел»
В *Models\Department.cs*, добавьте отслеживания свойство с именем `RowVersion`:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample1.cs?highlight=18-19)]
[Timestamp](https://msdn.microsoft.com/library/system.componentmodel.dataannotations.timestampattribute.aspx) атрибут указывает, что этот столбец будет включен в `Where` предложения `Update` и `Delete` команды, отправляемые в базу данных. Атрибут называется [Timestamp](https://msdn.microsoft.com/library/system.componentmodel.dataannotations.timestampattribute.aspx) из-за предыдущих версий SQL Server используется SQL [timestamp](https://msdn.microsoft.com/library/ms182776(v=SQL.90).aspx) типа данных перед SQL [rowversion](https://msdn.microsoft.com/library/ms182776(v=sql.110).aspx) заменил его. Тип .net для `rowversion` — массив байтов. Если вы предпочитаете использовать fluent API, можно использовать [IsConcurrencyToken](https://msdn.microsoft.com/library/gg679501(v=VS.103).aspx) метод, чтобы задать свойства трассировки, как показано в следующем примере:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample2.cs)]
Добавив свойство, вы изменили модель базы данных, поэтому нужно выполнить еще одну миграцию. Введите в консоли диспетчера пакетов (PMC) следующие команды:
[!code-console[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample3.cmd)]
## <a name="create-a-department-controller"></a>Создание контроллера отдела
Создание `Department` контроллера и представления так же, как другие контроллеры со следующими параметрами:

В *Controllers\DepartmentController.cs*, добавьте `using` инструкции:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample4.cs)]
Изменение «LastName» на «FullName» везде в этом файле (четыре вхождений) так, чтобы списки раскрывающегося списка администратор отдела будет содержать полное имя инструктора, а не просто фамилию.
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample5.cs?highlight=1)]
Замените существующий код для `HttpPost` `Edit` метод следующим кодом:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample6.cs)]
Представление будет хранить исходные `RowVersion` значения в скрытом поле. Если создается связывателя модели `department` экземпляра, этот объект будет иметь исходный `RowVersion` значение свойства, а новые значения для других свойств, как введенные пользователем на странице «Изменение». Затем в том случае, когда платформа Entity Framework создает SQL `UPDATE` команды, команда будет включать `WHERE` предложение, которое ищет строку, которая содержит исходный `RowVersion` значение.
Если строки не затронуты `UPDATE` команда (строки не имеют исходной `RowVersion` значение), Entity Framework создает исключение `DbUpdateConcurrencyException` исключения, а код в `catch` блок получает соответствующие `Department` сущности из исключения объект. Эта сущность содержит значения, считанные из базы данных и новые значения, введенные пользователем:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample7.cs)]
Затем код добавляет сообщения о пользовательской ошибке для каждого столбца, имеющего значений базы данных, отличающийся от введенного пользователем на странице «Изменение»:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample8.cs)]
Длинное сообщение об ошибке объясняется, что произошло и что делать о нем.
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample9.cs)]
Наконец, код задает `RowVersion` значение `Department` извлечь объект новое значение из базы данных. Это новое значение `RowVersion` будет сохранено в скрытом поле при повторном отображении страницы "Edit" (Редактирование). Когда пользователь в следующий раз нажимает кнопку **Save** (Сохранить), перехватываются только те ошибки параллелизма, которые возникли с момента повторного отображения страницы "Edit" (Редактирование).
В *Views\Department\Edit.cshtml*, добавьте скрытое поле, чтобы сохранить `RowVersion` значение свойства, следующий сразу за скрытого поля для `DepartmentID` свойства:
[!code-cshtml[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample10.cshtml?highlight=17)]
В *Views\Department\Index.cshtml*, замените существующий код следующим кодом для перемещения ссылки строк слева и изменения страницы заголовок и заголовки столбцов для отображения `FullName` вместо `LastName` в **Администратора** столбца:
[!code-cshtml[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample11.cshtml)]
## <a name="testing-optimistic-concurrency-handling"></a>Тестирование обработки оптимистичного параллелизма
Запустите сайт и нажмите кнопку **отделы**:

Щелкните правой кнопкой мыши **изменить** гиперссылку для Kim Abercrombie и выберите **открыть в новой вкладке** щелкните **изменить** гиперссылку для Kim Abercrombie. В двух окнах отображаются те же сведения.

Измените поля в первом окне браузера и нажмите кнопку **Сохранить**.

В браузере отображается страница индекса с измененным значением.

Измените любое поле в второе окно браузера и нажмите кнопку **Сохранить**.

Нажмите кнопку **Сохранить** во второе окно браузера. Отображается сообщение об ошибке:

Снова нажмите кнопку **Save** (Сохранить). Значение, введенное на второй обозреватель сохраняется вместе с исходное значение данных, изменения в браузере первой. Сохраненные значения отображаются при открытии страницы индекса.

## <a name="updating-the-delete-page"></a>Обновление страницы удаления
Для страницы "Delete" (Удаление) платформа Entity Framework обнаруживает конфликты параллелизма, вызванные схожим изменением кафедры. Когда `HttpGet` `Delete` метод отображает представление подтверждения, представление включает в себя исходный `RowVersion` значения в скрытом поле. Значение затем становится доступным `HttpPost` `Delete` метод, который вызывается, когда пользователь подтверждения удаления. Если Entity Framework создает SQL `DELETE` команды, он включает `WHERE` предложение с первоначальным `RowVersion` значение. Если результаты команды в ноль строк влияет (то есть строка была изменена после отображается страница подтверждения удаления), исключение параллелизма и `HttpGet Delete` метод вызывается с ошибка установлен флаг `true` для повторного отображения страница подтверждения с сообщением об ошибке. Также возможно, что нулевые затронуты, так как строка была удалена другим пользователем, поэтому в этом случае отображается другое сообщение об ошибке.
В *DepartmentController.cs*, замените `HttpGet` `Delete` метод следующим кодом:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample12.cs)]
Этот метод принимает необязательный параметр, который указывает, отображается ли страница повторно после ошибки параллелизма. Если этот флаг `true`, сообщение об ошибке отправляется в представлении с помощью `ViewBag` свойство.
Замените код в `HttpPost` `Delete` метод (с именем `DeleteConfirmed`) следующим кодом:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample13.cs)]
В шаблонном коде, который вы только что заменили, этот метод принимал только идентификатор записи:
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample14.cs)]
Вы изменили этот параметр, чтобы `Department` экземпляр сущности, созданные связывателя модели. Это дает доступ к `RowVersion` значение свойства помимо ключа записи.
[!code-csharp[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample15.cs)]
Вы также изменили имя метода действия с `DeleteConfirmed` на `Delete`. Формирования шаблонов код, называемый `HttpPost` `Delete` метод `DeleteConfirmed` для предоставления `HttpPost` метод уникальная сигнатура. (Среда CLR требует, чтобы перегруженные методы имели разные параметры метода.) Теперь, когда сигнатурах являются уникальными, можно не покидайте соглашение MVC и использовать то же имя для `HttpPost` и `HttpGet` удаления методов.
При перехвате ошибки параллелизма код повторно отображает страницу подтверждения удаления и предоставляет флаг, указывающий, что нужно отобразить сообщение об ошибке параллелизма.
В *Views\Department\Delete.cshtml*, replace формирования шаблонов код следующим кодом, который делает некоторые параметры форматирования, изменения и добавляет поля сообщения ошибки. Изменения выделены.
[!code-cshtml[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample16.cshtml?highlight=9,37,40,45-46)]
Этот код добавляет сообщение об ошибке между `h2` и `h3` заголовки:
[!code-cshtml[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample17.cshtml)]
Он заменяет `LastName` с `FullName` в `Administrator` поля:
[!code-cshtml[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample18.cshtml)]
Наконец, он добавляет скрытые поля для `DepartmentID` и `RowVersion` свойства после `Html.BeginForm` инструкции:
[!code-cshtml[Main](handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application/samples/sample19.cshtml)]
Запустите страницу индекса отделов. Щелкните правой кнопкой мыши **удаление** гиперссылку для английского языка подразделение и выберите **открыть в новом окне** в первом окне выберите **изменить** гиперссылки для английского языка отдел.
В первом окне, измените одно из значений и нажмите кнопку **Сохранить** :

Страница индекса подтверждает изменения.

Во втором окне щелкните **удалить**.

Вы видите сообщение об ошибке параллелизма, а значения кафедры обновляются с использованием актуальных сведений из базы данных.

Если нажать кнопку **Delete** (Удалить) еще раз, вы будете перенаправлены на страницу индекса, которая показывает, что кафедра была удалена.
## <a name="summary"></a>Сводка
На этом заканчивается введение в обработку конфликтов параллелизма. Сведения о других способах обрабатывать различные сценарии параллелизма см. в разделе [оптимистичного параллелизма шаблоны](https://blogs.msdn.com/b/adonet/archive/2011/02/03/using-dbcontext-in-ef-feature-ctp5-part-9-optimistic-concurrency-patterns.aspx) и [работа со значениями свойств](https://blogs.msdn.com/b/adonet/archive/2011/01/30/using-dbcontext-in-ef-feature-ctp5-part-5-working-with-property-values.aspx) блоге группы разработчиков платформы Entity Framework. Далее учебнике показано, как реализовать таблица на иерархию наследования для `Instructor` и `Student` сущности.
Ссылки на другие ресурсы Entity Framework можно найти в [Карта содержимого для доступа к данным ASP.NET](../../../../whitepapers/aspnet-data-access-content-map.md).
> [!div class="step-by-step"]
> [Назад](updating-related-data-with-the-entity-framework-in-an-asp-net-mvc-application.md)
> [Вперед](implementing-inheritance-with-the-entity-framework-in-an-asp-net-mvc-application.md)
| 102.554167 | 976 | 0.821355 | rus_Cyrl | 0.920096 |
7617d24f59902d6110549e1459a33c5df371bbbc | 165 | md | Markdown | exampleSite/content/motivacion/motivacionConsejos.md | pelos6/tema-simple-hugo | f6cdf3bf71db37ee93babd0bb6fd5ba7b6c9112c | [
"MIT"
] | null | null | null | exampleSite/content/motivacion/motivacionConsejos.md | pelos6/tema-simple-hugo | f6cdf3bf71db37ee93babd0bb6fd5ba7b6c9112c | [
"MIT"
] | null | null | null | exampleSite/content/motivacion/motivacionConsejos.md | pelos6/tema-simple-hugo | f6cdf3bf71db37ee93babd0bb6fd5ba7b6c9112c | [
"MIT"
] | null | null | null | ---
title: Consejos en dibujos
author: javier
tags:
- consejos
- dibujos
---


| 12.692308 | 48 | 0.690909 | spa_Latn | 0.888766 |
76184dff958f0242ec1b9b67cb60cd87c8acdcb4 | 5,521 | md | Markdown | docs-archive-a/2014/ssms/agent/create-a-multiserver-environment.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/ssms/agent/create-a-multiserver-environment.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-10-11T06:40:46.000Z | 2021-11-25T02:25:44.000Z | docs-archive-a/2014/ssms/agent/create-a-multiserver-environment.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-09-29T08:51:43.000Z | 2021-11-23T02:36:18.000Z | ---
title: Creare un ambiente multiserver | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: ssms
ms.topic: conceptual
helpviewer_keywords:
- SQL Server Agent, multiserver environments
- master servers [SQL Server], about master servers
- target servers [SQL Server], about target servers
- multiserver environments [SQL Server]
ms.assetid: edc2b60d-15da-40a1-8ba3-f1d473366ee6
author: stevestein
ms.author: sstein
ms.openlocfilehash: a6920920aa603c615cdc5f84a34a93204842052d
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 08/04/2020
ms.locfileid: "87722219"
---
# <a name="create-a-multiserver-environment"></a>Creazione di un ambiente multiserver
L'amministrazione multiserver richiede l'impostazione di un server master (MSX) e di uno o più server di destinazione (TSX). I processi che verranno eseguiti in tutti i server di destinazione vengono innanzitutto definiti nel server master e quindi scaricati nei server di destinazione.
Per impostazione predefinita, per le connessioni tra server master e server di destinazione sono abilitate la crittografia SSL (Secure Sockets Layer) completa e la convalida del certificato. Per altre informazioni, vedere [Impostazione delle opzioni di crittografia nei server di destinazione](set-encryption-options-on-target-servers.md).
Se si dispone di un numero elevato di server di destinazione, evitare di definire il server master in un server di produzione con requisiti di prestazioni significativi da altre funzionalità di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] , perché il traffico del server di destinazione può rallentare le prestazioni nel server di produzione. Se si inoltrano anche gli eventi a un server master dedicato, è possibile centralizzare l'intera amministrazione in un singolo server. Per altre informazioni, vedere [Gestire eventi](manage-events.md).
> [!NOTE]
> Per usare l'elaborazione dei processi multiserver, l'account del servizio [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Agent deve essere membro del ruolo **TargetServersRole** del database **msdb** nel server master. Tramite Configurazione guidata server master l'account del servizio viene automaticamente aggiunto a questo ruolo all'interno del processo di integrazione.
## <a name="considerations-for-multiserver-environments"></a>Considerazioni relative agli ambienti multiserver
Vedere la tabella seguente per le configurazioni MSX/TSX supportate.
||**TSX = 7.0**|**TSX = 8,0 < SP3**|**TSX = 8.0 SP3 o versione successiva**|**TSX = 9.0**|**TSX= 10.0**|**TSX = 10.5**|**TSX = 11.0**|
|-|--------------------|---------------------------|----------------------------------|--------------------|--------------------|---------------------|---------------------|
|**MSX = 7.0**|Sì|Sì|No|No|No|No|No|
|**MSX = 8,0 < SP3**|Sì|Sì|No|No|No|No|No|
|**MSX = 8.0 SP3 o versione successiva**|No|No|Sì|Sì|Sì|Sì|Sì|
|**MSX = 9.0**|No|No|No|Sì|Sì|Sì|Sì|
|**MSX = 10.0**|No|No|No|No|Sì|Sì|Sì|
|**MSX = 10.5**|No|No|No|No|No|Sì|Sì|
|**MSX = 11.0**|No|No|No|No|No|No|Sì|
Al momento della creazione di un ambiente multiserver, è opportuno considerare i problemi seguenti:
- Ogni server di destinazione fa riferimento a un solo server master. Per integrare un server di destinazione in un server master diverso, è necessario escluderlo dal server master corrente.
- Per modificare il nome di un server di destinazione, è necessario escludere il server, modificarne il nome e quindi reintegrarlo dopo la modifica.
- Per annullare una configurazione multiserver, è necessario escludere tutti i server di destinazione dal server master.
- SQL Server Integration Services supporta solo server di destinazione la cui versione è uguale o superiore alla versione del server master.
## <a name="related-tasks"></a>Attività correlate
Negli argomenti seguenti vengono illustrate le attività comuni necessarie per la creazione di un ambiente multiserver.
|Descrizione|Argomento|
|-----------------|-----------|
|Viene illustrato come creare un server master.|[Configurare un server master](make-a-master-server.md)|
|Viene illustrato come creare un server di destinazione.|[Configurare un server di destinazione](make-a-target-server.md)|
|Viene illustrato come integrare un server di destinazione in un server master.|[Integrare un server di destinazione in un server master](enlist-a-target-server-to-a-master-server.md)|
|Viene illustrato come escludere un server di destinazione da un server master.|[Escludere un server di destinazione da un server master](defect-a-target-server-from-a-master-server.md)|
|Viene illustrato come escludere più server di destinazione da un server master.|[Escludere più server di destinazione da un server master](defect-multiple-target-servers-from-a-master-server.md)|
|Viene illustrato come verificare lo stato di un server di destinazione.|[sp_help_targetserver ()Transact-SQL](/sql/relational-databases/system-stored-procedures/sp-help-targetserver-transact-sql)<br /><br /> [sp_help_targetservergroup ()Transact-SQL](/sql/relational-databases/system-stored-procedures/sp-help-targetservergroup-transact-sql)|
## <a name="see-also"></a>Vedere anche
[Risolvere i problemi relativi a processi multiserver che usano proxy](troubleshoot-multiserver-jobs-that-use-proxies.md)
| 75.630137 | 560 | 0.740446 | ita_Latn | 0.981768 |
7618fa90465968dddb4b63d09f0cf674ff0e287e | 6,627 | md | Markdown | docs/zh/guide/essentials/mock-api.md | LZQ5232/vuepress | 0510cb876abba8cba3bb7d53d141c4c26f47074b | [
"MIT"
] | null | null | null | docs/zh/guide/essentials/mock-api.md | LZQ5232/vuepress | 0510cb876abba8cba3bb7d53d141c4c26f47074b | [
"MIT"
] | null | null | null | docs/zh/guide/essentials/mock-api.md | LZQ5232/vuepress | 0510cb876abba8cba3bb7d53d141c4c26f47074b | [
"MIT"
] | null | null | null | # Mock Data
Mock data is an indispensable part of front-end development and the key to decoupling front-end and back-end work. By simulating request data — and even request logic — against interfaces agreed upon with the server side in advance, front-end development can proceed independently without being blocked by server-side progress.

## Swagger

In company projects, [swagger](https://swagger.io/) is usually used, with the back end simulating the business data.

**swagger** is a documentation generator for REST APIs. It generates documentation automatically from code comments, works across platforms, is open source, supports most languages, and has a good community. All in all, it is excellent and highly recommended.

[Online demo](http://petstore.swagger.io/?_ga=2.222649619.983598878.1509960455-2044209180.1509960455#/pet/addPet)

## Easy-Mock

[vue-admin-template](https://github.com/LZQ5232/vue-admin-template) previously used [easy-mock](https://easy-mock.com/login) to simulate data.

It is a purely front-end, visual service that can quickly generate and persist mock data. It is very easy to use, can be combined with `swagger`, and supports cross-origin requests out of the box. It is worth a try for both team and personal projects.

::: warning
The online version of `vue-admin-template` no longer uses `easy-mock`, because the free online `easy-mock` service is quite unstable and goes down from time to time. If you need it, you can follow its tutorial and host your own instance.
:::

## Mockjs

Since [vue-element-admin](https://github.com/LZQ5232/vue-element-admin) is a purely front-end personal project, all of its data is generated with [mockjs](https://github.com/nuysoft/Mock). The principle: it intercepts all requests and proxies them locally, then fabricates the data — which is why you will not see any outgoing requests in the `network` panel.

Its biggest problem is exactly that implementation mechanism. It rewrites the browser's `XMLHttpRequest` object in order to intercept all requests and proxy them locally. Most of the time this is convenient enough, but precisely because `XMLHttpRequest` is rewritten, things such as the `progress` method, or libraries that rely on `XMLHttpRequest` under the hood, become incompatible with it. Have a look at this project's [issues](https://github.com/LZQ5232/vue-element-admin/issues?utf8=%E2%9C%93&q=mock) to see how many people have been bitten by this.

Another problem is that, because the data is simulated locally, no real network request is ever made, so local debugging is painful — you can only debug through `console.log`. In `vue-element-admin`, for example, the only ways to find out what data the `getInfo()` interface returns are reading the source code or manually debugging.
## New solution <Badge text="v4.0.0+"/>

Since `v4.0`, a local `mock-server` is started to simulate data, while the online environment keeps using `mockjs` (because this is a purely front-end project; you could also set up an online server of your own to provide the data). Both the local and the online mock data are generated with `mockjs`, so a single set of mock definitions can be used across environments.

The benefit of this scheme is that it keeps the advantages of `mockjs` while solving the earlier pain points. Since the mock is implemented entirely on top of `webpack-dev-server`, the `mock-server` starts automatically when you start the front-end dev server. It also uses [chokidar](https://github.com/paulmillr/chokidar) to watch the contents of the `mock` folder: when they change, the previously registered mock API routes are cleared and the new ones are mounted dynamically, so hot reloading is supported. If you are interested, read the code in [mock-server.js](https://github.com/LZQ5232/vue-element-admin/blob/master/mock/mock-server.js). Because it is a real `server`, you can see exactly which data structures the interfaces return in the `network` panel. It also avoids the earlier problem of `mockjs` rewriting the `XMLHttpRequest` object and breaking many third-party libraries.

All requests in this project are sent through the wrapped [request.js](https://github.com/LZQ5232/vue-element-admin/blob/master/src/utils/request.js). Reading the source code, you will find that every request sets a `baseURL`, which is in turn set dynamically by reading the `process.env.VUE_APP_BASE_API` environment variable — which makes it easy to use different `api` addresses in different environments.

## Removal

If you do not want to use the `mock-server`, simply remove the `proxy` and the `after` middleware of `webpack-dev-server` in [vue.config.js](https://github.com/LZQ5232/vue-element-admin/blob/master/vue.config.js).

By default, local requests are proxied to `http://localhost:${port}/mock`. If you want to point them at your own mock address, modify `proxy`:
```js
proxy: {
// change xxx-api/login => mock/login
// detail: https://cli.vuejs.org/config/#devserver-proxy
[process.env.VUE_APP_BASE_API]: {
target: `http://localhost:${port}/mock`,
changeOrigin: true,
pathRewrite: {
['^' + process.env.VUE_APP_BASE_API]: ''
}
}
},
after: require('./mock/mock-server.js')
```
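As an illustration of what the `pathRewrite` rule above effectively does, here is a minimal, dependency-free sketch. The `rewritePath` helper is a hypothetical name for this write-up, not part of the project:

```javascript
// Minimal sketch of the pathRewrite rule above: strip the configured
// API prefix from the front of a request path before proxying it.
// `rewritePath` is an illustrative helper, not project code.
function rewritePath(path, baseApi) {
  // '^' anchors the prefix, mirroring ['^' + process.env.VUE_APP_BASE_API]: ''
  return path.replace(new RegExp('^' + baseApi), '');
}

console.log(rewritePath('/dev-api/user/login', '/dev-api')); // -> '/user/login'
```

A path that does not start with the prefix is left unchanged, which is exactly the behavior you want from an anchored rewrite.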
:::tip
**Note: this change requires restarting the dev server.**
:::

The `mock-server` is only used in the development environment; the online production environment currently uses `MockJs` for simulation. Remove it if you do not need it. See the code in [main.js](https://github.com/LZQ5232/vue-element-admin/blob/master/src/main.js):
```js
import { mockXHR } from '../mock'
if (process.env.NODE_ENV === 'production') {
mockXHR()
}
```
## Adding a mock

If you want to add mock data, find the `mock` folder in the project root, add the corresponding route there, and intercept and mock it.

For example, suppose that in [src/api/article](https://github.com/LZQ5232/vue-element-admin/blob/master/src/api/article.js) I need to add a `fetchComments` interface that queries the number of comments under an article. First declare the interface:
```js
export function fetchComments(id) {
return request({
url: `/article/${id}/comments`,
method: 'get'
})
}
```
After declaring the interface, we need to find the corresponding mock file, [mock/article.js](https://github.com/LZQ5232/vue-element-admin/blob/master/mock/article.js), and create a mock interface there that can intercept the route.

**Note: mock interception is route-based. Make sure the mock definition can match your api route; regular expressions are supported.**
```js
// mock for fetchComments
{
  // url must be able to match your interface route
  // e.g. the route for fetchComments may be /article/1/comments or /article/2/comments
  // so you need to match it with a regular expression
  url: '/article/[A-Za-z0-9]/comments',
  type: 'get', // must be the same type as declared for your interface
  response: (req, res) => {
    // the returned result
    // req and res detail see
    // https://expressjs.com/zh-cn/api.html#req
    return {
      code: 20000,
      data: {
        status: 'success'
      }
    }
  }
}
```
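To make the regex-based matching concrete, here is a small standalone sketch of how a mock server might pick the handler for an incoming request. The `findMock` helper and the in-memory `mocks` array are illustrative assumptions, not the project's actual implementation (see mock-server.js for that):

```javascript
// Illustrative only: match an incoming request against registered mock routes.
const mocks = [
  { url: '/article/[A-Za-z0-9]+/comments', type: 'get' },
  { url: '/roles', type: 'get' }
];

function findMock(method, path) {
  // The whole path must match the pattern, hence the ^...$ anchors.
  return mocks.find(
    m => m.type === method && new RegExp(`^${m.url}$`).test(path)
  );
}

console.log(Boolean(findMock('get', '/article/42/comments'))); // true
console.log(Boolean(findMock('post', '/roles')));              // false
```

Both the URL pattern and the HTTP method have to match, which is why the `type` field in your mock definition must agree with the method declared in the api file.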
## Modifying a mock

The most common scenario: you mock some data locally, and once the back end finishes an interface, you gradually replace the mocked interfaces with the real ones.

Take the `getRoles` interface in [src/api/role.js](https://github.com/LZQ5232/vue-element-admin/blob/master/src/api/role.js) as an example. Its data was originally mocked in [mock/role/index.js](https://github.com/LZQ5232/vue-element-admin/blob/master/mock/role/index.js). Now that we want to switch it to real back-end data, we just need to find the corresponding route in [mock/role/index.js](https://github.com/LZQ5232/vue-element-admin/blob/master/mock/role/index.js) and delete it. After that you can see the real data in the `network` panel.
```js
// route declared in the api
export function getRoles() {
  return request({
    url: '/roles',
    method: 'get'
  })
}

// find the corresponding mock route and delete it
{
  url: '/roles',
  type: 'get',
  response: _ => {
    return {
      code: 20000,
      data: roles
    }
  }
},
```
## Multiple servers

Currently the project starts only one `mock-server`, but you can of course have other `mock-server`s or proxied interfaces: some interfaces can go to one service while others go to another. You just need to give them different `baseURL`s. See [@/utils/request.js](https://github.com/LZQ5232/vue-element-admin/blob/master/src/utils/request.js).

Then configure multiple `proxy` entries in [vue.config.js](https://github.com/LZQ5232/vue-element-admin/blob/master/vue.config.js) according to the url rules you set.

[Related docs](https://webpack.docschina.org/configuration/dev-server/#devserver-proxy)
## Enabling pure front-end mocks

A purely front-end mock method is now also wrapped in [mock/index.js](https://github.com/LZQ5232/vue-element-admin/blob/master/mock/index.js#L19). You only need the following in [src/main.js](https://github.com/LZQ5232/vue-element-admin/tree/master/src):
```js
import { mockXHR } from '../mock'
mockXHR()
```
This turns everything into pure front-end mock data — the same as the mock scheme before `v4.0` (the principle is explained above). The online [demo](https://LZQ5232.github.io/vue-element-admin) you are seeing uses exactly this approach.

## Switching between local mock data and online data

Quite often we want to use mock data locally and real data in the online environment — or, more generally, different data in different environments.

- **The Easy-Mock way**

You need to make sure that your locally mocked api addresses are identical to the real ones in everything except the root path.

For example:

```
https://api-dev/login // local request
https://api-prod/login // online request
```

Using the [environment variables](/zh/guide/essentials/deploy.html#环境变量) introduced later, we can request different api addresses in different environments.
```bash
# .env.development
VUE_APP_BASE_API = '/dev-api' # inject the root path of the local api
```

```bash
# .env.production
VUE_APP_BASE_API = '/prod-api' # inject the root path of the online api
```
Then create the `axios` instance based on the environment variable so that it gets a different `baseURL`. See [@/utils/request.js](https://github.com/LZQ5232/vue-element-admin/blob/master/src/utils/request.js):
```js
// create an axios instance
const service = axios.create({
baseURL: process.env.BASE_API, // api 的 base_url
timeout: 5000 // request timeout
})
```
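As a minimal, dependency-free sketch of this environment-based selection (the `resolveBaseURL` helper is hypothetical, not part of the project), the idea boils down to:

```javascript
// Hypothetical helper: derive the API root from environment variables,
// mirroring how request.js reads process.env.VUE_APP_BASE_API.
// The '/dev-api' fallback is an assumption for this sketch.
function resolveBaseURL(env) {
  return env.VUE_APP_BASE_API || '/dev-api';
}

console.log(resolveBaseURL({ VUE_APP_BASE_API: '/prod-api' })); // '/prod-api'
console.log(resolveBaseURL({}));                                // '/dev-api'
```

Because the value is injected at build time from `.env.development` or `.env.production`, the same code requests different api roots in each environment.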
This way, the local and online apis are switched automatically based on environment variables.

- **Switching with Mock.js**

When we use `Mock.js` to simulate data locally and the real api online, the approach is much the same as with easy-mock. The key is a simple check: in the online environment, just do not import the mock data; only import `Mock.js` locally.
```js
// main.js
// decide via an environment variable whether the mock should be loaded
if (process.env.NODE_ENV === 'development') {
require('./mock') // simulation data
}
```
Mock data is imported only in the local environment.
---
layout: page-fullwidth
subheadline: "Retreat"
title: "MLK Retreat - Jackson, CA"
meta_teaser: "Family Retreat at Jackson, CA"
teaser: "To continue the tradition, our church's families with young children had a retreat this year in Jackson, CA, at a beautiful farm house overlooking gorgeous landscapes. Enjoy this collection of photos."
header: no
image:
thumb: "/thumbs/MLK-Jackson-Retreat-2015.jpg"
categories:
- events
---
<!--more-->
<div class="flex-video"> <iframe width="100%" height="720" src="http://rgb-scale.com/vacsfj336/index.php/photo-galleries/122-mlk-retreat-2015-jackson-ca" frameborder="0" allowfullscreen=""></iframe></div>
<div class="small-12 columns" style="padding: 0px; border-bottom: none;">
<p> </p>
{% include next-previous-post-in-category %}
</div>
# My-WishList-Dashboard
Here I will plan my actions in order to achieve my goals!
If you're having problems optimizing a mesh, remember to
- [ ] specify your optimesh version (`optimesh -v`),
- [ ] specify what command you were running,
- [ ] attach the mesh (or even better: a small mesh that reproduces the problem).
This will enable other people to reproduce and fix the problem.
iCal
=================
A Statamic V2 add-on that creates an iCal file that can be easily downloaded. Use it if you have events and want people to be able to add them to their calendar.
## Installing
1. Copy the folder contents to your Statamic `site\addons` directory
2. Update your addons, i.e. `php please update:addons`
3. There is no step 3
## Usage
```
<a href="{{ ical:download
start_date="{ start }"
end_date="{ end }"
summary="foo"
description="bar"
location="baz"
url="myevents.com" }}">Add to your calendar</a>
```
`start_date` & `end_date` can be a PHP date/time or a Unix timestamp.
`summary`, `description`, `location` and `url` are all optional.
## LICENSE
[MIT License](http://emd.mit-license.org) | 27.296296 | 148 | 0.667571 | eng_Latn | 0.943856 |
761b65d339780b7cfda0f0c13e29bbcaa207fb67 | 38 | md | Markdown | _my_tags/thoughts.md | Mad-Hyrax/blog | 774d80257b2f35de2cdac2f54b8b58f7859f5e7e | [
"MIT"
] | null | null | null | _my_tags/thoughts.md | Mad-Hyrax/blog | 774d80257b2f35de2cdac2f54b8b58f7859f5e7e | [
"MIT"
] | null | null | null | _my_tags/thoughts.md | Mad-Hyrax/blog | 774d80257b2f35de2cdac2f54b8b58f7859f5e7e | [
"MIT"
] | null | null | null | ---
slug: thoughts
name: thoughts
---
# Wellness Shop
React-Redux application with a Rails API backend. Uses Semantic UI for styling.
## Usage
To use this application, clone the repository and
(1) run `npm --prefix ./client/ install ./client/`
> To install all dependencies for the React/Redux frontend application. Alternatively, cd into client folder and run npm install.
(2) cd into Wellness-Shop folder & run `bundle`
> Installs ruby dependencies from the gem file.
(3) run `rails db:migrate db:seed`
> Creates the schema and seeds the API database with shop items.
(4) run `rake start`
> Boots the client application and API server via Foreman.
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/Anthony-Mendola/Wellness-Shop.
This project is intended to be a safe, welcoming space for collaboration, and contributors are expected
to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
## License
The application is available as open source under the terms of the
[MIT License]
---
title: "Lift and shift SQL Server Integration Services workloads to the cloud | Microsoft Docs"
ms.date: "10/31/2017"
ms.topic: "article"
ms.prod: "sql-server-2017"
ms.technology:
- "integration-services"
author: "douglaslMS"
ms.author: "douglasl"
manager: "craigg"
ms.workload: "Inactive"
---
# Lift and shift SQL Server Integration Services workloads to the cloud
You can now move your SQL Server Integration Services (SSIS) packages and workloads to the Azure cloud.
- Store and manage SSIS projects and packages in the SSIS Catalog database (SSISDB) on Azure SQL Database.
- Run packages in an instance of the Azure SSIS Integration Runtime, introduced as part of Azure Data Factory version 2.
- Use familiar tools such as SQL Server Management Studio (SSMS) for these common tasks.
## Benefits
Moving your on-premises SSIS workloads to Azure has the following potential benefits:
- **Reduce operational costs** and reduce the burden of managing infrastructure that you have when you run SSIS on-premises or on Azure virtual machines.
- **Increase high availability** with the ability to specify multiple nodes per cluster, as well as the high availability features of Azure and of Azure SQL Database.
- **Increase scalability** with the ability to specify multiple cores per node (scale up) and multiple nodes per cluster (scale out).
## Architecture overview
The following table highlights the differences between SSIS on premises and SSIS on Azure. The most significant difference is the separation of storage from compute.
| Storage | Runtime | Scalability |
|---|---|---|
| On premises (SQL Server) | SSIS runtime hosted by SQL Server | SSIS Scale Out (in SQL Server 2017 and later)<br/><br/>Custom solutions (in prior versions of SQL Server) |
| On Azure (SQL Database) | Azure SSIS Integration Runtime, a component of Azure Data Factory version 2 | Scaling options for the SSIS IR |
| | | |
Azure Data Factory hosts the runtime engine for SSIS packages on Azure. The runtime engine is called the Azure SSIS Integration Runtime (SSIS IR).
When you provision the SSIS IR, you can scale up and scale out by specifying values for the following options:
- The node size (including the number of cores) and the number of nodes in the cluster.
- The existing instance of Azure SQL Database to host the SSIS Catalog Database (SSISDB), and the service tier for the database.
- The maximum parallel executions per node.
You only have to provision the SSIS IR one time. After that, you can use familiar tools such as SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) to deploy, configure, run, monitor, schedule, and manage packages.
> [!NOTE]
> During this public preview, the Azure SSIS Integration Runtime is only available in the East US and North Europe regions.
Data Factory also supports other types of Integration Runtimes. To learn more about the SSIS IR and the other types of Integration Runtimes, see [Integration runtime in Azure Data Factory](https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime).
## Prerequisites
The capabilities described in this topic do not require SQL Server 2017 or SQL Server 2016.
These capabilities require the following versions of SQL Server Data Tools (SSDT):
- For Visual Studio 2017, version 15.3 (preview) or later.
- For Visual Studio 2015, version 17.2 or later.
> [!NOTE]
> When you deploy packages to Azure, the Package Deployment Wizard always upgrades the packages to the latest package format.
For more info about prerequisites in Azure, see [Lift and shift SQL Server Integration Services (SSIS) packages to Azure](https://docs.microsoft.com/en-us/azure/data-factory/tutorial-deploy-ssis-packages-azure).
## SSIS features on Azure
When you provision an instance of SQL Database to host SSISDB, the Azure Feature Pack for SSIS and the Access Redistributable are also installed. These components provide connectivity to **Excel and Access** files and to various **Azure** data sources, in addition to the data sources supported by the built-in components. You can't install **third-party components** for SSIS (including third-party components from Microsoft, such as the Attunity and SAP BI components) at this time.
The **name of the SQL Database** that hosts SSISDB becomes the first part of the four-part name to use when you deploy and manage packages from SSDT and SSMS - `<sql_database_name>.database.windows.net`.
You have to use the **project deployment model**, not the package deployment model, for projects you deploy to SSISDB on Azure SQL Database.
You continue to **design and build packages** on-premises in SSDT, or in Visual Studio with SSDT installed.
For info about how to connect to **on-premises data sources** from the cloud with Windows authentication, see [Connect to on-premises data sources with Windows Authentication](ssis-azure-connect-with-windows-auth.md).
## Common tasks
### Provision
Before you can deploy and run SSIS packages in Azure, you have to provision the SSISDB Catalog database and the Azure SSIS Integration Runtime. Follow the provisioning steps in this article: [Lift and shift SQL Server Integration Services (SSIS) packages to Azure](https://docs.microsoft.com/en-us/azure/data-factory/tutorial-deploy-ssis-packages-azure).
### Deploy and run packages
To deploy projects and run packages on SQL Database, you can use one of several familiar tools and scripting options:
- SQL Server Management Studio (SSMS)
- Transact-SQL (from SSMS, Visual Studio Code, or another tool)
- A command-line tool
- PowerShell
- C# and the SSIS management object model
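For example, a minimal Transact-SQL sketch of starting a deployed package through the SSISDB catalog stored procedures — note that the folder, project, and package names here are placeholders, not values from this article:

```sql
DECLARE @execution_id BIGINT;

-- Create an execution for a package already deployed to SSISDB.
-- 'MyFolder', 'MyProject', and 'Package.dtsx' are placeholder names.
EXEC SSISDB.catalog.create_execution
    @folder_name     = N'MyFolder',
    @project_name    = N'MyProject',
    @package_name    = N'Package.dtsx',
    @use32bitruntime = 0,
    @execution_id    = @execution_id OUTPUT;

-- Start the execution.
EXEC SSISDB.catalog.start_execution @execution_id;
```

When SSISDB is hosted on Azure SQL Database, you run this against the `<sql_database_name>.database.windows.net` server mentioned earlier, and the package executes on the Azure SSIS Integration Runtime.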
### Monitor packages
To monitor running packages in SSMS, you can use one of the following reporting tools in SSMS.
- Right-click **SSISDB**, and then select **Active Operations** to open the **Active Operations** dialog box.
- Select a package in Object Explorer, right-click and select **Reports**, then **Standard Reports**, then **All Executions**.
### Schedule packages
To schedule the execution of packages stored in SQL Database, you can use the following tools:
- SQL Server Agent on-premises
- The Data Factory SQL Server Stored Procedure activity
For more info, see [Schedule SSIS package execution on Azure](ssis-azure-schedule-packages.md).
## Next steps
To get started with SSIS workloads on Azure, see the following articles:
- [Lift and shift SQL Server Integration Services (SSIS) packages to Azure](https://docs.microsoft.com/en-us/azure/data-factory/tutorial-deploy-ssis-packages-azure)
- [Deploy, run, and monitor an SSIS package on Azure](ssis-azure-deploy-run-monitor-tutorial.md)
| 66.118812 | 484 | 0.774184 | eng_Latn | 0.974359 |
761c9a8936b25e9ae9076633f23f88d946f37242 | 19 | md | Markdown | README.md | dezhiShen/golang-fyne-study | 62c83426c190f345cd28ed25a3b50c78abdad0d7 | [
"MIT"
] | 1 | 2021-01-22T06:43:44.000Z | 2021-01-22T06:43:44.000Z | README.md | dezhiShen/golang-gui-study | 62c83426c190f345cd28ed25a3b50c78abdad0d7 | [
"MIT"
] | null | null | null | README.md | dezhiShen/golang-gui-study | 62c83426c190f345cd28ed25a3b50c78abdad0d7 | [
"MIT"
] | null | null | null | # golang-gui-study
| 9.5 | 18 | 0.736842 | eng_Latn | 0.468577 |
761ca90187bf51938fa39b8dc25fe77e00a9f5d0 | 11,756 | md | Markdown | articles/service-bus-messaging/service-bus-amqp-overview.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-bus-messaging/service-bus-amqp-overview.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-bus-messaging/service-bus-amqp-overview.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Overview of AMQP 1.0 in Azure Service Bus
description: Learn how Azure Service Bus supports Advanced Message Queuing Protocol (AMQP), an open standard protocol.
ms.topic: article
ms.date: 11/20/2020
ms.openlocfilehash: a643869d7d89b287e899b1eab89c5b9ec11856e5
ms.sourcegitcommit: 1d366d72357db47feaea20c54004dc4467391364
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 11/23/2020
ms.locfileid: "95396811"
---
# <a name="amqp-10-support-in-service-bus"></a>AMQP 1.0 support in Service Bus

The Azure Service Bus cloud service uses [Advanced Message Queuing Protocol (AMQP) 1.0](http://docs.oasis-open.org/amqp/core/v1.0/amqp-core-overview-v1.0.html) as its primary means of communication. Microsoft has engaged with partners across the industry, both customers and vendors of competing messaging brokers, to develop and evolve AMQP over the past decade, with new extensions being developed in the [OASIS AMQP Technical Committee](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=amqp). AMQP 1.0 is an ISO and IEC standard ([ISO 19464:2014](https://www.iso.org/standard/64955.html)).

AMQP enables you to build cross-platform, hybrid applications using a vendor-neutral, implementation-independent, open-standard protocol. You can construct applications from components built in different languages and frameworks, running on different operating systems. All of these components can connect to Service Bus and seamlessly exchange structured business messages, efficiently and with full fidelity.

## <a name="introduction-what-is-amqp-10-and-why-is-it-important"></a>Introduction: What is AMQP 1.0 and why is it important?

Traditionally, message-oriented middleware has used proprietary protocols for communication between client applications and brokers. This means that once you have selected a particular vendor's messaging broker, you must use that vendor's libraries to connect your client applications to it. The result is a degree of dependence on that vendor, since porting an application to a different product requires code changes in all of the connected applications. In the Java community, language-specific API standards such as Java Message Service (JMS) and the Spring Framework abstractions have alleviated this to some extent, but they have a very narrow feature scope and exclude developers working in other languages.

Furthermore, connecting messaging brokers from different vendors is difficult. It typically requires application-level bridging to move messages from one system to another and to translate between their proprietary message formats. This is a common requirement — for example, when you must provide a new unified interface to older systems, or integrate IT systems following a merger. AMQP allows interconnecting brokers directly, for instance with routers like [Apache Qpid Dispatch Router](https://qpid.apache.org/components/dispatch-router/index.html) or broker-native "shovels" like that of [RabbitMQ](service-bus-integrate-with-rabbitmq.md).

The software industry is a fast-moving business; new programming languages and application frameworks are introduced at a sometimes bewildering pace. Similarly, the requirements of IT systems evolve over time, and developers want to take advantage of the latest platform features. However, sometimes the selected messaging vendor does not support these platforms. If the messaging protocols are proprietary, it is not possible for others to provide libraries for these new platforms. Therefore, you must use approaches such as building gateways or bridges that enable you to continue to use the messaging product.

The development of Advanced Message Queuing Protocol (AMQP) 1.0 was motivated by these issues. It originated at JPMorgan Chase, which, like most financial services firms, is a heavy user of message-oriented middleware. The goal was simple: to create an open-standard messaging protocol that makes it possible to build message-based applications using components built with different languages, frameworks, and operating systems, all using best-of-breed components from a range of vendors.
## <a name="amqp-10-technical-features"></a>AMQP 1.0 technical features

AMQP 1.0 is an efficient, reliable, wire-level messaging protocol that you can use to build robust, cross-platform messaging applications. The protocol has a simple goal: to define the mechanics of the secure, reliable, and efficient transfer of messages between two parties. The messages themselves are encoded using a portable data representation that enables heterogeneous senders and receivers to exchange structured business messages with full fidelity. The following is a summary of the most important features:

* **Efficient**: AMQP 1.0 is a connection-oriented protocol that uses a binary encoding for the protocol instructions and the business messages transferred over it. It incorporates sophisticated flow-control schemes to maximize the utilization of the network and the connected components. The protocol was designed to strike a balance between efficiency, flexibility, and interoperability.
* **Reliable**: The AMQP 1.0 protocol allows messages to be exchanged with a range of reliability guarantees, from fire-and-forget to reliable, exactly-once acknowledged delivery.
* **Flexible**: AMQP 1.0 is a flexible protocol that can be used to support different topologies. The same protocol can be used for client-to-client, client-to-broker, and broker-to-broker communications.
* **Broker-model independent**: The AMQP 1.0 specification does not make any requirements on the messaging model used by a broker. This means that it is possible to easily add AMQP 1.0 support to existing messaging brokers.
## <a name="amqp-10-is-a-standard-with-a-capital-s"></a>AMQP 1.0 is a Standard (with a capital 'S')

AMQP 1.0 is an international standard, approved by ISO and IEC as ISO/IEC 19464:2014.

AMQP 1.0 has been in development since 2008 by a core group of more than 20 companies, both technology suppliers and end-user firms. During that time, user firms contributed their real-world business requirements and the technology vendors evolved the protocol to meet those requirements. Throughout the process, vendors participated in workshops in which they collaborated to validate interoperability between their implementations.

In October 2011, the development work transitioned to a technical committee within the Organization for the Advancement of Structured Information Standards (OASIS), and the OASIS AMQP 1.0 Standard was released in October 2012. The following firms participated in the technical committee during the development of the standard:

* **Technology vendors**: Axway Software, Huawei Technologies, IIT Software, INETCO Systems, Kaazing, Microsoft, Mitre Corporation, Primeton Technologies, Progress Software, Red Hat, SITA, Software AG, Solace Systems, VMware, WSO2, Zenika.
* **User firms**: Bank of America, Credit Suisse, Deutsche Boerse, Goldman Sachs, JPMorgan Chase.

The current chairs of the [OASIS AMQP Technical Committee](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=amqp) represent Red Hat and Microsoft.

Some of the commonly cited benefits of open standards include:

* Less chance of vendor lock-in
* Interoperability
* Broad availability of libraries and tooling
* Protection against obsolescence
* Availability of knowledgeable staff
* Lower and manageable risk
## <a name="amqp-10-and-service-bus"></a>AMQP 1.0 and Service Bus

AMQP 1.0 support in Azure Service Bus means that you can use the Service Bus queuing and publish/subscribe brokered messaging features from a range of platforms using an efficient binary protocol. Furthermore, you can build applications composed of components built using a mix of languages, frameworks, and operating systems.

The following figure illustrates an example deployment in which Java clients running on Linux, written using the standard Java Message Service (JMS) API, and .NET clients running on Windows exchange messages via Service Bus using AMQP 1.0.

![Diagram showing one Service Bus exchanging messages with two Linux environments and two Windows environments.][0]

**Figure 1: Example deployment scenario showing cross-platform messaging using Service Bus and AMQP 1.0**

All supported Service Bus client libraries available via the Azure SDK use AMQP 1.0.

- [Azure Service Bus for .NET](https://docs.microsoft.com/dotnet/api/overview/azure/service-bus?view=azure-dotnet&preserve-view=true)
- [Azure Service Bus libraries for Java](https://docs.microsoft.com/java/api/overview/azure/servicebus?view=azure-java-stable&preserve-view=true)
- [Azure Service Bus provider for Java JMS 2.0](how-to-use-java-message-service-20.md)
- [Azure Service Bus modules for JavaScript and TypeScript](https://docs.microsoft.com/javascript/api/overview/azure/service-bus?view=azure-node-latest&preserve-view=true)
- [Azure Service Bus libraries for Python](https://docs.microsoft.com/python/api/overview/azure/servicebus?view=azure-python&preserve-view=true)

In addition, you can use Service Bus from any AMQP 1.0 compliant protocol stack:
| Language | Library |
| --- | --- |
| Java | [Apache Qpid Proton-J](https://qpid.apache.org/proton/index.html) |
| C/C++ |[Azure uAMQP C](https://github.com/azure/azure-uamqp-c/), [Apache Qpid Proton-C](https://qpid.apache.org/proton/index.html) |
| Python |[Azure uAMQP for Python](https://github.com/azure/azure-uamqp-python/), [Apache Qpid Proton Python](https://qpid.apache.org/releases/qpid-proton-0.32.0/proton/python/docs/overview.html) |
| PHP | [Azure uAMQP for PHP](https://github.com/vsouz4/azure-uamqp-php/) |
| Ruby | [Apache Qpid Proton Ruby](https://github.com/apache/qpid-proton/tree/master/ruby) |
| Go | [Azure Go AMQP](https://github.com/Azure/go-amqp), [Apache Qpid Proton Go](https://github.com/apache/qpid-proton/tree/master/go/examples)
| C#/F#/VB | [AMQP .NET Lite](https://github.com/Azure/amqpnetlite), [Apache NMS AMQP](https://github.com/apache/activemq-nms-amqp)|
| JavaScript/Node | [Rhea](https://github.com/grs/rhea) |

**Figure 2: Table of AMQP 1.0 client libraries**
## <a name="summary"></a>Summary

* AMQP 1.0 is an open, reliable messaging protocol that you can use to build cross-platform, hybrid applications. AMQP 1.0 is an OASIS standard.

## <a name="next-steps"></a>Next steps

Ready to learn more? Visit the following links:

* [Use Service Bus from .NET with AMQP]
* [Use Service Bus from Java with AMQP]
* [Install Apache Qpid Proton-C on an Azure Linux VM]
[0]: ./media/service-bus-amqp-overview/service-bus-amqp-1.png
[Use Service Bus from .NET with AMQP]: service-bus-amqp-dotnet.md
[Use Service Bus from Java with AMQP]: ./service-bus-java-how-to-use-jms-api-amqp.md
[Install Apache Qpid Proton-C on an Azure Linux VM]::
| 115.254902 | 783 | 0.814648 | pol_Latn | 0.999901 |
761cd720d105851913cc299e4c21bcc7ab8cbb16 | 1,906 | md | Markdown | docs/framework/unmanaged-api/fusion/createapplicationcontext-function.md | Athosone/docs.fr-fr | 83c2fd74def907edf5da4a31fee2d08133851d2f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/createapplicationcontext-function.md | Athosone/docs.fr-fr | 83c2fd74def907edf5da4a31fee2d08133851d2f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/createapplicationcontext-function.md | Athosone/docs.fr-fr | 83c2fd74def907edf5da4a31fee2d08133851d2f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: CreateApplicationContext function
ms.date: 03/30/2017
api_name:
- CreateApplicationContext
api_location:
- fusion.dll
api_type:
- DLLExport
f1_keywords:
- CreateApplicationContext
helpviewer_keywords:
- CreateApplicationContext function [.NET Framework fusion]
ms.assetid: 7bf8a141-b2c0-4058-9885-1cef7dcaa811
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: d98829b29100824e5d606e23aaf287c9f6e81d69
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/23/2019
ms.locfileid: "61771915"
---
# <a name="createapplicationcontext-function"></a>CreateApplicationContext, fonction
Cette fonction prend en charge l’infrastructure .NET Framework et n’est pas destinée à être utilisée directement depuis votre code.
## <a name="syntax"></a>Syntaxe
```
HRESULT CreateApplicationContext (
[in] IAssemblyName *pName,
[out] LPPAPPLICATIONCONTEXT *ppCtx
);
```
## <a name="parameters"></a>Paramètres
`pName`
[in] Pointeur vers un nom convivial.
`ppCtx`
[out] Pointeur vers un contexte d’application.
## <a name="requirements"></a>Configuration requise
**Plateformes :** Consultez [Configuration requise](../../../../docs/framework/get-started/system-requirements.md).
**En-tête :** Fusion.h
**Bibliothèque :** Inclus en tant que ressource dans le fichier Fusion.dll
**Versions du .NET Framework :** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
## <a name="see-also"></a>Voir aussi
- [IAssemblyCache, interface](../../../../docs/framework/unmanaged-api/fusion/iassemblycache-interface.md)
- [Fonctions statiques globales de fusion](../../../../docs/framework/unmanaged-api/fusion/fusion-global-static-functions.md)
- [Global Assembly Cache](../../../../docs/framework/app-domains/gac.md)
| 32.305085 | 133 | 0.73085 | yue_Hant | 0.409989 |
761d2deaed20a2a8615c3ffb4186644a05a141ae | 181 | md | Markdown | LICENSE.md | bk/historia | 42c542732c6910a1fab18deb45f935944afb5774 | [
"CC-BY-3.0"
] | null | null | null | LICENSE.md | bk/historia | 42c542732c6910a1fab18deb45f935944afb5774 | [
"CC-BY-3.0"
] | null | null | null | LICENSE.md | bk/historia | 42c542732c6910a1fab18deb45f935944afb5774 | [
"CC-BY-3.0"
] | null | null | null | # License
Like the original HTML5 UP Story template, this wmk theme is licensed under the [Creative Commons Attribution 3.0 License](https://creativecommons.org/licenses/by/3.0/).
| 45.25 | 169 | 0.78453 | eng_Latn | 0.931068 |
761eee91b6920b35ba86583e031f61e7034ee86f | 35 | md | Markdown | README.md | mifopen/notion-sdk | 794bfa7f579d203e4c694cb91c72476643220496 | [
"MIT"
] | null | null | null | README.md | mifopen/notion-sdk | 794bfa7f579d203e4c694cb91c72476643220496 | [
"MIT"
] | null | null | null | README.md | mifopen/notion-sdk | 794bfa7f579d203e4c694cb91c72476643220496 | [
"MIT"
] | null | null | null | # notion-sdk
dotnet SDK for Notion
| 11.666667 | 21 | 0.771429 | kor_Hang | 0.778885 |
761f4b34e31d957837126e3644ce28fa786ee9f1 | 1,026 | md | Markdown | data/content/fate-extra-material/archer.zh.md | tmdict/tmdict | c2f8ddb7885a91d01343de4ea7b66fea78351d94 | [
"MIT"
] | 3 | 2022-02-25T11:13:45.000Z | 2022-02-28T11:55:41.000Z | data/content/fate-extra-material/archer.zh.md | SomiaWhiteRing/tmdict | 13c6c818c84a65ee956535e08d20246bde87dd48 | [
"MIT"
] | null | null | null | data/content/fate-extra-material/archer.zh.md | SomiaWhiteRing/tmdict | 13c6c818c84a65ee956535e08d20246bde87dd48 | [
"MIT"
] | 2 | 2022-02-25T09:59:50.000Z | 2022-02-28T11:55:09.000Z | ---
parent: archer
source: fate-extra-material
id: encyclopedia-of-fate-extra
language: zh
weight: 1
translation: "agemizy"
category:
- servant
---
One of the main Servants with whom the protagonist can form a contract.
A young man of about twenty-five who wears a red coat.
A Heroic Spirit of forged iron who, despite holding the Archer class, mainly fights at close quarters. To avoid confusion with the green-clad Archer, he is sometimes called "Red Tea" or "Red Archer".
He uses the formal "watashi" in the first person; only in certain situations does a rougher "ore" slip out.
A Servant who has been profiled so many times already that everyone knows him well.
A cynic — cool and detached, yet an attentive, butler-like Servant. Because his origin differs from that of other Servants, he cannot be called an orthodox Heroic Spirit.
This Heroic Spirit is called a Guardian, something like a defense mechanism produced unconsciously for the sake of humanity's survival.
The key point is that he is a faceless representative chosen from the nameless masses.
"When a threat appears that would hasten humanity's destruction, eliminate the elements that allow that threat to exist" — the Guardian manifests for this purpose and sees the job through to the end.
For why this Heroic Spirit became a Guardian, see the main story of EXTRA.
Since he is basically a cool-headed, detached workaholic, he fits the image of an enforcer of the law who lets no personal feelings interfere. His perpetually stern face is probably also a product of that lack of freedom.
Although established as a knight of the bow, he was originally a magus of the old world.
He was a forger who used Projection magecraft — magecraft that, through imagination alone, copies a tool in just a few minutes — to mass-produce counterfeits of famous and demonic swords.
His basic armament is a bow because he is not especially strong as a Heroic Spirit, and his final fighting style settled on sniping.
While he is broadly the same existence as the Archer of stay night (though not the same person), his true name as a Heroic Spirit is different.
Why his true name is a common noun rather than a person's name is told in the special private-room scenes of the EXTRA main story.
Unlike Cas-ko the fox, it is easy to see: hold private-room conversations diligently in every round of battle and you can raise his flag, Hakuno (白野).
<>
In CCC, instead of the half-naked look with the red jacket, he appears in a stylish red deerskin jacket — a dependable rogue design.
On the far side of the moon he coolly, if roughly, grasps the situation and rescues the amnesiac protagonist, like a nagging drill instructor.
His opponent in CCC is Meltlilith.
Only the female protagonist + Archer combination causes the Meltlilith-related events to change dramatically.
And among the Servant endings, only Archer's differs from the rest.
Saber and the others are "the future that begins from now";
Archer is "the present that begins from the future".
As for why that is — the "you" who fought alongside Archer all this way should understand without any explanation.
| 24.428571 | 62 | 0.850877 | yue_Hant | 0.345213 |
761f71afef38aca61a56886604ca3e3490e96863 | 536 | md | Markdown | content/xkcd/0510.md | whatifrussian/xkcdbird | e21cd93b6f45ef04a46328cdd0feececfdd58984 | [
"Unlicense"
] | null | null | null | content/xkcd/0510.md | whatifrussian/xkcdbird | e21cd93b6f45ef04a46328cdd0feececfdd58984 | [
"Unlicense"
] | null | null | null | content/xkcd/0510.md | whatifrussian/xkcdbird | e21cd93b6f45ef04a46328cdd0feececfdd58984 | [
"Unlicense"
] | null | null | null | Title: Сбой в «Сбрасывании яиц»
Slug: 510
Category: xkcd
Date: 2009-01-13 15:55:30
SourceNum: 510
SourceTitle: Egg Drop Failure
Image: /comics/0510.png
MicroImage: /comics/0510_micro.png
MiniImage: /comics/0510_mini.png
Description: I heard my brother Ricky won the competition by leaving the egg inside the chicken.
[A man drops an egg in a basket under a parachute. Another parachute with an egg in its basket is still in the air. Another is on the ground. Below, two people are watching]
*crunch*
[A chick pops out of the middle parachute's basket]
Цыплёнок: *чик-чирик* | 35.733333 | 141 | 0.768657 | rus_Cyrl | 0.928251 |
761f74f2bb2cc5ac891a8915baa8205ba6eee970 | 1,974 | md | Markdown | objetivos/marioanloru.md | Jovalga/IV-18-19 | ef5324aa59331103d1665c60c9d99a5a53c1dfa7 | [
"Artistic-2.0"
] | null | null | null | objetivos/marioanloru.md | Jovalga/IV-18-19 | ef5324aa59331103d1665c60c9d99a5a53c1dfa7 | [
"Artistic-2.0"
] | null | null | null | objetivos/marioanloru.md | Jovalga/IV-18-19 | ef5324aa59331103d1665c60c9d99a5a53c1dfa7 | [
"Artistic-2.0"
] | null | null | null | Objetivos Infraestructura Virtual
============================
### Milestone 0
- [x] Create the objectives file
- [x] Configure git
- [x] Fork the repository [https://github.com/JJ/IV-18-19](https://github.com/JJ/IV-18-19)
- [x] Create the repository for the [project](https://github.com/marioanloru/Billboard-IV)
- [x] Create the repository for the [exercises](https://github.com/marioanloru/IV-18-19-Ejercicios)
- [x] Submit the created repositories and open the pull request
### Milestone 1
#### Week 3
- [x] Understand how software testing fits into the concept of virtual infrastructure
- [x] Understand and use version managers in different programming languages
- [x] Understand requirements files and their usefulness in virtual infrastructures
- [x] Understand what objectives are and complete them correctly
- [x] Manage and create milestones on GitHub
- [x] Understand the JSON format and its importance for configuration files and generic data interchange
- [x] Start learning a new programming language, in my case Node.js
- [x] Understand the usefulness of virtualization and define a file of development tools
- [x] Install the tools needed to create and apply tests in a project
- [x] Understand the role of build tools (automation tools)
- [ ] Complete and submit the unit 1 exercises
### Milestone 2
#### Week 4
- [x] Start configuring continuous integration systems
- [x] Understand how software testing fits into the concept of virtual infrastructure
- [x] Understand the platform level of virtual infrastructure and its use in testing and production
- [x] Understand and fix errors in the submission of the project's first milestone
- [x] Understand automatic deployment mechanisms using git
- [x] Understand the YAML format and its use for describing different virtual infrastructures
| 48.146341 | 110 | 0.762411 | spa_Latn | 0.994753 |
761fd4052d18ae5a5c3249c1f58e4824dd4a5dae | 1,759 | md | Markdown | Cosmere/Tears of Edgli.md | Malthemester/CoppermindScraper | 1a1d70a6a1e0aff2d8e8cf5d124da3c16c061a43 | [
"MIT"
] | null | null | null | Cosmere/Tears of Edgli.md | Malthemester/CoppermindScraper | 1a1d70a6a1e0aff2d8e8cf5d124da3c16c061a43 | [
"MIT"
] | null | null | null | Cosmere/Tears of Edgli.md | Malthemester/CoppermindScraper | 1a1d70a6a1e0aff2d8e8cf5d124da3c16c061a43 | [
"MIT"
] | null | null | null | |**Tears of Edgli**|
|-|-|
|by Isaac Stewart |
|**World**|[[Nalthis\|Nalthis]]|
|**Universe**|[[Cosmere\|Cosmere]]|
|**Featured In**|*Warbreaker*|
>“*Some scholars say that the Manywar was fought over these flower petals, that the kingdoms of Kuth and Huth were destroyed by little drips of color.*”
\-Hoid[1]
The **Tears of Edgli** are plants found in the valley jungle of [[Hallandren\|Hallandren]] on [[Nalthis\|Nalthis]]. They produce dyes that hold fast in any cloth; other dyes do not work nearly as well, making the plants highly desirable. They come in different colors and are very efficient for [[Awakening\|Awakening]].
## Relation to Endowment
The Tears of Edgli are named after [[Edgli\|Edgli]], the [[Vessel\|Vessel]] of [[Endowment\|Endowment]]. They only grow in Hallandren because, much like the [[Atium\|atium]] geodes in the [[Pits of Hathsin\|Pits of Hathsin]] on [[Scadrial\|Scadrial]], they are dependent on the [[Investiture\|Investiture]] seeping into the ground from Endowment's [[Perpendicularity\|perpendicularity]], which is nearby. Their ability to power Awakening much more efficiently likely results from this Investiture and connection to Endowment.
## The Manywar
One of the causes of the [[Manywar\|Manywar]] was the economic value of the Tears of Edgli.
## Economics
The Tears are the reason for the wealth of the people of [[Hallandren\|Hallandren]]. Despite Hallandren's remote location, the dyes were a rare and attractive commodity to individuals throughout [[Nalthis\|Nalthis]], making [[T'Telir\|T'Telir]] a prime port and providing a strong foundation for trade between Hallandren and other kingdoms.
https://coppermind.net/wiki/Tears_of_Edgli | 53.30303 | 525 | 0.751563 | eng_Latn | 0.995954 |
76201af52d4d7d74430f8c96915a5dc4999db0ea | 2,335 | md | Markdown | src/locations/location-104.md | designtocombatcovid19/testinglocations | 498794cc6433073b5f8dcc76a2adbc7457bfdaee | [
"MIT"
] | null | null | null | src/locations/location-104.md | designtocombatcovid19/testinglocations | 498794cc6433073b5f8dcc76a2adbc7457bfdaee | [
"MIT"
] | 4 | 2021-03-02T01:16:23.000Z | 2022-03-08T23:19:34.000Z | src/locations/location-104.md | designtocombatcovid19/testinglocations | 498794cc6433073b5f8dcc76a2adbc7457bfdaee | [
"MIT"
] | null | null | null | ---
layout: location-page
date: Last Modified
description: "Local COVID-19 testing is available at Marlins Park in Miami, Florida, USA."
permalink: "locations/florida/miami/marlins-park/"
tags:
- locations
- florida
title: Marlins Park
uniqueName: marlins-park
state: Florida
stateAbbr: FL
hood: "Miami"
address: "501 Marlins Way"
city: "Miami"
zip: "33125"
zipsNearby: "33427 33428 33429 33431 33432 33433 33434 33464 33481 33486 33487 33488 33496 33497 33498 33499 33424 33425 33426 33435 33436 33437 33472 33473 33474 33004 33441 33442 33443 33444 33445 33446 33448 33482 33483 33484 33301 33302 33303 33304 33305 33306 33307 33308 33309 33310 33311 33312 33313 33314 33315 33316 33317 33318 33319 33320 33321 33322 33323 33324 33325 33326 33327 33328 33329 33330 33331 33332 33334 33335 33336 33337 33338 33339 33340 33345 33346 33348 33349 33351 33355 33359 33388 33394 33008 33009 33002 33010 33011 33012 33013 33014 33015 33016 33017 33018 33019 33020 33021 33022 33023 33024 33025 33026 33027 33028 33029 33081 33083 33084 33030 33031 33032 33033 33034 33035 33039 33090 33092 33037 33449 33454 33460 33461 33462 33463 33465 33466 33467 33101 33102 33106 33111 33112 33114 33116 33122 33124 33125 33126 33127 33128 33129 33130 33131 33132 33133 33134 33135 33136 33137 33138 33142 33143 33144 33145 33146 33147 33149 33150 33151 33152 33153 33155 33156 33157 33158 33159 33160 33161 33162 33163 33164 33165 33166 33167 33168 33169 33170 33172 33173 33174 33175 33176 33177 33178 33179 33180 33181 33182 33183 33184 33185 33186 33187 33188 33189 33190 33193 33194 33196 33197 33199 33206 33222 33231 33233 33234 33238 33242 33243 33245 33247 33255 33256 33257 33261 33265 33266 33269 33280 33283 33296 33299 33109 33119 33139 33140 33141 33154 33239 33054 33055 33056 33060 33061 33062 33063 33064 33065 33066 33067 33068 33069 33071 33072 33073 33074 33075 33076 33077 33093 33097 33082 33070 33107 33110 33121 33148 33195 33447"
mapUrl: "http://maps.apple.com/?q=Marlins+Park&address=501+Marlins+Way,Miami,Florida,33125"
locationType: Drive-thru
phone: "305-499-8767"
website: "undefined"
onlineBooking: undefined
closed: undefined
closedUpdate: June 30th, 2020
notes: "For individuals with symptoms. By appointment only."
days: Contact for hours of operation.
ctaMessage: Call 305-499-8767
ctaUrl: "tel:305-499-8767"
--- | 80.517241 | 1,580 | 0.810278 | yue_Hant | 0.521002 |
76204832b2460559b3d48d158dc71915176a8c43 | 688 | md | Markdown | SECURITY.md | sosavle/cvss.js | 30d41ac751f66e423051544b279fe41861be351a | [
"MIT"
] | 6 | 2020-10-02T09:24:33.000Z | 2020-10-12T08:41:24.000Z | SECURITY.md | sosavle/cvss.js | 30d41ac751f66e423051544b279fe41861be351a | [
"MIT"
] | 46 | 2020-10-02T09:19:15.000Z | 2020-10-16T18:00:09.000Z | SECURITY.md | sosavle/cvss.js | 30d41ac751f66e423051544b279fe41861be351a | [
"MIT"
] | 16 | 2020-10-02T09:45:46.000Z | 2020-10-13T09:06:57.000Z | # cvss.js Security Issues
## Reporting Security Issues
The @turingpointde/cvss.js team takes security bugs seriously. We appreciate your efforts to
responsibly disclose your findings, and will make every effort to acknowledge
your contributions.
To report a security issue, email [[email protected]](mailto:[email protected]?subject=SECURITY) and include the
word "SECURITY" in the subject line.
The @turingpointde/cvss.js team will send a response indicating the next steps in handling your
report. After the initial reply to your report, the team will keep you informed
of the progress towards a fix and full announcement, and may ask for additional
information or guidance.
| 40.470588 | 95 | 0.8125 | eng_Latn | 0.998579 |
7620a42aa63e319502eccae5121288b74eadb566 | 1,426 | md | Markdown | README.md | ir9/renameexp | f01a4cd8d3ddc310985b00880745421abd398eef | [
"MIT"
] | null | null | null | README.md | ir9/renameexp | f01a4cd8d3ddc310985b00880745421abd398eef | [
"MIT"
] | null | null | null | README.md | ir9/renameexp | f01a4cd8d3ddc310985b00880745421abd398eef | [
"MIT"
] | null | null | null | # renameexp / CG集を買ったらファイル名の連番の桁数が可変長でビュアーが順番に開いてくれない問題を解決するヤツ
1. Buy a lewd CG set, download it, and extract it
2. Excitedly open the first jpeg
3. Open the next image in your viewer (usually the "→" key)
4. **Why are we suddenly at the climax?!** [**DAMN IIIIIIIT!!!!!!!!!!!!!!!!!!!!!**](https://www.youtube.com/results?search_query=%E3%82%AD%E3%83%BC%E3%83%9C%E3%83%BC%E3%83%89%E3%82%AF%E3%83%A9%E3%83%83%E3%82%B7%E3%83%A3%E3%83%BC)
This sad, sad accident generally happens when the files are named like this:
```
ero-cg-1.jpg
ero-cg-2.jpg
ero-cg-3.jpg
:
ero-cg-9.jpg
ero-cg-10.jpg
```
In other words, after opening `ero-cg-1.jpg` the viewer opens `ero-cg-10.jpg` next. The last image. Straight to the climax. Sad face.
To solve that problem, this script — **"the tool that fixes variable-digit file numbering so your viewer opens a CG set in order"** — renames the files sensibly.
Me: "Past, present, future — in every universe, I want to rid the world of people left sad because the climax starts at image two!!"
"You're not a girl, so no."
"Aww. Being a magical girl sounds great. I wanna be one."
"No."
"Aww."
## Usage
Prerequisite: make Python 3 available
```bash
python renameexp.py [-mpb] path_or_filelist
```
```
// example
python renameexp.py /path/to/your/feti-images/dir/
```
Basically you just feed it a directory. Instead of a directory, you can also feed it a text file listing the image files.
### `-m`, `--move`
Renames the files. Enabled by default when `path_or_filelist` is a path.
### `-p`, `--print`
Prints (original filename, corrected filename) pairs. No files are moved. Enabled by default when `path_or_filelist` is a file containing a list of filenames.
### `-b`, `--backup`
Creates a `_backup` directory and takes a backup there
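For illustration, here is a minimal, hypothetical Python sketch of the core renaming idea — zero-padding every digit run so plain string sorting matches numeric order. This is not renameexp's actual implementation, just the concept:

```python
import re
from pathlib import Path

def padded_names(names, width=None):
    """Return (old, new) name pairs where every digit run in each file
    name is zero-padded, so lexicographic order matches numeric order
    (ero-cg-1.jpg -> ero-cg-01.jpg)."""
    stems = [(Path(n).stem, Path(n).suffix) for n in names]
    # The widest digit run across all names decides the padding width.
    runs = [m.group() for stem, _ in stems for m in re.finditer(r"\d+", stem)]
    width = width or max((len(r) for r in runs), default=1)
    pad = lambda m: m.group().zfill(width)
    return [(n, re.sub(r"\d+", pad, stem) + suf)
            for n, (stem, suf) in zip(names, stems)]
```

Feeding the resulting pairs to `os.rename` (after taking a backup, as the `-b` flag does) would mirror the tool's behavior.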
## Author
- いろきゅう (Irokyuu)
- http://ir9.jp
- twitter:@ir9
For what it's worth, I use this myself :-)
| 20.970588 | 214 | 0.69986 | yue_Hant | 0.378679 |
7620acf5d5caf1a0c8331e1152e2888c2ab2bdf4 | 4,639 | md | Markdown | blogPost/README.md | KentaItakura/pix2pix | 68f401dec4e00be051c673c183332621ecde7ca5 | [
"BSD-2-Clause"
] | 3 | 2022-02-04T01:25:36.000Z | 2022-03-13T12:33:09.000Z | blogPost/README.md | KentaItakura/pix2pix | 68f401dec4e00be051c673c183332621ecde7ca5 | [
"BSD-2-Clause"
] | null | null | null | blogPost/README.md | KentaItakura/pix2pix | 68f401dec4e00be051c673c183332621ecde7ca5 | [
"BSD-2-Clause"
] | null | null | null | # pix2pixを勉強&線画から顔画像を生成してみた:前半
# はじめに
この記事では、pix2pixについて勉強したのでそのまとめと、線画から画像に変換する課題にpix2pixを適用してみようと思います。pix2pixは以下の論文です。間違いなどがあれば教えていただけますと幸いです。
[Isola, P., Zhu, J.Y., Zhou, T. and Efros, A.A., 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).](https://openaccess.thecvf.com/content_cvpr_2017/html/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.html)
pix2pixの論文の冒頭で、それを試した時の入力と結果の例があります。以下は、論文の図1からの引用です。ラベルから風景に変換したりするだけでなく、線画や時間帯などを変換した例があり、いろいろなシーンで使えそうですね。この手法では、深層学習やGANと呼ばれる手法を用いていて、一対一対応する、画像とラベルのペアを用いて学習を行います。

図出展:Isola et al (2017)
pix2pixを勉強するうえで、顔の線画から顔の画像に変換することを試しに行ってみました。前半ではpix2pixについて述べ、後半ではこのデモについて紹介します。

# GANについて
pix2pixはGANの一種で、画像の生成器だけでなく、もとの(正解の)画像と生成した画像が生成したものか、もともと用意した画像かどうかを判別します。GANについては、ここでは割愛いたします。以下の記事などがわかりやすかったです。ただ、pix2pixに関しては、以下の内容から始めても問題ないのではと思います。
[https://jp.mathworks.com/help/deeplearning/ug/train-generative-adversarial-network.html](https://jp.mathworks.com/help/deeplearning/ug/train-generative-adversarial-network.html)
[https://ledge.ai/gan/](https://ledge.ai/gan/)
# pix2pixについて
## 大まかな流れ
以下の図はpix2pixの流れを簡単にまとめたものです。
- 訓練データ:入力するデータ(例:線画)とその結果(例:線画のもとになってる画像)のペア
- Generatorは入力画像(例:線画)からそれに対応する正解(例:線画のもとになってる画像)を生成することを目指す
- Discriminatorは、その画像が生成されたものか、もともと用意している正解画像かを見分ける
- Generatorは、正解画像とできるだけ同じ画像を生成できるよう学習していく
- Discriminatorは、生成画像をFake, そうでない画像をRealと判別できるよう学習していく

## 生成器(ジエネレーター)について
後半のデモで、線画の変換を行うので、ここでも線画っぽい絵を例にします。以下の左の図を右の図に変換することを考えます。ジェネレーターは畳み込み込みや逆畳み込みを用いて画像から画像の変換を行います。pix2pixでは、Unet構造を用います(以下の図ではそうなってませんが)。
Unetの構造に関しては以下の記事がわかりやすかったです。
https://qiita.com/koshian2/items/603106c228ac6b7d8356

下の図は、後半のデモで用いるネットワークを可視化したものです。入力は縦横が256×256で、チャンネル数は3(RGB)になっています。グレースケール画像なので、チャンネルを1にすることも可能ですが、特にそのような変更は行っていません。ジェネレーター側は、画像から画像を生成しているんだなという理解でひとまず良いと思います。また下の構造をみると、確かにスキップコネクションのような線が下に伸びていることもわかります。

# 識別器(ディスクリミネーター)について
上の流れの図でもありましたが、識別側では、生成されたであれば、Fake、もともと用意した画像ならRealと予測するように学習していきます。単純にCNNで分類するのもよいですが、pix2pixでは、入力の画像と生成された(又は正解画像)をチャンネル方向に重ねてから分類を行います。ちょうど下の図のようなイメージです。

論文では以下の図で説明されています。

実際にネットワークの構造を見てみます。一番上の段をみると、入力の画像が6チャンネルになっていることが確認できます。

また、上の図の最後の段を見ると、16×16×1になっています。Real / Fakeの見極めであればサイズは1であれば良さそうですが、pix2pixの論文では`PatchGAN`というセクションがあります。そこでは、以下のことが述べられています
- L1やL2損失のみを用いてimage-to-image変換をしようとすると全体的にぼやけた画像が生成されやすい
- (ぼやけていないところである)高周波成分をうまく識別器が捉えるべく、識別側に工夫を加える
- 画像をパッチに分けて、その領域ごとに Real / Fakeか見分ける
- 特に識別器の構造を作り変えるわけではない
確かに、高周波成分をうまく捉えて、ぼやけた生成画像を認識することができたら、より鮮明な画像が生成できそうです。その方策として、以下のように述べられています。
> We run this discriminator convolutionally across the image, averaging all responses to provide the ultimate output of D.
ここで、重要なのが先ほどの識別器の出力サイズで、16×16になっていると述べました。画像全体に対して、1つの出力(Real / Fake)を出すのではなく、各パッチごとにその推論を行います。このように出力を調整すれば、ちょうど各パッチごとに推論していることと等しいことを行うことができます。損失の計算では、各パッチの損失をそれぞれ計算し、足し合わせます。
下の図は、パッチのサイズを変えたときの結果です。16×16の時が最もよかったとのことです。パッチサイズが大きいほど、細かいところまで見れるので良さそうですが、大きすぎると画像の全体的な良さを評価できないというトレードオフの関係にあります。複数のステージを設けて、複数のパッチの平均でもよさそうですね。

## ノイズベクトルzについて
GANを勉強し始めると、ノイズベクトルzというのをはじめに目にすると思います。そのzを起点に生成したり、そのノイズベクトルによって生成する画像をある程度制御できたりします。しかし、pix2pixでは、ノイズベクトルzに関しては以下のように書かれています。
> Instead, for our final models, we provide noise only in the form of dropout, applied on several layers of our generator at both training and test time.
つまり、dropout層によって、ランダムに出力を落としてランダム性を加えていると述べられています。また、ここでは、**テストの段階でもドロップアウトを用いる**とあります。そのため、たとえばDCGANに出てくるノイズベクトルzを思い浮かべながらpix2pixをみると少し違和感があるかもしれません。
## 損失について
pix2pixでは、入力と出力のペアの訓練画像があります。生成器の損失としては、出力の画像と生成画像の差分をとって(L1)、それらの差異も損失に加えます。
# 前半まとめ
- pix2pixの流れや方法についてまとめました
- Conditional GANの一種ではあるものの、ノイズベクトルzを明示的には使わず、入力の画像から目的の画像に変換するよう設計されていることがわかりました。
- 論文のイントロダクションにもあるとおり、多くのコンピュータービジョンのタスクは画像の何らかの変換、というふうに言い換えることができて、そのような課題一般にpix2pixを応用することができます。汎用性の高い非常に便利な手法だと思いました。
- また、画像だけでなく、シミュレーションの結果とも合わせて変換をするような研究もあり(以下のPDF)、画像に留まらず、音の生成などでも使えるかもしれないですね
[https://www.jstage.jst.go.jp/article/pjsai/JSAI2019/0/JSAI2019_4K3J1302/_pdf](https://www.jstage.jst.go.jp/article/pjsai/JSAI2019/0/JSAI2019_4K3J1302/_pdf)
後半では、顔画像のデータセットから線画を作成し、訓練させ、線画から顔画像を生成するpix2pixのモデルを作成しました。以下のようになっていて、上手く顔画像の生成ができました。しかし、少し文量が多くなったため、次回に回そうかと思います。
| 37.715447 | 332 | 0.840267 | jpn_Jpan | 0.544072 |
7620e43b643cc5269ad5089a3f4adcfc4c8989fd | 8,676 | md | Markdown | content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md | yassan/docs | d46a8d9b7184eb40bc20a6bcb5ad413e9a2807b7 | [
"Apache-2.0"
] | 6 | 2021-10-30T10:59:59.000Z | 2022-01-28T17:25:44.000Z | content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md | yassan/docs | d46a8d9b7184eb40bc20a6bcb5ad413e9a2807b7 | [
"Apache-2.0"
] | null | null | null | content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md | yassan/docs | d46a8d9b7184eb40bc20a6bcb5ad413e9a2807b7 | [
"Apache-2.0"
] | null | null | null | ---
title: Integrating Rancher and Prometheus for Cluster Monitoring
description: Prometheus lets you view metrics from your different Rancher and Kubernetes objects. Learn about the scope of monitoring and how to enable cluster monitoring
weight: 4
---
_Available as of v2.2.0_
Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution.
This section covers the following topics:
- [About Prometheus](#about-prometheus)
- [Monitoring scope](#monitoring-scope)
- [Enabling cluster monitoring](#enabling-cluster-monitoring)
- [Resource consumption](#resource-consumption)
- [Resource consumption of Prometheus pods](#resource-consumption-of-prometheus-pods)
- [Resource consumption of other pods](#resource-consumption-of-other-pods)
# About Prometheus
Prometheus provides a _time series_ of your data, which is, according to the [Prometheus documentation](https://prometheus.io/docs/concepts/data_model/):
>A stream of timestamped values belonging to the same metric and the same set of labeled dimensions, along with comprehensive statistics and metrics of the monitored cluster.
You can configure monitoring at either the cluster level or the project level. This page describes how to enable monitoring for a cluster. For details on enabling monitoring for a project, refer to the [project administration section]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/).
By viewing data that Prometheus scrapes from your cluster control plane, nodes, and deployments, you can stay on top of everything happening in your cluster. You can then use these analytics to better run your organization: stop system emergencies before they start, develop maintenance strategies, restore crashed servers, etc.
Multi-tenancy support in terms of cluster-only and project-only Prometheus instances are also supported.
# Monitoring Scope
Using Prometheus, you can monitor Rancher at both the cluster level and [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/). For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server.
- Cluster monitoring allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts.
- [Kubernetes control plane]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics)
- [etcd database]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics)
- [All nodes (including workers)]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics)
- [Project monitoring]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/) allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads.
# Enabling Cluster Monitoring
As an [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster.
> **Prerequisite:** Make sure that you are allowing traffic on port 9796 for each of your nodes because Prometheus will scrape metrics from here.
1. From the **Global** view, navigate to the cluster that you want to configure cluster monitoring.
1. Select **Tools > Monitoring** in the navigation bar.
1. Select **Enable** to show the [Prometheus configuration options]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Review the [resource consumption recommendations](#resource-consumption) to ensure you have enough resources for Prometheus and on your worker nodes to enable monitoring. Enter in your desired configuration options.
1. Click **Save**.
**Result:** The Prometheus server will be deployed as well as two monitoring applications. The two monitoring applications, `cluster-monitoring` and `monitoring-operator`, are added as an [application]({{<baseurl>}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the applications are `active`, you can start viewing [cluster metrics]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/) through the [Rancher dashboard]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/#rancher-dashboard) or directly from [Grafana]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana).
> The default username and password for the Grafana instance will be `admin/admin`. However, Grafana dashboards are served via the Rancher authentication proxy, so only users who are currently authenticated into the Rancher server have access to the Grafana dashboard.
# Resource Consumption
When enabling cluster monitoring, you need to ensure your worker nodes and Prometheus pod have enough resources. The tables below provide a guide to how much will be consumed. In larger deployments, it is strongly advised that the monitoring infrastructure be placed on dedicated nodes in the cluster.
### Resource Consumption of Prometheus Pods
This table is the resource consumption of the Prometheus pod, which is based on the number of all the nodes in the cluster. The count of nodes includes the worker, control plane and etcd nodes. Total disk space allocation should be approximated by the `rate * retention` period set at the cluster level. When enabling cluster level monitoring, you should adjust the CPU and Memory limits and reservation.
Number of Cluster Nodes | CPU (milli CPU) | Memory | Disk
------------------------|-----|--------|------
5 | 500 | 650 MB | ~1 GB/Day
50| 2000 | 2 GB | ~5 GB/Day
256| 4000 | 6 GB | ~18 GB/Day
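To make the guidance concrete, here is a small illustrative Python helper (not part of Rancher) that looks up the sizing row for a given cluster and applies the `rate * retention` disk rule described above. The memory values in MB are an interpretation of the table's "650 MB / 2 GB / 6 GB" entries:

```python
# (max cluster nodes, milli-CPU, memory in MB, disk rate in GB/day),
# taken from the sizing table above.
SIZING = [
    (5, 500, 650, 1),
    (50, 2000, 2048, 5),
    (256, 4000, 6144, 18),
]

def prometheus_sizing(node_count, retention_days):
    """Pick the first sizing row that covers node_count and estimate
    total disk as rate * retention, per the guidance above."""
    for max_nodes, milli_cpu, memory_mb, rate_gb_per_day in SIZING:
        if node_count <= max_nodes:
            return {
                "cpu_milli": milli_cpu,
                "memory_mb": memory_mb,
                "disk_gb": rate_gb_per_day * retention_days,
            }
    raise ValueError("cluster is larger than the sizing table covers")
```

For example, a 40-node cluster with a 7-day retention period would fall in the middle row and need roughly 35 GB of disk.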
Additional pod resource requirements for cluster level monitoring.
| Workload | Container | CPU - Request | Mem - Request | CPU - Limit | Mem - Limit | Configurable |
|---------------------|---------------------------------|---------------|---------------|-------------|-------------|--------------|
| Prometheus | prometheus | 750m | 750Mi | 1000m | 1000Mi | Y |
| | prometheus-proxy | 50m | 50Mi | 100m | 100Mi | Y |
| | prometheus-auth | 100m | 100Mi | 500m | 200Mi | Y |
| | prometheus-config-reloader | - | - | 50m | 50Mi | N |
| | rules-configmap-reloader | - | - | 100m | 25Mi | N |
| Grafana | grafana-init-plugin-json-copy | 50m | 50Mi | 50m | 50Mi | Y |
| | grafana-init-plugin-json-modify | 50m | 50Mi | 50m | 50Mi | Y |
| | grafana | 100m | 100Mi | 200m | 200Mi | Y |
| | grafana-proxy | 50m | 50Mi | 100m | 100Mi | Y |
| Kube-State Exporter | kube-state | 100m | 130Mi | 100m | 200Mi | Y |
| Node Exporter | exporter-node | 200m | 200Mi | 200m | 200Mi | Y |
| Operator | prometheus-operator | 100m | 50Mi | 200m | 100Mi | Y |
### Resource Consumption of Other Pods
Besides the Prometheus pod, there are components that are deployed that require additional resources on the worker nodes.
Pod | CPU (milli CPU) | Memory (MB)
----|-----------------|------------
Node Exporter (Per Node) | 100 | 30
Kube State Cluster Monitor | 100 | 130
Grafana | 100 | 150
Prometheus Cluster Monitoring Nginx | 50 | 50
| 81.849057 | 665 | 0.669548 | eng_Latn | 0.931445 |
7621301663954be63489b9764a2b195cd0c1be8b | 1,087 | md | Markdown | xsloader4j-core/src/main/resources/xsloader-js/lib/说明.md | gzxishan/xsloader4j | 6fbce6bffc357bd2beb76ae1187d580231c488d7 | [
"Apache-2.0"
] | 2 | 2020-04-02T03:59:11.000Z | 2020-05-18T12:39:47.000Z | xsloader4j-core/src/main/resources/xsloader-js/lib/说明.md | gzxishan/xsloader4j | 6fbce6bffc357bd2beb76ae1187d580231c488d7 | [
"Apache-2.0"
] | 5 | 2020-11-06T08:49:15.000Z | 2020-11-06T08:50:14.000Z | xsloader4j-core/src/main/resources/xsloader-js/lib/说明.md | gzxishan/xsloader4j | 6fbce6bffc357bd2beb76ae1187d580231c488d7 | [
"Apache-2.0"
] | null | null | null | # *.vue说明
- 增加vue-created事件,在beforeCreate后、created前执行。
- 增加vue-destroyed时间,在destroyed事件之前执行
# vue.js修改说明
- 修改createCompileToFunctionFn
```
...
// turn code into functions
var res = {};
var fnGenErrors = [];
var _createFunction=options.createFunction||createFunction;
res.render = _createFunction(compiled.render, fnGenErrors);
res.staticRenderFns = compiled.staticRenderFns.map(function (code) {
return _createFunction(code, fnGenErrors)
});
...
```
- 去掉`createCompileToFunctionFn`里的cache
- 修改checkExpression里的表达式检测
```
new Function(("return " + exp));
修改为:
new (typeof CustomerFunction!="undefined" ? CustomerFunction : Function)(("return " + (exp ? exp.trim() : "")));
```
# js能力说明
- es2018
- regenerator
- syntax-dynamic-import:import(...)
- transform-react-jsx:需要引入Vue且模块名为vue。
- class-properties
- private-methods
- private-property-in-object
- decorators
- nullish-coalescing-operator
- optional-chaining
- numeric-separator
- throw-expressions
- logical-assignment-operators
- do-expressions
# flag-definitions.h
该文件为v8引擎参数说明头文件 | 24.155556 | 112 | 0.723091 | yue_Hant | 0.27671 |
# Scala.js typings for wordpress__deprecated
Typings are for version 2.4
## Library description:
Deprecation utility for WordPress.
| Field | Value |
| ------------------ | :-------------: |
| Full name | @wordpress/deprecated |
| Keywords | wordpress, gutenberg, deprecated |
| # releases | 15 |
| # dependents | 18 |
| # downloads | 825466 |
| # stars | 0 |
## Links
- [Homepage](https://github.com/WordPress/gutenberg/tree/master/packages/deprecated/README.md)
- [Bugs](https://github.com/WordPress/gutenberg/issues)
- [Repository](https://github.com/WordPress/gutenberg)
- [Npm](https://www.npmjs.com/package/%40wordpress%2Fdeprecated)
## Note
This library has been generated from TypeScript code from [DefinitelyTyped](https://definitelytyped.org).
Provided with :purple_heart: from [ScalablyTyped](https://github.com/oyvindberg/ScalablyTyped)
## Usage
See [the main readme](../../readme.md) for instructions.
RFX - pageview-processor
====================
Create the folder `data/kafka-offset` for event-processing log data.
Suggested JVM options:
`-Xms512m -Xmx2048m -XX:+TieredCompilation -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:+UseNUMA -server -XX:+UseConcMarkSweepGC`
Example run arguments: `pageview 127.0.0.1 16002 0 1`
Core Contributors:
====================
[email protected]
### 💡 Coding habits
The coding *habits* plugin displays metrics based on your recent activity, such as active hours or recently used languages.
<table>
<td align="center">
<details open><summary>Recent activity charts</summary>
<img src="https://github.com/lowlighter/metrics/blob/examples/metrics.plugin.habits.charts.svg">
</details>
<details open><summary>Mildly interesting facts</summary>
<img src="https://github.com/lowlighter/metrics/blob/examples/metrics.plugin.habits.facts.svg">
</details>
<img width="900" height="1" alt="">
</td>
</table>
Using more events will improve accuracy of these metrics, although it'll increase the number of GitHub requests used.
Active hours and days are computed through your commit history, while indent style is deduced from your recent diffs.
Recent languages activity is also computed from your recent diffs, using [github/linguist](https://github.com/github/linguist).
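As a rough illustration of the kind of inference involved, indent style can be estimated by counting how added diff lines begin. The sketch below is illustrative only (the plugin itself is implemented in JavaScript):

```python
# Illustrative sketch: deduce indent style from the leading whitespace of
# added diff lines. Not the plugin's actual code.

def guess_indent_style(added_lines):
    tabs = sum(1 for line in added_lines if line.startswith("\t"))
    spaces = sum(1 for line in added_lines if line.startswith("  "))
    if tabs == 0 and spaces == 0:
        return "unknown"
    return "tabs" if tabs > spaces else "spaces"

diff = ["\tfoo()", "\tbar()", "  baz()"]
print(guess_indent_style(diff))  # tabs
```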
Use a full `repo` scope token to access **private** events.
By default, dates use Greenwich meridian (GMT/UTC). Be sure to set your timezone (see [here](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for a list of supported timezones) for accurate metrics.
> 🔣 On web instances, *recent languages activity* is an extra feature and must be enabled globally in `settings.json`
#### ➡️ Available options
<!--options-->
<table>
<tr>
<td align="center" nowrap="nowrap">Type</i></td><td align="center" nowrap="nowrap">Description</td>
</tr>
<tr>
<td nowrap="nowrap"><code>plugin_habits</code></td>
<td rowspan="2"><p>Display coding habits metrics</p>
<img width="900" height="1" alt=""></td>
</tr>
<tr>
<td nowrap="nowrap"><b>type:</b> <code>boolean</code>
<br>
<b>default:</b> no<br></td>
</tr>
<tr>
<td nowrap="nowrap"><code>plugin_habits_from</code></td>
<td rowspan="2"><p>Number of events to use</p>
<img width="900" height="1" alt=""></td>
</tr>
<tr>
<td nowrap="nowrap"><b>type:</b> <code>number</code>
<i>(1 ≤
𝑥
≤ 1000)</i>
<br>
<b>default:</b> 200<br></td>
</tr>
<tr>
<td nowrap="nowrap"><code>plugin_habits_days</code></td>
<td rowspan="2"><p>Maximum event age</p>
<img width="900" height="1" alt=""></td>
</tr>
<tr>
<td nowrap="nowrap"><b>type:</b> <code>number</code>
<i>(1 ≤
𝑥
≤ 30)</i>
<br>
<b>default:</b> 14<br></td>
</tr>
<tr>
<td nowrap="nowrap"><code>plugin_habits_facts</code></td>
<td rowspan="2"><p>Display coding habits collected facts based on recent activity</p>
<img width="900" height="1" alt=""></td>
</tr>
<tr>
<td nowrap="nowrap"><b>type:</b> <code>boolean</code>
<br>
<b>default:</b> yes<br></td>
</tr>
<tr>
<td nowrap="nowrap"><code>plugin_habits_charts</code></td>
<td rowspan="2"><p>Display coding habits charts based on recent activity</p>
<img width="900" height="1" alt=""></td>
</tr>
<tr>
<td nowrap="nowrap">🌐 Web instances must configure <code>settings.json</code><br>
<b>type:</b> <code>boolean</code>
<br>
<b>default:</b> no<br></td>
</tr>
<tr>
<td nowrap="nowrap"><code>plugin_habits_trim</code></td>
<td rowspan="2"><p>Trim unused hours on daily chart</p>
<img width="900" height="1" alt=""></td>
</tr>
<tr>
<td nowrap="nowrap"><b>type:</b> <code>boolean</code>
<br>
<b>default:</b> no<br></td>
</tr>
</table>
<!--/options-->
*[→ Full specification](metadata.yml)*
#### ℹ️ Examples workflows
<!--examples-->
```yaml
name: Mildly interesting facts
uses: lowlighter/metrics@latest
with:
filename: metrics.plugin.habits.facts.svg
token: ${{ secrets.METRICS_TOKEN }}
base: ''
plugin_habits: 'yes'
plugin_habits_facts: 'yes'
plugin_habits_charts: 'no'
config_timezone: Europe/Paris
```
```yaml
name: Recent activity charts
uses: lowlighter/metrics@latest
with:
filename: metrics.plugin.habits.charts.svg
token: ${{ secrets.METRICS_TOKEN }}
base: ''
plugin_habits: 'yes'
plugin_habits_facts: 'no'
plugin_habits_charts: 'yes'
config_timezone: Europe/Paris
```
<!--/examples-->
---
Description: Declares an app extensibility point of type windows.appExtensionHost.
Search.Product: eADQiWindows 10XVcnh
title: uap3:AppExtensionHost
ms.assetid: 632f4bda-7822-4d90-a3a1-688ed4b7cf24
author: mcleanbyron
ms.author: mcleans
keywords: windows 10, uwp, schema, package manifest
ms.topic: reference
ms.date: 04/05/2017
---
# uap3:AppExtensionHost
Declares an app extensibility point of type **windows.appExtensionHost**. This element indicates which categories of extensions the app can host.
## Element hierarchy
<dl>
<dt><a href="element-package.md"><Package></a></dt>
<dd>
<dl>
<dt><a href="element-applications.md"><Applications></a></dt>
<dd>
<dl>
<dt><a href="element-application.md"><Application></a></dt>
<dd>
<dl>
<dt><a href="element-1-extensions.md"><Extensions></a></dt>
<dd>
<dl>
<dt><a href="element-uap3-extension-manual.md"><uap3:Extension></a></dt>
<dd><b><uap3:AppExtensionHost></b></dd>
</dl>
</dd>
</dl>
</dd>
</dl>
</dd>
</dl>
</dd>
</dl>
## Syntax
```
<uap3:AppExtensionHost>
<!-- Child elements -->
uap3:Name+
</uap3:AppExtensionHost>
```
**Key**
\+ one or more
## Attributes and Elements
**Attributes**
None.
**Child Elements**
| Child Element | Description |
|------------------------------------------------|-----------------------------------------------------------|
| [**uap3:Name**](elemennt-uap3-name-manual.md) | Specifies a category of extensions that the app can host. |
**Parent Elements**
| Parent Element | Description |
|---------------------------------------------------------|-----------------------------------------------|
| [**uap3:Extension**](element-uap3-extension-manual.md) | Declares an extensibility point for the app. |
## Examples
The following example indicates that the app can host the Office spell check and browser extensions.
```XML
<Package ...
xmlns:uap3="http://schemas.microsoft.com/appx/manifest/uap/windows10/3"
IgnorableNamespaces="... uap3">
<Applications>
<Application>
<Extensions>
<uap3:Extension Category="windows.appExtensionHost">
<uap3:AppExtensionHost>
<uap3:Name>com.microsoft.office.spellcheck.ext</uap3:Name>
<uap3:Name>com.microsoft.office.browser.ext</uap3:Name>
</uap3:AppExtensionHost>
</uap3:Extension>
</Extensions>
</Application>
</Applications>
</Package>
```
## Requirements
| | |
|---------------|------------------------------------------------------------|
| **Namespace** | http://schemas.microsoft.com/appx/manifest/uap/windows10/3 |
---
title: Application.ArbitraryXMLSupportAvailable property (Word)
keywords: vbawd10.chm158335441
f1_keywords:
- vbawd10.chm158335441
ms.prod: word
api_name:
- Word.Application.ArbitraryXMLSupportAvailable
ms.assetid: 5cf53ae7-200b-811e-7946-4fefe825eaec
ms.date: 06/08/2017
ms.localizationpriority: medium
---
# Application.ArbitraryXMLSupportAvailable property (Word)
Returns a **Boolean** that represents whether Microsoft Word accepts custom XML schemas. **True** indicates that Word accepts custom XML schemas.
## Syntax
_expression_. `ArbitraryXMLSupportAvailable`
_expression_ An expression that returns an **[Application](Word.Application.md)** object.
## Remarks
Microsoft Office Standard Edition 2003 includes XML support using the Word XML schema, but it does not provide support for custom XML schemas. Support for custom XML schemas is available only in the stand-alone release of Office Word 2003 or greater and in Office Professional Edition 2003 or greater. Use the **ArbitraryXMLSupportAvailable** property to determine which release is installed.
## Example
The following code displays a message if the installed version of Word does not support custom XML schemas.
```vb
If Application.ArbitraryXMLSupportAvailable = False Then
    MsgBox "Custom XML schemas are not " & _
        "supported in this version of Microsoft Word."
End If
```
## See also
[Application Object](Word.Application.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 30.367347 | 392 | 0.794355 | eng_Latn | 0.935077 |
76239a14ede5f679fb01f513f2f66359f039de92 | 3,442 | md | Markdown | docs/python/quickstart-04-python-in-visual-studio-project-from-cookiecutter.md | MicrosoftDocs/visualstudio-docs.cs-cz | 3861d52726f1a515cfa62d590513a3c7a1b8019b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-20T07:48:22.000Z | 2020-05-20T07:48:22.000Z | docs/python/quickstart-04-python-in-visual-studio-project-from-cookiecutter.md | MicrosoftDocs/visualstudio-docs.cs-cz | 3861d52726f1a515cfa62d590513a3c7a1b8019b | [
"CC-BY-4.0",
"MIT"
] | 7 | 2018-10-02T15:01:11.000Z | 2021-11-05T20:25:20.000Z | docs/python/quickstart-04-python-in-visual-studio-project-from-cookiecutter.md | MicrosoftDocs/visualstudio-docs.cs-cz | 3861d52726f1a515cfa62d590513a3c7a1b8019b | [
"CC-BY-4.0",
"MIT"
] | 7 | 2018-10-01T22:49:53.000Z | 2021-10-09T11:24:44.000Z | ---
title: Rychlý start – Vytváření projektů v Pythonu pomocí Cookiecutteru
description: V tomto rychlém startu vytvoříte projekt Visual Studio pythonu pomocí šablony Cookiecutter.
ms.date: 02/25/2019
ms.topic: quickstart
author: rjmolyneaux
ms.author: rmolyneaux
manager: jmartens
ms.technology: vs-python
ms.workload:
- python
- data-science
ms.openlocfilehash: c124610bab413b59ec2dc00798f8bc63b7221eb5
ms.sourcegitcommit: 8fae163333e22a673fd119e1d2da8a1ebfe0e51a
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 10/13/2021
ms.locfileid: "129968342"
---
# <a name="quickstart-create-a-project-from-a-cookiecutter-template"></a>Rychlý start: Vytvoření projektu ze šablony Cookiecutter
Po instalaci podpory [Pythonu](installing-python-support-in-visual-studio.md)v Visual Studio je snadné vytvořit nový projekt ze šablony Cookiecutter, včetně mnoha projektů publikovaných do GitHub. [Cookiecutter poskytuje](https://cookiecutter.readthedocs.io/en/latest/) grafické uživatelské rozhraní pro zjišťování šablon, zadávání možností šablon a vytváření projektů a souborů. Je součástí verze Visual Studio 2017 a novější a lze ji nainstalovat samostatně ve starších verzích Visual Studio.
1. V tomto rychlém startu nejprve nainstalujte distribuci Anaconda3 v Pythonu, která obsahuje potřebné balíčky Pythonu pro šablonu Cookiecutter zobrazenou tady. Spusťte instalační Visual Studio, vyberte **Upravit,** rozbalte možnosti pro vývoj **v Pythonu** na pravé straně a vyberte **Anaconda3** (32bitová nebo 64bitová verze). Upozorňujeme, že instalace může nějakou dobu trvat v závislosti na rychlosti internetu, ale jedná se o nejjednodušší způsob instalace potřebných balíčků.
1. Spusťte Visual Studio.
1. Vyberte File New From Cookiecutter **(Soubor** > > **nový ze souboru Cookiecutter).** Tento příkaz otevře okno v Visual Studio, kde můžete procházet šablony.

1. Vyberte šablonu **Microsoft/python-sklearn-classifier-cookiecutter** a pak vyberte **Další.** (Proces může trvat několik minut při prvním použití konkrétní šablony, protože Visual Studio potřebné balíčky Pythonu.)
1. V dalším kroku nastavte umístění pro nový projekt v poli Vytvořit do a pak vyberte Vytvořit a **otevřít Project**.

1. Po dokončení procesu se zobrazí zpráva Úspěšně se vytvořily **soubory pomocí šablony...**. Projekt se automaticky otevře Průzkumník řešení souboru.
1. Program **spustíte** stisknutím kláves Ctrl + **F5** nebo **výběrem** možnosti Spustit ladění > bez ladění.

## <a name="next-steps"></a>Další kroky
> [!div class="nextstepaction"]
> [Kurz: Práce s Pythonem v Visual Studio](tutorial-working-with-python-in-visual-studio-step-01-create-project.md)
## <a name="see-also"></a>Viz také
- [Použití rozšíření Cookiecutter](using-python-cookiecutter-templates.md)
- [Ruční identifikace existujícího interpretu Pythonu](managing-python-environments-in-visual-studio.md#manually-identify-an-existing-environment)
- [Instalace podpory Pythonu v Visual Studio 2015 a starších verzích](installing-python-support-in-visual-studio.md)
- [Umístění instalace](installing-python-support-in-visual-studio.md#install-locations)
| 62.581818 | 494 | 0.799245 | ces_Latn | 0.997469 |
7623d2bd57c7594746d34434d59f3876bfd731ee | 2,373 | md | Markdown | README.md | freakypie/django-binding | 1e92e64a38b31b0d25d8cd1ce6050abbf8a52016 | [
"BSD-2-Clause"
] | null | null | null | README.md | freakypie/django-binding | 1e92e64a38b31b0d25d8cd1ce6050abbf8a52016 | [
"BSD-2-Clause"
] | null | null | null | README.md | freakypie/django-binding | 1e92e64a38b31b0d25d8cd1ce6050abbf8a52016 | [
"BSD-2-Clause"
] | null | null | null | # Django Binding
Provides server a real time cache for querysets.
A binding will keep a cached queryset and
registers Django signals to update the cache as the models change.
Naturally changes that don't trigger a Django post_save or post_delete will
not cause the cache to be updated.
Also providing binding implementations for:
- [x] DRF
- [x] django-node-websockets
- [ ] django channels
# Getting started
create a binding:
from binding import Binding
# bind all active users
class UserBinding(Binding):
filters = dict(active=True)
users = UserBinding()
users.all() # will get a cache of the currently active users
# Django Rest Framework
create a BoundModelViewset and it will automatically cache the queryset and
keep it up to date. When Etags/last modified are supported it will return 304
responses for you automatically:
class ProductSerializer(Serializer):
model = Product
class BoundProductViewset(BoundModelViewSet):
model = Product
serializer_class = ProductSerializer
You can also specify a custom binding on the ViewSet:
class ProductBinding(Binding):
model = Product
def filters(self):
return {"active": true}
class BoundProductViewset(BoundModelViewSet):
binding = ProductBinding
serializer_class = ProductSerializer
# Django Node Websockets
With websockets you can get instant notification of changes to the queryset.
class ProductBinding(Binding):
model = Product
class ProductWebsocketView(BoundWebsocketView):
event = "products"
binding = ProductBinding()
groups = ["products"] # specify which group to update
Then hook it up to the event of your choice:
urlpatterns = patterns(
'',
url(r'^products', ProductWebsocketView.as_view()),
)
From the javascript side emit the choosen event to connect,
emit it again with {disconnect: true} to disconnect.
Listen for the same even to get updates
// listen for updates
io.on("products", (data) => {
// data.action: create, update, delete, sync
// sync means that it is giving you the full queryset
// data.payload: the related data
});
// connect and sync
io.emit("products")
// disconnect
io.emit("products", {disconnect: true})
| 25.244681 | 77 | 0.692794 | eng_Latn | 0.976455 |
# 8ms ESP32
8ms ESP32 project
# prepare ESP32 v4.3 SDK
Please refer to https://docs.espressif.com/projects/esp-idf/zh_CN/release-v4.3/get-started/index.html
# build project
idf.py build
---
description: GeometryCollection
title: GeometryCollection | Microsoft Docs
ms.date: 03/01/2017
ms.prod: sql
ms.prod_service: database-engine, sql-database
ms.reviewer: ''
ms.technology: ''
ms.topic: conceptual
helpviewer_keywords:
- GeomCollection geometry subtype [SQL Server]
- geometry subtypes [SQL Server]
ms.assetid: 4445c0d9-a66b-4d7c-88e4-a66fa6f7d9fd
author: MladjoA
ms.author: mlandzic
monikerRange: =azuresqldb-current||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current
ms.openlocfilehash: c2c7be1815002208ffea6e08ea4b7200579a78e1
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 08/17/2020
ms.locfileid: "88455402"
---
# <a name="geometrycollection"></a>GeometryCollection
[!INCLUDE [SQL Server Azure SQL Database](../../includes/applies-to-version/sql-asdb.md)]
A **GeometryCollection** object is a collection of zero or more **geometry** or **geography** instances. A **GeometryCollection** can be empty.
## <a name="geometrycollection-instances"></a>GeometryCollection-Instanzen
### <a name="accepted-instances"></a>Akzeptierte Instanzen
Damit eine **GeometryCollection** -Instanz akzeptiert wird, muss es sich um eine leere **GeometryCollection** -Instanz handeln, oder alle Instanzen, die die **GeometryCollection** beinhalten, müssen akzeptierte Instanzen sein. Im folgenden Beispiel werden akzeptierte Instanzen veranschaulicht.
```sql
DECLARE @g1 geometry = 'GEOMETRYCOLLECTION EMPTY';
DECLARE @g2 geometry = 'GEOMETRYCOLLECTION(LINESTRING EMPTY,POLYGON((-1 -1, -1 -5, -5 -5, -5 -1, -1 -1)))';
DECLARE @g3 geometry = 'GEOMETRYCOLLECTION(LINESTRING(1 1, 3 5),POLYGON((-1 -1, -1 -5, -5 -5, -5 -1, -1 -1)))';
```
The following example throws a `System.FormatException` because the **LineString** instance inside the **GeometryCollection** instance is not accepted.
```sql
DECLARE @g geometry = 'GEOMETRYCOLLECTION(LINESTRING(1 1), POLYGON((-1 -1, -1 -5, -5 -5, -5 -1, -1 -1)))';
```
### <a name="valid-instances"></a>Gültige Instanzen
Eine **GeometryCollection** -Instanz ist gültig, wenn alle Instanzen, die die **GeometryCollection** -Instanz beinhalten, gültig sind. Im folgenden Beispiel werden drei gültige **GeometryCollection** -Instanzen und eine nicht gültige Instanz gezeigt.
```sql
DECLARE @g1 geometry = 'GEOMETRYCOLLECTION EMPTY';
DECLARE @g2 geometry = 'GEOMETRYCOLLECTION(LINESTRING EMPTY,POLYGON((-1 -1, -1 -5, -5 -5, -5 -1, -1 -1)))';
DECLARE @g3 geometry = 'GEOMETRYCOLLECTION(LINESTRING(1 1, 3 5),POLYGON((-1 -1, -1 -5, -5 -5, -5 -1, -1 -1)))';
DECLARE @g4 geometry = 'GEOMETRYCOLLECTION(LINESTRING(1 1, 3 5),POLYGON((-1 -1, 1 -5, -5 5, -5 -1, -1 -1)))';
SELECT @g1.STIsValid(), @g2.STIsValid(), @g3.STIsValid(), @g4.STIsValid();
```
`@g4` is not valid because the **Polygon** instance inside the **GeometryCollection** instance is not valid.
For more information about accepted and valid instances, see [Point](../../relational-databases/spatial/point.md), [MultiPoint](../../relational-databases/spatial/multipoint.md), [LineString](../../relational-databases/spatial/linestring.md), [MultiLineString](../../relational-databases/spatial/multilinestring.md), [Polygon](../../relational-databases/spatial/polygon.md), and [MultiPolygon](../../relational-databases/spatial/multipolygon.md).
## <a name="examples"></a>Examples
The following example instantiates a `geometry` **GeometryCollection** instance with Z values in SRID 1 that contains a `Point` instance and a `Polygon` instance.
```sql
DECLARE @g geometry;
SET @g = geometry::STGeomCollFromText('GEOMETRYCOLLECTION(POINT(3 3 1), POLYGON((0 0 2, 1 10 3, 1 0 4, 0 0 2)))', 1);
```
## <a name="see-also"></a>Siehe auch
[Räumliche Daten (SQL Server)](../../relational-databases/spatial/spatial-data-sql-server.md)
---
title: Develop an application on Kubernetes
services: azure-dev-spaces
ms.date: 02/20/2020
ms.topic: quickstart
description: This quickstart shows how to use Azure Dev Spaces and the command line to develop an application in Azure Kubernetes Service
keywords: Docker, Kubernetes, Azure, AKS, Azure Kubernetes Service, containers, Helm, service mesh, service mesh routing, kubectl, k8s
manager: gwallace
ms.openlocfilehash: 337c3cb139e1fe0c35344e49271503b98a59fa7b
ms.sourcegitcommit: 58faa9fcbd62f3ac37ff0a65ab9357a01051a64f
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/29/2020
ms.locfileid: "82166010"
---
# <a name="quickstart-develop-an-application-on-kubernetes---azure-dev-spaces"></a>Snabb start: utveckla ett program på Kubernetes – Azure dev Spaces
I den här guiden får du lära dig hur du:
- Ställa in Azure Dev Spaces med ett hanterat Kubernetes-kluster i Azure.
- Utveckla och kör kod i behållare med hjälp av kommando raden.
## <a name="prerequisites"></a>Krav
- En Azure-prenumeration. Om du inte har någon Azure-prenumeration kan du skapa ett [kostnads fritt konto](https://azure.microsoft.com/free).
- [Azure CLI installerat](/cli/azure/install-azure-cli?view=azure-cli-latest).
## <a name="create-an-azure-kubernetes-service-cluster"></a>Skapa ett Azure Kubernetes service-kluster
Du måste skapa ett AKS-kluster i en [region som stöds][supported-regions]. Kommandona nedan skapar en resurs grupp med namnet *MyResourceGroup* och ett AKS-kluster som kallas *MyAKS*.
```azurecli
az group create --name MyResourceGroup --location eastus
az aks create -g MyResourceGroup -n MyAKS --location eastus --generate-ssh-keys
```
## <a name="enable-azure-dev-spaces-on-your-aks-cluster"></a>Aktivera Azure dev Spaces i ditt AKS-kluster
Använd `use-dev-spaces` kommandot för att aktivera dev Spaces på ditt AKS-kluster och följ anvisningarna. Kommandot nedan aktiverar dev-utrymmen i *MyAKS* -klustret i gruppen *MyResourceGroup* och skapar ett *standard* dev-utrymme.
> [!NOTE]
> `use-dev-spaces` Kommandot installerar även Azure dev Spaces CLI om det inte redan är installerat. Det går inte att installera Azure dev Spaces CLI i Azure Cloud Shell.
```azurecli
az aks use-dev-spaces -g MyResourceGroup -n MyAKS
```
```output
'An Azure Dev Spaces Controller' will be created that targets resource 'MyAKS' in resource group 'MyResourceGroup'. Continue? (y/N): y
Creating and selecting Azure Dev Spaces Controller 'MyAKS' in resource group 'MyResourceGroup' that targets resource 'MyAKS' in resource group 'MyResourceGroup'...2m 24s
Select a dev space or Kubernetes namespace to use as a dev space.
[1] default
Type a number or a new name: 1
Kubernetes namespace 'default' will be configured as a dev space. This will enable Azure Dev Spaces instrumentation for new workloads in the namespace. Continue? (Y/n): Y
Configuring and selecting dev space 'default'...3s
Managed Kubernetes cluster 'MyAKS' in resource group 'MyResourceGroup' is ready for development in dev space 'default'. Type `azds prep` to prepare a source directory for use with Azure Dev Spaces and `azds up` to run.
```
## <a name="get-sample-application-code"></a>Hämta exempel program kod
I den här artikeln använder du [exempel programmet Azure dev Spaces](https://github.com/Azure/dev-spaces) för att demonstrera användningen av Azure dev Spaces.
Klona programmet från GitHub och navigera till katalogen *dev-Spaces/samples/NodeJS/komma igång/webfrontend* :
```cmd
git clone https://github.com/Azure/dev-spaces
cd dev-spaces/samples/nodejs/getting-started/webfrontend
```
## <a name="prepare-the-application"></a>Förbereda programmet
För att kunna köra ditt program på Azure dev Spaces behöver du ett Dockerfile-och Helm-diagram. För vissa språk, till exempel [Java][java-quickstart], [.net Core][netcore-quickstart]och [Node. js][nodejs-quickstart], kan Azure dev Spaces client-verktyget generera alla de till gångar du behöver. För många andra språk, till exempel Go, PHP och python, kan klient verktyget generera Helm-diagrammet så länge du kan ange en giltig Dockerfile.
Generera Docker-och Helm-diagrammets till gångar för att köra programmet i `azds prep` Kubernetes med hjälp av kommandot:
```cmd
azds prep --enable-ingress
```
You need to run the `prep` command from the *dev-spaces/samples/nodejs/getting-started/webfrontend* directory to generate the Docker and Helm chart assets.
> [!TIP]
> The `prep` command attempts to generate [a Dockerfile and Helm chart](how-dev-spaces-works-prep.md#prepare-your-code) for your project. Azure Dev Spaces uses these files to build and run your code, but you can modify them if you want to change how the project is built and run.
## <a name="build-and-run-code-in-kubernetes"></a>Skapa och köra kod i Kubernetes
Skapa och kör koden i AKS med hjälp av `azds up` kommandot:
```cmd
$ azds up
Using dev space 'default' with target 'MyAKS'
Synchronizing files...2s
Installing Helm chart...2s
Waiting for container image build...2m 25s
Building container image...
Step 1/8 : FROM node
Step 2/8 : ENV PORT 80
Step 3/8 : EXPOSE 80
Step 4/8 : WORKDIR /app
Step 5/8 : COPY package.json .
Step 6/8 : RUN npm install
Step 7/8 : COPY . .
Step 8/8 : CMD ["npm", "start"]
Built container image in 6m 17s
Waiting for container...13s
Service 'webfrontend' port 'http' is available at `http://webfrontend.1234567890abcdef1234.eus.azds.io/`
Service 'webfrontend' port 80 (http) is available at http://localhost:54256
...
```
You can see the running service by opening the public URL, which is displayed in the output from the `azds up` command. In this example, the public URL is *`http://webfrontend.1234567890abcdef1234.eus.azds.io/`*.
> [!NOTE]
> When you navigate to your service while running `azds up`, the HTTP request traces are also displayed in the output of the `azds up` command. These traces can help you troubleshoot and debug your service. You can disable them with `--disable-http-traces` when running `azds up`.
If you interrupt the `azds up` command with *Ctrl+C*, the service continues to run in AKS, and the public URL remains available.
## <a name="update-code"></a>Update code
To deploy an updated version of your service, you can update any file in your project and rerun the `azds up` command. For example:
1. If `azds up` is still running, press *Ctrl+C*.
1. Update [line 13 of `server.js`](https://github.com/Azure/dev-spaces/blob/master/samples/nodejs/getting-started/webfrontend/server.js#L13) to:
```javascript
res.send('Hello from webfrontend in Azure');
```
1. Save your changes.
1. Run the `azds up` command again:
```cmd
$ azds up
Using dev space 'default' with target 'MyAKS'
Synchronizing files...1s
Installing Helm chart...3s
Waiting for container image build...
...
```
1. Navigate to your running service and observe your changes.
1. Press *Ctrl+C* to stop the `azds up` command.
## <a name="clean-up-your-azure-resources"></a>Clean up your Azure resources
```azurecli
az group delete --name MyResourceGroup --yes --no-wait
```
## <a name="next-steps"></a>Nästa steg
Lär dig hur Azure dev Spaces hjälper dig att utveckla mer komplexa program över flera behållare och hur du kan förenkla samarbets utveckling genom att arbeta med olika versioner eller grenar av koden i olika utrymmen.
> [!div class="nextstepaction"]
> [Grupp utveckling i Azure dev Spaces][team-quickstart]
[java-quickstart]: quickstart-java.md
[nodejs-quickstart]: quickstart-nodejs.md
[netcore-quickstart]: quickstart-netcore.md
[team-quickstart]: quickstart-team-development.md
[supported-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
] | 349 | 2020-06-28T01:59:37.000Z | 2022-03-29T03:04:23.000Z | # 二进制文件下载安装
## 下载文件
- 从这里[下载BFE](https://github.com/bfenetworks/bfe/releases)在各平台的最新版本
## 安装
- 将文件解压到安装目录:
```bash
$ tar zxvf bfe_<version>_<os>_<arch>.tar.gz
```
## Run
- Run BFE with the example configuration:
```bash
$ cd bfe/bin
$ ./bfe -c ../conf -l ../log
```
## Next steps
* Learn about the [command line options](../operation/command.md)
* Learn about [basic configuration and usage](../example/guide.md)
| 12.111111 | 66 | 0.617737 | yue_Hant | 0.616144 |
7626c985646cdb83858dd5ad1d69c42ada170480 | 4,345 | md | Markdown | CHANGELOG.md | mccarthyryanc/laspy | 8bc16f3c91b7ae514923fe197e68904f3def179c | [
"BSD-2-Clause"
] | null | null | null | CHANGELOG.md | mccarthyryanc/laspy | 8bc16f3c91b7ae514923fe197e68904f3def179c | [
"BSD-2-Clause"
] | null | null | null | CHANGELOG.md | mccarthyryanc/laspy | 8bc16f3c91b7ae514923fe197e68904f3def179c | [
"BSD-2-Clause"
] | null | null | null | # Changelog
## Unreleased
### Fixed
- Fixed `LasHeader.update` (thus fixing `LasData.update_header`) computation of x,y,z mins and maxs
## 2.1.1
### Fixed
- Fixed regression introduced in 2.1.0 where setting the x, y or z value would not properly set the corresponding
X, Y or Z value.
- Fixed `LasData.change_scaling` setting the header's `offsets` and/or `scales` to `None`
if the corresponding optional argument was not given to the `change_scaling` method.
The header will now correctly keep the corresponding value unchanged.
## 2.1.0
### Added
- Added a better error message when reading empty files
- Added a new `xyz` attribute to `LasData` that returns x, y, z as a new numpy array or sets the x, y, z from an array
- Added `LasData.remove_extra_dim` and `LasData.remove_extra_dims` to allow the removal of extra dimensions (__only__)
### Changed
- Minimum `lazrs` version updated to 0.4.0 to bring support for LAZ with variable size chunks
(used in [COPC](https://copc.io) files). the `laszip` backend already supported variable size chunks LAZ.
- Improved assigning to multiple dimensions at once (`las[['x', 'y', 'z']] = ...`)
- `laspy` will no longer raise encoding errors when reading files for which the header's `generating_software` or
`system_identifier` as well as the vlr's `description` is not `ascii` encoded as the spec mandates.
However, an encoding error will be raised when writing such files.
- `LasData.__getitem__` will now return a `LasData` when indexing with slice or numpy array.
`assert isinstance(las[[1, 2, 3, 4]], laspy.LasData)`
### Fixed
- Fix `PackedPointRecord.__len__` when array has no dim
- Fix scaled extra byte creation when the offsets/scales given to `ExtraBytesParam` were of type `list` or `tuple`
- Fix `ScaledArrayView` to allow indexing with `list` or `numpy.array`.
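The scaling that `change_scaling` and `ScaledArrayView` deal with follows the standard LAS convention: each stored coordinate is a 32-bit integer that maps to a real-world value via per-axis scales and offsets. A minimal NumPy sketch of that convention (illustrative only, not laspy's actual implementation):

```python
import numpy as np

# LAS convention: real coordinate = raw_int * scale + offset, per axis.
scales = np.array([0.01, 0.01, 0.01])
offsets = np.array([1000.0, 2000.0, 0.0])

raw = np.array([[12345, 678, 90],
                [54321, 876, 9]], dtype=np.int32)   # stored X, Y, Z integers

xyz = raw * scales + offsets                        # scaled, real-world values

# Re-scaling (conceptually what change_scaling does): pick new scales/offsets
# and recompute the stored integers so xyz stays (approximately) the same.
new_scales = np.array([0.001, 0.001, 0.001])
new_offsets = np.array([1100.0, 2000.0, 0.0])
new_raw = np.round((xyz - new_offsets) / new_scales).astype(np.int32)

assert np.allclose(new_raw * new_scales + new_offsets, xyz)
```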
## Version 2.0.3
## Fixed
- Fix function that parses geotiff VLRs
- Fix handling of points with 'unregistered' extra bytes (PR #158)
- Fix to handle empty LAS/LAZ more robustly
## Version 2.0.2
### Changed
- Update the minimum lazrs version, which allows us to:
  - Fix appending in LAZ files.
  - Improve memory usage when reading/writing. (issue #152)
### Fixed
- Fix system_identifier reading by ignoring non ascii characters instead of erroring ,(issue #148, PR #149).
- Fix `LasData.change_scaling` method.
## Version 2.0.1
### Fixed
- Fix `.min` `.max` methods of array views
- Ship the tests as part of the source distribution (But they won't be installed with `pip install`)
## Version 2.0.0
- Overhaul of the internals by essentially incorporating pylas into laspy,
while the API to retrieve and set dimensions stayed the same, other parts changed
and will require adaptation.
- Better LAZ support
* Added support for writing LAZ
* Changed decompression mechanism by using either `laszip` python bindings (and not laszip-cli)
or `lazrs`
- Added ability to read and write LAS/LAS in `stream` / `chunked` mode.
- Changed laspy to support the reading and writing of LAS/LAZ data from and to `file-objects` and `bytes`
- Dropped support for python2.7, python3.6+ is supported.
---
## Version 1.7.0
- Fixed bug in point record formats 5, 9 and 10 [#105](https://github.com/laspy/laspy/issues/105)
- Return explicit msg if laszip executable was not found [#110](https://github.com/laspy/laspy/issues/110)
- Support numpy 1.17 [#122](https://github.com/laspy/laspy/issues/122)
## Version 1.6.0
- Bug fix [#92](https://github.com/laspy/laspy/issues/92)
- Test creation of all valid custom dimension data types
- Modify handling of extra bytes to be char data instead of numeric byte data
## Version 1.5.1
- Bug fixes [#67](https://github.com/laspy/laspy/pull/67), [#75](https://github.com/laspy/laspy/pull/75), [b02b40900b5](https://github.com/laspy/laspy/commit/b02b40900b5620972930cd0c201b4db1a6a69754)
- Allow usage of `laszip-cli` when working with LAZ files [#77](https://github.com/laspy/laspy/pull/77)
## Version 1.5.0
- Improved memory handling in base.FileManager [#48](https://github.com/laspy/laspy/pull/48)
- Introduced `r-` file mode, that only reads the header of as LAS file [#48](https://github.com/laspy/laspy/pull/48)
- LAS v. 1.4 bug fixes [#55](https://github.com/laspy/laspy/pull/55)
- Python 3 support [#62](https://github.com/laspy/laspy/pull/62)
| 42.598039 | 199 | 0.734177 | eng_Latn | 0.964035 |
7626e4a5878d4894ffb300efd188b0ab2d3b1813 | 834 | md | Markdown | README.md | k-koech/jquery-effects | 0a9d5a6252c0f1ebdfd82143084f2ce619e33496 | [
"MIT"
] | 1 | 2021-06-15T05:51:40.000Z | 2021-06-15T05:51:40.000Z | README.md | k-koech/jquery-effects | 0a9d5a6252c0f1ebdfd82143084f2ce619e33496 | [
"MIT"
] | null | null | null | README.md | k-koech/jquery-effects | 0a9d5a6252c0f1ebdfd82143084f2ce619e33496 | [
"MIT"
] | null | null | null | ## Project name
- Jquery Effects
## Project description
- jQuery effects, e.g. fadeIn(), fadeOut(), slideIn, etc.
## Author(s) information
- Kelvin Kipchumba Koech
## Setup instructions
- Clone the repository or download the code to your desired folder.
- Extract the files.
- Open the index.html file in your preferred browser.
- You are all done! Cheers.
## Live link
The deployed project can be accessed here: [Jquery Effects](https://k-koech.github.io/jquery-effects/)
## Technologies used
- JS
- HTML
- CSS
- Bootstrap
## Contact information
- WhatsApp +254725801772
- Email [email protected]
## License and Copyright information
Copyright 2021 Kelvin Kipchumba Koech
Licensed under [MIT License](https://github.com/k-koech/jquery-effects/blob/master/README.md).
| 26.903226 | 103 | 0.688249 | eng_Latn | 0.698185 |
7626fd0e8ee22b3e9bd4dbb1827049ccfe8aa51a | 3,461 | md | Markdown | _seminars/2019/2019-07-02-ritwik-pal.md | siddhartha-gadgil/DeptWeb | 9ed39dffb6b0c2c47c611d548633015b13736b46 | [
"MIT"
] | 1 | 2018-05-24T07:33:31.000Z | 2018-05-24T07:33:31.000Z | _seminars/2019/2019-07-02-ritwik-pal.md | siddhartha-gadgil/DeptWeb | 9ed39dffb6b0c2c47c611d548633015b13736b46 | [
"MIT"
] | 55 | 2017-01-31T05:32:50.000Z | 2021-10-11T12:08:38.000Z | _seminars/2019/2019-07-02-ritwik-pal.md | siddhartha-gadgil/DeptWeb | 9ed39dffb6b0c2c47c611d548633015b13736b46 | [
"MIT"
] | 1 | 2017-04-28T06:31:27.000Z | 2017-04-28T06:31:27.000Z | ---
speaker: Ritwik Pal (IISc Mathematics)
title: "Signs of Hecke eigenvalues of modular forms and differential operators on Jacobi forms"
date: 2 July, 2019
time: 3 pm
venue: LH-1, Mathematics Department
series: Thesis
series-prefix: PhD
series-suffix: colloquium
---
This talk would have two parts. In the first part, we will discuss some topics
which can be classified as 'Linnik-type' problems (the motivation being his
original question about locating the first prime in an arithmetic progression)
in the context of Hecke eigenvalues of modular forms on various groups, and then
talk about the distribution of their signs. In the second part we will discuss
differential operators on modular forms, and then talk about their applications
to questions about Jacobi forms.
It is well known that the sequence of Hecke eigenvalues mentioned above is often
real and has infinitely many sign changes. The first part of the talk would discuss
the problem of estimating the location of the first such sign change in the
context of Hecke eigenvalues of Yoshida lifts (a certain subspace of the Siegel
modular forms) and Fourier coefficients of Hilbert modular forms. We show how to
improve the previously best known results on this topic significantly.
The crucial inputs behind these would be to establish a non-trivial upper bound
on the sum of Hecke eigenvalues of an elliptic newform at primes away from the
level for treating Yoshida lifts; and exploiting Hecke relations along with
generalising related results due to K. Soundararajan, K. Matomaki et al. for the
case of Hilbert modular forms. In both cases we measure the location of the
eigenvalues or Fourier coefficients in terms of an analytic object called the
'analytic conductor', which would be introduced during the talk. Moreover in
the case of Hilbert modular forms, we will also discuss quantitative results
about distribution of positive and negative Hecke eigenvalues. The proof depends
on establishing a certain result on a particular types of multiplicative functions
on the set of integral ideals of a totally real number field.
In the second part of the talk, we will introduce the space of Jacobi forms and
certain results due to J. Kramer and, briefly, a conjecture due to Hashimoto on
theta series attached to quaternion algebras to motivate the results to follow.
The (partial) solution of this conjecture by Arakawa and B\"ocherer transfers the
question to one about differential operators on Jacobi forms, and we would report
on previously known and new results on this topic.
The heart of the second part of the talk would focus on the question about the
differential operators on Jacobi forms. It is well known that certain differential
operators $\{D_{v}\}\_{0}^{2m}$ map the space of Jacobi forms $J_{k,m}(N)$ of weight
$k$, index $m$ and level $N$ to the space of modular forms $M_{k+v}(N)$ of weight
$k+v$ and level $N$. It is also known that the sum of the differential operators
$D_{v}$ for $v=\{1,2,...2m\}$ map $J_{k,m}(N)$ to the direct sum of $M_{k+v}(N)$
injectively. The question alluded to above boils down to investigating whether one
can omit certain differential operators from the list above, maintaining the
injective property. In this regard, we would discuss results of Arakawa--B\"ocherer,
Das--Ramakrishnan, and finally our results. The main point would be to establish
automorphy of the Wronskian of a certain tuple of congruent theta series of weight 3/2.
| 60.719298 | 95 | 0.795435 | eng_Latn | 0.999321 |
762703549164de8cc5d3e40155c844d1c41cea69 | 77 | md | Markdown | README.md | fok666/pt_lm_fastai | d44b1e19de54a5cf5e34adba4dd1c2551c1444bc | [
"MIT"
] | null | null | null | README.md | fok666/pt_lm_fastai | d44b1e19de54a5cf5e34adba4dd1c2551c1444bc | [
"MIT"
] | null | null | null | README.md | fok666/pt_lm_fastai | d44b1e19de54a5cf5e34adba4dd1c2551c1444bc | [
"MIT"
] | null | null | null | # pt_lm_fastai
Portuguese Language Model with Fast AI and WikiPT text corpus
| 25.666667 | 61 | 0.831169 | eng_Latn | 0.766596 |
7627044b7606768c21ba46f1f95cc5035254a2d1 | 2,875 | md | Markdown | docs/en/userguide/index.md | kinglozzer/silverstripe-subsites | 727efb0ea019ecaeb736a8c144036dba553d1add | [
"BSD-3-Clause"
] | 21 | 2015-02-04T20:13:07.000Z | 2020-07-02T08:35:25.000Z | docs/en/userguide/index.md | kinglozzer/silverstripe-subsites | 727efb0ea019ecaeb736a8c144036dba553d1add | [
"BSD-3-Clause"
] | 277 | 2015-01-22T12:10:49.000Z | 2022-03-27T19:49:25.000Z | docs/en/userguide/index.md | kinglozzer/silverstripe-subsites | 727efb0ea019ecaeb736a8c144036dba553d1add | [
"BSD-3-Clause"
] | 74 | 2015-02-17T00:55:39.000Z | 2022-02-15T02:02:17.000Z | title: Working with multiple websites
summary: Setting up and editing multiple websites using SilverStripe
# Working with multiple sites
## In this section:
* Understand subsites
* Learn how to create and delete subsites
* Learn how to manage subsite permissions
* Enable/Disable public access to subsites
* Learn how to create and use subsite templates
* Learn how to edit existing subsites
* Sharing content between the main site and subsites
## Before we begin:
* Make sure you have the SilverStripe [Subsites](http://addons.silverstripe.org/add-ons/silverstripe/subsites) module installed.
* Make sure you are in the "Subsites" section on the Navigation Tabs.
* Make sure you have full administrative rights on your site.
## Understanding subsites
Subsites is a module that allows you to manage multiple related sites from a single CMS interface. Because all sites run on a single installation of SilverStripe, they can share users, content and assets. They can all use the same templates, or each use different ones.
When Subsites is installed, your existing site is defined as the main site, and you will then be able to create related subsites under it.
For example, you may have an international presence and want to create a subsite geared just to a particular country where you do business. You could create a subsite for that market and keep all information related to that country under it; you can also set up a subdomain for this site.
One of the benefits of subsites is that it is easy to copy pages between subsites, and you have access to all of the assets across all of the subsites.
Subsites is not for running unrelated websites on a single SilverStripe instance so if two sites have different vhosts you will not be able to run them with Subsites on a single SilverStripe instance.
With Subsites you can set up users to have access to all subsites or just a selection of subsites.
## Common subsite uses
Subsites can be used for various different reasons here are some of the common ones:
* Setting up a subsite for a small campaign so for example a clothing company may set up a summer or winter subsite to market just that season of clothing.
* Locking down a particular subsite you may create a particular department like recruitment who would have access to create and edit pages for their particular subsite but they would not be able to modify the main website.
* Running sub-domains on a single SilverStripe instance, with subsites if a sub-domain is pointing to the same instance and has been setup correctly you can manage this via a single CMS instance.
* Subsites can not be used to run multiple websites on a single instance. Subsites does not allow you to run multiple domains/vhosts on a single instance.
## Documentation
* [Set up](set_up.md)
* [Working with subsites](working_with.md) | 59.895833 | 317 | 0.796174 | eng_Latn | 0.999858 |
76275852c1d1b58f99765ccad9aab2a2b9a58629 | 71 | md | Markdown | AUTHORS.md | algoapril/algoapril-2022 | a64816e3553844c89855fcdf4cabd8220be29c17 | [
"MIT"
] | 29 | 2022-01-25T12:27:05.000Z | 2022-03-07T21:14:15.000Z | AUTHORS.md | algoapril/algoapril-2022 | a64816e3553844c89855fcdf4cabd8220be29c17 | [
"MIT"
] | 2 | 2022-02-01T04:12:08.000Z | 2022-02-12T08:22:23.000Z | AUTHORS.md | algoapril/algoapril-2022 | a64816e3553844c89855fcdf4cabd8220be29c17 | [
"MIT"
] | null | null | null | ### Maintainer
- Karsten Schmidt (@postspectacular)
### Contributors
| 11.833333 | 36 | 0.71831 | deu_Latn | 0.267881 |
7627d66038bd7c0cd71f5b6c61ea78ede233dcb5 | 1,332 | md | Markdown | README.md | andras-tim/ansible-role-osx-defaults | fa77f2e7ef31ecfede87e90739d93a8957957dbc | [
"BSD-2-Clause"
] | 2 | 2021-04-25T19:23:48.000Z | 2021-06-14T15:25:38.000Z | README.md | andras-tim/ansible-role-osx-defaults | fa77f2e7ef31ecfede87e90739d93a8957957dbc | [
"BSD-2-Clause"
] | null | null | null | README.md | andras-tim/ansible-role-osx-defaults | fa77f2e7ef31ecfede87e90739d93a8957957dbc | [
"BSD-2-Clause"
] | null | null | null | [![BSD licensed][badge-license]][link-license]
[![Galaxy Role][badge-role]][link-galaxy]
[![Downloads][badge-downloads]][link-galaxy]
[![Build Status][badge-travis]][link-travis]
# osx-defaults
Ansible role to configure defaults on OSX.
## Requirements
Ansible 2.0
## Role Variables
Please check the [defaults/main.yml](defaults/main.yml) file for available variables.
## Example Playbook
``` yaml
- hosts: servers
roles:
- role: andras_tim.ansible_role_osx_defaults
vars:
Bluetooth_Enabled: true
Bluetooth_ShowInMenuBar: false
```
## License
BSD
## Author Information
* Original author: Eric Lafargue
* Patches from: [contributors](https://github.com/andras-tim/ansible-role-osx-defaults/graphs/contributors)
[badge-license]: https://img.shields.io/github/license/andras-tim/ansible-role-osx-defaults.svg
[link-license]: https://raw.githubusercontent.com/andras-tim/ansible-role-osx-defaults/master/LICENSE
[badge-role]: https://img.shields.io/ansible/role/48747.svg
[badge-downloads]: https://img.shields.io/ansible/role/d/48747.svg
[link-galaxy]: https://galaxy.ansible.com/andras_tim/ansible-role-osx-defaults/
[badge-travis]: https://travis-ci.org/andras-tim/ansible-role-osx-defaults.svg?branch=master
[link-travis]: https://travis-ci.org/andras-tim/ansible-role-osx-defaults
| 24.666667 | 107 | 0.744745 | yue_Hant | 0.147278 |
7627e045633c7c49132d36bcd039d01a90f7bee8 | 3,992 | md | Markdown | exampleSite/content/english/_index.md | ITMechanic2017/airspace | efc443ff81036a79020d5fa2395c97408012c696 | [
"MIT"
] | null | null | null | exampleSite/content/english/_index.md | ITMechanic2017/airspace | efc443ff81036a79020d5fa2395c97408012c696 | [
"MIT"
] | null | null | null | exampleSite/content/english/_index.md | ITMechanic2017/airspace | efc443ff81036a79020d5fa2395c97408012c696 | [
"MIT"
] | null | null | null | ---
banner:
enable: true
bg_image: images/slider-bg.jpg
bg_overlay: true
title: A Digital Marketing <br/> & Design Agency
  content: We love the Web and the work we do. We work closely with our clients to
deliver the best possible solutions for their needs
button:
enable: true
label: Discover Our Project
link: project/
about:
enable: true
title: About Us
description: Far far away, behind the word mountains, far from the countries Vokalia
and Consonantia, there live the blind texts. Separated they live in Bookmarksgrove
right at the coast of the Semantics
content: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu
fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in
culpa qui officia deserunt mollit anim id.
image: images/wrapper-img.png
portfolio:
enable: true
bg_image: images/feature-bg.jpg
title: WE BELIEVE IN GREAT IDEAS
content: " Maecenas faucibus mollis interdum. Morbi leo risus, porta ac consectetur
ac, vestibulum at eros. Fusce dapibus, tellus ac cursus commodo, tortor mauris
condimentum nibh, ut fermentum massa justo sit amet risus.\n\nMaecenas faucibus
mollis interdum. Morbi leo risus, porta ac consectetur ac, vestibulum at eros.
Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum
massa justo sit amet risus.\n\nMaecenas faucibus mollis interdum. Morbi leo risus,
porta ac consectetur ac, vestibulum at eros. Fusce dapibus, tellus ac cursus commodo,
tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. "
button:
enable: true
label: View Works
link: project/
service:
enable: true
cta:
enable: true
bg_image: images/call-to-action-bg.jpg
title: We design delightful digital experiences.
content: Read more about what we do and our philosophy of design. Judge for yourself
The work and results <br> we’ve achieved for other clients, and meet our highly
experienced Team who just love to design.
button:
enable: true
label: Tell Us Your Story
link: contact/
funfacts:
enable: true
title: Fun Facts About Us
description: "'Far far away, behind the word mountains, far from the countries Vokalia
and Consonantia, <br> there live the blind texts. Separated they live in Bookmarksgrove
right at the coast of the Semantics'"
funfact_item:
- icon: fas fa-mug-hot
name: Cups Of Coffee
count: '99'
- icon: fas fa-glasses
name: Article Written
count: '45'
- icon: fas fa-keyboard
name: Projects Completed
count: '125'
- icon: fas fa-clock
name: Combined Projects
count: '200'
testimonial_slider:
- name: Raymond Roy
image: images/clients/avater-1.jpg
designation: CEO-Themefisher
content: This Company created an e-commerce site with the tools to make our business
a success, with innovative ideas we feel that our site has unique elements that
make us stand out from the crowd.
- name: Randi Renin
image: images/clients/avater-1.jpg
designation: CEO-Themefisher
content: This Company created an e-commerce site with the tools to make our business
a success, with innovative ideas we feel that our site has unique elements that
make us stand out from the crowd.
- name: Rose Rio
image: images/clients/avater-3.jpg
designation: CEO-Themefisher
content: This Company created an e-commerce site with the tools to make our business
a success, with innovative ideas we feel that our site has unique elements that
make us stand out from the crowd.
bg_image: "/images/18766764_1990998107811118_4108632850646818911_o.jpg"
title: West Nab Shooting Lodge
---
| 41.154639 | 91 | 0.74023 | eng_Latn | 0.924131 |
7628a854a5909b1cad664c27db3928434839e1b8 | 4,673 | md | Markdown | docs/framework/unmanaged-api/profiling/cor-prf-high-monitor-enumeration.md | paularuiz22/docs | 56a652c21770cad32dfcf128f8977d341d106332 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-12-17T08:15:14.000Z | 2019-12-17T08:15:14.000Z | docs/framework/unmanaged-api/profiling/cor-prf-high-monitor-enumeration.md | paularuiz22/docs | 56a652c21770cad32dfcf128f8977d341d106332 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-05-10T16:33:09.000Z | 2019-05-10T16:33:09.000Z | docs/framework/unmanaged-api/profiling/cor-prf-high-monitor-enumeration.md | paularuiz22/docs | 56a652c21770cad32dfcf128f8977d341d106332 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "COR_PRF_HIGH_MONITOR Enumeration"
ms.date: "04/10/2018"
ms.assetid: 3ba543d8-15e5-4322-b6e7-1ebfc92ed7dd
author: "rpetrusha"
ms.author: "ronpet"
---
# COR_PRF_HIGH_MONITOR Enumeration
[Supported in the .NET Framework 4.5.2 and later versions]
Provides flags in addition to those found in the [COR_PRF_MONITOR](../../../../docs/framework/unmanaged-api/profiling/cor-prf-monitor-enumeration.md) enumeration that the profiler can specify to the [ICorProfilerInfo5::SetEventMask2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo5-seteventmask2-method.md) method when it is loading.
## Syntax
```cpp
typedef enum {
COR_PRF_HIGH_MONITOR_NONE = 0x00000000,
COR_PRF_HIGH_ADD_ASSEMBLY_REFERENCES = 0x00000001,
COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED = 0x00000002,
COR_PRF_HIGH_MONITOR_DYNAMIC_FUNCTION_UNLOADS = 0x00000004,
COR_PRF_HIGH_REQUIRE_PROFILE_IMAGE = 0,
COR_PRF_HIGH_ALLOWABLE_AFTER_ATTACH = COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED |
COR_PRF_HIGH_MONITOR_DYNAMIC_FUNCTION_UNLOADS,
COR_PRF_HIGH_MONITOR_IMMUTABLE = 0
} COR_PRF_HIGH_MONITOR;
```
## Members
|Member|Description|
|------------|-----------------|
|`COR_PRF_HIGH_MONITOR_NONE`|No flags are set.|
|`COR_PRF_HIGH_ADD_ASSEMBLY_REFERENCES`|Controls the [ICorProfilerCallback6::GetAssemblyReference](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback6-getassemblyreferences-method.md) callback for adding assembly references during the CLR assembly reference closure walk.|
|`COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED`|Controls the [ICorProfilerCallback7::ModuleInMemorySymbolsUpdated](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback7-moduleinmemorysymbolsupdated-method.md) callback for updates to the symbol stream associated with an in-memory module.|
|`COR_PRF_HIGH_MONITOR_DYNAMIC_FUNCTION_UNLOADS`|Controls the [ICorProfilerCallback9::DynamicMethodUnloaded](icorprofilercallback9-dynamicmethodunloaded-method.md) callback for indicating when a dynamic method has been garbage collected and unloaded. <br/> [!INCLUDE[net_current_v472plus](../../../../includes/net-current-v472plus.md)]|
|`COR_PRF_HIGH_REQUIRE_PROFILE_IMAGE`|Represents all `COR_PRF_HIGH_MONITOR` flags that require profile-enhanced images. It corresponds to the `COR_PRF_REQUIRE_PROFILE_IMAGE` flag in the [COR_PRF_MONITOR](../../../../docs/framework/unmanaged-api/profiling/cor-prf-monitor-enumeration.md) enumeration.|
|`COR_PRF_HIGH_ALLOWABLE_AFTER_ATTACH`|Represents all `COR_PRF_HIGH_MONITOR` flags that can be set after the profiler is attached to a running app.|
|`COR_PRF_HIGH_MONITOR_IMMUTABLE`|Represents all `COR_PRF_HIGH_MONITOR` flags that can be set only during initialization. Trying to change any of these flags elsewhere results in an `HRESULT` value that indicates failure.|
## Remarks
The `COR_PRF_HIGH_MONITOR` flags are used with the `pdwEventsHigh` parameter of the [ICorProfilerInfo5::GetEventMask2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo5-geteventmask2-method.md) and [ICorProfilerInfo5::SetEventMask2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo5-seteventmask2-method.md) methods.
Starting with the [!INCLUDE[net_v461](../../../../includes/net-v461-md.md)], the value of the `COR_PRF_HIGH_ALLOWABLE_AFTER_ATTACH` changed from 0 to `COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED` (0x00000002). Starting with the .NET Framework 4.7.2, its value changed from `COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED` to `COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED | COR_PRF_HIGH_MONITOR_DYNAMIC_FUNCTION_UNLOADS`.
`COR_PRF_HIGH_MONITOR_IMMUTABLE` is intended to be a bitmask that represents all flags that can only be set during initialization. Trying to change any of these flags elsewhere results in a failed `HRESULT`.
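 The value changes described above are plain bitwise ORs of the member values from the syntax block. A quick sketch of that arithmetic (plain Python, values copied from the table; illustrative only, not part of the profiling API):

```python
# Values copied from the COR_PRF_HIGH_MONITOR syntax block above.
COR_PRF_HIGH_MONITOR_NONE = 0x00000000
COR_PRF_HIGH_ADD_ASSEMBLY_REFERENCES = 0x00000001
COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED = 0x00000002
COR_PRF_HIGH_MONITOR_DYNAMIC_FUNCTION_UNLOADS = 0x00000004
COR_PRF_HIGH_MONITOR_IMMUTABLE = 0

# .NET 4.6.1: ALLOWABLE_AFTER_ATTACH == IN_MEMORY_SYMBOLS_UPDATED (0x2);
# .NET 4.7.2: the dynamic-function-unloads bit was OR'ed in as well.
allowable_after_attach = (COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED
                          | COR_PRF_HIGH_MONITOR_DYNAMIC_FUNCTION_UNLOADS)
assert allowable_after_attach == 0x00000006

# IMMUTABLE is the mask of init-only high flags (currently empty), so an
# after-attach request must not intersect it.
requested = allowable_after_attach
assert requested & COR_PRF_HIGH_MONITOR_IMMUTABLE == 0
```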
## Requirements
**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
**Header:** CorProf.idl, CorProf.h
**Library:** CorGuids.lib
**.NET Framework Versions:** [!INCLUDE[net_current_v452plus](../../../../includes/net-current-v452plus-md.md)]
## See also
- [Profiling Enumerations](../../../../docs/framework/unmanaged-api/profiling/profiling-enumerations.md)
- [COR_PRF_MONITOR Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-monitor-enumeration.md)
- [ICorProfilerInfo5 Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo5-interface.md)
| 76.606557 | 404 | 0.755617 | kor_Hang | 0.330832 |
762939cafe016e67ec0078bee1a818ab42b4defd | 3,833 | md | Markdown | _posts/2018-11-15-where-do-thoughts-come-from-and-where-do-they-go.md | VinceG3/blog | a9355b992701b6b8ef87d42e71b6be529fcae474 | [
"MIT"
] | null | null | null | _posts/2018-11-15-where-do-thoughts-come-from-and-where-do-they-go.md | VinceG3/blog | a9355b992701b6b8ef87d42e71b6be529fcae474 | [
"MIT"
] | null | null | null | _posts/2018-11-15-where-do-thoughts-come-from-and-where-do-they-go.md | VinceG3/blog | a9355b992701b6b8ef87d42e71b6be529fcae474 | [
"MIT"
] | null | null | null | ---
layout: post
title: Where do thoughts come from and where do they go?
date: 2018-11-15
---
<p>Your brain is a collection of cells, called neurons, each of which evolved to do one thing and do it well. Sense an electrical impulse, and react to it by modulating it a little bit and passing it on to another bundle of neurons. Different neurons do this slightly differently and there are different types of neurons that do it more differently than other neurons of other types.</p><p>All physical sensations get converted into electrical impulses and pass into the brain, going through sophisticated ‘circuitry’ designed to transform them into ‘thoughts’ that can enter the brain. Your whole nervous system evolved to collect information from all over your body and pipe it into your brain.</p><p>So your brain has electrical signals zooming around it, 24 hours a day, 7 days a week. The condition of not having electrical signals zooming around inside is called “brain death.” But not all of these electrical signals reach awareness; you don’t know about all your heartbeats, so obviously some of this brain activity can be termed “thought” and some of it cannot.</p><p>Reptiles have brain structures called basal ganglia that are responsible for conducting decision-making at the very basic life and death level. Anybody who’s ever experienced a legit fight-or-flight response where your conscious brain processes get overridden by an instinct has witnessed the basal ganglia in action. This doesn’t happen very often in modern society but the basal ganglia is also involved in forming the very basic substrate on which experience happens.</p><p>Close to the basal ganglia is the paleo-mammalian cortex, colloquially called the limbic system. The limbic system primarily takes its input from the basal ganglia and further adds color and meaning to brain activity. These are deep instinctual patterns, evolved long before there were ever even apes, let alone humans, and are responsible for our emotions.
If you study the way reptiles and mammals behave, you can see the enormous difference the limbic system generates in how they make decisions and act.</p><p>Also present in mammals is the neo-mammalian cortex, colloquially called the neocortex or forebrain. This is where complex thought is generated. In most mammals, all thought orients solely around sensory experience. The neocortex provides for problem solving, and interplays with the limbic system to allow mammals to pursue interpersonal relationships within their social group.</p><p>Humans alone have two specialized brain structures that provide for translating brain activity into <i>symbolic</i> representations of the electrical signals. These are words and thoughts.</p><p>So, all in all, your senses take in information, transmit it through the nervous system into the brain, starting at the basal ganglia, which emits signals which the limbic system then uses to determine how to <i>feel</i> about them, the prefrontal cortex decides what further to <i>do</i> about them, and this whole process gets picked up on by the linguistic systems to make <i>meaning</i> out of it so that you can either communicate it to other humans, or further mull it over so that you can maybe think of better solutions.</p><p>Thought, when distinguished from ‘mere’ feelings and emotions, is simply the interplay between the linguistic system that humans alone have, and the prefrontal cortex, with an additional role served by the limbic system.</p><p>Each of these systems of the brain is, again, made up of neurons, whose <i>only job</i> is to take electrical signals, react to them, and send them on to other neurons. <b>Everything</b> that we consider happening in the brain, from consciousness to thought to feeling, is a consequence of these neurons operating.</p>
| 479.125 | 3,736 | 0.800678 | eng_Latn | 0.999898 |
762970c96db5dbb3baed35383a704b10f26171bf | 186 | md | Markdown | src/pages/blog/2021-12-24-testing.md | acer902/Engineering-CMS | eec76a435c0fbb52f50a6604f24ea5fb790730a5 | [
"MIT"
] | null | null | null | src/pages/blog/2021-12-24-testing.md | acer902/Engineering-CMS | eec76a435c0fbb52f50a6604f24ea5fb790730a5 | [
"MIT"
] | null | null | null | src/pages/blog/2021-12-24-testing.md | acer902/Engineering-CMS | eec76a435c0fbb52f50a6604f24ea5fb790730a5 | [
"MIT"
] | null | null | null | ---
templateKey: blog-post
title: Testing
date: 2021-12-24T05:41:50.842Z
description: This is a description
featuredpost: true
featuredimage: /img/apple-touch-icon.png
---
This is a body | 20.666667 | 40 | 0.768817 | eng_Latn | 0.910281 |
7629b26d6959055b31e4c9a208173b20f471d56d | 2,170 | md | Markdown | wdk-ddi-src/content/d3d12umddi/ne-d3d12umddi-d3d12ddi_create_shader_flags.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3d12umddi/ne-d3d12umddi-d3d12ddi_create_shader_flags.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3d12umddi/ne-d3d12umddi-d3d12ddi_create_shader_flags.md | pcfist/windows-driver-docs-ddi | a14a7b07cf628368a637899de9c47e9eefba804c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NE:d3d12umddi.D3D12DDI_CREATE_SHADER_FLAGS
title: D3D12DDI_CREATE_SHADER_FLAGS
author: windows-driver-content
description: Defines flags for shader creation.
old-location: display\d3d12ddi_create_shader_flags.htm
old-project: display
ms.assetid: 93F27775-3E74-4310-8E09-DCB079436706
ms.author: windowsdriverdev
ms.date: 2/26/2018
ms.keywords: D3D12DDI_CREATE_SHADER_FLAGS, D3D12DDI_CREATE_SHADER_FLAGS enumeration [Display Devices], D3D12DDI_CREATE_SHADER_FLAG_DISABLE_OPTIMIZATION, D3D12DDI_CREATE_SHADER_FLAG_ENABLE_SHADER_TRACING, D3D12DDI_CREATE_SHADER_FLAG_NONE, d3d12umddi/D3D12DDI_CREATE_SHADER_FLAGS, d3d12umddi/D3D12DDI_CREATE_SHADER_FLAG_DISABLE_OPTIMIZATION, d3d12umddi/D3D12DDI_CREATE_SHADER_FLAG_ENABLE_SHADER_TRACING, d3d12umddi/D3D12DDI_CREATE_SHADER_FLAG_NONE, display.d3d12ddi_create_shader_flags
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: enum
req.header: d3d12umddi.h
req.include-header: D3d12umddi.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- D3d12umddi.h
api_name:
- D3D12DDI_CREATE_SHADER_FLAGS
product: Windows
targetos: Windows
req.typenames: D3D12DDI_CREATE_SHADER_FLAGS
---
# D3D12DDI_CREATE_SHADER_FLAGS enumeration
## -description
Defines flags for shader creation.
## -syntax
````
typedef enum D3D12DDI_CREATE_SHADER_FLAGS {
D3D12DDI_CREATE_SHADER_FLAG_NONE = 0x0,
D3D12DDI_CREATE_SHADER_FLAG_ENABLE_SHADER_TRACING = 0x1,
D3D12DDI_CREATE_SHADER_FLAG_DISABLE_OPTIMIZATION = 0x2
} D3D12DDI_CREATE_SHADER_FLAGS;
````
## -enum-fields
### -field D3D12DDI_CREATE_SHADER_FLAG_NONE
No flag value for shader creation.
### -field D3D12DDI_CREATE_SHADER_FLAG_ENABLE_SHADER_TRACING
Shader tracing is enabled for the shader.
### -field D3D12DDI_CREATE_SHADER_FLAG_DISABLE_OPTIMIZATION
The shader is compiled quickly and less optimally.
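Because these are bit flags, tracing and optimization-disable can be combined in a single value. The following Python sketch merely mirrors the values from the listing above to illustrate the bitwise combination; the C enumeration in D3d12umddi.h is the authoritative definition.

```python
from enum import IntFlag

class CreateShaderFlags(IntFlag):
    """Python mirror of D3D12DDI_CREATE_SHADER_FLAGS, for illustration only."""
    NONE = 0x0
    ENABLE_SHADER_TRACING = 0x1
    DISABLE_OPTIMIZATION = 0x2

# A caller may OR the flags together, e.g. a traced, unoptimized shader:
flags = CreateShaderFlags.ENABLE_SHADER_TRACING | CreateShaderFlags.DISABLE_OPTIMIZATION
tracing_enabled = bool(flags & CreateShaderFlags.ENABLE_SHADER_TRACING)
```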
| 23.846154 | 483 | 0.822581 | yue_Hant | 0.971826 |
7629b733c2e003da1cfcfec35d522f386a59f623 | 4,139 | md | Markdown | source/content/site-owner-faq.md | musicaljoeker/documentation | 9b7f749259b9124fc2070c52dc4b7ac124babd02 | [
"MIT"
] | null | null | null | source/content/site-owner-faq.md | musicaljoeker/documentation | 9b7f749259b9124fc2070c52dc4b7ac124babd02 | [
"MIT"
] | null | null | null | source/content/site-owner-faq.md | musicaljoeker/documentation | 9b7f749259b9124fc2070c52dc4b7ac124babd02 | [
"MIT"
] | null | null | null | ---
title: New Site Owner FAQs
description: Learn about common billing and administrative tasks performed by a Pantheon Drupal or WordPress site owner.
tags: [manage, billing]
categories: [manage,go-live]
---
When you become a site owner, you receive administrator permissions to manage the billing information, team members, and site settings.
## Administrative Tasks
### How do I add and remove team members?
**Add a Team Member**
In the Team modal, enter the email address of the user, and click **Add Team Member**.
Once the user has been added to the project, they will receive a welcome email notifying them that they are now a member of the site's team. This will allow them to access the site's codebase, create backups, mark the site as private, clear your sites' caches, sync content, and perform updates.
**Remove a Team Member**
<Alert title="Note" type="info">
All users can be removed except the site owner.
See the [Remove a Site Owner](/access-management/#remove-a-site-owner) section of our Access Management doc for more information.
</Alert>
In the Team modal, click the X next to the user you want to delete.
When you delete a user from a site, they lose the ability to perform any operations on that site.
For more information on managing teams, see the [Team Management article](/team-management).
### How do I add a Supporting Agency?
One of the best things about Pantheon is the ability to collaborate with agencies and shops on web projects. If you have contracted with a Pantheon Partner Agency, you can add them to the site as a Supporting Organization, which will give their company access to help build, launch, or maintain your site:
<Partial file="add-supporting-org.md" />
### What add-ons are available for my site?
- [Apache Solr](/solr) is a system for indexing and searching site content. Pantheon provides Apache Solr v3.6 as a service for most plans including the Sandbox site plan.
- [Redis](/redis) is an open-source, networked, in-memory, key-value data store that can be used as a drop-in caching backend for your Drupal or WordPress website.
Pantheon also offers [New Relic Pro](/new-relic) to our customers, built into the Site Dashboard. New Relic offers a wide array of metrics that provide a nearly real-time look into the performance of a web application.
### How do I enable add-ons?
From your Site Dashboard, click **Settings**, then click **Add Ons**. You will see all the available add-ons for your site.
You can access New Relic Pro directly from the Site Dashboard, by clicking on **<span class="glyphicons glyphicons-eye-open"></span> New Relic**.
### Can I downgrade my site to a Basic plan?
Yes. However, if you have the Solr and/or Redis add-ons enabled, they will break when you move down to the Basic plan. For more information, see [Manage Plans in the Site Dashboard](/site-plan/#basic-plan).
## How do I recover an account after a site owner leaves?
See the steps in our [Site Access](/site-access) doc for recovery instructions.
## Billing Tasks
### How do I change site service levels?
From your Site Dashboard, click **Settings**. Select a plan, and click **Update Plan**. Next, enter the payment information or invite someone to pay for the site, and click **Purchase Plan**.
### Can I update or change the payment method?
You can update the payment method in the **Settings** page. For detailed instructions, see [Account Billing in the User Dashboard](/account-billing).
### Can I pay for my site on an annual or quarterly basis instead of monthly?
Self-serve sites are billable via recurring monthly or [annual](/annual-billing) billing. Sites that are owned by a Reseller, Edu+, or Enterprise organization are invoiced to the organization.
### Can I transfer ownership of a site to someone else?
<Partial file="transfer-ownership-billing-intro.md" />
<Partial file="transfer-ownership-billing-steps.md" />
## See Also
- [Billing in the Site Dashboard](/site-billing)
- [Account Billing in the User Dashboard](/account-billing)
- [Team Management](/team-management)
- [Add a Client Site to your Organization Dashboard](/add-client-site)
| 49.27381 | 305 | 0.759121 | eng_Latn | 0.997117 |
7629e9cba0dd17c565feee7749b7a036114ac0b6 | 1,011 | md | Markdown | README.md | jrdnbradford/Doc-Merge | 119e9fbb521061b25bf8b58a4df1c640f7b7712d | [
"MIT"
] | null | null | null | README.md | jrdnbradford/Doc-Merge | 119e9fbb521061b25bf8b58a4df1c640f7b7712d | [
"MIT"
] | null | null | null | README.md | jrdnbradford/Doc-Merge | 119e9fbb521061b25bf8b58a4df1c640f7b7712d | [
"MIT"
] | 2 | 2020-04-15T00:03:51.000Z | 2020-07-10T22:20:20.000Z | # Doc Merge
Google Sheet-bound document merge application built with Google Apps Script.
The files in this repo are meant to be bound to a Google Sheet container. Information about data used for merging that could be helpful for troubleshooting issues is logged in the [Google Account](https://script.google.com/home/executions) that runs the application. This default behavior can be changed by switching the *logTroubleShootingInfo* boolean variable in [globals.gs](server/globals.gs) to *false*.
## Recommended OAuth Scopes
```json
{
"oauthScopes": [
"https://www.googleapis.com/auth/drive",
"https://www.googleapis.com/auth/documents",
"https://www.googleapis.com/auth/spreadsheets.currentonly",
"https://www.googleapis.com/auth/script.container.ui"
]
}
```
## Authors
**Jordan Bradford** - GitHub: [jrdnbradford](https://github.com/jrdnbradford)
## License
All code in this project is licensed under the MIT license. See [LICENSE.txt](LICENSE.txt) for details.
| 43.956522 | 409 | 0.739862 | eng_Latn | 0.768993 |
762a367df91f190721a4248bbd188891ff214420 | 135 | md | Markdown | README.md | sibvrv/webpack-typescript-library-boilerplate | 86de2c4d896ac8e6262a0d5b585820ad7c3cff73 | [
"MIT"
] | null | null | null | README.md | sibvrv/webpack-typescript-library-boilerplate | 86de2c4d896ac8e6262a0d5b585820ad7c3cff73 | [
"MIT"
] | 2 | 2020-12-04T20:48:08.000Z | 2021-03-10T20:46:59.000Z | README.md | sibvrv/webpack-typescript-library-boilerplate | 86de2c4d896ac8e6262a0d5b585820ad7c3cff73 | [
"MIT"
] | null | null | null | # webpack-typescript-library-boilerplate
This project demonstrates how to create your own library using TypeScript and Webpack 4.
| 45 | 93 | 0.82963 | eng_Latn | 0.984553 |
762a5d91f0470644ba13b670b853426c83aee188 | 2,223 | md | Markdown | data/2015/01/2015-01-24.md | bouzuya/blog.bouzuya.net | d5e643990b8e9721ae09c18f99334a898d83fcb8 | [
"MIT"
] | 6 | 2016-05-02T21:31:41.000Z | 2018-01-15T04:48:01.000Z | data/2015/01/2015-01-24.md | bouzuya/blog.bouzuya.net | d5e643990b8e9721ae09c18f99334a898d83fcb8 | [
"MIT"
] | 56 | 2015-05-18T04:57:25.000Z | 2021-07-22T20:17:27.000Z | data/2015/01/2015-01-24.md | bouzuya/blog.bouzuya.net | d5e643990b8e9721ae09c18f99334a898d83fcb8 | [
"MIT"
] | 2 | 2016-06-15T04:06:11.000Z | 2016-10-18T13:36:55.000Z | This week in review.
- [2015-01-23 Read *AngularJS Reference*][2015-01-23]
- [2015-01-22 Untitled][2015-01-22]
- [2015-01-21 Got stuck on TypeScript's import require][2015-01-21]
- [2015-01-20 On "representative products," and other things][2015-01-20]
- [2015-01-19 Watched the movie *The Brothers Grimm* & read the magazine *Software Design 2015-02*][2015-01-19]
- [2015-01-18 Went to Misasa Onsen][2015-01-18]
- [2015-01-17 2015-W03 weekly review & built bouzuya/hspd-bootstrap for shuburi][2015-01-17]
[shuburi (週ぶり)][shuburi] 2015-W04.
I built [bouzuya/hspd-search][]. A demo is available at [hspd-search.herokuapp.com](https://hspd-search.herokuapp.com).
It adds a search feature to last week's bouzuya/hspd-bootstrap. Since it's just an increment, the same repository would probably have been fine, but it's deployed as a separate application, so let's call it good.
I got stuck on TypeScript modules. It's a little disappointing that I had to change how modules are exported compared to when I was writing plain JavaScript.
Miscellaneous notes.
This week the world's attention was probably on the ISIS hostage situation. I don't care much either way.
Fitness.
I slacked off a fair bit this week. I'm still weighing myself, so let's call it good. I realized it works better with a partner; competing helps.
Weight.
[![Weight graph][graph-weight-img]][graph-weight-url]
Body fat percentage.
[![Body fat percentage graph][graph-percent-img]][graph-percent-url]
Movies: *Unstoppable* and *Frequency* (オーロラの彼方へ)
Unstoppable. A train runs out of control. It isn't much to look at, but the story is simple. What's interesting is how trivial the cause of the runaway is — a small mistake leads to a disaster.
Frequency. It's a time-paradox story, but it's contrived through and through.
Other.
I went to Misasa Onsen. Hot springs are good, if tiring. Also, my girlfriend was staying over, so I didn't get much done.
KPT.
Keep.
- shuburi (週ぶり).
- Miscellaneous notes.
- Fitness (weighing myself).
Problem.
- Couldn't do the 6:00 shuburi session.
- It's hard to keep this year's goals in view.
Try.
- Visualize this year's goals.
[2015-01-23]: https://blog.bouzuya.net/2015/01/23/
[2015-01-22]: https://blog.bouzuya.net/2015/01/22/
[2015-01-21]: https://blog.bouzuya.net/2015/01/21/
[2015-01-20]: https://blog.bouzuya.net/2015/01/20/
[2015-01-19]: https://blog.bouzuya.net/2015/01/19/
[2015-01-18]: https://blog.bouzuya.net/2015/01/18/
[2015-01-17]: https://blog.bouzuya.net/2015/01/17/
[graph-weight-img]: http://graph.hatena.ne.jp/bouzuya/graph?graphname=weight&startdate=2015-01-01&enddate=2015-01-24
[graph-weight-url]: http://graph.hatena.ne.jp/bouzuya/weight/?startdate=2015-01-01&enddate=2015-01-24
[graph-percent-img]: http://graph.hatena.ne.jp/bouzuya/graph?graphname=percent&startdate=2015-01-01&enddate=2015-01-24
[graph-percent-url]: http://graph.hatena.ne.jp/bouzuya/percent/?startdate=2015-01-01&enddate=2015-01-24
[shuburi]: http://shuburi.org
[bouzuya/hspd-search]: https://github.com/bouzuya/hspd-search
[hspd-search]: https://hspd-search.herokuapp.com/
| 29.25 | 118 | 0.744939 | yue_Hant | 0.364674 |
762a8845e8179953af4ffce6fea819e23ad207bc | 857 | md | Markdown | markdown/bitburner.hacknet.purchasenode.md | HeinousTugboat/bitburner | 6f017bf4f60cbe264556456c69450d7a160856fd | [
"Apache-2.0"
] | 1 | 2022-02-22T01:35:47.000Z | 2022-02-22T01:35:47.000Z | markdown/bitburner.hacknet.purchasenode.md | HeinousTugboat/bitburner | 6f017bf4f60cbe264556456c69450d7a160856fd | [
"Apache-2.0"
] | 37 | 2022-02-18T06:50:49.000Z | 2022-03-06T23:19:23.000Z | markdown/bitburner.hacknet.purchasenode.md | HeinousTugboat/bitburner | 6f017bf4f60cbe264556456c69450d7a160856fd | [
"Apache-2.0"
] | 1 | 2021-12-26T22:16:49.000Z | 2021-12-26T22:16:49.000Z | <!-- Do not edit this file. It is automatically generated by API Documenter. -->
[Home](./index.md) > [bitburner](./bitburner.md) > [Hacknet](./bitburner.hacknet.md) > [purchaseNode](./bitburner.hacknet.purchasenode.md)
## Hacknet.purchaseNode() method
Purchase a new hacknet node.
<b>Signature:</b>
```typescript
purchaseNode(): number;
```
<b>Returns:</b>
number
The index of the Hacknet Node or if the player cannot afford to purchase a new Hacknet Node the function will return -1.
## Remarks
RAM cost: 0 GB
Purchases a new Hacknet Node. Returns a number with the index of the Hacknet Node. This index is equivalent to the number at the end of the Hacknet Node’s name (e.g The Hacknet Node named `hacknet-node-4` will have an index of 4).
If the player cannot afford to purchase a new Hacknet Node then the function will return -1.
| 30.607143 | 230 | 0.733956 | eng_Latn | 0.977769 |
762a9667e032e635460409eced67a1bfabb0bbd9 | 1,569 | md | Markdown | CHANGELOG.md | joezhou888/aws-buildbox | da36f79f405cefe209bc7174ac735f8edfb8496c | [
"MIT"
] | null | null | null | CHANGELOG.md | joezhou888/aws-buildbox | da36f79f405cefe209bc7174ac735f8edfb8496c | [
"MIT"
] | null | null | null | CHANGELOG.md | joezhou888/aws-buildbox | da36f79f405cefe209bc7174ac735f8edfb8496c | [
"MIT"
] | null | null | null | ## 1.4.0 (Created: Apr 26 2019)
Updating Amazon Corretto (JVM) to 8.0.212
* Amazon Linux=2.0.20190228
* Linux Kernel=4.9.125-linuxkit
* Docker=18.06.1
* AWS Cli=1.16.140
* SAM Cli=0.14.2
* CDK Cli=0.29.0
* Node=10.15.3
* Python=2.7.14
* Java=8.0.212-amzn
* Gradle=5.3.1
* Maven=3.6.1
* Sbt=1.2.8
## 1.3.0 (Created: Apr 25 2019)
CDK version upgrade only to 0.29.0
* Amazon Linux=2.0.20190228
* Linux Kernel=4.9.125-linuxkit
* Docker=18.06.1
* AWS Cli=1.16.140
* SAM Cli=0.14.2
* CDK Cli=0.29.0
* Node=10.15.3
* Python=2.7.14
* Java=8.0.202-amzn
* Gradle=5.3.1
* Maven=3.6.1
* Sbt=1.2.8
## 1.2.0 (Created: Apr 17 2019)
Rolling back the CDK version from 0.28.0 to 0.27.0; there were some regressions in 0.28.0 specific to the Java libraries.
* Amazon Linux=2.0.20190228
* Linux Kernel=4.9.125-linuxkit
* Docker=18.06.1
* AWS Cli=1.16.140
* SAM Cli=0.14.2
* CDK Cli=0.28.0
* Node=10.15.3
* Python=2.7.14
* Java=8.0.202-amzn
* Gradle=5.3.1
* Maven=3.6.1
* Sbt=1.2.8
## 1.1.0 (Created: Apr 14 2019)
Modified build to lockdown all versions of tools. General version bumps.
* Amazon Linux=2.0.20190228
* Linux Kernel=4.9.125-linuxkit
* Docker=18.06.1
* AWS Cli=1.16.140
* SAM Cli=0.14.2
* CDK Cli=0.28.0
* Node=10.15.3
* Python=2.7.14
* Java=8.0.202-amzn
* Gradle=5.3.1
* Maven=3.6.1
* Sbt=1.2.8
## 1.0.0 (Created: Feb 16 2019)
Initial release.
* Amazon Linux=2.0.20190212
* Linux Kernel=4.9.125-linuxkit
* Docker=18.06.1-ce
* AWS Cli=1.16.106
* SAM Cli=0.11.0
* CDK Cli=0.24.1
* Node=8.15.0
* Python=2.7.14
* Java=8.0.202-amzn
* Gradle=5.0
* Maven=3.6.0
* Sbt=1.2.8 | 18.678571 | 117 | 0.659018 | yue_Hant | 0.545976 |
762aec358f040f4470639a10f5feecf7e5c5d654 | 39 | md | Markdown | README.md | Ong123/smart-contract | c92d7560d3177598ac05c2f4d608414dbcc55bcc | [
"MIT"
] | null | null | null | README.md | Ong123/smart-contract | c92d7560d3177598ac05c2f4d608414dbcc55bcc | [
"MIT"
] | null | null | null | README.md | Ong123/smart-contract | c92d7560d3177598ac05c2f4d608414dbcc55bcc | [
"MIT"
] | null | null | null | ## Election & Campaign Smart Contract
| 19.5 | 38 | 0.74359 | eng_Latn | 0.650696 |
762afc253e7f5cb85c490f67f923c8f2e068a0d6 | 770 | md | Markdown | brainflux/README.md | l2wilson94/vms | 19bf9833bdeb03c9381e59a32285844118491540 | [
"MIT"
] | 2 | 2020-11-19T21:40:55.000Z | 2020-11-21T09:40:39.000Z | brainflux/README.md | l2wilson94/vms | 19bf9833bdeb03c9381e59a32285844118491540 | [
"MIT"
] | null | null | null | brainflux/README.md | l2wilson94/vms | 19bf9833bdeb03c9381e59a32285844118491540 | [
"MIT"
] | 1 | 2020-11-21T13:58:15.000Z | 2020-11-21T13:58:15.000Z | # Brainflux
Brainflux is a port of another foul-mouthed esoteric programming language. Brainflux, on the other hand, doesn't want that. Once you give brainflux consent, it will massage your brain and put you in a flow state.
```
[0][0][0] .... [0] (tape)
^ (memory pointer)
++++++[>+++++++<-] (instruction)
^ (instruction pointer)
```
This is your tape. Each cell is an 8-bit (1-byte) number. It can store a number from 0 to 255, an ASCII character, or 8 "flags".
1. You can write on it with: `-` and `+`
1. You can move the memory pointer left and right with: `<` and `>`
1. You can write loops with: `[` and `]`. It only enters/reenters the loop if the value at the memory pointer is not `0`.
1. You can move the instruction pointer to the value at memory pointer with `@`
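As a sketch of how such an interpreter might work, here is a minimal Python implementation of the subset described above (`+ - < > [ ]`). The `@` jump and any I/O are left out, and the tape length and 8-bit wraparound behavior are assumptions, not part of the language spec.

```python
def run(program, tape_len=30000):
    """Interpret the + - < > [ ] subset of brainflux described above."""
    tape = [0] * tape_len
    ptr = 0   # memory pointer
    ip = 0    # instruction pointer
    # Pre-match brackets so [ and ] can jump in one step.
    jumps, stack = {}, []
    for i, op in enumerate(program):
        if op == "[":
            stack.append(i)
        elif op == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while ip < len(program):
        op = program[ip]
        if op == "+":
            tape[ptr] = (tape[ptr] + 1) % 256   # 8-bit cell, assumed to wrap
        elif op == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ">":
            ptr += 1
        elif op == "<":
            ptr -= 1
        elif op == "[" and tape[ptr] == 0:
            ip = jumps[ip]                      # skip the loop body
        elif op == "]" and tape[ptr] != 0:
            ip = jumps[ip]                      # re-enter the loop
        ip += 1
    return tape

# The example from the text: leaves 6 * 7 = 42 in cell 1.
cells = run("++++++[>+++++++<-]")
```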
| 40.526316 | 209 | 0.688312 | eng_Latn | 0.999831 |
762b8a9f8c7432e3f4ac883acf5b12bd69f0bd61 | 9,986 | md | Markdown | articles/cognitive-services/authentication.md | eltociear/azure-docs.fr-fr | 3302b8be75f0872cf7d7a5e264850849ac36e493 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/authentication.md | eltociear/azure-docs.fr-fr | 3302b8be75f0872cf7d7a5e264850849ac36e493 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/authentication.md | eltociear/azure-docs.fr-fr | 3302b8be75f0872cf7d7a5e264850849ac36e493 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Authentification
titleSuffix: Azure Cognitive Services
description: 'Pour authentifier une requête auprès d’une ressource Azure Cognitive Services, trois options s’offrent à vous : une clé d’abonnement, un jeton du porteur ou un abonnement multiservice. Cet article décrit chaque méthode et explique comment adresser une requête.'
services: cognitive-services
author: erhopf
manager: nitinme
ms.service: cognitive-services
ms.topic: conceptual
ms.date: 11/22/2019
ms.author: erhopf
ms.openlocfilehash: d36961a12162a587def76b1ffeb2109f9ed63f4d
ms.sourcegitcommit: bb0afd0df5563cc53f76a642fd8fc709e366568b
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 05/19/2020
ms.locfileid: "83587678"
---
# <a name="authenticate-requests-to-azure-cognitive-services"></a>Authenticate requests to Azure Cognitive Services
Each request to an Azure Cognitive Services service must include an authentication header. This header passes a subscription key or an access token that is used to validate your subscription to a service or group of services. This article covers the three ways to authenticate a request and the requirements of each.
* [Authenticate with a single-service subscription key](#authenticate-with-a-single-service-subscription-key)
* [Authenticate with a multi-service subscription key](#authenticate-with-a-multi-service-subscription-key)
* [Authenticate with a token](#authenticate-with-an-authentication-token)
* [Authenticate with Azure Active Directory (AAD)](#authenticate-with-azure-active-directory)
## <a name="prerequisites"></a>Prerequisites
To make a request, you need an Azure account and an Azure Cognitive Services subscription. If you already have an account, skip ahead to the next section. If you don't, this guide will get you set up in a few minutes: [Create a Cognitive Services account for Azure](cognitive-services-apis-create-account.md).
You can get your subscription key from the [Azure portal](cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) after creating your account, or after activating a [free trial](https://azure.microsoft.com/try/cognitive-services/my-apis).
## <a name="authentication-headers"></a>Authentication headers
Let's quickly review the authentication headers available for use with Azure Cognitive Services.
| Header | Description |
|--------|-------------|
| Ocp-Apim-Subscription-Key | Use this header to authenticate a request with a subscription key for a specific service or a multi-service subscription key. |
| Ocp-Apim-Subscription-Region | This header is only required when using a multi-service subscription key with the [Translator service](./Translator/reference/v3-0-reference.md). Use this header to specify the subscription region. |
| Authorization | Use this header if you are using an authentication token. The steps to perform a token exchange are detailed in the following sections. The value provided follows this format: `Bearer <TOKEN>`. |
## <a name="authenticate-with-a-single-service-subscription-key"></a>Authenticate with a single-service subscription key
The first option is to authenticate a request with a subscription key for a specific service, such as Translator. The keys are available in the Azure portal for each resource you have created. To use a subscription key to authenticate a request, pass it as the `Ocp-Apim-Subscription-Key` header.
These sample requests show how to use the `Ocp-Apim-Subscription-Key` header. When using these samples, be sure to include a valid subscription key.
Here's a sample call to the Bing Web Search API:
```cURL
curl -X GET 'https://api.cognitive.microsoft.com/bing/v7.0/search?q=Welsch%20Pembroke%20Corgis' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' | json_pp
```
Here's a sample call to the Translator service:
```cURL
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
-H 'Content-Type: application/json' \
--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
```
The following video demonstrates using a Cognitive Services key.
## <a name="authenticate-with-a-multi-service-subscription-key"></a>Authenticate with a multi-service subscription key
>[!WARNING]
> At this time, the following services **do not support** multi-service keys: QnA Maker, Speech services, Custom Vision, and Anomaly Detector.
This option also uses a subscription key to authenticate requests. The main difference is that the subscription key is not tied to a specific service, so a single key can be used to authenticate requests to multiple Cognitive Services. For information about regional availability, supported features, and pricing, see [Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
The subscription key is provided in each request as the `Ocp-Apim-Subscription-Key` header.
[](https://www.youtube.com/watch?v=psHtA1p7Cas&feature=youtu.be)
### <a name="supported-regions"></a>Supported regions
When using the multi-service subscription key to make a request to `api.cognitive.microsoft.com`, you must include the region in the URL. For example: `westus.api.cognitive.microsoft.com`.
When using the multi-service subscription key with the Translator service, you must specify the subscription region with the `Ocp-Apim-Subscription-Region` header.
Multi-service authentication is supported in these regions:
| | | |
|-|-|-|
| `australiaeast` | `brazilsouth` | `canadacentral` |
| `centralindia` | `eastasia` | `eastus` |
| `japaneast` | `northeurope` | `southcentralus` |
| `southeastasia` | `uksouth` | `westcentralus` |
| `westeurope` | `westus` | `westus2` |
### <a name="sample-requests"></a>Sample requests
Here's a sample call to the Bing Web Search API:
```cURL
curl -X GET 'https://YOUR-REGION.api.cognitive.microsoft.com/bing/v7.0/search?q=Welsch%20Pembroke%20Corgis' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' | json_pp
```
Here's a sample call to the Translator service:
```cURL
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
-H 'Ocp-Apim-Subscription-Region: YOUR_SUBSCRIPTION_REGION' \
-H 'Content-Type: application/json' \
--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
```
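For programmatic callers, the header rules shown in the cURL samples above can be captured in a small helper. This is an illustrative sketch only; the function name and shape are assumptions, not an official SDK API.

```python
def auth_headers(key=None, region=None, bearer_token=None):
    """Build Cognitive Services auth headers (illustrative, not an SDK helper).

    Pass `region` only when using a multi-service key with the Translator
    service; pass `bearer_token` instead of `key` to use token authentication.
    """
    if bearer_token is not None:
        return {"Authorization": "Bearer " + bearer_token}
    headers = {"Ocp-Apim-Subscription-Key": key}
    if region is not None:
        headers["Ocp-Apim-Subscription-Region"] = region
    return headers

# Multi-service key used against the Translator service:
h = auth_headers("YOUR_SUBSCRIPTION_KEY", region="YOUR_SUBSCRIPTION_REGION")
```

Each entry in the returned dictionary maps directly onto one `-H` line in the cURL samples.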
## <a name="authenticate-with-an-authentication-token"></a>Authenticate with an authentication token
Some Azure Cognitive Services accept, and in some cases require, an authentication token. Currently, these services support authentication tokens:
* Text Translation API
* Speech services: Speech-to-text REST API
* Speech services: Text-to-speech REST API
>[!NOTE]
> QnA Maker also uses the Authorization header, but requires an endpoint key. For more information, see [QnA Maker: Get an answer from a knowledge base](./qnamaker/quickstarts/get-answer-from-knowledge-base-using-url-tool.md).
>[!WARNING]
> The services that support authentication tokens may change over time, so be sure to check a service's API reference before using this authentication method.
Both single-service and multi-service subscription keys can be exchanged for authentication tokens. Authentication tokens are valid for 10 minutes.
Authentication tokens are included in a request as the `Authorization` header. The token value provided must be preceded by `Bearer`. For example: `Bearer YOUR_AUTH_TOKEN`.
### <a name="sample-requests"></a>Sample requests
Use this URL to exchange a subscription key for an authentication token: `https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken`.
```cURL
curl -v -X POST \
"https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
-H "Content-type: application/x-www-form-urlencoded" \
-H "Content-length: 0" \
-H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY"
```
These multi-service regions support token exchange:
| | | |
|-|-|-|
| `australiaeast` | `brazilsouth` | `canadacentral` |
| `centralindia` | `eastasia` | `eastus` |
| `japaneast` | `northeurope` | `southcentralus` |
| `southeastasia` | `uksouth` | `westcentralus` |
| `westeurope` | `westus` | `westus2` |
After you get an authentication token, you must pass it in each request as the `Authorization` header. Here's a sample call to the Translator service:
```cURL
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
-H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
-H 'Content-Type: application/json' \
--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
```
[!INCLUDE [](../../includes/cognitive-services-azure-active-directory-authentication.md)]
## <a name="see-also"></a>See also
* [What is Cognitive Services?](welcome.md)
* [Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)
* [Custom subdomains](cognitive-services-custom-subdomains.md)
| 59.088757 | 527 | 0.773483 | fra_Latn | 0.910423 |
762b92d708a982f47372ae2a4ca33c2aa4e0e3c8 | 1,102 | md | Markdown | docs/csharp/misc/cs0550.md | iangithub/docs.zh-tw-1 | f6ad873a987ee7cfa5a7aaac9ce6c72283f72f84 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/misc/cs0550.md | iangithub/docs.zh-tw-1 | f6ad873a987ee7cfa5a7aaac9ce6c72283f72f84 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/misc/cs0550.md | iangithub/docs.zh-tw-1 | f6ad873a987ee7cfa5a7aaac9ce6c72283f72f84 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 編譯器錯誤 CS0550
ms.date: 07/20/2015
f1_keywords:
- CS0550
helpviewer_keywords:
- CS0550
ms.assetid: 57278c17-443c-40f2-9ebd-853558743564
ms.openlocfilehash: f1b5853534e7bf868f5b7559c68401f152fc0415
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 04/23/2019
ms.locfileid: "61656577"
---
# <a name="compiler-error-cs0550"></a>Compiler Error CS0550
'accessor' adds an accessor not found in interface member 'property'
In a derived class, the implementation of a property includes an accessor that was not specified in the base interface.
For more information, see [Using Properties](../../csharp/programming-guide/classes-and-structs/using-properties.md).
## <a name="example"></a>Example
The following sample generates CS0550:
```csharp
// CS0550.cs
namespace x
{
interface ii
{
int i
{
get;
// add the following accessor to resolve this CS0550
// set;
}
}
public class a : ii
{
int ii.i
{
get
{
return 0;
}
set {} // CS0550 no set in interface
}
public static void Main() {}
}
}
```
| 20.036364 | 92 | 0.597096 | eng_Latn | 0.171566 |
762baa75b6e88ede01638c3a1390de698c130ba3 | 2,486 | md | Markdown | docs/framework/unmanaged-api/wmi/qualifierset-endenumeration.md | mattia-lunardi/docs.it-it | b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/wmi/qualifierset-endenumeration.md | mattia-lunardi/docs.it-it | b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/wmi/qualifierset-endenumeration.md | mattia-lunardi/docs.it-it | b9909895e77ae22ac89a7cc8dc6ea289e49ce0b3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Funzione QualifierSet_EndEnumeration (riferimenti alle API non gestite)
description: La funzione QualifierSet_EndEnumeration termina un'enumerazione.
ms.date: 11/06/2017
api_name:
- QualifierSet_EndEnumeration
api_location:
- WMINet_Utils.dll
api_type:
- DLLExport
f1_keywords:
- QualifierSet_EndEnumeration
helpviewer_keywords:
- QualifierSet_EndEnumeration function [.NET WMI and performance counters]
topic_type:
- Reference
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: a9d3f8966f6333487631a0e155c7be49075a6992
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 01/23/2019
ms.locfileid: "54734675"
---
# <a name="qualifiersetendenumeration-function"></a>QualifierSet_EndEnumeration function
Ends the enumeration begun with a call to the [QualifierSet_BeginEnumeration](qualifierset-beginenumeration.md) function.
[!INCLUDE[internalonly-unmanaged](../../../../includes/internalonly-unmanaged.md)]
## <a name="syntax"></a>Syntax
```
HRESULT QualifierSet_EndEnumeration (
[in] int vFunc,
[in] IWbemQualifierSet* ptr
);
```
## <a name="parameters"></a>Parameters
`vFunc`
[in] This parameter is unused.
`ptr`
[in] A pointer to an [IWbemQualifierSet](/windows/desktop/api/wbemcli/nn-wbemcli-iwbemqualifierset) instance.
## <a name="return-value"></a>Return value
The following value returned by this function is defined in the *WbemCli.h* header file, or you can define it as a constant in your code:
|Constant |Value |Description |
|---------|---------|---------|
|`WBEM_S_NO_ERROR` | 0 | The function call was successful. |
## <a name="remarks"></a>Remarks
This function wraps a call to the [IWbemQualifierSet::EndEnumeration](/windows/desktop/api/wbemcli/nf-wbemcli-iwbemqualifierset-endenumeration) method.
This call is recommended, but not required. It immediately releases the resources associated with the enumeration.
## <a name="requirements"></a>Requisiti
**Piattaforme:** Vedere [Requisiti di sistema](../../../../docs/framework/get-started/system-requirements.md).
**Intestazione:** WMINet_Utils.idl
**Versioni di .NET Framework:** [!INCLUDE[net_current_v472plus](../../../../includes/net-current-v472plus.md)]
## <a name="see-also"></a>Vedere anche
- [WMI e contatori delle prestazioni (riferimenti alle API non gestite)](index.md)
| 34.527778 | 177 | 0.750603 | ita_Latn | 0.841115 |
762c6c32c5494db6f88f233cfcd18fc34bcefba7 | 16,321 | md | Markdown | input/includes/ConceptMap-segment-orc-to-immunization-intro.md | HL7/v2-to-fhir | 3eca3893f0874ae38fa6ae4601e063252300ab07 | [
"Apache-2.0"
] | 35 | 2019-04-11T01:59:49.000Z | 2022-03-31T17:39:30.000Z | input/includes/ConceptMap-segment-orc-to-immunization-intro.md | HL7/v2-to-fhir | 3eca3893f0874ae38fa6ae4601e063252300ab07 | [
"Apache-2.0"
] | 33 | 2020-03-01T09:29:52.000Z | 2022-03-08T03:19:46.000Z | input/includes/ConceptMap-segment-orc-to-immunization-intro.md | HL7/v2-to-fhir | 3eca3893f0874ae38fa6ae4601e063252300ab07 | [
"Apache-2.0"
] | 12 | 2019-07-11T19:03:15.000Z | 2022-03-31T17:39:33.000Z |
This ConceptMap represents the mapping from the HL7 V2 ORC Segment to the FHIR Immunization Resource. See also the <a href='https://github.com/HL7/v2-to-fhir/blob/master/tank/Segment ORC to Immunization.fsh'>FHIR Shorthand</a> or the <a href='https://github.com/HL7/v2-to-fhir/blob/master/mappings/segments/HL7 Segment - FHIR R4_ ORC[Immunization] - ORC.csv'>CSV Source</a>.
<table class='grid'><thead>
<tr><th colspan='6'>HL7 v2</th><th colspan='3'>Condition (IF True, args)</th><th colspan='7'>HL7 FHIR</th><th rowspan='2'>Comments</th></tr>
<tr><th title='Rows are listed in sequence of how they appear in the v2 standard. The first column, Sort Order, provides a sort order that can re-create the original v2 standard sequence in case one opts to re-sort/filter the rows.'>Sort Order</th><th title='Contains the formal Segment Name and Field Sequence according to the base standard using "-" as the delimiter.'>Identifier</th><th title='The formal name of the field in the most current published version.'>Name</th><th title='The data type of the field in the most current published version if not deprecated, otherwise the data type at the time it was deprecated and removed.'>Data Type</th><th title='The V2 min cardinality expressed numerically.'>Cardinality - Min</th><td style='border-right: 2px' title='The V2 max cardinality expressed numerically.'>Cardinality - Max</td><th title='Condition in an easy to read syntax (Computable ANTLR)'>Computable ANTLR</th><th title='Condition in FHIRPath Notation'>Computable FHIRPath</th><td style='border-right: 2px' title='Condition expressed in narrative form'>Narrative</td><th title='An existing FHIR attribute in the target FHIR version.'>FHIR Attribute</th><th title='A proposed extension. It will be expressed with #ext-...# around the proposed name. '>Extension</th><th title='The FHIR attribute's data type in the target FHIR version.'>Data Type</th><th title='The FHIR min cardinality expressed numerically.'>Cardinality - Min</th><td style='border-right: 2px' title='The FHIR max cardinality expressed numerically.'>Cardinality - Max</td><th title='The URL to the Data Type Map that is to be used for the attribute in this segment.'>Data Type Mapping</th><th title='The fixed or computed value to assign'>Assignment</th><th title='The URL to the Vocabulary Map that is to be used for the coded element for this attribute.'>Vocabulary Mapping<br/>(IS, ID, CE, CEN, CWE)</th></tr></thead>
<tbody>
<tr><td>1</td><td>ORC-1</td><td>Order Control</td><td>ID</td><td>1</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>2</td><td>ORC-2</td><td>Placer Order Number</td><td>EI</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.identifier'>Immunization.identifier</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.Identifier'>Immunization.Identifier</a></td><td>0</td><td>-1</td><td><a href='ConceptMap-datatype-ei-to-identifier.html'>EI[Identifier]</a></td><td></td><td></td><td></td></tr>
<tr><td>2</td><td>ORC-2</td><td>Placer Order Number</td><td>EI</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.identifier.type.coding.code'>Immunization.identifier.type.coding.code</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.code'>Immunization.code</a></td><td>0</td><td>1</td><td></td><td></td><td>"PLAC"</td><td></td></tr>
<tr><td>2</td><td>ORC-2</td><td>Placer Order Number</td><td>EI</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.identifier.type.coding.system'>Immunization.identifier.type.coding.system</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.uri'>Immunization.uri</a></td><td>0</td><td>1</td><td></td><td></td><td>"<a href='http://terminology.hl7.org/CodeSystem/v2-0203'>http://terminology.hl7.org/CodeSystem/v2-0203</a>"</td><td></td></tr>
<tr><td>3</td><td>ORC-3</td><td>Filler Order Number</td><td>EI</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.identifier'>Immunization.identifier</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.Identifier'>Immunization.Identifier</a></td><td>0</td><td>-1</td><td><a href='ConceptMap-datatype-ei-to-identifier.html'>EI[Identifier]</a></td><td></td><td></td><td></td></tr>
<tr><td>3</td><td>ORC-3</td><td>Filler Order Number</td><td>EI</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.identifier.type.coding.code'>Immunization.identifier.type.coding.code</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.code'>Immunization.code</a></td><td>0</td><td>1</td><td></td><td></td><td>"FILL"</td><td></td></tr>
<tr><td>3</td><td>ORC-3</td><td>Filler Order Number</td><td>EI</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.identifier.type.coding.system'>Immunization.identifier.type.coding.system</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.uri'>Immunization.uri</a></td><td>0</td><td>1</td><td></td><td></td><td>"<a href='http://terminology.hl7.org/CodeSystem/v2-0203'>http://terminology.hl7.org/CodeSystem/v2-0203</a>"</td><td></td></tr>
<tr><td>4</td><td>ORC-4</td><td>Placer Group Number</td><td>EI</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>5</td><td>ORC-5</td><td>Order Status</td><td>ID</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>6</td><td>ORC-6</td><td>Response Flag</td><td>ID</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>7</td><td>ORC-7</td><td>Quantity/Timing</td><td>TQ</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>8</td><td>ORC-8</td><td>Parent Order</td><td>EIP</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>9</td><td>ORC-9</td><td>Date/Time of Transaction</td><td>DTM</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.recorded'>Immunization.recorded</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.dateTime'>Immunization.dateTime</a></td><td>0</td><td>1</td><td></td><td></td><td></td><td></td></tr>
<tr><td>10</td><td>ORC-10</td><td>Entered By</td><td>XCN</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>11</td><td>ORC-11</td><td>Verified By</td><td>XCN</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>12</td><td>ORC-12</td><td>Ordering Provider</td><td>XCN</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.performer.actor'>Immunization.performer.actor</a>(<a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.Practitioner'>Immunization.Practitioner</a>)</td><td></td><td><a href='https://hl7.org/fhir/R4/references.html'>Reference</a>(<a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.Practitioner'>Immunization.Practitioner</a>)</td><td>0</td><td>1</td><td><a href='ConceptMap-datatype-xcn-to-practitioner.html'>XCN[Practitioner]</a></td><td></td><td></td><td></td></tr>
<tr><td>12</td><td>ORC-12</td><td>Ordering Provider</td><td>XCN</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.performer.function.coding.code'>Immunization.performer.function.coding.code</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.code'>Immunization.code</a></td><td>0</td><td>1</td><td></td><td></td><td>"OP"</td><td></td></tr>
<tr><td>12</td><td>ORC-12</td><td>Ordering Provider</td><td>XCN</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.performer.function.coding.system'>Immunization.performer.function.coding.system</a></td><td></td><td><a href='https://hl7.org/fhir/R4/Immunization.Immunization-definitions.html#Immunization.uri'>Immunization.uri</a></td><td>0</td><td>1</td><td></td><td></td><td>"<a href='http://terminology.hl7.org/CodeSystem/v2-0443'>http://terminology.hl7.org/CodeSystem/v2-0443</a>"</td><td></td></tr>
<tr><td>13</td><td>ORC-13</td><td>Enterer's Location</td><td>PL</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>14</td><td>ORC-14</td><td>Call Back Phone Number</td><td>XTN</td><td>0</td><td style='border-right: 2px'>2</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>15</td><td>ORC-15</td><td>Order Effective Date/Time</td><td>DTM</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>16</td><td>ORC-16</td><td>Order Control Code Reason</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>17</td><td>ORC-17</td><td>Entering Organization</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>18</td><td>ORC-18</td><td>Entering Device</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>19</td><td>ORC-19</td><td>Action By</td><td>XCN</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>20</td><td>ORC-20</td><td>Advanced Beneficiary Notice Code</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>21</td><td>ORC-21</td><td>Ordering Facility Name</td><td>XON</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>22</td><td>ORC-22</td><td>Ordering Facility Address</td><td>XAD</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>23</td><td>ORC-23</td><td>Ordering Facility Phone Number</td><td>XTN</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>24</td><td>ORC-24</td><td>Ordering Provider Address</td><td>XAD</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>25</td><td>ORC-25</td><td>Order Status Modifier</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>26</td><td>ORC-26</td><td>Advanced Beneficiary Notice Override Reason</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>27</td><td>ORC-27</td><td>Filler's Expected Availability Date/Time</td><td>DTM</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>28</td><td>ORC-28</td><td>Confidentiality Code</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>29</td><td>ORC-29</td><td>Order Type</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>30</td><td>ORC-30</td><td>Enterer Authorization Mode</td><td>CNE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>31</td><td>ORC-31</td><td>Parent Universal Service Identifier</td><td>CWE</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>32</td><td>ORC-32</td><td>Advanced Beneficiary Notice Date</td><td>DT</td><td>0</td><td style='border-right: 2px'>1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>33</td><td>ORC-33</td><td>Alternate Placer Order Number</td><td>CX</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>34</td><td>ORC-34</td><td>Order Workflow Profile</td><td>CWE</td><td>0</td><td style='border-right: 2px'>-1</td><td></td><td></td><td style='border-right: 2px'></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
</tbody>
</table>
| 333.081633 | 1,919 | 0.641995 | yue_Hant | 0.226034 |
762cb468bda527ee77eb14368e7a98dc6ccbc5e0 | 1,909 | md | Markdown | articles/virtual-network/virtual-network-disaster-recovery-guidance.md | allanfann/azure-docs.zh-tw | c66e7b6d1ba48add6023a4c08cc54085e3286aa3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-network/virtual-network-disaster-recovery-guidance.md | allanfann/azure-docs.zh-tw | c66e7b6d1ba48add6023a4c08cc54085e3286aa3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-network/virtual-network-disaster-recovery-guidance.md | allanfann/azure-docs.zh-tw | c66e7b6d1ba48add6023a4c08cc54085e3286aa3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Virtual network business continuity | Microsoft Docs
description: Learn what to do in the event of an Azure service disruption that affects Azure virtual networks
services: virtual-network
documentationcenter: ''
author: NarayanAnnamalai
manager: jefco
editor: ''
ms.assetid: ad260ab9-d873-43b3-8896-f9a1db9858a5
ms.service: virtual-network
ms.workload: virtual-network
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 05/16/2016
ms.author: narayan;aglick
ms.openlocfilehash: 68a9523dcc9c4dd84399c68fc7e31a692c011487
ms.sourcegitcommit: bb85a238f7dbe1ef2b1acf1b6d368d2abdc89f10
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 05/10/2019
ms.locfileid: "65523265"
---
# <a name="virtual-network--business-continuity"></a>Virtual network – business continuity
## <a name="overview"></a>Overview
A virtual network (VNet) is a logical representation of your network in the cloud. It lets you define your own private IP address space and segment the network into subnets. VNets serve as a trust boundary to host your compute resources, such as Azure Virtual Machines and Cloud Services (web/worker roles). A VNet allows direct private IP communication between the resources hosted in it. You can link a VNet to an on-premises network through a VPN Gateway or ExpressRoute.
VNets are created within the scope of a region. You can *create* VNets with the same address space in two different regions (for example, East US and West US), but because they have the same address space, you cannot connect them to each other.
## <a name="business-continuity"></a>Business continuity
There are several different ways in which your application could be disrupted. A region could be completely cut off due to a natural disaster, or partially cut off by a failure of multiple devices or services. The impact on the VNet service is different in each of these situations.
**Q: What do I do if there is an outage of an entire region? For example, if a region is completely cut off by a natural disaster? What happens to the virtual networks hosted in that region?**
A: The virtual network and the resources in the affected region remain inaccessible during the service disruption.
**Q: How can I re-create the same virtual network in a different region?**
A: Virtual networks are fairly lightweight resources. You can invoke Azure APIs to create a VNet with the same address space in a different region. To re-create the same environment that was present in the affected region, you make API calls to redeploy the Cloud Services web and worker roles, and the virtual machines that you had. If you have on-premises connectivity, such as in a hybrid deployment, you have to deploy a new VPN Gateway and connect to your on-premises network.
To create a virtual network, see [Create a virtual network](manage-virtual-network.md#create-a-virtual-network).
**Q: Can a replica of a VNet in a given region be re-created in another region ahead of time?**
A: Yes, you can create two VNets with the same private IP address space and resources in two different regions ahead of time. If you are hosting internet-facing services in the VNet, you could have set up Traffic Manager to geo-route traffic to the region that is active. However, you cannot connect two VNets with the same address space to your on-premises network, as this would cause routing issues. At the time of a disaster and loss of a VNet in one region, you can connect the other VNet in the available region, with the matching address space, to your on-premises network.
To create a virtual network, see [Create a virtual network](manage-virtual-network.md#create-a-virtual-network).
| 36.018868 | 211 | 0.810372 | yue_Hant | 0.939083 |
762cd325f89364ae4072191efbb547d186a7f708 | 1,170 | md | Markdown | README.md | Lameaux/mock_server | c387af54d1b974ce1ed5f841de214a45d07fe901 | [
"MIT"
] | null | null | null | README.md | Lameaux/mock_server | c387af54d1b974ce1ed5f841de214a45d07fe901 | [
"MIT"
] | null | null | null | README.md | Lameaux/mock_server | c387af54d1b974ce1ed5f841de214a45d07fe901 | [
"MIT"
] | null | null | null | # Mock HTTP Server
Makes it possible to integrate your service with an API that does not exist yet.
## How to run
### Localhost
```
FLASK_APP=mock_server.py FLASK_DEBUG=True flask run
```
### Heroku
https://showmax-mock-server.herokuapp.com/
## Feature Flag
Enable FF `t62161_mock_response` to route requests to Mock Server.
## DStv
### Users
| Identity Number | Customer Number | Eligibility |
|---|---|---|
| 3409045056082 | 38739503 | IsEligible |
| 7702050146087 | 38752143 | HasActiveAgreement |
### Endpoints
```
/dstvza/GetOAuthTokenAPI
/dstvza/ShowmaxExtApi/partners/showmax/ZAF/customers/<int:customer_number>/activations
/dstvza/ShowmaxExtApi/partners/showmax/ZAF/customers/<int:customer_number>/agreements/eligibility
/dstvza/ShowmaxExtApi/partners/showmax/agreements
/dstvza/ShowmaxExtApi/partners/showmax/agreements/<string:charge_token>
/dstvza/ShowmaxExtApi/partners/showmax/agreements/<string:charge_token>/charges
/dstvza/showmaxperformanceapi/zaf/customers/customer-detail/identityNumber=<int:identity_number>
```
## Demo Endpoints
### Generic
```
/generic?sleep_seconds=1&status_code=200
```
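The `/generic` endpoint above takes `sleep_seconds` and `status_code` query parameters, so a client can ask the mock for an arbitrarily slow response with an arbitrary status. As an illustration only (this is not the actual Flask source of this server), here is a minimal stdlib-only Python sketch of how such an endpoint can be mocked; the `GenericMockHandler` and `serve_in_background` names, and the parameter defaults, are assumptions made for this sketch:

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class GenericMockHandler(BaseHTTPRequestHandler):
    """Hypothetical stand-in for the /generic mock endpoint."""

    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/generic":
            self.send_error(404)
            return
        params = parse_qs(url.query)
        # Both query parameters are optional; these defaults are an assumption.
        sleep_seconds = float(params.get("sleep_seconds", ["0"])[0])
        status_code = int(params.get("status_code", ["200"])[0])
        time.sleep(sleep_seconds)        # simulate a slow upstream API
        self.send_response(status_code)  # reply with the requested status
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"mock response")

    def log_message(self, *args):
        pass  # keep request logging quiet

def serve_in_background(port: int = 0) -> HTTPServer:
    """Start the mock server on a daemon thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), GenericMockHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With the sketch running you could exercise it the same way as the real endpoint, e.g. `curl -i "http://127.0.0.1:<port>/generic?sleep_seconds=1&status_code=503"` and watch the delayed 503 come back.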
### Examples
```
/timestamp
/random
``` | 21.666667 | 97 | 0.765812 | eng_Latn | 0.323938 |
762d3f478db8cbc127b5cdbbb31b18039a491d0d | 88,499 | md | Markdown | packages/api-headless-cms/CHANGELOG.md | ankurvr/webiny-js | 1499fd898b11866e77fa22cb1bd902e540aac41f | [
"MIT"
] | null | null | null | packages/api-headless-cms/CHANGELOG.md | ankurvr/webiny-js | 1499fd898b11866e77fa22cb1bd902e540aac41f | [
"MIT"
] | null | null | null | packages/api-headless-cms/CHANGELOG.md | ankurvr/webiny-js | 1499fd898b11866e77fa22cb1bd902e540aac41f | [
"MIT"
] | null | null | null | # Change Log
All notable changes to this project will be documented in this file.
See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.
# [5.6.0](https://github.com/webiny/webiny-js/compare/v5.6.0-beta.2...v5.6.0) (2021-05-10)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.6.0-beta.2](https://github.com/webiny/webiny-js/compare/v5.6.0-beta.1...v5.6.0-beta.2) (2021-05-07)
### Bug Fixes
* **api-headless-cms:** use RefInput on model group field ([3b56155](https://github.com/webiny/webiny-js/commit/3b56155a6a25073a5006959a77d7852076ae0bf3))
# [5.6.0-beta.1](https://github.com/webiny/webiny-js/compare/v5.6.0-beta.0...v5.6.0-beta.1) (2021-05-07)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.6.0-beta.0](https://github.com/webiny/webiny-js/compare/v5.5.1-beta.0...v5.6.0-beta.0) (2021-05-06)
### Bug Fixes
* use interface instead of type ([be076cf](https://github.com/webiny/webiny-js/commit/be076cf8856bfa8789e8ea9a4a55149e70f9074c))
### Features
* add support for multiple content models on the same ref field ([#1572](https://github.com/webiny/webiny-js/issues/1572)) ([cf347cc](https://github.com/webiny/webiny-js/commit/cf347cc5cd4772ae9dc7efe232486d191355fe43))
* ES index sharing for multi-tenancy ([#1575](https://github.com/webiny/webiny-js/issues/1575)) ([54ab395](https://github.com/webiny/webiny-js/commit/54ab395e45773ed98814bf2339c4a9166bd234d1))
## [5.5.1-beta.0](https://github.com/webiny/webiny-js/compare/v5.5.0...v5.5.1-beta.0) (2021-04-27)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.5.0](https://github.com/webiny/webiny-js/compare/v5.5.0-beta.3...v5.5.0) (2021-04-26)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.5.0-beta.3](https://github.com/webiny/webiny-js/compare/v5.5.0-beta.2...v5.5.0-beta.3) (2021-04-23)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.5.0-beta.2](https://github.com/webiny/webiny-js/compare/v5.5.0-beta.1...v5.5.0-beta.2) (2021-04-23)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.5.0-beta.1](https://github.com/webiny/webiny-js/compare/v5.5.0-beta.0...v5.5.0-beta.1) (2021-04-22)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.5.0-beta.0](https://github.com/webiny/webiny-js/compare/v5.4.0...v5.5.0-beta.0) (2021-04-22)
### Bug Fixes
* **api-headless-cms:** export contentModel crud plugin ([2c10e2d](https://github.com/webiny/webiny-js/commit/2c10e2df2698f7f2e87e7be2a1ca94825c9b43b0))
* **api-headless-cms:** update validateModelAccess function ([36449d8](https://github.com/webiny/webiny-js/commit/36449d8b651031c80aa186e3facb2f39df115931))
* add missing "@webiny/cli" dependency ([f99bf3e](https://github.com/webiny/webiny-js/commit/f99bf3ea31ad4b9d560011181431cd9732a1a8c4))
### Features
* **api-headless-cms:** add cms permission migration for API keys ([2cb07b2](https://github.com/webiny/webiny-js/commit/2cb07b2468ce10a0323ce5af0ad1634459d3efb5))
* **api-headless-cms:** add filterAsync helper function ([35bf612](https://github.com/webiny/webiny-js/commit/35bf612bdb2fc541329f2172307c5f90e0cc0b16))
* **api-headless-cms:** add missing groups in contentModelGroup permission ([e7096a9](https://github.com/webiny/webiny-js/commit/e7096a9d004b8e3ed3e3fbc16d9a6176bbbd90f9))
* **api-headless-cms:** add read permission check for group and model ([bf32479](https://github.com/webiny/webiny-js/commit/bf32479ba92db0d418a889b770f40a8b7c307955))
* **api-headless-cms:** add validateGroupAccess check ([b5d2f7b](https://github.com/webiny/webiny-js/commit/b5d2f7bbd93b3bfbb93777e624ef7dc29f82a994))
* **api-headless-cms:** add validateGroupAccess helper ([a4b747f](https://github.com/webiny/webiny-js/commit/a4b747f8eca3c8864310505526fb1606d3357f68))
* **api-headless-cms:** migrate cms permissions ([728be2a](https://github.com/webiny/webiny-js/commit/728be2a30b134826f993b00785c0117ac477a5df))
* **api-headless-cms:** search via ref field id ([#1567](https://github.com/webiny/webiny-js/issues/1567)) ([4bb65cf](https://github.com/webiny/webiny-js/commit/4bb65cf7659d88060a8c705b1a01b5df817fa8b4))
* **api-headless-cms:** update permission check for models ([a377638](https://github.com/webiny/webiny-js/commit/a377638e2ca4ffa1808dede08687be60ee17ec92))
* **api-headless-cms:** update types ([d5edc6b](https://github.com/webiny/webiny-js/commit/d5edc6bdbbd60d03cbe451afa8fadc699d87b416))
* use new build and watch commands ([4a534a1](https://github.com/webiny/webiny-js/commit/4a534a11d2afe4ca4cddd49f2f80fe2a7e90058a))
# [5.4.0](https://github.com/webiny/webiny-js/compare/v5.4.0-beta.3...v5.4.0) (2021-04-13)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.4.0-beta.3](https://github.com/webiny/webiny-js/compare/v5.4.0-beta.2...v5.4.0-beta.3) (2021-04-13)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.4.0-beta.2](https://github.com/webiny/webiny-js/compare/v5.4.0-beta.1...v5.4.0-beta.2) (2021-04-12)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.4.0-beta.1](https://github.com/webiny/webiny-js/compare/v5.4.0-beta.0...v5.4.0-beta.1) (2021-04-12)
### Bug Fixes
* only use ".keyword" search when the received value is a string ([59539a3](https://github.com/webiny/webiny-js/commit/59539a3abb5c7880c45f20cca41b90dcc0757746))
* only use ".keyword" search when the received value is a string ([e945e4d](https://github.com/webiny/webiny-js/commit/e945e4d15c9dfaf9dde8b79c9d04a5aef539daf9))
# [5.4.0-beta.0](https://github.com/webiny/webiny-js/compare/v5.3.0...v5.4.0-beta.0) (2021-04-09)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.3.0](https://github.com/webiny/webiny-js/compare/v5.3.0-beta.0...v5.3.0) (2021-03-26)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.3.0-beta.0](https://github.com/webiny/webiny-js/compare/v5.2.1...v5.3.0-beta.0) (2021-03-25)
### Bug Fixes
* **api-headless-cms:** normalize values before querying ([8e57cea](https://github.com/webiny/webiny-js/commit/8e57cea5305d7550382b2b90289cd1acdb02906a))
## [5.2.1](https://github.com/webiny/webiny-js/compare/v5.2.1-beta.0...v5.2.1) (2021-03-21)
**Note:** Version bump only for package @webiny/api-headless-cms
## [5.2.1-beta.0](https://github.com/webiny/webiny-js/compare/v5.2.0...v5.2.1-beta.0) (2021-03-21)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.2.0](https://github.com/webiny/webiny-js/compare/v5.2.0-beta.0...v5.2.0) (2021-03-19)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.2.0-beta.0](https://github.com/webiny/webiny-js/compare/v5.1.0...v5.2.0-beta.0) (2021-03-19)
### Bug Fixes
* add missing dependency ([3d15fd3](https://github.com/webiny/webiny-js/commit/3d15fd35270145dbc4504fbfe3a4e8ad957ebc80))
* cache compressed rich text content ([1f918ab](https://github.com/webiny/webiny-js/commit/1f918ab28105750e0e308aadf041273f3417b134))
* ensure maximum of 100 items are requested via the `batchGet` call ([7bc1208](https://github.com/webiny/webiny-js/commit/7bc120844a0a2a2ff6ac9880334fef9c1b09a0c7))
# [5.1.0](https://github.com/webiny/webiny-js/compare/v5.1.0-beta.1...v5.1.0) (2021-03-16)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.1.0-beta.1](https://github.com/webiny/webiny-js/compare/v5.1.0-beta.0...v5.1.0-beta.1) (2021-03-16)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.1.0-beta.0](https://github.com/webiny/webiny-js/compare/v5.0.0...v5.1.0-beta.0) (2021-03-15)
### Bug Fixes
* **api-headless-cms:** schema regeneration last changed date ([#1504](https://github.com/webiny/webiny-js/issues/1504)) ([bff28a0](https://github.com/webiny/webiny-js/commit/bff28a058622b088fa6ed93ff86428fd01c59f24))
### Features
* **app-headless-cms:** add support for content model form layouts ([#1507](https://github.com/webiny/webiny-js/issues/1507)) ([f2b65b0](https://github.com/webiny/webiny-js/commit/f2b65b0f593e83948b85832d3234e4d32793a868))
* enable prefixing ES index with the "ELASTIC_SEARCH_INDEX_PREFIX" env variable ([df42d0c](https://github.com/webiny/webiny-js/commit/df42d0c033aa3fe2cfd00fc4398f3c64789544f4))
# [5.0.0](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.5...v5.0.0) (2021-03-09)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.5](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.4...v5.0.0-beta.5) (2021-03-09)
### Bug Fixes
* **api-headless-cms:** restore content model index creation ([a5db46e](https://github.com/webiny/webiny-js/commit/a5db46e1de18c9023faf34c112907d6934a12e12))
* return null instead of false when app version is not available ([a4238c4](https://github.com/webiny/webiny-js/commit/a4238c409484b2fcd86c26fa066ef42da3e20f11))
* **api-headless-cms:** do not create ES index when CM is created ([ae629bc](https://github.com/webiny/webiny-js/commit/ae629bccdff32195509404b4503c3f510ed5442b))
* **api-headless-cms:** schema cache rebuild ([#1490](https://github.com/webiny/webiny-js/issues/1490)) ([b940455](https://github.com/webiny/webiny-js/commit/b940455d900837b9324358d29dd5ceb4863b7de2))
* add "Content-Type" response header ([f0876ec](https://github.com/webiny/webiny-js/commit/f0876ecc3820246c34e0006cb6c0b1bcfc3c3bf8))
* date format for pre-beta5 ([#1437](https://github.com/webiny/webiny-js/issues/1437)) ([bfc5b0a](https://github.com/webiny/webiny-js/commit/bfc5b0a0e4639d4037418f0e62f6a6c653ad82aa))
* do not allow updating model if it contains a non-existing field ([ec2af11](https://github.com/webiny/webiny-js/commit/ec2af11381647bbeac407103d48a126532d20cd8))
* do not throw if a plugin is missing ([d9ba74d](https://github.com/webiny/webiny-js/commit/d9ba74d3adf5f1888e08e36385f3f110e005600b))
* do not throw if a plugin is missing ([f1a2928](https://github.com/webiny/webiny-js/commit/f1a2928986607224397421f50f5aad09cf35b739))
* do not throw if a plugin is missing ([b4ebafd](https://github.com/webiny/webiny-js/commit/b4ebafd16689d20f3e32cc877cfb30869d72197b))
* do not throw if a plugin is missing ([3bb721d](https://github.com/webiny/webiny-js/commit/3bb721da2a61be73a7e95edae6f69c2acb2d2088))
* prevent using reserved "modelId" values ([4a4842c](https://github.com/webiny/webiny-js/commit/4a4842c242f50c1fd9e87ede92b315754d532e68))
* **api-headless-cms:** add track_total_hits flag to get real total count ([e45f4f6](https://github.com/webiny/webiny-js/commit/e45f4f6934523d0e74b0bd76ab85ff337d83dd59))
* **api-headless-cms:** allow to list models without throwing error ([#1418](https://github.com/webiny/webiny-js/issues/1418)) ([19adadc](https://github.com/webiny/webiny-js/commit/19adadc5daf9f3ac4f26606ef8d09c0c280cd89a))
* **api-headless-cms:** do not apply limit to DB read operation ([#1431](https://github.com/webiny/webiny-js/issues/1431)) ([78906aa](https://github.com/webiny/webiny-js/commit/78906aa69114ea74ff808fa695e2b23e13acf700))
* **api-headless-cms:** ensure modelId uniqueness ([2679bb8](https://github.com/webiny/webiny-js/commit/2679bb85379cb03ddf9266972de9e1f7c4902eae))
* **api-headless-cms:** ensure modelId uniqueness ([9b02a53](https://github.com/webiny/webiny-js/commit/9b02a53a12d820b71b08efa3bb12925ab5ddc5f7))
* **api-headless-cms:** es error on list with floats and ints ([#1426](https://github.com/webiny/webiny-js/issues/1426)) ([9d3e2ef](https://github.com/webiny/webiny-js/commit/9d3e2efd5ab117b8a379bfee44a3ade40fb2ace4))
* **app-headless-cms:** ref field not showing selected value ([#1451](https://github.com/webiny/webiny-js/issues/1451)) ([315f4d3](https://github.com/webiny/webiny-js/commit/315f4d33119952aacd8930edf1b37508708ce0f8))
### Features
* add dynamo-to-elastic lambda and ddb stream ([26aac81](https://github.com/webiny/webiny-js/commit/26aac818182da25d39307c716703f5a74bfb200b))
* enable context expansion ([bdbb377](https://github.com/webiny/webiny-js/commit/bdbb37737e155fc82e58bb494caf0e5ae389dbd9))
* introduce app upgrades and versions ([#1494](https://github.com/webiny/webiny-js/issues/1494)) ([f4d2b5e](https://github.com/webiny/webiny-js/commit/f4d2b5e73c899077cb1207940f2ab169a880aec9))
* **api-headless-cms:** add es sorting analyzer and tests ([#1488](https://github.com/webiny/webiny-js/issues/1488)) ([95a27a9](https://github.com/webiny/webiny-js/commit/95a27a9c2122a83eef0651ee1dbfdac67f8c3589))
* migrate to yarn v2 ([#1407](https://github.com/webiny/webiny-js/issues/1407)) ([46ba7ed](https://github.com/webiny/webiny-js/commit/46ba7ed7df28f98820b358698ef1764f46b5db58))
* resource tagging and custom infra setups ([#1474](https://github.com/webiny/webiny-js/issues/1474)) ([46da034](https://github.com/webiny/webiny-js/commit/46da034badccd67adb6cf24196c708a8790c6a84))
* **api-headless-cms:** index ref fields in Elasticsearch ([f47a1d4](https://github.com/webiny/webiny-js/commit/f47a1d4bb717f50cbc04d5e91359f24a24c665c7))
* **api-headless-cms:** migrate ES write operations to Dynamo table. ([4886906](https://github.com/webiny/webiny-js/commit/4886906f82ba1413131a68a6eb34e09d3b2a52b3))
* graphql date, datetime, datetimez and time scalars ([#1434](https://github.com/webiny/webiny-js/issues/1434)) ([c3ac73a](https://github.com/webiny/webiny-js/commit/c3ac73a86568fdf9e7cb0c12722bdb087d4502c3))
# [5.0.0-beta.4](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.3...v5.0.0-beta.4) (2021-02-01)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.3](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.2...v5.0.0-beta.3) (2021-02-01)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.2](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.1...v5.0.0-beta.2) (2021-01-29)
### Bug Fixes
* **api-headless-cms:** add listValidation to content model graphql ([#1391](https://github.com/webiny/webiny-js/issues/1391)) ([9c15abd](https://github.com/webiny/webiny-js/commit/9c15abdb8b70171bb3e29be03996c5cf36df9bfa))
* **api-headless-cms:** update context.http usage ([e195700](https://github.com/webiny/webiny-js/commit/e1957006417f7d8d90431c4a55d7a89b910a2af0))
* **headless-cms:** default titleFieldId set to id ([#1390](https://github.com/webiny/webiny-js/issues/1390)) ([b1c23a6](https://github.com/webiny/webiny-js/commit/b1c23a60442926acdbf3a0e14c91bb68792d7669))
* **page-builder:** optimize and improve PB editor ([#1393](https://github.com/webiny/webiny-js/issues/1393)) ([286de88](https://github.com/webiny/webiny-js/commit/286de88cf1d416105f4d1c5254556cbd9f0526a4))
### Features
* **api-headless-cms:** use context.http.request object ([c83854d](https://github.com/webiny/webiny-js/commit/c83854da4774d919f39d3f4eee32180173c8e258))
* **headless-cms:** implement field validation ([d856cc7](https://github.com/webiny/webiny-js/commit/d856cc7345fea429f8167e92cd55b3e09df153d1))
# [5.0.0-beta.1](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.0...v5.0.0-beta.1) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.14.0...v5.0.0-beta.0) (2021-01-08)
### Bug Fixes
* correct dependencies ([f77bd16](https://github.com/webiny/webiny-js/commit/f77bd161c7049212d55a05b1708f7053115f5638))
* correct import paths ([928bc41](https://github.com/webiny/webiny-js/commit/928bc41943be2bf23cc033961db4f919a315e6b9))
* correct TS types and dependencies ([fc26441](https://github.com/webiny/webiny-js/commit/fc264419e868320d49d6d8cd1270481acb8527be))
* disable headless-cms tests ([a36e1c1](https://github.com/webiny/webiny-js/commit/a36e1c1df2cc4c17f510a0fce7ecd7310a632e26))
* entry prepare for index on publish ([#1361](https://github.com/webiny/webiny-js/issues/1361)) ([7446628](https://github.com/webiny/webiny-js/commit/74466282f9a4abd97421d8cf8a3122b5957f4576))
* fix eslint errors ([f559a57](https://github.com/webiny/webiny-js/commit/f559a57522ab893120d8c686a2dd4caeb0c1ccb6))
* import `GraphQLFieldResolver` from `@webiny/handler-graphql` ([e232249](https://github.com/webiny/webiny-js/commit/e23224921b540b6f9393f4046bdfce156189d1bf))
* missing commit with a test ([#1360](https://github.com/webiny/webiny-js/issues/1360)) ([8312fd6](https://github.com/webiny/webiny-js/commit/8312fd61d89ad3e6f60b8ef6bac82fd2f41e325f))
* publish entry on revisions page ([#1364](https://github.com/webiny/webiny-js/issues/1364)) ([542d1d4](https://github.com/webiny/webiny-js/commit/542d1d461403940c4a2fa71956c0c4ffd5fcf10f))
* ref ui field and remove toMatchObject from cms tests ([#1362](https://github.com/webiny/webiny-js/issues/1362)) ([b8cd1a8](https://github.com/webiny/webiny-js/commit/b8cd1a8bfa44601117d6a0f33b0098e696b6e81f))
* remove `@webiny/graphql` ([ce70fd5](https://github.com/webiny/webiny-js/commit/ce70fd581456ba56787c2e12fe8075592138a3b7))
* remove any mention of MongoDb ([b3193cb](https://github.com/webiny/webiny-js/commit/b3193cb3016b58d8e6f95f7558af34a6f588b3c8))
* **api-headless-cms:** add wildcards to query ([2b21390](https://github.com/webiny/webiny-js/commit/2b21390af57004725d2a914ca825efe1f2dc360a))
* **api-headless-cms:** return early on non-existing ref values ([3ebf68e](https://github.com/webiny/webiny-js/commit/3ebf68e96d1c2a9b923876926383952306e1d0d2))
* import `HandlerI18NContext` correctly ([a202336](https://github.com/webiny/webiny-js/commit/a202336b2765c90ed7fe3e2f73756da42d7abe3d))
* optimize `handler-apollo-gateway` - remove redundant code ([3d50b9d](https://github.com/webiny/webiny-js/commit/3d50b9d14e4171d4f205968c8fdde1f2a7e66955))
* prettier and eslint run for v5 ([3069a33](https://github.com/webiny/webiny-js/commit/3069a33ccef2fd3767818b274a730df28cecaf5b))
* read params from `context.http` ([220a408](https://github.com/webiny/webiny-js/commit/220a4084dd3847d1b4576a15ac76cc9580d3df74))
* remove `@webiny/commodo-graphql` ([c63988c](https://github.com/webiny/webiny-js/commit/c63988c264e708a427d2b5beeafa4ca317e0fdf2))
* remove `api-settings-manager` ([547d686](https://github.com/webiny/webiny-js/commit/547d68600132b28950ce6574041bd639f0e752ff))
* remove `applyContextPlugins` ([1ef7d03](https://github.com/webiny/webiny-js/commit/1ef7d03bb8fa8d590a33f5c987a1cff17b5f4f11))
* remove `createResponse` callback ([61f49f0](https://github.com/webiny/webiny-js/commit/61f49f0bdbd24d7976a5d626e3361f53eef223b2))
* remove `extend type SecurityUser` ([8c88675](https://github.com/webiny/webiny-js/commit/8c886754deeb54dc75accd8a57c220bd906e051c))
* remove direct aws-sdk/Lambda client usage ([6dfdeae](https://github.com/webiny/webiny-js/commit/6dfdeae49114b683dfc64c88dd41449a7b1d70c5))
* remove use of `gql` in `graphql-schema` plugins ([101e8fe](https://github.com/webiny/webiny-js/commit/101e8fe782e38644d686a1670cf938e3aa6a0c0c))
* remove word `Handler` from context plugin type names ([277f0dd](https://github.com/webiny/webiny-js/commit/277f0dd300b7451a8a162678417e2b428cf002cf))
* rename `handler-apollo-server` to `handler-graphql` ([c32769a](https://github.com/webiny/webiny-js/commit/c32769a84872658898d74f0153176a7d7c7416ee))
* replace `path` with `path.parameters` ([b6cb3b8](https://github.com/webiny/webiny-js/commit/b6cb3b80713024a7b7300955eb0b3e749355240b))
* send data to `request` property ([4f38f2d](https://github.com/webiny/webiny-js/commit/4f38f2dd5a49ca767684bdd8e176ad7bfedf1ef8))
* update `handle` function's args ([f0ffb37](https://github.com/webiny/webiny-js/commit/f0ffb37e5c95052e559f8ec854545fefbf84fbd7))
* update dependencies ([2da7785](https://github.com/webiny/webiny-js/commit/2da77856bb535144d5f08f6408bba10716737d06))
* update dependencies ([9e23d9d](https://github.com/webiny/webiny-js/commit/9e23d9d435c8e3993713d73123a7b93119893eb1))
* update dependencies ([6559856](https://github.com/webiny/webiny-js/commit/65598567b87479b37a0c737d6c695d68f3bf94df))
* update dependencies ([2598deb](https://github.com/webiny/webiny-js/commit/2598deb9a72b50d0fa0674d0676da6b387ab40d6))
* update permissions and permission keys across the board ([dbb134c](https://github.com/webiny/webiny-js/commit/dbb134c7b48f30e6fbe85decc7db51d17932608f))
* **code-format:** update configs for prettier and eslint ([#1237](https://github.com/webiny/webiny-js/issues/1237)) ([5ff6d22](https://github.com/webiny/webiny-js/commit/5ff6d22f427093a52560963770dadf96d8e6685b))
* adapt for NeDB (and of course, maintain MongoDB compatibility) ([f2d3555](https://github.com/webiny/webiny-js/commit/f2d3555648b9a1230903ae3fc45a1eb2a5829106))
* add "*" scope ([a33fb2a](https://github.com/webiny/webiny-js/commit/a33fb2a714f20c3068c4e11d45505338cccf5cca))
* add `name` field ([4df8bdd](https://github.com/webiny/webiny-js/commit/4df8bdd2e54bda10a7caaffd5013b8117cd61646))
* add environment slug ([dd02293](https://github.com/webiny/webiny-js/commit/dd02293836115f18ba84f57f33828888b71527ba))
* add missing changes after merge ([58ab0bd](https://github.com/webiny/webiny-js/commit/58ab0bd4cb1824be7c168525e5f20c5edfbe88f1))
* add scopes to "list" and "read" resolvers ([3b3c852](https://github.com/webiny/webiny-js/commit/3b3c85282ae0066f2c0cf5b07576ba078ac23977))
* add slug ([aef6925](https://github.com/webiny/webiny-js/commit/aef6925214961f6d50654494968dbbcbbd0eceee))
* allow "getMeta" query publicly ([5337892](https://github.com/webiny/webiny-js/commit/53378922b28bb0f0cc70287213760c8b9f31dfeb))
* allow setting env/type via url/options/event ([0a8baca](https://github.com/webiny/webiny-js/commit/0a8baca41f7aff4526becea7d0b03e45e5249d62))
* avoid hardcoded string ([e3ae87f](https://github.com/webiny/webiny-js/commit/e3ae87f2e31cd888994ce4c6870b5a26d1794d87))
* check if args/event properties are missing ([ac2e0e0](https://github.com/webiny/webiny-js/commit/ac2e0e0ea697586cf88ba90eeae4d49290f6959a))
* cleanup contentModels getter ([6fba948](https://github.com/webiny/webiny-js/commit/6fba948246655540d66c34817a13a0ee9fdfe9db))
* correct check in the `getResolvers` function ([db17a52](https://github.com/webiny/webiny-js/commit/db17a52c5a4efb3a67793b0d901a111a3418b629))
* expose "CONTENT_MODEL_GROUP_ID" ([79da20e](https://github.com/webiny/webiny-js/commit/79da20e84ad0186305b1ec841c9a4e4260bec85b))
* merge `new-security` into `master` ([da26908](https://github.com/webiny/webiny-js/commit/da269089ebaf18cc00c43919688fc4a005314d72))
* remove `SecurityIdentity` class ([f4e403a](https://github.com/webiny/webiny-js/commit/f4e403adce08430b5e6193e073843e022c81ad28))
* remove environments (already included in the scope) ([a192ab1](https://github.com/webiny/webiny-js/commit/a192ab1510f9b9d7a9b72bfe88da1b018b3fee96))
* remove old parts ([72e97a1](https://github.com/webiny/webiny-js/commit/72e97a1bada71958882d2f78331a1f9c5e4c6360))
* remove redundant comment ([5ae2fac](https://github.com/webiny/webiny-js/commit/5ae2facca965319daca545f4a8c3b7b12ff81bb0))
* remove unused import ([37330d9](https://github.com/webiny/webiny-js/commit/37330d9c297455796dc9c7ae5da0eef68ce6a9c7))
* rename type "authentication" to "security" ([59226f4](https://github.com/webiny/webiny-js/commit/59226f40fabcb441140f7c8d99d5643302e046c4))
* update tsconfig references and deps ([eec7eb0](https://github.com/webiny/webiny-js/commit/eec7eb00bc276a9d1496458315b3eb9ec0930f35))
* use "const" instead of "let" ([c348198](https://github.com/webiny/webiny-js/commit/c348198c53adf07edaf6d4ab4c743de59c454da7))
* use handler's context as Apollo GraphQL Server initial context ([11f9f8b](https://github.com/webiny/webiny-js/commit/11f9f8b3e849402e2546d69c7221f68d73dded43))
* use more fine-grained scopes ([1db1e27](https://github.com/webiny/webiny-js/commit/1db1e275467e8aa8c962f54d78acc0cd6c0a651a))
* use newly added `securityAuthPlugins` plugins ([de53e1c](https://github.com/webiny/webiny-js/commit/de53e1cabd35c3c8a28633d535edcf5b2f4e699e))
* **api-headless-cms:** add slug to cmsEnv type ([2f54a4c](https://github.com/webiny/webiny-js/commit/2f54a4c7ea2db1fee8c47898c63ec363938fa919))
* **api-headless-cms:** allow scope assigning instead of returning available scopes ([c7ccf58](https://github.com/webiny/webiny-js/commit/c7ccf5863b7f1dd7dc919f479ed1f2eab408e238))
* **api-headless-cms:** fix contentModels [WIP] ([133ac0a](https://github.com/webiny/webiny-js/commit/133ac0a4be116a273e45c4796b9b06ae3a60a48d))
* **api-headless-cms:** fix env fetching ([e2d7f7d](https://github.com/webiny/webiny-js/commit/e2d7f7dc08791f2e444e48107056985be764691f))
* **api-headless-cms:** use env slug instead of id ([558fff4](https://github.com/webiny/webiny-js/commit/558fff41a829ab397f6794c8ba190fbb25763403))
### Features
* add more tests for ref field ([#1359](https://github.com/webiny/webiny-js/issues/1359)) ([4a8d3e9](https://github.com/webiny/webiny-js/commit/4a8d3e9b49e590742a14160d7d266f2985bea76e))
* **api-headless-cms:** add 'createAccessToken' utility & use it ([db76848](https://github.com/webiny/webiny-js/commit/db76848f833587658ed3081c3f5d995e96eb1c6c))
* **api-headless-cms:** add [WIP] cmsContentModel ([78b9ab5](https://github.com/webiny/webiny-js/commit/78b9ab51c6642033dd6555af3a783e7afa4f85ca))
* **api-headless-cms:** add scopes to Access Tokens + scopes test ([ef1b8cf](https://github.com/webiny/webiny-js/commit/ef1b8cf1df2db30f29c35b6869d5bfea6557aa2b))
* **api-headless-cms:** add scopes to content models ([e2a7288](https://github.com/webiny/webiny-js/commit/e2a728860622bff947a07193bd39ad60123c927e))
* **api-headless-cms:** add slug & contentModels to gql schema ([f5846f7](https://github.com/webiny/webiny-js/commit/f5846f7f9c99135930ec6f24b0b29776c15f83e3))
* **api-headless-cms:** add slug test & [WIP] contentModels test ([a3bb948](https://github.com/webiny/webiny-js/commit/a3bb9489b0227f3f387f32a106f538c098a2cd4b))
* **api-headless-cms:** add slugs & [WIP] add contentModels ([e7b3bbd](https://github.com/webiny/webiny-js/commit/e7b3bbd5d692edf796e903153014f3aed1dcbc98))
* **api-headless-cms:** move / split plugins to preApply ([dda0247](https://github.com/webiny/webiny-js/commit/dda024722e4158fc8915e724b86ea5c5165a6b0b))
* **api-headless-cms:** use `hasCmsPermission` ([f937e2a](https://github.com/webiny/webiny-js/commit/f937e2ac191fef829c4e1cc3e11fc947941959fc))
* **app-page-builder:** switch redux for recoil ([a1c5f18](https://github.com/webiny/webiny-js/commit/a1c5f18e271d27a6e65a912014de66dc048741a9))
* add createAccessToken testing helper function ([b86822c](https://github.com/webiny/webiny-js/commit/b86822ce91437eafd54406473ba4bb7bf1eedc61))
* remove preApply/postApply hooks ([c4275f8](https://github.com/webiny/webiny-js/commit/c4275f881647cd2cdde34cb5d8c304fc36db9ae3))
* **api-headless-cms:** separate [WIP] access token authentication ([658972a](https://github.com/webiny/webiny-js/commit/658972a8695ef6f0c9d023e113608616b315aa00))
# [5.0.0-beta.52](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.51...v5.0.0-beta.52) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.51](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.50...v5.0.0-beta.51) (2021-01-08)
### Bug Fixes
* publish entry on revisions page ([#1364](https://github.com/webiny/webiny-js/issues/1364)) ([542d1d4](https://github.com/webiny/webiny-js/commit/542d1d461403940c4a2fa71956c0c4ffd5fcf10f))
# [5.0.0-beta.50](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.49...v5.0.0-beta.50) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.49](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.48...v5.0.0-beta.49) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.48](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.47...v5.0.0-beta.48) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.47](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.46...v5.0.0-beta.47) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.46](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.45...v5.0.0-beta.46) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.45](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.44...v5.0.0-beta.45) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.44](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.43...v5.0.0-beta.44) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.36](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.35...v5.0.0-beta.36) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.35](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.34...v5.0.0-beta.35) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.34](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.33...v5.0.0-beta.34) (2021-01-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.33](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.32...v5.0.0-beta.33) (2021-01-08)
### Bug Fixes
* entry prepare for index on publish ([#1361](https://github.com/webiny/webiny-js/issues/1361)) ([7446628](https://github.com/webiny/webiny-js/commit/74466282f9a4abd97421d8cf8a3122b5957f4576))
* ref ui field and remove toMatchObject from cms tests ([#1362](https://github.com/webiny/webiny-js/issues/1362)) ([b8cd1a8](https://github.com/webiny/webiny-js/commit/b8cd1a8bfa44601117d6a0f33b0098e696b6e81f))
* remove any mention of MongoDb ([b3193cb](https://github.com/webiny/webiny-js/commit/b3193cb3016b58d8e6f95f7558af34a6f588b3c8))
* **api-headless-cms:** add wildcards to query ([2b21390](https://github.com/webiny/webiny-js/commit/2b21390af57004725d2a914ca825efe1f2dc360a))
* **api-headless-cms:** return early on non-existing ref values ([3ebf68e](https://github.com/webiny/webiny-js/commit/3ebf68e96d1c2a9b923876926383952306e1d0d2))
* missing commit with a test ([#1360](https://github.com/webiny/webiny-js/issues/1360)) ([8312fd6](https://github.com/webiny/webiny-js/commit/8312fd61d89ad3e6f60b8ef6bac82fd2f41e325f))
* update permissions and permission keys across the board ([dbb134c](https://github.com/webiny/webiny-js/commit/dbb134c7b48f30e6fbe85decc7db51d17932608f))
### Features
* add more tests for ref field ([#1359](https://github.com/webiny/webiny-js/issues/1359)) ([4a8d3e9](https://github.com/webiny/webiny-js/commit/4a8d3e9b49e590742a14160d7d266f2985bea76e))
# [5.0.0-beta.32](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.31...v5.0.0-beta.32) (2021-01-06)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.31](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.30...v5.0.0-beta.31) (2021-01-06)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.30](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.29...v5.0.0-beta.30) (2021-01-06)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.29](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.28...v5.0.0-beta.29) (2021-01-06)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.28](https://github.com/webiny/webiny-js/compare/v5.0.0-beta.27...v5.0.0-beta.28) (2021-01-06)
**Note:** Version bump only for package @webiny/api-headless-cms
# [5.0.0-beta.10](https://github.com/webiny/webiny-js/compare/v4.14.0...v5.0.0-beta.10) (2021-01-04)
### Bug Fixes
* adapt for NeDB (and of course, maintain MongoDB compatibility) ([f2d3555](https://github.com/webiny/webiny-js/commit/f2d3555648b9a1230903ae3fc45a1eb2a5829106))
* add "*" scope ([a33fb2a](https://github.com/webiny/webiny-js/commit/a33fb2a714f20c3068c4e11d45505338cccf5cca))
* add `name` field ([4df8bdd](https://github.com/webiny/webiny-js/commit/4df8bdd2e54bda10a7caaffd5013b8117cd61646))
* add environment slug ([dd02293](https://github.com/webiny/webiny-js/commit/dd02293836115f18ba84f57f33828888b71527ba))
* add missing changes after merge ([58ab0bd](https://github.com/webiny/webiny-js/commit/58ab0bd4cb1824be7c168525e5f20c5edfbe88f1))
* add scopes to "list" and "read" resolvers ([3b3c852](https://github.com/webiny/webiny-js/commit/3b3c85282ae0066f2c0cf5b07576ba078ac23977))
* add slug ([aef6925](https://github.com/webiny/webiny-js/commit/aef6925214961f6d50654494968dbbcbbd0eceee))
* allow "getMeta" query publicly ([5337892](https://github.com/webiny/webiny-js/commit/53378922b28bb0f0cc70287213760c8b9f31dfeb))
* allow setting env/type via url/options/event ([0a8baca](https://github.com/webiny/webiny-js/commit/0a8baca41f7aff4526becea7d0b03e45e5249d62))
* avoid hardcoded string ([e3ae87f](https://github.com/webiny/webiny-js/commit/e3ae87f2e31cd888994ce4c6870b5a26d1794d87))
* check if args/event properties are missing ([ac2e0e0](https://github.com/webiny/webiny-js/commit/ac2e0e0ea697586cf88ba90eeae4d49290f6959a))
* cleanup contentModels getter ([6fba948](https://github.com/webiny/webiny-js/commit/6fba948246655540d66c34817a13a0ee9fdfe9db))
* correct check in the `getResolvers` function ([db17a52](https://github.com/webiny/webiny-js/commit/db17a52c5a4efb3a67793b0d901a111a3418b629))
* correct dependencies ([f77bd16](https://github.com/webiny/webiny-js/commit/f77bd161c7049212d55a05b1708f7053115f5638))
* correct import paths ([928bc41](https://github.com/webiny/webiny-js/commit/928bc41943be2bf23cc033961db4f919a315e6b9))
* correct TS types and dependencies ([fc26441](https://github.com/webiny/webiny-js/commit/fc264419e868320d49d6d8cd1270481acb8527be))
* disable headless-cms tests ([a36e1c1](https://github.com/webiny/webiny-js/commit/a36e1c1df2cc4c17f510a0fce7ecd7310a632e26))
* expose "CONTENT_MODEL_GROUP_ID" ([79da20e](https://github.com/webiny/webiny-js/commit/79da20e84ad0186305b1ec841c9a4e4260bec85b))
* fix eslint errors ([f559a57](https://github.com/webiny/webiny-js/commit/f559a57522ab893120d8c686a2dd4caeb0c1ccb6))
* import `GraphQLFieldResolver` from `@webiny/handler-graphql` ([e232249](https://github.com/webiny/webiny-js/commit/e23224921b540b6f9393f4046bdfce156189d1bf))
* import `HandlerI18NContext` correctly ([a202336](https://github.com/webiny/webiny-js/commit/a202336b2765c90ed7fe3e2f73756da42d7abe3d))
* merge `new-security` into `master` ([da26908](https://github.com/webiny/webiny-js/commit/da269089ebaf18cc00c43919688fc4a005314d72))
* optimize `handler-apollo-gateway` - remove redundant code ([3d50b9d](https://github.com/webiny/webiny-js/commit/3d50b9d14e4171d4f205968c8fdde1f2a7e66955))
* prettier and eslint run for v5 ([3069a33](https://github.com/webiny/webiny-js/commit/3069a33ccef2fd3767818b274a730df28cecaf5b))
* read params from `context.http` ([220a408](https://github.com/webiny/webiny-js/commit/220a4084dd3847d1b4576a15ac76cc9580d3df74))
* remove `@webiny/commodo-graphql` ([c63988c](https://github.com/webiny/webiny-js/commit/c63988c264e708a427d2b5beeafa4ca317e0fdf2))
* remove `@webiny/graphql` ([ce70fd5](https://github.com/webiny/webiny-js/commit/ce70fd581456ba56787c2e12fe8075592138a3b7))
* remove `api-settings-manager` ([547d686](https://github.com/webiny/webiny-js/commit/547d68600132b28950ce6574041bd639f0e752ff))
* remove `applyContextPlugins` ([1ef7d03](https://github.com/webiny/webiny-js/commit/1ef7d03bb8fa8d590a33f5c987a1cff17b5f4f11))
* remove `createResponse` callback ([61f49f0](https://github.com/webiny/webiny-js/commit/61f49f0bdbd24d7976a5d626e3361f53eef223b2))
* remove `extend type SecurityUser` ([8c88675](https://github.com/webiny/webiny-js/commit/8c886754deeb54dc75accd8a57c220bd906e051c))
* remove `SecurityIdentity` class ([f4e403a](https://github.com/webiny/webiny-js/commit/f4e403adce08430b5e6193e073843e022c81ad28))
* remove direct aws-sdk/Lambda client usage ([6dfdeae](https://github.com/webiny/webiny-js/commit/6dfdeae49114b683dfc64c88dd41449a7b1d70c5))
* remove environments (already included in the scope) ([a192ab1](https://github.com/webiny/webiny-js/commit/a192ab1510f9b9d7a9b72bfe88da1b018b3fee96))
* remove old parts ([72e97a1](https://github.com/webiny/webiny-js/commit/72e97a1bada71958882d2f78331a1f9c5e4c6360))
* remove redundant comment ([5ae2fac](https://github.com/webiny/webiny-js/commit/5ae2facca965319daca545f4a8c3b7b12ff81bb0))
* remove unused import ([37330d9](https://github.com/webiny/webiny-js/commit/37330d9c297455796dc9c7ae5da0eef68ce6a9c7))
* remove use of `gql` in `graphql-schema` plugins ([101e8fe](https://github.com/webiny/webiny-js/commit/101e8fe782e38644d686a1670cf938e3aa6a0c0c))
* remove word `Handler` from context plugin type names ([277f0dd](https://github.com/webiny/webiny-js/commit/277f0dd300b7451a8a162678417e2b428cf002cf))
* rename `handler-apollo-server` to `handler-graphql` ([c32769a](https://github.com/webiny/webiny-js/commit/c32769a84872658898d74f0153176a7d7c7416ee))
* rename type "authentication" to "security" ([59226f4](https://github.com/webiny/webiny-js/commit/59226f40fabcb441140f7c8d99d5643302e046c4))
* replace `path` with `path.parameters` ([b6cb3b8](https://github.com/webiny/webiny-js/commit/b6cb3b80713024a7b7300955eb0b3e749355240b))
* send data to `request` property ([4f38f2d](https://github.com/webiny/webiny-js/commit/4f38f2dd5a49ca767684bdd8e176ad7bfedf1ef8))
* update `handle` function's args ([f0ffb37](https://github.com/webiny/webiny-js/commit/f0ffb37e5c95052e559f8ec854545fefbf84fbd7))
* update dependencies ([2da7785](https://github.com/webiny/webiny-js/commit/2da77856bb535144d5f08f6408bba10716737d06))
* update dependencies ([9e23d9d](https://github.com/webiny/webiny-js/commit/9e23d9d435c8e3993713d73123a7b93119893eb1))
* update dependencies ([6559856](https://github.com/webiny/webiny-js/commit/65598567b87479b37a0c737d6c695d68f3bf94df))
* update dependencies ([2598deb](https://github.com/webiny/webiny-js/commit/2598deb9a72b50d0fa0674d0676da6b387ab40d6))
* update tsconfig references and deps ([eec7eb0](https://github.com/webiny/webiny-js/commit/eec7eb00bc276a9d1496458315b3eb9ec0930f35))
* **api-headless-cms:** add slug to cmsEnv type ([2f54a4c](https://github.com/webiny/webiny-js/commit/2f54a4c7ea2db1fee8c47898c63ec363938fa919))
* **api-headless-cms:** allow scope assigning instead of returning available scopes ([c7ccf58](https://github.com/webiny/webiny-js/commit/c7ccf5863b7f1dd7dc919f479ed1f2eab408e238))
* **api-headless-cms:** fix env fetching ([e2d7f7d](https://github.com/webiny/webiny-js/commit/e2d7f7dc08791f2e444e48107056985be764691f))
* **code-format:** update configs for prettier and eslint ([#1237](https://github.com/webiny/webiny-js/issues/1237)) ([5ff6d22](https://github.com/webiny/webiny-js/commit/5ff6d22f427093a52560963770dadf96d8e6685b))
* use "const" instead of "let" ([c348198](https://github.com/webiny/webiny-js/commit/c348198c53adf07edaf6d4ab4c743de59c454da7))
* use handler's context as Apollo GraphQL Server initial context ([11f9f8b](https://github.com/webiny/webiny-js/commit/11f9f8b3e849402e2546d69c7221f68d73dded43))
* use more fine-grained scopes ([1db1e27](https://github.com/webiny/webiny-js/commit/1db1e275467e8aa8c962f54d78acc0cd6c0a651a))
* use newly added `securityAuthPlugins` plugins ([de53e1c](https://github.com/webiny/webiny-js/commit/de53e1cabd35c3c8a28633d535edcf5b2f4e699e))
* **api-headless-cms:** fix contentModels [WIP] ([133ac0a](https://github.com/webiny/webiny-js/commit/133ac0a4be116a273e45c4796b9b06ae3a60a48d))
* **api-headless-cms:** use env slug instead of id ([558fff4](https://github.com/webiny/webiny-js/commit/558fff41a829ab397f6794c8ba190fbb25763403))
### Features
* **api-headless-cms:** use `hasCmsPermission` ([f937e2a](https://github.com/webiny/webiny-js/commit/f937e2ac191fef829c4e1cc3e11fc947941959fc))
* **app-page-builder:** switch redux for recoil ([a1c5f18](https://github.com/webiny/webiny-js/commit/a1c5f18e271d27a6e65a912014de66dc048741a9))
* add createAccessToken testing helper function ([b86822c](https://github.com/webiny/webiny-js/commit/b86822ce91437eafd54406473ba4bb7bf1eedc61))
* remove preApply/postApply hooks ([c4275f8](https://github.com/webiny/webiny-js/commit/c4275f881647cd2cdde34cb5d8c304fc36db9ae3))
* **api-headless-cms:** add 'createAccessToken' utility & use it ([db76848](https://github.com/webiny/webiny-js/commit/db76848f833587658ed3081c3f5d995e96eb1c6c))
* **api-headless-cms:** add [WIP] cmsContentModel ([78b9ab5](https://github.com/webiny/webiny-js/commit/78b9ab51c6642033dd6555af3a783e7afa4f85ca))
* **api-headless-cms:** add scopes to Access Tokens + scopes test ([ef1b8cf](https://github.com/webiny/webiny-js/commit/ef1b8cf1df2db30f29c35b6869d5bfea6557aa2b))
* **api-headless-cms:** add scopes to content models ([e2a7288](https://github.com/webiny/webiny-js/commit/e2a728860622bff947a07193bd39ad60123c927e))
* **api-headless-cms:** add slug & contentModels to gql schema ([f5846f7](https://github.com/webiny/webiny-js/commit/f5846f7f9c99135930ec6f24b0b29776c15f83e3))
* **api-headless-cms:** add slug test & [WIP] contentModels test ([a3bb948](https://github.com/webiny/webiny-js/commit/a3bb9489b0227f3f387f32a106f538c098a2cd4b))
* **api-headless-cms:** add slugs & [WIP] add contentModels ([e7b3bbd](https://github.com/webiny/webiny-js/commit/e7b3bbd5d692edf796e903153014f3aed1dcbc98))
* **api-headless-cms:** move / split plugins to preApply ([dda0247](https://github.com/webiny/webiny-js/commit/dda024722e4158fc8915e724b86ea5c5165a6b0b))
* **api-headless-cms:** separate [WIP] access token authentication ([658972a](https://github.com/webiny/webiny-js/commit/658972a8695ef6f0c9d023e113608616b315aa00))
# [4.14.0](https://github.com/webiny/webiny-js/compare/v4.14.0-beta.1...v4.14.0) (2020-10-30)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.14.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.14.0-beta.0...v4.14.0-beta.1) (2020-10-30)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.14.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.13.0...v4.14.0-beta.0) (2020-10-30)
### Bug Fixes
* increase limit to 200 ([947a5f6](https://github.com/webiny/webiny-js/commit/947a5f60a6c40543f440fae3a24a95356b54a36d))
# [4.13.0](https://github.com/webiny/webiny-js/compare/v4.13.0-beta.0...v4.13.0) (2020-10-06)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.13.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.12.1...v4.13.0-beta.0) (2020-10-06)
### Bug Fixes
* remove redundant "server" ([4c9a47e](https://github.com/webiny/webiny-js/commit/4c9a47ee26fb8584af882549d89738c58bc45dc9))
## [4.12.1](https://github.com/webiny/webiny-js/compare/v4.12.1-beta.0...v4.12.1) (2020-09-17)
**Note:** Version bump only for package @webiny/api-headless-cms
## [4.12.1-beta.0](https://github.com/webiny/webiny-js/compare/v4.12.0...v4.12.1-beta.0) (2020-09-17)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.12.0](https://github.com/webiny/webiny-js/compare/v4.12.0-beta.1...v4.12.0) (2020-09-16)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.12.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.12.0-beta.0...v4.12.0-beta.1) (2020-09-16)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.12.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.11.0...v4.12.0-beta.0) (2020-09-16)
### Bug Fixes
* add `createSchema` prop to `cms-model-field-to-graphql` plugin's `read` property ([67a3b36](https://github.com/webiny/webiny-js/commit/67a3b36c61faa6c1f4a88f002037d7c33529bcd0))
* ensure multiple `models` context plugins don't clash ([044ee77](https://github.com/webiny/webiny-js/commit/044ee77bf396bda33624b74a41b4c560ff843693))
# [4.11.0](https://github.com/webiny/webiny-js/compare/v4.11.0-beta.1...v4.11.0) (2020-09-09)
### Bug Fixes
* **api-headless-cms:** add equality check for id `idValidation` ([5e7ee98](https://github.com/webiny/webiny-js/commit/5e7ee98879e52a799d86a7a8ddd6e0b8baef5f84))
* **api-headless-cms:** trim `fieldId` before save ([173db34](https://github.com/webiny/webiny-js/commit/173db349bfb2fb1f08fc51345cf5bd01732c9851))
# [4.11.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.11.0-beta.0...v4.11.0-beta.1) (2020-09-09)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.11.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.10.0...v4.11.0-beta.0) (2020-09-09)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.10.0](https://github.com/webiny/webiny-js/compare/v4.10.0-beta.0...v4.10.0) (2020-09-01)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.10.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.9.0...v4.10.0-beta.0) (2020-09-01)
### Bug Fixes
* add backwards compatibility support ([5ec071d](https://github.com/webiny/webiny-js/commit/5ec071d8a1971552b9918a9dd37aa2fdd94d495a))
* load latest / published revisions accordingly ([4968ab2](https://github.com/webiny/webiny-js/commit/4968ab2ed2c4f1dea82a11a705a1aa85081aee76))
* use `loadRef` instead of old `findRefArgs` ([299fb02](https://github.com/webiny/webiny-js/commit/299fb02d44a9b0fe1c214d054018520fe9b2c2d9))
* **api-headless-cms:** use triple quotes on content model description ([#1199](https://github.com/webiny/webiny-js/issues/1199)) ([45f3e47](https://github.com/webiny/webiny-js/commit/45f3e47c2a8a984ef401e8e736d95a11c4f0925e))
# [4.9.0](https://github.com/webiny/webiny-js/compare/v4.9.0-beta.0...v4.9.0) (2020-08-18)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.9.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.8.0...v4.9.0-beta.0) (2020-08-18)
### Bug Fixes
* enable `getContentModels` and `listContentModels` in the READ/PREVIEW API ([6c57c76](https://github.com/webiny/webiny-js/commit/6c57c763511923b5a482b1decdb97da0db85028d))
### Features
* add `context-after-content-models` plugin ([a859fe1](https://github.com/webiny/webiny-js/commit/a859fe1bd46a4c79edb8e50d53a27658c3bdd8d6))
# [4.8.0](https://github.com/webiny/webiny-js/compare/v4.8.0-beta.2...v4.8.0) (2020-08-12)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.8.0-beta.2](https://github.com/webiny/webiny-js/compare/v4.8.0-beta.1...v4.8.0-beta.2) (2020-08-12)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.8.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.8.0-beta.0...v4.8.0-beta.1) (2020-08-12)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.8.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.7.0...v4.8.0-beta.0) (2020-08-12)
### Bug Fixes
* **headless:** abstract function and add comment for reason ([46f80ff](https://github.com/webiny/webiny-js/commit/46f80ffdc988d5762b868ab8673132e8d260a156))
* **headless:** add alternative to pluralize if id is one character ([d19c7a7](https://github.com/webiny/webiny-js/commit/d19c7a7fcee919dbd754927dc39941e372969119))
* **headless:** remove file changes from other pr ([7dd3329](https://github.com/webiny/webiny-js/commit/7dd33291424029b4075f8cbc9525c4b1011d5578))
# [4.7.0](https://github.com/webiny/webiny-js/compare/v4.7.0-beta.1...v4.7.0) (2020-07-29)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.7.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.7.0-beta.0...v4.7.0-beta.1) (2020-07-29)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.7.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.6.0...v4.7.0-beta.0) (2020-07-28)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.6.0](https://github.com/webiny/webiny-js/compare/v4.6.0-beta.0...v4.6.0) (2020-07-21)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.6.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.5.1...v4.6.0-beta.0) (2020-07-21)
### Bug Fixes
* **api-headless-cms:** update hook plugin name to be entry specific ([9394b36](https://github.com/webiny/webiny-js/commit/9394b36b6b12b09f9be78ba49dda82efbbded393))
### Features
* **api-headless-cms:** add support for data manager hooks ([61101ff](https://github.com/webiny/webiny-js/commit/61101ffcef4116f9b88d7064fca0126640f5bab1))
## [4.5.1](https://github.com/webiny/webiny-js/compare/v4.5.1-beta.1...v4.5.1) (2020-07-19)
**Note:** Version bump only for package @webiny/api-headless-cms
## [4.5.1-beta.1](https://github.com/webiny/webiny-js/compare/v4.5.1-beta.0...v4.5.1-beta.1) (2020-07-19)
**Note:** Version bump only for package @webiny/api-headless-cms
## [4.5.1-beta.0](https://github.com/webiny/webiny-js/compare/v4.5.0...v4.5.1-beta.0) (2020-07-18)
### Bug Fixes
* lock "apollo-server-lambda" version ([f6f57e4](https://github.com/webiny/webiny-js/commit/f6f57e4c91dada71d6feb3ed7e5d11e267c2182d))
* **headless:** remove createdBy and updatedBy from the headless CMS API ([#1131](https://github.com/webiny/webiny-js/issues/1131)) ([d306838](https://github.com/webiny/webiny-js/commit/d3068382e01cdf8fd9954e9f943c6142c3ce417a))
# [4.5.0](https://github.com/webiny/webiny-js/compare/v4.5.0-beta.4...v4.5.0) (2020-07-14)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.5.0-beta.4](https://github.com/webiny/webiny-js/compare/v4.5.0-beta.3...v4.5.0-beta.4) (2020-07-14)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.5.0-beta.3](https://github.com/webiny/webiny-js/compare/v4.5.0-beta.2...v4.5.0-beta.3) (2020-07-14)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.5.0-beta.2](https://github.com/webiny/webiny-js/compare/v4.5.0-beta.1...v4.5.0-beta.2) (2020-07-14)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.5.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.5.0-beta.0...v4.5.0-beta.1) (2020-07-14)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.5.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.4.0...v4.5.0-beta.0) (2020-07-14)
### Bug Fixes
* export mock IDs for direct usage in tests ([4e0c451](https://github.com/webiny/webiny-js/commit/4e0c4518eff4a3145872afae01e2cbe9708873fb))
* make sure newly added ref fields cannot reference a model with no title field ([752753b](https://github.com/webiny/webiny-js/commit/752753b8580d8cfeb4e1b3f436831a6cd7a232ee))
* remerging from master branch, fixed handler error ([2786e6e](https://github.com/webiny/webiny-js/commit/2786e6e6a2e2e51fefcf754d6a03b7287eff48b0))
* removed searchModel from headless cms model fields text ([de1b0b6](https://github.com/webiny/webiny-js/commit/de1b0b6d98dfbee14323818f034c87f47ee04e75))
### Features
* **api-headless-cms:** add `environmentAliases` ([e39cc12](https://github.com/webiny/webiny-js/commit/e39cc125dfe05b32660d2eea60cda378e2af77cd))
# [4.4.0](https://github.com/webiny/webiny-js/compare/v4.4.0-beta.3...v4.4.0) (2020-07-08)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.4.0-beta.3](https://github.com/webiny/webiny-js/compare/v4.4.0-beta.2...v4.4.0-beta.3) (2020-07-07)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.4.0-beta.2](https://github.com/webiny/webiny-js/compare/v4.4.0-beta.1...v4.4.0-beta.2) (2020-07-07)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.4.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.4.0-beta.0...v4.4.0-beta.1) (2020-07-07)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.4.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.3.0...v4.4.0-beta.0) (2020-07-07)
### Bug Fixes
* correct query error message ([d4b1d57](https://github.com/webiny/webiny-js/commit/d4b1d57033129697c46654b8cf9d2f2b0edc1bb8))
* extract plugin into a separate file ([fb6ebe0](https://github.com/webiny/webiny-js/commit/fb6ebe0da38cc61d2ac4be560274384641e60291))
* remove unused import ([cbd8948](https://github.com/webiny/webiny-js/commit/cbd89481716e385434d5d0b900484974e260aa36))
* throw a proper error message on invalid environment or API type ([3b24817](https://github.com/webiny/webiny-js/commit/3b24817c0545a9aa90931647637f7aeee5a6326b))
* use titleFieldId in order to perform referenceIn checks ([c7b3b6c](https://github.com/webiny/webiny-js/commit/c7b3b6cb77adb67d0c06dd333b52db0a1591dd31))
# [4.3.0](https://github.com/webiny/webiny-js/compare/v4.3.0-beta.5...v4.3.0) (2020-07-01)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.3.0-beta.5](https://github.com/webiny/webiny-js/compare/v4.3.0-beta.4...v4.3.0-beta.5) (2020-07-01)
### Bug Fixes
* extract plugin into a separate file ([5e47f7b](https://github.com/webiny/webiny-js/commit/5e47f7b47ea6d60f8eb5ab1b496bb9c902e1d2b9))
* remove unused import ([55ecd6f](https://github.com/webiny/webiny-js/commit/55ecd6f3bda00d6aff91159ffffa11785fae4767))
* use titleFieldId in order to perform referenceIn checks ([629c844](https://github.com/webiny/webiny-js/commit/629c8443e3d8e21a6f0ba51b122d30c2b47c8326))
# [4.3.0-beta.4](https://github.com/webiny/webiny-js/compare/v4.3.0-beta.3...v4.3.0-beta.4) (2020-07-01)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.3.0-beta.3](https://github.com/webiny/webiny-js/compare/v4.3.0-beta.2...v4.3.0-beta.3) (2020-07-01)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.3.0-beta.2](https://github.com/webiny/webiny-js/compare/v4.3.0-beta.1...v4.3.0-beta.2) (2020-06-30)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.3.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.3.0-beta.0...v4.3.0-beta.1) (2020-06-30)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.3.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.2.0...v4.3.0-beta.0) (2020-06-30)
### Bug Fixes
* export mock IDs for direct usage in tests ([fc6882b](https://github.com/webiny/webiny-js/commit/fc6882b2e79d12b3fd13c8b50f097e961378ff9d))
* make sure newly added ref fields cannot reference a model with no title field ([265644d](https://github.com/webiny/webiny-js/commit/265644d3d26b4ffeef62d172ef330a1519aa2d6f))
### Features
* **api-headless-cms:** add `CmsModelLockedFieldPlugin` plugin ([6b12797](https://github.com/webiny/webiny-js/commit/6b127979a4b624173424084844c989d57c7bdba2))
* **api-headless-cms:** add `environmentAliases` ([257a6ef](https://github.com/webiny/webiny-js/commit/257a6ef7595a50f6df635ba2a0387561c952ec45))
* **api-headless-cms:** add additional data to lockedFields ([ec935f8](https://github.com/webiny/webiny-js/commit/ec935f8b8f405f395f80438175e2ca834f144ccc))
* **api-headless-cms:** add more fields to `LockedFields` model ([9834107](https://github.com/webiny/webiny-js/commit/98341075c882f00e3a2f00c679bb3bb685dc6354))
* **api-headless-cms:** check `lockedField` invariant ([102d93b](https://github.com/webiny/webiny-js/commit/102d93b867684236f59b473637ebee119b7317ab))
# [4.2.0](https://github.com/webiny/webiny-js/compare/v4.2.0-beta.2...v4.2.0) (2020-06-23)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.2.0-beta.2](https://github.com/webiny/webiny-js/compare/v4.2.0-beta.1...v4.2.0-beta.2) (2020-06-23)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.2.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.2.0-beta.0...v4.2.0-beta.1) (2020-06-23)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.2.0-beta.0](https://github.com/webiny/webiny-js/compare/v4.1.1-beta.2...v4.2.0-beta.0) (2020-06-23)
**Note:** Version bump only for package @webiny/api-headless-cms
## [4.1.1-beta.2](https://github.com/webiny/webiny-js/compare/v4.1.1-beta.1...v4.1.1-beta.2) (2020-06-23)
**Note:** Version bump only for package @webiny/api-headless-cms
## 4.1.1-beta.1 (2020-06-23)
### Bug Fixes
* **cwp-templates:** explicitly disable prettier trailingComma ([bb1ccf9](https://github.com/webiny/webiny-js/commit/bb1ccf92251f2b18cc5c4f98f5ffd59b607189ea))
## 4.1.1-beta.0 (2020-06-22)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.1.0 (2020-06-16)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.1.0-beta.3 (2020-06-16)
### Bug Fixes
* replace "id" and "content.id" ([d1d4bb9](https://github.com/webiny/webiny-js/commit/d1d4bb909d99b2b94678dcf3071b77a379687056))
# 4.1.0-beta.2 (2020-06-16)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.1.0-beta.1 (2020-06-16)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.1.0-beta.0 (2020-06-16)
### Bug Fixes
* use `serverless-function` instead of `serverless-app` ([7872c81](https://github.com/webiny/webiny-js/commit/7872c816d47d763c79a1139744c7400a8895a3fb))
## 4.0.3-beta.0 (2020-06-16)
**Note:** Version bump only for package @webiny/api-headless-cms
## 4.0.2 (2020-06-05)
**Note:** Version bump only for package @webiny/api-headless-cms
## 4.0.1 (2020-06-04)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.0.0](https://github.com/webiny/webiny-js/compare/v4.0.0-beta.19...v4.0.0) (2020-06-04)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.0.0-beta.19 (2020-06-04)
### Bug Fixes
* make sure IDs are saved correctly in single value ref fields ([f08a5d4](https://github.com/webiny/webiny-js/commit/f08a5d45bb687db475f4ba6fe2bb7bf42f908c12))
# 4.0.0-beta.18 (2020-06-04)
### Bug Fixes
* **cwp-template-cms:** add missing api hook ([d1aa7c3](https://github.com/webiny/webiny-js/commit/d1aa7c334e681340ac14fd2b3d83fa583a8e5fc8))
# 4.0.0-beta.17 (2020-06-04)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.0.0-beta.16 (2020-06-03)
### Bug Fixes
* **create-webiny-project:** pass cwd to git init command ([978372b](https://github.com/webiny/webiny-js/commit/978372b02757c3525372fb711e62786580319f5e))
# 4.0.0-beta.15 (2020-06-03)
### Bug Fixes
* **project-utils:** remove unused dependencies ([95e06ca](https://github.com/webiny/webiny-js/commit/95e06ca58d88131af665a59041e4355ce1ad16d8))
# 4.0.0-beta.14 (2020-06-03)
### Bug Fixes
* **ui:** remove unused type import ([9e5a94d](https://github.com/webiny/webiny-js/commit/9e5a94d7b6bb4cb7c44b84c876dd130ec05f6507))
# 4.0.0-beta.13 (2020-06-02)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.0.0-beta.12 (2020-06-02)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.0.0-beta.11 (2020-06-02)
### Bug Fixes
* 🐛 Add dependencies for packages in app-admin ([0cbd052](https://github.com/webiny/webiny-js/commit/0cbd0526d90bba17f0ef5b00b29a35d84bbd831a))
# 4.0.0-beta.10 (2020-06-02)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.0.0-beta.9 (2020-06-02)
**Note:** Version bump only for package @webiny/api-headless-cms
# 4.0.0-beta.8 (2020-06-02)
### Features
* ✨ Remove unneeded text below password. Add link to checkbox ([39687f4](https://github.com/webiny/webiny-js/commit/39687f42f17c0066c681c16745ab9c37d5759f08))
# 4.0.0-beta.7 (2020-06-01)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.0.0-beta.6](https://github.com/webiny/webiny-js/compare/v4.0.0-beta.5...v4.0.0-beta.6) (2020-06-01)
### Bug Fixes
* skip all security checks if event is not present ([1e82c4f](https://github.com/webiny/webiny-js/commit/1e82c4fa9fae26718edb98834ae1517557c5a2f2))
# [4.0.0-beta.5](https://github.com/webiny/webiny-js/compare/v4.0.0-beta.4...v4.0.0-beta.5) (2020-05-31)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.0.0-beta.4](https://github.com/webiny/webiny-js/compare/v4.0.0-beta.3...v4.0.0-beta.4) (2020-05-29)
### Bug Fixes
* add "locale" argument to the value field ([757aecc](https://github.com/webiny/webiny-js/commit/757aeccfcaa191c9fcae68d4f93e45deafdac11a))
* add "locale" argument to the value field ([224a497](https://github.com/webiny/webiny-js/commit/224a4973a725bece5fb8fd8ac11d48bb67e5d7b9))
* add "multipleValues" field ([7584825](https://github.com/webiny/webiny-js/commit/7584825f2d918a14c1e30a4de6c696ce3c3526d4))
* add "multipleValues" field ([f537a32](https://github.com/webiny/webiny-js/commit/f537a322fb73c09c81ba7758c929c219e9826f0c))
* add "multipleValues" support ([507e829](https://github.com/webiny/webiny-js/commit/507e829d985527c4a015a7897b121d202e646c6b))
* add "multipleValues" support ([763903d](https://github.com/webiny/webiny-js/commit/763903dcbb4b8a4842928d181d148bbe29241af1))
* don't throw error if event is not present. ([5702c41](https://github.com/webiny/webiny-js/commit/5702c41ea74cb814cadbdf4d4c617d326653f4f5))
* enable support for "multipleValues" ([559a1da](https://github.com/webiny/webiny-js/commit/559a1daeb66545272f69c8b08502ac53ae0cca75))
* enable support for "multipleValues" ([9c1418b](https://github.com/webiny/webiny-js/commit/9c1418b1c9d405d3fc03aac2e67d16519353740d))
* make sure multiple-value fields cannot be set as entry title ([6302fe6](https://github.com/webiny/webiny-js/commit/6302fe6ba0ef694d8128b926ae9f950f326a560d))
* make sure multiple-value fields cannot be set as entry title ([b1141f0](https://github.com/webiny/webiny-js/commit/b1141f0b15787e4f6246ffca9d64cf82ac2f42e4))
* move used-fields checking into "beforeUpdate" hook ([5d7f6af](https://github.com/webiny/webiny-js/commit/5d7f6af8a12dea9b641a7802da653f3d10124b76))
* remodel usedFields - make it an array of models ([b374e58](https://github.com/webiny/webiny-js/commit/b374e58f9a75a66938393c351650604a8e4236a9))
* remove default "[]" value for "usedFields" field ([73352d8](https://github.com/webiny/webiny-js/commit/73352d8c2ba56b0ad5102830787b3141341780b8))
* rename "usedFields" to "lockedFields" ([5a815cb](https://github.com/webiny/webiny-js/commit/5a815cb56d1060dc2836816c51a9a889bd152d10))
* update the "usedFields" field on the content model ([8b9678d](https://github.com/webiny/webiny-js/commit/8b9678da6999c3eb49fe2ad271cb71a21b660584))
### Features
* **headless-cms:** access tokens ([#904](https://github.com/webiny/webiny-js/issues/904)) ([4fee3af](https://github.com/webiny/webiny-js/commit/4fee3af605cc2a7aeb69c3f11f0101b5eb81024b))
* add list inputs ([4366900](https://github.com/webiny/webiny-js/commit/4366900d9196246f5362aa2bafad27c6d5d0ede6))
* add list inputs ([22bd089](https://github.com/webiny/webiny-js/commit/22bd089d37d39be888009d3c422c1f164445b553))
* create list types as well ([c7c1052](https://github.com/webiny/webiny-js/commit/c7c10524b6b2aed69851453cbd884e0d51e84986))
* create list types as well ([0d5839d](https://github.com/webiny/webiny-js/commit/0d5839d15646a2c845ed01330a90a504b882faed))
# [4.0.0-beta.2](https://github.com/webiny/webiny-js/compare/v4.0.0-beta.1...v4.0.0-beta.2) (2020-05-25)
### Bug Fixes
* add "@webiny/api-plugin-commodo-nedb" to the list ([5e04fa9](https://github.com/webiny/webiny-js/commit/5e04fa9a4002e94cc4423021411abf369f22e337))
* update search table on changes of latestVersion / published flags ([e7ff2f9](https://github.com/webiny/webiny-js/commit/e7ff2f9e71060c10066c24d6d9882f4ed4ebef62))
* **api-headless-cms:** check user authentication in manage API ([f0dccdb](https://github.com/webiny/webiny-js/commit/f0dccdb19ef308d467f69bdae17aa563c60d1b25))
# [4.0.0-beta.1](https://github.com/webiny/webiny-js/compare/v4.0.0-beta.0...v4.0.0-beta.1) (2020-05-22)
**Note:** Version bump only for package @webiny/api-headless-cms
# [4.0.0-beta.0](https://github.com/webiny/webiny-js/compare/v1.15.1...v4.0.0-beta.0) (2020-05-22)
### Bug Fixes
* 🐛 add missing type annotation ([#866](https://github.com/webiny/webiny-js/issues/866)) ([03016c5](https://github.com/webiny/webiny-js/commit/03016c52cbb71dc10b0b9d922ee1fd61a988a302))
* add "changedOn" field ([1de89f4](https://github.com/webiny/webiny-js/commit/1de89f44879e87426199410485306756d37720d2))
* add "CmsContentModelGroup" model ([e5ba929](https://github.com/webiny/webiny-js/commit/e5ba9299202ccd1c3b63dad83ae3c703f770cc85))
* add "CmsContentModelGroup" to the schema ([3c0b924](https://github.com/webiny/webiny-js/commit/3c0b924f6a770912f31f4df44d92f9b85019a9d3))
* add "contentModelGroup" resolvers and security ([3bde7bd](https://github.com/webiny/webiny-js/commit/3bde7bd84ba36599e90c52591f40ca3e5e288dfb))
* add "createdOn" for created indexes ([eba16f8](https://github.com/webiny/webiny-js/commit/eba16f849480243eeeca91356f8747b2b79f164f))
* add "FieldId" type and "createdOn" field on "CmsContentModelIndex" ([dafbebd](https://github.com/webiny/webiny-js/commit/dafbebd95679873ad611d779bdedfe04f968f03c))
* add "group" field ([89b013c](https://github.com/webiny/webiny-js/commit/89b013cc5b1b34baba4a6e339cbb6b04c8111987))
* add "group" ref field ([3325657](https://github.com/webiny/webiny-js/commit/33256570e47a837e9d1ce277b9731630fa0788a2))
* add "layout" field ([b827564](https://github.com/webiny/webiny-js/commit/b8275647f6f5b07df293e3fef5dfaf29275fc9b0))
* add "layout" field ([88668ad](https://github.com/webiny/webiny-js/commit/88668adfe9a4fe35a75bcb93d00398fce7bba8b1))
* add "loadsh.clonedeep" ([2a1f904](https://github.com/webiny/webiny-js/commit/2a1f904e4e5800fbf1924584eaf0c4808b0d3711))
* add "titleFieldId" field ([c34d1bb](https://github.com/webiny/webiny-js/commit/c34d1bbdc333092e61eab1d47d6db253bd1a4432))
* add "used" field ([683e716](https://github.com/webiny/webiny-js/commit/683e716202cb37c83e75022eb139344205646b0d))
* add a type annotation ([0048a75](https://github.com/webiny/webiny-js/commit/0048a757eda3c9f17dd2080f26ae626d1e59dc1a))
* add changedOn field ([1427151](https://github.com/webiny/webiny-js/commit/1427151c4fa64a9ce309ceab87e06cab4ad503a2))
* add CmsAny fields and some new CM fields-related fields ([a8495a4](https://github.com/webiny/webiny-js/commit/a8495a4c667cd43e769a648b9012192ab7f77572))
* add CmsEnvironment and CmsEnvironmentAlias types ([963803a](https://github.com/webiny/webiny-js/commit/963803ab239ae4f338d6e9e6486c3b4421d42219))
* add createFrom resolver ([a1cb4f9](https://github.com/webiny/webiny-js/commit/a1cb4f939bbd1cc750e6916b149274117ba0a2cd))
* add custom id generation to environment base model ([c6c21f4](https://github.com/webiny/webiny-js/commit/c6c21f404aa11b2eed10fb75f72f129932318637))
* add dot at the end of the sentence ([e860e13](https://github.com/webiny/webiny-js/commit/e860e133e4ff0d6382a8e36f2a1f0d261a376918))
* add empty headers object ([ad201e9](https://github.com/webiny/webiny-js/commit/ad201e99c5b929253db86f4efbd2183bb6c9445d))
* add missing fields ([ec83b21](https://github.com/webiny/webiny-js/commit/ec83b216b85f0fa499f4cdc98aedd35b11d8f00c))
* add missing fields ([a502abd](https://github.com/webiny/webiny-js/commit/a502abd4bd0c2f8015091284a53e2eba64464761))
* add missing fields ([ee129b0](https://github.com/webiny/webiny-js/commit/ee129b07f6be802b6e302b5bef4cbb7390808fbf))
* add missing unique field to schema type ([46ac7bf](https://github.com/webiny/webiny-js/commit/46ac7bf1a4233db9c5d2048fb6d8d2fc77b57a62))
* add new field properties ([aa50483](https://github.com/webiny/webiny-js/commit/aa504834887a83e0ff4c1288ed7414989d2615ab))
* add support for lists of I18N values ([ad6beb5](https://github.com/webiny/webiny-js/commit/ad6beb5af1f47950ceda2fa60ee8d4cb18907c84))
* add to git ([febe9e6](https://github.com/webiny/webiny-js/commit/febe9e68046c2077428f6bf010c873c4684277e6))
* add type declarations ([6997766](https://github.com/webiny/webiny-js/commit/69977668b2b72528a54492f2c0a2ae5125b45246))
* add used flag ([c359d08](https://github.com/webiny/webiny-js/commit/c359d087cf5753bf90ccf39ca2df698438367863))
* after an entry is saved, update "used" flag for fields in content model ([5cd67f9](https://github.com/webiny/webiny-js/commit/5cd67f9e2d93b283a5e632cba6c9bef0b81d1c4f))
* automatically handle slug for content model groups ([5729b10](https://github.com/webiny/webiny-js/commit/5729b10c26c5ef3a06afc24a1be8ab6b7f6b0247))
* automatically set modelId ([32cf6c7](https://github.com/webiny/webiny-js/commit/32cf6c7fa7e5902fc39bd0e22e6b1319e7198aee))
* automatically set modelId ([0fd1a5b](https://github.com/webiny/webiny-js/commit/0fd1a5bff7f3c8fe55e96b03eabd5b70c1362278))
* avoid importing from "lodash" ([e90c290](https://github.com/webiny/webiny-js/commit/e90c2900a60cd167925a853ba2a5dcb93e525f3b))
* break if "titleFieldId" was found ([48678e3](https://github.com/webiny/webiny-js/commit/48678e385d76c009c771b26437f91069cbdf3356))
* bring back createdOn field ([92aab9d](https://github.com/webiny/webiny-js/commit/92aab9d02454537a1dc8473a34060ed79808178f))
* bring back missing CmsEnvironment const ([01b796c](https://github.com/webiny/webiny-js/commit/01b796c938c4bb688eb20765efb4e6f662d4368c))
* call setContextLocale in every manage resolver ([4b0e129](https://github.com/webiny/webiny-js/commit/4b0e12949203a6938f93037196ed8af018636a0d))
* cast returnArgs as any ([ca75fb3](https://github.com/webiny/webiny-js/commit/ca75fb34554a66861618dab73a0f108b06eb1f97))
* correct file name ([448d965](https://github.com/webiny/webiny-js/commit/448d9653fac4f731fadf0b2c9d229a3ae30b1a69))
* correct import file name ([073766b](https://github.com/webiny/webiny-js/commit/073766bae5ac7970535758ee638e4382636ed583))
* correct import paths ([e3a36ef](https://github.com/webiny/webiny-js/commit/e3a36efee388ba999299ce4fce9e750be987bf5c))
* correct imports ([0c5a402](https://github.com/webiny/webiny-js/commit/0c5a40262e268ca6a7f96c72348a97fcf2ab5a18))
* correct model name ([44c0e60](https://github.com/webiny/webiny-js/commit/44c0e60cb0cf22b719b2eb536a67f35701855412))
* correct used meta keys ([4313225](https://github.com/webiny/webiny-js/commit/4313225b1a8b530a72d959a0e91a40451194fea1))
* create "resolveCreateFrom" resolver ([e060bf0](https://github.com/webiny/webiny-js/commit/e060bf0d73623cf43256cda2be70aa50258f0735))
* create core GraphQL API ([6018555](https://github.com/webiny/webiny-js/commit/60185555df28374e3677fc6fcee61fd8129df614))
* datetime type ([e9b3cfb](https://github.com/webiny/webiny-js/commit/e9b3cfbee153d8788577b9cada03591a8acd3deb))
* describe the purpose of the "before-handler" plugin ([04ed452](https://github.com/webiny/webiny-js/commit/04ed452451a1ea5b6405e57033c07e0a8e1c04da))
* do not allow content model deletion if there are entries existing ([2e148db](https://github.com/webiny/webiny-js/commit/2e148db44fea0d25bd58b38ebedb6928ced7c53c))
* don't assign the "model" field, let the consumers do it ([197b776](https://github.com/webiny/webiny-js/commit/197b776072f2a473eac5c9c5b038c4879816bbb0))
* don't skip this test suite ([be9bde2](https://github.com/webiny/webiny-js/commit/be9bde245e4027b2aa89fbbf9aedfacd2a4ce38f))
* enable loading environments via both environment ID and alias ([151ff8b](https://github.com/webiny/webiny-js/commit/151ff8b6406c535db26af272b723a2c2141203c6))
* extract Apollo handler creation into a separate file ([ba0214d](https://github.com/webiny/webiny-js/commit/ba0214d3f377493e2fffeb9cbd89c9734dec2dd4))
* extract logic into separate files ([ab623d0](https://github.com/webiny/webiny-js/commit/ab623d019cb3e44451566204d509384b6794550d))
* fix field names ([fb8bdaf](https://github.com/webiny/webiny-js/commit/fb8bdaff4d7aa2ce43e53440a1073493b83b7b74))
* fix the "afterDelete" hook callback ([a54a2d2](https://github.com/webiny/webiny-js/commit/a54a2d275c190fd3060e6e4f18434aeafe28b97e))
* generate preview schema using the read schema plugins ([b6c1115](https://github.com/webiny/webiny-js/commit/b6c11150115df5b979a1506bdd1c18856c8c5719))
* GQL filtering options must be built based on created indexes ([dac51f9](https://github.com/webiny/webiny-js/commit/dac51f93c6ab146efa67499b5b3cd1ca432a1198))
* handle "not" operators in Manage API ([fe5f7b4](https://github.com/webiny/webiny-js/commit/fe5f7b48554c9194267f67f45490b0a3464cc740))
* handle undefined model fields on index generation ([0c47203](https://github.com/webiny/webiny-js/commit/0c472035b0e7cc82c2e0e9045074323235eeec07))
* if dirty, mark environment as changed ([22fae78](https://github.com/webiny/webiny-js/commit/22fae78689d015f1eed04bcef2cba095de3a0696))
* improve handling of environment parameter ([48fd113](https://github.com/webiny/webiny-js/commit/48fd1139f2bda03a5aa1a2b7b46eb79362e5f260))
* include environment aliases in the context ([9450e14](https://github.com/webiny/webiny-js/commit/9450e14f51a90bc19f6acef15a654d6f3ef17521))
* include environment aliases in the GQL schema ([4a85c9f](https://github.com/webiny/webiny-js/commit/4a85c9f91bfc0c90f1d8e6c818bfafbfe1edee49))
* instead of the deprecated "valueSet", use "state.set" ([deb4bcc](https://github.com/webiny/webiny-js/commit/deb4bcc1c0df209557ac5c91ca783649fa11e972))
* invoke index generation even when no indexes are defined by the user ([2fb0f5a](https://github.com/webiny/webiny-js/commit/2fb0f5a95609e9a6a92a55fecffad5cb7857f713))
* load environment and environment alias ([4f5edfe](https://github.com/webiny/webiny-js/commit/4f5edfe53c29fb0de5aa37f0b57fb3e6371fe4ff))
* load environment on every request ([90705a7](https://github.com/webiny/webiny-js/commit/90705a7319b40861335bec40a3e8a8b7b96d34a3))
* make renderer as not-required ([004a747](https://github.com/webiny/webiny-js/commit/004a747ec117d99db451222a747d11b5562c2820))
* make sure the `cms` key exists ([ee9f0cf](https://github.com/webiny/webiny-js/commit/ee9f0cfaf916b66c98411d4fb668a891de5395d6))
* make sure the modelId is properly camelCased ([26b7594](https://github.com/webiny/webiny-js/commit/26b759484b9f3ff269993a4b73cfd0cfc569c70c))
* merge "float" and "integer" fields into a single "number" field ([bc9c60e](https://github.com/webiny/webiny-js/commit/bc9c60edb62e4fdfe6c169881d2cd30637ebd924)), closes [#814](https://github.com/webiny/webiny-js/issues/814)
* move "handler" plugins to "handler" folder ([0c7ba7c](https://github.com/webiny/webiny-js/commit/0c7ba7ca96f2725373f203ebc52d58ebcab856bb))
* move latestVersion handling into beforeCreate hook ([a7edf07](https://github.com/webiny/webiny-js/commit/a7edf07ab3ed712c1a22e97b2194dc809f338549))
* move meta field below main type ([3a3303d](https://github.com/webiny/webiny-js/commit/3a3303d6c39822a965eaec469c2595c2f71f9616))
* must return current locale, not the default one ([b7dbe73](https://github.com/webiny/webiny-js/commit/b7dbe736306453d82c0dba6dbc623e1e282e3d0d))
* only add sort enums if the field is present in an index ([42431b3](https://github.com/webiny/webiny-js/commit/42431b3265bccdad26eea70e456e09446eaa9808))
* only return published content models ([687c753](https://github.com/webiny/webiny-js/commit/687c753bfa2f5312dbc936c32dfe9d50107af840))
* pass filters via "query" property ([11bdf7c](https://github.com/webiny/webiny-js/commit/11bdf7cc607df4c49fb4a0026f866ce029f7d92a))
* remove "id" based filters (those will be read from index) ([44aacb2](https://github.com/webiny/webiny-js/commit/44aacb22d3c9a87ffb8f21029009fde5efa90268))
* remove "name" property ([3d868f3](https://github.com/webiny/webiny-js/commit/3d868f3f6d474331105ed7087ca7a261329224b4))
* remove "searchable", "sortable", and "unique" fields ([f93d020](https://github.com/webiny/webiny-js/commit/f93d0206775ef3135564792aa1650e4be4135fad))
* remove "unique" field ([abf4b3f](https://github.com/webiny/webiny-js/commit/abf4b3fcb90834f83dfc0deeac4bbdf01c86b1b1))
* remove "unique" field ([6e87c55](https://github.com/webiny/webiny-js/commit/6e87c55b75170f3653c6810b0c67707ec1578edd))
* remove "used" field ([4d16d41](https://github.com/webiny/webiny-js/commit/4d16d410fffcc0734fcdc327f1c226a226104322))
* remove async onSet function ([f55ce7a](https://github.com/webiny/webiny-js/commit/f55ce7a54d99c1e1e968fc07dc70ada2ca487738))
* remove CmsEnvironment model ([9e51c8a](https://github.com/webiny/webiny-js/commit/9e51c8a72d78aa23c10f0a7f0ef2591d61d7d72e))
* remove content model revisions ([6e53f3b](https://github.com/webiny/webiny-js/commit/6e53f3b9f23174c2420efb08d3a38856572ddba0))
* remove content model revisions ([5f9fc6d](https://github.com/webiny/webiny-js/commit/5f9fc6d200e40d08f863707a89159ba12586de34))
* remove CRUD logs fields ([04604e8](https://github.com/webiny/webiny-js/commit/04604e83249a10878802fa5369f1c0403c2e51d4))
* remove environment name as part of the model name ([31487f0](https://github.com/webiny/webiny-js/commit/31487f0c7fb3246e47fa7b4452235449299cde0e))
* remove http prefix from handler packages ([1f80774](https://github.com/webiny/webiny-js/commit/1f8077412e0a7fb39f8e577e6217e7049e707f75))
* remove isColdStart flag ([90d90d3](https://github.com/webiny/webiny-js/commit/90d90d37c3c0a99f6b50f9b5e2ab819317797b8c))
* remove isSortable flag ([2189367](https://github.com/webiny/webiny-js/commit/21893673c63bf7e719e2c1fcbf8481d601de16ca))
* remove isSortable flag ([dbbd3b1](https://github.com/webiny/webiny-js/commit/dbbd3b1f4c9c961df6c5c28b93026ad2b1cbdb6a))
* remove localization as an option (make it always enabled) ([90a3e4c](https://github.com/webiny/webiny-js/commit/90a3e4cccbf1ca69f0996ac45391c65ae9ff9914))
* remove old file ([6a4cf4a](https://github.com/webiny/webiny-js/commit/6a4cf4a281ee0959de68a25d531e21c8f5a28643))
* remove unnecessary file ([e145764](https://github.com/webiny/webiny-js/commit/e145764004cfe84a0f5ae6c54bd3e60f9e4e992e))
* remove unnecessary file ([a9caf50](https://github.com/webiny/webiny-js/commit/a9caf50420fadea2105aeeb3c6fdf23d8bf1d498))
* remove unnecessary files ([5cb7e3c](https://github.com/webiny/webiny-js/commit/5cb7e3cf321cfb26e42b4dcb3190457d00bae7dc))
* rename "title" to "name" ([1a5fe75](https://github.com/webiny/webiny-js/commit/1a5fe75bf67d121780af78de0159a6b50b4d22dc))
* rename cmsEnvironment to environment ([263b820](https://github.com/webiny/webiny-js/commit/263b82056fe09c4e13c283d31994c3f6a7d80ab3))
* replace "title" with "name" ([25cfbed](https://github.com/webiny/webiny-js/commit/25cfbedba71ae1b717507d8226146a3dccd31895))
* set default value for "multipleValues" flag ([d395e5e](https://github.com/webiny/webiny-js/commit/d395e5e3d042d14e5b5b79662d157b0b8db4a212))
* simplify code ([4a4eb8f](https://github.com/webiny/webiny-js/commit/4a4eb8f7660ba97c23ca99f327ac8f16dcf01896))
* skip models with no fields during schema generation ([c666e17](https://github.com/webiny/webiny-js/commit/c666e17c7f7064b1727b7f3c3df890a4bd465b54))
* type, _id, and _fieldId cannot be changed once set ([cffcee3](https://github.com/webiny/webiny-js/commit/cffcee3a9adb237d121d8357d40b08a9738c7b4a))
* update context type ([9db8aad](https://github.com/webiny/webiny-js/commit/9db8aadaa595f8fc71df96575c1c9b57087ce7b8))
* update dependencies ([c74fb03](https://github.com/webiny/webiny-js/commit/c74fb03085e08cf193b9aa8a00b120ac489c3b1e))
* update findOne method parameters ([146a500](https://github.com/webiny/webiny-js/commit/146a500e2ce3a82e7adfd94b644db2bc1556baca))
* use "@webiny/graphql" package instead of "@webiny/graphql" ([d12865e](https://github.com/webiny/webiny-js/commit/d12865ee3b69886c2132ab3a0748a511278572f9))
* use "@webiny/graphql" package instead of "@webiny/graphql" ([19364fb](https://github.com/webiny/webiny-js/commit/19364fb658131849ea1806b41ad1e53b6ef619cd))
* use "context.models.CmsEnvironment.isId" to validate the value ([9d9bc83](https://github.com/webiny/webiny-js/commit/9d9bc8342e97c2f9662c83b23e830fb34d4e5222))
* use "usedFields" instead of the "used" field ([793b697](https://github.com/webiny/webiny-js/commit/793b6978db68b77ffe28deba83e7a2da4ca61b30))
* use "usedFields" instead of the "used" field ([7530ab9](https://github.com/webiny/webiny-js/commit/7530ab9f677366f39e058712e8c5ba45f4ce1ce5))
* use built-in "pipe" instead of importing lodash ([8f9cc70](https://github.com/webiny/webiny-js/commit/8f9cc70c34307dd89a996b7f40806e5c7dee45f1))
* use built-in "pipe" instead of importing lodash ([0051e0a](https://github.com/webiny/webiny-js/commit/0051e0af69e3676dfadb0b60bf44bdf1c2999950))
* use built-in "pipe" instead of importing lodash ([c677a6e](https://github.com/webiny/webiny-js/commit/c677a6ec34a45f807f5fac77d0ea0254cd0ae00c))
* use built-in "pipe" instead of importing lodash ([ad18fb8](https://github.com/webiny/webiny-js/commit/ad18fb8627e45aff7ac752a3ca0917ab8ac4f4d1))
* use built-in "pipe" instead of importing lodash ([5e66c50](https://github.com/webiny/webiny-js/commit/5e66c50967000798a836706f4a34a99cfd2adba2))
* use built-in "pipe" instead of importing lodash ([c2ed2d2](https://github.com/webiny/webiny-js/commit/c2ed2d23761289c17667e41c656a3214a3518c0a))
* use existing plugins instead of custom code ([21e7df5](https://github.com/webiny/webiny-js/commit/21e7df56cc7e64716e386c4c5e02f11bfcecf703))
* use kebab-case instead of camelCase ([06eeca5](https://github.com/webiny/webiny-js/commit/06eeca5129a0a6d836190fb57bb692f4467f58c0))
* use the correct base model for both data and search models ([ea966a7](https://github.com/webiny/webiny-js/commit/ea966a743d29c2955d4d12cf87c3ec5eb7e965c1))
### Features
* ✨ Update `dateTimeWithTimezone` type from `date` to `string` ([e7fa60b](https://github.com/webiny/webiny-js/commit/e7fa60b8f7c60db90e2d5eed6bb828eeadeb0c9b))
* ✨ Wrap resolvers in `hasScope` ([7309a73](https://github.com/webiny/webiny-js/commit/7309a73c863d1bd3dbc41ea2b73864010407414d))
* add "CmsContentModelGroup" model ([0b9c5b3](https://github.com/webiny/webiny-js/commit/0b9c5b34d8063d190141f2a18dc07b0d6fe776c8))
* add "group" field ([518454b](https://github.com/webiny/webiny-js/commit/518454b323bce2a6e59abcc6a840fc3f6484c49f))
* add "Meta" field ([0d79a6f](https://github.com/webiny/webiny-js/commit/0d79a6fac093c6a598f9a170b7458cab96ac45ba))
* add "Meta" field ([3a46134](https://github.com/webiny/webiny-js/commit/3a46134627dab41d9ff85f4358a42ab9f62fda61))
* add "preview" schema generation ([609ecfd](https://github.com/webiny/webiny-js/commit/609ecfd7ae504f709e450969143b5cd02f8cb3dd))
* add "titleFieldId" field ([fdcb0ed](https://github.com/webiny/webiny-js/commit/fdcb0eddd6afdb5034e38dc440de72889381ecaa))
* add "titleFieldId" field ([61d6c83](https://github.com/webiny/webiny-js/commit/61d6c832ce7cd3a2e0773c7ab67001778b93d4b4))
* add "url" prop ([db1740e](https://github.com/webiny/webiny-js/commit/db1740e55705c7e9cba5b2e9b9d8fbceef1bde76))
* add CmsSearchInput type ([1301645](https://github.com/webiny/webiny-js/commit/1301645f2cd694633364fde40d6bf24dd0712352))
* add content model groups to the schema ([83b077b](https://github.com/webiny/webiny-js/commit/83b077b815c24960c1176fa1d852a6e2dc5ef7ff))
* add Data Manager to handle indexes and data copying ([a1e8cba](https://github.com/webiny/webiny-js/commit/a1e8cbae96b8c42943f15103ab32eeeb61df6d78))
* add Environments module ([7bd417a](https://github.com/webiny/webiny-js/commit/7bd417aada93596cb875f8bca6e47e33b45548f3))
* add meta field ([dc60a8d](https://github.com/webiny/webiny-js/commit/dc60a8dfff9341c74d41ee58cb5a0579859e61b9))
* add pluralizedName and pluralizedModelId fields ([de1cabe](https://github.com/webiny/webiny-js/commit/de1cabe68da21f90cbba37a008415347955d611d))
* add publish and unpublish mutations ([852ecf0](https://github.com/webiny/webiny-js/commit/852ecf0f8653cabc3f5e53c62bc09f91531dd7a4))
* add publish and unpublish resolvers ([62f2168](https://github.com/webiny/webiny-js/commit/62f2168ccf9a70bd6901db721240441ef1c115fc))
* allow getting entries directly by id and skip latest version filtering ([f67aa9b](https://github.com/webiny/webiny-js/commit/f67aa9b9a4895b6ffcfd859870bd2148cc027cb8))
* create content model resolvers ([d0f1ea0](https://github.com/webiny/webiny-js/commit/d0f1ea05f6a541f6eca9c2190788283b4ea58f27))
* create installation process for the Headless CMS app ([#825](https://github.com/webiny/webiny-js/issues/825)) ([5ee8165](https://github.com/webiny/webiny-js/commit/5ee8165aa6bbad82c0663a86bcb5fb60db2319bf)), closes [#808](https://github.com/webiny/webiny-js/issues/808)
* create long-text field ([7b73764](https://github.com/webiny/webiny-js/commit/7b73764c4b53a554921a85b902df1df81f97940d)), closes [#860](https://github.com/webiny/webiny-js/issues/860)
* create rich-text field ([ee1dadb](https://github.com/webiny/webiny-js/commit/ee1dadbcece0929a4fe259f9051554249012c043)), closes [#811](https://github.com/webiny/webiny-js/issues/811)
* migrate to cursor pagination ([b5b097f](https://github.com/webiny/webiny-js/commit/b5b097f362201c7a2a424ccaa2c48403544f2493))
* schema refresh ([f505256](https://github.com/webiny/webiny-js/commit/f505256d71e19effa8415edbf1aa419e9fa4780c))
* store entry data and search data in shared data collection ([9af7668](https://github.com/webiny/webiny-js/commit/9af76686df4760783526296496300ada52dcebe0))
## [2.1.1](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-12-04)
**Note:** Version bump only for package @webiny/api-cms
## [2.1.1-next.0](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-12-04)
**Note:** Version bump only for package @webiny/api-cms
# [2.1.0](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-11-20)
### Features
* add support for DB drivers ([#623](https://github.com/webiny/webiny-js/issues/623)) ([82a6d66](https://github.com/webiny/webiny-js/commit/82a6d66d5ad96e4da13c035d2524c03bd50a7dff))
# [2.1.0-next.0](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-11-18)
### Features
* add support for DB drivers and improve components. ([3ce1908](https://github.com/webiny/webiny-js/commit/3ce1908))
## [2.0.1](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-11-08)
**Note:** Version bump only for package @webiny/api-cms
# [2.0.0](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-29)
**Note:** Version bump only for package @webiny/api-cms
## [0.1.11](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-29)
### Bug Fixes
* remove getDatabase from context and remove "mongodb" key from config. ([191e419](https://github.com/webiny/webiny-js/commit/191e419))
## [0.1.10](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-23)
**Note:** Version bump only for package @webiny/api-cms
## [0.1.9](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-21)
### Bug Fixes
* update package versions ([878baa5](https://github.com/webiny/webiny-js/commit/878baa51dd747e3a2962da89cbb68ea15779a04f))
## [0.1.8](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-21)
**Note:** Version bump only for package @webiny/api-cms
## [0.1.7](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-17)
**Note:** Version bump only for package @webiny/api-cms
## [0.1.6](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-15)
**Note:** Version bump only for package @webiny/api-cms
## [0.1.5](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-14)
### Bug Fixes
* synced dependencies across all packages ([#567](https://github.com/webiny/webiny-js/issues/567)) ([38eda54](https://github.com/webiny/webiny-js/commit/38eda547bead6e8a2c46875730bbcd8f1227e475))
## [0.1.4](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-10)
**Note:** Version bump only for package @webiny/api-cms
## [0.1.3](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-08)
**Note:** Version bump only for package @webiny/api-cms
## [0.1.2](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-06)
### Bug Fixes
* update dependencies. ([a399620](https://github.com/webiny/webiny-js/commit/a399620))
## [0.1.1](https://github.com/webiny/webiny-js/compare/@webiny/[email protected]...@webiny/[email protected]) (2019-10-06)
**Note:** Version bump only for package @webiny/api-cms
| 49.029917 | 271 | 0.769161 | yue_Hant | 0.388402 |
762dfb2536dd8810c41d67ce31e7e6c010b2fc27 | 1,161 | md | Markdown | release-notes/2013/oct-17-team-services.md | sevensc/vsts-docs | 60288eaa8e29ab6ba9842b09113977062b1a2af0 | ["CC-BY-4.0", "MIT"] | 2 | 2019-01-21T02:11:58.000Z | 2021-04-18T12:54:35.000Z | release-notes/2013/oct-17-team-services.md | sevensc/vsts-docs | 60288eaa8e29ab6ba9842b09113977062b1a2af0 | ["CC-BY-4.0", "MIT"] | null | null | null | release-notes/2013/oct-17-team-services.md | sevensc/vsts-docs | 60288eaa8e29ab6ba9842b09113977062b1a2af0 | ["CC-BY-4.0", "MIT"] | 1 | 2020-07-16T02:33:22.000Z | 2020-07-16T02:33:22.000Z |
---
title: Team Foundation Service updates - Oct 17
description: VSTS release notes for October 17 2013
ms.prod: devops
ms.technology: vsts-release-notes
ms.manager: douge
ms.assetid: c93f5236-1c77-4d41-8200-29bfb66edd7c
ms.date: 06/01/2016
ms.author: douge
author: yukom
---
# Team Foundation Service updates - Oct 17
## Build Image Update
Today we interrupt our normally scheduled 3 week update cadence with a bonus update to the service.
It’s a big day for Microsoft platforms with the launch of Windows 8.1, Windows Server 2012 R2 and Visual Studio 2013. As a result we have refreshed our build machine image. It now includes Visual Studio 2013 RTM.
In the next sprint deployment which is upcoming shortly, we will be updating the operating system on the build machine image to Windows Server 2012 R2. This will enable folks to build Windows 8.1 apps using the cloud build service.
As always let us know how we're doing on [User Voice](https://visualstudio.uservoice.com/forums/330519-vso), the [MSDN Forums](http://social.msdn.microsoft.com/Forums/en-US/TFService/threads), and [Twitter](https://twitter.com/search?q=%23tfservice).
Thanks,
Jamie Cool
| 43 | 250 | 0.782946 | eng_Latn | 0.930596 |
762e8e0f80836a2b3cd23e02d4202c7c8329bc8b | 159 | md | Markdown | Driver_BME280/README.md | Silmaen/ArduinoLibraries | c702234b0e11978977cf8f6fc7454452d67afa33 | ["BSD-3-Clause"] | null | null | null | Driver_BME280/README.md | Silmaen/ArduinoLibraries | c702234b0e11978977cf8f6fc7454452d67afa33 | ["BSD-3-Clause"] | null | null | null | Driver_BME280/README.md | Silmaen/ArduinoLibraries | c702234b0e11978977cf8f6fc7454452d67afa33 | ["BSD-3-Clause"] | null | null | null |
# Arduino library for BME280 Sensor
The BME280 differs from the BMP280 in that it also includes a humidity sensor.
The datasheet is available [here](doc/bst-bme280-ds002.pdf).
| 22.714286 | 71 | 0.779874 | eng_Latn | 0.994231 |
762f33de8b8399e19c11b729c404dd657b9cb9cc | 15,301 | md | Markdown | docs-archive-a/2014/relational-databases/replication/administration/best-practices-for-replication-administration.md | v-alji/sql-docs-archive-pr.pt-br | 2791ff90ec3525b2542728436f5e9cece0a24168 | ["CC-BY-4.0", "MIT"] | null | null | null | docs-archive-a/2014/relational-databases/replication/administration/best-practices-for-replication-administration.md | v-alji/sql-docs-archive-pr.pt-br | 2791ff90ec3525b2542728436f5e9cece0a24168 | ["CC-BY-4.0", "MIT"] | 1 | 2021-11-25T02:18:31.000Z | 2021-11-25T02:26:28.000Z | docs-archive-a/2014/relational-databases/replication/administration/best-practices-for-replication-administration.md | v-alji/sql-docs-archive-pr.pt-br | 2791ff90ec3525b2542728436f5e9cece0a24168 | ["CC-BY-4.0", "MIT"] | 2 | 2021-09-29T08:52:22.000Z | 2021-10-13T09:16:56.000Z |
---
title: Best Practices for Replication Administration | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: replication
ms.topic: conceptual
helpviewer_keywords:
- administering replication, best practices
- replication [SQL Server], administering
ms.assetid: 850e8a87-b34c-4934-afb5-a1104f118ba8
author: MashaMSFT
ms.author: mathoma
ms.openlocfilehash: 16652561fa873c5d8a33d8826372837744543e70
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87683505"
---
# <a name="best-practices-for-replication-administration"></a>Práticas recomendadas para administração de replicação
Depois de configurar a replicação, é importante entender como administrar uma topologia de replicação. Este tópico fornece diretrizes básicas de práticas recomendadas em várias áreas com links para mais informações para cada área. Além das seguintes diretrizes de práticas recomendadas apresentadas neste tópico, considere a leitura do tópico de perguntas frequentes para se familiarizar com as questões e problemas mais comuns: [Perguntas frequentes para os administradores de replicação](frequently-asked-questions-for-replication-administrators.md).
É útil dividir as diretrizes de prática recomendada em duas áreas:
- As informações seguintes cobrem as práticas recomendadas que devem ser implementadas para todas as topologias de replicação:
- Desenvolva e teste uma estratégia de backup e restauração.
- Faça o script da topologia de replicação.
- Crie limites e alertas
- Monitore a topologia de replicação.
- Estabeleça linhas de base de desempenho e ajuste a replicação, se necessário.
- As informações a seguir cobrem as práticas recomendadas que devem ser consideradas, mas podem não ser requeridas para a sua topologia:
- Valide os dados periodicamente.
- Ajuste parâmetros de agente por perfis.
- Ajuste a publicação e os períodos de retenção da distribuição.
- Entenda como alterar as propriedades de artigo e de publicação se os requisitos do aplicativo alterarem.
- Entenda como fazer alterações de esquema altera se os requisitos do aplicativo alterarem.
## <a name="develop-and-test-a-backup-and-restore-strategy"></a>Develop and test a backup and restore strategy
All databases should be backed up regularly, and the ability to restore those backups should be tested periodically; replicated databases are no different. The following databases should be backed up regularly:
- The publication database
- The distribution database
- Subscription databases
- The **msdb** database and the **master** database at the Publisher, the Distributor, and all Subscribers
Replicated databases require special attention with respect to backing up and restoring data. For more information, see [Back Up and Restore Replicated Databases](back-up-and-restore-replicated-databases.md).
## <a name="script-the-replication-topology"></a>Script the replication topology
All replication components in a topology should be scripted as part of a disaster recovery plan, and the scripts can also be used to automate repetitive tasks. A script contains the [!INCLUDE[tsql](../../../includes/tsql-md.md)] system stored procedures necessary to implement the scripted replication component or components, such as a publication or subscription. Scripts can be created in a wizard (such as the New Publication Wizard) or in [!INCLUDE[msCoName](../../../includes/msconame-md.md)] [!INCLUDE[ssManStudioFull](../../../includes/ssmanstudiofull-md.md)] after you create a component. You can view, modify, and run the script using [!INCLUDE[ssManStudioFull](../../../includes/ssmanstudiofull-md.md)] or **sqlcmd**. Scripts can be stored with backup files so that they can be used if a replication topology must be reconfigured. For more information, see [Scripting Replication](../scripting-replication.md).
A component should be scripted again if any property change is made. If you use custom stored procedures with transactional replication, a copy of each procedure should be stored with the scripts; the copy should be updated if the procedure changes (procedures typically change because of schema changes or changed application requirements). For more information about custom procedures, see [Specify How Changes Are Propagated for Transactional Articles](../transactional/transactional-articles-specify-how-changes-are-propagated.md).
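As a sketch of one script worth regenerating after property changes, the following T-SQL uses the documented `sp_scriptpublicationcustomprocs` procedure to emit the custom INSERT/UPDATE/DELETE procedures of a transactional publication; the database and publication names are placeholders, not names from this topic.

```sql
-- Run at the Publisher, in the publication database.
-- Emits CREATE PROCEDURE statements for the custom insert/update/delete
-- procedures of a transactional publication, so they can be stored
-- alongside the topology scripts and backups.
USE AdventureWorks;  -- hypothetical publication database
GO
EXEC sp_scriptpublicationcustomprocs
    @publication = N'AdvWorksProductTrans';  -- hypothetical publication name
GO
```

Save the generated output with the rest of your replication scripts and refresh it whenever an article's procedures change.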
## <a name="establish-performance-baselines-and-tune-replication-if-necessary"></a>Establish performance baselines and tune replication if necessary
Before you configure replication, it is recommended that you become familiar with the factors that affect replication performance:
- Server and network hardware
- Database design
- Distributor configuration
- Publication design and options
- Filter design and use
- Subscription options
- Snapshot options
- Agent parameters
- Maintenance
After replication is configured, it is recommended that you develop a performance baseline, which lets you determine how replication behaves with a workload that is typical for your applications and topology. Use Replication Monitor and System Monitor to determine typical numbers for the following five dimensions of replication performance:
- Latency: the amount of time it takes for a data change to be propagated between nodes in a replication topology.
- Throughput: the amount of replication activity (measured in commands delivered over a period of time) that a system can sustain over time.
- Concurrency: the number of replication processes that can operate on a system simultaneously.
- Duration of synchronization: how long it takes a given synchronization to complete.
- Resource consumption: hardware and network resources used as a result of replication processing.
Latency and throughput are most relevant to transactional replication, because systems built on transactional replication typically require low latency and high throughput. Concurrency and duration of synchronization are most relevant to merge replication, because systems built on merge replication often have a large number of Subscribers, and a Publisher can have a significant number of concurrent synchronizations with those Subscribers.
After you have established baseline numbers, set thresholds in Replication Monitor. For more information, see [Set Thresholds and Warnings in Replication Monitor](../monitor/set-thresholds-and-warnings-in-replication-monitor.md) and [Use Alerts for Replication Agent Events](../agents/use-alerts-for-replication-agent-events.md). If you encounter a performance problem, it is recommended that you read the suggestions in the performance topics listed earlier and apply changes in the areas that affect the problems you found.
## <a name="create-thresholds-and-alerts"></a>Create thresholds and alerts
Replication Monitor lets you define a number of thresholds related to status and performance. It is recommended that you set the thresholds appropriate for your topology; if a threshold is reached, a warning is displayed and, optionally, an alert can be sent to an email account, a pager, or another device. For more information, see [Set Thresholds and Warnings in Replication Monitor](../monitor/set-thresholds-and-warnings-in-replication-monitor.md).
In addition to the alerts that can be associated with threshold monitoring, replication provides a number of predefined alerts that respond to replication agent actions. An administrator can use these alerts to stay informed about the state of the replication topology. It is recommended that you read the topic describing the alerts and use any that fit your administrative needs (you can also create additional alerts if necessary). For more information, see [Use Alerts for Replication Agent Events](../agents/use-alerts-for-replication-agent-events.md).
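As an illustration of wiring a predefined alert to a notification, the following T-SQL attaches an operator to the "Replication: agent failure" alert using the documented `sp_add_notification` procedure; the operator name is a placeholder for one that exists on your server.

```sql
-- Run at the Distributor, in msdb. Sends an email to the named operator
-- whenever the predefined "Replication: agent failure" alert fires.
EXEC msdb.dbo.sp_add_notification
    @alert_name          = N'Replication: agent failure',
    @operator_name       = N'ReplicationAdmin',  -- hypothetical operator
    @notification_method = 1;                    -- 1 = email
```

The same pattern applies to the other predefined replication alerts, such as agent success or subscription expiration.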
## <a name="monitor-the-replication-topology"></a>Monitor the replication topology
After the replication topology is running and thresholds and alerts have been configured, it is recommended that you monitor replication regularly. Monitoring a replication topology is an important aspect of deploying replication. Because replication activity is distributed, it is essential to track activity and status across all the computers involved in replication. The following tools can be used to monitor replication:
- Replication Monitor is the most important tool for monitoring replication, allowing you to monitor the overall health of a replication topology. For more information, see [Monitoring Replication](../monitoring-replication.md).
- [!INCLUDE[tsql](../../../includes/tsql-md.md)] and Replication Management Objects (RMO) provide interfaces for monitoring replication. For more information, see [Monitoring Replication](../monitoring-replication.md).
- System Monitor can also be useful for monitoring replication performance. For more information, see [Monitoring Replication with System Monitor](../monitor/monitoring-replication-with-system-monitor.md).
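For programmatic monitoring, the documented `sp_replmonitorhelppublication` procedure returns the same status and latency information that Replication Monitor displays; the Publisher name below is a placeholder.

```sql
-- Run at the Distributor, in the distribution database.
-- Returns status, latency, and performance data for publications
-- that use this Distributor.
EXEC sp_replmonitorhelppublication
    @publisher = N'MyPublisherServer';  -- hypothetical Publisher instance
```

The result set can be collected on a schedule to build the performance baseline described earlier.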
## <a name="validate-data-periodically"></a>Validate data periodically
Validation is not required by replication, but it is recommended that you run validation periodically for transactional replication and merge replication. Validation lets you verify that the data at the Subscriber matches the data at the Publisher. Successful validation indicates that, at that point in time, all changes from the Publisher have been replicated to the Subscriber (and from the Subscriber to the Publisher, if updates at the Subscriber are supported) and that the two databases are in sync.
It is recommended that validation be run according to the backup schedule for the publication database. For example, if the publication database is fully backed up once a week, validation could run once a week after the backup completes. For more information, see [Validate Replicated Data](../validate-data-at-the-subscriber.md).
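A minimal sketch of scheduling validation for a transactional publication with the documented `sp_publication_validation` procedure follows; the publication name is a placeholder.

```sql
-- Run at the Publisher, in the publication database.
-- Marks all articles in the publication for validation on the next
-- Distribution Agent run.
EXEC sp_publication_validation
    @publication   = N'AdvWorksProductTrans',  -- hypothetical name
    @rowcount_only = 2,   -- 2 = row count and binary checksum
    @full_or_fast  = 2;   -- 2 = fast validation, falling back if needed
```

Running this from a SQL Server Agent job that follows the weekly backup job keeps validation aligned with the backup schedule.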
## <a name="use-agent-profiles-to-change-agent-parameters-if-necessary"></a>Use agent profiles to change agent parameters if necessary
Agent profiles provide a convenient method for setting replication agent parameters. Parameters can also be specified on the agent command line, but it is typically more appropriate to use a predefined agent profile, or to create a new profile if a parameter value must change. For example, if you are using merge replication and a Subscriber switches from a broadband connection to a dial-up connection, consider using the **slow link** profile for the Merge Agent; this profile uses a set of parameters better suited to slower communication links. For more information, see [Replication Agent Profiles](../agents/replication-agent-profiles.md).
## <a name="adjust-publication-and-distribution-retention-periods-if-necessary"></a>Adjust publication and distribution retention periods if necessary
Transactional replication and merge replication use retention periods to determine, respectively, how long transactions are stored in the distribution database and how frequently a subscription must be synchronized. It is recommended that you start with the default values, but monitor the topology to determine whether the settings require adjustment. For example, in the case of merge replication, the publication retention period (14 days by default) determines how long metadata is stored in system tables. If subscriptions are always synchronized every five days, consider adjusting the setting to a lower number, which reduces metadata and can improve performance. For more information, see [Subscription Expiration and Deactivation](../subscription-expiration-and-deactivation.md).
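The retention adjustment described above can be made with the documented `sp_changemergepublication` procedure; the publication name and the new value are placeholders for your own.

```sql
-- Run at the Publisher, in the publication database.
-- Lowers the merge publication retention period from the default
-- 14 days to 7 days.
EXEC sp_changemergepublication
    @publication = N'AdvWorksSalesOrdersMerge',  -- hypothetical name
    @property    = N'retention',
    @value       = N'7';
```

Verify afterward that all Subscribers still synchronize within the new window, or their subscriptions may expire.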
## <a name="understand-how-to-modify-publications-if-application-requirements-change"></a>Understand how to modify publications if application requirements change
After you create a publication, you might need to add or drop articles, or change publication and article properties. Most changes are allowed after a publication is created, but in some cases it is necessary to generate a new snapshot for the publication and/or reinitialize subscriptions to the publication. For more information, see [Change Publication and Article Properties](../publish/change-publication-and-article-properties.md) and [Add Articles to and Drop Articles from Existing Publications](../publish/add-articles-to-and-drop-articles-from-existing-publications.md).
## <a name="understand-how-to-make-schema-changes-if-application-requirements-change"></a>Understand how to make schema changes if application requirements change
In many cases, schema changes are required after an application is in production. In a replication topology, these changes must often be propagated to all Subscribers. Replication supports a wide range of schema changes to published objects. When you make any of the following schema changes on the appropriate published object at a [!INCLUDE[msCoName](../../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] Publisher, that change is propagated by default to all [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] Subscribers:
- ALTER TABLE
- ALTER VIEW
- ALTER PROCEDURE
- ALTER FUNCTION
- ALTER TRIGGER
For more information, see [Make Schema Changes on Publication Databases](../publish/make-schema-changes-on-publication-databases.md).
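As a simple illustration of a replicated schema change, the following ALTER TABLE statement, issued at the Publisher against a published table, is propagated to Subscribers by default; the table and column names are placeholders.

```sql
-- Run at the Publisher, in the publication database.
-- Adding a nullable column to a published table is replicated to
-- Subscribers by default.
ALTER TABLE dbo.Product
    ADD DiscontinuedDate datetime NULL;  -- hypothetical schema change
```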
## <a name="see-also"></a>See Also
[Frequently Asked Questions for Replication Administrators](frequently-asked-questions-for-replication-administrators.md)
| 100.006536 | 1,078 | 0.789752 | por_Latn | 0.999833 |
762f581718f380436e6064232ec5201d98c5bd05 | 7,022 | md | Markdown | articles/active-directory/conditional-access/concept-baseline-protection.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | ["CC-BY-4.0", "MIT"] | null | null | null | articles/active-directory/conditional-access/concept-baseline-protection.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | ["CC-BY-4.0", "MIT"] | null | null | null | articles/active-directory/conditional-access/concept-baseline-protection.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: Conditional Access policies - Azure Active Directory
description: Baseline Conditional Access policies that protect organizations against common attacks
services: active-directory
ms.service: active-directory
ms.subservice: conditional-access
ms.topic: conceptual
ms.date: 12/18/2019
ms.author: joflore
author: MicrosoftGuyJFlo
manager: daveba
ms.reviewer: rogoya
ms.collection: M365-identity-device-management
ms.openlocfilehash: 55de5a5c604273225a85e49ca682980f83a951d2
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/27/2020
ms.locfileid: "75767576"
---
# <a name="what-are-baseline-policies"></a>What are baseline policies?

Baseline policies are a set of predefined policies that help protect organizations against many common attacks. These common attacks can include password spray, replay, and phishing. Baseline policies are available in all editions of Azure AD. Microsoft is making these baseline protection policies available to everyone because identity-based attacks have been on the rise over the past few years. The goal of these four policies is to ensure that all organizations have a basic level of security enabled at no extra cost.

To manage custom Conditional Access policies, an Azure AD Premium license is required.

> [!IMPORTANT]
> Baseline policies are being deprecated. For more information, see [What's new in Azure Active Directory?](../fundamentals/whats-new.md#replacement-of-baseline-policies-with-security-defaults)
## <a name="baseline-policies"></a>Baseline policies



There are four baseline policies:

* Require MFA for admins (preview)
* End user protection (preview)
* Block legacy authentication (preview)
* Require MFA for service management (preview)

All four of these policies affect legacy authentication flows such as POP, IMAP, and older Office desktop clients.
### <a name="exclusions"></a>Exclusions

When baseline policies first entered public preview, there was an option to exclude users from the policies. This feature evolved through the preview and was removed in July 2019. Organizations that had already created exclusions could keep them, but new exclusions could no longer be added to the policies.
### <a name="require-mfa-for-admins-preview"></a>Require MFA for admins (preview)

Because of the power and access that administrator accounts have, you should treat them with special care. One common method to improve the protection of privileged accounts is to require a stronger form of account verification when they are used to sign in. In Azure Active Directory, you can get a stronger account verification by requiring administrators to register for and use Azure Multi-Factor Authentication.

Require MFA for admins (preview) is a baseline policy that requires multi-factor authentication (MFA) for the following directory roles, which are considered the most privileged Azure AD roles:

* Global administrator
* SharePoint administrator
* Exchange administrator
* Conditional Access administrator
* Security administrator
* Helpdesk administrator / Password administrator
* Billing administrator
* User administrator

If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md).
### <a name="end-user-protection-preview"></a>End user protection (preview)

Highly privileged administrators are not the only ones targeted by attacks. Bad actors tend to target ordinary users. After gaining access, these bad actors can request access to privileged information on behalf of the original account holder, or download the entire directory and carry out a phishing attack against your whole organization. One common method to improve protection for all users is to require a stronger form of account verification when a risky sign-in is detected.

**End user protection (preview)** is a baseline policy that protects all users in a directory. Enabling this policy requires all users to register for Azure Multi-Factor Authentication within 14 days. Once registered, users are prompted for MFA only during risky sign-in attempts. Compromised user accounts are blocked until their password is reset and the risk is dismissed.

> [!NOTE]
> Any users previously flagged for risk are blocked at policy activation until their password is reset and the risk is dismissed.
### <a name="block-legacy-authentication-preview"></a>Block legacy authentication (preview)

Legacy authentication protocols (such as IMAP, SMTP, and POP3) are protocols normally used by older mail clients to authenticate. Legacy protocols do not support multi-factor authentication. Even if you have a policy that requires multi-factor authentication for your directory, a bad actor can authenticate by using one of these legacy protocols and bypass multi-factor authentication.

The best way to protect your account from malicious authentication requests made by legacy protocols is to block them.

The **Block legacy authentication (preview)** baseline policy blocks authentication requests made with legacy protocols. Modern authentication must be used for all users to sign in. Used together with the other baseline policies, requests coming from legacy protocols are blocked. In addition, all users are required to perform MFA whenever it is needed. This policy does not block Exchange ActiveSync.
### <a name="require-mfa-for-service-management-preview"></a>Require MFA for service management (preview)

Organizations use a variety of Azure services and manage them from Azure Resource Manager-based tools such as:

* Azure portal
* Azure PowerShell
* Azure CLI

Using any of these tools to perform resource management is a highly privileged action. These tools can alter subscription-wide configurations, such as service settings and subscription billing.

To protect privileged actions, the **Require MFA for service management (preview)** policy requires multi-factor authentication for any user accessing the Azure portal, Azure PowerShell, or the Azure CLI.
## <a name="next-steps"></a>Next steps

For more information, see:

* [Enable security defaults](../fundamentals/concept-fundamentals-security-defaults.md)
* [Common Conditional Access policies](concept-conditional-access-policy-common.md)
* [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md)
---
title: The USN destroyer Converse arrives
tags:
- Oct 1922
---
The USN destroyer Converse arrives in Miami.
Newspapers: **Miami Morning News or The Miami Herald**
Page: **1**, Section: **N/A**
---
layout: base
title: 'Statistics of nummod in UD_Turkish_German-SAGT'
udver: '2'
---
## Treebank Statistics: UD_Turkish_German-SAGT: Relations: `nummod`
This relation is universal.
239 nodes (1%) are attached to their parents as `nummod`.
220 instances of `nummod` (92%) are right-to-left (child precedes parent).
Average distance between parent and child is 1.28870292887029.
The following 10 pairs of parts of speech are connected with `nummod`: <tt><a href="qtd_sagt-pos-NOUN.html">NOUN</a></tt>-<tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt> (212; 89% instances), <tt><a href="qtd_sagt-pos-PROPN.html">PROPN</a></tt>-<tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt> (12; 5% instances), <tt><a href="qtd_sagt-pos-ADJ.html">ADJ</a></tt>-<tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt> (5; 2% instances), <tt><a href="qtd_sagt-pos-NOUN.html">NOUN</a></tt>-<tt><a href="qtd_sagt-pos-NOUN.html">NOUN</a></tt> (3; 1% instances), <tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt>-<tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt> (2; 1% instances), <tt><a href="qtd_sagt-pos-ADV.html">ADV</a></tt>-<tt><a href="qtd_sagt-pos-NOUN.html">NOUN</a></tt> (1; 0% instances), <tt><a href="qtd_sagt-pos-ADV.html">ADV</a></tt>-<tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt> (1; 0% instances), <tt><a href="qtd_sagt-pos-PRON.html">PRON</a></tt>-<tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt> (1; 0% instances), <tt><a href="qtd_sagt-pos-VERB.html">VERB</a></tt>-<tt><a href="qtd_sagt-pos-NOUN.html">NOUN</a></tt> (1; 0% instances), <tt><a href="qtd_sagt-pos-VERB.html">VERB</a></tt>-<tt><a href="qtd_sagt-pos-NUM.html">NUM</a></tt> (1; 0% instances).
~~~ conllu
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 2 bgColor:blue
# visual-style 2 fgColor:white
# visual-style 2 1 nummod color:blue
1 İki iki NUM _ NumType=Card 2 nummod _ LangID=TR
2 ta-- tane NOUN _ Case=Nom|Number=Sing 4 reparandum _ LangID=TR|CorrectForm=tane
3 iki iki NUM _ NumType=Card 4 nummod _ LangID=TR
4 tane tane NOUN _ Case=Nom|Number=Sing 5 nmod _ LangID=TR
5 Lektüre Lektüre NOUN _ Case=Acc|Gender=Fem|Number=Sing 6 obj _ LangID=DE
6 okumamız oku VERB _ Case=Nom|Number=Sing|Number[psor]=Plur|Person[psor]=1|VerbForm=Vnoun 0 root _ LangID=TR
7 lazım lazım ADJ _ _ 6 csubj _ LangID=TR
8 dı i AUX _ Aspect=Perf|Evident=Fh|Mood=Ind|Number=Sing|Person=3|Tense=Past 6 cop _ LangID=TR
9 Almanca'da Almanca PROPN _ Case=Loc|Number=Sing 6 obl _ LangID=TR|SpaceAfter=No
10 . . PUNCT _ _ 6 punct _ LangID=OTHER
~~~
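The relation counts reported above can be reproduced by scanning the DEPREL column (the eighth of the ten tab-separated CoNLL-U fields) of each token line. A minimal sketch, run against a tab-separated copy of the first tokens of the example above — the helper function is illustrative, not part of the UD tooling:

```python
# Count occurrences of a dependency relation in CoNLL-U text.
# Token lines have 10 tab-separated fields; DEPREL is field 8 (index 7).
def count_deprel(conllu: str, deprel: str) -> int:
    count = 0
    for line in conllu.splitlines():
        if not line or line.startswith("#"):
            continue  # skip sentence-level comments and blank separators
        fields = line.split("\t")
        if len(fields) == 10 and fields[7] == deprel:
            count += 1
    return count

example = (
    "1\tİki\tiki\tNUM\t_\tNumType=Card\t2\tnummod\t_\tLangID=TR\n"
    "2\ttane\ttane\tNOUN\t_\tCase=Nom|Number=Sing\t4\treparandum\t_\tLangID=TR\n"
    "3\tiki\tiki\tNUM\t_\tNumType=Card\t4\tnummod\t_\tLangID=TR\n"
)

print(count_deprel(example, "nummod"))  # → 2
```

The same scan over the full treebank yields the totals and the left/right attachment direction reported in the statistics above.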
~~~ conllu
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 2 bgColor:blue
# visual-style 2 fgColor:white
# visual-style 2 1 nummod color:blue
1 22 22 NUM _ _ 2 nummod _ LangID=DE
2 Februar Februar PROPN _ Case=Dat|Gender=Masc|Number=Sing 8 obl _ LangID=DE
3 nein nein INTJ _ _ 8 discourse _ LangID=DE
4 28 28 NUM _ _ 5 nummod _ LangID=DE
5 Februar Februar PROPN _ Case=Dat|Gender=Masc|Number=Sing 8 obl _ LangID=DE
6 mesleğim meslek NOUN _ Case=Nom|Number=Sing|Number[psor]=Sing|Person[psor]=1 8 obl _ LangID=TR
7 ile ile ADP _ _ 6 case _ LangID=TR
8 bitiyorum bit VERB _ Aspect=Prog|Evident=Fh|Mood=Ind|Number=Sing|Person=1|Tense=Pres 0 root _ LangID=TR|SpaceAfter=No
9 . . PUNCT _ _ 8 punct _ LangID=OTHER
~~~
~~~ conllu
# visual-style 5 bgColor:blue
# visual-style 5 fgColor:white
# visual-style 6 bgColor:blue
# visual-style 6 fgColor:white
# visual-style 6 5 nummod color:blue
1 Ben ben PRON _ Case=Nom|Number=Sing 3 nsubj _ LangID=TR
2 de de ADV _ _ 1 advmod:emph _ LangID=TR
3 hesapladım hesapla VERB _ Aspect=Perf|Evident=Fh|Mood=Ind|Number=Sing|Person=1|Tense=Past 0 root _ LangID=TR
4 yirmi yirmi NUM _ NumType=Card 6 nummod _ LangID=TR
5 bir bir NUM _ NumType=Card 6 nummod _ LangID=TR
6 buçuk buçuk ADJ _ _ 9 advcl _ LangID=TR
7 oder oder CCONJ _ _ 8 cc _ LangID=DE
8 einundzwanzigle einundzwanzig NOUN _ Case=Ins|Number=Sing 6 conj _ CSPoint=einundzwanzig§le|DeCase=Dat|DeGender=Fem|LangID=MIXED
9 biteceğim bit VERB _ Aspect=Perf|Case=Nom|Evident=Fh|Mood=Ind|Number=Sing|Number[psor]=Sing|Person[psor]=1|Tense=Fut 3 ccomp _ LangID=TR
10 zaten zaten ADV _ _ 9 advmod _ LangID=TR
11 Allah Allah PROPN _ Case=Nom|Number=Sing 13 nsubj _ LangID=TR
12 izin izin NOUN _ Case=Nom|Number=Sing 13 obj _ LangID=TR
13 verirse ver VERB _ Aspect=Imp|Evident=Fh|Mood=Cnd|Number=Sing|Person=3|Tense=Pres 3 parataxis _ LangID=TR
14 mesleğimle meslek NOUN _ Case=Ins|Number=Sing|Number[psor]=Sing|Person[psor]=1 13 obl _ LangID=TR|SpaceAfter=No
15 . . PUNCT _ _ 3 punct _ LangID=OTHER
~~~
---
title: Azure Data Lake Store Performance Tuning Guidelines | Microsoft Docs
description: Azure Data Lake Store Performance Tuning Guidelines
services: data-lake-store
documentationcenter: ''
author: stewu
manager: amitkul
editor: cgronlun
ms.assetid: ebde7b9f-2e51-4d43-b7ab-566417221335
ms.service: data-lake-store
ms.devlang: na
ms.topic: article
ms.date: 06/30/2017
ms.author: stewu
---
# Tuning Azure Data Lake Store for performance
Data Lake Store supports high throughput for I/O-intensive analytics and data movement. In Azure Data Lake Store, using all available throughput – the amount of data that can be read or written per second – is important to get the best performance. This is achieved by performing as many reads and writes in parallel as possible.

Azure Data Lake Store can scale to provide the necessary throughput for all analytics scenarios. By default, an Azure Data Lake Store account automatically provides enough throughput to meet the needs of a broad category of use cases. For the cases where customers run into the default limit, the ADLS account can be configured to provide more throughput by contacting Microsoft support.
## Data ingestion
When ingesting data from a source system to ADLS, it is important to consider that the source hardware, source network hardware, and network connectivity to ADLS can be the bottleneck.

It is important to ensure that the data movement is not affected by these factors.
### Source Hardware
Whether you are using on-premises machines or VMs in Azure, you should carefully select the appropriate hardware. For source disk hardware, prefer SSDs to HDDs and pick disk hardware with faster spindles. For source network hardware, use the fastest NICs possible. On Azure, we recommend Azure D14 VMs, which have appropriately powerful disk and networking hardware.
### Network Connectivity to Azure Data Lake Store
The network connectivity between your source data and Azure Data Lake Store can sometimes be the bottleneck. When your source data is on-premises, consider using a dedicated link with [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/). If your source data is in Azure, performance is best when the data is in the same Azure region as the Data Lake Store.
### Configure Data Ingestion tools for maximum parallelization
Once you have addressed the source hardware and network connectivity bottlenecks above, you are ready to configure your ingestion tools. The following table summarizes the key settings for several popular ingestion tools and provides in-depth performance tuning articles for them. To learn more about which tool to use for your scenario, visit this [article](https://docs.microsoft.com/azure/data-lake-store/data-lake-store-data-scenarios).
| Tool | Settings | More Details |
|--------------------|------------------------------------------------------|------------------------------|
| Powershell | PerFileThreadCount, ConcurrentFileCount | [Link](https://docs.microsoft.com/azure/data-lake-store/data-lake-store-get-started-powershell#performance-guidance-while-using-powershell) |
| AdlCopy | Azure Data Lake Analytics units | [Link](https://docs.microsoft.com/azure/data-lake-store/data-lake-store-copy-data-azure-storage-blob#performance-considerations-for-using-adlcopy) |
| DistCp | -m (mapper) | [Link](https://docs.microsoft.com/azure/data-lake-store/data-lake-store-copy-data-wasb-distcp#performance-considerations-while-using-distcp) |
| Azure Data Factory| parallelCopies | [Link](../data-factory/copy-activity-performance.md) |
| Sqoop | fs.azure.block.size, -m (mapper) | [Link](https://blogs.msdn.microsoft.com/bigdatasupport/2015/02/17/sqoop-job-performance-tuning-in-hdinsight-hadoop/) |
## Structure your data set
When data is stored in Data Lake Store, the file size, number of files, and folder structure have an impact on performance. The following section describes best practices in these areas.
### File size
Typically, analytics engines such as HDInsight and Azure Data Lake Analytics have a per-file overhead. If you store your data as many small files, this can negatively affect performance.
In general, organize your data into larger sized files for better performance. As a rule of thumb, organize data sets in files of 256MB or larger. In some cases such as images and binary data, it is not possible to process them in parallel. In these cases, it is recommended to keep individual files under 2GB.
Sometimes, data pipelines have limited control over the raw data which has lots of small files. It is recommended to have a "cooking" process that generates larger files to use for downstream applications.
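One way to sketch such a "cooking" step is to plan batches of small files whose combined size reaches a target (for example the 256MB guideline above) and then concatenate each batch into one larger file. The batching helper below is a simplified illustration — the file names and threshold are assumptions, and a real pipeline would also preserve record boundaries:

```python
def plan_batches(file_sizes, target_bytes):
    """Group (name, size) pairs into batches of roughly target_bytes each.

    Returns a list of batches; each batch is a list of file names whose
    combined size is at least target_bytes (except possibly the last one).
    """
    batches, current, current_size = [], [], 0
    for name, size in file_sizes:
        current.append(name)
        current_size += size
        if current_size >= target_bytes:
            batches.append(current)
            current, current_size = [], 0
    if current:
        batches.append(current)  # leftover files form a final, smaller batch
    return batches

# Example: six 100MB files compacted toward a 256MB target → two batches of three.
files = [(f"part-{i:05d}.tsv", 100 * 2**20) for i in range(6)]
print(plan_batches(files, 256 * 2**20))
```

Each planned batch can then be written out as a single larger file for downstream jobs to consume.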
### Organizing Time Series data in folders
For Hive and ADLA workloads, partition pruning of time-series data can help some queries read only a subset of the data which improves performance.
Pipelines that ingest time-series data often place their files with a structured naming scheme for files and folders. Below is a very common example we see for data that is structured by date:
\DataSet\YYYY\MM\DD\datafile_YYYY_MM_DD.tsv
Notice that the datetime information appears both as folders and in the filename.
For date and time, the following is a common pattern
\DataSet\YYYY\MM\DD\HH\mm\datafile_YYYY_MM_DD_HH_mm.tsv
Again, the choice you make with the folder and file organization should optimize for the larger file sizes and a reasonable number of files in each folder.
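The layouts above are straightforward to generate from a timestamp. A minimal sketch — paths are shown with forward slashes as in ADLS URIs, and the dataset name and `.tsv` extension are placeholders:

```python
from datetime import datetime

def partition_path(dataset, ts, with_time=False):
    """Build a /DataSet/YYYY/MM/DD[/HH/mm] folder path plus a matching file name."""
    if with_time:
        folder = ts.strftime(f"/{dataset}/%Y/%m/%d/%H/%M")
        name = ts.strftime("datafile_%Y_%m_%d_%H_%M.tsv")
    else:
        folder = ts.strftime(f"/{dataset}/%Y/%m/%d")
        name = ts.strftime("datafile_%Y_%m_%d.tsv")
    return f"{folder}/{name}"

print(partition_path("DataSet", datetime(2017, 6, 30)))
# → /DataSet/2017/06/30/datafile_2017_06_30.tsv
```

Keeping the date both in the folder hierarchy and in the file name, as above, is what enables partition pruning while still keeping individual files self-describing.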
## Optimizing I/O intensive jobs on Hadoop and Spark workloads on HDInsight
Jobs fall into one of the following three categories:
* **CPU intensive.** These jobs have long computation times with minimal I/O times. Examples include machine learning and natural language processing jobs.
* **Memory intensive.** These jobs use lots of memory. Examples include PageRank and real-time analytics jobs.
* **I/O intensive.** These jobs spend most of their time doing I/O. A common example is a copy job which does only read and write operations. Other examples include data preparation jobs that read a lot of data, performs some data transformation, and then writes the data back to the store.
The following guidance is only applicable to I/O intensive jobs.
### General Considerations for an HDInsight cluster
* **HDInsight versions.** For best performance, use the latest release of HDInsight.
* **Regions.** Place the Data Lake Store in the same region as the HDInsight cluster.
An HDInsight cluster is composed of two head nodes and some worker nodes. Each worker node provides a specific number of cores and memory, which is determined by the VM-type. When running a job, YARN is the resource negotiator that allocates the available memory and cores to create containers. Each container runs the tasks needed to complete the job. Containers run in parallel to process tasks quickly. Therefore, performance is improved by running as many parallel containers as possible.
There are three layers within an HDInsight cluster that can be tuned to increase the number of containers and use all available throughput.
* **Physical layer**
* **YARN layer**
* **Workload layer**
### Physical Layer
**Run cluster with more nodes and/or larger sized VMs.** A larger cluster will enable you to run more YARN containers as shown in the picture below.

**Use VMs with more network bandwidth.** The amount of network bandwidth can be a bottleneck if there is less network bandwidth than Data Lake Store throughput. Different VMs will have varying network bandwidth sizes. Choose a VM-type that has the largest possible network bandwidth.
### YARN Layer
**Use smaller YARN containers.** Reduce the size of each YARN container to create more containers with the same amount of resources.

Depending on your workload, there will always be a minimum YARN container size that is needed. If you pick too small a container, your jobs will run into out-of-memory issues. Typically YARN containers should be no smaller than 1GB. It's common to see 3GB YARN containers. For some workloads, you may need larger YARN containers.
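The effect of container size on parallelism can be estimated with simple arithmetic: containers per node is roughly the usable YARN memory per node divided by the container size. A back-of-the-envelope sketch — real YARN allocation also rounds to the scheduler's minimum allocation and is capped by available vcores, which this ignores:

```python
def max_containers(nodes, node_memory_gb, container_gb):
    """Rough upper bound on concurrently running YARN containers in a cluster."""
    return nodes * int(node_memory_gb // container_gb)

# Hypothetical cluster: 8 worker nodes with 24GB of usable YARN memory each.
print(max_containers(8, 24, 6))  # 6GB containers → 32 containers
print(max_containers(8, 24, 3))  # halving the container size doubles parallelism → 64
```

The same arithmetic shows why shrinking containers below your workload's minimum memory requirement trades a parallelism gain for out-of-memory failures.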
**Increase cores per YARN container.** Increase the number of cores allocated to each container to increase the number of parallel tasks that run in each container. This works for applications like Spark which run multiple tasks per container. For applications like Hive which run a single thread in each container, it is better to have more containers rather than more cores per container.
### Workload Layer
**Use all available containers.** Set the number of tasks to be equal or larger than the number of available containers so that all resources are utilized.

**Failed tasks are costly.** If each task has a large amount of data to process, then failure of a task results in an expensive retry. Therefore, it is better to create more tasks, each of which processes a small amount of data.
In addition to the general guidelines above, each application has different parameters available to tune for that specific application. The table below lists some of the parameters and links to get started with performance tuning for each application.
| Workload | Parameter to set tasks |
|--------------------|-------------------------------------------------------------------------------------|
| [Spark on HDInsight](data-lake-store-performance-tuning-spark.md) | <ul><li>Num-executors</li><li>Executor-memory</li><li>Executor-cores</li></ul> |
| [Hive on HDInsight](data-lake-store-performance-tuning-hive.md) | <ul><li>hive.tez.container.size</li></ul> |
| [MapReduce on HDInsight](data-lake-store-performance-tuning-mapreduce.md) | <ul><li>Mapreduce.map.memory</li><li>Mapreduce.job.maps</li><li>Mapreduce.reduce.memory</li><li>Mapreduce.job.reduces</li></ul> |
| [Storm on HDInsight](data-lake-store-performance-tuning-storm.md)| <ul><li>Number of worker processes</li><li>Number of spout executor instances</li><li>Number of bolt executor instances </li><li>Number of spout tasks</li><li>Number of bolt tasks</li></ul>|
## See also
* [Overview of Azure Data Lake Store](data-lake-store-overview.md)
* [Get Started with Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
---
author: eric-urban
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: include
ms.date: 05/04/2021
ms.author: eur
ms.openlocfilehash: d2bda8b0bbd4a4d484d2aa1016bb57244bd6caf7
ms.sourcegitcommit: 2cc9695ae394adae60161bc0e6e0e166440a0730
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 11/03/2021
ms.locfileid: "131507569"
---
To complete intent recognition, you'll need to create a LUIS account and a project by using the LUIS preview portal. This quickstart requires a LUIS subscription [in a region where intent recognition is available](../../../regions.md#intent-recognition). A Speech service subscription is *not* required.
First, you'll need to create a LUIS account and app by using the LUIS preview portal. The LUIS app that you create will use a prebuilt domain for home automation, which provides intents, entities, and example utterances. When you're finished, you'll have a LUIS endpoint running in the cloud that you can call by using the Speech SDK.
Follow these instructions to create your LUIS app:

* <a href="/azure/cognitive-services/luis/luis-get-started-create-app" target="_blank">Quickstart: Build a prebuilt domain app</a>
When you're done, you'll need four things:

* Re-publish with **Speech priming** turned on
* Your LUIS **Primary key**
* Your LUIS **Location**
* Your LUIS **App ID**
Here's where you can find this information in the [LUIS preview portal](https://preview.luis.ai/):
1. In the LUIS preview portal, select your app, and then select the **Publish** button.

2. Select the **Production** slot. If you're using `en-US`, select **change settings**, and switch the **Speech priming** option to the **On** position. Then select the **Publish** button.
> [!IMPORTANT]
> We strongly recommend that you use **Speech priming**. It will improve speech recognition accuracy.

> [!div class="mx-imgBorder"]
> 
3. In the LUIS preview portal, select **Manage**, and then select **Azure Resources**. On this page, you'll find your LUIS key and location (sometimes referred to as _region_) for your LUIS prediction resource.

   > [!div class="mx-imgBorder"]
   > 
4. After you have your key and location, you'll need the app ID. Select **Settings**. Your app ID is available on this page.

   > [!div class="mx-imgBorder"]
   > 