hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e00d1b3478c74b56596e9bbf58fd55b43423f141 | 22 | md | Markdown | _includes/01-name.md | whafflez/markdown-portfolio | f7d3129ed4a634075391723934d25e4e1384dce1 | [
"MIT"
] | null | null | null | _includes/01-name.md | whafflez/markdown-portfolio | f7d3129ed4a634075391723934d25e4e1384dce1 | [
"MIT"
] | 5 | 2021-04-06T09:03:18.000Z | 2021-04-06T09:26:41.000Z | _includes/01-name.md | whafflez/markdown-portfolio | f7d3129ed4a634075391723934d25e4e1384dce1 | [
"MIT"
] | null | null | null | # My name is Whafflez
| 11 | 21 | 0.727273 | eng_Latn | 1.000004 |
e00d26011f02a969bdd51ec77223325b6630392b | 10,042 | md | Markdown | articles/virtual-machines/troubleshooting/linux-virtual-machine-cannot-start-fstab-errors.md | harrish0225/azure-docs.zh-cn | ee03d8b199b86cc04823a73646152d3c9c9debe6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/troubleshooting/linux-virtual-machine-cannot-start-fstab-errors.md | harrish0225/azure-docs.zh-cn | ee03d8b199b86cc04823a73646152d3c9c9debe6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/troubleshooting/linux-virtual-machine-cannot-start-fstab-errors.md | harrish0225/azure-docs.zh-cn | ee03d8b199b86cc04823a73646152d3c9c9debe6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 排查由于 fstab 错误而导致的 Linux VM 启动问题 |Microsoft Docs
description: 解释 Linux VM 为何无法启动,以及如何解决此问题。
services: virtual-machines-linux
documentationcenter: ''
author: v-miegge
manager: dcscontentpm
editor: ''
tags: ''
ms.service: virtual-machines-linux
ms.topic: troubleshooting
ms.workload: infrastructure-services
ms.tgt_pltfrm: vm-linux
ms.devlang: azurecli
ms.date: 10/09/2019
ms.author: v-six
ms.openlocfilehash: f68221666f370f87af7539d9302aaa3ed472d5e8
ms.sourcegitcommit: d815163a1359f0df6ebfbfe985566d4951e38135
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/07/2020
ms.locfileid: "82883135"
---
# <a name="troubleshoot-linux-vm-starting-issues-due-to-fstab-errors"></a>排查 fstab 错误导致的 Linux VM 启动问题
无法使用安全外壳 (SSH) 连接来与 Azure Linux 虚拟机 (VM) 建立连接。 在 [Azure 门户](https://portal.azure.com/)上运行[启动诊断](https://docs.microsoft.com/azure/virtual-machines/troubleshooting/boot-diagnostics)功能时,看到类似于以下示例的日志条目:
## <a name="examples"></a>示例
下面是可能的错误示例。
### <a name="example-1-a-disk-is-mounted-by-the-scsi-id-instead-of-the-universally-unique-identifier-uuid"></a>示例 1:磁盘是按 SCSI ID 而不是全局唯一标识符 (UUID) 装载的
```
[K[[1;31m TIME [0m] Timed out waiting for device dev-incorrect.device.
[[1;33mDEPEND[0m] Dependency failed for /data.
[[1;33mDEPEND[0m] Dependency failed for Local File Systems.
…
Welcome to emergency mode! After logging in, type "journalctl -xb" to viewsystem logs, "systemctl reboot" to reboot, "systemctl default" to try again to boot into default mode.
Give root password for maintenance
(or type Control-D to continue)
```
### <a name="example-2-an-unattached-device-is-missing-on-centos"></a>示例 2:CentOS 上缺少某个未附加的设备
```
Checking file systems…
fsck from util-linux 2.19.1
Checking all file systems.
/dev/sdc1: nonexistent device ("nofail" fstab option may be used to skip this device)
/dev/sdd1: nonexistent device ("nofail" fstab option may be used to skip this device)
/dev/sde1: nonexistent device ("nofail" fstab option may be used to skip this device)
[/sbin/fsck.ext3 (1) — /CODE] sck.ext3 -a /dev/sdc1
fsck.ext3: No such file or directory while trying to open /dev/sdc1
/dev/sdc1:
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
[/sbin/fsck.xfs (1) — /GLUSTERDISK] fsck.xfs -a /dev/sdd1
/sbin/fsck.xfs: /dev/sdd1 does not exist
[/sbin/fsck.ext3 (1) — /DATATEMP] fsck.ext3 -a /dev/sde1 fsck.ext3: No such file or directory while trying to open /dev/sde1
```
### <a name="example-3-a-vm-cannot-start-because-of-an-fstab-misconfiguration-or-because-the-disk-is-no-longer-attached"></a>示例 3:由于 fstab 配置不当或磁盘不再处于附加状态,VM 无法启动
```
The disk drive for /var/lib/mysql is not ready yet or not present.
Continue to wait, or Press S to skip mounting or M for manual recovery
```
### <a name="example-4-a-serial-log-entry-shows-an-incorrect-uuid"></a>示例 4:串行日志条目显示错误的 UUID
```
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) — /] fsck.ext4 -a /dev/sda1
/dev/sda1: clean, 70442/1905008 files, 800094/7608064 blocks
[/sbin/fsck.ext4 (1) — /datadrive] fsck.ext4 -a UUID="<UUID>"
fsck.ext4: Unable to resolve UUID="<UUID>"
[FAILED
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
*** Warning — SELinux is active
*** Disabling security enforcement for system recovery.
*** Run 'setenforce 1' to reenable.
type=1404 audit(1428047455.949:4): enforcing=0 old_enforcing=1 auid=<AUID> ses=4294967295
Give root password for maintenance
(or type Control-D to continue)
```
如果文件系统表 (fstab) 语法不正确,或者映射到“/etc/fstab”文件中某个条目的所需数据磁盘未附加到 VM,则可能会出现此问题。
## <a name="resolution"></a>解决方法
若要解决此问题,请使用 Azure 虚拟机的串行控制台在紧急模式下启动 VM。 然后使用该工具修复文件系统。 如果虚拟机上未启用串行控制台,请参阅[修复 vm 脱机](#repair-the-vm-offline)部分。
## <a name="use-the-serial-console"></a>使用串行控制台
### <a name="using-single-user-mode"></a>使用单用户模式
1. 连接到[串行控制台](https://docs.microsoft.com/azure/virtual-machines/troubleshooting/serial-console-linux)。
2. 使用串行控制台获取单用户模式[单一用户](https://docs.microsoft.com/azure/virtual-machines/linux/serial-console-grub-single-user-mode)模式
3. Vm 启动到单用户模式后。 使用偏好的文本编辑器打开 fstab 文件。
```
# nano /etc/fstab
```
4. 查看列出的文件系统。 fstab 文件中的每行表示启动 VM 时装载的文件系统。 有关 fstab 文件语法的详细信息,请运行 man fstab 命令。 若要排查启动失败问题,请查看每行,确保其结构和内容中都正确。
> [!Note]
> * 每行中的字段由制表符或空格分隔。 空行将被忽略。 第一个字符是数字符号 (#) 的行是注释。 注释行可以保留在 fstab 文件中,但不会受到处理。 对于你不确定的 fstab 行,我们建议保留注释,而不要删除这些行。
> * 要使 VM 能够恢复和启动,应该只有文件系统分区是所需的分区。 VM 可能会遇到有关其他带注释分区的应用程序错误。 但是,在没有其他分区的情况下,VM 应该也能启动。 稍后可以取消注释所有带注释的行。
> * 建议使用文件系统分区的 UUID 将数据磁盘装载到 Azure VM 上。 例如,运行以下命令:``/dev/sdc1: LABEL="cloudimg-rootfs" UUID="<UUID>" TYPE="ext4" PARTUUID="<PartUUID>"``
> * 若要确定文件系统的 UUID,请运行 blkid 命令。 有关语法的详细信息,请运行 man blkid 命令。
> * nofail 选项有助于确保即使文件系统已损坏或启动时文件系统不存在,也能启动 VM。 建议在 fstab 文件中使用 nofail 选项,以便在无需用于启动 VM 的分区出错时,能够继续启动。
5. 更改或注释掉 fstab 文件中任何错误的或不必要的行,使 VM 能够正常启动。
6. 保存对 fstab 文件所做的更改。
7. 使用以下命令重新启动 vm。
```
# reboot -f
```
> [!Note]
> 还可以使用 "ctrl + x" 命令,该命令还将重新启动 vm。
8. 如果条目已成功做出注释或修复,系统应会在门户中显示 bash 提示符。 检查是否可以连接到 VM。
### <a name="using-root-password"></a>使用根密码
1. 连接到[串行控制台](https://docs.microsoft.com/azure/virtual-machines/troubleshooting/serial-console-linux)。
2. 使用本地用户和密码登录到系统。
> [!Note]
> 不能使用 SSH 密钥登录到串行控制台中的系统。
3. 查找指示磁盘未装入的错误。 在以下示例中,系统尝试附加一个不再存在的磁盘:
```
[DEPEND] Dependency failed for /datadisk1.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[DEPEND] Dependency failed for Migrate local... structure to the new structure.
Welcome to emergency mode! After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to try again to boot into default mode.
Give root password for maintenance
(or type Control-D to continue):
```
4. 使用 root 密码连接到 VM(基于 Red Hat 的 VM)。
5. 使用偏好的文本编辑器打开 fstab 文件。 装载磁盘后,为 Nano 运行以下命令:
```
$ nano /mnt/troubleshootingdisk/etc/fstab
```
6. 查看列出的文件系统。 fstab 文件中的每行表示启动 VM 时装载的文件系统。 有关 fstab 文件语法的详细信息,请运行 man fstab 命令。 若要排查启动失败问题,请查看每行,确保其结构和内容中都正确。
> [!Note]
> * 每行中的字段由制表符或空格分隔。 空行将被忽略。 第一个字符是数字符号 (#) 的行是注释。 注释行可以保留在 fstab 文件中,但不会受到处理。 对于你不确定的 fstab 行,我们建议保留注释,而不要删除这些行。
> * 要使 VM 能够恢复和启动,应该只有文件系统分区是所需的分区。 VM 可能会遇到有关其他带注释分区的应用程序错误。 但是,在没有其他分区的情况下,VM 应该也能启动。 稍后可以取消注释所有带注释的行。
> * 建议使用文件系统分区的 UUID 将数据磁盘装载到 Azure VM 上。 例如,运行以下命令:``/dev/sdc1: LABEL="cloudimg-rootfs" UUID="<UUID>" TYPE="ext4" PARTUUID="<PartUUID>"``
> * 若要确定文件系统的 UUID,请运行 blkid 命令。 有关语法的详细信息,请运行 man blkid 命令。
> * nofail 选项有助于确保即使文件系统已损坏或启动时文件系统不存在,也能启动 VM。 建议在 fstab 文件中使用 nofail 选项,以便在无需用于启动 VM 的分区出错时,能够继续启动。
7. 更改或注释掉 fstab 文件中任何错误的或不必要的行,使 VM 能够正常启动。
8. 保存对 fstab 文件所做的更改。
9. 重新启动虚拟机。
10. 如果条目已成功做出注释或修复,系统应会在门户中显示 bash 提示符。 检查是否可以连接到 VM。
11. 通过运行 mount – a 命令来测试任何 fstab 更改时检查装入点。 如果未出错,则装入点应是正常的。
## <a name="repair-the-vm-offline"></a>修复 VM 脱机
1. 将 VM 的系统磁盘作为数据磁盘附加到恢复 VM(任何正常工作的 Linux VM)。 为此,可以使用 [CLI 命令](https://docs.microsoft.com/azure/virtual-machines/troubleshooting/troubleshoot-recovery-disks-linux),或者使用 [VM 修复命令](repair-linux-vm-using-azure-virtual-machine-repair-commands.md)自动设置恢复 VM。
2. 在恢复 VM 上将系统磁盘装载为数据磁盘后,在进行更改之前备份 fstab 文件,然后遵循后续步骤更正 fstab 文件。
3. 查找指出未装载磁盘的错误。 在以下示例中,系统尝试附加一个不再存在的磁盘:
```
[DEPEND] Dependency failed for /datadisk1.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[DEPEND] Dependency failed for Migrate local... structure to the new structure.
Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to try again to boot into default mode.
Give root password for maintenance (or type Control-D to continue):
```
4. 使用 root 密码连接到 VM(基于 Red Hat 的 VM)。
5. 使用偏好的文本编辑器打开 fstab 文件。 装载磁盘后,运行以下适用于 Nano 的命令。 请确保打开位于已装载磁盘上的 fstab 文件,而不要使用位于救援 VM 上的 fstab 文件。
```
$ nano /mnt/troubleshootingdisk/etc/fstab
```
6. 查看列出的文件系统。 fstab 文件中的每行表示启动 VM 时装载的文件系统。 有关 fstab 文件语法的详细信息,请运行 man fstab 命令。 若要排查启动失败问题,请查看每行,确保其结构和内容中都正确。
> [!Note]
> * 每行中的字段由制表符或空格分隔。 空行将被忽略。 第一个字符是数字符号 (#) 的行是注释。 注释行可以保留在 fstab 文件中,但不会受到处理。 对于你不确定的 fstab 行,我们建议保留注释,而不要删除这些行。
> * 要使 VM 能够恢复和启动,应该只有文件系统分区是所需的分区。 VM 可能会遇到有关其他带注释分区的应用程序错误。 但是,在没有其他分区的情况下,VM 应该也能启动。 稍后可以取消注释所有带注释的行。
> * 建议使用文件系统分区的 UUID 将数据磁盘装载到 Azure VM 上。 例如,运行以下命令:``/dev/sdc1: LABEL="cloudimg-rootfs" UUID="<UUID>" TYPE="ext4" PARTUUID="<PartUUID>"``
> * 若要确定文件系统的 UUID,请运行 blkid 命令。 有关语法的详细信息,请运行 man blkid 命令。 可以看到,要恢复的磁盘现已装载到新 VM 上。 尽管 UUID 应保持一致,但设备分区 ID(例如“/dev/sda1”)在此 VM 上是不同的。 [使用 CLI 命令](https://docs.microsoft.com/azure/virtual-machines/troubleshooting/troubleshoot-recovery-disks-linux)时,原始有故障 VM 的、位于非系统 VHD 上的文件系统分区不可在恢复 VM 中使用。
> * nofail 选项有助于确保即使文件系统已损坏或启动时文件系统不存在,也能启动 VM。 建议在 fstab 文件中使用 nofail 选项,以便在无需用于启动 VM 的分区出错时,能够继续启动。
7. 更改或注释掉 fstab 文件中任何错误的或不必要的行,使 VM 能够正常启动。
8. 保存对 fstab 文件所做的更改。
9. 重启虚拟机,或重建原始 VM。
10. 如果条目已成功做出注释或修复,系统应会在门户中显示 bash 提示符。 检查是否可以连接到 VM。
11. 通过运行 mount – a 命令来测试任何 fstab 更改时检查装入点。 如果未出错,则装入点应是正常的。
12. 卸载并分离原始虚拟硬盘,然后从原始系统磁盘创建 VM。 为此,可以使用 [CLI 命令](troubleshoot-recovery-disks-linux.md)或 [VM 修复命令](repair-linux-vm-using-azure-virtual-machine-repair-commands.md)(如果使用这些命令创建了恢复 VM)。
13. 再次创建 VM 并可以通过 SSH 连接到该 VM 后,请执行以下操作:
* 查看在恢复期间已更改或注释掉的任何 fstab 行。
* 确保正确使用 UUID 和 nofail 选项。
* 在重启 VM 之前测试任何 fstab 更改。 为此,请使用以下命令:``$ sudo mount -a``
* 创建已更正的 fstab 文件的额外副本,以便在将来的恢复方案中使用。
## <a name="next-steps"></a>后续步骤
* [通过使用 Azure CLI 2.0 将 OS 磁盘附加到恢复 VM 来对 Linux VM 进行故障排除](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-troubleshoot-recovery-disks)
* [通过使用 Azure 门户将 OS 磁盘附加到恢复 VM 来对 Linux VM 进行故障排除](https://docs.microsoft.com/azure/virtual-machines/linux/troubleshoot-recovery-disks-portal)
| 40.987755 | 294 | 0.743378 | yue_Hant | 0.555568 |
e00e6ad351d0d565dc6eda5e0e3aad93e5b4268a | 15,976 | md | Markdown | docs/relational-databases/import-export/keep-nulls-or-use-default-values-during-bulk-import-sql-server.md | kazuyuk/sql-docs.ja-jp | b4913a1e60b25f2c582ab655a35a8225f7796091 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/import-export/keep-nulls-or-use-default-values-during-bulk-import-sql-server.md | kazuyuk/sql-docs.ja-jp | b4913a1e60b25f2c582ab655a35a8225f7796091 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/import-export/keep-nulls-or-use-default-values-during-bulk-import-sql-server.md | kazuyuk/sql-docs.ja-jp | b4913a1e60b25f2c582ab655a35a8225f7796091 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 一括インポート中の NULL の保持または既定値の使用 (SQL Server) | Microsoft Docs
ms.custom: ''
ms.date: 09/20/2016
ms.prod: sql
ms.prod_service: database-engine, sql-database, sql-data-warehouse, pdw
ms.component: import-export
ms.reviewer: ''
ms.suite: sql
ms.technology: data-movement
ms.tgt_pltfrm: ''
ms.topic: conceptual
helpviewer_keywords:
- bulk importing [SQL Server], null values
- bulk importing [SQL Server], default values
- data formats [SQL Server], null values
- bulk rowset providers [SQL Server]
- bcp utility [SQL Server], null values
- BULK INSERT statement
- default values
- OPENROWSET function, bulk importing
- data formats [SQL Server], default values
ms.assetid: 6b91d762-337b-4345-a159-88abb3e64a81
caps.latest.revision: 41
author: douglaslMS
ms.author: douglasl
manager: craigg
monikerRange: '>=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017'
ms.openlocfilehash: 567817abd46fa23ba304bc3cd2cdbb0b5a0e20fc
ms.sourcegitcommit: 4cd008a77f456b35204989bbdd31db352716bbe6
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/06/2018
ms.locfileid: "39552762"
---
# <a name="keep-nulls-or-use-default-values-during-bulk-import-sql-server"></a>一括インポート中の NULL の保持または既定値の使用 (SQL Server)
[!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../../includes/appliesto-ss-asdb-asdw-pdw-md.md)]
既定では、データをテーブルにインポートするとき、 [bcp](../../tools/bcp-utility.md) コマンドと [BULK INSERT](../../t-sql/statements/bulk-insert-transact-sql.md) ステートメントによって、テーブルの列に対して定義されているすべての既定値が監視されます。 たとえば、データ ファイルに NULL フィールドがある場合は、NULL 値の代わりにその列の既定値が読み込まれます。 [bcp](../../tools/bcp-utility.md) コマンドと [BULK INSERT](../../t-sql/statements/bulk-insert-transact-sql.md) ステートメントの両方で、NULL 値を保持することを指定することもできます。
これに対し、通常の INSERT ステートメントでは、既定値が挿入されるのではなく、NULL 値が保持されます。 INSERT ...SELECT * FROM [OPENROWSET(BULK...)](../../t-sql/functions/openrowset-transact-sql.md) ステートメントでは、通常の INSERT と同じ基本的な動作に加えて、既定値を挿入するための[テーブル ヒント](../../t-sql/queries/hints-transact-sql-table.md)がサポートされます。
|[外枠]|
|---|
|[Null 値を維持する](#keep_nulls)<br />[既定値と INSERT ... を使用するSELECT * FROM OPENROWSET(BULK...)](#keep_default)<br />[テスト条件の例](#etc)<br /> ● [サンプル テーブル](#sample_table)<br /> ● [サンプル データ ファイル](#sample_data_file)<br /> ● [XML 形式以外のフォーマット ファイルのサンプル](#nonxml_format_file)<br />[一括インポート中の NULL の保持または既定値の使用](#import_data)<br /> ● [フォーマット ファイルなしで bcp を使用して Null 値を維持する方法](#bcp_null)<br /> ● [XML 形式以外のフォーマット ファイルで bcp を使用して Null 値を維持する方法](#bcp_null_fmt)<br /> ● [フォーマット ファイルなしで bcp と既定値を使用する方法](#bcp_default)<br /> ● [XML 形式以外のフォーマット ファイルで bcp と既定値を使用する方法](#bcp_default_fmt)<br /> ● [フォーマット ファイルなしで BULK INSERT を使用して Null 値を維持する方法](#bulk_null)<br /> ● [XML 形式以外のフォーマット ファイルで BULK INSERT を使用して Null 値を維持する方法](#bulk_null_fmt)<br /> ● [フォーマット ファイルなしで BULK INSERT と既定値を使用する方法](#bulk_default)<br /> ● [XML 形式以外のフォーマット ファイルで BULK INSERT と既定値を使用する方法](#bulk_default_fmt)<br /> ● [XML 形式以外のフォーマット ファイルで OPENROWSET(BULK...) を使用して Null 値を維持する方法](#openrowset__null_fmt)<br /> ● [XML 形式以外のフォーマット ファイルで OPENROWSET(BULK...) と既定値を使用する方法](#openrowset__default_fmt)
## Null 値を維持する<a name="keep_nulls"></a>
以下の修飾子は、一括インポート操作中、テーブル列の既定値がある場合にその既定値を継承するのではなく、データ ファイルの空のフィールドにそのフィールドの NULL 値を保持することを指定しています。 [OPENROWSET](../../t-sql/functions/openrowset-transact-sql.md)の場合、既定では、一括読み込み操作で指定されていないすべての列が NULL に設定されます。
|コマンド|Qualifier|修飾子の種類|
|-------------|---------------|--------------------|
|bcp|-k|スイッチ|
|BULK INSERT|KEEPNULLS**\***|引数|
|INSERT ...SELECT * FROM OPENROWSET(BULK...)|なし|なし|
**\*** [BULK INSERT](../../t-sql/statements/bulk-insert-transact-sql.md)では、既定値を使用できない場合、NULL 値を許容するようにテーブル列を定義する必要があります。
> [!NOTE]
> 上記の修飾子は、一括インポート コマンドによるテーブルでの DEFAULT 定義の確認を無効にします。 ただし、同時に実行するすべての INSERT ステートメントでは、DEFAULT 定義が必要です。
## 既定値と INSERT ... を使用するSELECT * FROM [OPENROWSET(BULK...)](../../t-sql/functions/openrowset-transact-sql.md)<a name="keep_default"></a>
データ ファイルのフィールドが空の場合、対応するテーブル列に既定値があるときはその列で既定値を使用することを指定できます。 既定値を使用するには、テーブル ヒント [KEEPDEFAULTS](../../t-sql/queries/hints-transact-sql-table.md) を使用します。
> [!NOTE]
> 詳細については、「[INSERT (Transact-SQL)](../../t-sql/statements/insert-transact-sql.md)」、「[SELECT (Transact-SQL)](../../t-sql/queries/select-transact-sql.md)」、「[OPENROWSET (Transact-SQL)](../../t-sql/functions/openrowset-transact-sql.md)」、「[テーブル ヒント (Transact-SQL)](../../t-sql/queries/hints-transact-sql-table.md)」を参照してください。
## テスト条件の例<a name="etc"></a>
このトピックの例は、以下に定義されたテーブル、データ ファイル、およびフォーマット ファイルに基づいています。
### **サンプル テーブル**<a name="sample_table"></a>
以下のスクリプトでは、テスト データベースと `myNulls`という名前のテーブルが作成されます。 4 番目のテーブル列 `Kids`には既定値があることに注意してください。 Microsoft [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] (SSMS) で次の Transact-SQL を実行します。
```sql
CREATE DATABASE TestDatabase;
GO
USE TestDatabase;
CREATE TABLE dbo.myNulls (
PersonID smallint not null,
FirstName varchar(25),
LastName varchar(30),
Kids varchar(13) DEFAULT 'Default Value',
BirthDate date
);
```
### **サンプル データ ファイル**<a name="sample_data_file"></a>
メモ帳を使用して、空のファイル `D:\BCP\myNulls.bcp` を作成し、次のデータを挿入します。 4 列目の 3 つ目のレコードに値はありません。
```
1,Anthony,Grosse,Yes,1980-02-23
2,Alica,Fatnowna,No,1963-11-14
3,Stella,Rosenhain,,1992-03-02
```
また、次の PowerShell スクリプトを実行して、データ ファイルを作成および設定することもできます。
```powershell
cls
# revise directory as desired
$dir = 'D:\BCP\';
$bcpFile = $dir + 'MyNulls.bcp';
# Confirm directory exists
IF ((Test-Path -Path $dir) -eq 0)
{
Write-Host "The path $dir does not exist; please create or modify the directory.";
RETURN;
};
# clear content, will error if file does not exist, can be ignored
Clear-Content -Path $bcpFile -ErrorAction SilentlyContinue;
# Add data
Add-Content -Path $bcpFile -Value '1,Anthony,Grosse,Yes,1980-02-23';
Add-Content -Path $bcpFile -Value '2,Alica,Fatnowna,No,1963-11-14';
Add-Content -Path $bcpFile -Value '3,Stella,Rosenhain,,1992-03-02';
#Review content
Get-Content -Path $bcpFile;
Invoke-Item $bcpFile;
```
### **XML 形式以外のフォーマット ファイルのサンプル**<a name="nonxml_format_file"></a>
SQL Server は、非 XML 形式と XML 形式の 2 種類のフォーマット ファイルをサポートしています。 XML 以外のフォーマットとは、以前のバージョンの SQL Server でサポートされる従来のフォーマットです。 詳細については、「 [XML 以外のフォーマット ファイル (SQL Server)](../../relational-databases/import-export/non-xml-format-files-sql-server.md) 」を参照してください。 次のコマンドでは、 [bcp ユーティリティ](../../tools/bcp-utility.md) を使用し、 `myNulls.fmt`のスキーマに基づいて XML 以外のフォーマット ファイル `myNulls`を生成します。 [bcp](../../tools/bcp-utility.md) コマンドを使用してフォーマット ファイルを作成するには、 **format** 引数を指定し、データ ファイルのパスの代わりに **nul** を使用します。 format オプションには、次に示す **-f** オプションが必要です。 さらに、この例では、修飾子 **c** を使用して文字データを指定し、 **t** を使用して [フィールド ターミネータ](../../relational-databases/import-export/specify-field-and-row-terminators-sql-server.md)としてコンマを指定し、 **T** を使用して統合セキュリティによる信頼された接続を指定します。 コマンド プロンプトで、次のコマンドを入力します。
```cmd
bcp TestDatabase.dbo.myNulls format nul -c -f D:\BCP\myNulls.fmt -t, -T
REM Review file
Notepad D:\BCP\myNulls.fmt
```
> [!IMPORTANT]
> XML 以外のフォーマット ファイルは、キャリッジ リターン\ライン フィードで終わるようにします。 そうしないと、次のエラー メッセージが発生する可能性があります。
>
> `SQLState = S1000, NativeError = 0`
> `Error = [Microsoft][ODBC Driver 13 for SQL Server]I/O error while reading BCP format file`
フォーマット ファイルの作成方法の詳細については、「 [フォーマット ファイルの作成 (SQL Server)](../../relational-databases/import-export/create-a-format-file-sql-server.md)を使用します。
## 一括インポート中の NULL の保持または既定値の使用<a name="import_data"></a>
次の例では、データベース、データ ファイル、および上記で作成したフォーマット ファイルを使用します。
### **フォーマット ファイルなしで [bcp](../../tools/bcp-utility.md) を使用して Null 値を維持する方法**<a name="bcp_null"></a>
**-k** スイッチ。 コマンド プロンプトで、次のコマンドを入力します。
```cmd
REM Truncate table (for testing)
SQLCMD -Q "TRUNCATE TABLE TestDatabase.dbo.myNulls;"
REM Import data
bcp TestDatabase.dbo.myNulls IN D:\BCP\myNulls.bcp -c -t, -T -k
REM Review results
SQLCMD -Q "SELECT * FROM TestDatabase.dbo.myNulls;"
```
### **フォーマット ファイルなしで [bcp](../../tools/bcp-utility.md) で [bcp](../../relational-databases/import-export/non-xml-format-files-sql-server.md)**<a name="bcp_null_fmt"></a>
**-k** スイッチと **-f** スイッチ。 コマンド プロンプトで、次のコマンドを入力します。
```cmd
REM Truncate table (for testing)
SQLCMD -Q "TRUNCATE TABLE TestDatabase.dbo.myNulls;"
REM Import data
bcp TestDatabase.dbo.myNulls IN D:\BCP\myNulls.bcp -f D:\BCP\myNulls.fmt -T -k
REM Review results
SQLCMD -Q "SELECT * FROM TestDatabase.dbo.myNulls;"
```
### **フォーマット ファイルなしで [bcp](../../tools/bcp-utility.md) と既定値を使用する方法**<a name="bcp_default"></a>
コマンド プロンプトで、次のコマンドを入力します。
```cmd
REM Truncate table (for testing)
SQLCMD -Q "TRUNCATE TABLE TestDatabase.dbo.myNulls;"
REM Import data
bcp TestDatabase.dbo.myNulls IN D:\BCP\myNulls.bcp -c -t, -T
REM Review results
SQLCMD -Q "SELECT * FROM TestDatabase.dbo.myNulls;"
```
### **フォーマット ファイルなしで [bcp](../../tools/bcp-utility.md) で [bcp](../../relational-databases/import-export/non-xml-format-files-sql-server.md)**<a name="bcp_default_fmt"></a>
**-f** スイッチ。 コマンド プロンプトで、次のコマンドを入力します。
```cmd
REM Truncate table (for testing)
SQLCMD -Q "TRUNCATE TABLE TestDatabase.dbo.myNulls;"
REM Import data
bcp TestDatabase.dbo.myNulls IN D:\BCP\myNulls.bcp -f D:\BCP\myNulls.fmt -T
REM Review results
SQLCMD -Q "SELECT * FROM TestDatabase.dbo.myNulls;"
```
### **フォーマット ファイルなしで [BULK INSERT](../../t-sql/statements/bulk-insert-transact-sql.md) を使用して Null 値を維持する方法**<a name="bulk_null"></a>
**KEEPNULLS** 引数。 Microsoft [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] (SSMS) で次の Transact-SQL を実行します。
```sql
USE TestDatabase;
GO
TRUNCATE TABLE dbo.myNulls; -- for testing
BULK INSERT dbo.myNulls
FROM 'D:\BCP\myNulls.bcp'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ',',
KEEPNULLS
);
-- review results
SELECT * FROM TestDatabase.dbo.myNulls;
```
### **フォーマット ファイルなしで [BULK INSERT](../../t-sql/statements/bulk-insert-transact-sql.md) で [bcp](../../relational-databases/import-export/non-xml-format-files-sql-server.md)**<a name="bulk_null_fmt"></a>
**KEEPNULLS** 引数と **FORMATFILE** 引数。 Microsoft [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] (SSMS) で次の Transact-SQL を実行します。
```sql
USE TestDatabase;
GO
TRUNCATE TABLE dbo.myNulls; -- for testing
BULK INSERT dbo.myNulls
FROM 'D:\BCP\myNulls.bcp'
WITH (
FORMATFILE = 'D:\BCP\myNulls.fmt',
KEEPNULLS
);
-- review results
SELECT * FROM TestDatabase.dbo.myNulls;
```
### **フォーマット ファイルなしで [BULK INSERT](../../t-sql/statements/bulk-insert-transact-sql.md) と既定値を使用する方法**<a name="bulk_default"></a>
Microsoft [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] (SSMS) で次の Transact-SQL を実行します。
```sql
USE TestDatabase;
GO
TRUNCATE TABLE dbo.myNulls; -- for testing
BULK INSERT dbo.myNulls
FROM 'D:\BCP\myNulls.bcp'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = ','
);
-- review results
SELECT * FROM TestDatabase.dbo.myNulls;
```
### **フォーマット ファイルなしで [BULK INSERT](../../t-sql/statements/bulk-insert-transact-sql.md) で [bcp](../../relational-databases/import-export/non-xml-format-files-sql-server.md)**<a name="bulk_default_fmt"></a>
**FORMATFILE** 引数。 Microsoft [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] (SSMS) で次の Transact-SQL を実行します。
```sql
USE TestDatabase;
GO
TRUNCATE TABLE dbo.myNulls; -- for testing
BULK INSERT dbo.myNulls
FROM 'D:\BCP\myNulls.bcp'
WITH (
FORMATFILE = 'D:\BCP\myNulls.fmt'
);
-- review results
SELECT * FROM TestDatabase.dbo.myNulls;
```
### **フォーマット ファイルなしで [OPENROWSET(BULK...)](../../t-sql/functions/openrowset-transact-sql.md) で [bcp](../../relational-databases/import-export/non-xml-format-files-sql-server.md)**<a name="openrowset__null_fmt"></a>
**FORMATFILE** 引数。 Microsoft [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] (SSMS) で次の Transact-SQL を実行します。
```sql
USE TestDatabase;
GO
TRUNCATE TABLE dbo.myNulls; -- for testing
INSERT INTO dbo.myNulls
SELECT *
FROM OPENROWSET (
BULK 'D:\BCP\myNulls.bcp',
FORMATFILE = 'D:\BCP\myNulls.fmt'
) AS t1;
-- review results
SELECT * FROM TestDatabase.dbo.myNulls;
```
### **フォーマット ファイルなしで [OPENROWSET(BULK...)](../../t-sql/functions/openrowset-transact-sql.md) で [bcp](../../relational-databases/import-export/non-xml-format-files-sql-server.md)**<a name="openrowset__default_fmt"></a>
**KEEPDEFAULTS** テーブル ヒントと **FORMATFILE** 引数。 Microsoft [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] (SSMS) で次の Transact-SQL を実行します。
```sql
USE TestDatabase;
GO
TRUNCATE TABLE dbo.myNulls; -- for testing
INSERT INTO dbo.myNulls
WITH (KEEPDEFAULTS)
SELECT *
FROM OPENROWSET (
BULK 'D:\BCP\myNulls.bcp',
FORMATFILE = 'D:\BCP\myNulls.fmt'
) AS t1;
-- review results
SELECT * FROM TestDatabase.dbo.myNulls;
```
## <a name="RelatedTasks"></a> 関連タスク
- [データの一括インポート時の ID 値の保持 (SQL Server)](../../relational-databases/import-export/keep-identity-values-when-bulk-importing-data-sql-server.md)
- [一括エクスポートまたは一括インポートのデータの準備 (SQL Server)](../../relational-databases/import-export/prepare-data-for-bulk-export-or-import-sql-server.md)
**フォーマット ファイルを作成するには**
- [フォーマット ファイルの作成 (SQL Server)](../../relational-databases/import-export/create-a-format-file-sql-server.md)
- [データの一括インポートでのフォーマット ファイルの使用 (SQL Server)](../../relational-databases/import-export/use-a-format-file-to-bulk-import-data-sql-server.md)
- [フォーマット ファイルを使用したテーブル列とデータ ファイル フィールドのマッピング (SQL Server)](../../relational-databases/import-export/use-a-format-file-to-map-table-columns-to-data-file-fields-sql-server.md)
- [フォーマット ファイルを使用したデータ フィールドのスキップ (SQL Server)](../../relational-databases/import-export/use-a-format-file-to-skip-a-data-field-sql-server.md)
- [フォーマット ファイルを使用したテーブル列のスキップ (SQL Server)](../../relational-databases/import-export/use-a-format-file-to-skip-a-table-column-sql-server.md)
**一括インポートまたは一括エクスポートのデータ形式を使用するには**
- [以前のバージョンの SQL Server からのネイティブ形式データおよび文字形式データのインポート](../../relational-databases/import-export/import-native-and-character-format-data-from-earlier-versions-of-sql-server.md)
- [文字形式を使用したデータのインポートまたはエクスポート (SQL Server)](../../relational-databases/import-export/use-character-format-to-import-or-export-data-sql-server.md)
- [ネイティブ形式を使用したデータのインポートまたはエクスポート (SQL Server)](../../relational-databases/import-export/use-native-format-to-import-or-export-data-sql-server.md)
- [Unicode 文字形式を使用したデータのインポートまたはエクスポート (SQL Server)](../../relational-databases/import-export/use-unicode-character-format-to-import-or-export-data-sql-server.md)
- [Unicode ネイティブ形式を使用したデータのインポートまたはエクスポート (SQL Server)](../../relational-databases/import-export/use-unicode-native-format-to-import-or-export-data-sql-server.md)
**bcp を使用した互換性のためのデータ形式を指定するには**
- [フィールド ターミネータと行ターミネータの指定 (SQL Server)](../../relational-databases/import-export/specify-field-and-row-terminators-sql-server.md)
- [bcp を使用したデータ ファイルのプレフィックス長の指定 (SQL Server)](../../relational-databases/import-export/specify-prefix-length-in-data-files-by-using-bcp-sql-server.md)
- [bcp を使用したファイル ストレージ型の指定 (SQL Server)](../../relational-databases/import-export/specify-file-storage-type-by-using-bcp-sql-server.md)
## <a name="see-also"></a>参照
[BACKUP (Transact-SQL)](../../t-sql/statements/backup-transact-sql.md)
[OPENROWSET (Transact-SQL)](../../t-sql/functions/openrowset-transact-sql.md)
[bcp ユーティリティ](../../tools/bcp-utility.md)
[BULK INSERT (Transact-SQL)](../../t-sql/statements/bulk-insert-transact-sql.md)
[テーブル ヒント (Transact-SQL)](../../t-sql/queries/hints-transact-sql-table.md)
| 43.89011 | 1,242 | 0.715699 | yue_Hant | 0.706389 |
e00f968a841ba38a1f20c0a3c887d3674fcec716 | 4,065 | md | Markdown | CHANGELOG.md | instill-ai/visual-data-pipeline | fd87eba8dda3adcbee8b7582d5a126930765e12b | [
"Apache-2.0"
] | 14 | 2022-02-15T06:23:51.000Z | 2022-03-11T08:46:38.000Z | CHANGELOG.md | instill-ai/visual-data-pipeline | fd87eba8dda3adcbee8b7582d5a126930765e12b | [
"Apache-2.0"
] | 40 | 2022-02-13T12:23:05.000Z | 2022-03-22T22:24:36.000Z | CHANGELOG.md | instill-ai/vdp | fd87eba8dda3adcbee8b7582d5a126930765e12b | [
"Apache-2.0"
] | null | null | null | # Changelog
### [0.1.6-alpha](https://github.com/instill-ai/vdp/compare/v0.1.5-alpha...v0.1.6-alpha) (2022-04-03)
### Miscellaneous Chores
* release 0.1.6-alpha ([66e4988](https://github.com/instill-ai/vdp/commit/66e498857a67af7343096512fbf73fbc741e7611))
### [0.1.5-alpha](https://github.com/instill-ai/vdp/compare/v0.1.4-alpha...v0.1.5-alpha) (2022-03-22)
### Miscellaneous Chores
* release 0.1.5-alpha ([36da44e](https://github.com/instill-ai/vdp/commit/36da44ebd27bb8b7aa6f1fdcf39f432293501275))
### [0.1.4-alpha](https://github.com/instill-ai/vdp/compare/v0.1.3-alpha...v0.1.4-alpha) (2022-03-22)
### Miscellaneous Chores
* release 0.1.4-alpha ([0b0e649](https://github.com/instill-ai/vdp/commit/0b0e64915491aac3b3a2a6274da30bdd1a7c940d))
### [0.1.3-alpha](https://github.com/instill-ai/vdp/compare/v0.1.2-alpha...v0.1.3-alpha) (2022-02-24)
### Bug Fixes
* some typos in examples-go ([#80](https://github.com/instill-ai/vdp/issues/80)) ([d81f564](https://github.com/instill-ai/vdp/commit/d81f5649265c2deb1ff9b08fd31b2ad779450922))
* typos in examples-go ([#77](https://github.com/instill-ai/vdp/issues/77)) ([cb94c9f](https://github.com/instill-ai/vdp/commit/cb94c9fbcf3bbfa98a2f691ab50d33bfb5afd7c0))
### [0.1.2-alpha](https://github.com/instill-ai/vdp/compare/v0.1.1-alpha...v0.1.2-alpha) (2022-02-21)
### Bug Fixes
* add name in update model api ([#41](https://github.com/instill-ai/vdp/issues/41)) ([35b7e31](https://github.com/instill-ai/vdp/commit/35b7e31f8771eccf16bcd7cc36785138540c869d))
* **docker-compose:** change correct dependency for temporal ([#60](https://github.com/instill-ai/vdp/issues/60)) ([eb98049](https://github.com/instill-ai/vdp/commit/eb98049b21fe6d6e4c037234dbc6cde9cf32b325))
* update model backend without specify version when creating model ([#63](https://github.com/instill-ai/vdp/issues/63)) ([f4cf161](https://github.com/instill-ai/vdp/commit/f4cf161fa789573985b47e465fc9a30c6da9ec6c))
* wrong type in sample model prediction ([#39](https://github.com/instill-ai/vdp/issues/39)) ([523d5f8](https://github.com/instill-ai/vdp/commit/523d5f8ea1d8b00799ceb5aef183ae727069726c))
### [0.1.1-alpha](https://github.com/instill-ai/vdp/compare/v0.1.0-alpha...v0.1.1-alpha) (2022-02-13)
### Miscellaneous Chores
* release 0.1.1-alpha ([932e3ee](https://github.com/instill-ai/vdp/commit/932e3eed14d6bd672ae0f00938cc0afd7992163c))
## [0.1.0-alpha](https://github.com/instill-ai/visual-data-preparation/compare/v0.0.0-alpha...v0.1.0-alpha) (2022-02-11)
### Features
* add provisioned `visual-data-pipeline` from docker-compose ([0ec790b](https://github.com/instill-ai/visual-data-preparation/commit/0ec790b4d99aae5c99544a571b3f349f51d2ee75))
* initial commit ([2916d75](https://github.com/instill-ai/visual-data-preparation/commit/2916d757f49bf922b2582a5a598086fa79f58162))
* refactor the name to `visual data preparation` and add `model-backend` into docker-compose ([#7](https://github.com/instill-ai/visual-data-preparation/issues/7)) ([d24c0ef](https://github.com/instill-ai/visual-data-preparation/commit/d24c0efa4579d1dab122d8db3c18e976504f9399))
* refactor to adopt koanf configuration management ([51eba89](https://github.com/instill-ai/visual-data-preparation/commit/51eba893746dceb2a0f57ecb6069f72a8f74a2b3))
### Bug Fixes
* fix dependency ([d418871](https://github.com/instill-ai/visual-data-preparation/commit/d418871d04fd20dff30da8b492b740f32d7f6fcd))
* fix wrong package ([740cda0](https://github.com/instill-ai/visual-data-preparation/commit/740cda07363488c66a27563ea91d6fbe935a1f57))
* remove unused file ([78e5d98](https://github.com/instill-ai/visual-data-preparation/commit/78e5d98c44da65ec37e73a36cac2c745a4e4bc97))
* use public dockerhub as our image repository ([#4](https://github.com/instill-ai/visual-data-preparation/issues/4)) ([f8ba1b8](https://github.com/instill-ai/visual-data-preparation/commit/f8ba1b8f260f091f49dd72936e46087c68f6b42a))
* use public zapadapter ([2bc8c6d](https://github.com/instill-ai/visual-data-preparation/commit/2bc8c6dda4849c013b6b830c36cb0cebb338239d))
| 60.671642 | 278 | 0.76802 | yue_Hant | 0.13186 |
e00fed1dd510111cebebd8be013f093e9843d9b3 | 570 | md | Markdown | docs_source_files/content/guidelines_and_recommendations/fresh_browser_per_test.zh-cn.md | Madh93/seleniumhq.github.io | b56ba28d00f6b39bb8cc7e5e440fcb00b2f1f1a6 | [
"Apache-2.0"
] | 121 | 2015-01-08T18:45:22.000Z | 2022-02-27T19:08:47.000Z | docs_source_files/content/guidelines_and_recommendations/fresh_browser_per_test.zh-cn.md | Madh93/seleniumhq.github.io | b56ba28d00f6b39bb8cc7e5e440fcb00b2f1f1a6 | [
"Apache-2.0"
] | 156 | 2015-04-12T05:14:42.000Z | 2019-11-17T21:07:40.000Z | docs_source_files/content/guidelines_and_recommendations/fresh_browser_per_test.zh-cn.md | Madh93/seleniumhq.github.io | b56ba28d00f6b39bb8cc7e5e440fcb00b2f1f1a6 | [
"Apache-2.0"
] | 186 | 2015-01-08T18:45:27.000Z | 2021-03-23T22:34:59.000Z | ---
title: "Fresh browser per test"
weight: 9
---
{{% notice info %}}
<i class="fas fa-language"></i> 页面需要从英语翻译为简体中文。
您熟悉英语与简体中文吗?帮助我们翻译它,通过 pull requests 给我们!
{{% /notice %}}
Start each test from a clean known state.
Ideally, spin up a new virtual machine for each test.
If spinning up a new virtual machine is not practical,
at least start a new WebDriver for each test.
For Firefox, start a WebDriver with your known profile.
```java
FirefoxProfile profile = new FirefoxProfile(new File("pathToFirefoxProfile"));
WebDriver driver = new FirefoxDriver(profile);
```
| 27.142857 | 78 | 0.742105 | eng_Latn | 0.928251 |
e00ffbcd88d80253c4a1534cc530ca4dd71c8a41 | 352 | md | Markdown | docs/DocBuilder/stubs/page_git_command_feature_content.md | ArtARTs36/GitHandler | e8bc297e5138e3c24efdc7ea7a1ef71a1d798aad | [
"MIT"
] | 21 | 2021-03-28T19:38:40.000Z | 2021-12-19T14:35:21.000Z | docs/DocBuilder/stubs/page_git_command_feature_content.md | ArtARTs36/GitHandler | e8bc297e5138e3c24efdc7ea7a1ef71a1d798aad | [
"MIT"
] | 2 | 2021-08-23T12:02:08.000Z | 2021-09-23T22:03:26.000Z | docs/DocBuilder/stubs/page_git_command_feature_content.md | ArtARTs36/GitHandler | e8bc297e5138e3c24efdc7ea7a1ef71a1d798aad | [
"MIT"
] | null | null | null | ### * {$featureName}
#### Method Signature:
{$featureSuggestsClasses}
```php
{$featureMethodSignature}
```
#### Equals Git Command:
{$realGitCommands}
#### Example:
```php
use \ArtARTs36\GitHandler\Factory\LocalGitFactory;
(new LocalGitFactory())->factory(__DIR__)->{$factoryMethodName}()->{$featureMethodName}({$featureExampleArguments});
```
| 16 | 116 | 0.701705 | yue_Hant | 0.87568 |
e011ac1a17e0c461ba297b56439c4100e8b82037 | 467 | md | Markdown | README.md | orgsync/rec | 3674389c275c4897d534a5607cd924f5bf3484b1 | [
"MIT"
] | null | null | null | README.md | orgsync/rec | 3674389c275c4897d534a5607cd924f5bf3484b1 | [
"MIT"
] | null | null | null | README.md | orgsync/rec | 3674389c275c4897d534a5607cd924f5bf3484b1 | [
"MIT"
] | null | null | null | # Rec
**Rec**ommendations as you type.
[Demo](http://orgsync.github.com/rec) | [Tests](http://orgsync.github.com/rec/test)
## Install
Rec's two dependencies are jQuery and Underscore, so include them before Rec.
```html
<script src='https://raw.github.com/orgsync/rec/master/rec.js'></script>
```
...or use [bower](https://github.com/twitter/bower)...
```
bower install rec
```
...and get the JavaScript file from `components/rec/rec.js`.
## API
To come...
| 17.961538 | 83 | 0.680942 | eng_Latn | 0.340748 |
e011adc5346992610398b74c6f75dfb0693ebbae | 621 | markdown | Markdown | readme.markdown | le4ker/write-right-app | 6669a3849d54b7aa73ca5c00ebd563573ace670a | [
"MIT"
] | 1 | 2016-06-27T05:44:27.000Z | 2016-06-27T05:44:27.000Z | readme.markdown | le4ker/write-right-app | 6669a3849d54b7aa73ca5c00ebd563573ace670a | [
"MIT"
] | null | null | null | readme.markdown | le4ker/write-right-app | 6669a3849d54b7aa73ca5c00ebd563573ace670a | [
"MIT"
] | 1 | 2018-11-12T17:32:56.000Z | 2018-11-12T17:32:56.000Z | # WriteRight
WriteRight is an input method for android devices which predicts the future letters and shrinks the keys that are unlikely to be pressed next.
[Demo](http://www.youtube.com/watch?v=LjigGHm_1KY)
[Download for Android](https://market.android.com/details?id=panos.sakkos.softkeyboard.writeright)
## Future Work
Integrate [Anima](https://github.com/PanosSakkos/anima) into the prediction engine. Anima will emit time and location-aware predictions.
## License
MIT
## Supporting the repo
In case you want to support the development, feel free to send a few bits here 1NyZsiyYRsA3mzQZTHBWgEPHVuaE5zkN8r
| 29.571429 | 142 | 0.79066 | eng_Latn | 0.942943 |
e01209581daf50069abb56dafa2b41da3cc34a84 | 3,760 | md | Markdown | docs/vs-2015/csharp-ide/reorder-parameters-refactoring-csharp.md | HolgerJeromin/visualstudio-docs.de-de | 511a0e68a681a91d8b53969c83fbc7a7fb8e043f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/csharp-ide/reorder-parameters-refactoring-csharp.md | HolgerJeromin/visualstudio-docs.de-de | 511a0e68a681a91d8b53969c83fbc7a7fb8e043f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/csharp-ide/reorder-parameters-refactoring-csharp.md | HolgerJeromin/visualstudio-docs.de-de | 511a0e68a681a91d8b53969c83fbc7a7fb8e043f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Neuanordnen von Parametern (C#) | Microsoft-Dokumentation
ms.custom: ''
ms.date: 2018-06-30
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- devlang-csharp
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- vs.csharp.refactoring.reorder
dev_langs:
- CSharp
helpviewer_keywords:
- refactoring [C#], Reorder Parameters
- Reorder Parameters refactoring [C#]
ms.assetid: 4dabf21a-a9f0-41e9-b11b-55760cf2bd90
caps.latest.revision: 26
author: gewarren
ms.author: gewarren
manager: wpickett
ms.openlocfilehash: d0d0428449ce5c78ae098a68d0466262cedd32ef
ms.sourcegitcommit: 55f7ce2d5d2e458e35c45787f1935b237ee5c9f8
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 08/22/2018
ms.locfileid: "47512503"
---
# <a name="reorder-parameters-refactoring-c"></a>Refactoring „Parameter neu anordnen“ (C#)
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
`Reorder Parameters` ist ein Visual C#-Vorgang refactoring, die eine einfache Möglichkeit zum Ändern der Reihenfolge der Parameter für Methoden, Indexer und Delegaten bereitstellt. `Reorder Parameters` Ändert die Deklaration, und an allen Positionen, in dem das Element aufgerufen wird, werden die Parameter entsprechend die neue Reihenfolge neu angeordnet.
Zum Ausführen der `Reorder Parameters` -Operation, platzieren Sie den Cursor auf oder neben einer Methode, Indexer oder Delegat. Wenn der Cursor positioniert wurde, rufen die `Reorder Parameters` Vorgang durch Drücken der Tastenkombination oder durch Klicken auf den Befehl aus dem Kontextmenü.
> [!NOTE]
> Sie können den ersten Parameter in eine Erweiterungsmethode nicht ändern.
### <a name="to-reorder-parameters"></a>Parameter neu anordnen.
1. Erstellen einer Klassenbibliothek mit dem Namen `ReorderParameters`, und Ersetzen Sie `Class1` mit dem folgenden Beispielcode.
```csharp
class ProtoClassA
{
// Invoke on 'MethodB'.
public void MethodB(int i, bool b) { }
}
class ProtoClassC
{
void D()
{
ProtoClassA MyClassA = new ProtoClassA();
// Invoke on 'MethodB'.
MyClassA.MethodB(0, false);
}
}
```
2. Platzieren Sie den Cursor auf `MethodB`, entweder in der Deklaration der Methode oder der Aufruf der Methode.
3. Auf der **Umgestalten** Menü klicken Sie auf **Parameter neu anordnen**.
Die **Parameter neu anordnen** Dialogfeld wird angezeigt.
4. In der **Parameter neu anordnen** wählen Sie im Dialogfeld `int i` in die **Parameter** aus, und klicken Sie dann auf die Schaltfläche nach unten.
Alternativ können Sie ziehen `int i` nach `bool b` in die **Parameter** Liste.
5. In der **Parameter neu anordnen** Dialogfeld klicken Sie auf **OK**.
Wenn die **Vorschau der verweisänderungen** Option ausgewählt ist, der **Parameter neu anordnen** im Dialogfeld die **Vorschau der Änderungen – Parameter neu anordnen** Dialogfeld wird angezeigt. Es bietet eine Vorschau der Änderungen in der Parameterliste für `MethodB` in der Signatur und den Methodenaufruf.
1. Wenn die **Vorschau der Änderungen – Parameter neu anordnen** klicken Sie im angezeigten Dialogfeld **übernehmen**.
In diesem Beispiel ist die Deklaration der Methode und alle der Methodenaufruf Websites für `MethodB` werden aktualisiert.
## <a name="remarks"></a>Hinweise
Sie können die Parameter aus der Deklaration einer Methode oder ein Methodenaufruf neu anordnen. Positionieren Sie den Cursor auf oder neben der Deklaration der Methoden- oder Delegatparameter, aber nicht im Text.
## <a name="see-also"></a>Siehe auch
[Refactoring (C#)](../csharp-ide/refactoring-csharp.md) | 43.72093 | 359 | 0.726862 | deu_Latn | 0.986435 |
e01224b77d29199c333c23df24f5f22401a78f8a | 247 | md | Markdown | docs/privacy/Shield.md | sambacha/baseline-truffle-box | 682cabf654d83124246765362481f3d5bc6f383f | [
"Unlicense"
] | 3 | 2020-11-16T18:16:10.000Z | 2021-01-22T09:16:22.000Z | docs/privacy/Shield.md | sambacha/baseline-truffle-box | 682cabf654d83124246765362481f3d5bc6f383f | [
"Unlicense"
] | null | null | null | docs/privacy/Shield.md | sambacha/baseline-truffle-box | 682cabf654d83124246765362481f3d5bc6f383f | [
"Unlicense"
] | 1 | 2020-11-16T18:16:44.000Z | 2020-11-16T18:16:44.000Z | ## `Shield`
### `constructor(address _verifier, uint256 _treeHeight)` (public)
### `getVerifier() → address` (external)
### `verifyAndPush(uint256[] _proof, uint256[] _publicInputs, bytes32 _newCommitment) → bool` (external)
| 9.148148 | 104 | 0.65587 | eng_Latn | 0.2424 |
e0124ba673f5e355183e5da3289a4d451318fc71 | 2,345 | md | Markdown | README.md | OpenMandrivaSoftware/kahinah | b8325e59f9610558020071b6320fb1acb1bbb853 | [
"MIT"
] | null | null | null | README.md | OpenMandrivaSoftware/kahinah | b8325e59f9610558020071b6320fb1acb1bbb853 | [
"MIT"
] | null | null | null | README.md | OpenMandrivaSoftware/kahinah | b8325e59f9610558020071b6320fb1acb1bbb853 | [
"MIT"
] | null | null | null | Kahinah
=======
Kahinah is a QA system. Upon receiving notification of a built package, it runs
a set of tasks (usually determined from where it came from) and then displays it
for users to upvote or downvote until it can be published or rejected.
Requirements
------------
* Git
* mktemp -d
* Go 1.5+
* sqlite3 or mysql
Setup
-----
1. Configure your application in app.toml (see app.toml.example for details).
1. Configure news.md.example.
1. Make sure your databases are created, if needed.
1. Run it.
Tips
----
* Make sure to run `go clean -i -r -x` before every build.
* Clear out old libraries.
* Update the source every time with `go get -u`.
* If you get errors, the above cleaning command helps.
Administration
--------------
To be able to control users and permissions, you need to have an admin account.
1. Login with your account to Persona. This will create an account in the database.
1. Add your email address to adminwhitelist (you may add several emails, seperated by semicolons)
1. Restart Kahinah and navigate to /admin. Add the kahinah.admin permission to your users.
It's recommended that you remove your email addresses from adminwhitelist afterwards, and replace them with impossible ones (e.g. 1y8oehowhfoinwdaf).
Whitelist
---------
Kahinah, by default, allows everyone to vote up/down builds. This may have unintended effects.
Currently, there is a whitelist system embedded - set "whitelist" in app.conf to true to use it.
This allows only users in the whitelist to vote. It also adds a permanent notice to the front page
that indicates that it is only available for whitelisted users.
Administrators should use the above administration tool to add the "kahinah.whitelist" permission
to users that should be allowed to vote.
Database
--------
`db_type` is either `sqlite3` or `mysql`. For `sqlite3`, the `db_name` is the filename of the database.
The rest should be self-explainatory. Hopefully.
No major database changes are expected, I think. Maybe we'll introduce migrations with hood or goose
when the time comes to change the database schema, but I don't anticipate major changes anytime soon.
News
----
Markdown.
License
-------
As of v4, MIT licensed. See [the LICENSE file](http://robxu9.mit-license.org)
for more information.
Previous versions were licensed under the AGPLv3.
| 29.683544 | 149 | 0.747548 | eng_Latn | 0.998838 |
e014065555bcce35f3113beb600b0b7c63fbf1f6 | 7,721 | md | Markdown | README.md | ovh/owstats-ui | 0b17f8bb86c1e517f3c77c413b6b2169996b3878 | [
"Apache-2.0"
] | 7 | 2022-01-24T13:19:43.000Z | 2022-01-25T09:44:37.000Z | README.md | ovh/owstats-ui | 0b17f8bb86c1e517f3c77c413b6b2169996b3878 | [
"Apache-2.0"
] | null | null | null | README.md | ovh/owstats-ui | 0b17f8bb86c1e517f3c77c413b6b2169996b3878 | [
"Apache-2.0"
] | null | null | null | # OVHcloud Web Statistics
# Introduction
OWstats is a webanalytics service which reports website usage (traffic, users localization...) to OVHcloud webhosting customers thanks to server logs analysis. There are 3 building blocks to this service :
- owstats-backend : OVHcloud webhosting Apache servers logs are parsed, transformed and the result is stored in databases
- owstats-api : databases built with owstats-backend are exposed through owstats-api
- owstats-ui : frontend part of the application is displaying data to final users
This service is already installed and available for all OVHcloud hostings (see https://docs.ovh.com/ie/en/hosting/shared_view_my_websites_logs_and_statistics/).
The aim of this open source repository is to provide more flexibilty to users who want to analyse deeper their server logs.
Currently, only owstats-ui code is open source. It is planned to release backend and api code in the near future so owstats can be fully installed on other servers (and not limited to OVHcloud webhosting customers).
# A. Generate OVHcloud API credentials
In order to use owstats-ui, you will need to have OVHcloud API credentials. You can generate them by following this documentation :
- https://docs.ovh.com/ie/en/api/first-steps-with-ovh-api/#advanced-usage-pair-ovhcloud-apis-with-an-application_2
- in createToken page, set validity as "Unlimited" and rights as "GET" "/hosting/web*" (see picture below)
- Script name and description can have any value you wish
- note down Application Key, Application Secret and Consumer Key generated.

# B. Owstats-ui local development
## Pre-requesites
- git : https://ovh.github.io/manager/how-to/#install-git
- nvm (or node v12, v14 or v16) : https://ovh.github.io/manager/how-to/#install-node-js
- an OVHcloud Web Hosting plan (https://www.ovh.ie/web-hosting/)
- OVHcloud API credentials generated (see "Generate OVHcloud API credentials" section)
## Local developpement configuration
In your local machine, install dependencies and launch development server by doing the following commands in a terminal :
```
nvm install 16
nvm use 16
npm install --global yarn
git clone https://github.com/ovh/owstats-ui.git
cd owstats-ui
yarn install
yarn local-config
yarn serve
```
Then, you can access your local logs website (replace hostingname by your hosting) :
```
http://localhost:8080/hostingname/owstats#/dashboard
```
## Launch cypress (front end testing dashboard)
Be sure to have no process running on port 8090 of your localhost and then run this command in terminal. It will open up a dashboard where you can display owstats-ui with test data :
```
yarn cypress-dashboard
```
See cypress documentation for more information : https://docs.cypress.io/
Troubleshooting : if you encounter the error "No version of Cypress is installed in:" you can install cypress manually with the command "./node_modules/.bin/cypress install" (else, other installation troubleshooting and methods: https://docs.cypress.io/guides/getting-started/installing-cypress#System-requirements)
## Other yarn commands
### Compiles and minifies for production
```
yarn build
```
### Lints and fixes files
```
yarn lint
```
### Tests
```
yarn test
```
# C. Deploy owstats UI
You can use owstats-ui repository to deploy your own webanalytics service. For now, this functionality is limited to display OVHcloud webhosting data. It is planned to release API and backend code in order to install full owstats stack for other servers data (VPS...).
For deployment, you need to understand the key distinction between "hosting" and "domain name" :
- hosting: your OVHcloud webhosting service. Data displayed in your webanalytics service will be data related to this webhosting.
- domain name: the domain name where your webanalytics service will be accessible (url). Data displayed in your webanalytics service will not depend on the domain name chosen, only on the hosting.
- after deployment, your webanalytics service will be accessible at the url "{domain}/{hosting}/owstats/#/"
Another important point to understand is who can access your webanalytics service after deployment. Your webanalytics service will be protected by username and password. These credentials are the ones administrated in "User administration" section of "Statistics and Logs" panel in OVHcloud webhosting manager ( https://docs.ovh.com/fr/hosting/mutualise-consulter-les-statistiques-et-les-logs-de-mon-site/#administration-des-utilisateurs )
## Pre-requesites
- git : https://ovh.github.io/manager/how-to/#install-git
- nvm (or node v12, v14 or v16) : https://ovh.github.io/manager/how-to/#install-node-js
- an OVHcloud Web Hosting plan (https://www.ovh.ie/web-hosting/)
- OVHcloud API credentials generated (see "Generate OVHcloud API credentials" section)
## Deploy: Domain name attached to OVHcloud hosting
The most simple option is to deploy your webanalytics service on a domain name attached to your OVHcloud webhosting. In order to do so, perfom following commands in your local machine terminal :
```
nvm install 16
nvm use 16
npm install --global yarn
git clone https://github.com/ovh/owstats-ui.git
cd owstats-ui
yarn install
yarn deploy
```
Then, select "OVHcloud Webhosting" when asked the question :
```
Do you want your analytics service to be published at an url associated to your OVHcloud hosting or at another url ?
```
Follow the instructions and your webanalytics service will automatically be deployed.
## Deploy: Domain name in other hosting services
You also have the possibility to deploy your webanalytics service on a domain name outside OVHcloud webhosting. Be aware that :
- this choice will affect only the url of your analytics service, data displayed will still be computed from the logs of your OVHcloud webhosting
- it requires more manual actions than deployment on an OVHcloud hosting domain name
```
nvm install 16
nvm use 16
npm install --global yarn
git clone https://github.com/ovh/owstats-ui.git
cd owstats-ui
yarn install
yarn deploy
```
Then, select "Other server" when asked the question :
```
Do you want your analytics service to be published at an url associated to your OVHcloud hosting or at another url ?
```
Folder "deploy" is then created in owstats-ui directory. This folder contains all the files needed to deploy your webanalytics service. You need to upload them to the server which serves your domain name :
- ".env.owstats" : it stores your OVHcloud API credentials : be careful to not expose it publicly
- "login_owstats.php" : it must be accessible at the address "{domain}/login_owstats.php" and its variable "$location" must point to ".env.owstats" file location (by default it is "../.env.owstats")
- "root/{hosting}/owstats" folder: files in this folder must be accessible at the address "{domain}/{hosting}/owstats/{file_path}".
- For example : your domain name is "test.com" and your OVHcloud webhosting is "hosting.ovh". After running the command "yarn deploy', "countries.geojson" file which is in directory "deploy/root/hosting.ovh/owstats". After manual deployment, "countries.geojson" must be accessbile at the address "test.com/hosting.ovh/owstats/data/countries.json"
# D. Modify owstats-ui
You can modify the code of owstats-ui to display additional data or change the styles :
- it uses vuejs, vue router and vuex
- You can use any raw statistics that are documented in owstats API documentation : https://ovh.github.io/owstats-ui/
- styles (colors, spacing...) can be customized easily using the file "src/assets/sass/_variables.scss"
- after modification, you can dislay your changes locally (section B) or deploy your analytics service (section C) | 50.796053 | 439 | 0.778785 | eng_Latn | 0.988312 |
e0141b64fd01a74728213e4d894633d1f04c82ce | 1,867 | md | Markdown | problems/4sum-ii/README.md | jianpingbadao/leetcode | 0ddead0b7820c05e353e08c551418fc8e318ee96 | [
"MIT"
] | null | null | null | problems/4sum-ii/README.md | jianpingbadao/leetcode | 0ddead0b7820c05e353e08c551418fc8e318ee96 | [
"MIT"
] | null | null | null | problems/4sum-ii/README.md | jianpingbadao/leetcode | 0ddead0b7820c05e353e08c551418fc8e318ee96 | [
"MIT"
] | null | null | null | <!--|This file generated by command(leetcode description); DO NOT EDIT. |-->
<!--+----------------------------------------------------------------------+-->
<!--|@author openset <[email protected]> |-->
<!--|@link https://github.com/openset |-->
<!--|@home https://github.com/openset/leetcode |-->
<!--+----------------------------------------------------------------------+-->
[< Previous](https://github.com/openset/leetcode/tree/master/problems/minimum-moves-to-equal-array-elements "Minimum Moves to Equal Array Elements")
[Next >](https://github.com/openset/leetcode/tree/master/problems/assign-cookies "Assign Cookies")
## [454. 4Sum II (Medium)](https://leetcode.com/problems/4sum-ii "四数相加 II")
<p>Given four lists A, B, C, D of integer values, compute how many tuples <code>(i, j, k, l)</code> there are such that <code>A[i] + B[j] + C[k] + D[l]</code> is zero.</p>
<p>To make problem a bit easier, all A, B, C, D have same length of N where 0 ≤ N ≤ 500. All integers are in the range of -2<sup>28</sup> to 2<sup>28</sup> - 1 and the result is guaranteed to be at most 2<sup>31</sup> - 1.</p>
<p><b>Example:</b></p>
<pre>
<b>Input:</b>
A = [ 1, 2]
B = [-2,-1]
C = [-1, 2]
D = [ 0, 2]
<b>Output:</b>
2
<b>Explanation:</b>
The two tuples are:
1. (0, 0, 0, 1) -> A[0] + B[0] + C[0] + D[1] = 1 + (-2) + (-1) + 2 = 0
2. (1, 1, 0, 0) -> A[1] + B[1] + C[0] + D[0] = 2 + (-1) + (-1) + 0 = 0
</pre>
<p> </p>
### Related Topics
[[Hash Table](https://github.com/openset/leetcode/tree/master/tag/hash-table/README.md)]
[[Binary Search](https://github.com/openset/leetcode/tree/master/tag/binary-search/README.md)]
### Similar Questions
1. [4Sum](https://github.com/openset/leetcode/tree/master/problems/4sum) (Medium)
| 42.431818 | 232 | 0.53669 | eng_Latn | 0.308225 |
e0141df3ded19bc5d932095eec3d5fc66a3b8d3f | 762 | md | Markdown | README.md | ignasi35/akka-grpc-play-quickstart-java | 4ad775cade0aee46231e5cbf6c0a56fe249a7041 | [
"CC0-1.0"
] | null | null | null | README.md | ignasi35/akka-grpc-play-quickstart-java | 4ad775cade0aee46231e5cbf6c0a56fe249a7041 | [
"CC0-1.0"
] | null | null | null | README.md | ignasi35/akka-grpc-play-quickstart-java | 4ad775cade0aee46231e5cbf6c0a56fe249a7041 | [
"CC0-1.0"
] | null | null | null | # Play Java gRPC Example
This example is described in the [Play Java gRPC Example site](https://developer.lightbend.com/guides/play-java-grpc-example/).
This is an example application that shows how to use Akka gRPC to both expose and use gRPC services inside a Play application.
For detailed documentation refer to https://www.playframework.com/documentation/latest/Home and https://developer.lightbend.com/docs/akka-grpc/current/.
## Sample license
Written in 2018 by Lightbend, Inc.
To the extent possible under law, the author(s) have dedicated all copyright and related
and neighboring rights to this template to the public domain worldwide.
This template is distributed without any warranty. See <http://creativecommons.org/publicdomain/zero/1.0/>.
| 44.823529 | 152 | 0.796588 | eng_Latn | 0.990849 |
e0151050d0aaacd0ced7844d69170e09c1ddb26d | 425 | md | Markdown | README.md | iBuildApp/android_module_TapToCall | 82ccd535b49bfaa3dcbb94954760822a8c66b907 | [
"Apache-2.0"
] | 1 | 2015-09-16T14:06:27.000Z | 2015-09-16T14:06:27.000Z | README.md | iBuildApp/android_module_TapToCall | 82ccd535b49bfaa3dcbb94954760822a8c66b907 | [
"Apache-2.0"
] | null | null | null | README.md | iBuildApp/android_module_TapToCall | 82ccd535b49bfaa3dcbb94954760822a8c66b907 | [
"Apache-2.0"
] | null | null | null | Use our code to save yourself time on cross-platform, cross-device and cross OS version development and testing
# android module TapToCall
Tap to Call module allows users to make calls on predefined phone number.
**XML Structure declaration**
Tags:
- title - button name
- phone - phone number
Example:
<data>
<title><![CDATA[Button]]></title>
<phone><![CDATA[+12(345)1234123412]]></phone>
</data>
| 23.611111 | 111 | 0.701176 | eng_Latn | 0.953599 |
e0153b2eb1c9619f62c5208e9bde1063d46f3596 | 2,913 | md | Markdown | README.md | skoltech-nlp/certain-transformer | 00417c4a5fd63c574f96d9063d7e1c2d83f3904f | [
"Apache-2.0"
] | 8 | 2021-04-26T07:39:14.000Z | 2022-03-29T16:17:45.000Z | README.md | skoltech-nlp/certain-transformer | 00417c4a5fd63c574f96d9063d7e1c2d83f3904f | [
"Apache-2.0"
] | 1 | 2021-10-30T12:49:28.000Z | 2021-10-30T12:49:28.000Z | README.md | skoltech-nlp/certain-transformer | 00417c4a5fd63c574f96d9063d7e1c2d83f3904f | [
"Apache-2.0"
] | 1 | 2021-07-15T21:32:20.000Z | 2021-07-15T21:32:20.000Z | # Uncertainty Estimation for Transformer
This repository is about Uncertainty Estimation (UE) for classification tasks on GLUE based on the Transformer models for NLP. Namely, the repository contain codes related to the the paper ["How certinaty is your Transformer?"](https://www.aclweb.org/anthology/2021.eacl-main.157/) at the EACL-2021 conference on NLP.
## What the paper is about?
In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). The series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.
## More information and citation
You can learn more about the methods implemented in this repository in the following paper:
*Shelmanov, A., Tsymbalov, E., Puzyrev, D., Fedyanin, K., Panchenko, A., Panov, M. (2021): [How Certain is Your Transformer?](https://www.aclweb.org/anthology/2021.eacl-main.157/) In Proceeding of the 16th conference of the European Chapter of the Association for Computational Linguistics (EACL).*
If you found the materials presented in this paper and/or repository useful, please cite it as following:
```
@inproceedings{shelmanov-etal-2021-certain,
title = "How Certain is Your {T}ransformer?",
author = "Shelmanov, Artem and
Tsymbalov, Evgenii and
Puzyrev, Dmitri and
Fedyanin, Kirill and
Panchenko, Alexander and
Panov, Maxim",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-main.157",
pages = "1833--1840",
abstract = "In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). The series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.",
}
```
## Usage
```
HYDRA_CONFIG_PATH=../configs/sst2.yaml python ./run_glue.py
```
## Contact
For all the questions contact to Artem Shelmanov, Maxim Panov, or Alexander Panchenko (you can find emails in the paper) or just open an issue in this repository.
| 59.44898 | 556 | 0.772743 | eng_Latn | 0.987317 |
e01556665b50e8898bb446aa2cae2a7b296fd754 | 517 | md | Markdown | topics/medical-imaging/index.md | siraben/explore | 1437627e39aa608f2bf049551c31758a5e8a1133 | [
"CC-BY-4.0"
] | 3 | 2022-03-26T13:59:12.000Z | 2022-03-26T13:59:23.000Z | topics/medical-imaging/index.md | siraben/explore | 1437627e39aa608f2bf049551c31758a5e8a1133 | [
"CC-BY-4.0"
] | 19 | 2021-12-13T09:24:33.000Z | 2022-03-15T09:56:30.000Z | topics/medical-imaging/index.md | siraben/explore | 1437627e39aa608f2bf049551c31758a5e8a1133 | [
"CC-BY-4.0"
] | 1 | 2022-02-18T23:15:24.000Z | 2022-02-18T23:15:24.000Z | ---
aliases: biological-imaging, ultrasound-imaging
display_name: Medical imaging
related: imaging, image-processing, ultrasound, x-ray
short_description: Medical imaging tasks include methods for acquiring, processing, analyzing, and understanding tissue and organ images.
topic: medical-imaging
wikipedia_url: https://en.wikipedia.org/wiki/Medical_imaging
---
Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis, and medical intervention.
| 51.7 | 154 | 0.825919 | eng_Latn | 0.956027 |
e015a29a115172854db8599abcedd18f58a8e75f | 1,344 | md | Markdown | CHANGELOG.md | DasKeyboard/q-applet-github | f412bb38a46a8deeef7b382e0dadf32b7cb32f30 | [
"MIT"
] | 3 | 2019-02-16T22:29:39.000Z | 2022-01-20T10:00:02.000Z | CHANGELOG.md | DasKeyboard/q-applet-github | f412bb38a46a8deeef7b382e0dadf32b7cb32f30 | [
"MIT"
] | 5 | 2019-02-19T15:44:17.000Z | 2022-02-02T09:50:01.000Z | CHANGELOG.md | DasKeyboard/q-applet-github | f412bb38a46a8deeef7b382e0dadf32b7cb32f30 | [
"MIT"
] | 2 | 2019-10-25T00:36:51.000Z | 2021-04-25T04:08:03.000Z | #### 1.0.9 (2021-04-26)
##### New Features
* **configuration:** added key effect and color
#### 1.0.8 (2019-11-11)
##### Bug Fixes
* Changed polling rate to 15sec instead of 5min (e6f968e4)
#### 1.0.7 (2019-05-10)
##### Bug Fixes
* **fix:** Do not send signal when getting internet connection error
#### 1.0.6 (2019-04-16)
##### Bug Fixes
* **fix:** Handled internet connection error
#### 1.0.5 (2019-02-14)
##### Chore
* **syntax:** Github become GitHub with a new image
#### 1.0.4 (2019-02-05)
##### Bug Fixes
* **Request:** Requests will use daskeyboard proxy only to ask for new tokens (914511a1)
#### 1.0.3 (2019-01-28)
##### New Features
* **Notification message:** changed the content of the notification sent to the Das Keyboard Q app (9755d391)
##### Code Style Changes
* **Icon:** Changed applet icon (297617d6)
#### 1.0.2 (2018-12-28)
##### New Features
* **notifications:** clear color in key if notifications has been read (977f8eb2)
#### 1.0.1 (2018-12-27)
##### Documentation Changes
* **readme:** Two readme updated (827a4ed4)
##### New Features
* **action:** added action to view notifications on the Github website (69f69982)
* **authorization:** use Oauth2 (f2c3fa47)
* **daskeyboard-applet module:** upgraded to latest version (c25694eb)
#### 1.0.0 - (2018-10-30)
First release
| 19.764706 | 110 | 0.640625 | eng_Latn | 0.835545 |
e01605cc1264d6baee4621a28d4de8b95582f871 | 2,057 | markdown | Markdown | _posts/2009-12-23-million-dollar-quartet-your-discography-help-needed.markdown | Trott/blog.musicroutes.com | 0c6d32d857aad80c8d941d351da914f8aee9ee45 | [
"MIT"
] | null | null | null | _posts/2009-12-23-million-dollar-quartet-your-discography-help-needed.markdown | Trott/blog.musicroutes.com | 0c6d32d857aad80c8d941d351da914f8aee9ee45 | [
"MIT"
] | null | null | null | _posts/2009-12-23-million-dollar-quartet-your-discography-help-needed.markdown | Trott/blog.musicroutes.com | 0c6d32d857aad80c8d941d351da914f8aee9ee45 | [
"MIT"
] | null | null | null | ---
layout: post
status: publish
published: true
title: 'Million Dollar Quartet: Your Discography Help Needed'
author:
display_name: Trott
login: admin
email: [email protected]
url: http://musicroutes.com/
author_login: admin
author_email: [email protected]
author_url: http://musicroutes.com/
wordpress_id: 593
wordpress_url: http://blog.musicroutes.com/?p=593
date: '2009-12-23 16:43:00 -0800'
date_gmt: '2009-12-23 23:43:00 -0800'
tags:
- elvis presley
- usa for africa
- million dollar quartet
- johnny cash
- carl perkins
- jerry lee lewis
comments: []
---
<p>With a site like <a href="http://musicroutes.com/" target="_blank">Music Routes</a>, how can it not have the <strong>Million Dollar Quartet</strong> in the database?</p>
<p>Well, let me explain...</p>
<p>The <a href="http://en.wikipedia.org/wiki/Million_Dollar_Quartet" target="_blank">Million Dollar Quartet</a> is a famous jam session that featured <strong>Elvis Presley</strong>, <strong>Johnny Cash</strong> (or maybe not; see below), <strong>Carl Perkins</strong>, and <strong>Jerry Lee Lewis</strong>. Among 1956 recordings, this may be the closest thing to a <strong>USA For Africa</strong> type of one-off supergroup.</p>
<p>Here's the problem: It's unclear who is on what track. Some credible sources (such as Presley biographer Ernst Jorgensen) say that Cash isn't actually on the recording, that he was just there for the photo and wasn't even in the studio while the tape was rolling! Cash on the other hand, in his own book, insists that he was there the whole time and that his voice can be heard quietly and in a higher register than usual.</p>
<p>But even leaving aside the Cash question, it's still unclear exactly who is on each track. There were three or four other musicians in the studio in addition to the famous quartet (or trio). They're not all playing on every track! (Or are they?)</p>
<p><strong>Are there any tracks on <em>Million Dollar Quartet</em> that definitely feature all three of Perkins, Presley, and Lewis playing or singing?</strong></p>
| 62.333333 | 432 | 0.750122 | eng_Latn | 0.984677 |
e016598002f91a432ed3965d2115341d90cd4257 | 6,621 | md | Markdown | wiki/mysql.md | zesk06/dotfiles | fcb7f911fa182470e610644eaf053b1e4347ae99 | [
"MIT"
] | 1 | 2021-03-15T01:43:31.000Z | 2021-03-15T01:43:31.000Z | wiki/mysql.md | zesk06/dotfiles | fcb7f911fa182470e610644eaf053b1e4347ae99 | [
"MIT"
] | null | null | null | wiki/mysql.md | zesk06/dotfiles | fcb7f911fa182470e610644eaf053b1e4347ae99 | [
"MIT"
] | null | null | null | # mysql
## SQL beginer course
as seen [youtube](https://www.youtube.com/watch?v=HXV3zeQKqGY)
```mysql
# connect
mysql -h 127.0.0.1 -u mytest -p
# ##################################################################
# TABLE
# ##################################################################
# fields constraints could be
# - AUTO_INCREMENT pour la primary par ex
# - UNIQUE
# - NOT NULL
# - DEFAULT 'default value'
CREATE TABLE student (
student_id INT AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
major VARCHAR(32) DEFAULT 'Literracy',
PRIMARY KEY(student_id)
# FOREIGN KEY(mgr_id) REFERENCES other_table(other_column) ON DELETE SET NULL
);
DESCRIBE student;
# DROP TABLE student;
# ajout d'une colonne decimale, 3 chiffres, avec max deux decimales
ALTER TABLE student ADD gpa DECIMAL(3,2)
ALTER TABLE student DROP column gpa;
# ##################################################################
# INSERT
# ##################################################################
INSERT INTO student(name,major) VALUES ('Jack', 'Biology');
INSERT INTO student(name,major) VALUES ('Kate', 'Sociology');
INSERT INTO student(name) VALUES ('Claire');
INSERT INTO student(name,major) VALUES ('Jack', 'Biology');
INSERT INTO student(name,major) VALUES ('Mike', 'Computer Science');
# ##################################################################
# UPDATE
# ##################################################################
UPDATE student SET major = 'Bio' WHERE major = 'Biology'
UPDATE student SET major = 'Bio' WHERE major = 'Biology' AND student_id = 1;
UPDATE student SET major = 'Bio' WHERE major = 'Biology' OR student_id = 1;
# ##################################################################
# DELETE
# ###################################################################
DELETE from student WHERE major = 'Biology';
# ##################################################################
# SELECT
# ###################################################################
SELECT * FROM student;
SELECT name FROM student;
SELECT student.name FROM student;
SELECT student.name FROM student ORDER BY name;
SELECT student.name FROM student ORDER BY name DESC;
SELECT student.name FROM student ORDER BY name ASC LIMIT 2;
SELECT name,major FROM student WHERE major = 'Chemistry' ;
SELECT name,major FROM student WHERE major <> 'Chemistry' ;
SELECT name,major FROM student WHERE major = 'Chemistry' OR major = 'Computer Science';
SELECT name,major FROM student WHERE name IN ('Claire', 'Kate', 'Mike');
SELECT name as surname FROM student;
SELECT DISTINCT name FROM student;
# ###################################################################
# FOREIGN KEY
# ###################################################################
CREATE TABLE student (
student_id INT AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
major VARCHAR(32) DEFAULT 'Literracy',
best_friend INT,
PRIMARY KEY(student_id)
FOREIGN KEY(best_friend) REFERENCES student(student_id) ON DELETE SET NULL
);
# creating foreign key afterwards
ALTER TABLE employee ADD FOREIGN KEY(branch_id) REFERENCES branch(branch_id) ON DELETE SET NULL;
# ###################################################################
# FUNCTIONS
# ###################################################################
SELECT COUNT(student_id) from student;
SELECT AVG(salary) from employee;
SELECT SUM(salary) from employee;
SELECT SUM(sex) from employee GROUP BY sex;
# ###################################################################
# WILDCARDS
# ###################################################################
SELECT * FROM branch_supplier WHERE supplier_name LIKE '%Lab%';
SELECT * FROM employee WHERE birth_date LIKE '____-10-__';
# ###################################################################
# UNION
# ###################################################################
SELECT client_name FROM client
UNION SELECT supplier_name from branch_supplier;
SELECT client_name,branch_id FROM client
UNION SELECT supplier_name,branch_id from branch_supplier;
# ###################################################################
# JOIN
# ###################################################################
# INNER JOIN: get only row that match ON statement
SELECT employee.emp_id, employee.first_name, employee.last_name, branch.branch_name
FROM employee
JOIN branch
ON employee.emp_id = branch.mgr_id;
# LEFT JOIN: get all rows from LEFT table
SELECT employee.emp_id, employee.first_name, employee.last_name, branch.branch_name
FROM employee
LEFT JOIN branch
ON employee.emp_id = branch.mgr_id;
# RIGHT JOIN: get all rows from RIGHT table
SELECT employee.emp_id, employee.first_name, employee.last_name, branch.branch_name
FROM employee
RIGHT JOIN branch
ON employee.emp_id = branch.mgr_id;
# ###################################################################
# NESTED QUERIES
# ###################################################################
# find names of all employees who have sold over 30K to a single client
SELECT employee.first_name, employee.last_name
FROM employee
WHERE employee.emp_id IN (
SELECT emp_id FROM works_with WHERE total_sales > 30000
);
# select all clients of manager 102
# must limit to 1 to avoid errors in case of multiple results
SELECT client.client_name
FROM client
WHERE client.branch_id = (
SELECT branch.branch_id
FROM branch
WHERE branch.mgr_id = 102
LIMIT 1
);
# ###################################################################
# TRIGGER
# ###################################################################
CREATE TABLE trigger_test (
message VARCHAR(100),
emp_id VARCHAR(100)
);
DELIMITER $$;
CREATE
TRIGGER my_trigger BEFORE INSERT
ON employee
FOR EACH ROW BEGIN
INSERT INTO trigger_test VALUES('added new employee', NEW.first_name);
END$$
CREATE
TRIGGER my_trigger2 BEFORE INSERT
ON employee
FOR EACH ROW BEGIN
IF NEW.sex = 'M' THEN
INSERT INTO trigger_test VALUES('added new male', NEW.first_name);
ELSEIF NEW.sex = 'F' THEN
INSERT INTO trigger_test VALUES('added new female', NEW.first_name);
ELSE
INSERT INTO trigger_test VALUES('added new other', NEW.first_name);
END IF;
END$$
DELIMITER ;
INSERT INTO
employee
VALUES(110, "beth", "weasly", "1969-01-23", "F", 12000, NULL, 1);
#######
# VALUES(109, "john", "doe", "1969-01-23", "M", 12000, NULL, 1);
#######
```
| 33.439394 | 96 | 0.532548 | yue_Hant | 0.443819 |
e0172202c1e83fadce3ee29227f002b2582e5c91 | 5,562 | md | Markdown | articles/governance/policy/assign-policy-template.md | stebet/azure-docs | 92b61b1bb545665d497bb0dfec40cc1ccb5aa6b1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/governance/policy/assign-policy-template.md | stebet/azure-docs | 92b61b1bb545665d497bb0dfec40cc1ccb5aa6b1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/governance/policy/assign-policy-template.md | stebet/azure-docs | 92b61b1bb545665d497bb0dfec40cc1ccb5aa6b1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Quickstart: New policy assignment with templates"
description: In this quickstart, you use a Resource Manager template to create a policy assignment to identify non-compliant resources.
ms.date: 03/16/2020
ms.topic: quickstart
ms.custom: subject-armqs
---
# Quickstart: Create a policy assignment to identify non-compliant resources by using a Resource Manager template
The first step in understanding compliance in Azure is to identify the status of your resources.
This quickstart steps you through the process of creating a policy assignment to identify virtual
machines that aren't using managed disks. At the end of this process, you'll successfully identify virtual machines that aren't using managed
disks. They're _non-compliant_ with the policy assignment.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
## Prerequisites
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
before you begin.
## Create a policy assignment
In this quickstart, you create a policy assignment and assign a built-in policy definition called
_Audit VMs that do not use managed disks_. For a partial list of available built-in policies, see
[Azure Policy samples](./samples/index.md).
### Review the template
The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/101-azurepolicy-assign-builtinpolicy-resourcegroup/).
:::code language="json" source="~/quickstart-templates/101-azurepolicy-assign-builtinpolicy-resourcegroup/azuredeploy.json" range="1-36" highlight="26-34":::
The resource defined in the template is:
- [Microsoft.Authorization/policyAssignments](/azure/templates/microsoft.authorization/policyassignments)
### Deploy the template
> [!NOTE]
> Azure Policy service is free. For more information, see
> [Overview of Azure Policy](./overview.md).
1. Select the following image to sign in to the Azure portal and open the template:
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-azurepolicy-assign-builtinpolicy-resourcegroup%2Fazuredeploy.json)
1. Select or enter the following values:
| Name | Value |
|------|-------|
| Subscription | Select your Azure subscription. |
| Resource group | Select **Create new**, specify a name, and then select **OK**. In the screenshot, the resource group name is _mypolicyquickstart\<Date in MMDD\>rg_. |
| Location | Select a region. For example, **Central US**. |
| Policy Assignment Name | Specify a policy assignment name. You can use the policy definition display if you want. For example, **Audit VMs that do not use managed disks**. |
| Rg Name | Specify a resource group name where you want to assign the policy to. In this quickstart, use the default value **[resourceGroup().name]**. **[resourceGroup()](../../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)** is a template function that retrieves the resource group. |
| Policy Definition ID | Specify **/providers/Microsoft.Authorization/policyDefinitions/0a914e76-4921-4c19-b460-a2d36003525a**. |
| I agree to the terms and conditions stated above | (Select) |
1. Select **Purchase**.
Some additional resources:
- To find more samples templates, see
[Azure Quickstart template](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Authorization&pageNumber=1&sort=Popular).
- To see the template reference, go to
[Azure template reference](/azure/templates/microsoft.authorization/allversions).
- To learn how to develop Resource Manager templates, see
[Azure Resource Manager documentation](../../azure-resource-manager/management/overview.md).
- To learn subscription-level deployment, see
[Create resource groups and resources at the subscription level](../../azure-resource-manager/templates/deploy-to-subscription.md).
## Validate the deployment
Select **Compliance** in the left side of the page. Then locate the **Audit VMs that do not use
managed disks** policy assignment you created.

If there are any existing resources that aren't compliant with this new assignment, they appear
under **Non-compliant resources**.
For more information, see
[How compliance works](./how-to/get-compliance-data.md#how-compliance-works).
## Clean up resources
To remove the assignment created, follow these steps:
1. Select **Compliance** (or **Assignments**) in the left side of the Azure Policy page and locate
the **Audit VMs that do not use managed disks** policy assignment you created.
1. Right-click the **Audit VMs that do not use managed disks** policy assignment and select **Delete
assignment**.

## Next steps
In this quickstart, you assigned a built-in policy definition to a scope and evaluated its
compliance report. The policy definition validates that all the resources in the scope are compliant
and identifies which ones aren't.
To learn more about assigning policies to validate that new resources are compliant, continue to the
tutorial for:
> [!div class="nextstepaction"]
> [Creating and managing policies](./tutorials/create-and-manage.md) | 51.5 | 321 | 0.773103 | eng_Latn | 0.963853 |
e0174ad76f282a35065fbc8e47dca21db63d1bcd | 5,509 | md | Markdown | docs/CodeDoc/Atc/Index.md | TomMalow/atc | 3cdb9b43efbcc1ba553622226e75317b1cc32c15 | [
"MIT"
] | null | null | null | docs/CodeDoc/Atc/Index.md | TomMalow/atc | 3cdb9b43efbcc1ba553622226e75317b1cc32c15 | [
"MIT"
] | null | null | null | docs/CodeDoc/Atc/Index.md | TomMalow/atc | 3cdb9b43efbcc1ba553622226e75317b1cc32c15 | [
"MIT"
] | null | null | null | <div style='text-align: right'>
[References extended](IndexExtended.md)
</div>
# References
## [Atc](Atc.md)
- [AddressType](Atc.md#addresstype)
- [ArticleNumberType](Atc.md#articlenumbertype)
- [AtcAssemblyTypeInitializer](Atc.md#atcassemblytypeinitializer)
- [BooleanOperatorType](Atc.md#booleanoperatortype)
- [CardinalDirectionType](Atc.md#cardinaldirectiontype)
- [CasingStyle](Atc.md#casingstyle)
- [CollectionActionType](Atc.md#collectionactiontype)
- [DateTimeDiffCompareType](Atc.md#datetimediffcomparetype)
- [DropDownFirstItemType](Atc.md#dropdownfirstitemtype)
- [Enum<T>](Atc.md#enum<t>)
- [EnumGuidAttribute](Atc.md#enumguidattribute)
- [FileSystemWatcherChangeType](Atc.md#filesystemwatcherchangetype)
- [ForwardReverseType](Atc.md#forwardreversetype)
- [GlobalizationConstants](Atc.md#globalizationconstants)
- [GlobalizationLcidConstants](Atc.md#globalizationlcidconstants)
- [GridCell](Atc.md#gridcell)
- [IdentityRoleType](Atc.md#identityroletype)
- [IgnoreDisplayAttribute](Atc.md#ignoredisplayattribute)
- [InsertRemoveType](Atc.md#insertremovetype)
- [LeftRightType](Atc.md#leftrighttype)
- [LetterAccentType](Atc.md#letteraccenttype)
- [LocalizedDescriptionAttribute](Atc.md#localizeddescriptionattribute)
- [LogCategoryType](Atc.md#logcategorytype)
- [NumericAlphaComparer](Atc.md#numericalphacomparer)
- [OnOffType](Atc.md#onofftype)
- [Point2D](Atc.md#point2d)
- [Point3D](Atc.md#point3d)
- [SortDirectionType](Atc.md#sortdirectiontype)
- [TriggerActionType](Atc.md#triggeractiontype)
- [TupleEqualityComparer<T1, T2>](Atc.md#tupleequalitycomparer<t1-t2>)
- [UpDownType](Atc.md#updowntype)
- [YesNoType](Atc.md#yesnotype)
## [Atc.Collections](Atc.Collections.md)
- [ConcurrentHashSet<T>](Atc.Collections.md#concurrenthashset<t>)
## [Atc.Data](Atc.Data.md)
- [DataFactory](Atc.Data.md#datafactory)
## [Atc.Data.Models](Atc.Data.Models.md)
- [Culture](Atc.Data.Models.md#culture)
- [KeyValueItem](Atc.Data.Models.md#keyvalueitem)
- [LogKeyValueItem](Atc.Data.Models.md#logkeyvalueitem)
## [Atc.Extensions.BaseTypes](Atc.Extensions.BaseTypes.md)
- [DateTimeExtensions](Atc.Extensions.BaseTypes.md#datetimeextensions)
## [Atc.Helpers](Atc.Helpers.md)
- [ArticleNumberHelper](Atc.Helpers.md#articlenumberhelper)
- [CardinalDirectionTypeHelper](Atc.Helpers.md#cardinaldirectiontypehelper)
- [CultureHelper](Atc.Helpers.md#culturehelper)
- [DayOfWeekHelper](Atc.Helpers.md#dayofweekhelper)
- [DropDownFirstItemTypeHelper](Atc.Helpers.md#dropdownfirstitemtypehelper)
- [EnumHelper](Atc.Helpers.md#enumhelper)
- [MathHelper](Atc.Helpers.md#mathhelper)
- [ReflectionHelper](Atc.Helpers.md#reflectionhelper)
- [RegionInfoHelper](Atc.Helpers.md#regioninfohelper)
- [SimpleTypeHelper](Atc.Helpers.md#simpletypehelper)
- [ThreadHelper](Atc.Helpers.md#threadhelper)
## [Atc.Structs](Atc.Structs.md)
- [CartesianCoordinate](Atc.Structs.md#cartesiancoordinate)
## [System](System.md)
- [AppDomainExtensions](System.md#appdomainextensions)
- [ArgumentNullOrDefaultException](System.md#argumentnullordefaultexception)
- [ArgumentNullOrDefaultPropertyException](System.md#argumentnullordefaultpropertyexception)
- [ArgumentNullPropertyException](System.md#argumentnullpropertyexception)
- [ArgumentPropertyException](System.md#argumentpropertyexception)
- [ArgumentPropertyNullException](System.md#argumentpropertynullexception)
- [ArrayExtensions](System.md#arrayextensions)
- [BooleanExtensions](System.md#booleanextensions)
- [DateTimeOffsetExtensions](System.md#datetimeoffsetextensions)
- [DecimalExtensions](System.md#decimalextensions)
- [DoubleExtensions](System.md#doubleextensions)
- [EnumExtensions](System.md#enumextensions)
- [ExceptionExtensions](System.md#exceptionextensions)
- [IntegerExtensions](System.md#integerextensions)
- [ItemNotFoundException](System.md#itemnotfoundexception)
- [LongExtensions](System.md#longextensions)
- [StringExtensions](System.md#stringextensions)
- [StringHasIsExtensions](System.md#stringhasisextensions)
- [SwitchCaseDefaultException](System.md#switchcasedefaultexception)
- [TimeSpanExtensions](System.md#timespanextensions)
- [TypeExtensions](System.md#typeextensions)
- [UnexpectedTypeException](System.md#unexpectedtypeexception)
- [VersionExtensions](System.md#versionextensions)
## [System.Collections.Generic](System.Collections.Generic.md)
- [ReadOnlyListExtensions](System.Collections.Generic.md#readonlylistextensions)
## [System.Data](System.Data.md)
- [DataTableExtensions](System.Data.md#datatableextensions)
## [System.IO](System.IO.md)
- [MemoryStreamExtensions](System.IO.md#memorystreamextensions)
- [StreamExtensions](System.IO.md#streamextensions)
## [System.Net](System.Net.md)
- [HttpStatusCodeExtensions](System.Net.md#httpstatuscodeextensions)
## [System.Reflection](System.Reflection.md)
- [AssemblyExtensions](System.Reflection.md#assemblyextensions)
- [FieldInfoExtensions](System.Reflection.md#fieldinfoextensions)
- [MemberInfoExtensions](System.Reflection.md#memberinfoextensions)
- [MethodInfoExtensions](System.Reflection.md#methodinfoextensions)
- [PropertyInfoExtensions](System.Reflection.md#propertyinfoextensions)
## [System.Security.Claims](System.Security.Claims.md)
- [ClaimsPrincipalExtensions](System.Security.Claims.md#claimsprincipalextensions)
## [System.Text](System.Text.md)
- [StringBuilderExtensions](System.Text.md#stringbuilderextensions)
<hr /><div style='text-align: right'><i>Generated by MarkdownCodeDoc version 1.2</i></div>
| 38.795775 | 92 | 0.803231 | yue_Hant | 0.563789 |
e0177a553b88a3d945b1b23233706c63c10f56ec | 487 | md | Markdown | README.md | mgryszko/gtasks | 982fdd04e0952c5a1a1f937c36b0812f654bb96c | [
"Unlicense"
] | null | null | null | README.md | mgryszko/gtasks | 982fdd04e0952c5a1a1f937c36b0812f654bb96c | [
"Unlicense"
] | null | null | null | README.md | mgryszko/gtasks | 982fdd04e0952c5a1a1f937c36b0812f654bb96c | [
"Unlicense"
] | null | null | null | # Google Tasks list in a text format
Script that connects to Google Tasks and prints all task lists with all their tasks in a Markdown-like format.
## Installation
### Turn on the Google Tasks API
Follow the [Tasks API Ruby Quickstart](https://developers.google.com/google-apps/tasks/quickstart/ruby)
### Install the Google Client Library
Follow the [Tasks API Ruby Quickstart](https://developers.google.com/google-apps/tasks/quickstart/ruby)
## Usage
```
ruby list_tasks.rb
```
| 24.35 | 110 | 0.761807 | kor_Hang | 0.478778 |
e018839375331aee5201449bd7dc18774b1decb3 | 6,065 | md | Markdown | markdown templates/markdown-template-for-support-articles-symptom-cause-resolution.md | OpenLocalizationTestOrg/azure-docs-pr15_el-GR | 9f7579626c9f63b39b5039748978ac36e4d54ebc | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | markdown templates/markdown-template-for-support-articles-symptom-cause-resolution.md | OpenLocalizationTestOrg/azure-docs-pr15_el-GR | 9f7579626c9f63b39b5039748978ac36e4d54ebc | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | markdown templates/markdown-template-for-support-articles-symptom-cause-resolution.md | OpenLocalizationTestOrg/azure-docs-pr15_el-GR | 9f7579626c9f63b39b5039748978ac36e4d54ebc | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | <properties
pageTitle="Τίτλος σελίδας που εμφανίζει το πρόγραμμα περιήγησης tab και τα αποτελέσματα αναζήτησης"
description="Το άρθρο περιγραφή που θα εμφανίζεται σε σελίδες υποδοχής και στα περισσότερα αποτελέσματα αναζήτησης"
services="service-name"
documentationCenter="dev-center-name"
authors="GitHub-alias-of-only-one-author"
manager="manager-alias"
editor=""
tags="comma-separates-additional-tags-if-required"/>
<tags
ms.service="required"
ms.devlang="may be required"
ms.topic="article"
ms.tgt_pltfrm="may be required"
ms.workload="required"
ms.date="mm/dd/yyyy"
ms.author="Your MSFT alias or your full email address;semicolon separates two or more"/>
# <a name="title-maximum-120-characters-target-the-primary-keyword"></a>Τίτλος (μέγιστο 120 χαρακτήρες, προορισμού τη λέξη-κλειδί πρωτεύοντος)
_Χρησιμοποιήστε δευτερεύουσες λέξεις-κλειδιά 2-3 στην περιγραφή._
_Επιλέξτε ένα από τα εξής αποποίηση ευθυνών ανάλογα με το σενάριό σας. Εάν το άρθρο σας είναι agnostic μοντέλο ανάπτυξης, παραβλέψτε το._
[AZURE.INCLUDE [learn-about-deployment-models](../../includes/learn-about-deployment-models-rm-include.md)]μοντέλο κλασική ανάπτυξης.
[AZURE.INCLUDE [learn-about-deployment-models](../../includes/learn-about-deployment-models-classic-include.md)]
[AZURE.INCLUDE [learn-about-deployment-models](../../learn-about-deployment-models-both-include.md)]
## <a name="summary-optional-especially-when-the-article-is-short"></a>Σύνοψη (προαιρετικά, ιδίως όταν το άρθρο είναι μικρή)
- _Περιγράφει συνοπτικά το πρόβλημα._
- _Η ενότητα σύνοψης είναι ένα καλό μέρος για να χρησιμοποιήσετε διαφορετικό λέξεις-κλειδιά από τα αντίστοιχα στον τίτλο, αλλά φροντίστε να μην πραγματοποιείτε πολύ Αναφορικές. Τις προτάσεις πρέπει να ροής σωστά και είναι εύκολο να κατανοήσετε._
- _(Προαιρετικό) - εξαιρέσεις λίστας σχετικών σενάρια που δεν καλύπτεται σε αυτό το άρθρο. Για παράδειγμα, "σενάρια Linux/OSS δεν καλύπτεται σε αυτό το άρθρο"._
_Εάν πρόκειται για ένα άρθρο για το θέμα της χρέωσης, συμπεριλάβετε το παρακάτω σημείωση (τη σημείωση παρακάτω είναι λίγο διαφορετικά από τον στο κάτω μέρος αυτού του άρθρου):_
> [AZURE.NOTE] Εάν χρειάζεστε περισσότερη βοήθεια σε οποιοδήποτε σημείο σε αυτό το άρθρο, επικοινωνήστε με [επικοινωνία με την υποστήριξη](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) για να λάβετε το πρόβλημα επιλυθεί γρήγορα.
_Εάν δεν είναι ένα άρθρο χρεώσεων, χρησιμοποιήστε την παρακάτω αναφορά:_
[AZURE.INCLUDE [support-disclaimer](../../includes/support-disclaimer.md)]
## <a name="symptom"></a>Σύμπτωμα
- _Ποιες ενέργειες θα το χρήστη να προσπαθείτε να ολοκληρώσετε;_
- _Τι γίνεται απέτυχε;_
- _Τι συστήματα και λογισμικό θα ο χρήστης χρησιμοποιούσαν;_
- _Ποια μηνύματα θα μπορούσε να έχει εμφανίζεται σφάλμα;_
- _Συμπερίληψη στιγμιότυπου οθόνης εάν είναι δυνατό._
## <a name="cause"></a>Αιτία
- _Τι κάνει αυτό το πρόβλημα._
## <a name="solution"></a>Λύση
- _Εάν είναι δυνατό να προσθέσετε στιγμιότυπα οθόνης._
- _Εάν υπάρχουν πολλές λύσεις, να τα τοποθετήσετε με τη σειρά των πολυπλοκότητα και παρέχει οδηγίες σχετικά με τον τρόπο για να ορίσετε κάποιες από αυτές._
| <em>Έκδοση 1: Άρθρο σας είναι agnostic μοντέλο ανάπτυξης</em> | <em>Έκδοση 2: Βήματα για τη διαχείριση πόρων και κλασική είναι στην σε μεγάλο βαθμό τα ίδια</em> | <em>Έκδοση 3: Βήματα για τη διαχείριση πόρων και κλασική κυρίως είναι διαφορετικά. <br />Σε αυτήν την περίπτωση, χρησιμοποιήστε την <a href="https://github.com/Azure/azure-content-pr/blob/master/contributor-guide/custom-markdown-extensions.md#simple-selectors">απλή επιλογέων τεχνική στο Github</a>. <br />Σημείωση: Εικονική άρθρα για ARM εξαιρέσεις και δεν θα πρέπει να χρησιμοποιήσετε τον επιλογέα ARM/κλασική.</em> |
|:------------------------------------------------------|:-----------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <p><h3>Λύση 1</h3><em>(το πιο απλός και πιο αποτελεσματική)</em></p><ol><li>[Βήμα 1]</li><li>[Βήμα 2]</li></ol><p><h3>Λύση 2</h3><em>(όσο λιγότερα απλό ή αποτελεσματικές)</em></p><ol><li>[Βήμα 1]</li><li>[Βήμα 2]</li></ol><br /><br /><br /><br /><br /><br /><br /><br /> | <p><h3>Λύση 1</h3><em>(το πιο απλός και πιο αποτελεσματική)</em></p><ol><li>[Βήμα 1]</li><li>Εάν χρησιμοποιείτε το μοντέλο κλασική ανάπτυξης, [κάντε το εξής].<br />Εάν χρησιμοποιείτε το μοντέλο ανάπτυξης για τη διαχείριση πόρων, [κάντε το εξής].</li><li>[Βήμα 3]</li></ol><p><h3>Λύση 2</h3><em>(όσο λιγότερα απλό ή αποτελεσματικές)</em></p><ol><li>[Βήμα 1]</li><li>Εάν χρησιμοποιείτε το μοντέλο κλασική ανάπτυξης, [κάντε το εξής].<br />Εάν χρησιμοποιείτε το μοντέλο ανάπτυξης για τη διαχείριση πόρων, [κάντε το εξής].</li><li>[Βήμα 3]</li></ol> | <img src="media/markdown-template-for-support-articles-symptom-cause-resolution/rm-classic.png" alt="ARM-Classic"><p><h3>Λύση 1</h3><em>(το πιο απλός και πιο αποτελεσματική)</em></p><ol><li>[Βήμα 1]</li><li>[Βήμα 2]</li></ol><p><h3>Λύση 2</h3><em>(όσο λιγότερα απλό ή αποτελεσματικές)</em></p><ol><li>[Βήμα 1]</li><li>[Βήμα 2]</li></ol><br /><br /><br /><br /> |
## <a name="next-steps"></a>Επόμενα βήματα
_Συμπερίληψη αυτής της ενότητας εάν υπάρχουν μπετόν 1 -3, ιδιαίτερα σχετικές επόμενα βήματα που πρέπει να εκτελεί ο χρήστης. Διαγραφή εάν υπάρχουν χωρίς επόμενα βήματα. Δεν είναι ένα σημείο για μια λίστα με συνδέσεις. Εάν συμπεριλάβετε συνδέσεις σε επόμενα βήματα, βεβαιωθείτε ότι για να συμπεριλάβετε κείμενο για να εξηγήσετε γιατί τα επόμενα βήματα είναι σχετική / σημαντικές._
_Εάν πρόκειται για ένα άρθρο σχετικά με το θέμα χρεώσεων, συμπεριλάβετε το παρακάτω σημείωση (τη σημείωση παρακάτω είναι λίγο διαφορετικά από τον στην αρχή αυτού του άρθρου):_
> [AZURE.NOTE] Εάν εξακολουθείτε να έχετε περισσότερες ερωτήσεις, λάβετε [Επικοινωνήστε με την υποστήριξη](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) για να λάβετε το πρόβλημα επιλυθεί γρήγορα.
| 86.642857 | 1,184 | 0.723001 | ell_Grek | 0.999378 |
e019c5df5df1977c2079552a12fba1327b022ec8 | 2,412 | md | Markdown | content/home/experience.md | mroavi/academic-resume | 3de9ceff53d7a2da3c61302cbaace74988f08876 | [
"MIT"
] | null | null | null | content/home/experience.md | mroavi/academic-resume | 3de9ceff53d7a2da3c61302cbaace74988f08876 | [
"MIT"
] | null | null | null | content/home/experience.md | mroavi/academic-resume | 3de9ceff53d7a2da3c61302cbaace74988f08876 | [
"MIT"
] | null | null | null | ---
# An instance of the Experience widget.
# Documentation: https://wowchemy.com/docs/page-builder/
widget: experience
# This file represents a page section.
headless: true
# Order that this section appears on the page.
weight: 40
title: Experience
subtitle:
# Date format for experience
# Refer to https://wowchemy.com/docs/customization/#date-format
date_format: Jan 2006
# Experiences.
# Add/remove as many `experience` items below as you like.
# Required fields are `title`, `company`, and `date_start`.
# Leave `date_end` empty if it's your current employer.
# Begin multi-line descriptions with YAML's `|2-` multi-line prefix.
experience:
- title: Teacher
company: Eindhoven University of Technology
company_url: 'tue.nl'
company_logo: tue
location: Eindhoven, The Netherlands
date_start: '2020-02-01'
date_end: ''
description: |2-
Courses:
* [Autonomous Vehicles (5AID0)](https://www.es.ele.tue.nl/education/5aid0/)
* Embedded Visual Control (5LIA0)
- title: Teaching Assistant
company: Eindhoven University of Technology
company_url: 'tue.nl'
company_logo: tue
location: Eindhoven, The Netherlands
date_start: '2018-02-01'
date_end: ''
description: |3-
Courses:
* [Intelligent Architectures (5LIL0)](https://www.es.ele.tue.nl/~heco/courses/IA-5LIL0/index.html)
* [Embedded Systems Laboratory (5LIB0)](https://www.tue.nl/en/research/research-groups/electronic-systems/embedded-control-systems-lab/)
* [Computation I (5ELA0)](https://research.tue.nl/nl/courses/computation-i-hardwaresoftware-interface-3)
- title: Embedded Systems Designer
company: Philips
company_url: 'https://www.philips.com/a-w/about/innovation/research.html'
company_logo: philips
location: Eindhoven, The Netherlands
date_start: '2013-11-01'
date_end: '2018-01-01'
description: Architecture design, documentation, and implementation of high-quality embedded software/electronics.
- title: Teaching Assistant
company: Universidad Nacional de Colombia
company_url: 'http://www.manizales.unal.edu.co/'
company_logo: unal
location: Manizales, Colombia
date_start: '2008-08-01'
date_end: '2009-05-01'
description: |2-
Courses:
* Analogue Electronics I
* Electronic Design
design:
columns: '2'
---
| 31.324675 | 144 | 0.697347 | eng_Latn | 0.586533 |
e019d0b682efec6d9d28b8938ffdbb6a6d7f9589 | 2,104 | md | Markdown | content/posts/daily/博客趋于稳定.md | eallion/eallion.com | 294e26d7b50a9b2e4a9fa470441839b70d0e2262 | [
"MIT"
] | 18 | 2020-12-17T21:32:57.000Z | 2022-03-29T06:15:33.000Z | content/posts/daily/博客趋于稳定.md | eallion/eallion.com | 294e26d7b50a9b2e4a9fa470441839b70d0e2262 | [
"MIT"
] | 5 | 2021-06-14T13:26:50.000Z | 2021-08-08T18:54:21.000Z | content/posts/daily/博客趋于稳定.md | eallion/eallion.com | 294e26d7b50a9b2e4a9fa470441839b70d0e2262 | [
"MIT"
] | 9 | 2020-12-24T02:47:37.000Z | 2022-02-10T11:12:55.000Z | ---
title: "博客趋于稳定"
categories: ["日志"]
tags: ["博客","插件","typecho"]
draft: false
slug: "type-final-echo"
date: "2011-01-14 11:11:26"
---
因为前几天主机空间老是出故障
到今天才把博客弄好
差不多没有什么再要弄的了
就这样子吧
该弄的都弄了
再去模仿几个wp上的小品插件就可以了
比如显示评论者的浏览器操作系统什么的
博客启用了下列插件:
AjaxComments - Typecho 内置嵌套评论專用
AutoBackup - Typecho 自动备份插件
CommentFilter - 评论过滤器
CommentToMail - 评论回复邮件提醒插件
Dewplayer - 小巧的mp3播放器,在编辑器代码模式下使用
Google Code Prettify - Google高亮代码
HighSlide - 使用最新HighSlide全功能内核自动替换图片链接
JustArchives - 日志归档插件
Magike Editor - 简易编辑器,从Magike移植过来的
MiniPlay1g1g - 1g1g 迷你播放器,集搜索&输出&歌词显示功能
RandomArticleList - 随机显示文章列表
Sitemap - 为博客生成sitemap文件。
Smilies - 评论表情及贴图
Stat - 页面浏览次数统计插件
上传但未激活的插件:
Typecho Captcha - 验证码插件
Links - 友情链接插件
tinyMCE Editor - 集成tinyMCE编辑器
<strong>模板</strong>是用的<a href="http://justs.me/" target="_blank">九尾博客</a>的9ve-05
但是原模板已经比较老了
里面被我换掉了好多东西
有几个板块甚至完全是我自己重新写的
而且CSS里面自己也改了一半以上的吧
暂时就定下来用这个吧
基调是原模板的样子
感谢原模板作者<a href="http://justs.me/" target="_blank">九尾博客</a>
博客启用了<strong>相册</strong>页面
这真是一个可有可无的页面
相册用到了两种实现方式
一种是<a href="http://immmmm.com/latest-flickr-pictures-show.html" title="感谢 木木木木木 提供的技术" target="_blank"> 木木木木木 </a>这里提到的
一种是用了<a href="http://www.jzwalk.com/archives/net/highslide-for-typecho" title="感谢 羽中 提供的技术" target="_blank"> 羽中 </a>的HighSlide插件集成的相册功能
两种方法切换着在用
在看哪种方法更好一些
不过两种相册都不会用到本地图片
这大大的减小了主机的负担
导航页面还有一个<a href="http://eallion.com/category/sz/" target="_blank"><strong>山寨版山贼</strong></a>
这是我一个朋友的页面
是一个文艺小青年
又帅又多金又靠谱反正各种好
他会偶尔来更新几篇博客的
欢迎大家随时来喷他
<a href="http://t.eallion.com/" target="_blank"><strong>微博</strong></a>和<a href="http://s.eallion.com/" target="_blank"><strong>个人导航</strong></a>页面完全是做的站外链接
其中微博是我的个人微博
这个微博用的是<a href="http://www.pagecookery.com/" target="_blank">PageCookery</a>程序
它几乎记录了我从开启那个微博以来所有post到网上各处的文字……
鬼知道我当初是怎么想的
也许是想记录给我的子子孙孙看吧
<strong>个人导航</strong>用到了我另外一个闲置的空间
索性就用来放一些自己用的外链图片,mp3,文件什么的
那个主页上收集的都是我自己经常会去逛的网站
我比较懒,不爱输网址,我喜欢点点鼠标的便捷
另外还有就是到各个博客里面去偷了一些东西
在此就一并向你们表示感谢
PS:本博客已经完全没考虑要兼容IE6。
但是本博客也没有一些很炫的特效,所以IE6下的显示效果也不是太烂。
| 25.658537 | 156 | 0.739544 | yue_Hant | 0.707442 |
e01afe9f68385afa1ffee0dfd755ef38fc6b5c18 | 18,067 | md | Markdown | articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md | MicrosoftDocs/azure-docs.hu-hu | 5fb082c5dae057fd040c7e09881e6c407e535fe2 | [
"CC-BY-4.0",
"MIT"
] | 7 | 2017-08-28T07:44:33.000Z | 2021-04-20T21:12:50.000Z | articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md | MicrosoftDocs/azure-docs.hu-hu | 5fb082c5dae057fd040c7e09881e6c407e535fe2 | [
"CC-BY-4.0",
"MIT"
] | 412 | 2018-07-25T09:31:03.000Z | 2021-03-17T13:17:45.000Z | articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md | MicrosoftDocs/azure-docs.hu-hu | 5fb082c5dae057fd040c7e09881e6c407e535fe2 | [
"CC-BY-4.0",
"MIT"
] | 13 | 2017-09-05T09:10:35.000Z | 2021-11-05T11:42:31.000Z | ---
title: Event hub létrehozása az engedélyezve rögzítéssel – Azure Event Hubs | Microsoft Docs
description: Azure Event Hubs-névtér létrehozása egy eseményközponttal és a Rögzítés funkció engedélyezése az Azure Resource Manager sablonjának használatával
ms.topic: quickstart
ms.date: 06/23/2020
ms.custom: devx-track-azurecli, devx-track-azurepowershell
ms.openlocfilehash: 1b7ce00c1db0866b1965ed6d1bba31aa28f8bc24
ms.sourcegitcommit: b4fbb7a6a0aa93656e8dd29979786069eca567dc
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 04/13/2021
ms.locfileid: "107304936"
---
# <a name="create-a-namespace-with-event-hub-and-enable-capture-using-a-template"></a>Névtér létrehozása egy eseményközponttal és a Rögzítés funkció engedélyezése sablon használatával
Ez a cikk ismerteti egy olyan Azure Resource Manager-sablon használatát, amely egy [Event Hubs](./event-hubs-about.md)-névteret hoz létre egy Event Hubs-példánnyal, valamint leírja az eseményközpont [Capture funkciójának](event-hubs-capture-overview.md) engedélyezését is. A cikk leírja továbbá, hogyan kell meghatározni, hogy mely erőforrások lesznek üzembe helyezve, és hogyan kell meghatározni az üzembe helyezés végrehajtásakor megadandó paramétereket. Ez a sablont használhatja a saját környezeteiben, vagy testre is szabhatja a saját követelményeinek megfelelően.
Ez a cikk azt is bemutatja, hogyan adhatja meg, hogy a rendszer az Azure Storage-blobokban vagy egy Azure Data Lake Store-ban rögzítse az eseményeket, az Ön által kiválasztott célhely alapján.
A sablonok létrehozásáról további információkat az [Authoring Azure Resource Manager templates][Authoring Azure Resource Manager templates] (Azure Resource Manager-sablonok készítése) című témakörben talál. A sablonban használandó JSON-szintaxis és-tulajdonságok megtekintéséhez lásd: [Microsoft. EventHub-erőforrástípusok](/azure/templates/microsoft.eventhub/allversions).
További információk az Azure-erőforrások elnevezési szabályainak mintáiról és gyakorlati megoldásairól: [Az Azure-erőforrások elnevezési szabályai][Azure Resources naming conventions].
Az összes sablon eléréséhez kattintson az alábbi GitHub-hivatkozásokra:
- [Eseményközpont és a Rögzítés tárolóban sablon engedélyezése][Event Hub and enable Capture to Storage template]
- [Eseményközpont és a Rögzítés Azure Data Lake Store-ban sablon engedélyezése][Event Hub and enable Capture to Azure Data Lake Store template]
> [!NOTE]
> A legújabb sablonokért keresse fel az [Azure-gyorssablonok][Azure Quickstart Templates] gyűjteményt, és keressen az Event Hubs kifejezésre.
>
>
## <a name="what-will-you-deploy"></a>Mit fog üzembe helyezni?
Ezzel a sablonnal egy Event Hubs-névteret helyez üzembe eseményközponttal, továbbá engedélyezi az [az Event Hubs Rögzítés funkcióját](event-hubs-capture-overview.md). Az Event Hubs Capture funkciója lehetővé teszi a streamelt Event Hubs-adatok automatikus továbbítását az Azure Blob Storage-ba vagy az Azure Data Lake Store-ba egy megadott időtartamon belül vagy az Ön által kiválasztott méretegységekben. Kattintson az alábbi gombra az Event Hubs Azure Storage-ba való rögzítésének engedélyezéséhez:
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-eventhubs-create-namespace-and-enable-capture%2Fazuredeploy.json)
Kattintson az alábbi gombra az Event Hubs Azure Data Lake Store-ba való rögzítésének engedélyezéséhez:
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-eventhubs-create-namespace-and-enable-capture-for-adls%2Fazuredeploy.json)
## <a name="parameters"></a>Paraméterek
Az Azure Resource Managerrel meghatározhatja a sablon üzembe helyezésekor megadandó értékek paramétereit. A sablonban található egy `Parameters` nevű rész, amely magába foglalja az összes paraméterértéket. Azokhoz az értékekhez adjon meg paramétert, amelyek az üzembe helyezendő projekt vagy az üzembe helyezési környezet alapján változhatnak. Ne adjon meg olyan paramétereket olyan értékhez, amelyek nem változnak. A sablonban minden egyes paraméterérték az üzembe helyezendő erőforrások megadásához lesz felhasználva.
A sablon a következő paramétereket adja meg.
### <a name="eventhubnamespacename"></a>eventHubNamespaceName
A létrehozandó Event Hubs-névtér neve.
```json
"eventHubNamespaceName":{
"type":"string",
"metadata":{
"description":"Name of the EventHub namespace"
}
}
```
### <a name="eventhubname"></a>eventHubName
Az Event Hubs-névtérben létrehozott eseményközpont neve.
```json
"eventHubName":{
"type":"string",
"metadata":{
"description":"Name of the event hub"
}
}
```
### <a name="messageretentionindays"></a>messageRetentionInDays
A napok száma, amíg az üzenetek meg lesznek őrizve az eseményközpontban.
```json
"messageRetentionInDays":{
"type":"int",
"defaultValue": 1,
"minValue":"1",
"maxValue":"7",
"metadata":{
"description":"How long to retain the data in event hub"
}
}
```
### <a name="partitioncount"></a>partitionCount
Az eseményközpontban létrehozandó partíciók száma.
```json
"partitionCount":{
"type":"int",
"defaultValue":2,
"minValue":2,
"maxValue":32,
"metadata":{
"description":"Number of partitions chosen"
}
}
```
### <a name="captureenabled"></a>captureEnabled
A Rögzítés funkció engedélyezése az eseményközpontban.
```json
"captureEnabled":{
"type":"string",
"defaultValue":"true",
"allowedValues": [
"false",
"true"],
"metadata":{
"description":"Enable or disable the Capture for your event hub"
}
}
```
### <a name="captureencodingformat"></a>captureEncodingFormat
Az eseményadat szerializálásához megadott kódolási formátum.
```json
"captureEncodingFormat":{
"type":"string",
"defaultValue":"Avro",
"allowedValues":[
"Avro"],
"metadata":{
"description":"The encoding format in which Capture serializes the EventData"
}
}
```
### <a name="capturetime"></a>captureTime
Az az időintervallum, amelyben az Event Hubs Capture elkezdi az adatok rögzítését.
```json
"captureTime":{
"type":"int",
"defaultValue":300,
"minValue":60,
"maxValue":900,
"metadata":{
"description":"The time window in seconds for the capture"
}
}
```
### <a name="capturesize"></a>captureSize
Az a méretegység, amelynek elérésekor a Capture funkció elkezdi az adatok rögzítését.
```json
"captureSize":{
"type":"int",
"defaultValue":314572800,
"minValue":10485760,
"maxValue":524288000,
"metadata":{
"description":"The size window in bytes for capture"
}
}
```
### <a name="capturenameformat"></a>captureNameFormat
Az Event Hubs Capture által használt névformátum az Avro-fájlok írásakor. Vegye figyelembe, hogy a Capture névformátumának tartalmaznia kell a következő mezőket: `{Namespace}`, `{EventHub}`, `{PartitionId}`, `{Year}`, `{Month}`, `{Day}`, `{Hour}`, `{Minute}` és `{Second}`. Ezek a mezők bármilyen sorrend szerint rendezhetők, elválasztó karakterekkel vagy azok nélkül.
```json
"captureNameFormat": {
"type": "string",
"defaultValue": "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}",
"metadata": {
"description": "A Capture Name Format must contain {Namespace}, {EventHub}, {PartitionId}, {Year}, {Month}, {Day}, {Hour}, {Minute} and {Second} fields. These can be arranged in any order with or without delimeters. E.g. Prod_{EventHub}/{Namespace}\\{PartitionId}_{Year}_{Month}/{Day}/{Hour}/{Minute}/{Second}"
}
}
```
### <a name="apiversion"></a>apiVersion
A sablon API-verziója.
```json
"apiVersion":{
"type":"string",
"defaultValue":"2017-04-01",
"metadata":{
"description":"ApiVersion used by the template"
}
}
```
Ha az Azure Storage-ot választja célhelyként, használja a következő paramétereket.
### <a name="destinationstorageaccountresourceid"></a>destinationStorageAccountResourceId
A Rögzítés használatához egy Azure Storage-fiókhoz tartozó erőforrás-azonosító szükséges, amellyel engedélyezheti a rögzítést a választott Storage-fiókban.
```json
"destinationStorageAccountResourceId":{
"type":"string",
"metadata":{
"description":"Your existing Storage account resource ID where you want the blobs be captured"
}
}
```
### <a name="blobcontainername"></a>blobContainerName
A blobtároló, amelyben rögzíti az eseményadatokat.
```json
"blobContainerName":{
"type":"string",
"metadata":{
"description":"Your existing storage container in which you want the blobs captured"
}
}
```
A következő paraméterek használata esetén válassza az 1. generációs Azure Data Lake Store a cél lehetőséget. Az engedélyeket arra a Data Lake Store-útvonalra kell beállítania, ahol az eseményeket rögzíteni kívánja. Az engedélyek megadásával kapcsolatban lásd: [adatgyűjtés az 1. generációs Azure Data Lake Storage](event-hubs-capture-enable-through-portal.md#capture-data-to-azure-data-lake-storage-gen-1).
### <a name="subscriptionid"></a>subscriptionId
Az Event Hubs-névtér és az Azure Data Lake Store előfizetés-azonosítója. Mindkét erőforráshoz azonos előfizetés-azonosítónak kell tartoznia.
```json
"subscriptionId": {
"type": "string",
"metadata": {
"description": "Subscription ID of both Azure Data Lake Store and Event Hubs namespace"
}
}
```
### <a name="datalakeaccountname"></a>dataLakeAccountName
A rögzített események Azure Data Lake Store-neve.
```json
"dataLakeAccountName": {
"type": "string",
"metadata": {
"description": "Azure Data Lake Store name"
}
}
```
### <a name="datalakefolderpath"></a>dataLakeFolderPath
A rögzített események célmappájának elérési útja. Ez az a mappa a Data Lake Store-fiókban, amelybe a rendszer leküldi az eseményeket a rögzítési művelet során. A mappa engedélyeinek beállításáról lásd [az Event Hubsból származó adatok Azure Data Lake Store segítségével történő rögzítését ismertető](../data-lake-store/data-lake-store-archive-eventhub-capture.md) cikket.
```json
"dataLakeFolderPath": {
"type": "string",
"metadata": {
"description": "Destination capture folder path"
}
}
```
## <a name="resources-to-deploy-for-azure-storage-as-destination-to-captured-events"></a>Az Azure Storage-ban történő eseményrögzítéshez üzembe helyezendő erőforrások
Létrehoz egy **EventHub** típusú névteret egy eseményközponttal, valamint engedélyezi az Azure Blob Storage-ba való rögzítést is.
```json
"resources":[
{
"apiVersion":"[variables('ehVersion')]",
"name":"[parameters('eventHubNamespaceName')]",
"type":"Microsoft.EventHub/Namespaces",
"location":"[variables('location')]",
"sku":{
"name":"Standard",
"tier":"Standard"
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('eventHubNamespaceName')]",
"type": "Microsoft.EventHub/Namespaces",
"location": "[resourceGroup().location]",
"sku": {
"name": "Standard"
},
"properties": {
"isAutoInflateEnabled": "true",
"maximumThroughputUnits": "7"
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('eventHubName')]",
"type": "EventHubs",
"dependsOn": [
"[concat('Microsoft.EventHub/namespaces/', parameters('eventHubNamespaceName'))]"
],
"properties": {
"messageRetentionInDays": "[parameters('messageRetentionInDays')]",
"partitionCount": "[parameters('partitionCount')]",
"captureDescription": {
"enabled": "true",
"skipEmptyArchives": false,
"encoding": "[parameters('captureEncodingFormat')]",
"intervalInSeconds": "[parameters('captureTime')]",
"sizeLimitInBytes": "[parameters('captureSize')]",
"destination": {
"name": "EventHubArchive.AzureBlockBlob",
"properties": {
"storageAccountResourceId": "[parameters('destinationStorageAccountResourceId')]",
"blobContainer": "[parameters('blobContainerName')]",
"archiveNameFormat": "[parameters('captureNameFormat')]"
}
}
}
}
}
]
}
]
```
## <a name="resources-to-deploy-for-azure-data-lake-store-as-destination"></a>Az Azure Data Lake Store célhelyként való használatához üzembe helyezendő erőforrások
Létrehoz egy **EventHub** típusú névteret egy eseményközponttal, valamint engedélyezi az Azure Data Lake Store-ba való rögzítést is.
```json
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('namespaceName')]",
"type": "Microsoft.EventHub/Namespaces",
"location": "[variables('location')]",
"sku": {
"name": "Standard",
"tier": "Standard"
},
"resources": [
{
"apiVersion": "2017-04-01",
"name": "[parameters('eventHubName')]",
"type": "EventHubs",
"dependsOn": [
"[concat('Microsoft.EventHub/namespaces/', parameters('namespaceName'))]"
],
"properties": {
"path": "[parameters('eventHubName')]",
"captureDescription": {
"enabled": "true",
"skipEmptyArchives": false,
"encoding": "[parameters('archiveEncodingFormat')]",
"intervalInSeconds": "[parameters('captureTime')]",
"sizeLimitInBytes": "[parameters('captureSize')]",
"destination": {
"name": "EventHubArchive.AzureDataLake",
"properties": {
"DataLakeSubscriptionId": "[parameters('subscriptionId')]",
"DataLakeAccountName": "[parameters('dataLakeAccountName')]",
"DataLakeFolderPath": "[parameters('dataLakeFolderPath')]",
"ArchiveNameFormat": "[parameters('captureNameFormat')]"
}
}
}
}
}
]
}
]
```
> [!NOTE]
> Engedélyezheti vagy letilthatja az üres fájlok kibocsátását, ha a rögzítési időszak során nem történik esemény a **skipEmptyArchives** tulajdonság használatával.
## <a name="commands-to-run-deployment"></a>Az üzembe helyezést futtató parancsok
[!INCLUDE [app-service-deploy-commands](../../includes/app-service-deploy-commands.md)]
## <a name="powershell"></a>PowerShell
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Helyezze üzembe a sablont az Azure Storage-ba történő Event Hubs-rögzítés engedélyezéséhez:
```powershell
New-AzResourceGroupDeployment -ResourceGroupName \<resource-group-name\> -TemplateFile https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-eventhubs-create-namespace-and-enable-capture/azuredeploy.json
```
Helyezze üzembe a sablont az Azure Data Lake Store-ba történő Event Hubs-rögzítés engedélyezéséhez:
```powershell
New-AzResourceGroupDeployment -ResourceGroupName \<resource-group-name\> -TemplateFile https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-eventhubs-create-namespace-and-enable-capture-for-adls/azuredeploy.json
```
## <a name="azure-cli"></a>Azure CLI
Az Azure Blob Storage mint célhely:
```azurecli
az deployment group create \<my-resource-group\> \<my-deployment-name\> --template-uri [https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-eventhubs-create-namespace-and-enable-capture/azuredeploy.json][]
```
Az Azure Data Lake Store mint célhely:
```azurecli
az deployment group create \<my-resource-group\> \<my-deployment-name\> --template-uri [https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-eventhubs-create-namespace-and-enable-capture-for-adls/azuredeploy.json][]
```
## <a name="next-steps"></a>Következő lépések
Az Event Hubs Rögzítés funkcióját az [Azure Portal](https://portal.azure.com) segítségével is konfigurálhatja. További információért tekintse meg az [Enable Event Hubs Capture using the Azure portal](event-hubs-capture-enable-through-portal.md) (Az Event Hubs Rögzítés funkciójának engedélyezése az Azure Portalon) című témakört.
Az alábbi webhelyeken további információt talál az Event Hubsról:
* [Event Hubs áttekintése](./event-hubs-about.md)
* [Eseményközpont létrehozása](event-hubs-create.md)
* [Event Hubs – gyakori kérdések](event-hubs-faq.yml)
[Authoring Azure Resource Manager templates]: ../azure-resource-manager/templates/template-syntax.md
[Azure Quickstart Templates]: https://azure.microsoft.com/documentation/templates/?term=event+hubs
[Azure Resources naming conventions]: /azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging
[Event hub and enable Capture to Storage template]: https://github.com/Azure/azure-quickstart-templates/tree/master/201-eventhubs-create-namespace-and-enable-capture
[Event hub and enable Capture to Azure Data Lake Store template]: https://github.com/Azure/azure-quickstart-templates/tree/master/201-eventhubs-create-namespace-and-enable-capture-for-adls
| 41.918794 | 569 | 0.691039 | hun_Latn | 0.989868 |
e01bde346698f041a9959f03c961e72825b4e2a9 | 7,240 | md | Markdown | docs/Getting Started/Adapting to the Head Unit Language/index.md | dandehavilland/sdl_app_library_guides | 8bab07934ff14b7eeaa3ca4ef7ea5d3d56578991 | [
"BSD-3-Clause"
] | null | null | null | docs/Getting Started/Adapting to the Head Unit Language/index.md | dandehavilland/sdl_app_library_guides | 8bab07934ff14b7eeaa3ca4ef7ea5d3d56578991 | [
"BSD-3-Clause"
] | null | null | null | docs/Getting Started/Adapting to the Head Unit Language/index.md | dandehavilland/sdl_app_library_guides | 8bab07934ff14b7eeaa3ca4ef7ea5d3d56578991 | [
"BSD-3-Clause"
] | null | null | null | # Adapting to the Head Unit Language
Since a head unit can support multiple languages, you may want to add support for more than one language to your SDL app. The SDL library allows you to check which language is currently used by the head unit. If desired, the app's name and the app's text-to-speech (TTS) name can be customized to reflect the head unit's current language. If your app name is not part of the current lexicon, you should tell the VR system how a native speaker will pronounce your app name by setting the TTS name using [phonemes](https://en.wikipedia.org/wiki/Phoneme) from either the Microsoft SAPI phoneme set or from the LHPLUS phoneme set.
## Setting the Default Language
The initial configuration of the @![iOS]`SDLManager`!@@![android,javaSE,javaEE,javascript]`SdlManager`!@ requires a default language when setting the @![iOS]`SDLLifecycleConfiguration`!@@![android,javaSE,javaEE]`Builder`!@@![javascript]`SDL.manager.LifecycleConfig`!@. If not set, the SDL library uses American English (*EN_US*) as the default language. The connection will fail if the head unit does not support the `language` set in the @![iOS]`SDLLifecycleConfiguration`!@@![android,javaSE,javaEE]`Builder`!@@![javascript]`SDL.manager.LifecycleConfig`!@. The `RegisterAppInterface` response RPC will return `INVALID_DATA` as the reason for rejecting the request.
### What if My App Does Not Support the Head Unit Language?
If your app does not support the current head unit language, you should decide on a default language to use in your app. All text should be created using this default language. Unfortunately, your VR commands will probably not work as the VR system will not recognize your users' pronunciation.
### Checking the Current Head Unit Language
After starting the `SDLManager` you can check the @![iOS]`registerResponse`!@@![android,javaSE,javaEE,javascript]`sdlManager.getRegisterAppInterfaceResponse()`!@ property for the head unit's `language` and `hmiDisplayLanguage`. The `language` property gives you the current VR system language; `hmiDisplayLanguage` the current display text language.
@![iOS]
##### Objective-C
```objc
SDLLanguage headUnitLanguage = self.sdlManager.registerResponse.language;
SDLLanguage headUnitHMIDisplayLanguage = self.sdlManager.registerResponse.hmiDisplayLanguage;
```
##### Swift
```swift
let headUnitLanguage = sdlManager.registerResponse?.language
let headUnitHMIDisplayLanguage = sdlManager.registerResponse?.hmiDisplayLanguage
```
!@
@![android,javaSE,javaEE]
```java
Language headUnitLanguage = sdlManager.getRegisterAppInterfaceResponse().getLanguage();
Language headUnitHMILanguage = sdlManager.getRegisterAppInterfaceResponse().getHmiDisplayLanguage();
```
!@
@![javascript]
```javascript
const headUnitLanguage = sdlManager.getRegisterAppInterfaceResponse().getLanguage();
const headUnitHMILanguage = sdlManager.getRegisterAppInterfaceResponse().getHmiDisplayLanguage();
```
!@
## Updating the SDL App Name
@![javascript]
The SDL JavaScript Suite currently does not support updating the app name. This will be addressed in a future release.
!@
@![iOS,android,javaSE,javaEE]
To customize the app name for the head unit's current language, implement the following steps:
1. Set the default `language` in the !@@![iOS]`SDLLifecycleConfiguration`!@@![android,javaSE,javaEE]`Builder`.!@
@![iOS]
2. Add all languages your app supports to `languagesSupported` in the `SDLLifecycleConfiguration`.
3. Implement the `SDLManagerDelegate`'s `managerShouldUpdateLifecycleToLanguage:hmiLanguage:` method. If the module's current HMI language or voice recognition (VR) language is different from the app's default language, the method will be called with the module's current HMI and/or VR language. Please note that the delegate method will only be called if your app supports the head unit's current language. Return a `SDLLifecycleConfigurationUpdate` object with the new `appName` and/or `ttsName`.
!@
@![android,javaSE,javaEE]
2. Implement the `sdlManagerListener`'s `managerShouldUpdateLifecycle(Language language, Language hmiLanguage)` method. If the module's current HMI language or voice recognition (VR) language is different from the app's default language, the listener will be called with the module's current HMI and/or VR language. Return a `LifecycleConfigurationUpdate` with the new `appName` and/or `ttsName`.
!@
@![iOS]
##### Objective-C
```objc
// The `hmiLanguage` is the text language of the head unit, the `language` is the VR language of the head unit. These will usually be the same, but not always. You may want to update your `appName` (text) and `ttsName` (VR) separately.
- (nullable SDLLifecycleConfigurationUpdate *)managerShouldUpdateLifecycleToLanguage:(SDLLanguage)language hmiLanguage:(SDLLanguage)hmiLanguage {
SDLLifecycleConfigurationUpdate *configurationUpdate = [[SDLLifecycleConfigurationUpdate alloc] init];
if ([language isEqualToEnum:SDLLanguageEnUs]) {
update.appName = <#App Name in English#>;
} else if ([language isEqualToEnum:SDLLanguageEsMx]) {
update.appName = <#App Name in Spanish#>;
} else if ([language isEqualToEnum:SDLLanguageFrCa]) {
update.appName = <#App Name in French#>;
} else {
return nil;
}
update.ttsName = [SDLTTSChunk textChunksFromString:update.appName];
return configurationUpdate;
}
```
##### Swift
```swift
// The `hmiLanguage` is the text language of the head unit, the `language` is the VR language of the head unit. These will usually be the same, but not always. You may want to update your `appName` (text) and `ttsName` (VR) separately.
func managerShouldUpdateLifecycle(toLanguage language: SDLLanguage, hmiLanguage: SDLLanguage) -> SDLLifecycleConfigurationUpdate? {
let configurationUpdate = SDLLifecycleConfigurationUpdate()
switch language {
case .enUs:
configurationUpdate.appName = <#App Name in English#>
case .esMx:
configurationUpdate.appName = <#App Name in Spanish#>
case .frCa:
configurationUpdate.appName = <#App Name in French#>
default:
return nil
}
update.ttsName = [SDLTTSChunk(text: update.appName!, type: .text)]
return configurationUpdate
}
```
!@
@![android,javaSE,javaEE]
```java
@Override
public LifecycleConfigurationUpdate managerShouldUpdateLifecycle(Language language, Language hmiLanguage) {
boolean isNeedUpdate = false;
String appName = APP_NAME;
String ttsName = APP_NAME;
switch (language) {
case ES_MX:
isNeedUpdate = true;
ttsName = APP_NAME_ES;
break;
case FR_CA:
isNeedUpdate = true;
ttsName = APP_NAME_FR;
break;
default:
break;
}
switch (hmiLanguage) {
case ES_MX:
isNeedUpdate = true;
appName = APP_NAME_ES;
break;
case FR_CA:
isNeedUpdate = true;
appName = APP_NAME_FR;
break;
default:
break;
}
if (isNeedUpdate) {
return new LifecycleConfigurationUpdate(appName, null, TTSChunkFactory.createSimpleTTSChunks(ttsName), null);
} else {
return null;
}
}
```
!@
| 51.714286 | 665 | 0.739917 | eng_Latn | 0.800443 |
e01c1ecc40ce8a07460beba80a8b0834bf1a1dcf | 7,114 | md | Markdown | source/_posts/agent_scott_mcgarvey_reveals_all_about_his_role_in_the_sam_allardyce_scandal.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | null | null | null | source/_posts/agent_scott_mcgarvey_reveals_all_about_his_role_in_the_sam_allardyce_scandal.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | null | null | null | source/_posts/agent_scott_mcgarvey_reveals_all_about_his_role_in_the_sam_allardyce_scandal.md | soumyadipdas37/finescoop.github.io | 0346d6175a2c36d4054083c144b7f8364db73f2f | [
"MIT"
] | 2 | 2021-09-18T12:06:26.000Z | 2021-11-14T15:17:34.000Z | ---
extends: _layouts.post
section: content
image: https://i.dailymail.co.uk/1s/2020/09/28/16/33725904-0-image-a-57_1601308502935.jpg
title: Agent Scott McGarvey reveals all about his role in the Sam Allardyce scandal
description: Four years ago Sam Allardyces tenure as England manager came to a swift end after just 67 days and one match in charge. It was the shortest tenure of any England manager.
date: 2020-09-28-17-24-26
categories: [latest, sports]
featured: true
---
Four years ago Sam Allardyce's tenure as England manager came to a swift end after just 67 days and one match in charge.
It was the shortest tenure of any permanent England manager, after he was the target of an embarrassing Daily Telegraph sting that cost him his job.
He was condemned by the FA for 'inappropriate' conduct and a 'serious error of judgment' after he used his job to negotiate a £400,000 deal with undercover reporters, posing as businessmen, and offered them advice on how to get around transfer rules.
Scott McGarvey was the agent with Allardyce during the scandal that cost him his dream job.
Extracts from the forthcoming autobiography Sort it out Scotty - My life as a footballer and agent by Scott McGarvey, detail his role...
Four years ago Sam Allardyce's tenure as England manager came to an end after 67 days
That conversation still makes me sick. Sick to the very pit of my stomach.
'I'm sorry Mr McGarvey, there is no job or Meiran Sports. You have been part of a newspaper investigation into football corruption. I'll send you an email, you and your solicitor can get in touch on Monday. Would you like to add anything?'
'You what?' The disbelief, the anger spewed out of me. 'F**k off! When I see you I'll, I'll... I've signed a contract!'
I had signed a contract. I'd signed a contract for £218,000 a year after tax and they'd ordered me a Range Rover. I'd been working with them for four months, they'd paid expenses for my travel and opened an office in London. The reason I'd phoned them that Sunday morning was to ask when the car was coming and for my copy of the contract. If I hadn't called would I have been used for longer, setting up other meetings?
How could I have been taken in like that?
My head was spinning. 'I can assure you, we are not after you,' he said, '...it's Sam we want.'
What had I done?
Allardyce's tenure was shortest of any England manager and he was in charge for one game
Agent Scott McGarvey was at the heart of the scandal that rocked English football
I'd introduced them to Sam Allardyce the England manager, Jimmy Floyd Hasselbaink, boss at QPR and Eric Black at Southampton.
I called Sam. It went to answerphone. I called my lawyer. I was racking my brains, what had I said?
Sam called back. He was on the golf course. I splurted out what had happened. 'Sort it out, Scotty. It's a stitch-up,' he said, remarkably pragmatic. 'We've done nothing wrong.'
He was right, we'd done nothing wrong.
Three days later, Sam Allardyce was no longer the England football team manager.
It was my fault. My f*****g fault.
Sam was my mate. He had only turned up to those meetings for me, to ensure I got the job.
Times had been hard sure but I was back on my feet again when the phone had gone in June and the woman on the end of the phone introduced herself.
Allardyce was caught out by a newspaper sting that cost him the job he had dreamed of
She was working on behalf of a wealthy businessman based in Asia. His company Meiran Sports wanted to start an agency.
We discussed bringing in players, taking football personalities to the Far East for motivational speaking dates, they fished about third-party ownership and I suggested if they had that much money they'd be better off buying a football club... one like Leeds United.
I'd been recommended and they were willing to offer an attractive package for me to come on board and broker introductions. The salary was big, yes, but if you are going to do things properly in modern football you need cash behind you. You are competing with millionaires, trying to attract would-be young millionaires. They often need to be impressed, see the car you are in, meet in the plush hotels.
Besides, I'd been a footballer at Manchester United since the age of 17 and an agent since 35. I knew people. I knew a lot of people. Almost everyone in the game. In my mind, I could understand why they'd asked me. They said someone at United had recommended me and I didn't think to question that. I certainly didn't think to question that this woman was an undercover reporter and I was being used as a pawn in their investigation.
He was condemned by the FA for 'inappropriate' conduct and a 'serious error of judgment'
When I first approached Sam for them, he was Sunderland manager. We'd been friends since coming up against each other in our playing days. I'd done a few transfer deals with him over years at Bolton and West Ham. They wanted him to give speeches to businessmen. Four dates in the Far East and would pay him £150,000 a time.
We met in the Mayfair Hotel, London and at Wing's, the Chinese restaurant in Manchester's city centre. Beforehand the guy with me said 'get Sam to ask for more. Ask for £250,000. This lot will pay it.' They were setting us up. Sam didn't bite. He never once asked for cash. He only agreed to do the talks if it was okay with the FA and left the negotiating to his agent Mark Curtis.
The Meiran guys were delighted at the end of the meeting, I thanked Sam and he turned to me to say: 'As long as you've got the job Scotty. I'm not even sure if I'll be able to do these talks but as long as it gets you the job.'
Unbeknown to us, the conversations had been recorded.
Sam was alleged to have discussed how clubs could navigate third party ownership. Even though this was later ruled to have been reported inaccurately by the Independent Press Standards Organisation, the FA ruled that it was unbefitting of an England manager and Gareth Southgate was promptly installed as caretaker.
Adam Lallana scored the only goal as England beat Slovakia in Allardyce's sole game in charge
How could they do that? This was entrapment, surely? Surely the Football Association would see it that way?
I was devastated for Sam. I knew how much that job meant to him.
For over three years we didn't speak again. I understood on his part. Who could blame him? My wife suggested I write a letter, I should have. I'd like to believe he knew I'd never set him up.
It's been four years, four years of absolute s**t for me. I know Sam doesn't blame me. He reached out to let me know after I did an interview. He knew I wouldn't set him up. He promised we'd talk.
We never have had that conversation. It's not self-pity it's rage, I'd sue if I could afford. For Sam, if he'd been in charge of England for that World Cup semi final against Croatia we wouldn't have lost.
It was only after I did an interview with the Daily Mail and then on talkSPORT radio, that Sam reached out to me. He text to say all between us was okay and we'd meet up. We just haven't got round to it yet.
| 76.494624 | 433 | 0.773545 | eng_Latn | 0.999948 |
e01c42776d5e03481d26f2346d83a4af8aae0506 | 1,098 | md | Markdown | _posts/php.environment.md | hiloy/alskjstl.github.io | b094607860e1465a76a78922ae824d6bfc3c4333 | [
"MIT"
] | null | null | null | _posts/php.environment.md | hiloy/alskjstl.github.io | b094607860e1465a76a78922ae824d6bfc3c4333 | [
"MIT"
] | null | null | null | _posts/php.environment.md | hiloy/alskjstl.github.io | b094607860e1465a76a78922ae824d6bfc3c4333 | [
"MIT"
] | null | null | null | ## php environment 相关
### php 命令安装
sudo yum install php-devel 命令行工具
### sudo yum install libmemcached
sudo yum install libmemcached
### php7 安装位置
/usr/local/php7/ 安装路径
ln -s /usr/local/php7/bin/php /usr/local/bin/php7
ln -s /usr/local/php7/bin/php-config /usr/local/bin/php7-config
ln -s /usr/local/php7/bin/phpize /usr/local/bin/php7ize
ln -s /usr/local/php7/bin/pear /usr/local/bin/pear7
ln -s /usr/local/php7/bin/pecl /usr/local/bin/pecl7
### pecl install
pecl install ZendOpcache
### php-mysql 安装
yum -y install php-mysql //you don't need to use "pecl install pdo_mysql"
### 启动php自带服务器
官方说明:http://php.net/manual/zh/features.commandline.webserver.php
php -S localhost:8080 //配置域名为localhost,端口为8080
php -S localhost:8080 -t /home/webroot/www //另外配置服务根目录是/home/webroot/www
php -S 0.0.0.0:8080 -t /home/webroot/www 支持远程调用
例子:
php -S 0:8080 router.php
请求图片直接显示图片,请求HTML则显示“Welcome to PHP”
<?php
// router.php
if (preg_match('/\.(?:png|jpg|jpeg|gif)$/', $_SERVER["REQUEST_URI"]))
return false; // 直接返回请求的文件
else {
echo "<p>Welcome to PHP</p>";
}
| 28.894737 | 74 | 0.695811 | yue_Hant | 0.239242 |
e01e8c8722ed09a1f8edb7f847ea4f64641e772c | 134 | md | Markdown | src/reviews/games/minecraft.md | andy-bond/andy.bond | 44b7a535d4de894a7f95766bf8f3faefab1994fb | [
"MIT"
] | null | null | null | src/reviews/games/minecraft.md | andy-bond/andy.bond | 44b7a535d4de894a7f95766bf8f3faefab1994fb | [
"MIT"
] | null | null | null | src/reviews/games/minecraft.md | andy-bond/andy.bond | 44b7a535d4de894a7f95766bf8f3faefab1994fb | [
"MIT"
] | 1 | 2021-12-18T10:26:58.000Z | 2021-12-18T10:26:58.000Z | ---
title: Minecraft
subtitle: Mojang Studios
rating: 10
date: 2011-11-18
image: src/static/img/game-minecraft.png
category: game
---
| 14.888889 | 40 | 0.746269 | eng_Latn | 0.177711 |
e01ea40bffc59921c664a75b6e07662dd75a9ec9 | 551 | md | Markdown | iot-c-reference/iot-c-ref-iothub-deviceconfiguration-h/iothubdeviceconfiguration-freeconfigurationmembers.md | nschonni/azure-reference-other | 6ab55a08d43984965d4e75fc8ebfa8e477819cf1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iot-c-reference/iot-c-ref-iothub-deviceconfiguration-h/iothubdeviceconfiguration-freeconfigurationmembers.md | nschonni/azure-reference-other | 6ab55a08d43984965d4e75fc8ebfa8e477819cf1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | iot-c-reference/iot-c-ref-iothub-deviceconfiguration-h/iothubdeviceconfiguration-freeconfigurationmembers.md | nschonni/azure-reference-other | 6ab55a08d43984965d4e75fc8ebfa8e477819cf1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | # IoTHubDeviceConfiguration_FreeConfigurationMembers()
Free members of the [IOTHUB_DEVICE_CONFIGURATION](../iot-c-ref-iothub-deviceconfiguration-h.md#iothub_device_configuration) structure (NOT the structure itself)
## Syntax
\#include "[azure-iot-sdk-c/iothub_service_client/inc/iothub_deviceconfiguration.h](../iot-c-ref-iothub-deviceconfiguration-h.md)"
```C
void IoTHubDeviceConfiguration_FreeConfigurationMembers(
IOTHUB_DEVICE_CONFIGURATION configuration
);
```
## Parameters
* `configuration` The structure to have its members freed.
| 32.411765 | 160 | 0.811252 | yue_Hant | 0.811844 |
e01fa30a39151001c0ce9bdd26677bac92d5c299 | 988 | md | Markdown | docs/spec/deployments-properties-alert_action-properties-email-properties-message.md | l-kutch/security_content | ea36d1969a8ef17d02f67a7602451ad99978ed32 | [
"Apache-2.0"
] | null | null | null | docs/spec/deployments-properties-alert_action-properties-email-properties-message.md | l-kutch/security_content | ea36d1969a8ef17d02f67a7602451ad99978ed32 | [
"Apache-2.0"
] | 490 | 2021-02-15T14:05:16.000Z | 2022-03-31T14:09:05.000Z | docs/spec/deployments-properties-alert_action-properties-email-properties-message.md | l-kutch/security_content | ea36d1969a8ef17d02f67a7602451ad99978ed32 | [
"Apache-2.0"
] | null | null | null | # Untitled string in Deployment Schema Schema
```txt
#/properties/alert_action/properties/email/properties/message#/properties/alert_action/properties/email/properties/message
```
message of email
| Abstract | Extensible | Status | Identifiable | Custom Properties | Additional Properties | Access Restrictions | Defined In |
| :------------------ | :--------- | :------------- | :---------------------- | :---------------- | :-------------------- | :------------------ | :------------------------------------------------------------------------------- |
| Can be instantiated | No | Unknown status | Unknown identifiability | Forbidden | Allowed | none | [deployments.spec.json*](../../out/deployments.spec.json "open original schema") |
## message Type
`string`
## message Examples
```yaml
Splunk Alert $name$ triggered %fields%
```
| 42.956522 | 228 | 0.466599 | eng_Latn | 0.498565 |
e0204c0f7316ca5d05bd3bf49849566e44698d68 | 2,038 | md | Markdown | _posts/2020-06-29-dev-python-jupiter-3.listtuple.md | Gangmi/Gangmi.github.io | 8e423fe147e615836121ae22c7b8cb2b90106814 | [
"MIT"
] | null | null | null | _posts/2020-06-29-dev-python-jupiter-3.listtuple.md | Gangmi/Gangmi.github.io | 8e423fe147e615836121ae22c7b8cb2b90106814 | [
"MIT"
] | null | null | null | _posts/2020-06-29-dev-python-jupiter-3.listtuple.md | Gangmi/Gangmi.github.io | 8e423fe147e615836121ae22c7b8cb2b90106814 | [
"MIT"
] | null | null | null | ---
layout: post
title: "[Python] Jupiter Notebook을 이용한 파이썬의 리스트와 튜플에 대한 간단한 설명"
subtitle: "파이썬 리스트와 튜플의 사용"
categories: dev
tags: python
comments: true
---
> 파이썬의 튜플과 리스트를 쥬피터 노트북을 통해서 복습
## 파이썬의 리스트와 튜플
### 파이썬의 리스트와 튜플
+ 리스트:변경가능[]
+ 튜플 : 변경불가능()
```python
## (1) 22, 44, 11 요소의 리스트 a_data 생성
## (2) 길자, 길동, 길길 요소의 튜플 b_data생성
## (3) b_data 튜플을 a_data 리스트에 추가
a_data = [22,44,11]
b_data = ('길자','길동','길길')
a_data.append(b_data)
a_data
```
[22, 44, 11, ('길자', '길동', '길길')]
```python
## (4) b_data에서 2번째 요소까지 출력
b_data[:2]
```
('길자', '길동')
```python
## (5) a_data를 튜플로 변경
## 11을 10으로 변경하려면?
a_data[2]=10
c_data=tuple(a_data)
print(c_data)
```
(22, 44, 10, ('길자', '길동', '길길'))
## 리스트 복습
- append(추가요소) : 리스트 맨 마지막에 하나 추가
- pop() : 마지막요소를 지움
- extend([추가요소들]) : 리스트 마지막에 여러 개 추가
- remove('요소값') : 해당 요소값 지움
---
- insert(idx, '데이타') : 원하는 위치에 자료 삽입
---
[슬라이싱]
- 리스트명(n:m) : n부터 m-1까지 데이타 추출
```python
movies = ['어밴져스','기생충','가디언스갤럭시','어떤영화','유명한 영화']
movies
```
['어밴져스', '기생충', '가디언스갤럭시', '어떤영화', '유명한 영화']
(1) '어젠져스2' 추가
```python
movies.append('어젠져스2')
movies
```
['어밴져스', '기생충', '가디언스갤럭시', '어떤영화', '유명한 영화', '어젠져스2']
(2) 마지막요소 제거
```python
del movies[-1]
movies
```
['어밴져스', '기생충', '가디언스갤럭시', '어떤영화', '유명한 영화']
(3) '어벤져스2','기생충2' 요소를 한꺼번에 추가
```python
movies+['어벤져스2','기생충2']
```
['어밴져스', '기생충', '가디언스갤럭시', '어떤영화', '유명한 영화', '어벤져스2', '기생충2']
(4) '어벤져스2' 요소 제거
```python
del movies[5]
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-8-82c9fc387d92> in <module>
----> 1 del movies[5]
IndexError: list assignment index out of range
(5) 4번째 위치에 '오래된 영화' 추가
```python
movies.insert(4,'오래된영화')
movies
```
['어밴져스', '기생충', '가디언스갤럭시', '어떤영화', '오래된영화', '유명한 영화']
(6) 영화목록에서 3번째에서 5번째까지 추출
```python
movies[3:6]
```
['어떤영화', '오래된영화', '유명한 영화']
| 10.505155 | 79 | 0.517174 | kor_Hang | 0.996994 |
e020aeb3d887e9bfe6b84536baa5db21fd3336a9 | 43 | md | Markdown | _site_listings/cpu-galaxy.at.md | understream/1mb-club | 988b962387187ba1dfe2c868b69dfa6f27e7b023 | [
"MIT"
] | null | null | null | _site_listings/cpu-galaxy.at.md | understream/1mb-club | 988b962387187ba1dfe2c868b69dfa6f27e7b023 | [
"MIT"
] | null | null | null | _site_listings/cpu-galaxy.at.md | understream/1mb-club | 988b962387187ba1dfe2c868b69dfa6f27e7b023 | [
"MIT"
] | null | null | null | ---
pageurl: cpu-galaxy.at
size: 756.5
---
| 8.6 | 22 | 0.604651 | vie_Latn | 0.193116 |
e0217ca7bda737cc4e7be3f5b364e83dc3eec20a | 2,079 | md | Markdown | docs/2014/database-engine/new-search-property-list.md | marcustung/sql-docs.zh-tw | f64ee32984b48f6607d66d80450d51c2b2b6531d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/new-search-property-list.md | marcustung/sql-docs.zh-tw | f64ee32984b48f6607d66d80450d51c2b2b6531d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/database-engine/new-search-property-list.md | marcustung/sql-docs.zh-tw | f64ee32984b48f6607d66d80450d51c2b2b6531d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 新增搜尋屬性清單 |Microsoft Docs
ms.custom: ''
ms.date: 03/08/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: search
ms.topic: conceptual
f1_keywords:
- sql12.swb.spl.newsearchpropertylist.f1
ms.assetid: ffca78e9-8608-4b15-bd38-b2d78da4247a
author: craigg-msft
ms.author: craigg
manager: craigg
ms.openlocfilehash: 2aff15a42c8bffeb5a54e92b9ce7a09ace282ce4
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 06/15/2019
ms.locfileid: "62774496"
---
# <a name="new-search-property-list"></a>新增搜尋屬性清單
使用這個對話方塊來建立搜尋屬性清單。
## <a name="options"></a>選項
**搜尋屬性清單名稱**
請輸入搜尋屬性清單的名稱。
**[擁有者]**
指定搜尋屬性清單的擁有者。 如果您想要將擁有權指派給自己 (亦即,目前的使用者),請將這個欄位保留空白。 若要指定不同的使用者,請按一下瀏覽按鈕。
### <a name="create-search-property-list-options"></a>建立搜尋屬性清單選項
請按一下下列其中一個選項:
**建立空的搜尋屬性清單**
建立沒有任何屬性的搜尋屬性清單。
**從現有搜尋屬性清單建立**
將現有搜尋屬性清單的屬性複製到新的屬性清單中。 搜尋屬性清單是資料庫物件,所以您必須指定包含要複製之屬性清單的資料庫。
**來源資料庫**
指定現有搜尋屬性清單所屬的資料庫名稱。 系統預設會選取目前的資料庫。 另外,如果目前連接與另一個資料庫中的使用者識別碼相關聯,您也可以選擇使用清單方塊來選取該資料庫。
**來源搜尋屬性清單**
從屬於選取之資料庫的搜尋屬性清單,選取現有的搜尋屬性清單名稱。
## <a name="permissions"></a>Permissions
請參閱[CREATE SEARCH PROPERTY LIST (TRANSACT-SQL)](/sql/t-sql/statements/create-search-property-list-transact-sql)。
## <a name="to-use-sql-server-management-studio-to-manage-search-property-lists"></a>若要使用 SQL Server Management Studio 管理搜尋屬性清單
如需如何建立、檢視、變更或刪除搜尋屬性清單,以及如何設定全文檢索索引以進行屬性搜尋的詳細資訊,請參閱< [Search Document Properties with Search Property Lists](../relational-databases/search/search-document-properties-with-search-property-lists.md)>。
## <a name="see-also"></a>另請參閱
[CREATE SEARCH PROPERTY LIST (Transact-SQL)](/sql/t-sql/statements/create-search-property-list-transact-sql)
[使用搜索屬性清單搜索文件屬性](../relational-databases/search/search-document-properties-with-search-property-lists.md)
[sys.registered_search_property_lists (Transact-SQL)](/sql/relational-databases/system-catalog-views/sys-registered-search-property-lists-transact-sql)
| 35.237288 | 201 | 0.755652 | yue_Hant | 0.441634 |
e021df403ed124afb8078dae95a47a27e0505012 | 352 | md | Markdown | 2020-Autumn/Backend/task_02/Lidong Zou/README.md | Comyn-Echo/TechMap-Works | f8beb725ea21c7c494d1288a635c4a298e86f19b | [
"MIT"
] | null | null | null | 2020-Autumn/Backend/task_02/Lidong Zou/README.md | Comyn-Echo/TechMap-Works | f8beb725ea21c7c494d1288a635c4a298e86f19b | [
"MIT"
] | null | null | null | 2020-Autumn/Backend/task_02/Lidong Zou/README.md | Comyn-Echo/TechMap-Works | f8beb725ea21c7c494d1288a635c4a298e86f19b | [
"MIT"
] | null | null | null | #阴雨,考试,忙碌
#####关于第二次作业的总结
这两周事情都很多,高数的期中考试,英语四级的口语考试,两套英语四级卷子,一套雅思试卷,1500字军事理论,高数做也做不完的习题。
上面的这些内容表面上是抱怨,实则我想说,忙碌可以证明自己的存在感,也顺便解释一下这次任务交的比较晚的原因。
对于数组,指针,我是属于零基础,在刚上手的时候,完全属于不知所措,看到题目有自己的想法之后,也有一些相关的语句,语法是自己所不知道的,也借助了百度的帮忙。也就只有第六题寻求了一下安慰。
那么话归正题,这一次作业给我的经验就是要在完成任务之前,要了解相关的知识,我在这里说的知识是更进一步的知识,更基础的知识,一些铺垫性质的知识,这样才能更好去完成任务。
| 50.285714 | 98 | 0.795455 | zho_Hans | 0.611562 |
e021e51f439eff490335f17d93bd579dedf7ebef | 1,065 | md | Markdown | README.md | gracekashe/pig-dice-IP | 4ddaa05321b365b359143d10591675b1cf6e6221 | [
"MIT"
] | null | null | null | README.md | gracekashe/pig-dice-IP | 4ddaa05321b365b359143d10591675b1cf6e6221 | [
"MIT"
] | null | null | null | README.md | gracekashe/pig-dice-IP | 4ddaa05321b365b359143d10591675b1cf6e6221 | [
"MIT"
] | null | null | null | ###Pig Dice Game
version.1.2.0
By Grace Marion
##Description
This web application allows two players to play a game of Pig Dice.
###Specs
##Behavior Input Outcome
1.Player 1 inputs name and Player 2 inputs name, and clicks start button Player 1: mercy / Player 2: victor / Click START Goes to game console
2. If Player 1 rolls any number other than 1, that roll is added to round total Roll = 2 Round total = 2
3.If Player 1 rolls a 1, no score is added and round for Player 1 ends Roll = 1 Round total = 2 / Total score = 2 / Player 2 begins
4.Repeat for Player 2 Roll = 1 Round total = 0 / Total score = 0 / Player 1 begins
5.When a player's total score reaches 100 or more, game ends and winner page shows Player 1 total score = 100 Winner page
##Setup/Installation Requirements
Clone this repository https://github.com/gracekashe/pig-dice-IP.git
Open a HTML file
Open web browser of choice
Notes
##Technologies Used
HTML
CSS
Bootstrap
JavaScript
jQuery
##License
This software is licensed under the MIT license.
Copyright (c) 2018 Grace Marion
| 31.323529 | 144 | 0.751174 | eng_Latn | 0.946688 |
e021ee0728105c0ed5439a984209a6a77fa83667 | 1,356 | md | Markdown | www/_template/tutorials/quick-start.md | smcenlly/snowpack | 769c39f8f130ea66621c302a267362300ee619d4 | [
"MIT"
] | null | null | null | www/_template/tutorials/quick-start.md | smcenlly/snowpack | 769c39f8f130ea66621c302a267362300ee619d4 | [
"MIT"
] | null | null | null | www/_template/tutorials/quick-start.md | smcenlly/snowpack | 769c39f8f130ea66621c302a267362300ee619d4 | [
"MIT"
] | null | null | null | ---
layout: layouts/main.njk
title: Quick Start
---
### Install Snowpack
#### npm
```bash
npm install --save-dev snowpack
```
#### yarn
```bash
yarn add --dev snowpack
```
#### pnpm
```bash
pnpm add --save-dev snowpack
```
### Run the Snowpack CLI
Popular package managers support running installed packages via CLI. This prevents you from having to install the package globally just to run it yourself.
```bash
npx snowpack [command]
yarn run snowpack [command]
pnpm run snowpack [command]
```
Throughout our documentation, we'll use `snowpack [command]` to document the CLI. To run this yourself, add the `npx`/`yarn run`/`pnpm run` prefix of the package manager that you used to install Snowpack.
### Serve your project locally
```
snowpack dev
```
This starts the local dev server for development. By default this serves your current working directory to the browser, and will look for an `index.html` file to start. You can customize which directories you want to serve via the ["mount"](/reference/configuration) configuration.
### Build your project
```
snowpack build
```
This builds your project into a static `build/` directory that you can deploy anywhere. You can customize your build via [configuration](/reference/configuration).
### See all options
```
snowpack --help
```
The `--help` flag will display helpful output.
| 21.870968 | 281 | 0.729351 | eng_Latn | 0.993809 |
e0221d72690cbf6bbb2b500952a87c84e96cb918 | 21,360 | md | Markdown | website/blog/2020-11-10-lithium.md | nolondil/metals | 3fbe9bd873ebf329a424c54ad005849e4a887e30 | [
"Apache-2.0"
] | 1,674 | 2018-01-16T01:57:34.000Z | 2022-03-30T11:53:57.000Z | website/blog/2020-11-10-lithium.md | zmerr/metals | 3c4c3316a31888c23faff1f778ee2b4cfd4c6be1 | [
"Apache-2.0"
] | 2,112 | 2018-01-15T21:21:14.000Z | 2022-03-31T20:21:38.000Z | website/blog/2020-11-10-lithium.md | zmerr/metals | 3c4c3316a31888c23faff1f778ee2b4cfd4c6be1 | [
"Apache-2.0"
] | 262 | 2018-03-06T17:11:12.000Z | 2022-03-26T05:01:29.000Z | ---
author: Tomasz Godzik
title: Metals v0.9.5 - Lithium
authorURL: https://twitter.com/TomekGodzik
authorImageURL: https://github.com/tgodzik.png
---
We're happy to announce the release of Metals v0.9.5, which brings about some of
the long awaited features such as implicit decorations and the organize imports
code action. We also greatly simplified connecting to the sbt BSP server and
added support for Scala 3's first milestone `3.0.0-M1`.
<table>
<tbody>
<tr>
<td>Commits since last release</td>
<td align="center">185</td>
</tr>
<tr>
<td>Merged PRs</td>
<td align="center">70</td>
</tr>
<tr>
<td>Contributors</td>
<td align="center">9</td>
</tr>
<tr>
<td>Closed issues</td>
<td align="center">33</td>
</tr>
<tr>
<td>New features</td>
<td align="center">5</td>
</tr>
</tbody>
</table>
For full details: https://github.com/scalameta/metals/milestone/28?closed=1
Metals is a language server for Scala that works with VS Code, Vim, Emacs,
Sublime Text, Atom and Eclipse. Metals is developed at the
[Scala Center](https://scala.epfl.ch/) and [VirtusLab](https://virtuslab.com)
with the help from [Lunatech](https://lunatech.com) along with contributors from
the community.
## TL;DR
Check out [https://scalameta.org/metals/](https://scalameta.org/metals/), and
give Metals a try!
- Organize imports code action.
- Show implicit parameters and inferred type decorations.
- Improved sbt BSP support.
- Support for Scala 3.0.0-M1.
- Remote debugging.
- Environment variables in run/debug.
## Organize imports code action
One of the most requested features in the Metals repository, current list can be
found [here](https://github.com/scalameta/metals/issues/707), was the ability to
automatically organize imports. We are happy to announce, that thanks to
[mlachkar's](https://github.com/mlachkar) amazing contributions to both
[Scalafix](https://github.com/scalacenter/scalafix) and Metals, this new feature
is now available via an `Organize imports` code action.

Depending on the editor this code action can be invoked differently, please
consult your specific editor's documentation. For example in Visual Studio Code
'organize imports' can be invoked from command console, shortcut, or from the
dropdown menu when right clicking inside a file.
The organize imports code action is enabled using
[Scalafix](https://github.com/scalacenter/scalafix) and specifically the awesome
organize imports rule created by [liancheng](https://github.com/liancheng).
The rule can be used to automatically sort imports in a file by the Ascii order,
which is the default setting, or use the user specific configuration defined
using scalafix configuration file. This file can be either `.scalafix.conf` in
the current workspace or an absolute file specified in the
`metals.scalafixConfigPath` user setting. It's important to note that the new
code action is consistent with how sbt's scalafix plugin will behave.
An example scalafix configuration for the organize imports rule can look like
this:
```scala
OrganizeImports {
groups = ["re:javax?\\.", "scala.", "*"]
removeUnused = true
}
```
This will sort imports into 3 groups defined with regexes and remove any unused
ones. Specifically, it will turn:
```scala
import scala.collection.mutable.{Buffer, ArrayBuffer}
import java.time.Clock
import java.lang.{Long => JLong, Double => JDouble}
object RemoveUnused {
val buffer: ArrayBuffer[Int] = ArrayBuffer.empty[Int]
val long: JLong = JLong.parseLong("0")
}
```
into
```scala
import java.lang.{Long => JLong}
import scala.collection.mutable.ArrayBuffer
object RemoveUnused {
val buffer: ArrayBuffer[Int] = ArrayBuffer.empty[Int]
val long: JLong = JLong.parseLong("0")
}
```
Please do NOT use the Scalafix built-in RemoveUnused.imports together with
OrganizeImports to remove unused imports, since it might result in a broken
code.
More information can be found in the
[liancheng/scalafix-organize-imports](https://github.com/liancheng/scalafix-organize-imports)
repository.
## Show implicits and type decorations
Another highly anticipated feature was the ability to show additional
information about the code, which is not provided explicitly. In this new
release, users can use two new options when looking through their code:
- `metals.showImplicitArguments` will enable users to see implicit parameters
within their code:

- `metals.showInferredType` will enable users to see inferred type for any
generic methods they are using:

Both new options are disabled by default, since they add a great number of
information, which might not be necessary for all users. Full name with package
for each of the additional types or values will be available on hover.
These options can be set in Metals options in VS Code or in Emacs
[(lsp-metals)](https://github.com/emacs-lsp/lsp-metals) using
`lsp-metals-show-implicit-arguments` and `lsp-metals-show-inferred-type`
variables set to `t`.
Unfortunately, due to current limitations, additional decorations are only
possible in Visual Studio Code and Emacs. In other editors the additional
information is available via hover and new `With synthetics added` section,
which shows how the whole current line would look with the additional
decorations.
Example of how this alternative approach looks in Vim:

## Improved sbt BSP support
In recent months, [eed3si9n](https://github.com/eed3si9n) and
[adpi2](https://github.com/adpi2) worked, under the auspices of Scala Center, on
making sbt capable of acting as a
[Build Server Protocol](https://build-server-protocol.github.io/) server. This
enables Metals and other editors such as Intellij IDEA to directly communicate
with sbt in order to compile the user's code.
Some more work was required in order to make the new features work smoothly with
Metals and currently, thanks to [ckipp01](https://github.com/ckipp01), users can
easily try out the new sbt BSP support. It's important to note that using Bloop
instead of sbt is still the recommended approach as we believe it provides the
best user experience and features like running or debugging are still not yet
implemented for sbt. More details and explanations can be found in the
[blogpost](/metals/blog/2020/11/06/sbt-BSP-support).
## Remote debugging
Thanks to the great work of [pvid](https://github.com/pvid), Metals now supports
remote debugging, which means that it can attach to a running process with the
proper JVM options specified. There is no longer a need to run the application
or test from within the editor.
In case of a simple java process those options will take a form of for example:
`-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005`
Some of the more important options here are:
- `suspend=y` will make the process wait for Metals to connect to it if
specified to `y`. `n` will cause the process to run normally.
- `address=5005` specifies which port to use and it can be any free port.
For a detailed explanation of the different options please refer to the proper
documentation
[here](https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/conninv.html)
When using sbt, remote debugging can be achieved by specifying additional
settings:
```scala
javaOptions in run := List("-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"),
fork in run := true
```
This will cause running `sbt run` to wait for Metals to connect to it, which
might be especially useful when reading user input, which is currently
impossible to do for example from within VS Code. Similarly, these options can
be specified in any of the supported build tools.
To later connect to the running process, you need to use the additional Attach
request type with the `buildTarget`, `hostname` and `port` fields specified. In
case of Visual Studio Code this will take a form of:
```json
{
"type": "scala",
"request": "attach",
"name": "Attach",
"buildTarget": "root",
"hostName": "localhost",
"port": 5005
}
```
## Environment variables in run/debug
Metals now supports specifying additional environment options when running or
debugging applications. This can be done twofold:
- By specifying the new `env` field:
```json
{
"type": "scala",
"name": "Debug Main",
"request": "launch",
"mainClass": "Main",
"args": ["hello", "world"],
"env": { "VARIABLE1": " 123" }
}
```
- By using the `envFile` field:
```json
{
"type": "scala",
"name": "Debug Main",
"request": "launch",
"mainClass": "Main",
"args": ["hello", "world"],
"envFile": "local.env"
}
```
Where the `local.env` file can take a form of:
```
# single line values
key1=value 1
key2='value 2' # ignored inline comment
key3="value 3"
# multi-line values
key4='line 1
line 2'
key5="line 1
line 2"
# export statements
export key6=value 6
# comma delimiter
key7:value 6
# keys cannot contain dots or dashes
a.b.key8=value 8 # will be ignored
a-b-key9=value 9 # will be ignored
```
The format is adapted from
[mefellows/sbt-dotenv](https://github.com/mefellows/sbt-dotenv) and
[bkeepers/dotenv](https://github.com/bkeepers/dotenv). Example json
configurations above are defined for Visual Studio Code, the `type` and `name`
fields can be omitted when using in other clients.
This new feature has been contributed by [mwz](https://github.com/mwz). Thanks
for the great work!
## Miscellaneous
- Removed `javax` from default excluded packages - it can be added back in the
configuration.
- Fixed top level completions in empty files.
- Fixed issues with run/debug when using VPN.
- Fixed go to java sources in standalone worksheets.
- Fixed issues with worksheets if the workspace was not first compiled.
- Fixed sbt completions coming from `project/*.scala` files.
- Only use `import {java.util => ju}` when there are conflicting symbols in the
file.
- Muted InvalidProtocolException in the logs - this exception might happen.
normally and does not break anything.
- Changed scalafix and scalafmt location in user configuration to absolute path.
- Only report parsing errors after Scala version is confirmed for a file.
- Added automatic retry in case of build server connection failure.
## Contributors
Big thanks to everybody who contributed to this release or reported an issue!
```
$ git shortlog -sn --no-merges v0.9.4..v0.9.5
Chris Kipp
Scala Steward
Tomasz Godzik
Michael Wizner
Meriam Lachkar
Gabriele Petronella
Krzysiek Bochenek
Pavol Vidlicka
Vadim Chelyshov
```
## Merged PRs
## [v0.9.5](https://github.com/scalameta/metals/tree/v0.9.5) (2020-11-10)
[Full Changelog](https://github.com/scalameta/metals/compare/v0.9.4...v0.9.5)
**Merged pull requests:**
- Rename user configuration options for decoration provider
[\#2196](https://github.com/scalameta/metals/pull/2196)
([tgodzik](https://github.com/tgodzik))
- Show warning about unsupported Scala version in worksheets
[\#2194](https://github.com/scalameta/metals/pull/2194)
([ckipp01](https://github.com/ckipp01))
- Add support for Scala3 3.0.0-M1
[\#2190](https://github.com/scalameta/metals/pull/2190)
([tgodzik](https://github.com/tgodzik))
- Update scalafix [\#2192](https://github.com/scalameta/metals/pull/2192)
([tgodzik](https://github.com/tgodzik))
- Add retry in case of timeout in build/initialize
[\#2184](https://github.com/scalameta/metals/pull/2184)
([tgodzik](https://github.com/tgodzik))
- Ensure scalacOptions for Test are correct for sbt BSP.
[\#2191](https://github.com/scalameta/metals/pull/2191)
([ckipp01](https://github.com/ckipp01))
- Support remote debugging
[\#2125](https://github.com/scalameta/metals/pull/2125)
([pvid](https://github.com/pvid))
- Add in timer methods to a new timerProvider.
[\#2186](https://github.com/scalameta/metals/pull/2186)
([ckipp01](https://github.com/ckipp01))
- Remove condition for jfr, since it's now reliably available on CI
[\#2185](https://github.com/scalameta/metals/pull/2185)
([tgodzik](https://github.com/tgodzik))
- Make sure build server is connected or not available before parsing
[\#2169](https://github.com/scalameta/metals/pull/2169)
([tgodzik](https://github.com/tgodzik))
- Preload scalafix to optimize first organize imports run
[\#2168](https://github.com/scalameta/metals/pull/2168)
([tgodzik](https://github.com/tgodzik))
- Allow for reset in html doctor
[\#2172](https://github.com/scalameta/metals/pull/2172)
([ckipp01](https://github.com/ckipp01))
- Track and show progress to user about connecting to sbt
[\#2182](https://github.com/scalameta/metals/pull/2182)
([ckipp01](https://github.com/ckipp01))
- Remove old bloop script now that we just use coursier
[\#2181](https://github.com/scalameta/metals/pull/2181)
([ckipp01](https://github.com/ckipp01))
- Update interface to 1.0.1
[\#2178](https://github.com/scalameta/metals/pull/2178)
([scala-steward](https://github.com/scala-steward))
- Update mdoc-interfaces, sbt-mdoc to 2.2.10
[\#2180](https://github.com/scalameta/metals/pull/2180)
([scala-steward](https://github.com/scala-steward))
- Update flyway-core to 7.0.4
[\#2179](https://github.com/scalameta/metals/pull/2179)
([scala-steward](https://github.com/scala-steward))
- Update coursier to 2.0.5
[\#2177](https://github.com/scalameta/metals/pull/2177)
([scala-steward](https://github.com/scala-steward))
- Update scribe, scribe-slf4j to 2.8.6
[\#2176](https://github.com/scalameta/metals/pull/2176)
([scala-steward](https://github.com/scala-steward))
- Update guava to 30.0-jre
[\#2175](https://github.com/scalameta/metals/pull/2175)
([scala-steward](https://github.com/scala-steward))
- Update bloop-config, bloop-launcher to 1.4.4-23-dbacf644
[\#2174](https://github.com/scalameta/metals/pull/2174)
([scala-steward](https://github.com/scala-steward))
- Update sbt-dotty to 0.4.5
[\#2173](https://github.com/scalameta/metals/pull/2173)
([scala-steward](https://github.com/scala-steward))
- Fix resolver for sbt-metals snapshot
[\#2170](https://github.com/scalameta/metals/pull/2170)
([ckipp01](https://github.com/ckipp01))
- Enable smoother sbt bsp integration.
[\#2154](https://github.com/scalameta/metals/pull/2154)
([ckipp01](https://github.com/ckipp01))
- Change scalafix and scalafmt conf to absolute path
[\#2165](https://github.com/scalameta/metals/pull/2165)
([tgodzik](https://github.com/tgodzik))
- Fix race condition while using textDocument/foldingRange
[\#2166](https://github.com/scalameta/metals/pull/2166)
([tgodzik](https://github.com/tgodzik))
- Account for possible null value in ScalaTarget's baseDirectory
[\#2164](https://github.com/scalameta/metals/pull/2164)
([ckipp01](https://github.com/ckipp01))
- Update mill scripts [\#2162](https://github.com/scalameta/metals/pull/2162)
([tgodzik](https://github.com/tgodzik))
- Mute InvalidProtocolException which might happen normally
[\#2159](https://github.com/scalameta/metals/pull/2159)
([tgodzik](https://github.com/tgodzik))
- Update sbt to 1.4.1 [\#2161](https://github.com/scalameta/metals/pull/2161)
([tgodzik](https://github.com/tgodzik))
- Make sure that default version is picked up for Scalafmt provider
[\#2158](https://github.com/scalameta/metals/pull/2158)
([tgodzik](https://github.com/tgodzik))
- Don't import {java.util => ju} when no conflicting symbols are available
[\#2155](https://github.com/scalameta/metals/pull/2155)
([tgodzik](https://github.com/tgodzik))
- Fixed sbt completions coming from project/\*.scala
[\#2129](https://github.com/scalameta/metals/pull/2129)
([dos65](https://github.com/dos65))
- Add java sources to standalone worksheets and run compilation before
evaluation if necessary.
[\#2133](https://github.com/scalameta/metals/pull/2133)
([ckipp01](https://github.com/ckipp01))
- Bump setup-scala and cache-action actions
[\#2149](https://github.com/scalameta/metals/pull/2149)
([ckipp01](https://github.com/ckipp01))
- Add JFR for non oracle JDK releases
[\#2137](https://github.com/scalameta/metals/pull/2137)
([tgodzik](https://github.com/tgodzik))
- Bump scalafmt up to 2.7.4
[\#2148](https://github.com/scalameta/metals/pull/2148)
([ckipp01](https://github.com/ckipp01))
- Update scalameta, semanticdb-scalac, ... to 4.3.24
[\#2147](https://github.com/scalameta/metals/pull/2147)
([scala-steward](https://github.com/scala-steward))
- Update munit, sbt-munit to 0.7.14
[\#2146](https://github.com/scalameta/metals/pull/2146)
([scala-steward](https://github.com/scala-steward))
- Update jol-core to 0.14
[\#2144](https://github.com/scalameta/metals/pull/2144)
([scala-steward](https://github.com/scala-steward))
- Update flyway-core to 7.0.3
[\#2143](https://github.com/scalameta/metals/pull/2143)
([scala-steward](https://github.com/scala-steward))
- Update undertow-core to 2.2.2.Final
[\#2142](https://github.com/scalameta/metals/pull/2142)
([scala-steward](https://github.com/scala-steward))
- Update coursier to 2.0.3
[\#2141](https://github.com/scalameta/metals/pull/2141)
([scala-steward](https://github.com/scala-steward))
- Update scribe, scribe-slf4j to 2.8.3
[\#2140](https://github.com/scalameta/metals/pull/2140)
([scala-steward](https://github.com/scala-steward))
- Update ujson to 1.2.2 [\#2139](https://github.com/scalameta/metals/pull/2139)
([scala-steward](https://github.com/scala-steward))
- Update jackson-databind to 2.11.3
[\#2138](https://github.com/scalameta/metals/pull/2138)
([scala-steward](https://github.com/scala-steward))
- Use 127.0.0.1 address always for DebugProvider
[\#2135](https://github.com/scalameta/metals/pull/2135)
([tgodzik](https://github.com/tgodzik))
- Add support for loading env variables from a .env file.
[\#2123](https://github.com/scalameta/metals/pull/2123)
([mwz](https://github.com/mwz))
- Do not show hover if the type is error
[\#2126](https://github.com/scalameta/metals/pull/2126)
([tgodzik](https://github.com/tgodzik))
- Make sure files are compiled when running scalafix
[\#2119](https://github.com/scalameta/metals/pull/2119)
([tgodzik](https://github.com/tgodzik))
- Show implicit arguments and type annotations for Scala files
[\#2103](https://github.com/scalameta/metals/pull/2103)
([tgodzik](https://github.com/tgodzik))
- Account for possible null value on PopupChoiceReset message request
[\#2121](https://github.com/scalameta/metals/pull/2121)
([ckipp01](https://github.com/ckipp01))
- Support environment variables when running or debugging
[\#2118](https://github.com/scalameta/metals/pull/2118)
([tgodzik](https://github.com/tgodzik))
- Add in top level-completions for empty file.
[\#2088](https://github.com/scalameta/metals/pull/2088)
([ckipp01](https://github.com/ckipp01))
- Update scalafix-interfaces to 0.9.21
[\#2114](https://github.com/scalameta/metals/pull/2114)
([scala-steward](https://github.com/scala-steward))
- Update sbt-mdoc to 2.2.9
[\#2113](https://github.com/scalameta/metals/pull/2113)
([scala-steward](https://github.com/scala-steward))
- Update flyway-core to 6.5.7
[\#2112](https://github.com/scalameta/metals/pull/2112)
([scala-steward](https://github.com/scala-steward))
- Update undertow-core to 2.2.0.Final
[\#2111](https://github.com/scalameta/metals/pull/2111)
([scala-steward](https://github.com/scala-steward))
- Update directory-watcher to 0.10.1
[\#2110](https://github.com/scalameta/metals/pull/2110)
([scala-steward](https://github.com/scala-steward))
- Update scribe, scribe-slf4j to 2.7.13
[\#2109](https://github.com/scalameta/metals/pull/2109)
([scala-steward](https://github.com/scala-steward))
- Update sbt-scalafix, scalafix-interfaces to 0.9.21
[\#2108](https://github.com/scalameta/metals/pull/2108)
([scala-steward](https://github.com/scala-steward))
- Update bloop-config, bloop-launcher to 1.4.4-15-56a96a99
[\#2107](https://github.com/scalameta/metals/pull/2107)
([scala-steward](https://github.com/scala-steward))
- Update organize-import rule to add Scala 2.11 support
[\#2101](https://github.com/scalameta/metals/pull/2101)
([gabro](https://github.com/gabro))
- Simplify TestHovers and remove warning
[\#2098](https://github.com/scalameta/metals/pull/2098)
([gabro](https://github.com/gabro))
- Display used BuildServer in Doctor
[\#2097](https://github.com/scalameta/metals/pull/2097)
([kpbochenek](https://github.com/kpbochenek))
- Implement organize import using scalafix
[\#1971](https://github.com/scalameta/metals/pull/1971)
([mlachkar](https://github.com/mlachkar))
- Remove javax from default excluded packages
[\#2091](https://github.com/scalameta/metals/pull/2091)
([gabro](https://github.com/gabro))
- Add documentation for new parameter in GotoLocation
[\#2095](https://github.com/scalameta/metals/pull/2095)
([kpbochenek](https://github.com/kpbochenek))
- Fix Metals version in the blog post
[\#2089](https://github.com/scalameta/metals/pull/2089)
([tgodzik](https://github.com/tgodzik))
- Add release notes for v0.9.4
[\#2081](https://github.com/scalameta/metals/pull/2081)
([tgodzik](https://github.com/tgodzik))
| 39.192661 | 115 | 0.73736 | eng_Latn | 0.747681 |
e022672b2504512def4678005a39659f9131b65c | 12,455 | md | Markdown | articles/cosmos-db/use-python-notebook-features-and-commands.md | jobailla/azure-docs.fr-fr | 8c7734bc0d6ffaa8f9f2a39982929b8d9fc8235e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cosmos-db/use-python-notebook-features-and-commands.md | jobailla/azure-docs.fr-fr | 8c7734bc0d6ffaa8f9f2a39982929b8d9fc8235e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cosmos-db/use-python-notebook-features-and-commands.md | jobailla/azure-docs.fr-fr | 8c7734bc0d6ffaa8f9f2a39982929b8d9fc8235e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Utiliser les fonctionnalités et commandes de notebook intégrées dans les notebooks Python Azure Cosmos DB (préversion)
description: Découvrez comment utiliser les fonctionnalités et commandes intégrées pour effectuer des opérations courantes à l’aide des notebooks Python intégrés d’Azure Cosmos DB.
author: deborahc
ms.service: cosmos-db
ms.topic: how-to
ms.date: 05/19/2020
ms.author: dech
ms.openlocfilehash: 596d34ef0544f4160c18210f05f68b488ec114d3
ms.sourcegitcommit: 3bcce2e26935f523226ea269f034e0d75aa6693a
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/23/2020
ms.locfileid: "92476278"
---
# <a name="use-built-in-notebook-commands-and-features-in-azure-cosmos-db-python-notebooks-preview"></a>Utiliser les fonctionnalités et commandes de notebook intégrées dans les notebooks Python Azure Cosmos DB (préversion)
Les notebooks Jupyter intégrés dans Azure Cosmos DB vous permettent d’analyser et de visualiser vos données via le portail Azure. Cet article explique comment utiliser les fonctionnalités et commandes de notebook intégrées pour effectuer des opérations courantes dans des notebooks Python.
## <a name="install-a-new-package"></a>Installer un nouveau package
Dès lors que vous avez activé la prise en charge des notebooks pour vos comptes Azure Cosmos, vous pouvez ouvrir un nouveau notebook et installer un package.
Dans une nouvelle cellule de code, insérez et exécutez le code suivant en remplaçant ``PackageToBeInstalled`` par le package Python souhaité.
```python
import sys
!{sys.executable} -m pip install PackageToBeInstalled --user
```
Ce package pourra être utilisé à partir de n’importe quel notebook de l’espace de travail du compte Azure Cosmos.
> [!TIP]
> Si votre notebook nécessite un package personnalisé, nous vous recommandons d’y ajouter une cellule pour installer le package. En effet, si vous [réinitialisez l’espace de travail](#reset-notebooks-workspace), les packages sont supprimés.
## <a name="run-a-sql-query"></a>Exécuter une requête SQL
Vous pouvez utiliser la commande magic ``%%sql`` pour exécuter une [requête SQL](sql-query-getting-started.md) sur un conteneur quelconque de votre compte. Utilisez la syntaxe suivante :
```python
%%sql --database {database_id} --container {container_id}
{Query text}
```
- Remplacez ``{database_id}`` et ``{container_id}`` par le nom de la base de données et du conteneur dans votre compte Cosmos. Si les arguments ``--database`` et ``--container`` ne sont pas fournis, la requête est exécutée sur la [base de données et le conteneur par défaut](#set-default-database-for-queries).
- Vous pouvez exécuter n’importe quelle requête SQL valide dans Azure Cosmos DB. Le texte de la requête doit figurer sur une nouvelle ligne.
Par exemple :
```python
%%sql --database RetailDemo --container WebsiteData
SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c
```
Exécutez ```%%sql?``` dans une cellule pour afficher la documentation d’aide sur la commande magic SQL dans le notebook.
## <a name="run-a-sql-query-and-output-to-a-pandas-dataframe"></a>Exécuter une requête SQL et générer les résultats dans un DataFrame Pandas
Vous pouvez générer les résultats d’une requête ``%%sql`` dans un [DataFrame Pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame). Utilisez la syntaxe suivante :
```python
%%sql --database {database_id} --container {container_id} --output {outputDataFrameVar}
{Query text}
```
- Remplacez ``{database_id}`` et ``{container_id}`` par le nom de la base de données et du conteneur dans votre compte Cosmos. Si les arguments ``--database`` et ``--container`` ne sont pas fournis, la requête est exécutée sur la [base de données et le conteneur par défaut](#set-default-database-for-queries).
- Remplacez ``{outputDataFrameVar}`` par le nom de la variable DataFrame qui contiendra les résultats.
- Vous pouvez exécuter n’importe quelle requête SQL valide dans Azure Cosmos DB. Le texte de la requête doit figurer sur une nouvelle ligne.
Par exemple :
```python
%%sql --database RetailDemo --container WebsiteData --output df_cosmos
SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c
```
```python
df_cosmos.head(10)
Action ItemRevenue Country/Region Item
0 Viewed 9.00 Tunisia Black Tee
1 Viewed 19.99 Antigua and Barbuda Flannel Shirt
2 Added 3.75 Guinea-Bissau Socks
3 Viewed 3.75 Guinea-Bissau Socks
4 Viewed 55.00 Czech Republic Rainjacket
5 Viewed 350.00 Iceland Cosmos T-shirt
6 Added 19.99 Syrian Arab Republic Button-Up Shirt
7 Viewed 19.99 Syrian Arab Republic Button-Up Shirt
8 Viewed 33.00 Tuvalu Red Top
9 Viewed 14.00 Cabo Verde Flip Flop Shoes
```
## <a name="set-default-database-for-queries"></a>Définir la base de données par défaut pour les requêtes
Vous pouvez définir la base de données par défaut qui sera utilisée par les commandes ```%%sql``` pour le notebook. Remplacez ```{database_id}``` par le nom de votre base de données.
```python
%database {database_id}
```
Exécutez ```%database?``` dans une cellule pour afficher la documentation dans le notebook.
## <a name="set-default-container-for-queries"></a>Définir le conteneur par défaut pour les requêtes
Vous pouvez définir le conteneur par défaut qui sera utilisé par les commandes ```%%sql``` pour le notebook. Remplacez ```{container_id}``` par le nom de votre conteneur.
```python
%container {container_id}
```
Exécutez ```%container?``` dans une cellule pour afficher la documentation dans le notebook.
## <a name="upload-json-items-to-a-container"></a>Charger des éléments JSON dans un conteneur
Vous pouvez utiliser la commande magic ``%%upload`` pour télécharger des données d’un fichier JSON vers un conteneur Azure Cosmos spécifié. Utilisez la commande suivante pour télécharger les éléments :
```python
%%upload --databaseName {database_id} --containerName {container_id} --url {url_location_of_file}
```
- Remplacez ``{database_id}`` et ``{container_id}`` par le nom de la base de données et du conteneur dans votre compte Azure Cosmos. Si les arguments ``--database`` et ``--container`` ne sont pas fournis, la requête est exécutée sur la [base de données et le conteneur par défaut](#set-default-database-for-queries).
- Remplacez ``{url_location_of_file}`` par l’emplacement de votre fichier JSON. Le fichier doit être un tableau d’objets JSON valides et doit être accessible via l’Internet public.
Par exemple :
```python
%%upload --database databaseName --container containerName --url
https://contoso.com/path/to/data.json
```
```
Documents successfully uploaded to ContainerName
Total number of documents imported : 2654
Total time taken : 00:00:38.1228087 hours
Total RUs consumed : 25022.58
```
Avec les statistiques de sortie, vous pouvez calculer les RU/s effectives utilisées pour charger les éléments. Par exemple, si 25 000 unités de requête ont été consommées sur 38 secondes, les RU/s effectives sont de 25 000 RU/38 secondes, soit 658 RU/s.
## <a name="run-another-notebook-in-current-notebook"></a>Exécuter un autre notebook dans le notebook actuel
Vous pouvez utiliser la commande magique ``%%run`` pour exécuter un autre notebook dans votre espace de travail à partir de votre notebook actuel. Utilisez la syntaxe suivante :
```python
%%run ./path/to/{notebookName}.ipynb
```
Remplacez ``{notebookName}`` par le nom du notebook que vous voulez exécuter. Le notebook doit figurer dans votre espace de travail « Mes notebooks » actuel.
## <a name="use-built-in-nteract-data-explorer"></a>Utiliser l’Explorateur de données nteract intégré
Vous pouvez utiliser [l’Explorateur de données nteract](https://blog.nteract.io/designing-the-nteract-data-explorer-f4476d53f897) intégré pour filtrer et visualiser un tableau. Pour activer cette fonctionnalité, définissez l’option ``pd.options.display.html.table_schema`` sur ``True``, et ``pd.options.display.max_rows`` sur la valeur souhaitée (vous pouvez définir ``pd.options.display.max_rows`` sur ``None`` pour afficher tous les résultats).
```python
import pandas as pd
pd.options.display.html.table_schema = True
pd.options.display.max_rows = None
df_cosmos.groupby("Item").size()
```
:::image type="content" source="media/use-notebook-features-and-commands/nteract-built-in-chart.png" alt-text="Explorateur de données nteract":::
## <a name="use-the-built-in-python-sdk"></a>Utiliser le SDK Python intégré
La version 4 du [SDK Python Azure Cosmos DB pour API SQL](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos) est installée et incluse dans l’environnement du notebook pour le compte Azure Cosmos.
Utilisez l’instance ``cosmos_client`` intégrée pour exécuter n’importe quelle opération du SDK.
Par exemple :
```python
## Import modules as needed
from azure.cosmos.partition_key import PartitionKey
## Create a new database if it doesn't exist
database = cosmos_client.create_database_if_not_exists('RetailDemo')
## Create a new container if it doesn't exist
container = database.create_container_if_not_exists(id='WebsiteData', partition_key=PartitionKey(path='/CartID'))
```
Consultez [Exemples du kit SDK Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos/samples).
> [!IMPORTANT]
> Le SDK Python intégré est pris en charge uniquement avec les comptes d’API SQL (Core). Pour les autres API, vous devez [installer le pilote Python](#install-a-new-package) correspondant à l’API.
## <a name="create-a-custom-instance-of-cosmos_client"></a>Créer une instance personnalisée de ``cosmos_client``
Pour plus de flexibilité, vous pouvez créer une instance personnalisée de ``cosmos_client``. Vous pourrez ainsi :
- Personnaliser la [stratégie de connexion](/python/api/azure-cosmos/azure.cosmos.documents.connectionpolicy?preserve-view=true&view=azure-python-preview)
- Exécuter des opérations sur un compte Azure Cosmos différent de celui dans lequel vous vous trouvez
Vous pouvez accéder à la chaîne de connexion et à la clé primaire du compte actuel à l’aide des [variables d’environnement](#access-the-account-endpoint-and-primary-key-env-variables).
```python
import azure.cosmos.cosmos_client as cosmos
import azure.cosmos.documents as documents
# These should be set to a region you've added for Azure Cosmos DB
region_1 = "Central US"
region_2 = "East US 2"
custom_connection_policy = documents.ConnectionPolicy()
custom_connection_policy.PreferredLocations = [region_1, region_2] # Set the order of regions the SDK will route requests to. The regions should be regions you've added for Cosmos, otherwise this will error.
# Create a new instance of CosmosClient, getting the endpoint and key from the environment variables
custom_client = cosmos.CosmosClient(url=COSMOS.ENDPOINT, credential=COSMOS.KEY, connection_policy=custom_connection_policy)
```
## <a name="access-the-account-endpoint-and-primary-key-env-variables"></a>Accéder aux variables d’environnement de point de terminaison et de clé primaire du compte
Vous pouvez accéder au point de terminaison intégré et à la clé du compte dans lequel se trouve votre notebook.
```python
endpoint = COSMOS.ENDPOINT
primary_key = COSMOS.KEY
```
> [!IMPORTANT]
> Les variables ``COSMOS.ENDPOINT`` et ``COSMOS.KEY`` s’appliquent uniquement à l’API SQL. Pour les autres API, recherchez le point de terminaison et la clé dans le panneau **Chaînes de connexion** ou **Clés** de votre compte Azure Cosmos.
## <a name="reset-notebooks-workspace"></a>Réinitialiser l’espace de travail des notebooks
Pour rétablir les paramètres par défaut de l’espace de travail des notebooks, sélectionnez **Réinitialiser l’espace de travail** dans la barre de commandes. Cette opération supprime tous les packages personnalisés installés et redémarre le serveur Jupyter. Vos notebooks, fichiers et ressources Azure Cosmos ne seront pas affectés.
:::image type="content" source="media/use-notebook-features-and-commands/reset-workspace.png" alt-text="Explorateur de données nteract":::
## <a name="next-steps"></a>Étapes suivantes
- Découvrez les avantages de [notebooks Jupyter d’Azure Cosmos DB](cosmosdb-jupyter-notebooks.md)
- Apprenez-en davantage sur le [Kit Python Azure Cosmos DB pour API SQL](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos). | 59.879808 | 446 | 0.76933 | fra_Latn | 0.898361 |
e0230387338512ccc2155f182900e3ab0007fe91 | 3,972 | md | Markdown | docs/modeling/code-generation-and-t4-text-templates.md | MicrosoftDocs/visualstudio-docs.ja-jp | 9e5d2448fa132240e5e896f8587952d491569739 | [
"CC-BY-4.0",
"MIT"
] | 21 | 2018-01-02T00:40:26.000Z | 2021-11-27T22:41:23.000Z | docs/modeling/code-generation-and-t4-text-templates.md | MicrosoftDocs/visualstudio-docs.ja-jp | 9e5d2448fa132240e5e896f8587952d491569739 | [
"CC-BY-4.0",
"MIT"
] | 1,784 | 2018-02-14T20:18:58.000Z | 2021-10-02T07:23:46.000Z | docs/modeling/code-generation-and-t4-text-templates.md | MicrosoftDocs/visualstudio-docs.ja-jp | 9e5d2448fa132240e5e896f8587952d491569739 | [
"CC-BY-4.0",
"MIT"
] | 53 | 2017-12-13T07:35:40.000Z | 2021-11-24T05:45:59.000Z | ---
title: コード生成と T4 テキスト テンプレート
description: テキスト ブロックと制御ロジックが混在する T4 テキスト テンプレートを使用してテキスト ファイルを生成する方法について説明します。
ms.custom: SEO-VS-2020
ms.date: 11/04/2016
ms.topic: overview
f1_keywords:
- VS.ToolsOptionsPages.TextTemplating.TextTemplating
helpviewer_keywords:
- generating text
- .tt files
- code generation
- text templates
- generating code
author: mgoertz-msft
ms.author: mgoertz
manager: jmartens
ms.technology: vs-ide-modeling
ms.workload:
- multiple
ms.openlocfilehash: e862500f8e8518cc4f2b47402544431ef6706c25
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/13/2021
ms.locfileid: "122061442"
---
# <a name="code-generation-and-t4-text-templates"></a>コード生成と T4 テキスト テンプレート
Visual Studio の "*T4 テキスト テンプレート*" は、テキスト ファイルを生成できる、テキスト ブロックと制御ロジックが混在するファイルです。 制御ロジックは、 [!INCLUDE[csprcs](../data-tools/includes/csprcs_md.md)] または [!INCLUDE[vbprvb](../code-quality/includes/vbprvb_md.md)]のプログラム コードのフラグメントとして記述します。 Visual Studio 2015 Update 2 以降では、T4 テンプレート ディレクティブで C# バージョン 6.0 の機能を使用できます。 Web ページ、リソース ファイル、任意の言語のプログラム ソース コードなど、あらゆる種類のテキスト ファイルを生成できます。
T4 テキスト テンプレートには、実行時とデザイン時の 2 種類があります。
## <a name="run-time-t4-text-templates"></a>実行時 T4 テキスト テンプレート
実行時テンプレートは '前処理された' テンプレートとも呼ばれ、通常、出力の一部としてテキスト文字列を生成するために、アプリケーションで実行されます。 たとえば、次のように HTML ページを定義するテンプレートを作成できます。
```
<html><body>
The date and time now is: <#= DateTime.Now #>
</body></html>
```
このテンプレートは、生成される出力に似ている点にご注目ください。 このように、テンプレートは結果の出力に似ているため、テンプレートを変更する場合に誤りを防ぐことができます。
また、テンプレートにはプログラム コードのフラグメントも含まれます。 これらのフラグメントを使用して、テキストのセクションの繰り返し、条件付きセクションの作成、アプリケーションのデータの表示を行うことができます。
出力を生成するには、テンプレートによって生成される関数をアプリケーションで呼び出します。 次に例を示します。
```csharp
string webResponseText = new MyTemplate().TransformText();
```
アプリケーションは、Visual Studio がインストールされていないコンピューターでも実行できます。
実行時テンプレートを作成するには、 **前処理されたテキスト テンプレート** ファイルをプロジェクトに追加します。 または、プレーンテキスト ファイルを追加し、 **[カスタム ツール]** プロパティを **TextTemplatingFilePreprocessor** に設定することもできます。
詳細については、「[T4 テキスト テンプレートを使用した実行時テキスト生成](../modeling/run-time-text-generation-with-t4-text-templates.md)」を参照してください。 テンプレートの構文の詳細については、「[T4 テキスト テンプレートの作成](../modeling/writing-a-t4-text-template.md)」を参照してください。
## <a name="design-time-t4-text-templates"></a>デザイン時 T4 テキスト テンプレート
デザイン時テンプレートは、アプリケーションのソース コードや他のリソースの一部を定義します。 通常は、複数のテンプレートを使用して 1 つの入力ファイルまたはデータベースのデータを読み込み、一部の *.cs*、 *.vb* などのソース ファイルを生成します。 テンプレートごとに 1 つのファイルが生成されます。 これらは、Visual Studio または MSBuild 内で実行されます。
たとえば、入力データが構成データの XML ファイルであるとします。 開発中に XML ファイルを編集するたびに、テキスト テンプレートによって、アプリケーション コードの一部が再生成されます。 テンプレートの例を次に示します。
```
<#@ output extension=".cs" #>
<#@ assembly name="System.Xml" #>
<#
System.Xml.XmlDocument configurationData = ...; // Read a data file here.
#>
namespace Fabrikam.<#= configurationData.SelectSingleNode("jobName").Value #>
{
... // More code here.
}
```
XML ファイルの値に応じて、次のような *.cs* ファイルが生成されます。
```
namespace Fabrikam.FirstJob
{
... // More code here.
}
```
別の例として、入力がビジネス アクティビティのワークフローの図であるとします。 ユーザーがビジネス ワークフローを変更した場合や、別のワークフローを使用する新しいユーザーとの作業を開始する場合は、新しいモデルに合わせてコードを簡単に再生成できます。
デザイン時テンプレートを使用すると、要件が変わったときに構成をすばやく変更できるようになり、変更の信頼性も高まります。 ワークフローの例のように、ビジネス要件の観点で入力を定義するのが一般的です。 このように定義すると、変更点についてユーザーと共に検討しやすくなります。 そのため、デザイン時テンプレートは、アジャイル開発プロセスにおける有用なツールとなります。
デザイン時テンプレートを作成するには、 **テキスト テンプレート** ファイルをプロジェクトに追加します。 または、プレーンテキスト ファイルを追加し、 **[カスタム ツール]** プロパティを **TextTemplatingFileGenerator** に設定することもできます。
詳細については、「[T4 テキスト テンプレートを使用したデザイン時コード生成](../modeling/design-time-code-generation-by-using-t4-text-templates.md)」を参照してください。 テンプレートの構文の詳細については、「[T4 テキスト テンプレートの作成](../modeling/writing-a-t4-text-template.md)」を参照してください。
> [!NOTE]
> 1 つ以上のテンプレートで読み込まれるデータを示す際に、 *モデル* という用語を使用する場合があります。 モデルはどのような形式でもかまいません。あらゆる種類のファイルまたはデータベースを使用できます。 必ずしも UML モデルやドメイン固有言語モデルである必要はありません。 'モデル' は、コードのようなものではなく、ビジネス概念の観点でデータを定義できることを示します。
テキスト テンプレート変換機能は、 *T4* と名付けられています。
## <a name="see-also"></a>関連項目
- [ドメイン固有言語からコードを生成する](../modeling/generating-code-from-a-domain-specific-language.md)
| 38.563107 | 376 | 0.796828 | yue_Hant | 0.602611 |
e0234363136887b92257a0efeeb86cf99386cdd0 | 1,202 | md | Markdown | docs/source/api/Apollo/classes/MaxRetryInterceptor.md | kimroen/apollo-ios | 0d0064c6a1d9d48c35bde922d1c2f2059bec1090 | [
"MIT"
] | 3,195 | 2017-01-31T00:14:16.000Z | 2022-03-31T09:46:27.000Z | docs/source/api/Apollo/classes/MaxRetryInterceptor.md | kimroen/apollo-ios | 0d0064c6a1d9d48c35bde922d1c2f2059bec1090 | [
"MIT"
] | 1,425 | 2017-02-01T10:55:39.000Z | 2022-03-31T18:56:48.000Z | docs/source/api/Apollo/classes/MaxRetryInterceptor.md | kimroen/apollo-ios | 0d0064c6a1d9d48c35bde922d1c2f2059bec1090 | [
"MIT"
] | 666 | 2017-02-01T10:23:30.000Z | 2022-03-28T19:57:32.000Z | **CLASS**
# `MaxRetryInterceptor`
```swift
public class MaxRetryInterceptor: ApolloInterceptor
```
An interceptor to enforce a maximum number of retries of any `HTTPRequest`
## Methods
### `init(maxRetriesAllowed:)`
```swift
public init(maxRetriesAllowed: Int = 3)
```
Designated initializer.
- Parameter maxRetriesAllowed: How many times a query can be retried, in addition to the initial attempt before
#### Parameters
| Name | Description |
| ---- | ----------- |
| maxRetriesAllowed | How many times a query can be retried, in addition to the initial attempt before |
### `interceptAsync(chain:request:response:completion:)`
```swift
public func interceptAsync<Operation: GraphQLOperation>(
chain: RequestChain,
request: HTTPRequest<Operation>,
response: HTTPResponse<Operation>?,
completion: @escaping (Result<GraphQLResult<Operation.Data>, Error>) -> Void)
```
#### Parameters
| Name | Description |
| ---- | ----------- |
| chain | The chain the interceptor is a part of. |
| request | The request, as far as it has been constructed |
| response | [optional] The response, if received |
| completion | The completion block to fire when data needs to be returned to the UI. | | 26.711111 | 111 | 0.71381 | eng_Latn | 0.932844 |
e02397eea0a93dc4ed1da5b430e30606244dc759 | 4,144 | md | Markdown | SEP-0008.md | Cadair/sunpy-SEP | fa6cfb22272d279926b223f58172f5cd40f2a7b1 | [
"CC-BY-4.0"
] | null | null | null | SEP-0008.md | Cadair/sunpy-SEP | fa6cfb22272d279926b223f58172f5cd40f2a7b1 | [
"CC-BY-4.0"
] | null | null | null | SEP-0008.md | Cadair/sunpy-SEP | fa6cfb22272d279926b223f58172f5cd40f2a7b1 | [
"CC-BY-4.0"
] | null | null | null | # SEP-0009 - Astropy Time
| SEP | 8 |
|---------------|-------------------------------------------|
| title | Astropy Time |
| author(s) | Stuart Mumford |
| contact email | [email protected] |
| date-creation | 2018-05-30 |
| type | standard |
| status | discussion |
| discussion | https://github.com/sunpy/sunpy/issues/993 |
# Introduction
This SEP proposes that SunPy transitions all internal, and external use of a
Python object to represent a date or time from the standard library `datetime`
module to the classes provided in `astropy.time`. The main classes being
replacing `datetime.date` and `datetime.datetime` with `astropy.Time` and
replacing `datetime.timedelta` with `astropy.time.TimeDelta`.
# Detailed Description
Since its inception the SunPy core library has used `datetime` for handling
time. This is a very useful standard library module, however it lacks some key
benefits that are provided by Astropy and are useful to the solar community:
- Support for non-UTC time scales. This is especially useful for JSOC and HMI data which use the TAI time scale rather than UTC.
- Support for high (double-double) precision time representation and arithmetic.
- Support for leap seconds.
- Support for different representations of time, i.e. Julian Day, unix time, GPS.
- Support for reading and writing the new FITS Time standard.
- Support for light travel time calculations for different locations on Earth.
- Support for custom Time scales and formats, i.e. utime.
The `astropy.time.Time` object can be converted to many different
representations of time, i.e. different pre-defined or custom string formats,
`datetime.datetime` objects. This allows users to convert back to `datetime`
objects if needed.
## Accepting Time as Input
All over SunPy when time-like objects are accepted as input, this input must be
sanitised with the `sunpy.time.parse_time` function. This function currently
returns a `datetime.datetime` object. The major API change proposed by the
transition to `astropy.time` is the return value of the `parse_time` function
will become `astropy.time.Time`. It is also proposed that the function signature
of `parse_time` be standardised with the signature of `astropy.time.Time` for
clarity, the details of this are left to the implementation.
The `astropy.time.Time` object should be used as an internal representation of
time where ever possible, to ensure consistency of things like leap seconds
which would not work if `datetime` were used internally.
## Other Changes
Many other parts of SunPy will need to be adapted to work with `astropy.time`
and convert to datetime in as few places as possible to maintain the advantages
of the Astropy representation. This will result in the return values of any
functions returning time objects also becoming `astropy.time.Time` objects. It
is not expected that there will be any significant changes in functionality.
## sunpy.timeseries
It should be noted that because `sunpy.timeseries.GenericTimeseries` is using
Pandas it will not be changed to use `astropy.time` it would be necessary to
migrate from Pandas to `astropy.table` to gain the advantages of `astropy.time`.
Timeseries data are currently not sufficiently supported by `astropy.table`.
Timeseries support in Astropy table is proposed to be added in
[APE 9](https://github.com/astropy/astropy-APEs/pull/12). It is expected that any
future implementation of timeseries in Astropy will be evaluated and timeseries
potentially adapted to use it. This would be the subject of a separate SEP.
The API of the `sunpy.timeseries` module will be converted to use `astropy.time`
as appropriate although the actual temporal index of the data (which is the
pandas dataframe) will not be changed.
# Decision Rationale
Agreed to in Board meeting of 4-Sept-2018. Implemented in SunPy [PR#2691](https://github.com/sunpy/sunpy/pull/2691).
| 50.536585 | 129 | 0.725386 | eng_Latn | 0.997557 |
e0239eda5f5cb1306657dab9d957cbb00d99006c | 2,112 | md | Markdown | content/about.md | spacecadet9/my-pico-blog | 617c45dbca88009c884c71bb58499b51f59e03be | [
"MIT"
] | null | null | null | content/about.md | spacecadet9/my-pico-blog | 617c45dbca88009c884c71bb58499b51f59e03be | [
"MIT"
] | null | null | null | content/about.md | spacecadet9/my-pico-blog | 617c45dbca88009c884c71bb58499b51f59e03be | [
"MIT"
] | null | null | null | /*
Title: About
Description: This description will go in the meta description tag
*/
# About
Donec efficitur sed nibh eget bibendum. Quisque rutrum nec magna non imperdiet. Nullam sollicitudin sodales nisl non molestie. Fusce et enim neque. Ut dapibus dui arcu, nec maximus ex feugiat ut. Nulla facilisi. Maecenas auctor condimentum vehicula. Nam erat sem, dapibus sed felis non, scelerisque tincidunt enim. Maecenas at vehicula nunc. Maecenas finibus interdum dui, nec fringilla orci porta id. Aliquam facilisis ipsum at ullamcorper porttitor. Praesent ultricies porttitor rhoncus. Nullam non arcu a dolor pulvinar vehicula in eu nisi. Maecenas vel aliquet ligula, a pharetra ante.
Vestibulum maximus sit amet lectus eget elementum. Morbi pulvinar dictum eros, nec sollicitudin sem fermentum in. Donec vitae nisl neque. Nam quis velit commodo, suscipit magna eu, fringilla elit. Nunc quis ultrices neque. Aliquam erat volutpat. In a nulla aliquet, porttitor dui varius, dignissim elit. Morbi eleifend condimentum semper. Curabitur accumsan velit a pharetra vulputate. Interdum et malesuada fames ac ante ipsum primis in faucibus. Morbi vel risus vitae mauris euismod egestas ut nec quam. Mauris semper efficitur eleifend. Sed a vehicula ipsum. Vivamus a odio sit amet lacus maximus interdum in at lorem. Aliquam mollis efficitur magna iaculis pharetra. Aenean lacinia massa augue, vitae tempor augue tincidunt consectetur.
Ut faucibus dui luctus libero malesuada venenatis. Sed sit amet fringilla est. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque at est odio. Sed sed aliquam mi, ac lacinia libero. Nam tempus nisi elementum, suscipit ligula congue, feugiat tellus. Nulla iaculis rutrum mauris, non efficitur mauris tristique id. Aliquam erat volutpat. Integer fringilla sagittis mi id tempus. In hac habitasse platea dictumst. Cras et nunc maximus, suscipit orci in, aliquam purus. Sed tempor, orci at luctus scelerisque, dui arcu congue erat, at luctus nisi nunc tempus turpis. Fusce imperdiet est at varius tincidunt. Donec scelerisque dignissim posuere.
| 162.461538 | 740 | 0.815341 | hun_Latn | 0.147599 |
e023d20aaa8b64c277505df1a28d80f85998b1d7 | 35,681 | md | Markdown | docs/cn/client.md | qinhunter/brpc | c5d4611fac4cedbb07c2e6ce04a65b18224aa27e | [
"Apache-2.0"
] | 2 | 2020-03-31T15:52:24.000Z | 2020-03-31T15:52:36.000Z | docs/cn/client.md | ww5365/brpc | 33538c6bb69f3151863707b2ebdeff74f93e4b65 | [
"Apache-2.0"
] | null | null | null | docs/cn/client.md | ww5365/brpc | 33538c6bb69f3151863707b2ebdeff74f93e4b65 | [
"Apache-2.0"
] | null | null | null | [English version](../en/client.md)
# 示例程序
Echo的[client端代码](https://github.com/brpc/brpc/blob/master/example/echo_c++/client.cpp)。
# 事实速查
- Channel.Init()是线程不安全的。
- Channel.CallMethod()是线程安全的,一个Channel可以被所有线程同时使用。
- Channel可以分配在栈上。
- Channel在发送异步请求后可以析构。
- 没有brpc::Client这个类。
# Channel
Client指发起请求的一端,在brpc中没有对应的实体,取而代之的是[brpc::Channel](https://github.com/brpc/brpc/blob/master/src/brpc/channel.h),它代表和一台或一组服务器的交互通道,Client和Channel在角色上的差别在实践中并不重要,你可以把Channel视作Client。
Channel可以**被所有线程共用**,你不需要为每个线程创建独立的Channel,也不需要用锁互斥。不过Channel的创建和Init并不是线程安全的,请确保在Init成功后再被多线程访问,在没有线程访问后再析构。
一些RPC实现中有ClientManager的概念,包含了Client端的配置信息和资源管理。brpc不需要这些,以往在ClientManager中配置的线程数、长短连接等等要么被加入了brpc::ChannelOptions,要么可以通过gflags全局配置,这么做的好处:
1. 方便。你不需要在创建Channel时传入ClientManager,也不需要存储ClientManager。否则不少代码需要一层层地传递ClientManager,很麻烦。gflags使一些全局行为的配置更加简单。
2. 共用资源。比如server和channel可以共用后台线程。(bthread的工作线程)
3. 生命周期。析构ClientManager的过程很容易出错,现在由框架负责则不会有问题。
就像大部分类那样,Channel必须在**Init**之后才能使用,options为NULL时所有参数取默认值,如果你要使用非默认值,这么做就行了:
```c++
brpc::ChannelOptions options; // 包含了默认值
options.xxx = yyy;
...
channel.Init(..., &options);
```
注意Channel不会修改options,Init结束后不会再访问options。所以options一般就像上面代码中那样放栈上。Channel.options()可以获得channel在使用的所有选项。
Init函数分为连接一台服务器和连接服务集群。
# 连接一台服务器
```c++
// options为NULL时取默认值
int Init(EndPoint server_addr_and_port, const ChannelOptions* options);
int Init(const char* server_addr_and_port, const ChannelOptions* options);
int Init(const char* server_addr, int port, const ChannelOptions* options);
```
这类Init连接的服务器往往有固定的ip地址,不需要命名服务和负载均衡,创建起来相对轻量。但是**请勿频繁创建使用域名的Channel**。这需要查询dns,可能最多耗时10秒(查询DNS的默认超时)。重用它们。
合法的“server_addr_and_port”:
- 127.0.0.1:80
- www.foo.com:8765
- localhost:9000
不合法的"server_addr_and_port":
- 127.0.0.1:90000 # 端口过大
- 10.39.2.300:8000 # 非法的ip
# 连接服务集群
```c++
int Init(const char* naming_service_url,
const char* load_balancer_name,
const ChannelOptions* options);
```
这类Channel需要定期从`naming_service_url`指定的命名服务中获得服务器列表,并通过`load_balancer_name`指定的负载均衡算法选择出一台机器发送请求。
你**不应该**在每次请求前动态地创建此类(连接服务集群的)Channel。因为创建和析构此类Channel牵涉到较多的资源,比如在创建时得访问一次命名服务,否则便不知道有哪些服务器可选。由于Channel可被多个线程共用,一般也没有必要动态创建。
当`load_balancer_name`为NULL或空时,此Init等同于连接单台server的Init,`naming_service_url`应该是"ip:port"或"域名:port"。你可以通过这个Init函数统一Channel的初始化方式。比如你可以把`naming_service_url`和`load_balancer_name`放在配置文件中,要连接单台server时把`load_balancer_name`置空,要连接服务集群时则设置一个有效的算法名称。
## 命名服务
命名服务把一个名字映射为可修改的机器列表,在client端的位置如下:

有了命名服务后client记录的是一个名字,而不是每一台下游机器。而当下游机器变化时,就只需要修改命名服务中的列表,而不需要逐台修改每个上游。这个过程也常被称为“解耦上下游”。当然在具体实现上,上游会记录每一台下游机器,并定期向命名服务请求或被推送最新的列表,以避免在RPC请求时才去访问命名服务。使用命名服务一般不会对访问性能造成影响,对命名服务的压力也很小。
`naming_service_url`的一般形式是"**protocol://service_name**"
### bns://\<bns-name\>
BNS是百度内常用的命名服务,比如bns://rdev.matrix.all,其中"bns"是protocol,"rdev.matrix.all"是service-name。相关一个gflag是-ns_access_interval: 
如果BNS中显示不为空,但Channel却说找不到服务器,那么有可能BNS列表中的机器状态位(status)为非0,含义为机器不可用,所以不会被加入到server候选集中.状态位可通过命令行查看:
`get_instance_by_service [bns_node_name] -s`
### file://\<path\>
服务器列表放在`path`所在的文件里,比如"file://conf/machine_list"中的“conf/machine_list”对应一个文件:
* 每行是一台服务器的地址。
* \#之后的是注释会被忽略
* 地址后出现的非注释内容被认为是tag,由一个或多个空格与前面的地址分隔,相同的地址+不同的tag被认为是不同的实例。
* 当文件更新时, brpc会重新加载。
```
# 此行会被忽略
10.24.234.17 tag1 # 这是注释,会被忽略
10.24.234.17 tag2 # 此行和上一行被认为是不同的实例
10.24.234.18
10.24.234.19
```
优点: 易于修改,方便单测。
缺点: 更新时需要修改每个上游的列表文件,不适合线上部署。
### list://\<addr1\>,\<addr2\>...
服务器列表直接跟在list://之后,以逗号分隔,比如"list://db-bce-81-3-186.db01:7000,m1-bce-44-67-72.m1:7000,cp01-rd-cos-006.cp01:7000"中有三个地址。
地址后可以声明tag,用一个或多个空格分隔,相同的地址+不同的tag被认为是不同的实例。
优点: 可在命令行中直接配置,方便单测。
缺点: 无法在运行时修改,完全不能用于线上部署。
### http://\<url\>
连接一个域名下所有的机器, 例如http://www.baidu.com:80 ,注意连接单点的Init(两个参数)虽然也可传入域名,但只会连接域名下的一台机器。
优点: DNS的通用性,公网内网均可使用。
缺点: 受限于DNS的格式限制无法传递复杂的meta数据,也无法实现通知机制。
### consul://\<service-name\>
通过consul获取服务名称为service-name的服务列表。consul的默认地址是localhost:8500,可通过gflags设置-consul\_agent\_addr来修改。consul的连接超时时间默认是200ms,可通过-consul\_connect\_timeout\_ms来修改。
默认在consul请求参数中添加[stale](https://www.consul.io/api/index.html#consistency-modes)和passing(仅返回状态为passing的服务列表),可通过gflags中-consul\_url\_parameter改变[consul请求参数](https://www.consul.io/api/health.html#parameters-2)。
除了对consul的首次请求,后续对consul的请求都采用[long polling](https://www.consul.io/api/index.html#blocking-queries)的方式,即仅当服务列表更新或请求超时后consul才返回结果,这里超时时间默认为60s,可通过-consul\_blocking\_query\_wait\_secs来设置。
若consul返回的服务列表[响应格式](https://www.consul.io/api/health.html#sample-response-2)有错误,或者列表中所有服务都因为地址、端口等关键字段缺失或无法解析而被过滤,consul naming server会拒绝更新服务列表,并在一段时间后(默认500ms,可通过-consul\_retry\_interval\_ms设置)重新访问consul。
如果consul不可访问,服务可自动降级到file naming service获取服务列表。此功能默认关闭,可通过设置-consul\_enable\_degrade\_to\_file\_naming\_service来打开。服务列表文件目录通过-consul \_file\_naming\_service\_dir来设置,使用service-name作为文件名。该文件可通过consul-template生成,里面会保存consul不可用之前最新的下游服务节点。当consul恢复时可自动恢复到consul naming service。
### 更多命名服务
用户可以通过实现brpc::NamingService来对接更多命名服务,具体见[这里](https://github.com/brpc/brpc/blob/master/docs/cn/load_balancing.md#%E5%91%BD%E5%90%8D%E6%9C%8D%E5%8A%A1)
### 命名服务中的tag
tag的用途主要是实现“粗粒度的wrr”,当给同一个地址加上不同的tag后,它们会被认为是不同的实例,从而让那个地址被负载均衡器正比与不同tag个数的流量。不过,你应当优先考虑使用[wrr算法](#wrr),相比使用tag可以实现更加精细的按权重分流。
### VIP相关的问题
VIP一般是4层负载均衡器的公网ip,背后有多个RS。当客户端连接至VIP时,VIP会选择一个RS建立连接,当客户端连接断开时,VIP也会断开与对应RS的连接。
如果客户端只与VIP建立一个连接(brpc中的单连接),那么来自这个客户端的所有流量都会落到一台RS上。如果客户端的数量非常多,至少在集群的角度,所有的RS还是会分到足够多的连接,从而基本均衡。但如果客户端的数量不多,或客户端的负载差异很大,那么可能在个别RS上出现热点。另一个问题是当有多个vip可选时,客户端分给它们的流量与各自后面的RS数量可能不一致。
解决这个问题的一种方法是使用连接池模式(pooled),这样客户端对一个vip就可能建立多个连接(约为一段时间内的最大并发度),从而让负载落到多个RS上。如果有多个vip,可以用[wrr负载均衡](#wrr)给不同的vip声明不同的权重从而分到对应比例的流量,或给相同的vip后加上多个不同的tag而被认为是多个不同的实例。
注意:在客户端使用单连接时,给相同的vip加上不同的tag确实能让这个vip分到更多的流量,但连接仍然只会有一个,这是由目前brpc实现决定的。
### 命名服务过滤器
当命名服务获得机器列表后,可以自定义一个过滤器进行筛选,最后把结果传递给负载均衡:

过滤器的接口如下:
```c++
// naming_service_filter.h
class NamingServiceFilter {
public:
// Return true to take this `server' as a candidate to issue RPC
// Return false to filter it out
virtual bool Accept(const ServerNode& server) const = 0;
};
// naming_service.h
struct ServerNode {
butil::EndPoint addr;
std::string tag;
};
```
常见的业务策略如根据server的tag进行过滤。
自定义的过滤器配置在ChannelOptions中,默认为NULL(不过滤)。
```c++
class MyNamingServiceFilter : public brpc::NamingServiceFilter {
public:
bool Accept(const brpc::ServerNode& server) const {
return server.tag == "main";
}
};
int main() {
...
MyNamingServiceFilter my_filter;
...
brpc::ChannelOptions options;
options.ns_filter = &my_filter;
...
}
```
## 负载均衡
当下游机器超过一台时,我们需要分割流量,此过程一般称为负载均衡,在client端的位置如下图所示:

理想的算法是每个请求都得到及时的处理,且任意机器crash对全局影响较小。但由于client端无法及时获得server端的延迟或拥塞,而且负载均衡算法不能耗费太多的cpu,一般来说用户得根据具体的场景选择合适的算法,目前rpc提供的算法有(通过load_balancer_name指定):
### rr
即round robin,总是选择列表中的下一台服务器,结尾的下一台是开头,无需其他设置。比如有3台机器a,b,c,那么brpc会依次向a, b, c, a, b, c, ...发送请求。注意这个算法的前提是服务器的配置,网络条件,负载都是类似的。
### wrr
即weighted round robin, 根据服务器列表配置的权重值来选择服务器。服务器被选到的机会正比于其权重值,并且该算法能保证同一服务器被选到的结果较均衡的散开。
### random
随机从列表中选择一台服务器,无需其他设置。和round robin类似,这个算法的前提也是服务器都是类似的。
### la
locality-aware,优先选择延时低的下游,直到其延时高于其他机器,无需其他设置。实现原理请查看[Locality-aware load balancing](lalb.md)。
### c_murmurhash or c_md5
一致性哈希,与简单hash的不同之处在于增加或删除机器时不会使分桶结果剧烈变化,特别适合cache类服务。
发起RPC前需要设置Controller.set_request_code(),否则RPC会失败。request_code一般是请求中主键部分的32位哈希值,**不需要和负载均衡使用的哈希算法一致**。比如用c_murmurhash算法也可以用md5计算哈希值。
[src/brpc/policy/hasher.h](https://github.com/brpc/brpc/blob/master/src/brpc/policy/hasher.h)中包含了常用的hash函数。如果用std::string key代表请求的主键,controller.set_request_code(brpc::policy::MurmurHash32(key.data(), key.size()))就正确地设置了request_code。
注意甄别请求中的“主键”部分和“属性”部分,不要为了偷懒或通用,就把请求的所有内容一股脑儿计算出哈希值,属性的变化会使请求的目的地发生剧烈的变化。另外也要注意padding问题,比如struct Foo { int32_t a; int64_t b; }在64位机器上a和b之间有4个字节的空隙,内容未定义,如果像hash(&foo, sizeof(foo))这样计算哈希值,结果就是未定义的,得把内容紧密排列或序列化后再算。
实现原理请查看[Consistent Hashing](consistent_hashing.md)。
## 健康检查
连接断开的server会被暂时隔离而不会被负载均衡算法选中,brpc会定期连接被隔离的server,以检查他们是否恢复正常,间隔由参数-health_check_interval控制:
| Name | Value | Description | Defined At |
| ------------------------- | ----- | ---------------------------------------- | ----------------------- |
| health_check_interval (R) | 3 | seconds between consecutive health-checkings | src/brpc/socket_map.cpp |
一旦server被连接上,它会恢复为可用状态。如果在隔离过程中,server从命名服务中删除了,brpc也会停止连接尝试。
# 发起访问
一般来说,我们不直接调用Channel.CallMethod,而是通过protobuf生成的桩XXX_Stub,过程更像是“调用函数”。stub内没什么成员变量,建议在栈上创建和使用,而不必new,当然你也可以把stub存下来复用。Channel::CallMethod和stub访问都是**线程安全**的,可以被所有线程同时访问。比如:
```c++
XXX_Stub stub(&channel);
stub.some_method(controller, request, response, done);
```
甚至
```c++
XXX_Stub(&channel).some_method(controller, request, response, done);
```
一个例外是http client。访问http服务和protobuf没什么关系,直接调用CallMethod即可,除了Controller和done均为NULL,详见[访问HTTP服务](http_client.md)。
## 同步访问
指的是:CallMethod会阻塞到收到server端返回response或发生错误(包括超时)。
同步访问中的response/controller不会在CallMethod后被框架使用,它们都可以分配在栈上。注意,如果request/response字段特别多字节数特别大的话,还是更适合分配在堆上。
```c++
MyRequest request;
MyResponse response;
brpc::Controller cntl;
XXX_Stub stub(&channel);
request.set_foo(...);
cntl.set_timeout_ms(...);
stub.some_method(&cntl, &request, &response, NULL);
if (cntl->Failed()) {
// RPC失败了. response里的值是未定义的,勿用。
} else {
// RPC成功了,response里有我们想要的回复数据。
}
```
## 异步访问
指的是:给CallMethod传递一个额外的回调对象done,CallMethod在发出request后就结束了,而不是在RPC结束后。当server端返回response或发生错误(包括超时)时,done->Run()会被调用。对RPC的后续处理应该写在done->Run()里,而不是CallMethod后。
由于CallMethod结束不意味着RPC结束,response/controller仍可能被框架及done->Run()使用,它们一般得创建在堆上,并在done->Run()中删除。如果提前删除了它们,那当done->Run()被调用时,将访问到无效内存。
你可以独立地创建这些对象,并使用[NewCallback](#使用NewCallback)生成done,也可以把Response和Controller作为done的成员变量,[一起new出来](#继承google::protobuf::Closure),一般使用前一种方法。
**发起异步请求后Request和Channel也可以立刻析构**。这两样和response/controller是不同的。注意:这是说Channel的析构可以立刻发生在CallMethod**之后**,并不是说析构可以和CallMethod同时发生,删除正被另一个线程使用的Channel是未定义行为(很可能crash)。
### 使用NewCallback
```c++
static void OnRPCDone(MyResponse* response, brpc::Controller* cntl) {
// unique_ptr会帮助我们在return时自动删掉response/cntl,防止忘记。gcc 3.4下的unique_ptr是模拟版本。
std::unique_ptr<MyResponse> response_guard(response);
std::unique_ptr<brpc::Controller> cntl_guard(cntl);
if (cntl->Failed()) {
// RPC失败了. response里的值是未定义的,勿用。
} else {
// RPC成功了,response里有我们想要的数据。开始RPC的后续处理。
}
// NewCallback产生的Closure会在Run结束后删除自己,不用我们做。
}
MyResponse* response = new MyResponse;
brpc::Controller* cntl = new brpc::Controller;
MyService_Stub stub(&channel);
MyRequest request; // 你不用new request,即使在异步访问中.
request.set_foo(...);
cntl->set_timeout_ms(...);
stub.some_method(cntl, &request, response, google::protobuf::NewCallback(OnRPCDone, response, cntl));
```
由于protobuf 3把NewCallback设置为私有,r32035后brpc把NewCallback独立于[src/brpc/callback.h](https://github.com/brpc/brpc/blob/master/src/brpc/callback.h)(并增加了一些重载)。如果你的程序出现NewCallback相关的编译错误,把google::protobuf::NewCallback替换为brpc::NewCallback就行了。
### 继承google::protobuf::Closure
使用NewCallback的缺点是要分配三次内存:response, controller, done。如果profiler证明这儿的内存分配有瓶颈,可以考虑自己继承Closure,把response/controller作为成员变量,这样可以把三次new合并为一次。但缺点就是代码不够美观,如果内存分配不是瓶颈,别用这种方法。
```c++
class OnRPCDone: public google::protobuf::Closure {
public:
void Run() {
// unique_ptr会帮助我们在return时自动delete this,防止忘记。gcc 3.4下的unique_ptr是模拟版本。
std::unique_ptr<OnRPCDone> self_guard(this);
if (cntl->Failed()) {
// RPC失败了. response里的值是未定义的,勿用。
} else {
// RPC成功了,response里有我们想要的数据。开始RPC的后续处理。
}
}
MyResponse response;
brpc::Controller cntl;
}
OnRPCDone* done = new OnRPCDone;
MyService_Stub stub(&channel);
MyRequest request; // 你不用new request,即使在异步访问中.
request.set_foo(...);
done->cntl.set_timeout_ms(...);
stub.some_method(&done->cntl, &request, &done->response, done);
```
### 如果异步访问中的回调函数特别复杂会有什么影响吗?
没有特别的影响,回调会运行在独立的bthread中,不会阻塞其他的逻辑。你可以在回调中做各种阻塞操作。
### rpc发送处的代码和回调函数是在同一个线程里执行吗?
一定不在同一个线程里运行,即使该次rpc调用刚进去就失败了,回调也会在另一个bthread中运行。这可以在加锁进行rpc(不推荐)的代码中避免死锁。
## 等待RPC完成
注意:当你需要发起多个并发操作时,可能[ParallelChannel](combo_channel.md#parallelchannel)更方便。
如下代码发起两个异步RPC后等待它们完成。
```c++
const brpc::CallId cid1 = controller1->call_id();
const brpc::CallId cid2 = controller2->call_id();
...
stub.method1(controller1, request1, response1, done1);
stub.method2(controller2, request2, response2, done2);
...
brpc::Join(cid1);
brpc::Join(cid2);
```
**在发起RPC前**调用Controller.call_id()获得一个id,发起RPC调用后Join那个id。
Join()的行为是等到RPC结束**且done->Run()运行后**,一些Join的性质如下:
- 如果对应的RPC已经结束,Join将立刻返回。
- 多个线程可以Join同一个id,它们都会醒来。
- 同步RPC也可以在另一个线程中被Join,但一般不会这么做。
Join()在之前的版本叫做JoinResponse(),如果你在编译时被提示deprecated之类的,修改为Join()。
在RPC调用后`Join(controller->call_id())`是**错误**的行为,一定要先把call_id保存下来。因为RPC调用后controller可能被随时开始运行的done删除。下面代码的Join方式是**错误**的。
```c++
static void on_rpc_done(Controller* controller, MyResponse* response) {
... Handle response ...
delete controller;
delete response;
}
Controller* controller1 = new Controller;
Controller* controller2 = new Controller;
MyResponse* response1 = new MyResponse;
MyResponse* response2 = new MyResponse;
...
stub.method1(controller1, &request1, response1, google::protobuf::NewCallback(on_rpc_done, controller1, response1));
stub.method2(controller2, &request2, response2, google::protobuf::NewCallback(on_rpc_done, controller2, response2));
...
brpc::Join(controller1->call_id()); // 错误,controller1可能被on_rpc_done删除了
brpc::Join(controller2->call_id()); // 错误,controller2可能被on_rpc_done删除了
```
## 半同步
Join可用来实现“半同步”访问:即等待多个异步访问完成。由于调用处的代码会等到所有RPC都结束后再醒来,所以controller和response都可以放栈上。
```c++
brpc::Controller cntl1;
brpc::Controller cntl2;
MyResponse response1;
MyResponse response2;
...
stub1.method1(&cntl1, &request1, &response1, brpc::DoNothing());
stub2.method2(&cntl2, &request2, &response2, brpc::DoNothing());
...
brpc::Join(cntl1.call_id());
brpc::Join(cntl2.call_id());
```
brpc::DoNothing()可获得一个什么都不干的done,专门用于半同步访问。它的生命周期由框架管理,用户不用关心。
注意在上面的代码中,我们在RPC结束后又访问了controller.call_id(),这是没有问题的,因为DoNothing中并不会像上节中的on_rpc_done中那样删除Controller。
## 取消RPC
brpc::StartCancel(call_id)可取消对应的RPC,call_id必须**在发起RPC前**通过Controller.call_id()获得,其他时刻都可能有race condition。
注意:是brpc::StartCancel(call_id),不是controller->StartCancel(),后者被禁用,没有效果。后者是protobuf默认提供的接口,但是在controller对象的生命周期上有严重的竞争问题。
顾名思义,StartCancel调用完成后RPC并未立刻结束,你不应该碰触Controller的任何字段或删除任何资源,它们自然会在RPC结束时被done中对应逻辑处理。如果你一定要在原地等到RPC结束(一般不需要),则可通过Join(call_id)。
关于StartCancel的一些事实:
- call_id在发起RPC前就可以被取消,RPC会直接结束(done仍会被调用)。
- call_id可以在另一个线程中被取消。
- 取消一个已经取消的call_id不会有任何效果。推论:同一个call_id可以被多个线程同时取消,但最多一次有效果。
- 这里的取消是纯client端的功能,**server端未必会取消对应的操作**,server cancelation是另一个功能。
## 获取Server的地址和端口
remote_side()方法可知道request被送向了哪个server,返回值类型是[butil::EndPoint](https://github.com/brpc/brpc/blob/master/src/butil/endpoint.h),包含一个ip4地址和端口。在RPC结束前调用这个方法都是没有意义的。
打印方式:
```c++
LOG(INFO) << "remote_side=" << cntl->remote_side();
printf("remote_side=%s\n", butil::endpoint2str(cntl->remote_side()).c_str());
```
## 获取Client的地址和端口
r31384后通过local_side()方法可**在RPC结束后**获得发起RPC的地址和端口。
打印方式:
```c++
LOG(INFO) << "local_side=" << cntl->local_side();
printf("local_side=%s\n", butil::endpoint2str(cntl->local_side()).c_str());
```
## 应该重用brpc::Controller吗?
不用刻意地重用,但Controller是个大杂烩,可能会包含一些缓存,Reset()可以避免反复地创建这些缓存。
在大部分场景下,构造Controller(snippet1)和重置Controller(snippet2)的性能差异不大。
```c++
// snippet1
for (int i = 0; i < n; ++i) {
brpc::Controller controller;
...
stub.CallSomething(..., &controller);
}
// snippet2
brpc::Controller controller;
for (int i = 0; i < n; ++i) {
controller.Reset();
...
stub.CallSomething(..., &controller);
}
```
但如果snippet1中的Controller是new出来的,那么snippet1就会多出“内存分配”的开销,在一些情况下可能会慢一些。
# 设置
Client端的设置主要由三部分组成:
- brpc::ChannelOptions: 定义在[src/brpc/channel.h](https://github.com/brpc/brpc/blob/master/src/brpc/channel.h)中,用于初始化Channel,一旦初始化成功无法修改。
- brpc::Controller: 定义在[src/brpc/controller.h](https://github.com/brpc/brpc/blob/master/src/brpc/controller.h)中,用于在某次RPC中覆盖ChannelOptions中的选项,可根据上下文每次均不同。
- 全局gflags:常用于调节一些底层代码的行为,一般不用修改。请自行阅读服务[/flags页面](flags.md)中的说明。
Controller包含了request中没有的数据和选项。server端和client端的Controller结构体是一样的,但使用的字段可能是不同的,你需要仔细阅读Controller中的注释,明确哪些字段可以在server端使用,哪些可以在client端使用。
一个Controller对应一次RPC。一个Controller可以在Reset()后被另一个RPC复用,但一个Controller不能被多个RPC同时使用(不论是否在同一个线程发起)。
Controller的特点:
1. 一个Controller只能有一个使用者,没有特殊说明的话,Controller中的方法默认线程不安全。
2. 因为不能被共享,所以一般不会用共享指针管理Controller,如果你用共享指针了,很可能意味着出错了。
3. Controller创建于开始RPC前,析构于RPC结束后,常见几种模式:
- 同步RPC前Controller放栈上,出作用域后自行析构。注意异步RPC的Controller绝对不能放栈上,否则其析构时异步调用很可能还在进行中,从而引发未定义行为。
- 异步RPC前new Controller,done中删除。
## 线程数
和大部分的RPC框架不同,brpc中并没有独立的Client线程池。所有Channel和Server通过[bthread](http://wiki.baidu.com/display/RPC/bthread)共享相同的线程池. 如果你的程序同样使用了brpc的server, 仅仅需要设置Server的线程数。 或者可以通过[gflags](http://wiki.baidu.com/display/RPC/flags)设置[-bthread_concurrency](http://brpc.baidu.com:8765/flags/bthread_concurrency)来设置全局的线程数.
## 超时
**ChannelOptions.timeout_ms**是对应Channel上所有RPC的总超时,Controller.set_timeout_ms()可修改某次RPC的值。单位毫秒,默认值1秒,最大值2^31(约24天),-1表示一直等到回复或错误。
**ChannelOptions.connect_timeout_ms**是对应Channel上所有RPC的连接超时,单位毫秒,默认值1秒。-1表示等到连接建立或出错,此值被限制为不能超过timeout_ms。注意此超时独立于TCP的连接超时,一般来说前者小于后者,反之则可能在connect_timeout_ms未达到前由于TCP连接超时而出错。
注意1:brpc中的超时是deadline,超过就意味着RPC结束,超时后没有重试。其他实现可能既有单次访问的超时,也有代表deadline的超时。迁移到brpc时请仔细区分。
注意2:RPC超时的错误码为**ERPCTIMEDOUT (1008)**,ETIMEDOUT的意思是连接超时,且可重试。
## 重试
ChannelOptions.max_retry是该Channel上所有RPC的默认最大重试次数,Controller.set_max_retry()可修改某次RPC的值,默认值3,0表示不重试。
r32111后Controller.retried_count()返回重试次数。
r34717后Controller.has_backup_request()获知是否发送过backup_request。
**重试时框架会尽量避开之前尝试过的server。**
重试的触发条件有(条件之间是AND关系):
### 连接出错
如果server一直没有返回,但连接没有问题,这种情况下不会重试。如果你需要在一定时间后发送另一个请求,使用backup request。
工作机制如下:如果response没有在backup_request_ms内返回,则发送另外一个请求,哪个先回来就取哪个。新请求会被尽量送到不同的server。注意如果backup_request_ms大于超时,则backup request总不会被发送。backup request会消耗一次重试次数。backup request不意味着server端cancel。
ChannelOptions.backup_request_ms影响该Channel上所有RPC,单位毫秒,默认值-1(表示不开启),Controller.set_backup_request_ms()可修改某次RPC的值。
### 没到超时
超时后RPC会尽快结束。
### 有剩余重试次数
Controller.set_max_retry(0)或ChannelOptions.max_retry=0关闭重试。
### 错误值得重试
一些错误重试是没有意义的,就不会重试,比如请求有错时(EREQUEST)不会重试,因为server总不会接受,没有意义。
用户可以通过继承[brpc::RetryPolicy](https://github.com/brpc/brpc/blob/master/src/brpc/retry_policy.h)自定义重试条件。比如brpc默认不重试HTTP相关的错误,而你的程序中希望在碰到HTTP_STATUS_FORBIDDEN (403)时重试,可以这么做:
```c++
#include <brpc/retry_policy.h>
class MyRetryPolicy : public brpc::RetryPolicy {
public:
bool DoRetry(const brpc::Controller* cntl) const {
if (cntl->ErrorCode() == brpc::EHTTP && // HTTP错误
cntl->http_response().status_code() == brpc::HTTP_STATUS_FORBIDDEN) {
return true;
}
// 把其他情况丢给框架。
return brpc::DefaultRetryPolicy()->DoRetry(cntl);
}
};
...
// 给ChannelOptions.retry_policy赋值就行了。
// 注意:retry_policy必须在Channel使用期间保持有效,Channel也不会删除retry_policy,所以大部分情况下RetryPolicy都应以单例模式创建。
brpc::ChannelOptions options;
static MyRetryPolicy g_my_retry_policy;
options.retry_policy = &g_my_retry_policy;
...
```
一些提示:
* 通过cntl->response()可获得对应RPC的response。
* 对ERPCTIMEDOUT代表的RPC超时总是不重试,即使你继承的RetryPolicy中允许。
### 重试应当保守
由于成本的限制,大部分线上server的冗余度是有限的,主要是满足多机房互备的需求。而激进的重试逻辑很容易导致众多client对server集群造成2-3倍的压力,最终使集群雪崩:由于server来不及处理导致队列越积越长,使所有的请求得经过很长的排队才被处理而最终超时,相当于服务停摆。默认的重试是比较安全的: 只要连接不断RPC就不会重试,一般不会产生大量的重试请求。用户可以通过RetryPolicy定制重试策略,但也可能使重试变成一场“风暴”。当你定制RetryPolicy时,你需要仔细考虑client和server的协作关系,并设计对应的异常测试,以确保行为符合预期。
## 协议
Channel的默认协议是baidu_std,可通过设置ChannelOptions.protocol换为其他协议,这个字段既接受enum也接受字符串。
目前支持的有:
- PROTOCOL_BAIDU_STD 或 “baidu_std",即[百度标准协议](baidu_std.md),默认为单连接。
- PROTOCOL_HULU_PBRPC 或 "hulu_pbrpc",hulu的协议,默认为单连接。
- PROTOCOL_NOVA_PBRPC 或 ”nova_pbrpc“,网盟的协议,默认为连接池。
- PROTOCOL_HTTP 或 ”http", http 1.0或1.1协议,默认为连接池(Keep-Alive)。具体方法见[访问HTTP服务](http_client.md)。
- PROTOCOL_SOFA_PBRPC 或 "sofa_pbrpc",sofa-pbrpc的协议,默认为单连接。
- PROTOCOL_PUBLIC_PBRPC 或 "public_pbrpc",public_pbrpc的协议,默认为连接池。
- PROTOCOL_UBRPC_COMPACK 或 "ubrpc_compack",public/ubrpc的协议,使用compack打包,默认为连接池。具体方法见[ubrpc (by protobuf)](ub_client.md)。相关的还有PROTOCOL_UBRPC_MCPACK2或ubrpc_mcpack2,使用mcpack2打包。
- PROTOCOL_NSHEAD_CLIENT 或 "nshead_client",这是发送baidu-rpc-ub中所有UBXXXRequest需要的协议,默认为连接池。具体方法见[访问UB](ub_client.md)。
- PROTOCOL_NSHEAD 或 "nshead",这是发送NsheadMessage需要的协议,默认为连接池。具体方法见[nshead+blob](ub_client.md#nshead-blob) 。
- PROTOCOL_MEMCACHE 或 "memcache",memcached的二进制协议,默认为单连接。具体方法见[访问memcached](memcache_client.md)。
- PROTOCOL_REDIS 或 "redis",redis 1.2后的协议(也是hiredis支持的协议),默认为单连接。具体方法见[访问Redis](redis_client.md)。
- PROTOCOL_NSHEAD_MCPACK 或 "nshead_mcpack", 顾名思义,格式为nshead + mcpack,使用mcpack2pb适配,默认为连接池。
- PROTOCOL_ESP 或 "esp",访问使用esp协议的服务,默认为连接池。
- PROTOCOL_THRIFT 或 "thrift",访问使用thrift协议的服务,默认为连接池, 具体方法见[访问thrift](thrift.md)。
## 连接方式
brpc支持以下连接方式:
- 短连接:每次RPC前建立连接,结束后关闭连接。由于每次调用得有建立连接的开销,这种方式一般用于偶尔发起的操作,而不是持续发起请求的场景。没有协议默认使用这种连接方式,http 1.0对连接的处理效果类似短链接。
- 连接池:每次RPC前取用空闲连接,结束后归还,一个连接上最多只有一个请求,一个client对一台server可能有多条连接。http 1.1和各类使用nshead的协议都是这个方式。
- 单连接:进程内所有client与一台server最多只有一个连接,一个连接上可能同时有多个请求,回复返回顺序和请求顺序不需要一致,这是baidu_std,hulu_pbrpc,sofa_pbrpc协议的默认选项。
| | 短连接 | 连接池 | 单连接 |
| ------------------- | ---------------------------------------- | --------------------- | ------------------- |
| 长连接 | 否 | 是 | 是 |
| server端连接数(单client) | qps*latency (原理见[little's law](https://en.wikipedia.org/wiki/Little%27s_law)) | qps*latency | 1 |
| 极限qps | 差,且受限于单机端口数 | 中等 | 高 |
| latency | 1.5RTT(connect) + 1RTT + 处理时间 | 1RTT + 处理时间 | 1RTT + 处理时间 |
| cpu占用 | 高, 每次都要tcp connect | 中等, 每个请求都要一次sys write | 低, 合并写出在大流量时减少cpu占用 |
框架会为协议选择默认的连接方式,用户**一般不用修改**。若需要,把ChannelOptions.connection_type设为:
- CONNECTION_TYPE_SINGLE 或 "single" 为单连接
- CONNECTION_TYPE_POOLED 或 "pooled" 为连接池, 单个远端对应的连接池最多能容纳的连接数由-max_connection_pool_size控制。注意,此选项不等价于“最大连接数”。需要连接时只要没有闲置的,就会新建;归还时,若池中已有max_connection_pool_size个连接的话,会直接关闭。max_connection_pool_size的取值要符合并发,否则超出的部分会被频繁建立和关闭,效果类似短连接。若max_connection_pool_size为0,就近似于完全的短连接。
| Name | Value | Description | Defined At |
| ---------------------------- | ----- | ---------------------------------------- | ------------------- |
| max_connection_pool_size (R) | 100 | Max number of pooled connections to a single endpoint | src/brpc/socket.cpp |
- CONNECTION_TYPE_SHORT 或 "short" 为短连接
- 设置为“”(空字符串)则让框架选择协议对应的默认连接方式。
brpc支持[Streaming RPC](streaming_rpc.md),这是一种应用层的连接,用于传递流式数据。
## 关闭连接池中的闲置连接
当连接池中的某个连接在-idle_timeout_second时间内没有读写,则被视作“闲置”,会被自动关闭。默认值为10秒。此功能只对连接池(pooled)有效。打开-log_idle_connection_close在关闭前会打印一条日志。
| Name | Value | Description | Defined At |
| ------------------------- | ----- | ---------------------------------------- | ----------------------- |
| idle_timeout_second | 10 | Pooled connections without data transmission for so many seconds will be closed. No effect for non-positive values | src/brpc/socket_map.cpp |
| log_idle_connection_close | false | Print log when an idle connection is closed | src/brpc/socket.cpp |
## 延迟关闭连接
多个channel可能通过引用计数引用同一个连接,当引用某个连接的最后一个channel析构时,该连接将被关闭。但在一些场景中,channel在使用前才被创建,用完立刻析构,这时其中一些连接就会被无谓地关闭再被打开,效果类似短连接。
一个解决办法是用户把所有或常用的channel缓存下来,这样自然能避免channel频繁产生和析构,但目前brpc没有提供这样一个utility,用户自己(正确)实现有一些工作量。
另一个解决办法是设置全局选项-defer_close_second
| Name | Value | Description | Defined At |
| ------------------ | ----- | ---------------------------------------- | ----------------------- |
| defer_close_second | 0 | Defer close of connections for so many seconds even if the connection is not used by anyone. Close immediately for non-positive values | src/brpc/socket_map.cpp |
设置后引用计数清0时连接并不会立刻被关闭,而是会等待这么多秒再关闭,如果在这段时间内又有channel引用了这个连接,它会恢复正常被使用的状态。不管channel创建析构有多频率,这个选项使得关闭连接的频率有上限。这个选项的副作用是一些fd不会被及时关闭,如果延时被误设为一个大数值,程序占据的fd个数可能会很大。
## 连接的缓冲区大小
-socket_recv_buffer_size设置所有连接的接收缓冲区大小,默认-1(不修改)
-socket_send_buffer_size设置所有连接的发送缓冲区大小,默认-1(不修改)
| Name | Value | Description | Defined At |
| ----------------------- | ----- | ---------------------------------------- | ------------------- |
| socket_recv_buffer_size | -1 | Set the recv buffer size of socket if this value is positive | src/brpc/socket.cpp |
| socket_send_buffer_size | -1 | Set send buffer size of sockets if this value is positive | src/brpc/socket.cpp |
## log_id
通过set_log_id()可设置64位整型log_id。这个id会和请求一起被送到服务器端,一般会被打在日志里,从而把一次检索经过的所有服务串联起来。字符串格式的需要转化为64位整形才能设入log_id。
## 附件
baidu_std和hulu_pbrpc协议支持附件,这段数据由用户自定义,不经过protobuf的序列化。站在client的角度,设置在Controller::request_attachment()的附件会被server端收到,response_attachment()则包含了server端送回的附件。附件不受压缩选项影响。
在http协议中,附件对应[message body](http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html),比如要POST的数据就设置在request_attachment()中。
## 开启SSL
要开启SSL,首先确保代码依赖了最新的openssl库。如果openssl版本很旧,会有严重的安全漏洞,支持的加密算法也少,违背了开启SSL的初衷。然后设置`ChannelOptions.ssl_options`,具体见[ssl_option.h](https://github.com/brpc/brpc/blob/master/src/brpc/ssl_option.h)。
```c++
// SSL options at client side
struct ChannelSSLOptions {
// Whether to enable SSL on the channel.
// Default: false
bool enable;
// Cipher suites used for SSL handshake.
// The format of this string should follow that in `man 1 cipers'.
// Default: "DEFAULT"
std::string ciphers;
// SSL protocols used for SSL handshake, separated by comma.
// Available protocols: SSLv3, TLSv1, TLSv1.1, TLSv1.2
// Default: TLSv1, TLSv1.1, TLSv1.2
std::string protocols;
// When set, fill this into the SNI extension field during handshake,
// which can be used by the server to locate the right certificate.
// Default: empty
std::string sni_name;
// Options used to verify the server's certificate
// Default: see above
VerifyOptions verify;
// ... Other options
};
```
- 目前只有连接单点的Channel可以开启SSL访问,使用了命名服务的Channel**不支持开启SSL**。
- 开启后,该Channel上任何协议的请求,都会被SSL加密后发送。如果希望某些请求不加密,需要额外再创建一个Channel。
- 针对HTTPS做了些易用性优化:`Channel.Init`时能自动识别https://前缀,自动开启SSL;-http_verbose时也会输出证书信息。
## 认证
client端的认证一般分为2种:
1. 基于请求的认证:每次请求都会带上认证信息。这种方式比较灵活,认证信息中可以含有本次请求中的字段,但是缺点是每次请求都会需要认证,性能上有所损失
2. 基于连接的认证:当TCP连接建立后,client发送认证包,认证成功后,后续该连接上的请求不再需要认证。相比前者,这种方式灵活度不高(一般认证包里只能携带本机一些静态信息),但性能较好,一般用于单连接/连接池场景
针对第一种认证场景,在实现上非常简单,将认证的格式定义加到请求结构体中,每次当做正常RPC发送出去即可;针对第二种场景,brpc提供了一种机制,只要用户继承实现:
```c++
class Authenticator {
public:
virtual ~Authenticator() {}
// Implement this method to generate credential information
// into `auth_str' which will be sent to `VerifyCredential'
// at server side. This method will be called on client side.
// Returns 0 on success, error code otherwise
virtual int GenerateCredential(std::string* auth_str) const = 0;
};
```
那么当用户并发调用RPC接口用单连接往同一个server发请求时,框架会自动保证:建立TCP连接后,连接上的第一个请求中会带有上述`GenerateCredential`产生的认证包,其余剩下的并发请求不会带有认证信息,依次排在第一个请求之后。整个发送过程依旧是并发的,并不会等第一个请求先返回。若server端认证成功,那么所有请求都能成功返回;若认证失败,一般server端则会关闭连接,这些请求则会收到相应错误。
目前自带协议中支持客户端认证的有:[baidu_std](baidu_std.md)(默认协议), HTTP, hulu_pbrpc, ESP。对于自定义协议,一般可以在组装请求阶段,调用Authenticator接口生成认证串,来支持客户端认证。
## 重置
调用Reset方法可让Controller回到刚创建时的状态。
别在RPC结束前重置Controller,行为是未定义的。
## 压缩
set_request_compress_type()设置request的压缩方式,默认不压缩。
注意:附件不会被压缩。
HTTP body的压缩方法见[client压缩request body](http_client#压缩request-body)。
支持的压缩方法有:
- brpc::CompressTypeSnappy : [snanpy压缩](http://google.github.io/snappy/),压缩和解压显著快于其他压缩方法,但压缩率最低。
- brpc::CompressTypeGzip : [gzip压缩](http://en.wikipedia.org/wiki/Gzip),显著慢于snappy,但压缩率高
- brpc::CompressTypeZlib : [zlib压缩](http://en.wikipedia.org/wiki/Zlib),比gzip快10%~20%,压缩率略好于gzip,但速度仍明显慢于snappy。
下表是多种压缩算法应对重复率很高的数据时的性能,仅供参考。
| Compress method | Compress size(B) | Compress time(us) | Decompress time(us) | Compress throughput(MB/s) | Decompress throughput(MB/s) | Compress ratio |
| --------------- | ---------------- | ----------------- | ------------------- | ------------------------- | --------------------------- | -------------- |
| Snappy | 128 | 0.753114 | 0.890815 | 162.0875 | 137.0322 | 37.50% |
| Gzip | 10.85185 | 1.849199 | 11.2488 | 66.01252 | 47.66% | |
| Zlib | 10.71955 | 1.66522 | 11.38763 | 73.30581 | 38.28% | |
| Snappy | 1024 | 1.404812 | 1.374915 | 695.1555 | 710.2713 | 8.79% |
| Gzip | 16.97748 | 3.950946 | 57.52106 | 247.1718 | 6.64% | |
| Zlib | 15.98913 | 3.06195 | 61.07665 | 318.9348 | 5.47% | |
| Snappy | 16384 | 8.822967 | 9.865008 | 1770.946 | 1583.881 | 4.96% |
| Gzip | 160.8642 | 43.85911 | 97.13162 | 356.2544 | 0.78% | |
| Zlib | 147.6828 | 29.06039 | 105.8011 | 537.6734 | 0.71% | |
| Snappy | 32768 | 16.16362 | 19.43596 | 1933.354 | 1607.844 | 4.82% |
| Gzip | 229.7803 | 82.71903 | 135.9995 | 377.7849 | 0.54% | |
| Zlib | 240.7464 | 54.44099 | 129.8046 | 574.0161 | 0.50% | |
下表是多种压缩算法应对重复率很低的数据时的性能,仅供参考。
| Compress method | Compress size(B) | Compress time(us) | Decompress time(us) | Compress throughput(MB/s) | Decompress throughput(MB/s) | Compress ratio |
| --------------- | ---------------- | ----------------- | ------------------- | ------------------------- | --------------------------- | -------------- |
| Snappy | 128 | 0.866002 | 0.718052 | 140.9584 | 170.0021 | 105.47% |
| Gzip | 15.89855 | 4.936242 | 7.678077 | 24.7294 | 116.41% | |
| Zlib | 15.88757 | 4.793953 | 7.683384 | 25.46339 | 107.03% | |
| Snappy | 1024 | 2.087972 | 1.06572 | 467.7087 | 916.3403 | 100.78% |
| Gzip | 32.54279 | 12.27744 | 30.00857 | 79.5412 | 79.79% | |
| Zlib | 31.51397 | 11.2374 | 30.98824 | 86.90288 | 78.61% | |
| Snappy | 16384 | 12.598 | 6.306592 | 1240.276 | 2477.566 | 100.06% |
| Gzip | 537.1803 | 129.7558 | 29.08707 | 120.4185 | 75.32% | |
| Zlib | 519.5705 | 115.1463 | 30.07291 | 135.697 | 75.24% | |
| Snappy | 32768 | 22.68531 | 12.39793 | 1377.543 | 2520.582 | 100.03% |
| Gzip | 1403.974 | 258.9239 | 22.25825 | 120.6919 | 75.25% | |
| Zlib | 1370.201 | 230.3683 | 22.80687 | 135.6524 | 75.21% | |
# FAQ
### Q: brpc能用unix domain socket吗
不能。同机TCP socket并不走网络,相比unix domain socket性能只会略微下降。一些不能用TCP socket的特殊场景可能会需要,以后可能会扩展支持。
### Q: Fail to connect to xx.xx.xx.xx:xxxx, Connection refused
一般是对端server没打开端口(很可能挂了)。
### Q: 经常遇到至另一个机房的Connection timedout

这个就是连接超时了,调大连接和RPC超时:
```c++
struct ChannelOptions {
...
// Issue error when a connection is not established after so many
// milliseconds. -1 means wait indefinitely.
// Default: 200 (milliseconds)
// Maximum: 0x7fffffff (roughly 30 days)
int32_t connect_timeout_ms;
// Max duration of RPC over this Channel. -1 means wait indefinitely.
// Overridable by Controller.set_timeout_ms().
// Default: 500 (milliseconds)
// Maximum: 0x7fffffff (roughly 30 days)
int32_t timeout_ms;
...
};
```
注意: 连接超时不是RPC超时,RPC超时打印的日志是"Reached timeout=..."。
### Q: 为什么同步方式是好的,异步就crash了
重点检查Controller,Response和done的生命周期。在异步访问中,RPC调用结束并不意味着RPC整个过程结束,而是在进入done->Run()时才会结束。所以这些对象不应在调用RPC后就释放,而是要在done->Run()里释放。你一般不能把这些对象分配在栈上,而应该分配在堆上。详见[异步访问](client.md#异步访问)。
### Q: 怎么确保请求只被处理一次
这不是RPC层面的事情。当response返回且成功时,我们确认这个过程一定成功了。当response返回且失败时,我们确认这个过程一定失败了。但当response没有返回时,它可能失败,也可能成功。如果我们选择重试,那一个成功的过程也可能会被再执行一次。一般来说带副作用的RPC服务都应当考虑[幂等](http://en.wikipedia.org/wiki/Idempotence)问题,否则重试可能会导致多次叠加副作用而产生意向不到的结果。只有读的检索服务大都没有副作用而天然幂等,无需特殊处理。而带写的存储服务则要在设计时就加入版本号或序列号之类的机制以拒绝已经发生的过程,保证幂等。
### Q: Invalid address=`bns://group.user-persona.dumi.nj03'
```
FATAL 04-07 20:00:03 7778 src/brpc/channel.cpp:123] Invalid address=`bns://group.user-persona.dumi.nj03'. You should use Init(naming_service_name, load_balancer_name, options) to access multiple servers.
```
访问命名服务要使用三个参数的Init,其中第二个参数是load_balancer_name,而这里用的是两个参数的Init,框架认为是访问单点,就会报这个错。
### Q: 两端都用protobuf,为什么不能互相访问
**协议 !=protobuf**。protobuf负责一个包的序列化,协议中的一个消息可能会包含多个protobuf包,以及额外的长度、校验码、magic number等等。打包格式相同不意味着协议可以互通。在brpc中写一份代码就能服务多协议的能力是通过把不同协议的数据转化为统一的编程接口完成的,而不是在protobuf层面。
### Q: 为什么C++ client/server 能够互相通信, 和其他语言的client/server 通信会报序列化失败的错误
检查一下C++ 版本是否开启了压缩 (Controller::set_compress_type), 目前其他语言的rpc框架还没有实现压缩,互相返回会出现问题。
# 附:Client端基本流程

主要步骤:
1. 创建一个[bthread_id](https://github.com/brpc/brpc/blob/master/src/bthread/id.h)作为本次RPC的correlation_id。
2. 根据Channel的创建方式,从进程级的[SocketMap](https://github.com/brpc/brpc/blob/master/src/brpc/socket_map.h)中或从[LoadBalancer](https://github.com/brpc/brpc/blob/master/src/brpc/load_balancer.h)中选择一台下游server作为本次RPC发送的目的地。
3. 根据连接方式(单连接、连接池、短连接),选择一个[Socket](https://github.com/brpc/brpc/blob/master/src/brpc/socket.h)。
4. 如果开启验证且当前Socket没有被验证过时,第一个请求进入验证分支,其余请求会阻塞直到第一个包含认证信息的请求写入Socket。server端只对第一个请求进行验证。
5. 根据Channel的协议,选择对应的序列化函数把request序列化至[IOBuf](https://github.com/brpc/brpc/blob/master/src/butil/iobuf.h)。
6. 如果配置了超时,设置定时器。从这个点开始要避免使用Controller对象,因为在设定定时器后随时可能触发超时->调用到用户的超时回调->用户在回调中析构Controller。
7. 发送准备阶段结束,若上述任何步骤出错,会调用Channel::HandleSendFailed。
8. 将之前序列化好的IOBuf写出到Socket上,同时传入回调Channel::HandleSocketFailed,当连接断开、写失败等错误发生时会调用此回调。
9. 如果是同步发送,Join correlation_id;否则至此CallMethod结束。
10. 网络上发消息+收消息。
11. 收到response后,提取出其中的correlation_id,在O(1)时间内找到对应的Controller。这个过程中不需要查找全局哈希表,有良好的多核扩展性。
12. 根据协议格式反序列化response。
13. 调用Controller::OnRPCReturned,可能会根据错误码判断是否需要重试,或让RPC结束。如果是异步发送,调用用户回调。最后摧毁correlation_id唤醒Join着的线程。
| 41.537835 | 299 | 0.698299 | yue_Hant | 0.355769 |
e02440dcf512833fec5186f00bfcd8db786c6968 | 1,079 | md | Markdown | .history/_posts/2020-01-06-trapwater_20200106234808.md | ydbB/ydbB.github.io | b988b0e83557c51f5e9dd7479f8f97ae6adf90a5 | [
"MIT"
] | null | null | null | .history/_posts/2020-01-06-trapwater_20200106234808.md | ydbB/ydbB.github.io | b988b0e83557c51f5e9dd7479f8f97ae6adf90a5 | [
"MIT"
] | null | null | null | .history/_posts/2020-01-06-trapwater_20200106234808.md | ydbB/ydbB.github.io | b988b0e83557c51f5e9dd7479f8f97ae6adf90a5 | [
"MIT"
] | null | null | null | ---
layout: post
title: 收集雨水
---
给定 n 个非负整数表示每个宽度为 1 的柱子的高度图,计算按此排列的柱子,下雨之后能接多少雨水。

上面是由数组 [0,1,0,2,1,0,1,3,2,1,2,1] 表示的高度图,在这种情况下,可以接 6 个单位的雨水(蓝色部分表示雨水)。 感谢 Marcos 贡献此图。
示例:
输入: [0,1,0,2,1,0,1,3,2,1,2,1]
输出: 6
来源:力扣(LeetCode)
链接:https://leetcode-cn.com/problems/trapping-rain-water
著作权归领扣网络所有。商业转载请联系官方授权,非商业转载请注明出处。
### 题解
``` java
public int trap(int[] height) {
if (height == null || height.length == 0) return 0;
int cupOfWater = 0;
int left_max = 0, right_max = 0;
int left = 0, right = height.length - 1;
while (left < right) {
if (height[left] < height[right]) {
if (height[left] < left_max) cupOfWater += left_max - height[left];
else left_max = height[left];
left ++;
} else {
if (height[right] < right_max) cupOfWater += right_max - height[right];
else right_max = height[right];
right --;
}
}
return cupOfWater;
}
```
思路: | 25.690476 | 87 | 0.55607 | eng_Latn | 0.41599 |
e024618c1ce2af6582aa420e1ee5f549c8661bf2 | 60 | md | Markdown | README.md | f3rcho/f3rcho-next-js | 9305bd4ed6eb970a0478dda773e0a0480cc501ef | [
"MIT"
] | null | null | null | README.md | f3rcho/f3rcho-next-js | 9305bd4ed6eb970a0478dda773e0a0480cc501ef | [
"MIT"
] | null | null | null | README.md | f3rcho/f3rcho-next-js | 9305bd4ed6eb970a0478dda773e0a0480cc501ef | [
"MIT"
] | null | null | null | # f3rcho-next-js
My website built with next js and pure css
| 20 | 42 | 0.766667 | eng_Latn | 0.968212 |
e02485b01fcd38d6efdddb400ebd93fd1aa18eb4 | 54 | md | Markdown | README.md | sblasa/lesson5_example | cf4637875be0b07149ac5d501c4b918b4588b439 | [
"MIT"
] | null | null | null | README.md | sblasa/lesson5_example | cf4637875be0b07149ac5d501c4b918b4588b439 | [
"MIT"
] | null | null | null | README.md | sblasa/lesson5_example | cf4637875be0b07149ac5d501c4b918b4588b439 | [
"MIT"
] | null | null | null | # lesson5_example
Lesson 5 homework example replacing
| 18 | 35 | 0.851852 | eng_Latn | 0.994633 |
e024a17a67e9e6e0e7d3311d470805c345969565 | 2,142 | md | Markdown | docs/components/Fade.md | nikhil2601/test | 66c14cd7c92abdffe5ec7315cde00e9941d3e0ac | [
"MIT"
] | null | null | null | docs/components/Fade.md | nikhil2601/test | 66c14cd7c92abdffe5ec7315cde00e9941d3e0ac | [
"MIT"
] | 2 | 2020-09-04T03:27:50.000Z | 2020-12-18T23:02:05.000Z | docs/components/Fade.md | nikhil2601/test | 66c14cd7c92abdffe5ec7315cde00e9941d3e0ac | [
"MIT"
] | null | null | null | [pep-comp](/) > [docs](/docs/README.md) > [components](/docs/components/README.md) > Fade
--------------------------------------------------------------------------------
# Fade
**src**: [Fade.js](/src/lib/Fade/Fade.js)
> Motion helps make a UI expressive and easy to use.
<cite>
<a href="https://material.io/design/motion/#principles">-- Google Material Design</a>
</cite>
`Fade` in from a transparent element to an opaque one.
## Example
```jsx
import React, { Component } from 'react';
import { Container, Fade, Typography } from 'pep-comp';
class FadeAnimation extends Component {
state = {
faded: false
};
handleChange = () => this.setState(prevState => ({ faded: !prevState.faded }));
render() {
const { faded } = this.state;
return (
<Container>
<Fade in={faded}>
<Typography type="display1">Some Content</Typography>
</Fade>
</Container>
);
}
}
export default FadeAnimation;
```
## Props
Any extra `props` supplied will be spread to the internal root `react-transition-group`'s [Transition element](https://reactcommunity.org/react-transition-group/transition).
```javascript
/**
* A single element for the Transition to render.
*/
children: PropTypes.oneOfType([PropTypes.element, PropTypes.func]),
/**
* If `true`, the transition will kick in.
*/
in : PropTypes.bool,
/**
* Callback fired before the `entering` status is applied.
*
* @param {HTMLElement} node
* @param {boolean} isAppearing
*/
onEnter : PropTypes.func,
/**
* Callback fired before the `exiting` status is applied.
*
* @param {HTMLElement} node
*/
onExit : PropTypes.func,
/**
* @ignore
*/
style : PropTypes.object,
/**
* The duration for the transition, in milliseconds.
*/
timeout : PropTypes.oneOfType([
PropTypes.number,
PropTypes.shape({ enter: PropTypes.number, exit: PropTypes.number })
])
```
## Default Props
```javascript
children: null,
in : null,
onEnter : null,
onExit : null,
style : null,
timeout : {
enter: DURATION.enter,
exit : DURATION.exit
}
```
| 22.082474 | 173 | 0.610644 | eng_Latn | 0.57456 |
e024ad8a9af79bb7fe887c0b758e90c52eac050b | 1,204 | md | Markdown | README.md | mstrandgren/boilerplate | 6bd065c07d2dc2ed35183eeba993a251f78c1ee4 | [
"MIT"
] | null | null | null | README.md | mstrandgren/boilerplate | 6bd065c07d2dc2ed35183eeba993a251f78c1ee4 | [
"MIT"
] | null | null | null | README.md | mstrandgren/boilerplate | 6bd065c07d2dc2ed35183eeba993a251f78c1ee4 | [
"MIT"
] | null | null | null | # A Most Formal Boilerplate Project Setter-Upper
Utility to set up a boilerplate project based on a git repo. This particular
repo is partial to grunt, coffeescript, browserify, react, and sublime text,
since that's what I use. It could easily be amended with other tools.
## Usage
```
$ npm install -g mf-boilerplate
$ boilerplate my-new-project
```
### More options
```
Usage: boilerplate [opts] [branch] project_name
-c Client app only
-s Server app only
-r Create remote repo (will prompt for bitbucket credentials the first time)
-p Create sublime project
-b <branch> Use a specific branch, other than master
-l Give the project an MIT license
-h Show this help message
```
## To Customize
1. Fork the repo
2. Change the ```BOILERPLATE_REPO``` in ```bin/boilerplate``` (on the top) to your fork
3. Go wild
## Pull Requests Welcome
1. Feature branches with different types of projects are conveniently accessible
through the ```-b``` flag. So PR away for different types of boilerplates.
2. Auto-generating projects for more editors than sublime would be nice. I hear there
are confused souls that don't use sublime.
3. Can't come up with any more stuff now. Sure someone else can. | 28 | 87 | 0.746678 | eng_Latn | 0.993369 |
e0257f51fc98ea692ab4419a12b969800a57045a | 1,025 | md | Markdown | _posts/2018-04-24-markdown.md | mjf1986/mjf1986.github.io | 4b653b99f0be504582ab4bb594be3aa8f02581cd | [
"Apache-2.0"
] | null | null | null | _posts/2018-04-24-markdown.md | mjf1986/mjf1986.github.io | 4b653b99f0be504582ab4bb594be3aa8f02581cd | [
"Apache-2.0"
] | null | null | null | _posts/2018-04-24-markdown.md | mjf1986/mjf1986.github.io | 4b653b99f0be504582ab4bb594be3aa8f02581cd | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: HEX && UTF8 Converter
date: 2018-04-25
categories: Skill
tags: Blockchain Ethereum
---
# 记录在以太坊区块链,永不篡改

## 在以太坊上永久记录:
* ##### 工具
1. Hex与UTF8转换:https://sites.google.com/site/nathanlexwww/tools/utf8-convert。
2. 以太钱包,此处用手机App:[imtoken](https://github.com/bitcoin).
* ##### 步骤
1. 造句:”[utf8] 比原链2018年4月24日下午2点正式问世。“
2. 转换HEX: 5B757466385D20E6AF94E58E9FE993BE32303138E5B9B434E69C883234E697A5E4B88BE58D8832E782B9E6ADA3E5BC8FE997AEE4B896E38082
3. 转账用imtoken.高级选项:填充data。

4. 交易成功后,语句被永久记录在以太坊区块链的5497977高度上。

5. 你可以把图片/音乐数据导出二进制格式,保存在区块链上,当然你也可以把重要的事件以文本记录在区块链上。
如果你想打赏我:这里->>> 0xf088598355A6254D6Aa9F3379b559e897c1Be4cC。
##### | 23.837209 | 126 | 0.740488 | yue_Hant | 0.358937 |
e0258a841abb709d7b262b148f4e8726ec1c88e9 | 5,243 | markdown | Markdown | _posts/2012-01-02-modificando_base_de_datos_manualmente_con_entity_framework_code_first.markdown | yagopv/blog | 2593e44290f56a3cc50a4e802d7e761ac25794e3 | [
"MIT"
] | null | null | null | _posts/2012-01-02-modificando_base_de_datos_manualmente_con_entity_framework_code_first.markdown | yagopv/blog | 2593e44290f56a3cc50a4e802d7e761ac25794e3 | [
"MIT"
] | null | null | null | _posts/2012-01-02-modificando_base_de_datos_manualmente_con_entity_framework_code_first.markdown | yagopv/blog | 2593e44290f56a3cc50a4e802d7e761ac25794e3 | [
"MIT"
] | null | null | null | ---
layout: post
title: Como modificar manualmente una base de datos generada a partir de Entity Framework
Code First
date: '2012-01-02 21:51:57'
tags:
- entity framework
categories:
- .NET
---
Entity Framework Code First tiene la capacidad de validar el modelo de datos contra la base de datos subyacente y recrearla automáticamente cuando se realiza alguna modificación en las entidades que conforman el modelo de dominio.
Realizar cambios manuales en la base de datos puede provocar que la aplicación falle si no hemos tenido en cuenta algunos detalles.
Con Entity Framework 4.1 (Code First) podemos detectar automáticamente cambios en el modelo que no se corresponden con el esquema de la base de datos subyacente.
Cuando estamos comenzando el desarrollo y ejecutamos la aplicación, si estos cambios se detectan, podemos incluir código que nos permite regenerar la base de datos como deseemos. Para generar la base de datos podemos crear una clase que herede de [IDatabaseInitializer<TContext>](http://msdn.microsoft.com/en-us/library/gg696323%28v=vs.103%29.aspx) y customizar el proceso, o bien usar alguna de las implementaciones que ya existen, como la que usa siempre por defecto Entity Framework, [DropCreateDatabaseIfModelChanges<TContext>](http://msdn.microsoft.com/en-us/library/gg679604%28v=vs.103%29.aspx) y que elimina y genera de nuevo la base de datos cada vez que cambia el modelo.
Efectivamente, esto es perfecto cuando estamos comenzando con el desarrollo y regeneramos la base de datos costántemente, pero si estamos realizando un mantenimiento de alguna aplicación lo más probable es que necesitemos en más de una ocasión añadir campos a la base de datos, modificar longitudes y demás acciones sin poder por supuesto eliminar los datos que ya existen. Si modificamos la base de datos manualmente y adaptamos el modelo para que encaje con la modificación que hemos realizado nos vamos a encontrar con que se produce la siguiente excepción:
```
The model backing the 'MyDbContext' context has changed since the database was
created. Either manually delete/update the database, or call Database.SetInitializer
with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges
strategy will automatically delete and recreate the database,
and optionally seed it with new data.
```
La razón por la que se produce esta excepción es porque Entity Framework no chequea el esquema del modelo de datos completamente. En lugar de hacer esto calcula un código hash que compara con el código calculado la última vez que se generó el modelo. Este código se almacena en la tabla EdmMetadata que podemos observar que se genera automáticamente cada vez que generamos la base de datos.
Podemos solucionar este problema de varias formas diferentes, dependiendo de nuestra situación:
1. indicar explicitamente a Entity Framework que no calcule más este código:
```c
protected override void OnModelCreating(DbModelBuilder modelBuilder) {
base.OnModelCreating(modelBuilder);
modelBuilder.Conventions.Remove<IncludeMetadataConvention>();
}
```
Esto provocará que ya no se valide el modelo automáticamente, con lo cual perdemos esta característica y ahora deberemos ser más cuidadosos cuando modifiquemos la base de datos manualmente.
2. Recalcular el código hash para el nuevo modelo y actualizar la fila con el codigo en la tabla EdmMetadata. Para esto podemos usar la siguiente función:
```c
System.Data.Entity.Infrastructure.EdmMetadata.TryGetModelHash(context)
```
Usando esta función podemos recuperar el código hash del modelo que estamos intentando crear y actualizarlo en la base de datos. Con este pequeño truco podemos seguir manteniendo la validación del modelo que siempre es una característica interesante de Entity Framework Code First. Además de actualizar este código, hay que por supuesto actualizar la base de datos con los cambios que se han hecho en el modelo. Una alternativa a la obtención del código con esta función sería regenerar completamente una base de datos vacía con la opción de borrar y crear de nuevo ([DropCreateDatabaseIfModelChanges<TContext>](http://msdn.microsoft.com/en-us/library/gg679604%28v=vs.103%29.aspx)) y copiar el código de la tabla EdmMetadata.
**Modificado 22/06/2012**
A partir de la versión 4.3 de Entity Framework, la excepción nos indica otra cosa:
```
The model backing the 'MyDbContext' context has changed since the database was
created.Consider using Code First Migrations to update the database
```
Lo cierto es que existe una posibilidad de evitar el error y tomar el control manual de la base de datos. Simplemente has de establecer el Initializer a null, por ejemplo en el Global.asax
```c
protected void Application_Start() {
AreaRegistration.RegisterAllAreas();
RegisterRoutes(RouteTable.Routes);
//...
Database.SetInitializer<MyContext>(null);
}
```
Es necesario hacerlo así porque IncludeMetadataConvention ha dejado de estar disponible en esta última versión (Han sustituido la tabla EdmMetadata por MigrationHistory.
Puedes hacer esto, o bien como indica el error usar Migrations y actualizar la base de datos con las últimas modificaciones en el modelo de datos pero esto ya es otra historia …
Hasta pronto!!
| 64.728395 | 725 | 0.808507 | spa_Latn | 0.983366 |
e025f751b57d0f1b0ddb4a4a82bce0f0160b1c40 | 715 | md | Markdown | README.md | DVMEND/Digital-Planner | 45d8104d7a38492b651b502074b03a2bfe761b56 | [
"MIT"
] | null | null | null | README.md | DVMEND/Digital-Planner | 45d8104d7a38492b651b502074b03a2bfe761b56 | [
"MIT"
] | null | null | null | README.md | DVMEND/Digital-Planner | 45d8104d7a38492b651b502074b03a2bfe761b56 | [
"MIT"
] | null | null | null | # Work Day Scheduler
Work Day Scheduler is a website that lets you plan out your working hours and save the plan locally.
## Screenshot

## Installation
No installation required
## Usage
Click on a time block that you have an event for to edit.
Click on the save button to save your event.
If you reload the page, your event should still be there.
## Contributing
Open for pull requests. Raise an issue for major changes.
## License
[](https://opensource.org/licenses/MIT)
## Contact Me
Github: https://github.com/DVMEND
Email: [email protected]
| 21.666667 | 107 | 0.752448 | eng_Latn | 0.89005 |
e025fb4444e0daefaa50e90c337d33fdeb8cc805 | 1,936 | md | Markdown | includes/app-service-mobile-restrict-permissions-dotnet-backend.md | matmahnke/azure-docs.pt-br | 6c96d25caf8663547775f333164198e3ed03972f | [
"CC-BY-4.0",
"MIT"
] | 39 | 2017-08-28T07:46:06.000Z | 2022-01-26T12:48:02.000Z | includes/app-service-mobile-restrict-permissions-dotnet-backend.md | matmahnke/azure-docs.pt-br | 6c96d25caf8663547775f333164198e3ed03972f | [
"CC-BY-4.0",
"MIT"
] | 562 | 2017-06-27T13:50:17.000Z | 2021-05-17T23:42:07.000Z | includes/app-service-mobile-restrict-permissions-dotnet-backend.md | matmahnke/azure-docs.pt-br | 6c96d25caf8663547775f333164198e3ed03972f | [
"CC-BY-4.0",
"MIT"
] | 113 | 2017-07-11T19:54:32.000Z | 2022-01-26T21:20:25.000Z | ---
author: conceptdev
ms.service: app-service-mobile
ms.topic: include
ms.date: 08/23/2018
ms.author: crdun
ms.openlocfilehash: b609a708a987194398c53bdf83f0d6e1f281808d
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/27/2020
ms.locfileid: "67172415"
---
Por padrão, APIs em um back-end de Aplicativos Móveis podem ser chamadas de forma anônima. Em seguida, você precisa restringir o acesso somente aos clientes autenticados.
* **Back-end do Node.js (por meio do portal do Azure)** :
Nas configurações de seus Aplicativos Móveis, clique em **Tabelas Fáceis** e selecione a tabela. Clique em **Alterar permissões**, selecione **Apenas acesso autenticado** para todas as permissões e clique em **Salvar**.
* **Back-end do .NET (C#)**:
No projeto do servidor, navegue **até controladores** > **TodoItemController.cs**. Adicione o atributo `[Authorize]` à classe **TodoItemController** , como a seguir. Para restringir o acesso somente aos métodos específicos, você também pode aplicar esse atributo apenas aos métodos, em vez de à classe. Republicar o projeto de servidor.
[Authorize]
public class TodoItemController : TableController<TodoItem>
* **Back-end do Node.js (por meio de código Node.js)** :
Para exigir autenticação para acesso à tabela, adicione a seguinte linha ao script de servidor Node.js:
table.access = 'authenticated';
Para obter mais detalhes, veja [Como exigir autenticação para acesso às tabelas](../articles/app-service-mobile/app-service-mobile-node-backend-how-to-use-server-sdk.md#howto-tables-auth). Para saber como baixar o projeto de código de início rápido do seu site, consulte [Como baixar o projeto de código de início rápido de back-end do Node.js usando Git](../articles/app-service-mobile/app-service-mobile-node-backend-how-to-use-server-sdk.md#download-quickstart).
| 58.666667 | 469 | 0.76188 | por_Latn | 0.992013 |
e025fd9352b64f0bd639a06ac3aa752c04117417 | 8,496 | md | Markdown | README.md | MirkoBruhn/IntroPythonForDS | d910d1707ddd7c8d2d713cbbb310652221945238 | [
"Apache-2.0"
] | null | null | null | README.md | MirkoBruhn/IntroPythonForDS | d910d1707ddd7c8d2d713cbbb310652221945238 | [
"Apache-2.0"
] | null | null | null | README.md | MirkoBruhn/IntroPythonForDS | d910d1707ddd7c8d2d713cbbb310652221945238 | [
"Apache-2.0"
] | null | null | null | # Introduction to Python For Data Science
This repo contains the teaching material for the Introduction to Python (and useful libraries) masterclass at the [Data Science Retreat](http://datascienceretreat.com/), it does not cover Pandas.
## Table of contents
* [About Me](#about-me)
* [The Python Programming Language](#the-python-programming-language)
* [Why Python?](#why-python)
* [Python for DS Components](#python-for-ds-components)
* [Python 2 vs. Python 3](#python-2-vs-python-3)
* [Installing Python and all useful packages](#installing-python-and-all-useful-packages)
* [Running the IPython interpreter and a python file](#running-the-ipython-interpreter-and-a-python-file)
* [Jupyter Notebook](#jupyter-notebook)
* [Python basics](#python-basics)
* [NumPy and Matplotlib](#numpy-and-matplotlib)
* [NumPy](#numpy)
* [Matplotlib](#matplotlib)
* [SciPy](#scipy)
## About me
Slides for this section can be found [here](https://docs.google.com/presentation/d/e/2PACX-1vTbd4eONN5nSiNaTWW3uM2RM3O0jsoVT8gQ9byqa0X5vStBZGUBfiUSM7-HegCjymaDbaUzQ-9yyvMR/pub).
## The Python Programming Language
Complete slides [here](https://docs.google.com/presentation/d/e/2PACX-1vRPV8i3pQw7MCa6eG-9y9LgIFREJF_3sN4opFDXQ2r_NJgea9ObLJQfj4S_CiM6Ptxs7t0WU6lCa-QH/pub?start=false&loop=false&delayms=3000), inclusive of exercises.
Extra links:
* [The SciPy Lectures -- The Python Language](http://scipy-lectures.github.io/intro/language/python_language.html).
Practice those examples using alternatively python files, the IPython interpreter and an IPython Notebook.
To practice:
* [Python interactive exercises](http://codingbat.com/python)
* [Join the codewars competitions](http://www.codewars.com/?language=python)
### Python 2 vs. Python 3
Note: as explained in the lesson you should now just go with Python 3. These links are from more than 2 years ago but still useful if you need to use old libraries.
A great [notebook](http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/tutorials/key_differences_between_python_2_and_3.ipynb) covering the main differences has been written by Sebastian Raschka.
To keep your code compatible with both Python 2 and Python 3, you might also want to use [this Cheat Sheet](http://python-future.org/compatible_idioms.html#cheat-sheet-writing-python-2-3-compatible-code).
### Installing Python and all useful packages
Slides on this topic start [here](http://slides.com/utstikkar/introtopython-pythonproglanguage#/10)
### Tools for writing Python Code (from Kristian)
#### Python shell
The most basic interactive Python command line, where each line starts with a `>>>`.
#### IDLE
Standard editor in Python distributions, easy to use but very basic.
#### IPython
A more sophisticated interactive Python command line. It incorporates tab-completion, interactive help and regular shell commands. Also look up the `%`-magic commands.
#### Spyder
**Spyder** is part of the **Anaconda** Python distribution. It is a small IDE mostly for data analysis, similar to RStudio. It automatically highlights Syntax errors, contains a variable explorer, debugging functionality and other useful things.
#### Jupyter Notebooks
Interactive environment for the web browser. A Jupyter notebook contains Python code, text, images and any output from your program (including plots!). It is a great tool for exploratory data analysis.
#### Sublime2
A general-purpose text editor that works on all systems. There are many plugins for Python available. There are a free and a commercial version available.
#### Visual Studio Code
The Open Source cousin of Sublime2, similar to Atom.
#### PyCharm
PyCharm is probably the most luxurious IDE for Python. It contains tons of functions that are a superset of all the above. PyCharm is a great choice for bigger Python projects. Free for non-commercial use.
#### Notepad++
If you must use a text editor on Windows to edit Python code, refuse to use anything worse than **Notepad++**.
#### Vim
I know people who are successfully using Vim to write Python code and are happy with it.
#### Emacs
I know people who are successfully using Emacs to write Python code, but haven't asked them how happy they are.
### Running the IPython interpreter and a python file
Slides on this topic start [here](http://slides.com/utstikkar/introtopython-pythonproglanguage#/12)
### Jupyter Notebook
A live demo will be given during the masterclass. Here just a [warning note](https://docs.google.com/presentation/d/e/2PACX-1vR2ntOr6vWHgHoC0X3arDtim9fIhaoF7r6Vl5fVjxSXeXpD2NRykOSR_UyQzbtjppD2tiqwkw2peMfQ/pub?start=false&loop=false&delayms=3000)
Experiment further with the IPython Notebook environment with [this Jupyter Notebook](http://nbviewer.ipython.org/github/ipython/ipython/blob/2.x/examples/Notebook/Running%20Code.ipynb).
Try to clone or download it, before opening it, running and modifying its cells.
Many more Jupyter features in [this blog post](http://arogozhnikov.github.io/2016/09/10/jupyter-features.html).
And of course, be aware of the fact Jupyter is NOT an IDE and can bite you in various ways: [See this presentation](https://docs.google.com/presentation/d/1n2RlMdmv1p25Xy5thJUhkKGvjtV-dkAIsUXP-AL4ffI/edit#slide=id.g3cb1319227_1_388)
## Git
Slides are [here](https://docs.google.com/presentation/d/e/2PACX-1vSRDWRpbJpNmtPk5SufekG8bSbBSJGjsua-nf-BxTzS_F2qMkHwmFPzjQlnR6op2pwa0QzL-PTFGikx/pub?start=false&loop=false&delayms=3000)
## What is machine learning
A brief introduction/recap of ML its terminology. Slides [here](https://docs.google.com/presentation/d/e/2PACX-1vRfxH8TbgtOQy24JBu28i12kYrbUquXKu6VZhZC3wyCUdiLW1HqF75mgnLI-EjKHFQUdPeZ-6OYD8G7/pub?start=false&loop=false&delayms=3000)
## NumPy and Matplotlib
### NumPy
Start with the official [NumPy Tutorial](http://wiki.scipy.org/Tentative_NumPy_Tutorial). Note: if this link returns an error, move to the [PDF version](https://docs.google.com/viewer?url=http://www.cs.man.ac.uk/~barry/mydocs/MyCOMP28512/MS15_Notes/PyRefs/Tentative_NumPy_Tutorial.pdf).
Move on to these [exercises](http://scipy-lectures.github.io/intro/numpy/exercises.html).
### Matplotlib
Learn the basics and some more advanced plotting tricks in Matplotlib with this [hands-on tutorial](http://scipy-lectures.github.io/intro/matplotlib/matplotlib.html).
It's also very useful to look at the [gallery](https://matplotlib.org/gallery.html) to find examples of every possible chart you may want.
## Scikit-learn and your first ML case
Slides are [here](https://docs.google.com/presentation/d/e/2PACX-1vTjCOfNagJZzOjovAPgNBkVxcddNlKbWZ5oxEjicbuFyEwpAbMjG8m7x0tx3xjqUyKkoYFh0rysWRNL/pub?start=false&loop=false&delayms=3000)
### Scikit-learn
* Introduction to machine learning with scikit-learn [slides](http://slides.com/luciasantamaria/intro-machine-learning-scikit-learn#/)
* Doing machine learning with scikit-learn [slides](https://github.com/luciasantamaria/pandas-tutorial/blob/master/scikit-learn.pdf)
* [Tutorial: Introduction to scikit-learn](https://github.com/utstikkar/pandas-tutorial/blob/master/intro-to-scikit-learn-1-Basics.ipynb)
* [To go further](http://nbviewer.jupyter.org/github/jakevdp/sklearn_tutorial/blob/master/notebooks/Index.ipynb)
## SciPy
SciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension of Python.
[Here](http://scipy-lectures.github.io/intro/scipy.html) is a hands-on overview of this collection, together with practical exercises and more advanced problems.
For those willing to go further on the statistical aspects of SciPy, I recommend having a look at these IPython Notebooks on [Effect Size](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/scipy/effect_size.ipynb), [Random Sampling](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/scipy/sampling.ipynb) and [Hypothesis Testing](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/scipy/hypothesis.ipynb).
## License
This repository contains a variety of content: some developed by Amélie Anglade, some derived from or largely inspired by third-parties' work, and some entirely from third-parties.
The third-party content is distributed under the license provided by those parties. Any derivative work respects the original licenses, and credits its initial authors.
Original content developed by Amélie Anglade is distributed under the MIT license.
| 62.470588 | 525 | 0.79096 | eng_Latn | 0.878945 |
e0262d1e167d993e0f37a5be3343bd13c01b1af2 | 468 | md | Markdown | CHANGELOG.md | blockchainofthings/catenis-file | f44d422713442b9354e93b9802fa56901db11257 | [
"MIT"
] | null | null | null | CHANGELOG.md | blockchainofthings/catenis-file | f44d422713442b9354e93b9802fa56901db11257 | [
"MIT"
] | 2 | 2020-04-06T15:30:21.000Z | 2020-10-26T20:18:41.000Z | CHANGELOG.md | blockchainofthings/catenis-file | f44d422713442b9354e93b9802fa56901db11257 | [
"MIT"
] | null | null | null | # Changelog
## [1.1.0] - 2020-10-26
### New features
- Enhancements to MessageChunker class, adding the following new features:
- Constructor's `encoding` argument is now used both for adding and for retrieving message data chunks;
- Constructor's `encoding` argument can now be set to either 'base64' or 'hex';
- When retrieving the whole message, a specific text encoding can be specified.
## [1.0.0] - 2019-06-08
### New features
- Initial version.
| 31.2 | 107 | 0.713675 | eng_Latn | 0.992236 |
e02686eec1c67da8aa36b1b3543df5a4d12bf06c | 183 | md | Markdown | .github/PULL_REQUEST_TEMPLATE.md | CodeDocsJECRC/templates | 9a97a1a0b555d5f300c259c0b9bf1733f3bbbc7e | [
"Apache-2.0"
] | null | null | null | .github/PULL_REQUEST_TEMPLATE.md | CodeDocsJECRC/templates | 9a97a1a0b555d5f300c259c0b9bf1733f3bbbc7e | [
"Apache-2.0"
] | 1 | 2018-10-02T18:42:09.000Z | 2018-10-02T19:03:59.000Z | .github/PULL_REQUEST_TEMPLATE.md | CodeDocsJECRC/templates | 9a97a1a0b555d5f300c259c0b9bf1733f3bbbc7e | [
"Apache-2.0"
] | null | null | null | **Before submitting a pull request,** please make sure the following is done:
1. Fork the repo [Project](https://github.com/CodeDocsJECRC/[repo])
2. Create your branch from `master`
| 36.6 | 77 | 0.748634 | eng_Latn | 0.978728 |
e02706b643fb953a2d689a69df305019c7861aa6 | 6,775 | md | Markdown | README.md | mahavishnup/ubuntu-after-installation-setup | 72278bcb1f5b1c3556ba82c18b316ea16b2b94a8 | [
"MIT"
] | null | null | null | README.md | mahavishnup/ubuntu-after-installation-setup | 72278bcb1f5b1c3556ba82c18b316ea16b2b94a8 | [
"MIT"
] | null | null | null | README.md | mahavishnup/ubuntu-after-installation-setup | 72278bcb1f5b1c3556ba82c18b316ea16b2b94a8 | [
"MIT"
] | null | null | null | # ubuntu-after-installation-setup
### Installation deb packages
```shell
sudo dpkg -i ./*.deb;
```
###
```shell
sudo dpkg -i ./*.deb;
sudo apt clean;
sudo apt update;
sudo apt upgrade;
sudo apt list --upgradable;
sudo apt upgrade -y;
sudo apt dist-upgrade;
sudo apt dist-upgrade -y;
sudo apt dist-upgrade -Vy;
sudo apt full-upgrade -y;
sudo apt autoremove;
sudo apt autoclean;
sudo add-apt-repository ppa:nginx/stable;
sudo apt-get update;
sudo add-apt-repository ppa:ondrej/php;
sudo apt-get update;
sudo add-apt-repository -y ppa:teejee2008/ppa;
sudo apt-get update;
sudo apt install net-tools;
sudo apt-get install ufw -y;
sudo ufw enable;
sudo ufw reload;
sudo ufw allow ssh;
sudo ufw status;
sudo apt install vlc gimp gparted 0ad;
sudo sed -i s/geteuid/getppid/g /usr/bin/vlc;
sudo apt install ubuntu-restricted-extras;
sudo apt install gnome-tweaks;
sudo apt install gnome-shell-extensions;
sudo apt install chrome-gnome-shell;
sudo apt install timeshift;
sudo apt install preload;
sudo apt install tlp tlp-rdw;
sudo apt install bleachbit;
sudo apt install tor torbrowser-launcher;
sudo apt install gdebi;
sudo apt-get install git;
sudo snap install whatsdesk;
sudo snap install telegram-desktop;
sudo apt install wine winetricks;
sudo apt install steam;
sudo apt-get install openjdk-11-jdk;
sudo apt install rar unrar p7zip-full p7zip-rar;
sudo apt install python3-pip python;
sudo ufw enable;
sudo apt-get install gufw;
sudo apt-get install synaptic;
sudo apt-get install tlp tlp-rdw;
sudo systemctl enable tlp;
sudo apt-get install wine64;
sudo apt-get install flatpak;
sudo apt-get install gnome-software-plugin-flatpak;
sudo add-apt-repository ppa:gerardpuig/ppa;
sudo apt update;
sudo apt-get install Ubuntu-cleaner;
sudo apt install nginx;
sudo apt install -y nginx;
sudo ufw app list;
sudo ufw allow 'Nginx HTTP';
sudo ufw status;
sudo systemctl enable nginx;
sudo systemctl start nginx;
sudo apt install -y php7.4-fpm php7.4-cli php7.4-gd php7.4-mysql php7.4-pgsql php7.4-imap php7.4-mbstring php7.4-xml php7.4-curl php7.4-bcmath php7.4-sqlite3 php7.4-zip php-memcached php7.4-common php-xdebug php7.4-{cli,json,imap,bcmath,bz2,intl,gd,mbstring,mysql,zip};
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');";
php -r "if (hash_file('sha384', 'composer-setup.php') === '906a84df04cea2aa72f40b5f787e49f22d4c2f19492ac310e8cba5b96ac8b64115ac402c8cd292b8a03482574915d1a8') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;";
php composer-setup.php;
php -r "unlink('composer-setup.php');";
sudo mv composer.phar /usr/local/bin/composer;
composer global require "laravel/installer";
sudo nano ~/.bashrc;
export PATH="~/.config/composer/vendor/bin:$PATH"
source ~/.bashrc;
sudo service php7.4-fpm restart;
sudo apt install mysql-server -y;
sudo mysql_secure_installation;
sudo mysql -u root -p;
USE mysql;
CREATE DATABASE trytrabby;
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'localhost';
FLUSH PRIVILEGES;
sudo systemctl enable tlp;
sudo apt update;
sudo apt install apache2;
sudo ufw app list;
sudo ufw allow 'Apache';
sudo ufw status;
sudo systemctl enable apache2;
sudo apt update;
sudo apt install redis-server;
sudo nano /etc/redis/redis.conf;
supervised systemd
sudo systemctl restart redis.service;
sudo apt install curl;
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg;
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null;
sudo apt update;
sudo apt install gh;
curl -sL https://deb.nodesource.com/setup_16.x -o nodesource_setup.sh;
sudo nano nodesource_setup.sh;
sudo bash nodesource_setup.sh;
sudo apt-get install -y nodejs;
sudo apt autoremove;
sudo apt-get install gcc g++ make;
curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/yarnkey.gpg >/dev/null;
echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main" | sudo tee /etc/apt/sources.list.d/yarn.list;
sudo apt-get update && sudo apt-get install yarn;
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -;
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list;
sudo apt-get update;
mkdir ~/.npm-global;
npm config set prefix '~/.npm-global';
sudo nano ~/.profile;
export PATH=~/.npm-global/bin:$PATH
source ~/.profile;
npm install -g jshint;
npm install -g expo-cli;
npm install -g @vue/cli;
npm install -g @vue/cli @vue/cli-service-global;
yarn global add @vue/cli @vue/cli-service-global;
sudo ufw reload;
#ssh mahavishnu@mahavishnu;
#ssh-keygen;
sudo apt update;
sudo apt install nginx -y;
sudo apt install software-properties-common -y;
sudo add-apt-repository ppa:ondrej/php;
sudo apt install php7.4-fpm php7.4-common php7.4-dom php7.4-intl php7.4-mysql php7.4-xml php7.4-xmlrpc php7.4-curl php7.4-gd php7.4-imagick php7.4-cli php7.4-dev php7.4-imap php7.4-mbstring php7.4-soap php7.4-zip php7.4-bcmath -y;
wget -c https://files.phpmyadmin.net/phpMyAdmin/5.0.2/phpMyAdmin-5.0.2-english.tar.gz;
tar xzvf phpMyAdmin-5.0.2-english.tar.gz;
sudo mv phpMyAdmin-5.0.2-english /usr/share/phpmyadmin;
sudo ln -s /usr/share/phpmyadmin /var/www/html;
#Firefox Tweaks:
about:config
layers.acceleration.force-enabled
gfx.webrender.all
#Change DNS:
8.8.8.8,8.8.4.4
sudo apt clean;
sudo apt update;
sudo apt upgrade;
sudo apt upgrade -y;
sudo apt dist-upgrade;
sudo apt dist-upgrade -y;
sudo apt dist-upgrade -Vy;
sudo apt full-upgrade -y;
sudo apt list --upgradable;
sudo apt autoremove;
sudo apt autoclean;
sudo reboot;
sudo apt-add-repository universe;
sudo apt install gnome-tweak-tool;
sudo apt install gnome-shell-extensions;
sudo apt install dconf-editor;
sudo ln -s /home/mahavishnu/Downloads /usr/share/themes;
sudo cp -r /home/mahavishnu/Downloads/* /usr/share/themes;
sudo cp -r /home/mahavishnu/Downloads/Cupertino-Catalina /usr/share/icons;
sudo rm -rf *;
https://computingforgeeks.com/install-phpmyadmin-on-kali-linux/
https://bishrulhaq.com/tutorials/installing-nginx-and-phpmyadmin-on-ubuntu-18-04/
https://laravel-news.com/media-library-pro
laravel new trytrabby-extranet --jet --git --branch="main" --github="--private"
laravel new trytrabby --git --branch="main" --github="--private"
laravel new laravel-stisla-jetstream-admin-dashboard-pro-kits --jet --stack="livewire" --teams --force --git --branch="master" --github="--public"
laravel new andamanroom --jet --stack="livewire" --teams --git --branch="master" --github="--private"
```
| 36.621622 | 269 | 0.761919 | kor_Hang | 0.131685 |
e0285a6e6288578655636270e2089c0e9b29f2a0 | 296 | md | Markdown | java/IO.md | Urumuqi/gitbook | dcb2667a0dded284cc16a2c16595a3967f42995c | [
"Apache-2.0"
] | 1 | 2019-12-27T03:11:37.000Z | 2019-12-27T03:11:37.000Z | java/IO.md | Urumuqi/gitbook | dcb2667a0dded284cc16a2c16595a3967f42995c | [
"Apache-2.0"
] | null | null | null | java/IO.md | Urumuqi/gitbook | dcb2667a0dded284cc16a2c16595a3967f42995c | [
"Apache-2.0"
] | 1 | 2020-01-09T09:11:54.000Z | 2020-01-09T09:11:54.000Z | # Java IO
## 分类
### 按功能
- 输入
- 输出
### 按类型
- 字节流
以字节为单位输入/输出,字节流按照8位传输
- 字符流
以字符为单位输入/输入,字符流按照16为传输
## IO 类
- FileInputStream
- FileOutputStream
- BufferInputStream
- BufferOutputStream
- FileReader
- FileWriter
- BufferReader
- BufferWriter
- ObjectInputStream
- ObjectOutputStream
| 9.25 | 24 | 0.712838 | yue_Hant | 0.834771 |
e0285bfac5d3f35215ccfd00aafcaa9628e72d60 | 11,644 | md | Markdown | articles/data-factory/data-factory-samples.md | ggailey777/azure-docs | 4520cf82cb3d15f97877ba445b0cfd346c81a034 | [
"CC-BY-3.0"
] | null | null | null | articles/data-factory/data-factory-samples.md | ggailey777/azure-docs | 4520cf82cb3d15f97877ba445b0cfd346c81a034 | [
"CC-BY-3.0"
] | null | null | null | articles/data-factory/data-factory-samples.md | ggailey777/azure-docs | 4520cf82cb3d15f97877ba445b0cfd346c81a034 | [
"CC-BY-3.0"
] | 2 | 2017-02-18T05:45:54.000Z | 2019-12-21T21:23:13.000Z | ---
title: Azure Data Factory - Samples
description: Provides details about samples that ship with the Azure Data Factory service.
services: data-factory
documentationcenter: ''
author: sharonlo101
manager: jhubbard
editor: monicar
ms.assetid: c0538b90-2695-4c4c-a6c8-82f59111f4ab
ms.service: data-factory
ms.workload: data-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 10/18/2016
ms.author: shlo
---
# Azure Data Factory - Samples
## Samples on GitHub
The [GitHub Azure-DataFactory repository](https://github.com/azure/azure-datafactory) contains several samples that help you quickly ramp up with Azure Data Factory service (or) modify the scripts and use it in own application. The Samples\JSON folder contains JSON snippets for common scenarios.
| Sample | Description |
|:--- |:--- |
| [ADF Walkthrough](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/ADFWalkthrough) |This sample provides an end-to-end walkthrough for processing log files using Azure Data Factory to turn data from log files in to insights. <br/><br/>In this walkthrough, the Data Factory pipeline collects sample logs, processes and enriches the data from logs with reference data, and transforms the data to evaluate the effectiveness of a marketing campaign that was recently launched. |
| [JSON samples](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/JSON) |This sample provides JSON examples for common scenarios. |
| [Http Data Downloader Sample](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/HttpDataDownloaderSample) |This sample showcases downloading of data from an HTTP endpoint to Azure Blob Storage using custom .NET activity. |
| [Cross AppDomain Dot Net Activity Sample](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/CrossAppDomainDotNetActivitySample) |This sample allows you to author a custom .NET activity that is not constrained to assembly versions used by the ADF launcher (For example, WindowsAzure.Storage v4.3.0, Newtonsoft.Json v6.0.x, etc.). |
| [Run R script](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/RunRScriptUsingADFSample) |This sample includes the Data Factory custom activity that can be used to invoke RScript.exe. This sample works only with your own (not on-demand) HDInsight cluster that already has R Installed on it. |
| [Invoke Spark jobs on HDInsight Hadoop cluster](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/Spark) |This sample shows how to use MapReduce activity to invoke a Spark program. The spark program just copies data from one Azure Blob container to another. |
| [Twitter Analysis using Azure Machine Learning Batch Scoring Activity](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/TwitterAnalysisSample-AzureMLBatchScoringActivity) |This sample shows how to use AzureMLBatchScoringActivity to invoke an Azure Machine Learning model that performs twitter sentiment analysis, scoring, prediction etc. |
| [Twitter Analysis using custom activity](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/TwitterAnalysisSample-CustomC%23Activity) |This sample shows how to use a custom .NET activity to invoke an Azure Machine Learning model that performs twitter sentiment analysis, scoring, prediction etc. |
| [Parameterized Pipelines for Azure Machine Learning](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/ParameterizedPipelinesForAzureML/) |The sample provides an end-to-end C# code to deploy N pipelines for scoring and retraining each with a different region parameter where the list of regions is coming from a parameters.txt file, which is included with this sample. |
| [Reference Data Refresh for Azure Stream Analytics jobs](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/ReferenceDataRefreshForASAJobs) |This sample shows how to use Azure Data Factory and Azure Stream Analytics together to run the queries with reference data and setup the refresh for reference data on a schedule. |
| [Hybrid Pipeline with On-premises Hortonworks Hadoop](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/HybridPipelineWithOnPremisesHortonworksHadoop) |The sample uses an on-premises Hadoop cluster as a compute target for running jobs in Data Factory just like you would add other compute targets like an HDInsight based Hadoop cluster in cloud. |
| [JSON Conversion Tool](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/JSONConversionTool) |This tool allows you to convert JSONs from version prior to 2015-07-01-preview to latest or 2015-07-01-preview (default). |
| [U-SQL sample input file](https://github.com/Azure/Azure-DataFactory/tree/master/Samples/U-SQL%20Sample%20Input%20File) |This file is a sample file used by an U-SQL activity. |
## Azure Resource Manager templates
You can find the following Azure Resource Manager templates for Data Factory on Github.
| Template | Description |
| --- | --- |
| [Copy from Azure Blob Storage to Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/101-data-factory-blob-to-sql-copy) |Deploying this template creates an Azure data factory with a pipeline that copies data from the specified Azure blob storage to the Azure SQL database |
| [Copy from Salesforce to Azure Blob Storage](https://github.com/Azure/azure-quickstart-templates/tree/master/101-data-factory-salesforce-to-blob-copy) |Deploying this template creates an Azure data factory with a pipeline that copies data from the specified Salesforce account to the Azure blob storage. |
| [Transform data by running Hive script on an Azure HDInsight cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/101-data-factory-hive-transformation) |Deploying this template creates an Azure data factory with a pipeline that transforms data by running the sample Hive script on an Azure HDInsight Hadoop cluster. |
## Samples in Azure portal
You can use the **Sample pipelines** tile on the home page of your data factory to deploy sample pipelines and their associated entities (datasets and linked services) in to your data factory.
1. Create a data factory or open an existing data factory. See [Copy data from Blob Storage to SQL Database using Data Factory](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md) for steps to create a data factory.
2. In the **DATA FACTORY** blade for the data factory, click the **Sample pipelines** tile.

3. In the **Sample pipelines** blade, click the **sample** that you want to deploy.

4. Specify configuration settings for the sample. For example, your Azure storage account name and account key, Azure SQL server name, database, User ID, and password, etc.

5. After you are done with specifying the configuration settings, click **Create** to create/deploy the sample pipelines and linked services/tables used by the pipelines.
6. You see the status of deployment on the sample tile you clicked earlier on the **Sample pipelines** blade.

7. When you see the **Deployment succeeded** message on the tile for the sample, close the **Sample pipelines** blade.
8. On **DATA FACTORY** blade, you see that linked services, data sets, and pipelines are added to your data factory.

## Samples in Visual Studio
### Prerequisites
You must have the following installed on your computer:
* Visual Studio 2013 or Visual Studio 2015
* Download Azure SDK for Visual Studio 2013 or Visual Studio 2015. Navigate to [Azure Download Page](https://azure.microsoft.com/downloads/) and click **VS 2013** or **VS 2015** in the **.NET** section.
* Download the latest Azure Data Factory plugin for Visual Studio: [VS 2013](https://visualstudiogallery.msdn.microsoft.com/754d998c-8f92-4aa7-835b-e89c8c954aa5) or [VS 2015](https://visualstudiogallery.msdn.microsoft.com/371a4cf9-0093-40fa-b7dd-be3c74f49005). If you are using Visual Studio 2013, you can also update the plugin by doing the following steps: On the menu, click **Tools** -> **Extensions and Updates** -> **Online** -> **Visual Studio Gallery** -> **Microsoft Azure Data Factory Tools for Visual Studio** -> **Update**.
### Use Data Factory Templates
1. Click **File** on the menu, point to **New**, and click **Project**.
2. In the **New Project** dialog box, do the following steps:
1. Select **DataFactory** under **Templates**.
2. Select **Data Factory Templates** in the right pane.
3. Enter a **name** for the project.
4. Select a **location** for the project.
5. Click **OK**.

3. In the **Data Factory Templates** dialog box, select the sample template from the **Use-Case Templates** section, and click **Next**. The following steps walk you through using the **Customer Profiling** template. Steps are similar for the other samples.

4. In the **Data Factory Configuration** dialog, click **Next** on the **Data Factory Basics** page.
5. On the **Configure data factory** page, do the following steps:
1. Select **Create New Data Factory**. You can also select **Use existing data factory**.
2. Enter a **name** for the data factory.
3. Select the **Azure subscription** in which you want the data factory to be created.
4. Select the **resource group** for the data factory.
5. Select the **West US**, **East US**, or **North Europe** for the **region**.
6. Click **Next**.
6. In the **Configure data stores** page, specify an existing **Azure SQL database** and **Azure storage account** (or) create database/storage, and click Next.
7. In the **Configure compute** page, select defaults, and click **Next**.
8. In the **Summary** page, review all settings, and click **Next**.
9. In the **Deployment Status** page, wait until the deployment is finished, and click **Finish**.
10. Right-click project in the Solution Explorer, and click **Publish**.
11. If you see **Sign in to your Microsoft account** dialog box, enter your credentials for the account that has Azure subscription, and click **sign in**.
12. You should see the following dialog box:

13. In the **Configure data factory** page, do the following steps:
1. Confirm that **Use existing data factory** option.
2. Select the **data factory** you had select when using the template.
3. Click **Next** to switch to the **Publish Items** page. (Press **TAB** to move out of the Name field to if the **Next** button is disabled.)
14. In the **Publish Items** page, ensure that all the Data Factories entities are selected, and click **Next** to switch to the **Summary** page.
15. Review the summary and click **Next** to start the deployment process and view the **Deployment Status**.
16. In the **Deployment Status** page, you should see the status of the deployment process. Click Finish after the deployment is done.
See [Build your first data factory (Visual Studio)](data-factory-build-your-first-pipeline-using-vs.md) for details about using Visual Studio to author Data Factory entities and publishing them to Azure.
| 97.033333 | 535 | 0.769839 | eng_Latn | 0.943148 |
e028dbbb3b8ffa99f37efddd14725169db5fcb9c | 3,668 | md | Markdown | wdk-ddi-src/content/dbgeng/nf-dbgeng-idebugsystemobjects2-gettotalnumberthreads.md | sathishcg/windows-driver-docs-ddi | 238bb4dc827cb3d2c64e535cf3f91d205b20c943 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-08-12T00:32:04.000Z | 2021-07-17T15:32:12.000Z | wdk-ddi-src/content/dbgeng/nf-dbgeng-idebugsystemobjects2-gettotalnumberthreads.md | sathishcg/windows-driver-docs-ddi | 238bb4dc827cb3d2c64e535cf3f91d205b20c943 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/dbgeng/nf-dbgeng-idebugsystemobjects2-gettotalnumberthreads.md | sathishcg/windows-driver-docs-ddi | 238bb4dc827cb3d2c64e535cf3f91d205b20c943 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:dbgeng.IDebugSystemObjects2.GetTotalNumberThreads
title: IDebugSystemObjects2::GetTotalNumberThreads
author: windows-driver-content
description: The GetTotalNumberThreads method returns the total number of threads for all the processes in the current target, in addition to the largest number of threads in any process for the current target.
old-location: debugger\gettotalnumberthreads.htm
tech.root: debugger
ms.assetid: dce67b78-a5e0-4664-b183-f462bcd773c8
ms.author: windowsdriverdev
ms.date: 5/3/2018
ms.keywords: GetTotalNumberThreads, GetTotalNumberThreads method [Windows Debugging], GetTotalNumberThreads method [Windows Debugging],IDebugSystemObjects interface, GetTotalNumberThreads method [Windows Debugging],IDebugSystemObjects2 interface, GetTotalNumberThreads method [Windows Debugging],IDebugSystemObjects3 interface, GetTotalNumberThreads method [Windows Debugging],IDebugSystemObjects4 interface, IDebugSystemObjects interface [Windows Debugging],GetTotalNumberThreads method, IDebugSystemObjects2 interface [Windows Debugging],GetTotalNumberThreads method, IDebugSystemObjects2.GetTotalNumberThreads, IDebugSystemObjects2::GetTotalNumberThreads, IDebugSystemObjects3 interface [Windows Debugging],GetTotalNumberThreads method, IDebugSystemObjects3::GetTotalNumberThreads, IDebugSystemObjects4 interface [Windows Debugging],GetTotalNumberThreads method, IDebugSystemObjects4::GetTotalNumberThreads, IDebugSystemObjects::GetTotalNumberThreads, IDebugSystemObjects_fece8f3e-8d85-492a-b1f8-beadc398613e.xml, dbgeng/IDebugSystemObjects2::GetTotalNumberThreads, dbgeng/IDebugSystemObjects3::GetTotalNumberThreads, dbgeng/IDebugSystemObjects4::GetTotalNumberThreads, dbgeng/IDebugSystemObjects::GetTotalNumberThreads, debugger.gettotalnumberthreads
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: method
req.header: dbgeng.h
req.include-header: Dbgeng.h
req.target-type: Desktop
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- dbgeng.h
api_name:
- IDebugSystemObjects.GetTotalNumberThreads
- IDebugSystemObjects2.GetTotalNumberThreads
- IDebugSystemObjects3.GetTotalNumberThreads
- IDebugSystemObjects4.GetTotalNumberThreads
product:
- Windows
targetos: Windows
req.typenames:
---
# IDebugSystemObjects2::GetTotalNumberThreads
## -description
The <b>GetTotalNumberThreads</b> method returns the total number of <a href="https://msdn.microsoft.com/6182ca34-ee5e-47e9-82fe-29266397e3a8">threads</a> for all the <a href="https://msdn.microsoft.com/6182ca34-ee5e-47e9-82fe-29266397e3a8">processes</a> in the current target, in addition to the largest number of threads in any process for the current target.
## -parameters
### -param Total [out]
Receives the total number of threads for all the processes in the current target.
### -param LargestProcess [out]
Receives the largest number of threads in any process for the current target.
## -returns
This method may also return error values. See <a href="https://msdn.microsoft.com/713f3ee2-2f5b-415e-9908-90f5ae428b43">Return Values</a> for more details.
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>S_OK</b></dt>
</dl>
</td>
<td width="60%">
The method was successful.
</td>
</tr>
</table>
## -remarks
For more information about threads, see <a href="https://msdn.microsoft.com/library/windows/hardware/ff558896">Threads and Processes</a>.
| 33.345455 | 1,254 | 0.810523 | yue_Hant | 0.776414 |
e0293dcb80f9bc807df719c34a97013783f8754c | 12,597 | md | Markdown | msteams-platform/bots/how-to/create-a-bot-for-teams.md | ojchoudh/msteams-docs | 80d41884b44aae55aaaf575a008756c8f96bf56b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-02-26T02:52:15.000Z | 2020-02-26T02:52:15.000Z | msteams-platform/bots/how-to/create-a-bot-for-teams.md | ojchoudh/msteams-docs | 80d41884b44aae55aaaf575a008756c8f96bf56b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-21T15:36:59.000Z | 2021-01-21T15:36:59.000Z | msteams-platform/bots/how-to/create-a-bot-for-teams.md | Surfndez/msteams-docs | 1781b1abff4499fbcd71086f22d486d87cb3bd6d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Create a bot for Microsoft Teams
author: clearab
description: How to create a bot for Microsoft Teams.
ms.topic: conceptual
localization_priority: Priority
ms.author: anclear
---
# Create a bot for Microsoft Teams
You'll need to complete the following steps to create a conversational bot:
1. Prepare your development environment.
1. Create your web service.
1. Register your web service as a bot with Microsoft Bot Framework.
1. Create your app manifest and your app package.
1. Upload your package to Microsoft Teams.
Creating your web service, registering your web service, and creating your app package, with the Bot Framework can be done in any order; however, because the three pieces are so intertwined, no matter in which order you do them, you'll need to return to update the others. Your registration needs the messaging endpoint from your deployed web service and your web service needs the ID and password created from your registration. Your app manifest also needs the registration ID to connect Teams to your web service.
As you're building your bot, you'll regularly move between changing your app manifest and deploying code to your web service. When working with the app manifest, keep in mind you can either manually manipulate the JSON file, or make changes through App Studio. Either way, you'll need to re-deploy (upload) your app in Teams when you make a change to the manifest; however, there's no need to do so when you deploy changes to your web service.
See the [Bot Framework Documentation](/azure/bot-service/) for additional information on the Bot Framework.
[!include[prepare environment](~/includes/prepare-environment.md)]
## Create your web service
The heart of your bot is your web service. It will define a single route, typically `/api/messages`, on which to receive all requests. To get started, you have a few options to choose from:
* Start with the Teams conversation bot sample in either [C#/dotnet](https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/csharp_dotnetcore/57.teams-conversation-bot) or [JavaScript](https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/javascript_nodejs/57.teams-conversation-bot).
* If you're using JavaScript, use the [Yeoman Generator for Microsoft Teams](https://github.com/OfficeDev/generator-teams) to scaffold your Teams app, including your web service. This is particularly helpful when building a Teams app that contains more than just a conversational bot.
* Create your web service from scratch. You can choose to add the Bot Framework SDK for your language, or you can work directly with the JSON payloads.
## Register your web service with the Bot Framework
> [!IMPORTANT]
> When registering your web service, be sure to set the **Display name** to the same name you used for your **Short name** in your app manifest. When your app is distributed by either direct uploading or through an organization's app catalog, messages sent to a conversation by your bot will use the registration's **Display name** rather than the app's **Short name**.
Registering your web service with the Bot Framework provides a secure communication channel between the Teams client and your web service. The Teams client and your web service never communicate directly. Instead, messages are routed through the Bot Framework Service (Microsoft Teams uses a separate instance of this service that is compliant with Office 365 standards).
You have two options when registering your web service with the Bot Framework. You can use either [App Studio](#using-app-studio) or the [legacy portal](#in-the-legacy-portal) to register your bot without using an Azure subscription. Or, if you already have an Azure subscription (or don't mind creating one), you can use [the Azure portal](#with-an-azure-subscription) to register your web service.
### Without an Azure subscription
If you do not wish to create your bot registration in Azure, you **must** use either this link - https://dev.botframework.com/bots/new, or App Studio. If you click on the *Create a bot* button in the Bot Framework portal, you will create your bot registration in Microsoft Azure, and will need to provide an Azure subscription. To manage your registration or migrate it to an Azure subscription after creation go to: https://dev.botframework.com/bots.
When you edit the properties of an existing Bot Framework registration not registered in Azure, you'll see A "Migration status" column and a blue "Migrate" button that will take you to the Microsoft Azure portal. Don't select the "Migrate" button unless that's what you want to do. Instead, select the **name** of the bot and you can edit its properties:

Scenarios when you **must** have your bot registration in Azure (either by creating it in the Azure portal or via migration):
* You want to use the Bot Framework's [OAuthPrompt](./authentication/auth-flow-bot.md) for authentication.
* You want to enable additional channels like Web Chat, Direct Line, or Skype.
#### Using App Studio
*App Studio* is a Teams app that helps you build Teams apps, including registering your web service as a bot, creating an app manifest, and your app package. It also contains a React control library and configurable samples for cards. See [Getting started with Teams App Studio](../../concepts/build-and-test/app-studio-overview.md).
Remember, if you use App Studio to register your web service you'll need to go to https://dev.botframework.com/bots to manage your registration. Some settings (like your messaging endpoint) can be updated in App Studio as well.
#### In the legacy portal
Create your bot registration using this link: https://dev.botframework.com/bots/new. **Be sure to add Microsoft Teams as a channel from the featured channels list after creating your bot.** Feel free to re-use any Microsoft App ID you generated if you've already created your app package/manifest.

### With an Azure subscription
You can also register your web service by creating a Bot Channels Registration resource in the Azure portal.
[!INCLUDE [bot channels registration steps](~/includes/bots/azure-bot-channels-registration.md)]
The [Bot Framework portal](https://dev.botframework.com) is optimized for registering bots in Microsoft Azure. Here are some things to know:
* The Microsoft Teams channel for bots registered on Azure is **free**. Messages sent over the Teams channel will NOT count towards the consumed messages for the bot.
* If you register your bot using Microsoft Azure, your bot code doesn't need to be *hosted* on Microsoft Azure.
* If you do register a bot using Microsoft Azure portal, you must have a Microsoft Azure account. You can [create one for free](https://azure.microsoft.com/free/). To verify your identity when you create an Azure account, you must provide a credit card, but it won't be charged; it's always free to create and use bots with Microsoft Teams.
## Create your app manifest and package
Your [app manifest](~/resources/schema/manifest-schema.md) defines the metadata for your app, the extension points your app is using, and pointers to the web services those extension points connect to. You can either use App Studio to help you create your app manifest, or create it manually.
### Add using App Studio
1. In the Teams client, open App Studio from the **...** overflow menu on the left navigation rail. If App Studio isn't already installed, you can do so by searching for it.
2. On the **Manifest editor** tab select **Create a new app** (or if you're adding a bot to an existing app, you can import your app package)
3. Add your app details (see [manifest schema definition](~/resources/schema/manifest-schema.md) for full descriptions of each field).
4. On the **Bots** tab select the **Setup** button.
5. You can either create a new web service registration (**New bot**), or if you've already registered one, select **Existing bot**.
6. Select the capabilities and scopes your bot will need.
7. If necessary, update your bot endpoint address to point to your bot. It should look something like `https://someplace.com/api/messages`.
8. Optionally, add [bot commands](~/bots/how-to/create-a-bot-commands-menu.md).
9. Optionally, you can download your completed app package from the **Test and distribute** tab.
### Create it manually
As with messaging extensions and tabs, you update the [app-manifest](~/resources/schema/manifest-schema.md) to define your bot. Add new top-level JSON structure in your app manifest with the `bots` property.
|Name| Type| Maximum size | Required | Description|
|---|---|---|---|---|
|`botId`|String|64 characters|✔|The unique Microsoft app ID for the bot as registered with the Bot Framework. This may well be the same as the overall app ID.|
|`needsChannelSelector`|Boolean|||Describes whether or not the bot utilizes a user hint to add the bot to a specific channel. Default: `false`.|
|`isNotificationOnly`|Boolean|||Indicates whether a bot is a one-way, notification-only bot, as opposed to a conversational bot. Default: `false`.|
|`supportsFiles`|Boolean|||Indicates whether the bot supports the ability to upload/download files in personal chat. Default: `false`.|
|`scopes`|Array of enum|3|✔|Specifies whether the bot offers an experience in the context of a channel in a `team`, in a group chat (`groupchat`), or an experience scoped to an individual user alone (`personal`). These options are non-exclusive.|
Optionally, you can define one or more lists of commands that your bot can recommend to users. The object is an array (maximum of 2 elements) with all elements of type `object`. You must define a separate command list for each scope that your bot supports. *See* [Bot menus](./create-a-bot-commands-menu.md), for more information.
|Name| Type| Maximum size | Required | Description|
|---|---|---|---|---|
|`items.scopes`|array of enum|3|✔|Specifies the scope for which the command list is valid. Options are `team`, `personal`, and `groupchat`.|
|`items.commands`|array of objects|10|✔|An array of commands the bot supports:<br>`title`: the bot command name (string, 32)<br>`description`: a simple description or example of the command syntax and its argument (string, 128)|
#### Simple manifest example
The example below is a simple bot object, with two command lists defined. This is not the entire app manifest file, just the part specific to messaging extensions.
```json
...
"bots": [
{
"botId": "%MICROSOFT-APP-ID-REGISTERED-WITH-BOT-FRAMEWORK%",
"needsChannelSelector": false,
"isNotificationOnly": false,
"scopes": [ "team", "personal", "groupchat" ],
"supportsFiles": true,
"commandLists": [
{
"scopes": [ "team", "groupchat" ],
"commands": [
{
"title": "Command 1",
"description": "Description of Command 1"
},
{
"title": "Command N",
"description": "Description of Command N"
}
]
},
{
"scopes": [ "personal", "groupchat" ],
"commands": [
{
"title": "Personal command 1",
"description": "Description of Personal command 1"
},
{
"title": "Personal command N",
"description": "Description of Personal command N"
}
]
}
]
}
],
...
```
#### Create your app package manually
To create an app package, you need to add your app manifest and (optionally) your app icons to a .zip archive file. See [Create your app package](~/concepts/build-and-test/apps-package.md) for complete details. Make sure your .zip archive contains only the necessary files, and has no additional folder structure inside of it.
## Upload your package to Microsoft Teams
If you've been using App Studio, you can install your app from the **Test and distribute** tab of the **Manifest editor**. Alternatively, you can install your app package by clicking the `...` overflow menu from the left navigation rail, clicking **More apps**, then the **Upload a custom app** link. You can also import an app manifest or app package into App Studio to make additional updates before uploading.
## Next steps
* [Bot conversation basics](./conversations/conversation-basics.md)
* [Subscribe to conversation events](./conversations/subscribe-to-conversation-events.md)
| 72.396552 | 516 | 0.744939 | eng_Latn | 0.996157 |
e0295084706c7e20638eb58ac308035d0fdcd084 | 708 | md | Markdown | src/pages/articles/2018-04-17---Thirty-Third-Post/index.md | maxgrok/v2-maxgrok.github.io | f32ffd3d73e4d2c2224d9ed195145291708db910 | [
"MIT"
] | null | null | null | src/pages/articles/2018-04-17---Thirty-Third-Post/index.md | maxgrok/v2-maxgrok.github.io | f32ffd3d73e4d2c2224d9ed195145291708db910 | [
"MIT"
] | 13 | 2018-04-04T13:40:13.000Z | 2022-02-26T19:24:45.000Z | src/pages/articles/2018-04-17---Thirty-Third-Post/index.md | maxgrok/maxgrok.github.io | f32ffd3d73e4d2c2224d9ed195145291708db910 | [
"MIT"
] | null | null | null | ---
title: "P2P$: Module 2 Final Project"
date: "2018-04-17T15:30:13.121Z"
layout: post
draft: true
path: "/posts/thirty-third-post/"
category: "Flatiron School"
tags:
- "Flatiron School"
- "Immersive"
- "Coding School"
description: "This is a post about how I built a peer-to-peer crowdfunding platform with Rails 5.2, Font-Awesome, and Semantic-UI."
---
I'm building a peer-to-peer crowdfunding platform from scratch.
In module 2, we covered CRUD with validations in Rails, as well as associations, authentication, forms, and other rails helpers.
What's working:
Categories: Update, Read, Delete, Create
What's not working:
- Pledges create, update, delete
- Projects create, update, delete | 27.230769 | 131 | 0.735876 | eng_Latn | 0.954451 |
e02985b17c15303dc394f2f9268b5e2d4953b521 | 9,790 | md | Markdown | v2.0/training/data-unavailability-troubleshooting.md | rkruze/docs | 4be97c3499bae58dac01e54be0e8a2e65a6fb875 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 1 | 2018-11-27T02:41:17.000Z | 2018-11-27T02:41:17.000Z | v2.0/training/data-unavailability-troubleshooting.md | rkruze/docs | 4be97c3499bae58dac01e54be0e8a2e65a6fb875 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | v2.0/training/data-unavailability-troubleshooting.md | rkruze/docs | 4be97c3499bae58dac01e54be0e8a2e65a6fb875 | [
"CC-BY-4.0",
"BSD-3-Clause"
] | 1 | 2018-11-27T12:56:21.000Z | 2018-11-27T12:56:21.000Z | ---
title: Data Unavailability Troubleshooting
toc: true
toc_not_nested: true
sidebar_data: sidebar-data-training.json
block_search: false
redirect_from: /training/data-unavailability-troubleshooting.html
---
<iframe src="https://docs.google.com/presentation/d/e/2PACX-1vQwCCLjWfQRS32P_lbRgOlRRoQUt77KGMrsRrg08_cT_R19YD-CUPe3fQZMNTPxNW2hz9PGcotH7M6J/embed?start=false&loop=false" frameborder="0" width="756" height="454" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
<style>
#toc ul:before {
content: "Hands-on Lab"
}
</style>
## Before You Begin
In this lab, you'll start with a fresh cluster, so make sure you've stopped and cleaned up the cluster from the previous labs.
## Step 1. Start a cluster spread across 3 separate localities
Create a 9-node cluster, with 3 nodes in each of 3 different localities.
1. In a new terminal, start node 1 in locality us-east-1:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-1 \
--store=node1 \
--host=localhost \
--port=26257 \
--http-port=8080 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~~
2. In the same terminal, start node 2 in locality us-east-1:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-1 \
--store=node2 \
--host=localhost \
--port=26258 \
--http-port=8081 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~~
3. In the same terminal, start node 3 in locality us-east-1:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-1 \
--store=node3 \
--host=localhost \
--port=26259 \
--http-port=8082 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~~
4. In the same terminal, start node 4 in locality us-east-2:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-2 \
--store=node4 \
--host=localhost \
--port=26260 \
--http-port=8083 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
5. In the same terminal, start node 5 in locality us-east-2:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-2 \
--store=node5 \
--host=localhost \
--port=26261 \
--http-port=8084 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
6. In the same terminal, start node 6 in locality us-east-2:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-2 \
--store=node6 \
--host=localhost \
--port=26262 \
--http-port=8085 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
7. In the same terminal, start node 7 in locality us-east-3:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-3 \
--store=node7 \
--host=localhost \
--port=26263 \
--http-port=8086 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
8. In the same terminal, start node 8 in locality us-east-3:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-3 \
--store=node8 \
--host=localhost \
--port=26264 \
--http-port=8087 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
9. In the same terminal, start node 9 in locality us-east-3:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-3 \
--store=node9 \
--host=localhost \
--port=26265 \
--http-port=8088 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
10. In the same terminal, perform a one-time initialization of the cluster:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach init --insecure
~~~
## Step 2. Simulate the problem
In preparation, add a table and use a replication zone to force the table's data onto the new nodes.
1. Generate an `intro` database with a `mytable` table:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach gen example-data intro | ./cockroach sql --insecure
~~~
2. Create a [replication zone](../configure-replication-zones.html) forcing the replicas of the `mytable` range to be located on nodes with the `datacenter=us-east-3` locality:
{% include copy-clipboard.html %}
~~~ shell
$ echo 'constraints: [+datacenter=us-east-3]' | ./cockroach zone set intro.mytable --insecure -f -
~~~
~~~
range_min_bytes: 1048576
range_max_bytes: 67108864
gc:
ttlseconds: 90000
num_replicas: 3
constraints: [+datacenter=us-east-3]
~~~
3. Use the `SHOW EXPERIMENTAL_RANGES` SQL command to determine the nodes on which the replicas for the `mytable` table are now located:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach sql \
--insecure \
--execute="SHOW EXPERIMENTAL_RANGES FROM TABLE intro.mytable;"
~~~
~~~
+-----------+---------+----------+--------------+
| Start Key | End Key | Replicas | Lease Holder |
+-----------+---------+----------+--------------+
| NULL | NULL | {3,6,9} | 9 |
+-----------+---------+----------+--------------+
(1 row)
~~~
4. The node IDs above may not match the order in which we started the nodes because node IDs only get allocated after `cockroach init` is run. You can verify that the nodes listed by `SHOW EXPERIMENTAL_RANGES` are all in the `datacenter=us-east-3` locality by opening the **Node Diagnostics** debug page at <a href="http://localhost:8080/#/reports/nodes" data-proofer-ignore>http://localhost:8080/#/reports/nodes</a> and checking the locality for each of the 3 node IDs.
<img src="{{ 'images/v2.0/training-19.png' | relative_url }}" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />
## Step 3. Simulate the problem
Stop 2 of the nodes containing `mytable` replicas. This will cause the range to lose a majority of its replicas and become unavailable. However, all other ranges are spread evenly across all three localities because the replication zone only applies to `mytable`, so the cluster as a whole will remain available.
1. Kill nodes 8 and 9:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach quit \
--insecure \
--port=26264
~~~
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach quit \
--insecure \
--port=26265
~~~
## Step 4. Troubleshoot the problem
1. In a new terminal, try to insert into the `mytable` table, pointing at a node that is still online:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach sql \
--insecure \
--port=26257 \
--execute="INSERT INTO intro.mytable VALUES (42, '')" \
--logtostderr=WARNING
~~~
Because the range for `mytable` no longer has a majority of its replicas, the query will hang indefinitely.
2. Go back to the Admin UI at <a href="http://localhost:8080" data-proofer-ignore>http://localhost:8080</a> and click **Metrics** on the left.
3. Select the **Replication** dashboard.
4. Hover over the **Ranges** graph:
<img src="{{ 'images/v2.0/training-14.png' | relative_url }}" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />
You should see that 1 range is now unavailable. If the unavailable count is larger than 1, that would mean that some system ranges had a majority of replicas on the down nodes as well.
The **Summary** panel on the right should tell you the same thing:
<img src="{{ 'images/v2.0/training-15.png' | relative_url }}" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:25%" />
5. For more insight into the ranges that are unavailable, go to the **Problem Ranges Report** at <a href="http://localhost:8080/#/reports/problemranges" data-proofer-ignore>http://localhost:8080/#/reports/problemranges</a>.
<img src="{{ 'images/v2.0/training-16.png' | relative_url }}" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />
## Step 5. Resolve the problem
1. Restart the stopped nodes:
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-3 \
--store=node8 \
--host=localhost \
--port=26264 \
--http-port=8087 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
{% include copy-clipboard.html %}
~~~ shell
$ ./cockroach start \
--insecure \
--locality=datacenter=us-east-3 \
--store=node9 \
--host=localhost \
--port=26265 \
--http-port=8088 \
--join=localhost:26257,localhost:26258,localhost:26259 &
~~~
3. Go back to the Admin UI, click **Metrics** on the left, and verify that ranges are no longer unavailable.
4. Check back on your `INSERT` statement that was stuck and verify that it completed successfully.
## Step 6. Clean up
In the next lab, you'll start a new cluster from scratch, so take a moment to clean things up.
1. Stop all CockroachDB nodes:
{% include copy-clipboard.html %}
~~~ shell
$ pkill -9 cockroach
~~~
2. Remove the nodes' data directories:
{% include copy-clipboard.html %}
~~~ shell
$ rm -rf node1 node2 node3 node4 node5 node6 node7 node8 node9
~~~
## What's Next?
[Data Corruption Troubleshooting](data-corruption-troubleshooting.html)
| 31.178344 | 470 | 0.648008 | eng_Latn | 0.832648 |
e02990d023e548756de1bb89ef793ce31cdba242 | 5,059 | md | Markdown | eng/common/TestResources/README.md | ppartarr/azure-sdk-for-java | e3c832f87cf65219dd7b500ab25a998c38dab8a3 | [
"MIT"
] | 2 | 2019-05-17T21:24:53.000Z | 2020-02-12T11:13:42.000Z | eng/common/TestResources/README.md | ppartarr/azure-sdk-for-java | e3c832f87cf65219dd7b500ab25a998c38dab8a3 | [
"MIT"
] | null | null | null | eng/common/TestResources/README.md | ppartarr/azure-sdk-for-java | e3c832f87cf65219dd7b500ab25a998c38dab8a3 | [
"MIT"
] | 2 | 2020-05-21T22:51:22.000Z | 2020-05-26T20:53:01.000Z | # Live Test Resource Management
Running and recording live tests often requires first creating some resources
in Azure. Service directories that include a test-resources.json file require
running [New-TestResources.ps1][] to create these resources and output
environment variables you must set.
The following scripts can be used both in on your desktop for developer
scenarios as well as on hosted agents for continuous integration testing.
* [New-TestResources.ps1][] - Creates new test resources for a given service.
* [Remove-TestResources.ps1][] - Deletes previously created resources.
## Prerequisites
1. Install [PowerShell][] version 7.0 or newer.
2. Install the [Azure PowerShell][PowerShellAz].
## On the Desktop
To set up your Azure account to run live tests, you'll need to log into Azure,
create a service principal, and set up your resources defined in
test-resources.json as shown in the following example using Azure Search.
Note that `-Subscription` is an optional parameter but recommended if your account
is a member of multiple subscriptions.
```powershell
Connect-AzAccount -Subscription 'YOUR SUBSCRIPTION ID'
$sp = New-AzADServicePrincipal -Role Owner
eng\common\TestResources\New-TestResources.ps1 `
-BaseName 'myusername' `
-ServiceDirectory 'search' `
-TestApplicationId $sp.ApplicationId `
-TestApplicationSecret (ConvertFrom-SecureString $sp.Secret -AsPlainText)
```
If you are running this for a .NET project on Windows, the recommended method is to
add the `-OutFile` switch to the above command. This will save test environment settings
into a test-resources.json.env file next to test-resources.json. The file is protected via DPAPI.
The environment file would be scoped to the current repository directory and avoids the need to
set environment variables or restart your IDE to recognize them.
Along with some log messages, this will output environment variables based on
your current shell like in the following example:
```powershell
$env:AZURE_TENANT_ID = '<<secret>>'
$env:AZURE_CLIENT_ID = '<<secret>>'
$env:AZURE_CLIENT_SECRET = '<<secret>>'
$env:AZURE_SUBSCRIPTION_ID = 'YOUR SUBSCRIPTION ID'
$env:AZURE_RESOURCE_GROUP = 'rg-myusername'
$env:AZURE_LOCATION = 'westus2'
$env:AZURE_SEARCH_STORAGE_NAME = 'myusernamestg'
$env:AZURE_SEARCH_STORAGE_KEY = '<<secret>>'
```
For security reasons we do not set these environment variables automatically
for either the current process or persistently for future sessions. You must
do that yourself based on your current platform and shell.
If your current shell was detected properly, you should be able to copy and
paste the output directly in your terminal and add to your profile script.
For example, in PowerShell on Windows you can copy the output above and paste
it back into the terminal to set those environment variables for the current
process. To persist these variables for future terminal sessions or for
applications started outside the terminal, you could copy and paste the
following commands:
```powershell
setx AZURE_TENANT_ID $env:AZURE_TENANT_ID
setx AZURE_CLIENT_ID $env:AZURE_CLIENT_ID
setx AZURE_CLIENT_SECRET $env:AZURE_CLIENT_SECRET
setx AZURE_SUBSCRIPTION_ID $env:AZURE_SUBSCRIPTION_ID
setx AZURE_RESOURCE_GROUP $env:AZURE_RESOURCE_GROUP
setx AZURE_LOCATION $env:AZURE_LOCATION
setx AZURE_SEARCH_STORAGE_NAME $env:AZURE_SEARCH_STORAGE_NAME
setx AZURE_SEARCH_STORAGE_KEY $env:AZURE_SEARCH_STORAGE_KEY
```
After running or recording live tests, if you do not plan on further testing
you can remove the test resources you created above by running:
[Remove-TestResources.ps1][]:
```powershell
Remove-TestResources.ps1 -BaseName 'myusername' -Force
```
If you created a new service principal as shown above, you might also remove it:
```powershell
Remove-AzADServicePrincipal -ApplicationId $sp.ApplicationId -Force
```
If you persisted environment variables, you should also remove those as well.
## In CI
Test pipelines should include deploy-test-resources.yml and
remove-test-resources.yml like in the following examples:
```yml
- template: /eng/common/TestResources/deploy-test-resources.yml
parameters:
ServiceDirectory: '${{ parameters.ServiceDirectory }}'
# Run tests
- template: /eng/common/TestResources/remove-test-resources.yml
```
Be sure to link the **Secrets for Resource Provisioner** variable group
into the test pipeline for these scripts to work.
## Documentation
To regenerate documentation for scripts within this directory, you can install
[platyPS][] and run it like in the following example:
```powershell
Install-Module platyPS -Scope CurrentUser -Force
New-MarkdownHelp -Command .\New-TestResources.ps1 -OutputFolder . -Force
```
PowerShell markdown documentation created with [platyPS][].
[New-TestResources.ps1]: ./New-TestResources.ps1.md
[Remove-TestResources.ps1]: ./Remove-TestResources.ps1.md
[PowerShell]: https://github.com/PowerShell/PowerShell
[PowerShellAz]: https://docs.microsoft.com/powershell/azure/install-az-ps
[platyPS]: https://github.com/PowerShell/platyPS
| 38.037594 | 97 | 0.796205 | eng_Latn | 0.895553 |
e02bfa5aaaf603375104179fa3db1b2f5cac9c74 | 1,311 | md | Markdown | _posts/2017-9-4-data-science-books.md | tordable/tordable.github.io | 38f9962108539ec98b8226bcf0ffa9dd2610a505 | [
"MIT"
] | 1 | 2017-04-15T20:11:10.000Z | 2017-04-15T20:11:10.000Z | _posts/2017-9-4-data-science-books.md | tordable/tordable.github.io | 38f9962108539ec98b8226bcf0ffa9dd2610a505 | [
"MIT"
] | null | null | null | _posts/2017-9-4-data-science-books.md | tordable/tordable.github.io | 38f9962108539ec98b8226bcf0ffa9dd2610a505 | [
"MIT"
] | 3 | 2018-09-17T04:55:37.000Z | 2019-10-11T17:23:49.000Z | ---
layout: post
title: Data Science Books
---
Data science is a discipline that uses scientific methods, statistics and computer science in order
to solve business problems using data. Recently it has become incredibly popular, thanks to the multitude of
open source frameworks, cheaper compute resources and especially the increase in the
amount of data that is being generated and captured. Data science can be applied to a variety of areas,
but its biggest impact recently has been in the context of business analytics and decision processes.
It is not accident that data scientist is one of the most sought after jobs, and one of the better paid positions.
See for example the following report by [Glassdoor](https://www.glassdoor.com/List/Best-Jobs-in-America-LST_KQ0,20.htm).

If you are even remotely interested in data science you may find this interesting: Humble Bundle is selling a
bundle of [data science books](https://www.humblebundle.com/books/data-science-books) for an incredibly low price. They cover
a wide spectrum of data science topic areas. From the foundations of the discipline to applied methods to store and
process vast amounts of data at scale. Check them out [here](https://www.humblebundle.com/books/data-science-books).
| 59.590909 | 125 | 0.798627 | eng_Latn | 0.998258 |
e02c3e3740b8bea974f2084939e5b9358f5e4988 | 5,853 | md | Markdown | guides/v3.4.0/templates/actions.md | zion/guides-source | 6d7a3253e6d8266ea8adbd456c8dc58a58e7c6e9 | [
"MIT"
] | null | null | null | guides/v3.4.0/templates/actions.md | zion/guides-source | 6d7a3253e6d8266ea8adbd456c8dc58a58e7c6e9 | [
"MIT"
] | 3 | 2021-09-20T22:28:55.000Z | 2022-02-26T02:14:53.000Z | guides/v3.4.0/templates/actions.md | zion/guides-source | 6d7a3253e6d8266ea8adbd456c8dc58a58e7c6e9 | [
"MIT"
] | null | null | null | Your app will often need a way to let users interact with controls that
change application state. For example, imagine that you have a template
that shows a blog title, and supports expanding the post to show the body.
If you add the
[`{{action}}`](https://www.emberjs.com/api/ember/release/classes/Ember.Templates.helpers/methods/action?anchor=action)
helper to any HTML DOM element, when a user clicks the element, the named event
will be sent to the template's corresponding component or controller.
```handlebars {data-filename=app/templates/components/single-post.hbs}
<h3><button {{action "toggleBody"}}>{{title}}</button></h3>
{{#if isShowingBody}}
<p>{{{body}}}</p>
{{/if}}
```
In the component or controller, you can then define what the action does within
the `actions` hook:
```javascript {data-filename=app/components/single-post.js}
import Component from '@ember/component';
export default Component.extend({
actions: {
toggleBody() {
this.toggleProperty('isShowingBody');
}
}
});
```
You will learn about more advanced usages in the Component's [Triggering Changes With Actions](../../components/triggering-changes-with-actions/) guide,
but you should familiarize yourself with the following basics first.
## Action Parameters
You can optionally pass arguments to the action handler. Any values
passed to the `{{action}}` helper after the action name will be passed to
the handler as arguments.
For example, if the `post` argument was passed:
```handlebars
<p><button {{action "select" post}}>✓</button> {{post.title}}</p>
```
The `select` action handler would be called with a single argument
containing the post model:
```javascript {data-filename=app/components/single-post.js}
import Component from '@ember/component';
export default Component.extend({
actions: {
select(post) {
console.log(post.get('title'));
}
}
});
```
## Specifying the Type of Event
By default, the
[`{{action}}`](https://www.emberjs.com/api/ember/release/classes/Ember.Templates.helpers/methods/action?anchor=action)
helper listens for click events and triggers the action when the user clicks
on the element.
You can specify an alternative event by using the `on` option.
```handlebars
<p>
<button {{action "select" post on="mouseUp"}}>✓</button>
{{post.title}}
</p>
```
You should use the <code>camelCased</code> event names, so two-word names like `keypress`
become `keyPress`.
## Allowing Modifier Keys
By default, the `{{action}}` helper will ignore click events with
pressed modifier keys. You can supply an `allowedKeys` option
to specify which keys should not be ignored.
```handlebars
<button {{action "anActionName" allowedKeys="alt"}}>
click me
</button>
```
This way the `{{action}}` will fire when clicking with the alt key
pressed down.
## Allowing Default Browser Action
By default, the `{{action}}` helper prevents the default browser action of the
DOM event. If you want to allow the browser action, you can stop Ember from
preventing it.
For example, if you have a normal link tag and want the link to bring the user
to another page in addition to triggering an ember action when clicked, you can
use `preventDefault=false`:
```handlebars
<a href="newPage.htm" {{action "logClick" preventDefault=false}}>Go</a>
```
With `preventDefault=false` omitted, if the user clicked on the link, Ember.js
will trigger the action, but the user will remain on the current page.
With `preventDefault=false` present, if the user clicked on the link, Ember.js
will trigger the action *and* the user will be directed to the new page.
## Modifying the action's first parameter
If a `value` option for the
[`{{action}}`](https://www.emberjs.com/api/ember/release/classes/Ember.Templates.helpers/methods/action?anchor=action)
helper is specified, its value will be considered a property path that will
be read off of the first parameter of the action. This comes very handy with
event listeners and enables to work with one-way bindings.
```handlebars
<label>What's your favorite band?</label>
<input type="text" value={{favoriteBand}} onblur={{action "bandDidChange"}} />
```
Let's assume we have an action handler that prints its first parameter:
```javascript
actions: {
bandDidChange(newValue) {
console.log(newValue);
}
}
```
By default, the action handler receives the first parameter of the event
listener, the event object the browser passes to the handler, so
`bandDidChange` prints `Event {}`.
Using the `value` option modifies that behavior by extracting that property from
the event object:
```handlebars
<label>What's your favorite band?</label>
<input type="text" value={{favoriteBand}} onblur={{action "bandDidChange" value="target.value"}} />
```
The `newValue` parameter thus becomes the `target.value` property of the event
object, which is the value of the input field the user typed. (e.g 'Foo Fighters')
## Attaching Actions to Non-Clickable Elements
Note that actions may be attached to any element of the DOM, but not all
respond to the `click` event. For example, if an action is attached to an `a`
link without an `href` attribute, or to a `div`, some browsers won't execute
the associated function. If it's really needed to define actions over such
elements, a CSS workaround exists to make them clickable, `cursor: pointer`.
For example:
```css
[data-ember-action]:not(:disabled) {
cursor: pointer;
}
```
Keep in mind that even with this workaround in place, the `click` event will
not automatically trigger via keyboard driven `click` equivalents (such as
the `enter` key when focused). Browsers will trigger this on clickable
elements only by default. This also doesn't make an element accessible to
users of assistive technology. You will need to add additional things like
`role` and/or `tabindex` to make this accessible for your users.
| 33.445714 | 152 | 0.746797 | eng_Latn | 0.992505 |
e02c4380ce304f730cd70f08fcb4d0eb2e6bda6e | 13,997 | md | Markdown | bach_code/raw_data/README.md | glasperfan/thesis | aead2dfb8052afbff4d05203a0be5b0b7ef69462 | [
"Apache-2.0"
] | 5 | 2015-12-08T21:47:41.000Z | 2020-10-28T12:39:08.000Z | bach_code/raw_data/README.md | glasperfan/thesis | aead2dfb8052afbff4d05203a0be5b0b7ef69462 | [
"Apache-2.0"
] | null | null | null | bach_code/raw_data/README.md | glasperfan/thesis | aead2dfb8052afbff4d05203a0be5b0b7ef69462 | [
"Apache-2.0"
] | 1 | 2020-10-28T12:39:09.000Z | 2020-10-28T12:39:09.000Z | # Chorale index
* 1.xml: Aus meines Herzens Grunde
* 2.xml: Ich dank dir, lieber Herre
* 3.xml: Ach Gott, vom Himmel sieh' darein
* 4.xml: Es ist das Heil uns kommen her
* 5.xml: An Wasserflüssen Babylon
* 6.xml: Christus, der ist mein Leben
* 7.xml: Nun lob, mein Seel, den Herren
* 8.xml: Freuet euch, ihr Christen alle
* 9.xml: Ermuntre dich, mein schwacher Geist
* 10.xml: Aus tiefer Not schrei ich zu dir 1
* 11.xml: Puer natus in Bethlehem
* 12.xml: Allein zu dir, Herr Jesu Christ
* 13.xml: Christ lag in Todesbanden
* 14.xml: Es woll uns Gott genädig sein 2
* 15.xml: Menschenkind, merk eben
* 16.xml: Ich hab mein Sach Gott heimgestellt
* 17.xml: Ein feste Burg ist unser Gott
* 18.xml: Herzlich tut mich verlangen
* 19.xml: Helft mir Gotts Güte preisen
* 20.xml: Valet will ich dir geben
* 21.xml: O Ewigkeit, du Donnerwort
* 22.xml: Es spricht der Unweisen Mund
* 23.xml: Nun komm, der Heiden Heiland
* 24.xml: Wie nach einer Wasserquelle
* 25.xml: Jesus Christus, unser Heiland, der von uns den Gottes Zorn wandt
* 26.xml: Wo Gott der Herr nicht bei uns hält
* 27.xml: Nun danket alle Gott
* 28.xml: Herr, ich habe mißgehandelt
* 29.xml: Erbarm dich mein, o Herre Gott
* 30.xml: Gott des Himmels und der Erden
* 31.xml: Nun bitten wir den heiligen Geist
* 32.xml: Wachet doch, erwacht, ihr Schläfer
* 33.xml: Ach, was soll ich Sünder machen
* 34.xml: Ach Gott und Herr, wie groß und schwer
* 35.xml: Was mein Gott will, das g'scheh allzeit
* 36.xml: Du Friedefürst, Herr Jesu Christ
* 37.xml: Mach's mit mir, Gott, nach deiner Güt
* 38.xml: Kommt her zu mir, spricht Gottes Söhn
* 39.xml: Vater unser im Himmelreich
* 40.xml: Ach wie nichtig, ach wie flüchtig
* 41.xml: Mit Fried und Freud ich fahr dahin
* 42.xml: O Welt, ich muß dich lassen
* 43.xml: Wenn mein Stündlein vorhanden ist 1
* 44.xml: Das neugeborne Kindelein
* 45.xml: Kommt her, ihr lieben Schwesterlein
* 46.xml: Wir Christenleut
* 47.xml: Christum wir sollen loben schon
* 48.xml: O Traurigkeit, o Herzeleid
* 49.xml: Herzlich lieb hab ich dich, o Herr
* 50.xml: Herzliebster Jesu, was hast du verbrochen
* 51.xml: O stilles Gotteslamm
* 52.xml: Jesu Kreuz, Leiden und Pein
* 53.xml: Wer nur den lieben Gott läßt walten
* 54.xml: O Welt, ich muß dich lassen
* 55.xml: Was Gott tut, das ist wohlgetan
* 56.xml: Es woll uns Gott genädig sein 1
* 57.xml: Wie nach einer Wasserquelle
* 58.xml: Wenn wir in höchsten Nöten sein
* 59.xml: Komm, heiliger Geist, Herre Gott
* 60.xml: Gott sei gelobet und gebenedeiet
* 61.xml: Erhalt uns, Herr, bei deinem Wort
* 62.xml: Wenn mein Stündlein vorhanden ist 2
* 63.xml: Herzlich tut mich verlangen
* 64.xml: Das walt' mein Gott, Vater, Sohn und heiliger Geist
* 65.xml: Wie nach einer Wasserquelle
* 66.xml: In dich hab ich gehoffet, Herr 1
* 67.xml: Herzliebster Jesu, was hast du verbrochen
* 68.xml: Heut' triumphieret Gottes Sohn
* 69.xml: Herzlich tut mich verlangen
* 70.xml: Christus, der uns selig macht
* 71.xml: Jesu Kreuz, Leiden und Pein
* 72.xml: Nun bitten wir den heiligen Geist
* 73.xml: Die Wollust dieser Welt
* 74.xml: Wie schön leuchtet der Morgenstern
* 75.xml: Du geballtes Weltgebäude
* 76.xml: Helft mir Gotts Güte preisen
* 77.xml: Hast du denn, Liebster, dein Angesicht gänzlich verborgen
* 78.xml: Verleih uns Frieden gnädiglich (beginning)
* 79.xml: Wenn mein Stündlein vorhanden ist 2
* 80.xml: Nun laßt uns Gott dem Herren
* 81.xml: Warum betrübst du dich, mein Herz
* 82.xml: Werde munter, mein Gemüte
* 83.xml: Jesu, meine Freude
* 84.xml: Nun bitten wir den heiligen Geist
* 85.xml: Helft mir Gotts Güte preisen
* 86.xml: Durch Adams Fall ist ganz verderbt
* 87.xml: Herr Christ, der einge Gottes-Söhn
* 88.xml: Ermuntre dich, mein schwacher Geist
* 89.xml: O Welt, ich muß dich lassen
* 90.xml: Wer nur den lieben Gott läßt walten
* 91.xml: Herzliebster Jesu, was hast du verbrochen
* 92.xml: Jesu Kreuz, Leiden und Pein
* 93.xml: Herzlich lieb hab ich dich, o Herr
* 94.xml: Valet will ich dir geben
* 95.xml: Da Christus geboren war
* 96.xml: Vater unser im Himmelreich
* 97.xml: Herzliebster Jesu, was hast du verbrochen
* 98.xml: Wer nur den lieben Gott läßt walten
* 99.xml: Christus, der uns selig macht
* 100.xml: Von Gott will ich nicht lassen
* 101.xml: Was mein Gott will, das g'scheh allzeit
* 102.xml: O Welt, ich muß dich lassen
* 103.xml: In dich hab ich gehoffet, Herr 1
* 104.xml: Es woll uns Gott genädig sein 1
* 105.xml: Was mein Gott will, das g'scheh allzeit
* 106.xml: Werde munter, mein Gemüte
* 107.xml: Ist Gott mein Schild und Helfersmann
* 108.xml: Helft mir Gotts Güte preisen
* 109.xml: Auf, auf, mein Herz, und du, mein ganzer Sinn
* 110.xml: Allein Gott in der Höh sei Ehr
* 111.xml: Durch Adams Fall ist ganz verderbt
* 112.xml: Dies sind die heiligen zehn Gebot
* 113.xml: Alles ist an Gottes Segen
* 114.xml: Keinen hat Gott verlassen
* 115.xml: Meine Seel erhebt den Herren
* 116.xml: Liebster Jesu, wir sind hier
* 117.xml: Kyrie, Gott Vater in Ewigkeit
* 118.xml: Wir glauben all an einen Gott, Schöpfer Himmels und der Erden
* 119.xml: Du geballtes Weltgebäude
* 120.xml: Gott der Vater wohn uns bei
* 121.xml: Herr Jesu Christ, dich zu uns wend
* 122.xml: Wer Gott vertraut, hat wohl gebaut
* 123.xml: Jesu, meine Freude
* 124.xml: Warum sollt ich mich denn grämen
* 125.xml: In allen meinen Taten
* 126.xml: Schwing dich auf zu deinem Gott
* 127.xml: In dulci jubilo
* 128.xml: Aus tiefer Not schrei ich zu dir 2
* 129.xml: Warum betrübst du dich, mein Herz
* 130.xml: Wer nur den lieben Gott läßt walten
* 131.xml: Wenn ich in Angst und Not mein' Augen heb empor
* 132.xml: Danket dem Herrn, heuf und allzeit
* 133.xml: Nicht so traurig, nicht so sehr
* 134.xml: Jesus ist mein Aufenthalt
* 135.xml: Meinen Jesum laß ich nicht, weil er sich für mich gegeben
* 136.xml: Alle Menschen müssen sterben 1
* 137.xml: Der du bist drei in Einigkeit
* 138.xml: Hilf, Herr Jesu, laß gelingen 1
* 139.xml: Herr Jesu Christ, meins Lebens Licht
* 140.xml: Wo Gott zum Haus nicht gibt sein Gunst
* 141.xml: Der Tag der ist so freudenreich
* 142.xml: Als der gütige Gott vollenden wollt sein Wort
* 143.xml: Gelobet seist du, Jesu Christ
* 144.xml: Ihr Gestirn, ihr hohen Lüfte
* 145.xml: Das alte Jahr vergangen ist
* 146.xml: Für Freuden laßt uns springen
* 147.xml: Ihr Knecht des Herren allzugleich
* 148.xml: O Lamm Gottes, unschuldig
* 149.xml: Es stehn vor Gottes Throne
* 150.xml: Du großer Schmerzensmann, vom Vater so geschlagen
* 151.xml: Heut ist, o Mensch, ein großer Trauertag
* 152.xml: Jesu, der du selbsten wohl
* 153.xml: Schaut, ihr Sünder! Ihr macht mir große Pein
* 154.xml: Sei gegrüßet, Jesu gütig
* 155.xml: O Herzensangst, o Bangigkeit und Zagen
* 156.xml: Jesus Christus, unser Heiland, der den Tod überwandt
* 157.xml: Jesus, meine Zuversicht
* 158.xml: Erstanden ist der heilig Christ
* 159.xml: Danket dem Herrn heut und allzeit
* 160.xml: Das neugeborne Kindelein
* 161.xml: Wachet auf, ruft uns die Stimme
* 162.xml: Als Jesus Christus in der Nacht
* 163.xml: Gott hat das Evangelium
* 164.xml: Nun freut euch, lieben Christen, g'mein 1
* 165.xml: Christ lag in Todesbanden
* 166.xml: Ihr lieben Christen, freut euch nun
* 167.xml: Ach Gott, erhör mein Seufzen und Wehklagen
* 168.xml: Komm, Gott Schöpfer, heiliger Geist
* 169.xml: Ach Herre Gott, mich treibt die Not
* 170.xml: Herr Jesu Christ, wahr' Mensch und Gott
* 171.xml: Herr, nun laß in Frieden
* 172.xml: Gottlob, es geht nunmehr zu Ende
* 173.xml: Was bist du doch, o Seele, so betrübet
* 174.xml: Liebster Immanuel, Herzog der Frommen
* 175.xml: Wie schön leuchtet der Morgenstern
* 176.xml: Da der Herr Christ zu Tische saß
* 177.xml: Christ ist erstanden
* 178.xml: Christus, der uns selig macht
* 179.xml: Hilf, Gott laß mirs gelingen
* 180.xml: Christus ist erstanden, hat überwunden
* 181.xml: Es sind doch selig alle, die im rechten Glauben wandeln
* 182.xml: O wir armen Sünder
* 183.xml: O Mensch, schau Jesum Christum an
* 184.xml: Wer nur den lieben Gott läßt walten
* 185.xml: Herr Gott, dich loben wir, Herr Gott, wir danken dir
* 186.xml: So gibst du nun, mein Jesu, gute Nacht
* 187.xml: Spiritus sancti gratia
* 188.xml: Am Sabbat früh Marien drei 1
* 189.xml: Dir, dir, Jehova, will ich singen
* 190.xml: Weltlich Ehr und zeitlich Gut
* 191.xml: Lob sei dir, gütiger Gott
* 192.xml: O wie selig seid ihr doch, ihr Frommen
* 193.xml: Mitten wir im Leben sind
* 194.xml: Es ist genug
* 195.xml: Herr Jesu Christ, meins Lebens Licht
* 196.xml: Herr, dein Ohren zu mir neige
* 197.xml: Ach, wie groß ist Gottes Güt und Wohltat
* 198.xml: Lasset uns den Herren preisen
* 199.xml: Herr, straf mich nicht in deinem Zorn
* 200.xml: Nun preiset alle Gottes Barmherzigkeit
* 201.xml: Ich dank dir Gott für alle Wohltat
* 202.xml: Das walt Gott Vater und Gott Sohn
* 203.xml: Gott, der du selber bist das Licht
* 204.xml: Herr Jesu Christ, du hast bereit
* 205.xml: Lobet den Herren, denn er ist sehr freundlich
* 206.xml: Vitam quae faciunt
* 207.xml: Mein Hüter und mein Hirt ist Gott der Herre
* 208.xml: Christ, der du bist der helle Tag
* 209.xml: Die Nacht ist kommen, drin wir ruhen sollen
* 210.xml: O höchster Gott, o unser lieber Herre
* 211.xml: Werde munter, mein Gemüte
* 212.xml: Gott lebet noch, Seele, was verzagst du doch?
* 213.xml: Heilig, heilig, heilig
* 214.xml: Was betrübst du dich, mein Herze
* 215.xml: Es wird schier der letzte Tag herkommen
* 216.xml: Den Vater dort oben
* 217.xml: Nun sich der Tag geendet hat
* 218.xml: Was willst du dich, o meine Seele, kränken
* 219.xml: Wie bist du, Seele, in mir so gar betrübt
* 220.xml: Jesu, du mein liebstes Leben
* 221.xml: Jesu, Jesu, du bist mein
* 222.xml: Singt dem Herrn ein neues Lied
* 223.xml: Wenn wir in höchsten Nöten sein
* 224.xml: Allein Gott in der Höh sei Ehr
* 225.xml: Ein feste Burg ist unser Gott
* 226.xml: Ich bin ja, Herr, in deiner Macht
* 227.xml: Jesu, nun sei gepreiset
* 228.xml: Ach Gott, vom Himmel sieh' darein
* 229.xml: Wie nach einer Wasserquelle
* 230.xml: Wie nach einer Wasserquelle
* 231.xml: Nun laßt uns Gott dem Herren
* 232.xml: Meine Augen schließ ich jetzt in Gottes Namen zu
* 233.xml: Gib unsern Fürsten und aller Obrigkeit (ending)
* 234.xml: Nun freut euch, lieben Christen, g'mein 2
* 235.xml: Christ lag in Todesbanden
* 236.xml: Ach Gott, vom Himmel sieh' darein
* 237.xml: Jesu, meine Freude
* 238.xml: Jesu, meines Herzens Freud'
* 239.xml: Was mein Gott will, das g'scheh allzeit
* 240.xml: Wenn mein Stündlein vorhanden ist 2
* 241.xml: Vater unser im Himmelreich
* 242.xml: Nun lob, mein Seel, den Herren
* 243.xml: Wachet doch, erwacht, ihr Schläfer
* 244.xml: Gib dich zufrieden und sei stille
* 245.xml: Ich dank dir, lieber Herre
* 246.xml: Ein feste Burg ist unser Gott
* 247.xml: O Ewigkeit, du Donnerwort
* 248.xml: O Welt, ich muß dich lassen
* 249.xml: Kommt her, ihr lieben Schwesterlein
* 250.xml: Herzlich lieb hab ich dich, o Herr
* 251.xml: Wie schön leuchtet der Morgenstern
* 252.xml: Ach Gott und Herr, wie groß und schwer
* 253.xml: Eins ist not, ach Herr, dies eine
* 254.xml: Auf meinen lieben Gott
* 255.xml: Wie nach einer Wasserquelle
* 256.xml: Jesu, meine Freude
* 257.xml: Wenn einer schon ein Haus aufbaut
* 258.xml: Wo Gott der Herr nicht bei uns hält
* 259.xml: Herzlich tut mich verlangen
* 260.xml: Herr, ich habe mißgehandelt
* 261.xml: Gelobet seist du, Jesu Christ
* 262.xml: Nun ruhen alle Wälder
* 263.xml: Es ist das Heil uns kommen her
* 264.xml: Die Wollust dieser Welt
* 265.xml: Vater unser im Himmelreich
* 266.xml: Was Gott tut, das ist wohlgetan
* 267.xml: Wenn mein Stündlein vorhanden ist 2
* 268.xml: Rex Christe factor omnium
* 269.xml: Nun lob, mein Seel, den Herren
* 270.xml: Wachet doch, erwacht, ihr Schläfer
* 271.xml: Meinen Jesum laß ich nicht
* 272.xml: Warum betrübst du dich, mein Herz
* 273.xml: Wo Gott, der Herr, nicht bei uns hält
* 274.xml: Hilf, Gott, laß mirs gelingen
* 275.xml: Herr Christ, der einge Gottes-Söhn
* 276.xml: Auf meinen lieben Gott
* 277.xml: Wie schön leuchtet der Morgenstern
* 278.xml: Es sind doch selig alle, die im rechten Glauben wandeln
* 279.xml: Christus, der uns selig macht
* 280.xml: Mach's mit mir, Gott, nach deiner Güt
* 281.xml: Jesus Christ, unser Herre
* 282.xml: Die Wollust dieser Welt
* 283.xml: Das alte Jahr vergangen ist
* 284.xml: O Gott, du frommer Gott
* 285.xml: Christus, der ist mein Leben
* 286.xml: Aus tiefer Not schrei ich zu dir 2
* 287.xml: Aus tiefer Not schrei ich zu dir 2
* 288.xml: Heilig, heilig, heilig
* 289.xml: Meine Seel erhebt den Herren
* 290.xml: Wir Christenleut
* 291.xml: Wenn mein Stündlein vorhanden ist 1
* 292.xml: Jesu, meine Freude
* 293.xml: Mit Fried und Freud ich fahr dahin
* 294.xml: Allein Gott in der Höh sei Ehr
* 295.xml: Von Gott will ich nicht lassen
* 296.xml: Es woll uns Gott genädig sein 2
* 297.xml: Ihr Knecht des Herren allzugleich
* 298.xml: Es ist das Heil uns kommen her
* 299.xml: Wo Gott, der Herr, nicht bei uns hält
* 300.xml: Jesus, meine Zuversicht
* 301.xml: Wer nur den lieben Gott läßt walten
* 302.xml: Lobet Gott, unsern Herren
* 303.xml: Ich dank dir, lieber Herre
* 304.xml: Kommt her, ihr lieben Schwesterlein
* 305.xml: Ermuntre dich, mein schwacher Geist
* 306.xml: Herzlich tut mich verlangen
* 307.xml: Meines Lebens letzte Zeit
* 308.xml: Was mein Gott will, das g'scheh allzeit
* 309.xml: Werde munter, mein Gemüte
* 310.xml: Wenn mein Stündlein vorhanden ist 1
* 311.xml: Es woll uns Gott genädig sein 2
* 312.xml: O Welt, ich muß dich lassen
* 313.xml: Jesu, meine Freude
* 314.xml: Warum sollt ich mich denn grämen
* 315.xml: Meine Seel erhebt den Herren
* 316.xml: Allein zu dir, Herr Jesu Christ
* 317.xml: Wir Christenleut
* 318.xml: Ermuntre dich, mein schwacher Geist
* 319.xml: O Welt, ich muß dich lassen
* 320.xml: Von Gott will ich nicht lassen
* 321.xml: Werde munter, mein Gemüte
* 322.xml: O Welt, ich muß dich lassen
* 323.xml: Herzlich tut mich verlangen
* 324.xml: Wachet doch, erwacht, ihr Schläfer
* 325.xml: Kommt her zu mir, spricht Gottes Söhn
* 326.xml: Christ lag in Todesbanden
| 42.544073 | 74 | 0.735372 | deu_Latn | 0.965291 |
e02c6e2f60eea13daac97299ec259e3348a5161a | 1,263 | md | Markdown | .github/ISSUE_TEMPLATE/feature-request.md | Jarskih/OpenApoc | 16f13f1d5eb5ea5639cdc0ee488d71b70c5a8cc3 | [
"MIT"
] | null | null | null | .github/ISSUE_TEMPLATE/feature-request.md | Jarskih/OpenApoc | 16f13f1d5eb5ea5639cdc0ee488d71b70c5a8cc3 | [
"MIT"
] | null | null | null | .github/ISSUE_TEMPLATE/feature-request.md | Jarskih/OpenApoc | 16f13f1d5eb5ea5639cdc0ee488d71b70c5a8cc3 | [
"MIT"
] | null | null | null | ---
name: Feature Request
about: Suggest an idea and it's implementation for OpenApoc. Ask yourself if it would
be better as a mod first though.
title: ''
labels: Feature Request
assignees: ''
---
**Is your feature request related to a problem? Please describe and reference bug reports, where possible, with # followed by the issue number.**
A clear and concise description of what the problem your proposal tries to resolve is and why you think it would benefit the game.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Would the feature be better implemented as a mod?**
Explain why you feel that this feature needs changes to the base OpenApoc code instead of using existing modder resources to Implement it. Or, if requesting a feature FOR modding, please explain why you cannot do it with the existing mod structure?
**Include any design documents**
Have a design or plan for your proposed feature? Please include all documents and references to assist implementation.
**Additional context**
Add any other context or screenshots about the feature request here.
| 45.107143 | 248 | 0.787015 | eng_Latn | 0.999756 |
e02caaad71062b7ce6eb9f78f37c310ff5ef476f | 4,518 | md | Markdown | skype/skype-ps/skype/Set-CsHybridMediationServer.md | cimares/office-docs-powershell | 180335958b76072631100a64c270a4fc843472ce | [
"CC-BY-4.0",
"MIT"
] | null | null | null | skype/skype-ps/skype/Set-CsHybridMediationServer.md | cimares/office-docs-powershell | 180335958b76072631100a64c270a4fc843472ce | [
"CC-BY-4.0",
"MIT"
] | null | null | null | skype/skype-ps/skype/Set-CsHybridMediationServer.md | cimares/office-docs-powershell | 180335958b76072631100a64c270a4fc843472ce | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
external help file:
applicable: Skype for Business Online
title: Set-CsHybridMediationServer
schema: 2.0.0
---
# Set-CsHybridMediationServer
## SYNOPSIS
Sets the external FQDN of an Edge server access proxy as the hosting provider for a user.
## SYNTAX
```
Set-CsHybridMediationServer [[-Identity] <Object>] [-AccessProxyExternalFqdn <Object>] [-Fqdn <Object>]
[-Confirm] [-Tenant <Object>] [-WhatIf] [-AsJob] [<CommonParameters>]
```
## DESCRIPTION
The `Set-CsHybridMediationServer` cmdlet provides FQDN settings for users.
Use this command to register mediation serviers from on-prem on O365 and point them to access proxy (Edge server) from on-prem.
Then the existing Skype for Business online call routing mechanism can establish the conference.
This cmdlet will set the Edge server Access Proxy external FQDN as the hosting provider of the user whose name equals to the FDQN parameter.
Users must be retrieved by their identity and they cannot already have assigned licenses.
## EXAMPLES
### -------------------------- Example 1 --------------------------
```
Set-CsHybridMediationServer -Fqdn users.fabrikam.com -AccessProxyExternalFqdn mediationserver.Contoso.com
```
This command sets the FQDN of a mediation server.
## PARAMETERS
### -AccessProxyExternalFqdn
The fully qualified domain name of the Edge server's access proxy.
```yaml
Type: Object
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Fqdn
Specifies the fully qualified domain name of the mediation server that includes the internal Active Directory domain, such as mediationserver.contoso.com
```yaml
Type: Object
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Identity
Specifies the identity of the hybrid public switched telephone network (PSTN) site.
For example: `-Identity "SeattlePSTN"`.
```yaml
Type: Object
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Confirm
The Confirm switch causes the command to pause processing and requires confirmation to proceed.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: cf
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Tenant
Specifies the globally unique identifier (GUID) of your Skype for Business Online tenant account.
For example: `-Tenant "38aad667-af54-4397-aaa7-e94c79ec2308"`.
You can find your tenant ID by running this command: `Get-CsTenant | Select-Object DisplayName, TenantID`
If you are using a remote session of Windows PowerShell and are connected only to Skype for Business Online, you do not have to include the Tenant parameter.
The tenant ID will be determined by your connection and credentials.
The Tenant parameter is primarily for use in a hybrid deployment.
```yaml
Type: Object
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -WhatIf
The WhatIf switch causes the command to simulate its results.
By using this switch, you can view what changes would occur without having to commit those changes.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: wi
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -AsJob
{{Fill AsJob Description}}
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases:
Applicable: Skype for Business Online
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
## OUTPUTS
## NOTES
## RELATED LINKS
[Get-CsHybridMediationServer](Get-CsHybridMediationServer.md)
| 26.115607 | 314 | 0.772466 | eng_Latn | 0.858663 |
e02ceba0fe67eb2a015d8a684810d042f2c9c19c | 1,209 | md | Markdown | wiki/translations/en/Sketcher_CompCreateRectangles.md | arslogavitabrevis/FreeCAD-documentation | 8166ce0fd906f0e15672cd1d52b3eb6cf9e17830 | [
"CC0-1.0"
] | null | null | null | wiki/translations/en/Sketcher_CompCreateRectangles.md | arslogavitabrevis/FreeCAD-documentation | 8166ce0fd906f0e15672cd1d52b3eb6cf9e17830 | [
"CC0-1.0"
] | null | null | null | wiki/translations/en/Sketcher_CompCreateRectangles.md | arslogavitabrevis/FreeCAD-documentation | 8166ce0fd906f0e15672cd1d52b3eb6cf9e17830 | [
"CC0-1.0"
] | null | null | null | ---
- GuiCommand:
Name:Sketcher_CompCreateRectangles
Icon:Sketcher_CompCreateRectangles.png
MenuLocation:None (toolbar only)
Workbenches:[Sketcher](Sketcher_Workbench.md)
Version:0.20
---
# Sketcher CompCreateRectangles/en
## Description
**Create a rectangle** is an icon button in the Sketcher geometries toolbar that groups tools to create rectangles. Click on the down arrow to its right to expand the icons below it and select a tool.
## Types of rectangles
- <img alt="" src=images/Sketcher_CreateRectangle.svg style="width:32px;"> [Rectangle](Sketcher_CreateRectangle.md): Draws a rectangle from 2 opposite points.
- <img alt="" src=images/Sketcher_CreateRectangle_Center.svg style="width:32px;"> [Centered Rectangle](Sketcher_CreateRectangle_Center.md): Draws a rectangle from a central point and an edge point. <small>(v0.20)</small>
- <img alt="" src=images/Sketcher_CreateOblong.svg style="width:32px;"> [Rounded Rectangle](Sketcher_CreateOblong.md): Draws a rounded rectangle from 2 opposite points. <small>(v0.20)</small>
{{Sketcher Tools navi
}}
---
[documentation index](../README.md) > [Sketcher](Sketcher_Workbench.md) > Sketcher CompCreateRectangles/en
| 37.78125 | 224 | 0.759305 | eng_Latn | 0.481359 |
e02d59b8f3608700e274ee14a7adcb2761d811db | 1,118 | md | Markdown | appliances/dynamo-local-admin/README.md | balazs-szogi-instructure/dockerfiles | 0992fadccf85b106a3797d5727ab7b4cff5358e4 | [
"MIT"
] | 48 | 2016-02-09T22:04:02.000Z | 2021-11-20T15:33:48.000Z | appliances/dynamo-local-admin/README.md | balazs-szogi-instructure/dockerfiles | 0992fadccf85b106a3797d5727ab7b4cff5358e4 | [
"MIT"
] | 11 | 2017-03-27T09:26:48.000Z | 2022-02-15T12:48:21.000Z | appliances/dynamo-local-admin/README.md | balazs-szogi-instructure/dockerfiles | 0992fadccf85b106a3797d5727ab7b4cff5358e4 | [
"MIT"
] | 44 | 2016-02-10T16:29:22.000Z | 2022-02-15T12:46:35.000Z | # DynamoDB Local Admin
This project builds a Docker image with [DynamoDB Local][dbd] and
[dynamodb-admin][dbd-admin] installed and hooked up.
[dbd]: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html
[dbd-admin]: https://github.com/aaronshaf/dynamodb-admin
## Usage
```
docker pull instructure/dynamo-local-admin
docker run -p 8000:8000 -it --rm instructure/dynamo-local-admin
```
Now open http://localhost:8000/ in your browser, and you'll see the admin UI.
You can also hit port 8000 with Dynamo API requests:
```
AWS_ACCESS_KEY_ID=key AWS_SECRET_ACCESS_KEY=secret aws --region us-east-1 dynamodb --endpoint http://localhost:8000/ list-tables
```
A proxy is included which differentiates between Dynamo requests and web
requests, and proxies to the appropriate server.
## Why?
It's just very convenient if you do a lot of Dynamo development.
## Dinghy Users
If you're using Dinghy and its automatic proxy, `VIRTUAL_HOST` and
`VIRTUAL_PORT` are already set, so you don't have to specify any ports in the
`docker run` and can just open http://dynamo.docker/ in your browser.
| 31.942857 | 128 | 0.767442 | eng_Latn | 0.936707 |
e02dd356cb52ce3d41b46d0827dbc4b0e4f72a24 | 9,150 | md | Markdown | dotnet/WPF/VisualLayerIntegration/README.md | bgrainger/Windows.UI.Composition-Win32-Samples | 9dadcf50b4c0e712f170541d5f1ee2d068f32c0d | [
"MIT"
] | 263 | 2019-05-07T13:27:11.000Z | 2022-03-25T03:19:33.000Z | dotnet/WPF/VisualLayerIntegration/README.md | akon47/Windows.UI.Composition-Win32-Samples | 80eec88412a3ed85bdfbba101737c9b941619a32 | [
"MIT"
] | 66 | 2019-07-06T08:13:56.000Z | 2022-03-25T21:08:45.000Z | dotnet/WPF/VisualLayerIntegration/README.md | akon47/Windows.UI.Composition-Win32-Samples | 80eec88412a3ed85bdfbba101737c9b941619a32 | [
"MIT"
] | 142 | 2019-05-09T09:48:21.000Z | 2022-03-25T08:28:51.000Z | # WPF Visual Layer Integration sample
An example app user interface (UI) that demonstrates the use of the Universal Windows Platform (UWP) [Visual Layer](https://docs.microsoft.com/windows/uwp/composition/visual-layer) APIs ([Windows.UI.Composition](https://docs.microsoft.com/uwp/api/windows.ui.composition)) in a Windows Presentation Foundation (WPF) app.
The Visual Layer APIs provide a high performance, retained-mode API for graphics, effects, and animations. It's the recommended replacement for DirectComposition in apps that run on Windows 10.
This sample demonstrates how to use these APIs to enhance an existing WPF app UI. We use UWP hosting APIs to add a bar graph to a WPF app and show a visual representation of data selected in a DataGrid.

For an introduction to hosting Visual Layer APIs in a WPF app, see the **Using the Visual Layer with WPF** [tutorial](https://docs.microsoft.com/windows/uwp/composition/using-the-visual-layer-with-wpf) and [sample](https://github.com/Microsoft/Windows.UI.Composition-Win32-Samples/tree/master/dotnet/WPF/HelloComposition). This sample builds on the code introduced there.
## Features
This sample includes the following features:
- A WPF host class that implements HwndHost.
- Use of PInvoke and COM interop to access additional platform APIs.
- A custom graph class and associated helpers.
- Use of Composition APIs and animations to enhance look & feel - including Implicit animation, Lights, and Shapes.
- Text rendering using SharpDX
## Run the sample
This sample requires:
- Visual Studio 2017 or later - [Get a free copy of Visual Studio](http://go.microsoft.com/fwlink/?LinkID=280676)
- .NET Framework 4.7.2 or later
- Windows 10 version 1803 or later
- Windows 10 SDK 17134 or later
## Code at a glance
### Composition features
The code that makes use of the Visual Layer is located in the _BarGraphUtilities_ folder. Once the interop between the UWP Visual Layer and the desktop app is set up, using the Visual Layer in WPF is much like using it in UWP. We demonstrate several common composition features to enhance the UI, and the code to implement the bar graph is essentially the same code you would use in a UWP app.
#### Visuals and shapes
[ShapeVisuals](https://docs.microsoft.com/uwp/api/windows.ui.composition.shapevisual) are easily integrated into the hosted visual tree.
See the _CreateBar_ method in **Bar.cs** for an example of code that creates a shape subtree and attaches it to the main visual tree.
```csharp
// Define shape visual for bar outline.
shapeOutlineVisual = compositor.CreateShapeVisual();
...
// Create geometry and shape for the bar outline.
rectOutlineGeometry = compositor.CreateRectangleGeometry();
var barOutlineVisual = compositor.CreateSpriteShape(rectOutlineGeometry);
...
shapeOutlineVisual.Shapes.Add(barOutlineVisual);
...
```
#### Brushes
[CompositionBrushes](https://docs.microsoft.com/uwp/api/windows.ui.composition.compositionbrush) are applied to hosted Visuals to define how they are painted.
In this sample, the **BarBrushHelper.cs** class is used to create these composition brushes, which are then applied to the Visual.
> NOTE : See _Limitations_ at the end of this readme for more information about effect brushes.
#### Animations
Animations are a key feature of the Visual Layer APIs. Animations can be applied to hosted Visual content.
In this sample we demonstrate creating an implicit animation for the bar graphs. The animation is triggered when a row is selected in the WPF data grid.
When the bars are first created in Bar.cs , an implicit animation is attached to the bar rectangle geometry:
```csharp
{
...
// Add implicit animation to bar.
var implicitAnimations = compositor.CreateImplicitAnimationCollection();
// Trigger animation when the size property changes.
implicitAnimations["Size"] = CreateAnimation();
rectGeometry.ImplicitAnimations = implicitAnimations;
rectOutlineGeometry.ImplicitAnimations = implicitAnimations;
...
}
Vector2KeyFrameAnimation CreateAnimation()
{
var animation = compositor.CreateVector2KeyFrameAnimation();
animation.InsertExpressionKeyFrame(0f, "this.StartingValue");
animation.InsertExpressionKeyFrame(1f, "this.FinalValue");
animation.Target = "Size";
animation.Duration = TimeSpan.FromSeconds(1);
return animation;
}
```
In the same file, setting the Height property updates the bar geometry Size and triggers the implicit animation. (The Height property is updated when a user selects a row in the DataGrid, which updates the graph with new data.)
#### Lights
[CompositionLights](https://docs.microsoft.com/uwp/api/windows.ui.composition.compositionlight) are used to apply lighting to the content of the hosted visual tree. To light an element, you add it to the Targets collection of the composition light.
See the _AddLight_ method in **BarGraph.cs** for an example of code that creates lights. The sample creates several different kinds of light: [AmbientLight](https://docs.microsoft.com/uwp/api/windows.ui.composition.ambientlight), [PointLight](https://docs.microsoft.com/uwp/api/windows.ui.composition.pointlight), and [SpotLight](https://docs.microsoft.com/uwp/api/windows.ui.composition.spotlight).
```csharp
ambientLight = compositor.CreateAmbientLight();
ambientLight.Color = Colors.White;
ambientLight.Targets.Add(mainContainer);
...
```
### Pointer input
You can respond to pointer input with some additional code to pass the pointer events through to your hosted Visual content.
In the **BarGraphHostControl.xaml.cs** file, we handle the _MouseMoved_ event (*HostControl_MouseMoved*) and pass the pointer position to the bar graph.
However, before the pointer location is passed to the bar graph, it needs to be converted from physical pixels to Device Independent Pixels (DIPs) that account for DPI settings (_GetPointInDIP_), and then adjusted to be relative to the host control.
In the bar graph, we use the pointer position to update the X and Y coordinates of the PointLight and SpotLight. (See the _UpdateLight_ method in **BarGraph.cs**.)
### Resizing for DPI changes and airspace
When two different UI technologies are used together, such as WPF and the Visual Layer, they are each responsible for drawing their own pixels on the screen, and they can't share pixels. As a result, Visual Layer content is always rendered on top of WPF content. (This is known as the _airspace_ issue.) In addition, content hosted in a WPF app doesn't automatically resize or scale for DPI. To keep the bar graph from being rendered on top of the DataGrid and other WPF content, some extra steps are required to redraw the bar graph at the appropriate size when the size of the host control changes, or when the DPI changes.
First, all size calculations in **BarGraph** and **BarGraphHostControl** take the current DPI into account.
Then, we override the [OnDpiChanged](https://docs.microsoft.com/dotnet/api/system.windows.media.visual.ondpichanged) method and handle the [SizeChanged](https://docs.microsoft.com/dotnet/api/system.windows.frameworkelement.sizechanged) event for the BarGraphHostControl to manage these situations. In both cases, the size and DPI settings are passed to the BarGraph's _UpdateSize_ method, which calculates new sizes for the bars and text in the graph.
For more information about responding to DPI changes, see [Using the Visual Layer with WPF](https://docs.microsoft.com/windows/uwp/composition/using-the-visual-layer-with-wpf) and the [Per Monitor DPI Developer Guide and samples](https://github.com/Microsoft/WPF-Samples/tree/master/PerMonitorDPI).
## Limitations
While many Visual Layer features work the same when hosted in a WPF app as they do in a UWP app, some features do have limitations. Here are some of the limitations to be aware of:
- Effect chains rely on [Win2D](http://microsoft.github.io/Win2D/html/Introduction.htm) for the effect descriptions. The [Win2D NuGet package](https://www.nuget.org/packages/Win2D.uwp) is not supported in desktop apps, so you would need to recompile it from the [source code](https://github.com/Microsoft/Win2D). The sample does not demonstrate this.
- To do hit testing, you need to do bounds calculations by walking the visual tree yourself. This is the same as the Visual Layer in UWP, except in this case there's no XAML element you can easily bind to for hit testing.
- The Visual Layer does not have a primitive for rendering text. This sample uses the SharpDX NuGet package to render text to a surface.
## See also
We've covered a small subset of Windows Composition features that can be easily integrated into your existing WPF app. There are still many others, such as shadows, more animation types, perspective transforms, and so forth. For an overview of other Composition features and the benefits they can bring to your applications, see the [Visual Layer documentation](https://docs.microsoft.com/windows/uwp/composition/visual-layer).
API reference: [Windows.UI.Composition](https://docs.microsoft.com/uwp/api/windows.ui.composition) | 64.43662 | 626 | 0.785464 | eng_Latn | 0.977533 |
e02e307dfe372bd659545be046fa5e1d742dbdd6 | 290 | md | Markdown | README.md | JustinHattrup/anonymousPosts | b27d31dc5db48b5e9b3933b731c6c8870750fb89 | [
"MIT"
] | null | null | null | README.md | JustinHattrup/anonymousPosts | b27d31dc5db48b5e9b3933b731c6c8870750fb89 | [
"MIT"
] | 1 | 2021-05-10T07:17:20.000Z | 2021-05-10T07:17:20.000Z | README.md | JustinHattrup/anonymousPosts | b27d31dc5db48b5e9b3933b731c6c8870750fb89 | [
"MIT"
] | null | null | null | # anonymousPosts
This is an app using a Node + mongoDB + GraphQL + Apollo + React stack. </br>
It can add and delete posts
(the logic to update a post is available in the backend I just didnt implement it in the frontend)
Deployed on heroku: https://murmuring-hollows-33435.herokuapp.com/
| 41.428571 | 98 | 0.762069 | eng_Latn | 0.990865 |
e02e46e29b27a68a74fe3b14672c806415b33abc | 107 | md | Markdown | doc/notes.md | xuejiai/webui | b65eaaaacbdca063705714edd566d4495fad7690 | [
"MIT"
] | null | null | null | doc/notes.md | xuejiai/webui | b65eaaaacbdca063705714edd566d4495fad7690 | [
"MIT"
] | null | null | null | doc/notes.md | xuejiai/webui | b65eaaaacbdca063705714edd566d4495fad7690 | [
"MIT"
] | null | null | null | ## 坑
- husky 无响应,git commit 时没有执行 hook,可尝试删除 node_moduls,单独安装 cnpm i husky -D,成功时会在在.git\hooks 目录下生成相关文件 | 35.666667 | 99 | 0.757009 | kor_Hang | 0.188035 |
e02e827c54f3087b53bcc4d6c5dd365791c99d8f | 1,547 | md | Markdown | docs/technology.md | montoyamoraga/drawing-machine | 75bf2d5e03bf4b5fb075d7b516502e30b02eedb8 | [
"MIT"
] | 3 | 2020-11-11T06:32:34.000Z | 2021-05-18T23:51:14.000Z | docs/technology.md | montoyamoraga/open-drawing-machine | 75bf2d5e03bf4b5fb075d7b516502e30b02eedb8 | [
"MIT"
] | null | null | null | docs/technology.md | montoyamoraga/open-drawing-machine | 75bf2d5e03bf4b5fb075d7b516502e30b02eedb8 | [
"MIT"
] | null | null | null | # Open Drawing Machine
## Technology
This project is built on the shoulders of the following projects:
* [Arduino Uno](https://en.wikipedia.org/wiki/Arduino_Uno): "The Arduino Uno is an open-source microcontroller board based on the Microchip ATmega328P microcontroller and developed by Arduino.cc."
* [G-code](https://en.wikipedia.org/wiki/G-code): "G-code (also RS-274), which has many variants, is the common name for the most widely used computer numerical control (CNC) programming language. It is used mainly in computer-aided manufacturing to control automated machine tools."
* [Grbl](https://github.com/grbl/grbl): "Grbl is a no-compromise, high performance, low cost alternative to parallel-port-based motion control for CNC milling. It will run on a vanilla Arduino (Duemillanove/Uno) as long as it sports an Atmega 328."
## Dependencies
As of September 2020, we are using the following dependencies:
* [https://github.com/arnabdasbwn/grbl-coreXY-servo](https://github.com/arnabdasbwn/grbl-coreXY-servo): this repository is a fork of vanilla grbl, for 2 motors for XY movement, and a servo for Z movement.
The Open Drawing machine can be controlled with the [openFrameworks](https://openframeworks.cc/) addon ofxOpenDrawingMachine, available at [https://github.com/montoyamoraga/ofxOpenDrawingMachine](https://github.com/montoyamoraga/ofxOpenDrawingMachine)
## Software Configuration
On the G-code sender, you should program these:
* $100 = 80, steps/mm for x axis
* $101 = 80, steps/mm for y axis
| 57.296296 | 284 | 0.763413 | eng_Latn | 0.971582 |
e02f15e57b16b4e637e7326579bf4a8ec0568a2c | 10,803 | md | Markdown | powerapps-docs/guidance/fusion-dev-ebook/04-using-dataverse-as-data-source.md | mattruma/powerapps-docs | 6fe922145366c7dbba22a00cd560a8402622b995 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/guidance/fusion-dev-ebook/04-using-dataverse-as-data-source.md | mattruma/powerapps-docs | 6fe922145366c7dbba22a00cd560a8402622b995 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/guidance/fusion-dev-ebook/04-using-dataverse-as-data-source.md | mattruma/powerapps-docs | 6fe922145366c7dbba22a00cd560a8402622b995 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "4: Using Microsoft Dataverse as the data source | Microsoft Docs"
description: "Learn about the benefits of using Microsoft Dataverse as the data source."
author: spboyer
ms.topic: conceptual
ms.custom: ebook
ms.date: 05/07/2021
ms.subservice: guidance
ms.author: shboyer
ms.reviewer: kvivek
---
# Chapter 4: Using Microsoft Dataverse as the data source
Maria has built a prototype app by using test data held in Excel workbooks. She can now consider how to connect the app to data sources that will provide real-world data. She has heard about Microsoft Dataverse as an option for doing this, and wants to know more about it.
## What is Dataverse?
Dataverse is a data store with a set of standard tables. You can use it to store business information, manage business rules, and define business dataflows. In many ways it acts like a database, except that it holds more than just data. You can use it to record elements of business logic for your solutions, and share this logic across apps. Dataverse includes scheduling capabilities that enable you to automate processing and workflows. Additionally, you can add charts and associate them with your data; Power Apps can reference these charts directly from Dataverse. More information: [What is Dataverse?](/powerapps/maker/data-platform/data-platform-intro) in Power Apps docs
Dataverse follows the "low-code" approach of Power Apps, enabling a business user to create business entities and workflows. Additionally, Dataverse is a scalable, reliable, and secure system, implemented in Azure. Role-based access control limits the type of access to different users in your organization; users can only see or manipulate the entities to which they've been granted access.
> [!NOTE]
> The definitions of applications and users in Power Apps are also stored in Dataverse. Power Apps uses this information for creating, editing, and publishing apps.
Dataverse enables you to unify data held in disparate databases into a single repository. You can create dataflows that periodically ingest data held in one or more databases into the tables in Dataverse to create aggregated datasets. More information: [Why choose Dataverse?](../../maker/data-platform/why-dataverse-overview.md)

## Defining entities and relationships in Dataverse
Dataverse contains a collection of open-sourced, standardized, extensible data entities and relationships that Microsoft and its partners have published in the industry-wide Open Data Initiative. The data for these entities is stored in a set of tables. Dataverse defines entities for many common business objects, such as Account, Address, Contact, Organization, Team, and User. You can view the tables in Dataverse on the **Tables** tab under **Data** in [Power Apps](https://make.powerapps.com). You can add your own custom tables to Dataverse if necessary, but it's a good practice to use existing tables wherever possible. This will help to ensure the portability of your apps. Tables that are part of the default Dataverse have a Type designated as Standard, but the Type of your own tables will be marked as Custom.

In Dataverse, each entity is tabular, with a default set of columns that are also defined by the Open Data Initiative. You can view the definition of a table by using the **Edit** command for that entity in the list of tables. You can extend a table by using your own columns, but—as noted earlier—it's a good practice to use existing columns wherever possible. The following example shows the default definition of the Account table.
> [!NOTE]
> You can modify the display name of tables and columns without changing their names. Power Apps uses the display names as the default labels that appear on forms.

Dataverse supports a rich set of data types for columns, ranging from simple text and numeric values to abstractions that have specified formatting constraints, such as **Email**, **URL**, **Phone**, and **Ticker Symbol**. You can use other types, such as **Choice** and **Lookup**, to restrict the values entered in a column to a fixed domain or data retrieved from a column in a related table. Use the **File** and **Image** types to store unstructured data and images in a table. Images have a maximum size of 30 MB, but files can be as large as 128 MB.
> [!NOTE]
> You can define your own custom choices for use by **Choice** columns in Power Apps.
You can also define relationships among tables. These relationships can be *many-to-one*, *one-to-many*, or *many-to-many*. In addition, you specify the behavior of the related entities as part of the relationship. The behavior can be:
- **Referential**, with or without restricted delete. Restricted delete prevents a row in a related table from being removed if it's referenced by another row in the same, or a different, table.
- **Parental**, in which any action performed on a row is also applied to any rows that it references.
- **Custom**, which enables you to specify how referenced rows are affected by an action performed on the referencing row.
The following example shows how to add a one-to-many relationship from the Account table to a custom table named SalesLT Customer. The behavior prevents a customer from being deleted if it's referenced by a row in the Account table.

## Adding views and business rules
A view provides access to specified columns and rows in one or more related tables. You can think of a view as being a query, but with a name that allows you to treat it as a table. A view contains selected columns from a table but can include columns from related tables. Additionally, a view can filter rows to only show rows that match specified criteria. You can also stipulate the default sort order for the rows presented by a view. Note that a view provides a dynamic window onto the underlying data; if the data changes in the tables behind a view, so does the information represented by the view. You can display data through views in model-driven apps. The following image shows the view designer. The user is adding a new column to a view based on the Account table.

You use business rules to define validations and automate the flow of control when data is added, modified, or deleted in an entity. A business rule comprises a condition that can test for certain conditions in the affected entity, such as whether the data in a column matches or breaks a given rule. The business rules designer in Power Apps Studio provides a graphical user interface for defining business rules, as shown in the the following image.

The business rules designer supports the following actions:
- Set column values.
- Clear column values.
- Set column requirement levels.
- Show or hide columns (for model-driven apps only).
- Enable or disable columns (for model-driven apps only).
- Validate data and show error messages.
- Create business recommendations based on business intelligence (for model-driven apps only).
> [!NOTE]
> [Business rules](/powerapps/maker/data-platform/data-platform-create-business-rule) are best suited to model-driven apps. Not all business rule actions are supported by canvas apps.
## Defining business activities
There are two fundamental types of table in Dataverse: *Standard* tables (including custom tables), which contain data, and *Activity* tables, which represent business actions and workflows that can be scheduled to run by Dataverse. An activity table contains references to the data entities involved in the activity (such as customers or salespeople), a series of states through which the activity can progress, its current state, and other information used by Dataverse to schedule operations when appropriate.
Dataverse contains built-in activities for managing meetings, scheduling business processes, marketing, managing the sales process, creating recurring appointments, and handling customer service incidents. More information: [Activity tables](../../developer/data-platform/activity-entities.md)
You implement the actual business logic by using custom actions, or your own code if you require additional control that's not directly available in Power Apps. The details of this process are beyond the scope of this guide, but for more information, go to [Create a custom action](../../maker/data-platform/create-actions.md).
## Adding graphical display elements
In addition to storing the data structure and logic associated with a business entity, Dataverse can also store layouts for forms, charts, and dashboards associated with an entity. When you create a model-driven app, you can use these forms for data entry and display, while the charts and dashboards enable a user to visualize the data more easily than by looking at basic data values.

## Maria's decision to use Dataverse
Dataverse is an excellent choice of repository for many situations. You should seriously consider it for Power Apps development based on new systems and services and adding new functionality to existing applications, especially if you're creating model-driven apps.
However, in the application that Maria is building, the data already exists in a legacy database. A web API exists that connects to that database to retrieve and modify data and it's deployed in Azure App Service. Those legacy solutions are proven to work and Kiana and her high-code development team are very comfortable supporting those solutions going forward.
An advantage of fusion development teams and Dataverse is fusion development teams allow members to be their most productive in tools that they already know and are most comfortable with. A team does not need to migrate their existing data to Dataverse immediately to build an app using Power Apps. Likewise, when a team is building an application that requires new data, Dataverse makes a ton of sense as an option. It is not uncommon to see an app built using Power Apps to use a combination of legacy data sources and data in Dataverse.
When Maria starts to add new functionality to her app, for example having the field technicians add customer visit notes, she expects to use Dataverse to store that data.
So, for the time being Maria will connect the web API Kiana's team has already developed to her app to obtain the data needed. The following chapters will walk through that process.
> [!div class="step-by-step"]
> [Previous](03-building-low-code-prototype.md)
> [Next](05-creating-publishing-web-api-in-azure.md)
| 97.324324 | 822 | 0.793483 | eng_Latn | 0.999207 |
e02f59f4a2fecd3378949b6c7d80f6e5c56db244 | 60,613 | md | Markdown | docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md | janlopera/docs.es-es | d206a93d99877d3b154a259e327580c14a4841f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md | janlopera/docs.es-es | d206a93d99877d3b154a259e327580c14a4841f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-application-layer-implementation-web-api.md | janlopera/docs.es-es | d206a93d99877d3b154a259e327580c14a4841f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Implementación del nivel de aplicación de microservicios mediante la API web
description: Comprenda los procesos de inserción de dependencias y los patrones de mediador y sus detalles de implementación en la capa de aplicación de la API web.
ms.date: 01/30/2020
ms.openlocfilehash: 76562d87b09a18e4a4ecb7625a2e823bc1ccff78
ms.sourcegitcommit: e3cbf26d67f7e9286c7108a2752804050762d02d
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 04/09/2020
ms.locfileid: "80988471"
---
# <a name="implement-the-microservice-application-layer-using-the-web-api"></a>Implementación del nivel de aplicación de microservicios mediante la API web
## <a name="use-dependency-injection-to-inject-infrastructure-objects-into-your-application-layer"></a>Uso de la inserción de dependencias para insertar objetos de la infraestructura en el nivel de aplicación
Como se ha mencionado anteriormente, el nivel de aplicación se puede implementar como parte del artefacto (ensamblado) que se está creando, por ejemplo, dentro de un proyecto de API web o de aplicación web MVC. En el caso de un microservicio compilado con ASP.NET Core, el nivel de aplicación normalmente será la biblioteca de API web. Si quiere separar lo que proviene de ASP.NET Core (su infraestructura y los controladores) del código de nivel de aplicación personalizado, también puede colocar el nivel de aplicación en una biblioteca de clases independiente, pero es algo opcional.
Por ejemplo, el código de nivel de aplicación del microservicio de pedidos se implementa directamente como parte del proyecto **Ordering.API** (un proyecto de API web de ASP.NET Core), como se muestra en la figura 7-23.
:::image type="complex" source="./media/microservice-application-layer-implementation-web-api/ordering-api-microservice.png" alt-text="Captura de pantalla del microservicio Ordering.API en el Explorador de soluciones.":::
Vista del Explorador de soluciones del microservicio Ordering.API que muestra las subcarpetas de la carpeta Application: Behaviors, Commands, DomainEventHandlers, IntegrationEvents, Models, Queries y Validations.
:::image-end:::
**Figura 7-23.** Nivel de aplicación en el proyecto de API web de ASP.NET Core Ordering.API
En ASP.NET Core se incluye un simple [contenedor de IoC integrado](https://docs.microsoft.com/aspnet/core/fundamentals/dependency-injection) (representado por la interfaz IServiceProvider) que admite la inserción de constructores de forma predeterminada, y ASP.NET hace que determinados servicios estén disponibles a través de DI. En ASP.NET Core se usa el término *servicio* para cualquiera de los tipos que se registran para la inserción mediante DI. Los servicios del contenedor integrado se configuran en el método ConfigureServices de la clase Startup de la aplicación. Las dependencias se implementan en los servicios que un tipo necesita y que se registran en el contenedor IoC.
Normalmente, le interesará insertar dependencias que implementen objetos de infraestructura. Una dependencia muy habitual para insertar es un repositorio. Pero también podría insertar cualquier otra dependencia de infraestructura que pueda tener. Para las implementaciones más sencillas, también podría insertar directamente el objeto de patrón de unidades de trabajo (el objeto DbContext de EF), porque DBContext también es la implementación de los objetos de persistencia de infraestructura.
En el ejemplo siguiente, puede ver cómo .NET Core inserta los objetos de repositorio necesarios a través del constructor. La clase es un controlador de comandos, que se explica en la sección siguiente.
```csharp
public class CreateOrderCommandHandler
: IAsyncRequestHandler<CreateOrderCommand, bool>
{
private readonly IOrderRepository _orderRepository;
private readonly IIdentityService _identityService;
private readonly IMediator _mediator;
// Using DI to inject infrastructure persistence Repositories
public CreateOrderCommandHandler(IMediator mediator,
IOrderRepository orderRepository,
IIdentityService identityService)
{
_orderRepository = orderRepository ??
throw new ArgumentNullException(nameof(orderRepository));
_identityService = identityService ??
throw new ArgumentNullException(nameof(identityService));
_mediator = mediator ??
throw new ArgumentNullException(nameof(mediator));
}
public async Task<bool> Handle(CreateOrderCommand message)
{
// Create the Order AggregateRoot
// Add child entities and value objects through the Order aggregate root
// methods and constructor so validations, invariants, and business logic
// make sure that consistency is preserved across the whole aggregate
var address = new Address(message.Street, message.City, message.State,
message.Country, message.ZipCode);
var order = new Order(message.UserId, address, message.CardTypeId,
message.CardNumber, message.CardSecurityNumber,
message.CardHolderName, message.CardExpiration);
foreach (var item in message.OrderItems)
{
order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice,
item.Discount, item.PictureUrl, item.Units);
}
_orderRepository.Add(order);
return await _orderRepository.UnitOfWork
.SaveEntitiesAsync();
}
}
```
En la clase se usan los repositorios insertados para ejecutar la transacción y conservar los cambios de estado. No importa si esa clase es un controlador de comandos, un método de controlador de API web de ASP.NET Core, o un [servicio de aplicación DDD](https://lostechies.com/jimmybogard/2008/08/21/services-in-domain-driven-design/). En última instancia, es una clase simple que usa repositorios, entidades de dominio y otra coordinación de aplicaciones de forma similar a un controlador de comandos. La inserción de dependencias funciona igual en todas las clases mencionadas, como en el ejemplo de uso de DI según el constructor.
### <a name="register-the-dependency-implementation-types-and-interfaces-or-abstractions"></a>Registro de los tipos de implementación de dependencias e interfaces o abstracciones
Antes de usar los objetos insertados mediante constructores, debe saber dónde registrar las interfaces y clases que generan los objetos que se insertan en las clases de aplicación a través de DI. (Como la inserción de dependencias basada en el constructor, tal y como se mostró anteriormente).
#### <a name="use-the-built-in-ioc-container-provided-by-aspnet-core"></a>Uso del contenedor de IoC integrado proporcionado por ASP.NET Core
Cuando use el contenedor de IoC integrado proporcionado por ASP.NET Core, registre los tipos que quiera insertar en el método ConfigureServices del archivo Startup.cs, como se muestra en el código siguiente:
```csharp
// Registration of types into ASP.NET Core built-in container
public void ConfigureServices(IServiceCollection services)
{
// Register out-of-the-box framework services.
services.AddDbContext<CatalogContext>(c =>
c.UseSqlServer(Configuration["ConnectionString"]),
ServiceLifetime.Scoped);
services.AddMvc();
// Register custom application dependencies.
services.AddScoped<IMyCustomRepository, MyCustomSQLRepository>();
}
```
El modelo más común al registrar los tipos en un contenedor de IoC es registrar un par de tipos: una interfaz y su clase de implementación relacionada. Después, cuando se solicita un objeto del contenedor de IoC a través de cualquier constructor, se solicita un objeto de un tipo de interfaz determinado. En el ejemplo anterior, la última línea indica que, cuando cualquiera de los constructores tiene una dependencia de IMyCustomRepository (interfaz o abstracción), el contenedor de IoC insertará una instancia de la clase de implementación MyCustomSQLServerRepository.
#### <a name="use-the-scrutor-library-for-automatic-types-registration"></a>Uso de la biblioteca Scrutor para el registro de tipos automático
Al usar DI en .NET Core, es posible que le interese poder examinar un ensamblado y registrar sus tipos de manera automática por convención. Actualmente, esta característica no está disponible en ASP.NET Core, pero puede usar la biblioteca [Scrutor](https://github.com/khellang/Scrutor) para hacerlo. Este enfoque resulta conveniente cuando existen docenas de tipos que deben registrarse en el contenedor de IoC.
#### <a name="additional-resources"></a>Recursos adicionales
- **Matthew King. Registering services with Scrutor** (Registro de servicios con Scrutor) \
<https://www.mking.net/blog/registering-services-with-scrutor>
- **Kristian Hellang. Scrutor.** Repositorio de GitHub. \
<https://github.com/khellang/Scrutor>
#### <a name="use-autofac-as-an-ioc-container"></a>Uso de Autofac como un contenedor de IoC
También se pueden usar contenedores de IoC adicionales y conectarlos a la canalización de ASP.NET Core, como se muestra en el microservicio de pedidos en eShopOnContainers, donde se usa [Autofac](https://autofac.org/). Cuando se usa Autofac normalmente los tipos se registran a través de módulos, lo que permite dividir los tipos de registro entre varios archivos, en función de dónde se encuentren los tipos, al igual que los tipos de aplicaciones podrían estar distribuidos entre varias bibliotecas de clases.
Por ejemplo, el siguiente es el [módulo de aplicación de Autofac](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/src/Services/Ordering/Ordering.API/Infrastructure/AutofacModules/ApplicationModule.cs) para el proyecto de [API web Ordering.API](https://github.com/dotnet-architecture/eShopOnContainers/tree/master/src/Services/Ordering/Ordering.API) con los tipos que se quieren insertar.
```csharp
public class ApplicationModule : Autofac.Module
{
public string QueriesConnectionString { get; }
public ApplicationModule(string qconstr)
{
QueriesConnectionString = qconstr;
}
protected override void Load(ContainerBuilder builder)
{
builder.Register(c => new OrderQueries(QueriesConnectionString))
.As<IOrderQueries>()
.InstancePerLifetimeScope();
builder.RegisterType<BuyerRepository>()
.As<IBuyerRepository>()
.InstancePerLifetimeScope();
builder.RegisterType<OrderRepository>()
.As<IOrderRepository>()
.InstancePerLifetimeScope();
builder.RegisterType<RequestManager>()
.As<IRequestManager>()
.InstancePerLifetimeScope();
}
}
```
Autofac también tiene una característica para [analizar ensamblados y registrar tipos por convenciones de nombre](https://autofac.readthedocs.io/en/latest/register/scanning.html).
El proceso de registro y los conceptos son muy similares a la manera en que se pueden registrar tipos con el contenedor integrado de IoC de ASP.NET Core, pero cuando se usa Autofac la sintaxis es un poco diferente.
En el código de ejemplo, la abstracción IOrderRepository se registra junto con la clase de implementación OrderRepository. Esto significa que cada vez que un constructor declare una dependencia a través de la abstracción o la interfaz IOrderRepository, el contenedor de IoC insertará una instancia de la clase OrderRepository.
El tipo de ámbito de la instancia determina cómo se comparte una instancia entre las solicitudes del mismo servicio o dependencia. Cuando se realiza una solicitud de una dependencia, el contenedor de IoC puede devolver lo siguiente:
- Una sola instancia por ámbito de duración (denominada *con ámbito* en el contenedor de IoC de ASP.NET Core).
- Una nueva instancia por dependencia (denominada *transitoria* en el contenedor de IoC de ASP.NET Core).
- Una única instancia que se comparte entre todos los objetos que usan el contenedor de IoC (denominada *singleton* en el contenedor de IoC de ASP.NET Core).
#### <a name="additional-resources"></a>Recursos adicionales
- **Introduction to Dependency Injection in ASP.NET Core** (Introducción a la inserción de dependencias en ASP.NET Core) \
[https://docs.microsoft.com/aspnet/core/fundamentals/dependency-injection](/aspnet/core/fundamentals/dependency-injection)
- **Autofac.** Documentación oficial. \
<https://docs.autofac.org/en/latest/>
- **Comparing ASP.NET Core IoC container service lifetimes with Autofac IoC container instance scopes** (Comparación de las duraciones de servicio del contenedor IoC de ASP.NET Core con ámbitos de instancia de contenedor Autofac IoC) - Cesar de la Torre. \
<https://devblogs.microsoft.com/cesardelatorre/comparing-asp-net-core-ioc-service-life-times-and-autofac-ioc-instance-scopes/>
## <a name="implement-the-command-and-command-handler-patterns"></a>Implementación de los patrones de comando y controlador de comandos
En el ejemplo de DI a través del constructor mostrado en la sección anterior, el contenedor de IoC insertaba repositorios a través de un constructor en una clase. ¿Pero exactamente dónde se insertaban? En una API web simple (por ejemplo, el microservicio de catálogo de eShopOnContainers), se insertan en el nivel de controladores de MVC, en un constructor de controlador, como parte de la canalización de solicitud de ASP.NET Core. Pero en el código inicial de esta sección (la clase [CreateOrderCommandHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/src/Services/Ordering/Ordering.API/Application/Commands/CreateOrderCommandHandler.cs) del servicio Ordering.API en eShopOnContainers), la inserción de dependencias se realiza a través del constructor de un determinado controlador de comandos. Vamos a explicar qué es un controlador de comandos y por qué le interesaría usarlo.
El patrón de comandos está intrínsecamente relacionado con el patrón CQRS que se presentó anteriormente en esta guía. CQRS tiene dos lados. La primera área son las consultas, mediante consultas simplificadas con el micro-ORM [Dapper](https://github.com/StackExchange/dapper-dot-net), que se explicó anteriormente. La segunda área son los comandos, el punto inicial para las transacciones y el canal de entrada desde el exterior del servicio.
Como se muestra en la figura 7-24, el patrón se basa en la aceptación de comandos del lado cliente, su procesamiento según las reglas del modelo de dominio y, por último, la conservación de los estados con transacciones.

**Figura 7-24.** Vista general de los comandos o el "lado transaccional" en un patrón CQRS
En la figura 7-24 se muestra que la aplicación de interfaz de usuario envía un comando a través de la API que llega a un elemento `CommandHandler`, que depende del modelo de dominio y de la infraestructura para actualizar la base de datos.
### <a name="the-command-class"></a>La clase de comando
Un comando es una solicitud para que el sistema realice una acción que cambia el estado del sistema. Los comandos son imperativos y se deben procesar una sola vez.
Como los comandos son imperativos, normalmente se denominan con un verbo en modo imperativo (por ejemplo, "create" o "update"), y es posible que incluyan el tipo agregado, como CreateOrderCommand. A diferencia de un evento, un comando no es un hecho del pasado; es solo una solicitud y, por tanto, se puede denegar.
Los comandos se pueden originar desde la interfaz de usuario como resultado de un usuario que inicia una solicitud, o desde un administrador de procesos cuando está dirigiendo un agregado para realizar una acción.
Una característica importante de un comando es que debe procesarse una sola vez por un único receptor. Esto se debe a que un comando es una única acción o transacción que se quiere realizar en la aplicación. Por ejemplo, el mismo comando de creación de pedidos no se debe procesar más de una vez. Se trata de una diferencia importante entre los comandos y los eventos. Los eventos se pueden procesar varias veces, dado que es posible que muchos sistemas o microservicios estén interesados en el evento.
Además, es importante que un comando solo se procese una vez en caso de que no sea idempotente. Un comando es idempotente si se puede ejecutar varias veces sin cambiar el resultado, ya sea debido a la naturaleza del comando, o bien al modo en que el sistema lo controla.
Un procedimiento recomendado consiste en hacer que los comandos y las actualizaciones sean idempotentes cuando tenga sentido según las reglas de negocio y los elementos invariables del dominio. Para usar el mismo ejemplo, si por algún motivo (lógica de reintento, piratería, etc.) el mismo comando CreateOrder llega varias veces al sistema, debería poder identificarlo y asegurarse de que no se crean varios pedidos. Para ello, debe adjuntar algún tipo de identidad en las operaciones e identificar si el comando o la actualización ya se ha procesado.
Un comando se envía a un único receptor; no se publica. La publicación es para los eventos que notifican un hecho: que ha sucedido algo y que podría ser interesante para los receptores de eventos. En el caso de los eventos, al publicador no le interesa qué receptores obtienen el evento o las acciones que realizan. Pero los eventos de integración o de dominio son diferentes y ya se presentaron en secciones anteriores.
Un comando se implementa con una clase que contiene campos de datos o colecciones con toda la información necesaria para ejecutar ese comando. Un comando es un tipo especial de objeto de transferencia de datos (DTO), que se usa específicamente para solicitar cambios o transacciones. El propio comando se basa en la información exacta que se necesita para procesar el comando y nada más.
En el siguiente ejemplo se muestra la clase `CreateOrderCommand` simplificada. Se trata de un comando inmutable que se usa en el microservicio de pedidos de eShopOnContainers.
```csharp
// DDD and CQRS patterns comment
// Note that we recommend that you implement immutable commands
// In this case, immutability is achieved by having all the setters as private
// plus being able to update the data just once, when creating the object
// through the constructor.
// References on immutable commands:
// http://cqrs.nu/Faq
// https://docs.spine3.org/motivation/immutability.html
// http://blog.gauffin.org/2012/06/griffin-container-introducing-command-support/
// https://docs.microsoft.com/dotnet/csharp/programming-guide/classes-and-structs/how-to-implement-a-lightweight-class-with-auto-implemented-properties
[DataContract]
public class CreateOrderCommand
:IAsyncRequest<bool>
{
[DataMember]
private readonly List<OrderItemDTO> _orderItems;
[DataMember]
public string City { get; private set; }
[DataMember]
public string Street { get; private set; }
[DataMember]
public string State { get; private set; }
[DataMember]
public string Country { get; private set; }
[DataMember]
public string ZipCode { get; private set; }
[DataMember]
public string CardNumber { get; private set; }
[DataMember]
public string CardHolderName { get; private set; }
[DataMember]
public DateTime CardExpiration { get; private set; }
[DataMember]
public string CardSecurityNumber { get; private set; }
[DataMember]
public int CardTypeId { get; private set; }
[DataMember]
public IEnumerable<OrderItemDTO> OrderItems => _orderItems;
public CreateOrderCommand()
{
_orderItems = new List<OrderItemDTO>();
}
public CreateOrderCommand(List<BasketItem> basketItems, string city,
string street,
string state, string country, string zipcode,
string cardNumber, string cardHolderName, DateTime cardExpiration,
string cardSecurityNumber, int cardTypeId) : this()
{
_orderItems = MapToOrderItems(basketItems);
City = city;
Street = street;
State = state;
Country = country;
ZipCode = zipcode;
CardNumber = cardNumber;
CardHolderName = cardHolderName;
CardSecurityNumber = cardSecurityNumber;
CardTypeId = cardTypeId;
CardExpiration = cardExpiration;
}
public class OrderItemDTO
{
public int ProductId { get; set; }
public string ProductName { get; set; }
public decimal UnitPrice { get; set; }
public decimal Discount { get; set; }
public int Units { get; set; }
public string PictureUrl { get; set; }
}
}
```
Básicamente, la clase de comando contiene todos los datos que se necesitan para llevar a cabo una transacción empresarial mediante los objetos de modelo de dominio. Por tanto, los comandos son simplemente las estructuras de datos que contienen datos de solo lectura y ningún comportamiento. El nombre del comando indica su propósito. En muchos lenguajes como C#, los comandos se representan como clases, pero no son verdaderas clases en el sentido real orientado a objetos.
Como una característica adicional, los comandos son inmutables, dado que el uso esperado es que el modelo de dominio los procese directamente. No deben cambiar durante su duración prevista. En una clase de C#, se puede lograr la inmutabilidad si no hay establecedores ni otros métodos que cambien el estado interno.
Tenga en cuenta que si quiere o espera que los comandos pasen por un proceso de serialización o deserialización, las propiedades deben tener un establecedor privado y el atributo `[DataMember]` (o `[JsonProperty]`). De lo contrario, el deserializador no podrá reconstruir el objeto en el destino con los valores necesarios. También puede usar propiedades que realmente sean de solo lectura si la clase tiene un constructor con parámetros para todas las propiedades, con la convención de nomenclatura de camelCase habitual, y anotar el constructor como `[JsonConstructor]`. Sin embargo, esta opción requiere más código.
Por ejemplo, la clase de comando para crear un pedido probablemente sea similar en cuanto a los datos del pedido que se quiere crear, pero es probable que no se necesiten los mismos atributos. Por ejemplo, `CreateOrderCommand` no tiene un identificador de pedido, porque el pedido aún no se ha creado.
Muchas clases de comando pueden ser simples y requerir solo unos cuantos campos sobre algún estado que deba cambiarse. Ese sería el caso si solo se va a cambiar el estado de un pedido de "en proceso" a "pagado" o "enviado" con un comando similar al siguiente:
```csharp
[DataContract]
public class UpdateOrderStatusCommand
:IAsyncRequest<bool>
{
[DataMember]
public string Status { get; private set; }
[DataMember]
public string OrderId { get; private set; }
[DataMember]
public string BuyerIdentityGuid { get; private set; }
}
```
Algunos desarrolladores separan los objetos de solicitud de interfaz de usuario de los DTO de comando, pero es solo una cuestión de preferencia. Es una separación tediosa sin demasiado valor añadido y los objetos tienen prácticamente la misma forma. Por ejemplo, en eShopOnContainers, algunos comandos proceden directamente del lado cliente.
### <a name="the-command-handler-class"></a>Clase de controlador de comandos
Debe implementar una clase de controlador de comandos específica para cada comando. Ese es el funcionamiento del patrón y el lugar en el que se usarán el objeto de comando, los objetos de dominio y los objetos de repositorio de infraestructura. De hecho, el controlador de comandos es el núcleo del nivel de aplicación en lo que a CQRS y DDD respecta. Sin embargo, toda la lógica del dominio debe incluirse en las clases de dominio, dentro de las raíces agregadas (entidades raíz), las entidades secundarias o [los servicios de dominio](https://lostechies.com/jimmybogard/2008/08/21/services-in-domain-driven-design/), pero no en el controlador de comandos, que es una clase del nivel de aplicación.
La clase de controlador de comandos ofrece un punto de partida seguro en la forma de lograr el principio de responsabilidad única (SRP) mencionado en una sección anterior.
Un controlador de comandos recibe un comando y obtiene un resultado del agregado que se usa. El resultado debe ser la ejecución correcta del comando, o bien una excepción. En el caso de una excepción, el estado del sistema no debe cambiar.
Normalmente, el controlador de comandos realiza estos pasos:
- Recibe el objeto de comando, como un DTO (desde el [mediador](https://en.wikipedia.org/wiki/Mediator_pattern) u otro objeto de infraestructura).
- Valida que el comando sea válido (si no lo hace el mediador).
- Crea una instancia de la instancia de raíz agregada que es el destino del comando actual.
- Ejecuta el método en la instancia de raíz agregada y obtiene los datos necesarios del comando.
- Conserva el nuevo estado del agregado en su base de datos relacionada. Esta última operación es la transacción real.
Normalmente, un controlador de comandos administra un único agregado controlado por su raíz agregada (la entidad raíz). Si varios agregados deben verse afectados por la recepción de un único comando, podría usar eventos de dominio para propagar los estados o las acciones entre varios agregados.
El aspecto importante aquí es que cuando se procesa un comando, toda la lógica del dominio debe incluirse en el modelo de dominio (los agregados), completamente encapsulada y lista para las pruebas unitarias. El controlador de comandos solo actúa como una manera de obtener el modelo de dominio de la base de datos y, como último paso, para indicar al nivel de infraestructura (los repositorios) que conserve los cambios cuando el modelo cambie. La ventaja de este enfoque es que se puede refactorizar la lógica del dominio en un modelo de dominio de comportamiento aislado, completamente encapsulado y enriquecido sin cambiar el código del nivel de aplicación o infraestructura, que forman el nivel de establecimiento (controladores de comandos, la API web, repositorios, etc.).
Cuando los controladores de comandos se complican, con demasiada lógica, se puede producir un problema en el código. Revíselos y, si encuentra lógica de dominio, refactorice el código para mover ese comportamiento de dominio a los métodos de los objetos de dominio (la raíz agregada y la entidad secundaria).
Como ejemplo de clase de controlador de comandos, en el código siguiente se muestra la misma clase `CreateOrderCommandHandler` que se vio al principio de este capítulo. En este caso, se pretende resaltar el método Handle y las operaciones con los objetos de modelo de dominio y agregados.
```csharp
public class CreateOrderCommandHandler
: IAsyncRequestHandler<CreateOrderCommand, bool>
{
private readonly IOrderRepository _orderRepository;
private readonly IIdentityService _identityService;
private readonly IMediator _mediator;
// Using DI to inject infrastructure persistence Repositories
public CreateOrderCommandHandler(IMediator mediator,
IOrderRepository orderRepository,
IIdentityService identityService)
{
_orderRepository = orderRepository ??
throw new ArgumentNullException(nameof(orderRepository));
_identityService = identityService ??
throw new ArgumentNullException(nameof(identityService));
_mediator = mediator ??
throw new ArgumentNullException(nameof(mediator));
}
public async Task<bool> Handle(CreateOrderCommand message)
{
// Create the Order AggregateRoot
// Add child entities and value objects through the Order aggregate root
// methods and constructor so validations, invariants, and business logic
// make sure that consistency is preserved across the whole aggregate
var address = new Address(message.Street, message.City, message.State,
message.Country, message.ZipCode);
var order = new Order(message.UserId, address, message.CardTypeId,
message.CardNumber, message.CardSecurityNumber,
message.CardHolderName, message.CardExpiration);
foreach (var item in message.OrderItems)
{
order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice,
item.Discount, item.PictureUrl, item.Units);
}
_orderRepository.Add(order);
return await _orderRepository.UnitOfWork
.SaveEntitiesAsync();
}
}
```
Estos son los pasos adicionales que debe realizar un controlador de comandos:
- Usar los datos del comando para funcionar con los métodos y el comportamiento de la raíz agregada.
- Dentro de los objetos de dominio, generar eventos de dominio mientras se ejecuta la transacción, pero de forma transparente desde el punto de vista de un controlador de comandos.
- Si el resultado de la operación del agregado es correcto y una vez finalizada la transacción, generar eventos de integración. (Es posible que clases de infraestructura como repositorios también los generen).
#### <a name="additional-resources"></a>Recursos adicionales
- **Mark Seemann. At the Boundaries, Applications are Not Object-Oriented (En los límites, las aplicaciones no están orientadas a objetos)** \
<https://blog.ploeh.dk/2011/05/31/AttheBoundaries,ApplicationsareNotObject-Oriented/>
- **Commands and events (Comandos y eventos)** \
<https://cqrs.nu/Faq/commands-and-events>
- **What does a command handler do?** (¿De qué se encarga un controlador de comandos?) \
<https://cqrs.nu/Faq/command-handlers>
- **Jimmy Bogard. Domain Command Patterns – Handlers (Patrones de comandos de dominio: controladores)** \
<https://jimmybogard.com/domain-command-patterns-handlers/>
- **Jimmy Bogard. Domain Command Patterns – Validation (Patrones de comandos de dominio: validación)** \
<https://jimmybogard.com/domain-command-patterns-validation/>
## <a name="the-command-process-pipeline-how-to-trigger-a-command-handler"></a>La canalización del proceso de comando: cómo desencadenar un controlador de comandos
La siguiente pregunta es cómo invocar un controlador de comandos. Se podría llamar manualmente desde cada controlador de ASP.NET Core relacionado. Pero ese enfoque sería demasiado acoplado y no es lo ideal.
Las otras dos opciones principales, que son las recomendadas, son estas:
- A través de un artefacto de patrón de mediador en memoria.
- Con una cola de mensajes asincrónicos, entre los controladores.
### <a name="use-the-mediator-pattern-in-memory-in-the-command-pipeline"></a>Uso del patrón de mediador (en memoria) en la canalización de comandos
Como se muestra en la figura 7-25, en un enfoque CQRS se usa un mediador inteligente, similar a un bus en memoria, que es lo suficientemente inteligente como para redirigir al controlador de comandos correcto según el tipo del comando o DTO que se recibe. Las flechas simples de color negro entre los componentes representan las dependencias entre los objetos (en muchos casos, insertados mediante DI) con sus interacciones relacionadas.

**Figura 7-25.** Uso del patrón de mediador en proceso en un único microservicio CQRS
En el diagrama anterior se muestra más detalle de la imagen 7-24: el controlador ASP.NET Core envía el comando a la canalización de comandos MediatR para que llegue al controlador adecuado.
El motivo por el que tiene sentido usar el patrón de mediador es que, en las aplicaciones empresariales, las solicitudes de procesamiento pueden resultar complicadas. Le interesa poder agregar un número abierto de cuestiones transversales como registro, validaciones, auditoría y seguridad. En estos casos, puede basarse en una canalización de mediador (vea [Patrón de mediador](https://en.wikipedia.org/wiki/Mediator_pattern)) para proporcionar un medio para estos comportamientos adicionales o cuestiones transversales.
Un mediador es un objeto que encapsula el "cómo" de este proceso: coordina la ejecución en función del estado, la forma de invocar un controlador de comandos o la carga que se proporciona al controlador. Con un componente de mediador se pueden aplicar cuestiones transversales de forma centralizada y transparente aplicando elementos Decorator (o [comportamientos de canalización](https://github.com/jbogard/MediatR/wiki/Behaviors) desde [MediatR 3](https://www.nuget.org/packages/MediatR/3.0.0)). Para obtener más información, vea el [Patrón de Decorator](https://en.wikipedia.org/wiki/Decorator_pattern).
Los elementos Decorator y los comportamientos son similares a la [Programación orientada a aspectos (AOP)](https://en.wikipedia.org/wiki/Aspect-oriented_programming), solo se aplican a una canalización de proceso específica administrada por el componente de mediador. Los aspectos en AOP que implementan cuestiones transversales se aplican en función de *tejedores de aspectos* que se insertan en tiempo de compilación o en función de la intercepción de llamadas de objeto. En ocasiones, se dice que ambos enfoques típicos de AOP funcionan "de forma mágica", porque no es fácil ver cómo realiza AOP su trabajo. Cuando se trabaja con problemas graves o errores, AOP puede ser difícil de depurar. Por otro lado, estos elementos Decorator o comportamientos son explícitos y solo se aplican en el contexto del mediador, por lo que la depuración es mucho más sencilla y predecible.
Por ejemplo, en el microservicio de pedidos de eShopOnContainers, se implementaron dos comportamientos de ejemplo, las clases [LogBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/LoggingBehavior.cs) y [ValidatorBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/ValidatorBehavior.cs). En la siguiente sección se explica la implementación de los comportamientos y se muestra cómo eShopOnContainers usa los [comportamientos](https://github.com/jbogard/MediatR/wiki/Behaviors) de [MediatR 3](https://www.nuget.org/packages/MediatR/3.0.0).
### <a name="use-message-queues-out-of-proc-in-the-commands-pipeline"></a>Uso de colas de mensajes (fuera de proceso) en la canalización del comando
Otra opción consiste en usar mensajes asincrónicos basados en agentes o colas de mensajes, como se muestra en la figura 7-26. Esa opción también se podría combinar con el componente de mediador justo antes del controlador de comandos.

**Figura 7-26.** Uso de colas de mensajes (comunicación fuera de proceso y entre procesos) con comandos CQRS
La canalización del comando también puede controlarse mediante una cola de mensajes de alta disponibilidad para entregar los comandos en el controlador adecuado. El uso de colas de mensajes para aceptar los comandos puede complicar más la canalización del comando, ya que probablemente sea necesario dividir la canalización en dos procesos conectados a través de la cola de mensajes externos. Pero se debe usar si hay que ofrecer mayor escalabilidad y rendimiento según la mensajería asincrónica. Téngalo en cuenta en el caso de la figura 7-26, donde el controlador simplemente envía el mensaje de comando a la cola y vuelve. Después, los controladores de comandos procesan los mensajes a su propio ritmo. Esa es una gran ventaja de las colas: la cola de mensajes puede actuar como un búfer en casos en que se necesita hiperescalabilidad (por ejemplo, para existencias o cualquier otro escenario con un gran volumen de datos de entrada).
En cambio, debido a la naturaleza asincrónica de las colas de mensajes, debe saber cómo comunicar a la aplicación cliente si el proceso del comando se ha realizado correctamente o no. Como norma, nunca debería usar comandos "Fire and Forget" (dispare y olvídese). Cada aplicación empresarial necesita saber si un comando se ha procesado correctamente, o al menos se ha validado y aceptado.
De este modo, la capacidad de responder al cliente después de validar un mensaje de comando que se envió a una cola asincrónica agrega complejidad al sistema, en comparación con un proceso de comando en proceso que devuelve el resultado de la operación después de ejecutar la transacción. Mediante las colas, es posible que tenga que devolver el resultado del proceso de comando a través de otros mensajes de resultado de la operación, lo que requiere componentes adicionales y comunicación personalizada en el sistema.
Además, los comandos asincrónicos son unidireccionales, lo que es posible que en muchos casos no sea necesario, tal y como se explica en el siguiente e interesante intercambio entre Burtsev Alexey y Greg Young en una [conversación en línea](https://groups.google.com/forum/#!msg/dddcqrs/xhJHVxDx2pM/WP9qP8ifYCwJ):
> \[Burtsev Alexey\] Veo gran cantidad de código en la que los usuarios usan el control de comandos asincrónicos o la mensajería de comandos unidireccionales sin ningún motivo para hacerlo (no están realizando una operación extensa, no ejecutan código asincrónico externo, ni siquiera cruzan los límites de la aplicación para usar bus de mensajes). ¿Por qué agregan esta complejidad innecesaria? Y en realidad, hasta ahora no he visto ningún ejemplo de código CQRS con controladores de comandos de bloqueo, aunque funcionaría correctamente en la mayoría de los casos.
>
> \[Greg Young\] \[...\] un comando asincrónico no existe; en realidad es otro evento. Si tengo que aceptar lo que se me envía y generar un evento si no estoy de acuerdo, ya no se me está pidiendo que realice una acción \[es decir, no es un comando\]. Se me está diciendo que se ha realizado algo. Al principio puede parecer una pequeña diferencia, pero tiene muchas implicaciones.
Los comandos asincrónicos aumentan considerablemente la complejidad de un sistema, porque no hay ninguna manera sencilla de indicar los errores. Por tanto, los comandos asincrónicos no son recomendables a no ser que se necesiten requisitos de escalado o en casos especiales de comunicación de microservicios internos a través de mensajería. En esos casos, se debe diseñar un sistema independiente de informes y recuperación de errores del sistema.
En la versión inicial de eShopOnContainers, decidimos usar el procesamiento de comandos sincrónicos, iniciados desde solicitudes HTTP y controlados por el patrón de mediador. Eso permite devolver con facilidad si el proceso se ha realizado correctamente o no, como en la implementación [CreateOrderCommandHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/master/src/Services/Ordering/Ordering.API/Application/Commands/CreateOrderCommandHandler.cs).
En cualquier caso, debe ser una decisión basada en los requisitos empresariales de la aplicación o el microservicio.
## <a name="implement-the-command-process-pipeline-with-a-mediator-pattern-mediatr"></a>Implementación de la canalización del proceso de comando con un patrón de mediador (MediatR)
Como implementación de ejemplo, en esta guía se propone el uso de la canalización de proceso basada en el patrón de mediador para controlar la ingesta de comandos y enrutarlos, en memoria, a los controladores de comandos correctos. En la guía también se propone la aplicación de [comportamientos](https://github.com/jbogard/MediatR/wiki/Behaviors) para separar las cuestiones transversales.
Para la implementación en .NET Core, hay varias bibliotecas de código abierto disponibles que implementan el patrón de mediador. En esta guía se usa la biblioteca de código abierto [MediatR](https://github.com/jbogard/MediatR) (creada por Jimmy Bogard), pero puede usar otro enfoque. MediatR es una biblioteca pequeña y simple que permite procesar mensajes en memoria como un comando, mientras se aplican elementos Decorator o comportamientos.
El uso del patrón de mediador ayuda a reducir el acoplamiento y aislar los problemas del trabajo solicitado, mientras se conecta automáticamente al controlador que lleva a cabo ese trabajo, en este caso, a controladores de comandos.
En la revisión de esta guía, Jimmy Bogard explica otra buena razón para usar el patrón de mediador:
> Creo que aquí valdría la pena mencionar las pruebas: proporcionan una ventana coherente al comportamiento del sistema. Solicitud de entrada, respuesta de salida. Hemos comprobado que es un aspecto muy valioso a la hora de generar pruebas que se comporten de forma coherente.
En primer lugar, veremos un controlador WebAPI de ejemplo donde se usaría realmente el objeto de mediador. Si no se usara el objeto de mediador, sería necesario insertar todas las dependencias para ese controlador, elementos como un objeto de registrador y otros. Por tanto, el constructor sería bastante complicado. Por otra parte, si se usa el objeto de mediador, el constructor del controlador puede ser mucho más sencillo, con solo algunas dependencias en lugar de muchas si hubiera una por cada operación transversal, como en el ejemplo siguiente:
```csharp
public class MyMicroserviceController : Controller
{
public MyMicroserviceController(IMediator mediator,
IMyMicroserviceQueries microserviceQueries)
{
// ...
}
}
```
Se puede ver que el mediador proporciona un constructor de controlador de API web limpio y eficiente. Además, dentro de los métodos de controlador, el código para enviar un comando al objeto de mediador es prácticamente una línea:
```csharp
[Route("new")]
[HttpPost]
public async Task<IActionResult> ExecuteBusinessOperation([FromBody]RunOpCommand
runOperationCommand)
{
var commandResult = await _mediator.SendAsync(runOperationCommand);
return commandResult ? (IActionResult)Ok() : (IActionResult)BadRequest();
}
```
### <a name="implement-idempotent-commands"></a>Implementación de comandos idempotentes
En **eShopOnContainers**, un ejemplo más avanzado que el anterior es el envío de un objeto CreateOrderCommand desde el microservicio Ordering. Pero como el proceso empresarial Ordering es un poco más complejo y, en nuestro caso, se inicia realmente en el microservicio Basket, esta acción de enviar el objeto CreateOrderCommand se realiza desde un controlador de eventos de integración denominado [UserCheckoutAcceptedIntegrationEventHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/IntegrationEvents/EventHandling/UserCheckoutAcceptedIntegrationEventHandler.cs), en lugar de un controlador WebAPI sencillo al que se llama desde la aplicación cliente, como ocurre en el ejemplo anterior más sencillo.
Pero la acción de enviar el comando a MediatR es bastante similar, como se muestra en el código siguiente.
```csharp
var createOrderCommand = new CreateOrderCommand(eventMsg.Basket.Items,
eventMsg.UserId, eventMsg.City,
eventMsg.Street, eventMsg.State,
eventMsg.Country, eventMsg.ZipCode,
eventMsg.CardNumber,
eventMsg.CardHolderName,
eventMsg.CardExpiration,
eventMsg.CardSecurityNumber,
eventMsg.CardTypeId);
var requestCreateOrder = new IdentifiedCommand<CreateOrderCommand,bool>(createOrderCommand,
eventMsg.RequestId);
result = await _mediator.Send(requestCreateOrder);
```
Pero este caso también es un poco más avanzado porque también se implementan comandos idempotentes. El proceso CreateOrderCommand debe ser idempotente, por lo que si el mismo mensaje procede duplicado a través de la red, por cualquier motivo, como un reintento, el mismo pedido se procesará una sola vez.
Esto se implementa mediante la encapsulación del comando de negocio (en este caso CreateOrderCommand) y su inserción en un IdentifiedCommand genérico del que se realiza el seguimiento con un identificador de todos los mensajes que lleguen a través de la red que tienen que ser idempotentes.
En el código siguiente, puede ver que el IdentifiedCommand no es más que un DTO con un identificador junto con el objeto de comando de negocio insertado.
```csharp
public class IdentifiedCommand<T, R> : IRequest<R>
where T : IRequest<R>
{
public T Command { get; }
public Guid Id { get; }
public IdentifiedCommand(T command, Guid id)
{
Command = command;
Id = id;
}
}
```
Después, el CommandHandler para el IdentifiedCommand denominado [IdentifiedCommandHandler.cs](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Commands/IdentifiedCommandHandler.cs) básicamente comprobará si el identificador que procede como parte del mensaje ya existe en una tabla. Si ya existe, ese comando no se volverá a procesar, por lo que se comporta como un comando idempotente. Ese código de infraestructura se ejecuta mediante la llamada al método `_requestManager.ExistAsync` siguiente.
```csharp
// IdentifiedCommandHandler.cs
public class IdentifiedCommandHandler<T, R> :
IAsyncRequestHandler<IdentifiedCommand<T, R>, R>
where T : IRequest<R>
{
private readonly IMediator _mediator;
private readonly IRequestManager _requestManager;
public IdentifiedCommandHandler(IMediator mediator,
IRequestManager requestManager)
{
_mediator = mediator;
_requestManager = requestManager;
}
protected virtual R CreateResultForDuplicateRequest()
{
return default(R);
}
public async Task<R> Handle(IdentifiedCommand<T, R> message)
{
var alreadyExists = await _requestManager.ExistAsync(message.Id);
if (alreadyExists)
{
return CreateResultForDuplicateRequest();
}
else
{
await _requestManager.CreateRequestForCommandAsync<T>(message.Id);
// Send the embedded business command to mediator
// so it runs its related CommandHandler
var result = await _mediator.Send(message.Command);
return result;
}
}
}
```
Dado que IdentifiedCommand actúa como sobre de un comando de negocios, cuando el comando de negocios se debe procesar porque no es un identificador repetido, toma ese comando de negocios interno y lo vuelve a enviar al mediador, como se muestra en la última parte del código anterior al ejecutar `_mediator.Send(message.Command)` desde [IdentifiedCommandHandler.cs](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Commands/IdentifiedCommandHandler.cs).
Al hacerlo, se vincula y ejecuta el controlador de comandos de negocios, en este caso, [CreateOrderCommandHandler](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Commands/CreateOrderCommandHandler.cs) que ejecuta transacciones en la base de datos Ordering, como se muestra en el código siguiente.
```csharp
// CreateOrderCommandHandler.cs
public class CreateOrderCommandHandler
: IAsyncRequestHandler<CreateOrderCommand, bool>
{
private readonly IOrderRepository _orderRepository;
private readonly IIdentityService _identityService;
private readonly IMediator _mediator;
// Using DI to inject infrastructure persistence Repositories
public CreateOrderCommandHandler(IMediator mediator,
IOrderRepository orderRepository,
IIdentityService identityService)
{
_orderRepository = orderRepository ??
throw new ArgumentNullException(nameof(orderRepository));
_identityService = identityService ??
throw new ArgumentNullException(nameof(identityService));
_mediator = mediator ??
throw new ArgumentNullException(nameof(mediator));
}
public async Task<bool> Handle(CreateOrderCommand message)
{
// Add/Update the Buyer AggregateRoot
var address = new Address(message.Street, message.City, message.State,
message.Country, message.ZipCode);
var order = new Order(message.UserId, address, message.CardTypeId,
message.CardNumber, message.CardSecurityNumber,
message.CardHolderName, message.CardExpiration);
foreach (var item in message.OrderItems)
{
order.AddOrderItem(item.ProductId, item.ProductName, item.UnitPrice,
item.Discount, item.PictureUrl, item.Units);
}
_orderRepository.Add(order);
return await _orderRepository.UnitOfWork
.SaveEntitiesAsync();
}
}
```
### <a name="register-the-types-used-by-mediatr"></a>Registro de los tipos usados por MediatR
Para que MediatR sea consciente de las clases de controlador de comandos, debe registrar las clases de mediador y las de controlador de comandos en el contenedor de IoC. De forma predeterminada, MediatR usa Autofac como el contenedor de IoC, pero también se puede usar el contenedor de IoC integrado de ASP.NET Core o cualquier otro contenedor compatible con MediatR.
En el código siguiente se muestra cómo registrar los tipos y comandos del mediador al usar módulos de Autofac.
```csharp
public class MediatorModule : Autofac.Module
{
protected override void Load(ContainerBuilder builder)
{
builder.RegisterAssemblyTypes(typeof(IMediator).GetTypeInfo().Assembly)
.AsImplementedInterfaces();
// Register all the Command classes (they implement IAsyncRequestHandler)
// in assembly holding the Commands
builder.RegisterAssemblyTypes(
typeof(CreateOrderCommand).GetTypeInfo().Assembly).
AsClosedTypesOf(typeof(IAsyncRequestHandler<,>));
// Other types registration
//...
}
}
```
Aquí es donde "ocurre la magia" con MediatR.
Como cada controlador de comandos implementa la interfaz genérica `IAsyncRequestHandler<T>`, al registrar los ensamblados, el código registra con `RegisteredAssemblyTypes` todos los tipos marcados como `IAsyncRequestHandler` mientras relaciona los `CommandHandlers` con sus `Commands`, gracias a la relación indicada en la clase `CommandHandler`, como en el siguiente ejemplo:
```csharp
public class CreateOrderCommandHandler
: IAsyncRequestHandler<CreateOrderCommand, bool>
{
```
Ese es el código que pone en correlación los comandos y los controladores de comandos. El controlador es simplemente una clase, pero hereda de `RequestHandler<T>`, donde T es el tipo de comando, y MediatR se asegura de que se invoque con la carga correcta (el comando).
## <a name="apply-cross-cutting-concerns-when-processing-commands-with-the-behaviors-in-mediatr"></a>Aplicación de cuestiones transversales al procesar comandos con los comportamientos de MediatR
Hay otro aspecto: la capacidad de aplicar cuestiones transversales a la canalización de mediador. También puede ver al final del código del módulo de registro de Autofac cómo registra un tipo de comportamiento, en concreto una clase LoggingBehavior personalizada y una clase ValidatorBehavior. Pero también se podrían agregar otros comportamientos personalizados.
```csharp
public class MediatorModule : Autofac.Module
{
protected override void Load(ContainerBuilder builder)
{
builder.RegisterAssemblyTypes(typeof(IMediator).GetTypeInfo().Assembly)
.AsImplementedInterfaces();
// Register all the Command classes (they implement IAsyncRequestHandler)
// in assembly holding the Commands
builder.RegisterAssemblyTypes(
typeof(CreateOrderCommand).GetTypeInfo().Assembly).
AsClosedTypesOf(typeof(IAsyncRequestHandler<,>));
// Other types registration
//...
builder.RegisterGeneric(typeof(LoggingBehavior<,>)).
As(typeof(IPipelineBehavior<,>));
builder.RegisterGeneric(typeof(ValidatorBehavior<,>)).
As(typeof(IPipelineBehavior<,>));
}
}
```
Esa clase [LoggingBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/LoggingBehavior.cs) se puede implementar como el código siguiente, que registra información sobre el controlador de comandos que se está ejecutando y si se ha realizado correctamente o no.
```csharp
public class LoggingBehavior<TRequest, TResponse>
: IPipelineBehavior<TRequest, TResponse>
{
private readonly ILogger<LoggingBehavior<TRequest, TResponse>> _logger;
public LoggingBehavior(ILogger<LoggingBehavior<TRequest, TResponse>> logger) =>
_logger = logger;
public async Task<TResponse> Handle(TRequest request,
RequestHandlerDelegate<TResponse> next)
{
_logger.LogInformation($"Handling {typeof(TRequest).Name}");
var response = await next();
_logger.LogInformation($"Handled {typeof(TResponse).Name}");
return response;
}
}
```
Con la simple implementación de esta clase de comportamiento y su registro en la canalización (en el MediatorModule anterior), todos los comandos que se procesan a través de MediatR registrarán información sobre la ejecución.
El microservicio de pedidos de eShopOnContainers también aplica un segundo comportamiento para validaciones básicas, la clase [ValidatorBehavior](https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Ordering/Ordering.API/Application/Behaviors/ValidatorBehavior.cs) que se basa en la biblioteca [FluentValidation](https://github.com/JeremySkinner/FluentValidation), como se muestra en el código siguiente:
```csharp
public class ValidatorBehavior<TRequest, TResponse>
: IPipelineBehavior<TRequest, TResponse>
{
private readonly IValidator<TRequest>[] _validators;
public ValidatorBehavior(IValidator<TRequest>[] validators) =>
_validators = validators;
public async Task<TResponse> Handle(TRequest request,
RequestHandlerDelegate<TResponse> next)
{
var failures = _validators
.Select(v => v.Validate(request))
.SelectMany(result => result.Errors)
.Where(error => error != null)
.ToList();
if (failures.Any())
{
throw new OrderingDomainException(
$"Command Validation Errors for type {typeof(TRequest).Name}",
new ValidationException("Validation exception", failures));
}
var response = await next();
return response;
}
}
```
El comportamiento aquí está generando una excepción si se produce un error de validación, pero también podría devolver un objeto de resultado, que contiene el resultado del comando si se realiza correctamente o la validación de mensajes en caso de que no lo hiciese. Esto probablemente facilitaría mostrar los resultados de validación al usuario.
Después, en función de la biblioteca [FluentValidation](https://github.com/JeremySkinner/FluentValidation), se crea la validación de los datos pasados con CreateOrderCommand, como se muestra en el código siguiente:
```csharp
public class CreateOrderCommandValidator : AbstractValidator<CreateOrderCommand>
{
public CreateOrderCommandValidator()
{
RuleFor(command => command.City).NotEmpty();
RuleFor(command => command.Street).NotEmpty();
RuleFor(command => command.State).NotEmpty();
RuleFor(command => command.Country).NotEmpty();
RuleFor(command => command.ZipCode).NotEmpty();
RuleFor(command => command.CardNumber).NotEmpty().Length(12, 19);
RuleFor(command => command.CardHolderName).NotEmpty();
RuleFor(command => command.CardExpiration).NotEmpty().Must(BeValidExpirationDate).WithMessage("Please specify a valid card expiration date");
RuleFor(command => command.CardSecurityNumber).NotEmpty().Length(3);
RuleFor(command => command.CardTypeId).NotEmpty();
RuleFor(command => command.OrderItems).Must(ContainOrderItems).WithMessage("No order items found");
}
private bool BeValidExpirationDate(DateTime dateTime)
{
return dateTime >= DateTime.UtcNow;
}
private bool ContainOrderItems(IEnumerable<OrderItemDTO> orderItems)
{
return orderItems.Any();
}
}
```
Podría crear validaciones adicionales. Se trata de una forma muy limpia y elegante de implementar las validaciones de comandos.
De forma similar, podría implementar otros comportamientos para aspectos adicionales o cuestiones transversales que quiera aplicar a los comandos cuando los administre.
#### <a name="additional-resources"></a>Recursos adicionales
##### <a name="the-mediator-pattern"></a>El patrón de mediador
- **Patrón de mediador** \
[https://en.wikipedia.org/wiki/Mediator\_pattern](https://en.wikipedia.org/wiki/Mediator_pattern)
##### <a name="the-decorator-pattern"></a>El patrón Decorator
- **Patrón Decorator** \
[https://en.wikipedia.org/wiki/Decorator\_pattern](https://en.wikipedia.org/wiki/Decorator_pattern)
##### <a name="mediatr-jimmy-bogard"></a>MediatR (Jimmy Bogard)
- **MediatR.** Repositorio de GitHub. \
<https://github.com/jbogard/MediatR>
- **CQRS with MediatR and AutoMapper (CQRS con MediatR y AutoMapper)** \
<https://lostechies.com/jimmybogard/2015/05/05/cqrs-with-mediatr-and-automapper/>
- **Put your controllers on a diet: POSTs and commands** (Poner los controladores a dieta: POST y comandos). \
<https://lostechies.com/jimmybogard/2013/12/19/put-your-controllers-on-a-diet-posts-and-commands/>
- **Tackling cross-cutting concerns with a mediator pipeline (Abordar cuestiones transversales con una canalización de mediador)** \
<https://lostechies.com/jimmybogard/2014/09/09/tackling-cross-cutting-concerns-with-a-mediator-pipeline/>
- **CQRS and REST: the perfect match (CQRS y REST: la combinación perfecta)** \
<https://lostechies.com/jimmybogard/2016/06/01/cqrs-and-rest-the-perfect-match/>
- **MediatR Pipeline Examples (Ejemplos de canalización de MediatR)** \
<https://lostechies.com/jimmybogard/2016/10/13/mediatr-pipeline-examples/>
- **Vertical Slice Test Fixtures for MediatR and ASP.NET Core (Accesorios de prueba de segmentos verticales para MediatR y ASP.NET Core)** \
<https://lostechies.com/jimmybogard/2016/10/24/vertical-slice-test-fixtures-for-mediatr-and-asp-net-core/>
- **MediatR Extensions for Microsoft Dependency Injection Released (Extensiones de MediatR para el lanzamiento de inserciones de dependencias de Microsoft)** \
<https://lostechies.com/jimmybogard/2016/07/19/mediatr-extensions-for-microsoft-dependency-injection-released/>
##### <a name="fluent-validation"></a>Validación fluida
- **Jeremy Skinner. Validación fluida.** Repositorio de GitHub. \
<https://github.com/JeremySkinner/FluentValidation>
> [!div class="step-by-step"]
> [Anterior](microservice-application-layer-web-api-design.md)
> [Siguiente](../implement-resilient-applications/index.md)
| 71.646572 | 937 | 0.754459 | spa_Latn | 0.958623 |
e02fa2e16066ba8c2ff416683e9b1d97fcd34aa0 | 106 | md | Markdown | Packs/GmailSingleUser/ReleaseNotes/1_2_4.md | mazmat-panw/content | 024a65c1dea2548e2637a9cbbe54966e9e34a722 | [
"MIT"
] | 2 | 2021-12-06T21:38:24.000Z | 2022-01-13T08:23:36.000Z | Packs/GmailSingleUser/ReleaseNotes/1_2_4.md | mazmat-panw/content | 024a65c1dea2548e2637a9cbbe54966e9e34a722 | [
"MIT"
] | 87 | 2022-02-23T12:10:53.000Z | 2022-03-31T11:29:05.000Z | Packs/GmailSingleUser/ReleaseNotes/1_2_4.md | henry-sue-pa/content | 043c6badfb4f9c80673cad9242fdea72efe301f7 | [
"MIT"
] | 2 | 2022-01-05T15:27:01.000Z | 2022-02-01T19:27:43.000Z |
#### Integrations
##### Gmail Single User
- Added type validations and other internal code improvements.
| 21.2 | 62 | 0.745283 | eng_Latn | 0.944873 |
e02fcdad9ce5e89636c52fec01a353f83de46932 | 9,882 | md | Markdown | guide/software-preparation.md | pscadding/bedrock-wiki | ef8ca880140160cec63e70abc43d86652f62a6cc | [
"MIT"
] | null | null | null | guide/software-preparation.md | pscadding/bedrock-wiki | ef8ca880140160cec63e70abc43d86652f62a6cc | [
"MIT"
] | null | null | null | guide/software-preparation.md | pscadding/bedrock-wiki | ef8ca880140160cec63e70abc43d86652f62a6cc | [
"MIT"
] | null | null | null | ---
layout: guide
title: Software and preparation
parent: Beginners Guide
nav_order: 1
badge: 1
badge_color: guide
badge_justification: left
---
# Software and Preparation
<details id="toc" open markdown="block">
<summary>
Table of contents
</summary>
{: .text-delta }
1. TOC
{:toc}
</details>
In order to be able to code addons you'll need a certain set of software installed. While Windows 10 offers the best development environment, and largest variety of tools, alternatives can be found on other platforms, including mobile.
## A valid copy of Bedrock Minecraft
- [Windows 10](https://www.microsoft.com/en-us/p/minecraft-for-windows-10/9nblggh2jhxj?activetab=pivot:overviewtab)
- [Android](https://play.google.com/store/apps/details?id=com.mojang.minecraftpe&hl=en)
- [iOS](https://apps.apple.com/us/app/minecraft/id479516143)
## Picking an Editor
Bedrock Addons can be created using any text editor (even the Windows-pre-installed Notepad), however it's much more comfortable to work in a dedicated Code Editor.
There are strong opinions about the best editor for beginners, but generally speaking you cannot go wrong selecting any of the following editors.
Editor recommendations are starred.
### Plaintext Editors
- ⭐[_VSCode Code_](https://code.visualstudio.com/) - is optimal in many cases, due to the fact that it has a variety of extensions for Addon development. (*Warning: Do not install Visual Studio, which is something different*)
- [_Sublime Text_](https://www.sublimetext.com/) - is another code editor with good theme customization capabilities.
- [_Atom_](https://atom.io/) - is another solid editor, which can be thought of as the precursor to VSCode.
### Graphical Editors
- ⭐[_Bridge_](https://github.com/bridge-core/bridge.) - is a visual software for Minecraft Add-On development. It offers JSON in tree view. However, the process of creating add-ons in bridge is parallel to creating them in a Code editor, so once you grasped the basics you could easily switch to using Bridge.
- [_CoreCoder ($$$)_](https://hanprog.itch.io/core-coder) - is a unique Code Editor developed specifically for Addon creation.
### Mobile Alternatives
- **Android**: [_ES File Explorer_](https://play.google.com/store/apps/details?id=com.File.Manager.Filemanager&hl=de&gl=US)
- **iOS**: [_Kodex_](https://apps.apple.com/us/app/kodex/id1038574481)
### Additional Notes
<details>
<summary>
Features to look for in a Code Editor
</summary>
- **Opening Folders:** When editing Addons, it is very convenient to open an entire folder as a project, instead of just individual files. This allows you to edit the files in both the Behavior Pack and Resource Pack at the same time, and quickly switch between tasks.
- **Json Linting/Prettify:** Linting is the ability to validate code as correct in real-time. Linting for json will mark things like missing commas, misplaced parens, or other formatting issues so that you can fix them. [Linting can also be found online](https://jsonlint.com/), but having real-time linting built directly into your editor is very much preferred.
- **Built in Terminal:** I find a terminal built into my editor to be very useful. I often use python scripting to supplement my workflow, and having easy access to a terminal speeds up that workflow.
</details>
<details>
<summary>
VSCode Extensions for Addon development
</summary>
Many packages exist for VSCode that make editing addons easier:
- [Blockceptions Minecraft Bedrock Development](https://marketplace.visualstudio.com/items?itemName=BlockceptionLtd.blockceptionvscodeminecraftbedrockdevelopmentextension)
- [.mcfunction support](https://marketplace.visualstudio.com/items?itemName=arcensoth.language-mcfunction)
- [.lang support](https://marketplace.visualstudio.com/items?itemName=zz5840.minecraft-lang-colorizer)
- [Bedrock Definitions](https://marketplace.visualstudio.com/items?itemName=destruc7i0n.vscode-bedrock-definitions)
- [Prettt-json](https://marketplace.visualstudio.com/items?itemName=mohsen1.prettify-json)
- [Spell Checker (for writing wiki)](https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker)
- [Snowstorm Particle Editor](https://marketplace.visualstudio.com/items?itemName=JannisX11.snowstorm)
- [Bracket Pair Colorizer](https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer-2)
- [UUID Generator](https://marketplace.visualstudio.com/items?itemName=netcorext.uuid-generator)
</details>
<details>
<summary>
If you choose to use bridge.
</summary>
You should be aware that it is a application that you benefit most from when you use it exclusively for editing your addon. Switching between a different editor and bridge. creates a bit of an overhead in your workflow (more later). The program builds up a knowledge base of your files as you use the editor. This enables very fast and dynamic auto-completions and file validation but also means that all of your files are cached in the background by default. There are two ways to workaround Bridge's caching strategy:
1) Increase or remove the `bridge-file-version: #11` comment the app leaves in your files after editing a file without bridge.
2) Add files that you want to edit without bridge. to a `.no-cache` file at the root of your behavior pack
Due to the nature of the file versioning system, most scripts and tools will continue to work as expected.
For further guidance on the editor, feel free to contact [solvedDev](https://twitter.com/solvedDev). bridge. also has an [official Discord server](https://discord.gg/wcRJZN3), with announcements, plugin discussion, add-on help, and more.
</details>
<br>
## Additional Addon-creation Software
- [**Blockbench**](https://blockbench.net/) is a 'boxy 3D model editor ' typically used to create Minecraft entity/block models, textures and animations. Also provides a web-browser version compatible with mobile. An image editor, like [GIMP](https://www.gimp.org/), [Krita](https://krita.org/en/) [Photoshop *($$$)*](https://www.adobe.com/products/photoshop.html) or paint.net, is recommended to be used along.
- You may also be recommended software such as [AJG (\$\$\$)](https://kaifireborn.itch.io/add-on-json-generator) for repetitious task automation (e.g mass weapon generation) or [FRG (\$\$\$)](https://machine-builder.itch.io/frg-v2) for quick custom structure creation.

___
Now that you have your tools installed, let's move onto some pre-organisation:
## The com.mojang folder and your Workspace
The com.mojang folder is the folder we're going to be working with throughout the Guide and Add-On development in general. It's the way to communicate with your game - you can find it in:
- Windows: `C:\Users\USERNAME\AppData\Local\Packages\Microsoft.MinecraftUWP_8wekyb3d8bbwe\LocalState\games\com.mojang`;
- Android: `Phone>games>com.mojang`;
- iOS: `My iDevice>Minecaraft>games>com.mojang`);
I strongly recommend creating a shortcut to the folder on your Desktop, in order to be able to easily access it at any time.
You'll find a lot of folders and files in the folder, among them: `behavior_packs`, `development_behavior_packs`, `resource_packs`, `development_resource_packs`.

`development_..._packs` are for developing add-ons - if you do some changes to them, _then exit and re-enter a world with the packs applied_, the packs will _update_. That way you can quickly test the changes without having to change the pack version and reloading the game. Thus we'll work with these folders.
(In `..._packs`, on the other hand, stable add-ons, including those imported via `.mcpack` are stored. They're also used to submit add-ons to Realms. We do not need them right now.)
____
Let's create you first add-on workspace in Visual Studio Code now.
1. Open VSC(*Visual Studio Code, the code editor*);
1. Create a folder named "`your_pack_name_RP`" in `development_resource_packs`. **I'll refer to this folder as `RP`**, according to the[Style Guide](https://wiki.bedrock.dev/knowledge/style-guide.html).
1. Create a folder "`your_pack_name_BP`" in `development_behavior_packs`. **I'll refer to this folder as `BP`**.
1. Go to `File>Add folder to workspace...` and choose `BP`. Do the same with `RP`.
1. Press `File>Save Workpsace as...` to save the workspace file to your Desktop. Whenever you're working on your add-on, all you have to do is open the workspace in VSC for quick access to both add-on folders.
## Learning to reference
Referencing means looking at other add-ons to find out how certain results are achieved. Minecraft's unmodified files are a good place to start. Download these zips of the Example resource packs and Example behavior packs and get creative! I recommend adding them to your workspace, but not changing them.
The most important thing to check all the time are the official documentations. bedrock.dev is an amazing collection of all the documentations and schemes, and typing `bedrock.dev/r` in your search window will immediately show you the documentations of the most recent official Minecraft release. (`bedrock.dev/b` results in the latest beta's docs.)
Once you are though with this guide, you could download and reference some open-source add-ons from, for example, MCPEDL.com (You might also share your own ones there).
___
___
## Your progress so far
**What you've done:**
- [x] Installed the necessary software;
- [x] Downloaded the Vanilla Example files;
- [x] Located your com.mojang folder and created your add-on's workspace.
**What you are to do next:**
- [ ] Create your add-ons manifests, pack icons;
- [ ] Learn to use `.mcfunction`, .`mcstructure`, `.mcpack` and `.mcaddon.`
| 60.256098 | 522 | 0.764926 | eng_Latn | 0.981833 |
e0300832ec887c7e48d5416f4f3c76837c4dd667 | 18,212 | md | Markdown | articles/active-directory-b2c/secure-api-management.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:16.000Z | 2021-03-12T23:37:16.000Z | articles/active-directory-b2c/secure-api-management.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory-b2c/secure-api-management.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Protección de una API de Azure API Management mediante Azure Active Directory B2C
description: Obtenga información sobre el uso de tokens de acceso emitidos por Azure Active Directory B2C para proteger un punto de conexión de API de Azure API Management.
services: active-directory-b2c
author: kengaderdus
manager: CelesteDG
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
ms.date: 09/20/2021
ms.author: kengaderdus
ms.subservice: B2C
ms.openlocfilehash: d5d32f9a95c555cc69878d10e2c292ffaba4efda
ms.sourcegitcommit: 91915e57ee9b42a76659f6ab78916ccba517e0a5
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/15/2021
ms.locfileid: "130035360"
---
# <a name="secure-an-azure-api-management-api-with-azure-ad-b2c"></a>Protección de una API de Azure API Management con Azure AD B2C
Obtenga información sobre cómo restringir el acceso a la API de Azure API Management a los clientes que se han autenticado con Azure Active Directory B2C (Azure AD B2C). Siga las instrucciones de este artículo para crear y probar una directiva de entrada en Azure API Management que restrinja el acceso solo a aquellas solicitudes que incluyan un token de acceso válido emitido por Azure AD B2C.
## <a name="prerequisites"></a>Requisitos previos
Antes de empezar, asegúrese de que tiene los siguientes recursos implementados:
* Un [inquilino de Azure AD B2C](tutorial-create-tenant.md).
* Una [aplicación web registrada en su inquilino](tutorial-register-applications.md).
* [Flujos de usuario creados en su inquilino](tutorial-create-user-flows.md).
* Una [API publicada](../api-management/import-and-publish.md) en Azure API Management.
* (Opcional) Una plataforma de [Postman](https://www.getpostman.com/) para probar el acceso protegido.
## <a name="get-azure-ad-b2c-application-id"></a>Obtención del identificador de aplicación de Azure AD B2C
Cuando se protege una API en Azure API Management con Azure AD B2C, se necesitan varios valores para la [directiva de entrada](../api-management/api-management-howto-policies.md) que se crea en Azure API Management. En primer lugar, anote el identificador de una aplicación que haya creado anteriormente en el inquilino de Azure AD B2C. Si usa la aplicación que creó para satisfacer los requisitos previos, utilice el id. de aplicación de *webapp1*.
Para registrar una aplicación en el inquilino de Azure AD B2C, puede usar la nueva experiencia unificada *Registros de aplicaciones*, o bien nuestra experiencia heredada *Aplicaciones*. Más información sobre la nueva [experiencia de registros](./app-registrations-training-guide.md).
# <a name="app-registrations"></a>[Registros de aplicaciones](#tab/app-reg-ga/)
1. Inicie sesión en [Azure Portal](https://portal.azure.com).
1. Asegúrese de que usa el directorio que contiene el inquilino de Azure AD B2C. Seleccione el icono **Directorios y suscripciones** en la barra de herramientas del portal.
1. En la página **Configuración del portal | Directorios y suscripciones**, busque el directorio de Azure AD B2C en la lista **Nombre de directorio** y seleccione **Cambiar**.
1. En el panel de la izquierda, seleccione **Azure AD B2C**. O bien, puede seleccionar **Todos los servicios** y, luego, buscar y seleccionar **Azure AD B2C**.
1. Seleccione **Registros de aplicaciones** y, a continuación, la pestaña **Aplicaciones propias**.
1. Anote el valor de la columna **Id. de aplicación (cliente)** de *webapp1* u otra aplicación que haya creado previamente.
# <a name="applications-legacy"></a>[Applications (Legacy)](#tab/applications-legacy/) (Aplicaciones [heredadas])
1. Inicie sesión en [Azure Portal](https://portal.azure.com).
1. Asegúrese de que usa el directorio que contiene el inquilino de Azure AD B2C. Seleccione el icono **Directorios y suscripciones** en la barra de herramientas del portal.
1. En la página **Configuración del portal | Directorios y suscripciones**, busque el directorio de Azure AD B2C en la lista **Nombre de directorio** y seleccione **Cambiar**.
1. En el panel de la izquierda, seleccione **Azure AD B2C**. O bien, puede seleccionar **Todos los servicios** y, luego, buscar y seleccionar **Azure AD B2C**.
1. En **Administrar**, seleccione **Aplicaciones (heredado)** .
1. Anote el valor de la columna **Id. de aplicación** de *webapp1* u otra aplicación que haya creado previamente.
* * *
## <a name="get-a-token-issuer-endpoint"></a>Obtención del punto de conexión del emisor de un token
A continuación, obtenga la conocida dirección URL de configuración de uno de los flujos de usuario de Azure AD B2C. También necesita el identificador URI del punto de conexión del emisor del token que desea admitir en Azure API Management.
1. En [Azure Portal](https://portal.azure.com), vaya al inquilino de Azure AD B2C.
1. En **Directivas**, seleccione **Flujos de usuario**.
1. Seleccione una directiva existente (por ejemplo, *B2C_1_signupsignin1*) y, luego, seleccione **Ejecutar flujo de usuario**.
1. Anote la dirección URL del hipervínculo que aparece en el encabezado **Ejecutar flujo de usuario** situado cerca de la parte superior de la página. Esta dirección URL es el conocido punto de conexión de detección de OpenID Connect del flujo de usuario, que usará en la sección siguiente cuando configure la directiva de entrada en Azure API Management.

1. Seleccione el hipervínculo para ir a la conocida página de configuración de OpenID Connect.
1. En la página que se abre en el explorador, registre el valor `issuer`. Por ejemplo:
`https://<tenant-name>.b2clogin.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/v2.0/`
Este valor se usará en la sección siguiente al configurar la API en Azure API Management.
Ahora debería tener dos direcciones URL anotadas para su uso en la sección siguiente: la dirección URL del punto de conexión de configuración de OpenID Connect y el identificador URI del emisor. Por ejemplo:
```
https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration
https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/
```
## <a name="configure-the-inbound-policy-in-azure-api-management"></a>Configuración de la directiva de entrada en Azure API Management
Ahora está listo para agregar la directiva de entrada en Azure API Management que valida las llamadas a la API. Puede asegurarse de que solo se aceptan llamadas a la API con un token válido mediante la adición de una directiva de [validación de tokens web JSON (JWT)](../api-management/api-management-access-restriction-policies.md#ValidateJWT) que compruebe la audiencia y el emisor de un token de acceso.
1. En [Azure Portal](https://portal.azure.com), vaya a la instancia de Azure API Management.
1. Seleccione **API**.
1. Seleccione la API que desea proteger con Azure AD B2C.
1. Seleccione la pestaña **Diseño**.
1. En **Procesamiento de entrada**, seleccione **\</\>** para abrir el editor de código de directivas.
1. Coloque la siguiente etiqueta `<validate-jwt>` dentro de la directiva `<inbound>` y, a continuación, haga lo siguiente:
a. Actualice el valor de `url` del elemento `<openid-config>` con la conocida dirección URL de configuración de la directiva.
b. Actualice el elemento `<audience>` con el id. de aplicación de la aplicación que creó anteriormente en el inquilino de B2C (por ejemplo, *webapp1*).
c. Actualice el elemento `<issuer>` con el punto de conexión del emisor del token que anotó anteriormente.
```xml
<policies>
<inbound>
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
<openid-config url="https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration" />
<audiences>
<audience>44444444-0000-0000-0000-444444444444</audience>
</audiences>
<issuers>
<issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
</issuers>
</validate-jwt>
<base />
</inbound>
<backend> <base /> </backend>
<outbound> <base /> </outbound>
<on-error> <base /> </on-error>
</policies>
```
## <a name="validate-secure-api-access"></a>Validación del acceso protegido a la API
Para asegurarse de que solo los autores de llamada autenticados pueden acceder a la API, puede validar la configuración de Azure API Management mediante una llamada a la API con [Postman](https://www.getpostman.com/).
Para llamar a la API, necesita un token de acceso emitido por Azure AD B2C y una clave de suscripción de Azure API Management.
### <a name="get-an-access-token"></a>Obtención de un token de acceso
En primer lugar, necesita un token emitido por Azure AD B2C para usarlo en el encabezado `Authorization` de Postman. Puede obtener uno mediante la característica *Ejecutar ahora* del flujo de usuario de registro o de inicio de sesión que creó como uno de los requisitos previos.
1. En [Azure Portal](https://portal.azure.com), vaya al inquilino de Azure AD B2C.
1. En **Directivas**, seleccione **Flujos de usuario**.
1. Seleccione un flujo de usuario de registro o de inicio de sesión existente (por ejemplo, *B2C_1_signupsignin1*).
1. En **Aplicación**, seleccione *webapp1*.
1. En **Dirección URL de respuesta**, seleccione `https://jwt.ms`.
1. Seleccione **Ejecutar flujo de usuario**.

1. Complete el proceso de inicio de sesión. Debería redirigirse a `https://jwt.ms`.
1. Anote el valor del token codificado que se muestra en el explorador. Use este valor de token para el encabezado Authorization de Postman.

### <a name="get-an-api-subscription-key"></a>Obtención de una clave de suscripción de API
Una aplicación cliente (en este caso, Postman) que llama a una API publicada debe incluir una clave de suscripción de API Management válida en sus solicitudes HTTP a la API. Para obtener una clave de suscripción para su inclusión en la solicitud HTTP de Postman:
1. En [Azure Portal](https://portal.azure.com), vaya a la instancia de servicio de Azure API Management.
1. Seleccione **Suscripciones**.
1. Seleccione los puntos suspensivos ( **...** ) junto a **Producto: Ilimitado** y, a continuación, seleccione **Mostrar u ocultar claves**.
1. Anote la **clave principal** del producto. Esta clave se usa para el encabezado `Ocp-Apim-Subscription-Key` de la solicitud HTTP en Postman.

### <a name="test-a-secure-api-call"></a>Prueba de una llamada a la API protegida
Con el token de acceso y la clave de suscripción de Azure API Management anotadas, ya está listo para probar si ha configurado correctamente el acceso seguro a la API.
1. Cree una nueva solicitud `GET` en [Postman](https://www.getpostman.com/). Para la dirección URL de la solicitud, especifique el punto de conexión de la lista de oradores de la API que publicó como uno de los requisitos previos. Por ejemplo:
`https://contosoapim.azure-api.net/conference/speakers`
1. A continuación, agregue los encabezados siguientes:
| Clave | Valor |
| --- | ----- |
| `Authorization` | Valor del token codificado que anotó anteriormente, con el prefijo `Bearer ` (incluya el espacio después de "Bearer"). |
| `Ocp-Apim-Subscription-Key` | La clave de suscripción de Azure API Management que registró anteriormente. |
| | |
La dirección URL de la solicitud **GET** y los **Encabezados** deben ser similares a los de la imagen siguiente:

1. En Postman, seleccione el botón **Enviar** para ejecutar la solicitud. Si lo ha configurado todo correctamente, debería aparecer una respuesta JSON con una colección de oradores de conferencia (aquí se muestra truncada):
```json
{
"collection": {
"version": "1.0",
"href": "https://conferenceapi.azurewebsites.net:443/speakers",
"links": [],
"items": [
{
"href": "https://conferenceapi.azurewebsites.net/speaker/1",
"data": [
{
"name": "Name",
"value": "Scott Guthrie"
}
],
"links": [
{
"rel": "http://tavis.net/rels/sessions",
"href": "https://conferenceapi.azurewebsites.net/speaker/1/sessions"
}
]
},
[...]
```
### <a name="test-an-insecure-api-call"></a>Prueba de una llamada a una API no protegida
Ahora que ha realizado una solicitud correcta, pruebe el caso de error para asegurarse de que las llamadas a la API con un token *no válido* se rechazan según lo previsto. Una manera de realizar la prueba es agregar o cambiar algunos caracteres en el valor del token y, a continuación, ejecutar la misma solicitud `GET` que antes.
1. Agregue varios caracteres al valor del token para simular un token no válido. Por ejemplo, podría agregar "INVALID" al valor del token, como se muestra aquí:

1. Seleccione el botón **Enviar** para ejecutar la solicitud. Con un token no válido, el resultado esperado es un código de estado no autorizado `401`:
```json
{
"statusCode": 401,
"message": "Unauthorized. Access token is missing or invalid."
}
```
Si ve el código de estado `401`, habrá comprobado que solo los autores de llamada con un token de acceso válido emitido por Azure AD B2C pueden realizar solicitudes correctas a la API de Azure API Management.
## <a name="support-multiple-applications-and-issuers"></a>Compatibilidad con varias aplicaciones y emisores
Normalmente, interactúan varias aplicaciones con una sola API REST. Para permitir que la API acepte los tokens destinados a varias aplicaciones, agregue sus id. de aplicación al elemento `<audiences>` de la directiva de entrada de Azure API Management.
```xml
<!-- Accept tokens intended for these recipient applications -->
<audiences>
<audience>44444444-0000-0000-0000-444444444444</audience>
<audience>66666666-0000-0000-0000-666666666666</audience>
</audiences>
```
Del mismo modo, para admitir varios emisores de tokens, agregue sus URI del punto de conexión al elemento `<issuers>` de la directiva de entrada de Azure API Management.
```xml
<!-- Accept tokens from multiple issuers -->
<issuers>
<issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
<issuer>https://login.microsoftonline.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
</issuers>
```
## <a name="migrate-to-b2clogincom"></a>Migración a b2clogin.com
Si tiene una API de Azure API Management que valida los tokens emitidos por el punto de conexión heredado `login.microsoftonline.com`, debe migrar la API y las aplicaciones que lo llaman para que usen tokens emitidos por [b2clogin.com](b2clogin.md).
Puede seguir este proceso general para realizar una migración por fases:
1. Agregue compatibilidad en la directiva de entrada de Azure API Management para los tokens emitidos por b2clogin.com y login.microsoftonline.com.
1. Actualice las aplicaciones de una en una para obtener los tokens del punto de conexión b2clogin.com.
1. Cuando todas las aplicaciones obtengan correctamente tokens de b2clogin.com, elimine de la API la compatibilidad con los tokens emitidos por login.microsoftonline.com.
En el siguiente ejemplo de directiva de entrada de Azure API Management se muestra cómo aceptar tokens emitidos por b2clogin.com y login.microsoftonline.com. Además, la directiva admite solicitudes de API de dos aplicaciones.
```xml
<policies>
<inbound>
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
<openid-config url="https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/B2C_1_signupsignin1/v2.0/.well-known/openid-configuration" />
<audiences>
<audience>44444444-0000-0000-0000-444444444444</audience>
<audience>66666666-0000-0000-0000-666666666666</audience>
</audiences>
<issuers>
<issuer>https://login.microsoftonline.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
<issuer>https://<tenant-name>.b2clogin.com/99999999-0000-0000-0000-999999999999/v2.0/</issuer>
</issuers>
</validate-jwt>
<base />
</inbound>
<backend> <base /> </backend>
<outbound> <base /> </outbound>
<on-error> <base /> </on-error>
</policies>
```
## <a name="next-steps"></a>Pasos siguientes
Para más información sobre las directivas de Azure API Management, consulte el [índice de referencia de la directiva de Azure API Management](../api-management/api-management-policies.md).
Para obtener información sobre la migración de las API web basadas en OWIN y sus aplicaciones a b2clogin.com, consulte [Migración de una API web basada en OWIN a b2clogin.com](multiple-token-endpoints.md).
| 64.58156 | 449 | 0.738304 | spa_Latn | 0.957173 |
e030286aab997fb7ed0ee715185eaeeabf3ff228 | 16,496 | md | Markdown | reference/7.0/Microsoft.PowerShell.Archive/Compress-Archive.md | TBrasse/PowerShell-Docs | 2b72cc4f9000724a2eb83d3f6f90806004ad349e | [
"CC-BY-4.0",
"MIT"
] | 892 | 2019-01-24T05:33:27.000Z | 2022-03-31T02:39:46.000Z | reference/7.0/Microsoft.PowerShell.Archive/Compress-Archive.md | TBrasse/PowerShell-Docs | 2b72cc4f9000724a2eb83d3f6f90806004ad349e | [
"CC-BY-4.0",
"MIT"
] | 5,152 | 2019-01-24T01:36:00.000Z | 2022-03-31T21:57:54.000Z | reference/7.0/Microsoft.PowerShell.Archive/Compress-Archive.md | TBrasse/PowerShell-Docs | 2b72cc4f9000724a2eb83d3f6f90806004ad349e | [
"CC-BY-4.0",
"MIT"
] | 894 | 2019-01-25T03:04:48.000Z | 2022-03-31T15:12:56.000Z | ---
external help file: Microsoft.PowerShell.Archive-help.xml
Locale: en-US
Module Name: Microsoft.PowerShell.Archive
ms.date: 02/20/2020
online version: https://docs.microsoft.com/powershell/module/microsoft.powershell.archive/compress-archive?view=powershell-7&WT.mc_id=ps-gethelp
schema: 2.0.0
title: Compress-Archive
---
# Compress-Archive
## Synopsis
Creates a compressed archive, or zipped file, from specified files and directories.
## Syntax
### Path (Default)
```
Compress-Archive [-Path] <String[]> [-DestinationPath] <String> [-CompressionLevel <String>]
[-PassThru] [-WhatIf] [-Confirm] [<CommonParameters>]
```
### PathWithUpdate
```
Compress-Archive [-Path] <String[]> [-DestinationPath] <String> [-CompressionLevel <String>] -Update
[-PassThru] [-WhatIf] [-Confirm] [<CommonParameters>]
```
### PathWithForce
```
Compress-Archive [-Path] <String[]> [-DestinationPath] <String> [-CompressionLevel <String>] -Force
[-PassThru] [-WhatIf] [-Confirm] [<CommonParameters>]
```
### LiteralPathWithUpdate
```
Compress-Archive -LiteralPath <String[]> [-DestinationPath] <String> [-CompressionLevel <String>]
-Update [-PassThru] [-WhatIf] [-Confirm] [<CommonParameters>]
```
### LiteralPathWithForce
```
Compress-Archive -LiteralPath <String[]> [-DestinationPath] <String> [-CompressionLevel <String>]
-Force [-PassThru] [-WhatIf] [-Confirm] [<CommonParameters>]
```
### LiteralPath
```
Compress-Archive -LiteralPath <String[]> [-DestinationPath] <String> [-CompressionLevel <String>]
[-PassThru] [-WhatIf] [-Confirm] [<CommonParameters>]
```
## Description
The `Compress-Archive` cmdlet creates a compressed, or zipped, archive file from one or more
specified files or directories. An archive packages multiple files, with optional compression, into
a single zipped file for easier distribution and storage. An archive file can be compressed by using
the compression algorithm specified by the **CompressionLevel** parameter.
The `Compress-Archive` cmdlet uses the Microsoft .NET API
[System.IO.Compression.ZipArchive](/dotnet/api/system.io.compression.ziparchive) to compress files.
The maximum file size is 2 GB because there's a limitation of the underlying API.
Some examples use splatting to reduce the line length of the code samples. For more information, see
[about_Splatting](../Microsoft.PowerShell.Core/About/about_Splatting.md).
## Examples
### Example 1: Compress files to create an archive file
This example compresses files from different directories and creates an archive file. A wildcard is
used to get all files with a particular file extension. There's no directory structure in the
archive file because the **Path** only specifies file names.
```powershell
$compress = @{
Path = "C:\Reference\Draftdoc.docx", "C:\Reference\Images\*.vsd"
CompressionLevel = "Fastest"
DestinationPath = "C:\Archives\Draft.Zip"
}
Compress-Archive @compress
```
The **Path** parameter accepts specific file names and file names with wildcards, `*.vsd`. The
**Path** uses a comma-separated list to get files from different directories. The compression level
is **Fastest** to reduce processing time. The **DestinationPath** parameter specifies the location
for the `Draft.zip` file. The `Draft.zip` file contains `Draftdoc.docx` and all the files with a
`.vsd` extension.
### Example 2: Compress files using a LiteralPath
This example compresses specific named files and creates a new archive file. There's no directory
structure in the archive file because the **Path** only specifies file names.
```powershell
$compress = @{
LiteralPath= "C:\Reference\Draft Doc.docx", "C:\Reference\Images\diagram2.vsd"
CompressionLevel = "Fastest"
DestinationPath = "C:\Archives\Draft.Zip"
}
Compress-Archive @compress
```
Absolute path and file names are used because the **LiteralPath** parameter doesn't accept
wildcards. The **Path** uses a comma-separated list to get files from different directories. The
compression level is **Fastest** to reduce processing time. The **DestinationPath** parameter
specifies the location for the `Draft.zip` file. The `Draft.zip` file only contains `Draftdoc.docx`
and `diagram2.vsd`.
### Example 3: Compress a directory that includes the root directory
This example compresses a directory and creates an archive file that **includes** the root
directory, and all its files and subdirectories. The archive file has a directory structure because
the **Path** specifies a root directory.
```powershell
Compress-Archive -Path C:\Reference -DestinationPath C:\Archives\Draft.zip
```
`Compress-Archive` uses the **Path** parameter to specify the root directory, `C:\Reference`. The
**DestinationPath** parameter specifies the location for the archive file. The `Draft.zip` archive
includes the `Reference` root directory, and all its files and subdirectories.
### Example 4: Compress a directory that excludes the root directory
This example compresses a directory and creates an archive file that **excludes** the root directory
because the **Path** uses an asterisk (`*`) wildcard. The archive contains a directory structure
that contains the root directory's files and subdirectories.
```powershell
Compress-Archive -Path C:\Reference\* -DestinationPath C:\Archives\Draft.zip
```
`Compress-Archive` uses the **Path** parameter to specify the root directory, `C:\Reference` with an
asterisk (`*`) wildcard. The **DestinationPath** parameter specifies the location for the archive
file. The `Draft.zip` archive contains the root directory's files and subdirectories. The
`Reference` root directory is excluded from the archive.
### Example 5: Compress only the files in a root directory
This example compresses only the files in a root directory and creates an archive file. There's no
directory structure in the archive because only files are compressed.
```powershell
Compress-Archive -Path C:\Reference\*.* -DestinationPath C:\Archives\Draft.zip
```
`Compress-Archive` uses the **Path** parameter to specify the root directory, `C:\Reference` with a
**star-dot-star** (`*.*`) wildcard. The **DestinationPath** parameter specifies the location for the
archive file. The `Draft.zip` archive only contains the `Reference` root directory's files and the
root directory is excluded.
### Example 6: Use the pipeline to archive files
This example sends files down the pipeline to create an archive. There's no directory structure in
the archive file because the **Path** only specifies file names.
```powershell
Get-ChildItem -Path C:\Reference\Afile.txt, C:\Reference\Images\Bfile.txt |
Compress-Archive -DestinationPath C:\Archives\PipelineFiles.zip
```
`Get-ChildItem` uses the **Path** parameter to specify two files from different directories. Each
file is represented by a **FileInfo** object and is sent down the pipeline to `Compress-Archive`.
The two specified files are archived in `PipelineFiles.zip`.
### Example 7: Use the pipeline to archive a directory
This example sends a directory down the pipeline to create an archive. Files are sent as
**FileInfo** objects and directories as **DirectoryInfo** objects. The archive's directory structure
doesn't include the root directory, but its files and subdirectories are included in the archive.
```powershell
Get-ChildItem -Path C:\LogFiles | Compress-Archive -DestinationPath C:\Archives\PipelineDir.zip
```
`Get-ChildItem` uses the **Path** parameter to specify the `C:\LogFiles` root directory. Each
**FileInfo** and **DirectoryInfo** object is sent down the pipeline.
`Compress-Archive` adds each object to the `PipelineDir.zip` archive. The **Path** parameter isn't
specified because the pipeline objects are received into parameter position 0.
### Example 8: How recursion can affect archives
This example shows how recursion can duplicate files in your archive. For example, if you use
`Get-ChildItem` with the **Recurse** parameter. As recursion processes, each **FileInfo** and
**DirectoryInfo** object is sent down the pipeline and added to the archive.
```powershell
Get-ChildItem -Path C:\TestLog -Recurse |
Compress-Archive -DestinationPath C:\Archives\PipelineRecurse.zip
```
The `C:\TestLog` directory doesn't contain any files. It does contain a subdirectory named `testsub`
that contains the `testlog.txt` file.
`Get-ChildItem` uses the **Path** parameter to specify the root directory, `C:\TestLog`. The
**Recurse** parameter processes the files and directories. A **DirectoryInfo** object is created for
`testsub` and a **FileInfo** object `testlog.txt`.
Each object is sent down the pipeline to `Compress-Archive`. The **DestinationPath** specifies the
location for the archive file. The **Path** parameter isn't specified because the pipeline objects
are received into parameter position 0.
The following summary describes the `PipelineRecurse.zip` archive's contents that contains a
duplicate file:
- The **DirectoryInfo** object creates the `testsub` directory and contains the `testlog.txt` file,
which reflects the original directory structure.
- The **FileInfo** object creates a duplicate `testlog.txt` in the archive's root. The duplicate
file is created because recursion sent a file object to `Compress-Archive`. This behavior is
expected because each object sent down the pipeline is added to the archive.
### Example 9: Update an existing archive file
This example updates an existing archive file, `Draft.Zip`, in the `C:\Archives` directory. In this
example, the existing archive file contains the root directory, and its files and subdirectories.
```powershell
Compress-Archive -Path C:\Reference -Update -DestinationPath C:\Archives\Draft.Zip
```
The command updates `Draft.Zip` with newer versions of existing files in the `C:\Reference`
directory and its subdirectories. And, new files that were added to `C:\Reference` or its
subdirectories are included in the updated `Draft.Zip` archive.
## Parameters
### -CompressionLevel
Specifies how much compression to apply when you're creating the archive file. Faster compression
requires less time to create the file, but can result in larger file sizes.
If this parameter isn't specified, the command uses the default value, **Optimal**.
The following are the acceptable values for this parameter:
- **Fastest**. Use the fastest compression method available to reduce processing time. Faster
compression can result in larger file sizes.
- **NoCompression**. Doesn't compress the source files.
- **Optimal**. Processing time is dependent on file size.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Accepted values: Optimal, NoCompression, Fastest
Required: False
Position: Named
Default value: Optimal
Accept pipeline input: False
Accept wildcard characters: False
```
### -DestinationPath
This parameter is required and specifies the path to the archive output file. The
**DestinationPath** should include the name of the zipped file, and either the absolute or relative
path to the zipped file.
If the file name in **DestinationPath** doesn't have a `.zip` file name extension, the cmdlet adds
the `.zip` file name extension.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: True
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Force
Forces the command to run without asking for user confirmation.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: PathWithForce, LiteralPathWithForce
Aliases:
Required: True
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -LiteralPath
Specifies the path or paths to the files that you want to add to the archive zipped file. Unlike the
**Path** parameter, the value of **LiteralPath** is used exactly as it's typed. No characters are
interpreted as wildcards. If the path includes escape characters, enclose each escape character in
single quotation marks, to instruct PowerShell not to interpret any characters as escape sequences.
To specify multiple paths, and include files in multiple locations in your output zipped file, use
commas to separate the paths.
```yaml
Type: System.String[]
Parameter Sets: LiteralPathWithUpdate, LiteralPathWithForce, LiteralPath
Aliases: PSPath
Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -PassThru
Causes the cmdlet to output a file object representing the archive file created.
This parameter was introduced in PowerShell 6.0.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -Path
Specifies the path or paths to the files that you want to add to the archive zipped file. To specify
multiple paths, and include files in multiple locations, use commas to separate the paths.
This parameter accepts wildcard characters. Wildcard characters allow you to add all files in a
directory to your archive file.
Using wildcards with a root directory affects the archive's contents:
- To create an archive that **includes** the root directory, and all its files and subdirectories,
specify the root directory in the **Path** without wildcards. For example: `-Path C:\Reference`
- To create an archive that **excludes** the root directory, but zips all its files and
subdirectories, use the asterisk (`*`) wildcard. For example: `-Path C:\Reference\*`
- To create an archive that only zips the files in the root directory, use the **star-dot-star**
(`*.*`) wildcard. Subdirectories of the root aren't included in the archive. For example:
`-Path C:\Reference\*.*`
```yaml
Type: System.String[]
Parameter Sets: Path, PathWithUpdate, PathWithForce
Aliases:
Required: True
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: True
```
### -Update
Updates the specified archive by replacing older file versions in the archive with newer file
versions that have the same names. You can also add this parameter to add files to an existing
archive.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: PathWithUpdate, LiteralPathWithUpdate
Aliases:
Required: True
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -Confirm
Prompts you for confirmation before running the cmdlet.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: cf
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -WhatIf
Shows what would happen if the cmdlet runs. The cmdlet isn't run.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: wi
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable,
-InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose,
-WarningAction, and -WarningVariable. For more information, see
[about_CommonParameters](https://go.microsoft.com/fwlink/?LinkID=113216).
## Inputs
### System.String
You can pipe a string that contains a path to one or more files.
## Outputs
### System.IO.FileInfo
The cmdlet only returns a **FileInfo** object when you use the **PassThru** parameter.
## Notes
Using recursion and sending objects down the pipeline can duplicate files in your archive. For
example, if you use `Get-ChildItem` with the **Recurse** parameter, each **FileInfo** and
**DirectoryInfo** object that's sent down the pipeline is added to the archive.
The [ZIP file specification](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT) does not
specify a standard way of encoding filenames that contain non-ASCII characters. The
`Compress-Archive` cmdlet uses UTF-8 encoding. Other ZIP archive tools may use a different encoding
scheme. When extracting files with filenames not stored using UTF-8 encoding, `Expand-Archive` uses
the raw value found in the archive. This can result in a filename that is different than the source
filename stored in the archive.
## Related links
[Expand-Archive](Expand-Archive.md)
[Get-ChildItem](../Microsoft.PowerShell.Management/Get-ChildItem.md)
| 36.175439 | 144 | 0.771399 | eng_Latn | 0.921853 |
e03040fa964d4d549dffb53757ee24ed8994193b | 1,637 | md | Markdown | results/oratory1990/harman_in-ear_2019v2/Sony WF-XB700/README.md | NekoAlosama/AutoEq-optimized | 354873974d31dea14aa95cf1b181c724554a19d3 | [
"MIT"
] | 2 | 2020-04-27T23:56:38.000Z | 2020-08-06T08:54:28.000Z | results/oratory1990/harman_in-ear_2019v2/Sony WF-XB700/README.md | NekoAlosama/AutoEq-optimized | 354873974d31dea14aa95cf1b181c724554a19d3 | [
"MIT"
] | 3 | 2020-07-21T22:10:04.000Z | 2020-11-22T15:07:43.000Z | results/oratory1990/harman_in-ear_2019v2/Sony WF-XB700/README.md | NekoAlosama/AutoEq-optimized | 354873974d31dea14aa95cf1b181c724554a19d3 | [
"MIT"
] | 1 | 2020-08-06T08:54:41.000Z | 2020-08-06T08:54:41.000Z | # Sony WF-XB700
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.
### Parametric EQs
In case of using parametric equalizer, apply preamp of **-6.51dB** and build filters manually
with these parameters. The first 5 filters can be used independently.
When using independent subset of filters, apply preamp of **-6.41 dB**.
| Type | Fc | Q | Gain |
|--------:|------------:|-----:|---------:|
| Peaking | 29.56 Hz | 0.43 | -5.75 dB |
| Peaking | 110.01 Hz | 0.2 | -1.71 dB |
| Peaking | 2821.01 Hz | 0.84 | 1.75 dB |
| Peaking | 2861.51 Hz | 3.3 | -3.40 dB |
| Peaking | 7082.74 Hz | 2.86 | 5.92 dB |
| Peaking | 115.65 Hz | 5.45 | 0.86 dB |
| Peaking | 4370.70 Hz | 3.37 | 1.17 dB |
| Peaking | 5135.67 Hz | 5.77 | -2.68 dB |
| Peaking | 10409.22 Hz | 1.32 | -0.51 dB |
| Peaking | 19791.26 Hz | 0.39 | 5.90 dB |
### Fixed Band EQs
In case of using fixed band (also called graphic) equalizer, apply preamp of **-4.31dB**
(if available) and set gains manually with these parameters.
| Type | Fc | Q | Gain |
|--------:|------------:|-----:|---------:|
| Peaking | 31.25 Hz | 1.41 | -7.25 dB |
| Peaking | 62.50 Hz | 1.41 | -4.38 dB |
| Peaking | 125.00 Hz | 1.41 | -1.59 dB |
| Peaking | 250.00 Hz | 1.41 | -1.74 dB |
| Peaking | 500.00 Hz | 1.41 | -0.73 dB |
| Peaking | 1000.00 Hz | 1.41 | 0.24 dB |
| Peaking | 2000.00 Hz | 1.41 | 0.07 dB |
| Peaking | 4000.00 Hz | 1.41 | 0.21 dB |
| Peaking | 8000.00 Hz | 1.41 | 3.85 dB |
| Peaking | 16000.01 Hz | 1.41 | 3.91 dB |
### Graphs
 | 40.925 | 98 | 0.554062 | eng_Latn | 0.687199 |
e030b9ec2e29a14b339ce42343e527c4dcc75da7 | 688 | md | Markdown | example/README.md | mcxinyu/stereo | 68a34fc0123fa53fd0eb04b40705b3eb3a13f37e | [
"MIT"
] | 75 | 2017-08-16T21:43:45.000Z | 2021-11-08T11:58:01.000Z | example/README.md | mcxinyu/stereo | 68a34fc0123fa53fd0eb04b40705b3eb3a13f37e | [
"MIT"
] | 37 | 2017-08-17T13:48:11.000Z | 2019-08-20T20:31:07.000Z | example/README.md | mcxinyu/stereo | 68a34fc0123fa53fd0eb04b40705b3eb3a13f37e | [
"MIT"
] | 17 | 2017-08-17T11:07:25.000Z | 2021-02-24T11:01:00.000Z | # Stereo Example
<center>
<img src="doc/example.jpg" height="500" />
</center>
This example shows how the library works.
* The two buttons *Play dubstep.mp3* and *Play pi.mp3* load a file from the application directory and play them. These files don't have ID3 tags so no information will be displayed.
* The button *Invalid URL* triggers a dialog to inform that the file is not playable.
* The button *Pick file* pops an UI to pick a track from phone's internal storage.
* The buttons at the bottom are controls to play, pause and stop the playback. The fourth button is just there to show the current playback state (playing or not). It's greyed out since no action is bound.
| 43 | 205 | 0.75 | eng_Latn | 0.999396 |
e030df4ac37c542a7f7979065947ab94c47553b8 | 742 | md | Markdown | docs/framework/wcf/diagnostics/event-logging/failedtoinitializetracesource.md | marcustung/docs.zh-tw-1 | 7c224633316a0d983d25a8a9bb1f7626743d062a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/event-logging/failedtoinitializetracesource.md | marcustung/docs.zh-tw-1 | 7c224633316a0d983d25a8a9bb1f7626743d062a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/event-logging/failedtoinitializetracesource.md | marcustung/docs.zh-tw-1 | 7c224633316a0d983d25a8a9bb1f7626743d062a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: FailedToInitializeTraceSource
ms.date: 03/30/2017
ms.assetid: ce6fea55-292c-4fb9-908e-3713fcd4cf8f
ms.openlocfilehash: 84fa33050e6479fb4a3eca154d7e28d1875a3666
ms.sourcegitcommit: 0be8a279af6d8a43e03141e349d3efd5d35f8767
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 04/18/2019
ms.locfileid: "59079483"
---
# <a name="failedtoinitializetracesource"></a>FailedToInitializeTraceSource
識別碼:101
嚴重性:錯誤
類別:追蹤
## <a name="description"></a>描述
未初始化追蹤來源。 已停用追蹤。 此事件會列出例外狀況、處理序名稱和處理序識別碼。
## <a name="see-also"></a>另請參閱
- [事件記錄](../../../../../docs/framework/wcf/diagnostics/event-logging/index.md)
- [事件一般參考](../../../../../docs/framework/wcf/diagnostics/event-logging/events-general-reference.md)
| 28.538462 | 99 | 0.74124 | yue_Hant | 0.271941 |
e030ef8887a7c9e7400cf28c685806e0042c8431 | 1,271 | md | Markdown | docs/ado/guide/referencing-the-ado-libraries.md | csrowell/sql-docs | 5e15fa8674a09821becd437e78cfb0bb472e3bc8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-05-04T19:57:42.000Z | 2019-05-04T19:57:42.000Z | docs/ado/guide/referencing-the-ado-libraries.md | jzabroski/sql-docs | 34be3e3e656de711b4c7a09274c715b23b451014 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/guide/referencing-the-ado-libraries.md | jzabroski/sql-docs | 34be3e3e656de711b4c7a09274c715b23b451014 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-13T23:26:30.000Z | 2021-01-13T23:26:30.000Z | ---
title: "Referencing the ADO Libraries | Microsoft Docs"
ms.custom: ""
ms.date: "01/19/2017"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.prod: "sql-non-specified"
ms.technology: “drivers”
ms.topic: "article"
helpviewer_keywords:
- "libraries [ADO]"
- "referencing libraries [ADO]"
- "ADO, libraries"
ms.assetid: 573f8f27-babd-4e2f-bf9a-270ee7024975
caps.latest.revision: 11
author: "MightyPen"
ms.author: "genemi"
manager: "jhubbard"
ms.workload: "Inactive"
---
# Referencing the ADO Libraries
The latest version of ADO is packaged as *msado15.dll*. The latest versions of ADO MD and ADOX are packaged as *msadom.dll* and *msadox.dll*, respectively. These libraries are installed by default in *$installDir*, where *$installDir* stands for the path of the directory in which the ADO library has been installed on your computer. To use the ADO libraries in your application, you must reference them explicitly in the application project.
The following are steps that you can take to reference the ADO libraries:
- [In a Visual Basic Application](../../ado/guide/referencing-the-ado-libraries-in-a-visual-basic-6-application.md)
- [In a Visual C++ Application](../../ado/guide/referencing-the-ado-libraries-in-a-visual-c-application.md)
| 42.366667 | 444 | 0.740362 | eng_Latn | 0.928012 |
e030f34f47fa69538d8746b567105eb08d33bded | 6,224 | md | Markdown | dashboard/custom-view/README.md | etienneburdet/ods-cookbook | 37b16e304e6c70ab0fd5a32eab2803598a266047 | [
"MIT"
] | 20 | 2015-08-19T23:36:43.000Z | 2022-01-12T19:16:36.000Z | dashboard/custom-view/README.md | gennaweber/ods-cookbook | 45a49ba041c6869ce8fd838e6de1cc69b9d8c84a | [
"MIT"
] | 3 | 2015-12-15T08:17:26.000Z | 2020-03-28T00:43:56.000Z | dashboard/custom-view/README.md | gennaweber/ods-cookbook | 45a49ba041c6869ce8fd838e6de1cc69b9d8c84a | [
"MIT"
] | 5 | 2015-11-28T14:59:50.000Z | 2022-01-07T17:17:13.000Z | ### Custom view
#### Live result
[Please find the live exemple on Discovery](https://discovery.opendatasoft.com/explore/dataset/2015-registry-of-power-plant-connections-to-the-eletricity-transport-network/custom/)
#### Code
HTML code :
```html
<ods-tabs class="custom-tab">
<ods-pane title="Number of power plants and maximum power" icon="pie-chart">
<p>
NB: The values below match the active filters
</p>
<div class="row">
<div class="col-sm-6">
<h3>
Total maximum power per sector (in MW)
</h3>
<ods-chart>
<ods-chart-query context="ctx" field-x="sector">
<ods-chart-serie expression-y="maximum_power_mw"
chart-type="treemap"
function-y="SUM"
color="range-Accent"
scientific-display="true">
</ods-chart-serie>
</ods-chart-query>
</ods-chart>
</div>
<div class="col-sm-6">
<h3>
Number of power plants per sector
</h3>
<ods-chart>
<ods-chart-query context="ctx" field-x="sector">
<ods-chart-serie expression-y="maximum_power_mw" chart-type="treemap" function-y="COUNT" color="range-Accent" scientific-display="true">
</ods-chart-serie>
</ods-chart-query>
</ods-chart>
</div>
</div>
</ods-pane>
<ods-pane title="Geographic distribution" icon="map" pane-auto-unload="true">
<p>
NB: The values below match the active filters
</p>
<ods-map display-control="false"
search-box="true"
toolbar-fullscreen="true"
toolbar-geolocation="true"
basemap="jawg.streets"
location="6,47.30903,2.0874"
style="height: 600px;">
<ods-map-layer-group>
<ods-map-layer context="ctx"
color-numeric-ranges="{'42152923196196.8':'#e6755c','24286534228129.4':'#fc9272','60019312164264.2':'#d05946','77885701132331.6':'#ba3d30','95752090100399':'#a5211b'}"
color-by-field="maximum_power_mw"
picto="circle"
show-marker="true"
display="choropleth"
border-color="#FFFFFF"
caption="true">
<h3>
{{ record.fields.sector }}
</h3>
<p>
<i class="fa fa-industry fa-fw" aria-hidden="true"></i>
<strong>{{ record.fields.company }}</strong>
<br />
<i class="fa fa-battery-full fa-fw" aria-hidden="true"></i>
<strong>{{ record.fields.maximum_power_mw|number }} MW</strong> (maximum power)
<br />
<i class="fa fa-calendar fa-fw" aria-hidden="true"></i>
Entry into service on <strong>{{ record.fields.entry_into_service | moment: 'LL' }}</strong>
</p>
</ods-map-layer>
</ods-map-layer-group>
</ods-map>
</ods-pane>
<ods-pane title="Wind power" icon="leaf">
<div class="row">
<div class="col-sm-6">
<p>
For the specific case of wind power, we'll have a look at the connections made on both sides of the energy transportation network.
That is connection made by the entity in charge of the high voltage network (the so-called network operator, which provided all data in this dataset)
and connections made by the entity in charge of the medium and low voltage network (called distributor).
We can consider that connections made by the former are to companies which primary activity is power production whereas those made by the latter are to individuals and local companies (who happen to have a local energy production).
</p>
<p>
Using data from both entities, is it therefore possible to compare their activity and to see that wind power is mostly due to local companies and not nation-wide energy producers.
</p>
</div>
<div class="col-sm-6" style="overflow: hidden;">
<img src="/assets/theme_image/turbines_wind_farm.jpg" style="max-height: 400px;">
</div>
</div>
<div class="row">
<div class="col-sm-6">
<h3>
Connections made to the high-voltage network
</h3>
<ods-dataset-context context="registryofpowerplantconnectionstotheeletricitytransportnetwork"
registryofpowerplantconnectionstotheeletricitytransportnetwork-dataset="2015-registry-of-power-plant-connections-to-the-eletricity-transport-network"
registryofpowerplantconnectionstotheeletricitytransportnetwork-parameters="{'disjunctive.sector':true,'refine.sector':'Eolien'}">
<ods-chart>
<ods-chart-query context="registryofpowerplantconnectionstotheeletricitytransportnetwork" field-x="region_name" sort="serie1-1">
<ods-chart-serie expression-y="maximum_power_mw" chart-type="column" function-y="SUM" color="#2C3F56" scientific-display="true">
</ods-chart-serie>
</ods-chart-query>
</ods-chart>
</ods-dataset-context>
</div>
</div>
</ods-pane>
</ods-tabs>
```
CSS code:
```css
.custom-tab {
margin-left: -21px;
margin-right: -21px;
}
```
| 46.447761 | 251 | 0.509319 | eng_Latn | 0.84468 |
e0315addca3a1750ee128ad4d94d27a540f5eaf9 | 22 | md | Markdown | README.md | Plazl/plazium.net | 3e302fde32eebf51cd213c3f93be048e6bb853ef | [
"MIT"
] | null | null | null | README.md | Plazl/plazium.net | 3e302fde32eebf51cd213c3f93be048e6bb853ef | [
"MIT"
] | null | null | null | README.md | Plazl/plazium.net | 3e302fde32eebf51cd213c3f93be048e6bb853ef | [
"MIT"
] | null | null | null | # plazium.net
Plazium
| 7.333333 | 13 | 0.772727 | vie_Latn | 0.755988 |
e0320000977ee07c8b993e7da49167a6eb53f1e5 | 22 | md | Markdown | README.md | maaeedee/Data-and-affordances | 9c021c2176cebcd9636736459662ec28d6cfcf12 | [
"MIT"
] | null | null | null | README.md | maaeedee/Data-and-affordances | 9c021c2176cebcd9636736459662ec28d6cfcf12 | [
"MIT"
] | null | null | null | README.md | maaeedee/Data-and-affordances | 9c021c2176cebcd9636736459662ec28d6cfcf12 | [
"MIT"
] | null | null | null | # Data-and-affordances | 22 | 22 | 0.818182 | eng_Latn | 0.360053 |
e032608242ce22b4a6267073428617df8cb17650 | 375 | md | Markdown | README.md | msazurestackworkloads/helm-charts | ffada593b1e9a5257671ccd24f6ee43a14508325 | [
"MIT"
] | null | null | null | README.md | msazurestackworkloads/helm-charts | ffada593b1e9a5257671ccd24f6ee43a14508325 | [
"MIT"
] | null | null | null | README.md | msazurestackworkloads/helm-charts | ffada593b1e9a5257671ccd24f6ee43a14508325 | [
"MIT"
] | null | null | null | # helm-charts
## Configuration
Add the chart repository.
``` bash
helm repo add azs-ecs https://raw.githubusercontent.com/msazurestackworkloads/helm-charts/master/repo/
```
## Chart source
All chart source material including chart readme files are found under the [src directory](/src/).
## Updating charts
``` bash
helm package src/CHART_NAME -d repo/
make index
```
| 17.857143 | 102 | 0.744 | eng_Latn | 0.802341 |
e0326acdbbea382e3f4934e2738ee930ebe49daa | 3,434 | md | Markdown | mip/develop/concept-handler-policy-auditing-cpp.md | hyoshioka0128/Azure-RMSDocs.ja-jp | bca57a70d71c9e11fbe23c1130708884c6ca501e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | mip/develop/concept-handler-policy-auditing-cpp.md | hyoshioka0128/Azure-RMSDocs.ja-jp | bca57a70d71c9e11fbe23c1130708884c6ca501e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | mip/develop/concept-handler-policy-auditing-cpp.md | hyoshioka0128/Azure-RMSDocs.ja-jp | bca57a70d71c9e11fbe23c1130708884c6ca501e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 概念 - Microsoft Information Protection SDK ポリシー API
description: この記事は、Microsoft Information Protection SDK を使用してポリシー API 監査イベントを Azure Information Protection Analytics に送信する方法を理解するのに役立ちます。
services: information-protection
author: tommoser
ms.service: information-protection
ms.topic: conceptual
ms.collection: M365-security-compliance
ms.date: 11/07/2018
ms.author: tommos
ms.openlocfilehash: 729570c902ad3175b65ddd8167005c0cb4e4078c
ms.sourcegitcommit: fff4c155c52c9ff20bc4931d5ac20c3ea6e2ff9e
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/24/2019
ms.locfileid: "60175226"
---
# <a name="auditing-in-the-mip-sdk"></a>MIP SDK での監査
Azure Information Protection の管理ポータルでは、管理者レポートにアクセスできます。 これらのレポートにより、MIP SDK が統合された任意のアプリケーションの間にユーザーが手動または自動で適用したラベルの可視性が提供されます。 SDK を利用している開発パートナーは、この機能を簡単に有効にして自分のアプリケーションからの情報が顧客レポートに表示されるようにすることができます。
## <a name="event-types"></a>イベントの種類
SDK を介して Azure Information Protection Analytics に送信できるイベントには 3 つの種類があります。 **ハートビート イベント**、**検出イベント**、および**変更イベント**です
### <a name="heartbeat-events"></a>ハートビート イベント
ハートビート イベントは、ポリシー API が統合された任意のアプリケーションに対して自動的に作成されます。 ハートビート イベントには、次のものが含まれます。
* TenantId
* 作成された時刻
* ユーザー プリンシパル名
* 監査が作成されたマシンの名前
* プロセス名
* プラットフォーム
* アプリケーション ID - Azure AD のアプリケーション ID に対応します。
これらのイベントは、Microsoft Information Protection SDK を使用しているアプリケーションを、会社全体にわたって検出する際に便利です。
### <a name="discovery-events"></a>検出イベント
検出イベントでは、ポリシー API によって読み取られたり使用されたりするラベル付き情報に関する情報が提供されます。 これらのイベントは、情報にアクセスしているデバイス、場所、およびユーザーを組織全体にわたって確認できるので便利です。
作成するときに、フラグを設定して、ポリシー API で検出イベントが生成されます、`mip::PolicyHandler`オブジェクト。 値は、次の例で**isAuditDiscoveryEnabled**に設定されている`true`します。 ときに`mip::ExecutionState`に渡される`ComputeActions()`または`GetSensitivityLabel()`(識別子を持つ既存のメタデータ情報およびコンテンツ)、探索情報を Azure 情報保護 Analytics に送信されます。
アプリケーションによって `ComputeActions()` または `GetSensitivityLabel()` が呼び出され、`mip::ExecutionState` が指定されると、検出監査が作成されます。 このイベントは、ハンドラーごとに 1 回だけ作成されます。
実行状態の詳細については、`mip::ExecutionState` の概念に関するドキュメントを参照してください。
```cpp
// Create PolicyHandler, passing in true for isAuditDiscoveryEnabled
auto handler = mEngine->CreatePolicyHandler(true);
// Returns vector of mip::Action and generates discovery event.
auto actions = handler->ComputeActions(*state);
//Or, get the label for a given state
auto label = handler->GetSensitivityLabel(*state);
```
実際には、`mip::PolicyHandler` の構築中に **isAuditDiscoveryEnabled** を `true` に設定して、ファイル アクセス情報が Azure Information Protection Analytics に送信されるようにしておく必要があります。
## <a name="change-event"></a>変更イベント
変更イベントでは、ファイル、適用または変更されたラベル、およびユーザーが示した理由について情報が提供されます。 変更イベントを生成するには、`mip::PolicyHandler` 上で `NotifyCommittedActions()` を呼び出します。 変更がファイルに正常にコミットされると呼び出しが行われ、アクションの計算に使用された `mip::ExecutionState` が渡されます。
> アプリケーションによるこの関数の呼び出しが失敗した場合、Azure Information Protection Analytics にイベントは届きません。
```cpp
handler->NotifyCommittedActions(*state);
```
## <a name="audit-dashboard"></a>監査ダッシュボード
Azure Information Protection 監査パイプラインに送信されたイベントは、 https://portal.azure.com にあるレポートに表示されます。 Azure 情報保護の Analytics はパブリック プレビューであり、機能/機能を変更することがあります。
## <a name="next-steps"></a>次の手順
- 詳細については、Azure Information Protection での監査操作は、次を参照してください。、[技術コミュニティでの発表に関するブログをプレビュー](https://techcommunity.microsoft.com/t5/Azure-Information-Protection/Data-discovery-reporting-and-analytics-for-all-your-data-with/ba-p/253854)します。
- ダウンロード、[ポリシー API のサンプルを GitHub ポリシー API を試すから](https://azure.microsoft.com/resources/samples/?sort=0&term=mipsdk+policyapi)
| 41.878049 | 257 | 0.81392 | yue_Hant | 0.699833 |
e032af24611c0d3728f572bfd02e5f413e883a80 | 31,892 | md | Markdown | docs/mfc/reference/cdocument-class.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mfc/reference/cdocument-class.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mfc/reference/cdocument-class.md | changeworld/cpp-docs.zh-cn | fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: CDocument 类
ms.date: 11/04/2016
f1_keywords:
- CDocument
- AFXWIN/CDocument
- AFXWIN/CDocument::CDocument
- AFXWIN/CDocument::AddView
- AFXWIN/CDocument::BeginReadChunks
- AFXWIN/CDocument::CanCloseFrame
- AFXWIN/CDocument::ClearChunkList
- AFXWIN/CDocument::ClearPathName
- AFXWIN/CDocument::DeleteContents
- AFXWIN/CDocument::FindChunk
- AFXWIN/CDocument::GetAdapter
- AFXWIN/CDocument::GetDocTemplate
- AFXWIN/CDocument::GetFile
- AFXWIN/CDocument::GetFirstViewPosition
- AFXWIN/CDocument::GetNextView
- AFXWIN/CDocument::GetPathName
- AFXWIN/CDocument::GetThumbnail
- AFXWIN/CDocument::GetTitle
- AFXWIN/CDocument::InitializeSearchContent
- AFXWIN/CDocument::IsModified
- AFXWIN/CDocument::IsSearchAndOrganizeHandler
- AFXWIN/CDocument::LoadDocumentFromStream
- AFXWIN/CDocument::OnBeforeRichPreviewFontChanged
- AFXWIN/CDocument::OnChangedViewList
- AFXWIN/CDocument::OnCloseDocument
- AFXWIN/CDocument::OnCreatePreviewFrame
- AFXWIN/CDocument::OnDocumentEvent
- AFXWIN/CDocument::OnDrawThumbnail
- AFXWIN/CDocument::OnLoadDocumentFromStream
- AFXWIN/CDocument::OnNewDocument
- AFXWIN/CDocument::OnOpenDocument
- AFXWIN/CDocument::OnPreviewHandlerQueryFocus
- AFXWIN/CDocument::OnPreviewHandlerTranslateAccelerator
- AFXWIN/CDocument::OnRichPreviewBackColorChanged
- AFXWIN/CDocument::OnRichPreviewFontChanged
- AFXWIN/CDocument::OnRichPreviewSiteChanged
- AFXWIN/CDocument::OnRichPreviewTextColorChanged
- AFXWIN/CDocument::OnSaveDocument
- AFXWIN/CDocument::OnUnloadHandler
- AFXWIN/CDocument::PreCloseFrame
- AFXWIN/CDocument::ReadNextChunkValue
- AFXWIN/CDocument::ReleaseFile
- AFXWIN/CDocument::RemoveChunk
- AFXWIN/CDocument::RemoveView
- AFXWIN/CDocument::ReportSaveLoadException
- AFXWIN/CDocument::SaveModified
- AFXWIN/CDocument::SetChunkValue
- AFXWIN/CDocument::SetModifiedFlag
- AFXWIN/CDocument::SetPathName
- AFXWIN/CDocument::SetTitle
- AFXWIN/CDocument::UpdateAllViews
- AFXWIN/CDocument::OnFileSendMail
- AFXWIN/CDocument::OnUpdateFileSendMail
- AFXWIN/CDocument::m_bGetThumbnailMode
- AFXWIN/CDocument::m_bPreviewHandlerMode
- AFXWIN/CDocument::m_bSearchMode
- AFXWIN/CDocument::m_clrRichPreviewBackColor
- AFXWIN/CDocument::m_clrRichPreviewTextColor
- AFXWIN/CDocument::m_lfRichPreviewFont
helpviewer_keywords:
- CDocument [MFC], CDocument
- CDocument [MFC], AddView
- CDocument [MFC], BeginReadChunks
- CDocument [MFC], CanCloseFrame
- CDocument [MFC], ClearChunkList
- CDocument [MFC], ClearPathName
- CDocument [MFC], DeleteContents
- CDocument [MFC], FindChunk
- CDocument [MFC], GetAdapter
- CDocument [MFC], GetDocTemplate
- CDocument [MFC], GetFile
- CDocument [MFC], GetFirstViewPosition
- CDocument [MFC], GetNextView
- CDocument [MFC], GetPathName
- CDocument [MFC], GetThumbnail
- CDocument [MFC], GetTitle
- CDocument [MFC], InitializeSearchContent
- CDocument [MFC], IsModified
- CDocument [MFC], IsSearchAndOrganizeHandler
- CDocument [MFC], LoadDocumentFromStream
- CDocument [MFC], OnBeforeRichPreviewFontChanged
- CDocument [MFC], OnChangedViewList
- CDocument [MFC], OnCloseDocument
- CDocument [MFC], OnCreatePreviewFrame
- CDocument [MFC], OnDocumentEvent
- CDocument [MFC], OnDrawThumbnail
- CDocument [MFC], OnLoadDocumentFromStream
- CDocument [MFC], OnNewDocument
- CDocument [MFC], OnOpenDocument
- CDocument [MFC], OnPreviewHandlerQueryFocus
- CDocument [MFC], OnPreviewHandlerTranslateAccelerator
- CDocument [MFC], OnRichPreviewBackColorChanged
- CDocument [MFC], OnRichPreviewFontChanged
- CDocument [MFC], OnRichPreviewSiteChanged
- CDocument [MFC], OnRichPreviewTextColorChanged
- CDocument [MFC], OnSaveDocument
- CDocument [MFC], OnUnloadHandler
- CDocument [MFC], PreCloseFrame
- CDocument [MFC], ReadNextChunkValue
- CDocument [MFC], ReleaseFile
- CDocument [MFC], RemoveChunk
- CDocument [MFC], RemoveView
- CDocument [MFC], ReportSaveLoadException
- CDocument [MFC], SaveModified
- CDocument [MFC], SetChunkValue
- CDocument [MFC], SetModifiedFlag
- CDocument [MFC], SetPathName
- CDocument [MFC], SetTitle
- CDocument [MFC], UpdateAllViews
- CDocument [MFC], OnFileSendMail
- CDocument [MFC], OnUpdateFileSendMail
- CDocument [MFC], m_bGetThumbnailMode
- CDocument [MFC], m_bPreviewHandlerMode
- CDocument [MFC], m_bSearchMode
- CDocument [MFC], m_clrRichPreviewBackColor
- CDocument [MFC], m_clrRichPreviewTextColor
- CDocument [MFC], m_lfRichPreviewFont
ms.assetid: e5a2891d-e1e1-4599-8c7e-afa9b4945446
ms.openlocfilehash: e84ceb11ad789ef3bd6933292030ef2af6f1d817
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/31/2018
ms.locfileid: "50609308"
---
# <a name="cdocument-class"></a>CDocument 类
提供用户定义文档类的基本功能。
## <a name="syntax"></a>语法
```
class CDocument : public CCmdTarget
```
## <a name="members"></a>成员
### <a name="public-constructors"></a>公共构造函数
|名称|描述|
|----------|-----------------|
|[CDocument::CDocument](#cdocument)|构造 `CDocument` 对象。|
### <a name="public-methods"></a>公共方法
|名称|描述|
|----------|-----------------|
|[CDocument::AddView](#addview)|将视图附加到文档。|
|[CDocument::BeginReadChunks](#beginreadchunks)|初始化消息的分块读取。|
|[CDocument::CanCloseFrame](#cancloseframe)|高级可重写;关闭帧窗口查看本文档之前调用。|
|[CDocument::ClearChunkList](#clearchunklist)|清除的区块列表。|
|[CDocument::ClearPathName](#clearpathname)|清除文档对象的路径。|
|[CDocument::DeleteContents](#deletecontents)|调用以执行清理的文档。|
|[CDocument::FindChunk](#findchunk)|查找指定 GUID 的块区。|
|[CDocument::GetAdapter](#getadapter)|将指针返回到对象,实现`IDocument`接口。|
|[CDocument::GetDocTemplate](#getdoctemplate)|返回一个指向描述文档的类型的文档模板。|
|[CDocument::GetFile](#getfile)|将指针返回到所需`CFile`对象。|
|[CDocument::GetFirstViewPosition](#getfirstviewposition)|返回的第一个位置中的视图; 列表用于开始迭代。|
|[CDocument::GetNextView](#getnextview)|循环访问与文档关联的视图的列表。|
|[CDocument::GetPathName](#getpathname)|返回文档的数据文件的路径。|
|[CDocument::GetThumbnail](#getthumbnail)|调用以创建要将缩略图提供程序用来显示缩略图的位图。|
|[CDocument::GetTitle](#gettitle)|返回文档的标题。|
|[CDocument::InitializeSearchContent](#initializesearchcontent)|调用以初始化为搜索处理程序的搜索内容。|
|[CDocument::IsModified](#ismodified)|指示文档自上次保存以来是否已修改。|
|[CDocument::IsSearchAndOrganizeHandler](#issearchandorganizehandler)|指示是否此实例的`CDocument`搜索和组织的处理程序创建对象。|
|[CDocument::LoadDocumentFromStream](#loaddocumentfromstream)|调用以从流加载文档数据。|
|[CDocument::OnBeforeRichPreviewFontChanged](#onbeforerichpreviewfontchanged)|丰富的预览字体更改之前调用。|
|[CDocument::OnChangedViewList](#onchangedviewlist)|调用后添加或从文档中删除视图。|
|[CDocument::OnCloseDocument](#onclosedocument)|调用关闭文档。|
|[CDocument::OnCreatePreviewFrame](#oncreatepreviewframe)|需要创建丰富的预览的预览帧时由框架调用。|
|[CDocument::OnDocumentEvent](#ondocumentevent)|由框架调用以响应文档事件。|
|[CDocument::OnDrawThumbnail](#ondrawthumbnail)|重写此方法在派生类来绘制内容的缩略图中。|
|[CDocument::OnLoadDocumentFromStream](#onloaddocumentfromstream)|当它需要从流加载文档数据时由框架调用。|
|[CDocument::OnNewDocument](#onnewdocument)|调用以创建新文档。|
|[CDocument::OnOpenDocument](#onopendocument)|调用以打开现有文档。|
|[CDocument::OnPreviewHandlerQueryFocus](#onpreviewhandlerqueryfocus)|指示要从调用 GetFocus 函数返回 HWND 的预览处理程序。|
|[CDocument::OnPreviewHandlerTranslateAccelerator](#onpreviewhandlertranslateaccelerator)|指示要处理键击中向上传递,从在其中预览处理程序正在运行的进程的消息泵的预览处理程序。|
|[CDocument::OnRichPreviewBackColorChanged](#onrichpreviewbackcolorchanged)|丰富的预览背景色发生更改时调用。|
|[CDocument::OnRichPreviewFontChanged](#onrichpreviewfontchanged)|丰富的预览字体发生更改时调用。|
|[CDocument::OnRichPreviewSiteChanged](#onrichpreviewsitechanged)|丰富的预览站点发生更改时调用。|
|[CDocument::OnRichPreviewTextColorChanged](#onrichpreviewtextcolorchanged)|丰富的预览文本颜色发生更改时调用。|
|[CDocument::OnSaveDocument](#onsavedocument)|调用以将文档保存到磁盘。|
|[CDocument::OnUnloadHandler](#onunloadhandler)|当正在卸载模块预览处理程序时由框架调用。|
|[CDocument::PreCloseFrame](#precloseframe)|框架窗口已关闭之前调用。|
|[CDocument::ReadNextChunkValue](#readnextchunkvalue)|读取下一个块区值。|
|[CDocument::ReleaseFile](#releasefile)|释放文件以使其可供使用由其他应用程序。|
|[CDocument::RemoveChunk](#removechunk)|删除指定的 GUID 的块区。|
|[CDocument::RemoveView](#removeview)|分离文档中的视图。|
|[CDocument::ReportSaveLoadException](#reportsaveloadexception)|高级可重写;调用时打开或保存操作无法完成由于异常。|
|[CDocument::SaveModified](#savemodified)|高级可重写;调用以要求用户是否应保存该文档。|
|[CDocument::SetChunkValue](#setchunkvalue)|设置一个块区值。|
|[CDocument::SetModifiedFlag](#setmodifiedflag)|设置一个标志,指示自上次保存以来您已修改文档。|
|[CDocument::SetPathName](#setpathname)|设置文档所使用的数据文件的路径。|
|[CDocument::SetTitle](#settitle)|设置文档的标题。|
|[CDocument::UpdateAllViews](#updateallviews)|通知已修改文档的所有视图。|
### <a name="protected-methods"></a>受保护的方法
|名称|描述|
|----------|-----------------|
|[CDocument::OnFileSendMail](#onfilesendmail)|附加的文档发送一封电子邮件。|
|[CDocument::OnUpdateFileSendMail](#onupdatefilesendmail)|如果存在邮件支持,则将启用发送邮件命令。|
### <a name="public-data-members"></a>公共数据成员
|名称|描述|
|----------|-----------------|
|[CDocument::m_bGetThumbnailMode](#m_bgetthumbnailmode)|指定`CDocument`dllhost 缩略图创建对象。 应签入`CView::OnDraw`。|
|[CDocument::m_bPreviewHandlerMode](#m_bpreviewhandlermode)|指定的`CDocument`对象已通过为 prevhost `Rich Preview`。 应签入`CView::OnDraw`。|
|[CDocument::m_bSearchMode](#m_bsearchmode)|指定`CDocument`对象索引器或其他搜索应用程序来创建。|
|[CDocument::m_clrRichPreviewBackColor](#m_clrrichpreviewbackcolor)|指定丰富的预览窗口的背景的色。 由主机设置此颜色。|
|[CDocument::m_clrRichPreviewTextColor](#m_clrrichpreviewtextcolor)|指定丰富的预览窗口的前景色。 由主机设置此颜色。|
|[CDocument::m_lfRichPreviewFont](#m_lfrichpreviewfont)|指定丰富的预览窗口的文本字体。 此字体的信息是由主机设置。|
## <a name="remarks"></a>备注
文档表示用户通常使用文件打开命令打开和保存文件另存命令使用的数据的单位。
`CDocument` 支持标准操作,例如创建文档、 加载它,并将其保存。 Framework 操作使用的接口定义的文档`CDocument`。
应用程序可以支持多个类型的文档;例如,应用程序可能支持电子表格和文本文档。 每种类型的文档具有关联的文档模板;文档模板指定为该类型的文档使用了哪些资源 (例如,菜单、 图标或快捷键对应表)。 每个文档包含一个指向其关联的`CDocTemplate`对象。
用户与通过文档进行交互[CView](../../mfc/reference/cview-class.md)与之关联的对象。 视图呈现的文档框架窗口中的映像,并将用户输入解释为对文档的操作。 文档可以包含多个与之关联的视图。 当用户打开文档上的一个窗口时,框架将创建一个视图,并将其附加到文档。 文档模板指定哪种类型的视图和框架窗口用于显示每种类型的文档。
文档是框架的标准的一部分命令路由,并因此从标准用户界面组件 (如保存文件菜单项) 接收命令。 文档接收转发的活动视图的命令。 如果文档不处理给定的命令,它会转发到文档模板,用于管理它的命令。
修改后文档的数据,其视图的每个必须反映这些修改。 `CDocument` 提供了[UpdateAllViews](#updateallviews)成员函数,以便通知此类更改的视图,使视图可以根据需要重新绘制自己。 该框架还会提示用户在关闭之前保存修改后的文件。
若要实现典型的应用程序中的文档,必须执行以下操作:
- 从派生类`CDocument`每种类型的文档。
- 添加成员变量来存储每个文档的数据。
- 实现用于读取和修改文档的数据的成员函数。 文档中的视图是这些成员函数的最重要用户。
- 重写[cobject:: Serialize](../../mfc/reference/cobject-class.md#serialize)成员函数在您的文档类来写入和读取该文档的数据传入和传出磁盘。
`CDocument` 支持发送你的文档通过邮件,邮件支持 (MAPI) 是否存在。 请参阅文章[MAPI](../../mfc/mapi.md)并[MFC 中的 MAPI 支持](../../mfc/mapi-support-in-mfc.md)。
有关详细信息`CDocument`,请参阅[序列化](../../mfc/serialization-in-mfc.md),[文档/视图体系结构主题](../../mfc/document-view-architecture.md),并且[文档/视图创建](../../mfc/document-view-creation.md)。
## <a name="inheritance-hierarchy"></a>继承层次结构
[CObject](../../mfc/reference/cobject-class.md)
[CCmdTarget](../../mfc/reference/ccmdtarget-class.md)
`CDocument`
## <a name="requirements"></a>要求
**标头:** afxwin.h
## <a name="addview"></a> CDocument::AddView
调用此函数可将视图附加到文档。
```
void AddView(CView* pView);
```
### <a name="parameters"></a>参数
*pView*<br/>
指向要添加的视图。
### <a name="remarks"></a>备注
此函数将指定的视图添加到与该文档; 关联的视图的列表该函数还设置为本文档的视图的文档指针。 框架调用此函数时将新创建的视图对象附加到文档中。响应文件新建、 打开文件或新的窗口的命令或拆分的拆分器窗口时出现这种情况。
仅当手动创建和附加视图,请调用此函数。 通常会让连接通过定义的文档和视图框架[CDocTemplate](../../mfc/reference/cdoctemplate-class.md)对象关联的文档类,视图类和框架窗口类。
### <a name="example"></a>示例
[!code-cpp[NVC_MFCDocViewSDI#12](../../mfc/codesnippet/cpp/cdocument-class_1.cpp)]
## <a name="beginreadchunks"></a> CDocument::BeginReadChunks
初始化消息的分块读取。
```
virtual void BeginReadChunks ();
```
### <a name="remarks"></a>备注
## <a name="cancloseframe"></a> CDocument::CanCloseFrame
框架窗口显示在文档关闭之前由框架调用。
```
virtual BOOL CanCloseFrame(CFrameWnd* pFrame);
```
### <a name="parameters"></a>参数
*pFrame*<br/>
指向附加到文档的视图的框架窗口。
### <a name="return-value"></a>返回值
非零,则可以安全地关闭帧窗口; 如果否则为 0。
### <a name="remarks"></a>备注
默认实现会检查是否有其他显示文档的框架窗口。 如果指定的框架窗口,显示的文档的最后一个函数将提示用户保存文档,如果已修改。 如果你想要执行特殊处理的框架窗口关闭时,重写此函数。 这是一种高级可重写。
## <a name="cdocument"></a> CDocument::CDocument
构造 `CDocument` 对象。
```
CDocument();
```
### <a name="remarks"></a>备注
该框架将处理为你创建文档。 重写[OnNewDocument](#onnewdocument)成员函数以执行初始化基于每个文档; 这是在单文档界面 (SDI) 应用程序中特别重要。
## <a name="clearchunklist"></a> CDocument::ClearChunkList
清除的区块列表。
```
virtual void ClearChunkList ();
```
### <a name="remarks"></a>备注
## <a name="clearpathname"></a> CDocument::ClearPathName
清除文档对象的路径。
```
virtual void ClearPathName();
```
### <a name="remarks"></a>备注
清除从路径`CDocument`对象会导致应用程序,接下来保存文档时提示用户。 这使得**保存**命令的行为类似于**另存为**命令。
## <a name="deletecontents"></a> CDocument::DeleteContents
由框架调用以删除文档的数据,而不销毁`CDocument`对象本身。
```
virtual void DeleteContents();
```
### <a name="remarks"></a>备注
文档将被销毁之前调用它。 它也称为以确保将重复使用它之前,文档为空。 这一点特别重要的 SDI 应用程序,它使用只有一个文档;每当用户创建或打开另一个文档,文档将重复使用。 调用此函数可实现"编辑全部清除"或类似的命令将删除所有文档的数据。 此函数的默认实现不执行任何操作。 重写此函数可删除在文档中的数据。
### <a name="example"></a>示例
[!code-cpp[NVC_MFCDocView#57](../../mfc/codesnippet/cpp/cdocument-class_2.cpp)]
## <a name="findchunk"></a> CDocument::FindChunk
查找指定 GUID 的块区。
```
virtual POSITION FindChunk(
REFCLSID guid,
DWORD pid);
```
### <a name="parameters"></a>参数
*guid*<br/>
指定要查找块的 GUID。
pid<br/>
指定块区的 PID 查找。
### <a name="return-value"></a>返回值
如果成功,则在内部区块列表中的位置。 否则为,为 NULL。
### <a name="remarks"></a>备注
## <a name="getadapter"></a> CDocument::GetAdapter
返回一个指向一个对象,实现`IDocument`接口。
```
virtual ATL::IDocument* GetAdapter();
```
### <a name="return-value"></a>返回值
指向一个对象,实现`IDocument`接口。
### <a name="remarks"></a>备注
## <a name="getdoctemplate"></a> CDocument::GetDocTemplate
调用此函数可获得此文档类型指针的文档模板。
```
CDocTemplate* GetDocTemplate() const;
```
### <a name="return-value"></a>返回值
指向此文档类型或如果文档不受文档模板,则为 NULL 的文档模板的指针。
### <a name="example"></a>示例
[!code-cpp[NVC_MFCDocView#58](../../mfc/codesnippet/cpp/cdocument-class_3.cpp)]
## <a name="getfile"></a> CDocument::GetFile
调用此成员函数可获取一个指向`CFile`对象。
```
virtual CFile* GetFile(
LPCTSTR lpszFileName,
UINT nOpenFlags,
CFileException* pError);
```
### <a name="parameters"></a>参数
*lpszFileName*<br/>
一个字符串,是所需的文件的路径。 路径可能是相对值还是绝对值。
*pError*<br/>
指向现有文件异常对象,该值指示该操作的完成状态的指针。
*nOpenFlags*<br/>
共享和访问模式。 指定当打开文件时要执行的操作。 你可以组合 CFile 构造函数中所列选项[CFile::CFile](../../mfc/reference/cfile-class.md#cfile)通过使用按位 OR (|) 运算符。 一个访问权限和一个共享选项是必需的;`modeCreate`和`modeNoInherit`模式是可选的。
### <a name="return-value"></a>返回值
指向 `CFile` 对象的指针。
## <a name="getfirstviewposition"></a> CDocument::GetFirstViewPosition
调用此函数可获取与文档关联的视图的列表中的第一个视图的位置。
```
virtual POSITION GetFirstViewPosition() const;
```
### <a name="return-value"></a>返回值
一个位置值,可用于迭代[GetNextView](#getnextview)成员函数。
### <a name="example"></a>示例
[!code-cpp[NVC_MFCDocView#59](../../mfc/codesnippet/cpp/cdocument-class_4.cpp)]
## <a name="getnextview"></a> CDocument::GetNextView
调用此函数可循环访问所有文档中的视图。
```
virtual CView* GetNextView(POSITION& rPosition) const;
```
### <a name="parameters"></a>参数
*rPosition*<br/>
对位置值的引用返回上次调用`GetNextView`或[GetFirstViewPosition](#getfirstviewposition)成员函数。 此值不能为 NULL。
### <a name="return-value"></a>返回值
指向由标识视图的指针*rPosition*。
### <a name="remarks"></a>备注
该函数将返回由标识的视图*rPosition* ,然后设置*rPosition*到下一个视图列表中的位置值。 如果检索到的视图是在列表中上, 一次然后*rPosition*设置为 NULL。
### <a name="example"></a>示例
[!code-cpp[NVC_MFCDocView#59](../../mfc/codesnippet/cpp/cdocument-class_4.cpp)]
## <a name="getpathname"></a> CDocument::GetPathName
调用此函数可获取文档的磁盘文件的完全限定的路径。
```
const CString& GetPathName() const;
```
### <a name="return-value"></a>返回值
文档的完全限定的路径。 如果该文档尚未保存,或者没有与之关联的磁盘文件,此字符串为空。
## <a name="getthumbnail"></a> CDocument::GetThumbnail
创建要将缩略图的提供程序用来显示缩略图的位图。
```
virtual BOOL GetThumbnail(
UINT cx,
HBITMAP* phbmp,
DWORD* pdwAlpha);
```
### <a name="parameters"></a>参数
*cx*<br/>
指定的宽度和位图的高度。
*phbmp*<br/>
在函数已成功返回时包含位图的句柄。
*pdwAlpha*<br/>
包含一个 dword 值,指定的 alpha 通道值,当函数返回时已成功。
### <a name="return-value"></a>返回值
创建成功,则返回 TRUE,如果在缩略图的位图否则为 FALSE。
### <a name="remarks"></a>备注
## <a name="gettitle"></a> CDocument::GetTitle
调用此函数可获取文档的标题,通常派生自文档的文件名。
```
const CString& GetTitle() const;
```
### <a name="return-value"></a>返回值
文档的标题。
## <a name="initializesearchcontent"></a> CDocument::InitializeSearchContent
调用以初始化为搜索处理程序的搜索内容。
```
virtual void InitializeSearchContent ();
```
### <a name="remarks"></a>备注
重写此方法在派生类初始化搜索内容中。 内容应为字符串,且由分隔的部分";"。 例如,"点;矩形;ole 项"。
## <a name="ismodified"></a> CDocument::IsModified
调用此函数可确定自上次保存以来是否已修改文档。
```
virtual BOOL IsModified();
```
### <a name="return-value"></a>返回值
如果自上次保存; 该文档已被修改,非零值否则为 0。
## <a name="issearchandorganizehandler"></a> CDocument::IsSearchAndOrganizeHandler
指示是否此实例的`CDocument`已创建的搜索和组织的处理程序。
```
BOOL IsSearchAndOrganizeHandler() const;
```
### <a name="return-value"></a>返回值
返回 TRUE,如果此实例的`CDocument`已创建的搜索和组织的处理程序。
### <a name="remarks"></a>备注
当前此函数仅为在进程外服务器中实现的丰富的预览处理程序返回 TRUE。 可以在您的应用程序级别,以使此函数返回 TRUE,返回设置适当的标记 (m_bPreviewHandlerMode,m_bSearchMode,m_bGetThumbnailMode)。
## <a name="loaddocumentfromstream"></a> CDocument::LoadDocumentFromStream
调用以从流加载文档数据。
```
virtual HRESULT LoadDocumentFromStream(
IStream* pStream,
DWORD dwGrfMode);
```
### <a name="parameters"></a>参数
*pStream*<br/>
指向流的指针。 此流提供的命令行程序。
*dwGrfMode*<br/>
写入流的访问模式。
### <a name="return-value"></a>返回值
如果加载操作成功,否则为,错误代码的 HRESULT,则为 S_OK。
### <a name="remarks"></a>备注
您可以重写此方法在派生类自定义如何将数据加载从流中。
## <a name="m_bgetthumbnailmode"></a> CDocument::m_bGetThumbnailMode
指定`CDocument`dllhost 缩略图创建对象。 应签入`CView::OnDraw`。
```
BOOL m_bGetThumbnailMode;
```
### <a name="remarks"></a>备注
`TRUE` 指示文档已通过缩略图的 dllhost。
## <a name="m_bpreviewhandlermode"></a> CDocument::m_bPreviewHandlerMode
指定`CDocument`为丰富的预览 prevhost 通过创建对象。 应签入`CView::OnDraw`。
```
BOOL m_bPreviewHandlerMode;
```
### <a name="remarks"></a>备注
TRUE 表示,该文档通过创建 prevhost 为丰富的预览。
## <a name="m_bsearchmode"></a> CDocument::m_bSearchMode
指定`CDocument`由索引器或其他搜索应用程序创建对象。
```
BOOL m_bSearchMode;
```
### <a name="remarks"></a>备注
`TRUE` 指示文档已创建的索引器或其他搜索应用程序。
## <a name="m_clrrichpreviewbackcolor"></a> CDocument::m_clrRichPreviewBackColor
指定丰富的预览窗口的背景的色。 由主机设置此颜色。
```
COLORREF m_clrRichPreviewBackColor;
```
### <a name="remarks"></a>备注
## <a name="m_clrrichpreviewtextcolor"></a> CDocument::m_clrRichPreviewTextColor
指定的丰富的预览窗口的前景色。 由主机设置此颜色。
```
COLORREF m_clrRichPreviewTextColor;
```
### <a name="remarks"></a>备注
## <a name="m_lfrichpreviewfont"></a> CDocument::m_lfRichPreviewFont
指定丰富的预览窗口的文本字体。 此字体的信息是由主机设置。
```
CFont m_lfRichPreviewFont;
```
### <a name="remarks"></a>备注
## <a name="onbeforerichpreviewfontchanged"></a> CDocument::OnBeforeRichPreviewFontChanged
丰富的预览字体更改之前调用。
```
virtual void OnBeforeRichPreviewFontChanged();
```
### <a name="remarks"></a>备注
## <a name="onchangedviewlist"></a> CDocument::OnChangedViewList
添加到或从文档中移除视图之后,由框架调用。
```
virtual void OnChangedViewList();
```
### <a name="remarks"></a>备注
此函数的默认实现检查最后一个视图正被删除,是否是这样,将删除文档。 如果你想要执行特殊处理,框架将添加或删除视图时,重写此函数。 例如,如果你想要保持打开状态,即使没有附加到它的视图的文档,重写此函数。
## <a name="onclosedocument"></a> CDocument::OnCloseDocument
当文档关闭时,通常作为文件关闭命令的一部分,由框架调用。
```
virtual void OnCloseDocument();
```
### <a name="remarks"></a>备注
此函数的默认实现销毁的所有框架用于查看文档、 关闭视图、 清理文档的内容,然后调用[DeleteContents](#deletecontents)成员函数来删除该文档数据。
如果你想要执行特殊清理处理框架关闭文档时,重写此函数。 例如,如果文档表示数据库中的记录,你可能想要重写此函数以关闭数据库。 通过重写应调用此函数的基类版本。
## <a name="oncreatepreviewframe"></a> CDocument::OnCreatePreviewFrame
需要创建丰富的预览的预览帧时由框架调用。
```
virtual BOOL OnCreatePreviewFrame();
```
### <a name="return-value"></a>返回值
如果成功,则创建框架将返回 TRUE否则为 FALSE。
### <a name="remarks"></a>备注
## <a name="ondocumentevent"></a> CDocument::OnDocumentEvent
由框架调用以响应文档事件。
```
virtual void OnDocumentEvent(DocumentEvent deEvent);
```
### <a name="parameters"></a>参数
*deEvent*<br/>
[in]枚举的数据类型,它描述事件的类型。
### <a name="remarks"></a>备注
文档事件可能会影响多个类。 此方法负责处理而不会影响类的文档事件[CDocument 类](../../mfc/reference/cdocument-class.md)。 目前,唯一必须文档事件响应的类是[CDataRecoveryHandler 类](../../mfc/reference/cdatarecoveryhandler-class.md)。 `CDocument`类具有其他可重写方法负责处理效果`CDocument`。
下表列出了可能的值为*deEvent*以及与其对应的事件。
|“值”|相应的事件|
|-----------|-------------------------|
|`onAfterNewDocument`|已创建新文档。|
|`onAfterOpenDocument`|打开一个新的文档。|
|`onAfterSaveDocument`|已保存文档。|
|`onAfterCloseDocument`|已关闭的文档。|
## <a name="ondrawthumbnail"></a> CDocument::OnDrawThumbnail
重写此方法在派生类来绘制缩略图中。
```
virtual void OnDrawThumbnail(
CDC& dc,
LPRECT lprcBounds);
```
### <a name="parameters"></a>参数
*dc*<br/>
对设备上下文的引用。
*lprcBounds*<br/>
指定应在其中绘制缩略图的区域的边框。
### <a name="remarks"></a>备注
## <a name="onfilesendmail"></a> CDocument::OnFileSendMail
发送的消息通过常驻邮件主机 (如果有) 与文档一起作为附件。
```
void OnFileSendMail();
```
### <a name="remarks"></a>备注
`OnFileSendMail` 调用[OnSaveDocument](#onsavedocument) (保存) 到一个临时文件,然后通过电子邮件发送无标题和已修改文档序列化。 如果文档不被修改,则不需要的临时文件;发送原始。 `OnFileSendMail` 加载 MAPI32。如果尚未加载的 DLL。
一种特殊的实现`OnFileSendMail`有关[COleDocument](../../mfc/reference/coledocument-class.md)句柄正确复合文件。
`CDocument` 支持发送你的文档通过邮件,邮件支持 (MAPI) 是否存在。 请参阅文章[MAPI 主题](../../mfc/mapi.md)并[MFC 中的 MAPI 支持](../../mfc/mapi-support-in-mfc.md)。
## <a name="onloaddocumentfromstream"></a> CDocument::OnLoadDocumentFromStream
当它需要从流加载文档数据时由框架调用。
```
virtual HRESULT OnLoadDocumentFromStream(
IStream* pStream,
DWORD grfMode);
```
### <a name="parameters"></a>参数
*pStream*<br/>
指向传入流的指针。
*grfMode*<br/>
写入流的访问模式。
### <a name="return-value"></a>返回值
如果负载是成功,则为 S_OK否则为错误代码。
### <a name="remarks"></a>备注
## <a name="onnewdocument"></a> CDocument::OnNewDocument
由框架作为新文件的命令的一部分调用。
```
virtual BOOL OnNewDocument();
```
### <a name="return-value"></a>返回值
已成功初始化文档; 如果非零值否则为 0。
### <a name="remarks"></a>备注
此函数的默认实现调用[DeleteContents](#deletecontents)成员函数,以确保文档为空,并且随后会标记为干净新文档。 重写此函数可初始化新文档的数据结构。 通过重写应调用此函数的基类版本。
如果用户在 SDI 应用程序中选择新建文件命令,框架将使用此函数以重新初始化现有的文档,而不是创建一个新。 如果用户在多文档界面 (MDI) 应用程序中选择新文件时,框架每次创建一个新文档,并随后调用此函数可对其进行初始化。 必须将初始化代码置于此函数,而不是新文件的命令是在 SDI 应用程序中有效的构造函数中。
请注意,存在事例`OnNewDocument`调用两次。 作为 ActiveX 文档服务器嵌入文档时,将发生这种情况。 第一次调用该函数`CreateInstance`方法 (由`COleObjectFactory`-派生类) 和由第二次`InitNew`方法 (公开的`COleServerDoc`-派生类)。
### <a name="example"></a>示例
以下示例说明了初始化文档对象的替代方法。
[!code-cpp[NVC_MFCDocView#60](../../mfc/codesnippet/cpp/cdocument-class_5.cpp)]
[!code-cpp[NVC_MFCDocView#61](../../mfc/codesnippet/cpp/cdocument-class_6.cpp)]
[!code-cpp[NVC_MFCDocView#62](../../mfc/codesnippet/cpp/cdocument-class_7.cpp)]
## <a name="onopendocument"></a> CDocument::OnOpenDocument
由框架作为文件打开命令的一部分调用。
```
virtual BOOL OnOpenDocument(LPCTSTR lpszPathName);
```
### <a name="parameters"></a>参数
*lpszPathName*<br/>
指向要打开的文档的路径。
### <a name="return-value"></a>返回值
如果已成功加载该文档; 非零值否则为 0。
### <a name="remarks"></a>备注
此函数的默认实现将打开指定的文件,调用[DeleteContents](#deletecontents)成员函数,以确保该文档为空,将调用[cobject:: Serialize](../../mfc/reference/cobject-class.md#serialize)读取该文件和内容,然后将标记为干净的文档。 如果你想要使用的内容存档机制或文件机制以外,重写此函数。 例如,您可以编写文档其中代表数据库而不是单独的文件中的记录的应用程序。
如果用户在 SDI 应用程序中选择文件打开命令,框架将使用此函数重新初始化现有`CDocument`对象,而不是创建一个新。 如果用户选择文件已打开的 MDI 应用程序中,该框架构造一个新`CDocument`对象每次,然后调用此函数对其进行初始化。 必须将初始化代码置于此函数,而不是文件打开命令是在 SDI 应用程序中有效的构造函数中。
### <a name="example"></a>示例
以下示例说明了初始化文档对象的替代方法。
[!code-cpp[NVC_MFCDocView#60](../../mfc/codesnippet/cpp/cdocument-class_5.cpp)]
[!code-cpp[NVC_MFCDocView#61](../../mfc/codesnippet/cpp/cdocument-class_6.cpp)]
[!code-cpp[NVC_MFCDocView#62](../../mfc/codesnippet/cpp/cdocument-class_7.cpp)]
[!code-cpp[NVC_MFCDocView#63](../../mfc/codesnippet/cpp/cdocument-class_8.cpp)]
## <a name="onpreviewhandlerqueryfocus"></a> CDocument::OnPreviewHandlerQueryFocus
指示要返回 HWND 从调用中检索到的预览处理程序`GetFocus`函数。
```
virtual HRESULT OnPreviewHandlerQueryFocus(HWND* phwnd);
```
### <a name="parameters"></a>参数
*phwnd*<br/>
[out]此方法返回时,包含指向从调用返回 HWND`GetFocus`从预览处理程序的前台线程的函数。
### <a name="return-value"></a>返回值
如果成功,则为 S_OK 返回或以其他方式的错误值。
### <a name="remarks"></a>备注
## <a name="onpreviewhandlertranslateaccelerator"></a> CDocument::OnPreviewHandlerTranslateAccelerator
指示要处理键击中向上传递,从在其中预览处理程序正在运行的进程的消息泵的预览处理程序。
```
virtual HRESULT OnPreviewHandlerTranslateAccelerator(MSG* pmsg);
```
### <a name="parameters"></a>参数
*pmsg*<br/>
[in]指向窗口消息的指针。
### <a name="return-value"></a>返回值
如果可以通过预览处理程序处理键击消息,该处理程序对其进行处理,并返回 S_OK。 如果预览处理程序无法处理键击消息,它提供给宿主通过`IPreviewHandlerFrame::TranslateAccelerator`。 如果主机处理此消息,此方法返回 S_OK。 如果主机不处理消息,此方法将返回 S_FALSE。
### <a name="remarks"></a>备注
## <a name="onrichpreviewbackcolorchanged"></a> CDocument::OnRichPreviewBackColorChanged
丰富的预览背景色发生更改时调用。
```
virtual void OnRichPreviewBackColorChanged();
```
### <a name="remarks"></a>备注
## <a name="onrichpreviewfontchanged"></a> CDocument::OnRichPreviewFontChanged
丰富的预览字体发生更改时调用。
```
virtual void OnRichPreviewFontChanged();
```
### <a name="remarks"></a>备注
## <a name="onrichpreviewsitechanged"></a> CDocument::OnRichPreviewSiteChanged
丰富的预览站点发生更改时调用。
```
virtual void OnRichPreviewSiteChanged();
```
### <a name="remarks"></a>备注
## <a name="onrichpreviewtextcolorchanged"></a> CDocument::OnRichPreviewTextColorChanged
丰富的预览文本颜色发生更改时调用。
```
virtual void OnRichPreviewTextColorChanged();
```
### <a name="remarks"></a>备注
## <a name="onsavedocument"></a> CDocument::OnSaveDocument
由框架作为文件保存或文件另存为命令的一部分调用。
```
virtual BOOL OnSaveDocument(LPCTSTR lpszPathName);
```
### <a name="parameters"></a>参数
*lpszPathName*<br/>
指向该文件将保存到其中的完全限定路径。
### <a name="return-value"></a>返回值
已成功保存文档; 如果非零值否则为 0。
### <a name="remarks"></a>备注
此函数的默认实现将打开指定的文件,调用[cobject:: Serialize](../../mfc/reference/cobject-class.md#serialize)文档的数据写入文件,然后标记为文档清理。 如果你想要执行特殊处理,当框架保存文档时,重写此函数。 例如,您可以编写文档其中代表数据库而不是单独的文件中的记录的应用程序。
## <a name="onunloadhandler"></a> CDocument::OnUnloadHandler
卸载了 preview 处理程序时由框架调用。
```
virtual void OnUnloadHandler();
```
### <a name="remarks"></a>备注
## <a name="onupdatefilesendmail"></a> CDocument::OnUpdateFileSendMail
如果邮件支持 (MAPI) 存在,请启用 ID_FILE_SEND_MAIL 命令。
```
void OnUpdateFileSendMail(CCmdUI* pCmdUI);
```
### <a name="parameters"></a>参数
*pCmdUI*<br/>
一个指向[CCmdUI](../../mfc/reference/ccmdui-class.md)与 ID_FILE_SEND_MAIL 命令相关联的对象。
### <a name="remarks"></a>备注
否则该函数从菜单中,包括分隔符的上方或下方为相应的菜单项将删除 ID_FILE_SEND_MAIL 命令。 如果启用 MAPI MAPI32。DLL 文件的路径中,并在 WIN [Mail] 部分中出现。INI 文件,MAPI = 1。 大多数应用程序将此命令放在文件菜单上。
`CDocument` 支持发送你的文档通过邮件,邮件支持 (MAPI) 是否存在。 请参阅文章[MAPI 主题](../../mfc/mapi.md)并[MFC 中的 MAPI 支持](../../mfc/mapi-support-in-mfc.md)。
## <a name="precloseframe"></a> CDocument::PreCloseFrame
销毁框架窗口之前,由框架调用此成员函数。
```
virtual void PreCloseFrame(CFrameWnd* pFrame);
```
### <a name="parameters"></a>参数
*pFrame*<br/>
指向[CFrameWnd](../../mfc/reference/cframewnd-class.md)保存关联`CDocument`对象。
### <a name="remarks"></a>备注
它可以重写以提供自定义清理,但一定要调用的基的类。
默认值为`PreCloseFrame`不执行任何操作`CDocument`。 `CDocument`的派生类[COleDocument](../../mfc/reference/coledocument-class.md)并[CRichEditDoc](../../mfc/reference/cricheditdoc-class.md)使用此成员函数。
## <a name="readnextchunkvalue"></a> CDocument::ReadNextChunkValue
读取下一个块区值。
```
virtual BOOL ReadNextChunkValue(IFilterChunkValue** ppValue);
```
### <a name="parameters"></a>参数
*ppValue*<br/>
[out]当函数返回时, *ppValue*包含读取的值。
### <a name="return-value"></a>返回值
如果成功,则不为 0;否则为 0。
### <a name="remarks"></a>备注
## <a name="releasefile"></a> CDocument::ReleaseFile
若要释放的文件,使其可供使用由其他应用程序的框架由调用此成员函数。
```
virtual void ReleaseFile(
CFile* pFile,
BOOL bAbort);
```
### <a name="parameters"></a>参数
*pFile*<br/>
指向要释放的 CFile 对象的指针。
*bAbort*<br/>
指定文件是否是通过使用释放`CFile::Close`或`CFile::Abort`。 如果该文件是使用释放,则为 FALSE [CFile::Close](../../mfc/reference/cfile-class.md#close);如果该文件是使用发布则为 TRUE [cfile:: Abort](../../mfc/reference/cfile-class.md#abort)。
### <a name="remarks"></a>备注
如果*bAbort*为 TRUE,`ReleaseFile`调用`CFile::Abort`,并释放该文件。 `CFile::Abort` 不会引发异常。
如果*bAbort*为 FALSE 时,`ReleaseFile`调用`CFile::Close`并释放该文件。
重写此成员函数以发布该文件之前需要用户执行操作。
## <a name="removechunk"></a> CDocument::RemoveChunk
删除具有指定 GUID 的区块。
```
virtual void RemoveChunk(
REFCLSID guid,
DWORD pid);
```
### <a name="parameters"></a>参数
*Guid*<br/>
指定要删除一个块的 GUID。
*pid*<br/>
指定要删除的块区 PID。
### <a name="remarks"></a>备注
## <a name="removeview"></a> CDocument::RemoveView
调用此函数可分离文档中的视图。
```
void RemoveView(CView* pView);
```
### <a name="parameters"></a>参数
*pView*<br/>
指向要删除的视图。
### <a name="remarks"></a>备注
此函数与文档; 关联的视图的列表中移除指定的视图它还设置视图的文档指针为 NULL。 框架窗口已关闭或拆分器窗口的窗格已关闭时,由框架调用此函数。
调用此函数才手动分离视图。 通常将允许通过定义分离文档和视图的框架[CDocTemplate](../../mfc/reference/cdoctemplate-class.md)对象关联的文档类,视图类和框架窗口类。
在示例,请参阅[AddView](#addview)有关的示例实现。
## <a name="reportsaveloadexception"></a> CDocument::ReportSaveLoadException
如果引发异常,则调用 (通常[CFileException](../../mfc/reference/cfileexception-class.md)或[CArchiveException](../../mfc/reference/carchiveexception-class.md)) 时保存或加载文档。
```
virtual void ReportSaveLoadException(
LPCTSTR lpszPathName,
CException* e,
BOOL bSaving,
UINT nIDPDefault);
```
### <a name="parameters"></a>参数
*lpszPathName*<br/>
指向在时的文档的名称保存或加载。
*e*<br/>
指向所引发的异常。 可以为 NULL。
*bSaving*<br/>
标志指示正在进行中; 什么样的操作如果文档已保存,0 如果加载文档时,非零值。
*nIDPDefault*<br/>
如果函数没有指定一个更具体显示的错误消息的标识符。
### <a name="remarks"></a>备注
默认实现将检查异常对象,并查找专门说明原因的错误消息。 如果找不到特定的消息,或如果*e*为 NULL,由指定的常规消息*nIDPDefault*使用参数。 然后,该函数显示包含错误消息的消息框。 如果您想要提供附加的自定义失败消息,重写此函数。 这是一种高级可重写。
## <a name="savemodified"></a> CDocument::SaveModified
要关闭修改后的文档之前由框架调用。
```
virtual BOOL SaveModified();
```
### <a name="return-value"></a>返回值
非零,则可以安全地继续并关闭文档; 如果如果文档不应被关闭,则为 0。
### <a name="remarks"></a>备注
此函数的默认实现将显示一个消息框,询问用户是否将所做的更改保存到文档中,如果进行任何。 如果在程序需要不同的提示过程,重写此函数。 这是一种高级可重写。
## <a name="setchunkvalue"></a> CDocument::SetChunkValue
设置一个块区值。
```
virtual BOOL SetChunkValue (IFilterChunkValue* pValue);
```
### <a name="parameters"></a>参数
*pValue*<br/>
指定要设置的块区值。
### <a name="return-value"></a>返回值
如果成功,则不为 0;否则为 0。
### <a name="remarks"></a>备注
## <a name="setmodifiedflag"></a> CDocument::SetModifiedFlag
调用此函数后所做的对文档进行任何修改。
```
virtual void SetModifiedFlag(BOOL bModified = TRUE);
```
### <a name="parameters"></a>参数
*bModified*<br/>
指示文档是否已修改的标记。
### <a name="remarks"></a>备注
通过持续调用此函数时,可确保该框架会提示用户关闭文档前保存更改。 通常应使用默认值为 TRUE *bModified*参数。 若要将文档如清除标记 (未修改),调用此函数,值为 FALSE。
## <a name="setpathname"></a> CDocument::SetPathName
调用此函数可指定文档的磁盘文件的完全限定的路径。
```
virtual void SetPathName(
LPCTSTR lpszPathName,
BOOL bAddToMRU = TRUE);
```
### <a name="parameters"></a>参数
*lpszPathName*<br/>
指向要用作文档的路径的字符串。
*bAddToMRU*<br/>
确定是否将文件名添加到最近使用的 (MRU) 文件列表。 如果为 TRUE,将文件名添加;如果为 FALSE,未添加。
### <a name="remarks"></a>备注
具体取决于值*bAddToMRU*的路径不添加,或者未添加到由应用程序维护的 MRU 列表。 请注意,一些文档不与磁盘文件相关联。 仅当您要重写用于打开和保存文件由框架使用的默认实现调用此函数。
## <a name="settitle"></a> CDocument::SetTitle
调用此函数可指定文档的标题 (框架窗口的标题栏中显示的字符串)。
```
virtual void SetTitle(LPCTSTR lpszTitle);
```
### <a name="parameters"></a>参数
*lpszTitle*<br/>
指向要用作文档的标题的字符串。
### <a name="remarks"></a>备注
调用此函数将更新的显示文档的所有框架窗口的标题。
## <a name="updateallviews"></a> CDocument::UpdateAllViews
调用此函数后修改该文档。
```
void UpdateAllViews(
CView* pSender,
LPARAM lHint = 0L,
CObject* pHint = NULL);
```
### <a name="parameters"></a>参数
*pSender*<br/>
指向的视图,修改文档,则所有视图都将更新为 null。
*lHint*<br/>
包含有关修改信息。
*pHint*<br/>
指的对象存储有关修改信息。
### <a name="remarks"></a>备注
调用后,应调用此函数[SetModifiedFlag](#setmodifiedflag)成员函数。 此函数将通知每个视图附加到文档,但由指定的视图除外*pSender*,修改文档。 您通常此函数从调用视图类后,用户已更改的文档通过视图。
此函数将调用[CView::OnUpdate](../../mfc/reference/cview-class.md#onupdate)之外的发送的文档的视图的每个成员函数查看,请传递*pHint*并*lHint*。 使用这些参数将信息传递给视图获取有关对文档所做的修改。 您可以使用信息进行编码*lHint*和/或您可以定义[CObject](../../mfc/reference/cobject-class.md)-派生类来存储有关所做的修改的信息并传递一个对象的类使用*pHint*. 重写`CView::OnUpdate`成员函数在你[CView](../../mfc/reference/cview-class.md)的派生类,以优化的视图显示基于传递的信息的更新。
### <a name="example"></a>示例
[!code-cpp[NVC_MFCDocView#64](../../mfc/codesnippet/cpp/cdocument-class_9.cpp)]
## <a name="see-also"></a>请参阅
[MFC 示例 MDIDOCVW](../../visual-cpp-samples.md)<br/>
[MFC 示例 SNAPVW](../../visual-cpp-samples.md)<br/>
[MFC 示例 NPP](../../visual-cpp-samples.md)<br/>
[CCmdTarget 类](../../mfc/reference/ccmdtarget-class.md)<br/>
[层次结构图](../../mfc/hierarchy-chart.md)<br/>
[CCmdTarget 类](../../mfc/reference/ccmdtarget-class.md)<br/>
[CView 类](../../mfc/reference/cview-class.md)<br/>
[CDocTemplate 类](../../mfc/reference/cdoctemplate-class.md)
| 25.211067 | 339 | 0.737552 | yue_Hant | 0.968472 |
e032b0363e6113c4bc7d633c6e5c0e556ea19569 | 1,564 | md | Markdown | CHANGELOG.md | chooxide/tachyons | 0ab2be34ec9d0b1144cb8eb5adbf5ea936f178c6 | [
"Apache-2.0",
"MIT"
] | 5 | 2018-10-11T08:11:03.000Z | 2018-10-12T00:08:06.000Z | CHANGELOG.md | chooxide/tachyons | 0ab2be34ec9d0b1144cb8eb5adbf5ea936f178c6 | [
"Apache-2.0",
"MIT"
] | null | null | null | CHANGELOG.md | chooxide/tachyons | 0ab2be34ec9d0b1144cb8eb5adbf5ea936f178c6 | [
"Apache-2.0",
"MIT"
] | null | null | null | ## 2018-10-11, Version 0.1.1
### Commits
- [[`0928def021`](https://github.com/chooxide/tachyons/commit/0928def021d3e27e5e6a8219bff5d364f9a2c158)] (cargo-release) version 0.1.1 (Yoshua Wuyts)
- [[`0ddffaba97`](https://github.com/chooxide/tachyons/commit/0ddffaba973824ac62b16e210de225229997415c)] add example (Yoshua Wuyts)
- [[`48ea507cd0`](https://github.com/chooxide/tachyons/commit/48ea507cd0813210a99d294ab762f0f305e40f04)] init tachyons vars (Yoshua Wuyts)
- [[`a4b7d83b2b`](https://github.com/chooxide/tachyons/commit/a4b7d83b2bae102825ed3b0a5b858f3fae588d56)] . (Yoshua Wuyts)
### Stats
```diff
.github/CODE_OF_CONDUCT.md | 75 ++++++++++++++++-
.github/CONTRIBUTING.md | 63 +++++++++++++-
.github/ISSUE_TEMPLATE.md | 41 +++++++++-
.github/PULL_REQUEST_TEMPLATE.md | 21 ++++-
.github/stale.yml | 17 ++++-
.gitignore | 7 +-
.gitmodules | 3 +-
.travis.yml | 13 +++-
CERTIFICATE | 37 ++++++++-
Cargo.toml | 12 +++-
LICENSE-APACHE | 190 ++++++++++++++++++++++++++++++++++++++++-
LICENSE-MIT | 21 ++++-
README.md | 65 ++++++++++++++-
examples/main.rs | 15 +++-
rustfmt.toml | 2 +-
src/lib.rs | 12 +++-
src/variables.css | 160 ++++++++++++++++++++++++++++++++++-
tachyons-custom | 1 +-
18 files changed, 755 insertions(+)
```
| 48.875 | 149 | 0.498721 | yue_Hant | 0.452822 |
e03386153320b5b1093933831beff1dc687b4c44 | 1,612 | md | Markdown | _posts/people-love/22/w/bcbc/2021-04-07-isabella-day.md | chito365/p | d43434482da24b09c9f21d2f6358600981023806 | [
"MIT"
] | null | null | null | _posts/people-love/22/w/bcbc/2021-04-07-isabella-day.md | chito365/p | d43434482da24b09c9f21d2f6358600981023806 | [
"MIT"
] | null | null | null | _posts/people-love/22/w/bcbc/2021-04-07-isabella-day.md | chito365/p | d43434482da24b09c9f21d2f6358600981023806 | [
"MIT"
] | null | null | null | ---
id: 17050
title: Isabella Day
date: 2021-04-07T18:50:54+00:00
author: victor
layout: post
guid: https://ukdataservers.com/isabella-day/
permalink: /04/07/isabella-day
tags:
- show love
- unspecified
- single
- relationship
- engaged
- married
- complicated
- open relationship
- widowed
- separated
- divorced
- Husband
- Wife
- Boyfriend
- Girlfriend
category: Guides
---
* some text
{: toc}
## Who is Isabella Day
Most known for her role as Isabella on the ABC series Cristela, this child actress also had roles on Nickelodeon’s Sam & Cat, NBC’s The Night Shift and NBC’s Superstore.
## Prior to Popularity
She began studying musical theatre at the age of four. By the age of nine, she had appeared in commercials for McDonalds and Ore-Ida and had made her series debut in an episode of Brooklyn Nine-Nine. She was a student at Fairmont Private Schools on the Anaheim Hills Campus.
## Random data
When not filming television episodes, she trained in the martial art of TaeKwonDo.
## Family & Everyday Life of Isabella Day
She grew up alongside her older sister in Littleton, Colorado. She often posts pictures of her family on social media.
## People Related With Isabella Day
She had a recurring role in NBC’s Superstore starring America Ferrera.
| 20.405063 | 274 | 0.609181 | eng_Latn | 0.997465 |
e033ed50d3635d5c8bbb97667e5f324e8c6c191e | 57,887 | md | Markdown | Statistics/Featured builders info list.md | DoubleCookies/gd-statistics | 20e30935de0e4ef6a34ba21c709a9329d4cf3637 | [
"MIT"
] | 1 | 2020-07-31T18:50:33.000Z | 2020-07-31T18:50:33.000Z | Statistics/Featured builders info list.md | DoubleCookies/gd-statistics | 20e30935de0e4ef6a34ba21c709a9329d4cf3637 | [
"MIT"
] | 5 | 2021-11-18T09:28:08.000Z | 2022-03-02T13:42:49.000Z | Statistics/Featured builders info list.md | DoubleCookies/gd-statistics | 20e30935de0e4ef6a34ba21c709a9329d4cf3637 | [
"MIT"
] | null | null | null | | Author | Count |
|:---:|:---:|
null | 336
ViPriN | 131
ZenthicAlpha | 107
Serponge | 99
Experience D | 93
YunHaSeu14 | 85
Darwin | 79
Jeyzor | 71
mulpan | 70
f3lixsram | 69
Glittershroom | 68
Jayuff | 67
Usermatt18 | 67
BraedenTheCroco | 64
Gelt | 62
Minesap | 61
Cirtrax | 59
Dorami | 59
alkali | 57
SMBlacktime | 56
ZelLink | 56
ChuchitoDomin | 56
SirHadoken | 55
WerewolfGD | 55
xLunaire | 55
SamMaxx | 54
Echonox | 54
Etzer | 52
DangerKat | 52
danolex | 52
FunnyGame | 51
Adiale | 51
TheAlmightyWave | 51
AleXins | 50
Mazl | 50
JerkRat | 49
Skitten | 49
TheRealDarnoc | 48
izhar | 48
MrLorenzo | 47
CastriX | 47
pineapple | 46
Nemsy | 44
DanZmeN | 43
Optical | 43
Lemons | 43
abdula | 42
Spectex | 41
Split72 | 41
Mineber | 40
haoN | 39
TriAxis | 39
Nico99 | 39
Olympic | 39
AutoNick | 38
TrueNature | 38
Spu7Nix | 38
xcy7 | 38
ghostface | 38
Zoroa | 38
ItzMezzo | 38
Wergulz16 | 38
LandonGMD | 37
Xstar7 | 37
Berkoo | 37
Knots | 37
Awzer | 37
Codex | 36
Wilz | 36
Pipenachho | 36
Nexender | 36
Subwoofer | 35
Rob Buck | 35
ElectroidDash | 35
Agils | 34
Hann | 34
Sumsar | 34
bunch | 34
balli | 34
Rublock5 | 33
gluewis | 33
SaabS | 33
Texic | 33
Findexi | 33
Noweek | 33
DreamEater | 32
TamaN | 32
pocke | 32
iMina | 32
TrueChaos | 32
Oskux | 31
cerufiffy | 31
Iid4sh3riI | 31
RealZober | 31
JefryKawaii | 31
Hyenada | 31
GD Jose | 31
Mantevian | 31
KeiAs | 31
DevinCoyote | 30
Nicki1202 | 30
IIINePtunEIII | 30
Waffl3X | 30
Daivrt | 30
ASonicMen | 30
Picaaaa | 29
Torch121 | 29
ISparkI | 29
RobzombiGD | 29
DanielDlc | 28
Dubbayoo | 28
Zidnes | 28
DesTicY | 28
Axils | 28
AbstractDark | 28
BryanC2 | 28
RyanAB | 27
Superopi | 27
Dudex | 27
Serinox | 27
sgam | 27
AirForce | 27
kale | 27
BranSilver | 26
NASHII | 26
Manix648 | 26
Vesimeloni | 26
Evan048 | 26
Thomartin | 26
M3nhHu3 | 26
ILRELL | 26
noobas | 26
Florent | 26
sweetdude | 26
Lyod | 25
Annon | 25
lilbin | 25
Alkatraz | 25
Sharks | 25
Pauze | 25
Gepsoni4 | 25
Picha | 25
IKaira | 25
ChaSe | 24
Squall7 | 24
Gusearth | 24
Piseto | 24
Shatt3rium | 24
Xyle | 24
AmorAltra | 24
Noriega | 24
FUNKYpercy | 23
GirlyAle02 | 23
Hinds | 23
Fofii | 23
distortt | 23
schady | 23
Dubst3pF4n4tic | 23
Vertuoz | 23
MuLhM | 23
2turntdeezy | 23
Evasium622 | 23
Rustam | 23
Xatz | 23
FreakEd7 | 23
iFuse | 23
Hyper314 | 23
RadiationV2 | 22
MrKoolTrix | 22
x8Px | 22
Riky2610 | 22
NukeIIX | 22
IsakN016 | 22
Cubix | 22
Inf3rnal | 22
LeocreatorX | 22
CodeNate | 22
Aleiz21 | 22
jslimeyt | 22
janucha | 22
juandeman | 22
JustL3o | 22
Vexes7 | 22
RoyalP | 22
xSAgiTt | 22
Markydash | 21
MasK463 | 21
Jerry Bronze V | 21
EpiEpz | 21
Alex112300 | 21
Luddee | 21
Edooox | 21
Rawin | 21
Pechuga20 | 21
loogiah | 21
legitshot | 21
Retropt | 21
Partition | 21
DaFinn | 20
Fillipsmen | 20
Spa8 | 20
Jezzel | 20
LeX97 | 20
NeutKat | 20
Dashtrict | 20
CreatorRT | 20
GeomTer | 20
Enlex | 20
Fletzer | 20
Pawlogates | 20
atmospher | 20
MortleX | 20
Ferdefunky | 19
Akvaz | 19
NogZ | 19
Unzor | 19
Dhafin | 19
Elisione | 19
LazerBlitz | 19
G4lvatron | 19
Nikce | 19
Yendis | 19
Peton | 19
nasgubb | 19
Sillow | 19
flash | 19
HcreatoR | 19
ValkyrieMaster | 19
Kaii07 | 19
SUOMI | 19
WOOGI1411 | 19
Quiken | 19
OSIRIS GD | 19
IZann | 19
Snarlax523 | 19
Optation | 19
Pan | 19
Ploid | 19
PotatoBaby | 19
Sharkarie | 19
KFAOpitar | 18
luisJR | 18
Darixen | 18
Rabbitical | 18
HollowEarth | 18
Hermar | 18
Moffer | 18
ZoomkS | 18
iriswolfx | 18
DzRAS | 18
GiggsRH | 18
robotchief | 18
DHaner | 18
stubbypinata | 18
Carminius | 18
Spord | 18
Rek3dge | 18
RoiMousti | 18
Pennutoh | 18
kr1t | 18
TrueCopa | 18
Vrymer | 18
BlueLite | 18
Nariel | 18
goose | 18
ausk | 18
mikalgd | 18
Tongii | 17
RealSoulDash | 17
IIAnubisI | 17
Toxic GD | 17
Allex20 | 17
Tronzeki | 17
TheTrueEclipse | 17
Schneider9 | 17
Jeikins | 17
Paintingorange | 17
Fault | 17
HutniX | 17
EnZore | 17
MattySpark | 17
Amori | 17
Crombie | 17
Jovc | 17
AeonAir | 17
akApple | 17
SkiesCore | 17
Falkuma | 17
Havok | 17
Regulus24 | 17
JacobROso | 17
Neutronic | 17
Klafterno | 17
DeniPol | 17
Lixars | 17
JA4Y | 17
DamianosKabanos | 16
MrSpaghetti | 16
Roli GD | 16
8BitFudge | 16
HTigerzGD | 16
EpicMasta11 | 16
ByBoy 11 | 16
Jordi6304 | 16
Vermillion | 16
Diamondgirl01 | 16
fayaddd | 16
FaekI | 16
Flosia | 16
IIIJaylexIII | 16
Shemo | 16
SleyGD | 16
MalZir | 16
CatronixGD | 16
DashTy | 16
TheDevon | 16
Wulzy | 16
Flux | 16
JamAttack | 16
AceVict | 16
Elody | 16
chamoylol | 16
DreamTide | 16
Emadeus | 16
VrageraGD | 16
Wintter | 15
Arysta | 15
Xoroz | 15
iZappeR | 15
OrotS | 15
Xender Game | 15
softable | 15
JustJohn | 15
realwhata | 15
Cdpre | 15
Zinht | 15
Bluskys | 15
DWShin | 15
HuTaoFan | 15
thazm | 15
neigefeu | 15
Hikex | 15
BlueRimz | 15
amixam | 15
Ethrakk | 15
God Of Music | 15
crashyy | 15
Filaret | 15
Zyzyx | 15
RealKamijo | 15
PAHC | 15
Non4med | 15
rafer | 15
Ellisha | 15
Drob3 | 15
VecToRx GD | 14
Ragnarus | 14
JustBasic | 14
99geometrydash | 14
Edge | 14
MrCheeseTigrr | 14
Squidely | 14
BitZGD | 14
Blaireswip | 14
Lake | 14
MaykollGD | 14
SmitN | 14
Marrk | 14
Andro3d | 14
DorSha | 14
Angelism | 14
P4nther | 14
Insendium | 14
Rifky12 | 14
Jghost | 14
ItsJustCohen | 14
Anzer | 14
Wixers | 14
xPix3lest | 14
PENTpresents | 14
DYSCO | 14
heatherhayes | 14
Staps | 14
DORABAE | 14
Alkatreize | 14
neogamerGD | 14
AnLa | 14
iIs4n4rIi | 14
Flukester | 14
RNBW | 14
ithedarki | 14
Moxirulo | 14
Nezziee | 14
Moffe | 14
Hakkou | 14
LuisGX12 | 14
xXLOCOXx | 14
Reunomi | 14
nainteils | 14
Xaro | 14
Galaxxyss | 14
GeraldBrown | 14
Xylph | 14
Starbooy | 14
SrMDK | 14
MaxyLAND | 14
iIFrostIi | 14
MrZiedZ | 14
ZecretDash | 13
BizaareGD | 13
Galzo | 13
skrillero01 | 13
Chlorines | 13
Supris | 13
mrjedi | 13
mathboy | 13
TAKUMII | 13
Tedesco96 | 13
Df0rDie | 13
charky | 13
Platnuu | 13
jaffytaffy | 13
Vadi | 13
Lerevon | 13
Blochyy | 13
KittyDoge | 13
Svyre | 13
Haru | 13
Hyenaedon | 13
xMisery | 13
Kebabbo | 13
Destriv | 13
Alexchii | 13
Cubiccc | 13
SirGumball | 13
TheRealSpex | 13
Aerid | 13
FloxMi | 13
victorinoxX | 13
theGoT | 13
jacr360 | 13
IFSGeorge | 13
Jbeast15 | 13
Whirl | 13
Extrox | 12
LmAnubis | 12
EvoNuclearGD | 12
Baanz | 12
shocksidian | 12
Ross12344 | 12
Gachristian1 | 12
spuddles | 12
Belastet | 12
Megaman9 | 12
TropoGD | 12
Minimi427 | 12
Eridani | 12
Leoftine | 12
Dioxis | 12
Bianox | 12
MorpheiX | 12
OverZero | 12
IyuriI | 12
Swirl | 12
TD Epic | 12
Itserson | 12
Norcda Childa | 12
Gafen | 12
Minity | 12
Atlant | 12
Szilu | 12
LunarSimg | 12
PTyXaLPHaZ | 12
By7on | 12
Xenone | 12
xtobe5 | 12
Rustere | 12
Oasis | 12
R503Sv | 12
Twoots | 12
Meeloz | 12
Ardant | 12
AlexS2003 | 12
TroxxP1 | 12
Qubb | 12
zejoant | 12
Aiyamii | 12
Stamina | 12
GrimScythe | 12
MaJackO | 12
wlfn | 12
Woom | 12
Mojitoz | 12
Luxew | 12
Fury0313 | 12
EndLevel | 12
TianzCraftGD | 12
MazZedy | 12
DiMaViKuLov26 | 12
BloodStorm GD | 12
Myinus | 11
MMSOKA | 11
JumpingTheory | 11
MattMrn | 11
Angelism2 | 11
Vallier | 11
ReYzen | 11
Pollapo | 11
OutlawMz | 11
MaxiKD | 11
Ggb0y | 11
Chiand | 11
ItzM1ntJellY | 11
RayOriens | 11
ItsXZ | 11
Rockstr99 | 11
Vlacc | 11
xenoteric | 11
PleoSlim RMD | 11
Hir0shi | 11
BowtieGD | 11
Myo0 | 11
matty2003 | 11
buoGrOsSo777 | 11
FilleFjonk | 11
Alfred PKNess | 11
Censr | 11
Jabbagrullo | 11
ZenThriXGD | 11
talia067 | 11
GrenAde | 11
iZeo | 11
FastRefleksX | 11
DavoBoss | 11
XanN | 11
iZinaD4sh | 11
SleepYcAAt | 11
Jirk | 11
ChiN3x | 11
ThatJack | 11
Giron | 11
ZionMikido | 11
Requacc | 11
IamXstep | 11
Masterale | 11
lioleo | 10
- | 10
BitZel | 10
Apstrom | 10
SilverSoul | 10
Desumari | 10
crepuscole | 10
kodex360 | 10
JonathanGD | 10
Jakedoggd | 10
NukeNacho | 10
Xarcotz | 10
Dankx | 10
ZenthiMegax | 10
realtheo | 10
Lorserix | 10
ghathiitho | 10
Nottus | 10
Zircone | 10
Mitchell | 10
CapnColbyCube | 10
Andromeda GMD | 10
Ryder | 10
Stormfly | 10
Churrasco | 10
N R G | 10
Renn241 | 10
Player Time | 10
Laxoutz | 10
Nwolc | 10
xGW | 10
Agdor | 10
Optonix | 10
ty12345678 | 10
SirZaiss | 10
FrostDragonGD | 10
R3XX3R | 10
Volkmire289 | 10
EzzequielL | 10
Spym | 10
Tickle GD | 10
Lyriaki | 10
Kips | 10
LTGS | 10
ValentInsanity | 10
shaggy23 | 10
TheRealSalad | 10
SupamarioXx | 10
Ajedaboss | 10
Vexentic | 10
beptile | 10
Rekiii | 10
Zanna83 | 10
TriPodX | 10
xenots | 10
Terron | 10
Xylluk | 10
PencilDog | 10
RayZN | 10
Xevenfurious | 10
Creatorlings | 10
SP ValuE | 10
AlexOnYT | 10
Syunide | 10
Wolfkami | 10
Lebi06 | 10
zJei | 10
Anya21 | 10
StevenKsttle | 10
Danola | 10
RehanZ | 10
Star77 | 10
ImSamo | 10
B1n4ry | 10
GETZUCCED | 10
Iqrar99 | 10
GDvesuvius | 10
Th04 | 10
Rapace | 10
LexipGG | 10
MisterM | 10
KrmaL | 10
PuffiTree | 10
Presta | 10
ZubWill | 10
syndd | 10
Charaa15 | 10
Benqun | 9
Vonic | 9
guppy28 | 9
LXVT | 9
Jax | 9
TheRealModiK | 9
Lieb | 9
Credd | 9
weoweoteo | 9
DrallumGC | 9
16lord | 9
Ares | 9
ZephiroX | 9
cronibet | 9
willy5000 | 9
im fernando | 9
Apollone | 9
GDObsco | 9
SkCray Ace | 9
stardust1971 | 9
wless | 9
crossblade | 9
aLeLsito | 9
Whitehead | 9
Alexcont | 9
icewither | 9
NiTro451 | 9
JustPotatoNow | 9
stretcher500 | 9
shademirai | 9
PoIsEn324 | 9
nys | 9
GrenadeofTacos | 9
VeXyn | 9
Jamerry | 9
soda2D | 9
James | 9
oc3andark | 9
DYZEX | 9
Erdyuri | 9
iDancre | 9
Ardolf | 9
Hinataa | 9
Samoht | 9
FozaeKitty | 9
Existence | 9
Patchimator | 9
Soluble | 9
Insidee | 9
iILoBeeIi | 9
doritos1 | 9
iITesseractIi | 9
CreatorMoldy | 9
Zerenity | 9
TheGalaxyCat | 9
GiaMmiX | 9
xKstrol | 9
Nox | 9
twigxcabaret | 9
DreamNoter | 9
R3S1GNAT1ON | 9
SunBun | 9
CorroX | 9
Robotic24 | 9
SpooFy | 9
Buffed | 9
ZelfTix | 9
TheRM22 | 9
VytraxVerbast | 9
ViP3r | 9
DavJT | 9
SirZeus | 9
lucut | 9
Earthum | 9
DhrAw | 9
Morce | 9
ItsBon | 9
Zafkiel7 | 9
Pxj | 9
SoulzGaming | 9
TMNGaming | 9
BEkID1442 | 9
Sting871 | 9
Lugunium | 9
Nxtion | 9
Alphirox | 9
Rabb2t | 9
LaxHDz | 9
FlacoGD | 9
Nickalopogas | 9
AgentJDN | 9
ZepherGD | 9
RoXion | 9
Fixinator | 9
BaconPotato | 9
ELITEXD | 9
Itocp | 9
hyperfox | 9
BrayanKJ | 8
Hdow | 8
kDarko | 8
Zobros | 8
Sneakyx | 8
JeremyGD4 | 8
Underings | 8
CrispyCrepes | 8
Chromatik | 8
BlUhOl | 8
AbsoleN | 8
Dominus | 8
Tartofrez | 8
99percent | 8
TyphoonThunder | 8
VadriX | 8
RhakY | 8
NatDak | 8
Laaloooz | 8
EthanLX | 8
Agate | 8
Joenuh | 8
Waltertheboss64 | 8
Jo2000 | 8
Haminopulus | 8
GilangRf | 8
Sminx | 8
khelado | 8
CutieKitty | 8
HeniXs | 8
ElEcmEtAl | 8
chona026 | 8
YakobNugget | 8
dhk2725 | 8
Rlol | 8
etgx | 8
Jamzko | 8
DennysGMD | 8
Megantwoo | 8
RealStarShip | 8
1374 | 8
Niji | 8
Stardevoir | 8
2003devin | 8
Ellixium | 8
Ludicrous | 8
Kingoo | 8
JamieZz | 8
MagicianXx | 8
Alex1304 | 8
AzuFX | 8
SlacTe | 8
ViralDL | 8
Dragun | 8
Epxa | 8
Incidius | 8
AlrexX | 8
Supmaxy | 8
Allan | 8
nyab | 8
Alderite | 8
Syniath | 8
Altin | 8
Chayper | 8
VoBrAn | 8
Lewny | 8
Korita | 8
swwft | 8
TempledoX | 8
KatrieX | 8
Fss | 8
hashgd | 8
Pasiblitz | 8
Fir3wall | 8
ClasterJack | 8
djskilling | 8
LaZye | 8
Antonsen | 8
J735 | 8
tpAesuki | 8
helito6x3 | 8
Defiant | 8
DiaGram | 8
NnolokK | 8
Studt | 8
IsmailDaBest | 8
RapidBlaze | 8
MaxK | 8
Floc4bulary | 8
AcZor | 8
Knobbelboy | 8
KiroShiMaru | 8
ThomasMP | 8
zDemonDasher | 8
sqb | 8
TotoTie | 8
Al3xD | 8
Nevec | 8
Milos482 | 8
Maxann | 8
R4NGER | 8
RicoLP | 7
Surv | 7
durianhead | 7
felixen | 7
ZaDoXXZl | 7
IvashkaUA | 7
Systile | 7
knappygd | 7
BlazeJcy | 7
Doroku | 7
Soulsand | 7
Dako | 7
Danke | 7
ElastoGD | 7
TheRealSquizz | 7
joathxD156 | 7
JustPark | 7
Jasii | 7
ZeeToss | 7
TrueCelTa | 7
EmyX | 7
Weidazhongguo | 7
FGHJapan | 7
Waddl3 | 7
Sparg | 7
Dasher3000 | 7
iNewD | 7
Yakimaru | 7
GDAndreZ | 7
KowZ | 7
F5night | 7
SaturnCube | 7
MeRlO CreatoR | 7
oraangee | 7
ThatKai | 7
tohey | 7
stalkermaster | 7
motleyorc | 7
Th3HungVN | 7
Geox01 | 7
HazzR | 7
AimForward | 7
Defectum | 7
Metalface221 | 7
TaBu2711 | 7
Noxop | 7
Warrek | 7
Michigun | 7
KaiGD23 | 7
tisYuurei | 7
Creator Cloud | 7
Fairfax | 7
aj04 | 7
Novus | 7
RikiGD | 7
AyDiePay | 7
lysk | 7
JHYMHMHY | 7
Squared | 7
speedfreak | 7
StepsisV3xer | 7
Gamerespawn | 7
Chavacado | 7
heda | 7
SH3RIFFO | 7
TruongWF | 7
ML500 | 7
WazBerry | 7
OpteX | 7
Enolab | 7
EfrainGDM | 7
Nampac | 7
PraxedisGD | 7
gatitos0w0 | 7
Diaso | 7
TaKaYaMa | 7
John265 | 7
Splenetic | 7
Shutter | 7
Shaun Goodwin | 7
Spoby | 7
TypicalGMD | 7
Pyxidus | 7
Kubias | 7
Lovarii | 7
smieciarz | 7
iMorphix | 7
TruDee | 7
CubicShadow | 7
Sir Doge | 7
DafaIdham | 7
zac2605 | 7
MoonSpark | 7
Mike4VN | 7
HeroMoltenGD | 7
Polli | 7
TeamUprising | 7
King Woofy | 7
OmegaFalcon | 7
Zhiana | 7
SouneX | 7
iISanE | 7
V9LT | 7
HugusTheNoob | 7
Dams778 | 7
Nibblerez | 7
Custi | 7
Team Proxima | 7
tunar98 | 7
Bronks | 7
BrainETR | 7
Ebenexz | 7
Klonex | 7
Nena Kiwi | 7
RedlixHD | 7
LuM1noX | 7
Alt3r3d | 7
Tygrysek | 7
Cthulu | 7
Deevoo | 7
CarterN2000 | 7
Zhak | 7
DaddePro | 7
Penttagram | 7
Supacruncha | 7
bli | 7
AirilGD | 7
KJbeast1000 | 7
Terminus M | 7
Temptati0N | 7
logiking | 7
pg1004 | 7
TheRealRow | 7
xVicoGD | 7
TheGoms | 7
Namtar | 7
SpilexTV | 7
MrSaturnuz | 7
potatoplayer | 7
mishy | 7
GeoSlam1 | 7
DemonMKer | 7
Mursu | 7
Ozix | 6
Theeb | 6
DiaboloHD | 6
GDSkele | 6
Aquatias | 6
whitepythor | 6
LumiLunatic | 6
Darkrozz | 6
croshz | 6
Yuananmin | 6
Toxikuu | 6
TheUnnamedd | 6
Pyraven | 6
Xiprus | 6
AlaskaFX | 6
TiTi26 | 6
LKH20 | 6
TeamNoX | 6
danielost | 6
HHyper | 6
BashfulGoose | 6
MitKit | 6
BobRatchet | 6
Failure444 | 6
TheePersian | 6
LRelix | 6
Glory | 6
ICaptain JackI | 6
Caspri | 6
UltraChris | 6
CactusDimension | 6
iISpaceDustIi | 6
HelpegasuS | 6
OpticalFox | 6
RikLymback | 6
kelleyxp | 6
kastor | 6
FreeZor | 6
MovieManiac | 6
Konsi | 6
lumpy | 6
ThisIsPailyn | 6
Americat5 | 6
Prism | 6
Tec | 6
ChaseGMD | 6
Whirlaroni | 6
Oubai | 6
GeoMEtryManGD | 6
g1E | 6
Mochiiii | 6
Nocturnson | 6
0xNano | 6
albinomaster | 6
DSprint | 6
DashDude | 6
nikroplays | 6
PixelGlory | 6
Isj3y | 6
MistFix | 6
GuraNuS | 6
SirExcelDJ | 6
Turtle2107 | 6
TheRealDwiki | 6
vit12 | 6
Carlospb2 | 6
connot | 6
Raygon | 6
AdidasBoi | 6
IRBenja | 6
lzoifzkfmoaoxoz | 6
chipzz | 6
ARtu | 6
aArbolito | 6
StillLagan | 6
crashpancake2 | 6
DRAG747 | 6
DubLollo | 6
Lfritz | 6
emoR | 6
Camacho7 | 6
wiktord | 6
KrazyKako9 | 6
MASOOON | 6
Jenkins | 6
tamagoaisu | 6
chokureload | 6
Bio21 | 6
J0eyOnGD | 6
LightWinner | 6
SrGuillester | 6
iRooki | 6
snowmage | 6
Soverney | 6
Millepatte | 6
greXxio | 6
GlobalisTik | 6
JaxtilanX | 6
JustSlushy | 6
VirtualCrack | 6
FlashYizz | 6
Kyromi | 6
Elvii | 6
Adabae | 6
GDLoco | 6
Droit jr | 6
alecast | 6
xSkart | 6
Doge164 | 6
Suixam | 6
zzzgecko | 6
Sancey | 6
Hydren | 6
Axile | 6
Xcreatorgoal | 6
HotoCot | 6
Sluss | 6
Carnage37 | 6
Jaasim | 6
Colombia Dash | 6
ISariaI | 6
Crispinus | 6
TheEvolution2 | 6
PeterNg | 6
Leksitoo | 6
Oskreix | 6
niremasit | 6
BoyoftheCones | 6
Crystal CM | 6
krisz | 6
Cheeseguy | 6
GhostAppleGD | 6
Navoltski | 6
5quid | 6
netherdon | 6
Zynith | 6
IJBSI | 6
Pongix | 6
Psynapse | 6
Yannnis | 6
Koopazu | 6
BlowMyPooh | 6
buttstallionpc | 6
TheLasaga | 6
MadPilot | 6
Yoonsr | 6
HaxellRyan | 6
Girr | 6
Ophelix | 6
LP44 | 6
KuraiYonaka | 6
TinyXD | 6
TNewCreators | 6
BoneSoup | 6
Dirdov | 6
loserchik67 | 6
PymGD | 6
Booglee | 6
NeoSweet | 6
forlat | 5
xSmoKes | 5
Dj5uN | 5
Entutt | 5
Juffin | 5
Nvaen | 5
K911unA | 5
blackP2Sfull | 5
deadlama | 5
Dastan21 | 5
Arb | 5
lSt0rm | 5
ToastLord | 5
scrumpy | 5
SariaGD | 5
xSioni | 5
YirokoS | 5
Azeria | 5
para | 5
NEKONGAMES | 5
isaacpl01 | 5
MoustiKipiK | 5
Mangosteen | 5
Hexcelia | 5
Inokuro | 5
LeynarMusic | 5
Darite | 5
imDariux | 5
Marwec | 5
Negat | 5
ParzivalGames | 5
LakiFarux | 5
midas | 5
PriBel | 5
UlbomE | 5
JiJangs | 5
JustWemo | 5
TerKai | 5
j4eger | 5
ExtoPlasm | 5
combatRT | 5
ItzTG | 5
Pinecones | 5
Spawn | 5
DivideNick | 5
OcuTa | 5
Spac3GD | 5
Yoshee | 5
IronDofus435 | 5
Balloons | 5
Delts | 5
xQuadrant | 5
2kb | 5
Jeady | 5
Kiiroaka | 5
MaFFaKa | 5
Kixgz | 5
Swift | 5
Zenovia | 5
Stormy97 | 5
EVAD3 | 5
Tecler | 5
Reizon | 5
CreatorDiana | 5
elibeast | 5
Samifying | 5
IHogartI | 5
Tropiica | 5
Xyris | 5
DeCody | 5
123Ev4n | 5
The Carrot | 5
Riki Dash | 5
Faun | 5
Numptaloid | 5
saywoo | 5
ItzDemonicDash | 5
CouponGod | 5
Digitalzero | 5
Zeroniumm | 5
Tundra | 5
xTuffy | 5
XCYNICX | 5
GDSpeed | 5
Joaquinvega | 5
HyDroTek | 5
wamiq | 5
DarinLou | 5
Vexiion | 5
SerpTop | 5
iNocto | 5
geometrico10 | 5
dittoh | 5
CryoChemist | 5
DrayPlay | 5
BGames | 5
vortrox | 5
RoseGolds | 5
zara | 5
maks550a | 5
CrafGD | 5
SuprianGD | 5
Digitalight | 5
Met3o | 5
VictorGD4 | 5
GirlyDash | 5
Xyound | 5
DeadFoxx | 5
Rokioto | 5
Glaid | 5
eskim0 | 5
dakiro | 5
Myguelh07 | 5
xSkullgas | 5
Freyda | 5
Krawler | 5
Maboflo | 5
Astral7 | 5
Lumicon | 5
IRock3roI | 5
Rimuruu | 5
Snowbound | 5
T3mpl4te | 5
MrClyde | 5
Sruj | 5
ramppi | 5
DarkZoneTV | 5
SirFluffybeans | 5
Shuffle49 | 5
Dragn3el | 5
AnielChasseur | 5
kr1j | 5
RedDragoN1404 | 5
HyperSoul | 5
LPpassy96 | 5
Exylem | 5
XOrder | 5
Enboy | 5
Tinymanx | 5
DokyG | 5
MarioLTE | 5
PauF | 5
BelonziK | 5
GMD Condor | 5
AUFrosty | 5
CD Jeremy | 5
MrLumo | 5
MrMeurick | 5
Geom3zon | 5
SmoopInc | 5
xToxiKGD | 5
CairoX | 5
Desx74 | 5
Syakin | 5
Kohtpojiep | 5
DamiGMD | 5
MatthewMW | 5
Gummimon | 5
ALISYdvblhas li | 5
Therealclubin | 5
ARLUNOJO | 5
mSeesha | 5
Zylenox | 5
Blixie | 5
Verification | 5
Dalfa | 5
AnddyVM | 5
Zajicek | 5
bloodshot | 5
PocoPukich | 5
Davoxt | 5
Eclipsed | 5
Astratos GD | 5
FrancaDash | 5
Waturh | 5
Mmath | 5
sachikoadmirer | 5
X1RON | 5
BlushingMary | 5
Rex3rGD | 5
Rawerz | 5
ZubwaR | 5
Vaperz | 5
TheRealShun | 5
Barbos3D | 5
ElMatoSWAG | 5
MrShetoss | 5
NewDubsC | 5
PocketG | 5
Sechsan | 5
draneee | 5
ovdfo | 5
TheHuevon | 5
PunkySoul | 5
Igno | 5
BurstStar | 5
SALtinecrack3r | 4
Small | 4
St1ngray | 4
VurviousD | 4
IsraEL GD | 4
TrusTa | 4
MindCap | 4
SmugMugi | 4
GDVaniel | 4
Grizzley | 4
Elliptic4l | 4
Toughfey | 4
Predawn | 4
JamHD | 4
iI Flow Ii | 4
Noodle1453 | 4
LinkTheGamer | 4
Wagot | 4
Carnitine | 4
Nepts | 4
SirDavies | 4
Disp | 4
MAYEROSA | 4
SomeRandomCow | 4
lunaxi | 4
EDUARDO 21 | 4
Blaake | 4
Typic4l | 4
CreatorForce | 4
Coil | 4
AdvyStyles | 4
Mr chips | 4
gibbon765 | 4
Kinel | 4
iZale | 4
xenoxenon | 4
Bytrius | 4
ReflexPrince | 4
EnteiX | 4
EliNox | 4
JusTmax1M | 4
MintyLeaf | 4
camgotbitten | 4
Prometheus | 4
wollompboid | 4
Nigh7fury | 4
Saeve | 4
Lipz | 4
Shimishimi | 4
Reymondash123 | 4
Yirka | 4
Lorena GD | 4
Enceladus GD | 4
iIBlueMoonIi | 4
JaoDay | 4
Splash | 4
FlyArCz | 4
Bryan1150 | 4
Scanbrux | 4
mrog | 4
EthanMG | 4
Swib | 4
DemonBestGG | 4
Xaika | 4
EnderNile | 4
ShauxFix | 4
Cry0 | 4
Mintury | 4
TrueHaron | 4
Blitzmister | 4
RealHaHaHa | 4
ZAR3 | 4
Tirr | 4
Panfi2 | 4
CreatorFreeze | 4
Wahffle | 4
Ironistic | 4
hfcRed | 4
OtjiobPakob | 4
Brawlboxgaming | 4
NeroFX | 4
yakone | 4
nether | 4
spirits310 | 4
Player | 4
Lellas | 4
TheHungerGamer | 4
creeper1347 | 4
ScorchVx | 4
nahuel2998 | 4
A145 | 4
DanyKha | 4
TrigoniX01 | 4
VoltJolt | 4
groose22 | 4
MRT | 4
ZiTron | 4
Selskiy Kot | 4
grebe | 4
Homeboye | 4
JaredKawaii | 4
Matttopia | 4
SterDN | 4
Tahsen | 4
JaredKawaiii | 4
GameForGame | 4
joarZ | 4
PytekGD | 4
ItsKumiGD | 4
GdTheTactiq | 4
N3moProd | 4
Antuness | 4
Relayx | 4
suzume | 4
Blu Fatal | 4
assing | 4
Jinnans | 4
acoolman91 | 4
Expy | 4
IIBackfischII | 4
Syndromed | 4
Horiz | 4
baguettesyeah | 4
OwO Whats This | 4
nomm | 4
iIiRulasiIi | 4
Shulkern | 4
TheFakeLogik | 4
ReZEL | 4
Obsidium | 4
TretosGD | 4
Sandstorm | 4
emiliax | 4
NoLetter | 4
cxli | 4
Alex084 | 4
YoReid | 4
The Goola | 4
HertyGD | 4
BlueWorldGD | 4
Dragoj17 | 4
KryptGMD | 4
qMystic | 4
Zeidos | 4
Diffuse | 4
W3ndy | 4
VerceMusic | 4
RickTz | 4
CompleXx | 4
Feko9 | 4
Wodka | 4
zBM | 4
Colxic | 4
InsaneJohnson | 4
donutcopper | 4
Sergeisonic95 | 4
noxycraft | 4
greenwater | 4
xstaticstorm | 4
StiX190 | 4
XxJ0SHxX | 4
Mixroid | 4
Aldanoo | 4
SChaotyx | 4
griffyn87 | 4
ismael58899 | 4
Korai | 4
Traxline | 4
meechy | 4
XDream | 4
Tchotchke | 4
Sandal | 4
COLON | 4
AgentJo | 4
Cuhilk | 4
Edicts | 4
RealTrueLogic | 4
ShuyGD | 4
Buragoz | 4
ItsHybrid | 4
EDSJustin | 4
Cancel | 4
MrSupremeGD | 4
Ghastgames | 4
zombier | 4
maximum12 | 4
Mousile | 4
jaeyoon1234510 | 4
Exi0n | 4
7Ak | 4
Alfian10 | 4
Zeus9 | 4
carapa22 | 4
Kaito | 4
MikyFC | 4
FurixGD | 4
AAAAAlex | 4
denberZ | 4
Hantein | 4
Slipiao | 4
MYKM | 4
rwichy77 | 4
Inex | 4
jahaziel | 4
Fluore | 4
Dudethegeo | 4
AndrixGD | 4
TheKris | 4
ItzByCrack10 | 4
tenzk | 4
Clasi | 4
Waboomania | 4
Randodacamando | 4
Rustle | 4
Sikatsuka | 4
Netligob | 4
sssturtle2005 | 4
killervn | 4
IlhamAulia | 4
AirSavage | 4
G3ingo | 4
Aviilia | 4
xmvrgg | 4
Skota6i | 4
DarkMoom | 4
Ilikethecheeeez | 4
D4MIAn | 4
manu123890 | 4
XdMaNIaC | 4
ZWK | 4
TomawiN | 4
Mashcake | 4
TonyStar | 4
Fixum | 4
Lucoraf | 4
xSanni | 4
iSonicSpeedi | 4
Nekotik | 4
KingKingman01 | 4
Golden | 4
IuseHack | 4
RayRayReig | 4
Hamix | 4
Grax | 4
exploreX6 | 4
ArthurGenius | 4
BlasterRobotz | 4
Renodex | 3
CrisPy Dash | 3
Extractz | 3
GerboPawa | 3
aidancrdbl | 3
E0ts | 3
Zipper01 | 3
Dhrawh3yN | 3
votanuu | 3
ToXxin | 3
Lebreee | 3
EpicLucas | 3
CreatorLogman | 3
Pharoi | 3
LightningSL | 3
ThunderBat | 3
Abaddox | 3
TeamN2 | 3
NovaJR | 3
Zeptrus | 3
Copypasta | 3
DT Mark | 3
Asuith | 3
ghostmonkeyb | 3
TCTeam | 3
Abdou01 | 3
Ic3Fir3 | 3
DragonSK | 3
Virtualoid | 3
Krovolos | 3
Hysteric | 3
Lixyy | 3
Goodbye GD | 3
CookiZz | 3
smarkey12 | 3
CreatorAnuar | 3
DangnghiGD | 3
iIiKrisDashiIi | 3
radaskk | 3
Plygon | 3
Leinad421 | 3
70UNIK | 3
Alkazam | 3
Pavyzone | 3
xDiji | 3
electro2012 | 3
Nik Gambardella | 3
danerdogger | 3
Jorgitto16 | 3
Bl0by | 3
RealsCoPa | 3
virtues | 3
TeamArdent | 3
Viricent | 3
Hafiz | 3
Chaos | 3
Ulasp | 3
Lish | 3
ArToS2 | 3
phobicGD | 3
Ka1ns | 3
EhwaZ | 3
Marsh28 | 3
Chances | 3
Matterz | 3
Aconn | 3
tezeraqt | 3
RobotitoGD | 3
Civilgames | 3
Neribus | 3
Koishite | 3
Seokso | 3
Dezzed | 3
Jetaplex | 3
AvaKai | 3
Glouti | 3
HugoLA | 3
KrzGxzmxn2 | 3
SpinStudios | 3
Nunet | 3
SoulXGD | 3
Galuss | 3
Axdrel | 3
Ezel142 | 3
WazE | 3
BrayBrayGD | 3
Aleker | 3
crackmiklox | 3
Mxnnuelv | 3
Dz4ky | 3
mkComic | 3
Sanea18CM | 3
iMaT | 3
DashFire | 3
GunnerBones | 3
maximo64 | 3
ScepTium | 3
Lalter | 3
NitSel | 3
ZtratoZ | 3
PsyCheDeLia | 3
AxelGMD | 3
Jlexa | 3
Valkinator | 3
Sxap | 3
stratos2596 | 3
BreadKing | 3
TheOutLowLP | 3
Cyprix | 3
MasterDam | 3
B3renice | 3
Radius | 3
Magpipe | 3
bEntaM | 3
antonio130 | 3
InterChange | 3
VoiDsPiriT | 3
OliSW | 3
Espii | 3
Ewop | 3
ZielXY | 3
DrDdog | 3
shuterCL | 3
Ashes | 3
ThorMode | 3
Vapen | 3
Zayuna | 3
Necria | 3
RobtobFreak | 3
Rexonics | 3
Del4yed | 3
DogBan | 3
IcarusWC | 3
KazVA | 3
officialfreck | 3
KubaCreator | 3
Flexu | 3
JM70 | 3
Zigma | 3
Phantomech2 | 3
iIRedCatIi | 3
zYuko | 3
DarkBoshy | 3
GDLeinad | 3
Farm | 3
SirLimee | 3
Mocro | 3
Camelback | 3
Lucid Dreamer | 3
raidennn | 3
An Gyung | 3
Gabewb | 3
Eiken | 3
DeceptivePan | 3
LYR0Y | 3
TwoHalves | 3
LereckWTF | 3
RuebeXPX | 3
mrjaakko | 3
knoeppel | 3
Darkvid | 3
iIiViRuZiIi | 3
RobZenW | 3
Dux | 3
Hedgefox | 3
xCaptain | 3
UvonuctV | 3
Ownbit | 3
TrmGD | 3
Eduptal | 3
Jameise | 3
clique | 3
Sylv | 3
Sigma | 3
Elih3l | 3
iAndyV | 3
Nacho21 | 3
GKRancagua | 3
NTT | 3
arcatexx | 3
Tabib | 3
SushiiGD | 3
Achievess | 3
RainerOW | 3
Kirikanan | 3
Zyplex | 3
zaRIKU | 3
Loltad | 3
Mycrafted | 3
ruined | 3
ChrySpy | 3
ItsYisusOwO | 3
verticallity | 3
witt | 3
Glitchymatic | 3
Wafflee | 3
EXPIREANZ | 3
XronoM | 3
karkiller | 3
THEREALPETER | 3
Eyoi | 3
VoidSquad | 3
vismuth | 3
Kyre1911 | 3
KY0T0 | 3
Smeave | 3
WSKKung | 3
Pl4sma | 3
Rexone | 3
JumpydooM5 | 3
GMDBlockbite | 3
whalemage58 | 3
Ligh7ningStrik3 | 3
yakine | 3
Pavlaxan | 3
superkirill | 3
SaintORC | 3
SRXnico97 | 3
jazari | 3
Nemphis | 3
dongchi | 3
Grenate | 3
Vex02 | 3
Atonix | 3
fishtoon | 3
yll | 3
Salzbrezel | 3
Sakana | 3
BurritoGamerX | 3
AIGDmaster | 3
VoPriX | 3
Charliux | 3
Hahaha | 3
Saygrowt | 3
StrikeKing22 | 3
Wod | 3
CronosMx | 3
denis002 | 3
SkyWalker14 | 3
artherr | 3
Dezorax | 3
B u r n | 3
Fl3b0 | 3
Xandy77 | 3
GamerPG11 | 3
LightStyles | 3
itsXDiego7 | 3
pepjin | 3
Eliotis | 3
7vuv7 | 3
MysticMaloug | 3
Ternamit | 3
CarlosBotelloCP | 3
Cloud72 | 3
PIS | 3
Madman123 | 3
LaZoR | 3
AlasstorGD | 3
GDZimnior12 | 3
Legowanwan | 3
kakemancool | 3
SnarkZ | 3
TheRealThawe | 3
oligarhen | 3
D0meR | 3
Xfaider | 3
Plompy | 3
XShadowWizardX | 3
Vancis | 3
Ondys25 | 3
GGtime | 3
FadeOff | 3
GelinK | 3
PyroGix | 3
MiGor07 | 3
Ad0NAY27GD | 3
EvanX | 3
Amidon | 3
Aslambek | 3
NyPlex | 3
dashiell10 | 3
Arkonite | 3
dawnii | 3
DorFlayGD | 3
eopc | 3
life40 | 3
Zeniux | 3
lujian | 3
Rullstol | 3
SyQual | 3
Natteboss | 3
kenjiDW | 3
Zylsia | 3
Locked | 3
GoodSmile | 3
Djdvd17 | 3
TrueDhensoGD | 3
ZakKest | 3
Aerin | 3
Opak | 3
Rhythm1C | 3
Zelda9912 | 3
AstravokPley | 3
LaserSword33 | 3
Gomez28GD | 3
StiwardGD12 | 3
GMD Dominator | 3
Changeable | 3
simplox | 3
HubDubs | 3
DownStop | 3
red1511 | 3
RefluX | 3
Redon8890 | 3
DrPuppydude | 3
Makenzy | 3
ZenthicAxy78 | 3
PlebKIngdom | 3
nazerr | 3
Marpha | 3
wilbert04 | 3
Palinka | 3
itsBypipez | 3
PhrostiX | 3
KOEKI | 3
Antman0426 | 3
Dyeo | 3
Galluxi | 3
MittensGD | 3
LunarSonya | 3
V3ntique | 3
condabeast | 3
SkyZor | 3
Olikros | 3
RealMartix | 3
NeKho | 3
Apep | 3
angel791 | 3
Bluzze | 3
MasterCarrot438 | 3
Delta Revenge | 3
PariThePlatypus | 3
PSAY | 3
NovaSirius | 3
Itzabunny | 3
Umbraleviathan | 3
linus02 | 3
AltairGalaxy | 3
GD Charz | 3
Wiv29 | 3
Inoculist | 3
Stevstyles | 3
TeamHFC | 3
Nukewarrior | 3
SCOCAT | 3
Cheetahx | 3
KR0N0S | 3
ItzSlash | 3
Riot | 3
XMGIRLSTEP | 3
Ghost | 3
Laaseer | 3
Darkness c | 3
ZapZed36 | 3
TheCheeseNugget | 3
Lyal | 3
Klevin105 | 3
SparksOmega | 3
bpdoles | 3
ArkronGaming | 3
Killoway | 3
DrumBreak | 3
Warrior | 3
CubeDasher | 3
Lovelty | 3
Gravitype | 3
VtrFight | 3
Elsig2002 | 3
Cardium | 3
TekhnikaKR | 3
paolo7u7 | 3
Rev0lt | 3
tricipital | 3
Hydronus | 3
InfiniteR88 | 2
Chillark | 2
azekk | 2
XarnoZ | 2
Chaldy | 2
SES | 2
Jouca | 2
zTeamSpl0it | 2
MadrazoGD | 2
N E C R O | 2
K76 | 2
Coreybirdo | 2
ValenQpr | 2
Veganus | 2
D4rkGryf | 2
eigh | 2
TrueSync | 2
Raszagal | 2
FlukedSky | 2
Veol | 2
TeamSmokeWeed | 2
xVainaja | 2
ZeffZane | 2
herozombie80 | 2
RedyBally | 2
Polarprism | 2
SerenoGD | 2
haizenberg | 2
IPablinI | 2
novichokk | 2
Little Scoty | 2
Proxxi | 2
SubStra | 2
Sinatix | 2
JessyGxz | 2
aurumble | 2
DuDuDuDuDuDuDu | 2
DanyMei | 2
Brigide | 2
J0Ni | 2
PrayToEntei | 2
Krexon | 2
Lenzelium | 2
cleangame | 2
Feliix7 | 2
p3rception | 2
DonutTV | 2
HammerfallThud | 2
LyoNa | 2
msm12 | 2
NinKaz | 2
NecXell1 | 2
MikanNeko | 2
squishyj | 2
Kirlian | 2
RainWasHere | 2
WizTicFX | 2
Kapinapi | 2
RevanXD | 2
DTMaster09 | 2
Meldon | 2
XxKaitaNxX | 2
ZePor | 2
G4zmask | 2
Otorimasu | 2
Aqualime | 2
HyderGMD | 2
DeLorean40 | 2
BlackWolf436 | 2
lZexer | 2
Ruf | 2
PerkyPenguiN | 2
Sheet0k | 2
Rambostro | 2
Taniyaun | 2
Kanyu | 2
Al3dium | 2
Arturlist | 2
N4TH4N | 2
Xyvero | 2
RataConTiner | 2
101Pixels | 2
anpy | 2
mrmattrice | 2
Gdcrystal | 2
do1Xy | 2
gmdtoto | 2
csx42 | 2
SWAGSWAGSWAGSWA | 2
Phantomech | 2
VolteX | 2
MetaloidE | 2
GDFrotzn | 2
Alexkazam | 2
Almondy | 2
ricky123 | 2
Smitty23 | 2
C4rpeDiem | 2
klaurosssS | 2
xTeamSpl0it | 2
Yael Ximil | 2
mvry301 | 2
JFZDash | 2
Puffi65 | 2
Kwol | 2
Menegoth | 2
Logon | 2
R4nchi | 2
artimiel | 2
AriaReruchu | 2
lTemp | 2
derpsensei67 | 2
cherryteam | 2
GoldenVortexx | 2
Ranexi | 2
McSwaggerson | 2
Makhozz | 2
Droo79 | 2
KorpralGurka | 2
Roxanned | 2
shrympo | 2
Glenforce | 2
LUKNN | 2
skelox | 2
Migueword | 2
T3chno | 2
kang131 | 2
CdlAwesomeMe | 2
sandeti | 2
lesuperchef | 2
nurong3 | 2
Duzty | 2
XypherGD | 2
T0rnat | 2
4chairs | 2
MomoTheCat | 2
Voltimand | 2
Passilite1 | 2
epiclight | 2
crohn44 | 2
twee | 2
WhiPerZz | 2
MIRugene | 2
Hubbubble | 2
fenk | 2
Cyberic | 2
Azonic | 2
creeperus | 2
GhostKitty | 2
Kiwi30 | 2
BrainOutear | 2
RealEddieSmith | 2
Cycazen | 2
itsDreaazy | 2
Padalome16 | 2
Avero | 2
Salaxium | 2
albertalberto | 2
CultriX | 2
RikoFinn | 2
PixelWooliUwU | 2
o3k | 2
MegaBro | 2
Razorendoe | 2
DanLom | 2
Se7eN | 2
koalakamikaze | 2
tim55 | 2
KohiTeam | 2
AEROSTATIK | 2
Vilms | 2
renepuentes | 2
DryBones | 2
RealInf3ction | 2
Vitol64 | 2
AIMR | 2
hiavl | 2
NachoDark | 2
Revontuli | 2
M3llo | 2
elpapueliam | 2
NormDanchik | 2
Vesta | 2
zSkye | 2
JustFeijoa | 2
KYENDU | 2
GamingwithS | 2
Skrypto | 2
dolphe | 2
parowan | 2
0dysseus | 2
Syndii | 2
3osiah | 2
kudabe | 2
Jacques Melissa | 2
GDStarDust | 2
nekochan nd | 2
Ryan | 2
emiheliou | 2
Gryllex | 2
lucasyecla99 | 2
RositaGMD | 2
Stinger613 | 2
JustinV21 | 2
SnyderYT | 2
ItsGwMe | 2
Pedrin0 | 2
Mith0 | 2
SpacialBoom | 2
Its Darky | 2
LordeQuacc | 2
Not Omicron | 2
joojmiguel | 2
Zwenty | 2
Asthae | 2
Farhan125 | 2
PanSoniX | 2
HejamiX | 2
thiago42 | 2
ItsIceE | 2
Iamlost1337 | 2
brojaxx | 2
Perars | 2
ZaphireX | 2
Colorflux | 2
T3TSUUUUOOOO | 2
IceNeko | 2
mystery | 2
DashingInfinity | 2
Eluselus | 2
iINovaStylesIi | 2
AL33X | 2
S7arGazer | 2
AlexEa | 2
GMDBlue | 2
Rockwizard5 | 2
zenthimegal | 2
HissY189 | 2
Hostility | 2
XTHANEX | 2
ItsUmaruGG | 2
Tedoss | 2
Coolyoman | 2
spitdash | 2
Juhou | 2
MrTurtles37 | 2
FarnndesGM | 2
qurnl2012 | 2
Am0r | 2
Wiild | 2
Hypno74 | 2
SamukaGD | 2
Wolvyy | 2
Balandran GD | 2
Wooobs | 2
denroy1780 | 2
iNubble | 2
Syth3R | 2
IIPyro | 2
killua282 | 2
CosmicSabre | 2
Electrone1 | 2
cybronaut | 2
GeoGroer | 2
TDChris02 | 2
SyberianMP3 | 2
Arquas | 2
butterfunk | 2
ImMaxX1 | 2
Charmii | 2
Yotzin | 2
Alknot | 2
AlexaUwUGD | 2
gdParallax | 2
equist | 2
SwfCapsLock | 2
Darix GD | 2
xyuns | 2
XenoDisastro | 2
Exen | 2
Snowr33de | 2
TheRealTpatey04 | 2
Daniel720 | 2
kientong345 | 2
LoariiHyena | 2
ZetByte | 2
PMCreatorArco | 2
BlockBossGD | 2
sergioad | 2
ItzKiba | 2
Weenie | 2
GDNeoz | 2
Wombatosaurus | 2
TKS MelonMan | 2
Ryan LC | 2
Nads | 2
znXen | 2
GmD SamUL | 2
Urepy | 2
Sekya | 2
DarTrix24 | 2
Shaaant | 2
KingGMC | 2
IOVOI | 2
Fabio1711 | 2
EncisoGD | 2
Lacheln | 2
tomas162 | 2
TrueHyPper | 2
FreakyLibya | 2
Dolnerz | 2
Amers | 2
Tutone | 2
Zuree | 2
Darthllama | 2
TrueOmega | 2
naksuil | 2
DaCooki3 | 2
Xiodazer | 2
Dewott | 2
ZerNiS | 2
Gtnaon8 | 2
Misterbeff | 2
ew4n | 2
ZidDer BGD | 2
Fiizzy | 2
RealGares | 2
jerkmath | 2
IIKun | 2
CreatorGolden | 2
nyseoul | 2
ISLEEEEEP | 2
S1l3nce | 2
CSLoW | 2
Luap314 | 2
hauxz | 2
AddBadx | 2
Daceuwu | 2
Logicdash | 2
Archibot | 2
CreatorLister | 2
xBonsai | 2
Alex M | 2
Motu | 2
SasaLvk | 2
Blad3M | 2
Ahab47 | 2
iDemi | 2
Shaday | 2
TheGeoDude | 2
Brandonlarkin | 2
xNoire | 2
shyhexx | 2
harbour | 2
VioLio | 2
Dom89 | 2
Rusto | 2
SebasPeru | 2
MetalBray | 2
Fulgore201 | 2
KumoriGD | 2
StanaronGD | 2
Frikoh | 2
MrG30 | 2
v1031 | 2
felixitron | 2
PhantomX | 2
DyronGD | 2
AlloX | 2
Veiled | 2
Comically | 2
Alice7 | 2
Rishie | 2
Prents | 2
Blazer2 | 2
DalarCoin | 2
ShadowIan | 2
Fritocollix58 | 2
Kardes | 2
jakerz95 | 2
RegentGD | 2
kelmaq007 | 2
UltraS4 | 2
Squb | 2
Tear Rai | 2
Aquarelas | 2
DerKat | 2
ShakeoffX | 2
KoDTIFF | 2
4y4 | 2
TDG Productions | 2
PermaFrosty | 2
drakeghast | 2
NoName01 | 2
TheLuckyShroom | 2
Maxie25 | 2
Amverial | 2
alk1m123 | 2
Moosh | 2
YuiAnitaAnggora | 2
Tech36 | 2
Deneb | 2
iYOO | 2
veyzz | 2
Land3x | 2
iMagician | 2
CreatorMindGD | 2
Antelhio | 2
Seabers | 2
AnnoyingBaby | 2
Geemi | 2
iRawstep | 2
Jozhii | 2
QuantumFlux | 2
Saao | 2
storyking1559 | 2
viggoUhh | 2
Reddmi | 2
Mintycube | 2
GD Melk | 2
Guviking | 2
Polarbeahr | 2
ReinnCloud | 2
XT34RX | 2
Simone777 | 2
Verxes | 2
MidNight2 | 2
CreatorAiron | 2
Sergiio18 | 2
Grideos | 2
Vsbl | 2
G T N | 2
Justin12611 | 2
OxidoGD | 2
Tamii | 2
Erickblaster | 2
ivanJr22 | 2
s0nic x | 2
the008 | 2
AlbertoG | 2
Orange09 | 2
bendykid | 2
RenoVatio | 2
kaval | 2
CreatorRC | 2
SylntNyt | 2
Kyhros | 2
Monotype | 2
Flex | 2
4nuBis29 | 2
T G I | 2
NathanielChinn | 2
CreatorRP | 2
ProKillerGMD | 2
Kenshii | 2
artziifin | 2
ZeroSR | 2
PhysicWave | 2
GPZ | 2
Hadronic | 2
Lithuanium | 2
Maudanik | 2
IronIngot | 2
Gerkat | 2
ItsMaxHam | 2
Exqualater | 2
Akunakunn | 2
Toomis | 2
BEASTman1025 | 2
NightXStalker | 2
gradientxd | 2
SkullyHop | 2
Kazey | 2
SQRL | 2
Vizors | 2
EdictZ | 2
Dakkuro | 2
DrCuber | 2
Greenyy | 2
Adfdguj1k | 2
Al3xTitan | 2
xFREYEN | 2
Praa | 2
Kyumini | 2
FreezeDash | 2
Samuraychik | 2
RadiantEnergy | 2
davisjay | 2
Requiem08 | 2
tombrid | 2
MJC456 | 2
RatQuesadilla | 2
Creator Darkar | 2
Zghost101 | 2
NaezharGD | 2
WhitePartyhat | 2
SneyDry | 2
AndreFernando | 2
Phaneron | 2
Splinter25 | 2
Geogamer12 | 2
Straw | 2
magicstuff | 2
PETTER | 2
Noctaliium | 2
Goobfrudla | 2
Herzilo | 2
Anob | 2
Psyse | 2
pyshoGD | 2
Miguel135 | 2
dyn1x | 2
ManoMagician | 2
iiLuna | 2
ericys1 | 2
gadri | 2
HexagonDashers | 2
Y0rk | 2
deflang | 2
carlosbmt22 | 2
Tamaa | 2
zDrift | 2
Wr3nch | 2
The Bread | 2
imKalium | 2
TileZ | 2
zorlex | 2
Ow3nGD | 2
lundshurk | 2
Bi2cide | 2
BHusseinObama | 2
White Knight | 2
Wazzuuup | 2
Parallax | 2
Volander15 | 2
3nzyGD | 2
R4y4 | 2
Sparkle224 | 2
tricks33 | 2
Imjum | 2
adelrocky | 2
JuanSky | 2
Bitiaa | 2
ZeptonixGD | 2
N4xoL | 2
Exonium | 2
BlackSould | 2
PixelMini | 2
Tamii3309 | 2
iMist | 2
Arbelos | 2
NinjaConch | 2
Pawstin | 2
DarXis4 | 2
ColorBolt | 2
SkooArtz | 2
carlosart16 | 2
Hubtos | 2
DestinyKawaii | 2
CreatorAurum | 2
KillGoreOne | 2
SkillzAura | 2
Mitz | 2
EichGMD | 2
Irika | 2
Raivolt | 2
Dverry | 2
makifei | 2
AntRocks42 | 2
gdapples | 2
unnn | 2
jct | 2
friendless | 2
CanadianKing | 2
Uprush | 2
CreatorRiley | 2
SirioBlueX14 | 2
8uua | 2
Ellasio | 2
GMDDEEPSPACE | 2
ozpectro | 2
rafer300 | 2
Benniko | 2
HeLLCaT120 | 2
Chevere | 2
Mirotam | 2
naom | 2
din mak | 2
Dyrox | 2
matt843 | 2
Yoshinon | 2
Square01 | 2
TeamCreatorsNG | 2
theParadoxTeam | 2
Hellfire51 | 2
HiIsBacksenpai | 2
Lyrial | 2
GDSlimJim | 2
TomsoNtm | 2
mrafflin | 2
Alphalaneous | 2
LNTM | 2
NotMiki | 2
Zak Senpai | 2
YoXdie | 2
VitjaTB1 | 2
Aytan | 2
zizibibi | 2
GlintZ | 2
Dienid | 2
Oyupii | 2
martinity | 2
zekvo | 2
s1naptico | 2
ThePacificTeam | 2
Hamato | 2
cyaNeon | 2
MooBee | 2
CreatorToile | 2
xFLOYDx | 2
Texelash | 2
CodeN | 2
AniGleb | 2
ExequielSoria | 2
Creatorchess | 2
ItzApex | 2
TrueAzt3k | 2
A2948198482 | 2
caio2000 | 2
capaxl | 2
110010110 | 2
BadgerBoy | 2
NuclearChild | 2
Furorem | 2
Zhorox | 2
Edilberto | 2
4x3lino | 2
TheRealWeenside | 2
Gabbs | 2
Orderoutofchaoz | 2
acxle | 2
Axrus | 2
moufi | 2
Zechla | 2
craftysteve | 2
HadoX | 2
rittee | 2
Dister K | 2
beemil2 | 2
MadManga | 2
evanjo | 2
Jambees | 2
Tonight | 2
Rezoku | 2
imapotitoo | 2
Buine12 | 2
endevvor | 2
xSkylighter | 2
Eternalyz | 2
XerazoX | 2
H20ghost | 2
TheTrueSanji | 2
Yerylik | 2
LuckyTheGamer | 2
AL3X1hP | 2
Tribbles | 2
spearain | 2
HellaKat | 2
1GTommie | 2
HaryantoGD | 2
Empika | 2
CheeseFudge | 2
Tippymang0 | 2
Vernam | 2
Dysphemism | 2
Yorre | 2
MCAASJ | 2
iIJigSawIi | 2
MRPJ | 2
IiImrduckIiI | 2
Zorochase | 2
DangerZ | 2
Nosef | 2
14Circles | 2
Djoxy | 2
Zendayo | 2
TavitoGD | 2
DJdiscoveryRD | 2
DarkenedNova | 2
SynthSaturn | 2
VAlence | 2
RZQ | 2
SharksVN | 2
Glimpse | 2
RebecaRamc140 | 2
Xinpa | 2
tayoshi | 2
Canceluum | 2
guerronojas | 2
spl1nt | 2
naven | 2
Andaya | 2
DjDayyy | 2
TwisterDude161 | 2
golden3oloto | 2
Svicide | 2
BreadMom | 2
TheElectroGD | 2
chaosxstream | 2
Diabeticus | 2
JuNiOr202 | 2
Rann0x | 2
LGGD | 2
RareNightmare | 2
XXDalmanskiXX | 2
Lightbulbb | 2
TheGoodOne31 | 2
Tomwins17 | 1
RestarTheNitwit | 1
ruwkTl1 | 1
Gatinhos | 1
juanmagun | 1
RealDelector | 1
VGeodasherV | 1
BORSosnovy | 1
jdJoseDiaz | 1
MAUBILE | 1
ManGolu | 1
FlaksY | 1
infinite | 1
tdViViD | 1
jonink | 1
chunlv1 | 1
Mr kim | 1
Jxy08 | 1
B0nilla | 1
GabrielGuti | 1
SuperSNSD | 1
NixinityFurry | 1
Polterion | 1
RyanRukshan | 1
SSamurai | 1
ErisethSP | 1
Trixem | 1
Ith | 1
Mike4Gaming | 1
nplsm | 1
EpicPartyGuy | 1
Djente | 1
NikolaiRei | 1
Technica | 1
RenedI | 1
PulsefireGD | 1
MiraX | 1
TheRealArtee | 1
mihaha | 1
TheRealDmitry | 1
RicoGF | 1
iIRockyIi | 1
MysticDasherGD | 1
Sedgehog | 1
SamyGD128 | 1
saidddddddddddd | 1
ForZ | 1
ghastyghast | 1
CMJ05 | 1
TheAquila | 1
sceptwastaken | 1
AdWize | 1
Zypher | 1
TNeliem | 1
Rixu44 | 1
rizodan | 1
azasuh2 | 1
MobiGFX | 1
YMighty | 1
Brimbis | 1
Kirion | 1
NaimNaro | 1
lNevil | 1
Mickfurax | 1
iEloy | 1
damianwc | 1
BlissGMD | 1
Aywa | 1
NaneSae | 1
SyreNide | 1
GDZesty | 1
Kirzok | 1
FloofiWan | 1
KingEunHo4 | 1
GDturtle | 1
Dayleindaily | 1
Akame GD | 1
Steedlan | 1
Joskope | 1
Yendis2 | 1
staticsnow | 1
PolenGD | 1
Kiffin | 1
llZestll | 1
NotZer | 1
elsehurdle159 | 1
Wespdx | 1
Snambs | 1
Krizard | 1
rkawlqkf123 | 1
Ninjajai | 1
bra1k | 1
Astericks | 1
MrKebab | 1
stanstanmansan | 1
asvezesopa | 1
BENtong | 1
STYL0 | 1
Kerko | 1
PTB10 | 1
doar | 1
Arkhanoz GD | 1
ElectroBoy2217 | 1
PTWhismal | 1
tolstyh | 1
limequality | 1
BigMeatBillie | 1
OmgItsLemon | 1
slimefury | 1
Zcratch | 1
SwagFox | 1
iIHalogenIi | 1
Aniliat | 1
SilentSmiles | 1
dudley | 1
anderxab1 | 1
wapon77 | 1
dimden | 1
joelmacool | 1
Deplex | 1
GDips | 1
Vulcan | 1
Shaowin | 1
N3my | 1
Gaero | 1
TriStorm | 1
e96 | 1
mariot3 | 1
Laser5trike | 1
Alvilation | 1
iFuriFin | 1
Steelic | 1
RedUniverse | 1
Ascorbine | 1
Nebs12 | 1
Kizzon | 1
LedXi | 1
BySajiro | 1
VortexVernon | 1
Mabby | 1
128738347891278 | 1
Drallok | 1
Deron | 1
Zerbit | 1
MattewGame | 1
UniFox | 1
Souls TRK | 1
PlayMaker77 | 1
radovan258 | 1
ChiefJackyt | 1
M0oN | 1
Wavort | 1
KittyFlare | 1
kylelovestoad | 1
Anjony | 1
DimensionXgd | 1
Jekko | 1
SwagzZilla | 1
Amn3sia | 1
JustVal | 1
ProSlain | 1
KK3 | 1
KCorp | 1
epicgmdman | 1
Skye GD | 1
Novic | 1
Koopi | 1
Alleviate | 1
Xeexr | 1
Ishigiro | 1
Zeryk | 1
Codly | 1
Gluspum | 1
chaddy | 1
Vinxyl | 1
Zenith03 | 1
Joltic | 1
Koooper | 1
The19yahir | 1
sk3lerex | 1
Cloudsubject12 | 1
Sensation | 1
BlackCat | 1
ruzted | 1
lGyro | 1
Sleepter | 1
piramidka | 1
Attackofthetroy | 1
Brighty | 1
Peduh | 1
OONestorOO | 1
98macdj | 1
Kuki1537 | 1
Frozec | 1
Xjlegend | 1
Shantak | 1
ArathhSA | 1
Esencia | 1
Barlem | 1
SomeGuyMusic | 1
Blasting Ant | 1
KoOkIe MoNsTeR | 1
zinkzeta | 1
Twirly | 1
Adagaki | 1
GRISK | 1
PbS205 | 1
Vanix | 1
kenaz | 1
SaudDehan04 | 1
nSwish | 1
eiruni | 1
titanium | 1
H4ppyL1fe | 1
klaux | 1
GDanZ | 1
ZeroPoint | 1
SlipGhost | 1
Xarus | 1
CR3ATOR | 1
Hakurei Reimu | 1
VelociReX | 1
ronlix | 1
xSWAGO | 1
xu1234 | 1
Nyctonium | 1
jakupii | 1
KJackpot | 1
Auria | 1
Mercury6779 | 1
deadlockez | 1
RyuDieDragon | 1
Puri1 | 1
superdahl | 1
RcR777 | 1
LIBetaIL | 1
JoseMachGD | 1
Djman13 | 1
Geomike | 1
Xeraton | 1
Helecty | 1
LCynox | 1
NotCyanide | 1
R1veraX | 1
GmDWolde | 1
S3rios | 1
OthonWronger | 1
CreepyLuxu | 1
MarvyGD | 1
Feeto | 1
rasvan | 1
HighLandTigerGD | 1
AlexandriA | 1
mckpm | 1
ddest1nyy | 1
FomalhautGD | 1
FreeedTheDolfin | 1
curxe | 1
xXTotoXx | 1
Astriiix | 1
RealGerryX | 1
stinkyy | 1
Acors | 1
N0CTUM | 1
BlOwArS | 1
48Vpower | 1
IshikiI | 1
MatiTailsGD | 1
Sonyx | 1
LibanThePro | 1
PeteTheDragon | 1
wubzzy | 1
NickMarZ | 1
Bohemian | 1
ReYikT | 1
Platysmus | 1
LuxiGD | 1
Killa | 1
NaimPvP | 1
LChaseR | 1
xRaff | 1
GeoLite | 1
ItsFish | 1
commentbannedlo | 1
AzPlx | 1
Jaku20 | 1
netraena | 1
MoleculeOxygen | 1
SEM 1 | 1
GGaven | 1
ggoggang | 1
Asolire | 1
Ivan017 | 1
Sycro | 1
KryptonixX | 1
yStyles11 | 1
Edlectricity | 1
iPLAY3R | 1
Xelin | 1
Sumersun BGD | 1
Kama2326 | 1
Plexidit | 1
AlmanzaAR | 1
xander556 | 1
LEVIIXD | 1
TrueNachoPro | 1
Decorus | 1
Zephal11 | 1
Billzer | 1
DimaNelis | 1
GDestrys | 1
GeoStarlight | 1
iPeanut | 1
Walroose | 1
thatguy255 | 1
GMDElite | 1
AngelicaGMD | 1
Enderloning | 1
GD Micrael | 1
VoiD | 1
Cybus | 1
jamesgrosso | 1
Hackerzpolzki | 1
VlZBookableZlV | 1
Alvaromg15 | 1
GD Clazzi | 1
Rm1k | 1
TxVenomxT | 1
GMZer0 | 1
Plasmatic | 1
Niko22 | 1
Mattzezilla | 1
2S8 | 1
notlsa | 1
PumpkinHeadGD | 1
Mustad | 1
Cry0mancer | 1
TriVerSe | 1
SuprSwiglz18 | 1
rAaron | 1
Vrock24 | 1
The Bouncy | 1
pusnine | 1
EmilJsson | 1
Kev | 1
LEVR1819 | 1
Xyriak | 1
npesta | 1
Mr Krugger | 1
Yeojj | 1
Qozmim | 1
Maevicon | 1
JustMatt101 | 1
Frapai | 1
jaycom | 1
LIM | 1
TheBatGD | 1
pugmaster706 | 1
lSunix | 1
TecHNeT | 1
xRiver007 | 1
CardistrYy | 1
MadanGD | 1
ShadyKitty | 1
Elexity | 1
X357 | 1
matt88 | 1
TheHyperDash | 1
Fragox | 1
waffeyy | 1
XultroniX | 1
InsideMobile | 1
alledia | 1
naxen | 1
Ashkar | 1
iIJellyBeanIi | 1
TyphoonGD | 1
NovaForce | 1
halpmynamesucks | 1
Xplosives22 | 1
itzDUBU | 1
itsjavii11 | 1
Ju2De2 | 1
PickleBR3NT0N | 1
Kuandlo | 1
YeoLacrA | 1
Inga8 | 1
Luna Moonlit | 1
She3rlucks | 1
Aydrian | 1
FieryGlaceon | 1
Zkad | 1
VladAAA | 1
EpileptiKat | 1
Priisma | 1
Sir Heis | 1
Godie012 | 1
Uchuu | 1
N17R00000000000 | 1
Enzeux | 1
ShadowAxeKid | 1
elemelon | 1
pkat | 1
AdriRoPi | 1
ISSLOL | 1
RedPed | 1
pualoGD | 1
Star Chaser | 1
titiwi13 | 1
doominator222 | 1
Atonish | 1
iDarkBy11 | 1
DumpyUnicorn | 1
Xisars | 1
Spectrier | 1
Soam | 1
GhostGuy707 | 1
edc | 1
PalomitaJump | 1
MarioGDx | 1
TheFlamingLite | 1
Aveno | 1
SkyFaII | 1
DimusLv | 1
xJadztax | 1
MakFeed | 1
kidskers | 1
L3010 | 1
zZeusGD | 1
Aiden87 | 1
Unseasons | 1
KlechuS | 1
Angel Z | 1
BostonRaveParty | 1
Quasar GD | 1
DrSlurrp | 1
llblueeyesll | 1
CreatorExol | 1
yuugennn | 1
GorabiE | 1
aviovlad | 1
DiscJoker | 1
OrBiTaL | 1
iReumii | 1
Streetm3t | 1
ChAoTiCFoXx | 1
Arcri | 1
W0lF3 | 1
Caeden117 | 1
4TLHacks | 1
SwiPeX | 1
Depzon | 1
Eastra | 1
Cuaer7 | 1
TeamThrowback | 1
Alex GD | 1
jomppegg | 1
1234 | 1
RavenVN | 1
HJfod | 1
TOYOIL | 1
SamDav | 1
Delta | 1
DARKSCORANGLOL | 1
TheYisusMC | 1
Pasticho24 | 1
GammAndrew | 1
Vicentts | 1
Creator SR | 1
Tiw | 1
Charvq | 1
GodzDash | 1
FaKob | 1
Nigocy | 1
trash260 | 1
TheRealXFuture | 1
EchoSpiracy | 1
Dhexvi | 1
karister | 1
LVallence | 1
Aeseorl | 1
mbed | 1
MeTuna | 1
HTmega | 1
diosgamer01 | 1
AshX | 1
AraxGames | 1
TamTixx | 1
SrBert | 1
AzaFTW | 1
Antassma | 1
iMela | 1
NotSVaN | 1
HyperBlastz | 1
Chom | 1
anigmus | 1
Rogue | 1
TheRaptor999 | 1
Gvozden | 1
YGYoshI | 1
N3xus4 | 1
Lythium | 1
CarlM50 | 1
NitronicFury | 1
Viscon | 1
sushiywy | 1
CarnicoGG | 1
Salmane | 1
BrunoGM | 1
MZeldris | 1
xSynthGD | 1
potass | 1
IronMaker | 1
Pholo | 1
SilowGD | 1
harris7n | 1
kdavid2123 | 1
Eternum | 1
Kalibuth | 1
nei | 1
moe machine | 1
ThePlayy | 1
Vichiwi | 1
R3YGA | 1
ExoticCheeses | 1
TeamZeelk | 1
colebowl | 1
Okuu | 1
exxxtreme | 1
mvngo | 1
ItsAlexTG | 1
IIIGH0STIII | 1
GDtbz | 1
dynotoast | 1
Gekou | 1
MIZURA | 1
Jaxile | 1
Coneth | 1
Noktuska | 1
LucaRmd | 1
Creator D3lta | 1
M2coL | 1
IK4ROS | 1
roihu3M | 1
iTzMagicGD | 1
FlatterDesert | 1
ELLVO | 1
ItzSmasH | 1
GDZenmuron | 1
SmittSui | 1
NyxT | 1
Ysshi | 1
heck223 | 1
Lanfear | 1
Smarted | 1
Anticmateria | 1
itisgreatandfun | 1
ItsRavenGD | 1
GDmelody | 1
SuperMarioBro | 1
Flack3005 | 1
LilT3ddy | 1
HideousToaster | 1
DescendantGeo | 1
TeslaGD | 1
gryph0n1 | 1
FunctionSquad | 1
Luckita | 1
luisilloRS | 1
SpiceBerry | 1
RealMonkey | 1
SaTRiX95 | 1
xCarbon87 | 1
Dorianti | 1
TDC ChRiS | 1
102stile | 1
Getpwnedowo | 1
AuRuS | 1
Lenov | 1
iLLusIoNz74 | 1
jacksamvdvv | 1
WolfieRC | 1
xe113 | 1
ozzyqboth | 1
BlackX | 1
LightningBass | 1
masterstrike | 1
polarbearz | 1
OldL | 1
lZain | 1
Mechabot | 1
retnuh | 1
Minemario | 1
PaganiniGD | 1
DeTetra | 1
TTT311 | 1
BlockinBlocker | 1
benefact0r | 1
tomi555 | 1
sofabeddd | 1
Styber | 1
Amadei | 1
FEZ PEZ | 1
KarelTheGreat | 1
SomeThingness | 1
lammyzz | 1
SunCOMoon | 1
CreatorZP | 1
Paragon | 1
Sinity | 1
Pixagon | 1
GDcob | 1
SUKEGAME | 1
arcGMD | 1
Maljot | 1
Aegeas | 1
RealGab | 1
XxspacekatzxX | 1
Gragon | 1
GWFiCo | 1
GMD VoRt3X | 1
Aeria744 | 1
Yanni | 1
Byzhelltryx5 | 1
idahkL | 1
PokeChatt | 1
iIElDanteIi | 1
Aberranturtle | 1
Lss | 1
iPrisy | 1
Atomic Wall | 1
r y u | 1
IIp1aY3rII | 1
jensGD | 1
xXAgustin72Xx | 1
Isfy | 1
VonSketchz | 1
SkullCatzz | 1
GK NK 98 | 1
Aguara | 1
Pakki | 1
Devylan | 1
FinnFlama | 1
ReOrione | 1
KyNutZ | 1
snsor | 1
DeS | 1
bananabomber | 1
InideuX | 1
nekitdev | 1
TheSorin | 1
Futurism | 1
GeRon | 1
ImMasterX | 1
Lightvoid | 1
GD Jupiter | 1
Amoeba | 1
youngjun30 | 1
Oxayy | 1
hypercube1 | 1
Atacron | 1
swallforce | 1
Osshan | 1
xStain | 1
Plyush | 1
BaharTeam | 1
Nigu | 1
Alter | 1
lrios777 | 1
JungKEEK | 1
LMapex | 1
LilAce | 1
OmegaCrux | 1
FiggyJr | 1
CarlittosKs | 1
Solidized | 1
Froggo Doggo | 1
Xerena | 1
iNeo | 1
ErrorDetected | 1
GGLudvigg | 1
DenPelm | 1
zGodYT | 1
CrisisHedgehog | 1
benissad | 1
Niivii | 1
Emilu96 | 1
LaZerBeamer | 1
Dawnf4ll | 1
DEMOLITIONDON96 | 1
Zgwegos | 1
ImaXgameXD | 1
CoNuT | 1
Tierce | 1
MortoxX | 1
MegaDBow | 1
Wolfstar | 1
YisusChan | 1
Razorhell | 1
xsebasfnx | 1
Ranar | 1
Lucidal | 1
Gormuck | 1
xAct | 1
Riqirez | 1
Thisse | 1
oaf | 1
L0STS0LE | 1
Brigno | 1
RodryKC | 1
TheTrueTrIpl3 | 1
Alpha rainbow | 1
Miniunicycle | 1
IcepixelGD | 1
WalkerTech | 1
Vyuu | 1
iv4n24 | 1
Nova to | 1
Cookieszs | 1
WaterPLE | 1
TheRealJoseph | 1
Arrownote | 1
CreatorCold | 1
Lyomi | 1
DryCryCrystal | 1
Raizen | 1
Hexodex | 1
DjibrilIII | 1
ShaD2k4 | 1
Cantis | 1
Riddikdash | 1
NotWaffles | 1
Gresov | 1
junthebad | 1
MrZaphkiel | 1
RegularDays | 1
RIXIS | 1
Xenticist | 1
HateLife | 1
Wcly | 1
Tanker1K | 1
OwenEOA | 1
Nach | 1
nestorboy | 1
viridyane | 1
Demonico17 | 1
RUDETUNE | 1
Silvcr | 1
Chelonian | 1
fletser | 1
senvers | 1
InDead | 1
ryxtell | 1
Wreach | 1
ELTITAN343 | 1
CreatorREX11 | 1
Ph4lip | 1
gaidenhertuny | 1
ToastedTwix | 1
Hevii | 1
EvilAnvil | 1
EVW | 1
Ecila | 1
zDoms | 1
HGDJ | 1
MouglyDlimboO | 1
adafka | 1
facusgg | 1
SheX | 1
MUGNA | 1
40elyk | 1
KonshinBro | 1
eptic777 | 1
Itstimtime | 1
Defflex | 1
JustaGDplay3r | 1
CrEpUsCuLe | 1
MuhAbd123 | 1
Jerouen1031 | 1
ChocoGabz | 1
Brunisimo | 1
FireBot Xp | 1
RiiZal | 1
NyamLock | 1
Xellion | 1
logan4722 | 1
elze | 1
Tizoki | 1
mau9375 | 1
MuseOfLove | 1
MachineDollX896 | 1
Adriam71 | 1
Vicolor | 1
Accelec | 1
NitenTeria | 1
iRainstorm | 1
DeKitty | 1
F4bio | 1
HoriZon Lights | 1
Sqiddo | 1
JHERALD | 1
Pepper360 | 1
MacroGDX | 1
NetFlix | 1
Superchat | 1
Ranos | 1
Flups | 1
SntVTC | 1
LouisPF494 | 1
Plystat | 1
ElectroBlaze | 1
SleepyCube | 1
LaSteven | 1
mynamejeffff | 1
FireKiP14 | 1
ItsAgSilver | 1
SatanicGD | 1
JohnSnail | 1
Exlextron | 1
DeltaYK | 1
Wonduu | 1
DarkoGD | 1
Oculus | 1
Rinnegun | 1
Nerex | 1
Foxann | 1
IINimbusII | 1
YahirPoint | 1
startor | 1
Dynasteel | 1
demondoomvn | 1
WinderDG | 1
brixx | 1
Nakane | 1
kaisteam | 1
iIAkariIi | 1
Jamus | 1
MasjosbarGD | 1
Banila | 1
Version20 | 1
Numberlock | 1
YeoTic | 1
JesusAlen | 1
Skepo | 1
iFlame | 1
Eboshi | 1
Dseinor | 1
MrSimpleJames | 1
Prezesi | 1
iBurger | 1
dasher | 1
Virgo03 | 1
TheLris | 1
Jindotgae | 1
mafaaaaa | 1
GD Master | 1
Felixx | 1
Master doge joy | 1
Zacx199 | 1
RealWater | 1
GCDarkGGG | 1
Xeniel | 1
NinjaShrimp | 1
KriticK | 1
Shirks | 1
APTeamOfficial | 1
vcsouly | 1
Fridge | 1
Moonlchan | 1
SuperNao | 1
GDFruitsalad | 1
j4sonpixelz | 1
BrandisGamer | 1
GDrolx | 1
Reamoid | 1
TomGMD | 1
GD DoctorScar | 1
Floppy | 1
Garbanzitoh | 1
SparKendary | 1
Belphox | 1
chevkoronaldi | 1
nectaroso | 1
IgnaciioPaez | 1
Xron GD | 1
Intrix | 1
PeaKnight | 1
Deriery | 1
Vral | 1
vtxteam | 1
RaiXe | 1
ThereIsASpark | 1
wCookieD | 1
Dragon | 1
BIGMAC77 | 1
Cerelax | 1
GD Iris | 1
KaXxerK | 1
D3aThBR1ng3r | 1
ToToo | 1
Vird | 1
PringlesGD | 1
diamondskull | 1
timeless real | 1
CreatorWeeMan | 1
DeathVenom29 | 1
Minerjoeb2 | 1
Zeroya | 1
Japic | 1
ItsDanito | 1
Ducknorriss | 1
IiITGTIiI | 1
aton022 | 1
NachoLag | 1
PurpleGuy99 | 1
NazKm | 1
David Cordero | 1
R3xter | 1
YoshiiBros | 1
TheJerome | 1
TrueBloodshot | 1
Jhemyson1 | 1
ThePotato2 | 1
Elzeko | 1
Dragyn | 1
DeadIx | 1
KingSupuu | 1
Atimistic | 1
zPsychal | 1
ghostmode | 1
CobaltV | 1
X73 | 1
LuichoX | 1
zZenith | 1
geomania | 1
TheDolb1natoR | 1
j2402c | 1
Rexium | 1
JustSpam | 1
WitRo | 1
Voltz251 | 1
Acetari | 1
Miyolophone | 1
TerraX | 1
Meroo | 1
TrueLyaLya | 1
inzzane | 1
Joseph117 | 1
trashey | 1
geogame | 1
WillTheDash3r | 1
DiaiR | 1
Pitone | 1
Mylon | 1
farisu | 1
MaksPunch | 1
Muochi | 1
stewart | 1
Mawky | 1
Smexx | 1
o0LevelEditor0o | 1
Pix3lated | 1
rayrsd | 1
nadsads | 1
GhosteX7 | 1
MiniEagle | 1
Electroid | 1
ArianaX | 1
Nelll | 1
UltimatePlaty | 1
Berke | 1
DeadlyPlaysGD | 1
LaserX17 | 1
Xayvion | 1
Exilence | 1
Skipumi | 1
i1Summer1i | 1
YazuHD | 1
AHM12345 | 1
RobhilL | 1
Denks | 1
Xhaos | 1
CribCrab | 1
EntiqueGD | 1
DjCrash1013 | 1
Virgo8020 | 1
davih107 | 1
Luvery | 1
EMER0W0 | 1
Zenky | 1
Seaphase | 1
IiLejandiI | 1
Xarius | 1
PeteTheWagon | 1
wormfodder | 1
CocoXs32 | 1
GeoSaur | 1
JellyUerO | 1
Lazirfa | 1
amonostor | 1
HeYDaX | 1
lMattz | 1
GilipollasV2 | 1
ScootyPuffJr | 1
gamexgd | 1
heonel | 1
SaDaCy | 1
FawksGD | 1
Typhoon9000 | 1
TTFTK | 1
Syndrome GMD | 1
GdVortex | 1
0bsco | 1
RoJoDo | 1
FlyingChicken | 1
SlashGH | 1
Arthur | 1
Tr0d3ll | 1
iIHyperBotIi | 1
TuMiTu | 1
Kivixi | 1
Angelgdlm979 | 1
Pine O Matic | 1
RayZ0n | 1
Kimi18 | 1
ZunEGD | 1
Pauwe | 1
Fynzyme | 1
iIMyZoneIi | 1
Tyguy124 | 1
nahuel250204 | 1
EchoAux | 1
Fizzy GD | 1
TheGrisha | 1
STARxd | 1
Caecus | 1
PinKel | 1
GREEN | 1
Exotium | 1
Pounix | 1
Callixto | 1
Ugg3 | 1
guabaya | 1
Mida 51 | 1
HungryHamburger | 1
nnaMae | 1
kanji | 1
CetreX | 1
Minebraine | 1
Inspirity | 1
xHeiDax | 1
MaganenZo | 1
ColorsX | 1
DoliaX | 1
LucariusM | 1
SPUDSSS | 1
Jecya | 1
AlphARecords | 1
TheSquidLord | 1
Jextins | 1
Cyphexx | 1
Warped | 1
Centurie | 1
leonaril | 1
MonTaSsir | 1
AzazelFX | 1
Revellixia | 1
Xanii | 1
Zipiks | 1
DysprosiaX | 1
dimannike | 1
Varse | 1
Waivve | 1
DoctorSnake | 1
noxel10 | 1
Steekmen | 1
EeroMTF | 1
Zephyr | 1
Intriago | 1
ItzRezz | 1
Azartt | 1
ImaZe | 1
HarshHere | 1
Aeci | 1
Rzary | 1
LenZiP | 1
CreatorTemple | 1
ColorBlind | 1
wBERSw | 1
FlappySheepy | 1
SamiiKawaii | 1
P4uL333 | 1
Khirby | 1
VermAmvGD | 1
AnghelZ | 1
Block Jumper | 1
Alphaz | 1
xXxShadowxXx | 1
Bouffi | 1
Horneet | 1
GroovyGravyGD | 1
Electro543 | 1
KerryTheBall | 1
Cryst4l | 1
Green Dragon | 1
IIyunx18II | 1
Al3M0N | 1
Reburned | 1
Jrockmonster | 1
Ultimxte | 1
GD virus | 1
TeamArtistic | 1
TheSuperbot | 1
Filarious | 1
Dextrose | 1
IndiRam | 1
Dans2jours | 1
DAVIDSHOCK | 1
hydroduck | 1
ZorgX | 1
VegasKoneko | 1
Andr1x | 1
AkyliZz | 1
edson442 | 1
ArtaxFerSarh | 1
s3rj | 1
ImPanther | 1
Xetiny | 1
sanyu | 1
Carbide | 1
TheRealSpook | 1
KirbyFan | 1
Mastermind | 1
LVtheFirst | 1
j53 | 1
diablox | 1
VigorousGard3n | 1
TheShii | 1
LegeNdiuM | 1
Extravagon | 1
Elias100 | 1
Zisl3reX | 1
Mononoke | 1
Waferz | 1
TCRK | 1
Claren | 1
GSD | 1
xFourze | 1
Symnikat | 1
babaabbas0 | 1
We4therMan | 1
Danflop | 1
GeryuilJTL | 1
itsBorto | 1
inzipid | 1
NexAxel | 1
WasherNami | 1
Sarz | 1
nNumb | 1
xavionnn | 1
WoiteForest | 1
cyro | 1
Rigel | 1
Elias1277 | 1
kadenrice | 1
Sivlol | 1
CreepyDash | 1
Caernarvon725 | 1
Cyrillius | 1
Aecate | 1
Bruno543 | 1
desoxicolico | 1
CreatorJC | 1
Fearotic | 1
ReaL3x | 1
Xcharlie | 1
C0LECT0R | 1
7Tash | 1
GomsGD | 1
TacoWaifu | 1
xMrFinlandx | 1
Mutty99 | 1
gomopidor | 1
iCanne | 1
ADepressedBean | 1
DeathHogz | 1
IBlitzDashAirI | 1
ItzDamnu | 1
Broskm | 1
ZumiRiu | 1
gdcrusader | 1
GD Orange123 | 1
deadlychicken | 1
Splexx | 1
gergedan | 1
XaphanJC | 1
MigueStyles | 1
lbpa | 1
mameister | 1
GustavoGutavo | 1
GOKILL | 1
PMK | 1
Newgdusername | 1
cab210 | 1
JordyGD01 | 1
Extrimeman | 1
sparter | 1
shawm | 1
ImNach | 1
GhostVandalf | 1
ITSBYCRAFTXX | 1
temavero | 1
Frosty85 | 1
MathyKS | 1
DimaDank | 1
Scotty16 | 1
Flxr | 1
kira9999 | 1
SixGames | 1
Eightee | 1
SpacialKitty | 1
ReLampaGoBR GD | 1
BlizzardCo | 1
FyreDragan | 1
VexautdYT | 1
XDDDDDDDDDDDDD1 | 1
TGBrockS | 1
EOPRG | 1
SalMosA | 1
aryski21 | 1
Warhol | 1
JamesRamirez08 | 1
Prinplup | 1
dgbeck | 1
icl | 1
iCookieX | 1
DONIC | 1
Kiranox | 1
Mr Flueco | 1
TraZox | 1
tricer | 1
Vircoss | 1
smertnet | 1
Aurifex | 1
INTREPID | 1
zDeadlox | 1
SEAturtle | 1
Clea | 1
d0nK | 1
SirWafel | 1
JuanitoUwUr | 1
Hydro7 | 1
EMSoulless | 1
Kdaua | 1
NiceNike | 1
JULIANGAMER24 | 1
Cyber Jupiter | 1
jKreno | 1
Mitycalf | 1
Bissis | 1
suujimatsu | 1
Revaus | 1
BySelling | 1
Qventiam | 1
BoBoBoBoBoBoBo | 1
iPandaSwag | 1
SimilarAMZ | 1
vCrimson | 1
KrisKing | 1
Zombiekiller8 | 1
Nach170 | 1
zim | 1
JstepheN | 1
JustKhalilz | 1
volplay | 1
VotcHi | 1
Kanarulp | 1
AzrilDripana | 1
Snaiky | 1
Xiel | 1
monkeybazooka | 1
IIExenityII | 1
Sufi2425 | 1
boxtroll | 1
Tribar | 1
Richzilla | 1
jokys | 1
Stevie19 | 1
FlaskeR | 1
Supermoon | 1
Laperon | 1
Moskie | 1
Metronic | 1
ItzDolphy | 1
Oskulicious | 1
Dehily | 1
Brennanrj | 1
Alphanetic | 1
Step4enko | 1
Melrin | 1
AleHN | 1
KobazZz | 1
Atomic | 1
ButterFish | 1
Cosmicss | 1
Cleber012 | 1
EvenSlash | 1
joe12 | 1
StarkyTheSalad | 1
Moonlitex | 1
Darkhumans | 1
DTheWolf | 1
N1XO | 1
Malicia | 1
Morphizn | 1
Stelistul21 | 1
Graded | 1
Streyt4ward | 1
Alice24 | 1
melX0exe | 1
Anonymous User | 1
xMale | 1
ChilliusVGM | 1
KingJayR | 1
DaPlaquE | 1
VaporWaveTeam | 1
RandumbGamer | 1
NubScrub | 1
Nightning | 1
RealRasberry | 1
IiBlackLL | 1
Chanel 6 | 1
BlackEye23 | 1
Kiroishi | 1
fredature | 1
zeSophia | 1
DualTroniX | 1
ImMantaa | 1
dohyen | 1
NyuCat | 1
Azexon | 1
Zecret | 1
Legacy | 1
zuax | 1
ae8 | 1
TMNightHunt | 1
Meteor Strike | 1
Stixis | 1
crazilogan | 1
Tyrannyy | 1
Rashi | 1
SpadeKilla17 | 1
iiIneonIii | 1
MrFreckles | 1
AurelienD | 1
LimeGun | 1
nSebas | 1
PixelCat | 1
xchezzex | 1
Cenos | 1
ZiMkIl | 1
LDrix | 1
Keyblade | 1
stussss | 1
cheetah5211 | 1
RaxisGD | 1
DeLoreanZ | 1
Berth Hero | 1
SlushyAgistic | 1
Findel | 1
davidbelesp | 1
AVRG | 1
Alphnite | 1
StarMage | 1
Rinemichi | 1
AngryBoy0644 | 1
SuperDonitauwu | 1
Jobet | 1
Renokkusu | 1
TopSanic | 1
xDrewww | 1
Zodacx | 1
ByMaxCraft | 1
cydoX | 1
Henmin | 1
RealCookie | 1
Dzeser | 1
CornFungi | 1
DaedraZelos | 1
Junkyy | 1
LiLiuM11 | 1
W4rlock | 1
eierfrucht | 1
SuMmErGd | 1
WoooooooooX | 1
EquineTree | 1
Idiocy | 1
IrekB4i | 1
Argne | 1
KreoN | 1
Rocketeer | 1
Shizennoaki | 1
tritonshift | 1
Maelstrom | 1
DesTiNy | 1
GeometryPil0t | 1
martiyaso | 1
Cris79X | 1
waterphoenix | 1
Grandium | 1
BlastiK | 1
Fabman650 | 1
Mimish | 1
The Lamb | 1
Quadrant08 | 1
KarploosGD | 1
Cooney | 1
xIMTx | 1
Celenzone | 1
MirageXVII | 1
Judas | 1
Root4jun | 1
dno123 | 1
alex7g | 1
AncherStram | 1
TheRealFakey | 1
schneitho | 1
J i | 1
Rocky27 | 1
Alnoc25 | 1
thor500 | 1
Luxium | 1
AngeloLP | 1
MarEng | 1
PlasmaGD | 1
Bamkii | 1
ZukulentoO | 1
TheParkas13 | 1
TheSheepster | 1
LegoFatXD | 1
ItsYisusUwU | 1
CraftedKing | 1
hexary | 1
FlowerAce | 1
Verognee | 1
reytrix011 | 1
OssyTheDragon | 1
DnkGD | 1
kapianol | 1
viby | 1
Dammer Dash | 1
Tremendousdash | 1
atlantis11656 | 1
Lazergenix | 1
aqu4rius | 1
Magic Square | 1
Vust42 | 1
Portex144 | 1
Duffz | 1
Britzlar | 1
WHErwin | 1
HandleMyBagel | 1
davphla | 1
Eclipseyy | 1
Sebbax | 1
Relic686 | 1
H4RDMACHINE | 1
BoldStep | 1
SHAZIMA | 1
koolboom | 1
Moldavsky | 1
xstrikerx | 1
xXazurite | 1
ItsEnigma | 1
CDMusic | 1
ZeiThex | 1
ANPE | 1
Forest UwU | 1
Linco | 1
Prayer | 1
Lutecia Concord | 1
Rytm | 1
Netvideo | 1
XxAviilaxX | 1
Xuesse | 1
bobmoneybags | 1
astrii | 1
Spec7rix | 1
RetroScope | 1
xmysef | 1
The Shoot | 1
CatAtKmart | 1
RaftersBC | 1
Baskerville | 1
RH69 | 1
Desolator 930 | 1
Chloe2012 | 1
Wieee | 1
Worz | 1
Taolicia | 1
MasterTheo03 | 1
AsterionGD | 1
TyakyoFyai | 1
llKrooXll | 1
Nosaima | 1
Zidane24 | 1
Oxiblitz | 1
Plexie | 1
Adry0013 | 1
SantiiGMD | 1
4rcO | 1
RipTideZ | 1
Zentexesus | 1
GD Hans3l | 1
Venomite | 1
3FanTom3 | 1
CarlYT | 1
iNeccu | 1
Vioxgaming | 1
Handexx | 1
DanZwomeN | 1
KiziBro7 | 1
Srinkelz | 1
TheBew | 1
GeometryTom | 1
FrAmos | 1
Vuge | 1
Ackr0 | 1
Jcglasser | 1
NAMHO | 1
LeoGinN | 1
Farva | 1
Streeetkyt | 1
eMartCider | 1
Tommasso | 1
HHurricane | 1
Sylphid | 1
dWdubz | 1
XSparkGaming | 1
MraleGD | 1
Silouute | 1
SummerSolsta7 | 1
xTheCHUCHOx | 1
Lekattn | 1
Exiked | 1
Meekah | 1
GMoney03 | 1
Saneron TB | 1
LordSQ | 1
Brindikz | 1
Noroi | 1
AstroFox | 1
SenK3etsu | 1
JoNiUs | 1
Saturnite | 1
augi | 1
LymphGD | 1
DiaWord | 1
SlothBlock | 1
adventure070 | 1
Xenomorphs | 1
Saltydrop | 1
Wafflism | 1
ShadowChaotic | 1
Alaiskai | 1
UnableToWin | 1
Peyt0n | 1
Shadecat | 1
visuallydynamic | 1
Skeaten | 1
ElMake | 1
Larvogold | 1
CrazyLaurent | 1
Cyclepeidia | 1
GRVTY | 1
Aesth | 1
GD Quasar | 1
DrAkcel | 1
LewinGD | 1
FreshDom3 | 1
Miniman2098 | 1
TheOnyxGuy | 1
RealGalaxy | 1
VegetarianBacon | 1
GALAXYfox | 1
CristobalGmD | 1
Orange Juice | 1
HadiGD | 1
Furiga | 1
DrZenQi | 1
Epic74 | 1
Korean kty | 1
Megadere | 1
Vin003 | 1
EDoosh | 1
MoonBlade2000 | 1
CreatorPig | 1
NatRizen | 1
InversionGD | 1
Dimen | 1
saRy | 1
extint | 1
FriSaiL | 1
Arachnus | 1
edenharley | 1
rRaddy | 1
CoolCreeper | 1
Julmer | 1
Gabriel | 1
Murae | 1
TheChicken2124 | 1
classic10 | 1
DeVeReL | 1
macoh | 1
FrostSavage | 1
Nazor | 1
Ba1tazar | 1
XoVaK | 1
polarmanzues | 1
AeverMess | 1
TheRealZust | 1
Borgen | 1
mergan | 1
Funkeee | 1
DivinePotato | 1
LunarStep11 | 1
ARIpl | 1
random0120 | 1
Neutro | 1
DeviantArt | 1
TGTomahawk09 | 1
WarlocK15 | 1
marryhun | 1
TrphqcdaT | 1
ShiroGMD | 1
Ivanpercie | 1
Ionn | 1
DreamerOwO | 1
KrampuX | 1
LePaulin | 1
juanchacon99 | 1
SlushiiDash | 1
Rzix | 1
Rangerlust | 1
CosmiiK | 1
0rd3n4ry | 1
JharitsonGD | 1
softcube | 1
PaysonQ | 1
megalivvy125 | 1
fufafac | 1
yupbro | 1
KarikioX7gd | 1
AlanShot | 1
Subjekt | 1
Zarkyi | 1
MenhHue | 1
ToxYc | 1
Wizzuki | 1
IceFortreeX | 1
MizzHiffouu | 1
kyurime | 1
Sarynin | 1
LimeTime313 | 1
ferrarizo | 1
drawn | 1
OilPanic | 1
Efext | 1
PaLiX | 1
cmsGD | 1
8exforce8 | 1
NudesSender | 1
Yush2n | 1
FP GobSmack3d | 1
crispapple | 1
Artanao | 1
Noblefoul | 1
Shaman123 | 1
shadow speed | 1
GD ParaDoX | 1
Wamel | 1
V i b e | 1
Zeonx | 1
Simplepixels | 1
omegus117 | 1
HydroPero | 1
sintil | 1
ChaozAF | 1
zap101 | 1
TeamGaruda | 1
Reminant | 1
MetryMasterJJ | 1
OdeII | 1
Opcheeseburger | 1
fitatfi | 1
kriper2004 | 1
Skidel | 1
AgentY | 1
juanmatiip | 1
ShinjiGat | 1
icemask | 1
Fehleno | 1
ChiinoGD | 1
IiDeaychawooiI | 1
Licen00 | 1
KAPIBARA | 1
Plexa | 1
crispybag | 1
NagromX | 1
MrPPs | 1
Has2CP | 1
Mwkranel | 1
Buziris | 1
FishDot | 1
MaxMason747 | 1
v0lt13 | 1
P0ki | 1
Galactrix | 1
Caustic | 1
Aurora123 | 1
Nosii | 1
xSynthi | 1
siri | 1
Wakanding | 1
URean | 1
Orelu | 1
JordanTRS | 1
Spectruh | 1
m0use | 1
Etta | 1
Lam1503 | 1
ZachLy | 1
bassiegames | 1
Slopes | 1
iIiEupeNoxiIi | 1
layyzbean | 1
TheHabanero | 1
GDSkulll | 1
iDarko | 1
KireiMirai | 1
empyreanGD | 1
JeshuLucero | 1
GEGDGames | 1
Moyss | 1
alexANDgame | 1
IceKeyHammer | 1
| 13.078852 | 20 | 0.69019 | yue_Hant | 0.276596 |
e0351f6935cca0415aaf6133b2e7566f32d4f9f1 | 1,387 | md | Markdown | README.md | daniel-s-ingram/design | 2dbf709532dad49778c72d328356b95b3d0e1cf4 | [
"Apache-2.0"
] | 1 | 2021-01-11T06:29:48.000Z | 2021-01-11T06:29:48.000Z | README.md | daniel-s-ingram/design | 2dbf709532dad49778c72d328356b95b3d0e1cf4 | [
"Apache-2.0"
] | null | null | null | README.md | daniel-s-ingram/design | 2dbf709532dad49778c72d328356b95b3d0e1cf4 | [
"Apache-2.0"
] | 1 | 2020-03-12T03:51:31.000Z | 2020-03-12T03:51:31.000Z | # ROS 2.0 design
This repository is a [Jekyll](http://jekyllrb.com/) website hosted on [Github Pages](http://pages.github.com/) at http://design.ros2.org/.
The repository/website is meant to be a point around which users can collaborate on the ROS 2.0 design efforts as well as capture those discussions for posterity.
The best mailing list for discussing these topics is [[email protected]](mailto:[email protected]).
You can view the archives [here](https://groups.google.com/forum/?fromgroups#!forum/ros-sig-ng-ros)
## Working Locally
You can run the site locally by running this command in this repository:
```
jekyll serve --watch --baseurl=''
```
And navgiating to your browser to:
[http://localhost:4000/](http://localhost:4000/)
## Site Setup
Site is a Jekyll website with `design.ros2.org` as the `CNAME`.
The site requires no static generation outside of github's static jekyll generation, which means that changes are published to the site as soon as they are pushed (can take up to 10 minutes for github to update the site).
The github login (for showing pull requests) also requires that https://github.com/prose/gatekeeper is setup in heroku, and the url for that is http://auth.design.ros2.org/authenticate/TEMP_TOKEN. Because of the free Heroku instance, the first time someone logins in after a period of time, there is a delay.
| 46.233333 | 308 | 0.764239 | eng_Latn | 0.991795 |
e0354bca204d43d903357e74dfc42293c2fa57ef | 19,237 | md | Markdown | release-notes/5.0/api-diff/netstandard2.1/5.0_System.Reflection.PortableExecutable.md | yygyjgmhje1987/core | d73a85454b1dd7f323e18e5e728ef0c95ff5a57a | [
"MIT"
] | 19,395 | 2015-01-02T20:41:47.000Z | 2022-03-31T20:10:11.000Z | release-notes/5.0/api-diff/netstandard2.1/5.0_System.Reflection.PortableExecutable.md | yygyjgmhje1987/core | d73a85454b1dd7f323e18e5e728ef0c95ff5a57a | [
"MIT"
] | 6,129 | 2015-01-22T15:19:50.000Z | 2022-03-31T18:47:06.000Z | release-notes/5.0/api-diff/netstandard2.1/5.0_System.Reflection.PortableExecutable.md | yygyjgmhje1987/core | d73a85454b1dd7f323e18e5e728ef0c95ff5a57a | [
"MIT"
] | 6,068 | 2015-01-05T18:03:07.000Z | 2022-03-31T08:08:49.000Z | # System.Reflection.PortableExecutable
``` diff
+namespace System.Reflection.PortableExecutable {
+ public enum Characteristics : ushort {
+ AggressiveWSTrim = (ushort)16,
+ Bit32Machine = (ushort)256,
+ BytesReversedHi = (ushort)32768,
+ BytesReversedLo = (ushort)128,
+ DebugStripped = (ushort)512,
+ Dll = (ushort)8192,
+ ExecutableImage = (ushort)2,
+ LargeAddressAware = (ushort)32,
+ LineNumsStripped = (ushort)4,
+ LocalSymsStripped = (ushort)8,
+ NetRunFromSwap = (ushort)2048,
+ RelocsStripped = (ushort)1,
+ RemovableRunFromSwap = (ushort)1024,
+ System = (ushort)4096,
+ UpSystemOnly = (ushort)16384,
+ }
+ public readonly struct CodeViewDebugDirectoryData {
+ public int Age { get; }
+ public Guid Guid { get; }
+ public string Path { get; }
+ }
+ public sealed class CoffHeader {
+ public Characteristics Characteristics { get; }
+ public Machine Machine { get; }
+ public short NumberOfSections { get; }
+ public int NumberOfSymbols { get; }
+ public int PointerToSymbolTable { get; }
+ public short SizeOfOptionalHeader { get; }
+ public int TimeDateStamp { get; }
+ }
+ public enum CorFlags {
+ ILLibrary = 4,
+ ILOnly = 1,
+ NativeEntryPoint = 16,
+ Prefers32Bit = 131072,
+ Requires32Bit = 2,
+ StrongNameSigned = 8,
+ TrackDebugData = 65536,
+ }
+ public sealed class CorHeader {
+ public DirectoryEntry CodeManagerTableDirectory { get; }
+ public int EntryPointTokenOrRelativeVirtualAddress { get; }
+ public DirectoryEntry ExportAddressTableJumpsDirectory { get; }
+ public CorFlags Flags { get; }
+ public ushort MajorRuntimeVersion { get; }
+ public DirectoryEntry ManagedNativeHeaderDirectory { get; }
+ public DirectoryEntry MetadataDirectory { get; }
+ public ushort MinorRuntimeVersion { get; }
+ public DirectoryEntry ResourcesDirectory { get; }
+ public DirectoryEntry StrongNameSignatureDirectory { get; }
+ public DirectoryEntry VtableFixupsDirectory { get; }
+ }
+ public sealed class DebugDirectoryBuilder {
+ public DebugDirectoryBuilder();
+ public void AddCodeViewEntry(string pdbPath, BlobContentId pdbContentId, ushort portablePdbVersion);
+ public void AddEmbeddedPortablePdbEntry(BlobBuilder debugMetadata, ushort portablePdbVersion);
+ public void AddEntry(DebugDirectoryEntryType type, uint version, uint stamp);
+ public void AddEntry<TData>(DebugDirectoryEntryType type, uint version, uint stamp, TData data, Action<BlobBuilder, TData> dataSerializer);
+ public void AddPdbChecksumEntry(string algorithmName, ImmutableArray<byte> checksum);
+ public void AddReproducibleEntry();
+ }
+ public readonly struct DebugDirectoryEntry {
+ public DebugDirectoryEntry(uint stamp, ushort majorVersion, ushort minorVersion, DebugDirectoryEntryType type, int dataSize, int dataRelativeVirtualAddress, int dataPointer);
+ public int DataPointer { get; }
+ public int DataRelativeVirtualAddress { get; }
+ public int DataSize { get; }
+ public bool IsPortableCodeView { get; }
+ public ushort MajorVersion { get; }
+ public ushort MinorVersion { get; }
+ public uint Stamp { get; }
+ public DebugDirectoryEntryType Type { get; }
+ }
+ public enum DebugDirectoryEntryType {
+ CodeView = 2,
+ Coff = 1,
+ EmbeddedPortablePdb = 17,
+ PdbChecksum = 19,
+ Reproducible = 16,
+ Unknown = 0,
+ }
+ public readonly struct DirectoryEntry {
+ public readonly int RelativeVirtualAddress;
+ public readonly int Size;
+ public DirectoryEntry(int relativeVirtualAddress, int size);
+ }
+ public enum DllCharacteristics : ushort {
+ AppContainer = (ushort)4096,
+ DynamicBase = (ushort)64,
+ HighEntropyVirtualAddressSpace = (ushort)32,
+ NoBind = (ushort)2048,
+ NoIsolation = (ushort)512,
+ NoSeh = (ushort)1024,
+ NxCompatible = (ushort)256,
+ ProcessInit = (ushort)1,
+ ProcessTerm = (ushort)2,
+ TerminalServerAware = (ushort)32768,
+ ThreadInit = (ushort)4,
+ ThreadTerm = (ushort)8,
+ WdmDriver = (ushort)8192,
+ }
+ public enum Machine : ushort {
+ Alpha = (ushort)388,
+ Alpha64 = (ushort)644,
+ AM33 = (ushort)467,
+ Amd64 = (ushort)34404,
+ Arm = (ushort)448,
+ Arm64 = (ushort)43620,
+ ArmThumb2 = (ushort)452,
+ Ebc = (ushort)3772,
+ I386 = (ushort)332,
+ IA64 = (ushort)512,
+ M32R = (ushort)36929,
+ MIPS16 = (ushort)614,
+ MipsFpu = (ushort)870,
+ MipsFpu16 = (ushort)1126,
+ PowerPC = (ushort)496,
+ PowerPCFP = (ushort)497,
+ SH3 = (ushort)418,
+ SH3Dsp = (ushort)419,
+ SH3E = (ushort)420,
+ SH4 = (ushort)422,
+ SH5 = (ushort)424,
+ Thumb = (ushort)450,
+ Tricore = (ushort)1312,
+ Unknown = (ushort)0,
+ WceMipsV2 = (ushort)361,
+ }
+ public class ManagedPEBuilder : PEBuilder {
+ public const int ManagedResourcesDataAlignment = 8;
+ public const int MappedFieldDataAlignment = 8;
+ public ManagedPEBuilder(PEHeaderBuilder header, MetadataRootBuilder metadataRootBuilder, BlobBuilder ilStream, BlobBuilder mappedFieldData = null, BlobBuilder managedResources = null, ResourceSectionBuilder nativeResources = null, DebugDirectoryBuilder debugDirectoryBuilder = null, int strongNameSignatureSize = 128, MethodDefinitionHandle entryPoint = default(MethodDefinitionHandle), CorFlags flags = CorFlags.ILOnly, Func<IEnumerable<Blob>, BlobContentId> deterministicIdProvider = null);
+ protected override ImmutableArray<PEBuilder.Section> CreateSections();
+ protected internal override PEDirectoriesBuilder GetDirectories();
+ protected override BlobBuilder SerializeSection(string name, SectionLocation location);
+ public void Sign(BlobBuilder peImage, Func<IEnumerable<Blob>, byte[]> signatureProvider);
+ }
+ public readonly struct PdbChecksumDebugDirectoryData {
+ public string AlgorithmName { get; }
+ public ImmutableArray<byte> Checksum { get; }
+ }
+ public abstract class PEBuilder {
+ protected PEBuilder(PEHeaderBuilder header, Func<IEnumerable<Blob>, BlobContentId> deterministicIdProvider);
+ public PEHeaderBuilder Header { get; }
+ public Func<IEnumerable<Blob>, BlobContentId> IdProvider { get; }
+ public bool IsDeterministic { get; }
+ protected abstract ImmutableArray<PEBuilder.Section> CreateSections();
+ protected internal abstract PEDirectoriesBuilder GetDirectories();
+ protected ImmutableArray<PEBuilder.Section> GetSections();
+ public BlobContentId Serialize(BlobBuilder builder);
+ protected abstract BlobBuilder SerializeSection(string name, SectionLocation location);
+ protected readonly struct Section {
+ public readonly SectionCharacteristics Characteristics;
+ public readonly string Name;
+ public Section(string name, SectionCharacteristics characteristics);
+ }
+ }
+ public sealed class PEDirectoriesBuilder {
+ public PEDirectoriesBuilder();
+ public int AddressOfEntryPoint { get; set; }
+ public DirectoryEntry BaseRelocationTable { get; set; }
+ public DirectoryEntry BoundImportTable { get; set; }
+ public DirectoryEntry CopyrightTable { get; set; }
+ public DirectoryEntry CorHeaderTable { get; set; }
+ public DirectoryEntry DebugTable { get; set; }
+ public DirectoryEntry DelayImportTable { get; set; }
+ public DirectoryEntry ExceptionTable { get; set; }
+ public DirectoryEntry ExportTable { get; set; }
+ public DirectoryEntry GlobalPointerTable { get; set; }
+ public DirectoryEntry ImportAddressTable { get; set; }
+ public DirectoryEntry ImportTable { get; set; }
+ public DirectoryEntry LoadConfigTable { get; set; }
+ public DirectoryEntry ResourceTable { get; set; }
+ public DirectoryEntry ThreadLocalStorageTable { get; set; }
+ }
+ public sealed class PEHeader {
+ public int AddressOfEntryPoint { get; }
+ public int BaseOfCode { get; }
+ public int BaseOfData { get; }
+ public DirectoryEntry BaseRelocationTableDirectory { get; }
+ public DirectoryEntry BoundImportTableDirectory { get; }
+ public DirectoryEntry CertificateTableDirectory { get; }
+ public uint CheckSum { get; }
+ public DirectoryEntry CopyrightTableDirectory { get; }
+ public DirectoryEntry CorHeaderTableDirectory { get; }
+ public DirectoryEntry DebugTableDirectory { get; }
+ public DirectoryEntry DelayImportTableDirectory { get; }
+ public DllCharacteristics DllCharacteristics { get; }
+ public DirectoryEntry ExceptionTableDirectory { get; }
+ public DirectoryEntry ExportTableDirectory { get; }
+ public int FileAlignment { get; }
+ public DirectoryEntry GlobalPointerTableDirectory { get; }
+ public ulong ImageBase { get; }
+ public DirectoryEntry ImportAddressTableDirectory { get; }
+ public DirectoryEntry ImportTableDirectory { get; }
+ public DirectoryEntry LoadConfigTableDirectory { get; }
+ public PEMagic Magic { get; }
+ public ushort MajorImageVersion { get; }
+ public byte MajorLinkerVersion { get; }
+ public ushort MajorOperatingSystemVersion { get; }
+ public ushort MajorSubsystemVersion { get; }
+ public ushort MinorImageVersion { get; }
+ public byte MinorLinkerVersion { get; }
+ public ushort MinorOperatingSystemVersion { get; }
+ public ushort MinorSubsystemVersion { get; }
+ public int NumberOfRvaAndSizes { get; }
+ public DirectoryEntry ResourceTableDirectory { get; }
+ public int SectionAlignment { get; }
+ public int SizeOfCode { get; }
+ public int SizeOfHeaders { get; }
+ public ulong SizeOfHeapCommit { get; }
+ public ulong SizeOfHeapReserve { get; }
+ public int SizeOfImage { get; }
+ public int SizeOfInitializedData { get; }
+ public ulong SizeOfStackCommit { get; }
+ public ulong SizeOfStackReserve { get; }
+ public int SizeOfUninitializedData { get; }
+ public Subsystem Subsystem { get; }
+ public DirectoryEntry ThreadLocalStorageTableDirectory { get; }
+ }
+ public sealed class PEHeaderBuilder {
+ public PEHeaderBuilder(Machine machine = Machine.Unknown, int sectionAlignment = 8192, int fileAlignment = 512, ulong imageBase = (ulong)4194304, byte majorLinkerVersion = (byte)48, byte minorLinkerVersion = (byte)0, ushort majorOperatingSystemVersion = (ushort)4, ushort minorOperatingSystemVersion = (ushort)0, ushort majorImageVersion = (ushort)0, ushort minorImageVersion = (ushort)0, ushort majorSubsystemVersion = (ushort)4, ushort minorSubsystemVersion = (ushort)0, Subsystem subsystem = Subsystem.WindowsCui, DllCharacteristics dllCharacteristics = DllCharacteristics.DynamicBase | DllCharacteristics.NoSeh | DllCharacteristics.NxCompatible | DllCharacteristics.TerminalServerAware, Characteristics imageCharacteristics = Characteristics.Dll, ulong sizeOfStackReserve = (ulong)1048576, ulong sizeOfStackCommit = (ulong)4096, ulong sizeOfHeapReserve = (ulong)1048576, ulong sizeOfHeapCommit = (ulong)4096);
+ public DllCharacteristics DllCharacteristics { get; }
+ public int FileAlignment { get; }
+ public ulong ImageBase { get; }
+ public Characteristics ImageCharacteristics { get; }
+ public Machine Machine { get; }
+ public ushort MajorImageVersion { get; }
+ public byte MajorLinkerVersion { get; }
+ public ushort MajorOperatingSystemVersion { get; }
+ public ushort MajorSubsystemVersion { get; }
+ public ushort MinorImageVersion { get; }
+ public byte MinorLinkerVersion { get; }
+ public ushort MinorOperatingSystemVersion { get; }
+ public ushort MinorSubsystemVersion { get; }
+ public int SectionAlignment { get; }
+ public ulong SizeOfHeapCommit { get; }
+ public ulong SizeOfHeapReserve { get; }
+ public ulong SizeOfStackCommit { get; }
+ public ulong SizeOfStackReserve { get; }
+ public Subsystem Subsystem { get; }
+ public static PEHeaderBuilder CreateExecutableHeader();
+ public static PEHeaderBuilder CreateLibraryHeader();
+ }
+ public sealed class PEHeaders {
+ public PEHeaders(Stream peStream);
+ public PEHeaders(Stream peStream, int size);
+ public PEHeaders(Stream peStream, int size, bool isLoadedImage);
+ public CoffHeader CoffHeader { get; }
+ public int CoffHeaderStartOffset { get; }
+ public CorHeader CorHeader { get; }
+ public int CorHeaderStartOffset { get; }
+ public bool IsCoffOnly { get; }
+ public bool IsConsoleApplication { get; }
+ public bool IsDll { get; }
+ public bool IsExe { get; }
+ public int MetadataSize { get; }
+ public int MetadataStartOffset { get; }
+ public PEHeader PEHeader { get; }
+ public int PEHeaderStartOffset { get; }
+ public ImmutableArray<SectionHeader> SectionHeaders { get; }
+ public int GetContainingSectionIndex(int relativeVirtualAddress);
+ public bool TryGetDirectoryOffset(DirectoryEntry directory, out int offset);
+ }
+ public enum PEMagic : ushort {
+ PE32 = (ushort)267,
+ PE32Plus = (ushort)523,
+ }
+ public readonly struct PEMemoryBlock {
+ public int Length { get; }
+ public unsafe byte* Pointer { get; }
+ public ImmutableArray<byte> GetContent();
+ public ImmutableArray<byte> GetContent(int start, int length);
+ public BlobReader GetReader();
+ public BlobReader GetReader(int start, int length);
+ }
+ public sealed class PEReader : IDisposable {
+ public unsafe PEReader(byte* peImage, int size);
+ public unsafe PEReader(byte* peImage, int size, bool isLoadedImage);
+ public PEReader(ImmutableArray<byte> peImage);
+ public PEReader(Stream peStream);
+ public PEReader(Stream peStream, PEStreamOptions options);
+ public PEReader(Stream peStream, PEStreamOptions options, int size);
+ public bool HasMetadata { get; }
+ public bool IsEntireImageAvailable { get; }
+ public bool IsLoadedImage { get; }
+ public PEHeaders PEHeaders { get; }
+ public void Dispose();
+ public PEMemoryBlock GetEntireImage();
+ public PEMemoryBlock GetMetadata();
+ public PEMemoryBlock GetSectionData(int relativeVirtualAddress);
+ public PEMemoryBlock GetSectionData(string sectionName);
+ public CodeViewDebugDirectoryData ReadCodeViewDebugDirectoryData(DebugDirectoryEntry entry);
+ public ImmutableArray<DebugDirectoryEntry> ReadDebugDirectory();
+ public MetadataReaderProvider ReadEmbeddedPortablePdbDebugDirectoryData(DebugDirectoryEntry entry);
+ public PdbChecksumDebugDirectoryData ReadPdbChecksumDebugDirectoryData(DebugDirectoryEntry entry);
+ public bool TryOpenAssociatedPortablePdb(string peImagePath, Func<string, Stream> pdbFileStreamProvider, out MetadataReaderProvider pdbReaderProvider, out string pdbPath);
+ }
+ public enum PEStreamOptions {
+ Default = 0,
+ IsLoadedImage = 8,
+ LeaveOpen = 1,
+ PrefetchEntireImage = 4,
+ PrefetchMetadata = 2,
+ }
+ public abstract class ResourceSectionBuilder {
+ protected ResourceSectionBuilder();
+ protected internal abstract void Serialize(BlobBuilder builder, SectionLocation location);
+ }
+ public enum SectionCharacteristics : uint {
+ Align1024Bytes = (uint)11534336,
+ Align128Bytes = (uint)8388608,
+ Align16Bytes = (uint)5242880,
+ Align1Bytes = (uint)1048576,
+ Align2048Bytes = (uint)12582912,
+ Align256Bytes = (uint)9437184,
+ Align2Bytes = (uint)2097152,
+ Align32Bytes = (uint)6291456,
+ Align4096Bytes = (uint)13631488,
+ Align4Bytes = (uint)3145728,
+ Align512Bytes = (uint)10485760,
+ Align64Bytes = (uint)7340032,
+ Align8192Bytes = (uint)14680064,
+ Align8Bytes = (uint)4194304,
+ AlignMask = (uint)15728640,
+ ContainsCode = (uint)32,
+ ContainsInitializedData = (uint)64,
+ ContainsUninitializedData = (uint)128,
+ GPRel = (uint)32768,
+ LinkerComdat = (uint)4096,
+ LinkerInfo = (uint)512,
+ LinkerNRelocOvfl = (uint)16777216,
+ LinkerOther = (uint)256,
+ LinkerRemove = (uint)2048,
+ Mem16Bit = (uint)131072,
+ MemDiscardable = (uint)33554432,
+ MemExecute = (uint)536870912,
+ MemFardata = (uint)32768,
+ MemLocked = (uint)262144,
+ MemNotCached = (uint)67108864,
+ MemNotPaged = (uint)134217728,
+ MemPreload = (uint)524288,
+ MemProtected = (uint)16384,
+ MemPurgeable = (uint)131072,
+ MemRead = (uint)1073741824,
+ MemShared = (uint)268435456,
+ MemSysheap = (uint)65536,
+ MemWrite = (uint)2147483648,
+ NoDeferSpecExc = (uint)16384,
+ TypeCopy = (uint)16,
+ TypeDSect = (uint)1,
+ TypeGroup = (uint)4,
+ TypeNoLoad = (uint)2,
+ TypeNoPad = (uint)8,
+ TypeOver = (uint)1024,
+ TypeReg = (uint)0,
+ }
+ public readonly struct SectionHeader {
+ public string Name { get; }
+ public ushort NumberOfLineNumbers { get; }
+ public ushort NumberOfRelocations { get; }
+ public int PointerToLineNumbers { get; }
+ public int PointerToRawData { get; }
+ public int PointerToRelocations { get; }
+ public SectionCharacteristics SectionCharacteristics { get; }
+ public int SizeOfRawData { get; }
+ public int VirtualAddress { get; }
+ public int VirtualSize { get; }
+ }
+ public readonly struct SectionLocation {
+ public SectionLocation(int relativeVirtualAddress, int pointerToRawData);
+ public int PointerToRawData { get; }
+ public int RelativeVirtualAddress { get; }
+ }
+ public enum Subsystem : ushort {
+ EfiApplication = (ushort)10,
+ EfiBootServiceDriver = (ushort)11,
+ EfiRom = (ushort)13,
+ EfiRuntimeDriver = (ushort)12,
+ Native = (ushort)1,
+ NativeWindows = (ushort)8,
+ OS2Cui = (ushort)5,
+ PosixCui = (ushort)7,
+ Unknown = (ushort)0,
+ WindowsBootApplication = (ushort)16,
+ WindowsCEGui = (ushort)9,
+ WindowsCui = (ushort)3,
+ WindowsGui = (ushort)2,
+ Xbox = (ushort)14,
+ }
+}
```
| 48.334171 | 922 | 0.659458 | kor_Hang | 0.324103 |
e035bd1fd99da023913dab3fe8e370270d0b95e8 | 1,783 | md | Markdown | docs/2014/analysis-services/multidimensional-models/user-defined-hierarchies-create.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/multidimensional-models/user-defined-hierarchies-create.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/multidimensional-models/user-defined-hierarchies-create.md | satoshi-baba-0823/sql-docs.ja-jp | a0681de7e067cc6da1be720cb8296507e98e0f29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ユーザー定義階層の作成 |Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- analysis-services
ms.topic: conceptual
helpviewer_keywords:
- user-defined hierarchies [Analysis Services]
ms.assetid: 16715b85-0630-4a8e-99b0-c0d213cade26
author: minewiskan
ms.author: owend
manager: craigg
ms.openlocfilehash: 7cbef7b6adae6c8650b80b8c91bfd1357dde0541
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 10/02/2018
ms.locfileid: "48218562"
---
# <a name="create-user-defined-hierarchies"></a>ユーザー定義階層の作成
[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] を使用すると、ユーザー定義階層を作成できます。 階層とは、属性に基づいたレベルのコレクションのことです。 たとえば、時間階層には、年、四半期、月、週、および日の各レベルが含まれる場合があります。 階層によっては、各メンバー属性が、その上位にあるメンバー属性を一意に示すことがあります。 これは、自然階層と呼ばれることがあります。 エンド ユーザーは、階層を使用してキューブ データを参照できます。 階層を定義するには、 [!INCLUDE[ssBIDevStudioFull](../../includes/ssbidevstudiofull-md.md)]のディメンション デザイナーの [階層] ペインを使用します。
ユーザー定義階層を作成すると、階層は *不規則*になる場合があります。 不規則階層とは、少なくとも 1 つのメンバーの論理上の親メンバーが、そのメンバーのすぐ上のレベルにない階層です。 不規則階層を使用している場合は、欠落しているメンバーを表示するかどうか、および欠落しているメンバーを表示する方法を制御する設定があります。 詳細については、「 [不規則階層](user-defined-hierarchies-ragged-hierarchies.md)」を参照してください。
> [!NOTE]
> ユーザー定義階層のデザインと構成に関連するパフォーマンスの問題の詳細については、「 [SQL Server 2005 Analysis Services パフォーマンス ガイド](http://go.microsoft.com/fwlink/?LinkId=81621)」を参照してください。
## <a name="see-also"></a>参照
[ユーザー階層プロパティ](../multidimensional-models-olap-logical-dimension-objects/user-hierarchies-properties.md)
[レベル プロパティ[機能強化]](../multidimensional-models-olap-logical-dimension-objects/user-hierarchies-level-properties.md)
[親子階層](parent-child-dimension.md)
| 48.189189 | 434 | 0.78968 | yue_Hant | 0.460888 |
e035dc9383cffe79e184ca41778f1bbae8692964 | 16 | md | Markdown | README.md | hshdhs10/yee | c55addb203a2303993b978aa698b9d810a0d19d0 | [
"Apache-2.0"
] | null | null | null | README.md | hshdhs10/yee | c55addb203a2303993b978aa698b9d810a0d19d0 | [
"Apache-2.0"
] | null | null | null | README.md | hshdhs10/yee | c55addb203a2303993b978aa698b9d810a0d19d0 | [
"Apache-2.0"
] | null | null | null | # yee
yeeeeeee~
| 5.333333 | 9 | 0.6875 | xho_Latn | 0.934685 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.