---
title: Homepage for SQL client programming | Microsoft Docs
description: Hub page with annotated links to downloads and documentation for numerous combinations of languages and operating systems, for connecting to SQL Server or to Azure SQL Database.
author: MightyPen
ms.date: 11/07/2018
ms.prod: sql
ms.prod_service: connectivity
ms.custom: ''
ms.technology: connectivity
ms.topic: conceptual
ms.reviewer: v-daveng
ms.author: genemi
ms.openlocfilehash: d773e05a3ed953e5210c0ade3226b4a32e82aeab
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MTE75
ms.contentlocale: es-ES
ms.lasthandoff: 06/15/2019
ms.locfileid: "63182208"
---
# <a name="homepage-for-client-programming-to-microsoft-sql-server"></a>Homepage for client programming to Microsoft SQL Server
Welcome to our homepage for client programming to interact with Microsoft SQL Server, and with Azure SQL Database in the cloud. This article provides the following information:
- Lists and describes the available combinations of language and driver.
- Provides information for the Windows, macOS, and Linux (Ubuntu and others) operating systems.
- Provides links to the detailed documentation for each combination.
- Displays the areas and subareas of the hierarchical documentation for particular languages, where applicable.
#### <a name="azure-sql-database"></a>Azure SQL Database
In any given language, the code that connects to SQL Server is almost identical to the code for connecting to Azure SQL Database.
For details about the connection strings for connecting to Azure SQL Database, see:
- [Use .NET Core (C#) to query an Azure SQL database](/azure/sql-database/sql-database-connect-query-dotnet-core).
- Other Azure SQL Database articles about other languages, located near the preceding article in the table of contents. For example, see [Use PHP to query an Azure SQL database](https://docs.microsoft.com/azure/sql-database/sql-database-connect-query-php).
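
As noted above, in a given language the code differs mainly in the connection string. The sketch below (plain Python string handling, so it is language-neutral; the server names, credentials, and ODBC driver name are hypothetical placeholders) illustrates that typically only the `Server` value changes between an on-premises SQL Server and an Azure SQL Database logical server:

```python
def build_connection_string(server: str, database: str, user: str, password: str) -> str:
    """Assemble an ODBC-style connection string; the same shape works for both
    an on-premises SQL Server and Azure SQL Database."""
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server={server};Database={database};"
        f"Uid={user};Pwd={password};Encrypt=yes;"
    )

# Hypothetical on-premises server vs. Azure SQL Database logical server:
on_prem = build_connection_string("localhost", "AdventureWorks", "sa", "<password>")
azure = build_connection_string(
    "tcp:contoso.database.windows.net,1433", "AdventureWorks", "sa", "<password>"
)

# Only the Server value differs; the rest of the client code is identical.
print(on_prem)
print(azure)
```

The same pattern holds in C#, Java, PHP, and the other languages covered below: swapping the server name in the connection string is usually the only change required.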
#### <a name="build-an-app-webpages"></a>Build-an-app webpages
Our *Build-an-app* webpages contain code examples, along with configuration information, in an alternative format. For more information, see the [section labeled *Build-an-app website*](#an-204-aka-ms-sqldev) later in this article.
<a name="an-050-languages-clients" />
## <a name="languages-and-drivers-for-client-programs"></a>Languages and drivers for client programs
In the following table, each language image is a link to details about using the language with SQL Server. Each link jumps to a section later in this article.
| | | |
| :-- | :-- | :-- |
| [![C# logo][image-ref-320-csharp]](#an-110-ado-net-docu) | [![.NET Framework Entity Framework ORM][image-ref-333-ef]](#an-116-csharp-ef-orm) | [![Java logo][image-ref-330-java]](#an-130-jdbc-docu) |
| [![Node.js logo][image-ref-340-node]](#an-140-node-js-docu) | [ **`ODBC for C++`** ](#an-160-odbc-cpp-docu)<br/>[![cpp-big-plus][image-ref-322-cpp]](#an-160-odbc-cpp-docu) | [![PHP logo][image-ref-360-php]](#an-170-php-docu) |
| [![Python logo][image-ref-370-python]](#an-180-python-docu) | [![Ruby logo][image-ref-380-ruby]](#an-190-ruby-docu) | ... |
| | | <br />|
#### <a name="downloads-and-installs"></a>Downloads and installs
The following article is dedicated to downloading and installing the various SQL connection drivers, for use by your programming languages:
- [SQL Server drivers](sql-server-drivers.md)
<a name="an-110-ado-net-docu" />
## <a name="c-logoimage-ref-320-csharp-c-using-adonet"></a>![C# logo][image-ref-320-csharp] C# using ADO.NET
The managed languages of .NET, such as C# and Visual Basic, are the most common users of ADO.NET. *ADO.NET* is a casual name for a subset of .NET Framework classes.
#### <a name="code-examples"></a>Code examples
|||
| :-- | :-- |
| [Proof of concept connecting to SQL using ADO.NET](./ado-net/step-3-proof-of-concept-connecting-to-sql-using-ado-net.md) | A small code example focused on connecting to and querying SQL Server. |
| [Connect resiliently to SQL with ADO.NET](./ado-net/step-4-connect-resiliently-to-sql-with-ado-net.md) | Retry logic in a code example, because connections can occasionally experience moments of connectivity loss.<br /><br />Retry logic also applies to connections maintained across the internet to any database in the cloud, such as to Azure SQL Database. |
| [Azure SQL Database: Demonstrates using .NET Core on Windows, Linux, and macOS to build a C# program that connects and queries](https://docs.microsoft.com/azure/sql-database/sql-database-connect-query-dotnet-core) | Azure SQL Database example. |
| [Build an app: C#, ADO.NET, Windows](https://www.microsoft.com/sql-server/developer-get-started/csharp/win/) | Configuration information, along with code examples. |
| | <br /> |
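
The resilient-connection article above centers on retry logic for transient connectivity loss. As a language-neutral sketch (Python rather than ADO.NET, with a simulated flaky call standing in for a real database query), retry with exponential backoff can look like this:

```python
import time

class TransientError(Exception):
    """Stand-in for a transient connection failure (e.g., a dropped socket)."""

def with_retries(operation, max_attempts=4, base_delay=0.01):
    """Run `operation`, retrying on TransientError with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.01s, 0.02s, 0.04s, ...

# Simulated flaky "query" that fails twice, then succeeds:
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("connection lost")
    return [("row1",), ("row2",)]

rows = with_retries(flaky_query)
print(rows)        # succeeds on the third attempt
print(calls["n"])  # 3
```

The ADO.NET and PHP resiliency articles linked in this document apply the same pattern with real connection objects; only transient errors should be retried, and permanent failures (bad credentials, bad server name) should surface immediately.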
#### <a name="documentation"></a>Documentation
|||
| :-- | :-- |
| [C# using ADO.NET](./ado-net/index.md)| Root of our documentation. |
| [Namespace: System.Data](https://docs.microsoft.com/dotnet/api/system.data) | A set of classes that are used for ADO.NET. |
| [Namespace: System.Data.SqlClient](https://docs.microsoft.com/dotnet/api/system.data.SqlClient) | The set of classes that are most directly the center of ADO.NET. |
| | <br /> |
<a name="an-116-csharp-ef-orm" />
## <a name="entity-framework-logoimage-ref-333-ef-entity-framework-ef-with-cx23"></a>![Entity Framework logo][image-ref-333-ef] Entity Framework (EF) with C#
Entity Framework (EF) provides object-relational mapping (ORM). ORM makes it easier for your object-oriented programming (OOP) source code to manipulate data that was retrieved from a relational SQL database.
EF has direct or indirect relationships with the following technologies:
- .NET Framework
- [LINQ to SQL](https://docs.microsoft.com/dotnet/framework/data/adonet/sql/linq/), or [LINQ to Entities](https://docs.microsoft.com/dotnet/framework/data/adonet/ef/language-reference/linq-to-entities)
- Language syntax enhancements, such as the **=>** operator in C#.
- Helpful programs that generate source code for classes that map to the tables in your SQL database. For example, [EdmGen.exe](https://docs.microsoft.com/dotnet/framework/data/adonet/ef/edm-generator-edmgen-exe).
#### <a name="original-ef-and-new-ef"></a>Original EF, and new EF
The [homepage for Entity Framework](https://docs.microsoft.com/ef/) introduces EF with a description similar to the following:
- Entity Framework is an object-relational mapper (O/RM) that enables .NET developers to work with a database using .NET objects. It eliminates the need for most of the data-access source code that developers usually need to write.
*Entity Framework* is a name shared by two separate source code branches. One EF branch is the older, and its source code is now publicly maintained. The other EF is new. The two EFs are described next:
| | |
| :-- | :-- |
| [EF 6.x](https://docs.microsoft.com/ef/ef6/) | Microsoft first released EF in August 2008. In March 2015, Microsoft announced that EF 6.x was the final version that Microsoft would develop. Microsoft released the source code into the public domain.<br /><br />Initially, EF was part of .NET Framework. But EF 6.x was removed from .NET Framework.<br /><br />[EF 6.x source code on GitHub, in the repository *aspnet/EntityFramework6*](https://github.com/aspnet/EntityFramework6) |
| [EF Core](https://docs.microsoft.com/ef/core/) | Microsoft released the newly developed EF Core in June 2016. EF Core is designed for greater flexibility and portability. EF Core can run on operating systems beyond just Microsoft Windows. And EF Core can interact with databases beyond just Microsoft SQL Server and other relational databases.<br /><br />**C# code examples:**<br />[Getting Started with Entity Framework Core](https://docs.microsoft.com/ef/core/get-started/index)<br />[Getting started with EF Core on .NET Framework with an Existing Database](https://docs.microsoft.com/ef/core/get-started/full-dotnet/existing-db) |
| | <br /> |
EF and related technologies are powerful, and are a lot to learn for developers who want to master the entire area.
<a name="an-130-jdbc-docu" />
## <a name="java-logoimage-ref-330-java-java-and-jdbc"></a>![Java logo][image-ref-330-java] Java and JDBC
Microsoft provides a Java Database Connectivity (JDBC) driver for use with SQL Server (or with Azure SQL Database, of course). It is a Type 4 JDBC driver, and it provides database connectivity through the standard JDBC application programming interfaces (APIs).
#### <a name="code-examples"></a>Code examples
|||
| :-- | :-- |
| [Code samples](./jdbc/code-samples/index.md) | Code examples that teach about data types, result sets, and large data. |
| [Connection URL sample](./jdbc/connection-url-sample.md) | Describes how to use a connection URL to connect to SQL Server. Then use an SQL statement to retrieve data. |
| [Data source sample](./jdbc/data-source-sample.md) | Describes how to use a data source to connect to SQL Server. Then use a stored procedure to retrieve data. |
| [Use Java to query an Azure SQL database](https://docs.microsoft.com/azure/sql-database/sql-database-connect-query-java) | Azure SQL Database example. |
| [Build Java apps with SQL Server on Ubuntu](https://www.microsoft.com/sql-server/developer-get-started/java/ubuntu/) | Configuration information, along with code examples. |
| | <br /> |
#### <a name="documentation"></a>Documentation
The JDBC documentation includes the following major areas:
|||
| :-- | :-- |
| [Java Database Connectivity (JDBC)](./jdbc/index.md) | Root of the JDBC documentation. |
| [Reference](./jdbc/reference/index.md) | Interfaces, classes, and members. |
| [Programming guide for JDBC SQL driver](./jdbc/programming-guide-for-jdbc-sql-driver.md) | Configuration information, along with code examples. |
| | <br /> |
<a name="an-140-node-js-docu" />
## <a name="nodejs-logoimage-ref-340-node-nodejs"></a>![Node.js logo][image-ref-340-node] Node.js
With Node.js you can connect to SQL Server from Windows, Linux, or Mac. The root of the Node.js documentation is [here](./node-js/index.md).
The Node.js connection driver for SQL Server is implemented in JavaScript. The driver uses the TDS protocol, which is supported by all modern versions of SQL Server. The driver is an open-source project, [available on GitHub](https://tediousjs.github.io/tedious/).
#### <a name="code-examples"></a>Code examples
|||
| :-- | :-- |
| [Proof of concept connecting to SQL using Node.js](./node-js/step-3-proof-of-concept-connecting-to-sql-using-node-js.md) | Bare bones code to connect to SQL Server and run a query. |
| [Azure SQL Database: Use Node.js to query](https://docs.microsoft.com/azure/sql-database/sql-database-connect-query-nodejs) | Azure SQL Database in the cloud example. |
| [Build Node.js apps to use SQL Server on macOS](https://www.microsoft.com/sql-server/developer-get-started/node/mac/) | Configuration information, along with code examples. |
| | <br /> |
<a name="an-160-odbc-cpp-docu" />
## <a name="odbc-for-c"></a>ODBC for C++
![ODBC logo][image-ref-350-odbc] ![cpp-big-plus][image-ref-322-cpp]
Open Database Connectivity (ODBC) was developed in the 1990s, and it predates the .NET Framework. ODBC is designed to be independent of any particular database system, and independent of the operating system.
Over the years, many ODBC drivers have been created and released, by groups inside and outside of Microsoft. The range of drivers involves several client programming languages. The list of data targets goes well beyond SQL Server.
Some other connectivity drivers use ODBC internally.
#### <a name="code-example"></a>Code example
- [C++ code example, using ODBC](../odbc/reference/sample-odbc-program.md)
#### <a name="documentation-outline"></a>Documentation outline
The ODBC content in this section is focused on accessing SQL Server or Azure SQL Database from C++. The following table displays an approximate outline of the main documentation for ODBC.
| Area | Subarea | Description |
| :--- | :------ | :---------- |
| [ODBC for C++](./odbc/index.md) | | Root of our documentation. |
| [Linux-Mac](./odbc/linux-mac/index.md) | | Information about using ODBC on the Linux or macOS operating systems. |
| [Windows](./odbc/windows/index.md) | | Information about using ODBC on the Windows operating system. |
| [Administration](../odbc/admin/index.md) | | The administrative tool for managing ODBC data sources. |
| [Microsoft](../odbc/microsoft/index.md) | | Various ODBC drivers that are created and provided by Microsoft. |
| [Conceptual and reference](../odbc/reference/index.md) | | Conceptual information about the ODBC interface, plus traditional reference material. |
| " | [Appendixes](../odbc/reference/appendixes/index.md) | State transition tables, the ODBC cursor library, and more. |
| " | [Develop your app](../odbc/reference/develop-app/index.md) | Functions, handles, and more. |
| " | [Develop your driver](../odbc/reference/develop-driver/index.md) | How to develop your own ODBC driver, if you have a specialized data source. |
| " | [Install](../odbc/reference/install/index.md) | Installing ODBC, subkeys, and more. |
| " | [Syntax](../odbc/reference/syntax/index.md) | APIs for data access, the installer, translation, and setup. |
| | | <br /> |
<a name="an-170-php-docu" />
## <a name="php-logoimage-ref-360-php-php"></a>![PHP logo][image-ref-360-php] PHP
You can use PHP to interact with SQL Server. The root of the PHP documentation is [here](./php/index.md).
#### <a name="code-examples"></a>Code examples
|||
| :-- | :-- |
| [Proof of concept connecting to SQL using PHP](./php/step-3-proof-of-concept-connecting-to-sql-using-php.md) | A small code example focused on connecting to and querying SQL Server. |
| [Step 4: Connect resiliently to SQL with PHP](./php/step-4-connect-resiliently-to-sql-with-php.md) | Retry logic in a code example, because connections across the internet and into the cloud can occasionally experience moments of connectivity loss. |
| [Azure SQL Database: Use PHP to query](https://docs.microsoft.com/azure/sql-database/sql-database-connect-query-php) | Azure SQL Database example. |
| [Build PHP apps to use SQL Server on RHEL](https://www.microsoft.com/sql-server/developer-get-started/php/rhel/) | Configuration information, along with code examples. |
| | <br /> |
<a name="an-180-python-docu" />
## <a name="python-logoimage-ref-370-python-python"></a>![Python logo][image-ref-370-python] Python
You can use Python to interact with SQL Server.
#### <a name="code-examples"></a>Code examples
|||
| :-- | :-- |
| [Proof of concept connecting to SQL using Python and pyodbc](./python/pyodbc/step-3-proof-of-concept-connecting-to-sql-using-pyodbc.md) | A small code example focused on connecting to and querying SQL Server. |
| [Azure SQL Database: Use Python to query](https://docs.microsoft.com/azure/sql-database/sql-database-connect-query-python) | Azure SQL Database example. |
| [Build Python apps to use SQL Server on SLES](https://www.microsoft.com/sql-server/developer-get-started/python/sles/) | Configuration information, along with code examples. |
| | <br /> |
#### <a name="documentation"></a>Documentation
| Area | Description |
| :--- | :---------- |
| [Python for SQL Server](./python/index.md) | Root of our documentation. |
| [pymssql driver](./python/pymssql/index.md) | Microsoft does not maintain or test the pymssql driver.<br /><br />The pymssql connection driver is a simple interface to SQL databases, for use in Python programs. pymssql builds on FreeTDS to provide a Python DB-API (PEP-249) interface to Microsoft SQL Server. |
| [pyodbc driver](./python/pyodbc/index.md) | The pyodbc connection driver is an open-source Python module that simplifies accessing ODBC databases. It implements the DB API 2.0 specification, and adds even more Pythonic convenience. |
| | <br /> |
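
As a brief sketch of the usual pyodbc pattern described above (the driver name, server, and credentials are hypothetical placeholders; the connection itself is isolated in an uncalled helper and requires pyodbc plus a reachable SQL Server instance, so the query helper stays usable with any DB-API cursor):

```python
def fetch_rows(cursor, query):
    """Execute a query on an open DB-API cursor and return all rows as plain tuples."""
    cursor.execute(query)
    return [tuple(row) for row in cursor.fetchall()]

def demo():
    """Connect and query; requires `pip install pyodbc`, a Microsoft ODBC driver,
    and a reachable SQL Server. Server name and credentials below are placeholders."""
    import pyodbc
    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=localhost;Database=master;Uid=sa;Pwd=<password>;"
    )
    with conn:
        return fetch_rows(conn.cursor(), "SELECT TOP 3 name FROM sys.tables")
```

Because `fetch_rows` only depends on the DB-API cursor contract (`execute`/`fetchall`), the same helper works unchanged with pymssql or any other PEP-249 driver.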
<a name="an-190-ruby-docu" />
## <a name="ruby-logoimage-ref-380-ruby-ruby"></a>![Ruby logo][image-ref-380-ruby] Ruby
You can use Ruby to interact with SQL Server. The root of the Ruby documentation is [here](./ruby/index.md).
#### <a name="code-examples"></a>Code examples
|||
| :-- | :-- |
| [Proof of concept connecting to SQL using Ruby](./ruby/step-3-proof-of-concept-connecting-to-sql-using-ruby.md) | A small code example focused on connecting to and querying SQL Server. |
| [Azure SQL Database: Use Ruby to query](https://docs.microsoft.com/azure/sql-database/sql-database-connect-query-ruby) | Azure SQL Database example. |
| [Build Ruby apps to use SQL Server on macOS](https://www.microsoft.com/sql-server/developer-get-started/ruby/mac/) | Configuration information, along with code examples. |
| | <br /> |
<a name="an-204-aka-ms-sqldev" />
## <a name="build-an-app-website-for-sql-client-developmenthttpswwwmicrosoftcomsql-serverdeveloper-get-started"></a>[Build-an-app website for SQL client development](https://www.microsoft.com/sql-server/developer-get-started/)
On our [*Build-an-app*](https://www.microsoft.com/sql-server/developer-get-started/) webpages you can choose from a long list of programming languages to connect to SQL Server. And the client program can run on a variety of operating systems.
*Build-an-app* emphasizes simplicity and completeness, for the developer who is just getting started. The steps explain the following tasks:
1. How to install Microsoft SQL Server.
2. How to download and install the tools and drivers.
3. How to make the necessary configurations, as appropriate for the chosen operating system.
4. How to build the provided source code.
5. How to run the program.
Next are a couple of rough outlines of the details that are provided on the website:
#### <a name="java-on-ubuntu"></a>Java on Ubuntu:
1. Set up your environment
    - Step 1.1 Install SQL Server
    - Step 1.2 Install Java
    - Step 1.3 Install the Java Development Kit (JDK)
    - Step 1.4 Install Maven
2. Create a Java app with SQL Server
    - Step 2.1 Create a Java app that connects to SQL Server and executes queries
    - Step 2.2 Create a Java app that connects to SQL Server using the popular framework Hibernate
3. Make your Java app up to 100x faster
    - Step 3.1 Create a Java app to demonstrate columnstore indexes
#### <a name="python-on-windows"></a>Python on Windows:
1. Set up your environment
    - Step 1.1 Install SQL Server
    - Step 1.2 Install Python
    - Step 1.3 Install the ODBC driver and the SQL command-line utility for SQL Server
2. Create a Python app with SQL Server
    - Step 2.1 Install the Python driver for SQL Server
    - Step 2.2 Create a database for your application
    - Step 2.3 Create a Python app that connects to SQL Server and executes queries
3. Make your Python app up to 100x faster
    - Step 3.1 Create a new table with 5 million rows using sqlcmd
    - Step 3.2 Create a Python app that queries this table and measures the time taken
    - Step 3.3 Measure how long the query takes to run
    - Step 3.4 Add a columnstore index to the table
    - Step 3.5 Measure how long the query takes to run with the columnstore index
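
Steps 3.2 through 3.5 in the Python outline above revolve around timing a query before and after adding a columnstore index. A generic timing helper of the kind those steps use might look like this (the summation below is only a stand-in for a real database query):

```python
import time

def time_call(fn, *args, **kwargs):
    """Call fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in workload; in the walkthrough this would be executing a SQL query.
def fake_query(n):
    return sum(range(n))

result, elapsed = time_call(fake_query, 1_000_000)
print(f"result={result}, elapsed={elapsed:.4f}s")
```

Running the same helper on the query before and after the index exists gives the before/after comparison the walkthrough measures; `time.perf_counter` is preferred over `time.time` for interval measurement.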
The following screenshots give an idea of what our SQL development documentation website looks like.
#### <a name="choose-a-language"></a>Choose a language:
![SQL development website, getting started][image-ref-390-aka-ms-sqldev-choose-language]
#### <a name="choose-an-operating-system"></a>Choose an operating system:
![SQL development website, Java on Ubuntu][image-ref-400-aka-ms-sqldev-java-ubuntu]
## <a name="other-development"></a>Other development
This section provides links about other development options. These include using these same languages for Azure development in general. The information goes beyond targeting just Azure SQL Database and Microsoft SQL Server.
#### <a name="developer-hub-for-azure"></a>Developer hub for Azure
- [Developer hub for Azure](https://docs.microsoft.com/azure/)
- [Azure for .NET developers](https://docs.microsoft.com/dotnet/azure/)
- [Azure for Java developers](https://docs.microsoft.com/java/azure/)
- [Azure for Node.js developers](https://docs.microsoft.com/nodejs/azure/)
- [Azure for Python developers](https://docs.microsoft.com/python/azure/)
- [Create a PHP web app in Azure](https://docs.microsoft.com/azure/app-service-web/app-service-web-get-started-php)
#### <a name="other-languages"></a>Other languages
- [Build Go apps with SQL Server on Windows](https://www.microsoft.com/sql-server/developer-get-started/go/windows/)
<!-- Image references. -->
[image-ref-322-cpp]: ./media/homepage-sql-connection-drivers/gm-cpp-4point-p61f.png
[image-ref-320-csharp]: ./media/homepage-sql-connection-drivers/gm-csharp-c10c.png
[image-ref-333-ef]: ./media/homepage-sql-connection-drivers/gm-entity-framework-ef20d.png
[image-ref-330-java]: ./media/homepage-sql-connection-drivers/gm-java-j18c.png
[image-ref-340-node]: ./media/homepage-sql-connection-drivers/gm-node-n30.png
[image-ref-350-odbc]: ./media/homepage-sql-connection-drivers/gm-odbc-ic55826-o35.png
[image-ref-360-php]: ./media/homepage-sql-connection-drivers/gm-php-php60.png
[image-ref-370-python]: ./media/homepage-sql-connection-drivers/gm-python-py72.png
[image-ref-380-ruby]: ./media/homepage-sql-connection-drivers/gm-ruby-un-r82.png
[image-ref-390-aka-ms-sqldev-choose-language]: ./media/homepage-sql-connection-drivers/gm-aka-ms-sqldev-choose-language-g21.png
[image-ref-400-aka-ms-sqldev-java-ubuntu]: ./media/homepage-sql-connection-drivers/gm-aka-ms-sqldev-java-ubuntu-c31.png
---
title: "Tutorial: Migrate from Bing Maps to Azure Maps | Microsoft Azure Maps"
description: A tutorial on how to migrate from Bing Maps to Microsoft Azure Maps. Guidance walks you through how to switch to Azure Maps APIs and SDKs.
author: rbrundritt
ms.author: richbrun
ms.date: 12/17/2020
ms.topic: tutorial
ms.service: azure-maps
services: azure-maps
manager: cpendle
ms.custom: ''
ms.openlocfilehash: 9bd0516889733a666bf15668cffd124dcc468f3e
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/29/2021
ms.locfileid: "100388966"
---
# <a name="tutorial-migrate-from-bing-maps-to-azure-maps"></a>Tutorial: Migrate from Bing Maps to Azure Maps
This guide provides insights on how to migrate web, mobile, and server-based applications from Bing Maps to the Azure Maps platform. This guide includes comparative code samples, migration suggestions, and best practices for migrating to Azure Maps.
In this tutorial, you'll learn:
> [!div class="checklist"]
> * A high-level comparison of equivalent Bing Maps features available in Azure Maps.
> * What licensing differences to take into consideration.
> * How to plan your migration.
> * Where to find technical resources and support.
## <a name="prerequisites"></a>Prerequisites
1. Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
2. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
3. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps](how-to-manage-authentication.md).
## <a name="azure-maps-platform-overview"></a>Azure Maps platform overview
Azure Maps provides developers from all industries with powerful geospatial capabilities, packed with the freshest mapping data available to provide geographic context for web and mobile applications. Azure Maps is an Azure One API compliant set of REST APIs for maps, search, routing, traffic, time zones, geofencing, map data, weather data, and more, accompanied by both web and Android SDKs to make development easy, flexible, and portable across multiple platforms. [Azure Maps is also available in Power BI](power-bi-visual-getting-started.md).
## <a name="high-level-platform-comparison"></a>High-level platform comparison
The following table provides a high-level list of Bing Maps features and the relative support for those features in Azure Maps. This list doesn't include additional Azure Maps features such as accessibility, geofencing APIs, traffic services, spatial operations, direct map tile access, and batch services.
| Bing Maps feature | Azure Maps support |
|----------------------------------------|:------------------:|
| Web SDK | ✓ |
| Android SDK | ✓ |
| iOS SDK | Planned |
| UWP SDK | N/A |
| WPF SDK | N/A |
| REST Service APIs | ✓ |
| Autosuggest | ✓ |
| Directions (including truck) | ✓ |
| Distance Matrix | ✓ |
| Elevation | ✓ (Preview) |
| Imagery – Static Map | ✓ |
| Imagery Metadata | ✓ |
| Isochrones | ✓ |
| Local Insights | ✓ |
| Local Search | ✓ |
| Location Recognition | ✓ |
| Locations (forward/reverse geocoding) | ✓ |
| Optimized Itinerary Routes | Planned |
| Snap to roads | ✓ |
| Spatial Data Services (SDS) | Partial |
| Time Zone | ✓ |
| Traffic Incidents | ✓ |
| Configuration driven maps | N/A |
Bing Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and highly secure Azure Active Directory authentication.
## <a name="licensing-considerations"></a>Licensing considerations
When migrating to Azure Maps from Bing Maps, the following information should be considered with regard to licensing.
* Azure Maps charges for the usage of interactive maps based on the number of map tiles loaded, whereas Bing Maps charges for the loading of the map control (sessions). To reduce costs for developers, Azure Maps automatically caches map tiles. One Azure Maps transaction is generated for every 15 map tiles that are loaded. The interactive Azure Maps SDKs use 512-pixel tiles, and on average generate one or less transactions per page view.
* Azure Maps allows data from its platform to be stored in Azure. It can also be cached locally for up to six months as per the [terms of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31).
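
The tile-based billing rule described above can be turned into a rough usage estimate. The sketch below assumes the transaction count rounds up for a partial group of 15 tiles; that is an interpretation for illustration, not an official billing formula:

```python
import math

TILES_PER_TRANSACTION = 15  # billing rule described above

def transactions_for(tiles_loaded: int) -> int:
    """Azure Maps transactions generated for a number of loaded map tiles,
    assuming any partial group of 15 tiles still counts as one transaction."""
    return math.ceil(tiles_loaded / TILES_PER_TRANSACTION)

# A typical page view loading ~12 tiles generates a single transaction:
print(transactions_for(12))         # 1
# 1.5 million tiles in a month would map to 100,000 transactions:
print(transactions_for(1_500_000))  # 100000
```

An estimate like this, combined with the Azure pricing calculator linked below, helps compare projected Azure Maps costs against a Bing Maps session-based bill.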
Here are some licensing-related resources for Azure Maps:
- [Azure Maps pricing page](https://azure.microsoft.com/pricing/details/azure-maps/)
- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-maps)
- [Azure Maps terms of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31) (included in the Microsoft Online Services Terms)
- [Choose the right pricing tier in Azure Maps](./choose-pricing-tier.md)
## <a name="suggested-migration-plan"></a>Suggested migration plan
The following is an example of a high-level migration plan.
1. Take inventory of the Bing Maps SDKs and services that your application uses, and verify that Azure Maps provides alternative SDKs and services for you to migrate to.
2. Create an Azure subscription (if you don't already have one) at <https://azure.com>.
3. Create an Azure Maps account ([documentation](./how-to-manage-account-keys.md)) and an authentication key or Azure Active Directory ([documentation](./how-to-manage-authentication.md)).
4. Migrate your application code.
5. Test your migrated application.
6. Deploy your migrated application to production.
## <a name="create-an-azure-maps-account"></a>Create an Azure Maps account
To create an Azure Maps account and get access to the Azure Maps platform, follow these steps:
1. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
2. Sign in to the [Azure portal](https://portal.azure.com/).
3. Create an [Azure Maps account](./how-to-manage-account-keys.md).
4. [Get your Azure Maps subscription key](./how-to-manage-authentication.md#view-authentication-details) or set up Azure Active Directory authentication for enhanced security.
## <a name="azure-maps-technical-resources"></a>Azure haritalar teknik kaynakları
Azure haritalar için yararlı teknik kaynakların listesi aşağıda verilmiştir.
* Genel Bakış: <https://azure.com/maps>
* Belgelerle <https://aka.ms/AzureMapsDocs>
* Web SDK kodu örnekleri: <https://aka.ms/AzureMapsSamples>
* Geliştirici forumları: <https://aka.ms/AzureMapsForums>
* Larınız <https://aka.ms/AzureMapsVideos>
* Lenemeyen <https://aka.ms/AzureMapsBlog>
* Azure haritalar geri bildirimi (UserVoice): <https://aka.ms/AzureMapsFeedback>
## <a name="migration-support"></a>Geçiş desteği
Geliştiriciler, [Forum](/answers/topics/azure-maps.html) aracılığıyla veya birçok Azure destek seçeneğinden biri aracılığıyla geçiş desteği arayabilir: <https://azure.microsoft.com/support/options/>
## <a name="new-terminology"></a>Yeni terminoloji
Aşağıdaki listede, yaygın Bing Haritalar terimleri ve bunlara karşılık gelen Azure haritalar terimleri yer almaktadır.
| Bing Haritalar terimi | Azure haritalar terimi |
|-----------------------------------|----------------------------------------------------------------|
| Havadan | Uydu veya havadan |
| Yönergeler | Yönlendirme olarak da adlandırılabilir |
| Varlıklar | Geometriler veya Özellikler |
| `EntityCollection` | Veri kaynağı veya katmanı |
| `Geopoint` | Konum |
| `GeoXML` | Uzamsal GÇ modülündeki XML dosyaları |
| Zemin kaplama | Görüntü katmanı |
| Karma (eşleme türüne başvuru olarak) | Yollar ile uydu |
| Kapat | Açılan Pencere |
| Konum | Konum |
| `LocationRect` | Sınırlayıcı kutusu |
| Eşleme Türü | Harita stili |
| Gezinti çubuğu | Harita stil Seçicisi, yakınlaştırma denetimi, aralıklı denetim, pusula denetimi |
| Raptiye | Kabarcık katmanı, simge katmanı veya HTML Işaretçisi |
## <a name="clean-up-resources"></a>Kaynakları temizleme
Temizleme gerektiren kaynak yok.
## <a name="next-steps"></a>Sonraki adımlar
Aşağıdaki makalelerle Bing Haritalar uygulamanızı nasıl geçirileceğiyle ilgili ayrıntıları öğrenin:
> [!div class="nextstepaction"]
> [Web uygulamasını geçirme](migrate-from-bing-maps-web-app.md)
---
title: Conference Join Time Report
TOCTitle: Conference Join Time Report
ms:assetid: e64dc89a-25e4-4cb8-bcb1-51712e69ba5a
ms:mtpsurl: https://technet.microsoft.com/ja-jp/library/JJ205344(v=OCS.15)
ms:contentKeyID: 48273998
ms.date: 05/19/2016
mtps_version: v=OCS.15
ms.translationtype: HT
---

# Conference Join Time Report

_**Topic last modified:** 2015-03-09_

The Conference Join Time Report helps you determine how long it takes users to join a conference. The report shows the average join time (in milliseconds), and provides details such as the number of users who were able to join a conference within 2 seconds and the number of users who took between 2 and 5 seconds to join.

## Accessing the Conference Join Time Report

You can access the Conference Join Time Report from the Monitoring Reports home page.

## Filters

Filters provide a way to return a more finely targeted set of data, or to view the returned data in different ways. The following table lists the filters that you can use with the Conference Join Time Report.

### Conference Join Time Report Filters

<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="header">
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p><strong>From</strong></p></td>
<td><p>Start date/time for the time range. To view data by hour, enter both the start date and time, as follows:</p>
<p>7/7/2012 1:00 PM</p>
<p>If you do not enter a start time, the report automatically begins at 12:00 AM on the specified day. To view data by day, enter just the date, as follows:</p>
<p>7/7/2012</p>
<p>To view by week or by month, enter a date that falls anywhere within the week or month that you want to view (it does not have to be the first day of that week or month):</p>
<p>7/3/2012</p>
<p>Weeks always run from Sunday through Saturday.</p></td>
</tr>
<tr class="even">
<td><p><strong>To</strong></p></td>
<td><p>End date/time for the time range. To view data by hour, enter both the end date and time, as follows:</p>
<p>7/7/2012 1:00 PM</p>
<p>If you do not enter an end time, the report automatically ends at 12:00 AM on the specified day. To view data by day, enter just the date, as follows:</p>
<p>7/7/2012</p>
<p>To view by week or by month, enter a date that falls anywhere within the week or month that you want to view (it does not have to be the first day of that week or month):</p>
<p>7/3/2012</p>
<p>Weeks always run from Sunday through Saturday.</p></td>
</tr>
<tr class="odd">
<td><p><strong>Interval</strong></p></td>
<td><p>Time interval. Select one of the following:</p>
<ul>
<li><p>Hourly (a maximum of 25 hours can be displayed)</p></li>
<li><p>Daily (a maximum of 31 days can be displayed)</p></li>
<li><p>Weekly (a maximum of 12 weeks can be displayed)</p></li>
<li><p>Monthly (a maximum of 12 months can be displayed)</p></li>
</ul>
<p>If the start and end dates that you enter exceed the maximum number of values allowed for the selected interval, only the maximum number of values (counting from the start date) is displayed. For example, if you select the Daily interval and enter a start date of 8/7/2012 and an end date of 9/28/2012, data is displayed for 8/7/2012 12:00 AM through 9/7/2012 12:00 AM (that is, a total of only 31 days' worth of data).</p></td>
</tr>
<tr class="even">
<td><p><strong>Pool</strong></p></td>
<td><p>Fully qualified domain name (FQDN) of the Registrar pool or Edge Server. You can either select an individual pool or click <strong>All</strong> to view data for all the pools. This drop-down list is automatically populated based on the records in the database.</p></td>
</tr>
<tr class="odd">
<td><p><strong>Conference session</strong></p></td>
<td><p>Type of session. Allowed values are:</p>
<ul>
<li><p>All</p></li>
<li><p>Focus sessions</p></li>
<li><p>Application sharing</p></li>
<li><p>Audio/video conferencing</p></li>
</ul>
<p>If you select All, totals for conference join times are displayed at the top of the report. Note that these totals cover only conferences scheduled by using Microsoft Exchange or Microsoft Outlook.</p></td>
</tr>
</tbody>
</table>

## Metrics

The following table lists the information provided by the Conference Join Time Report.

### Conference Join Time Report Metrics

<table>
<colgroup>
<col style="width: 33%" />
<col style="width: 33%" />
<col style="width: 33%" />
</colgroup>
<thead>
<tr class="header">
<th>Name</th>
<th>Can you sort on this item?</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p><strong>Date</strong></p>
<p>The actual name of this metric depends on the selected interval.</p></td>
<td><p>No</p></td>
<td><p>Date and time that the conference was held.</p></td>
</tr>
<tr class="even">
<td><p><strong>Total sessions</strong></p></td>
<td><p>No</p></td>
<td><p>Total number of sessions, including successful sessions, failed sessions (both expected and unexpected failures), and sessions that could not be categorized as either.</p></td>
</tr>
<tr class="odd">
<td><p><strong>Average (milliseconds)</strong></p></td>
<td><p>No</p></td>
<td><p>Average amount of time (in milliseconds) that it took participants to join a conference.</p></td>
</tr>
<tr class="even">
<td><p><strong>Sessions < 2 seconds, count</strong></p></td>
<td><p>No</p></td>
<td><p>Number of participants who were able to join a conference in less than 2 seconds.</p></td>
</tr>
<tr class="odd">
<td><p><strong>Sessions < 2 seconds, percentage</strong></p></td>
<td><p>No</p></td>
<td><p>Percentage of all participants who were able to join a conference in less than 2 seconds.</p></td>
</tr>
<tr class="even">
<td><p><strong>Sessions 2-5 seconds, count</strong></p></td>
<td><p>No</p></td>
<td><p>Number of participants who took between 2 and 5 seconds to join a conference.</p></td>
</tr>
<tr class="odd">
<td><p><strong>Sessions 2-5 seconds, percentage</strong></p></td>
<td><p>No</p></td>
<td><p>Percentage of all participants who took between 2 and 5 seconds to join a conference.</p></td>
</tr>
<tr class="even">
<td><p><strong>Sessions 5-10 seconds, count</strong></p></td>
<td><p>No</p></td>
<td><p>Number of participants who took between 5 and 10 seconds to join a conference.</p></td>
</tr>
<tr class="odd">
<td><p><strong>Sessions 5-10 seconds, percentage</strong></p></td>
<td><p>No</p></td>
<td><p>Percentage of all participants who took between 5 and 10 seconds to join a conference.</p></td>
</tr>
<tr class="even">
<td><p><strong>Sessions > 10 seconds, count</strong></p></td>
<td><p>No</p></td>
<td><p>Number of participants who took more than 10 seconds to join a conference.</p></td>
</tr>
<tr class="odd">
<td><p><strong>Sessions > 10 seconds, percentage</strong></p></td>
<td><p>No</p></td>
<td><p>Percentage of all participants who took more than 10 seconds to join a conference.</p></td>
</tr>
</tbody>
</table>
# Solution Architecture
This solution uses a variety of pre-built cognitive skills and extends the AI transformations with custom skills, based on Azure Functions. In this architecture document you will see details of the solution created throughout the training labs. There are details about the target use case, the dataset, the labs, the cost, the tools, and the interface.

To fully understand this document, it is expected that you have understood all the information presented in the [introduction](./Introduction.md) of the training: **what Cognitive Search is, how it works, why it is relevant for any company in the world, and when to use it**.
The labs have a progressive level of complexity and they will help you to understand how each aspect of the technology can be used for the search solution.
## Use Case
Every company has business documents: contracts, memos, presentations, images, spreadsheets, business plans, and so on. Usually these documents don't have the metadata necessary to be searchable, **as you can see in the image below**. Since documents don't have tags, categories, and comments, they can only be found by name. This creates a poor search experience, slowing down business processes and reducing productivity.

Azure Cognitive Search, the Microsoft product for Knowledge Mining, uses the most advanced cognitive capabilities, based on Microsoft's Azure AI Platform, to extract and create enriched metadata about your documents, vastly improving the overall search experience. This process also allows companies to enforce compliance, detect risks, and detect policy violations.
Enterprises may need to search for:
+ Words like "risk" and "fraud" in pdf/word contracts, when they are 10 or less words distant one from the other.
+ Specific people or objects in images.
+ Document content instead of its name, the only option for the situation of the image below.
+ Entities like companies or technologies in memos or reports.
+ Compliance violations like forbidden words or phrases in any document or image.
+ Forms content, handwritten or not.
This Cognitive Search solution addresses these problems, extracting insights from multiple document formats and languages.
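As a concrete illustration of the proximity requirement above ("risk" near "fraud"), Azure Search's full Lucene query syntax supports proximity searches. The sketch below builds and URL-encodes such a query string; the service and index names are hypothetical placeholders, not part of the labs:

```python
from urllib.parse import urlencode

# Hypothetical names; replace with your own service and index.
service = "my-search-service"
index = "business-docs-index"

# Proximity search: "risk" and "fraud" at most 10 words apart.
# queryType=full enables the full Lucene query syntax.
params = {
    "api-version": "2019-05-06",
    "queryType": "full",
    "search": '"risk fraud"~10',
}

url = (
    f"https://{service}.search.windows.net"
    f"/indexes/{index}/docs?" + urlencode(params)
)
print(url)
```

Sending this URL with a valid `api-key` header would return only documents where the two terms appear within ten words of each other.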
>Tip! Some other possible uses for the labs could be:
>
>+ Demos, you can keep this environment ready, loaded
>+ POCs: You just need to upload some of the client's data and re-execute the enrichment pipeline. **You can prove the concept in front of the client, in minutes**.
>+ Production: Since we are using PaaS, it has SLA and scalability by design.
>+ Personal Use: If you have lots of documents or photos, you can use these labs to analyze them, too.
### Architecture

## Labs Details
In the [First Lab](../../labs/lab-environment-creation.md) you will learn how to create the required environment for this training, including the business documents dataset upload into Azure Blob Storage.
In the [Second Lab](../../labs/lab-02-azure-search.md) you will learn how to index the business documents with "basic" Azure Search. The objective is to teach how the standard features add sophisticated search capabilities to your documents: natural language search, ranking, paging, suggestions, and so on. This lab uses the Azure Portal only; no coding is required.
In the [Third Lab](../../labs/lab-03-text-skills.md) you will learn the next level of data enrichment, using Cognitive Search. It will be clear for you how AI can **extend** the metadata created, enabling an advanced search experience. In this lab you will do some coding with Postman.
In the [Fourth Lab](../../labs/lab-04-image-skills.md) you will learn how text skills don't work for images. You will detect and fix this situation, making your images queryable too. For this lab you will do some coding with Postman.
In the [Fifth Lab](../../labs/lab-05-custom-skills.md) you will learn how to create a custom skill using the Azure Content Moderator API and Azure Functions, connecting this transformation into the enrichment pipeline. You will detect documents with incompliant content. For this lab you will do some coding with Postman and Visual Studio. The Azure Portal is also used, to create the Azure Function instance.
In the [Sixth Lab](../../labs/lab-06-bot-business-documents.md) you will learn how to use a Bot to interact with the Azure Search Index, the Business Documents Bot. This lab uses the Bot Emulator and Visual Studio.
In the [Seventh Lab](../../labs/lab-final-case.md) you are invited to, based on what you have learned, create the architecture of a Knowledge Mining solution for another use case.
## Dataset
We will provide a sample dataset that contains documents in multiple languages and formats, including HTML, doc, pdf, ppt, png, and jpg. They were selected for a better learning experience, showcasing the technology capabilities.

The dataset has 21 files totaling 15 MB. It includes public Microsoft business documents. There is a document in Spanish, so you can learn about language identification. There is also a document with anonymized Personal Identifiable Information (PII) for the Content Moderator lab.
Since we are working with unstructured data, any set of files can be used. In other words, this could be a **Bring Your Own Data** solution; you can test later with any dataset you want.
## Demo - Cognitive Search Pipeline
The [AI Sandbox](https://text-analytics-demo-dev.azurewebsites.net/) is an interesting demo of the Cognitive Search Pipeline, similar to what will be implemented. It is useful to understand how a cognitive skill output is input for another one, in the same pipeline.
This demo is public and you can use with clients and partners.
## Costs
Here you can see a list of the resources used in this training. The [Azure Calculator](https://azure.microsoft.com/en-us/pricing/calculator/) can be used for pricing estimation.
Prices are estimates and are not intended as actual price quotes. Actual prices may vary depending upon the date of purchase, currency of payment, and type of agreement you enter with Microsoft. Contact a Microsoft sales representative for additional information on pricing.
**The estimated monthly cost of this solution, with the provided dataset, is close to US$ 76.18 or US$2.54 per day.**

>Note! Starting December 21, 2018, you will be able to associate a Cognitive Services resource with an Azure Search skillset. This will allow us to start charging for skillset execution. On this date, we will also begin charging for image extraction as part of the document-cracking stage. Text extraction from documents will continue to be offered at no additional cost. The execution of built-in skills will be charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/en-us/pricing/details/cognitive-services/). Image extraction pricing will be charged at preview pricing, and is described on the [Azure Search pricing page](https://azure.microsoft.com/en-us/pricing/details/search/). About images: You pay for the images extracted/normalized (even if it is a pdf), and then pay for any built-in skills you call (including OCR). [This](https://docs.microsoft.com/en-us/azure/search/cognitive-search-attach-cognitive-services#example-estimating-the-cost-of-document-cracking-and-enrichment) is an example of how that may work.
## Information Delivery - A Bot as User Interface
Microsoft Azure Search provides an API for web or mobile applications, creating great search experiences for users. Another type of application that can benefit from Azure Search is a Bot, a trending technology from Microsoft.
Although this is not a training on bots, you will learn how to integrate one with the [Azure Search Rest API](https://docs.microsoft.com/en-us/azure/search/search-query-rest-api). This Bot will be as simple as possible, running locally with the [Bot Emulator](https://github.com/Microsoft/BotFramework-Emulator).
This [gif](../../resources/images/lab-bot/retrieving-cognitive-attrributes.gif) shows the expected finished solution, but with a different dataset. Now you have an idea of what will be created by the end of the training.
The Microsoft Learn AI Team has a 2 day [Computer Vision Bot Bootcamp](https://github.com/Azure/LearnAI-Bootcamp) that shows you how to create an intelligent bot using Azure Search, CosmosDB and Cognitive Services.
## Lab Tools for APIs
Labs 3, 4, and 5 will use Postman for [REST API calls](https://docs.microsoft.com/en-us/azure/search/search-fiddler). You can use any other REST API tool that can formulate and send HTTP requests, but we suggest you use Postman since the training was created with/for it. The image below shows a visual example of Postman being used for Cognitive Search. Please check the suggested Postman tutorial in the [Pre-Reqs section of the initial page](./readme.md).

> **Tip** Important details about Postman:
>
> + You can save your commands, which is useful for reuse, not only within this workshop, but also in your future projects.
> + You need to create a free account. A confirmation message is emailed to you.
> + You can export all your commands into json format. This file can then be saved into the storage account of the lab, into a cloud storage like OneDrive, or anywhere you like. This process helps you save, share, and reuse your work.
> + These return codes indicate success after an API call request: 200, 201 and 204.
> + Error messages and warnings are very clear.
> + Besides the API URL and call type, we will use GET/PUT/POST (depending on what action we are taking), and you need to use the header for Content-Type and api-key. The json commands must be placed into the "body / raw" area. If you are struggling using Postman, here's a friendly reminder to [review the resource from the prerequisites](https://docs.microsoft.com/en-us/azure/search/search-fiddler).
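For readers who prefer scripting these calls instead of using Postman, the following sketch assembles the same pieces the tips above mention: the request URL, the `Content-Type` and `api-key` headers, and a raw JSON body. The service name, index name, and key are placeholder assumptions, not values from the labs:

```python
import json

# Placeholder values; substitute your own service name, index, and key.
service_name = "my-search-service"
index_name = "business-docs-index"
api_key = "YOUR-ADMIN-API-KEY"

# Every request carries these two headers, as noted in the tips above.
headers = {
    "Content-Type": "application/json",
    "api-key": api_key,
}

# The JSON payload that would go in Postman's "body / raw" area.
body = json.dumps({"search": "business plans", "top": 5})

# POST this body to the index's search endpoint.
url = (
    f"https://{service_name}.search.windows.net"
    f"/indexes/{index_name}/docs/search?api-version=2019-05-06"
)
print(url)
```

A success is indicated by the 200, 201, or 204 return codes listed above.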
## Next step
[Environment Creation Lab](../../labs/lab-environment-creation.md) or [Back to Read Me](../../README.md) | 97.438095 | 1,063 | 0.782817 | eng_Latn | 0.998075 |
b93660203b1dd12ed1a606fcd6e590efc4dd8296 | 3,070 | md | Markdown | docs/c-runtime-library/c-run-time-library-reference.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | 14 | 2018-01-28T18:10:55.000Z | 2021-11-16T13:21:18.000Z | docs/c-runtime-library/c-run-time-library-reference.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/c-runtime-library/c-run-time-library-reference.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | 2 | 2018-11-01T12:33:08.000Z | 2021-11-16T13:21:19.000Z | ---
title: "C Run-Time Library Reference | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.reviewer: ""
ms.suite: ""
ms.technology: ["cpp-standard-libraries"]
ms.tgt_pltfrm: ""
ms.topic: "article"
f1_keywords: ["c.runtime"]
dev_langs: ["C++"]
helpviewer_keywords: ["CRT", "run-time libraries", "CRT, reference"]
ms.assetid: a503e11c-8dca-4846-84fb-025a826c32b8
caps.latest.revision: 9
author: "corob-msft"
ms.author: "corob"
manager: "ghogen"
---
# C Run-Time Library Reference
The Microsoft run-time library provides routines for programming for the Microsoft Windows operating system. These routines automate many common programming tasks that are not provided by the C and C++ languages.
Sample programs are included in the individual reference topics for most routines in the library.
## In This Section
[C Run-Time Libraries](../c-runtime-library/crt-library-features.md)
Discusses the .lib files that comprise the C run-time libraries.
[Run-Time Routines by Category](../c-runtime-library/run-time-routines-by-category.md)
Provides links to the run-time library by category.
[Global Variables and Standard Types](../c-runtime-library/global-variables-and-standard-types.md)
Provides links to the global variables and standard types provided by the run-time library.
[Global Constants](../c-runtime-library/global-constants.md)
Provides links to the global constants defined by the run-time library.
[Alphabetical Function Reference](../c-runtime-library/reference/crt-alphabetical-function-reference.md)
Provides a table of contents entry point into an alphabetical listing of all C run-time library functions.
[Generic-Text Mappings](../c-runtime-library/generic-text-mappings.md)
Provides links to the generic-text mappings defined in Tchar.h.
[Language and Country/Region Strings](../c-runtime-library/locale-names-languages-and-country-region-strings.md)
Describes how to use the `setlocale` function to set the language and Country/Region strings.
## Related Sections
[Debug Routines](../c-runtime-library/debug-routines.md)
Provides links to the debug versions of the run-time library routines.
[Run-Time Error Checking](../c-runtime-library/run-time-error-checking.md)
Provides links to functions that support run-time error checks.
[DLLs and Visual C++ run-time library behavior](../build/run-time-library-behavior.md)
Discusses the entry point and startup code used for a DLL.
[Visual C++ Libraries](http://msdn.microsoft.com/en-us/fec23c40-10c0-4857-9cdc-33a3b99b30ae)
Provides links to the various libraries provided with Visual C++, including ATL, MFC, OLE DB Templates, the C run-time library, and the C++ Standard Library.
[Debugging](/visualstudio/debugger/debugging-in-visual-studio)
Provides links to using the Visual Studio debugger to correct logic errors in your application or stored procedures.
## See Also
[Visual C++ Libraries Reference](http://msdn.microsoft.com/en-us/fec23c40-10c0-4857-9cdc-33a3b99b30ae) | 48.730159 | 214 | 0.751792 | eng_Latn | 0.838549 |
b936da029b6f5cfe07b8540a0d22fab07f4fbc16 | 4,685 | md | Markdown | README.md | rhettg/tf_aws_lab | 43e805bcfb818f1822ab4110475ee30ba2f99953 | [
"Apache-2.0"
] | 3 | 2016-04-01T00:58:00.000Z | 2021-07-01T03:08:13.000Z | README.md | rhettg/tf_aws_lab | 43e805bcfb818f1822ab4110475ee30ba2f99953 | [
"Apache-2.0"
] | null | null | null | README.md | rhettg/tf_aws_lab | 43e805bcfb818f1822ab4110475ee30ba2f99953 | [
"Apache-2.0"
] | 1 | 2021-07-01T03:08:20.000Z | 2021-07-01T03:08:20.000Z | # tf_aws_lab
A Terraform module for creating a VPC Laboratory allowing you to connect to
your lab network using a IPSec VPN.
This is useful for quickly and securely building a development infrastructure
in AWS. It integrates with private Route53 so you'll get a complete domain and
DNS records inside your VPC.
## Input Variables
### Required
Nothing. It just works
### Recommended
* `key_name` Name of the key_pair to use for creating a VPN instance. (so you can ssh in)
* `name` - Name for the lab. Becomes the domainname for the VPC as well as controls Environment labels.
* `vpn_base_ami` - AMI to use in your region. Default assumes us-east-1 and ubuntu trusty.
* `vpn_instance_type` - Defaults to `t2.small`
### Optional
* `lab_bucket_name` - Name for an S3 bucket. Defaults to `<lab name>-lab-bucket`
* `vpn_user` - Defaults to lab name
* `vpn_password` - Default generates a uuid
* `vpn_sharedkey` - Default generates a uuid
* `vpc_cidr` - Network layout for the VPC. Defaults to 10.0.0.0/16
* `vpn_subnet` - Where to build the main subnet. Defaults to 10.0.249.0/24
## Outputs
You'll likely need these to connect to your VPN:
* `vpn_ip`
* `vpn_user`
* `vpn_password`
* `vpn_sharedkey`
These will be useful for building additional resources:
* `subnet_id`
* `security_group_id`
* `zone_id`
* `domain`
You might also use:
* `vpc_id`
* `vpn_instance_id`
* `bucket_name` - S3 Bucket name to stage data.
* `bucket_url` - S3 URL you can use to pull resources out of the bucket
## Example
```
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
module "vpc_lab" {
source = "github.com/rhettg/tf_aws_lab"
}
resource "aws_instance" "test" {
ami = "ami-c80b0aa2"
instance_type = "m3.medium"
subnet_id = "${module.vpc_lab.subnet_id}"
vpc_security_group_ids = ["${module.vpc_lab.security_group_id}"]
}
resource "aws_route53_record" "test" {
zone_id = "${module.vpc_lab.zone_id}"
name = "test.${module.vpc_lab.domain}"
type = "A"
ttl = "300"
records = ["${aws_instance.test.private_ip}"]
}
output "vpn_ip" {
value = "${module.vpc_lab.vpn_ip}"
}
output "vpn_sharedkey" {
value = "${module.vpc_lab.vpn_sharedkey}"
}
output "vpn_user" {
value = "${module.vpc_lab.vpn_user}"
}
output "vpn_password" {
value = "${module.vpc_lab.vpn_password}"
}
```
This will create a VPC that includes an instance called `vpn0`. You can then configure
your local VPN client to use "Cisco IPSec" with the generated user, password,
shared key, and IP address.
After successfully connecting, you should be able to connect to any other
resource you create in the VPC.
```
$ ping test.lab
PING test.lab (10.0.249.113): 56 data bytes
64 bytes from 10.0.249.113: icmp_seq=0 ttl=64 time=71.382 ms
...
```
## Uploading Data
While Terraform has "provisioners" such as file upload or script execution, you
can't really easily use them here because you'd have to be connected to your
VPN to connect to your hosts.
In a production environment you should likely be building images with Packer,
but for prototyping that's not a great workflow.
Doing all your provisioning with just user_data scripts can also work, but
you're limited to 16 KB.
To get around these limitations, tf_aws_lab has helpfully configured an S3
bucket your instances inside the VPC can access.
You can define resources that should exist in your bucket:
```
resource "aws_s3_bucket_object" "lab_provision" {
  bucket = "${module.vpc_lab.bucket_name}"
  key    = "lab.tgz"
  source = "build/lab.tgz"
  etag   = "${md5(file("build/lab.tgz"))}"
}
```
To use this effectively, add the following to your instance resource:

```
depends_on = ["aws_s3_bucket_object.lab_provision"]
```
Then you can templatize your user_data script such as:
```
resource "template_file" "test_user_data" {
  count    = 1
  template = "${file("test_user_data.sh")}"

  vars {
    hostname   = "test${count.index}"
    bucket_url = "${module.vpc_lab.bucket_url}"
  }
}
```
And your `test_user_data.sh`
```
#!/bin/bash
set -e
echo "Setting hostname"
echo "${hostname}" > /etc/hostname
hostname -F /etc/hostname
cd /tmp
wget ${bucket_url}/lab.tgz
tar --no-same-owner -xzf lab.tgz
./provision.sh
```
The content you upload is of course up to you. A simple binary, a set of python
scripts, or even a full puppet manifest.
## Authors
Originally suggested by [@splaice](https://github.com/splaice), initial VPN
configuration by [@bickfordb](https://github.com/bickfordb) and terraform-fu by
[@rhettg](https://github.com/rhettg).
---
title: Sheets.Add Method (Excel)
keywords: vbaxl10.chm152073
f1_keywords:
- vbaxl10.chm152073
ms.prod: excel
api_name:
- Excel.Sheets.Add
ms.assetid: db5de750-fd09-2b18-c52b-98d88eeb0ffc
ms.date: 06/08/2017
---
# Sheets.Add Method (Excel)
Creates a new worksheet, chart, or macro sheet. The new worksheet becomes the active sheet.
## Syntax
_expression_. `Add`( `_Before_` , `_After_` , `_Count_` , `_Type_` )
_expression_ A variable that represents a [Sheets](./Excel.Sheets.md) object.
### Parameters
|Name|Required/Optional|Data type|Description|
|:-----|:-----|:-----|:-----|
| _Before_|Optional| **Variant**|An object that specifies the sheet before which the new sheet is added.|
| _After_|Optional| **Variant**|An object that specifies the sheet after which the new sheet is added.|
| _Count_|Optional| **Variant**|The number of sheets to be added. The default value is one.|
| _Type_|Optional| **Variant**|Specifies the sheet type. Can be one of the following **[XlSheetType](Excel.XlSheetType.md)** constants: **xlWorksheet** , **xlChart** , **xlExcel4MacroSheet** , or **xlExcel4IntlMacroSheet** . If you are inserting a sheet based on an existing template, specify the path to the template. The default value is **xlWorksheet** .|
### Return value
An Object value that represents the new worksheet, chart, or macro sheet.
## Remarks
If _Before_ and _After_ are both omitted, the new sheet is inserted before the active sheet.
## Example
This example inserts a new worksheet before the last worksheet in the active workbook.
```vb
ActiveWorkbook.Sheets.Add Before:=Worksheets(Worksheets.Count)
```
## See also
[Sheets Object](Excel.Sheets.md)
```js
var Auth0 = require('auth0-js');
var Auth0Cordova = require('@auth0/cordova');
function getBySelector(arg) {
return document.querySelector(arg);
}
function getById(id) {
return document.getElementById(id);
}
function getRedirectUrl() {
var returnTo = env.PACKAGE_ID + '://${account.namespace}/cordova/' + env.PACKAGE_ID + '/callback';
var url = 'https://${account.namespace}/v2/logout?client_id=${account.clientId}&returnTo=' + returnTo;
return url;
}
function openUrl(url) {
SafariViewController.isAvailable(function (available) {
if (available) {
SafariViewController.show({
url: url
},
function(result) {
if (result.event === 'opened') {
console.log('opened');
} else if (result.event === 'loaded') {
console.log('loaded');
} else if (result.event === 'closed') {
console.log('closed');
}
},
function(msg) {
console.log("KO: " + JSON.stringify(msg));
})
} else {
window.open(url, '_system');
}
})
}
function App() {
this.auth0 = new Auth0.Authentication({
domain: '${account.namespace}',
clientID: '${account.clientId}'
});
this.login = this.login.bind(this);
this.logout = this.logout.bind(this);
}
App.prototype.state = {
authenticated: false,
accessToken: false,
currentRoute: '/',
routes: {
'/': {
id: 'loading',
onMount: function(page) {
if (this.state.authenticated === true) {
return this.redirectTo('/home');
}
return this.redirectTo('/login');
}
},
'/login': {
id: 'login',
onMount: function(page) {
if (this.state.authenticated === true) {
return this.redirectTo('/home');
}
var loginButton = page.querySelector('.btn-login');
loginButton.addEventListener('click', this.login);
}
},
'/home': {
id: 'profile',
onMount: function(page) {
if (this.state.authenticated === false) {
return this.redirectTo('/login');
}
var logoutButton = page.querySelector('.btn-logout');
var avatar = page.querySelector('#avatar');
var profileCodeContainer = page.querySelector('.profile-json');
logoutButton.addEventListener('click', this.logout);
this.loadProfile(function(err, profile) {
      if (err) {
        profileCodeContainer.textContent = 'Error ' + err.message;
        return; // bail out: without this, the code below would overwrite the error and dereference an undefined profile
      }
profileCodeContainer.textContent = JSON.stringify(profile, null, 4);
avatar.src = profile.picture;
});
}
}
}
};
App.prototype.run = function(id) {
this.container = getBySelector(id);
this.resumeApp();
};
App.prototype.loadProfile = function(cb) {
this.auth0.userInfo(this.state.accessToken, cb);
};
App.prototype.login = function(e) {
e.target.disabled = true;
var client = new Auth0Cordova({
domain: '${account.namespace}',
clientId: '${account.clientId}',
packageIdentifier: 'YOUR_PACKAGE_ID' // found in config.xml
});
var options = {
scope: 'openid profile',
audience: 'https://${account.namespace}/userinfo'
};
var self = this;
client.authorize(options, function(err, authResult) {
if (err) {
console.log(err);
return (e.target.disabled = false);
}
localStorage.setItem('access_token', authResult.accessToken);
self.resumeApp();
});
};
App.prototype.logout = function(e) {
localStorage.removeItem('access_token');
var url = getRedirectUrl();
openUrl(url);
this.resumeApp();
};
App.prototype.redirectTo = function(route) {
if (!this.state.routes[route]) {
throw new Error('Unknown route ' + route + '.');
}
this.state.currentRoute = route;
this.render();
};
App.prototype.resumeApp = function() {
var accessToken = localStorage.getItem('access_token');
if (accessToken) {
this.state.authenticated = true;
this.state.accessToken = accessToken;
} else {
this.state.authenticated = false;
this.state.accessToken = null;
}
this.render();
};
App.prototype.render = function() {
var currRoute = this.state.routes[this.state.currentRoute];
var currRouteEl = getById(currRoute.id);
var element = document.importNode(currRouteEl.content, true);
this.container.innerHTML = '';
this.container.appendChild(element);
currRoute.onMount.call(this, this.container);
};
module.exports = App;
``` | 26.452941 | 104 | 0.611519 | kor_Hang | 0.260585 |
b937b7a9c08e2aadac936e8465fdf9b3a7a9a4d2 | 1,225 | md | Markdown | README.md | bayroio/ahoj-token-issuance | 176b61bcfe0c04cc8a2aed5f6fda73cde50d6103 | [
"BSD-3-Clause"
] | null | null | null | README.md | bayroio/ahoj-token-issuance | 176b61bcfe0c04cc8a2aed5f6fda73cde50d6103 | [
"BSD-3-Clause"
] | null | null | null | README.md | bayroio/ahoj-token-issuance | 176b61bcfe0c04cc8a2aed5f6fda73cde50d6103 | [
"BSD-3-Clause"
] | 1 | 2020-09-01T17:46:55.000Z | 2020-09-01T17:46:55.000Z | [](https://gitpod.io/#https://github.com/bayroio/ahoj-token-issuance)
# ahoj-token-issuance
This solution can be used to create and trade a variable-cap, fungible asset. You can specify which sets of addresses may mint more units, mint additional units of an asset, check address balances, and transfer shares.
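For illustration, here is a minimal sketch of how a balance check could be issued programmatically. The `avm.getBalance` method, its `address`/`assetID` parameter names, and the endpoint mentioned below are assumptions taken from the public Avalanche API documentation, so verify them against your node version:

```python
import json

def make_balance_request(address, asset_id):
    """Build a JSON-RPC 2.0 payload for the AVM avm.getBalance call.

    Method and parameter names are assumed from the Avalanche API docs.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "avm.getBalance",
        "params": {"address": address, "assetID": asset_id},
    }

# Example: the KIKI asset on the FUJI test network (values listed below).
payload = make_balance_request(
    "X-fuji1fd2h5ers2xffll2s7d9m0npn4wf0ghwfmmcuaf",
    "WcNip9fVZMPSkXPzaYqZ9NZbJYhmN7Y3dgDuevWxHZw4YbjzD",
)
print(json.dumps(payload))
```

POST this payload to your node's X-Chain endpoint (for example, `https://api.avax-test.network/ext/bc/X` for FUJI) with any HTTP client and read the balance from the response.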
# EVEREST - Assets Faucet Address
X-everest15z9krm5kfsy4vagstfxg9va2qykzgvw806gu8u
# Assets
AssetId: 2qoA17geKM6D8oFwaZzRgQ4bE2sthDxhQJ8ZE7sjRC6BRJ7bDh
NAME: Kikicoin the intelligent coin
SYMBOL: KIKI
AssetId: 2wR5jFEHeECTQLbWWQr1fhJuj4FDNjGviCBRTamvGTqayVBDrC
NAME: TEcoin the coin of Team Entropy
SYMBOL: TEEN
AssetId: G1KJEoJxxsTnBcWvVfVhqaRURNuqbNBGroT8XsoGXfmRVPCHX
NAME: NinaCoin
SYMBOL: NINA
AssetId: 2J8rV9wPmsJJXHHzLf9aUiqWRC5LmHdN3dfuvNUvaYnoSr8pVe
NAME: Psycho Token
SYMBOL: SYKO
AssetId: 2QDdn35MAaEUj8efLczdhycA8B4H7kHLh6Dyiu86Dtx1qgkQU7
NAME: FourTwenty Token of The Waldos
SYMBOL: FOTW
# FUJI - Assets Faucet Address
X-fuji1fd2h5ers2xffll2s7d9m0npn4wf0ghwfmmcuaf
# Assets
AssetId: WcNip9fVZMPSkXPzaYqZ9NZbJYhmN7Y3dgDuevWxHZw4YbjzD
NAME: Kikicoin the intelligent coin
SYMBOL: KIKI | 33.108108 | 232 | 0.835102 | yue_Hant | 0.298949 |
b9380cbb770c360bb3ac0760fa04d4a565b2c7a0 | 21,970 | md | Markdown | articles/cloud-services/cloud-services-python-how-to-use-service-management.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cloud-services/cloud-services-python-how-to-use-service-management.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cloud-services/cloud-services-python-how-to-use-service-management.md | RobAaldijk/azure-docs.nl-nl | 519c7fc80075795af2670d665d1d93078faf7a87 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Use the service management API (Python) - feature guide
description: Learn how to programmatically perform common service management tasks from Python.
services: cloud-services
documentationcenter: python
author: tanmaygore
manager: vashan
editor: ''
ms.assetid: 61538ec0-1536-4a7e-ae89-95967fe35d73
ms.service: cloud-services
ms.workload: tbd
ms.tgt_pltfrm: na
ms.devlang: python
ms.topic: article
ms.date: 05/30/2017
ms.author: tagore
ms.custom: devx-track-python
ms.openlocfilehash: ef155116904ee0d3ecab250a254010e2f7664757
ms.sourcegitcommit: a92fbc09b859941ed64128db6ff72b7a7bcec6ab
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 10/15/2020
ms.locfileid: "92073985"
---
# <a name="use-service-management-from-python"></a>Use service management from Python

This guide shows you how to programmatically perform common service management tasks from Python. The **ServiceManagementService** class in the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python) supports programmatic access to much of the service management functionality that is available in the [Azure portal][management-portal]. You can use this functionality to create, update, and delete cloud services, deployments, data management services, and virtual machines. This functionality can be useful in building applications that need programmatic access to service management.
## <a name="what-is-service-management"></a><a name="WhatIs"> </a>What is service management?

The Azure Service Management API provides programmatic access to much of the service management functionality available through the [Azure portal][management-portal]. You can use the Azure SDK for Python to manage your cloud services and storage accounts.

To use the Service Management API, you need to [create an Azure account](https://azure.microsoft.com/pricing/free-trial/).
## <a name="concepts"></a><a name="Concepts"> </a>Concepts

The Azure SDK for Python wraps the [Service Management API][svc-mgmt-rest-api], which is a REST API. All API operations are performed over TLS and mutually authenticated by using X.509 v3 certificates. The management service can be accessed from within a service running in Azure. It also can be accessed directly over the Internet from any application that can send an HTTPS request and receive an HTTPS response.
## <a name="installation"></a><a name="Installation"> </a>Installation

All the features described in this article are available in the `azure-servicemanagement-legacy` package, which you can install by using pip. For more information about installation (for example, if you're new to Python), see [Install Python and the Azure SDK](/azure/developer/python/azure-sdk-install).
## <a name="connect-to-service-management"></a><a name="Connect"> </a>Connect to service management

To connect to the service management endpoint, you need your Azure subscription ID and a valid management certificate. You can obtain your subscription ID through the [Azure portal][management-portal].

> [!NOTE]
> You now can use certificates created with OpenSSL when running on Windows. Python 2.7.4 or later is required. We recommend that you use OpenSSL instead of .pfx, because support for .pfx certificates likely will be removed in the future.
>
>
### <a name="management-certificates-on-windowsmaclinux-openssl"></a>Management certificates on Windows/Mac/Linux (OpenSSL)

You can use [OpenSSL](https://www.openssl.org/) to create your management certificate. You need to create two certificates, one for the server (a `.cer` file) and one for the client (a `.pem` file). To create the `.pem` file, run:
```console
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
```
To create the `.cer` certificate, run:
```console
openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer
```
For more information about Azure certificates, see [Certificates overview for Azure Cloud Services](cloud-services-certs-create.md). For a complete description of OpenSSL parameters, see the documentation at [https://www.openssl.org/docs/apps/openssl.html](https://www.openssl.org/docs/apps/openssl.html).

After you create these files, upload the `.cer` file to Azure. In the [Azure portal][management-portal], on the **Settings** tab, select **Upload**. Note where you saved the `.pem` file.

After you obtain your subscription ID, create a certificate, and upload the `.cer` file to Azure, connect to the Azure management endpoint. Connect by passing the subscription ID and the path to the `.pem` file to **ServiceManagementService**.
```python
from azure import *
from azure.servicemanagement import *
subscription_id = '<your_subscription_id>'
certificate_path = '<path_to_.pem_certificate>'
sms = ServiceManagementService(subscription_id, certificate_path)
```
In the previous example, `sms` is a **ServiceManagementService** object. The **ServiceManagementService** class is the primary class used to manage Azure services.
### <a name="management-certificates-on-windows-makecert"></a>Management certificates on Windows (MakeCert)

You can create a self-signed management certificate on your computer by using `makecert.exe`. Open a **Visual Studio command prompt** as an **administrator** and use the following command, replacing *AzureCertificate* with the certificate name that you want to use:
```console
makecert -sky exchange -r -n "CN=AzureCertificate" -pe -a sha1 -len 2048 -ss My "AzureCertificate.cer"
```
The command creates the `.cer` file and installs it in the **Personal** certificate store. For more information, see [Certificates overview for Azure Cloud Services](cloud-services-certs-create.md).

After you create the certificate, upload the `.cer` file to Azure. In the [Azure portal][management-portal], on the **Settings** tab, select **Upload**.

After you obtain your subscription ID, create a certificate, and upload the `.cer` file to Azure, connect to the Azure management endpoint. Connect by passing the subscription ID and the location of the certificate in your **Personal** certificate store to **ServiceManagementService** (again, replace *AzureCertificate* with the name of your certificate).
```python
from azure import *
from azure.servicemanagement import *
subscription_id = '<your_subscription_id>'
certificate_path = 'CURRENT_USER\\my\\AzureCertificate'
sms = ServiceManagementService(subscription_id, certificate_path)
```
In the previous example, `sms` is a **ServiceManagementService** object. The **ServiceManagementService** class is the primary class used to manage Azure services.
## <a name="list-available-locations"></a><a name="ListAvailableLocations"> </a>List available locations

To list the locations that are available for hosting services, use the **list\_locations** method.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
result = sms.list_locations()
for location in result:
print(location.name)
```
When you create a cloud service or storage service, you need to provide a valid location. The **list\_locations** method always returns an up-to-date list of the currently available locations. As of this writing, the available locations are:

* West Europe
* North Europe
* Southeast Asia
* East Asia
* Central US
* North Central US
* South Central US
* West US
* East US
* Japan East
* Japan West
* Brazil South
* Australia East
* Australia Southeast
## <a name="create-a-cloud-service"></a><a name="CreateCloudService"> </a>Create a cloud service

When you create an application and run it in Azure, the code and configuration together are called an Azure [cloud service][cloud service]. (It was known as a *hosted service* in earlier Azure releases.) You can use the **create\_hosted\_service** method to create a new hosted service. Create the service by providing a hosted service name (which must be unique in Azure), a label (automatically encoded to base64), a description, and a location.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
name = 'myhostedservice'
label = 'myhostedservice'
desc = 'my hosted service'
location = 'West US'
sms.create_hosted_service(name, label, desc, location)
```
You can list all the hosted services for your subscription with the **list\_hosted\_services** method.
```python
result = sms.list_hosted_services()
for hosted_service in result:
print('Service name: ' + hosted_service.service_name)
print('Management URL: ' + hosted_service.url)
print('Location: ' + hosted_service.hosted_service_properties.location)
print('')
```
If you want to get information about a particular hosted service, pass the hosted service name to the **get\_hosted\_service\_properties** method.
```python
hosted_service = sms.get_hosted_service_properties('myhostedservice')
print('Service name: ' + hosted_service.service_name)
print('Management URL: ' + hosted_service.url)
print('Location: ' + hosted_service.hosted_service_properties.location)
```
After you create a cloud service, deploy your code to the service with the **create\_deployment** method.
## <a name="delete-a-cloud-service"></a><a name="DeleteCloudService"> </a>Delete a cloud service

You can delete a cloud service by passing the service name to the **delete\_hosted\_service** method.
```python
sms.delete_hosted_service('myhostedservice')
```
Before you can delete a service, all deployments for the service must first be deleted. For more information, see [Delete a deployment](#DeleteDeployment).
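Because deployments must be removed before the service itself, the two steps are often combined into a small helper. The sketch below assumes the `azure-servicemanagement-legacy` object shape (`get_hosted_service_properties(..., embed_detail=True)` returning a properties object with a `deployments` list); treat it as illustrative rather than a definitive recipe:

```python
def delete_service_with_deployments(sms, service_name):
    # Azure rejects deleting a hosted service that still has deployments,
    # so remove every deployment first, then the service itself.
    props = sms.get_hosted_service_properties(service_name, embed_detail=True)
    for deployment in props.deployments:
        sms.delete_deployment(service_name, deployment.name)
    sms.delete_hosted_service(service_name)
```

With a connected client, `delete_service_with_deployments(sms, 'myhostedservice')` tears the whole service down in one call.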
## <a name="delete-a-deployment"></a><a name="DeleteDeployment"> </a>Delete a deployment

To delete a deployment, use the **delete\_deployment** method. The following example shows how to delete a deployment named `v1`:
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
sms.delete_deployment('myhostedservice', 'v1')
```
## <a name="create-a-storage-service"></a><a name="CreateStorageService"> </a>Create a storage service

A [storage service](../storage/common/storage-account-create.md) gives you access to Azure [blobs](../storage/blobs/storage-quickstart-blobs-python.md), [tables](../cosmos-db/table-storage-how-to-use-python.md), and [queues](../storage/queues/storage-python-how-to-use-queue-storage.md). To create a storage service, you need a name for the service (between 3 and 24 lowercase characters and unique in Azure). You also need a description, a label (up to 100 characters, automatically encoded to base64), and a location. The following example shows how to create a storage service by specifying a location:
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
name = 'mystorageaccount'
label = 'mystorageaccount'
location = 'West US'
desc = 'My storage account description.'
result = sms.create_storage_account(name, desc, label, location=location)
operation_result = sms.get_operation_status(result.request_id)
print('Operation status: ' + operation_result.status)
```
In the previous example, the status of the **create\_storage\_account** operation can be retrieved by passing the result returned by **create\_storage\_account** to the **get\_operation\_status** method.

You can list your storage accounts and their properties with the **list\_storage\_accounts** method.
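Long-running operations report a status of `InProgress` until they complete, so a small polling wrapper around **get\_operation\_status** is often handy. This is a sketch; the status strings follow the Service Management API's operation-status values, and the callable indirection keeps it independent of any particular SDK object:

```python
import time

def wait_for_operation(get_status, request_id, poll_seconds=5, timeout_seconds=300):
    # Poll until the operation leaves 'InProgress' or the timeout expires.
    deadline = time.time() + timeout_seconds
    while True:
        status = get_status(request_id)
        if status != 'InProgress':
            return status  # typically 'Succeeded' or 'Failed'
        if time.time() >= deadline:
            raise TimeoutError('operation %s timed out' % request_id)
        time.sleep(poll_seconds)
```

For example: `wait_for_operation(lambda rid: sms.get_operation_status(rid).status, result.request_id)`.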
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
result = sms.list_storage_accounts()
for account in result:
print('Service name: ' + account.service_name)
print('Location: ' + account.storage_service_properties.location)
print('')
```
## <a name="delete-a-storage-service"></a><a name="DeleteStorageService"> </a>Delete a storage service

To delete a storage service, pass the storage service name to the **delete\_storage\_account** method. Deleting a storage service deletes all data stored in the service (blobs, tables, and queues).
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
sms.delete_storage_account('mystorageaccount')
```
## <a name="list-available-operating-systems"></a><a name="ListOperatingSystems"> </a>List available operating systems

To list the operating systems that are available for hosting services, use the **list\_operating\_systems** method.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
result = sms.list_operating_systems()
for os in result:
print('OS: ' + os.label)
print('Family: ' + os.family_label)
print('Active: ' + str(os.is_active))
```
You also can use the **list\_operating\_system\_families** method, which groups the operating systems by family.
```python
result = sms.list_operating_system_families()
for family in result:
print('Family: ' + family.label)
for os in family.operating_systems:
if os.is_active:
print('OS: ' + os.label)
print('Version: ' + os.version)
print('')
```
## <a name="create-an-operating-system-image"></a><a name="CreateVMImage"> </a>Create an operating system image

To add an operating system image to the image repository, use the **add\_os\_image** method.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
name = 'mycentos'
label = 'mycentos'
os = 'Linux' # Linux or Windows
media_link = 'url_to_storage_blob_for_source_image_vhd'
result = sms.add_os_image(label, media_link, name, os)
operation_result = sms.get_operation_status(result.request_id)
print('Operation status: ' + operation_result.status)
```
To list the operating system images that are available, use the **list\_os\_images** method. It includes all platform images and user images.
```python
result = sms.list_os_images()
for image in result:
print('Name: ' + image.name)
print('Label: ' + image.label)
print('OS: ' + image.os)
print('Category: ' + image.category)
print('Description: ' + image.description)
print('Location: ' + image.location)
print('Media link: ' + image.media_link)
print('')
```
## <a name="delete-an-operating-system-image"></a><a name="DeleteVMImage"> </a>Delete an operating system image

To delete a user image, use the **delete\_os\_image** method.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
result = sms.delete_os_image('mycentos')
operation_result = sms.get_operation_status(result.request_id)
print('Operation status: ' + operation_result.status)
```
## <a name="create-a-virtual-machine"></a><a name="CreateVM"> </a>Create a virtual machine

To create a virtual machine, you first need to create a [cloud service](#CreateCloudService). Then create the virtual machine deployment by using the **create\_virtual\_machine\_deployment** method.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
name = 'myvm'
location = 'West US'
#Set the location
sms.create_hosted_service(service_name=name,
label=name,
location=location)
# Name of an os image as returned by list_os_images
image_name = 'OpenLogic__OpenLogic-CentOS-62-20120531-en-us-30GB.vhd'
# Destination storage account container/blob where the VM disk
# will be created
media_link = 'url_to_target_storage_blob_for_vm_hd'
# Linux VM configuration, you can use WindowsConfigurationSet
# for a Windows VM instead
linux_config = LinuxConfigurationSet('myhostname', 'myuser', 'mypassword', True)
os_hd = OSVirtualHardDisk(image_name, media_link)
sms.create_virtual_machine_deployment(service_name=name,
deployment_name=name,
deployment_slot='production',
label=name,
role_name=name,
system_config=linux_config,
os_virtual_hard_disk=os_hd,
role_size='Small')
```
## <a name="delete-a-virtual-machine"></a><a name="DeleteVM"> </a>Delete a virtual machine

To delete a virtual machine, you first delete the deployment by using the **delete\_deployment** method.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
sms.delete_deployment(service_name='myvm',
deployment_name='myvm')
```
The cloud service then can be deleted by using the **delete\_hosted\_service** method.
```python
sms.delete_hosted_service(service_name='myvm')
```
## <a name="create-a-virtual-machine-from-a-captured-virtual-machine-image"></a>Create a virtual machine from a captured virtual machine image

To capture a VM image, you first call the **capture\_vm\_image** method.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
# replace the below three parameters with actual values
hosted_service_name = 'hs1'
deployment_name = 'dep1'
vm_name = 'vm1'
image_name = vm_name + 'image'
image = CaptureRoleAsVMImage ('Specialized',
image_name,
image_name + 'label',
image_name + 'description',
'english',
'mygroup')
result = sms.capture_vm_image(
hosted_service_name,
deployment_name,
vm_name,
image
)
```
To make sure that you successfully captured the image, use the **list\_vm\_images** method. Make sure your image is displayed in the results.
```python
images = sms.list_vm_images()
```
Finally, to create the virtual machine by using the captured image, use the **create\_virtual\_machine\_deployment** method as before, but this time pass in the vm_image_name instead.
```python
from azure import *
from azure.servicemanagement import *
sms = ServiceManagementService(subscription_id, certificate_path)
name = 'myvm'
location = 'West US'
#Set the location
sms.create_hosted_service(service_name=name,
label=name,
location=location)
sms.create_virtual_machine_deployment(service_name=name,
deployment_name=name,
deployment_slot='production',
label=name,
role_name=name,
system_config=linux_config,
os_virtual_hard_disk=None,
role_size='Small',
vm_image_name = image_name)
```
To learn more about how to capture a Linux virtual machine in the classic deployment model, see [Capture a Linux virtual machine](/previous-versions/azure/virtual-machines/linux/classic/capture-image-classic).

To learn more about how to capture a Windows virtual machine in the classic deployment model, see [Capture a Windows virtual machine](/previous-versions/azure/virtual-machines/windows/classic/capture-image-classic).
## <a name="next-steps"></a><a name="What's Next"> </a>Next steps

Now that you've learned the basics of service management, you can access the [complete API reference documentation for the Azure Python SDK](https://azure-sdk-for-python.readthedocs.org/) and easily perform complex tasks to manage your Python application.

For more information, see the [Python Developer Center](https://azure.microsoft.com/develop/python/).
[What is service management?]: #WhatIs
[Concepts]: #Concepts
[Connect to service management]: #Connect
[List available locations]: #ListAvailableLocations
[Create a cloud service]: #CreateCloudService
[Delete a cloud service]: #DeleteCloudService
[Create a deployment]: #CreateDeployment
[Update a deployment]: #UpdateDeployment
[Move deployments between staging and production]: #MoveDeployments
[Delete a deployment]: #DeleteDeployment
[Create a storage service]: #CreateStorageService
[Delete a storage service]: #DeleteStorageService
[List available operating systems]: #ListOperatingSystems
[Create an operating system image]: #CreateVMImage
[Delete an operating system image]: #DeleteVMImage
[Create a virtual machine]: #CreateVM
[Delete a virtual machine]: #DeleteVM
[Next steps]: #NextSteps
[management-portal]: https://portal.azure.com/
[svc-mgmt-rest-api]: /previous-versions/azure/ee460799(v=azure.100)
[cloud service]:/azure/cloud-services/ | 45.770833 | 654 | 0.769367 | nld_Latn | 0.983605 |
b938e32bfcbcb527277d0d0ce9747cdf3f96cf99 | 107 | md | Markdown | translations/zh-CN/data/reusables/webhooks/project_short_desc.md | nyanthanya/Cuma_Info | d519c49504fc3818c1294f14e63ee944d2f4bd89 | [
"CC-BY-4.0",
"MIT"
] | 11,698 | 2020-10-07T16:22:18.000Z | 2022-03-31T18:54:47.000Z | translations/zh-CN/data/reusables/webhooks/project_short_desc.md | nyanthanya/Cuma_Info | d519c49504fc3818c1294f14e63ee944d2f4bd89 | [
"CC-BY-4.0",
"MIT"
] | 8,317 | 2020-10-07T16:26:58.000Z | 2022-03-31T23:24:25.000Z | translations/zh-CN/data/reusables/webhooks/project_short_desc.md | nyanthanya/Cuma_Info | d519c49504fc3818c1294f14e63ee944d2f4bd89 | [
"CC-BY-4.0",
"MIT"
] | 48,204 | 2020-10-07T16:15:45.000Z | 2022-03-31T23:50:42.000Z | Activity related to project boards. {% data reusables.webhooks.action_type_desc %} For more information, see the "[Projects](/rest/reference/projects)" REST API.
| 53.5 | 106 | 0.775701 | yue_Hant | 0.513132 |
b939517045828abb6ab4b88269236debfd75c267 | 1,506 | md | Markdown | result/hello-world/csv/hello-world-csv.md | ibraheemdev/rust-web-benchmarks | c1ec9bc01f945db694f67de78a36f33f1b88bbc6 | [
"MIT"
] | 53 | 2021-07-31T12:23:23.000Z | 2022-03-10T18:49:56.000Z | result/hello-world/csv/hello-world-csv.md | ibraheemdev/rust-web-benchmarks | c1ec9bc01f945db694f67de78a36f33f1b88bbc6 | [
"MIT"
] | 7 | 2021-08-10T06:16:41.000Z | 2022-02-25T15:50:27.000Z | result/hello-world/csv/hello-world-csv.md | ibraheemdev/rust-web-benchmarks | c1ec9bc01f945db694f67de78a36f33f1b88bbc6 | [
"MIT"
] | 12 | 2021-07-31T23:24:18.000Z | 2022-02-22T10:17:08.000Z | Benchmarked on a 2021 Apple MacBook Pro M1 14"
```
| framework | latency_avg | latency_max | latency_min | latency_std_deviation | requests_avg | requests_total | transfer_rate | transfer_total |
| --------- | ----------- | ----------- | ----------- | --------------------- | ------------ | -------------- | ------------- | -------------- |
| Actix Web | 4.60 | 344.35 | 0.02 | 12.63 | 108,513.26 | 3,255,164 | 14,103,776.93 | 423,082,920 |
| Astra | 1.48 | 354.47 | 0.01 | 8.26 | 81,411.83 | 2,442,277 | 8,709,485.49 | 261,276,238 |
| Axum | 4.65 | 185.45 | 0.03 | 7.39 | 107,463.59 | 3,223,676 | 12,355,774.87 | 370,646,610 |
| Hyper | 4.73 | 216.34 | 0.03 | 9.21 | 105,674.31 | 3,169,884 | 9,403,254.45 | 282,066,899 |
| Poem | 4.60 | 181.70 | 0.04 | 6.87 | 108,599.25 | 3,257,886 | 12,486,310.30 | 374,578,805 |
| Rocket | 4.34 | 152.68 | 0.03 | 3.71 | 115,006.88 | 3,449,816 | 28,516,182.34 | 855,388,704 |
| Tide | 6.10 | 112.69 | 0.02 | 2.93 | 81,910.67 | 2,457,028 | 10,565,326.10 | 316,922,094 |
| Warp | 4.75 | 171.04 | 0.03 | 8.27 | 105,243.68 | 3,157,013 | 13,678,870.24 | 410,327,450 |
```
| 100.4 | 144 | 0.381142 | yue_Hant | 0.048148 |
b93970441f44c10cc7c748224678fd84a4ce4746 | 459 | md | Markdown | docs/api/components/tooltip.en.md | kagawagao/G2Plot | ff9a470b5327137067981cbcccfbaa8469f3574c | [
"MIT"
] | 1 | 2021-02-05T02:17:28.000Z | 2021-02-05T02:17:28.000Z | docs/api/components/tooltip.en.md | kagawagao/G2Plot | ff9a470b5327137067981cbcccfbaa8469f3574c | [
"MIT"
] | null | null | null | docs/api/components/tooltip.en.md | kagawagao/G2Plot | ff9a470b5327137067981cbcccfbaa8469f3574c | [
"MIT"
] | null | null | null | ---
title: Tooltip
order: 3
---
`markdown:docs/styles/component.md`
<div class="component-api_tooltip">
🎨 Go to [AntV Design | Tooltip](https://www.yuque.com/mo-college/vis-design/vrxog6) on Mozhe Academy (墨者学院) to learn more about the **design guide**.
#### Tooltip
<img src="https://gw.alipayobjects.com/zos/antfincdn/HjTKrPN%24j6/tooltip-intro.png" class="component-img" alt="tooltip" />
#### Configurations (_TooltipCfg_)
`markdown:docs/common/tooltip.en.md`
</div> | 22.95 | 130 | 0.714597 | yue_Hant | 0.388912 |
b93a38c37a81fcb5e4db2d0df95409cbed3a9d7a | 611 | md | Markdown | includes/migration-guide/runtime/mef/mef-catalogs-implement-ienumerable-therefore-can-no-longer-be-used-create.md | MMiooiMM/docs.zh-tw | df6d917d6a71a772c0ab98727fb4d167399cdef6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/migration-guide/runtime/mef/mef-catalogs-implement-ienumerable-therefore-can-no-longer-be-used-create.md | MMiooiMM/docs.zh-tw | df6d917d6a71a772c0ab98727fb4d167399cdef6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/migration-guide/runtime/mef/mef-catalogs-implement-ienumerable-therefore-can-no-longer-be-used-create.md | MMiooiMM/docs.zh-tw | df6d917d6a71a772c0ab98727fb4d167399cdef6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
ms.openlocfilehash: 4cc91e7c6054fdb8e96cecf7120df5b9f25de56c
ms.sourcegitcommit: 0be8a279af6d8a43e03141e349d3efd5d35f8767
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 04/18/2019
ms.locfileid: "59803546"
---
### <a name="mef-catalogs-implement-ienumerable-and-therefore-can-no-longer-be-used-to-create-a-serializer"></a>MEF catalogs implement IEnumerable and therefore can no longer be used to create a serializer
| | |
|---|---|
|Details|Starting with the .NET Framework 4.5, MEF catalogs implement IEnumerable and therefore can no longer be used to create a serializer (<xref:System.Xml.Serialization.XmlSerializer?displayProperty=name> object). Attempting to serialize a MEF catalog throws an exception.|
|Suggestion|MEF catalogs can no longer be used to create a serializer|
|Scope|Major|
|Version|4.5|
|Type|Runtime|
| 33.944444 | 161 | 0.772504 | yue_Hant | 0.641458 |
b93a487a2d8719dec5f46ef4242b60389132ac26 | 3,644 | md | Markdown | windows-driver-docs-pr/debugger/-pagein--page-in-memory-.md | ahidaka/windows-driver-docs | 6eac87818eba4c606a292991994b90f3279c2ab8 | [
"CC-BY-4.0",
"MIT"
] | 485 | 2017-05-26T02:26:37.000Z | 2022-03-30T18:22:09.000Z | windows-driver-docs-pr/debugger/-pagein--page-in-memory-.md | ahidaka/windows-driver-docs | 6eac87818eba4c606a292991994b90f3279c2ab8 | [
"CC-BY-4.0",
"MIT"
] | 2,511 | 2017-05-16T23:06:32.000Z | 2022-03-31T23:57:00.000Z | windows-driver-docs-pr/debugger/-pagein--page-in-memory-.md | ahidaka/windows-driver-docs | 6eac87818eba4c606a292991994b90f3279c2ab8 | [
"CC-BY-4.0",
"MIT"
] | 687 | 2017-05-19T03:16:24.000Z | 2022-03-31T03:19:04.000Z | ---
title: .pagein (Page In Memory)
description: The .pagein command pages in the specified region of memory.
keywords: ["Page In Memory (.pagein) command", "memory, Page In Memory (.pagein) command", ".pagein (Page In Memory) Windows Debugging"]
ms.date: 05/23/2017
topic_type:
- apiref
api_name:
- .pagein (Page In Memory)
api_type:
- NA
ms.localizationpriority: medium
---
# .pagein (Page In Memory)
The **.pagein** command pages in the specified region of memory.
```dbgcmd
.pagein [Options] Address
```
## <span id="ddk_meta_page_in_memory_dbg"></span><span id="DDK_META_PAGE_IN_MEMORY_DBG"></span>Parameters
<span id="_______Options______"></span><span id="_______options______"></span><span id="_______OPTIONS______"></span> *Options*
Any of the following options:
<span id="_p_Process"></span><span id="_p_process"></span><span id="_P_PROCESS"></span>**/p** *Process*
Specifies the address of the process that owns the memory that you want to page in. (More precisely, this parameter specifies the address of the EPROCESS block for the process.) If you omit *Process* or specify zero, the debugger uses the current process setting. For more information about the process setting, see [**.process (Set Process Context)**](-process--set-process-context-.md)
<span id="_f"></span><span id="_F"></span>**/f**
Forces the memory to be paged in, even if the address is in kernel memory and the version of the Microsoft Windows operating system does not support this action.
<span id="_______Address______"></span><span id="_______address______"></span><span id="_______ADDRESS______"></span> *Address*
Specifies the address to page in.
### <span id="Environment"></span><span id="environment"></span><span id="ENVIRONMENT"></span>Environment
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td align="left"><p><strong>Modes</strong></p></td>
<td align="left"><p>Kernel mode only (but not during local kernel debugging)</p></td>
</tr>
<tr class="even">
<td align="left"><p><strong>Targets</strong></p></td>
<td align="left"><p>Live debugging only</p></td>
</tr>
<tr class="odd">
<td align="left"><p><strong>Platforms</strong></p></td>
<td align="left"><p>All</p></td>
</tr>
</tbody>
</table>
## Remarks
After you run the **.pagein** command, you must use the [**g (Go)**](g--go-.md) command to resume program execution. After a brief time, the target computer automatically breaks into the debugger again.
At this point, the address that you specify is paged in. If you use the **/p** option, the process context is also set to the specified process, exactly as if you used the [**.process /i Process**](-process--set-process-context-.md) command.
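For example, the following sequence pages in a user-mode address on behalf of a specific process and then resumes execution so the page-in can complete. The EPROCESS address and virtual address shown here are placeholders, not values from a real session:

```dbgcmd
.pagein /p 8a12c020 7ffd9000
g
```

When the target breaks back into the debugger, the specified address is resident and, because **/p** was used, the process context is set to the specified process.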
If the address is already paged in, the **.pagein** command still checks that the address is paged in and then breaks back into the debugger. If the address is invalid, this command only breaks back into the debugger.
In Windows Server 2003 and Windows XP, you can page in only user-mode addresses by using **.pagein**. You can override this restriction by using the **/f** switch, but we do not recommend that you use this switch. In Windows Vista, you can safely page in user-mode and kernel-mode memory.
**Warning** If you use **.pagein** on an address in a kernel stack in Windows Server 2003 or Windows XP, a bug check might occur.
## Requirements
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td align="left"><p>Version</p></td>
<td align="left"><p>Supported in Windows XP and later versions of Windows.</p></td>
</tr>
</tbody>
</table>
| 36.079208 | 387 | 0.710483 | eng_Latn | 0.923971 |
b93a6424fd2b79e3bda1935729967e13ed1ad0f0 | 268 | md | Markdown | README.md | asonni/sys-top | e68c426c85d26ddedf5f6faa11f336756c5d16c9 | [
"MIT"
] | null | null | null | README.md | asonni/sys-top | e68c426c85d26ddedf5f6faa11f336756c5d16c9 | [
"MIT"
] | 2 | 2020-10-06T18:26:27.000Z | 2021-01-28T21:05:44.000Z | README.md | asonni/sys-top | e68c426c85d26ddedf5f6faa11f336756c5d16c9 | [
"MIT"
] | null | null | null | # SysTop
CPU & Memory monitor app built with Electron
## Usage
### Install Dependencies
```
npm install
```
### Run
```
npm start
npm run dev (with Nodemon)
```
### Package
```
npm run package-mac
npm run package-win
npm run package-linux
```
## LICENSE
MIT
| 8.645161 | 44 | 0.649254 | kor_Hang | 0.48329 |
b93a95cf52d013d11b6b8b376a36c352fb7f3305 | 848 | md | Markdown | ssh-tunnel/Readme.md | stopsopa/docker-puppeteer-html-scraper | 03abf0d71e60aac1a62ae6b3615a2454fc52e399 | [
"MIT"
] | null | null | null | ssh-tunnel/Readme.md | stopsopa/docker-puppeteer-html-scraper | 03abf0d71e60aac1a62ae6b3615a2454fc52e399 | [
"MIT"
] | null | null | null | ssh-tunnel/Readme.md | stopsopa/docker-puppeteer-html-scraper | 03abf0d71e60aac1a62ae6b3615a2454fc52e399 | [
"MIT"
] | null | null | null |
Read more about ssh tunnels:
https://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
usefull command to determine if it's working on target machine:
netstat -ntpl
# ALLOW BINDING TO 0.0.0.0 ON THE SERVER
[https://superuser.com/a/588665](https://superuser.com/a/588665)
# Installation autossh - mac
brew install autossh
# Usage
on mac run:
/bin/bash tunnel.sh
and run scraper server:
make start
on target server (with static ip) run:
node proxy-server.js
now open <ip of server>:<NODEPORT> in a browser
it will redirect traffic to localhost:<TARGETHOSTPORT>
which the tunnel then forwards to the Mac scraper service on port <LOCALPORT>
# cron
* * * * * root cd /var/www/html-scraper/ssh-tunnel && (make isworking || make start) | 26.5 | 113 | 0.708726 | eng_Latn | 0.73303 |
b93a99ecbc6b31b9dce7e88e98196fa41d215af6 | 506 | md | Markdown | exercises/basic/syntax.md | smathot/pythontutorials | 77390927fb0cd8be4e50f6a865cff827211ad5d5 | [
"CC-BY-3.0"
] | 1 | 2021-02-15T18:16:32.000Z | 2021-02-15T18:16:32.000Z | exercises/basic/syntax.md | smathot/pythontutorials | 77390927fb0cd8be4e50f6a865cff827211ad5d5 | [
"CC-BY-3.0"
] | null | null | null | exercises/basic/syntax.md | smathot/pythontutorials | 77390927fb0cd8be4e50f6a865cff827211ad5d5 | [
"CC-BY-3.0"
] | 1 | 2022-01-01T13:14:53.000Z | 2022-01-01T13:14:53.000Z | 
Imagine a right triangle like the one above and:
- Read a number from the standard input and assign it to `a`
- Read another number from the standard input and assign it to `b`
- Use Pythagoras' theorem to determine the value of the long side `c`
- Use string formatting to print out the length of the long side
- If `c` is larger than `PI` (a constant), also print out: *And this is longer than PI*
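One possible solution sketch in Python (the variable names follow the exercise; sample values stand in for the standard-input reads so the snippet runs on its own):

```python
import math

PI = math.pi  # the constant mentioned in the exercise

def long_side(a: float, b: float) -> float:
    # Pythagoras' theorem: c**2 == a**2 + b**2
    return math.sqrt(a ** 2 + b ** 2)

# In the real exercise, a and b come from standard input:
#   a = float(input())
#   b = float(input())
c = long_side(3.0, 4.0)  # sample values used here instead of stdin
print(f"The long side is {c}")           # → The long side is 5.0
if c > PI:
    print("And this is longer than PI")  # 5.0 > 3.14159..., so this prints
```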
| 50.6 | 104 | 0.752964 | eng_Latn | 0.995066 |
b93aec6677227287ffc1395d2db248302d14aad9 | 17,937 | markdown | Markdown | ResearchKit/docs/Survey/CreatingSurveys-template.markdown | mluedke2/karenina | 8cdd9f1fe501a659f47fe95835c76d08afe805a5 | [
"MIT"
] | 2 | 2020-05-11T01:04:29.000Z | 2020-12-02T07:22:14.000Z | ResearchKit/docs/Survey/CreatingSurveys-template.markdown | mluedke2/karenina | 8cdd9f1fe501a659f47fe95835c76d08afe805a5 | [
"MIT"
] | null | null | null | ResearchKit/docs/Survey/CreatingSurveys-template.markdown | mluedke2/karenina | 8cdd9f1fe501a659f47fe95835c76d08afe805a5 | [
"MIT"
] | null | null | null | #
<sub>These materials are for informational purposes only and do not constitute legal advice. You should contact an attorney to obtain advice with respect to the development of a research app and any applicable laws.</sub>
#Creating Surveys
A survey task is a collection of step objects (`ORKStep`) representing
a sequence of questions, such as "What medications are you taking?" or
"How many hours did you sleep last night?" You can collect results
for the individual steps or for the entire task.
The steps for creating a task to present a survey are:
1. <a href="#create">Create one or more steps</a>
2. <a href="#task">Create a task</a>
3. <a href="#results">Collect results</a>
##1. Create Steps<a name="create"></a>
The survey module provides a single-question step (`ORKQuestionStep`)
and a form step that can contain more than one item
(`ORKFormStep`). You can also use an instruction step
(`ORKInstructionStep`) to introduce the survey or provide specific
instructions.
Every step has its own step view controller that defines the UI
presentation for that type of step. When a task view controller needs
to present a step, it instantiates and presents the right step view
controller for the step. If needed, you can customize the details of
each step view controller, such as button titles and appearance, by
implementing task view controller delegate methods (see
`ORKTaskViewControllerDelegate`).
### Instruction Step
An instruction step explains the purpose of a task and provides
instructions for the user. An `ORKInstructionStep` object includes an
identifier, title, text, detail text, and an image. Because an
instruction step does not collect any data, it yields an empty
`ORKStepResult` that nonetheless records how long the instruction was
on screen.
```
ORKInstructionStep *step =
[[ORKInstructionStep alloc] initWithIdentifier:@"identifier"];
step.title = @"Selection Survey";
step.text = @"This survey can help us understand your eligibility for the fitness study";
```
Creating a step as shown in the code above, including it in a task, and
presenting with a task view controller, yields something like this:
<center>
<figure>
<img src="SurveyImages/InstructionStep.png" width="25%" alt="Instruction step" style="border: solid black 1px;" align="middle"/>
<figcaption> <center>Example of an instruction step.</center></figcaption>
</figure>
</center>
### Question Step
A question step (`ORKQuestionStep`) presents a single question,
composed of a short `title` and longer, more descriptive `text`. Configure the type of data the user can enter by setting the answer format. You can
also provide an option for the user to skip the question with the
step's `optional` property.
For numeric and text answer formats, the question step's `placeholder`
property specifies a short hint that describes the expected value of
an input field.
A question step yields a step result that, like the instruction step's
result, indicates how long the user had the question on screen. It
also has a child, an `ORKQuestionResult` subclass that reports the
user's answer.
The following code configures a simple numeric question step.
```
ORKNumericAnswerFormat *format =
[ORKNumericAnswerFormat integerAnswerFormatWithUnit:@"years"];
format.minimum = @(18);
format.maximum = @(90);
ORKQuestionStep *step =
[ORKQuestionStep questionStepWithIdentifier:kIdentifierAge
title:@"How old are you?"
answer:format];
```
Adding this question step to a task and presenting the task produces
a screen that looks like this:
<center>
<figure>
<img src="SurveyImages/QuestionStep.png" width="25%" alt="Question step" style="border: solid black 1px;" align="middle"/>
<figcaption> <center>Example of a question step.</center></figcaption>
</figure>
</center>
###Form Step
When the user needs to answer several related questions together, it
may be preferable to use a form step (`ORKFormStep`) in order to present them all on one page. Form steps support all the same answer formats as question
steps, but can contain multiple items (`ORKFormItem`), each with its
own answer format.
Forms can be organized into sections by incorporating extra "dummy" form
items that contain only a title. See the `ORKFormItem` reference documentation
for more details.
The result of a form step is similar to the result of a question step,
except that it contains one question result for each form
item. The results are matched to their corresponding form items using
their identifiers (the `identifier` property).
For example, the following code shows how to create a form that requests some basic details, using default values extracted from HealthKit on iOS to accelerate data entry:
```
ORKFormStep *step =
    [[ORKFormStep alloc] initWithIdentifier:kFormIdentifier
                                      title:@"Form"
                                       text:@"Form groups multi-entry in one page"];
NSMutableArray *items = [NSMutableArray new];

ORKAnswerFormat *genderFormat =
    [ORKHealthKitCharacteristicTypeAnswerFormat
     answerFormatWithCharacteristicType:
       [HKCharacteristicType characteristicTypeForIdentifier:HKCharacteristicTypeIdentifierBiologicalSex]];
[items addObject:
    [[ORKFormItem alloc] initWithIdentifier:kGenderItemIdentifier
                                       text:@"Gender"
                               answerFormat:genderFormat]];

// Include a section separator
[items addObject:
    [[ORKFormItem alloc] initWithSectionTitle:@"Basic Information"]];

ORKAnswerFormat *bloodTypeFormat =
    [ORKHealthKitCharacteristicTypeAnswerFormat
     answerFormatWithCharacteristicType:
       [HKCharacteristicType characteristicTypeForIdentifier:HKCharacteristicTypeIdentifierBloodType]];
[items addObject:
    [[ORKFormItem alloc] initWithIdentifier:kBloodTypeItemIdentifier
                                       text:@"Blood Type"
                               answerFormat:bloodTypeFormat]];

ORKAnswerFormat *dateOfBirthFormat =
    [ORKHealthKitCharacteristicTypeAnswerFormat
     answerFormatWithCharacteristicType:
       [HKCharacteristicType characteristicTypeForIdentifier:HKCharacteristicTypeIdentifierDateOfBirth]];
ORKFormItem *dateOfBirthItem =
    [[ORKFormItem alloc] initWithIdentifier:kDateOfBirthItemIdentifier
                                       text:@"DOB"
                               answerFormat:dateOfBirthFormat];
dateOfBirthItem.placeholder = @"DOB";
[items addObject:dateOfBirthItem];

// ... And so on, adding additional items

step.formItems = items;
```
The code above gives you something like this:
<center>
<figure>
<img src="SurveyImages/FormStep.png" width="25%" alt="Form step" style="border: solid black 1px;" align="middle"/>
<figcaption> <center>Example of a form step.</center></figcaption>
</figure>
</center>
### Answer Format
In the ResearchKit™ framework, an answer format defines how the user should be asked to
answer a question or an item in a form. For example, consider a
survey question such as "On a scale of 1 to 10, how much pain do you
feel?" The answer format for this question would naturally be a
continuous scale on that range, so you can use
`ORKScaleAnswerFormat`, and set its `minimum` and `maximum` properties
to reflect the desired range.
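For example, a sketch of that pain question might look like the following. The step identifier, default value, and step size here are illustrative choices, not values mandated by the framework:

```
ORKScaleAnswerFormat *scaleFormat =
    [ORKScaleAnswerFormat scaleAnswerFormatWithMaximumValue:10
                                               minimumValue:1
                                               defaultValue:5
                                                       step:1];

ORKQuestionStep *painStep =
    [ORKQuestionStep questionStepWithIdentifier:@"pain"
                                          title:@"On a scale of 1 to 10, how much pain do you feel?"
                                         answer:scaleFormat];
```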
The screenshots below show the standard answer formats that the ResearchKit framework provides.
<p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/ScaleAnswerFormat.png" style="width: 100%;border: solid black 1px; ">Scale answer format</p><p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/BooleanAnswerFormat.png" style="width: 100%;border: solid black 1px;">Boolean answer format</p><p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 3%; margin-bottom: 0.5em;"><img src="SurveyImages/ValuePickerAnswerFormat.png" style="width: 100%;border: solid black 1px;">Value picker answer format </p>
<p style="clear: both;">
<p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/ImageChoiceAnswerFormat.png" style="width: 100%;border: solid black 1px; ">Image choice answer format </p><p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/TextChoiceAnswerFormat_1.png" style="width: 100%;border: solid black 1px;">Text choice answer format (single text choice answer) </p><p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 3%; margin-bottom: 0.5em;"><img src="SurveyImages/TextChoiceAnswerFormat_2.png" style="width: 100%;border: solid black 1px;">Text choice answer format (multiple text choice answer) </p>
<p style="clear: both;">
<p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/NumericAnswerFormat.png" style="width: 100%;border: solid black 1px; ">Numeric answer format</p><p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/TimeOfTheDayAnswerFormat.png" style="width: 100%;border: solid black 1px;">TimeOfTheDay answer format</p><p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 3%; margin-bottom: 0.5em;"><img src="SurveyImages/DateAnswerFormat.png" style="width: 100%;border: solid black 1px;">Date answer format</p>
<p style="clear: both;">
<p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/TextAnswerFormat_1.png" style="width: 100%;border: solid black 1px; ">Text answer format (unlimited text entry)</p><p style="float: left; font-size: 9pt; text-align: center; width: 25%; margin-right: 5%; margin-bottom: 0.5em;"><img src="SurveyImages/TextAnswerFormat_2.png" style="width: 100%;border: solid black 1px;">Text answer format (limited text entry) </p>
<p style="clear: both;"></p>
In addition to the preceding answer formats, the ResearchKit framework provides
special answer formats for asking questions about quantities or
characteristics that the user might already have stored in the Health
app. When a HealthKit answer format is used, the task view controller
automatically presents a Health data access request to the user (if
they have not already granted access to your app). The presentation
details are populated automatically, and, if the user has granted
access, the field defaults to the current value retrieved from their
Health database.
## 2. Create a Survey Task<a name="task"></a>
Once you create one or more steps, create an `ORKOrderedTask` to
hold them. The code below shows a Boolean step being added to a task.
```
// Create a boolean step to include in the task.
ORKStep *booleanStep =
[[ORKQuestionStep alloc] initWithIdentifier:kNutritionIdentifier];
booleanStep.title = @"Do you take nutritional supplements?";
booleanStep.answerFormat = [ORKBooleanAnswerFormat new];
booleanStep.optional = NO;
// Create a task wrapping the boolean step.
ORKOrderedTask *task =
[[ORKOrderedTask alloc] initWithIdentifier:kTaskIdentifier
steps:@[booleanStep]];
```
You must assign a string identifier to each step. The step identifier should be unique within the task, because it is the key that connects a step in the task hierarchy with the step result in the result hierarchy.
To present the task, attach it to a task view controller and present
it. The code below shows how to create a task view controller and present it modally.
```
// Create a task view controller using the task and set a delegate.
ORKTaskViewController *taskViewController =
[[ORKTaskViewController alloc] initWithTask:task taskRunUUID:nil];
taskViewController.delegate = self;
// Present the task view controller.
[self presentViewController:taskViewController animated:YES completion:nil];
```
<p><i>Note: `ORKOrderedTask` assumes that you will always present all the questions,
and will never decide what question to show based on previous answers.
To introduce conditional logic, you must either subclass
`ORKOrderedTask` or implement the `ORKTask` protocol yourself.</i></p>
##3. Collect Results<a name="results"></a>
The `result` property of the task view controller gives you the results of the task.
Each step view controller that the user views produces a step result
(`ORKStepResult`). The task view controller collates these results as
the user navigates through the task, in order to produce an
`ORKTaskResult`.
Both the task result and step result are collection results, in that
they can contain other result objects. For example, a task result
contains an array of step results.
The results contained in a step result vary depending on the type of
step. For example, a question step produces a question result
(`ORKQuestionResult`); a form step produces one question result for
every form item; and an active task with recorders generally produces
one result for each recorder.
The hierarchy of results corresponds closely to the input
model hierarchy of task and steps, as you can see here:
<center>
<figure>
<img src="SurveyImages/ResultsHierarchy.png" width="50%" alt="Completion step" align="middle" style="border: solid black 1px;">
<figcaption> <center>Example of a result hierarchy</center>
</figcaption>
</figure>
</center>
Among other properties, every result has an identifier. This
identifier is what connects the result to the model object (task,
step, form item, or recorder) that produced it. Every result also
includes start and end times, using the `startDate` and `endDate`
properties respectively. These properties can be used to infer how long the user
spent on the step.
#### Step Results That Determine the Next Step
Sometimes it's important to know the result of a step before
presenting the next step. For example, suppose a step asks "Do you
have a fever?" If the user answers “Yes,” the next question might be "What is your
temperature now?"; otherwise it might be, "Do you have any additional
health concerns?"
The following example demonstrates how to subclass
`ORKOrderedTask` to provide a different set of steps depending on the
user's answer to a Boolean question. Although the code shows only the step-after-step method (`stepAfterStep:withResult:`), a corresponding implementation of the step-before-step method (`stepBeforeStep:withResult:`)
is usually necessary.
```
- (ORKStep *)stepAfterStep:(ORKStep *)step
                withResult:(id<ORKTaskResultSource>)result {
    NSString *identifier = step.identifier;
 
    if ([identifier isEqualToString:self.qualificationStep.identifier])
    {
        ORKStepResult *stepResult = [result stepResultForStepIdentifier:identifier];
        ORKQuestionResult *questionResult = (ORKQuestionResult *)stepResult.firstResult;
        if ([questionResult isKindOfClass:[ORKBooleanQuestionResult class]])
        {
            ORKBooleanQuestionResult *booleanResult = (ORKBooleanQuestionResult *)questionResult;
            NSNumber *booleanAnswer = booleanResult.booleanAnswer;
            if (booleanAnswer)
            {
                return booleanAnswer.boolValue ? self.regularQuestionStep : self.terminationStep;
            }
        }
    }
    return [super stepAfterStep:step withResult:result];
}
```
#### Saving Results on Task Completion
After the task is completed, you can save or upload the results. This
will likely include serializing the result hierarchy in some form,
either using the built-in `NSSecureCoding` support, or to another
format appropriate for your application.
If your task can produce file output, the files are generally referenced by an `ORKFileResult`, and they are placed in the output directory that you set on the task view controller. After you complete a task, one implementation might be to serialize the result hierarchy into the output directory, zip up the entire output
directory, and share it.
In the following example, the result is archived with
`NSKeyedArchiver` on successful completion. If you choose to support
saving and restoring tasks, the user may save the task, so this
example also demonstrates how to obtain the restoration data that
would later be needed to restore the task.
```
- (void)taskViewController:(ORKTaskViewController *)taskViewController
       didFinishWithReason:(ORKTaskViewControllerFinishReason)reason
                     error:(NSError *)error
{
    switch (reason) {
        case ORKTaskViewControllerFinishReasonCompleted: {
            // Archive the result object first
            NSData *data = [NSKeyedArchiver archivedDataWithRootObject:taskViewController.result];
            // Save the data to disk with file protection
            // or upload to a remote server securely.
            // If any file results are expected, also zip up the outputDirectory.
            break;
        }
        case ORKTaskViewControllerFinishReasonFailed:
        case ORKTaskViewControllerFinishReasonDiscarded:
            // Generally, discard the result.
            // Consider clearing the contents of the output directory.
            break;
        case ORKTaskViewControllerFinishReasonSaved: {
            NSData *data = [taskViewController restorationData];
            // Store the restoration data persistently for later use.
            // Normally, keep the output directory for when you will restore.
            break;
        }
    }
}
```
| 51.248571 | 780 | 0.730836 | eng_Latn | 0.972444 |
b93b642cba4bc2458030fc693525ee285bd0a4fb | 4,092 | md | Markdown | 16.9/README.md | appelmar/scidb-eo | 0c1dd615a615399368e40e7b476a76635d24af43 | [
"MIT"
] | 1 | 2017-03-22T15:54:50.000Z | 2017-03-22T15:54:50.000Z | 16.9/README.md | mappl/scidb-eo | 0c1dd615a615399368e40e7b476a76635d24af43 | [
"MIT"
] | null | null | null | 16.9/README.md | mappl/scidb-eo | 0c1dd615a615399368e40e7b476a76635d24af43 | [
"MIT"
] | 6 | 2017-08-13T14:18:43.000Z | 2018-06-29T10:53:17.000Z | # scidb-eo
Docker Images for Earth Observation Analytics with SciDB
---
## Prerequisites
- [Docker Engine](https://www.docker.com/products/docker-engine) (>1.10.0)
- Around 15 GBs free disk space
- Internet connection to download software and dependencies
## Getting started
_**Note**: Depending on your Docker configuration, the following commands must be executed with sudo rights._
### 1. Build the Docker image (1-2 hours)
The provided Docker image is based on a minimally sized Ubuntu OS. Among others, it includes the compilation and installation of [SciDB](http://www.paradigm4.com/), [GDAL](http://gdal.org/), SciDB extensions ([scidb4geo](https://github.com/appelmar/scidb4geo), [scidb4gdal](https://github.com/appelmar/scidb4gdal)) and the installation of all dependencies. The image will take around 15 GBs of disk space. It can be created by executing:
```
git clone https://github.com/appelmar/scidb-eo && cd scidb-eo/16.9
docker build --tag="scidb-eo:16.9" . # don't miss the dot
```
_Note that by default, this includes a rather careful SciDB configuration with relatively little demand for main memory. You may modify `conf/scidb_docker.ini` if you have a powerful machine._
### 2. Start a container
The following command starts a container in detached mode, i.e. it will run as a service until it is explicitly stopped with `docker stop scidb-eo-1609`.
_Note that the following command limits the number of CPU cores and main memory available to the container. Feel free to use different settings for `--cpuset-cpus` and `-m`._
```
sudo docker run -d --name="scidb-eo-1609" --cpuset-cpus="0,1" -m "4G" -h "scidb-eo-1609" -p 33330:22 -p 33331:8083 -p 33332:8787 scidb-eo:16.9
```
The container is now accessible to the host system via SSH (port 33330), Shim (port 33331), and Rstudio Server (port 33332).
### 3. Clean up
To clean up your system, you can remove containers and the image with
1. `sudo docker rm -f scidb-eo-1609` and
2. `sudo docker rmi scidb-eo:16.9`.
## Files
| File | Description |
| :------------- | :-------------------------------------------------------|
| install/ | Directory for installation scripts |
| install/install_scidb.sh | Installs SciDB 16.9 from sources |
| install/init_scidb.sh | Initializes SciDB based on provided configuration file |
| install/install_shim.sh | Installs Shim |
| install/install_scidb4geo.sh | Installs the scidb4geo plugin |
| install/install_gdal.sh | Installs GDAL with SciDB driver |
| install/install_R.sh | Installs the latest R version |
| install/install_streaming.sh | Installs SciDB's streaming plugin |
| install/scidb-16.9.0.db1a98f.tgz| SciDB 16.9 source code |
| install/install.packages.R | Installs relevant R packages |
| conf/ | Directory for configuration files |
| conf/scidb_docker.ini | SciDB configuration file |
| conf/supervisord.conf | Configuration file to manage automatic starts in Docker containers |
| conf/iquery.conf | Default configuration file for iquery |
| conf/shim.conf | Default configuration file for shim |
| Dockerfile | Docker image definition file |
| container_startup.sh | Script that starts SciDB, Rserve, and other system services within a container |
## License
This Docker image contains source code of SciDB in install/scidb-16.9.0.db1a98f.tgz. SciDB is copyright (C) 2008-2017 SciDB, Inc. and licensed under the AFFERO GNU General Public License as published by the Free Software Foundation. You should have received a copy of the AFFERO GNU General Public License. If not, see <http://www.gnu.org/licenses/agpl-3.0.html>
The license of this Docker image can be found in the `LICENSE` file.
## Notes
This Docker image is for demonstration purposes only. Building the image includes both compiling software from sources and installing binaries. Some installations require downloading files which are not provided within this image (e.g. GDAL source code). If these links are not available or URLs become invalid, the build procedure might fail.
----
## Author
Marius Appel <[email protected]>
| 45.466667 | 438 | 0.739003 | eng_Latn | 0.961289 |
b93b8dbe349c00805eb1a6957a359e28b4404a5d | 3,038 | md | Markdown | README.md | izaack89/code-base | 07823c357444b376caf5eb24d6b8139abc120dfb | [
"MIT"
] | null | null | null | README.md | izaack89/code-base | 07823c357444b376caf5eb24d6b8139abc120dfb | [
"MIT"
] | null | null | null | README.md | izaack89/code-base | 07823c357444b376caf5eb24d6b8139abc120dfb | [
"MIT"
] | null | null | null | # 🖥 [Code Base](https://izaack89.github.io/code-base/)
This quiz is designed to help you understand JavaScript. Take it to find the topics you should review, or to demonstrate that you are a master.
## Quiz Elements
The goal of this project is to show that we can work across the stack: I change the DOM using JavaScript depending on which view is selected, and with CSS I created a responsive design.
The quiz has 4 views, all created inside a JavaScript variable and all displayed on the same page.
1. Main View

2. Quiz View.- The quiz is defined in the code inside a variable; I shuffle the questions so they do not appear in the same order each time the quiz is activated

2.1.⏱ Timer.- I use it to show the total time the person has to answer the quiz; if the answer is correct, the timer decreases by 10 seconds. Once the timer reaches 0, the game is over ![timer](assets/images/timmer.png)
2.2.✅,❌ Validation.- Here I validate whether the answer is correct for the question

 2.3. Score count.- I can save the score of the quiz while is ongoing, once is finished the quiz or the time runs out I can display the total score
3. Game Over View.- In this view the user sees the score and can enter their initials, which are saved to localStorage

4. HighScores View.- In this section I display the information stored in localStorage; I can also delete the information there by clicking the "Delete Highscores" button


## 📱 Responsive Design.- The quiz has a responsive design

## [Code Base GitHub Code](https://github.com/izaack89/code-base)
## Code Base References
- [sort](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort) - sort Function
- [List with Badge](https://developer.mozilla.org/fr/docs/Web/CSS/Layout_cookbook/List_group_with_badges) - List Layout with badge
- [createElement](https://developer.mozilla.org/es/docs/Web/API/Document/createElement) - createElement Function
- [buttons CSS](https://www.w3schools.com/css/css3_buttons.asp) - Style for buttons CSS
- [iteration Object](https://stackoverflow.com/questions/14379274/how-to-iterate-over-a-javascript-object) - How to iterate over Object
- [Box-shadow](https://developer.mozilla.org/es/docs/Web/CSS/box-shadow) - Box Shadow
- [ReadMe Markdowns](https://github.com/tchapi/markdown-cheatsheet/blob/master/README.md)
## Author
- **German Ramirez** - [GitHub](https://github.com/izaack89/)
| 55.236364 | 246 | 0.763002 | eng_Latn | 0.969708 |
b93ba3a7117c5796e0b554fb163dc25df757a568 | 3,187 | md | Markdown | README.md | drnitinmalik/multiple-linear-regression | e6848fd8d6f207b24e4c2751e5bfaac0fd63281e | [
"MIT"
] | null | null | null | README.md | drnitinmalik/multiple-linear-regression | e6848fd8d6f207b24e4c2751e5bfaac0fd63281e | [
"MIT"
] | null | null | null | README.md | drnitinmalik/multiple-linear-regression | e6848fd8d6f207b24e4c2751e5bfaac0fd63281e | [
"MIT"
] | 1 | 2022-02-24T08:28:25.000Z | 2022-02-24T08:28:25.000Z | # Pre-regression steps, Regression model, Post-regression analysis
In multiple regression (regression is also known as function approximation), we are interested in predicting one (dependent) variable from two or more (independent) variables, e.g. predicting height from weight and age. Regression implies causation: change in the dependent variable is due to change in the independent variables. Linear regression implies that the relationship between the dependent variable and the independent variables is linear and thus can be described by a linear plane known as the regression plane. The goal is to find a regression plane that fits (touches) the maximum number of data points (the number of data points equals the number of records in the dataset).
**MLR MODEL**
Objective: Predict the dependent variable Yi
Model structure: Yi = β0 + β1X1i + β2X2i + ... + βKXKi + εi, where β0 (the y-intercept/constant) and β1...βK (the slopes) are population regression coefficients and εi is the prediction error, i.e. the amount by which the prediction misses the actual value (the error is also termed the residual)
Model parameters: sample regression coefficients b0, b1... bk [determined by solving normal equations]
Model hyperparameters: Summed squared error, Mean square error [error functions]
Methods to estimate model parameters: Ordinary Least square (Minimise error function), Maximum likelihood (likelihood is the probability that a dependent variable can be predicted from the independent variables)
Model assumptions: Relationship between dependent and independent variables is linear, Dependent variable is continuous (interval or ratio), No correlation (a measure of the relationship between two variables derived from covariance) between errors & between error and independent variables, all errors have equal variances, errors follow a normal probability distribution, independent variables are not collinear (no multicollinearity)
**USE CASE**
Let's say we want to predict the profits (dependent variable) of 50 US Startup companies from the data which contains three types of expenditures (R&D spend, Administrative spend and Marketing spend, all of which is numerical data) and their location (categorical data). The [python code](https://github.com/drnitinmalik/multiple-linear-regression/blob/main/mlr%20predicting%20profit.PY) and the [data file](https://github.com/drnitinmalik/multiple-linear-regression/blob/main/50-startups.csv) are available on GitHub
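A minimal sketch of the OLS fit behind this kind of analysis (toy data generated from a known model, not the actual 50 Startups file; numpy's `lstsq` stands in for whatever library routine the original analysis used):

```python
# Minimal OLS sketch: estimate the intercept and slopes by least squares.
import numpy as np

def fit_ols(X, y):
    """Return (intercept, coefficients) minimizing ||y - b0 - X @ b||^2."""
    # Prepend a column of ones so the intercept is estimated jointly.
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[0], beta[1:]

# Toy data from a known model: profit = 5 + 2*rd_spend + 0.5*marketing_spend
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(50, 2))   # columns: [R&D spend, Marketing spend]
y = 5 + 2 * X[:, 0] + 0.5 * X[:, 1]     # no noise, so OLS recovers it exactly

intercept, coefs = fit_ols(X, y)
```

Because the toy response is exactly linear in the predictors, the fitted parameters recover the generating model up to floating-point error.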
**RESULTS**
Duplicate values: None
Missing values: None
Multicollinearity: Yes.
Regression coefficients:

    array([ 5.69631699e-16,  4.99600361e-16, -3.05311332e-16,  1.17961196e-15,
            4.05057932e-16, -8.32667268e-17,  1.00000000e+00,  2.20489350e-13,
            2.81742200e-12])

y-intercept: -2.6193447411060333e-10

Prediction on test data:

    array([ 96778.92,  35673.41, 191792.06, 192261.83, 132602.65, 108552.04,
            69758.98, 110352.25, 146121.95, 125370.37, 182901.99,  77798.83,
            96712.8 ])
Levene test for homoscedasticity
test statistic=5.351611538586933e-29, p-value=1.0.
**DISCUSSION**
The correlation between marketing spend and R&D spend is on the higher side, which confirms multicollinearity.
After encoding the categorical variable using one-hot encoding, we have 10 columns
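As a sketch of that encoding step (the original analysis most likely used `pandas.get_dummies` or scikit-learn's `OneHotEncoder`; this pure-Python version just shows the idea):

```python
# One-hot encode a categorical column as 0/1 indicator columns.
def one_hot(values):
    # Categories are sorted so the column order is deterministic; with the
    # three US states in the 50 Startups data this yields three columns.
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

encoded = one_hot(["New York", "California", "New York", "Florida"])
```

Each row now carries exactly one 1, marking that record's state; the indicator columns then join the numeric spend columns as regressors.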
| 75.880952 | 682 | 0.799812 | eng_Latn | 0.9906 |
b93bd2b9b62b64c06cf7952b90b4518227461b2b | 1,237 | md | Markdown | _pages/about.md | zuoyuan/zuoyuan.github.io | 02a2592020eed1cf75a2945fbb9a8aa4bd439eea | [
"MIT"
] | null | null | null | _pages/about.md | zuoyuan/zuoyuan.github.io | 02a2592020eed1cf75a2945fbb9a8aa4bd439eea | [
"MIT"
] | null | null | null | _pages/about.md | zuoyuan/zuoyuan.github.io | 02a2592020eed1cf75a2945fbb9a8aa4bd439eea | [
"MIT"
] | null | null | null | ---
permalink: /
title: "About me"
excerpt: "左源, Yuan Zuo's home page, from Beihang University(BUAA) ,whose research interests include topic modeling, opinion mining and deep learning."
author_profile: true
redirect_from:
- /about/
- /about.html
---
Yuan Zuo received his Ph.D. degree from Beihang University, Beijing, PRC, in 2017. He is currently an associate professor in the Information Systems Department of Beihang University. He received his B.Eng. in Computer Science from Tianjin University of Technology, Tianjin, PRC. His research interests include data mining and machine learning.
News:
-------------
* One full paper accepted in KDD 2018: Embedding Temporal Network via Neighborhood Formation.
* Previous homepage (http://ipv6.nlsde.buaa.edu.cn/zuoyuan/) is deprecated.
Associate Professor
-------------
School of Economics and Management
Beihang University
Beijing, 100191, P. R. China
Office: A717, New Main Building
Email: skywatcher.buaa AT gmail.com; zuoyuan AT buaa.edu.cn
(Please replace AT by @)
<script
type="text/javascript" id="clustrmaps" src="//cdn.clustrmaps.com/map_v2.js?cl=ffffff&w=300&t=tt&d=9osu0yyDaRG4SQIevEaYDLFmcMR_H07ph8rcVwCnF9s&co=2d78ad&ct=ffffff&cmo=3acc3a&cmn=ff5353"></script>
| 34.361111 | 338 | 0.758286 | eng_Latn | 0.858297 |
b93bd51a219f0248ec29298836d63073700b6b66 | 35 | md | Markdown | content/apps.md | aemrei/dev.aemre.net | 4835916c1ffc2eb6a666cb34728c288d1398e358 | [
"MIT"
] | 1 | 2022-01-02T20:21:07.000Z | 2022-01-02T20:21:07.000Z | content/apps.md | aemrei/dev.aemre.net | 4835916c1ffc2eb6a666cb34728c288d1398e358 | [
"MIT"
] | null | null | null | content/apps.md | aemrei/dev.aemre.net | 4835916c1ffc2eb6a666cb34728c288d1398e358 | [
"MIT"
] | null | null | null | ---
title: App Configs & Hints
---
| 8.75 | 26 | 0.571429 | eng_Latn | 0.872758 |
b93bebbbf1f81cb7760cfc50ad42046baf408042 | 16,813 | md | Markdown | README.md | ITank-Online/waxbadges | d5e3f38cef5372bbaf5ffa07b078f4d04a434190 | [
"MIT"
] | 4 | 2019-07-26T11:56:42.000Z | 2022-01-09T13:07:56.000Z | README.md | kdmukai/achieveos | d5e3f38cef5372bbaf5ffa07b078f4d04a434190 | [
"MIT"
] | null | null | null | README.md | kdmukai/achieveos | d5e3f38cef5372bbaf5ffa07b078f4d04a434190 | [
"MIT"
] | 2 | 2021-02-07T23:16:25.000Z | 2022-02-26T20:28:42.000Z | ### Achievements logged on the blockchain for eternity! Keep what you earn!

# WAXBadges
_An open Achievements platform for the WAX blockchain_
twitter: [@WAXBadges](https://twitter.com/WAXBadges)
* WAXBadges Achievements Explorer: [explorer.waxbadges.com](https://explorer.waxbadges.com)
* WAXBadges CREATOR tool: [github: waxbadges_creator](https://github.com/kdmukai/waxbadges_creator)
* Example game integration:
* Play now! [2048 - WAXBages Edition](https://2048.waxbadges.com)
* source: [github: waxbages_2048](https://github.com/kdmukai/waxbadges_2048)
* Twitter campaign tool: [github: waxbadges_twitter](https://github.com/kdmukai/waxbadges_twitter)
## Motivation
Current achievement systems are completely trapped within their own ecosystems--XBox gamertags, each individual mobile app, Steam trophies, even certifications for tech or skills training (e.g. Khan Academy badges).
I shouldn't have to go to each individual ecosystem or sign into each individual app to see and share my achievements. But there's currently no way to view my accomplishments from, say, Steam alongside all my mobile game achievements and every other game all in one place.
This siloing has another bad consequence: my achievements suffer from varying levels of impermanence and fragility. I can work my tail off to unlock a new badge in my running app ("50-Mile Club!") but if that service shuts down, poof! My badge goes with it.
### Enter the blockchain
The blockchain offers permanent, public online data storage. Writing achievements to the blockchain will preserve them regardless of what happens to the company that originally granted them. And once your achievements are written to the blockchain it'll be simple to view them all--across all your games--in one grand trophy room and share them out to social media.
## WAXBadges overview
WAXBadges is a WAX smart contract and related services that provide a simple, open platform for any permanent achievement system to be built upon. Think of WAXBadges as a kind of backend service (AaaS -- Achievements as a Service?) that handles storage, permissions logic, management, and more. This allows game developers to easily write their users' achievements as WAXBadges to the WAX blockchain. Their players will be extra-excited to stay engaged with their games as they see their in-game achievements now accessible in one central location.
The smart contract details will be totally hidden away from the players; they won't need to know anything about blockchains to be able to unlock and view achievements.
WAXBadges also supports achievements that can be limited in quantity to provide enhanced exclusivity and greater player engagement as they race to be the early few who are able to claim a rare, limited achievement. _Coming soon_
### Easy onboarding; "custodial" achievements
A big hurdle with blockchain-based user data systems is the overly complex onboarding process: would-be users have to convert fiat to crypto; set up access to those funds via tools like Metamask, Scatter, etc; and be comfortable signing transactions and managing their private/public keys. This is just not a reasonable expectation for 99% of gamers.
So instead WAXBadges allows each project to add their users without worrying about whether or not the user has an on-chain account. Gamers' account records exist as simple `string` blockchain data based solely on the game's internal records (`name=Keith01`, `userid=your_internal_id_1234`). The studio can then immediately start granting achievements to their users. At this stage these blockchain user achievements can be thought of as being held _in custody_ on their users' behalf.
But for more advanced users...
### Claim ownership; unify achievements
If a user has the interest and the savvy to create their own blockchain account, WAXBadges provides a mechanism for them to claim their user identity in each studio's achievement ecosystem. This then allows them to view all of their achievements--across all participating games, studios, and platforms--in one place.
In brief:
* Each studio would provide an option for a user to specify their blockchain account in their in-game profile.
* The studio would write this additional info to the gamer's `User` record on chain.
* The gamer can then submit a transaction to the WAXBadges smart contract to "claim" each `User` entry and permanently tie them to their blockchain account.
After the claims are made it is then simple for a gamer to view all of their WAXBadges achievements in one place via an WAXBadges-aware block explorer.
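The claim rule can be modeled in a few lines. This is an illustrative sketch only — in the real contract this logic would live in an eosio action (with `require_auth` on the claiming account), and the actual WAXBadges action and field names may differ:

```cpp
#include <stdexcept>
#include <string>

// Plain-C++ model of the claim rule sketched above. Illustration only:
// the on-chain version is an eosio action, and the actual WAXBadges
// action/table names may differ.
struct User {
    std::string name;     // game's display name, e.g. "Keith01"
    std::string userid;   // game's internal id
    std::string account;  // WAX account once claimed; empty while custodial
};

// A custodial user entry can be claimed exactly once; claiming an entry
// that is already tied to a WAX account must fail.
void claim(User& u, const std::string& wax_account) {
    if (!u.account.empty()) {
        throw std::runtime_error("user already claimed");
    }
    u.account = wax_account;
}
```

In the real contract, `require_auth` on the claiming account would additionally guarantee that only the owner of that WAX account can sign the claim transaction.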
### Expand achievements beyond gaming
WAXBadges is launching with a proof-of-concept achievement campaign based solely on twitter activity. WAXBadges is a totally open platform so _any_ entity can create an achievements ecosystem for _any_ kind of activity. It doesn't matter if that activity happens in a video game, in a twitter thread, or offline in the real world.
This opens up new outreach possibilities that can also benefit from the permanence of the blockchain. Imagine a limited quantity achievement set up by a musician or DJ with a rabid fanbase. "The first 30k fans to do X will gain 'True Swifty' status... for life!"
### Structure
The basic organizational structure of WAXBadges achievements is pretty simple:
```
Ecosystem: "Banzai's Great Adventure"
|
+----Category: "Solo"
| |
| +----Achievement: "Coin Master"
| +----Achievement: "Treasure Finder"
| +----Achievement: "Grinder Extraordinaire"
|
|
+----Category: "Team"
|
+----Achievement: "Purple Heart"
+----Achievement: "My Savior"
+----Achievement: "MVP"
+----Achievement: "Da GOAT"
```
Each individual game would create its own **Ecosystem** entry. _Note that WAXBadges doesn't have to be limited to just gaming use cases. An `Ecosystem` could be created for academic awards (e.g. a high school's NHS inductees), records for a sports team, certifications for a training system, etc._
A game studio creates a new `Ecosystem` in two simple steps:
* Create a blockchain account for their studio (or a separate account for each game they produce).
* Submit a simple transaction from that account to the WAXBadges smart contract to create a new `Ecosystem` entry.
The WAXBadges smart contract ensures that the studio's blockchain account is the only one that can then alter any of the data within that new `Ecosystem`.
They are then free to define whatever achievement **Categories** make sense for their game. _Note: at least one `Category` is required, but it can be a generic catch-all if the studio doesn't need different categories._
Finally they add various **Achievements** within a `Category`.
The actual `Achievement` entry consists of a title, description, and the name of an image asset (more on assets below):
```
{
name: "Spicy Stunt Roll",
description: "Rolled through fire while shielded",
assetname: "spicy_roll.png"
}
```
### Assets
Images for each achievement are probably too much data to store on the blockchain. So instead each `Ecosystem` specifies an `assetbaseurl` (e.g. "mydomainname.com/images/trophies"). This is then combined with the `Achievement.assetname` to yield a complete url: https://mydomainname.com/images/trophies/spicy_roll.png.
The studio can always change the `Ecosystem.assetbaseurl` if they need to change domains, hosts, etc.
In this way we strike a compromise between providing nicely rendered achievement browsing without burdening game studios with excessive blockchain storage costs.
# Technical Notes
## Achievements are not NFTs
The WAX blockchain is focused on its NFT (Non-Fungible Token) marketplace for digital collectibles like OpsSkins. Once an NFT is purchased on the marketplace the owner has the option to resell it as s/he sees fit.
But achievements have different properties, the primary one being that they must be non-transferrable. Either you earned the achievement or you didn't; there's no buying your way into achievement bragging rights.
## Blockchain storage costs
The structure above was carefully designed to minimize blockchain storage costs. There are _**numerous**_ pitfalls when storing data to the blockchain that could prove _**very**_ costly if done poorly.
I learned this the hard way while developing the first version of this project for the EOS blockchain. I have a full writeup here: [RAM Rekt! EOS Storage Pitfalls](https://medium.com/@kdmukai_22159/ram-rekt-1eb8851b6fba). It is remarkable that a few minor design changes take the code from an impossibly cost-heavy _seems-great-in-theory-but-is-garbage-in-practice_ toy project to a truly viable, highly cost-effective achievements platform.
## For game developers
Browse through the [WAXBadges Achievements Explorer](https://explorer.waxbadges.com) to get an idea for how the achievements data is organized.
Then head over to the [WAXBadges CREATOR tool](https://github.com/kdmukai/waxbadges_creator) for a full guide on how to get started with a WAX account and start creating your own achievements ecosystem on WAXBadges.
## Testing and deploying the contract
### EOS local dev
This project originally started on the EOS blockchain but has been migrated to WAX. However, because the WAX blockchain is a fork of `eosio` it fully supports EOS smart contracts. This means that we can continue to do local development against the well-tooled EOS blockchain, even if the WAX blockchain is our ultimate target.
### Install local EOS tools
We need to run a local dev EOS blockchain along with command line tools to interact with it. On a Mac:
```
brew tap eosio/eosio
brew install eosio
```
### WAX blockchain specifics
The WAX blockchain is currently only compatible with contracts compiled with an older version of the EOS Contract Development Toolkit: `eosio.cdt` v1.4.1
_Note that this is separate from the local blockchain we just installed above_
Install via Homebrew, targeting the v1.4.1 release's git hash:
```
brew unlink eosio.cdt
brew install https://raw.githubusercontent.com/EOSIO/homebrew-eosio.cdt/e6fc339b845219d8bc472b7a4ad0c146bd33752a/eosio.cdt.rb
```
_WAX also has their own v1.4.1 `eosio.cdt` release [here](https://github.com/worldwide-asset-exchange/wax-cdt) but it is not necessary if your contract is fully compliant with `eosio.cdt` 1.4.1._
### Supported versions
WAXBadges compiles with `eosio.cdt` v1.4.1.
Tests run successfully against the latest `eosio` node (currently v1.8.1).
## Running tests
Requirements:
* python3.6+
* virtualenv
The tests are written using [EOSFactory](https://eosfactory.io/) which makes it easy to write thorough and complex unit tests in Python.
Create a new python3 virtualenv.
Install the `eosfactory` python-based testing environment from Tokenika:
* Follow the installation instructions [here](https://github.com/tokenika/eosfactory/blob/master/docs/tutorials/01.InstallingEOSFactory.md).
_Note: I had trouble getting things to work when I installed via PyPi, but the `git clone` option worked fine. YMMV._
_Note: If you're running in a `virtualenv` as I recommend, you'll need to edit the `install.sh` script and make the following change:_
```
# Original
pip3 install --user -e .
# Remove the '--user' flag for virtualenv compatibility
pip3 install -e .
```
EOSFactory will launch your local test node, reset the blockchain data to a clean state, generate user accounts, compile the smart contract, deploy it, and then execute the unit tests.
In theory the entire process is kicked off by a single command:
```
python test/test_achieveos.py
```
However, I ran into issues after stepping `eosio.cdt` down to v1.4.1. The automatic compilation step succeeded, but calls against the smart contract in the tests failed. But if we just keep the compilation step separate from running the EOSFactory tests, everything works just fine.
So I added two simple scripts:
* `compile.sh` compiles and then copies the resulting WASM and ABI files to EOSFactory's `build/` directory.
* `run_tests.py` runs the EOSFactory tests but disables the automatic compilation step.
## Running locally
If you want to interact directly with the smart contract on your local blockchain, there are a number of manual steps. It's not super user-friendly, but it's more or less straightforward:
Start keosd and nodeos:
```
keosd &
nodeos -e -p eosio \
--plugin eosio::producer_plugin \
--plugin eosio::chain_api_plugin \
--plugin eosio::http_plugin \
--plugin eosio::history_plugin \
--plugin eosio::history_api_plugin \
--access-control-allow-origin='*' \
--contracts-console \
--http-validate-host=false \
--verbose-http-errors >> /dev/null 2>&1 &
```
Compile the smart contract:
```
eosio-cpp -o waxbadges.wasm waxbadges.cpp --abigen
```
Create initial dev wallet, save the password:
```
cleos wallet create --to-console
Creating wallet: default
Save password to use in the future to unlock this wallet.
Without password imported keys will not be retrievable.
"PW5Kewn9L76X8Fpd....................t42S9XCw2"
```
Open and unlock the wallet:
```
cleos wallet open
cleos wallet unlock
```
Create keys and copy public key:
```
cleos wallet create_key
> Created new private key with a public key of: "EOS8PEJ5FM42xLpHK...X6PymQu97KrGDJQY5Y"
```
Import the default dev 'eosio' key:
```
cleos wallet import
> private key: 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
```
Create test accounts:
```
cleos create account eosio bob EOSyourpublickeyfromabove
cleos create account eosio alice EOSyourpublickeyfromabove
```
Create the contract account:
```
cleos create account eosio waxbadges EOSyourpublickeyfromabove -p eosio@active
```
Deploy the compiled contract:
```
cleos set contract waxbadges /path/to/contracts/waxbadges -p waxbadges@active
```
Push some basic smart contract actions:
```
cleos push action waxbadges addecosys '["alice", "Awesome Ecosystem", "fakedomain.com/assets"]' -p alice@active
```
## Cleanup / Resetting
To stop keosd and nodeos:
```
pkill -SIGTERM nodeos
pkill -SIGTERM keosd
```
To reset the local chain's wallets:
```
rm -rf ~/eosio-wallet
```
## Manually interacting with the deployed contract
All of the same `cleos` steps above apply for the live production contract. Simply point `cleos` at the WAX chain with the `-u` switch:
```
cleos -u https://chain.wax.io get table waxbadgesftw alice ecosystems
```
In order to act on behalf of a particular account, you'll have to add its private key to your local `cleos` wallet:
```
cleos wallet import --private-key 1234someprivatekey098
```
```
cleos push action waxbadges addecosys '["someacct", "Some Ecosystem", "blah.com/assets"]' -p someacct@active
```
## Data table migrations
_More details to come_
Re-Deploy the compiled contract:
```
rm -rf ~/eosio-wallet
cleos wallet create -n waxbadges --to-console
cleos wallet open -n waxbadges
cleos wallet unlock -n waxbadges
cleos wallet import -n waxbadges
cleos -u https://chain.wax.io push action waxbadgesftw wipetables '[]' -p waxbadgesftw@active
cleos -u https://chain.wax.io set contract waxbadgesftw /path/to/contracts/waxbadges -p waxbadgesftw@active
cleos -u https://chain.wax.io push action waxbadgesftw addecosys '["waxbadgesftw", "WAXBadges Genesis Campaign", "https://waxbadges.com", "explorer.waxbadges.com/assets", "waxbadges_logo.png"]' -p waxbadgesftw@active
cleos -u https://chain.wax.io push action waxbadgesftw addcat '["waxbadgesftw", "0", "twitter"]' -p waxbadgesftw@active
cleos -u https://chain.wax.io push action waxbadgesftw addach '["waxbadgesftw", "0", "0", "First", "First achievement ever. First 50 to follow @WAXBadges.", "ach_hand.png", "50"]' -p waxbadgesftw@active
cleos -u https://chain.wax.io get table waxbadgesftw waxbadgesftw ecosystems
```
# TODO / Future Features
* Achievements with limited quantities.
* Simple WAXBadges-aware block explorer to view achievements:
* Browse by `Ecosystem`; see the possible `Achievements`, how many people were granted each `Achievement`
* Browse by `User` in each `Ecosystem`; see which `Achievements` they were granted.
* Browse by gamer's blockchain account; see their unified `Achievements` across all linked `Ecosystems`.
* Social media sharing.
* Basic management webapp for game developers to create and manage their achievement ecosystems.
* Basic demonstration webapp with simple tasks users can complete to earn achievements.
* (eventually) include option for blockchain-savvy players to claim their achievements by linking their WAX account.
* Add support for a points system for each `Achievement`, point totals for `User`s?
* Shard Ecosystems User table for huge ecosystems?
| 48.45245 | 548 | 0.771903 | eng_Latn | 0.993332 |
b93bf60610d81349777a3d8ecef08a42cb72d512 | 206 | md | Markdown | src/Text-Core.package/TextFontReference.class/README.md | hernanmd/pharo | d1b0e3ed73b5f1879acf0fd3ba041b3290f1d499 | [
"MIT"
] | 5 | 2019-09-09T21:28:33.000Z | 2019-12-24T20:34:04.000Z | src/Text-Core.package/TextFontReference.class/README.md | tinchodias/pharo | b1600a96667c16b28a2ce456b2000840df447171 | [
"MIT"
] | 1 | 2018-01-14T20:32:07.000Z | 2018-01-16T06:51:28.000Z | src/Text-Core.package/TextFontReference.class/README.md | tinchodias/pharo | b1600a96667c16b28a2ce456b2000840df447171 | [
"MIT"
] | null | null | null | A TextFontReference encodes a font change applicable over a given range of text. The font reference is absolute: unlike a TextFontChange, it is independent of the textStyle governing display of this text. | 206 | 206 | 0.820388 | eng_Latn | 0.989217 |
b93c380c40da7ca272a6c1cd693ad4ff77cb748c | 2,725 | md | Markdown | content/post/how-to-configure-the-bash-prompt/index.md | brunomiguel/blog.brunomiguel.net | 0e7004d309640810180df977f3b0bf87256dcba2 | [
"MIT"
] | 2 | 2022-02-05T18:15:56.000Z | 2022-02-05T18:25:48.000Z | content/post/how-to-configure-the-bash-prompt/index.md | brunomiguel/blog.brunomiguel.net | 0e7004d309640810180df977f3b0bf87256dcba2 | [
"MIT"
] | null | null | null | content/post/how-to-configure-the-bash-prompt/index.md | brunomiguel/blog.brunomiguel.net | 0e7004d309640810180df977f3b0bf87256dcba2 | [
"MIT"
] | null | null | null | ---
title: "How-to customize the Bash prompt"
date: "2018-09-21"
categories:
- "geekices"
tags:
- "bash"
- "debian"
- "git"
- "promp"
- "tips"
---
In order to adapt my _Debian Stable_ installation a bit more to my workflow, I've been tweaking the _bash_ prompt. Simplicity and a small line width are key here, because I often have _tmux_ running with several panes in the same window, and small panes with long one-liner prompts suck a lot! Everything feels crammed and hard to read. Just take a look at the image below to get an idea.

After running a few commands in each pane with this prompt configuration, everything gets really crowded and confusing. For sanity-safeguarding reasons and workflow improvement, the only thing to do is customize the prompt.
The default _Debian Stable_ bash prompt, shown in the image above, is defined as:
    if [ "$color_prompt" = yes ]; then
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
    else
        PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
    fi
    unset color_prompt force_color_prompt
To make it more useful, I changed the second line to this:
    PS1="\[\033[00;32m\]\u@\h\[\033[00m\]:\w\[\033[00m\]\n└─ \[$(tput bold)\]\$(__git_ps1 '[%s] ')\$: \[$(tput sgr0)\]"
All put together:
    if [ "$color_prompt" = yes ]; then
        PS1="\[\033[00;32m\]\u@\h\[\033[00m\]:\w\[\033[00m\]\n└─ \[$(tput bold)\]\$(__git_ps1 '[%s] ')\$: \[$(tput sgr0)\]"
    else
        PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
    fi
    unset color_prompt force_color_prompt
And this is the result:

Not only do I get a more readable prompt (with "more room to breathe", if you may), but I also get the name of the current branch when I'm in a _Git_ repository folder. This is a convenient feature to have if you work with this version control system.
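One assumption baked into the prompt above: `__git_ps1` only exists if Git's prompt helper script has been sourced first, otherwise the branch segment silently expands to nothing. A minimal guard for `~/.bashrc` (the path below is where Debian's git package ships the script — adjust it for other distributions):

```bash
# Source Git's prompt helper so __git_ps1 is defined; without it the
# branch segment of the prompt silently expands to nothing.
if [ -f /usr/lib/git-core/git-sh-prompt ]; then
    . /usr/lib/git-core/git-sh-prompt
fi
```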
There are a lot more ways one can configure the prompt. Both [_How-To Geek_](https://www.howtogeek.com/307701/how-to-customize-and-colorize-your-bash-prompt/) and [_Boolean World_](https://www.booleanworld.com/customizing-coloring-bash-prompt/) websites have nice introductory guides to get you started. The _Arch Linux_ [wiki entry](https://wiki.archlinux.org/index.php/Bash/Prompt_customization) about this is also a good read. Oh, and [_RTFM_](http://tldp.org/HOWTO/Bash-Prompt-HOWTO/bash-prompt-escape-sequences.html) (Read The ... Fine ... Manual).
| 55.612245 | 553 | 0.68844 | eng_Latn | 0.908025 |
b93c544b170ddf941e96ea66e7e2f19bef28dccd | 12,695 | md | Markdown | repos/golang/remote/rc-windowsservercore.md | Mattlk13/repo-info | 734e8af562852b4d6503f484be845727b88a97ae | [
"Apache-2.0"
] | null | null | null | repos/golang/remote/rc-windowsservercore.md | Mattlk13/repo-info | 734e8af562852b4d6503f484be845727b88a97ae | [
"Apache-2.0"
] | 1 | 2020-11-05T19:56:17.000Z | 2020-11-12T13:09:29.000Z | repos/golang/remote/rc-windowsservercore.md | Mattlk13/repo-info | 734e8af562852b4d6503f484be845727b88a97ae | [
"Apache-2.0"
] | 1 | 2017-02-09T22:16:59.000Z | 2017-02-09T22:16:59.000Z | ## `golang:rc-windowsservercore`
```console
$ docker pull golang@sha256:23d3adb66b10db0e25947041f2444c239c3a34dda9cfecea3687d46107af2bd1
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms: 2
- windows version 10.0.20348.524; amd64
- windows version 10.0.17763.2565; amd64
### `golang:rc-windowsservercore` - windows version 10.0.20348.524; amd64
```console
$ docker pull golang@sha256:f3918242fc835acf7c16340fce514dec03dd4e7127577e55f13f2846115a8100
```
- Docker Version: 20.10.8
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **2.4 GB (2393931678 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:d6c1b27f3d52d02acb83942c5833f3dae1241fa06fb6d7aa20876db5961441a8`
- Default Command: `["c:\\windows\\system32\\cmd.exe"]`
- `SHELL`: `["powershell","-Command","$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]`
```dockerfile
# Sat, 08 May 2021 09:40:24 GMT
RUN Apply image 2022-RTM-amd64
# Tue, 01 Feb 2022 02:49:40 GMT
RUN Install update ltsc2022-amd64
# Wed, 09 Feb 2022 13:37:18 GMT
SHELL [powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';]
# Wed, 09 Feb 2022 13:37:19 GMT
ENV GIT_VERSION=2.23.0
# Wed, 09 Feb 2022 13:37:20 GMT
ENV GIT_TAG=v2.23.0.windows.1
# Wed, 09 Feb 2022 13:37:21 GMT
ENV GIT_DOWNLOAD_URL=https://github.com/git-for-windows/git/releases/download/v2.23.0.windows.1/MinGit-2.23.0-64-bit.zip
# Wed, 09 Feb 2022 13:37:22 GMT
ENV GIT_DOWNLOAD_SHA256=8f65208f92c0b4c3ae4c0cf02d4b5f6791d539cd1a07b2df62b7116467724735
# Wed, 09 Feb 2022 13:38:28 GMT
RUN Write-Host ('Downloading {0} ...' -f $env:GIT_DOWNLOAD_URL); [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; Invoke-WebRequest -Uri $env:GIT_DOWNLOAD_URL -OutFile 'git.zip'; Write-Host ('Verifying sha256 ({0}) ...' -f $env:GIT_DOWNLOAD_SHA256); if ((Get-FileHash git.zip -Algorithm sha256).Hash -ne $env:GIT_DOWNLOAD_SHA256) { Write-Host 'FAILED!'; exit 1; }; Write-Host 'Expanding ...'; Expand-Archive -Path git.zip -DestinationPath C:\git\.; Write-Host 'Removing ...'; Remove-Item git.zip -Force; Write-Host 'Updating PATH ...'; $env:PATH = 'C:\git\cmd;C:\git\mingw64\bin;C:\git\usr\bin;' + $env:PATH; [Environment]::SetEnvironmentVariable('PATH', $env:PATH, [EnvironmentVariableTarget]::Machine); Write-Host 'Verifying install ("git version") ...'; git version; Write-Host 'Complete.';
# Wed, 09 Feb 2022 13:38:30 GMT
ENV GOPATH=C:\go
# Wed, 09 Feb 2022 13:38:49 GMT
RUN $newPath = ('{0}\bin;C:\Program Files\Go\bin;{1}' -f $env:GOPATH, $env:PATH); Write-Host ('Updating PATH: {0}' -f $newPath); [Environment]::SetEnvironmentVariable('PATH', $newPath, [EnvironmentVariableTarget]::Machine);
# Thu, 17 Feb 2022 22:15:49 GMT
ENV GOLANG_VERSION=1.18rc1
# Thu, 17 Feb 2022 22:18:41 GMT
RUN $url = 'https://dl.google.com/go/go1.18rc1.windows-amd64.zip'; Write-Host ('Downloading {0} ...' -f $url); [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; Invoke-WebRequest -Uri $url -OutFile 'go.zip'; $sha256 = '9fd911fcb429b189b8dc1039d48e3c36eaa7ea4b18fa6ca941d3043ab49df0e9'; Write-Host ('Verifying sha256 ({0}) ...' -f $sha256); if ((Get-FileHash go.zip -Algorithm sha256).Hash -ne $sha256) { Write-Host 'FAILED!'; exit 1; }; Write-Host 'Expanding ...'; Expand-Archive go.zip -DestinationPath C:\; Write-Host 'Moving ...'; Move-Item -Path C:\go -Destination 'C:\Program Files\Go'; Write-Host 'Removing ...'; Remove-Item go.zip -Force; Write-Host 'Verifying install ("go version") ...'; go version; Write-Host 'Complete.';
# Thu, 17 Feb 2022 22:18:42 GMT
WORKDIR C:\go
```
- Layers:
- `sha256:8f616e6e9eec767c425fd9346648807d1b658d20ff6097be1d955aac69c26642`
Size: 1.3 GB (1251699055 bytes)
MIME: application/vnd.docker.image.rootfs.foreign.diff.tar.gzip
- `sha256:898469748ff68223ab87b654b29fb366c1f4f2b7cfad7a37426346ea16db8dfa`
Size: 963.2 MB (963225591 bytes)
MIME: application/vnd.docker.image.rootfs.foreign.diff.tar.gzip
- `sha256:7062696b7aef1ca33afdf32084a532f7e3151a844eb7cb2455bcc907e0f42a58`
Last Modified: Wed, 09 Feb 2022 14:28:27 GMT
Size: 1.4 KB (1426 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7e671073281e0a800f3f64cb8a9d1092a4e93d2f94cd818b0c1d47824366a5cd`
Last Modified: Wed, 09 Feb 2022 14:28:27 GMT
Size: 1.4 KB (1395 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fdc69c3c295ae9d3878060e3969bb79b86d5163188d65fb1e7afb60d6a74308b`
Last Modified: Wed, 09 Feb 2022 14:28:25 GMT
Size: 1.4 KB (1432 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:18dfebfb35a8f45f52ec961615f102607858d20fa48cc66d2b29225c9642a0f2`
Last Modified: Wed, 09 Feb 2022 14:28:25 GMT
Size: 1.4 KB (1407 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:712a35dd10725b5d8b6c55235638512bae5e3f33553578ee34182bb664c413a4`
Last Modified: Wed, 09 Feb 2022 14:28:25 GMT
Size: 1.4 KB (1422 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:842429c8e7997e5d0455ed2cdd37856f1caddc8f07913623d0d1de313c7c75a9`
Last Modified: Wed, 09 Feb 2022 14:28:30 GMT
Size: 25.7 MB (25700843 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f724804f007d606a9b3ef21df6efbede87da7e499e740b09cdb131cd840e245e`
Last Modified: Wed, 09 Feb 2022 14:28:22 GMT
Size: 1.4 KB (1427 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ff9089a4c47de35611de02ff572f618dd2020421763479d5b63e3215eefdee80`
Last Modified: Wed, 09 Feb 2022 14:28:23 GMT
Size: 534.7 KB (534739 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d43d8b4ad553aaca8c4368dfcdef95c181c329c7a1aec1f610036ed5d402f5ab`
Last Modified: Thu, 17 Feb 2022 22:32:18 GMT
Size: 1.4 KB (1420 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8a1b2f16aec8c64ccedb430285450c8b13ee09da1aac718db9a26e6a57e13d96`
Last Modified: Thu, 17 Feb 2022 22:33:07 GMT
Size: 152.8 MB (152759951 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:823c49fa6de1bb37666c7e01cbd80219f4c48b057ee1142542e1071e6b0bcced`
Last Modified: Thu, 17 Feb 2022 22:32:18 GMT
Size: 1.6 KB (1570 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `golang:rc-windowsservercore` - windows version 10.0.17763.2565; amd64
```console
$ docker pull golang@sha256:e2dc297de0106103f70d5f486f57d9c522a89a20c7f0f9f922e8d70a3931b431
```
- Docker Version: 20.10.8
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **2.9 GB (2892084955 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:563bb7347573405d8d1ee8648d904d514d457f364939d1963019d87aedb2c4c8`
- Default Command: `["c:\\windows\\system32\\cmd.exe"]`
- `SHELL`: `["powershell","-Command","$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]`
```dockerfile
# Thu, 07 May 2020 05:09:25 GMT
RUN Apply image 1809-RTM-amd64
# Wed, 02 Feb 2022 19:28:56 GMT
RUN Install update 1809-amd64
# Wed, 09 Feb 2022 13:09:18 GMT
SHELL [powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';]
# Wed, 09 Feb 2022 13:42:02 GMT
ENV GIT_VERSION=2.23.0
# Wed, 09 Feb 2022 13:42:03 GMT
ENV GIT_TAG=v2.23.0.windows.1
# Wed, 09 Feb 2022 13:42:04 GMT
ENV GIT_DOWNLOAD_URL=https://github.com/git-for-windows/git/releases/download/v2.23.0.windows.1/MinGit-2.23.0-64-bit.zip
# Wed, 09 Feb 2022 13:42:05 GMT
ENV GIT_DOWNLOAD_SHA256=8f65208f92c0b4c3ae4c0cf02d4b5f6791d539cd1a07b2df62b7116467724735
# Wed, 09 Feb 2022 13:43:39 GMT
RUN Write-Host ('Downloading {0} ...' -f $env:GIT_DOWNLOAD_URL); [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; Invoke-WebRequest -Uri $env:GIT_DOWNLOAD_URL -OutFile 'git.zip'; Write-Host ('Verifying sha256 ({0}) ...' -f $env:GIT_DOWNLOAD_SHA256); if ((Get-FileHash git.zip -Algorithm sha256).Hash -ne $env:GIT_DOWNLOAD_SHA256) { Write-Host 'FAILED!'; exit 1; }; Write-Host 'Expanding ...'; Expand-Archive -Path git.zip -DestinationPath C:\git\.; Write-Host 'Removing ...'; Remove-Item git.zip -Force; Write-Host 'Updating PATH ...'; $env:PATH = 'C:\git\cmd;C:\git\mingw64\bin;C:\git\usr\bin;' + $env:PATH; [Environment]::SetEnvironmentVariable('PATH', $env:PATH, [EnvironmentVariableTarget]::Machine); Write-Host 'Verifying install ("git version") ...'; git version; Write-Host 'Complete.';
# Wed, 09 Feb 2022 13:43:41 GMT
ENV GOPATH=C:\go
# Wed, 09 Feb 2022 13:44:39 GMT
RUN $newPath = ('{0}\bin;C:\Program Files\Go\bin;{1}' -f $env:GOPATH, $env:PATH); Write-Host ('Updating PATH: {0}' -f $newPath); [Environment]::SetEnvironmentVariable('PATH', $newPath, [EnvironmentVariableTarget]::Machine);
# Thu, 17 Feb 2022 22:19:02 GMT
ENV GOLANG_VERSION=1.18rc1
# Thu, 17 Feb 2022 22:23:13 GMT
RUN $url = 'https://dl.google.com/go/go1.18rc1.windows-amd64.zip'; Write-Host ('Downloading {0} ...' -f $url); [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; Invoke-WebRequest -Uri $url -OutFile 'go.zip'; $sha256 = '9fd911fcb429b189b8dc1039d48e3c36eaa7ea4b18fa6ca941d3043ab49df0e9'; Write-Host ('Verifying sha256 ({0}) ...' -f $sha256); if ((Get-FileHash go.zip -Algorithm sha256).Hash -ne $sha256) { Write-Host 'FAILED!'; exit 1; }; Write-Host 'Expanding ...'; Expand-Archive go.zip -DestinationPath C:\; Write-Host 'Moving ...'; Move-Item -Path C:\go -Destination 'C:\Program Files\Go'; Write-Host 'Removing ...'; Remove-Item go.zip -Force; Write-Host 'Verifying install ("go version") ...'; go version; Write-Host 'Complete.';
# Thu, 17 Feb 2022 22:23:15 GMT
WORKDIR C:\go
```
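Each `RUN` step above follows the same download, verify, expand pattern: fetch an archive, compare its SHA-256 digest against a value pinned in the Dockerfile, and abort before unpacking if it differs. A minimal Java sketch of just the verification step (the payload and digest below are illustrative, not the real `go.zip`):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class VerifyDigest {

    // SHA-256 digest of the given bytes, as lowercase hex.
    static String sha256Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is available on every JVM
        }
    }

    // True when the digest of data matches the pinned hex digest (case-insensitive).
    static boolean verify(byte[] data, String pinnedHex) {
        return sha256Hex(data).equalsIgnoreCase(pinnedHex);
    }

    public static void main(String[] args) {
        // Illustrative payload; in the Dockerfile the pinned digest is hard-coded.
        byte[] payload = "example archive contents".getBytes(StandardCharsets.UTF_8);
        String pinned = sha256Hex(payload);
        if (!verify(payload, pinned)) throw new AssertionError("digest should match");
        if (verify("tampered".getBytes(StandardCharsets.UTF_8), pinned)) throw new AssertionError("tampered data must fail");
        System.out.println("digest verified");
    }
}
```

Pinning the digest means a tampered or truncated download fails the build instead of silently producing a bad image.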
- Layers:
- `sha256:4612f6d0b889cad0ed0292fae3a0b0c8a9e49aff6dea8eb049b2386d9b07986f`
Size: 1.7 GB (1718332879 bytes)
MIME: application/vnd.docker.image.rootfs.foreign.diff.tar.gzip
- `sha256:1bd78008c728d8f9e56dc2093e6eb55f0f0b1aa96e5d0c7ccc830c5f60876cdf`
Size: 995.4 MB (995381853 bytes)
MIME: application/vnd.docker.image.rootfs.foreign.diff.tar.gzip
- `sha256:f0c1566a9285d9465334dc923e9d6fd93a51b3ef6cb8497efcacbcf64e3b93fc`
Last Modified: Wed, 09 Feb 2022 13:26:13 GMT
Size: 1.4 KB (1424 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1b56caecef9c44ed58d2621ffb6f87f797b532c81f1271d9c339222462523eb2`
Last Modified: Wed, 09 Feb 2022 14:31:28 GMT
Size: 1.4 KB (1446 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5a3ed0a076d58c949f5debdbc3616b6ccd008426c62635ab387836344123e2a6`
Last Modified: Wed, 09 Feb 2022 14:31:26 GMT
Size: 1.4 KB (1422 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f25f9584c1aa90dae36704d6bef0e59e72002fcb13e8a4618f64c9b13479c0df`
Last Modified: Wed, 09 Feb 2022 14:31:26 GMT
Size: 1.4 KB (1436 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:12d4fbc7cf0f85fc63860f052f76bfb4429eca8b878abce79a25bfdc30f9e9f5`
Last Modified: Wed, 09 Feb 2022 14:31:26 GMT
Size: 1.4 KB (1424 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c325dc9f1660ea537aae55b89be63d336762d5a3a02e929d52940586fb0f677e`
Last Modified: Wed, 09 Feb 2022 14:31:31 GMT
Size: 25.4 MB (25448246 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:dd4f3aabaa2a9bf80e2a7f417dba559f6b34e640c21b138dce099328406c8903`
Last Modified: Wed, 09 Feb 2022 14:31:23 GMT
Size: 1.4 KB (1423 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:57e61367d26baed9e16a8d5310c520ae3429d5cc7956569f325cd9de01f33604`
Last Modified: Wed, 09 Feb 2022 14:31:24 GMT
Size: 317.3 KB (317319 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2c1e96783094e290a2e657003a390393a1b34bc84c946a4495915dde57627ac8`
Last Modified: Thu, 17 Feb 2022 22:33:21 GMT
Size: 1.4 KB (1398 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b02a444980d4dbd404a4918b5e280bedda66c9134180f9fd9ab4750b650bd7a8`
Last Modified: Thu, 17 Feb 2022 22:34:05 GMT
Size: 152.6 MB (152593175 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:77430b3a26b9cb9b3dcaed44a8294d6e41a0cacbe03bef08bb1a973fc85af598`
Last Modified: Thu, 17 Feb 2022 22:33:21 GMT
Size: 1.5 KB (1510 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
| 63.159204 | 850 | 0.744388 | yue_Hant | 0.450104 |
b93cc6d679c77be73336d68570749671773ab55a | 8,144 | md | Markdown | README.md | zensh/jsgen | ed2520944068254af1d27a11dc96befee82c44d2 | [
"MIT"
] | 682 | 2015-01-02T04:23:31.000Z | 2021-11-08T11:10:57.000Z | README.md | pl2476/jsgen | ed2520944068254af1d27a11dc96befee82c44d2 | [
"MIT"
] | 9 | 2015-03-31T02:09:39.000Z | 2016-12-24T12:39:26.000Z | README.md | pl2476/jsgen | ed2520944068254af1d27a11dc96befee82c44d2 | [
"MIT"
] | 325 | 2015-01-01T15:10:21.000Z | 2020-10-14T01:57:15.000Z | {jsGen} <small>0.8.x</small>【停止更新】
=======
**——JavaScript Generated**
### [ENGLISH README][12]
### 在线演示及交流社区:[AngularJS中文社区][2]
### 注意,从0.6.x版使用了redis!请先安装redis再启动jsGen!
### 0.7.x更新说明(开发中)
1. 调整前端代码框架,使用 bower 和 gulp 管理代码;
2. **第一次启动需带`install`参数,用于初始化MongoDB数据库;**
3. 文章编辑页面增加localStorage本地存储;
4. 线上模式和开发模式的端口统一为3000;
5. `gulp` 命令编译本地运行文件,`gulp build` 编译 CDN 运行文件,其中 CDN 可在 `package.json` 中定义。
**v0.7.7版 升级了账号密码系统,v0.7.6及之前的版本升级后需更新数据库,请运行 `node app.js update-passwd` **
### 简介 (Introduction)
[JsGen][1]是用纯JavaScript编写的新一代开源社区网站系统,主要用于搭建SNS类型的专业社区,对客户端AngularJS应用稍作修改也可变成多用户博客系统、论坛或者CMS内容管理系统。
jsGen基于NodeJS编写服务器端程序,提供静态文件响应和REST API接口服务。基于AngularJS编写浏览器端应用,构建交互式网页UI视图。基于MongoDB编写数据存储系统。
#### 安装 (Installation)
**系统需要Node.js 0.10.x和mongoDB 2.4.x**
**Windows环境需要Python2.7和VS2012(用于编译node-gyp及其它需要编译的Node.js插件)**
**Dependencies: Node.js 0.10.x, redis 2.6.12, mongoDB 2.4.x.**
**Windows: Python2.7 and VS2012**
config目录下的config.js配置jsGen运行参数,包括监听端口、数据库等,内有说明。
api目录下的install.js是jsGen运行初始化文件,设置管理员初始密码,邮箱,内有说明。
    git clone git://github.com/zensh/jsgen.git
    cd jsgen
    npm install node-gyp // run this first on Windows; not needed on Linux
    // this depends on Python and VS2012, see https://github.com/TooTallNate/node-gyp/wiki/Visual-Studio-2010-Setup
    npm install // install dependencies with npm; make sure they all install successfully
    // on Windows run: npm install --msvs_version=2012
    node app.js install // initialize the MongoDB database before the first start of jsGen
    node app.js [recache] // start; the optional `recache` argument rebuilds the redis cache on startup
    npm start // normal start, same as `node app.js`
Open [http://localhost/](http://localhost/) in a browser.
Default administrator username: **admin** password: **[email protected]**.
#### Update
    git pull origin // update jsGen
    npm update // update Node.js modules
### Changelog
### 0.6.x release notes
+ 2013/11/02 jsGen v0.7.0 Restructured the front-end code, managed it with bower and grunt, and added localStorage support.
+ 2013/08/25 jsGen v0.6.x Completely rewrote the Node.js server side: redis as the cache, then.js for async control flow, and refactored the back-end service code.
+ 2013/07/29 jsGen v0.5.0 Completely rewrote the AngularJS client (server adjusted to match). Switched to the pure CSS framework, polished the UI, and added IE8 support! The AngularJS code was rewritten and optimized with several neat techniques; worth a look if you are learning AngularJS!
+ 2013/06/01 jsGen v0.3.5 Fixed several bugs; tags may now contain spaces.
+ 2013/05/26 jsGen v0.3.4 Fixed the site-settings panel not appearing in the admin backend; added an email-verification switch to the backend (off by default).
+ 2013/04/25 jsGen v0.3.3 Optimized the client-side AngularJS application.
+ 2013/04/25 jsGen v0.3.2 Fixed hidden comment-editor buttons and laggy input (patched Markdown.Editor.js); the directive prefix is now gen.
+ 2013/04/25 jsGen v0.3.1 Added auto-update of the client-side AngularJS application.
+ 2013/04/21 jsGen v0.3.0 Added automatic login on the server side and manual email verification. Upgraded jQuery and Bootstrap in the client to the latest versions and polished the UI.
+ 2013/04/13 jsGen v0.2.11 Cleaned up code; upgraded AngularJS to 1.0.6.
+ 2013/04/13 jsGen v0.2.10 Visual tweaks.
+ 2013/04/13 jsGen v0.2.9 Fixed hot-article and hot-comment bugs, optimized code, and temporarily disabled Cluster.
+ 2013/04/09 jsGen v0.2.8 Fixed an article-editor bug.
+ 2013/04/07 jsGen v0.2.7 Fixed a bug caused by process.nextTick (which could make the process exit); optimized hot-article, hot-comment, and recently-updated statistics.
+ 2013/04/07 jsGen v0.2.6 Optimized cacheTL and online-user statistics.
+ 2013/04/03 jsGen v0.2.5 Fixed a cacheTL bug (which could break fetching backend info).
+ 2013/04/02 jsGen v0.2.4 Improved the user home page: it now shows the reading timeline, updated articles, and the list of read articles.
+ 2013/04/02 jsGen v0.2.3 Fixed a case-sensitivity hole in usernames and user emails.
+ 2013/04/02 jsGen v0.2.2 Fixed bugs and adjusted the Bootstrap views for a cleaner look; Node.js cluster mode can now be enabled.
+ 2013/04/01 jsGen v0.2.0 Heavily optimized the user, article, and tag ID code for simpler logic.
+ 2013/03/31 jsGen v0.1.2 Fixed bugs; added a loading progress bar.
+ 2013/03/30 jsGen v0.1.1 Fixed several bugs; added a forever start script.
+ 2013/03/29 jsGen v0.1.0 Beta release.
### 0.5.x release notes
1. IE8 compatible.
2. Dropped the Bootstrap 3 framework in favor of YUI's pure CSS framework, keeping parts of Bootstrap such as Modal and Tooltip.
3. Uses the excellent icon set Font-Awesome.
4. Animations, and a compact/summary toggle for the article list.
5. toastr notification bars for showing error and success messages.
6. Optimized responsive design; works well in phone and tablet browsers.
7. Language strings separated out so other languages can be swapped in easily (strings inside templates are not yet separated).
8. Completely refactored AngularJS code; the highlights follow.
9. Global loading detection that responds to loading state automatically (shown after a 1-second delay by default), covering all of AngularJS's internal HTTP requests, including API requests and HTML template requests.
10. Global error detection that filters out failed responses automatically (so only successful responses reach the controllers), covering the server's own errors such as 404 and 500 as well as application-defined errors; toastr displays the error message.
11. A unified validation mechanism: the `genTooltip` directive collects and displays invalid input, and together with `uiValidate` supports arbitrary custom validation. Used mainly for login, registration, profile editing, publishing articles and comments, admin configuration, and so on.
12. A unified dirty-checking mechanism built on the `genModal` directive and the `union/intersect` functions: leaving the article editor, the profile page, or the admin pages with unsaved changes triggers a warning.
13. A generic `genPagination` directive in the style of GitHub, supporting both linked and link-less pagination. The former generates URLs and browser history (back/forward works), as in the article list; the latter works through events without changing the URL (no history), as in the comment pagination on the article detail page.
14. A `genSrc` directive for placeholder-then-async image loading, currently used mainly for avatars. jsGen uses Gravatar: until a user's Gravatar finishes loading, a local placeholder image is shown, then swapped for the avatar automatically.
15. Other handy pieces such as the timer trigger `timing`, the animated element-positioning `anchorScroll` (convenient, and a replacement for AngularJS's built-in $anchorScroll), `applyFn` (use instead of $apply without worrying about digest errors), and the generic cookie-storage service `myConf`.
### Menus and files
    +api // server-side API directory
        -article.js // article and comment system API
        -collection.js // collection system API
        -index.js // site-wide information API
        -install.js // initial installation program
        -message.js // private-message system API
        -tag.js // tag system API
        -user.js // user system API
    +config
        -config.js // site configuration file
    +dao // MongoDB data-access layer
        -articleDao.js // article/comment access interface
        -collectionDao.js // collection system access interface
        -indexDao.js // site-wide information access interface
        -messageDao.js // private-message system access interface
        -mongoDao.js // MongoDB access interface
        -tagDao.js // tag system access interface
        -userDao.js // user system access interface
    +lib // common utility modules
        -anyBaseConverter.js // generic base converter
        -cacheLRU.js // LRU cache module
        -cacheTL.js // TL cache module
        -email.js // SMTP email module
        -json.js // database format templates
        -msg.js // program messages
        -tools.js // core utility functions
    +mylogs // log directory, populated while the site runs
    +node_modules // Node.js modules, populated by npm install
    +static // client-side AngularJS web application
        +css
            +font-awesome // a great web icon set
        +img
        +js
            +lib // AngularJS, jQuery, and other JS modules
            -app.js // global initialization module
            -controllers.js // controller module
            -directives.js // directive module
            -filters.js // filter module
            -locale_zh-cn.js // language pack
            -router.js // router module
            -services.js // common services module
            -tools.js // utility functions module
        +md // Markdown documents
        +tpl // HTML templates
        -favicon.ico
        -index.html // entry file of the AngularJS web application
    +tmp // cache directory
        +static // compressed js/css cache directory, required
        +tpl // HTML template cache directory
        +upload // upload cache directory
    -app.js // Node.js entry file
    -package.json // jsGen package information file
### Features
1. Cutting-edge web technology and an unprecedented site architecture: the front end and back end are fully decoupled, with views generated by **AngularJS** and the back end a **Node.js** REST API plus static file server. Changing only the front-end AngularJS views turns it into a forum, a multi-user blog, a CMS, and so on.
2. User data, article and comment data, tag data, pagination caches, and per-user operation rate limits all use an **LRU cache**, reducing database IO while keeping data synchronized.
3. The front and back ends communicate with **json** packets. Articles and comments are edited and stored as **Markdown** (GitHub's GFM is supported); the AngularJS application renders the Markdown into HTML DOM.
4. **User account system**: following users / followers, email activation, email password reset, SHA256-encrypted login, account lock after 5 failed logins with email unlock, user tags, user scores, permission levels, per-user reading timelines, and more. A user's home page shows only the latest articles of interest (articles with followed tags or by followed authors).
5. **Article/comment system**: articles and comments share one data structure, and both can be commented on, upvoted, downvoted, and marked (bookmarked). When a comment meets certain criteria (a great comment), it is automatically promoted to an article (shown in the article list, similar to a branch); likewise, qualifying articles are automatically recommended. Article and comment heat is computed in real time, producing latest-article, hottest-article-of-the-week, hottest-comment-of-the-week, and recently-updated lists. Powerful article/comment pagination that caches each user's pagination history.
6. **Tag system**: both articles and users can be tagged, with configurable limits on how many tags each may carry. Users follow topics through tags; articles are categorized through tags. Tags are created automatically when users or articles set them. Hot tags are displayed automatically.
7. **Article collection system**: authors, editors, and admins can group related articles into collections, forming online e-books with chapter outlines, usable for tutorials, themed collections, or even serialized fiction. (TODO)
8. **Private message system**: @user mentions in articles and comments, and email notification for important messages. (TODO)
9. **Admin backend**: site settings, cache settings, runtime information, and management of articles, comments, users, tags, collections, and private messages.
10. **Robot SEO system**: because AngularJS renders page content on the client, the pages are naturally invisible to search-engine robots; for robot visits, jsGen generates robot-specific HTML pages on the server. Robot names can be added in the admin backend.
### Acknowledgments
**jsGen** is the system built for the [AngularJS Chinese community][2]; the beta is already live, so please test gently and report bugs.
Many thanks to [GitHub][3] and the great developers who contribute open source there, including [Node.js][4], [AngularJS][5], [MongoDB][6], [Bootstrap][7], and other JavaScript plugins, as well as [rrestjs][8], [mongoskin][9], [xss][10], and more contributed by Chinese developers. jsGen is likewise open source and free.
### MIT License
[1]: https://github.com/zensh/jsgen
[2]: http://angularjs.cn
[3]: https://github.com/
[4]: https://github.com/joyent/node
[5]: https://github.com/angular/angular.js
[6]: https://github.com/mongodb/mongo
[7]: https://github.com/twitter/bootstrap
[8]: https://github.com/DoubleSpout/rrestjs
[9]: https://github.com/kissjs/node-mongoskin
[10]: https://github.com/leizongmin/js-xss
[11]: http://cnodejs.org/
[12]: https://github.com/zensh/jsgen/blob/master/README_en.md
| 40.316832 | 202 | 0.675098 | yue_Hant | 0.738484 |
b93d3501290d4eb04f4261fb5da3a6950b670b06 | 174 | md | Markdown | src/lib/templates/fa.md | bdukes/adr | 632856cd6280cf7d0d376a245bcb12a5bd13f851 | [
"MIT"
] | null | null | null | src/lib/templates/fa.md | bdukes/adr | 632856cd6280cf7d0d376a245bcb12a5bd13f851 | [
"MIT"
] | null | null | null | src/lib/templates/fa.md | bdukes/adr | 632856cd6280cf7d0d376a245bcb12a5bd13f851 | [
"MIT"
] | null | null | null | # {NUMBER}. {TITLE}
Date: {DATE}
## Status
Proposed on {DATE}
## Context
Write the context of the decision here
## Decision
Write the decision here
## Consequences
Write the consequences here
| 8.7 | 28 | 0.666667 | pes_Arab | 0.947947 |
b93d5d9b3c78f46d004e71e2393e1eeba2ab5784 | 87 | md | Markdown | README.md | swethadidugu/phptravels | 6b3d9bbd2f03eb1f1bb2ee99659ac2c5dfece607 | [
"MIT"
] | null | null | null | README.md | swethadidugu/phptravels | 6b3d9bbd2f03eb1f1bb2ee99659ac2c5dfece607 | [
"MIT"
] | null | null | null | README.md | swethadidugu/phptravels | 6b3d9bbd2f03eb1f1bb2ee99659ac2c5dfece607 | [
"MIT"
] | null | null | null | # phptravels
Automated UI Tests for the https://www.phptravels.net/home (demo website)
| 29 | 73 | 0.781609 | kor_Hang | 0.489804 |
b93d74a776673d6e4123fd138abd8921b3aa41b5 | 575 | md | Markdown | desktop-src/Midl/odl.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 552 | 2019-08-20T00:08:40.000Z | 2022-03-30T18:25:35.000Z | desktop-src/Midl/odl.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 1,143 | 2019-08-21T20:17:47.000Z | 2022-03-31T20:24:39.000Z | desktop-src/Midl/odl.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 1,287 | 2019-08-20T05:37:48.000Z | 2022-03-31T20:22:06.000Z | ---
title: odl attribute
description: MKTYPLIB required the \ odl\ attribute on ODL interfaces.
ms.assetid: d46530ad-3037-43fb-8dfe-24e464970bbd
keywords:
- odl attribute MIDL
topic_type:
- apiref
api_name:
- odl
api_type:
- NA
ms.topic: reference
ms.date: 05/31/2018
---
# odl attribute
MKTYPLIB required the **\[odl\]** attribute on ODL interfaces. The MIDL compiler does not require the **\[odl\]** attribute; it is recognized only for compatibility with older **ODL** files.
> [!Note]
> The Mktyplib.exe tool is obsolete. Use the MIDL compiler instead.
| 17.424242 | 190 | 0.723478 | eng_Latn | 0.916835 |
b93eb1d7ddda7ebd8d8f5aefb69be0f0e2543928 | 5,491 | md | Markdown | README.md | textcreationpartnership/N00513 | 27f1fcb76385cd2a916dfdb466faf4acc0b3efce | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/N00513 | 27f1fcb76385cd2a916dfdb466faf4acc0b3efce | [
"CC0-1.0"
] | null | null | null | README.md | textcreationpartnership/N00513 | 27f1fcb76385cd2a916dfdb466faf4acc0b3efce | [
"CC0-1.0"
] | null | null | null | #The heresie and hatred which has falsly [sic] charged upon the innocent justly returned upon the guilty. Giving some brief and impartial account of the most material passages of a late dispute in writing, that hath passed at Philadelphia betwixt John Delavall and George Keith, with some intermixt remarks and observations on the whole.#
##Keith, George, 1639?-1716.##
The heresie and hatred which has falsly [sic] charged upon the innocent justly returned upon the guilty. Giving some brief and impartial account of the most material passages of a late dispute in writing, that hath passed at Philadelphia betwixt John Delavall and George Keith, with some intermixt remarks and observations on the whole.
Keith, George, 1639?-1716.
##General Summary##
**Links**
[TCP catalogue](http://www.ota.ox.ac.uk/tcp/) •
[HTML](http://tei.it.ox.ac.uk/tcp/Texts-HTML/free/N00/N00513.html) •
[EPUB](http://tei.it.ox.ac.uk/tcp/Texts-EPUB/free/N00/N00513.epub)
**Availability**
This keyboarded and encoded edition of the
work described above is co-owned by the institutions
providing financial support to the Early English Books
Online Text Creation Partnership. This Phase I text is
available for reuse, according to the terms of Creative
Commons 0 1.0 Universal. The text can be copied,
modified, distributed and performed, even for
commercial purposes, all without asking permission.
**Major revisions**
1. __2004-03__ __TCP__ *Assigned for keying and markup*
1. __2004-07__ __AEL Data (Chennai)__ *Keyed and coded from Readex/Newsbank page images*
1. __2004-08__ __Olivia Bottum__ *Sampled and proofread*
1. __2004-08__ __Olivia Bottum__ *Text and markup reviewed and edited*
1. __2004-10__ __pfs.__ *Batch review (QC) and XML conversion*
##Content Summary##
#####Front#####
#####Body#####
1. Heresie and Hatred justly returned on the GUILTY, &c.
#####Back#####
1. The Printer's Advertisement
**Types of content**
* Oh, Mr. Jourdain, there is **prose** in there!
There are 331 **omitted** fragments!
@__reason__ (331) : illegible (331) • @__resp__ (331) : #AELD (331) • @__extent__ (331) : 2 letters (11), 1 letter (49), 1 word (224), 2 words (34), 3 letters (1), 4 words (1), 3 words (8), 5 letters (1), 4 letters (1), 1 span (1)
**Character listing**
|Text|string(s)|codepoint(s)|
|---|---|---|
|General Punctuation|•…|8226 8230|
|Geometric Shapes|◊▪|9674 9642|
|CJKSymbolsandPunctuation|〈〉|12296 12297|
##Tag Usage Summary##
###Header Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__author__|2||
|2.|__availability__|1||
|3.|__biblFull__|1||
|4.|__change__|5||
|5.|__date__|7| @__when__ (1) : 2004-12 (1)|
|6.|__editorialDecl__|1||
|7.|__extent__|2||
|8.|__idno__|7| @__type__ (7) : DLPS (1), TCP (1), STC (2), NOTIS (1), IMAGE-SET (1), EVANS-CITATION (1)|
|9.|__keywords__|1| @__scheme__ (1) : http://authorities.loc.gov/ (1)|
|10.|__label__|5||
|11.|__langUsage__|1||
|12.|__language__|1| @__ident__ (1) : eng (1)|
|13.|__listPrefixDef__|1||
|14.|__note__|5||
|15.|__notesStmt__|2||
|16.|__p__|11||
|17.|__prefixDef__|2| @__ident__ (2) : tcp (1), char (1) • @__matchPattern__ (2) : ([0-9\-]+):([0-9IVX]+) (1), (.+) (1) • @__replacementPattern__ (2) : http://eebo.chadwyck.com/downloadtiff?vid=$1&page=$2 (1), https://raw.githubusercontent.com/textcreationpartnership/Texts/master/tcpchars.xml#$1 (1)|
|18.|__projectDesc__|1||
|19.|__pubPlace__|2||
|20.|__publicationStmt__|2||
|21.|__publisher__|2||
|22.|__ref__|2| @__target__ (2) : https://creativecommons.org/publicdomain/zero/1.0/ (1), http://www.textcreationpartnership.org/docs/. (1)|
|23.|__seriesStmt__|1||
|24.|__sourceDesc__|1||
|25.|__term__|3||
|26.|__textClass__|1||
|27.|__title__|3||
|28.|__titleStmt__|2||
###Text Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__abbr__|15||
|2.|__closer__|2||
|3.|__desc__|331||
|4.|__div__|3| @__type__ (3) : title_page (1), text (1), printer_to_the_reader (1)|
|5.|__g__|103| @__ref__ (103) : char:EOLhyphen (89), char:punc (14)|
|6.|__gap__|331| @__reason__ (331) : illegible (331) • @__resp__ (331) : #AELD (331) • @__extent__ (331) : 2 letters (11), 1 letter (49), 1 word (224), 2 words (34), 3 letters (1), 4 words (1), 3 words (8), 5 letters (1), 4 letters (1), 1 span (1)|
|7.|__head__|2||
|8.|__hi__|373||
|9.|__p__|54||
|10.|__pb__|23| @__facs__ (23) : tcp:000641_0000_0FA96267CF5E8260 (1), tcp:000641_0001_0FA96268889641E8 (1), tcp:000641_0002_0FA9626957ECA240 (1), tcp:000641_0003_0FA9626A38AF1008 (1), tcp:000641_0004_0FA9626BCDBC0D60 (1), tcp:000641_0005_0FA9626C51B33C18 (1), tcp:000641_0006_0FA9626D08105C90 (1), tcp:000641_0007_0FA9626DCDF610F0 (1), tcp:000641_0008_0FA9626E945A8688 (1), tcp:000641_0009_0FA9626F4D473F08 (1), tcp:000641_0010_0FA962700E1FCC88 (1), tcp:000641_0011_0FA96271022F0D30 (1), tcp:000641_0012_0FA962729BB06CB8 (1), tcp:000641_0013_0FA96273173112A0 (1), tcp:000641_0014_0FA96273D252F2D0 (1), tcp:000641_0015_0FA962748996ECF0 (1), tcp:000641_0016_0FA9627555E59F10 (1), tcp:000641_0017_0FA9627622F3C8D0 (1), tcp:000641_0018_0FA96276C7AB2280 (1), tcp:000641_0019_0FA9627ACFA1CCF0 (1), tcp:000641_0020_0FA9627C76101298 (1), tcp:000641_0021_0FA9627DF9810608 (1), tcp:000641_0022_0FA9627E49C501D0 (1) • @__n__ (20) : 3 (1), 4 (1), 7 (2), 6 (2), 9 (1), 10 (1), 11 (1), 12 (1), 13 (1), 14 (1), 15 (1), 16 (1), 17 (1), 18 (1), 19 (1), 20 (1), 21 (1), 22 (1)|
|11.|__q__|2||
|12.|__signed__|2||
|13.|__trailer__|1||
| 47.747826 | 1,062 | 0.698962 | eng_Latn | 0.436516 |
b93ed0dd664dd3ef0a26d9338adf399ac5196291 | 967 | md | Markdown | docs/framework/wcf/diagnostics/event-logging/wmiunregistrationfailed.md | homard/docs.fr-fr | 1ea296656ac8513433dd186266b80b1d04487190 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/event-logging/wmiunregistrationfailed.md | homard/docs.fr-fr | 1ea296656ac8513433dd186266b80b1d04487190 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/event-logging/wmiunregistrationfailed.md | homard/docs.fr-fr | 1ea296656ac8513433dd186266b80b1d04487190 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: WmiUnregistrationFailed
ms.date: 03/30/2017
ms.assetid: 7d1d31a7-efab-492d-b0ff-3233d5dc7a2a
ms.openlocfilehash: de00e0d0408a300afadbbfdf5ce77d08702cda80
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 05/04/2018
ms.locfileid: "33469893"
---
# <a name="wmiunregistrationfailed"></a>WmiUnregistrationFailed
ID: 127
Severity: Error
Category: ServiceModel
## <a name="description"></a>Description
This event indicates that the WMI provider was not unregistered. The event lists the WMI object, the error, the process name, and the process ID.
## <a name="see-also"></a>See also
[Event logging](../../../../../docs/framework/wcf/diagnostics/event-logging/index.md)
[Events general reference](../../../../../docs/framework/wcf/diagnostics/event-logging/events-general-reference.md)
| 38.68 | 159 | 0.748707 | fra_Latn | 0.381031 |
b93ef21c81d92e86007145e0c86494d575786242 | 598 | md | Markdown | api/Access.Form.RecordSourceQualifier.md | italicize/VBA-Docs | 8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Access.Form.RecordSourceQualifier.md | italicize/VBA-Docs | 8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Access.Form.RecordSourceQualifier.md | italicize/VBA-Docs | 8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Form.RecordSourceQualifier Property (Access)
keywords: vbaac10.chm13560
f1_keywords:
- vbaac10.chm13560
ms.prod: access
api_name:
- Access.Form.RecordSourceQualifier
ms.assetid: e4c94bb5-b1e4-bfeb-c5f1-b21ae27762b2
ms.date: 06/08/2017
---
# Form.RecordSourceQualifier Property (Access)
Returns or sets a **String** indicating the SQL Server owner name of the record source for the specified form. Read/write.
## Syntax
_expression_. `RecordSourceQualifier`
_expression_ A variable that represents a [Form](Access.Form.md) object.
## See also
[Form Object](Access.Form.md)
| 19.290323 | 123 | 0.769231 | eng_Latn | 0.489845 |
b93efa524d2cf85754438b3c30a167f7d3b27140 | 5,254 | md | Markdown | quickstart-javase/docs/JVM/JVM与Linux的内存关系详解.md | youngzil/quickstart-framework | 5252ab4ffe089461969ed54420d3f9f8980baa03 | [
"Apache-2.0"
] | 6 | 2019-01-02T11:02:38.000Z | 2021-01-30T16:35:20.000Z | quickstart-javase/docs/JVM/JVM与Linux的内存关系详解.md | youngzil/quickstart-framework | 5252ab4ffe089461969ed54420d3f9f8980baa03 | [
"Apache-2.0"
] | 31 | 2019-11-13T02:06:18.000Z | 2022-03-31T20:51:49.000Z | quickstart-javase/docs/JVM/JVM与Linux的内存关系详解.md | youngzil/quickstart-framework | 5252ab4ffe089461969ed54420d3f9f8980baa03 | [
"Apache-2.0"
] | 3 | 2018-07-10T15:08:02.000Z | 2020-09-02T06:48:07.000Z | - [1. Linux and the process memory model](#1-linux-and-the-process-memory-model)
- [2. The process and JVM memory space](#2-the-process-and-jvm-memory-space)
  - [2. The difference between the JVM and the JMM](#2-the-difference-between-the-jvm-and-the-jmm)
  - [As-if-serial semantics](#as-if-serial-semantics)
  - [Happens-before definition](#happens-before-definition)
JMM summary
Explaining the JMM and why it is designed this way
The three properties concurrent programming must satisfy to keep data safe
What thread safety is
Three examples of the thread-safety problems caused by the JMM's three properties (atomicity, ordering, visibility)
---------------------------------------------------------------------------------------------------------------------
## 1. Linux and the process memory model
The JVM runs on Linux as a process, so understanding how Linux relates to process memory is the basis for understanding how the JVM relates to Linux memory.
At the hardware level, a Linux system's memory space consists of two parts: physical memory and SWAP (located on disk).
Physical memory is the main memory area Linux uses while active;
when physical memory runs short, Linux moves some temporarily unused data to SWAP on disk to free up usable memory;
when data residing in SWAP is needed, it must first be swapped back into memory.
From the Linux system's point of view, apart from the BIN area that boots the system, the whole memory space is divided into two main parts:
kernel memory (Kernel space) and
user memory (User space).
Kernel memory is the space Linux itself uses, mainly for program scheduling, memory allocation, access to hardware resources, and other kernel logic.
User memory is the main space provided to processes; Linux gives every process an identical virtual memory space, which keeps processes independent and isolated from one another.
This is implemented with virtual memory: every process gets a virtual memory space, and physical memory is allocated only when the virtual memory is actually used.
From a process's point of view, the user memory (virtual memory space) it can access directly is divided into 5 parts: the code area, the data area, the heap, the stack, and the unused area.
The code area holds the application's machine code; the code cannot be modified while running, so it is read-only and fixed-size.
The data area holds the application's global data, static data, and some constant strings; its size is also fixed.
The heap is space the program requests dynamically at run time; it is memory the running program allocates and frees directly.
The stack holds function arguments, temporary variables, return addresses, and similar data.
The unused area is the reserve from which new memory space is allocated.
## 2. The process and JVM memory space
The JVM is essentially a process, so its memory space (also called the runtime data area; note the difference from the JMM) has the general characteristics of a process. See "Understanding JVM memory management in Java" for a deeper treatment.
However, the JVM is not an ordinary process; its memory space has many novel characteristics, for two main reasons:
- the JVM moves many things that normally belong to operating-system management into the JVM itself, in order to reduce the number of system calls;
- Java NIO, which aims to reduce the system-call overhead of reading and writing IO. The JVM process memory model compared with an ordinary process is shown in the figure below:
1. User memory
2. Kernel memory
Applications usually do not deal with kernel memory directly; it is managed and used by the operating system.
However, as Linux's focus on performance has grown, new features let applications use kernel memory or map into kernel space.
Java NIO was born against this background; it makes full use of these new Linux features to improve the IO performance of Java programs.
References
https://juejin.im/post/5cceea7c6fb9a0322279307b
https://blog.csdn.net/hxcaifly/article/details/82563269
---------------------------------------------------------------------------------------------------------------------
## 2. The difference between the JVM and the JMM
First, by definition:
- JVM (Java Virtual Machine): the Java virtual machine model, which mainly describes the internal structure of the JVM and the relationships between its parts.
- JMM (Java Memory Model): the Java memory model, which mainly specifies the relationships between memory and threads.
Second, in terms of memory structure:
- JVM memory structure: program counter (PC), VM stacks, native method stacks, heap, method area
- JMM: main memory and working memory (thread-local memory)
The Java Memory Model (JMM) mainly specifies the relationships between threads and memory.
By the JMM's design, the system has a main memory; all Java variables are stored in main memory, which is shared by all threads.
Every thread has its own working memory, which holds copies of some variables from main memory. All of a thread's operations on variables happen in its working memory; threads cannot access each other's memory directly, and variable values are passed between threads through main memory.
Summary:
The JMM's main memory and working memory are not a memory division at the same level as the JVM's Java heap, stacks, method area, and so on; the two are basically unrelated.
If the two must be mapped onto each other, then from the definitions of variables, main memory, and working memory: main memory mainly corresponds to the object instance data in the Java heap, while working memory corresponds to parts of the VM stacks.
At a lower level, main memory corresponds directly to physical RAM, while for better speed the VM (or even hardware optimizations) may keep working memory preferentially in registers and caches, because running code mainly reads and writes working memory.
JMM summary
The JMM is a specification. It exists to solve the problems that arise when multiple threads communicate through shared memory: inconsistent thread-local copies of data, compiler instruction reordering, and out-of-order execution by the processor. Its goal is to guarantee atomicity, visibility, and ordering in concurrent programs.
Explaining the JMM and why it is designed this way
Java memory model: it is designed to hide the differences between hardware and operating systems so that Java achieves consistent concurrency behavior on every platform.
In the JMM, memory is split into main memory and working memory; every thread has its own working memory, while main memory is shared by all threads.
Two key concepts of the Java memory model: visibility and ordering.
Why have working memory at all? Isn't having both main memory and working memory more trouble, with data constantly copied back and forth? Why not operate on main memory directly? [Exactly the two points above: efficiency and concurrency]
For the same reason registers and caches were introduced: doing everything in memory would simply be too slow; only working in registers and caches gives satisfactory speed. Here main memory is the analogue of RAM, and working memory the analogue of registers and caches.
Why the split into main memory and working memory:
1. Efficiency: it acts as a cache; many modifications, one synchronization back to main memory
2. Concurrency: through synchronized, volatile, and the like
To keep data safe, concurrent programs must satisfy the following three properties:
Atomicity means the CPU cannot pause an operation mid-way and reschedule: the operation is not interrupted; either it runs to completion or it does not run at all.
Visibility means that when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.
Ordering means the program executes in the order the code was written.
Notice that the cache-coherence problem is really a visibility problem,
processor optimizations can cause atomicity problems,
and instruction reordering causes ordering problems.
What thread safety is
If multiple threads can access an object and, without any special environment or extra synchronization, calling the object's behavior yields correct results, the object is thread-safe.
Three examples of the thread-safety problems caused by the JMM's three properties (atomicity, ordering, visibility)
1. Violating atomicity: incrementing a volatile variable is a compound operation, so it is not thread-safe;
2. Violating ordering: for example, a shared variable accessed by multiple threads at the same time, without ordering, each copying it into its own working memory, is necessarily thread-unsafe;
3. Violating visibility: the same example as for ordering but with an ordinary variable; each thread copies the variable into its working memory, which is necessarily thread-unsafe.
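A hedged Java sketch of the first example above: `count++` on a volatile field is a compound read-modify-write, so `volatile` alone gives visibility but not atomicity; `synchronized` makes the whole increment one atomic action (class names and counts here are illustrative):

```java
public class CounterDemo {

    // Not safe: volatile gives visibility, but count++ is still
    // read -> add -> write, so two threads can interleave and lose updates.
    static class VolatileCounter {
        volatile int count;
        void inc() { count++; }
    }

    // Safe: synchronized makes the whole read-modify-write one atomic action
    // and publishes the new value to the next thread that takes the lock.
    static class SyncCounter {
        private int count;
        synchronized void inc() { count++; }
        synchronized int get() { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        final int THREADS = 4, PER_THREAD = 25_000;
        SyncCounter safe = new SyncCounter();
        Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < PER_THREAD; i++) safe.inc();
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        if (safe.get() != THREADS * PER_THREAD) throw new AssertionError("lost updates");
        System.out.println("count = " + safe.get());
    }
}
```

Running the same loop on `VolatileCounter` usually ends below the expected total, which is exactly the broken-atomicity case.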
## As-if-serial semantics
As-if-serial semantics means: no matter how much reordering happens (by the compiler and processor, to increase parallelism), the execution result of a (single-threaded) program must not change. The compiler, runtime, and processor must all obey as-if-serial semantics. As-if-serial protects single-threaded programs: a compiler, runtime, and processor that obey it create an illusion for programmers writing single-threaded code that the program executes in program order. For example, in the circle-area code above, in a single thread the code feels like it runs line by line; in fact lines A and B have no data dependence and may be reordered, so A and B do not necessarily run in order. As-if-serial semantics frees programmers from worrying about reordering or memory visibility in single-threaded code.
## Happens-before definition
JSR-133 uses the happens-before concept to specify the execution order between two operations. The two operations may be within one thread or in different threads.
The JMM therefore uses happens-before relations to give programmers cross-thread memory-visibility guarantees (if a write a in thread A and a read b in thread B are related by happens-before, then even though a and b execute in different threads, the JMM guarantees the programmer that a is visible to b). The concrete definition:
1) If one operation happens-before another, the result of the first operation is visible to the second, and the first is ordered before the second.
2) A happens-before relation between two operations does not mean that a concrete Java platform implementation must execute them in that order. If the result of a reordered execution matches the result of executing in happens-before order, the reordering is not illegal (that is, the JMM allows it).
Point 1) is the JMM's promise to programmers. From the programmer's point of view, happens-before can be read as: if A happens-before B, the Java memory model guarantees that A's result is visible to B and that A is ordered before B. Note that this is only a guarantee the JMM makes to programmers!
Point 2) is the JMM's constraint on compiler and processor reordering. As said before, the JMM follows one basic principle: as long as the program's execution result does not change (for single-threaded programs and correctly synchronized multi-threaded programs), the compiler and processor may optimize however they like. The reason is that programmers do not care whether two operations were really reordered; they care that the program's semantics, i.e. its result, do not change. Happens-before is therefore essentially the same idea as as-if-serial.
Comparison:
as-if-serial VS happens-before
1. As-if-serial semantics guarantee that the result of a single-threaded program does not change; happens-before relations guarantee that the result of a correctly synchronized multi-threaded program does not change.
2. As-if-serial semantics create an illusion for programmers writing single-threaded code that the program runs in program order; happens-before creates an illusion for programmers writing correctly synchronized multi-threaded code that the program runs in the order happens-before specifies.
3. Both as-if-serial and happens-before exist to raise the parallelism of program execution as much as possible without changing the program's execution result.
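One concrete happens-before rule is the volatile rule: a write to a volatile variable happens-before every subsequent read of that variable, which also publishes the ordinary writes made before the volatile write. A hedged Java sketch (field and class names are illustrative):

```java
public class VisibilityDemo {
    static volatile boolean ready; // the volatile write/read pair creates the happens-before edge
    static int payload;            // ordinary field, published by the volatile write below

    public static void main(String[] args) throws InterruptedException {
        final int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { }     // spins until the volatile write becomes visible
            seen[0] = payload;     // guaranteed to observe 42, not a stale 0
        });
        reader.start();
        payload = 42;              // ordinary write...
        ready = true;              // ...made visible by this volatile write (happens-before)
        reader.join();
        if (seen[0] != 42) throw new AssertionError("stale read");
        System.out.println("reader saw payload = " + seen[0]);
    }
}
```

Without `volatile` on `ready`, the reader could spin forever on a stale copy in its working memory, or read `payload` as 0; the volatile write/read pair rules both out.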
References
https://www.jianshu.com/p/797129612bed
https://juejin.im/post/5d67d248f265da03ea5a9198
https://blog.csdn.net/zhaomengszu/article/details/80270696
https://www.cnblogs.com/huxuer/p/9066269.html
https://blog.csdn.net/longfulong/article/details/78790955
# WHO Info
App version ``2.3.5``
Analyzed with [covid-apps-observer](http://github.com/covid-apps-observer) project, version ``0.1``
<img src="icon.png" alt="WHO Info icon" width="80"/>
## App overview
| | |
|-------------------------|-------------------------|
| **Name** | WHO Info |
| **Unique identifier** | org.who.infoapp |
| **Link to Google Play** | [https://play.google.com/store/apps/details?id=org.who.infoapp](https://play.google.com/store/apps/details?id=org.who.infoapp) |
| **Summary** | The official information app of the World Health Organization. |
| **Privacy policy** | [https://www.who.int/about/who-we-are/privacy-policy](https://www.who.int/about/who-we-are/privacy-policy) |
| **Latest version** | 2.3.5 |
| **Last update** | 2020-09-28 15:54:01 |
| **Recent changes** | This release resolves some minor issues. |
| **Installs** | 100.000+ |
| **Category** | News & Magazines |
| **First release** | Apr 13, 2020 |
| **Size** | 11M |
| **Supported Android version** | 4.2 and up |
### Description
> Have the latest health information at your fingertips with the official World Health Organization Information App. This app displays the latest news, events, features and breaking updates on outbreaks.
<br>
<br>WHO works worldwide to promote health, keep the world safe, and serve the vulnerable.
<br>Our goal is to ensure that a billion more people have universal health coverage, to protect a billion more people from health emergencies, and provide a further billion people with better health and well-being.
### User interface
The developers of the app provide the following screenshots in the Google play store.
| | | |
|:-------------------------:|:-------------------------:|:-------------------------:|
| <img src="screenshot_1.png" alt="screenshot" width="300"/> | <img src="screenshot_2.png" alt="screenshot" width="300"/> | <img src="screenshot_3.png" alt="screenshot" width="300"/> |
| <img src="screenshot_4.png" alt="screenshot" width="300"/> | <img src="screenshot_5.png" alt="screenshot" width="300"/> | <img src="screenshot_6.png" alt="screenshot" width="300"/> |
| <img src="screenshot_7.png" alt="screenshot" width="300"/> | <img src="screenshot_8.png" alt="screenshot" width="300"/> | <img src="screenshot_9.png" alt="screenshot" width="300"/> |
| <img src="screenshot_10.png" alt="screenshot" width="300"/> | <img src="screenshot_11.png" alt="screenshot" width="300"/> | <img src="screenshot_12.png" alt="screenshot" width="300"/> |
| <img src="screenshot_13.png" alt="screenshot" width="300"/> | <img src="screenshot_14.png" alt="screenshot" width="300"/> | <img src="screenshot_15.png" alt="screenshot" width="300"/> |
| <img src="screenshot_16.png" alt="screenshot" width="300"/> | <img src="screenshot_17.png" alt="screenshot" width="300"/> | <img src="screenshot_18.png" alt="screenshot" width="300"/> |
| <img src="screenshot_19.png" alt="screenshot" width="300"/> | <img src="screenshot_20.png" alt="screenshot" width="300"/> | <img src="screenshot_21.png" alt="screenshot" width="300"/> |
| <img src="screenshot_22.png" alt="screenshot" width="300"/> | <img src="screenshot_23.png" alt="screenshot" width="300"/> | <img src="screenshot_24.png" alt="screenshot" width="300"/> |
## Development team
In the following we report the main information provided by the development team in the Google play store.
| | |
|-------------------------|-------------------------|
| **Developer** | World Health Organization |
| **Website** | [https://www.who.int/](https://www.who.int/) |
| **Email** | [email protected] |
| **Physical address** | [Avenu Appia 20 1211 Geneva Switzerland](https://www.google.com/maps/search/Avenu%20Appia%2020%201211%20Geneva%20Switzerland) (Google Maps) |
| **Other developed apps** | [https://play.google.com/store/apps/developer?id=World+Health+Organization](https://play.google.com/store/apps/developer?id=World+Health+Organization) |
## Android support
| | |
|-------------------------|-------------------------|
| **Declared target Android version** | Pie, version 9 (API level 28) |
| **Effective target Android version** | Pie, version 9 (API level 28) |
| **Minimum supported Android version** | Jelly Bean, version 4.2.x (API level 17) |
| **Maximum target Android version** | - |
The larger the difference between the minimum and maximum supported Android versions, the better. A larger difference means a wider audience. For example, old phones have a very low Android version, so a high minimum supported Android version means that the app cannot be used by users with old phones, thus leading to accessibility problems.
## Requested permissions
In the following we report the complete list of the permissions requested by the app.
| **Permission** | **Protection level** | **Description** |
|-------------------------|-------------------------|-------------------------|
**android.permission<br>ACCESS_NETWORK_STATE** | Normal | Allows applications to access information about networks.
**android.permission<br>INTERNET** | Normal | Allows applications to open network sockets.
**android.permission<br>READ_CALENDAR** | :warning:**Dangerous** | Allows an application to read the user's calendar data.
**android.permission<br>READ_EXTERNAL_STORAGE** | :warning:**Dangerous** | Allows an application to read from external storage.
**android.permission<br>WAKE_LOCK** | Normal | Allows using PowerManager WakeLocks to keep processor from sleeping or screen from dimming.
**android.permission<br>WRITE_CALENDAR** | :warning:**Dangerous** | Allows an application to write the user's calendar data.
**android.permission<br>WRITE_EXTERNAL_STORAGE** | :warning:**Dangerous** | Allows an application to write to external storage.
**com.google.android.c2dm.permission<br>RECEIVE** | - | -
**com.google.android.finsky.permission<br>BIND_GET_INSTALL_REFERRER_SERVICE** | - | -
## Mentioned servers
| **Server** | **Registrant** | **Registrant country** | **Creation date** |
|-------------------------|-------------------------|-------------------------|-------------------------|
| adobe.com | Adobe Inc. | :us: US | 1986-11-17 05:00:00 |
| googlesyndication.com | Google LLC | :us: US | 2003-01-21 06:17:24 |
| google.com | Google LLC | :us: US | 1997-09-15 04:00:00 |
| app-measurement.com | Google LLC | :us: US | 2015-06-19 20:13:31 |
| googleapis.com | Google LLC | :us: US | 2005-01-25 17:52:26 |
| googleadservices.com | Google LLC | :us: US | 2003-06-19 16:34:53 |
## Security analysis
Below we report the main security warnings raised by our execution of the [Androwarn](https://github.com/maaaaz/androwarn) security analysis tool.
**Connection interfaces exfiltration**
> - This application reads details about the currently active data network<br>
> - This application tries to find out if the currently active data network is metered<br>
**Suspicious connection establishment**
> - This application opens a Socket and connects it to the remote address 'Lfi/iki/elonen/NanoHTTPD$ResponseException;' on the 'N/A' port <br>
> - This application opens a Socket and connects it to the remote address 'NanoHttpd Shutdown' on the 'N/A' port <br>
**Code execution**
> - This application loads a native library: 'NativeScript'<br>
> - This application executes a UNIX command containing this argument: '2'<br>
## User ratings and reviews
Below we provide information about how end users are reacting to the app in terms of ratings and reviews in the Google Play store.
### Ratings
The WHO Info app has been installed more than **100000** times. At this time, **955** users have rated the app, and its average score is **3.77**. Below we show the distribution of the ratings across the usual star-based rating of Google Play
:star::star::star::star::star:: 565
:star::star::star::star:: 57
:star::star::star:: 95
:star::star:: 28
:star:: 210
### Reviews
#### 5-star reviews
<p align="center">
<img src="5_star_reviews_wordcloud.png" alt="org.who.infoapp 5 reviews"/>
</p>
> Well-made app; the only flaw is that it has few synchronizations with national and epidemiology bodies<br> :date: __2020-10-09 11:58:41__
> Uti<br> :date: __2020-06-16 07:00:29__
> Very useful for me, since I travel a lot.<br> :date: __2020-04-24 06:42:36__
> For me this app is very useful in this difficult moment, but we are very strong and we will win; for now, though, let's stay at home<br> :date: __2020-04-22 14:59:40__
> I'll try it to see if it works well<br> :date: __2020-04-22 10:50:07__
#### 4-star reviews
<p align="center">
<img src="4_star_reviews_wordcloud.png" alt="org.who.infoapp 4 reviews"/>
</p>
> for those who can't read English, just long-press the text, highlight it, and translate it<br> :date: __2020-04-26 21:44:50__
#### 3-star reviews
<p align="center">
<img src="3_star_reviews_wordcloud.png" alt="org.who.infoapp 3 reviews"/>
</p>
> Is it really possible that everyone who installed the app doesn't know English? (Or at least Italian, since you speak it — though it wouldn't seem so.) There is more ignorance than necessary in the criticism.<br> :date: __2020-06-30 12:43:55__
#### 2-star reviews
<p align="center">
</p>
No recent reviews available with 2 stars.
#### 1-star reviews
<p align="center">
<img src="1_star_reviews_wordcloud.png" alt="org.who.infoapp 1 reviews"/>
</p>
> It's good for nothing. I'd like to give it less than one star but you can't 😄😄😄<br> :date: __2020-09-05 08:23:52__
> It doesn't recognize Taiwan and Hong Kong when searching for the desired country. This indicates that the WHO has sold out to China. Bad, bad; as always, the Chinese government has no soul.<br> :date: __2020-07-23 15:04:05__
> Since many people don't know English (myself included), it would be more appropriate to offer it in Italian<br> :date: __2020-07-01 16:19:18__
> It doesn't open; it gives me an error<br> :date: __2020-06-18 14:35:44__
> It doesn't open; also, since not everyone knows English, it should be offered in ITALIAN.<br> :date: __2020-05-31 15:19:43__
> Useless...<br> :date: __2020-05-25 12:15:14__
> I don't know English — couldn't we have it in Italian, given that we're in Italy?<br> :date: __2020-04-24 23:15:26__
> It doesn't work<br> :date: __2020-04-19 18:22:27__
# Component Development
- [Introduction](#introduction)
- [Component class definition](#component-class-definition)
- [Component registration](#component-registration)
- [Component properties](#component-properties)
- [Dropdown properties](#dropdown-properties)
- [Page list properties](#page-list-properties)
- [Routing parameters](#routing-parameters)
- [Handling the page execution cycle](#page-cycle)
- [Page execution life cycle handlers](#page-cycle-handlers)
- [Component initialization](#page-cycle-init)
- [Halting with a response](#page-cycle-response)
- [AJAX handlers](#ajax-handlers)
- [Default markup](#default-markup)
- [Component partials](#component-partials)
- [Referencing "self"](#referencing-self)
- [Unique identifier](#unique-identifier)
- [Rendering partials from code](#render-partial-method)
- [Injecting page assets with components](#component-assets)
<a name="introduction"></a>
## Introduction
Component files and directories reside in the **/components** subdirectory of a plugin directory. Each component has a PHP file defining the component class and an optional component partials directory. The component partials directory name matches the component class name written in lowercase. An example of a component directory structure:
plugins/
acme/
myplugin/
components/
componentname/ <=== Component partials directory
default.htm <=== Component default markup (optional)
ComponentName.php <=== Component class file
Plugin.php
Components must be [registered in the Plugin registration class](#component-registration) with the `registerComponents()` method.
<a name="component-class-definition"></a>
## Component class definition
The **component class file** defines the component functionality and [component properties](#component-properties). The component class file name should match the component class name. Component classes should extend the `\Cms\Classes\ComponentBase` class. The component form the next example should be defined in the plugins/acme/blog/components/BlogPosts.php file.
namespace Acme\Blog\Components;
class BlogPosts extends \Cms\Classes\ComponentBase
{
public function componentDetails()
{
return [
'name' => 'Blog Posts',
'description' => 'Displays a collection of blog posts.'
];
}
// This array becomes available on the page as {{ component.posts }}
public function posts()
{
return ['First Post', 'Second Post', 'Third Post'];
}
}
The `componentDetails()` method is required. The method should return an array with two keys: `name` and `description`. The name and description are displayed in the CMS back-end user interface.
When this [component is attached to a page or layout](../cms/components), the class properties and methods become available on the page through the component variable, whose name matches the component short name or the alias. For example, if the BlogPosts component from the previous example was defined on a page with its short name:
url = "/blog"
[blogPosts]
==
You would be able to access its `posts()` method through the `blogPosts` variable. Note that Twig supports the property notation for methods, so that you don't need to use brackets.
{% for post in blogPosts.posts %}
{{ post }}
{% endfor %}
<a name="component-registration"></a>
### Component registration
Components must be registered by overriding the `registerComponents()` method inside the [Plugin registration class](registration#registration-file). This tells the CMS about the Component and provides a **short name** for using it. An example of registering a component:
public function registerComponents()
{
return [
'October\Demo\Components\Todo' => 'demoTodo'
];
}
This will register the Todo component class with the default alias name **demoTodo**. More information on using components can be found at the [CMS components article](../cms/components).
<a name="component-properties"></a>
## Component properties
When you add a component to a page or layout you can configure it using properties. The properties are defined with the `defineProperties()` method of the component class. The next example shows how to define a component property:
public function defineProperties()
{
return [
'maxItems' => [
'title' => 'Max items',
'description' => 'The most amount of todo items allowed',
'default' => 10,
'type' => 'string',
'validationPattern' => '^[0-9]+$',
'validationMessage' => 'The Max Items property can contain only numeric symbols'
]
];
}
The method should return an array with the property keys as indexes and property parameters as values. The property keys are used for accessing the component property values inside the component class. The property parameters are defined with an array with the following keys:
Key | Description
------------- | -------------
**title** | required, the property title, it is used by the component Inspector in the CMS back-end.
**description** | required, the property description, it is used by the component Inspector in the CMS back-end.
**default** | optional, the default property value to use when the component is added to a page or layout in the CMS back-end.
**type** | optional, specifies the property type. The type defines how the property is displayed in the Inspector. Currently supported types are **string**, **checkbox** and **dropdown**. Default value: **string**.
**validationPattern** | optional Regular Expression to use when a user enters the property value in the Inspector. The validation can be used only with **string** properties.
**validationMessage** | optional error message to display if the validation fails.
**required** | optional, forces the field to be filled. Uses **validationMessage** when left empty.
**placeholder** | optional placeholder for string and dropdown properties.
**options** | optional array of options for dropdown properties.
**depends** | an array of property names a dropdown property depends on. See the [dropdown properties](#dropdown-properties) below.
**group** | an optional group name. Groups create sections in the Inspector simplifying the user experience. Use a same group name in multiple properties to combine them.
**showExternalParam** | specifies the visibility of the External Parameter editor for the property in the Inspector. Default value: **true**.
Inside the component you can read the property value with the `property()` method:
$this->property('maxItems');
If the property value is not defined, you can supply the default value as a second parameter of the `property()` method:
$this->property('maxItems', 6);
You can also load all the properties as array:
$properties = $this->getProperties();
<a name="dropdown-properties"></a>
### Dropdown properties
The option list for dropdown properties can be static or dynamic. Static options are defined with the `options` element of the property definition. Example:
public function defineProperties()
{
return [
'units' => [
'title' => 'Units',
'type' => 'dropdown',
'default' => 'imperial',
'placeholder' => 'Select units',
'options' => ['metric'=>'Metric', 'imperial'=>'Imperial']
]
];
}
The list of options could be fetched dynamically from the server when the Inspector is displayed. If the `options` parameter is omitted in a dropdown property definition the option list is considered dynamic. The component class must define a method returning the option list. The method should have a name in the following format: `get*Property*Options()`, where **Property** is the property name, for example: `getCountryOptions`. The method returns an array of options with the option values as keys and option labels as values. Example of a dynamic dropdown list definition:
public function defineProperties()
{
return [
'country' => [
'title' => 'Country',
'type' => 'dropdown',
'default' => 'us'
]
];
}
public function getCountryOptions()
{
return ['us'=>'United states', 'ca'=>'Canada'];
}
Dynamic dropdown lists can depend on other properties. For example, the state list could depend on the selected country. The dependencies are declared with the `depends` parameter in the property definition. The next example defines two dynamic dropdown properties, where the state list depends on the country:
public function defineProperties()
{
return [
'country' => [
'title' => 'Country',
'type' => 'dropdown',
'default' => 'us'
],
'state' => [
'title' => 'State',
'type' => 'dropdown',
'default' => 'dc',
'depends' => ['country'],
'placeholder' => 'Select a state'
]
];
}
In order to load the state list you should know what country is currently selected in the Inspector. The Inspector POSTs all property values to the `getPropertyOptions()` handler, so you can do the following:
public function getStateOptions()
{
$countryCode = Request::input('country'); // Load the country property value from POST
$states = [
'ca' => ['ab'=>'Alberta', 'bc'=>'British columbia'],
'us' => ['al'=>'Alabama', 'ak'=>'Alaska']
];
return $states[$countryCode];
}
<a name="page-list-properties"></a>
### Page list properties
Sometimes components need to create links to the website pages. For example, the blog post list contains links to the blog post details page. In this case the component should know the post details page file name (then it can use the [page Twig filter](../cms/markup#page-filter)). October includes a helper for creating dynamic dropdown page lists. The next example defines the postPage property which displays a list of pages:
public function defineProperties()
{
return [
'postPage' => [
'title' => 'Post page',
'type' => 'dropdown',
'default' => 'blog/post'
]
];
}
public function getPostPageOptions()
{
return Page::sortBy('baseFileName')->lists('baseFileName', 'baseFileName');
}
<a name="routing-parameters"></a>
## Routing parameters
Components can directly access routing parameter values defined the [URL of the page](../cms/pages#url-syntax).
// Returns the URL segment value, eg: /page/:post_id
$postId = $this->param('post_id');
In some cases a [component property](#component-properties) may act as a hard coded value or reference the value from the URL.
This hard-coded example shows a blog post with the identifier `2` being used:
url = "/blog/hard-coded-page"
[blogPost]
id = "2"
Alternatively the value can be referenced dynamically from the page URL using an [external property value](../cms/components#external-property-values):
url = "/blog/:my_custom_parameter"
[blogPost]
id = "{{ :my_custom_parameter }}"
In both cases the value can be retrieved by using the `property()` method:
$this->property('id');
If you need to access the routing parameter name:
$this->paramName('id'); // Returns "my_custom_parameter"
<a name="page-cycle"></a>
## Handling the page execution cycle
Components can be involved in the Page execution cycle events by overriding the `onRun()` method in the component class. The CMS controller executes this method every time when the page or layout loads. Inside the method you can inject variables to the Twig environment through the `page` property:
public function onRun()
{
// This code will be executed when the page or layout is
// loaded and the component is attached to it.
$this->page['var'] = 'value'; // Inject some variable to the page
}
<a name="page-cycle-handlers"></a>
### Page execution life cycle handlers
When a page loads, October executes handler functions that could be defined in the layout and page [PHP section](../cms/themes#php-section) and component classes. The sequence the handlers are executed is following:
1. Layout `onInit()` function.
1. Page `onInit()` function.
1. Layout `onStart()` function.
1. Layout components `onRun()` method.
1. Layout `onBeforePageStart()` function.
1. Page `onStart()` function.
1. Page components `onRun()` method.
1. Page `onEnd()` function.
1. Layout `onEnd()` function.
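For example, several of these steps can be traced by defining handler functions in a page file's PHP section. The file below is an illustrative sketch only — the URL and variable names are invented:

    url = "/lifecycle-demo"
    layout = "default"
    ==
    function onInit()
    {
        // Step 2: page onInit(), runs after the layout's onInit()
    }

    function onStart()
    {
        // Step 6: page onStart(), runs after the layout components' onRun()
        $this['greeting'] = 'set in onStart';
    }

    function onEnd()
    {
        // Step 8: page onEnd(), runs after the page components' onRun()
    }
    ==
    <p>{{ greeting }}</p>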
<a name="page-cycle-init"></a>
### Component initialization
Sometimes you may wish to execute code at the time the component class is first instantiated. You may override the `init` method in the component class to handle any initialization logic; it executes before AJAX handlers and before the page execution life cycle. For example, this method can be used for attaching another component to the page dynamically.
public function init()
{
$this->addComponent('Acme\Blog\Components\BlogPosts', 'blogPosts');
}
<a name="page-cycle-response"></a>
### Halting with a response
Like all methods in the [page execution life cycle](../cms/layouts#layout-life-cycle), if the `onRun()` method in a component returns a value, this will stop the cycle at this point and return the response to the browser. Here we return an access denied message using the `Response` facade:
public function onRun()
{
if (true) {
return Response::make('Access denied!', 403);
}
}
<a name="ajax-handlers"></a>
## AJAX handlers
Components can host AJAX event handlers. They are defined in the component class exactly like they can be defined in the [page or layout code](../ajax/handlers). An example AJAX handler method defined in a component class:
public function onAddItem()
{
$value1 = post('value1');
$value2 = post('value2');
$this->page['result'] = $value1 + $value2;
}
If the alias for this component was *demoTodo* this handler can be accessed by `demoTodo::onAddItems`. Please see the [Calling AJAX handlers defined in components](../ajax/handlers#calling-handlers) article for details about using AJAX with components.
<a name="default-markup"></a>
## Default markup
All components can come with default markup that is used when including it on a page with the `{% component %}` tag, although this is optional. Default markup is kept inside the **component partials directory**, which has the same name as the component class in lower case.
The default component markup should be placed in a file named **default.htm**. For example, the default markup for the Demo ToDo component is defined in the file **/plugins/october/demo/components/todo/default.htm**. It can then be inserted anywhere on the page by using the `{% component %}` tag:
url = "/todo"
[demoTodo]
==
{% component 'demoTodo' %}
The default markup can also take parameters that override the [component properties](#component-properties) at the time they are rendered.
{% component 'demoTodo' maxItems="7" %}
These properties will not be available in the `onRun()` method since they are established after the page cycle has completed. Instead they can be processed by overriding the `onRender()` method in the component class. The CMS controller executes this method before the default markup is rendered.
public function onRender()
{
// This code will be executed before the default component
// markup is rendered on the page or layout.
$this->page['var'] = 'Maximum items allowed: ' . $this->property('maxItems');
}
<a name="component-partials"></a>
## Component partials
In addition to the default markup, components can also offer additional partials that can be used on the front-end or within the default markup itself. If the Demo ToDo component had a **pagination** partial, it would be located in **/plugins/october/demo/components/todo/pagination.htm** and displayed on the page using:
{% partial 'demoTodo::pagination' %}
A relaxed, contextual method can also be used. If called inside a component partial, it refers directly to that component. If called inside a theme partial, it scans all components used on the page or layout for a matching partial name and uses that.
{% partial '@pagination' %}
Multiple components can share partials by placing the partial file in a directory called **components/partials**. The partials found in this directory are used as a fallback when the usual component partial cannot be found. For example, a shared partial located in **/plugins/acme/blog/components/partials/shared.htm** can be displayed on the page by any component using:
{% partial '@shared' %}
<a name="referencing-self"></a>
### Referencing "self"
Components can reference themselves inside their partials by using the `__SELF__` variable. By default it will return the component's short name or [alias](../cms/components#aliases).
<form data-request="{{__SELF__}}::onEventHandler">
[...]
</form>
Components can also reference their own properties.
{% for item in __SELF__.items() %}
{{ item }}
{% endfor %}
If inside a component partial you need to render another component partial concatenate the `__SELF__` variable with the partial name:
{% partial __SELF__~"::screenshot-list" %}
<a name="unique-identifier"></a>
### Unique identifier
If an identical component is called twice on the same page, an `id` property can be used to reference each instance.
{{__SELF__.id}}
The ID is unique each time the component is displayed.
<!-- ID: demoTodo527c532e9161b -->
{% component 'demoTodo' %}
<!-- ID: demoTodo527c532ec4c33 -->
{% component 'demoTodo' %}
<a name="render-partial-method"></a>
## Rendering partials from code
You may programmatically render component partials inside the PHP code using the `renderPartial` method. This will check the component for the partial named `component-partial.htm` and return the result as a string. The second parameter is used for passing view variables.
$content = $this->renderPartial('component-partial.htm');
$content = $this->renderPartial('component-partial.htm', [
'name' => 'John Smith'
]);
For example, to render a partial as a response to an [AJAX handler](../ajax/handlers):
function onGetTemplate()
{
return ['#someDiv' => $this->renderPartial('component-partial.htm')];
}
Another example could be overriding the entire page view response by returning a value from the `onRun` [page cycle method](#page-cycle). This code will specifically return an XML response using the `Response` facade:
public function onRun()
{
$content = $this->renderPartial('default.htm');
return Response::make($content)->header('Content-Type', 'text/xml');
}
<a name="component-assets"></a>
## Injecting page assets with components
Components can inject assets (CSS and JavaScript files) into the pages or layouts they are attached to. Use the controller's `addCss()` and `addJs()` methods to add assets to the CMS controllers. This is typically done in the component's `onRun()` method. Please read more details about [injecting assets in the Pages article](../cms/page#injecting-assets). Example:
public function onRun()
{
$this->addJs('/plugins/acme/blog/assets/javascript/blog-controls.js');
}
If the path specified in the `addCss()` and `addJs()` method argument begins with a slash (/) then it will be relative to the website root. If the asset path does not begin with a slash then it is relative to the component directory.
# Get measure of OCR quality for each article and group by year
* Query module: `defoe.papers.queries.ocr_quality_by_year`
* Configuration file: None
* Result format:
```
<YEAR>: [<QUALITY>, ...]
...
```
## Sample results
Query over `Part 1/0000164- The Courier and Argus/1907/0000164_19070603/0000164_19070603.xml` and `Part 1/0000164- The Courier and Argus/1915/0000164_19151123/0000164_19151123.xml`:
```
1907: [91.22, 85.82, 78.1, 76.0, 67.64, 75.34, 82.83, 75.49, 75.87, 78.33, 82.74,...]
1915: [90.13, 80.48, 85.55, 82.36, 69.57, 82.06, 66.46, 74.6, 83.75, 83.92, 82.47,...]
```
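Since the raw result maps each year to a list of per-article quality scores, a small post-processing step can reduce it to an average per year. The snippet below is an illustrative sketch that assumes the query output has already been parsed into a Python dict (the sample values are truncated from the output above):

```python
def mean_quality_by_year(results):
    """Average the per-article OCR quality scores for each year."""
    return {year: round(sum(scores) / len(scores), 2)
            for year, scores in results.items()}

# Truncated sample values from the query output above
results = {
    1907: [91.22, 85.82, 78.1, 76.0, 67.64],
    1915: [90.13, 80.48, 85.55, 82.36, 69.57],
}
print(mean_quality_by_year(results))
```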
# Linked List: Kth from End
## Challenge
Write a method for the Linked List class which takes a number, k, as a parameter. Return the node that is k from the end of the linked list. You have access to the Node class and all the properties on the Linked List class as well as the methods created in previous challenges.
Examples:

## Solution

## Collaboration
For this lab, we worked with Jen and Ovi.
# Media Controller
You can establish communication between a media controller server and a media controller client. You can send commands from the client to the server, and the client can request updated metadata and playback information from the server.
The main features of the Media Controller API include:
- Updating and retrieving information
You can [update the metadata and playback information](#get_media) on the server side, and then retrieve the metadata and playback information on the client side.
The media controller server provides current information about the registered application that you can send to the client.
When the client requests the information, the media controller server updates the state information of an active application before transferring the data. If the application is not running when the client request arrives, the media controller server transfers the latest information.
- Sending and processing commands
You can [send a command](#send_media) to the server from the client side, and then process the command on the server side.
The client can request [server state](#serverstate) or [server metadata](#servermetadata) information from the server, and receive it through a callback.
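The interaction described above is a simple callback pattern: the server registers a handler, and commands sent by the client are delivered to that handler. As a language-agnostic illustration only (plain Python, not the Tizen C API, which delivers commands over IPC), the flow looks roughly like this:

```python
# Toy model of the media-controller flow; a direct method call stands in
# for the IPC transport that the real Tizen framework provides.
class MediaControllerServer:
    def __init__(self):
        self.playback_state = "STOPPED"
        self._on_command = None

    def set_command_received_cb(self, callback):
        # analogous to mc_server_set_playback_state_command_received_cb()
        self._on_command = callback

    def receive(self, client_name, state):
        if self._on_command:
            self._on_command(client_name, state)

class MediaControllerClient:
    def __init__(self, server):
        self.server = server

    def send_playback_state_command(self, name, state):
        # analogous to mc_client_send_playback_state_command()
        self.server.receive(name, state)

server = MediaControllerServer()

def command_received_cb(client_name, state):
    print(f"Client: {client_name}, requested state: {state}")
    server.playback_state = state  # the server decides whether to honor it

server.set_command_received_cb(command_received_cb)
client = MediaControllerClient(server)
client.send_playback_state_command("my-remote", "PLAYING")
```

The real APIs follow the same shape: the server owns the state, and the client only requests changes.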
## Prerequisites
To enable your application to use the media controller functionality:
- To use the media controller server:
1. To use the functions and data types of the Media Controller Server API (in [mobile](../../../../org.tizen.native.mobile.apireference/group__CAPI__MEDIA__CONTROLLER__SERVER__MODULE.html) and [wearable](../../../../org.tizen.native.wearable.apireference/group__CAPI__MEDIA__CONTROLLER__SERVER__MODULE.html) applications), include the `<media_controller_server.h>` header file in your application:
```
#include <media_controller_server.h>
```
2. To work with the Media Controller Server API, define a handle variable for the media controller server:
```
static mc_server_h g_server_h = NULL;
```
The server updates the metadata and playback information, and processes the requests and commands sent by the client.
This guide uses a global variable for the handle.
- To use the media controller client:
1. To use the functions and data types of the Media Controller Client API (in [mobile](../../../../org.tizen.native.mobile.apireference/group__CAPI__MEDIA__CONTROLLER__CLIENT__MODULE.html) and [wearable](../../../../org.tizen.native.wearable.apireference/group__CAPI__MEDIA__CONTROLLER__CLIENT__MODULE.html) applications), include the `<media_controller_client.h>` header file in your application:
```
#include <media_controller_client.h>
```
2. To work with the Media Controller Client API, define a handle variable for the media controller client:
```
static mc_client_h g_client_h = NULL;
```
The client requests metadata and playback information from the server, and sends playback commands to server.
This guide uses a global variable for the handle.
<a name="get_media"></a>
## Updating and Retrieving Information
To update the metadata and playback information on the server side:
1. Create the media controller server handle using the `mc_server_create()` function:
```
ret = mc_server_create(&g_server_h);
```
2. Set the metadata or playback information to be updated using the corresponding `mc_server_set_XXX()` function, and then update the metadata or playback information using the corresponding `mc_server_update_XXX()` function.
For example, to update the playback state information, set the information to be updated using the `mc_server_set_playback_state()` function, and then update the information using the `mc_server_update_playback_info()` function:
```
ret = mc_server_set_playback_state(g_server_h, MC_PLAYBACK_STATE_PLAYING);
ret = mc_server_update_playback_info(g_server_h);
```
3. When no longer needed, destroy the media controller server handle using the `mc_server_destroy()` function:
```
mc_server_destroy(g_server_h);
```
To retrieve the metadata and playback information on the client side:
1. Create the media controller client handle using the `mc_client_create()` function:
```
ret = mc_client_create(&g_client_h);
```
2. Retrieve the server name using the `mc_client_get_latest_server_info()` function:
```
char *server_name = NULL;
mc_server_state_e server_state;
ret = mc_client_get_latest_server_info(g_client_h, &server_name, &server_state);
dlog_print(DLOG_DEBUG, LOG_TAG, "Server Name: %s, Server state: %d\n", server_name, server_state);
```
3. Retrieve the metadata or playback information from the server using the corresponding `mc_client_get_server_XXX()` function. Use the server name retrieved in the previous step to identify the server.
For example, to retrieve the playback information from the server, use the `mc_client_get_server_playback_info()` function:
```
mc_playback_h playback = NULL;
mc_playback_states_e playback_state;
ret = mc_client_get_server_playback_info(g_client_h, server_name, &playback);
ret = mc_client_get_playback_state(playback, &playback_state);
dlog_print(DLOG_DEBUG, LOG_TAG, "Playback State: %d\n", playback_state);
```
The `mc_client_get_playback_state()` function retrieves the playback state from the playback information returned by the `mc_client_get_server_playback_info()` function.
4. When no longer needed, destroy the media controller client handle using the `mc_client_destroy()` function:
```
mc_client_destroy(g_client_h);
```
<a name="send_media"></a>
## Sending and Processing Commands
To send a command to the server from the client side:
1. Create the media controller client handle using the `mc_client_create()` function:
```
ret = mc_client_create(&g_client_h);
```
2. Retrieve the server name using the `mc_client_get_latest_server_info()` function:
```
char *server_name = NULL;
mc_server_state_e server_state;
ret = mc_client_get_latest_server_info(g_client_h, &server_name, &server_state);
dlog_print(DLOG_DEBUG, LOG_TAG, "Server Name: %s, Server state: %d\n", server_name, server_state);
```
3. Send the command to the server using the corresponding `mc_client_send_XXX()` function. Use the server name retrieved in the previous step to identify the server.
For example, to send a playback state change command to the server, use the `mc_client_send_playback_state_command()` function with the new state defined in the third parameter:
```
mc_playback_h playback = NULL;
mc_playback_states_e playback_state = MC_PLAYBACK_STATE_PLAYING;
ret = mc_client_send_playback_state_command(g_client_h, server_name, playback_state);
```
If you want to define your own commands to send to the server, use the `mc_client_send_custom_command()` function.
4. When no longer needed, destroy the media controller client handle using the `mc_client_destroy()` function:
```
mc_client_destroy(g_client_h);
```
To process the received command on the server side:
1. Create the media controller server handle using the `mc_server_create()` function:
```
ret = mc_server_create(&g_server_h);
```
2. Define the callback that is invoked when the server receives the command.
For example, to define a callback for playback state change commands:
```
void
command_received_cb(const char* client_name, mc_playback_states_e state, void *user_data)
{
dlog_print(DLOG_DEBUG, LOG_TAG, "Client Name: %s, Playback state: %d\n", client_name, state);
}
```
3. Register the callback:
- To register a callback for playback state change commands, use the `mc_server_set_playback_state_command_received_cb()` function.
- To register a callback for a custom command, use the `mc_server_set_custom_command_received_cb()` function.
For example, to register a callback for playback state change commands:
```
ret = mc_server_set_playback_state_command_received_cb(g_server_h, command_received_cb, NULL);
```
4. When no longer needed, destroy the media controller server handle using the `mc_server_destroy()` function:
```
mc_server_destroy(g_server_h);
```
<a name="serverstate"></a>
## Media Controller Server State Attributes
The following table lists all the server state attributes the client can receive.
**Table: Media controller server state attributes**
| Attribute | Description |
|----------------------------------|------------------------------------------|
| **Server states** | |
| `MC_SERVER_ACTIVATE` | Requested media controller server is active |
| `MC_SERVER_DEACTIVATE` | Requested media controller server is not active |
| **Playback states** | |
| `MC_PLAYBACK_STATE_NONE` | No history of media playback |
| `MC_PLAYBACK_STATE_PLAYING` | Playback state of playing |
| `MC_PLAYBACK_STATE_PAUSED` | Playback state of paused |
| `MC_PLAYBACK_STATE_STOPPED` | Playback state of stopped |
| `MC_PLAYBACK_STATE_NEXT_FILE` | Playback state of next file |
| `MC_PLAYBACK_STATE_PREV_FILE` | Playback state of previous file |
| `MC_PLAYBACK_STATE_FAST_FORWARD` | Playback state of fast forward |
| `MC_PLAYBACK_STATE_REWIND` | Playback state of rewind |
| **Shuffle mode states** | |
| `MC_SHUFFLE_MODE_ON` | Shuffle mode is on |
| `MC_SHUFFLE_MODE_OFF` | Shuffle mode is off |
| **Repeat mode states** | |
| `MC_REPEAT_MODE_ON` | Repeat mode is on |
| `MC_REPEAT_MODE_OFF` | Repeat mode is off |
<a name="servermetadata"></a>
## Media Controller Server Metadata Attributes
The following table lists all the server metadata attributes the client can receive.
**Table: Media controller server metadata attributes**
| Attribute | Description |
|-----------------------------|------------------------------------------|
| `MC_META_MEDIA_TITLE` | Title of the latest content in the media controller server |
| `MC_META_MEDIA_ARTIST` | Artist of the latest content in the media controller server |
| `MC_META_MEDIA_ALBUM` | Album name of the latest content in the media controller server |
| `MC_META_MEDIA_AUTHOR` | Author of the latest content in the media controller server |
| `MC_META_MEDIA_GENRE` | Genre of the latest content in the media controller server |
| `MC_META_MEDIA_DURATION` | Duration of the latest content in the media controller server |
| `MC_META_MEDIA_DATE` | Date of the latest content in the media controller server |
| `MC_META_MEDIA_COPYRIGHT` | Copyright of the latest content in the media controller server |
| `MC_META_MEDIA_DESCRIPTION` | Description of the latest content in the media controller server |
| `MC_META_MEDIA_TRACK_NUM` | Track number of the latest content in the media controller server |
| `MC_META_MEDIA_PICTURE` | Album art of the latest content in the media controller server |
## Related Information
* Dependencies
- Tizen 2.4 and Higher for Mobile
- Tizen 3.0 and Higher for Wearable
# Run-Through with the Team
Just like a play or a wedding, it’s important to rehearse your schedule once with all team members.
* Create and distribute a [run-of-show document](https://docs.google.com/spreadsheets/d/1e2B4-AYUU3Y0xFmiTGLYfRosP2IdXxF1Ud5GvGh-6cE/edit?usp=sharing) with:
* A timeline including the official hackathon schedule and a secondary staff schedule denoting what needs to happen behind the scenes — and when
* Confirmed roles and report times for each team member. Sample responsibilities include:
* Checking in guests
* Greeting and directing attendees
* Running social media
* Setting out food
* Managing transportation
* Guiding and attending to sponsors and press
* Running AV
* Restocking supplies and picking up garbage
Any others?
* Physically walk through the run-of-show document in the event space.
## MLH Tips
* Keep in mind most organizers, especially team leaders, will be responsible for more than one thing
* Hackathon schedules inevitably change. The social media point person should be in charge of communicating schedule updates via Twitter and updates to the Facebook invite as soon as they happen, as well as answering attendee questions in real time.
* Assign at least one organizer to make sure the event is going well and put out fires as needed
## Resources
* [Example Run of Show](https://docs.google.com/spreadsheets/d/1e2B4-AYUU3Y0xFmiTGLYfRosP2IdXxF1Ud5GvGh-6cE/edit?usp=sharing)
* [Winning with Volunteers - Hackcon EU](https://www.youtube.com/watch?v=59EYS0JkLWk&t=915s)
* [Student Hack Volunteer Guide](https://github.com/MLH/mlh-hackathon-organizer-guide/blob/master/Organizer-Resources/StudentHack%20Volunteer%20Guide.docx?raw=true)
* _If you have minors attending your event you must consult with your venue, school administrators, and/or a lawyer to make sure all necessary paperwork is taken care of_
---
title: 'Run SSIS packages with the Execute SSIS Package activity - Azure | Microsoft Docs'
description: This article describes how to run a SQL Server Integration Services (SSIS) package from an Azure Data Factory pipeline by using the Execute SSIS Package activity.
services: data-factory
documentationcenter: ''
ms.service: data-factory
ms.workload: data-services
ms.tgt_pltfrm: ''
ms.devlang: powershell
ms.topic: conceptual
ms.date: 03/19/2019
author: swinarko
ms.author: sawinark
ms.reviewer: douglasl
manager: craigg
ms.openlocfilehash: 7287dc2fccf461cf23c45202336e3d92bc5a40aa
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 04/23/2019
ms.locfileid: "66152992"
---
# <a name="run-an-ssis-package-with-the-execute-ssis-package-activity-in-azure-data-factory"></a>Run an SSIS package with the Execute SSIS Package activity in Azure Data Factory
This article describes how to run an SSIS package from an Azure Data Factory pipeline by using the Execute SSIS Package activity.
## <a name="prerequisites"></a>Prerequisites
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Create an Azure-SSIS Integration Runtime (IR) if you don't have one already, by following the step-by-step instructions in the [Tutorial: Deploy SSIS packages to Azure](tutorial-create-azure-ssis-runtime-portal.md).
## <a name="run-a-package-in-the-azure-portal"></a>Run a package in the Azure portal
In this section, you use the Azure Data Factory (ADF) user interface (UI) or app to create an ADF pipeline with an Execute SSIS Package activity that runs your SSIS package.
### <a name="create-a-pipeline-with-an-execute-ssis-package-activity"></a>Create a pipeline with an Execute SSIS Package activity
In this step, you use the ADF UI or app to create a pipeline. You add an Execute SSIS Package activity to the pipeline and configure it to run your SSIS package.
1. On your ADF overview or home page in the Azure portal, click the **Author & Monitor** tile to launch the ADF UI or app in a separate tab.

On the **Let's get started** page, click **Create pipeline**:

2. In the **Activities** toolbox, expand **General**, then drag and drop an **Execute SSIS Package** activity onto the pipeline designer surface.

3. On the **General** tab of the Execute SSIS Package activity, provide a name and description for the activity. Set optional timeout and retry values.

4. On the **Settings** tab of the Execute SSIS Package activity, select your Azure-SSIS IR that is associated with the SSISDB database where the package is deployed. If your package uses Windows authentication to access data stores, such as SQL Servers or file shares on premises, Azure Files, and so on, check the **Windows authentication** check box and enter the domain, username, and password for your package execution. If your package needs the 32-bit runtime to run, check the **32-Bit runtime** check box. For **Logging level**, select a predefined scope of logging for your package execution. Check the **Customized** check box if you want to enter a customized logging name instead. When your Azure-SSIS IR is running and the **Manual entries** check box is unchecked, you can browse and select your existing folders, projects, packages, and environments from SSISDB. Click the **Refresh** button to fetch newly added folders, projects, packages, and environments from SSISDB, so that they are available for browsing and selection.

When your Azure-SSIS IR is not running or the **Manual entries** check box is checked, you can enter your package and environment paths from SSISDB directly, in the following formats: `<folder name>/<project name>/<package name>.dtsx` and `<folder name>/<environment name>`.

5. On the **SSIS Parameters** tab of the Execute SSIS Package activity, when your Azure-SSIS IR is running and the **Manual entries** check box on the **Settings** tab is unchecked, the existing SSIS parameters in your selected project or package from SSISDB are displayed so that you can assign values to them. Otherwise, you can enter them one by one to assign values manually. Make sure that they exist and are correctly entered for your package execution to succeed. You can add dynamic content to their values by using expressions, functions, ADF system variables, and ADF pipeline parameters or variables. Alternatively, you can use secrets stored in your Azure Key Vault (AKV) as their values. To do so, click the **AZURE KEY VAULT** check box next to the relevant parameter, select or edit your existing AKV linked service or create a new one, and then select the secret name and version for your parameter value. When you create or edit your AKV linked service, you can select or edit your existing AKV or create a new one, but grant the ADF managed identity access to your AKV if you have not done so already. You can also enter your secrets directly in the following format: `<AKV linked service name>/<secret name>/<secret version>`.

6. On the **Connection Managers** tab of the Execute SSIS Package activity, when your Azure-SSIS IR is running and the **Manual entries** check box on the **Settings** tab is unchecked, the existing connection managers in your selected project or package from SSISDB are displayed so that you can assign values to their properties. Otherwise, you can enter them one by one to assign values to their properties manually. Make sure that they exist and are correctly entered for your package execution to succeed. You can add dynamic content to their property values by using expressions, functions, ADF system variables, and ADF pipeline parameters or variables. Alternatively, you can use secrets stored in your Azure Key Vault (AKV) as their property values. To do so, click the **AZURE KEY VAULT** check box next to the relevant property, select or edit your existing AKV linked service or create a new one, and then select the secret name and version for your property value. When you create or edit your AKV linked service, you can select or edit your existing AKV or create a new one, but grant the ADF managed identity access to your AKV if you have not done so already. You can also enter your secrets directly in the following format: `<AKV linked service name>/<secret name>/<secret version>`.

7. On the **Property Overrides** tab of the Execute SSIS Package activity, you can enter the paths of existing properties in your selected package from SSISDB one by one, to assign values to them manually. Make sure that they exist and are correctly entered for your package execution to succeed. For example, to override the value of a user variable, enter its path in the following format: `\Package.Variables[User::YourVariableName].Value`. You can also add dynamic content to their values by using expressions, functions, ADF system variables, and ADF pipeline parameters or variables.

8. To validate the pipeline configuration, click **Validate** on the toolbar. To close the **Pipeline Validation Report**, click **>>**.
9. To publish the pipeline to ADF, click **Publish All**.
### <a name="run-the-pipeline"></a>Run the pipeline
In this step, you trigger a pipeline run.
1. To trigger a pipeline run, click **Trigger** on the toolbar, and click **Trigger now**.

2. In the **Pipeline Run** window, select **Finish**.
### <a name="monitor-the-pipeline"></a>Monitor the pipeline
1. Switch to the **Monitor** tab on the left. You see the pipeline run and its status along with other information, such as the run start time. To refresh the view, click **Refresh**.

2. Click the **View Activity Runs** link in the **Actions** column. You see only one activity run, because the pipeline has only one activity (the Execute SSIS Package activity).

3. You can run the following **query** against the SSISDB database in your Azure SQL server to verify that the package executed.
```sql
select * from catalog.executions
```

4. También puede obtener el identificador de ejecución de SSISDB desde la salida de la ejecución de la actividad de canalización, y usar el identificador para comprobar registros de ejecución y mensajes de error más completos en SSMS.

### <a name="schedule-the-pipeline-with-a-trigger"></a>Programación de la canalización con un desencadenador
También puede crear un desencadenador programado para la canalización de manera que esta se ejecute según una programación (por hora, cada día, etc.). Para ver un ejemplo, consulte [Create a data factory - Data Factory UI](quickstart-create-data-factory-portal.md#trigger-the-pipeline-on-a-schedule) (Creación de una factoría de datos: interfaz de usuario de Data Factory).
## <a name="run-a-package-with-powershell"></a>Ejecución de un paquete con PowerShell
En esta sección, usará Azure PowerShell para crear una canalización de ADF con una actividad Ejecutar paquete de SSIS que ejecuta el paquete de SSIS.
Instale los módulos de Azure PowerShell más recientes siguiendo las instrucciones paso a paso que se indican en [Cómo instalar y configurar Azure PowerShell](/powershell/azure/install-az-ps).
### <a name="create-an-adf-with-azure-ssis-ir"></a>Creación de un ADF con Azure-SSIS IR
Puede usar un ADF existente que ya tenga aprovisionado Azure-SSIS IR o crear un ADF con Azure-SSIS IR siguiendo las instrucciones paso a paso del [Tutorial: Implementación de paquetes SSIS en Azure mediante PowerShell](https://docs.microsoft.com/azure/data-factory/tutorial-deploy-ssis-packages-azure-powershell).
### <a name="create-a-pipeline-with-an-execute-ssis-package-activity"></a>Creación de una canalización con una actividad Ejecutar paquete de SSIS
En este paso se crea una canalización con una actividad Ejecutar paquete de SSIS. La actividad ejecuta el paquete de SSIS.
1. Cree un archivo JSON con el nombre **RunSSISPackagePipeline.json** en la carpeta **C:\ADF\RunSSISPackage** con un contenido similar al del siguiente ejemplo:
> [!IMPORTANT]
> Reemplace los nombres de objeto, descripciones, rutas de acceso, valores de propiedades y parámetros, contraseñas y otros valores de variables antes de guardar el archivo.
```json
{
"name": "RunSSISPackagePipeline",
"properties": {
"activities": [{
"name": "mySSISActivity",
"description": "My SSIS package/activity description",
"type": "ExecuteSSISPackage",
"typeProperties": {
"connectVia": {
"referenceName": "myAzureSSISIR",
"type": "IntegrationRuntimeReference"
},
"executionCredential": {
"domain": "MyDomain",
"userName": "MyUsername",
"password": {
"type": "SecureString",
"value": "**********"
}
},
"runtime": "x64",
"loggingLevel": "Basic",
"packageLocation": {
"packagePath": "FolderName/ProjectName/PackageName.dtsx"
},
"environmentPath": "FolderName/EnvironmentName",
"projectParameters": {
"project_param_1": {
"value": "123"
},
"project_param_2": {
"value": {
"value": "@pipeline().parameters.MyPipelineParameter",
"type": "Expression"
}
}
},
"packageParameters": {
"package_param_1": {
"value": "345"
},
"package_param_2": {
"value": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "myAKV",
"type": "LinkedServiceReference"
},
"secretName": "MySecret"
}
}
},
"projectConnectionManagers": {
"MyAdonetCM": {
"userName": {
"value": "sa"
},
"passWord": {
"value": {
"type": "SecureString",
"value": "abc"
}
}
}
},
"packageConnectionManagers": {
"MyOledbCM": {
"userName": {
"value": {
"value": "@pipeline().parameters.MyUsername",
"type": "Expression"
}
},
"passWord": {
"value": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "myAKV",
"type": "LinkedServiceReference"
},
"secretName": "MyPassword",
"secretVersion": "3a1b74e361bf4ef4a00e47053b872149"
}
}
}
},
"propertyOverrides": {
"\\Package.MaxConcurrentExecutables": {
"value": 8,
"isSensitive": false
}
}
},
"policy": {
"timeout": "0.01:00:00",
"retry": 0,
"retryIntervalInSeconds": 30
}
}]
}
}
```
2. In Azure PowerShell, switch to the `C:\ADF\RunSSISPackage` folder.
3. To create the pipeline **RunSSISPackagePipeline**, run the **Set-AzDataFactoryV2Pipeline** cmdlet.
```powershell
$DFPipeLine = Set-AzDataFactoryV2Pipeline -DataFactoryName $DataFactory.DataFactoryName `
-ResourceGroupName $ResGrp.ResourceGroupName `
-Name "RunSSISPackagePipeline"
-DefinitionFile ".\RunSSISPackagePipeline.json"
```
Here is the sample output:
```
PipelineName : Adfv2QuickStartPipeline
ResourceGroupName : <resourceGroupName>
DataFactoryName : <dataFactoryName>
Activities : {CopyFromBlobToBlob}
Parameters : {[inputPath, Microsoft.Azure.Management.DataFactory.Models.ParameterSpecification], [outputPath, Microsoft.Azure.Management.DataFactory.Models.ParameterSpecification]}
```
### <a name="run-the-pipeline"></a>Run the pipeline
Use the **Invoke-AzDataFactoryV2Pipeline** cmdlet to run the pipeline. The cmdlet returns the pipeline run ID for future monitoring.
```powershell
$RunId = Invoke-AzDataFactoryV2Pipeline -DataFactoryName $DataFactory.DataFactoryName `
-ResourceGroupName $ResGrp.ResourceGroupName `
-PipelineName $DFPipeLine.Name
```
### <a name="monitor-the-pipeline"></a>Monitor the pipeline
Run the following PowerShell script to continuously check the pipeline run status until it finishes. Copy and paste the following script into the PowerShell window, and press ENTER.
```powershell
while ($True) {
$Run = Get-AzDataFactoryV2PipelineRun -ResourceGroupName $ResGrp.ResourceGroupName `
-DataFactoryName $DataFactory.DataFactoryName `
-PipelineRunId $RunId
if ($Run) {
if ($run.Status -ne 'InProgress') {
Write-Output ("Pipeline run finished. The status is: " + $Run.Status)
$Run
break
}
Write-Output "Pipeline is running...status: InProgress"
}
Start-Sleep -Seconds 10
}
```
You can also monitor the pipeline by using the Azure portal. For step-by-step instructions, see [Monitor the pipeline](quickstart-create-data-factory-resource-manager-template.md#monitor-the-pipeline).
### <a name="schedule-the-pipeline-with-a-trigger"></a>Schedule the pipeline with a trigger
In the previous step, you ran the pipeline on demand. You can also create a schedule trigger to run the pipeline on a schedule (hourly, daily, and so on).
1. Create a JSON file named **MyTrigger.json** in the **C:\ADF\RunSSISPackage** folder with the following content:
```json
{
"properties": {
"name": "MyTrigger",
"type": "ScheduleTrigger",
"typeProperties": {
"recurrence": {
"frequency": "Hour",
"interval": 1,
"startTime": "2017-12-07T00:00:00-08:00",
"endTime": "2017-12-08T00:00:00-08:00"
}
},
"pipelines": [{
"pipelineReference": {
"type": "PipelineReference",
"referenceName": "RunSSISPackagePipeline"
},
"parameters": {}
}]
}
}
```
2. In **Azure PowerShell**, switch to the **C:\ADF\RunSSISPackage** folder.
3. Run the **Set-AzDataFactoryV2Trigger** cmdlet, which creates the trigger.
```powershell
Set-AzDataFactoryV2Trigger -ResourceGroupName $ResGrp.ResourceGroupName `
-DataFactoryName $DataFactory.DataFactoryName `
-Name "MyTrigger" -DefinitionFile ".\MyTrigger.json"
```
4. By default, the trigger is in the stopped state. Start the trigger by running the **Start-AzDataFactoryV2Trigger** cmdlet.
```powershell
Start-AzDataFactoryV2Trigger -ResourceGroupName $ResGrp.ResourceGroupName `
-DataFactoryName $DataFactory.DataFactoryName `
-Name "MyTrigger"
```
5. Confirm that the trigger is started by running the **Get-AzDataFactoryV2Trigger** cmdlet.
```powershell
Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName `
-DataFactoryName $DataFactoryName `
-Name "MyTrigger"
```
6. Run the following command after the next hour begins. For example, if the current time is 3:25 PM UTC, run the command at 4 PM UTC.
```powershell
Get-AzDataFactoryV2TriggerRun -ResourceGroupName $ResourceGroupName `
-DataFactoryName $DataFactoryName `
-TriggerName "MyTrigger" `
-TriggerRunStartedAfter "2017-12-06" `
-TriggerRunStartedBefore "2017-12-09"
```
Puede ejecutar la consulta siguiente en la base de datos SSISDB en el servidor de Azure SQL para comprobar la ejecución del paquete.
```sql
select * from catalog.executions
```
## <a name="next-steps"></a>Pasos siguientes
Vea la siguiente entrada de blog:
- [Modernize and extend your ETL/ELT workflows with SSIS activities in ADF pipelines](https://blogs.msdn.microsoft.com/ssis/2018/05/23/modernize-and-extend-your-etlelt-workflows-with-ssis-activities-in-adf-pipelines/) (Modernización y ampliación de los flujos de trabajo ETL/ETL con actividades de SSIS en las canalizaciones de ADF)
# Código de conducta del colaborador
*[Read this in English!](CODE_OF_CONDUCT.md)*
Como colaboradores y mantenedores de este proyecto, nos comprometemos a respetar a todas las personas que contribuyen con informes de problemas, solicitudes de funciones, actualización de documentación, solicitudes de incorporación de cambios o revisiones, y otras actividades.
Nuestro compromiso es que no exista acoso para los participantes en este proyecto, sin considerar su nivel de experiencia, género, identidad y expresión de género, orientación sexual, discapacidad, aspecto personal, tamaño corporal, raza, etnia, edad o religión.
Algunos ejemplos de comportamiento inaceptable de los participantes son: uso de lenguaje o imágenes sexuales, comentarios despectivos o ataques personales, troleo, acoso público o privado, insultos u otra conducta poco profesional.
Los mantenedores del proyecto tienen el derecho y la obligación de eliminar, editar o rechazar comentarios, confirmaciones, código, modificaciones de wiki, problemas u otras colaboraciones que no cumplan con este Código de conducta.
Los casos de abuso, acoso o de otro comportamiento inaceptable se pueden denunciar abriendo un problema o contactando con uno o más de los mantenedores del proyecto en [email protected].
Este Código de conducta es una adaptación de la versión 1.0.0 del Convenio del colaborador ([Contributor Covenant](contributor-covenant.org), *en inglés*) disponible en el sitio http://contributor-covenant.org/version/1/0/0/ *(en inglés)*.
| 89.941176 | 277 | 0.81622 | spa_Latn | 0.992991 |
---
title: 물리적 서버에 대 한 Azure Migrate 어플라이언스 설정
description: 물리적 서버 검색 및 평가를 위해 Azure Migrate 어플라이언스를 설정 하는 방법에 대해 알아봅니다.
author: vineetvikram
ms.author: vivikram
ms.manager: abhemraj
ms.topic: how-to
ms.date: 03/13/2021
ms.openlocfilehash: 9052cbd3dc728b50b62c33f3a11a5e36a7504f29
ms.sourcegitcommit: 2c1b93301174fccea00798df08e08872f53f669c
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 03/22/2021
ms.locfileid: "104771569"
---
# <a name="set-up-an-appliance-for-physical-servers"></a>물리적 서버용 어플라이언스 설정
이 문서에서는 Azure Migrate: 검색 및 평가 도구를 사용 하 여 물리적 서버를 평가 하는 경우 Azure Migrate 어플라이언스를 설정 하는 방법을 설명 합니다.
Azure Migrate 어플라이언스는 Azure Migrate: 검색 및 평가에서 다음을 수행 하는 데 사용 되는 경량 어플라이언스입니다.
- 온-프레미스 서버를 검색합니다.
- 검색 된 서버에 대 한 메타 데이터 및 성능 데이터를 Azure Migrate: 검색 및 평가로 보냅니다.
Azure Migrate 어플라이언스에 대해 [자세히 알아봅니다](migrate-appliance.md).
## <a name="appliance-deployment-steps"></a>어플라이언스 배포 단계
어플라이언스를 설정하려면 다음을 수행합니다.
- 포털에서 어플라이언스 이름을 제공 하 고 프로젝트 키를 생성 합니다.
- Azure Portal에서 Azure Migrate 설치 프로그램 스크립트가 포함된 압축 파일을 다운로드합니다.
- 압축 파일의 콘텐츠를 추출합니다. 관리자 권한으로 PowerShell 콘솔을 시작합니다.
- PowerShell 스크립트를 실행하여 어플라이언스 웹 애플리케이션을 시작합니다.
- 처음으로 어플라이언스를 구성 하 고 프로젝트 키를 사용 하 여 프로젝트에 등록 합니다.
### <a name="generate-the-project-key"></a>프로젝트 키 생성
1. **마이그레이션 목표** > **Windows, Linux 및 SQL server** > **Azure Migrate: 검색 및 평가** 에서 **검색** 을 선택 합니다.
2. 서버에서 가상화 된 서버를 **검색** > **하** 고, **물리적 또는 기타 (AWS, gcp, Xen 등)** 를 선택 합니다.
3. **1: 프로젝트 키 생성** 에서 실제 또는 가상 서버를 검색 하는 데 설정할 Azure Migrate 어플라이언스의 이름을 제공 합니다. 이름은 14자 이하의 영숫자여야 합니다.
1. **키 생성** 을 클릭하여 필요한 Azure 리소스 만들기를 시작합니다. 리소스를 만드는 동안 서버 검색 페이지를 닫지 마십시오.
1. Azure 리소스를 성공적으로 만든 후에는 **프로젝트 키** 가 생성 됩니다.
1. 이 키는 구성 단계에서 어플라이언스 등록을 완료하는 데 필요하므로 복사해 둡니다.
### <a name="download-the-installer-script"></a>설치 프로그램 스크립트 다운로드
**2: Azure Migrate 어플라이언스 다운로드** 에서 **다운로드** 를 클릭합니다.


### <a name="verify-security"></a>보안 확인
배포하기 전에 압축된 파일이 안전한지 확인합니다.
1. 파일이 다운로드 된 서버에서 관리자 명령 창을 엽니다.
2. 다음 명령을 실행하여 압축된 파일의 해시를 생성합니다.
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- 퍼블릭 클라우드의 사용 예: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public.zip SHA256 ```
- 정부 클라우드의 사용 예: ``` C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-USGov.zip MD5 ```
3. 최신 버전의 어플라이언스 및 [해시 값](tutorial-discover-physical.md#verify-security) 설정을 확인 합니다.
## <a name="run-the-azure-migrate-installer-script"></a>Azure Migrate 설치 프로그램 스크립트 실행
설치 프로그램 스크립트는 다음을 수행합니다.
- 물리적 서버 검색 및 평가를 위한 에이전트와 웹 애플리케이션을 설치합니다.
- Windows 정품 인증 서비스, IIS 및 PowerShell ISE를 비롯한 Windows 역할을 설치합니다.
- IIS 재작성 모듈을 다운로드하여 설치합니다. [자세히 알아보기](https://www.microsoft.com/download/details.aspx?id=7435).
- Azure Migrate에 대한 영구적인 설정 세부 정보를 사용하여 레지스트리 키(HKLM)를 업데이트합니다.
- 지정된 경로에 다음 파일을 만듭니다.
- **구성 파일**: %Programdata%\Microsoft Azure\Config
- **로그 파일**: %Programdata%\Microsoft Azure\Logs
스크립트를 다음과 같이 실행합니다.
1. 어플라이언스를 호스팅할 서버의 폴더에 압축 파일을 추출합니다. 기존 Azure Migrate 어플라이언스를 포함 하는 서버에서 스크립트를 실행 하지 않아야 합니다.
2. 위 서버에서 관리자(상승된) 권한을 사용하여 PowerShell을 시작합니다.
3. 다운로드한 압축 파일에서 콘텐츠를 추출한 폴더로 PowerShell 디렉터리를 변경합니다.
4. 다음 명령을 실행하여 **AzureMigrateInstaller.ps1** 이라는 스크립트를 실행합니다.
- 퍼블릭 클라우드의 경우:
``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public> .\AzureMigrateInstaller.ps1 ```
- Azure Government의 경우:
``` PS C:\Users\Administrators\Desktop\AzureMigrateInstaller-Server-USGov>.\AzureMigrateInstaller.ps1 ```
스크립트가 성공적으로 완료되면 어플라이언스 웹 애플리케이션이 시작됩니다.
문제가 발생하는 경우 문제 해결을 위해 C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log에서 스크립트 로그에 액세스할 수 있습니다.
### <a name="verify-appliance-access-to-azure"></a>Azure에 대한 어플라이언스 액세스 확인
어플라이언스에서 [퍼블릭](migrate-appliance.md#public-cloud-urls) 및 [정부](migrate-appliance.md#government-cloud-urls) 클라우드의 Azure URL에 연결할 수 있는지 확인합니다.
### <a name="configure-the-appliance"></a>어플라이언스 구성
어플라이언스를 처음으로 설정합니다.
1. 어플라이언스에 연결할 수 있는 모든 머신에서 브라우저를 열고, 어플라이언스 웹앱의 URL(**https://*어플라이언스 이름 또는 IP 주소*): 44368**)을 엽니다.
또는 바탕 화면에서 앱 바로 가기를 클릭하여 앱을 열 수 있습니다.
2. **사용 조건** 에 동의하고 타사 정보를 읽습니다.
1. 웹앱 > **필수 구성 요소 설정** 에서 다음을 수행합니다.
- **연결**: 앱에서 서버가 인터넷에 액세스할 수 있는지 확인합니다. 서버에서 프록시를 사용하는 경우:
- **설정 프록시** 를 클릭 하 고 양식 http://ProxyIPAddress 또는 http://ProxyFQDN) 수신 대기 포트에서 프록시 주소를 지정 합니다.
- 프록시에 인증이 필요한 경우 자격 증명을 지정합니다.
- HTTP 프록시만 지원됩니다.
- 프록시 세부 정보를 추가하거나 프록시 및/또는 인증을 사용하지 않도록 설정한 경우 **저장** 을 클릭하여 연결 확인을 다시 트리거합니다.
- **시간 동기화**: 시간이 확인됩니다. 서버 검색이 제대로 작동하려면 어플라이언스의 시간이 인터넷 시간과 동기화되어야 합니다.
- **업데이트 설치**: Azure Migrate: 검색 및 평가는 어플라이언스에 최신 업데이트가 설치 되어 있는지 확인 합니다. 확인이 완료되면 **어플라이언스 서비스 보기** 를 클릭하여 어플라이언스에서 실행되는 구성 요소의 상태와 버전을 확인할 수 있습니다.
### <a name="register-the-appliance-with-azure-migrate"></a>Azure Migrate를 사용하여 어플라이언스 등록
1. 포털에서 복사한 **프로젝트 키** 를 붙여넣습니다. 키가 없는 경우 **Azure Migrate: 검색 및 평가> 검색 및 평가를 검색> 기존 어플라이언스를 검색** 하 고, 키 생성 시 제공한 어플라이언스 이름을 선택 하 고, 해당 키를 복사 합니다.
1. Azure로 인증하려면 디바이스 코드가 필요합니다. **로그인** 을 클릭하면 아래와 같이 디바이스 코드가 포함된 모달이 열립니다.

1. **코드 복사 및 로그인** 을 클릭하여 디바이스 코드를 복사하고 새 브라우저 탭에서 Azure 로그인 프롬프트를 엽니다. 표시되지 않으면 브라우저에서 팝업 차단을 사용하지 않도록 설정했는지 확인합니다.
1. 새 탭에서 Azure 사용자 이름 및 암호를 사용 하 여 장치 코드를 붙여넣고 로그인 합니다.
PIN을 사용한 로그인은 지원되지 않습니다.
3. 로그인 탭을 실수로 로그인하지 않고 닫은 경우에는 어플라이언스 구성 관리자의 브라우저 탭을 새로 고쳐 로그인 단추를 다시 사용하도록 설정해야 합니다.
1. 성공적으로 로그인한 후 어플라이언스 구성 관리자를 사용하여 이전 탭으로 돌아갑니다.
4. 로깅에 사용되는 Azure 사용자 계정에 키 생성 시 만든 Azure 리소스에 대한 올바른 [권한](./tutorial-discover-physical.md)이 있는 경우 어플라이언스 등록이 시작됩니다.
1. 어플라이언스가 성공적으로 등록되면 **세부 정보 보기** 를 클릭하여 등록 세부 정보를 확인할 수 있습니다.
## <a name="start-continuous-discovery"></a>연속 검색 시작
이제 어플라이언스에서 검색할 물리적 서버에 연결하여 검색을 시작합니다.
1. **1단계: Windows 및 Linux 물리적 또는 가상 서버 검색을 위한 자격 증명 제공** 에서 **자격 증명 추가** 를 클릭합니다.
1. Windows server의 경우 원본 유형으로 **Windows server** 를 선택 하 고, 자격 증명의 이름을 지정 하 고, 사용자 이름 및 암호를 추가 합니다. **Save** 를 클릭합니다.
1. Linux 서버에 대 한 암호 기반 인증을 사용 하는 경우 원본 유형으로 **Linux 서버 (암호 기반)** 를 선택 하 고, 자격 증명의 이름을 지정 하 고, 사용자 이름 및 암호를 추가 합니다. **Save** 를 클릭합니다.
1. Linux server에 대 한 SSH 키 기반 인증을 사용 하는 경우 원본 유형으로 **Linux 서버 (SSH 키 기반)** 를 선택 하 고, 자격 증명의 이름을 지정 하 고, 사용자 이름을 추가 하 고, 검색 하 고, SSH 개인 키를 선택할 수 있습니다. **Save** 를 클릭합니다.
- Azure Migrate는 RSA, DSA, ECDSA 및 ed25519 알고리즘을 사용 하 여 ssh-ssh-keygen 명령으로 생성 된 SSH 개인 키를 지원 합니다.
- 현재 Azure Migrate는 암호 기반 SSH 키를 지원 하지 않습니다. 암호 없이 SSH 키를 사용 합니다.
- 현재 Azure Migrate는 PuTTY에서 생성된 SSH 프라이빗 키 파일을 지원하지 않습니다.
- Azure Migrate는 아래와 같이 SSH 프라이빗 키 파일의 OpenSSH 형식을 지원합니다.

1. 여러 자격 증명을 한 번에 추가하려면 **추가** 를 클릭하여 더 많은 자격 증명을 저장하고 추가합니다. 물리적 서버 검색에 여러 자격 증명이 지원됩니다.
1. **2단계: 물리적 또는 가상 서버 세부 정보 제공** 에서 **검색 원본 추가** 를 클릭하여 서버 **IP 주소/FQDN** 을 지정하고 서버에 연결할 자격 증명의 식별 이름을 지정합니다.
1. 한 번에 하나씩 **단일 항목을 추가** 하거나 한꺼번에 **여러 항목을 추가** 할 수 있습니다. 또한 **CSV 가져오기** 를 통해 서버 세부 정보를 제공하는 옵션도 있습니다.

- **단일 항목 추가** 를 선택하는 경우 OS 유형을 선택하고, 자격 증명의 식별 이름을 지정하고, 서버 **IP 주소/FQDN** 을 추가하고 **저장** 을 클릭합니다.
- **여러 항목 추가** 를 선택 하는 경우 텍스트 상자에 자격 증명의 이름을 지정 하 여 서버 **IP 주소/a p i** 를 지정 하 여 여러 레코드를 한 번에 추가할 수 있습니다. 추가 된 레코드를 확인 하 고 **저장** 을 클릭 합니다.
- **CSV가져오기** 를 선택하는 경우 _(기본적으로 선택됨)_ CSV 템플릿 파일을 다운로드하고 서버 **IP 주소/FQDN** 및 자격 증명 식별 이름으로 파일을 채울 수 있습니다. 그런 다음, 파일을 어플라이언스로 가져와 파일의 레코드를 **확인** 하고 **저장** 을 클릭합니다.
1. 저장을 클릭하면 어플라이언스가 추가된 서버에 대한 연결의 유효성을 검사하고 각 서버에 대한 테이블에 **유효성 검사 상태** 를 표시합니다.
- 서버에 대한 유효성 검사가 실패하면 테이블의 상태 열에서 **유효성 검사 실패** 를 클릭하여 오류를 검토합니다. 문제를 해결하고, 유효성을 다시 검사합니다.
- 서버를 제거하려면 **삭제** 를 클릭합니다.
1. 검색을 시작 하기 전에 언제 든 지 서버에 대 한 연결의 **유효성** 을 다시 검사할 수 있습니다.
1. **검색 시작** 을 클릭하여 유효성 검사에 성공한 서버의 검색을 시작합니다. 검색이 성공적으로 시작되었으면 테이블의 각 서버에 대한 검색 상태를 확인할 수 있습니다.
그러면 검색을 시작합니다. 검색된 서버의 메타데이터가 Azure Portal에 표시되는 데 서버당 약 2분이 걸립니다.
## <a name="verify-servers-in-the-portal"></a>포털에서 서버 확인
검색이 완료되면 서버가 포털에 표시되는지 확인할 수 있습니다.
1. Azure Migrate 대시보드를 엽니다.
2. **Azure Migrate-Windows, Linux 및 SQL server** > **Azure Migrate: 검색 및 평가** 페이지에서 **검색 된 서버의** 수를 표시 하는 아이콘을 클릭 합니다.
## <a name="next-steps"></a>다음 단계
Azure Migrate: 검색 및 평가를 사용하여 [물리적 서버 평가](tutorial-assess-physical.md)를 시험해 보세요.
---
title: "Visual C++ Intellisense | Microsoft Docs"
ms.custom: ""
ms.date: "2018-06-30"
ms.prod: "visual-studio-dev14"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "vs-ide-general"
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: 9d7c6414-4e6c-4889-a74c-a6033795eccc
caps.latest.revision: 11
author: gewarren
ms.author: gewarren
manager: "ghogen"
---
# Visual C++ Intellisense
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
The latest version of this topic can be found at [Visual C++ Intellisense](https://docs.microsoft.com/visualstudio/ide/visual-cpp-intellisense).
In Visual Studio 2015, IntelliSense is available for single code files as well as for files in projects. In cross-platform projects, some IntelliSense features are available in .cpp and .c files in the shared code project even when you are in an Android or iOS context.
## IntelliSense features in C++
IntelliSense is a name given to a set of features that make coding more convenient. Since different people have different ideas about what is convenient, virtually all of the IntelliSense features can be enabled or disabled in the **Text Editor, C/C++, Advanced** property page.

You can use the menu items and keyboard shortcuts shown in the following image to access IntelliSense.

### Statement completion and member list
When you start typing a keyword, type, function, variable name, or other program element that the compiler recognizes, the editor offers to complete the word for you.
For a list of the icons and their meanings, see [Class View and Object Browser Icons](../ide/class-view-and-object-browser-icons.md).

The first time member list is invoked it only shows members that are accessible for the current context. If you use **Ctrl + J** after that, it shows all members regardless of accessibility. If you invoke it a third time, an even wider list of program elements is shown. You can turn off statement completion in the **C/C++ General Options** page.

### Parameter Help
When you type an opening brace of a function call, or angle bracket on a class template variable declaration, the editor shows a small window with the parameter types for each overload of the function or constructor. The "current" parameter--based on the cursor location--is in bold. You can turn off Statement completion in the **C/C++ General Options** page.

### Quick Info
When you hover the mouse cursor over a variable, a small window appears inline that shows the type information and the header in which the type is defined. Hover over a function call to see the function's signature. You can turn off Quick Info in the **Text Editor, C/C++, Advanced** page.

## Error squiggles
Squiggles under a program element (variable, keyword, brace, type name, and so on) call your attention to an error or potential error in the code. A green squiggle appears when you write a forward declaration, to remind you that you still need to write the implementation. A purple squiggle appears in a shared project when there is an error in code that is not currently active, for example when you are working in the Windows context but enter something that would be an error in an Android context. A red squiggle indicates a compiler error or warning in active code that you need to deal with.

## Code Colorization and Fonts
The default colors and fonts can be changed by using the **Environment, Fonts and Colors** property page. You can change the fonts for many UI windows here, not just the editor. The settings that are specific to C++ begin with "C++"; the other settings are for all languages.
## Cross-Platform IntelliSense
In a shared code project, some IntelliSense features such as squiggles are available even when you are working in an Android context. If you write some code that would result in an error in an inactive project, IntelliSense still shows squiggles, but they are in a different color than squiggles for errors in the current context.
Here’s an OpenGLES Application that is configured to build for Android and iOS. The illustration shows shared code being edited. In the first image, Android is the active project:

Notice the following:
- The #else branch on line 8 is grayed out to indicate inactive region, because __ANDROID\_\_ is defined for Android project.
- The greeting variable at line 11 is initialized with identifier HELLO, which has a purple squiggle. This is because no identifier HELLO is defined in the currently inactive iOS project. While in Android project line 11 would compile, it won’t in iOS. Since this is shared code, that is something you should change even though it compiles in the currently active configuration.
- Line 12 has red squiggle on identifier BYE; this identifier is not defined in the currently selected active project.
Now, change the active project to iOS.StaticLibrary and notice how the squiggles change.

Notice the following:
- The #ifdef branch on line 6 is grayed out to indicate inactive region, because __ANDROID\_\_ is not defined for iOS project.
- The greeting variable at line 11 is initialized with identifier HELLO, which now has red squiggle. This is because no identifier HELLO is defined in the currently active iOS project.
- Line 12 has purple squiggle on identifier BYE; this identifier is not defined in currently inactive Android.NativeActivity project.
## Single File IntelliSense
When you open a single file outside of any project, you still get IntelliSense. You can enable or disable particular features by going to **Text Editor, C/C++, Advanced** to turn on or off IntelliSense features. To configure IntelliSense for single files that aren't part of a project, look for **IntelliSense and Browsing for Non-Project Files** in the **Advanced** section. See [Visual C++ Guided Tour](http://msdn.microsoft.com/en-us/499cb66f-7df1-45d6-8b6b-33d94fd1f17c).

By default, single file IntelliSense only uses standard include directories to find header files. To add additional directories, open the shortcut menu on the Solution node, and add your directory to **Debug Source Code** list, as the following illustration shows:

## See Also
[Using IntelliSense](../ide/using-intellisense.md)
| 73.548077 | 600 | 0.760753 | eng_Latn | 0.994933 |
---
title: START TRANSACTION | TiDB SQL Statement Reference
summary: An overview of the usage of START TRANSACTION for the TiDB database.
aliases: ['/docs/dev/sql-statements/sql-statement-start-transaction/','/docs/dev/reference/sql/statements/start-transaction/']
---
# START TRANSACTION
This statement starts a new transaction inside of TiDB. It is similar to the statement `BEGIN`.
In the absence of a `START TRANSACTION` statement, every statement will by default autocommit in its own transaction. This behavior ensures MySQL compatibility.
## Synopsis
**BeginTransactionStmt:**
```ebnf+diagram
BeginTransactionStmt ::=
'BEGIN' ( 'PESSIMISTIC' | 'OPTIMISTIC' )?
| 'START' 'TRANSACTION' ( 'READ' ( 'WRITE' | 'ONLY' ( ( 'WITH' 'TIMESTAMP' 'BOUND' TimestampBound )? | AsOfClause ) ) | 'WITH' 'CONSISTENT' 'SNAPSHOT' | 'WITH' 'CAUSAL' 'CONSISTENCY' 'ONLY' )?
AsOfClause ::=
( 'AS' 'OF' 'TIMESTAMP' Expression)
```
## Examples
```sql
mysql> CREATE TABLE t1 (a int NOT NULL PRIMARY KEY);
Query OK, 0 rows affected (0.12 sec)
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO t1 VALUES (1);
Query OK, 1 row affected (0.00 sec)
mysql> COMMIT;
Query OK, 0 rows affected (0.01 sec)
```
## MySQL compatibility
* `START TRANSACTION` immediately starts a transaction inside TiDB. This differs from MySQL, where `START TRANSACTION` lazily creates a transaction. But `START TRANSACTION` in TiDB is equivalent to MySQL's `START TRANSACTION WITH CONSISTENT SNAPSHOT`.
* The statement `START TRANSACTION READ ONLY` is parsed for compatibility with MySQL, but still allows write operations.
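The second compatibility note can be illustrated with a short session against the `t1` table created in the example above; the behavior in the comment is as stated in this document, not verified here:

```sql
START TRANSACTION READ ONLY;
-- MySQL would reject this write inside a READ ONLY transaction;
-- TiDB parses READ ONLY for compatibility but still accepts it.
INSERT INTO t1 VALUES (2);
COMMIT;
```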
## See also
* [COMMIT](/sql-statements/sql-statement-commit.md)
* [ROLLBACK](/sql-statements/sql-statement-rollback.md)
* [BEGIN](/sql-statements/sql-statement-begin.md)
* [START TRANSACTION WITH CAUSAL CONSISTENCY ONLY](/transaction-overview.md#causal-consistency)
| 35.222222 | 251 | 0.73817 | kor_Hang | 0.473426 |
# video-cliping-detection
Detection and analysis of malicious video clipping (inter-frame forgery).
## Algorithm references
Gaussian-model-of-optical-flow
```
@inproceedings{wang2013identifying,
title={Identifying video forgery process using optical flow},
author={Wang, Wan and Jiang, Xinghao and Wang, Shilin and Wan, Meng and Sun, Tanfeng},
booktitle={International Workshop on Digital Watermarking},
pages={244--257},
year={2013},
organization={Springer}
}
```
Sum-of-X-and-Y-optical-flow
```
@inproceedings{chao2012novel,
title={A novel video inter-frame forgery model detection scheme based on optical flow consistency},
author={Chao, Juan and Jiang, Xinghao and Sun, Tanfeng},
booktitle={International Workshop on Digital Watermarking},
pages={267--281},
year={2012},
organization={Springer}
}
```
Hog-of-video-frames
```
@article{fadl2020exposing,
title={Exposing video inter-frame forgery via histogram of oriented gradients and motion energy image},
author={Fadl, Sondos and Han, Qi and Qiong, Li},
journal={Multidimensional Systems and Signal Processing},
volume={31},
number={4},
pages={1365--1384},
year={2020},
publisher={Springer}
}
```
## Environment
```
pip install opencv-python numpy matplotlib scipy scenedetect[opencv]
```
## Usage
Shots are first segmented with the scenedetect tool, and the three algorithms above are then applied to each segmented shot to detect inter-frame forgery.
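The consistency idea shared by the three referenced papers can be sketched without OpenCV: once a per-frame optical-flow (or motion-energy) sum has been computed, a frame deletion or insertion shows up as an outlier in that sequence. The sketch below is a simplified stand-in for the papers' detectors, and the flow sums are illustrative numbers, not real measurements:

```python
import statistics

def flag_anomalies(flow_sums, k=2.0):
    """Return indices whose flow sum deviates from the mean by > k std devs.

    A frame-deletion splice breaks optical-flow continuity, producing an
    abnormal spike in the per-frame flow sum at the tampering point.
    """
    mu = statistics.fmean(flow_sums)
    sigma = statistics.pstdev(flow_sums)
    if sigma == 0:  # perfectly smooth motion, nothing to flag
        return []
    return [i for i, s in enumerate(flow_sums) if abs(s - mu) > k * sigma]

# Smooth motion with one suspicious spike at frame 6
sums = [10.0, 10.2, 10.1, 10.3, 10.2, 10.1, 25.0, 10.2, 10.3, 10.1]
print(flag_anomalies(sums, k=2.0))  # -> [6]
```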
# Validate Azure resources from templates with continuous integration (CI)
Azure Resource Manager (ARM) templates are a JSON-based file structure.
ARM templates are typically not static, they include parameters, functions and conditions.
Depending on the parameters provided to a template, resources may differ significantly.
Important resource properties that should be validated are often variables, parameters or deployed conditionally.
Under these circumstances, to correctly validate resources in a template, parameters must be resolved.
The following scenario shows how to validate Azure resources from templates using a generic pipeline.
The examples provided can be integrated into a continuous integration (CI) pipeline able to run PowerShell.
For integrating into Azure DevOps see [Validate Azure resources from templates with Azure Pipelines](../azure-pipelines-ci/azure-pipelines-ci.md).
This scenario covers the following:
- [Installing PSRule within a CI pipeline](#installing-psrule-within-a-ci-pipeline)
- [Exporting rule data for analysis](#exporting-rule-data-for-analysis)
- [Validating exported resources](#validating-exported-resources)
- [Formatting output](#formatting-output)
- [Failing the pipeline](#failing-the-pipeline)
- [Generating NUnit output](#generating-nunit-output)
- [Complete example](#complete-example)
- [Additional options](#additional-options)
## Installing PSRule within a CI pipeline
Typically, PSRule is not pre-installed on CI worker nodes and must be installed within the pipeline.
PSRule PowerShell modules need to be installed prior to calling PSRule cmdlets.
If your CI pipeline runs on a persistent virtual machine that you control, consider pre-installing PSRule.
The following examples focus on installing PSRule dynamically during execution of the pipeline.
Which is suitable for cloud-based CI worker nodes.
To install PSRule within a CI pipeline, execute the `Install-Module` PowerShell cmdlet.
Depending on your environment, the CI worker process may not have administrative permissions.
To install modules into the current context running the CI pipeline use `-Scope CurrentUser`.
The PowerShell Gallery is not a trusted source by default.
Use the `-Force` switch to suppress a prompt to install modules from PowerShell Gallery.
For example:
```powershell
$Null = Install-Module -Name PSRule.Rules.Azure -Scope CurrentUser -Force;
```
Installing `PSRule.Rules.Azure` also installs the base `PSRule` module and associated Azure dependencies.
The `PSRule.Rules.Azure` module includes cmdlets and pre-built rules for validating Azure resources.
Using the pre-built rules is completely optional.
In some cases, installing NuGet and PowerShellGet may be required to connect to the PowerShell Gallery.
The NuGet package provider can be installed using the `Install-PackageProvider` PowerShell cmdlet.
```powershell
$Null = Install-PackageProvider -Name NuGet -Scope CurrentUser -Force;
```
The example below includes both steps together with checks:
```powershell
if ($Null -eq (Get-PackageProvider -Name NuGet -ErrorAction SilentlyContinue)) {
$Null = Install-PackageProvider -Name NuGet -Scope CurrentUser -Force;
}
if ($Null -eq (Get-InstalledModule -Name PowerShellGet -MinimumVersion 2.2.1 -ErrorAction Ignore)) {
Install-Module PowerShellGet -MinimumVersion 2.2.1 -Scope CurrentUser -Force -AllowClobber;
}
if ($Null -eq (Get-InstalledModule -Name PSRule.Rules.Azure -MinimumVersion '0.12.1' -ErrorAction SilentlyContinue)) {
$Null = Install-Module -Name PSRule.Rules.Azure -Scope CurrentUser -MinimumVersion '0.12.1' -Force;
}
```
Add `-AllowPrerelease` to install pre-release versions.
See the [change log](https://github.com/Microsoft/PSRule.Rules.Azure/blob/main/CHANGELOG.md) for the latest version.
## Exporting rule data for analysis
In PSRule, the `Export-AzRuleTemplateData` cmdlet resolves a template and returns a resultant set of resources.
The resultant set of resources can then be validated.
No connectivity to Azure is required by default when calling `Export-AzRuleTemplateData`.
### Export cmdlet parameters
To run `Export-AzRuleTemplateData` two key parameters are required:
- `-TemplateFile` - An absolute or relative path to the template JSON file.
- `-ParameterFile` - An absolute or relative path to one or more parameter JSON files.
The `-ParameterFile` parameter is optional when all parameters defined in the template have `defaultValue` set.
Optionally the following parameters can be used:
- `-Name` - The name of the deployment. If not specified a default name of `export-<xxxxxxxx>` will be used.
- `-OutputPath` - An absolute or relative path where the resultant resources will be written to JSON.
If not specified the current working path be used.
- `-ResourceGroup` - The name of a resource group where the deployment is intended to be run.
If not specified placeholder values will be used.
- `-Subscription` - The name or subscription Id of a subscription where the deployment is intended to be run.
If not specified placeholder values will be used.
See cmdlet help for a full list of parameters.
If `-OutputPath` is a directory or is not set, the output file will be automatically named `resources-<name>.json`.
For example:
```powershell
Export-AzRuleTemplateData -TemplateFile .\template.json -ParameterFile .\parameters.json;
```
Multiple parameter files that map to the same template can be supplied in a single cmdlet call.
Additional templates can be exported by calling `Export-AzRuleTemplateData` multiple times.
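For example, the sketch below (parameter file names are illustrative) supplies two parameter files for one template in a single call, then exports a second template with a separate call:

```powershell
# One template resolved against two parameter files in a single call
Export-AzRuleTemplateData -TemplateFile .\template.json `
    -ParameterFile .\parameters-dev.json, .\parameters-prod.json `
    -OutputPath out/;

# A second template is exported by calling the cmdlet again
Export-AzRuleTemplateData -TemplateFile .\storage.template.json `
    -ParameterFile .\storage.parameters.json `
    -OutputPath out/;
```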
### Use of placeholder values
A number of functions that can be used within Azure templates retrieve information from Azure.
Some examples include `reference`, `subscription`, `resourceGroup`, `list*`.
The default for `Export-AzRuleTemplateData` is to operate without requiring authenticated connectivity to Azure.
As a result, functions that retrieve information from Azure use placeholders such as `{{Subscription.SubscriptionId}}`.
To provide a real value for `subscription` and `resourceGroup` use the `-Subscription` and `-ResourceGroup` parameters.
When using `-Subscription` and `-ResourceGroup` the subscription and resource group must already exist.
Additionally the context running the cmdlet must have at least read access (i.e. `Reader`).
It is currently not possible to provide a real value for `reference` and `list*`, only placeholders will be used.
Key Vault references in parameter files use placeholders instead of the real value to prevent accidental exposure of secrets.
## Validating exported resources
To validate exported resources use `Invoke-PSRule`, `Assert-PSRule` or `Test-PSRuleTarget`.
In a CI pipeline, `Assert-PSRule` is recommended.
`Assert-PSRule` outputs preformatted results ideal for use within a CI pipeline.
Use `Assert-PSRule` with the resolved resource output as an input using `-InputPath`.
In the following example, resources from `.\resources.json` are validated against pre-built rules:
```powershell
Assert-PSRule -InputPath .\resources-export-*.json -Module PSRule.Rules.Azure;
```
Example output:
```text
-> vnet-001 : Microsoft.Network/virtualNetworks
[PASS] Azure.Resource.UseTags
[PASS] Azure.VirtualNetwork.UseNSGs
[PASS] Azure.VirtualNetwork.SingleDNS
[PASS] Azure.VirtualNetwork.LocalDNS
-> vnet-001/subnet2 : Microsoft.Network/virtualNetworks/subnets
[FAIL] Azure.Resource.UseTags
```
To process multiple input files a wildcard `*` can be used.
```powershell
Assert-PSRule -InputPath .\out\*.json -Module PSRule.Rules.Azure;
```
## Formatting output
When executing a CI pipeline, feedback on any validation failures is important.
The `Assert-PSRule` cmdlet provides easy to read formatted output instead of PowerShell objects.
Additionally, `Assert-PSRule` supports styling formatted output for Azure Pipelines and GitHub Actions.
Use the `-Style AzurePipelines` or `-Style GitHubActions` parameter to style output.
For example:
```powershell
Assert-PSRule -InputPath .\out\*.json -Style AzurePipelines -Module PSRule.Rules.Azure;
```
## Failing the pipeline
When using PSRule within a CI pipeline, a failed rule should stop the pipeline.
When using `Assert-PSRule` if any rules fail, an error will be generated.
```text
Assert-PSRule : One or more rules reported failure.
At line:1 char:1
+ Assert-PSRule -Module PSRule.Rules.Azure -InputPath .\out\tests\Resou ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [Assert-PSRule], FailPipelineException
+ FullyQualifiedErrorId : PSRule.Fail,Assert-PSRule
```
A single PowerShell error is typically enough to stop a CI pipeline.
If you are using a different configuration additionally `-ErrorAction Stop` can be used.
For example:
```powershell
Assert-PSRule -Module PSRule.Rules.Azure -InputPath .\out\*.json -ErrorAction Stop;
```
## Generating NUnit output
NUnit is a popular unit test framework for .NET.
NUnit generates a test report format that is widely interpreted by CI systems.
While PSRule does not use NUnit directly, it supports outputting validation results in the NUnit3 format.
Using a common format allows integration with any system that supports the NUnit3 format for publishing test results.
To generate an NUnit report:
- Use the `-OutputFormat NUnit3` parameter.
- Use the `-OutputPath` parameter to specify the path of the report file to write.
```powershell
Assert-PSRule -OutputFormat NUnit3 -OutputPath .\reports\rule-report.xml -Module PSRule.Rules.Azure -InputPath .\out\*.json;
```
The output path will be created if it does not exist.
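Once the report exists, most CI systems can pick it up. For example, in Azure Pipelines the file generated above could be published with the `PublishTestResults@2` task (a sketch — the path assumes the example above):

```yaml
# Publish the PSRule NUnit report so results show up in the pipeline UI.
# succeededOrFailed() ensures results are published even when rules failed.
- task: PublishTestResults@2
  displayName: 'Publish PSRule results'
  condition: succeededOrFailed()
  inputs:
    testResultsFormat: NUnit
    testResultsFiles: 'reports/rule-report.xml'
    testRunTitle: 'PSRule'
```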
## Complete example
Putting each of these steps together.
### Install dependencies
```powershell
# Install dependencies for connecting to PowerShell Gallery
if ($Null -eq (Get-PackageProvider -Name NuGet -ErrorAction Ignore)) {
Install-PackageProvider -Name NuGet -Force -Scope CurrentUser;
}
if ($Null -eq (Get-InstalledModule -Name PowerShellGet -MinimumVersion 2.2.1 -ErrorAction Ignore)) {
Install-Module PowerShellGet -MinimumVersion 2.2.1 -Scope CurrentUser -Force -AllowClobber;
}
```
### Validate templates
```powershell
# Install PSRule.Rules.Azure module
if ($Null -eq (Get-InstalledModule -Name PSRule.Rules.Azure -MinimumVersion '0.12.1' -ErrorAction SilentlyContinue)) {
$Null = Install-Module -Name PSRule.Rules.Azure -Scope CurrentUser -MinimumVersion '0.12.1' -Force;
}
# Resolve resources
Export-AzRuleTemplateData -TemplateFile .\template.json -ParameterFile .\parameters.json -OutputPath out/;
# Validate resources
$assertParams = @{
InputPath = 'out/*.json'
Module = 'PSRule.Rules.Azure'
Style = 'AzurePipelines'
OutputFormat = 'NUnit3'
OutputPath = 'reports/rule-report.xml'
}
Assert-PSRule @assertParams;
```
## Additional options
### Using Invoke-Build
`Invoke-Build` is a build automation cmdlet that can be installed from the PowerShell Gallery by installing the _InvokeBuild_ module.
Within Invoke-Build, each build process is broken into tasks.
The following is an example of using _PSRule.Rules.Azure_ with _InvokeBuild_ tasks.
```powershell
# Synopsis: Install PSRule modules
task InstallPSRule {
if ($Null -eq (Get-InstalledModule -Name PSRule.Rules.Azure -MinimumVersion '0.12.1' -ErrorAction SilentlyContinue)) {
$Null = Install-Module -Name PSRule.Rules.Azure -Scope CurrentUser -MinimumVersion '0.12.1' -Force;
}
}
# Synopsis: Run validation
task ValidateTemplate InstallPSRule, {
# Resolve resources
Export-AzRuleTemplateData -TemplateFile .\template.json -ParameterFile .\parameters.json -OutputPath out/;
# Validate resources
$assertParams = @{
InputPath = 'out/*.json'
Module = 'PSRule.Rules.Azure'
Style = 'AzurePipelines'
OutputFormat = 'NUnit3'
OutputPath = 'reports/rule-report.xml'
}
Assert-PSRule @assertParams;
}
# Synopsis: Run all build tasks
task Build ValidateTemplate
```
```powershell
Invoke-Build Build;
```
### Calling from Pester
Pester is a unit test framework for PowerShell that can be installed from the PowerShell Gallery.
Typically, Pester unit tests are built for a particular pipeline.
PSRule can complement Pester unit tests by providing dynamic and sharable rules that are easy to reuse.
By using `-If` or `-Type` pre-conditions, rules can dynamically provide validation for a range of use cases.
When calling PSRule from Pester use `Invoke-PSRule` instead of `Assert-PSRule`.
`Invoke-PSRule` returns validation result objects that can be tested by Pester `Should` conditions.
Additionally, the `Logging.RuleFail` option can be included to generate an error message for each failing rule.
For example:
```powershell
Describe 'Azure' {
Context 'Resource templates' {
It 'Use content rules' {
Export-AzRuleTemplateData -TemplateFile .\template.json -ParameterFile .\parameters.json -OutputPath .\out\resources.json;
# Validate resources
$invokeParams = @{
InputPath = 'out/*.json'
Module = 'PSRule.Rules.Azure'
OutputFormat = 'NUnit3'
OutputPath = 'reports/rule-report.xml'
Option = (New-PSRuleOption -LoggingRuleFail Error)
}
Invoke-PSRule @invokeParams -Outcome Fail,Error | Should -BeNullOrEmpty;
}
}
}
```
## More information
- [pipeline-deps.ps1](pipeline-deps.ps1) - Example script installing pipeline dependencies.
- [validate-template.ps1](validate-template.ps1) - Example script for running template validation.
- [template.json](template.json) - Example template file.
- [parameters.json](parameters.json) - Example parameters file.
[publish-test-results]: https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-test-results
# Change Log
All notable changes to this project will be documented in this file.
See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.
# [3.5.0](https://github.com/WTTJ/welcome-ui/compare/v3.4.3...v3.5.0) (2021-04-28)
### Features
* add new sub color 7 ([#844](https://github.com/WTTJ/welcome-ui/issues/844)) ([81a9770](https://github.com/WTTJ/welcome-ui/commit/81a9770e9310e7ae44d5d0ef6d6ed0a507ce603e))
## [3.4.3](https://github.com/WTTJ/welcome-ui/compare/v3.4.2...v3.4.3) (2021-04-28)
**Note:** Version bump only for package @welcome-ui/core
## [3.4.2](https://github.com/WTTJ/welcome-ui/compare/v3.4.1...v3.4.2) (2021-04-19)
**Note:** Version bump only for package @welcome-ui/core
## [3.4.1](https://github.com/WTTJ/welcome-ui/compare/v3.4.0...v3.4.1) (2021-04-19)
**Note:** Version bump only for package @welcome-ui/core
# [3.4.0](https://github.com/WTTJ/welcome-ui/compare/v3.3.0...v3.4.0) (2021-04-19)
### Features
* new webfont workflow ([#820](https://github.com/WTTJ/welcome-ui/issues/820)) ([e2d5c35](https://github.com/WTTJ/welcome-ui/commit/e2d5c35c89aa855e815437bcf258eee1db56e3b8))
# [3.1.0](https://github.com/WTTJ/welcome-ui/compare/v3.0.1...v3.1.0) (2021-03-25)
### Features
* add Drawer component ([#836](https://github.com/WTTJ/welcome-ui/issues/836)) ([90c6296](https://github.com/WTTJ/welcome-ui/commit/90c6296f3d9feca72c2213f621b68f92fd0ff77b))
## [2.15.3](https://github.com/WTTJ/welcome-ui/compare/v2.15.2...v2.15.3) (2021-02-04)
**Note:** Version bump only for package @welcome-ui/core
# [2.8.0](https://github.com/WTTJ/welcome-ui/compare/v2.7.3...v2.8.0) (2020-10-27)
### Features
* add Popover component ([#797](https://github.com/WTTJ/welcome-ui/issues/797)) ([5442882](https://github.com/WTTJ/welcome-ui/commit/544288263d699ad1bc2e42eeb49f75aeb8b5100a))
## [2.7.1](https://github.com/WTTJ/welcome-ui/compare/v2.7.0...v2.7.1) (2020-10-20)
### Bug Fixes
* put back normalize for now ([#795](https://github.com/WTTJ/welcome-ui/issues/795)) ([2ec3e74](https://github.com/WTTJ/welcome-ui/commit/2ec3e744103f87d0e31ea6ff7fe99918987eccb7))
# [2.7.0](https://github.com/WTTJ/welcome-ui/compare/v2.6.3...v2.7.0) (2020-10-20)
### Features
* add useReset on WuiProvider for GlobalStyle ([#794](https://github.com/WTTJ/welcome-ui/issues/794)) ([9e21bb4](https://github.com/WTTJ/welcome-ui/commit/9e21bb454dab54d60380fb959f5fe2a4fcca560b))
## [2.6.3](https://github.com/WTTJ/welcome-ui/compare/v2.6.2...v2.6.3) (2020-10-19)
**Note:** Version bump only for package @welcome-ui/core
## [2.1.4](https://github.com/WTTJ/welcome-ui/compare/v2.1.3...v2.1.4) (2020-09-02)
**Note:** Version bump only for package @welcome-ui/core
## [2.0.2](https://github.com/WTTJ/welcome-ui/compare/v2.0.1...v2.0.2) (2020-08-04)
### Bug Fixes
* https for all links ([#755](https://github.com/WTTJ/welcome-ui/issues/755)) ([6b8e0ea](https://github.com/WTTJ/welcome-ui/commit/6b8e0ea7807486510169437bb909cb65038ff6f5))
## [1.29.2](https://github.com/WTTJ/welcome-ui/compare/v1.29.1...v1.29.2) (2020-06-10)
**Note:** Version bump only for package @welcome-ui/core
# [1.27.0](https://github.com/WTTJ/welcome-ui/compare/v1.26.4...v1.27.0) (2020-05-25)
### Features
* swiper component ([#633](https://github.com/WTTJ/welcome-ui/issues/633)) ([e212942](https://github.com/WTTJ/welcome-ui/commit/e21294266efd42ccdf5899c9ccfd4cca87e97e8a))
## [1.26.4](https://github.com/WTTJ/welcome-ui/compare/v1.26.3...v1.26.4) (2020-05-07)
### Bug Fixes
* import whole utils package rather than sub-package ([#623](https://github.com/WTTJ/welcome-ui/issues/623)) ([bcd56d6](https://github.com/WTTJ/welcome-ui/commit/bcd56d601df4ff7c9b3aba7915fea92f1cddea57))
## [1.26.3](https://github.com/WTTJ/welcome-ui/compare/v1.26.2...v1.26.3) (2020-05-07)
**Note:** Version bump only for package @welcome-ui/core
# [1.26.0](https://github.com/WTTJ/welcome-ui/compare/v1.25.2...v1.26.0) (2020-05-04)
### Features
* add Accordion component ([#613](https://github.com/WTTJ/welcome-ui/issues/613)) ([f7be647](https://github.com/WTTJ/welcome-ui/commit/f7be6477aae898ecc4a2cb804e96f1ce9aaa0e87))
## [1.25.1](https://github.com/WTTJ/welcome-ui/compare/v1.25.0...v1.25.1) (2020-04-23)
**Note:** Version bump only for package @welcome-ui/core
# [1.23.0](https://github.com/WTTJ/welcome-ui/compare/v1.22.1...v1.23.0) (2020-04-14)
### Features
* add Avatar component ([#598](https://github.com/WTTJ/welcome-ui/issues/598)) ([6f2f38e](https://github.com/WTTJ/welcome-ui/commit/6f2f38e9729b97d51011be95ea73b5414e998cd3))
* add Breadcrumb component ([#600](https://github.com/WTTJ/welcome-ui/issues/600)) ([962d83d](https://github.com/WTTJ/welcome-ui/commit/962d83db290ae20290b7a5d2bf08bf244a9c47b0))
# [1.22.0](https://github.com/WTTJ/welcome-ui/compare/v1.21.2...v1.22.0) (2020-04-10)
### Features
* automate icon font ([#583](https://github.com/WTTJ/welcome-ui/issues/583)) ([e0e92c6](https://github.com/WTTJ/welcome-ui/commit/e0e92c6f7a37d7eccb4a31817811be84c50ec5fb))
## [1.21.2](https://github.com/WTTJ/welcome-ui/compare/v1.21.1...v1.21.2) (2020-04-09)
**Note:** Version bump only for package @welcome-ui/core
## [1.20.1](https://github.com/WTTJ/welcome-ui/compare/v1.20.0...v1.20.1) (2020-04-06)
**Note:** Version bump only for package @welcome-ui/core
# [1.20.0](https://github.com/WTTJ/welcome-ui/compare/v1.19.2...v1.20.0) (2020-04-06)
**Note:** Version bump only for package @welcome-ui/core
## [1.10.1](https://github.com/WTTJ/welcome-ui/compare/v1.10.0...v1.10.1) (2020-02-18)
### Bug Fixes
* **core:** use correct ramda import ([#541](https://github.com/WTTJ/welcome-ui/issues/541)) ([0f72839](https://github.com/WTTJ/welcome-ui/commit/0f72839047321a7951511715e9d0c2fc477fdade))
# [1.10.0](https://github.com/WTTJ/welcome-ui/compare/v1.9.3...v1.10.0) (2020-02-17)
### Features
* add new font ([#459](https://github.com/WTTJ/welcome-ui/issues/459)) ([473ed4d](https://github.com/WTTJ/welcome-ui/commit/473ed4d95d6b3149c28ead1cb58fa3807be0b645))
## [1.9.3](https://github.com/WTTJ/welcome-ui/compare/v1.9.2...v1.9.3) (2020-02-12)
**Note:** Version bump only for package @welcome-ui/core
# [1.8.0](https://github.com/WTTJ/welcome-ui/compare/v1.7.1...v1.8.0) (2020-02-11)
### Bug Fixes
* remove focus on firefox ([#529](https://github.com/WTTJ/welcome-ui/issues/529)) ([03511e6](https://github.com/WTTJ/welcome-ui/commit/03511e65d6aa812f28c297c46a0927c683028da0))
## [1.7.1](https://github.com/WTTJ/welcome-ui/compare/v1.7.0...v1.7.1) (2020-02-04)
**Note:** Version bump only for package @welcome-ui/core
# [1.7.0](https://github.com/WTTJ/welcome-ui/compare/v1.6.3...v1.7.0) (2020-02-04)
### Features
* add Toast component ([#523](https://github.com/WTTJ/welcome-ui/issues/523)) ([c9fa7f5](https://github.com/WTTJ/welcome-ui/commit/c9fa7f5694494523aaaff422fc20c028a645c96f))
# [1.6.0](https://github.com/WTTJ/welcome-ui/compare/v1.5.3...v1.6.0) (2020-01-28)
### Features
* add Modal component ([#501](https://github.com/WTTJ/welcome-ui/issues/501)) ([c6fd1bd](https://github.com/WTTJ/welcome-ui/commit/c6fd1bd48bfac86eda2f6354163a1bd2d89c9795))
## [1.5.2](https://github.com/WTTJ/welcome-ui/compare/v1.5.1...v1.5.2) (2020-01-21)
**Note:** Version bump only for package @welcome-ui/core
# [1.4.0](https://github.com/WTTJ/welcome-ui/compare/v1.3.0...v1.4.0) (2020-01-15)
### Features
* add Card component ([a926070](https://github.com/WTTJ/welcome-ui/commit/a926070eae0e8b0f62f29b54950daf472d6d88e8))
# [1.3.0](https://github.com/WTTJ/welcome-ui/compare/v1.2.0...v1.3.0) (2020-01-14)
### Features
* add Table component ([be42342](https://github.com/WTTJ/welcome-ui/commit/be423428309ba2cf065a19187bbf9b7478da3402))
# [1.2.0](https://github.com/WTTJ/welcome-ui/compare/v1.1.6...v1.2.0) (2020-01-13)
### Features
* add Dark theme ([#481](https://github.com/WTTJ/welcome-ui/issues/481)) ([96c0cf3](https://github.com/WTTJ/welcome-ui/commit/96c0cf3b3f7cff70a0dc5d548222c0eaed753ca2))
## [1.1.4](https://github.com/WTTJ/welcome-ui/compare/v1.1.3...v1.1.4) (2020-01-08)
**Note:** Version bump only for package @welcome-ui/core
# First Assignment: Ninja's Path
## Authors
Josep: [Github](https://github.com/joseppi)
Marc: [Github](https://github.com/MaxitoSama)
## Information about the game.
Ninja's Path
Short 2D platform game with enemies that make pathfinding.
There are two types of controls.
For gameplay:
- Left = "D"
- Right = "A"
- Slide = "S + A/D"
- Jump = "Space"
- Sprint = "A/D + Left Shift"
- Music Volume Up = "+"
- Music Volume Down = "-"
- Save = "F5"
- Load = "F6"
- Pause = "P"
- Select GUI buttons = "Shift"
- Execute selected button = "Enter"
For Testing:
- First Scene = "F1"
- Start of Scene = "F2"
- See Colliders = "F3"
- Show Colliders GUI & Enemies Pathfinding = "F8"
- Save = "F5"
- Load = "F6"
- God Mode = "F10"
We tried to implement slow motion in the game, but we ran out of time and so we couldn't adapt it to our mechanics.
You can test it by pressing Q.
## Github Link
[Repository](https://github.com/MaxitoSama/GDJM)
[Web](https://maxitosama.github.io/GDJM/)
## Licence
Tile Map: [Link](http://www.gameart2d.com/free-platformer-game-tileset.html)
Main Character: [Link](http://www.gameart2d.com/ninja-adventure---free-sprites.html)
Background Music: [Link](https://www.youtube.com/watch?v=KCoqdCjzFV8)
# docker_gpg
Messing around with using docker to hide master key operations
---
title: Write Code Everyday
tags: [inspiration, rant]
date: 2014-04-10T12:34:58+03:00
---
> It’s important to note that that I don’t particularly care about the outward perception of the above Github chart. I think that’s the most important take away from this experiment: this is about a change that you’re making in your life for yourself not a change that you’re making to satisfy someone else’s perception of your work. The same goes for any form of dieting or exercise: if you don’t care about improving yourself then you’ll never actually succeed. - John Resig
It's inspiring to read about how programming greats like [John Resig](http://ejohn.org) struggle with working on side projects, and about the changes they make to their lifestyle in order to push something meaningful each day.
I think no matter how experienced or productive one is as a programmer, each of us struggles with the same issues of feeling lazy, bored, and occasionally too burnt out to code. What's motivated me most after reading the post is the fact that if John Resig can work this hard to write code each day, then so should I.
I'm currently on a 20-day streak on GitHub, which is the longest I've been on thus far. Working on [newman](http://github.com/a85/newman) has been both fun and challenging enough to keep me engaged for 3 weeks. I'm quite anxious about how I will keep up the streak after the release, but as John says, the important point is to make a change in yourself for something that you care about.
For those of you who haven't read the blog [post](http://ejohn.org/blog/write-code-every-day/), I'd strongly encourage you to do so!
# Kitakaze to Taiyou

- **type**: movie
- **episodes**: 1
- **original-name**: きたかぜとたいよう
- **start-date**: 2060-04-19
- **rating**: G - All Ages
## Tags
## Sinopse
A short film released by Gakken. It is based on the Aesop's Fable "The North Wind and the Sun."
## Links
- [My Anime list](https://myanimelist.net/anime/32693/Kitakaze_to_Taiyou)
- [Official Site](http://gakken.co.jp/campaign/70th/archives/)
- [AnimeDB](http://anidb.info/perl-bin/animedb.pl?show=anime&aid=7182)
---
title: Develop apps for the Universal Windows Platform (UWP)
ms.date: 10/24/2017
ms.technology: vs-ide-general
ms.topic: conceptual
ms.assetid: eac59cb6-f12e-4a77-9953-6d62b164a643
author: TerryGLee
ms.author: tglee
manager: jillfra
ms.workload:
- uwp
ms.openlocfilehash: 2ef09f58d22e3cb72af5b745f16b2acf8920900e
ms.sourcegitcommit: d233ca00ad45e50cf62cca0d0b95dc69f0a87ad6
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 01/01/2020
ms.locfileid: "75587146"
---
# <a name="develop-apps-for-the-universal-windows-platform-uwp"></a>Develop apps for the Universal Windows Platform (UWP)
With the Universal Windows Platform and our one Windows core, you can run the same app on any Windows 10 device, whether it's a phone or a desktop PC. Create these universal Windows apps with Visual Studio and the universal Windows app development tools.

Führen Sie Ihre App auf einem Windows 10-Phone, Windows 10-Desktop oder einer Xbox aus. Es ist das gleiche App-Paket. Mit der Einführung eines einzelnen, einheitlichen Windows 10-Kerns kann ein App-Paket auf allen Plattformen ausgeführt werden. Mehrere Plattformen verfügen über Erweiterungs-SDKs, die Sie zu Ihrer App hinzufügen können, um die Vorteile bestimmter plattformspezifischer Verhaltensweisen zu nutzen. Mit dem Erweiterungs-SDK für Mobilgeräte wird z. B. die auf einem Windows Phone gedrückte ZURÜCK-Taste behandelt. Wenn Sie in Ihrem Projekt auf ein Erweiterungs-SDK verweisen, fügen Sie einfach Laufzeitüberprüfungen hinzu, um zu prüfen, ob dieses SDK für diese Plattform verfügbar ist. Auf dieses Weise können Sie das gleiche App-Paket für alle Plattformen verwenden.
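As a minimal illustration of such a runtime check (a sketch, not taken from this article's sample code), a C# app can use the `ApiInformation` class to detect whether a platform-specific API exists before using it:

```csharp
using Windows.Foundation.Metadata;
using Windows.Phone.UI.Input; // compile-time reference comes from the mobile extension SDK

// Only attach the hardware back button handler where the API is actually
// present, so the same app package still runs on desktop and Xbox.
if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    HardwareButtons.BackPressed += (sender, args) =>
    {
        // Handle back navigation here.
        args.Handled = true;
    };
}
```

`IsTypePresent` returns `false` on device families that don't implement the type, so the guarded code is simply skipped there.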
**What's the Windows core?**
For the first time, Windows has been refactored to use a common core across all Windows 10 platforms. There is one common source, one common Windows kernel, one file I/O stack, and one app model. For the UI, there is just one XAML UI framework and one HTML UI framework. You can concentrate on building a great app, because we've made it easy to get your app running on different Windows 10 devices.
**What's the Universal Windows Platform?**
The Universal Windows Platform is simply a collection of contracts and versions. These let you choose the targets your app can run on. You no longer target an operating system; instead, you target one or more device families. For more information, see [Intro to the Universal Windows Platform](/windows/uwp/get-started/universal-application-platform-guide).
## <a name="requirements"></a>Requirements
The universal Windows app development tools come with emulators that you can use to see how your app looks on different devices. If you want to use these emulators, you need to install the software on a physical machine. The physical machine must run Windows 8.1 (x64) Professional edition or later, and have a processor that supports Client Hyper-V and Second Level Address Translation (SLAT). The emulators cannot be used when Visual Studio is installed on a virtual machine.
Here is the list of required software:
::: moniker range="vs-2017"
- [Windows 10](https://support.microsoft.com/help/17777/downloads-for-windows). Visual Studio 2017 supports UWP development only on Windows 10. For more information, see Visual Studio's [Platform targeting](/visualstudio/productinfo/vs2017-compatibility-vs) and [System requirements](/visualstudio/productinfo/vs2017-system-requirements-vs).
- [Visual Studio](https://visualstudio.microsoft.com/vs/older-downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=vs+2017+download). You also need the Universal Windows Platform development workload.

::: moniker-end
::: moniker range="vs-2019"
- [Windows 10](https://support.microsoft.com/help/17777/downloads-for-windows). Visual Studio 2019 supports UWP development only on Windows 10. For more information, see Visual Studio's [Platform targeting](/visualstudio/releases/2019/compatibility/) and [System requirements](/visualstudio/releases/2019/system-requirements/).
- [Visual Studio](https://visualstudio.microsoft.com/downloads). You also need the Universal Windows Platform development workload.

::: moniker-end
After installing this software, you need to enable your Windows 10 device for development. For more information, see [Enable your device for development](/windows/uwp/get-started/enable-your-device-for-development). You no longer need a developer license for each Windows 10 device.
## <a name="universal-windows-apps"></a>Universal Windows apps
Choose your preferred development language from C#, Visual Basic, C++, or JavaScript to create a UWP app for Windows 10 devices. Read [Create your first app](/windows/uwp/get-started/your-first-app), or watch the video [Tools for Windows 10 Overview](https://channel9.msdn.com/Series/ConnectOn-Demand/229).
If you have Windows Store 8.1 apps, Windows Phone 8.1 apps, or universal Windows apps created with Visual Studio 2015, you need to port them to the latest Universal Windows Platform. For more information, see [Move from Windows Runtime 8.x to UWP](/windows/uwp/porting/w8x-to-uwp-root).
After you create your universal Windows app, you must package it to install it on a Windows 10 device or submit it to the Windows Store. For more information, see [Packaging apps](/windows/uwp/packaging/index).
## <a name="see-also"></a>See also
- [Cross-platform mobile development in Visual Studio](../cross-platform/cross-platform-mobile-development-in-visual-studio.md)
---
title: Bots in Microsoft Teams
author: clearab
description: An overview of bots in Microsoft Teams.
ms.topic: overview
localization_priority: Normal
ms.author: anclear
---
# Bots in Microsoft Teams
A bot, also referred to as a chatbot or conversational bot, is an app that runs simple and repetitive automated tasks for users, such as customer service or support tasks. Examples of bots in everyday use include bots that provide information about the weather, make dinner reservations, or provide travel information. A bot interaction can be a quick question and answer, or it can be a complex conversation that provides access to services.
> [!VIDEO https://www.youtube-nocookie.com/embed/zSIysk0yL0Q]
Conversational bots allow users to interact with your web service through text, interactive cards, and task modules.


<img src="~/assets/images/task-module-example.png" alt="Invoke bot using task module" width="400"/>
Conversational bots are incredibly flexible and can be scoped to handle anything from a few simple commands to complex tasks powered by artificial intelligence and natural language processing. They can be one aspect of a larger application, or be completely stand-alone.
Finding the right mix of cards, text, and task modules is key to creating a useful bot. The following image shows a user conversing with a bot in a one-to-one chat using both text and interactive cards:
:::image type="content" source="~/assets/images/FAQPlusEndUser.gif" alt-text="Sample FAQ bot" border="true":::
Every interaction between the user and the bot is represented as an activity. When a bot receives an activity, it passes it on to its activity handlers. For more information, see [bot activity handlers](~/bots/bot-basics.md).
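The dispatch pattern behind activity handlers can be sketched in a few lines of plain JavaScript (a schematic illustration only — the class and method names here are invented for the sketch and are not the Bot Framework SDK):

```javascript
// Schematic sketch: each incoming activity is routed to the handler
// registered for its activity type.
class SimpleActivityDispatcher {
  constructor() {
    this.handlers = new Map();
  }

  // Register a handler for an activity type, e.g. "message".
  on(type, handler) {
    this.handlers.set(type, handler);
    return this;
  }

  // Route an incoming activity to its registered handler, if any.
  run(activity) {
    const handler = this.handlers.get(activity.type);
    return handler ? handler(activity) : `unhandled activity: ${activity.type}`;
  }
}

const bot = new SimpleActivityDispatcher()
  .on('message', (activity) => `echo: ${activity.text}`)
  .on('conversationUpdate', () => 'welcome to the chat!');

console.log(bot.run({ type: 'message', text: 'hi team' }));
console.log(bot.run({ type: 'conversationUpdate' }));
```

A real Teams bot registers similar handlers through the Bot Framework SDK rather than hand-rolling the dispatch, but the flow — one activity in, one type-specific handler invoked — is the same.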
In addition, bots are apps that have a conversational interface. You can interact with a bot using text, interactive cards, and speech. A bot behaves differently depending on whether the conversation is a channel or group chat, or a one-to-one conversation. Conversations are handled through the Bot Framework connector. For more information, see [conversation basics](~/bots/how-to/conversations/conversation-basics.md).
Your bot requires contextual information, such as user profile details to access relevant content and enhance the bot experience. For more information, see [get Teams context](~/bots/how-to/get-teams-context.md).
You can also send and receive files through the bot using Graph APIs or Teams bot APIs. For more information, see [send and receive files through the bot](~/bots/how-to/bots-filesv4.md).
In addition, rate limiting is used to optimize the bots used in your Teams application. To protect Microsoft Teams and its users, the bot APIs provide a rate limit for incoming requests. For more information, see [optimize your bot with rate limiting in Teams](~/bots/how-to/rate-limit.md).
With Microsoft Graph APIs for calls and online meetings, Microsoft Teams apps can now interact with users using voice and video. For more information, see [calls and meetings bots](~/bots/calls-and-meetings/calls-meetings-bots-overview.md).
You can use the Teams bot APIs to get information for one or more members of a chat or team. For more information, see [changes to Teams bot APIs for fetching team or chat members](~/resources/team-chat-member-api-changes.md).
## See also
[Create a bot for Teams](~/bots/how-to/create-a-bot-for-teams.md)
## Next step
> [!div class="nextstepaction"]
> [Bots and SDKs](~/bots/bot-features.md)
---
title: "Aggregation"
teaching: 10
exercises: 10
questions:
- "How can I calculate sums, averages, and other summary values?"
objectives:
- "Define aggregation and give examples of its use."
- "Write queries that compute aggregated values."
- "Trace the execution of a query that performs aggregation."
- "Explain how missing data is handled during aggregation."
keypoints:
- "Use aggregation functions to combine multiple values."
- "Aggregation functions ignore `null` values."
- "Aggregation happens after filtering."
- "Use GROUP BY to combine subsets separately."
- "If no aggregation function is specified for a field, the query may return an arbitrary value for that field."
---
We now want to calculate ranges and averages for our data.
We know how to select all of the dates from the `Visited` table:
~~~
SELECT dated FROM Visited;
~~~
{: .sql}
|dated |
|----------|
|1927-02-08|
|1927-02-10|
|1930-01-07|
|1930-01-12|
|1930-02-26|
|-null- |
|1932-01-14|
|1932-03-22|
but to combine them,
we must use an [aggregation function]({% link reference.md %}#aggregation-function)
such as `min` or `max`.
Each of these functions takes a set of records as input,
and produces a single record as output:
~~~
SELECT min(dated) FROM Visited;
~~~
{: .sql}
|min(dated)|
|----------|
|1927-02-08|

~~~
SELECT max(dated) FROM Visited;
~~~
{: .sql}
|max(dated)|
|----------|
|1932-03-22|
`min` and `max` are just two of
the aggregation functions built into SQL.
Three others are `avg`,
`count`,
and `sum`:
~~~
SELECT avg(reading) FROM Survey WHERE quant = 'sal';
~~~
{: .sql}
|avg(reading) |
|----------------|
|7.20333333333333|
~~~
SELECT count(reading) FROM Survey WHERE quant = 'sal';
~~~
{: .sql}
|count(reading)|
|--------------|
|9 |
~~~
SELECT sum(reading) FROM Survey WHERE quant = 'sal';
~~~
{: .sql}
|sum(reading)|
|------------|
|64.83 |
We used `count(reading)` here,
but we could just as easily have counted `quant`
or any other field in the table,
or even used `count(*)`,
since the function doesn't care about the values themselves,
just how many values there are.
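One caveat is worth knowing, though: `count(field)` only counts rows where that field is non-null, while `count(*)` counts every row, so the two can disagree once null values appear. The difference is easy to see on a throwaway table; the sketch below uses Python's built-in `sqlite3` module, and the table and values are invented for illustration:

```python
import sqlite3

# Scratch in-memory database; the rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Survey (quant TEXT, reading REAL)")
conn.executemany(
    "INSERT INTO Survey VALUES (?, ?)",
    [("sal", 0.05), ("sal", 0.21), ("sal", None)],
)

# count(*) counts rows; count(reading) skips the row whose reading is null.
rows, readings = conn.execute(
    "SELECT count(*), count(reading) FROM Survey WHERE quant = 'sal'"
).fetchone()
print(rows, readings)  # 3 2
```

In the salinity queries above there happen to be no null readings, which is why counting any field gives the same answer.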
SQL lets us do several aggregations at once.
We can,
for example,
find the range of sensible salinity measurements:
~~~
SELECT min(reading), max(reading) FROM Survey WHERE quant = 'sal' AND reading <= 1.0;
~~~
{: .sql}
|min(reading)|max(reading)|
|------------|------------|
|0.05 |0.21 |
We can also combine aggregated results with raw results,
although the output might surprise you:
~~~
SELECT person_id, count(*) FROM Survey WHERE quant = 'sal' AND reading <= 1.0;
~~~
{: .sql}
|person_id|count(\*)|
|------|--------|
|lake |7 |
Why does Lake's name appear rather than Roerich's or Dyer's?
The answer is that when it has to aggregate a field,
but isn't told how to,
the database manager chooses an actual value from the input set.
It might use the first one processed,
the last one,
or something else entirely.
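You can watch the database make this arbitrary choice on a scratch table of your own. The sketch below uses Python's built-in `sqlite3` module with invented rows: the count is predictable, but the bare `person` column may come back from any input row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Survey (person TEXT, reading REAL)")
conn.executemany(
    "INSERT INTO Survey VALUES (?, ?)",
    [("dyer", 0.13), ("lake", 0.05), ("roe", 0.09)],
)

# 'person' is neither aggregated nor grouped, so the database manager
# is free to pick its value from any of the three input rows.
person, n = conn.execute("SELECT person, count(*) FROM Survey").fetchone()
print(n)       # 3
print(person)  # one of 'dyer', 'lake', or 'roe'
```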
Another important fact is that when there are no values to aggregate ---
for example, where there are no rows satisfying the `WHERE` clause ---
aggregation's result is "don't know"
rather than zero or some other arbitrary value:
~~~
SELECT person_id, max(reading), sum(reading) FROM Survey WHERE quant = 'missing';
~~~
{: .sql}
|person_id|max(reading)|sum(reading)|
|------|------------|------------|
|-null-|-null- |-null- |
One final important feature of aggregation functions is that
they are inconsistent with the rest of SQL in a very useful way.
If we add two values,
and one of them is null,
the result is null.
By extension,
if we use `sum` to add all the values in a set,
and any of those values are null,
the result should also be null.
It's much more useful,
though,
for aggregation functions to ignore null values
and only combine those that are non-null.
This behavior lets us write our queries as:
~~~
SELECT min(dated) FROM Visited;
~~~
{: .sql}
|min(dated)|
|----------|
|1927-02-08|
instead of always having to filter explicitly:
~~~
SELECT min(dated) FROM Visited WHERE dated IS NOT NULL;
~~~
{: .sql}
|min(dated)|
|----------|
|1927-02-08|
Aggregating all records at once doesn't always make sense.
For example,
suppose we suspect that there is a systematic bias in our data,
and that some scientists' radiation readings are higher than others.
We know that this doesn't work:
~~~
SELECT person_id, count(reading), round(avg(reading), 2)
FROM Survey
WHERE quant = 'rad';
~~~
{: .sql}
|person_id|count(reading)|round(avg(reading), 2)|
|------|--------------|----------------------|
|roe |8 |6.56 |
because the database manager selects a single arbitrary scientist's name
rather than aggregating separately for each scientist.
Since there are only five scientists,
we could write five queries of the form:
~~~
SELECT person_id, count(reading), round(avg(reading), 2)
FROM Survey
WHERE quant = 'rad'
AND person_id = 'dyer';
~~~
{: .sql}
|person_id|count(reading)|round(avg(reading), 2)|
|------|--------------|----------------------|
|dyer |2 |8.81 |
but this would be tedious,
and if we ever had a data set with fifty or five hundred scientists,
the chances of us getting all of those queries right are small.
What we need to do is
tell the database manager to aggregate the readings for each scientist separately
using a `GROUP BY` clause:
~~~
SELECT person_id, count(reading), round(avg(reading), 2)
FROM Survey
WHERE quant = 'rad'
GROUP BY person_id;
~~~
{: .sql}
|person_id|count(reading)|round(avg(reading), 2)|
|------|--------------|----------------------|
|dyer |2 |8.81 |
|lake |2 |1.82 |
|pb |3 |6.66 |
|roe |1 |11.25 |
`GROUP BY` does exactly what its name implies:
groups all the records with the same value for the specified field together
so that aggregation can process each batch separately.
Since all the records in each batch have the same value for `person_id`,
it no longer matters that the database manager
is picking an arbitrary one to display
alongside the aggregated `reading` values.
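Conceptually, `GROUP BY` first gathers the filtered records into one batch per distinct key, then runs the aggregation once per batch. The plain-Python sketch below mimics those two steps; the readings are invented, chosen to reproduce the radiation averages in the table above:

```python
from collections import defaultdict

# (person_id, reading) pairs standing in for the filtered Survey records.
rows = [
    ("dyer", 9.82), ("dyer", 7.80),
    ("lake", 2.19), ("lake", 1.46),
    ("pb", 8.41), ("pb", 7.22), ("pb", 4.35),
    ("roe", 11.25),
]

# Step 1: group the records into one batch per person_id.
batches = defaultdict(list)
for person, reading in rows:
    batches[person].append(reading)

# Step 2: aggregate each batch separately, as GROUP BY does.
summary = {
    person: (len(readings), round(sum(readings) / len(readings), 2))
    for person, readings in batches.items()
}
print(summary)  # dyer: (2, 8.81), lake: (2, 1.82), pb: (3, 6.66), roe: (1, 11.25)
```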
Just as we can sort by multiple criteria at once,
we can also group by multiple criteria.
To get the average reading by scientist and quantity measured,
for example,
we just add another field to the `GROUP BY` clause:
~~~
SELECT person_id, quant, count(reading), round(avg(reading), 2)
FROM Survey
GROUP BY person_id, quant;
~~~
{: .sql}
|person_id|quant|count(reading)|round(avg(reading), 2)|
|------|-----|--------------|----------------------|
|-null-|sal |1 |0.06 |
|-null-|temp |1 |-26.0 |
|dyer |rad |2 |8.81 |
|dyer |sal |2 |0.11 |
|lake |rad |2 |1.82 |
|lake |sal |4 |0.11 |
|lake |temp |1 |-16.0 |
|pb |rad |3 |6.66 |
|pb |temp |2 |-20.0 |
|roe |rad |1 |11.25 |
|roe |sal |2 |32.05 |
Note that we have added `quant` to the list of fields displayed,
since the results wouldn't make much sense otherwise.
Let's go one step further and remove all the entries
where we don't know who took the measurement:
~~~
SELECT person_id, quant, count(reading), round(avg(reading), 2)
FROM Survey
WHERE person_id IS NOT NULL
GROUP BY person_id, quant
ORDER BY person_id, quant;
~~~
{: .sql}
|person_id|quant|count(reading)|round(avg(reading), 2)|
|------|-----|--------------|----------------------|
|dyer |rad |2 |8.81 |
|dyer |sal |2 |0.11 |
|lake |rad |2 |1.82 |
|lake |sal |4 |0.11 |
|lake |temp |1 |-16.0 |
|pb |rad |3 |6.66 |
|pb |temp |2 |-20.0 |
|roe |rad |1 |11.25 |
|roe |sal |2 |32.05 |
Looking more closely,
this query:
1. selected records from the `Survey` table
where the `person_id` field was not null;
2. grouped those records into subsets
so that the `person_id` and `quant` values in each subset
were the same;
3. ordered those subsets first by `person_id`,
and then within each sub-group by `quant`;
and
4. counted the number of records in each subset,
calculated the average `reading` in each,
and chose a `person_id` and `quant` value from each
(it doesn't matter which ones,
since they're all equal).
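You can watch all four steps work together on a scratch database. The sketch below uses Python's built-in `sqlite3` module, with a handful of invented rows standing in for the full `Survey` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Survey (person_id TEXT, quant TEXT, reading REAL)")
conn.executemany(
    "INSERT INTO Survey VALUES (?, ?, ?)",
    [
        (None,   "sal", 0.06),   # dropped by the WHERE clause
        ("lake", "sal", 0.05),
        ("lake", "rad", 2.19),
        ("dyer", "rad", 9.82),
        ("dyer", "rad", 7.80),
    ],
)

# Filter, group, order, and aggregate -- the four steps described above.
result = conn.execute(
    """
    SELECT person_id, quant, count(reading), round(avg(reading), 2)
    FROM Survey
    WHERE person_id IS NOT NULL
    GROUP BY person_id, quant
    ORDER BY person_id, quant
    """
).fetchall()
for row in result:
    print(row)
# ('dyer', 'rad', 2, 8.81)
# ('lake', 'rad', 1, 2.19)
# ('lake', 'sal', 1, 0.05)
```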
> ## Counting Temperature Readings
>
> How many temperature readings did Frank Pabodie record,
> and what was their average value?
>
> > ## Solution
> >
> > ~~~
> > SELECT count(reading), avg(reading) FROM Survey WHERE quant = 'temp' AND person_id = 'pb';
> > ~~~
> > {: .sql}
> >
> > |count(reading)|avg(reading)|
> > |--------------|------------|
> > |2 |-20.0 |
> {: .solution}
{: .challenge}
> ## Averaging with NULL
>
> The average of a set of values is the sum of the values
> divided by the number of values.
> Does this mean that the `avg` function returns 2.0 or 3.0
> when given the values 1.0, `null`, and 5.0?
>
> > ## Solution
> > The answer is 3.0.
> > `NULL` is not a value; it is the absence of a value.
> > As such it is not included in the calculation.
> >
> > You can confirm this by executing this code:
> > ```
> > SELECT AVG(a) FROM (
> > SELECT 1 AS a
> > UNION ALL SELECT NULL
> > UNION ALL SELECT 5);
> > ```
> > {: .sql}
> {: .solution}
{: .challenge}
> ## What Does This Query Do?
>
> We want to calculate the difference between
> each individual radiation reading
> and the average of all the radiation readings.
> We write the query:
>
> ~~~
> SELECT reading - avg(reading) FROM Survey WHERE quant = 'rad';
> ~~~
> {: .sql}
>
> What does this actually produce, and why?
{: .challenge}
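One way to check your answer to the challenge above is to run the query against a scratch table. This sketch uses Python's built-in `sqlite3` module with invented readings whose average is 8.0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Survey (quant TEXT, reading REAL)")
conn.executemany(
    "INSERT INTO Survey VALUES (?, ?)",
    [("rad", 10.0), ("rad", 6.0), ("rad", 8.0)],
)

result = conn.execute(
    "SELECT reading - avg(reading) FROM Survey WHERE quant = 'rad'"
).fetchall()

# Because avg aggregates the whole set, the query returns a single row:
# one (arbitrary) reading minus the average, not one row per reading.
print(result)
```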
> ## Ordering When Concatenating
>
> The function `group_concat(field, separator)`
> concatenates all the values in a field
> using the specified separator character
> (or ',' if the separator isn't specified).
> Use this to produce a one-line list of scientists' names,
> such as:
>
> ~~~
> William Dyer, Frank Pabodie, Anderson Lake, Valentina Roerich, Frank Danforth
> ~~~
> {: .sql}
>
> Can you find a way to order the list by surname?
{: .challenge}
Setting up a Windows Development Environment
============================================
Java
----
For a 32-bit build, set `JAVA_HOME` to a 32-bit JDK, eg. `C:\Program Files (x86)\java\jdk1.6.0_24`.
For a 64-bit build, set `JAVA_HOME` to a 64-bit JDK, eg. `C:\Program Files\java\jdk1.6.0_24`.
Native
------
### MSVC / Visual Studio
JNA uses the free MS Visual Studio C++ Express compiler to compile
native bits if MSVC is set in the environment. The MS compiler provides
structured exception handling (SEH), which allows JNA to trap native faults when
run in protected mode.
On 64-bit Windows, you will still need to install mingw64 in order to
compile a small bit of inline assembly.
To use the MS compiler, ensure that the appropriate 32-bit or 64-bit versions
of cl.exe/ml.exe/ml64.exe/link.exe are in your PATH and that the INCLUDE and
LIB environment variables are set properly (as in VCVARS.BAT).
Sample configuration setting up INCLUDE/LIB (see an alternative below):
``` shell
export MSVC="/c/Program Files (x86)/Microsoft Visual Studio 10.0/vc"
export WSDK="/c/Program Files (x86)/Microsoft SDKs/Windows/v7.0A"
export WSDK_64="/c/Program Files/Microsoft SDKs/Windows/v7.1"
export INCLUDE="$(cygpath -m "$MSVC")/include;$(cygpath -m "$WSDK")/include"
# for 64-bit target
export LIB="$(cygpath -m "$MSVC")/lib/amd64;$(cygpath -m "$WSDK_64")/lib/x64"
# for 32-bit target
export LIB="$(cygpath -m "$MSVC")/lib;$(cygpath -m "$WSDK")/lib"
```
### mingw
Install [cygwin](http://www.cygwin.com/).
When installing cygwin, include ssh, git, make, autotools, and mingw{32|64}-g++.
Ensure the mingw compiler (i686-pc-mingw32-gcc.exe or i686-pc-mingw64-gcc.exe) is on your path.
If `cl.exe` is found on your %PATH%, you'll need to invoke `ant native
-DUSE_MSVC=false` in order to avoid using the MS compiler.
### Issues
#### Backslash R Command Not Found
If you get errors such as `'\r': command not found`, run `dos2unix -f [filename]`
for each file that it's complaining about.
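If `dos2unix` isn't installed, the same repair (replacing Windows CRLF line endings with Unix LF) can be sketched in a few lines of Python; the scratch file below stands in for whichever script Cygwin is complaining about:

```python
import os
import tempfile
from pathlib import Path

def dos2unix(path):
    """Rewrite a file in place, converting CRLF line endings to LF."""
    data = Path(path).read_bytes()
    Path(path).write_bytes(data.replace(b"\r\n", b"\n"))

# Demonstrate on a scratch file that has Windows line endings.
fd, name = tempfile.mkstemp()
os.close(fd)
Path(name).write_bytes(b"#!/bin/sh\r\necho ok\r\n")
dos2unix(name)
fixed = Path(name).read_bytes()
os.remove(name)
print(fixed)  # b'#!/bin/sh\necho ok\n'
```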
### Building
Type `ant` from the top to build the project.
Recipe for building on Windows
------------------------------
These are the contents of a note I made for myself to be able to build JNA on
Windows.
This builds the library based on the Visual C++ compiler.
<pre>
0. Start-Point: A clean Windows 10 Installation with all patches as of 2017-11-05
1. Install Visual C++ Build Tools 2015 (http://landinghub.visualstudio.com/visual-cpp-build-tools) (the 8.1 SDK is enough)
2. Install Oracle JDK 8u152 (64 bit)
3. Install Cygwin 64 Bit (https://cygwin.com/install.html)
- make
- automake
- automake1.15
- libtool
- mingw64-x86_64-gcc-g++ (Version 5.4.0-4)
- mingw64-x86_64-gcc-core (Version 5.4.0-4)
- gcc-g++
- diffutils
- git
4. Ensure ant, maven, cygwin (64 Bit!) are accessible from the PATH
5. Run
"C:\Program Files (x86)\Microsoft Visual C++ Build Tools\vcbuildtools.bat" x64
inside a Windows command prompt
6. Point JAVA_HOME to the root of a 64 Bit JDK
7. Run native build
For 32bit:
0. Start-Point: A clean Windows 10 Installation with all patches as of 2017-11-05
1. Install Visual C++ Build Tools 2015 (http://landinghub.visualstudio.com/visual-cpp-build-tools) (the 8.1 SDK is enough)
2. Install Oracle JDK 8u152 (32 bit)
3. Install Cygwin 32 Bit (https://cygwin.com/install.html)
- make
- automake
- automake1.15
- libtool
- mingw64-i686-gcc-g++ (Version 5.4.0-4)
- mingw64-i686-gcc-core (Version 5.4.0-4)
- gcc-g++
- diffutils
- git
4. Ensure ant, maven, cygwin (32 Bit!) are accessible from the PATH
5. Run
"C:\Program Files (x86)\Microsoft Visual C++ Build Tools\vcbuildtools.bat" x86
inside a Windows command prompt
6. Point JAVA_HOME to the root of a 32 Bit JDK
7. Run native build
</pre>
To build without Visual C++, using only Cygwin, just skip steps 1 and 5.
# Solutions to the exercises in C Programming: A Modern Approach, Second Edition
My solutions to C Programming: A Modern Approach, Second Edition
This repository contains my solutions to the end-of-chapter exercises in *C Programming: A Modern Approach, Second Edition*.
# finances
An app to help understand and manage one's finances
---
layout: post
title: "Ask Not What AI Can Do, But What AI Should Do: Towards a Framework of Task Delegability"
date: 2019-02-08 19:00:02
categories: arXiv_AI
tags: arXiv_AI Survey
author: Brian Lubars, Chenhao Tan
mathjax: true
---
* content
{:toc}
##### Abstract
Although artificial intelligence holds promise for addressing societal challenges, issues of exactly which tasks to automate and the extent to do so remain understudied. We approach the problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to artificial intelligence. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences in different tasks, we build a dataset of 100 tasks from academic papers, popular media portrayal of AI, and everyday life. For each task, we administer a survey to collect judgments of each factor and ask subjects to pick the extent to which they prefer AI involvement. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Our framework can effectively predict human preferences in degrees of AI assistance. Among the four factors, trust is the most predictive of human preferences of optimal human-machine delegation. This framework represents a first step towards characterizing human preferences of automation across tasks. We hope this work may encourage and aid in future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than dictating any direction in technology development.
##### URL
[http://arxiv.org/abs/1902.03245](http://arxiv.org/abs/1902.03245)
##### PDF
[http://arxiv.org/pdf/1902.03245](http://arxiv.org/pdf/1902.03245)
---
title: What's new in version 2002
titleSuffix: Configuration Manager
description: Get details about changes and new features in version 2002 of Configuration Manager current branch.
ms.date: 07/27/2020
ms.prod: configuration-manager
ms.technology: configmgr-core
ms.topic: conceptual
ms.assetid: de718cdc-d0a9-47e2-9c99-8fa2cb25b5f8
author: mestew
ms.author: mstewart
manager: dougeby
ms.openlocfilehash: 4035a6684fc346205f7c7af109bf4c0389576e77
ms.sourcegitcommit: 4b8c317c71535c2d464f336c03b5bebdd2c6d4c9
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 09/15/2020
ms.locfileid: "90083971"
---
# <a name="whats-new-in-version-2002-of-configuration-manager-current-branch"></a>Novedades de la versión 2002 de la rama actual de Configuration Manager
*Se aplica a: Configuration Manager (rama actual)*
La actualización 2002 de la rama actual de Configuration Manager está disponible como una actualización en consola. Aplique esta actualización en los sitios que ejecuten la versión 1810 o versiones posteriores. <!-- baseline only statement:-->Al instalar un nuevo sitio, también está disponible como una versión de línea de base. En este artículo se resumen los cambios y las nuevas características de la versión 2002 de Configuration Manager.
Revise siempre la lista de comprobación más reciente para instalar esta actualización. Para obtener más información, vea la [lista de comprobación para la instalación de la actualización 2002](../../servers/manage/checklist-for-installing-update-2002.md). Después de actualizar un sitio, revise también la [lista de comprobación posterior a la actualización](../../servers/manage/checklist-for-installing-update-2002.md#post-update-checklist).
Para aprovechar al máximo las nuevas características de Configuration Manager, después de actualizar el sitio, actualice también los clientes a la versión más reciente. Aunque la funcionalidad nueva aparece en la consola de Configuration Manager cuando se actualiza el sitio y la consola, la totalidad del escenario no es funcional hasta que la versión del cliente también es la más reciente.
> [!TIP]
> Para obtener una notificación cuando se actualice esta página, copie y pegue la siguiente dirección URL en su lector de fuentes RSS: `https://docs.microsoft.com/api/search/rss?search=%22what%27s+new+in+version+2002+-+Configuration+Manager%22&locale=en-us`
## <a name="microsoft-endpoint-manager-tenant-attach"></a><a name="bkmk_tenant"></a> Asociación de inquilinos de Microsoft Endpoint Manager
### <a name="device-sync-and-device-actions"></a><a name="bkmk_attach"></a> Sincronización de dispositivos y acciones de dispositivo
<!--3555758-->
Microsoft Endpoint Manager es una solución integrada para administrar todos los dispositivos. Microsoft reúne Configuration Manager e Intune en una única consola denominada **centro de administración de Microsoft Endpoint Manager**. A partir de esta versión, puede cargar los dispositivos de Configuration Manager en el servicio en la nube y realizar acciones desde la hoja **Dispositivos** del centro de administración.
Para obtener más información, vea el artículo [Asociación de inquilinos de Microsoft Endpoint Manager](../../../tenant-attach/device-sync-actions.md).
## <a name="site-infrastructure"></a><a name="bkmk_infra"></a> Infraestructura del sitio
### <a name="remove-a-central-administration-site"></a>Eliminación de un sitio de administración central
<!-- 3607277 -->
Si su jerarquía consta de un sitio de administración central (CAS) y de un único sitio primario secundario, ahora puede quitar el CAS. Esta acción simplifica la infraestructura de Configuration Manager a un único sitio primario independiente. Elimina las complejidades de la replicación de sitio a sitio y centra sus tareas de administración en un único sitio primario.
Para obtener más información, vea [Eliminación del CAS](../../servers/deploy/install/remove-central-administration-site.md).
### <a name="new-management-insight-rules"></a>Nuevas reglas de conclusiones de administración
Esta versión incluye las siguientes reglas de conclusiones de administración:
- Nueve reglas en el grupo **Configuration Manager Assessment**, cortesía del soporte técnico Premier de Microsoft Azure de ingeniería de campo. Estas reglas son un ejemplo de las muchas comprobaciones más que proporciona el soporte técnico Premier de Microsoft Azure en el centro de servicios.<!-- 3607758 -->
- La detección de grupos de seguridad de Active Directory está configurada para ejecutarse con demasiada frecuencia.
- La detección de sistemas de Active Directory está configurada para ejecutarse con demasiada frecuencia.
- La detección de usuarios de Active Directory está configurada para ejecutarse con demasiada frecuencia.
- Recopilaciones limitadas a Todos los sistemas o Todos los usuarios.
- La detección de latidos está deshabilitada.
- Consultas de recopilaciones de larga duración habilitadas para las actualizaciones incrementales.
- Reducir el número de aplicaciones y paquetes en los puntos de distribución.
- Problemas de instalación del sitio secundario.
- Actualizar todos los sitios a la misma versión.
- Dos reglas adicionales en el grupo de **Cloud Services** para ayudarle a configurar el sitio con el fin de agregar comunicación HTTPS segura:<!-- 6268489 -->
- Sitios que no tienen una configuración correcta de HTTPS.
- Dispositivos no cargados en Azure AD.
Para obtener más información, vea [Información de administración](../../servers/manage/management-insights.md).
### <a name="improvements-to-administration-service"></a>Mejoras en el servicio de administración
<!-- 5728365 -->
El servicio de administración es una API REST del proveedor de SMS. Anteriormente, había que implementar una de las siguientes dependencias:
- Habilitar HTTP mejorado para todo el sitio
- Enlazar de forma manual un certificado basado en PKI al IIS en el servidor que hospeda el rol de proveedor de SMS
A partir de esta versión, el servicio de administración usa automáticamente el certificado autofirmado del sitio. Este cambio ayuda a reducir la fricción para facilitar el uso del servicio de administración. El sitio siempre genera este certificado. La configuración de sitio HTTP mejorado en **Usar los certificados generados por Configuration Manager para sistemas de sitios HTTP** solo controla si los sistemas de sitio lo usan o no. Ahora, el servicio de administración omite esta configuración del sitio porque siempre usa el certificado del sitio, incluso si ningún otro sistema de sitio usa HTTP mejorado. Todavía puede usar un certificado de autenticación de servidor basado en PKI.
Para obtener más información, consulte los siguientes artículos nuevos:
- [¿Qué es el servicio de administración?](../../../develop/adminservice/overview.md)
- [Cómo configurar el servicio de administración](../../../develop/adminservice/set-up.md)
### <a name="proxy-support-for-azure-active-directory-discovery-and-group-sync"></a>Compatibilidad del proxy con la detección y la sincronización de grupos de Azure Active Directory
<!-- 5913817 -->
La configuración de proxy del sistema de sitio, incluida la autenticación, ahora se usa en:
- La detección de usuarios de Azure Active Directory (Azure AD).
- La detección de grupos de usuarios de Azure AD.
- La sincronización de los resultados de la pertenencia a recopilaciones con grupos de Azure Active Directory.
Para obtener más información, vea [Compatibilidad de servidor proxy](../network/proxy-server-support.md#bkmk_other).
## <a name="cloud-attached-management"></a><a name="bkmk_cloud"></a> Administración conectada a la nube
### <a name="critical-status-message-shows-server-connection-errors-to-required-endpoints"></a>Mensaje de estado crítico que indica que hay errores de conexión del servidor con los puntos de conexión necesarios
<!-- 5566763 -->
Si el sitio de Configuration Manager no se puede conectar a los puntos de conexión necesarios para un servicio en la nube, genera un mensaje de estado crítico con el identificador 11488. Cuando el servidor de sitio no se puede conectar con el servicio, el estado del componente SMS_SERVICE_CONNECTOR cambia a crítico. Consulte el estado detallado en el nodo Estado del componente de la consola de Configuration Manager.
### <a name="token-based-authentication-for-cloud-management-gateway"></a>Autenticación basada en tokens para Cloud Management Gateway
<!-- 5686290 -->
Cloud Management Gateway (CMG) admite muchos tipos de clientes, pero, incluso con un protocolo HTTP mejorado, estos clientes requieren un certificado de autenticación de cliente. Este requisito de certificado puede ser difícil de aprovisionar en clientes basados en Internet que no se suelen conectar a la red interna, no consiguen conectar con Azure Active Directory (Azure AD) y no disponen de ningún método para instalar un certificado emitido con PKI.
Configuration Manager amplía la compatibilidad de su dispositivo con los métodos siguientes:
- Registro de un solo token en la red interna
- Creación de un token de registro masivo para dispositivos basados en Internet
Para obtener más información, vea [Autenticación basada en tokens para CMG](../../clients/deploy/deploy-clients-cmg-token.md).
### <a name="microsoft-endpoint-configuration-manager-cloud-features"></a>Características basadas en la nube de Microsoft Endpoint Configuration Manager
<!--5834830-->
Cuando haya nuevas características basadas en la nube disponibles en el centro de administración de Microsoft Endpoint Manager u otros servicios en la nube asociados para la instalación local de Configuration Manager, puede seleccionar estas nuevas características en la consola de Configuration Manager. Para obtener más información sobre la habilitación de características en la consola de Configuration Manager, vea [Habilitar características opcionales de las actualizaciones](../../servers/manage/install-in-console-updates.md#bkmk_options).
## <a name="desktop-analytics"></a><a name="bkmk_da"></a> Análisis de escritorio
Para más información sobre los cambios mensuales en el servicio en la nube de Análisis de escritorio, vea [Novedades de Análisis de escritorio](../../../desktop-analytics/whats-new.md).
### <a name="connection-health-dashboard-shows-client-connection-issues"></a>El panel Estado de la conexión muestra los problemas de conexión del cliente
Use el panel Estado de la conexión de Análisis de escritorio que hay en Configuration Manager para supervisar el estado de conectividad de los clientes. Ahora le ayuda identificar más fácilmente los problemas de configuración del proxy de clientes en dos áreas:
- **Comprobaciones de conectividad del punto de conexión**: Si los clientes no se pueden conectar a un punto de conexión necesario, verá una alerta de configuración en el panel. Explore en profundidad para ver los puntos de conexión a los que los clientes no pueden conectarse debido a problemas de configuración del proxy.<!-- 4963230 -->
- **Estado de conectividad**: Si sus clientes usan un servidor proxy para acceder al servicio en la nube de Análisis de escritorio, Configuration Manager ahora muestra los problemas de autenticación del proxy que tienen los clientes. Explore en profundidad para ver los clientes que no se pueden inscribir debido a problemas de autenticación del proxy.<!-- 4963383 -->
Para más información, consulte [Supervisión del estado de conexión](../../../desktop-analytics/monitor-connection-health.md).
## <a name="real-time-management"></a><a name="bkmk_real"></a> Administración en tiempo real
### <a name="improvements-to-cmpivot"></a>Mejoras en CMPivot
<!-- 5870934 -->
Se ha facilitado la navegación por las entidades CMPivot. Ahora puede buscar entidades CMPivot. También se han agregado nuevos iconos para que pueda diferenciar fácilmente las entidades y los tipos de objeto de entidad.
Para obtener más información, vea [CMPivot](../../servers/manage/cmpivot-changes.md#bkmk_2002).
## <a name="content-management"></a><a name="bkmk_content"></a> Administración de contenido
### <a name="exclude-certain-subnets-for-peer-content-download"></a>Exclusión de determinadas subredes para la descarga de contenido del mismo nivel
<!-- 3555777 -->
Los grupos de límites incluyen la opción siguiente para las descargas del mismo nivel: **Durante las descargas del mismo nivel, use solo elementos del mismo nivel dentro de la misma subred**. Si habilita esta opción, la lista de ubicaciones de contenido del punto de administración solo incluye orígenes del mismo nivel que se encuentran en la misma subred y el mismo grupo de límites que el cliente. En función de la configuración de su red, ahora puede excluir ciertas subredes para que no coincidan. Por ejemplo, en el caso de que quiera incluir un límite, pero excluir una subred de VPN específica.
Para obtener más información, consulte las [opciones de grupo de límites](../../servers/deploy/configure/boundary-groups.md#bkmk_bgoptions).
### <a name="proxy-support-for-microsoft-connected-cache"></a>Compatibilidad del proxy con la Caché conectada de Microsoft
<!-- 5856396 -->
Si su entorno usa un servidor proxy no autenticado para el acceso a Internet, ahora cuando habilita un punto de distribución de Configuration Manager para la Caché conectada de Microsoft, puede comunicarse a través del proxy. Para más información, vea [Caché de conexión de Microsoft](../hierarchy/microsoft-connected-cache.md).
## <a name="client-management"></a><a name="bkmk_client"></a> Administración de clientes
### <a name="client-log-collection"></a>Recopilación de registros de cliente
<!-- 4226618 -->
Ahora puede desencadenar un dispositivo cliente para cargar sus registros de cliente en el servidor del sitio mediante el envío de una acción de notificación de cliente desde la consola de Configuration Manager.
Para obtener más información, consulte [Notificación de cliente](../../clients/manage/client-notification.md#client-diagnostics).
### <a name="wake-up-a-device-from-the-central-administration-site"></a>Reactivación de un dispositivo desde el sitio de administración central
<!-- 6030715 -->
En el nodo Dispositivos o Recopilaciones de dispositivos del sitio de administración central (CAS), ahora puede usar la acción de notificación del cliente para reactivar dispositivos. Anteriormente, esta acción solo estaba disponible desde un sitio primario.
Para obtener más información, vea [Cómo configurar Wake on LAN](../../clients/deploy/configure-wake-on-lan.md#bkmk_wol-1810).
### <a name="improvements-to-support-for-arm64-devices"></a>Mejoras de compatibilidad con dispositivos ARM64
<!--5954175-->
La plataforma **Todo Windows 10 (ARM64)** está disponible en la lista de versiones de sistema operativo admitidas en objetos con reglas de requisitos o listas de aplicabilidad.
> [!NOTE]
> Si previamente ha seleccionado la plataforma de nivel superior **Windows 10**, esta acción ha seleccionado automáticamente **Todo Windows 10 (64 bits)** y **Todo Windows 10 (32 bits)** . Esta nueva plataforma no se selecciona automáticamente. Si quiere agregar **Todo Windows 10 (ARM64)** , selecciónelo manualmente en la lista.
Para obtener más información sobre la compatibilidad de Configuration Manager con dispositivos ARM64, vea [Windows 10 en ARM64](../configs/support-for-windows-10.md#bkmk_arm64).
### <a name="track-configuration-item-remediations"></a>Track configuration item remediations
<!--4261411-->
You can now **track remediation history when supported** on compliance rules for configuration items. When this option is enabled, any remediation that happens on the client for the configuration item generates a state message. The history is stored in the Configuration Manager database.
For more information, see [Create custom configuration items for Windows desktop and server computers managed with the Configuration Manager client](../../../compliance/deploy-use/create-custom-configuration-items-for-windows-desktop-and-server-computers-managed-with-the-client.md#bkmk_track).
<!-- ## <a name="bkmk_comgmt"></a> Co-management -->
## <a name="application-management"></a><a name="bkmk_app"></a> Application management
### <a name="microsoft-edge-management-dashboard"></a>Microsoft Edge management dashboard
<!-- 3871913 -->
The Microsoft Edge management dashboard provides you insights on the usage of Microsoft Edge and other browsers. On this dashboard, you can:
- See how many devices have Microsoft Edge installed
- See how many clients have different versions of Microsoft Edge installed
- See the browsers installed across devices
- See the preferred browser on each device
In the Software Library workspace, click Microsoft Edge Management to see the dashboard. To change the collection for the graph data, click Browse and choose another collection. By default, the drop-down list includes the five largest collections. When you select a collection that isn't in the list, the newly selected collection takes the bottom spot in the drop-down list.
For more information, see [Microsoft Edge management](../../../apps/deploy-use/deploy-edge.md#bkmk_edge-dash).
### <a name="improvements-to-microsoft-edge-management"></a>Improvements to Microsoft Edge management
<!-- 4561024 -->
You can now create a Microsoft Edge application that's set up to receive automatic updates rather than having automatic updates disabled. This change allows you to choose to manage updates for Microsoft Edge with Configuration Manager, or to let Microsoft Edge update itself automatically. When creating the application, select the option to allow Microsoft Edge to automatically update the client version on the end user's device on the Microsoft Edge Settings page.
For more information, see [Microsoft Edge management](../../../apps/deploy-use/deploy-edge.md#bkmk_autoupdate).
### <a name="task-sequence-as-an-app-model-deployment-type"></a>Task sequence as an app model deployment type
<!-- 3555953 -->
You can now install complex applications by using task sequences via the application model. Add a deployment type to an application that's a task sequence, either to install or to uninstall the app. This feature provides the following behaviors:
- Display the app task sequence with an icon in Software Center. An icon makes it easier for users to find and identify the app task sequence.
- Define additional metadata for the app task sequence, including localized information.
For more information, see [Create Windows applications](../../../apps/get-started/creating-windows-applications.md#bkmk_tsdt).
## <a name="os-deployment"></a><a name="bkmk_osd"></a> OS deployment
### <a name="bootstrap-a-task-sequence-immediately-after-client-registration"></a>Bootstrap a task sequence immediately after client registration
<!-- 5526972 -->
When you install and register a new Configuration Manager client and also deploy a task sequence to it, it's difficult to determine how soon after registration the task sequence will run. This release introduces a new client installation property that you can use to start a task sequence on a client after it successfully registers with the site.
For more information, see [About client installation parameters and properties: PROVISIONTS](../../clients/deploy/about-client-installation-properties.md#provisionts).
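As an illustration, a client installation command line using this property could look like the following sketch. The management point name, site code, and task sequence deployment ID below are placeholder values, not real ones:

```cmd
:: Hypothetical values: replace mp.contoso.com, PS1, and PS120001 with your own.
:: PROVISIONTS takes the deployment ID of the task sequence to start after the
:: client successfully registers with the site.
ccmsetup.exe /mp:mp.contoso.com SMSSITECODE=PS1 PROVISIONTS=PS120001
```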
### <a name="improvements-to-check-readiness-task-sequence-step"></a>Improvements to the Check Readiness task sequence step
<!-- 6005561 -->
You can now check more device properties in the **Check Readiness** task sequence step. Use this step in a task sequence to verify that the destination computer meets the prerequisites.
- Current OS architecture
- Minimum OS version
- Maximum OS version
- Minimum client version
- Current OS language
- AC power plugged in
- Network adapter connected and not wireless
For more information, see [Task sequence steps: Check Readiness](../../../osd/understand/task-sequence-steps.md#BKMK_CheckReadiness).
### <a name="improvements-to-task-sequence-progress"></a>Improvements to task sequence progress
<!-- 5932692 -->
The task sequence progress window now includes the following improvements:
- You can enable it to show the current step number, the total number of steps, and the percent completed.
- The width of the window was increased to provide more space, so the organization name shows better on a single line.
For more information, see [User experiences for OS deployment](../../../osd/understand/user-experience.md#task-sequence-progress).
### <a name="improvements-to-os-deployment"></a>Improvements to OS deployment
This release includes the following improvements to OS deployment:
- The task sequence environment includes a new read-only variable, `_TSSecureBoot`.<!--5842295--> Use this variable to determine the state of secure boot on a UEFI-enabled device. For more information, see [_TSSecureBoot](../../../osd/understand/task-sequence-variables.md#TSSecureBoot).
- Set task sequence variables to configure the user context for the **Run Command Line** and **Run PowerShell Script** steps.<!-- 5573175 --> For more information, see [SMSTSRunCommandLineAsUser](../../../osd/understand/task-sequence-variables.md#SMSTSRunCommandLineAsUser) and [SMSTSRunPowerShellAsUser](../../../osd/understand/task-sequence-variables.md#SMSTSRunPowerShellAsUser).
- On the **Run PowerShell Script** step, you can now set the **Parameters** property to a variable.<!-- 5690481 --> For more information, see [Run PowerShell Script](../../../osd/understand/task-sequence-steps.md#BKMK_RunPowerShellScript).
- The Configuration Manager PXE responder now sends status messages to the site server. This change makes it easier to troubleshoot OS deployments that use this service.<!-- 5568051 -->
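As a sketch only, a script launched from a **Run PowerShell Script** step could read the new variable through the task sequence COM object. That object exists only while a task sequence is running, so this snippet won't work outside one:

```powershell
# Read the read-only _TSSecureBoot variable from inside a running task sequence.
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$state = $tsenv.Value('_TSSecureBoot')   # reports the secure boot state of the device
Write-Output "Secure boot state: $state"
```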
<!-- ## <a name="bkmk_userxp"></a> Software Center -->
## <a name="software-updates"></a><a name="bkmk_sum"></a> Software updates
### <a name="orchestration-groups"></a>Orchestration groups
<!-- 3098816 -->
Create an orchestration group to better control the deployment of software updates to devices. Many server administrators need to carefully manage updates for specific workloads, and automate behaviors in between.
An orchestration group gives you the flexibility to update devices based on a percentage, a specific number, or an explicit order. You can also run a PowerShell script before and after the devices run the update deployment.
Members of an orchestration group can be any Configuration Manager client, not just servers. The orchestration group rules apply to the devices for all software update deployments to any collection that contains an orchestration group member. Other deployment behaviors still apply, for example maintenance windows and deployment schedules.
For more information, see [Orchestration groups](../../../sum/deploy-use/orchestration-groups.md).
### <a name="evaluate-software-updates-after-a-servicing-stack-update"></a>Evaluate software updates after a servicing stack update
<!-- 4639943 -->
Configuration Manager now detects if a servicing stack update (SSU) is part of an installation of multiple updates. When an SSU is detected, it's installed first. After the SSU installs, a software updates evaluation cycle runs to install the remaining updates. This change allows a dependent cumulative update to be installed after the servicing stack update. The device doesn't need to restart between installs, and you don't need to create an additional maintenance window. SSUs are installed first only for installations that aren't initiated by a user. For example, if a user starts an installation of multiple updates from Software Center, the SSU might not be installed first.
For more information, see [Plan for software updates](../../../sum/plan-design/plan-for-software-updates.md#bkmk_ssu).
### <a name="microsoft-365-updates-for-disconnected-software-update-points"></a>Microsoft 365 updates for disconnected software update points
<!-- 4065163 -->
You can use a new tool to import Microsoft 365 updates from an internet-connected WSUS server into a disconnected Configuration Manager environment. Previously, when you exported and imported software updates metadata in disconnected environments, you couldn't deploy Microsoft 365 updates. Microsoft 365 updates require additional metadata downloaded from an Office API and the Office CDN, which isn't possible in disconnected environments.
For more information, see [Synchronize Microsoft 365 updates from a disconnected software update point](../../../sum/get-started/synchronize-office-updates-disconnected.md).
<!-- ## <a name="bkmk_o365"></a> Office management -->
## <a name="protection"></a><a name="bkmk_protect"></a> Protection
### <a name="expand-microsoft-defender-advanced-threat-protection-atp-onboarding"></a>Expand Microsoft Defender Advanced Threat Protection (ATP) onboarding
<!-- 5229962 -->
Configuration Manager has expanded its support for onboarding devices to Microsoft Defender ATP. For more information, see [Microsoft Defender Advanced Threat Protection (ATP)](../../../protect/deploy-use/defender-advanced-threat-protection.md).
### <a name="onboard-configuration-manager-clients-to-microsoft-defender-atp-via-the-microsoft-endpoint-manager-admin-center"></a><a name="bkmk_atp"></a> Onboard Configuration Manager clients to Microsoft Defender ATP via the Microsoft Endpoint Manager admin center
<!--5691658-->
You can now deploy Microsoft Defender ATP Endpoint Detection and Response (EDR) onboarding policies to Configuration Manager managed clients. These clients don't require Azure AD or MDM enrollment, and the policy targets Configuration Manager collections instead of Azure AD groups.
This capability allows customers to manage both Intune MDM and Configuration Manager client EDR/ATP onboarding from a single management experience: the Microsoft Endpoint Manager admin center. For more information, see [Endpoint detection and response policy for endpoint security in Intune](../../../../intune/protect/endpoint-security-edr-policy.md).
> [!Important]
> You'll need the update rollup [KB4563473](https://support.microsoft.com/help/4563473) installed in your environment for this feature.
### <a name="improvements-to-bitlocker-management"></a>Improvements to BitLocker management
- The BitLocker management policy now includes additional settings, including policies for fixed and removable drives.<!-- 5925683 --> For more information, see the [BitLocker settings reference](../../../protect/tech-ref/bitlocker/settings.md).
- In Configuration Manager current branch version 1910, integrating the BitLocker recovery service required an HTTPS-enabled management point. The HTTPS connection is needed to encrypt the recovery keys across the network from the Configuration Manager client to the management point. Configuring the management point and all clients for HTTPS can be challenging for many customers.
  Starting in this release, the HTTPS requirement applies to the IIS website that hosts the recovery service, not the whole management point role. This change relaxes the certificate requirements, but still encrypts the recovery keys in transit.<!-- 5925660 --> For more information, see [Encrypt recovery data](../../../protect/deploy-use/bitlocker/encrypt-recovery-data.md).
## <a name="reporting"></a><a name="bkmk_report"></a> Reporting
### <a name="integrate-with-power-bi-report-server"></a>Integrate with Power BI Report Server
<!-- 3721603 -->
You can now integrate Power BI Report Server with Configuration Manager reporting. This integration provides modern visualization and better performance. The console also supports Power BI reports, similar to how it already does with SQL Server Reporting Services.
For more information, see [Integrate with Power BI Report Server](../../servers/manage/powerbi-report-server.md).
## <a name="configuration-manager-console"></a><a name="bkmk_admin"></a> Configuration Manager console
### <a name="show-boundary-groups-for-devices"></a>Show boundary groups for devices
<!--6521835-->
To help you better troubleshoot device behaviors with boundary groups, you can now view the boundary groups for specific devices. In the **Devices** node, or when you show the members of a **Device Collection**, add the new **Boundary Group(s)** column to the list view.
For more information, see [Boundary groups](../../servers/deploy/configure/boundary-groups.md#bkmk_show-boundary).
### <a name="send-a-smile-improvements"></a>Send a smile improvements
<!-- 5891852 -->
When you use the Send a smile or Send a frown options, a status message is created at the time the feedback is sent. This improvement provides a record of:
- When the feedback was sent
- Who sent the feedback
- The feedback ID
- Whether the feedback was sent successfully or not
Status messages with the ID 53900 mean that the feedback was sent successfully, while the ID 53901 means the feedback failed to send.
For more information, see [Product feedback](../../understand/find-help.md#BKMK_1806Feedback).
### <a name="search-all-subfolders-for-configuration-items-and-configuration-baselines"></a>Search all subfolders for configuration items and configuration baselines
<!--5891241-->
Similar to improvements in earlier versions, you can now use the **All Subfolders** search option from the **Configuration Items** and **Configuration Baselines** nodes.
### <a name="community-hub"></a>Community hub
<!--3555935, 3555936-->
*(First introduced in June 2020)*
The IT admin community has developed a wealth of knowledge over the years. Rather than reinventing items like scripts and reports from scratch, we've built a Configuration Manager **Community hub** for sharing them. By leveraging the work of others, you can save hours of work. The Community hub fosters creativity by building on others' work and having other people build on yours. GitHub already has industry-wide processes and tools built for sharing. Now, the Community hub will leverage those tools directly in the Configuration Manager console as foundational pieces for driving this new community. For the initial release, the content available in the Community hub will only be uploaded by Microsoft.
For more information, see [Community hub and GitHub](../../servers/manage/community-hub.md).
## <a name="tools"></a><a name="bkmk_tools"></a> Tools
### <a name="onetrace-log-groups"></a>OneTrace log groups
<!-- 5559993 -->
OneTrace now supports customizable log groups, similar to the feature in Support Center. Log groups allow you to open all log files for a single scenario. OneTrace currently includes groups for the following scenarios:
- Application management
- Compliance settings (also referred to as Desired Configuration Management)
- Software updates
For more information, see [Support Center OneTrace](../../support/support-center-onetrace.md).
### <a name="improvements-to-extend-and-migrate-on-premises-site-to-microsoft-azure"></a><a name="bkmk_extend"></a> Improvements to Extend and migrate on-premises site to Microsoft Azure
<!--5665775, 6307931-->
The Extend and migrate on-premises site to Microsoft Azure tool now supports provisioning multiple site system roles on a single Azure virtual machine. You can add site system roles after the initial Azure virtual machine deployment has completed.
For more information, see [Extend and migrate an on-premises site to Microsoft Azure](../../support/azure-migration-tool.md#bkmk_add_role).
## <a name="other-updates"></a>Other updates
Starting in this version, the following features are no longer [pre-release](../../servers/manage/pre-release-features.md):
- [Azure Active Directory user group discovery](../../servers/deploy/configure/configure-discovery-methods.md#bkmk_azuregroupdisco)<!--3611956-->
- [Synchronize collection membership results to Azure Active Directory groups](../../clients/manage/collections/create-collections.md#bkmk_aadcollsync)<!--3607475-->
- [CMPivot standalone](../../servers/manage/cmpivot.md#bkmk_standalone)<!--3555890/4692885-->
- [Client apps for co-managed devices](../../../comanage/workloads.md#client-apps) (previously known as *mobile apps for co-managed devices*)<!-- 1357892/3600959 -->
For more information on changes to the Windows PowerShell cmdlets for Configuration Manager, see the [PowerShell version 2002 release notes](/powershell/sccm/2002-release-notes).
For more information on changes to the administration service REST API, see the [administration service release notes](../../../develop/adminservice/release-notes.md#bkmk_2002).
In addition to new features, this release also includes other changes such as bug fixes. For more information, see [Summary of changes in Configuration Manager current branch, version 2002](https://support.microsoft.com/help/4556203).
The following update rollup (4560496) is available in the console starting on July 15, 2020: [Update rollup for Microsoft Endpoint Configuration Manager version 2002](https://support.microsoft.com/help/4560496).
### <a name="hotfixes"></a>Hotfixes
The following additional hotfixes are available to address specific issues:
| ID | Title | Date | In console |
|---------|---------|---------|---------|
| [4575339](https://support.microsoft.com/help/4575339) | Devices appear twice in the Microsoft Endpoint Configuration Manager admin center | July 23, 2020 | No |
| [4575774](https://support.microsoft.com/help/4575774) | The New-CMTSStepPrestartCheck cmdlet fails in Configuration Manager version 2002 | July 24, 2020 | No |
| [4576782](https://support.microsoft.com/help/4576782) | The application blade times out in the Microsoft Endpoint Manager admin center | August 11, 2020 | No |
| [4578123](https://support.microsoft.com/help/4578123) | CMPivot queries return unexpected results in Configuration Manager version 2002 | August 24, 2020 | No |
<!--
> [!NOTE]
> Starting in version 1902, in-console hotfixes now have supersedence relationships. For more information, see [Supersedence for in-console hotfixes](../../servers/manage/updates.md#bkmk_supersede).
-->
## <a name="next-steps"></a>Next steps
<!-- At this time, version 2002 is released for the early update ring. To install this update, you need to opt in. For more information, see [Early update ring](../../servers/manage/checklist-for-installing-update-2002.md#early-update-ring). -->
As of May 11, 2020, version 2002 is globally available for all customers to install.
When you're ready to install this version, see [Install updates for Configuration Manager](../../servers/manage/updates.md) and the [Checklist for installing update 2002](../../servers/manage/checklist-for-installing-update-2002.md).
> [!TIP]
> To install a new site, use a baseline version of Configuration Manager.
>
> Learn more about:
>
> - [Installing new sites](../../servers/deploy/install/installing-sites.md)
> - [Baseline and update versions](../../servers/manage/updates.md#bkmk_Baselines)
For known, significant issues, see the [Release notes](../../servers/deploy/install/release-notes.md).
After you update a site, also review the [Post-update checklist](../../servers/manage/checklist-for-installing-update-2002.md#post-update-checklist).
# cfn-modules: AWS SQS queue
AWS SQS queue with a dead letter queue, encryption, and [alerting](https://www.npmjs.com/package/@cfn-modules/alerting).
## Install
> Install [Node.js and npm](https://nodejs.org/) first!
```
npm i @cfn-modules/sqs-queue
```
## Usage
```
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'cfn-modules example'
Resources:
Queue:
Type: 'AWS::CloudFormation::Stack'
Properties:
Parameters:
AlertingModule: !GetAtt 'Alerting.Outputs.StackName' # optional
KmsKeyModule: !GetAtt 'Key.Outputs.StackName' # optional
DelaySeconds: 0 # optional
KmsDataKeyReusePeriodSeconds: 300 # optional
MaximumMessageSize: 262144 # optional
MessageRetentionPeriod: 345600 # optional
ReceiveMessageWaitTimeSeconds: 0 # optional
VisibilityTimeout: 0 # optional
TemplateURL: './node_modules/@cfn-modules/sqs-queue/module.yml'
```
## Examples
* [serverless-iam](https://github.com/cfn-modules/docs/tree/master/examples/serverless-iam)
* [serverless-image-resize](https://github.com/cfn-modules/docs/tree/master/examples/serverless-image-resize)
* [serverless-sqs-queue](https://github.com/cfn-modules/docs/tree/master/examples/serverless-sqs-queue)
## Related modules
* [lambda-event-source-sqs-queue](https://github.com/cfn-modules/lambda-event-source-sqs-queue)
* [kinesis-data-stream](https://github.com/cfn-modules/kinesis-data-stream)
## Parameters
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
<th>Default</th>
<th>Required?</th>
<th>Allowed values</th>
</tr>
</thead>
<tbody>
<tr>
<td>AlertingModule</td>
<td>Stack name of <a href="https://www.npmjs.com/package/@cfn-modules/alerting">alerting module</a></td>
<td></td>
<td>no</td>
<td></td>
</tr>
<tr>
<td>KmsKeyModule</td>
<td>Stack name of <a href="https://www.npmjs.com/package/@cfn-modules/kms-key">kms-key module</a></td>
<td></td>
<td>no</td>
<td></td>
</tr>
<tr>
<td>DelaySeconds</td>
<td>The time in seconds that the delivery of all messages in the queue is delayed</td>
<td>0</td>
<td>no</td>
<td>[0-900]</td>
</tr>
<tr>
<td>KmsDataKeyReusePeriodSeconds</td>
<td>The length of time in seconds that Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again</td>
<td>300</td>
<td>no</td>
<td>[60-86400]</td>
</tr>
<tr>
<td>MaximumMessageSize</td>
<td>The limit of how many bytes that a message can contain before Amazon SQS rejects it</td>
<td>262144</td>
<td>no</td>
<td>[1024-262144]</td>
</tr>
<tr>
<td>MessageRetentionPeriod</td>
<td>The number of seconds that Amazon SQS retains a message</td>
<td>345600</td>
<td>no</td>
<td>[60-1209600]</td>
</tr>
<tr>
<td>ReceiveMessageWaitTimeSeconds</td>
<td>Specifies the duration, in seconds, that the ReceiveMessage action call waits until a message is in the queue in order to include it in the response, as opposed to returning an empty response if a message isn't yet available</td>
<td>0</td>
<td>no</td>
<td>[0-20]</td>
</tr>
<tr>
<td>VisibilityTimeout</td>
<td>The length of time during which a message will be unavailable after a message is delivered from the queue</td>
<td>30</td>
<td>no</td>
<td>[0-43200]</td>
</tr>
</tbody>
</table>
# ScanBeacon gem
A Ruby gem that allows you to scan for beacon advertisements using IOBluetooth (on Mac OS X), BlueZ (on Linux), or a BlueGiga BLE112 device (on Mac or Linux).
# Example Usage
## Install the gem
```
gem install scan_beacon
```
[Tips for installing & using on Ubuntu](https://github.com/RadiusNetworks/scanbeacon-gem/wiki/ubuntu)
## Create your scanner
``` ruby
require 'scan_beacon'
# to scan using the default device on mac or linux
scanner = ScanBeacon::DefaultScanner.new
# to scan using CoreBluetooth on a mac
scanner = ScanBeacon::CoreBluetoothScanner.new
# to scan using BlueZ on Linux (make sure you have privileges)
scanner = ScanBeacon::BlueZScanner.new
# to scan using a BLE112 device
scanner = ScanBeacon::BLE112Scanner.new
```
## Start a scan, yield beacons in a loop
``` ruby
scanner.scan do |beacons|
beacons.each do |beacon|
puts beacon.inspect
end
end
```
## Set a specific scan cycle period
``` ruby
require 'scan_beacon'
scanner = ScanBeacon::BLE112Scanner.new cycle_seconds: 5
scanner.scan do |beacons|
beacons.each do |beacon|
puts beacon.inspect
end
end
```
## Scan once for a set period and then return an array of beacons
``` ruby
scanner = ScanBeacon::CoreBluetoothScanner.new cycle_seconds: 2
beacons = scanner.scan
```
## Add a custom beacon layout
By default, this gem supports AltBeacon and Eddystone advertisements. But you can add a beacon parser to support other major beacon formats as well.
Example:
``` ruby
scanner = ScanBeacon::BLE112Scanner.new
scanner.add_parser( ScanBeacon::BeaconParser.new(:mybeacon, "m:2-3=0000,i:4-19,i:20-21,i:22-23,p:24-24") )
...
```
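The layout string uses `m:` for a matching byte expression, `i:` for identifier fields, and `p:` for the measured power byte, each with start and end byte offsets. As a rough, stdlib-only illustration of how such a layout string breaks down (this is not the gem's internal parser):

```ruby
# Split a beacon layout string into its fields. Illustration only.
layout = "m:2-3=0000,i:4-19,i:20-21,i:22-23,p:24-24"

fields = layout.split(",").map do |part|
  type, spec = part.split(":")
  range, match = spec.split("=")          # match is nil for i:/p: fields
  first, last = range.split("-").map(&:to_i)
  { type: type, first: first, last: last, match: match }
end

# The "m" field says bytes 2-3 must equal 0x0000 for an advertisement to
# match, the "i" fields are the beacon identifiers, and "p" is the power byte.
fields.each { |f| puts f.inspect }
```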
## Advertise as a beacon on Linux using BlueZ or a Mac using IOBluetooth
Example:
``` ruby
# altbeacon
beacon = ScanBeacon::Beacon.new(
ids: ["2F234454CF6D4A0FADF2F4911BA9FFA6", 11,11],
power: -59,
mfg_id: 0x0118,
beacon_type: :altbeacon
)
advertiser = ScanBeacon::DefaultAdvertiser.new(beacon: beacon)
advertiser.start
...
advertiser.stop
# Eddystone UID
beacon = ScanBeacon::Beacon.new(
ids: ["2F234454F4911BA9FFA6", 3],
power: -20,
service_uuid: 0xFEAA,
beacon_type: :eddystone_uid
)
advertiser = ScanBeacon::DefaultAdvertiser.new(beacon: beacon)
advertiser.start
...
advertiser.stop
# Eddystone URL (PhysicalWeb)
beacon = ScanBeacon::EddystoneUrlBeacon.new(
url: "http://radiusnetworks.com",
power: -20,
)
advertiser = ScanBeacon::DefaultAdvertiser.new(beacon: beacon)
advertiser.start
...
advertiser.stop
```
# Dependencies
To scan for beacons or advertise, you must have a Linux machine with BlueZ installed, or a Mac, or a BLE112 device plugged in to a USB port (on Mac or Linux).
---
-api-id: P:Windows.ApplicationModel.UserDataTasks.DataProvider.UserDataTaskListDeleteTaskRequest.TaskId
-api-type: winrt property
---
<!-- Property syntax.
public string TaskId { get; }
-->
# Windows.ApplicationModel.UserDataTasks.DataProvider.UserDataTaskListDeleteTaskRequest.TaskId
## -description
Gets the task ID of the task to be deleted.
## -property-value
The task ID of the task to be deleted.
## -remarks
## -see-also
## -examples
---
layout: post
title: "Setup PHP to use Postfix to send emails"
language: english
date: 2017-09-17 21:29:00 +0800
categories: php postfix
---
Postfix is a [Message Transfer Agent](https://en.wikipedia.org/wiki/Message_transfer_agent) implementation, basically an email server for sending and receiving emails. Since PHP doesn't include an SMTP implementation, it needs an external server for that if you plan on using the internal `mail` function.
While it is not encouraged to install and use your own email server, you might need one for testing purposes or in situations where you have to use PHP’s internal [mail](https://secure.php.net/manual/en/function.mail.php) function, so here is how I installed Postfix on Ubuntu and set it up to send emails using PHP’s [mail](https://secure.php.net/manual/en/function.mail.php) function.
First you'll install postfix using the following command:
```bash
sudo apt-get install postfix
```
It will prompt you to enter an FQDN (fully qualified domain name), e.g. `example.org`. This is well explained in the installation process, so it shouldn't be a problem.
<!--description-->
To test your installation, enter the command `sendmail -tif [email protected] [email protected]`, where *[email protected]* is the email you're sending from and *[email protected]* is the email you're sending to.
Write a message, then press `Ctrl+D`; you should now find your message in the *[email protected]* inbox or spam folder.
Now edit the appropriate `php.ini` file (for me it was `/etc/php/7.0/fpm/php.ini`) and change the `sendmail_path` entry to:
```
sendmail_path = "sendmail -tif [email protected]"
```
Again, *[email protected]* will be the default email address used as the From address.
Now reload Postfix and your web server; you should then be able to send emails from your server using the `mail` function.
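For context on what that `sendmail_path` line does: PHP's `mail()` composes an RFC 5322-style message and pipes it to the configured sendmail command. The following Node.js sketch mimics that composition (illustrative only; `buildMessage` is a made-up helper and PHP's real implementation differs in details):

```javascript
// Illustrative only: roughly the text PHP's mail($to, $subject, $body)
// pipes to the sendmail command configured in sendmail_path.
function buildMessage(to, subject, body, extraHeaders = '') {
  const headers = [`To: ${to}`, `Subject: ${subject}`];
  if (extraHeaders) headers.push(extraHeaders); // e.g. "From: me@example.org"
  return headers.join('\r\n') + '\r\n\r\n' + body; // blank line separates headers from body
}

const msg = buildMessage('user@example.org', 'Test', 'Hello from the new server');
console.log(msg.split('\r\n')[0]); // "To: user@example.org"
```

If the message shows up malformed in the recipient's inbox, the headers built here are the first thing to inspect.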
-----------
This serverfault answer helped a lot:
[https://serverfault.com/a/289290](https://serverfault.com/a/289290)
And as a bonus, here is how to configure Postfix to use sendgrid as a relay host.
[https://sendgrid.com/docs/Integrate/Mail_Servers/postfix.html](https://sendgrid.com/docs/Integrate/Mail_Servers/postfix.html)
---
title: Evaluation context | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- vs-ide-sdk
ms.topic: conceptual
helpviewer_keywords:
- debugging [Debugging SDK], expression evaluation
- expression evaluation, context
ms.assetid: 008a20c7-1b27-4013-bf96-d6a3f510da02
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: 523ef45d52a81a475eca0e3560243e0eb8357bbd
ms.sourcegitcommit: 25a62c2db771f938e3baa658df8b1ae54a960e4f
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/24/2018
ms.locfileid: "39232456"
---
# <a name="evaluation-context"></a>Evaluation context

> [!IMPORTANT]
> In Visual Studio 2015, this way of implementing expression evaluators is deprecated. For details about how to implement a CLR expression evaluator, see [CLR expression evaluators](https://github.com/Microsoft/ConcordExtensibilitySamples/wiki/CLR-Expression-Evaluators) and the [managed expression evaluator sample](https://github.com/Microsoft/ConcordExtensibilitySamples/wiki/Managed-Expression-Evaluator-Sample).

When the debug engine (DE) calls the expression evaluator (EE), the three arguments passed to [EvaluateSync](../../extensibility/debugger/reference/idebugparsedexpression-evaluatesync.md) determine the context for looking up and evaluating symbols, as shown in the following table.

## <a name="arguments"></a>Arguments

|Argument|Description|
|--------------|-----------------|
|`pSymbolProvider`|An [IDebugSymbolProvider](../../extensibility/debugger/reference/idebugsymbolprovider.md) interface that specifies the symbol handler (SH) used to identify symbols.|
|`pAddress`|An [IDebugAddress](../../extensibility/debugger/reference/idebugaddress.md) interface that specifies the current point of execution. This interface locates the method that contains the code being executed.|
|`pBinder`|An [IDebugBinder](../../extensibility/debugger/reference/idebugbinder.md) interface that looks up the value and type of a symbol given its name.|

`IDebugParsedExpression::EvaluateSync` returns an [IDebugProperty2](../../extensibility/debugger/reference/idebugproperty2.md) interface that represents the resulting value and its type.

## <a name="see-also"></a>See also
[Key expression evaluator interfaces](../../extensibility/debugger/key-expression-evaluator-interfaces.md)
[Displaying locals](../../extensibility/debugger/displaying-locals.md)
[EvaluateSync](../../extensibility/debugger/reference/idebugparsedexpression-evaluatesync.md)
[IDebugProperty2](../../extensibility/debugger/reference/idebugproperty2.md)
[IDebugSymbolProvider](../../extensibility/debugger/reference/idebugsymbolprovider.md)
[IDebugAddress](../../extensibility/debugger/reference/idebugaddress.md)
[IDebugBinder](../../extensibility/debugger/reference/idebugbinder.md)

# dispatch
To keep the protocol set as small as possible and reduce complexity, this protocol has been removed in recent versions of whistle (`>=v1.12.3`). Please [update whistle](../update.html) and use [reqScript](./reqScript.html) or [resScript](./resScript.html) instead:

```
pattern reqScript://{test.js}
pattern resScript://{test.js}
```

#### Filter rules

Make sure whistle is up to date: [update whistle](../update.html)

To filter which requests or which protocols a rule matches, you can use the following protocols:

1. [ignore](./ignore.html): ignore the specified rules
2. [filter](./filter.html): filter by the specified pattern; supports filtering by request method, request headers, and the requesting client IP

Example:

```
# The rule below applies pattern only to requests that are not POST, whose cookie header contains "test" (case-insensitive), and whose URL contains cgi-bin
# i.e., requests matched by the filters are excluded
pattern operator1 operator2 filter://m:post filter://h:cookie!=test filter://!/cgi-bin/i
# The rule below applies pattern1 and pattern2 only to POST requests whose cookie header does not contain something like `uin=123123` and whose URL does not contain cgi-bin
operator pattern1 pattern2 filter://m:!post filter://h:cookie=/uin=o\d+/i filter:///cgi-bin/i
# The rule below makes requests matching pattern ignore all rules except host
pattern ignore://*|!host
# The rule below makes requests matching pattern ignore file and host protocol rules
pattern ignore://file|host
```
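To make the combined conditions in the rules above concrete, here is a small plain-JavaScript sketch of the matching logic for the second rule (illustrative only, not whistle's actual implementation; the `ruleApplies` helper and the sample requests are made up for this example):

```javascript
// Illustrative sketch of the second rule above (not whistle's real code).
// filter:// conditions exclude requests, so the rule applies only to POST
// requests whose cookie does NOT match /uin=o\d+/i and whose URL does NOT
// contain "cgi-bin" (case-insensitive).
function ruleApplies(req) {
  const isPost = req.method.toUpperCase() === 'POST';                 // filter://m:!post
  const cookieExcluded = /uin=o\d+/i.test(req.headers.cookie || ''); // filter://h:cookie=/uin=o\d+/i
  const urlExcluded = /cgi-bin/i.test(req.url);                      // filter:///cgi-bin/i
  return isPost && !cookieExcluded && !urlExcluded;
}

console.log(ruleApplies({ method: 'POST', url: '/api/data', headers: { cookie: 'a=1' } })); // true
console.log(ruleApplies({ method: 'GET',  url: '/api/data', headers: {} }));                // false
console.log(ruleApplies({ method: 'POST', url: '/cgi-bin/x', headers: {} }));               // false
```

In words: the rule is skipped for any request that one of its `filter://` conditions excludes.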
---
title: Differentiate your business by attaining Microsoft competencies
ms.topic: article
ms.date: 09/27/2021
ms.service: partner-dashboard
ms.subservice: partnercenter-membership
description: Learn how to reach the elite tier of Microsoft partners and attract new customers by meeting the competency requirements for the silver and gold membership levels.
author: ArpithaKanuganti
ms.author: v-arkanu
ms.localizationpriority: high
ms.custom: SEOMAY.20
ms.openlocfilehash: 667bdec1277d4b7fc6279b29c84f155825a14b41
ms.sourcegitcommit: 947c91c2c6b08fe5ce2156ac0f205d829c19a454
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/12/2021
ms.locfileid: "129826611"
---
# <a name="differentiate-your-business-by-attaining-microsoft-competencies"></a>Differentiate your business by attaining Microsoft competencies

**Appropriate roles**: Global admin | User management admin

Showcase your proven expertise in delivering quality solutions in one or more specialized areas of business. Microsoft competencies are designed to prepare you to meet your customers' needs and to help you attract new customers who are looking for certified Microsoft solution providers. Join the elite tier of Microsoft partners and stand out from your peers.

- Attain a **silver** competency to demonstrate your ongoing capability and commitment.
- Attain a **gold** competency to demonstrate best-in-class capability within a Microsoft solution area.

Attaining a competency can also open the door to further benefits and opportunities:

- Unlock powerful [go-to-market](mpn-learn-about-go-to-market-benefits.md) offers and programs that can help you grow your business. [Learn more about going to market with Microsoft](https://partner.microsoft.com/solutions/go-to-market).
- Attaining a gold competency gives you the option to go further and add one or more [advanced specializations](advanced-specializations.md) to your portfolio. Advanced specializations help you showcase your growing expertise to customers. [Learn more about advanced specializations](https://partner.microsoft.com/membership/advanced-specialization).

## <a name="check-your-status-as-you-attain-a-competency"></a>Check your status as you attain a competency

You can see the requirements and what your company has attained in the competencies area of your Partner Center dashboard.

> [!NOTE]
> For more information about the workspaces interface, see [Get around Partner Center](get-around-partner-center.md#turn-workspaces-on-and-off).

#### <a name="workspaces-view"></a>[Workspaces view](#tab/workspaces-view)

1. Sign in to the [Partner Center dashboard](https://partner.microsoft.com/dashboard/home).
2. Select the **Membership** tile, and then select **Competencies**.
3. Select the competency name and the specific competency option to view the details. You can see what you have attained, along with all the requirements, links to the specific exams, and the validation dates of retired exams.

#### <a name="current-view"></a>[Current view](#tab/current-view)

1. Sign in to the [Partner Center dashboard](https://partner.microsoft.com/dashboard/home).
2. In the MPN section of the Partner Center menu, select **Competencies**.
3. Select the competency name and the specific competency option to view the details. You can see what you have attained, along with all the requirements, links to the specific exams, and the validation dates of retired exams.

* * *
## <a name="competency-areas"></a>Competency areas

To attain a silver or gold membership-level competency, you're asked to demonstrate your expertise in different specialized business and technology areas.

|**Area** |**Competency** |
|--------------------|--------------------------------|
|Applications and infrastructure| - App Development<br/> - App Integration<br/> - Cloud Platform<br/> - DevOps<br/> - Datacenter |
|Business applications | - Cloud Business Applications</br> - Enterprise Resource Planning (ERP)</br> - Project and Portfolio |
|Data and AI| - Data Analytics<br/> - Data Platform |
|Modern workplace and security | - Collaboration and Content<br/> - Communications<br/> - Cloud Productivity<br/> - Enterprise Mobility Management<br/> - Messaging<br/> - Security<br/> - Small and Midmarket Cloud Solutions<br/> - Windows and Devices |

The silver and gold competency levels have requirements that vary by area. Each area has associated courses and exams that your company's employees can take to reach the desired competency level.

To learn more about the requirements for attaining silver and gold competencies, see [Microsoft Partner Network - Competencies](https://partner.microsoft.com/membership/competencies).
## <a name="next-steps"></a>Next steps

- Learn more about the [requirements for attaining silver and gold competencies](https://partner.microsoft.com/membership/competencies).
- Learn how to highlight your expertise by pairing a gold competency with one or more [advanced specializations](advanced-specializations.md).
- Learn more about the [go-to-market resources](mpn-learn-about-go-to-market-benefits.md) included with competencies.
- Learn more about [competency invoices and taxes](mpn-view-print-maps-invoice.md).
- Learn how to [pay the fee](mpn-pay-fee-silver-gold-competency.md) associated with a silver or gold competency membership.
- View a Partner Center Insights [competencies report](insights-competencies-report.md) that shows the current status of your competencies.
- Get answers to [frequently asked questions](competencies-faq.yml) about competencies.
---
title: Common Issues
page_title: Common Issues | Kendo UI Diagram
description: "Learn how to deal with issues you may encounter while using the Kendo UI Diagram widget."
slug: troubleshooting_diagram_widget
position: 1
---
# Common Issues
This page provides solutions for common problems related to the Kendo UI Diagrams.
## Rendering
### Diagram Graphics Do Not Render in Internet Explorer
> **Important**
>
> A security message suggesting that you enable the Intranet settings might appear. If you choose to do so, then you do not need to follow the steps below.
**Solution**
Select **Internet Options** > **Security** > **Internet** (or **Local intranet**) > **Custom Level** and enable **Binary and script behaviors** by ticking the **Enable** radio button.
**Figure 2. Options and settings to apply to render the diagram graphics**

## Export
### Layout Is Different in Exported PDF Files
Such issues are typically caused by the different fonts that are used on screen and in the PDF.
For display, the browser substitutes the selected font with whatever is provided by the system. During export, you take the metrics from the actual font in use and determine the PDF layout from that. It is likely that the resulting PDF is displayed with a different font, leading to layout and encoding issues.
**Solution**
The solution is to [make the fonts available for embedding]({% slug pdfderawingexport_drawingapi %}#configuration-Custom). This means that the fonts should be available as binary TTF files and registered for export.
This is demonstrated in the [PDF Export demo on Diagram](http://demos.telerik.com/kendo-ui/diagram/pdf-export) as well.
The example below demonstrates how to embed fonts in exported PDF.
###### Example
```dojo
<button class='export-pdf k-button'>Save as PDF</button>
<div id="diagram"></div>
<script>
// Import DejaVu Sans font for embedding
kendo.pdf.defineFont({
"DejaVu Sans" : "https://kendo.cdn.telerik.com/2016.1.112/styles/fonts/DejaVu/DejaVuSans.ttf",
"DejaVu Sans|Bold" : "https://kendo.cdn.telerik.com/2016.1.112/styles/fonts/DejaVu/DejaVuSans-Bold.ttf",
"DejaVu Sans|Bold|Italic" : "https://kendo.cdn.telerik.com/2016.1.112/styles/fonts/DejaVu/DejaVuSans-Oblique.ttf",
"DejaVu Sans|Italic" : "https://kendo.cdn.telerik.com/2016.1.112/styles/fonts/DejaVu/DejaVuSans-Oblique.ttf"
});
</script>
<!-- Load Pako ZLIB library to enable PDF compression -->
<script src="//kendo.cdn.telerik.com/2016.1.112/js/pako_deflate.min.js"></script>
<script>
$(".export-pdf").click(function() {
$("#diagram").getKendoDiagram().saveAsPDF();
});
function createDiagram() {
var data = [{
firstName: "Antonio",
lastName: "Moreno",
image: "antonio.jpg",
title: "Team Lead",
colorScheme: "#1696d3",
items: [{
firstName: "Elizabeth",
image: "elizabeth.jpg",
lastName: "Brown",
title: "Design Lead",
colorScheme: "#ef6944",
items: [{
firstName: "Ann",
lastName: "Devon",
image: "ann.jpg",
title: "UI Designer",
colorScheme: "#ef6944"
}]
}, {
firstName: "Diego",
lastName: "Roel",
image: "diego.jpg",
title: "QA Engineer",
colorScheme: "#ee587b",
items: [{
firstName: "Fran",
lastName: "Wilson",
image: "fran.jpg",
title: "QA Intern",
colorScheme: "#ee587b"
}]
}, {
firstName: "Felipe",
lastName: "Izquiedro",
image: "felipe.jpg",
title: "Senior Developer",
colorScheme: "#75be16",
items: [{
firstName: "Daniel",
lastName: "Tonini",
image: "daniel.jpg",
title: "Developer",
colorScheme: "#75be16"
}]
}]
}];
function visualTemplate(options) {
var dataviz = kendo.dataviz;
var g = new dataviz.diagram.Group();
var dataItem = options.dataItem;
g.append(new dataviz.diagram.Rectangle({
width: 210,
height: 75,
stroke: {
width: 0
},
fill: dataItem.colorScheme
}));
/*
Use the DejaVu Sans font for display and embedding in the PDF file.
The standard PDF fonts have no support for Unicode characters.
*/
g.append(new dataviz.diagram.TextBlock({
text: dataItem.firstName + " " + dataItem.lastName,
fontFamily: "DejaVu Sans",
fontSize: "14px",
x: 10,
y: 20,
fill: "#fff"
}));
g.append(new dataviz.diagram.TextBlock({
text: dataItem.title,
fontFamily: "DejaVu Sans",
fontSize: "14px",
x: 10,
y: 40,
fill: "#fff"
}));
return g;
}
$("#diagram").kendoDiagram({
dataSource: new kendo.data.HierarchicalDataSource({
data: data,
schema: {
model: {
children: "items"
}
}
}),
layout: {
type: "layered"
},
shapeDefaults: {
visual: visualTemplate
},
connectionDefaults: {
stroke: {
color: "#979797",
width: 2
}
}
});
var diagram = $("#diagram").getKendoDiagram();
diagram.bringIntoView(diagram.shapes);
}
$(document).ready(createDiagram);
</script>
```
## See Also
Other articles on styling, appearance, and rendering of Kendo UI widgets:
* [Themes and Appearance of the Kendo UI Widgets]({% slug themesandappearnce_kendoui_desktopwidgets %})
* [Rendering Modes for Data Visualization]({% slug renderingmodesfor_datavisualization_kendouistyling %})
Other articles on troubleshooting:
* [Common Issues in Kendo UI]({% slug troubleshooting_common_issues_kendoui %})
* [Kendo UI JavaScript Errors]({% slug troubleshooting_javascript_errors_kendoui %})
* [Kendo UI Performance Issues]({% slug troubleshooting_system_memory_symptoms_kendoui %})
* [Kendo UI Content Security Policy]({% slug troubleshooting_content_security_policy_kendoui %})
* [Common Issues in Kendo UI Excel Export]({% slug troubleshooting_excel_export_kendoui %})
* [Common Issues in Kendo UI Charts]({% slug troubleshooting_chart_widget %})
* [Performance Issues in Kendo UI Widgets for Data Visualization]({% slug tipsandtricks_kendouistyling %})
* [Common Issues in Kendo UI ComboBox]({% slug troubleshooting_common_issues_combobox_kendoui %})
* [Common Issues in Kendo UI DropDownList]({% slug troubleshooting_common_issues_dropdownlist_kendoui %})
* [Common Issues in Kendo UI Editor]({% slug troubleshooting_editor_widget %})
* [Common Issues in Kendo UI MultiSelect]({% slug troubleshooting_common_issues_multiselect_kendoui %})
* [Common Issues in Kendo UI Scheduler]({% slug troubleshooting_scheduler_widget %})
* [Common Issues in Kendo UI Upload]({% slug troubleshooting_upload_widget %})
* [Common Issues Related to Styling, Appearance, and Rendering]({% slug commonissues_troubleshooting_kendouistyling %})
* [Common Issues in Telerik UI for ASP.NET MVC](http://docs.telerik.com/aspnet-mvc/troubleshoot/troubleshooting)
* [Validation Issues in Telerik UI for ASP.NET MVC](http://docs.telerik.com/aspnet-mvc/troubleshoot/troubleshooting-validation)
* [Scaffolding Issues in Telerik UI for ASP.NET MVC](http://docs.telerik.com/aspnet-mvc/troubleshoot/troubleshooting-scaffolding)
* [Common Issues in the Grid ASP.NET MVC HtmlHelper Extension](http://docs.telerik.com/aspnet-mvc/helpers/grid/troubleshoot/troubleshooting)
* [Excel Export with the Grid ASP.NET MVC HtmlHelper Extension](http://docs.telerik.com/aspnet-mvc/helpers/grid/troubleshoot/excel-export-issues)
* [Common Issues in the Spreadsheet ASP.NET MVC HtmlHelper Extension](http://docs.telerik.com/aspnet-mvc/helpers/spreadsheet/troubleshoot/troubleshooting)
* [Common Issues in the Upload ASP.NET MVC HtmlHelper Extension](http://docs.telerik.com/aspnet-mvc/helpers/upload/troubleshoot/troubleshooting)
---
description: 'Learn more about: Compiler Warning (level 1) C4655'
title: Compiler Warning (level 1) C4655
ms.date: 08/27/2018
f1_keywords:
- C4655
helpviewer_keywords:
- C4655
ms.assetid: 540f2c7a-e4a1-49af-84b4-03eeea1bbf41
ms.openlocfilehash: 2573ac5410114a0fe4ff4b074b83bbbb2efc8c97
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 12/11/2020
ms.locfileid: "97318776"
---
# <a name="compiler-warning-level-1-c4655"></a>Compiler Warning (level 1) C4655

> '*symbol*': variable type is new since the latest build, or is defined differently elsewhere

## <a name="remarks"></a>Remarks

You changed or added a new data type since the last successful build. Edit and Continue doesn't support changes to existing data types.

This warning is followed by [fatal error C1092](../../error-messages/compiler-errors-1/fatal-error-c1092.md). For more information, see [Supported code changes](/visualstudio/debugger/supported-code-changes-cpp).

### <a name="to-remove-this-warning-without-ending-the-current-debug-session"></a>To remove this warning without ending the current debug session

1. Change the data type back to its state before the error occurred.
2. On the **Debug** menu, select **Apply Code Changes**.

### <a name="to-remove-this-warning-without-changing-your-source-code"></a>To remove this warning without changing your source code

1. On the **Debug** menu, select **Stop Debugging**.
2. On the **Build** menu, select **Build**.
---
title: Tutorial to configure Azure Active Directory B2C with Jumio
titleSuffix: Azure AD B2C
description: In this tutorial, you configure Azure Active Directory B2C with Jumio for automated ID verification, which safeguards customer information.
services: active-directory-b2c
author: gargi-sinha
manager: martinco
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
ms.date: 08/20/2020
ms.author: gasinh
ms.subservice: B2C
ms.openlocfilehash: e344c849a8e9021daea9caebacec3289b99d03e6
ms.sourcegitcommit: 20f8bf22d621a34df5374ddf0cd324d3a762d46d
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/09/2021
ms.locfileid: "107256675"
---
# <a name="tutorial-for-configuring-jumio-with-azure-active-directory-b2c"></a>Tutorial for configuring Jumio with Azure Active Directory B2C

In this tutorial, we provide guidance on how to integrate Azure Active Directory B2C (Azure AD B2C) with [Jumio](https://www.jumio.com/). Jumio is an ID verification service that enables real-time automated ID verification to safeguard customer information.

## <a name="prerequisites"></a>Prerequisites

To get started, you'll need:

- An Azure AD subscription. If you don't have one, you can get a [free account](https://azure.microsoft.com/free/).
- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.

## <a name="scenario-description"></a>Scenario description

The Jumio integration includes the following components:

- Azure AD B2C: The authorization server responsible for verifying the user's credentials. It's also known as the identity provider.
- Jumio: The service that takes the ID details provided by the user and verifies them.
- Intermediate REST API: The API that implements the integration between Azure AD B2C and the Jumio service.
- Azure Blob storage: The service that supplies custom UI files to the Azure AD B2C policies.

The following architecture diagram shows the implementation.



|Step | Description |
|:-----| :-----------|
| 1. | The user arrives at a page to either sign in or sign up to create an account. Azure AD B2C collects the user attributes.
| 2. | Azure AD B2C calls the middle-layer API and passes the user attributes to it.
| 3. | The middle-layer API collects the user attributes and transforms them into a format that the Jumio API can consume. It then sends the attributes to Jumio.
| 4. | After Jumio consumes the information and processes it, it returns the result to the middle-layer API.
| 5. | The middle-layer API processes the information and sends relevant information back to Azure AD B2C.
| 6. | Azure AD B2C receives the information from the middle-layer API. If it shows a failure response, an error message is displayed to the user. If it shows a success response, the user is authenticated and written to the directory.
## <a name="sign-up-with-jumio"></a>Sign up with Jumio

To create a Jumio account, contact [Jumio](https://www.jumio.com/contact/).

## <a name="configure-azure-ad-b2c-with-jumio"></a>Configure Azure AD B2C with Jumio

After you create a Jumio account, you can use it to configure Azure AD B2C. The following sections describe the process in sequence.

### <a name="deploy-the-api"></a>Deploy the API

Deploy the provided [API code](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/API/Jumio.Api) to an Azure service. You can publish the code from Visual Studio by following [these instructions](/visualstudio/deployment/quickstart-deploy-to-azure).

>[!NOTE]
>You'll need the URL of the deployed service to configure Azure AD with the required settings.

### <a name="deploy-the-client-certificate"></a>Deploy the client certificate

1. A client certificate helps protect the Jumio API call. Create a self-signed certificate by using the following PowerShell sample code:

``` PowerShell
$cert = New-SelfSignedCertificate -Type Custom -Subject "CN=Demo-SigningCertificate" -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.3") -KeyUsage DigitalSignature -KeyAlgorithm RSA -KeyLength 2048 -NotAfter (Get-Date).AddYears(2) -CertStoreLocation "Cert:\CurrentUser\My"
$cert.Thumbprint
$pwdText = "Your password"
$pwd = ConvertTo-SecureString -String $pwdText -Force -AsPlainText
Export-PfxCertificate -Cert $Cert -FilePath "{your-local-path}\Demo-SigningCertificate.pfx" -Password $pwd
```

The certificate is then exported to the location specified for ``{your-local-path}``.

3. Import the certificate into Azure App Service by following the instructions in [this article](../app-service/configure-ssl-certificate.md#upload-a-private-certificate).

### <a name="create-a-signingencryption-key"></a>Create a signing/encryption key

Create a random string whose length is no greater than 64 characters and that contains only letters and numbers.

For example: ``C9CB44D98642A7062A0D39B94B6CDC1E54276F2E7CFFBF44288CEE73C08A8A65``

Use the following PowerShell script to create the string:

```PowerShell
-join ((0x30..0x39) + ( 0x41..0x5A) + ( 0x61..0x7A) + ( 65..90 ) | Get-Random -Count 64 | % {[char]$_})
```
### <a name="configure-the-api"></a>Configure the API

You can [configure application settings in Azure App Service](../app-service/configure-common.md#configure-app-settings). With this method, you can configure settings securely without checking them into a repository. You'll need to provide the following settings to the REST API:

| Application settings | Source | Notes |
| :-------- | :------------| :-----------|
|JumioSettings:AuthUsername | Jumio account configuration | |
|JumioSettings:AuthPassword | Jumio account configuration | |
|AppSettings:SigningCertThumbprint|The thumbprint of the created self-signed certificate| |
|AppSettings:IdTokenSigningKey| Signing key created by using PowerShell | |
| AppSettings:IdTokenEncryptionKey |Encryption key created by using PowerShell
| AppSettings:IdTokenIssuer | Issuer to use for the JWT token (a GUID value is preferred) |
| AppSettings:IdTokenAudience | Audience to use for the JWT token (a GUID value is preferred) |
|AppSettings:BaseRedirectUrl | Base URL of the Azure AD B2C policy | https://{your-tenant-name}.b2clogin.com/{your-application-id}|
| WEBSITE_LOAD_CERTIFICATES| The thumbprint of the created self-signed certificate |
### <a name="deploy-the-ui"></a>Distribuera användar gränssnittet
1. Konfigurera en [Blob Storage-behållare i ditt lagrings konto](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
2. Lagra UI-filerna från [mappen UI](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/UI) i BLOB-behållaren.
#### <a name="update-ui-files"></a>Uppdatera UI-filer
1. I UI-filerna går du till mappen [ocean_blue](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/UI/ocean_blue).
2. Öppna varje HTML-fil.
3. Sök och Ersätt `{your-ui-blob-container-url}` med URL: en för din BLOB-behållare.
4. Sök och Ersätt `{your-intermediate-api-url}` med URL: en för mellanliggande API app-tjänsten.
>[!NOTE]
> Som bästa praxis rekommenderar vi att du lägger till medgivande meddelande på sidan samling av attribut. Meddela användarna att informationen kommer att skickas till tjänster från tredje part för identitets verifiering.
### <a name="configure-the-azure-ad-b2c-policy"></a>Configure the Azure AD B2C policy
1. Go to the [Azure AD B2C policy](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Jumio/Policies) in the Policies folder.
2. Follow [this article](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) to download the [LocalAccounts starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/LocalAccounts).
3. Configure the policy for your Azure AD B2C tenant.
>[!NOTE]
>Update the provided policies so that they relate to your specific tenant.
## <a name="test-the-user-flow"></a>Test the user flow
1. Open your Azure AD B2C tenant. Under **Policies**, select **Identity Experience Framework**.
2. Select your previously created **SignUpSignIn**.
3. Select **Run user flow**, and then:
   a. For **Application**, select the registered app (the sample is JWT).
   b. For **Reply URL**, select the **redirect URL**.
   c. Select **Run user flow**.
4. Go through the sign-up flow and create an account.
5. The Jumio service will be called during the flow, after the user attribute is created. If the flow is incomplete, check that the user was not saved in the directory.
## <a name="next-steps"></a>Next steps
For more information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
| 52.798851 | 296 | 0.772505 | swe_Latn | 0.987476 |
b9526e8775f8691211a16490406330fd3787ab9b | 1,696 | md | Markdown | README.md | 0n3tw0/SoundMaskerDistracterFocuser | b0977cc5614769d8a924462b5981c20aa4927177 | [
"MIT"
] | null | null | null | README.md | 0n3tw0/SoundMaskerDistracterFocuser | b0977cc5614769d8a924462b5981c20aa4927177 | [
"MIT"
] | null | null | null | README.md | 0n3tw0/SoundMaskerDistracterFocuser | b0977cc5614769d8a924462b5981c20aa4927177 | [
"MIT"
] | null | null | null | # SoundMaskerDistracterFocuser
I don't know if I'm being fooled, fooling myself, or something really is here. I've thought about this for years while being deceived, until recently discovering how simple this is in javascript. It seems whenever I update this project the speech transmitted into my head intensifies, which is very frustrating for me.
Possible relief from sensory overload, noise pollution, forced speech, and forced consumption and illegal marketing of unmarketable unethical capitalist products. It'll either distract you from the harassment or focus you more on what you're listening to on the TV.
The possibilities to use drum samples or instruments exist that respond to a microphone and I'd like to explore this more but my basic human rights have been violated for more than 6 years. I'm more focused on expressing these violations than doing something creative and useful especially when I'm threatened every day as well as my future by this open secret technology that's illegally broadcasting speech while professionals and everyone alike continue to deceive. Ignorant or not?
Some of the voices broadcast into my head sound like they have an algorithm processing their speech to sound more frightening, threatening, whiny, or intimidating. I started this project this morning to mimic the type of audio algorithms they use with their illegal broadcasting tech.
With help from this CodePen example: https://codepen.io/zapplebee/pen/gbNbZE
A video recording of this in action can be [seen here](https://youtu.be/S3UxLItOQfI).
[](https://youtu.be/S3UxLItOQfI)
| 113.066667 | 485 | 0.8125 | eng_Latn | 0.999662 |
b9527fa2f8b6c398c0214f3f6e8b50975f05d7a5 | 1,336 | md | Markdown | docs/visual-basic/misc/bc31086.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc31086.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc31086.md | Graflinger/docs.de-de | 9dfa50229d23e2ee67ef4047b6841991f1e40ac4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "'<type1>' cannot override '<type2>' because it is not declared 'Overridable'"
ms.date: 07/20/2015
f1_keywords:
- bc31086
- vbc31086
helpviewer_keywords:
- BC31086
ms.assetid: ce071994-2e32-4460-a65d-f48f914386c6
ms.openlocfilehash: eeebab15550b58fd5011976b23f16cb579d0ea93
ms.sourcegitcommit: 558d78d2a68acd4c95ef23231c8b4e4c7bac3902
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/09/2019
ms.locfileid: "59314944"
---
# <a name="type1-cannot-override-type2-because-it-is-not-declared-overridable"></a>'\<type1>' cannot override '\<type2>' because it is not declared 'Overridable'
A member in a derived class overrides a base class member that is not marked with the `Overridable` modifier.
**Error ID:** BC31086
## <a name="to-correct-this-error"></a>To correct this error
1. Add the `Overridable` modifier to the overridden member in the base class.
2. Use the `Shadows` keyword to shadow the member in the base class.
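A minimal sketch of the situation this error describes (hypothetical class names; the base-class member must carry `Overridable` before a derived class may apply `Overrides`):

```vb
Public Class BaseClass
    ' Without the Overridable modifier here, the Overrides below raises BC31086.
    Public Overridable Sub Save()
    End Sub
End Class

Public Class DerivedClass
    Inherits BaseClass
    Public Overrides Sub Save()
    End Sub
End Class
```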
## <a name="see-also"></a>See also
- [Overridable](../../visual-basic/language-reference/modifiers/overridable.md)
- [Overrides](../../visual-basic/language-reference/modifiers/overrides.md)
- [Shadows](../../visual-basic/language-reference/modifiers/shadows.md)
| 40.484848 | 168 | 0.764222 | deu_Latn | 0.756115 |
b95298093056ed5a931544d33fcb031abc4c9f45 | 299 | md | Markdown | docs/api/Weaccount.md | we-crypto/weaccount | 48865a9ce66703a9fc4530582238551916b77126 | [
"MIT"
] | null | null | null | docs/api/Weaccount.md | we-crypto/weaccount | 48865a9ce66703a9fc4530582238551916b77126 | [
"MIT"
] | null | null | null | docs/api/Weaccount.md | we-crypto/weaccount | 48865a9ce66703a9fc4530582238551916b77126 | [
"MIT"
] | null | null | null | # Weaccount APIs
#### Test Account
auth:
```json
{
"version": "1",
"cipher_txt": "2ftrNteCfewee4FywMAHMcM47pv7SaG5GRSem8A4iV8wdJSQKHKLbypu7CUH1a5mZ4iUb4sfpX1pLZj8kdq9f4FFBXaw3LFGn3DKd9b3PW1JLbhJwsZXR462Douo2vs9gV9",
"did": "did9LhiCYsjPgDrNfhWdpDUoBzwPgigUQvRXzFrqKm5tz4h"
}
```
| 23 | 151 | 0.772575 | yue_Hant | 0.294608 |
b952d514b97e603d12934ddaec3c05d0135dc1cb | 65 | md | Markdown | README.md | vincent-tsugranes/redhat-airline-api | 36874fc473081ea63b04d8672acabe173b6e28d9 | [
"MIT"
] | null | null | null | README.md | vincent-tsugranes/redhat-airline-api | 36874fc473081ea63b04d8672acabe173b6e28d9 | [
"MIT"
] | null | null | null | README.md | vincent-tsugranes/redhat-airline-api | 36874fc473081ea63b04d8672acabe173b6e28d9 | [
"MIT"
] | null | null | null | # redhat-airline-api
npm start
http://localhost:9000/schedule
| 9.285714 | 30 | 0.753846 | kor_Hang | 0.425208 |
b95447247c9ec05fd43188a409fcbcae3a98106f | 28 | md | Markdown | src/common/components/PageNotReady.md | dictyBase/genomepage | ab2e5f3b215ff12b8e4acefe2a65b5cbfadab590 | [
"BSD-2-Clause"
] | 3 | 2017-10-10T16:36:56.000Z | 2018-06-19T14:17:11.000Z | src/common/components/PageNotReady.md | dictyBase/dicty-frontpage | 0128fe2812bd1002cb7923d2e756039bf5e41cfd | [
"BSD-2-Clause"
] | 1,119 | 2017-07-06T19:44:49.000Z | 2022-03-31T06:39:30.000Z | src/common/components/PageNotReady.md | dictyBase/genomepage | ab2e5f3b215ff12b8e4acefe2a65b5cbfadab590 | [
"BSD-2-Clause"
] | 2 | 2017-12-14T08:51:48.000Z | 2021-07-07T21:45:44.000Z | ```jsx
<PageNotReady />
```
| 7 | 16 | 0.535714 | kor_Hang | 0.17249 |
b954915b1df6fb25c01dcc11114ff4894a703b9f | 2,676 | md | Markdown | site/content/nieuws/19-20/vlaggenschip-tholen-boekt-overwinning/index.md | Jensderond/hala3-gatsby | d807620f456a7ff17d0c744f7f9c92129e0e0238 | [
"MIT"
] | null | null | null | site/content/nieuws/19-20/vlaggenschip-tholen-boekt-overwinning/index.md | Jensderond/hala3-gatsby | d807620f456a7ff17d0c744f7f9c92129e0e0238 | [
"MIT"
] | 402 | 2019-11-15T13:55:23.000Z | 2020-09-10T00:33:59.000Z | site/content/nieuws/19-20/vlaggenschip-tholen-boekt-overwinning/index.md | Jensderond/hala3-gatsby | d807620f456a7ff17d0c744f7f9c92129e0e0238 | [
"MIT"
] | null | null | null | ---
title: Vlaggenschip Tholen boekt overtuigende zege in Roosendaal!
date: 2019-10-12
cover: ../../no-photo-today.png
description: |
Na een tweetal nederlagen tegen Rillandia 2 en Fc Bergen 5 wachtte vandaag in Roosendaal de uitwedstrijd tegen Alliance 5....
tags: [wedstrijdverslag, overwinning, 2019-2020]
---
FLAGSHIP THOLEN BOOKS A CONVINCING WIN IN ROOSENDAAL
After two defeats against Rillandia 2 and FC Bergen 5, today the away match against Alliance 5 in Roosendaal awaited. They were still unbeaten in the competition, so it promised to be a difficult match.
Kick-off in Roosendaal was at 12:30. The boys started the match with Boy as the last line of defence. The back line consisted of Sander, Stokkers, Jeroen and Junge, who nowadays plays as the last man instead of as the cold-blooded striker. Otter, Jens, Dirk and Nabil formed the midfield. Ayoub and Himiet completed the star ensemble and had to provide the attacking threat.
The match was begun sharply. The build-up from the back was well looked after; unfortunately, apart from a few pinpricks, this did not result in real chances. After about 35 minutes of football, Robin found Jens de Rond with an excellent pass. What happened next gave the supporters who had travelled along from Tholen reason to cheer. With an excellent hard shot into the near corner, Jens de Rond, or should we say Steven Berghuis, made it 0-1. The lead was held on to and the boys went in for tea with this advantage.
In the second half Mo Bicep, that is to say Ilias, came into the team, and with it he made his debut for Hala 3. In the second half too we remained the dominant side and a number of good opportunities were created. Eventually the well-deserved 0-2 did fall. After a shot by Ayoub (no, Ayoub, we are not counting it as an assist), Nabil could write his name on the match sheet from the rebound and the lead was doubled. Unfortunately, fairly soon afterwards and against the run of play, the home side made it 1-2 from a corner. Then the mental resilience of the team became visible: we kept playing football, and Ilias soon provided the liberating 1-3. Ayoub provided the final chord, and with the goal he had been longing for so much he put the 1-4 final score on the scoreboard.
The boys could take the three points home to Tholen, satisfied. Good football had been played and hard work had been done.
Samih and Boris, thank you for assisting as linesmen today!
In two weeks the winning streak may get a sequel in the home match against Waarde 3. The match starts at 14:30 and you are, of course, very welcome. 🍻
**HALA 3!**
| 99.111111 | 795 | 0.804185 | nld_Latn | 1.000003 |
b9558bcf365ed6d8838ec302a3c7026b6e9bf0bc | 11,965 | md | Markdown | networking/file-sharing.md | nikitavoloboev/knowledge | 2dd694c6d2b96a5c3ab556b473ff49e131741efc | [
"CC-BY-4.0"
] | 3,224 | 2017-09-21T23:18:04.000Z | 2022-03-31T23:10:24.000Z | networking/file-sharing.md | nikitavoloboev/knowledge | 2dd694c6d2b96a5c3ab556b473ff49e131741efc | [
"CC-BY-4.0"
] | 70 | 2017-10-05T22:55:00.000Z | 2022-03-19T12:58:14.000Z | networking/file-sharing.md | nikitavoloboev/knowledge | 2dd694c6d2b96a5c3ab556b473ff49e131741efc | [
"CC-BY-4.0"
] | 522 | 2017-10-20T16:00:10.000Z | 2022-03-28T16:44:49.000Z | # File sharing
## Links
- [Firefox Send](https://github.com/mozilla/send) - File sharing experiment which allows you to send encrypted files to other users. ([HN](https://news.ycombinator.com/item?id=19367850))
- [ffsend](https://github.com/timvisee/ffsend) - Easily and securely share files from the command line. A fully featured Firefox Send client.
- [Transfer.sh](https://transfer.sh/) - Easy file sharing from the command line or web. ([HN](https://news.ycombinator.com/item?id=27739991))
- [Syncthing](https://github.com/syncthing/syncthing) - Open Source Continuous File Synchronization. ([Web](https://syncthing.net/)) ([HN](https://news.ycombinator.com/item?id=27149002)) ([HN](https://news.ycombinator.com/item?id=27929194)) ([HN](https://news.ycombinator.com/item?id=28859521))
- [OnionShare](https://github.com/micahflee/onionshare) - Securely and anonymously send and receive files, and publish onion sites.
- [Dropbox Transfer](https://www.dropbox.com/transfer)
- [bita](https://github.com/oll3/bita) - Differential file synchronization over http.
- [Filestash](https://github.com/mickael-kerjean/filestash) - Modern web client for SFTP, S3, FTP, WebDAV, Git, Minio, LDAP, CalDAV, CardDAV, Mysql, Backblaze.
- [Seafile](https://github.com/haiwen/seafile) - Open source cloud storage system with privacy protection and teamwork features. ([Web](https://www.seafile.com/en/home/))
- [Perkeep](https://github.com/perkeep/perkeep) - Personal storage system for life: a way of storing, syncing, sharing, modelling and backing up content.
- [mc](https://github.com/minio/mc) - MinIO Client is a replacement for ls, cp, mkdir, diff and rsync commands for filesystems and object storage.
- [CryFS](https://github.com/cryfs/cryfs) - Cryptographic filesystem for the cloud.
- [gocat](https://github.com/sumup-oss/gocat) - 21st century, multi-purpose relay from source to destination.
- [Fast.io](https://fast.io/) - Host anything (PDFs, video, zips, images, etc.) with direct links on a super-fast global network. ([HN](https://news.ycombinator.com/item?id=21589213))
- [Static FileZ](https://github.com/killercup/static-filez) - Build compressed archives for static files and serve them over HTTP.
- [qrcp](https://github.com/claudiodangelis/qrcp) - Transfer files over wifi from your computer to your mobile device by scanning a QR code without leaving the terminal. ([HN](https://news.ycombinator.com/item?id=22914789))
- [0x0.st](https://0x0.st/) - No-bullshit file hosting and URL shortening service. ([Code](https://github.com/mia-0/0x0))
- [Librevault](https://github.com/librevault/librevault) - Open-source peer-to-peer file synchronization program, designed with convenience and privacy in mind.
- [Flying Carpet](https://github.com/spieglt/FlyingCarpet) - Wireless, encrypted file transfer over automatically configured ad hoc networking.
- [shoop](https://github.com/mcginty/shoop) - High-speed encrypted file transfer tool reminiscent of scp.
- [Airshare](https://github.com/kurolabs/airshare) - Cross-platform content sharing in a local network.
- [ShareDrop](https://www.sharedrop.io/) - Easy P2P file transfer powered by WebRTC - inspired by Apple AirDrop. ([Code](https://github.com/cowbell/sharedrop))
- [Syncthing is everything I used to love about computers (2020)](https://tonsky.me/blog/syncthing/) ([Lobsters](https://lobste.rs/s/4ucmcp/computers_as_i_used_love_them)) ([HN](https://news.ycombinator.com/item?id=23537243))
- [Perkeep](https://perkeep.org/) - Lets you permanently keep your stuff, for life. ([HN](https://news.ycombinator.com/item?id=23676350))
- [HN: Discussing Dropbox (2020)](https://news.ycombinator.com/item?id=23787446)
- [SCP user's migration guide to rsync (2020)](https://fedoramagazine.org/scp-users-migration-guide-to-rsync/) ([Lobsters](https://lobste.rs/s/uupfif/scp_user_s_migration_guide_rsync))
- [Broccoli: Syncing faster by syncing less (2020)](https://dropbox.tech/infrastructure/-broccoli--syncing-faster-by-syncing-less)
- [Ask HN: What is your favorite method of sending large files? (2020)](https://news.ycombinator.com/item?id=24351111)
- [Data Transfer Project](https://github.com/google/data-transfer-project) - Makes it easy for people to transfer their data between online services. ([Web](https://datatransferproject.dev/))
- [File Transfer with SSH, Tee, and Base64 (2019)](https://susam.in/blog/file-transfer-with-ssh-tee-and-base64/)
- [Tardigrade](https://tardigrade.io/) - Decentralized Cloud Object Storage.
- [croc](https://github.com/schollz/croc) - Easily and securely send things from one computer to another. ([HN](https://news.ycombinator.com/item?id=24503077))
- [wave-share](https://github.com/ggerganov/wave-share) - Serverless, peer-to-peer, local file sharing through sound. ([HN](https://news.ycombinator.com/item?id=24586390))
- [JustBeamIt](https://justbeamit.com/) - File transfer made easy. ([CLI](https://github.com/justbeamit/beam))
- [Magic Wormhole](https://github.com/magic-wormhole/magic-wormhole) - Get things from one computer to another, safely.
- [Send Anywhere](https://send-anywhere.com/) - File transfer.
- [FileRoom](https://fileroom.io) - Browser File Transfer. Send files to anyone on the same WiFi or network that's on fileroom.io.
- [wormhole-william](https://github.com/psanford/wormhole-william) - End-to-end encrypted file transfer. A magic wormhole CLI and API in Go.
- [Global Socket](https://github.com/hackerschoice/gsocket) - Moving data from here to there. Securely, fast, and through NAT/firewalls.
- [Warp](https://github.com/minio/warp) - S3 benchmarking tool.
- [smart_open](https://github.com/RaRe-Technologies/smart_open) - Python 3 library for efficient streaming of very large files from/to storages such as S3, GCS, Azure Blob Storage, HDFS, WebHDFS, HTTP, HTTPS, SFTP, or local filesystem.
- [FilePizza](https://file.pizza/) - Peer-to-peer file transfers in your browser. ([Code](https://github.com/kern/filepizza))
- [tus](https://tus.io/) - Resumable file uploads. ([GitHub](https://github.com/tus))
- [Hyp](https://github.com/hypercore-protocol/cli/) - CLI for peer-to-peer file sharing (and more) using the Hypercore Protocol. ([Demo](https://www.youtube.com/watch?v=SVk1uIQxOO8))
- [Snapdrop](https://snapdrop.net/) - Local file sharing in your browser. Inspired by Apple's Airdrop. ([Code](https://github.com/RobinLinus/snapdrop))
- [Shary](https://github.com/wilk/shary) - Share your files effortlessly with QRCodes.
- [myDrive](https://github.com/subnub/myDrive) - Node.js and mongoDB Google Drive Clone.
- [WebDrop](https://webdrop.space/#/) - Group P2P File & Message transfer in browser with WebRTC. ([Code](https://github.com/subins2000/WebDrop))
- [Juicesync](https://github.com/juicedata/juicesync) - Tool to move your data between any clouds or regions.
- [goploader](https://github.com/Depado/goploader) - Easy file sharing with server-side encryption, curl/httpie/wget compliant.
- [ownCloud Infinite Scale](https://github.com/owncloud/ocis) - Modern file-sync and share platform. ([Docs](https://owncloud.github.io/ocis/))
- [Nextcloud](https://nextcloud.com/) - On-premises file share and collaboration platform. ([Server Code](https://github.com/nextcloud/server))
- [S3P - Massively Parallel S3 Copying (2021)](https://www.genui.com/open-source/s3p-massively-parallel-s3-copying)
- [filite](https://github.com/raftario/filite) - Simple, light and standalone pastebin, URL shortener and file-sharing service.
- [Sync](https://www.sync.com/) - Secure Cloud Storage.
- [Pydio](https://pydio.com/) - Enterprise File Sharing & Sync Platform. ([Code](https://github.com/pydio/cells))
- [Cacheroach](https://github.com/bobvawter/cacheroach) - Multi-tenant, multi-region, multi-cloud file store built using CockroachDB.
- [Teleport](https://goteleport.com/) - Access Computing Resources Anywhere. ([Code](https://github.com/gravitational/teleport)) ([GitHub](https://github.com/gravitational))
- [ownCloud](https://owncloud.com/) - Share files and folders, easy and secure. ([Code](https://github.com/owncloud/core))
- [Wormhole](https://wormhole.app/) - Simple, private file sharing. ([HN](https://news.ycombinator.com/item?id=26666142))
- [SkyTransfer](https://skytransfer.hns.siasky.net/#/) - Free, Open-Source, Decentralized and Encrypted File-Sharing. ([Code](https://github.com/kamy22/skytransfer)) ([HN](https://news.ycombinator.com/item?id=27017805))
- [Sending Files with Taildrop (2021)](https://tailscale.com/blog/sending-files-with-taildrop/)
- [Powergate](https://github.com/textileio/powergate) - Multitiered file storage API built on Filecoin and IPFS.
- [Tresorit Send](https://send.tresorit.com/) - Send Big Files up to 5GB Securely.
- [Send](https://send.vis.ee/) - Encrypt and send files with a link that automatically expires to ensure your important documents don't stay online forever.
- [Triox](https://github.com/AaronErhardt/Triox) - Free file hosting server that focuses on speed, reliability and security.
- [portal](https://github.com/jackyzha0/portal) - Zero-config peer-to-peer encrypted live folder syncing tool that respects your .gitignore.
- [sfz](https://github.com/weihanglo/sfz) - Simple static file serving command-line tool written in Rust.
- [Streamwo](https://streamwo.com/) - Simple video hosting & sharing.
- [Estuary](https://estuary.tech/) - Use any browser and our API to store public data on the Filecoin Network and retrieve it from anywhere, anytime.
- [S3Sync](https://github.com/larrabee/s3sync) - Really fast sync tool for S3.
- [osm](https://github.com/appscode/osm) - Object Store Manipulator - curl for cloud storage.
- [Rustypaste](https://github.com/orhun/rustypaste) - Minimal file upload/pastebin service.
- [Chibisafe](https://chibisafe.moe/) - Blazing fast file uploader and awesome bunker written in node. ([Code](https://github.com/WeebDev/chibisafe))
- [OnlyFiles](https://onlyfiles.cc/) - Media file sharing service.
- [Faster File Distribution with HDFS and S3 (2019)](https://tech.marksblogg.com/faster-file-distribution-hadoop-hdfs-s3.html)
- [LuminS](https://github.com/wchang22/LuminS) - Fast and reliable alternative to rsync for synchronizing local files.
- [Using rclone for Cloud to Cloud Transfer](https://www.rsync.net/resources/howto/rclone.html)
- [Ubercopy](https://github.com/jasonwhite/ubercopy) - Quickly and intelligently copies files based on a generated list.
- [Bindle](https://github.com/deislabs/bindle) - Object Storage for Collections.
- [Portal](https://github.com/landhb/portal) - Secure file transfer protocol, written in Rust.
- [THRON](https://www.thron.com/en/) - Digital Content Management Software.
- [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) - Open-source P2P-based Image and File Distribution System. ([Web](https://d7y.io/en-us/))
- [portal](https://github.com/ZinoKader/portal) - Quick and easy CLI file transfer utility.
- [raw.githack.com](https://raw.githack.com/) - Serves raw files directly from various source code hosting services with proper Content-Type headers.
- [Buzon](https://buzon.io/) - Send large files.
- [Upload](https://upload.io/) - File Upload Platform. ([JS lib](https://github.com/upload-js/upload-js)) ([Upload Plugin SDK](https://github.com/upload-js/upload-plugin-sdk)) ([Compression Plugin](https://github.com/upload-js/upload-compression-plugin))
- [File Browser](https://github.com/filebrowser/filebrowser) - Directory and it can be used to upload, delete, preview, rename and edit your files. ([Docs](https://filebrowser.org/))
- [How to rsync files between two remotes? (2021)](https://vincent.bernat.ch/en/blog/2021-rsync-ssh-two-remotes)
- [cend.me](http://cend.me/) - Direct file transfer with no server involvement.
- [Slik Safe](https://www.sliksafe.com/) - Decentralized, End-to-End Encrypted Alternative to Dropbox. ([HN](https://news.ycombinator.com/item?id=29637188))
- [Greedia](https://github.com/greedia/greedia) - Greedily cache media and serve it up fast.
| 123.350515 | 294 | 0.748684 | eng_Latn | 0.314138 |
b9561df14b25128a1571136649b4b723d212f48b | 1,268 | md | Markdown | kubernetes-security/README.md | otus-kuber-2020-04/SOMikhaylov_platform | 873c5375e7afd7968a8ce9f64e29643fade6bd98 | [
"MIT"
] | null | null | null | kubernetes-security/README.md | otus-kuber-2020-04/SOMikhaylov_platform | 873c5375e7afd7968a8ce9f64e29643fade6bd98 | [
"MIT"
] | 2 | 2019-08-07T09:22:50.000Z | 2019-10-07T08:01:30.000Z | docs/kubernetes-security.md | otus-kuber-2019-06/SOMikhaylov_platform | e5c3d39a07133787f2685f78d7939fd7ebb963d4 | [
"MIT"
] | 2 | 2019-10-22T10:46:51.000Z | 2020-01-31T16:53:03.000Z | ## task1
1. 01-sa-bob-admin.yaml - создает service account bob, с ролью admin в рамках всего кластера
2. 02-sa-dave.yaml - создает service account dave без доступа в кластер.
проверка:
- kubectl get clusterroles admin -o yaml
- kubectl auth can-i get deployment --as system:serviceaccount:default:bob
- kubectl auth can-i get deployment --as system:serviceaccount:default:dave
## task2
1. 01-namespace-prometheus.yaml - создает namespace prometheus
2. 02-sa-carol.yaml - создает service account carol в namespace prometheus
3. 03-rules-prometheus.yaml - дает права всем sa делать list,get,watch на pods
проверка:
- kubectl auth can-i watch pods --as system:serviceaccount:prometheus:carol
- kubectl auth can-i delete pods --as system:serviceaccount:prometheus:carol
## task3
1. 01-namespace-dev.yaml - создает namespace dev
2. 02-sa-jane.yaml - создает service account jane в namespace dev
3. 03-rolebinding-jane-admin.yaml - дает jane роль admin в namespace dev
4. 04-sa-ken.yaml - создает service account ken в namespace dev
5. 05-rolebinding-ken-view.yaml - дает ken роль view в namespace dev
проверка:
- kubectl auth can-i get deployment --as system:serviceaccount:dev:jane -n dev
- kubectl auth can-i list jobs --as system:serviceaccount:dev:ken -n dev
| 46.962963 | 92 | 0.772871 | kor_Hang | 0.277122 |
b95669f3d4c2f142aeb9c0143529fc9f9eedc1ce | 2,539 | md | Markdown | wdk-ddi-src/content/dbgeng/nf-dbgeng-idebugbreakpoint-getid.md | jesweare/windows-driver-docs-ddi | a6e73cac25d8328115822ec266dabdf87d395bc7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/dbgeng/nf-dbgeng-idebugbreakpoint-getid.md | jesweare/windows-driver-docs-ddi | a6e73cac25d8328115822ec266dabdf87d395bc7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/dbgeng/nf-dbgeng-idebugbreakpoint-getid.md | jesweare/windows-driver-docs-ddi | a6e73cac25d8328115822ec266dabdf87d395bc7 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-12-08T21:34:31.000Z | 2021-12-08T21:34:31.000Z | ---
UID: NF:dbgeng.IDebugBreakpoint.GetId
title: IDebugBreakpoint::GetId (dbgeng.h)
description: The GetId method returns a breakpoint ID, which is the engine's unique identifier for a breakpoint.
old-location: debugger\getid.htm
tech.root: debugger
ms.assetid: 991d8a40-1991-4c06-9557-9abee3ed8073
ms.date: 05/03/2018
keywords: ["IDebugBreakpoint::GetId"]
ms.keywords: ComOther_408e8e80-f34e-4895-9bae-66dbb0f9aa97.xml, GetId, GetId method [Windows Debugging], GetId method [Windows Debugging],IDebugBreakpoint interface, GetId method [Windows Debugging],IDebugBreakpoint2 interface, IDebugBreakpoint interface [Windows Debugging],GetId method, IDebugBreakpoint.GetId, IDebugBreakpoint2 interface [Windows Debugging],GetId method, IDebugBreakpoint2::GetId, IDebugBreakpoint::GetId, dbgeng/IDebugBreakpoint2::GetId, dbgeng/IDebugBreakpoint::GetId, debugger.getid
req.header: dbgeng.h
req.include-header: Dbgeng.h
req.target-type: Desktop
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames:
f1_keywords:
- IDebugBreakpoint::GetId
- dbgeng/IDebugBreakpoint::GetId
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- dbgeng.h
api_name:
- IDebugBreakpoint.GetId
- IDebugBreakpoint2.GetId
---
# IDebugBreakpoint::GetId
## -description
The <b>GetId</b> method returns a breakpoint ID, which is the engine's unique identifier for a breakpoint.
## -parameters
### -param Id
[out]
The breakpoint ID.
## -returns
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>S_OK</b></dt>
</dl>
</td>
<td width="60%">
The method was successful.
</td>
</tr>
</table>
This method can also return error values. For more information, see <a href="/windows-hardware/drivers/debugger/hresult-values">Return Values</a>.
## -remarks
The breakpoint ID remains fixed as long as the breakpoint exists. However, after the breakpoint has been removed, you can use its ID for another breakpoint.
The <a href="/windows-hardware/drivers/ddi/dbgeng/nf-dbgeng-idebugbreakpoint2-getparameters">GetParameters</a> method also returns the breakpoint ID.
For more information about how to use breakpoints, see <a href="/windows-hardware/drivers/debugger/using-breakpoints2">Using Breakpoints</a>. | 28.852273 | 506 | 0.740055 | eng_Latn | 0.360044 |
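A minimal usage sketch (Windows-only; assumes you already hold an `IDebugBreakpoint` pointer obtained from the engine, for example via `IDebugControl::AddBreakpoint`):

```cpp
#include <stdio.h>
#include <windows.h>
#include <dbgeng.h>

// Prints the engine-assigned ID of an existing breakpoint.
HRESULT PrintBreakpointId(IDebugBreakpoint *Breakpoint)
{
    ULONG id = 0;
    HRESULT hr = Breakpoint->GetId(&id);
    if (SUCCEEDED(hr))
    {
        // The ID stays fixed for the breakpoint's lifetime, but may be
        // reused by the engine after the breakpoint is removed.
        printf("Breakpoint ID: %lu\n", id);
    }
    return hr;
}
```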
b956781fb64d90005bd9e33540e4b2d3bfdb7950 | 6,831 | md | Markdown | exercises/ansible_network/8-tower-rbac/README.ja.md | lwhitty/lwhitty.ansible.io | 50a2d08c4c95f1d23297b6aac2354c3089901992 | [
"MIT"
] | 3 | 2019-09-28T13:48:31.000Z | 2020-05-07T07:04:19.000Z | exercises/ansible_network/8-tower-rbac/README.ja.md | lwhitty/lwhitty.ansible.io | 50a2d08c4c95f1d23297b6aac2354c3089901992 | [
"MIT"
] | 3 | 2019-09-24T11:01:25.000Z | 2020-01-09T13:02:08.000Z | exercises/ansible_network/8-tower-rbac/README.ja.md | lwhitty/lwhitty.ansible.io | 50a2d08c4c95f1d23297b6aac2354c3089901992 | [
"MIT"
] | 5 | 2019-07-24T14:56:09.000Z | 2022-03-10T19:39:10.000Z | # Exercise 8: RBAC によるアクセスコントロール
**別の言語で読む**:  [English](README.md),  [日本語](README.ja.md).
## Table of Contents
- [Objective](#Objective)
- [Guide](#Guide)
- [Step 1: Opening up Organizations](#step-1-opening-up-organizations)
- [Step 2: Open the NETWORK ORGANIZATION](#step-2-open-the-network-organization)
- [Step 3: Examine Teams](#step-3-examine-teams)
- [Step 4: Examine the Netops Team](#step-4-examine-the-netops-team)
- [Step 5: Login as network-admin](#step-5-login-as-network-admin)
- [Step 6: Understand Team Roles](#step-6-understand-team-roles)
- [Step 7: Job Template Permissions](#step-7-job-template-permissions)
- [Step 8: Login as network-operator](#Step-8-login-as-network-operator)
- [Step 9: Launching a Job Template](#step-9-launching-a-job-template)
- [Bonus Step](#bonus-step)
- [Takeaways](#takeaways)
# Objective
One of the benefits of using Ansible Tower is control over the users who work with the system. The objective of this exercise is to understand role-based access control ([RBAC](https://docs.ansible.com/ansible-tower/latest/html/userguide/security.html#role-based-access-controls)) through the tenants, teams, and roles that an administrator can define, and the users assigned to those roles. This helps an organization run a secure automation system and meet its compliance requirements.
# Guide
Let's review some Ansible Tower terminology:
- **Organizations:** Define tenants, for example *Network-org* and *Compute-org*. These might mirror the internal structure of your organization.
- **Teams:** There can be multiple teams within each organization, for example *tier1-helpdesk*, *tier2-support*, *tier3-support*, and *build-team*.
- **Users:** Users generally belong to teams. What a user can do in Tower is controlled and defined by their **role**.
- **Roles:** Roles define the actions a user can execute. This model should map well onto typical network organizations, which restrict access according to function, such as Level 1 helpdesk members, Level 2, or senior administrators. Tower ships with a built-in set of roles. [documentation](https://docs.ansible.com/ansible-tower/latest/html/userguide/security.html#built-in-roles)
For more detail on RBAC-related terminology, see the [documentation](https://docs.ansible.com/ansible-tower/latest/html/userguide/security.html#role-based-access-controls).
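The terminology above can be sketched in a few lines of code. This is a minimal, hypothetical model: the team, user, and role names mirror this exercise, but the logic is an illustration only, not Tower's actual implementation:

```python
# Minimal RBAC sketch: users inherit roles on an object through team membership.
ROLE_ACTIONS = {
    "admin": {"read", "execute", "update", "delete"},
    "execute": {"read", "execute"},
    "read": {"read"},
}

# Team role on the "Network-Commands" job template (as in this exercise).
team_roles = {
    "netadmin": "admin",
    "netops": "execute",
}

user_teams = {
    "network-admin": ["netadmin", "netops"],
    "network-operator": ["netops"],
}

def can(user, action):
    """True if any of the user's team roles grants the action."""
    return any(
        action in ROLE_ACTIONS[team_roles[team]]
        for team in user_teams.get(user, [])
        if team in team_roles
    )

print(can("network-operator", "execute"))  # True
print(can("network-operator", "update"))   # False
print(can("network-admin", "update"))      # True
```

A user's effective rights on an object are the union of the rights granted by their roles, whether assigned directly or inherited through team membership; that is the spirit in which Tower evaluates permissions.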
## Step 1: Opening up Organizations
1. Log in to Tower as the **admin** user.
| Parameter | Value |
|---|---|
| username | `admin` |
| password | provided by your instructor |
2. Confirm that you are logged in as **admin**.

3. In the left-hand menu, under **ACCESS**, click **Organizations**.
As the *admin* user, you can see every organization configured on this Tower instance:
>Note: The organizations, teams, and users have been pre-created for this exercise.
4. Examine the organizations.
Two organizations have been created (the others exist by default):
1. **RED HAT COMPUTE ORGANIZATION**
2. **RED HAT NETWORK ORGANIZATION**

>This page shows a summary of all the teams, users, inventories, projects, and job templates assigned to the organization. Other users can see the same screen when they have organization-level administrator privileges.
## Step 2: Open the NETWORK ORGANIZATION
1. Click **RED HAT NETWORK ORGANIZATION**.
A screen showing the organization's details is displayed.

2. Click **USERS** to see the users assigned to this organization.
>Confirm that both the **network-admin** and **network-operator** users are assigned to this organization.
## Step 3: Examine Teams
1. Click **TEAMS** in the sidebar.

2. Examine the teams. An Ansible Tower administrator can see all available teams. There are four teams here:
1. Compute T1
2. Compute T2
3. Netadmin
4. Netops

## Step 4: Examine the Netops Team
1. Click the **Netops** team, then click the **USERS** button. Note that there are two specific users here:
1. network-admin
2. network-operator

2. Confirm the following two points:
1. The **network-admin** user has administrator privileges on the **RED HAT NETWORK ORGANIZATION**.
2. **network-operator** is a regular member of the Netops team. To understand the roles, log in as each of these users.
## Step 5: Login as network-admin
1. Log out of admin by clicking the power-icon button at the top right of the Tower screen:
Power icon: 
2. **network-admin** ユーザーで再ログインします。
| Parameter | Value |
|---|---|
| username | network-admin |
| password| provided by your instructor |
3. Make sure you are logged in as **network-admin**.

4. Click **Organizations** in the sidebar.
Note that you can only see the **REDHAT NETWORK ORGANIZATION**, the organization you administer.
The following two organizations are not displayed:
- REDHAT COMPUTE ORGANIZATION
- Default
5. Optional: repeat the same steps as the network-operator user (the password is the same as for network-admin).
- What differences do you notice?
- Can network-operator see the other users?
- Can network-operator add new users or edit credentials?
## Step 6: Understand Team Roles
1. To understand the differences between roles and the RBAC assignments, log back in as the **admin** user.
2. Navigate to **Inventories** and click **Workshop Inventory**.
3. Click the **PERMISSIONS** button.

4. Examine the permissions assigned to each user.

Note the **TEAM ROLE** assigned to the **network-admin** and **network-operator** users. The **network-operator** user has been granted the **USE** role, which gives permission to use this inventory.
## Step 7: Job Template Permissions
1. Click **Templates** in the left-hand menu.
2. Select the **Network-Commands** job.
3. Click the **PERMISSIONS** button at the top.

>Note that the same users hold different roles on the job template. Ansible Tower lets you specify who can access what down to the granularity of individual operations. In this example, network-admin can update **Network-Commands** (**ADMIN**), while network-operator can only run it (**EXECUTE**).
## Step 8: Login as network-operator
Finally, perform an operation to confirm RBAC in action.
1. Log out of admin and log back in as the **network-operator** user.
| Parameter | Value |
|---|---|
| username | `network-operator` |
| password| provided by your instructor |
2. Navigate to **Templates** and click **Network-Commands**.

3. Note that the *network-operator* user cannot modify any of the fields.
## Step 9: Launching a Job Template
1. Make sure you are logged in as the `network-operator` user.
2. Click **Templates** in the sidebar again.
3. This time, click the rocket icon next to **Network-Commands** to launch the job:

4. You will be prompted to choose one of the pre-configured show commands.

5. Select one command, click **Next** and then **Launch**, and confirm that the playbook runs and displays its results.
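The same launch can also be driven through the Tower REST API. Below is a hedged sketch: the host name, job template ID, and the `cli_command` extra-vars key are illustrative placeholders, not values from this workshop.

```shell
# Payload selecting one of the pre-configured show commands (key name is illustrative)
payload='{"extra_vars": {"cli_command": "show ip route"}}'

# The actual launch call (commented out -- it needs a reachable Tower host):
# curl -sk -u network-operator:"$PASSWORD" \
#      -H 'Content-Type: application/json' \
#      -d "$payload" \
#      https://tower.example.com/api/v2/job_templates/42/launch/

echo "$payload"
```

Because RBAC also applies to the API, the same **EXECUTE**-but-not-**ADMIN** behavior you saw in the UI governs this call.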
## Bonus Step
If you have time left, log back in as network-admin and add any show command you would like the operators to be able to run. This helps confirm that the network-admin user's *Admin* role allows editing and updating the job template.
# Takeaways
- With Ansible Tower's RBAC features it is easy to let operations staff run only the commands they are allowed to, without requiring direct access to the production environment.
- Ansible Tower supports multiple organizations, teams, and users, and users can belong to multiple organizations and teams as needed. Although not covered in this exercise, if you use [enterprise authentication](https://docs.ansible.com/ansible-tower/latest/html/administration/ent_auth.html) such as Active Directory, LDAP, RADIUS, SAML, or TACACS+, there is no need to manage users in Tower itself.
- Exceptional access grants (for example, a user who has access even though the user's team does not) are also supported. RBAC granularity extends down to credentials, inventories, and job templates for individual users.
---
# Complete
This concludes exercise 8.
[Click here to return to the Ansible Network Automation Workshop](../README.ja.md)
---
title: "sp_xp_cmdshell_proxy_account (Transact-SQL) | Microsoft Docs"
ms.custom: ""
ms.date: "03/16/2017"
ms.prod: sql
ms.prod_service: "database-engine, sql-database"
ms.component: "system-stored-procedures"
ms.reviewer: ""
ms.suite: "sql"
ms.technology: system-objects
ms.tgt_pltfrm: ""
ms.topic: "language-reference"
f1_keywords:
- "sp_xp_cmdshell_proxy_account"
- "sp_xp_cmdshell_proxy_account_TSQL"
dev_langs:
- "TSQL"
helpviewer_keywords:
- "sp_xp_cmdshell_proxy_account"
- "xp_cmdshell"
ms.assetid: f807c373-7fbc-4108-a2bd-73b48a236003
caps.latest.revision: 15
author: edmacauley
ms.author: edmaca
manager: craigg
monikerRange: "= azuresqldb-current || >= sql-server-2016 || = sqlallproducts-allversions"
---
# sp_xp_cmdshell_proxy_account (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-asdb-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-asdb-xxxx-xxx-md.md)]
Creates a proxy credential for **xp_cmdshell**.
> [!NOTE]
> **xp_cmdshell** is disabled by default. To enable **xp_cmdshell**, see [xp_cmdshell Server Configuration Option](../../database-engine/configure-windows/xp-cmdshell-server-configuration-option.md).
 [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## Syntax
```
sp_xp_cmdshell_proxy_account [ NULL | { 'account_name' , 'password' } ]
```
## Arguments
NULL
Specifies that the proxy credential should be deleted.
*account_name*
Specifies a Windows login that will be the proxy.
*password*
Specifies the password of the Windows account.
## Return Code Values
0 (success) or 1 (failure)
## Remarks
The proxy credential will be called **##xp_cmdshell_proxy_account##**.
When it is executed using the NULL option, **sp_xp_cmdshell_proxy_account** deletes the proxy credential.
## Permissions
Requires CONTROL SERVER permission.
## Examples
### A. Creating the proxy credential
The following example shows how to create a proxy credential for a Windows account called `ADVWKS\Max04` with password `ds35efg##65`.
```
EXEC sp_xp_cmdshell_proxy_account 'ADVWKS\Max04', 'ds35efg##65';
GO
```
### B. Dropping the proxy credential
The following example removes the proxy credential from the credential store.
```
EXEC sp_xp_cmdshell_proxy_account NULL;
GO
```
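Because **xp_cmdshell** is disabled by default, the proxy account is usually created as part of a larger setup sequence. The following is a hedged sketch of such a sequence; the login name `LowPrivLogin` is illustrative and the credentials are the same sample values used above.

```sql
-- Illustrative end-to-end setup (run as a CONTROL SERVER principal):
-- 1. Enable xp_cmdshell.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- 2. Create the proxy credential used for non-sysadmin callers.
EXEC sp_xp_cmdshell_proxy_account 'ADVWKS\Max04', 'ds35efg##65';

-- 3. Allow a non-sysadmin login to execute xp_cmdshell through the proxy.
GRANT EXECUTE ON xp_cmdshell TO [LowPrivLogin];
```

With this in place, `[LowPrivLogin]` runs `xp_cmdshell` under the Windows proxy account rather than the SQL Server service account.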
## See Also
[xp_cmdshell (Transact-SQL)](../../relational-databases/system-stored-procedures/xp-cmdshell-transact-sql.md)
[CREATE CREDENTIAL (Transact-SQL)](../../t-sql/statements/create-credential-transact-sql.md)
[sys.credentials (Transact-SQL)](../../relational-databases/system-catalog-views/sys-credentials-transact-sql.md)
[System Stored Procedures (Transact-SQL)](../../relational-databases/system-stored-procedures/system-stored-procedures-transact-sql.md)
[Security Stored Procedures (Transact-SQL)](../../relational-databases/system-stored-procedures/security-stored-procedures-transact-sql.md)
# Eric Meyer - CSS Reset Stylesheet
Eric Meyer's CSS reset stylesheet as [Sass](http://sass-lang.com/), delivered as a [Compass Extension](http://compass-style.org/docs/tutorials/extensions/) and [Ruby Gem](http://rubygems.org/).
For more information: [http://meyerweb.com/eric/tools/css/reset/](http://meyerweb.com/eric/tools/css/reset/)
## Installation and Usage
### Compass Extension
To install this as a Ruby Gem for use as a Compass Extension, run the following in your Terminal app.
gem install meyer-reset
Then add `require 'meyer-reset'` to your Compass config file.
Using this in your Sass stylesheet is pretty easy. Simply import the extension into your stylesheet (preferably as the first import or declaration in your Sass stylesheet).
If you look at [the extension](https://github.com/adamstac/meyer-reset/blob/master/stylesheets/_meyer-reset.scss), you will notice that we are "including" the mixin `@include meyer-reset` for you in the last line. All you will need to do is import and go.
@import "meyer-reset";
...
### Simple Sass Partial
This option is for non-Compass users, or anyone who wants to use this as a simple Sass partial instead of installing a Ruby Gem and Compass extension. In your terminal app, navigate to the directory you want to download this to.
For example: `cd path/to/project/sass`
Then use curl to pull down the raw file.
curl -0 https://github.com/adamstac/meyer-reset/raw/master/stylesheets/_meyer-reset.scss
The same rules apply as mentioned above. All you will need to do is import and go.
@import "meyer-reset";
...
## For fun
For those who want to learn about how to use Rake and want to play with how Sass and Compass work when compiling, run the command `rake -T` to see a list of rake tasks that: clear, build and release this gem, and compile and convert Sass.
Dig into the [Rakefile](https://github.com/adamstac/meyer-reset/blob/master/Rakefile) to see what makes all this happen.
rake css:clear # Clear the styles
rake gem:build # Build the gem
rake gem:release # Build and release the gem
rake sass:compile # Compile new styles
rake sass:convert # Converts the Sass to SCSS
## License
None (public domain)
* v2.0 | 20110126
* [http://meyerweb.com/eric/tools/css/reset/](http://meyerweb.com/eric/tools/css/reset/)
---
title: 'How to: Validate Using XSD (LINQ to XML) (Visual Basic)'
ms.date: 07/20/2015
ms.assetid: a0fe88d4-4e77-49e7-90de-8953feeccc21
ms.openlocfilehash: 9e4250ac1da4b25ce3f1644b38ff0e71693ecc57
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 01/23/2019
ms.locfileid: "54691217"
---
# <a name="how-to-validate-using-xsd-linq-to-xml-visual-basic"></a>How to: Validate Using XSD (LINQ to XML) (Visual Basic)
The <xref:System.Xml.Schema> namespace contains extension methods that make it easy to validate an XML tree against an XML Schema Definition (XSD) file. For more information, see the documentation for the <xref:System.Xml.Schema.Extensions.Validate%2A> method.
## <a name="example"></a>Example
The following example creates an <xref:System.Xml.Schema.XmlSchemaSet>, then validates two <xref:System.Xml.Linq.XDocument> objects against the schema set. One of the documents is valid, the other is not.
```vb
Dim errors As Boolean = False
Private Sub XSDErrors(ByVal o As Object, ByVal e As ValidationEventArgs)
Console.WriteLine("{0}", e.Message)
errors = True
End Sub
Sub Main()
Dim xsdMarkup As XElement = _
<xsd:schema xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
<xsd:element name='Root'>
<xsd:complexType>
<xsd:sequence>
<xsd:element name='Child1' minOccurs='1' maxOccurs='1'/>
<xsd:element name='Child2' minOccurs='1' maxOccurs='1'/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
Dim schemas As XmlSchemaSet = New XmlSchemaSet()
schemas.Add("", xsdMarkup.CreateReader)
Dim doc1 As XDocument = _
<?xml version='1.0'?>
<Root>
<Child1>content1</Child1>
<Child2>content1</Child2>
</Root>
Dim doc2 As XDocument = _
<?xml version='1.0'?>
<Root>
<Child1>content1</Child1>
<Child3>content1</Child3>
</Root>
Console.WriteLine("Validating doc1")
errors = False
doc1.Validate(schemas, AddressOf XSDErrors)
Console.WriteLine("doc1 {0}", IIf(errors = True, "did not validate", "validated"))
Console.WriteLine()
Console.WriteLine("Validating doc2")
errors = False
doc2.Validate(schemas, AddressOf XSDErrors)
Console.WriteLine("doc2 {0}", IIf(errors = True, "did not validate", "validated"))
End Sub
```
This example produces the following output:
```
Validating doc1
doc1 validated
Validating doc2
The element 'Root' has invalid child element 'Child3'. List of possible elements expected: 'Child2'.
doc2 did not validate
```
## <a name="example"></a>Example
The following example validates that the XML document from [Sample XML File: Customers and Orders (LINQ to XML)](../../../../visual-basic/programming-guide/concepts/linq/sample-xml-file-customers-and-orders-linq-to-xml.md) is valid according to the schema in [Sample XSD File: Customers and Orders](../../../../visual-basic/programming-guide/concepts/linq/sample-xsd-file-customers-and-orders.md). It then modifies the CustomerID element of the first order. After the change, the order refers to a customer that does not exist, so the XML document no longer validates.
This example uses the following XML document: [Sample XML File: Customers and Orders (LINQ to XML)](../../../../visual-basic/programming-guide/concepts/linq/sample-xml-file-customers-and-orders-linq-to-xml.md).
This example uses the following XSD schema: [Sample XSD File: Customers and Orders](../../../../visual-basic/programming-guide/concepts/linq/sample-xsd-file-customers-and-orders.md).
```vb
Dim errors As Boolean = False
Private Sub XSDErrors(ByVal o As Object, ByVal e As ValidationEventArgs)
Console.WriteLine("{0}", e.Message)
errors = True
End Sub
Sub Main()
Dim schemas As XmlSchemaSet = New XmlSchemaSet()
schemas.Add("", "CustomersOrders.xsd")
Console.WriteLine("Attempting to validate")
Dim custOrdDoc As XDocument = XDocument.Load("CustomersOrders.xml")
errors = False
custOrdDoc.Validate(schemas, AddressOf XSDErrors)
Console.WriteLine("custOrdDoc {0}", IIf(errors, "did not validate", "validated"))
Console.WriteLine()
' Modify the source document so that it will not validate.
custOrdDoc.<Root>.<Orders>.<Order>.<CustomerID>(0).Value = "AAAAA"
Console.WriteLine("Attempting to validate after modification")
errors = False
custOrdDoc.Validate(schemas, AddressOf XSDErrors)
Console.WriteLine("custOrdDoc {0}", IIf(errors, "did not validate", "validated"))
End Sub
```
This example produces the following output:
```
Attempting to validate
custOrdDoc validated
Attempting to validate after modification
The key sequence 'AAAAA' in Keyref fails to refer to some key.
custOrdDoc did not validate
```
## <a name="see-also"></a>See also
- <xref:System.Xml.Schema.Extensions.Validate%2A>
- [Creating XML Trees (Visual Basic)](../../../../visual-basic/programming-guide/concepts/linq/creating-xml-trees.md)
b958b15fe37c1370f230abd793ff7e8eda3be29f | 1,820 | md | Markdown | README.md | Medo-X/Nitro-lang | 6df0aa0d28b12af0f43cea5e6ad27f6b562be440 | [
"MIT"
] | null | null | null | README.md | Medo-X/Nitro-lang | 6df0aa0d28b12af0f43cea5e6ad27f6b562be440 | [
"MIT"
] | null | null | null | README.md | Medo-X/Nitro-lang | 6df0aa0d28b12af0f43cea5e6ad27f6b562be440 | [
"MIT"
] | null | null | null | | Nitro |Source|still|Best|
|---|---|---|---|

<p align="center">
<a href="https://telegram.me/ggggw" target="blank" style='margin-right:4px'>
<img align="center" src="images/telegram.svg" alt="midudev" height="28px" width="28px" />
</a>
<a href="https://sourcenitro.online" target="blank">
<img align="center" src="images/global.svg" alt="midudev" height="28px" width="28px" />
</a>
<div align="center">
<h1>مرحباً بكم في <a href="https://telegram.me/vvhvvv">سورس نيترو</a> <img src="https://media.giphy.com/media/hvRJCLFzcasrR4ia7z/giphy.gif" width="25px"> </h1>
<div align="center">
<h3><img src="https://media.giphy.com/media/WUlplcMpOCEmTGBtBW/giphy.gif" width="30"> طريقة التشغيل<img src="https://media.giphy.com/media/WUlplcMpOCEmTGBtBW/giphy.gif" width="30"></h3>
</div>
|انسخ الكود والصقة بالترمنال واضغط انتر|
|---|
***
`wget -q -O - "https://raw.githubusercontent.com/Medo-X/Nitro/master/install.txt" | bash;cd Nitro;python3.7 setup.py
`
***
|وانتظر تنصيب المكاتب بعدها يطلب منك التوكن والايدي املئ معلوماتك واضغط انتر بعدها اذهب الى التليكرام واستعمل بوتك|
|---|
<div align="center">
<h3><img src="https://media.giphy.com/media/MEgmtF9GMMLuqpgke0/giphy.gif" width="30"> فيديو لطريقة التنصيب<img src="https://media.giphy.com/media/MEgmtF9GMMLuqpgke0/giphy.gif" width="30"></h3>
</div>
[](https://medo.gq/videos/ex.gif)
| لغه برمجة السورس |
|---|
<p align="center">
<img src="https://raw.githubusercontent.com/8bithemant/8bithemant/master/svg/dev/languages/python.svg" alt="python" style="vertical-align:top; margin:4px">
</p>
|BoT|https://t.me/i_PBot||
|---|----|-----|
|Medo|https://t.me/GGGGw||
|ححمود|https://t.me/QoQo6 ||
| 36.4 | 192 | 0.687363 | yue_Hant | 0.325967 |
# Chatterbox - Client
## Project Description
Client for a simple chat-room app.
## License
This app is MIT licensed.
# 4. Auction service
Date: 2021-09-29
## Status
Accepted
## Context
If no internal Executor has been found, an Auction will be placed to allow external Auction Houses to bid on a specific Task. The creation of an Auction is initiated by the Executor Pool Service where the coordination of Task assignment happens.
## Decision
The Auction Domain is modeled as an own Auction Service, that handles independently the needed actions to find a suitable external Executor for a specific Task. If an Executor can be found the Auction House Service gets notified.
## Consequences
The separation from the other services guarantees fault-tolerance, scalability, extensibility and availability. For example if the Auction Service is unavailable the remainder of the system stays intact and allows further planning of Tasks with capabilities for an internal Executor.
---
title: Accessing the mouse
ms.date: 07/20/2015
helpviewer_keywords:
- My.Computer.Mouse object [Visual Basic], tasks
- mouse [Visual Basic], accessing [Visual Basic]
ms.assetid: 6d31a3d2-d860-459d-9d13-3aa192d62ba2
ms.openlocfilehash: cd0b7664ea17a9280c6d142d377f7c3601dce9af
ms.sourcegitcommit: 17ee6605e01ef32506f8fdc686954244ba6911de
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 11/22/2019
ms.locfileid: "74347011"
---
# <a name="accessing-the-mouse-visual-basic"></a>Accessing the Mouse (Visual Basic)
The `My.Computer.Mouse` object provides a way to find information about the computer's mouse, such as whether one exists, the number of mouse buttons, and details about the mouse wheel.
## <a name="remarks"></a>Remarks
This table lists tasks associated with the `My.Computer.Mouse` object and points to the topic that shows how to perform each task.
|To|See|
|--------|---------|
|Determine whether the mouse has a scroll wheel.|<xref:Microsoft.VisualBasic.Devices.Mouse.WheelExists>|
|Determine whether the left and right mouse buttons have been swapped.|<xref:Microsoft.VisualBasic.Devices.Mouse.ButtonsSwapped>|
|Set how far to scroll when the mouse wheel is rotated one notch.|<xref:Microsoft.VisualBasic.Devices.Mouse.WheelScrollLines>|
## <a name="see-also"></a>See also
- <xref:Microsoft.VisualBasic.Devices.Mouse>
# fontello-svg
fontello-svg is a command-line tool to generate the SVG versions of a [Fontello](http://fontello.com/) icon set, with a corresponding CSS file.
[](https://travis-ci.org/bpierre/fontello-svg)
## Installation
```shell
$ npm install fontello-svg -g
```
## Example
You need to select and download an icon set from the Fontello website, then indicate the path of the `config.json` file with the `--config` parameter.
```shell
$ fontello-svg --config fontello-config-file.json \
--out ./iconset-directory \
--fill-colors "grey:rgb(77,78,83)|blue:rgb(0,149,221)"
```
## Usage
```shell
Usage: fontello-svg --config <config file> --out <dir> [options]
Options:
-h, --help output usage information
-V, --version output the version number
-c, --config <config file> Set the Fontello configuration file (required)
-o, --out <dir> Set the export directory (required)
-f, --fill-colors <colors> Transform the SVG paths to the specified colors. Syntax: --fill-colors "black:rgb(0,0,0) | red:rgb(255,0,0)"
-p, --css-path <path> Set a CSS path for SVG backgrounds
--file-format <format> Override the default filename. Values: {0} - collection, {1} - name, {2} - color. Syntax: --file-format "{0}-{1}-{2}.svg" | --file-format "{0}-Custom-{1}.svg"
--no-css Do not create the CSS file
--no-skip Do not skip existing files
--verbose Verbose output
```
## Tutorial
[Sara Soueidan](https://sarasoueidan.com/) wrote a blog post explaining how to use fontello-svg and other tools to convert an icons-as-font configuration into SVG files. Read it here: <https://sarasoueidan.com/blog/icon-fonts-to-svg/>
## License
[MIT](http://pierre.mit-license.org/)
# Kube-vip as a Static Pod
In hybrid mode `kube-vip` manages a virtual IP address that is passed through its configuration for a highly available Kubernetes cluster; it also "watches" services of `type:LoadBalancer` and, once their `spec.LoadBalancerIP` is updated (typically by a cloud controller), advertises that address using BGP/ARP.
The "hybrid" mode is now the default mode in `kube-vip` from `0.2.3` onwards, and allows both modes to be enabled at the same time.
## Generating a Manifest
This section details creating a number of manifests for various use cases.
### Set configuration details
`export VIP=192.168.0.40`
`export INTERFACE=<interface>`
### Configure to use a container runtime
The easiest method to generate a manifest is using the container itself, below will create an alias for different container runtimes.
#### containerd
`alias kube-vip="ctr run --rm --net-host docker.io/plndr/kube-vip:0.3.1 vip /kube-vip"`
#### Docker
`alias kube-vip="docker run --network host --rm plndr/kube-vip:0.3.1"`
### ARP
This configuration will create a manifest that starts `kube-vip` providing **controlplane** and **services** management, using **leaderElection**. When this instance is elected as the leader, it binds the `vip` to the specified `interface`; the same applies to services of `type:LoadBalancer`.
`export INTERFACE=eth0`
```
kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
```
### BGP
This configuration will create a manifest that will start `kube-vip` providing **controlplane** and **services** management. **Unlike** ARP, all nodes in the BGP configuration will advertise virtual IP addresses.
**Note** we bind the address to `lo` as we don't want multiple devices that have the same address on public interfaces. We can specify all the peers in a comma-separated list in the format `address:AS:password:multihop`.
`export INTERFACE=lo`
```
kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--bgp \
--localAS 65000 \
--bgpRouterID 192.168.0.2 \
--bgppeers 192.168.0.10:65000::false,192.168.0.11:65000::false | tee /etc/kubernetes/manifests/kube-vip.yaml
```
### BGP with Equinix Metal
When deploying Kubernetes with Equinix Metal with the `--controlplane` functionality we need to pre-populate the BGP configuration in order for the control plane to be advertised and work in a HA scenario. Luckily Equinix Metal provides the capability to "look up" the configuration details (for BGP) that we need in order to advertise our virtual IP for HA functionality. We can either make use of the [Equinix Metal API](https://metal.equinix.com/developers/api/) or we can parse the [Equinix Metal Metadata service](https://metal.equinix.com/developers/docs/servers/metadata/).
**Note** If this cluster will be making use of Equinix Metal for `type:LoadBalancer` (by using the [Equinix Metal CCM](https://github.com/packethost/packet-ccm)) then we will need to ensure that nodes are set to use an external cloud-provider. Before doing a `kubeadm init|join` ensure the kubelet has the correct flags by using the following command `echo KUBELET_EXTRA_ARGS=\"--cloud-provider=external\" > /etc/default/kubelet`.
#### Creating a manifest using the API
We can enable `kube-vip` with the capability to discover the required configuration for BGP by passing the `--metal` flag and the API Key and our project ID.
```
kube-vip manifest pod \
--interface $INTERFACE\
--vip $VIP \
--controlplane \
--services \
--bgp \
--metal \
--metalKey xxxxxxx \
--metalProjectID xxxxx | tee /etc/kubernetes/manifests/kube-vip.yaml
```
#### Creating a manifest using the metadata
We can parse the metadata, *however* it requires that the tools `curl` and `jq` are installed.
```
kube-vip manifest pod \
--interface $INTERFACE\
--vip $VIP \
--controlplane \
--services \
--bgp \
--peerAS $(curl https://metadata.platformequinix.com/metadata | jq '.bgp_neighbors[0].peer_as') \
--peerAddress $(curl https://metadata.platformequinix.com/metadata | jq -r '.bgp_neighbors[0].peer_ips[0]') \
--localAS $(curl https://metadata.platformequinix.com/metadata | jq '.bgp_neighbors[0].customer_as') \
--bgpRouterID $(curl https://metadata.platformequinix.com/metadata | jq -r '.bgp_neighbors[0].customer_ip') | sudo tee /etc/kubernetes/manifests/vip.yaml
```
## Deploy your Kubernetes Cluster
### First node
```
sudo kubeadm init \
--kubernetes-version 1.19.0 \
--control-plane-endpoint $VIP \
--upload-certs
```
### Additional Node(s)
Due to an oddity with `kubeadm` we can't have our `kube-vip` manifest present **before** joining our additional nodes. So on these control plane nodes we will add them first to the cluster.
```
sudo kubeadm join $VIP:6443 \
    --token w5atsr.blahblahblah \
--control-plane \
--certificate-key abc123
```
**Once joined**, these nodes can run the same command that we ran on the first node to populate the `/etc/kubernetes/manifests/` folder with the `kube-vip` manifest.
## Services
At this point your `kube-vip` static pods will be up and running and where used with the `--services` flag will also be watching for Kubernetes services that they can advertise. In order for `kube-vip` to advertise a service it needs a CCM or other controller to apply an IP address to the `spec.LoadBalancerIP`, which marks the loadbalancer as defined.
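For example, a Service like the following minimal sketch (the name and selector are illustrative) would sit unadvertised until a CCM or other controller fills in `spec.loadBalancerIP`, at which point `kube-vip` begins advertising that address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx          # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```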
# NotGreaterThanException constructor (1 of 4)
```csharp
public NotGreaterThanException(Exception e)
```
## See Also
* class [NotGreaterThanException](../NotGreaterThanException.md)
* namespace [Raccoon.Ninja.ValidatorDotNet.Exceptions](../../ValidatorDotNet.md)
---
# NotGreaterThanException constructor (2 of 4)
```csharp
public NotGreaterThanException(string message)
```
## See Also
* class [NotGreaterThanException](../NotGreaterThanException.md)
* namespace [Raccoon.Ninja.ValidatorDotNet.Exceptions](../../ValidatorDotNet.md)
---
# NotGreaterThanException constructor (3 of 4)
```csharp
protected NotGreaterThanException(SerializationInfo info, StreamingContext context)
```
## See Also
* class [NotGreaterThanException](../NotGreaterThanException.md)
* namespace [Raccoon.Ninja.ValidatorDotNet.Exceptions](../../ValidatorDotNet.md)
---
# NotGreaterThanException constructor (4 of 4)
```csharp
public NotGreaterThanException(string message, Exception e)
```
## See Also
* class [NotGreaterThanException](../NotGreaterThanException.md)
* namespace [Raccoon.Ninja.ValidatorDotNet.Exceptions](../../ValidatorDotNet.md)
<!-- DO NOT EDIT: generated by xmldocmd for ValidatorDotNet.dll -->
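For context, a hedged usage sketch of these overloads (the `Guard` helper below is hypothetical, not part of the library's documented API):

```csharp
using Raccoon.Ninja.ValidatorDotNet.Exceptions;

public static class Guard
{
    // Hypothetical helper -- illustrates throwing the documented exception.
    public static void MustBeGreaterThan(int actual, int expected)
    {
        if (actual <= expected)
            throw new NotGreaterThanException(
                $"Expected a value greater than {expected}, got {actual}.");
    }
}
```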
del-empty-dirs (ded)
===
A simple CLI designed to delete all empty directories while skipping ones that contain at least one file.
Synopsis
---
```ded [DIR]```
Description
---
Delete the specified DIR if empty and all empty subdirectories.
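The same behavior can be approximated with GNU `find` — a hedged sketch (`-empty` and `-delete` are GNU extensions; `-delete` implies depth-first traversal, so emptied parents are removed in the same pass):

```shell
# Build a small tree: an empty dir, a nested empty chain, and one dir with a file
mkdir -p demo/empty demo/nested/inner demo/keep
touch demo/keep/file.txt

# Remove every empty directory; dirs containing at least one file survive
find demo -type d -empty -delete

ls demo
```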
---
title: CorCallingConvention Enumeration
ms.date: 03/30/2017
api_name:
- CorCallingConvention
api_location:
- mscoree.dll
api_type:
- COM
f1_keywords:
- CorCallingConvention
helpviewer_keywords:
- CorCallingConvention enumeration [.NET Framework metadata]
ms.assetid: 69156fbf-7219-43bf-b4b8-b13f1a2fcb86
topic_type:
- apiref
ms.openlocfilehash: c9b20500a4a9e4649a938e00e3b059d1395da1d3
ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/24/2020
ms.locfileid: "95718924"
---
# <a name="corcallingconvention-enumeration"></a>CorCallingConvention Enumeration
Contains values that describe the types of calling conventions that are made in managed code.
## <a name="syntax"></a>Syntax
```cpp
typedef enum CorCallingConvention
{
IMAGE_CEE_CS_CALLCONV_DEFAULT = 0x0,
IMAGE_CEE_CS_CALLCONV_VARARG = 0x5,
IMAGE_CEE_CS_CALLCONV_FIELD = 0x6,
IMAGE_CEE_CS_CALLCONV_LOCAL_SIG = 0x7,
IMAGE_CEE_CS_CALLCONV_PROPERTY = 0x8,
IMAGE_CEE_CS_CALLCONV_UNMGD = 0x9,
IMAGE_CEE_CS_CALLCONV_GENERICINST = 0xa,
IMAGE_CEE_CS_CALLCONV_NATIVEVARARG = 0xb,
IMAGE_CEE_CS_CALLCONV_MAX = 0xc,
IMAGE_CEE_CS_CALLCONV_MASK = 0x0f,
IMAGE_CEE_CS_CALLCONV_HASTHIS = 0x20,
IMAGE_CEE_CS_CALLCONV_EXPLICITTHIS = 0x40,
IMAGE_CEE_CS_CALLCONV_GENERIC = 0x10
} CorCallingConvention;
```
## <a name="members"></a>Members
|Member|Description|
|------------|-----------------|
|`IMAGE_CEE_CS_CALLCONV_DEFAULT`|Indicates a default calling convention.|
|`IMAGE_CEE_CS_CALLCONV_VARARG`|Indicates that the method takes a variable number of parameters.|
|`IMAGE_CEE_CS_CALLCONV_FIELD`|Indicates that the call is to a field.|
|`IMAGE_CEE_CS_CALLCONV_LOCAL_SIG`|Indicates that the call is to a local method.|
|`IMAGE_CEE_CS_CALLCONV_PROPERTY`|Indicates that the call is to a property.|
|`IMAGE_CEE_CS_CALLCONV_UNMGD`|Indicates that the call is unmanaged.|
|`IMAGE_CEE_CS_CALLCONV_GENERICINST`|Indicates a generic method instantiation.|
|`IMAGE_CEE_CS_CALLCONV_NATIVEVARARG`|Indicates a 64-bit PInvoke call to a method that takes a variable number of parameters.|
|`IMAGE_CEE_CS_CALLCONV_MAX`|Describes an invalid 4-bit value.|
|`IMAGE_CEE_CS_CALLCONV_MASK`|Indicates that the calling convention is described by the bottom four bits.|
|`IMAGE_CEE_CS_CALLCONV_HASTHIS`|Indicates that the top bit describes a `this` parameter.|
|`IMAGE_CEE_CS_CALLCONV_EXPLICITTHIS`|Indicates that a `this` parameter is explicitly described in the signature.|
|`IMAGE_CEE_CS_CALLCONV_GENERIC`|Indicates a generic method signature with an explicit number of type arguments. This precedes an ordinary parameter count.|
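The upper flag bits combine with the base convention held in the low four bits of a signature's leading byte. The Python sketch below is illustrative only — it is not part of any .NET API — and shows how the mask and flag values interact:

```python
IMAGE_CEE_CS_CALLCONV_MASK = 0x0F
IMAGE_CEE_CS_CALLCONV_GENERIC = 0x10
IMAGE_CEE_CS_CALLCONV_HASTHIS = 0x20
IMAGE_CEE_CS_CALLCONV_EXPLICITTHIS = 0x40

# Base conventions stored in the low four bits.
BASE_NAMES = {
    0x0: "DEFAULT", 0x5: "VARARG", 0x6: "FIELD", 0x7: "LOCAL_SIG",
    0x8: "PROPERTY", 0x9: "UNMGD", 0xA: "GENERICINST", 0xB: "NATIVEVARARG",
}

def decode_calling_convention(byte):
    """Split a signature's calling-convention byte into (base, flags)."""
    flags = []
    if byte & IMAGE_CEE_CS_CALLCONV_HASTHIS:
        flags.append("HASTHIS")
    if byte & IMAGE_CEE_CS_CALLCONV_EXPLICITTHIS:
        flags.append("EXPLICITTHIS")
    if byte & IMAGE_CEE_CS_CALLCONV_GENERIC:
        flags.append("GENERIC")
    base = BASE_NAMES.get(byte & IMAGE_CEE_CS_CALLCONV_MASK, "invalid")
    return base, flags
```

For example, a byte of `0x20` decodes to the default convention with the `HASTHIS` flag set.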
## <a name="requirements"></a>Requirements
**Platforms:** see [System Requirements](../../get-started/system-requirements.md).
**Header:** CorHdr.h
**.NET Framework versions:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## <a name="see-also"></a>See also
- [Metadata Enumerations](metadata-enumerations.md)
# Class: property value pair
URI: [http://w3id.org/biolink/vocab/PropertyValuePair](http://w3id.org/biolink/vocab/PropertyValuePair)
## Mappings
## Inheritance
## Children
## Used in
* class: **[Association](Association.md)** *[extensions context slot](extensions_context_slot.md)* **[PropertyValuePair](PropertyValuePair.md)**
* class: **[ExtensionsAndEvidenceAssociationMixin](ExtensionsAndEvidenceAssociationMixin.md)** *[object extensions](object_extensions.md)* **[PropertyValuePair](PropertyValuePair.md)**
* class: **[Association](Association.md)** *[subject extensions](subject_extensions.md)* **[PropertyValuePair](PropertyValuePair.md)**
## Fields
* [filler](filler.md)
* Description: The value in a property-value tuple
* range: [NamedThing](NamedThing.md)
* __Local__
* [relation](relation.md)
* Description: the relationship type by which a subject is connected to an object in an association
* range: [RelationshipType](RelationshipType.md) [required]
* __Local__
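As a concrete illustration, a property value pair is simply a (relation, filler) tuple. The JSON-style instance below is hypothetical — both identifiers are placeholders, not real ontology terms:

```python
# Hypothetical property-value-pair instance; both CURIEs are placeholders.
property_value_pair = {
    "relation": "EX:0000001",  # a RelationshipType (required)
    "filler": "EX:0000002",    # a NamedThing serving as the value
}
```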
---
title: Text Merge cognitive skill - Azure Search
description: Merges text from a collection of fields into a single consolidated field. Use this cognitive skill in an Azure Search enrichment pipeline.
services: search
manager: nitinme
author: luiscabrer
ms.service: search
ms.workload: search
ms.topic: conceptual
ms.date: 05/02/2019
ms.author: luisca
ms.subservice: cognitive-search
ms.openlocfilehash: 3cf816a07b61fd5c398dba376276ef1e9f28e985
ms.sourcegitcommit: 7a6d8e841a12052f1ddfe483d1c9b313f21ae9e6
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/30/2019
ms.locfileid: "70186335"
---
# <a name="text-merge-cognitive-skill"></a>Text Merge cognitive skill
The **Text Merge** skill consolidates text from a collection of fields into a single field.
> [!NOTE]
> This skill is not bound to a Cognitive Services API, and you are not charged for using it. However, you should still [attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to override the **Free** resource option that limits you to a small number of enrichments per day.
## <a name="odatatype"></a>@odata.type
Microsoft.Skills.Text.MergeSkill
## <a name="skill-parameters"></a>Skill parameters
Parameters are case-sensitive.
| Parameter name | Description |
|--------------------|-------------|
| insertPreTag | String to include before every insertion. The default value is `" "`. To omit the space, set the value to `""`. |
| insertPostTag | String to include after every insertion. The default value is `" "`. To omit the space, set the value to `""`. |
## <a name="sample-input"></a>Sample input
Here is an example JSON document providing usable input for this skill:
```json
{
"values": [
{
"recordId": "1",
"data":
{
"text": "The brown fox jumps over the dog",
"itemsToInsert": ["quick", "lazy"],
"offsets": [3, 28],
}
}
]
}
```
## <a name="sample-output"></a>Sample output
This example shows the output for the preceding input, assuming that *insertPreTag* is set to `" "` and *insertPostTag* is set to `""`.
```json
{
"values": [
{
"recordId": "1",
"data":
{
"mergedText": "The quick brown fox jumps over the lazy dog"
}
}
]
}
```
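The insertion behavior shown above can be modeled in a few lines of Python. This is an illustrative sketch of the skill's semantics, not service code; `merge_text` is a hypothetical helper:

```python
def merge_text(text, items_to_insert, offsets, pre=" ", post=""):
    """Insert pre + item + post at each offset of the original text.

    Offsets refer to positions in the *original* string, so insertions
    are applied from the highest offset downward to keep them valid.
    """
    for item, offset in sorted(zip(items_to_insert, offsets),
                               key=lambda pair: pair[1], reverse=True):
        text = text[:offset] + pre + item + post + text[offset:]
    return text
```

With the sample input above, `merge_text("The brown fox jumps over the dog", ["quick", "lazy"], [3, 28])` yields `"The quick brown fox jumps over the lazy dog"`.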
## <a name="extended-sample-skillset-definition"></a>Extended sample skillset definition
A common use of text merge is to merge the textual representation of images (text from an OCR skill, or the caption of an image) into the content field of a document.
The following example skillset uses OCR to extract text from images embedded in the document. It then creates a *merged_text* field that contains both the original text and the OCRed text from each image. You can learn more about the OCR skill [here](https://docs.microsoft.com/azure/search/cognitive-search-skill-ocr).
```json
{
"description": "Extract text from images and merge with content text to produce merged_text",
"skills":
[
{
"description": "Extract text (plain and structured) from image.",
"@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
"context": "/document/normalized_images/*",
"defaultLanguageCode": "en",
"detectOrientation": true,
"inputs": [
{
"name": "image",
"source": "/document/normalized_images/*"
}
],
"outputs": [
{
"name": "text"
}
]
},
{
"@odata.type": "#Microsoft.Skills.Text.MergeSkill",
"description": "Create merged_text, which includes all the textual representation of each image inserted at the right location in the content field.",
"context": "/document",
"insertPreTag": " ",
"insertPostTag": " ",
"inputs": [
{
"name":"text", "source": "/document/content"
},
{
"name": "itemsToInsert", "source": "/document/normalized_images/*/text"
},
{
"name":"offsets", "source": "/document/normalized_images/*/contentOffset"
}
],
"outputs": [
{
"name": "mergedText", "targetName" : "merged_text"
}
]
}
]
}
```
The example above assumes that a normalized-images field exists. To obtain this field, set the *imageAction* configuration in your indexer definition to *generateNormalizedImages* as shown below:
```json
{
//...rest of your indexer definition goes here ...
"parameters":{
"configuration":{
"dataToExtract":"contentAndMetadata",
"imageAction":"generateNormalizedImages"
}
}
}
```
## <a name="see-also"></a>See also
+ [Predefined skills](cognitive-search-predefined-skills.md)
+ [How to define a skillset](cognitive-search-defining-skillset.md)
+ [Create Indexer (REST)](https://docs.microsoft.com/rest/api/searchservice/create-indexer)
---
title: Overview
intro: 'Learn about the support options available for {% data variables.product.product_name %}.'
redirect_from:
- /enterprise/admin/enterprise-support/overview
versions:
enterprise-server: '*'
github-ae: '*'
topics:
- Enterprise
children:
- /about-github-enterprise-support
- /about-github-premium-support-for-github-enterprise-server
- /about-github-premium-support-for-github-enterprise
- /about-support-for-advanced-security
---
[](https://travis-ci.org/prideout/clumpy)
[](https://github.com/prideout/clumpy/blob/master/LICENSE)
This tool can manipulate or generate large swaths of image data stored in [numpy
files](https://docs.scipy.org/doc/numpy-1.13.0/neps/npy-format.html). It's a sandbox for implementing
operations in C++ that are either slow or non-existent in [pillow](https://python-pillow.org/),
[scikit-image](http://scikit-image.org/), or the [SciPy](https://www.scipy.org/) ecosystem.
Since it's just a command line tool, it doesn't contain any
[FFI](https://en.wikipedia.org/wiki/Foreign_function_interface) messiness. Feel free to contribute
by adding your own command, but keep it simple! Add a `cc` file to the `commands` folder and make a pull request.
This is just a toy library. For serious C++ applications you might want to look at
[xtensor](https://github.com/QuantStack/xtensor) (which can read / write npy files) and
[xtensor-io](https://github.com/QuantStack/xtensor-io). To achieve huge speed-ups with numpy, see
[numba](https://numba.pydata.org/).
---
Build and run clumpy.
cmake -H. -B.release -GNinja && cmake --build .release
alias clumpy=$PWD/.release/clumpy
clumpy help
---
Generate two octaves of simplex noise and combine them.
clumpy generate_simplex 500x250 0.5 16.0 0 noise1.npy
clumpy generate_simplex 500x250 1.0 8.0 0 noise2.npy
python <<EOL
import numpy as np; from PIL import Image
noise1, noise2 = np.load("noise1.npy"), np.load("noise2.npy")
result = np.clip(np.abs(noise1 + noise2), 0, 1)
Image.fromarray(np.uint8(result * 255), "L").show()
EOL
<img src="https://github.com/prideout/clumpy/raw/master/extras/example1.png">
---
Create a distance field with a random shape.
clumpy generate_dshapes 500x250 1 0 shapes.npy
clumpy visualize_sdf shapes.npy rgb shapeviz.npy
python <<EOL
import numpy as np; from PIL import Image
Image.fromarray(np.load('shapeviz.npy'), 'RGB').show()
EOL
<img src="https://github.com/prideout/clumpy/raw/master/extras/example2.png">
---
Create a 2x2 atlas of distance fields, each with 5 random shapes.
for i in {1..4}; do clumpy generate_dshapes 250x125 5 $i shapes$i.npy; done
for i in {1..4}; do clumpy visualize_sdf shapes$i.npy shapes$i.npy; done
python <<EOL
import numpy as np; from PIL import Image
a, b, c, d = (np.load('shapes{}.npy'.format(i)) for i in [1,2,3,4])
img = np.vstack(((np.hstack((a,b)), np.hstack((c,d)))))
Image.fromarray(img, 'RGB').show()
EOL
<img src="https://github.com/prideout/clumpy/raw/master/extras/example3.png">
---
Create a nice distribution of ~20k points, cull points that overlap certain areas, and plot them. Do
all this in less than a second and use only one thread.
clumpy bridson_points 500x250 2 0 coords.npy
clumpy generate_dshapes 500x250 1 0 shapes.npy
clumpy cull_points coords.npy shapes.npy culled.npy
clumpy splat_points culled.npy 500x250 u8disk 1 1.0 splats.npy
python <<EOL
import numpy as np; from PIL import Image
Image.fromarray(np.load("splats.npy"), "L").show()
EOL
<img src="https://github.com/prideout/clumpy/raw/master/extras/example4.png">
---
You may wish to invoke clumpy from within Python using `os.system` or `subprocess.Popen `.
Here's an example that generates 240 frames of an advection animation with ~12k points, then
brightens up the last frame and displays it. This entire script takes about 1 second to execute and
uses only one core (3.1 GHz Intel Core i7).
```python
from numpy import load
from PIL import Image
from os import system
def clumpy(cmd):
result = system('./clumpy ' + cmd)
if result: raise Exception("clumpy failed with: " + cmd)
clumpy('generate_simplex 1000x500 1.0 8.0 0 potential.npy')
clumpy('curl_2d potential.npy velocity.npy')
clumpy('bridson_points 1000x500 5 0 pts.npy')
clumpy('advect_points pts.npy velocity.npy 30 1 0.95 240 anim.npy')
Image.fromarray(load("000anim.npy"), "L").point(lambda p: p * 2).show()
```
<img src="https://github.com/prideout/clumpy/raw/master/extras/example5.png">
---
Create a visualization of pendulum's phase space.
clumpy pendulum_phase 4000x2000 0.9 2 5 field.npy
clumpy bridson_points 4000x2000 20 0 pts.npy
clumpy advect_points pts.npy field.npy 2.5 5 0.99 400 phase.npy
<img src="https://github.com/prideout/clumpy/raw/master/extras/example6.png">
<!--
TODO
find_contours <input_img> <output_svg>
https://github.com/BlockoS/blob/blob/master/blob.h#L127
https://github.com/adishavit/simple-svg/blob/master/main_1.0.0.cpp#L35
http://katlas.org/wiki/The_Rolfsen_Knot_Table_Mosaic
cnpy.h should be abstracted out into a base class methods for save and load.
appveyer windows build, like:
https://t.co/bkJ7ZqXAGy
rename extras to docs then add a mkdocs pipeline
heman color island but without lighting
shouldn't require any new functionality
could be a python-first example.
# Should this function throw if system returns nonzero?
def clumpy(cmd):
os.system('./clumpy ' + cmd)
search for "color lookup" here:
https://docs.scipy.org/doc/numpy-1.12.0/user/basics.indexing.html
look at pillow example here (although it should have h=1, then resize)
https://stackoverflow.com/questions/25668828/how-to-create-colour-gradient-in-python
clumpy('advect_points pts.npy velocity.npy ' +
'{step_size} {kernel_size} {decay} {nframes} anim.npy'.format(
step_size = 399,
kernel_size = 1,
decay = 0.9,
nframes = 240
))
grayscale island waves sequence
could perhaps use multiprocessing
https://github.com/prideout/reba-island
lighting / AO... make the streamlines look like 3D tadpoles?
"Import a bitmap, generate a distance field from it, add noise, and export."
variable_blur
https://github.com/scipy/scipy/blob/master/scipy/ndimage/filters.py#L213
gradient_magnitude (similar to curl2d)
https://docs.scipy.org/doc/numpy/reference/routines.math.html
https://blind.guru/simple_cxx11_workqueue.html
-->
# MatchOption
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Field** | **string** | The email property to match on. One of SUBJECT, TO, BCC, CC or FROM | [optional]
**Should** | **string** | What criteria to apply. CONTAIN or EQUAL. Note CONTAIN is recommended due to some SMTP servers adding new lines to fields and body content. | [optional]
**Value** | **string** | The value you wish to compare with the value of the field specified using the `should` value passed. For example `BODY` should `CONTAIN` a value passed. | [optional]
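The semantics of the three properties can be illustrated with a small predicate. This is explanatory Python pseudocode only — the actual client is Go, and the dictionary shapes below are assumptions, not the client's types:

```python
def matches(option, email_fields):
    """Evaluate a MatchOption-style rule against a dict of email fields."""
    actual = email_fields.get(option["field"].lower(), "")
    if option["should"] == "CONTAIN":
        return option["value"] in actual
    if option["should"] == "EQUAL":
        return option["value"] == actual
    raise ValueError("should must be CONTAIN or EQUAL")
```

This also shows why `CONTAIN` is recommended: a server-appended trailing newline makes an `EQUAL` comparison fail, while `CONTAIN` still matches.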
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
krTheme for TextMate
====================
Most TextMate themes suck. This one rocks.
[Download krTheme](http://github.com/kennethreitz/krTheme.tmTheme/raw/master/krTheme.tmTheme) [[ZIP](http://github.com/kennethreitz/krTheme.tmTheme/zipball/master)]!

On the horizon
--------------
**Possible support for the following:**
* Terminal.app + SIMBL
* VIM
* E Texteditor
Legal Stuff
-----------
Copyright 2010 Kenneth Reitz. All Rights Reserved.