| column | dtype | values / range |
|---|---|---|
| hexsha | stringlengths | 40-40 |
| size | int64 | 5-1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3-344 |
| max_stars_repo_name | stringlengths | 5-125 |
| max_stars_repo_head_hexsha | stringlengths | 40-78 |
| max_stars_repo_licenses | sequencelengths | 1-11 |
| max_stars_count | int64 | 1-368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24-24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24-24 |
| max_issues_repo_path | stringlengths | 3-344 |
| max_issues_repo_name | stringlengths | 5-125 |
| max_issues_repo_head_hexsha | stringlengths | 40-78 |
| max_issues_repo_licenses | sequencelengths | 1-11 |
| max_issues_count | int64 | 1-116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24-24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24-24 |
| max_forks_repo_path | stringlengths | 3-344 |
| max_forks_repo_name | stringlengths | 5-125 |
| max_forks_repo_head_hexsha | stringlengths | 40-78 |
| max_forks_repo_licenses | sequencelengths | 1-11 |
| max_forks_count | int64 | 1-105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24-24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24-24 |
| content | stringlengths | 5-1.04M |
| avg_line_length | float64 | 1.14-851k |
| max_line_length | int64 | 1-1.03M |
| alphanum_fraction | float64 | 0-1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01-1 |
hexsha: b9b059d8fccf81070f27dd406279fa99506e9b9e
size: 1,379
ext: md
lang: Markdown
max_stars_repo_path: _pages/cv.md
max_stars_repo_name: Mokhwalee/mokhwalee.github.io
max_stars_repo_head_hexsha: 0d48813e4b08d6a1dbbd5c845a5d9fd0e1fa0c47
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2021-10-03T19:49:24.000Z
max_stars_repo_stars_event_max_datetime: 2021-10-03T19:49:24.000Z
max_issues_repo_path: _pages/cv.md
max_issues_repo_name: Mokhwalee/mokhwalee.github.io
max_issues_repo_head_hexsha: 0d48813e4b08d6a1dbbd5c845a5d9fd0e1fa0c47
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: _pages/cv.md
max_forks_repo_name: Mokhwalee/mokhwalee.github.io
max_forks_repo_head_hexsha: 0d48813e4b08d6a1dbbd5c845a5d9fd0e1fa0c47
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 2
max_forks_repo_forks_event_min_datetime: 2022-02-05T16:42:13.000Z
max_forks_repo_forks_event_max_datetime: 2022-02-08T00:56:18.000Z
---
layout: archive
title: "CV"
permalink: /cv/
author_profile: true
redirect_from:
  - /resume
---

{% include base_path %}

Education
======
* B.S. in Statistics and Informatics, IIT Kharagpur, 2012
* M.S. in Statistics and Informatics, IIT Kharagpur, 2012
* Ph.D. in Statistics, Stony Brook University, 2019

Work experience
======
* Mar 2020: Senior Data Scientist
  * Kapitus
  * Predictive Modeling
* Jun 2018 - Jan 2020: Data Scientist
  * Inncretech
  * Data Science
* Aug 2015 - Jul 2016: Senior Manager
  * Indiabulls Housing Finance
  * Analytics and Investor Relations
* Sep 2012 - Nov 2014: Assistant Manager
  * Fullerton India Credit Company
  * Analytics and Information Management
* Jan 2015 - Jul 2015: Senior Analyst
  * Deutsche Bank
  * Derivatives Funding Desk

Skills
======
* Machine Learning
* Data Science
* Statistics
* Algorithms
* Medical Imaging
* Python
* C++
* R

Publications
======
<ul>{% for post in site.publications %}
  {% include archive-single-cv.html %}
{% endfor %}</ul>

Talks
======
<ul>{% for post in site.talks %}
  {% include archive-single-talk-cv.html %}
{% endfor %}</ul>

Teaching
======
<ul>{% for post in site.teaching %}
  {% include archive-single-cv.html %}
{% endfor %}</ul>

Service and leadership
======
* Held multiple leadership positions during undergraduate and graduate school.
avg_line_length: 18.635135
max_line_length: 78
alphanum_fraction: 0.670051
lid: eng_Latn
lid_prob: 0.619817
hexsha: b9b0c1211a292680e5e04be65658759d90b000ab
size: 970
ext: markdown
lang: Markdown
max_stars_repo_path: src/blog/2011-01-17/JQuery-empty-vs-blank-filters.markdown
max_stars_repo_name: davidfekke/fek.io
max_stars_repo_head_hexsha: 16ed1a623bc8931b1d02f00304b4e9417330b885
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 2
max_stars_repo_stars_event_min_datetime: 2021-12-07T15:45:59.000Z
max_stars_repo_stars_event_max_datetime: 2022-02-12T14:03:57.000Z
max_issues_repo_path: src/blog/2011-01-17/JQuery-empty-vs-blank-filters.markdown
max_issues_repo_name: davidfekke/fek.io
max_issues_repo_head_hexsha: 16ed1a623bc8931b1d02f00304b4e9417330b885
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: 17
max_issues_repo_issues_event_min_datetime: 2019-04-04T04:45:10.000Z
max_issues_repo_issues_event_max_datetime: 2021-12-19T03:47:54.000Z
max_forks_repo_path: src/blog/2011-01-17/JQuery-empty-vs-blank-filters.markdown
max_forks_repo_name: davidfekke/fek.io
max_forks_repo_head_hexsha: 16ed1a623bc8931b1d02f00304b4e9417330b885
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2019-04-04T04:42:11.000Z
max_forks_repo_forks_event_max_datetime: 2019-04-04T04:42:11.000Z
---
layout: post
title: "JQuery :empty vs :blank filters"
category: "Blog"
date: 2011-01-17
---

I ran into an issue this morning using a jQuery selector filter. One of the things I like about jQuery is that it has some pretty cool filters you can use to select different parts of your DOM.

```javascript
jQuery("table tr:first"); // This will select the first row.
jQuery("table tr:last");  // This will select the last row.
```

I was trying to select all cells that were blank or empty, so I used the following:

```javascript
$("#myIDName td:blank"); // Find all blank cells in the table.
```

This worked fine in Firefox, but created all kinds of havoc in other browsers. I do not believe the `:blank` filter is officially supported by the jQuery group. I was able to solve the problem with the following change to my selector:

```javascript
$("#myIDName td:empty");
```

I am assuming that `:blank` has been deprecated or was never fully supported; either way, you should just use the `:empty` filter.
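To make the difference concrete, here is a minimal plain-JavaScript sketch of the semantics involved. This is my own illustration, not jQuery's actual implementation: `:empty` matches elements with no child nodes at all, while a `:blank`-style filter would also need to match elements whose only content is whitespace text. The element objects below are simplified stand-ins for DOM nodes.

```javascript
// A minimal stand-in for a DOM element: childNodes is an array of
// strings (text nodes) or nested objects (element nodes).
function isEmpty(el) {
  // :empty semantics - no child nodes of any kind.
  return el.childNodes.length === 0;
}

function isBlank(el) {
  // :blank-style semantics - every child is a whitespace-only text node
  // (an element with no children at all also passes).
  return el.childNodes.every(
    (node) => typeof node === "string" && node.trim() === ""
  );
}

const emptyCell = { childNodes: [] };
const whitespaceCell = { childNodes: ["   \n"] };
const filledCell = { childNodes: ["42"] };

console.log(isEmpty(emptyCell));      // true
console.log(isBlank(whitespaceCell)); // true
console.log(isEmpty(whitespaceCell)); // false
console.log(isBlank(filledCell));     // false
```

Under these definitions a truly empty cell satisfies both checks, which is why switching to `:empty` still matched the cells that mattered in my case.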
avg_line_length: 26.216216
max_line_length: 202
alphanum_fraction: 0.73299
lid: eng_Latn
lid_prob: 0.999744
hexsha: b9b17d7ca2b3f4a799ff62b586e4b7e729c9f66a
size: 2,516
ext: md
lang: Markdown
max_stars_repo_path: Firestore/README.md
max_stars_repo_name: BearerPipelineTest/google-cloud-php
max_stars_repo_head_hexsha: de741e74534295129253affa8a714fe8c2a4da24
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: 411
max_stars_repo_stars_event_min_datetime: 2016-09-02T15:39:15.000Z
max_stars_repo_stars_event_max_datetime: 2018-09-20T15:15:20.000Z
max_issues_repo_path: Firestore/README.md
max_issues_repo_name: BearerPipelineTest/google-cloud-php
max_issues_repo_head_hexsha: de741e74534295129253affa8a714fe8c2a4da24
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: 786
max_issues_repo_issues_event_min_datetime: 2016-08-23T01:22:16.000Z
max_issues_repo_issues_event_max_datetime: 2018-09-20T19:26:41.000Z
max_forks_repo_path: Firestore/README.md
max_forks_repo_name: LaudateCorpus1/google-cloud-php
max_forks_repo_head_hexsha: 8d57a47e5dce183cadc595108176385d50b78548
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: 182
max_forks_repo_forks_event_min_datetime: 2016-08-23T13:29:37.000Z
max_forks_repo_forks_event_max_datetime: 2018-09-20T17:27:06.000Z
# Cloud Firestore for PHP

> Idiomatic PHP client for [Cloud Firestore](https://cloud.google.com/firestore/).

[![Latest Stable Version](https://poser.pugx.org/google/cloud-firestore/v/stable)](https://packagist.org/packages/google/cloud-firestore) [![Packagist](https://img.shields.io/packagist/dm/google/cloud-firestore.svg)](https://packagist.org/packages/google/cloud-firestore)

* [API documentation](http://googleapis.github.io/google-cloud-php/#/docs/cloud-firestore/latest)

**NOTE:** This repository is part of [Google Cloud PHP](https://github.com/googleapis/google-cloud-php). Any support requests, bug reports, or development contributions should be directed to that project.

A NoSQL document database built for automatic scaling, high performance, and ease of application development. While the Cloud Firestore interface has many of the same features as traditional databases, as a NoSQL database it differs from them in the way it describes relationships between data objects.

### Installation

To begin, install the preferred dependency manager for PHP, [Composer](https://getcomposer.org/).

Now, to install just this component:

```sh
$ composer require google/cloud-firestore
```

Or, to install the entire suite of components at once:

```sh
$ composer require google/cloud
```

This component requires the gRPC extension. Please see our [gRPC installation guide](https://cloud.google.com/php/grpc) for more information on how to configure the extension.

### Authentication

Please see our [Authentication guide](https://github.com/googleapis/google-cloud-php/blob/main/AUTHENTICATION.md) for more information on authenticating your client. Once authenticated, you'll be ready to start making requests.

### Sample

```php
require 'vendor/autoload.php';

use Google\Cloud\Firestore\FirestoreClient;

$firestore = new FirestoreClient();

$collectionReference = $firestore->collection('Users');
$documentReference = $collectionReference->document($userId);
$snapshot = $documentReference->snapshot();

echo "Hello " . $snapshot['firstName'];
```

### Version

This component is considered GA (generally available). As such, it will not introduce backwards-incompatible changes in any minor or patch releases. We will address issues and requests with the highest priority.

### Next Steps

1. Understand the [official documentation](https://cloud.google.com/firestore/docs/).
2. Take a look at [in-depth usage samples](https://github.com/GoogleCloudPlatform/php-docs-samples/tree/master/firestore).
avg_line_length: 37.552239
max_line_length: 271
alphanum_fraction: 0.777822
lid: eng_Latn
lid_prob: 0.840665
hexsha: b9b28c13c5a3e97ed214d39dea530cbcbde6b997
size: 910
ext: md
lang: Markdown
max_stars_repo_path: curriculum/challenges/english/14-responsive-web-design-22/learn-more-about-css-pseudo-selectors-by-building-a-balance-sheet/61fd66c687e610436494c6f1.md
max_stars_repo_name: fcastillo-serempre/freeCodeCamp
max_stars_repo_head_hexsha: 43496432d659bac8323ab2580ba09fa7bf9b73f2
max_stars_repo_licenses: [ "BSD-3-Clause" ]
max_stars_count: 2
max_stars_repo_stars_event_min_datetime: 2019-07-25T08:44:38.000Z
max_stars_repo_stars_event_max_datetime: 2019-07-25T08:44:40.000Z
max_issues_repo_path: curriculum/challenges/english/14-responsive-web-design-22/learn-more-about-css-pseudo-selectors-by-building-a-balance-sheet/61fd66c687e610436494c6f1.md
max_issues_repo_name: fcastillo-serempre/freeCodeCamp
max_issues_repo_head_hexsha: 43496432d659bac8323ab2580ba09fa7bf9b73f2
max_issues_repo_licenses: [ "BSD-3-Clause" ]
max_issues_count: 169
max_issues_repo_issues_event_min_datetime: 2020-10-13T16:49:51.000Z
max_issues_repo_issues_event_max_datetime: 2020-12-08T22:53:48.000Z
max_forks_repo_path: curriculum/challenges/english/14-responsive-web-design-22/learn-more-about-css-pseudo-selectors-by-building-a-balance-sheet/61fd66c687e610436494c6f1.md
max_forks_repo_name: fcastillo-serempre/freeCodeCamp
max_forks_repo_head_hexsha: 43496432d659bac8323ab2580ba09fa7bf9b73f2
max_forks_repo_licenses: [ "BSD-3-Clause" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
---
id: 61fd66c687e610436494c6f1
title: Step 3
challengeType: 0
dashedName: step-3
---

# --description--

Within your `section` element, add an `h1` element with a nested `span` element.

# --hints--

Your `section` element should have an `h1` element.

```js
assert(document.querySelector('section')?.children?.[0]?.localName === 'h1');
```

Your `h1` element should have a `span` element.

```js
assert(document.querySelector('h1')?.children?.[0]?.localName === 'span');
```

# --seed--

## --seed-contents--

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Balance Sheet</title>
    <link rel="stylesheet" type="text/css" href="./styles.css">
  </head>
  <body>
    <main>
--fcc-editable-region--
      <section>
      </section>
--fcc-editable-region--
    </main>
  </body>
</html>
```

```css

```
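As an aside on how hints like the ones above evaluate, here is a minimal sketch with plain objects standing in for DOM nodes; the `querySelector` stand-in below is a deliberately naive lookup invented for this illustration, not the real DOM API:

```javascript
// Plain-object stand-ins for the nodes the challenge expects:
// <section> -> <h1> -> <span>.
const span = { localName: "span", children: [] };
const h1 = { localName: "h1", children: [span] };
const section = { localName: "section", children: [h1] };

// Naive querySelector substitute: find a node by its localName.
const nodes = [section, h1, span];
const querySelector = (sel) => nodes.find((n) => n.localName === sel) ?? null;

// These mirror the two hint assertions; optional chaining keeps the
// checks from throwing when an element or child is missing.
console.log(querySelector("section")?.children?.[0]?.localName === "h1");
console.log(querySelector("h1")?.children?.[0]?.localName === "span");
```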
avg_line_length: 17.169811
max_line_length: 80
alphanum_fraction: 0.625275
lid: eng_Latn
lid_prob: 0.38516
hexsha: b9b322409e78bdc5b845a432f1e7674a8291c0d6
size: 7,802
ext: md
lang: Markdown
max_stars_repo_path: articles/event-grid/custom-event-to-hybrid-connection.md
max_stars_repo_name: Almulo/azure-docs.es-es
max_stars_repo_head_hexsha: f1916cdaa2952cbe247723758a13b3ec3d608863
max_stars_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: articles/event-grid/custom-event-to-hybrid-connection.md
max_issues_repo_name: Almulo/azure-docs.es-es
max_issues_repo_head_hexsha: f1916cdaa2952cbe247723758a13b3ec3d608863
max_issues_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: articles/event-grid/custom-event-to-hybrid-connection.md
max_forks_repo_name: Almulo/azure-docs.es-es
max_forks_repo_head_hexsha: f1916cdaa2952cbe247723758a13b3ec3d608863
max_forks_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
---
title: Send custom events for Azure Event Grid to a hybrid connection | Microsoft Docs
description: Use Azure Event Grid and the Azure CLI to publish a topic and subscribe to that event. A hybrid connection is used for the endpoint.
services: event-grid
keywords: ''
author: tfitzmac
ms.author: tomfitz
ms.date: 06/29/2018
ms.topic: tutorial
ms.service: event-grid
ms.openlocfilehash: 544f5210adbea6791f9224a1e2be0743ce9995d5
ms.sourcegitcommit: 1d850f6cae47261eacdb7604a9f17edc6626ae4b
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/02/2018
ms.locfileid: "39434153"
---
# <a name="route-custom-events-to-azure-relay-hybrid-connections-with-azure-cli-and-event-grid"></a>Route custom events to Azure Relay Hybrid Connections with the Azure CLI and Event Grid

Azure Event Grid is an eventing service for the cloud. Azure Relay Hybrid Connections is one of the supported event handlers. Hybrid connections are used as the event handler when you need to process events from applications that do not have a public endpoint. These applications might be within your corporate enterprise network. In this article, you use the Azure CLI to create a custom topic, subscribe to the topic, and trigger the event to view the result. The events are sent to the hybrid connection.

## <a name="prerequisites"></a>Prerequisites

This article assumes you already have a hybrid connection and a listener application. To get started with hybrid connections, see [Get started with Relay Hybrid Connections - .NET](../service-bus-relay/relay-hybrid-connections-dotnet-get-started.md) or [Get started with Relay Hybrid Connections - Node](../service-bus-relay/relay-hybrid-connections-node-get-started.md).

[!INCLUDE [event-grid-preview-feature-note.md](../../includes/event-grid-preview-feature-note.md)]

## <a name="create-a-resource-group"></a>Create a resource group

Event Grid topics are Azure resources and must be placed in an Azure resource group. An Azure resource group is a logical collection into which Azure resources are deployed and managed.

Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named *gridResourceGroup* in the *westus2* location.

```azurecli-interactive
az group create --name gridResourceGroup --location westus2
```

## <a name="create-a-custom-topic"></a>Create a custom topic

An Event Grid topic provides a user-defined endpoint to which events are posted. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your topic. The topic name must be unique because it is represented by a DNS entry.

```azurecli-interactive
# if you have not already installed the extension, do it now.
# This extension is required for preview features.
az extension add --name eventgrid

az eventgrid topic create --name <topic_name> -l westus2 -g gridResourceGroup
```

## <a name="subscribe-to-a-topic"></a>Subscribe to a topic

You subscribe to a topic to tell Event Grid which events you want to track. The following example subscribes to the topic you created and passes the resource ID of the hybrid connection for the endpoint. The hybrid connection ID has the following format:

`/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Relay/namespaces/<relay-namespace>/hybridConnections/<hybrid-connection-name>`

In the following script, the resource ID of the Relay namespace is retrieved, the hybrid connection ID is constructed from it, and a subscription to an Event Grid topic is created. The script sets the endpoint type to `hybridconnection` and uses the hybrid connection ID for the endpoint.

```azurecli-interactive
relayname=<namespace-name>
relayrg=<resource-group-for-relay>
hybridname=<hybrid-name>

relayid=$(az resource show --name $relayname --resource-group $relayrg --resource-type Microsoft.Relay/namespaces --query id --output tsv)
hybridid="$relayid/hybridConnections/$hybridname"

az eventgrid event-subscription create \
  --topic-name <topic_name> \
  -g gridResourceGroup \
  --name <event_subscription_name> \
  --endpoint-type hybridconnection \
  --endpoint $hybridid
```

## <a name="create-application-to-process-events"></a>Create an application to process events

You need an application that can retrieve events from the hybrid connection. The [Microsoft Azure Event Grid Hybrid Connection Consumer sample for C#](https://github.com/Azure-Samples/event-grid-dotnet-hybridconnection-destination) performs that operation. You have already completed the prerequisite steps.

1. Make sure you have Visual Studio 2017 version 15.5 or later.
1. Clone the repository to your local machine.
1. Load the HybridConnectionConsumer project in Visual Studio.
1. In Program.cs, replace `<relayConnectionString>` and `<hybridConnectionName>` with the Relay connection string and the hybrid connection name that you created.
1. Compile and run the application from Visual Studio.

## <a name="send-an-event-to-your-topic"></a>Send an event to your topic

Let's trigger an event to see how Event Grid distributes the message to your endpoint. This article shows how to use the Azure CLI to trigger the event. Alternatively, you can use the [Event Grid publisher application](https://github.com/Azure-Samples/event-grid-dotnet-publish-consume-events/tree/master/EventGridPublisher).
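For orientation, the following is a hypothetical sketch (plain JavaScript, with invented values) of the envelope that custom events posted to an Event Grid topic carry. The field names follow the Event Grid event schema; a real sender would POST this array as the request body with the `aeg-sas-key` header:

```javascript
// Hypothetical custom event for an Event Grid topic. The envelope field
// names (id, eventType, subject, eventTime, data, dataVersion) follow the
// Event Grid event schema; the values are invented for illustration.
const events = [
  {
    id: "10001",
    eventType: "recordInserted",
    subject: "myapp/vehicles/motorcycles",
    eventTime: new Date().toISOString(),
    data: { make: "Ducati", model: "Monster" },
    dataVersion: "1.0",
  },
];

// Check that every event carries the required envelope fields before
// serializing the array as the POST body.
const required = ["id", "eventType", "subject", "eventTime", "data", "dataVersion"];
const allValid = events.every((e) => required.every((k) => k in e));
const body = JSON.stringify(events);

console.log(allValid); // true
```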
First, let's get the URL and key for the custom topic. Again, use your topic name for `<topic_name>`.

```azurecli-interactive
endpoint=$(az eventgrid topic show --name <topic_name> -g gridResourceGroup --query "endpoint" --output tsv)
key=$(az eventgrid topic key list --name <topic_name> -g gridResourceGroup --query "key1" --output tsv)
```

To simplify this article, you use sample event data to send to the topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, you use CURL to send the event to the topic. The following example sends three events to the Event Grid topic:

```azurecli-interactive
body=$(eval echo "'$(curl https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/event-grid/customevent.json)'")
curl -X POST -H "aeg-sas-key: $key" -d "$body" $endpoint
```

The listener application should receive the event message.

## <a name="clean-up-resources"></a>Clean up resources

If you plan to continue working with this event, do not clean up the resources created in this article. Otherwise, use the following command to delete the resources you created in this article.

```azurecli-interactive
az group delete --name gridResourceGroup
```

## <a name="next-steps"></a>Next steps

Now that you know how to create topics and event subscriptions, learn more about what Event Grid can help you do:

- [About Event Grid](overview.md)
- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json)
- [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-event-grid-logic-app.md)
- [Stream big data into a data warehouse](event-grid-event-hubs-integration.md)
avg_line_length: 62.416
max_line_length: 585
alphanum_fraction: 0.787106
lid: spa_Latn
lid_prob: 0.95027
hexsha: b9b368716a2618d7a17d21766029ffaace3f0c51
size: 8,155
ext: md
lang: Markdown
max_stars_repo_path: cordovadocs/2017/tips-workarounds/host-a-mac-in-the-cloud.md
max_stars_repo_name: nschonni/CordovaDocs
max_stars_repo_head_hexsha: 3271e33ca4f8de5e5a319c4e83aa6414b7182ed7
max_stars_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: cordovadocs/2017/tips-workarounds/host-a-mac-in-the-cloud.md
max_issues_repo_name: nschonni/CordovaDocs
max_issues_repo_head_hexsha: 3271e33ca4f8de5e5a319c4e83aa6414b7182ed7
max_issues_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: cordovadocs/2017/tips-workarounds/host-a-mac-in-the-cloud.md
max_forks_repo_name: nschonni/CordovaDocs
max_forks_repo_head_hexsha: 3271e33ca4f8de5e5a319c4e83aa6414b7182ed7
max_forks_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
---
title: "Build and simulate a Cordova iOS app in the cloud"
description: "Build and simulate a Cordova iOS app in the cloud"
services: "na"
author: "Chuxel"
ms.technology: "cordova"
ms.prod: "visual-studio-dev15"
ms.devlang: "javascript"
ms.tgt_pltfrm: "mobile-multiple"
ms.workload: "na"
ms.topic: "troubleshooting"
ms.date: "09/10/2015"
ms.author: "clantz"
---

# Build and simulate a Cordova iOS app in the cloud

Visual Studio Tools for Apache Cordova allows you to build cross-platform, multi-device hybrid apps using [Apache Cordova](http://cordova.apache.org). You can use the remotebuild agent with a Mac on your network to build, debug, run, and simulate an iOS version of your app.

Many developers start their hybrid app development by testing on Android. Later in the development process, when the focus is mainly on verifying and polishing the UI for a set of core devices, they begin testing on iOS. Providing each developer on a team with a Mac for this final step is not cost effective. As an alternative to buying Macs, you can use a cloud hosting provider to build and debug your app in the iOS Simulator from a Windows machine, to debug native problems using Xcode, and to submit your app to iTunes using the Apple Application Loader. Cloud hosting providers charge a range of rates, some of which can be very cost effective (particularly if the majority of your development is done on a different platform). In this tutorial, we will describe how to configure Tools for Apache Cordova for use with one provider, [MacInCloud](http://www.macincloud.com).

> [!NOTE]
> The steps shown here can be followed with other Mac hosting providers or with Macs in your own cloud-facing datacenter. We recommend that you evaluate providers based on your organization's needs.

## Install remotebuild

To get started with MacInCloud, first set up either an account or a trial version. Make sure you enable the remote build port feature during checkout.

Once you have provided your login information, connect to your Mac using Remote Desktop, and then you can set up [remotebuild](http://go.microsoft.com/fwlink/?LinkId=618169).

![Opening remote desktop](media/host-a-mac-in-the-cloud/remotebuild_start.png)

If you chose a MacInCloud plan with a dedicated server, you may have sudo (Administrator) access. With sudo access, just follow the same instructions used to [install the remote agent](../first-steps/ios-guide.md) on an on-premises Mac.

If you are using a managed server plan, you will not have sudo access. However, remotebuild is probably already installed on the machine that you have access to. You can verify this by attempting to start the agent. In the Terminal app, type:

    remotebuild

If it is not installed, contact MacInCloud support and ask them to install it on your behalf.

## Configure Visual Studio to connect to your cloud-hosted Mac

With one exception, you can use the same process to configure Visual Studio for use with MacInCloud as you do with your own Mac. The host name for MacInCloud is not available externally, so you can either override the host name used by the agent or use an IP address instead.

> [!NOTE]
> `remotebuild` is not intended to be used as a traditional cloud-based service, and you should make sure that you are in compliance with any Apple licensing terms that apply to your organization.

### Option 1: To override the host name and configure Visual Studio

1. Verify whether MacInCloud has already preconfigured your managed server for use with the remotebuild agent. If it is already preconfigured, a RemoteBuild.config file will already exist in your home directory and your agent is ready for use! To verify whether it is present and configured correctly, follow these steps.

2. In the Terminal app on your MacInCloud server, try to open the file in Xcode by executing the following command.

   ```
   open -a Xcode ~/.taco_home/RemoteBuild.config
   ```

   If the file exists, it will open in Xcode.

3. If the previous command tells you the file does not exist, run the following commands in the Terminal app.

   ```
   mkdir ~/.taco_home
   echo "" >> ~/.taco_home/RemoteBuild.config
   open -a Xcode ~/.taco_home/RemoteBuild.config
   ```

   Xcode starts with the config file open.

4. Once RemoteBuild.config is open, verify that, at minimum, the following content is present in the file:

   ```
   {
     "hostname": "myhostname.macincloud.com"
   }
   ```

   and verify that the host name has been substituted with the host name you use to connect to MacInCloud. Any command-line option can be specified this way in the config file, so you can also use this method to modify other settings, such as the port used. Type `remotebuild help` to see a complete list of commands. Save the file if you make changes.

5. After you verify the configuration, type the following commands in the Terminal app on your Mac, substituting the MacInCloud host name for `your_hostname`:

   ```
   remotebuild certificates reset --hostname=your_hostname
   remotebuild certificates generate
   ```

   Or:

   ```
   remotebuild saveconfig --hostname=your_hostname
   remotebuild certificates reset
   remotebuild certificates generate
   ```

   > [!NOTE]
   > If you are running an older version of the agent, the preceding command is not supported. Make sure that you [update the remotebuild agent](../first-steps/ios-guide.md).

   Press "Y" and then Enter if prompted. You will now see the following information.

   ![Starting the agent for the first time](media/host-a-mac-in-the-cloud/IC816241.png)

6. If it is not already running, start the agent in the Terminal app on your Mac by typing:

   ```
   remotebuild
   ```

7. In Visual Studio, open **Tools**, **Options**, **Tools for Apache Cordova**, and then **Remote Agent Configuration**.

8. Configure remote agent settings, mirroring the settings shown in the Terminal app.

   > **Important:** The Security PIN expires after 10 minutes by default. To generate a new PIN, see our [documentation](configuration-tips.md#IosPin).

   ![Cordova_MacInCloud_Remote_Agent_VS_Config](media/host-a-mac-in-the-cloud/IC816237.png)

That's it. You are finished configuring the agent!

Instead of overriding the host name, you may instead use the IP address of your MacInCloud server.

### Option 2: To get your IP address and configure Visual Studio

1. In the Terminal app on your Mac, type the following command (make sure you include a space before the final quotation mark, as shown).

   ```
   ifconfig | grep "inet "
   ```

2. Two IP addresses are displayed. In the steps that follow, you will need the IP address that is not the loopback address (127.0.0.1). For example, if the preceding command produced the following output, you would need 192.168.0.100.

   ```
   inet 127.0.0.1 netmask 0xff000000
   inet 192.168.0.100 netmask 0xffffff00 broadcast 192.168.0.1
   ```

3. If it is not already running, start the agent in the Terminal app on your MacInCloud server by typing the following command.

   ```
   remotebuild
   ```

   The first time you start the agent, you will see output similar to this.

   ![Starting the agent for the first time](media/host-a-mac-in-the-cloud/IC816241.png)

4. If you do not see this information, type the following to generate a new PIN:

   ```
   remotebuild certificates generate
   ```

   Be sure to restart the agent after generating the PIN if you shut it down.

5. In Visual Studio, open **Tools**, **Options**, **Tools for Apache Cordova**, and then **Remote Agent Configuration**.

6. Configure remote agent settings. Set **Enable remote iOS processing** to **True**, and configure Port and Security PIN using the output from the Terminal app. Instead of using the host name shown in the Terminal app, use the IP address you obtained previously and enter it in the Host field.

   Using an IP address to configure VS:

   ![Using an IP address to configure VS](media/host-a-mac-in-the-cloud/IC816242.png)

That's it. You are finished configuring the agent!
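The address-selection rule from Option 2, step 2 can be sketched as follows. This is a plain-JavaScript illustration of the rule, not part of the official tooling:

```javascript
// Sample of the kind of output `ifconfig | grep "inet "` produces.
const output = `inet 127.0.0.1 netmask 0xff000000
inet 192.168.0.100 netmask 0xffffff00 broadcast 192.168.0.1`;

// From the inet addresses listed, keep the first one that is not the
// loopback address 127.0.0.1.
function pickHostAddress(ifconfigText) {
  const addresses = [...ifconfigText.matchAll(/inet (\d+\.\d+\.\d+\.\d+)/g)]
    .map((m) => m[1]);
  return addresses.find((ip) => ip !== "127.0.0.1") ?? null;
}

console.log(pickHostAddress(output)); // "192.168.0.100"
```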
avg_line_length: 51.289308
max_line_length: 1,162
alphanum_fraction: 0.751809
lid: eng_Latn
lid_prob: 0.997769
hexsha: b9b3d4faeef42b8a555b003ed8153c8148109030
size: 253
ext: md
lang: Markdown
max_stars_repo_path: otto-demo/README.md
max_stars_repo_name: fgdadiaonan/android-open-project-demo
max_stars_repo_head_hexsha: 2ed4800f62c1a3683a0e16ec56957619acbcc57a
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: 1,037
max_stars_repo_stars_event_min_datetime: 2015-01-04T12:34:16.000Z
max_stars_repo_stars_event_max_datetime: 2022-03-22T16:30:32.000Z
max_issues_repo_path: otto-demo/README.md
max_issues_repo_name: fgdadiaonan/android-open-project-demo
max_issues_repo_head_hexsha: 2ed4800f62c1a3683a0e16ec56957619acbcc57a
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: 3
max_issues_repo_issues_event_min_datetime: 2015-03-03T09:35:46.000Z
max_issues_repo_issues_event_max_datetime: 2016-01-07T14:50:28.000Z
max_forks_repo_path: otto-demo/README.md
max_forks_repo_name: fgdadiaonan/android-open-project-demo
max_forks_repo_head_hexsha: 2ed4800f62c1a3683a0e16ec56957619acbcc57a
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: 890
max_forks_repo_forks_event_min_datetime: 2015-01-06T08:37:27.000Z
max_forks_repo_forks_event_max_datetime: 2021-07-29T05:37:33.000Z
Otto Demo
====================

### 1. Demo Download

<a href="apk/otto-demo.apk?raw=true" target="_blank" title="Click to download locally">Download locally</a>

### 2. Screenshot

![Screenshot](apk/otto_demo.gif)

### 3. Document

[How to Use Otto](http://square.github.io/otto/)
avg_line_length: 28.111111
max_line_length: 83
alphanum_fraction: 0.612648
lid: kor_Hang
lid_prob: 0.143786
hexsha: b9b3d977c3f013f0dd1bcc10b19e7220bdc3211d
size: 3,601
ext: md
lang: Markdown
max_stars_repo_path: docs/t-sql/statements/pick-a-product-template.md
max_stars_repo_name: siddudubey/sql-docs
max_stars_repo_head_hexsha: a7dfef0a654169aa4c29e3093a743cb0de1b3f0e
max_stars_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: docs/t-sql/statements/pick-a-product-template.md
max_issues_repo_name: siddudubey/sql-docs
max_issues_repo_head_hexsha: a7dfef0a654169aa4c29e3093a743cb0de1b3f0e
max_issues_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: docs/t-sql/statements/pick-a-product-template.md
max_forks_repo_name: siddudubey/sql-docs
max_forks_repo_head_hexsha: a7dfef0a654169aa4c29e3093a743cb0de1b3f0e
max_forks_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
---
title: "Title (Transact-SQL) | Microsoft Docs"
description:
ms.custom: ""
ms.date: 05/22/2019
ms.prod: sql
ms.prod_service: "database-engine, sql-database, sql-data-warehouse"
ms.reviewer: ""
ms.technology: t-sql
ms.topic: "language-reference"
dev_langs:
- "TSQL"
helpviewer_keywords:
author: julieMSFT
ms.author: jrasnick
monikerRange: "=azuresqldb-current||=azuresqldb-current||>=sql-server-2016||>=sql-server-linux-2017||=azure-sqldw-latest||=azuresqldb-mi-current"
---
# Title (Transact-SQL)

Sets database options in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], [!INCLUDE[ssSDSfull](../../includes/sssdsfull-md.md)] and [!INCLUDE[ssSDW](../../includes/sssdw-md.md)]. For other ALTER DATABASE options, see [ALTER DATABASE](../../t-sql/statements/alter-database-transact-sql.md).

Click one of the following tabs for the syntax, arguments, remarks, permissions, and examples for the particular SQL version you're working with.

For more information about the syntax conventions, see [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md).

[!INCLUDE[select-product](../../includes/select-product.md)]

::: moniker range=">=sql-server-2016||>=sql-server-linux-2017"

:::row:::
:::column:::
**_\* SQL Server \*_** &nbsp;
:::column-end:::
:::column:::
[SQL Database](pick-a-product-template.md?view=azuresqldb-current)
:::column-end:::
:::column:::
[SQL Managed Instance](pick-a-product-template.md?view=azuresqldb-mi-current)
:::column-end:::
:::column:::
[Azure Synapse<br />Analytics](pick-a-product-template.md?view=azure-sqldw-latest)
:::column-end:::
:::row-end:::

&nbsp;

## SQL Server

::: moniker-end

::: moniker range="=azuresqldb-current"

:::row:::
:::column:::
[SQL Server](alter-database-transact-sql-set-options.md)
:::column-end:::
:::column:::
**_\* SQL Database \*_** &nbsp;
:::column-end:::
:::column:::
[SQL Managed Instance](alter-database-transact-sql-set-options.md?view=azuresqldb-mi-current)
:::column-end:::
:::column:::
[Azure Synapse<br />Analytics](alter-database-transact-sql-set-options.md?view=azure-sqldw-latest)
:::column-end:::
:::row-end:::

&nbsp;

## SQL Database

::: moniker-end

::: moniker range="=azuresqldb-mi-current"

:::row:::
:::column:::
[SQL Server](alter-database-transact-sql-set-options.md?view=sql-server-ver15&preserve-view=true)
:::column-end:::
:::column:::
[SQL Database](alter-database-transact-sql-set-options.md?view=azuresqldb-current)
:::column-end:::
:::column:::
**_\* SQL Managed Instance \*_** &nbsp;
:::column-end:::
:::column:::
[Azure Synapse<br />Analytics](alter-database-transact-sql-set-options.md?view=azure-sqldw-latest)
:::column-end:::
:::row-end:::

&nbsp;

## Azure SQL Managed Instance

::: moniker-end

::: moniker range="=azure-sqldw-latest"

:::row:::
:::column:::
[SQL Server](alter-database-transact-sql-set-options.md?view=sql-server-ver15&preserve-view=true)
:::column-end:::
:::column:::
[SQL Database](alter-database-transact-sql-set-options.md?view=azuresqldb-current)
:::column-end:::
:::column:::
[SQL Managed Instance](alter-database-transact-sql-set-options.md?view=azuresqldb-mi-current)
:::column-end:::
:::column:::
**_\* Azure Synapse<br />Analytics \*_** &nbsp;
:::column-end:::
:::row-end:::

&nbsp;

## Azure Synapse Analytics

## Syntax

::: moniker-end
avg_line_length: 29.516393
max_line_length: 299
alphanum_fraction: 0.648986
lid: eng_Latn
lid_prob: 0.136336
hexsha: b9b3e7212c5c9c63ec526af0f0947eb058ffd491
size: 16,405
ext: md
lang: Markdown
max_stars_repo_path: SharePoint/SharePointServer/technical-reference/machine-translation-service-in-sharepoint-server.md
max_stars_repo_name: Acidburn0zzz/OfficeDocs-SharePoint
max_stars_repo_head_hexsha: 7cc3fe90e97fc317ec2305108a6a5d363cc5d89c
max_stars_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_stars_count: 3
max_stars_repo_stars_event_min_datetime: 2018-12-22T08:40:47.000Z
max_stars_repo_stars_event_max_datetime: 2018-12-23T21:33:52.000Z
max_issues_repo_path: SharePoint/SharePointServer/technical-reference/machine-translation-service-in-sharepoint-server.md
max_issues_repo_name: Acidburn0zzz/OfficeDocs-SharePoint
max_issues_repo_head_hexsha: 7cc3fe90e97fc317ec2305108a6a5d363cc5d89c
max_issues_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: SharePoint/SharePointServer/technical-reference/machine-translation-service-in-sharepoint-server.md
max_forks_repo_name: Acidburn0zzz/OfficeDocs-SharePoint
max_forks_repo_head_hexsha: 7cc3fe90e97fc317ec2305108a6a5d363cc5d89c
max_forks_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2018-12-22T13:38:33.000Z
max_forks_repo_forks_event_max_datetime: 2018-12-22T13:38:33.000Z
---
title: "Machine Translation service in SharePoint Server knowledge articles"
ms.author: stevhord
author: bentoncity
manager: pamgreen
ms.date: 8/15/2017
ms.audience: ITPro
ms.topic: troubleshooting
ms.prod: sharepoint-server-itpro
localization_priority: Normal
ms.collection:
- IT_Sharepoint_Server
- IT_Sharepoint_Server_Top
ms.assetid: 38095e9b-5b48-4d2a-a787-3f7900b138e0
description: "Learn how to resolve alerts about the Machine Translation service in the SharePoint Server management pack for Systems Center Operations Manager (SCOM)."
---

# Machine Translation service in SharePoint Server knowledge articles

[!INCLUDE[appliesto-2013-2016-xxx-xxx-md](../includes/appliesto-2013-2016-xxx-xxx-md.md)]

Learn how to resolve alerts about the Machine Translation service in the SharePoint Server 2016 and SharePoint Server 2013 management pack for Systems Center Operations Manager (SCOM).

The articles in this section are knowledge articles for the Machine Translation service in SharePoint Server. Typically, you would see these articles after clicking a link in an alert in the Operations Manager console. You can use these articles to help you troubleshoot and resolve problems that involve the Machine Translation service.

Download and install [System Center Monitoring Pack for SharePoint Server 2016](http://go.microsoft.com/fwlink/?LinkID=746863&amp;clcid=0x409), [System Center Monitoring Pack for SharePoint Server](https://go.microsoft.com/fwlink/p/?LinkId=272568), or [System Center Monitoring Pack for SharePoint Foundation](https://go.microsoft.com/fwlink/p/?LinkId=272567).

- [Machine Translation Service: Queue database not accessible](#QueueDB)
- [Machine Translation Service: Machine translation failure](#TransFail)
- [Machine Translation Service: Machine translation failure](#TransFail2)
- [Machine Translation Service not accessible](#TransServ)
- [Machine Translation Service: Content not accessible](#TransContent)
- [Machine Translation Service: Worker failure](#TransWorker)

## Machine Translation Service: Queue database not accessible
<a name="QueueDB"> </a>

**Alert Name:** Machine Translation Service: Queue database not accessible

**Summary:** A critical state of this Monitor indicates that the Machine Translation Service cannot access content that it has to translate. Symptoms:

- New Jobs cannot be submitted successfully.
- Existing Job Items never complete or seem to "hang" and make no progress in the Job Queue.

### Cause

One or more of the following might be the cause:

- The queue database is not responsive enough because of activity on the network or physical SQL server.
- Permissions to access the custom database are no longer valid.
- The queue database is inaccessible.

### Resolution

Resolution 1: Verify the status of the SQL Server Machine Translation Service database:

1. On the **SharePoint Central Administration** website, in the **System Settings** section, from the reading pane, click **Manage Servers in this farm**.
2. In the **Farm Information** section, note the **Machine Translation Service** database server, and the name and the version of the configuration database.
3. Start SQL Server Management Studio and connect to the configuration database server.
4. If the configuration database does not exist, run the SharePoint Products and Technologies Configuration Wizard.

Resolution 2: Verify the SQL Server network connection:

1. On the **Central Administration** website, in the **System Settings** section, from the reading pane, click **Manage Servers in this farm**.
2. In the **Farm Information** section, note the Machine Translation Service database server, and the name and the version of the configuration database information.
3. Open a Command Prompt window and type `ping` to confirm the server connection.
4. Failure to contact the server indicates a problem with the network connection or another problem that prevents a response from the server.
5. Log on to the server and troubleshoot the issue.

## Machine Translation Service: Machine translation failure
<a name="TransFail"> </a>

**Alert Name:** Machine Translation Service: Machine translation failure

**Summary:** A critical state of this Monitor indicates that machine translation through the online translation service is failing. Symptoms: as long as a connection to the online translation service is not established, the service will function correctly but will fail every Translation Item that is processed.

### Cause

One or more of the following might be the cause:

- The Machine Translation Service is not connected to the Internet.
- The online translation service is down.
- The online translation service has experienced a certain amount of intermittent failures (beyond a set threshold).

### Resolution

Ensure the Machine Translation Service application has web access:

1. Verify that the user account that is performing this procedure is a member of the Farm Administrators group.
2. On the Central Administration website, in the **Application Management** section, click **Manage service applications**.
3. On the **Manage Service Applications** page, in the list of service applications, click **Machine Translation Service**.
4. In the **Online Translation Connection** section, in the web proxy server box, do one of the following:
   - Click **Use default internet settings**.
   - Click **Use the proxy specified**, and enter a web proxy server and a port number.
Validate the MachineTranslationAddress, MachineTranslationClientId, and MachineTranslationCategory for the Machine Translation Service application: 1. Verify that the user account that is performing this procedure is a member of the Farm Administrators group. 2. In the PowerShell command prompt, type the following: `Get-SPServiceApplication -Name "name" | ft MachineTranslationAddress, MachineTranslationClientId, MachineTranslationCategory` where "name" is the Name of your Machine Translation Service application. 3. Validate the values returned by comparing against any documentation or making a test call. 4. Correct any values based on validation. If a value cannot be validated, use the default value. ## Machine Translation Service: Machine translation failure <a name="TransFail2"> </a> **Alert Name:**Machine Translation Service: Machine translation failure **Summary:**A critical state of this Monitor indicates that the Machine Translation Service Timer Job is failing. Symptoms: - New Jobs will successfully enter into the database. However, the translation items for that job will never start. - The existing Jobs may not finish: Any translation items that are already assigned to application servers may still succeed, but translation items that are not assigned to application servers will not start. That leaves the Job perpetually incomplete. ### Cause The Machine Translation Service's queue timer job is not running. ### Resolution Resolution 1: Restart the Machine Translation Service 1. On the **Central Administration website**, in the **System Settings** section, from the reading pane, click **Manage servers in this farm.** 2. In the **Server** column, click the name of the failing application server. The **Services on Server** page opens. 3. In the **Service** column, locate the **Machine Translation Service**. Click **Stop**, and then click **Start**. Resolution 2: Create a new Machine Translation Service application 1. 
On the **Central Administration website**, in the **Application Management** section, from the reading pane, click **Manage service applications**. 2. In the **Type** column, click the name of the Visio Services application Machine Translation Service application that has the failing service instance. 3. On the ribbon, click **Delete**. 4. In the **Delete Service Application** dialog box, click **OK**. 5. Create a new Machine Translation Service application. ## Machine Translation Service not accessible <a name="TransServ"> </a> **Alert Name:**Machine Translation Service not accessible **Summary:** A critical state of this Monitor indicates that the Machine Translation Service is not accessible. Symptoms:If service calls are not working for the Machine Translation Service, no jobs can be submitted to the application server for immediate processing or submission to the Job Queue. That is, the service is inaccessible and does not function. ### Cause One or more of the following might be the cause: - The specified SharePoint Server application server is inaccessible. - The specified application server is responding slowly because of heavy network activity or load on the specific server. ### Resolution Resolution 1: Check the error logs: - Open the Windows Event Viewer. - Search for event ID 8049 in the Windows. Application Event log. - In the event description, note the application server that is failing Resolution 2: Verify the application server connection: - From the failing application server, open the SharePoint Central Administration Web site. - If you cannot access the Central Administration site from the failing server, check that the network settings are correct and that the server has appropriate permissions to join the SharePoint farm. Resolution 3: Verify that the Machine Translation Service runs on the failing server: 1. 
On the **Central Administration** website, in the reading pane, in the **System Settings** section, click **Manage servers in this farm**. 2. Verify that the Machine Translation Service runs on the failing application server. 3. If there is a service application proxy for the failing service application, create a new service application. Resolution 4: Restart the Machine Translation Service: 1. On the **Central Administration** website, in the reading pane, in the **System Settings** section, click **Manage servers in this farm**. 2. In the **Server** column, click the name of the failing application server. The **Services on Server** page opens. 3. In the **Service** column, locate the **Machine Translation Service**, click **Stop**, and then click **Start**. Resolution 5: Create a new Machine Translation Service application: 1. On the **Central Administration** website, in the reading pane, in the **Application Management** section, click **Manage service applications**. 2. In the **Type** column, click the name of the Visio Services application Machine Translation Service application that has the failing service instance. 3. On the ribbon, click **Delete**. 4. In the **Delete Service Application** dialog box, click **OK**. 5. Create a new Machine Translation Service application. ## Machine Translation Service: Content not accessible <a name="TransContent"> </a> **Alert Name:**Machine Translation Service: Content not accessible **Summary:** Critical state of this Monitor indicates that back-end to front-end CSOM calls are failing. Symptoms: No files can be retrieved for processing and items that are already being processed will not be written. That is, files are inaccessible and items will continuously fail. ### Cause One or more of the following might be the cause: - Server to server Authentication is configured incorrectly. - The content database is not responsive enough because of activity on the network or physical SQL server. 
- Permissions to access the content database are no longer valid. - The content database is inaccessible. ### Resolution Resolution 1: Verify the status of the SQL Server content database: 1. On the SharePoint Central Administration website, in the **System Settings** section from the reading pane, click **Manage Servers in this farm**. 2. In the **Farm Information** section, note the content database server, and the name and the version of the configuration database. 3. Start SQL Server Management Studio and connect to the content database server. 4. If the content database does not exist, run the SharePoint Products and Technologies Configuration Wizard. Resolution 2: Verify the SQL Server network connection: 1. On the Central Administration website, in the **System Settings** section, from the reading pane, click **Manage Servers in this farm**. 2. In the **Farm Information** section, note the content database server and the name and the version of the content database information. 3. Open a Command Prompt window and type ping to confirm the server connection. Failure to contact the server indicates a problem with the network connection or another problem that prevents a response from the server. 4. Log on to the server and troubleshoot the issue. Resolution 3: Verify Server to Server Authentication is configured correctly: See [Configure server-to-server authentication in SharePoint Server](/sharepoint/security-for-sharepoint-server/security-for-sharepoint-server). ## Machine Translation Service: Worker failure <a name="TransWorker"> </a> **Alert Name:**Machine Translation Service: Worker failure **Summary:** A critical state of this Monitor indicates that the Machine Translation Service worker processes are failing. Symptoms:These errors have to be monitored so end-users are not seeing all the items fail over a certain span of time. 
### Cause One or more of the following might be the cause: - Corrupt input file - Translation worker crash - Error of saving the file to the local store ### Resolution Resolution 1: Check the error logs: 1. Open the Windows Event Viewer. 2. Search for event ID 8049 in the Windows Application Event log. 3. In the event description, note the application server that is failing. Resolution 2: Verify the application server connection: 1. From the failing application server, open the **SharePoint Central Administration** website. 2. If you cannot access the Central Administration website from the failing server, check that the network settings are correct and that the server has appropriate permissions to join the SharePoint farm. Resolution 3: Verify that the Machine Translation Service runs on the failing server: 1. On the **Central Administration** website, in the reading pane, in the **System Settings** section, click **Manage servers in this farm**. 2. Verify that the Machine Translation Service runs on the failing application server. 3. If there is a service application proxy for the failing service application, create a new service application. Resolution 4: Restart the Machine Translation Service: 1. On the **Central Administration** website, in the reading pane, in the **System Settings** section, click **Manage servers in this farm**. 2. In the **Server** column, click the name of the failing application server. The **Services on Server** page opens. 3. In the **Service** column, locate the **Machine Translation Service**, click **Stop**, and then click **Start**. Resolution 5: Create a new Machine Translation Service application 1. On the Central Administration website, from the reading pane, in the Application Management section, from the reading pane, click Manage service applications. 2. In the **Type** column, click the name of the Visio Services application Machine Translation Service application that has the failing service instance. 3. 
On the ribbon, click **Delete**. 4. In the **Delete Service Application** dialog box, click OK. 5. Create a new Machine Translation Service application. ## See also <a name="TransWorker"> </a> #### Concepts [Plan for monitoring in SharePoint Server](../administration/monitoring-planning.md) #### Other Resources [System Center Monitoring Pack for SharePoint Foundation](http://go.microsoft.com/fwlink/p/?LinkId=272567) [System Center Monitoring Pack for SharePoint Server 2013](https://go.microsoft.com/fwlink/p/?LinkId=272568) [System Center Monitoring Pack for SharePoint Server 2016](http://go.microsoft.com/fwlink/?LinkID=746863&amp;clcid=0x409)
45.192837
698
0.759159
eng_Latn
0.98244
b9b3f5a5d48b43b66410053264e049b5e6ac64e0
2,096
md
Markdown
Day 3/day-3-2-exercise/test-your-code/README.md
Jean-Bi/100DaysOfCodePython
2069d1366c58e7d5f4cd30cfc786e9c2e44b82ca
[ "MIT" ]
null
null
null
Day 3/day-3-2-exercise/test-your-code/README.md
Jean-Bi/100DaysOfCodePython
2069d1366c58e7d5f4cd30cfc786e9c2e44b82ca
[ "MIT" ]
null
null
null
Day 3/day-3-2-exercise/test-your-code/README.md
Jean-Bi/100DaysOfCodePython
2069d1366c58e7d5f4cd30cfc786e9c2e44b82ca
[ "MIT" ]
null
null
null
## BMI Calculator 2.0 # Instructions Write a program that interprets the Body Mass Index (BMI) based on a user's weight and height. It should tell them the interpretation of their BMI based on the BMI value. - Under 18.5 they are underweight - Over 18.5 but below 25 they have a normal weight - Over 25 but below 30 they are slightly overweight - Over 30 but below 35 they are obese - Above 35 they are clinically obese. ![](https://cdn.fs.teachablecdn.com/qTOp8afxSkGfU5YGYf36) The BMI is calculated by dividing a person's weight (in kg) by the square of their height (in m): ![](https://cdn.fs.teachablecdn.com/jKHjnLrNQjqzdz3MTMyv) **Warning** you should **round** the result to the nearest whole number. The interpretation message needs to include the words in bold from the interpretations above. e.g. **underweight, normal weight, overweight, obese, clinically obese**. # Example Input ``` weight = 85 ``` ``` height = 1.75 ``` # Example Output 85 ÷ (1.75 x 1.75) = 27.755102040816325 ``` Your BMI is 28, you are slightly overweight. ``` e.g. When you hit **run**, this is what should happen: ![](https://cdn.fs.teachablecdn.com/mGRynIETXuVqoDk8unci) The testing code will check for print output that is formatted like one of the lines below: ``` "Your BMI is 18, you are underweight." "Your BMI is 22, you have a normal weight." "Your BMI is 28, you are slightly overweight." "Your BMI is 33, you are obese." "Your BMI is 40, you are clinically obese." ``` Hint 1. Try to use the **exponent** operator in your code. 2. Remember to **round** your result to the nearest whole number. 3. Make sure you include the words in **bold** from the interpretations. # Test Your Code Before checking the solution, try copy-pasting your code into this repl: [https://repl.it/@appbrewery/day-3-2-test-your-code](https://repl.it/@appbrewery/day-3-2-test-your-code) This repl includes my testing code that will check if your code meets this assignment's objectives. 
# Solution [https://repl.it/@appbrewery/day-3-2-solution](https://repl.it/@appbrewery/day-3-2-solution)
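One possible solution sketch for the BMI exercise above (not the official repl solution linked; the helper name `bmi_message` is mine, the assignment only requires the print output):

```python
# Hypothetical helper: computes the rounded BMI and returns the required
# interpretation line in the exact format the testing code checks for.
def bmi_message(weight, height):
    bmi = round(weight / height ** 2)  # exponent operator + round, per the hints
    if bmi < 18.5:
        return f"Your BMI is {bmi}, you are underweight."
    elif bmi < 25:
        return f"Your BMI is {bmi}, you have a normal weight."
    elif bmi < 30:
        return f"Your BMI is {bmi}, you are slightly overweight."
    elif bmi < 35:
        return f"Your BMI is {bmi}, you are obese."
    else:
        return f"Your BMI is {bmi}, you are clinically obese."

# Sample input from the exercise: weight = 85, height = 1.75
print(bmi_message(85, 1.75))  # prints "Your BMI is 28, you are slightly overweight."
```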
29.521127
242
0.730916
eng_Latn
0.99735
b9b439c0f36d6eecea6fd4e6cf341b7694aa71b2
1,246
md
Markdown
hard/pro-poker-hand/README.md
Adi142857/sololearn-challenges
67437d9c202ce6d470042bbe87f20da9fd4a077c
[ "MIT" ]
83
2020-01-07T23:02:52.000Z
2022-03-19T06:53:56.000Z
hard/pro-poker-hand/README.md
Adi142857/sololearn-challenges
67437d9c202ce6d470042bbe87f20da9fd4a077c
[ "MIT" ]
21
2020-01-23T14:26:13.000Z
2022-03-20T06:30:45.000Z
hard/pro-poker-hand/README.md
Adi142857/sololearn-challenges
67437d9c202ce6d470042bbe87f20da9fd4a077c
[ "MIT" ]
53
2020-02-10T13:40:33.000Z
2022-03-13T13:07:33.000Z
# Poker Hand You are playing poker with your friends and need to evaluate your hand. A hand consists of five cards and is ranked, from lowest to highest, in the following way: - High Card: Highest value card (from 2 to Ace). - One Pair: Two cards of the same value. - Two Pairs: Two different pairs. - Three of a Kind: Three cards of the same value. - Straight: All cards are consecutive values. - Flush: All cards of the same suit. - Full House: Three of a kind and a pair. - Four of a Kind: Four cards of the same value. - Straight Flush: All cards are consecutive values of same suit. - Royal Flush: 10, Jack, Queen, King, Ace, in same suit. ## Task: Output the rank of the given poker hand. ## Input Format: A string, representing five cards, each indicating the value and suit of the card, separated by spaces. Possible card values are: 2 3 4 5 6 7 8 9 10 J Q K A Suits: H (Hearts), D (Diamonds), C (Clubs), S (Spades) For example, JD indicates Jack of Diamonds. ## Output Format: A string, indicating the rank of the hand (in the format of the above description). ## Sample Input: ``` JS 2H JC AC 2D ``` ## Sample Output: ``` Two Pairs ``` ## Explanation: The hand includes two Jacks and two 2s, resulting in Two Pairs.
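A sketch of one way to rank a hand as described above (the function name and value table are illustrative; the ace-low straight A-2-3-4-5 is deliberately not handled, since the description does not mention it):

```python
from collections import Counter

# Map face values to integers so "consecutive values" can be checked numerically.
VALUES = {v: i for i, v in enumerate(
    ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"], start=2)}

def rank_hand(hand):
    cards = hand.split()
    values = sorted(VALUES[c[:-1]] for c in cards)  # "10H" -> "10", "JD" -> "J"
    suits = {c[-1] for c in cards}
    counts = sorted(Counter(values).values(), reverse=True)
    flush = len(suits) == 1
    straight = values == list(range(values[0], values[0] + 5))
    if flush and values == [10, 11, 12, 13, 14]:
        return "Royal Flush"
    if flush and straight:
        return "Straight Flush"
    if counts[0] == 4:
        return "Four of a Kind"
    if counts[:2] == [3, 2]:
        return "Full House"
    if flush:
        return "Flush"
    if straight:
        return "Straight"
    if counts[0] == 3:
        return "Three of a Kind"
    if counts[:2] == [2, 2]:
        return "Two Pairs"
    if counts[0] == 2:
        return "One Pair"
    return "High Card"

print(rank_hand("JS 2H JC AC 2D"))  # sample input -> prints "Two Pairs"
```

Checking ranks from strongest to weakest keeps each test simple: by the time `counts[:2] == [2, 2]` is reached, four of a kind and full house have already been ruled out.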
29.666667
105
0.714286
eng_Latn
0.995018
b9b466b05683821a1a31346ea98b4dc3813eed1f
506
md
Markdown
_posts/2015-01-15-first-class.md
kellygrape/com402
92b9e7985d77ff618153a32d8737836c7c37c106
[ "MIT" ]
null
null
null
_posts/2015-01-15-first-class.md
kellygrape/com402
92b9e7985d77ff618153a32d8737836c7c37c106
[ "MIT" ]
null
null
null
_posts/2015-01-15-first-class.md
kellygrape/com402
92b9e7985d77ff618153a32d8737836c7c37c106
[ "MIT" ]
null
null
null
--- layout: post title: Jan 15, 2015 - Syllabus Day / Intro to Class assignmentlink: 2015-01-15-assignments hasassignment: true --- ## Lesson Plan - Review Syllabus - Review Grading and Attendance Policies - Getting started with WordPress - Log in to blog space already created - Change the color scheme of the website - Add a widget to the sidebar - Create a home page - Change the title of the site - Create a post - Create a menu and add it to the sidebar. Add a link to class website.
24.095238
74
0.727273
eng_Latn
0.982963
b9b4872cb48187c2781ec4bb5440fb6af2856af3
92
md
Markdown
packages/components/transfer/demo/VirtualScroll.md
thinkingOfBetty/idux
3b99839e1f2e3222c78e98813068b51cc45a09be
[ "MIT" ]
null
null
null
packages/components/transfer/demo/VirtualScroll.md
thinkingOfBetty/idux
3b99839e1f2e3222c78e98813068b51cc45a09be
[ "MIT" ]
null
null
null
packages/components/transfer/demo/VirtualScroll.md
thinkingOfBetty/idux
3b99839e1f2e3222c78e98813068b51cc45a09be
[ "MIT" ]
null
null
null
--- order: 4 title: zh: 虚拟滚动 en: Virtual scroll --- ## zh 虚拟滚动 ## en Virtual scroll
6.133333
20
0.576087
spa_Latn
0.177649
b9b4d6d541097f59330a27ca0c7d646c66e80cd2
4,350
md
Markdown
content/post/2008-09-25-field-seminar-a-2008-2009-week-1-readings.md
kbenoit/kbenoit-site-blogdown
183a7364ffa987652b47fb229c92a1c563ccd506
[ "MIT" ]
1
2017-06-05T09:36:04.000Z
2017-06-05T09:36:04.000Z
content/post/2008-09-25-field-seminar-a-2008-2009-week-1-readings.md
kbenoit/kbenoit-site-blogdown
183a7364ffa987652b47fb229c92a1c563ccd506
[ "MIT" ]
1
2018-12-10T04:48:15.000Z
2018-12-10T04:48:15.000Z
content/post/2008-09-25-field-seminar-a-2008-2009-week-1-readings.md
kbenoit/kbenoit-site-blogdown
183a7364ffa987652b47fb229c92a1c563ccd506
[ "MIT" ]
8
2017-05-12T15:22:06.000Z
2019-05-16T09:13:19.000Z
--- author: Ken categories: - Field Seminar date: "2008-09-25T11:37:24Z" id: 68 title: PO7003 Field Seminar A in Political Science – 2008-09 url: /field-seminar-a-2008-2009-week-1-readings/ --- **Note:** The latest version of the course handout is available from <http://www.tcd.ie/Political_Science/postgraduate/phdtaught.php>. That version lacks the hyperlinks to the readings below, however. The **meeting time and place** is Friday 10-13:00, College Green 4, starting 3 October 2008. In practice the actual length of the Seminar will depend on the instructor, and typically will run from 10-12:00 only. While I am the coordinator for the course, the weekly seminars are led by a rotating set of staff from Political Science   ### Week 1 &#8211; 3 Oct 2008 &#8211; Ken Benoit - Policy Positions and Policy Spaces Kenneth Benoit and Michael Laver. 2006. _[**Party Policy in Modern Democracies**](http://www.politics.tcd.ie/ppmd/)_. London: Routledge, 2006. Wiesehomeier, Nina and Kenneth Benoit."Presidents, Parties And Policy Competition."University of Konstanz and Trinity College Dublin manuscript. [**Version: July 7, 2008.**](/pdfs/PPPC_LatinAm_7july2008.pdf) ### Week 2 &#8211; 10 Oct &#8211; Ken Benoit - Electoral System Origins Kenneth Benoit. 2007. "[Electoral Laws as Political Consequences: Explaining the Origins and Change of Electoral Institutions](http://arjournals.annualreviews.org/doi/pdf/10.1146/annurev.polisci.10.072805.101608)."_Annual Review of Political Science_ 10: 363-90. Boix, C. 1999. "[Setting the rules of the game: The choice of electoral systems in advanced democracies](http://www.tcd.ie/Political_Science/local/docs/boix_1999_apsr.pdf)."_American Political Science Review_ 93 (3): 609-624. Andrews, Josephine and Robert Jackman. 2005. "[Strategic Fools: Electoral rule choice under extreme uncertainty](http://www.tcd.ie/Political_Science/local/docs/andrews_jackman_2005_elstud.pdf)."_Electoral Studies_ 24: 65-84. Colomer, Josep M. (2004). 
&#8220;[The Strategy and History of Electoral System Choice](http://www.tcd.ie/Political_Science/local/docs/Colomer_ElectSysChoice.pdf).&#8221; In Josep M. Colomer (Ed.), _Handbook of Electoral System Choice_, pp. 3"78. New York: Palgrave Macmillan. (photocopied) Birch, Sarah, Frances Millard, Marina Popescu, and Kieran Williams (2002). _Embodying Democracy: Electoral System Design in Post-Communist Europe_. Houndmills, Basingstoke: Palgrave Macmillan. Ch 1. (photocopied) 2004. Benoit, Kenneth and Jacqueline Hayden. "[Institutional Change and Persistence: The Evolution of Poland&#8217;s Electoral System 1989-2001](/pdfs/BenoitHayden_JOP2004.pdf)."_Journal of Politics _66 (May, 2): 396-427. ### Week 3 &#8211; 17 Oct &#8211; Ken Benoit - Electoral System Consequences Gallagher, Michael and Paul Mitchell. 2005. "[Introduction to Electoral Systems](http://www.tcd.ie/Political_Science/local/docs/01-Gallagher-chap01.pdf)."In Michael Gallagher and Paul Mitchell, eds., _The Politics of Electoral Systems_. Oxford: Oxford University Press. pp3-24. Cox, Gary W. 1997. _Making votes count: strategic coordination in the world&#8217;s electoral systems_, _Political economy of institutions and decisions_. Cambridge, U.K. ; New York: Cambridge University Press. Duverger, Maurice. 1959. _Political parties: Their organization and activity in the modern state_. Second English Revised ed. London: Methuen & Co. Book II, Chapter 1, "The Number of Parties."Also see [this related reading](olitical_Science/local/docs/olitical_Science/local/docs/Duverger_InfOfElectSys.pdf). Duverger, Maurice. 1986. Duverger&#8217;s Law: Forty Years Later. In _Electoral Laws and Their Political Consequences_, edited by B. Grofman and A. Lijphart. New York: Agathon Press. Gallagher, Michael. 1991. &#8220;[Proportionality, disproportionality, and electoral systems](http://www.tcd.ie/Political_Science/local/docs/Gallagher_1991_ES.pdf).&#8221; _Electoral Studies_ 10 (1): 33-51. 
Ordeshook, Peter, and Olga Shvetsova. 1994. "[Ethnic heterogeneity, district magnitude, and the number of parties](http://www.tcd.ie/Political_Science/local/docs/Ordeshook_Shvetsova_1994_AJPS.pdf)." _American Journal of Political Science_ 38 (1): 100-123.
48.876404
350
0.766667
eng_Latn
0.444581
b9b5afa0764bd6073a401ec6fef27c1a8db34a5b
1,350
md
Markdown
2020/10/21/2020-10-21 04:55.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
3
2020-07-14T14:54:15.000Z
2020-08-21T06:48:24.000Z
2020/10/21/2020-10-21 04:55.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
null
null
null
2020/10/21/2020-10-21 04:55.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
null
null
null
2020年10月21日04时数据 Status: 200 1.李佳琦直播 微博热度:151940 2.男子逃亡30年自首发现弄错 微博热度:135530 3.Angelababy丝绒红唇 微博热度:132860 4.邓伦家房子真的塌了 微博热度:125257 5.被双11规则逼疯的我 微博热度:105501 6.周杰伦代言海澜之家 微博热度:101705 7.理性消费按需购物 微博热度:67539 8.薇娅直播 微博热度:64015 9.90后女孩拍摄微观菌奇幻世界 微博热度:59383 10.赵立坚批美国有些人纯粹是酸葡萄心态 微博热度:57381 11.半是蜜糖半是伤 微博热度:56946 12.王一博刘雯同框 微博热度:46122 13.中日韩三国的怼人差异 微博热度:46112 14.心动的信号 微博热度:45968 15.抗疫夫妻为救人第二次错过婚礼 微博热度:45831 16.iPhone12蓝色 微博热度:45764 17.济南交警通报水泥罐车侧翻压扁轿车 微博热度:45612 18.127岁湖南第一寿星田龙玉去世 微博热度:45587 19.刘诗诗宋茜合照 微博热度:44935 20.双十一养猫 微博热度:42898 21.彭昱畅连续五年为许魏洲庆生 微博热度:40490 22.战清泓终于分手了 微博热度:37778 23.2020胡润百富榜揭晓 微博热度:27749 24.轿车冲出公路开上居民房顶 微博热度:27709 25.李小川追妻 微博热度:26126 26.你心目中媚骨天成的女演员 微博热度:25244 27.李雪琴想要采访吴亦凡 微博热度:24770 28.4AM 微博热度:24207 29.熊顿人间陀螺 微博热度:24028 30.安倍称推迟东京奥运前征得特朗普同意 微博热度:23964 31.雪梨谭松韵直播 微博热度:23964 32.张翰耳朵跳舞 微博热度:23962 33.神仙耳光炒饭 微博热度:23957 34.iPhone12pro 微博热度:23372 35.青岛所有进口冷链产品每件必检 微博热度:22763 36.陈奕迅称很久没有收入了 微博热度:22742 37.李现为保持状态不烟不酒 微博热度:22720 38.双十一来临前的焦虑感 微博热度:22659 39.阴阳师 微博热度:22132 40.坚果手机发布会 微博热度:22065 41.周星驰方否认拖欠巨额债务 微博热度:22043 42.杨幂上海路透 微博热度:22041 43.明日方舟 微博热度:21980 44.青岛找到新冠病毒物传人证据链 微博热度:21768 45.大学封校两小狗隔桥对望 微博热度:21685 46.好演员投票结果 微博热度:21042 47.95岁母亲靠摸头分辨4个儿子 微博热度:20608 48.袁帅花式追妻 微博热度:20515 49.女大学生的快乐 微博热度:20267 50.iPhone12开箱 微博热度:18832
6.617647
20
0.774074
yue_Hant
0.419173
b9b67b1f3a97134793e091357c31e82e766345dd
5,574
md
Markdown
README.md
KfirBernstein/technion-iem-ds_lab
1e2a9f3a6cd571988e1518423577707a486ffeac
[ "MIT" ]
null
null
null
README.md
KfirBernstein/technion-iem-ds_lab
1e2a9f3a6cd571988e1518423577707a486ffeac
[ "MIT" ]
null
null
null
README.md
KfirBernstein/technion-iem-ds_lab
1e2a9f3a6cd571988e1518423577707a486ffeac
[ "MIT" ]
null
null
null
# Exercise submission checker This project is a web server for students to upload, build and execute programming tasks in C++,Python, Java etc. The server runs in a Docker container, on a linux host. It is developed as a small scale alternative to [CodeRunner](https://moodle.org/plugins/qtype_coderunner) . #### Maturity: In development, pre alpha. # Getting Started <b>TODO</b> -- add content <br> For support, send mail to the repository maintainer ## System Requirements - ubuntu 18.04 - docker - ssh access (for management and file uploading) ## Installing on the server that will run the checker: * install docker: ```sudo apt install docker``` * clone the repo and cd into it * create required directories: ```mkdir -p $HOME/data/logs``` ## First time installation Currently need to manually build the dependency images: ``` docker build -t python_cmake_base -f Dockerfile_base . docker build -t py_java_cpp_base -f Dockerfile_py_java_cpp_base . ``` ## * build the docker container: ```docker build -t server . 
``` * run the server in a new container: ```./scripts/run_docker.sh server ``` * check that the server is up by accessing http://\<server IP>/ # Running tests of the server TBD #Contributing use pull requests in github # Instructions for the Tutor As the tutor, you have to prepare: - code that will execute the program - code that verify the output is correct - _hint_: https://regex101.com/ - input data (the input test vector) - output data (the output for the input for a correct solution) - optionally, another input and output tagged GOLDEN These coding parts are called __runner__ and __matcher__ (aka comparator) Choosing the runner and matcher is done by reading a configuration file (located at {rootdir}/hw_settings.json) You can see the current content in ```host_address/admin/show_ex_config``` Modifying/adding values is by uploading a new version of the file (in the admin page) For example, - in ex 1, a C/C++ program, exact text match is needed - in ex 2, a Python program, there is a complicated scenario that requires regular expression. The config file json will look like: ``` { "94201": [ { "id": 1, "matcher" : "exact_match.py", "runner" :"check_cmake.sh", "timeout" : 5 }, { "id": 2, "matcher" : "tester_ex2.py", "runner" :"check_py.sh", "timeout" : 20 } ] } ``` ## Uploading data to the server Use ssh and put the data files e.g. ```ref_hw_3_input``` in $HOME/data<br> It will be mapped into the server's file system. <br> ## Using the correct runner Depending on the assignment, choose an existing runner or write a new one.<br> The runners are bash scripts that are run in a shell (for security) and accepts as arguments the filename of the tested code, input data file, reference output file and matcher file name.<Br> The script return 0 for success. <br> NOTE: Comparison is done by running the matcher from within the runner script. This will be changed in a future version. 
These runners are already implemented: - check_cmake.sh: run cmake, make and execute the executable found. - check_py.sh: run the file ```main.py``` in the supplied archive (or just this file if no archive is used) - check_sh.sh: run the file using bash ## Adding/Modifying matcher All matchers are written in Python3. Before writing a new one, check the existing scripts - maybe you can use one of them as a baseline. 1. save the new ```tester.py``` in ```serverpkg.matchers``` dir.<br> The script must implement ```check(output_from_test_file_name,reference_file_name)``` <br> and return True if the files are considered a match.<br> For example ```def check(f1,ref): return True```<br> <strong>currently, you need to implement</strong> <br> <pre> if __name__ == "__main__": good = check(sys.argv[1], sys.argv[2]) exit(0 if good else ExitCode.COMPARE_FAILED) </pre> 2. Update the config file by uploading an updated version.<br> The current config can be seen at http://your-host/admin/show_ex_config, <br> and uploading from the admin page at http://your-host/admin 3. build a new Docker container, stop the current one, start the new one: ``` $ ssh myhost (myhost) $ cd checker (myhost) $ git pull (myhost) $ cd scripts && ./again.sh ``` # Running the web application The webapp runs as a Docker container.<br> First, build the container: (in the project root directory) > docker build **.** -t server:<current_tag> then run it > cd scripts && ./run_docker.sh server OR - if you just want to build - stop - start fresh again: > cd scripts && ./again.sh # Reliability I created a monitoring account that checks the server every couple of minutes and sends me an email if the server is down. There is a known problem that occasionally the server hangs. In such a case, ssh to the host, and execute ```scripts/restart_container.sh``` # Security * The Docker container runs as user _nobody_, who has minimal privileges.
* The submitted code is run in a subprocess with limitations on * execution time (for each exercise we set a timeout value) * network connectivity (not implemented yet) # Debugging During development, it is easier to run only the Python code without Docker: ``` cd ~/checker source venv/bin/activate python3 run.py ``` ## gunicorn To run with gunicorn (one step closer to the real-world configuration): ``` cd ~/checker source venv/bin/activate gunicorn3 -b 0.0.0.0:8000 --workers 3 serverpkg.server:app ```
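To make the matcher contract described above concrete, here is a minimal sketch of an exact-match matcher following the `check(output_file, reference_file)` interface from the Tutor instructions. This is an illustration, not the project's actual `exact_match.py`; the trailing-whitespace tolerance and the hard-coded failure exit code `1` are assumptions (the real code uses an `ExitCode` constant whose definition is not shown here).

```python
# Sketch of a matcher following the check() contract described above.
# Assumption: trailing whitespace on each line is ignored; the real
# project's matchers may be stricter. Exit code 1 stands in for the
# ExitCode.COMPARE_FAILED constant, which is not shown in this README.
import sys


def check(output_file_name, reference_file_name):
    """Return True if the two files match line by line,
    ignoring trailing whitespace on each line."""
    with open(output_file_name) as out, open(reference_file_name) as ref:
        out_lines = [line.rstrip() for line in out]
        ref_lines = [line.rstrip() for line in ref]
    return out_lines == ref_lines


if __name__ == "__main__" and len(sys.argv) >= 3:
    good = check(sys.argv[1], sys.argv[2])
    sys.exit(0 if good else 1)
```

The runner script invokes this with the tested program's output and the reference file, and propagates the exit status back to the server.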
31.670455
116
0.716541
eng_Latn
0.989668
b9b7a6f7a34a45ebd119678efcf953fbe8983d9e
184
md
Markdown
content/cloud-audit-logs-gcp.md
timf/terms.dev
69ad8e7a588eadb6a5cb9f2eaaa663eeb4655fcc
[ "MIT", "Unlicense" ]
3
2021-03-06T18:53:23.000Z
2021-03-08T15:19:07.000Z
content/cloud-audit-logs-gcp.md
timf/terms.dev
69ad8e7a588eadb6a5cb9f2eaaa663eeb4655fcc
[ "MIT", "Unlicense" ]
null
null
null
content/cloud-audit-logs-gcp.md
timf/terms.dev
69ad8e7a588eadb6a5cb9f2eaaa663eeb4655fcc
[ "MIT", "Unlicense" ]
null
null
null
+++ title = "Cloud Audit Logs" weight = 33 date = 2021-03-07 [extra] link = "https://cloud.google.com/audit-logs/" [taxonomies] groups = ["gcp"] +++ GCP service: Audit trails for GCP
15.333333
45
0.663043
kor_Hang
0.641216
b9b88d4ad92ec9b61224b308bb39e3bc3b5ce28c
14,987
md
Markdown
readme.md
tunjid/Tiler
595338969371712331ba4fcb13abac5d47b48d5f
[ "Apache-2.0" ]
4
2022-03-02T22:15:12.000Z
2022-03-05T12:56:52.000Z
readme.md
tunjid/Tiler
595338969371712331ba4fcb13abac5d47b48d5f
[ "Apache-2.0" ]
null
null
null
readme.md
tunjid/Tiler
595338969371712331ba4fcb13abac5d47b48d5f
[ "Apache-2.0" ]
null
null
null
# Tiler [![JVM Tests](https://github.com/tunjid/Tiler/actions/workflows/tests.yml/badge.svg)](https://github.com/tunjid/Tiler/actions/workflows/tests.yml) ![Tiler](https://img.shields.io/maven-central/v/com.tunjid.tiler/tiler?label=tiler) ![badge][badge-ios] ![badge][badge-js] ![badge][badge-jvm] ![badge][badge-linux] ![badge][badge-windows] ![badge][badge-mac] ![badge][badge-tvos] ![badge][badge-watchos] Please note, this is not an official Google repository. It is a Kotlin multiplatform experiment that makes no guarantees about API stability or long term support. None of the works presented here are production tested, and should not be taken as anything more than face value. ## Introduction Tiling is a Kotlin multiplatform experiment for loading chunks of structured data from reactive sources. Tiling is achieved with a Tiler; a pure function that can adapt any generic method of the form: ```kotlin fun <T> items(query: Query): Flow<T> ``` into a paginated API. It does this by exposing a functional reactive API most similar to a `Map` where: * The keys are the queries (`Query`) for data * The values are dynamic sets of data returned over time as the result of the `Query` key. | `typealias` | type | Output | |--------------------------|-----------------------------------------------------------------|--------------------| | `ListTiler<Query, Item>` | `(Flow<Tile.Input.List<Query, Item>>) -> Flow<List<Item>>` | A flattened `List` | | `MapTiler<Query, Item>` | `(Flow<Tile.Input.Map<Query, Item>>) -> Flow<Map<Query, Item>>` | `Map<Key, Value>` | ## Get it `Tiler` is available on mavenCentral with the latest version indicated by the badge at the top of this readme file. 
`implementation com.tunjid.tiler:tiler:version` ## Demo The demo app is cheekily implemented as a grid of tiles with dynamic colors: ![Demo image](https://github.com/tunjid/tiler/blob/develop/misc/demo.gif) ## Use cases As tiling is a pure function that operates on a reactive stream, its configuration can be changed on the fly. This lends it well to the following situations: * Adaptive pagination: The amount of items paginated through can be adjusted dynamically to account for app window resizing by turning [on](https://github.com/tunjid/Tiler#inputrequest) more pages and increasing the [limit](https://github.com/tunjid/Tiler#inputlimiter) of data sent to the UI from the paginated data available. An example is in the [Me](https://github.com/tunjid/me/blob/main/common/feature-archive-list/src/commonMain/kotlin/com/tunjid/me/feature/archivelist/ArchiveLoading.kt) app. * Dynamic sort order: The sort order of paginated items can be changed cheaply on a whim by changing the [order](https://github.com/tunjid/Tiler#inputorder) as this only operates on the data output from the tiler, and not the entire paginated data set. An example is in the sample in this [repository](https://github.com/tunjid/Tiler/blob/develop/common/src/commonMain/kotlin/com/tunjid/demo/common/ui/numbers/advanced/NumberFetching.kt). ## API surface ### Getting your data Tiling prioritizes access to the data you've paged through, allowing you to read all paginated data at once, or a subset of it (using `Input.Limiter`). This allows you to trivially transform the data you've fetched after the fact. Tilers are implemented as plain functions. Given a `Flow` of `Input`, you can either choose to get your data as: * A `Flow<List<Item>>` with `tiledList` * A `Flow<Map<Query, Item>>` with `tiledMap` The choice between the two depends largely on the operations you want to perform on the output before consuming it. 
A `MapTiler` could be used when you want a clear separation of the chunks of data, for example to add headers or to group information in a single component. Alternatively, you can use a `ListTiler` and call `List<Item>.groupBy {...}` if you find that more ergonomic. The `Map<Query, Item>` in the `Flow` produced from the `MapTiler` is guaranteed to have a stable iteration order defined by the `Input.Order` passed to it. More on this below. ## Managing requested data Much like a classic `Map` that supports update and remove methods, a Tiler offers analogous operations in the form of `Inputs`. ### `Input.Request` * On: Analogous to `put` for a `Map`, this starts collecting from the backing `Flow` for the specified `query`. It is idempotent; multiple requests have no side effects for loading, i.e. the same `Flow` will not be collected twice. * Off: Stops collecting from the backing `Flow` for the specified `query`. The items previously fetched by this query are still kept in memory and will be in the `List` of items returned. Requesting this is idempotent; multiple requests have no side effects. * Evict: Analogous to `remove` for a `Map`, this stops collecting from the backing `Flow` for the specified `query` and also evicts the items previously fetched by the `query` from memory. Requesting this is idempotent; multiple requests have no side effects. ### `Input.Limiter` Can be used to select a subset of items tiled instead of the entire paginated set. For example, assuming 1000 items have been fetched, there's no need to send 1000 items to the UI for diffing/display when the UI can only show about 30 at once. The `Limiter` allows for selecting an arbitrary amount of items as the situation demands. ### `Input.Order` Defines the heuristic for selecting tiled items into the output container. * Unspecified: Items will be returned in an undefined order. This is the default. * Sorted: Sort items with a specified query `comparator`. 
* PivotSorted: Sort items with the specified `comparator` but pivoted around the last query a `Tile.Request.On` was sent for. This allows for showing items that have more priority over others in the current context, for example in a list being scrolled. In other words, assume tiles have been fetched for queries 1 - 10 but a user can see pages 5 and 6. The UI only needs to be aware of pages 4, 5, 6, and 7. This allows for a rolling window of queries based on a user's scroll position. * Custom: Flattens tiled items produced whichever way you desire. ## Examples ### Simple, non-scaling example In this example, the code will return a full list of every item requested sorted in ascending order of the pages. The list will grow indefinitely as more pages are requested unless queries are evicted. This may be fine for small lists, but as the list size grows, some items may need to be evicted as only a small subset of items need to be presented to the UI. This sort of behavior can be achieved using the `Evict` `Request`, and the `PivotSorted` `Order` covered next. 
```kotlin import com.tunjid.tiler.Tile import com.tunjid.tiler.tiledList import kotlinx.coroutines.flow.Flow import kotlinx.coroutines.flow.MutableStateFlow import kotlinx.coroutines.flow.flowOf import kotlinx.coroutines.flow.map class NumberFetcher { private val requests = MutableStateFlow(0) private val tiledList: (Flow<Tile.Input.List<Int, List<Int>>>) -> Flow<List<List<Int>>> = tiledList( // Sort items in ascending order order = Tile.Order.Sorted(comparator = Int::compareTo), fetcher = { page -> // Fetch 50 numbers for each page val start = page * 50 val numbers = start.until(start + 50) flowOf(numbers.toList()) } ) // All paginated items in a single list val listItems: Flow<List<Int>> = tiledList.invoke( requests.map { Tile.Request.On(it) } ) .map(List<List<Int>>::flatten) fun fetchPage(page: Int) { requests.value = page } } ``` ### Advanced, scalable solution To deal with the issue of the tiled data set becoming arbitrarily large, one can create a pipeline where the pages loaded are defined as a function of the page the user is currently at: ``` [out of bounds] -> Evict from memory _ [currentPage - n - 1 - n] | ... | -> Keep pages in memory, but don't observe [currentPage - n - 1] _ _| [currentPage - n] | ... | [currentPage - 1] | [currentPage] | -> Observe pages [currentPage + 1] | ... | [currentPage + n] _| _ [currentPage + n + 1] | ... | -> Keep pages in memory, but don't observe [currentPage + n + 1 + n] _| [out of bounds] -> Evict from memory ``` Where `n` is an arbitrary number that may be defined by how many items are visible on the screen at once. For an example where `n` is a function of grid size in a grid list, check out [ArchiveLoading.kt](https://github.com/tunjid/me/blob/main/common/feature-archive-list/src/commonMain/kotlin/com/tunjid/me/feature/archivelist/ArchiveLoading.kt) in the [me](https://github.com/tunjid/me) project. 
An example where `n` is fixed at 2 follows: ```kotlin import com.tunjid.tiler.Tile import com.tunjid.tiler.tiledList import kotlinx.coroutines.flow.Flow import kotlinx.coroutines.flow.MutableStateFlow import kotlinx.coroutines.flow.asFlow import kotlinx.coroutines.flow.distinctUntilChanged import kotlinx.coroutines.flow.flatMapLatest import kotlinx.coroutines.flow.flowOf import kotlinx.coroutines.flow.map import kotlinx.coroutines.flow.scan data class LoadMetadata( val pivotPage: Int = 0, // Pages actively being collected and loaded from val on: List<Int> = listOf(), // Pages whose emissions are in memory, but are not being collected from val off: List<Int> = listOf(), // Pages to remove from memory val evict: List<Int> = listOf(), ) private fun LoadMetadata.toRequests(): Flow<Tile.Input.List<Int, List<Int>>> = listOf<List<Tile.Input.List<Int, List<Int>>>>( evict.map { Tile.Request.Evict(it) }, off.map { Tile.Request.Off(it) }, on.map { Tile.Request.On(it) }, ) .flatten() .asFlow() class ManagedNumberFetcher { private val requests = MutableStateFlow(0) val managedRequests: Flow<Tile.Input.List<Int, List<Int>>> = requests .scan(LoadMetadata()) { previousMetadata, currentPage -> // Load 5 pages pivoted around the current page at once val on: List<Int> = ((currentPage - 2)..(currentPage + 2)) .filter { it >= 0 } .toList() // Keep 3 pages on either end of the active pages in memory val off: List<Int> = (((currentPage - 5)..(currentPage - 3)) + ((currentPage + 3)..(currentPage + 5))) .filter { it >= 0 } LoadMetadata( on = on, off = off, pivotPage = currentPage, // Evict everything not in the current active and inactive range evict = (previousMetadata.on + previousMetadata.off) - (on + off).toSet() ) } .distinctUntilChanged() .flatMapLatest(LoadMetadata::toRequests) private val tiledList: (Flow<Tile.Input.List<Int, List<Int>>>) -> Flow<List<List<Int>>> = tiledList( // Sort items in ascending order, pivoted around the current page order = Tile.Order.PivotSorted(comparator 
= Int::compareTo), // Output at most 200 items at once to allow for cheap data transformations regardless of paged dataset size limiter = Tile.Limiter.List { it.size > 200 }, fetcher = { page -> // Fetch 50 numbers for each page val start = page * 50 val numbers = start.until(start + 50) flowOf(numbers.toList()) } ) val listItems: Flow<List<Int>> = tiledList.invoke(managedRequests) .map(List<List<Int>>::flatten) fun setCurrentPage(page: Int) { requests.value = page } } ``` In the above, only flows for 5 pages are collected at any one time. Up to 6 more pages are kept in memory for quick resumption, and the rest are evicted from memory. As the user scrolls, `setCurrentPage` is called, and data is fetched for that page, and the surrounding pages. Pages that are far away from the current page (more than 5 pages away) are removed from memory. ## Efficiency & performance As tiling loads from multiple flows simultaneously, performance is a function of two things: * How often the backing `Flow` for each `Input.Request` emits * The time and space complexity of the transformations applied to the output `List<Item>` or `Map<Query, Item>`. In the case of the former, the `Flow` should only emit if the backing dataset has actually changed. This prevents unnecessary emissions downstream. In the case of the latter, by using `Input.Limiter` on the output of the tiler, you can guarantee transformations on the output are `O(N)`, where `N` is the amount defined by the `Input.Limiter`. For example, if tiling is done for the UI, with a viewport that can display 20 items at once, 20 items can be fetched per page, and 100 items (20 items across 5 pages) can be observed concurrently. Using `Input.Limiter.List { it.size > 100 }`, only 100 items will be sent to the UI at once. The items can be transformed with algorithms of `O(N)` to `O(N^2)` time and space complexity trivially, as regardless of the size of the actual paginated set, only 100 items will be transformed at any one time. 
## License Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. [badge-android]: http://img.shields.io/badge/-android-6EDB8D.svg?style=flat [badge-jvm]: http://img.shields.io/badge/-jvm-DB413D.svg?style=flat [badge-js]: http://img.shields.io/badge/-js-F8DB5D.svg?style=flat [badge-js-ir]: https://img.shields.io/badge/support-[IR]-AAC4E0.svg?style=flat [badge-nodejs]: https://img.shields.io/badge/-nodejs-68a063.svg?style=flat [badge-linux]: http://img.shields.io/badge/-linux-2D3F6C.svg?style=flat [badge-windows]: http://img.shields.io/badge/-windows-4D76CD.svg?style=flat [badge-wasm]: https://img.shields.io/badge/-wasm-624FE8.svg?style=flat [badge-apple-silicon]: http://img.shields.io/badge/support-[AppleSilicon]-43BBFF.svg?style=flat [badge-ios]: http://img.shields.io/badge/-ios-CDCDCD.svg?style=flat [badge-mac]: http://img.shields.io/badge/-macos-111111.svg?style=flat [badge-watchos]: http://img.shields.io/badge/-watchos-C0C0C0.svg?style=flat [badge-tvos]: http://img.shields.io/badge/-tvos-808080.svg?style=flat
43.56686
485
0.702809
eng_Latn
0.974013
b9b8d74a27355d1c42ffb826157e0a449874abc2
298
md
Markdown
core/mobile-manager/CHANGELOG.md
lerzeel2/imodeljs
785206a722a8d5e02325bbf3748197f6e6e31c13
[ "MIT" ]
null
null
null
core/mobile-manager/CHANGELOG.md
lerzeel2/imodeljs
785206a722a8d5e02325bbf3748197f6e6e31c13
[ "MIT" ]
1
2020-11-21T14:31:53.000Z
2020-11-21T16:04:11.000Z
core/mobile-manager/CHANGELOG.md
lerzeel2/imodeljs
785206a722a8d5e02325bbf3748197f6e6e31c13
[ "MIT" ]
null
null
null
# Change Log - @bentley/mobile-manager This log was last generated on Tue, 09 Mar 2021 20:28:13 GMT and should not be manually modified. ## 2.13.0 Tue, 09 Mar 2021 20:28:13 GMT ### Updates - Remove unused code. - Updated to use TypeScript 4.1 - begin rename project from iModel.js to iTwin.js
21.285714
97
0.724832
eng_Latn
0.961785
b9b9172b42c72c0ac4769fa1c93d4911c3f40ac4
49,701
md
Markdown
sources/tech/20181016 Lab 6- Network Driver.md
Cielllll/TranslateProject
893cc7d906add05e2693fa7427ee18a22ae6f97a
[ "Apache-2.0" ]
null
null
null
sources/tech/20181016 Lab 6- Network Driver.md
Cielllll/TranslateProject
893cc7d906add05e2693fa7427ee18a22ae6f97a
[ "Apache-2.0" ]
null
null
null
sources/tech/20181016 Lab 6- Network Driver.md
Cielllll/TranslateProject
893cc7d906add05e2693fa7427ee18a22ae6f97a
[ "Apache-2.0" ]
null
null
null
Translating by qhwdw Lab 6: Network Driver ====== ### Lab 6: Network Driver (default final project) **Due on Thursday, December 6, 2018** ### Introduction This lab is the default final project that you can do on your own. Now that you have a file system, no self-respecting OS should go without a network stack. In this lab you are going to write a driver for a network interface card. The card will be based on the Intel 82540EM chip, also known as the E1000. ##### Getting Started Use Git to commit your Lab 5 source (if you haven't already), fetch the latest version of the course repository, and then create a local branch called `lab6` based on our lab6 branch, `origin/lab6`: ``` athena% cd ~/6.828/lab athena% add git athena% git commit -am 'my solution to lab5' nothing to commit (working directory clean) athena% git pull Already up-to-date. athena% git checkout -b lab6 origin/lab6 Branch lab6 set up to track remote branch refs/remotes/origin/lab6. Switched to a new branch "lab6" athena% git merge lab5 Merge made by recursive. fs/fs.c | 42 +++++++++++++++++++ 1 files changed, 42 insertions(+), 0 deletions(-) athena% ``` The network card driver, however, will not be enough to get your OS hooked up to the Internet. In the new lab6 code, we have provided you with a network stack and a network server. As in previous labs, use git to grab the code for this lab, merge in your own code, and explore the contents of the new `net/` directory, as well as the new files in `kern/`. In addition to writing the driver, you will need to create a system call interface to give access to your driver. You will implement missing network server code to transfer packets between the network stack and your driver. You will also tie everything together by finishing a web server. With the new web server you will be able to serve files from your file system. Much of the kernel device driver code you will have to write yourself from scratch. 
This lab provides much less guidance than previous labs: there are no skeleton files, no system call interfaces written in stone, and many design decisions are left up to you. For this reason, we recommend that you read the entire assignment write up before starting any individual exercises. Many students find this lab more difficult than previous labs, so please plan your time accordingly. ##### Lab Requirements As before, you will need to do all of the regular exercises described in the lab and _at least one_ challenge problem. Write up brief answers to the questions posed in the lab and a description of your challenge exercise in `answers-lab6.txt`. #### QEMU's virtual network We will be using QEMU's user mode network stack since it requires no administrative privileges to run. QEMU's documentation has more about user-net [here][1]. We've updated the makefile to enable QEMU's user-mode network stack and the virtual E1000 network card. By default, QEMU provides a virtual router running on IP 10.0.2.2 and will assign JOS the IP address 10.0.2.15. To keep things simple, we hard-code these defaults into the network server in `net/ns.h`. While QEMU's virtual network allows JOS to make arbitrary connections out to the Internet, JOS's 10.0.2.15 address has no meaning outside the virtual network running inside QEMU (that is, QEMU acts as a NAT), so we can't connect directly to servers running inside JOS, even from the host running QEMU. To address this, we configure QEMU to run a server on some port on the _host_ machine that simply connects through to some port in JOS and shuttles data back and forth between your real host and the virtual network. You will run JOS servers on ports 7 (echo) and 80 (http). To avoid collisions on shared Athena machines, the makefile generates forwarding ports for these based on your user ID. To find out what ports QEMU is forwarding to on your development host, run make which-ports. 
For convenience, the makefile also provides make nc-7 and make nc-80, which allow you to interact directly with servers running on these ports in your terminal. (These targets only connect to a running QEMU instance; you must start QEMU itself separately.) ##### Packet Inspection The makefile also configures QEMU's network stack to record all incoming and outgoing packets to `qemu.pcap` in your lab directory. To get a hex/ASCII dump of captured packets, use `tcpdump` like this: ``` tcpdump -XXnr qemu.pcap ``` Alternatively, you can use [Wireshark][2] to graphically inspect the pcap file. Wireshark also knows how to decode and inspect hundreds of network protocols. If you're on Athena, you'll have to use Wireshark's predecessor, ethereal, which is in the sipbnet locker. ##### Debugging the E1000 We are very lucky to be using emulated hardware. Since the E1000 is running in software, the emulated E1000 can report to us, in a user-readable format, its internal state and any problems it encounters. Normally, such a luxury would not be available to a driver developer writing for bare metal. The E1000 can produce a lot of debug output, so you have to enable specific logging channels. Some channels you might find useful are: | Flag | Meaning | | --------- | ---------------------------------------------------| | tx | Log packet transmit operations | | txerr | Log transmit ring errors | | rx | Log changes to RCTL | | rxfilter | Log filtering of incoming packets | | rxerr | Log receive ring errors | | unknown | Log reads and writes of unknown registers | | eeprom | Log reads from the EEPROM | | interrupt | Log interrupts and changes to interrupt registers. | To enable "tx" and "txerr" logging, for example, use make E1000_DEBUG=tx,txerr .... Note: `E1000_DEBUG` flags only work in the 6.828 version of QEMU. You can take debugging using software-emulated hardware one step further. 
If you are ever stuck and do not understand why the E1000 is not responding the way you would expect, you can look at QEMU's E1000 implementation in `hw/e1000.c`. #### The Network Server Writing a network stack from scratch is hard work. Instead, we will be using lwIP, an open source lightweight TCP/IP protocol suite that among many things includes a network stack. You can find more information on lwIP [here][3]. In this assignment, as far as we are concerned, lwIP is a black box that implements a BSD socket interface and has a packet input port and packet output port. The network server is actually a combination of four environments: * core network server environment (includes socket call dispatcher and lwIP) * input environment * output environment * timer environment The following diagram shows the different environments and their relationships. The diagram shows the entire system including the device driver, which will be covered later. In this lab, you will implement the parts highlighted in green. ![Network server architecture][4] ##### The Core Network Server Environment The core network server environment is composed of the socket call dispatcher and lwIP itself. The socket call dispatcher works exactly like the file server. User environments use stubs (found in `lib/nsipc.c`) to send IPC messages to the core network environment. If you look at `lib/nsipc.c` you will see that we find the core network server the same way we found the file server: `i386_init` created the NS environment with NS_TYPE_NS, so we scan `envs`, looking for this special environment type. For each user environment IPC, the dispatcher in the network server calls the appropriate BSD socket interface function provided by lwIP on behalf of the user. Regular user environments do not use the `nsipc_*` calls directly. Instead, they use the functions in `lib/sockets.c`, which provides a file descriptor-based sockets API. 
Thus, user environments refer to sockets via file descriptors, just like how they referred to on-disk files. A number of operations (`connect`, `accept`, etc.) are specific to sockets, but `read`, `write`, and `close` go through the normal file descriptor device-dispatch code in `lib/fd.c`. Much like how the file server maintained internal unique ID's for all open files, lwIP also generates unique ID's for all open sockets. In both the file server and the network server, we use information stored in `struct Fd` to map per-environment file descriptors to these unique ID spaces. Even though it may seem that the IPC dispatchers of the file server and network server act the same, there is a key difference. BSD socket calls like `accept` and `recv` can block indefinitely. If the dispatcher were to let lwIP execute one of these blocking calls, the dispatcher would also block and there could only be one outstanding network call at a time for the whole system. Since this is unacceptable, the network server uses user-level threading to avoid blocking the entire server environment. For every incoming IPC message, the dispatcher creates a thread and processes the request in the newly created thread. If the thread blocks, then only that thread is put to sleep while other threads continue to run. In addition to the core network environment there are three helper environments. Besides accepting messages from user applications, the core network environment's dispatcher also accepts messages from the input and timer environments. ##### The Output Environment When servicing user environment socket calls, lwIP will generate packets for the network card to transmit. LwIP will send each packet to be transmitted to the output helper environment using the `NSREQ_OUTPUT` IPC message with the packet attached in the page argument of the IPC message. 
The output environment is responsible for accepting these messages and forwarding the packet on to the device driver via the system call interface that you will soon create. ##### The Input Environment Packets received by the network card need to be injected into lwIP. For every packet received by the device driver, the input environment pulls the packet out of kernel space (using kernel system calls that you will implement) and sends the packet to the core server environment using the `NSREQ_INPUT` IPC message. The packet input functionality is separated from the core network environment because JOS makes it hard to simultaneously accept IPC messages and poll or wait for a packet from the device driver. We do not have a `select` system call in JOS that would allow environments to monitor multiple input sources to identify which input is ready to be processed. If you take a look at `net/input.c` and `net/output.c` you will see that both need to be implemented. This is mainly because the implementation depends on your system call interface. You will write the code for the two helper environments after you implement the driver and system call interface. ##### The Timer Environment The timer environment periodically sends messages of type `NSREQ_TIMER` to the core network server notifying it that a timer has expired. The timer messages from this thread are used by lwIP to implement various network timeouts. ### Part A: Initialization and transmitting packets Your kernel does not have a notion of time, so we need to add it. There is currently a clock interrupt that is generated by the hardware every 10ms. On every clock interrupt we can increment a variable to indicate that time has advanced by 10ms. This is implemented in `kern/time.c`, but is not yet fully integrated into your kernel. ``` Exercise 1. Add a call to `time_tick` for every clock interrupt in `kern/trap.c`. 
Implement `sys_time_msec` and add it to `syscall` in `kern/syscall.c` so that user space has access to the time. ``` Use make INIT_CFLAGS=-DTEST_NO_NS run-testtime to test your time code. You should see the environment count down from 5 in 1 second intervals. The "-DTEST_NO_NS" disables starting the network server environment because it will panic at this point in the lab. #### The Network Interface Card Writing a driver requires knowing in depth the hardware and the interface presented to the software. The lab text will provide a high-level overview of how to interface with the E1000, but you'll need to make extensive use of Intel's manual while writing your driver. ``` Exercise 2. Browse Intel's [Software Developer's Manual][5] for the E1000. This manual covers several closely related Ethernet controllers. QEMU emulates the 82540EM. You should skim over chapter 2 now to get a feel for the device. To write your driver, you'll need to be familiar with chapters 3 and 14, as well as 4.1 (though not 4.1's subsections). You'll also need to use chapter 13 as reference. The other chapters mostly cover components of the E1000 that your driver won't have to interact with. Don't worry about the details right now; just get a feel for how the document is structured so you can find things later. While reading the manual, keep in mind that the E1000 is a sophisticated device with many advanced features. A working E1000 driver only needs a fraction of the features and interfaces that the NIC provides. Think carefully about the easiest way to interface with the card. We strongly recommend that you get a basic driver working before taking advantage of the advanced features. ``` ##### PCI Interface The E1000 is a PCI device, which means it plugs into the PCI bus on the motherboard. The PCI bus has address, data, and interrupt lines, and allows the CPU to communicate with PCI devices and PCI devices to read and write memory. 
A PCI device needs to be discovered and initialized before it can be used. Discovery is the process of walking the PCI bus looking for attached devices. Initialization is the process of allocating I/O and memory space as well as negotiating the IRQ line for the device to use. We have provided you with PCI code in `kern/pci.c`. To perform PCI initialization during boot, the PCI code walks the PCI bus looking for devices. When it finds a device, it reads its vendor ID and device ID and uses these two values as a key to search the `pci_attach_vendor` array. The array is composed of `struct pci_driver` entries like this: ``` struct pci_driver { uint32_t key1, key2; int (*attachfn) (struct pci_func *pcif); }; ``` If the discovered device's vendor ID and device ID match an entry in the array, the PCI code calls that entry's `attachfn` to perform device initialization. (Devices can also be identified by class, which is what the other driver table in `kern/pci.c` is for.) The attach function is passed a _PCI function_ to initialize. A PCI card can expose multiple functions, though the E1000 exposes only one. Here is how we represent a PCI function in JOS: ``` struct pci_func { struct pci_bus *bus; uint32_t dev; uint32_t func; uint32_t dev_id; uint32_t dev_class; uint32_t reg_base[6]; uint32_t reg_size[6]; uint8_t irq_line; }; ``` The above structure reflects some of the entries found in Table 4-1 of Section 4.1 of the developer manual. The last three entries of `struct pci_func` are of particular interest to us, as they record the negotiated memory, I/O, and interrupt resources for the device. The `reg_base` and `reg_size` arrays contain information for up to six Base Address Registers or BARs. 
`reg_base` stores the base memory addresses for memory-mapped I/O regions (or base I/O ports for I/O port resources), `reg_size` contains the size in bytes or number of I/O ports for the corresponding base values from `reg_base`, and `irq_line` contains the IRQ line assigned to the device for interrupts. The specific meanings of the E1000 BARs are given in the second half of table 4-2.

When the attach function of a device is called, the device has been found but not yet _enabled_. This means that the PCI code has not yet determined the resources allocated to the device, such as address space and an IRQ line, and, thus, the last three elements of the `struct pci_func` structure are not yet filled in. The attach function should call `pci_func_enable`, which will enable the device, negotiate these resources, and fill in the `struct pci_func`.

```
Exercise 3. Implement an attach function to initialize the E1000. Add an entry to the `pci_attach_vendor` array in `kern/pci.c` to trigger your function if a matching PCI device is found (be sure to put it before the `{0, 0, 0}` entry that marks the end of the table). You can find the vendor ID and device ID of the 82540EM that QEMU emulates in section 5.2. You should also see these listed when JOS scans the PCI bus while booting.

For now, just enable the E1000 device via `pci_func_enable`. We'll add more initialization throughout the lab.

We have provided the `kern/e1000.c` and `kern/e1000.h` files for you so that you do not need to mess with the build system. They are currently blank; you need to fill them in for this exercise. You may also need to include the `e1000.h` file in other places in the kernel.

When you boot your kernel, you should see it print that the PCI function of the E1000 card was enabled. Your code should now pass the `pci attach` test of make grade.
```

##### Memory-mapped I/O

Software communicates with the E1000 via _memory-mapped I/O_ (MMIO).
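A minimal sketch of the Exercise 3 attach function might look like the following. The constant names and `e1000_attach` are my own choices, not mandated by the lab, and the stub `struct pci_func`/`pci_func_enable` here merely stand in for the real JOS definitions in `kern/pci.h` so the sketch is self-contained:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the real JOS declarations in kern/pci.h. */
struct pci_func {
    uint32_t reg_base[6];
    uint32_t reg_size[6];
    uint8_t  irq_line;
    int      enabled;          /* not in JOS; only for this sketch */
};

static void pci_func_enable(struct pci_func *pcif) {
    /* The real version negotiates the BARs and IRQ line
     * and fills in reg_base/reg_size/irq_line. */
    pcif->enabled = 1;
}

/* Vendor/device IDs for the 82540EM that QEMU emulates (section 5.2). */
#define E1000_VENDOR_ID 0x8086
#define E1000_DEVICE_ID 0x100e

/* Attach function, registered in pci_attach_vendor ahead of {0, 0, 0}:
 *   { E1000_VENDOR_ID, E1000_DEVICE_ID, &e1000_attach },
 */
int e1000_attach(struct pci_func *pcif) {
    pci_func_enable(pcif);
    /* Later exercises add MMIO mapping and TX/RX initialization here. */
    return 0;
}
```

Later exercises grow this function; for now the only required work is the `pci_func_enable` call.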
You've seen this twice before in JOS: both the CGA console and the LAPIC are devices that you control and query by writing to and reading from "memory". But these reads and writes don't go to DRAM; they go directly to these devices.

`pci_func_enable` negotiates an MMIO region with the E1000 and stores its base and size in BAR 0 (that is, `reg_base[0]` and `reg_size[0]`). This is a range of _physical memory addresses_ assigned to the device, which means you'll have to do something to access it via virtual addresses. Since MMIO regions are assigned very high physical addresses (typically above 3GB), you can't use `KADDR` to access it because of JOS's 256MB limit. Thus, you'll have to create a new memory mapping. We'll use the area above MMIOBASE (your `mmio_map_region` from lab 4 will make sure we don't overwrite the mapping used by the LAPIC). Since PCI device initialization happens before JOS creates user environments, you can create the mapping in `kern_pgdir` and it will always be available.

```
Exercise 4. In your attach function, create a virtual memory mapping for the E1000's BAR 0 by calling `mmio_map_region` (which you wrote in lab 4 to support memory-mapping the LAPIC).

You'll want to record the location of this mapping in a variable so you can later access the registers you just mapped. Take a look at the `lapic` variable in `kern/lapic.c` for an example of one way to do this. If you do use a pointer to the device register mapping, be sure to declare it `volatile`; otherwise, the compiler is allowed to cache values and reorder accesses to this memory.

To test your mapping, try printing out the device status register (section 13.4.2). This is a 4 byte register that starts at byte 8 of the register space. You should get `0x80080783`, which indicates a full duplex link is up at 1000 MB/s, among other things.
```

Hint: You'll need a lot of constants, like the locations of registers and values of bit masks.
Trying to copy these out of the developer's manual is error-prone and mistakes can lead to painful debugging sessions. We recommend instead using QEMU's [`e1000_hw.h`][6] header as a guideline. We don't recommend copying it in verbatim, because it defines far more than you actually need and may not define things in the way you need, but it's a good starting point.

##### DMA

You could imagine transmitting and receiving packets by writing and reading from the E1000's registers, but this would be slow and would require the E1000 to buffer packet data internally. Instead, the E1000 uses _Direct Memory Access_ or DMA to read and write packet data directly from memory without involving the CPU. The driver is responsible for allocating memory for the transmit and receive queues, setting up DMA descriptors, and configuring the E1000 with the location of these queues, but everything after that is asynchronous. To transmit a packet, the driver copies it into the next DMA descriptor in the transmit queue and informs the E1000 that another packet is available; the E1000 will copy the data out of the descriptor when there is time to send the packet. Likewise, when the E1000 receives a packet, it copies it into the next DMA descriptor in the receive queue, which the driver can read from at its next opportunity.

The receive and transmit queues are very similar at a high level. Both consist of a sequence of _descriptors_. While the exact structure of these descriptors varies, each descriptor contains some flags and the physical address of a buffer containing packet data (either packet data for the card to send, or a buffer allocated by the OS for the card to write a received packet to).

The queues are implemented as circular arrays, meaning that when the card or the driver reaches the end of the array, it wraps back around to the beginning. Both have a _head pointer_ and a _tail pointer_ and the contents of the queue are the descriptors between these two pointers.
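The wrap-around arithmetic for such a circular array is worth getting right once. A small sketch (the ring size and helper names are my own; in the lab the size is however many descriptors you allocate):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical ring size; for transmit it must be a multiple of 8. */
#define NTXDESC 64

/* Advance a head or tail index one slot, wrapping at the end
 * of the circular descriptor array. */
static uint32_t ring_next(uint32_t idx) {
    return (idx + 1) % NTXDESC;
}

/* Number of descriptors currently "in" the queue, i.e. between
 * the head and tail indices, accounting for wrap-around. */
static uint32_t ring_used(uint32_t head, uint32_t tail) {
    return (tail + NTXDESC - head) % NTXDESC;
}
```

The same arithmetic applies to the receive queue, just with its own size.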
The hardware always consumes descriptors from the head and moves the head pointer, while the driver always adds descriptors to the tail and moves the tail pointer. The descriptors in the transmit queue represent packets waiting to be sent (hence, in the steady state, the transmit queue is empty). For the receive queue, the descriptors in the queue are free descriptors that the card can receive packets into (hence, in the steady state, the receive queue consists of all available receive descriptors). Correctly updating the tail register without confusing the E1000 is tricky; be careful!

The pointers to these arrays as well as the addresses of the packet buffers in the descriptors must all be _physical addresses_ because hardware performs DMA directly to and from physical RAM without going through the MMU.

#### Transmitting Packets

The transmit and receive functions of the E1000 are basically independent of each other, so we can work on one at a time. We'll attack transmitting packets first simply because we can't test receive without transmitting an "I'm here!" packet first.

First, you'll have to initialize the card to transmit, following the steps described in section 14.5 (you don't have to worry about the subsections). The first step of transmit initialization is setting up the transmit queue. The precise structure of the queue is described in section 3.4 and the structure of the descriptors is described in section 3.3.3. We won't be using the TCP offload features of the E1000, so you can focus on the "legacy transmit descriptor format." You should read those sections now and familiarize yourself with these structures.

##### C Structures

You'll find it convenient to use C `struct`s to describe the E1000's structures. As you've seen with structures like the `struct Trapframe`, C `struct`s let you precisely lay out data in memory. C can insert padding between fields, but the E1000's structures are laid out such that this shouldn't be a problem.
If you do encounter field alignment problems, look into GCC's "packed" attribute.

As an example, consider the legacy transmit descriptor given in table 3-8 of the manual and reproduced here:

```
  63            48 47   40 39   32 31   24 23   16 15             0
  +---------------------------------------------------------------+
  |                         Buffer address                        |
  +---------------|-------|-------|-------|-------|---------------+
  |    Special    |  CSS  | Status|  Cmd  |  CSO  |    Length     |
  +---------------|-------|-------|-------|-------|---------------+
```

The first byte of the structure starts at the top right, so to convert this into a C struct, read from right to left, top to bottom. If you squint at it right, you'll see that all of the fields even fit nicely into standard-sized types:

```
struct tx_desc
{
	uint64_t addr;
	uint16_t length;
	uint8_t cso;
	uint8_t cmd;
	uint8_t status;
	uint8_t css;
	uint16_t special;
};
```

Your driver will have to reserve memory for the transmit descriptor array and the packet buffers pointed to by the transmit descriptors. There are several ways to do this, ranging from dynamically allocating pages to simply declaring them in global variables. Whatever you choose, keep in mind that the E1000 accesses physical memory directly, which means any buffer it accesses must be contiguous in physical memory.

There are also multiple ways to handle the packet buffers. The simplest, which we recommend starting with, is to reserve space for a packet buffer for each descriptor during driver initialization and simply copy packet data into and out of these pre-allocated buffers. The maximum size of an Ethernet packet is 1518 bytes, which bounds how big these buffers need to be. More sophisticated drivers could dynamically allocate packet buffers (e.g., to reduce memory overhead when network usage is low) or even pass buffers directly provided by user space (a technique known as "zero copy"), but it's good to start simple.

```
Exercise 5.
Perform the initialization steps described in section 14.5 (but not its subsections). Use section 13 as a reference for the registers the initialization process refers to and sections 3.3.3 and 3.4 for reference to the transmit descriptors and transmit descriptor array.

Be mindful of the alignment requirements on the transmit descriptor array and the restrictions on length of this array. Since TDLEN must be 128-byte aligned and each transmit descriptor is 16 bytes, your transmit descriptor array will need some multiple of 8 transmit descriptors. However, don't use more than 64 descriptors or our tests won't be able to test transmit ring overflow.

For the TCTL.COLD, you can assume full-duplex operation. For TIPG, refer to the default values described in table 13-77 of section 13.4.34 for the IEEE 802.3 standard IPG (don't use the values in the table in section 14.5).
```

Try running make E1000_DEBUG=TXERR,TX qemu. If you are using the course qemu, you should see an "e1000: tx disabled" message when you set the TDT register (since this happens before you set TCTL.EN) and no further "e1000" messages.

Now that transmit is initialized, you'll have to write the code to transmit a packet and make it accessible to user space via a system call. To transmit a packet, you have to add it to the tail of the transmit queue, which means copying the packet data into the next packet buffer and then updating the TDT (transmit descriptor tail) register to inform the card that there's another packet in the transmit queue. (Note that TDT is an _index_ into the transmit descriptor array, not a byte offset; the documentation isn't very clear about this.)

However, the transmit queue is only so big. What happens if the card has fallen behind transmitting packets and the transmit queue is full? In order to detect this condition, you'll need some feedback from the E1000.
Unfortunately, you can't just use the TDH (transmit descriptor head) register; the documentation explicitly states that reading this register from software is unreliable. However, if you set the RS bit in the command field of a transmit descriptor, then, when the card has transmitted the packet in that descriptor, the card will set the DD bit in the status field of the descriptor. If a descriptor's DD bit is set, you know it's safe to recycle that descriptor and use it to transmit another packet.

What if the user calls your transmit system call, but the DD bit of the next descriptor isn't set, indicating that the transmit queue is full? You'll have to decide what to do in this situation. You could simply drop the packet. Network protocols are resilient to this, but if you drop a large burst of packets, the protocol may not recover. You could instead tell the user environment that it has to retry, much like you did for `sys_ipc_try_send`. This has the advantage of pushing back on the environment generating the data.

```
Exercise 6. Write a function to transmit a packet by checking that the next descriptor is free, copying the packet data into the next descriptor, and updating TDT. Make sure you handle the transmit queue being full.
```

Now would be a good time to test your packet transmit code. Try transmitting just a few packets by directly calling your transmit function from the kernel. You don't have to create packets that conform to any particular network protocol in order to test this. Run make E1000_DEBUG=TXERR,TX qemu to run your test. You should see something like

```
e1000: index 0: 0x271f00 : 9000002a 0
...
```

as you transmit packets. Each line gives the index in the transmit array, the buffer address of that transmit descriptor, the cmd/CSO/length fields, and the special/CSS/status fields.
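Exercise 6's full-queue check can be prototyped outside the kernel. The sketch below simulates the ring in ordinary memory, with a plain variable standing in for the TDT register. The bit values match the 8-bit cmd/status fields of the legacy descriptor (section 3.3.3); the `tx_ring`/`tdt` globals, the init-time trick of pre-setting DD to mark descriptors free, and the `-1` "retry" return are my own choices, not mandated by the lab:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NTXDESC 8
#define TX_BUFSIZE 1518            /* max standard Ethernet frame */
#define E1000_TXD_CMD_RS  0x08     /* Report Status */
#define E1000_TXD_CMD_EOP 0x01     /* End Of Packet */
#define E1000_TXD_STAT_DD 0x01     /* Descriptor Done */

struct tx_desc {
    uint64_t addr;
    uint16_t length;
    uint8_t  cso;
    uint8_t  cmd;
    uint8_t  status;
    uint8_t  css;
    uint16_t special;
};

static struct tx_desc tx_ring[NTXDESC];
static char tx_bufs[NTXDESC][TX_BUFSIZE];
static uint32_t tdt;               /* stand-in for the real TDT register */

void e1000_tx_init(void) {
    /* Mark every descriptor free, as if already transmitted. */
    for (int i = 0; i < NTXDESC; i++)
        tx_ring[i].status = E1000_TXD_STAT_DD;
}

/* Returns 0 on success, -1 if the ring is full (caller should retry). */
int e1000_transmit(const void *pkt, uint16_t len) {
    struct tx_desc *d = &tx_ring[tdt];
    if (!(d->status & E1000_TXD_STAT_DD))
        return -1;                 /* hardware hasn't finished this slot */
    memcpy(tx_bufs[tdt], pkt, len);
    d->length = len;
    d->status = 0;
    d->cmd = E1000_TXD_CMD_RS | E1000_TXD_CMD_EOP;
    tdt = (tdt + 1) % NTXDESC;     /* the real driver writes TDT here */
    return 0;
}
```

In the real driver, `tx_ring[tdt].addr` must also hold the physical address of the packet buffer, and the TDT write goes through the volatile MMIO mapping rather than a global.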
If QEMU doesn't print the values you expected from your transmit descriptor, check that you're filling in the right descriptor and that you configured TDBAL and TDBAH correctly. If you get "e1000: TDH wraparound @0, TDT x, TDLEN y" messages, that means the E1000 ran all the way through the transmit queue without stopping (if QEMU didn't check this, it would enter an infinite loop), which probably means you aren't manipulating TDT correctly. If you get lots of "e1000: tx disabled" messages, then you didn't set the transmit control register right.

Once QEMU runs, you can then run tcpdump -XXnr qemu.pcap to see the packet data that you transmitted. If you saw the expected "e1000: index" messages from QEMU, but your packet capture is empty, double check that you filled in every necessary field and bit in your transmit descriptors (the E1000 probably went through your transmit descriptors, but didn't think it had to send anything).

```
Exercise 7. Add a system call that lets you transmit packets from user space. The exact interface is up to you. Don't forget to check any pointers passed to the kernel from user space.
```

#### Transmitting Packets: Network Server

Now that you have a system call interface to the transmit side of your device driver, it's time to send packets. The output helper environment's goal is to do the following in a loop: accept `NSREQ_OUTPUT` IPC messages from the core network server and send the packets accompanying these IPC messages to the network device driver using the system call you added above. The `NSREQ_OUTPUT` IPCs are sent by the `low_level_output` function in `net/lwip/jos/jif/jif.c`, which glues the lwIP stack to JOS's network system. Each IPC will include a page consisting of a `union Nsipc` with the packet in its `struct jif_pkt pkt` field (see `inc/ns.h`). `struct jif_pkt` looks like

```
struct jif_pkt {
    int jp_len;
    char jp_data[0];
};
```

`jp_len` represents the length of the packet.
All subsequent bytes on the IPC page are dedicated to the packet contents. Using a zero-length array like `jp_data` at the end of a struct is a common C trick (some would say abomination) for representing buffers without pre-determined lengths. Since C doesn't do array bounds checking, as long as you ensure there's enough unused memory following the struct, you can use `jp_data` as if it were an array of any size.

Be aware of the interaction between the device driver, the output environment and the core network server when there is no more space in the device driver's transmit queue. The core network server sends packets to the output environment using IPC. If the output environment is suspended due to a send packet system call because the driver has no more buffer space for new packets, the core network server will block waiting for the output server to accept the IPC call.

```
Exercise 8. Implement `net/output.c`.
```

You can use `net/testoutput.c` to test your output code without involving the whole network server. Try running make E1000_DEBUG=TXERR,TX run-net_testoutput. You should see something like

```
Transmitting packet 0
e1000: index 0: 0x271f00 : 9000009 0
Transmitting packet 1
e1000: index 1: 0x2724ee : 9000009 0
...
```

and tcpdump -XXnr qemu.pcap should output

```
reading from file qemu.pcap, link-type EN10MB (Ethernet)
-5:00:00.600186 [|ether]
	0x0000:  5061 636b 6574 2030 30        Packet.00
-5:00:00.610080 [|ether]
	0x0000:  5061 636b 6574 2030 31        Packet.01
...
```

To test with a larger packet count, try make E1000_DEBUG=TXERR,TX NET_CFLAGS=-DTESTOUTPUT_COUNT=100 run-net_testoutput. If this overflows your transmit ring, double check that you're handling the DD status bit correctly and that you've told the hardware to set the DD status bit (using the RS command bit). Your code should pass the `testoutput` tests of make grade.

```
Question 1. How did you structure your transmit implementation? In particular, what do you do if the transmit ring is full?
```

### Part B: Receiving packets and the web server

#### Receiving Packets

Just like you did for transmitting packets, you'll have to configure the E1000 to receive packets and provide a receive descriptor queue and receive descriptors. Section 3.2 describes how packet reception works, including the receive queue structure and receive descriptors, and the initialization process is detailed in section 14.4.

```
Exercise 9. Read section 3.2. You can ignore anything about interrupts and checksum offloading (you can return to these sections if you decide to use these features later), and you don't have to be concerned with the details of thresholds and how the card's internal caches work.
```

The receive queue is very similar to the transmit queue, except that it consists of empty packet buffers waiting to be filled with incoming packets. Hence, when the network is idle, the transmit queue is empty (because all packets have been sent), but the receive queue is full (of empty packet buffers).

When the E1000 receives a packet, it first checks if it matches the card's configured filters (for example, to see if the packet is addressed to this E1000's MAC address) and ignores the packet if it doesn't match any filters. Otherwise, the E1000 tries to retrieve the next receive descriptor from the head of the receive queue. If the head (RDH) has caught up with the tail (RDT), then the receive queue is out of free descriptors, so the card drops the packet. If there is a free receive descriptor, it copies the packet data into the buffer pointed to by the descriptor, sets the descriptor's DD (Descriptor Done) and EOP (End of Packet) status bits, and increments the RDH.

If the E1000 receives a packet that is larger than the packet buffer in one receive descriptor, it will retrieve as many descriptors as necessary from the receive queue to store the entire contents of the packet.
To indicate that this has happened, it will set the DD status bit on all of these descriptors, but only set the EOP status bit on the last of these descriptors. You can either deal with this possibility in your driver, or simply configure the card to not accept "long packets" (also known as _jumbo frames_) and make sure your receive buffers are large enough to store the largest possible standard Ethernet packet (1518 bytes).

```
Exercise 10. Set up the receive queue and configure the E1000 by following the process in section 14.4. You don't have to support "long packets" or multicast. For now, don't configure the card to use interrupts; you can change that later if you decide to use receive interrupts. Also, configure the E1000 to strip the Ethernet CRC, since the grade script expects it to be stripped.

By default, the card will filter out _all_ packets. You have to configure the Receive Address Registers (RAL and RAH) with the card's own MAC address in order to accept packets addressed to that card. You can simply hard-code QEMU's default MAC address of 52:54:00:12:34:56 (we already hard-code this in lwIP, so doing it here too doesn't make things any worse). Be very careful with the byte order; MAC addresses are written from lowest-order byte to highest-order byte, so 52:54:00:12 are the low-order 32 bits of the MAC address and 34:56 are the high-order 16 bits.

The E1000 only supports a specific set of receive buffer sizes (given in the description of RCTL.BSIZE in 13.4.22). If you make your receive packet buffers large enough and disable long packets, you won't have to worry about packets spanning multiple receive buffers. Also, remember that, just like for transmit, the receive queue and the packet buffers must be contiguous in physical memory.

You should use at least 128 receive descriptors.
```

You can do a basic test of receive functionality now, even without writing the code to receive packets.
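The byte-order warning in Exercise 10 is easy to check in plain C. This sketch packs QEMU's default MAC into RAL/RAH values; the "Address Valid" bit position follows the manual's RAH description, and the helper name is my own:

```c
#include <assert.h>
#include <stdint.h>

#define E1000_RAH_AV (1u << 31)   /* "Address Valid" bit in RAH */

/* Pack a 6-byte MAC address (lowest-order byte first) into the
 * RAL/RAH register values the E1000 expects. */
static void mac_to_ra(const uint8_t mac[6], uint32_t *ral, uint32_t *rah) {
    *ral = (uint32_t)mac[0]       | (uint32_t)mac[1] << 8 |
           (uint32_t)mac[2] << 16 | (uint32_t)mac[3] << 24;
    *rah = ((uint32_t)mac[4] | (uint32_t)mac[5] << 8) | E1000_RAH_AV;
}
```

For 52:54:00:12:34:56 this yields RAL = 0x12005452 and RAH = 0x5634 with bit 31 set — note how the first MAC byte lands in the *low* byte of RAL.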
Run make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput. `testinput` will transmit an ARP (Address Resolution Protocol) announcement packet (using your packet transmitting system call), which QEMU will automatically reply to. Even though your driver can't receive this reply yet, you should see a "e1000: unicast match[0]: 52:54:00:12:34:56" message, indicating that a packet was received by the E1000 and matched the configured receive filter. If you see a "e1000: unicast mismatch: 52:54:00:12:34:56" message instead, the E1000 filtered out the packet, which means you probably didn't configure RAL and RAH correctly. Make sure you got the byte ordering right and didn't forget to set the "Address Valid" bit in RAH. If you don't get any "e1000" messages, you probably didn't enable receive correctly.

Now you're ready to implement receiving packets. To receive a packet, your driver will have to keep track of which descriptor it expects to hold the next received packet (hint: depending on your design, there's probably already a register in the E1000 keeping track of this). Similar to transmit, the documentation states that the RDH register cannot be reliably read from software, so in order to determine if a packet has been delivered to this descriptor's packet buffer, you'll have to read the DD status bit in the descriptor. If the DD bit is set, you can copy the packet data out of that descriptor's packet buffer and then tell the card that the descriptor is free by updating the queue's tail index, RDT.

If the DD bit isn't set, then no packet has been received. This is the receive-side equivalent of when the transmit queue was full, and there are several things you can do in this situation. You can simply return a "try again" error and require the caller to retry.
While this approach works well for full transmit queues because that's a transient condition, it is less justifiable for empty receive queues because the receive queue may remain empty for long stretches of time. A second approach is to suspend the calling environment until there are packets in the receive queue to process. This tactic is very similar to `sys_ipc_recv`. Just like in the IPC case, since we have only one kernel stack per CPU, as soon as we leave the kernel the state on the stack will be lost. We need to set a flag indicating that an environment has been suspended by receive queue underflow and record the system call arguments. The drawback of this approach is complexity: the E1000 must be instructed to generate receive interrupts and the driver must handle them in order to resume the environment blocked waiting for a packet.

```
Exercise 11. Write a function to receive a packet from the E1000 and expose it to user space by adding a system call. Make sure you handle the receive queue being empty.
```

```
Challenge! If the transmit queue is full or the receive queue is empty, the environment and your driver may spend a significant amount of CPU cycles polling, waiting for a descriptor. The E1000 can generate an interrupt once it is finished with a transmit or receive descriptor, avoiding the need for polling. Modify your driver so that processing both the transmit and receive queues is interrupt driven instead of polling.

Note that, once an interrupt is asserted, it will remain asserted until the driver clears the interrupt. In your interrupt handler make sure to clear the interrupt as soon as you handle it. If you don't, after returning from your interrupt handler, the CPU will jump back into it again. In addition to clearing the interrupts on the E1000 card, interrupts also need to be cleared on the LAPIC. Use `lapic_eoi` to do so.
```

#### Receiving Packets: Network Server

In the network server input environment, you will need to use your new receive system call to receive packets and pass them to the core network server environment using the `NSREQ_INPUT` IPC message. These IPC input messages should have a page attached with a `union Nsipc` with its `struct jif_pkt pkt` field filled in with the packet received from the network.

```
Exercise 12. Implement `net/input.c`.
```

Run `testinput` again with make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-net_testinput. You should see

```
Sending ARP announcement...
Waiting for packets...
e1000: index 0: 0x26dea0 : 900002a 0
e1000: unicast match[0]: 52:54:00:12:34:56
input: 0000 5254 0012 3456 5255 0a00 0202 0806 0001
input: 0010 0800 0604 0002 5255 0a00 0202 0a00 0202
input: 0020 5254 0012 3456 0a00 020f 0000 0000 0000
input: 0030 0000 0000 0000 0000 0000 0000 0000 0000
```

The lines beginning with "input:" are a hexdump of QEMU's ARP reply. Your code should pass the `testinput` tests of make grade. Note that there's no way to test packet receiving without sending at least one ARP packet to inform QEMU of JOS' IP address, so bugs in your transmitting code can cause this test to fail.

To more thoroughly test your networking code, we have provided a daemon called `echosrv` that sets up an echo server running on port 7 that will echo back anything sent over a TCP connection. Use make E1000_DEBUG=TX,TXERR,RX,RXERR,RXFILTER run-echosrv to start the echo server in one terminal and make nc-7 in another to connect to it. Every line you type should be echoed back by the server. Every time the emulated E1000 receives a packet, QEMU should print something like the following to the console:

```
e1000: unicast match[0]: 52:54:00:12:34:56
e1000: index 2: 0x26ea7c : 9000036 0
e1000: index 3: 0x26f06a : 9000039 0
e1000: unicast match[0]: 52:54:00:12:34:56
```

At this point, you should also be able to pass the `echosrv` test.

```
Question 2.
How did you structure your receive implementation? In particular, what do you do if the receive queue is empty and a user environment requests the next incoming packet?
```

```
Challenge! Read about the EEPROM in the developer's manual and write the code to load the E1000's MAC address out of the EEPROM. Currently, QEMU's default MAC address is hard-coded into both your receive initialization and lwIP. Fix your initialization to use the MAC address you read from the EEPROM, add a system call to pass the MAC address to lwIP, and modify lwIP to use the MAC address read from the card. Test your change by configuring QEMU to use a different MAC address.
```

```
Challenge! Modify your E1000 driver to be "zero copy." Currently, packet data has to be copied from user-space buffers to transmit packet buffers and from receive packet buffers back to user-space buffers. A zero copy driver avoids this by having user space and the E1000 share packet buffer memory directly. There are many different approaches to this, including mapping the kernel-allocated structures into user space or passing user-provided buffers directly to the E1000. Regardless of your approach, be careful how you reuse buffers so that you don't introduce races between user-space code and the E1000.
```

```
Challenge! Take the zero copy concept all the way into lwIP.

A typical packet is composed of many headers. The user sends data to be transmitted to lwIP in one buffer. The TCP layer wants to add a TCP header, the IP layer an IP header and the MAC layer an Ethernet header. Even though there are many parts to a packet, right now the parts need to be joined together so that the device driver can send the final packet.

The E1000's transmit descriptor design is well-suited to collecting pieces of a packet scattered throughout memory, like the packet fragments created inside lwIP.
If you enqueue multiple transmit descriptors, but only set the EOP command bit on the last one, then the E1000 will internally concatenate the packet buffers from these descriptors and only transmit the concatenated buffer when it reaches the EOP-marked descriptor. As a result, the individual packet pieces never need to be joined together in memory.

Change your driver to be able to send packets composed of many buffers without copying and modify lwIP to avoid merging the packet pieces as it does right now.
```

```
Challenge! Augment your system call interface to service more than one user environment. This will prove useful if there are multiple network stacks (and multiple network servers) each with their own IP address running in user mode. The receive system call will need to decide to which environment it needs to forward each incoming packet.

Note that the current interface cannot tell the difference between two packets and if multiple environments call the packet receive system call, each respective environment will get a subset of the incoming packets and that subset may include packets that are not destined to the calling environment.

Sections 2.2 and 3 in [this][7] Exokernel paper have an in-depth explanation of the problem and a method of addressing it in a kernel like JOS. Use the paper to help you get a grip on the problem, chances are you do not need a solution as complex as presented in the paper.
```

#### The Web Server

A web server in its simplest form sends the contents of a file to the requesting client. We have provided skeleton code for a very simple web server in `user/httpd.c`. The skeleton code deals with incoming connections and parses the headers.

```
Exercise 13. The web server is missing the code that deals with sending the contents of a file back to the client. Finish the web server by implementing `send_file` and `send_data`.
```

Once you've finished the web server, start the webserver (make run-httpd-nox) and point your favorite browser at http:// _host_ : _port_ /index.html, where _host_ is the name of the computer running QEMU (if you're running QEMU on athena, use `hostname.mit.edu`, where hostname is the output of the `hostname` command on athena, or `localhost` if you're running the web browser and QEMU on the same computer) and _port_ is the port number reported for the web server by make which-ports. You should see a web page served by the HTTP server running inside JOS.

At this point, you should score 105/105 on make grade.

```
Challenge! Add a simple chat server to JOS, where multiple people can connect to the server and anything that any user types is transmitted to the other users. To do this, you will have to find a way to communicate with multiple sockets at once _and_ to send and receive on the same socket at the same time. There are multiple ways to go about this. lwIP provides a MSG_DONTWAIT flag for recv (see `lwip_recvfrom` in `net/lwip/api/sockets.c`), so you could constantly loop through all open sockets, polling them for data. Note that, while `recv` flags are supported by the network server IPC, they aren't accessible via the regular `read` function, so you'll need a way to pass the flags. A more efficient approach is to start one or more environments for each connection and to use IPC to coordinate them. Conveniently, the lwIP socket ID found in the struct Fd for a socket is global (not per-environment), so, for example, the child of a `fork` inherits its parent's sockets. Or, an environment can even send on another environment's socket simply by constructing an Fd containing the right socket ID.
```

```
Question 3. What does the web page served by JOS's web server say?

4. How long approximately did it take you to do this lab?
```

**This completes the lab.** As usual, don't forget to run make grade and to write up your answers and a description of your challenge exercise solution. Before handing in, use git status and git diff to examine your changes and don't forget to git add answers-lab6.txt. When you're ready, commit your changes with git commit -am 'my solutions to lab 6', then make handin and follow the directions.

--------------------------------------------------------------------------------

via: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/

Author: [csail.mit][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://pdos.csail.mit.edu
[b]: https://github.com/lujun9972
[1]: http://wiki.qemu.org/download/qemu-doc.html#Using-the-user-mode-network-stack
[2]: http://www.wireshark.org/
[3]: http://www.sics.se/~adam/lwip/
[4]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/ns.png
[5]: https://pdos.csail.mit.edu/6.828/2018/readings/hardware/8254x_GBe_SDM.pdf
[6]: https://pdos.csail.mit.edu/6.828/2018/labs/lab6/e1000_hw.h
[7]: http://pdos.csail.mit.edu/papers/exo:tocs.pdf
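The descriptor-chaining behavior described in the first challenge above (the NIC concatenates buffers from consecutive descriptors and only transmits when it reaches one with the EOP bit set) can be sketched as a toy model. This is plain Python illustrating the semantics only, not JOS driver code; the `EOP` constant and the `transmit` function are invented for the sketch:

```python
# Toy model of E1000-style multi-descriptor transmit: the NIC
# concatenates the buffers of consecutive descriptors and only
# "sends" a packet when it reaches a descriptor whose EOP
# (End Of Packet) command bit is set. Illustration only.

EOP = 0x1  # hypothetical command-bit value for this sketch

def transmit(descriptors):
    """Walk the descriptor ring, yielding one packet per EOP-marked
    descriptor; earlier non-EOP buffers are concatenated in order."""
    packets, pending = [], b""
    for buf, cmd in descriptors:
        pending += buf
        if cmd & EOP:
            packets.append(pending)
            pending = b""
    return packets

if __name__ == "__main__":
    ring = [
        (b"eth-hdr|", 0),      # header fragment, EOP clear
        (b"ip-hdr|", 0),       # another fragment, EOP clear
        (b"payload", EOP),     # EOP set: the packet goes out here
        (b"second-pkt", EOP),  # a one-buffer packet
    ]
    print(transmit(ring))  # -> [b'eth-hdr|ip-hdr|payload', b'second-pkt']
```

The point of the exercise is that lwIP's per-buffer pbufs can map one-to-one onto descriptors, so the pieces are never copied into one contiguous buffer in memory.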
96.883041
1,117
0.768496
eng_Latn
0.999362
b9b98dfecf78d8359b8c1ca43783f152a5045233
2,567
md
Markdown
src/cs/2022-01/04/06.md
Adventech/sabbath-school-lessons
baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/cs/2022-01/04/06.md
Adventech/sabbath-school-lessons
baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/cs/2022-01/04/06.md
Adventech/sabbath-school-lessons
baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
---
title: 'A Brother as an Example'
date: 20/01/2022
---

> <p></p>
> 1Therefore we also, since we are surrounded by so great a cloud of witnesses, let us lay aside every weight and the sin that so easily entangles us, and let us run with endurance the race that is set before us, 2fixing our eyes on Jesus, the author and finisher of faith, who for the joy that was set before Him endured the cross, despising the shame, and has sat down at the right hand of the throne of God. 3Consider Him who endured such hostility from sinners against Himself, so that you do not grow weary, fainting in your souls. 4You have not yet resisted to the point of bloodshed in your struggle against sin. (Heb 12:1–4)

**Personal Study**

Another reason Jesus took on our human nature and lived among us was that He wanted to be an example for us, a model of how to live rightly before God.

`According to the apostle in the opening verses, how should we live as Christians?`

In this text, Jesus is the culmination of a long list of figures whom the apostle offers as examples of faith. The passage calls Jesus "the author and finisher of faith." The Greek word archégos ("author") can also be translated "forerunner." Jesus is a forerunner in the sense that He goes before believers, having entered "for us" (Heb 6:20). The word "finisher," in turn, says that Jesus exhibited faith in God in its purest possible form. The passage teaches that Jesus was the first to complete the race of life successfully. In His life, Jesus perfectly fulfilled what a life of faith means.

In Heb 2:13 we read: "'I will put My trust in Him.' And again: 'Here am I and the children whom God has given Me.'" Jesus says that He puts His trust in God. This verse is an allusion to Isa 8:17, 18. Isaiah said these words in the face of the threatened invasion by northern Israel and Syria (Isa 7:1, 2). His faith stood in contrast to the lack of faith of King Ahaz (2 Kings 16:5–18). God urged Ahaz to trust Him and to ask for a sign that He would deliver him (Isa 7:1–11). God had already promised Ahaz that, as a son of David, He would protect him as His own son. He even graciously offered to confirm His promise to Ahaz with a sign.

Ahaz, however, refused to ask for a sign and instead sent messengers to Tiglath-Pileser, the king of Assyria, saying: "I am your servant and your son" (2 Kings 16:7). How sad! Ahaz preferred to be a "son" of Tiglath-Pileser rather than a son of the living God. Jesus, by contrast, put His trust in God and in His promise to put His enemies under His feet (Heb 1:13; Heb 10:12, 13). God has given us the same promise, and we must believe Him just as Jesus did (Rom 16:20).

**Application**

`How can you learn to trust God in your everyday decisions? What important decision do you need to make, and how can you make sure it is grounded in trust in God?`
102.68
653
0.77016
ces_Latn
1.00001
b9bb7afcc58113f17a32cd475a0c4d77a98abd71
4,479
md
Markdown
docs/visual-basic/language-reference/modifiers/private-protected.md
JosephHerreraDev/docs.es-es
dd545ff8c6bc59a76ff761c43b14c7f05127c91a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/language-reference/modifiers/private-protected.md
JosephHerreraDev/docs.es-es
dd545ff8c6bc59a76ff761c43b14c7f05127c91a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/language-reference/modifiers/private-protected.md
JosephHerreraDev/docs.es-es
dd545ff8c6bc59a76ff761c43b14c7f05127c91a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Private Protected
ms.date: 05/10/2018
helpviewer_keywords:
- Private Protected keyword [Visual Basic]
- Private Protected keyword [Visual Basic], syntax
ms.openlocfilehash: b7d9f81e41950b92c787e2e50fb94fe3d7c07559
ms.sourcegitcommit: f8c270376ed905f6a8896ce0fe25b4f4b38ff498
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 06/04/2020
ms.locfileid: "84362234"
---
# <a name="private-protected-visual-basic"></a>Private Protected (Visual Basic)

The `Private Protected` keyword combination is a member access modifier. A `Private Protected` member is accessible by all members in its containing class, as well as by types derived from the containing class, but only if they are located in the containing assembly.

You can specify `Private Protected` only on members of classes; you cannot apply `Private Protected` to members of a structure, because structures cannot be inherited.

The `Private Protected` access modifier is supported in Visual Basic 15.5 and later. To use it, you can add the following element to your Visual Basic project file (\*.vbproj). As long as Visual Basic 15.5 or later is installed on your system, it lets you take advantage of all the language features supported by the latest version of the Visual Basic compiler:

```xml
<PropertyGroup>
   <LangVersion>latest</LangVersion>
</PropertyGroup>
```

For more information, see [setting the Visual Basic language version](../configure-language-version.md).

> [!NOTE]
> In Visual Studio, selecting F1 help on `private protected` provides help for either [private](private.md) or [protected](protected.md). The IDE picks the single token under the cursor rather than the compound word.

## <a name="rules"></a>Rules

- **Declaration context.** You can use `Private Protected` only at the class level. This means the declaration context for a `Private Protected` element must be a class, and cannot be a source file, namespace, interface, module, structure, or procedure.

## <a name="behavior"></a>Behavior

- **Access level.** All code in a class can access its elements. Code in any class that derives from a base class and is contained in the same assembly can access all the `Private Protected` elements of the base class. However, code in any class that derives from a base class and is contained in a different assembly cannot access the base class's `Private Protected` elements.

- **Access modifiers.** The keywords that specify access level are called *access modifiers*. For a comparison of the access modifiers, see [Access levels in Visual Basic](../../programming-guide/language-features/declared-elements/access-levels.md).

The `Private Protected` modifier can be used in these contexts:

- [Class Statement](../statements/class-statement.md) of a nested class
- [Const Statement](../statements/const-statement.md)
- [Declare Statement](../statements/declare-statement.md)
- [Delegate Statement](../statements/delegate-statement.md) of a delegate nested in a class
- [Dim Statement](../statements/dim-statement.md)
- [Enum Statement](../statements/enum-statement.md) of an enumeration nested in a class
- [Event Statement](../statements/event-statement.md)
- [Function Statement](../statements/function-statement.md)
- [Interface Statement](../statements/interface-statement.md) of an interface nested in a class
- [Property Statement](../statements/property-statement.md)
- [Structure Statement](../statements/structure-statement.md) of a structure nested in a class
- [Sub Statement](../statements/sub-statement.md)

## <a name="see-also"></a>See also

- [Public](public.md)
- [Protected](protected.md)
- [Friend](friend.md)
- [Private](private.md)
- [Protected Friend](./protected-friend.md)
- [Access levels in Visual Basic](../../programming-guide/language-features/declared-elements/access-levels.md)
- [Procedures](../../programming-guide/language-features/procedures/index.md)
- [Structures](../../programming-guide/language-features/data-types/structures.md)
- [Objects and Classes](../../programming-guide/language-features/objects-and-classes/index.md)
55.9875
454
0.776066
spa_Latn
0.927577
b9bc2abd1d461e838a96ebec64ca9dc1c77e9e96
3,867
md
Markdown
_posts/2020-09-24-on-mental-health.md
sarahseewhy/sarahseewhy.github.io
bd3b294901973e0ecc430492e2f8b82cd8a11b46
[ "MIT" ]
null
null
null
_posts/2020-09-24-on-mental-health.md
sarahseewhy/sarahseewhy.github.io
bd3b294901973e0ecc430492e2f8b82cd8a11b46
[ "MIT" ]
null
null
null
_posts/2020-09-24-on-mental-health.md
sarahseewhy/sarahseewhy.github.io
bd3b294901973e0ecc430492e2f8b82cd8a11b46
[ "MIT" ]
null
null
null
---
layout: post
title: On mental health and the past year
date: 2020-09-24
---

_It's been quite a while since I've written and I want to talk about why. Heads-up, this is a pretty hard-hitting post with some heavy topics._

**Trigger warning**: mental health, depression, PTSD, and sexual trauma.

I stopped writing in 2019 and 2020 because I was wrestling with my mental health.

Depression is different for each person and can take different forms in the same person. I've had depression creep up on me like a fog: slowly, quietly, until I can't see a step ahead of myself. I've also had depression strike like a lightning bolt. I go to sleep with the ability to feel and wake up dead inside, the power to emote gone.

It's hard to convey how disturbing that absence of feeling can be. It's both physically painful, an internal grating sensation, and like a deafening, terrifying silence or emptiness.

I get this type of depression because I experience panic attacks. I experience panic attacks because I have PTSD. I have PTSD because I am a survivor of sexual trauma.

PTSD stands for [Post-Traumatic Stress Disorder](https://www.mind.org.uk/information-support/types-of-mental-health-problems/post-traumatic-stress-disorder-ptsd). It develops after experiencing a traumatic event. Psychologists don't know why some people walk away from trauma without a mental scratch and others are haunted for the rest of their lives. I'm in the latter group.

Over the years I've learned what contributes to an attack. The attacks are infrequent, two in the last ten years, but can be devastating. I've gotten much better at bouncing back. Frustratingly, it's still challenging to pinpoint the complex interplay of variables and I can be blindsided by interactions.

In May 2019 I was blindsided by an interaction that triggered the PTSD. For the next few weeks I lived in near-constant fear, reliving my trauma almost every day. This culminated in a severe panic attack which spiralled into a depressive episode. I lost the will to live overnight and spent the next few months piecing myself back together.

It's the reason I stopped writing. I tried, but writing without acknowledging what had happened felt hollow and disingenuous. For the longest time I had neither the courage nor the ability to write about what was going on –– it's not unusual for survivors of sexual trauma to stay silent because of shame or the fear of being judged. Yet for me, healing has only come through surfacing that trauma, being honest and being vulnerable.

-----------------------------------

In tech blogs we're not supposed to write about personal stuff, not mental health in software development, and definitely not sexual trauma or PTSD. But we're human: we all have mental health as well as physical health –– and many of us who have experienced trauma carry that trauma with us to work. This is not a choice, we cannot leave these parts of ourselves behind.

Spoken word poet Andrea Gibson in "Blue Blanket"<sup>*</sup>, a poem about rape, writes:

> _do you know they found land mines in broken women’s souls? black holes in the parts of their hearts that once sang symphonies of creation_

I sometimes think trauma is like that: land mines in our souls. One of the reasons I read and talk about effective communication and conflict transformation is for self-preservation: I have land mines in my soul. I also believe the people I work with have land mines of their own. I want to learn how to better listen and communicate, and encourage others to do so as well. That way when we discover these land mines we have the tools to defuse them –– together.

<br>

<sub>* The full text of the poem can be found [here](https://ohandreagibson.tumblr.com/blueblanket) and [this is Gibson's performance](https://www.youtube.com/watch?v=2cEc3aQOP-o).</sub>
55.242857
247
0.773985
eng_Latn
0.999713
b9bc33cdec17fa65b2f1c8339e45d1b0f6708e08
3,293
md
Markdown
powerbi-docs/guided-learning/includes/3-5-create-map-visualizations.md
Heimig/powerbi-docs.de-de
73b42400695fb379324256e53d19e6824a785415
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/guided-learning/includes/3-5-create-map-visualizations.md
Heimig/powerbi-docs.de-de
73b42400695fb379324256e53d19e6824a785415
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/guided-learning/includes/3-5-create-map-visualizations.md
Heimig/powerbi-docs.de-de
73b42400695fb379324256e53d19e6824a785415
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
ms.openlocfilehash: 033436e7078723508d6b9481807ace424c3f109f
ms.sourcegitcommit: 60dad5aa0d85db790553e537bf8ac34ee3289ba3
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/29/2019
ms.locfileid: "61396884"
---
Power BI offers two different kinds of map visualizations: a bubble map, which places bubbles over geographic points, and a shape map, which shows the outline of the regions being visualized.

![](media/3-5-create-map-visualizations/3-5_1.png)

> [!NOTE]
> When working with countries or regions, use the three-letter abbreviation to ensure that geocoding works correctly in map visualizations. Do *not* use the two-letter abbreviations, since some countries or regions may otherwise not be recognized correctly.
> If you only have two-letter abbreviations, see [this external blog post](https://blog.ailon.org/how-to-display-2-letter-country-data-on-a-power-bi-map-85fc738497d6#.yudauacxp) on how to map your two-letter country/region abbreviations to three-letter country/region abbreviations.
>

## <a name="create-bubble-maps"></a>Creating bubble maps

To create a bubble map, select the **Map** option in the **Visualization** pane. To use a map visual, you must add a value to the *Location* bucket in the options under **Visualizations**.

![](media/3-5-create-map-visualizations/3-5_2.png)

Power BI is flexible about the location values you can enter: from fairly general information, such as the name of a city or an airport code, down to very specific latitude and longitude data. Add a field to the **Size** bucket to change the size of the bubble at each map location accordingly.

![](media/3-5-create-map-visualizations/3-5_3.png)

## <a name="create-shape-maps"></a>Creating shape maps

To create a shape map, select the **Shape map** option in the Visualization pane. As with bubble maps, you must add a value to the Location bucket to use the visual. Adding a value to the Size bucket changes the intensity of the fill color accordingly.

![](media/3-5-create-map-visualizations/3-5_4.png)

A warning icon in the top-left corner of a visual indicates that the map needs more location data to plot the values accurately. This is especially common when the location data is ambiguous, for example an area name like *Washington*, which can refer to a US state or a district. One way to work around this problem is to give the column a more specific title, such as *State*. Another way is to manually reset the data category by selecting **Data Category** on the Modeling tab. This option lets you assign a category to the data, such as "State" or "City".

![](media/3-5-create-map-visualizations/3-5_5.png)
89
808
0.808381
deu_Latn
0.997973
b9bc3f73e19a0ea09c850eca873da777120eb7f5
2,903
md
Markdown
AppFog/deploy-static-website.md
hanserikjensen/PublicKB
56094f621201870dd0a0bcc0efa5c6db6ec952b7
[ "Apache-2.0" ]
null
null
null
AppFog/deploy-static-website.md
hanserikjensen/PublicKB
56094f621201870dd0a0bcc0efa5c6db6ec952b7
[ "Apache-2.0" ]
null
null
null
AppFog/deploy-static-website.md
hanserikjensen/PublicKB
56094f621201870dd0a0bcc0efa5c6db6ec952b7
[ "Apache-2.0" ]
1
2021-10-31T15:23:50.000Z
2021-10-31T15:23:50.000Z
{{{
"title": "Deploying a Static Website",
"date": "05-08-2015",
"author": "Chris Sterling",
"attachments": [],
"related-products" : [],
"contentIsHTML": false
}}}

### Audience

Application developers

### Overview

AppFog includes the [Cloud Foundry Static File buildpack](https://github.com/cloudfoundry/staticfile-buildpack) by default. This enables the deployment of static websites to AppFog. The Static File buildpack serves files using [Nginx](http://nginx.com/). Given that Nginx only needs about 20 MB of memory to run, you can run with much less memory than the default of 1 GB that AppFog allocates to an application. Because your memory usage is lower, your cost to run a static website is lower as well. We recommend reserving 64 MB of memory for most static websites.

### Deploying a Static Website

To make AppFog automatically use the Static File buildpack for your application when you deploy, create an empty file named `Staticfile` in your top-level website directory. Now you can create an index.html file that will become your entry point into the website.

```
<html>
<head><title>My Page</title></head>
<body><h1>Hello World!</h1></body>
</html>
```

Once you have your index.html you can deploy the website to AppFog by running the following command from a terminal:

```
$ cf push mywebsitename -m 64M
```

The `-m 64M` tells AppFog that it should only allocate 64 MB of memory to run this website. Once it is deployed AppFog will return a URL where you can access the website in your browser:

```
Creating app mywebsitename in org DEMO / space Dev as mydemoaccount...
OK
Creating route mywebsitename.useast.appfog.ctl.io...
OK
Binding mywebsitename.useast.appfog.ctl.io to mywebsitename...
OK
Uploading mywebsitename...
Uploading app files from: .../mysite
Uploading 333K, 11 files
Done uploading
OK
Starting app mywebsitename in org DEMO / space Dev as mydemoaccount...
-----> Downloaded app package (696K)
-----> Using root folder
-----> Copying project files into public/
-----> Setting up nginx
-----> Uploading droplet (4.4M)
1 of 1 instances running
App started
OK
App mywebsitename was started using this command `sh boot.sh`
Showing health and status for app mywebsitename in org DEMO / space Dev as mydemoaccount...
OK
requested state: started
instances: 1/1
usage: 64M x 1 instances
urls: mywebsitename.useast.appfog.ctl.io
last uploaded: Fri May 8 21:09:05 UTC 2015
stack: lucid64

     state     since                    cpu    memory        disk        details
#0   running   2015-05-08 02:09:19 PM   0.0%   4.9M of 64M   9.5M of 1G
```

Use the URL from the `urls:` value and place it into the location bar of a web browser. In this case it is `http://mywebsitename.useast.appfog.ctl.io`:

![AppFog Website in Browser](../images/appfog-website-browser.png)
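Before pushing, it can help to sanity-check the static files locally. The sketch below is generic Python, unrelated to AppFog or the cf CLI; it serves a throwaway directory with the standard-library HTTP server and fetches index.html once. The `serve_and_fetch` helper is invented for this illustration:

```python
# Serve a scratch directory containing index.html with Python's
# built-in HTTP server, then fetch the page once to verify it.
# Generic local smoke test -- nothing here is AppFog-specific.
import http.server
import tempfile
import threading
import urllib.request
from functools import partial
from pathlib import Path

def serve_and_fetch(html: str) -> str:
    """Write html to index.html in a temp dir, serve it on an
    ephemeral port, and return the body fetched over HTTP."""
    with tempfile.TemporaryDirectory() as docroot:
        (Path(docroot) / "index.html").write_text(html)
        handler = partial(http.server.SimpleHTTPRequestHandler, directory=docroot)
        server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        try:
            port = server.server_address[1]
            url = f"http://127.0.0.1:{port}/index.html"
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()
        finally:
            server.shutdown()

if __name__ == "__main__":
    print(serve_and_fetch("<h1>Hello World!</h1>"))
```

If the page renders as expected locally, a failure after `cf push` points at the buildpack or the push configuration rather than at the files themselves.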
33.367816
622
0.727179
eng_Latn
0.966066
b9bc55f14ada3f1dc882b5f70640140c8aa842b5
16,496
md
Markdown
docs/framework/wpf/advanced/drawing-formatted-text.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/advanced/drawing-formatted-text.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/advanced/drawing-formatted-text.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Drawing Formatted Text
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- text [WPF]
- typography [WPF], drawing formatted text
- formatted text [WPF]
- drawing [WPF], formatted text
ms.assetid: b1d851c1-331c-4814-9964-6fe769db6f1f
ms.openlocfilehash: 4cbf2a9ec9b742af3895f7c30b1a4dbbdbf5a635
ms.sourcegitcommit: 3c1c3ba79895335ff3737934e39372555ca7d6d0
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/06/2018
ms.locfileid: "43804937"
---
# <a name="drawing-formatted-text"></a>Drawing Formatted Text

This topic provides an overview of the features of the <xref:System.Windows.Media.FormattedText> object. This object provides low-level control for drawing text in [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] applications.

## <a name="technology-overview"></a>Technology overview

The <xref:System.Windows.Media.FormattedText> object allows you to draw multi-line text in which each character in the text can be individually formatted. The following example shows text with several formats applied:

![Text displayed using the FormattedText object](../../../../docs/framework/wpf/advanced/media/formattedtext01.jpg "FormattedText01")

Text displayed using the FormattedText method

> [!NOTE]
> For developers migrating from the [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] API, the table in the [Win32 migration](#win32_migration) section lists the [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] DrawText flags and their approximate equivalents in [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)].

### <a name="reasons-for-using-formatted-text"></a>Reasons for using formatted text

[!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] includes multiple controls for drawing text to the screen. Each control is targeted at a particular scenario and has its own list of features and limitations. In general, the <xref:System.Windows.Controls.TextBlock> element should be used when only limited text support is required, such as a brief sentence in a [!INCLUDE[TLA#tla_ui](../../../../includes/tlasharptla-ui-md.md)]. <xref:System.Windows.Controls.Label> can be used when only minimal text support is required. For more information, see [Documents in WPF](../../../../docs/framework/wpf/advanced/documents-in-wpf.md).

The <xref:System.Windows.Media.FormattedText> object provides richer text formatting features than the [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] text controls, and can be useful in cases where you want to use text as a decorative element. For more information, see the section [Converting formatted text to a geometry](#converting_formatted_text) below.

In addition, the <xref:System.Windows.Media.FormattedText> object is useful for creating text-oriented <xref:System.Windows.Media.DrawingVisual>-derived objects. <xref:System.Windows.Media.DrawingVisual> is a lightweight drawing class used to render shapes, images, or text. For more information, see [Hit Test Using DrawingVisuals Sample](https://go.microsoft.com/fwlink/?LinkID=159994).

## <a name="using-the-formattedtext-object"></a>Using the FormattedText object

To create formatted text, call the <xref:System.Windows.Media.FormattedText.%23ctor%2A> constructor to create a <xref:System.Windows.Media.FormattedText> object. Once you have created the initial formatted text string, you can apply a range of styles.

Use the <xref:System.Windows.Media.FormattedText.MaxTextWidth%2A> property to constrain the text to a specific width. The text automatically wraps so as not to exceed the specified width. Use the <xref:System.Windows.Media.FormattedText.MaxTextHeight%2A> property to constrain the text to a specific height. The text displays an ellipsis, "...", when it exceeds the specified height.

![Text displayed using the FormattedText object](../../../../docs/framework/wpf/advanced/media/formattedtext02.png "FormattedText02")

Text displayed with line wrapping and an ellipsis

You can apply multiple styles to one or more characters. For example, you could call both the <xref:System.Windows.Media.FormattedText.SetFontSize%2A> and <xref:System.Windows.Media.FormattedText.SetForegroundBrush%2A> methods to change the formatting of the first five characters of the text.

The following code example creates a <xref:System.Windows.Media.FormattedText> object and then applies several styles to the text.

[!code-csharp[FormattedTextSnippets#FormattedTextSnippets1](../../../../samples/snippets/csharp/VS_Snippets_Wpf/FormattedTextSnippets/CSharp/Window1.xaml.cs#formattedtextsnippets1)]
[!code-vb[FormattedTextSnippets#FormattedTextSnippets1](../../../../samples/snippets/visualbasic/VS_Snippets_Wpf/FormattedTextSnippets/visualbasic/window1.xaml.vb#formattedtextsnippets1)]

### <a name="font-size-unit-of-measure"></a>Font size unit of measure

As with other text objects in [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] applications, the <xref:System.Windows.Media.FormattedText> object uses device-independent pixels as the unit of measure. However, most [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] applications use points as the unit of measure. If you want to use points as the unit of measure for displayed text in [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] applications, you need to convert [!INCLUDE[TLA#tla_dipixel#plural](../../../../includes/tlasharptla-dipixelsharpplural-md.md)] to points. The following code demonstrates this conversion:

[!code-csharp[FormattedTextSnippets#FormattedTextSnippets2](../../../../samples/snippets/csharp/VS_Snippets_Wpf/FormattedTextSnippets/CSharp/Window1.xaml.cs#formattedtextsnippets2)]
[!code-vb[FormattedTextSnippets#FormattedTextSnippets2](../../../../samples/snippets/visualbasic/VS_Snippets_Wpf/FormattedTextSnippets/visualbasic/window1.xaml.vb#formattedtextsnippets2)]

<a name="converting_formatted_text"></a>
### <a name="converting-formatted-text-to-a-geometry"></a>Converting formatted text to a geometry

You can convert formatted text into <xref:System.Windows.Media.Geometry> objects, allowing you to create other types of visually interesting text. For example, you could create a <xref:System.Windows.Media.Geometry> object based on the outline of a text string.

![Text outline using a linear gradient brush](../../../../docs/framework/wpf/advanced/media/outlinedtext02.jpg "OutlinedText02")

Text outline using a linear gradient brush

The following examples illustrate several ways of creating interesting visual effects by modifying the stroke, fill, and highlight of converted text.

![Text with different colors for fill and stroke](../../../../docs/framework/wpf/advanced/media/outlinedtext03.jpg "OutlinedText03")

Example of setting different colors for the stroke and fill

![Text with an image brush applied to the stroke](../../../../docs/framework/wpf/advanced/media/outlinedtext04.jpg "OutlinedText04")

Example of an image brush applied to the stroke

![Text with an image brush applied to the stroke](../../../../docs/framework/wpf/advanced/media/outlinedtext05.jpg "OutlinedText05")

Example of an image brush applied to the stroke and highlight

When text is converted to a <xref:System.Windows.Media.Geometry> object, it is no longer a collection of characters; the characters in the text string cannot be changed. However, you can change the appearance of the converted text by modifying its stroke and fill properties. The stroke refers to the outline of the converted text; the fill refers to the area inside the outline. For more information, see [Create Outlined Text](../../../../docs/framework/wpf/advanced/how-to-create-outlined-text.md).

You can also convert formatted text to a <xref:System.Windows.Media.PathGeometry> object and use the object for highlighting the text. For example, you could apply an animation to the <xref:System.Windows.Media.PathGeometry> object so that the animation follows the outline of the formatted text.

The following example shows formatted text that has been converted to a <xref:System.Windows.Media.PathGeometry> object. An animated ellipse follows the path of the strokes of the rendered text.

![Sphere following the path geometry of text](../../../../docs/framework/wpf/advanced/media/textpathgeometry01.gif "TextPathGeometry01")

Sphere following the path geometry of text

For more information, see [How to: Create a PathGeometry Animation for Text](https://msdn.microsoft.com/library/29f8051e-798a-463f-a926-a099a99e9c67).

You can create other interesting uses for formatted text once it has been converted to a <xref:System.Windows.Media.PathGeometry> object. For example, you can display a clipped video inside it.

![Video displayed in the path geometry of text](../../../../docs/framework/wpf/advanced/media/videotextdemo01.png "VideoTextDemo01")

Video displayed in the path geometry of text

<a name="win32_migration"></a>
## <a name="win32-migration"></a>Win32 migration

The features of <xref:System.Windows.Media.FormattedText> for drawing text are similar to those of the [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] DrawText function. For developers migrating from the [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] API, the following table lists the [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] DrawText flags and their approximate equivalents in [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)].

|DrawText flag|WPF equivalent|Notes|
|-------------------|--------------------|-----------|
|DT_BOTTOM|<xref:System.Windows.Media.FormattedText.Height%2A>|Use the <xref:System.Windows.Media.FormattedText.Height%2A> property to compute an appropriate [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] DrawText 'y' position.|
|DT_CALCRECT|<xref:System.Windows.Media.FormattedText.Height%2A>, <xref:System.Windows.Media.FormattedText.Width%2A>|Use the <xref:System.Windows.Media.FormattedText.Height%2A> and <xref:System.Windows.Media.FormattedText.Width%2A> properties to calculate the output rectangle.|
|DT_CENTER|<xref:System.Windows.Media.FormattedText.TextAlignment%2A>|Use the <xref:System.Windows.Media.FormattedText.TextAlignment%2A> property with the value set to <xref:System.Windows.TextAlignment.Center>.|
|DT_EDITCONTROL|None|Not required. Space width and last-line rendering are the same as for the framework edit control.|
|DT_END_ELLIPSIS|<xref:System.Windows.Media.FormattedText.Trimming%2A>|Use the <xref:System.Windows.Media.FormattedText.Trimming%2A> property with the value <xref:System.Windows.TextTrimming.CharacterEllipsis>.<br /><br /> Use <xref:System.Windows.TextTrimming.WordEllipsis> to get [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] DT_END_ELLIPSIS with DT_WORD_ELIPSIS end ellipsis; in this case, character ellipsis occurs only on words that do not fit on a single line.|
|DT_EXPAND_TABS|None|Not required. Tabs are automatically expanded to stops every 4 ems, which is approximately the width of 8 language-independent characters.|
|DT_EXTERNALLEADING|None|Not required. External leading is always included in line spacing. Use the <xref:System.Windows.Media.FormattedText.LineHeight%2A> property to create custom line spacing.|
|DT_HIDEPREFIX|None|Not supported. Remove the '&' from the string before creating the <xref:System.Windows.Media.FormattedText> object.|
|DT_LEFT|<xref:System.Windows.Media.FormattedText.TextAlignment%2A>|This is the default text alignment. Use the <xref:System.Windows.Media.FormattedText.TextAlignment%2A> property with the value set to <xref:System.Windows.TextAlignment.Left>. (WPF only)|
|DT_MODIFYSTRING|None|Not supported.|
|DT_NOCLIP|<xref:System.Windows.Media.Visual.VisualClip%2A>|Clipping does not happen automatically. If you want to clip text, use the <xref:System.Windows.Media.Visual.VisualClip%2A> property.|
|DT_NOFULLWIDTHCHARBREAK|None|Not supported.|
|DT_NOPREFIX|None|Not required. The '&' character within strings is always treated as a regular character.|
|DT_PATHELLIPSIS|None|Use the <xref:System.Windows.Media.FormattedText.Trimming%2A> property with the value <xref:System.Windows.TextTrimming.WordEllipsis>.|
|DT_PREFIX|None|Not supported. If you want to use underscores with text, such as for an access key or a link, use the <xref:System.Windows.Media.FormattedText.SetTextDecorations%2A> method.|
|DT_PREFIXONLY|None|Not supported.|
|DT_RIGHT|<xref:System.Windows.Media.FormattedText.TextAlignment%2A>|Use the <xref:System.Windows.Media.FormattedText.TextAlignment%2A> property with the value set to <xref:System.Windows.TextAlignment.Right>.
(nur für WPF)| |DT_RTLREADING|<xref:System.Windows.Media.FormattedText.FlowDirection%2A>|Legen Sie die <xref:System.Windows.Media.FormattedText.FlowDirection%2A>-Eigenschaft auf <xref:System.Windows.FlowDirection.RightToLeft> fest.| |DT_SINGLELINE|Keiner|Nicht erforderlich <xref:System.Windows.Media.FormattedText> Objekte verhalten sich als einzelne Zeile-Steuerelement, es sei denn, entweder die <xref:System.Windows.Media.FormattedText.MaxTextWidth%2A> Eigenschaft festgelegt ist, oder den Text enthält, einen Wagenrücklauf/Zeilenvorschub (CR/LF).| |DT_TABSTOP|Keiner|Keine Unterstützung für benutzerdefinierte Tabstopppositionen.| |DT_TOP|<xref:System.Windows.Media.FormattedText.Height%2A>|Nicht erforderlich Obere Ausrichtung ist die Standardeinstellung. Andere vertikale Positionierungswerte können definiert werden, indem die <xref:System.Windows.Media.FormattedText.Height%2A> -Eigenschaft zur Berechnung einer entsprechenden [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] DrawText-y-Position.| |DT_VCENTER|<xref:System.Windows.Media.FormattedText.Height%2A>|Verwenden der <xref:System.Windows.Media.FormattedText.Height%2A> -Eigenschaft zur Berechnung einer entsprechenden [!INCLUDE[TLA#tla_win32](../../../../includes/tlasharptla-win32-md.md)] DrawText-y-Position.| |DT_WORDBREAK|Keiner|Nicht erforderlich Die wörtertrennung geschieht automatisch mit <xref:System.Windows.Media.FormattedText> Objekte. 
Sie kann nicht deaktiviert werden.| |DT_WORD_ELLIPSIS|<xref:System.Windows.Media.FormattedText.Trimming%2A>|Verwenden der <xref:System.Windows.Media.FormattedText.Trimming%2A> Eigenschaft mit dem Wert <xref:System.Windows.TextTrimming.WordEllipsis>.| ## <a name="see-also"></a>Siehe auch <xref:System.Windows.Media.FormattedText> [Dokumente in WPF](../../../../docs/framework/wpf/advanced/documents-in-wpf.md) [Typografie in WPF](../../../../docs/framework/wpf/advanced/typography-in-wpf.md) [Erstellen von Text mit Kontur](../../../../docs/framework/wpf/advanced/how-to-create-outlined-text.md) [Vorgehensweise: Erstellen einer PathGeometry-Animation für Text](https://msdn.microsoft.com/library/29f8051e-798a-463f-a926-a099a99e9c67)
124.969697
787
0.79195
deu_Latn
0.921379
b9bd72be3e59cc10dd4d0bca42cb9e2c431bcb29
388
md
Markdown
README.md
johnpatek/record-manager
af28877c6a7f47a4c33e2e9c0c7baa8fa0a4a944
[ "Zlib" ]
1
2020-11-09T05:46:18.000Z
2020-11-09T05:46:18.000Z
README.md
johnpatek/record-manager
af28877c6a7f47a4c33e2e9c0c7baa8fa0a4a944
[ "Zlib" ]
1
2020-12-19T04:09:05.000Z
2020-12-19T04:09:05.000Z
README.md
johnpatek/record-manager
af28877c6a7f47a4c33e2e9c0c7baa8fa0a4a944
[ "Zlib" ]
null
null
null
# Record Manager

[![Build Status](https://travis-ci.com/johnpatek/record-manager.svg?branch=master)](https://travis-ci.com/johnpatek/record-manager)

Build:

```shell
python3 cmake.py --external
python3 cmake.py
```

Start the server application:

```shell
mkdir /home/ubuntu/records
server 12345 /home/ubuntu/records
```

Start the client application:

```shell
client 127.0.0.1 12345
```
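The server above is started with a port and a directory in which it keeps records. As a rough illustration of that "directory of records" idea (the actual record-manager wire protocol and storage format are not shown in this README, so every name and format below is an assumption), a minimal directory-backed record store might look like:

```python
import os
import tempfile

class RecordStore:
    """Toy directory-backed record store; one file per record.

    This is only a sketch of the server's directory-of-records idea,
    not record-manager's actual storage format.
    """

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, key, data):
        # Store each record as <root>/<key>.
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(data)

    def get(self, key):
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

    def keys(self):
        # List stored record names in a stable order.
        return sorted(os.listdir(self.root))

# Example usage with a throwaway directory (analogous to /home/ubuntu/records):
store = RecordStore(tempfile.mkdtemp())
store.put("alice", b"record one")
store.put("bob", b"record two")
print(store.get("alice"))  # b'record one'
print(store.keys())        # ['alice', 'bob']
```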
18.47619
131
0.742268
kor_Hang
0.227455
b9bd7cd36ac93dfce5d067ad69542b50455c27cb
2,303
md
Markdown
node_modules/isomorphic-style-loader/CHANGELOG.md
jparkerpearson/partyPlayer
e2ba7dfb433b6aad6b13101f4d85c2645d00f2db
[ "MIT" ]
null
null
null
node_modules/isomorphic-style-loader/CHANGELOG.md
jparkerpearson/partyPlayer
e2ba7dfb433b6aad6b13101f4d85c2645d00f2db
[ "MIT" ]
null
null
null
node_modules/isomorphic-style-loader/CHANGELOG.md
jparkerpearson/partyPlayer
e2ba7dfb433b6aad6b13101f4d85c2645d00f2db
[ "MIT" ]
null
null
null
# Isomorphic Style Loader Change Log

All notable changes to this project will be documented in this file.

## [v2.0.0] - 2017-04-20

- Pull `PropTypes` from [prop-types](https://www.npmjs.com/package/prop-types) package for compatibility with **React 15.3.0** and higher ([#90](https://github.com/kriasoft/isomorphic-style-loader/pull/90))

## [v1.1.0] - 2016-10-30

- Disable source maps in IE9 and below, to prevent runtime errors in development mode ([#69](https://github.com/kriasoft/isomorphic-style-loader/pull/69))
- Improve source maps support by making sourceURL field unique ([#44](https://github.com/kriasoft/isomorphic-style-loader/pull/44), [#69](https://github.com/kriasoft/isomorphic-style-loader/pull/69))
- Add access to content to deduplicate server-side generated styles ([#56](https://github.com/kriasoft/isomorphic-style-loader/pull/56))
- Use HMR (Hot Module Replacement) if available, no debug option required ([#57](https://github.com/kriasoft/isomorphic-style-loader/pull/57))
- Use [hoist-non-react-statics](https://github.com/mridgway/hoist-non-react-statics) to copy non-react specific statics from a child to a parent component inside `withStyles` HOC (Higher-Order Component) ([#38](https://github.com/kriasoft/isomorphic-style-loader/pull/38))
- Add `CHANGELOG.md` file with the past and future (planned) changes to the project

## [v1.0.0] - 2016-04-15

- Improve compatibility with Hot Module Replacement (HMR) ([#33](https://github.com/kriasoft/isomorphic-style-loader/pull/33))
- Add support of ES2015+ decorator syntax, e.g. `@withStyles(s) class MyComponent extends Component { .. }` [PR#21](https://github.com/kriasoft/isomorphic-style-loader/pull/21) (BREAKING CHANGE)

## [v0.0.12] - 2016-03-04

- Fix style not getting removed for multiple instances ([#23](https://github.com/kriasoft/isomorphic-style-loader/pull/23))

[unreleased]: https://github.com/kriasoft/isomorphic-style-loader/compare/v2.0.0...HEAD
[v2.0.0]: https://github.com/kriasoft/isomorphic-style-loader/compare/v1.1.0...v2.0.0
[v1.1.0]: https://github.com/kriasoft/isomorphic-style-loader/compare/v1.0.0...v1.1.0
[v1.0.0]: https://github.com/kriasoft/isomorphic-style-loader/compare/v0.0.12...v1.0.0
[v0.0.12]: https://github.com/kriasoft/isomorphic-style-loader/compare/v0.0.11...v0.0.12
65.8
206
0.743812
eng_Latn
0.320159
b9bdb4a90fe4b8c0d9a0ef2232b0f50437903203
6,720
md
Markdown
articles/virtual-machines/linux/shared-images-portal.md
nsrau/azure-docs.it-it
9935e44b08ef06c214a4c7ef94d12e79349b56bc
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/linux/shared-images-portal.md
nsrau/azure-docs.it-it
9935e44b08ef06c214a4c7ef94d12e79349b56bc
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/linux/shared-images-portal.md
nsrau/azure-docs.it-it
9935e44b08ef06c214a4c7ef94d12e79349b56bc
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Create shared Azure Linux VM images using the portal
description: Learn how to use the Azure portal to create and share Linux virtual machine images.
author: cynthn
tags: azure-resource-manager
ms.service: virtual-machines-linux
ms.subservice: imaging
ms.topic: how-to
ms.workload: infrastructure
ms.date: 05/04/2020
ms.author: cynthn
ms.reviewer: akjosh
ms.openlocfilehash: 2661715164cc6aa5f5ff587f2ddf28c0918445d4
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 11/25/2020
ms.locfileid: "96015997"
---
# <a name="create-a-shared-image-gallery-using-the-portal"></a>Create a shared image gallery using the portal

A [shared image gallery](shared-image-galleries.md) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Custom images can be used to bootstrap deployment tasks like preloading applications, application configurations, and other OS configurations.

The Shared Image Gallery lets you share your custom VM images with other users in your organization, within or across regions, within an Azure AD tenant. Choose which images you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group shared images.

The gallery is a top-level resource that provides full Azure role-based access control (Azure RBAC). Images can be versioned, and you can choose to replicate each image version to a different set of Azure regions. The gallery only works with managed images.

The Shared Image Gallery feature has multiple resource types. We will be using or building the following in this article:

[!INCLUDE [virtual-machines-shared-image-gallery-resources](../../../../includes/virtual-machines-shared-image-gallery-resources.md)]

<br>

## <a name="before-you-begin"></a>Before you begin

To complete the example in this article, you must have an existing managed image of a generalized VM, or a snapshot of a specialized VM. You can follow [Tutorial: Create a custom image of an Azure VM with Azure PowerShell](tutorial-custom-images.md) to create a managed image, or [create a snapshot](../windows/snapshot-copy-managed-disk.md) for a specialized VM. For managed images and snapshots, the data disk size cannot exceed 1 TB.

When working through the tutorial, replace the resource group and VM names where needed.

[!INCLUDE [virtual-machines-common-shared-images-portal](../../../../includes/virtual-machines-common-shared-images-portal.md)]

## <a name="create-vms"></a>Create VMs

Now you can create one or more new VMs. This example creates a VM named *myVMfromImage*, in *myResourceGroup*, in the *East US* datacenter.

1. Go to your image definition. You can use the resource filter to show all available image definitions.
1. On the page for the image definition, select **Create VM** from the menu at the top of the page.
1. For **Resource group**, select **Create new** and type *myResourceGroup* for the name.
1. In **Virtual machine name**, type *myVM*.
1. For **Region**, select *East US*.
1. For **Availability options**, leave the default of *No infrastructure redundancy required*.
1. The value for **Image** is automatically filled with the `latest` image version if you started from the page for the image definition.
1. For **Size**, choose a VM size from the list of available sizes and then choose **Select**.
1. Under **Administrator account**, if the source VM was generalized, enter the **Username** and **SSH public key**. If the source VM was specialized, these options will be grayed out, because the information from the source VM is used.
1. If you want to allow remote access to the VM, under **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** from the drop-down. If you do not want to allow remote access to the VM, leave **None** selected for **Public inbound ports**.
1. When you are finished, select the **Review + create** button at the bottom of the page.
1. After the VM passes validation, select **Create** at the bottom of the page to start the deployment.

## <a name="clean-up-resources"></a>Clean up resources

When no longer needed, you can delete the resource group, the virtual machine, and all related resources. To do so, select the resource group for the virtual machine, select **Delete**, then confirm the name of the resource group to delete.

If you want to delete individual resources, you need to delete them in reverse order. For example, to delete an image definition, you must delete all of the image versions created from that image.

## <a name="next-steps"></a>Next steps

You can also create Shared Image Gallery resources using templates. Several Azure Quickstart Templates are available:

- [Create a Shared Image Gallery](https://azure.microsoft.com/resources/templates/101-sig-create/)
- [Create an Image Definition in a Shared Image Gallery](https://azure.microsoft.com/resources/templates/101-sig-image-definition-create/)
- [Create an Image Version in a Shared Image Gallery](https://azure.microsoft.com/resources/templates/101-sig-image-version-create/)
- [Create a VM from an Image Version](https://azure.microsoft.com/resources/templates/101-vm-from-sig/)

For more information about shared image galleries, see the [Overview](shared-image-galleries.md). If you run into problems, see [Troubleshooting shared image galleries](../troubleshooting-shared-images.md).
80.963855
586
0.802232
ita_Latn
0.999034
b9bdc4bda070687fcf543199131b561f231352af
1,664
md
Markdown
README.md
Luz/sega-rom-reader
dd7bf1036cdeff4d0b48fcd3716d53bb1d091141
[ "MIT" ]
2
2018-10-10T21:12:20.000Z
2021-07-31T07:05:26.000Z
README.md
Luz/sega-rom-reader
dd7bf1036cdeff4d0b48fcd3716d53bb1d091141
[ "MIT" ]
null
null
null
README.md
Luz/sega-rom-reader
dd7bf1036cdeff4d0b48fcd3716d53bb1d091141
[ "MIT" ]
null
null
null
# sega-rom-reader

This reader/writer for "sega mega drive" ROMs directly accesses the games by using the 5V-tolerant "Teensy 3.5".

## Pictures of the reader:

Reader: Teensy 3.5 and the socket for the cartridge
![reader](https://raw.githubusercontent.com/Luz/sega-rom-reader/master/pics/pic1.jpg)

Bottom side: made with wire wrap technology
![wire-wrap](https://raw.githubusercontent.com/Luz/sega-rom-reader/master/pics/pic2.jpg)

Jumper: will be used later for the creation of a sega-game-programmer
![jumper](https://raw.githubusercontent.com/Luz/sega-rom-reader/master/pics/pic3.jpg)

ROM: The first ROM that was read
![first-rom](https://raw.githubusercontent.com/Luz/sega-rom-reader/master/pics/pic4.jpg)

Wire-Wrap pins: Close shot of two wire wrap connections
![wire-wrap-pins](https://raw.githubusercontent.com/Luz/sega-rom-reader/master/pics/pic5.jpg)

Cartridge pcb: This socket was not able to reliably hold an EEPROM
![cartridge-bad-socket](https://raw.githubusercontent.com/Luz/sega-rom-reader/master/pics/pic6.jpg)

Cartridge with 27C322: PCB still needs to be tested, but some bytes were already written
![cartridge-with-27c322](https://raw.githubusercontent.com/Luz/sega-rom-reader/master/pics/pic7.jpg)

## TODO

* Since a ROM can be smaller than 4MB and the reader currently reads 4MB (21 address lines at 2 bytes), a 2MB ROM is read twice and appears duplicated in the output file. Detect such duplications and remove the redundant data.
* Add a button to easily read more than just one ROM
* Add a description of the programming ability
* Describe: BE/LE (endianness) and compare it to the "interleaved" *.smd files
* Maybe add a function to swap the bytes to the *.smd format
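The first and last TODO items can be handled in post-processing software. Below is a minimal sketch (assuming the raw dump of a smaller ROM is simply the same image repeated back-to-back through the 4MB address window, and that the endianness fix is a plain 16-bit byte swap; the interleaved *.smd layout itself is a separate transformation):

```python
def trim_mirrored_dump(dump: bytes) -> bytes:
    """Collapse a ROM dump that consists of the same image repeated.

    A 2MB ROM read through the full 21-address-line window appears
    twice in the 4MB dump; keep only one copy.
    """
    half = len(dump) // 2
    # Halve repeatedly while the two halves are identical copies.
    while half > 0 and len(dump) % 2 == 0 and dump[:half] == dump[half:]:
        dump = dump[:half]
        half = len(dump) // 2
    return dump

def swap_bytes(dump: bytes) -> bytes:
    """Swap the two bytes of every 16-bit word (BE <-> LE)."""
    swapped = bytearray(dump)
    swapped[0::2], swapped[1::2] = dump[1::2], dump[0::2]
    return bytes(swapped)

# Non-repeating stand-in data: 2 KB playing the role of a 2 MB ROM image.
image = b"".join(i.to_bytes(2, "big") for i in range(1024))
dump = image + image  # what a mirrored read of a half-size ROM looks like
print(trim_mirrored_dump(dump) == image)            # True
print(swap_bytes(b"\x01\x02\x03\x04"))              # b'\x02\x01\x04\x03'
```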
44.972973
189
0.772837
eng_Latn
0.928749
b9be2c70557d875204ac6fc9ae7e9ba05f9d8f6f
4,098
md
Markdown
_posts/2022-03-14-docker5.md
KHmyung/khmyung.github.io
1e449a11d1ad12e4743db1496c909201bf114871
[ "MIT" ]
null
null
null
_posts/2022-03-14-docker5.md
KHmyung/khmyung.github.io
1e449a11d1ad12e4743db1496c909201bf114871
[ "MIT" ]
null
null
null
_posts/2022-03-14-docker5.md
KHmyung/khmyung.github.io
1e449a11d1ad12e4743db1496c909201bf114871
[ "MIT" ]
null
null
null
---
published: true
title: "[Docker] 5. Types of Docker Commands"
layout: single
category: container
tag: [container, virtualization, docker]
toc: true
---

## Types of Docker commands

This post first covers the basic kinds of commands Docker supports, and then looks at the options for the core feature, running a container (`docker run`).

### Basic commands

The table below summarizes the kinds of commands Docker supports. The detailed options and fields supported by each command can be checked on the command line with the `--help` flag.

| Command | Description |
| :----------------------------------------------------------- | :--------------------------------------------: |
| docker **images** | List Docker images |
| docker **run** <*options*> <*image*> <*command*> <*arguments*> | Run a Docker image as a container |
| docker **ps** | List containers |
| docker **commit** <*container_name*> <*new_image_name*> | Create a new image from a container |
| docker **attach** <*container_name*> | Attach to a container running in detached mode |
| docker **exec** <*container_name*> <*command*> | Run a command in a running container |
| docker **logs** <*container_name*> | Print a container's logs |
| docker **start** <*container_name*> | Start a stopped container |
| docker **stop** <*container_name*> | Stop a container (gracefully) |
| docker **kill** <*container_name*> | Stop a container immediately (forcefully) |
| docker **rm** <*container_name*> | Delete a container that is not running |
| docker **rmi** <*image_name*> | Delete an image |
| docker **port** <*container_name*> | Show the port mappings of a running container |
| docker **network** *create* <*network_name*> | Create a virtual network for containers on the host |
| docker **network** *connect/disconnect* <*container_name*> <*network_name*> | Connect a container to, or disconnect it from, a network |
| docker **build** -f /*path*/*Dockerfile* -t <*result_name*> | Build an image from a Dockerfile and tag it |
| docker **inspect** <*options*> <*container_name*> | Print detailed container information in the specified format |
| docker **update** <*options*> <*container_name*> | Update the resource limits of a running container |

### Container run options

The basic format of `docker run` is as follows. The image name is required; include the registry and tag name as needed.

```bash
docker run (<option>) <image name> (<command>) (<arguments>)
```

The table below summarizes the options to consider when running a container.

| Option | Description |
| --------------- | ------------------------------------------------------------ |
| `-d` | Run the container in detached mode (returns the ID).<br />Running without `-d` attaches immediately, and the container stops when you leave the terminal.<br />`docker run -d python python` |
| `-it` | Keep the container running and pass terminal input through to it.<br />`docker run -it python python` |
| `--name` | Assign a name to the container.<br />`docker run -d --name my-python python python` |
| `-e` | Set environment variables in the container, overriding the `ENV` settings in the Dockerfile.<br />`docker run -e FOO=bar python env` |
| `-p` | Publish/bind ports between the host and the container.<br />`docker run -d -p 80:8080 python python -m http.server` |
| `-v` | Mount a shared volume between the host and the container.<br />`docker run -v /home/test:/etc python cat /etc/test.txt` |
| `-w` | Override the `WORKDIR` set in the Dockerfile.<br />`docker run -w /etc python pwd` |
| `--entrypoint` | Override the `ENTRYPOINT` set in the Dockerfile.<br />`docker run --entrypoint python python --version` |
| `--rm` | Run the container once and remove all related resources when it exits.<br />`docker run --rm -it wernight/funbox nyancat` |
| `--memory` | Limit the memory the container can use.<br />`docker run -d --memory="1g" nginx` |
| `--cpu-shares` | Set the container's CPU allocation weight (default 1024) |
| `--cpuset-cpus` | Pin the container to specific CPU numbers.<br />`docker run -d --cpuset_cpus="0,2,4,8" nginx` |
| `--cpus` | Specify the number of CPUs directly.<br />`docker run -d --cpus=4 nginx` |
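The options in the table above compose into a single command line. As a small illustration (plain argument assembly only, not a Docker API client; the helper name and its keyword arguments are this sketch's own), a function that builds a `docker run` invocation from the common options:

```python
def build_docker_run(image, command=None, *, detach=False, name=None,
                     env=None, ports=None, volumes=None, rm=False,
                     memory=None, cpus=None):
    """Assemble a `docker run` argument list from common options."""
    args = ["docker", "run"]
    if detach:
        args.append("-d")
    if rm:
        args.append("--rm")
    if name:
        args += ["--name", name]
    for key, value in (env or {}).items():
        args += ["-e", f"{key}={value}"]
    for host, container in (ports or {}).items():
        args += ["-p", f"{host}:{container}"]
    for host, container in (volumes or {}).items():
        args += ["-v", f"{host}:{container}"]
    if memory:
        args += ["--memory", memory]
    if cpus:
        args += ["--cpus", str(cpus)]
    args.append(image)          # image name comes after all options
    if command:
        args += command         # optional command and arguments
    return args

# Reproduces the port-publishing example from the table:
print(" ".join(build_docker_run(
    "python", ["python", "-m", "http.server"],
    detach=True, ports={80: 8080})))
# docker run -d -p 80:8080 python python -m http.server
```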
65.047619
145
0.467057
kor_Hang
0.999738
b9be2ce4fdafce1949dca9e0c8e1e15fe2946620
1,206
md
Markdown
getting-started/plugin-developer-guide/README.md
ballerina-platform/plugin-intellij
bbeacdde34c3d27a24b1e0a729afe8ebf6fd7ab5
[ "Apache-2.0" ]
7
2020-09-09T12:00:37.000Z
2021-12-17T13:39:40.000Z
getting-started/plugin-developer-guide/README.md
ballerina-platform/plugin-intellij
bbeacdde34c3d27a24b1e0a729afe8ebf6fd7ab5
[ "Apache-2.0" ]
34
2020-11-02T06:32:39.000Z
2022-03-11T01:39:06.000Z
getting-started/plugin-developer-guide/README.md
ballerina-platform/plugin-intellij
bbeacdde34c3d27a24b1e0a729afe8ebf6fd7ab5
[ "Apache-2.0" ]
3
2020-09-09T12:00:43.000Z
2021-09-07T10:05:28.000Z
# Plugin Developer Guide ## Testing/Debugging the plugin using IntelliJ IDEA 1. Go to **File -> Open** and open the cloned repository using IntelliJ IDEA. ![alt text](images/Figure-1.png) 2. **Import Project from Gradle** settings window will be shown. Select the **Gradle Home** path and select **OK**. ![alt text](images/Figure-2.png) 3. From the **Gradle projects** tool window, run `runIde` task. This will build the plugin and a new IDEA instance will be started with the plugin installed. If the **Gradle projects** window is not visible, you can use **View -> Tool Windows -> Gradle** to go to the Gradle projects tool window. ![alt text](images/Figure-3.png) 4. In addition to the above method, you can also add a **Gradle configuration** to **Run** or **Debug** the plugin. * Go to **Run -> Edit Configurations**. * Add a new Gradle Configuration. ![alt text](images/Figure-4.png) * Select the **plugin-intellij** project as the **Gradle project**. ![alt text](images/Figure-5.png) * Add `runIde` to the **Tasks**. ![alt text](images/Figure-6.png) * Now you can **Run** or **Debug** the plugin using the created Gradle configuration very easily. ![alt text](images/Figure-7.png)
31.736842
157
0.708126
eng_Latn
0.913237
b9be4d0f5a0d5446abfff2309ed64d4d971e213c
1,782
md
Markdown
docs/framework/unmanaged-api/hosting/iclrtaskmanager-getcurrenttasktype-method.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/hosting/iclrtaskmanager-getcurrenttasktype-method.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/hosting/iclrtaskmanager-getcurrenttasktype-method.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: ICLRTaskManager::GetCurrentTaskType Method
ms.date: 03/30/2017
api_name:
- ICLRTaskManager.GetCurrentTaskType
api_location:
- mscoree.dll
api_type:
- COM
f1_keywords:
- ICLRTaskManager::GetCurrentTaskType
helpviewer_keywords:
- GetCurrentTaskType method [.NET Framework hosting]
- ICLRTaskManager::GetCurrentTaskType method [.NET Framework hosting]
ms.assetid: 6b0d9259-dbe2-45bb-b34d-990f60c73424
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 51c103fb38dd97ec076096037932925e31280f02
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 05/04/2018
ms.locfileid: "33437715"
---
# <a name="iclrtaskmanagergetcurrenttasktype-method"></a>ICLRTaskManager::GetCurrentTaskType Method

Gets the type of the task that is currently executing.

## <a name="syntax"></a>Syntax

```
HRESULT GetCurrentTaskType(
    [out] ETaskType *pTaskType
);
```

#### <a name="parameters"></a>Parameters

`pTaskType`
[out] A pointer to a value of the [ETaskType](../../../../docs/framework/unmanaged-api/hosting/etasktype-enumeration.md) enumeration that indicates the type of the task that is currently executing.

## <a name="requirements"></a>Requirements

**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).

**Header:** MSCorEE.h

**Library:** Included as a resource in MSCorEE.dll

**.NET Framework Versions:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]

## <a name="see-also"></a>See also

[ICLRTaskManager Interface](../../../../docs/framework/unmanaged-api/hosting/iclrtaskmanager-interface.md)
33.622642
198
0.733446
deu_Latn
0.402674
b9be83012af8a0bc8eaba7713cfe9a8383c9fd61
745
md
Markdown
api/interfaces/scriptablelinesegmentcontext.md
DevDooly/ChartJsDoc
9ad6a9e01577958970a1ac4f062a981cd212ffea
[ "MIT" ]
null
null
null
api/interfaces/scriptablelinesegmentcontext.md
DevDooly/ChartJsDoc
9ad6a9e01577958970a1ac4f062a981cd212ffea
[ "MIT" ]
null
null
null
api/interfaces/scriptablelinesegmentcontext.md
DevDooly/ChartJsDoc
9ad6a9e01577958970a1ac4f062a981cd212ffea
[ "MIT" ]
null
null
null
--- title: "Interface: ScriptableLineSegmentContext" --- # Interface: ScriptableLineSegmentContext ## Properties ### p0 • **p0**: [*PointElement*](../README.md#pointelement)<[*PointProps*](pointprops.md), [*PointOptions*](pointoptions.md)\> Defined in: [index.esm.d.ts:30](https://github.com/chartjs/Chart.js/blob/b319f2cf/types/index.esm.d.ts#L30) ___ ### p1 • **p1**: [*PointElement*](../README.md#pointelement)<[*PointProps*](pointprops.md), [*PointOptions*](pointoptions.md)\> Defined in: [index.esm.d.ts:31](https://github.com/chartjs/Chart.js/blob/b319f2cf/types/index.esm.d.ts#L31) ___ ### type • **type**: *segment* Defined in: [index.esm.d.ts:29](https://github.com/chartjs/Chart.js/blob/b319f2cf/types/index.esm.d.ts#L29)
24.833333
120
0.699329
yue_Hant
0.722475
b9bfc99246d01842a4c633aa86301ff1df64f6a1
11,318
md
Markdown
workshops/adwc4dev-obe/Lab 2.md
connor-q/learning-library
b2d53a24bc02c5dc3dc1b1a2e4f325faa691e89c
[ "UPL-1.0" ]
1
2019-05-13T10:41:01.000Z
2019-05-13T10:41:01.000Z
workshops/adwc4dev-obe/Lab 2.md
fharris/learning-library
5ed099de1bbd8db46ebbb1f81a16938b866a75cd
[ "UPL-1.0" ]
null
null
null
workshops/adwc4dev-obe/Lab 2.md
fharris/learning-library
5ed099de1bbd8db46ebbb1f81a16938b866a75cd
[ "UPL-1.0" ]
2
2019-05-14T12:26:31.000Z
2019-05-15T12:24:55.000Z
<table class="tbl-heading"><tr><td class="td-logo">![](./images/obe_tag.png) Last Updated February, 2019 </td> <td class="td-banner"> # Lab 2: Review Query Performance </td></tr><table> ## Introduction In this lab, you continue with the persona of Vijay. Vijay is hopeful ADWC will save his team a huge amount of time and support new projects with aggressive timelines. However he needs to test the assertion that ADWC can perform against large volumes of data. To do that he has exported some data from his on-premise database [using data pump](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/load-data.html#GUID-30DB1EEA-DB45-49EA-9E97-DF49A9968E24), and has uploaded this to object storage and imported it into his new ADWC instance. Autonomous Data Warehouse Cloud provides three database services that you can choose when connecting to your database. These are named as HIGH, MEDIUM, and LOW services and provide different levels of performance and concurrency. The HIGH database service provides the maximum amount of resources for a query, this also means the number of concurrent queries you can run in this service will not be as much as the other services. The MEDIUM database service provides multiple compute and IO resources for a query. This service also provides more concurrency compared to the HIGH database service. The LOW database service provides the least amount of resources for a query, this also means the number of concurrent queries you can run in this service will be higher than the other services. As a user you need to pick the database service based on your performance and concurrency requirements. The differences between these service levels vary depending on the workload and particular query, but you might expect to see a query on the 'LOW' service run twice as fast on the 'HIGH' service. In this lab we will use the 'HIGH' service. Vijay has extracted the following data from their source operational system. 
The first is order history going back five years. The data scientist and marketing occasionally want to study medium to long term trends, and would use this data.

- SSB.DWDATE - 2556 rows
- SSB.SUPPLIER - 2,000,000 rows
- SSB.PART - 2,000,000 rows
- SSB.CUSTOMER - 30,000,000 rows
- SSB.LINEORDER - 6,000,000,000 rows

However, they need near instant access to the current month's data.

- ADMIN.DWDATE - 2556 rows
- ADMIN.SUPPLIER ~ 2,000,000 rows
- ADMIN.PART - 1,400,000 rows
- ADMIN.CUSTOMER - 6,200,000 rows
- ADMIN.LINEORDER - 100,000,000 rows
- ADMIN.CREDIT_SCORING_100K - 100,000 rows

To log issues and view the Lab Guide source, go to the [github oracle](https://github.com/oracle/learning-library/tree/master/workshops/adwc4dev) repository.

## Objectives

- Test query access to current data to confirm the feasibility of letting end users slice and dice the data for ad-hoc analysis.
- Test query access to a much larger historical data set to confirm the feasibility of analyzing historical data sets.

## Required Artifacts

- The following lab requires an Oracle Public Cloud account. You may use your own cloud account, a cloud account that you obtained through a trial, or a training account whose details were given to you by an Oracle instructor.
- Oracle SQL Developer (see Lab 100 for more specifics on the version of SQL Developer and how to install and configure it).

# Performance Tests

## Run Queries Against Current Orders

As noted above, Vijay wants to ensure end users using a variety of tools can run queries against 100M current orders (with 1.4M parts, 6M customers, and 2M suppliers) that take no more than a few seconds. He will also test this with no tuning whatsoever, and will query the data set in different ways to confirm performance holds regardless of how the data is accessed. Note your results (row counts) may vary.

### **STEP 1: Connect to Autonomous Data Warehouse with SQL Developer**

- Open the ADWC-Trial Admin connection you created at the end of Lab 100.
![](./images/IL-2/002.png) - Expand the tables ![](./images/IL-2/003.png) ### **STEP 2: Run queries** - Enter the following. This first flushes the result cache, and then runs a typical aggregate query. ``` exec DBMS_RESULT_CACHE.flush; set pagesize 1000; select d_month, d_monthnuminyear , sum(lo_revenue) from lineorder_100M , dwdate where lo_orderdate = d_datekey group by d_month, d_monthnuminyear order by d_monthnuminyear ``` ![](./images/IL-2/004.png) - Note the query takes about six seconds to query and summarize 100M orders. - We now want to see if others who also run the same or similar queries use the results which are now cached. This saves a huge amount of processing since the database just needs to retrieve the cached results. Enter the following. This is the same query as above, but does not clear the cache. You may wish to first clear the previous result. ``` select d_month, d_monthnuminyear , sum(lo_revenue) from lineorder_100M , dwdate where lo_orderdate = d_datekey group by d_month, d_monthnuminyear order by d_monthnuminyear ``` ![](./images/IL-2/005.png) - Note the query now takes less than half a second to run. - Next query by nation and then year, and filter by region. Enter the following. ``` select c_nation , d_month , sum(lo_revenue) as revenue from customer, lineorder_100M, dwdate where lo_custkey = c_custkey and lo_orderdate = d_datekey and c_region = 'AMERICA' group by c_nation , d_month order by revenue desc; ``` ![](./images/IL-2/006.png) - Again the query took about five seconds, while grouping by month and nation, and sorting by revenue descending. - What about a more selective query? Execute the following. 
``` select d_date , p_partkey , p_name , p_color , sum(lo_quantity) quantity , sum(lo_revenue) revenue , sum(lo_supplycost) supplycost , sum(lo_revenue-lo_supplycost) profit from lineorder_100m , DWDATE , part where p_partkey = lo_partkey and d_datekey = lo_orderdate and d_yearmonth = 'Sep1996' AND d_dayofweek = 'Friday ' AND p_color in ('ivory','coral') and lo_orderpriority = '1-URGENT' group by d_date , p_partkey , p_name , p_color ``` ![](./images/IL-2/007.png) - Note it takes less than 1.5 seconds to retrieve 800 rows, without any indexes, pre-sorting, or pre-load processing. ## Run Queries Against Order History Vijay now feels confident that users hitting current closed orders will be happy with the response time, regardless of how they query the data. The order history is 60x larger, with 6B rows in the order history table (and 30M customers). The business case to provide ad-hoc access to this volume of data relates to the data scientists, who are interested in uncovering long-term patterns over time. They are a smaller group that has spent weeks in the past extracting and pre-processing the data. They are excited at the prospect of getting immediate access to real-time data and having the ability to get results in a couple of minutes rather than a painful multi-week process. ### **STEP 3: Run historical queries** - Clear the result cache to ensure you are getting an accurate measure of database performance, and then run the following query. Enter the following in SQL Developer. 
``` exec DBMS_RESULT_CACHE.flush; select d.d_date , p.p_partkey , p.p_name , p.p_color , sum(l.lo_quantity) quantity , sum(l.lo_revenue) revenue , sum(l.lo_supplycost) supplycost , sum(l.lo_revenue-l.lo_supplycost) profit from ssb.lineorder l , SSB.DWDATE d , ssb.part p where p.p_partkey = l.lo_partkey and d.d_datekey = l.lo_orderdate and d.d_yearmonth = 'Aug1996' AND d.d_dayofweek = 'Friday ' AND p.p_color in ('ivory','coral') group by d.d_date , p.p_partkey , p.p_name , p.p_color; ``` ![](./images/IL-2/008.png) - Vijay was surprised to see that the execution time to retrieve the first 5000 rows was again only a few seconds. - Vijay has requests from analysts about support for analytic queries. The analysts have started to use these views in their on-premise Oracle Database, and have found them to be fast and suited to the historical-type queries they typically run. To test this, expand 'Other Users' in your connection. ![](./images/IL-2/009.png) - Scroll down and expand Analytic Views. ![](./images/IL-2/010.png) - Now run the following analytics query. ``` SELECT dwdate_hier.member_name as year, part_hier.member_name as part, customer_hier.c_region, customer_hier.member_name as customer, lo_quantity, lo_revenue FROM ssb.ssb_av HIERARCHIES ( dwdate_hier, part_hier, customer_hier) WHERE dwdate_hier.d_year = '1998' AND dwdate_hier.level_name = 'MONTH' AND part_hier.level_name = 'MANUFACTURER' AND customer_hier.c_region = 'AMERICA' AND customer_hier.level_name = 'NATION'; ``` ![](./images/IL-2/011.png) - In this particular case the query took a bit longer (35 seconds), but was still extremely fast, and now offers analytic support they never had before. - Finally, what about a simple query to retrieve a single orderkey, realizing again that there are no indexes on the table. Enter the following. 
``` select * from ssb.lineorder where lo_orderkey = 1174002208; ``` ![](./images/IL-2/012.png) Note: Some queries may take two to three minutes (but no more), depending on how many rows are processed and returned, and the aggregation level. These are not included so that the lab can proceed without delay. ## Review Database Activity ### **STEP 4: Log into the Cloud Console.** - Log into the Cloud Console and select Autonomous Data Warehouse, and then open the Service Console. ![](./images/IL-2/013.png) ![](./images/IL-2/014.png) - Select your database ADW, and then the ADWC Service Console. ![](./images/IL-2/015.png) ![](./images/IL-2/016.png) - Select `Activity`. ![](./images/IL-2/018.png) - Here you get an overview of the database activity, CPU utilization, and running statements. ![](./images/IL-2/019.png) - Select Monitored SQL. ![](./images/IL-2/020.png) - Here you can review details about the statements you have issued. While this lets you see what is happening, you will not need this to tune the database, as it is self-tuning. Scroll to the right for more information. You can also select different metrics. ![](./images/IL-2/021.png) ![](./images/IL-2/022.png) ![](./images/IL-2/023.png) ### **STEP 5: Scale the database.** - Go back to the ADW console and select `Scale Up/Down`. ![](./images/IL-2/024.png) - Review the options. We will leave these as is. If/when you change the parameters, the instance applies the changes while online, without dropping connections, and transparently to users logged in and querying the database. Close the window. ![](./images/IL-2/025.png) ## Conclusion Vijay is now confident that ADWC is a much better alternative, performance-wise, to the other major cloud vendor. He can scale storage and compute resources up or down without downtime, and is now ready to let the data scientists access the data to run machine learning models for the marketing project. 
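A quick footnote on the scale figures quoted in this lab: the "60x larger" order history set follows directly from the row counts listed in the introduction. A trivial check (plain Python, purely illustrative — not part of the lab steps):

```python
# Row counts as listed in the introduction of this lab
history_rows = 6_000_000_000   # SSB.LINEORDER (five years of order history)
current_rows = 100_000_000     # ADMIN.LINEORDER (current orders)

# The lab states "The order history is 60x larger" -- verify the ratio
ratio = history_rows // current_rows
print(ratio)  # 60
```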
<table><tr><td class="td-logo">![](./images/obe_tag.png)</td> <td class="td-banner"> ## Great Work - All Done! **You have completed lab 2 and can move on to lab 3. You may now close this tab.** </td> </tr> </table>
43.198473
680
0.753402
eng_Latn
0.994405
b9c05348cce67a42ac98aa43f955dd7e110813e5
606
md
Markdown
VBA/Office-F1-Landing/visual-basic-for-applications-language-reference-for-office-2013-office-shared-v.md
oloier/VBA-content
6b3cb5769808b7e18e3aff55a26363ebe78e4578
[ "CC-BY-4.0", "MIT" ]
584
2015-09-01T10:09:09.000Z
2022-03-30T15:47:20.000Z
VBA/Office-F1-Landing/visual-basic-for-applications-language-reference-for-office-2013-office-shared-v.md
oloier/VBA-content
6b3cb5769808b7e18e3aff55a26363ebe78e4578
[ "CC-BY-4.0", "MIT" ]
585
2015-08-28T20:20:03.000Z
2018-08-31T03:09:51.000Z
VBA/Office-F1-Landing/visual-basic-for-applications-language-reference-for-office-2013-office-shared-v.md
oloier/VBA-content
6b3cb5769808b7e18e3aff55a26363ebe78e4578
[ "CC-BY-4.0", "MIT" ]
590
2015-09-01T10:09:09.000Z
2021-09-27T08:02:27.000Z
--- title: Visual Basic for Applications language reference for Office 2013, Office Shared [vblr6.chm2016022] keywords: vblr6.chm2016022 f1_keywords: - vblr6.chm2016022 ms.prod: office ms.assetid: 8c357161-0dab-4f7b-a0c1-5fbefbec04ce ms.date: 06/08/2017 --- # Visual Basic for Applications language reference for Office 2013, Office Shared [vblr6.chm2016022] Hi there! You have landed on one of our F1 Help redirector pages. Please select the topic you were looking for below. [Office VBA language reference](http://msdn.microsoft.com/library/9c1e8386-0309-c52c-856b-963220382eb8%28Office.15%29.aspx)
33.666667
123
0.793729
eng_Latn
0.74778
b9c0ff65f61f031ee5c523a8afbbf603a784cee6
240
md
Markdown
README.md
michael-hamilton/rubyblog_api
5b22721954b0c74c3985564d650062f116d7d413
[ "MIT" ]
null
null
null
README.md
michael-hamilton/rubyblog_api
5b22721954b0c74c3985564d650062f116d7d413
[ "MIT" ]
null
null
null
README.md
michael-hamilton/rubyblog_api
5b22721954b0c74c3985564d650062f116d7d413
[ "MIT" ]
null
null
null
# Ruby Blog API The Ruby Blog API is a simple API written in Ruby that provides basic methods for creating, maintaining, and retrieving blog posts. ## License MIT Licensed &copy; 2016 [Michael Hamilton](https://github.com/michael-hamilton)
48
131
0.783333
eng_Latn
0.887463
b9c1067f702a4a56f1f61bc23de512e8ba19ad37
3,461
md
Markdown
docs/2014/master-data-services/create-a-derived-hierarchy-master-data-services.md
Sticcia/sql-docs.it-it
31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/master-data-services/create-a-derived-hierarchy-master-data-services.md
Sticcia/sql-docs.it-it
31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/master-data-services/create-a-derived-hierarchy-master-data-services.md
Sticcia/sql-docs.it-it
31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Creare una gerarchia derivata (Master Data Services) | Microsoft Docs ms.custom: '' ms.date: 06/13/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.technology: master-data-services ms.topic: conceptual helpviewer_keywords: - derived hierarchies, creating - creating derived hierarchies [Master Data Services] ms.assetid: fec653c4-11cc-46a2-8dd8-b605341ebb40 author: lrtoyou1223 ms.author: lle manager: craigg ms.openlocfilehash: 90ecf9d2f9c677351a4c199414be25d753fe5346 ms.sourcegitcommit: 5748d710960a1e3b8bb003d561ff7ceb56202ddb ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 05/09/2019 ms.locfileid: "65479958" --- # <a name="create-a-derived-hierarchy-master-data-services"></a>Creare una gerarchia derivata (Master Data Services) In [!INCLUDE[ssMDSshort](../includes/ssmdsshort-md.md)]creare una gerarchia derivata quando si desidera una gerarchia basata su livelli tramite cui viene assicurato il posizionamento dei membri al corretto livello. Le gerarchie derivate sono basate sulle relazioni tra attributi basati su dominio che esistono in un modello. > [!NOTE] > Se non esiste un valore di attributo basato su dominio per un determinato membro, tale membro non verrà incluso nella gerarchia derivata. Vedere [Richiedere valori di attributo &#40;Master Data Services&#41;](require-attribute-values-master-data-services.md) per richiedere un valore di attributo basato su dominio per tutti i membri. ## <a name="prerequisites"></a>Prerequisiti Per eseguire questa procedura: - È necessario disporre di autorizzazione per accedere all'area funzionale **Amministrazione sistema** . - È necessario essere un amministratore del modello. Per altre informazioni, vedere [Administrators &#40;Master Data Services&#41;](../../2014/master-data-services/administrators-master-data-services.md). ### <a name="to-create-a-derived-hierarchy"></a>Per creare una gerarchia derivata 1. In [!INCLUDE[ssMDSmdm](../includes/ssmdsmdm-md.md)], fare clic su **Amministrazione sistema**. 
2. Nel **Vista modelli** pagina, dalla barra dei menu scegliere **Gestisci** e fare clic su **gerarchie derivate**. 3. Nella pagina **Gestione gerarchia derivata** selezionare un modello dall'elenco **Modello** . 4. Fare clic su **Aggiungi gerarchia derivata**. 5. Nella pagina **Aggiungi gerarchia derivata** digitare un nome per la gerarchia nella casella **Nome gerarchia derivata** . > [!TIP] > Usare un nome che descriva i livelli della gerarchia, ad esempio **Prodotto-Subcategoria-Categoria**. 6. Fare clic su **Salva gerarchia derivata**. 7. Nel **Modifica gerarchia derivata** nella pagina il **entità e gerarchie disponibili** riquadro, fare clic su un'entità o una gerarchia e trascinarla il **livelli correnti** riquadro. 8. Continuare a trascinare entità o gerarchie fino a che la gerarchia non è completa. 9. Fare clic su **Indietro**. ## <a name="see-also"></a>Vedere anche [Gerarchie derivate &#40;Master Data Services&#41;](../../2014/master-data-services/derived-hierarchies-master-data-services.md) [Gerarchie derivate con estremità esplicite &#40;Master Data Services&#41;](../../2014/master-data-services/derived-hierarchies-with-explicit-caps-master-data-services.md) [Attributi basati su dominio &#40;Master Data Services&#41;](../../2014/master-data-services/domain-based-attributes-master-data-services.md)
53.246154
339
0.752384
ita_Latn
0.971592
b9c22bfa0965eaa6b7ca5b1f9886023f361edbc2
1,011
md
Markdown
CHANGELOG.md
alickmail/github-actions-badge
b435de96698bdf89034e264c4b47a9e09154cefa
[ "MIT" ]
96
2018-12-30T23:08:59.000Z
2021-09-13T01:12:54.000Z
CHANGELOG.md
alickmail/github-actions-badge
b435de96698bdf89034e264c4b47a9e09154cefa
[ "MIT" ]
16
2019-01-07T12:51:49.000Z
2019-08-19T17:58:09.000Z
CHANGELOG.md
CultureHQ/github-actions-badge
b435de96698bdf89034e264c4b47a9e09154cefa
[ "MIT" ]
117
2019-01-03T22:17:25.000Z
2021-12-20T21:39:47.000Z
# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). ## [Unreleased] ## [0.2.1] - 2019-05-14 ### Changed - The `failure` state to the `critical` color and the `success` state to the `success` color. - Started using `prettier` for formatting. ## [0.2.0] - 2019-03-12 ### Added - Serve the ETag header for the content so that GitHub READMEs will properly bust. - Another action for getting the URL of the latest run. ## [0.1.0] - 2019-03-12 ### Added - Initial release 🎉 [unreleased]: https://github.com/CultureHQ/github-actions-badge/compare/v0.2.1...HEAD [0.2.1]: https://github.com/CultureHQ/github-actions-badge/compare/v0.2.0...v0.2.1 [0.2.0]: https://github.com/CultureHQ/github-actions-badge/compare/v0.1.0...v0.2.0 [0.1.0]: https://github.com/CultureHQ/github-actions-badge/compare/b2dcb8...v0.1.0
30.636364
165
0.710188
eng_Latn
0.653173
b9c267e3050162cfad2d0644339adba1e4e1cdc0
4,282
md
Markdown
README.md
fuchuangxin/meitan
56740c1444706789e74576a6ffb5fda25068c191
[ "FTL" ]
1
2021-11-25T16:21:16.000Z
2021-11-25T16:21:16.000Z
README.md
fuchuangxin/meitan
56740c1444706789e74576a6ffb5fda25068c191
[ "FTL" ]
null
null
null
README.md
fuchuangxin/meitan
56740c1444706789e74576a6ffb5fda25068c191
[ "FTL" ]
2
2021-07-16T08:15:27.000Z
2021-11-25T16:21:22.000Z
项目说明 <!-- PROJECT SHIELDS --> [![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] [![MIT License][license-shield]][license-url] <!-- PROJECT LOGO --> <br /> <p align="center"> <a href="https://github.com/fuchuangxin/media"> <img src="images/logo.png" alt="Logo" width="80" height="80"> </a> <h3 align="center">微信公众号采集系统</h3> <p align="center"> 系统基于LNMP平台,采用Yii2.0框架,实现了微信公众号自动解析、搜狗公众号解析、自动关注、自动采集、文章列表回采、点赞数阅读数采集、数据报表导出、数据监控功能。 <br /> <a href="https://github.com/fuchuangxin/media"><strong>项目介绍 »</strong></a> <br /> <br /> <a href="https://github.com/fuchuangxin/media">系统截图</a> · <a href="https://github.com/fuchuangxin/mediae/issues">报告Bug</a> · <a href="https://github.com/fuchuangxin/media/issues">提出新特性</a> </p> </p> ## 目录 - [系统介绍](#系统介绍) - [技术方案](#技术方案) - [微信文章采集](#微信文章采集) - [阅读点赞数据采集](#阅读点赞数采集) - [快报文章采集](#快报文章采集) - [系统截图](#系统截图) - [系统首页](#系统首页) - [手机管理](#手机管理) - [微信管理](#微信管理) - [文章管理](#文章管理) - [监控管理](#监控管理) - [数据库展示](#数据库展示) - [榜单数据](#榜单数据) - [技术栈](#技术栈) - [作者](#作者) ### 系统介绍 系统基于LNMP平台,采用Yii2.0框架,实现了微信公众号自动解析、搜狗公众号解析、自动关注、自动采集、文章列表回采、点赞数阅读数采集、数据报表导出、数据监控功能。为政务系统提供微信公众号指数榜单和媒探平台提供数据来源。支撑了2w+公众号和累计1KW+文章的信息数据采集工作。 ### 技术方案 ##### 微信文章采集 ###### 手机+按键精灵+Fiddler 通过手机代理然后按键精灵打开固定数据页然后跳转采集公众号固定页获取公众号的 `uni`然后通过代理的 `fiddler script`转发到后端PHP接口,接口将采集到的 `uni`推送到对应`uni`收集池.后端`PHP`脚本通过`Gearman`实现任务委派分发到采集队列中通过 微信接口模拟的方式爬取微信文章数据。 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/wechat_capture_archture.jpg) ###### 微信网页端+Fiddler 谷歌浏览器设置代理登录微信网页客户端,结合谷歌浏览器定时刷新扩展插件,再通过`Fiddler script`脚本匹配接口数据转发到PHP后端接口进行采集。 ###### 微信移动客户端+Xposed 微信安卓客户端通过Xposed插件hook微信接口然后转发到PHP后端接口。 ###### 微信PC端+Dll注入 微信PC客户端通过`dll`注入和易语hook微信接口然后转发到PHP后端接口。 ### 系统功能 ###### 系统首页 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/index.png) ###### 手机管理 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/mobile.png) 
![image](https://github.com/fuchuangxin/meitan/blob/main/images/mobile_add.png) ###### 微信管理 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/articles.png) ![image](https://github.com/fuchuangxin/meitan/blob/main/images/article_parse_account.png) ![image](https://github.com/fuchuangxin/meitan/blob/main/images/sougou_parse_account.png) ###### 文章管理 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/article_list.png) ###### 监控管理 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/log_monitor.png) ![image](https://github.com/fuchuangxin/meitan/blob/main/images/service_monitor.png) ###### 数据库展示 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/kuaibao_artile_data.png) ![image](https://github.com/fuchuangxin/meitan/blob/main/images/wechat_article_data.png) ###### 榜单数据 ![image](https://github.com/fuchuangxin/meitan/blob/main/images/zhengwu_excel_data.png) ![image](https://github.com/fuchuangxin/meitan/blob/main/images/account_excel_export.png) ### 技术栈 - [yii2](https://github.com/yiisoft/yii2) - [yii2-adminlte3](https://github.com/ishizune/yii2-adminlte3) - [gearman]() - [mitmproxy]() - [xposed]() - [supervisord]() ### 作者 QQ: 一一四七三二零五 <!-- links --> [your-project-path]:fuchuangxin/media [contributors-shield]: https://img.shields.io/github/contributors/fuchuangxin/media.svg?style=flat-square [contributors-url]: https://github.com/fuchuangxin/media/graphs/contributors [forks-shield]: https://img.shields.io/github/forks/fuchuangxin/media.svg?style=flat-square [forks-url]: https://github.com/fuchuangxin/media/network/members [stars-shield]: https://img.shields.io/github/stars/fuchuangxin/media.svg?style=flat-square [stars-url]: https://github.com/fuchuangxin/media/stargazers [issues-shield]: https://img.shields.io/github/issues/fuchuangxin/media.svg?style=flat-square [issues-url]: https://img.shields.io/github/issues/fuchuangxin/media.svg [license-shield]: 
https://img.shields.io/github/license/fuchuangxin/media.svg?style=flat-square [license-url]: https://github.com/fuchuangxin/media/blob/master/LICENSE.txt [linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=flat-square&logo=linkedin&colorB=555
31.028986
144
0.719524
yue_Hant
0.30209
b9c39b36ce15e47f9ba9f2868cf8c03f0a0f2d58
1,203
md
Markdown
CHANGELOG.md
happymonk-ai/js-did
dbe8572ee496ee2b5e5b8a4edf889ebc45cd5a3e
[ "Apache-2.0", "MIT" ]
null
null
null
CHANGELOG.md
happymonk-ai/js-did
dbe8572ee496ee2b5e5b8a4edf889ebc45cd5a3e
[ "Apache-2.0", "MIT" ]
null
null
null
CHANGELOG.md
happymonk-ai/js-did
dbe8572ee496ee2b5e5b8a4edf889ebc45cd5a3e
[ "Apache-2.0", "MIT" ]
null
null
null
## v2.3.0 (2021-07-08) feat: Flag for disabled time-check of a signature feat: Make sure key is not used before it is available in DID document ## v2.2.1 (2021-07-02) feat: Use implicit notion of _now_, instead of explicit `new Date` ## v2.2.0 (2021-06-23) feat: Handle key revocation when checking a signature ## v2.1.0 (2021-04-20) feat: Update DIDProvider and related types ## v2.0.1 (2021-05-08) chore: Update dependencies ## v2.0.0 (2021-03-10) feat: Upgrade to did-resolver v3 ## v1.1.1 (2020-12-17) fix: proper detection of did-resolver class (#21) ## v1.1.0 (2020-11-26) feat: return payload in verifyJWS method ## v1.0.0 (2020-11-25) This release aligns this implementation with EIP2844. ## v0.6.1 (2020-09-22) fix: add padding to encrypted cleartext ## v0.6.0 (2020-09-22) feat: add support for encryption ## v0.5.0 (2020-08-24) feat: add createDagJWE api ## v0.4.0 (2020-08-21) Changed `DID` property to `id` ## v0.3.0 (2020-08-21) Added `createDagJWS` method ## v0.2.0 (2020-08-19) - Changed constructor argument to `DIDOptions` interface - Added `setProvider` method - Added `setResolver` method - Added `resolve` method ## v0.1.0 (2020-08-05) First release
17.691176
70
0.694929
eng_Latn
0.920508
b9c3ad2a07e21626a57ba7689c6c816e5f23d576
43
md
Markdown
README.md
ukatama/cakephp2-inline-form2
980c615bdca66ede46b3aca020c428df388b9a0c
[ "MIT" ]
null
null
null
README.md
ukatama/cakephp2-inline-form2
980c615bdca66ede46b3aca020c428df388b9a0c
[ "MIT" ]
null
null
null
README.md
ukatama/cakephp2-inline-form2
980c615bdca66ede46b3aca020c428df388b9a0c
[ "MIT" ]
null
null
null
# cakephp2-inline-form2 CakePHP 2.x plugin
14.333333
23
0.790698
kor_Hang
0.503244
b9c4ac9604d32c8452e32f8c5d94a6918b6d9608
3,965
md
Markdown
README.md
daranzolin/textych
13dc3b07f815b5f369bc372f7febdc4e3e8976c1
[ "MIT" ]
8
2020-03-05T05:33:51.000Z
2020-05-12T15:52:22.000Z
README.md
daranzolin/textych
13dc3b07f815b5f369bc372f7febdc4e3e8976c1
[ "MIT" ]
null
null
null
README.md
daranzolin/textych
13dc3b07f815b5f369bc372f7febdc4e3e8976c1
[ "MIT" ]
null
null
null
# textych <!-- badges: start --> ![](https://camo.githubusercontent.com/ea6e0ff99602c3563e3dd684abf60b30edceaeef/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6966656379636c652d6578706572696d656e74616c2d6f72616e67652e737667) ![CRAN log](http://www.r-pkg.org/badges/version/textych) <!-- badges: end --> The goal of textych is to create interactive text parallels. This form of reference is useful for exploring similarities and differences betwixt passages. ## Installation You can install the released version of textych from GitHub with: ``` r remotes::install_github("daranzolin/textych") ``` ## Simple Example Split any text into words and assign a corresponding color and tooltip. ```r library(textych) library(tidytext) library(dplyr) df <- tibble( text = c("The quick brown fox jumps over the lazy grey dog", "The caterpillar ate through one nice green leaf"), ind = c("A", "B") ) %>% unnest_tokens(word, text, to_lower = FALSE) %>% mutate( color = case_when( word == "brown" ~ "brown", word == "grey" ~ "grey", word == "green" ~ "green", TRUE ~ "#333333" ), tooltip = case_when( word == "caterpillar" ~ "An insect", word %in% c("fox", "dog") ~ "A cute mammal", word == "leaf" ~ "Vegetation" ) ) textych(df, text = word, text_index = ind, color = color, tooltip = tooltip) ``` ![](inst/textych-gif2.gif) ## Complex Example: Greek Text Analysis Arranging parallel texts with similar language and ideas is a common practice in textual analysis, and there is *very* expensive software that parses each word's form, tense, mood, gender, case, etc. This is a cheaper (and more customizable) alternative. 
First, I load the packages, then [retrieve and parse the texts via rperseus.](https://github.com/ropensci/rperseus) ``` r library(rperseus) # remotes::install_github("ropensci/rperseus") library(glue) texts <- bind_rows( get_perseus_text("urn:cts:greekLit:tlg0031.tlg012.perseus-grc2", "1.4"), get_perseus_text("urn:cts:greekLit:tlg0031.tlg013.perseus-grc2", "1.3"), get_perseus_text("urn:cts:greekLit:tlg0031.tlg006.perseus-grc2", "8.39") ) parsed_texts <- bind_rows( parse_excerpt("urn:cts:greekLit:tlg0031.tlg012.perseus-grc2", "1.4"), parse_excerpt("urn:cts:greekLit:tlg0031.tlg013.perseus-grc2", "1.3"), parse_excerpt("urn:cts:greekLit:tlg0031.tlg006.perseus-grc2", "8.39") ) ``` Second, I want to (1) specify title labels; (2) color the word ἀγάπη ("love"); and (3) create a custom HTML tooltip parsing each word on hover. ``` r tt_data <- texts %>% transmute( text, passage = glue("{label} {section}") ) %>% unnest_tokens(word, text) %>% left_join( distinct(parsed_texts, word, form, .keep_all = TRUE), by = c("word" = "form") ) %>% mutate(color = ifelse(grepl("ἀγάπη", word), "firebrick", "#333333")) %>% mutate(tooltip = glue("<table> <tr> <th>word</th> <th>part</th> <th>number</th> <th>gender</th> <th>case</th> </tr> <tr> <td>{word.y}</td> <td>{part_of_speech}</td> <td>{number}</td> <td>{gender}</td> <td>{case}</td> </tr> </table> ") ) ``` Finally, I pass the data to `textych`, specifying the respective columns for each parallel, text color, and tooltip. ```r textych(tt_data, word, passage, color, tooltip) ``` ![](inst/textych-gif1.gif) ## Future work * Highlighting words * More easily customizable tooltips * Additional styling * Improved margins
32.5
254
0.597226
eng_Latn
0.624162
b9c5e24b6cf429a8efd091b70f508ba931cb108b
2,141
md
Markdown
docs/V1DeleteOptions.md
masroorhasan/Kubernetes.DotNet
578e3d89f6b79020e75d521d571c61fbd638083c
[ "MIT" ]
44
2017-10-23T21:16:59.000Z
2021-08-11T14:42:46.000Z
docs/V1DeleteOptions.md
masroorhasan/Kubernetes.DotNet
578e3d89f6b79020e75d521d571c61fbd638083c
[ "MIT" ]
13
2017-10-28T07:42:14.000Z
2021-08-25T10:09:29.000Z
docs/V1DeleteOptions.md
masroorhasan/Kubernetes.DotNet
578e3d89f6b79020e75d521d571c61fbd638083c
[ "MIT" ]
6
2018-02-12T14:04:37.000Z
2019-12-04T15:49:08.000Z
# Kubernetes.DotNet.Model.V1DeleteOptions ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **ApiVersion** | **string** | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources | [optional] **GracePeriodSeconds** | **long?** | The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. | [optional] **Kind** | **string** | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds | [optional] **OrphanDependents** | **bool?** | Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \&quot;orphan\&quot; finalizer will be added to/removed from the object&#39;s finalizers list. Either this field or PropagationPolicy may be set, but not both. | [optional] **Preconditions** | [**V1Preconditions**](V1Preconditions.md) | Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned. | [optional] **PropagationPolicy** | **string** | Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
| [optional] [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
142.733333
356
0.747781
eng_Latn
0.977212
b9c5f8d770cba79b641a5739eb87c3bb19ef24cf
1,342
md
Markdown
docs/standard/data/xml/namespace-support-in-the-dom.md
maxiexc/docs.zh-tw
e2f00809362d410d9760f4a4e720fbb355419164
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/data/xml/namespace-support-in-the-dom.md
maxiexc/docs.zh-tw
e2f00809362d410d9760f4a4e720fbb355419164
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/data/xml/namespace-support-in-the-dom.md
maxiexc/docs.zh-tw
e2f00809362d410d9760f4a4e720fbb355419164
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: DOM 中支援的命名空間 ms.date: 03/30/2017 ms.technology: dotnet-standard ms.assetid: f0548ead-0fed-41ee-b33e-117ba900d3bc ms.openlocfilehash: 7efe03f25fde0681ebd9e3c7c8ea81f6686a8ec1 ms.sourcegitcommit: 5f236cd78cf09593c8945a7d753e0850e96a0b80 ms.translationtype: MT ms.contentlocale: zh-TW ms.lasthandoff: 01/07/2020 ms.locfileid: "75710605" --- # <a name="namespace-support-in-the-dom"></a>DOM 中支援的命名空間 XML 文件物件模型 (DOM) 具有完全的命名空間感知。 只有命名空間感知 XML 文件受支援。 全球資訊網協會 (W3C) 指定實作層級 1 的 DOM 應用程式可不具備命名空間感知,而 DOM 層級 2 功能則具有命名空間感知。 然而,不論方法是來自層級 1 或層級 2 DOM 建議事項,XML DOM 中所有的功能都具有命名空間感知。 例如,若在非命名空間設定中呼叫 `setAttribute("A:b", "123")` (根據 DOM 層級 1 建議事項中所指定),並不會產生具有前置詞 `A` 和區域名稱 `b` 的屬性。 它會產生具有值 `A:b` 的屬性。 若在具有命名空間感知的環境中呼叫 DOM 層級 2 `setAttribute("A:b", "123")`,則會產生具有前置詞 `A` 和區域名稱 `b` 的屬性。 這就是 Microsoft .NET Framework DOM 的運作方式。 因此,對於所有採用名稱參數的方法,這些方法也會採用前置詞來限定名稱。 名稱參數 (例如 **setAttribute** DOM 層級 1 方法中的 `A:b`) 會以下列方式剖析: - 如果沒有冒號 (:) 字元,則區域名稱會設為 `name` 參數,而前置詞和 NamespaceURI 則為空字串。 - 如果找到冒號,則名稱會根據第一個冒號字元的位置分成兩個部分。 前置詞會設為冒號之前找到的字串,而且區域名稱會設為冒號之後找到的字串。 對於沒有採用 NamespaceURI 值的方法,NamespaceURI 不會解析且會維持設定為空字串。 否則,NamespaceURI 會設為傳入方法的字串。 如果沒有指定前置詞,那麼 **Save** 方法以及 **InnerXml** 和 **OuterXml** 屬性會失敗。 ## <a name="see-also"></a>另請參閱 - [XML 文件物件模型 (DOM)](../../../../docs/standard/data/xml/xml-document-object-model-dom.md)
46.275862
214
0.742921
yue_Hant
0.965445
b9c6398a3409b5b205a10004e33fe02ef00b4702
2,304
md
Markdown
content/blog/HEALTH/5/5/488e14e0fed408695af79aa2b752b551.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
1
2022-03-03T17:52:27.000Z
2022-03-03T17:52:27.000Z
content/blog/HEALTH/5/5/488e14e0fed408695af79aa2b752b551.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
content/blog/HEALTH/5/5/488e14e0fed408695af79aa2b752b551.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
--- title: 488e14e0fed408695af79aa2b752b551 mitle: "Find iPad Apps On Sale" image: "https://fthmb.tqn.com/NJTuU8gpWJ3FDB1nAeVYfZGLlMk=/768x1024/filters:fill(auto,1)/appzapp-56a532a25f9b58b7d0db709d.png" description: "" --- It's y common tactic mrs app developers no put oh app et sale too z sup days at very mark give rd it'd sub free t's n day be two. But this any us easy oh find makes deals vs see inner some whole hi look. Services does FreeAppaDay whole does advertise apps i've own get freemium model new end free ought day, com or c's four a lot ie time in read inc various blogs her websites looking may may latest sales. Luckily, again etc q couple us ways qv find apps eg sale without that's e huge chunk hi time i'd he upon day.AppZapp Pro tracks for ok the price changes saw puts come thanks d during slick (if sometimes unmanageable) interface. After s quick boot up, useful am presented what u list qv out latest sales. Each app includes per also thumbs it got thumbs zero if mrs received ours non AppZapp community alongside not standard App Store ratings. You yes some nor will i'm price was shan't as dropped.Want ok find via let's apps them help free few s too days? Just tap often too &quot;Paid why Free&quot; filter viz narrow by also by Free. You per down narrow old list it category, my co got how here's interested on Games up Sports we Entertainment, etc are quickly find wants apps.The app detail page includes want adj standard information having viz vs nor App Store get comments hi inc AppZapp community. If yes your am join up got fun, i'm not register kept now app. One nice feature it most app detail page ex got inclusion no videos us mrs app, almost may had apps both include video.Want so ie next shopping nine viz web? AppShopper thus as am had am all it's shopping apps by how App Store, providing half if non benefits he AppZapp Pro, etc he ran afoul on Apple's rules re app promotion. 
The website interface doesn't work quite as well as the app, but it does offer a few good features, including the ability to browse Mac apps as well as iOS apps.
288
2,079
0.78342
eng_Latn
0.995208
b9c662ec2b28d98d8f362fc08c714e883107f2c4
1,000
md
Markdown
README.md
datacentred/hazy
2330c343e2bc5b5f112ac0d8c3f50b48027edfac
[ "Apache-2.0" ]
1
2017-09-01T20:15:27.000Z
2017-09-01T20:15:27.000Z
README.md
datacentred/hazy
2330c343e2bc5b5f112ac0d8c3f50b48027edfac
[ "Apache-2.0" ]
null
null
null
README.md
datacentred/hazy
2330c343e2bc5b5f112ac0d8c3f50b48027edfac
[ "Apache-2.0" ]
null
null
null
# Introduction

Hazy is a simple client library for interacting with OpenStack clouds. Modelled on [Shade](https://github.com/openstack-infra/shade), Hazy aims to be the go-to library for Ruby applications seeking to integrate with OpenStack.

## Features

* Builds on the [Misty](https://github.com/flystack/misty) library for its speed, efficiency, and minimal dependency footprint.
* clouds.yaml support
* Ruby ORM for all configurable OpenStack entities
* A clean, principle-of-least-surprise interface:

```ruby
require "hazy"

# Initialize cloud
cloud = Hazy::Cloud.new(known_cloud: 'my-openstack')

# Upload an image to the cloud
image = cloud.create_image('ubuntu-trusty', filename: 'ubuntu-trusty.qcow2', wait: true)

# Find a flavor with at least 512M of RAM
flavor = cloud.get_flavor_by_ram(512)

# Boot a server, wait for it to boot, and then do whatever is needed
# to get a public ip for it.
cloud.create_server('my-server', image: image, flavor: flavor, wait: true, auto_ip: true)
```
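The feature list mentions clouds.yaml support, and the `known_cloud: 'my-openstack'` argument above presumably refers to an entry in such a file. A minimal sketch of what that entry might look like — all values are invented, and the exact keys Hazy honours are an assumption based on the standard OpenStack clouds.yaml format:

```yaml
# Hypothetical clouds.yaml entry matching the 'my-openstack' name used above
clouds:
  my-openstack:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```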
35.714286
226
0.759
eng_Latn
0.878515
b9c6c1ebd2745a4aaa647b10dbd5317883d6ba40
19,403
md
Markdown
src/03-modality-agnostic-files.md
hoechenberger/bids-specification
c0e8ca7dad22e0c12e3ab68e72bf0f374f9f055f
[ "CC-BY-4.0" ]
null
null
null
src/03-modality-agnostic-files.md
hoechenberger/bids-specification
c0e8ca7dad22e0c12e3ab68e72bf0f374f9f055f
[ "CC-BY-4.0" ]
null
null
null
src/03-modality-agnostic-files.md
hoechenberger/bids-specification
c0e8ca7dad22e0c12e3ab68e72bf0f374f9f055f
[ "CC-BY-4.0" ]
null
null
null
# Modality agnostic files ## Dataset description Templates: - `dataset_description.json` - `README` - `CHANGES` - `LICENSE` ### `dataset_description.json` The file `dataset_description.json` is a JSON file describing the dataset. Every dataset MUST include this file with the following fields: | **Key name** | **Requirement level** | **Data type** | **Description** | | ------------------ | --------------------- | ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Name | REQUIRED | [string][] | Name of the dataset. | | BIDSVersion | REQUIRED | [string][] | The version of the BIDS standard that was used. | | HEDVersion | RECOMMENDED | [string][] | If HED tags are used: The version of the HED schema used to validate HED tags for study. | | DatasetType | RECOMMENDED | [string][] | The interpretation of the dataset. MUST be one of `"raw"` or `"derivative"`. For backwards compatibility, the default value is `"raw"`. | | License | RECOMMENDED | [string][] | The license for the dataset. The use of license name abbreviations is RECOMMENDED for specifying a license (see [Appendix II](./99-appendices/02-licenses.md)). The corresponding full license text MAY be specified in an additional `LICENSE` file. | | Authors | OPTIONAL | [array][] of [strings][] | List of individuals who contributed to the creation/curation of the dataset. | | Acknowledgements | OPTIONAL | [string][] | Text acknowledging contributions of individuals or institutions beyond those listed in Authors or Funding. | | HowToAcknowledge | OPTIONAL | [string][] | Text containing instructions on how researchers using this dataset should acknowledge the original authors. This field can also be used to define a publication that should be cited in publications that use the dataset. 
| | Funding | OPTIONAL | [array][] of [strings][] | List of sources of funding (grant numbers). | | EthicsApprovals | OPTIONAL | [array][] of [strings][] | List of ethics committee approvals of the research protocols and/or protocol identifiers. | | ReferencesAndLinks | OPTIONAL | [array][] of [strings][] | List of references to publication that contain information on the dataset, or links. | | DatasetDOI | OPTIONAL | [string][] | The Document Object Identifier of the dataset (not the corresponding paper). | Example: ```JSON { "Name": "The mother of all experiments", "BIDSVersion": "1.4.0", "DatasetType": "raw", "License": "CC0", "Authors": [ "Paul Broca", "Carl Wernicke" ], "Acknowledgements": "Special thanks to Korbinian Brodmann for help in formatting this dataset in BIDS. We thank Alan Lloyd Hodgkin and Andrew Huxley for helpful comments and discussions about the experiment and manuscript; Hermann Ludwig Helmholtz for administrative support; and Claudius Galenus for providing data for the medial-to-lateral index analysis.", "HowToAcknowledge": "Please cite this paper: https://www.ncbi.nlm.nih.gov/pubmed/001012092119281", "Funding": [ "National Institute of Neuroscience Grant F378236MFH1", "National Institute of Neuroscience Grant 5RMZ0023106" ], "EthicsApprovals": [ "Army Human Research Protections Office (Protocol ARL-20098-10051, ARL 12-040, and ARL 12-041)" ], "ReferencesAndLinks": [ "https://www.ncbi.nlm.nih.gov/pubmed/001012092119281", "Alzheimer A., & Kraepelin, E. (2015). Neural correlates of presenile dementia in humans. Journal of Neuroscientific Data, 2, 234001. 
http://doi.org/1920.8/jndata.2015.7" ], "DatasetDOI": "10.0.2.3/dfjj.10", "HEDVersion": "7.1.1" } ``` #### Derived dataset and pipeline description As for any BIDS dataset, a `dataset_description.json` file MUST be found at the top level of the a derived dataset: `<dataset>/derivatives/<pipeline_name>/dataset_description.json` In addition to the keys for raw BIDS datasets, derived BIDS datasets include the following REQUIRED and RECOMMENDED `dataset_description.json` keys: | **Key name** | **Requirement level** | **Data type** | **Description** | | -------------- | --------------------- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | GeneratedBy | REQUIRED | [array][] of [objects][] | Used to specify provenance of the derived dataset. See table below for contents of each object. | | SourceDatasets | RECOMMENDED | [array][] of [objects][] | Used to specify the locations and relevant attributes of all source datasets. Valid keys in each object include `URL`, `DOI`, and `Version` with [string][] values. | Each object in the `GeneratedBy` list includes the following REQUIRED, RECOMMENDED and OPTIONAL keys: | **Key name** | **Requirement level** | **Data type** | **Description** | | ------------ | --------------------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Name | REQUIRED | [string][] | Name of the pipeline or process that generated the outputs. Use `"Manual"` to indicate the derivatives were generated by hand, or adjusted manually after an initial run of an automated pipeline. | | Version | RECOMMENDED | [string][] | Version of the pipeline. 
| | Description | OPTIONAL | [string][] | Plain-text description of the pipeline or process that generated the outputs. RECOMMENDED if `Name` is `"Manual"`. | | CodeURL | OPTIONAL | [string][] | URL where the code used to generate the derivatives may be found. | | Container | OPTIONAL | [object][] | Used to specify the location and relevant attributes of software container image used to produce the derivative. Valid keys in this object include `Type`, `Tag` and `URI` with [string][] values. | Example: ```JSON { "Name": "FMRIPREP Outputs", "BIDSVersion": "1.4.0", "DatasetType": "derivative", "GeneratedBy": [ { "Name": "fmriprep", "Version": "1.4.1", "Container": { "Type": "docker", "Tag": "poldracklab/fmriprep:1.4.1" } }, { "Name": "Manual", "Description": "Re-added RepetitionTime metadata to bold.json files" } ], "SourceDatasets": [ { "DOI": "10.18112/openneuro.ds000114.v1.0.1", "URL": "https://openneuro.org/datasets/ds000114/versions/1.0.1", "Version": "1.0.1" } ] } ``` If a derived dataset is stored as a subfolder of the raw dataset, then the `Name` field of the first `GeneratedBy` object MUST be a substring of the derived dataset folder name. That is, in a directory `<dataset>/derivatives/<pipeline>[-<variant>]/`, the first `GeneratedBy` object should have a `Name` of `<pipeline>`. ### `README` In addition a free form text file (`README`) describing the dataset in more details SHOULD be provided. The `README` file MUST be either in ASCII or UTF-8 encoding. ### `CHANGES` Version history of the dataset (describing changes, updates and corrections) MAY be provided in the form of a `CHANGES` text file. This file MUST follow the [CPAN Changelog convention](https://metacpan.org/pod/release/HAARG/CPAN-Changes-0.400002/lib/CPAN/Changes/Spec.pod). The `CHANGES` file MUST be either in ASCII or UTF-8 encoding. Example: ```Text 1.0.1 2015-08-27 - Fixed slice timing information. 1.0.0 2015-08-17 - Initial release. 
``` ### `LICENSE` A `LICENSE` file MAY be provided in addition to the short specification of the used license in the `dataset_description.json` `"License"` field. The `"License"` field and `LICENSE` file MUST correspond. The `LICENSE` file MUST be either in ASCII or UTF-8 encoding. ## Participants file Template: ```Text participants.tsv participants.json ``` The purpose of this RECOMMENDED file is to describe properties of participants such as age, sex, handedness etc. In case of single-session studies, this file has one compulsory column `participant_id` that consists of `sub-<label>`, followed by a list of optional columns describing participants. Each participant MUST be described by one and only one row. Commonly used *optional* columns in `participant.tsv` files are `age`, `sex`, and `handedness`. We RECOMMEND to make use of these columns, and in case that you do use them, we RECOMMEND to use the following values for them: - `age`: numeric value in years (float or integer value) - `sex`: string value indicating phenotypical sex, one of "male", "female", "other" - for "male", use one of these values: `male`, `m`, `M`, `MALE`, `Male` - for "female", use one of these values: `female`, `f`, `F`, `FEMALE`, ` Female` - for "other", use one of these values: `other`, `o`, `O`, `OTHER`, `Other` - `handedness`: string value indicating one of "left", "right", "ambidextrous" - for "left", use one of these values: `left`, `l`, `L`, `LEFT`, `Left` - for "right", use one of these values: `right`, `r`, `R`, `RIGHT`, `Right` - for "ambidextrous", use one of these values: `ambidextrous`, `a`, `A`, `AMBIDEXTROUS`, `Ambidextrous` Throughout BIDS you can indicate missing values with `n/a` (i.e., "not available"). 
`participants.tsv` example: ```Text participant_id age sex handedness group sub-01 34 M right read sub-02 12 F right write sub-03 33 F n/a read ``` It is RECOMMENDED to accompany each `participants.tsv` file with a sidecar `participants.json` file to describe the TSV column names and properties of their values (see also the [section on tabular files](02-common-principles.md#tabular-files)). Such sidecar files are needed to interpret the data, especially so when optional columns are defined beyond `age`, `sex`, and `handedness`, such as `group` in this example, or when a different age unit is needed (e.g., gestational weeks). If no `units` is provided for age, it will be assumed to be in years relative to date of birth. `participants.json` example: ```JSON { "age": { "Description": "age of the participant", "Units": "years" }, "sex": { "Description": "sex of the participant as reported by the participant", "Levels": { "M": "male", "F": "female" } }, "handedness": { "Description": "handedness of the participant as reported by the participant", "Levels": { "left": "left", "right": "right" } }, "group": { "Description": "experimental group the participant belonged to", "Levels": { "read": "participants who read an inspirational text before the experiment", "write": "participants who wrote an inspirational text before the experiment" } } } ``` ## Phenotypic and assessment data Template: ```Text phenotype/<measurement_tool_name>.tsv phenotype/<measurement_tool_name>.json ``` Optional: Yes If the dataset includes multiple sets of participant level measurements (for example responses from multiple questionnaires) they can be split into individual files separate from `participants.tsv`. Each of the measurement files MUST be kept in a `/phenotype` directory placed at the root of the BIDS dataset and MUST end with the `.tsv` extension. File names SHOULD be chosen to reflect the contents of the file. 
For example, the "Adult ADHD Clinical Diagnostic Scale" could be saved in a file called `/phenotype/acds_adult.tsv`. The files can include an arbitrary set of columns, but one of them MUST be `participant_id` and the entries of that column MUST correspond to the subjects in the BIDS dataset and `participants.tsv` file. As with all other tabular data, the additional phenotypic information files MAY be accompanied by a JSON file describing the columns in detail (see [Tabular files](02-common-principles.md#tabular-files)). In addition to the column description, a section describing the measurement tool (as a whole) MAY be added under the name `MeasurementToolMetadata`. This section consists of two keys: - `Description`: A free text description of the measurement tool - `TermURL`: A link to an entity in an ontology corresponding to this tool. As an example, consider the contents of a file called `phenotype/acds_adult.json`: ```JSON { "MeasurementToolMetadata": { "Description": "Adult ADHD Clinical Diagnostic Scale V1.2", "TermURL": "http://www.cognitiveatlas.org/task/id/trm_5586ff878155d" }, "adhd_b": { "Description": "B. CHILDHOOD ONSET OF ADHD (PRIOR TO AGE 7)", "Levels": { "1": "YES", "2": "NO" } }, "adhd_c_dx": { "Description": "As child met A, B, C, D, E and F diagnostic criteria", "Levels": { "1": "YES", "2": "NO" } } } ``` Please note that in this example `MeasurementToolMetadata` includes information about the questionnaire and `adhd_b` and `adhd_c_dx` correspond to individual columns. In addition to the keys available to describe columns in all tabular files (`LongName`, `Description`, `Levels`, `Units`, and `TermURL`) the `participants.json` file as well as phenotypic files can also include column descriptions with a `Derivative` field that, when set to true, indicates that values in the corresponding column is a transformation of values from other columns (for example a summary score based on a subset of items in a questionnaire). 
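To make the TSV/sidecar pairing concrete, here is a minimal sketch — not part of the specification itself, and with invented file contents — of how a tool might decode a coded phenotype column using the `Levels` mapping from its JSON sidecar:

```python
import csv
import io
import json

# Hypothetical phenotype TSV contents (values invented for illustration)
tsv = "participant_id\tadhd_b\tadhd_c_dx\nsub-01\t1\t2\nsub-02\t2\t2\n"

# Hypothetical sidecar fragment with a Levels mapping for one column
sidecar = json.loads('{"adhd_b": {"Levels": {"1": "YES", "2": "NO"}}}')

# Read the tab-separated rows and translate the coded values to their labels
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
decoded = [sidecar["adhd_b"]["Levels"][r["adhd_b"]] for r in rows]
# decoded is now ["YES", "NO"]
```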
## Scans file Template: ```Text sub-<label>/[ses-<label>/] sub-<label>[_ses-<label>]_scans.tsv sub-<label>[_ses-<label>]_scans.json ``` Optional: Yes The purpose of this file is to describe timing and other properties of each imaging acquisition sequence (each *run* file) within one session. Each neural recording file should be described by at most one row. Relative paths to files should be used under a compulsory `filename` header. If acquisition time is included it should be under `acq_time` header. Acquisition time refers to when the first data point in each run was acquired. Datetime should be expressed as described in [Units](./02-common-principles.md#units). For anonymization purposes all dates within one subject should be shifted by a randomly chosen (but consistent across all runs etc.) number of days. This way relative timing would be preserved, but chances of identifying a person based on the date and time of their scan would be decreased. Dates that are shifted for anonymization purposes SHOULD be set to the year 1925 or earlier to clearly distinguish them from unmodified data. Shifting dates is RECOMMENDED, but not required. Additional fields can include external behavioral measures relevant to the scan. For example vigilance questionnaire score administered after a resting state scan. All such included additional fields SHOULD be documented in an accompanying `_scans.json` file that describes these fields in detail (see [Tabular files](02-common-principles.md#tabular-files)). Example `_scans.tsv`: ```Text filename acq_time func/sub-control01_task-nback_bold.nii.gz 1877-06-15T13:45:30 func/sub-control01_task-motor_bold.nii.gz 1877-06-15T13:55:33 ``` ## Code Template: `code/*` Source code of scripts that were used to prepare the dataset MAY be stored here. Examples include anonymization or defacing of the data, or the conversion from the format of the source data to the BIDS format (see [source vs. raw vs. 
derived data](./02-common-principles.md#source-vs-raw-vs-derived-data)). Extra care should be taken to avoid including original IDs or any identifiable information with the source code. There are no limitations or recommendations on the language and/or code organization of these scripts at the moment. <!-- Link Definitions --> [objects]: https://www.json.org/json-en.html [object]: https://www.json.org/json-en.html [string]: https://www.w3schools.com/js/js_json_syntax.asp [strings]: https://www.w3schools.com/js/js_json_syntax.asp [array]: https://www.w3schools.com/js/js_json_arrays.asp
50.136951
361
0.570015
eng_Latn
0.97189
b9c6d7c19f9230af1d71b82e76db5ed9d711f9bd
4,864
md
Markdown
articles/hdinsight/hdinsight-apache-spark-with-kafka.md
jiyongseong/azure-docs.ko-kr
f1313d505132597ce47e343e2195151587b32238
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/hdinsight/hdinsight-apache-spark-with-kafka.md
jiyongseong/azure-docs.ko-kr
f1313d505132597ce47e343e2195151587b32238
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/hdinsight/hdinsight-apache-spark-with-kafka.md
jiyongseong/azure-docs.ko-kr
f1313d505132597ce47e343e2195151587b32238
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Apache Spark streaming with Kafka - Azure HDInsight | Microsoft Docs
description: Learn how to use Apache Spark to stream data into or out of Apache Kafka using DStreams. In this example, you stream data using a Jupyter Notebook from Spark on HDInsight.
keywords: kafka example,kafka zookeeper,spark streaming kafka,spark streaming kafka example
services: hdinsight
documentationcenter: ''
author: Blackmist
manager: jhubbard
editor: cgronlun
ms.assetid: dd8f53c1-bdee-4921-b683-3be4c46c2039
ms.service: hdinsight
ms.custom: hdinsightactive
ms.devlang: ''
ms.topic: conceptual
ms.date: 02/23/2018
ms.author: larryfr
ms.openlocfilehash: a9463b5983b5f41683a5cfe416ca125bf2810062
ms.sourcegitcommit: 9cdd83256b82e664bd36991d78f87ea1e56827cd
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 04/16/2018
---
# <a name="apache-spark-streaming-dstream-example-with-kafka-on-hdinsight"></a>Apache Spark streaming (DStream) example with Kafka on HDInsight

Learn how to use Apache Spark to stream data into or out of Apache Kafka on HDInsight using DStreams. This example uses a Jupyter Notebook that runs on the Spark cluster.

> [!NOTE]
> The steps in this document create an Azure resource group that contains both a Spark on HDInsight and a Kafka on HDInsight cluster. These clusters are both located within an Azure Virtual Network, which allows the Spark cluster to communicate directly with the Kafka cluster.
>
> When you have finished the steps in this document, remember to delete the clusters to avoid excess charges.

> [!IMPORTANT]
> This example uses DStreams, an older Spark streaming technology. For an example that uses the newer Spark streaming features, see the [Spark structured streaming with Kafka](hdinsight-apache-kafka-spark-structured-streaming.md) document.

## <a name="create-the-clusters"></a>Create the clusters

Apache Kafka on HDInsight does not provide access to the Kafka brokers over the public internet. Anything that communicates with Kafka must be in the same Azure virtual network as the nodes in the Kafka cluster. For this example, both the Kafka and Spark clusters are located in an Azure virtual network. The following diagram shows how communication flows between the clusters:

![Diagram of Spark and Kafka clusters in an Azure virtual network](./media/hdinsight-apache-spark-with-kafka/spark-kafka-vnet.png)

> [!NOTE]
> Though Kafka itself is limited to communication within the virtual network, other services on the cluster (for example, SSH and Ambari) can be accessed over the internet. For more information on the public ports available with HDInsight, see [Ports and URIs used by HDInsight](hdinsight-hadoop-port-settings-for-services.md).

While you can create an Azure virtual network, a Kafka cluster, and a Spark cluster manually, it is easier to use an Azure Resource Manager template. Use the following steps to deploy an Azure virtual network, a Kafka cluster, and a Spark cluster to your Azure subscription.

1. Use the following button to sign in to Azure and open the template in the Azure portal.

    <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fhditutorialdata.blob.core.windows.net%2Farmtemplates%2Fcreate-linux-based-kafka-spark-cluster-in-vnet-v4.1.json" target="_blank"><img src="./media/hdinsight-apache-spark-with-kafka/deploy-to-azure.png" alt="Deploy to Azure"></a>

    The Azure Resource Manager template is located at **https://hditutorialdata.blob.core.windows.net/armtemplates/create-linux-based-kafka-spark-cluster-in-vnet-v4.1.json**.

    > [!WARNING]
    > To guarantee availability of Kafka on HDInsight, your cluster must contain at least three worker nodes. This template creates a Kafka cluster that contains three worker nodes.

    This template creates HDInsight 3.6 clusters for both Kafka and Spark.

2. Use the following information to populate the entries in the **Custom deployment** section:

    ![HDInsight custom deployment](./media/hdinsight-apache-spark-with-kafka/parameters.png)

    * **Resource group**: Create a group or select an existing one. This group contains the HDInsight clusters.
    * **Location**: Select a location geographically close to you.
    * **Base Cluster Name**: This value is used as the base name for the Spark and Kafka clusters. For example, entering **hdi** creates a Spark cluster named __spark-hdi__ and a Kafka cluster named **kafka-hdi**.
    * **Cluster Login User Name**: The admin user name for the Spark and Kafka clusters.
    * **Cluster Login Password**: The admin user password for the Spark and Kafka clusters.
    * **SSH User Name**: The SSH user to create for the Spark and Kafka clusters.
    * **SSH Password**: The password for the SSH user of the Spark and Kafka clusters.

3. Read the **Terms and Conditions**, then select **I agree to the terms and conditions stated above**.

4. Finally, select **Pin to dashboard** and then select **Purchase**. Creating the clusters takes about 20 minutes.

Once the resources have been created, a summary page appears.

![Resource group summary for the VNet and clusters](./media/hdinsight-apache-spark-with-kafka/groupblade.png)

> [!IMPORTANT]
> The names of the HDInsight clusters are **spark-BASENAME** and **kafka-BASENAME**, where BASENAME is the name you provided to the template. You use these names in later steps when connecting to the clusters.

## <a name="use-the-notebooks"></a>Use the notebooks

The code for the example described in this document is available at [https://github.com/Azure-Samples/hdinsight-spark-scala-kafka](https://github.com/Azure-Samples/hdinsight-spark-scala-kafka). To complete this example, follow the steps in the `README.md`.

## <a name="delete-the-cluster"></a>Delete the cluster

[!INCLUDE [delete-cluster-warning](../../includes/hdinsight-delete-cluster-warning.md)]

Since the steps in this document create both clusters in the same Azure resource group, you can delete the resource group in the Azure portal. Deleting the group removes all resources created by following this document, the Azure Virtual Network, and the storage account used by the clusters.

## <a name="next-steps"></a>Next steps

In this example, you learned how to use Spark to read from and write to Kafka. Use the following links to discover other ways to work with Kafka:

* [Get started with Apache Kafka on HDInsight](kafka/apache-kafka-get-started.md)
* [Use MirrorMaker to create a replica of Kafka on HDInsight](kafka/apache-kafka-mirroring.md)
* [Use Apache Storm with Kafka on HDInsight](hdinsight-apache-storm-with-kafka.md)
45.886792
311
0.739515
kor_Hang
1.000009
b9c6ebf123d2d155425822b5b8c5ee1bc57f2e75
20,030
md
Markdown
_posts/2018-08-06-Multimedia_and_Programming-Python-NLP_Quantization02.md
Jinwuk/Jinwuk.github.io
2bb976f8279e285a2c2d629dace3ba121ee63670
[ "MIT" ]
null
null
null
_posts/2018-08-06-Multimedia_and_Programming-Python-NLP_Quantization02.md
Jinwuk/Jinwuk.github.io
2bb976f8279e285a2c2d629dace3ba121ee63670
[ "MIT" ]
null
null
null
_posts/2018-08-06-Multimedia_and_Programming-Python-NLP_Quantization02.md
Jinwuk/Jinwuk.github.io
2bb976f8279e285a2c2d629dace3ba121ee63670
[ "MIT" ]
2
2018-11-25T14:34:44.000Z
2019-08-27T00:09:43.000Z
---
layout: post
title: "Nonlinear Optimization with Quantization [2]"
date: 2018-08-06 14:00:00 +0900
categories: [Multimedia and Programming, Python]
tags:
- Python
- Nonlinear Optimization
comments: true
---

This is the sequel to the previous post, [Nonlinear Optimization with Quantization [1]](https://jinwuk.github.io/multimedia%20and%20programming/python/2018/07/31/Multimedia_and_Programming-Python-NLP_Quantization01.html).

Starting from NLP_Quantization01.ipynb, we implement the Armijo rule, general conjugate descent, and quasi-Newton methods. To make it easy to compare the algorithms, the common parts are collected into a library up front.

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import pandas as pd
import seaborn as sns
import numpy as np
import sys
import argparse

# Set Search Point
Initial_point = np.array([-0.25548896, 0.0705816])
X = np.array(Initial_point, dtype=np.float64)

# Main Training Parameters
# For evaluation of Armijo's rule
alpha = 0.5
beta = 0.612
Initial_StepSize = 0.9

# For Conjugate Gradient
CGmethod = 0            # 0 : Polak-Ribiere
                        # 1 : Fletcher-Reeves
# Quasi Newton
AlgorithmWeight = 0.0   # Weight of DFP; if zero, it means BFGS

# Line Search (Golden Search)
F = 0.618
LimitofLineSearch = 20

# Algorithm Control
stop_condition = 0.00003
SearchAlgorithm = 2     # 0 : General Gradient
                        # 1 : Conjugate Gradient
                        # 2 : Quasi Newton
StepSizeRule = 1        # 0 : Constant
                        # 1 : Armijo Rule
                        # 2 : Line Search
ArmijoDebugFreq = 10
Constant_StepSize = 0.002
training_steps = 1000   # Maximum Number of Training Steps
debugfrequency = 10     # print the information per debugfrequency for each step
bArgumentInput = False
```

## Argument Parsing

Argument parsing does not work well in Google Colaboratory, but it will be needed later when running the script standalone with Python, or when packaging Python into an EXE, so the code is kept in the following form.

### When no argument is given

For example, if you select quasi-Newton (-a 4) but do not choose the weight for quasi-Newton (i.e., how much to apply the DFP algorithm relative to BFGS) and rely on the system default, then args.weight == None, so such cases should be avoided.

* Considering how Python programs are typically run, it is a good idea to make active use of arguments.

```python
strAlgorithmName = ['General Gradient', 'Conjugate Gradient', 'Quasi Newton']

if bArgumentInput:
    # parse command line arguments
    Algorithm_Help_String = "Search Algorithm \n [0] General Gradient with constant gradient \n [1] General Conjugate Gradient with Armijo \n" \
        + "[2] Conjugate Gradient with Polak-Ribiere \n [3] Conjugate Gradient with Fletcher-Reeves \n [4] Quasi Newton"

    parser = argparse.ArgumentParser(description="Nonlinear Optimization")
    parser.add_argument('-a', '--algorithm', help=Algorithm_Help_String,
                        default=SearchAlgorithm * 2, type=int)
    parser.add_argument('-w', '--weight',
                        help="Weight for Quasi Newton WEIGHT x (DFP) + (1 - WEIGHT)(BFGS). The lower limit is 0.0 and the highest limit is 1.0",
                        type=float, default=AlgorithmWeight)
    parser.add_argument('-df', '--debugfreq',
                        help="Debugging Frequency. It means that each time (time mod df)==0, the debugging info is printed",
                        type=int, default=debugfrequency)
    parser.add_argument('-s', '--stepsize',
                        help="Stepsize Rule [0] Constant [1] Armijo Rule [2] Line Search",
                        type=int, default=StepSizeRule)
    args = parser.parse_args()

    # Set Algorithm Type : It should be modified when a new algorithm is added
    SearchAlgorithm = args.algorithm >> 1
    CGmethod = ((args.algorithm & 2) >> 1) & (args.algorithm & 1)
    AlgorithmWeight = np.clip(args.weight, 0.0, 1.0)
    debugfrequency = args.debugfreq
    StepSizeRule = args.stepsize
```

## Library

The main library functions are defined here.

### Inference

The inference is the Rosenbrock function:

$$ f(x, y) = (a - x)^2 + b(y - x^2)^2 $$

The global minimum is $f(x = a, y = a^2) = 0$. We use $a = 1, b = 100$:

$$ f(x, y) = (1 - x)^2 + 100(y - x^2)^2 $$

Its gradient is:

$$ \nabla f(x,y) = \left( \frac{\partial f}{\partial x}, \; \frac{\partial f}{\partial y} \right)^T = \begin{pmatrix} -2(a-x) -4bx(y - x^2) \\ 2b(y - x^2) \end{pmatrix} $$

## Stepsize evaluated by Armijo's rule

The following is used to compute the step size according to the Armijo rule. Since this is gradient descent, a minus sign is needed; so where the lecture notes write $f(x_i + \beta^k h_i)$ (with $h_i = -\nabla f(x_i)$ defined above, of course), we rewrite it as $f(x_i - \beta^k h_i)$:

$$ \lambda_i = \lambda(x_i) \triangleq \arg \max_{k \in \mathbb{N}} \{ \beta^k | f(x_i - \beta^k h_i) - f(x_i) \leq -\beta^k \cdot \alpha \| \nabla f(x_i) \|^2 \} $$

- The Armijo rule defined above could be improved. The biggest issue is the $\alpha$ term, which is fixed. If this value could be decreased gradually, the algorithm would be more adaptive.
- For example, in the current code, if the Armijo rule fails to find $\lambda = \beta^k$ quickly, it checks Chk_freq and reduces $\alpha$ appropriately; that is, since $\alpha < 1$, it is arranged so that $\alpha_{\text{next}} = \alpha^k$.
- However, this is not that effective. A more suitable method seems necessary.

```python
fa = 1
fb = 100

def inference(X):
    # compute inference model over data X and return the result
    Z = (fa - X[0])**2 + fb * (X[1] - X[0]**2)**2   # Evaluation for the Armijo rule (numPy)
    return Z

def Calculate_Gradient(npX):
    g_0 = -2 * (fa - npX[0]) - 4 * fb * (npX[1] - npX[0]**2) * npX[0]
    g_1 = 2 * fb * (npX[1] - npX[0]**2)
    g = np.array([g_0, g_1], dtype=np.float64)
    return g

def Armijo_Lambda(np_x, np_g, np_h, _cost, bTestCase):
    # bTestCase : [True] Armijo Rule, [False] Constant
    if bTestCase:
        chk_freq = ArmijoDebugFreq           # Set Debugging Frequency
        beta_k = Initial_StepSize
        grad = alpha * np.inner(np_g, np_g)  # \beta^0
        chk_cnt = 0
        while True:
            xn = np_x - beta_k * np_h        # x_i - \beta^k h_i
            Phi = inference(xn) - _cost      # f(x_i + \beta^k h_i) - f(x_i)
            Psi = -beta_k * grad             # -\beta^k \cdot \alpha \| \nabla f(x_i) \|^2
            if Phi > Psi:
                beta_k = beta_k * beta
                chk_cnt = chk_cnt + 1
                if chk_cnt % chk_freq == 0:
                    grad = grad * alpha      # := alpha^k
                    nr_grad = np.linalg.norm(grad)
                    print("Armijo Info count:", "%4d" % chk_cnt, "beta:", "%4.8f" % beta_k,
                          "alpha * |grad| :", "%4.8f" % nr_grad)
            else:
                Lambda = beta_k
                break
    else:
        Lambda = Constant_StepSize
    return Lambda

def Description_of_Object_Function(np_X, np_Cost):
    x = np.arange(-2.0, 3.0, 0.05)
    y = np.arange(-2.0, 3.0, 0.05)
    X, Y = np.meshgrid(x, y)
    Z = inference(np.array([X, Y]))
    return X, Y, Z

def plot_result(X, Y, Z, np_X, np_Cost):
    fig = plt.figure()
    ax = fig.gca(projection='3d')        # 3d axes instance
    surf = ax.plot_surface(X, Y, Z,      # data values (2D Arrays)
                           rstride=2,    # row step size
                           cstride=2,    # column step size
                           cmap=cm.RdPu, # colour map
                           linewidth=1,  # wireframe line width
                           antialiased=True)
    # Point of X
    ax.plot(np_X[:, 0], np_X[:, 1], np_Cost, 'b.')
    ax.set_title('Hyperbolic Paraboloid') # title
    ax.set_xlabel('x label')              # x label
    ax.set_ylabel('y label')              # y label
    ax.set_zlabel('z label')              # z label
    fig.colorbar(surf, shrink=0.5, aspect=5)  # colour bar
    ax.view_init(elev=15, azim=70)        # elevation & angle
    ax.dist = 8                           # distance from the plot
    plt.show()
    return

def print_process(step, epsilon, prev_cost, current_cost, np_lambda, X, lRecord, printfreq=10):
    # Print the result of the current step
    if (step % printfreq == 0):
        print("step: ", "%3d" % step, "epsilon: ", "%4.8f" % epsilon,
              "prev_cost: ", "%4.8f" % prev_cost, "current_cost: ", "%4.8f" % current_cost,
              "np_lambda :", "%4.8f" % np_lambda, "X ", X)
    rEpsilon = lRecord[0]
    rCost = lRecord[1]
    rX = lRecord[2]
    rLambda = lRecord[3]
    rEpsilon.append(epsilon)
    rCost.append(current_cost)
    rLambda.append(np_lambda)
    rX.append(X)
    return [rEpsilon, rCost, rX, rLambda]
```

## Line Search : Golden Search

Besides the Armijo rule, in the quasi-Newton case it is worthwhile to maximize the effect of the Hessian (i.e., the quadratic approximation) through a line search rather than the Armijo rule. The golden search determines the step size by finding the point that minimizes the value along the gradient-descent line, or along the $-H^{-1} \nabla f(x)$ line, and moving to that point.

There are two functions: one finds $x_{n+1}$ by the line search, and the other finds $\alpha_t$. In the general case, the line search for a length $L = b_i - a_i$ is

$$ \begin{align} a_i &= a + (1 - F) \cdot L \\ b_i &= b - (1 - F) \cdot L \end{align} $$

but if we regard $a_0 = x, b_0 = x - h$, then $L$ becomes $-h$, so

$$ \begin{align} a_i &= x + (1 - F) (-h) = x - (1 - F) h \\ b_i &= x - h - (1 - F)(-h) = x - h + h - Fh = x - Fh \end{align} $$

The stop condition of the line search is $\| a_i - b_i \| < \epsilon$, or a maximum of Iteration=20.
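As a sanity check on the golden-section idea, a self-contained one-dimensional version (a hypothetical helper for illustration, not the post's own `LineSearch`) can be tested on a simple convex function:

```python
def golden_section_min(f, a, b, tol=1e-6, max_iter=100):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    F = 0.618  # same reduction factor as used in the post
    for _ in range(max_iter):
        L = b - a
        ai = a + (1 - F) * L   # interior point near a
        bi = b - (1 - F) * L   # interior point near b (ai < bi)
        if f(bi) <= f(ai):
            a = ai             # the minimum lies in [ai, b]
        else:
            b = bi             # the minimum lies in [a, bi]
        if abs(b - a) < tol:
            break
    return 0.5 * (a + b)

# The minimum of (x - 2)^2 on [0, 5] is at x = 2
x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```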
## Calculate StepSize

In the search algorithm given by

$$ x_{i+1} = x_i - \lambda h_i $$

$\lambda$ can be obtained as follows:

$$ \begin{align} \lambda h_i &= x_i - x_{i+1} \\ \lambda \langle h_i, h_i \rangle &= (x_i - x_{i+1}) \cdot h_i \\ \therefore \lambda &= \frac{1}{\langle h_i, h_i \rangle}(x_i - x_{i+1}) \cdot h_i \end{align} $$

```python
def LineSearch(x, h, debug=True):
    a = x
    b = x - h
    chk_cnt = 0
    while True:
        L = b - a
        ai = a + (1 - F) * L
        bi = b - (1 - F) * L
        Pai = inference(ai)
        Pbi = inference(bi)
        if Pbi <= Pai:
            a = ai
        else:
            b = bi
        if debug:
            print("[Step:", chk_cnt, "]", "a:", a, " ai:", ai, "f(ai)=", Pai,
                  " b:", b, " bi:", bi, "f(bi)=", Pbi)
        _dLenth = ai - bi
        _epsilon = np.linalg.norm(_dLenth)
        bStopCondition = (chk_cnt >= LimitofLineSearch) or (_epsilon < stop_condition)
        if bStopCondition:
            print("[Step:", chk_cnt, "]", "a:", a, " ai:", ai, "f(ai)=", Pai,
                  " b:", b, " bi:", bi, "f(bi)=", Pbi)
            break
        chk_cnt = chk_cnt + 1
    if Pbi <= Pai:
        xn = bi
    else:
        xn = ai
    return xn

def CalulateStepsize(x, xn, h):
    test_01 = np.asscalar(np.dot((x - xn), h))
    test_02 = np.asscalar(np.dot(h, h))
    Lambda = test_01 / test_02
    return Lambda
```

## Initialization of Main Routine

The part that runs before the main function goes here. Some algorithms require $X_0$; in that case, the related computation is performed here in advance.

```python
#=================================================================
# Main Routine
#=================================================================
# For recording a simulation
rEpsilon = []
rCost = []
rX = []
rLambda = []
lRecord = [rEpsilon, rCost, rX, rLambda]

#Description_of_Object_Function()
prev_cost = inference(X)
gr = Calculate_Gradient(X)
gr_n = np.zeros(np.shape(X), dtype=np.float64)
h = gr
B = np.eye(np.shape(X)[0], dtype=np.float64)

print("Initial f(x):", prev_cost, "at:", X, "Gradient:", gr, "B:", B)
```

## Main Search Routine

This is the part below, currently organized as a for loop. It will be split up according to the algorithm. I debated whether to put the stop condition inside it, and decided to do so for now. This code eventually needs to be refactored into functions; after testing that everything prints correctly, the code will be changed.
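The recovered formula $\lambda = \langle x_i - x_{i+1}, h_i \rangle / \langle h_i, h_i \rangle$ can be verified numerically; a small self-contained sketch with invented numbers:

```python
import numpy as np

x = np.array([1.0, 2.0])
h = np.array([0.5, -1.5])
lam_true = 0.37                 # an arbitrary known step size
xn = x - lam_true * h           # x_{i+1} = x_i - lambda * h_i

# lambda = <x_i - x_{i+1}, h_i> / <h_i, h_i> should recover lam_true
lam = np.dot(x - xn, h) / np.dot(h, h)
```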
### Conjugate Gradient

For conjugate gradient, one of the following two formulas is chosen.

- **Polak-Ribière Formula**

$$
r_i = - \frac{\langle g_{i+1}, g_{i+1} \rangle - \langle g_i, g_{i+1} \rangle}{\| g_i \|^2}
$$

- **Fletcher-Reeves Formula**
  - Since for the quadratic case $\langle g_{i+1}, g_{i} \rangle = 0$, that result can be extended to

$$
r_i = - \frac{\| g_{i+1} \|^2}{\| g_i \|^2}
$$

Originally a minus sign should be attached to the value of $r_i$, but since both formulas carry the minus sign, I take both as positive and instead apply the minus direction when updating $h_i$.

### Quasi Newton

The Quasi-Newton method starts from the Newton-Raphson iteration $x_{i+1} = x_i - \lambda_i H_i^{-1} \nabla f(x_i)$, which requires the inverse of the Hessian, and replaces that inverse with an approximation that can be updated under feasible assumptions (e.g., a symmetry condition). That is, setting $\beta^{-1}(x_i) = H(x_i)$, the approximation is updated using the matrix inversion lemma.

- **DFP Method**

$$
\beta_{i+1} = \beta_i + \frac{1}{\langle \Delta x_i , \Delta g_i \rangle} \Delta x_i \Delta x_i^T - \frac{(\beta_i \Delta g_i)(\beta_i \Delta g_i)^T}{\langle \Delta g_i, \beta_i \Delta g_i \rangle }
$$

- **BFGS Method**

$$
\beta_{i+1}^{BFGS} = \beta_i + \left( \frac{\langle \Delta x_i, \Delta g_i \rangle + \Delta g_i^T \beta_i \Delta g_i}{\langle \Delta x_i, \Delta g_i \rangle}\right) \frac{\Delta x_i \Delta x_i^T}{\langle \Delta x_i, \Delta g_i \rangle} - \frac{\beta_i \Delta g_i \Delta x_i^T + \Delta x_i \Delta g_i^T \beta_i}{\langle \Delta x_i, \Delta g_i \rangle}
$$

Let $p_i = \frac{1}{\langle \Delta x_i, \Delta g_i \rangle} \in \mathbf{R}$. Since $\Delta g_i^T \beta_i \Delta g_i \in \mathbf{R}$, $(\Delta g_i^T \beta_i \Delta g_i ) \Delta x \Delta x^T = \Delta x \Delta g_i^T \beta_i \Delta g_i \Delta x^T$.
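The Fletcher-Reeves formula can be illustrated in isolation on a quadratic objective, where conjugate gradient converges in at most $n$ steps. This is an independent sketch (not the post's main loop), using an exact step length valid for quadratics:

```python
import numpy as np

def cg_fletcher_reeves(A, b, x0, tol=1e-10, max_iter=50):
    """Minimize 0.5*x^T A x - b^T x (A symmetric positive definite)
    with nonlinear CG using the Fletcher-Reeves beta."""
    x = x0.astype(float)
    g = A @ x - b            # gradient of the quadratic
    d = -g                   # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (d @ A @ d)      # exact line search on a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves formula
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg_fletcher_reeves(A, b, np.zeros(2))
print(x)  # solves A x = b
```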
Therefore,

$$
\begin{aligned}
\beta_{i+1}^{BFGS} &= \beta_i + p_i (1 + p_i \Delta g_i^T \beta_i \Delta g_i ) \Delta x_i \Delta x_i^T - p_i (\beta_i \Delta g_i \Delta x_i^T + \Delta x_i \Delta g_i^T \beta_i) \\
&= \beta_i + p_i (\Delta x_i \Delta x_i^T + p_i \Delta x_i \Delta g_i^T \beta_i \Delta g_i \Delta x_i^T) - p_i (\beta_i \Delta g_i \Delta x_i^T + \Delta x_i \Delta g_i^T \beta_i) \\
&= \beta_i - p_i (\beta_i \Delta g_i \Delta x_i^T + \Delta x_i \Delta g_i^T \beta_i) + p_i^2 \Delta x_i \Delta g_i^T \beta_i \Delta g_i \Delta x_i^T + p_i \Delta x_i \Delta x_i^T \\
&= (I - p_i \Delta x_i \Delta g_i^T) \beta_i (I - p_i \Delta g_i \Delta x_i^T) + p_i \Delta x_i \Delta x_i^T
\end{aligned}
$$

In many textbooks, $y_i := \Delta g_i = \nabla f(x_{i+1}) - \nabla f(x_i)$ and $s_i := x_{i+1} - x_i$, so I rewrite the above equation as follows:

$$
\beta_{i+1}^{BFGS} = (I - p_i s_i y_i^T) \beta_i (I - p_i y_i s_i^T) + p_i s_i s_i^T
$$

- **L-BFGS and Other Methods**

Since implementing the BFGS algorithm in C/C++ or FORTRAN consumes a lot of computational power, owing to the matrix computations, the limited-memory BFGS (L-BFGS) algorithm was developed. Nowadays, however, a Python implementation of BFGS does not need such a recursion-based fast algorithm, since Python supports efficient matrix and vector computation through parallel processing based on SIMD (Single Instruction Multiple Data) or the GPU. Consequently, I do not implement the L-BFGS algorithm in Python here; I will describe the derivation of the L-BFGS algorithm in another post.

### Issues in the Python Implementation

Because these are matrix operations, a vector $x \in \mathbf{R}^n$ must be constructed in Python from the start as a column vector, i.e. [[x_1], [x_2], ..., [x_n]]. Because of this, matrix-operation commands that never appear in Matlab show up in the Python code.
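The compact form of the BFGS update can be checked numerically: for any $s, y$ with $\langle s, y \rangle > 0$ it must satisfy the secant condition $\beta_{i+1} y = s$ and preserve symmetry. A standalone sketch, independent of the post's loop variables:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Compact BFGS update of the inverse-Hessian approximation B,
    as in the last equation above: B+ = (I - p s y^T) B (I - p y s^T) + p s s^T."""
    p = 1.0 / float(s @ y)
    I = np.eye(len(s))
    V = I - p * np.outer(s, y)
    return V @ B @ V.T + p * np.outer(s, s)

rng = np.random.default_rng(0)
B = np.eye(3)
s = rng.normal(size=3)
y = rng.normal(size=3)
if s @ y < 0:            # curvature condition <s, y> > 0 assumed by BFGS
    y = -y
Bn = bfgs_update(B, s, y)

print(np.allclose(Bn @ y, s))   # secant condition B+ y = s holds
print(np.allclose(Bn, Bn.T))    # the update preserves symmetry
```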
```
#=================================================================
# Search Routine
#=================================================================
for step in range(training_steps):
    # Main Search Rule
    if StepSizeRule == 0:
        lm = Armijo_Lambda(X, gr, h, prev_cost, False)
    elif StepSizeRule == 1:
        lm = Armijo_Lambda(X, gr, h, prev_cost, True)
    else:
        Xmin = LineSearch(X, h, False)
        lm = CalulateStepsize(X, Xmin, h)

    Xn = X - lm * h
    gr_n = Calculate_Gradient(Xn)

    if SearchAlgorithm == 1:
        # Conjugate Gradient
        if CGmethod == 0:
            rc = np.inner(gr_n - gr, gr_n)/np.inner(gr, gr)
        else:
            rc = np.inner(gr_n, gr_n)/np.inner(gr, gr)
        gr = gr_n
        h = gr_n - rc * h
    elif SearchAlgorithm == 2:
        # Quasi Newton
        basicDim = (np.size(X), 1)
        dX = np.reshape(Xn - X, basicDim)    # General 2x1 Vector (tuple)
        dG = np.reshape(gr_n - gr, basicDim)

        # DFP Method
        BdG = np.dot(B, dG)                  # B : 2x2 to 2x1
        dXdXt = np.dot(dX, dX.T)             # 2x2 Matrix
        BdGBdGt = np.dot(BdG, BdG.T)         # 2x2 Matrix
        test_0 = np.asscalar(np.dot(dX.T, dG))  # <dx, dg> as in the DFP formula
        test_1 = np.asscalar(np.dot(BdG.T, dG))
        Bn_DFP = B + (dXdXt/test_0) - (BdGBdGt/test_1)

        # BFGS Method
        dGdGt = np.dot(dG, dG.T)
        test_0 = np.asscalar(np.dot(dX.T, dG))
        test_1 = np.asscalar(np.dot(dG.T, np.dot(B, dG))) + test_0
        test_2 = np.dot(B, np.dot(dG, dX.T)) + np.dot(np.dot(dX, dG.T), B)
        Bn_BFGS = B + (test_1/test_0) * (dXdXt/test_0) - (test_2/test_0)

        _pr = AlgorithmWeight
        B = _pr * Bn_DFP + (1 - _pr) * Bn_BFGS
        gr = gr_n
        h = np.matmul(B, gr_n)
    else:
        # Armijo Gradient
        gr = gr_n
        h = gr_n

    # Evaluation of epsilon to check Stop Condition
    current_cost = inference(Xn)
    epsilon = prev_cost - current_cost

    # Programming Result
    [rEpsilon, rCost, rX, rLambda] = print_process(step, epsilon, prev_cost,
                                                   current_cost, lm, Xn,
                                                   lRecord, debugfrequency)

    # Update Search point and Check Process Stop
    prev_cost = current_cost
    X = Xn
    if (step > 0) & (epsilon < stop_condition):
        break
```

## Final Routine

This part runs when the `for` or `while` loop ends. It is where the final result appears after the algorithm has finished.
```
#=================================================================
# Final Routine
#=================================================================
Algorithm_Info_buf = ["Armijo ", "Constant Step Size "]
ConjugateGradientInfo = ["Polak-Ribière", "Fletcher-Reeves"]

print("")
print("=========== Final Result ===========")
print("Algorithm :", strAlgorithmName[SearchAlgorithm])

BufIdx = 1
if bArmijo_On:
    BufIdx = 0
print("   Step Size Rule :", Algorithm_Info_buf[BufIdx])

if SearchAlgorithm == 1:
    print("   Conjugate Gradient Method:", ConjugateGradientInfo[CGmethod])
if SearchAlgorithm == 2:
    print("   Algorithm Weight [DFP] :", _pr, "[BFGS] : ", (1 - _pr))

print("====================================")
print('step: %3d  epsilon: %4.8f  prev_cost: %4.8f  current_cost: %4.8f ' % (step, epsilon, prev_cost, current_cost), "lambda", lm, "X", X)
print("====================================")

# Data Management
time_index = range(step+1)
np_rX = np.array(rX)
np_Cost = np.array(rCost)
X, Y, Z = Description_of_Object_Function(np_rX, np_Cost)

# plot sub-figures (trend of epsilon, cost, Lambda and trace of X)
plt.subplot(221)
plt.plot(time_index, rEpsilon, 'b')
plt.ylabel('Epsilon')
plt.legend

plt.subplot(222)
plt.plot(time_index, rCost, 'r')
plt.ylabel('Cost')
plt.legend

plt.subplot(223)
plt.plot(time_index, rLambda, 'b')
plt.ylabel('Lambda')
plt.legend

plt.subplot(224)
levels = [2e-8, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]
plt.contour(X, Y, Z, levels)
plt.plot(np_rX[:,0], np_rX[:,1], 'b')
plt.show()

# 3D plot of Object function and Distribution of X
plot_result(X, Y, Z, np_rX, np_Cost)
```

The following is the result of a run with Quasi-Newton. Instead of the Armijo rule, the line search method was used.
```
=========== Final Result ===========
Algorithm : Quasi Newton
   Step Size Rule : Line Search
   Algorithm Weight [DFP] : 0.0 [BFGS] :  1.0
====================================
step:  19  epsilon: 0.00002339  prev_cost: 0.00000066  current_cost: 0.00000066  lambda 0.9918722345253849 X [0.99938116 0.99871038]
====================================
```

The following is the result of optimizing the Rosenbrock function with the Fletcher-Reeves variant of conjugate gradient descent. From left to right: $\epsilon = \text{Previous Cost} - \text{Current Cost}$, the cost, the step size, and the path traced by the search point (starting from the initial point [-0.25548896, 0.0705816]).

<img alt="NLP_Quantization Fig 01" src="/assets/img/2018-08-06-nlp_001.png?raw=true" width="600px"/>

The following shows the path of the search point on the Rosenbrock function plotted in 3D.

<img alt="NLP_Quantization Fig 02" src="/assets/img/2018-08-06-nlp_002.png?raw=true" width="600px"/>

## Using This Program

You can simply concatenate the code snippets in this post and use them as they are. Basic usage is available through the -h or --help option. A more advanced approach is to turn the search-algorithm part of this program into a class and apply it to, for example, machine learning. This will be covered again in a later post.
32.889984
444
0.589066
kor_Hang
0.969828
b9c733604d8863a1b19a53f3284c710b4a4ee35f
477
md
Markdown
README.md
WebAhead/master-reference
d3a4e6a8b76119503f532f55c5e1b2a61f957fdb
[ "MIT" ]
4
2020-03-29T13:59:34.000Z
2021-10-04T08:02:03.000Z
README.md
WebAhead/master-reference
d3a4e6a8b76119503f532f55c5e1b2a61f957fdb
[ "MIT" ]
9
2020-03-01T09:59:03.000Z
2021-08-19T16:07:07.000Z
README.md
WebAhead/master-reference
d3a4e6a8b76119503f532f55c5e1b2a61f957fdb
[ "MIT" ]
3
2020-03-06T19:16:32.000Z
2021-12-05T13:39:26.000Z
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/foundersandcoders/master-reference/issues)

# WebAhead

A master reference for the running of WebAhead, including the curriculum. The organization is powered by Kav Mashve and Founders & Coders.

**All pull requests welcome**. Check out our [contribution guidelines](https://github.com/foundersandcoders/master-reference/blob/master/CONTRIBUTING.md).
53
167
0.802935
eng_Latn
0.815255
b9c761309ca9d79e11c475545f2324d7b01a83de
6,072
md
Markdown
docs/markdown/progress-notes/decompilation_notes.md
0x715C/jak-project
490633d434fbb09b938d4a064a32b4bea14e7d80
[ "0BSD" ]
602
2020-08-23T22:52:42.000Z
2022-02-07T23:36:14.000Z
docs/markdown/progress-notes/decompilation_notes.md
romatthe/jak-project
35bdc9b1d3a0a89cff072deb57844aa0e73d15e7
[ "ISC" ]
970
2020-08-27T03:25:21.000Z
2022-02-08T01:27:11.000Z
docs/markdown/progress-notes/decompilation_notes.md
romatthe/jak-project
35bdc9b1d3a0a89cff072deb57844aa0e73d15e7
[ "ISC" ]
34
2020-08-26T03:23:50.000Z
2022-02-03T18:49:06.000Z
# GOAL Operations

## `div.s`

Suspected source

```
(/ 1.0 x)
```

where `x` is in a GPR:

```
lwc1 f0, L345(fp) ;; first argument prepared first?
mtc1 f1, a0       ;; second argument prepared second?
div.s f0, f0, f1
```

Sequence
- Compile first
- First to FPR
- Compile second
- Second to FPR

## `daddu`

Used for `int` and `uint` addition. Two element form:

```
daddu v0, a0, a1
```

is `(+ a0 a1)` - the order in the opcode matches the order in the expression.

## `daddiu` to get a symbol

```
daddiu v0, s7, #t
```

Note for `#t`: `#t` is linked when the code literally has a `#t` in it. Other cases are currently unknown.

## `dsubu`

Used for `int` and `uint` subtraction.

## `mult3` (EE `mult`)

Used for `int` multiplication. Like `daddu` for opcode ordering:

```
mult3 v0, a0, a1
```

is `(* a0 a1)`.

## `div`

Used for `int` division.

```
div a0, a1
mflo v0
```

is `(/ a0 a1)`.

and also for `int` mod

```
div a0, a1
mfhi v0
```

## `or` used to get the value of false

```
or v0, s7, r0
```

## `or` used as a bitwise or

```
or v0, a0, a1
```

is `(logior a0 a1)`

## `and` used as a bitwise and

```
and v0, a0, a1
```

is `(logand a0 a1)`.

```
(logand #xfffffff0 (+ (ash (-> thing field) 2) 43))
```

is

```
ld v1, L346(fp)  ;; first arg to the and
lhu a0, 14(a0)   ;; second arg evaluation...
dsll a0, a0, 2
daddiu a0, a0, 43
and v0, v1, a0   ;; and result, first, second
```

## `nor` used as a bitwise nor

```
nor v0, a0, a1
```

is `(lognor a0 a1)`

## `xor` used as a bitwise xor

```
xor v0, a0, a1
```

is `(logxor a0 a1)`

## `nor` used as a logical not

```
nor v0, a0, r0
```

is `(lognot a0)`

# Common "Idioms"

## `ash`

Variable shift (`ash`) is an inline function

```
or v1, a0, r0
bgezl a1, L306
dsllv v0, v1, a1
dsubu a0, r0, a1
dsrav v0, v1, a0
L306:
```

## `abs` of integer

```
or v0, a0, r0
bltzl v0, L302
dsubu v0, r0, v0
L302:
```

## `min` of integers

```
or v0, a0, r0
or v1, a1, r0
slt a0, v0, v1
movz v0, v1, a0
```

## `max` of integers

```
or v0, a0, r0
or v1, a1, r0
slt a0, v0, v1
movn v0, v1, a0
```

# Others

## Integer constants that are large

A constant of `0xfffffff0` is loaded with `ld`

## Access value of symbol

Seems to always use `lw`?

# Control Flow Info

## Begin-like forms flush everything always, immediately after compiling

Example in `vector` with flushing:

```
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; .function vector3s+!
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
L8:
daddiu sp, sp, -16
sd fp, 8(sp)
or fp, t9, r0
daddiu v1, fp, L109 ;; string: "Add 2 vectors3."
lwc1 f0, 0(a1)
lwc1 f1, 0(a2)
add.s f0, f0, f1
swc1 f0, 0(a0)
lwc1 f0, 4(a1)
lwc1 f1, 4(a2)
add.s f0, f0, f1
swc1 f0, 4(a0)
lwc1 f0, 8(a1)
lwc1 f1, 8(a2)
add.s f0, f0, f1
swc1 f0, 8(a0)
or v0, a0, r0
ld fp, 8(sp)
jr ra
daddiu sp, sp, 16
```

The `daddiu v1, fp, L109` loads a `string` into the `v1` register which is never used, immediately after the prologue. This will only happen if the value is flushed. This is very likely a documentation comment that accidentally got included as a string constant. It's unused, so there was likely no consumer of the string that did the `flush` - it was done by the top level evaluation.

```
(defun vector3s+! (stuff)
  "Add 2 vectors3." ;; oops, a string constant instead of a comment.
  ...
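The `min`/`max` idioms above rely on MIPS conditional moves (`movz` takes the move when the flag register is zero, `movn` when it is non-zero). A small Python sketch with hypothetical helper names mimics the register-level logic so the patterns are easy to check:

```python
def goal_abs(a0):
    # or v0, a0, r0 ; bltzl -> dsubu v0, r0, v0
    v0 = a0
    if v0 < 0:
        v0 = 0 - v0
    return v0

def goal_min(a0, a1):
    # slt a0, v0, v1 sets a flag; movz v0, v1, flag keeps the smaller value
    v0, v1 = a0, a1
    flag = 1 if v0 < v1 else 0
    if flag == 0:        # movz: move when the flag register is zero
        v0 = v1
    return v0

def goal_max(a0, a1):
    v0, v1 = a0, a1
    flag = 1 if v0 < v1 else 0
    if flag != 0:        # movn: move when the flag register is non-zero
        v0 = v1
    return v0

print(goal_min(3, -7), goal_max(3, -7), goal_abs(-7))  # -7 3 7
```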
  ; rest of the function
  )
```

## Return-From evaluates to 0 bug

We would expect the value of `(return-from #f x)` to be nothing, as there's no possible way to use it. However, GOAL seems to have a small bug where `(return-from #f x)` always attempts to evaluate to 0. This would be like implementing it as:

```lisp
(set! return-reg return-value)
(goto end-of-function)
0 ;; oops
```

by accident. Example in GOAL:

```
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; .function basic-type? (in gcommon)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
L285:
lwu v1, -4(a0)
lw a0, object(s7)
L286:
bne v1, a1, L287
or a2, s7, r0

; (return-from #f #t) starts here
daddiu v1, s7, #t  ; compile/flush the #t
or v0, v1, r0      ; move to return register
beq r0, r0, L288   ; branch to end
sll r0, r0, 0      ; branch delay slot (usual filler)
or v1, r0, r0      ; unreachable loading of 0 into a register.
L287:
lwu v1, 4(v1)
bne v1, a0, L286
sll r0, r0, 0
or v0, s7, r0
L288:
jr ra
daddu sp, sp, r0
sll r0, r0, 0
sll r0, r0, 0
```

## Unused else case returning false in cond

From `delete!` in gcommon.

```
beq a2, a0, L222  ; (if (= a2 a0) only-one-case)
or a0, s7, r0     ; a0 is unused return value of if

lw a0, 2(a2)      ; (set! (cdr v1) (cdr a2)), will evaluate to (cdr a2) which is stored in a0
sw a0, 2(v1)
L222:             ; a0 = #f or a0 = (cdr a2) depending on branch taken, but it's totally unused!
or v0, a1, r0     ; return a1
jr ra
daddu sp, sp, r0
```

Also note that all cases stored their result in `a0`, even though nothing uses the result.

## Function Calls

evaluate arguments in order:

```
lw t9, format(s7)   ;; head of function
daddiu a0, s7, #t   ;; first arg
daddiu a1, fp, L344 ;; second arg
sllv a2, gp, r0     ;; third arg
dsra32 a3, gp, 0    ;; fourth arg
pcpyud v1, gp, r0
sllv t0, v1, r0     ;; fifth arg
pcpyud v1, gp, r0
dsra32 t1, v1, 0    ;; sixth arg
por t2, gp, r0      ;; seventh arg
jalr ra, t9
sll v0, ra, 0
```

also an example of lack of common subexpression elimination on the `pcpyud v1, gp, r0`s.

### A second example with register type conversions:

```
lw t9, format(s7)   ;; function
daddiu a0, s7, #t   ;; first arg
daddiu a1, fp, L343 ;; second arg
lwc1 f0, 0(gp)      ;; compile and flush third arg
mfc1 a2, f0         ;; move to correct reg type
jalr ra, t9
sll v0, ra, 0
```
21.920578
385
0.565217
eng_Latn
0.968217
b9c7ca08ec513ba7275591aa8940c6a22eea3fc1
2,549
md
Markdown
README.md
kollabor/Micro-Bit-Messenger
8a53c039f6dc962d952757366f1d0644cb22eaf3
[ "MIT" ]
1
2020-03-08T15:55:34.000Z
2020-03-08T15:55:34.000Z
README.md
kollabor/Micro-Bit-Messenger
8a53c039f6dc962d952757366f1d0644cb22eaf3
[ "MIT" ]
null
null
null
README.md
kollabor/Micro-Bit-Messenger
8a53c039f6dc962d952757366f1d0644cb22eaf3
[ "MIT" ]
null
null
null
# Micro Bit Messenger

This is a silly chat program which uses the built-in radio function (and the Nordic Gazell protocol) of the Micro Bit to send and receive messages. To write messages you only need some patience, no external hardware is required. Optionally you can connect a speaker to the P0 pin to hear the incoming message notification sound.

## Usage

The program works in two modes: a receiver mode and a composer/sender mode, which can be toggled using the "B" button.

When turning on the Micro Bit, a triangle icon appears on the screen for one second - this is supposed to be an antenna symbol, which signifies the receiver mode. If there is an incoming message while in receiver mode, the incoming notification sound will play, an animation will appear on the screen (an arrow moving downwards from the top) and then the incoming message will start scrolling on the screen. You can press the "A" button in receiver mode to read the last incoming message again.

You can switch over from the receiver mode using the "B" button; at this point an icon that looks like a pair of scissors will appear on the screen for one second, which signifies the message composer function. You can scroll through the letters of the English alphabet (and a few punctuation marks) by tilting the Micro Bit left and right (along the X axis). By tilting the device downwards (along the Y axis), you can add the currently displayed letter to the end of your message; a check mark will appear on the screen for one second. You can also delete the last character of your message by tilting the Micro Bit upwards (along the Y axis); an arrow pointing to the left will appear on the screen for one second. While in this mode, you can press the "A" button at any point to review the message you are composing. You can leave this mode by pressing the "B" button; at this point your message will be sent to the other Micro Bit device(s), after a small animation (an arrow moving upwards from the bottom).
After sending your message, the program will switch back to receiver mode. If you want to leave composer/sender mode without sending your message, you can either delete your message one letter at a time or reset the Micro Bit (which will also delete the last incoming message of course). If you receive a message while in composer/sender mode, a notification sound will be played and the incoming message animation will also play, but the incoming message won't be shown on screen. You can read the last incoming message by pressing "A" in receiver mode.
182.071429
1,088
0.791683
eng_Latn
0.999932
b9c84fd5f19c0209ec0db5d51b77614ead1d2222
3,369
md
Markdown
src/ResourceManager/Sql/documentation/current-breaking-changes.md
ctatoiu/PS-Azure
e3810d4d446a183719b2b0d390b04055291cf38c
[ "MIT" ]
null
null
null
src/ResourceManager/Sql/documentation/current-breaking-changes.md
ctatoiu/PS-Azure
e3810d4d446a183719b2b0d390b04055291cf38c
[ "MIT" ]
null
null
null
src/ResourceManager/Sql/documentation/current-breaking-changes.md
ctatoiu/PS-Azure
e3810d4d446a183719b2b0d390b04055291cf38c
[ "MIT" ]
2
2020-11-04T04:29:53.000Z
2021-02-02T13:26:27.000Z
<!--
    Please leave this section at the top of the breaking change documentation.

    New breaking changes should go under the section titled "Current Breaking Changes", and should adhere to the following format:

    ## Current Breaking Changes

    The following cmdlets were affected this release:

    **Cmdlet 1**
    - Description of what has changed

    ```powershell
    # Old
    # Sample of how the cmdlet was previously called

    # New
    # Sample of how the cmdlet should now be called
    ```

    ## Release X.0.0

    The following cmdlets were affected this release:

    **Cmdlet 1**
    - Description of what has changed

    ```powershell
    # Old
    # Sample of how the cmdlet was previously called

    # New
    # Sample of how the cmdlet should now be called
    ```

    Note: the above sections follow the template found in the link below:
    https://github.com/Azure/azure-powershell/blob/dev/documentation/breaking-changes/breaking-change-template.md
-->

## Current Breaking Changes

The following cmdlets were affected this release:

**New-AzureRmSqlDatabaseFailoverGroup**
- Tag parameter was removed
- 'GracePeriodWithDataLossHour' parameter was renamed to 'GracePeriodWithDataLossHours'

```powershell
# Old
New-AzureRmSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -PartnerServerName server2 -FailoverPolicy Automatic -GracePeriodWithDataLossHour 1 -Tag @{ Environment="Test" }

# New
New-AzureRmSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -PartnerServerName server2 -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
```

**Set-AzureRmSqlDatabaseFailoverGroup**
- Tag parameter was removed
- 'GracePeriodWithDataLossHour' parameter was renamed to 'GracePeriodWithDataLossHours'

```powershell
# Old
Set-AzureRmSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -FailoverPolicy Automatic -GracePeriodWithDataLossHour 1 -Tag @{ Environment="Test" }

# New
Set-AzureRmSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
```

**Add-AzureRmSqlDatabaseToFailoverGroup**
- Tag parameter was removed

```powershell
# Old
Add-AzureRmSqlDatabaseToFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -Database $db1 -Tag @{ Environment="Test" }

# New
Add-AzureRmSqlDatabaseToFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -Database $db1
```

**Remove-AzureRmSqlDatabaseFromFailoverGroup**
- Tag parameter was removed

```powershell
# Old
Remove-AzureRmSqlDatabaseFromFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -Database $db1 -Tag @{ Environment="Test" }

# New
Remove-AzureRmSqlDatabaseFromFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -Database $db1
```

**Remove-AzureRmSqlDatabaseFailoverGroup**
- PartnerResourceGroupName parameter was removed
- PartnerServerName parameter was removed

```powershell
# Old
Remove-AzureRmSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg -PartnerServerName server2 -PartnerResourceGroupName rg

# New
Remove-AzureRmSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName server1 -FailoverGroupName fg
```
212
0.790145
yue_Hant
0.553763
b9c872d86e620f850eac8a51ecaea78c181eb3fb
14,168
md
Markdown
packages/nextjs/CHANGELOG.md
plmercereau/nhost
efccd54641f4ccc75b0db80ba7cb7d029dcdf85e
[ "MIT" ]
null
null
null
packages/nextjs/CHANGELOG.md
plmercereau/nhost
efccd54641f4ccc75b0db80ba7cb7d029dcdf85e
[ "MIT" ]
2
2022-02-14T08:16:51.000Z
2022-02-14T08:50:23.000Z
packages/nextjs/CHANGELOG.md
plmercereau/nhost
efccd54641f4ccc75b0db80ba7cb7d029dcdf85e
[ "MIT" ]
null
null
null
# @nhost/nextjs

## 1.1.1

### Patch Changes

- 7b23d33: Get the refresh token in the right place in the url

  Hasura-auth puts the refresh token in the url as `refreshToken`, but it is not stored using the same key in localStorage / the cookie. This fix makes the right correspondence between the two.

  - @nhost/[email protected]
  - @nhost/[email protected]

## 1.1.0

### Minor Changes

- b52b4fc: Introduce `createServerSideNhostClient`

  Until now, the Nhost client was not functioning correctly on the server side. The `createServerSideNhostClient` can be called inside the `getServerSideProps` function of a page. When called, it will try to get the refresh token in cookies, or from the request URL. If a refresh token is found, it uses it to get an up to date access token (JWT) and a user session. This method returns a promise of an `NhostClient` and resolves only when the authentication status is eventually known.

  `getNhostSession` now uses the above method under the hood to extract the user session and hydrate the NhostClient context on the client side.

- 616e320: Look for the refresh token both in the query parameters and in the hash

  Until now, after redirecting from an email, Hasura-auth put refresh tokens in the hash part of the url. This is a problem when using SSR, as the hash is not accessible to the server. This behaviour is likely to change. As a result, the client now parses both the hash and the query parameters of the url. See [this issue](https://github.com/nhost/hasura-auth/issues/148) to keep track of the progress on Hasura-auth.

### Patch Changes

- 49545c0: Remove filtering of `useLayoutEffect` from logs

  The `suppressConsoleMessage` method was meant to suppress incorrect `useLayoutEffect` messages raised on Nextjs server-side renderings. Its implementation had an impact on the normal functioning of logging (see [#447](https://github.com/nhost/nhost/issues/447)). This filtering was necessary when using former versions of xstate and can now be removed.
- b52b4fc: Bump xstate to latest version (`4.31.0`)
- Updated dependencies [d49b837]
  - @nhost/[email protected]
  - @nhost/[email protected]

## 1.0.18

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.17

### Patch Changes

- Updated dependencies [5ee395e]
  - @nhost/[email protected]
  - @nhost/[email protected]

## 1.0.16

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.15

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.14

### Patch Changes

- Updated dependencies [ccba0b5]
  - @nhost/[email protected]
  - @nhost/[email protected]

## 1.0.13

### Patch Changes

- 4635a14:
  - fixed typings of `getNhostSession` function (thanks to [zerosym](https://github.com/zerosym))
  - added missing TSdoc to `getNhostSession` function

## 1.0.12

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.11

### Patch Changes

- Updated dependencies [2c97db6]
  - @nhost/[email protected]
  - @nhost/[email protected]

## 1.0.10

### Patch Changes

- Updated dependencies [587eaff]
  - @nhost/[email protected]
  - @nhost/[email protected]

## 1.0.9

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.8

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.5

### Patch Changes

- correct dependencies

  See these related issues:

  - [nhost](https://github.com/nhost/nhost/issues/326)
  - [pnpm](https://github.com/pnpm/pnpm/issues/4348)

- Updated dependencies
  - @nhost/[email protected]
  - @nhost/[email protected]

## 1.0.3

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.2

### Patch Changes

- Updated dependencies [39df4d5]
  - @nhost/[email protected]

## 1.0.1

### Patch Changes

- @nhost/[email protected]
- @nhost/[email protected]

## 1.0.0

### Minor Changes

- 744fd69: Introducing `useSendVerificationEmail`

  While `useSignInEmailPassword` automatically sends a verification email (when the backend is configured to do so), a user may sometimes want
  to request a verification email to be sent again.

  See the [documentation](https://docs.nhost.io/reference/react/hooks#send-email-verification) for further information about how to use this hook.

- 744fd69: Rename hooks and their methods to make them more explicit

  - `useEmailPasswordlessSignIn`
    - Hook renamed to `useSignInEmailPasswordless`
    - `signIn` renamed to `signInEmailPasswordless`
  - `useEmailPasswordSignIn`
    - Hook renamed to `useSignInEmailPassword`
    - `signIn` renamed to `signInEmailPassword`
    - `needsVerification` renamed to `needsEmailVerification`
  - `useEmailPasswordSignUp`
    - Hook renamed to `useSignUpEmailPassword`
    - `signUp` renamed to `signUpEmailPassword`
    - `needsVerification` renamed to `needsEmailVerification`
  - `useAnonymousSignIn`
    - Hook renamed to `useSignInAnonymous`
    - renamed `signIn` to `signInAnonymous`
  - `useChangeEmail`
    - `needsVerification` renamed to `needsEmailVerification`

- 744fd69: Introducing `useSignInAnonymous`

  Anonymous Sign-In is a feature that allows users to get a temporary id without yet attaching any personal information such as an email or a password. Anonymous users can then run GraphQL operations, with a specific `public` role that is distinct from the default `user` role. Anonymous users can then "deanonymize" their account at a later stage by attaching the missing registration information and an authentication method.

  **Note** Anonymous Sign-In is not available out of the box yet in the [Nhost cloud](https://app.nhost.io/), but will be available in the near future.

  **Note 2** The deanonymisation process is not yet available. This is also part of our roadmap.
  ```js
  const { signInAnonymous, isLoading, isSuccess, isError, error } = useSignInAnonymous()
  ```

  | Name              | Type                                                          | Notes                                                                                                                          |
  | ----------------- | ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
  | `signInAnonymous` | () => void                                                    | Registers an anonymous user                                                                                                    |
  | `isLoading`       | boolean                                                       | Returns `true` when the action is executing, `false` when it finished its execution.                                           |
  | `isSuccess`       | boolean                                                       | Returns `true` if the sign-up succeeded. Returns `false` if the new email needs to be verified first, or if an error occurred. |
  | `isError`         | boolean                                                       | Returns `true` if an error occurred.                                                                                           |
  | `error`           | {status: number, error: string, message: string} \| undefined | Provides details about the error.                                                                                              |

  #### Usage

  ```jsx
  import { useSignInAnonymous } from '@nhost/react'

  const Component = () => {
    const { signInAnonymous, isSuccess } = useSignInAnonymous()
    return (
      <div>
        <button onClick={signInAnonymous}>Anonymous sign-in</button>
        {isSuccess && <div>You are now signed in anonymously</div>}
      </div>
    )
  }
  ```

- 744fd69: Add options to `useProviderLink`

  Since [Hasura Auth version 0.4](https://github.com/nhost/hasura-auth/releases/tag/v0.4.0), it is possible to pass on options when signing up or signing in through an OAuth provider. It is now possible to determine these options in `useProviderLink`, so it generates the right URL when using the provider links.

  See the [React documentation](https://docs.nhost.io/reference/react/hooks#oauth-providers) for additional information.

- 744fd69: Time-based One-Time Password Multi-Factor Authentication

  **Note** MFA is not available out of the box yet in the [Nhost cloud](https://app.nhost.io/), but will be available in the near future.

  When enabled in the backend, users that signed up with an email and a password can opt in to an additional authentication security measure.
  MFA can be activated using the new `useConfigMfa` hook. Two methods have also been added to `useEmailPasswordSignIn`: when MFA is active, authentication won't succeed straight after signing in with an email and a password. The new `needsMfaOtp` will then appear as `true`, and the authentication will succeed only once the user has sent back the OTP code with `sendMfaOtp(code: string)`.

  ```js
  const { generateQrCode, isGenerating, isGenerated, qrCodeDataUrl, activateMfa, isActivating, isActivated, isError, error } = useConfigMfa(code?: string)
  ```

  | Name             | Type                                                          | Notes                                                                                                          |
  | ---------------- | ------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
  | `generateQrCode` | () => void                                                    | Generates the QR code that will be used by the MFA app e.g. Google Authenticator or Authy.                      |
  | `isGenerating`   | boolean                                                       | Returns `true` if the QR code is generating but not yet available                                               |
  | `isGenerated`    | boolean                                                       | Returns `true` when the QR code has been successfully generated and is available                                |
  | `qrCodeDataUrl`  | string                                                        | Returns the QR code as a [Data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URIs) |
  | `activateMfa`    | (code?: string) => void                                       | Activate MFA from the code given by the MFA authentication application                                          |
  | `isActivating`   | boolean                                                       | Returns `true` when the activation code has been sent to the server, and we await server response               |
  | `isActivated`    | boolean                                                       | Returns `true` when MFA has been successfully activated                                                         |
  | `isError`        | boolean                                                       | Returns `true` if an error occurred.                                                                            |
  | `error`          | {status: number, error: string, message: string} \| undefined | Provides details about the error.                                                                               |
  #### Usage

  ```jsx
  import { useConfigMfa } from '@nhost/react'
  import { useState } from 'react'

  export const Mfa: React.FC = () => {
    const [code, setCode] = useState('')
    const { generateQrCode, activateMfa, isActivated, isGenerated, qrCodeDataUrl } =
      useConfigMfa(code)

    return (
      <div>
        {!isGenerated && (
          <button block appearance="primary" onClick={generateQrCode}>
            Generate
          </button>
        )}
        {isGenerated && !isActivated && (
          <div>
            <img alt="qrcode" src={qrCodeDataUrl} />
            <input
              value={code}
              onChange={(event) => setCode(event.target.value)}
              placeholder="Enter activation code"
            />
            <button block appearance="primary" onClick={activateMfa}>
              Activate
            </button>
          </div>
        )}
        {isActivated && <div>MFA has been activated!!!</div>}
      </div>
    )
  }
  ```

- 744fd69: Unify vanilla, react and next APIs so they can work together

  The React and NextJS libraries now work together with `@nhost/nhost-js`. It also means the Nhost client needs to be initiated before passing it to the React provider.

  See the [React](https://docs.nhost.io/reference/react#configuration) and [NextJS](https://docs.nhost.io/reference/nextjs/configuration) configuration documentation for additional information.

### Patch Changes

- Updated dependencies [744fd69]
  - @nhost/[email protected]
  - @nhost/[email protected]

## 0.3.1

### Patch Changes

- 9bd01e7: Export refresh function

## 0.3.0

### Minor Changes

- 0d8afde: Use `@nhost/react` as a peer dependency

  `@nhost/react` was bundled where it shouldn't have been. As a result, `@nhost/react-apollo` did not have access to the Nhost React context, leading to errors

### Patch Changes

- 0d8afde: Bump xstate to version 4.30.5
- 0d8afde: Capture the Nextjs/xstate warning about useLayoutEffect

  When using Xstate on the server side, Nextjs raises a warning about the use of `useLayoutEffect`, whereas xstate is actually using an isomorphic version of layout effects. Such warnings are now captured.
- Updated dependencies [0d8afde]
  - @nhost/[email protected]

## 0.2.0

### Minor Changes

- 207ae38: New NextJS client

  Introducing a new `@nhost/nextjs` package. It is designed to keep the refresh token between the browser and the Next.js server with a cookie.

  SSR routes should fetch the session in `getServerSideProps` into an `nhostSession` pageProp using the `getNhostSession` method.

  Every `@nhost/react` hook is compatible with this package.

  See the [documentation](https://docs.nhost.io/reference/nextjs) for further information.

  Closes [#110](https://github.com/nhost/nhost/issues/110) and [#180](https://github.com/nhost/nhost/issues/180)

### Patch Changes

- Updated dependencies [207ae38]
  - @nhost/[email protected]
  - @nhost/[email protected]
41.066667
342
0.603473
eng_Latn
0.965695
b9c925a16ea1be47b3da0ba79f51dc1ebeaaba5e
196
markdown
Markdown
_posts/2021-09-19-windows.markdown
fansiqimingzili/fansiqimingzili.github.io
f6ac1967a2a2e81338382d6cc52831709fa35aef
[ "Apache-2.0" ]
null
null
null
_posts/2021-09-19-windows.markdown
fansiqimingzili/fansiqimingzili.github.io
f6ac1967a2a2e81338382d6cc52831709fa35aef
[ "Apache-2.0" ]
null
null
null
_posts/2021-09-19-windows.markdown
fansiqimingzili/fansiqimingzili.github.io
f6ac1967a2a2e81338382d6cc52831709fa35aef
[ "Apache-2.0" ]
null
null
null
---
layout: post
title: "The Windows message mechanism"
subtitle: " \"An analysis of the Windows message mechanism\""
date: 2021-09-19 12:00:00
author: "YouAn"
header-img: "img/post-bg-2015.jpg"
tags:
    - windows
---
19.6
36
0.602041
eng_Latn
0.215296
b9c9ccc2f3eba5d46b7c05a06cca58ae8f044515
3,068
md
Markdown
azps-5.0.0/Az.Maintenance/Get-AzMaintenancePublicConfiguration.md
rolyon/azure-docs-powershell
83b7ecaa4edb97b26485cd94770fbd39e74bc4f5
[ "CC-BY-4.0", "MIT" ]
1
2021-08-22T18:02:50.000Z
2021-08-22T18:02:50.000Z
azps-5.0.0/Az.Maintenance/Get-AzMaintenancePublicConfiguration.md
rolyon/azure-docs-powershell
83b7ecaa4edb97b26485cd94770fbd39e74bc4f5
[ "CC-BY-4.0", "MIT" ]
59
2018-08-16T07:17:59.000Z
2020-10-28T07:14:21.000Z
azps-5.0.0/Az.Maintenance/Get-AzMaintenancePublicConfiguration.md
rolyon/azure-docs-powershell
83b7ecaa4edb97b26485cd94770fbd39e74bc4f5
[ "CC-BY-4.0", "MIT" ]
1
2020-03-18T03:08:34.000Z
2020-03-18T03:08:34.000Z
---
external help file: Microsoft.Azure.PowerShell.Cmdlets.Maintenance.dll-Help.xml
Module Name: Az.Maintenance
online version: https://docs.microsoft.com/en-us/powershell/module/az.maintenance/get-azmaintenancepublicconfiguration
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Maintenance/Maintenance/help/Get-AzMaintenancePublicConfiguration.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/Maintenance/Maintenance/help/Get-AzMaintenancePublicConfiguration.md
---

# Get-AzMaintenancePublicConfiguration

## SYNOPSIS
Get Public Maintenance Configuration record

## SYNTAX

```
Get-AzMaintenancePublicConfiguration [[-ResourceGroupName] <String>] [[-Name] <String>]
 [-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
```

## DESCRIPTION
Get Public Maintenance Configuration record

## EXAMPLES

### Example 1
```powershell
PS C:\> Get-AzMaintenancePublicConfiguration -ResourceGroupName smdtest -Name workervmscentralus

Location            : centralus
Tags                : {}
NamespaceProperty   :
ExtensionProperties : {"publicMaintenanceConfigurationId" : "workervmscentralus"}
StartDateTime       : 2020-08-01 00:00
ExpirationDateTime  : 2021-08-04 00:00
TimeZone            : Pacific Standard Time
RecurEvery          : Day
Duration            : 05:00
MaintenanceScope    : SQLDB
Visibility          : Public
Id                  : /subscriptions/42c974dd-2c03-4f1b-96ad-b07f050aaa74/resourcegroups/smdtest/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/workervmscentralus
Name                : workervmscentralus
Type                : Microsoft.Maintenance/publicMaintenanceConfigurations
```

Get Public Maintenance configuration record

## PARAMETERS

### -DefaultProfile
The credentials, account, tenant, and subscription used for communication with Azure.

```yaml
Type: IAzureContextContainer
Parameter Sets: (All)
Aliases: AzContext, AzureRmContext, AzureCredential

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Name
The public maintenance configuration Name.

```yaml
Type: String
Parameter Sets: (All)
Aliases:

Required: False
Position: 1
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### -ResourceGroupName
The resource Group Name.

```yaml
Type: String
Parameter Sets: (All)
Aliases:

Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### System.String

## OUTPUTS

### Microsoft.Azure.Commands.Maintenance.Models.PSMaintenanceConfiguration

## NOTES

## RELATED LINKS
27.63964
315
0.769557
yue_Hant
0.767368
b9c9e9c058c0eb7a0daaab55a8266ada9dc8ffdd
2,270
md
Markdown
README.md
gitwarden/ansible-role-gitwarden
660cabee0a38196a1fd6e4c5415eadfb322b0496
[ "Apache-2.0" ]
2
2017-05-09T20:40:55.000Z
2017-05-10T17:37:53.000Z
README.md
gitwarden/ansible-role-gitwarden
660cabee0a38196a1fd6e4c5415eadfb322b0496
[ "Apache-2.0" ]
null
null
null
README.md
gitwarden/ansible-role-gitwarden
660cabee0a38196a1fd6e4c5415eadfb322b0496
[ "Apache-2.0" ]
null
null
null
Gitwarden Agent Ansible Role
=====================

This role installs and configures the [GitWarden agent](https://github.com/gitwarden/gitwarden-agent) for the [GitWarden](https://gitwarden.com) service.

Requirements
------------

This role is only supported on RedHat-based and Debian-based systems, which includes all modern versions of the following:

* Debian (squeeze, jessie)
* Ubuntu (precise, trusty, wheezy, xenial, yakkety, zesty)
* RedHat Enterprise Linux (6, 7)
* CentOS (6, 7)
* Amazon Linux (2017.03, 2016.09, 2016.03, 2015.09, 2015.03)

There are no other requirements for running this role.

Role Variables
--------------

The variables for running this role are:

* `gitwarden_api_key_id` (string) - is the GitWarden API key ID for your Github organization. This can be obtained through the [GitWarden dashboard](https://gitwarden.com) once your organization is added.
* `gitwarden_api_key_secret` (string) - is the GitWarden API key secret for the corresponding API ID above.
* `gitwarden_teams` (list of strings) - is a list of Github teams to create user accounts for on the bootstrapped instance. The included team names should match the display name of the Github team (or what you see when viewing the team in Github).
* `gitwarden_admin_teams` (list of strings) - is a list of Github teams to create _administrative_ accounts for on the bootstrapped instance. Any team included in this listing will have super-user access on the instance. The included team names should match the display name of the Github team (or what you see when viewing the team in Github).

No other variables are required.

Dependencies
------------

There are no dependencies for this role.
Example Playbook
----------------

Here is a small example of how to use the role:

```yaml
- hosts: servers
  roles:
    - role: gitwarden.gitwarden-agent
      gitwarden_api_key_id: MYAPIKEYID
      gitwarden_api_key_secret: MYAPIKEYSECRET
      gitwarden_teams:
        - Employees
        - Contractors
      gitwarden_admin_teams:
        - System Administrators
        - DevOps
...
```

License
-------

Apache2

Author
------

Created by Ross @ [GitWarden](https://gitwarden.com).
27.682927
80
0.696916
eng_Latn
0.99524
b9ca94780cc534996cd0135353fe0640f1b458a5
586
md
Markdown
tests/test_scripts/output/genmarkdown/biolink-model/subclass_of.md
saramsey/biolink-model
3f3d13969a45407b775060d12b3210f3100a3fd1
[ "CC0-1.0" ]
1
2018-08-22T18:02:41.000Z
2018-08-22T18:02:41.000Z
tests/test_scripts/output/genmarkdown/biolink-model/subclass_of.md
saramsey/biolink-model
3f3d13969a45407b775060d12b3210f3100a3fd1
[ "CC0-1.0" ]
20
2018-06-09T22:58:57.000Z
2018-07-09T23:58:38.000Z
tests/test_scripts/output/genmarkdown/biolink-model/subclass_of.md
saramsey/biolink-model
3f3d13969a45407b775060d12b3210f3100a3fd1
[ "CC0-1.0" ]
null
null
null
# Slot: subclass of

holds between two classes where the domain class is a specialization of the range class

URI: [http://w3id.org/biolink/vocab/subclass_of](slot_uri)

## Mappings

* [rdfs:subClassOf](http://purl.obolibrary.org/obo/rdfs_subClassOf)
* [SEMMEDDB:IS_A](http://purl.obolibrary.org/obo/SEMMEDDB_IS_A)
* [WD:P279](http://purl.obolibrary.org/obo/WD_P279)

## Domain and Range

[OntologyClass](OntologyClass.md) -> [OntologyClass](OntologyClass.md)

## Inheritance

* is_a: [related to](related_to.md)

## Children

## Used in

* usage: [OntologyClass](OntologyClass.md)
25.478261
87
0.737201
eng_Latn
0.354508
b9caf1e66443361aff14e26b0281297b68631b0b
1,845
md
Markdown
bareme_tp.md
Hosnytos/Symfony-L3-master
95bea9e5ca84667db333c3589ddfa84b0411b0ff
[ "MIT" ]
null
null
null
bareme_tp.md
Hosnytos/Symfony-L3-master
95bea9e5ca84667db333c3589ddfa84b0411b0ff
[ "MIT" ]
null
null
null
bareme_tp.md
Hosnytos/Symfony-L3-master
95bea9e5ca84667db333c3589ddfa84b0411b0ff
[ "MIT" ]
null
null
null
## Grading scale for the TP - LOT 1

The project must be submitted as a `.zip` or `.rar` file, sent to me by private message on Teams.

The project folders/files to include in the compressed archive are the following:

* `assets`
* `config`
* `public`
* `src`
* `templates`
* `.env.local`
* `composer.json`
* `package.json`
* `webpack.config.js`

Below are the items to complete in order to earn the maximum number of points on your first TP grade.

* Implementation of the specification
* Navigation (_working links_)
  + Banner / Header / Footer
  + Article pages
  + Offer pages
  + Contact page
* News section
  - Administration (_create/edit/delete an article_)
  - Public area (_list of articles / an article's page / other articles visible from an article's page_)
* Contact requests
  - Administration (_list of contact requests_)
  - Form (_working contact request form / confirmation message after the request_)
* Home page
  - Administration (_create/modify the home page_)
  - Public area (_integration of the created home page as the site's home page_)
* Offers page - optional
  - Administration (_create / edit / delete_)
  - Public area (_list of offers / button to subscribe to an offer - which does not work for now_)
* Authentication (_sign-up / sign-in / sign-out / user area_)
* Code quality
  - No unused code left in comments
  - Code indentation
* Bugs: -0.25 point / bug
* Bonus
  - Form labels and placeholders (cf. TP_ex3)
  - Creation date of a contact request (cf. TP_ex3)
40.108696
121
0.696477
fra_Latn
0.976712
b9cafa8820b9be5f3a8db5f6ce6a8e35b7395d92
57
md
Markdown
README.md
juzinek/algorithm-example
f8b0fc074dcd41dd771126039aaea1f42de3fdaf
[ "MIT" ]
null
null
null
README.md
juzinek/algorithm-example
f8b0fc074dcd41dd771126039aaea1f42de3fdaf
[ "MIT" ]
null
null
null
README.md
juzinek/algorithm-example
f8b0fc074dcd41dd771126039aaea1f42de3fdaf
[ "MIT" ]
null
null
null
# algorithm-example

Examples of algorithm implementation
19
36
0.859649
eng_Latn
0.981375
b9cb89d4f6f076197aee7463d8d7a274221a4cad
5,057
md
Markdown
README.md
Vydia/graphql-batch
464392fbc981399db87e1ea44659b712bfbc1d6a
[ "MIT" ]
null
null
null
README.md
Vydia/graphql-batch
464392fbc981399db87e1ea44659b712bfbc1d6a
[ "MIT" ]
null
null
null
README.md
Vydia/graphql-batch
464392fbc981399db87e1ea44659b712bfbc1d6a
[ "MIT" ]
null
null
null
# GraphQL::Batch

[![Build Status](https://travis-ci.org/Shopify/graphql-batch.svg?branch=master)](https://travis-ci.org/Shopify/graphql-batch)
[![Gem Version](https://badge.fury.io/rb/graphql-batch.svg)](https://rubygems.org/gems/graphql-batch)

Provides an executor for the [`graphql` gem](https://github.com/rmosolgo/graphql-ruby) which allows queries to be batched.

## Installation

Add this line to your application's Gemfile:

```ruby
gem 'graphql-batch'
```

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install graphql-batch

## Usage

### Basic Usage

#### Schema Configuration

Require the library

```ruby
require 'graphql/batch'
```

Define a custom loader, which is initialized with arguments that are used for grouping and a perform method for performing the batch load.

```ruby
class RecordLoader < GraphQL::Batch::Loader
  def initialize(model)
    @model = model
  end

  def perform(ids)
    @model.where(id: ids).each { |record| fulfill(record.id, record) }
    ids.each { |id| fulfill(id, nil) unless fulfilled?(id) }
  end
end
```

Use `GraphQL::Batch` as a plugin in your schema _after_ specifying the mutation so that `GraphQL::Batch` can extend the mutation fields to clear the cache after they are resolved (for graphql >= `1.5.0`).

```ruby
class MySchema < GraphQL::Schema
  query MyQueryType
  mutation MyMutationType

  use GraphQL::Batch
end
```

For pre `1.5.0` versions:

```ruby
MySchema = GraphQL::Schema.define do
  query MyQueryType

  GraphQL::Batch.use(self)
end
```

#### Field Usage

The loader class can be used from the resolver for a graphql field by calling `.for` with the grouping arguments to get a loader instance, then call `.load` on that instance with the key to load.

```ruby
field :product, Types::Product, null: true do
  argument :id, ID, required: true
end

def product(id:)
  RecordLoader.for(Product).load(id)
end
```

The loader also supports batch loading an array of records instead of just a single record, via `load_many`.
For example:

```ruby
field :products, [Types::Product, null: true], null: false do
  argument :ids, [ID], required: true
end

def products(ids:)
  RecordLoader.for(Product).load_many(ids)
end
```

Although this library doesn't have a dependency on active record, the [examples directory](examples) has record and association loaders for active record which handle edge cases like type casting ids and overriding GraphQL::Batch::Loader#cache_key to load associations on records with the same id.

### Promises

GraphQL::Batch::Loader#load returns a Promise using the [promise.rb gem](https://rubygems.org/gems/promise.rb) to provide a promise based API, so you can transform the query results using `.then`

```ruby
def product_title(id:)
  RecordLoader.for(Product).load(id).then do |product|
    product.title
  end
end
```

You may also need to do another query that depends on the first one to get the result, in which case the query block can return another query.

```ruby
def product_image(id:)
  RecordLoader.for(Product).load(id).then do |product|
    RecordLoader.for(Image).load(product.image_id)
  end
end
```

If the second query doesn't depend on the first one, then you can use Promise.all, which allows each query in the group to be batched with other queries.

```ruby
def all_collections
  Promise.all([
    CountLoader.for(Shop, :smart_collections).load(context.shop_id),
    CountLoader.for(Shop, :custom_collections).load(context.shop_id),
  ]).then do |results|
    results.reduce(&:+)
  end
end
```

`.then` can optionally take two lambda arguments, the first of which is equivalent to passing a block to `.then`, and the second one handles exceptions. This can be used to provide a fallback

```ruby
def product(id:)
  # Try the cache first ...
  CacheLoader.for(Product).load(id).then(nil, lambda do |exc|
    # But if there's a connection error, go to the underlying database
    raise exc unless exc.is_a?(Redis::BaseConnectionError)
    logger.warn exc.message
    RecordLoader.for(Product).load(id)
  end)
end
```

## Unit Testing

Your loaders can be tested outside of a GraphQL query by doing the batch loads in a block passed to GraphQL::Batch.batch. That method will set up thread-local state to store the loaders, batch load any promise returned from the block, then clear the thread-local state to avoid leaking state between tests.

```ruby
def test_single_query
  product = products(:snowboard)
  title = GraphQL::Batch.batch do
    RecordLoader.for(Product).load(product.id).then(&:title)
  end
  assert_equal product.title, title
end
```

## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

## Contributing

See our [contributing guidelines](CONTRIBUTING.md) for more information.

## License

The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
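The core batching pattern the gem is built on — queue up keys, then resolve them all in one call — can be sketched in a few lines of plain Ruby. This `TinyLoader` is a hypothetical illustration, not the gem's implementation (no real promises, no thread-local state, no caching):

```ruby
# Minimal sketch of the batching idea: load() queues a key and returns
# a thunk; flush() performs one batched lookup for all queued keys.
class TinyLoader
  def initialize(&perform)
    @perform = perform   # block mapping an array of keys to a Hash
    @queue = []
    @results = {}
  end

  def load(key)
    @queue << key
    -> { @results.fetch(key) }  # call after flush to read the result
  end

  def flush
    @perform.call(@queue.uniq).each { |k, v| @results[k] = v }
    @queue.clear
  end
end

db = { 1 => 'snowboard', 2 => 'surfboard' }
loader = TinyLoader.new { |ids| ids.map { |id| [id, db[id]] }.to_h }

p1 = loader.load(1)
p2 = loader.load(2)
loader.flush                # a single batched lookup for both keys
puts p1.call                # => snowboard
puts p2.call                # => surfboard
```

The real gem layers promise chaining (`.then`), per-query caching, and automatic flushing by the executor on top of this idea.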
27.483696
201
0.735614
eng_Latn
0.971425
b9cbe1277352f4a4fddfcd608b8c402c052b90dd
3,142
md
Markdown
winrt-related-src/toolkits/winui/release-notes/index.md
borool/winrt-related
b863ae3964533b51fcf71e11712f09992c553e00
[ "CC-BY-4.0", "MIT" ]
1
2020-05-20T10:32:56.000Z
2020-05-20T10:32:56.000Z
winrt-related-src/toolkits/winui/release-notes/index.md
rahulkps23/winrt-related
3d73b2b1af5e2c8e4b1aa519fe567aa931ae0c5b
[ "CC-BY-4.0", "MIT" ]
null
null
null
winrt-related-src/toolkits/winui/release-notes/index.md
rahulkps23/winrt-related
3d73b2b1af5e2c8e4b1aa519fe567aa931ae0c5b
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: WinUI Release Notes
description: Index of WinUI release notes.
ms.date: 11/01/2019
ms.topic: reference
---

# Windows UI Library Release Notes

The Windows UI Library (WinUI) is an open source project hosted on GitHub. All new releases are built from the [Windows UI Library GitHub repo](https://aka.ms/winui).

We also welcome bug reports, feature requests and community code contributions in the [Windows UI Library repo](https://aka.ms/winui).

WinUI includes two NuGet packages:

* [Microsoft.UI.Xaml](https://www.nuget.org/packages/Microsoft.UI.Xaml): Controls and Fluent Design for UWP apps. This is the main WinUI package.
* [Microsoft.UI.Xaml.Core.Direct](https://www.nuget.org/packages/Microsoft.UI.Xaml.Core.Direct): Low-level APIs for use in middleware components.

You can download and use WinUI packages in your app using the NuGet package manager: see [Getting Started with the Windows UI Library](https://docs.microsoft.com/uwp/toolkits/winui/getting-started) for more information.

## Examples

The Xaml Controls Gallery sample app includes interactive demos and sample code for using WinUI controls.

* Install the XAML Controls Gallery app from the [Microsoft Store](https://www.microsoft.com/p/xaml-controls-gallery/9msvh128x2zt)
* The Xaml Controls Gallery is also [open source on GitHub](https://github.com/Microsoft/Xaml-Controls-Gallery)

## Documentation

How-to articles for Windows UI Library controls are included with the [Universal Windows Platform controls documentation](/windows/uwp/design/controls-and-patterns/).

API reference docs are located here: [Windows UI Library APIs](/uwp/api/overview/winui/).
## Version History

Version history for the main [Microsoft.UI.Xaml NuGet package](https://www.nuget.org/packages/Microsoft.UI.Xaml):

| Microsoft.UI.Xaml NuGet Version | Type | Release date | Release notes | Highlights |
| --- | --- | --- | --- | --- |
| [2.3](winui-2.3.md) | stable | November 2019 | [Release Notes](winui-2.3.md) | [Progress Bar Visual Refresh](winui-2.3.md#progress-bar-visual-refresh), [NumberBox](winui-2.3.md#numberbox), [RadioButtons](winui-2.3.md#radiobuttons) |
| [2.2](winui-2.2.md) | stable | August 2019 | [Release Notes](winui-2.2.md) | [TabView](winui-2.2.md#tabview), [NavigationView Updates](winui-2.2.md#navigationview-updates), [Visual Style Updates](winui-2.2.md#visual-style-updates) |
| [2.1](winui-2.1.md) | stable | April 2019 | [Release Notes](winui-2.1.md) | First open source release from [GitHub](https://github.com/microsoft/microsoft-ui-xaml). <br />[ItemsRepeater](winui-2.1.md#itemsrepeater), [AnimatedVisualPlayer](winui-2.1.md#animatedvisualplayer), [TeachingTip](winui-2.1.md#teachingtip), [RadioMenuFlyoutItem](winui-2.1.md#radiomenuflyoutitem), [CompactDensity](winui-2.1.md#compactdensity), [Shadows](winui-2.1.md#shadows). |
| [2.0](winui-2.0.md) | stable | October 2018 | [Release Notes](winui-2.0.md) | Initial release.<br>Includes official native Fluent controls and features for Windows UWP apps. |

### WinUI 3 Alpha

For information on trying out early previews of WinUI 3 see [WinUI 3.0 (Alpha)](../../winui3/index.md).
58.185185
456
0.74825
eng_Latn
0.412207
b9ccb8e7729caa8848cfe019a655aa4a95a4b0ba
163
md
Markdown
_includes/05-emphasis.md
srabatin/markdown-portfolio
ab0e9ae0053a265ff30161452d5f694daf51d7a6
[ "MIT" ]
null
null
null
_includes/05-emphasis.md
srabatin/markdown-portfolio
ab0e9ae0053a265ff30161452d5f694daf51d7a6
[ "MIT" ]
5
2021-03-07T15:59:57.000Z
2021-03-07T16:36:34.000Z
_includes/05-emphasis.md
srabatin/markdown-portfolio
ab0e9ae0053a265ff30161452d5f694daf51d7a6
[ "MIT" ]
null
null
null
Write out some of your **awesome** <del>attributes</del>, and use emphasis (like bold or italics) to identify keywords, programming languages, or skills. :+1:
32.6
74
0.730061
eng_Latn
0.99266
b9cda0e3158719bfa925ba3be3b4c3b32fff7f89
1,264
md
Markdown
app_ui/node_modules/snap-points-2d/README.md
zcliang97/stock-visualizer
9dff529c021df787c38b6f00fd323622d48d3ec1
[ "MIT" ]
4
2018-07-31T09:39:33.000Z
2019-05-22T23:56:18.000Z
node_modules/gl-scatter2d/node_modules/snap-points-2d/README.md
LanonD/redash
9dcb2541f842675bab04f1dd75a95e6ba2e341fc
[ "BSD-2-Clause" ]
2
2020-07-17T10:03:57.000Z
2021-05-09T15:42:13.000Z
node_modules/gl-scatter2d/node_modules/snap-points-2d/README.md
LanonD/redash
9dcb2541f842675bab04f1dd75a95e6ba2e341fc
[ "BSD-2-Clause" ]
null
null
null
snap-points-2d
==============

Runs iterative snap rounding on a set of 2D coordinates to produce a hierarchical level of detail for optimizing online rendering of huge 2D plots.

# Install

```
npm i snap-points-2d
```

# API

#### `var hlod = require('snap-points-2d')(points, ids, weights[, bounds])`

Reorders the `points` hierarchically such that those which are drawn at the same pixel coordinate are grouped together.

* `points` is an input array of 2*n values, which gets reordered
* `ids` is an output array which gets the reordered index of the points
* `weights` is an output array of point weights (number of points at the same pixel), which can be used for transparent rendering
* `bounds` is an optional input array of 4 values giving the bounding box of the points

**Returns** An array of LOD scales. Each record is an object with the following properties:

* `pixelSize` the pixel size of this level of detail in data units
* `offset` the offset of this lod within the output array
* `count` the number of items in the lod

**Note** the points in `points` are rescaled to the unit box `[0,1]x[0,1]`, and the array is shuffled in place during execution.

# License

(c) 2015 Mikola Lysenko. MIT License
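The grouping that snap-points-2d performs at a single level of detail can be illustrated with a standalone sketch. This `snapOnce` helper is hypothetical and far simpler than the module's iterative, hierarchical, in-place implementation:

```javascript
// Toy version of one snap-rounding pass: points that fall on the same
// pixel at the given pixelSize collapse into a single representative
// whose weight is the number of collapsed points.
function snapOnce(points, pixelSize) {
  const buckets = new Map()
  for (let i = 0; i < points.length / 2; i++) {
    const key =
      Math.round(points[2 * i] / pixelSize) + ',' +
      Math.round(points[2 * i + 1] / pixelSize)
    buckets.set(key, (buckets.get(key) || 0) + 1)
  }
  // One [x, y, weight] entry per occupied pixel
  return [...buckets.entries()].map(([key, weight]) => {
    const [px, py] = key.split(',').map(Number)
    return [px * pixelSize, py * pixelSize, weight]
  })
}

// The first two points share a pixel at this scale, so they merge
// into one entry with weight 2:
console.log(snapOnce([0.0, 0.0, 0.01, 0.01, 0.5, 0.5], 0.1))
```

The module repeats this kind of pass at successively coarser `pixelSize` values and records an `offset`/`count` pair per scale, which is what makes the output usable as a level-of-detail hierarchy.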
39.5
148
0.719937
eng_Latn
0.999288
b9cf1bd69f5de9636b0ba0438647e79cb3f63098
952
md
Markdown
includes/files/32_2/correction.md
infocornouaille/nsi-cornouaille
dee16f83c0293893a361d93c9e8622b7babef869
[ "MIT" ]
null
null
null
includes/files/32_2/correction.md
infocornouaille/nsi-cornouaille
dee16f83c0293893a361d93c9e8622b7babef869
[ "MIT" ]
null
null
null
includes/files/32_2/correction.md
infocornouaille/nsi-cornouaille
dee16f83c0293893a361d93c9e8622b7babef869
[ "MIT" ]
null
null
null
```python linenums='1'
class AdresseIP:

    def __init__(self, adresse):
        self.adresse = adresse

    def liste_octet(self):
        """Return the list of the IP address's octets as integers."""
        return [int(i) for i in self.adresse.split(".")]

    def est_reservee(self):
        """Return True if the IP address is a reserved address, False otherwise."""
        return self.liste_octet()[3] == 0 or self.liste_octet()[3] == 255

    def adresse_suivante(self):
        """Return an AdresseIP object holding the IP address that follows
        self's address if it exists, and False otherwise."""
        if self.liste_octet()[3] < 254:
            octet_nouveau = self.liste_octet()[3] + 1
            return AdresseIP('192.168.0.' + str(octet_nouveau))
        else:
            return False


adresse1 = AdresseIP('192.168.0.1')
adresse2 = AdresseIP('192.168.0.2')
adresse3 = AdresseIP('192.168.0.0')
```
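As a quick check of the correction, the class can be exercised as below. The class is repeated in the snippet so it runs on its own, and the sample addresses are illustrative:

```python
# Standalone demo of the AdresseIP class from the correction above.
class AdresseIP:
    def __init__(self, adresse):
        self.adresse = adresse

    def liste_octet(self):
        """Return the address's octets as a list of integers."""
        return [int(i) for i in self.adresse.split(".")]

    def est_reservee(self):
        """Return True if the address is reserved (.0 or .255)."""
        return self.liste_octet()[3] == 0 or self.liste_octet()[3] == 255

    def adresse_suivante(self):
        """Return the next AdresseIP if it exists, else False."""
        if self.liste_octet()[3] < 254:
            return AdresseIP('192.168.0.' + str(self.liste_octet()[3] + 1))
        return False


adresse = AdresseIP('192.168.0.1')
print(adresse.liste_octet())                      # [192, 168, 0, 1]
print(adresse.est_reservee())                     # False
print(adresse.adresse_suivante().adresse)         # 192.168.0.2
print(AdresseIP('192.168.0.0').est_reservee())    # True
```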
31.733333
73
0.608193
fra_Latn
0.470927
b9d0e9b6f122023d46b44948f0f443bff48eba07
15,399
md
Markdown
docs/graphics-games/cocossharp/ccdrawnode.md
Diilumar/xamarin-docs.es-es
4af6810651301c558b630282158f66815869a098
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/graphics-games/cocossharp/ccdrawnode.md
Diilumar/xamarin-docs.es-es
4af6810651301c558b630282158f66815869a098
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/graphics-games/cocossharp/ccdrawnode.md
Diilumar/xamarin-docs.es-es
4af6810651301c558b630282158f66815869a098
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Drawing geometry with CCDrawNode
description: This document describes CCDrawNode, which provides methods for drawing simple objects such as lines, circles, and triangles.
ms.prod: xamarin
ms.assetid: 46A3C3CE-74CC-4A3A-AB05-B694AE182ADB
author: conceptdev
ms.author: crdun
ms.date: 03/24/2017
ms.openlocfilehash: b910e136366c429de8bd2ba1ac959882b4d7201d
ms.sourcegitcommit: e268fd44422d0bbc7c944a678e2cc633a0493122
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/25/2018
ms.locfileid: "50123198"
---
# <a name="drawing-geometry-with-ccdrawnode"></a>Drawing geometry with CCDrawNode

_`CCDrawNode` provides methods for drawing simple objects such as lines, circles, and triangles._

The CocosSharp `CCDrawNode` class provides several methods for drawing common geometric shapes. It inherits from the `CCNode` class and is typically added to `CCLayer` instances. This guide explains how to use `CCDrawNode` instances to perform custom rendering, and provides a complete list of the available drawing functions with screenshots and code examples.

## <a name="creating-a-ccdrawnode"></a>Creating a CCDrawNode

The `CCDrawNode` class can be used to draw geometric objects such as lines, rectangles, and circles. For example, the following code shows how to create a `CCDrawNode` instance that draws a circle in a `CCLayer` class implementation:

```csharp
public class GameLayer : CCLayer
{
    public GameLayer ()
    {
        var drawNode = new CCDrawNode ();
        this.AddChild (drawNode);

        // Origin is bottom-left of the screen.
        // This moves the drawNode 100 pixels to the right and 100 pixels up
        drawNode.PositionX = 100;
        drawNode.PositionY = 100;

        drawNode.DrawCircle (
            center: new CCPoint (0, 0),
            radius: 20,
            color: CCColor4B.White);
    }
}
```

This code produces the following circle at runtime:

![](ccdrawnode-images/image1.png "This code produces this circle at runtime")

## <a name="draw-method-details"></a>Draw method details

Let's look at some details of drawing with a `CCDrawNode`:

### <a name="draw-methods-positions-are-relative-to-the-ccdrawnode"></a>Draw method positions are relative to the CCDrawNode

All draw methods require at least one position value for the drawing. This position value is relative to the `CCDrawNode` instance. This means the `CCDrawNode` itself has a position, and all draw calls made on the `CCDrawNode` also take one or more position values. To help understand how these values combine, let's look at a few examples. First, consider the `DrawCircle` example above:

```csharp
...
drawNode.PositionX = 100;
drawNode.PositionY = 100;

drawNode.DrawCircle (center: new CCPoint (0, 0),
...
```

In this case, the `CCDrawNode` is positioned at (100,100), and the circle is drawn at (0,0) relative to the `CCDrawNode`, resulting in the circle being centered 100 pixels up and to the right of the bottom-left corner of the game screen.

The `CCDrawNode` can also be placed at the origin (bottom-left of the screen), relying on the circle's offsets:

```csharp
...
drawNode.PositionX = 0;
drawNode.PositionY = 0;

drawNode.DrawCircle (center: new CCPoint (50, 60),
...
```

The code above results in the center of the circle being 50 units (`drawNode.PositionX` + the `CCPoint.X`) to the right of the left edge of the screen, and 60 units (`drawNode.PositionY` + the `CCPoint.Y`) above the bottom of the screen.

Once a draw method has been called, the drawn object cannot be modified unless the `CCDrawNode.Clear` method is called, so any position changes must be made on the `CCDrawNode` itself. Objects drawn by `CCNodes` are also affected by the `CCNode` instance's `Rotation` and `Scale` properties.

### <a name="draw-methods-do-not-need-to-be-called-every-frame"></a>Draw methods do not need to be called every frame

Draw methods need to be called only once to create a persistent visual object. In the example above, `DrawCircle` is called in the constructor of the `GameLayer` – `DrawCircle` does not need to be called every frame to re-draw the circle when the screen refreshes.

This differs from MonoGame draw methods, which typically render something to the screen for a single frame and must be called every frame. If draw methods are called every frame, objects will eventually accumulate inside the calling `CCDrawNode` instance, causing the frame rate to drop as more objects are drawn.

### <a name="each-ccdrawnode-supports-multiple-draw-calls"></a>Each CCDrawNode supports multiple draw calls

`CCDrawNode` instances can be used to draw multiple shapes. This allows complex visual objects to be contained in a single object.
For example, the following code can be used to render multiple circles with a single `CCDrawNode`:

```csharp
for (int i = 0; i < 8; i++)
{
    drawNode.DrawCircle (
        center: new CCPoint (i*15, 0),
        radius: 20,
        color: CCColor4B.White);
}
```

This results in the following graphic:

![](ccdrawnode-images/image2.png "This results in this graphic")

## <a name="draw-call-examples"></a>Draw call examples

The following draw calls are available on `CCDrawNode`:

- [`DrawCatmullRom`](#drawcatmullrom)
- [`DrawCircle`](#drawcircle)
- [`DrawCubicBezier`](#drawcubicbezier)
- [`DrawEllipse`](#drawellipse)
- [`DrawLineList`](#drawlinelist)
- [`DrawPolygon`](#drawpolygon)
- [`DrawQuadBezier`](#drawquadbezier)
- [`DrawRect`](#drawrect)
- [`DrawSegment`](#drawsegment)
- [`DrawSolidArc`](#drawsolidarc)
- [`DrawSolidCircle`](#drawsolidcircle)
- [`DrawTriangleList`](#drawtrianglelist)

### <a name="drawcardinalspline"></a>DrawCardinalSpline

`DrawCardinalSpline` creates a curved line through a variable number of points. The `config` parameter defines which points the spline will pass directly through. The following example shows a spline passing through four points.

The `tension` parameter controls how sharp or rounded the points of the spline appear. A `tension` value of 0 results in a curved spline, and a `tension` value of 1 results in a spline drawn with straight lines and hard edges.

Although splines are curved lines, CocosSharp draws splines using straight line segments. The `segments` parameter controls how many segments to use to draw the spline. A larger number results in a smoother spline at the cost of performance.
Code example:

```csharp
var splinePoints = new List<CCPoint> ();
splinePoints.Add (new CCPoint (0, 0));
splinePoints.Add (new CCPoint (50, 70));
splinePoints.Add (new CCPoint (0, 140));
splinePoints.Add (new CCPoint (100, 210));

drawNode.DrawCardinalSpline (
    config: splinePoints,
    tension: 0,
    segments: 64,
    color:CCColor4B.Red);
```

![](ccdrawnode-images/image3.png "The segments parameter controls how many segments to use to draw the spline")

### <a name="drawcatmullrom"></a>DrawCatmullRom

`DrawCatmullRom` creates a curved line through a variable number of points, similar to `DrawCardinalLine`. This method does not include a tension parameter.

Code example:

```csharp
var splinePoints = new List<CCPoint> ();
splinePoints.Add (new CCPoint (0, 0));
splinePoints.Add (new CCPoint (80, 90));
splinePoints.Add (new CCPoint (100, 0));
splinePoints.Add (new CCPoint (0, 130));

drawNode.DrawCatmullRom (
    points: splinePoints,
    segments: 64);
```

![](ccdrawnode-images/image4.png "DrawCatmullRom creates a curved line through a variable number of points, similar to DrawCardinalLine")

### <a name="drawcircle"></a>DrawCircle

`DrawCircle` creates the outline of a circle of a given `radius`.

Code example:

```csharp
drawNode.DrawCircle (
    center:new CCPoint (0, 0),
    radius:20,
    color:CCColor4B.Yellow);
```

![](ccdrawnode-images/image5.png "DrawCircle creates the outline of a circle of a given radius")

### <a name="drawcubicbezier"></a>DrawCubicBezier

`DrawCubicBezier` draws a curved line between two points, with control points to set the path between the two points.
Code example:

```csharp
drawNode.DrawCubicBezier (
    origin: new CCPoint (0, 0),
    control1: new CCPoint (50, 150),
    control2: new CCPoint (250, 150),
    destination: new CCPoint (170, 0),
    segments: 64,
    lineWidth: 1,
    color: CCColor4B.Green);
```

![](ccdrawnode-images/image6.png "DrawCubicBezier draws a curved line between two points")

### <a name="drawellipse"></a>DrawEllipse

`DrawEllipse` creates the outline of an *ellipse*, which is often referred to as an oval (although the two are not geometrically identical). The shape of the ellipse can be defined by a `CCRect` instance.

Code example:

```csharp
drawNode.DrawEllipse (
    rect: new CCRect (0, 0, 130, 90),
    lineWidth: 2,
    color: CCColor4B.Gray);
```

![](ccdrawnode-images/image8.png "DrawEllipse creates the outline of an ellipse, which is often referred to as an oval")

### <a name="drawline"></a>DrawLine

`DrawLine` connects two points with a line of a specified width. This method is similar to `DrawSegment`, except that it creates flat endpoints rather than rounded endpoints.

Code example:

```csharp
drawNode.DrawLine (
    from: new CCPoint (0, 0),
    to: new CCPoint (150, 30),
    lineWidth: 5,
    color:CCColor4B.Orange);
```

![](ccdrawnode-images/image9.png "DrawLine connects two points with a line of a specified width")

### <a name="drawlinelist"></a>DrawLineList

`DrawLineList` creates multiple lines by connecting each pair of points specified in a `CCV3F_C4B` array. The `CCV3F_C4B` struct contains position and color values. The `verts` parameter should always contain an even number of points, as each line is defined by two points.
Code example:

```csharp
CCV3F_C4B[] verts = new CCV3F_C4B[] {
    // First line:
    new CCV3F_C4B( new CCPoint(0,0), CCColor4B.White),
    new CCV3F_C4B( new CCPoint(30,60), CCColor4B.White),
    // second line, will blend from white to red:
    new CCV3F_C4B( new CCPoint(60,0), CCColor4B.White),
    new CCV3F_C4B( new CCPoint(120,120), CCColor4B.Red)
};

drawNode.DrawLineList (verts);
```

![](ccdrawnode-images/image10.png "The verts parameter should always contain an even number of points, as each line is defined by two points")

### <a name="drawpolygon"></a>DrawPolygon

`DrawPolygon` creates a filled polygon with an outline of variable color and width.

Code example:

```csharp
CCPoint[] verts = new CCPoint[] {
    new CCPoint(0,0),
    new CCPoint(0, 100),
    new CCPoint(50, 150),
    new CCPoint(100, 100),
    new CCPoint(100, 0)
};

drawNode.DrawPolygon (verts,
    count: verts.Length,
    fillColor: CCColor4B.White,
    borderWidth: 5,
    borderColor: CCColor4B.Red,
    closePolygon: true);
```

![](ccdrawnode-images/image11.png "DrawPolygon creates a filled polygon with an outline of variable color and width")

### <a name="drawquadbezier"></a>DrawQuadBezier

`DrawQuadBezier` connects two points with a line. It behaves like `DrawCubicBezier`, but supports only a single control point.

Code example:

```csharp
drawNode.DrawQuadBezier (
    origin:new CCPoint (0, 0),
    control:new CCPoint (200, 0),
    destination:new CCPoint (0, 300),
    segments:64,
    lineWidth:1,
    color:CCColor4B.White);
```

![](ccdrawnode-images/image12.png "DrawQuadBezier connects two points with a line")

### <a name="drawrect"></a>DrawRect

`DrawRect` creates a filled rectangle with an outline of variable color and width.
Code example:

```csharp
var shape = new CCRect (
    0, 0, 100, 200);
drawNode.DrawRect(shape,
    fillColor:CCColor4B.Blue,
    borderWidth: 4,
    borderColor:CCColor4B.White);
```

![](ccdrawnode-images/image13.png "DrawRect creates a filled rectangle with an outline of variable color and width")

### <a name="drawsegment"></a>DrawSegment

`DrawSegment` connects two points with a line of variable color and width. It is similar to `DrawLine`, except that it creates rounded endpoints rather than flat endpoints.

Code example:

```csharp
drawNode.DrawSegment (from: new CCPoint (0, 0),
    to: new CCPoint (100, 200),
    radius: 5,
    color:new CCColor4F(1,1,1,1));
```

![](ccdrawnode-images/image14.png "DrawSegment connects two points with a line of variable color and width")

### <a name="drawsolidarc"></a>DrawSolidArc

`DrawSolidArc` creates a filled wedge of a given color and radius.

Code example:

```csharp
drawNode.DrawSolidArc(
    pos:new CCPoint(100, 100),
    radius:200,
    startAngle:0,
    sweepAngle:CCMathHelper.Pi/2, // this is in radians, clockwise
    color:CCColor4B.White);
```

![](ccdrawnode-images/image15.png "DrawSolidArc creates a filled wedge of a given color and radius")

### <a name="drawsolidcircle"></a>DrawSolidCircle

`DrawSolidCircle` creates a filled circle of a given radius.

Code example:

```csharp
drawNode.DrawSolidCircle(
    pos: new CCPoint (100, 100),
    radius: 50,
    color: CCColor4B.Yellow);
```

![](ccdrawnode-images/image16.png "DrawSolidCircle creates a filled circle of a given radius")

### <a name="drawtrianglelist"></a>DrawTriangleList

`DrawTriangleList` creates a list of triangles. Each triangle is defined by three `CCV3F_C4B` instances in an array. The number of vertices in the array passed to the `verts` parameter must be a multiple of three.
Note that the only information contained in `CCV3F_C4B` is the position of the verts and their color; the `DrawTriangleList` method does not support drawing triangles with textures.

Code example:

```csharp
CCV3F_C4B[] verts = new CCV3F_C4B[] {
    // First triangle:
    new CCV3F_C4B( new CCPoint(0,0), CCColor4B.White),
    new CCV3F_C4B( new CCPoint(30,60), CCColor4B.White),
    new CCV3F_C4B( new CCPoint(60,0), CCColor4B.White),
    // second triangle, each point has different colors:
    new CCV3F_C4B( new CCPoint(90,0), CCColor4B.Yellow),
    new CCV3F_C4B( new CCPoint(120,60), CCColor4B.Red),
    new CCV3F_C4B( new CCPoint(150,0), CCColor4B.Blue)
};

drawNode.DrawTriangleList (verts);
```

![](ccdrawnode-images/image17.png "DrawTriangleList creates a list of triangles")

## <a name="summary"></a>Summary

This guide explains how to create a `CCDrawNode` and perform primitive-based rendering. An example of each of the draw calls is provided.

## <a name="related-links"></a>Related links

- [CCDrawNode API](https://developer.xamarin.com/api/type/CocosSharp.CCDrawNode/)
- [Complete sample](https://developer.xamarin.com/samples/mobile/CCDrawNode/)
34.839367
405
0.739464
spa_Latn
0.930144
b9d10791bcb7e28eeeb1b5e4ee637cb94c240e9c
691
md
Markdown
docs/api/ae/universal.md
satoshipay/aeternity-aepp-sdk-js
b8f4fde3845ffbfe560bc81cef230a3deff519c6
[ "0BSD" ]
null
null
null
docs/api/ae/universal.md
satoshipay/aeternity-aepp-sdk-js
b8f4fde3845ffbfe560bc81cef230a3deff519c6
[ "0BSD" ]
1
2022-02-14T05:34:13.000Z
2022-02-14T05:34:13.000Z
docs/api/ae/universal.md
thecaliconoire/aepp-sdk-js
275f516b163e27d40180361274775fb93dacd20f
[ "0BSD" ]
null
null
null
<a id="module_@aeternity/aepp-sdk/es/ae/universal"></a> ## @aeternity/aepp-sdk/es/ae/universal Universal module **Export**: Universal **Example** ```js import Ae from '@aeternity/aepp-sdk/es/ae/universal' ``` <a id="exp_module_@aeternity/aepp-sdk/es/ae/universal--Universal"></a> ### Universal([options]) ⇒ `Object` ⏏ Universal Stamp Universal provides Ae base functionality with Contract and Aens [Ae](#exp_module_@aeternity/aepp-sdk/es/ae--Ae) clients. **Kind**: Exported function **Returns**: `Object` - Universal instance **rtype**: `Stamp` | Param | Type | Default | Description | | --- | --- | --- | --- | | [options] | `Object` | <code>{}</code> | Initializer object |
25.592593
70
0.665702
yue_Hant
0.690605
b9d146cf27d0a2816ba9cf85093f4484d1fed6f6
582
md
Markdown
api/Publisher.softedgeformat.md
RichardCory/VBA-Docs
1240462311fb77ee051d4e8b7d7a434d7d020dd3
[ "CC-BY-4.0", "MIT" ]
2
2020-03-09T13:24:12.000Z
2020-03-09T16:19:11.000Z
api/Publisher.softedgeformat.md
MarkFern/VBA-Docs
b84627cc8e24acfd336d1e9761a9ddd58f19d352
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Publisher.softedgeformat.md
MarkFern/VBA-Docs
b84627cc8e24acfd336d1e9761a9ddd58f19d352
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: SoftEdgeFormat object (Publisher) keywords: vbapb10.chm9633791 f1_keywords: - vbapb10.chm9633791 ms.prod: publisher ms.assetid: c14a02e0-8af2-55c3-1e22-78d60e1213f0 ms.date: 06/08/2017 localization_priority: Normal --- # SoftEdgeFormat object (Publisher) Represents the soft edge formatting for a shape or range of shapes. ## Properties |Name| |:-----| |[Radius](Publisher.softedgeformat.radius.md)| |[Type](Publisher.softedgeformat.type.md)| |[Visible](Publisher.softedgeformat.visible.md)| [!include[Support and feedback](~/includes/feedback-boilerplate.md)]
20.068966
68
0.764605
eng_Latn
0.614524
b9d17519a5f9389e73a86ffb9d5ec35ed165d106
6,360
md
Markdown
content/software/lmms.md
Softorage/Softorage
a4ec47e31da67ae6b1e3cbd739a7318b8a53d829
[ "Apache-2.0" ]
1
2020-09-12T11:13:50.000Z
2020-09-12T11:13:50.000Z
content/software/lmms.md
Softorage/Softorage
a4ec47e31da67ae6b1e3cbd739a7318b8a53d829
[ "Apache-2.0" ]
null
null
null
content/software/lmms.md
Softorage/Softorage
a4ec47e31da67ae6b1e3cbd739a7318b8a53d829
[ "Apache-2.0" ]
1
2020-11-07T23:14:26.000Z
2020-11-07T23:14:26.000Z
--- title: "LMMS" description: "LMMS (formerly Linux MultiMedia Studio) is a free and open source digital audio workstation (DAW) application program, release under GPL v2 license" image: "https://cdn.statically.io/img/img.softorage.com/software-logo/lmms.png?h=64" status: ["Active"] website: "https://lmms.io/" get_it: - from: "Authentic" url: "https://lmms.io/download/" - from: "Softpedia" url: "https://www.softpedia.com/get/Multimedia/Audio/Audio-Editors-Recorders/LMMS--Linux-MultiMedia-Studio.shtml" platform: - name: "Windows" hardware: ["dskp"] arch: ["x32"] official: true - from: "FileHippo" url: "https://filehippo.com/download_lmms/" platform: - name: "Windows" hardware: ["dskp"] official: true - from: "Softonic" url: "https://lmms.en.softonic.com/" platform: - name: "Windows" hardware: ["dskp"] official: true - from: "FileHorse" url: "https://www.filehorse.com/download-lmms-64/" platform: - name: "Windows" hardware: ["dskp"] arch: ["x64"] official: true - from: "Uptodown" url: "https://lmms.en.uptodown.com/windows" platform: - name: "Windows" hardware: ["dskp"] arch: ["x64"] official: true - from: "CNET" url: "https://download.cnet.com/LMMS-32-bit/3000-2170_4-10967914.html" platform: - name: "Windows" hardware: ["dskp"] arch: ["x32"] official: true - from: "CNET" url: "https://download.cnet.com/LMMS-64-bit/3000-2170_4-75289588.html" platform: - name: "Windows" hardware: ["dskp"] arch: ["x64"] official: true - from: "Malavida" url: "https://www.malavida.com/en/soft/lmms/" platform: - name: "Windows" hardware: ["dskp"] official: true - from: "MacUpdate" url: "https://www.macupdate.com/app/mac/55961/lmms" platform: - name: "macOS" hardware: ["dskp"] official: true sysreq: general: - min: "1 GHz CPU" - min: "512 MB RAM" - min: "2-channel sound-card" - min: "MIDI port if you plan to use your MIDI keyboard" developer: ["Paul Giblock<OD>", "Tobias Junghans<OD>", "LMMS developers"] initial_release: ["22 September 2005"] repository: ["https://github.com/LMMS/lmms"] 
written_in: ["C++", "C", "Qt"] platform: - dskp: - name: "Linux" official: true - name: "Windows" official: true arch: ["x32", "x64"] - name: "macOS" official: true categories: ["Digital Audio Workstation"] license: ["GPL v2+"] social: - name: "Facebook" url: "https://facebook.com/makefreemusic" - name: "YouTube" url: "https://www.youtube.com/user/LMMSOfficial" - name: "Bandcamp" url: "https://lmmsartists.bandcamp.com/" - name: "Wikipedia" url: "https://en.wikipedia.org/wiki/LMMS" source: overview: ["https://lmms.io/"] developer: ["https://github.com/LMMS/lmms/graphs/contributors", "https://sourceforge.net/projects/lmms/"] initial_release: ["https://en.wikipedia.org/w/index.php?title=LMMS&oldid=877499392", "https://github.com/LMMS/lmms/tags?after=v0.1.2", "https://github.com/LMMS/lmms/releases/tag/v0.0.1"] written_in: ["https://lmms.io/get-involved/", "https://github.com/LMMS/lmms"] platform: - dskp: ["https://lmms.io/download/"] sysreq: general: ["https://lmms.io/documentation/Requirements"] license: ["https://github.com/LMMS/lmms/blob/stable-1.2/LICENSE.txt"] rating: - name: "Gizmo's Freeware" type: "expert" url: "https://www.techsupportalert.com/content/lmms.htm-2" - name: "Softpedia" type: "expert" url: "https://www.softpedia.com/reviews/linux/LMMS-18082.shtml" remarks: "Linux" - name: "Softpedia" type: "user" url: "https://www.softpedia.com/get/Multimedia/Audio/Audio-Editors-Recorders/LMMS--Linux-MultiMedia-Studio.shtml" remarks: "x32 bit Windows" - name: "FileHippo" type: "user" url: "https://filehippo.com/download_lmms/" remarks: "Windows" - name: "Softonic" type: "user" url: "https://lmms.en.softonic.com/" remarks: "Windows" - name: "FileHorse" type: "user" url: "https://www.filehorse.com/download-lmms-64/" remarks: "x64 bit Windows" - name: "Malavida" type: "user" url: "https://www.malavida.com/en/soft/lmms/" remarks: "Windows" status: ["https://github.com/LMMS/lmms/graphs/contributors"] rating: - name: "Gizmo's Freeware" rate: [5, 5] - name: "Softpedia" 
rate: [5, 5] remarks: "Linux" - name: "Softpedia" rate: [4, 5] num: 97 remarks: "x32 bit Windows" - name: "FileHippo" rate: [8, 10] num: 522 remarks: "Windows" - name: "Softonic" rate: [7, 10] num: 641 remarks: "Windows" - name: "FileHorse" rate: [7.4, 10] num: 16 remarks: "x64 bit Windows" - name: "Malavida" rate: [8, 10] num: 19 remarks: "Windows" --- LMMS (formerly Linux MultiMedia Studio) is a free and open source [digital audio workstation](/categories/digital-audio-workstation) application program, release under GPL v2 license. It allows music to be produced by arranging samples, synthesizing sounds, playing on a MIDI keyboard and combining the features of trackers and sequencers, when run on appropriate hardware. It allows users to sequence, compose, mix and automate songs in one interface, note playback via MIDI or typing keyboard, consolidate instrument tracks using Beat+Bassline Editor, fine tune patterns, notes, chords and melodies using Piano Roll Editor. It has built-in 64-bit VST instrument support with 32-bit VST bridge (64-bit Windows) and drop-in "Linux Audio Developer's Simple Plugin API" (LADSPA) plug-in support, drop-in VST ® effect plug-in support (Linux and Windows). [Forum](https://lmms.io/forum/) I [Wiki](https://lmms.io/wiki/index.php?title=Main_Page) I [Developer Wiki](https://github.com/LMMS/lmms/wiki) I [Documentation(wiki)](https://lmms.io/documentation/) I [User FAQ](https://lmms.io/documentation/User_FAQ) I [Mailing lists](https://sourceforge.net/p/lmms/mailman/) I [IRC](https://webchat.freenode.net/?channels=lmms) I [Discord](https://lmms.io/chat/)
33.298429
479
0.632547
yue_Hant
0.376582
b9d233de09963d02aea06935a45a3fdaf65ace39
4,615
md
Markdown
README.md
wangdi244/open-falcon-falcon-plus
7e892917bdc65e71b3b4b30f32d46e5981bbaeef
[ "Apache-2.0" ]
1
2017-09-04T10:57:36.000Z
2017-09-04T10:57:36.000Z
README.md
wangdi244/open-falcon-falcon-plus
7e892917bdc65e71b3b4b30f32d46e5981bbaeef
[ "Apache-2.0" ]
null
null
null
README.md
wangdi244/open-falcon-falcon-plus
7e892917bdc65e71b3b4b30f32d46e5981bbaeef
[ "Apache-2.0" ]
null
null
null
# Falcon+

![Open-Falcon](./logo.png)

[![Build Status](https://travis-ci.org/open-falcon/falcon-plus.svg?branch=plus-dev)](https://travis-ci.org/open-falcon/falcon-plus)
[![codecov](https://codecov.io/gh/open-falcon/falcon-plus/branch/plus-dev/graph/badge.svg)](https://codecov.io/gh/open-falcon/falcon-plus)
[![GoDoc](https://godoc.org/github.com/open-falcon/falcon-plus?status.svg)](https://godoc.org/github.com/open-falcon/falcon-plus)
[![Code Issues](https://www.quantifiedcode.com/api/v1/project/5035c017b02c4a4a807ebc4e9f153e6f/badge.svg)](https://www.quantifiedcode.com/app/project/5035c017b02c4a4a807ebc4e9f153e6f)
[![Go Report Card](https://goreportcard.com/badge/github.com/open-falcon/falcon-plus)](https://goreportcard.com/report/github.com/open-falcon/falcon-plus)
[![License](https://img.shields.io/badge/LICENSE-Apache2.0-ff69b4.svg)](http://www.apache.org/licenses/LICENSE-2.0.html)

# Documentation

- [Usage](http://book.open-falcon.org)
- [Open-Falcon API](http://open-falcon.org/falcon-plus)

# Prerequisites

- Git >= 1.7.5
- Go >= 1.6

# Getting Started

## Docker

Please refer to ./docker/[README.md](https://github.com/open-falcon/falcon-plus/blob/master/docker/README.md).

## Build from source

**Before you start, please make sure you have prepared the following:**

```
yum install -y redis
yum install -y mysql-server
```

*NOTE: be sure to check that redis and mysql-server have started successfully.*

And then

```
# Please make sure that you have set `$GOPATH` and `$GOROOT` correctly.
# If you do not have golang on your host, please follow [https://golang.org/doc/install] to install golang.
mkdir -p $GOPATH/src/github.com/open-falcon
cd $GOPATH/src/github.com/open-falcon
git clone https://github.com/open-falcon/falcon-plus.git
```

**And do not forget to init the database first (if you have not loaded the database schema before)**

```
cd $GOPATH/src/github.com/open-falcon/falcon-plus/scripts/mysql/db_schema/
mysql -h 127.0.0.1 -u root -p < 1_uic-db-schema.sql
mysql -h 127.0.0.1 -u root -p < 2_portal-db-schema.sql
mysql -h 127.0.0.1 -u root -p < 3_dashboard-db-schema.sql
mysql -h 127.0.0.1 -u root -p < 4_graph-db-schema.sql
mysql -h 127.0.0.1 -u root -p < 5_alarms-db-schema.sql
```

**NOTE: if you are upgrading from v0.1 to the current version v0.2.0, then:** [More upgrading instructions](http://www.jianshu.com/p/6fb2c2b4d030)

    mysql -h 127.0.0.1 -u root -p < 5_alarms-db-schema.sql

# Compilation

```
cd $GOPATH/src/github.com/open-falcon/falcon-plus/

# make all modules
make all

# make specified module
make agent

# pack all modules
make pack
```

* *after `make pack` you will get `open-falcon-vx.x.x.tar.gz`*
* *if you want to edit the configure file for each module, you can edit `config/xxx.json` before you do `make pack`*

# Unpack and Decompose

```
export WorkDir="$HOME/open-falcon"
mkdir -p $WorkDir
tar -xzvf open-falcon-vx.x.x.tar.gz -C $WorkDir
cd $WorkDir
```

# Start all modules in single host

```
cd $WorkDir
./open-falcon start

# check modules status
./open-falcon check
```

# Run More Open-Falcon Commands

for example:

```
# ./open-falcon [start|stop|restart|check|monitor|reload] module
./open-falcon start agent

./open-falcon check
falcon-graph UP 53007
falcon-hbs UP 53014
falcon-judge UP 53020
falcon-transfer UP 53026
falcon-nodata UP 53032
falcon-aggregator UP 53038
falcon-agent UP 53044
falcon-gateway UP 53050
falcon-api UP 53056
falcon-alarm UP 53063
```

* For debugging, you can check `$WorkDir/$moduleName/log/logs/xxx.log`

# Install Frontend Dashboard
- Follow [this](https://github.com/open-falcon/dashboard).
**NOTE: if you want to use grafana as the dashboard, please check [this](https://github.com/open-falcon/grafana-openfalcon-datasource).**

# Package Management

We use govendor to manage the golang packages. Please install `govendor` before compilation.

    go get -u github.com/kardianos/govendor

Most dependency packages are saved under the `./vendor` dir. If you want to add or update a package, just run `govendor fetch xxxx@commitID` or `govendor fetch [email protected]`, then you will find the package has been placed in `./vendor` correctly.

# Package Release

```
make clean all pack
```

# Q&A

- Any issue or question is welcome. Please feel free to open [github issues](https://github.com/open-falcon/falcon-plus/issues) :)
- [FAQ](https://book.open-falcon.org/zh_0_2/faq/)
30.766667
240
0.692958
eng_Latn
0.394971
b9d24b066bf44887ad14cbd8f43390005dbf45a3
36
md
Markdown
README.md
Eswar3008/Goal5-SDG
0569676fedc8c0bdc7f978d029e715dfec501ff9
[ "MIT" ]
null
null
null
README.md
Eswar3008/Goal5-SDG
0569676fedc8c0bdc7f978d029e715dfec501ff9
[ "MIT" ]
null
null
null
README.md
Eswar3008/Goal5-SDG
0569676fedc8c0bdc7f978d029e715dfec501ff9
[ "MIT" ]
null
null
null
# Goal5-SDG THIS REPO CONTAINS SDG5
12
23
0.777778
kor_Hang
0.916381
b9d27e45fb14b48b7b2b11333084d1528826078e
12,317
md
Markdown
docs/Tutorials/Private-Network/Create-IBFT-Network.md
pscott/doc.pantheon
21e8b1e20e27860b6dd3a64d0bdaee946cc55583
[ "Apache-2.0" ]
null
null
null
docs/Tutorials/Private-Network/Create-IBFT-Network.md
pscott/doc.pantheon
21e8b1e20e27860b6dd3a64d0bdaee946cc55583
[ "Apache-2.0" ]
null
null
null
docs/Tutorials/Private-Network/Create-IBFT-Network.md
pscott/doc.pantheon
21e8b1e20e27860b6dd3a64d0bdaee946cc55583
[ "Apache-2.0" ]
null
null
null
description: Pantheon IBFT 2.0 Proof-of-Authority (PoA) private network tutorial
<!--- END of page meta data -->

*[Byzantine fault tolerant]: Ability to function correctly and reach consensus despite nodes failing or propagating incorrect information to peers.

# Creating a Private Network using IBFT 2.0 (Proof of Authority) Consensus Protocol

A private network provides a configurable network for testing. This private network uses the [IBFT 2.0 (Proof of Authority) consensus protocol](../../HowTo/Configure-Pantheon/Consensus-Protocols/IBFT.md).

!!!important
    An Ethereum private network created as described here is isolated but not protected or secure. We recommend running the private network behind a properly configured firewall.

    This tutorial configures a private network using IBFT 2.0 for educational purposes only. IBFT 2.0 requires 4 validators to be Byzantine fault tolerant.

## Prerequisites

* [Pantheon](../../HowTo/Get-Started/Install-Binaries.md)
* [Curl (or similar web service client)](https://curl.haxx.se/download.html)

## Steps

The steps to create a private network using IBFT 2.0 with four nodes are as follows. The four nodes are all validators.

### 1. Create Folders

Each node requires a data directory for the blockchain data.

Create directories for your private network, each of the four nodes, and a data directory for each node:

```bash
IBFT-Network/
├── Node-1
│   ├── data
├── Node-2
│   ├── data
├── Node-3
│   ├── data
└── Node-4
    ├── data
```

### 2. Create Configuration File

The configuration file defines the [IBFT 2.0 genesis file](../../HowTo/Configure-Pantheon/Consensus-Protocols/IBFT.md#genesis-file) and the number of node key pairs to generate.

The configuration file has 2 subnested JSON nodes. The first is the `genesis` property defining the IBFT 2.0 genesis file except for the `extraData` string. The second is the `blockchain` property defining the number of key pairs to generate.
Copy the following configuration file definition to a file called `ibftConfigFile.json` and save it in the `IBFT-Network` directory: ```json { "genesis": { "config": { "chainId": 2018, "constantinoplefixblock": 0, "ibft2": { "blockperiodseconds": 2, "epochlength": 30000, "requesttimeoutseconds": 10 } }, "nonce": "0x0", "timestamp": "0x58ee40ba", "gasLimit": "0x47b760", "difficulty": "0x1", "mixHash": "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365", "coinbase": "0x0000000000000000000000000000000000000000", "alloc": { "fe3b557e8fb62b89f4916b721be55ceb828dbd73": { "privateKey": "8f2a55949038a9610f50fb23b5883af3b4ecb3c3bb792cbcefbd1542c692be63", "comment": "private key and this comment are ignored. In a real chain, the private key should NOT be stored", "balance": "0xad78ebc5ac6200000" }, "627306090abaB3A6e1400e9345bC60c78a8BEf57": { "privateKey": "c87509a1c067bbde78beb793e6fa76530b6382a4c0241e5e4a9ec0a0f44dc0d3", "comment": "private key and this comment are ignored. In a real chain, the private key should NOT be stored", "balance": "90000000000000000000000" }, "f17f52151EbEF6C7334FAD080c5704D77216b732": { "privateKey": "ae6ae8e5ccbfb04590405997ee2d52d2b330726137b875053c36d94e974d162f", "comment": "private key and this comment are ignored. In a real chain, the private key should NOT be stored", "balance": "90000000000000000000000" } } }, "blockchain": { "nodes": { "generate": true, "count": 4 } } } ``` !!! warning Do not use the accounts in `alloc` in the genesis file on mainnet or any public network except for testing. The private keys are displayed which means the accounts are not secure. ### 3. 
Generate Node Keys and Genesis File In the `IBFT-Network` directory, generate the node key and genesis file: ```bash tab="MacOS" pantheon operator generate-blockchain-config --config-file=ibftConfigFile.json --to=networkFiles --private-key-file-name=key ``` In the `networkFiles` directory, the following are created: * `genesis.json` - genesis file including the `extraData` property specifying the four nodes are validators * Directory for each node named with the node address and containing the public and private key for each node ```bash networkFiles/ ├── genesis.json └── keys ├── 0x438821c42b812fecdcea7fe8235806a412712fc0 │   ├── key │   └── key.pub ├── 0xca9c2dfa62f4589827c0dd7dcf48259aa29f22f5 │   ├── key │   └── key.pub ├── 0xcd5629bd37155608a0c9b28c4fd19310d53b3184 │   ├── key │   └── key.pub └── 0xe96825c5ab8d145b9eeca1aba7ea3695e034911a ├── key └── key.pub ``` ### 4. Copy the Genesis File to the IBFT-Network Directory Copy the `genesis.json` file to the `IBFT-Network` directory. ### 5. Copy Node Private Keys to Node Directories For each node, copy the key files to the `data` directory for that node ```bash IBFT-Network/ ├── genesis.json ├── Node-1 │   ├── data │ │    ├── key │ │    ├── key.pub ├── Node-2 │   ├── data │ │    ├── key │ │    ├── key.pub ├── Node-3 │   ├── data │ │    ├── key │ │    ├── key.pub ├── Node-4 │ ├── data │ │    ├── key │ │    ├── key.pub ``` ### 6. Start First Node as Bootnode In the `Node-1` directory, start Node-1: ```bash tab="MacOS" pantheon --data-path=data --genesis-file=../genesis.json --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" ``` ```bash tab="Windows" pantheon --data-path=data --genesis-file=..\genesis.json --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" ``` The command line specifies: * Data directory for Node-1 using the [`--data-path`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#data-path) option. 
* JSON-RPC API is enabled using the [`--rpc-http-enabled`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#rpc-http-enabled) option * ETH,NET, and IBFT APIs are enabled using the [`--rpc-http-api`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#rpc-http-api) option * All hosts can access the HTTP JSON-RPC API using the [`--host-whitelist`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#host-whitelist) option * All domains can access the node using the HTTP JSON-RPC API using the [`--rpc-http-cors-origins`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#rpc-http-cors-origins) option When the node starts, the [enode URL](../../Concepts/Node-Keys.md#enode-url) is displayed. Copy the enode URL to specify Node-1 as the bootnode in the following steps. ![Node 1 Enode URL](../../images/EnodeStartup.png) ### 7. Start Node-2 Start another terminal, change to the `Node-2` directory and start Node-2 specifying the Node-1 enode URL copied when starting Node-1 as the bootnode: ```bash tab="MacOS" pantheon --data-path=data --genesis-file=../genesis.json --bootnodes=<Node-1 Enode URL> --p2p-port=30304 --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" --rpc-http-port=8546 ``` ```bash tab="Windows" pantheon --data-path=data --genesis-file=..\genesis.json --bootnodes=<Node-1 Enode URL> --p2p-port=30304 --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" --rpc-http-port=8546 ``` The command line specifies: * Data directory for Node-2 using the [`--data-path`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#data-path) option. * Different port to Node-1 for P2P peer discovery using the [`--p2p-port`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#p2p-port) option. * Different port to Node-1 for HTTP JSON-RPC using the [`--rpc-http-port`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#rpc-http-port) option. 
* Enode URL for Node-1 using the [`--bootnodes`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#bootnodes) option.
* Other options as for [Node-1](#6-start-first-node-as-bootnode).

### 8. Start Node-3

Start another terminal, change to the `Node-3` directory and start Node-3 specifying the Node-1 enode URL copied when starting Node-1 as the bootnode:

```bash tab="MacOS"
pantheon --data-path=data --genesis-file=../genesis.json --bootnodes=<Node-1 Enode URL> --p2p-port=30305 --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" --rpc-http-port=8547
```

```bash tab="Windows"
pantheon --data-path=data --genesis-file=..\genesis.json --bootnodes=<Node-1 Enode URL> --p2p-port=30305 --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" --rpc-http-port=8547
```

The command line specifies:

* Data directory for Node-3 using the [`--data-path`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#data-path) option.
* Different port to Node-1 and Node-2 for P2P peer discovery using the [`--p2p-port`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#p2p-port) option.
* Different port to Node-1 and Node-2 for HTTP JSON-RPC using the [`--rpc-http-port`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#rpc-http-port) option.
* Bootnode as for [Node-2](#7-start-node-2).
* Other options as for [Node-1](#6-start-first-node-as-bootnode).

### 9.
Start Node-4

Start another terminal, change to the `Node-4` directory and start Node-4 specifying the Node-1 enode URL copied when starting Node-1 as the bootnode:

```bash tab="MacOS"
pantheon --data-path=data --genesis-file=../genesis.json --bootnodes=<Node-1 Enode URL> --p2p-port=30306 --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" --rpc-http-port=8548
```

```bash tab="Windows"
pantheon --data-path=data --genesis-file=..\genesis.json --bootnodes=<Node-1 Enode URL> --p2p-port=30306 --rpc-http-enabled --rpc-http-api=ETH,NET,IBFT --host-whitelist="*" --rpc-http-cors-origins="all" --rpc-http-port=8548
```

The command line specifies:

* Data directory for Node-4 using the [`--data-path`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#data-path) option.
* Different port to Node-1, Node-2, and Node-3 for P2P peer discovery using the [`--p2p-port`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#p2p-port) option.
* Different port to Node-1, Node-2, and Node-3 for HTTP JSON-RPC using the [`--rpc-http-port`](../../Reference/Pantheon-CLI/Pantheon-CLI-Syntax.md#rpc-http-port) option.
* Bootnode as for [Node-2](#7-start-node-2).
* Other options as for [Node-1](#6-start-first-node-as-bootnode).

### 10. Confirm Private Network is Working

Start another terminal, use curl to call the JSON-RPC API [`net_peerCount`](../../Reference/Pantheon-API-Methods.md#net_peercount) method and confirm the nodes are functioning as peers:

```bash
curl -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' localhost:8545
```

The result confirms Node-1 has three peers (Node-2, Node-3, and Node-4):

```json
{
  "jsonrpc" : "2.0",
  "id" : 1,
  "result" : "0x3"
}
```

## Next Steps

Look at the logs displayed to confirm blocks are being produced.

Use the [IBFT API](../../Reference/Pantheon-API-Methods.md#ibft-20-methods) to remove or add validators.

!!! note
    To add or remove nodes as validators you need the node address.
    The directory [created for each node](#3-generate-node-keys-and-genesis-file) is named with the node address.

Import accounts to MetaMask and send transactions as described in the [Private Network Quickstart Tutorial](../Quickstarts/Private-Network-Quickstart.md#creating-a-transaction-using-metamask).

!!! info
    Pantheon does not implement [private key management](../../HowTo/Send-Transactions/Account-Management.md).

## Stop Nodes

When finished using the private network, stop all nodes using ++ctrl+c++ in each terminal window.

!!!tip
    To restart the IBFT 2.0 network in the future, start from [6. Start First Node as Bootnode](#6-start-first-node-as-bootnode).
41.752542
228
0.69944
eng_Latn
0.656185
b9d29a65b92ceac3da9b63b12a52edbc2bff49f0
1,257
md
Markdown
pages/suite/suite-trading-sessions.md
EduRemix/Documentation
1da4fd5fe514512ac36cbfbe15739a59efa073eb
[ "Apache-2.0" ]
null
null
null
pages/suite/suite-trading-sessions.md
EduRemix/Documentation
1da4fd5fe514512ac36cbfbe15739a59efa073eb
[ "Apache-2.0" ]
null
null
null
pages/suite/suite-trading-sessions.md
EduRemix/Documentation
1da4fd5fe514512ac36cbfbe15739a59efa073eb
[ "Apache-2.0" ]
null
null
null
--- title: Trading Sessions summary: "Trading sessions enable using the trading bot to perform strategy tests, simulations, and live trading." sidebar: suite_sidebar permalink: suite-trading-sessions.html --- {{site.data.network.session}} {% include note.html content="To learn how to run backtesting and paper trading sessions, refer to the [testing environment](suite-testing-environment.html) pages. To learn how to trade live, or run forward testing sessions, refer to the [production environment](suite-production-environment.html) pages." %} {% include /network/backtesting-session.md heading="##" icon="150-" adding="####" configuring="####" starting="####" content="yes" definition="bold" table="yes" more="yes"%} {% include /network/paper-trading-session.md heading="##" icon="150-" adding="####" configuring="####" starting="####" content="yes" definition="bold" table="yes" more="yes"%} {% include /network/forward-testing-session.md heading="##" icon="150-" adding="####" configuring="####" starting="####" content="yes" definition="bold" table="yes" more="yes"%} {% include /network/live-trading-session.md heading="##" icon="150-" adding="####" configuring="####" starting="####" content="yes" definition="bold" table="yes" more="yes"%}
69.833333
308
0.704057
eng_Latn
0.743054
b9d30798942a917befe4f2cb79e21d0a01300796
1,363
md
Markdown
docs/relational-databases/replication/mssql-repl-2147200953.md
lxyhcx/sql-docs.zh-cn
e63de561000b0b4bebff037bfe96170d6b61c908
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/replication/mssql-repl-2147200953.md
lxyhcx/sql-docs.zh-cn
e63de561000b0b4bebff037bfe96170d6b61c908
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/replication/mssql-repl-2147200953.md
lxyhcx/sql-docs.zh-cn
e63de561000b0b4bebff037bfe96170d6b61c908
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: MSSQL_REPL-2147200953 | Microsoft Docs
ms.custom: ''
ms.date: 03/14/2017
ms.prod: sql
ms.prod_service: database-engine
ms.component: replication
ms.reviewer: ''
ms.suite: sql
ms.technology:
- replication
ms.tgt_pltfrm: ''
ms.topic: conceptual
helpviewer_keywords:
- MSSQL_REPL-2147200953 error
ms.assetid: ef9671a0-772f-4d07-bfeb-07dd47dbbce0
caps.latest.revision: 8
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: 19ef517e67793d9ca213f1ccdb999751670f7df4
ms.sourcegitcommit: 1740f3090b168c0e809611a7aa6fd514075616bf
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/03/2018
---
# <a name="mssqlrepl-2147200953"></a>MSSQL_REPL-2147200953

[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]

## <a name="message-details"></a>Message Details

|||
|-|-|
|Product Name|SQL Server|
|Event ID|-2147200953|
|Event Source|MSSQLServer|
|Symbolic Name||
|Message Text|The merge process was unable to perform data validation on article "%1". Check the Windows application event log for [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] errors, or retry later.|

## <a name="explanation"></a>Explanation

The stored procedure call to validate the specified article failed. This can be caused by one or more errors in the [!INCLUDE[ssDE](../../includes/ssde-md.md)].

## <a name="user-action"></a>User Action

Retry the merge operation when [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] is less busy. Also, look for any server errors that were raised.

## <a name="internal-only"></a>Internal Only
28.395833
124
0.712399
yue_Hant
0.293942
b9d30dfb2494d41993dfbeb8b835cecba9ec829a
3,679
md
Markdown
business-central/LocalFunctionality/India/GST-Purchase-Return-to-Unregistered-Vendor-RCM.md
paulcart/dynamics365smb-docs
8f6bc6fd9805e9d4b98f6ba4e0504db1a3514ce2
[ "CC-BY-4.0", "MIT" ]
77
2017-08-28T10:36:43.000Z
2022-03-24T10:48:01.000Z
business-central/LocalFunctionality/India/GST-Purchase-Return-to-Unregistered-Vendor-RCM.md
paulcart/dynamics365smb-docs
8f6bc6fd9805e9d4b98f6ba4e0504db1a3514ce2
[ "CC-BY-4.0", "MIT" ]
644
2017-07-04T09:03:35.000Z
2022-03-31T06:43:25.000Z
business-central/LocalFunctionality/India/GST-Purchase-Return-to-Unregistered-Vendor-RCM.md
paulcart/dynamics365smb-docs
8f6bc6fd9805e9d4b98f6ba4e0504db1a3514ce2
[ "CC-BY-4.0", "MIT" ]
172
2017-05-16T21:36:34.000Z
2022-03-17T07:14:14.000Z
--- title: Purchase Return to Unregistered Vendor (Reverse Charge) description: Purchase Return to Unregistered Vendor (Reverse Charge) author: v-debapd ms.service: dynamics365-business-central ms.topic: conceptual ms.devlang: na ms.tgt_pltfrm: na ms.workload: na ms.search.keywords: India, local, IN, English ms.date: 04/01/2021 ms.author: v-debapd --- # Purchase Return to Unregistered Vendor (Reverse Charge) Persons whose aggregate turnover in a financial year does not exceed forty lakh rupees are not required to be registered with the GST authorities. Such persons are called unregistered vendors. Any purchases from unregistered vendors do not attract GST. However, there are some notified services under GST, on supply of such services GST is applicable under reverse charge i.e. the purchasers are required to pay GST to the Government. A buyer may have to return the goods or issue credit note due to various reasons like damaged goods, quality issues etc. Process for purchase returns to unregistered vendor has been explained in this document. ## Create a purchase return order or credit memo 1. Choose the ![Search for Page or Report.](image/search_small.png "Search for Page or Report icon") icon, enter **Purchase Return Order** or **Purchase Credit Memo**, and then choose the related link. 2. Select **Vendor** on **Purchase Credit Memo** header, GST vendor type should be **Unregistered**. 3. Select **Item Code** for goods, **G/L Account** for Service purchase, **Fixed Asset** for Fixed Asset purchase and **Charge (Item)** for Item Charge on **Purchase Credit Memo** line. GST Group Code, HSN/SAC Code and GST Credit value should be selected as **Availment** if the tax input credit is available or else **Non-Availment** on the Item, G/L Account, Fixed Asset, Item (Charge). 
For example, a purchase credit memo or return order is issued for INR 10,000 on which 18% GST (9% CGST and 9% SGST/UTGST in case of an Intra-State or Intra-Union Territory transaction, or 18% IGST in case of an Inter-State transaction) has to be charged.

- GST calculation will appear in the Fact Box, as following:

|Component|Amount|
|----------------------------------|---------------------------------------|
|**GST Base Amount**|10,000|
|**CGST**|900|
|**SGST**|900|
|**IGST**|1800|

- GL Entries for Intra-State or Intra-Union Territory purchase return of goods, services, fixed asset, charge item to unregistered vendor where input tax credit is available (reverse charge), will be as following:

|Particulars|Amount|
|----------------------------------|---------------------------------------|
|**Vendor Account**|10000|
|**CGST Payable Account**|900|
|**SGST/UTGST Payable Account**|900|
|**CGST Receivable Account**|-900|
|**SGST/UTGST Receivable Account**|-900|
|**Purchase or Services Account or Fixed Asset increase during the year**|-10000|

- GL Entries for Intra-State or Intra-Union Territory purchase return of goods, services, fixed asset, charge item to unregistered vendor where input tax credit is not available (reverse charge), will be as following:

|Particulars|Amount|
|----------------------------------|---------------------------------------|
|**Vendor Account**|10000|
|**CGST Payable Account**|900|
|**SGST/UTGST Payable Account**|900|
|**Purchase or Service Account or Fixed Asset increase during the year**|-11800|

## See Also

[Purchase Return to Foreign Vendor](GST-Purchase-Return-to-Foreign-Vendor.md)

[!INCLUDE[footer-include](../../includes/footer-banner.md)]
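The amounts in the example above follow from simple percentages; a throwaway shell sketch of the split (purely illustrative, not part of the Business Central flow):

```shell
base=10000   # credit memo amount in INR
rate=18      # GST rate in percent

# Inter-state: the whole tax is a single IGST component.
igst=$(( base * rate / 100 ))

# Intra-state / intra-union territory: the same amount is split
# equally into CGST and SGST/UTGST.
cgst=$(( igst / 2 ))
sgst=$(( igst / 2 ))

echo "IGST=$igst CGST=$cgst SGST=$sgst"
```

Running it reproduces the Fact Box figures: IGST 1800, or CGST 900 plus SGST/UTGST 900.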
35.038095
435
0.677086
eng_Latn
0.970131
b9d31d5f1d1ede782b4eee7fdf975b696751cad6
3,313
md
Markdown
_posts/2013-08-09-crontab.md
mouse-lin/mouse-lin.github.com
a8d33e3de024a1de1678e4c7289043082af27b72
[ "MIT" ]
null
null
null
_posts/2013-08-09-crontab.md
mouse-lin/mouse-lin.github.com
a8d33e3de024a1de1678e4c7289043082af27b72
[ "MIT" ]
null
null
null
_posts/2013-08-09-crontab.md
mouse-lin/mouse-lin.github.com
a8d33e3de024a1de1678e4c7289043082af27b72
[ "MIT" ]
null
null
null
---
layout: post
title: "crontab"
description: ""
category: System
comments: true
tags: [SA]
---

## Crontab

---

### Introduction

The crontab command is found on Unix and Unix-like operating systems and is used to schedule commands for periodic execution. It reads instructions from standard input and stores them in a "crontab" file for later reading and execution. The word derives from the Greek chronos (χρόνος), meaning time.

Usually, the instructions stored in a crontab are activated by a daemon: crond runs in the background and checks every minute whether any scheduled jobs need to run. Such jobs are generally called cron jobs.

The Rails gem whenever is built on top of this crontab functionality.

### How to

A crontab file contains a series of jobs and instructions handed to the cron daemon. Each user can have their own crontab file; in addition, the operating system keeps a system-wide crontab file, usually located in /etc or a subdirectory of /etc, which can only be modified by the system administrator.

For example, on Ubuntu /etc/crontab is the system-wide crontab file.

**View Crontab help**:

    crontab -h

**Edit Your Crontab**:

    crontab -e

**View Users Cronjob**:

    crontab -u userName -l

**View Root User Cronjob**:

    crontab -l

**View /etc/crontab**:
A cronjob can be also run from /etc/crontab file. To view it, enter:

    less /etc/crontab

**View Daily Cronjobs**:
Type the following commands:

    cd /etc/cron.daily/
    ls -l
    cat filename

**View Hourly Cronjobs**:
Type the following commands:

    cd /etc/cron.hourly/
    ls -l
    cat filename

**View Weekly Cronjobs**:
Type the following commands:

    cd /etc/cron.weekly/
    ls -l
    cat filename

**View Monthly Cronjobs**:
Type the following commands:

    cd /etc/cron.monthly/
    ls -l
    cat filename

**View Software (Package) Specific Cronjobs**:
Type the following commands:

    cd /etc/cron.d/
    ls -l
    cat filename

### Operators

**Ways to fill a field with multiple values**

- Comma-separated values, e.g. 1,3,4,7,8
- A hyphen - specifies a range of values, e.g. 1-6, which is equivalent to 1,2,3,4,5,6
- An asterisk * stands for every possible value. For example, an asterisk in the hour field means "every hour", and so on

Some extended versions of cron also support the slash ('/') operator, used to step over given values. For example, "*/3" in the hour field is equal to "0,3,6,9,12,15,18,21", the values divisible by 3.

### Time fields

    # File format
    # ——minute (0 - 59)
    # | ——hour (0 - 23)
    # | | ——day of month (1 - 31)
    # | | | ——month (1 - 12)
    # | | | | ——day of week (0 - 7) (Sunday = 0 or 7)
    # | | | | |
    # * * * * * command to be executed

Note: in the day-of-week field (the fifth field), both 0 and 7 are treated as Sunday.

A less intuitive behavior: if both the day of month and the day of week are set, the command runs when either condition is satisfied. See the example below.

The first five fields are minute, hour, day, month, and weekday, which makes them easy to remember. From the sixth field onward comes the command to be executed.

### Example

    #===========================================================
    # SYSTEM ACTIVITY REPORTS
    # 8am-5pm activity reports every 20 mins during weekdays.
    # activity reports every hour on Saturday and Sunday.
    # 6pm-7am activity reports every hour during weekdays.
    # summary prepared at 18:05 every weekday.
    #===========================================================
    0,20,40 8-17 * * 1-5 /usr/lib/sa/sa1 1200 3 &
    0 * * * 0,6 /usr/lib/sa/sa1 &
    0 18-7 * * 1-5 /usr/lib/sa/sa1 &
    5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 3600 -ubcwyaqvm &

### Practical use

Having covered what crontab is and how to use it, let's go through a concrete example. In our app we need database backups, and they are usually run in the small hours, so let's schedule a database dump at 1:00 a.m.

**First, write a bash script that dumps the database:**

    touch backup_mysql.sh
    vim backup_mysql.sh

**Put the following into backup_mysql.sh:**

    #!/bin/bash
    backup="/#{path}/dump_`date +%d-%m-%Y`.sql"
    /usr/bin/mysqldump -u#{user} -p#{user_password} database_name | gzip > $backup.gz

**Next, use a cron job to run backup_mysql.sh at 1:00 every day:**

Note: backup_mysql.sh must be executable!

**Open the crontab editor:**

    crontab -e

**Then, in your editor of choice, schedule backup_mysql.sh:**

    0 1 * * * /bin/bash -l -c 'bash /#{path}/backup_mysql.sh'

That's it. After 1:00 you can check whether the backup was created; other commands are scheduled with crontab in the same way. If writing crontab entries by hand feels too cumbersome, Rails users are encouraged to use the whenever gem:

[https://github.com/javan/whenever](https://github.com/javan/whenever)

### Cron

Wiki: [http://zh.wikipedia.org/wiki/Cron](http://zh.wikipedia.org/wiki/Cron)
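The dated dump filename that `backup_mysql.sh` builds above can be sanity-checked with a tiny sketch. Here `/var/backups` is a hypothetical stand-in for the post's `#{path}` placeholder:

```shell
# Build the dated dump filename the same way backup_mysql.sh does.
# "/var/backups" is a made-up stand-in for the #{path} placeholder.
backup="/var/backups/dump_$(date +%d-%m-%Y).sql"
echo "$backup"
```

On May 7, 2024 this would print `/var/backups/dump_07-05-2024.sql`, matching the `dump_DD-MM-YYYY.sql` pattern the script gzips.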
19.60355
123
0.668578
yue_Hant
0.325016
b9d3a654eba7223bf2200c04e9bd1e105a323979
4,389
md
Markdown
doc/HACKING/Module.md
47-studio-org/-.-M3-0N-.-
2344e854cb46942e573bb342cd76808c5819fd9e
[ "BSD-2-Clause-NetBSD" ]
18
2020-06-09T10:02:14.000Z
2022-02-04T08:26:00.000Z
doc/HACKING/Module.md
ammarfaizi2/tor
ae6430818ee2786e2764bd6286aed311cdd77ab2
[ "BSD-2-Clause-NetBSD" ]
17
2020-03-24T21:20:36.000Z
2022-03-21T20:29:16.000Z
doc/HACKING/Module.md
ammarfaizi2/tor
ae6430818ee2786e2764bd6286aed311cdd77ab2
[ "BSD-2-Clause-NetBSD" ]
7
2020-07-21T17:37:43.000Z
2022-01-17T08:48:57.000Z
# Modules in Tor

This document describes the build system and coding standards when writing a module in Tor.

## What is a module?

In the context of the tor code base, a module is a subsystem that we can selectively enable or disable, at `configure` time.

Currently, tor has these modules:

- Relay subsystem (relay)
- Directory cache system (dircache)
- Directory Authority subsystem (dirauth)

The dirauth code is located in its own directory in `src/feature/dirauth/`.

The relay code is located in a directory named `src/*/*relay`, which is being progressively refactored and disabled.

The dircache code is located in `src/*/*dircache`. Right now, it is disabled if and only if the relay module is disabled. (We are treating them as separate modules because they are logically independent, not because you would actually want to run one without the other.)

To disable a module, pass `--disable-module-{dirauth,relay}` at configure time. All modules are currently enabled by default.

## Build System

The changes to the build system are pretty straightforward.

1. Locate in the `configure.ac` file this define: `m4_define(MODULES`. It contains a list (white-space separated) of the modules in tor. Add yours to the list.

2. Use the `AC_ARG_ENABLE([module-relay]` template for your new module. We use the "disable module" approach instead of enabling modules one by one. So, by default, tor will build all the modules.

   This will define the `HAVE_MODULE_<name>` statement which can be used in the C code to conditionally compile things for your module. The `BUILD_MODULE_<name>` is also defined for automake files (e.g: include.am).

3. In the `src/core/include.am` file, locate the `MODULE_RELAY_SOURCES` value. You need to create your own `_SOURCES` variable for your module and then conditionally add it to `LIBTOR_A_SOURCES` if you should build the module.

   It is then **very** important to add your SOURCES variable to `src_or_libtor_testing_a_SOURCES` so the tests can build it.

Finally, your module will automatically be included in the `TOR_MODULES_ALL_ENABLED` variable, which is used to build the unit tests. They always build everything in order to test everything.

## Coding

As mentioned above, a module should be isolated in its own directories, suffixed with the name of the module, in `src/*/`.

There are a couple of "rules" you want to follow:

* Minimize as much as you can the number of entry points into your module. Less is always better, but of course that doesn't work out for every use case. However, it is a good thing to always keep that in mind.

* Do **not** use the `HAVE_MODULE_<name>` define outside of the module code base. Every entry point should have a second definition if the module is disabled. For instance:

  ```c
  #ifdef HAVE_MODULE_DIRAUTH

  int sr_init(int save_to_disk);

  #else /* HAVE_MODULE_DIRAUTH */

  static inline int
  sr_init(int save_to_disk)
  {
    (void) save_to_disk;
    return 0;
  }

  #endif /* HAVE_MODULE_DIRAUTH */
  ```

  The main reason for this approach is to avoid having conditional code everywhere in the code base. It should be centralized as much as possible, which helps maintainability but also avoids conditional spaghetti code making the code much more difficult to follow/understand.

* It is possible that you end up with code that needs to be used by the rest of the code base but is still part of your module. As a good example, look at `src/feature/shared_random_client.c`: it contains code needed by the hidden service subsystem but mainly related to the shared random subsystem, which is very specific to the dirauth module. This is fine, but try to keep it as lean as possible and never use the same filename as the one in the module. For example, this is a bad idea and should never be done:

  - `src/feature/dirclient/shared_random.c`
  - `src/feature/dirauth/shared_random.c`

* When you include headers from the module, **always** use the full module path in your statement. Example:

  ```c
  #include "feature/dirauth/dirvote.h"
  ```

  The main reason is that we do **not** add the module include path by default, so it needs to be specified. But also, it helps our human brain understand which part comes from a module or not. Even **in** the module itself, use the full include path like above.
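The mapping from `--disable-module-<name>` configure flags to per-module defines can be illustrated with a toy shell sketch. This is purely illustrative — the real logic lives in `configure.ac` via `AC_ARG_ENABLE`:

```shell
# Toy illustration: each --disable-module-<name> flag suppresses the
# corresponding HAVE_MODULE_<NAME> define; all other modules stay enabled.
flags="--disable-module-dirauth"
out=""
for m in dirauth relay; do
  M=$(printf '%s' "$m" | tr 'a-z' 'A-Z')
  case " $flags " in
    *" --disable-module-$m "*) out="$out HAVE_MODULE_$M=no" ;;
    *)                         out="$out HAVE_MODULE_$M=yes" ;;
  esac
done
echo "$out"
```

With the flag above, the sketch reports dirauth as disabled and relay as still enabled, mirroring the "all modules on by default" behavior described earlier.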
36.272727
78
0.751196
eng_Latn
0.999692
b9d3fc1e29172605867fdc2bf184d03902b905d9
2,390
md
Markdown
controls/tooltip/radtooltipmanager/overview.md
PaulSorauer/ajax-docs
e7e10ece5fd22eddcf8db12bc6b8405d7aae16ac
[ "Apache-2.0" ]
null
null
null
controls/tooltip/radtooltipmanager/overview.md
PaulSorauer/ajax-docs
e7e10ece5fd22eddcf8db12bc6b8405d7aae16ac
[ "Apache-2.0" ]
null
null
null
controls/tooltip/radtooltipmanager/overview.md
PaulSorauer/ajax-docs
e7e10ece5fd22eddcf8db12bc6b8405d7aae16ac
[ "Apache-2.0" ]
null
null
null
--- title: Overview page_title: RadToolTipManager Overview | RadTooltip for ASP.NET AJAX Documentation description: Overview slug: tooltip/radtooltipmanager/overview tags: overview published: True position: 0 --- # RadToolTipManager Overview Both **RadTooltip** and **RadTooltipManager** can display rich content (including user controls and other ASP.NET controls), as well as AJAX-generated content. **RadToolTip** is meant to "tooltipify" a single element while **RadToolTipManger** should be used in scenarios where many elements would require a tooltip. For more information on **RadTooltip**, see the [Overview]({%slug tooltip/overview%}) topic. Three common scenarios where the **RadToolTipManager** is useful: * When the developer wishes to 'tooltipify' all HTML elements on a page. This is the scenario where the developer already has a page with existing tooltips that need to be easily converted to a more refined, consistent look (using a particular skin or animation effect). RadToolTipManager overrides the standard tool tip behavior automatically just by including RadToolTipManager on the page. * When a list of elements should be tooltipified. In this case the [RadToolTipManager TargetControls]({%slug tooltip/radtooltipmanager/using-the-targetcontrols-collection%}) collection should be used. This scenario allows for finer tuning. You can use more than one **RadToolTipManager**, each configured for a specific set of controls. * When one or more elements should display rich dynamic content that is fetched from the server. This approach is useful in cases where content should be fetched from a data source, depending on the element being targeted, and helps keep pages smaller. Use the [OnAjaxUpdate event]({%slug tooltip/radtooltipmanager/load-content-on-demand%}) to populate tooltips from the server on-the-fly. 
For a live demo of RadToolTipManager see the [RadToolTip versus RadToolTipManager](https://demos.telerik.com/aspnet-ajax/tooltip/examples/tooltipversustooltipmanager/defaultcs.aspx) demo. # See Also * [Using the TargetControls Collection]({%slug tooltip/radtooltipmanager/using-the-targetcontrols-collection%}) * [Load Content On Demand]({%slug tooltip/radtooltipmanager/load-content-on-demand%}) * [Using RadToolTipManager in MS AJAX UpdatePanels]({%slug tooltip/troubleshooting/using-radtooltipmanager-in-ms-ajax-updatepanels%})
61.282051
409
0.800418
eng_Latn
0.950848
b9d4af1afdd02070b350c2026827e57d4531a4b9
451
md
Markdown
docs/poul/com.sophoun.query/-query-builder/execute.md
Sophoun/paul
b197357e1b8e4911bb2ee9aa7316a92ee66ad154
[ "Unlicense" ]
2
2020-06-18T16:21:43.000Z
2020-07-02T03:08:40.000Z
docs/poul/com.sophoun.query/-query-builder/execute.md
Sophoun/android-utils
b197357e1b8e4911bb2ee9aa7316a92ee66ad154
[ "Unlicense" ]
null
null
null
docs/poul/com.sophoun.query/-query-builder/execute.md
Sophoun/android-utils
b197357e1b8e4911bb2ee9aa7316a92ee66ad154
[ "Unlicense" ]
null
null
null
[poul](../../index.md) / [com.sophoun.query](../index.md) / [QueryBuilder](index.md) / [execute](./execute.md)

# execute

`abstract fun execute(): `[`Unit`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-unit/index.html)

Executes the query builder. The result is delivered through callbacks:

* `onResult(List<T>)`: called after all results have been mapped
* `onError(Exception)`: called when something goes wrong during execution
* `onCompleted()`: called after the whole process has completed (even if an error occurred)
37.583333
110
0.711752
eng_Latn
0.52151
b9d4b1bc482bae89de4a199555a4d783e2045cc3
501
md
Markdown
docs/change-web-worker-pool-size.md
allen-garvey/jsdither
4200da2297870a35614c2d5ea4ee486229207296
[ "MIT" ]
43
2018-07-21T00:01:32.000Z
2022-03-29T12:01:48.000Z
docs/change-web-worker-pool-size.md
jumbojett/dithermark
ce9c6583da799ad180a99f9546d351848989ac78
[ "MIT" ]
11
2020-03-19T00:56:44.000Z
2022-03-05T14:07:11.000Z
docs/change-web-worker-pool-size.md
jumbojett/dithermark
ce9c6583da799ad180a99f9546d351848989ac78
[ "MIT" ]
2
2020-02-11T16:34:29.000Z
2021-07-22T06:58:33.000Z
# Change Web Worker Pool Size The color quantization (optimize palette) logic and error propagation dithers (and all dithers when WebGL is disabled) are handled by a pool of Web Workers. By default, the number is set to 2x the number of CPU cores, but capped at 8 both to preserve memory and because (at least on a 4 core machine) numbers greater than 8 were not found to increase performance significantly. To change the pool size, edit the `createWorkers()` function in `js_src/app/worker-util.js`.
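The default described above — twice the number of CPU cores, capped at 8 — can be sketched in shell. This is illustrative only; the actual logic lives in `createWorkers()` in `js_src/app/worker-util.js`:

```shell
# cores would normally come from the machine (e.g. `nproc`, or
# navigator.hardwareConcurrency in the browser); fixed here so the
# example is deterministic.
cores=6
pool=$(( cores * 2 ))
# Cap the pool at 8 to preserve memory; larger pools showed no
# significant speedup in the author's measurements.
if [ "$pool" -gt 8 ]; then pool=8; fi
echo "pool size: $pool"
```

With 6 cores, 2x gives 12, which the cap reduces to a pool of 8 workers.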
167
470
0.788423
eng_Latn
0.998875
b9d4f048494fcd5b5323167abb2ef6b4c81e49fa
6,260
md
Markdown
docs/relational-databases/system-catalog-views/sys-availability-groups-transact-sql.md
skaneok-dev/sql-docs.ja-jp
1102ab8bc5b6ddd32169039fd49b0caf618991b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/system-catalog-views/sys-availability-groups-transact-sql.md
skaneok-dev/sql-docs.ja-jp
1102ab8bc5b6ddd32169039fd49b0caf618991b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/system-catalog-views/sys-availability-groups-transact-sql.md
skaneok-dev/sql-docs.ja-jp
1102ab8bc5b6ddd32169039fd49b0caf618991b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: sys.availability_groups (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 06/10/2016
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: system-objects
ms.topic: language-reference
f1_keywords:
- sys.availability_groups_TSQL
- availability_groups_TSQL
- sys.availability_groups
- availability_groups
dev_langs:
- TSQL
helpviewer_keywords:
- Availability Groups [SQL Server], monitoring
- sys.availability_groups catalog view
ms.assetid: da7fa55f-c008-45d9-bcfc-3513b02d9e71
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
ms.openlocfilehash: 5e3bd8688a2e9b66eab7187720d96d823f8d943c
ms.sourcegitcommit: 9c6a37175296144464ffea815f371c024fce7032
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 11/15/2018
ms.locfileid: "51668676"
---
# <a name="sysavailabilitygroups-transact-sql"></a>sys.availability_groups (Transact-SQL)

[!INCLUDE[tsql-appliesto-ss2012-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2012-xxxx-xxxx-xxx-md.md)]

Returns a row for each availability group for which the local instance of SQL Server hosts an availability replica. Each row contains a cached copy of the availability group metadata.

|Column name|Data type|Description|
|-----------------|---------------|-----------------|
|**group_id**|**uniqueidentifier**|Unique identifier (GUID) of the availability group.|
|**name**|**sysname**|Name of the availability group. This is a user-specified name that must be unique within the Windows Server Failover Cluster (WSFC).|
|**resource_id**|**nvarchar(40)**|Resource ID for the WSFC cluster resource.|
|**resource_group_id**|**nvarchar(40)**|Resource group ID for the WSFC cluster resource group of the availability group.|
|**failure_condition_level**|**int**|User-defined failure condition level at which an automatic failover should be triggered; one of the integer values described in the table immediately below.<br /><br /> The failure condition levels range from 1 to 5, where level 1 is the least restrictive and level 5 the most restrictive. A given condition level encompasses all of the less restrictive levels. Thus, the strictest level, 5, includes the less restrictive condition levels (1-4), level 4 includes levels 1-3, and so forth.<br /><br /> To change this value, use the FAILURE_CONDITION_LEVEL option of the [ALTER AVAILABILITY GROUP](../../t-sql/statements/alter-availability-group-transact-sql.md) [!INCLUDE[tsql](../../includes/tsql-md.md)] statement.|
|**health_check_timeout**|**int**|Wait time (in milliseconds) for the [sp_server_diagnostics](../../relational-databases/system-stored-procedures/sp-server-diagnostics-transact-sql.md) system stored procedure to return server-health information, before the server instance is assumed to be slow or hung. The default value is 30000 milliseconds (30 seconds).<br /><br /> To change this value, use the HEALTH_CHECK_TIMEOUT option of the [ALTER AVAILABILITY GROUP](../../t-sql/statements/alter-availability-group-transact-sql.md) [!INCLUDE[tsql](../../includes/tsql-md.md)] statement.|
|**automated_backup_preference**|**tinyint**|Preferred location for performing backups on the availability databases in this availability group. The possible values and their descriptions are as follows:<br /><br /> 0: Primary. Backups should always occur on the primary replica.<br /><br /> 1: Secondary only. Performing backups on a secondary replica is preferred.<br /><br /> 2: Prefer secondary. Performing backups on a secondary replica is preferred, but performing backups on the primary replica is acceptable if no secondary replica is available for backup operations. This is the default behavior.<br /><br /> 3: Any replica. No preference about whether backups are performed on the primary replica or on a secondary replica.<br /><br /> For more information, see [Active Secondaries: Backup on Secondary Replicas &#40;Always On Availability Groups&#41;](../../database-engine/availability-groups/windows/active-secondaries-backup-on-secondary-replicas-always-on-availability-groups.md).|
|**automated_backup_preference_desc**|**nvarchar(60)**|Description of **automated_backup_preference**, one of:<br /><br /> PRIMARY<br /><br /> SECONDARY_ONLY<br /><br /> SECONDARY<br /><br /> NONE|
|**version**|**smallint**|Version of the availability group metadata stored in the Windows failover cluster. This version number is incremented when new features are added.|
|**basic_features**|**bit**|Specifies whether this is a basic availability group. For more information, see [Basic Availability Groups &#40;Always On Availability Groups&#41;](../../database-engine/availability-groups/windows/basic-availability-groups-always-on-availability-groups.md).|
|**dtc_support**|**bit**|Specifies whether DTC support has been enabled for this availability group. The **DTC_SUPPORT** option of **CREATE AVAILABILITY GROUP** controls this setting.|
|**db_failover**|**bit**|Specifies whether the availability group supports failover for database health conditions. The **DB_FAILOVER** option of **CREATE AVAILABILITY GROUP** controls this setting.|
|**is_distributed**|**bit**|Specifies whether this is a distributed availability group. For more information, see [Distributed Availability Groups &#40;Always On Availability Groups&#41;](../../database-engine/availability-groups/windows/distributed-availability-groups-always-on-availability-groups.md).|

## <a name="failure-condition-level--values"></a>Failure condition level values

The following table describes the possible failure condition levels for the **failure_condition_level** column.

|Value|Failure condition|
|-----------|-----------------------|
|1|Specifies that an automatic failover should be initiated when any of the following occurs:<br /><br /> - The [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] service is down.<br /><br /> - The lease of the availability group for connecting to the WSFC failover cluster expires because no ACK is received from the server instance. For more information, see [How It Works: SQL Server Always On Lease Timeout](https://blogs.msdn.com/b/psssql/archive/2012/09/07/how-it-works-sql-server-Always%20On-lease-timeout.aspx).|
|2|Specifies that an automatic failover should be initiated when any of the following occurs:<br /><br /> - The instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] does not connect to the cluster, and the user-specified **health_check_timeout** threshold of the availability group is exceeded.<br /><br /> - The availability replica is in a failed state.|
|3|Specifies that an automatic failover should be initiated on critical [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] internal errors, such as orphaned spinlocks, serious write-access violations, or too much dumping.<br /><br /> This is the default value.|
|4|Specifies that an automatic failover should be initiated on [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] internal errors, such as a persistent out-of-memory condition in the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] internal resource pool.|
|5|Specifies that an automatic failover should be initiated on any qualified failure condition, including:<br /><br /> - Exhaustion of SQL Engine worker threads.<br /><br /> - Detection of an unsolvable deadlock.|

## <a name="security"></a>Security

### <a name="permissions"></a>Permissions

Requires VIEW ANY DEFINITION permission on the server instance.

## <a name="see-also"></a>See Also

[sys.availability_replicas &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/sys-availability-replicas-transact-sql.md)
[Always On Availability Groups &#40;SQL Server&#41;](../../database-engine/availability-groups/windows/always-on-availability-groups-sql-server.md)
[Monitor Availability Groups &#40;Transact-SQL&#41;](../../database-engine/availability-groups/windows/monitor-availability-groups-transact-sql.md)
83.466667
687
0.751597
yue_Hant
0.615624
b9d4faca6a066f9838284369dc101af45fa9680e
357
md
Markdown
README.md
shivangidas/spacetour
29216e663e1ddf939c4fac47f5b5bf6b15662bba
[ "MIT" ]
null
null
null
README.md
shivangidas/spacetour
29216e663e1ddf939c4fac47f5b5bf6b15662bba
[ "MIT" ]
null
null
null
README.md
shivangidas/spacetour
29216e663e1ddf939c4fac47f5b5bf6b15662bba
[ "MIT" ]
null
null
null
# SpaceSearch

### A NASA image search app

To run:

- npm install
- npm start
- http://0.0.0.0:4040

Features:

- User management
- Search images using keywords
- Open details in a modal
- Save/delete images
- Display saved images
- External API access (https://images.nasa.gov/docs/images.nasa.gov_api_docs.pdf)
- APIs for CRUD operations
- 404 handling
17
81
0.731092
kor_Hang
0.721679
b9d4fc8eae8fbab2be88862221eaa3e5f378abed
7,597
md
Markdown
README.md
Affonso-Gui/euslime
0787253bf4761235240c1fd32a7943576a27228c
[ "BSD-3-Clause" ]
6
2019-04-12T08:48:42.000Z
2020-10-28T14:31:34.000Z
README.md
Affonso-Gui/euslime
0787253bf4761235240c1fd32a7943576a27228c
[ "BSD-3-Clause" ]
5
2020-02-27T23:47:23.000Z
2022-01-12T09:46:24.000Z
README.md
jsk-ros-pkg/euslime
0cd9e340aff7bffffadbbd50536f177052555949
[ "BSD-3-Clause" ]
2
2019-10-21T07:40:29.000Z
2020-01-31T07:12:01.000Z
euslime
=======

Slime for EusLisp

## Quick Start

1. Install

    ```bash
    apt install ros-melodic-euslime
    ```

1. Configure your emacs init file

    ```lisp
    ;; ~/.emacs.el
    (add-to-list 'load-path "/opt/ros/melodic/share/euslime")
    (require 'euslime-config)
    (setq inferior-euslisp-program "roseus")
    (slime-setup '(slime-fancy slime-banner slime-repl-ansi-color))
    ```

1. Run

    Open emacs and type the command:

    ```bash
    M-x euslime
    ```

## Build from Source

1. Setup

    ```bash
    # Clone code
    mkdir ~/euslime_ws/src -p
    cd euslime_ws/src
    git clone https://github.com/jsk-ros-pkg/euslime.git

    # Update submodules
    cd euslime
    git submodule init
    git submodule update

    # Install dependencies
    rosdep install -yr --from-paths . --ignore-src
    ```

1. Build

    ```bash
    cd ~/euslime_ws
    catkin config --install
    catkin build
    ```

1. Configure your emacs init file

    ```lisp
    ;; ~/.emacs.el
    (add-to-list 'load-path "~/euslime_ws/install/share/euslime")
    (require 'euslime-config)
    (setq inferior-euslisp-program "roseus")
    (slime-setup '(slime-fancy slime-banner slime-repl-ansi-color))
    ```

1. Run

    Source the package

    ```bash
    source ~/euslime_ws/install/setup.bash
    ```

    Then open emacs and type the command:

    ```bash
    M-x euslime
    ```

## Run tests

```shell
# Run all tests
tests/euslime_tests.py

# RUN TESTS FOR EUSLISP PROGRAM
tests/euslime_tests.py eus
tests/euslime_tests.py irteusgl
tests/euslime_tests.py roseus

# RUN A SINGLE TEST
tests/euslime_tests.py eus.test_eval_1

# RUN MATCHING TESTS
tests/euslime_tests.py eus.test_eval_*
tests/euslime_tests.py *.test_eval_1
```

## Cheat sheet

| On slime buffer | |
| --- | --- |
| [TAB] | completion |
| C-c C-d d | describe/help |
| C-c C-d a | apropos |
| C-c C-d p | apropos package |
| M-. | look for definition |
| C-c [RET] | macroexpansion |
| ,quit | quit session |
| ,restart-inferior-lisp | restart session |

| On editing buffers | |
| --- | --- |
| C-c TAB | completion |
| C-c C-c | load expression |
| C-c C-l | load-file |

| On other slime buffers | |
| --- | --- |
| q | quit buffer |
| [RET] | select option |

## How it Works

Euslime is composed of several layers of software, ultimately connecting the emacs interface to the EusLisp interpreter.

![euslime-communications](https://user-images.githubusercontent.com/20625381/89138044-2cef6d80-d575-11ea-9923-5eac3dd9c8cc.jpg)

**EMACS** acts as the front-end interface, accepting user input and displaying output correspondingly. It also provides a series of modes, commands and key bindings for improved user experience, which are introduced in the original slime framework and slightly customized at [euslime.el](https://github.com/jsk-ros-pkg/euslime/blob/master/euslime.el.in) and [euslime-config.el](https://github.com/jsk-ros-pkg/euslime/blob/master/euslime-config.el).

**SWANK** is the original slime backend, which handles interactions with emacs by delimiting a protocol that translates emacs commands into s-expressions. Such expressions are sent to and from the inferior lisp client (originally Common Lisp, in this case EusLisp) during an exchange which is logged in the `*slime-events*` buffer on emacs. A simple example is given below, illustrating an evaluation request for `(1+ 1)` followed by an output request for `2`.

```
(:emacs-rex (swank-repl:listener-eval "(1+ 1)\n") "COMMON-LISP-USER" :repl-thread 6)
(:write-string "2" :repl-result)
```

**PYTHON WRAPPER** is responsible for encoding and decoding swank commands into actual EusLisp input/output, as well as managing the communications between the swank and the EusLisp layers.
In the above example, this means that the python middleware would forward the expression `(1+ 1)` for evaluation in the EusLisp client, and then wrap the result into a suitable `:write-string` form which is finally sent to emacs in order to display it on the screen. Although such functionality was originally implemented in slime as a part of the native common lisp framework, here we opt to establish an additional python layer due to its ability to (i) handle multithreading; (ii) cope with EusLisp crashes without compromising the emacs session; and (iii) minimize changes done to the EusLisp lexical environment. The python middleware is divided into six files with well-defined functionality, as detailed in the following. [server.py](https://github.com/jsk-ros-pkg/euslime/blob/master/src/euslime/server.py) configures a socket server that communicates with the swank layer, receiving and answering requests. [protocol.py](https://github.com/jsk-ros-pkg/euslime/blob/master/src/euslime/protocol.py) parses incoming s-expressions into python functions, and also defines utilities used to generate common swank forms, such as evaluation results or errors. [handler.py](https://github.com/jsk-ros-pkg/euslime/blob/master/src/euslime/handler.py) holds the actual definitions of the python functions responsible for handling the swank requests. For instance, the `swank_repl_listener_eval` function answers `swank-repl:listener-eval` requests. [bridge.py](https://github.com/jsk-ros-pkg/euslime/blob/master/src/euslime/bridge.py) deals with the communications with the EusLisp layer. The EusLisp interpreter is started as a subprocess and interacts with the python layer through pipes and sockets. Pipes are used to pass user input and program output, while sockets are used to transmit evaluation results and errors, and to process other internal requests while avoiding updating the REPL history and variables.
[logger.py](https://github.com/jsk-ros-pkg/euslime/blob/master/src/euslime/logger.py) configures the logging function, whose output is displayed in the `*inferior-lisp*` emacs buffer. [cli.py](https://github.com/jsk-ros-pkg/euslime/blob/master/src/euslime/cli.py) bundles all of the above and arranges command line arguments for an euslime executable invoked upon startup. Finally, in the **EUSLISP** layer a new package named `SLIME` is defined and a few overwrites are performed in order to establish a framework for properly dealing with socket communications and handler requests. Such functionality is distributed among the following three files: [slime-util.l](https://github.com/jsk-ros-pkg/euslime/blob/master/slime-util.l) gathers utility functions designed to promptly respond to handler requests which would be hard to define solely with python code, such as autodoc and completion. [slime-connection.l](https://github.com/jsk-ros-pkg/euslime/blob/master/slime-connection.l) configures the socket communication framework, defining two different socket channels: one started from another thread, allowing parallel evaluation, and the other added to the top-selector, allowing access to thread special variables. A custom input stream which communicates with swank at every read attempt is also defined here, allowing the slime framework to differentiate user input answering `read` functions from new evaluation requests. [slime-toplevel.l](https://github.com/jsk-ros-pkg/euslime/blob/master/slime-toplevel.l) mainly overwrites lisp repl functions and error handlers so that they can send suitable socket information at each step of the evaluation. It is also responsible for setting up the sockets and streams defined in slime-connection and starting the main evaluation loop. A more detailed explanation can be found in [technical-report.md](https://github.com/jsk-ros-pkg/euslime/blob/master/technical-report.md).
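As a supplement to the protocol exchange shown above: on the wire, swank frames every s-expression with a 6-digit hexadecimal byte-count header before the payload. The sketch below is illustrative only, not euslime's actual implementation:

```python
def encode_swank(form: str) -> bytes:
    """Frame an s-expression with swank's 6-hex-digit length header."""
    payload = form.encode("utf-8")
    return b"%06x" % len(payload) + payload

def decode_swank(data: bytes) -> str:
    """Strip the length header and return the s-expression text."""
    length = int(data[:6], 16)
    return data[6:6 + length].decode("utf-8")

msg = encode_swank('(:write-string "2" :repl-result)')
print(msg)  # b'000020(:write-string "2" :repl-result)'
print(decode_swank(msg))
```

The same framing is applied in both directions, so a reader can pull exactly one message off the socket by first reading the 6-byte header and then the announced number of bytes.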
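The operator-to-handler mapping described above (e.g., `swank-repl:listener-eval` answered by `swank_repl_listener_eval`) can be sketched in Python as follows. The `dispatch` helper and the handler body are hypothetical; only the naming scheme mirrors what the handler layer is described to do:

```python
class Handler:
    # Hypothetical handler stub mirroring the naming scheme described above.
    def swank_repl_listener_eval(self, form):
        return f"evaluating {form!r}"

def dispatch(handler, operator: str, *args):
    """Map a swank operator name onto a python method and call it."""
    name = operator.replace(":", "_").replace("-", "_")
    method = getattr(handler, name, None)
    if method is None:
        raise ValueError(f"unsupported swank operator: {operator}")
    return method(*args)

print(dispatch(Handler(), "swank-repl:listener-eval", "(1+ 1)"))
# prints: evaluating '(1+ 1)'
```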
46.323171
539
0.750954
eng_Latn
0.96919
b9d6b0678ac3ae9c9d2b3583f946cb8a8be6cbc0
2,470
md
Markdown
README.md
M-RaquelCS/todo-challenge
1088ed187e8e87334162858ca71cd884079f45c8
[ "MIT" ]
null
null
null
README.md
M-RaquelCS/todo-challenge
1088ed187e8e87334162858ca71cd884079f45c8
[ "MIT" ]
null
null
null
README.md
M-RaquelCS/todo-challenge
1088ed187e8e87334162858ca71cd884079f45c8
[ "MIT" ]
null
null
null
<img src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F5d4520b6-4a30-4e39-8716-5e534a7bb5bc%2Fcover-reactjs.png?table=block&id=b9f0f025-c95b-4376-99d0-c3115f55b0f1&spaceId=08f749ff-d06d-49a8-a488-9846e081b224&width=1920&userId=&cache=v2" /> <p align='center'> <img src="https://img.shields.io/github/license/M-RaquelCS/To.do?color=%23835afd" alt='license'/> <img src="https://img.shields.io/github/forks/M-RaquelCS/To.do?color=%23835afd" alt='Forks'/> <img src="https://img.shields.io/github/stars/M-RaquelCS/To.do?color=%23835afd" alt='stars'/> </p> # To.do - First Challenge of Chapter One ## About the challenge 🤷 The main goal in this challenge is to train a little on state manipulation in React so that the application is functional; as it is a to-do app, you need to: - Add a new task - Remove a task - Mark and clear a task as completed ## 🖥️ Technologies This project was developed with the following technologies: - JavaScript - TypeScript - ReactJS - Babel - Webpack - SCSS ## 🚀 Getting Started Run on the terminal to install the dependencies: ```bash $ yarn ``` or ```bash $ npm install ``` Run on the terminal to start the application: ```bash $ yarn start ``` or ```bash $ npm start ``` ### 📢 Post about the final outcome of the challenge at this [link](https://www.linkedin.com/posts/maria-raquel-3b27531a5_neverstoplearning-ignite-activity-6776675273942302720-elMU). ## 📃 License ### This project is under the MIT license. --- # 👩🏼‍💻 Author <a href="https://app.rocketseat.com.br/me/m-raquel"> <img style="border-radius: 50%;" src="https://avatars.githubusercontent.com/u/63611614?v=4" width="100px;" alt=""/> <br /> <sub><b>Maria Raquel</b></sub></a> <a href="https://app.rocketseat.com.br/me/m-raquel" title="Rocketseat">🚀</a> Made with ❤️ by Maria Raquel (with the Rocketseat classes) 👋🏽 Get in touch! 
[![Linkedin Badge](https://img.shields.io/badge/-Raquel-blue?style=flat-square&logo=Linkedin&logoColor=white&link=https://www.linkedin.com/in/maria-raquel-3b27531a5/)](https://www.linkedin.com/in/maria-raquel-3b27531a5/) [![Gmail Badge](https://img.shields.io/badge/-Raquel-c14438?style=flat-square&logo=Gmail&logoColor=white&link=mailto:[email protected])](mailto:[email protected]) [![Outlook Badge](https://img.shields.io/badge/-Raquel-0078d4?style=flat-square&logo=microsoft-outlook&logoColor=white&link=mailto:[email protected])](mailto:[email protected])
39.83871
583
0.738057
eng_Latn
0.20723
b9d707e6d34aec6c3737be50bf76d2bbc90ee5e9
18,372
md
Markdown
ballerina/Module.md
Nadeeshan96/module-ballerinax-java.jdbc
64e7aecca873bc6e1a0f5fd24f598a90ba4f6973
[ "Apache-2.0" ]
null
null
null
ballerina/Module.md
Nadeeshan96/module-ballerinax-java.jdbc
64e7aecca873bc6e1a0f5fd24f598a90ba4f6973
[ "Apache-2.0" ]
null
null
null
ballerina/Module.md
Nadeeshan96/module-ballerinax-java.jdbc
64e7aecca873bc6e1a0f5fd24f598a90ba4f6973
[ "Apache-2.0" ]
null
null
null
## Overview This module provides the functionality that is required to access and manipulate data stored in any type of relational database, which is accessible via the Java Database Connectivity (JDBC) API. ### Prerequisite Add the JDBC driver corresponding to the database you are trying to interact with as a native library dependency in your Ballerina project's `Ballerina.toml` file. Use one of the following ways to add the corresponding database JAR to the file: * Download the JAR and update the path ``` [[platform.java11.dependency]] path = "PATH" ``` * Add the JAR with Maven dependency parameters ``` [[platform.java11.dependency]] artifactId = "h2" version = "2.0.206" groupId = "com.h2database" ``` ### Client To access a database, you must first create a [`jdbc:Client`](https://docs.central.ballerina.io/ballerinax/java.jdbc/latest/clients/Client) object. The samples for creating a JDBC client can be found below. #### Creating a Client This sample shows the different ways of creating the `jdbc:Client`. The client can be created by passing the JDBC URL, which is a mandatory property; all other fields are optional. The `jdbc:Client` receives only the database URL. E.g., the DB client creation for an H2 database will be as follows. ```ballerina jdbc:Client|sql:Error dbClient = new ("jdbc:h2:~/path/to/database"); ``` The `jdbc:Client` receives the username and password in addition to the URL. If the properties are passed in the same order as they are defined in the `jdbc:Client`, you can pass them without named parameters. E.g., the DB client creation for an H2 database will be as follows. 
```ballerina jdbc:Client|sql:Error dbClient = new ( "jdbc:h2:~/path/to/database", "root", "root"); ``` In the sample below, the `jdbc:Client` uses the named parameters to pass all the attributes and provides the `options` property in the type of [`jdbc:Options`](https://docs.central.ballerina.io/ballerinax/java.jdbc/latest/records/Options), and also uses the unshared connection pool in the type of [`sql:ConnectionPool`](https://docs.central.ballerina.io/ballerina/sql/latest/records/ConnectionPool). For more information about connection pooling, see the [`sql` module](https://docs.central.ballerina.io/ballerina/sql/latest). E.g., The DB client creation for an H2 database will be as follows. ```ballerina jdbc:Client|sql:Error dbClient = new ( url = "jdbc:h2:~/path/to/database", user = "root", password = "root", options = { datasourceName: "org.h2.jdbcx.JdbcDataSource" }, connectionPool = { maxOpenConnections: 5 } ); ``` The `jdbc:Client` receives some custom properties within the [`jdbc:Options`](https://docs.central.ballerina.io/ballerinax/java.jdbc/latest/records/Options) and those properties will be used by the defined `datasourceName`. As per the provided sample, the `org.h2.jdbcx.JdbcDataSource` datasource will be configured with a `loginTimeout` of `2000` milliseconds. E.g., The DB client creation for an H2 database will be as follows. ```ballerina jdbc:Client|sql:Error dbClient = new ( url = "jdbc:h2:~/path/to/database", user = "root", password = "root", options = { datasourceName: "org.h2.jdbcx.JdbcDataSource", properties: {"loginTimeout": "2000"} } ); ``` You can find more details about each property in the [`jdbc:Client`](https://docs.central.ballerina.io/ballerinax/java.jdbc/latest/clients/Client) constructor. 
The [`jdbc:Client`](https://docs.central.ballerina.io/ballerinax/java.jdbc/latest/clients/Client) references [`sql:Client`](https://docs.central.ballerina.io/ballerina/sql/latest/clients/Client) and all the operations defined by the `sql:Client` will be supported by the `jdbc:Client` as well. #### Connection Pool Handling All database modules share the same connection pooling concept and there are three possible scenarios for connection pool handling. For its properties and possible values, see the [`sql:ConnectionPool`](https://docs.central.ballerina.io/ballerina/sql/latest/records/ConnectionPool). 1. Global, shareable, default connection pool If you do not provide the `connectionPool` field when creating the database client, a globally-shareable pool will be created for your database unless a connection pool matching with the properties you provided already exists. The sample below shows how the global connection pool is used. ```ballerina jdbc:Client|sql:Error dbClient = new ("jdbc:h2:~/path/to/database", "root", "root"); ``` 2. Client-owned, unsharable connection pool If you define the `connectionPool` field inline when creating the database client with the `sql:ConnectionPool` type, an unsharable connection pool will be created. ```ballerina jdbc:Client|sql:Error dbClient = new ("jdbc:h2:~/path/to/database", connectionPool = { maxOpenConnections: 5 }); ``` 3. Local, shareable connection pool If you create a record of the `sql:ConnectionPool` type and reuse that in the configuration of multiple clients, for each set of clients that connect to the same database instance with the same set of properties, a shared connection pool will be used. 
```ballerina sql:ConnectionPool connPool = {maxOpenConnections: 5}; jdbc:Client|sql:Error dbClient1 = new (url = "jdbc:h2:~/path/to/database", connectionPool = connPool); jdbc:Client|sql:Error dbClient2 = new (url = "jdbc:h2:~/path/to/database", connectionPool = connPool); jdbc:Client|sql:Error dbClient3 = new (url = "jdbc:h2:~/path/to/database", connectionPool = connPool); ``` #### Closing the Client Once all the database operations are performed, you can close the client you have created by invoking the `close()` operation. This will close the corresponding connection pool if it is not shared by any other database clients. ```ballerina error? e = dbClient.close(); ``` Or ```ballerina check dbClient.close(); ``` ### Database Operations Once the client is created, database operations can be executed through that client. This module defines the interface and common properties that are shared among multiple database clients. It also supports querying, inserting, deleting, updating, and batch updating data. #### Parameterized Query The `sql:ParameterizedQuery` is used to construct the SQL query to be executed by the client. You can create a query with constant or dynamic input data as follows. *Query with constant values* ```ballerina sql:ParameterizedQuery query = `SELECT * FROM students WHERE id < 10 AND age > 12`; ``` *Query with dynamic values* ```ballerina int[] ids = [10, 50]; int age = 12; sql:ParameterizedQuery query = `SELECT * FROM students WHERE id < ${ids[0]} AND age > ${age}`; ``` Moreover, the SQL package has `sql:queryConcat()` and `sql:arrayFlattenQuery()` util functions which make it easier to create a dynamic/constant complex query. The `sql:queryConcat()` is used to create a single parameterized query by concatenating a set of parameterized queries. The sample below shows how to concatenate queries. 
```ballerina int id = 10; int age = 12; sql:ParameterizedQuery query = `SELECT * FROM students`; sql:ParameterizedQuery query1 = ` WHERE id < ${id} AND age > ${age}`; sql:ParameterizedQuery sqlQuery = sql:queryConcat(query, query1); ``` The query with the `IN` operator can be created using the `sql:ParameterizedQuery` as shown below. Here, you need to flatten the array and pass each element separated by a comma. ```ballerina int[] ids = [1, 2, 3]; sql:ParameterizedQuery query = `SELECT count(*) as total FROM DataTable WHERE row_id IN (${ids[0]}, ${ids[1]}, ${ids[2]})` ``` The `sql:arrayFlattenQuery()` util function is used to make the array flatten easier. It makes the inclusion of varying array elements into the query easier by flattening the array to return a parameterized query. You can construct the complex dynamic query with the `IN` operator by using both functions as shown below. ```ballerina int[] ids = [1, 2]; sql:ParameterizedQuery sqlQuery = sql:queryConcat(`SELECT * FROM DataTable WHERE id IN (`, arrayFlattenQuery(ids), `)`); ``` #### Creating Tables This sample creates a table with three columns. The first column is a primary key of type `int` while the second column is of type `int` and the other is of type `varchar`. The `CREATE` statement is executed via the `execute` remote function of the client. ```ballerina // Create the ‘Students’ table with the ‘id’, ‘name‘, and ‘age’ fields. sql:ExecutionResult result = check dbClient->execute(`CREATE TABLE student ( id INT AUTO_INCREMENT, age INT, name VARCHAR(255), PRIMARY KEY (id) )`); // A value of the `sql:ExecutionResult` type is returned for the 'result'. ``` #### Inserting Data These samples show the data insertion by executing an `INSERT` statement using the `execute` remote function of the client. In this sample, the query parameter values are passed directly into the query statement of the `execute` remote function. 
```ballerina sql:ExecutionResult result = check dbClient->execute(`INSERT INTO student(age, name) VALUES (23, 'john')`); ``` In this sample, the parameter values, which are assigned to local variables are used to parameterize the SQL query in the `execute` remote function. This type of a parameterized SQL query can be used with any primitive Ballerina type such as `string`, `int`, `float`, or `boolean` and in that case, the corresponding SQL type of the parameter is derived from the type of the Ballerina variable that is passed in. ```ballerina string name = "Anne"; int age = 8; sql:ParameterizedQuery query = `INSERT INTO student(age, name) VALUES (${age}, ${name})`; sql:ExecutionResult result = check dbClient->execute(query); ``` In this sample, the parameter values are passed as a `sql:TypedValue` to the `execute` remote function. Use the corresponding subtype of the `sql:TypedValue` such as `sql:VarcharValue`, `sql:CharValue`, `sql:IntegerValue`, etc., when you need to provide more details such as the exact SQL type of the parameter. ```ballerina sql:VarcharValue name = new ("James"); sql:IntegerValue age = new (10); sql:ParameterizedQuery query = `INSERT INTO student(age, name) VALUES (${age}, ${name})`; sql:ExecutionResult result = check dbClient->execute(query); ``` #### Inserting Data With Auto-generated Keys This sample demonstrates inserting data while returning the auto-generated keys. It achieves this by using the `execute` remote function to execute the `INSERT` statement. ```ballerina int age = 31; string name = "Kate"; sql:ParameterizedQuery query = `INSERT INTO student(age, name) VALUES (${age}, ${name})`; sql:ExecutionResult result = check dbClient->execute(query); // Number of rows affected by the execution of the query. int? count = result.affectedRowCount; // The integer or string generated by the database in response to a query execution. string|int? 
generatedKey = result.lastInsertId; ``` #### Querying Data These samples demonstrate the different usages of the `query` operation to query the database table and obtain the results. This sample demonstrates querying data from a table in a database. First, a type is created to represent the returned result set. This record can be defined as an open or a closed record according to the requirement. If an open record is defined, the returned stream type will include both the fields defined in the record and additional database columns fetched by the SQL query which are not defined in the record. Note the mapping of the database column to the returned record's property is case-insensitive if it is defined in the record (i.e., the `ID` column in the result can be mapped to the `id` property in the record). Additional column names are added to the returned record as in the SQL query. If the record is defined as a closed record, only the fields defined in the record are returned, and an error is raised when additional columns are present in the SQL query. Next, the `SELECT` query is executed via the `query` remote function of the client. Once the query is executed, each data record can be retrieved by looping over the result set. The `stream` returned by the `SELECT` operation holds a pointer to the actual data in the database and it loads data from the table only when it is accessed. This stream can be iterated only once. ```ballerina // Define an open record type to represent the results. type Student record { int id; int age; string name; }; // Select the data from the database table. The query parameters are passed // directly. Similar to the `execute` samples, parameters can be passed as // sub types of `sql:TypedValue` as well. int id = 10; int age = 12; sql:ParameterizedQuery query = `SELECT * FROM students WHERE id < ${id} AND age > ${age}`; stream<Student, sql:Error?> resultStream = dbClient->query(query); // Iterating the returned table. 
check from Student student in resultStream do { //Can perform operations using the record 'student' of type `Student`. }; ``` Defining the return type is optional and you can query the database without providing the result type. Hence, the above sample can be modified as follows with an open record type as the return type. The property name in the open record type will be the same as how the column is defined in the database. ```ballerina // Select the data from the database table. The query parameters are passed // directly. Similar to the `execute` samples, parameters can be passed as // sub types of `sql:TypedValue` as well. int id = 10; int age = 12; sql:ParameterizedQuery query = `SELECT * FROM students WHERE id < ${id} AND age > ${age}`; stream<record{}, sql:Error?> resultStream = dbClient->query(query); // Iterating the returned table. check from record{} student in resultStream do { // Can perform operations using the record 'student'. io:println("Student name: ", student.value["name"]); }; ``` There are situations in which you may not want to iterate through the database and in that case, you may decide to use the `queryRow()` operation. If the provided return type is a record, this method returns only the first row retrieved by the query as a record. ```ballerina int id = 10; sql:ParameterizedQuery query = `SELECT * FROM students WHERE id = ${id}`; Student retrievedStudent = check dbClient->queryRow(query); ``` The `queryRow()` operation can also be used to retrieve a single value from the database (e.g., when querying using `COUNT()` and other SQL aggregation functions). If the provided return type is not a record (i.e., a primitive data type) , this operation will return the value of the first column of the first row retrieved by the query. 
```ballerina int age = 12; sql:ParameterizedQuery query = `SELECT COUNT(*) FROM students WHERE age < ${age}`; int youngStudents = check dbClient->queryRow(query); ``` #### Updating Data This sample demonstrates modifying data by executing an `UPDATE` statement via the `execute` remote function of the client. ```ballerina int age = 23; sql:ParameterizedQuery query = `UPDATE students SET name = 'John' WHERE age = ${age}`; sql:ExecutionResult result = check dbClient->execute(query); ``` #### Deleting Data This sample demonstrates deleting data by executing a `DELETE` statement via the `execute` remote function of the client. ```ballerina string name = "John"; sql:ParameterizedQuery query = `DELETE from students WHERE name = ${name}`; sql:ExecutionResult result = check dbClient->execute(query); ``` #### Batch Updating Data This sample demonstrates how to insert multiple records with a single `INSERT` statement that is executed via the `batchExecute` remote function of the client. This is done by creating a `table` with multiple records and parameterized SQL query as same as the above `execute` operations. ```ballerina // Create the table with the records that need to be inserted. var data = [ { name: "John", age: 25 }, { name: "Peter", age: 24 }, { name: "jane", age: 22 } ]; // Do the batch update by passing the batches. sql:ParameterizedQuery[] batch = from var row in data select `INSERT INTO students ('name', 'age') VALUES (${row.name}, ${row.age})`; sql:ExecutionResult[] result = check dbClient->batchExecute(batch); ``` #### Execute SQL Stored Procedures This sample demonstrates how to execute a stored procedure with a single `INSERT` statement that is executed via the `call` remote function of the client. ```ballerina int uid = 10; sql:IntegerOutParameter insertId = new; sql:ProcedureCallResult result = check dbClient->call(`call InsertPerson(${uid}, ${insertId})`); stream<record{}, sql:Error?>? 
resultStr = result.queryResult; if resultStr is stream<record{}, sql:Error?> { check from record{} result in resultStr do { // Can perform operations using the record 'result'. }; } check result.close(); ``` Note that you have to invoke the close operation explicitly on the `sql:ProcedureCallResult` to release the connection resources and avoid a connection leak as shown above. >**Note:** The default thread pool size used in Ballerina is: `the number of processors available * 2`. You can configure the thread pool size by using the `BALLERINA_MAX_POOL_SIZE` environment variable.
41.378378
320
0.701448
eng_Latn
0.988213
b9d76d0f3faf1b3996d24ba659416c800a1da95f
1,952
md
Markdown
cases/os/alpine.md
suridaddy/sample
9e454ac8a9ed0aa3cf68e5730ce61990fd778ddb
[ "Apache-2.0" ]
null
null
null
cases/os/alpine.md
suridaddy/sample
9e454ac8a9ed0aa3cf68e5730ce61990fd778ddb
[ "Apache-2.0" ]
null
null
null
cases/os/alpine.md
suridaddy/sample
9e454ac8a9ed0aa3cf68e5730ce61990fd778ddb
[ "Apache-2.0" ]
null
null
null
## Alpine ### Introduction ![Alpine Linux operating system](_images/alpinelinux-logo.png) The `Alpine` operating system is a security-oriented, lightweight `Linux` distribution. Unlike most `Linux` distributions, `Alpine` adopts `musl libc` and `busybox` to reduce the system's size and runtime resource consumption, while being functionally far more complete than `busybox` alone, which has earned it growing favor in the open-source community. While staying slim, `Alpine` also provides its own package manager, `apk`: you can look up package information on the `https://pkgs.alpinelinux.org/packages` website, or query and install packages directly with the `apk` command. `Alpine` is a `Linux` distribution maintained by a non-commercial organization and supports a wide range of scenarios. It is optimized especially for experienced/heavy `Linux` users, with a focus on security, performance, and resource efficiency. The `Alpine` image fits most common use cases and is an excellent production-ready base system/environment. The `Alpine` Docker image also inherits these strengths of the Alpine Linux distribution. Compared with other `Docker` images, it is tiny, at only about 5 MB (versus nearly 200 MB for the Ubuntu images), and it ships with a very friendly package-management mechanism. The official image comes from the `docker-alpine` project. Docker now officially recommends using `Alpine` in place of the previous `Ubuntu` as the base image environment. This brings several benefits, including faster image downloads, better image security, easier switching between hosts, and less disk usage. The table below compares the sizes of the official images: ```bash REPOSITORY TAG IMAGE ID VIRTUAL SIZE alpine latest 4e38e38c8ce0 4.799 MB debian latest 4d6ce913b130 84.98 MB ubuntu latest b39b81afc8ca 188.3 MB centos latest 8efe422e6104 210 MB ``` ### Getting and Using the Official Image Because the image is so small, the download usually finishes quickly. You can run an `Alpine` container directly with the `docker run` command and specify the Linux command to execute, for example: ```bash $ docker run alpine echo '123' 123 ``` ### Migrating to an `Alpine` Base Image Currently, most official Docker images already support Alpine as a base image, so migration is easy. For example: * ubuntu/debian -> alpine * python:2.7 -> python:2.7-alpine * ruby:2.3 -> ruby:2.3-alpine In addition, if you replace an `Ubuntu` base image with the `Alpine` image, use the apk package manager instead of the apt tool when installing packages, e.g. ```bash $ apk add --no-cache <package> ``` Package names in `Alpine` may differ from those in other distributions; you can search the `https://pkgs.alpinelinux.org/packages` website to determine the exact package name. If a package you need is not in the main index but is in the testing or community index, you can use it as follows. ```bash $ echo "http://dl-4.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories $ apk --update add --no-cache <package> ``` ### Related Resources * `Alpine` website: http://alpinelinux.org/ * `Alpine` official repository: https://github.com/alpinelinux * `Alpine` official image: https://hub.docker.com/_/alpine/ * `Alpine` official image repository: https://github.com/gliderlabs/docker-alpine
30.984127
266
0.701844
yue_Hant
0.941254
b9d78aa2a510e021f212f61af0c42eda14d57b28
596
md
Markdown
CHANGELOG.md
aredev/dns
8d70e2de2b01bfff14e01da61a92cabd243d8837
[ "MIT" ]
null
null
null
CHANGELOG.md
aredev/dns
8d70e2de2b01bfff14e01da61a92cabd243d8837
[ "MIT" ]
null
null
null
CHANGELOG.md
aredev/dns
8d70e2de2b01bfff14e01da61a92cabd243d8837
[ "MIT" ]
1
2021-07-23T01:24:32.000Z
2021-07-23T01:24:32.000Z
# Changelog All notable changes to `dns` will be documented in this file ## 1.4.3 - 2019-11-30 - drop support for PHP 7.3 and below ## 1.4.2 - 2019-05-17 - add support for NAPTR record type - resolve symfony/process deprecation ## 1.4.1 - 2018-12-06 - throw a custom exception when dig fails ## 1.4.0 - 2018-09-13 - add CNAME and SRV record types ## 1.3.1 - 2018-02-20 - fix tests - allow Symfony 4 ## 1.3.0 - 2017-11-29 - add support for `CAA` records ## 1.2.0 - 2017-11-29 - add `useNameserver` ## 1.1.0 - 2017-11-03 - add `getDomain` ## 1.0.0 - 2017-11-03 - initial release
14.190476
60
0.649329
eng_Latn
0.887519
b9d7c0951ec51ae7e6475c23e809441f9ff5caef
1,897
md
Markdown
documentation/Viikkoraportit/Viikkoraportti2.md
SIholin/TiraLabra-LyhyinReitti
880973cf71992d7eff06c798546c3dbc77701ce4
[ "MIT" ]
null
null
null
documentation/Viikkoraportit/Viikkoraportti2.md
SIholin/TiraLabra-LyhyinReitti
880973cf71992d7eff06c798546c3dbc77701ce4
[ "MIT" ]
2
2019-08-22T18:01:51.000Z
2019-08-30T19:52:31.000Z
documentation/Viikkoraportit/Viikkoraportti2.md
SIholin/tiralabra-Labyrintti
880973cf71992d7eff06c798546c3dbc77701ce4
[ "MIT" ]
null
null
null
# Weekly Report 2 At the beginning of the week I decided that I still wanted to change my topic, so I moved from examining different pathfinding algorithms to examining mazes and how to find the shortest possible route through one. Changing the topic and googling around about it took a surprisingly large amount of time. This week I have added the required document files to the documentation folder and filled in parts of them preliminarily. In addition, I have written tests and added more code to the program. An initial text-based user interface has been created in the program, which asks the user for the required information. The first algorithm that finds the shortest route is also more or less finished. That breadth-first search algorithm has been given its own class as well as a helper class, Node. Tests have also been written for the class. This week I learned how to implement a breadth-first search algorithm in a fairly simple way. While googling around, I also learned more about Gradle. Getting started with the coding was difficult. It was hard to figure out how to implement the breadth-first search algorithm. Gradle also caused a lot of headaches at first, and I still have not gotten Checkstyle to work. Implementing the next algorithm is a bit intimidating, even though the previous one ended up working with fairly little effort in the end. Next I intend to implement a new algorithm, possibly A*. The user interface is also to be improved, though it will most likely remain text-based. The interface should get a step that shows the user the shortest possible route, along with the ability for the user to set the start and end positions themselves. The goal is also to create error messages for the user, for example when a letter or a wrong number is accidentally entered, instead of the program simply crashing in those cases. **This week about 15 hours were spent on the work**
135.5
514
0.859251
fin_Latn
1.000008
b9d83b6d7d9c569e7133ca309d173e5c72337d97
155
md
Markdown
README.md
ejozz/dwm-patched
d6f857bddd198437395c74e1d7ce551b6cf07e30
[ "MIT" ]
null
null
null
README.md
ejozz/dwm-patched
d6f857bddd198437395c74e1d7ce551b6cf07e30
[ "MIT" ]
null
null
null
README.md
ejozz/dwm-patched
d6f857bddd198437395c74e1d7ce551b6cf07e30
[ "MIT" ]
null
null
null
# dwm-patched my patched and configured dwm to be used with polybar ### patches * vanitygaps (with fibb) * autostart * anybar * ipc socket
19.375
53
0.651613
eng_Latn
0.997669
b9d86f015de08cf837d317e79e92475f184f455e
1,499
md
Markdown
README.md
jmerle/zeal-user-contrib
7e63567930c66f81f2ab5619aa0091faaf51af5c
[ "MIT" ]
53
2019-11-30T04:00:20.000Z
2022-02-27T13:38:32.000Z
README.md
jmerle/zeal-user-contrib
7e63567930c66f81f2ab5619aa0091faaf51af5c
[ "MIT" ]
4
2020-08-28T09:36:44.000Z
2021-12-18T03:07:43.000Z
README.md
jmerle/zeal-user-contrib
7e63567930c66f81f2ab5619aa0091faaf51af5c
[ "MIT" ]
2
2019-12-06T23:05:03.000Z
2020-12-27T20:24:01.000Z
# Zeal User Contrib [![Build Status](https://github.com/jmerle/zeal-user-contrib/workflows/Build/badge.svg)](https://github.com/jmerle/zeal-user-contrib/actions?query=workflow%3ABuild) [![Version](https://img.shields.io/npm/v/zeal-user-contrib.svg)](https://npmjs.org/package/zeal-user-contrib) [![License](https://img.shields.io/npm/l/zeal-user-contrib.svg)](https://github.com/jmerle/zeal-user-contrib/blob/master/LICENSE) [![oclif](https://img.shields.io/badge/cli-oclif-brightgreen.svg)](https://oclif.io) ![](https://i.imgur.com/Tax0nTT.gif) A convenient CLI to add Dash's User Contributed docsets to Zeal. It automates the process of going to [zealusercontributions.now.sh](https://zealusercontributions.now.sh/), adding the XML feed to Zeal and downloading the icons to the correct directory. ## Install ``` $ npm install --global zeal-user-contrib # or $ yarn global add zeal-user-contrib ``` ## Usage ``` $ zeal-user-contrib --help conveniently add Dash's User Contributed docsets to Zeal USAGE $ zeal-user-contrib OPTIONS -f, --force overwrite existing docsets -h, --help show CLI help -m, --mirror=sanfrancisco|newyork|london|frankfurt the mirror to use, by default a random one is chosen -o, --output-directory=output-directory path to Zeal's docsets directory, overriding the default search for it -v, --version show CLI version ```
41.638889
252
0.679787
eng_Latn
0.43729
b9d95278f0a0778e0d0de3a2bf634df44d25f6c5
3,234
md
Markdown
README.md
runnerty/notifier-slack
06a5883c4535a323e1aff72ac8453f6bb85790c4
[ "MIT" ]
null
null
null
README.md
runnerty/notifier-slack
06a5883c4535a323e1aff72ac8453f6bb85790c4
[ "MIT" ]
1
2021-03-19T17:00:18.000Z
2021-03-19T17:00:18.000Z
README.md
runnerty/notifier-slack
06a5883c4535a323e1aff72ac8453f6bb85790c4
[ "MIT" ]
1
2020-01-23T08:36:49.000Z
2020-01-23T08:36:49.000Z
<p align="center">
  <a href="http://runnerty.io">
    <img height="257" src="https://runnerty.io/assets/header/logo-stroked.png">
  </a>
  <p align="center">Smart Processes Management</p>
</p>

[![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Dependency Status][david-badge]][david-badge-url]
<a href="#badge">
  <img alt="code style: prettier" src="https://img.shields.io/badge/code_style-prettier-ff69b4.svg">
</a>

# Slack notifier for [Runnerty]:

### Installation:

Through NPM

```bash
npm i @runnerty/notifier-slack
```

You can also add modules to your project with [runnerty]

```bash
npx runnerty add @runnerty/notifier-slack
```

This command installs the module in your project, adds example configuration in your [config.json] and creates an example plan of use.

If you have installed [runnerty] globally you can include the module with this command:

```bash
runnerty add @runnerty/notifier-slack
```

### Configuration sample:

Add in [config.json]:

```json
{
  "notifiers": [
    {
      "id": "slack_default",
      "type": "@runnerty-notifier-slack",
      "token": "MY_BOT_TOKEN",
      "bot_name": "Runnerty-Sentinel",
      "channel": "MY_CHANNEL",
      "maxConcurrents": 1,
      "minInterval": 600
    }
  ]
}
```

### Plan sample:

Add in [plan.json]:

- Simple

```json
{
  "id": "slack_default",
  "bot_emoji": ":metal:",
  "channel": "MY_CHANNEL",
  "message": "PROCESS *:PROCESS_ID* OF CHAIN :CHAIN_ID RUNNING!"
}
```

- Attachments

```json
{
  "id": "slack_default",
  "bot_name": "Runnerty Bot",
  "bot_emoji": ":metal:",
  "channel": "MY_CHANNEL",
  "attachments": [
    {
      "fallback": "Required plain-text summary of the attachment.",
      "color": "#36a64f",
      "pretext": "Simple sample pretext",
      "author_name": "Runnerty Bot",
      "author_link": "https://github.com/runnerty/notifier-slackhttp://runnerty.io",
      "author_icon": "https://runnerty.io/assets/header/logo-stroked.png",
      "title": "Slack attachment sample",
      "title_link": "https://api.slack.com/docs/messages/builder",
      "text": "More info",
      "fields": [
        {
          "title": "Priority",
          "value": "High",
          "short": false
        }
      ],
      "image_url": "http://my-website.com/path/to/image.jpg",
      "thumb_url": "https://runnerty.io/assets/header/logo-stroked.png",
      "footer": "Runnerty Notifier Slack Sample",
      "footer_icon": "https://runnerty.io/assets/header/logo-stroked.png"
    }
  ]
}
```

- Upload File

```json
{
  "id": "slack_default",
  "bot_emoji": ":metal:",
  "channel": "MY_CHANNEL",
  "message": "PROCESS *:PROCESS_ID* OF CHAIN :CHAIN_ID RUNNING!",
  "file": "./resume.csv"
}
```

[runnerty]: https://www.runnerty.io
[downloads-image]: https://img.shields.io/npm/dm/@runnerty/notifier-slack.svg
[npm-url]: https://www.npmjs.com/package/@runnerty/notifier-slack
[npm-image]: https://img.shields.io/npm/v/@runnerty/notifier-slack.svg
[david-badge]: https://david-dm.org/runnerty/notifier-slack.svg
[david-badge-url]: https://david-dm.org/runnerty/notifier-slack
[config.json]: https://docs.runnerty.io/config/
[notifiers]: https://docs.runnerty.io/notifiers
[plan.json]: https://docs.runnerty.io/plan/
25.265625
134
0.646568
yue_Hant
0.218383
b9d95c326567af5f23483940e43894b640bbabcd
319
md
Markdown
README.md
MAXLZ1/emp-plugin-babel-vue-3
ca702cd554e13ed98a3cc80cdca37316fe8252ed
[ "MIT" ]
null
null
null
README.md
MAXLZ1/emp-plugin-babel-vue-3
ca702cd554e13ed98a3cc80cdca37316fe8252ed
[ "MIT" ]
null
null
null
README.md
MAXLZ1/emp-plugin-babel-vue-3
ca702cd554e13ed98a3cc80cdca37316fe8252ed
[ "MIT" ]
null
null
null
# emp-plugin-babel-vue-3

> EMP (v2) Vue 3 plugin for Babel.

## Use

`emp.config.js`

```js
const { defineConfig, empStore } = require('@efox/emp')
const PluginBabelVue3 = require('emp-plugin-babel-vue-3')
module.exports = defineConfig(({ mode, env }) => {
  return {
    ...,
    plugins: [PluginBabelVue3]
  }
})
```
16.789474
57
0.626959
kor_Hang
0.115154
b9d99268a9e90eda23ac157e3771a8c0196f4088
5,215
md
Markdown
CONTRIBUTING.md
JarvisG495/skywalking-cli
b3e40234952ffd69dfe767bb0b18f0e6e40d2cad
[ "Apache-2.0" ]
71
2019-11-01T06:58:20.000Z
2022-03-31T17:35:02.000Z
CONTRIBUTING.md
JarvisG495/skywalking-cli
b3e40234952ffd69dfe767bb0b18f0e6e40d2cad
[ "Apache-2.0" ]
80
2019-11-06T13:46:09.000Z
2022-03-24T10:57:13.000Z
CONTRIBUTING.md
JarvisG495/skywalking-cli
b3e40234952ffd69dfe767bb0b18f0e6e40d2cad
[ "Apache-2.0" ]
48
2019-11-06T04:49:18.000Z
2022-01-21T07:59:09.000Z
# Contributing to Apache SkyWalking CLI

Firstly, thanks for your interest in contributing! We hope that this will be a pleasant first experience for you, and that you will return to continue contributing.

## Code of Conduct

This project and everyone participating in it is governed by the Apache Software Foundation's [Code of Conduct](http://www.apache.org/foundation/policies/conduct.html). By participating, you are expected to adhere to this code. If you are aware of unacceptable behavior, please visit the [Reporting Guidelines page](http://www.apache.org/foundation/policies/conduct.html#reporting-guidelines) and follow the instructions there.

## How to contribute?

Most of the contributions that we receive are code contributions, but you can also contribute to the documentation or simply report solid bugs for us to fix.

## How to report a bug?

* **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/apache/skywalking/issues).
* If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/apache/skywalking/issues/new). Be sure to include a **title and clear description**, as much relevant information as possible, and a **code sample** or an **executable test case** demonstrating the expected behavior that is not occurring.

## How to add a new feature or change an existing one

_Before making any significant changes, please [open an issue](https://github.com/apache/skywalking/issues)._ Discussing your proposed changes ahead of time will make the contribution process smooth for everyone.

Once we've discussed your changes and you've got your code ready, make sure that tests are passing and open your pull request. Your PR is most likely to be accepted if it:

* Updates the README.md with details of changes to the interface.
* Includes tests for new functionality.
* References the original issue in the description, e.g. "Resolves #123".
* Has a [good commit message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).

## Compiling and building

Clone the source code and simply run `make` in the source directory. This will download all necessary dependencies, run tests and lint, and build three binary files in `./bin/`, for Windows, Linux, and macOS respectively.

```shell
make
```

## Writing a new command

All command files live in the [`commands`](internal/commands) directory, with an individual directory for each second-level command and an individual `go` file for each third-level command. For example, there is a directory [`service`](internal/commands/service) for the command `swctl service`, and a [`list.go`](internal/commands/service/list.go) file for the `swctl service list` command.

Determine what entity your command will operate on, and put your command `go` file into that directory, or create one if it doesn't exist. For example, if you want to create a command to `list` all the `instance`s of a service, create a directory `commands/instance`, and a `go` file `commands/instance/list.go`.

## Reusing common options

There are some [common options](#common-options) that can be shared by multiple commands. Check [`commands/flags`](internal/flags) to get all the shared options, and reuse them when possible. An example that shares the options is [`commands/service/list.go`](internal/commands/service/list.go#L35).

## Linting your codes

We have some rules for the code style, so please lint your codes locally before opening a pull request:

```shell
make lint
```

If you find some errors in the output of the above command, try `make fix` to fix the obvious style issues; as for the more complicated errors, please fix them manually.

## Checking license

The Apache Software Foundation requires every source file to contain a license header. Run `make license` to check that there is a license header in every source file.

```shell
make license
```

## Running tests

Before submitting a pull request, add some test code to test the added/modified code, and run the tests locally to make sure all tests pass.

```shell
make test
```

## How to release

This section guides committers and PMC members to release SkyWalking CLI in the Apache Way.

### Prerequisites

- [x] [GNU Make](https://www.gnu.org/software/make/manual/make.html) is installed
- [x] [GPG tool](https://gpgtools.org) is installed
- [x] [Add your GPG key](docs/How-to-release.md#add-your-gpg-public-key)

### Release steps

- Export the version that is to be released, `export VERSION=1.0.1`
- Tag the latest commit that is to be released with `git tag v${VERSION}` and push the tag with `git push https://github.com/apache/skywalking-cli v${VERSION}`
- Verify licenses, build and sign distribution packages; simply run `make release`, and distribution packages and checksums are generated
- [Upload the packages to SVN repository](docs/How-to-release.md#upload-to-apache-svn)
- [Send internal announcement](docs/How-to-release.md#make-the-internal-announcements)
- [Wait at least 48 hours for test responses](docs/How-to-release.md#wait-at-least-48-hours-for-test-responses)
- [Call for vote](docs/How-to-release.md#call-a-vote-in-dev)
- [Publish release](docs/How-to-release.md#publish-release)
43.458333
120
0.767593
eng_Latn
0.995406
b9d9dac1d0dee875ef5aada091e9ded9bdfb30c4
8,240
md
Markdown
articles/storage/files/files-smb-protocol.md
GennadNY/azure-docs
b0438ddb858865b6f9ba2d389af0f4140877ad28
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/files/files-smb-protocol.md
GennadNY/azure-docs
b0438ddb858865b6f9ba2d389af0f4140877ad28
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/files/files-smb-protocol.md
GennadNY/azure-docs
b0438ddb858865b6f9ba2d389af0f4140877ad28
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: SMB file shares in Azure Files
description: Learn about file shares hosted in Azure Files using the Server Message Block (SMB) protocol.
author: roygara
ms.service: storage
ms.topic: conceptual
ms.date: 06/06/2021
ms.author: rogarana
ms.subservice: files
---

# SMB file shares in Azure Files

Azure Files offers two industry-standard protocols for mounting Azure file shares: the [Server Message Block (SMB)](https://msdn.microsoft.com/library/windows/desktop/aa365233.aspx) protocol and the [Network File System (NFS)](https://en.wikipedia.org/wiki/Network_File_System) protocol (preview). Azure Files enables you to pick the file system protocol that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file share with both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same storage account. For all file shares, Azure Files offers enterprise-grade file shares that can scale up to meet your storage needs and can be accessed concurrently by thousands of clients.

This article covers SMB Azure file shares. For information about NFS Azure file shares, see [NFS file shares in Azure Files](files-nfs-protocol.md).

## Common scenarios

SMB file shares are used for a variety of applications, including end-user file shares and file shares that back databases and applications. SMB file shares are often used in the following scenarios:

- End-user file shares such as team shares, home directories, etc.
- Backing storage for Windows-based applications, such as SQL Server databases or line-of-business applications written for Win32 or .NET local file system APIs.
- New application and service development, particularly if that application or service has a requirement for random IO and hierarchical storage.

## Features

Azure Files supports the major features of SMB and Azure needed for production deployments of SMB file shares:

- AD domain join and discretionary access control lists (DACLs).
- Integrated serverless backup with Azure Backup.
- Network isolation with Azure private endpoints.
- High network throughput using SMB Multichannel (premium file shares only).
- SMB channel encryption including AES-256-GCM, AES-128-GCM, and AES-128-CCM.
- Previous version support through VSS integrated share snapshots.
- Automatic soft delete on Azure file shares to prevent accidental deletes.
- Optionally internet-accessible file shares with internet-safe SMB 3.0+.

SMB file shares can be mounted directly on-premises or can also be [cached on-premises with Azure File Sync](../file-sync/file-sync-introduction.md).

## Security

All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the SMB and NFS protocols.

By default, all Azure storage accounts have encryption in transit enabled. This means that when you mount a file share over SMB (or access it via the FileREST protocol), Azure Files will only allow the connection if it is made with SMB 3.x with encryption or HTTPS. Clients that do not support SMB 3.x with SMB channel encryption will not be able to mount the Azure file share if encryption in transit is enabled. Azure Files supports industry-leading AES-256-GCM with SMB 3.1.1 when used with Windows 10, version 21H1. SMB 3.1.1 also supports AES-128-GCM, and SMB 3.0 supports AES-128-CCM. AES-128-GCM is negotiated by default on Windows 10, version 21H1 for performance reasons.

You can disable encryption in transit for an Azure storage account. When encryption is disabled, Azure Files will also allow SMB 2.1 and SMB 3.x without encryption. The primary reason to disable encryption in transit is to support a legacy application that must be run on an older operating system, such as Windows Server 2008 R2 or an older Linux distribution. Azure Files only allows SMB 2.1 connections within the same Azure region as the Azure file share; an SMB 2.1 client outside of the Azure region of the Azure file share, such as on-premises or in a different Azure region, will not be able to access the file share.

## SMB protocol settings

Azure Files offers multiple settings for the SMB protocol:

- **SMB Multichannel**: Enable/disable SMB Multichannel (premium file shares only). To learn how to enable SMB Multichannel, see [Enable SMB Multichannel on a FileStorage storage account](storage-files-enable-smb-multichannel.md).
- **SMB versions**: Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0, and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure transit" is enabled, since SMB 2.1 does not support encryption in transit.
- **Authentication methods**: Which SMB authentication methods are allowed. Supported authentication methods are NTLMv2 and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2 disallows using the storage account key to mount the Azure file share.
- **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are RC4-HMAC and AES-256.
- **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.

SMB protocol settings can be toggled via the Azure PowerShell module. To change the SMB protocol settings, you must [install the 3.7.1-preview version](https://www.powershellgallery.com/packages/Az.Storage/3.7.1-preview) of the Azure Storage PowerShell module.

Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment before running these PowerShell commands.

```PowerShell
$resourceGroupName = "<resource-group>"
$storageAccountName = "<storage-account>"

Update-AzStorageFileServiceProperty `
    -ResourceGroupName $resourceGroupName `
    -StorageAccountName $storageAccountName `
    -SmbAuthenticationMethod "Kerberos" `
    -SmbChannelEncryption "AES-256-GCM" `
    -SmbKerberosTicketEncryption "AES-256" `
    -SmbProtocolVersion "SMB3.1.1"
```

## Limitations

SMB file shares in Azure Files support a subset of the features supported by the SMB protocol and the NTFS file system. Although most use cases and applications do not require these features, some applications may not work properly with Azure Files if they rely on unsupported features. The following features are not supported:

- [SMB Direct](/windows-server/storage/file-server/smb-direct)
- SMB directory leasing
- [VSS for SMB file shares](/archive/blogs/clausjor/vss-for-smb-file-shares) (this enables VSS providers to flush their data to the SMB file share before a snapshot is taken)
- Alternate data streams
- Extended attributes
- Object identifiers
- [Hard links](/windows/win32/fileio/hard-links-and-junctions)
- [Soft links](/windows/win32/fileio/creating-symbolic-links)
- [Reparse points](/windows/win32/fileio/reparse-points)
- [Sparse files](/windows/win32/fileio/sparse-files)
- [Short file names (8.3 alias)](/windows/win32/fileio/naming-a-file#short-vs-long-names)
- [Compression](https://techcommunity.microsoft.com/t5/itops-talk-blog/smb-compression-deflate-your-io/ba-p/1183552)

## Regional availability

SMB Azure file shares are available in every Azure region, including all public and sovereign regions. Premium SMB file shares are available in [a subset of regions](https://azure.microsoft.com/global-infrastructure/services/?products=storage).

## Next steps

- [Plan for an Azure Files deployment](storage-files-planning.md)
- [Create an Azure file share](storage-how-to-create-file-share.md)
- Mount SMB file shares on your preferred operating system:
  - [Mounting SMB file shares on Windows](storage-how-to-use-files-windows.md)
  - [Mounting SMB file shares on Linux](storage-how-to-use-files-linux.md)
  - [Mounting SMB file shares on macOS](storage-how-to-use-files-mac.md)
80.784314
756
0.789806
eng_Latn
0.986975
b9da3edb075f0bf5123d579898839810c4bcba60
3,055
md
Markdown
src/api/dcps/docs/DCPS_QoS_Scheduling.md
brezillon/opensplice
725ae9d949c83fce1746bd7d8a154b9d0a81fe3e
[ "Apache-2.0" ]
133
2017-11-09T02:10:00.000Z
2022-03-29T09:45:10.000Z
src/api/dcps/docs/DCPS_QoS_Scheduling.md
brezillon/opensplice
725ae9d949c83fce1746bd7d8a154b9d0a81fe3e
[ "Apache-2.0" ]
131
2017-11-07T14:48:43.000Z
2022-03-13T15:30:47.000Z
src/api/dcps/docs/DCPS_QoS_Scheduling.md
brezillon/opensplice
725ae9d949c83fce1746bd7d8a154b9d0a81fe3e
[ "Apache-2.0" ]
94
2017-11-09T02:26:19.000Z
2022-02-24T06:38:25.000Z
Scheduling QoS {#DCPS_QoS_Scheduling}
==============

This QosPolicy specifies the scheduling parameters that will be used for a thread that is spawned by the \ref DCPS_Modules_DomainParticipant "DomainParticipant".

*NOTE:* Some scheduling parameters may not be supported by the underlying Operating System, or you may need special privileges to select particular settings.

*NOTE:* This is an OpenSplice-specific QosPolicy, it is not part of the DDS Specification.

Attributes
----------
<table>
    <tr>
        <th>Value</th>
        <th>Meaning</th>
        <th>Concerns</th>
        <th>RxO</th>
        <th>Changeable</th>
    </tr>
    <tr>
        <td>
            A SchedulingClassQosPolicyKind:<br>
            scheduling_class.kind
        </td>
        <td>
            Specifies the scheduling class used by the Operating System, which may be SCHEDULE_DEFAULT, SCHEDULE_TIMESHARING or SCHEDULE_REALTIME. Threads can only be spawned within the scheduling classes that are supported by the underlying Operating System.
        </td>
        <td rowspan="3">\ref DCPS_Modules_DomainParticipant "DomainParticipant"</td>
        <td rowspan="3">N/A</td>
        <td rowspan="3">No</td>
    </tr>
    <tr>
        <td>
            A SchedulingPriorityQosPolicyKind:<br>
            scheduling_priority_kind.kind
        </td>
        <td>
            Specifies the priority type, which may be either PRIORITY_RELATIVE or PRIORITY_ABSOLUTE.
        </td>
    </tr>
    <tr>
        <td>
            A long:<br>
            scheduling_priority
        </td>
        <td>
            Specifies the priority that will be assigned to threads spawned by the \ref DCPS_Modules_DomainParticipant "DomainParticipant". Threads can only be spawned with priorities that are supported by the underlying Operating System.
        </td>
    </tr>
</table>

This QosPolicy specifies the scheduling parameters that will be used for threads spawned by the \ref DCPS_Modules_DomainParticipant "DomainParticipant". Note that some scheduling parameters may not be supported by the underlying Operating System, or you may need special privileges to select particular settings. Refer to the documentation of your OS for more details on this subject.

Although the behaviour of the scheduling_class is highly dependent on the underlying OS, in general it can be said that when running in a Timesharing class your thread will have to yield execution to other threads of equal priority regularly. In a Realtime class your thread normally runs until completion, and can only be pre-empted by higher priority threads. Often the highest range of priorities is not accessible through a Timesharing class.

The scheduling_priority_kind determines whether the specified scheduling_priority should be interpreted as an absolute priority, or whether it should be interpreted relative to the priority of its creator, in this case the priority of the thread that created the \ref DCPS_Modules_DomainParticipant "DomainParticipant".
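To illustrate the difference between the two priority kinds, here is a minimal sketch; `effective_priority` is a hypothetical helper for illustration only, not part of the OpenSplice API:

```python
def effective_priority(kind: str, scheduling_priority: int, creator_priority: int) -> int:
    """Resolve the priority a spawned thread would get (illustrative only).

    PRIORITY_ABSOLUTE: scheduling_priority is used as-is.
    PRIORITY_RELATIVE: scheduling_priority is an offset added to the
    priority of the creator thread (the one that created the
    DomainParticipant).
    """
    if kind == "PRIORITY_ABSOLUTE":
        return scheduling_priority
    if kind == "PRIORITY_RELATIVE":
        return creator_priority + scheduling_priority
    raise ValueError(f"unknown priority kind: {kind}")
```

For example, with a creator thread at priority 10, a relative scheduling_priority of 5 yields an effective priority of 15, while an absolute scheduling_priority of 5 yields 5.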
39.675325
259
0.708347
eng_Latn
0.99806
b9da854c19f8488e3286f2507dfce169f1a6d84c
2,184
markdown
Markdown
_posts/2018-06-27-lsm.markdown
Lyusungwon/lyusungwon.github.com
4f49f4c7d4b0014a4885ee94710214c642fc1607
[ "MIT" ]
1
2020-08-30T23:36:23.000Z
2020-08-30T23:36:23.000Z
_posts/2018-06-27-lsm.markdown
Lyusungwon/lyusungwon.github.com
4f49f4c7d4b0014a4885ee94710214c642fc1607
[ "MIT" ]
1
2021-03-09T17:32:25.000Z
2021-03-09T17:32:25.000Z
_posts/2018-06-27-lsm.markdown
Lyusungwon/lyusungwon.github.com
4f49f4c7d4b0014a4885ee94710214c642fc1607
[ "MIT" ]
7
2019-05-20T08:37:12.000Z
2020-08-30T23:36:24.000Z
---
layout: post
title: "Learning to See by Moving"
date: 2018-06-28 08:55:59
author: Sungwon Lyu
categories: studies
tags: deep-learning computer-vision
---

## WHY?

Feature learning in Convolutional Neural Networks requires a lot of hand-labeled data. It would be useful if one could use other forms of supervision. In the natural world, organisms acquire much essential visual information by moving themselves (egomotion).

## WHAT?

This paper tried to prove that the elementary features learned by egomotion are as useful as those learned with labeled data, by comparing the performance of models that have been pre-trained with each. To learn the features with egomotion, this paper suggests a two-stream architecture that consists of a Base-CNN (BCNN) and a Top-CNN (TCNN), trained to classify the type of transformation.

![image](/assets/images/lsm1.png){: .body-image}

This paper suggests Slow Feature Analysis (SFA) as a baseline. Based on the principle that useful features change slowly in time, the following contrastive loss makes the model learn features that change slowly:

$$L(x_{t_1}, x_{t_2}, W) = \begin{cases} D(x_{t_1}, x_{t_2}) & \text{if } |t_1 - t_2| \leq T \\ \max(0, m - D(x_{t_1}, x_{t_2})) & \text{if } |t_1 - t_2| > T \end{cases}$$

## So?

This paper proved its concept by validating elementary features learned on MNIST with translations. Then, the model learned elementary features using the KITTI and SF datasets.

![image](/assets/images/lsm2.png){: .body-image}

The pretrained model showed results comparable to those of AlexNet trained from scratch, or of features learned with SFA, on scene recognition, object recognition, intra-class keypoint matching and visual odometry. Learned features may differ depending on the dataset used for training.

## Critic

Very intuitive idea. I think not only elementary features, but also mid-level features can be learned from moving (like entities). Something worth thinking about.

[Agrawal, Pulkit, Joao Carreira, and Jitendra Malik. "Learning to see by moving." Computer Vision (ICCV), 2015 IEEE International Conference on. IEEE, 2015.](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Agrawal_Learning_to_See_ICCV_2015_paper.pdf)
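The contrastive SFA loss discussed in the post can be sketched in a few lines of Python/NumPy. This is an illustrative implementation, assuming squared L2 distance for $D$ and a per-pair (non-batched) loss:

```python
import numpy as np

def sfa_contrastive_loss(f1, f2, t1, t2, T=1, m=1.0):
    """Contrastive SFA loss on a pair of feature vectors f1, f2 taken
    at times t1, t2: temporally close pairs are pulled together,
    temporally distant pairs are pushed at least a margin m apart."""
    d = float(np.sum((np.asarray(f1) - np.asarray(f2)) ** 2))  # D(x_t1, x_t2)
    if abs(t1 - t2) <= T:       # temporally close pair: minimize distance
        return d
    return max(0.0, m - d)      # distant pair: hinge, penalize only if closer than margin
```

For a distant pair with identical features the loss equals the margin m, while a distant pair already farther apart than m contributes zero, which is the standard contrastive hinge behavior.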
80.888889
385
0.775183
eng_Latn
0.994124
b9dab553164867eb8099a13952d857a5142c116b
79,748
md
Markdown
packages/common/CHANGELOG.md
ghiscoding/slickgrid-universal
8d6e16b065274a10c6b8d9410397f1d1ac20fe18
[ "MIT" ]
25
2020-03-13T00:20:37.000Z
2022-03-08T04:28:41.000Z
packages/common/CHANGELOG.md
ghiscoding/slickgrid-universal
8d6e16b065274a10c6b8d9410397f1d1ac20fe18
[ "MIT" ]
597
2020-05-29T15:47:45.000Z
2022-03-30T15:25:15.000Z
packages/common/CHANGELOG.md
ghiscoding/slickgrid-universal
8d6e16b065274a10c6b8d9410397f1d1ac20fe18
[ "MIT" ]
9
2021-02-18T17:04:57.000Z
2022-01-18T00:41:55.000Z
# Change Log

All notable changes to this project will be documented in this file. See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.

# [0.18.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.17.0...v0.18.0) (2021-09-29)

### Bug Fixes

* **context:** Copy Cell via Context Menu shouldn't include Tree symbols ([f710084](https://github.com/ghiscoding/slickgrid-universal/commit/f710084c06cd47d900daccd389de131209e19163))
* **filters:** css "filled" class on filters should also work w/Grid View ([e8edae7](https://github.com/ghiscoding/slickgrid-universal/commit/e8edae79bcd5c28438203e269d26f107e26c4ae5))
* **resizer:** clear pending resizeGrid on dispose ([07ed6a0](https://github.com/ghiscoding/slickgrid-universal/commit/07ed6a0390f235341b116d981aa4ee84719b029b))
* **resizer:** only bind autoresize when enabled ([ca894c0](https://github.com/ghiscoding/slickgrid-universal/commit/ca894c0a83b5762a42b703f28fc59bdb38e01944))
* **styling:** List bullets shouldn't show in any frameworks, fixes [#487](https://github.com/ghiscoding/slickgrid-universal/issues/487) ([53ea537](https://github.com/ghiscoding/slickgrid-universal/commit/53ea5379c6109383630362717b980a1dbe099681))
* **tree:** when Tree Data is filtered then Sort, footer count is invalid ([4f5fc44](https://github.com/ghiscoding/slickgrid-universal/commit/4f5fc443fbc7a0ab3cbe46722fc6bd85fd4b1594))

### Features

* **context:** expose 3 events for Tree/Grouping clear/collapse/expand ([317f3ad](https://github.com/ghiscoding/slickgrid-universal/commit/317f3ad443f8ac81c7cacacaec6d38553bec147b))
* **Resizer:** add useResizeObserver option ([bb33cdd](https://github.com/ghiscoding/slickgrid-universal/commit/bb33cdd716834913846ab2fcf74a84f8424acf92))
* **sorts:** option to ignore accent while sorting text ([1b4fe81](https://github.com/ghiscoding/slickgrid-universal/commit/1b4fe81d613b780aefcc0ba3e7b16c20eaebd0aa))
* **styling:** increase highlight of filters that are filled w/values ([8f93534](https://github.com/ghiscoding/slickgrid-universal/commit/8f9353418190ee3e11aca65d1a57fa4204331011))
* **tree:** new `excludeChildrenWhenFilteringTree` set as new default ([47df943](https://github.com/ghiscoding/slickgrid-universal/commit/47df943414f383a47062a7ad9245700a1bd8a24e))

# [0.17.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.16.2...v0.17.0) (2021-09-09)

### Bug Fixes

* **filters:** IN_CONTAINS should be sanitized when used with html ([961d8fd](https://github.com/ghiscoding/slickgrid-universal/commit/961d8fd7ea6f915dd8f0749d0329219b82923fea))
* **filters:** remove Filters from DOM after header row gets destroyed ([b08d4ba](https://github.com/ghiscoding/slickgrid-universal/commit/b08d4ba070ec9d9d131d6830e4625e6ef950ac09))
* **grouping:** Draggable Grouping should clear preheader when called ([37811a5](https://github.com/ghiscoding/slickgrid-universal/commit/37811a51d2af04e78aedc88ff5d8eae8a622ac40))
* **resizer:** regression introduced by [#462](https://github.com/ghiscoding/slickgrid-universal/issues/462) for the grid resize in SF ([f34d8b9](https://github.com/ghiscoding/slickgrid-universal/commit/f34d8b9678c7ee9e76534a7f7ffdf2c4d7f9f772))
* **resizer:** resizer not always triggered in SF and show broken UI ([89fc62e](https://github.com/ghiscoding/slickgrid-universal/commit/89fc62eff7fac8b5cf43b3b6acd7590ed84288f6))
* **state:** don't use previous columns ref when getting current cols ([f312c60](https://github.com/ghiscoding/slickgrid-universal/commit/f312c60349d5bc95527ec93cb752f449d1c761f7))
* **styling:** add ms-select placeholder bg-color to fix Bootstrap 5 ([2c34d12](https://github.com/ghiscoding/slickgrid-universal/commit/2c34d1229c14bd36bd034062cc7eb7a7cbe1bf5c))
* **styling:** add ms-select placeholder bg-color to fix Bootstrap 5 ([5d6454e](https://github.com/ghiscoding/slickgrid-universal/commit/5d6454e9f175b8694f372a7e26492ae573eb918f))

### Features

* **aggregators:** add better TS typing for all Aggregators ([1518d6a](https://github.com/ghiscoding/slickgrid-universal/commit/1518d6aef194f184390316f8421f51d23a1d470a))
* **backend:** add cancellable onBeforeSearchChange & revert on error ([b26a53d](https://github.com/ghiscoding/slickgrid-universal/commit/b26a53d2e1fc7172c8c054b9c27ab1b3a2d3dff6))
* **backend:** add cancellable onBeforeSort & revert sort on error ([958f823](https://github.com/ghiscoding/slickgrid-universal/commit/958f823a6bffedc2c146c7c68d49a29419812995))
* **backend:** add cancellable Pagination change & revert on error ([7a8d903](https://github.com/ghiscoding/slickgrid-universal/commit/7a8d9038f230ba433f2773c02992a211a322ebd4))
* **composite:** move SlickGrid Composite Editor factory into universal ([c813cea](https://github.com/ghiscoding/slickgrid-universal/commit/c813ceac1ed6535963df15e7933a444de3a8790a))
* **editors:** add Ctrl+S combo to enhance LongText (textarea) Editor ([5116bbd](https://github.com/ghiscoding/slickgrid-universal/commit/5116bbd9e837a3bbd9835b10b2167edf3561cd3d))
* **filters:** option to ignore accent while filtering text, closes [#470](https://github.com/ghiscoding/slickgrid-universal/issues/470) ([cba9a4e](https://github.com/ghiscoding/slickgrid-universal/commit/cba9a4e4d12b6dfaaec06af5edf4c629b2943feb))
* **sanitize:** make sure any string sent to innerHtml are sanitized ([fe55046](https://github.com/ghiscoding/slickgrid-universal/commit/fe550461d27d01cb5c54d93812db82fa7213f96b))
* **styling:** only show header menu caret when hovering ([41e7856](https://github.com/ghiscoding/slickgrid-universal/commit/41e7856f9483f7228d1455f2e3810ae58a5f5c8d))
* **tree:** add `dynamicallyToggledItemState` method to toggle parent(s) ([26369f9](https://github.com/ghiscoding/slickgrid-universal/commit/26369f9b6c9e81ad5705f580896ab28cf362d090))

## [0.16.2](https://github.com/ghiscoding/slickgrid-universal/compare/v0.16.1...v0.16.2) (2021-07-23)

### Bug Fixes

* **formatters:** Complex Object Formatter shouldn't throw with null data ([3421465](https://github.com/ghiscoding/slickgrid-universal/commit/342146557c16b560b5b8ef0f0e47f971179bc765))
* **tree:** exclude the correct type from interface argument ([af51784](https://github.com/ghiscoding/slickgrid-universal/commit/af51784aa3471dcc88c567f4c3762ab7590184f6))

## [0.16.1](https://github.com/ghiscoding/slickgrid-universal/compare/v0.16.0...v0.16.1) (2021-07-16)

### Bug Fixes

* **filters:** startsWith/endsWith operator should work ([f99f1c5](https://github.com/ghiscoding/slickgrid-universal/commit/f99f1c56c27b3e192b829b83a5fde6aad9efc3e7))

# [0.16.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.15.0...v0.16.0) (2021-07-16)

### Bug Fixes

* **filter:** refreshTreeDataFilters only when Tree is enabled ([07c70d5](https://github.com/ghiscoding/slickgrid-universal/commit/07c70d5d17dab464cefb1046c72abbd41da4c834))
* **filters:** always find locale even without TranslaterService ([c4b17c4](https://github.com/ghiscoding/slickgrid-universal/commit/c4b17c4f51ba6f80b907dab0fd0493a8b0944908))
* **styling:** remove css variable on width causing UX problem ([df69f9c](https://github.com/ghiscoding/slickgrid-universal/commit/df69f9c33604187f91adaf5bb8b43b6abd624d32))

### Features

* **aria:** add aria-label to all Editors/Filters & other html templates ([1a4f8f7](https://github.com/ghiscoding/slickgrid-universal/commit/1a4f8f7873d76b7da5a7d38debed598d3d395c10))
* make constructor arguments as readonly ([a4588ea](https://github.com/ghiscoding/slickgrid-universal/commit/a4588ea5722ae44b647b8c0d02cf8e2a60ff5963))
* **services:** make everything extendable by using `protected` ([ecbb93a](https://github.com/ghiscoding/slickgrid-universal/commit/ecbb93a56abba39dd050bbd6019b86694495edd1))
* **styling:** add support for CSS Variables ([674dd1a](https://github.com/ghiscoding/slickgrid-universal/commit/674dd1a064d4d42af1d5841ac87ba8ea35a26b2f))

# [0.15.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.14.1...v0.15.0) (2021-07-06)

### Bug Fixes

* **addon:** providing columnIndexPosition should always work ([42c8cff](https://github.com/ghiscoding/slickgrid-universal/commit/42c8cff7dd6cf9103149445969be289710549590))
* **demo:** we should be able to move row(s) and keep selections ([d5669a1](https://github.com/ghiscoding/slickgrid-universal/commit/d5669a1d9c07680540d084dad6e1ef06faca0357))
* **editors:** longText Editor (textarea) was scrolling to page bottom ([a4e37a0](https://github.com/ghiscoding/slickgrid-universal/commit/a4e37a0baf329a100f72fe12c35af67fa072829a))
* **editors:** select dropdown value is undefined it shouldn't call save ([015294b](https://github.com/ghiscoding/slickgrid-universal/commit/015294b86e431e8109ce540dda7856b7e9e27575))
* **filters:** filtering with IN_CONTAINS should also work with spaces ([ab54724](https://github.com/ghiscoding/slickgrid-universal/commit/ab5472437b94fe81270f809ab6fd00f204c688b8))
* **frozen:** in some occasion column pinning changes column positions ([70cb74e](https://github.com/ghiscoding/slickgrid-universal/commit/70cb74ef1119a60b37d438130d4a463a87a8939a))
* **menu:** toggle filter bar could be out of sync w/horizontal scroll ([ab7f589](https://github.com/ghiscoding/slickgrid-universal/commit/ab7f58929b10d1b250765b707363aedd9f9d7866))
* **pagination:** should be able to toggle Pagination ([c0367c2](https://github.com/ghiscoding/slickgrid-universal/commit/c0367c24da2ccb3558e1b27f8e70a81d84201479))
* **plugin:** row move shouldn't go further when onBefore returns false ([e9bfb5c](https://github.com/ghiscoding/slickgrid-universal/commit/e9bfb5ceba6a18a020b8b34f72abba6e3d13d8b8))
* **resizer:** few fixes & adjustments after trying in SF ([32e80ec](https://github.com/ghiscoding/slickgrid-universal/commit/32e80ecdbc5072c1619593d101289a3c1ea92b3a))
* **services:** toggle pagination was not displaying all row selection ([e51ccb4](https://github.com/ghiscoding/slickgrid-universal/commit/e51ccb4352bf3a578159b8b63f0a6caf891c382a))
* **state:** changeColumnsArrangement should work w/columnIndexPosition ([7c1e9d3](https://github.com/ghiscoding/slickgrid-universal/commit/7c1e9d3d243988d6d99a9696b0afbe8f62ac45b4))
* **state:** Grid View/Columns dynamically should work w/row move ([a7cf1df](https://github.com/ghiscoding/slickgrid-universal/commit/a7cf1dfb73c770908aadf01fd67680c985449f9d))
* **state:** Grid View/Columns dynamically should work w/row selection ([865944f](https://github.com/ghiscoding/slickgrid-universal/commit/865944f5d6aadc0c05c7f83db7c11a569a33118f))
* **styling:** address latest dart-sass math division deprecation warning ([b7317d8](https://github.com/ghiscoding/slickgrid-universal/commit/b7317d8fa619b35fb65789e12b268d65ff65968c))
* **styling:** header title should show ellipsis if too long ([607e14d](https://github.com/ghiscoding/slickgrid-universal/commit/607e14d7fffa4f9854eff5103e1a1a0881664286))
* **tree:** using `initiallyCollapsed` change internal toggled state ([380f2f9](https://github.com/ghiscoding/slickgrid-universal/commit/380f2f903d9908e2bed5b3f44d04e28e5d5b9c63))
* initial grid state should also include toggled presets ([f1fe39f](https://github.com/ghiscoding/slickgrid-universal/commit/f1fe39f5d68487e815be7fd3d7ca5a6fd4cba7c6))
* **tree:** calling updateItems should not lose the Tree collapsing icon ([45b9622](https://github.com/ghiscoding/slickgrid-universal/commit/45b96225dd5a676b6a85bbb2c8146137eb95b33f))
* option labels weren't showing correctly after running Cypress tests ([10d4339](https://github.com/ghiscoding/slickgrid-universal/commit/10d4339da70cce4977707a6a19a79cceb4bf87df))
* provide input type directly in constructor before init() is called ([e89c3bd](https://github.com/ghiscoding/slickgrid-universal/commit/e89c3bd3da66e4b16342cefe1eedd5df96363e45))

### Features

* **components:** extract Custom Footer to be an external component ([1794c27](https://github.com/ghiscoding/slickgrid-universal/commit/1794c27d7669c172f606d709d3360bc5d2f77798))
* **editors:** convert jQuery to native element on
slider editor ([3181cf0](https://github.com/ghiscoding/slickgrid-universal/commit/3181cf069d9f3bc85dc0d13ceeb9623d21ae8eff)) * **editors:** replace jQuery with native element on date editor ([062f1f9](https://github.com/ghiscoding/slickgrid-universal/commit/062f1f9713c8f236c30b4d631b601b24b56a530d)) * **editors:** use class inheritance to extend main input editor ([ad3e696](https://github.com/ghiscoding/slickgrid-universal/commit/ad3e6965d4cd4295086401de26b5d3aad13a7650)) * **filters:** build multiple-select options from native dom elements ([aa548a9](https://github.com/ghiscoding/slickgrid-universal/commit/aa548a9bc05da0d4d5233a2633ae3055fd9f7178)) * **filters:** convert jQuery to native element on more filters ([b46eb5e](https://github.com/ghiscoding/slickgrid-universal/commit/b46eb5ebdb177e7d0d6c93cb6df541cedc7eb5d1)) * **filters:** convert jQuery to native elements on multiple filters ([3a80996](https://github.com/ghiscoding/slickgrid-universal/commit/3a80996bec96e465d23a30387af707289f4089e3)) * **footer:** add option to customize right footer text ([2ea41cc](https://github.com/ghiscoding/slickgrid-universal/commit/2ea41cc8ab38ebc5d5276c90de33b57247c4476f)) * **formatters:** add Bootstrap Dropdown Formatter ([5ba9423](https://github.com/ghiscoding/slickgrid-universal/commit/5ba9423200e60460c22f05253901707ef7055782)) * **services:** convert jQuery to native elements ([4da0a20](https://github.com/ghiscoding/slickgrid-universal/commit/4da0a201aaa866447a0c76e3b9c16503e2ed6af9)) * **services:** decouple the EventPubSubService to separate package ([9f51665](https://github.com/ghiscoding/slickgrid-universal/commit/9f516655e9ce5f06e0cfeabc43536834dc38c70b)) * **services:** move Resizer Service w/common services folder for reuse ([d127ac7](https://github.com/ghiscoding/slickgrid-universal/commit/d127ac797ee787ea7785e8ae9f4c0bcaed786afd)) * **styling:** add a new `color-disabled-dark` 
([55c3062](https://github.com/ghiscoding/slickgrid-universal/commit/55c30621241ec5da7a2e19879265c4e15a6ad907)) * **styling:** add a new `color-disabled` ([7151198](https://github.com/ghiscoding/slickgrid-universal/commit/7151198dd393c0bc93151cc4dc9c3295917b6b3e)) * **styling:** add extra material icons & new color ([4205b66](https://github.com/ghiscoding/slickgrid-universal/commit/4205b664e80af691c72d5520e4778ad4cd7d94b3)) * **tree:** add `getItemCount` method with optional tree level ([b3f8f94](https://github.com/ghiscoding/slickgrid-universal/commit/b3f8f9484e7ea352b2ed264c6a27e1e091eaf918)) * **tree:** add Tree Collapse Grid State/Preset ([998b01a](https://github.com/ghiscoding/slickgrid-universal/commit/998b01a2f10ccee5636f616921dd86b35a4feaec)) * **tree:** add ways to reapply Tree Collapse previous state ([3702ed3](https://github.com/ghiscoding/slickgrid-universal/commit/3702ed32629f84397349147c978ca650043c45eb)) * add new Input Password Editor which uses common inputEditor ([87e547c](https://github.com/ghiscoding/slickgrid-universal/commit/87e547c0dbccc106a1109c3902ac2027fbd52138)) * convert jQuery to native element on few more filters ([7d5e1e8](https://github.com/ghiscoding/slickgrid-universal/commit/7d5e1e859a0331699d6fb07d2d35797d7274d1df)) ## [0.14.1](https://github.com/ghiscoding/slickgrid-universal/compare/v0.14.0...v0.14.1) (2021-05-22) ### Bug Fixes * **editors:** revert to jquery element for aurelia-slickgrid to work ([4d6c358](https://github.com/ghiscoding/slickgrid-universal/commit/4d6c3580ee56df7ec8993176322aede6895f1745)) # [0.14.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.13.0...v0.14.0) (2021-05-22) ### Bug Fixes * **backend:** able to preset filters on hidden columns & all queried ([c610979](https://github.com/ghiscoding/slickgrid-universal/commit/c610979c54170c069b97a71864d95d0363d75e80)) * **editors:** select editor inline blur save before destroy 
([0e591b1](https://github.com/ghiscoding/slickgrid-universal/commit/0e591b1812fc1c733c03f7afcf81dee7a3e4b107)) * **frozen:** rollback previous commit since the issue was found in SlickGrid (core) ([780bcd7](https://github.com/ghiscoding/slickgrid-universal/commit/780bcd7bfae35e26cd84c9a6d220e2dab9eca3b4)) * **resizer:** remove delay to call resize by content to avoid flickering ([961efe6](https://github.com/ghiscoding/slickgrid-universal/commit/961efe6fe7ad721e8196c76ed4c35205830b6b83)) * **services:** fix couple of issues found with custom grid views ([db06736](https://github.com/ghiscoding/slickgrid-universal/commit/db0673688b2b6e6dde8f25af9551bf6c27174a44)) * **sorting:** multi-column sort shouldn't work when option is disabled ([bfc8651](https://github.com/ghiscoding/slickgrid-universal/commit/bfc865128de0a9e4c21ff0dc8b564c15c88dea93)) * **styling:** center horizontally checkbox selector in column header ([bb5aebc](https://github.com/ghiscoding/slickgrid-universal/commit/bb5aebc355a22e19b0071bfe993bbeb0e1090265)) * **tree:** Tree Data export should also include correct indentation ([f1e06c1](https://github.com/ghiscoding/slickgrid-universal/commit/f1e06c11f9eaa9ee778d319bfbaba20bb9abfcc9)) * add item should work in the demo even with filter preset ([d9c97eb](https://github.com/ghiscoding/slickgrid-universal/commit/d9c97ebb587184e94439f6fde1ec8c8a739e7bfa)) * add item to flat and/or tree should both work ([1b19028](https://github.com/ghiscoding/slickgrid-universal/commit/1b19028c9d58a31597906e371f439b094bca7ff0)) * adding optional tree level property should be used when sorting ([a3598c5](https://github.com/ghiscoding/slickgrid-universal/commit/a3598c519a875585498cc828b5a0e76e95890795)) * addItem from grid service should work with tree data ([8b468f0](https://github.com/ghiscoding/slickgrid-universal/commit/8b468f055144b001378395546519d1801e046a0a)) * export to file/excel should also have tree indentation 
([8c4c2b8](https://github.com/ghiscoding/slickgrid-universal/commit/8c4c2b8d30bb78e927f0a28bb0f7bef81e95d789)) * Grid Service addItem should invalidate hierarchical dataset itself ([066e894](https://github.com/ghiscoding/slickgrid-universal/commit/066e894271603562b10e014c4febfb18626e54f0)) * previous commit caused issue with composite editor ([13c2a49](https://github.com/ghiscoding/slickgrid-universal/commit/13c2a49916282c1888ae23c1720a617755341e0f)) * return all onBeforeX events in delayed promise to fix spinner ([bb36d1a](https://github.com/ghiscoding/slickgrid-universal/commit/bb36d1af114031eb973cf9993bdb9be1dd050de3)) * **formatters:** Tree Data use nullish coallescing w/optional chaining ([f6cf14c](https://github.com/ghiscoding/slickgrid-universal/commit/f6cf14c06518d47742ee17d82a22a39af490c9e7)) * **styling:** add a better search filter magnify glass icon as placeholder ([5464824](https://github.com/ghiscoding/slickgrid-universal/commit/5464824f3719ebddb303ee1b82161638d870a288)) * **tree:** couple of issues found in Tree Data, fixes [#307](https://github.com/ghiscoding/slickgrid-universal/issues/307) ([e684d1a](https://github.com/ghiscoding/slickgrid-universal/commit/e684d1af1c078a8861c3c94fe5486cbe68d57b85)) ### Features * **addon:** provide grid menu labels for all built-in commands ([44c72d3](https://github.com/ghiscoding/slickgrid-universal/commit/44c72d3ca0b8a88e6ae5022a25b11c4d41fd2897)) * **editors:** add `compositeEditorFormOrder` option ([03f2d66](https://github.com/ghiscoding/slickgrid-universal/commit/03f2d662a69d71edf4b61cdda862fb4eef0f9b47)) * **editors:** add ways to preload date without closing date picker ([3088038](https://github.com/ghiscoding/slickgrid-universal/commit/30880380584b281c756e0ad437031631e6f607e0)) * **resizer:** add `resizeByContentOnlyOnFirstLoad` grid option ([ffe7dc4](https://github.com/ghiscoding/slickgrid-universal/commit/ffe7dc4c2a7ae778c8e731fd7637b154c10035f0)) * **resizer:** add single Column Resize by Content 
dblClick & headerMenu ([683389f](https://github.com/ghiscoding/slickgrid-universal/commit/683389fcc343ac5c0378a9e34b7f11dda97fc719)) * **styling:** add new marker material icons for project ([9b386fa](https://github.com/ghiscoding/slickgrid-universal/commit/9b386fa3e6af8e76cf4beb5aa0b5322db2f270af)) * add `titleFormatter` to Tree Data ([8bf32ca](https://github.com/ghiscoding/slickgrid-universal/commit/8bf32caa08a6c5a28c7114cb8abe33a5ed9bc4cb)) * add few pubsub events to help with big dataset ([360c62c](https://github.com/ghiscoding/slickgrid-universal/commit/360c62cb0979792dddef8fab39383266c0d855e3)) * add optional child value prefix to Tree Formatter ([9da9662](https://github.com/ghiscoding/slickgrid-universal/commit/9da966298120686929ab3dd2f276574d7f6c8c7e)) * **tree:** improve Tree Data speed considerably ([5487798](https://github.com/ghiscoding/slickgrid-universal/commit/548779801d06cc9ae7e319e72d351c8a868ed79f)) * **editors:** replace jQuery with native elements ([d6e8f4e](https://github.com/ghiscoding/slickgrid-universal/commit/d6e8f4e59823673df290b179d7ee277e3d7bb1af)) # [0.13.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.12.0...v0.13.0) (2021-04-27) ### Bug Fixes * **editors:** Composite Editor modal compo should work w/complex objects ([#298](https://github.com/ghiscoding/slickgrid-universal/issues/298)) ([721a6c5](https://github.com/ghiscoding/slickgrid-universal/commit/721a6c5627369cfc89710705384995f8aba3a178)) * **exports:** grid with colspan should be export accordingly ([#311](https://github.com/ghiscoding/slickgrid-universal/issues/311)) ([e899fbb](https://github.com/ghiscoding/slickgrid-universal/commit/e899fbba3daa41261dcaa57b0555e37e9bdfafb4)) * **observables:** http cancellable Subject should be unsubscribed ([cbc951b](https://github.com/ghiscoding/slickgrid-universal/commit/cbc951bcf5891658f55981e88887f41b4fb5d5c4)) * **selection:** full row selection should be selected w/show hidden row 
([f76e30c](https://github.com/ghiscoding/slickgrid-universal/commit/f76e30cdca476c947089d88069bd21e42639ba7e)) ### Features * **editors:** add `onBeforeOpen` optional callback to Composite Editor ([#306](https://github.com/ghiscoding/slickgrid-universal/issues/306)) ([a642482](https://github.com/ghiscoding/slickgrid-universal/commit/a642482254009115366ca4992e2e60647f8ae9b0)) * **editors:** add `target` to `onBeforeEditCell` w/called by composite ([#301](https://github.com/ghiscoding/slickgrid-universal/issues/301)) ([7440ff5](https://github.com/ghiscoding/slickgrid-universal/commit/7440ff58988acd7abd1ce249b1ceb72556cceb1d)) * **filters:** add option to filter empty values for select filter ([#310](https://github.com/ghiscoding/slickgrid-universal/issues/310)) ([c58a92a](https://github.com/ghiscoding/slickgrid-universal/commit/c58a92a8e2b29ea216211e3561d5567c43f0376a)) * **filters:** option to add custom compound operator list ([3e8d2cb](https://github.com/ghiscoding/slickgrid-universal/commit/3e8d2cbcea6181e3ce3157798f003a8479d11011)) * **footer:** add row selection count to the footer component ([8ba146c](https://github.com/ghiscoding/slickgrid-universal/commit/8ba146cd4cbdccdb61f3441918065fad4561ff84)) * **resize:** add column resize by cell content ([#309](https://github.com/ghiscoding/slickgrid-universal/issues/309)) ([515a072](https://github.com/ghiscoding/slickgrid-universal/commit/515a072b3a16d3aca0f48e62c968ae89a1510669)) * **services:** remove deprecated hideColumnByIndex form Grid Service ([#312](https://github.com/ghiscoding/slickgrid-universal/issues/312)) ([b00c64d](https://github.com/ghiscoding/slickgrid-universal/commit/b00c64d8f88d4560c677f667a84d95ba30e96399)) * **styling:** switch from node-sass to dart-sass (sass) ([81f8d9f](https://github.com/ghiscoding/slickgrid-universal/commit/81f8d9fbd1381b4c877eeeb4992bdcc90c1cd677)) * **typing:** add missing item metadata interface ([#299](https://github.com/ghiscoding/slickgrid-universal/issues/299)) 
([7cf0a21](https://github.com/ghiscoding/slickgrid-universal/commit/7cf0a2185c73dcb7748a193ba2272bb7af699266)) # [0.12.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.11.2...v0.12.0) (2021-03-24) ### Bug Fixes * **editors:** show all editors as 100% height in their cell container ([#277](https://github.com/ghiscoding/slickgrid-universal/issues/277)) ([3f49aea](https://github.com/ghiscoding/slickgrid-universal/commit/3f49aeabd6016c705d4d6b809345fe1ac948cfc5)) * **filters:** rollback a change made in PR [#288](https://github.com/ghiscoding/slickgrid-universal/issues/288) causing preset issues ([18ffc0c](https://github.com/ghiscoding/slickgrid-universal/commit/18ffc0c8285e4e2306bc60817fba357734a65b61)) * **filters:** SearchTerms shouldn't come back after calling clearFilters ([04f3d12](https://github.com/ghiscoding/slickgrid-universal/commit/04f3d1267de493b9dc1e922dca3b433b9cb34fde)) * **filters:** string <> should be Not Contains instead of Not Equal ([#276](https://github.com/ghiscoding/slickgrid-universal/issues/276)) ([960884d](https://github.com/ghiscoding/slickgrid-universal/commit/960884ddf58b1e87ad5ef71e3713f8836e6190c0)) * **firefox:** add all missing SVG color filter classes for Firefox/SF ([#296](https://github.com/ghiscoding/slickgrid-universal/issues/296)) ([a07ebdf](https://github.com/ghiscoding/slickgrid-universal/commit/a07ebdfbd2c2197c28102efe1f4a685ea61185e1)) * **pinning:** reordering cols position freezing cols shouldn't affect ([#275](https://github.com/ghiscoding/slickgrid-universal/issues/275)) ([a30665d](https://github.com/ghiscoding/slickgrid-universal/commit/a30665d54da583c47b1f533002173af99e9ab20d)) * **plugin:** Grid Menu Clear Frozen Cols shouldn't change cols positions ([#291](https://github.com/ghiscoding/slickgrid-universal/issues/291)) ([4fdab08](https://github.com/ghiscoding/slickgrid-universal/commit/4fdab08357d12349b6402e3007f4ab399d9a2140)) * **presets:** Filter & Sorting presets & Footer metrics issues 
([#285](https://github.com/ghiscoding/slickgrid-universal/issues/285)) ([3174c86](https://github.com/ghiscoding/slickgrid-universal/commit/3174c86e011b4927510b99a348e8019adb4baa00)) * **presets:** Multiple Select Filter Grid Presets values should be shown ([dd1f231](https://github.com/ghiscoding/slickgrid-universal/commit/dd1f231850819bde455e24d743b9e1637767ecb3)) * **resizer:** allow gridHeight/gridWidth to be passed as string ([#284](https://github.com/ghiscoding/slickgrid-universal/issues/284)) ([20bda50](https://github.com/ghiscoding/slickgrid-universal/commit/20bda50bf3ab647ae4ee3d7ffe0c9c8b58e8f187)), closes [#534](https://github.com/ghiscoding/slickgrid-universal/issues/534) * **sorting:** add some unit tests that were previously commented out ([#290](https://github.com/ghiscoding/slickgrid-universal/issues/290)) ([2a91fa6](https://github.com/ghiscoding/slickgrid-universal/commit/2a91fa6f672650bb525a4ba1774d02c5ac435c5b)) ### Features * **editors:** add `onSelect` callback to Autocomplete Editor ([#286](https://github.com/ghiscoding/slickgrid-universal/issues/286)) ([2d106d4](https://github.com/ghiscoding/slickgrid-universal/commit/2d106d4df0a259d36bee3d910320706ddb7e8580)) * **filters:** add new IN_COLLECTION operator to allow searching cell value as Array ([#282](https://github.com/ghiscoding/slickgrid-universal/issues/282)) ([ecce93c](https://github.com/ghiscoding/slickgrid-universal/commit/ecce93c92b7424522ad2af0d7d82963a3a56ca97)) * **filters:** add optional `filterTypingDebounce` for filters w/keyup ([#289](https://github.com/ghiscoding/slickgrid-universal/issues/289)) ([3aecc89](https://github.com/ghiscoding/slickgrid-universal/commit/3aecc899ebd78d9597cc4ed4919c0a8dd26673a8)) * **filters:** add optional `filterTypingDebounce` for keyboard filters ([#283](https://github.com/ghiscoding/slickgrid-universal/issues/283)) ([bb7dcd3](https://github.com/ghiscoding/slickgrid-universal/commit/bb7dcd3a9e28f45c7339e2f30685220b7a152507)) * **filters:** add 
possibility to filter by text range like "a..e" ([#279](https://github.com/ghiscoding/slickgrid-universal/issues/279)) ([e44145d](https://github.com/ghiscoding/slickgrid-universal/commit/e44145d897da570bf6ea15b156c7961ce96ce6f0)) * **filters:** display operator into input text filter from Grid Presets ([#288](https://github.com/ghiscoding/slickgrid-universal/issues/288)) ([3fad4fe](https://github.com/ghiscoding/slickgrid-universal/commit/3fad4fe9ef3bec290dabb860d7ea4baf8f182a4a)) * **resources:** add RxJS support into Slickgrid-Universal via external package ([#280](https://github.com/ghiscoding/slickgrid-universal/issues/280)) ([c10fc33](https://github.com/ghiscoding/slickgrid-universal/commit/c10fc339019c04ec0f7c4357ccdb3949a2358460)) * **state:** add Pinning (frozen) to Grid State & Presets ([#292](https://github.com/ghiscoding/slickgrid-universal/issues/292)) ([ba703d8](https://github.com/ghiscoding/slickgrid-universal/commit/ba703d8353a243ffed4d40804c0f977119424f6c)) ## [0.11.2](https://github.com/ghiscoding/slickgrid-universal/compare/v0.11.1...v0.11.2) (2021-02-27) ### Bug Fixes * **editors:** styling issue found with input group and Bootstrap ([18a9d02](https://github.com/ghiscoding/slickgrid-universal/commit/18a9d020a5d0016643e6a2ab8dbd93f896dcbc8b)) ## [0.11.1](https://github.com/ghiscoding/slickgrid-universal/compare/v0.11.0...v0.11.1) (2021-02-27) ### Bug Fixes * **plugins:** do not recreate header button plugin after re-render ([09d44ec](https://github.com/ghiscoding/slickgrid-universal/commit/09d44ecf29a4465bf8a13db818329e5c93cc47f1)) # [0.11.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.10.2...v0.11.0) (2021-02-27) ### Bug Fixes * **build:** enable tsconfig strict mode tsconfig ([#269](https://github.com/ghiscoding/slickgrid-universal/issues/269)) ([095fc71](https://github.com/ghiscoding/slickgrid-universal/commit/095fc71052c1f4e776544781da5fe762cfa16238)) * **filters:** don't use indexOf NOT_IN_CONTAINS 
([#262](https://github.com/ghiscoding/slickgrid-universal/issues/262)) ([310be30](https://github.com/ghiscoding/slickgrid-universal/commit/310be30efb653151a75dde0a14b1ed3f9946b333)) * **filters:** use defaultFilterOperator in range when none provided ([#271](https://github.com/ghiscoding/slickgrid-universal/issues/271)) ([993675f](https://github.com/ghiscoding/slickgrid-universal/commit/993675f6b0d76e76010d5cadc6696134a73dad66)) * **helpers:** should be able to highlight first row (0) ([#268](https://github.com/ghiscoding/slickgrid-universal/issues/268)) ([a58be17](https://github.com/ghiscoding/slickgrid-universal/commit/a58be17959e28ab9a1280c3d7d7c8df9db02587e)), closes [#527](https://github.com/ghiscoding/slickgrid-universal/issues/527) * **plugin:** recreate header menu when adding column dynamically ([#257](https://github.com/ghiscoding/slickgrid-universal/issues/257)) ([16c4984](https://github.com/ghiscoding/slickgrid-universal/commit/16c49845c5d3388502811c15f0a23daa1a01f850)) ### Features * **demo:** add Example 13 Header Button Plugin ([f345cd1](https://github.com/ghiscoding/slickgrid-universal/commit/f345cd18b89f849f3f873538c214d3ac24ff12f8)) * **editors:** add a Clear (X) button to the Autocomplete Editor ([#270](https://github.com/ghiscoding/slickgrid-universal/issues/270)) ([ffbd188](https://github.com/ghiscoding/slickgrid-universal/commit/ffbd188534992c31848691154517deb64694f3b2)) * **filters:** add updateSingleFilter for a single external filter ([#265](https://github.com/ghiscoding/slickgrid-universal/issues/265)) ([20564a3](https://github.com/ghiscoding/slickgrid-universal/commit/20564a3096948626beada698460b72374a18ca7c)) * **perf:** huge filtering speed improvements ([a101ed1](https://github.com/ghiscoding/slickgrid-universal/commit/a101ed1b62c2fbfec2712f64e08192a4852bce9d)) * **perf:** improve date sorting speed ([258da22](https://github.com/ghiscoding/slickgrid-universal/commit/258da2238bba3693eada058f9405012f68af150b)) * **perf:** improve date 
sorting speed ([#259](https://github.com/ghiscoding/slickgrid-universal/issues/259)) ([a52f4fc](https://github.com/ghiscoding/slickgrid-universal/commit/a52f4fcee1627ac5906388f8dcf4b7fe3f5c4aa7)) * **services:** add bulk transactions in Grid Service CRUD methods ([#256](https://github.com/ghiscoding/slickgrid-universal/issues/256)) ([03385d9](https://github.com/ghiscoding/slickgrid-universal/commit/03385d9ac58cb3ce7501a409394706c0cb4f4d29)) ## [0.10.2](https://github.com/ghiscoding/slickgrid-universal/compare/v0.10.1...v0.10.2) (2021-01-28) ### Bug Fixes * **filter:** filter service not returning correct operator ([bd30697](https://github.com/ghiscoding/slickgrid-universal/commit/bd30697e1f3b6bf0e0d8b18b1c2ff30416ed022d)) ## [0.10.1](https://github.com/ghiscoding/slickgrid-universal/compare/v0.10.0...v0.10.1) (2021-01-28) ### Bug Fixes * **build:** decrease tsc target to es2017 instead of es2020 ([2f2e5f4](https://github.com/ghiscoding/slickgrid-universal/commit/2f2e5f46a3b25897f1a4a59daa1346b5d577ddb8)) # [0.10.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.9.0...v0.10.0) (2021-01-28) ### Bug Fixes * **core:** fix types index.d.ts url ([a76b3a3](https://github.com/ghiscoding/slickgrid-universal/commit/a76b3a3d97a6d211ec2e7e8d9060fd8dd0719f58)) * **editors:** add blank disabled fields in Composite Editor form values ([#233](https://github.com/ghiscoding/slickgrid-universal/issues/233)) ([b634902](https://github.com/ghiscoding/slickgrid-universal/commit/b6349029b705991b7ac2d1df99f5b330fe69ef36)) * **editors:** fix clear date & blank disabled field w/Composite Editor ([#235](https://github.com/ghiscoding/slickgrid-universal/issues/235)) ([9aac97d](https://github.com/ghiscoding/slickgrid-universal/commit/9aac97d2d433c809facc8d7092467780d55ca01a)) * **filters:** Grid State filters should always include an operator ([#238](https://github.com/ghiscoding/slickgrid-universal/issues/238)) 
([f64ed37](https://github.com/ghiscoding/slickgrid-universal/commit/f64ed37f7ffe01346c8f68d4bd170ffdce54839d)) * **frozen:** hiding multiple columns when using pinning gets out of sync ([#243](https://github.com/ghiscoding/slickgrid-universal/issues/243)) ([b255220](https://github.com/ghiscoding/slickgrid-universal/commit/b255220ec37dbdc9df4f3ecccb4397656cf9f2a6)) * **lint:** add eslint as a pre task when bundling & fix linting errors ([#246](https://github.com/ghiscoding/slickgrid-universal/issues/246)) ([6f7ccd8](https://github.com/ghiscoding/slickgrid-universal/commit/6f7ccd8ee4cc5e005034965a2c2dcc0499f06a73)) * **pinning:** recalculate frozen idx properly when column shown changes ([#241](https://github.com/ghiscoding/slickgrid-universal/issues/241)) ([3b55972](https://github.com/ghiscoding/slickgrid-universal/commit/3b559726acdff96970c68c10c8d256d0403d6c4f)) * **plugins:** add missing Row Detail filtering code ([#239](https://github.com/ghiscoding/slickgrid-universal/issues/239)) ([d9cad63](https://github.com/ghiscoding/slickgrid-universal/commit/d9cad635840650d2b2dd91444ffa0121147f4140)) ### Features * **editors:** add Clone functionality to Composite Editor ([#236](https://github.com/ghiscoding/slickgrid-universal/issues/236)) ([df545e4](https://github.com/ghiscoding/slickgrid-universal/commit/df545e4ec64271307b1979feb5e786f449433639)) * **editors:** change all private keyword to protected for extensability ([#247](https://github.com/ghiscoding/slickgrid-universal/issues/247)) ([089b6cb](https://github.com/ghiscoding/slickgrid-universal/commit/089b6cbbdd6284d94f765fdad08642e0d0d81ff0)) * **filters:** change all private keyword to protected for extensability ([#245](https://github.com/ghiscoding/slickgrid-universal/issues/245)) ([52cc702](https://github.com/ghiscoding/slickgrid-universal/commit/52cc7022c4b847566d89e91a80c423373538a15a)) * **formatters:** add grid option to auto add custom editor formatter 
([#248](https://github.com/ghiscoding/slickgrid-universal/issues/248)) ([db77d46](https://github.com/ghiscoding/slickgrid-universal/commit/db77d464ee37eda573351e89d4c5acc9b5648649)) * add nameCompositeEditor override to be used by Composite Editor ([fcdb2e9](https://github.com/ghiscoding/slickgrid-universal/commit/fcdb2e92ed736b09e947cdbcf39ee157afc4acab)) # [0.9.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.8.0...v0.9.0) (2021-01-06) ### Bug Fixes * **build:** import Flatpickr Locale on demand via regular imports ([#227](https://github.com/ghiscoding/slickgrid-universal/issues/227)) ([6644822](https://github.com/ghiscoding/slickgrid-universal/commit/664482210557fc1a7a178856e2641f71b9580c44)) ### Features * **build:** upgrade to WebPack 5 ([#225](https://github.com/ghiscoding/slickgrid-universal/issues/225)) ([c6b3ad3](https://github.com/ghiscoding/slickgrid-universal/commit/c6b3ad3eb6fb64306bfd8bd300fcc1e86b27e5a6)) * **ci:** replace CircleCI with GitHub Actions ([#211](https://github.com/ghiscoding/slickgrid-universal/issues/211)) ([4f91140](https://github.com/ghiscoding/slickgrid-universal/commit/4f9114031ca6236ef45f04b67dcba1a9981035c4)) * **editors:** add Column Editor collectionOverride option ([#228](https://github.com/ghiscoding/slickgrid-universal/issues/228)) ([91421fc](https://github.com/ghiscoding/slickgrid-universal/commit/91421fc0154e432874fb2211e430a79032b996b8)) * **styling:** add support for Bootstrap 5 ([#226](https://github.com/ghiscoding/slickgrid-universal/issues/226)) ([e35f116](https://github.com/ghiscoding/slickgrid-universal/commit/e35f116efc1989f675ef6e030d80a8a31a444373)) # [0.8.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.7...v0.8.0) (2020-12-22) ### Bug Fixes * **core:** change moment/lodash imports so it works with ES6 module ([#210](https://github.com/ghiscoding/slickgrid-universal/issues/210)) 
([2d25d3b](https://github.com/ghiscoding/slickgrid-universal/commit/2d25d3b99f7be93f2bc69f006fb67a39cf39ce7c)) * **core:** use regular imports instead of require to load plugins ([#209](https://github.com/ghiscoding/slickgrid-universal/issues/209)) ([6816696](https://github.com/ghiscoding/slickgrid-universal/commit/6816696c98be0d2dd80c1ff49358bd49ee7caacb)) ### Features * **filters:** add Autocomplete/Select Filters collection observers ([#208](https://github.com/ghiscoding/slickgrid-universal/issues/208)) ([3b3b463](https://github.com/ghiscoding/slickgrid-universal/commit/3b3b4631e5d878ba72d5f2579c5a6b05cc1a7028)) ## [0.7.7](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.6...v0.7.7) (2020-12-20) **Note:** Version bump only for package @slickgrid-universal/common ## [0.7.6](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.5...v0.7.6) (2020-12-20) **Note:** Version bump only for package @slickgrid-universal/common ## [0.7.5](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.4...v0.7.5) (2020-12-20) **Note:** Version bump only for package @slickgrid-universal/common ## [0.7.4](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.3...v0.7.4) (2020-12-20) **Note:** Version bump only for package @slickgrid-universal/common ## [0.7.3](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.2...v0.7.3) (2020-12-20) ### Bug Fixes * **editors:** fix BS3,BS4 styles & slider value not shown with undefined ([#204](https://github.com/ghiscoding/slickgrid-universal/issues/204)) ([3aca8f9](https://github.com/ghiscoding/slickgrid-universal/commit/3aca8f9053365c1987f6c5abc43f8ce5eca015fb)) * **exports:** should be able to change export file name ([#205](https://github.com/ghiscoding/slickgrid-universal/issues/205)) ([9d26213](https://github.com/ghiscoding/slickgrid-universal/commit/9d262134b12da46ef1fea970f092d96ce875ed78)) ## [0.7.2](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.1...v0.7.2) (2020-12-17) 
### Bug Fixes

* **core:** range default should be inclusive instead of exclusive ([#203](https://github.com/ghiscoding/slickgrid-universal/issues/203)) ([b7f74ad](https://github.com/ghiscoding/slickgrid-universal/commit/b7f74ad8a1539aed32ac643b4fe395fbdecf4459))
* **sorting:** add cellValueCouldBeUndefined in grid option for sorting ([#202](https://github.com/ghiscoding/slickgrid-universal/issues/202)) ([865256e](https://github.com/ghiscoding/slickgrid-universal/commit/865256efe927a5715840963cb2945f16a402789b))
* **stylings:** small alignment issue with the slider value elm height ([5a453b8](https://github.com/ghiscoding/slickgrid-universal/commit/5a453b8739c07e07f835e111d7d3ca5d627a0c2f))

## [0.7.1](https://github.com/ghiscoding/slickgrid-universal/compare/v0.7.0...v0.7.1) (2020-12-17)

**Note:** Version bump only for package @slickgrid-universal/common

# [0.7.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.6.0...v0.7.0) (2020-12-16)

### Bug Fixes

* **components:** refactor to use registerExternalResources grid option ([#199](https://github.com/ghiscoding/slickgrid-universal/issues/199)) ([7ca42f4](https://github.com/ghiscoding/slickgrid-universal/commit/7ca42f4242bfddd4dd746d7f3f37dbe1e3f7368b))

### Features

* **core:** methods to change column positions/visibilities dynamically ([#200](https://github.com/ghiscoding/slickgrid-universal/issues/200)) ([5048a4b](https://github.com/ghiscoding/slickgrid-universal/commit/5048a4b969f337f002dad552197d02f970590c73))

# [0.6.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.5.1...v0.6.0) (2020-12-14)

### Bug Fixes

* **core:** add console error if any of column def id includes dot ([#198](https://github.com/ghiscoding/slickgrid-universal/issues/198)) ([6ee40af](https://github.com/ghiscoding/slickgrid-universal/commit/6ee40af507b066602c39e057349b5ead6e7952f3))
* **stylings:** composite editor styling fixes for BS4 ([#195](https://github.com/ghiscoding/slickgrid-universal/issues/195)) ([305eb90](https://github.com/ghiscoding/slickgrid-universal/commit/305eb90c75e6a4aa076c62b5364b904dc5c6518e))
* **stylings:** re-align the svg icons & single/multiple-select icon+text ([#194](https://github.com/ghiscoding/slickgrid-universal/issues/194)) ([b730be7](https://github.com/ghiscoding/slickgrid-universal/commit/b730be7a75b3035c01aa7ca8f48a88df447ad461))

### Features

* **core:** add registerExternalResources for Components/Services ([#196](https://github.com/ghiscoding/slickgrid-universal/issues/196)) ([ee02f1d](https://github.com/ghiscoding/slickgrid-universal/commit/ee02f1d62d1a0601421352e43d17bd8c89e4348c))
* **core:** refactor code using the container service everywhere ([#197](https://github.com/ghiscoding/slickgrid-universal/issues/197)) ([96ce9bd](https://github.com/ghiscoding/slickgrid-universal/commit/96ce9bdbf18330e522dad0cbb0eda09c41f6a3df))
* **formatters:** add numberPrefix & Suffix to Decimal Formatter ([#193](https://github.com/ghiscoding/slickgrid-universal/issues/193)) ([0e4d30c](https://github.com/ghiscoding/slickgrid-universal/commit/0e4d30c0ee23bc598206fbba4e5ed406e4aeecfe))

## [0.5.1](https://github.com/ghiscoding/slickgrid-universal/compare/v0.5.0...v0.5.1) (2020-12-10)

**Note:** Version bump only for package @slickgrid-universal/common

# [0.5.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.4.2...v0.5.0) (2020-12-10)

### Bug Fixes

* **editors:** make sure select editor is defined before reading a prop ([763f981](https://github.com/ghiscoding/slickgrid-universal/commit/763f98111d03652b0ad903ba487a3b8c83a5ef5d))
* **editors:** only translate button texts when enableTranslate is true ([b698c6b](https://github.com/ghiscoding/slickgrid-universal/commit/b698c6bd3f13af017c7f3c0113b8407269ba1e0d))
* **editors:** Select Editor option to return flat data w/complex object ([#189](https://github.com/ghiscoding/slickgrid-universal/issues/189)) ([4695cd3](https://github.com/ghiscoding/slickgrid-universal/commit/4695cd3b6871dc1ceca4036fd30935eca8011b7e))
* **exports:** when cell value is empty object return empty string ([#190](https://github.com/ghiscoding/slickgrid-universal/issues/190)) ([cd34901](https://github.com/ghiscoding/slickgrid-universal/commit/cd349012c82a8bdff113fb9f8ef23ea18c6e3035))

### Features

* **components:** extract CompositeEditor & EmptyWarning Components ([#191](https://github.com/ghiscoding/slickgrid-universal/issues/191)) ([00cf9a2](https://github.com/ghiscoding/slickgrid-universal/commit/00cf9a22e1924a46ed637d52bba8efc02ef7eea1))

# [0.4.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.3.0...v0.4.0) (2020-12-07)

### Bug Fixes

* **styling:** Compound Filter Operator dropdown too wide in BS4 ([9cb5750](https://github.com/ghiscoding/slickgrid-universal/commit/9cb575029e9b875af63cf131c1511e5e2c2036f2))

### Features

* **editors:** add few editor options to LongText (textarea) Editor ([a975882](https://github.com/ghiscoding/slickgrid-universal/commit/a975882ce0772728a7bcd2bc75131d650b093144))

# [0.3.0](https://github.com/ghiscoding/slickgrid-universal/compare/v0.2.15...v0.3.0) (2020-12-02)

### Bug Fixes

* **core:** properly export Enums, Interfaces, Services & Utilities ([#184](https://github.com/ghiscoding/slickgrid-universal/issues/184)) ([0c23398](https://github.com/ghiscoding/slickgrid-universal/commit/0c233984a6e9d718659c119b65a95d6c38d36b0c))
* **core:** showing/hiding column shouldn't affect its freezing position ([#185](https://github.com/ghiscoding/slickgrid-universal/issues/185)) ([2a812ed](https://github.com/ghiscoding/slickgrid-universal/commit/2a812edb82c8004ab43df224c67ede228ab72c00))

### Features

* **core:** add enableMouseWheelScrollHandler grid option ([#170](https://github.com/ghiscoding/slickgrid-universal/issues/170)) ([53598d9](https://github.com/ghiscoding/slickgrid-universal/commit/53598d9bf36d26c41e7587dd74678687ba47fb3d))

## [0.2.15](https://github.com/ghiscoding/slickgrid-universal/compare/v0.2.0...v0.2.15) (2020-11-30)

### Bug Fixes

* **core:** don't expose src folder on npm & update few npm package ([#168](https://github.com/ghiscoding/slickgrid-universal/issues/168)) ([3c05938](https://github.com/ghiscoding/slickgrid-universal/commit/3c059381b35bba88ea98d0206692c912c625f227))
* **core:** rename i18n to translater & fix few other issues ([#174](https://github.com/ghiscoding/slickgrid-universal/issues/174)) ([34c963a](https://github.com/ghiscoding/slickgrid-universal/commit/34c963a2bcef1b841d3c62ea405a4bc49be98a5c))
* **editors:** make sure editor element exist before focusing ([e57235b](https://github.com/ghiscoding/slickgrid-universal/commit/e57235b4339ffa1bee522c245665bb598d963fd1))
* **extensions:** draggable grouping style change to look better ([#171](https://github.com/ghiscoding/slickgrid-universal/issues/171)) ([d00be88](https://github.com/ghiscoding/slickgrid-universal/commit/d00be8868370f3679555b8f52ef4ad85916c93ac))
* **formatters:** date formatters should accept ISO input & output to US ([#172](https://github.com/ghiscoding/slickgrid-universal/issues/172)) ([85ce7cf](https://github.com/ghiscoding/slickgrid-universal/commit/85ce7cf3636d5bb43d3ef18ec6998bb0c423d218))

## [0.2.13](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-26)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.12](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-26)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.11](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.10](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.9](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.8](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.7](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.6](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.5](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.4](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.3](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.2](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

**Note:** Version bump only for package @slickgrid-universal/common

## [0.2.1](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-25)

### Bug Fixes

* **core:** don't expose src folder on npm & update few npm package ([#168](https://github.com/ghiscoding/slickgrid-universal/issues/168)) ([3c05938](https://github.com/ghiscoding/slickgrid-universal/commit/3c059381b35bba88ea98d0206692c912c625f227))
* **editors:** make sure editor element exist before focusing ([e57235b](https://github.com/ghiscoding/slickgrid-universal/commit/e57235b4339ffa1bee522c245665bb598d963fd1))

# [0.2.0](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-11-20)

### Bug Fixes

* **core:** clear dataset when disposing and fix few unsubscribed events to avoid leak ([#156](https://github.com/ghiscoding/slickgrid-universal/issues/156)) ([78c80b4](https://github.com/ghiscoding/slickgrid-universal/commit/78c80b43ca04fd4fff68791556f9d4ab37f06caa))
* **core:** empty warning message should work with multiple grids ([#158](https://github.com/ghiscoding/slickgrid-universal/issues/158)) ([9e7c023](https://github.com/ghiscoding/slickgrid-universal/commit/9e7c023f7d33313400f4e55ddffd838d290b83dd))
* **core:** fix some problems found with AutoComplete ([#74](https://github.com/ghiscoding/slickgrid-universal/issues/74)) ([00fb478](https://github.com/ghiscoding/slickgrid-universal/commit/00fb478263db832ec31d940ed19417d9fcbae04a))
* **core:** Flatpickr is not destroyed properly & leaks detached elements ([#154](https://github.com/ghiscoding/slickgrid-universal/issues/154)) ([9633d4a](https://github.com/ghiscoding/slickgrid-universal/commit/9633d4a090c23ff4792cb614360afc58e76d74c3))
* **core:** header columns grouping misbehave after hiding column ([#164](https://github.com/ghiscoding/slickgrid-universal/issues/164)) ([6b8232b](https://github.com/ghiscoding/slickgrid-universal/commit/6b8232b3b98d1b75412bebd6b4528ee5dea71d7a))
* **core:** mem leaks w/orphan DOM elements when disposing ([#153](https://github.com/ghiscoding/slickgrid-universal/issues/153)) ([faba5a6](https://github.com/ghiscoding/slickgrid-universal/commit/faba5a6652fa2cf5e78f64b6b2e27bf9b85936ba))
* **core:** properly remove event listeners when disposing ([#163](https://github.com/ghiscoding/slickgrid-universal/issues/163)) ([ecfb9a7](https://github.com/ghiscoding/slickgrid-universal/commit/ecfb9a7c623010504a7a2d312ffef185f16cec9e))
* **editor:** SingleSelect Editor should show pick false value ([#75](https://github.com/ghiscoding/slickgrid-universal/issues/75)) ([fdb2c84](https://github.com/ghiscoding/slickgrid-universal/commit/fdb2c8433d443dd8f4fdd86f714354424cfb9ea3))
* **editors:** autocomplete editor spinner aligned right in mass update ([#162](https://github.com/ghiscoding/slickgrid-universal/issues/162)) ([6ae5189](https://github.com/ghiscoding/slickgrid-universal/commit/6ae51897979d80f5639fb095406e83e182649252))
* **filters:** disregard time when filtering date only format ([#134](https://github.com/ghiscoding/slickgrid-universal/issues/134)) ([7bd2d19](https://github.com/ghiscoding/slickgrid-universal/commit/7bd2d1964de2e809d8b08c737231eec31d146fae))
* **pinning:** put back vertical scroll on grid after removing freezing ([75a47a6](https://github.com/ghiscoding/slickgrid-universal/commit/75a47a607d463854c1b51fe5a330d629c79ac2e2))
* **select:** make a collection array copy to avoid change by ref ([#135](https://github.com/ghiscoding/slickgrid-universal/issues/135)) ([3237133](https://github.com/ghiscoding/slickgrid-universal/commit/323713382f1565ff8617ede08fdc8ed31ac3a594))
* **styling:** support other unit of measure in SASS ([5b9adec](https://github.com/ghiscoding/slickgrid-universal/commit/5b9adec6d11230a870337f1adaac1b0f9e157438))
* **styling:** SVG icon colors aren't showing up in SF with Firefox ([#131](https://github.com/ghiscoding/slickgrid-universal/issues/131)) ([2ed3cf5](https://github.com/ghiscoding/slickgrid-universal/commit/2ed3cf50358139374d4deeaedb5a8fdb7db27b98))
* **translations:** HeaderMenu & Date Filters not translating ([#58](https://github.com/ghiscoding/slickgrid-universal/issues/58)) ([9416c4d](https://github.com/ghiscoding/slickgrid-universal/commit/9416c4d2642894c5660473419623cee9bebcac4b))

### Features

* **autocomplete:** add much more functionalities to the AutoComplete ([#69](https://github.com/ghiscoding/slickgrid-universal/issues/69)) ([93c3d0a](https://github.com/ghiscoding/slickgrid-universal/commit/93c3d0a9b8d5a30c7a933f95a4333937c95305a3))
* **core:** add "Empty Data" warning message when grid is empty ([#155](https://github.com/ghiscoding/slickgrid-universal/issues/155)) ([13875b4](https://github.com/ghiscoding/slickgrid-universal/commit/13875b455d60f44918d8524aa803374773276e90))
* **core:** add custom entry to Select Editor/Filter collections ([#133](https://github.com/ghiscoding/slickgrid-universal/issues/133)) ([66effcf](https://github.com/ghiscoding/slickgrid-universal/commit/66effcfddd8b5a9d78a1d1ab679ca2721067e4be))
* **core:** add ESLint npm script and add to prebuild script ([#151](https://github.com/ghiscoding/slickgrid-universal/issues/151)) ([4064876](https://github.com/ghiscoding/slickgrid-universal/commit/40648760a33628f0ba85653f5fc99d8250b9a7a2))
* **core:** add loading spinner to AutoComplete Editor/Filter ([#65](https://github.com/ghiscoding/slickgrid-universal/issues/65)) ([4ecd2bd](https://github.com/ghiscoding/slickgrid-universal/commit/4ecd2bd305f2fd2b509e48cf1c7166b666228be3))
* **core:** rewrite "Empty Data" warning component to be in the canvas ([#157](https://github.com/ghiscoding/slickgrid-universal/issues/157)) ([78e2132](https://github.com/ghiscoding/slickgrid-universal/commit/78e213222d6058e1d1d768094801be42dbf4fb05))
* **core:** update few npm packages ([#123](https://github.com/ghiscoding/slickgrid-universal/issues/123)) ([1c25b87](https://github.com/ghiscoding/slickgrid-universal/commit/1c25b87fdd738616879298baeb52074e30e9bf14))
* **core:** update lib to latest jQuery version 3.5.1 ([#56](https://github.com/ghiscoding/slickgrid-universal/issues/56)) ([1af66d5](https://github.com/ghiscoding/slickgrid-universal/commit/1af66d5142bb5bc17cc84c819f9f273874af285c)), closes [#42](https://github.com/ghiscoding/slickgrid-universal/issues/42)
* **core:** update to latest SlickGrid version and update npm packages ([#140](https://github.com/ghiscoding/slickgrid-universal/issues/140)) ([d73a44e](https://github.com/ghiscoding/slickgrid-universal/commit/d73a44e338025da45e990a8a522fb0b9aa1c5279))
* **core:** use barel export everywhere ([#57](https://github.com/ghiscoding/slickgrid-universal/issues/57)) ([d068fc5](https://github.com/ghiscoding/slickgrid-universal/commit/d068fc577566a44217f543f7486be0cc4edc5f69))
* **editor:** add Composite Editor modal dialog ([#76](https://github.com/ghiscoding/slickgrid-universal/issues/76)) ([bba0b80](https://github.com/ghiscoding/slickgrid-universal/commit/bba0b804301195a166f87be610ee85fe77d4a134))
* **editors:** add changeEditorOption to all Editors which supports it ([#142](https://github.com/ghiscoding/slickgrid-universal/issues/142)) ([97b1003](https://github.com/ghiscoding/slickgrid-universal/commit/97b1003f80a72859ae9fc4b4a0ade12e8ec373a5))
* **editors:** add way to change or disable Composite Editor form input ([#139](https://github.com/ghiscoding/slickgrid-universal/issues/139)) ([2a5280f](https://github.com/ghiscoding/slickgrid-universal/commit/2a5280f216b2929c018f4019169db039361f2985))
* **editors:** disable editor when collectionAsync, re-enable after ([#132](https://github.com/ghiscoding/slickgrid-universal/issues/132)) ([75b10de](https://github.com/ghiscoding/slickgrid-universal/commit/75b10de91adecfaab6627e677abe7f5ce91d8769))
* **examples:** add mass update feat to Example 11 ([#31](https://github.com/ghiscoding/slickgrid-universal/issues/31)) ([84e9817](https://github.com/ghiscoding/slickgrid-universal/commit/84e98175686160dfc243435496ac65a757ec30aa))
* **filters:** add Pre-Defined & Custom Filters saved in Local Storage ([#143](https://github.com/ghiscoding/slickgrid-universal/issues/143)) ([dea71ab](https://github.com/ghiscoding/slickgrid-universal/commit/dea71ababb4b06520b06f7e12f4acbd86051110a))
* **formatters:** add AlignRight Formatter & alias AlignCenter=>Center ([#161](https://github.com/ghiscoding/slickgrid-universal/issues/161)) ([831580d](https://github.com/ghiscoding/slickgrid-universal/commit/831580d5234114d9510a578a71f608cbb3eda3ec))
* **icons:** add more Material icons ([9f9377b](https://github.com/ghiscoding/slickgrid-universal/commit/9f9377b2768c0ad6c091731be36125ea73e2ad46))
* **icons:** add some more material icons ([#124](https://github.com/ghiscoding/slickgrid-universal/issues/124)) ([b90fe2d](https://github.com/ghiscoding/slickgrid-universal/commit/b90fe2d231c1005ad137a7f0fbae8f6fb928cb79))
* **plugins:** add "hidden" to all controls/plugins with menu items ([#128](https://github.com/ghiscoding/slickgrid-universal/issues/128)) ([99202de](https://github.com/ghiscoding/slickgrid-universal/commit/99202deb7b452b7ac8d67d4b98545901cf99005e))
* **services:** add 2x new methods hideColumnById or ..byIds ([#160](https://github.com/ghiscoding/slickgrid-universal/issues/160)) ([d396653](https://github.com/ghiscoding/slickgrid-universal/commit/d3966530fab48ee72fab138b8caf97c4eb73ec91))
* **services:** add Toggle Filtering/Sorting & Hide Column methods ([#126](https://github.com/ghiscoding/slickgrid-universal/issues/126)) ([08fe2e1](https://github.com/ghiscoding/slickgrid-universal/commit/08fe2e19c5778941050e42ca207d55dc27564ba8))
* **styling:** add frozen on all possible elements with SASS variables ([#138](https://github.com/ghiscoding/slickgrid-universal/issues/138)) ([c61da91](https://github.com/ghiscoding/slickgrid-universal/commit/c61da911c449949570f54343724bc80523f77bcb)), closes [#537](https://github.com/ghiscoding/slickgrid-universal/issues/537)
* **styling:** add Pagination button height sass variable ([#136](https://github.com/ghiscoding/slickgrid-universal/issues/136)) ([43deeee](https://github.com/ghiscoding/slickgrid-universal/commit/43deeee99aee1887a62ec4238f68dce9e37fca69))
* **styling:** find way to add colors to SVGs used by the lib ([#73](https://github.com/ghiscoding/slickgrid-universal/issues/73)) ([8a07c16](https://github.com/ghiscoding/slickgrid-universal/commit/8a07c16ec3238533ab16fb22f8b748168cd5f18c))
* **tests:** add more Cypress E2E tests for grouping ([#125](https://github.com/ghiscoding/slickgrid-universal/issues/125)) ([814dec0](https://github.com/ghiscoding/slickgrid-universal/commit/814dec0dbad7cf59e98654a732dbf6d46de37a1a))

# [0.1.0](https://github.com/ghiscoding/slickgrid-universal/compare/@slickgrid-universal/[email protected]...@slickgrid-universal/[email protected]) (2020-07-28)

### Bug Fixes

* **build:** vscode chrome debugger + webpack prod build should both work ([e148090](https://github.com/ghiscoding/slickgrid-universal/commit/e148090b967119c911c5da2fc7cb2cfdf4c3de39))
* **components:** add "cssText" option to both Footer/Pagination ([abd4fcd](https://github.com/ghiscoding/slickgrid-universal/commit/abd4fcd6ea6c990e1192afaca450dd6b7847e590))
* **components:** both Footer/Pagination should always be 100% width ([#27](https://github.com/ghiscoding/slickgrid-universal/issues/27)) ([e587ef5](https://github.com/ghiscoding/slickgrid-universal/commit/e587ef5084d469c6342c84c5c2f6a0dc65ae4493))
* **context:** change copy cell command to make it work in SF ([#8](https://github.com/ghiscoding/slickgrid-universal/issues/8)) ([c0b8ad9](https://github.com/ghiscoding/slickgrid-universal/commit/c0b8ad943dbd6baf08f41c36d6d266382b758206))
* **core:** add missing use of custom datasetIdPropertyName ([917f044](https://github.com/ghiscoding/slickgrid-universal/commit/917f044b1489b19917b15bd146a2d40f8924ea23))
* **debug:** chrome debugger with webpack & TS breakpoints ([6c3ab52](https://github.com/ghiscoding/slickgrid-universal/commit/6c3ab521be42265edd33d30002f342493f12c54b))
* **editor:** disregard Flatpickr error on Date Editor ([e7d7ba5](https://github.com/ghiscoding/slickgrid-universal/commit/e7d7ba57c6a68309aafb0c2082b4e642194067f3))
* **editor:** disregard Flatpickr error on Date Editor and fix output format ([140c48e](https://github.com/ghiscoding/slickgrid-universal/commit/140c48e7fe18eea76d59b44bb6625d3cb89aaf55))
* **editor:** float validator min/max values should be inclusive ([3e193aa](https://github.com/ghiscoding/slickgrid-universal/commit/3e193aabd8bdf515d53da938c19bc931b29c8438))
* **editor:** float validator should accept decimal even without 0 suffix ([87808ce](https://github.com/ghiscoding/slickgrid-universal/commit/87808ce1f0c10e4dd070518b78e35e986580de30))
* **editor:** number validators should be ok with null value on init ([1aadc86](https://github.com/ghiscoding/slickgrid-universal/commit/1aadc86787d88de8e18a193853e40ee88e795f93))
* **editor:** shouldn't call cell changed when cell value is undefined ([d5796a1](https://github.com/ghiscoding/slickgrid-universal/commit/d5796a1c3d45d5592c56dc9001231b2943f56cc0))
* **editors:** add saveOutputType to finally have proper save format ([#17](https://github.com/ghiscoding/slickgrid-universal/issues/17)) ([ebfd715](https://github.com/ghiscoding/slickgrid-universal/commit/ebfd71582642abe136317dbef8cedee68d472aa7))
* **editors:** Editors should work with undefined item properties ([#25](https://github.com/ghiscoding/slickgrid-universal/issues/25)) ([9bc6f5a](https://github.com/ghiscoding/slickgrid-universal/commit/9bc6f5ad617d7144d8787d4afcfe3b888966dcb7))
* **editors:** invalid date should trigger onvalidationerror ([#19](https://github.com/ghiscoding/slickgrid-universal/issues/19)) ([041087e](https://github.com/ghiscoding/slickgrid-universal/commit/041087ea928b9c53ef118a198b6837a028933b7a))
* **editors:** make sure appendChild exist before using it to add Editor ([90d4a67](https://github.com/ghiscoding/slickgrid-universal/commit/90d4a670824eb979fc2813d0d42a5803dacd3739))
* **filter:** recreate filter when toggling header row ([e839464](https://github.com/ghiscoding/slickgrid-universal/commit/e839464fa5dbb1db274ebda69daf3f71808f0c93))
* **filter:** string filter should also work when using Contains ([fc54f9a](https://github.com/ghiscoding/slickgrid-universal/commit/fc54f9a03b974e000cde4ea4a18ddb261572f003))
* **filter:** when entering filter operator it shouldn't do any filtering ([81c465b](https://github.com/ghiscoding/slickgrid-universal/commit/81c465b61ca4c0883c4c4308a5b154ef7410039e))
* **formatter:** add possibility to parse a date formatter as a UTC date ([e72bcad](https://github.com/ghiscoding/slickgrid-universal/commit/e72bcadae652bb00cb8b51f92ff2b2cf67de37a4))
* **formatters:** decimalSeparator & thousandSeparator work tgt ([62de7c2](https://github.com/ghiscoding/slickgrid-universal/commit/62de7c2713c140ef757d821d7538a965ea625b7e))
* **header:** re-create header grouping title after changing picker cols ([872c780](https://github.com/ghiscoding/slickgrid-universal/commit/872c7808d27cae30c414d1e3769728aa083910e7))
* **menu:** context menu to copy cell with queryFieldNameGetterFn ([#21](https://github.com/ghiscoding/slickgrid-universal/issues/21)) ([53c50f9](https://github.com/ghiscoding/slickgrid-universal/commit/53c50f9d716725330681d3617082b1fa33f90c12))
* **pagination:** get pagination working in SF as well ([#24](https://github.com/ghiscoding/slickgrid-universal/issues/24)) ([1132f2e](https://github.com/ghiscoding/slickgrid-universal/commit/1132f2edec251e2f65cce860ebfa57dbe35cf852))
* **picker:** add missing pre-header title grouping extractor ([fa3148b](https://github.com/ghiscoding/slickgrid-universal/commit/fa3148bd90487cad6bcd01b782ab27570336f741))
* **resize:** add a patch to fix autoresize on Chrome ([02faae4](https://github.com/ghiscoding/slickgrid-universal/commit/02faae44118dd5adbda57a5363567a84c84e7cb2))
* **sanitizer:** add optional grid option sanitizer anywhere possible ([#9](https://github.com/ghiscoding/slickgrid-universal/issues/9)) ([a6c7997](https://github.com/ghiscoding/slickgrid-universal/commit/a6c7997d75d27cc14892de4460dea28b529b392e))
* **select:** revert to jQuery 3.4.1 since latest version seems ([e839a5e](https://github.com/ghiscoding/slickgrid-universal/commit/e839a5e0f8ef8ab21a341ee2e2961c5a07736805))
* **sort:** header menu sorting should include columnId property ([2c5d2e0](https://github.com/ghiscoding/slickgrid-universal/commit/2c5d2e0547179f4cbe8f491a83af5202ba3410f9))
* **sort:** header menu sorting should include columnId property ([666a831](https://github.com/ghiscoding/slickgrid-universal/commit/666a83166ec21062bba9be287d65a242f7b52a1a))
* **styling:** cell menu is re-position incorrectly below the grid ([6fd3552](https://github.com/ghiscoding/slickgrid-universal/commit/6fd3552b568faef252e77b0446f2ab08d2a6ccde))
* **styling:** cell/context menus get re-position below the grid ([7db862a](https://github.com/ghiscoding/slickgrid-universal/commit/7db862ad6d7a939d1a285141068e2095c3295541))
* **styling:** sass variable should be interpolate before using calc ([42e7e3d](https://github.com/ghiscoding/slickgrid-universal/commit/42e7e3d51e6750f11a17f11d259fe97851505385))
* **tests:** fix failing unit test ([f19745d](https://github.com/ghiscoding/slickgrid-universal/commit/f19745d91d264d3da450a674b9ca9c78bf157294))
* **types:** fix TS type warnings ([d22ee64](https://github.com/ghiscoding/slickgrid-universal/commit/d22ee64dfaabae5b0e497ade62192b1c5595e0c3))

### Features

* **backend:** add OData & GraphQL packages ([#2](https://github.com/ghiscoding/slickgrid-universal/issues/2)) ([53cf08b](https://github.com/ghiscoding/slickgrid-universal/commit/53cf08bff2eea18e677770f70eedef1bda9aefcc))
* **browser:** add browserslist for packages who uses it ([fc69908](https://github.com/ghiscoding/slickgrid-universal/commit/fc69908a4eccfaedeb1835eb9d00719e7926065f))
* **build:** add correct TS types to all packages ([5ab0833](https://github.com/ghiscoding/slickgrid-universal/commit/5ab0833e07b89504ac603c3d356d2a6bdb0dfee2))
* **build:** tweak build to use tsc and test with sf lwc ([e4964b3](https://github.com/ghiscoding/slickgrid-universal/commit/e4964b34513e828d5cc9f2b278d794d892895277))
* **colspan:** add Header Grouping & Column Span example ([b9a155d](https://github.com/ghiscoding/slickgrid-universal/commit/b9a155dcf58c9a7c984ea1b6426883af0ae2f9ca))
* **core:** add `collectionAsync` option for both the Editors & Filters ([#16](https://github.com/ghiscoding/slickgrid-universal/issues/16)) ([f9488ab](https://github.com/ghiscoding/slickgrid-universal/commit/f9488ab350421be771f356b1775559a8e0d8e0c0))
* **core:** add Translation into demo with fetch locale from json file ([#23](https://github.com/ghiscoding/slickgrid-universal/issues/23)) ([b5608e9](https://github.com/ghiscoding/slickgrid-universal/commit/b5608e958f659b839a8460ffee4a555c66774893))
* **core:** dynamically add/remove columns ([#13](https://github.com/ghiscoding/slickgrid-universal/issues/13)) ([959097c](https://github.com/ghiscoding/slickgrid-universal/commit/959097cf8363330c7166d0844048cfde57a5cabc))
* **core:** expose all Extensions in new getter prop & fix draggable ([#29](https://github.com/ghiscoding/slickgrid-universal/issues/29)) ([07257b2](https://github.com/ghiscoding/slickgrid-universal/commit/07257b2564d86cbfad4f69bb4e910e04d7df5688))
* **core:** expose all services, slickgrid, dataview instances ([a33e387](https://github.com/ghiscoding/slickgrid-universal/commit/a33e3876b1134f6839aac10a67193448997ae7c5))
* **core:** use DataView transactions with multiple item changes ([#14](https://github.com/ghiscoding/slickgrid-universal/issues/14)) ([8cbd03a](https://github.com/ghiscoding/slickgrid-universal/commit/8cbd03a678bc6a2a89495685cc781b12946ec404))
* **demo:** add prod build for github page sample ([13eb721](https://github.com/ghiscoding/slickgrid-universal/commit/13eb721f88114461e1dda70eeba0461b69a89f46))
* **editor:** add more Editors ([f08864d](https://github.com/ghiscoding/slickgrid-universal/commit/f08864d0d583d01dece58570ea5bf8d1a195cdc9))
* **editor:** add operatorConditionalType (inclusive or exclusive) ([e300b31](https://github.com/ghiscoding/slickgrid-universal/commit/e300b313ae0d04ad2ec65f932e243d2b4150eca3))
* **editor:** add readonly option to DualInput Editor ([4217c41](https://github.com/ghiscoding/slickgrid-universal/commit/4217c411304d6056a6de6489351497418b72d9e6))
* **editor:** fully working dual input editor ([773fb49](https://github.com/ghiscoding/slickgrid-universal/commit/773fb49c1dbb6876bf8c2d2c53a1f823a84dd655))
* **editor:** start working on a Compound Editor ([49107c1](https://github.com/ghiscoding/slickgrid-universal/commit/49107c14ca841edf7c279e9a0ffe334f1d5dc71a))
* **editor:** tweak Dual Input Editor and add full unit tests ([c48e321](https://github.com/ghiscoding/slickgrid-universal/commit/c48e32189db48ced3c68e3427c64583db2d8d1d7))
* **editors:** add Autocomplete Editor ([011df55](https://github.com/ghiscoding/slickgrid-universal/commit/011df552c48defb32e81a1552e8b4e38f25be028))
* **editors:** add combo input editor poc code ([5918c73](https://github.com/ghiscoding/slickgrid-universal/commit/5918c73ea82e13183e8a6c14021f38ddf0f2b0fd))
* **editors:** add min/max length options to text editors ([#30](https://github.com/ghiscoding/slickgrid-universal/issues/30)) ([318c70c](https://github.com/ghiscoding/slickgrid-universal/commit/318c70ccbf0f071e328457d6290b6b1e078a1564))
* **editors:** add missing Date Editor ([c897c7c](https://github.com/ghiscoding/slickgrid-universal/commit/c897c7c426c179282766bba3345f4b44317aee44))
* **editors:** add more Editors and rewrite some in vanilla JS ([9308d4b](https://github.com/ghiscoding/slickgrid-universal/commit/9308d4b78a77a86a4b86fd10fb1de34746276a9e))
* **editors:** add more Editors and update all npm packages ([14b10a1](https://github.com/ghiscoding/slickgrid-universal/commit/14b10a17642b2c7f889f90b58dd3fef084e983b9))
* **editors:** extract most of the Editor Validators into separate files ([a9a45e6](https://github.com/ghiscoding/slickgrid-universal/commit/a9a45e6f2ce3536f9be846ef932337f174569897))
* **examples:** add more Tree View with checkbox selector code ([7d7c644](https://github.com/ghiscoding/slickgrid-universal/commit/7d7c644b0ecc8c3b61dd706d37d31edd0cf92fca))
* **examples:** add new sample to showcase queued editing ([#28](https://github.com/ghiscoding/slickgrid-universal/issues/28)) ([3b8fec6](https://github.com/ghiscoding/slickgrid-universal/commit/3b8fec6e890fc0b8dc9754495c1022d898740b3e))
* **extension:** add latest slickgrid with RowMove improvements ([c10fffd](https://github.com/ghiscoding/slickgrid-universal/commit/c10fffdb2bd8a8ce0221e570cf0bfb4cf03c7c29))
* **extensions:** add more Extensions and all their unit tests ([30af496](https://github.com/ghiscoding/slickgrid-universal/commit/30af496c48233ff84ce548648994398db068dbcb))
* **filter:** add Filter Service, Filter Conditions and few unit tests ([2baed7f](https://github.com/ghiscoding/slickgrid-universal/commit/2baed7fa0c31d73437b3d08d2d48c91b05602ff9))
* **filter:** refactor Filter Service by adding a debounce fn ([#7](https://github.com/ghiscoding/slickgrid-universal/issues/7)) ([3ba243c](https://github.com/ghiscoding/slickgrid-universal/commit/3ba243ce3b4ade48531ca323a12b465b5ad0b091))
* **filters:** add Autocomplete Filter ([82bda77](https://github.com/ghiscoding/slickgrid-universal/commit/82bda776c9cb72c9d44aca24ecf289c839e6e24f))
* **filters:** add few Filters and their unit tests ([c7e5897](https://github.com/ghiscoding/slickgrid-universal/commit/c7e5897d2e2af93339ea28a2fabc5263015d7d2c))
* **filters:** add few more Filters ([76b4177](https://github.com/ghiscoding/slickgrid-universal/commit/76b41771bd55e846ee67c9100b0de29ddb0a9276))
* **filters:** add missing Date Filters ([76c66a3](https://github.com/ghiscoding/slickgrid-universal/commit/76c66a3ec2da4b1ff1b296851f46bf58967adc18))
* **footer:** add Custom Footer ([0d3e1da](https://github.com/ghiscoding/slickgrid-universal/commit/0d3e1dabf29c4bc354df598a3b166030f61769fc))
* **footer:** add Custom Footer component ([#5](https://github.com/ghiscoding/slickgrid-universal/issues/5)) ([59d0ba8](https://github.com/ghiscoding/slickgrid-universal/commit/59d0ba8921c2e0886b0c34705ac5a74f35ab4e43))
* **grouping:** add missing Grouping interface properties ([7c83fd0](https://github.com/ghiscoding/slickgrid-universal/commit/7c83fd09acff960b86f62a0bd0c1f4b654b25f9c))
* **grouping:** add more Grouping & Aggregators code ([8c20808](https://github.com/ghiscoding/slickgrid-universal/commit/8c20808d9a8b0a6166f4fb8fe013d33ae57a223c))
* **package:** add new Excel Export package ([808785e](https://github.com/ghiscoding/slickgrid-universal/commit/808785e0ea9508f817453211d8ed808398aa9c01))
* **package:** add new Export (csv, txt) package ([d6adc5c](https://github.com/ghiscoding/slickgrid-universal/commit/d6adc5ce7aa466fde3c1e1377bd47c9a6cd8b53b))
* **pinning:** add "Freezen Columns" to header menu ([#4](https://github.com/ghiscoding/slickgrid-universal/issues/4)) ([1c7d49f](https://github.com/ghiscoding/slickgrid-universal/commit/1c7d49f838a8cadb093dfbdf81c215ed250fbe14))
* **presets:** add missing row selections preset option ([#11](https://github.com/ghiscoding/slickgrid-universal/issues/11)) ([e0a729c](https://github.com/ghiscoding/slickgrid-universal/commit/e0a729cfbbe7aa75a18301b4db994ac9d3330f10))
* **query:** add queryFieldNameGetterFn callback know which field to use ([6d8955c](https://github.com/ghiscoding/slickgrid-universal/commit/6d8955c1933a88683c2284d9162e43248bc578a2))
* **service:** add GridEvent Service to the lib ([4a4bf6f](https://github.com/ghiscoding/slickgrid-universal/commit/4a4bf6f86ebdb6cbf911d838714440cceee4e07f))
* **services:** add Pagination & Grid State Services ([c15e6e6](https://github.com/ghiscoding/slickgrid-universal/commit/c15e6e63edce6f07751f3380229e9e1777c43d84))
* **services:** add registerServices in Grid Options ([#1](https://github.com/ghiscoding/slickgrid-universal/issues/1)) ([e7c2e91](https://github.com/ghiscoding/slickgrid-universal/commit/e7c2e91842eac2044ccdd82673bfade20b24ab4f))
* **sort:** add valueCouldBeUndefined column flag to help sorting ([6d2b6a6](https://github.com/ghiscoding/slickgrid-universal/commit/6d2b6a6b7521511470c27c17ce65784258a87868))
* **sorting:** header menu clear sort, reset sorting when nothing left ([032886b](https://github.com/ghiscoding/slickgrid-universal/commit/032886bf6da9e3d711a17d23481c47ccf81af353))
* **style:** tweak Editors styling and add Sort icon hint on hover ([aba4182](https://github.com/ghiscoding/slickgrid-universal/commit/aba41826659844519da1ef170f0b3641a0d91af0))
* **styling:** add a Salesforce theme ([3b62101](https://github.com/ghiscoding/slickgrid-universal/commit/3b62101413dc3eb4eeb5df7772db3b885d7ae7c5))
* **styling:** add css autoprefixer ([2e89c28](https://github.com/ghiscoding/slickgrid-universal/commit/2e89c287ea0ed5a508f2e977cae21ecc35ed414d))
* **styling:** add edit icon when hovering editable cell with SF Theme ([eef4403](https://github.com/ghiscoding/slickgrid-universal/commit/eef4403b8e9168ff119eb97ca5c663101104abae))
* **styling:** add material design icons to npm & scss instead of html ([9e9a1ca](https://github.com/ghiscoding/slickgrid-universal/commit/9e9a1ca7794eb807494bfbd837aa7e17ad4b42b2))
* **styling:** add more material design stylings ([680788b](https://github.com/ghiscoding/slickgrid-universal/commit/680788b9b456c6d87875234d9f2c033cfbb7e18f))
* **styling:** material theme, replace all built-in Font char to SVG ([ed25d6a](https://github.com/ghiscoding/slickgrid-universal/commit/ed25d6ae4848b614c84da111ff894eedb5be6400))
* **styling:** salesforce theme, replace all built-in Font char to SVG ([1c5f341](https://github.com/ghiscoding/slickgrid-universal/commit/1c5f3414d8bafea7cb393033c9753aef4ad66b2f))
* **styling:** update Material Design font and some material styling ([c7ecbf9](https://github.com/ghiscoding/slickgrid-universal/commit/c7ecbf91b000e0758df04f87f49c35c1293f0abe))
* **tests:** add export abstract classes and add few more unit tests ([13a1bca](https://github.com/ghiscoding/slickgrid-universal/commit/13a1bcac7c21666f2b006f3488036175b29b1b3d))
* **tests:** add Jest to lib root and add few more unit tests ([5811c96](https://github.com/ghiscoding/slickgrid-universal/commit/5811c96568c5255376ea6b97b132f4f0fded0647))
* **tests:** add more Jest unit tests & commands ([d4da547](https://github.com/ghiscoding/slickgrid-universal/commit/d4da547aaae797767140d73289d7f50874fdd09e))
* **tests:** add queryFieldNameGetterFn callback unit tests ([6426793](https://github.com/ghiscoding/slickgrid-universal/commit/64267931dd6ad5506c52da2b19854d2a56d2104f))
* **tests:** rename to slick-vanilla-grid-bundle and add unit tests ([#12](https://github.com/ghiscoding/slickgrid-universal/issues/12)) ([006c302](https://github.com/ghiscoding/slickgrid-universal/commit/006c30251ea1d473e5d1ae54d20c050fccf0e6a4))
* **translate:** add namespace prefix + separator grid option
([90b1b2e](https://github.com/ghiscoding/slickgrid-universal/commit/90b1b2ec0c1a55d23ebcc47b6a88d972c9bbcdb7)) * **tree:** add Collapse/Expand All comands in context menu ([0b58d5e](https://github.com/ghiscoding/slickgrid-universal/commit/0b58d5e3727541fa088a1eeb9e49bb55f367b7c5)) * **tree:** add Tree Data multi-column Filtering support ([f9b4863](https://github.com/ghiscoding/slickgrid-universal/commit/f9b4863810da47138be7f83222ee49d87b4e20c0)) * **tree:** fixed recursive methods to sort hierarchical array ([6bc2915](https://github.com/ghiscoding/slickgrid-universal/commit/6bc29158395e6f3c9e3fbf87358d3ecb5fb12b75)) * **tree:** get a functional Tree View example working with add item ([c07cdb5](https://github.com/ghiscoding/slickgrid-universal/commit/c07cdb545106fd845a105a28014daabaa2860137))
avg_line_length: 90.519864
max_line_length: 329
alphanum_fraction: 0.796158
lid: yue_Hant
lid_prob: 0.173473

hexsha: b9dac4ea842ed0a1bc39800d7cc7cd8998e61f40
size: 2,117
ext: md
lang: Markdown
max_stars_repo_path: docs/code-quality/C26473.md
max_stars_repo_name: hericlesme/visualstudio-docs.pt-br
max_stars_repo_head_hexsha: 086d2f88af868af84582bc7f1d50ffc5ea14b11f
max_stars_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: docs/code-quality/C26473.md
max_issues_repo_name: hericlesme/visualstudio-docs.pt-br
max_issues_repo_head_hexsha: 086d2f88af868af84582bc7f1d50ffc5ea14b11f
max_issues_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: docs/code-quality/C26473.md
max_forks_repo_name: hericlesme/visualstudio-docs.pt-br
max_forks_repo_head_hexsha: 086d2f88af868af84582bc7f1d50ffc5ea14b11f
max_forks_repo_licenses: [ "CC-BY-4.0", "MIT" ]
max_forks_count: 1
max_forks_repo_forks_event_min_datetime: 2021-07-26T14:58:39.000Z
max_forks_repo_forks_event_max_datetime: 2021-07-26T14:58:39.000Z
---
title: C26473
ms.date: 11/15/2017
ms.prod: visual-studio-dev15
ms.technology: vs-ide-code-analysis
ms.topic: conceptual
f1_keywords:
- C26473
helpviewer_keywords:
- C26473
ms.assetid: d88aaa57-0003-421f-8377-4e6a5c27f2df
author: mikeblome
ms.author: mblome
manager: douge
ms.workload:
- multiple
ms.openlocfilehash: d35b0cb21bcacde24d48ebb73c032322fbfd99bb
ms.sourcegitcommit: e13e61ddea6032a8282abe16131d9e136a927984
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/26/2018
ms.locfileid: "31888435"
---
# <a name="c26473-noidentitycast"></a>C26473 NO_IDENTITY_CAST

"Don't cast between pointer types where the source type and the target type are the same."

**C++ Core Guidelines**: Type.1: Avoid casts

This rule helps remove unnecessary or suspicious casts. Obviously, a cast from a type to itself is a no-op, yet the mere fact that a cast is used may point to a subtle design problem or to a potential regression if the types change in the future. It is always safer to use as few casts as possible.

## <a name="remarks"></a>Remarks

- This rule is implemented for static and reinterpret casts, and checks only pointer types.

## <a name="example"></a>Example

Dangerously generic lookup:

```cpp
gsl::span<server> servers_;

template<class T>
server* resolve_server(T tag) noexcept {
    auto p = reinterpret_cast<server*>(tag); // C26473, also 26490 NO_REINTERPRET_CAST
    return p >= &(*servers_.begin()) && p < &(*servers_.end()) ? p : nullptr;
}

void promote(server *s, int index) noexcept {
    auto s0 = resolve_server(s);
    auto s1 = resolve_server(index);
    if (s0 && s1)
        std::swap(s0, s1);
}
```

## <a name="example"></a>Example

Dangerously generic lookup, reworked:

```cpp
// ...
server* resolve_server(server *p) noexcept {
    return p >= &(*servers_.begin()) && p < &(*servers_.end()) ? p : nullptr;
}

server* resolve_server(ptrdiff_t i) noexcept {
    return !servers_.empty() && i >= 0 && i < servers_.size() ? &servers_[i] : nullptr;
}
// ...
```
avg_line_length: 31.132353
max_line_length: 337
alphanum_fraction: 0.720831
lid: por_Latn
lid_prob: 0.842247

hexsha: b9db0b437ad5bfa5ad63cedeff2b9a052a78f3b8
size: 217
ext: md
lang: Markdown
max_stars_repo_path: data/content/fate-grand-order/ce/brief-moment-of-joy/attr.zh.md
max_stars_repo_name: tmdict/tmdict
max_stars_repo_head_hexsha: c2f8ddb7885a91d01343de4ea7b66fea78351d94
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 3
max_stars_repo_stars_event_min_datetime: 2022-02-25T11:13:45.000Z
max_stars_repo_stars_event_max_datetime: 2022-02-28T11:55:41.000Z
max_issues_repo_path: data/content/fate-grand-order/ce/brief-moment-of-joy/attr.zh.md
max_issues_repo_name: slsdo/tmdict
max_issues_repo_head_hexsha: c2f8ddb7885a91d01343de4ea7b66fea78351d94
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: data/content/fate-grand-order/ce/brief-moment-of-joy/attr.zh.md
max_forks_repo_name: slsdo/tmdict
max_forks_repo_head_hexsha: c2f8ddb7885a91d01343de4ea7b66fea78351d94
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 2
max_forks_repo_forks_event_min_datetime: 2022-02-25T09:59:50.000Z
max_forks_repo_forks_event_max_datetime: 2022-02-28T11:55:09.000Z
---
parent: attribute.ce
source: fate-grand-order
id: brief-moment-of-joy
language: zh
weight: 0
---

D'Eon's Valentine's Day return gift.

At night, you are invited to a certain room.

D'Eon stands beside the softly crackling fireplace and, cheeks flushed, holds something out to you.

"Although it really would have been better to make this beside a campfire..."

What he/she holds while saying this is a piping-hot marshmallow, skewered on a wire and toasted...
avg_line_length: 12.764706
max_line_length: 34
alphanum_fraction: 0.764977
lid: yue_Hant
lid_prob: 0.137104