| Column | Type | Observed range / classes |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 5–1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3–344 |
| max_stars_repo_name | stringlengths | 5–125 |
| max_stars_repo_head_hexsha | stringlengths | 40–78 |
| max_stars_repo_licenses | sequencelengths | 1–11 |
| max_stars_count | int64 | 1–368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 3–344 |
| max_issues_repo_name | stringlengths | 5–125 |
| max_issues_repo_head_hexsha | stringlengths | 40–78 |
| max_issues_repo_licenses | sequencelengths | 1–11 |
| max_issues_count | int64 | 1–116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 3–344 |
| max_forks_repo_name | stringlengths | 5–125 |
| max_forks_repo_head_hexsha | stringlengths | 40–78 |
| max_forks_repo_licenses | sequencelengths | 1–11 |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| content | stringlengths | 5–1.04M |
| avg_line_length | float64 | 1.14–851k |
| max_line_length | int64 | 1–1.03M |
| alphanum_fraction | float64 | 0–1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01–1 |
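The table above lists the per-column types and observed value ranges for the rows that follow. As a minimal sketch of how a dump with these columns could be inspected, assuming the rows are exported as JSON Lines and that pandas is an acceptable tool (the `rows.jsonl` file name and this loading approach are assumptions, not stated anywhere in the dump):

```python
# Minimal sketch: filtering rows of the dump using the columns listed above.
# The file name "rows.jsonl" is hypothetical; only the column names come from the schema.
import pandas as pd

df = pd.read_json("rows.jsonl", lines=True)

# Keep confidently language-identified English Markdown with mostly textual content.
english_md = df[
    (df["ext"] == "md")
    & (df["lid"] == "eng_Latn")
    & (df["lid_prob"] > 0.9)
    & (df["alphanum_fraction"].between(0.25, 0.9))
]

print(english_md[["max_stars_repo_name", "max_stars_repo_path", "size"]].head())
```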
ff2fc724f1c1bf365cbaf3b73dd0c4d09cd8c544
108
md
Markdown
README.md
thisismk/scaling-octopus
de509d03325768affa1e75c1ad62d8d6823863e8
[ "MIT" ]
null
null
null
README.md
thisismk/scaling-octopus
de509d03325768affa1e75c1ad62d8d6823863e8
[ "MIT" ]
null
null
null
README.md
thisismk/scaling-octopus
de509d03325768affa1e75c1ad62d8d6823863e8
[ "MIT" ]
null
null
null
Playing around with Hugo.io
============

Template repo for a blog built with Hugo deployed on github-pages
21.6
65
0.712963
eng_Latn
0.998371
ff30dab69009d12665d2d366ed40a93f3d9ef63a
2,672
md
Markdown
permission-user.md
yajra/laravel-acl-docs
0862628dbf2f1fb3a256d39f5212622c348d7fb2
[ "MIT" ]
5
2017-05-01T13:12:37.000Z
2017-10-28T08:47:19.000Z
permission-user.md
yajra/laravel-acl-docs
0862628dbf2f1fb3a256d39f5212622c348d7fb2
[ "MIT" ]
null
null
null
permission-user.md
yajra/laravel-acl-docs
0862628dbf2f1fb3a256d39f5212622c348d7fb2
[ "MIT" ]
null
null
null
# User Permissions

User Permissions is an extended version of `HasRole` that allows us to directly add custom permissions to a `User`. To implement this setup, you need to use the `Yajra\Acl\Traits\HasRoleAndPermission` trait on your `User` model.

```php
class User extends Authenticatable
{
    use HasRoleAndPermission;
}
```

- [`getPermissions()`](#get-permissions)
- [`grantPermission($ids, array $attributes = [], $touch = true)`](#grant)
- [`grantPermissionBySlug($slug)`](#grant-slug)
- [`grantPermissionByResource($resource)`](#grant-resource)
- [`revokePermission($ids = null, $touch = true)`](#revoke)
- [`revokePermissionBySlug($slug)`](#revoke-slug)
- [`revokeAllPermissions()`](#revoke-all)
- [`syncPermissions($ids, $detaching = true)`](#sync)

<a name="get-permissions"></a>
## getPermissions()

Retrieves an array of assigned permission slugs for the user.

```php
$user = User::find(1);
return $user->getPermissions();
```

<a name="grant"></a>
## grantPermission($ids, array $attributes = [], $touch = true)

Grants the given permission to the user.

```php
$user = User::find(1);
$user->grantPermission(1);

$permissions = Permission::all();
$user->grantPermission($permissions);
```

<a name="grant-slug"></a>
## grantPermissionBySlug($slug)

Grants the given permission slug to the user.

```php
$user = User::find(1);
$user->grantPermissionBySlug('create-post');

$permissions = ['create-post', 'view-post'];
$user->grantPermissionBySlug($permissions);
```

<a name="grant-resource"></a>
## grantPermissionByResource($resource)

Grants the given permission resource to the user.

```php
$user = User::find(1);
$user->grantPermissionByResource('Posts');

$resources = ['Users', 'Posts'];
$user->grantPermissionByResource($resources);
```

<a name="revoke"></a>
## revokePermission($ids = null, $touch = true)

Revokes the given permission from the user.

```php
$user = User::find(1);
$user->revokePermission(1);
$user->revokePermission([1, 2]);
```

<a name="revoke-slug"></a>
## revokePermissionBySlug($slug)

Revokes the given permission by slug from the user.

```php
$user = User::find(1);
$user->revokePermissionBySlug('create-posts');
$user->revokePermissionBySlug(['create-posts', 'update-posts']);
```

<a name="revoke-all"></a>
## revokeAllPermissions()

Revokes all permissions from the user.

```php
$user = User::find(1);
$user->revokeAllPermissions();
```

<a name="sync"></a>
## syncPermissions($ids, $detaching = true)

Syncs the given permissions with the user. This will revoke any permissions not supplied.

```php
$user = User::find(1);
$user->syncPermissions(1);
$user->syncPermissions([1, 2, 3]);
```
23.438596
227
0.694985
eng_Latn
0.247198
ff3155b566b81db406f3a354b4b1d1d37201d640
3,387
md
Markdown
articles/marketplace/partner-center-portal/saas-fulfillment-apis-faq.md
eltociear/azure-docs.zh-cn
b24f1a5a0fba668fed89d0ff75ca11d3c691f09b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/marketplace/partner-center-portal/saas-fulfillment-apis-faq.md
eltociear/azure-docs.zh-cn
b24f1a5a0fba668fed89d0ff75ca11d3c691f09b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/marketplace/partner-center-portal/saas-fulfillment-apis-faq.md
eltociear/azure-docs.zh-cn
b24f1a5a0fba668fed89d0ff75ca11d3c691f09b
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: SaaS 履行 API - 常见问题 |Azure 应用商店 description: Azure 应用商店中 SaaS 产品的客户发现和购买体验。 author: dsindona ms.author: dsindona ms.service: marketplace ms.subservice: partnercenter-marketplace-publisher ms.topic: conceptual ms.date: 07/11/2019 ms.openlocfilehash: 6d3a84341d5221950da20f39456461dafc5d2e75 ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897 ms.translationtype: MT ms.contentlocale: zh-CN ms.lasthandoff: 03/28/2020 ms.locfileid: "80275690" --- # <a name="saas-fulfillment-apis---faq"></a>SaaS 履行 API - 常见问题解答 列出了与 Azure 应用商店的集成要求,以使 Azure 客户能够订阅 SaaS 产品/服务。 ## <a name="discovery-experience"></a>发现体验 发布产品/服务后,Azure 用户可以在 Azure 应用商店中发现 SaaS 产品/服务。 您的客户将能够根据产品类型 (SaaS) 筛选产品/服务,并了解他们感兴趣的 SaaS 服务。 ## <a name="purchase-experience"></a>购买体验 用户对特定的 SaaS 服务感兴趣后,用户可以从 Azure 应用商店订阅该服务。 ### <a name="what-does-it-mean-for-an-azure-user-to-subscribe-to-a-saas-offer-in-azure-marketplace"></a>Azure 用户订阅 Azure 应用商店中的 SaaS 产品/服务意味着什么? 这意味着用户可以查看与 SaaS 服务关联的使用条款和隐私声明,并同意根据您(SaaS 产品发布者)在 Microsoft 发票上设置的计费条款付款。 用户可以使用 Azure 中的现有付款配置文件来为 SaaS 服务消耗付费。 这有很多原因。 客户现在可以使用 Microsoft 云平台作为受信任的来源在一个地方发现和订阅,而无需审查它打算使用的每个 ISV 软件。 此外,客户可以使用其现有的付款配置文件,而无需单独显式支付每个 ISV 软件。 ### <a name="is-the-user-charged-automatically-when-the-offer-is-subscribed"></a>订阅产品/服务时,用户是否自动收费? 在订阅 SaaS 优惠时,用户已同意通过 Microsoft 平台支付 SaaS 服务的使用费用。 但是,费用仅在消费产品/服务时开始。 用户必须转到您的 SaaS 产品/服务并确认帐户创建才能开始使用产品/服务。 然后,您将通知 Microsoft 开始为此客户 SaaS 订阅计费。 ### <a name="how-are-you-notified-when-a-user-subscribes-to-your-saas-offer"></a>当用户订阅您的 SaaS 产品/服务时,您如何收到通知? 订阅产品/服务后,Azure 用户可以在 Azure 中发现和管理其所有产品/服务。 默认情况下,新订阅的 SaaS 产品/服务的状态为 **"预配,正在履行"。** 在此状态下,将提示 Azure 用户执行 **"配置帐户"** 的操作,以便浏览到 Azure 门户中的 SaaS 订阅管理体验。 当用户单击 **"配置帐户"** 时,他们将重定向到 SaaS 服务网站。 导航到它们的 URL 由发布者在发布产品/服务时提供。 此页称为发布者的着陆页。 Azure 用户应该能够基于 Azure 中现有的 AAD 凭据登录到 SaaS 登录页。 将 Azure 用户重定向到登录页时,将标记添加到查询 URL 中。 此令牌持续时间短,有效期为 24 小时。 然后,可以检测此令牌的存在,并调用 Microsoft 的 API 以获取有关令牌的更多上下文。 ![客户订阅流](media/saas-metering-service-integration-flow-a.png) 有关在 SaaS 产品生命周期内处理交易方案的 API 合同的详细信息,请参阅[SaaS 履行 API](https://docs.microsoft.com/azure/marketplace/partner-center-portal/pc-saas-fulfillment-api-v2)文档。 ### <a name="how-do-you-know-the-saas-offer-to-which-the-user-subscribes-in-azure"></a>您如何知道用户在 Azure 中订阅的 SaaS 产品/服务? 对 API`Resolve`的响应包括与 SaaS 订阅关联的产品/服务并计划信息。 ### <a name="how-can-the-azure-user-change-the-plan-associated-with-this-azure-subscription"></a>Azure 用户如何更改与此 Azure 订阅关联的计划? * Azure 用户可以直接在 SaaS 体验中或通过 Microsoft 平台更改与 SaaS 订阅关联的计划。 * 转换可以在计费周期的任何时间完成。 您必须确认任何转换,一旦确认,该转换将生效。 * 预付费计划(**每月**或**每年**)费率按比例计算。 在转换时间前发出的任何超额费用将在下一张发票中收取。 将根据新计划排放新的超额。 >[!Note] >如果不想支持特定的转换路径,则可以阻止降级。 以下序列捕获 Azure 客户在 SaaS 体验中更改计划时的流: ![客户计划变更流程](media/saas-metering-service-integration-flow-b.png) 以下序列捕获 Azure 客户在 Microsoft 网店中更改计划时的流 ![客户店面计划变更流程](media/saas-metering-service-integration-flow-c.png) ### <a name="how-can-the-azure-user-unsubscribe-from-the-plan-associated-with-azure-subscription"></a>Azure 用户如何取消订阅与 Azure 订阅关联的计划? Azure 用户可以直接在 SaaS 体验中或通过 Microsoft 平台取消订阅已购买的 SaaS 产品。 用户取消订阅后,将不再从下一个计费周期中收取费用。 以下序列捕获 Azure 客户取消订阅 SaaS 体验中 SaaS 产品/服务的流: ![客户在 SaaS 体验中取消订阅](media/saas-metering-service-integration-flow-d.png) 以下序列捕获 Azure 用户在 Microsoft 网店中取消订阅时的流: ![客户在微软的网店取消订阅](media/saas-metering-service-integration-flow-e.png) ## <a name="next-steps"></a>后续步骤 - 有关详细信息[,请参阅应用商店计量服务 API。](./marketplace-metering-service-apis.md)
38.05618
150
0.777679
yue_Hant
0.652534
ff317a593d61fe42d592409718b8a9f161f4d58c
1,723
md
Markdown
README.md
german-antonio/technisys-challenge
8569130598141eb754f1e9a5b8d09dfd9d124c27
[ "MIT" ]
null
null
null
README.md
german-antonio/technisys-challenge
8569130598141eb754f1e9a5b8d09dfd9d124c27
[ "MIT" ]
null
null
null
README.md
german-antonio/technisys-challenge
8569130598141eb754f1e9a5b8d09dfd9d124c27
[ "MIT" ]
null
null
null
# Technisys Code Challenge

Code challenge resolution for a job application at Technisys

## Prerequisites

In order to run this project you need to have NodeJS installed (along with NPM). Optionally, Docker will be needed to run the containerized version (see below).

## Installation

### Local

Open a terminal and clone the repository into your local machine

```
git clone git@github.com:german-antonio/technisys-challenge.git
```

and then run

```
npm install
```

to install the required npm packages.

### Containerized version

If you'd prefer to install the containerized version, open a terminal and type in the following command to pull the image

```
docker pull germanantonio/technisys-challenge:latest
```

## Usage

### Local

On a terminal, navigate to the repository's directory, and type

```
npm start
```

You can also run

```
npm test
```

if you want to run the unit tests and check their coverage.

### Containerized version

Open a terminal and run the image with the following command

```
docker run -t -d germanantonio/technisys-challenge:latest
```

The container should now be running. Use the following command to see the image running

```
docker ps
```

Check the output of this command to find the container ID

```
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS
227b353e2b39   germanantonio/technisys-challenge:latest   "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes
```

Use the container ID to open a shell inside the container

```
docker exec -it <container-id> /bin/bash
```

Once inside the container, you can run

```
npm start
```

to run the application, or

```
npm test
```

to run the unit tests and check the code coverage.
25.716418
121
0.721997
eng_Latn
0.994876
ff32bda80e7d7157571a048d154b32dcfcc23066
1,667
md
Markdown
courses/algo-1/08_lesson/readme.md
sakenism/curriculum
29b6b24474fc9a6759fdec94ce6d8a2ddff025e0
[ "MIT" ]
null
null
null
courses/algo-1/08_lesson/readme.md
sakenism/curriculum
29b6b24474fc9a6759fdec94ce6d8a2ddff025e0
[ "MIT" ]
null
null
null
courses/algo-1/08_lesson/readme.md
sakenism/curriculum
29b6b24474fc9a6759fdec94ce6d8a2ddff025e0
[ "MIT" ]
null
null
null
# Динамическое программирование Динамическое программирование — способ решения сложных задач путём разбиения их на более простые подзадачи. На уроке рекурсии мы проходили числа Фибоначчи, где каждое число Фибоначчи это сумма двух предыдущих чисел Фибоначчи. - Fib(0) = 0 - Fib(1) = 1 - Fib(2) = 1 - Fib(3) = 2 и так далее. Для нахождения числа Fib(n), можно разделить задачу сумму двех подзадач, т.е. Fib(n) = Fib(n - 1) + Fib(n - 2). Здесь задачу для **n** раздроблена на подзадачи поменьше, и теперь нужно решить их. Для большего понятие Динамического программирования (ДП), давай разберем следующую задачу: Вася стоит на земле и перед ним стоит бесконечно высокая лестница. Вася может подняться либо на одну ступень, либо на две. Сколькими разными способами Вася может подняться до ступени **n**. Давай разберем несколько случаев чтобы задача стала более понятной: - На первую ступень он может подняться только 1 способом, сделав шаг высотой один - (1). - На вторую ступень он может подняться 2 способами, сделав 2 шага высотой один, либо 1 шаг высотой 2 - (1, 1) или (2). - На третью ступень можно сделать (1, 1, 1), (1, 2), (2, 1). <iframe height="400px" width="100%" src="https://repl.it/@SakenMukanov/IndolentUntriedLaboratory?lite=true" scrolling="no" frameborder="no" allowtransparency="true" allowfullscreen="true" sandbox="allow-forms allow-pointer-lock allow-popups allow-same-origin allow-scripts allow-modals"></iframe> Если не получается, пытайся до тех пор, пока не получится. В конце концов, можешь посмотреть <a href="https://repl.it/@SakenMukanov/JuniorUsefulExpertise" target="_blank">Решение</a>.
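The staircase problem described in the lesson above (climbing to step n with moves of height 1 or 2) follows the same recurrence as the Fibonacci numbers. A minimal Python sketch of the bottom-up dynamic-programming solution; the function and variable names are illustrative and this is not taken from the linked repl.it solution:

```python
# Number of ways to reach step n with moves of height 1 or 2,
# using the recurrence ways(n) = ways(n - 1) + ways(n - 2).
def count_ways(n: int) -> int:
    if n <= 1:
        return 1
    prev, curr = 1, 1  # ways(0), ways(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(count_ways(1))  # 1 -> (1)
print(count_ways(2))  # 2 -> (1,1), (2)
print(count_ways(3))  # 3 -> (1,1,1), (1,2), (2,1)
```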
52.09375
298
0.743851
rus_Cyrl
0.950062
ff333b277db5223cc311b76af68b9fa679cbddf7
425
md
Markdown
_posts/2002-11-14-sem-titulo-01.md
olivia-olivia/olivia-olivia.github.io
49808b1a32de30f3adecf0f5044166c3469734b2
[ "MIT" ]
1
2021-09-20T20:16:41.000Z
2021-09-20T20:16:41.000Z
_posts/2002-11-14-sem-titulo-01.md
olivia-olivia/olivia-olivia.github.io
49808b1a32de30f3adecf0f5044166c3469734b2
[ "MIT" ]
9
2019-06-08T23:41:09.000Z
2019-08-22T02:35:45.000Z
_posts/2002-11-14-sem-titulo-01.md
olivia-olivia/olivia-olivia.github.io
49808b1a32de30f3adecf0f5044166c3469734b2
[ "MIT" ]
1
2020-09-20T14:55:13.000Z
2020-09-20T14:55:13.000Z
--- layout: post title: "(sem título)" date: 2002-11-14 -0300 categories: translucido preferidas-da-poeta author-comment: "sim, é sobre meu pai." --- do ponto eqüidistante, entre o lúcido e o translúcido, <!--more--> não se enxerga nada se faz nem como um ruído, bem como nem fonemas puídos e sem sentido, que entre muitas mortes e falantes, que em nada se escoram, não conseguem ser palavra morta-viva
22.368421
43
0.708235
por_Latn
0.998028
ff3420daa6812520092a6cee3dd0aafb975007dd
19,136
md
Markdown
_posts/2019-03-23-seq2seq-6.md
lifanchen-simm/lifanchen-simm.github.io
2d3c4bbfa43559e930f0e14f7882d210df20dc28
[ "Apache-2.0" ]
null
null
null
_posts/2019-03-23-seq2seq-6.md
lifanchen-simm/lifanchen-simm.github.io
2d3c4bbfa43559e930f0e14f7882d210df20dc28
[ "Apache-2.0" ]
null
null
null
_posts/2019-03-23-seq2seq-6.md
lifanchen-simm/lifanchen-simm.github.io
2d3c4bbfa43559e930f0e14f7882d210df20dc28
[ "Apache-2.0" ]
null
null
null
--- layout: post title: seq2seq-(6) subtitle: transformer,attention is all you need date: 2019-03-23 author: lifanchen header-img: catalog: true math: true tags: - seq2seq - pytorch --- # 6 - Attention is All You Need ## 背景 减少顺序计算的目标构成了扩展神经GPU,ByteNet和ConvS2S的基础,所有这些都使用卷积神经网络作为基本构建块,**并行计算**所有输入和输出位置的隐藏表示。在这些模型中,关联来自两个任意输入或输出位置的信号所需的操作数量随着位置之间距离的增长而增长,对于ConvS2S呈线性增长,对于ByteNet呈对数增长。<span style="color:red">这使得学习远程位置之间的依赖性变得更加困难。</span>在`transformer`中,这将被减少到**恒定的操作次数**,尽管由于平均注意力加权位置而导致有效分辨率降低,这是我们与多头注意力相抵消的效果。 `自我注意力(self-attention)`,有时称为内部关注是关联机制,通过关联单个序列的不同位置来计算序列的表示。`自我注意力`已经成功地用于各种任务,包括阅读理解,抽象概括,文本蕴涵和学习任务独立的句子表示。端到端存储器网络基于`循环注意机制`而不是`序列对齐重复`,并且已经证明在简单语言问答和语言建模任务上表现良好。 然而,据我们所知,`transformer`是第一个完全依靠`自我注意力(self-attention)`的转换模型来计算其输入和输出的表示,而不使用序列对齐的RNN或卷积。 ## 模型框架 大多数竞争性神经序列转导模型具有编码器 - 解码器结构。 这里,编码器将符号表示的输入序列$(x_1,...,x_n)$ 映射到连续表示序列$z =(z_1,...,z_n)$。 给定z,解码器一次一个元素地生成输出序列$(y_1,...,y_m)$。 在每个步骤中,模型是自动回归的,在生成下一个字符时会利用之前生成的符号作为附加输入。 ![transformer](http://nlp.seas.harvard.edu/images/the-annotated-transformer_14_0.png) 模型整体结构如下: ![模型图](https://pic3.zhimg.com/80/v2-c14a98dbcb1a7f6f2d18cf9a1f591be6_hd.jpg) ## 数据预处理 和之前博文中实现的模型一样。 ```python import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchtext from torchtext.datasets import TranslationDataset, Multi30k from torchtext.data import Field, BucketIterator import spacy import random import math import os import time SEED = 1 random.seed(SEED) torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True spacy_de = spacy.load('de') spacy_en = spacy.load('en') def tokenize_de(text): """ Tokenizes German text from a string into a list of strings """ return [tok.text for tok in spacy_de.tokenizer(text)] def tokenize_en(text): """ Tokenizes English text from a string into a list of strings """ return [tok.text for tok in spacy_en.tokenizer(text)] SRC = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True, batch_first=True) # batch_first=True ---> [batch_size,sequence_length] TRG = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True, batch_first=True) train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(SRC, TRG)) SRC.build_vocab(train_data, min_freq=2) TRG.build_vocab(train_data, min_freq=2) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') BATCH_SIZE = 128 train_iterator, valid_iterator, test_iterator = BucketIterator.splits( (train_data, valid_data, test_data), batch_size=BATCH_SIZE, device=device) ``` ## Encoder ```python class Encoder(nn.Module): def __init__(self, input_dim, hid_dim, n_layers, n_heads, pf_dim, encoder_layer, self_attention, positionwise_feedforward, dropout, device): super().__init__() self.input_dim = input_dim self.hid_dim = hid_dim self.n_layers = n_layers self.n_heads = n_heads self.pf_dim = pf_dim self.encoder_layer = encoder_layer self.self_attention = self_attention self.positionwise_feedforward = positionwise_feedforward self.dropout = dropout self.device = device self.tok_embedding = nn.Embedding(input_dim, hid_dim) self.pos_embedding = nn.Embedding(1000, hid_dim) self.layers = nn.ModuleList([encoder_layer(hid_dim, n_heads, pf_dim, self_attention, positionwise_feedforward, dropout, device) for _ in range(n_layers)]) self.do = nn.Dropout(dropout) self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device) def forward(self, src, src_mask): #src = [batch size, src sent len] #src_mask = [batch size, src sent len] pos = torch.arange(0, 
src.shape[1]).unsqueeze(0).repeat(src.shape[0], 1).to(self.device) src = self.do((self.tok_embedding(src) * self.scale) + self.pos_embedding(pos)) #src = [batch size, src sent len, hid dim] for layer in self.layers: src = layer(src, src_mask) return src ``` ## Encoder layer ```python class EncoderLayer(nn.Module): def __init__(self, hid_dim, n_heads, pf_dim, self_attention, positionwise_feedforward, dropout, device): super().__init__() self.ln = nn.LayerNorm(hid_dim) self.sa = self_attention(hid_dim, n_heads, dropout, device) self.pf = positionwise_feedforward(hid_dim, pf_dim, dropout) self.do = nn.Dropout(dropout) def forward(self, src, src_mask): #src = [batch size, src sent len, hid dim] #src_mask = [batch size, src sent len] src = self.ln(src + self.do(self.sa(src, src, src, src_mask))) src = self.ln(src + self.do(self.pf(src))) return src ``` ## self-attention $$ \text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^T}{\sqrt{d_k}})V \tag1 $$ Multi-Head Attention相当于 h 个不同的self-attention的集成(ensemble),在这里我们以 h=8 举例说明。Multi-Head Attention的输出分成3步: 将数据 X 分别输入到图13所示的8个self-attention中,得到8个加权后的特征矩阵$ Z_i, i\in\{1,2,...,8\} $。 将8个$ Z_i $ 按列拼成一个大的特征矩阵; 特征矩阵经过一层全连接后得到输出 Z 。 整个过程如图14所示: ![multi-headed](https://pic3.zhimg.com/v2-c2a91ac08b34e73c7f4b415ce823840e_r.jpg) ```python class SelfAttention(nn.Module): def __init__(self, hid_dim, n_heads, dropout, device): super().__init__() self.hid_dim = hid_dim self.n_heads = n_heads assert hid_dim % n_heads == 0 self.w_q = nn.Linear(hid_dim, hid_dim) self.w_k = nn.Linear(hid_dim, hid_dim) self.w_v = nn.Linear(hid_dim, hid_dim) self.fc = nn.Linear(hid_dim, hid_dim) self.do = nn.Dropout(dropout) self.scale = torch.sqrt(torch.FloatTensor([hid_dim // n_heads])).to(device) def forward(self, query, key, value, mask=None): bsz = query.shape[0] #query = key = value [batch size, sent len, hid dim] Q = self.w_q(query) K = self.w_k(key) V = self.w_v(value) #Q, K, V = [batch size, sent len, hid dim] Q = Q.view(bsz, -1, self.n_heads, self.hid_dim // self.n_heads).permute(0, 2, 1, 3) K = K.view(bsz, -1, self.n_heads, self.hid_dim // self.n_heads).permute(0, 2, 1, 3) V = V.view(bsz, -1, self.n_heads, self.hid_dim // self.n_heads).permute(0, 2, 1, 3) #Q, K, V = [batch size, n heads, sent len, hid dim // n heads] energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale #energy = [batch size, n heads, sent len, sent len] if mask is not None: energy = energy.masked_fill(mask == 0, -1e10) attention = self.do(F.softmax(energy, dim=-1)) #attention = [batch size, n heads, sent len, sent len] x = torch.matmul(attention, V) #x = [batch size, n heads, sent len, hid dim // n heads] x = x.permute(0, 2, 1, 3).contiguous() #x = [batch size, sent len, n heads, hid dim // n heads] x = x.view(bsz, -1, self.n_heads * (self.hid_dim // self.n_heads)) #x = [batch size, src sent len, hid dim] x = self.fc(x) #x = [batch size, sent len, hid dim] return x ``` ## positionwise-feedforward ```python class PositionwiseFeedforward(nn.Module): def __init__(self, hid_dim, pf_dim, dropout): super().__init__() self.hid_dim = hid_dim self.pf_dim = pf_dim self.fc_1 = nn.Conv1d(hid_dim, pf_dim, 1) # convolution neural units self.fc_2 = nn.Conv1d(pf_dim, hid_dim, 1) # convolution neural units self.do = nn.Dropout(dropout) def forward(self, x): #x = [batch size, sent len, hid dim] x = x.permute(0, 2, 1) #x = [batch size, hid dim, sent len] x = self.do(F.relu(self.fc_1(x))) #x = [batch size, pf dim, sent len] x = self.fc_2(x) #x = [batch size, hid dim, sent len] x = x.permute(0, 2, 1) #x = [batch size, sent len, 
hid dim] return x ``` ## Decoder ```python class Decoder(nn.Module): def __init__(self, output_dim, hid_dim, n_layers, n_heads, pf_dim, decoder_layer, self_attention, positionwise_feedforward, dropout, device): super().__init__() self.output_dim = output_dim self.hid_dim = hid_dim self.n_layers = n_layers self.n_heads = n_heads self.pf_dim = pf_dim self.decoder_layer = decoder_layer self.self_attention = self_attention self.positionwise_feedforward = positionwise_feedforward self.dropout = dropout self.device = device self.tok_embedding = nn.Embedding(output_dim, hid_dim) self.pos_embedding = nn.Embedding(1000, hid_dim) self.layers = nn.ModuleList([decoder_layer(hid_dim, n_heads, pf_dim, self_attention, positionwise_feedforward, dropout, device) for _ in range(n_layers)]) self.fc = nn.Linear(hid_dim, output_dim) self.do = nn.Dropout(dropout) self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device) def forward(self, trg, src, trg_mask, src_mask): #trg = [batch_size, trg sent len] #src = [batch_size, src sent len, hid_dim] # encoder output #trg_mask = [batch size, trg sent len] #src_mask = [batch size, src sent len] pos = torch.arange(0, trg.shape[1]).unsqueeze(0).repeat(trg.shape[0], 1).to(self.device) trg = self.do((self.tok_embedding(trg) * self.scale) + self.pos_embedding(pos)) #trg = [batch size, trg sent len, hid dim] for layer in self.layers: trg = layer(trg, src, trg_mask, src_mask) return self.fc(trg) ``` ## Decoder layer ```python class DecoderLayer(nn.Module): def __init__(self, hid_dim, n_heads, pf_dim, self_attention, positionwise_feedforward, dropout, device): super().__init__() self.ln = nn.LayerNorm(hid_dim) self.sa = self_attention(hid_dim, n_heads, dropout, device) self.ea = self_attention(hid_dim, n_heads, dropout, device) self.pf = positionwise_feedforward(hid_dim, pf_dim, dropout) self.do = nn.Dropout(dropout) def forward(self, trg, src, trg_mask, src_mask): #trg = [batch size, trg sent len, hid dim] #src = [batch size, src sent len, hid dim] #trg_mask = [batch size, trg sent len] #src_mask = [batch size, src sent len] trg = self.ln(trg + self.do(self.sa(trg, trg, trg, trg_mask))) trg = self.ln(trg + self.do(self.ea(trg, src, src, src_mask))) trg = self.ln(trg + self.do(self.pf(trg))) return trg ``` ## transformer ```python class Seq2Seq(nn.Module): def __init__(self, encoder, decoder, pad_idx, device): super().__init__() self.encoder = encoder self.decoder = decoder self.pad_idx = pad_idx self.device = device def make_masks(self, src, trg): #src = [batch size, src sent len] #trg = [batch size, trg sent len] src_mask = (src != self.pad_idx).unsqueeze(1).unsqueeze(2) trg_pad_mask = (trg != self.pad_idx).unsqueeze(1).unsqueeze(3) trg_len = trg.shape[1] trg_sub_mask = torch.tril(torch.ones((trg_len, trg_len), dtype=torch.uint8, device=self.device)) trg_mask = trg_pad_mask & trg_sub_mask return src_mask, trg_mask def forward(self, src, trg): #src = [batch size, src sent len] #trg = [batch size, trg sent len] src_mask, trg_mask = self.make_masks(src, trg) enc_src = self.encoder(src, src_mask) #enc_src = [batch size, src sent len, hid dim] out = self.decoder(trg, enc_src, trg_mask, src_mask) #out = [batch size, trg sent len, output dim] return out ``` ## 实例化模型 ```python input_dim = len(SRC.vocab) hid_dim = 512 n_layers = 6 n_heads = 8 pf_dim = 2048 dropout = 0.1 enc = Encoder(input_dim, hid_dim, n_layers, n_heads, pf_dim, EncoderLayer, SelfAttention, PositionwiseFeedforward, dropout, device) output_dim = len(TRG.vocab) hid_dim = 512 n_layers = 6 n_heads = 8 pf_dim = 
2048 dropout = 0.1 dec = Decoder(output_dim, hid_dim, n_layers, n_heads, pf_dim, DecoderLayer, SelfAttention, PositionwiseFeedforward, dropout, device) pad_idx = SRC.vocab.stoi['<pad>'] model = Seq2Seq(enc, dec, pad_idx, device).to(device) ``` ## 初始化参数 ```python for p in model.parameters(): if p.dim() > 1: nn.init.xavier_uniform_(p) class NoamOpt: "Optim wrapper that implements rate." def __init__(self, model_size, factor, warmup, optimizer): self.optimizer = optimizer self._step = 0 self.warmup = warmup self.factor = factor self.model_size = model_size self._rate = 0 def step(self): "Update parameters and rate" self._step += 1 rate = self.rate() for p in self.optimizer.param_groups: p['lr'] = rate self._rate = rate self.optimizer.step() def rate(self, step = None): "Implement `lrate` above" if step is None: step = self._step return self.factor * (self.model_size ** (-0.5) * min(step ** (-0.5), step * self.warmup ** (-1.5))) optimizer = NoamOpt(hid_dim, 1, 2000, torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9)) criterion = nn.CrossEntropyLoss(ignore_index=pad_idx) ``` ## 训练和评估模型 ```python def train(model, iterator, optimizer, criterion, clip): model.train() epoch_loss = 0 for i, batch in enumerate(iterator): src = batch.src trg = batch.trg optimizer.optimizer.zero_grad() output = model(src, trg[:,:-1]) #output = [batch size, trg sent len - 1, output dim] #trg = [batch size, trg sent len] output = output.contiguous().view(-1, output.shape[-1]) trg = trg[:,1:].contiguous().view(-1) #output = [batch size * trg sent len - 1, output dim] #trg = [batch size * trg sent len - 1] loss = criterion(output, trg) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) def evaluate(model, iterator, criterion): model.eval() epoch_loss = 0 with torch.no_grad(): for i, batch in enumerate(iterator): src = batch.src trg = batch.trg output = model(src, trg[:,:-1]) #output = [batch size, trg sent len - 1, output dim] #trg = [batch size, trg sent len] output = output.contiguous().view(-1, output.shape[-1]) trg = trg[:,1:].contiguous().view(-1) #output = [batch size * trg sent len - 1, output dim] #trg = [batch size * trg sent len - 1] loss = criterion(output, trg) epoch_loss += loss.item() return epoch_loss / len(iterator) def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs N_EPOCHS = 10 CLIP = 1 SAVE_DIR = 'models' MODEL_SAVE_PATH = os.path.join(SAVE_DIR, 'transformer-seq2seq.pt') best_valid_loss = float('inf') if not os.path.isdir(f'{SAVE_DIR}'): os.makedirs(f'{SAVE_DIR}') for epoch in range(N_EPOCHS): start_time = time.time() train_loss = train(model, train_iterator, optimizer, criterion, CLIP) valid_loss = evaluate(model, valid_iterator, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), MODEL_SAVE_PATH) print(f'| Epoch: {epoch+1:03} | Time: {epoch_mins}m {epoch_secs}s| Train Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f} | Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f} |') ``` 训练结果: ``` | Epoch: 001 | Time: 0m 53s| Train Loss: 5.947 | Train PPL: 382.509 | Val. Loss: 4.110 | Val. PPL: 60.939 | | Epoch: 002 | Time: 0m 53s| Train Loss: 3.772 | Train PPL: 43.474 | Val. 
Loss: 3.196 | Val. PPL: 24.446 | | Epoch: 003 | Time: 0m 53s| Train Loss: 3.127 | Train PPL: 22.811 | Val. Loss: 2.806 | Val. PPL: 16.538 | | Epoch: 004 | Time: 0m 54s| Train Loss: 2.762 | Train PPL: 15.824 | Val. Loss: 2.570 | Val. PPL: 13.060 | | Epoch: 005 | Time: 0m 53s| Train Loss: 2.507 | Train PPL: 12.263 | Val. Loss: 2.413 | Val. PPL: 11.162 | | Epoch: 006 | Time: 0m 53s| Train Loss: 2.313 | Train PPL: 10.104 | Val. Loss: 2.323 | Val. PPL: 10.209 | | Epoch: 007 | Time: 0m 54s| Train Loss: 2.186 | Train PPL: 8.901 | Val. Loss: 2.310 | Val. PPL: 10.072 | | Epoch: 008 | Time: 0m 53s| Train Loss: 2.103 | Train PPL: 8.191 | Val. Loss: 2.283 | Val. PPL: 9.807 | | Epoch: 009 | Time: 0m 53s| Train Loss: 2.057 | Train PPL: 7.820 | Val. Loss: 2.307 | Val. PPL: 10.043 | | Epoch: 010 | Time: 0m 52s| Train Loss: 2.003 | Train PPL: 7.408 | Val. Loss: 2.285 | Val. PPL: 9.823 | ``` 测试结果: ```python model.load_state_dict(torch.load(MODEL_SAVE_PATH)) test_loss = evaluate(model, test_iterator, criterion) print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |') ``` ``` | Test Loss: 2.281 | Test PPL: 9.791 | ``` ## 总结 **优点**:(1)虽然Transformer最终也没有逃脱传统学习的套路,Transformer也只是一个全连接(或者是一维卷积)加Attention的结合体。但是其设计已经足够有创新,因为其抛弃了在NLP中最根本的RNN或者CNN并且取得了非常不错的效果,算法的设计非常精彩,值得每个深度学习的相关人员仔细研究和品位。(2)**<span style="color:red">Transformer的设计最大的带来性能提升的关键是将任意两个单词的距离是1,这对解决NLP中棘手的长期依赖问题是非常有效的。</span>**(3)Transformer不仅仅可以应用在NLP的机器翻译领域,甚至可以不局限于NLP领域,是非常有科研潜力的一个方向。(4)算法的并行性非常好,符合目前的硬件(主要指GPU)环境。 **缺点**:(1)粗暴的抛弃RNN和CNN虽然非常炫技,但是它也使模型丧失了捕捉局部特征的能力,RNN + CNN + Transformer的结合可能会带来更好的效果。(2)**<span style="color:red">Transformer失去的位置信息其实在NLP中非常重要,而论文中在特征向量中加入Position Embedding也只是一个权宜之计,并没有改变Transformer结构上的固有缺陷。</span>**
31.629752
355
0.607076
eng_Latn
0.174727
ff34522d548b236aeb8d9b56ed2b2a68e4e522d3
1,386
md
Markdown
wdk-ddi-src/content/ntifs/nf-ntifs-keinitializemutant.md
tianye606/windows-driver-docs-ddi
23fec97f3ed3a0c99b117543982d34ee592501e7
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/ntifs/nf-ntifs-keinitializemutant.md
tianye606/windows-driver-docs-ddi
23fec97f3ed3a0c99b117543982d34ee592501e7
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/ntifs/nf-ntifs-keinitializemutant.md
tianye606/windows-driver-docs-ddi
23fec97f3ed3a0c99b117543982d34ee592501e7
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NF:ntifs.KeInitializeMutant title: KeInitializeMutant function (ntifs.h) description: Reserved for system use. old-location: ifsk\keinitializemutant.htm tech.root: ifsk ms.assetid: 75c31158-5d9c-465a-bb62-392b85fd8791 ms.date: 04/16/2018 ms.keywords: KeInitializeMutant, KeInitializeMutant function [Installable File System Drivers], ifsk.keinitializemutant, keref_b0f59cc4-6d50-45bc-928c-3c2288ba0f14.xml, ntifs/KeInitializeMutant ms.topic: function f1_keywords: - "ntifs/KeInitializeMutant" req.header: ntifs.h req.include-header: Ntifs.h req.target-type: Windows req.target-min-winverclnt: req.target-min-winversvr: req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: topic_type: - APIRef - kbSyntax api_type: - HeaderDef api_location: - ntifs.h api_name: - KeInitializeMutant product: - Windows targetos: Windows req.typenames: --- # KeInitializeMutant function ## -description The <b>KeInitializeMutant</b> routine is reserved for system use. See <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/wdm/nf-wdm-keinitializemutex">KeInitializeMutex</a>. ## -parameters ### -param Mutant <p>Reserved.</p> ### -param InitialOwner Reserved.
19.25
194
0.727994
kor_Hang
0.257215
ff34d2b0020e684a358e873337e0151d0391a90d
13,794
md
Markdown
docs/zh-cn/contract/token/relay.token.md
xuyp1991/Document
369f9bbf84fa073e72546115ace5452c73519e5a
[ "Apache-2.0" ]
null
null
null
docs/zh-cn/contract/token/relay.token.md
xuyp1991/Document
369f9bbf84fa073e72546115ace5452c73519e5a
[ "Apache-2.0" ]
null
null
null
docs/zh-cn/contract/token/relay.token.md
xuyp1991/Document
369f9bbf84fa073e72546115ace5452c73519e5a
[ "Apache-2.0" ]
null
null
null
# Codex System 系统合约 ## 合约说明 relay.token 合约是Codex映射其他链资产的代币发行合约 ## 1. Action ### 1.1 create 创建代币 创建代币符号 ```cpp void create( account_name issuer, name chain, account_name side_account, action_name side_action, asset maximum_supply ); ``` 参数: - issuer : 发行人 - chain : 链名 - side_account : 链上的合约名 - side_action : 链上转账的操作名 - maximum_supply : 代币及最大供应量 ### 1.2 issue 发行代币 发行一定数量代币到指定账户 ```cpp void issue( name chain, account_name to, asset quantity, string memo ); ``` 参数: - chain : 链名 - to : 接收币的账户 - quantity :增发的币的数量 - memo : 备注 ### 1.3 transfer 代币转账 代币转账操作 ```cpp void transfer( account_name from, account_name to, name chain, asset quantity, string memo ); ``` 参数: - from :转账的账户 - to : 接收币的账户 - chain : 链名 - quantity :转账的币 - memo : 备注 ### 1.4 trade 代币交易 用于调用其他交易合约的功能的action ```cpp void trade( account_name from, account_name to, name chain, asset quantity, trade_type type, string memo); ``` 参数: - from : 转账的账户 - to : 接收币的账户,一般是其他交易合约 - chain : 链名 - quantity : 转账的币 - type : 类型 用于识别是哪个功能 - memo : 备注 ### 1.5 销毁代币 将代币从Codex上销毁,被销毁的代币会分发到对应的链上对应的账户 ```cpp void destroy( name chain, account_name from, asset quantity, string memo ); ``` 参数: - chain : 链名 - from : 销毁代币的账户 - quantity : 销毁的代币 - memo : 备注 ### 1.6 领取铸币分红 领取分红到铸币池 ```cpp void claim(name chain,asset quantity,account_name receiver); ``` 参数: - chain : 链名 - quantity : 领取分红的代币 数量是无效的 - receiver : 领取分红的账户 ## ABI - [relay.token](https://github.com/codexnetwork/codex.relay/blob/develop/contracts/relay.token/relay.token.abi) ```json { "version":"eosio::abi/1.0", "types":[ { "new_type_name":"account_name", "type":"name" }, { "new_type_name":"permission_name", "type":"name" }, { "new_type_name":"action_name", "type":"name" }, { "new_type_name":"transaction_id_type", "type":"checksum256" }, { "new_type_name":"weight_type", "type":"uint16" } ], "structs":[ { "name":"permission_level", "base":"", "fields":[ { "name":"actor", "type":"account_name" }, { "name":"permission", "type":"permission_name" } ] }, { "name":"key_weight", "base":"", "fields":[ { "name":"key", "type":"public_key" }, { "name":"weight", "type":"weight_type" } ] }, { "name":"permission_level_weight", "base":"", "fields":[ { "name":"permission", "type":"permission_level" }, { "name":"weight", "type":"weight_type" } ] }, { "name":"wait_weight", "base":"", "fields":[ { "name":"wait_sec", "type":"uint32" }, { "name":"weight", "type":"weight_type" } ] }, { "name":"authority", "base":"", "fields":[ { "name":"threshold", "type":"uint32" }, { "name":"keys", "type":"key_weight[]" }, { "name":"accounts", "type":"permission_level_weight[]" }, { "name":"waits", "type":"wait_weight[]" } ] }, { "name":"action", "base":"", "fields":[ { "name":"account", "type":"account_name" }, { "name":"name", "type":"action_name" }, { "name":"authorization", "type":"permission_level[]" }, { "name":"data", "type":"bytes" } ] }, { "name":"on", "base":"", "fields":[ { "name":"chain", "type":"name" }, { "name":"block_id", "type":"checksum256" }, { "name":"act", "type":"action" } ] }, { "name":"transfer", "base":"", "fields":[ { "name":"from", "type":"account_name" }, { "name":"to", "type":"account_name" }, { "name":"chain", "type":"name" }, { "name":"quantity", "type":"asset" }, { "name":"memo", "type":"string" } ] }, { "name":"trade", "base":"", "fields":[ { "name":"from", "type":"account_name" }, { "name":"to", "type":"account_name" }, { "name":"chain", "type":"name" }, { "name":"quantity", "type":"asset" }, { "name":"type", "type":"uint64" }, { "name":"memo", "type":"string" } ] }, { "name":"create", "base":"", "fields":[ { 
"name":"issuer", "type":"account_name" }, { "name":"chain", "type":"name" }, { "name":"side_account", "type":"account_name" }, { "name":"side_action", "type":"action_name" } { "name":"maximum_supply", "type":"asset" } ] }, { "name":"issue", "base":"", "fields":[ { "name":"chain", "type":"name" }, { "name":"to", "type":"account_name" }, { "name":"quantity", "type":"asset" }, { "name":"memo", "type":"string" } ] }, { "name":"destroy", "base":"", "fields":[ { "name":"chain", "type":"name" }, { "name":"from", "type":"account_name" }, { "name":"quantity", "type":"asset" }, { "name":"memo", "type":"string" } ] }, { "name":"account", "base":"", "fields":[ { "name":"id", "type":"uint64" }, { "name":"balance", "type":"asset" }, { "name":"chain", "type":"name" }, {"name":"mineage", "type":"int128"}, {"name":"mineage_update_height","type":"uint32"}, {"name":"reward", "type":"asset"} ] }, { "name":"account_next_id", "base":"", "fields":[ { "name":"id", "type":"uint64" }, { "name":"account", "type":"account_name" } ] }, { "name":"reward_mine_info", "base":"", "fields":[ { "name":"total_mineage", "type":"int128" }, { "name":"reward_pool", "type":"asset" }, { "name":"reward_block_num", "type":"int32" } ] }, { "name":"currency_stats", "base":"", "fields":[ { "name":"supply", "type":"asset" }, { "name":"max_supply", "type":"asset" }, { "name":"issuer", "type":"account_name" }, { "name":"chain", "type":"name" }, { "name":"side_account", "type":"account_name" }, { "name":"side_action", "type":"action_name" }, {"name":"total_mineage", "type":"int128"}, {"name":"total_mineage_update_height","type":"uint32"}, {"name":"reward_scope", "type":"uint64"}, {"name":"reward_size", "type":"int32"} ] }, { "name":"reward_currency", "base":"", "fields":[ { "name":"id", "type":"uint64" }, { "name":"chain", "type":"name" }, { "name":"supply", "type":"asset" }, { "name":"reward_now", "type":"bool" } ] }, { "name":"addreward", "base":"", "fields":[ { "name":"chain", "type":"name" }, { "name":"supply", "type":"asset" }, { "name":"reward_now", "type":"int32" } ] }, { "name":"rewardmine", "base":"", "fields":[ { "name":"quantity", "type":"asset" } ] }, { "name":"claim", "base":"", "fields":[ { "name":"chain", "type":"name" }, { "name":"quantity", "type":"asset" }, { "name":"receiver", "type":"account_name" } ] }, { "name":"settlemine", "base":"", "fields":[ { "name":"system_account", "type":"account_name" } ] }, { "name":"activemine", "base":"", "fields":[ { "name":"system_account", "type":"account_name" } ] } ], "actions":[ { "name":"on", "type":"on", "ricardian_contract":"" }, { "name":"create", "type":"create", "ricardian_contract":"" }, { "name":"issue", "type":"issue", "ricardian_contract":"" }, { "name":"destroy", "type":"destroy", "ricardian_contract":"" }, { "name":"transfer", "type":"transfer", "ricardian_contract":"" }, { "name":"trade", "type":"trade", "ricardian_contract":"" }, { "name":"addreward", "type":"addreward", "ricardian_contract":"" }, { "name":"rewardmine", "type":"rewardmine", "ricardian_contract":"" }, { "name":"claim", "type":"claim", "ricardian_contract":"" }, { "name":"settlemine", "type":"settlemine", "ricardian_contract":"" }, { "name":"activemine", "type":"activemine", "ricardian_contract":"" } ], "tables":[ { "name":"accounts", "type":"account", "index_type":"i64", "key_names":[ "currency" ], "key_types":[ "uint64" ] }, { "name":"accountid", "type":"account_next_id", "index_type":"i64", "key_names":[ "account" ], "key_types":[ "account_name" ] }, { "name":"stat", "type":"currency_stats", 
"index_type":"i64", "key_names":[ "currency" ], "key_types":[ "uint64" ] }, { "name":"reward", "type":"reward_currency", "index_type":"i64", "key_names":[ "currency" ], "key_types":[ "uint64" ] }, { "name":"minereward", "type":"reward_mine_info", "index_type":"i64", "key_names":[ "blocknum" ], "key_types":[ "uint64" ] } ], "ricardian_clauses":[ ], "error_messages":[ ], "abi_extensions":[] } ```
20.255507
111
0.323691
eng_Latn
0.324926
ff3580cfff152a77ea8332f1c518a078e9b66851
29,078
md
Markdown
translation-ja/queues.md
OUDON/ja-docs-7.x
bae3ce88bb9a77e0dbf6080a2d74db553324f73a
[ "MIT" ]
null
null
null
translation-ja/queues.md
OUDON/ja-docs-7.x
bae3ce88bb9a77e0dbf6080a2d74db553324f73a
[ "MIT" ]
null
null
null
translation-ja/queues.md
OUDON/ja-docs-7.x
bae3ce88bb9a77e0dbf6080a2d74db553324f73a
[ "MIT" ]
null
null
null
# キュー - [イントロダクション](#introduction) - [接続 Vs. キュー](#connections-vs-queues) - [ドライバの注意事項と要件](#driver-prerequisites) - [ジョブの作成](#creating-jobs) - [ジョブクラスの生成](#generating-job-classes) - [クラス構成](#class-structure) - [ジョブミドルウェア](#job-middleware) - [ジョブのディスパッチ](#dispatching-jobs) - [遅延ディスパッチ](#delayed-dispatching) - [同期ディスパッチ](#synchronous-dispatching) - [ジョブのチェーン](#job-chaining) - [キューと接続のカスタマイズ](#customizing-the-queue-and-connection) - [最大試行回数/タイムアウト値の指定](#max-job-attempts-and-timeout) - [レート制限](#rate-limiting) - [エラー処理](#error-handling) - [クロージャのキュー投入](#queueing-closures) - [キューワーカの実行](#running-the-queue-worker) - [キュープライオリティ](#queue-priorities) - [キューワーカとデプロイ](#queue-workers-and-deployment) - [ジョブの期限切れとタイムアウト](#job-expirations-and-timeouts) - [Supervisor設定](#supervisor-configuration) - [失敗したジョブの処理](#dealing-with-failed-jobs) - [ジョブ失敗後のクリーンアップ](#cleaning-up-after-failed-jobs) - [ジョブ失敗イベント](#failed-job-events) - [失敗したジョブの再試行](#retrying-failed-jobs) - [不明なモデルの無視](#ignoring-missing-models) - [ジョブイベント](#job-events) <a name="introduction"></a> ## イントロダクション > {tip} 現在、LaravelはRedisで動作するキューのための美しいダッシュボードと設定システムを備えたHorizonを提供しています。詳細は、[Horizonのドキュメント](/docs/{{version}}/horizon)で確認してください。 Laravelのキューサービスは、Beanstalk、Amazon SQS、Redis、さらにはリレーショナル・データベースなどさまざまなキューバックエンドに対し共通のAPIを提供しています。キューによりメール送信のような時間を費やす処理を遅らせることが可能です。時間のかかるタスクを遅らせることで、よりアプリケーションのリクエストをドラマチックにスピードアップできます。 キューの設定ファイルは`config/queue.php`です。このファイルにはフレームワークに含まれているそれぞれのドライバーへの接続設定が含まれています。それにはデータベース、[Beanstalkd](https://beanstalkd.github.io/)、[Amazon SQS](https://aws.amazon.com/sqs)、[Redis](https://redis.io)、ジョブが即時に実行される同期(ローカル用途)ドライバーが含まれています。 `null`キュードライバはキューされたジョブが実行されないように、破棄します。 <a name="connections-vs-queues"></a> ### 接続 Vs. キュー Laravelのキューへ取り掛かる前に、「接続」と「キュー」の区別を理解しておくことが重要です。`config/queue.php`設定ファイルの中には、`connections`設定オプションがあります。このオプションはAmazon SQS、Beanstalk、Redisなどのバックエンドサービスへの個々の接続を定義します。しかし、どんな指定されたキュー接続も、複数の「キュー」を持つことができます。「キュー」とはキュー済みのジョブのスタック、もしくは積み重ねのことです。 `queue`接続ファイルの`queue`属性を含んでいる、各接続設定例に注目してください。ジョブがディスパッチされ、指定された接続へ送られた時にのデフォルトキューです。言い換えれば、どのキューへディスパッチするのか明確に定義していないジョブをディスパッチすると、そのジョブは接続設定の`queue`属性で定義したキューへ送られます。 // このジョブはデフォルトキューへ送られる Job::dispatch(); // このジョブは"emails"キューへ送られる Job::dispatch()->onQueue('emails'); あるアプリケーションでは複数のキューへジョブを送る必要はなく、代わりに1つのシンプルなキューが適しているでしょう。しかし、複数のキューへジョブを送ることは優先順位づけしたい、もしくはジョブの処理を分割したいアプリケーションでとくに便利です。Laravelのキューワーカはプライオリティによりどのキューで処理するかを指定できるからです。たとえば、ジョブを`high`キューへ送れば、より高い処理プライオリティのワーカを実行できます。 php artisan queue:work --queue=high,default <a name="driver-prerequisites"></a> ### ドライバの注意事項と要件 #### データベース `database`キュードライバを使用するには、ジョブを記録するためのデータベーステーブルが必要です。このテーブルを作成するマイグレーションは`queue:table` Artisanコマンドにより生成できます。マイグレーションが生成されたら、`migrate`コマンドでデータベースをマイグレートしてください。 php artisan queue:table php artisan migrate #### Redis `redis`キュードライバーを使用するには、`config/database.php`設定ファイルでRedisのデータベースを設定する必要があります。 **Redisクラスタ** Redisキュー接続でRedisクラスタを使用している場合は、キュー名に[キーハッシュタグ](https://redis.io/topics/cluster-spec#keys-hash-tags)を含める必要があります。これはキューに指定した全Redisキーが同じハッシュスロットに確実に置かれるようにするためです。 'redis' => [ 'driver' => 'redis', 'connection' => 'default', 'queue' => '{default}', 'retry_after' => 90, ], **ブロッキング** Redisキューを使用する場合、ワーカのループの繰り返しとRedisデータベースに対する再ポールの前に、ジョブを実行可能にするまでどの程度待つのかを指定する、`block_for`設定オプションを使うことができます。 新しいジョブを得るため、Redisデータベースに連続してポールしてしまうより、キューの負荷にもとづきより効率的になるよう、この値を調整してください。たとえば、ジョブを実行可能にするまで、ドライバーが5秒間ブロックするように指示するには、値に`5`をセットします。 'redis' => [ 'driver' => 'redis', 'connection' => 'default', 'queue' => 'default', 'retry_after' => 90, 'block_for' => 5, ], > {note} 
`block_for`へ`0`を設定するとジョブが利用可能になるまで、キューワーカを無制限にブロックしてしまいます。これはさらに、次のジョブが処理されるまで、`SIGTERM`のようなシグナルが処理されるのも邪魔してしまいます。 #### 他のドライバの要件 以下の依存パッケージがリストしたキュードライバを使用するために必要です。 <div class="content-list" markdown="1"> - Amazon SQS: `aws/aws-sdk-php ~3.0` - Beanstalkd: `pda/pheanstalk ~4.0` - Redis: `predis/predis ~1.0`、もしくはphpredis PHP拡張 </div> <a name="creating-jobs"></a> ## ジョブの作成 <a name="generating-job-classes"></a> ### ジョブクラスの生成 キュー投入可能なアプリケーションの全ジョブは、デフォルトで`app/Jobs`ディレクトリへ保存されます。`app/Jobs`ディレクトリが存在しなくても、`make:job` Artisanコマンドの実行時に生成されます。新しいキュージョブをArtisan CLIで生成できます。 php artisan make:job ProcessPodcast 非同期で実行するため、ジョブをキューへ投入することをLaravelへ知らせる、`Illuminate\Contracts\Queue\ShouldQueue`インターフェイスが生成されたクラスには実装されます。 > {tip} Job stubs may be customized using [stub publishing](/docs/{{version}}/artisan#stub-customization) <a name="class-structure"></a> ### クラス構成 ジョブクラスは通常とてもシンプルで、キューによりジョブが処理される時に呼び出される、`handle`メソッドのみで構成されています。手始めに、ジョブクラスのサンプルを見てみましょう。この例は、ポッドキャストの公開サービスを管理し、公開前にアップロードしたポッドキャストファイルを処理する必要があるという仮定です。 <?php namespace App\Jobs; use App\AudioProcessor; use App\Podcast; use Illuminate\Bus\Queueable; use Illuminate\Contracts\Queue\ShouldQueue; use Illuminate\Foundation\Bus\Dispatchable; use Illuminate\Queue\InteractsWithQueue; use Illuminate\Queue\SerializesModels; class ProcessPodcast implements ShouldQueue { use Dispatchable, InteractsWithQueue, Queueable, SerializesModels; protected $podcast; /** * 新しいジョブインスタンスの生成 * * @param Podcast $podcast * @return void */ public function __construct(Podcast $podcast) { $this->podcast = $podcast; } /** * ジョブの実行 * * @param AudioProcessor $processor * @return void */ public function handle(AudioProcessor $processor) { // アップロード済みポッドキャストの処理… } } この例中、キュージョブのコンテナーに直接[Eloquentモデル](/docs/{{version}}/eloquent)が渡せることに注目してください。ジョブが使用している`SerializesModels`トレイトによりEloquentモデルとロード済みのリレーションは優雅にシリアライズされ、ジョブが処理される時にアンシリアライズされます。キュー投入されたジョブがコンテナでEloquentモデルを受け取ると、モデルの識別子のみシリアライズされています。ジョブが実際に処理される時、キューシステムは自動的にデータベースから完全なモデルインスタンスとロード済みだったリレーションを再取得します。これらはすべてアプリケーションの完全な透過性のためであり、Eloquentモデルインスタンスをシリアライズするときに発生する問題を防ぐことができます。 `handle`メソッドはキューによりジョブが処理されるときに呼びだされます。ジョブの`handle`メソッドにタイプヒントにより依存を指定できることに注目してください。Laravelの[サービスコンテナ](/docs/{{version}}/container)が自動的に依存を注入します。 もし、どのようにコンテナが依存を`handle`メソッドへ注入するかを完全にコントロールしたい場合は、コンテナの`bindMethod`メソッドを使用します。`bindMethod`メソッドは、ジョブとコンテナを受け取るコールバックを引数にします。コールバックの中で、お望みのまま自由に`handle`メソッドを起動できます。通常は、[サービスプロバイダ](/docs/{{version}}/providers)からこのメソッドを呼び出すべきでしょう。 use App\Jobs\ProcessPodcast; $this->app->bindMethod(ProcessPodcast::class.'@handle', function ($job, $app) { return $job->handle($app->make(AudioProcessor::class)); }); > {note} Rawイメージコンテンツのようなバイナリデータは、キュージョブへ渡す前に、`base64_encode`関数を通してください。そうしないと、そのジョブはキューへ設置する前にJSONへ正しくシリアライズされません。 #### リレーションの処理 ロード済みのリレーションもシリアライズされるため、シリアライズ済みのジョブ文字列は極めて大きくなり得ます。リレーションがシリアライズされるのを防ぐには、プロパティの値を設定するときにモデルの`withoutRelations`メソッドを呼び出してください。このメソッドは、ロード済みのリレーションを外したモデルのインスタンスを返します。 /** * 新しいジョブインスタンスの生成 * * @param \App\Podcast $podcast * @return void */ public function __construct(Podcast $podcast) { $this->podcast = $podcast->withoutRelations(); } <a name="job-middleware"></a> ### ジョブミドルウェア ジョブミドルウェアはキュー済みジョブの実行周りのカスタムロジックをラップできるようにし、ジョブ自身の定形コードを減らします。例として、5分毎に1ジョブのみを処理するために、LaravelのRedisレート制限機能を活用する、以下の`handle`メソッドを考えてみましょう。 /** * ジョブの実行 * * @return void */ public function handle() { Redis::throttle('key')->block(0)->allow(1)->every(5)->then(function () { info('Lock obtained...'); // ジョブの処理… }, function () { // ロック取得ができない… return $this->release(5); }); } 
このコードは有効ですが、Redisレート制限ロジックが散らかっているため、`handle`メソッドの構造はうるさくなりました。さらに、レート制限をかけたい他のジョブでもこのレート制限ロジックが重複してしまいます。 handleメソッドの中でレート制限をする代わりに、レート制限を処理するジョブミドルウェアを定義できます。Laravelはジョブミドルウェアの置き場所を決めていないため、アプリケーションのどこにでもジョブミドルウェアを設置できます。この例では、`app/Jobs/Middleware`ディレクトリへミドルウェアを設置しています。 <?php namespace App\Jobs\Middleware; use Illuminate\Support\Facades\Redis; class RateLimited { /** * キュー済みジョブの処理 * * @param mixed $job * @param callable $next * @return mixed */ public function handle($job, $next) { Redis::throttle('key') ->block(0)->allow(1)->every(5) ->then(function () use ($job, $next) { // ロックを取得した場合の処理… $next($job); }, function () use ($job) { // ロックを取得できなかった処理… $job->release(5); }); } } ご覧の通り、[ルートミドルウェア](/docs/{{version}}/middleware)と同様に、ジョブミドルウェアも処理するジョブを受け取り、コールバックは処理を続けるため呼び出されます。 ジョブミドルウェアを作成したら、ジョブの`middleware`メソッドから返すことにより、指定します。このメソッドはジョブのスカフォールドを行う`make:job` Artisanコマンドでは作成されないため、ジョブクラスの定義に自身で追加してください。 use App\Jobs\Middleware\RateLimited; /** * このジョブが通過する必要のあるミドルウェアの取得 * * @return array */ public function middleware() { return [new RateLimited]; } <a name="dispatching-jobs"></a> ## ジョブのディスパッチ ジョブクラスを書き上げたら、ジョブクラス自身の`dispatch`メソッドを使い、ディスパッチできます。`dispatch`メソッドへ渡す引数は、ジョブのコンストラクタへ渡されます。 <?php namespace App\Http\Controllers; use App\Http\Controllers\Controller; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; class PodcastController extends Controller { /** * 新ポッドキャストの保存 * * @param Request $request * @return Response */ public function store(Request $request) { // ポッドキャスト作成… ProcessPodcast::dispatch($podcast); } } 条件によりジョブをディスパッチする場合は、`dispatchIf`か`dispatchUnless`を使います。 ProcessPodcast::dispatchIf($accountActive === true, $podcast); ProcessPodcast::dispatchUnless($accountSuspended === false, $podcast); <a name="delayed-dispatching"></a> ### 遅延ディスパッチ キュー投入されたジョブの実行を遅らせたい場合は、ジョブのディスパッチ時に`delay`メソッドを使います。例として、ディスパッチ後10分経つまでは、処理が行われないジョブを指定してみましょう。 <?php namespace App\Http\Controllers; use App\Http\Controllers\Controller; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; class PodcastController extends Controller { /** * 新ポッドキャストの保存 * * @param Request $request * @return Response */ public function store(Request $request) { // ポッドキャスト作成… ProcessPodcast::dispatch($podcast) ->delay(now()->addMinutes(10)); } } > {note} Amazon SQSキューサービスは、最大15分の遅延時間です。 #### レスポンスをブラウザへ送信後のディスパッチ 別の方法として、ユーザーのブラウザにレスポンスを送り終えるまで、ジョブのディスパッチを遅らせる`dispatchAfterResponse`メソッドがあります。これによりキューされたジョブがまだ実行中であっても、ユーザーはアプリケーションをすぐ使い始めることができます。この方法は通常、メール送信のようなユーザーを数秒待たせるジョブにのみ使うべきでしょう。 use App\Jobs\SendNotification; SendNotification::dispatchAfterResponse(); `dispatch`でクロージャをディスパッチし、`afterResponse`メソッドをチェーンすることで、ブラウザにレスポンスを送り終えたらクロージャを実行することも可能です。 use App\Mail\WelcomeMessage; use Illuminate\Support\Facades\Mail; dispatch(function () { Mail::to('[email protected]')->send(new WelcomeMessage); })->afterResponse(); <a name="synchronous-dispatching"></a> ### 同期ディスパッチ ジョブを即時(同期的)にディスパッチしたい場合は、`dispatchNow`メソッドを使用します。このメソッドを使用する場合、そのジョブはキューされずに現在のプロセスで即時実行されます。 <?php namespace App\Http\Controllers; use App\Http\Controllers\Controller; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; class PodcastController extends Controller { /** * 新ポッドキャストの保存 * * @param Request $request * @return Response */ public function store(Request $request) { // ポッドキャスト作成… ProcessPodcast::dispatchNow($podcast); } } <a name="job-chaining"></a> ### ジョブチェーン 主要なジョブが正しく実行し終えた後に連続して実行する必要がある、キュー投入ジョブのリストをジョブチェーンで指定できます。一連のジョブの内、あるジョブが失敗すると、残りのジョブは実行されません。キュー投入ジョブチェーンを実行するには、dispatchableジョブどれかに対し、`withChain`メソッドを使用します。 ProcessPodcast::withChain([ new 
OptimizePodcast, new ReleasePodcast ])->dispatch(); ジョブクラスインスタンスのチェーンだけでなく、クロージャもチェーンできます。 ProcessPodcast::withChain([ new OptimizePodcast, new ReleasePodcast, function () { Podcast::update(...); }, ])->dispatch(); > {note} ジョブの削除に`$this->delete()`メソッドを使用しても、チェーンしたジョブの処理を停止できません。チェーンの実行を停止するのは、チェーン中のジョブが失敗した場合のみです。 #### チェーンの接続とキュー ジョブチェーンで使用するデフォルトの接続とキューを指定したい場合は、`allOnConnection`と`allOnQueue`メソッドを使用します。これらのメソッドは、キューされたジョブへ別の接続/キューが明確に指定されていない限り使用される、接続とキューを設定します。 ProcessPodcast::withChain([ new OptimizePodcast, new ReleasePodcast ])->dispatch()->allOnConnection('redis')->allOnQueue('podcasts'); <a name="customizing-the-queue-and-connection"></a> ### キューと接続のカスタマイズ #### 特定キューへのディスパッチ ジョブを異なるキューへ投入することで「カテゴライズ」できますし、さまざまなキューにいくつのワーカを割り当てるかと言うプライオリティ付けもできます。これはキー設定ファイルで定義した、別々のキュー「接続」へのジョブ投入を意味してはいないことに気をつけてください。一つの接続内の複数のキューを指定する方法です。 <?php namespace App\Http\Controllers; use App\Http\Controllers\Controller; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; class PodcastController extends Controller { /** * 新ポッドキャストの保存 * * @param Request $request * @return Response */ public function store(Request $request) { // ポッドキャスト作成… ProcessPodcast::dispatch($podcast)->onQueue('processing'); } } #### 特定の接続へのディスパッチ 複数のキュー接続を利用するなら、ジョブを投入するキューを指定できます。ジョブをディスパッチする時に、`onConnection`メソッドで接続を指定します。 <?php namespace App\Http\Controllers; use App\Http\Controllers\Controller; use App\Jobs\ProcessPodcast; use Illuminate\Http\Request; class PodcastController extends Controller { /** * 新ポッドキャストの保存 * * @param Request $request * @return Response */ public function store(Request $request) { // ポッドキャスト作成… ProcessPodcast::dispatch($podcast)->onConnection('sqs'); } } ジョブを投入する接続とキューを指定するために、`onConnection`と`onQueue`メソッドをチェーンすることもできます。 ProcessPodcast::dispatch($podcast) ->onConnection('sqs') ->onQueue('processing'); <a name="max-job-attempts-and-timeout"></a> ### 最大試行回数/タイムアウト値の指定 #### 最大試行回数 ジョブが試行する最大回数を指定するアプローチの一つは、Artisanコマンドラインへ`--tries`スイッチ使う方法です。 php artisan queue:work --tries=3 しかし、より粒度の高いアプローチは、ジョブクラス自身に最大試行回数を定義する方法です。これはコマンドラインで指定された値より、優先度が高くなっています。 <?php namespace App\Jobs; class ProcessPodcast implements ShouldQueue { /** * 最大試行回数 * * @var int */ public $tries = 5; } <a name="time-based-attempts"></a> #### 時間ベースの試行 失敗するまでジョブの試行を何度認めるかを定義する代わりに、ジョブのタイムアウト時間を定義することもできます。これにより、指定した時間内で複数回ジョブを試行します。タイムアウト時間を定義するには、ジョブクラスに`retryUntil`メソッドを追加します。 /** * タイムアウトになる時間を決定 * * @return \DateTime */ public function retryUntil() { return now()->addSeconds(5); } > {tip} キューイベントリスナでも、`retryUntil`メソッドを定義できます。 #### Max例外 ジョブを何度も再試行するように指定している場合、指定した回数の例外が発生したことをきっかけにしてその再試行を失敗として取り扱いたい場合も起きると思います。そうするにはジョブクラスに`maxExceptions`プロパティを定義してください。 <?php namespace App\Jobs; class ProcessPodcast implements ShouldQueue { /** * 最大試行回数 * * @var int */ public $tries = 25; /** * 失敗と判定するまで許す最大例外数 * * @var int */ public $maxExceptions = 3; /** * ジョブの実行 * * @return void */ public function handle() { Redis::throttle('key')->allow(10)->every(60)->then(function () { // ロックが取得でき、ポッドキャストの処理を行う… }, function () { // ロックが取得できなかった return $this->release(10); }); } } この例の場合、アプリケーションがRedisのロックを取得できない場合は、そのジョブは10秒でリリースされます。そして、25回再試行を継続します。しかし発生した例外を3回処理しなかった場合、ジョブは失敗します。 #### タイムアウト > {note} `timeout`機能はPHP7.1以上で、`pcntl` PHP拡張に最適化しています。 同様に、ジョブの最大実行秒数を指定するために、Artisanコマンドラインに`--timeout`スイッチを指定できます。 php artisan queue:work --timeout=30 しかしながら、最大実行秒数をジョブクラス自身に定義することもできます。ジョブにタイムアウト時間を指定すると、コマンドラインに指定されたタイムアウトよりも優先されます。 <?php namespace App\Jobs; class ProcessPodcast implements ShouldQueue { /** * ジョブがタイムアウトになるまでの秒数 * * @var int */ public $timeout = 120; 
} <a name="rate-limiting"></a> ### レート制限 > {note} この機能が動作するには、アプリケーションで[Redisサーバ](/docs/{{version}}/redis)が利用できる必要があります。 アプリケーションでRedisを利用しているなら、時間と回数により、キュージョブを制限できます。この機能は、キュージョブがレート制限のあるAPIに関連している場合に役立ちます。 `throttle`メソッドの使用例として、指定したジョブタイプを60秒毎に10回だけ実行できるように制限しましょう。ロックできなかった場合、あとで再試行できるように、通常はジョブをキューへ戻す必要があります。 Redis::throttle('key')->allow(10)->every(60)->then(function () { // ジョブのロジック処理… }, function () { // ロックできなかった場合の処理… return $this->release(10); }); > {tip} 上記の例で`key`は、レート制限したいジョブのタイプを表す一意の認識文字列です。たとえば、ジョブのクラス名と(そのジョブに含まれているならば)EloquentモデルのIDを元に、制限できます。 > {note} レート制限に引っかかったジョブをキューへ戻す(release)する場合も、ジョブの総試行回数(attempts)は増加します。 もしくは、ジョブを同時に処理するワーカの最大数を指定可能です。これは、一度に一つのジョブが更新すべきリソースを変更するキュージョブを使用する場合に、役立ちます。`funnel`メソッドの使用例として、一度に1ワーカのみにより処理される、特定のタイプのジョブを制限してみましょう。 Redis::funnel('key')->limit(1)->then(function () { // ジョブのロジック処理… }, function () { // ロックできなかった場合の処理… return $this->release(10); }); > {tip} レート制限を使用する場合、実行を成功するまでに必要な試行回数を決めるのは、難しくなります。そのため、レート制限は[時間ベースの試行](#time-based-attempts)と組み合わせるのが便利です。 <a name="error-handling"></a> ### エラー処理 ジョブの処理中に例外が投げられると、ジョブは自動的にキューへ戻され、再試行されます。ジョブはアプリケーションが許している最大試行回数に達するまで、連続して実行されます。最大試行回数は`queue:work` Artisanコマンドへ`--tries`スイッチを使い定義されます。もしくは、ジョブクラス自身に最大試行回数を定義することもできます。キューワーカの実行についての情報は、[以降](#running-the-queue-worker)で説明します。 <a name="queueing-closures"></a> ## クロージャのキュー投入 ジョブクラスをキューへディスパッチする代わりに、クロージャもディスパッチできます。これは現在のリクエストサイクル外で実行する必要のある、シンプルなタスクを扱うのに適しています。 $podcast = App\Podcast::find(1); dispatch(function () use ($podcast) { $podcast->publish(); }); クロージャをキューへディスパッチすると、処理中に改変されないように、クロージャのコード内容は暗号化署名されます。 <a name="running-the-queue-worker"></a> ## キューワーカの実行 Laravelには、キューに投入された新しいジョブを処理する、キューワーカも含まれています。`queue:work` Artisanコマンドを使いワーカを実行できます。`queue:work`コマンドが起動したら、皆さんが停止するか、ターミナルを閉じるまで実行し続けることに注意してください。 php artisan queue:work > {tip} バックグランドで`queue:work`プロセスを永続的に実行し続けるには、キューワーカが止まらずに実行し続けていることを確実にするため、[Supervisor](#supervisor-configuration)のようなプロセスモニタを利用する必要があります。 キューワーカは長時間起動するプロセスで、メモリ上にアプリケーション起動時の状態を保存していることを記憶にとどめてください。そのため、開発段階では[キューワーカの再起動](#queue-workers-and-deployment)を確実に実行してください。付け加えて、アプリケーションにより生成、もしくは変更された静的な状態は、ジョブ間で自動的にリセットされないことも覚えておきましょう。 別の方法として、`queue:listen`コマンドを実行することもできます。`queue:listen`コマンドを使えば更新したコードをリロード、もしくはアプリケーションの状態をリセットしたい場合に、手動でワーカをリスタートする必要がなくなります。しかし、このコマンドは`queue:work`ほど効率はよくありません。 php artisan queue:listen #### 接続とキューの指定 どのキュー接続をワーカが使用するのかを指定できます。`work`コマンドで指定する接続名は、`config/queue.php`設定ファイルで定義されている接続と対応します。 php artisan queue:work redis 指定した接続の特定のキューだけを処理するように、さらにキューワーカをカスタマイズすることもできます。たとえば、メールの処理をすべて、`redis`キュー接続の`emails`キューで処理する場合、以下のコマンドでキューの処理だけを行うワーカを起動できます。 php artisan queue:work redis --queue=emails #### ジョブを一つ処理する `--once`オプションは、ワーカにキュー中のジョブをひとつだけ処理するように指示します。 php artisan queue:work --once #### キューされたすべてのジョブを処理し、終了する `--stop-when-empty`オプションは、すべてのジョブを処理してから終了するように、ワーカへ指示するために使用します。このオプションは、LaravelキューがDockerコンテナ中で動作していて、キューが空になった後でコンテナをシャットダウンしたい場合に便利です。 php artisan queue:work --stop-when-empty #### リソースの考察 デーモンキューワーカは各ジョブを処理する前に、フレームワークを「再起動」しません。そのため、各ジョブが終了したら、大きなリソースを開放してください。たとえば、GDライブラリでイメージ処理を行ったら、終了前に`imagedestroy`により、メモリを開放してください。 <a name="queue-priorities"></a> ### キュープライオリティ ときどき、キューをどのように処理するかをプライオリティ付けしたいことも起きます。たとえば、`config/queue.php`で`redis`接続のデフォルト`queue`を`low`に設定したとしましょう。しかし、あるジョブを`high`プライオリティでキューへ投入したい場合です。 dispatch((new Job)->onQueue('high')); `low`キュー上のジョブの処理が継続される前に、全`high`キュージョブが処理されることを確実にするには、`work`コマンドのキュー名にコンマ区切りのリストで指定してください。 php artisan queue:work --queue=high,low <a name="queue-workers-and-deployment"></a> ### キューワーカとデプロイ 
キューワーカは長時間起動プロセスであるため、リスタートしない限りコードの変更を反映しません。ですから、キューワーカを使用しているアプリケーションをデプロイする一番シンプルな方法は、デプロイ処理の間、ワーカをリスタートすることです。`queue:restart`コマンドを実行することで、全ワーカを穏やかに再起動できます。 php artisan queue:restart このコマンドは存在しているジョブが失われないように、現在のジョブの処理が終了した後に、全キューワーカーへ穏やかに「終了する(die)」よう指示します。キューワーカは`queue:restart`コマンドが実行されると、終了するわけですから、キュージョブを自動的に再起動する、Supervisorのようなプロセスマネージャーを実行すべきでしょう。 > {tip} このコマンドはリスタートシグナルを保存するために、[キャッシュ](/docs/{{version}}/cache)を使用します。そのため、この機能を使用する前に、アプリケーションのキャッシュドライバーが、正しく設定されていることを確認してください。 <a name="job-expirations-and-timeouts"></a> ### ジョブの期限切れとタイムアウト #### ジョブの有効期限 `config/queue.php`設定ファイルの中で、各キュー接続は`retry_after`オプションを定義しています。このオプションは処理中のジョブを再試行するまで、キュー接続を何秒待つかを指定します。たとえば、`retry_after`の値が`90`であれば、そのジョブは90秒の間に削除されることなく処理され続ければ、キューへ再投入されます。通常、`retry_after`値はジョブが処理を妥当に完了するまでかかるであろう秒数の最大値を指定します。 > {note} `retry_after`を含まない唯一の接続は、Amazon SQSです。SQSはAWSコンソールで管理する、[Default Visibility Timeout](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html)を元にリトライを行います。 #### ワーカタイムアウト `queue:work` Artisanコマンドは`--timeout`オプションも提供しています。`--timeout`オプションはLaravelキューマスタプロセスが、ジョブを処理する子のキューワーカをKillするまでどのくらい待つかを指定します。さまざまな理由により、時に子のキュープロセスは「フリーズ」します。`--timeout`オプションは、指定した実行時間を過ぎたフリーズプロセスを取り除きます。 php artisan queue:work --timeout=60 `retry_after`設定オプションと`--timeout` CLIオプションは異なります。しかし、確実にジョブを失わずに、一度だけ処理を完了できるよう共に働きます。 > {note} `--timeout`値は、最低でも数秒`retry_after`設定値よりも短くしてください。これにより、与えられたジョブを処理するワーカが、ジョブのリトライ前に確実にkillされます。`--timeout`オプションを`retry_after`設定値よりも長くすると、ジョブが2度実行されるでしょう。 #### ワーカスリープ時間 ジョブがキュー上に存在しているとき、ワーカは各ジョブ間にディレイを取らずに実行し続けます。`sleep`オプションは、新しく処理するジョブが存在しない時に、どの程度「スリープ」するかを秒単位で指定します。スリープ中、ワーカは新しいジョブを処理しません。ジョブはワーカが目を覚ました後に処理されます。 php artisan queue:work --sleep=3 <a name="supervisor-configuration"></a> ## Supervisor設定 #### Supervisorのインストール SupervisorはLinuxオペレーティングシステムのプロセスモニタで、`queue:work`プロセスが落ちると自動的に起動します。UbuntuにSupervisorをインストールするには、次のコマンドを使ってください。 sudo apt-get install supervisor > {tip} Supervisorの設定に圧倒されそうならば、Laravelプロジェクトのために、Supervisorを自動的にインストールし、設定する[Laravel Forge](https://forge.laravel.com)の使用を考慮してください。 #### Supervisorの設定 Supervisorの設定ファイルは、通常`/etc/supervisor/conf.d`ディレクトリに保存します。このディレクトリの中には、Supervisorにどのようにプロセスを監視するのか指示する設定ファイルを好きなだけ設置できます。たとえば、`laravel-worker.conf`ファイルを作成し、`queue:work`プロセスを起動、監視させてみましょう。 [program:laravel-worker] process_name=%(program_name)s_%(process_num)02d command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3 autostart=true autorestart=true user=forge numprocs=8 redirect_stderr=true stdout_logfile=/home/forge/app.com/worker.log stopwaitsecs=3600 この例の`numprocs`ディレクティブは、Supervisorに全部で8つのqueue:workプロセスを実行・監視し、落ちている時は自動的に再起動するよう指示しています。`command`ディレクティブの`queue:work sqs`の部分を変更し、希望のキュー接続に合わせてください。 > {note} 一番時間がかかるジョブが消費する秒数より大きな値を`stopwaitsecs`へ必ず指定してください。そうしないと、Supervisorは処理が終了する前に、そのジョブをキルしてしまうでしょう。 #### Supervisorの起動 設定ファイルができたら、Supervisorの設定を更新し起動するために以下のコマンドを実行してください。 sudo supervisorctl reread sudo supervisorctl update sudo supervisorctl start laravel-worker:* Supervisorの詳細情報は、[Supervisorドキュメント](http://supervisord.org/index.html)で確認してください。 <a name="dealing-with-failed-jobs"></a> ## 失敗したジョブの処理 時より、キューされたジョブは失敗します。心配ありません。物事は計画通りに進まないものです。Laravelではジョブを再試行する最大回数を指定できます。この回数試行すると、そのジョブは`failed_jobs`データベーステーブルに挿入されます。`failed_jobs`テーブルのマイグレーションを生成するには`queue:failed-table`コマンドを実行してください。 php artisan queue:failed-table php artisan migrate 次に[キューワーカ](#running-the-queue-worker)の実行時、`queue:work`コマンドに`--tries`スイッチを付け、最大試行回数を指定します。`--tries`オプションに値を指定しないと、ジョブは1回のみ試行します。 php artisan queue:work redis --tries=3 
さらに、`--delay`オプションを使用し、失敗してから再試行するまでに何秒待てばよいかをLaravelへ指定できます。デフォルトでは、時間を置かずに再試行します。 php artisan queue:work redis --tries=3 --delay=3 ジョブごとに失敗したジョブの再試行までの遅延を設定したい場合は、キュー投入するジョブクラスで`retryAfter`プロパティを定義してください。 /** * ジョブを再試行するまでに待つ秒数 * * @var int */ public $retryAfter = 3; <a name="cleaning-up-after-failed-jobs"></a> ### ジョブ失敗後のクリーンアップ 失敗時にジョブ特定のクリーンアップを実行するため、ジョブクラスで`failed`メソッドを直接定義できます。これはユーザーに警告を送ったり、ジョブの実行アクションを巻き戻すために最適な場所です。`failed`メソッドには、そのジョブを落とすことになった例外(`Exception`)が渡されます。 <?php namespace App\Jobs; use App\AudioProcessor; use App\Podcast; use Exception; use Illuminate\Bus\Queueable; use Illuminate\Contracts\Queue\ShouldQueue; use Illuminate\Queue\InteractsWithQueue; use Illuminate\Queue\SerializesModels; class ProcessPodcast implements ShouldQueue { use InteractsWithQueue, Queueable, SerializesModels; protected $podcast; /** * 新しいジョブインスタンスの生成 * * @param Podcast $podcast * @return void */ public function __construct(Podcast $podcast) { $this->podcast = $podcast; } /** * ジョブの実行 * * @param AudioProcessor $processor * @return void */ public function handle(AudioProcessor $processor) { // アップロード済みポッドキャストの処理… } /** * 失敗したジョブの処理 * * @param Exception $exception * @return void */ public function failed(Exception $exception) { // 失敗の通知をユーザーへ送るなど… } } > {note} `failed`メソッドは、ジョブが`dispatchNow`メソッドでディスパッチされた場合には呼び出されません。 <a name="failed-job-events"></a> ### ジョブ失敗イベント ジョブが失敗した時に呼び出されるイベントを登録したい場合、`Queue::failing`メソッドが使えます。このイベントはメールや[Slack](https://www.slack.com)により、チームへ通知する良い機会になります。例として、Laravelに含まれている`AppServiceProvider`で、このイベントのコールバックを付け加えてみましょう。 <?php namespace App\Providers; use Illuminate\Support\Facades\Queue; use Illuminate\Support\ServiceProvider; use Illuminate\Queue\Events\JobFailed; class AppServiceProvider extends ServiceProvider { /** * 全アプリケーションサービスの登録 * * @return void */ public function register() { // } /** * 全アプリケーションサービスの初期処理 * * @return void */ public function boot() { Queue::failing(function (JobFailed $event) { // $event->connectionName // $event->job // $event->exception }); } } <a name="retrying-failed-jobs"></a> ### 失敗したジョブの再試行 `failed_jobs`データベーステーブルに挿入された、失敗したジョブを全部確認したい場合は`queue:failed` Artisanコマンドを利用します。 php artisan queue:failed `queue:failed`コマンドはジョブID、接続、キュー、失敗した時間をリスト表示します。失敗したジョブをジョブIDで指定すると、リトライ可能です。たとえば、IDが`5`の失敗したジョブを再試行するには、以下のコマンドを実行します。 php artisan queue:retry 5 失敗したジョブをすべて再試行するには、IDとして`all`を`queue:retry`コマンドへ指定し、実行してください。 php artisan queue:retry all 失敗したジョブを削除する場合は、`queue:forget`コマンドを使います。 php artisan queue:forget 5 失敗したジョブを全部削除するには、`queue:flush`コマンドを使います。 php artisan queue:flush <a name="ignoring-missing-models"></a> ### 不明なモデルの無視 Eloquentモデルをジョブで取り扱う場合は自動的にキューへ積む前にシリアライズし、ジョブを処理するときにリストアされます。しかし、ジョブがワーカにより処理されるのを待っている間にモデルが削除されると、そのジョブは`ModelNotFoundException`により失敗します。 利便性のため、ジョブの`deleteWhenMissingModels`プロパティを`true`に指定すれば、モデルが見つからない場合自動的に削除できます。 /** * モデルが存在していない場合に、ジョブを削除する * * @var bool */ public $deleteWhenMissingModels = true; <a name="job-events"></a> ## ジョブイベント `Queue`[ファサード](/docs/{{version}}/facades)に`before`と`after`メソッドを使い、キューされたジョブの実行前後に実行する、コールバックを指定できます。これらのコールバックはログを追加したり、ダッシュボードの状態を増加させたりするための機会を与えます。通常、これらのメソッドは[サービスプロバイダ](/docs/{{version}}/providers)から呼び出します。たとえば、Laravelに含まれる`AppServiceProvider`を使っていましょう。 <?php namespace App\Providers; use Illuminate\Support\Facades\Queue; use Illuminate\Support\ServiceProvider; use Illuminate\Queue\Events\JobProcessed; use Illuminate\Queue\Events\JobProcessing; class AppServiceProvider extends ServiceProvider { /** * 全アプリケーションサービスの登録 * * @return void */ public function register() { // } /** * 
全アプリケーションサービスの初期処理 * * @return void */ public function boot() { Queue::before(function (JobProcessing $event) { // $event->connectionName // $event->job // $event->job->payload() }); Queue::after(function (JobProcessed $event) { // $event->connectionName // $event->job // $event->job->payload() }); } } `Queue` [ファサード](/docs/{{version}}/facades)の`looping`メソッドを使用し、ワーカがキューからジョブをフェッチする前に、指定したコールバックを実行できます。たとえば、直前の失敗したジョブの未処理のままのトランザクションをロールバックするクロージャを登録できます。 Queue::looping(function () { while (DB::transactionLevel() > 0) { DB::rollBack(); } });
29.490872
374
0.703143
yue_Hant
0.517218
ff359d6518d968824419151392da4239dba9b3a0
1,634
md
Markdown
results/referenceaudioanalyzer/referenceaudioanalyzer_siec_harman_in-ear_2019v2/Fidue A31s/README.md
NekoAlosama/AutoEq-nekomod
a314a809c3fe46c3c8526243bd97f0f31a90c710
[ "MIT" ]
null
null
null
results/referenceaudioanalyzer/referenceaudioanalyzer_siec_harman_in-ear_2019v2/Fidue A31s/README.md
NekoAlosama/AutoEq-nekomod
a314a809c3fe46c3c8526243bd97f0f31a90c710
[ "MIT" ]
null
null
null
results/referenceaudioanalyzer/referenceaudioanalyzer_siec_harman_in-ear_2019v2/Fidue A31s/README.md
NekoAlosama/AutoEq-nekomod
a314a809c3fe46c3c8526243bd97f0f31a90c710
[ "MIT" ]
null
null
null
# Fidue A31s See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info. ### Parametric EQs In case of using parametric equalizer, apply preamp of **-14.01dB** and build filters manually with these parameters. The first 5 filters can be used independently. When using independent subset of filters, apply preamp of **-14.11 dB**. | Type | Fc | Q | Gain | |--------:|------------:|-----:|---------:| | Peaking | 13.93 Hz | 0.33 | -5.42 dB | | Peaking | 136.96 Hz | 0.38 | -6.82 dB | | Peaking | 784.99 Hz | 1.04 | 3.36 dB | | Peaking | 8595.54 Hz | 1.57 | 7.96 dB | | Peaking | 10756.54 Hz | 1.25 | 8.88 dB | | Peaking | 2261.68 Hz | 3.62 | -3.09 dB | | Peaking | 3311.68 Hz | 2.86 | 5.82 dB | | Peaking | 4933.56 Hz | 2.73 | -6.51 dB | | Peaking | 6331.77 Hz | 3.71 | 5.07 dB | | Peaking | 7664.60 Hz | 4.62 | -1.44 dB | ### Fixed Band EQs In case of using fixed band (also called graphic) equalizer, apply preamp of **-14.54dB** (if available) and set gains manually with these parameters. | Type | Fc | Q | Gain | |--------:|------------:|-----:|---------:| | Peaking | 31.25 Hz | 1.41 | -5.96 dB | | Peaking | 62.50 Hz | 1.41 | -4.45 dB | | Peaking | 125.00 Hz | 1.41 | -6.16 dB | | Peaking | 250.00 Hz | 1.41 | -4.75 dB | | Peaking | 500.00 Hz | 1.41 | 0.07 dB | | Peaking | 1000.00 Hz | 1.41 | 2.65 dB | | Peaking | 2000.00 Hz | 1.41 | -0.62 dB | | Peaking | 4000.00 Hz | 1.41 | -1.51 dB | | Peaking | 8000.00 Hz | 1.41 | 14.41 dB | | Peaking | 16000.01 Hz | 1.41 | 2.99 dB | ### Graphs ![](./Fidue%20A31s.png)
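The tables above are intended to be entered into an equalizer app, but if you want to apply a peaking filter programmatically, the standard RBJ audio-EQ-cookbook biquad is one common way to realize an Fc/Q/Gain row. The sketch below is not part of AutoEq; the 48 kHz sample rate and the example row are assumptions chosen for illustration.

```python
# Illustrative sketch (not part of AutoEq): turning one "Peaking Fc/Q/Gain" row
# into biquad coefficients via the standard RBJ audio-EQ-cookbook formulas.
import math

def peaking_biquad(fc: float, q: float, gain_db: float, fs: float = 48000.0):
    a = 10 ** (gain_db / 40.0)          # linear amplitude from dB gain
    w0 = 2.0 * math.pi * fc / fs        # normalized center frequency
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    a_coef = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    # Normalize so a0 == 1, as most DSP libraries expect
    return [x / a_coef[0] for x in b], [x / a_coef[0] for x in a_coef]

if __name__ == "__main__":
    # Example: the 784.99 Hz / Q 1.04 / +3.36 dB row from the parametric table above
    b, a = peaking_biquad(fc=784.99, q=1.04, gain_db=3.36)
    print(b, a)
```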
40.85
98
0.555692
eng_Latn
0.707619
ff35bb69b831de31d7774d67387aba1372a14ef5
355
md
Markdown
builders/command-output.md
jasonk/notify-slack
d1ffdb03d9f5e4343a1c76f8a0feda9689ae39c4
[ "MIT" ]
null
null
null
builders/command-output.md
jasonk/notify-slack
d1ffdb03d9f5e4343a1c76f8a0feda9689ae39c4
[ "MIT" ]
null
null
null
builders/command-output.md
jasonk/notify-slack
d1ffdb03d9f5e4343a1c76f8a0feda9689ae39c4
[ "MIT" ]
null
null
null
# command-output # This builder takes a command and its arguments, runs the command, captures its output, and posts the output to Slack as pre-formatted text. ## Usage ## ```sh inform-slack --attach command-output 'ls -l $WORKSPACE' ``` ## Options ## * `--shell <shell>` or `--shell=<shell>` - Run the command with a shell, rather than directly.
20.882353
66
0.687324
eng_Latn
0.993463
ff3666cb9a1a619b1cee8913c9bbb013f1cf3f44
428
md
Markdown
sdk/tables/Azure.Data.Tables/src/autorest.md
kr-santosh/azure-sdk-for-net
8a33fd4d133bd851b6ecf42639860c8d0df126a4
[ "MIT" ]
1
2018-09-20T10:07:20.000Z
2018-09-20T10:07:20.000Z
sdk/tables/Azure.Data.Tables/src/autorest.md
AzureDataBox/azure-sdk-for-net
15bc1f60f9fca3a7bd23672a9cca73078c812349
[ "MIT" ]
null
null
null
sdk/tables/Azure.Data.Tables/src/autorest.md
AzureDataBox/azure-sdk-for-net
15bc1f60f9fca3a7bd23672a9cca73078c812349
[ "MIT" ]
1
2020-06-25T17:06:22.000Z
2020-06-25T17:06:22.000Z
# Azure.Data.Tables ### AutoRest Configuration > see https://aka.ms/autorest Run `dotnet msbuild /t:GenerateCode` to generate code. ``` yaml title: Azure.Data.Tables input-file: - https://raw.githubusercontent.com/Azure/azure-rest-api-specs/33c52e4f87f3ae21611d45f34db65f9ccc510ea6/specification/cosmos-db/data-plane/Microsoft.Tables/preview/2019-02-02/table.json namespace: Azure.Data.Tables include-csproj: disable ```
28.533333
189
0.785047
yue_Hant
0.509575
ff381e729c908545cfa59780acaba5912e63ded7
300
md
Markdown
tools/python/README.md
CSIRO-enviro-informatics/dpn-ontology
6a647a78ac288a756ed48d8986bc3c9057aca54d
[ "CC-BY-4.0" ]
1
2022-03-03T03:30:40.000Z
2022-03-03T03:30:40.000Z
tools/python/README.md
CSIRO-enviro-informatics/dpn-ontology
6a647a78ac288a756ed48d8986bc3c9057aca54d
[ "CC-BY-4.0" ]
1
2022-03-03T00:26:08.000Z
2022-03-03T00:26:08.000Z
tools/python/README.md
CSIRO-enviro-informatics/dpn-ontology
6a647a78ac288a756ed48d8986bc3c9057aca54d
[ "CC-BY-4.0" ]
null
null
null
# Python tools relating to the DPN ontology ## Installation Set up a virtualenv or conda environment ``` $ conda env create -f environment.yml ``` Install using Python setuptools ``` $ python setup.py install # enable command line tools $ pip install -e . ``` ## Running iso2dpn ``` $ iso2dpn ```
13.043478
43
0.703333
eng_Latn
0.920925
ff38300763dfc1a931f6c97f355ddedb0dae467b
335
md
Markdown
sdk/docs/UpsertLegalEntityAccessMetadataRequest.md
finbourne/lusid-sdk-java-preview
1edd5b68324c382d09850daae18afff0363f9054
[ "MIT" ]
null
null
null
sdk/docs/UpsertLegalEntityAccessMetadataRequest.md
finbourne/lusid-sdk-java-preview
1edd5b68324c382d09850daae18afff0363f9054
[ "MIT" ]
74
2020-03-20T23:54:19.000Z
2022-03-30T14:23:07.000Z
sdk/docs/UpsertLegalEntityAccessMetadataRequest.md
finbourne/lusid-sdk-java-preview
1edd5b68324c382d09850daae18afff0363f9054
[ "MIT" ]
4
2019-09-03T15:01:27.000Z
2021-04-19T17:02:02.000Z
# UpsertLegalEntityAccessMetadataRequest ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **metadata** | [**List&lt;AccessMetadataValue&gt;**](AccessMetadataValue.md) | The access control metadata to assign to a Legal Entity that matches the identifier | [optional]
23.928571
176
0.614925
yue_Hant
0.332035
ff384732c8ccd5adc76a7b35ec7b33fd5e1bf401
1,740
md
Markdown
customize/desktop/wsim/find-a-component-setting-or-package-in-windows-sim.md
imingc/commercialization-public
70a2bcf94b61655df50987bfea83d4fc7be443d9
[ "CC-BY-4.0", "MIT" ]
1
2019-01-25T20:02:01.000Z
2019-01-25T20:02:01.000Z
customize/desktop/wsim/find-a-component-setting-or-package-in-windows-sim.md
andreiztm/commercialization-public
9a9565a191bc1ecddb33c9b26e701ae32b0c8d65
[ "CC-BY-4.0", "MIT" ]
null
null
null
customize/desktop/wsim/find-a-component-setting-or-package-in-windows-sim.md
andreiztm/commercialization-public
9a9565a191bc1ecddb33c9b26e701ae32b0c8d65
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Find a Component, Setting, or Package in Windows SIM description: Find a Component, Setting, or Package in Windows SIM MSHAttr: - 'PreferredSiteName:MSDN' - 'PreferredLib:/library/windows/hardware' ms.assetid: d19af880-aa9e-4737-9fbb-36421e879758 ms.mktglfcycl: deploy ms.sitesec: msdn author: themar-msft ms.author: themar ms.date: 05/02/2017 ms.topic: article ms.prod: windows-hardware ms.technology: windows-oem --- # Find a Component, Setting, or Package in Windows SIM You can use Windows® System Image Manager (Windows SIM) to search for a component, setting, file in a distribution share, or package name by using the Find feature. The following procedure describes how to use Find. 1. Open Windows SIM. 1. On the **Edit** menu, click **Find**, or use the keyboard shortcut Ctrl+F. The **Find** dialog box appears. 1. In the **Find what** box, enter the search criteria. 1. In the **Look in** drop-down list, select from the currently open Windows image and answer file, a distribution share, or the **Messages** pane. 1. Click **Find Now**. ## Related topics [Windows System Image Manager How-to Topics](windows-system-image-manager-how-to-topics.md) [Create or Open an Answer File](create-or-open-an-answer-file.md) [Configure Components and Settings in an Answer File](configure-components-and-settings-in-an-answer-file.md) [Validate an Answer File](validate-an-answer-file.md) [Hide Sensitive Data in an Answer File](hide-sensitive-data-in-an-answer-file.md) [Add a Device Driver Path to an Answer File](add-a-device-driver-path-to-an-answer-file.md) [Add a Package to an Answer File](add-a-package-to-an-answer-file.md) [Add a Custom Command to an Answer File](add-a-custom-command-to-an-answer-file.md)
39.545455
215
0.758621
eng_Latn
0.849706
ff38484ceb49c120d5800c234fbe1c0c27937544
9,466
md
Markdown
articles/dev-spaces/how-to/run-dev-spaces-windows-containers.md
MicrosoftDocs/azure-docs.hu-hu
5fb082c5dae057fd040c7e09881e6c407e535fe2
[ "CC-BY-4.0", "MIT" ]
7
2017-08-28T07:44:33.000Z
2021-04-20T21:12:50.000Z
articles/dev-spaces/how-to/run-dev-spaces-windows-containers.md
MicrosoftDocs/azure-docs.hu-hu
5fb082c5dae057fd040c7e09881e6c407e535fe2
[ "CC-BY-4.0", "MIT" ]
412
2018-07-25T09:31:03.000Z
2021-03-17T13:17:45.000Z
articles/dev-spaces/how-to/run-dev-spaces-windows-containers.md
MicrosoftDocs/azure-docs.hu-hu
5fb082c5dae057fd040c7e09881e6c407e535fe2
[ "CC-BY-4.0", "MIT" ]
13
2017-09-05T09:10:35.000Z
2021-11-05T11:42:31.000Z
--- title: Windows-tárolók használata services: azure-dev-spaces ms.date: 01/16/2020 ms.topic: conceptual description: Megtudhatja, hogyan futtathat Azure Dev Spacest windowsos tárolókat tartalmazó meglévő fürtön keywords: Azure Dev Spaces, Dev Spaces, Docker, Kubernetes, Azure, AKS, Azure Kubernetes Service, tárolók, Windows-tárolók ms.openlocfilehash: bbef5eafe44e38691327714c14c6a6026d45a3c7 ms.sourcegitcommit: 4b0e424f5aa8a11daf0eec32456854542a2f5df0 ms.translationtype: MT ms.contentlocale: hu-HU ms.lasthandoff: 04/20/2021 ms.locfileid: "107777436" --- # <a name="interact-with-windows-containers-using-azure-dev-spaces"></a>Windows-tárolók használata az Azure Dev Spaces használatával [!INCLUDE [Azure Dev Spaces deprecation](../../../includes/dev-spaces-deprecation.md)] Az Azure Dev Spaces az új és a meglévő Kubernetes-névterekben is engedélyezhető. Az Azure Dev Spaces Linux-tárolókon futó és eszközszolgáltatásokat fog futtatni. Ezek a szolgáltatások az azonos névtérben található Windows-tárolókon futó alkalmazásokkal is interakcióba léphetnek. Ez a cikk bemutatja, hogyan futtathat szolgáltatásokat meglévő Windows-tárolókat tartalmazó névtérben a Dev Spaces használatával. Az Azure Dev Spaces használatával jelenleg nem lehet hibakeresést végezni vagy windowsos tárolókat csatolni. ## <a name="set-up-your-cluster"></a>A fürt beállítása Ez a cikk feltételezi, hogy már rendelkezik Linux- és Windows-csomópontkészletekkel is. Ha Linux- és Windows-csomópontkészletekkel kell fürtöt létrehoznia, kövesse az itt található [utasításokat.][windows-container-cli] Csatlakozzon a fürthöz a [kubectl][kubectl], a Kubernetes parancssori ügyfél használatával. Az [az aks get-credentials][az-aks-get-credentials] paranccsal konfigurálható `kubectl` a Kubernetes-fürthöz való csatlakozásra. Ez a parancs letölti a hitelesítő adatokat, és konfigurálja a Kubernetes parancssori felületét azok használatára. ```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ``` A fürthöz való csatlakozás ellenőrzéséhez használja a [kubectl get][kubectl-get] parancsot a fürtcsomópontok listájának lekéréséhez. ```azurecli-interactive kubectl get nodes ``` Az alábbi példakimenet egy Windows és Linux rendszerű fürtöt mutat be. Mielőtt továbblépne, győződjön meg arról, hogy az állapot *Készen* áll az egyes csomópontok számára. ```console NAME STATUS ROLES AGE VERSION aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.14.8 aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.14.8 aksnpwin000000 Ready agent 108s v1.14.8 ``` Fertőzöttság [alkalmazása a][using-taints] Windows-csomópontokra. A Windows-csomópontok fertőzött volta megakadályozza, hogy a Dev Spaces Linux-tárolókat ütemezsen a Windows-csomópontokon való futtatásra. Az alábbi példaparancs egy fertőzöttet alkalmaz az előző példában található *aksnpwin987654* Windows-csomópontra. ```azurecli-interactive kubectl taint node aksnpwin987654 sku=win-node:NoSchedule ``` > [!IMPORTANT] > Amikor fertőzöttségeket alkalmaz egy csomópontra, konfigurálnia kell egy megfelelő tűrést a szolgáltatás üzembehelyezési sablonján, hogy a szolgáltatás ezen a csomóponton fusson. A mintaalkalmazás már konfigurálva van [az][sample-application-toleration-example] előző parancsban konfigurált fertőzöttségnek megfelelő tűrővel. ## <a name="run-your-windows-service"></a>A Windows-szolgáltatás futtatása Futtassa a Windows-szolgáltatást az AKS-fürtön, és ellenőrizze, hogy *Fut* állapotban van-e. 
Ez a cikk egy [mintaalkalmazást használ][sample-application] a fürtön futó Windows- és Linux-szolgáltatás szemléltetására. Klónozza a mintaalkalmazást a GitHubról, és lépjen a `dev-spaces/samples/existingWindowsBackend/mywebapi-windows` könyvtárba: ```console git clone https://github.com/Azure/dev-spaces cd dev-spaces/samples/existingWindowsBackend/mywebapi-windows ``` A mintaalkalmazás [a Helm 3-as verzióval][helm-installed] futtatja a Windows-szolgáltatást a fürtön. Lépjen a `charts` könyvtárba, és a Helm használatával futtassa a Windows-szolgáltatást: ```console cd charts/ kubectl create ns dev helm install windows-service . --namespace dev ``` A fenti parancs a Helm használatával futtatja a Windows-szolgáltatást a *dev* névtérben. Ha nem hoz létre dev nevű névteret, az létrejön. Az `kubectl get pods` paranccsal ellenőrizze, hogy a Windows-szolgáltatás fut-e a fürtön. ```console $ kubectl get pods --namespace dev --watch NAME READY STATUS RESTARTS AGE myapi-4b9667d123-1a2b3 0/1 ContainerCreating 0 47s ... myapi-4b9667d123-1a2b3 1/1 Running 0 98s ``` ## <a name="enable-azure-dev-spaces"></a>Az Azure Dev Spaces engedélyezése Engedélyezze a Dev Spacest ugyanabban a névtérben, mint a Windows-szolgáltatás futtatásához. A következő parancs engedélyezi a Dev Spacest a *dev* névtérben: ```console az aks use-dev-spaces -g myResourceGroup -n myAKSCluster --space dev --yes ``` ## <a name="update-your-windows-service-for-dev-spaces"></a>Windows-szolgáltatás frissítése a Dev Spaceshez Ha olyan meglévő névtéren engedélyezi a Dev Spacest, amely már fut, alapértelmezés szerint a Dev Spaces megpróbálja az adott névtérben futó összes új tárolót kiszeretni. A Dev Spaces a névtérben már futó szolgáltatáshoz létrehozott új tárolókat is megpróbálja kiszeretni. Ha meg szeretné akadályozni, hogy a Dev Spaces bevezessen egy, a névtérben futó tárolót, adja hozzá a *no-proxy fejlécet* a fájlhoz. `deployment.yaml` Adja `azds.io/no-proxy: "true"` hozzá a következőt a `existingWindowsBackend/mywebapi-windows/charts/templates/deployment.yaml` fájlhoz: ```yaml apiVersion: apps/v1 kind: Deployment metadata: ... spec: replicas: {{ .Values.replicaCount }} selector: ... template: metadata: labels: app.kubernetes.io/name: {{ include "mywebapi.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} azds.io/no-proxy: "true" ``` A `helm list` windowsos szolgáltatás üzemelő példányának felsorolásához használja a következőt: ```cmd $ helm list --namespace dev NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE windows-service 1 Wed Jul 24 15:45:59 2019 DEPLOYED mywebapi-0.1.0 1.0 dev ``` A fenti példában az üzemelő példány neve *windows-service.* Frissítse a Windows-szolgáltatást az új konfigurációval a `helm upgrade` használatával: ```cmd helm upgrade windows-service . --namespace dev ``` Mivel frissítette a `deployment.yaml` -et, a Dev Spaces nem próbálja meg beszerkesni a szolgáltatást. ## <a name="run-your-linux-application-with-azure-dev-spaces"></a>Linux-alkalmazás futtatása az Azure Dev Spaces használatával Lépjen a könyvtárba, és a és a paranccsal futtassa `webfrontend` `azds prep` a `azds up` Linux-alkalmazást a fürtön. ```console cd ../../webfrontend-linux/ azds prep --enable-ingress azds up ``` A `azds prep --enable-ingress` parancs létrehozza a Helm-diagramot és a Docker-fájlokat az alkalmazáshoz. 
> [!TIP] > Az Azure Dev Spaces a projekt [Dockerfile-](../how-dev-spaces-works-prep.md#prepare-your-code) és Helm-diagramját használja a kód felépítéséhez és futtatásához, de módosíthatja ezeket a fájlokat, ha módosítani szeretné a projekt felépítését és futtatását. A `azds up` parancs a névtérben futtatja a szolgáltatást. ```console $ azds up Using dev space 'dev' with target 'myAKSCluster' Synchronizing files...4s Installing Helm chart...11s Waiting for container image build...6s Building container image... Step 1/12 : FROM mcr.microsoft.com/dotnet/core/sdk:2.2 ... Step 12/12 : ENTRYPOINT ["/bin/bash", "/entrypoint.sh"] Built container image in 36s Waiting for container...2s Service 'webfrontend' port 'http' is available at http://dev.webfrontend.abcdef0123.eus.azds.io/ Service 'webfrontend' port 80 (http) is available via port forwarding at http://localhost:57648 ``` A futó szolgáltatást a nyilvános URL-cím megnyitásával láthatja, amely az azds up parancs kimenetében jelenik meg. Ebben a példában a nyilvános URL-cím `http://dev.webfrontend.abcdef0123.eus.azds.io/` a következő: . Nyissa meg a szolgáltatást egy böngészőben, és kattintson a lap tetején található *About (About)* elemre. Ellenőrizze, hogy megjelenik-e üzenet a *mywebapi szolgáltatástól,* amely a tároló által használt Windows-verziót tartalmazza. ![Mintaalkalmazás a mywebapi Windows-verziójával](../media/run-dev-spaces-windows-containers/sample-app.png) ## <a name="next-steps"></a>Következő lépések További információ az Azure Dev Spacesről. > [!div class="nextstepaction"] > [Az Azure Dev Spaces működése](../how-dev-spaces-works.md) [kubectl]: https://kubernetes.io/docs/user-guide/kubectl/ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [helm-installed]: https://helm.sh/docs/intro/install/ [sample-application]: https://github.com/Azure/dev-spaces/tree/master/samples/existingWindowsBackend [sample-application-toleration-example]: https://github.com/Azure/dev-spaces/blob/master/samples/existingWindowsBackend/mywebapi-windows/charts/templates/deployment.yaml#L24-L27 [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [using-taints]: ../../aks/use-multiple-node-pools.md#setting-nodepool-taints [windows-container-cli]: ../../aks/windows-container-cli.md
51.167568
518
0.771075
hun_Latn
0.999465
ff3933c876871e9103875e1527de516ce865c892
300
md
Markdown
README.md
davidli3100/flow
a9b41f9cc583712dd3e7bd5d9c39f2a6745ca2c6
[ "MIT" ]
null
null
null
README.md
davidli3100/flow
a9b41f9cc583712dd3e7bd5d9c39f2a6745ca2c6
[ "MIT" ]
null
null
null
README.md
davidli3100/flow
a9b41f9cc583712dd3e7bd5d9c39f2a6745ca2c6
[ "MIT" ]
null
null
null
# Flow An app paired with ESP8266-12E Node MCU 1.0 hardware to create an IoT-enabled moisture monitoring system for irrigation and agricultural purposes. pH sensor data is dummy data; however, moisture levels are live, using the ThingSpeak API as an intermediary between the ESP8266 and the web app.
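To make the ESP8266-to-web-app data path concrete, below is a minimal sketch of how a client could poll the latest live moisture reading through ThingSpeak's public REST feed endpoint. This is not code from this repository (the actual web app is JavaScript); the channel ID, read key, and the assumption that moisture is stored in `field1` are placeholders.

```python
# Minimal sketch (not from the Flow repo): polling the latest moisture reading
# from a ThingSpeak channel. CHANNEL_ID, READ_API_KEY and the field number are
# placeholders; the real app's channel layout may differ.
import requests

CHANNEL_ID = "123456"        # hypothetical channel ID
READ_API_KEY = "XXXXXXXX"    # only needed if the channel is private
FIELD = 1                    # assume field1 holds the moisture level

def latest_moisture():
    url = f"https://api.thingspeak.com/channels/{CHANNEL_ID}/fields/{FIELD}.json"
    resp = requests.get(url, params={"api_key": READ_API_KEY, "results": 1}, timeout=10)
    resp.raise_for_status()
    feeds = resp.json().get("feeds", [])
    # Each feed entry carries a timestamp and the field value as a string
    return feeds[-1][f"field{FIELD}"] if feeds else None

if __name__ == "__main__":
    print("Latest moisture reading:", latest_moisture())
```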
60
147
0.81
eng_Latn
0.998721
ff394e9b60ac6dd64c30a96ff15d8f0aa3d4c70d
1,046
md
Markdown
_bm/dep/ccomp.md
sylvainkahane/docs
50340ae2af8009c020a3d048ddc40c593a44d268
[ "Apache-2.0" ]
null
null
null
_bm/dep/ccomp.md
sylvainkahane/docs
50340ae2af8009c020a3d048ddc40c593a44d268
[ "Apache-2.0" ]
null
null
null
_bm/dep/ccomp.md
sylvainkahane/docs
50340ae2af8009c020a3d048ddc40c593a44d268
[ "Apache-2.0" ]
null
null
null
--- layout: relation title: 'ccomp' shortdef: 'clausal complement' udver: '2' --- The `ccomp` link is used for clausal dependents that are core arguments when the subject is not controlled. When the subject is controlled, we use the [xcomp]() link instead. `ccomp` is also used with the copula-like verb kó when it introduces indirect speech. If kó introduces direct speech, we use [parataxis:obj]() or [obj](). ~~~ conllu # visual-style 2 7 ccomp color:blue # visual-style 2 bgColor:blue # visual-style 2 fgColor:white # visual-style 7 bgColor:blue # visual-style 7 fgColor:white 1 masakè màsakɛ NOUN _ _ 2 nsubj _ _ 2 ko kó VERB _ _ 0 root _ _ 3 ko kó PART _ _ 7 discourse _ _ 4 muso mùso NOUN _ _ 7 nsubj _ _ 5 bèè bɛ́ɛ DET _ _ 4 det _ _ 6 ka ka AUX _ _ 7 aux _ _ 7 na nà VERB _ _ 2 ccomp _ _ 8 ni ni ADP _ _ 11 case _ _ 9 u ù PRON _ _ 11 nmod:poss _ _ 10 ka ka ADP _ _ 9 case _ _ 11 buguri bùguri NOUN _ _ 7 obl _ _ 12 ye yé ADP _ _ 11 case _ _ 13 ! ! PUNCT _ _ 7 punct _ _ 'A king says that all women must come with their dust'. ~~~
30.764706
321
0.699809
eng_Latn
0.936404
ff395af85b7201c7d0bcb3ddcd9d71d687301bc0
2,011
md
Markdown
README.md
GTron-1729/writing_in_air
a7bb2cc6f2d0a3e4a1307dcdc7ffae3415a9b815
[ "MIT" ]
1
2020-07-23T17:01:29.000Z
2020-07-23T17:01:29.000Z
README.md
GTron-1729/writing_in_air
a7bb2cc6f2d0a3e4a1307dcdc7ffae3415a9b815
[ "MIT" ]
null
null
null
README.md
GTron-1729/writing_in_air
a7bb2cc6f2d0a3e4a1307dcdc7ffae3415a9b815
[ "MIT" ]
null
null
null
# Alphabet Recognition Through Gestures [![](https://img.shields.io/github/license/mashape/apistatus.svg)](https://github.com/akshaychandra21/Alphabet_Recognition_RealTime/blob/master/LICENSE.txt) ## Working Example <img src="demo.gif"> ## Code Requirements The code is in Python (version 3.6 or higher). You also need to install OpenCV and Keras (2.1.4 version) libraries. ## Data Description A popular demonstration of the capability of deep learning techniques is object recognition in image data. The "Extended Hello World" of object recognition for machine learning and deep learning is the [EMNIST dataset](https://www.kaggle.com/crawford/emnist) for handwritten letters recognition. It is an extended version of the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset. A set of sample images is shown below. <img src="images/emnist_sample.png" width=600 height=100/> Each of the letters is stored as a numbered array as shown below. <img src="images/emnist_single_sample.png" width=400 height=400/> I built a Multilayer Perceptron (MLP) model as well as a Convolutional Neural Network (CNN) model using [Keras](https://keras.io/) library. The predictions of both the models are shown on the screen in real time. The Test accuracies were as follows: * MLP Test Accuracy: 91.7% * CNN Test Accuracy: 93.1% For both the models, I actually used the exact same architectures I implemented in the [Digits Recognition](https://github.com/akshaychandra111/Digits_Recognition_RealTime) project (for obvious 'extended' reasons). ## Code Explanation I have written [a tutorial post on medium](https://medium.com/@akshaychandra21/97e697b8fb86) explaining the code. ## Execution Order of Execution is as follows: Step 1 - Execute ``` python mlp_model_builder.py ``` Step 2 - Execute ``` python cnn_model_builder.py ``` Step 3 - This could take a while, so feel free to take a quick nap. Step 4 - Execute ``` python alphabet_recognition.py ``` Step 5 - Grab a blue bottle cap and have fun!
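For orientation, below is a minimal sketch of the kind of Keras MLP that a builder script like `mlp_model_builder.py` could produce for 26-class letter recognition on 28x28 EMNIST images. It is illustrative only; the exact layer sizes, dropout rates, and training settings used in this project live in the repo's scripts and may differ.

```python
# Illustrative only; not the exact architecture from mlp_model_builder.py.
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout

def build_mlp(num_classes=26, input_shape=(28, 28)):
    model = Sequential([
        Flatten(input_shape=input_shape),            # 28x28 image -> 784-dim vector
        Dense(512, activation="relu"),
        Dropout(0.2),
        Dense(512, activation="relu"),
        Dropout(0.2),
        Dense(num_classes, activation="softmax"),    # one unit per letter A-Z
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_mlp()
    model.summary()
    # Training would use the EMNIST letters split, e.g.:
    # model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```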
43.717391
284
0.770761
eng_Latn
0.939573
ff399e50da5a8a879722399a69d7e8d082fbdcc2
15,969
md
Markdown
bookinfo-example/README.md
incfly/authservice
c918b4b52070bab45fca596be5747d531ea73e44
[ "Apache-2.0" ]
null
null
null
bookinfo-example/README.md
incfly/authservice
c918b4b52070bab45fca596be5747d531ea73e44
[ "Apache-2.0" ]
null
null
null
bookinfo-example/README.md
incfly/authservice
c918b4b52070bab45fca596be5747d531ea73e44
[ "Apache-2.0" ]
null
null
null
# Bookinfo with Authservice Example This doc shows how to integrate Authservice into an Istio system deployed on Kubernetes. This demo uses the [Istio Bookinfo sample application](https://istio.io/docs/examples/bookinfo/). This demo relies on the Istio [external authorization provider](https://istio.io/latest/docs/tasks/security/authorization/authz-custom/), available since Istio 1.9. ### Prerequisites 1. Prepare your OIDC provider configuration. In our example, we use Google as the identity provider. Follow [instructions](https://developers.google.com/identity/protocols/oauth2/openid-connect) to create one. ```shell export OIDC_CLIENT_ID="<your-client-id>" export OIDC_CLIENT_SECRET="<your-client-secret>" ``` 1. Install Istio 1.9 or later. ```shell istioctl install -y kubectl label namespace default istio-injection=enabled --overwrite ``` ### Install and Enable Authservice 1. In our example, we use a self-signed certificate at localhost for easy setup. This is used to terminate HTTPS at the ingress gateway, since OIDC requires the client callback URI to be hosted on a protected endpoint. ```shell bash ./scripts/generate-self-signed-certs-for-ingress-gateway.sh ``` 1. Configure the Istio mesh config with an [external authorization provider](https://istio.io/latest/docs/tasks/security/authorization/authz-custom/). ```shell kubectl edit cm -n istio-system ``` Change the mesh config with the config below. ```yaml data: mesh: |- extensionProviders: - name: "authservice-grpc" envoyExtAuthzGrpc: service: authservice.default.svc.cluster.local port: "10003" ``` 1. Fetch the identity provider's public key and populate it into the configmap. In our example, run `scripts/google-jwks.sh`. ```shell bash scripts/google-jwks.sh ``` Copy the output JWK (with escaping) literally into [templates/config.yaml](https://github.com/istio-ecosystem/authservice/blob/master/bookinfo-example/authservice/templates/config.yaml#L30) to replace the JWK content. TODO(Shikugawa): this is a limitation. We are currently working on making authservice fetch the JWK by itself when a JWK URI is provided. See https://github.com/istio-ecosystem/authservice/issues/34. 1. Install authservice via Helm. ```shell helm template authservice \ --set oidc.clientID=${OIDC_CLIENT_ID} \ --set oidc.clientSecret=${OIDC_CLIENT_SECRET} \ | kubectl apply -f - ``` 1. Access the product page via port-forwarding at localhost. ```shell kubectl port-forward service/istio-ingressgateway 8443:443 -n istio-system ``` In your browser, visit the page at https://localhost:8443/productpage. By default the Helm package adds the OIDC integration at the ingress gateway proxy. You can set `authservice.enforcingMode=productpage` in [values.yaml](https://github.com/istio-ecosystem/authservice/blob/2931c4cc05ecc6f0a2efec7a97dfcfbe5305a602/bookinfo-example/authservice/values.yaml#L7) to enforce it at the application sidecar instead. ### Further Protection via RequestAuthentication and Authorization Policy Istio's native RequestAuthentication and AuthorizationPolicy resources can be used to configure which end users can access specific apps, at specific paths. For example, you can apply the sample configuration to only allow authenticated requests to access the productpage service. ```shell kubectl apply -f ./config/productpage-authn-authz.yaml ``` ### Other Authservice Deployment Modes You can also deploy authservice as a container in the ingress or application pod. This can help reduce the latency of the external authz check request.
Instead of sending the `Check` request to a Kubernetes service (`authservice.default.svc.cluster.local`), the request is sent to `localhost:10003` within the pod. This requires changing the application pod spec. See `config/bookinfo-with-authservice-template.yaml` for an example. ## How It Works The browser should redirect to the OIDC provider's login page. Upon login, the authenticated user should be redirected back and gain access to the `productpage`. This works because the Authservice is involved in every request to the `productpage` service. 1. On the first request, the Authservice detected that the user is unauthenticated 1. The Authservice redirected the browser to the OIDC Provider's authorization endpoint, which redirected the browser again to the OIDC Provider's login page 1. After the user logged in, the OIDC provider redirected the browser back to the `productpage` service with an authorization code as a query parameter 1. The Authservice intercepted this OIDC provider callback redirect and captured the authorization code from the query parameter 1. The Authservice exchanged the authorization code for tokens by making a call from the Authservice directly to the OIDC provider (as a "backend-to-backend request", rather than another browser redirect) 1. The Authservice redirected the browser back to the originally requested path of the `productpage` service 1. The Authservice received the request to the `productpage` and injected the OIDC ID token into the `Authentication` http request header of that request before allowing the request to continue on to the `productpage` 1. Before the request continues to the `productpage`, the Istio authentication policy validated the token from the `Authentication` request header and, since it was valid, allowed the request to go to the `productpage` 1. The `productpage` renders its UI in the http response and the browser shows the UI The Authservice sets a session ID cookie on the user's browser, so future `productpage` page loads in the browser will not require authentication until the OIDC tokens expire. To log out and remove the current user's session immediately, point the browser to `https://<INGRESS_HOST>/authservice_logout` (this path is configurable in the Authservice's `ConfigMap`). ## :warning: The rest of this documentation needs updates. ## Deploy Bookinfo Using the Authservice for Token Acquisition + Authorization (Sidecar integration) The authentication tokens acquired using the Authservice can also be used for authorization, provided that they contain scopes. This section demonstrates how to leverage the Authservice to relay the authorization token to protected apps and services. 1. Configure the Authservice to provide authorization. It must both request scopes for protected resources and attach the authorization token as a header. 1. Set up a `ConfigMap` for Authservice. Fill in [`config/authservice-configmap-template-for-authn-and-authz.yaml`](config/authservice-configmap-template-for-authn-and-authz.yaml) to include the OIDC provider's configurations. Currently, only the `oidc` filter can be configured in the `ConfigMap`. See [here](../docs/README.md) for the description of each field. Once the values have been substituted, apply the `ConfigMap`.
```bash kubectl apply -f config/authservice-configmap-template-for-authn-and-authz.yaml ``` This `ConfigMap` has several notable changes compared to the previous `ConfigMap` for authentication only ([`config/authservice-configmap-template-for-authn.yaml`](config/authservice-configmap-template-for-authn.yaml)). 1. It updates the value at the key `chains[*].filters[*].oidc.scopes` which contains a list of strings of scopes that the Authservice is enabled to request on behalf of the service it is protecting. In this example, the Authservice will request `productpage.read` and `reviews.read`. 1. It adds a key `chains[*].filters[*].oidc.access_token` which is an object defining a preamble and a header name to provide the access token as a header after receipt. Note that this example assumes that the access token will be returned by the OIDC Provider in JWT format. Please check the documentation for your OIDC Provider's Authorization endpoint. In this example, the access token is configured to be sent on the header named `Authorization`. This aligns with the default header name used by Istio's Authentication `Policy` to validate JWT tokens. 1. It has changed the value at the key `chains[*].filters[*].oidc.id_token`. This moves the ID token to a different request header compared to the `ConfigMap` for authentication only used previously. Now the ID token will be sent on a header called `x-id-token`. The header name `x-id-token` itself does not have any special meaning. 1. Configure the Bookinfo app 1. Edit [`config/bookinfo-with-authservice-template.yaml`](config/bookinfo-with-authservice-template.yaml) Supply a Authservice image. This has previously been described in the steps ["Deploy Bookinfo Using the Authservice for Token Acquisition"](#authservice-image) from above. 1. Deploy Bookinfo and Authservice by applying the Authservice deployment file. ```bash kubectl apply -f config/bookinfo-with-authservice-template.yaml watch kubectl get pods -A ``` 1. Wait for the new pods to be in `Running` state. Note that the Authservice will be deployed in the same Pod as `productpage`. 1. If the `callback` or `logout` paths in [`config/authservice-configmap-template-for-authn-and-authz.yaml`](config/authservice-configmap-template-for-authn-and-authz.yaml) were edited in a previous step, then edit those same paths in [`config/bookinfo-gateway.yaml`](config/bookinfo-gateway.yaml). Otherwise, no edit is needed. When ready, apply the file to create the ingress gateway and routing rules for Bookinfo: ```bash kubectl apply -f config/bookinfo-gateway.yaml ``` Note that session affinity (via Istio `DestinationRule`) is required when you deploy multiple instances of `productpage`, which ensures that the requests from the same user-agent reach the same instance of `productpage`. This is required because Authservice currently only supports in-memory session storage. 1. Next confirm that the Bookinfo app is running. After determining the [ingress IP and port](https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports), use a browser to navigate to the `productpage` UI, substituting the ingress host: `https://<INGRESS_HOST>/productpage`. Note that at this point, the Bookinfo sample apps are deployed without any authentication, and without activating the Authservice, so the `productpage` UI should show in the browser without being asked to authenticate. 1. Enable Authz 1. 
Apply the authentication policy, which creates a `Policy` that enforces authentication on the services under `targets`. Replace the fields under `jwt`(`issuer` and `jwksUri` settings). ```bash kubectl apply -f config/bookinfo-authn-policy-template-adding-reviews.yaml ``` 1. Apply the authorization policy, creating one `AuthorizationPolicy` each for `productpage` and `reviews`. ```bash kubectl apply -f config/bookinfo-authz-using-istio-authorization-policy.yaml ``` Note: `config/bookinfo-authz-using-deprecated-rbac.yaml` can also be used, but will be removed in Istio 1.6. | ⚠️Note⚠️: Unless logout is setup prior these steps, multiple users with different scopes will be required. | | --- | 1. Navigate to the `productpage`, substituting the ingress host: `https://<INGRESS_HOST>/productpage`. Because authentication is enabled, the user is prompted to provide their credentials for the identity provider given in the `ConfigMap`. 1. Assuming the authenticated user has neither `productpage.read` nor the `reviews.read` scopes, then they should not see the `productpage` but instead should be met with an Istio unauthorized message `"RBAC: access denied"`. 1. Add the scope `productpage.read` to a different user and login. The user should be able to view the `productpage` sans reviews. * A message should appear on the `productpage` stating `"Sorry, product reviews are currently unavailable for this book."` This is because the authenticated user is not authorized to access the `reviews` service and would require the scope `reviews.read` in order to access it. #### Authz with `review` service (optional) 1. Patch `productpage` to forward authorization headers to other services. 1. Clone https://github.com/istio/istio. 1. Make the changes below and build the image using `/samples/bookinfo/src/productpage/Dockerfile`. ```diff --- a/samples/bookinfo/src/productpage/productpage.py +++ b/samples/bookinfo/src/productpage/productpage.py @@ -182,7 +182,9 @@ def getForwardHeaders(request): if 'user' in session: headers['end-user'] = session['user'] - incoming_headers = ['x-request-id', 'x-datadog-trace-id', 'x-datadog-parent-id', 'x-datadog-sampled'] + incoming_headers = ['x-request-id', + 'x-datadog-trace-id', 'x-datadog-parent-id', 'x-datadog-sampled', + 'authorization'] # Add user-agent to headers manually if 'user-agent' in request.headers: ``` 1. Tag and push the image created to an accessible registry. 1. Replace the `productpage` image in `config/bookinfo-with-authservice-template.yaml` with the image built above. 1. Reapply the deployment file. ```bash kubectl apply -f config/bookinfo-with-authservice-template.yaml ``` 1. Log in to the `productpage` app as previously done, using a user authorized with both scopes `productpage.read` and `reviews.read`. The user will be authorized to view the `productpage` with reviews. There are three scenarios once authenticated: | Behavior | productpage.read | reviews.read | |-------------------------------------------------|------------------|--------------| | Page is fully viewable | x | x | | Page is viewable but reviews are not | x | | | Istio unauthorized message: RBAC: access denied | | | For a full list of Authservice configuration options, see the [configuration docs](../docs/README.md). ### Additional Pre-requisites: 1. External Load Balancer: Currently Authservice can be used at either the sidecar or gateway. However, there may be issues when it is used at the gateway in an installation with multiple gateway instances. 
These issues are due to session state being stored in-memory, and only happen when users go from talking to one Authservice instance to another mid-session. Such problems can be avoided if the gateway instances are placed behind a load balancer that supports session affinity. 1. Installing Authservice in the Istio Ingress-gateway: Currently, there is not yet a native way to install Authservice into the Istio Ingress-gateway. A more integrated way to install Authservice as part of the Gateway will be considered in the future. However, you can manually modify the `Deployment` of `istio-ingressgateway` to add the Authservice container: ``` containers: # Adding the authservice container - name: authservice image: AUTHSERVICE_IMAGE imagePullPolicy: Always ports: - containerPort: 10003 volumeMounts: - name: authcode-sample-app-authservice-configmap-volume mountPath: /etc/authservice - name: istio-proxy ... ``` ## FAQ Where can I find the authservice images? We use GitHub Packages to host the [authservice images](https://github.com/istio-ecosystem/authservice/pkgs/container/authservice%2Fauthservice).
57.032143
561
0.730165
eng_Latn
0.98957
ff3abdca00cc4fbc68ede8d9f763dc1027da261f
4,716
md
Markdown
src/IconButton/README.md
raccoongang/paragon
1bf44915f0b3305048c4f9522d63350d569a2666
[ "Apache-2.0" ]
65
2017-06-21T15:21:12.000Z
2021-12-28T04:05:03.000Z
src/IconButton/README.md
raccoongang/paragon
1bf44915f0b3305048c4f9522d63350d569a2666
[ "Apache-2.0" ]
911
2017-06-22T15:23:38.000Z
2022-01-12T13:53:11.000Z
src/IconButton/README.md
raccoongang/paragon
1bf44915f0b3305048c4f9522d63350d569a2666
[ "Apache-2.0" ]
19
2017-10-15T05:18:55.000Z
2021-12-17T14:03:58.000Z
--- title: 'IconButton' type: 'component' components: - IconButton categories: - Buttonlike status: 'New' designStatus: 'Done' devStatus: 'Done' notes: '' --- ### Basic Usage with Paragon Icon ```jsx live () => { const variants = ["brand", "primary", "secondary", "success", "warning", "danger", "light", "dark", "black"]; return ( <div className="d-flex"> {variants.map((variant) => ( <IconButton src={Close} iconAs={Icon} alt="Close" onClick={() => {}} variant={variant} className="mr-2" /> ))} </div> ); } ``` ### Basic Usage with FontAwesome Icon ```jsx live () => { const icon = FontAwesome.faTimes; return ( <div className="d-flex"> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="brand" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="primary" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="secondary" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="success" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="warning" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="danger" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="light" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="dark" /> <IconButton className="mr-2" icon={icon} alt="Close" onClick={() => {}} variant="black" /> </div> ); } ``` ### Inverted Colors ```jsx live () => { const icon = FontAwesome.faBars; return ( <div className="d-flex"> <div className="p-1 bg-brand"> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="brand" invertColors /> </div> <div className="p-1 bg-primary"> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="primary" invertColors /> </div> <div className="p-1 bg-secondary"> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="secondary" invertColors /> </div> <div className="p-1 bg-success"> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="success" invertColors /> </div> <div className="p-1 bg-warning"> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="warning" invertColors /> </div> <div className="p-1 bg-danger"> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="danger" invertColors /> </div> <div className="p-1 bg-light"> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="light" invertColors /> </div> <div className="p-1" style={{ background: "black" }}> <IconButton icon={icon} alt="Menu" onClick={() => console.log("You clicked the menu button")} variant="black" invertColors /> </div> </div> ); } ``` ### Sizes ```jsx live () => { return ( <> <div className="mb-1"> Small <IconButton icon={FontAwesome.faBars} alt="Menu" onClick={() => {}} variant="primary" size="sm" /> </div> <div className="mb-1"> Inline: <IconButton icon={FontAwesome.faBars} alt="Menu" onClick={() => {}} variant="primary" size="inline" /> </div> <div className="x-small mb-1"> An <strong>inline</strong> Icon Button inherits font size! 
For example, applying className="x-small" will make the Icon Button look like this: <IconButton icon={FontAwesome.faSmile} alt="Smile" onClick={() => {}} variant="primary" size="inline" /> . The Icon Button will also wrap with the text as long as it is not a direct child of a flex box. </div> </> ); } ```
26.494382
114
0.514631
kor_Hang
0.165396
ff3b4ef8f990b66717a87653c0060e260b6b06e1
883
md
Markdown
_ideas/hiring-portal.md
SAITARUN55/lab.codingblocks.com
6df557eb001d8ca4b1e71fc83599fa3eda33ccf6
[ "MIT" ]
10
2017-04-15T21:28:11.000Z
2021-08-30T18:49:47.000Z
_ideas/hiring-portal.md
SAITARUN55/lab.codingblocks.com
6df557eb001d8ca4b1e71fc83599fa3eda33ccf6
[ "MIT" ]
26
2017-05-27T19:17:46.000Z
2020-10-02T13:28:51.000Z
_ideas/hiring-portal.md
SAITARUN55/lab.codingblocks.com
6df557eb001d8ca4b1e71fc83599fa3eda33ccf6
[ "MIT" ]
71
2017-05-04T15:49:45.000Z
2021-08-30T18:49:48.000Z
--- layout: post permalink: "ideas/hiring-portal" title: "Hiring Portal" --- A portal where students and companies can be connected with each other for prospective jobs and/or internships. There are two user stories we are trying to build here (a rough sketch of the implied data model follows the lists): #### From the student's perspective 1. Sign up on the platform 2. Create profile page 3. Add CV/Resume to profile page 4. Add links to projects and papers 5. System generates "tags" for the student's profile based on strengths and experiences 6. Student can browse companies 7. Student can search openings based on filters like required skills #### From the company representative's perspective 1. Sign up on the platform 2. Create company profile page 3. Create openings. Each opening is a separate entity 4. Add required skill sets for each opening 5. Browse students 6. Search students based on filters like skill sets, experience, etc.
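As a starting point for discussion, here is a rough, hypothetical sketch of the core entities these user stories imply (students, companies, openings, and skill tags). All names and fields are assumptions, not an agreed design.

```python
# Hypothetical data-model sketch for the hiring portal idea; all names and fields are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Student:
    name: str
    resume_url: str = ""
    project_links: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)  # skills/experience tags, possibly system-generated

@dataclass
class Opening:
    title: str
    required_skills: List[str] = field(default_factory=list)
    is_internship: bool = False

@dataclass
class Company:
    name: str
    openings: List[Opening] = field(default_factory=list)

def match_students(students: List[Student], opening: Opening) -> List[Student]:
    """Very naive skill-overlap filter a company might use to browse students."""
    wanted = set(s.lower() for s in opening.required_skills)
    return [s for s in students if wanted & set(t.lower() for t in s.tags)]
```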
30.448276
86
0.783692
eng_Latn
0.998566
ff3c25141388c79e7794ef1a160cb07fd8ebf961
186
md
Markdown
src/content/cmss/netlify.md
jmeeling1963/serverless
1af1050582dec6004512bd0e78436395f87c3f41
[ "MIT" ]
4
2019-11-15T14:51:18.000Z
2021-11-08T09:00:29.000Z
src/content/cmss/netlify.md
jmeeling1963/serverless
1af1050582dec6004512bd0e78436395f87c3f41
[ "MIT" ]
null
null
null
src/content/cmss/netlify.md
jmeeling1963/serverless
1af1050582dec6004512bd0e78436395f87c3f41
[ "MIT" ]
1
2021-02-25T20:05:28.000Z
2021-02-25T20:05:28.000Z
--- path: "services/cmss/netlify" title: "Netlify CMS" url: "https://www.netlifycms.org/" logo: "/images/netlify.png" --- This is a React SPA that works with any static site generator.
20.666667
62
0.709677
eng_Latn
0.881401
ff3c93512fec4d05b6c0259051ccfc666c059e25
80
md
Markdown
docs/content/icons/cloud-moon.md
kdisarno/icons
3d9840c29d87aedda278a2e649a9d2085280cf4f
[ "MIT" ]
5,864
2019-11-26T18:13:55.000Z
2022-03-31T18:45:31.000Z
docs/content/icons/cloud-moon.md
kdisarno/icons
3d9840c29d87aedda278a2e649a9d2085280cf4f
[ "MIT" ]
754
2019-11-26T20:20:21.000Z
2022-03-30T07:53:46.000Z
docs/content/icons/cloud-moon.md
kdisarno/icons
3d9840c29d87aedda278a2e649a9d2085280cf4f
[ "MIT" ]
1,093
2019-11-27T04:54:31.000Z
2022-03-30T18:36:59.000Z
--- title: Cloud moon categories: - Weather tags: - cloudy - overcast ---
8.888889
17
0.6125
ita_Latn
0.276093
ff3ce3d411e9dda965e7fd2981fd0ffc9a7a1775
9,290
md
Markdown
socrata/rikd-mt35.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
7
2017-05-02T16:08:17.000Z
2021-05-27T09:59:46.000Z
socrata/rikd-mt35.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
5
2017-11-27T15:40:39.000Z
2017-12-05T14:34:14.000Z
socrata/rikd-mt35.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
3
2017-03-03T14:48:48.000Z
2019-05-23T12:57:42.000Z
# Adult Arrests by County: Beginning 1970 ## Dataset | Name | Value | | :--- | :---- | | Catalog | [Link](https://catalog.data.gov/dataset/adult-arrests-by-county-beginning-1970) | | Metadata | [Link](https://data.ny.gov/api/views/rikd-mt35) | | Data: JSON | [100 Rows](https://data.ny.gov/api/views/rikd-mt35/rows.json?max_rows=100) | | Data: CSV | [100 Rows](https://data.ny.gov/api/views/rikd-mt35/rows.csv?max_rows=100) | | Host | data.ny.gov | | Id | rikd-mt35 | | Name | Adult Arrests by County: Beginning 1970 | | Attribution | New York State Division of Criminal Justice Services | | Category | Public Safety | | Tags | arrest, public safety, felony, misdemeanor, dwi | | Created | 2013-03-01T17:38:50Z | | Publication Date | 2016-04-13T22:00:56Z | ## Description The counts of arrests are derived from information transmitted from law enforcement agencies to the Division of Criminal Justice Services Computerized Criminal History database for fingerprintable offenses.An adult arrest is defined as an arrest of a person 16 years old or older or a juvenile offender prosecuted in adult court. Fingerprintable offenses (defined in Criminal Procedure Law ?160.10) include any felony, a misdemeanor defined in the penal law, a misdemeanor defined outside the penal law which would constitute a felony if such a person had a previous judgment of conviction for a crime, or loitering for the purpose of engaging in prostitution as defined in subdivision two of Penal Law ?240.37. ## Columns ```ls | Included | Schema Type | Field Name | Name | Data Type | Render Type | | ======== | ============== | ================= | ================= | ========= | =========== | | Yes | series tag | county | County | text | text | | Yes | time | year | Year | number | number | | Yes | numeric metric | total | Total | number | number | | Yes | numeric metric | felony_total | Felony Total | number | number | | Yes | numeric metric | drug_felony | Drug Felony | number | number | | Yes | numeric metric | violent_felony | Violent Felony | number | number | | Yes | numeric metric | dwi_felony | DWI Felony | number | number | | Yes | numeric metric | other_felony | Other Felony | number | number | | Yes | numeric metric | misdemeanor_total | Misdemeanor Total | number | number | | Yes | numeric metric | drug_misd | Drug Misd | number | number | | Yes | numeric metric | dwi_misd | DWI Misd | number | number | | Yes | numeric metric | property_misd | Property Misd | number | number | | Yes | numeric metric | other_misd | Other Misd | number | number | ``` ## Time Field ```ls Value = year Format & Zone = yyyy ``` ## Data Commands ```ls series e:rikd-mt35 d:1970-01-01T00:00:00.000Z t:county=Albany m:total=1222 m:misdemeanor_total=536 m:drug_misd=206 m:drug_felony=97 m:other_felony=394 m:dwi_felony=5 m:violent_felony=190 m:property_misd=95 m:dwi_misd=48 m:felony_total=686 m:other_misd=187 series e:rikd-mt35 d:1971-01-01T00:00:00.000Z t:county=Albany m:total=1831 m:misdemeanor_total=1003 m:drug_misd=204 m:drug_felony=130 m:other_felony=461 m:dwi_felony=6 m:violent_felony=231 m:property_misd=271 m:dwi_misd=111 m:felony_total=828 m:other_misd=417 series e:rikd-mt35 d:1972-01-01T00:00:00.000Z t:county=Albany m:total=3028 m:misdemeanor_total=1979 m:drug_misd=284 m:drug_felony=210 m:other_felony=577 m:dwi_felony=8 m:violent_felony=254 m:property_misd=539 m:dwi_misd=297 m:felony_total=1049 m:other_misd=859 ``` ## Meta Commands ```ls metric m:total p:integer l:Total d:"Number of adult felony and misdemeanor arrests" t:dataTypeName=number metric 
m:felony_total p:integer l:"Felony Total" d:"Number of adult arrests in which the top charge is an offense for which a sentence to a term of imprisonment in excess of one year may be imposed (Penal Law ?10.05)" t:dataTypeName=number metric m:drug_felony p:integer l:"Drug Felony" d:"Number of adult arrests in which the top charge is a felony listed in Penal Law ?220 (Controlled Substances) or ?221 (Marijuana)" t:dataTypeName=number metric m:violent_felony p:integer l:"Violent Felony" d:"Number of adult arrests in which the top charge is a Violent Felony Offense listed in Penal Law ?70.02" t:dataTypeName=number metric m:dwi_felony p:integer l:"DWI Felony" d:"Number of adult arrests in which the top charge is a Driving While Intoxicated felony listed in Vehicle and Traffic Law ?1192" t:dataTypeName=number metric m:other_felony p:integer l:"Other Felony" d:"Number of adult arrests in which the top charge is a felony not specified in another category" t:dataTypeName=number metric m:misdemeanor_total p:integer l:"Misdemeanor Total" d:"# of adult arrests in which top charge is an offense, not a traffic infraction, for which a sentence to a term of imprisonment in excess of 15 days may be imposed, but for which a sentence to term of imprisonment cannot exceed 1 year (Penal Law ?10.04)" t:dataTypeName=number metric m:drug_misd p:integer l:"Drug Misd" d:"Number of adult arrests in which the top charge is a Penal Law ?220 (Controlled Substances) or ?221 (Marijuana) misdemeanor" t:dataTypeName=number metric m:dwi_misd p:integer l:"DWI Misd" t:dataTypeName=number metric m:property_misd p:integer l:"Property Misd" d:"Number of arrests in which the top charge is a misdemeanor listed in Penal Law ?140, 145, 150, 155, and 165" t:dataTypeName=number metric m:other_misd p:integer l:"Other Misd" d:"Number of adult arrests in which the top charge is a misdemeanor not specified in another category" t:dataTypeName=number entity e:rikd-mt35 l:"Adult Arrests by County: Beginning 1970" t:attribution="New York State Division of Criminal Justice Services" t:url=https://data.ny.gov/api/views/rikd-mt35 property e:rikd-mt35 t:meta.view v:id=rikd-mt35 v:category="Public Safety" v:attributionLink=http://www.criminaljustice.ny.gov/crimnet/ojsa/arrests/index.htm v:averageRating=0 v:name="Adult Arrests by County: Beginning 1970" v:attribution="New York State Division of Criminal Justice Services" property e:rikd-mt35 t:meta.view.owner v:id=xzik-pf59 v:profileImageUrlMedium=/api/users/xzik-pf59/profile_images/THUMB v:profileImageUrlLarge=/api/users/xzik-pf59/profile_images/LARGE v:screenName="NY Open Data" v:profileImageUrlSmall=/api/users/xzik-pf59/profile_images/TINY v:displayName="NY Open Data" property e:rikd-mt35 t:meta.view.tableauthor v:id=xzik-pf59 v:profileImageUrlMedium=/api/users/xzik-pf59/profile_images/THUMB v:profileImageUrlLarge=/api/users/xzik-pf59/profile_images/LARGE v:screenName="NY Open Data" v:profileImageUrlSmall=/api/users/xzik-pf59/profile_images/TINY v:roleName=publisher v:displayName="NY Open Data" property e:rikd-mt35 t:meta.view.metadata.custom_fields.common_core v:Publisher="State of New York" v:[email protected] v:Contact_Name="Open Data NY" ``` ## Top Records ```ls | county | year | total | felony_total | drug_felony | violent_felony | dwi_felony | other_felony | misdemeanor_total | drug_misd | dwi_misd | property_misd | other_misd | | ====== | ==== | ===== | ============ | =========== | ============== | ========== | ============ | ================= | ========= | ======== | ============= | 
========== | | Albany | 1970 | 1222 | 686 | 97 | 190 | 5 | 394 | 536 | 206 | 48 | 95 | 187 | | Albany | 1971 | 1831 | 828 | 130 | 231 | 6 | 461 | 1003 | 204 | 111 | 271 | 417 | | Albany | 1972 | 3028 | 1049 | 210 | 254 | 8 | 577 | 1979 | 284 | 297 | 539 | 859 | | Albany | 1973 | 3568 | 1133 | 244 | 274 | 28 | 587 | 2435 | 369 | 497 | 667 | 902 | | Albany | 1974 | 4244 | 1324 | 281 | 307 | 17 | 719 | 2920 | 434 | 618 | 884 | 984 | | Albany | 1975 | 4167 | 1256 | 209 | 343 | 12 | 692 | 2911 | 399 | 461 | 975 | 1076 | | Albany | 1976 | 4586 | 1430 | 201 | 431 | 26 | 772 | 3156 | 359 | 573 | 1008 | 1216 | | Albany | 1977 | 4810 | 1337 | 122 | 403 | 45 | 767 | 3473 | 270 | 857 | 1132 | 1214 | | Albany | 1978 | 5754 | 1481 | 85 | 431 | 58 | 907 | 4273 | 156 | 1536 | 1330 | 1251 | | Albany | 1979 | 6525 | 1657 | 144 | 513 | 65 | 935 | 4868 | 223 | 1845 | 1416 | 1384 | ```
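For readers who want to pull this dataset programmatically rather than through the links above, the following is a minimal sketch that reads the CSV endpoint listed in the Dataset table (`rows.csv?max_rows=100`). It is illustrative only and not part of the original catalog entry; it uses just the Python standard library and assumes the exported column headers match the display names shown in the Columns section (for example `County`, `Year`, `Total`).

```python
import csv
import io
import urllib.request

# CSV endpoint taken from the Dataset table above (first 100 rows only).
URL = "https://data.ny.gov/api/views/rikd-mt35/rows.csv?max_rows=100"

with urllib.request.urlopen(URL) as response:
    text = response.read().decode("utf-8")

rows = list(csv.DictReader(io.StringIO(text)))

# Example: print total adult arrests per year for Albany County,
# matching the sample rows shown in Top Records.
for row in rows:
    if row.get("County") == "Albany":
        print(row["Year"], row["Total"])
```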
82.946429
712
0.616577
eng_Latn
0.68768
ff3cebb7dccdd02769d4a8075cc7b451974cf02f
151
md
Markdown
README.md
NARUTOne-note/react-study-admin
fd88df3785fb0e4bb1a051f68599673532c299e2
[ "BSD-3-Clause" ]
null
null
null
README.md
NARUTOne-note/react-study-admin
fd88df3785fb0e4bb1a051f68599673532c299e2
[ "BSD-3-Clause" ]
12
2021-03-11T01:35:55.000Z
2021-10-14T08:45:27.000Z
README.md
NARUTOne-note/react-study-admin
fd88df3785fb0e4bb1a051f68599673532c299e2
[ "BSD-3-Clause" ]
null
null
null
# React > :rocket: Notes for recording and studying some of the latest React-related technologies - [React](https://reactjs.org/) - [React UI design](https://overreacted.io/zh-hans/react-as-a-ui-runtime/)
21.571429
71
0.662252
kor_Hang
0.286254
ff3d113a14259ecc20f380990c851a8308470cec
2,976
md
Markdown
README.md
ajanicij/yt-timedtext-convert
ce7e5a763fef360891501931ab6ba99f6f9a399d
[ "Apache-2.0" ]
2
2019-10-16T18:53:16.000Z
2019-10-22T16:49:02.000Z
README.md
ajanicij/yt-timedtext-convert
ce7e5a763fef360891501931ab6ba99f6f9a399d
[ "Apache-2.0" ]
2
2019-10-16T19:12:45.000Z
2021-09-25T12:07:34.000Z
README.md
ajanicij/yt-timedtext-convert
ce7e5a763fef360891501931ab6ba99f6f9a399d
[ "Apache-2.0" ]
3
2019-10-16T18:53:19.000Z
2021-02-14T11:05:55.000Z
# yt-timedtext-convert Web application that converts YouTube timedtext.xml to .srt file This project contains JavaScript code that converts YouTube timedtext.xml to .srt file. First, what is timedtext.xml? That is a file that contains subtitles of a YouTube video. It is an XML file that looks like this: <?xml version="1.0" encoding="UTF-8"?> <timedtext format="3"> <head> <pen id="1" fc="#E5E5E5"/> <pen id="2" fc="#CCCCCC"/> <ws id="0"/> <ws id="1" mh="2" ju="0" sd="3"/> <wp id="0"/> <wp id="1" ap="6" ah="20" av="100" rc="2" cc="40"/> </head> <body> <w t="0" id="1" wp="1" ws="1"/> <p t="6319" d="3721" w="1"><s p="2" ac="184">non</s><s t="1000" ac="243"> mais</s><s t="1181" ac="246"> je</s><s t="1301" ac="222"> n'ai</s><s t="1451" ac="252"> pas</s><s p="1" t="1600" ac="216"> là</s></p> <p t="14629" w="1" a="1"> </p> <p t="14639" d="3370" w="1"><s p="1" ac="218">je</s><s t="1000" ac="241"> suis</s><s t="1271" ac="252"> pas</s><s t="1361" ac="246"> agressif</s><s p="2" t="1511" ac="161"> celle</s><s t="1810" ac="236"> qui</s><s p="1" t="1931" ac="207"> m'agresse</s></p> <p t="16769" d="1240" w="1" a="1"> </p> ... <p t="679130" d="4620" w="1"><s ac="239">merci</s><s t="1000" ac="252"> à</s><s t="1030" ac="252"> vous</s><s t="1180" ac="242"> merci</s><s t="1960" ac="239"> merci</s></p> </body> </timedtext> When we download the YouTube video, say as an MP4 file, it doesn't contain subtitles. If we play it with VLC, we can add subtitles; but VLC expects subtitles in .srt format. For the XML file above, a matching .srt file would look like this: 1 00:00:06,319 --> 00:00:15,629 non mais je n'ai pas là 2 00:00:15,639 --> 00:00:17,769 je suis pas agressif celle qui m'agresse ... 309 00:11:14,010 --> 00:11:19,120 dans ma vie je suis plus instinctive So, this project contains code (in JavaScript) that converts timedtext.xml to a .srt file. But, it has a twist. I wanted to make it as easy for you to use as possible. You don't have to install anything, just open your browser and open the GitHub Pages page of this project: https://ajanicij.github.io/yt-timedtext-convert/ In that page, browse for the timedtext.xml file that you want to convert, click on button CONVERT, and when the conversion is done, click on the link beneath the button and save the converted file. Oh yes, I also wanted to make it as easy for me as possible, so I am not running any server for this - it's all in that one static page. Just in case, here's how I am getting timedtext from YouTube videos: open the page https://www.addictivetips.com/web/3-ways-to-download-subtitles-from-a-youtube-video/ and look at the section "Your Browser". Briefly, in Firefox or Chrome open the developer tools, click on the Network tab, then enable closed caption in the video and open the downloaded file in a new tab. Then you can save it to a file.
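The repository's converter runs as JavaScript in a static page, but the mapping it performs can be sketched in a few lines. The snippet below is a simplified Python illustration of the same idea, written only from the XML and .srt samples above (each `<p>` element carries a start time `t` and a duration `d` in milliseconds, and its `<s>` children carry the words); it is not the project's actual code. Note that the sample .srt output above ends each block at the start of the next caption rather than at `t + d`, so the real converter's timing logic differs slightly from this naive version.

```python
import xml.etree.ElementTree as ET


def ms_to_srt(ms: int) -> str:
    """Format milliseconds as an SRT timestamp, e.g. 00:00:06,319."""
    hours, rest = divmod(ms, 3_600_000)
    minutes, rest = divmod(rest, 60_000)
    seconds, millis = divmod(rest, 1_000)
    return f"{hours:02}:{minutes:02}:{seconds:02},{millis:03}"


def timedtext_to_srt(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    blocks = []
    index = 1
    for p in root.iter("p"):
        start = int(p.get("t", "0"))
        duration = int(p.get("d", "0"))
        # The words are held in the <s> children; <p> elements without
        # text are filler entries and are skipped.
        words = "".join(s.text or "" for s in p.findall("s")).strip()
        if not words or duration == 0:
            continue
        blocks.append(
            f"{index}\n{ms_to_srt(start)} --> {ms_to_srt(start + duration)}\n{words}\n"
        )
        index += 1
    return "\n".join(blocks)


if __name__ == "__main__":
    with open("timedtext.xml", encoding="utf-8") as f:
        print(timedtext_to_srt(f.read()))
```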
40.216216
260
0.639449
eng_Latn
0.895921
ff3d55a869eb188335eb65404b75b36b443e33af
5,839
md
Markdown
powerbi-docs/service-admin-premium-purchase.md
hongju/powerbi-docs.ko-kr
4769532ee0ecbb362e8bd364c57f5e250886df43
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/service-admin-premium-purchase.md
hongju/powerbi-docs.ko-kr
4769532ee0ecbb362e8bd364c57f5e250886df43
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/service-admin-premium-purchase.md
hongju/powerbi-docs.ko-kr
4769532ee0ecbb362e8bd364c57f5e250886df43
[ "CC-BY-4.0", "MIT" ]
3
2020-01-21T14:11:01.000Z
2020-01-21T19:28:54.000Z
--- title: How to purchase Power BI Premium description: Learn how you can manage Power BI Premium and enable access to content for your entire organization. author: mgblythe manager: kfile ms.reviewer: '' ms.service: powerbi ms.component: powerbi-admin ms.topic: conceptual ms.date: 10/17/2017 ms.author: mblythe LocalizationGroup: Premium ms.openlocfilehash: 2789f2e3e8198ddc0363fb07488f5fe8f39441a6 ms.sourcegitcommit: fbb7924603f8915d07b5e6fc8f4d0c7f70c1a1e1 ms.translationtype: HT ms.contentlocale: ko-KR ms.lasthandoff: 08/02/2018 ms.locfileid: "34297380" --- # <a name="how-to-purchase-power-bi-premium"></a>How to purchase Power BI Premium Learn how to purchase Power BI Premium capacity for your organization. <iframe width="640" height="360" src="https://www.youtube.com/embed/NkvYs5Qp4iA?rel=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe> You can purchase Power BI Premium capacity nodes through the Office 365 admin center. Your organization can also contain any combination of Premium capacity SKUs (P1 through P3), and each combination provides a different set of resource capabilities. For more information about Power BI Premium, see [What is Power BI Premium?](service-premium.md). To see current Power BI pricing, see the [Power BI pricing page](https://powerbi.microsoft.com/pricing/). You can also plan your Power BI Premium costs with the [Power BI Premium calculator](https://powerbi.microsoft.com/calculator/). > [!IMPORTANT] > Even if you purchase Power BI Premium, content creators still need a Power BI Pro license. > > ## <a name="create-a-new-tenant-with-power-bi-premium-p1"></a>Create a new tenant with Power BI Premium P1 If you do not have an existing tenant and want to create one, you can purchase Power BI Premium at the same time. The following link guides you through creating a new tenant with Office 365 and lets you purchase Power BI Premium. After the tenant is created, you still need to purchase Power BI Pro licenses for your users. When you create a tenant, you automatically become the global administrator for that tenant. To purchase this license, see the [Power BI Premium P1 offer](https://signup.microsoft.com/Signup?OfferId=b3ec5615-cc11-48de-967d-8d79f7cb0af1). ![](media/service-admin-premium-purchase/premium-purchase-with-tenant.png) ## <a name="purchase-a-power-bi-premium-capacity-for-an-existing-organization"></a>Purchase a Power BI Premium capacity for an existing organization If you have an existing organization, you must be a global administrator or a billing administrator to purchase subscriptions and licenses. For more information, see [About Office 365 admin roles](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d). To purchase a Premium capacity, do the following. 1. Within the Power BI service, select the **Office 365 app picker** > **Admin**. Alternatively, you can browse to the Office 365 admin center: go to https://portal.office.com and select **Admin**. ![](media/service-admin-premium-purchase/o365-app-picker.png) 2. Select **Billing** > **Purchase services**. 3. Under **Other plans**, find the Power BI Premium offers. They are listed as P1 through P3, EM3, and P1 (monthly). 4. Hover over the **ellipsis (...)** and select **Buy now**. ![](media/service-admin-premium-purchase/premium-purchase.png) 5. Follow the steps to complete the purchase. You can also select the following links to go directly to the purchase pages for these items. For more information about these SKUs, see [What is Power BI Premium?](service-premium.md#premiumskus). To purchase a Power BI Premium SKU, you must be a ***global or billing administrator*** within the tenant. If you are not an administrator, selecting the links below results in an error. | Direct purchase links | | --- | | [EM3 (monthly) SKU](https://portal.office.com/commerce/completeorder.aspx?OfferId=4004702D-749C-4F74-BF47-3048F1833780&adminportal=1) | | [P1 SKU](https://portal.office.com/commerce/completeorder.aspx?OfferId=b3ec5615-cc11-48de-967d-8d79f7cb0af1&adminportal=1) | | [P1 (monthly) SKU](https://portal.office.com/commerce/completeorder.aspx?OfferId=E4C8EDD3-74A1-4D42-A738-C647972FBE81&adminportal=1) | | [P2 SKU](https://portal.office.com/commerce/completeorder.aspx?OfferId=062F2AA7-B4BC-4B0E-980F-2072102D8605&adminportal=1) | | [P3 SKU](https://portal.office.com/commerce/completeorder.aspx?OfferId=40c7d673-375c-42a1-84ca-f993a524fed0&adminportal=1) | After you complete the purchase, the Purchase services screen shows that the item has been purchased and is active. 
![](media/service-admin-premium-purchase/premium-purchased.png) You can now manage this capacity within the Power BI admin center. For more information, see [Manage Power BI Premium](service-admin-premium-manage.md). ## <a name="purchase-more-capacities"></a>Purchase more capacities If you are in **Premium settings** in the Power BI admin portal and you are an administrator, a **Purchase More** button is displayed. This button takes you to the Office 365 portal. Once you are in the Office 365 admin center, you can do the following. 1. Select **Billing** > **Purchase services**. 2. Under **Other plans**, find the Power BI Premium item that you want to purchase more of. 3. Hover over the **ellipsis (...)** and select **Change license quantity**. ![](media/service-admin-premium-purchase/premium-purchase-more.png) 4. Change the number of instances that you want to have for this item. Then select **Submit** when you are done. > [!IMPORTANT] > Selecting **Submit** charges the credit card on file. > > The **Purchase services** page shows the number of instances that you have. Under **Capacity settings** in the Power BI admin portal, the available v-cores reflect the new capacity that you purchased. ![Available v-cores for Power BI Premium capacity](media/service-admin-premium-purchase/premium-capacities.png) You can now manage this capacity within the Power BI admin center. For more information, see [Manage Power BI Premium](service-admin-premium-manage.md). ## <a name="cancel-your-subscription"></a>Cancel your subscription You can cancel your subscription within the Office 365 admin center. To cancel your Premium subscription, do the following. ![](media/service-admin-premium-purchase/premium-cancel-subscription.png "Cancel Premium subscription") 1. Go to the Office 365 admin center. 2. Select **Billing** > **Subscriptions**. 3. Select your Power BI Premium subscription from the list. 4. On the **More actions** drop-down, select **Cancel subscription**. ![](media/service-admin-premium-purchase/o365-more-actions.png) 5. The **Cancel subscription** page indicates whether you are responsible for an [early termination fee](https://support.office.com/article/early-termination-fees-6487d4de-401a-466f-8bc3-c0beb5cc40d3). This page also tells you when the data for the subscription will be deleted. 6. Read the information carefully, and select **Cancel subscription** to continue. ## <a name="next-steps"></a>Next steps [Power BI pricing page](https://powerbi.microsoft.com/pricing/) [Power BI Premium calculator](https://powerbi.microsoft.com/calculator/) [What is Power BI Premium?](service-premium.md) [Manage Power BI Premium](service-admin-premium-manage.md) [Power BI Premium FAQ](service-premium-faq.md) [Power BI Premium release notes](service-premium-release-notes.md) [Microsoft Power BI Premium whitepaper](https://aka.ms/pbipremiumwhitepaper) [Planning a Power BI Enterprise Deployment whitepaper](https://aka.ms/pbienterprisedeploy) [Power BI admin portal](service-admin-portal.md) [Administering Power BI in your organization](service-admin-administering-power-bi-in-your-organization.md) More questions? [Ask the Power BI Community.](http://community.powerbi.com/)
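The direct purchase links in the table earlier in this article all follow a single URL pattern that differs only by OfferId. Purely as an illustration (this snippet is not part of the original article and does not call any Microsoft API), the short Python sketch below rebuilds those links from the OfferIds listed above; remember that they only work for a global or billing administrator.

```python
# OfferId values copied verbatim from the purchase-link table in this article.
OFFER_IDS = {
    "EM3 (monthly)": "4004702D-749C-4F74-BF47-3048F1833780",
    "P1": "b3ec5615-cc11-48de-967d-8d79f7cb0af1",
    "P1 (monthly)": "E4C8EDD3-74A1-4D42-A738-C647972FBE81",
    "P2": "062F2AA7-B4BC-4B0E-980F-2072102D8605",
    "P3": "40c7d673-375c-42a1-84ca-f993a524fed0",
}

BASE_URL = "https://portal.office.com/commerce/completeorder.aspx"

# Reproduce the direct purchase links shown in the table above.
for sku, offer_id in OFFER_IDS.items():
    print(f"{sku} SKU: {BASE_URL}?OfferId={offer_id}&adminportal=1")
```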
47.471545
274
0.725124
kor_Hang
0.999947
ff3d7d1117b7ecdec69fcdfa5905caa9de17b7c7
118,573
md
Markdown
intl.en-US/Release notes/Release notes.md
roura356a/csk
4092ff464596cdb5ed2bdf39a8eb3764f8a42015
[ "MIT" ]
null
null
null
intl.en-US/Release notes/Release notes.md
roura356a/csk
4092ff464596cdb5ed2bdf39a8eb3764f8a42015
[ "MIT" ]
null
null
null
intl.en-US/Release notes/Release notes.md
roura356a/csk
4092ff464596cdb5ed2bdf39a8eb3764f8a42015
[ "MIT" ]
null
null
null
--- keyword: [release notes, K8s, new features] --- # Release notes This topic describes the release notes for Container Service for Kubernetes \(ACK\) and provides links to the relevant references. - ACK supports Kubernetes V1.20.4, V1.18.8, and V1.16.9. - ACK supports the following operating systems: CentOS 7.x, Alibaba Cloud Linux 2.1903, Windows Server 2019, and Windows Server version 1909. ## July 2021 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes version upgrade|Upgrade from Kubernetes 1.18 to Kubernetes 1.20 is supported.|All regions|[Upgrade a cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Manage clusters/Upgrade a cluster.md) and [Kubernetes 1.20 release notes](/intl.en-US/Release notes/Kubernetes release notes/Kubernetes 1.20 release notes.md)| |CoreDNS|CoreDNS is supported on the Add-ons page of the console. CoreDNS is the default plug-in used to implement DNS-based service discovery in ACK clusters and edge Kubernetes clusters. CoreDNS provides domain name resolutions for services within the clusters.|All regions|[CoreDNS](/intl.en-US/Release notes/System Component change Records/Networking/CoreDNS.md)| |Cost analysis|The cost analysis feature is improved to provide resource usage trends and the cost estimation of individual CPU cores per unit time for applications and pods based on namespace.|All regions|[Cost analysis](/intl.en-US/User Guide for Kubernetes Clusters/Cost analysis/Cost analysis.md)| |Cluster security|The security of registered Kubernetes clusters is enhanced. You can install security-inspector, aliyun-acr-credential-helper, and gatekeeper in registered Kubernetes clusters. security-inspector is used to perform security scans. aliyun-acr-credential-helper is used to pull images without passwords. gatekeeper is used to manage Open Policy Agent \(OPA\) policies.|All regions|[Overview of registered clusters](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Management of external clusters/Overview of registered clusters.md)| ## June 2021 |Feature|Description|Region|References| |-------|-----------|------|----------| |Resource group|The resource group can be selected from a drop-down list when you create the cluster or node pool in the console. The cluster and Elastic Compute Service \(ECS\) instances in the cluster are grouped into the selected resource group. Previously, the resource group was selected at the top of the console. The resource group that you select at the top of the console is used to filter resources displayed on the page, such as virtual private clouds \(VPCs\).|All regions|N/A| |Network policy|Kubernetes network policies can be used to configure policy-based network control. You can use network policies to control traffic at the IP address or port level. ACK provides a visual interface that you can use to configure network policies in a convenient manner.|All regions|[Use network policies](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use network policies.md)| |ACK Terway Hubble|ACK Terway Hubble can be deployed in clusters by using App Catalog. ACK Terway Hubble is a network architecture, workload, and topology observability platform. 
You can deploy ACK Terway Hubble in a managed Kubernetes cluster to gain observability into the network traffic and network policies.|All regions|[Implement network observability by using ACK Terway Hubble](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Monitoring management/Implement network observability by using ACK Terway Hubble.md)| |Cost analysis|Cost allocations and trends of resources, applications, and containers can be provided at the node pool level. The cost analysis feature also provides cost optimization suggestions based on the current cost and the sales strategies of node pools.|All regions|[Cost analysis](/intl.en-US/User Guide for Kubernetes Clusters/Cost analysis/Cost analysis.md)| |Auto scaling|The scan interval is added to auto scaling configurations. The system evaluates whether scaling is required at the scan interval. You can specify 15 seconds, 30 seconds, and 1 minute as the scan interval.|All regions|[Auto scaling of nodes](/intl.en-US/User Guide for Kubernetes Clusters/Auto Scaling/Auto scaling of nodes.md)| |ASK|Custom subject alternative names \(SANs\) can be modified for the API server certificate of a serverless Kubernetes \(ASK\) cluster. This allows you to update the information about the API server certificate, such as the domain name, IP address, and URL, after the ASK cluster is created.|All regions|[Update the SAN of the API server certificate for an existing ACK cluster](/intl.en-US/User Guide for Kubernetes Clusters/Security management/Infrastructure security/Customize the SAN of the API server certificate when you create an ACK cluster.md)| |Cluster security|The inspection feature can be used to detect security risks in the workloads of a registered Kubernetes cluster.|All regions|[Use the inspection feature to check for security risks in the workloads of a registered Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Observability of external clusters/Use the inspection feature to check for security risks in the workloads of a registered Kubernetes cluster.md)| |Topology-aware scheduling|The following scheduling policies are supported by topology-aware CPU scheduling:- Dynamically adjust resource water marks to improve the resource utilization of workloads with different priorities. - Use the Last Level Cache \(L3 cache\) and Memory Bandwidth Allocation \(MBA\) to improve the resource isolation of tasks with different priorities. |All regions|- [Dynamically adjust resource water marks to improve the resource utilization of workloads with different priorities](/intl.en-US/User Guide for Kubernetes Clusters/Scheduling/Resource scheduling/Dynamically adjust resource water marks to improve the resource utilization of workloads with different priorities.md) - [Use the L3 cache and MBA to improve the resource isolation of tasks with different priorities](/intl.en-US/User Guide for Kubernetes Clusters/Scheduling/Resource scheduling/Use the L3 cache and MBA to improve the resource isolation of tasks with different priorities.md) | ## May 2021 |Feature|Description|Region|References| |-------|-----------|------|----------| |Worker node|Center for Internet Security \(CIS\) reinforcement is supported for worker nodes. You can enable CIS reinforcement to enhance OS security for cluster nodes. CIS is a third-party security organization that is committed to leading a global community of enterprises, public service sectors, and academia to develop security best practice solutions. 
CIS reinforcement supports only Alibaba Cloud Linux 2, which is the official OS image of Alibaba Cloud and the default OS image used in ACK clusters. |All regions|[CIS reinforcement](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Operating system/CIS reinforcement.md)| |New region|Professional managed Kubernetes clusters are available in Nanjing Local Region.|Nanjing Local Region|N/A| |New region|Professional managed Kubernetes clusters are available in the China North 2 Ali Gov region on Alibaba Gov Cloud.|China North 2 Ali Gov|[Supported regions](/intl.en-US/Product Introduction/Supported regions.md)| |Cost analysis|The cost analysis feature is added to help IT administrators analyze resource usage and allocate costs. This feature offers suggestions on cost savings and helps improve resource utilization. This feature provides the following services:- Cost analysis of cloud resources - Cost trend analysis - Suggestions on cost savings - Real-time cost forecasting - Cost allocation based on namespaces - Optimization of application costs |All regions|[Cost analysis](/intl.en-US/User Guide for Kubernetes Clusters/Cost analysis/Cost analysis.md)| |Custom SSL certificate|Custom SSL certificates can be specified for Server Load Balancer \(SLB\) instances by using annotations when you create Ingresses in ASK clusters. The SSL certificates are no longer forcibly specified by using Secrets.|All regions|N/A| |Topology-aware scheduling|resource-controller V1.2.1-d1e280f-aliyun is released. This component works with ack-sceduler of Kubernetes 1.20.4 to support the topology-aware scheduling of AMD CPUs.|All regions|[Topology-aware CPU scheduling](/intl.en-US/User Guide for Kubernetes Clusters/Scheduling/Resource scheduling/Topology-aware CPU scheduling.md)| ## April 2021 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.20|Kubernetes 1.20 is supported. You can select this Kubernetes version when you create a cluster.|All regions|[Kubernetes 1.20 release notes](/intl.en-US/Release notes/Kubernetes release notes/Kubernetes 1.20 release notes.md)| |Hot migration|Hot migration from existing dedicated Kubernetes clusters to professional Kubernetes clusters is supported. You can dynamically migrate workloads from dedicated Kubernetes clusters to professional Kubernetes clusters without service interruptions.|All regions|[Hot migration from dedicated Kubernetes clusters to professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Migrate to professional managed Kubernetes clusters/Hot migration from dedicated Kubernetes clusters to professional managed Kubernetes clusters.md)| |NodeLocal DNSCache|ACK NodeLocal DNSCache is a local DNS cache solution developed based on the open source NodeLocal DNSCache project. This solution consists of a DNS caching agent that runs as a DaemonSet and an admission controller that runs as a Deployment to dynamically inject data to DNSConfig. The admission controller listens on pod creation requests and dynamically modifies DNSConfig. 
This enables pods to use local cache to accelerate DNS lookups.|All regions|[ACK NodeLocal DNSCache](/intl.en-US/Release notes/System Component change Records/Networking/ACK NodeLocal DNSCache.md)| |Registered Kubernetes cluster|The Kubernetes event center feature and the aliyun-acr-credential-helper component are supported in registered Kubernetes clusters.|All regions|[Create a cluster registration proxy and register an on-premises cluster](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Management of external clusters/Create a cluster registration proxy and register an on-premises cluster.md) and [Pull images without a password in a self-managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Management of external clusters/Pull images without a password in a self-managed Kubernetes cluster.md)| |Control plane|Custom control plane parameters are supported in professional Kubernetes clusters to meet the requirements for modifying control plane parameters in production environments. You can modify the parameters of kube-apiserver and kube-controller-manager based on your requirements.|All regions|[Customize the settings of control plane components in professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Professional Kubernetes clusters/Customize the settings of control plane components in professional managed Kubernetes clusters.md)| |Alerting|The alerting feature is added to enable centralized alert management. ACK allows you to configure alerts to centrally manage exceptions in the cluster and provides various metrics for different scenarios. By default, the alerting feature is enabled when you create clusters. ACK allows you to deploy CRDs in a cluster to configure and manage alert rules.|All regions|[Alert management](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Alert management.md)| ## March 2021 |Feature|Description|Region|References| |-------|-----------|------|----------| |Node pool|Information about nodes in a node pool can be exported as comma-separated values \(CSV\) files. This facilitates the O&M of nodes in a node pool.|All regions|[Manage node pools](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node pool management/Manage node pools.md)| |Managed Kubernetes cluster|Updates to the SANs in the API server certificates are supported for standard and professional managed Kubernetes clusters.|All regions|[Customize the SAN of the API server certificate when you create an ACK cluster](/intl.en-US/User Guide for Kubernetes Clusters/Security management/Infrastructure security/Customize the SAN of the API server certificate when you create an ACK cluster.md)| |Cluster access|Temporary kubeconfig files are supported for access to ACK clusters. The validity period of a temporary kubeconfig file used to access an ACK cluster ranges from 30 minutes to three days. This meets the requirements for temporary access to ACK clusters.|All regions|[t16645.md\#](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Access clusters/Connect to ACK clusters by using kubectl.md)| |containerd|The containerd runtime is supported by ACK. You can select containerd as the container runtime when you create a cluster. You can also select containerd when you create a regular node pool or a managed node pool. This allows you to deploy both containerd containers and Docker containers in a cluster. 
Hot migration from Docker containers to containerd containers is not supported. To migrate from Docker containers to containerd containers, you must recreate pods.|All regions|[Release notes for containerd](/intl.en-US/Release notes/Release notes for Runtime/Release notes for containerd.md)| ## February 2021 |Feature|Description|Region|References| |-------|-----------|------|----------| |Professional edge Kubernetes cluster|Professional edge Kubernetes clusters can be created. This type of cluster provides the same reliability and stability as professional Kubernetes clusters. The billing methods of professional edge Kubernetes clusters are also the same as the that of professional Kubernetes clusters.|All regions|[Introduction to professional edge Kubernetes clusters](/intl.en-US/User Guide for Edge Container Service/ACK@Edge Pro edition cluster/Introduction to professional edge Kubernetes clusters.md)| |Log Center|Log Center is available in the ACK console. You can check the log of a cluster and the logs of control plane components in Log Center.|All regions|[View the logs of control plane components](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Log management/Collect the logs of control plane components in a managed Kubernetes cluster.md) and [View cluster logs](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Manage clusters/View cluster information.md)| |Prometheus monitoring|A CoreDNS dashboard is displayed on the Prometheus Monitoring page in the ACK console.|All regions|[Enable ARMS Prometheus](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Monitoring management/Enable ARMS Prometheus.md)| |Node pool|Public IPv4 addresses can be associated to regular node pools and managed node pools. When you create a regular node pool or a managed node pool, you can enable the nodes to automatically associate with elastic IP addresses \(EIPs\). This enables the nodes to access the Internet. You can also configure a NAT gateway when you create a cluster to enable all nodes in the cluster to access the Internet by using the NAT gateway.|All regions|[Manage node pools](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node pool management/Manage node pools.md)| |New region|Professional Kubernetes clusters are available in the China South 1 Finance region.|All regions|[Introduction to professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Professional Kubernetes clusters/Introduction to professional managed Kubernetes clusters.md)| ## January 2021 |Feature|Description|Region|References| |-------|-----------|------|----------| |Observability|The observabilities of the API server and etcd control components are enabled in professional Kubernetes clusters. You can observe these components in monitoring dashboards and receive alerts upon exceptions. This allows you to detect system exceptions and potential risks, and provides information to help you implement measures to ensure the stability of Kubernetes clusters. |All regions |[Enable ARMS Prometheus](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Monitoring management/Enable ARMS Prometheus.md)| |Custom parameter|Custom parameters are supported for kube-apiserver and kube-controller-manager in professional Kubernetes clusters. 
This meets the requirements for custom parameters of cluster control components in production environments.|All regions |[Customize the settings of control plane components in professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Professional Kubernetes clusters/Customize the settings of control plane components in professional managed Kubernetes clusters.md)| |Log collection|Logs of control components, such as kube-apiserver, kube-controller-manager, and kube-scheduler, can be collected. To enable log collection, select Enable for Log Collection for Control Plane Components when you create a cluster. This helps you monitor the cluster status and detect anomalies in the cluster. |All regions |[View the logs of control plane components](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Log management/Collect the logs of control plane components in a managed Kubernetes cluster.md)| |Preemptible instance|Preemptible instances are supported when you set the billing method of a node pool. Preemptible instances are cost-effective. You can bid for idle resources of Alibaba Cloud, obtain the resources, and then run containers until the resources are reclaimed due to higher bids from other customers. This reduces the cost of computing resources.|All regions |[Set the ratio of preemptible instances to existing instances in a node pool](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node pool management/Set the ratio of preemptible instances to existing instances in a node pool.md)| |Edge node pool|Edge node pools are supported in edge Kubernetes clusters. You can abstract a set of nodes with one or more identical attributes into an edge node pool for an edge Kubernetes cluster. This way, you can use the edge node pool to manage and perform O&M operations on nodes from different regions in a unified manner. An edge node pool uses the basic or enhanced coordination network between the cloud and edge. The enhanced coordination network is built by using the software-defined networking \(SDN\) solution of ACK@Edge, and allows you to coordinate cloud and edge computing in a secure and fast network environment. This allows applications deployed in edge node pools to access the cloud through the VPC where the cluster is deployed. Compared with the basic coordination network, the enhanced coordination network provides higher network quality and improves data security. |All regions |[Overview of edge node pools](/intl.en-US/User Guide for Edge Container Service/Cell-based management at the edge/Manage edge node pools/Overview of edge node pools.md)| |Elastic node pool|Node pools are supported in registered Kubernetes clusters. You can use a node pool to manage a set of ECS instances with the same attributes. You can also add them to a self-managed Kubernetes cluster or a Kubernetes cluster that is deployed in the public cloud of a third-party cloud service provider. This allows you to schedule resources in the cloud, data centers, and self-managed Kubernetes clusters in a unified, flexible, and cost-effective manner.|All regions |[Configure auto scaling](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Management of external clusters/Configure auto scaling.md)| |Application backup|The application backup feature is released. This feature meets the critical requirement for data security in Kubernetes clusters where an increasing number of applications are deployed. 
You can use application backups to restore applications that are accidentally disrupted for a long period of time. Different from the traditional single-server backup and disk backup, the application backup feature is used to back up applications and the relevant data, resource objects, and configurations. You can also use this feature to back up all resources in a namespace. This feature is available in ACK clusters and registered Kubernetes clusters. You can use this feature to back up applications, volumes, and persistent volumes \(PVs\) in a cluster, and also restore backups to other clusters.|All regions |[Install the application backup component](/intl.en-US/User Guide for Kubernetes Clusters/Disaster recovery center/Install the application backup component.md)| |Cost reduction policy|The ratio of preemptible instances to pay-as-you-go instances can be set in a node pool. This allows you to reduce the cost. However, you must make sure that the node pool has enough pay-as-you-go instances to ensure performance stability.|All regions |[Set the ratio of preemptible instances to existing instances in a node pool](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node pool management/Set the ratio of preemptible instances to existing instances in a node pool.md)| ## December 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|ACK is now available in the China \(Guangzhou\) region. |China \(Guangzhou\)|[Limits](/intl.en-US/Product Introduction/Limits.md)| |Hot migration|Hot migration from existing standard managed Kubernetes clusters to professional Kubernetes clusters is supported. Your services are not affected during the migration. Professional managed Kubernetes clusters are developed based on managed Kubernetes clusters. This type of cluster provides higher reliability and security in large-scale production environments for enterprise users. Professional managed Kubernetes clusters are also covered by service level agreements \(SLAs\) that include compensation clauses. |All regions |[Hot migration from standard managed Kubernetes clusters to professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Migrate to professional managed Kubernetes clusters/Hot migration from standard managed Kubernetes clusters to professional managed Kubernetes clusters.md)| |SLB specification|The specification of the SLB instance that is used to access the API server can be selected when you create an ACK cluster. You can select different SLB specifications based on your business requirements. This allows you to handle different traffic loads on the API server of the cluster. |All regions |[Create a professional managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Professional Kubernetes clusters/Create a professional managed Kubernetes cluster.md)| |Preemptible instance|Preemptible instances are supported when you set the billing method of a node pool. Preemptible instances are cost-effective. You can bid for unused resources of Alibaba Cloud, obtain the resources, and then run containers until the container resources are reclaimed due to higher bids from other customers. 
This reduces the costs of elastic container instances in some scenarios.|All regions |N/A| |Kubernetes 1.18|Upgrades from Kubernetes 1.16 to 1.18 are supported.|All regions |[Upgrade a cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Manage clusters/Upgrade a cluster.md)| |CronHPA|Cron Horizontal Pod Autoscaler \(CronHPA\) can be enabled in the ACK console for your workloads. You must install ack-kubernetes-cronhpa-controller in the cluster before you enable CronHPA.|All regions |[CronHPA](/intl.en-US/User Guide for Kubernetes Clusters/Auto Scaling/CronHPA.md)| |CentOS 7.8|CentOS 7.8 can be used as the node OS when you create a cluster or a node pool.|All regions |[Manage node pools](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node pool management/Manage node pools.md)| |Reinforcement based on classified protection|Reinforcement based on classified protection is supported for the cloud-native Alibaba Cloud Linux operating system in compliance with Multi-Level Protection Scheme \(MLPS\) 2.0 level 3 standards. The following features are provided:- Implement identity authentication - RAM - Security auditing - Intrusion prevention - Malicious code protection To enable reinforcement based on classified protection for the node OS when you create a cluster or a node pool, you must select Alibaba Cloud Linux 2.1903 as the node OS and select **Reinforcement based on classified protection**. |All regions | | |Volume snapshot|Volume snapshots created from disks are supported by the Container Storage Interface \(CSI\) component of ACK. This allows you to back up and restore workload data.|All regions |[Use volume snapshots created from disks](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-CSI/Disk volumes/Use volume snapshots created from disks.md)| |Cluster upgrade and new component|ASK clusters can be upgraded. The metrics-server, cronhpa-controller, and alb-ingress-controller components can be installed and managed on the Add-ons page of the ACK console.|All regions |N/A| ## November 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Managed node pool|Managed node pools are released to support auto upgrading and auto repair. This provides centralized, managed, and O&M-free lifecycle management of nodes. You do not need to be concerned about the O&M of nodes, such as component upgrading, OS upgrading, or patching to fix Common Vulnerabilities and Exposures \(CVE\) vulnerabilities. ACK automatically fixes node exceptions for the nodes in a managed node pool. Managed node pools are supported by professional managed Kubernetes clusters. |All regions |[Overview](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Managed node pools/Overview.md)| |kubernetes-dashboard|Kubernetes 1.18 is supported by the kubernetes-dashboard application provided by App Catalog. This fixes the issue that the pods of Kubernetes 1.18 cannot be accessed by terminals. You can find and install the Helm chart for kubernetes-dashboard from App Catalog.|All regions |[View the application catalog](/intl.en-US/User Guide for Kubernetes Clusters/Application marketplace/App catalog management/View the application catalog.md)| |Enhanced SSD|The performance level of an enhanced SSD can be set to PL0 or PL1 when you create a cluster. This allows you to customize the performance level of your cluster. 
This feature is supported by professional managed Kubernetes clusters, standard managed Kubernetes clusters, dedicated Kubernetes clusters, and managed edge Kubernetes clusters. |All regions |[Elastic Block Storage FAQ](/intl.en-US/Block Storage/Elastic Block Storage FAQ.md)| |Cloud controller manager|The cloud controller manager is upgraded to V1.9.3.339-g9830b58-aliyun. Hash values are supported in the configurations of LoadBalancer Services. This way, when the cloud controller manager is restarted, only the backend vServer groups of the related SLB instances are updated if the Service configuration is not changed. The configurations of the related SLB instances and listeners are not updated.|All regions |[Cloud Controller Manager](/intl.en-US/Release notes/System Component change Records/Core components/Cloud Controller Manager.md)| |Disk monitoring|Disk monitoring is supported by the latest version of the CSI component. This feature allows you to monitor the states of persistent volume claims \(PVCs\) through Application Real-Time Monitoring Service \(ARMS\) Prometheus when you use disks that are mounted by using the PVCs. You can also configure alerts by setting thresholds for the storage space and input/output operations per second \(IOPS\) of the disks.|All regions |N/A| |Ingress controller and CoreDNS|Ingress controllers and CoreDNS can be installed when you create an ASK cluster or on the Add-ons page of the ACK console after the cluster is created.|All regions |[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| |Node pool|Node pools are supported in registered Kubernetes clusters. You can use a node pool in the ACK console to manage a set of ECS instances for a registered Kubernetes cluster. You can add ECS nodes from a node pool to a self-managed Kubernetes cluster or a Kubernetes cluster that is deployed in the public cloud of a third-party cloud service provider. You can also use node pools to manage the labels and taints of nodes in node pools.|All regions |[Manage node pools](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node pool management/Manage node pools.md)| ## October 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Time zone|The time zone can be selected when you create a cluster. By default, the time zone configured for your browser is selected. This feature is supported by professional managed Kubernetes clusters, standard managed Kubernetes clusters, dedicated Kubernetes clusters, and ASK clusters. |All regions |[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Tag|Disks, NAS file systems, and Log Service projects with tags are supported by CSI and Logtail. Disks, NAS file systems, and Log Service projects that are created by ACK for a cluster are added with the cluster ID as tags. This makes it easier to allocate resource fees.|All regions |N/A| ## September 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|ACK is available in the China \(Ulanqab\) region.|All regions |[Introduction to professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Professional Kubernetes clusters/Introduction to professional managed Kubernetes clusters.md)| |SMB|Server Message Block \(SMB\) file systems can be mounted to a Windows container. 
In the NAS console, you can create an SMB file system in the VPC where the cluster is deployed. You can also create a mount target for the file system. You must use the FlexVolume plug-in to mount an SMB file system.|All regions |[Mount SMB file systems to Windows containers](/intl.en-US/User Guide for Kubernetes Clusters/Windows container/Mount SMB file systems to Windows containers.md)| |Time zone|The time zone can be selected for master nodes and worker nodes when you create a dedicated or managed Kubernetes cluster.|All regions |N/A| |Kubernetes 1.18|Kubernetes 1.18.8 is supported. You can select this Kubernetes version when you create a cluster. ACK clusters of Kubernetes 1.18 or later no longer support Kubernetes Dashboard. To use Kubernetes Dashboard, we recommend that you install **kubernetes-dashboard** on the App Catalog page. |All regions |[Kubernetes 1.18 release notes](/intl.en-US/Release notes/Kubernetes release notes/Kubernetes 1.18 release notes.md) and [\[Product Changes\] ACK ends support for Kubernetes Dashboard](t2084941.md#)| |NetworkPolicy|The NetworkPolicy feature can be enabled or disabled for Terway when you create a cluster.|All regions |- [Use network policies](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use network policies.md) - [Improve the performance of the NetworkPolicy feature for a large ACK cluster in Terway mode](/intl.en-US/Best Practices/Network/Improve the performance of the NetworkPolicy feature for a large ACK cluster in Terway mode.md) | |Periodic inspection|Periodic inspection policies can be configured for a cluster on the Inspections page of the ACK console.|All regions |[Use the inspection feature to detect security risks in the workloads of an ACK cluster](/intl.en-US/User Guide for Kubernetes Clusters/Security management/Security/Use the inspection feature to detect security risks in the workloads of an ACK cluster.md)| |Cluster auditing|The cluster auditing feature can be enabled or disabled on the Cluster Auditing page of the ACK console.|All regions |[Enable cluster auditing](/intl.en-US/User Guide for Kubernetes Clusters/Security management/Infrastructure security/Enable cluster auditing.md)| |New component|The logtail-ds component is provided to collect container log from registered external Kubernetes clusters, including stdout files and log files of containers. The migrate-controller component is provided to migrate applications across Kubernetes clusters. This component is developed based on the open source Velero project. The ack-virtual-node component is provided to enable auto scaling for registered Kubernetes clusters. |All regions |- [Enable Log Service for an external Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Observability of external clusters/Enable Log Service for an external Kubernetes cluster.md) - [Install the application backup component](/intl.en-US/User Guide for Kubernetes Clusters/Disaster recovery center/Install the application backup component.md) | |Sandboxed-Container 2.0|Sandboxed-Container is upgraded to V2.0. Sandboxed-Container 2.0 has the following benefits:- Sandboxed-Container is a container runtime that is developed by Alibaba Cloud based on lightweight virtual machines. Compared with Sandboxed-Container 1.0, Sandboxed-Container 2.0 supports more lightweight and efficient deployment and simplifies the architecture and maintenance of ACK clusters. 
- Sandboxed-Container 2.0 reduces the resource overheads by 90% and improves the startup speed of sandboxed containers by three times. - Sandboxed-Container 2.0 increases the deployment density of sandboxed containers on a single node by 10 times. - Sandboxed-Container 2.0 supports the virtio-fs file system, which provides higher performance than the 9pfs file system. |All regions |[Sandboxed-Container overview](/intl.en-US/User Guide for Kubernetes Clusters/Sandboxed-Container management/Sandboxed-Container overview.md)| |Knative component|Knative components are supported in ASK clusters. Knative is a cloud-native and cross-platform orchestration engine for serverless applications. You can deploy Knative in ASK clusters. This allows you to use cloud resources by calling the Knative API without the need to pay for the Knative controller.|All regions |[Overview](/intl.en-US/User Guide for Kubernetes Clusters/Knative/Overview.md)| ## August 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Gatekeeper|The gatekeeper component can be installed on the Add-ons page of the ACK console. This component facilitates the management and implementation of policies that are executed by Open Policy Agent \(OPA\) in ACK clusters.|All regions |[gatekeeper](/intl.en-US/Release notes/System Component change Records/Security/gatekeeper.md)| |Runtime inspection|Runtime inspections can be performed on the Runtime Security page of the ACK console. This feature monitors the container runtime and triggers alerts upon the following types of security events: malicious image startups, attacks by viruses or malicious programs, intrusions into containers, container escapes, and high-risk operations on containers. To use this feature, you must first activate Security Center. If you use a Resource Access Management \(RAM\) user, make sure that the RAM user has the permissions to access Security Center.|All regions |[Use the runtime security feature to monitor ACK clusters and configure alerts](/intl.en-US/User Guide for Kubernetes Clusters/Security management/Security/Use the runtime security feature to monitor ACK clusters and configure alerts.md)| |Scheduled backup|Scheduled backups are supported for Elastic Block Storage \(EBS\) devices. You can create scheduled snapshots from disks. To use this feature, you must first install the cluster-storage-operator component.|All regions |N/A| |IPVLAN and eBPF|IPVLAN and extended Berkeley Packet Filter \(eBPF\) are supported by Terway. If an elastic network interface \(ENI\) is shared among pods, Terway allows you to use IPVLAN and eBPF for network virtualization. Terway enables pod network virtualization by using the lightweight IPVLAN technology. This allows pod traffic to bypass the network stack of the host and reduces the network performance overheads. Terway uses Cilium as the BPF agent on nodes to configure BPF rules for pod ENIs. This enables Services and network policies to be configured on ENIs. This way, requests within pod networks are forwarded to ENIs through IPVLAN. This reduces network complexity. **Note:** This feature applies to the Alibaba Cloud Linux 2 operating system. To use this feature, you must [Submit a ticket](https://workorder-intl.console.aliyun.com/console.htm) to enable this feature your account. 
|All regions |[Use the Terway plug-in](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use the Terway plug-in.md)| |New region|Professional Kubernetes clusters are available in the China \(Beijing\), China \(Shenzhen\), Germany \(Frankfurt\), Indonesia \(Jakarta\), and China East 2 Finance regions.|China \(Beijing\), China \(Shenzhen\), Germany \(Frankfurt\), Indonesia \(Jakarta\), and China East 2 Finance|[Introduction to professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Professional Kubernetes clusters/Introduction to professional managed Kubernetes clusters.md)| |ACK@Edge|ACK@Edge is released for commercial use. ACK@Edge is a cloud-managed solution that is provided by ACK to coordinate cloud and edge computing.|All regions |[ACK@Edge overview](/intl.en-US/User Guide for Edge Container Service/ACK@Edge overview.md)| ## July 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Professional Kubernetes cluster|Professional Kubernetes clusters are released for public preview. This type of cluster is developed based on managed Kubernetes cluster and provides higher reliability and security in large-scale production environments for enterprise users. Professional Kubernetes clusters are also covered by SLAs that include compensation clauses. This type of cluster is suitable for the following users:- Internet enterprises. These enterprises deploy their business in large-scale production environments and require business management with high stability, security, and observability. - Big data computing enterprises. These enterprises deploy large-scale data computing services, high-performance data processing services, and other services with high elasticity. These services require clusters with high stability, high performance, and efficient computing capabilities. - International enterprises that run their business in China. These enterprises prioritize security and services that provide SLAs with compensation clauses. - Financial enterprises. These enterprises require SLAs that include compensation clauses. |All regions |[Introduction to professional managed Kubernetes clusters](/intl.en-US/User Guide for Kubernetes Clusters/Professional Kubernetes clusters/Introduction to professional managed Kubernetes clusters.md)| |New region|ASK is available in the Japan \(Tokyo\) and Indonesia \(Jakarta\) regions.|Japan \(Tokyo\) and Indonesia \(Jakarta\)|[ASK overview](/intl.en-US/User Guide for Serverless Kubernetes Clusters/ASK overview.md)| |Cloud controller manager|The cloud controller manager is upgraded to V1.9.3.313-g748f81e-aliyun. The following features are provided:- Supports deletion protection for SLB instances. By default, deletion protection is enabled for newly created SLB instances. - Supports modification protection for the configurations of SLB instances. By default, modification protection is enabled for the configurations of newly created SLB instances. - Allows you to specify the resource group for an SLB instance when you create a Service. - Allows you to specify the name of an SLB instance when you create a Service. - Allows you to mount pods in Terway mode to the backend of an SLB instance. |All regions |[Cloud Controller Manager](/intl.en-US/Release notes/System Component change Records/Core components/Cloud Controller Manager.md)| |Security management|Security management is supported for your clusters. You can configure pod security policies and cluster inspections. 
Pod security policy is a significant method to verify the security of pod configurations before pods are deployed. This ensures that applications are running in secure pods. Cluster inspection detects the security risks of workloads in an ACK cluster and generates inspection reports for your reference. This way, you can check whether the workloads in your ACK cluster run in a secure environment. |All regions |[Configure and enforce pod security policies](/intl.en-US/User Guide for Kubernetes Clusters/Security management/Security/Configure and enforce pod security policies.md)| |Shared VPC|Shared VPCs are supported. A shared VPC can host cloud resources that are created by multiple accounts. The cloud resources include ECS instances, SLB instances, and ApsaraDB RDS instances. This provides a unified approach for you to manage cloud resources in a shared VPC. Shared VPCs are powered by the resource sharing mechanism. The Alibaba Cloud account that owns a shared VPC can share all vSwitches in the VPC with other accounts in the same organization. You can select a shared VPC when you create an ACK cluster. If you select a shared VPC for an ACK cluster, you can use only Terway as the network plug-in.|All regions |N/A| |Cluster registration|Cluster registration is supported. During daily O&M, you may need to deploy multiple clusters in the cloud and data centers. In some scenarios, you may even deploy clusters in the clouds of different cloud service providers. In these cases, you can register external Kubernetes clusters in the ACK console. This allows you to manage external Kubernetes clusters in the console and reduce O&M costs.|All regions |[Overview of registered clusters](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Management of external clusters/Overview of registered clusters.md)| |Workload management|Redeployment and rollback of workloads are supported. ACK provides features on the workload management page in the ACK console, such as application redeployment and rollback. This makes it more convenient to manage your workloads.|All regions |[Create a stateless application by using a Deployment](/intl.en-US/User Guide for Kubernetes Clusters/Application management/Workloads/Create a stateless application by using a Deployment.md)| ## June 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Taint management|Taint management is supported for node pools. You can configure taints when you create or edit a node pool. This allows you to add taints to all nodes in the node pool. You can select **Synchronize Node Labels and Taints** to update taints for existing nodes in a node pool.|All regions |[Manage taints](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node/Manage taints.md)| |Application migration|Application migration from virtual machines to ACK clusters by using Server Migration Center \(SMC\) is supported. SMC allows you to migrate servers to Container Registry. You can use SMC to migrate containerized applications to Container Registry at low costs.|All regions |[Migrate source servers to Container Registry](/intl.en-US/Best Practices/Migrate source servers to Container Registry.md)| ## May 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Advanced security group|Advanced security groups are supported when you create a cluster. You can select a basic security group, an advanced security group, or an existing security group. 
Compared with a basic security group, an advanced security group can contain up to 65,536 private IP addresses. Advanced security groups are used for clusters where a large number of containers or instances are deployed.|All regions|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Component management|The Prometheus component and Kubernetes event center can be installed from the Add-ons page of the ACK console. ACK is integrated with the most commonly used Prometheus component in the container monitoring field, and the most commonly used node-problem-detector \(NPD\) component in the O&M field. You can select these components when you create a cluster. You can also upgrade and maintain the components on the Add-ons page of the ACK console. The Prometheus component is provided by ARMS. NPD is a tool used for node problem detection. NPD can export events that record node exceptions, such as Docker Engine hangs, Linux kernel hangs, network access issues, and file descriptor issues. You can click the **Event Center** tab on the **Events** page to view event details.|All regions|[Enable ARMS Prometheus](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Monitoring management/Enable ARMS Prometheus.md)| |Kubernetes 1.16.9|Kubernetes 1.16.9 is supported. You can create a cluster of Kubernetes 1.16.9. If the Kubernetes version of your cluster is earlier than V1.16.9, go to the Clusters page and choose **More** \> **Upgrade Cluster** in the Actions column to upgrade to Kubernetes 1.16.9. Compared with the previous Kubernetes 1.16.6, Kubernetes 1.16.9 fixes the CVE-2020-8555 SSRF vulnerability for the kube-controller-manager component.|All regions|[Vulnerability fixed: CVE-2020-8555 in kube-controller-manager](/intl.en-US/Bulletin/Security bulletins/Vulnerability fixed: CVE-2020-8555 in kube-controller-manager.md)| |Elastic workload|Elastic workloads are supported. You can go to the **App Catalog** page and select **ack-kubernetes-elastic-workload** to install the component. You can use ACK and Virtual Kubelet in combination to proportionally schedule pay-as-you-go and preemptible instances. This allows you to schedule your workloads with elasticity.|All regions|[View the application catalog](/intl.en-US/User Guide for Kubernetes Clusters/Application marketplace/App catalog management/View the application catalog.md)| |Application Center|Application Center is released in the ACK console. In earlier versions of the ACK console, after applications are deployed, the topology of the applications is not displayed in a unified view. Therefore, version management and rollback cannot be unified for continuous deployments. Application Center provides a unified portal for your applications. This allows you to view the deployment of applications in a unified manner. You can also view the deployment status and changes of all ACK sub-resources that are allocated to each application. In addition, Gits and Helm charts are used to deploy applications in ACK clusters by versions. This allows you to publish or roll back different application versions deployed in ACK clusters.|All regions|[Overview](/intl.en-US/User Guide for Kubernetes Clusters/Application center/Overview.md)| ## April 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |AGS|Alibaba Cloud Genomics Service \(AGS\) is released for commercial use. 
AGS is an ACK-based big data compute service provided by Alibaba Cloud for users in the biotechnology industry. AGS provides efficient, elastic, and reliable services. AGS is faster in computing and more cost-effective than traditional methods. AGS uses the pay-as-you-go billing method and charges you based on the number of successful API calls in the backend. To submit a computing task, you only need to run a command to call the AGS API on the client. This process is counted as one API call.|All regions|[AGS overview](/intl.en-US/User Guide for Genomics Service/AGS overview.md)| |Dynamically provisioned volume|Expansion of dynamically provisioned volumes without restarting pods is supported for Kubernetes 1.16 and later.|All regions|[Expand a disk volume without service interruptions](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-CSI/Disk volumes/Expand a disk volume without service interruptions.md)| |Ingress controller|Multiple Ingress controllers can be deployed in a Kubernetes cluster. An Ingress is an important entry for Layer 7 services. If you create only one Ingress for a cluster, the routing performance may encounter a bottleneck. If an Ingress allows inbound access through the Internet and private network at the same time, security risks exist. To solve these issues, ACK provides a Helm chart for the Ingress controller when only one Ingress is used. The name of the Helm chart is ack-ingress-nginx. You can deploy multiple Ingress controllers from App Catalog. You can use YAML files to configure access to Internet-facing and internal-facing SLB instances separately.|All regions|[Deploy Ingresses in a high-reliability architecture](/intl.en-US/User Guide for Kubernetes Clusters/Network/Ingress management/Deploy Ingresses in a high-reliability architecture.md)| |New region|ASK is available in the India \(Mumbai\) region.|India \(Mumbai\)|[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| ## March 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Component management|The following features are added to component management:- Allows you to view the YAML files of components. - Allows you to perform health checks for nodes before component upgrades. This prevents component upgrade failures that are caused by node drains or exceptions. - Allows you to manually refresh the Add-ons page. |All regions|[Manage system components](/intl.en-US/User Guide for Kubernetes Clusters/Component/Manage system components.md)| |Self-managed ECS instance|Self-managed ECS instance-based nodes can be added to the backend of SLB instances by using the cloud controller manager. This way, the existing applications and containerized applications share the same SLB instances and inbound traffic. This is suitable for scenarios where existing applications are gradually replaced by containerized applications.|All regions|[Cloud Controller Manager](/intl.en-US/Release notes/System Component change Records/Core components/Cloud Controller Manager.md)| |Terway|Cluster expansion and node specification changes are supported by Terway. When you manually expand a cluster, you may need to create nodes in new zones. In earlier versions, to create pods in a new zone, you must first add new pod vSwitches in the zone. You can add pod vSwitches in Terway ConfigMaps. When you change the specifications of a node, the maximum number of pods that are supported by Terway on the node also changes. 
After this release, the Kubernetes max-pods parameter is automatically adjusted to fit the new node specifications.|All regions|[Use the Terway plug-in](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use the Terway plug-in.md)| |Node pool management|Node pool management is supported. A node pool contains a group of nodes with the same configurations. For example, nodes in a node pool are configured with the same container runtime, OS, and security group. You can create multiple node pools for a cluster. This allows you to deploy a variety of services to different node pools in a cluster. Node pools also support auto scaling. Nodes can be automatically added when a node pool is short of required resources.|All regions|[Manage node pools](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node pool management/Manage node pools.md)| |Cluster check|Cluster checks are optimized. Cluster check is the core feature provided by ACK for cluster O&M. Cluster check dynamically scans clusters to identify potential risks. The optimization provides the following services:- Displays information about unknown hosts. - Checks the availability of Yellowdog Updater, Modified \(YUM\). - Checks the availability of systemd. |All regions|[Use cluster check to troubleshoot cluster issues](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Use cluster check to troubleshoot cluster issues.md)| |Kubernetes 1.16|Upgrade to Kubernetes 1.16.6 is supported. You can upgrade your clusters from Kubernetes 1.14.8 to 1.16.6. You can also create clusters of Kubernetes 1.16.6. We recommend that you read the upgrade notes before you upgrade your clusters.|All regions|[Upgrade a cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Manage clusters/Upgrade a cluster.md)| |New region|Managed Kubernetes clusters are available in the China South 1 Finance region on Alibaba Finance Cloud.|China South 1 Finance|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |ephemeral-storage|The ephemeral-storage parameter is added for container configurations when you create an application. Ephemeral storage is a new storage resource similar to CPU and memory. Kubernetes uses this parameter to manage and schedule the transient storage of applications that run in Kubernetes clusters. The root directory and log directories \(/var/log\) of kubelet are stored on the primary partition of a node. In addition, emptyDir volumes, container logs, image layers, and the writable layers of containers are also stored on the primary partition. Therefore, ephemeral-storage is used to manage the primary partition of a node. You can set requests and limits when you create an application. This allows you to schedule and manage the storage resources that are allocated from the primary partition to the application.|All regions|[Create a stateless application by using a Deployment](/intl.en-US/User Guide for Kubernetes Clusters/Application management/Workloads/Create a stateless application by using a Deployment.md)| ## February 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.16 and Docker 19.03.5|Kubernetes 1.16 and Docker 19.03.5 are supported to provide enhanced cloud-native capabilities. Compared with the earlier version, Kubernetes 1.16 accelerates pod creation and improves affinity, stability, and observability. 
You can select Docker 19.03.5 when you create a cluster. ACK accelerates container startups and the building of images that are based on Docker 19.03.5.|All regions|[Kubernetes 1.16 release notes](/intl.en-US/Release notes/Kubernetes release notes/Kubernetes 1.16 release notes.md)| |Auto scaling|AliyunLinux 2, custom security groups, advanced security groups, and GPU-accelerated preemptible instances are configurable for auto scaling. To use AliyunLinux 2 and custom security groups, you must first submit a ticket to enable them for your account.|All regions|[Submit a ticket](https://workorder-intl.console.aliyun.com/console.htm). | |CentOS 7.7|CentOS 7.7 is supported as the node OS. You can specify the CentOS 7.7 operating system when you create worker nodes. CentOS 7.7 is automatically used when you expand clusters or enable auto scaling for clusters.|All regions|[Submit a ticket](https://workorder-intl.console.aliyun.com/console.htm). | |Helm|Helm 3 is supported. You can install Helm 3 from App Catalog. Compared with Helm 2, Helm 3 improves the security of role assignment, provides full compatibility with Kubernetes role-based access control \(RBAC\) in multi-tenant scenarios, and supports hooks for more management operations.|All regions|For more information about how to upgrade from Helm 2, see [\[Component Upgrades\] Upgrade Helm V2 to V3](t1872780.md#).| |New region|ASK is available in the Indonesia \(Jakarta\) and UK \(London\) regions. You can create ASK clusters in these regions in the ACK console.|Indonesia \(Jakarta\) and UK \(London\)|[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| |ASK|ClusterIP Services are supported in ASK clusters. This provides more options when you deploy containerized applications in ASK clusters. You can create ClusterIP Services in an ASK cluster to enable access to your workloads from within the ASK cluster.|All regions|[Manage Services](/intl.en-US/User Guide for Kubernetes Clusters/Network/Service Management/Manage Services.md)| |Cloud controller manager|ECS instances and elastic container instances can be attached to the backend of SLB instances that are associated with Services by using the cloud controller manager. This enables unified scheduling for application pods across worker nodes and virtual nodes. This also improves application resilience.|All regions|[Release notes for the Cloud controller manager](/intl.en-US/Release notes/System Component change Records/Core components/Cloud Controller Manager.md)| |Edge Kubernetes cluster|32-bit and 64-bit ARM nodes are supported in edge Kubernetes clusters. This allows edge Kubernetes clusters to support more heterogeneous infrastructures. You can add Edge Node Service \(ENS\) nodes or nodes from data centers to edge Kubernetes clusters.|All regions|[Add an edge node](/intl.en-US/User Guide for Edge Container Service/Node management/Add an edge node.md)| ## January 2020 |Feature|Description|Region|References| |-------|-----------|------|----------| |Virtual node|ClusterIP Services can be accessed by pods that are deployed on virtual nodes. This enables Kubernetes to centrally manage virtual nodes and elastic container instances. You can deploy applications on virtual nodes without the inconvenience of resource capacity planning. This meets the requirements of scenarios such as online workload scaling, offline computing, and CI/CD, and also reduces the overall computing costs. 
To enable this feature, log on to the console, click App Catalog, and then find and install ack-virtual-node.|All regions|[Deploy the virtual node controller and use it to create Elastic Container Instance-based pods](/intl.en-US/User Guide for Kubernetes Clusters/Virtual nodes and ECI/Deploy the virtual node controller and use it to create Elastic Container Instance-based pods.md)| |API server|Service account token volume projection can be enabled for the API server when you create a cluster. This enables service account authentication on pods. This feature is also required if mutual Transport Layer Security \(TLS\) authentication is enabled on Istio through Secret Discovery Service \(SDS\).|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |CSI|CSI can be selected as the volume plug-in when you create an ACK cluster. The optimized CSI plug-in provides the following features:- Object Storage Service \(OSS\) subdirectories can be mounted to containers. - The Memory type emptyDir volumes are supported. The Memory type volume is a RAM-based temporary file system, whose storage space is limited by memory. This type of file system provides good performance and is typically used to provide caching space in containers. - Accelerated OSSFS transmission is supported. OSSFS allows you to share data by mounting OSS buckets to local file systems in Linux. To meet the requirements of big data and AI scenarios, ACK improves read speed by adjusting concurrency, block size, and libfuse configurations. For more information, see [alibaba-cloud-csi-driver](https://github.com/kubernetes-sigs/alibaba-cloud-csi-driver). |All regions|[Install CSI](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-CSI/Install and upgrade the CSI plug-in.md)| |Sandboxed container|Disks and NAS file systems can be mounted to sandboxed containers to enhance cloud-native capabilities. This allows ACK to provide the same storage performance as when these storage services are used on virtual machines. ACK also supports RootFS BLKIO Limit and disk I/O throttling on pods, and optimizes its support for multi-tenancy.|All regions|[Mount a NAS file system to a sandboxed container](/intl.en-US/User Guide for Kubernetes Clusters/Sandboxed-Container management/Security Sandbox storage/Mount a NAS file system to a sandboxed container.md) and [Mount a disk to a sandboxed container](/intl.en-US/User Guide for Kubernetes Clusters/Sandboxed-Container management/Security Sandbox storage/Mount a disk to a sandboxed container.md)| |Kubernetes cluster for confidential computing|Kubernetes clusters for confidential computing are released for public preview. This type of cluster is developed on top of Intel Software Guard Extensions \(SGX\) and is particularly suitable for sensitive data protection and scenarios such as smart contracts in blockchains, user secrets processing, intellectual property protection, genomics computing in bioinformatics, and edge computing. You can create and manually expand Kubernetes clusters for confidential computing. You can also enable auto scaling, and add different types of nodes to the clusters. 
For more information, see [Create a managed Kubernetes cluster for confidential computing](/intl.en-US/User Guide for Kubernetes Clusters/TEE-based confidential computing/Create a managed Kubernetes cluster for confidential computing.md) and [SGX application development guide](https://developer.aliyun.com/article/740793). ACK also provides open source sgx-device-plugin to help you deploy SGX applications in Kubernetes clusters. For more information, see [Kubernetes device plugin for Intel SGX](https://github.com/AliyunContainerService/sgx-device-plugin). **Note:** Intel \(R\) SGX is a set of CPU instruction codes that are developed by Intel. Intel \(R\) SGX allows you to run application code and data in a special runtime environment called enclave, which is built on top of hardware silos and memory encryption technologies. Enclaves refer to Trusted Execution Environment \(TEE\). No application, OS Kernel, BIOS, or hardware other than the CPU can access an enclave without verification. All data in the enclave memory is encrypted. Users encrypt the code and data in an enclave with their private keys obtained from Intel. An enclave can be started only after the signature is verified through Intel Attestation Service \(IAS\), which is a remote certification service of Intel. |All regions|[Create a managed Kubernetes cluster for confidential computing](/intl.en-US/User Guide for Kubernetes Clusters/TEE-based confidential computing/Create a managed Kubernetes cluster for confidential computing.md)| |AGS|Gene sequencing is supported by calling AGS API operations. ACK has released a set of AGS API operations. You can call these API operations to submit gene sequencing tasks. Results are automatically uploaded to your OSS buckets. This saves you the inconvenience of cluster creation and task deployments. These API operations support different SLA levels and provide computing resources based on different requirements. This allows you to reduce costs and improve efficiency. This feature is in public preview. To use the feature, submit a ticket.|All regions|[Use AGS to perform WGS tasks](/intl.en-US/User Guide for Genomics Service/AGS acceleration API/Use AGS to perform WGS tasks.md)| ## December 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |Component management|Component management is supported. You can log on to the ACK console. On the **Clusters** page, find the cluster that you want to manage and choose **More** \> **Manage System Components** in the Actions column to manage cluster components. You can manage all system components and optional components with operations such as upgrade, uninstall, and reinstall. Custom component configurations will be available soon.|All regions|[Manage system components](/intl.en-US/User Guide for Kubernetes Clusters/Component/Manage system components.md)| |App Catalog|The ack-node-local-dns plug-in is provided in App Catalog to speed up Domain Name Service \(DNS\) queries. ack-node-local-dns sends internal Domain Name Service \(DNS\) queries to CoreDNS and directly forwards external DNS queries to external DNS resolvers. ack-node-local-dns caches all queries and provides DNS caching on each node. 
This significantly improves the overall DNS query rate of the cluster.|All regions|[View the application catalog](/intl.en-US/User Guide for Kubernetes Clusters/Application marketplace/App catalog management/View the application catalog.md)| |New region|Managed Kubernetes clusters are available in the China East 1 Finance region on Alibaba Finance Cloud. You only need to create worker nodes in a managed Kubernetes cluster. ACK creates and manages master nodes. This type of cluster is easy to use and provides high availability at low costs. This saves you the inconvenience of master node O&M and allows you to focus on business development.|China East 1 Finance|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |NPU-accelerated ECS instances|Neural processing unit \(NPU\)-accelerated ECS instances are supported when you create managed or dedicated Kubernetes clusters. The instance type is ecs.ebman1.26xlarge, which is suitable for big data analytics and AI scenarios in video and graphics industries.|All regions|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Terway|The user experience of Terway is improved. The new user interface provides information about the number of pods that are supported by each ECS instance type when you create a cluster. When you expand a cluster, the user interface also provides multiple options. This allows you to select vSwitches for nodes and pods. The user interface is optimized to clearly and accurately provide information.|All regions|[Use the Terway plug-in](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use the Terway plug-in.md)| ## November 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |Cluster expansion|Multiple zones and multiple data disks are supported when you expand a Kubernetes cluster. The user interface for expanding an ACK cluster is updated to provide the same configuration options as those for creating an ACK cluster. You can select multiple zones when you expand an ACK cluster. You can also mount multiple data disks to a node and specify whether to encrypt these disks.|All regions|[Expand an ACK cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Manage clusters/Expand an ACK cluster.md)| |Custom node configurations|Custom scripts, tags, and Operation Orchestration Service \(OOS\) are supported for node configurations. You can write custom scripts to configure nodes when you create or expand an ACK cluster. To use this feature, submit a ticket to enable this feature for your account. You can use this feature to specify the node OS. Instead of building custom images, you can directly inject scripts into standard images. Auto scaling allows you to add tags to cluster nodes. This makes it easier for you to identify cluster nodes and allocate the cost of nodes. ACK integrates OOS into the node O&M. You can go to the OOS page from the ACK console and run OOS scripts to maintain nodes on the OOS page.|All regions|[Expand an ACK cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Manage clusters/Expand an ACK cluster.md)| |ASK|Multiple zones and log auditing are supported in ASK clusters. After ASK is upgraded to V2.0, ASK clusters provide more cloud-native features. Cross-zone ASK clusters and log auditing are supported. 
You can deploy pods across zones to improve the availability of your business. You can also use log auditing to improve the security of ASK clusters. ASK clusters will be improved to provide the same features as dedicated and managed Kubernetes clusters.|All regions|[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| |vGPU|vGPU resources are provided through the vgn5i instance family to meet the requirements of AI and big data industries. You can select instance types of the vgn5i instance family when you create an ACK cluster.|All regions|N/A| |Terway|ENI buffer pools are supported for Terway. Terway is a container network plug-in that is developed on top of Alibaba Cloud ENI. The update enables Terway to create a buffer pool of ENI IP addresses during node initialization. This accelerates pod creation and improves user experience.|All regions|[Use the Terway plug-in](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use the Terway plug-in.md)| |Cloud controller manager|External ECS instances can be added to the backend of SLB instances by using the cloud controller manager. The cloud controller manager is a system component that associates Services with SLB instances. By default, cluster nodes that host Services are mounted to the backend of the related SLB instances. The update allows you to add ECS instances outside an ACK cluster as the backend servers to the related SLB instances. This makes it easier to perform application migration and canary releases.|All regions|[Cloud Controller Manager](/intl.en-US/Release notes/System Component change Records/Core components/Cloud Controller Manager.md)| ## October 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |AliyunLinux2|The AliyunLinux2 operating system is supported. AliyunLinux2 is the latest OS version that is developed by Alibaba Cloud on top of an advanced CentOS kernel version. AliyunLinux2.1903 is fully adapted to ACK. This OS version supports faster startups and optimized performance, and improves the efficiency and reliability of ACK clusters.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Ingress dashboard|The Ingress dashboard is provided. In earlier versions, you must manually configure the Ingress dashboard, which is a time-consuming and error-prone task. A check box is added to the configuration page of the Ingress controller. You need to select the check box to enable the Ingress dashboard feature. This way, the Ingress dashboard is automatically installed after the cluster is created.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |SLB instance specification|Multiple SLB instance specifications are supported when you create a Service. In earlier versions, when you create a LoadBalancer Service, ACK automatically creates shared-performance SLB instances. To meet your requirements in various scenarios, ACK allows you to select SLB instance specifications when you create a LoadBalancer Service. 
The SLB instances adopt the pay-as-you-go billing method.|All regions|[Manage Services](/intl.en-US/User Guide for Kubernetes Clusters/Network/Service Management/Manage Services.md)| |API server|An EIP can be associated with or disassociated from the API server of a Kubernetes cluster. SLB instances provide access to the API server of an ACK cluster. When you create an ACK cluster, ACK allows you to specify an Internet-facing or internal-facing SLB instance to handle traffic to the cluster. However, you may need to change the network type of the SLB instance after the cluster is created. ACK allows you to bind an EIP to or unbind the EIP from the SLB instance after the cluster is created. This allows you to change the access mode to the API server between Internet access and internal access.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |ACK@Edge|The auto scaling of ENS nodes in edge Kubernetes clusters is supported. To support edge computing scenarios, ACK allows you to configure auto scaling of ENS nodes in edge Kubernetes clusters. This feature can be implemented by calling the API.|All regions|[Auto scaling of nodes](/intl.en-US/User Guide for Kubernetes Clusters/Auto Scaling/Auto scaling of nodes.md)| |New region|ASK is available in the China \(Zhangjiakou\) region.|China \(Zhangjiakou\)|[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| ## September 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|ACK is available in the China \(Chengdu\) region. You can create dedicated Kubernetes clusters in the China \(Chengdu\) region. To create managed Kubernetes clusters in the China \(Chengdu\) region, submit a ticket. |China \(Chengdu\)|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Kubernetes 1.14.6 and new features for cluster upgrades|The canary release of the upgrade to Kubernetes 1.14.6 is implemented in the following regions: China \(Shanghai\), China \(Zhangjiakou\), Singapore \(Singapore\), and Germany \(Frankfurt\). Upgrades to Kubernetes 1.14.6 will soon be available in all regions. More features are also provided to simplify the upgrade process. In the ACK console, you can click **Upgrade Cluster** on the Clusters page to upgrade your cluster. The new upgrade feature adds the following improvements to secure upgrades: - A comprehensive cluster check is performed before an upgrade. - You can manually pause or resume an upgrade. - Detailed logs of upgrades are retained. |- China \(Shanghai\) - China \(Zhangjiakou\) - Singapore \(Singapore\) - Germany \(Frankfurt\) |[Upgrade a cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Manage clusters/Upgrade a cluster.md)| |Node maintenance|Node maintenance is supported. To maintain nodes in a cluster, you must make sure that workloads are not deployed on the nodes that you want to maintain. You can select one or more nodes that you want to maintain and set them to unschedulable on the Nodes page. You can also drain these nodes. - After you set a node to unschedulable, pods cannot be scheduled to the node. - If you drain a node, no new pods are scheduled to the node and existing pods on the node are migrated to other nodes. 
However, pods that are managed by DaemonSets are not migrated from the node. If you have a LoadBalancer Service, you can specify whether to remove nodes that run the pods that are associated with the Service from the backend of the related SLB instance when these nodes are set to unschedulable. This allows you to flexibly manage your workloads during node maintenance. |All regions|[Set node schedulability](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node/Set node schedulability.md)| |Custom node name|Custom node names are supported. To manage a cluster that includes a large number of nodes, you must identify nodes by name. The default node names provided by ACK are not easy to identify. ACK allows you to customize node names when you create a cluster. When you create a cluster in the ACK console, you can select **Custom Node Name** in the advanced settings of the cluster. You can define a prefix, an IP substring length, and a suffix for a custom node name. The IP substring length specifies the number of digits to be truncated from the end of a node IP address and can be used to uniquely identify a node.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Advanced security group|Advanced security groups are supported when you create a Kubernetes clusters. Compared with basic security groups, advanced security groups support more ECS instances, more ENIs, and effective management on an infinite number of private IP addresses. Advanced security groups are suitable in scenarios that require high O&M efficiency, high ECS instance specifications, and a large number of compute nodes. To meet the requirements of a large-scale cluster, you can select advanced security groups for **Security Group** in the advanced settings when you create the cluster.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Disk encryption and CSI|Disk encryption and the CSI component are supported. ACK allows you to encrypt data disks. You can enable disk encryption for the selected data disks when you create a cluster. This feature can automatically encrypt the data that is transmitted from an ECS instance to a data disk and automatically decrypt the data when it is read. This improves data security. In addition, Kubernetes 1.14.6 supports the standard CSI plug-in, which is generally used for volume management. You can select FlexVolume or CSI when you create a cluster.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md) and [Storage overview](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-CSI/Storage overview.md)| ## August 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.14.6|Kubernetes 1.14.6 is supported. You can select Kubernetes 1.14.6 when you create a cluster in the ACK console. You cannot upgrade an existing cluster to Kubernetes 1.14.6.|All regions|[Kubernetes release notes](https://v1-14.docs.kubernetes.io/docs/setup/release/notes/)| |New region|ASK is available in the Singapore \(Singapore\), China \(Hong Kong\), and Australia \(Sydney\) regions. ASK allows you to create containerized applications without managing or maintaining clusters and nodes. 
You are billed based on the actual amount of resources that are consumed by the elastic container instances that run the applications. ASK clusters allow you to focus on the design and development of applications, instead of managing the underlying infrastructures. |Singapore China \(Hong Kong\) Australia \(Sydney\) |[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| |ASK|ASK 2.0 is released to provide more Kubernetes-native features. ASK 2.0 supports multiple namespaces, CRDs, RBAC, PVs, and PVCs. ASK 2.0 improves the security and isolation capability of clusters. The average price of ASK clusters is reduced by 46% due to lower costs of elastic container instances. This includes a 30% reduction in CPUs and a 65% reduction in memory.|All regions|[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| |SCC|Kubernetes clusters based on Super Computing Cluster \(SCC\) resources are supported. SCCs are powered by ECS Bare Metal \(EBM\) instances and use the high-speed Remote Direct Memory Access \(RDMA\) technology. SCCs improve network performance. SCCs are used in scenarios such as high-performance computing, AI, machine learning, scientific and engineering computing, data analytics, and audio and video processing. You can create SCC-based ACK clusters. This type of cluster combines high-performance infrastructure resources with lightweight and agile containers. SCC-based ACK clusters are applicable to high network throughput and compute-intensive scenarios.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Auto scaling and cross-zone scheduling|Multiple scaling groups are supported for auto scaling. Cross-zone scheduling policies are supported. The auto scaling feature is optimized. You can configure multiple scaling groups so that resources of different specifications are automatically added when the scaling threshold is reached. This feature meets the requirements of running compute-intensive applications and GPU computing tasks. When you configure auto scaling policies, you can specify different scheduling policies for multiple zones, including priority policies, cost optimization policies, and zone balancing policies. This meets the requirement for resource scheduling when the cluster is deployed across multiple zones.|All regions|[Auto scaling of nodes](/intl.en-US/User Guide for Kubernetes Clusters/Auto Scaling/Auto scaling of nodes.md)| |Custom cluster domain names|Custom cluster domain names are supported. ACK allows you to customize a cluster domain name by specifying the cluster-domain parameter. The cluster-domain parameter specifies the local domain name that is used for service discovery. If you have multiple clusters, we recommend that you customize the local domain names to simplify the management of clusters and services. ACK allows you to customize a cluster domain name when you create a cluster. This simplifies management and improves the O&M efficiency.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |App Hub|App Hub is provided in App Catalog. App Hub provides various cloud-native and open source containerized applications. ACK integrates App Hub into App Catalog. 
To deploy cloud-native applications in your cluster, log on to the ACK console and click the **App Hub** tab on the **App Catalog** page to find and install the applications with one click. This saves you the inconvenience of creating clusters and deploying applications by using a CLI.|All regions|[View the application catalog](/intl.en-US/User Guide for Kubernetes Clusters/Application marketplace/App catalog management/View the application catalog.md)| |Cloud controller manager|The cloud controller manager is upgraded. The cloud controller manager is the core component in an ACK cluster and is responsible for managing various cloud resources, such as SLB instances and VPCs. The following features are added to the cloud controller manager:- SLB instances can be created with access control settings. You can specify an IP whitelist for an SLB instance that is created by ACK. This enhances the security of the ACK cluster. - You can specify whether to remove unschedulable nodes when you run the kubectl cordon or kubectl drain command. Cordoning and draining nodes are important features in cluster maintenance. However, the community has not reached an agreement on whether to remove a node from the backend of an SLB instance when the node is set to unschedulable for maintenance. The cloud controller manager provides an interface that allows you to specify whether to remove such nodes from the backend of the SLB instance. This ensures the flexibility of maintenance. - Pods can be mounted to the backend of an SLB instance by using Terway. Terway ENI is the latest network plug-in that is provided by ACK. The core feature of Terway ENI is to mount the ENI IP address of a node to a pod. The cloud controller manager allows you to mount pods instead of nodes to the backend of an SLB instance. This prevents traffic forwarding through nodes and improves network performance. - Node weights can be set based on the number of pods on each node for Services in Local mode. The cloud controller manager can adjust the percentage of traffic that is sent to each node based on the number of pods on each node. This balances workloads among nodes. This feature applies to only Services in Local mode. |All regions|[Cloud Controller Manager](/intl.en-US/Release notes/System Component change Records/Core components/Cloud Controller Manager.md)| ## July 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |Managed edge Kubernetes cluster|Managed edge Kubernetes clusters are released for public preview. You can add edge nodes or ENS nodes to managed edge Kubernetes clusters. This type of cluster supports edge computing and manages edge nodes and ENS nodes to reduce O&M costs. This type of cluster also supports autonomous edges and networks to meet the requirements in different edge computing scenarios. You can select this type of cluster on the cluster template page.|China site|-| |Multi-cluster management|The multi-cluster management feature is released for public preview. You can select **Register Kubernetes Cluster** on the cluster template page to add Kubernetes clusters from data centers and other public clouds to the ACK console. Then, you can deploy applications to these clusters in the console. You can manage hybrid cloud clusters and clusters that are deployed across multiple clouds. 
After you add self-managed clusters from data centers to ACK, you can manage these clusters by using the O&M feature that is provided by ACK.|China site|[Create a cluster registration proxy and register an on-premises cluster](/intl.en-US/User Guide for Kubernetes Clusters/Multi-cloud and hybrid cloud management/Management of external clusters/Create a cluster registration proxy and register an on-premises cluster.md)| |New region|Managed Kubernetes clusters are available on the Alibaba Cloud Japan site. - Saves resources. You do not need to create master nodes in a managed Kubernetes cluster. If you use another type of cluster, you must create at least three master nodes. - Simplifies O&M. ACK manages master nodes. - Ensures security. ACK meets various security requirements. |Japan site|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Cluster creation|Multiple data disks can be mounted to nodes when you create a Kubernetes cluster. This saves you the inconvenience of manually adding data disks after the cluster is created. ACK formats and mounts one of the selected data disks to the docker directory. You can determine how to handle the other data disks.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Cluster creation|An existing security group can be selected when you create a Kubernetes cluster. You can specify an existing security group for the VPC of your cluster in the advanced settings. This allows you to use custom inbound and outbound security group rules to improve the security of your cluster.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Deletion protection|Deletion protection is released to ensure the security of your cluster. You are required to enter a Short Message Service \(SMS\) verification code when you delete a cluster. However, you may mistakenly delete the cluster by calling the API. To ensure the security of clusters, ACK supports deletion protection for clusters. You can enable deletion protection when you create a cluster. This way, you cannot delete the cluster in the console or by calling the API. To delete the cluster, you must first disable deletion protection. You can enable or disable deletion protection on the cluster details page.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Authorization|Multiple RAM users can be authorized at the same time. You can also grant the permissions to manage all clusters. This allows you to efficiently authorize RAM users. The authorization procedure is also optimized to improve user experience.|All regions|[Authorization overview](/intl.en-US/User Guide for Kubernetes Clusters/Authorization management/Authorization overview.md)| |Time zone|The time zone of an application can be synchronized to that of the node. You can select **Synchronize Timezone from Node to Container** when you create an application from an image. 
This ensures that the application pods and the host node use the same time zone.|All regions|[Create a stateless application by using a Deployment](/intl.en-US/User Guide for Kubernetes Clusters/Application management/Workloads/Create a stateless application by using a Deployment.md)| |New region|Container Registry Enterprise Edition is available in the UK \(London\) region. Container Registry Enterprise Edition supports large-scale image distribution with enhanced security. This service is suitable for enterprise users that require high security and large-scale nodes.|UK \(London\)|[t2058233.dita\#concept\_2058233]()| |Container Registry Enterprise Edition|Helm 2 charts are supported by Container Registry Enterprise Edition to make it easier for you to manage cloud-native assets. You can enable the charts component on the Overview page of your Container Registry Enterprise Edition instance. When the component is running, you can start to manage Helm chart repositories.|All regions|N/A| ## June 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|Managed Kubernetes clusters are available in the Japan \(Tokyo\) and UK \(London\) regions on Alibaba Cloud public cloud.|Japan \(Tokyo\) UK \(London\) |[What is Container Service for Kubernetes?](/intl.en-US/Product Introduction/What is Container Service for Kubernetes?.md)| |Terway|A new version of Terway is released. The exclusive ENI mode and the inclusive ENI mode are supported by this version. The default mode is the inclusive ENI mode. - The exclusive ENI mode: In this mode, the number of pods that can be deployed on a node must match the number of ENIs that can be created on the node. This mode improves network performance. - The inclusive ENI mode: In this mode, you can deploy multiple pods on a node. The pods share the same ENI. |All regions|[Use the Terway plug-in](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use the Terway plug-in.md)| |Knative|Knative is supported. Knative is a Kubernetes-based serverless framework. Knative creates a cloud-native and cross-platform orchestration standard for serverless applications. Knative implements this standard by integrating the creation of containers \(or functions\), workload management \(auto scaling\), and event models. ACK supports Knative and allows you to install and upgrade the Build, Serving, and Eventing components. You must deploy Istio before you use Knative. ACK provides instructions to deploy sample applications, and also provides the best practices of tracing, monitoring, and logging applications.|All regions|[Overview](/intl.en-US/User Guide for Kubernetes Clusters/Knative/Overview.md), [Use Knative to deploy serverless applications](/intl.en-US/User Guide for Kubernetes Clusters/Knative/Manage Knative services/Use Knative to deploy serverless applications.md)| |Pod search|Pods can be searched for by node IP address or pod IP address. In the ACK console, choose **Applications** \> **Pods** and specify a node IP address or pod IP address to search for a pod. This saves the time to find pods that you want to manage and maintain.|All regions|N/A| ## May 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|Managed Kubernetes clusters are available in the Australia \(Sydney\) region on Alibaba Cloud public cloud and the China East 2 Finance region on Alibaba Finance Cloud. 
You can create managed Kubernetes clusters in the Australia \(Sydney\) region on Alibaba Cloud public cloud and the China East 2 Finance region on Alibaba Finance Cloud. |Australia \(Sydney\) China East 2 Finance |[What is Container Service for Kubernetes?](/intl.en-US/Product Introduction/What is Container Service for Kubernetes?.md)| |Genomics computing cluster that is designed for genomics computing|Genomics computing clusters are released. This type of cluster uses high-performance computing \(HPC\) instances as worker nodes and provides a large-scale workflow engine for batch genomics computing. Genomics computing clusters are suitable for data splitting and mutation detection, and support data analytics for the following formats: BCL, FASTQ, BAM, SAM, and VCF. In the ACK console, choose **Clusters** \> **Clusters** and click Create Kubernetes Cluster. In the Select Cluster Template dialog box, select **Genomics Computing Cluster**.|All regions|N/A| |FPGA cluster|Field-programmable gate array \(FPGA\) clusters are released. This type of cluster uses FPGA F3 instances as worker nodes and is used for H265 video encoding and image conversion from JPEG to HEIF. FPGA-based video encoding reduces the processing time from more than 1 week to 15 minutes. This significantly reduces the bitrate and saves bandwidth costs when transcoding videos of the same quality. In the ACK console, choose **Clusters** \> **Clusters** and click Create Kubernetes Cluster. In the Select Cluster Template dialog box, select **Dedicated FPGA Cluster** to create a dedicated FPGA cluster.|All regions|N/A| |Cloud controller manager|The cloud controller manager is upgraded to V1.9.3.110-g4938309-aliyun. This version supports more SLB configuration options. The following features are provided:- Allows you to restrict the creation of Internet-facing SLB instances by setting parameters. - Allows you to change certificate IDs. - Allows you to specify a vSwitch when you attach an internal-facing SLB instance to a Service. - Allows you to set SLB instance configuration to redirect traffic from HTTP port 80 to HTTPS port 443. |All regions|[Cloud Controller Manager](/intl.en-US/Release notes/System Component change Records/Core components/Cloud Controller Manager.md)| |Istio|Istio is upgraded to V1.1.4. Istio 1.1.4 improves self-recovery capabilities, and supports automatic recovery of the control plane and automatic upgrades of earlier versions. Istio is also integrated with Time Series Database \(TSDB\). TSDB is a database service that supports high-speed read and write operations, compressed storage, and real-time computing. To fix the local storage issues in Prometheus, TSDB provides remote storage services with high performance and high reliability at low costs. Compared with other remote storage solutions provided by the community, TSDB is easier to use and only requires you to change the Prometheus configuration. The solution supports parallel read and write operations and is highly compatible with PromQL. TSDB is a distributed storage system with auto scaling capabilities. |All regions|N/A| |Container Registry Enterprise Edition|Images can be synchronized across all regions worldwide for instances of Container Registry Enterprise Edition. This solves issues in the global delivery of applications and improves the business iteration efficiency for enterprises. Container Registry Enterprise Edition supports large-scale image distribution with enhanced security. 
It is suitable for enterprises that require high security and a large number of nodes.|All regions|N/A| |Cluster creation|Multiple zones and five master nodes are supported when you create a dedicated Kubernetes cluster. This allows you to create a cross-zone dedicated Kubernetes cluster with five master nodes to significantly improve the availability of the cluster.|All regions|N/A| ## April 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.12.6|Managed or dedicated Kubernetes clusters in all regions can be upgraded from Kubernetes 1.11.5 to 1.12.6 in the ACK console.|All regions|N/A| |Audit log|Audit logs can be collected from managed Kubernetes clusters. Audit logs record operations on the API server and allow cluster administrators to trace the activities of different users.|All regions|[Enable cluster auditing](/intl.en-US/User Guide for Kubernetes Clusters/Security management/Infrastructure security/Enable cluster auditing.md)| |Istio|Istio is upgraded to V1.1. Istio 1.1 allows you to manage Istio applications in the ACK console. You can create and manage Istio applications and services on a graphical interface. You can create different application versions, implement canary releases, set canary release policies, and also configure fault injection policies. |All regions|N/A| |ASK|GPU-accelerated pods are supported when you create applications in an ASK cluster. When you create an application from a template, specify the pod type as GPU in the YAML file.|All regions|N/A| |Container Registry Enterprise Edition|Container Registry Enterprise Edition is available in the China \(Beijing\) region.|China \(Beijing\)|[t2058233.dita\#concept\_2058233]()| |FPGA cluster|FPGA clusters are released to accelerate image and video processing. This type of cluster uses FPGA F3 instances as worker nodes and is used for H265 video encoding and image conversion from JPEG to HEIF. FPGA-based video encoding reduces the processing time from more than 1 week to a short period of time. This significantly reduces the bitrate and reduces bandwidth costs when transcoding videos of the same quality. In the ACK console, choose **Clusters** \> **Clusters** and click Create Kubernetes Cluster. In the Select Cluster Template dialog box, select **Dedicated FPGA Cluster** to create a dedicated FPGA cluster.|All regions|N/A| ## March 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|Managed Kubernetes clusters are available in the China \(Zhangjiakou\), China \(Hohhot\), US \(Silicon Valley\), and Germany \(Frankfurt\) regions.|China \(Zhangjiakou\) China \(Hohhot\) Germany \(Frankfurt\) US \(Silicon Valley\) |[What is Container Service for Kubernetes?](/intl.en-US/Product Introduction/What is Container Service for Kubernetes?.md)| |Container Registry Enterprise Edition|Container Registry Enterprise Edition was officially released at the Alibaba Cloud Summit on March 21, 2019. This edition provides higher security and supports large-scale image distribution. Container Registry Enterprise Edition is in public preview in the China \(Shanghai\) region. To use this edition, submit a ticket.|China \(Shanghai\)|[t2058233.dita\#concept\_2058233]()| |Container Registry Shared Edition|Container Registry Shared Edition is available in all regions on the International site \(alibabacloud.com\).|All regions|[t2058233.dita\#concept\_2058233]()| |Kubernetes 1.12.6|Kubernetes 1.12.6 is supported. 
You can create a cluster of Kubernetes 1.12 in the console.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Log Service|The Log Service plug-in is supported by managed Kubernetes clusters. You can enable Log Service when you create a managed or dedicated Kubernetes cluster. After the plug-in is installed, you can use Log Service to manage Kubernetes logs.|All regions|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Windows cluster|Managed Kubernetes clusters that run Windows are available. You can create this type of cluster in the ACK console or by calling the API. This way, you can create Windows containers and deploy traditional Windows applications on cloud-native platforms to achieve agility and elasticity.|All regions|Windows clusters are no longer supported.| |IPVS|The IP Virtual Server \(IPVS\) proxy mode is supported. Compared with the traditional iptables mode, the IPVS mode significantly improves the load balancing performance in large-scale clusters. You can use this mode in all clusters and all regions.|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Cluster template|Multiple cluster templates are provided in the console. You can select templates of different cluster types based on your business requirements. Templates of the following cluster types are supported: managed Kubernetes clusters, clusters with EBM instances, GPU-accelerated clusters, and Windows clusters. Cluster templates make it easier to create ACK clusters that fit your workloads.|All regions|N/A| |Elastic Container Instance|High-specification elastic container instances are provided for genomics computing. The maximum CPU specification is increased from 8 vCPUs to 64 vCPUs. The highest specification of an elastic container instance is 64 vCPUs and 256 GiB memory. The lowest specification of an elastic container instance is 0.25 vCPU and 0.5 GiB memory. You can select a specification based on your business requirements to achieve the highest cost efficiency.|All regions|[Limits](https://www.alibabacloud.com/help/zh/doc-detail/89138.html)| ## February 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|Managed Kubernetes clusters are available in the China \(Shenzhen\) region. Managed Kubernetes clusters provide the following core benefits:- Saves resources. You do not need to create master nodes in a managed Kubernetes cluster. Compared with other cluster types, this cluster type saves you the costs of three master nodes. - Simplifies O&M. ACK manages the master nodes. - Ensures security. ACK meets various security requirements. |China \(Shenzhen\)|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |App Catalog|Knative add-ons are provided in App Catalog. Knative is a scale-to-zero and request-driven computing runtime based on Kubernetes and Istio. Knative supports the deployment of serverless applications and functions. ACK provides Knative add-ons to help you build the Knative Serving environment in your cluster. 
|All regions|[Overview](/intl.en-US/User Guide for Kubernetes Clusters/Knative/Overview.md)| |Cluster check|Cluster checks are supported. You can use this feature to perform in-depth checks on cluster resources, components, and configurations. This can identify the causes of errors in your cluster.|Mainland China|[Use cluster check to troubleshoot cluster issues](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Use cluster check to troubleshoot cluster issues.md)| ## January 2019 |Feature|Description|Region|References| |-------|-----------|------|----------| |Windows container|Windows containers are supported. This allows you to deploy and run Windows applications in containers of Kubernetes clusters. This enables Kubernetes-based elastic scheduling and management of Windows applications. You can add Windows nodes to managed and dedicated Kubernetes clusters. |All regions|[Create a Windows node pool](/intl.en-US/User Guide for Kubernetes Clusters/Windows container/Create a Windows node pool.md)| |Container Registry Enterprise Edition|Container Registry Enterprise Edition is released for internal preview. Container Registry Enterprise Edition provides container image repositories built on top of dedicated resources. This edition provides stable image building, large-scale image distribution, and image hosting with enterprise-class security. It is suitable for enterprises that require high security and a large number of nodes. Container Registry Enterprise Edition is in internal preview. To use this service, submit a ticket. |All regions|[t2058233.dita\#concept\_2058233]()| |Intelligent cluster O&M|Intelligent cluster O&M is available in the China \(Hangzhou\) region. Intelligent O&M provides the best practices for cluster management in different scenarios. This allows you to identify the causes of errors in the cluster by performing in-depth checks on cluster resources, components, and configurations.|China \(Hangzhou\)|[Use cluster check to troubleshoot cluster issues](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Use cluster check to troubleshoot cluster issues.md)| |ARMS|ARMS is supported and integrated into ACK. After you install the ARMS plug-in, you can monitor the application performance in your cluster. ARMS is a monitoring service for application performance management \(APM\). To monitor a Java application, you only need to attach an ARMS agent to the startup script of the application. No code change is required. ARMS enables you to locate failed API operations or slow calls, reproduce API parameters, detect memory leaks, and discover system bottlenecks. This significantly improves the efficiency of service diagnostics. |All regions|[Monitor application performance](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Monitoring management/Monitor application performance.md)| |Elastic Container Instance|Starting January 22, 2019, you are charged for the commercial use of Elastic Container Instance. Elastic container instances are deployed as the underlying infrastructure of ASK clusters. You are charged when you create elastic container instances in ASK clusters. ASK clusters remain free to use.|All regions|[Billing](https://www.alibabacloud.com/help/zh/doc-detail/89142.html)| |New region|ASK clusters are available in the China \(Beijing\) and China \(Shenzhen\) regions. 
ASK clusters provide excellent experience with serverless containers.|China \(Beijing\) China \(Shenzhen\) |[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| ## December 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|ACK is available in the UK \(London\) region on both the China site \(aliyun.com\) and the International site \(alibabacloud.com\).|UK \(London\)|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |New region|Managed Kubernetes clusters are available in the China \(Shanghai\), Malaysia \(Kuala Lumpur\), and India \(Mumbai\) regions on both the China site \(aliyun.com\) and the International site \(alibabacloud.com\).|China \(Shanghai\) Malaysia \(Kuala Lumpur\) India \(Mumbai\) |[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Node removal|Nodes can be removed from an ACK cluster. You can also choose whether to release the related ECS instances.|All regions|[Remove nodes from an ACK cluster](/intl.en-US/User Guide for Kubernetes Clusters/Node management/Node/Remove nodes from an ACK cluster.md)| |DaemonSet|DaemonSets are supported. DaemonSet is a daemon process that ensures that each node runs one copy of a pod.|All regions|N/A| |Istio|Custom Istio Ingress and Egress gateways are supported by configuring different parameters.|All regions|[ASM]()| |Istio CoreDNS|Istio CoreDNS is supported. You can use the CoreDNS plug-in to read Istio service entries and associate the IP addresses of the services to their host addresses.|All regions|[ASM]()| |Cluster creation|Existing ECS instances can be added as worker nodes when you create a managed Kubernetes cluster.|All regions|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| ## November 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|Managed Kubernetes clusters are available in the Indonesia \(Jakarta\) region on the International site \(alibabacloud.com\).|Indonesia \(Jakarta\)|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Terway|The Terway plug-in is released. 
Terway enables direct communication between containers through ENIs and provides higher network performance than Flannel.|All regions|[Use the Terway plug-in](/intl.en-US/User Guide for Kubernetes Clusters/Network/Container network/Use the Terway plug-in.md)| |Worker node|Thumbnail images are used to display the performance metrics of worker nodes, which makes it easy for you to view the states of nodes.|All regions|N/A| |Node adding|Multiple existing nodes can be added to a cluster at the same time.|All regions|N/A| |Cluster certificate|Rolling renewal of cluster certificates is supported to prevent certificates from expiring.|All regions|N/A| ## October 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|ACK is available in the China South 1 Finance region on Alibaba Finance Cloud.|China South 1 Finance|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |New region|N/A|Regions outside China|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Deployment|Version management and rollback are supported for Deployments.|All regions|N/A| |Istio|Istio is deeply integrated into ACK and Istio add-ons are supported.|All regions|N/A| ## September 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.11|- Kubernetes 1.11 is supported to provide features, such as CRD upgrade, CoreDNS general availability \(GA\), pod priority settings, and preemptive scheduling. - Multiple Kubernetes versions are supported, such as Kubernetes 1.10 and 1.11. - Multi-container applications and stateful applications are supported in the console. |All regions|[t21663.md\#](/intl.en-US/User Guide for Kubernetes Clusters/Application management/Workloads/Use a StatefulSet to create a stateful application.md)| |Container Registry|Images can be pulled from the private repositories of Container Registry without a password.|All regions| | |Auto scaling|Auto scaling of nodes is supported. ACK provides the auto scaling component for nodes to automatically scale in and out. Regular instances, GPU-accelerated instances, and preemptible instances can be automatically added to or removed from an ACK cluster as required. 
This feature is applicable to instances that are deployed across multiple zones and diverse instance types, and also supports different scaling modes.|All regions|[Auto scaling of nodes](/intl.en-US/User Guide for Kubernetes Clusters/Auto Scaling/Auto scaling of nodes.md)| |Preemptible instances are supported.|N/A|All regions| | ## August 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |Managed Kubernetes cluster|Managed Kubernetes clusters are released for public preview.|All regions|[Create a managed Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a managed Kubernetes cluster.md)| |Istio|Istio add-ons are supported.|All regions|N/A| ## July 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|N/A|Australia \(Sydney\)|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Canary releases and phased releases are supported.|N/A|All regions|[Use Ingresses to implement canary releases](/intl.en-US/User Guide for Kubernetes Clusters/Network/Ingress management/Use Ingresses to implement canary releases.md). Phased releases are no longer supported by ACK.| ## June 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|N/A|Japan \(Tokyo\) China \(Hohhot\) |[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |FPGA and HugePages are supported by Kubernetes 1.10.|N/A|All regions|N/A| |Application monitoring and alerting|Application monitoring and alerting are supported.|All regions|N/A| |The subscription billing method is supported when you create a Kubernetes cluster.|N/A|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |Ingresses and the exec and attach commands are supported.|N/A|All regions|[Features](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Features.md)| ## May 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |New region|ACK is available in the China East 2 Finance region on Alibaba Finance Cloud. Alibaba Finance Cloud provides services in compliance with security regulations.|China East 2 Finance|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |ASK is released.|N/A|All regions|[Create an ASK cluster](/intl.en-US/User Guide for Serverless Kubernetes Clusters/Cluster/Create an ASK cluster.md)| |Blue-green releases, canary releases, and A/B testing are supported.|N/A|All regions|[Use Ingresses to implement canary releases](/intl.en-US/User Guide for Kubernetes Clusters/Network/Ingress management/Use Ingresses to implement canary releases.md)| ## April 2018 |Description|Region|References| |-----------|------|----------| |ACK is available in five regions in Southeast Asia, the Middle East, and India. 
Kubernetes 1.9 is stably supported.|Malaysia \(Kuala Lumpur\) Indonesia \(Jakarta\) Singapore \(Singapore\) India \(Mumbai\) UAE \(Dubai\) |[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)| |MySQL, RDS, RabbitMQ, and Spark are supported in Service Catalog.|All regions|This feature is phased out.| |Management of applications released by using Helm is supported in App Catalog.|All regions|[Manage releases by using Helm](/intl.en-US/User Guide for Kubernetes Clusters/Release management/Manage releases by using Helm.md)| ## March 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.9|Kubernetes 1.9.3 is supported. ACK releases Workloads API. By default, CRD is enabled. GPU scheduling is supported. You can select custom ECS images when you create a cluster. You can also reset images when you add nodes to a cluster.|All regions|N/A| |Helm|App Catalog is released to allow you to deploy applications by using Helm.|All regions|[Manage releases by using Helm](/intl.en-US/User Guide for Kubernetes Clusters/Release management/Manage releases by using Helm.md)| |ServiceBroker|App Catalog is released to support ServiceBroker.|All regions|This feature is phased out.| |CloudMonitor|Nodes can be monitored by using CloudMonitor.|All regions|[Monitor basic resources](/intl.en-US/User Guide for Kubernetes Clusters/Observability/Monitoring management/Monitor basic resources.md)| ## January 2018 |Feature|Description|Region|References| |-------|-----------|------|----------| |ACK and Container Registry are released on the International site \(alibabacloud.com\).|N/A|Regions outside China|[What is Container Service for Kubernetes?](/intl.en-US/Product Introduction/What is Container Service for Kubernetes?.md)| |Kubernetes 1.8.4 is supported to provide features such as security enhancement and auto scaling.|N/A|All regions|[Auto scaling of nodes](/intl.en-US/User Guide for Kubernetes Clusters/Auto Scaling/Auto scaling of nodes.md)| |FlexVolume|The FlexVolume plug-in is released to support disks, NAS file systems, and OSS buckets.|All regions|[Usage notes for disk volumes](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-Flexvolume/Disk volumes/Usage notes for disk volumes.md), [t18764.md\#](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-Flexvolume/NAS volumes/Use NAS volumes.md), and [Mount OSS volumes](/intl.en-US/User Guide for Kubernetes Clusters/Storage management-Flexvolume/OSS volumes/Mount OSS volumes.md)| |Network policies and bandwidth throttling|Kubernetes network policies and bandwidth throttling are supported. 
This improves network performance.|All regions|[Use annotations to configure load balancing](/intl.en-US/User Guide for Kubernetes Clusters/Network/Service Management/Use annotations to configure load balancing.md)| |EBM instances are supported.|N/A|All regions|N/A| ## October 2017 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.8.1|Kubernetes 1.8.1 is supported.|All regions|[What is Container Service for Kubernetes?](/intl.en-US/Product Introduction/What is Container Service for Kubernetes?.md)| |Blockchain solutions are released for public preview.|N/A|All regions|N/A| ## August 2017 |Feature|Description|Region|References| |-------|-----------|------|----------| |Kubernetes 1.7.2 is supported.|N/A|All regions|[Create a dedicated Kubernetes cluster](/intl.en-US/User Guide for Kubernetes Clusters/Cluster/Create Kubernetes clusters/Create a dedicated Kubernetes cluster.md)|
150.856234
1,899
0.799457
eng_Latn
0.974078
ff3d87cab276d60256351064bf539bcf64c42847
31
md
Markdown
README.md
atm-uam/atm-uam.github.io
eb7593aaaf60410e82f84950339bf6599082e5a9
[ "CC0-1.0" ]
null
null
null
README.md
atm-uam/atm-uam.github.io
eb7593aaaf60410e82f84950339bf6599082e5a9
[ "CC0-1.0" ]
null
null
null
README.md
atm-uam/atm-uam.github.io
eb7593aaaf60410e82f84950339bf6599082e5a9
[ "CC0-1.0" ]
null
null
null
# Website of Miguel Alvarez T.
15.5
30
0.741935
eng_Latn
0.434765
ff3d981911cc5490f11fcae9d9e40ae0bf6c8bc0
812
md
Markdown
archive/s/swift/README.md
jeffin07/sample-programs
5b064cd1556d0ba685f1927ae58c2349b7c2fd07
[ "MIT" ]
4
2019-10-18T13:04:23.000Z
2020-10-03T16:07:14.000Z
archive/s/swift/README.md
jeffin07/sample-programs
5b064cd1556d0ba685f1927ae58c2349b7c2fd07
[ "MIT" ]
null
null
null
archive/s/swift/README.md
jeffin07/sample-programs
5b064cd1556d0ba685f1927ae58c2349b7c2fd07
[ "MIT" ]
1
2020-07-09T03:26:23.000Z
2020-07-09T03:26:23.000Z
# Sample Programs in Swift Welcome to Sample Programs in Swift! ## Sample Programs - [Hello World in Swift](https://therenegadecoder.com/code/hello-world-in-swift/) - [Reverse String in Swift](https://therenegadecoder.com/code/reverse-a-string-in-swift/) - [Fizzbuzz in Swift](https://github.com/TheRenegadeCoder/sample-programs/issues/482) - [Baklava in Swift](https://github.com/TheRenegadeCoder/sample-programs/issues/620) ## Fun Facts - Debut: 2014 - Developer: Apple - Typing: Static ## References - [Swift Wiki](https://en.wikipedia.org/wiki/Swift_(programming_language)) - [Swift Docs](https://swift.org/) - [Swift GitHub](https://github.com/apple/swift) - [Swift Online Editor (iswift)](https://iswift.org/playground) - [Swift Online Editor (GDB)](https://www.onlinegdb.com/online_swift_compiler)
32.48
89
0.747537
kor_Hang
0.135939
ff3edb511e92761d1b604c55b17645238b2ec5aa
349
md
Markdown
intro_reports/developer-18.md
VaibhavAgarwal2210/opensource-iitbhu
65f90cb60ea9210aa961e21de539bec1426f9924
[ "MIT" ]
9
2018-05-24T11:59:41.000Z
2022-03-17T13:10:40.000Z
intro_reports/developer-18.md
VaibhavAgarwal2210/opensource-iitbhu
65f90cb60ea9210aa961e21de539bec1426f9924
[ "MIT" ]
19
2018-05-24T17:56:05.000Z
2018-06-01T05:43:04.000Z
intro_reports/developer-18.md
VaibhavAgarwal2210/opensource-iitbhu
65f90cb60ea9210aa961e21de539bec1426f9924
[ "MIT" ]
231
2018-05-24T14:46:37.000Z
2022-03-17T13:10:17.000Z
Hey, I am Manish from 1st-year mining engineering, iitbhu ## Currently I can code in c, c++ ## Nowadays, I am learning python3 for ML purposes because I am interested in AI ## In web development, in frontend dev, I know HTML, CSS, and just started working on js, and I know php and will soon start the django framework for backend development, ....
17.45
73
0.730659
eng_Latn
0.999038
ff3f08b62886fc1839405328e29a0c6d3659f85b
568
md
Markdown
hugo/content/posts/podcast-14.md
Reeywhaar/radio-t-site
e88967492d07d0c0157cc02ab4acaf28263edeb6
[ "MIT" ]
null
null
null
hugo/content/posts/podcast-14.md
Reeywhaar/radio-t-site
e88967492d07d0c0157cc02ab4acaf28263edeb6
[ "MIT" ]
null
null
null
hugo/content/posts/podcast-14.md
Reeywhaar/radio-t-site
e88967492d07d0c0157cc02ab4acaf28263edeb6
[ "MIT" ]
null
null
null
+++ title = "Радио-T 14" date = "2006-12-10T04:51:00" categories = ["podcast"] filename = "rt_podcast14" +++ - Конкурс по поиску и некоторые около-поисковые заметки - Новый миллионный проект - Пара изменений в gmail - Суперкомпьютер для всех и каждого - Самые последние и самые достоверные слухи О ... - Zune и UMG - Грамотная активация висты "особым" образом - Oбзор [гаджетов](http://crunchgear.com/2006/12/05/crunchgears-best-of-2006/) [аудио](http://cdn.radio-t.com/rt_podcast14.mp3) <audio src="http://cdn.radio-t.com/rt_podcast14.mp3" preload="none"></audio>
28.4
78
0.730634
rus_Cyrl
0.467594
ff3f15e0d1f44ce07789ad002c4a293c1194c083
1,378
md
Markdown
help/communities/tools.md
Jdruwe/experience-manager-65.en
3c8ddae25b0ebb90ec7fd368aed39e2098d049ab
[ "MIT" ]
null
null
null
help/communities/tools.md
Jdruwe/experience-manager-65.en
3c8ddae25b0ebb90ec7fd368aed39e2098d049ab
[ "MIT" ]
null
null
null
help/communities/tools.md
Jdruwe/experience-manager-65.en
3c8ddae25b0ebb90ec7fd368aed39e2098d049ab
[ "MIT" ]
null
null
null
--- title: Communities Tools seo-title: Communities Tools description: How to access Communities Tools console seo-description: How to access Communities Tools console uuid: 3172fe00-7132-4cee-9fd1-b6f96eb43200 contentOwner: Janice Kendall products: SG_EXPERIENCEMANAGER/6.5/COMMUNITIES topic-tags: administering content-type: reference discoiquuid: 410149d6-15bd-41e5-bdba-1d8e6eab7b87 pagetitle: Communities Tools --- # Communities Tools {#communities-tools} To access the Communities tools console, log in to your author instance: * From global navigation: **[!UICONTROL Tools]** > **[!UICONTROL Communities]**. ![chlimage_1-129](assets/chlimage_1-129.png) * [Site Templates](sites.md) - Console for site template creation and management. * [Group Templates](tools-groups.md) - Console for group template creation and management. * [Community Functions](functions.md) - Console for community function creation and management. * [Storage Configuration](srp-config.md) - Console for configuration and selection of the [default SRP](working-with-srp.md). * [Component Guide](components-guide.md) - Opens an interactive site that allows for experimentation with how the SCF components work and how they can be configured or customized. * [Badges](badges.md) - Console from where custom badges can be added for use in [scoring and badging rules](implementing-scoring.md).
39.371429
179
0.786647
eng_Latn
0.838681
ff3f190e8a549e300239e3f00efc8c1dc44a46bf
507
md
Markdown
flask_app/README.md
glasnt/boilerplates
2205fbd57a08dc7047220e9ded8a2f35429776ac
[ "BSD-3-Clause" ]
2
2018-11-19T02:54:44.000Z
2018-11-19T14:01:42.000Z
flask_app/README.md
glasnt/boilerplates
2205fbd57a08dc7047220e9ded8a2f35429776ac
[ "BSD-3-Clause" ]
null
null
null
flask_app/README.md
glasnt/boilerplates
2205fbd57a08dc7047220e9ded8a2f35429776ac
[ "BSD-3-Clause" ]
null
null
null
# Basic Flask app Read more: [Flask Quickstart](http://flask.pocoo.org/docs/1.0/quickstart/) ## Start your website ```shell flask run ``` ⚠️🎩: This command only works because by default, Flask will run `wsgi.py` or `app.py`. If you name your app anything else, you need to tell Flask what to run: ```shell $ export FLASK_APP=site.py $ flask run ``` ## View your website Go to a browser and open `localhost:5000` (Flask should tell you this URL) ## Custom Ports ```shell $ flask run --port 1337 ```
18.777778
159
0.692308
eng_Latn
0.976777
ff3f2e98eeb837b075a998081e857518059587e0
2,581
md
Markdown
README.md
FGrosse/graphigo
5770fe631d9a4eef82623c6ee9425ee98ad1aeb1
[ "MIT" ]
2
2019-07-23T23:56:21.000Z
2020-11-27T20:12:12.000Z
README.md
FGrosse/graphigo
5770fe631d9a4eef82623c6ee9425ee98ad1aeb1
[ "MIT" ]
null
null
null
README.md
FGrosse/graphigo
5770fe631d9a4eef82623c6ee9425ee98ad1aeb1
[ "MIT" ]
2
2017-08-30T18:45:09.000Z
2019-04-20T17:52:05.000Z
Graphigo ======== [![Build Status](https://travis-ci.org/fgrosse/graphigo.svg?branch=master)](https://travis-ci.org/fgrosse/graphigo) [![GoDoc](https://godoc.org/gopkg.in/fgrosse/graphigo.v2?status.svg)](https://godoc.org/gopkg.in/fgrosse/graphigo.v2) A simple go client for the [graphite monitoring tool][1]. ## Features - simple and clean API - send a whole bunch of metrics and send them all with one TCP call - supports timeouts - supports prefixes - sane defaults - all client functions are automatically noops if the client is `nil` - good test coverage and documentation - stable API via gopkg.in ## Installation Use `go get` to install graphigo: ``` go get gopkg.in/fgrosse/graphigo.v2 ``` No additional dependencies are required. ## Usage ```go package main import ( "time" "gopkg.in/fgrosse/graphigo.v2" ) func main() { c := graphigo.Client{ // If you omit the entire address localhost:2004 will be assumed // Just omitting the port is also valid and wil use the default port Address: "graphite.your.org:2004", // set a custom timeout (seconds) for the graphite connection // if timeout = 0 then the graphigo.DefaultTimeout = 5 seconds is used // Setting Timeout to graphite.TimeoutDisabled (-1) disables the timeout Timeout: 42, // set a custom prefix for all recorded metrics of this client (optional) Prefix: "foo.bar.baz", } if err := c.Connect(); err != nil { panic(err) // do proper error handling } // close the TCP connection properly if you don't need it anymore defer c.Close() // capture and send values using a single line c.SendValue("hello.graphite.world", 42) // capture a metric and send it any time later. You can use any type as value // The next line could also be simplified with graphigo.CaptureMetric metric := graphigo.Metric{Name: "test", Value: 3.14, Timestamp: time.Now()} defer c.Send(metric) // create a whole bunch of metrics and send them all with one TCP call c.SendAll([]graphigo.Metric{ {Name: "shut", Value: 1}, {Name: "up", Value: 2}, {Name: "and", Value: 3}, {Name: "take", Value: 4}, {Name: "my", Value: 5}, {Name: "money", Value: 6}, }) } ``` ## Alternatives - https://github.com/marpaia/graphite-golang - https://github.com/ohlol/graphite-go ## Contributing Any contributions are always welcome (use pull requests). Please keep in mind that I might not always be able to respond immediately but I usually try to react within the week ☺. [1]: http://graphite.readthedocs.org/en/latest/overview.html [2]: https://godoc.org/gopkg.in/fgrosse/graphigo.v2
27.752688
120
0.709415
eng_Latn
0.897451
ff3f3c7ab82e6dd106b4cbd196dc7803b7938d7b
3,175
md
Markdown
spreadsheet/how-to/restricting-number-of-visible-rows-and-columns.md
attilaantal/winforms-docs
c311033085e6f770435eaa3c921edde9efcb12dd
[ "MIT" ]
30
2016-02-18T13:23:42.000Z
2021-09-23T01:26:05.000Z
spreadsheet/how-to/restricting-number-of-visible-rows-and-columns.md
attilaantal/winforms-docs
c311033085e6f770435eaa3c921edde9efcb12dd
[ "MIT" ]
25
2016-03-16T07:13:47.000Z
2021-07-30T13:31:24.000Z
spreadsheet/how-to/restricting-number-of-visible-rows-and-columns.md
attilaantal/winforms-docs
c311033085e6f770435eaa3c921edde9efcb12dd
[ "MIT" ]
183
2016-02-19T09:56:35.000Z
2022-01-17T18:03:36.000Z
--- title: Restrict the Number of Visible Rows and Columns page_title: Restrict the Number of Visible Rows and Columns description: Restrict the Number of Visible Rows and Columns slug: radspreadsheet-ui-restricting-number-of-visible-rows-and-columns tags: restrict,the,number,of,visible,rows,and,columns published: True position: 1 --- # Restrict the Number of Visible Rows and Columns By default, the document of __RadSpreadsheet__ presents to the user 1048576 rows and 16384 columns. However, it is not always necessary to present the user with the entire worksheet. In certain cases, you may want to restrict the visible area of the document in order to limit the user interaction to a certain range. For such scenarios __RadSpreadsheet__ offers an easy way to reduce the number of visible rows and columns. ## Restricting the Number of Visible Rows and Columns __RadSpreadsheetElement__ exposes a __VisibleSize__ property of type __SizeI__ that determines the count of the visible rows and columns. The __SizeI__ structure is similar to Size except that internally it uses integer instead of double values. That said, if you would like to set the visible columns and rows to 50 and 100 respectively, you need to create a __SizeI__ with __Width__ set to 50 and __Height__ set to 100 and assign the instance to the __VisibleSize__ property. Here is a sample snippet that illustrates how to achieve this: {{source=..\SamplesCS\Spreadsheet\SelectionCode.cs region=Selection_11}} {{source=..\SamplesVB\Spreadsheet\SelectionCode.vb region=Selection_11}} ````C# radSpreadsheet.SpreadsheetElement.VisibleSize = new SizeI(5, 10); ```` ````VB.NET radSpreadsheet.SpreadsheetElement.VisibleSize = New SizeI(5, 10) ```` {{endregion}} As a result, __RadSpreadsheet__ displays to the user only 5 columns and 10 rows: ![Rad Spreadsheet UI Restrict Number Visible Rows Columns 1](images/spreadsheet-restrict-the-number-of-visible-rows-and-columns001.png) ## How it works Note that the __VisibleSize__ property affects only the user interface and does not change the count of rows and columns in the document model. In other words, the property restricts the number of rows and columns presented to the user, not the actual number of rows and columns in the worksheets. This has several implications that you need to consider before you start using the feature: * Even though the user is unable to reach the cells that are not visible, you as a developer can assign a value to every cell in the worksheet. Also, you may allow the user to open a file that holds data that exceeds the visible size. RadSpreadsheet does not have a mechanism to notify you that there is data outside the visible range. In such cases it is your responsibility to check whether the VisibleSize is less than the UsedCellRange of the worksheet and correct the number of visible rows and columns (a minimal sketch of such a check is shown after this list). * Since the property restricts the rows and columns that the RadWorksheetEditor presents to the user, all worksheets will be displayed to fit in the restricted size. RadSpreadsheet cannot assign a different size to each individual worksheet out of the box.
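A minimal sketch of the check mentioned in the first bullet could look like the following. It assumes that the active worksheet is reachable through `radSpreadsheet.SpreadsheetElement.ActiveWorksheet` and that `UsedCellRange` exposes its bottom-right cell through `ToIndex.RowIndex` and `ToIndex.ColumnIndex`; verify these member names against the API reference of the RadSpreadsheet version you use.

````C#
// Hypothetical helper: widen the restricted visible area when the loaded
// worksheet already contains data outside of it. ActiveWorksheet, UsedCellRange
// and ToIndex are assumed member names taken from the text above - confirm them
// against your RadSpreadsheet API reference before relying on this.
var spreadsheetElement = radSpreadsheet.SpreadsheetElement;
var usedRange = spreadsheetElement.ActiveWorksheet.UsedCellRange;

// Cell indexes are zero-based, so add 1 to get row and column counts.
int usedRows = usedRange.ToIndex.RowIndex + 1;
int usedColumns = usedRange.ToIndex.ColumnIndex + 1;

SizeI visible = spreadsheetElement.VisibleSize;

if (usedColumns > visible.Width || usedRows > visible.Height)
{
    // Grow the visible area just enough to expose the existing data
    // (Width = columns, Height = rows, as in the snippet above).
    spreadsheetElement.VisibleSize = new SizeI(
        Math.Max(visible.Width, usedColumns),
        Math.Max(visible.Height, usedRows));
}
````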
67.553191
541
0.793071
eng_Latn
0.999267
ff3f9e72f13807b13f2229ba5c08001b838615e5
3,609
md
Markdown
hugo-source/content/output/sahakAra nagar, bengaLUru/MULTI_NEW_MOON_SIDEREAL_MONTH_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2020_monthly/2020-03/2020-03-15.md
Prabhakaran-cbe/jyotisha
689327c5944c6cc84b7e58af4deae2a4ebe94d7b
[ "MIT" ]
40
2017-10-01T04:22:35.000Z
2020-11-30T03:47:57.000Z
hugo-source/content/output/sahakAra nagar, bengaLUru/MULTI_NEW_MOON_SIDEREAL_MONTH_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2020_monthly/2020-03/2020-03-15.md
Prabhakaran-cbe/jyotisha
689327c5944c6cc84b7e58af4deae2a4ebe94d7b
[ "MIT" ]
71
2017-08-27T13:54:06.000Z
2020-12-11T01:16:47.000Z
hugo-source/content/output/sahakAra nagar, bengaLUru/MULTI_NEW_MOON_SIDEREAL_MONTH_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2020_monthly/2020-03/2020-03-15.md
Prabhakaran-cbe/jyotisha
689327c5944c6cc84b7e58af4deae2a4ebe94d7b
[ "MIT" ]
23
2017-08-27T11:54:41.000Z
2020-11-14T19:41:58.000Z
+++ title = "2020-03-15" +++ ## फाल्गुनः-12-22,वृश्चिकः-अनूराधा🌛🌌◢◣मीनः-पूर्वप्रोष्ठपदा-12-02🌌🌞◢◣तपस्यः-12-26🪐🌞भानुः - Indian civil date: 1941-12-25, Islamic: 1441-07-20 Rajab - संवत्सरः - विकारी - वर्षसङ्ख्या 🌛- शकाब्दः 1941, विक्रमाब्दः 2076, कलियुगे 5120 ___________________ - 🪐🌞**ऋतुमानम्** — शिशिरऋतुः उत्तरायणम् - 🌌🌞**सौरमानम्** — शिशिरऋतुः उत्तरायणम् - 🌛**चान्द्रमानम्** — शिशिरऋतुः फाल्गुनः ___________________ ## खचक्रस्थितिः - |🌞-🌛|**तिथिः** — कृष्ण-सप्तमी►27:19*; कृष्ण-अष्टमी► - 🌌🌛**नक्षत्रम्** — अनूराधा►11:22; ज्येष्ठा► (वृश्चिकः) - 🌌🌞**सौर-नक्षत्रम्** — पूर्वप्रोष्ठपदा► ___________________ - 🌛+🌞**योगः** — वज्रम्►15:13; सिद्धिः► - २|🌛-🌞|**करणम्** — विष्टिः►15:46; बवः►27:19*; बालवः► - 🌌🌛- **चन्द्राष्टम-राशिः**—मेषः - 🌞-🪐 **अमूढग्रहाः** - बुधः (25.55° → 26.05°), मङ्गलः (65.87° → 66.17°), शनैश्चरः (55.47° → 56.38°), गुरुः (62.94° → 63.77°), शुक्रः (-45.74° → -45.79°) ___________________ ## दिनमान-कालविभागाः - 🌅**सूर्योदयः**—06:30-12:28🌞️-18:26🌇 - 🌛**चन्द्रास्तमयः**—11:00; **चन्द्रोदयः**—00:14(+1) ___________________ - 🌞⚝भट्टभास्कर-मते वीर्यवन्तः— **प्रातः**—06:30-08:00; **साङ्गवः**—09:29-10:59; **मध्याह्नः**—12:28-13:58; **अपराह्णः**—15:27-16:57; **सायाह्नः**—18:26-19:57 - 🌞⚝सायण-मते वीर्यवन्तः— **प्रातः-मु॰1**—06:30-07:18; **प्रातः-मु॰2**—07:18-08:05; **साङ्गवः-मु॰2**—09:41-10:29; **पूर्वाह्णः-मु॰2**—12:04-12:52; **अपराह्णः-मु॰2**—14:27-15:15; **सायाह्नः-मु॰2**—16:51-17:39; **सायाह्नः-मु॰3**—17:39-18:26 - 🌞कालान्तरम्— **ब्राह्मं मुहूर्तम्**—04:53-05:42; **मध्यरात्रिः**—23:15-01:40 ___________________ - **राहुकालः**—16:57-18:26; **यमघण्टः**—12:28-13:58; **गुलिककालः**—15:27-16:57 ___________________ - **शूलम्**—प्रतीची दिक् (►11:16); **परिहारः**–गुडम् ___________________ ## उत्सवाः - फाल्गुन-अष्टका-पूर्वेद्युः, बाजीराव-जयसिंह-मेलनम् #२८५, भानुसप्तमी ### बाजीराव-जयसिंह-मेलनम् #२८५ Event occured on 1735-03-15 (gregorian). Julian date was converted to Gregorian in this reckoning. bAjI rAv met jayasiMha of jayapura, extracted tribute and alliance. At Nath-Dwara Bajirao and his wife Kashibai offered their joint devotion to the celebrated deity, and proceeding further he and Sawai Jaysinh had their first personal meeting on 4th March at Bambhola near Kishangad. They arrived both riding on their elephants and as soon as they sighted each other, they dismounted, embraced and sat down on the same musnad in an open Darbar. Their visit lasted for several days up to 8th March when they discussed the peace terms and arrangements for the visit to the Emperor regarding which a communication was expected from Delhi. Jaysinh offered to pay 5 lacs Chauth annually for Jaipur and promised to obtain from the Emperor written grants for the provinces of Malwa and Gujarat. #### Details - [Edit config file](https://github.com/jyotisham/adyatithi/blob/master/mahApuruSha/xatra-later/julian/day/03/04/bhAjI-rAva-jayasiMha-melanam.toml) - Tags: ### भानुसप्तमी saptamī tithi on a Sunday is as sacred as a solar eclipse. Particularly good for worshipping Surya. अमावस्या तु सोमेन सप्तमी भानुना सह। चतुर्थी भूमिपुत्रेण सोमपुत्रेण चाष्टमी। चतस्रस्तिथयस्त्वेताः सूर्यग्रहणसन्निभाः॥ #### Details - [Edit config file](https://github.com/jyotisham/adyatithi/blob/master/time_focus/tithi-vara-combinations/description_only/bhAnusaptamI.toml) - Tags: RareDays Combinations ### फाल्गुन-अष्टका-पूर्वेद्युः Shannavati Shraddham Day. #### Details - [Edit config file](https://github.com/jyotisham/adyatithi/blob/master/devatA/pitR/relative_event/phAlguna-aSTakA-zrAddham/offset__-1/phAlguna-aSTakA-pUrvEdyuH.toml) - Tags: ShannavatiTarpanaDays
46.269231
719
0.62455
yue_Hant
0.279184
ff40c635d555ac51bba143b6ed1977848a701bc0
4,100
md
Markdown
docs/team/luminousleek.md
rohit0718/tp
8a3d70d5ea7ac74cfcfc046486ed1b5e0725be7a
[ "MIT" ]
null
null
null
docs/team/luminousleek.md
rohit0718/tp
8a3d70d5ea7ac74cfcfc046486ed1b5e0725be7a
[ "MIT" ]
139
2021-09-27T00:10:42.000Z
2021-11-08T08:14:11.000Z
docs/team/luminousleek.md
rohit0718/tp
8a3d70d5ea7ac74cfcfc046486ed1b5e0725be7a
[ "MIT" ]
4
2021-09-14T05:59:26.000Z
2021-09-25T16:10:15.000Z
--- layout: page title: Isaac Lee's Project Portfolio Page --- ### Project: NUS ModBook NUS ModBook is a desktop application for NUS students to manage modules, optimized for use via a Command Line Interface. I was the team leader for this project. Given below are my contributions to the project. * **Code contributed**: [RepoSense link](https://nus-cs2103-ay2122s1.github.io/tp-dashboard/?search=&sort=groupTitle&sortWithin=title&timeframe=commit&mergegroup=&groupSelect=groupByRepos&breakdown=true&checkedFileTypes=docs~functional-code~test-code~other&since=2021-09-17&tabOpen=true&tabType=authorship&tabAuthor=luminousleek&tabRepo=AY2122S1-CS2103T-T13-1%2Ftp%5Bmaster%5D&authorshipIsMergeGroup=false&authorshipFileTypes=docs~functional-code~test-code~other&authorshipIsBinaryFileTypeChecked=false&zFR=false) * **New Features**: * Added the representation for Timeslots in ModBook [\#39](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/39) * Created the class to encapsulate Timeslots, as well as its various fields and methods * Added methods to parse times and timeslots * Added support for parsing multiple time formats [\#96](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/96) * Added ability to restrict certain commands using the `GuiState` [\#70](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/70) * Added `GuiState` as a parameter for parsing and executing commands * Modified test utility methods to take `GuiState` into account * Made `GuiState` handling more natural and intuitive [\#120](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/120) * Update command syntax to make reflect `GuiState` and make it more intuitive [\#122](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/122) * Added ability to see the details of a module [\#69](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/69) * Added the `detail` command and its corresponding parser * Added tests for `detail` command * **Enhancements to existing features**: * Updated Storage to be able to store Lessons, Exams and Timeslots in JSON formats [\#57](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/57) * Adapted these classes for JSON * Integrated the jackson-datatype-jdk8 package to the project to properly handle `Optionals` * This included going into the serialiser to make it print relative rather than absolute paths * Wrote tests, and also wrote some utility test methods * Update `clear` command to work properly [\#75](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/75) * **Documentation**: * User Guide: * Wrote command summary table * Wrote section on valid time formats [\#96](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/96) * Developer Guide: * Added implementation details of `GuiState` handling [\#97](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/97) * **Contributions to team-based tasks**: * Set up GitHub team organisation and repo * Maintained the issue tracker * Enabled java assertions [\#116](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/116) * Updated user guide link [\#73](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/73) * Fixed various bugs: [\#63](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/63), [\#64](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/64), [\#162](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/162), [\#175](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/175), [\#181](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/181) * **Reviewing/Mentoring contributions**: * PRs reviewed (with non-trivial review comments): [\#60](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/60), 
[\#82](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/82), [\#101](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/101), [\#103](https://github.com/AY2122S1-CS2103T-T13-1/tp/pull/103) * Assisted with writing the comparator for the `Day` enum * **Contributions beyond the project team**: * Reported bugs for other teams as part of Mock PE ([link](https://github.com/luminousleek/ped/issues))
71.929825
513
0.736098
eng_Latn
0.574432
ff40f34a536dea49e128139bbb6cd40fe26c8661
2,153
md
Markdown
content/publication/csin-btas-2019/index.md
CRIPAC-SIR/cripacsir.cn
e1751dabb1ec62c2768d3611a4319c73b49ceb61
[ "MIT" ]
null
null
null
content/publication/csin-btas-2019/index.md
CRIPAC-SIR/cripacsir.cn
e1751dabb1ec62c2768d3611a4319c73b49ceb61
[ "MIT" ]
null
null
null
content/publication/csin-btas-2019/index.md
CRIPAC-SIR/cripacsir.cn
e1751dabb1ec62c2768d3611a4319c73b49ceb61
[ "MIT" ]
null
null
null
--- title: "Cross-sensor iris recognition using adversarial strategy and sensor-specific information" date: 2019-01-01 publishDate: 2020-02-16T11:44:49.305472Z authors: ["Jianze Wei", "Yunlong Wang", "Xiang Wu", "Zhaofeng He", "Ran He", "Zhenan Sun"] publication_types: ["1"] abstract: "Due to the growing demand of iris biometrics, lots of new sensors are being developed for high-quality image acquisition. However, upgrading the sensor and re-enrolling for users is expensive and time-consuming. This leads to a dilemma where enrolling on one type of sensor but recognizing on the others. For this cross-sensor matching, the large gap between distributions of enrolling and recognizing images usually results in degradation in recognition performance. To alleviate this degradation, we propose Cross-sensor iris network (CSIN) by applying the adversarial strategy and weakening interference of sensor-specific information. Specifically, there are three valuable efforts towards learning discriminative iris features. Firstly, the proposed CSIN adds extra feature extractors to generate residual components containing sensor-specific information and then utilizes these components to narrow the distribution gap. Secondly, an adversarial strategy is borrowed from Generative Adversarial Networks to align feature distributions and further reduce the discrepancy of images caused by sensors. Finally, we extend triplet loss and propose instance-anchor loss to pull the instances of the same class together and push away from others. It is worth mentioning that the proposed method doesn’t need pair-same data or triplet, which reduced the cost of data preparation. Experiments on two real-world datasets validate the effectiveness of the proposed method in cross-sensor iris recognition." featured: false publication: "*10th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS2019)*" url_pdf: csin-btas-2019.pdf projects: ["Iris Recognition"] image: placement: 1 caption: "Cross-sensor iris recognition using adversarial strategy and sensor-specific information" focal_point: "Center" preview_only: false ---
107.65
1,513
0.813748
eng_Latn
0.994821
ff410c5d755e2fb69d69569363f416153401ea94
7,007
md
Markdown
content/blog/c/1dc4ad12463a821bf129084982e9a46c_t.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
1
2022-03-03T17:52:27.000Z
2022-03-03T17:52:27.000Z
content/blog/c/1dc4ad12463a821bf129084982e9a46c_t.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
content/blog/c/1dc4ad12463a821bf129084982e9a46c_t.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
--- title: 1dc4ad12463a821bf129084982e9a46c_t mitle: "Седмичен хороскоп за 4-10 декември 2017 година" description: "Овен През тази седмица не се втурвайте през глава да започвате нови неща. Още в понеделник върнете нещата 2-3 седмици назад и прегледайте дали няма нещо за дооправяне. Вторник и сряда нека бъдат най-активните ви дни от седмицата. Тогава заложете на активни действия и преговори. В четвъртък и петък може да се поотпуснете, като са …" image: "https://cdnone.netlify.com/db/2017/12/ZODII-1.jpg" --- <p><img src="https://cdnone.netlify.com/db/2017/12/ZODII-1.jpg" alt="" width="600" height="350" class="alignnone size-full wp-image-2643" srcset="https://cdnone.netlify.com/db/2017/12/ZODII-1.jpg 600w, https://cdnone.netlify.com/db/2017/12/ZODII-1-514x300.jpg 514w" sizes="(max-width: 600px) 100vw, 600px"/><br/> Овен </p> <p>През тази седмица не се втурвайте през глава да започвате нови неща. Още в понеделник върнете нещата 2-3 седмици назад и прегледайте дали няма нещо за дооправяне. Вторник и сряда нека бъдат най-активните ви дни от седмицата. Тогава заложете на активни действия и преговори. В четвъртък и петък може да се поотпуснете, като са заемете с рутинна работа и включите автопилота. Събота и неделя не очаквайте чудеса от другите. Сега е ваш ред да организирате свободното си време.</p> <p>Телец </p> <p>Може да ви се струва скучно, но седмицата ще опровергае усещането ви. В понеделник ще ви възложат задача извън вашата обичайна дейност. Във вторник така ще се развихрите, че ще ви се струва, че цял живот само това сте правили. Сряда и четвъртък ще приемате похвали, но не се отпускайте. Последният работен ден се очертава напрегнат, ще има въпроси за решаване от личен характер. . Добре е да е правите компромиси с храненето и времето за сън, което отделяте. Ако е възможно си вземете отпуска, но без да се налага да наваксвате през празничните дни. Потушете ентусиазма си за покупки. Точно сега не е подходящо време за пазаруване на техника. В края на седмицата може да привлечете вниманието на човек, който отдавна ви интересува.<br/> Събота и неделя не може да останете у дома, просто няма ви свърта на едно място. Затова излезте и се забавлявайте с приятели, имате нужда да разпускате Не забравяйте топлите дрехи и да приемате повече витамини, пазете се.</p> <p>Близнаци</p> <p> Много чудене ще имате през тази седмица. Вече ви е завладяла предпразничната треска и правилно. В понеделник помислете добре и направете списъка, а във вторник набележете местата, откъдето да закупите подаръците. В сряда очаквайте телефонно обаждане от образователна институция. Може да се наложи да обърнете повече внимание на детето си. Четвъртък и петък ще сте заети с мисълта за промени в някои навици в семейството ви. Ако не се налагат не пришпорвайте нещата. Почивните дни посетете роднини и се разберете за посещенията по празниците.</p> <p>Рак </p> <p>Седмицата не е подходяща за делови преговори. Ще сте склонни да се съгласявате на условия, които няма да са особено изгодни за вас. Не продавайте произведенията си на ниска цена, само защото искате да се отървете от тях. Ще дойдат по-добри времена, но е добре да знаете цената си. В личен план може да преминете някои граници. Посъветвайте се с ваш приятел дали не губите връзка с реалността. Рискувате или да се задоволите с прекалено малко или да искате прекалено много от партньора си. Нито една от крайностите не е добър вариант за вас, нито за връзката ви. 
Ако имате деца, те ще имат нужда да им обърнете повече внимание.</p> <p>Лъв </p> <p>Тази седмица намерете повече време за почивка. Със сигурност направете така, че да имате повече свободно време или да намалите натоварването. В личен план ви очакват промени. Самите вие дълго време сте ги искали, но все не ви достигаше нещо, за да се решите на тях. Сега е моментът да ги посрещнете храбро. Роднините очакват много от вас във връзка с празниците. Преценете добре възможностите си и се разберете ясно, за да няма неприятни изненади, когато вече няма да има време за промяна на плановете.</p> <p>Дева</p> <p>През тази седмица ви очакват непланирани пътувания. Със сигурност това ще бъде едно приятно разнообразие. Погледнете от тази страна на нещата и натоварването няма да ви измъчи. Смяната на обстановката ще ви разведри и ще ви даде много нови идеи за близкото бъдеще. В работата с документи може да възникнат някои неясноти. При решаването на въпроси от личен план се осланяйте на опита си. Сега не е време за илюзии.</p> <p>Везни</p> <p>Очакват ви нови познанства и хора, към чието общуване винаги сте се стремели. Опитайте се да се доверите на интуицията си, ако ви поканят за съвместна дейност. Със сигурност няма да ви е лесно, но големите неща не се случват от раз и изискват повече усилия. В личен план може да дадете неволно причина на партньора ви да ви ревнува. Не позволявайте конфликта да се задълбочи и ако има причина, която се крие у вас помислете как да я отстраните.</p> <p>Скорпион</p> <p>Очаква ви романтична седмица, която ще доведе до развитие в отношенията с партньора ви. Сега е моментът да направите предложение, ако отдавна го планирате. Ако преди време сте получили отказ, то сега е много вероятно нещата да се променят. Периодът е благоприятен за всички отношения, в които е останало нещо недоизяснено. Сред роднините ви може да възникне спор, който само вие да сте в състояние да регулирате. Все пак се опитайте да не поемате ролята на съдия.</p> <p>Козирог </p> <p>Най-важното през тази седмица ще бъде да запазите хармонията и спокойната обстановка в семейството. Не си мислете, че всичко е загубено. От страна на партньора ви има желание за съдействие, само трябва да намерите подходящите думи и начин. Постоянството и търпението ще са най-силните ви оръжия. Ще се наложи да се разделите с някои изрази, които са били част от речника, но са ранявали половинката ви. Ще имате достатъчно време, за да направите промяната в себе си, достатъчно осезаема, за да имате едни незабравими от щастие празници.</p> <p>Водолей </p> <p>Вероятно сте по-умни от околните, но никой не обича това да му се натрапва. Сега е период, в който да слезете при „простосмъртните“, както и да дадете сигнал, че сте готови за едни равноправни отношения. Задачата ви няма да е лесна. От резултата обаче зависи развитието на важни за вас отношения. Усамотяването е хубаво нещо, но при предстоящите празници може да се получи така, че да се почувствате изолирани. Вероятно ще искате да поправите нещата, но няма да имате достатъчно време. Направете така че това да не се случи. </p> <p>Риби</p> <p>Понякога единствено увереността и куражът са достатъчни, за да променят обстановката и ситуацията, в която се намирате. Сега се намирате в точно такъв период. Нищо не ви пречи да опитате. Все пак резултатът ще е положителен за вас и ще се учудите колко лесно ще се случи всичко. Може дори да съжалите, че не сте се решили по-рано на този вид тактика. От съжаление, обаче няма смисъл. 
Направете така, че тази линия на поведение да бъде всекидневна за вас.</p>
189.378378
739
0.789354
bul_Cyrl
0.999944
ff41837492ee369cb1fb9f7a98c572d99189a085
905
md
Markdown
docs/content/stable/yedis/api/zscore.md
boazjohn/yugabyte-db
15b28b528637b26ce242f8d70d9a562a92082fbf
[ "Apache-2.0", "CC0-1.0" ]
null
null
null
docs/content/stable/yedis/api/zscore.md
boazjohn/yugabyte-db
15b28b528637b26ce242f8d70d9a562a92082fbf
[ "Apache-2.0", "CC0-1.0" ]
null
null
null
docs/content/stable/yedis/api/zscore.md
boazjohn/yugabyte-db
15b28b528637b26ce242f8d70d9a562a92082fbf
[ "Apache-2.0", "CC0-1.0" ]
null
null
null
--- title: ZSCORE linkTitle: ZSCORE description: ZSCORE menu: stable: parent: api-yedis weight: 2545 aliases: - /stable/api/redis/zscore - /stable/api/yedis/zscore isTocNested: true showAsideToc: true --- ## Synopsis <b>`ZSCORE key member`</b><br> Returns the score of the member in the sorted set at key. If member does not exist in the sorted set, or key does not exist, null is returned. If `key` is associated with non-sorted-set data, an error is returned. ## Return value The score of member (a double precision floating point number), represented as a string. ## Examples ```sh $ ZADD z_key 1.0 v1 ``` ``` (integer) 1 ``` ```sh $ ZSCORE z_key v1 ``` ``` "1.0" ``` ```sh $ ZSCORE z_key v2 ``` ``` (null) ``` ## See also [`zadd`](../zadd/), [`zcard`](../zcard/), [`zrange`](../zrange/), [`zrangebyscore`](../zrangebyscore/), [`zrem`](../zrem/), [`zrevrange`](../zrevrange)
16.160714
151
0.644199
eng_Latn
0.909369
ff41a9a24c50564c9b44d9eb45bd0d871e6edf0d
1,745
md
Markdown
docs/relational-databases/errors-events/mssqlserver-33027-database-engine-error.md
PowerBee-AK/sql-docs.de-de
f6f4854db855a89c4e49dc0557fa456da060b3c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/errors-events/mssqlserver-33027-database-engine-error.md
PowerBee-AK/sql-docs.de-de
f6f4854db855a89c4e49dc0557fa456da060b3c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/errors-events/mssqlserver-33027-database-engine-error.md
PowerBee-AK/sql-docs.de-de
f6f4854db855a89c4e49dc0557fa456da060b3c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- description: MSSQLSERVER_33027 title: MSSQLSERVER_33027 | Microsoft-Dokumentation ms.custom: '' ms.date: 04/04/2017 ms.prod: sql ms.reviewer: '' ms.technology: supportability ms.topic: reference helpviewer_keywords: - 33027 (Database Engine error) ms.assetid: bfdc626e-7958-4511-987d-3b687824e8af author: MashaMSFT ms.author: mathoma ms.openlocfilehash: a577e04c9a4959fa95421a48877adf2834fe1d33 ms.sourcegitcommit: 33f0f190f962059826e002be165a2bef4f9e350c ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 01/30/2021 ms.locfileid: "99190972" --- # <a name="mssqlserver_33027"></a>MSSQLSERVER_33027 [!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)] ## <a name="details"></a>Details | attribute | Wert | | :-------- | :---- | |Produktname|SQL Server| |Ereignis-ID|33027| |Ereignisquelle|MSSQLSERVER| |Komponente|SQLEngine| |Symbolischer Name|SEC_CRYPTOPROV_CANTLOADDLL| |Meldungstext|Fehler beim Laden des Kryptografieanbieters ‚%.*ls’ aufgrund einer ungültigen Authenticode-Signatur oder eines ungültigen Dateipfades. Überprüfen Sie vorhergehende Meldungen auf weitere Fehler.| ## <a name="explanation"></a>Erklärung SQL Server konnte den in der Fehlermeldung aufgelisteten Kryptografieanbieter nicht verwenden, da SQL Server die DLL nicht laden konnte. Entweder ist der Name ungültig, oder die Authenticode-Signatur ist ungültig. ## <a name="user-action"></a>Benutzeraktion Überprüfen Sie, ob die Datei vorhanden ist und SQL Server die Berechtigung hat, auf diesen Speicherort zuzugreifen. Überprüfen Sie das Fehlerprotokoll auf zusätzliche verwandte Fehlermeldungen. Anderenfalls wenden Sie sich an den Kryptografieanbieter, um weitere Informationen zu erhalten.
41.547619
291
0.782808
deu_Latn
0.848385
ff424aee34fffddcb3e5dc4d9719beaf85ceb7b6
13,849
md
Markdown
readme.md
brozeph/mongoose-middleware
ae6ae300db64dd63ad96adb67ecadf23cf593ff1
[ "MIT" ]
null
null
null
readme.md
brozeph/mongoose-middleware
ae6ae300db64dd63ad96adb67ecadf23cf593ff1
[ "MIT" ]
null
null
null
readme.md
brozeph/mongoose-middleware
ae6ae300db64dd63ad96adb67ecadf23cf593ff1
[ "MIT" ]
null
null
null
# Mongoose Middleware [![Build Status](https://secure.travis-ci.org/PlayNetwork/mongoose-middleware.png?branch=master)](https://travis-ci.org/brozeph/mongoose-middleware.svg?branch=main) [![Coverage Status](https://coveralls.io/repos/github/brozeph/mongoose-middleware/badge.svg?branch=main)](https://coveralls.io/github/brozeph/mongoose-middleware?branch=main) ## Features * Pagination (start, count and total matching) * Filtering (mandatory matches, optional matches and keyword search) * Sorting (ascending and descending) * Projection (response field filtering) * Promise support ## Install ```javascript npm install @brozeph/mongoose-middleware ``` Then, simply require the library and pass in the instance of the `require('mongoose')` statement to the initialize method as follows: ```javascript let mongoose = require('mongoose'); require('@brozeph/mongoose-middleware').initialize(mongoose); ``` Optionally configure max documents for pagination: ```javascript let mongoose = require('mongoose'); require('@brozeph/mongoose-middleware') .initialize({ maxDocs : 1000 }, mongoose); ``` ## Overview This project aims to make basic searching, sorting, filtering and projection tasks against documents stored in MongoDB trivial via Mongoose middleware. The middle exposes a set of Mongoose Query object chainable methods for ease and simplicity of use. The following example shows usage of field projections, mandatory and optional search filters, sorting and pagination. ```javascript const mongoose = require('mongoose'), Schema = mongoose.Schema, KittehModel = mongoose.model( 'kittehs', new Schema({ birthday : { type : Date, default : Date.now }, features : { color : String, isFurreh : Boolean }, home : String, name : String, peePatches : [String] }) ); require('@brozeph/mongoose-middleware').initialize(mongoose); /* Retrieve the name, home and features.color of kittehs that live in Seattle, that are named "Hamish" and that are brindle, black or white in color and born prior to January 1st, 2014. The results should be sorted by birthday in descending order and name in ascending order. */ let options = { filters : { field : ['name', 'home', 'features.color'], mandatory : { contains : { home : 'seattle' }, exact : { name : 'Hamish' }, lessThan : { birthday : new Date(2014, 1, 1) } }, optional : { contains : { 'features.color' : ['brindle', 'black', 'white'] } } }, sort : ['-birthday', 'name'], start : 0, count : 500 }; KittehModel .find() .field(options) .keyword(options) .filter(options) .order(options) .page(options, function (err, kittehs) { if (!err) { console.log('we haz kittehs!'); console.log(kittehs); } else { console.log(err); } }); ``` ### Promise Support When using `mongoose-middleware`, the library does not interfere with existing [Mongoose support for Promises](http://mongoosejs.com/docs/promises.html). The [`#page`](#pagination) method will return a native Promise if the `callback` argument is not specified. ```javascript let options = { start : 0, count : 500 }; KittehModel .find() .page(options) .then((kittehs) => { console.log('we haz kittehs!'); console.log(kittehs); }) .catch(console.error); ``` ### Data The options submitted to the `page(options, callback)` middleware method are echoed back in the response along with the results of the query and the total count of documents matching the specified filters. 
```javascript { options : { count : 500, filters : { field : ['name', 'home', 'features.color'], mandatory : { contains : { 'features.color' : ['brindle', 'black', 'white'] }, exact : { name : 'Hamish' } }, optional : { contains : { home : 'seattle' } } }, sort : ['-birthday', 'name'], start : 0 }, data : [ ... ], // the first 500 brindled, black or white kittehs named Hamish in Seattle total : 734 } ``` ## API ### Initialization The maxDocs property may optionally be specified on initialize to ensure no more than the specified number of documents are ever returned from a query. Please note that this does not affect the ability for the component to return the correct total count of results when using the pagination middleware function. ```javascript let mongoose = require('mongoose'); require('mongoose-middleware').initialize({ maxDocs : 1000 }, mongoose); ``` ### Projection (Field Filters) In order specify specific fields from a document in Mongo to be returned, the fields filter may be used. ```javascript let options = { filters : { field : ['name', 'home', 'qualities.demeanor'] } }; KittehModel .find() .field(options) .exec(function (err, data) { // work with response... }); ``` Alternatively, a single field can be specified (not in an array): ```javascript KittehModel .find() .field({ filters : { field : '_id' } }) .exec(callback); ``` ### Filters Filters can be used in three ways: mandatory, optional and keyword searches. Additionally, for mandatory and optional searches, exact, equals, contains and startsWith string pattern matches may be used. The following filters can be used for *mandatory*, *optional*, and *keyword* searches. * `exact` - Matches the string letter for letter, but is not case sensitive * `contains` - Matches documents where the string exists as a substring of the field (similar to a where field like '%term%' query in a relational datastore) * `startsWith` - Matches documents where field begins with the string supplied (similar to a where field like 'term%' query in a relational datastore) * `endsWith` - Matches documents where field ends with the string supplied (similar to a where field like '%term' query in a relational datastore) The following filters can *ONLY* be used for *mandatory* and *keyword* searches. * `equals` - Matches for string value * `greaterThan` (or `gt`) - Matches documents where field value is greater than supplied number or Date value in query * `greaterThanEqual` (or `gte`) - Matches documents where field value is greater than or equal to supplied number or Date value in query * `in` - Matches from a list / Array of values * `lessThan` (or `lt`) - Matches documents where field value is less than supplied number or Date value in query * `lessThanEqual` (or `lte`) - Matches documents where field value is less than or equal to supplied number or Date value in query * `notEqual` (or `ne`) - Matches documents where field value is not equal to the supplied value * `notIn` (or `nin`) - Matches documents where field value (as Array) does not contain a matching value #### Mandatory Mandatory filters require that the document matches the specified search options or they will not be returned. #### Optional Optional searches allow you to specify more than one filter that you would like to match results for. This type of search is great for cases where you need to find documents that either match "this" *OR* "that". 
As an example, image you are searching for cats that are either manx, siamese or tabby, you would configure the filter as follows: ```javascript let options = { filters : { optional : { exact : { breed : ['manx', 'siamese', 'tabby'] } } } }; KittehModel .find() .filter(options) .exec(function (err, data) { // work with response... }); ``` #### Keyword Keyword searches provide a convenient way to search more than one field with a single string. Additionally, keyword filters work differently from mandatory and optional filters in that they do not support `exact`, `contains` or `startsWith`. Instead the matches look for occurrences in a similar way to `contains` but with the ability to specify multiple terms in the query. The following query will search for documents where the name, description or knownAliases contain Heathcliff the Cat. If the name (or description and knownAliases) contains "Cat, the Heathcliff", "the Cat, Heathcliff", "Heathcliff Cat, the" and "the Heathcliff Cat", those results will also be returned. ```javascript let options = { filters : { keyword : { fields : ['name', 'description', 'knownAliases'], term : 'Heathcliff the Cat' } } }; KittehModel .find() .filter(options) .exec(function (err, data) { // work with response... }); ``` If you would like to ensure that matches of "Heathcliff the Cat" in that exact format are returned, simply enclose the term in quotes: ```javascript let options = { filters : { keyword : { fields : ['name', 'description', 'knownAliases'], term : '"Heathcliff the Cat"' } } }; ``` ### Sorting Sorting, at this point, is fairly basic. All descending sorts will be applied prior to ascending sorts when specifying multiple sorts of each direction. Supports JSON API specs. #### Descending ```javascript let options = { sort : ['-name', '-description', '-knownAliases'] }; KittehModel .find() .order(options) .exec(function (err, data) { // work with response... }); ``` You may also specify a single field (not an array) as well as an object for both descending and ascending sorts: ```javascript let options = { sort : '-name' }; ``` ```javascript let options = { sort : { 'name': -1, 'description': 1 } }; ``` #### Ascending ```javascript let options = { sort : ['name', 'description', 'knownAliases'] }; KittehModel .find() .order(options) .exec(function (err, data) { // work with response... }); ``` You may also specify ascending and descending sorts together: ```javascript let options = { sort : ['name', '-birthday', '-home'] }; ``` ### Pagination Pagination is performed by swapping the `exec()` function of Mongoose with `page()`. Pagination may be specified as follows: ```javascript let options = { start : 0, count : 100 }; KittehModel .find() .page(options, function (err, data) { // work with response... }); ``` Pagination relies on the count of documents in a collection in order to return the total. By default, the Mongoose [`estimatedDocumentCount`](https://mongoosejs.com/docs/api.html#model_Model.estimatedDocumentCount) method for performance, but this can be overidden to use ['countDocuments`](https://mongoosejs.com/docs/api.html#model_Model.countDocuments) instead. 
```javascript const mongoose = require('mongoose'), KittehModel = require('./models/kitteh'); require('mongoose-middleware').initialize({ estimatedDocumentCount : false }, mongoose); let options = { start : 0, count : 100 }; KittehModel .find() .page(options, function (err, data) { // data.total will be the result of countDocuments instead of estimatedDocumentCount }); ``` When using pagination, maxDocs may specified via the `initialize()` function of the library which will result in no more than that maximum number of documents being returned. ```javascript const mongoose = require('mongoose'), KittehModel = require('./models/kitteh'); require('mongoose-middleware').initialize({ maxDocs : 50 }, mongoose); let options = { start : 0, count : 100 }; KittehModel .find() .page(options, function (err, data) { // data.options.count === 50 }); ``` *Please note*: While the maxDocs will limit the number of returned documents, it will not affect the total count value of matching documents. #### Response Pagination returns the specified start, count and overall total numer of matching documents as a wrapper to the results from Mongo. ```javascript { options : { count : 50, start : 0 }, data : [ ... ], total : 734 } ``` ## Utility Methods ### mergeFilters mongoose-middleware provides a helper function if you need to programmatically add filters to the query. It will intelligently merge structures, and ensure that elements are turned into Arrays when they need to be. #### Example ```javascript let base = { filters : { mandatory : { exact : { breed : ['manx', 'siamese', 'tabby'], name : 'Ballard' } } } }, model = { filters : { mandatory : { exact : { breed : 'calico', name : 'Fremont' } } } }, merged = require('mongoose-middleware').mergeFilters(base, model); ``` #### Result ```javascript { filters : { mandatory : { exact : { breed : ['manx', 'siamese', 'tabby', 'calico'], name : ['Ballard', 'Fremont'] } } } } ``` ## License MIT Style ```text Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ```
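For reference, the projection, filter, sort and pagination middleware described above can also be applied to a single query. The following is a minimal illustrative sketch only, assuming the middleware methods can be chained on one query as each individual example suggests; `KittehModel` and the field names are the hypothetical examples used throughout this README, and the option values are invented.

```javascript
// Illustrative sketch: combining field(), filter(), order() and page()
// on one query. Assumes mongoose-middleware has been initialized and that
// the middleware methods are chainable, as the per-method examples imply.
const mongoose = require('mongoose'),
    KittehModel = require('./models/kitteh');

require('mongoose-middleware').initialize({ maxDocs : 1000 }, mongoose);

let options = {
    filters : {
        field : ['name', 'home', 'features.color'],
        mandatory : {
            contains : { 'features.color' : ['brindle', 'black'] }
        },
        optional : {
            contains : { home : 'seattle' }
        }
    },
    sort : ['-birthday', 'name'],
    start : 0,
    count : 50
};

KittehModel
    .find()
    .field(options)
    .filter(options)
    .order(options)
    .page(options, function (err, result) {
        if (err) {
            return console.error(err);
        }
        // result is the pagination wrapper: { options : { count, start }, data : [...], total }
        console.log('returned %d of %d matching kittehs', result.data.length, result.total);
    });
```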
27.921371
374
0.684309
eng_Latn
0.985298
ff42b70c77ebc30c3c03acb7c0a5a2e6dc6eb6b8
6,323
md
Markdown
content/publication/example/index.md
Dee0092/ABC
aa60d3d0b7a523c9e7f15d8c0357f5178091a106
[ "MIT" ]
null
null
null
content/publication/example/index.md
Dee0092/ABC
aa60d3d0b7a523c9e7f15d8c0357f5178091a106
[ "MIT" ]
null
null
null
content/publication/example/index.md
Dee0092/ABC
aa60d3d0b7a523c9e7f15d8c0357f5178091a106
[ "MIT" ]
null
null
null
--- title: "Acupoint-brain (acubrain) mapping: Common and distinct cortical language regions activated by focused ultrasound stimulation on two language-relevant acupoints" # Authors # If you created a profile for a user (e.g. the default `admin` user), write the username (folder name) here # and it will be replaced with their full name and linked to their profile. authors: - admin # Author notes (optional) # author_notes: # - "Equal contribution" # - "Equal contribution" date: "2021-02-01T00:00:00Z" doi: "" # Schedule page publish date (NOT publication's date). publishDate: "2021-02-01T00:00:00Z" # Publication type. # Legend: 0 = Uncategorized; 1 = Conference paper; 2 = Journal article; # 3 = Preprint / Working Paper; 4 = Report; 5 = Book; 6 = Book section; # 7 = Thesis; 8 = Patent publication_types: ["2"] # Publication name and optional abbreviated publication name. publication: In *Brain and Language* publication_short: In *Brain and Language* abstract: Acupuncture, taking the advantage of modality-specific neural pathways, has shown promising results in the treatment of brain disorders that affect different modalities such as pain and vision. However, the precise underlying mechanisms of within-modality neuromodulation of acupoints on human high-order cognition remain largely unknown. In the present study, we used a non-invasive and easy-operating method, focused ultrasound, to stimulate two language-relevant acupoints, namely GB39 (Xuanzhong) and SJ8 (Sanyangluo), of thirty healthy adults. The effect of focused ultrasound stimulation (FUS) on brain activation was examined by functional magnetic resonance imaging (fMRI). We found that stimulating GB39 and SJ8 by FUS evoked overlapping but distinct brain activation patterns. Our findings provide a major step toward within-modality (in this case, language) acupoint-brain (acubrain) mapping and shed light on to the potential use of FUS as a personalized treatment option for brain disorders that affect high-level cognitive functions. # Summary. An optional shortened abstract. summary: focused ultrasound stimulation, language, neuromodulation, acupoint tags: [] # Display this page in the Featured widget? featured: true # Custom links (uncomment lines below) # links: # - name: Custom Link # url: http://example.org url_pdf: '' url_code: '' url_dataset: '' url_poster: '' url_project: '' url_slides: '' url_source: '' url_video: '' # Featured image # To use, add an image named `featured.jpg/png` to your page's folder. image: caption: '' focal_point: "" preview_only: false # Associated Projects (optional). # Associate this publication with one or more of your projects. # Simply enter your project's folder or file name without extension. # E.g. `internal-project` references `content/project/internal-project/index.md`. # Otherwise, set `projects: []`. projects: - example # Slides (optional). # Associate this publication with Markdown slides. # Simply enter your slide deck's filename without extension. # E.g. `slides: "example"` references `content/slides/example/index.md`. # Otherwise, set `slides: ""`. slides: example --- --- title: "Acupoint-brain (acubrain) mapping: Common and distinct cortical language regions activated by focused ultrasound stimulation on two language-relevant acupoints" # Authors # If you created a profile for a user (e.g. the default `admin` user), write the username (folder name) here # and it will be replaced with their full name and linked to their profile. 
authors: - admin # Author notes (optional) # author_notes: # - "Equal contribution" # - "Equal contribution" date: "2021-02-01T00:00:00Z" doi: "" # Schedule page publish date (NOT publication's date). publishDate: "2021-02-01T00:00:00Z" # Publication type. # Legend: 0 = Uncategorized; 1 = Conference paper; 2 = Journal article; # 3 = Preprint / Working Paper; 4 = Report; 5 = Book; 6 = Book section; # 7 = Thesis; 8 = Patent publication_types: ["2"] # Publication name and optional abbreviated publication name. publication: In *Brain and Language* publication_short: In *Brain and Language* abstract: Acupuncture, taking the advantage of modality-specific neural pathways, has shown promising results in the treatment of brain disorders that affect different modalities such as pain and vision. However, the precise underlying mechanisms of within-modality neuromodulation of acupoints on human high-order cognition remain largely unknown. In the present study, we used a non-invasive and easy-operating method, focused ultrasound, to stimulate two language-relevant acupoints, namely GB39 (Xuanzhong) and SJ8 (Sanyangluo), of thirty healthy adults. The effect of focused ultrasound stimulation (FUS) on brain activation was examined by functional magnetic resonance imaging (fMRI). We found that stimulating GB39 and SJ8 by FUS evoked overlapping but distinct brain activation patterns. Our findings provide a major step toward within-modality (in this case, language) acupoint-brain (acubrain) mapping and shed light on to the potential use of FUS as a personalized treatment option for brain disorders that affect high-level cognitive functions. # Summary. An optional shortened abstract. summary: focused ultrasound stimulation, language, neuromodulation, acupoint tags: [] # Display this page in the Featured widget? featured: true # Custom links (uncomment lines below) # links: # - name: Custom Link # url: http://example.org url_pdf: '' url_code: '' url_dataset: '' url_poster: '' url_project: '' url_slides: '' url_source: '' url_video: '' # Featured image # To use, add an image named `featured.jpg/png` to your page's folder. image: caption: '' focal_point: "" preview_only: false # Associated Projects (optional). # Associate this publication with one or more of your projects. # Simply enter your project's folder or file name without extension. # E.g. `internal-project` references `content/project/internal-project/index.md`. # Otherwise, set `projects: []`. projects: - example # Slides (optional). # Associate this publication with Markdown slides. # Simply enter your slide deck's filename without extension. # E.g. `slides: "example"` references `content/slides/example/index.md`. # Otherwise, set `slides: ""`. slides: example ---
41.598684
1,057
0.760715
eng_Latn
0.986915
ff42d385dd02cd54062a249f178ecb034f6544ab
49
md
Markdown
content/gallery/2007-11-04-11-00-00--lighthouse-at-the-cape.jpg/index.md
Jaza/worldtrip
ddddb1601594daa8234405d2ff64f3406799fcea
[ "Apache-2.0" ]
null
null
null
content/gallery/2007-11-04-11-00-00--lighthouse-at-the-cape.jpg/index.md
Jaza/worldtrip
ddddb1601594daa8234405d2ff64f3406799fcea
[ "Apache-2.0" ]
null
null
null
content/gallery/2007-11-04-11-00-00--lighthouse-at-the-cape.jpg/index.md
Jaza/worldtrip
ddddb1601594daa8234405d2ff64f3406799fcea
[ "Apache-2.0" ]
null
null
null
+++ draft = false +++ _Lighthouse at the cape._
8.166667
25
0.632653
eng_Latn
0.996083
ff43cba24006a9410c0e2b2113db90d159deb63f
438
md
Markdown
code/typescript/FundsAPIforDigitalPortals/v2/docs/FundNotationScreenerSearchDataFundIssuerCountry.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
6
2022-02-07T16:34:18.000Z
2022-03-30T08:04:57.000Z
code/typescript/FundsAPIforDigitalPortals/v2/docs/FundNotationScreenerSearchDataFundIssuerCountry.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
2
2022-02-07T05:25:57.000Z
2022-03-07T14:18:04.000Z
code/typescript/FundsAPIforDigitalPortals/v2/docs/FundNotationScreenerSearchDataFundIssuerCountry.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
null
null
null
# fundsapifordigitalportals.FundNotationScreenerSearchDataFundIssuerCountry ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **restrict** | [**FundIssuerSearchDataIssuerCountryRestrict**](FundIssuerSearchDataIssuerCountryRestrict.md) | | [optional] **exclude** | [**FundIssuerSearchDataIssuerCountryExclude**](FundIssuerSearchDataIssuerCountryExclude.md) | | [optional]
39.818182
125
0.691781
yue_Hant
0.516407
ff440008d7dcd5fa363c44ae3b9e904edef3cf86
1,089
md
Markdown
source/driver-crash.md
appveyor-tests/ruby-bundler-with-clean-option
1f38d0fd85af05dc8c89870ef7e13e2312c6bacd
[ "MIT" ]
null
null
null
source/driver-crash.md
appveyor-tests/ruby-bundler-with-clean-option
1f38d0fd85af05dc8c89870ef7e13e2312c6bacd
[ "MIT" ]
null
null
null
source/driver-crash.md
appveyor-tests/ruby-bundler-with-clean-option
1f38d0fd85af05dc8c89870ef7e13e2312c6bacd
[ "MIT" ]
null
null
null
--- layout: default title: A driver just crashed permalink: /driver-crash.php slug: driver-crash sitemap: false --- # A driver just crashed <!-- htmlmin:ignore --> {% raw %}<?php $driver = isset($_GET['driver']) ? $_GET['driver'] : ''; $dumpID = isset($_GET['dumpID']) ? $_GET['dumpID'] : ''; ?>{% endraw %} It seems that MPC-HC crashed because of a driver{% raw %}<?php if (!empty($driver)) { echo ' named <strong>' . htmlspecialchars($driver) . '</strong>.'; } else { echo '.'; } ?>{% endraw %} Since the problem occurred outside of MPC-HC, there is nothing we can do. However, here are some possible solutions: * Install (or reinstall) the latest version of the driver in case the issue has been fixed. * Report the problem to the manufacturer so that they can fix their driver. If you need more assistance, you can open a ticket on our [bug tracker](https://trac.mpc-hc.org/). {% raw %}<?php if (is_numeric($dumpID)) { echo 'Please include the following crash ID: <strong>' . $dumpID . '</strong> in the description of your ticket.'; } ?>{% endraw %} <!-- htmlmin:ignore -->
28.657895
116
0.666667
eng_Latn
0.979828
ff447fb25adf82d75260359ef30fd1ca93095f47
1,610
md
Markdown
_modules/week-09.md
uw-cse599p/uw-cse599p.github.io
c29481f9ac303a1495b863630d0668e7aa09a0c5
[ "MIT" ]
null
null
null
_modules/week-09.md
uw-cse599p/uw-cse599p.github.io
c29481f9ac303a1495b863630d0668e7aa09a0c5
[ "MIT" ]
null
null
null
_modules/week-09.md
uw-cse599p/uw-cse599p.github.io
c29481f9ac303a1495b863630d0668e7aa09a0c5
[ "MIT" ]
null
null
null
--- title: "Oct 28: Race" --- **Milestone**{: .label .label-purple } [Methods Section (or similar)](https://canvas.uw.edu/courses/1512970/assignments/6672481) Benjamin, R. (2019). Race after technology: Abolitionist tools for the new jim code. Social Forces. (read pages [1-17](https://drive.google.com/file/d/1qK-P4LS2JhTI_RXFCEeu1yGwGjPjPFtL/view?usp=sharing)) Ogbonnaya-Ogburu, I. F., Smith, A. D., To, A., & Toyama, K. (2020, April). [Critical Race Theory for HCI](https://drive.google.com/file/d/1mqdkYkv_bA_3GmQTSQSxdJQ3dITijpCR/view?usp=sharing). In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Field, A., Blodgett, S. L., Waseem, Z., & Tsvetkov, Y. (2021). [A Survey of Race, Racism, and Anti-Racism in NLP](https://arxiv.org/pdf/2106.11410.pdf). arXiv preprint arXiv:2106.11410. **Optional**{: .label .label-green } Hankerson, D., Marshall, A. R., Booker, J., El Mimouni, H., Walker, I., & Rode, J. A. (2016, May). [Does technology have race?](https://dl.acm.org/doi/pdf/10.1145/2851581.2892578?casa_token=sa47s_b-ibEAAAAA:s4l-ithHie-ESAaIIoCactUbqZsaNDRavXd4PwqlsGDy-4MpwdafKu0hbEsOMfWorhYTUpqn2C193g). In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 473-486). **Optional**{: .label .label-green } Schlesinger, A., O'Hara, K. P., & Taylor, A. S. (2018, April). [Let's talk about race: Identity, chatbots, and AI](https://drive.google.com/file/d/1YXLvAhbTgd5rPUHDz0Yc42nIMRn_CO7d/view?usp=sharing). In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
89.444444
438
0.732919
yue_Hant
0.374066
ff44c90439fa700029e7d5eb41d71230117e2051
1,004
md
Markdown
microsoft.ui.xaml.automation.peers/pivotautomationpeer_setscrollpercent_1312178693.md
stevemonaco/winui-api
3e5ad1a5275746690c39fd2502c60928b756f3b5
[ "CC-BY-4.0", "MIT" ]
63
2018-11-02T13:52:13.000Z
2022-03-31T16:31:24.000Z
microsoft.ui.xaml.automation.peers/pivotautomationpeer_setscrollpercent_1312178693.md
stevemonaco/winui-api
3e5ad1a5275746690c39fd2502c60928b756f3b5
[ "CC-BY-4.0", "MIT" ]
99
2018-11-16T15:15:12.000Z
2022-03-31T15:53:15.000Z
microsoft.ui.xaml.automation.peers/pivotautomationpeer_setscrollpercent_1312178693.md
stevemonaco/winui-api
3e5ad1a5275746690c39fd2502c60928b756f3b5
[ "CC-BY-4.0", "MIT" ]
35
2018-10-16T05:35:33.000Z
2022-03-30T23:27:08.000Z
--- -api-id: M:Microsoft.UI.Xaml.Automation.Peers.PivotAutomationPeer.SetScrollPercent(System.Double,System.Double) -api-type: winrt method --- <!-- Method syntax public void SetScrollPercent(System.Double horizontalPercent, System.Double verticalPercent) --> # Microsoft.UI.Xaml.Automation.Peers.PivotAutomationPeer.SetScrollPercent ## -description Sets the horizontal and vertical scroll position as a percentage of the total content area within the control. ## -parameters ### -param horizontalPercent The horizontal position as a percentage of the content area's total range. Pass [NoScroll](../microsoft.ui.xaml.automation/scrollpatternidentifiers_noscroll.md) if the control cannot be scrolled in this direction. ### -param verticalPercent The vertical position as a percentage of the content area's total range. Pass [NoScroll](../microsoft.ui.xaml.automation/scrollpatternidentifiers_noscroll.md) if the control cannot be scrolled in this direction. ## -remarks ## -examples ## -see-also
37.185185
213
0.795817
eng_Latn
0.672054
ff44f0ca1012d7909727f038650782edf1f38b13
7,406
md
Markdown
windows-driver-docs-pr/devapps/uwp-device-apps-for-specialized-devices.md
AmadeusW/windows-driver-docs
6d272f80814969bbb5ec836cbbebdf5cae52ee35
[ "CC-BY-4.0", "MIT" ]
4
2017-05-30T18:13:16.000Z
2021-09-26T19:45:08.000Z
windows-driver-docs-pr/devapps/uwp-device-apps-for-specialized-devices.md
AmadeusW/windows-driver-docs
6d272f80814969bbb5ec836cbbebdf5cae52ee35
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/devapps/uwp-device-apps-for-specialized-devices.md
AmadeusW/windows-driver-docs
6d272f80814969bbb5ec836cbbebdf5cae52ee35
[ "CC-BY-4.0", "MIT" ]
1
2021-09-26T19:45:46.000Z
2021-09-26T19:45:46.000Z
--- title: UWP device apps for internal devices description: This topic introduces the ways that UWP device apps can access internal devices. ms.assetid: 864EDABF-C734-425D-A532-A01E545E4E51 ms.author: windowsdriverdev ms.date: 04/20/2017 ms.topic: article ms.prod: windows-hardware ms.technology: windows-devices --- # UWP device apps for internal devices This topic introduces the ways that UWP device apps can access internal devices. *Internal devices* are devices that reside inside or are integrated with the PC enclosure. **Note**  Some APIs that are mentioned in this topic can be used to access external devices too. This topic focuses specifically on accessing internal devices. For more info about each API, see the [Windows API reference](http://go.microsoft.com/fwlink/p/?LinkId=250938).   ## <span id="Accessing_internal_devices"></span><span id="accessing_internal_devices"></span><span id="ACCESSING_INTERNAL_DEVICES"></span>Accessing internal devices There are three ways that UWP apps can access internal devices: | Recommended? | API | Developer | Is device metadata required? | |--------------|------------------------------------------------------|----------------|---------------------------------| | Yes | Device scenario APIs (image capture, scanning, etc.) | all developers | no | | Yes | Device protocol APIs (USB, HID, etc.) | OEM | yes (for internal devices only) | | No | Custom driver access | OEM | yes |   ## <span id="Device_scenario_APIs"></span><span id="device_scenario_apis"></span><span id="DEVICE_SCENARIO_APIS"></span>Device scenario APIs The Windows Runtime provides several APIs for accessing common devices that are built-in or attached to the PC, such as APIs for image capture, scanning, printing, and using motion sensors. Because these APIs are designed with a specific scenario in mind, they are referred to as *device scenario APIs*. Device scenario APIs can be used by all developers and no device metadata is required to use them. For more info about scenario APIs, see [Integrating devices]( http://go.microsoft.com/fwlink/p/?LinkId=306557). Any access beyond what the device scenario APIs offer is limited to OEMs (or component suppliers, working in coordination with OEMs), and requires device metadata for the system container. ## <span id="Device_protocol_APIs"></span><span id="device_protocol_apis"></span><span id="DEVICE_PROTOCOL_APIS"></span>Device protocol APIs When an OEM/component supplier needs to access an internal device in a way that is not satisfied by the scenario APIs, they can use the *device protocol APIs*. The device protocol APIs are Windows Runtime APIs that UWP apps can use to access USB and human interface devices (HID). The type of access varies per API. | Device protocol API | Namespace | Access type | |---------------------|-----------------------------------------------------------------------------------------|----------------------------------| | USB | [Windows.Devices.Usb](http://go.microsoft.com/fwlink/p/?LinkId=306694) | exclusive read & exclusive write | | HID | [Windows.Devices.HumanInterfaceDevice](http://go.microsoft.com/fwlink/p/?LinkId=306697) | shared read & exclusive write |   To access peripheral devices that use only Microsoft class drivers - the most common use for the device protocol APIs - device metadata is not required. However, to access internal devices with those APIs, metadata is required. When accessing an internal device, the app must be specified in the device metadata as a privileged app for the system container. 
This requirements restricts internal device access to OEMs. For more info, see: - [Writing apps for USB devices](http://go.microsoft.com/fwlink/p/?LinkId=324880) - [Supporting human interface devices (HID)](http://go.microsoft.com/fwlink/p/?LinkId=324881) - [Supporting Bluetooth devices](http://go.microsoft.com/fwlink/p/?LinkId=324882) - [Device driver requirements](step-1--create-a-uwp-device-app.md) (from step 1 of the step-by-step guide) - [Creating device metadata](step-2--create-device-metadata.md) (step 2 of the step-by-step guide) ## <span id="Custom_driver_access"></span><span id="custom_driver_access"></span><span id="CUSTOM_DRIVER_ACCESS"></span>Custom driver access When OEMs or IHVs are unable to use the device protocol APIs to access their (internal or peripheral) device, they should first contact Microsoft to discuss their scenario with the Windows Ecosystem team. In some instances - upon Microsoft approval - a UWP device app can directly access a custom driver. Custom driver access requires device metadata. To access a custom driver, the app must be specified in the device metadata as a privileged app for the peripheral device or system container. For more info about custom driver access, see [UWP device apps design guide for specialized devices internal to the PC](http://go.microsoft.com/fwlink/p/?LinkId=306693). ## <span id="Component_suppliers"></span><span id="component_suppliers"></span><span id="COMPONENT_SUPPLIERS"></span>Component suppliers Component suppliers can work with OEMs to develop UWP device apps for their internal device. This can happen in a couple of ways: - **Component supplier develops and distributes the app**: In this case, the component supplier owns, develops, and distributes the app and driver that accesses the internal device. The OEM owns the device metadata. - **OEM develops and distributes the app**: In this case, the OEM develops and distributes the app that accesses one or more internal devices from different component suppliers. The OEM ultimately owns app development, app distribution, and device metadata maintenance. The component supplier owns the driver. For more info about these workflows, see [UWP device apps design guide for specialized devices internal to the PC](http://go.microsoft.com/fwlink/p/?LinkId=306693). ## <span id="related_topics"></span>Related topics [Identifying the location of internal cameras (UWP device apps)](identifying-the-location-of-internal-cameras.md)     [Send comments about this topic to Microsoft](mailto:[email protected]?subject=Documentation%20feedback%20[devapps\devapps]:%20Windows%20Store%20device%20apps%20for%20internal%20devices%20%20RELEASE:%20%281/20/2017%29&body=%0A%0APRIVACY%20STATEMENT%0A%0AWe%20use%20your%20feedback%20to%20improve%20the%20documentation.%20We%20don't%20use%20your%20email%20address%20for%20any%20other%20purpose,%20and%20we'll%20remove%20your%20email%20address%20from%20our%20system%20after%20the%20issue%20that%20you're%20reporting%20is%20fixed.%20While%20we're%20working%20to%20fix%20this%20issue,%20we%20might%20send%20you%20an%20email%20message%20to%20ask%20for%20more%20info.%20Later,%20we%20might%20also%20send%20you%20an%20email%20message%20to%20let%20you%20know%20that%20we've%20addressed%20your%20feedback.%0A%0AFor%20more%20info%20about%20Microsoft's%20privacy%20policy,%20see%20http://privacy.microsoft.com/default.aspx. "Send comments about this topic to Microsoft")
77.145833
964
0.717796
eng_Latn
0.921717
ff45b03d89cef3af239a844eb555b98eddcc5ac5
5,657
md
Markdown
wdk-ddi-src/content/fwpsk/ne-fwpsk-fwps_fields_ale_resource_release_v4_.md
pcfist/windows-driver-docs-ddi
a14a7b07cf628368a637899de9c47e9eefba804c
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/fwpsk/ne-fwpsk-fwps_fields_ale_resource_release_v4_.md
pcfist/windows-driver-docs-ddi
a14a7b07cf628368a637899de9c47e9eefba804c
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/fwpsk/ne-fwpsk-fwps_fields_ale_resource_release_v4_.md
pcfist/windows-driver-docs-ddi
a14a7b07cf628368a637899de9c47e9eefba804c
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NE:fwpsk.FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4_ title: FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4_ author: windows-driver-content description: The FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4 enumeration type specifies the data field identifiers for the FWPS_LAYER_ALE_RESOURCE_RELEASE_V4 run-time filtering layer. old-location: netvista\fwps_fields_ale_resource_release_v4.htm old-project: netvista ms.assetid: ad3d3099-58eb-4a34-b15c-a323dcedba84 ms.author: windowsdriverdev ms.date: 2/27/2018 ms.keywords: FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4, FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4 enumeration [Network Drivers Starting with Windows Vista], FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4_, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_APP_ID, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_PACKAGE_ID, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_USER_ID, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_FLAGS, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS_TYPE, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_INTERFACE, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_PORT, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_PROTOCOL, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_MAX, fwpsk/FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_APP_ID, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_PACKAGE_ID, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_USER_ID, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_FLAGS, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS_TYPE, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_INTERFACE, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_PORT, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_PROTOCOL, fwpsk/FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_MAX, netvista.fwps_fields_ale_resource_release_v4, wfp_ref_5_const_3_data_fields_09378323-ec5f-4db4-89d3-8398e4b76fac.xml ms.prod: windows-hardware ms.technology: windows-devices ms.topic: enum req.header: fwpsk.h req.include-header: Fwpsk.h req.target-type: Windows req.target-min-winverclnt: Supported starting with Windows 7. req.target-min-winversvr: req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: "<= DISPATCH_LEVEL" topic_type: - APIRef - kbSyntax api_type: - HeaderDef api_location: - fwpsk.h api_name: - FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4 product: Windows targetos: Windows req.typenames: FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4 --- # FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4_ enumeration ## -description The FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4 enumeration type specifies the data field identifiers for the FWPS_LAYER_ALE_RESOURCE_RELEASE_V4 <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa366492">run-time filtering layer</a>. 
## -syntax ```` typedef enum FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4_ { FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_APP_ID, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_USER_ID, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS_TYPE, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_PORT, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_PROTOCOL, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_INTERFACE, FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_FLAGS, #if (NTDDI_VERSION >= NTDDI_WIN8) FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_PACKAGE_ID, #endif FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_MAX } FWPS_FIELDS_ALE_RESOURCE_RELEASE_V4; ```` ## -enum-fields ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_APP_ID The full path of the application. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_USER_ID The identifier of the local user. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS The local IP address. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_ADDRESS_TYPE The local IP address type. The possible values are defined by the <a href="https://msdn.microsoft.com/library/windows/hardware/ff568757">NL_ADDRESS_TYPE</a> enumeration. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_PORT The local transport protocol port number. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_PROTOCOL The IP protocol number, as specified in RFC 1700. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_IP_LOCAL_INTERFACE The locally unique identifier (<a href="..\igpupvdev\ns-igpupvdev-_luid.md">LUID</a>) for the network interface associated with the local IP address. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_FLAGS A bitwise OR of a combination of filtering condition flags. For information about the possible flags, see <a href="https://msdn.microsoft.com/library/windows/hardware/ff549942">Filtering Condition Flags</a>. ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_PACKAGE_ID The package identifier is a security identifier (SID) that identifies the associated AppContainer process. For more information about the SID structure, see the description for the SID structure in the Microsoft Windows SDK documentation. <div class="alert"><b>Note</b>  Supported starting with Windows 8.</div> <div> </div> ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_ALE_SECURITY_ATTRIBUTE_FQBN_VALUE ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_COMPARTMENT_ID ### -field FWPS_FIELD_ALE_RESOURCE_RELEASE_V4_MAX The maximum value for this enumeration. This value might change in future versions of the NDIS header files and binaries. ## -see-also <a href="https://msdn.microsoft.com/library/windows/hardware/ff568757">NL_ADDRESS_TYPE</a> <a href="..\igpupvdev\ns-igpupvdev-_luid.md">LUID</a>    
35.136646
1,386
0.845678
yue_Hant
0.879655
ff45b92c41244fb3ae0c1dbc011f572fcdce2599
1,143
md
Markdown
docs/WebhookEventSubscription.md
UltraCart/rest_api_v2_sdk_javascript
417b521ff63d44d33046d8405755c0e06e9fe5e5
[ "Apache-2.0" ]
1
2018-11-23T17:55:30.000Z
2018-11-23T17:55:30.000Z
docs/WebhookEventSubscription.md
UltraCart/rest_api_v2_sdk_javascript
417b521ff63d44d33046d8405755c0e06e9fe5e5
[ "Apache-2.0" ]
5
2018-04-25T13:01:13.000Z
2022-02-01T20:09:02.000Z
docs/WebhookEventSubscription.md
UltraCart/rest_api_v2_sdk_javascript
417b521ff63d44d33046d8405755c0e06e9fe5e5
[ "Apache-2.0" ]
null
null
null
# UltraCartRestApiV2.WebhookEventSubscription ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **comments** | **String** | Comment about the event to provide further clarification to the end user | [optional] **deprecated_flag** | **Boolean** | True if the event is deprecated. See the API change log for details on when it will be discontinued. | [optional] **discontinued_flag** | **Boolean** | True if the event is discontinued. See the API change log for details on migration details. | [optional] **event_description** | **String** | Description of the event | [optional] **event_name** | **String** | Event name | [optional] **expansion** | **String** | The expand string for the notification object. See the individual resource _expand documentation for valid values. | [optional] **subscribed** | **Boolean** | True if this is event is subscribed to | [optional] **supports_reflow** | **Boolean** | True if the event can be triggered to reflow existing records | [optional] **webhook_event_oid** | **Number** | The webhook event object identifier | [optional]
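To make the property table above concrete, here is a minimal sketch of what a `WebhookEventSubscription` might look like as a plain JavaScript object. Every value below is invented purely for illustration; only the property names and types come from the table above.

```javascript
// Illustrative only: a plain-object sketch of the documented
// WebhookEventSubscription shape. All values are hypothetical examples.
const webhookEventSubscription = {
  comments: 'Fires whenever an order reaches the shipping department', // hypothetical comment
  deprecated_flag: false,
  discontinued_flag: false,
  event_description: 'Order update',   // hypothetical description
  event_name: 'order_update',          // hypothetical event name
  expansion: 'item,billing',           // hypothetical _expand string
  subscribed: true,
  supports_reflow: true,
  webhook_event_oid: 12345             // hypothetical object identifier
};
```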
67.235294
158
0.68154
eng_Latn
0.953621
ff45d1d56b63639b98c914dd0afebdcac99c0ed1
8,256
md
Markdown
articles/migrate/tutorial-prepare-vmware.md
skmspd/azure-docs.ja-jp
27f5987c209eddccb7e5dd9edc7a3450b54d7b2c
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/migrate/tutorial-prepare-vmware.md
skmspd/azure-docs.ja-jp
27f5987c209eddccb7e5dd9edc7a3450b54d7b2c
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/migrate/tutorial-prepare-vmware.md
skmspd/azure-docs.ja-jp
27f5987c209eddccb7e5dd9edc7a3450b54d7b2c
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Azure Migrate を使用して評価と Azure への移行のために VMware VM を準備する | Microsoft Docs description: Azure Migrate を使用して、オンプレミスの VMware VM の評価および Azure への移行を準備する方法について説明します。 author: rayne-wiselman ms.service: azure-migrate ms.topic: tutorial ms.date: 10/23/2019 ms.author: raynew ms.custom: mvc ms.openlocfilehash: 4cc04e9ab0acdc9d0cdff77ed1de7bea1c1362d4 ms.sourcegitcommit: c22327552d62f88aeaa321189f9b9a631525027c ms.translationtype: HT ms.contentlocale: ja-JP ms.lasthandoff: 11/04/2019 ms.locfileid: "73498485" --- # <a name="prepare-vmware-vms-for-assessment-and-migration-to-azure"></a>評価および Azure への移行のために VMware VM を準備する この記事は、[Azure Migrate](migrate-services-overview.md) を使用して、オンプレミスの VMware VM の評価や Azure への移行を準備する場合に役立ちます。 [Azure Migrate](migrate-overview.md) では、アプリ、インフラストラクチャ、およびワークロードを検出、評価、および Microsoft Azure に移行するために役立つツールのハブが提供されます。 このハブには、Azure Migrate ツールと、サードパーティ製の独立系ソフトウェア ベンダー (ISV) オファリングが含まれています。 これはシリーズの最初のチュートリアルであり、VMware VM を評価して移行する方法を示しています。 このチュートリアルでは、以下の内容を学習します。 > [!div class="checklist"] > * Azure Migrate と連携するように Azure を準備します。 > * VMware for VM の評価を準備します。 > * VMware for VM の移行を準備します。 > [!NOTE] > チュートリアルでは、シナリオの最も簡単なデプロイ パスを示します。 これらは、デプロイを設定する方法を学習する場合に、また簡単な概念実証として役立ちます。 チュートリアルではできるだけ既定のオプションを使用しており、使用可能な設定とパスをすべて示しているわけではありません。 詳細な手順については、VMware の評価と移行に関するハウツーを参照してください。 Azure サブスクリプションをお持ちでない場合は、開始する前に [無料アカウント](https://azure.microsoft.com/pricing/free-trial/) を作成してください。 ## <a name="prepare-azure"></a>Azure を準備する これらのアクセス許可が必要です。 **タスク** | **アクセス許可** --- | --- | --- **Azure Migrate プロジェクトの作成** | Azure アカウントには、プロジェクトを作成するためのアクセス許可が必要です。 **Azure Migrate アプライアンスを登録する** | Azure Migrate では、軽量の Azure Migrate アプライアンスを使用して、Azure Migrate Server Assessment で VMware VM を評価し、Azure Migrate Server Migration で VMware VM の[エージェントレス移行](server-migrate-overview.md)を実行します。 このアプライアンスでは VM が検出され、VM のメタデータとパフォーマンス データが Azure Migrate に送信されます。<br/><br/>登録時に、Azure Migrate によって、アプライアンスを一意に識別する 2 つの Azure Active Directory (Azure AD) アプリが作成され、これらのアプリを作成するためのアクセス許可が必要です。<br/> - 1 つ目のアプリは、Azure Migrate サービス エンドポイントと通信します。<br/> - 2 つ目のアプリでは、登録時に作成された Azure キー コンテナーにアクセスし、Azure AD アプリ情報とアプライアンス構成設定を格納します。 **キー コンテナーの作成** | Azure Migrate Server Migration を使用して VMware VM を移行するために、Azure Migrate によってキー コンテナーが作成され、ご自分のサブスクリプションのレプリケーション ストレージ アカウントへのアクセス キーが管理されます。 コンテナーを作成するには、Azure Migrate プロジェクトが存在しているリソース グループに対するロールの割り当てアクセス許可が必要です。 ### <a name="assign-permissions-to-create-project"></a>プロジェクトを作成するためのアクセス許可を割り当てる 1. Azure portal でサブスクリプションを開き、 **[アクセス制御 (IAM)]** を選択します。 2. **[アクセスの確認]** で関連するアカウントを探し、それをクリックしてアクセス許可を表示します。 3. **共同作成者**または**所有者**のアクセス許可を持っている必要があります。 - 無料の Azure アカウントを作成したばかりであれば、自分のサブスクリプションの所有者になっています。 - サブスクリプションの所有者でない場合は、所有者と協力してロールを割り当てます。 ### <a name="assign-permissions-to-register-the-appliance"></a>アプライアンスを登録するためのアクセス許可を割り当てる アプライアンスを登録するには、アプライアンスの登録時に Azure AD アプリを作成するためのアクセス許可を Azure Migrate に割り当てます。 次のいずれかの方法を使用して、アクセス許可を割り当てることができます。 - テナントおよびグローバル管理者は、Azure AD アプリを作成および登録するためのアクセス許可を、テナント内のユーザーに付与できます。 - テナントおよびグローバル管理者は、アプリケーション開発者ロール (アクセス許可が含まれています) をアカウントに割り当てることができます。 > [!NOTE] > - アプリには、上記で説明した以外に、サブスクリプションに対するアクセス許可はありません。 > - 新しいアプライアンスを登録するときに必要なのは、これらのアクセス許可だけです。 アクセス許可はアプライアンスを設定した後で削除できます。 #### <a name="grant-account-permissions"></a>アカウントへのアクセス許可の付与 テナントおよびグローバル管理者は、次のようにアクセス許可を付与できます 1. テナント/グローバル管理者は Azure AD で **[Azure Active Directory]** > **[ユーザー]** > **[ユーザー設定]** の順に移動します。 2. 
管理者は、 **[アプリの登録]** を **[はい]** に設定する必要があります。 これは、重要ではない既定の設定です。 [詳細情報](https://docs.microsoft.com/azure/active-directory/develop/active-directory-how-applications-are-added#who-has-permission-to-add-applications-to-my-azure-ad-instance)。 ![Azure AD のアクセス許可](./media/tutorial-prepare-vmware/aad.png) #### <a name="assign-application-developer-role"></a>アプリケーション開発者ロールの割り当て テナントおよびグローバル管理者は、アプリケーション開発者ロールをアカウントに割り当てることができます。 [詳細情報](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal)。 ### <a name="assign-role-assignment-permissions"></a>ロール割り当てのアクセス許可の割り当て Azure Migrate でキー コンテナーを作成できるようにするには、次のようにロールの割り当てのアクセス許可を割り当てます。 1. Azure portal で、リソース グループの **[アクセス制御 (IAM)]** を選択します。 2. **[アクセスの確認]** で関連するアカウントを探し、それをクリックしてアクセス許可を表示します。 - サーバー評価を実行するには、**共同作成者**のアクセス許可で十分です。 - エージェントレスのサーバー移行を実行するには、**所有者** (または**共同作成者**および**ユーザー アクセス管理者**) のアクセス許可が必要です。 3. 必要なアクセス許可がない場合は、リソース グループの所有者にそれらを依頼してください。 ## <a name="prepare-for-vmware-vm-assessment"></a>VMware VM の評価の準備 VMware VM の評価を準備するには、以下が必要です。 - **VMware の設定を確認します**。 移行する vCenter Server と VM が要件を満たしていることを確認します。 - **評価アカウントを設定します**。 評価のための VM を検出するには、Azure Migrate が vCenter Server にアクセスする必要があります。 Azure Migrate アクセスには読み取り専用アカウントが必要です。 - **アプライアンスの要件を確認します**。 評価に使用する Azure Migrate アプライアンスのデプロイ要件を確認します。 ### <a name="verify-vmware-settings"></a>VMware の設定の確認 1. 評価に関する VMware サーバーの要件を[確認](migrate-support-matrix-vmware.md#assessment-vcenter-server-requirements)します。 2. 必要なポートが vCenter サーバー上で開かれていることを[確認](migrate-support-matrix-vmware.md#assessment-port-requirements)します。 ### <a name="set-up-an-account-for-assessment"></a>評価のためのアカウントの設定 評価およびエージェントレス移行のための VM を検出するには、Azure Migrate が vCenter Server にアクセスする必要があります。 評価のためだけであれば、vCenter Server 用の読み取り専用アカウントを設定します。 ### <a name="verify-appliance-settings-for-assessment"></a>評価用のアプライアンス設定の確認 アプライアンスをデプロイする前にアプライアンスの要件を確認します。 1. アプライアンスの要件と制限を[確認](migrate-support-matrix-vmware.md#assessment-appliance-requirements)します。 2. URL ベースのファイアウォール プロキシを使用している場合は、アプライアンスがアクセスする必要がある Azure URL を[確認](migrate-support-matrix-vmware.md#assessment-url-access-requirements)します。 URL の検索中に受信するすべての CNAME レコードがプロキシによって解決されることを確認します。 3. 検出および評価中にアプライアンスによって収集される[パフォーマンス データ](migrate-appliance.md#collected-performance-data-vmware)と[メタデータ](migrate-appliance.md#collected-metadata-vmware)を確認します。 4. アプライアンスによってアクセスされるポートに[注意](migrate-support-matrix-vmware.md#assessment-port-requirements)します。 5. vCenter Server で、OVA ファイルを使用して VM を作成するためのアクセス許可がアカウントにあることを確認します。 OVA ファイルを使用して、Azure Migrate アプライアンスを VMware VM としてデプロイします。 URL ベースのファイアウォール プロキシを使用している場合は、必要な [Azure URL](migrate-support-matrix-vmware.md#assessment-url-access-requirements) へのアクセスを許可します。 ## <a name="prepare-for-agentless-vmware-migration"></a>エージェントレスの VMware 移行の準備 VMware VM のエージェントレス移行の要件を確認します。 1. VMware サーバーの要件を[確認](migrate-support-matrix-vmware.md#agentless-migration-vmware-server-requirements)します。 2. Azure Migrate Server Migration を使用してエージェントレス移行を行うために Azure Migrate が vCenter Server にアクセスできるように、[必須のアクセス許可](migrate-support-matrix-vmware.md#agentless-migration-vcenter-server-permissions)を持つアカウントを設定します。 3. エージェントレス移行を使用して Azure に移行する VMware VM の要件を[確認](migrate-support-matrix-vmware.md#agentless-migration-vmware-vm-requirements)します。 4. エージェントレス移行に Azure Migrate アプライアンスを使用するための要件を[確認](migrate-support-matrix-vmware.md#agentless-migration-appliance-requirements)します。 5. 
Azure Migrate アプライアンスがエージェントレス移行に必要とする [URL アクセス](migrate-support-matrix-vmware.md#agentless-migration-url-access-requirements)と[ポート アクセス](migrate-support-matrix-vmware.md#agentless-migration-port-requirements)に注意します。 ## <a name="prepare-for-agent-based-vmware-migration"></a>エージェントベースの VMware 移行の準備 VMware VM の[エージェントベース移行](server-migrate-overview.md)の要件を確認します。 1. VMware サーバーの要件を[確認](migrate-support-matrix-vmware.md#agent-based-migration-vmware-server-requirements)します。 2. [必須のアクセス許可](migrate-support-matrix-vmware.md#agent-based-migration-vcenter-server-permissions)を持つアカウントを設定します。 それにより、Azure Migrate Server Migration を使用してエージェントベース移行を行うために Azure Migrate が vCenter Server にアクセスできるようにします。 3. エージェントベース移行を使用して Azure に移行する VMware VM の要件 (移行する各 VM へのモビリティ サービスのインストールなど) を[確認](migrate-support-matrix-vmware.md#agent-based-migration-vmware-vm-requirements)します。 4. [URL アクセス](migrate-support-matrix-vmware.md#agent-based-migration-url-access-requirements)に注意します。 5. Azure Migrate コンポーネントがエージェントベースのアクセスに必要とする[ポート アクセス](migrate-support-matrix-vmware.md#agent-based-migration-port-requirements)を確認します。 ## <a name="next-steps"></a>次の手順 このチュートリアルでは、次のことを行いました。 > [!div class="checklist"] > * Azure アクセス許可を設定しました。 > * 評価と移行のために VMware を準備しました。 2 番目のチュートリアルに進み、Azure Migrate プロジェクトを設定し、Azure に移行するために VMware VM を評価します。 > [!div class="nextstepaction"] > [VMware VM の評価](./tutorial-assess-vmware.md)
50.036364
547
0.804748
yue_Hant
0.383417
ff45d6212773f1c473a1c82b94ab3ad64e922852
27
md
Markdown
README.md
alysson-azevedo/ngame
ef90f16e20e0fe15cf214dfb8ea356349ee44a24
[ "MIT" ]
null
null
null
README.md
alysson-azevedo/ngame
ef90f16e20e0fe15cf214dfb8ea356349ee44a24
[ "MIT" ]
null
null
null
README.md
alysson-azevedo/ngame
ef90f16e20e0fe15cf214dfb8ea356349ee44a24
[ "MIT" ]
null
null
null
# ngame Games in AngularJS
9
18
0.777778
slv_Latn
0.837436
ff46073fd249ef986337327d5ba34da02edb31b4
309
md
Markdown
packages/mercurius-codegen/CHANGELOG.md
luke88jones/mercurius-typescript
11fbd7a360e16daa78aa05658049da6a483bfe89
[ "MIT" ]
39
2020-12-31T01:28:26.000Z
2022-03-22T14:38:53.000Z
packages/mercurius-codegen/CHANGELOG.md
luke88jones/mercurius-typescript
11fbd7a360e16daa78aa05658049da6a483bfe89
[ "MIT" ]
25
2020-11-08T10:12:28.000Z
2022-01-04T17:54:17.000Z
packages/mercurius-codegen/CHANGELOG.md
luke88jones/mercurius-typescript
11fbd7a360e16daa78aa05658049da6a483bfe89
[ "MIT" ]
9
2021-01-11T05:30:15.000Z
2022-02-01T18:05:54.000Z
# mercurius-codegen ## 3.3.0 ### Minor Changes - 43701ff: Support mercurius@^9.0.0 & graphql-js v16 ## 3.2.0 ### Minor Changes - ad1303c: Disable pre-built schema generation for `loadSchemaFiles` if "targetPath" is set to `null` ## 3.1.1 ### Patch Changes - 42301d7: Update @graphql-tools/load-files
15.45
101
0.686084
eng_Latn
0.515117
ff4655bcd3f1033576bfea629e74d79a54869319
3,800
md
Markdown
_posts/2018-10-12-datamining.md
wysheng/wysheng.github.io
ffae3e0a244fec6405914af7f92590f0a22c953c
[ "MIT" ]
null
null
null
_posts/2018-10-12-datamining.md
wysheng/wysheng.github.io
ffae3e0a244fec6405914af7f92590f0a22c953c
[ "MIT" ]
null
null
null
_posts/2018-10-12-datamining.md
wysheng/wysheng.github.io
ffae3e0a244fec6405914af7f92590f0a22c953c
[ "MIT" ]
null
null
null
--- layout: post title: "基于用户的协同过滤推荐算法java实现(UserCF)" date: 2018-10-12 10:45:38 categories: 数据挖掘 tags: 数据挖掘 推荐算法 java --- * content {:toc} UserCF的核心思想即为根据用户数据模拟向量相似度,我们根据这个相似度,来找出指定用户的相似用户,然后将相似用户买过的而指定用户没有买的东西推荐给指定用户,推荐度的计算也是结合了相似用户与指定用户的相似度累加。注意这里我们默认是用户的隐反馈行为,所以每一个物品的影响因子默认为1。 ``` package cn.csu.CFUtils; import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.Map; import java.util.Map.Entry; import java.util.Scanner; import java.util.Set; /** * 基于用户的协同过滤推荐算法实现 A a b d B a c C b e D c d e * @author Administrator * */ public class UserCF { public static void main(String[] args) { /** * 输入用户-->物品条目 一个用户对应多个物品 * 用户ID 物品ID集合 * A a b d * B a c * C b e * D c d e */ Scanner scanner = new Scanner(System.in); System.out.println("Input the total users number:"); //输入用户总量 int N = scanner.nextInt(); int[][] sparseMatrix = new int[N][N];//建立用户稀疏矩阵,用于用户相似度计算【相似度矩阵】 Map<String, Integer> userItemLength = new HashMap<>();//存储每一个用户对应的不同物品总数 eg: A 3 Map<String, Set<String>> itemUserCollection = new HashMap<>();//建立物品到用户的倒排表 eg: a A B Set<String> items = new HashSet<>();//辅助存储物品集合 Map<String, Integer> userID = new HashMap<>();//辅助存储每一个用户的用户ID映射 Map<Integer, String> idUser = new HashMap<>();//辅助存储每一个ID对应的用户映射 System.out.println("Input user--items maping infermation:<eg:A a b d>"); scanner.nextLine(); for(int i = 0; i < N ; i++){//依次处理N个用户 输入数据 以空格间隔 String[] user_item = scanner.nextLine().split(" "); int length = user_item.length; userItemLength.put(user_item[0], length-1);//eg: A 3 userID.put(user_item[0], i);//用户ID与稀疏矩阵建立对应关系 idUser.put(i, user_item[0]); //建立物品--用户倒排表 for(int j = 1; j < length; j ++){ if(items.contains(user_item[j])){//如果已经包含对应的物品--用户映射,直接添加对应的用户 itemUserCollection.get(user_item[j]).add(user_item[0]); }else{//否则创建对应物品--用户集合映射 items.add(user_item[j]); itemUserCollection.put(user_item[j], new HashSet<String>());//创建物品--用户倒排关系 itemUserCollection.get(user_item[j]).add(user_item[0]); } } } System.out.println(itemUserCollection.toString()); //计算相似度矩阵【稀疏】 Set<Entry<String, Set<String>>> entrySet = itemUserCollection.entrySet(); Iterator<Entry<String, Set<String>>> iterator = entrySet.iterator(); while(iterator.hasNext()){ Set<String> commonUsers = iterator.next().getValue(); for (String user_u : commonUsers) { for (String user_v : commonUsers) { if(user_u.equals(user_v)){ continue; } sparseMatrix[userID.get(user_u)][userID.get(user_v)] += 1;//计算用户u与用户v都有正反馈的物品总数 } } } System.out.println(userItemLength.toString()); System.out.println("Input the user for recommendation:<eg:A>"); String recommendUser = scanner.nextLine(); System.out.println(userID.get(recommendUser)); //计算用户之间的相似度【余弦相似性】 int recommendUserId = userID.get(recommendUser); for (int j = 0;j < sparseMatrix.length; j++) { if(j != recommendUserId){ System.out.println(idUser.get(recommendUserId)+"--"+idUser.get(j)+"相似度:"+sparseMatrix[recommendUserId][j]/Math.sqrt(userItemLength.get(idUser.get(recommendUserId))*userItemLength.get(idUser.get(j)))); } } //计算指定用户recommendUser的物品推荐度 for(String item: items){//遍历每一件物品 Set<String> users = itemUserCollection.get(item);//得到购买当前物品的所有用户集合 if(!users.contains(recommendUser)){//如果被推荐用户没有购买当前物品,则进行推荐度计算 double itemRecommendDegree = 0.0; for(String user: users){ itemRecommendDegree += sparseMatrix[userID.get(recommendUser)][userID.get(user)]/Math.sqrt(userItemLength.get(recommendUser)*userItemLength.get(user));//推荐度计算 } System.out.println("The item "+item+" for "+recommendUser +"'s recommended degree:"+itemRecommendDegree); } } scanner.close(); } } ```
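For readers of the code above, the two quantities the program computes can be written compactly as follows. This is a sketch of the standard UserCF formulas implied by the code, where N(u) is the set of items user u has interacted with and U(i) is the set of users who have interacted with item i; with implicit feedback each item's weight defaults to 1, as the post notes.

```latex
% Cosine similarity between users u and v, computed in the code from the
% co-occurrence matrix (sparseMatrix) and the per-user item counts (userItemLength):
w_{uv} = \frac{|N(u) \cap N(v)|}{\sqrt{|N(u)|\,|N(v)|}}

% Recommendation degree of item i for the target user u: the sum of similarities
% over the users v who have interacted with i (itemRecommendDegree in the code):
p(u, i) = \sum_{v \in U(i)} w_{uv}
```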
32.758621
205
0.690263
yue_Hant
0.426033
ff46ba7f468bed87a24533a7880959438cc45a6d
1,831
md
Markdown
docs/deployment/capistrano.md
codemancode/errbit
6cb19c235400ed6ed566078f7afdcaabac0e28dd
[ "MIT" ]
1
2015-11-05T07:58:56.000Z
2015-11-05T07:58:56.000Z
docs/deployment/capistrano.md
codemancode/errbit
6cb19c235400ed6ed566078f7afdcaabac0e28dd
[ "MIT" ]
null
null
null
docs/deployment/capistrano.md
codemancode/errbit
6cb19c235400ed6ed566078f7afdcaabac0e28dd
[ "MIT" ]
null
null
null
# Deploy with Capistrano These instructions should be good enough to get you started deploying capistrano with Errbit. More than likely, you'll have to adjust some things to suit your needs, so you should understand how to use capistrano before you continue. ## Clone and prepare the source code repository ```bash git clone [email protected]:errbit/errbit.git cd errbit # Create and edit deploy.rb cp config/deploy.example.rb config/deploy.rb $EDITOR config/deploy.rb # Create and edit production.rb cp config/deploy/production.example.rb config/deploy/production.rb $EDITOR config/deploy/production.rb # Check to make sure configs exist bundle exec cap production deploy:check # Create the configs yourself, or run errbit:setup_configs to upload the # defaults bundle exec cap production errbit:setup_configs # Deploy bundle exec cap production deploy # Setup the remote DB if you haven't already bundle exec cap production db:setup ``` ## Static Assets For a deployment of any real size, you'll probably want to set up a web server for efficiently serving static assets. If you choose to go this route, just map all requests for /assets/.\* to /deploy/path/shared/public/assets ## Starting Errbit Errbit comes with some capistrano tasks to manage running Errbit under unicorn. To start Errbit, you can run: ```bash bundle exec cap production unicorn:start ``` Supervising and monitoring Errbit is beyond the scope of this documentation. ### rbenv support Pass `rbenv` environment when running `cap` to use rbenv. See [capistrano/rbenv](https://github.com/capistrano/rbenv) for more information. ```bash rbenv=1 bundle exec cap production deploy ``` ## Schedule recurring tasks You may want to periodically clear resolved errors to free up space. Schedule ```rake errbit:db:clear_resolved``` to run every day or so.
27.742424
72
0.785363
eng_Latn
0.984867
ff46ee90d86432c03472b749ff87e8234983f0b8
10,368
md
Markdown
conference-publications/folders/publications_icml21/README.md
AmirShirian/graph-based-deep-learning-literature
bd494a45f95263e0fc21811b0e525e13a321e07e
[ "MIT" ]
2
2021-07-02T13:07:53.000Z
2021-07-08T01:39:18.000Z
conference-publications/folders/publications_icml21/README.md
AmirSh15/graph-based-deep-learning-literature
bd494a45f95263e0fc21811b0e525e13a321e07e
[ "MIT" ]
null
null
null
conference-publications/folders/publications_icml21/README.md
AmirSh15/graph-based-deep-learning-literature
bd494a45f95263e0fc21811b0e525e13a321e07e
[ "MIT" ]
1
2021-06-14T20:19:46.000Z
2021-06-14T20:19:46.000Z
# [Publications in ICML 2021](https://icml.cc/Conferences/2021/AcceptedPapersInitial) # Oversmoothing - [GRAND: Graph Neural Diffusion](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/grand_icml21/README.md) - Improving Breadth-Wise Backpropagation in Graph Neural Networks helps Learning Long-Range Dependencies - [Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/optgnn_icml21/README.md) # Efficient Training - [A Unified Lottery Ticket Hypothesis for Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/glt_icml21/README.md) - Memory-Efficient Graph Neural Networks - [GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/gnnautoscale_icml21/README.md) # Explainability - [On Explainability of Graph Neural Networks via Subgraph Explorations](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/subgraphx_icml21/README.md) - [Generative Causal Explanations for Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/gem_icml21/README.md) - [Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/brogini_icml21/README.md) - [Automated Graph Representation Learning with Hyperparameter Importance Explanation](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/autogr_icml21/README.md) # Expressivity - [A Collective Learning Framework to Boost GNN Expressiveness for Node Classification](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/clgnn_icml21/README.md) - [Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/mpsn_icml21/README.md) - [Breaking the Limits of Message Passing Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/gnnml_icml21/README.md) - [Let's Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/adgcn_icml21/README.md) # Robustness - [Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/ernn_icml21/README.md) - [Graph Neural Networks Inspired by Classical Iterative 
Algorithms](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/twirls_icml21/README.md) - [Elastic Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/elasticgnn_icml21/README.md) - [Interpretable Stability Bounds for Spectral Graph Filters](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/sgf_icml21/README.md) - [Information Obfuscation of Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/gal_icml21/README.md) - [Integrated Defense for Resilient Graph Matching](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/idrgm_icml21/README.md) # Graph Matching - [Stochastic Iterative Graph Matching](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/sigma_icml21/README.md) - [Deep Latent Graph Matching](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/dlgm_icml21/README.md) - [GLSearch: Maximum Common Subgraph Detection via Learning to Search](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/glsearch_icml21/README.md) # Generalisability - [From Local Structures to Size Generalization in Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/pattern_icml21/README.md) - [Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/sepgcn_icml21/README.md) # Self-Supervision - [Graph Contrastive Learning Automated](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/joao_icml21/README.md) - [Self-supervised Graph-level Representation Learning with Local and Global Structure](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/graphlog_icml21/README.md) # Graph Pooling - [Size-Invariant Graph Representations for Graph Classification Extrapolations](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/signn_icml21/README.md) - [How Framelets Enhance Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/ufgpool_icml21/README.md) # Normalisation - [GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/graphnorm_icml21/README.md) - [Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural 
Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/lipschitznorm_icml21/README.md) # Directed Graphs - [Directional Graph Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/dgn_icml21/README.md) - Directed Graph Embeddings in Pseudo-Riemannian Manifolds # Reinforcement Learning - [Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/gpl_icml21/README.md) - [Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/rlgn_icml21/README.md) # Computer Vision - [Learning Intra-Batch Connections for Deep Metric Learning](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/mpndml_icml21/README.md) - [Compositional Video Synthesis with Action Graphs](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/ag2vid_icml21/README.md) - Two Heads are Better Than One: Hypergraph-Enhanced Graph Reasoning for Visual Event Ratiocination # Miscellaneous - [Scalable Optimal Transport in High Dimensions for Graph Distances, Embedding Alignment, and More](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/gtn_icml21/README.md) - [Message Passing Adaptive Resonance Theory for Online Active Semi-supervised Learning](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/mpart_icml21/README.md) - [Persistence Homology for Link Prediction: An Interactive View](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/tlcgnn_icml21/README.md) - [E(n) Equivariant Graph Neural Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/egnn_icml21/README.md) - [GraphDF: A Discrete Flow Model for Molecular Graph Generation](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/graphdf_icml21/README.md) - [Graph Mixture Density Networks](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/gmdn_icml21/README.md) - [Z-GCNETs: Time Zigzags at Graph Convolutional Networks for Time Series Forecasting](https://github.com/naganandy/graph-based-deep-learning-literature/blob/master/conference-publications/folders/publications_icml21/zgcnet_icml21/README.md) - Recovering AES Keys with a Deep Cold Boot Attack - Symmetric Spaces for Graph Embeddings: A Finsler-Riemannian Approach - Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation - Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing - Local Graph Algorithms for Learning Higher-Order Structures - Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies
95.119266
280
0.844039
kor_Hang
0.301914
ff4717a76c5fc9523f23e31674b805c057fb97d0
605
md
Markdown
content/components/keyword-list/examples/example-default/code.md
sirleech/designsystem
b576d0db58a3f65ddce9905b5fdb717e8f46757a
[ "MIT" ]
2
2019-11-20T13:40:31.000Z
2020-05-11T22:57:21.000Z
content/components/keyword-list/examples/example-default/code.md
sirleech/designsystem
b576d0db58a3f65ddce9905b5fdb717e8f46757a
[ "MIT" ]
null
null
null
content/components/keyword-list/examples/example-default/code.md
sirleech/designsystem
b576d0db58a3f65ddce9905b5fdb717e8f46757a
[ "MIT" ]
null
null
null
<ul class="au-keyword-list au-link-list"> <li> <small class="au-keyword-list__small">Department of </small> Agriculture and Water Resources </li> <li> <small class="au-keyword-list__small">Department of </small> Communications and the Arts </li> </ul> <div class="au-body au-body--dark"> <ul class="au-keyword-list au-keyword-list--dark au-link-list"> <li> <small class="au-keyword-list__small">Department of </small> Agriculture and Water Resources </li> <li> <small class="au-keyword-list__small">Department of </small> Communications and the Arts </li> </ul> </div>
24.2
64
0.687603
eng_Latn
0.523723
ff475aa8cfb0d51b20ac09737fffa8c7da51a227
30,686
md
Markdown
articles/app-service/faq-configuration-and-management.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service/faq-configuration-and-management.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service/faq-configuration-and-management.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Preguntas frecuentes sobre la configuración description: Obtenga respuestas a preguntas frecuentes acerca de los problemas de configuración y administración de Azure App Service. author: genlin manager: dcscontentpm tags: top-support-issue ms.assetid: 2fa5ee6b-51a6-4237-805f-518e6c57d11b ms.topic: article ms.date: 10/30/2018 ms.author: genli ms.openlocfilehash: 5545acbfd6bb239b9518fbe352b819f300dafaf0 ms.sourcegitcommit: 648c8d250106a5fca9076a46581f3105c23d7265 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 08/27/2020 ms.locfileid: "88962356" --- # <a name="configuration-and-management-faqs-for-web-apps-in-azure"></a>Preguntas más frecuentes sobre la configuración y administración de Web Apps en Azure Este artículo contiene respuestas a las preguntas más frecuentes (P+F) sobre los problemas de configuración y administración de la [característica Web Apps de Azure App Service](https://azure.microsoft.com/services/app-service/web/). [!INCLUDE [support-disclaimer](../../includes/support-disclaimer.md)] ## <a name="are-there-limitations-i-should-be-aware-of-if-i-want-to-move-app-service-resources"></a>¿Existen limitaciones que debo conocer si quiero mover recursos de App Service? Si tiene previsto mover recursos de App Service a un nuevo grupo de recursos o suscripción, hay algunas limitaciones que debe conocer. Para más información, consulte [Limitaciones de App Service](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md). ## <a name="how-do-i-use-a-custom-domain-name-for-my-web-app"></a>¿Cómo se usa un nombre de dominio personalizado para una aplicación web? Para obtener respuestas a preguntas comunes sobre el uso de un nombre de dominio personalizado con su aplicación web de Azure, consulte nuestro vídeo de siete minutos sobre [cómo agregar un nombre de dominio personalizado](https://channel9.msdn.com/blogs/Azure-App-Service-Self-Help/Add-a-Custom-Domain-Name). El vídeo ofrece un tutorial sobre cómo agregar un nombre de dominio personalizado. Describe cómo usar su propia dirección URL en lugar de *.azurewebsites.net con la aplicación web de App Service. También puede ver un tutorial detallado de [cómo asignar un nombre de dominio personalizado](app-service-web-tutorial-custom-domain.md). ## <a name="how-do-i-purchase-a-new-custom-domain-for-my-web-app"></a>¿Cómo se compra un nuevo dominio personalizado para una aplicación web? Para saber cómo comprar y configurar un dominio personalizado para su aplicación web de App Service, consulte [Comprar y configurar un nombre de dominio personalizado en Azure App Service](manage-custom-dns-buy-domain.md). ## <a name="how-do-i-upload-and-configure-an-existing-tlsssl-certificate-for-my-web-app"></a>¿Cómo se carga y configura un certificado TLS/SSL existente para una aplicación web? Para obtener información sobre cómo cargar y configurar un certificado TLS/SSL personalizado, consulte [Adición de un certificado TLS/SSL a la aplicación App Service](configure-ssl-certificate.md). ## <a name="how-do-i-purchase-and-configure-a-new-tlsssl-certificate-in-azure-for-my-web-app"></a>¿Cómo se compra y configura un nuevo certificado TLS/SSL en Azure para una aplicación web? Para saber cómo comprar y configurar un certificado TLS/SSL para la aplicación web App Service, consulte [Adición de un certificado TLS/SSL a la aplicación App Service](configure-ssl-certificate.md). ## <a name="how-do-i-move-application-insights-resources"></a>¿Cómo se mueven los recursos de Application Insights? 
Actualmente, Azure Application Insights no admite la operación de movimiento. Si su grupo de recursos original incluye un recurso de Application Insights, no puede mover ese recurso. Si se incluye el recurso de Application Insights al intentar mover una aplicación de App Service, la operación entera de movimiento dará error. Sin embargo, no es necesario que el plan de App Service y Application Insights residan en el mismo grupo de recursos que la aplicación para que esta funcione correctamente. Para más información, consulte [Limitaciones de App Service](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md). ## <a name="where-can-i-find-a-guidance-checklist-and-learn-more-about-resource-move-operations"></a>¿Dónde puedo encontrar una lista de comprobación guía y aprender más sobre las operaciones de movimiento de recursos? En [Limitaciones de App Service](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md) se muestra cómo mover los recursos a una nueva suscripción o a un nuevo grupo de recursos de la misma suscripción. Podrá obtener información sobre la lista de comprobación de movimiento de recursos, aprender qué servicios admite la operación de movimiento y saber más sobre las limitaciones de App Service y otros temas. ## <a name="how-do-i-set-the-server-time-zone-for-my-web-app"></a>¿Cómo se establece la zona horaria del servidor para una aplicación web? Para establecer la zona horaria de servidor para la aplicación web: 1. En Azure Portal, en la suscripción de App Service, vaya al menú **Configuración de la aplicación**. 2. En **Configuración de la aplicación**, agregue este valor: * Clave = WEBSITE_TIME_ZONE * Valor = *la zona horaria que desea* 3. Seleccione **Guardar**. Para los servicios de aplicación que se ejecutan en Windows, consulte la columna **Zona horaria** en el artículo [Zonas horarias predeterminadas](/windows-hardware/manufacture/desktop/default-time-zones) para ver los valores aceptados. En el caso de los servicios de aplicaciones que se ejecutan en Linux, establezca el [nombre de la base de datos de TZ](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) como el valor de zona horaria. Este es un ejemplo de nombre de base de datos de TZ: América/Adak. ## <a name="why-do-my-continuous-webjobs-sometimes-fail"></a>¿Por qué mis trabajos web continuos en ocasiones dan error? De forma predeterminada, las aplicaciones web se descargan si están inactivas durante algún tiempo. Esto permite que el sistema conserve recursos. En los planes Básico y Estándar, puede activar el valor **Siempre activado** para que la aplicación web se mantenga cargada en todo momento. Si la aplicación web ejecuta trabajos web continuos, debe seleccionar **Siempre activado** o dichos trabajos podrían no ejecutarse de manera confiable. Para más información, consulte [Creación de un trabajo web de ejecución continua](webjobs-create.md#CreateContinuous). ## <a name="how-do-i-get-the-outbound-ip-address-for-my-web-app"></a>¿Cómo se obtiene una dirección IP de salida para una aplicación web? Para obtener la lista de direcciones IP de salida para la aplicación web: 1. En Azure Portal, en la hoja de la aplicación web, vaya al menú **Propiedades**. 2. Busque **direcciones ip de salida**. Aparece la lista de direcciones IP de salida. 
Para saber cómo obtener la dirección IP de salida si su sitio web está hospedado en App Service Environment, consulte [Direcciones de red de salida](environment/app-service-app-service-environment-network-architecture-overview.md#outbound-network-addresses). ## <a name="how-do-i-get-a-reserved-or-dedicated-inbound-ip-address-for-my-web-app"></a>¿Cómo se obtiene una dirección IP de entrada reservada o dedicada para una aplicación web? Para configurar una dirección IP dedicada o reservada para las llamadas entrantes realizadas a su sitio web de Azure, instale y configure un certificado TLS/SSL basado en IP. Tenga en cuenta que para usar una dirección IP dedicada o reservada para las llamadas entrantes, el plan de App Service debe estar en un plan de servicio Básico o superior. ## <a name="can-i-export-my-app-service-certificate-to-use-outside-azure-such-as-for-a-website-hosted-elsewhere"></a>¿Se puede exportar el certificado de App Service para usarlo fuera de Azure, por ejemplo, en un sitio web hospedado en otra parte? Sí, lo puede exportar para usarlo fuera de Azure. Para más información, consulte las [preguntas más frecuentes sobre los dominios personalizados y los certificados de App Service](https://social.msdn.microsoft.com/Forums/azure/f3e6faeb-5ed4-435a-adaa-987d5db43b80/faq-on-app-service-certificates-and-custom-domains?forum=windowsazurewebsitespreview). ## <a name="can-i-export-my-app-service-certificate-to-use-with-other-azure-cloud-services"></a>¿Se puede exportar un certificado de App Service para usarlo con otros servicios en la nube de Azure? El portal proporciona una experiencia de primera clase para la implementación de un certificado de servicio en aplicaciones de App Service mediante Azure Key Vault. No obstante, se han recibido solicitudes de clientes para usar estos certificados fuera de la plataforma de App Service, por ejemplo, con Azure Virtual Machines. Para aprender a crear una copia PFX local de su certificado de App Service de forma que pueda usarlo con otros recursos de Azure, consulte la [entrada del blog sobre cómo crear una copia PFX local de un certificado de App Service](https://blogs.msdn.microsoft.com/appserviceteam/2017/02/24/creating-a-local-pfx-copy-of-app-service-certificate/). Para más información, consulte las [preguntas más frecuentes sobre los dominios personalizados y los certificados de App Service](https://social.msdn.microsoft.com/Forums/azure/f3e6faeb-5ed4-435a-adaa-987d5db43b80/faq-on-app-service-certificates-and-custom-domains?forum=windowsazurewebsitespreview). ## <a name="why-do-i-see-the-message-partially-succeeded-when-i-try-to-back-up-my-web-app"></a>¿Por qué aparece un mensaje "Realizado parcialmente" cuando intento hacer una copia de seguridad de mi aplicación web? Una causa común de errores en la copia de seguridad es que algunos archivos están en uso por la aplicación. Estos archivos se bloquean mientras se realiza la copia de seguridad. Como consecuencia, no se puede hacer una copia de seguridad de ellos y podría aparecer el estado "Realizado parcialmente". Existe una posibilidad de impedir que esto ocurra y es excluir los archivos del proceso de copia de seguridad. Puede elegir hacer copia de seguridad solo de lo necesario. Para más información, consulte [Backup just the important parts of your site with Azure web apps](https://zainrizvi.io/blog/creating-partial-backups-of-your-site-with-azure-web-apps/) (Copia de seguridad de solo las partes importantes de un sitio con aplicaciones web de Azure). 
## <a name="how-do-i-remove-a-header-from-the-http-response"></a>¿Cómo se quita un encabezado de la respuesta HTTP? Para quitar los encabezados de la respuesta HTTP, actualice el archivo web.config de su sitio. Para más información, consulte [Remove standard server headers on your Azure websites](https://azure.microsoft.com/blog/removing-standard-server-headers-on-windows-azure-web-sites/) (Eliminación de encabezados de servidor estándar en los sitios web de Azure). ## <a name="is-app-service-compliant-with-pci-standard-30-and-31"></a>¿Es compatible App Service con el Estándar de PCI 3.0 y 3.1? Actualmente, la característica Web Apps de Azure App Service es compatible con la versión 3.0, nivel 1 de PCI Data Security Standard (DSS). La versión 3.1 de PCI DSS está en nuestros planes. Ya está en marcha el plan de adopción del último estándar. Para la certificación de la versión 3.1 de PCI DSS es necesario deshabilitar la versión 1.0 del protocolo Seguridad de la capa de transporte (TLS). En la actualidad, la deshabilitación de TLS 1.0 no es una opción en la mayoría de los planes de App Service. Sin embargo, si usa App Service Environment o está dispuesto a migrar su carga de trabajo a App Service Environment, puede obtener un mayor control de su entorno. Para ello, es necesario deshabilitar TLS 1.0 con la ayuda del soporte técnico de Azure. En un futuro próximo, tenemos pensado hacer que esos valores sean accesibles para los usuarios. Para más información, consulte el artículo sobre el [cumplimiento de las aplicaciones web de Microsoft Azure App Service con el Estándar de PCI 3.0 y 3.1](https://support.microsoft.com/help/3124528). ## <a name="how-do-i-use-the-staging-environment-and-deployment-slots"></a>¿Cómo se usan el entorno de ensayo y las ranuras de implementación? En los planes Premium y Standard de App Service, al implementar la aplicación web en App Service, puede hacerlo en una ranura de implementación independiente distinta a la de producción predeterminada. Los espacios de implementación son aplicaciones web activas que tienen sus propios nombres de host. Los elementos de contenido y configuración de aplicaciones web se pueden intercambiar entre dos espacios de implementación, incluida la ranura de producción. Para más información sobre el uso de las ranuras de implementación, consulte [Configuración de entornos de ensayo en Azure App Service](deploy-staging-slots.md). ## <a name="how-do-i-access-and-review-webjob-logs"></a>¿Cómo se accede y se revisan registros de WebJob? Para revisar los registros de WebJob: 1. Inicie sesión en su **sitio web de Kudu** (`https://*yourwebsitename*.scm.azurewebsites.net`). 2. Seleccione la instancia de WebJob. 3. Seleccione el botón **Alternar salida**. 4. Para descargar el archivo de salida, seleccione el vínculo **Descargar**. 5. Para ejecuciones individuales, seleccione **Individual Invoke** (Invocación individual). 6. Seleccione el botón **Alternar salida**. 7. Seleccione el vínculo de descarga. ## <a name="im-trying-to-use-hybrid-connections-with-sql-server-why-do-i-see-the-message-systemoverflowexception-arithmetic-operation-resulted-in-an-overflow"></a>Estoy intentando usar las conexiones híbridas con SQL Server. ¿Por qué aparece el mensaje "System.OverflowException: la operación aritmética ha provocado un desbordamiento"? Si usa conexiones híbridas para acceder a SQL Server, una actualización de Microsoft .NET del 10 de mayo de 2016 podría provocar un error en las conexiones. 
Puede que aparezca este mensaje: ``` Exception: System.Data.Entity.Core.EntityException: The underlying provider failed on Open. —> System.OverflowException: Arithmetic operation resulted in an overflow. or (64 bit Web app) System.OverflowException: Array dimensions exceeded supported range, at System.Data.SqlClient.TdsParser.ConsumePreLoginHandshake ``` ### <a name="resolution"></a>Solución La excepción se produjo debido a un problema con el Administrador de conexiones híbridas que ya se ha corregido. Asegúrese de [actualizar el Administrador de conexiones híbridas](https://go.microsoft.com/fwlink/?LinkID=841308) para resolver este problema. ## <a name="how-do-i-add-a-url-rewrite-rule"></a>¿Cómo se agrega una regla de reescritura de direcciones URL? Para agregar una regla de reescritura de URL, cree un archivo web.config con las entradas de configuración correspondientes en la carpeta **wwwroot**. Para más información, consulte [Azure App Services: Understanding URL rewrite](/archive/blogs/madhurabharadwaj/azure-app-services-understanding-url-re-write) (Azure App Services: descripción de la reescritura de URL). ## <a name="how-do-i-control-inbound-traffic-to-app-service"></a>¿Cómo se controla el tráfico de entrada a App Service? En el nivel de sitio, tiene dos opciones para controlar el tráfico de entrada a App Service: * Activar las restricciones de IP dinámicas. Para saber cómo activar las restricciones de IP dinámicas, consulte [IP and domain restrictions for Azure websites](https://azure.microsoft.com/blog/ip-and-domain-restrictions-for-windows-azure-web-sites/) (Restricciones de dominio e IP para sitios web de Azure). * Active Module Security. Para saber cómo activar Module Security, consulte [ModSecurity web application firewall on Azure websites](https://azure.microsoft.com/blog/modsecurity-for-azure-websites/) (Firewall de aplicaciones web de ModSecurity en sitios web de Azure). Si usa App Service Environment, puede emplear el [firewall de Barracuda](https://azure.microsoft.com/blog/configuring-barracuda-web-application-firewall-for-azure-app-service-environment/). ## <a name="how-do-i-block-ports-in-an-app-service-web-app"></a>¿Cómo se bloquean los puertos en una aplicación web de App Service? En el entorno de inquilinos compartidos de App Service, no es posible bloquear puertos específicos debido a la naturaleza de la infraestructura. Los puertos TCP 4020, 4022 y 4024 podrían estar también abiertos para la depuración remota de Visual Studio. En App Service Environment, dispone de control total sobre el tráfico de entrada y de salida. Puede usar grupos de seguridad de red para restringir o bloquear puertos específicos. Para más información sobre App Service Environment, consulte [Introducing App Service Environment](https://azure.microsoft.com/blog/introducing-app-service-environment/) (Introducción a App Service Environment). ## <a name="how-do-i-capture-an-f12-trace"></a>¿Cómo se captura un seguimiento F12? Para capturar un seguimiento F12, tiene dos opciones: * Seguimiento HTTP de F12 * Salida de consola de F12 ### <a name="f12-http-trace"></a>Seguimiento HTTP de F12 1. En Internet Explorer, vaya a su sitio web. Es importante iniciar sesión antes de realizar los siguientes pasos. En caso contrario, el seguimiento F12 captura los datos de inicio de sesión confidenciales. 2. Presione F12. 3. Compruebe que esté seleccionada la pestaña **Red** y luego seleccione el botón verde **Reproducir**. 4. Realice los pasos que reproducen el problema. 5. Seleccione el botón rojo **Detener**. 
6. Seleccione el botón **Guardar** (icono de disco) y guarde el archivo HAR (en Internet Explorer y Microsoft Edge) *o* haga clic con el botón derecho en él y seleccione **Save as HAR with content** (Guardar como HAR con contenido) (en Chrome). ### <a name="f12-console-output"></a>Salida de consola de F12 1. Seleccione la pestaña **Consola**. 2. Para cada pestaña que contenga más de cero elementos, seleccione la pestaña (**Error**, **Advertencia** o **Información**). Si la pestaña no está seleccionada, el icono de la pestaña está de color gris o negro al mover el cursor fuera de ella. 3. Haga clic con el botón derecho en el área de mensajes del panel y luego seleccione **Copiar todo**. 4. Pegue el texto copiado en un archivo y luego guarde el archivo. Para ver un archivo HAR, puede usar el [visor de HAR](http://www.softwareishard.com/har/viewer/). ## <a name="why-do-i-get-an-error-when-i-try-to-connect-an-app-service-web-app-to-a-virtual-network-that-is-connected-to-expressroute"></a>¿Por qué recibo un error al intentar conectar una aplicación web de App Service a una red virtual que está conectada a ExpressRoute? Si intenta conectar una aplicación web de Azure a una red virtual que está conectada a ExpressRoute de Azure, se produce un error. Aparece el mensaje siguiente: "La puerta de enlace no es una puerta de enlace de VPN". Actualmente, no puede tener conexiones VPN de punto a sitio a una red virtual que esté conectada a ExpressRoute. Una VPN de punto a sitio y ExpressRoute no pueden coexistir en la misma red virtual. Para más información, consulte [Límites y limitaciones de ExpressRoute y las conexiones VPN de sitio a sitio](../expressroute/expressroute-howto-coexist-classic.md#limits-and-limitations). ## <a name="how-do-i-connect-an-app-service-web-app-to-a-virtual-network-that-has-a-static-routing-policy-based-gateway"></a>¿Cómo se conecta una aplicación web de App Service a una red virtual que tiene una puerta de enlace de enrutamiento estático (basado en directivas)? Actualmente, no se admite la conexión de una aplicación web de App Service a una red virtual que tenga una puerta de enlace de enrutamiento estático (basado en directivas). Si su red virtual de destino ya existe, debe tener habilitada la VPN de punto a sitio, con una puerta de enlace de enrutamiento dinámico, para poder conectarse a una aplicación. Si la puerta de enlace está configurada para el enrutamiento estático, no podrá habilitar una VPN de punto a sitio. Para más información, consulte [Integración de su aplicación con una red virtual de Azure](web-sites-integrate-with-vnet.md). ## <a name="in-my-app-service-environment-why-can-i-create-only-one-app-service-plan-even-though-i-have-two-workers-available"></a>En App Service Environment, ¿por qué solo se puede crear un plan de App Service si tengo dos trabajos disponibles? Para proporcionar tolerancia a errores, App Service Environment requiere que para cada grupo de trabajo haya al menos un recurso de proceso adicional. No se puede asignar una carga de trabajo al recurso de proceso adicional. Para más información, consulte [Creación de App Service Environment](environment/app-service-web-how-to-create-an-app-service-environment.md). ## <a name="why-do-i-see-timeouts-when-i-try-to-create-an-app-service-environment"></a>¿Por qué veo los tiempos de espera al intentar crear una instancia de App Service Environment? En ocasiones, la creación de una instancia de App Service Environment produce error. 
En ese caso, verá el siguiente error en los registros de actividad: ``` ResourceID: /subscriptions/{SubscriptionID}/resourceGroups/Default-Networking/providers/Microsoft.Web/hostingEnvironments/{ASEname} Error:{"error":{"code":"ResourceDeploymentFailure","message":"The resource provision operation did not complete within the allowed timeout period."}} ``` Para resolver este problema, asegúrese de que ninguna de las siguientes condiciones sean ciertas: * La subred sea demasiado pequeña. * La subred no esté vacía. * ExpressRoute impida los requisitos de conectividad de red de una instancia de App Service Environment. * Un grupo de seguridad de red incorrecto impida los requisitos de conectividad de red de una instancia de App Service Environment. * La tunelización forzada esté activada. Para más información, consulte [Frequent issues when deploying (creating) a new Azure App Service Environment](/archive/blogs/waws/most-frequent-issues-when-deploying-creating-a-new-azure-app-service-environment-ase) (Problemas frecuentes al implementar [crear] una nueva instancia de Azure App Service Environment). ## <a name="why-cant-i-delete-my-app-service-plan"></a>¿Por qué no puedo eliminar mi plan de App Service? No podrá eliminar un plan de App Service si alguna de las aplicaciones de App Service está asociada con dicho plan. Antes de eliminar un plan de App Service, quite todas las aplicaciones asociadas. ## <a name="how-do-i-schedule-a-webjob"></a>¿Cómo se programa un trabajo web? Puede crear un trabajo web programado mediante expresiones Cron: 1. Cree un archivo settings.job. 2. En este archivo JSON, incluya una propiedad de programación mediante una expresión Cron: ```json { "schedule": "{second} {minute} {hour} {day} {month} {day of the week}" } ``` Para más información sobre los trabajos web programados, consulte [Creación de un trabajo web programado utilizando una expresión CRON](webjobs-create.md#CreateScheduledCRON). ## <a name="how-do-i-perform-penetration-testing-for-my-app-service-app"></a>¿Cómo se realizan pruebas de penetración para una aplicación de App Service? Para realizar pruebas de penetración, [envíe una solicitud](https://portal.msrc.microsoft.com/engage/pentest). ## <a name="how-do-i-configure-a-custom-domain-name-for-an-app-service-web-app-that-uses-traffic-manager"></a>¿Cómo se configura un nombre de dominio personalizado para una aplicación web de App Service que usa Traffic Manager? Para aprender a usar un nombre de dominio común con una aplicación de App Service que emplea Azure Traffic Manager para el equilibrio de carga, consulte [Configuración de un nombre de dominio personalizado para una aplicación web de Azure con Traffic Manager](configure-domain-traffic-manager.md). ## <a name="my-app-service-certificate-is-flagged-for-fraud-how-do-i-resolve-this"></a>El certificado de App Service está marcado como fraudulento. ¿Cómo se resuelve este problema? [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] Durante la comprobación del dominio de una compra de certificado de App Service, puede aparecer el siguiente mensaje: "Su certificado se ha marcado por posible fraude. La solicitud está actualmente bajo revisión. Si el certificado no es utilizable en 24 horas, póngase en contacto con el soporte técnico de Azure". Como indica el mensaje, este proceso de comprobación de fraude puede tardar hasta 24 horas en completarse. Durante este tiempo, seguirá apareciendo el mensaje. 
Si el certificado de App Service sigue mostrando este mensaje transcurridas 24 horas, ejecute el siguiente script de PowerShell. El script establece contacto directamente con el [proveedor de certificados](https://www.godaddy.com/) para resolver este problema. ```powershell Connect-AzAccount Set-AzContext -SubscriptionId <subId> $actionProperties = @{ "Name"= "<Customer Email Address>" }; Invoke-AzResourceAction -ResourceGroupName "<App Service Certificate Resource Group Name>" -ResourceType Microsoft.CertificateRegistration/certificateOrders -ResourceName "<App Service Certificate Resource Name>" -Action resendRequestEmails -Parameters $actionProperties -ApiVersion 2015-08-01 -Force ``` ## <a name="how-do-authentication-and-authorization-work-in-app-service"></a>¿Cómo funciona la autenticación y la autorización en App Service? Para obtener documentación detallada sobre la autenticación y la autorización en App Service, puede consultar la documentación de inicio de sesión de varios proveedores de identidad: * [Azure Active Directory](configure-authentication-provider-aad.md) * [Facebook](configure-authentication-provider-facebook.md) * [Google](configure-authentication-provider-google.md) * [Cuenta Microsoft](configure-authentication-provider-microsoft.md) * [Twitter](configure-authentication-provider-twitter.md) ## <a name="how-do-i-redirect-the-default-azurewebsitesnet-domain-to-my-azure-web-apps-custom-domain"></a>¿Cómo se redirige el dominio *.azurewebsites.net predeterminado a un dominio personalizado de la aplicación web de Azure? Cuando se crea un nuevo sitio web mediante Web Apps en Azure, se asigna un dominio *sitename*.azurewebsites.net predeterminado a su sitio. Si agrega un nombre de host personalizado al sitio y no desea que los usuarios puedan acceder a su dominio *.azurewebsites.net predeterminado, puede redirigir la dirección URL predeterminada. Para aprender a redirigir todo el tráfico desde el dominio predeterminado de su sitio web a su dominio personalizado, consulte [Redirect the default domain to your custom domain in Azure web apps](https://zainrizvi.io/blog/block-default-azure-websites-domain/) (Redirección del dominio predeterminado al dominio personalizado en aplicaciones web de Azure). ## <a name="how-do-i-determine-which-version-of-net-version-is-installed-in-app-service"></a>¿Cómo se determina qué versión de .NET está instalada en App Service? La manera más rápida de encontrar la versión de Microsoft .NET que está instalada en App Service es usar la consola de Kudu. Puede acceder a la consola de Kudu desde el portal o mediante la dirección URL de su aplicación de App Service. Para obtener instrucciones detalladas, consulte [Determine the installed .NET version in App Service](/archive/blogs/waws/how-to-determine-the-installed-net-version-in-azure-app-services) (Determinación de la versión de .NET instalada en App Service). ## <a name="why-isnt-autoscale-working-as-expected"></a>¿Por qué no funciona el escalado automático según lo previsto? Si con el escalado automático de Azure no se ha escalado o reducido la instancia de la aplicación web horizontalmente como se esperaba, puede que esté trabajando en un escenario en el que se ha decidido intencionadamente no escalar para evitar un bucle infinito debido a oscilaciones. Esto suele suceder cuando no hay un margen suficiente entre los umbrales de escalado y reducción horizontal. 
Para aprender a evitar las oscilaciones y leer sobre otros procedimientos recomendados del escalado automático, consulte [Procedimientos recomendados de escalado automático](../azure-monitor/platform/autoscale-best-practices.md#autoscale-best-practices). ## <a name="why-does-autoscale-sometimes-scale-only-partially"></a>¿Por qué a veces el escalado automático solo escala parcialmente? El escalado automático se desencadena cuando las métricas superan los límites previamente configurados. En ocasiones, puede observar que la capacidad solo está parcialmente llena, en comparación con lo que se esperaba. Esto puede ocurrir cuando el número de instancias que desea no está disponible. En ese escenario, el escalado automático se llena parcialmente con el número de instancias disponibles. Luego, ejecuta la lógica de reequilibrio para obtener más capacidad y asigna el resto de instancias. Tenga en cuenta que esta operación puede tardar unos minutos. Si, transcurridos unos minutos, no ve el número esperado de instancias, es posible que el relleno parcial fuera suficiente para llevar las métricas dentro de los límites. O bien, el escalado automático podría haber reducido verticalmente porque alcanzó el límite inferior de las métricas. Si ninguna de estas condiciones se aplica y el problema continúa, envíe una solicitud de soporte técnico. ## <a name="how-do-i-turn-on-http-compression-for-my-content"></a>¿Cómo se activa la compresión HTTP para el contenido? Para activar la compresión para los tipos de contenido estáticos y dinámicos, agregue el código siguiente al archivo web.config de nivel de aplicación: ```xml <system.webServer> <urlCompression doStaticCompression="true" doDynamicCompression="true" /> </system.webServer> ``` También puede especificar los tipos MIME dinámicos y estáticos específicos que desea comprimir. Para más información, consulte nuestra respuesta a una pregunta del foro en [httpCompression settings on a simple Azure website](https://social.msdn.microsoft.com/Forums/azure/890b6d25-f7dd-4272-8970-da7798bcf25d/httpcompression-settings-on-a-simple-azure-website?forum=windowsazurewebsitespreview) (Configuración de httpCompression en un sitio web de Azure sencillo). ## <a name="how-do-i-migrate-from-an-on-premises-environment-to-app-service"></a>¿Cómo se migra desde un entorno local a App Service? Para migrar sitios de servidores web de Windows y Linux a App Service, puede usar Azure App Service Migration Assistant. La herramienta de migración crea las aplicaciones web y las bases de datos en Azure que son necesarias y luego publica el contenido. Para más información, consulte [Azure App Service Migration Assistant](https://appmigration.microsoft.com/).
96.801262
750
0.795509
spa_Latn
0.975539
ff47757ee92999f7c56a76dcc858933f63917d1c
1,248
md
Markdown
README.md
justinboswell/crtrs
181fc6a3a081c30efabdc7c9a09f829833bceb88
[ "Apache-2.0" ]
null
null
null
README.md
justinboswell/crtrs
181fc6a3a081c30efabdc7c9a09f829833bceb88
[ "Apache-2.0" ]
null
null
null
README.md
justinboswell/crtrs
181fc6a3a081c30efabdc7c9a09f829833bceb88
[ "Apache-2.0" ]
null
null
null
# crtrs ## Example: `$ cargo build` will result in the following code being generated from src/lib.rs (dumped via `cargo expand`): ```rust #![feature(prelude_import)] #![feature(rustc_private)] #[prelude_import] use std::prelude::v1::*; #[macro_use] extern crate std; #[macro_use] extern crate crt_macros; extern crate libc; #[allow(non_snake_case)] #[no_mangle] pub extern "C" fn TestStruct_new() -> *mut TestStruct { unsafe { std::mem::zeroed() } } #[allow(non_snake_case)] #[no_mangle] pub extern "C" fn TestStruct_destroy(s: *mut TestStruct) { std::mem::drop(s); } #[repr(C)] pub struct TestStruct { member_int: i32, } #[allow(non_snake_case)] #[no_mangle] pub extern "C" fn TestStruct_do_thing(me: *mut TestStruct) { me.do_thing(); } #[allow(non_snake_case)] #[no_mangle] pub extern "C" fn TestStruct_return_str(me: *mut TestStruct) -> String { me.return_str() } impl TestStruct { pub fn do_thing(&self) { { ::std::io::_print(::core::fmt::Arguments::new_v1( &["DO_THING\n"], &match () { () => [], }, )); }; } pub fn return_str(&self) -> String { return String::from("RETURN_STR"); } } ```
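The exported `extern "C"` wrappers shown above are what make the struct callable from other languages once the crate is compiled as a C-compatible shared library. The sketch below is a hypothetical illustration of that calling pattern from Python via `ctypes`; it assumes the crate is built with `crate-type = ["cdylib"]` and that the resulting library lands at the path shown — neither assumption comes from the README — and the prototype constructor above currently returns a zeroed pointer, so the guard matters.

```python
# Hypothetical sketch: calling the generated C ABI from Python via ctypes.
# Assumes the crate was built as a cdylib and that the library path below exists.
import ctypes

lib = ctypes.CDLL("target/debug/libcrtrs.so")  # assumed library name/path

# Declare argument/return types for the generated wrappers shown above.
lib.TestStruct_new.restype = ctypes.c_void_p
lib.TestStruct_do_thing.argtypes = [ctypes.c_void_p]
lib.TestStruct_destroy.argtypes = [ctypes.c_void_p]

handle = lib.TestStruct_new()       # calls the #[no_mangle] constructor
if handle:                          # the prototype currently returns a zeroed pointer
    lib.TestStruct_do_thing(handle) # prints "DO_THING" from the Rust side
    lib.TestStruct_destroy(handle)  # hands the pointer back to the Rust side
```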
22.285714
110
0.600962
eng_Latn
0.25499
ff47b1bea822224deaa8f9af941ab3f8dd4cf394
8,327
md
Markdown
CONTRIBUTING.md
ssyms/firecloud-ui
887b0c991b759ccb785b681c3d5757ca0d9d0edb
[ "BSD-3-Clause" ]
26
2015-08-04T13:01:25.000Z
2022-01-04T23:16:12.000Z
CONTRIBUTING.md
ssyms/firecloud-ui
887b0c991b759ccb785b681c3d5757ca0d9d0edb
[ "BSD-3-Clause" ]
1,042
2015-06-18T14:11:30.000Z
2022-02-15T02:37:29.000Z
CONTRIBUTING.md
ssyms/firecloud-ui
887b0c991b759ccb785b681c3d5757ca0d9d0edb
[ "BSD-3-Clause" ]
13
2016-01-08T02:34:50.000Z
2021-01-06T21:19:51.000Z
# Contributing to Firecloud UI ## Firecloud Style Guide When running in a non-production environment, the style guide can be accessed at `/#styles`, or by hovering at the right edge of the footer. ## ClojureScript Style Conventions For ClojureScript code, we follow the [Clojure Style Guide](https://github.com/bbatsov/clojure-style-guide) with exceptions noted here. ### Styling in editors **Atom** The [**Lisp Paredit**](https://atom.io/packages/lisp-paredit) package formats code correctly. **IntelliJ** The [**Cursive**](https://cursive-ide.com) plugin formats code correctly (after a few configuration changes), but is not free. It requires the following configuration: 1. Correct cursive settings are included in this repo in importable form, in the file [`IntelliJ-clojure-style.xml`](IntelliJ-clojure-style.xml). Import them from `IntelliJ IDEA -> Preferences -> Editor -> Code Style -> Clojure`: <img width="831" src="https://user-images.githubusercontent.com/22642695/32802415-f3cd6452-c94d-11e7-85f4-4698da453d78.png"> 2. You need to tell Cursive how to resolve some custom macros: 1. Find an instance of `react/defc` and place your cursor on it or highlight it. 2. Run the IntelliJ command _Show Intention Actions_ (Mac default Option + Return) 3. Select _Resolve as..._ 4. Select _def_ <img src="https://cloud.githubusercontent.com/assets/22642695/21731936/f7e5a17c-d424-11e6-973b-bf5897bbf833.png" title="resolve defc as def" width="458" height="114"/> 5. Repeat this process for `defc-`. 6. Find an instance of `utils/multi-swap!` and resolve it as `->`. ### Source code layout & organization We feel the 80-character line length limit in the style guide is more restrictive than necessary. Where feasible, avoid making lines longer than 100 characters. We do not strictly adhere to the guide's suggestion to keep functions under 10 lines of code. In general, however, shorter functions are preferred. ### Naming React component names are camel-cased, starting with a capital letter: `[comps/Button]` Methods on components are kebab-cased, and "private" (although this is technically unenforced) methods start with a dash: `:-create-dropdown-ref-handler` Native clojure(script) methods and structures are kebab-cased: `(common/render-info-box)` Method and function names should always be verbs, and structures should be nouns. ### Requiring Namespaces The order for required namespaces is as follows: 1. `[dmohs.react :as react]` 2. any third-party libraries 3. any internal clojure namespaces 4. any broadfcui namespaces Within each section, all lines should be alphabetized. Leave the closing double paren on its own line, to avoid excess line changes in git. The only unused namespace that can be left in a `require` is `utils`. Avoid `refer`ing to a function from the namespace, except when a namespace contains only one public member. When requiring a namespace, we _generally_ require it as its full name. Some exceptions to this are `components`, which is required as `comps`, and `monitor.common` as `moncommon`. Common sense applies. A full namespace declaration should look like this: ```cljs (ns broadfcui.new-namespace (:require [dmohs.react :as react] [inflections.core :as inflections] [clojure.string :as string] [broadfcui.common :as common] [broadfcui.components :as comps] [broadfcui.utils :as utils] )) ``` ## React Conventions ### Don't create a component if you don't have to Every React component that's created has a state and props that have to be tracked in memory by the application. 
When you're creating something, a `def` or `defn` is preferred over a `defc`. As a quick rule of thumb, if the thing you're creating doesn't have an internal state that needs to be tracked, and it doesn't need to respond to lifecycle events (i.e. `component-did-mount`), it shouldn't be a component. ### Styles inside of component definitions We avoid using CSS files. Instead, we define styles for components in place, along with their logic, so that all of the attributes of a component are described in one place. Our reasons for this are [outlined in this slide deck](https://speakerdeck.com/vjeux/react-css-in-js). ### Prefer `let` If you're creating a function or value that's only used in one place, for instance inside of only one method on a component, rather than creating a method on the component, `let` it at the top of the function. If that makes things too crowded, consider putting it in a private function in the namespace. ### Avoid passing state A React component's state is considered private to that component. Do not pass the `state` atom to another component. Avoid this: ```clojure (react/defc Foo ...) (react/defc Bar {:render (fn [{:keys [state]}] [Foo {:parent-state state}])}) ``` Instead, do something like this: ```clojure (react/defc Foo ...) (react/defc Bar {:render (fn [{:keys [state]}] [Foo {:handle-some-action (fn [value] (swap! state ...))}])}) ``` ### React elements are not DOM nodes ```clojure (react/defc Foo {:render (fn [] [:div {} ;; ^ This is not a real <div> element. It is a vector that will be ;; turned into a React element by the function that calls `render` ;; on this component. (react/create-element :div {})])}) ;; ^ Likewise, this is not a real <div> either. This creates a ;; React element directly. ``` In non-React JavaScript, you can do things like: ```javascript var myDiv = document.createElement('div', ...); myDiv.focus(); ``` or ```javascript var myDiv = document.createElement('div', ...); SomeThirdPartyLibraryThatTakesADomNode(myDiv); ``` In situations where a method operates on a DOM node, React elements may not be substituted. You must use a `ref` ([see React's documentation](https://facebook.github.io/react/docs/refs-and-the-dom.html)) to obtain access to the DOM node once React has rendered it into the browser window. ### Set state → read state: It doesn't work the way you think it should [State updates are not immediate](https://facebook.github.io/react/docs/state-and-lifecycle.html#state-updates-may-be-asynchronous), meaning that `state` will not immediately contain a new value after you set it, but instead will have that value after the next re-render. For example: ```clojure (swap! state assoc :foo 17) (get @state :foo) ; <- :foo has yet to be changed to 17 here! ``` So, instead of immediately reading a value back from state: ```clojure (swap! state assoc :foo (bar ...)) (some-func (:foo @state)) ``` use the new value directly: ```clojure (let [new-value (bar ...)] (swap! state assoc :foo new-value) (some-func new-value)) ``` or wait until after the re-render: ```clojure (this :some-state-modifying-method) (after-update #(some-func (:some-key @state)))) ``` ### Avoid infinite loops Changing state causes a re-render. If you update state in a lifecycle method, this can lead to a loop: 1. state changes in `component-did-update` 2. state change starts re-render 3. re-render calls `component-did-update` 4. state changes in `component-did-update` 5. state change starts re-render 6. ... So: some lifecycle methods are automatically called every render. 
Avoid changing state inside of them. ## JavaScript and (S)CSS We adhere to Google's official style guide on [JS](https://google.github.io/styleguide/jsguide.html) & [CSS](https://google.github.io/styleguide/htmlcssguide.html), which dictate two-space indentation. We indent 4 spaces for html, because 2 spaces looks weird. ## Gotchas A list of any "gotchas" that have been found in development. These may be browser bugs (or "features"), or react issues. ### Browser - A "feature" in browsers is that any `button` inside of a `form` will cause the page to submit, even if you don't define an `on-click` or link attribute on it. Be careful when using buttons inside of forms because the behavior you define may not be the behavior you get. ### React - See above ## Tooling Notes When doing UI development, Chrome's caching gets in the way. We recommending disabling it when devtools is open (via devtools settings): ![disable cache image](https://cloud.githubusercontent.com/assets/1545444/21811560/1a1772c4-d71e-11e6-80bf-4ac3ce28e187.png)
37.85
303
0.735079
eng_Latn
0.989998
ff4839d72ca8d6938141af88cebf28cd047eac8a
84
md
Markdown
README.md
herki01/ajaxee
32f93b0b0c7b221e46278d67608b2253d570c4ff
[ "MIT" ]
1
2015-06-26T12:34:54.000Z
2015-06-26T12:34:54.000Z
README.md
herki01/ajaxee
32f93b0b0c7b221e46278d67608b2253d570c4ff
[ "MIT" ]
null
null
null
README.md
herki01/ajaxee
32f93b0b0c7b221e46278d67608b2253d570c4ff
[ "MIT" ]
null
null
null
# ajaxee jQuery plugin to make your browser AJAX code simple. http://www.ajaxee.org
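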
16.8
51
0.761905
kor_Hang
0.340537
ff484e3c7406ef94a987adaa3268fbdf2dd4acf6
2,001
md
Markdown
paths/git-trouble/fc-setup.md
Zari65/training-kit
1ea48c50cddbdfc6acd3c7a61f08e0e2cbcd6d90
[ "CC-BY-4.0" ]
null
null
null
paths/git-trouble/fc-setup.md
Zari65/training-kit
1ea48c50cddbdfc6acd3c7a61f08e0e2cbcd6d90
[ "CC-BY-4.0" ]
null
null
null
paths/git-trouble/fc-setup.md
Zari65/training-kit
1ea48c50cddbdfc6acd3c7a61f08e0e2cbcd6d90
[ "CC-BY-4.0" ]
1
2018-06-26T04:03:42.000Z
2018-06-26T04:03:42.000Z
--- layout: simple-class header: overlay_image: cover.jpeg overlay_filter: rgba(46, 129, 200, 0.6) title: Set Up Your Git Scenario Environment permalink: /git-trouble/git-set-up next-page: /git-trouble/git-scenarios facilitator: false sidebar: nav: "advanced" main-content: | This stuff is a lot more fun if you try it out. Let's create a sample repository to play with: 1. Create a repository on [GitHub.com](https://www.github.com) and `clone` it to your desktop. 1. Create a new branch, call it ```test```. 1. Create a series of commits that give you a rich history to practice the scenarios in this course. Feel free to use this handy script to generate them for you: - **Bash**: for d in {1..6}; do touch file$d.md; git add file$d.md; git commit -m "adding file $d"; done - **PowerShell:** for ($d=1; $d -le 6;$d++) { touch file$d.md; git add file$d.md; git commit -m "adding file$d.md"; } :metal: You're ready to rock! :guitar: ## Using `git log` If you type `git log --oneline`, your commit history should include several commits that look something like this: 5950a1b adding file 4 Those first 7 characters are going to be unique to your machine and are a section of the SHA-1 hash assigned to that specific commit (the SHA-1 hash is 40 characters long). We are going to use that hash identifier a lot as we learn how to 'git' out of sticky situations. ## New UI Addition When trying to get out of a pickle, the best tool for the job is typically dependent on if you `push`ed your commits to your remote (or not). Look :eyes: for these drop downs throughout the course: ![](/on-demand/images/push-dropdowns.png){: .align-center} They will help you find the right instructions for each situation. refresh: includes: - tell-me-why/create-repo.md - tell-me-why/clone-repo.md show-me-how: tell-me-why: ---
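For readers who would rather script the same setup from Python, a minimal equivalent of the Bash/PowerShell loops above might look like this; it assumes `git` is on the PATH and that it is run inside the freshly cloned repository (Python is not part of the course material and is used here only for illustration).

```python
# Illustrative equivalent of the Bash/PowerShell one-liners above.
# Run from inside the cloned repository; assumes `git` is on the PATH.
import subprocess
from pathlib import Path

for d in range(1, 7):
    name = f"file{d}.md"
    Path(name).touch()                                         # create the file
    subprocess.run(["git", "add", name], check=True)           # stage it
    subprocess.run(["git", "commit", "-m", f"adding file {d}"], check=True)
```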
35.105263
272
0.665667
eng_Latn
0.994696
ff4876468370081cf22e64ba22b00c9534fbbc81
1,219
md
Markdown
packages/web-vue/components/resize-box/README.en-US.md
chenc041/arco-design-vue
261ea5ed1476351ec310435f0852898fc2e86eb9
[ "MIT" ]
711
2021-10-29T18:02:14.000Z
2022-03-31T16:22:18.000Z
packages/web-vue/components/resize-box/README.en-US.md
chenc041/arco-design-vue
261ea5ed1476351ec310435f0852898fc2e86eb9
[ "MIT" ]
490
2021-10-30T01:44:20.000Z
2022-03-31T19:29:16.000Z
packages/web-vue/components/resize-box/README.en-US.md
chenc041/arco-design-vue
261ea5ed1476351ec310435f0852898fc2e86eb9
[ "MIT" ]
116
2021-10-30T02:33:53.000Z
2022-03-31T12:23:11.000Z
```yaml meta: type: Component category: Other title: ResizeBox description: Resizable box component. ``` *Auto-translated by Google.* @import ./__demo__/basic.md @import ./__demo__/controlled.md @import ./__demo__/layout.md @import ./__demo__/custom-triggers.md ## API ### `<resize-box>` Props |Attribute|Description|Type|Default| |---|---|---|:---:| |width **(v-model)**|Width|`number`|`-`| |height **(v-model)**|Height|`number`|`-`| |component|The HTML tag of the resize box|`string`|`'div'`| |directions|The sides that can be resized; top, bottom, left, and right are available|`('left' \| 'right' \| 'top' \| 'bottom')[]`|`['right']`| ### `<resize-box>` Events |Event Name|Description|Parameters| |---|---|---| |moving-start|Triggered when dragging starts|ev: `MouseEvent`| |moving|Triggered while dragging|size: `{ width: number; height: number; }`<br>ev: `MouseEvent`| |moving-end|Triggered when dragging ends|ev: `MouseEvent`| ### `<resize-box>` Slots |Slot Name|Description|Parameters| |---|---|---| |resize-trigger|Content of the resize handle|direction: `'left' \| 'right' \| 'top' \| 'bottom'`| |resize-trigger-icon|Icon of the resize handle|direction: `'left' \| 'right' \| 'top' \| 'bottom'`|
27.088889
139
0.660377
eng_Latn
0.45244
ff4891fa15bdedc10cea8c6843f60896c71652e1
179
md
Markdown
Source/Fuse.Scripting.JavaScript/JavaScriptCore/3rdparty/README.md
helilabs/fuselibs
a5052b2e60cd4e6984a72e68169951ec4ff9fa29
[ "MIT" ]
89
2017-05-08T04:46:01.000Z
2018-05-07T11:08:42.000Z
Source/Fuse.Scripting.JavaScript/JavaScriptCore/3rdparty/README.md
helilabs/fuselibs
a5052b2e60cd4e6984a72e68169951ec4ff9fa29
[ "MIT" ]
802
2017-05-10T11:00:57.000Z
2018-05-09T15:55:09.000Z
Source/Fuse.Scripting.JavaScript/JavaScriptCore/3rdparty/README.md
helilabs/fuselibs
a5052b2e60cd4e6984a72e68169951ec4ff9fa29
[ "MIT" ]
49
2017-05-04T14:02:34.000Z
2018-05-07T11:08:44.000Z
### Update JavaScriptCore headers Currently it can only be done on macOS by replacing `Headers/JavaScriptCore` with `/System/Library/Frameworks/JavaScriptCore.framework/Headers`.
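A scripted version of that replacement could look like the sketch below; the two paths come from the note above, but the script itself is only an illustration (macOS only, and it assumes the destination directory is writable), not a supported procedure from this README.

```python
# Illustrative sketch of the header replacement described above (macOS only).
import shutil
from pathlib import Path

src = Path("/System/Library/Frameworks/JavaScriptCore.framework/Headers")
dst = Path("Headers/JavaScriptCore")  # relative to this 3rdparty directory

if dst.exists():
    shutil.rmtree(dst)                # drop the old headers
shutil.copytree(src, dst)             # copy the system framework headers in
print(f"Copied {src} -> {dst}")
```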
59.666667
144
0.821229
eng_Latn
0.770682
ff48e6a0822def8dddd895b43d4369ae82194f58
1,030
md
Markdown
README.md
Megatron4537/star-wars-app
fdecc99322330d58d9624a5cd3c1cdf819c9f955
[ "MIT" ]
null
null
null
README.md
Megatron4537/star-wars-app
fdecc99322330d58d9624a5cd3c1cdf819c9f955
[ "MIT" ]
null
null
null
README.md
Megatron4537/star-wars-app
fdecc99322330d58d9624a5cd3c1cdf819c9f955
[ "MIT" ]
null
null
null
# star-wars-app React-Native app # iOS & Android apps to fetch and display a list of products for a business ## Requirements - [node.js 12+](https://nodejs.org/en/) - [iOS & Android environment setup](https://reactnative.dev/docs/environment-setup) - [yarn(v1)](https://classic.yarnpkg.com/lang/en/) - [Latest Xcode](https://developer.apple.com/xcode/) - [Cocoapods](https://cocoapods.org/) ## Minimum support This app targets iOS 11.0 and Android 5.0 (API 21) or newer ## Getting started ### Android 1. `yarn` 2. connect your Android device/simulator and run `yarn run android` 3. Then run `yarn start` and reload the application ### iOS 1. `yarn` 2. go into the ios directory and run `pod install` 3. connect your iOS device/simulator and run `yarn run ios` 4. Then run `yarn start` and reload the application ## Testing - To run tests run `yarn run test` - To get test coverage run `yarn test:coverage` - To view coverage report you can open **\_\_tests\_\_/coverage/lcov-report/index.html** in your browser
25.121951
104
0.716505
eng_Latn
0.823996
ff491f5a2e4b0d0dd48963b7978fa721fa344123
1,551
md
Markdown
README.md
huuzkee-foundation/gochimp
368fda445a9217c6a6242bad52d8cfcdae56dd91
[ "MIT" ]
null
null
null
README.md
huuzkee-foundation/gochimp
368fda445a9217c6a6242bad52d8cfcdae56dd91
[ "MIT" ]
null
null
null
README.md
huuzkee-foundation/gochimp
368fda445a9217c6a6242bad52d8cfcdae56dd91
[ "MIT" ]
null
null
null
gochimp ======= Go based API for Mailchimp, starting with Mandrill. https://godoc.org/github.com/mattbaird/gochimp to run tests, set a couple env variables: ```bash $ export MANDRILL_KEY=111111111-1111-1111-1111-111111111 $ export [email protected] ``` Mandrill Status =============== * API Feature complete on Oct 26/2012 * Adding tests, making naming conventions consistent, and refactoring error handling Chimp Status ============ * Not started Getting Started =============== Below is an example approach to rendering custom content into a Mandrill template called "welcome email" and sending the rendered email. ``` package main import ( "fmt" "github.com/mattbaird/gochimp" "os" ) func main() { apiKey := os.Getenv("MANDRILL_KEY") mandrillApi, err := gochimp.NewMandrill(apiKey) if err != nil { fmt.Println("Error instantiating client") } templateName := "welcome email" contentVar := gochimp.Var{"main", "<h1>Welcome aboard!</h1>"} content := []gochimp.Var{contentVar} renderedTemplate, err := mandrillApi.TemplateRender(templateName, content, nil) if err != nil { fmt.Println("Error rendering template") } recipients := []gochimp.Recipient{ gochimp.Recipient{Email: "[email protected]"}, } message := gochimp.Message{ Html: renderedTemplate, Subject: "Welcome aboard!", FromEmail: "[email protected]", FromName: "Boss Man", To: recipients, } _, err = mandrillApi.MessageSend(message, false) if err != nil { fmt.Println("Error sending message") } } ```
20.68
84
0.69697
eng_Latn
0.483597
ff499fc042ab147ca0e302c8386613fa744b3e30
628
md
Markdown
docs/enums/ArticulationDofLock.md
AIFanatic/Trident
c17be51f99ad9ae2843859e284f82428315ac140
[ "MIT" ]
null
null
null
docs/enums/ArticulationDofLock.md
AIFanatic/Trident
c17be51f99ad9ae2843859e284f82428315ac140
[ "MIT" ]
null
null
null
docs/enums/ArticulationDofLock.md
AIFanatic/Trident
c17be51f99ad9ae2843859e284f82428315ac140
[ "MIT" ]
null
null
null
[trident](../README.md) / [Exports](../modules.md) / ArticulationDofLock # Enumeration: ArticulationDofLock ## Table of contents ### Enumeration members - [FreeMotion](ArticulationDofLock.md#freemotion) - [LimitedMotion](ArticulationDofLock.md#limitedmotion) - [LockedMotion](ArticulationDofLock.md#lockedmotion) ## Enumeration members ### FreeMotion • **FreeMotion** = `2` #### Defined in enums/ArticulationDofLock.ts:4 ___ ### LimitedMotion • **LimitedMotion** = `1` #### Defined in enums/ArticulationDofLock.ts:3 ___ ### LockedMotion • **LockedMotion** = `0` #### Defined in enums/ArticulationDofLock.ts:2
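Purely as a compact summary of the documented values, the members can be mirrored in a Python `IntEnum` as below; the real enumeration lives in `enums/ArticulationDofLock.ts`, so this is an illustration of the numeric mapping rather than part of the library.

```python
# Illustration of the documented member values (the real enum is TypeScript).
from enum import IntEnum

class ArticulationDofLock(IntEnum):
    LockedMotion = 0
    LimitedMotion = 1
    FreeMotion = 2

print(ArticulationDofLock.FreeMotion)       # ArticulationDofLock.FreeMotion
print(int(ArticulationDofLock.FreeMotion))  # 2
```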
14.952381
72
0.718153
yue_Hant
0.816113
ff4a1ea83af5d7f5b2465007676ad38c35108510
1,583
md
Markdown
docs/ssms/f1-help/connect-to-microsoft-azure-storage.md
sporoy/sql-docs
44ae9e9aff12fdb20141628420710cd85afce7d4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssms/f1-help/connect-to-microsoft-azure-storage.md
sporoy/sql-docs
44ae9e9aff12fdb20141628420710cd85afce7d4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssms/f1-help/connect-to-microsoft-azure-storage.md
sporoy/sql-docs
44ae9e9aff12fdb20141628420710cd85afce7d4
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Connect to Microsoft Azure Storage description: "Connect to Microsoft Azure Storage" ms.prod: sql ms.prod_service: sql-tools ms.technology: ssms ms.topic: ui-reference f1_keywords: - "sql13.swb.windowsazurestorage.connect.f1" - "SQL13.SWB.WINDOWSAZURESTORAGE.CONNECT.F1" author: markingmyname ms.author: maghan ms.reviewer: "" ms.custom: seo-lt-2019 ms.date: 07/12/2017 --- # Connect to Microsoft Azure Storage [!INCLUDE[SQL Server Azure SQL Database Synapse Analytics PDW ](../../includes/applies-to-version/sql-asdb-asdbmi-asa-pdw.md)] Use the **Azure Storage Connection** dialog to specify a storage account and validate your connection to Azure. ## Options Specify the following information about your Azure account, and then select **Next** to continue. 1. **Storage Account** - Specify the storage account name. >[!NOTE] > You can only connect to [General-purpose Storage Accounts](/azure/storage/common/storage-introduction#azure-storage-services). Connecting to other types of storage accounts can result in an error similar to the following: > > The value for one of the HTTP headers is not in the correct format. (Microsoft.SqlServer.StorageClient). > > The remote server returned an error: (400) Bad Request. (System) 2. **Account Key** - Specify the account key for the specified storage account. 3. **Use secure endpoints (HTTPS)** - This option utilizes encrypted communication and secure identification of a network web server. 4. **Save account key** - This option saves your password in an encrypted file.
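Outside the dialog, the same storage-account-name/account-key pair can be sanity-checked programmatically. The snippet below is an illustrative sketch using the `azure-storage-blob` Python SDK, which is not covered by this article; the placeholders must be replaced and the package is assumed to be installed.

```python
# Illustrative connectivity check for a general-purpose storage account.
# Assumes `pip install azure-storage-blob`; placeholders must be replaced.
from azure.storage.blob import BlobServiceClient

account_name = "<storage-account>"   # placeholder
account_key = "<account-key>"        # placeholder

client = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",  # secure (HTTPS) endpoint
    credential=account_key,
)

# Listing containers fails fast if the account name, key, or endpoint is wrong.
for container in client.list_containers():
    print(container.name)
```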
37.690476
226
0.749842
eng_Latn
0.954288
ff4a3a180b409178d287f3dc614012bbf3cc4a4d
725
md
Markdown
content/flux/v0.x/stdlib/math/atan.md
Hipska/docs-v2
aa35a41094c18f6d93c10795a2e1a4da7661543c
[ "MIT" ]
42
2019-10-14T18:38:17.000Z
2022-03-29T15:34:49.000Z
content/flux/v0.x/stdlib/math/atan.md
Hipska/docs-v2
aa35a41094c18f6d93c10795a2e1a4da7661543c
[ "MIT" ]
1,870
2019-10-14T17:03:50.000Z
2022-03-30T22:23:24.000Z
content/flux/v0.x/stdlib/math/atan.md
Hipska/docs-v2
aa35a41094c18f6d93c10795a2e1a4da7661543c
[ "MIT" ]
181
2019-11-08T19:40:05.000Z
2022-03-25T10:01:02.000Z
--- title: math.atan() function description: The math.atan() function returns the arctangent of `x` in radians. aliases: - /influxdb/v2.0/reference/flux/functions/math/atan/ - /influxdb/v2.0/reference/flux/stdlib/math/atan/ - /influxdb/cloud/reference/flux/stdlib/math/atan/ menu: flux_0_x_ref: name: math.atan parent: math weight: 301 introduced: 0.22.0 --- The `math.atan()` function returns the arctangent of `x` in radians. _**Output data type:** Float_ ```js import "math" math.atan(x: 3.14) // Returns 1.262480664599468 ``` ## Parameters ### x {data-type="float"} The value used in the operation. ## Special cases ```js math.atan(x: ±0) // Returns ±0 math.atan(x: ±Inf) // Returns ±Pi/2 ```
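The special cases above match standard IEEE-754 arctangent behavior, which can be cross-checked in any language with double-precision floats. The sketch below uses Python's `math` module purely as an illustration (it is not Flux) and reproduces the documented values.

```python
# Illustration of the documented arctangent behavior using Python's math module.
import math

print(math.atan(3.14))           # ~1.262480664599468, matching the example above
print(math.atan(0.0))            # 0.0
print(math.atan(-0.0))           # -0.0
print(math.atan(float("inf")))   # pi/2 ~= 1.5707963267948966
print(math.atan(float("-inf")))  # -pi/2
print(math.pi / 2)               # for comparison
```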
19.078947
79
0.68
eng_Latn
0.49075
ff4a445d534db1879be5325bab23658797cd6ff9
1,239
md
Markdown
content/_en/landing/index.md
18F/identity-site
3b905bfe1f7d7862016da0b39f7309aaf8eadc47
[ "CC0-1.0" ]
26
2017-05-12T19:28:57.000Z
2022-03-10T03:55:34.000Z
content/_en/landing/index.md
konklone/identity-site
36ecfb249eb7817e82b2f10a612051ed8ea8738c
[ "CC0-1.0" ]
222
2017-05-02T22:31:15.000Z
2022-03-24T20:32:51.000Z
content/_en/landing/index.md
konklone/identity-site
36ecfb249eb7817e82b2f10a612051ed8ea8738c
[ "CC0-1.0" ]
20
2017-07-14T01:15:08.000Z
2022-02-02T03:24:50.000Z
--- layout: landing permalink: / redirect_from: - /playbook/ - /playbook/implementation/ - /playbook/principles/ one_account_banner: true title: The public’s one account for government. description: Use one account and password for secure, private access to participating government agencies. class: why-login-gov three_col: heading: Login.gov is for you subheading1: Individuals col1: >- Use one account for secure, private access to participating government agencies. [Learn about Login.gov](https://login.gov/what-is-login/){:class="why-more-info"} subheading2: Agency partners col2: >- Protect your users’ information with the highest standards of digital security and user experience. Login.gov handles software development, security operations, and customer support so you don’t have to. [Become a partner](https://partners.login.gov/){:class="why-more-info"} subheading3: Agency developers col3: >- Developer resources, real-time support and modern tools to help you implement and deploy your application with Login.gov [See developer guide](https://developers.login.gov/){:class="why-more-info"} twitter_card: large image: /assets/img/login-gov-600x314.png ---
30.975
85
0.741727
eng_Latn
0.945864
ff4a8f96efcf5cb9215ba1fd6c6830aef385c849
2,317
md
Markdown
imprint.md
bschwarzl/bschwarzl.github.io
db3dd781fdd71ba619d4d89084cdfe2e52d8aeb3
[ "MIT" ]
null
null
null
imprint.md
bschwarzl/bschwarzl.github.io
db3dd781fdd71ba619d4d89084cdfe2e52d8aeb3
[ "MIT" ]
null
null
null
imprint.md
bschwarzl/bschwarzl.github.io
db3dd781fdd71ba619d4d89084cdfe2e52d8aeb3
[ "MIT" ]
null
null
null
--- layout: page title: Impressum permalink: /impressum/ image: /images/Paragraph.JPEG --- ### Impressum Angaben gemäß § 24 Abs. 1 MedienG: <br> Mag. Barbara Schwarzl <br> Himmelreichweg 12 / 8044 Graz / Österreich <br> Tel.: 0043/ 316/ 573154 / Mail: [email protected] <br> Die Adresse der Website lautet www.barbaraschwarzl.com <br> UID-Nummer: ATU 67530433 <br> Gerichtsstand: Landesgericht Graz <br> Berufsbezeichnung: Vertretungsberechtigte Apothekerin; freischaffende Autorin <br> Mitglied der Österreichischen Apothekerkammer ### Rechtsbelehrung, Urheberrecht, Haftungshinweis Die Inhalte (Texte, Fotos, Grafiken) der Webseite www.barbaraschwarzl.at unterliegen dem österreichischen Urheberrecht (UrhG). Die Vervielfältigung, Bearbeitung, Verbreitung und jede Art der Verwertung außerhalb der Grenzen des Urheberrechtes bedürfen der schriftlichen Zustimmung von Mag. Barbara Schwarzl. Downloads und Kopien dieser Seite sind nur für den privaten, nicht kommerziellen Gebrauch gestattet. Die vorliegende Website dient der Information rund um die Autorentätigkeit von Mag. Barbara Schwarzl. Sofern die Inhalte auf dieser Seite nicht von der Autorin selbst erstellt wurden, werden die Urheberrechte Dritter beachtet. Insbesondere werden Inhalte Dritter als solche gekennzeichnet. Sollten Sie trotzdem auf eine Urheberrechtsverletzung aufmerksam werden, bittet die Autorin um einen entsprechenden Hinweis. Bei Bekanntwerden von Rechtsverletzungen werden derartige Inhalte umgehend entfernt. ### Haftung für Links Links zu externen Websites Dritter, entziehen sich dem Verantwortungsbereich der Autorin, weshalb sie für diese fremden Inhalte auch keine Gewähr übernimmt. Für die Inhalte der verlinkten Seiten ist der jeweilige Anbieter oder Betreiber der Seiten verantwortlich. Rechtsverstöße der verlinkten Seiten waren zum Zeitpunkt der Verlinkung nicht erkennbar. <br> Eine permanente inhaltliche Kontrolle der verlinkten Seiten ist ohne konkrete Anhaltspunkte einer Rechtsverletzung nicht zumutbar. Bei Bekanntwerden von Rechtsverletzungen werden derartige Links natürlich entfernt. Der (künftige) Blog behandelt Themen rund um die Autorentätigkeit von Mag. Barbara Schwarzl. Dieses Impressum gilt auch für die von Mag. Barbara Schwarzl betriebenen Seiten auf Facebook, Instagram und Twitter.
79.896552
572
0.825637
deu_Latn
0.99892
ff4b7c27ded449a3aef014bedb353c3733cc54b9
6,237
md
Markdown
articles/iot-hub/iot-hub-device-management-iot-toolkit.md
sergibarca/azure-docs.es-es
dabecf2b983b0b41215571b8939077861f0c2667
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/iot-hub/iot-hub-device-management-iot-toolkit.md
sergibarca/azure-docs.es-es
dabecf2b983b0b41215571b8939077861f0c2667
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/iot-hub/iot-hub-device-management-iot-toolkit.md
sergibarca/azure-docs.es-es
dabecf2b983b0b41215571b8939077861f0c2667
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Administración de dispositivos de Azure IoT con Azure IoT Tools en VS Code description: Use Azure IoT Tools para Visual Studio Code para la administración de dispositivos de Azure IoT Hub, incluye métodos directos y opciones de administración de las propiedades deseadas de los dispositivos gemelos. author: formulahendry ms.service: iot-hub services: iot-hub ms.topic: conceptual ms.date: 01/04/2019 ms.author: junhan ms.openlocfilehash: 9d4d82472664900c96b77b31740573d0463465b8 ms.sourcegitcommit: f9601bbccddfccddb6f577d6febf7b2b12988911 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 01/12/2020 ms.locfileid: "75911905" --- # <a name="use-azure-iot-tools-for-visual-studio-code-for-azure-iot-hub-device-management"></a>Uso de Azure IoT Tools para Visual Studio Code para la administración de dispositivos de Azure IoT Hub ![Diagrama integral](media/iot-hub-get-started-e2e-diagram/2.png) [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) es una útil extensión de Visual Studio Code que facilita la administración de IoT Hub y el desarrollo de aplicaciones de IoT. Incluye opciones de administración que puede usar para realizar varias tareas. [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)] | Opción de administración | Tarea | |----------------------------|--------------------------------| | Métodos directos | Hacer que un dispositivo actúe, por ejemplo, para iniciar o detener el envío de mensajes o reiniciar el dispositivo. | | Leer dispositivo gemelo | Obtener el estado notificado de un dispositivo. Por ejemplo, el dispositivo informa de que el LED está parpadeando. | | Actualizar dispositivo gemelo | Poner un dispositivo en determinados estados, como establecer un indicador LED en verde o establecer el intervalo de envío de telemetría en 30 minutos. | | Mensajes de nube a dispositivo | Enviar notificaciones a un dispositivo. Por ejemplo, "Es muy probable que llueva hoy. No olvide traerse un paraguas". | Para obtener una explicación más detallada acerca de las diferencias y orientación sobre el uso de estas opciones, consulte la [Guía de comunicación de dispositivo a nube](iot-hub-devguide-d2c-guidance.md) y la [Guía de comunicación de nube a dispositivo](iot-hub-devguide-c2d-guidance.md). Los dispositivos gemelos son documentos JSON que almacenan información sobre el estado de los dispositivos (metadatos, configuraciones y condiciones). IoT Hub conserva un dispositivo gemelo por cada dispositivo que se conecta a él. Para más información acerca de los dispositivos gemelos, consulte [Introducción a los dispositivos gemelos](iot-hub-node-node-twin-getstarted.md). [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## <a name="what-you-learn"></a>Conocimientos que adquirirá Aprenderá a usar Azure IoT Tools para Visual Studio Code con distintas opciones de administración en la máquina de desarrollo. ## <a name="what-you-do"></a>Qué debe hacer Ejecute Azure IoT Tools para Visual Studio Code con diversas opciones de administración. ## <a name="what-you-need"></a>Lo que necesita * Una suscripción de Azure activa. * Un centro de Azure IoT en su suscripción. * [Visual Studio Code](https://code.visualstudio.com/) * [Azure IoT Tools para VS Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) o [abra este vínculo en Visual Studio Code](vscode:extension/vsciot-vscode.azure-iot-tools). ## <a name="sign-in-to-access-your-iot-hub"></a>Iniciar sesión para acceder a IoT Hub 1. 
En la vista **Explorador** de VS Code, expanda la sección **Azure IoT Hub Devices** (Dispositivos de Azure IoT Hub) en la esquina inferior izquierda. 2. Haga clic en **Select IoT Hub** (Seleccionar IoT Hub) en el menú contextual. 3. Se mostrará una ventana emergente en la esquina inferior derecha que le permite iniciar sesión en Azure por primera vez. 4. Después de iniciar sesión, se mostrará la lista de suscripciones de Azure y luego podrá seleccionar la suscripción de Azure e IoT Hub. 5. En unos segundos, aparecerá la lista de dispositivos en la pestaña **Azure IoT Hub Devices** (Dispositivos de Azure IoT Hub). > [!Note] > También puede completar la configuración seleccionando **Set IoT Hub Connection String** (Establecer cadena de conexión de IoT Hub). Escriba la cadena de conexión de la directiva **iothubowner** del centro de IoT al que se conecta el dispositivo IoT en la ventana emergente. ## <a name="direct-methods"></a>Métodos directos 1. Haga clic con el botón derecho en el dispositivo y seleccione **Invoke Direct Method** (Invocar método directo). 2. Escriba el nombre del método y la carga en el cuadro de entrada. 3. Se mostrarán los resultados en la vista **SALIDA** > **Azure IoT Hub**. ## <a name="read-device-twin"></a>Leer dispositivo gemelo 1. Haga clic con el botón derecho en el dispositivo y seleccione **Editar dispositivo gemelo**. 2. Se abrirá un archivo **azure-iot-device-twin.json** con el contenido del dispositivo gemelo. ## <a name="update-device-twin"></a>Actualizar dispositivo gemelo 1. Realice algunas modificaciones de **etiquetas** o en el campo **properties.desired**. 2. Haga clic con el botón derecho en el archivo **azure-iot-device-twin.json**. 3. Seleccione **Update Device Twin** (Actualizar dispositivo gemelo) para actualizar el dispositivo gemelo. ## <a name="send-cloud-to-device-messages"></a>Envío de mensajes de nube a dispositivo Para enviar un mensaje desde el IoT Hub al dispositivo, siga estos pasos: 1. Haga clic con el botón derecho en el dispositivo y seleccione **Send C2D Message to Device** (Enviar mensaje de C2D al dispositivo). 2. Escriba el mensaje en el cuadro de entrada. 3. Se mostrarán los resultados en la vista **SALIDA** > **Azure IoT Hub**. ## <a name="next-steps"></a>Pasos siguientes Ha aprendido a usar la extensión Azure IoT Tools para Visual Studio Code con diversas opciones de administración. [!INCLUDE [iot-hub-get-started-next-steps](../../includes/iot-hub-get-started-next-steps.md)]
59.4
378
0.747795
spa_Latn
0.959078
ff4bf7edfd6ee1972a01ca0e014741e61613a3a3
3,229
md
Markdown
essays/Nice-and-Clean.md
joelsikkink/joelsikkink.github.io
ae073b1d0cf42a042c01aedd5355ab1ca4d8f3b7
[ "MIT" ]
null
null
null
essays/Nice-and-Clean.md
joelsikkink/joelsikkink.github.io
ae073b1d0cf42a042c01aedd5355ab1ca4d8f3b7
[ "MIT" ]
null
null
null
essays/Nice-and-Clean.md
joelsikkink/joelsikkink.github.io
ae073b1d0cf42a042c01aedd5355ab1ca4d8f3b7
[ "MIT" ]
null
null
null
---
layout: essay
type: essay
title: Nice and Clean
# All dates must be YYYY-MM-DD format!
date: 2018-09-20
labels:
  - Software Engineering
  - Javascript
  - Coding Standards
---

## The Start of it All

When I began to learn coding back in high school I started out by learning Python. The class I took had a section on coding and we were taught Python since that was what the curriculum saw as the easiest language for beginner students to pick up. My files were not clean at all and I didn't bother much with making the code readable since, in my mind at the time, the whole point was to just get the assignment done and we weren't graded on style anyways. The same thing sort of occurred in my AP computer science classes during my senior year; however, the teachers were a bit more strict and, generally speaking, Java requires more organization. I had this habit where I would put the curly braces on the next line after a function or main and not use much tab spacing for anything. Towards the end of the year I cleaned up my act a little but those bad habits still persisted. I feel bad for the poor soul who had to grade my AP exam.

## Self Correction

Fast forward to ICS 111 and the coding style is required to be a little different. The braces have to be on the same line and tabs are expected. Okay, that wasn't too hard for me to pick up, and Eclipse caught more errors than any of the high school level IDEs did. My code was beginning to resemble actual organized code. I moved on to 211 and suddenly hit a brick wall. We were expected to use a coding style plugin in Eclipse, and boy did that tank my grade for the first few assignments. I had initially thought I had installed this file right and everything was coming up Milhouse. I got some of my grades back and suddenly had a 6 or a 5 on an assignment because the code was either too difficult to read and/or I had really goofed up the syntax. Sprinkle in a few logic errors here and there in my code and you end up with the grades I received. At that point I realized either I hadn't installed it right or I didn't install it at all. My code and coding style quickly shaped up, as did my grades, once I corrected my mistakes.

## Current Year Confusion

This semester I'm taking ICS 314 and ICS 212. Oddly enough, both of these classes have very similar coding standards but with minute changes. ICS 314 has the same style as 211 did but just uses a different checking system, while 212 has strange quirks. One of the things that vexed me about 212 is that the professor wanted the curly braces on new lines, much like I used to do back in high school. That is what I see as the biggest problem with coding styles or standards. There really needs to be one golden and final standard so that everyone can understand the code and not have to worry about one more thing, especially in an educational environment like college. While I don't think coding standards directly help you learn a language, I think that if there were an agreed-upon standard throughout the campus and ICS courses, the coding standard would become second nature and students could just focus on learning the material of the course.
134.541667
1,059
0.78817
eng_Latn
0.999983
ff4c8e26650cd67aa68044695b03661e5f70e59f
377
md
Markdown
SM/Laborator/lab05/README.md
mihai-constantin/ACS
098c99d82dad8fb5d0e909da930c72f1185a99e2
[ "Apache-2.0" ]
null
null
null
SM/Laborator/lab05/README.md
mihai-constantin/ACS
098c99d82dad8fb5d0e909da930c72f1185a99e2
[ "Apache-2.0" ]
null
null
null
SM/Laborator/lab05/README.md
mihai-constantin/ACS
098c99d82dad8fb5d0e909da930c72f1185a99e2
[ "Apache-2.0" ]
1
2021-10-17T14:43:56.000Z
2021-10-17T14:43:56.000Z
# Laborator 5 - Reteaua Omega

* Se citeste numarul de biti pentru reteaua omega, respectiv inputul si outputul la care vrem sa ajungem in urma parcurgerii retelei.
* Se foloseste functia XOR intre input si output. Rezultatul obtinut ne va furniza directia de deplasare in shuffle.
* Un bit de 1 este o legatura inversa, in timp ce un bit de 0 este o legatura directa.
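To make the XOR routing rule above concrete, here is a small illustrative sketch; it is not part of the original lab handout, and it assumes the control bits are consumed most-significant-bit first, one bit per switching stage:

```typescript
// Sketch of the XOR routing rule described above (assumption: one control bit
// per stage, consumed MSB first). A 1 bit selects the "exchange" (crossed)
// setting, a 0 bit selects the "straight" (direct) setting.
function omegaRouting(bits: number, input: number, output: number): string[] {
  const control = input ^ output;            // XOR of source and destination
  const settings: string[] = [];
  for (let stage = bits - 1; stage >= 0; stage--) {
    settings.push(((control >> stage) & 1) === 1 ? "exchange" : "straight");
  }
  return settings;
}

// Example: 3-bit network, routing 0b011 to 0b101 (control bits 110).
console.log(omegaRouting(3, 0b011, 0b101));  // [ "exchange", "exchange", "straight" ]
```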
62.833333
133
0.769231
ron_Latn
0.999488
ff4d6b45c87681377436b9257a3f0a88e3a43656
5,968
md
Markdown
bower_components/xtal-fetch/README.md
bahrus/xtal
4df5d3a003949b595df1d4da9e8fbcc06a1ff8f9
[ "MIT" ]
null
null
null
bower_components/xtal-fetch/README.md
bahrus/xtal
4df5d3a003949b595df1d4da9e8fbcc06a1ff8f9
[ "MIT" ]
null
null
null
bower_components/xtal-fetch/README.md
bahrus/xtal
4df5d3a003949b595df1d4da9e8fbcc06a1ff8f9
[ "MIT" ]
null
null
null
# \<xtal-fetch\> ## Single Requests \<xtal-fetch\> is a Polymer based web component wrapper around the fetch api. It is inspired by Polymer's \<iron-ajax\> component. But this component has no legacy Polymer 1.0 dependencies, is a thin transparent wrapper around the native fetch api, and supports some alternative functionality not supported by *iron-ajax*. All the evergreen browsers support fetch. For IE11, a polyfill should be used. An example of such a polyfill can be found [here](https://github.com/bahrus/xtal-fetch/blob/master/IE11-polyfill.js). This was extracted from the [Financial Times Polyfill service](https://github.com/Financial-Times/polyfill-service). It contains additional polyfills recommended for supporting most ES6 features. To make a fetch request, you need to add the fetch attribute, and specify an href value: ```html <xtal-fetch fetch href="https://myDomain/myPath/mySubpath"></xtal-fetch> ``` It may seem somewhat redundant to need to add the fetch attribute (being that the component is called "xtal-fetch"). However, this attribute / property serves a useful purpose: It can block requests until a sanity check is satisfied, such as the requirement of a binding parameter: ```html <xtal-fetch fetch="[[myBinding]]" href="https://myDomain/myPath/[[myBinding]]"></xtal-fetch> ``` This will prevent a (typically cancelled) request from going through, until the binding needed for the href is available. Debouncing is also supported to help avoid duplicate calls due to complex bindings. For more complex sanity checks / validation logic, the fetch property could, of course, refer to a computed property coming from the hosting [Polymer?] component (if applicable). One can specify whether the result should be parsed as JSON, or left as text, using the "as" attribute: ```html <xtal-fetch fetch href="https://myDomain/myPath/mySubpath" as="json"></xtal-fetch> ``` Possible values for as are "json" and "text." The results of the fetch can be inserted inside the <xtal-fetch> tag, becoming a glorified client-side "include": ```html <xtal-fetch fetch href="https://myDomain/myPath/mySubpath" as="json" insert-results></xtal-fetch> ``` Note, though, that if using a relative path for href, it will be relative to the url of the hosting page, not the url of the component definition. But more typically, you will want to "post the results of the fetch to a place available to its peers (other nodes inside the containing web component)". The last phrase is in quotes, because that isn't precisely what happens when one examines the nitty gritty details, but this is the effect we essentially want to have. If the containing component is also a Polymer component, then this can be done by specifying a two-way binding path, and no boilerplate code is required in order to achieve the desired effect: ```html <xtal-fetch fetch href="generated.json" as="json" result="{{people}}"></xtl-fetch> <template is="dom-repeat" items="[[people]]"> Name: [[item.name]] <br> Email: [[item.email]] <br> <hr> </template> ``` Other non Polymer component containers will need to add event handlers to listen for change events, in order to achieve similar results (or leverage Polymer's PropertyEffects mixin). It is often mistakenly assumed that the "fetch" api only supports get, not post. This is in fact **not** the case. The second parameter of the fetch function is often referred to as the reqInit parameter, and it can specify a method of "post", request headers, and the body of a post request, and much more. 
This component simply passes the reqInit property into the api, unaltered: ```html <xtal-fetch fetch href="api/persistService/id/[[id]]" as="json" result="{{people}}" reqInit="[[myRequestConfig]]"></xatl-fetch> ``` myRequestConfig could be initialized inside the containing component, or come from another component that posts to "myRequestConfig". In order to avoid doing a premature fetch, before the reqInit binding takes place, one can specify the attribute: reqInitRequired: ```html <xtal-fetch fetch href="api/persistService/id/[[id]]" as="json" result="{{people}}" req-init="[[myRequestConfig]]" req-init-required></xatl-fetch> ``` Although this could be done with boilerplate code using the fetch property, it is such a common need that this additional attribute is added for this specific purpose. ## Multiple requests \<xtal-fetch\> allows for spawning multiple fetch requests tied to an array of entities. This is often useful when drilling down from some parent entity ('customer', e.g.) to multiple 1-n relations ('purchases', e.g.) The syntax for this is meant to be readable: ```hrml <xtal-fetch fetch href="api/customer/[[id]]/purchase/:id" for-each="id" in-entities="[[purchases]]" as="json" set-path="purchase_detail" on-fetch-complete="refreshDetail"></xtal-fetch> ``` *set-path* specifies the property name in each entity, used to store the fetched entity detail (json or text specified by "as" just like before). Note that #xtal-fetch# issues a "fetch-complete" event after every fetch is completed. One can enable caching of the same href value using the cache-results attribute. In the future, this will also consider the req-init property as well in determining whether a fresh request should be made. Like the Polymer iron-ajax inspiration, the *debounce-duration* attribute specifies how much to wait for the request to "settle down" before proceeding. ## Install the Polymer-CLI First, make sure you have the [Polymer CLI](https://www.npmjs.com/package/polymer-cli) installed. Then run `polymer serve` to serve your element locally. ## Viewing Your Element ``` $ polymer serve ``` ## Running Tests ``` $ polymer test ``` Your application is already set up to be tested via [web-component-tester](https://github.com/Polymer/web-component-tester). Run `polymer test` to run your application's test suite locally.
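For readers wondering what the `reqInit` property discussed above actually looks like: it is just the standard `RequestInit` value accepted by `fetch()`, since the component forwards it unaltered. A minimal hypothetical example (the payload fields are made up for illustration):

```typescript
// Hypothetical reqInit for a POST; xtal-fetch passes it to fetch() as-is.
const myRequestConfig: RequestInit = {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ id: 42, name: "example" }), // made-up payload
};
```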
53.285714
518
0.754524
eng_Latn
0.997062
ff4d9aa7b9a51028ee9ad1c542c363a8a17e279f
9,385
md
Markdown
docs/docs/operate/autoscalingstrategies.md
boost-entropy-repos-org/mantis
c6ebe5b1142979fffdec24e29c8bff5ced9e8443
[ "Apache-2.0" ]
1,233
2019-06-25T02:19:34.000Z
2022-03-22T15:41:01.000Z
docs/docs/operate/autoscalingstrategies.md
boost-entropy-repos-org/mantis
c6ebe5b1142979fffdec24e29c8bff5ced9e8443
[ "Apache-2.0" ]
31
2019-10-28T17:02:47.000Z
2022-03-29T00:19:54.000Z
docs/docs/operate/autoscalingstrategies.md
boost-entropy-repos-org/mantis
c6ebe5b1142979fffdec24e29c8bff5ced9e8443
[ "Apache-2.0" ]
122
2019-06-25T21:26:23.000Z
2022-03-27T09:50:19.000Z
There are various strategies for autoscaling. These strategies affect when an autoscale activity will occur and also how many workers to scale up/down a stage. They fall into 2 main categories. Rule based strategies monitor a specific resource and scale up/down when a certain threshold is reached. PID control based strategies pick a resource utilization level and scale up/down dynamically to maintain that level. ## Rule Based Strategy Rule based strategy can be defined for the following resources: | Resource | Metric | | --------------- | ------- | | `CPU` | group: `ResourceUsage` name: `cpuPctUsageCurr` aggregation: `AVG` | | `Memory` | group: `ResourceUsage` name: `totMemUsageCurr` aggregation: `AVG` | | `Network` | group: `ResourceUsage` name: `nwBytesUsageCurr` aggregation: `AVG` | | `JVMMemory` | group: `ResourceUsage` name: `jvmMemoryUsedBytes` aggregation: `AVG` | | `DataDrop` | group: `DataDrop` name: `dropCount` aggregation: `AVG` | | `KafkaLag` | group: `consumer-fetch-manager-metrics` name: `records-lag-max` aggregation: `MAX` | | `KafkaProcessed` | group: `consumer-fetch-manager-metrics` name: `records-consumed-rate` aggregation: `AVG` | | `UserDefined` | Metric is defined by user with job parameter `mantis.jobmaster.autoscale.metric` in this format `{group}::{name}::{aggregation}`. | Each strategy has the following parameters: | Name | Description | | --------------- | ------------ | | `Scale down below percentage` | When the aggregated value for all workers falls below this value, the stage will scale down. It will scale down by the decrement value specified in the policy. For data drop, this is calculated as the number of data items dropped divided by the total number of data items, dropped+processed. For CPU, Memory, etc., it is calculated as a percentage of allocated resource when you defined the worker. | | `Scale up above percentage` | When the aggregated value for all workers rises above this value, the stage will scale up. | | `Rolling count` | This value helps to keep jitter out of the autoscaling process. Instead of scaling immediately the first time values fall outside of the scale-down and scale-up percentage thresholds you define, Mantis will wait until the thresholds are exceeded a certain number of times within a certain window. For example, a rolling count of “6 of 10” means that only if in ten consecutive observations six or more of the observations fall below the scale-down threshold will the stage be scaled down. | It is possible to employ multiple rule based strategies for a stage. In this case, as soon as 1 strategy triggers a scaling action, the cooldown will prevent subsequent strategies from scaling for that duration. !!! note Ideally, there should be zero data drop, so there isn’t an elegant way to express “scale down below percentage” for data drop. Specifying “0%” as the “scale down below percentage” effectively means the data drop percentage never trigger a scale down. For this reason, it is best to use the data drop strategy in conjunction with another strategy that provides the scale-down trigger. ## PID Control Based Strategy PID control system uses a continuous feedback loop to maintain a signal at a target level (set point). Mantis offers variations of this strategy that operates on different signals. Additionally, they try to learn the appropriate target over time without the need for user input. The PID controller computes the magnitude of scale up/down based on the drift between the observed signal and the target. 
Thus, this strategy can react quicker to big changes compared to rule based strategies, since rule based strategies use fixed step size. Cooldown still applies between scaling activities. ### Clutch The strategy operates on CPU, Memory, Network and UserDefined. Every 24 hours, it will pick 1 dominant resource and use the P99 value as the target set point. For the next 24 hours, it will monitor that resource metric and scale the stage to keep the metric close to the target set point. In the initial 24 hours after the job is first launched, this strategy will scale the stage to max in order to learn the first dominant resource and set point. This also happens if the job is restarted. ### Clutch with User Defined Configs With this strategy, the user defines the target for each resource without relying on the system to learn it. There is no need for an initial 24 hour pin high period, the PID controller can start working right away. You can supply the configuration as JSON in the job parameter `mantis.jobmaster.clutch.config`. Example: ```json { "minSize": 3, "maxSize": 25, "cooldownSeconds": 300, "rps": 8000, "cpu": { "setPoint": 60.0, "rope": [25.0, 0.0], "kp": 0.01, "kd": 0.01 }, "memory": { "setPoint": 100.0, "rope": [0.0, 0.0], "kp": 0.01, "kd": 0.01 }, "network": { "setPoint": 60.0, "rope": [25.0, 0.0], "kp": 0.01, "kd": 0.01 } } ``` | Field | Description | | --------------- | ------------ | | `minSize` | Minimum number of workers in the stage. It will not scale down below this number. | | `maxSize` | Maximum number of workers in the stage. It will not scale up above this number. | | `cooldownSeconds` | This indicates how many seconds to wait after a scaling operation has been completed before beginning another scaling operation. | | `maxAdjustment` | Optional. The maximum number of workers to scale up/down in a single operation. | | `rps` | Expected RPS per worker. Must be > 0. | | `cpu`, `memory`, `network` | Configure PID controller for each resource. | | `setPoint` | Target set point for the resource. This is expressed as a percentage of allocated resource to the worker. For example, `60.0` on `network` means network bytes should be 60% of the network limit on machine definition. | | `rope` | Lower and upper buffer around the set point. Metric values within this buffer are assumed to be at set point, and thus contributes an error of 0 to the PID controller. | | `kp` | Multiplier for the proportional term of the PID controller. This will affect the size of scaling actions. | | `kd` | Multiplier for the derivative term of the PID controller. This will affect the size of scaling actions. | ### Clutch RPS This strategy scales the stage base on number of events processed. The target set point is a percentile of RPS. The signal is the sum of RPS, inbound drops, Kafka lag, and outbound drops from source jobs. Therefore, it effectively tries to keep drops and lag at 0. It takes the first 10 minutes after job launch to learn the first RPS set point. This also applies if the job is restarted, the set point does not carry over. Afterwards, it may adjust the set point once every hour. Set point should become stable the longer a job runs, since it simply takes a percentile of historical RPS metric. The source job drop metric is not enabled by default. It is only applicable if your job connects to an upstream job as input. You can enable this metric by setting the job parameter `mantis.jobmaster.autoscale.sourcejob.metric.enabled` to true. 
Further, you need to specify the source job targets in the job parameter `mantis.jobmaster.autoscale.sourcejob.target`. You can omit this if your job already has a `target` parameter for connecting to source jobs, the auto scaler will pick that up automatically. Example: ```json { "targets": [ { "sourceJobName": "ConsolidatedLoggingEventKafkaSource" } ] } ``` Optionally, it is possible to further customize the behavior of the PID controller. You can supply the configuration as JSON in the job parameter `mantis.jobmaster.clutch.config`. Example: ```json { "rpsConfig": { "setPointPercentile": 50.0, "rope": [30.0, 0.0], "scaleDownBelowPct": 40.0, "scaleUpAbovePct": 10.0, "scaleDownMultiplier": 0.5, "scaleDownMultiplier": 3.0 } } ``` | Field | Description | | -------------------- | ------------ | | `setPointPercentile` | Percentile of historical RPS metric to use as the set point. Valid input is between `[1.0, 100.0]` Default is `75.0`. | | `rope` | Lower and upper buffer around the set point. The value is interpreted as percentage of set point. For example, `[30.0, 30.0]` means values within 30% of set point is considered to have 0 error. Valid input is between `[0.0, 100.0]` Default is `[30.0, 0.0]`. | | `scaleDownBelowPct` | Only scale down if the PID controller output is below this number. It can be used to delay a scaling action. Valid input is between `[0.0, 100.0]`. Default is `0.0`. | | `scaleUpAbovePct` | Only scale up if the PID controller output is above this number. It can be used to delay a scaling action. Valid input is between `[0.0, 100.0]`. Default is `0.0`. | | `scaleDownMultiplier` | Artificially increase/decrease the size of scale down by this factor. Default is `1.0`. | | `scaleUpMultiplier` | Artificially increase/decrease the size of scale up by this factor. Default is `1.0`. | ### Clutch Experimental (Developmental Use Only) This strategy is internally used for testing new Clutch implementations. It should not be used for production jobs.
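As a point of reference, the sketch below illustrates how a proportional-derivative controller of the kind configured by `kp` and `kd` turns the drift between an observed metric and its set point into a scaling signal. It is only an illustration, not the actual Mantis/Clutch code: rope, cooldown, min/max clamping and the multipliers described above are all omitted.

```typescript
// Illustrative proportional-derivative step -- NOT the actual Clutch implementation.
// `kp` and `kd` play the same role as the identically named fields in the
// Clutch configuration shown earlier.
interface ControllerState { lastError: number; }

function controllerOutput(
  observed: number,   // e.g. current network usage as % of the worker's limit
  setPoint: number,   // target utilisation, e.g. 60.0
  kp: number,
  kd: number,
  state: ControllerState,
): number {
  const error = observed - setPoint;           // drift from the set point
  const derivative = error - state.lastError;  // how fast the drift is changing
  state.lastError = error;
  // Positive output suggests scaling up, negative suggests scaling down;
  // the magnitude grows with the drift instead of using a fixed step size.
  return kp * error + kd * derivative;
}
```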
61.743421
524
0.719979
eng_Latn
0.996808
ff4db522c23d6501fb4df74d9177556ded3d9648
9,656
md
Markdown
_posts/2020-05-15-easy-graphql-consumer-with-apollo-client.md
jaeyow/fullstack-developer
2b069f070308aedeabcae3b4b0b562e9f2cf376b
[ "MIT" ]
null
null
null
_posts/2020-05-15-easy-graphql-consumer-with-apollo-client.md
jaeyow/fullstack-developer
2b069f070308aedeabcae3b4b0b562e9f2cf376b
[ "MIT" ]
1
2021-06-27T08:32:15.000Z
2021-06-27T08:32:15.000Z
_posts/2020-05-15-easy-graphql-consumer-with-apollo-client.md
jaeyow/fullstack-developer
2b069f070308aedeabcae3b4b0b562e9f2cf376b
[ "MIT" ]
null
null
null
--- layout: posts title: Simple GraphQL consumer with Apollo Client excerpt: A GraphQL web client in ReactJS and Apollo modified: 2020-05-15 date: 2020-05-15 tags: [GraphQL, Apollo Client, Github Actions, Formula 1, AWS S3] header: overlay_image: /images/graphql-client/leonardo-yip-ncwnjmevtcw-unsplash.jpg caption: "Photo credit: [**Unsplash**](https://unsplash.com)" comments: true published: true --- <section id="table-of-contents" class="toc"> <header> <h3>Overview</h3> </header> <div id="drawer" markdown="1"> * Auto generated table of contents {:toc} </div> </section> ## Part of the [GraphQL Series](../tags/#graphql) In my previous [GraphQL blog post](https://fullstackdeveloper.tips/6-steps-to-your-first-graphql-server/), I promised to follow it up with an article about what an [Apollo GraphQL client](https://www.apollographql.com/client/) might look like. In this post we do just that, still about Formula 1, (as an aside yesterday found out that [Daniel Ricciardo will be moving to McLaren](https://www.abc.net.au/news/2020-05-14/daniel-ricciardo-leaves-renault-to-join-mclaren-formula-one/12249854) to replace Sainz who will be replacing Vettel at Ferrari). <figure> <a href="../images/mindfulness-graphql/graphql-apollo-aggregator.png"><img src="../images/mindfulness-graphql/graphql-apollo-aggregator.png"></a><figcaption>Apollo Client to create modern web applications</figcaption> </figure> Apollo Client is the perfect pairing for the [F1 GraphQL Server we built](https://kc4uqd938e.execute-api.us-east-1.amazonaws.com/dev/graphql) in the last blog post. But before we start, we have to know a little bit more about it. And as I will explain further, there may be more to it than what its name might suggest. ## What we built In this article we will be inspecting an Apollo GraphQL Client that I built specially for the server we created for the last article. I am calling it [F1 GraphQL Client](https://f1-graphql-client.s3.amazonaws.com/index.html), a reference ReactJS GraphQL consumer. Nothing fancy, rough around the edges, but it serves the purpose of showcasing how easy it is to get it up and running. <figure> <a href="../images/graphql-server/2021-formula-1.jpg"><img src="../images/graphql-server/2021-formula-1.jpg"></a><figcaption>I Love Formula 1, pictured above is the new 2021 F1 car design.</figcaption> </figure> All source code is available at [FullstackDeveloper.Tips Github](https://github.com/jaeyow/f1-graphql) and forever free however you want to use it. It is also [hosted at AWS S3](https://f1-graphql-client.s3.amazonaws.com/index.html), so go ahead play with it to your heart's content. As usual, I have used Github Actions to automatically deploy it to S3 once it is pushed to the repo. <figure> <a href="https://f1-graphql-client.s3.amazonaws.com/index.html" target="_blank"><img src="../images/graphql-client/f1-graphql-client-using-apollo-client.png"></a><figcaption>We are building a simple GraphQL consumer using Apollo Client</figcaption> </figure> ## What is Apollo Client? > Apollo Client is a complete state management library for JavaScript apps. Simply write a GraphQL query, and Apollo Client will take care of requesting and caching your data, as well as updating your UI. - Apollo Client docs What?!? Apollo Client is a **state management library**? That I did not expect. This is the very first statement that you will read in the official online Apollo Client [documentation](https://www.apollographql.com/docs/react/). 
I wasn't really convinced at first, I thought, with ReactJS and all other supporting libraries, why would I need yet another one? Please read on and find out. ### Declarative configuration Because GraphQL is still using HTTP, any of the usual libraries that we use in REST API can be used with GraphQL, eg, [fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), [axios](https://github.com/axios/axios), [superagent](https://github.com/visionmedia/superagent), or simply just [XHR](https://javascript.info/xmlhttprequest). We can use them, however, forget about them for the moment. Let's try the Apollo Client and prepare to be blown away. Using Apollo client and and the library's **useQuery** hook, you can create your GraphQL requests declaratively, and not have to create data access code with your favorite library. {% highlight js linenos %} export default function QualifyingResultsTable() { const classes = useStyles(); const { filters } = useContext(AppState); const { loading, error, data } = useQuery(QUAL_RESULTS, { variables: { season: filters.season } }); if (loading) return ( <Grid item xs={4} className={classes.root}> <CircularProgress size={20} className={classes.spinner} ></CircularProgress> </Grid> ); if (error) return <p>Error :(</p>; const quals = data.qualifying[filters.detail] ? data.qualifying[filters.detail].QualifyingResults : null; return ( <TableContainer component={Paper}> <Table className={classes.table} aria-label="simple table"> {/*... removed for brevity ...*/} <TableBody> { quals && quals.map((result, row_i) => { return ( <TableRow key={row_i}> <TableCell align="left" component="th" scope="row">{result.position}</TableCell> <TableCell align="left">{result.number}</TableCell> <TableCell align="left">{result.Driver.givenName}</TableCell> <TableCell align="left">{result.Constructor.name}</TableCell> <TableCell align="left">{result.Q1}</TableCell> <TableCell align="left">{result.Q2}</TableCell> <TableCell align="left">{result.Q3}</TableCell> </TableRow> ); }) } </TableBody> </Table> </TableContainer> ); } {% endhighlight %} On line number 4, you get **loading** and **error** flags built-in to indicate those states, and **data** containing the requested data. In the example on line 22, you can see the view directly using it. Isn't that awesome? ### Client-side caching for free Without any configuration, all your requests are cached on the browser. So if you use the same query in other parts of your SPA, you will get instant response from your local Apollo Client cache. If there are advanced cases that is not handled by the default behavior, you can [customize them](https://www.apollographql.com/docs/react/caching/cache-configuration/), but this is out of scope for this article. Try the [caching functionality](https://f1-graphql-client.s3.amazonaws.com/index.html) now. The first time you load 2019 data, there is a bit of a delay as the request completes. But you go away to another season, say 2018, and come back to 2019, it will be instant as that data is already in the Apollo Client cache. ### Throw away Redux and MobX, state management is built-in In the code snippet above, where we have **error** and **loading** and **data**, if we were not using Apollo Client, we would handle these manually by creating reducers in Redux or stores in MobX. But because client side caching is built-in, we can leverage this and use it as our app state management. Here's more information about this from the [source](https://www.apollographql.com/docs/react/data/local-state/). 
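For completeness, here is a minimal standalone setup sketch, assuming `@apollo/client` v3 (which may differ from the exact packages used in this project). The endpoint is the F1 GraphQL server linked above, and the query fields are illustrative only and may not match that server's schema exactly:

```typescript
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

// The InMemoryCache is what provides the client-side caching and local
// state management described above, with no extra configuration.
const client = new ApolloClient({
  uri: "https://kc4uqd938e.execute-api.us-east-1.amazonaws.com/dev/graphql",
  cache: new InMemoryCache(),
});

// Illustrative query -- field names are placeholders, not the real schema.
const QUAL_RESULTS = gql`
  query Qualifying($season: String!) {
    qualifying(season: $season) {
      season
      round
    }
  }
`;

client
  .query({ query: QUAL_RESULTS, variables: { season: "2019" } })
  .then((result) => console.log(result.data));
```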
### Handy developer tools Development productivity is important to ensure adoption. One cool feature is a Chrome DevTools extension that allows you (in your local dev environment) easily interact with your GraphQL client and server, including your local Apollo Client cache. <figure> <a href="../images/graphql-client/f1-graphql-apollo-developer-tools.png"><img src="../images/graphql-client/f1-graphql-apollo-developer-tools.png"></a><figcaption>When working locally Apollo Client Developer Tools is really handy</figcaption> </figure> ### Easy integration with your favorite JS framework I have shown the integration in ReactJS above, however, many other platforms are supported too like [Angular](https://angular.io/), [Vue](https://vuejs.org/), [Meteor](https://www.meteor.com/), and [Ember](https://emberjs.com/), to name a few. ## Conclusion Today's article is a concise introduction to Apollo Client, and and the sample application demonstrates that creating a GraphQL client is simple specially if you use Apollo Client. With it's declarative style, caching, developer tools and easy integration with your project, you can get started and be productive in no time. Please try [F1 GraphQL Client live](https://f1-graphql-client.s3.amazonaws.com/index.html) here! [Source code for F1 GraphQL Client](https://github.com/jaeyow/f1-graphql) is available here. ![Production Build](https://github.com/jaeyow/f1-graphql/workflows/Production%20Build/badge.svg) ## My Picks These picks are things that have had a positive impact to me in recent weeks: - [AWS ReInvent 2019 Amazon DynamoDB Deep Dive](https://www.youtube.com/watch?v=6yqfmXiZTlM&t=2951s) - Hands down the best material out there on DynamoDB. Lots of Aha moments in this one. When I started learning DynamoDB I find myself watching this over and over again. - [International Cycling (and Running) without leaving your house](https://zwift.com/) - In this age of months long lockdowns, I have never really left cycling thanks to [Zwift](https://zwift.com/). ## Resources - [Apollo Client](https://www.apollographql.com/docs/react/) - [Apollo Server Documentation](https://www.apollographql.com/docs/apollo-server/) - [F1 Ergast API](http://ergast.com/mrd/)
67.055556
464
0.733119
eng_Latn
0.951137
ff4e73e1fa79fa5d4da248dc418b267a2196d6e0
408
md
Markdown
content/docs/mr/contributing/running-the-project.md
novia713/drupalconsole.com
579c0408569dc1837c3cdc5452d5d4cdb849d9ca
[ "MIT" ]
null
null
null
content/docs/mr/contributing/running-the-project.md
novia713/drupalconsole.com
579c0408569dc1837c3cdc5452d5d4cdb849d9ca
[ "MIT" ]
null
null
null
content/docs/mr/contributing/running-the-project.md
novia713/drupalconsole.com
579c0408569dc1837c3cdc5452d5d4cdb849d9ca
[ "MIT" ]
1
2021-06-01T03:21:46.000Z
2021-06-01T03:21:46.000Z
# Running the project

After using Composer to download dependencies, you can run the project by executing:

```
$ bin/drupal
```

## Create a symbolic link

You can run this command to easily access the Drupal Console from anywhere on your system:

```
$ sudo ln -s /path/to/drupal-console/bin/drupal /usr/local/bin/drupal
```

**NOTE:** The name `drupal` is just an alias; you can name it anything you like.
24
90
0.730392
eng_Latn
0.997734
ff4ed0f10b8bca3d83b19d0cb5039a0c34aabe56
460
md
Markdown
Vaibhav-Singh.md
sanxy/hacktoberfest-1
913582b310688d496602e8b1bc9166cb64866e38
[ "MIT" ]
null
null
null
Vaibhav-Singh.md
sanxy/hacktoberfest-1
913582b310688d496602e8b1bc9166cb64866e38
[ "MIT" ]
null
null
null
Vaibhav-Singh.md
sanxy/hacktoberfest-1
913582b310688d496602e8b1bc9166cb64866e38
[ "MIT" ]
1
2020-09-30T18:53:05.000Z
2020-09-30T18:53:05.000Z
# About the Author

Hello everyone, this is **Vaibhav Singh**, and this is a little info about me.

Just [click here](https://github.com/itsvaibhav01) to get to my GitHub page :smiley:

---

**Ask me for long hair tips!!**

[![Video link](https://avatars1.githubusercontent.com/u/45447817?s=460&u=f532a98ec0c8b4d49142e73f612e68c63ce55f24&v=4)]()

---

#### Copyright (c) 2020 on me is by Indian passport

#### Made by [Vaibhav Singh](https://github.com/itsvaibhav01)
32.857143
121
0.726087
eng_Latn
0.598962
ff4ef81c812b9f4d582a6ada8fd5a5397b5163dc
8,903
md
Markdown
content/changelog.md
ArcticNature/docs
f91c505cd231223fc836dd9893e66057a5856c59
[ "BSD-3-Clause" ]
null
null
null
content/changelog.md
ArcticNature/docs
f91c505cd231223fc836dd9893e66057a5856c59
[ "BSD-3-Clause" ]
null
null
null
content/changelog.md
ArcticNature/docs
f91c505cd231223fc836dd9893e66057a5856c59
[ "BSD-3-Clause" ]
null
null
null
+++ date = "2017-04-21T18:20:41+01:00" title = "Change log" force_menu = "status" +++ {{% draft %}} Template -------- _Topic_: This is what version blocks look like. ### New Features - ??? ### Improvements - ??? ### Breaking changes - ??? ### Fixes - ??? ### Others - ??? Undetermined ------------ _Topic_: Draft, uncommited and unordered, roadmap. - TODO(stefano): Daemon and Spawner log through the manager. - TODO(stefano): Log level filtering. - TODO(stefano): Spawner and Daemon configuration. - TODO(stefano): Spawner and Daemon event manager. - TODO(stefano): Rewrite protobuf documentation in terms of operations (with with detailed sections on message attributes and components interaction). - TODO(stefano): Upgrade to latest hugo. - TODO(stefano): ArangoDB metadata backend. - TODO(stefano): Resource (CPU && RAM) tracking. - TODO(stefano): Resource (CPU && RAM) allocation spec. - TODO(stefano): Resource (CPU && RAM) limit enforcing. - TODO(stefano): Additional resource tracking. - TODO(stefano): Look into LUA corutines to suspend execution (implement `include` in config and `wait` in client). - TODO(stefano): Improve lua C++ interface with `C++11/14` feats: * Template specialization in stack `to<Type>`. * Reimplement proxy type with functors and lambdas. * Simple lua container for `std::shared_ptr<?>`. - TODO(stefano): Use promeses in `snow-fox-cli` interpreter. - TODO(stefano): Convert node interface to use promises. - TODO(stefano): Rewrite repo interface to be promise based. - TODO(stefano): Storage hints: * Format metadata keys as `namespace:key`. * Store a map from `namespace` to usage hint. * Metadata implementations can use these hints to optimize storage. - TODO(stefano): Rebuild and generalise build system. * Based on stages (generate files, compile, link, run, analyse, lint, gcovr, lcov). * Based on ninja? * Per-component and global stages (compile vs lcov). * Jenkinfile or equivalent with the CI stages. * Dockerfile in `dev-tools/docker/jenkins` to spin up a jenkins local server? 0.6.1? ------ _Topic_: Cluster membership. - TODO(stefano): Node (re)join at start. - TODO(stefano): Nodes have buddies to check up on their health. - TODO(stefano): Primary as a service: nodes can be primary/secondary for a cluster wide task. - TODO(stefano): Buddies detect nodes faults. - TODO(stefano): Forget nodes (not automatically done). 0.6.0? ------ _Topic_: Configuration refactoring and context instance. - TODO(stefano): Create `core.config.node` to introduce node options. - TODO(stefano): Extend `core.config.base` with `ConfigMultiStep`. - TODO(stefano): Change node configuration to new class. - TODO(stefano): Config loader stores and builds a `core.state.global` container. - TODO(stefano): Update components to match new config format. - TODO(stefano): Delete old code. - TODO(stefano): Allow configuration of metrics. - TODO(stefano): Re-work logging configuration. - TODO(stefano): Store promise handler in `core.state.global`. 0.5.0? ------ _Topic_: Libevent2, RPC framework and tracing. - TODO(stefano): Rewrite the event system to use libevent? - TODO(stefano): RPC abstract framework. - TODO(stefano): HTTP RPC framework. - TODO(stefano): JSON HTTP RPC framework. - TODO(stefano): Introduce http://opentracing.io/ with Zipkin support. - TODO(stefano): Trace node status. - TODO(stefano): Trace service list. - TODO(stefano): Trace service load. - TODO(stefano): Trace service start. 0.4.0? ------ _Topic_: Introspection with metrics and logs. 
- TODO(stefano): Create backends to collect and expose metrics. - TODO(stefano): Count configuration reloads. - TODO(stefano): Count errors. - TODO(stefano): Gauge pending promeses. - TODO(stefano): Move logger instance out of `Context`. - TODO(stefano): Move `ScoperdLogger` out of `Context`. - TODO(stefano): Log to fluentd https://github.com/m-mizutani/libfluent - TODO(stefano): Process name in the logs (to replace process-group option). - TODO(stefano): Multi-logger for stdout/stderr and fluentd. 0.3.0? ------ _Topic_: Docs improvement and reorganization. - TODO(stefano): Move site to `docs.site`. - TODO(stefano): Add gitbook component. - TODO(stefano): Create `docs.admin` for end-user docs. - TODO(stefano): Create `docs.failures` for failure modes. - TOOD(stefano): Create `docs.develop` for development docs. - TODO(stefano): Add top level reference menu for detailed docs. 0.2.1? ------ _Topic_: Service status and stop. - TODO(stefano): Searchable metadata (with support for simple attribute matchers). - TODO(stefano): List services and instances in the register. - TODO(stefano): ??? {{% /draft %}} 0.2.0 ----- _Topic_: Service start and registry. - TODO(stefano): Start instance client command. - TODO(stefano): Start instance server handler. - TODO(stefano): Load instance definition. - TODO(stefano): Send instance start to spawner. - TODO(stefano): Start instance. - Define service and service instance. ### New Features - Create `core.config.base` for base config environment. - Create `core.config.service` to return a `ServiceDescription`. - Create `core.state.global` to track singleton intances. - Create `core.testing.cluster` for tests. - Create `core.testing.static` for tests. - Iterate over keys in a LUA table (to be improved). - Script to lcov coverage data into HTML. ### Improvements - Convert LUA value to JSON. - Iterate over string and int `LuaTable` keys. - Skip rebuilding external dependencies that are not idempotent. ### Breaking changes - Move `Node::me()` to `Cluster::myself()`. - Move cluster instance out of `Context`. - Simplify ColourStatus by removing status code. 0.1.1 ----- _Topic_: Metadata storage. ### New Features - Abstract MetaData store interface. - Cluster metadata store and configuration. - JSON FileSystem metadata store. - Local node metadata store and configuration. - Node config lua extention hooks. - `ProxyLogger` to use as python's `getLogger`. ### Improvements - Create `core.testing.hooks` for hook test utils. - Create `core.testing.metadata` for metadata test utils. - Create `core.testing.promise` for test utils. - Start converting `Context::logger` and `Logger::fallback` to `ProxyLogger`. - Start dev tools with a docker based builder. 0.1.0 ----- _Topic_: Events refactoring. ### New Features - Add failure modes documentation. - Add glossary page to documentation. - Create a `PromiseKeeper` and add to static context. - Handle global promise failures by re-thworing them in the run loop. - Introduce hooks with functors and static types. - Introduce promeses. - SheduledSource to tick promise keeper. - Use lifecycles to auto-add drains with data. ### Improvements - Exceptions inherit from `std::runtime_error`. - Make Drain and Source FD accessible to LoopManagers only. - Quick access to full reference pages. ### Breaking changes - All writes use flush instead of FDs. - `Context` static methods to uppercase (lowercase for instance version). - Implement `Buffer`s for drains buffering system. - Move FD check to new `core.utility.net`. 
- Refactor `Event` error handling with `std::exception_ptr`. - Refactor `EventDrain` to support buffering and async flushes. - Refactor `EventSource` to improve interface. - Refactor `EventSourceManager` to `LoopManager`. - Refactor `MessageIO::send` to use drains instead of fds. 0.0.4 ----- _Topic_: Start system configuration. ### New Features - Create/Migrate ScheduledSources during reconfig. - Git backed repository. - Initial configuration loading. - Migrate Managers's sources (spawner and daemon). - Migrate ManaualSource during reconfig. - Node configuration and example repo. - Node configuration loader. - Return configuration version with node's status. - ScheduledSource configuration. - Support event manager reconfiguration. 0.0.3 ----- _Topic_: First client-server interaction. ### New Features - Client introduction. - Event contexts. - Lua interface to client API. - Lua interface to node API. - Manager's event registry. - Node status request. - Node status response. - Promised events. 0.0.2 ----- _Topic_: Start command line client. ### New Features - Command line client started. - LUA utility wrapper classes. - Manual event source. - Node name from command line. - Print binary version and exit. - Public protocol started. - Scheduled event source. - Started cluster interface. - Status definition and helpers. ### Improvements - Node name used in event ids. ### Fixes - Fixed daemon termination with Ctrl+C in console mode. - Fixed undetected client disconnects. 0.0.1 ----- _Topic_: Event ids. ### New Features - Event ids and correlation ids. - Improved event handler registration at static initialisation time. - Version header file. 0.0.0 ----- _Topic_: Core framework. ### New Features - Core framework. - Process orchestration and daemonisation.
31.020906
115
0.735595
eng_Latn
0.6638
ff50009a7e8803a3cfea5c406b40dbe2fef077c2
10,201
md
Markdown
README_zh.md
G-Anjanappa/dgcnn.pytorch
97785863ff7a82da8e2abe0945cf8d12c9cc6c18
[ "MIT" ]
335
2020-03-04T05:11:15.000Z
2022-03-31T10:14:39.000Z
README_zh.md
G-Anjanappa/dgcnn.pytorch
97785863ff7a82da8e2abe0945cf8d12c9cc6c18
[ "MIT" ]
68
2020-03-26T11:48:42.000Z
2022-03-25T08:14:43.000Z
README_zh.md
G-Anjanappa/dgcnn.pytorch
97785863ff7a82da8e2abe0945cf8d12c9cc6c18
[ "MIT" ]
101
2020-03-06T13:46:18.000Z
2022-03-28T16:31:22.000Z
# DGCNN.pytorch
[[English]](README.md)

This repository provides a PyTorch implementation of **Dynamic Graph CNN for Learning on Point Clouds (DGCNN)** ( https://arxiv.org/pdf/1801.07829 ). The code framework comes from [WangYueFt/dgcnn](https://github.com/WangYueFt/dgcnn/tree/master/pytorch).

**Updates:**

- [2021/7/20] Added visualization code, contributed by [Pengliang Ji](https://github.com/Ji-Pengliang) ([email protected]).

In the DGCNN paper, the classification network shown in the architecture figure (Figure 3) does not match the corresponding description of the architecture in the text (Section 4.1). The original authors actually follow the architecture described in Section 4.1, so we used Photoshop to fix the mismatched parts of Figure 3. The corrected figure is shown below:

&nbsp;
<p float="left">
    <img src="image/DGCNN.jpg"/>
</p>

&nbsp;

**Tip:** Experimental results on 3D point clouds usually show more randomness than results on 2D images, so we suggest running each experiment several times and choosing the best result.

&nbsp;

## Requirements
- Python 3.7
- PyTorch 1.2
- CUDA 10.0
- Python packages: glob, h5py, sklearn, plyfile

&nbsp;

## Contents
- [Point Cloud Classification](#_3)
- [Point Cloud Part Segmentation](#_8)
- [Point Cloud Semantic Segmentation](#_16)

**Note:** All commands below use all available GPUs by default. To use specific GPUs, for example four GPUs with indices `0,1,2,3`, prefix each command with `CUDA_VISIBLE_DEVICES=0,1,2,3`. Adjust the number and indices of GPUs to your needs.

&nbsp;

## Point Cloud Classification
### Run the training script:

- 1024 points

```
python main_cls.py --exp_name=cls_1024 --num_points=1024 --k=20
```

- 2048 points

```
python main_cls.py --exp_name=cls_2048 --num_points=2048 --k=40
```

### Run the evaluation script after training finishes:

- 1024 points

```
python main_cls.py --exp_name=cls_1024_eval --num_points=1024 --k=20 --eval=True --model_path=outputs/cls_1024/models/model.t7
```

- 2048 points

```
python main_cls.py --exp_name=cls_2048_eval --num_points=2048 --k=40 --eval=True --model_path=outputs/cls_2048/models/model.t7
```

### Run the evaluation script with the provided pretrained models:

- 1024 points

```
python main_cls.py --exp_name=cls_1024_eval --num_points=1024 --k=20 --eval=True --model_path=pretrained/model.cls.1024.t7
```

- 2048 points

```
python main_cls.py --exp_name=cls_2048_eval --num_points=2048 --k=40 --eval=True --model_path=pretrained/model.cls.2048.t7
```

### Performance:
ModelNet40 dataset

|  | Mean Class Acc | Overall Acc |
| :---: | :---: | :---: |
| Paper (1024 points) | 90.2 | 92.9 |
| This repo (1024 points) | **90.9** | **93.3** |
| Paper (2048 points) | 90.7 | 93.5 |
| This repo (2048 points) | **91.2** | **93.6** |

&nbsp;

## Point Cloud Part Segmentation

**Note:** The training modes **"full dataset"** and **"with class choice"** are different.

- In **"full dataset"**, the model is trained and evaluated using the data of all 16 classes and reaches a mean IoU of 85.2% with this code. The predicted label of each point in a shape can be any part label of any of the 16 classes.
- In **"with class choice"**, the model is trained and evaluated using the data of only one class, for example the airplane class, and reaches an IoU of 84.5% for the airplane class with this code. The predicted label of each point in a shape can only be one of the part labels of that single class.

### Run the training script:

- Full dataset

```
python main_partseg.py --exp_name=partseg
```

- With class choice, for example airplane

```
python main_partseg.py --exp_name=partseg_airplane --class_choice=airplane
```

### Run the evaluation script after training finishes:

- Full dataset

```
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=outputs/partseg/models/model.t7
```

- With class choice, for example airplane

```
python main_partseg.py --exp_name=partseg_airplane_eval --class_choice=airplane --eval=True --model_path=outputs/partseg_airplane/models/model.t7
```

### Run the evaluation script with the provided pretrained models:

- Full dataset

```
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=pretrained/model.partseg.t7
```

- With class choice, for example airplane

```
python main_partseg.py --exp_name=partseg_airplane_eval --class_choice=airplane --eval=True --model_path=pretrained/model.partseg.airplane.t7
```

### Performance
ShapeNet part dataset

| &emsp;&emsp;&emsp;&emsp; | Mean IoU | airplane | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motor | mug | pistol | rocket | skateboard | table |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| &emsp;&emsp;&emsp;&emsp; | mIoU | airplane | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motor | mug | pistol | rocket | skateboard | table |
| Number of shapes | &emsp;&emsp;&emsp;&emsp; | 2690 | 76 | 55 | 898 | 3758 | 69 | 787 | 392 | 1547 | 451 | 202 | 184 | 283 | 66 | 152 | 5271 |
| Paper | **85.2** | 84.0 | **83.4** | **86.7** | 77.8 | 90.6 | 74.7 | 91.2 | **87.5** | 82.8 | **95.7** | 66.3 | **94.9** | 81.1 | **63.5** | 74.5 | 82.6 |
| This repo | **85.2** | **84.5** | 80.3 | 84.7 | **79.8** | **91.1** | **76.8** | **92.0** | 87.3 | **83.8** | **95.7** | **69.6** | 94.3 | **83.7** | 51.5 | **76.1** | **82.8** |

### Visualization:
#### Usage:

Use `--visu` to control which files are visualized.
- To visualize one shape, for example the shape with index 0 of the airplane class (shape indices start from 0), use `--visu=airplane_0`.
- To visualize all shapes of one class, for example the airplane class, use `--visu=airplane`.
- To visualize all shapes of all classes, use `--visu=all`.

Use `--visu_format` to control the format of the visualization files.
- To output the .txt format, use `--visu_format=txt`.
- To output the .ply format, use `--visu_format=ply`.

Both formats can be loaded and visualized with [MeshLab](https://www.meshlab.net). To visualize the .txt format in MeshLab, see the instructions in issue [#8](https://github.com/AnTao97/dgcnn.pytorch/issues/8); the .ply format can be dragged into MeshLab directly.

The visualization file names follow a uniform naming convention. For predictions, the file name format is `shapename_pred_miou.format`; for ground-truth labels, it is `shapename_gt.format`. Here `miou` is the mean IoU of that shape.

#### Full dataset:

- Output the .ply visualization result of the airplane shape with index 0

```
# Use the model trained by yourself
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=outputs/partseg/models/model.t7 --visu=airplane_0 --visu_format=ply

# Use the provided pretrained model
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=pretrained/model.partseg.t7 --visu=airplane_0 --visu_format=ply
```

- Output the .ply visualization results of all shapes of the airplane class

```
# Use the model trained by yourself
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=outputs/partseg/models/model.t7 --visu=airplane --visu_format=ply

# Use the provided pretrained model
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=pretrained/model.partseg.t7 --visu=airplane --visu_format=ply
```

- Output the .ply visualization results of all shapes of all classes

```
# Use the model trained by yourself
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=outputs/partseg/models/model.t7 --visu=all --visu_format=ply

# Use the provided pretrained model
python main_partseg.py --exp_name=partseg_eval --eval=True --model_path=pretrained/model.partseg.t7 --visu=all --visu_format=ply
```

#### With class choice, for example airplane:

- Output the .ply visualization result of the airplane shape with index 0

```
# Use the model trained by yourself
python main_partseg.py --exp_name=partseg_airplane_eval --class_choice=airplane --eval=True --model_path=outputs/partseg_airplane/models/model.t7 --visu=airplane_0 --visu_format=ply

# Use the provided pretrained model
python main_partseg.py --exp_name=partseg_airplane_eval --class_choice=airplane --eval=True --model_path=pretrained/model.partseg.airplane.t7 --visu=airplane_0 --visu_format=ply
```

- Output the .ply visualization results of all shapes of the airplane class

```
# Use the model trained by yourself
python main_partseg.py --exp_name=partseg_airplane_eval --class_choice=airplane --eval=True --model_path=outputs/partseg_airplane/models/model.t7 --visu=airplane --visu_format=ply

# Use the provided pretrained model
python main_partseg.py --exp_name=partseg_airplane_eval --class_choice=airplane --eval=True --model_path=pretrained/model.partseg.airplane.t7 --visu=airplane --visu_format=ply
```

#### Visualization results:

Visualization result of the airplane shape with index 0:

<p float="left">
    <img src="image/partseg_visu.png"/>
</p>

Color map:

<p float="left">
    <img src="image/partseg_colors.png"/>
</p>

&nbsp;

## Point Cloud Semantic Segmentation

For this task the network structure differs slightly from the one used for part segmentation: the size of the last MLP is changed to (512, 256, 13), and only one dropout is used after the 256 layer.

You must manually download the dataset `Stanford3dDataset_v1.2_Aligned_Version.zip` from https://goo.gl/forms/4SoGp4KtH1jfRqEj2 and place it under the `data/` directory.

### Run the training script:

This task uses 6-fold training, so six models need to be trained, each in turn using one of the six areas of the dataset as its test area.

- Train on areas 1-5

```
python main_semseg.py --exp_name=semseg_6 --test_area=6
```

### Run the evaluation script after training finishes:

- After the model is trained on areas 1-5, evaluate it on area 6

```
python main_semseg.py --exp_name=semseg_eval_6 --test_area=6 --eval=True --model_root=outputs/semseg/models/
```

- After all six models are trained, evaluate on all areas

```
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=outputs/semseg/models/
```

### Run the evaluation script with the provided pretrained models:

- Evaluate on area 6 using the provided model trained on areas 1-5

```
python main_semseg.py --exp_name=semseg_eval_6 --test_area=6 --eval=True --model_root=pretrained/semseg/
```

- Evaluate on all areas using the six provided pretrained models

```
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=pretrained/semseg/
```

### Performance:
Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS)

|  | Mean IoU | Overall Acc |
| :---: | :---: | :---: |
| Paper | 56.1 | 84.1 |
| This repo | **59.2** | **85.0** |

### Visualization:
#### Usage:

Use `--visu` to control which files are visualized.
- To visualize one room, for example office 1 of area 6 (room indices start from 1), use `--visu=area_6_office_1`.
- To visualize all rooms of one area, for example area 6, use `--visu=area_6`.
- To visualize all rooms of all areas, use `--visu=all`.

Use `--visu_format` to control the format of the visualization files.
- To output the .txt format, use `--visu_format=txt`.
- To output the .ply format, use `--visu_format=ply`.

Both formats can be loaded and visualized with [MeshLab](https://www.meshlab.net). To visualize the .txt format in MeshLab, see the instructions in issue [#8](https://github.com/AnTao97/dgcnn.pytorch/issues/8); the .ply format can be dragged into MeshLab directly.

The visualization file names follow a uniform naming convention. For predictions, the file name format is `roomname_pred_miou.format`; for ground-truth labels, it is `roomname_gt.format`. Here `miou` is the mean IoU of that room.

**Note:** For semantic segmentation, you first need to run a command without visualization, such as the training and evaluation commands before this visualization section, to preprocess the dataset. After the dataset has been processed, you can run the commands with visualization below.

#### After the model is trained on areas 1-5, evaluate on area 6:

- Output the .ply visualization result of office 1 of area 6

```
# Use the model trained by yourself
python main_semseg.py --exp_name=semseg_eval_6 --test_area=6 --eval=True --model_root=outputs/semseg/models/ --visu=area_6_office_1 --visu_format=ply

# Use the provided pretrained model
python main_semseg.py --exp_name=semseg_eval_6 --test_area=6 --eval=True --model_root=pretrained/semseg/ --visu=area_6_office_1 --visu_format=ply
```

- Output the .ply visualization results of all rooms of area 6

```
# Use the model trained by yourself
python main_semseg.py --exp_name=semseg_eval_6 --test_area=6 --eval=True --model_root=outputs/semseg/models/ --visu=area_6 --visu_format=ply

# Use the provided pretrained model
python main_semseg.py --exp_name=semseg_eval_6 --test_area=6 --eval=True --model_root=pretrained/semseg/ --visu=area_6 --visu_format=ply
```

#### After all six models are trained, evaluate on all areas:

- Output the .ply visualization result of office 1 of area 6

```
# Use the model trained by yourself
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=outputs/semseg/models/ --visu=area_6_office_1 --visu_format=ply

# Use the provided pretrained model
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=pretrained/semseg/ --visu=area_6_office_1 --visu_format=ply
```

- Output the .ply visualization results of all rooms of area 6

```
# Use the model trained by yourself
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=outputs/semseg/models/ --visu=area_6 --visu_format=ply

# Use the provided pretrained model
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=pretrained/semseg/ --visu=area_6 --visu_format=ply
```

- Output the .ply visualization results of all rooms of all areas

```
# Use the model trained by yourself
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=outputs/semseg/models/ --visu=all --visu_format=ply

# Use the provided pretrained model
python main_semseg.py --exp_name=semseg_eval --test_area=all --eval=True --model_root=pretrained/semseg/ --visu=all --visu_format=ply
```

#### Visualization results:

Visualization result of office 1 of area 6:

<p float="left">
    <img src="image/semseg_visu.png"/>
</p>

Color map:

<p float="left">
    <img src="image/semseg_colors.png" width="800"/>
</p>
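
The `--k` flag that appears in the commands above sets how many nearest neighbours each point is connected to when the dynamic graph is rebuilt for every EdgeConv layer. As a rough illustration of what that parameter does, here is a minimal PyTorch sketch of the pairwise-distance k-NN search and edge-feature construction commonly seen in public DGCNN implementations; the function names and tensor layout are illustrative assumptions and may not match this repository's `model.py` exactly.

```python
import torch


def knn(x, k):
    # x: (batch, channels, num_points); return indices of the k nearest
    # neighbours of every point under squared Euclidean distance.
    inner = -2 * torch.matmul(x.transpose(2, 1), x)            # (B, N, N)
    xx = torch.sum(x ** 2, dim=1, keepdim=True)                # (B, 1, N)
    pairwise = -xx - inner - xx.transpose(2, 1)                # -||x_i - x_j||^2
    return pairwise.topk(k=k, dim=-1)[1]                       # (B, N, k)


def get_graph_feature(x, k=20):
    # Build the EdgeConv input [x_j - x_i, x_i] for every point and each of
    # its k nearest neighbours; the graph is recomputed from the current
    # feature space, which is what makes it "dynamic".
    batch_size, num_dims, num_points = x.size()
    idx = knn(x, k)                                            # (B, N, k)
    idx_base = torch.arange(0, batch_size, device=x.device).view(-1, 1, 1) * num_points
    idx = (idx + idx_base).view(-1)
    x = x.transpose(2, 1).contiguous()                         # (B, N, C)
    feature = x.view(batch_size * num_points, -1)[idx, :]
    feature = feature.view(batch_size, num_points, k, num_dims)
    x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
    feature = torch.cat((feature - x, x), dim=3)               # (B, N, k, 2C)
    return feature.permute(0, 3, 1, 2).contiguous()            # (B, 2C, N, k)
```

With `k=20` and 1024 points, each point contributes a (2C x 20) edge-feature block that the shared MLP and max-pooling of an EdgeConv layer reduce back to a per-point feature.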
27.422043
199
0.70297
yue_Hant
0.126371
ff50134a6ffa87b28a4dd5e28444ce7a7969837c
1,398
md
Markdown
lib/models/providers/README.md
dadiorchen/Windshaft
3447ae2eeb33103905b835743e5f18f27e58d39a
[ "BSD-3-Clause" ]
192
2015-01-11T23:46:50.000Z
2021-12-22T15:56:38.000Z
lib/models/providers/README.md
dadiorchen/Windshaft
3447ae2eeb33103905b835743e5f18f27e58d39a
[ "BSD-3-Clause" ]
313
2015-01-12T12:13:00.000Z
2021-08-31T22:54:27.000Z
lib/models/providers/README.md
dadiorchen/Windshaft
3447ae2eeb33103905b835743e5f18f27e58d39a
[ "BSD-3-Clause" ]
58
2015-03-17T14:35:59.000Z
2022-02-04T13:48:31.000Z
MapConfigProvider interface
---------------------------

MapConfigProviders help to manipulate MapConfig models within a RendererCache:

 - Load a MapConfig
 - Get its key for the cache
 - Get its cache buster to know when it's required to reload the model

# MapConfigProvider

MapConfigProviders are expected to expose the following interface:

## getMapConfig(callback)

Get a MapConfig model with associated params and context to create a Renderer.

```javascript
getMapConfig(callback)
```

 - @param `{Function} callback` function(err, mapConfig, params, context)
    * `{Error} err` will be an instance of Error on any problem, or null
    * `{MapConfig} mapConfig` will be an opaque object
    * `{Object} params` will contain params associated with the MapConfig request, like format, layer, and so on.
    * `{Object} context` an object with information for renderers, like query limits, etc.

## getKey()

Returns the key for the MapConfig model plus its params. It will be used by RendererCache to store the renderer associated with the MapConfig.

```javascript
getKey()
```

 - @return `{String}` the key for the Renderer

## getCacheBuster()

Returns a number representing the last modification time of the MapConfig, so it's possible to know whether a Renderer must be recreated or not.

```javascript
getCacheBuster()
```

 - @return `{Number}` the last modified time for the MapConfig, aka the cache buster
31.066667
117
0.74392
eng_Latn
0.968366
ff501b68ac8176eb64b2c02a8bf29ab83b3d2d1a
699
md
Markdown
episodes/2019/004-jquery.md
mynar7/techjr
367908689a4f9b6c19a583caead2cd5c58694943
[ "MIT" ]
3
2019-05-30T14:31:57.000Z
2020-09-04T22:08:29.000Z
episodes/2019/004-jquery.md
mynar7/techjr
367908689a4f9b6c19a583caead2cd5c58694943
[ "MIT" ]
5
2021-08-31T16:43:03.000Z
2022-02-26T10:35:40.000Z
episodes/2019/004-jquery.md
mynar7/techjr
367908689a4f9b6c19a583caead2cd5c58694943
[ "MIT" ]
3
2019-11-13T12:13:19.000Z
2020-09-11T12:48:56.000Z
---
title: jQuery!
date: 2019-04-22T10:00:00-04:00
excerpt: In this episode, we talk about jQuery! Should you learn it, or should we pronounce jQuery dead and let it rest in peace?
author: Lee Warrick &amp; Edwin Otero
tags: ['jquery']
showLength: 44:47
fileUrl: TechJr_ep-1_4-22-2019_jquery.mp3
fileSize: 41
---

What's the deal with jQuery?

One of the most notorious JavaScript frameworks out there, jQuery, was once at the top of the web development world. With the advent of Single-Page Applications (SPAs), however, jQuery has fallen out of favor.

In this episode, Lee and Eddie talk about why jQuery was such a big deal, giggle about "sizzle", and ponder whether jQuery still deserves a place on the web.
43.6875
152
0.765379
eng_Latn
0.992761
ff506839d4d7c828ab402b28e556649a4ce3246f
2,429
md
Markdown
doc/Environment/Http/Nginx.md
MacFJA/livres
9a8de81f7c88e5b1205b23cb483e678da686f679
[ "MIT" ]
4
2020-02-13T17:25:39.000Z
2021-01-09T21:28:15.000Z
doc/Environment/Http/Nginx.md
MacFJA/livres
9a8de81f7c88e5b1205b23cb483e678da686f679
[ "MIT" ]
11
2018-02-21T13:44:47.000Z
2021-08-31T20:24:20.000Z
doc/Environment/Http/Nginx.md
MacFJA/livres
9a8de81f7c88e5b1205b23cb483e678da686f679
[ "MIT" ]
1
2018-12-24T09:02:48.000Z
2018-12-24T09:02:48.000Z
# Build production environment

## Nginx as HTTP server

```puml
@startuml
digraph G {
    rankdir=LR

    www [shape=doublecircle]
    nginx [label="Nginx", shape=component, fillcolor=lightblue, style=filled]
    phpfpm [label="PHP-FPM", shape=box, fillcolor=lightblue, style=filled]
    symfony [label="Symfony", shape=oval]

    www -> nginx -> phpfpm -> symfony
}
@enduml
```

## Components

To work with Nginx, you need to have Nginx and PHP-FPM installed.

### Nginx

#### Installation

##### Alpine

```shell script
sudo apk add nginx
```

##### Debian

```shell script
sudo apt install nginx
```

##### CentOS

```shell script
sudo yum install nginx
```

#### Configuration

```nginx
server {
    server_name domain.tld www.domain.tld;
    root /var/www/livres/public;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass unix:/var/run/php/php-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;

        fastcgi_param APP_ENV prod;
        fastcgi_param APP_SECRET <app-secret-id>;
        fastcgi_param DATABASE_URL <app-db-dsn>;
        fastcgi_param REDISEARCH_URL <app-redisearch-dsn>;

        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

    location ~ \.php$ {
        return 404;
    }

    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log;
}
```

(Based on [Symfony documentation](https://symfony.com/doc/current/setup/web_server_configuration.html#web-server-nginx))

### PHP-FPM

#### Installation

##### Alpine

```shell script
sudo apk add php7 php7-fpm php7-opcache \
    php7-bcmath php7-ctype php7-curl php7-dom php7-gd php7-iconv php7-intl php7-json php7-mbstring php7-pdo php7-simplexml php7-sodium php7-xml php7-xsl php7-zip
```

##### Debian

```shell script
sudo apt install php7.3 php7.3-fpm php7.3-opcache \
    php7.3-bcmath php7.3-ctype php7.3-curl php7.3-gd php7.3-iconv php7.3-intl php7.3-json php7.3-mbstring php7.3-pdo php7.3-xml php7.3-xsl php7.3-zip
```

##### CentOS

```shell script
sudo yum install php php-fpm php-opcache \
    php-bcmath php-ctype php-curl php-gd php-iconv php-intl php-json php-mbstring php-pdo php-xml php-xsl php-zip
```

#### Configuration

PHP-FPM comes preconfigured, and in most cases the default configuration is enough.
22.700935
162
0.680527
kor_Hang
0.25982
ff50a37b91d8c7b3402d3afb36267f70b97ba31d
22
md
Markdown
html/New folder/README.md
manisha2307/Hacktoberfest2020
57896144eabfc94c2fb2e7a48382ad492de48952
[ "MIT" ]
null
null
null
html/New folder/README.md
manisha2307/Hacktoberfest2020
57896144eabfc94c2fb2e7a48382ad492de48952
[ "MIT" ]
null
null
null
html/New folder/README.md
manisha2307/Hacktoberfest2020
57896144eabfc94c2fb2e7a48382ad492de48952
[ "MIT" ]
null
null
null
myrepomyrepository
11
20
0.818182
ceb_Latn
0.940885
ff50d35e67f416d5720e9e69884d912b3643545b
585
md
Markdown
Exec/science/Detonation/nse_runs/README.md
MargotF/Castro
5cdb549af422ef44c9b1822d0fefe043b3533c57
[ "BSD-3-Clause-LBNL" ]
178
2017-05-03T18:07:03.000Z
2022-03-31T22:34:53.000Z
Exec/science/Detonation/nse_runs/README.md
MargotF/Castro
5cdb549af422ef44c9b1822d0fefe043b3533c57
[ "BSD-3-Clause-LBNL" ]
1,334
2017-05-04T14:23:24.000Z
2022-03-28T00:12:06.000Z
Exec/science/Detonation/nse_runs/README.md
MargotF/Castro
5cdb549af422ef44c9b1822d0fefe043b3533c57
[ "BSD-3-Clause-LBNL" ]
86
2017-06-12T15:27:51.000Z
2022-03-09T22:21:44.000Z
# `nse_detonations`

This directory contains a set of scripts that set up, run, and analyze a set of detonations, comparing Strang splitting to simplified SDC.

The basic usage is:

1. In a `screen` session or similar, do:

   ```
   ./setup_runs.py
   ```

   This will create the run directories, copy the needed files, and run
   the jobs in parallel using the Python Pool mechanism (see the sketch below).

2. While the jobs are running, you can check the status by doing:

   ```
   ./show_status.py
   ```

3. Once the runs are finished, you can make the suite of plots by running:

   ```
   ./make_plots.py
   ```
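
The parallel launch that `setup_runs.py` performs can be pictured with a small `multiprocessing.Pool` sketch. Everything concrete below — the run-directory names, the executable name, and the pool size — is a made-up placeholder for illustration; only the Pool-based pattern itself comes from the description above.

```python
import subprocess
from multiprocessing import Pool

# Hypothetical run directories; the real script derives these from the
# detonation parameters it sweeps over.
run_dirs = ["det_strang_24km", "det_sdc_24km", "det_strang_12km", "det_sdc_12km"]


def launch(run_dir):
    """Launch one detonation run in its directory and wait for it to finish."""
    # "Castro1d.ex" and "inputs" are placeholder names for the executable
    # and inputs file, not taken from the actual scripts.
    return subprocess.run(["./Castro1d.ex", "inputs"], cwd=run_dir).returncode


if __name__ == "__main__":
    # Run the jobs concurrently, a few at a time, mirroring the Pool-based
    # approach described in the usage notes.
    with Pool(processes=4) as pool:
        for rc in pool.map(launch, run_dirs):
            print("run finished with exit code", rc)
```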
22.5
69
0.687179
eng_Latn
0.998972
ff51c1a0d1f5adb294ada59b8c64258e37092859
4,087
md
Markdown
Chapter4/Readme.md
baronfel/dsls-in-action-fsharp
2b855d09c66ec2ab1e056cf742203c41c2fb9a03
[ "Apache-2.0" ]
114
2015-01-14T12:19:45.000Z
2022-01-22T20:26:40.000Z
Chapter4/Readme.md
baronfel/dsls-in-action-fsharp
2b855d09c66ec2ab1e056cf742203c41c2fb9a03
[ "Apache-2.0" ]
null
null
null
Chapter4/Readme.md
baronfel/dsls-in-action-fsharp
2b855d09c66ec2ab1e056cf742203c41c2fb9a03
[ "Apache-2.0" ]
16
2015-02-21T06:34:01.000Z
2022-02-17T21:57:58.000Z
### Reading notes ### > #### Financial brokerage systems: the cash value of a trade #### > Every trade has a cash value that the counterparty receiving the securities needs to pay to the counterparty delivering the securities. This final value is known as the net settlement value (NSV). The NSV has two main components: the gross cash value and the tax and fees. The gross cash value depends on the unit price of the security that was traded, the type of the security, and additional components like the yield price for bonds. The additional tax and fee amounts include the taxes, duties, levies, commissions, and accrued interest involved in the trading process. The gross cash value calculation depends on the type of the security (equity or fixed income), but is fundamentally a function of the unit price and the quantity traded. > The additional tax and fee amounts vary with the country of trade, the exchange where the trading takes place, and the security that’s traded. In Hong Kong, for example, a stamp duty of 0.125% and a transaction levy of 0.007% are payable on equity purchases and sales. > #### Financial brokerage systems: instrument types #### > Instruments that are traded can be of various types designed to meet the needs of the investors and issuers. Depending on the type, every instrument follows a different lifecycle in the trading and settlement process. > The two main classifications are equity and fixed income. > Equities can again be classified as common stock, preferred stock, cumulative stock, equity warrants, or depository receipts. The types of fixed income securities (also known as bonds) include straight bonds, zero coupon bonds, and floating rate notes. For the purpose of our discussion, it’s not essential to be familiar with all these details. What is important is that the Trade abstractions will vary, depending on the type of instrument that’s being traded. > #### Thinking in types: key takeaways #### > The main purpose of this section was to make you think in types. For each abstraction that’s in your domain model, make it a typed one and organize the related business rules around that type. Many of the business rules will be automatically enforced by the compiler, which means you won’t have to write explicit code for them. If your implementation language has a proper type system, your DSL will be as concise as ones written using dynamic languages. > #### Key takeaways & best practices #### > When you design an internal DSL, follow the best practices for the language concerned. Using the language idiomatically always results in the optimal mix of expressiveness and performance. > A dynamic language like Ruby or Groovy gives you strong metaprogramming capabilities. Design your DSL abstractions and the underlying semantic model that rely on these capabilities. You’ll end up with a beautifully concise syntax, leaving the boilerplates to the underlying language runtime. > In a language like Scala, static typing is your friend. Use type constraints to express a lot of business rules and use the compiler as the first-level verifier of your DSL syntax. > When you’re using a language like Clojure that offers compile-time metaprogramming, use macros to define custom syntax structures. You'll get the conciseness of Ruby with no additional runtime performance penalty. --- ### Implementation notes ### There is no implicit context in F#, so we use records to mimic the scope of an implicit context. We may revisit this issue by exploiting some reflection-based techniques. 
The Decorator example in Ruby is adapted by using a mutable property in F#. For the portfolio example in Scala, we only translate the generic version. The trade example in Scala demonstrates the idea of type constraints. Type constraints and inheritance are not as popular in F# as they are in Scala, so the approach in `Trade.Scala.fs` isn't recommended. The Clojure example is currently ignored due to the lack of a macro facility in F#.
51.734177
135
0.774896
eng_Latn
0.999845
ff531770c3e1451a144fa6b2bb31307c2ee28114
193
md
Markdown
intellij-like-a-boss/basic.md
travisbklp/books
756930f158db42a35940efa0716a9dbca8004ed6
[ "MIT" ]
2
2019-05-12T12:35:28.000Z
2020-07-23T15:41:32.000Z
intellij-like-a-boss/basic.md
travisbklp/books
756930f158db42a35940efa0716a9dbca8004ed6
[ "MIT" ]
null
null
null
intellij-like-a-boss/basic.md
travisbklp/books
756930f158db42a35940efa0716a9dbca8004ed6
[ "MIT" ]
1
2018-10-25T09:56:16.000Z
2018-10-25T09:56:16.000Z
## User interface 1. Project window `⌘1` 1. Version control `⌘9` 1. Structure `⌘7` 1. Favorites `⌘2` 1. Terminal `⌥F12` 1. Navigation bar `⌘↑` **KEEP IN MIND** start typing and things happen
17.545455
47
0.663212
eng_Latn
0.54895
ff53585978411901178911b8a11a0c841d009969
303
md
Markdown
README.md
White-Raven/NC-canva-glitcheffect.js
5a005a43ea51b374c43aabf86434a6e26d8f233b
[ "MIT" ]
null
null
null
README.md
White-Raven/NC-canva-glitcheffect.js
5a005a43ea51b374c43aabf86434a6e26d8f233b
[ "MIT" ]
null
null
null
README.md
White-Raven/NC-canva-glitcheffect.js
5a005a43ea51b374c43aabf86434a6e26d8f233b
[ "MIT" ]
null
null
null
###### G̴̣͔̝͇͍̼̦̤͕̖̙̔̾̅͊̑̅̉̿̃̽͋͌̇̆͋͋͆̊̓̚͝͝Ļ̷͖̟̳̥̜̤͇̰͔̝͋̀̐́̇̏̒͑̍̀͛͗́̎̄̀̓̓͆͒̀̅̇̎͂̀͆͒̊̇̀̇͆͒̇̈́͊́͑̕͜İ̸̡̢̧̢̡̛̤͎̬̘͓͚̘̱̤͈̭̱̠̥̹͔̟̼̞̉̅̆̉͆͆͑̆̒̌̚͘ͅͅT̸̡͍͔̭̥̝̦͍̲̞̻̱̥̙̫̼̤̬̤̞̣̳̬̰͉͈̰̜̘̝̟̱͖̲͓̅̑͛̆̅͒̐̋̈̆͊̓͌̽̎̓̇͜͝ͅC̸̡̢̢̡̛̛͙̞͖̺͖̱̳͕̪͕̼̤̥̤͈̪̗̦͚̟͕͔̟̜̖͕̲̤̞͙͎̼̘̆̈́̾̅̽́͐͑̈̇̐̈̃̄͊͆̀̍͊̽̃̅̉́͒̑̐̏͗̑̃̈́̕͘̚̚͠͠ͅͅH̷̢̨̬̟̮͖̱͕̣̳̖͓̒́͋̀͐̍̏́̋̊̚͠
151.5
302
0.019802
yue_Hant
0.084486
ff5422daa70c00b77ad4e7d1be57798daeb49781
4,615
md
Markdown
powerbi-docs/consumer/mobile/mobile-apps-scan-barcode-iphone.md
pb1051/powerbi-docs
3226a0ed44b723b83bfb3e389ad789632ef16026
[ "CC-BY-4.0", "MIT" ]
300
2018-03-14T17:45:50.000Z
2022-03-31T11:57:57.000Z
powerbi-docs/consumer/mobile/mobile-apps-scan-barcode-iphone.md
pb1051/powerbi-docs
3226a0ed44b723b83bfb3e389ad789632ef16026
[ "CC-BY-4.0", "MIT" ]
3,668
2018-03-14T00:26:34.000Z
2022-03-31T15:53:59.000Z
powerbi-docs/consumer/mobile/mobile-apps-scan-barcode-iphone.md
pb1051/powerbi-docs
3226a0ed44b723b83bfb3e389ad789632ef16026
[ "CC-BY-4.0", "MIT" ]
665
2018-03-14T00:19:50.000Z
2022-03-31T19:03:46.000Z
---
title: Scan a barcode from the Power BI mobile app
description: Scan barcodes in the real world to go directly to filtered BI information in the Power BI mobile app.
author: paulinbar
ms.reviewer: ''

ms.service: powerbi
ms.subservice: powerbi-mobile
ms.topic: how-to
ms.date: 12/02/2019
ms.author: painbar
---
# Scan a barcode with your device from the Power BI mobile app

Scan barcodes in the real world to go directly to filtered BI information in the Power BI mobile app.

Applies to:

| ![iPhone](./media/mobile-apps-qr-code/ios-logo-40-px.png) | ![iPads](./media/mobile-apps-qr-code/ios-logo-40-px.png) | ![Android phone](././media/mobile-apps-qr-code/android-logo-40-px.png) | ![Android tablet](././media/mobile-apps-qr-code/android-logo-40-px.png) |
|:--- |:--- |:--- |:--- |
|iPhones |iPads |Android phones |Android tablets |

Say a colleague has [tagged a barcode field in a report in Power BI Desktop](../../transform-model/desktop-mobile-barcodes.md) and shared the report with you.

![Screenshot of a product barcode scan, showing the scanner over the barcode of a colored beverage.](media/mobile-apps-scan-barcode-iphone/power-bi-barcode-scanner.png)

When you scan a product barcode with the scanner in the Power BI app on your device, you see the report (or list of reports) with that barcode. You can open that report filtered to that barcode.

## Scan a barcode with the Power BI scanner

1. On the navigation bar, tap **More options** (...) and then tap **Scanner**.

   ![Screenshot of the More options on the navigation pane, showing the scanner selection.](media/mobile-apps-scan-barcode-iphone/power-bi-scanner.png)

2. If your camera is not enabled, you need to approve the Power BI app to use the camera. This is a one-time approval.
3. Point the scanner at a barcode on a product. You will see a list of reports associated with that barcode.
4. Tap the report name to open it on your device, automatically filtered according to that barcode.

## Filter by other barcodes while in a report

While looking at a report filtered by a barcode on your device, you may want to filter the same report by a different barcode.

* If the barcode icon has a filter ![Filtered icon](media/mobile-apps-scan-barcode-iphone/power-bi-barcode-filtered-icon-black.png), the filter is active and the report is already filtered by a barcode.
* If the icon doesn't contain a filter ![Unfiltered icon](media/mobile-apps-scan-barcode-iphone/power-bi-barcode-unfiltered-icon.png), the filter isn't active and the report isn't filtered by a barcode.

Either way, tap the icon to open a small menu with a floating scanner.

* Focus the scanner on the new item to change the filter of the report to a different barcode value.
* Select **Clear barcode filter** to go back to the unfiltered report.
* Select **Filter by recent barcodes** to change the report filter to one of the barcodes you've scanned within the current session.

## Issues with scanning a barcode

Here are some messages you may see when you scan a barcode on a product.

### "Couldn't filter report..."

The report you choose to filter is based on a data model that does not include this barcode value. For example, the product "mineral water" isn't included in the report.

### All/some of the visuals in the report don't contain any value

The barcode value you scanned exists in your model, but all or some of the visuals on your report don't contain this value, and therefore filtering will return an empty state.

Try looking at other report pages, or edit your report in Power BI Desktop so that it contains this value.

### "Looks like you don't have any reports that can be filtered by barcodes."

This means you don't have any barcode-enabled reports. The barcode scanner can only filter reports that have a column marked as **Barcode**. Make sure you or the report owner has tagged a column as **Barcode** in Power BI Desktop. Learn more about [tagging a barcode field in Power BI Desktop](../../transform-model/desktop-mobile-barcodes.md).

### "Couldn't filter report - Looks like this barcode doesn't exist in the report data."

The report you chose to filter is based on a data model that doesn't include this barcode value. For example, the product "mineral water" isn't included in the report.

You can scan a different product, choose a different report (if more than one report is available), or view the report unfiltered.

## Next steps
* [Tag a barcode field in Power BI Desktop](../../transform-model/desktop-mobile-barcodes.md)
* [Dashboard tiles in Power BI](../end-user-tiles.md)
* [Dashboards in Power BI](../end-user-dashboards.md)
64.097222
299
0.757313
eng_Latn
0.996172
ff544839d11b8cfc7d4687c0f70d2cf2c7ebe777
22
md
Markdown
README.md
mazerunner70/simple-secrets-scala
cd26da9a293e27e191236bd222ee855a7fd87c1c
[ "Apache-2.0" ]
null
null
null
README.md
mazerunner70/simple-secrets-scala
cd26da9a293e27e191236bd222ee855a7fd87c1c
[ "Apache-2.0" ]
null
null
null
README.md
mazerunner70/simple-secrets-scala
cd26da9a293e27e191236bd222ee855a7fd87c1c
[ "Apache-2.0" ]
null
null
null
# simple-secrets-scala
22
22
0.818182
spa_Latn
0.310682
ff545c9862175ba935387b098bdc37079e2144a7
1,514
md
Markdown
README.md
samarsinghal/lab-k8-scg
71ec9dab0607c5520a81318e6b4613dfb8c83e0b
[ "Apache-2.0" ]
null
null
null
README.md
samarsinghal/lab-k8-scg
71ec9dab0607c5520a81318e6b4613dfb8c83e0b
[ "Apache-2.0" ]
null
null
null
README.md
samarsinghal/lab-k8-scg
71ec9dab0607c5520a81318e6b4613dfb8c83e0b
[ "Apache-2.0" ]
null
null
null
Lab - Kubernetes Spring Cloud Gateway
=============================

This repository holds source files for a workshop on Kubernetes Spring Cloud Gateway.

Warning
-------

This workshop uses a privileged pod for the workshop environment. You should evaluate the workshop configuration and the implications of this. It is recommended that you do not deploy this workshop to a production system.

Prerequisites
-------------

In order to use the workshop, you should have the eduk8s operator installed.

For installation instructions for the eduk8s operator, see:

* https://github.com/eduk8s/eduk8s-operator

Deployment
----------

To load the workshop definition, run:

```
kubectl apply -f https://github.com/samarsinghal/lab-k8-scg/master/resources/workshop.yaml
```

To deploy a sample training portal which hosts the workshop, run:

```
kubectl apply -f https://github.com/samarsinghal/lab-k8-scg/master/resources/training-portal.yaml
```

Then run:

```
kubectl get trainingportal/lab-k8-scg
```

This will output the URL to access the web portal for the training environment.

You need to be a cluster admin to create the deployment using this method.

Deletion
--------

To delete the training portal deployment, run:

```
kubectl delete -f https://github.com/samarsinghal/lab-k8-scg/master/resources/training-portal.yaml
```

When you are finished with the workshop definition, you can delete it by running:

```
kubectl delete -f https://github.com/samarsinghal/lab-k8-scg/master/resources/workshop.yaml
```
24.819672
98
0.747028
eng_Latn
0.962859
ff5470dfedaa3cac2d83f3bfca58b2dbb28a0bc5
496
md
Markdown
README.md
test-qq-mail-bot/bilive_client
b5af8a0ea63eb7d7a36bda30051883eb71d880ac
[ "MIT" ]
null
null
null
README.md
test-qq-mail-bot/bilive_client
b5af8a0ea63eb7d7a36bda30051883eb71d880ac
[ "MIT" ]
null
null
null
README.md
test-qq-mail-bot/bilive_client
b5af8a0ea63eb7d7a36bda30051883eb71d880ac
[ "MIT" ]
null
null
null
[![Paypal.me donate](https://img.shields.io/badge/Paypal.me-donate-yellow.svg)](https://www.paypal.me/lzppzr)

See the [Wiki](https://github.com/lzghzr/bilive_client/wiki/%E4%BD%BF%E7%94%A8%E6%96%B9%E6%B3%95(%E5%88%9D%E7%BA%A7)) for usage instructions.

Recommended forks:
* [Vector000/bilive_client](https://github.com/Vector000/bilive_client) by the author of the main-site feature plugin, with more complete extended features
* [StringKe/bilive_client](https://github.com/StringKe/bilive_client) standardizes the WebAPI so that it can provide more features

This is the client; more features require using it together with the server.
[Server](https://github.com/bilive/bilive_server)
45.090909
115
0.75
yue_Hant
0.631244
ff54aaa95c81fa0071ae3a24074abf2af7ab2cf9
10,797
md
Markdown
articles/app-service/environment/app-service-app-service-environment-layered-security.md
salem84/azure-docs.it-it
3ec6a13aebb82936591c7fc479f084be9bb8776d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service/environment/app-service-app-service-environment-layered-security.md
salem84/azure-docs.it-it
3ec6a13aebb82936591c7fc479f084be9bb8776d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service/environment/app-service-app-service-environment-layered-security.md
salem84/azure-docs.it-it
3ec6a13aebb82936591c7fc479f084be9bb8776d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Layered security architecture with App Service Environments - Azure
description: Implementing a layered security architecture with App Service Environments.
services: app-service
documentationcenter: ''
author: stefsch
manager: erikre
editor: ''

ms.assetid: 73ce0213-bd3e-4876-b1ed-5ecad4ad5601
ms.service: app-service
ms.workload: na
ms.tgt_pltfrm: na
ms.topic: article
ms.date: 08/30/2016
ms.author: stefsch
ms.custom: seodec18
ms.openlocfilehash: 2d9eedcdc66dceabdd6506c5b64f0c15c874efee
ms.sourcegitcommit: 82499878a3d2a33a02a751d6e6e3800adbfa8c13
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 08/28/2019
ms.locfileid: "70070138"
---
# <a name="implementing-a-layered-security-architecture-with-app-service-environments"></a>Implementing a layered security architecture with App Service Environments

## <a name="overview"></a>Overview

Since App Service Environments provide an isolated runtime environment deployed into a virtual network, developers can create a layered security architecture that offers different levels of network access for each physical application tier. A common need is to hide API back-ends from general Internet access and allow the APIs to be called only by upstream web apps. [Network security groups (NSGs)][NetworkSecurityGroups] can be used on subnets containing App Service Environments to restrict public access to API applications.

The following diagram shows an example architecture with a WebAPI-based app deployed on an App Service Environment. Three separate web app instances, deployed on three separate App Service Environments, make back-end calls to the same WebAPI app.

![Conceptual architecture][ConceptualArchitecture]

The green plus signs indicate that the network security group on the subnet containing "apiase" allows inbound calls from the upstream web apps, as well as calls from itself. The same network security group, however, explicitly denies access to general inbound traffic from the Internet.

The rest of this article covers in detail the steps needed to configure the network security group on the subnet containing "apiase."

## <a name="determining-the-network-behavior"></a>Determining the network behavior

To know which network security rules are needed, you must determine which network clients will be allowed to reach the App Service Environment containing the API app, and which clients will be blocked.

Since [network security groups (NSGs)][NetworkSecurityGroups] are applied to subnets, and App Service Environments are deployed into subnets, the rules contained in an NSG apply to **all** apps running on an App Service Environment. Using the sample architecture for this article, once a network security group is applied to the subnet containing "apiase", all apps running on the "apiase" App Service Environment will be protected by the same set of security rules.

* **Determine the outbound IP address of the upstream callers:** What are the IP addresses of the upstream callers? These addresses will need to be explicitly allowed access in the NSG. Since calls between App Service Environments are considered "Internet" calls, the outbound IP address assigned to each of the three upstream App Service Environments must be allowed access in the NSG for the "apiase" subnet. For more details on determining the outbound IP address for apps running in an App Service Environment, see the [network architecture][NetworkArchitecture] overview article.
* **Will the back-end API app need to call itself?** A subtle and sometimes overlooked point is the scenario where the back-end application needs to call itself. If a back-end API application on an App Service Environment needs to call itself, that call is also treated as an "Internet" call. In the sample architecture, access from the outbound IP address of the "apiase" App Service Environment must be allowed as well.

## <a name="setting-up-the-network-security-group"></a>Setting up the network security group

Once the set of outbound IP addresses is known, the next step is to create a network security group. Network security groups can be created for both Resource Manager based virtual networks and classic virtual networks. The following examples show how to create and configure an NSG on a classic virtual network using PowerShell.

For the sample architecture, the environments are located in South Central US, so an empty NSG is created in that region:

    New-AzureNetworkSecurityGroup -Name "RestrictBackendApi" -Location "South Central US" -Label "Only allow web frontend and loopback traffic"

First, an explicit allow rule is added for the Azure management infrastructure, as noted in the article on [inbound traffic][InboundTraffic] for App Service Environments.

    #Open ports for access by Azure management infrastructure
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW AzureMngmt" -Type Inbound -Priority 100 -Action Allow -SourceAddressPrefix 'INTERNET' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '454-455' -Protocol TCP

Next, two rules are added to allow HTTP and HTTPS calls from the first upstream App Service Environment ("fe1ase").

    #Grant access to requests from the first upstream web front-end
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTP fe1ase" -Type Inbound -Priority 200 -Action Allow -SourceAddressPrefix '65.52.xx.xyz' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '80' -Protocol TCP
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTPS fe1ase" -Type Inbound -Priority 300 -Action Allow -SourceAddressPrefix '65.52.xx.xyz' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '443' -Protocol TCP

Rinse and repeat for the second and third upstream App Service Environments ("fe2ase" and "fe3ase").

    #Grant access to requests from the second upstream web front-end
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTP fe2ase" -Type Inbound -Priority 400 -Action Allow -SourceAddressPrefix '191.238.xyz.abc' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '80' -Protocol TCP
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTPS fe2ase" -Type Inbound -Priority 500 -Action Allow -SourceAddressPrefix '191.238.xyz.abc' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '443' -Protocol TCP

    #Grant access to requests from the third upstream web front-end
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTP fe3ase" -Type Inbound -Priority 600 -Action Allow -SourceAddressPrefix '23.98.abc.xyz' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '80' -Protocol TCP
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTPS fe3ase" -Type Inbound -Priority 700 -Action Allow -SourceAddressPrefix '23.98.abc.xyz' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '443' -Protocol TCP

Finally, grant access to the outbound IP address of the App Service Environment containing the back-end API so that it can call back into itself.

    #Allow apps on the apiase environment to call back into itself
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTP apiase" -Type Inbound -Priority 800 -Action Allow -SourceAddressPrefix '70.37.xyz.abc' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '80' -Protocol TCP
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityRule -Name "ALLOW HTTPS apiase" -Type Inbound -Priority 900 -Action Allow -SourceAddressPrefix '70.37.xyz.abc' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '443' -Protocol TCP

No other network security rules are needed, because every NSG has a set of default rules that block inbound access from the Internet by default.

The full list of rules in the network security group is shown below. Note how the last rule, which is highlighted, blocks inbound access to all callers that have not been explicitly granted access.

![Network security group configuration][NSGConfiguration]

The final step is to apply the network security group to the subnet containing the "apiase" App Service Environment.

    #Apply the NSG to the backend API subnet
    Get-AzureNetworkSecurityGroup -Name "RestrictBackendApi" | Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName 'yourvnetnamehere' -SubnetName 'API-ASE-Subnet'

With the network security group applied to the subnet, only the three upstream App Service Environments and the App Service Environment containing the API back-end are allowed to call into the "apiase" environment.

## <a name="additional-links-and-information"></a>Additional links and information

Information on [network security groups](../../virtual-network/security-overview.md).

Understanding [outbound IP addresses][NetworkArchitecture] and App Service Environments.

[Network ports][InboundTraffic] used by App Service Environments.

[!INCLUDE [app-service-web-try-app-service](../../../includes/app-service-web-try-app-service.md)]

<!-- LINKS -->
[NetworkSecurityGroups]: https://azure.microsoft.com/documentation/articles/virtual-networks-nsg/
[NetworkArchitecture]:  app-service-app-service-environment-network-architecture-overview.md
[InboundTraffic]:  app-service-app-service-environment-control-inbound-traffic.md

<!-- IMAGES -->
[ConceptualArchitecture]: ./media/app-service-app-service-environment-layered-security/ConceptualArchitecture-1.png
[NSGConfiguration]:  ./media/app-service-app-service-environment-layered-security/NSGConfiguration-1.png
98.154545
705
0.803371
ita_Latn
0.988487
ff5520136f6375cea859c8b5efcc508d62913375
2,588
md
Markdown
README.md
marcos-c1/Desktop-App
c594e81f3f882a90de480a6010d34b6987644dd6
[ "MIT" ]
null
null
null
README.md
marcos-c1/Desktop-App
c594e81f3f882a90de480a6010d34b6987644dd6
[ "MIT" ]
null
null
null
README.md
marcos-c1/Desktop-App
c594e81f3f882a90de480a6010d34b6987644dd6
[ "MIT" ]
null
null
null
# Desktop-App

A basic desktop application for registering data in a "Hotel Fazenda" (farm hotel) domain. Written in Java, using the Swing API from the JDK itself.

<img height="300" align="center" src="https://github.com/marcos-c1/Desktop-App/blob/main/logo/tabela_Client.png">

> Customer registration table

## Instructions for setting up the environment

1. Install the **postgres** database with your Linux system's own package manager in a terminal.

```sh
$ apt-get install postgresql postgresql-contrib
```

Or use the postgres website and download it according to your operating system.

2. Enter the database from the terminal using the default user

```sh
$ sudo su postgres -c psql postgres
```

3. Create a database named **"hotel_fazenda"** or **"hf"** and access it

- Creation...

```sh
postgres=# create database hotel_fazenda;
```

Or

```sh
postgres=# create database hf;
```

- Access...

```sh
postgres=# \c hotel_fazenda
```

Or

```sh
postgres=# \c hf
```

4. Once inside, run the **hotel_fazenda.sql** script (make sure the path to the folder's directory is correct).

```sh
postgres=# \i scriptbancorh.sql
```

5. In the source code, more specifically in the *persistencia* package, change the ```user``` and ```senha``` (password) Strings according to your postgres configuration. If you are not using the default connection address localhost:5432, change it as well.

```java
public static Connection getConnection() {
		String driver = "org.postgresql.Driver";
		String user = "postgres";/* Put the user created for database access */
		String senha = "SuaSenha";/* Put the password for database access */
		String url = "jdbc:postgresql://localhost:5432/hf";/* Put the server where the database is installed */
        /* [....] */
}
```

6. With the database configured, open a terminal and go to the main folder that contains the **hf.jar** file. This file contains the desktop application; to run it, use the following command:

```sh
$ java -jar hf.jar
```

- Done...

## Considerations ##

The structure, or architectural pattern, used during development was MVC (Model-View-Controller), which splits responsibilities into sectors; in other words, it applies good programming practices such as polymorphism, encapsulation, and inheritance in the implementation, and keeps the code quite readable and understandable throughout the development process.

Since this is a basic application, front-end aside, any suggestion or constructive criticism is welcome and will help my personal growth.
32.35
344
0.735317
por_Latn
0.999271
ff5565244b011088978bdad166793fdf65643858
278
md
Markdown
src/dt-archive/william-tyrrell.md
martin-banks/dt-digital-toolkit-site
0fe8456d801bb6a85fbecf2b3a0436c3e7df396c
[ "MIT" ]
null
null
null
src/dt-archive/william-tyrrell.md
martin-banks/dt-digital-toolkit-site
0fe8456d801bb6a85fbecf2b3a0436c3e7df396c
[ "MIT" ]
null
null
null
src/dt-archive/william-tyrrell.md
martin-banks/dt-digital-toolkit-site
0fe8456d801bb6a85fbecf2b3a0436c3e7df396c
[ "MIT" ]
null
null
null
--- bylines: 'Candy Luan' capi: '0b0280ad435c5ee01d950d88b935cbda' date: '' description: '' preview: 'https://media.news.com.au/DTinteractive/williammissing/index.html' slug: '/william-tyrell' tech: '' thumb: '' title: '10 things we learned from the William Tyrell inquest' ---
23.166667
76
0.733813
yue_Hant
0.278935