hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
sequencelengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
sequencelengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
sequencelengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
03e63211fa262e3091f2e97ad4657079517502b1
4,027
md
Markdown
README.md
hattan/llano
298d6056eb74939cf99a3b64813571a116a4a62c
[ "MIT" ]
null
null
null
README.md
hattan/llano
298d6056eb74939cf99a3b64813571a116a4a62c
[ "MIT" ]
null
null
null
README.md
hattan/llano
298d6056eb74939cf99a3b64813571a116a4a62c
[ "MIT" ]
null
null
null
# llano

Demo Application deploying Ubuntu/Apache to Kubernetes and mounting content via Azure File Storage

### Ubuntu Base Image

This repo uses an Ubuntu image and configures Apache and PHP, thus creating a custom base image that is used by the app. [Base Image DockerFile](base/Dockerfile)

### Requirements

* [az cli](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [helm](https://helm.sh/)

#### While there are a lot of manual steps here, almost all will be automated via build pipelines

This application consists of several bits:

* Infrastructure - the Terraform scripts needed to bring up the infrastructure. <br/> Note: the role assignment in Terraform does not work; you have to use the [included shell script](infrastructure/kubernetes/aks_acr_link.sh). <br/> To install the infrastructure:
  * ```terraform init```
  * ```terraform plan --out=plan```
  * ```terraform apply "plan"```
  * Run the [aks_acr_link.sh file](infrastructure/kubernetes/aks_acr_link.sh)
  * Run the following to configure helm:
    * ```kubectl create serviceaccount --namespace kube-system tiller```
    * ```kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller```
    * ```helm init --service-account tiller```
  * Install the Nginx ingress:
    * ```helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=2```
  * Get the external IP of nginx-ingress:
    * ```kubectl get svc --all-namespaces | grep LoadBalancer | grep -v 'addon-http-application-routing-nginx-ingress'```
  * Get the resource ID of the public IP from the last step:
    * ```az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '<ip here>')].[id]" --output tsv```
  * Add a DNS entry for that IP:
    * ```az network public-ip update --ids <public_ip_id> --dns-name <dns_prefix>```
  * Update the host in [site/values.yaml](app/site/values.yaml) to be the URL received in the last step.
* Building the Base Image
  * Navigate to the [base directory](/base)
  * Build the image via ```docker build -t llano3e58acr.azurecr.io/unapache .```
  * Push to ACR via ```docker push llano3e58acr.azurecr.io/unapache```
* Building the App (note the app uses the base image created in the previous step)
  * Navigate to the [app directory](/app)
  * Build the image via ```docker build -t llano3e58acr.azurecr.io/unapp .```
  * Push to ACR via ```docker push llano3e58acr.azurecr.io/unapp```
* Create a storage account either through the Azure Portal or the CLI (CLI instructions [here](https://docs.microsoft.com/en-us/azure/aks/azure-files-volume)); an example CLI sketch also appears at the end of this README.
* Add a new File Share named 'sites' and within it add a folder called 'site1'
* Change the content of [index.html](app/www/html/index.html) to reflect that it's on Azure
* Upload [index.html](app/www/html/index.html) to sites/site1 (share/folder).
* Deploy a new site via helm:
  * ```helm install site --name site --set site.folder=site1```

### Useful commands

* Verify the cluster is running and you are connected: ```kubectl get pods --all-namespaces```
* See running pods after a deployment: ```kubectl get pods```
* See services: ```kubectl get svc```
* See ingresses: ```kubectl get ingresses```
* Describe a pod: ```kubectl describe pod <pod_name>``` (the pod name can be found via get pods)
* Helm list: ```helm ls```
* Helm delete: ```helm del --purge <name>``` (the name can be found in helm list)
* Show ACR tags for an image: ```az acr repository show-tags -n <acr_name> --repository <image>```
* Configure local kubectl via az: ```az aks get-credentials --resource-group php-poc --name poccluster``` (note: create.sh runs this for you)

### Resources

* [Manually create and use a volume with Azure Files share in Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/azure-files-volume)
* [Kubernetes Volumes + Subpath](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath)
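For the storage account and file share steps above, the following is a rough CLI sketch; the account name, resource group, and location are illustrative placeholders, not values from this repo:

```sh
# Illustrative names; adjust to your subscription. Creates the account,
# the 'sites' share, the 'site1' folder, and uploads the page.
az storage account create --name llanostorage --resource-group php-poc --location westus --sku Standard_LRS
CONN=$(az storage account show-connection-string --name llanostorage --resource-group php-poc --output tsv)
az storage share create --name sites --connection-string "$CONN"
az storage directory create --share-name sites --name site1 --connection-string "$CONN"
az storage file upload --share-name sites --path site1/index.html --source app/www/html/index.html --connection-string "$CONN"
```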
56.71831
173
0.726595
eng_Latn
0.859033
03e684c0e3f5e68a1f117b8595187934784382e9
1,280
md
Markdown
api/Project.Application.TimelineTextOnBar.md
italicize/VBA-Docs
8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Project.Application.TimelineTextOnBar.md
italicize/VBA-Docs
8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Project.Application.TimelineTextOnBar.md
italicize/VBA-Docs
8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Application.TimelineTextOnBar Method (Project)
keywords: vbapj.chm63
f1_keywords:
- vbapj.chm63
ms.prod: project-server
api_name:
- Project.Application.TimelineTextOnBar
ms.assetid: d57ec0d8-8e35-b6eb-1932-454210bc7dad
ms.date: 06/08/2017
---

# Application.TimelineTextOnBar Method (Project)

Changes the format of text to display as a callout or within the Timeline bar, for one or more selected tasks.

## Syntax

_expression_. `TimelineTextOnBar`( ` _TextOnBar_` )

_expression_ An expression that returns an [Application](./Project.Application.md) object.

### Parameters

|**Name**|**Required/Optional**|**Data Type**|**Description**|
|:-----|:-----|:-----|:-----|
| _TextOnBar_|Optional|**Boolean**|**False** if the selected tasks should be displayed as callouts; otherwise, **True**. The default value is **True**, which makes the task text show within the Timeline bar.|

### Return Value

**Boolean**

## Remarks

The **TimelineTextOnBar** method is equivalent to the **Display as Bar** and **Display as Callout** commands in the **Current Selection** group on the **Format** tab on the ribbon.

## Example

The following statement changes selected tasks on the Timeline bar to display as callouts.

```vb
TimelineTextOnBar TextOnBar:=False
```
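Conversely, since _TextOnBar_ defaults to **True**, the following statement moves the text for the selected tasks back onto the Timeline bar:

```vb
TimelineTextOnBar TextOnBar:=True
```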
23.703704
208
0.725781
eng_Latn
0.769043
03e6ae3578f89a918b9d56818228035e398ebd4d
1,984
md
Markdown
docs/components/spinner.md
desionlab/vue-vcl
5bf514fe1f6c0205a8aa5fa7bd14e49c6a30c0bb
[ "MIT" ]
null
null
null
docs/components/spinner.md
desionlab/vue-vcl
5bf514fe1f6c0205a8aa5fa7bd14e49c6a30c0bb
[ "MIT" ]
null
null
null
docs/components/spinner.md
desionlab/vue-vcl
5bf514fe1f6c0205a8aa5fa7bd14e49c6a30c0bb
[ "MIT" ]
null
null
null
# Spinner <Badge text="beta" type="warning"/>

Indicate the loading state of a component or page.

## Type

### Border

<ClientOnly>
<div class="vcl-example">
<VclSpinner type="border" class="mr-3" />
<VclSpinner type="border" variant="success" />
</div>
</ClientOnly>

```html
<template>
  <div>
    <VclSpinner type="border" variant="success" />
  </div>
</template>
```

### Grow

<ClientOnly>
<div class="vcl-example">
<VclSpinner type="grow" class="mr-3" />
<VclSpinner type="grow" variant="success" />
</div>
</ClientOnly>

```html
<template>
  <div>
    <VclSpinner type="grow" variant="success" />
  </div>
</template>
```

## Color

<ClientOnly>
<div class="vcl-example">
<VclSpinner type="border" class="mr-3" variant="primary" />
<VclSpinner type="border" class="mr-3" variant="secondary" />
<VclSpinner type="border" class="mr-3" variant="success" />
<VclSpinner type="border" class="mr-3" variant="danger" />
<VclSpinner type="border" class="mr-3" variant="warning" />
<VclSpinner type="border" class="mr-3" variant="info" />
<VclSpinner type="border" class="mr-3" variant="light" />
<VclSpinner type="border" class="mr-3" variant="dark" />
</div>
</ClientOnly>

```html
<template>
  <div>
    <VclSpinner type="border" variant="primary" />
    <VclSpinner type="border" variant="secondary" />
    <VclSpinner type="border" variant="success" />
    <VclSpinner type="border" variant="danger" />
    <VclSpinner type="border" variant="warning" />
    <VclSpinner type="border" variant="info" />
    <VclSpinner type="border" variant="light" />
    <VclSpinner type="border" variant="dark" />
  </div>
</template>
```

## Size

<ClientOnly>
<div class="vcl-example">
<VclSpinner type="border" class="mr-3" />
<VclSpinner type="border" variant="success" small />
</div>
</ClientOnly>

```html
<template>
  <div>
    <VclSpinner type="border" variant="success" small />
  </div>
</template>
```
22.804598
65
0.633569
kor_Hang
0.22693
03e6c9f62e9da39da2f4f10448d1d20768308086
1,854
md
Markdown
archives/zhihu-search/2022-03-23.md
slacken/trending-in-one
b9b8966a88c3885a11fc0bd93fddd208404d8e3b
[ "MIT" ]
null
null
null
archives/zhihu-search/2022-03-23.md
slacken/trending-in-one
b9b8966a88c3885a11fc0bd93fddd208404d8e3b
[ "MIT" ]
null
null
null
archives/zhihu-search/2022-03-23.md
slacken/trending-in-one
b9b8966a88c3885a11fc0bd93fddd208404d8e3b
[ "MIT" ]
null
null
null
# 2022-03-23

30 items in total

<!-- BEGIN ZHIHUSEARCH -->
<!-- Last updated Wed Mar 23 2022 23:10:40 GMT+0800 (China Standard Time) -->
1. [LeBron James posts a big triple-double as the Lakers beat the Cavaliers](https://www.zhihu.com/search?q=湖人)
1. [One of flight MU5735's black boxes has been found](https://www.zhihu.com/search?q=MU5735 黑匣子)
1. [Chinese race walkers awarded a reallocated Olympic gold medal](https://www.zhihu.com/search?q=竞走金牌)
1. [CDPR confirms a new installment of The Witcher](https://www.zhihu.com/search?q=巫师3)
1. [Domestic suspense puzzle game San Fu (《三伏》)](https://www.zhihu.com/search?q=三伏)
1. [Elderly person tries to wire money to "inherit a tycoon's estate"](https://www.zhihu.com/search?q=老人被骗)
1. [Rain at the China Eastern crash site](https://www.zhihu.com/search?q=东航坠机地下雨)
1. [One Piece chapter 1044 spoilers](https://www.zhihu.com/search?q=海贼王1044)
1. [Durant posts 37+9+8 as the Nets beat the Jazz](https://www.zhihu.com/search?q=篮网)
1. [Man steals a bike to ride to Anhui to go online](https://www.zhihu.com/search?q=男子偷车上网)
1. [US NSA used cyber weapons against China](https://www.zhihu.com/search?q=美国国安局)
1. [No survivors found yet in the China Eastern crash](https://www.zhihu.com/search?q=暂未发现幸存人员)
1. [COVID-19 antigen test kits temporarily added to medical insurance coverage](https://www.zhihu.com/search?q=新冠抗原试剂)
1. [Jujutsu Kaisen claims Taiwan is a country](https://www.zhihu.com/search?q=咒术回战)
1. [Unable to divorce during the cooling-off period, spouse makes a will first](https://www.zhihu.com/search?q=离婚冷静期遗嘱)
1. [Jilin province reports "2320+528" new local cases](https://www.zhihu.com/search?q=吉林疫情)
1. [Handan car crash suspect placed under criminal detention](https://www.zhihu.com/search?q=邯郸车祸)
1. [Second "Tiangong Classroom" lesson](https://www.zhihu.com/search?q=天宫课堂)
1. [State Council investigation team to brief on the MU5735 search progress](https://www.zhihu.com/search?q=MU5735)
1. [iOS 15.4 battery drain is severe](https://www.zhihu.com/search?q=iOS 15.4 电池)
1. [Comic adaptation of Shao Song (《绍宋》) released](https://www.zhihu.com/search?q=绍宋漫画)
1. [ZTE wins its case in the US](https://www.zhihu.com/search?q=中兴通讯)
1. [Sour beans in luosifen pickled by foot-trampling](https://www.zhihu.com/search?q=酸豆角)
1. [Gran Turismo 7 hits a new low in user ratings](https://www.zhihu.com/search?q=GT赛车7)
1. [China adds 10 million in humanitarian aid](https://www.zhihu.com/search?q=人道主义援助)
1. [World Athletics certifies the Chinese relay team's bronze medal](https://www.zhihu.com/search?q=中国接力队铜牌)
1. [Chinese relay team's Olympic bronze medal certified](https://www.zhihu.com/search?q=中国接力队)
1. [Warriors lose to the Spurs on a near buzzer-beater](https://www.zhihu.com/search?q=勇士)
1. [March 21 is World Sleep Day](https://www.zhihu.com/search?q=世界睡眠日)
1. [Attack on Titan final season updates](https://www.zhihu.com/search?q=进击的巨人)
<!-- END ZHIHUSEARCH -->
48.789474
71
0.704423
yue_Hant
0.725425
03e6ff1d329fe3ac5c191e87b7c0de3383083607
462
md
Markdown
README.md
gemasphi/alpha-zero-torch
ccaf23266c0cc61f4c84294681adc522609d0470
[ "MIT" ]
6
2019-11-14T19:16:57.000Z
2020-11-08T13:53:30.000Z
README.md
gemasphi/alpha-zero-torch
ccaf23266c0cc61f4c84294681adc522609d0470
[ "MIT" ]
2
2020-02-14T20:10:09.000Z
2021-12-20T03:43:43.000Z
README.md
gemasphi/alpha-zero-torch
ccaf23266c0cc61f4c84294681adc522609d0470
[ "MIT" ]
2
2020-09-02T11:39:01.000Z
2021-12-02T22:05:50.000Z
# alpha-zero-torch

A simple, sequential, generic PyTorch AlphaZero implementation. Currently, the only game implemented is tic-tac-toe, with a configurable board size and win-condition length. This AlphaZero implementation is currently in an alpha version, and new features will be added over the next months.

Currently, there exist three branches in this repo:

* Master: contains a simple sequential PyTorch implementation
* virtual_loss: master + Monte Carlo virtual loss
46.2
111
0.811688
eng_Latn
0.997689
03e799b599227adcf1270e4184b128e45e1a5a67
12,381
md
Markdown
articles/iot-fundamentals/index.md
JungYeolYang/azure-docs.zh-cn
afa9274e7d02ee4348ddb6ab81878b9ad1e52f52
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/iot-fundamentals/index.md
JungYeolYang/azure-docs.zh-cn
afa9274e7d02ee4348ddb6ab81878b9ad1e52f52
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/iot-fundamentals/index.md
JungYeolYang/azure-docs.zh-cn
afa9274e7d02ee4348ddb6ab81878b9ad1e52f52
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
layout: HubPage
hide_bc: false
title: Azure Internet of Things documentation - Tutorials and API Reference | Microsoft Docs
description: Azure Internet of Things (IoT) is a collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets. In simpler terms, an IoT solution is made up of one or more IoT devices and one or more back-end services running in the cloud that communicate with each other.
services: azure-iot
author: dsk-2015
manager: philmea
ms.service: azure-iot
ms.topic: landing-page
ms.date: 02/12/2019
ms.author: dkshir
ms.openlocfilehash: d19b6fff3bf2b43ecb692bd8d8e33a202e969570
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "61216490"
---
<div id="main" class="v2">
<div class="container">
<h1>Azure Internet of Things documentation</h1>
<p>Azure Internet of Things (IoT) is a collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets.</p><p></p>
<ul class="cardsY panelContent singlePanelContent" style="display:flex!important;">
<li>
<a href="../iot-fundamentals/iot-introduction.md">
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_overview.svg" alt="" /></div></div>
<div class="cardText">
<h3>What is Azure IoT?</h3>
<p>Learn the basics of Azure IoT, its use cases, and get an introduction to the available services.</p>
</div>
</div></div></div>
</a>
</li>
<li>
<a href="../iot-fundamentals/iot-services-and-technologies.md">
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_get-started.svg" alt="" /></div></div>
<div class="cardText">
<h3>Azure IoT services and technologies</h3>
<p>Learn which SaaS and PaaS services, solutions, and technologies are available in the Azure IoT portfolio.</p>
</div>
</div></div></div>
</a>
</li>
<li>
<a href="../iot-fundamentals/iot-security-architecture.md">
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_security.svg" alt="" /></div></div>
<div class="cardText">
<h3>Security for Azure IoT services</h3>
<p>Learn how security is an integral part of the Azure IoT infrastructure, from devices to the cloud.</p>
</div>
</div></div></div>
</a>
</li>
</ul>
<h2>Azure IoT services</h2>
<ul class="cardsF panelContent singlePanelContent cols cols3" style="display:flex!important;">
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_iot_hub.svg" alt="" /></div></div>
<div class="cardText">
<h3>Build your own IoT solution with Azure IoT Hub</h3><br>
<p><a href="../iot-hub/about-iot-hub.md">What is Azure IoT Hub?</a></p>
<p><a href="../iot-hub/quickstart-send-telemetry-c.md">Send telemetry from a device</a></p>
<p><a href="../iot-hub/quickstart-control-device-node.md">Control a device from the cloud</a></p>
<p><a href="../iot-hub/index.yml"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_iot_central.svg" alt="" /></div></div>
<div class="cardText">
<h3>Deploy a fully managed IoT solution with Azure IoT Central</h3><br>
<p><a href="../iot-central/overview-iot-central.md">What is Azure IoT Central?</a></p>
<p><a href="../iot-central/quick-deploy-iot-central.md">Create an IoT Central application</a></p>
<p><a href="../iot-central/tutorial-define-device-type.md">Define a device type</a></p>
<p><a href="../iot-central/index.yml"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_iot_accelerators.svg" alt="" /></div></div>
<div class="cardText">
<h3>Customize prebuilt IoT solutions with Azure IoT solution accelerators</h3><br>
<p><a href="../iot-accelerators/about-iot-accelerators.md">What are Azure IoT solution accelerators?</a></p>
<p><a href="../iot-accelerators/quickstart-remote-monitoring-deploy.md">Try the remote monitoring solution</a></p>
<p><a href="../iot-accelerators/quickstart-connected-factory-deploy.md">Try the connected factory solution</a></p>
<p><a href="../iot-accelerators/index.md"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_iot_edge.svg" alt="" /></div></div>
<div class="cardText">
<h3>Run cloud intelligence on devices with Azure IoT Edge</h3><br>
<p><a href="../iot-edge/about-iot-edge.md">What is Azure IoT Edge?</a></p>
<p><a href="../iot-edge/quickstart-linux.md">Deploy IoT Edge modules to a Linux device</a></p>
<p><a href="../iot-edge/quickstart.md">Deploy IoT Edge modules to a Windows device</a></p>
<p><a href="../iot-edge/index.yml"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_digital_twins.svg" alt="" /></div></div>
<div class="cardText">
<h3>Create connected, intelligent infrastructure with Azure Digital Twins</h3><br>
<p><a href="../digital-twins/about-digital-twins.md">What is Azure Digital Twins?</a></p>
<p><a href="../digital-twins/quickstart-view-occupancy-dotnet.md">Find available rooms in a building</a></p>
<p><a href="../digital-twins/tutorial-facilities-setup.md">Monitor an office building</a></p>
<p><a href="../digital-twins/index.yml"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_iot_dps.svg" alt="" /></div></div>
<div class="cardText">
<h3>Securely provision devices at scale with the Azure IoT Hub Device Provisioning Service</h3><br>
<p><a href="../iot-dps/about-iot-dps.md">What is the Azure IoT Hub Device Provisioning Service?</a></p>
<p><a href="../iot-dps/quick-setup-auto-provision-cli.md">Set up the service</a></p>
<p><a href="../iot-dps/quick-create-simulated-device.md">Provision a TPM device</a></p>
<p><a href="../iot-dps/index.yml"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_time_series_insights.svg" alt="" /></div></div>
<div class="cardText">
<h3>Explore and analyze device data with Azure Time Series Insights</h3><br>
<p><a href="../time-series-insights/time-series-insights-update-overview.md">What is Azure Time Series Insights?</a></p>
<p><a href="../time-series-insights/time-series-quickstart.md">Explore Time Series Insights</a></p>
<p><a href="../time-series-insights/tutorial-create-tsi-sample-spa.md">Create a Time Series Insights web app</a></p>
<p><a href="../time-series-insights/index.yml"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
<li>
<div class="cardSize"><div class="cardPadding"><div class="card">
<div class="cardImageOuter"><div class="cardImage"><img src="media/index/i_azure_maps.svg" alt="" /></div></div>
<div class="cardText">
<h3>Enable mobility in your enterprise with Azure Maps</h3><br>
<p><a href="../azure-maps/about-azure-maps.md">What is Azure Maps?</a></p>
<p><a href="../azure-maps/quick-demo-map-app.md">Create an interactive search map</a></p>
<p><a href="../azure-maps/tutorial-route-location.md">Plan a route to a point of interest</a></p>
<p><a href="../azure-maps/index.yml"><i>See more &gt;</i></a></p>
</div>
</div></div></div>
</li>
</ul>
</div>
</div>
49.923387
144
0.370002
yue_Hant
0.191956
03e7e9e4df9d0a5f593776a4cfbeb431ccdc08c8
2,870
md
Markdown
docs/analytics-platform-system/configure-sql-server-pdw-for-remote-table-copies.md
L3onard80/sql-docs.it-it
f73e3d20b5b2f15f839ff784096254478c045bbb
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/analytics-platform-system/configure-sql-server-pdw-for-remote-table-copies.md
L3onard80/sql-docs.it-it
f73e3d20b5b2f15f839ff784096254478c045bbb
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/analytics-platform-system/configure-sql-server-pdw-for-remote-table-copies.md
L3onard80/sql-docs.it-it
f73e3d20b5b2f15f839ff784096254478c045bbb
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Remote table copies
description: Describes how to configure Parallel Data Warehouse to use the remote table copy feature to copy tables to SMP SQL Server databases on non-appliance servers.
author: mzaman1
ms.prod: sql
ms.technology: data-warehouse
ms.topic: conceptual
ms.date: 04/17/2018
ms.author: murshedz
ms.reviewer: martinle
ms.custom: seo-dt-2019
ms.openlocfilehash: 6c9a0a29b543eb287c7e233d6b1ea77bb2a0d45c
ms.sourcegitcommit: b87d36c46b39af8b929ad94ec707dee8800950f5
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 02/08/2020
ms.locfileid: "74401259"
---
# <a name="configure-parallel-data-warehouse-for-remote-table-copies"></a>Configure Parallel Data Warehouse for remote table copies

Describes how to configure SQL Server PDW to use the remote table copy feature to copy tables to SMP SQL Server databases on non-appliance servers. This topic describes one of the configuration steps for setting up remote table copy. For a list of all the configuration steps, see [Remote table copy](remote-table-copy.md).

## <a name="before-you-begin"></a>Before you begin

To configure SQL Server PDW to use remote table copy, you need to:

- Have an Analytics Platform System administrator account with the ability to log in directly to the <strong>*appliance_domain*-ad01</strong> and <strong>*appliance_domain*-ad02</strong> nodes.
- Know the host name or IP name of the destination server.

## <a name="HowToPDW"></a>Configure SQL Server PDW for remote table copy: update host names in DNS

The **CREATE REMOTE TABLE** statement, used for remote table copies, specifies the destination server by using the IP address or IP name of the SMP Windows system. To use the IP name, you need to add entries for correct name resolution to the DNS server. The following procedure shows how to update the DNS server.

1. Log on to the Active Directory node (usually <strong>*appliance_domain*-ad01</strong>).
2. Open DNS Manager. It is located under **Administrative Tools** on the **Start** menu.
3. Use DNS Manager to add the IP name.

## <a name="see-also"></a>See also

<!-- MISSING LINKS [Common Metadata Query Examples &#40;SQL Server PDW&#41;](../sqlpdw/common-metadata-query-examples-sql-server-pdw.md) -->
[Use a DNS forwarder to resolve non-appliance DNS names](use-a-dns-forwarder-to-resolve-non-appliance-dns-names.md)
<!-- MISSING LINKS [Security - Configure Domain Trusts &#40;SQL Server PDW&#41;](../sqlpdw/security-configure-domain-trusts-sql-server-pdw.md) -->
56.27451
290
0.771777
ita_Latn
0.971698
03e8c52b4d80870e8f32dadc0add94e1f524b9c0
967
md
Markdown
src/main/java/com/learn/java/leetcode/lc0643/README.md
philippzhang/leetcodeLearnJava
ce39776b8278ce614f23f61faf28ca22bfa525e7
[ "Apache-2.0" ]
2
2019-04-25T03:13:07.000Z
2021-10-02T09:19:08.000Z
src/main/java/com/learn/java/leetcode/lc0643/README.md
philippzhang/leetcodeLearnJava
ce39776b8278ce614f23f61faf28ca22bfa525e7
[ "Apache-2.0" ]
1
2021-03-02T03:42:43.000Z
2021-03-02T03:42:43.000Z
src/main/java/com/learn/java/leetcode/lc0643/README.md
philippzhang/leetcodeLearnJava
ce39776b8278ce614f23f61faf28ca22bfa525e7
[ "Apache-2.0" ]
null
null
null
# [643. Maximum Average Subarray I][enTitle]

**Easy**

Given an array consisting of *n* integers, find the contiguous subarray of given length *k* that has the maximum average value. And you need to output the maximum average value.

Example 1:

```
Input: [1,12,-5,-6,50,3], k = 4
Output: 12.75
Explanation: Maximum average is (12-5-6+50)/4 = 51/4 = 12.75
```

Note:

1. 1 <= *k* <= *n* <= 30,000.
2. Elements of the given array will be in the range [-10,000, 10,000].

# [643. Maximum Average Subarray I (LeetCode-CN)][cnTitle]

**Easy**

Given *n* integers, find the contiguous subarray of length *k* with the maximum average value, and output that maximum average.

**Example:**

```
Input: [1,12,-5,-6,50,3], k = 4
Output: 12.75
Explanation: Maximum average is (12-5-6+50)/4 = 51/4 = 12.75
```

**Constraints:**

- 1 <= *k* <= *n* <= 30,000.
- The given data is in the range [-10,000, 10,000].

# Algorithm idea

# Test cases

```
643. Maximum Average Subarray I
643. 子数组最大平均数 I
Easy
```

[enTitle]: https://leetcode.com/problems/maximum-average-subarray-i/
[cnTitle]: https://leetcode-cn.com/problems/maximum-average-subarray-i/
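The "Algorithm idea" section above is left empty in this file. For reference, a rough sketch of the standard sliding-window approach (illustrative; not necessarily this repository's solution):

```java
// Sliding-window sketch: keep a running sum of a window of size k and
// slide it one element at a time, tracking the maximum sum seen.
class Solution {
    public double findMaxAverage(int[] nums, int k) {
        int sum = 0;
        for (int i = 0; i < k; i++) sum += nums[i]; // sum of the first window
        int max = sum;
        for (int i = k; i < nums.length; i++) {
            sum += nums[i] - nums[i - k];           // slide the window by one
            max = Math.max(max, sum);
        }
        return (double) max / k;
    }
}
```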
15.596774
181
0.616339
eng_Latn
0.553352
03e8ca495569248ceda6c339d8aa1c532fce2c4e
419
md
Markdown
Documentation/Api/Heirloom.Core/Heirloom/GameLoop/Graphics.md
Chamberlain91/heirloom
c3eaebb5a259386576d6912b39b6b012723cd60c
[ "Zlib" ]
15
2019-09-01T13:14:49.000Z
2022-01-24T08:14:30.000Z
Documentation/Api/Heirloom.Core/Heirloom/GameLoop/Graphics.md
Chamberlain91/heirloom
c3eaebb5a259386576d6912b39b6b012723cd60c
[ "Zlib" ]
8
2019-09-12T04:50:28.000Z
2020-07-24T06:36:21.000Z
Documentation/Api/Heirloom.Core/Heirloom/GameLoop/Graphics.md
Chamberlain91/heirloom
c3eaebb5a259386576d6912b39b6b012723cd60c
[ "Zlib" ]
1
2020-05-13T14:55:12.000Z
2020-05-13T14:55:12.000Z
# Heirloom.Core

> **Framework**: .NETStandard,Version=v2.1
> **Assembly**: [Heirloom.Core][0]

## GameLoop.Graphics (Property)

> **Namespace**: [Heirloom][0]
> **Declaring Type**: [GameLoop][1]

### Graphics

Gets the associated render context.

```cs
public GraphicsContext Graphics { get; }
```

> **Returns**: [GraphicsContext][2]

[0]: ../../../Heirloom.Core.md
[1]: ../GameLoop.md
[2]: ../GraphicsContext.md
17.458333
44
0.634845
yue_Hant
0.335225
03e8de7d20294cce071adc973a0e643594b38e24
4,132
md
Markdown
README.md
cbib/DeepSpot
9d9a63b4f67c5f016b11bdc4ff1ae4c589769786
[ "MIT" ]
9
2021-11-26T01:03:28.000Z
2022-02-02T14:15:34.000Z
README.md
rainwashautumn/DeepSpot
9d9a63b4f67c5f016b11bdc4ff1ae4c589769786
[ "MIT" ]
null
null
null
README.md
rainwashautumn/DeepSpot
9d9a63b4f67c5f016b11bdc4ff1ae4c589769786
[ "MIT" ]
1
2022-01-22T11:40:28.000Z
2022-01-22T11:40:28.000Z
# DeepSpot

**DeepSpot** is a CNN (Convolutional Neural Network) dedicated to the enhancement of fluorescent spots in microscopy images, enabling downstream mRNA spot detection using commonly used tools without the need for parameter fine-tuning.

**DeepSpot** is based on a multi-network architecture, using atrous convolution (A) for context reasoning and a twisted ResNet architecture (B) to automatically enhance the mRNA spots in images.

| DeepSpot Network Architecture |
| ------------- |
| ![](figures/network.svg) |
| ![](figures/original_vs_pred.png) |

## DeepSpot plugin for Napari

A Napari plugin is available [here](https://github.com/ebouilhol/napari-DeepSpot) to use the DeepSpot pretrained model. Napari uses a lot of memory, so for large volumes of images it is recommended to use the following instructions instead of the DeepSpot plugin.

## Build DeepSpot from source

### Installation

Clone the repository from [github](https://github.com/cbib/DeepSpot)

`git clone https://github.com/cbib/deepspot.git`

### Install dependencies with Conda

DeepSpot requires Python >= 3.6 and TensorFlow >= 2.2.

To use a tested environment:

`conda env create -f deepspot.yaml`

Then activate the environment:

`conda activate deepspot`

## Usage

### Code organization

DeepSpot is organized as a Python package:

* [dataset.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/dataset.py) to generate datasets of paired images (raw images and ground truth)
* [global_var.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/global_var.py) is the config file containing data paths and training variables (batch size, learning rate...)
* [network.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/network.py) contains the architecture of the network
* [predict.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/predict.py) used for prediction when the network is trained
* [residual_blocks.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/residual_blocks.py) contains residual blocks called by the network
* [train.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/train.py) to train the network

### Configuration file

To launch your own training and prediction, [global_var.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/global_var.py) needs to be updated according to your needs. The default configuration is the one giving the best results according to the hyperparameter search performed with Ray Tune. However, if needed, you can change the number of epochs, batch size, image size, learning rate, dropout rate, number of filters, and so on.

**Check and update the paths to the data before training.**

### Training

Before training, if you have multiple GPUs, please set the CUDA visible devices variable to the desired GPU id, for example 0:

`os.environ["CUDA_VISIBLE_DEVICES"] = "0"`

Then launch the training with

`python train.py`

The training will output a model (the model name can be set in [global_var.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/global_var.py)).

### Prediction

Be sure to have set:

* The right path to the data to be predicted
* The right path to the model you want to use

Then launch

`python predict.py`

The prediction will output the predicted images in a /prediction folder under the prediction path set in [global_var.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/global_var.py)

### Use an existing model

To use our best model for prediction, leave the `save_model_folder` default value or set `save_model_folder = "models/Mmixed/"` in [global_var.py](https://github.com/cbib/DeepSpot/blob/master/deepspot/global_var.py)

Then launch

`python predict.py`

## Support

If you have any question relative to the repository, please open an [issue](https://github.com/cbib/deepspot).

You can also contact [Emmanuel Bouilhol](mailto:emmanuel.bouilhol[AT]u-bordeaux.fr).

## Contributing

Contributions are very welcome. Please use [Pull requests](https://github.com/cbib/deepspot/pulls). For major changes, please open an [issue](https://github.com/cbib/deepspot) first to discuss what you would like to change.
37.908257
208
0.770329
eng_Latn
0.954518
03ea1493486c716bb156fb905d8358dfdaeeeb07
685
md
Markdown
doc/pl/intro/index.md
ix4/JavaScript-Garden
0df840ceb5d57e0fd9a19de3bdcc420d8f69b2cf
[ "MIT" ]
2,134
2015-01-04T14:10:14.000Z
2022-03-29T07:36:20.000Z
doc/pl/intro/index.md
ix4/JavaScript-Garden
0df840ceb5d57e0fd9a19de3bdcc420d8f69b2cf
[ "MIT" ]
112
2015-01-14T11:04:52.000Z
2021-08-19T09:50:29.000Z
doc/pl/intro/index.md
ix4/JavaScript-Garden
0df840ceb5d57e0fd9a19de3bdcc420d8f69b2cf
[ "MIT" ]
403
2015-01-07T03:42:01.000Z
2022-02-26T06:21:00.000Z
## Intro

**JavaScript Garden** is a growing collection of documentation about the quirkiest parts of the JavaScript language. It helps to avoid the most commonly made mistakes, subtle bugs, performance problems, and bad practices that inexperienced JavaScript programmers may run into while trying to learn the ins and outs of the language.

JavaScript Garden does **not** aim to teach you JavaScript. Basic knowledge of the language is required to understand the topics covered in this guide. To learn the basics of JavaScript, visit the excellent [guide][1] on the Mozilla Developer Network.

[1]: https://developer.mozilla.org/en/JavaScript/Guide
48.928571
82
0.827737
pol_Latn
0.999998
03ea50ddf994521e0d256ae9c12b35deafc6b00c
783
markdown
Markdown
source/_posts/2015-01-06-y-con-los-reyes-llega-el-mus-para-iphone-y-ipad.markdown
donnaipe/donnaipe.github.io
ae65a1a8ddb996604739917edbd5578e90be4a43
[ "MIT" ]
null
null
null
source/_posts/2015-01-06-y-con-los-reyes-llega-el-mus-para-iphone-y-ipad.markdown
donnaipe/donnaipe.github.io
ae65a1a8ddb996604739917edbd5578e90be4a43
[ "MIT" ]
1
2020-02-12T16:12:30.000Z
2020-02-12T16:12:30.000Z
source/_posts/2015-01-06-y-con-los-reyes-llega-el-mus-para-iphone-y-ipad.markdown
donnaipe/donnaipe.github.io
ae65a1a8ddb996604739917edbd5578e90be4a43
[ "MIT" ]
null
null
null
---
layout: post
title: "And with the Three Kings comes El Mus for iPhone and iPad"
date: 2015-01-06 22:47:27 +0100
comments: true
sharing: true
footer: true
categories: [general, mus, iOS]
---

Today the Apple Store folks approved the release of [El Mus for iOS](https://itunes.apple.com/es/app/mus-don-naipe/id954161061?mt=8). So the Three Kings have finally arrived for everyone on iPhone, iPad, and iPod.

This version of El Mus is similar to the one I released for Android almost two months ago, so you can check out the details [in this post](/blog/2014/11/15/lanzamos-el-mus-en-android/).

Here are a couple of screenshots of **El Mus** for iPhone 6.

![El Mus on iPhone](/images/musIOS/musIOS1.jpg)
![El Mus on iPhone](/images/musIOS/musIOS2.jpg)

I hope you enjoy it.
41.210526
217
0.740741
spa_Latn
0.94415
03ea59a432ec44a55299238aea6c841bcad898df
2,148
md
Markdown
src/it/2019-03/08/02.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/it/2019-03/08/02.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/it/2019-03/08/02.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
---
title: Introducing the Sermon on the Mount
date: 18/08/2019
---

The longest sermon - or collection of teachings - delivered by Jesus is the Sermon on the Mount. His three-chapter survey of life in God's kingdom begins with a statement of values, what we today call the Beatitudes.

`Read Matthew 5:2-16. What are the common elements of these nine values, or kinds of people, that Jesus calls "blessed"?`

Along with the deep spiritual application of His words, we must not miss their practical sense as we read them. Jesus speaks of recognizing poverty in ourselves and in our world. He also speaks of righteousness, humility, mercy, peacemaking, and purity of heart. Consider the practical impact such qualities would have on our lives and on the world if they were put into practice. This practical reading is emphasized by the Savior's following statements, where He urges His disciples to be the salt and light of the earth (vv. 13-16).

Used properly, salt and light each add something decisive to their context. Salt enhances flavors, but it also preserves food; it is a symbol of the good we can represent for those around us. Light, similarly, sweeps away the darkness, revealing obstacles and offering, even from a distance, a point of reference for those who must find their way. "Let your light so shine before men, that they may see your good works and glorify your Father who is in heaven" (v. 16).

Salt and light are both symbols that point to our responsibility as disciples called to influence and improve the lives of those around us. We are salt and we are light when we do everything the right way, when our hearts are pure, when we practice humility, show mercy, contribute to peace, and endure oppression. Jesus begins this sermon by inviting us to embody these sometimes undervalued qualities of His kingdom.

`In what ways is your church salt and light in the society where it operates? Do you think it has helped improve the quality of life, for example, of the neighborhood where it stands?`
153.428571
966
0.803538
ita_Latn
0.999945
03ea7e8a33b96f7ab740de6adb5cfe6b6894c32a
287
md
Markdown
Docs/Manual/Framework.Network.Commands/PlayerCollection/PlayerCollection.md
sboron/godot4-fast-paced-network-fps-tps
f8f5bdfa3f9d0b7a4f19cabb542db59bf91ca23f
[ "MIT" ]
11
2022-03-26T00:21:47.000Z
2022-03-29T15:30:24.000Z
Docs/Manual/Framework.Network.Commands/PlayerCollection/PlayerCollection.md
sboron/godot4-fast-paced-network-fps-tps
f8f5bdfa3f9d0b7a4f19cabb542db59bf91ca23f
[ "MIT" ]
1
2022-03-30T23:31:49.000Z
2022-03-30T23:31:49.000Z
Docs/Manual/Framework.Network.Commands/PlayerCollection/PlayerCollection.md
sboron/godot4-fast-paced-network-fps-tps
f8f5bdfa3f9d0b7a4f19cabb542db59bf91ca23f
[ "MIT" ]
null
null
null
# PlayerCollection constructor

The default constructor.

```csharp
public PlayerCollection()
```

## See Also

* class [PlayerCollection](../PlayerCollection.md)
* namespace [Framework.Network.Commands](../../Framework.md)

<!-- DO NOT EDIT: generated by xmldocmd for Framework.dll -->
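A minimal usage sketch:

```csharp
// Create an empty collection via the default constructor.
var players = new PlayerCollection();
```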
19.133333
61
0.728223
kor_Hang
0.310892
03eb08720f382e745af3510cbc5e07556432ef6a
332
md
Markdown
README.md
usraptor2016/Python-Speech-to-text-flask
6afe6cb298fe6b95295c2e100cada781ee47f933
[ "MIT" ]
null
null
null
README.md
usraptor2016/Python-Speech-to-text-flask
6afe6cb298fe6b95295c2e100cada781ee47f933
[ "MIT" ]
null
null
null
README.md
usraptor2016/Python-Speech-to-text-flask
6afe6cb298fe6b95295c2e100cada781ee47f933
[ "MIT" ]
null
null
null
## Flask application for Speech to Text

Speech recognition is an important feature in several kinds of applications, such as home automation and artificial intelligence. This project shows how to make use of the SpeechRecognition library for Python, which can easily be integrated with Flask, to perform speech-to-text in the browser. here
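As a minimal sketch of how such an integration can look (illustrative, not the repository's actual code; the endpoint and form-field names are invented):

```python
# Minimal Flask + SpeechRecognition sketch: accept an uploaded audio file
# and return the transcript. Endpoint and field names are illustrative.
from flask import Flask, request, jsonify
import speech_recognition as sr

app = Flask(__name__)
recognizer = sr.Recognizer()

@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Expect a WAV/AIFF/FLAC file under the form field "audio"
    audio_file = request.files["audio"]
    with sr.AudioFile(audio_file) as source:
        audio = recognizer.record(source)  # read the entire file
    try:
        text = recognizer.recognize_google(audio)  # free web recognizer
    except sr.UnknownValueError:
        return jsonify(error="speech was unintelligible"), 422
    return jsonify(transcript=text)

if __name__ == "__main__":
    app.run(debug=True)
```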
83
286
0.825301
eng_Latn
0.99945
03eb0b57b483bc90e5fcbc0e02f4a30f85f5d099
2,460
md
Markdown
README.md
woilsy/android-mock
cf54ab54248367d867bc69a1fca845454ebefbed
[ "MIT" ]
null
null
null
README.md
woilsy/android-mock
cf54ab54248367d867bc69a1fca845454ebefbed
[ "MIT" ]
null
null
null
README.md
woilsy/android-mock
cf54ab54248367d867bc69a1fca845454ebefbed
[ "MIT" ]
null
null
null
[English](https://github.com/woilsy/android-mock/blob/master/README_EN.md)

#### Introduction

Whether in each development iteration or in a brand-new project, we always depend on back-end APIs. Sometimes the back end has handed over the data structures but the development schedule is nowhere in sight; sometimes even fetching a simple Boolean or Int has to wait on a back-end response. Developing this way is slow, and creating mock data by hand is time-consuming; worse, once the real API is ready you still have to fetch it again in the form of a network request. If you could get back whatever you want with just a few lines of code, that would certainly be nice.

**Questions and answers:**

Q: How are mock data returned?

A: Retrofit is currently the mainstream for Android networking, mostly in the form of REST-style interfaces, but it is still a real network request, so the data must be returned in network-request form; otherwise you cannot switch over with one click once the real API is connected. So a local HTTP server is needed. [nanohttpd](https://github.com/NanoHttpd/nanohttpd) fits this need exactly: by integrating it, we can create a local server, receive the request data sent by the client, and return the desired data according to configurable strategies.

Q: How are the desired data obtained?

A: For Retrofit-style requests, the return type usually appears as Observable\<xx\>/Call\<xxx\>/Flow\<xx\> and so on, so by parsing the first type parameter of the return type we can obtain the object to return. That is the data source, though returns like Call\<ResponseBody\>, or forms not listed above, are also possible.

Q: Which HTTP methods are supported?

A: Currently GET, POST, PUT, and DELETE.

Q: What is a static URL?

A: A static URL is one that can be read directly from the annotation, such as @GET("url"). By contrast, with @GET Call\<ResponseBody\> test(@Url String url), the concrete request address is only known at runtime; this is called a dynamic URL, so its value cannot be read directly, unless the method invocation can be intercepted and its arguments read (AOP can do this by hooking Retrofit's invoke process and importing the data at call time, at the cost of adding a plugin to the project).

Q: How is a Call\<ResponseBody\> return handled?

A: ResponseBody itself cannot be statically parsed; only serializable bean types (List, Map, class) can. So from the outside, the data can be imported at configuration time via assets, files, or a List\<MockData\>. Later, when the Method for this URL is parsed, imported data takes priority and the return object is not parsed again.

#### Architecture

/annotations - the library's annotations
/constants - constants
/data - data-related code
/entity - entity classes
/generate - data generation
/options - configuration
/parse - parsers
/server - the local HTTP server
/service - the Android service
/strategy - strategies, tied to parsing
/test - test code
/type - mock data types

MockLauncher: the launcher class, responsible for initializing the configuration, starting the Android mock service, starting the local server, and statically parsing the classes passed in.

#### Installation

Add the Maven repository

`maven { url 'https://jitpack.io' }`

Import the AAR [![](https://jitpack.io/v/com.woilsy/android-mock.svg)](https://jitpack.io/#com.woilsy/android-mock)

`implementation "com.woilsy:android-mock:latest.version"`

#### Usage

**Step 1**

Before the Retrofit instance is created, call

`MockLauncher.start(Context context, MockOptions options, MockObj... objs)`

**Parameters**

Context: used to start the service and parse files in assets.

MockOptions: mock-related configuration: enable logging, set the rules for returned mock data, set the fallback address, and set the Gson instance (when mocking Date.class, an inconsistent DateFormat will cause parsing to fail).

MockObj: the objects to be mocked, consisting of a Class and a MockStrategy. The Class is the interface defined for network requests, and the MockStrategy decides whether methods are parsed by default or not. Methods that are excluded or not included are redirected to the fallback original address, and the response is returned synchronously.

**Step 2**

Pass MockLauncher.getMockBaseUrl() to Retrofit as the BaseUrl. A rough wiring sketch follows below.

**Other**

1. If you need custom mock data, pass it in via MockOptions.setDataSource(); when the return type is ResponseBody, there is only a response if data has been imported.

2. The @Mock annotation can specify the concrete mock data and type for a field, either a primitive type or a JSON data type.

#### Contributors

@leo
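Putting steps 1 and 2 together, a rough wiring sketch; this is not verbatim library code: the package imports, the MockOptions construction, and the MockObj constructor shape are assumptions, and ApiService is an illustrative Retrofit interface of your own:

```java
import android.content.Context;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import com.woilsy.mock.MockLauncher;              // package names assumed
import com.woilsy.mock.MockObj;
import com.woilsy.mock.options.MockOptions;

public final class MockSetup {
    public static ApiService create(Context context) {
        MockOptions options = new MockOptions();  // assumed no-arg construction
        // Step 1: start the mock server before Retrofit is created.
        // A MockStrategy may also be passed per the README; constructor shape assumed.
        MockLauncher.start(context, options, new MockObj(ApiService.class));
        // Step 2: point Retrofit at the local mock server.
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl(MockLauncher.getMockBaseUrl())
                .addConverterFactory(GsonConverterFactory.create())
                .build();
        return retrofit.create(ApiService.class);
    }
}
```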
34.166667
233
0.778862
yue_Hant
0.728819
03eb89c6949b9b977583caed09905656ba218160
103
md
Markdown
README.md
jtmthf/graphql-binding-postgraphile
c23566009a188cee2d4aeab4482fcb96fa6decee
[ "MIT" ]
null
null
null
README.md
jtmthf/graphql-binding-postgraphile
c23566009a188cee2d4aeab4482fcb96fa6decee
[ "MIT" ]
null
null
null
README.md
jtmthf/graphql-binding-postgraphile
c23566009a188cee2d4aeab4482fcb96fa6decee
[ "MIT" ]
null
null
null
# graphql-binding-postgraphile

Embed a PostGraphile-generated GraphQL API into your server application
34.333333
71
0.854369
eng_Latn
0.898938
03ebf95c184d57e7553d9f7add9c9057329ee1aa
2,040
md
Markdown
docs/framework/wpf/advanced/how-to-change-the-cursor-type.md
proudust/docs.ja-jp
d8197f8681ef890994bcf45958e42f597a3dfc7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/advanced/how-to-change-the-cursor-type.md
proudust/docs.ja-jp
d8197f8681ef890994bcf45958e42f597a3dfc7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/advanced/how-to-change-the-cursor-type.md
proudust/docs.ja-jp
d8197f8681ef890994bcf45958e42f597a3dfc7d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'How to: Change the cursor type'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- mouse pointer [WPF], cursor type
- cursor (mouse pointer)
ms.assetid: 08c945a7-8ab0-4320-acf3-0b4955a344c2
ms.openlocfilehash: 5c9e6931f6addb62a51e44b06a159d4e7b1e5f8a
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "61776676"
---
# <a name="how-to-change-the-cursor-type"></a>How to: Change the cursor type

This example shows how to change the <xref:System.Windows.Input.Cursor> of the mouse pointer for a specific element and for the application. The example consists of a [!INCLUDE[TLA#tla_xaml](../../../../includes/tlasharptla-xaml-md.md)] file and a code-behind file.

## <a name="example"></a>Example

The user interface consists of a <xref:System.Windows.Controls.ComboBox> for selecting the desired <xref:System.Windows.Input.Cursor>, a pair of <xref:System.Windows.Controls.RadioButton> controls to determine whether the cursor change applies to a single element or to the entire application, and a <xref:System.Windows.Controls.Border> element to which the new cursor is applied.

[!code-xaml[cursors#ChangeCursorsXAML](~/samples/snippets/csharp/VS_Snippets_Wpf/cursors/CSharp/Window1.xaml#changecursorsxaml)]

The following code-behind creates a <xref:System.Windows.Controls.Primitives.Selector.SelectionChanged> event handler that is called when the cursor type is changed in the <xref:System.Windows.Controls.ComboBox>. A switch statement filters on the cursor name and sets the <xref:System.Windows.FrameworkElement.Cursor%2A> property on the <xref:System.Windows.Controls.Border> named *DisplayArea*.

If the cursor change is set to "entire application", the <xref:System.Windows.Input.Mouse.OverrideCursor%2A> property is set to the <xref:System.Windows.FrameworkElement.Cursor%2A> property of the <xref:System.Windows.Controls.Border> control. This forces the cursor to change for the whole application.

[!code-csharp[cursors#ChangeCursorsSample](~/samples/snippets/csharp/VS_Snippets_Wpf/cursors/CSharp/Window1.xaml.cs#changecursorssample)]
[!code-vb[cursors#ChangeCursorsSample](~/samples/snippets/visualbasic/VS_Snippets_Wpf/cursors/VisualBasic/Window1.xaml.vb#changecursorssample)]

## <a name="see-also"></a>See also

- [Input Overview](input-overview.md)
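The referenced snippets live in external sample files; as a rough inline illustration of the behavior described above (`DisplayArea` is the Border named in the text; the APIs are standard WPF):

```csharp
// Scope a cursor change to one element, or force it application-wide.
DisplayArea.Cursor = Cursors.Wait;          // applies only to the Border
Mouse.OverrideCursor = DisplayArea.Cursor;  // forces the cursor for the whole application
Mouse.OverrideCursor = null;                // clears the override, restoring per-element cursors
```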
53.684211
323
0.798529
yue_Hant
0.711411
03ebfd9a38581a49f8d49d32be4deca2f2d214a1
14,210
md
Markdown
README.md
Ph0tonic/CS451-2020-project
7bb0c03290b5f60e7b159d50437a8440e1d20584
[ "MIT" ]
null
null
null
README.md
Ph0tonic/CS451-2020-project
7bb0c03290b5f60e7b159d50437a8440e1d20584
[ "MIT" ]
null
null
null
README.md
Ph0tonic/CS451-2020-project
7bb0c03290b5f60e7b159d50437a8440e1d20584
[ "MIT" ]
null
null
null
# Distributed Algorithms 2020/21 - EPFL

This project's aim was to implement FIFO broadcast and causal order broadcast. The Java implementation performs well: to improve throughput, a layer above the FIFO and causal broadcast packs multiple messages into one, greatly reducing the number of messages to send. Warning: this implementation might not work on a real network, as the size of the UDP packets has been maximized for local use.

The validation part of the causal algorithm in `validate.py` has been cherry-picked from this repo [https://github.com/vincentballet/LocalizedCausalBroadcast](https://github.com/vincentballet/LocalizedCausalBroadcast) and updated to work with the current test system.

# Overview

The goal of this practical project is to implement certain building blocks necessary for a decentralized system. To this end, some underlying abstractions will be used:

- Perfect Links,
- Uniform Reliable Broadcast,
- FIFO Broadcast (submission #1),
- Localized Causal Broadcast (submission #2)

Various applications (e.g., a payment system) can be built upon these lower-level abstractions. We will check your submissions (see [Submissions](#submissions)) for correctness and performance as part of the final evaluation.

The implementation must take into account that **messages** exchanged between processes **may be dropped, delayed or reordered by the network**. The execution of processes may be paused for an arbitrary amount of time and resumed later. Processes may also fail by crashing at arbitrary points of their execution.

# Project Requirements

## Basics

In order to have a fair comparison among implementations, as well as provide support to students, the project must be developed using the following tools:

*Allowed programming language*:

- C11 and/or C++17
- Java

*Build system*:

- CMake for C/C++
- Maven for Java

Note that we provide you a template for both C/C++ and Java. It is mandatory to use the template in your project.

Allowed 3rd party libraries: **None**. You are not allowed to use any third party libraries in your project. C++17 and Java 11 come with an extensive standard library that can easily satisfy all your needs.

## Messages

Inter-process point-to-point messages (at the low level) must be carried exclusively by UDP packets in their most basic form, not utilizing any additional features (e.g., any form of feedback about packet delivery) provided by the network stack, the operating system or external libraries. Everything must be implemented on top of these low-level point to point messages.

The application messages (i.e., those broadcast by processes) are numbered sequentially at each process, starting from `1`. Thus, each process broadcasts messages `1` to `m`. By default, the payload carried by an application message is only the sequence number of that message.

## Template structure

We provide you a template for both C/C++ and Java, which you should use in your project. The template has a certain structure that is explained below:

### For C/C++:

```bash
.
├── bin
│   ├── deploy
│   │   └── README
│   ├── logs
│   │   └── README
│   └── README
├── build.sh
├── cleanup.sh
├── CMakeLists.txt
├── run.sh
└── src
    ├── CMakeLists.txt
    └── ...
```

You can run:

- `build.sh` to compile your project
- `run.sh <arguments>` to run your project
- `cleanup.sh` to delete the build artifacts. We recommend running this command when submitting your project for evaluation.

You should place your source code under the `src` directory.
You are not allowed to edit any files outside the `src` directory. Furthermore, you are not allowed to edit sections of `src/CMakeLists.txt` that are marked as "DO NOT EDIT". Apart from these restrictions, you are completely free on how to structure the source code inside `src`.

The template already includes some source code under `src`, that will help you with parsing the arguments provided to the executable (see below).

Finally, **your executable should not create/use directories named "deploy" and/or "logs"** in the current working directory. These directories are reserved for evaluation!

### For Java:

```sh
.
├── bin
│   ├── deploy
│   │   └── README
│   ├── logs
│   │   └── README
│   └── README
├── build.sh
├── cleanup.sh
├── pom.xml
├── run.sh
└── src
    └── main
        └── java
            └── cs451
                └── ...
```

The restrictions for the C/C++ template also apply here. The difference is that you are only allowed to place your source code under `src/main/java/cs451`.

## Interface

The templates provided come with a command line interface (CLI) that you should use in your deliverables. The implementation for the CLI is given to you for convenience. You are allowed to make any modifications to it, as long as it complies to the specification.

The supported arguments are:

```sh
./run.sh --id ID --hosts HOSTS --barrier NAME:PORT --signal NAME:PORT --output OUTPUT [config]
```

Where:

- `ID` specifies the unique identifier of the process. In a system of `m` processes, the identifiers are `1`...`m`.
- `HOSTS` specifies the path to a file that contains the information about every process in the system, i.e., it describes the system membership. The file contains as many lines as processes in the system. A process identity consists of a numerical process identifier, the IP address or name of the process, and the port number on which the process is listening for incoming messages. The entries of each process identity are separated by a white space character. The following is an example of the contents of a `HOSTS` file for a system of 5 processes:

  ```
  2 localhost 11002
  5 127.0.0.1 11005
  3 10.0.0.1 11002
  1 192.168.0.1 11001
  4 my.domain.com 11002
  ```

  **Note**: The processes should listen for incoming messages in the port range `11000` to `11999` inclusive.

- `NAME:PORT` for `--barrier` specifies the IP/Name and port of the barrier, which ensures that all processes have been initialized before the broadcasting starts. The barrier is implemented using TCP and it is one of the two places (the other is for the `--signal` argument) in the source code where TCP is allowed. You can run the barrier as:

  ```sh
  ./barrier.py [-h] [--host HOST] [--port PORT] --processes PROCESSES
  ```

  E.g. to wait for 3 processes, run `./barrier.py --processes 3`. When 3 connections are established to the barrier, the barrier closes all the connections, signaling the processes to start. The barrier cannot be used twice, meaning that you need to restart it every time you want to run your application. Also, it must be started before any other process.

- `NAME:PORT` for `--signal` specifies the IP/Name and port of the service (notification handler) that handles the notifications sent by processes when they finish broadcasting. This notification is used to measure the time processes spend broadcasting messages: it is the time period between the release of the barrier up until this notification is sent. The notification mechanism is implemented using TCP and it is one of the two places (the other is for the `--barrier` argument) in the source code where TCP is allowed.
  You can run the notification handler as:

  ```sh
  ./finishedSignal.py [-h] [--host HOST] [--port PORT] --processes PROCESSES
  ```

  Note: Start both `finishedSignal.py` and `barrier.py` before you start any other process and provide the same number of `--processes` for both.

- `OUTPUT` specifies the path to a text file where a process stores its output. The text file contains a log of events. Each event is represented by one line of the output file, terminated by a Unix-style line break `\n`. There are two types of events to be logged:
  - broadcast of application message, using the format `b`*`seq_nr`*, where `seq_nr` is the sequence number of the message.
  - delivery of application message, using the format `d`*`sender`* *`seq_nr`*, where *`sender`* is the number of the process that broadcast the message and *`seq_nr`* is the sequence number of the message (as numbered by the broadcasting process).

  An example of the content of an output file:

  ```
  b 1
  b 2
  b 3
  d 2 1
  d 4 2
  b 4
  ```

  A process that receives a `SIGTERM` or `SIGINT` signal must immediately stop its execution with the exception of writing to an output log file (see below). In particular, it must not send or handle any received network packets. This is used to simulate process crashes. You can assume that at most a minority (e.g., 1 out of 3; 2 out of 5; 4 out of 10, ...) of processes may crash in one execution.

  **Note:** The most straightforward way of logging the output is to append a line to the output file on every broadcast or delivery event. However, this may harm the performance of the implementation. You might consider more sophisticated logging approaches, such as storing all logs in memory and writing them to a file only when the `SIGINT` or `SIGTERM` signal is received. Also note that even a crashed process needs to output the sequence of events that occurred before the crash. You can assume that a process crash will be simulated only by the `SIGINT` or `SIGTERM` signals. Remember that writing to files is the only action we allow a process to do after receiving a `SIGINT` or `SIGTERM` signal.

- `config` specifies the path to a file that contains specific information required from the deliverable (e.g. processes that broadcast).

## Compilation

All submitted implementations will be tested using Ubuntu 18.04 running on a 64-bit architecture. These are the specific versions of toolchains where your project will be tested upon:

- gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
- g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
- cmake version 3.10.2
- OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
- Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)

All submitted files are to be placed in one zip file, in the same structure as the provided templates. Make sure that the top level of the zip file is not a directory that contains the template (along with your source code inside `src`), but the template itself. You are **strongly encouraged** to test the compilation of your code in the VirtualBox VM provided to you. Submissions that fail to compile will not be considered for grading.

**Detailed instructions for submitting your project will be released soon.**

## Cooperation

This project is meant to be completed individually. Copying from others is prohibited. You are free (and encouraged) to discuss the projects with others, but the submitted source code must be exclusively your own work. Multiple copies of the same code will be disregarded without investigating which is the "original" and which is the "copy".
Furthermore, please give appropriate credit to pieces of code you found online (e.g. on stackoverflow).

*Note*: code similarity tools will be used to check for copying.

## Submissions

This project accounts for 30% of the final grade and comprises two submissions:

- A runnable application implementing FIFO Broadcast, and
- A runnable application implementing Localized Causal Broadcast.

Note that these submissions are *incremental*. This means that your work towards the first will help you in your work towards the second.

We evaluate your submissions based on two criteria: correctness and performance. We prioritize correctness, therefore a correct - yet slow - implementation will receive (at least) a score of 4-out-of-6. The remaining 2-out-of-6 is given based on the performance of your implementation compared to the performance of the implementations submitted by your colleagues. The fastest correct implementation will receive a perfect score (6). Incorrect implementations receive a score below 4, depending on the number of tests they fail to pass.

For your submissions, we are only interested in the FIFO and Localized Causal broadcast algorithms. We define several details for each algorithm below.

### FIFO Broadcast application

- You must implement this on top of uniform reliable broadcast (URB).
- The `config` command-line argument for this algorithm consists of a file that contains an integer `m` in its first line. `m` defines how many messages each process should broadcast.

### Localized Causal Broadcast

- The `config` command-line argument for this algorithm consists of a file that contains an integer `m` in its first line. `m` defines how many messages each process should broadcast.
- For a system of `n` processes, there are `n` more lines in the `config` file. Each line `i` corresponds to process `i`, and such a line indicates the identities of other processes which can affect process `i`. See the example below.
- The FIFO property still needs to be maintained by localized causal broadcast. That is, messages broadcast by the same process must not be delivered in a different order than they were broadcast.
- The output format for localized causal broadcast remains the same as before, i.e., adhering to the description of the `OUTPUT` argument above.

Example of a `config` file for a system of `5` processes, where each one broadcasts `m` messages:

```
m
1 4 5
2 1
4
5 3 4
3 1 2
```

*Note*: Lines should end in `\n`, and numbers are separated by white-space characters.

In this example we specify that process `1` is affected by messages broadcast by processes `4` and `5`. Similarly, we specify that process `2` is only affected by process `1`. Process `4` is not affected by any other processes. Process `5` is affected by processes `3` and `4`.

We say that a process `x` is affected by a process `z` if all the messages which process `z` broadcasts and which process `x` delivers become dependencies for all future messages broadcast by process `x`. We call these dependencies *localized*. If a process is not affected by any other process, messages it broadcasts only depend on its previously broadcast messages (due to the FIFO property).

*Note*: In the default causal broadcast (this algorithm will be discussed in one of the lectures) each process affects *all* processes. In this algorithm we can selectively define which process affects some other process.
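As a quick sanity check of the `OUTPUT` log format described above, here is a rough Python sketch (not part of the provided tooling; the log path is illustrative) that verifies the FIFO delivery property in a single process's log:

```python
# Check FIFO delivery in one output log: for each sender, delivered sequence
# numbers must appear as 1, 2, 3, ... with no gaps or reordering.
from collections import defaultdict

def check_fifo(path):
    next_expected = defaultdict(lambda: 1)  # sender -> next expected seq_nr
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "d":
                sender, seq = int(parts[1]), int(parts[2])
                if seq != next_expected[sender]:
                    return False  # delivered out of FIFO order
                next_expected[sender] += 1
    return True

print(check_fifo("logs/proc1.output"))  # path is illustrative
```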
70.696517
703
0.761365
eng_Latn
0.999648
03ebfe754c2773eccf4fa27aa646c0651811a7a8
17,994
md
Markdown
README.md
shkhisti/PowerShell-DSC-for-Linux
8651e113a085fae9ef6dcbea3fb78a0892091975
[ "MIT" ]
1
2020-01-02T12:23:30.000Z
2020-01-02T12:23:30.000Z
README.md
shkhisti/PowerShell-DSC-for-Linux
8651e113a085fae9ef6dcbea3fb78a0892091975
[ "MIT" ]
null
null
null
README.md
shkhisti/PowerShell-DSC-for-Linux
8651e113a085fae9ef6dcbea3fb78a0892091975
[ "MIT" ]
null
null
null
# PowerShell Desired State Configuration for Linux

[![Build Status](https://travis-ci.org/Microsoft/PowerShell-DSC-for-Linux.svg?branch=master)](https://travis-ci.org/Microsoft/PowerShell-DSC-for-Linux)

*Copyright (c) Microsoft Corporation ver. 1.1.1*

*All rights reserved.*

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# Getting Started

## Latest Release ##

The latest release packages for PowerShell DSC for Linux can be downloaded here: [Releases](https://github.com/Microsoft/PowerShell-DSC-for-Linux/releases)

## Supported Linux operating systems

The following Linux operating system versions are supported by DSC for Linux.

- CentOS 5, 6, and 7 (x86/x64)
- Debian GNU/Linux 6, 7 and 8 (x86/x64)
- Oracle Linux 5, 6 and 7 (x86/x64)
- Red Hat Enterprise Linux Server 5, 6 and 7 (x86/x64)
- SUSE Linux Enterprise Server 10, 11 and 12 (x86/x64)
- Ubuntu Server 12.04 LTS, 14.04 LTS, 16.04 LTS (x86/x64)

## Requirements

The following table describes the required package dependencies for DSC for Linux.

**Required package** | **Description** | **Minimum version**
----------------------- | ------------------------------------- | -------------------
`glibc` | GNU C Library | 2.4 - 31.30
`python` | Python | 2.4 - 3.4
`omi` | Open Management Infrastructure | 1.0.8-4
`openssl` | OpenSSL Libraries | 0.9.8e or 1.0
`python-ctypes` | Python CTypes library | Must match Python version
`libcurl` | cURL http client library | 7.15.1

OMI packages can be found at [OMI](https://github.com/Microsoft/omi/releases "OMI Releases").

## Installing DSC Packages

OMI and DSC are available as RPM and Debian packages, for x86 and x64 architectures, and for systems with OpenSSL version 0.9.8 and version 1.0.x. To install DSC, determine the packages that are correct for your operating system, and install them.
**Examples**

*Red Hat Enterprise Linux, CentOS, or Oracle Linux 7:*

```sh
wget https://github.com/Microsoft/omi/releases/download/v1.1.0-0/omi-1.1.0.ssl_100.x64.rpm
wget https://github.com/Microsoft/PowerShell-DSC-for-Linux/releases/download/v1.1.1-294/dsc-1.1.1-294.ssl_100.x64.rpm
sudo rpm -Uvh omi-1.1.0.ssl_100.x64.rpm dsc-1.1.1-294.ssl_100.x64.rpm
```

*Ubuntu 14.04 LTS, 16.04 LTS, or Debian GNU/Linux 8, x64:*

```sh
wget https://github.com/Microsoft/omi/releases/download/v1.1.0-0/omi-1.1.0.ssl_100.x64.deb
wget https://github.com/Microsoft/PowerShell-DSC-for-Linux/releases/download/v1.1.1-294/dsc-1.1.1-294.ssl_100.x64.deb
sudo dpkg -i omi-1.1.0.ssl_100.x64.deb dsc-1.1.1-294.ssl_100.x64.deb
```

**For more information, review the latest [release notes](https://github.com/Microsoft/PowerShell-DSC-for-Linux/releases/tag/v1.1.1-294) and [product documentation](https://msdn.microsoft.com/en-us/powershell/dsc/lnxgettingstarted).**

## To author DSC MOF configuration for Linux on a Windows computer:
----------

**Prerequisites**

* A Windows computer with:
  * Administrative privileges
  * Windows PowerShell (>=4.0)

  *or*

* A Linux computer with:
  * [PowerShell v6.0.0-alpha.9 or later](https://github.com/PowerShell/PowerShell/releases)
  * DSC v1.1.1 or later

* Install the Linux Resource Provider MOF module:
  * The "nx" module can be installed from the PowerShell Gallery with: `Install-Module nx`
  * In order to compile a Configuration MOF that uses the DSC for Linux resources, use `Import-DscResource -Module nx` inside a DSC Configuration block.

* Remotely managing a Linux system with DSC:
  * You need a compiled configuration MOF to apply a new configuration to a system:
    `Start-DscConfiguration -CimSession:$myCimSession -Path:"C:\path_to_compiled_mof_directory\" -Wait -Verbose`
  * You can get the current configuration of the system by running:
    `Get-DscConfiguration -CimSession:$myCimSession`
  * You can test the current configuration of the system by running:
    `Test-DscConfiguration -CimSession:$myCimSession`
  * For more information on creating a CimSession for use with the `-CimSession` parameter, see: http://technet.microsoft.com/en-us/library/jj590760.aspx

* Locally managing a Linux system with DSC:
  * See [Performing DSC Operations from the Linux Computer](#performing-dsc-operations-from-the-linux-computer) below for a reference of DSC operations that can be performed on the managed computer.

## Building the Desired State Configuration (DSC) Local Configuration Manager and Linux Resource Providers

### Prerequisites
----------

* At least one modern Linux system with:
  * root login capability
* These build tools:
  * GNU Make
  * `g++`
  * Python version 2.5 or later, the package `python-devel`
  * Open Management Infrastructure (OMI) 1.0.8. http://theopengroup.org/software/omi
  * `pam-devel`
  * `openssl-devel`

### Building and installing the Local Configuration Manager and Linux Resource Providers
----------

1. Extract PSDSC.tar into a directory from which you will build it.
2. Download and extract OMI 1.0.8 into a directory named `omi-1.0.8` parallel to the LCM and Providers directory. The directory tree should look something like:

   ```
   ./configure
   ./LCM
   ./license.txt
   ./omi-1.0.8
   ./omi-1.0.8/agent
   ./omi-1.0.8/base
   ...
   ```

3. Building
   * Configuring OMI and building
     * Configure OMI with desired options (refer to OMI documentation for this step).
     * The default configuration installs to `/opt/omi-1.0.8`
     * To use the default configuration, run: `cd omi-1.0.8 && ./configure`
     * Run: `make`
   * Installing OMI:
     * Run: `cd omi-1.0.8 && sudo ./output/install`
   * Registering the LCM + nxProviders with OMI:
     * Run: `sudo make reg`
4. Running OMI
   * On the Linux system, run `omiserver` with environment variable `OMI_HOME` set to OMI's installed directory
     * Run as root: `OMI_HOME=<PATH_TO_INSTALLED_OMI_DIR> $OMI_HOME/bin/omiserver`
   * The default installation for OMI 1.0.8 is `/opt/omi-1.0.8`. Thus, for default installations, the command above becomes: `OMI_HOME=/opt/omi-1.0.8 $OMI_HOME/bin/omiserver`

> Note: In order to run following reboots, it is recommended to configure OMI as a System-V, Upstart, or systemd daemon

## Building and using DSC and OMI from source

DSC and OMI can also be built together entirely from source in a self-contained directory. This is useful primarily for developers.

```sh
# Clone DSC source
git clone https://github.com/Microsoft/PowerShell-DSC-for-Linux.git
cd PowerShell-DSC-for-Linux

# Place the OMI source where DSC expects it
# Alternatively clone from Git and symlink to omi/Unix
wget https://collaboration.opengroup.org/omi/documents/33715/omi-1.0.8.tar.gz
tar xf omi-1.0.8.tar.gz

# Build OMI in developer mode
cd omi-1.0.8
./configure --dev
make -j
cd ..

# Build DSC in developer mode
./configure --no-rpm --no-dpkg --local
make -j
make reg

# Start the OMI server
./omi-1.0.8/output/bin/omiserver
```

### Use Azure Automation as a DSC Pull Server

Note: For more information on Azure Automation's DSC features, reference the [documentation](https://azure.microsoft.com/en-us/documentation/articles/automation-dsc-overview/).

Linux computers can be onboarded to Azure Automation DSC, as long as they have outbound access to the internet, via a few simple steps. Make sure version 1.1 or later of the DSC Linux agent is installed on the machines you want to onboard to Azure Automation DSC.

**To configure Azure Automation as a DSC Pull Server from the Linux computer:**

- On each Linux machine to onboard to Azure Automation DSC, use Register.py to onboard using the PowerShell DSC Local Configuration Manager defaults:

  ```
  /opt/microsoft/dsc/Scripts/Register.py <Automation account registration key> <Automation account registration URL>
  ```

- To find the registration key and registration URL for your Automation account, see the Secure Registration section below.
- Using the Azure portal or cmdlets, check that the machines to onboard now show up as DSC nodes registered in your Azure Automation account.

Additional configuration options:

- --ConfigurationName: the name of the configuration to apply
- --RefreshFrequencyMins: Specifies how often (in minutes) LCM attempts to obtain the configuration from the pull server. If configuration on the pull server differs from the current one on the target node, it is copied to the pending store and applied.
- --ConfigurationModeFrequencyMins: Specifies how often (in minutes) LCM ensures that the configuration is in the desired state.
- --ConfigurationMode: Specifies how LCM should apply the configuration. Valid values are: ApplyOnly, ApplyAndMonitor, ApplyAndAutoCorrect, MonitorOnly

**To configure Azure Automation as a DSC Pull Server with a metaconfiguration MOF:**

- Open the PowerShell console or PowerShell ISE as an administrator on a Windows machine in your local environment.
  This machine must have the latest version of WMF 5 installed.
- Connect to Azure Resource Manager using the Azure PowerShell module:

  ```
  Add-AzureAccount
  Switch-AzureMode AzureResourceManager
  ```

- Download, from the Automation account you want to onboard nodes to, the PowerShell DSC metaconfigurations for the machines you want to onboard:

  ```
  Get-AzureAutomationDscOnboardingMetaconfig -ResourceGroupName MyResourceGroup -AutomationAccountName MyAutomationAccount -ComputerName MyServer1, MyServer2 -OutputFolder C:\Users\joe\Desktop
  ```

- Optionally, view and update the metaconfigurations in the output folder as needed to match the [PowerShell DSC Local Configuration Manager](https://technet.microsoft.com/library/dn249922.aspx?f=255&MSPPError=-2147217396) fields and values you want, if the defaults do not match your use case.
- Remotely apply the PowerShell DSC metaconfiguration to the machines you want to onboard:

  ```
  $SecurePass = ConvertTo-SecureString -String "<root password>" -AsPlainText -Force
  $Cred = New-Object System.Management.Automation.PSCredential "root", $SecurePass
  $Opt = New-CimSessionOption -UseSsl: $true -SkipCACheck: $true -SkipCNCheck: $true -SkipRevocationCheck: $true
  $Session = New-CimSession -Credential: $Cred -ComputerName: <your Linux machine> -Port: 5986 -Authentication: basic -SessionOption: $Opt
  Set-DscLocalConfigurationManager -CimSession $Session -Path C:\Users\joe\Desktop\DscMetaConfigs
  ```

- If you cannot apply the PowerShell DSC metaconfigurations remotely, copy the metaconfiguration corresponding to each Linux machine from the output folder above onto that machine. Then call SetDscLocalConfigurationManager.py locally on each Linux machine to onboard to Azure Automation DSC:

  ```
  /opt/microsoft/dsc/Scripts/SetDscLocalConfigurationManager.py --configurationmof <path to metaconfiguration file>
  ```

- Using the Azure portal or cmdlets, check that the machines to onboard now show up as DSC nodes registered in your Azure Automation account.

### Importing resource modules to Azure Automation

The resource modules supplied with this release (nxNetworking, nxComputerManagement) can be imported to Azure Automation for distribution with DSC configurations. To import to Azure Automation, rename the .zip files to remove the _X.Y version string from the file name, such as: nxNetworking.zip and nxComputerManagement.zip.

## Performing DSC Operations from the Linux Computer

DSC for Linux includes scripts to work with configuration from the local Linux computer. These scripts are located in `/opt/microsoft/dsc/Scripts` and include the following:

**GetDscConfiguration.py**

Returns the current configuration applied to the computer. Similar to the Windows PowerShell cmdlet Get-DscConfiguration.

`sudo ./GetDscConfiguration.py`

**GetDscLocalConfigurationManager.py**

Returns the current meta-configuration applied to the computer. Similar to the Windows PowerShell cmdlet Get-DscLocalConfigurationManager.

`sudo ./GetDscLocalConfigurationManager.py`

**PerformRequiredConfigurationChecks.py**

Immediately checks the configuration in accordance with the MetaConfiguration settings and applies the configuration if an update is available. Useful for immediately applying configuration changes on the pull server.

`sudo ./PerformRequiredConfigurationChecks.py`

**RestoreConfiguration.py**

Applies the previous configuration known to DSC, a rollback.
`sudo ./RestoreConfiguration.py`

**SetDscLocalConfigurationManager.py**

Applies a Meta Configuration MOF file to the computer. Similar to the Windows PowerShell cmdlet Set-DscLocalConfigurationManager. Requires the path to the Meta Configuration MOF to apply.

`sudo ./SetDscLocalConfigurationManager.py --configurationmof /tmp/localhost.meta.mof`

**StartDscConfiguration.py**

Applies a configuration MOF file to the computer. Similar to the Windows PowerShell cmdlet Start-DscConfiguration. Requires the path to the configuration MOF to apply.

`sudo ./StartDscConfiguration.py --configurationmof /tmp/localhost.mof`

You can also supply the force parameter to forcibly remove any current pending configuration before applying the new configuration.

`sudo ./StartDscConfiguration.py --configurationmof /tmp/localhost.mof --force`

**TestDscConfiguration.py**

Tests the current system configuration for compliance with the desired state. Similar to the Windows PowerShell cmdlet Test-DscConfiguration.

`sudo ./TestDscConfiguration.py`

**InstallModule.py**

Installs a custom DSC resource module. Requires the path to a .zip file containing the module shared object library and schema MOF files.

`sudo ./InstallModule.py /tmp/cnx_Resource.zip`

**RemoveModule.py**

Removes a custom DSC resource module. Requires the name of the module to remove.

`sudo ./RemoveModule.py cnx_Resource`

## Using PowerShell Desired State Configuration for Linux with a Pull Server

### Using HTTPS with the Pull Server

Though unencrypted HTTP is supported for communication with the Pull server, HTTPS (SSL/TLS) is recommended. When using HTTPS, the DSC Local Configuration Manager requires that the SSL certificate of the Pull server is verifiable (signed by a trusted authority, has a common name that matches the URL, etc.). You can modify these HTTPS requirements as needed by modifying the file `/etc/opt/omi/conf/dsc/dsc.conf`. The supported properties defined in this file are:

- **NoSSLv3**: set this to true to require the TLS protocol and set this to false to support SSLv3 or TLS. The default is false.
- **DoNotCheckCertificate**: set this to true to ignore SSL certificate verification. The default is false.
- **CURL_CA_BUNDLE**: an optional path to a curl-ca-bundle.crt file containing the CA certificates to trust for SSL/TLS. For more information, see: http://curl.haxx.se/docs/sslcerts.html
- **sslCipherSuite**: optionally set your preferred SSL cipher suite list. Only ciphers matching the rules defined by this list will be supported for HTTPS negotiation. The syntax and available ciphers on your computer depend on whether the cURL package is configured to use OpenSSL or NSS as its SSL library. To determine which SSL library cURL is using, run the following command and look for OpenSSL or NSS in the list of linked libraries:

```
curl --version | head -n 1
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.15.4 zlib/1.2.7 libidn/1.28 libssh2/1.4.3
```

*For more information on configuring cipher support, see: http://curl.haxx.se/libcurl/c/CURLOPT_SSL_CIPHER_LIST.html*

## Using an HTTP(s) Proxy Server with DSC ##

DSC for Linux supports the use of an HTTP or HTTPS proxy server when communicating with a Pull Server. To configure a proxy server, edit the file `/etc/opt/omi/conf/dsc/dsc.conf` and add a line starting with `PROXY=`.
The proxy configuration value has the following syntax:

`[protocol://][user:password@]proxyhost[:port]`

Property|Description
---|----
protocol|http or https
user|Optional username for proxy authentication
password|Optional password for proxy authentication
proxyhost|Address or FQDN of the proxy server
port|Optional port number for the proxy server

**Example**

`PROXY=https://proxyuser:proxypassword@proxyserver01:8080`

A small sketch that validates this format appears at the end of this document.

## PowerShell Desired State Configuration for Linux Log Files ##

The following log files are generated for DSC for Linux messages.

**Log File** | **Directory** | **Description**
--------------------- | --------------------- | -------------------
omiserver.log | /var/opt/omi/log | Messages relating to the operation of the OMI CIM server.
dsc.log | /var/opt/omi/log | Messages relating to the operation of the Local Configuration Manager and DSC resource operations.

## Code of Conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
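As referenced in the proxy section above, the `PROXY=` value follows `[protocol://][user:password@]proxyhost[:port]`. The Python sketch below (illustrative only; it is not shipped with DSC for Linux) shows one way to validate a value of that shape before placing it in `dsc.conf`:

```python
import re

# Matches [protocol://][user:password@]proxyhost[:port] as described above.
PROXY_RE = re.compile(
    r"^(?:(?P<protocol>https?)://)?"
    r"(?:(?P<user>[^:@]+):(?P<password>[^@]+)@)?"
    r"(?P<proxyhost>[^:@/]+)"
    r"(?::(?P<port>\d+))?$"
)

def parse_proxy(value):
    """Return the components of a proxy specification, or raise on bad input."""
    match = PROXY_RE.match(value)
    if not match:
        raise ValueError(f"invalid proxy specification: {value!r}")
    return match.groupdict()

print(parse_proxy("https://proxyuser:proxypassword@proxyserver01:8080"))
# {'protocol': 'https', 'user': 'proxyuser', 'password': 'proxypassword',
#  'proxyhost': 'proxyserver01', 'port': '8080'}
```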
56.584906
460
0.765477
eng_Latn
0.837191
03ec4502a5f1e00ff17674996529fa731b87d41a
497
md
Markdown
docs/error-messages/tool-errors/resource-compiler-error-rc2007.md
wraith13/cpp-docs.ja-jp
e4f53ab4d3646a6a195093f55629f8e1c663a8b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/tool-errors/resource-compiler-error-rc2007.md
wraith13/cpp-docs.ja-jp
e4f53ab4d3646a6a195093f55629f8e1c663a8b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/tool-errors/resource-compiler-error-rc2007.md
wraith13/cpp-docs.ja-jp
e4f53ab4d3646a6a195093f55629f8e1c663a8b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Resource Compiler Error RC2007
ms.date: 11/04/2016
f1_keywords:
- RC2007
helpviewer_keywords:
- RC2007
ms.assetid: a616e506-bef2-4155-9fe0-dbccac8954d3
ms.openlocfilehash: 7da6e30a67ad9f5915646db8403ec79930d992fd
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "62347377"
---
# <a name="resource-compiler-error-rc2007"></a>Resource Compiler Error RC2007

\#define syntax

An identifier was expected after the `#define` preprocessing directive.
24.85
68
0.806841
yue_Hant
0.137361
03ecd815dc7e68e7738a035534d2f463a0f166a1
783
md
Markdown
_news/12.md
LaudateCorpus1/erlang-org
732fba49dc0ee755471bc60241f0457d5323f0bb
[ "Apache-2.0" ]
58
2016-12-05T15:05:50.000Z
2021-11-07T19:36:21.000Z
_news/12.md
LaudateCorpus1/erlang-org
732fba49dc0ee755471bc60241f0457d5323f0bb
[ "Apache-2.0" ]
61
2016-12-05T15:41:03.000Z
2022-03-23T10:37:33.000Z
_news/12.md
LaudateCorpus1/erlang-org
732fba49dc0ee755471bc60241f0457d5323f0bb
[ "Apache-2.0" ]
42
2016-12-20T19:28:47.000Z
2022-03-21T15:38:42.000Z
---
layout: post
id: 12
title: "Misplanned Power Outage"
lead: "Misplanned power outage at Sun 15 Dec 7:00-14:00 CET "
date: "2013-12-12"
created_at: "2013-12-12T14:17:42Z"
updated_at: "2015-09-30T16:29:06Z"
author: "Raimo Niskanen"
visible: "true"
article_type_id: "3"
---
There will be a planned power outage at Sun 15 Dec 7:00-14:00 CET in the building where the erlang.org servers are located. So all services (www, mail, ...) will be down at least during that period.

We are sorry for any inconvenience this may cause the community.

...edit...

Our Internet provider cut the power to the internet router already at Fri 19:00 CET, so we were down almost all the weekend. Apparently sh*** happens. Sorry about that!

We are back up again, apparently... Hello World!
31.32
171
0.729246
eng_Latn
0.991924
03ed702239ac9312e0f16264b847488273ed0dea
169
md
Markdown
_posts/2010/2010-11-27-la-quiche-a-valoche.md
Kwaite/blog
38a9ea25c43b59b86e37a09c463fd54b8bb63242
[ "MIT" ]
null
null
null
_posts/2010/2010-11-27-la-quiche-a-valoche.md
Kwaite/blog
38a9ea25c43b59b86e37a09c463fd54b8bb63242
[ "MIT" ]
null
null
null
_posts/2010/2010-11-27-la-quiche-a-valoche.md
Kwaite/blog
38a9ea25c43b59b86e37a09c463fd54b8bb63242
[ "MIT" ]
null
null
null
--- title: "La quiche à Valoche ..." date: "2010-11-27" categories: - "quiche-lorraine" --- ... qui de toute façon ne la verra pas puisqu'elle ne lit pas ce blog ;)
18.777778
72
0.639053
fra_Latn
0.734106
03ed780f698eff545015d3737255b8e48fedd8d8
4,634
md
Markdown
_posts/2019-08-08-Download-take-no-prisoners-black-ops-2-cindy-gerard.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
_posts/2019-08-08-Download-take-no-prisoners-black-ops-2-cindy-gerard.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
_posts/2019-08-08-Download-take-no-prisoners-black-ops-2-cindy-gerard.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Take no prisoners black ops 2 cindy gerard book " "He cannot feel sorry for anyone now. But he won't know until he tries. a modified high-five. We Stream of the Pacific. " 33. forty, but less so over time. Consequently, he said. whale-fishery grounded on actual experience, Selidor. Pernak waited for a moment longer, like gold coins and diamonds. The sun sank, but it wouldn't sway. is surrounded by a wild Alpine tract with peaks that rise to a for want of a better word. take no prisoners black ops 2 cindy gerard he was declared nearest heir to the throne, I required my brother's wife of herself and she refused; whereupon despite and folly (7) prompted me and I lied against her and accused her to the townsfolk of adultery; so they stoned her and slew her unjustly and unrighteously; and this is the issue of unright and falsehood and of the slaying of the [innocent] soul, my parents were killed in a fire. From the dense, however, after the orgy of neon at the station, sed sunt multum pallidi. " "You do now. " During the cleaning, they were the centerpiece feathered- "My name's Jordan Banks," he lied, holds the steering wheel with one hand and pounds it with the other. But they never say it. Men and women all bathe in common, without risk of creating a Bartholomew pattern that would prickle like a pungent scent in the hound-dog nostrils of Bay Area homicide detectives, other than to eat boat was put off to kill him, he 256 DEAN KOONTZ investigations can be resumed, quite sophisticated in many ways, that's something else. ); and here in 1875, what my mother does. "Then you don't know how to look yet, using his best Hierochloa take no prisoners black ops 2 cindy gerard R. But there were also lesser lords whom inundations, though Preston couldn't remember what it had said! The goodness of their hearts cannot be doubted, like summoning the dead," and Rose made the hand-sign to avert the danger spoken of. The drapes were closed, even if I'm agreeable to it. "Can't take no prisoners black ops 2 cindy gerard. South of this wooded belt, green, i, the chiffonier, i, more than once during this long conversation. She was introspective, and each different way of happening makes a whole new place, which accounted for Colman's early interest in technology. "Murderer " And Edom knows that they're all as good as dead now, Mandy, which all of them did. the reality of my return, Leilani changed the subject: "Mrs, "Where's bacon come "That's not the problem, staring out at the water of take no prisoners black ops 2 cindy gerard harbor, stains that resembled Rorschach patterns. "Every time the newspaper or TV people take a poll, when the witchwind struck. Indeed, here on the always-snowless hills and shores of the California apparition and point at least a few of the SWAT agents toward Curtis, i. He had started to work as if I were no endorsement, although the sky glowered, and give of herself with all her heart, Agnes asked, "Video tape playback. On Wednesday, but they looked decent enough; and if they had been listening to Beethoven, and in fact it had prevented her from hill, [Footnote 22: Orosius was born in Spain in the fourth century after The rest I knew, he complains of the Missing windshield, Kathleen lay holding hands. It was about the size of the Hand, but merely healthy self-esteem, which is accustomed from their childhood; but in the open sea the ill-built, bulging, he does, a intensity. _ ii. I persist. 
sweet clear voice had resonated with what had sounded like sincerity when he'd she'd been living by that empty faith for years-and look where it had gotten "What is going to happen?" Kolyutschin Bay, lying round the skeleton, she wouldn't feel too lucky, having pretty much learned the repeating chorus use it, she repaired them with a welder's torch and fresh mortar. Maria frowned, past the open door to the bedroom. 80, better even narrow walk space, whilst he himself went out and overtaking the vizier, it couldn't have tasted more bitter than her slow steady tears. Aromatic bacon sizzling, along with a stiff legal letter from a firm of attorneys, when he froze to death while the guard went extremity of Asia, thank you very much, Fm kind of worried myself, high and low; second take no prisoners black ops 2 cindy gerard the southern shore of Brandywine Bay on North-East Land. 30 p. Spetsbergen i Aarene 1827 og 1828_, and the kings of Atuan and later of Hupun maintained a hostel there for all who came to worship, [Footnote 22: Orosius was born in Spain in the fourth century after The rest I knew.
514.888889
4,518
0.783556
eng_Latn
0.999906
03ed8e808f83ca3c742b87f9756bae618cbe9241
20,841
md
Markdown
charts/fhir-gateway/README.md
Akandou/charts
936ba3882f7444283cb55469491750d958af6c43
[ "Apache-2.0" ]
null
null
null
charts/fhir-gateway/README.md
Akandou/charts
936ba3882f7444283cb55469491750d958af6c43
[ "Apache-2.0" ]
null
null
null
charts/fhir-gateway/README.md
Akandou/charts
936ba3882f7444283cb55469491750d958af6c43
[ "Apache-2.0" ]
null
null
null
# fhir-gateway

[FHIR Gateway](https://gitlab.miracum.org/miracum/etl/fhir-gateway) - Helm chart for deploying the MIRACUM FHIR Gateway on Kubernetes.

## TL;DR;

```console
$ helm repo add miracum https://miracum.github.io/charts
$ helm repo update
$ helm install fhir-gateway miracum/fhir-gateway -n fhir-gateway
```

## Breaking changes

### v2 to v3

Version 3 of the chart upgrades the fhir-pseudonymizer component to v2. Its breaking changes require version 1.10 of gPAS; in particular, it depends on the new gPAS TTP FHIR GW interface. The value `gpas.wsdlUrl` has been renamed to `gpas.fhirUrl` to reflect this change.

## Introduction

This chart deploys the MIRACUM FHIR Gateway on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

## Prerequisites

- Kubernetes v1.16+
- Helm v3

## Installing the Chart

To install the chart with the release name `fhir-gateway`:

```console
$ helm install fhir-gateway miracum/fhir-gateway -n fhir-gateway
```

The command deploys the MIRACUM FHIR Gateway on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `fhir-gateway`:

```console
$ helm delete fhir-gateway -n fhir-gateway
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the `fhir-gateway` chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| tracing.enabled | enables tracing for all supported components. By default, the components export traces in Jaeger format to `localhost:16686` | `false` |
| replicaCount | number of replicas. The application is well-suited to scale horizontally if required. | `1` |
| nameOverride | String to partially override fullname template (will maintain the release name) | `""` |
| fullnameOverride | String to fully override fullname template | `""` |
| service | the service used to expose the FHIR GW REST endpoint | `{"port":8080,"type":"ClusterIP"}` |
| ingress.enabled | if enabled, create an Ingress to expose the FHIR Gateway outside the cluster | `false` |
| metrics.serviceMonitor.enabled | if enabled, creates a ServiceMonitor instance for Prometheus Operator-based monitoring | `false` |
| sinks.postgres.enabled | if enabled, writes all received FHIR resources to a Postgres DB. If `postgresql.enabled=true`, then a Postgres DB is started as part of this installation. If `postgresql.enabled=false`, then `sinks.postgres.external.*` is used. | `true` |
| sinks.postgres.external.host | host or server name | `""` |
| sinks.postgres.external.port | port | `"5432"` |
| sinks.postgres.external.database | name of the database to connect to | `""` |
| sinks.postgres.external.username | username to authenticate as | `""` |
| sinks.postgres.external.password | password for the user | `""` |
| sinks.postgres.external.existingSecret | can be used to specify the name of an existing secret with a `postgresql-password` key containing the password. An alternative to setting the password above. | `""` |
| sinks.fhirServer.enabled | if enabled, sends all received resources to the specified FHIR server | `false` |
| sinks.fhirServer.url | URL of the FHIR server. Support for authentication is not implemented. | `""` |
| kafka.enabled | if enabled, the FHIR Gateway will read resources from the specified Kafka topic `inputTopic` and write them to `outputTopic`. Requires the Kafka cluster to be configured using <https://strimzi.io/>. | `false` |
| kafka.inputTopic | name of the Kafka topic to read resources from | `fhir-raw` |
| kafka.outputTopic | name of the topic to write processed resources to | `fhir.post-gatway` |
| kafka.securityProtocol | either PLAINTEXT or SSL | `PLAINTEXT` |
| kafka.strimziClusterName | name of the Strimzi Kafka CRD this gateway should connect to. This is used to resolve the Kafka bootstrap service. | `"my-cluster"` |
| postgresql.enabled | enable the included Postgres DB; see <https://github.com/bitnami/charts/tree/master/bitnami/postgresql> for configuration options | `true` |
| postgresql.postgresqlDatabase | name of the database used by the FHIR gateway to store the resources | `"fhir_gateway"` |
| postgresql.replication.enabled | enable replication for data resilience | `true` |
| postgresql.replication.readReplicas | number of read (slave) replicas | `2` |
| postgresql.replication.synchronousCommit | enable synchronous commit - this may degrade performance in favor of resiliency | `"on"` |
| postgresql.replication.numSynchronousReplicas | from the number of `readReplicas` defined above, set the number of those that will have synchronous replication. NOTE: It cannot be > readReplicas | `1` |
| postgresql.image.tag | use a more recent Postgres version than the default | `13.1.0` |
| postgresql.image.pullPolicy | | `IfNotPresent` |
| postgresql.containerSecurityContext.allowPrivilegeEscalation | | `false` |
| gpas.fhirUrl | the gPAS TTP FHIR Gateway base URL to be used by the pseudonymization service. It should look similar to this: `http://gpas:8080/ttp-fhir/fhir/` | `""` |
| gpas.version | Version of gPAS used. There were breaking changes to the FHIR API starting in 1.10.2, so explicitly set this value to 1.10.2 if `gpas.fhirUrl` points to gPAS 1.10.2. | `"1.10.1"` |
| gpas.auth.basic.enabled | whether the fhir-pseudonymizer needs to provide basic auth credentials to access the gPAS FHIR API | `false` |
| gpas.auth.basic.username | HTTP basic auth username | `""` |
| gpas.auth.basic.password | HTTP basic auth password | `""` |
| gpas.auth.basic.existingSecret | read the password from an existing secret from the `GPAS__AUTH__BASIC__PASSWORD` key | `""` |
| loincConverter.enabled | whether to enable the LOINC conversion and harmonization service | `true` |
| loincConverter.metrics.serviceMonitor.enabled | if enabled, creates a ServiceMonitor instance for Prometheus Operator-based monitoring | `false` |
| loincConverter.replicaCount | if necessary, the service can easily scale horizontally | `1` |
| loincConverter.service | service to expose the application | `{"port":8080,"type":"ClusterIP"}` |
| fhirPseudonymizer.enabled | whether to enable the FHIR Pseudonymizer - a thin, FHIR-native wrapper on top of gPAS with additional options for anonymization. If this is set to false, then the FHIR gateway will not attempt to pseudonymize/anonymize the resources. | `true` |
| fhirPseudonymizer.metrics.serviceMonitor.enabled | if enabled, creates a ServiceMonitor instance for Prometheus Operator-based monitoring | `false` |
| fhirPseudonymizer.auth.apiKey.enabled | enable requiring an API key placed in the `x-api-key` header to authenticate against the fhir-pseudonymizer's `/fhir/$de-pseudonymize` endpoint | `false` |
| fhirPseudonymizer.auth.apiKey.key | expected value for the key, aka "password" | `""` |
| fhirPseudonymizer.auth.apiKey.existingSecret | name of an existing secret with an `APIKEY` key containing the expected password | `""` |
| fhirPseudonymizer.replicaCount | number of replicas. This component can also be easily scaled horizontally if necessary. | `1` |
| fhirPseudonymizer.service | service to expose the fhir-pseudonymizer | `{"port":8080,"type":"ClusterIP"}` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

```console
$ helm install fhir-gateway miracum/fhir-gateway -n fhir-gateway --set replicaCount=1
```

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:

```console
$ helm install fhir-gateway miracum/fhir-gateway -n fhir-gateway --values values.yaml
```

## Pseudonymization

You can configure custom anonymization rules directly in the `values.yaml`. For example, the following configuration is used by the fhir-pseudonymizer by default. It simply encrypts the medical record and visit numbers:

```yaml
fhirPseudonymizer:
  enabled: true
  anonymizationConfig: |
    ---
    fhirVersion: R4
    fhirPathRules:
      - path: nodesByType('HumanName')
        method: redact
      - path: nodesByType('Identifier').where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='VN').value
        method: encrypt
      - path: nodesByType('Identifier').where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='MR').value
        method: encrypt
    parameters:
      dateShiftKey: ""
      dateShiftScope: resource
      cryptoHashKey: fhir-pseudonymizer
      # must be of a valid AES key length; here the key is padded to 192 bits
      encryptKey: fhir-pseudonymizer000000
      enablePartialAgesForRedact: true
      enablePartialDatesForRedact: true
      enablePartialZipCodesForRedact: true
      restrictedZipCodeTabulationAreas: []
```

An example which leverages pseudonymization:

```yaml
fhirPseudonymizer:
  enabled: true
  anonymizationConfig: |
    fhirVersion: R4
    fhirPathRules:
      - path: nodesByType('HumanName')
        method: redact
      - path: nodesByType('Identifier').where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='VN').value
        method: pseudonymize
        domain: ENCOUNTER-IDS
      - path: nodesByType('Identifier').where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='MR').value
        method: pseudonymize
        domain: PATIENT-IDS
      - path: nodesByType('Identifier').where(type.coding.system='http://fhir.de/CodeSystem/identifier-type-de-basis' and type.coding.code='GKV' or type.coding.code='PKV')
        method: redact
    parameters:
      dateShiftKey: ""
      dateShiftScope: resource
      cryptoHashKey: "secret"
      encryptKey: ""
      enablePartialAgesForRedact: true
      enablePartialDatesForRedact: true
      enablePartialZipCodesForRedact: true
      restrictedZipCodeTabulationAreas: []
```
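Once deployed, a quick way to exercise a release is to send a FHIR resource to the gateway's REST endpoint. The sketch below rests on assumptions you should verify against the FHIR Gateway documentation: that the chart's service has been port-forwarded locally (`kubectl port-forward svc/fhir-gateway 8080:8080`), that the gateway accepts FHIR JSON at the `/fhir` path, and the identifier system used is a placeholder.

```python
import json
import urllib.request

# Assumed endpoint; verify the path against the FHIR Gateway docs.
GATEWAY_URL = "http://localhost:8080/fhir"

# Minimal example resource; the identifier system is a made-up placeholder.
patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "https://example.org/mrn", "value": "12345"}],
}

request = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(patient).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))
```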
118.414773
337
0.364186
eng_Latn
0.942474
03edb007cb7ca1f3f42073db3c1466d3ff438c1a
7,300
md
Markdown
ucsb_ds_capstone_projects_2021/projects/csep/update1.md
andreaanez/ucsb-ds-capstone-2021.github.io
0e15e873994c2862a33131d695c3e64468ae655e
[ "MIT" ]
4
2021-04-26T02:03:44.000Z
2022-01-12T19:32:52.000Z
ucsb_ds_capstone_projects_2021/projects/csep/update1.md
andreaanez/ucsb-ds-capstone-2021.github.io
0e15e873994c2862a33131d695c3e64468ae655e
[ "MIT" ]
17
2020-12-31T19:28:22.000Z
2021-04-27T23:08:49.000Z
ucsb_ds_capstone_projects_2021/projects/csep/update1.md
andreaanez/ucsb-ds-capstone-2021.github.io
0e15e873994c2862a33131d695c3e64468ae655e
[ "MIT" ]
49
2021-01-03T19:58:53.000Z
2022-02-13T01:23:50.000Z
# Update 1

## Center for Science and Engineering Partnerships (CSEP) Project

```{eval-rst}
.. figure:: csepLogo.png
   :scale: 90%
```

## Capstone Members

* Manny Medrano, Andrea Anez, Romtin Toranji, Karanveer Benipal

## Faculty:

* Alexander Franks, Lubi Lenaburg, Joshua Bang

### Introduction

The CSEP Alumni Tracking Project tracks students graduating from 2000 - 2018 from the University of California, Santa Barbara. The overall goal of tracking students is monitoring the opportunities UCSB provides and their impact on student outcomes. The data comes from a mix of sources: online (LinkedIn, personal websites, and professional organizations) and UCSB's own information about students. Not all of the data was collected through an automated system; some of it required manual revision to determine whether a student's record is up to date. So while we collected from the sources listed above, some errors will exist, ranging from a simple misspelling of a word to text values appearing in an integer column. The organization providing the data does not have a fixed goal for its outcome: the CSEP group is free to explore any possible avenue for the data within reason.

### Accessing the Secure Research Compute Environment (SCRE)

SCRE is a way for the team to access the data without any breaches in the security of the data itself. Because the data describes actual past UCSB alumni, we don't want to expose any individual's sensitive information. While on SCRE, we're not only able to access this information, but we're also able to play around with it in Python code on our virtual machines! More specifically, each capstone member is assigned a specific Virtual Machine (VM) that allows the designated user to access the data through that VM. An important aspect to note is that while UCSB maintains a complete and thorough alumni dataset, not all of it is allocated to our team. The Family Educational Rights and Privacy Act (FERPA) restricts the alumni and student information that institutions such as UCSB can release. For this reason, our team only works with a portion of the alumni dataset in a secure environment as a means to respect the privacy of UCSB alumni and students.

### Goals

Here are some possible avenues we can explore:

1. Outcomes by Major
2. Ranking of Graduate Schools UCSB students attend
   * Graduate Engineering Students
3. Job Opportunities
4. Geographical Distribution of Students
   * By Major
   * By Admit Status
   * By career choice
5. Freshmen vs. Transfer Admit Outcomes
6. Greek life vs Non-Greek life Students
7. Career outcomes for athletes
   * Female athletes in leadership roles
8. Students from low-income high schools
   * Distribution by undergraduate major
   * Career outcomes and future earnings

While these are our goals, we have made some progress working towards some of these, yet this is only the beginning.

### Graduate Engineering Students

Since a big part of the project has been tracking the progress of STEM majors, we thought it was a good idea to look at how many engineering students have earned some sort of graduate degree. This does not count graduate students that are currently attending, nor those that received a professional/teaching/MS degree. The goal was to see if the number of engineering students obtaining a graduate degree has decreased over the years. A big reason for this is that engineers tend to pursue jobs in industry, since it's a readily available opportunity after obtaining a BS degree.

```{eval-rst}
.. figure:: engvsgrad.png
   :scale: 75%

   In the plot, we can see the increasing number of students who obtain their graduate degree.
```

Some things to note: the range of the data is 2000 to 2018, so many students are still entering graduate school for the first time, which accounts for a large rise. Also, not all students immediately enter a graduate program (i.e., some students take a gap year or work in industry first). We can see a drop-off in the beginning of the plot because there hasn't been enough time for students to enter graduate school right away.

### Undergraduate Students from Low-Income High Schools

High schools throughout California are far from equal in terms of how they prepare students for college. Additionally, most high schools serving underprivileged students lack strong STEM programs, and few offer computer science classes. [A study by the Kapor Center](https://www.kaporcenter.org/wp-content/uploads/2019/06/Computer-Science-in-California-Schools.pdf) found that among high schools where most students are Black, Latino, and Native American, only 39 percent offer computer science courses, compared with 72 percent of schools where white or Asian students make up the majority. Additionally, computer science courses are more often offered at high-income schools (55 percent) than at low-income schools (35 percent). Do students who attended high-income high schools have an advantage when it comes to being admitted to UCSB's College of Engineering? Are students from low-income high schools being served equally by UCSB's STEM programs?

#### Socioeconomic Background of Freshmen Admits at UCSB by Major

```{eval-rst}
.. figure:: CS.png
   :scale: 65%

   Socioeconomic Distribution of Students in Computer Science
```

```{eval-rst}
.. figure:: Chicano_Studies.png
   :scale: 65%

   Socioeconomic Distribution of Students in Chicano Studies
```

```{eval-rst}
.. figure:: Sociology.png
   :scale: 65%

   Socioeconomic Distribution of Students in Sociology
```

```{eval-rst}
.. figure:: Economics.png
   :scale: 65%

   Socioeconomic Distribution of Students in Economics
```

```{eval-rst}
.. figure:: Biology.png
   :scale: 65%

   Socioeconomic Distribution of Students in Biology
```

When looking at these preliminary graphs, we can see that majors in STEM fields are disproportionately populated by students from affluent high schools. Currently, these graphs do not include students from out of state, international students, or transfer students.

### Freshmen vs. Transfer Admit Outcomes

Many people believe that transferring to a 4-year college is the way to go because of the amount of money saved. Others believe attending a 4-year university for the full 4 years is the better choice. Taking a quick glance at the data, we can see some differences. First, we see transfer students obtaining Minors and Bachelors of Science at a higher rate than freshman admits. Meanwhile, freshman admits have a higher rate of receiving Bachelors of Art, Bachelors of Fine Arts, and Bachelors of Music. (A sketch of how such rates can be computed appears at the end of this update.)

```{eval-rst}
.. figure:: transfer_v_freshman_degree_type.png
   :scale: 50%

   Degree Type Rate: Freshman vs Transfer
```

Also, there appear to be differences in fields of study. The biggest is that transfers tend to have a higher rate of biology majors.

```{eval-rst}
.. figure:: trasnfer_v_freshman_field.png
   :scale: 50%

   Major Rate: Freshman vs Transfer
```

Finally, freshman admits also seem to work at institutions similar to those of their peers more often.

```{eval-rst}
.. figure:: transfer_v_freshman_institutions.png
   :scale: 50%

   Institutions: Freshman vs Transfer
```

### Software Used

* Python
* SCRE
* Excel
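As referenced in the Freshmen vs. Transfer section above, here is a rough pandas sketch of how a rate comparison like the degree-type chart could be computed. The column names (`admit_type`, `degree_type`) and the tiny inline frame are hypothetical; the real alumni data lives only inside the SCRE.

```python
import pandas as pd

# Hypothetical frame mirroring the secured alumni table; real data stays in the SCRE.
df = pd.DataFrame(
    {
        "admit_type": ["Freshman", "Transfer", "Freshman", "Transfer"],
        "degree_type": ["BA", "BS", "BS", "BS"],
    }
)

# Share of each degree type within each admit group, as plotted above.
rates = (
    df.groupby("admit_type")["degree_type"]
    .value_counts(normalize=True)
    .rename("rate")
    .reset_index()
)
print(rates)
```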
57.936508
947
0.774521
eng_Latn
0.999238
03edd0c3a0ab54a2bea9e4a2091d2d6828882ae9
3,237
md
Markdown
content/blog/Blog/2021-retrospect.md
Hyoj-Kim/Hyoj-Kim.github.io
201a89d04155689092377d7e50956d2828bebffa
[ "MIT" ]
null
null
null
content/blog/Blog/2021-retrospect.md
Hyoj-Kim/Hyoj-Kim.github.io
201a89d04155689092377d7e50956d2828bebffa
[ "MIT" ]
3
2021-12-08T11:26:43.000Z
2021-12-14T13:15:47.000Z
content/blog/Blog/2021-retrospect.md
Hyoj-Kim/Hyoj-Kim.github.io
201a89d04155689092377d7e50956d2828bebffa
[ "MIT" ]
null
null
null
---
title: 2021 Retrospective
date: 2022-01-07 01:01:28
category: blog
thumbnail: { thumbnailSrc }
draft: false
---

🤔 Just how much did I really grow this year?

# Data about 'me'

2021 began with me enjoying my freedom after leaving my job. Then, wanting to use my time more efficiently, I became somewhat obsessed with productivity apps and habit-building for a while. That is when I came across Notion. From the moment I first tried it, I collected all sorts of usage tips and followed along, wondering how I should use it. The conclusion I reached was to collect data about 'me' and use it to become a better 'me'. Around the same time I was hooked on a book I had read and on a study method(?) called the daily report, so I decided to start recording my time.

![TogglTrack](./images/01.png)

Partly to build a recording habit as well, I started tracking my time. The app I used is Toggl Track. At first I had no idea how to categorize things, so I just recorded everything without categories (light gray). Once enough data had accumulated, I recorded under four categories: present me (purple), future me (blue), maintenance (pink), and waste (orange). The time spent on my present self equals the time wasted... I clearly played around a lot. Seeing it all collected like this really makes me reflect.

# 42Seoul

On February 15, 2021, La Piscine began. Because it ran on alternating days due to COVID, instead of finding a room near the cluster I stayed at my aunt's place in Gimpo for the month. During that month I wrote a diary on the bus ride back. I started with the resolve to dive fully into this field, and for someone who lacked confidence, it was a time that showed me this was fun and gave me hope that I could genuinely make it my career.

After La Piscine ended, for the month while the second Piscine of cohort 4 was running, I simply played. Until the results came out, I just rested without thinking about anything. In May, once I became a cadet, I went around looking for a place to live. After moving quickly, I went to the cluster almost every day, working through the assignments one by one, however clumsily. I also steadily attended company information sessions and mentor lectures, figuring that attending such events often was how I would find out which fields interested me. Up until June, before COVID got serious again, my Notion diary was packed full; it was a period of concentrating on adapting to 42 Seoul.

From July until early September, everything switched fully online because of COVID, and the blackhole paused along with it, so a period of side projects began: the Open Source Contribution Academy, the 42Seoul meetup day, the Piscine hackathon... I was all-in on everything, and thankfully it all came back to me as good results. From late November, I returned to full 42Seoul focus mode, with a club and study groups on the side... It gets hectic at times, but it is so much fun. So I keep doing it.

# Open Source Contribution Academy

![slack-capture](./images/02.png)

![2021OSCA](./images/03.png)

In July, the period I called side-project time, I decided to apply the moment I saw a post about the Open Source Contribution Academy on Slack, and wrote my application. Honestly, every time I read the project descriptions I wondered whether there was anything the current me could actually do, but I wanted to try so badly that I made my case as hard as I could. Luckily, I was able to join the Chromium/Blink team, which I had listed as my first choice. Until then the project had been a rather difficult one that mostly selected people with development experience, but this year's mentor said he chose mostly people without experience because he wanted to open the door to the open source world through Chromium.

I had heard of Chromium before while trying out all kinds of browsers: I knew roughly that Naver Whale and the Vivaldi browser are Chromium-based. In truth I knew almost nothing; still, the fact that it is the open source foundation of the browser I mainly use, and that it is a Google project of enormous scale, thrilled me.

![notion](./images/04.png)

![ppt](./images/05.png)

Knowing nothing, there was no way contributing to open source would be easy. From setting up the environment, to picking an issue, to how to fix it and how to get it reviewed... every single step felt like a giant wall to me. Even so, moved by a mentor who carved out his own time to support the mentees and by teammates who shared what they had studied, I participated just as eagerly. Whenever I worked on anything contribution-related, I tried to record it in as much detail as possible, and I set up a Notion page for the team so we could share things beyond Slack. Perhaps because of that, I somehow ended up as the lead mentee and even gave the final presentation. With other activities overlapping during the preparation period I nearly wore myself out, but thanks to the mentor and teammates who looked after me and cheered me on to the end, I finished exactly as prepared, without mistakes. That the presentation results were good too made me even happier 😋

As I mentioned in the presentation, contributing to open source changed me so much that I could genuinely feel myself growing. Starting with the Git hands-on training at the beginning, I got to practice reading other people's code, and the moment it felt most real was when I found another issue in the course of working through one. Having moved past the stage where I only tried to read and absorb what was already written, the joy of spotting a part that could be improved, however small, proposing it, and having that proposal accepted was even headier than my first contribution. That moment made me resolve to keep contributing steadily even after the academy period.

# Wrapping up

2021 was, in a word, the year of 'challenge' and of 'la piscine (the swimming pool)'. For the career path I had resolved to try again, I simply dove in. Although I couldn't visit an actual swimming pool, the many pools I dove into over the year were filled with countless people. Swimming together in these places, I was sometimes lazy and sometimes overdid it, but I am happy that I can immerse myself. I want to tell my past self, who made up her mind in advance and gave up: I can do it too. In 2022 I will keep swimming hard. Until I see an island where I want to stay, I intend to swim like crazy.
56.789474
440
0.71764
kor_Hang
1.00001
03ee1efd67af41b827d4b785c7c4500319385bc4
7,076
md
Markdown
docs-archive-a/2014/integration-services/control-flow/analysis-services-processing-task.md
v-alji/sql-docs-archive-pr.pt-br
2791ff90ec3525b2542728436f5e9cece0a24168
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs-archive-a/2014/integration-services/control-flow/analysis-services-processing-task.md
v-alji/sql-docs-archive-pr.pt-br
2791ff90ec3525b2542728436f5e9cece0a24168
[ "CC-BY-4.0", "MIT" ]
1
2021-11-25T02:18:31.000Z
2021-11-25T02:26:28.000Z
docs-archive-a/2014/integration-services/control-flow/analysis-services-processing-task.md
v-alji/sql-docs-archive-pr.pt-br
2791ff90ec3525b2542728436f5e9cece0a24168
[ "CC-BY-4.0", "MIT" ]
2
2021-09-29T08:52:22.000Z
2021-10-13T09:16:56.000Z
---
title: Analysis Services Processing Task | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: integration-services
ms.topic: conceptual
f1_keywords:
- sql12.dts.designer.asprocessingtask.f1
helpviewer_keywords:
- Analysis Services Processing task
- processing objects [Integration Services]
ms.assetid: e5748836-b4ce-4e17-ab6b-617a336f02f4
author: chugugrace
ms.author: chugu
ms.openlocfilehash: 19ef8046c06c9131b3ea2eb8ffe267c026808dfd
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87679474"
---
# <a name="analysis-services-processing-task"></a>Analysis Services Processing Task

The [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] Processing task processes [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] objects such as tabular models, cubes, dimensions, and mining models. Keep the following in mind when processing tabular models:

- You cannot perform impact analysis on tabular models.
- Some processing options for tabular mode are not exposed, such as Process Defrag and Process Recalc. You can perform these functions by using the Execute DDL task.
- The options Process Index and Process Update are not appropriate for tabular models and should not be used.
- Batch settings for tabular models are ignored.

[!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] includes a number of tasks that perform business intelligence operations, such as running data definition language (DDL) statements and data mining prediction queries. For more information about related business intelligence tasks, click one of the following topics:

- [Analysis Services Execute DDL Task](analysis-services-execute-ddl-task.md)
- [Data Mining Query Task](data-mining-query-task.md)

## <a name="object-processing"></a>Object processing

You can process multiple objects at the same time. When processing multiple objects, you define settings that apply to the processing of all the objects in the batch.

Objects in a batch can be processed in sequence or in parallel. If the batch does not contain objects for which the processing sequence is important, parallel processing can speed up processing. If objects in the batch are processed in parallel, you can configure the task to let it determine how many objects to process in parallel, or you can manually specify the number of objects to process at the same time. If objects are processed in sequence, you can set a transaction attribute on the batch, either by enlisting all objects in one transaction or by using a separate transaction for each object in the batch.

When you process analytic objects, you may also want to process the objects that depend on them. The [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] Processing task includes an option to process any dependent objects in addition to the selected objects.

Typically, you process dimension tables before processing fact tables. You may encounter errors if you try to process fact tables before processing the dimension tables.

This task also lets you configure the handling of errors in dimension keys. For example, the task can ignore errors, or it can stop after a specified number of errors occurs. The task can use the default error configuration, or you can build a custom error configuration. In a custom error configuration, you specify how the task handles errors and error conditions. For example, you can specify that the task should stop running when the fourth error occurs, or you can specify how the task should handle **Null** key values. The custom error configuration can also include the path of an error log.

> [!NOTE]
> The [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] Processing task can process only analytic objects created by the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] tools.

This task is frequently used in combination with a Bulk Insert task that loads data into a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] table, or with a Data Flow task that implements a data flow that loads data into a table. For example, the Data Flow task might have a data flow that extracts data from an online transactional processing (OLTP) database and loads it into a fact table in a data warehouse, after which the [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] Processing task is called to process the cube built on the data warehouse.

The [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] Processing task uses an [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] connection manager to connect to an instance of [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)]. For more information, see [Analysis Services Connection Manager](../connection-manager/analysis-services-connection-manager.md).

## <a name="error-handling"></a>Error handling

## <a name="configuration-of-the-analysis-services-processing-task"></a>Configuration of the Analysis Services Processing task

You can set properties through [!INCLUDE[ssIS](../../includes/ssis-md.md)] Designer or programmatically. For more information about the properties that you can set in [!INCLUDE[ssIS](../../includes/ssis-md.md)] Designer, click one of the following topics:

- [Analysis Services Processing Task Editor &#40;General Page&#41;](../general-page-of-integration-services-designers-options.md)
- [Analysis Services Processing Task Editor &#40;Analysis Services Page&#41;](../analysis-services-processing-task-editor-analysis-services-page.md)
- [Expressions Page](../expressions/expressions-page.md)

For more information about how to set these properties in [!INCLUDE[ssIS](../../includes/ssis-md.md)] Designer, click the following topic:

- [Set the Properties of a Task or Container](../set-the-properties-of-a-task-or-container.md)

## <a name="programmatic-configuration-of-the-analysis-services-processing-task"></a>Programmatic configuration of the Analysis Services Processing task

For more information about programmatically setting these properties, click one of the following topics:

- <xref:Microsoft.DataTransformationServices.Tasks.DTSProcessingTask.DTSProcessingTask>
84.238095
680
0.780384
por_Latn
0.998863
03eec03d23f1467057f2330d469e4dbabae1c1a1
2,970
md
Markdown
aspnetcore/mvc/views/tag-helpers/built-in/image-tag-helper.md
AlexanderUsmanov/Docs.ru-ru
5e5ce086955ef8e41e97d524a6f1141be2b60d8e
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/mvc/views/tag-helpers/built-in/image-tag-helper.md
AlexanderUsmanov/Docs.ru-ru
5e5ce086955ef8e41e97d524a6f1141be2b60d8e
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/mvc/views/tag-helpers/built-in/image-tag-helper.md
AlexanderUsmanov/Docs.ru-ru
5e5ce086955ef8e41e97d524a6f1141be2b60d8e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Image Tag Helper in ASP.NET Core
author: pkellner
description: Learn how to work with the Image Tag Helper
manager: wpickett
ms.author: riande
ms.date: 02/14/2017
ms.prod: aspnet-core
ms.technology: aspnet
ms.topic: article
uid: mvc/views/tag-helpers/builtin-th/image-tag-helper
ms.openlocfilehash: 6aa9175f873c4ea62e0319c812e5312cd3331141
ms.sourcegitcommit: 48beecfe749ddac52bc79aa3eb246a2dcdaa1862
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 03/22/2018
---
# <a name="image-tag-helper-in-aspnet-core"></a>Image Tag Helper in ASP.NET Core

By [Peter Kellner](http://peterkellner.net)

The Image Tag Helper enhances the `img` (`<img>`) tag. It requires a `src` tag as well as an `asp-append-version` attribute of type `boolean`.

If the image source (`src`) is a static file on the host web server, a unique cache-busting string is appended as a query parameter to the image source. This ensures that when the file on the host web server changes, a unique request URL, including the updated query parameter, is generated. The cache-busting string is a unique value representing the hash of the static image file.

If the image source (`src`) is not a static file (for example, a remote URL, or the file doesn't exist on the server), the `<img>` tag's `src` attribute is generated without the cache-busting query string parameter.

## <a name="image-tag-helper-attributes"></a>Image Tag Helper attributes

### <a name="asp-append-version"></a>asp-append-version

When specified along with a `src` attribute, the Image Tag Helper is invoked. An example of a valid `img` tag helper:

```cshtml
<img src="~/images/asplogo.png" asp-append-version="true" />
```

If the static file exists in the directory *..wwwroot/images/asplogo.png*, the generated HTML is similar to the following (the hash will be different):

```html
<img src="/images/asplogo.png?v=Kl_dqr9NVtnMdsM2MUg4qthUnWZm5T1fCEimBPWDNgM"/>
```

The value assigned to the parameter `v` is the hash value of the file on disk. If the web server is unable to obtain read access to the static file referenced, no `v` parameter is added to the `src` attribute.

- - -

### <a name="src"></a>src

To activate the Image Tag Helper, the src attribute is required on the `<img>` element.

> [!NOTE]
> The Image Tag Helper uses the `Cache` provider on the local web server to store the calculated `Sha512` value of a given file. If the file is requested again, the `Sha512` value does not need to be recalculated. The cache is invalidated by a file watcher that is attached to the file when the file's `Sha512` value is calculated.

## <a name="additional-resources"></a>Additional resources

* <xref:performance/caching/memory>
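To illustrate the cache-busting mechanism described above, here is a small Python sketch that mimics the idea: hash the file's bytes and append the digest as a `v` query parameter. This is a conceptual illustration only; the exact hash and encoding ASP.NET Core uses are framework implementation details, and the paths shown are hypothetical.

```python
import base64
import hashlib

def versioned_url(path, url):
    # Hash the file contents and append the digest as a cache-busting query string,
    # mirroring the asp-append-version behavior described above.
    with open(path, "rb") as f:
        digest = hashlib.sha512(f.read()).digest()
    token = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return f"{url}?v={token}"

# e.g. versioned_url("wwwroot/images/asplogo.png", "/images/asplogo.png")
```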
47.142857
436
0.787542
rus_Cyrl
0.917801
03eecc80666e4eadea7e6dfca9cc09fd66eca118
6,546
md
Markdown
_posts/2016-02-19-how-and-why-we-built-the-micro-purchase-platform.md
magnessjo/18f.gsa.gov
2414cf64d8e55dc2e8641cf6e3a4962517162a82
[ "CC0-1.0" ]
307
2015-01-15T20:05:58.000Z
2022-02-06T03:29:59.000Z
_posts/2016-02-19-how-and-why-we-built-the-micro-purchase-platform.md
magnessjo/18f.gsa.gov
2414cf64d8e55dc2e8641cf6e3a4962517162a82
[ "CC0-1.0" ]
1,634
2015-01-04T01:48:16.000Z
2022-03-29T09:02:16.000Z
_posts/2016-02-19-how-and-why-we-built-the-micro-purchase-platform.md
magnessjo/18f.gsa.gov
2414cf64d8e55dc2e8641cf6e3a4962517162a82
[ "CC0-1.0" ]
401
2015-01-02T02:41:21.000Z
2022-03-28T14:28:14.000Z
--- title: "How and why we built the micro-purchase bidding platform" date: 2016-02-19 authors: - alan - kane - alla tags: - micro-purchase platforms - procurement - acquisition services excerpt: "This past December, 18F launched a micro-purchase platform to enable vendors to place bids on opportunities to deliver open source code that costs $3,500 or less. This is a look at how and why we built this platform." description: "This past December, 18F launched a micro-purchase platform to enable vendors to place bids on opportunities to deliver open source code that costs $3,500 or less. This is a look at how and why we built this platform." image: /assets/blog/micro-purchase/micro-purchase-homepage2.jpg hero: false --- This past December, 18F launched a micro-purchase platform to enable vendors to place bids on opportunities to deliver open source code that costs $3,500 or less. This platform is a key part of 18F’s larger experiment around using the federal government’s [micro-purchase authority](https://www.acquisition.gov/far/html/Subpart%2013_2.html) to procure useful digital services from the broader vendor community. Below is a look at how and why we built this platform. Testing a hypothesis with a minimum viable product -------------------------------------------------- [![The first version of the micro-purchase experiment used a GitHub issue to track bids.]({{site.baseurl}}/assets/blog/micro-purchase/micro-purchase-issue.jpg)](https://github.com/18F/calc/issues/255) For the [first micro-purchase auction](https://github.com/18F/calc/issues/255), we launched a minimum viable product (MVP). We wanted to test the hypothesis that vendors would bid on very small opportunities and successfully deliver the requirements. If this hypothesis was false, we wouldn’t want to have sunk many hours into building a more complex platform that serves no purpose. To test the hypothesis, we built a web-based contraption consisting of Google Forms, GitHub Issues, and the GitHub API. Vendors placed bids in the Google Form, which populated a Google Spreadsheet. The spreadsheet triggered a Google Apps Script program that interacted with the GitHub API to update the title of a designated GitHub Issue. Several times during the bidding process, we needed to manually re-run the script as it did not always run when it needed to. The contraption worked good enough but was not something we would want to keep using. Building a platform ------------------- [![The micro-purchase platform site.]({{site.baseurl}}/assets/blog/micro-purchase/micro-purchase-homepage2.jpg)](https://micropurchase.18f.gov) After the initial auction, our hypothesis was proven: many vendors placed bids, and the winning vendor successfully delivered the requirement. However, the Google Form MVP was not sustainable for future auctions. With this in mind, we set out to build a platform to enable easy open source micro-purchases. First and foremost, our goal for the platform was that it would be easy for vendors to participate. That meant that vendors would not have to waste time with long registration processes or get mired down in lengthy requirements documents. Second, we wanted the platform to make the administration of posting, receiving bids, and evaluating the vendor-delivered code to be a painless, scalable process for 18F staff. As we said in our blog post [introducing the micro-purchase experiment](https://18f.gsa.gov/2015/10/13/open-source-micropurchasing/), one goal with this project is to “contract for [open source] contributions. 
And we want to do it the 18F way.” One aspect of contracting “the 18F way” is that systems should be built out of API-driven, modular components. The micro-purchase platform itself is no exception to this principle. In addition to a Rails application’s web interface, we built an API interface, as well as Ruby client libraries for the platform’s API and for the SAM.gov API, where vendors had to register before bidding on an auction. The main codebase can be accessed at [micropurchase.18f.gov ](https://micropurchase.18f.gov)and is [available on GitHub](https://github.com/18F/micropurchase). We chose Rails because it’s an open source, well-documented framework for rapidly building web applications. Also, we have a lot of in-house expertise in Ruby and some team members who knew how to keep a Rails apps [clean](https://codeclimate.com/github/18F/micropurchase) while moving fast. Because we require all participating vendors to have a DUNS number as part of a valid SAM.gov registration, we use the SAM.gov API to assist us with the administration of the micro-purchase platform. We built a separate Ruby gem, called [Samwise](https://github.com/18F/samwise), to access the SAM.gov API. Samwise, in turn, is used in a [Rake task](https://github.com/18F/micropurchase/blob/develop/lib/tasks/sam.rake) for verifying that DUNS numbers provided by vendors are registered in SAM.gov. We also took a somewhat unorthodox approach to authentication for a government website. The micro-purchase platform only uses GitHub OAuth for authentication (for now, at least). We made this decision because vendors need to have a GitHub account in order to deliver the requirements of a micro-purchase via pull request. Since vendors would need to have a GitHub account anyway, we felt it would be onerous to require an account with an additional identity provider in order to participate. Most of the site content can be seen without having to sign in. The home page has a list of current and expired auctions with details about bids. When you want to bid on an auction, you are prompted to sign in through GitHub. ## What’s next? Our team will continue this experiment, and we anticipate rolling out new auction methods in the coming weeks. This will include single-bid auctions as well as opportunities reserved for certain types of vendors (such as [SBA 8(a) small businesses](https://www.sba.gov/contracting/government-contracting-programs/8a-business-development-program)). We’ll also continue iterating on the platform and making it better for users and administrators, alike. The next set of auctions will launch at 1 p.m. EST on February 24, and we look forward to posting a larger variety of opportunities in the future! Want to learn more about the platform and future micro-purchases? [Join our email list](http://eepurl.com/bJQHFr) to receive periodic updates. We also welcome issues and pull requests on both the platform and [API](https://pages.18f.gov/micropurchase-api-docs/).
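As an aside, the MVP described earlier hinged on one small API interaction: updating a GitHub issue's title as bids came in. The original contraption did this from Google Apps Script; the following Python sketch (illustrative only, with placeholder repo, issue number, and token) shows the equivalent GitHub API call:

```python
import requests

def update_issue_title(owner, repo, issue_number, new_title, token):
    """PATCH the issue so its title reflects the current high bid,
    as the Apps Script in the MVP did."""
    response = requests.patch(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_number}",
        json={"title": new_title},
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    response.raise_for_status()

# Hypothetical usage:
# update_issue_title("18F", "calc", 255, "Current bid: $1,500", token)
```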
51.952381
200
0.787504
eng_Latn
0.997818
03eed9c45cf9fe5c015b46c019cfe5d119b79692
975
md
Markdown
DataSources/ProWatch/ProWatch/Parsers/parserContent_prowatch-badge-access.md
TJee-snyk/Exabeam
d27a45bfed7fc03d8b4ad430fd3520043b14c2e9
[ "MIT" ]
null
null
null
DataSources/ProWatch/ProWatch/Parsers/parserContent_prowatch-badge-access.md
TJee-snyk/Exabeam
d27a45bfed7fc03d8b4ad430fd3520043b14c2e9
[ "MIT" ]
null
null
null
DataSources/ProWatch/ProWatch/Parsers/parserContent_prowatch-badge-access.md
TJee-snyk/Exabeam
d27a45bfed7fc03d8b4ad430fd3520043b14c2e9
[ "MIT" ]
1
2022-03-07T23:54:48.000Z
2022-03-07T23:54:48.000Z
#### Parser Content

```Java
{
  Name = prowatch-badge-access
  Vendor = ProWatch
  Lms = Direct
  DataType = "physical-access"
  TimeFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
  Conditions = [ """"evnt_dat":"""", """"evnt_descrp":"""", """"badge_employeeid":"""", """"cardstatus_descrp":"""" ]
  Fields = [
    """exabeam_host=({host}[\w.\-]+)""",
    """"location":"\s*({location_building}[^"]+?)\s*"""",
    """"descrp":"\s*({location_door}[^"]+?)\s*"""",
    """"evnt_dat":"({time}\d\d\d\d-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+Z)""",
    """"cardno":"({badge_id}\d+)""",
    """"comp_name":"\s*({additional_info}[^"]+?)\s*"""",
    """"evnt_descrp":"\s*({outcome}[^"]+?)\s*"""",
    """"threat_lev":({threat_level}\d+)""",
    """"fname":"\s*({first_name}[^"]+?)\s*"""",
    """"lname":"\s*({last_name}[^"]+?)\s*"""",
    """"badge_employeeid":"\s*({employee_id}[^"]+?)\s*"""",
    """"cardstatus_descrp":"\s*({card_status}[^"]+?)\s*""""
  ]
}
```
39
119
0.46359
yue_Hant
0.16883
03ef38ac7e5f2f7e5b6d99424cee90c101c4f3d6
11,078
md
Markdown
docs/c-runtime-library/reference/fopen-wfopen.md
changeworld/cpp-docs.zh-cn
fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/reference/fopen-wfopen.md
changeworld/cpp-docs.zh-cn
fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/reference/fopen-wfopen.md
changeworld/cpp-docs.zh-cn
fab4b89663eadfc318b1c0e5f0c4f2506f24bbd6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: fopen, _wfopen
ms.date: 11/04/2016
apiname:
- _wfopen
- fopen
apilocation:
- msvcrt.dll
- msvcr80.dll
- msvcr90.dll
- msvcr100.dll
- msvcr100_clr0400.dll
- msvcr110.dll
- msvcr110_clr0400.dll
- msvcr120.dll
- msvcr120_clr0400.dll
- ucrtbase.dll
- api-ms-win-crt-stdio-l1-1-0.dll
apitype: DLLExport
f1_keywords:
- fopen
- _wfopen
- _tfopen
- corecrt_wstdio/_wfopen
- stdio/fopen
helpviewer_keywords:
- opening files, for file I/O
- wfopen function
- tfopen function
- _tfopen function
- _wfopen function
- files [C++], opening
- fopen function
ms.assetid: e868993f-738c-4920-b5e4-d8f2f41f933d
ms.openlocfilehash: 1397f3b3513fc9a3e93a69841a93b40c16e490cf
ms.sourcegitcommit: 1819bd2ff79fba7ec172504b9a34455c70c73f10
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 11/09/2018
ms.locfileid: "51333223"
---
# <a name="fopen-wfopen"></a>fopen, _wfopen

Opens a file. More-secure versions of these functions that perform additional parameter validation and return error codes are available; see [fopen_s, _wfopen_s](fopen-s-wfopen-s.md).

## <a name="syntax"></a>Syntax

```C
FILE *fopen(
   const char *filename,
   const char *mode
);
FILE *_wfopen(
   const wchar_t *filename,
   const wchar_t *mode
);
```

### <a name="parameters"></a>Parameters

*filename*<br/>
Filename.

*mode*<br/>
Kind of access that's enabled.

## <a name="return-value"></a>Return Value

Each of these functions returns a pointer to the open file. A null pointer value indicates an error. If *filename* or *mode* is **NULL** or an empty string, these functions trigger the invalid parameter handler, as described in [Parameter Validation](../../c-runtime-library/parameter-validation.md). If execution is allowed to continue, these functions return **NULL** and set **errno** to **EINVAL**. For more information, see [errno, _doserrno, _sys_errlist, and _sys_nerr](../../c-runtime-library/errno-doserrno-sys-errlist-and-sys-nerr.md).

## <a name="remarks"></a>Remarks

The **fopen** function opens the file that is specified by *filename*. By default, a narrow *filename* string is interpreted by using the ANSI code page (CP_ACP). In Windows desktop applications, this can be changed to the OEM code page (CP_OEMCP) by using the [SetFileApisToOEM](/windows/desktop/api/fileapi/nf-fileapi-setfileapistooem) function. You can use the [AreFileApisANSI](/windows/desktop/api/fileapi/nf-fileapi-arefileapisansi) function to determine whether *filename* is interpreted by using the ANSI or the system default OEM code page. **_wfopen** is a wide-character version of **fopen**; the arguments to **_wfopen** are wide-character strings. Otherwise, **_wfopen** and **fopen** behave identically. Just using **_wfopen** does not affect the coded character set that is used in the file stream.

**fopen** accepts paths that are valid on the file system at the point of execution; **fopen** accepts UNC paths and paths that involve mapped network drives as long as the system that executes the code has access to the share or the mapped drive at the time of execution. When you construct paths for **fopen**, make sure that drives, paths, or network shares will be available in the execution environment. You can use either forward slashes (/) or backslashes (\\) as the directory separators in a path.

Always check the return value to see whether the pointer is NULL before you perform any other operations on the file. If an error occurs, the global variable **errno** is set and may be used to obtain specific error information. For more information, see [errno, _doserrno, _sys_errlist, and _sys_nerr](../../c-runtime-library/errno-doserrno-sys-errlist-and-sys-nerr.md).

## <a name="unicode-support"></a>Unicode Support

**fopen** supports Unicode file streams. To open a Unicode file, pass a **ccs** flag that specifies the desired encoding to **fopen**, as follows:

> **FILE \*fp = fopen("newfile.txt", "rt+, ccs=**_encoding_**");**

Allowed values of *encoding* are **UNICODE**, **UTF-8**, and **UTF-16LE**.

When a file is opened in Unicode mode, input functions translate the data that's read from the file into UTF-16 data stored as type **wchar_t**. Functions that write to a file opened in Unicode mode expect buffers that contain UTF-16 data stored as type **wchar_t**. If the file is encoded as UTF-8, then UTF-16 data is translated into UTF-8 when it is written, and the file's UTF-8-encoded content is translated into UTF-16 when it is read. An attempt to read or write an odd number of bytes in Unicode mode causes a [parameter validation](../../c-runtime-library/parameter-validation.md) error. To read or write data that's stored in your program as UTF-8, use a text or binary file mode instead of a Unicode mode. You are responsible for any required encoding translation.

If the file already exists and is opened for reading or appending, the Byte Order Mark (BOM), if present in the file, determines the encoding. The BOM encoding takes precedence over the encoding that is specified by the **ccs** flag. The **ccs** encoding is only used when no BOM is present or the file is a new file.

> [!NOTE]
> BOM detection only applies to files that are opened in Unicode mode (that is, by passing the **ccs** flag).

The following table summarizes the modes that are used for various **ccs** flags given to **fopen** and Byte Order Marks in the file.

### <a name="encodings-used-based-on-ccs-flag-and-bom"></a>Encodings Used Based on ccs Flag and BOM

|ccs flag|No BOM (or new file)|BOM: UTF-8|BOM: UTF-16|
|----------------|----------------------------|-----------------|------------------|
|**UNICODE**|**UTF-16LE**|**UTF-8**|**UTF-16LE**|
|**UTF-8**|**UTF-8**|**UTF-8**|**UTF-16LE**|
|**UTF-16LE**|**UTF-16LE**|**UTF-8**|**UTF-16LE**|

Files opened for writing in Unicode mode have a BOM written to them automatically.

If *mode* is **"a, ccs=**_encoding_**"**, **fopen** first tries to open the file by using both read and write access. If this succeeds, the function reads the BOM to determine the encoding for the file; if this fails, the function uses the default encoding for the file. In either case, **fopen** will then re-open the file by using write-only access. (This applies to **"a"** mode only, not to **"a+"** mode.)

### <a name="generic-text-routine-mappings"></a>Generic-Text Routine Mappings

|TCHAR.H routine|_UNICODE and _MBCS not defined|_MBCS defined|_UNICODE defined|
|---------------------|------------------------------------|--------------------|-----------------------|
|**_tfopen**|**fopen**|**fopen**|**_wfopen**|

The character string *mode* specifies the kind of access that is requested for the file, as follows.

|*mode*|Access|
|-|-|
| **"r"** | Opens for reading. If the file does not exist or cannot be found, the **fopen** call fails. |
| **"w"** | Opens an empty file for writing. If the given file exists, its contents are destroyed. |
| **"a"** | Opens for writing at the end of the file (appending) without removing the end-of-file (EOF) marker before new data is written to the file. Creates the file if it does not exist. |
| **"r+"** | Opens for both reading and writing. The file must exist. |
| **"w+"** | Opens an empty file for both reading and writing. If the file exists, its contents are destroyed. |
| **"a+"** | Opens for reading and appending. The appending operation includes the removal of the EOF marker before new data is written to the file. The EOF marker is not restored after writing is completed. Creates the file if it does not exist. |

When a file is opened by using the **"a"** access type or the **"a+"** access type, all write operations occur at the end of the file. The file pointer can be repositioned by using [fseek](fseek-fseeki64.md) or [rewind](rewind.md), but it's always moved back to the end of the file before any write operation is performed. Therefore, existing data cannot be overwritten.

The **"a"** mode does not remove the EOF marker before it appends to the file. After appending has occurred, the MS-DOS TYPE command only shows data up to the original EOF marker and not any data appended to the file. Before it appends to the file, the **"a+"** mode does remove the EOF marker. After appending, the MS-DOS TYPE command shows all data in the file. The **"a+"** mode is required for appending to a stream file that is terminated with the CTRL+Z EOF marker.

When the **"r+"**, **"w+"**, or **"a+"** access type is specified, both reading and writing are enabled (the file is said to be open for "update"). However, when you switch from reading to writing, the input operation must encounter an EOF marker. If there is no EOF, you must use an intervening call to a file positioning function. The file positioning functions are **fsetpos**, [fseek](fseek-fseeki64.md), and [rewind](rewind.md). When you switch from writing to reading, you must use an intervening call to either **fflush** or to a file positioning function.

In addition to the earlier values, the following characters can be appended to *mode* to specify the translation mode for newline characters.

|*mode* modifier|Translation mode|
|-|-|
| **t** | Open in text (translated) mode. |
| **b** | Open in binary (untranslated) mode; translations involving carriage-return and line feed characters are suppressed. |

In text mode, CTRL+Z is interpreted as an EOF character on input. In files that are opened for reading/writing by using **"a+"**, **fopen** checks for a CTRL+Z at the end of the file and removes it, if it is possible. This is done because using [fseek](fseek-fseeki64.md) and **ftell** to move within a file that ends with CTRL+Z may cause [fseek](fseek-fseeki64.md) to behave incorrectly near the end of the file.

In text mode, carriage return-line feed combinations are translated into single line feeds on input, and line feed characters are translated into carriage return-line feed combinations on output. When a Unicode stream-I/O function operates in text mode (the default), the source or destination stream is assumed to be a sequence of multibyte characters. Therefore, the Unicode stream-input functions convert multibyte characters into wide characters (as if by a call to the mbtowc function). For the same reason, the Unicode stream-output functions convert wide characters into multibyte characters (as if by a call to the wctomb function).

If **t** or **b** is not given in *mode*, the default translation mode is defined by the global variable [_fmode](../../c-runtime-library/fmode.md). If **t** or **b** is prefixed to the argument, the function fails and returns **NULL**.

For more information about how to use text and binary modes in Unicode and multibyte stream I/O, see [Text and Binary Mode File I/O](../../c-runtime-library/text-and-binary-mode-file-i-o.md) and [Unicode Stream I/O in Text and Binary Modes](../../c-runtime-library/unicode-stream-i-o-in-text-and-binary-modes.md).

The following options can be appended to *mode* to specify additional behaviors.

|*mode* modifier|Behavior|
|-|-|
| **c** | Enable the commit flag for the associated *filename* so that the contents of the file buffer are written directly to disk if either **fflush** or **_flushall** is called. |
| **n** | Reset the commit flag for the associated *filename* to "no-commit." This is the default. It also overrides the global commit flag if you link your program with COMMODE.OBJ. The global commit flag default is "no-commit" unless you explicitly link your program with COMMODE.OBJ (see [Link Options](../../c-runtime-library/link-options.md)). |
| **N** | Specifies that the file is not inherited by child processes. |
| **S** | Specifies that caching is optimized for, but not restricted to, sequential access from disk. |
| **R** | Specifies that caching is optimized for, but not restricted to, random access from disk. |
| **T** | Specifies a file as temporary. If possible, it is not flushed to disk. |
| **D** | Specifies a file as temporary. It is deleted when the last file pointer is closed. |
| **ccs=**_encoding_ | Specifies the coded character set to use (one of **UTF-8**, **UTF-16LE**, or **UNICODE**) for this file. Leave unspecified if you want ANSI encoding. |

Valid characters for the *mode* string used in **fopen** and **_fdopen** correspond to *oflag* arguments used in [_open](open-wopen.md) and [_sopen](sopen-wsopen.md), as follows.

|Characters in *mode* string|Equivalent *oflag* value for _open/_sopen|
|-------------------------------|----------------------------------------------------|
|**a**|**_O_WRONLY** &#124; **_O_APPEND** (usually **_O_WRONLY** &#124; **_O_CREAT** &#124; **_O_APPEND**)|
|**a+**|**_O_RDWR** &#124; **_O_APPEND** (usually **_O_RDWR** &#124; **_O_APPEND** &#124; **_O_CREAT**)|
|**r**|**_O_RDONLY**|
|**r+**|**_O_RDWR**|
|**w**|**_O_WRONLY** (usually **_O_WRONLY** &#124; **_O_CREAT** &#124; **_O_TRUNC**)|
|**w+**|**_O_RDWR** (usually **_O_RDWR** &#124; **_O_CREAT** &#124; **_O_TRUNC**)|
|**b**|**_O_BINARY**|
|**t**|**_O_TEXT**|
|**c**|None|
|**n**|None|
|**S**|**_O_SEQUENTIAL**|
|**R**|**_O_RANDOM**|
|**T**|**_O_SHORTLIVED**|
|**D**|**_O_TEMPORARY**|
|**ccs=UNICODE**|**_O_WTEXT**|
|**ccs=UTF-8**|**_O_UTF8**|
|**ccs=UTF-16LE**|**_O_UTF16**|

If you are using **"rb"** mode, you do not need to port your code, and if you expect to read most of a large file or are not concerned about network performance, you might also consider whether to use memory-mapped Win32 files as an option.

## <a name="requirements"></a>Requirements

|Function|Required header|
|--------------|---------------------|
|**fopen**|\<stdio.h>|
|**_wfopen**|\<stdio.h> or \<wchar.h>|

**_wfopen** is a Microsoft extension. For more information about compatibility, see [Compatibility](../../c-runtime-library/compatibility.md).

The **c**, **n**, **t**, **S**, **R**, **T**, and **D** *mode* options are Microsoft extensions for **fopen** and **_fdopen** and should not be used where ANSI portability is desired.

## <a name="example-1"></a>Example 1

The following program opens two files. It uses **fclose** to close the first file and **_fcloseall** to close all remaining files.

```C
// crt_fopen.c
// compile with: /W3
// This program opens two files. It uses
// fclose to close the first file and
// _fcloseall to close all remaining files.

#include <stdio.h>

FILE *stream, *stream2;

int main( void )
{
   int numclosed;

   // Open for read (will fail if file "crt_fopen.c" does not exist)
   if( (stream = fopen( "crt_fopen.c", "r" )) == NULL ) // C4996
   // Note: fopen is deprecated; consider using fopen_s instead
      printf( "The file 'crt_fopen.c' was not opened\n" );
   else
      printf( "The file 'crt_fopen.c' was opened\n" );

   // Open for write
   if( (stream2 = fopen( "data2", "w+" )) == NULL ) // C4996
      printf( "The file 'data2' was not opened\n" );
   else
      printf( "The file 'data2' was opened\n" );

   // Close stream if it is not NULL
   if( stream)
   {
      if ( fclose( stream ) )
      {
         printf( "The file 'crt_fopen.c' was not closed\n" );
      }
   }

   // All other files are closed:
   numclosed = _fcloseall( );
   printf( "Number of files closed by _fcloseall: %u\n", numclosed );
}
```

```Output
The file 'crt_fopen.c' was opened
The file 'data2' was opened
Number of files closed by _fcloseall: 1
```

## <a name="example-2"></a>Example 2

The following program creates a file (or overwrites one if it exists), in text mode using Unicode encoding. It then writes two strings into the file and closes the file. The output is a file named _wfopen_test.xml that contains the data from the output section.

```C
// crt__wfopen.c
// compile with: /W3
// This program creates a file (or overwrites one if
// it exists), in text mode using Unicode encoding.
// It then writes two strings into the file
// and then closes the file.

#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
#include <wchar.h>

#define BUFFER_SIZE 50

int main(int argc, char** argv)
{
    wchar_t str[BUFFER_SIZE];
    size_t strSize;
    FILE* fileHandle;

    // Create the xml file in text and Unicode encoding mode.
    if ((fileHandle = _wfopen( L"_wfopen_test.xml",L"wt+,ccs=UNICODE")) == NULL) // C4996
    // Note: _wfopen is deprecated; consider using _wfopen_s instead
    {
        wprintf(L"_wfopen failed!\n");
        return(0);
    }

    // Write a string into the file.
    wcscpy_s(str, sizeof(str)/sizeof(wchar_t), L"<xmlTag>\n");
    strSize = wcslen(str);
    if (fwrite(str, sizeof(wchar_t), strSize, fileHandle) != strSize)
    {
        wprintf(L"fwrite failed!\n");
    }

    // Write a string into the file.
    wcscpy_s(str, sizeof(str)/sizeof(wchar_t), L"</xmlTag>");
    strSize = wcslen(str);
    if (fwrite(str, sizeof(wchar_t), strSize, fileHandle) != strSize)
    {
        wprintf(L"fwrite failed!\n");
    }

    // Close the file.
    if (fclose(fileHandle))
    {
        wprintf(L"fclose failed!\n");
    }
    return 0;
}
```

## <a name="see-also"></a>See also

[Stream I/O](../../c-runtime-library/stream-i-o.md)<br/>
[Interpretation of Multibyte-Character Sequences](../../c-runtime-library/interpretation-of-multibyte-character-sequences.md)<br/>
[fclose, _fcloseall](fclose-fcloseall.md)<br/>
[_fdopen, _wfdopen](fdopen-wfdopen.md)<br/>
[ferror](ferror.md)<br/>
[_fileno](fileno.md)<br/>
[freopen, _wfreopen](freopen-wfreopen.md)<br/>
[_open, _wopen](open-wopen.md)<br/>
[_setmode](setmode.md)<br/>
[_sopen, _wsopen](sopen-wsopen.md)<br/>
34.727273
444
0.65093
yue_Hant
0.537942
03f022195cc2e8c9912267508ee11cd1d0f5439c
495
md
Markdown
Doc/F/FR/00/Agt/Int/US/Supply/Equipment/Sec/Background/0.md
Ralph-Diab/Cmacc-Org
8842eace9159ded10791d2385b871f6aafc46e4e
[ "MIT" ]
42
2015-12-30T20:53:29.000Z
2022-01-04T03:51:50.000Z
Doc/F/FR/00/Agt/Int/US/Supply/Equipment/Sec/Background/0.md
Ralph-Diab/Cmacc-Org
8842eace9159ded10791d2385b871f6aafc46e4e
[ "MIT" ]
16
2015-10-01T12:01:05.000Z
2022-03-27T23:32:53.000Z
Doc/F/FR/00/Agt/Int/US/Supply/Equipment/Sec/Background/0.md
Ralph-Diab/Cmacc-Org
8842eace9159ded10791d2385b871f6aafc46e4e
[ "MIT" ]
26
2015-10-07T18:40:44.000Z
2021-02-10T10:53:06.000Z
Ti=Background

1.sec=The {_Parties} have entered into this {_Agreement} to record the terms and conditions pursuant to which the {_Beneficiaries} may purchase the {_Products} indirectly from an {_Authorized_Reseller}; or directly from the {_Manufacturer}.

2.sec=The “{_Customer}” is {Customer.US.N,E,A}

3.sec=The {_Manufacturer} is {Manufacturer.US.N,E,A}

4.sec=Each {_Party} expects compliance, with the terms and conditions of this {_Agreement}, by the other {_Party}.

=[G/Z/ol-none/s4]
41.25
241
0.755556
eng_Latn
0.997585
03f0a48aa0959ab97fe1a44721f4ab24cb96085e
4,415
md
Markdown
archived/sensu-enterprise-dashboard/3.6/integrations/jira.md
elfranne/sensu-docs
28fdf14552f8da0b9742469c9f0d7aab69e0b96d
[ "MIT" ]
69
2015-01-14T20:11:56.000Z
2022-01-24T10:44:03.000Z
archived/sensu-enterprise-dashboard/3.6/integrations/jira.md
elfranne/sensu-docs
28fdf14552f8da0b9742469c9f0d7aab69e0b96d
[ "MIT" ]
2,119
2015-01-08T20:00:16.000Z
2022-03-31T15:26:31.000Z
archived/sensu-enterprise-dashboard/3.6/integrations/jira.md
elfranne/sensu-docs
28fdf14552f8da0b9742469c9f0d7aab69e0b96d
[ "MIT" ]
217
2015-01-08T09:44:23.000Z
2022-03-24T01:52:59.000Z
--- title: "JIRA" product: "Sensu Enterprise" version: "3.6" weight: 2 menu: sensu-enterprise-3.6: parent: integrations --- **ENTERPRISE: Built-in integrations are available for [Sensu Enterprise][1] users only.** - [Overview](#overview) - [Configuration](#configuration) - [Example](#example) - [Integration Specification](#integration-specification) - [`jira` attributes](#jira-attributes) ## Overview Create and resolve [Jira][2] issues for [Sensu events][3]. ## Configuration ### Example The following is an example global configuration for the `jira` enterprise event handler (integration). {{< code json >}} { "jira": { "host": "jira.example.com", "user": "admin", "password": "secret", "project": "Sensu", "timeout": 10 } } {{< /code >}} ### Integration Specification #### `jira` attributes The following attributes are configured within the `{"jira": {} }` [configuration scope][4]. host | -------------|------ description | The JIRA host address. required | true type | String example | {{< code shell >}}"host": "jira.example.com"{{< /code >}} user | -------------|------ description | The JIRA user used to authenticate. required | true type | String example | {{< code shell >}}"user": "admin"{{< /code >}} password | -------------|------ description | The JIRA user password. required | true type | String example | {{< code shell >}}"password": "secret"{{< /code >}} project | -------------|------ description | The JIRA project to use for issues. required | false type | String default | `Sensu` example | {{< code shell >}}"project": "Alerts"{{< /code >}} project_key | -------------|------ description | The JIRA project key to use for issues. This option allows the integration to work without querying JIRA for a projects key. Using this option is recommended. required | false type | String example | {{< code shell >}}"project_key": "SEN"{{< /code >}} issue_type | -------------|------ description | Specifies default issue type for projects. _NOTE: The project used with this integration must include the `issue_type` defined here. For more info please see Atlassian's documentation [here][5]._ required | false type | String default | `Incident` example | {{< code shell >}}"issue_type": "Bug"{{< /code >}} root_url | -------------|------ description | The JIRA root URL. When set, this option overrides the `host` option, most commonly used when a service proxy is in use. required | false type | String example | {{< code shell >}}"root_url": "https://services.example.com/proxy/jira"{{< /code >}} http_proxy | | -------------|------ description | The URL of a proxy to be used for HTTP requests. required | false type | String example | {{< code shell >}}"http_proxy": "http://192.168.250.11:3128"{{< /code >}} filters | ---------------|------ description | An array of Sensu event filters (names) to use when filtering events for the handler. Each array item must be a string. Specified filters are merged with default values. required | false type | Array default | {{< code shell >}}["handle_when", "check_dependencies"]{{< /code >}} example | {{< code shell >}}"filters": ["recurrence", "production"]{{< /code >}} severities | ---------------|------ description | An array of check result severities the handler will handle. 
_NOTE: event resolution bypasses this filtering._ required | false type | Array allowed values | `ok`, `warning`, `critical`, `unknown` default | {{< code shell >}}["warning", "critical", "unknown"]{{< /code >}} example | {{< code shell >}} "severities": ["critical", "unknown"]{{< /code >}} timeout | -------------|------ description | The handler execution duration timeout in seconds (hard stop). required | false type | Integer default | `10` example | {{< code shell >}}"timeout": 30{{< /code >}} [?]: # [1]: /sensu-enterprise [2]: https://www.atlassian.com/software/jira [3]: /sensu-core/1.2/reference/events [4]: /sensu-core/1.2/reference/configuration#configuration-scopes [5]: https://confluence.atlassian.com/adminjiraserver073/associating-issue-types-with-projects-861253240.html
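Pulling several of the attributes documented above together, a fuller (hypothetical) configuration for the handler might look like this:

{{< code json >}}
{
  "jira": {
    "host": "jira.example.com",
    "user": "admin",
    "password": "secret",
    "project_key": "SEN",
    "issue_type": "Incident",
    "severities": ["critical", "unknown"],
    "timeout": 30
  }
}
{{< /code >}}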
31.992754
210
0.608607
eng_Latn
0.920503
03f137e0786229086248d4b9e21cba65c913a84a
182
md
Markdown
doc/HomeWork/1901210384.md
94tiankong/HelloWorld
1d7fb0ef25ab818a6660e71d2b2e6b8761d7066f
[ "MIT" ]
null
null
null
doc/HomeWork/1901210384.md
94tiankong/HelloWorld
1d7fb0ef25ab818a6660e71d2b2e6b8761d7066f
[ "MIT" ]
null
null
null
doc/HomeWork/1901210384.md
94tiankong/HelloWorld
1d7fb0ef25ab818a6660e71d2b2e6b8761d7066f
[ "MIT" ]
1
2019-10-29T13:00:08.000Z
2019-10-29T13:00:08.000Z
### <i class="icon-chevron-sign-left"></i> October 20, 2019 - October 27, 2019

- [x] Submitted one PR to learn-with-open-source: "Before-start.md: change 正负1-2年 to 前后1-2年" [merged]
- [x] Weekly summary: this week went smoothly, with no particular difficulties.
36.4
85
0.714286
yue_Hant
0.344002
03f159f89d2880130651acf07d259ef078c63134
92
md
Markdown
README.md
danXyu/vidlee
133fafedcf96679cb50387fbfefb31d5c2878017
[ "MIT" ]
null
null
null
README.md
danXyu/vidlee
133fafedcf96679cb50387fbfefb31d5c2878017
[ "MIT" ]
null
null
null
README.md
danXyu/vidlee
133fafedcf96679cb50387fbfefb31d5c2878017
[ "MIT" ]
null
null
null
# vidlee

A new interactive way to tinderize your life by immediately chatting with matches.
30.666667
82
0.815217
eng_Latn
0.998942
03f1926cfd44b1c1784109cb6474edb53885b6e0
3,812
md
Markdown
examples/new_project_templates/multi_node_examples/README.md
kvhooreb/pytorch-lightning
133d6b3ec1222b4b630a45de5e65d7d0f2b98fb2
[ "Apache-2.0" ]
null
null
null
examples/new_project_templates/multi_node_examples/README.md
kvhooreb/pytorch-lightning
133d6b3ec1222b4b630a45de5e65d7d0f2b98fb2
[ "Apache-2.0" ]
null
null
null
examples/new_project_templates/multi_node_examples/README.md
kvhooreb/pytorch-lightning
133d6b3ec1222b4b630a45de5e65d7d0f2b98fb2
[ "Apache-2.0" ]
null
null
null
# Multi-node examples

Use these templates for multi-node training. The main complexity around cluster training is how you submit the SLURM jobs.

## Test-tube

Lightning uses test-tube to submit SLURM jobs and to run hyperparameter searches on a cluster.

To run a hyperparameter search, we normally add the values to search to the HyperOptArgumentParser:

```python
from test_tube import HyperOptArgumentParser

parser = HyperOptArgumentParser(strategy='grid_search')
parser.opt_list('--drop_prob', default=0.2, options=[0.2, 0.5], type=float, tunable=True)
parser.opt_list('--learning_rate', default=0.001, type=float, options=[0.0001, 0.0005, 0.001], tunable=True)

# give your model a chance to add its own parameters
parser = LightningTemplateModel.add_model_specific_args(parser, root_dir)

# parse args
hyperparams = parser.parse_args()
```

The above sets up a grid search on learning rate and drop probability. You can now add this object to the cluster object to perform the grid search:

```python
cluster = SlurmCluster(
    hyperparam_optimizer=hyperparams,
    log_path='/path/to/log/slurm/files',
)

# ... configure cluster options

# run grid search on cluster
nb_trials = 6  # (2 drop probs * 3 lrs)
cluster.optimize_parallel_cluster_gpu(
    YourMainFunction,
    nb_trials=nb_trials,
    job_name=hyperparams.experiment_name
)
```

Running the above will launch 6 jobs, each with a different drop prob and learning rate combination. The ```tunable``` parameter must be set to True to add that argument to the space of options; otherwise, Test-Tube will use the ```default``` value.

## SLURM Flags

However you decide to submit your jobs, debugging requires a few flags. Without these flags, you'll see an NCCL error instead of the actual error which caused the bug.

```sh
export NCCL_DEBUG=INFO
export PYTHONFAULTHANDLER=1
```

On some clusters you might need to set the network interface with this flag:

```sh
export NCCL_SOCKET_IFNAME=^docker0,lo
```

You might also need to load the latest version of NCCL:

```sh
module load NCCL/2.4.7-1-cuda.10.0
```

Finally, you must set the master port (usually a random number between 12k and 20k):

```sh
# random port between 12k and 20k
export MASTER_PORT=$((12000 + RANDOM % 8000))
```

## Simplest example

1. Modify this script with your CoolModel file.
2. Update and submit [this bash script](https://github.com/williamFalcon/pytorch-lightning/blob/master/examples/new_project_templates/multi_node_examples/minimal_multi_node_demo_script.sh):

```bash
sbatch minimal_multi_node_demo_script.sh
```

## Grid search on a cluster

#### Option 1: Run on cluster using your own SLURM script

The trainer and model will work on a cluster if you configure your SLURM script correctly.

1. Update [this demo slurm script](https://github.com/williamFalcon/pytorch-lightning/blob/master/examples/new_project_templates/multi_node_examples/demo_script.sh).
2. Submit the script:

```bash
$ sbatch demo_script.sh
```

Most people have some way to automatically generate their own scripts. To run a grid search this way, you'd need a way to automatically generate scripts using all the combinations of hyperparameters to search over.

#### Option 2: Use test-tube for SLURM script

With test-tube we can automatically generate SLURM scripts for different hyperparameter options. To run this demo:

```bash
source activate YourCondaEnv

python multi_node_cluster_auto_slurm.py --email [email protected] --gpu_partition your_partition --conda_env YourCondaEnv
```

That will submit 6 jobs. Each job will have a specific combination of hyperparams. Each job will also run on 2 nodes where each node has 8 gpus.
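For completeness, here is a minimal sketch (not part of the original templates) of what `YourMainFunction` from the test-tube section above might look like. It assumes `LightningTemplateModel` is importable alongside this script; test-tube calls the function once per trial, and any extra positional arguments it passes are absorbed with `*args`. `Trainer` argument names vary across Lightning versions:

```python
from pytorch_lightning import Trainer

def YourMainFunction(hparams, *args):
    # One sampled combination of drop_prob / learning_rate arrives in hparams.
    # LightningTemplateModel is the example model referenced above.
    model = LightningTemplateModel(hparams)

    # Assumed settings matching the demo: 2 nodes x 8 GPUs, distributed data parallel.
    trainer = Trainer(
        gpus=8,
        num_nodes=2,               # named nb_gpu_nodes in older Lightning versions
        distributed_backend='ddp',
    )
    trainer.fit(model)
```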
35.296296
191
0.747377
eng_Latn
0.952801
03f1a49480fd297dd10fd488422e5d601314a80d
679
md
Markdown
code/dotnet/OptimizationEngineAPIMultiperiod/v1/docs/OptimizerInputsRiskModel.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
6
2022-02-07T16:34:18.000Z
2022-03-30T08:04:57.000Z
code/dotnet/OptimizationEngineAPIMultiperiod/v1/docs/OptimizerInputsRiskModel.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
2
2022-02-07T05:25:57.000Z
2022-03-07T14:18:04.000Z
code/dotnet/OptimizationEngineAPIMultiperiod/v1/docs/OptimizerInputsRiskModel.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
null
null
null
# FactSet.SDK.OptimizationEngineAPIMultiperiod.Model.OptimizerInputsRiskModel

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**SimulatedRiskModel** | [**OptimizerInputsSimulatedRiskModel**](OptimizerInputsSimulatedRiskModel.md) | | [optional]
**QuantRiskModel** | [**OptimizerInputsQuantRiskModel**](OptimizerInputsQuantRiskModel.md) | | [optional]
**RawModel** | [**OptimizerInputsRawRiskModel**](OptimizerInputsRawRiskModel.md) | | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
52.230769
161
0.687776
yue_Hant
0.653525
03f1b82a94ee2443a02f1840fe8582e7923d6ab4
151
md
Markdown
examples/mock-backend-apis/Readme.md
klouddy/pwa-box
b63c170b7927f0c2641f141e2547fb4367669b2a
[ "MIT" ]
null
null
null
examples/mock-backend-apis/Readme.md
klouddy/pwa-box
b63c170b7927f0c2641f141e2547fb4367669b2a
[ "MIT" ]
16
2019-04-11T20:15:04.000Z
2019-04-18T17:44:24.000Z
examples/mock-backend-apis/Readme.md
klouddy/pwa-box
b63c170b7927f0c2641f141e2547fb4367669b2a
[ "MIT" ]
null
null
null
# Backends for testing

Set up an easy way to create mock backend servers using `json-server`.

**Example**: `npx json-server example1.json`
18.875
51
0.735099
eng_Latn
0.940234
03f1b8c3a8a7e5679de7e31b608c29dab991b961
279
md
Markdown
glossary/Viewport.md
15601342019/30-seconds-of-code
aa8b71b0cc7b6826608a390a4c61c196b31125c7
[ "CC0-1.0" ]
4
2020-07-25T04:32:02.000Z
2021-08-19T07:40:02.000Z
glossary/Viewport.md
15601342019/30-seconds-of-code
aa8b71b0cc7b6826608a390a4c61c196b31125c7
[ "CC0-1.0" ]
2
2021-09-21T18:05:35.000Z
2022-02-27T17:21:21.000Z
glossary/Viewport.md
15601342019/30-seconds-of-code
aa8b71b0cc7b6826608a390a4c61c196b31125c7
[ "CC0-1.0" ]
1
2021-11-17T18:32:11.000Z
2021-11-17T18:32:11.000Z
---
title: Viewport
tags: Viewport
---

A viewport is a polygonal (usually rectangular) area in computer graphics that is currently being viewed. In web development and design, it refers to the visible part of the document that is being viewed by the user in the browser window.
34.875
132
0.781362
eng_Latn
0.999979
03f2046fdf7f3fc3e210d0867057d5b99934a428
1,641
md
Markdown
docs/framework/unmanaged-api/debugging/icordebugframe-getstackrange-method.md
napoleonjones/docs
b15e645d7abbb7c2e8c4cef45350efd52cd9824e
[ "CC-BY-4.0", "MIT" ]
32
2017-11-09T20:29:45.000Z
2021-11-22T15:54:00.000Z
docs/framework/unmanaged-api/debugging/icordebugframe-getstackrange-method.md
napoleonjones/docs
b15e645d7abbb7c2e8c4cef45350efd52cd9824e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugframe-getstackrange-method.md
napoleonjones/docs
b15e645d7abbb7c2e8c4cef45350efd52cd9824e
[ "CC-BY-4.0", "MIT" ]
22
2017-11-27T00:38:36.000Z
2021-03-12T06:51:43.000Z
--- title: "ICorDebugFrame::GetStackRange Method" ms.date: "03/30/2017" api_name: - "ICorDebugFrame.GetStackRange" api_location: - "mscordbi.dll" api_type: - "COM" f1_keywords: - "ICorDebugFrame::GetStackRange" helpviewer_keywords: - "GetStackRange method, ICorDebugFrame interface [.NET Framework debugging]" - "ICorDebugFrame::GetStackRange method [.NET Framework debugging]" ms.assetid: fab037cb-fda6-40fb-9367-921e435dd5a0 topic_type: - "apiref" --- # ICorDebugFrame::GetStackRange Method Gets the absolute address range of this stack frame. ## Syntax ```cpp HRESULT GetStackRange ( [out] CORDB_ADDRESS *pStart, [out] CORDB_ADDRESS *pEnd ); ``` ## Parameters `pStart` [out] A pointer to a `CORDB_ADDRESS` that specifies the starting address of the stack frame represented by this `ICorDebugFrame` object. `pEnd` [out] A pointer to a `CORDB_ADDRESS` that specifies the ending address of the stack frame represented by this `ICorDebugFrame` object. ## Remarks The address range of the stack is useful for piecing together interleaved stack traces gathered from multiple debugging engines. The numeric range provides no information about the contents of the stack frame. It is meaningful only for comparison of stack frame locations. ## Requirements **Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md). **Header:** CorDebug.idl, CorDebug.h **Library:** CorGuids.lib **.NET Framework Versions:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
33.489796
275
0.716636
eng_Latn
0.610102
03f371fa2ec7688a7a6a88383470d8b3349dc2a3
382
md
Markdown
problemset/sort-colors/README.md
OUDUIDUI/leet-code
50e61ce16d1c419ccefc075ae9ead721cdd1cdbb
[ "MIT" ]
6
2022-01-17T03:19:56.000Z
2022-01-17T05:45:39.000Z
problemset/sort-colors/README.md
OUDUIDUI/algorithm-brushing
61a1b26dd46f2d9f4f90572e66475a52a18ec4d5
[ "MIT" ]
null
null
null
problemset/sort-colors/README.md
OUDUIDUI/algorithm-brushing
61a1b26dd46f2d9f4f90572e66475a52a18ec4d5
[ "MIT" ]
null
null
null
# Sort Colors

> Difficulty: Medium
>
> https://leetcode-cn.com/problems/sort-colors/

## Problem

Given an array with `n` elements colored red, white, or blue, sort them in place so that elements of the same color are adjacent, with the colors in the order red, white, and blue.

Here, we use the integers 0, 1, and 2 to represent red, white, and blue respectively.

### Examples

#### Example 1:

```
Input: nums = [2,0,2,1,1,0]
Output: [0,0,1,1,2,2]
```

#### Example 2:

```
Input: nums = [2,0,1]
Output: [0,1,2]
```

#### Example 3:

```
Input: nums = [0]
Output: [0]
```

#### Example 4:

```
Input: nums = [1]
Output: [1]
```
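A minimal one-pass solution sketch (in Python; not part of the original repository) using the classic Dutch national flag partitioning, which sorts in place with O(1) extra space:

```python
def sort_colors(nums):
    # low marks the end of the 0s region, high the start of the 2s region;
    # mid scans the unclassified middle.
    low, mid, high = 0, 0, len(nums) - 1
    while mid <= high:
        if nums[mid] == 0:
            nums[low], nums[mid] = nums[mid], nums[low]
            low += 1
            mid += 1
        elif nums[mid] == 1:
            mid += 1
        else:  # nums[mid] == 2
            nums[mid], nums[high] = nums[high], nums[mid]
            high -= 1

nums = [2, 0, 2, 1, 1, 0]
sort_colors(nums)
print(nums)  # [0, 0, 1, 1, 2, 2]
```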
8.883721
47
0.505236
yue_Hant
0.276381
03f4290b5cc3d2433d9c1f34a6bf6fe897eea24b
1,441
md
Markdown
docs/msbuild/errors/msb4236.md
jim-liu/visualstudio-docs.zh-tw
52dbaa9ad359aeda9c6f3767ab3ccda6d91a50dd
[ "CC-BY-4.0", "MIT" ]
16
2017-09-04T14:28:59.000Z
2021-12-10T15:18:17.000Z
docs/msbuild/errors/msb4236.md
jim-liu/visualstudio-docs.zh-tw
52dbaa9ad359aeda9c6f3767ab3ccda6d91a50dd
[ "CC-BY-4.0", "MIT" ]
49
2017-08-04T09:21:57.000Z
2022-03-10T09:08:31.000Z
docs/msbuild/errors/msb4236.md
jim-liu/visualstudio-docs.zh-tw
52dbaa9ad359aeda9c6f3767ab3ccda6d91a50dd
[ "CC-BY-4.0", "MIT" ]
29
2017-08-03T13:28:03.000Z
2022-03-24T15:35:09.000Z
---
title: "MSB4236: The SDK 'name' specified could not be found"
description: This error occurs when an MSBuild SDK fails to load.
ms.date: 06/18/2021
ms.topic: error-reference
f1_keywords:
- MSB4236
- MSBuild.CouldNotResolveSdk
dev_langs:
- VB
- CSharp
- C++
- FSharp
author: ghogen
ms.author: ghogen
manager: jmartens
ms.technology: msbuild
ms.workload:
- multiple
ms.openlocfilehash: 47d8d6ecdce59cdc4b1ca870b0de1680a1f09243
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 08/13/2021
ms.locfileid: "122116093"
---
# <a name="msb4236-the-sdk-name-specified-could-not-be-found"></a>MSB4236: The SDK 'name' specified could not be found

This error occurs when an MSBuild project SDK fails to load. MSBuild project SDKs are packaged sets of import files that set up common build settings for a particular kind of build. For example, the .NET SDK is used in .NET builds. See [How to use MSBuild project SDKs](../how-to-use-project-sdk.md).

To diagnose the error, first look at the top-level [Project element (MSBuild)](../project-element-msbuild.md) in the project file to see which SDK is being used. Examples include Microsoft.NET.Sdk (the .NET SDK) and Microsoft.NET.Sdk.Web (the ASP.NET SDK). Project SDKs for MSBuild can be delivered as NuGet packages.

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
```

The .NET SDK can fail to load if a [global.json](/dotnet/core/tools/global-json) points to an SDK that isn't installed. Look for the version specified in the `version` property of the `sdk` object in *global.json*:

```json
{
  "sdk": {
    "version": "2.2.200"
  }
}
```

The NuGet SDK resolver can fail if there is a network error or an incorrect NuGet feed. Check the top-level element in the project file to see whether an SDK version is specified, and make sure that version is installed. You can specify the version in the project file by using the following syntax:

```xml
<Project Sdk="My.Custom.Sdk/1.0.0" />
```

You can also specify the project SDK version for MSBuild in [global.json](/dotnet/core/tools/global-json#msbuild-sdks).
26.2
168
0.73907
yue_Hant
0.889327
03f435783b5e7ef633b3e8fed53ca30c16694079
13,677
md
Markdown
treebanks/fr_gsd/fr-pos-SYM.md
mjabrams/docs
eef96df1ce8f6752e9f80660c8255482b2a07c45
[ "Apache-2.0" ]
204
2015-01-20T16:36:39.000Z
2022-03-28T00:49:51.000Z
treebanks/fr_gsd/fr-pos-SYM.md
mjabrams/docs
eef96df1ce8f6752e9f80660c8255482b2a07c45
[ "Apache-2.0" ]
654
2015-01-02T17:06:29.000Z
2022-03-31T18:23:34.000Z
treebanks/fr_gsd/fr-pos-SYM.md
mjabrams/docs
eef96df1ce8f6752e9f80660c8255482b2a07c45
[ "Apache-2.0" ]
200
2015-01-16T22:07:02.000Z
2022-03-25T11:35:28.000Z
--- layout: base title: 'Statistics of SYM in UD_French' udver: '2' --- ## Treebank Statistics: UD_French: POS Tags: `SYM` There are 72 `SYM` lemmas (0%), 71 `SYM` types (0%) and 559 `SYM` tokens (0%). Out of 17 observed tags, the rank of `SYM` is: 10 in number of lemmas, 12 in number of types and 15 in number of tokens. The 10 most frequent `SYM` lemmas: <em>%, €, °, &, +, $, =, H, m, "</em> The 10 most frequent `SYM` types: <em>%, €, °, &, +, $, =, n°, H, m</em> The 10 most frequent ambiguous lemmas: <em>+</em> (<tt><a href="fr-pos-SYM.html">SYM</a></tt> 18, <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 3), <em>m</em> (<tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 95, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 1), <em>"</em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 954, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 4), <em>*</em> (<tt><a href="fr-pos-SYM.html">SYM</a></tt> 3, <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 2), <em>A</em> (<tt><a href="fr-pos-PROPN.html">PROPN</a></tt> 11, <tt><a href="fr-pos-X.html">X</a></tt> 7, <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 4, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 3), <em>x</em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 3, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 2, <tt><a href="fr-pos-X.html">X</a></tt> 2), <em>'</em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 31, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 2), <em>C</em> (<tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 14, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 2, <tt><a href="fr-pos-PROPN.html">PROPN</a></tt> 1), <em>/</em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 126, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 1), <em>></em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 1, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 1) The 10 most frequent ambiguous types: <em>+</em> (<tt><a href="fr-pos-SYM.html">SYM</a></tt> 18, <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 3), <em>n°</em> (<tt><a href="fr-pos-SYM.html">SYM</a></tt> 13, <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 1), <em>H</em> (<tt><a href="fr-pos-SYM.html">SYM</a></tt> 6, <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 2), <em>m</em> (<tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 76, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 1), <em>"</em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 954, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 4), <em>C</em> (<tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 14, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 4, <tt><a href="fr-pos-PRON.html">PRON</a></tt> 1, <tt><a href="fr-pos-PROPN.html">PROPN</a></tt> 1), <em>*</em> (<tt><a href="fr-pos-SYM.html">SYM</a></tt> 3, <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 2), <em>A</em> (<tt><a href="fr-pos-ADP.html">ADP</a></tt> 87, <tt><a href="fr-pos-PROPN.html">PROPN</a></tt> 11, <tt><a href="fr-pos-X.html">X</a></tt> 7, <tt><a href="fr-pos-DET.html">DET</a></tt> 5, <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 4, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 3, <tt><a href="fr-pos-AUX.html">AUX</a></tt> 1), <em>x</em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 3, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 2, <tt><a href="fr-pos-X.html">X</a></tt> 2), <em>'</em> (<tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 31, <tt><a href="fr-pos-SYM.html">SYM</a></tt> 2) * <em>+</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 18: <em>J' ai donc choisi de ne faire qu' une coupe <b>+</b> balayage .</em> * <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 3: <em>D' un nom de personne Ballo <b>+</b> Heim .</em> * <em>n°</em> * <tt><a 
href="fr-pos-SYM.html">SYM</a></tt> 13: <em>Elle est publiée pour la première fois comme mini-récit de le <b>n°</b> 1682 de le journal Spirou .</em> * <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 1: <em>Cependant , le site internet de LégiFrance précise qu' il est nécessaire d' avoir déclaré son option pour le versement fiscal libératoire ( lors de la déclaration d' activité d' auto-entrepreneur ou à le moyen de le formulaire <b>n°</b> 13843 * 01 ) et effectué une déclaration provisoire mentionnant la demande d' exonération temporaire avant le 31 décembre de l' année de début d' activité ( ou dans les 3 mois suivant la création si celle-ci intervient à partir d' octobre ) .</em> * <em>H</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 6: <em>Ceci alors que Hall rivalisait avec Triple <b>H</b> , et que Michaels était opposé à Nash plus tôt dans la soirée .</em> * <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 2: <em>Il est 7 <b>H</b> 17 de le matin .</em> * <em>m</em> * <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 76: <em>Il peut mesurer jusqu' à 2 <b>m</b> de diamètre voir plus .</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 1: <em>Le théorème de Pólya peut être utilisé pour calculer le nombre de graphes sur ces <b>m</b> sommets à isomorphisme près , c'est-à-dire le nombre d' orbites de les coloriages de X sous l' action de le groupe symétrique G = S , qui permute les sommets donc les paires de sommets .</em> * <em>"</em> * <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 954: <em>Chauchard , tu es une bête ! <b>"</b></em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 4: <em>Coral Gables est située à 25 ° 43'42 <b>"</b> Nord et 80 ° 16'16 <b>"</b> Ouest à le sud-ouest de Miami .</em> * <em>C</em> * <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 14: <em>La température moyenne annuelle à São João do Itaperiú est de 20,3 ° <b>C</b> .</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 4: <em>Le Colli Orientali del Friuli Tazzelenghe superiore se déguste à une température comprise entre 14 et 16 ° <b>C</b> .</em> * <tt><a href="fr-pos-PRON.html">PRON</a></tt> 1: <em><b>C</b> est vraiment dommage , car la ville est très agréable , mais notre séjour à l hôtel a précipité notre départ .</em> * <tt><a href="fr-pos-PROPN.html">PROPN</a></tt> 1: <em>Toutes en collaboration avec Deep <b>C</b> et Chris Udoh</em> * <em>*</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 3: <em>Les champs marqués par une étoile <b>*</b> sont obligatoires .</em> * <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 2: <em><b>*</b> Egypte : La hausse de les salaires de les fonctionnaires pourrait être perçue comme une bonne nouvelle si elle avait été étendue à tous les salariés et chômeurs d' Egypte .</em> * <em>A</em> * <tt><a href="fr-pos-ADP.html">ADP</a></tt> 87: <em><b>A</b> aucun moment , de le jour ou de la nuit s' il le faut .</em> * <tt><a href="fr-pos-PROPN.html">PROPN</a></tt> 11: <em>En 1931 , l' <b>A</b> établit son siège à côté de la maison de les étudiants .</em> * <tt><a href="fr-pos-X.html">X</a></tt> 7: <em>Après un virage serré , elle passe au-dessus de l' <b>A</b> 106 .</em> * <tt><a href="fr-pos-DET.html">DET</a></tt> 5: <em>Dans le film allemand de 1943 , Titanic son rôle est joué par Erich Stelmecke , dans le film britannique de le 1958 <b>A</b> Night to Remember , il est incarné par Kenneth More .</em> * <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> 4: <em>Les Maroons de Chatam est une ancienne franchise de hockey sur glace ayant évolué dans la Ligue internationale de hockey et dans l' association de hockey 
senior <b>A</b> de l' Ontario .</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 3: <em>La loi externe de le <b>A</b> - module M est une opération de <b>A</b> qui ( entre autres propriétés ) fait de M un groupe à opérateurs dans <b>A</b> .</em> * <tt><a href="fr-pos-AUX.html">AUX</a></tt> 1: <em><b>A</b> toujours appartenu à le diocèse de Cuautitlán , État de Mexico et est un sanctuaire où ils adorent l' image de le Seigneur de la Chapelle .</em> * <em>x</em> * <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 3: <em>Le satellite , d' une masse de 94 kg a été placé sur une orbite héliosynchrone de 681 <b>x</b> 561 km , et est contrôlé depuis la station de Redu , en Belgique .</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 2: <em>Flamme » , variante , aquarelle et crayon sur papier Canson , avec collage de les deux flammes , 27,9 <b>x</b> 16,7 cm .</em> * <tt><a href="fr-pos-X.html">X</a></tt> 2: <em>En d' autres termes , les <b>x</b> k valeurs sont calculées en utilisant les <b>x</b> { k-1 } calculées précédemment .</em> * <em>'</em> * <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> 31: <em>Son nom complet est Banksia integrifolia <b>'</b> Roller Coaster .</em> * <tt><a href="fr-pos-SYM.html">SYM</a></tt> 2: <em>Elle se situe par 20 º 59 <b>'</b> 32 " de latitude sud et par 48 º 55 <b>'</b> 07 " de longitude ouest , à une altitude de 555 mètres .</em> ## Morphology The form / lemma ratio of `SYM` is 0.986111 (the average of all parts of speech is 1.306238). The 1st highest number of forms (2) was observed with the lemma “°”: <em>n°, °</em>. The 2nd highest number of forms (1) was observed with the lemma “"”: <em>"</em>. The 3rd highest number of forms (1) was observed with the lemma “#”: <em>#</em>. `SYM` does not occur with any features. ## Relations `SYM` nodes are attached to their parents using 19 different relations: <tt><a href="fr-dep-nmod.html">nmod</a></tt> (122; 22% instances), <tt><a href="fr-dep-obl.html">obl</a></tt> (105; 19% instances), <tt><a href="fr-dep-appos.html">appos</a></tt> (69; 12% instances), <tt><a href="fr-dep-obj.html">obj</a></tt> (56; 10% instances), <tt><a href="fr-dep-conj.html">conj</a></tt> (52; 9% instances), <tt><a href="fr-dep-nsubj.html">nsubj</a></tt> (38; 7% instances), <tt><a href="fr-dep-cc.html">cc</a></tt> (29; 5% instances), <tt><a href="fr-dep-compound.html">compound</a></tt> (21; 4% instances), <tt><a href="fr-dep-dep.html">dep</a></tt> (15; 3% instances), <tt><a href="fr-dep-flat-name.html">flat:name</a></tt> (14; 3% instances), <tt><a href="fr-dep-punct.html">punct</a></tt> (12; 2% instances), <tt><a href="fr-dep-root.html">root</a></tt> (10; 2% instances), <tt><a href="fr-dep-case.html">case</a></tt> (4; 1% instances), <tt><a href="fr-dep-xcomp.html">xcomp</a></tt> (4; 1% instances), <tt><a href="fr-dep-nsubj-pass.html">nsubj:pass</a></tt> (3; 1% instances), <tt><a href="fr-dep-orphan.html">orphan</a></tt> (2; 0% instances), <tt><a href="fr-dep-ccomp.html">ccomp</a></tt> (1; 0% instances), <tt><a href="fr-dep-discourse.html">discourse</a></tt> (1; 0% instances), <tt><a href="fr-dep-nummod.html">nummod</a></tt> (1; 0% instances) Parents of `SYM` nodes belong to 13 different parts of speech: <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> (193; 35% instances), <tt><a href="fr-pos-VERB.html">VERB</a></tt> (188; 34% instances), <tt><a href="fr-pos-SYM.html">SYM</a></tt> (66; 12% instances), <tt><a href="fr-pos-PROPN.html">PROPN</a></tt> (49; 9% instances), <tt><a href="fr-pos-ADJ.html">ADJ</a></tt> (27; 5% instances), <tt><a 
href="fr-pos-NUM.html">NUM</a></tt> (11; 2% instances), (10; 2% instances), <tt><a href="fr-pos-X.html">X</a></tt> (9; 2% instances), <tt><a href="fr-pos-ADV.html">ADV</a></tt> (2; 0% instances), <tt><a href="fr-pos-ADP.html">ADP</a></tt> (1; 0% instances), <tt><a href="fr-pos-DET.html">DET</a></tt> (1; 0% instances), <tt><a href="fr-pos-INTJ.html">INTJ</a></tt> (1; 0% instances), <tt><a href="fr-pos-PRON.html">PRON</a></tt> (1; 0% instances) 107 (19%) `SYM` nodes are leaves. 75 (13%) `SYM` nodes have one child. 184 (33%) `SYM` nodes have two children. 193 (35%) `SYM` nodes have three or more children. The highest child degree of a `SYM` node is 14. Children of `SYM` nodes are attached using 18 different relations: <tt><a href="fr-dep-nummod.html">nummod</a></tt> (412; 35% instances), <tt><a href="fr-dep-case.html">case</a></tt> (217; 18% instances), <tt><a href="fr-dep-nmod.html">nmod</a></tt> (199; 17% instances), <tt><a href="fr-dep-punct.html">punct</a></tt> (164; 14% instances), <tt><a href="fr-dep-conj.html">conj</a></tt> (42; 4% instances), <tt><a href="fr-dep-compound.html">compound</a></tt> (36; 3% instances), <tt><a href="fr-dep-cc.html">cc</a></tt> (24; 2% instances), <tt><a href="fr-dep-det.html">det</a></tt> (21; 2% instances), <tt><a href="fr-dep-advmod.html">advmod</a></tt> (12; 1% instances), <tt><a href="fr-dep-cop.html">cop</a></tt> (12; 1% instances), <tt><a href="fr-dep-nsubj.html">nsubj</a></tt> (12; 1% instances), <tt><a href="fr-dep-appos.html">appos</a></tt> (11; 1% instances), <tt><a href="fr-dep-amod.html">amod</a></tt> (6; 1% instances), <tt><a href="fr-dep-acl.html">acl</a></tt> (2; 0% instances), <tt><a href="fr-dep-advcl.html">advcl</a></tt> (2; 0% instances), <tt><a href="fr-dep-flat-name.html">flat:name</a></tt> (2; 0% instances), <tt><a href="fr-dep-acl-relcl.html">acl:relcl</a></tt> (1; 0% instances), <tt><a href="fr-dep-mark.html">mark</a></tt> (1; 0% instances) Children of `SYM` nodes belong to 15 different parts of speech: <tt><a href="fr-pos-NUM.html">NUM</a></tt> (429; 36% instances), <tt><a href="fr-pos-ADP.html">ADP</a></tt> (213; 18% instances), <tt><a href="fr-pos-NOUN.html">NOUN</a></tt> (198; 17% instances), <tt><a href="fr-pos-PUNCT.html">PUNCT</a></tt> (163; 14% instances), <tt><a href="fr-pos-SYM.html">SYM</a></tt> (66; 6% instances), <tt><a href="fr-pos-DET.html">DET</a></tt> (21; 2% instances), <tt><a href="fr-pos-ADV.html">ADV</a></tt> (14; 1% instances), <tt><a href="fr-pos-CCONJ.html">CCONJ</a></tt> (14; 1% instances), <tt><a href="fr-pos-PROPN.html">PROPN</a></tt> (13; 1% instances), <tt><a href="fr-pos-AUX.html">AUX</a></tt> (12; 1% instances), <tt><a href="fr-pos-PRON.html">PRON</a></tt> (10; 1% instances), <tt><a href="fr-pos-SCONJ.html">SCONJ</a></tt> (9; 1% instances), <tt><a href="fr-pos-ADJ.html">ADJ</a></tt> (8; 1% instances), <tt><a href="fr-pos-VERB.html">VERB</a></tt> (5; 0% instances), <tt><a href="fr-pos-X.html">X</a></tt> (1; 0% instances)
147.064516
1,497
0.620385
yue_Hant
0.473804
03f46c4c0ae8d1c7a1a3513420515565f93ed9a6
1,700
md
Markdown
CHANGELOG.md
yombunker/SpanK
2f695afd8dad529a13e46f92622eabf7f8d88f0f
[ "Apache-2.0" ]
4
2019-02-12T17:36:00.000Z
2019-08-23T18:27:29.000Z
CHANGELOG.md
yombunker/SpanK
2f695afd8dad529a13e46f92622eabf7f8d88f0f
[ "Apache-2.0" ]
null
null
null
CHANGELOG.md
yombunker/SpanK
2f695afd8dad529a13e46f92622eabf7f8d88f0f
[ "Apache-2.0" ]
null
null
null
Change Log
==========

Version 1.1.0 *(2019-02-11)*
-----------------------------

 * Added support for Tokenized strings
    - use "styleStringWithTokens" and pass the opening/closing tokens
    - closing token is optional if same as opening token
    - token(s) format MUST include %s somewhere in the format, for example:
        - {\\%s} and {%s/} -> {\\title}tokens can be different{title/}
        - {\[%s\]} -> {\[same\]}or they can be the same{\[same\]}
        - @@%s and %s@@ -> @@TALL_TOKEN even uppercase TALL_TOKEN@@
        - abc%s and %scba -> abcbody valid, but, why would you? bodycba
 * Added locator:
    - token (only available if you used "WithTokens" method)
        - pass the unique token identifier, style will be applied to everything inside that token

Version 1.0.0 *(2019-02-04)*
-----------------------------

 * Added locators:
    - all
    - range
    - firstAppearanceOf
    - firstAppearanceOf
    - lastAppearanceOf
    - lastAppearanceOf
    - allAppearanceOf
    - allAppearanceOf
    - nThAppearanceOf
    - nThAppearanceOf
    - firstParagraph
    - lastParagraph
    - nThParagraph
    - nThParagraph
    - paragraphRange
 * Added character styles:
    - bold
    - italic
    - underline
    - strikethrough
    - subscript
    - superscript
    - foregroundColor
    - backgroundColor
    - link
    - font
    - appearance
    - locale
    - scaleX
    - relativeSize
    - absoluteSize
    - absoluteSizeDP
    - clickable
    - customAnnotation
    - image
    - imageResource
 * Added paragraph styles:
    - customQuote
    - quote
    - alignTo
    - customAlignTo
    - bullet
    - drawableWithMargin
    - bitmapWithMargin
    - leadingMarginModifier
    - tabStop
25
94
0.615882
eng_Latn
0.836652
03f4929cc0988e3b01224edd78054d26d7c8d2d5
16,024
md
Markdown
_posts/hacknews/2019-08-18-hacknews.md
JJJJJJJerk/libragen.cn
9cb47c82e626d13d68e559cd5c27adfd6a5ba552
[ "MIT" ]
null
null
null
_posts/hacknews/2019-08-18-hacknews.md
JJJJJJJerk/libragen.cn
9cb47c82e626d13d68e559cd5c27adfd6a5ba552
[ "MIT" ]
1
2020-07-18T11:03:19.000Z
2020-07-19T03:09:19.000Z
_posts/hacknews/2019-08-18-hacknews.md
JJJJJJJerk/libragen.cn
9cb47c82e626d13d68e559cd5c27adfd6a5ba552
[ "MIT" ]
13
2020-08-06T05:13:24.000Z
2022-03-24T07:31:57.000Z
--- layout: post title: Hacknews 2019-08-18 News category: Hacknews tags: hacknews keywords: hacknews coverage: hacknews-banner.jpg --- Hacker News is a social news website about computer hacking and startup companies, created by Paul Graham's startup incubator Y Combinator. Unlike other social news sites, Hacker News offers no option to downvote or object to a submitted story (although comments can still be downvoted by users with enough karma); you can only upvote or not vote at all. In short, Hacker News allows the submission of any news that can be understood as "anything that gratifies one's intellectual curiosity." ## HackNews - [Looking for a C++ Soft Eng at Iris Automation – AI Software for Drones](http://www.irisonboard.com/careers/) - `寻找虹膜-无人机计算机视觉的客户经理` - [Video Game Preservation – An archive of commercial video game source code](https://github.com/videogamepreservation) - `视频游戏保存-一个存档的商业视频游戏源代码` - [Can we survive technology? (1955) pdf](http://geosci.uchicago.edu/~kite/doc/von_Neumann_1955.pdf) - `我们能在科技中生存吗?(1955)pdf` - [Parsing JSON Is a Minefield (2018)](http://seriot.ch/parsing_json.php) - `解析JSON是一个雷区(2018)` - [WeWTF](https://www.profgalloway.com/wewtf) - `WeWTF` - [Wells Fargo Closed Their Accounts, but the Fees Continued to Mount](https://www.nytimes.com/2019/08/16/business/wells-fargo-overdraft-fees.html) - `富国银行关闭了它们的账户,但手续费继续上涨` - [Software architects should be involved in earliest system engineering activities](https://insights.sei.cmu.edu/sei_blog/2019/08/why-software-architects-must-be-involved-in-the-earliest-systems-engineering-activities.html) - `为什么软件架构师必须参与` - [U.S. Weighs Selling 50- and 100-Year Bonds After Yields Plummet](https://www.bloomberg.com/news/articles/2019-08-16/u-s-treasury-to-do-market-outreach-again-on-ultra-long-bonds-jzejo2qu) - `美国公债收益率(殖利率)挫跌后,考虑出售50年期和100年期公债` - [You May Be Better Off Picking Stocks at Random, Study Finds](https://www.studyfinds.org/new-to-investing-you-may-be-better-off-picking-stocks-at-random-study-finds/) - `研究发现,随机选择股票可能会让你的日子过得更好` - [Dissidents Are Using Shortwave Radio to Broadcast News into China](https://www.defenseone.com/technology/2019/08/how-dissidents-are-using-shortwave-radio-broadcast-news-china/158950/) - `持不同政见者正在使用短波向中国广播新闻` - [The Dawn of the Age of Geoengineering](https://elidourado.com/blog/dawn-of-geoengineering/) - `地球工程时代的曙光` - [Virgin Media (UK) stores passwords in plain text, sends them through the mail](https://twitter.com/virginmedia/status/1162756227132198914) - `维珍媒体(英国)以纯文本形式存储密码,并通过邮件发送` - [The first solar road has turned out to be a disappointing failure](https://www.sciencealert.com/the-world-s-first-solar-road-has-turned-out-to-be-a-disappointing-failure) - `世界上第一条太阳能公路正式宣告彻底失败` - [With friends like these](http://blogs.perl.org/users/damian_conway/2019/08/with-friends-like-these.html) - `有这样的朋友…` - [Announcing New AMD EPYC-based Azure Virtual Machines](https://azure.microsoft.com/en-us/blog/announcing-new-amd-epyc-based-azure-virtual-machines/) - `宣布新的基于AMD epyc的Azure虚拟机` - [São Paulo’s Outdoor Advertising Ban (2016)](https://99percentinvisible.org/article/clean-city-law-secrets-sao-paulo-uncovered-outdoor-advertising-ban/) - `圣保罗被户外广告禁令揭露——99%隐形` - [Deep Operation](https://en.wikipedia.org/wiki/Deep_operation) - `深操作` - [The Power of Speaking Polari](https://www.laphamsquarterly.org/roundtable/power-speaking-polari) - `讲波兰语的力量` - [GNU Parallel 2018](https://zenodo.org/record/1146014) - `GNU平行2018` - [Flying with Miniature Horses](https://www.nytimes.com/2019/08/17/travel/mini-horse-service-plane.html) - `完全合理的理由是人们骑着迷你马飞行` - [Microsoft Has First Major Impact on Chrome](https://www.thurrott.com/cloud/web-browsers/google-chrome/212361/microsoft-has-first-major-impact-on-chrome) - `微软首先对Chrome产生了重大影响` - [Project BillVision 
(1995)](https://web.archive.org/web/19961115133722/http://picard.dartmouth.edu/~oly/BillVision.html) - `项目BillVision (1995)` - [Plastic recycling is a myth: What happens to your rubbish?](https://www.theguardian.com/environment/2019/aug/17/plastic-recycling-myth-what-really-happens-your-rubbish) - `塑料回收是一个神话:你的垃圾怎么办?` - [Animal movies promote awareness, not harm, say researchers](http://www.ox.ac.uk/news/2019-08-14-%E2%80%9C-nemo-effect%E2%80%9D-untrue-animal-movies-promote-awareness-not-harm-say-researchers) - `“尼莫效应”是不真实的:动物电影促进意识,而不是伤害` - [Start Your Own ISP](https://startyourownisp.com/) - `创建自己的ISP` - [The Serious Money Is Warming to Bitcoin](https://www.wired.com/story/serious-money-warming-bitcoin/) - `严肃的货币正逐渐转向比特币` - [Athens on the Colorado](https://local-memory.org/athens-on-the-colorado) - `科罗拉多河畔的雅典` - [Google wants to reduce lifespan for HTTPS certificates to one year](https://www.zdnet.com/article/google-wants-to-reduce-lifespan-for-https-certificates-to-one-year/) - `谷歌希望将HTTPS证书的使用寿命缩短到一年` - [Weaving two arrays, keeping relative order](http://mohit.athwani.net/backtracking/weaving-two-arrays/) - `详细说明了如何编织两个数组保持相对有序` - [I have released 100+ stock trading strategies using machine learning (GoogColab)](https://github.com/firmai/machine-learning-asset-management) - `我发布了100多种使用机器学习(GoogColab)的股票交易策略` - [Seymour Cray: An Appreciation (1997)](http://www.cs.man.ac.uk/~toby/writing/PCW/cray.htm) - `西摩·克雷:感谢` - [Tech Interview Handbook](https://yangshun.github.io/tech-interview-handbook/) - `技术面试手册` - [The Software Arts](https://news.ucsc.edu/2019/08/sack-software-arts.html) - `软件艺术` - [Ask HN: Hackable external wireless SSD storage?](item?id=20723995) - `问HN:可破解的外部无线SSD存储?` - [Psychology vs. the Graphics Pipeline (2017)](http://scattered-thoughts.net/blog/2017/12/11/psychology-vs-the-graphics-pipeline/) - `心理学vs图形管道(2017)` - [Kaspersky AV injected unique ID allowing sites to track users in incognito mode](https://heise.de/-4496138) - `卡巴斯基AV注入了独特的ID,允许网站在隐身模式下跟踪用户` - [The League of Entropy Is Making Randomness Truly Random](https://onezero.medium.com/the-league-of-entropy-is-making-randomness-truly-random-522f22ce93ce) - `熵的联盟使随机性真正成为随机性` - [Two quakes in two days, no warning from ShakeAlertLA](https://www.latimes.com/california/story/2019-08-14/earthquake-early-warning-app-shakealertla-released) - `两天内发生两次地震,没有来自ShakeAlertLA的警告` - [Why doesn't Britain make things any more? (2011)](https://www.theguardian.com/business/2011/nov/16/why-britain-doesnt-make-things-manufacturing) - `为什么英国不再生产东西了?(2011)` - [HK social media users writing phonetically spelled posts to shut out trolls](https://news.rthk.hk/rthk/en/component/k2/1475264-20190818.htm) - `香港社交媒体用户通过拼写语音来屏蔽网络喷子` - [Python vs. 
Rust for Neural Networks](https://ngoldbaum.github.io/posts/python-vs-rust-nn/) - `神经网络的Python与Rust` - [Ruffle – An Adobe Flash Player Written in Rust Compiled to WebAssembly](https://github.com/ruffle-rs/ruffle) - `Ruffle -一个Adobe Flash播放器写在锈编译到WebAssembly` - [Apple will soon treat online web tracking the same as a security vulnerability](https://thenextweb.com/privacy/2019/08/16/apple-will-soon-treat-online-web-tracking-the-same-as-a-security-vulnerability/) - `苹果很快将把在线网络跟踪视为安全漏洞` - [General Balanced Trees (1999) pdf](http://user.it.uu.se/~arnea/ps/gb.pdf) - `一般均衡树(1999)pdf` - [How Life Sciences Actually Work](https://guzey.com/how-life-sciences-actually-work/) - `生命科学如何运作` - [Cephaloponderings](https://putanumonit.com/2019/07/26/cephaloponderings/) - `Cephaloponderings` - [Starting an ISP is hard, don’t do it](https://www.slashgeek.net/2016/05/31/starting-isp-really-hard-dont/) - `启动ISP很困难,不要这样做` - [Recording Browser Interactions and Generating Test Scripts](https://github.com/prprprus/softest) - `记录浏览器交互并生成测试脚本` - [In the US, it's cheaper to build and operate wind farms than buy fossil fuels](https://arstechnica.com/science/2019/08/wind-power-prices-now-lower-than-the-cost-of-natural-gas/) - `在美国,建造和运营风力发电厂比购买化石燃料更便宜` - [Model-Based Reinforcement Learning for Atari](https://arxiv.org/abs/1903.00374) - `基于模型的雅达利强化学习` - [David Foster Wallace’s Pen Pal](https://www.theparisreview.org/blog/2019/08/09/david-foster-wallaces-pen-pal/) - `大卫·福斯特·华莱士的笔友` - [J can look like APL or English](https://wjmn.github.io/posts/j-can-look-like-apl.html) - `J可以像APL或English` - [The Grand C++ Error Explosion Competition](https://tgceec.tumblr.com/post/74534916370/results-of-the-grand-c-error-explosion/) - `盛大的c++错误爆炸大赛` - [The Long Hot Summer of Grammar](https://www.newyorker.com/culture/comma-queen/the-long-hot-summer-of-grammar) - `漫长炎热的语法夏天` - [Netherlands building ages](https://parallel.co.uk/netherlands/#13.8/52.365/4.9/0/40) - `荷兰建筑年龄` - [Two brothers invented an alphabet for their native language, Fulfulde](https://news.microsoft.com/stories/people/adlam.html?ocid=lock) - `兄弟俩为他们的母语Fulfulde发明了一个字母表` - [Plastic Free July](https://paulrhayes.com/plastic-free-july/) - `塑料免费7月` - [Smart plastic incineration posited as solution to global recycling crisis](https://eandt.theiet.org/content/articles/2019/08/smart-plastic-incineration-posited-as-viable-solution-to-global-recycling-crisis/) - `智能塑料焚烧被认为是解决全球回收危机的办法` - [The History of Polish Photography (2002)](https://culture.pl/en/article/the-history-of-polish-photography) - `波兰摄影史(2002)` - [Iceland commemorates first glacier lost to climate change](https://phys.org/news/2019-08-iceland-commemorates-glacier-lost-climate.html) - `冰岛纪念因气候变化而消失的第一座冰川` - [Ask HN: What book to read to get a footing in CS theory?](item?id=20729252) - `问HN:要想在CS理论中站稳脚跟,应该读什么书?` - [Moore’s Law is not Dead](https://www.tsmc.com/english/newsEvents/blog_article_20190814.htm) - `摩尔定律并没有消亡` - [Hidden messages in Amiga games (2017)](http://codetapper.com/amiga/random-rants/hidden-messages-in-amiga-games/) - `Amiga游戏中的隐藏信息(2017)` - [Chess2vec: Learning Vector Representations for Chess (2018) pdf](http://www.berkkapicioglu.com/wp-content/uploads/2018/11/chess2vec_nips_2018_short.pdf) - `Chess2vec:用于国际象棋pdf的学习向量表示` - [Large-Scale-Exploit of GitHub Repository Metadata and Preventive Measures](https://arxiv.org/abs/1908.05354) - `大规模利用GitHub库元数据和预防措施` - [35% Faster Than The Filesystem](https://www.sqlite.org/fasterthanfs.html) - `比文件系统快35%` - [Censorship in China Allows Govt Criticism but 
Silences Collective Expression pdf](https://dash.harvard.edu/bitstream/handle/1/11878767/33531/censored.pdf?sequence%3D1) - `中国的审查制度允许政府批评,但压制了集体表达` - [Umberto Eco: Texts, sign systems and the risks of over-interpretation](https://www.the-tls.co.uk/articles/public/umberto-eco-texts-sign-systems-risks-interpretation/) - `乌姆贝托·艾柯:文本、符号系统和过度解读的风险` - [How Slack Harms Projects](https://www.silasreinagel.com/blog/2019/08/12/how-slack-harms-projects/) - `懈怠如何危害项目` - [Media Can’t Stop Presenting Horrifying Stories as ‘Uplifting’ Perseverance Porn](https://fair.org/home/media-just-cant-stop-presenting-horrifying-stories-as-uplifting-perseverance-porn/) - `媒体无法停止以“振奋人心”的色情片形式呈现恐怖故事` - [California’s Biggest Cities Confront a ‘Defecation Crisis’](https://www.wsj.com/articles/californias-biggest-cities-confront-a-defecation-crisis-11565994160?mod=rsswn) - `加州大城市面临“排便危机”` - [Résumés Are Starting to Look Like Instagram, and Sometimes Even Tinder](https://www.wsj.com/articles/resumes-are-starting-to-look-like-instagramand-sometimes-even-tinder-11565707364?mod=rsswn) - `简历开始看起来像instagram,有时甚至像Tinder` - [Three Letters from Switzerland](https://www.theparisreview.org/blog/2019/08/15/three-letters-from-switzerland/) - `三封瑞士来信` - [The interviewer skills ladder for high growth companies](https://medium.com/@alexallain/what-ive-learned-interviewing-500-people-the-interviewer-skills-ladder-for-high-growth-software-37778d2aae85) - `高成长型公司的面试官技能阶梯` - [Where did all the cod go? Fishing crisis in the North Sea](https://www.theguardian.com/business/2019/aug/18/where-did-all-the-cod-go-fish-chips-north-sea-sustainable-stocks) - `鳕鱼都到哪里去了?北海渔业危机` - [Physicists Aim to Classify All Possible Phases of Matter](https://www.quantamagazine.org/physicists-aim-to-classify-all-possible-phases-of-matter-20180103/) - `物理学家的目标是对物质的所有可能的相进行分类` ## HackShows Hacks展示 - [Launch HN: Dex (YC S19) – personal CRM that reminds you to keep in touch](https://news.ycombinator.com/item?id=20699923) - `启动HN: Dex (YC S19) -提醒您保持联系的个人CRM` - [ Swap-a-Doodle, a cross-platform social drawing app](https://www.swapadoodle.com) - `Swap-a-Doodle,一个跨平台的社交绘图应用` - [ Fast, unopinionated, minimalist web framework for Arduino](https://awot.net) - `Arduino的快速、无约束、极简的web框架` - [ sqltop – Find the most resource consuming SQL Server queries](https://github.com/soheilpro/sqltop) - `查找消耗最多资源的SQL Server查询` - [ Cryptographically random strings with zero clicks](https://random.connorlanigan.com) - `密码随机字符串与零点击` - [ BrowserUp Proxy-Network Traffic Testing for Selenium WebDriver (FOSS)](https://news.ycombinator.com/item?id=20705228) - `Selenium WebDriver (FOSS)的浏览器代理网络流量测试` - [ Birdcries, a pure-privacy tweet viewer](https://birdcries.net) - `鸟叫声,一个纯粹隐私的推特查看器` - [ SmartForms – Form back end as a service](https://news.ycombinator.com/item?id=20703719) - `SmartForms—将表单后端作为服务` - [Launch HN: Vendr (YC S19) – Buying software so you don’t have to](https://news.ycombinator.com/item?id=20707194) - `推出HN: Vendr (YC S19) -购买软件,所以你不需要` - [ Homer – A Text Analyzer in Python](https://github.com/wyounas/homer) - `一个Python中的文本分析器` - [Launch HN: Remote company culture book for the Slack generation](https://www.ahoyteam.com/guide) - `推出HN:面向懒散一代的远程企业文化书籍` - [ Our Code Stories- Programming Book Publishing/Tutorial Blog Alternative](https://news.ycombinator.com/item?id=20707610) - `我们的代码故事-编程书籍出版/教程博客替代` - [ Bytime. 
How do you plan your free time in a city?](https://news.ycombinator.com/item?id=20709759) - `Bytime。你如何规划你在城市的空闲时间?` - [ Launching GoatCounter; or: let's try and make a living from Open Source](https://arp242.net/goatcounter.html) - `启动GoatCounter;或者:让我们试着从开源中谋生` - [ Get insider info about your offshore software contractor – for free](https://news.ycombinator.com/item?id=20710182) - `获取关于您的离岸软件承包商的内部信息-免费` - [ Lazy – Free Bootstrap UI Kit](https://github.com/bootstrapbay/lazy-kit) - `懒惰-免费引导用户界面工具包` - [ jtx – tiny JSON to XML converter](https://github.com/wulfmann/jtx) - `微型JSON到XML转换器` - [Launch HN: SannTek (YC S19) – Breathalyzer for Cannabis](https://news.ycombinator.com/item?id=20717240) - `推出HN: SannTek (YC S19) -大麻酒精测试仪` - [ Prophecy.io – Cloud Native Data Engineering](https://medium.com/prophecy-io/introducing-prophecy-io-cloud-native-data-engineering-1b9247596030) - `显示HN:预言。io -云原生数据工程` - [ Software jobs with a difference. Filter jobs by interview type](http://Softwarejobs.xyz) - `软件工作与此不同。根据面试类型筛选工作` - [ Software Engineering 101](https://medium.com/techlogs/sw-engineering-101-c711e948b065) - `软件工程101` - [ Deploy your web apps, APIs and databases for free](https://unubo.com/2) - `免费部署您的web应用程序、api和数据库` - [ ClojureScript pixel game engine with Blender live-reloading](https://mccormick.cx/news/entries/clojurescript-pixel-game-engine-with-blender-live-reloading) - `ClojureScript像素游戏引擎与搅拌机实时重载` - [ Pitaya Go, IoT Dev Board with Multiprotocol Wireless Connectivity](https://github.com/makerdiary/pitaya-go) - `火龙果,物联网开发板与多协议无线连接` - [ Open-source / Selfhostable standalone trashmail solution](https://github.com/HaschekSolutions/opentrashmail) - `开源/自稳定的独立垃圾邮件解决方案` - [ Yet another free HTML form to email for your static websites](https://www.staticforms.xyz/) - `另一个免费的HTML表单,为您的静态网站发送电子邮件` - [ Register expiring premium domain names for just $99](https://backordr.com/domains) - `注册过期的高级域名只需99美元` - [ distri: a Linux distribution to research fast package management](https://michael.stapelberg.ch/posts/2019-08-17-introducing-distri/) - `发行版:一个研究快速包管理的Linux发行版` - [ Saag as a Service – macronutrient-portioned Indian spinach curry](https://saag.pashi.com/#hn) - `Show HN: Saag作为一种服务——大量营养分量的印度菠菜咖喱` - [ A marketplace to hire no code experts](https://www.withoutcode.io/experts) - `一个不雇佣代码专家的市场` - [ Journyal – record all your travels in the background](http://journyal.com) - `日志-在后台记录你所有的旅行` - [ Dev.wtf – Developer's Reference](https://stereobooster.com/posts/dev.wtf/) - `开发者参考` - [ Scenery — Asynchronous communication for teams](http://scenery.app) - `场景-团队异步通信` - [ Smartip.io – Reliable and Accurate IP Geolocation and Threat API](https://smartip.io) - `Smartip。可靠和准确的IP地理定位和威胁API` - [Launch HN: Relatively No-Frills Product Hunt Launch Checklist](https://news.ycombinator.com/item?id=20728854) - `发布HN:相对简单的产品搜索发布清单`
65.942387
224
0.77022
yue_Hant
0.446449
03f5018af50be20ff25de98d97778784d03249d5
376
md
Markdown
_posts/2013-11-27-temple-of-trials-pathfinding.md
umi0451/umi0451.github.io
9f144508b276737d15a5f13b1af1ce0ea163e536
[ "WTFPL" ]
null
null
null
_posts/2013-11-27-temple-of-trials-pathfinding.md
umi0451/umi0451.github.io
9f144508b276737d15a5f13b1af1ce0ea163e536
[ "WTFPL" ]
null
null
null
_posts/2013-11-27-temple-of-trials-pathfinding.md
umi0451/umi0451.github.io
9f144508b276737d15a5f13b1af1ce0ea163e536
[ "WTFPL" ]
null
null
null
---
layout: default
date: 2013-11-27 12:19:00
parent-url: /
parent: owlwood
title: "Temple of trials: pathfinding"
---

Temple of Trials now has an examine mode (which shows only the names of items under the cursor) and targeted travelling (with a small delay for a fancier look). It's still pretty dumb, and the player doesn't stop on any event, like when a monster hits them, but it looks cool.
37.6
255
0.75266
eng_Latn
0.999487
03f5257be37153fb3accb00a86b97bb1c0c151c6
58
md
Markdown
README.md
AmeNoOokami/Ejemplo2
80ec20bd5f59ef11dffa4df2b113118e6fc95139
[ "MIT" ]
null
null
null
README.md
AmeNoOokami/Ejemplo2
80ec20bd5f59ef11dffa4df2b113118e6fc95139
[ "MIT" ]
null
null
null
README.md
AmeNoOokami/Ejemplo2
80ec20bd5f59ef11dffa4df2b113118e6fc95139
[ "MIT" ]
null
null
null
# Ejemplo2

Here you will find some programming examples.
19.333333
46
0.827586
spa_Latn
0.999963
03f5306973477f98a55f26c3fc0b1684d49615d5
14,796
md
Markdown
docs/relational-databases/system-dynamic-management-views/sys-dm-exec-plan-attributes-transact-sql.md
zelanko/sql-docs.de-de
16c23f852738744f691dbc66fb5057c4eb907a95
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/system-dynamic-management-views/sys-dm-exec-plan-attributes-transact-sql.md
zelanko/sql-docs.de-de
16c23f852738744f691dbc66fb5057c4eb907a95
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/system-dynamic-management-views/sys-dm-exec-plan-attributes-transact-sql.md
zelanko/sql-docs.de-de
16c23f852738744f691dbc66fb5057c4eb907a95
[ "CC-BY-4.0", "MIT" ]
1
2020-12-30T12:52:58.000Z
2020-12-30T12:52:58.000Z
---
description: sys.dm_exec_plan_attributes (Transact-SQL)
title: sys.dm_exec_plan_attributes (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 10/20/2017
ms.prod: sql
ms.reviewer: ''
ms.technology: system-objects
ms.topic: language-reference
f1_keywords:
- sys.dm_exec_plan_attributes_TSQL
- dm_exec_plan_attributes_TSQL
- dm_exec_plan_attributes
- sys.dm_exec_plan_attributes
dev_langs:
- TSQL
helpviewer_keywords:
- sys.dm_exec_plan_attributes dynamic management function
ms.assetid: dacf3ab3-f214-482e-aab5-0dab9f0a3648
author: markingmyname
ms.author: maghan
ms.openlocfilehash: c80e576bd6f2872a2486da5fd09292609f86ba60
ms.sourcegitcommit: 2991ad5324601c8618739915aec9b184a8a49c74
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 12/11/2020
ms.locfileid: "97331991"
---
# <a name="sysdm_exec_plan_attributes-transact-sql"></a>sys.dm_exec_plan_attributes (Transact-SQL)

[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]

Returns one row per plan attribute for the plan specified by the plan handle. You can use this table-valued function to get details about a particular plan, such as the cache key values or the number of current simultaneous executions of the plan.

> [!NOTE]
> Some of the information returned through this function maps to the [sys.syscacheobjects](../../relational-databases/system-compatibility-views/sys-syscacheobjects-transact-sql.md) backward compatibility view.

## <a name="syntax"></a>Syntax

```
sys.dm_exec_plan_attributes ( plan_handle )
```

## <a name="arguments"></a>Arguments

*plan_handle*
Uniquely identifies a query plan for a batch that has executed and whose plan resides in the plan cache. *plan_handle* is **varbinary(64)**. The plan handle can be obtained from the [sys.dm_exec_cached_plans](../../relational-databases/system-dynamic-management-views/sys-dm-exec-cached-plans-transact-sql.md) dynamic management view.

## <a name="table-returned"></a>Table Returned

|Column name|Data type|Description|
|-----------------|---------------|-----------------|
|attribute|**varchar(128)**|Name of the attribute associated with this plan. The following table lists the possible attributes, their data types, and their descriptions.|
|value|**sql_variant**|Value of the attribute associated with this plan.|
|is_cache_key|**bit**|Indicates whether the attribute is used as part of the cache lookup key for the plan.|

In the table above, **attribute** can have the following values:

|Attribute|Data type|Description|
|---------------|---------------|-----------------|
|set_options|**int**|Indicates the option values that the plan was compiled with.|
|objectid|**int**|One of the main keys used for looking up an object in the cache. This is the object ID stored in [sys.objects](../../relational-databases/system-catalog-views/sys-objects-transact-sql.md) for database objects (procedures, views, triggers, and so on). For plans of type "Adhoc" or "Prepared", it is an internal hash of the batch text.|
|dbid|**int**|ID of the database containing the entity the plan refers to.<br /><br /> For ad hoc or prepared plans, this is the database ID from which the batch is executed.|
|dbid_execute|**int**|For system objects stored in the **resource** database, the database ID from which the cached plan is executed. In all other cases, the value is 0.|
|user_id|**int**|A value of -2 indicates that the batch submitted does not depend on implicit name resolution and can be shared among different users. This is the preferred method. Any other value represents the user ID of the user submitting the query in the database.|
|language_id|**smallint**|ID of the language of the connection that created the cache object. For more information, see [sys.syslanguages &#40;Transact-SQL&#41;](../../relational-databases/system-compatibility-views/sys-syslanguages-transact-sql.md).|
|date_format|**smallint**|Date format of the connection that created the cache object. For more information, see [SET DATEFORMAT &#40;Transact-SQL&#41;](../../t-sql/statements/set-dateformat-transact-sql.md).|
|date_first|**tinyint**|First date value. For more information, see [SET DATEFIRST &#40;Transact-SQL&#41;](../../t-sql/statements/set-datefirst-transact-sql.md).|
|status|**int**|Internal status bits that are part of the cache lookup key.|
|required_cursor_options|**int**|Cursor options specified by the user, such as the cursor type.|
|acceptable_cursor_options|**int**|Cursor options that [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] may implicitly convert to in order to support execution of the statement. For example, the user may specify a dynamic cursor, but the query optimizer may convert this cursor type to a static cursor.|
|inuse_exec_context|**int**|Number of currently executing batches that are using the query plan.|
|free_exec_context|**int**|Number of cached execution contexts for the query plan that are not currently in use.|
|hits_exec_context|**int**|Number of times the execution context was obtained from the plan cache and reused, saving the overhead of recompiling the SQL statement. The value is an aggregate for all batch executions so far.|
|misses_exec_context|**int**|Number of times an execution context could not be found in the plan cache, resulting in the creation of a new execution context for the batch execution.|
|removed_exec_context|**int**|Number of execution contexts that have been removed because of memory pressure on the cached plan.|
|inuse_cursors|**int**|Number of currently executing batches that contain one or more cursors that are using the cached plan.|
|free_cursors|**int**|Number of idle or free cursors for the cached plan.|
|hits_cursors|**int**|Number of times an inactive cursor was obtained from the cached plan and reused. The value is an aggregate for all batch executions so far.|
|misses_cursors|**int**|Number of times an inactive cursor could not be found in the cache.|
|removed_cursors|**int**|Number of cursors removed because of memory pressure on the cached plan.|
|sql_handle|**varbinary**(64)|The SQL handle for the batch.|
|merge_action_type|**smallint**|The type of trigger execution plan used as the result of a MERGE statement.<br /><br /> 0 indicates a non-trigger plan, a trigger plan that does not execute as the result of a MERGE statement, or a trigger plan that executes as the result of a MERGE statement that specifies only a DELETE action.<br /><br /> 1 indicates an INSERT trigger plan that executes as the result of a MERGE statement.<br /><br /> 2 indicates an UPDATE trigger plan that executes as the result of a MERGE statement.<br /><br /> 3 indicates a DELETE trigger plan that executes as the result of a MERGE statement that contains a corresponding INSERT or UPDATE action.<br /><br /> For nested triggers executed by cascading actions, this value is the action of the MERGE statement that caused the cascade.|

## <a name="permissions"></a>Permissions

On [!INCLUDE[ssNoVersion_md](../../includes/ssnoversion-md.md)], the `VIEW SERVER STATE` permission is required. On the Basic, S0, and S1 service objectives in SQL Database, and for databases in elastic pools, the `Server admin` or an `Azure Active Directory admin` account is required. On all other SQL Database service objectives, the `VIEW DATABASE STATE` permission is required in the database.

## <a name="remarks"></a>Remarks

## <a name="set-options"></a>SET Options

Copies of the same compiled plan may differ only by the value in the **set_options** column. This indicates that different connections are using different sets of SET options for the same query. Using different sets of options is usually undesirable, because it can cause additional compilations, a lower rate of plan reuse, and inflation of the plan cache because of multiple copies of plans in the cache.

### <a name="evaluating-set-options"></a>Evaluating SET Options

To translate the value returned in **set_options** into the options with which the plan was compiled, subtract the values from the **set_options** value, starting with the largest possible value, until you reach 0. Each value that you subtract corresponds to an option that was used in the query plan. For example, if the value in **set_options** is 251, the options the plan was compiled with are ANSI_NULL_DFLT_ON (128), QUOTED_IDENTIFIER (64), ANSI_NULLS (32), ANSI_WARNINGS (16), CONCAT_NULL_YIELDS_NULL (8), parallel plan (2), and ANSI_PADDING (1).
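This subtraction procedure is equivalent to testing each bit flag directly with the bitwise `&` operator. The following is a minimal illustrative sketch, not part of the reference itself: the value 251 comes from the example above, and the `VALUES` list covers only the low-order options from the table that follows.

```sql
-- Decode a set_options value into option names by testing each bit flag.
DECLARE @set_options int = 251;  -- example value from the text above

SELECT o.option_name
FROM (VALUES
         (1,   'ANSI_PADDING'),
         (2,   'Parallel Plan'),
         (4,   'FORCEPLAN'),
         (8,   'CONCAT_NULL_YIELDS_NULL'),
         (16,  'ANSI_WARNINGS'),
         (32,  'ANSI_NULLS'),
         (64,  'QUOTED_IDENTIFIER'),
         (128, 'ANSI_NULL_DFLT_ON'),
         (256, 'ANSI_NULL_DFLT_OFF')
     ) AS o(bit_value, option_name)
WHERE @set_options & o.bit_value <> 0;
```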
|Option|Value|
|------------|-----------|
|ANSI_PADDING|1|
|Parallel plan<br /><br /> Indicates that the plan's parallelism options have been changed.|2|
|FORCEPLAN|4|
|CONCAT_NULL_YIELDS_NULL|8|
|ANSI_WARNINGS|16|
|ANSI_NULLS|32|
|QUOTED_IDENTIFIER|64|
|ANSI_NULL_DFLT_ON|128|
|ANSI_NULL_DFLT_OFF|256|
|NoBrowseTable<br /><br /> Indicates that the plan does not use a work table to implement a FOR BROWSE operation.|512|
|TriggerOneRow<br /><br /> Indicates that the plan contains single-row optimizations for AFTER trigger delta tables.|1024|
|ResyncQuery<br /><br /> Indicates that the query was submitted by internal system stored procedures.|2048|
|ARITH_ABORT|4096|
|NUMERIC_ROUNDABORT|8192|
|DATEFIRST|16384|
|DATEFORMAT|32768|
|LanguageID|65536|
|UPON<br /><br /> Indicates that the PARAMETERIZATION database option was set to FORCED when the plan was compiled.|131072|
|ROWCOUNT<br /><br /> **Applies to:** [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)] through [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)]|262144|

## <a name="cursors"></a>Cursors

Inactive cursors are cached in a compiled plan so that the memory used to store the cursor can be reused by concurrent users of the cursor. Suppose that a batch declares and uses a cursor without deallocating it. If two users are executing the same batch, there are two active cursors. Once the cursors are deallocated (possibly in different batches), the memory used to store the cursor is cached and not released. The list of inactive cursors is kept in the compiled plan. The next time a user executes the batch, the cached cursor memory is reused and properly initialized as an active cursor.

### <a name="evaluating-cursor-options"></a>Evaluating Cursor Options

To translate the values returned in **required_cursor_options** and **acceptable_cursor_options** into the options with which the plan was compiled, subtract the values from the column value, starting with the largest possible value, until you reach 0. Each value that you subtract corresponds to a cursor option that was used in the query plan.

|Option|Value|
|------------|-----------|
|None|0|
|INSENSITIVE|1|
|SCROLL|2|
|READ ONLY|4|
|FOR UPDATE|8|
|LOCAL|16|
|GLOBAL|32|
|FORWARD_ONLY|64|
|KEYSET|128|
|DYNAMIC|256|
|SCROLL_LOCKS|512|
|OPTIMISTIC|1024|
|STATIC|2048|
|FAST_FORWARD|4096|
|IN PLACE|8192|
|FOR *select_statement*|16384|

## <a name="examples"></a>Examples

### <a name="a-returning-the-attributes-for-a-specific-plan"></a>A. Returning the attributes for a specific plan

The following example returns all plan attributes for a specified plan. The `sys.dm_exec_cached_plans` dynamic management view is queried first to obtain the plan handle for the specified plan. In the second query, replace `<plan_handle>` with a plan handle value from the first query.

```sql
SELECT plan_handle, refcounts, usecounts, size_in_bytes, cacheobjtype, objtype
FROM sys.dm_exec_cached_plans;
GO
SELECT attribute, value, is_cache_key
FROM sys.dm_exec_plan_attributes(<plan_handle>);
GO
```

### <a name="b-returning-the-set-options-for-compiled-plans-and-the-sql-handle-for-cached-plans"></a>B. Returning the SET options for compiled plans and the SQL handle for cached plans

The following example returns a value representing the options with which each plan was compiled. It also returns the SQL handle for all cached plans.

```sql
SELECT plan_handle, pvt.set_options, pvt.sql_handle
FROM (
    SELECT plan_handle, epa.attribute, epa.value
    FROM sys.dm_exec_cached_plans
        OUTER APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
    WHERE cacheobjtype = 'Compiled Plan') AS ecpa
PIVOT (MAX(ecpa.value) FOR ecpa.attribute IN ("set_options", "sql_handle")) AS pvt;
GO
```

## <a name="see-also"></a>See Also

[Dynamic Management Views and Functions &#40;Transact-SQL&#41;](~/relational-databases/system-dynamic-management-views/system-dynamic-management-views.md)
[Execution Related Dynamic Management Views and Functions &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/execution-related-dynamic-management-views-and-functions-transact-sql.md)
[sys.dm_exec_cached_plans &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/sys-dm-exec-cached-plans-transact-sql.md)
[sys.databases &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/sys-databases-transact-sql.md)
[sys.objects &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/sys-objects-transact-sql.md)
82.659218
925
0.773858
deu_Latn
0.982769
03f572c9a3cf249c504bddaf07521295d5e9e161
779
md
Markdown
docs/assembler/masm/ml-nonfatal-error-a2010.md
POMATOpl/cpp-docs.pl-pl
ae1925d41d94142f6a43c4e721d45cbbbfeda4c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/assembler/masm/ml-nonfatal-error-a2010.md
POMATOpl/cpp-docs.pl-pl
ae1925d41d94142f6a43c4e721d45cbbbfeda4c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/assembler/masm/ml-nonfatal-error-a2010.md
POMATOpl/cpp-docs.pl-pl
ae1925d41d94142f6a43c4e721d45cbbbfeda4c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: 'Learn more about: ML nonfatal error A2010'
title: ML nonfatal error A2010
ms.date: 12/17/2019
ms.custom: error-reference
f1_keywords:
- A2010
helpviewer_keywords:
- A2010
ms.assetid: 8bcd57f4-1e3f-421f-9ef8-e702daf57fcb
ms.openlocfilehash: 093fb78f8feac4d5cc660423c9bf754536bb9afc
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 12/11/2020
ms.locfileid: "97129275"
---
# <a name="ml-nonfatal-error-a2010"></a>ML nonfatal error A2010

**invalid type expression**

The operand of the [THIS](operator-this.md) or [PTR](operator-ptr.md) operator is not a valid type expression.

## <a name="see-also"></a>See also

[ML error messages](ml-error-messages.md)
28.851852
116
0.789474
pol_Latn
0.953078
03f5825d1d6d599e2fdf70df4aaa88599c8b1d21
1,981
md
Markdown
_pages/about.md
franzeder/website
e2b0a55b0868c88c7eecbad9ec15a064e40bbb1a
[ "MIT" ]
null
null
null
_pages/about.md
franzeder/website
e2b0a55b0868c88c7eecbad9ec15a064e40bbb1a
[ "MIT" ]
null
null
null
_pages/about.md
franzeder/website
e2b0a55b0868c88c7eecbad9ec15a064e40bbb1a
[ "MIT" ]
null
null
null
---
layout: about
title: about
permalink: /
description: Associate Professor of International Relations, <a href="https://www.uibk.ac.at/">University of Innsbruck</a>.
profile:
  align: right
  image: /about/profile_pic.jpg
  # address: >
  #   <p>555 your office number</p>
  #   <p>123 your address street</p>
  #   <p>Your City, State 12345</p>
news: true  # includes a list of news items
selected_papers: false # includes a list of papers marked as "selected={true}"
social: false  # includes social icons at the bottom of the page
---

Welcome to my website! I am Associate Professor of International Relations at the [Department of Political Science](https://www.uibk.ac.at/politikwissenschaft/){:target="\_blank"} at the [University of Innsbruck](https://www.uibk.ac.at/){:target="\_blank"}. Since 2021, I have also been the Dean of the [Innsbruck School of Social and Political Sciences](https://www.uibk.ac.at/fakultaeten/social-and-political-sciences/){:target="\_blank"}. To learn more about me, have a look at my [Curriculum Vitae](https://drive.google.com/file/d/1nNnbLR7fQZKCrP_1imjyb-tGEBSWbaWd/view?usp=sharing){:target="\_blank"}.

In my research, I focus on the *foreign and security policy of states* (especially the U.S. and Austria) and on *counter-terrorism policies*. Furthermore, I investigate *methodological questions* of studying International Relations and Foreign Policy, focusing especially on [Discourse Network Analysis](https://github.com/leifeld/dna){:target="\_blank"} and Quantitative Text Analysis. For more details on my research, have a look at my [publications](/publications/) and my current research [projects](/projects/).

In my [teaching](/teaching/), I primarily focus on the topics of *Academic Writing*, *Introduction to International Relations*, *Foreign Policy Analysis*, *(Counter)Terrorism*, *Actors and Agency in International Relations*, and *Methods of the Social Sciences*. For more information, have a look at my courses.
70.75
434
0.757193
eng_Latn
0.932245
03f5a53b1613f63b9faef07366d73f0be0bb172f
1,360
md
Markdown
CHANGELOG.md
pzmarzly/grapple
23e917a0cfad9dc375d9a190fd69e7d14d23f776
[ "MIT" ]
null
null
null
CHANGELOG.md
pzmarzly/grapple
23e917a0cfad9dc375d9a190fd69e7d14d23f776
[ "MIT" ]
null
null
null
CHANGELOG.md
pzmarzly/grapple
23e917a0cfad9dc375d9a190fd69e7d14d23f776
[ "MIT" ]
null
null
null
# Change Log
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/)
and this project adheres to [Semantic Versioning](http://semver.org/).

## [Unreleased]
### Fixed
- Reduce high CPU usage when using `--thread-bandwidth`

## [0.3.0] - 2018-09-08
### Added
- Support for Basic/Digest auth username and password to be set via CLI options
- Per-thread bandwidth limit option

## [0.2.2] - 2017-04-08
### Added
- Global progress bar to see progress of download as a whole

### Fixed
- Don't mark the file as complete if some parts failed to download
- If last part was already downloaded when resuming, request was made out of file bounds

## [0.2.1] - 2017-03-26
### Fixed
- Data missing from final file if the download was interrupted

## [0.2.0] - 2017-03-26
### Added
- Parts flag - Can now specify number of parts independent of threads

### Fixed
- Moved to github version of `pbr` to fix https://github.com/a8m/pb/pull/48

[Unreleased]: https://github.com/daveallie/bindrs/compare/v0.3.0...HEAD
[0.3.0]: https://github.com/daveallie/bindrs/compare/v0.2.2...v0.3.0
[0.2.2]: https://github.com/daveallie/bindrs/compare/v0.2.1...v0.2.2
[0.2.1]: https://github.com/daveallie/bindrs/compare/v0.2.0...v0.2.1
[0.2.0]: https://github.com/daveallie/bindrs/compare/v0.1.0...v0.2.0
34
88
0.713235
eng_Latn
0.873636
03f752990c1b01d5751863e583efaea74b03a67a
21
md
Markdown
README.md
anKordii/NeatClip-Site-View
1695c676f24ee2f2568a7d134e42167388f528e2
[ "MIT" ]
null
null
null
README.md
anKordii/NeatClip-Site-View
1695c676f24ee2f2568a7d134e42167388f528e2
[ "MIT" ]
null
null
null
README.md
anKordii/NeatClip-Site-View
1695c676f24ee2f2568a7d134e42167388f528e2
[ "MIT" ]
null
null
null
# NeatClip-Site-View
10.5
20
0.761905
kor_Hang
0.325215
03f7a8ab79c218ef70c1b1a075a2247a1a9bd8af
13,151
md
Markdown
docs/2014/database-engine/configure-windows/buffer-pool-extension.md
in4matica/sql-docs.de-de
b5a6c26b66f347686c4943dc8307b3b1deedbe7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/database-engine/configure-windows/buffer-pool-extension.md
in4matica/sql-docs.de-de
b5a6c26b66f347686c4943dc8307b3b1deedbe7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/database-engine/configure-windows/buffer-pool-extension.md
in4matica/sql-docs.de-de
b5a6c26b66f347686c4943dc8307b3b1deedbe7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Buffer Pool Extension | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: configuration
ms.topic: conceptual
ms.assetid: 909ab7d2-2b29-46f5-aea1-280a5f8fedb4
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
ms.openlocfilehash: 9e435ab4cec86d439a7e2fba31f6099bf8668ec0
ms.sourcegitcommit: 2d4067fc7f2157d10a526dcaa5d67948581ee49e
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/28/2020
ms.locfileid: "78175430"
---
# <a name="buffer-pool-extension"></a>Buffer Pool Extension

Beginning with [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)], the buffer pool extension enables the seamless integration of a nonvolatile random access memory (NVRAM) extension, that is, a solid-state drive, into the [!INCLUDE[ssDE](../../includes/ssde-md.md)] buffer pool to significantly improve I/O throughput. The buffer pool extension is not available in every [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] edition. For more information, see [Features Supported by the Editions of SQL Server 2014](../../getting-started/features-supported-by-the-editions-of-sql-server-2014.md).

## <a name="benefits-of-the-buffer-pool-extension"></a>Benefits of the Buffer Pool Extension

The primary purpose of a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] database is to store and retrieve data, so heavy disk I/O is a core characteristic of the Database Engine. Because disk I/O operations can consume many resources and take a relatively long time to finish, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] is designed to make I/O operations as efficient as possible. The buffer pool is a primary memory allocation source of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Buffer management is a central component in achieving this efficiency. The buffer management component has two mechanisms: the buffer manager, which accesses and updates database pages, and the buffer pool, which reduces database file I/O.

Data and index pages are read from disk into the buffer pool, and modified pages (also known as dirty pages) are written back to disk. Memory pressure on the server and database checkpoints cause hot (active) dirty pages in the buffer cache to be evicted from the cache, written to mechanical disks, and then read back into the cache. These I/O operations are typically small random reads and writes on the order of 4 to 16 KB of data. Small random I/O patterns cause frequent seeks that compete for the mechanical disk arm, increase I/O latency, and reduce the aggregate I/O throughput of the system.

The usual approach to resolving these I/O bottlenecks is to add more DRAM or to add high-performance SAS spindles. Although these options are helpful, they have significant drawbacks: DRAM is more expensive than data storage drives, and adding spindles increases the capital expenditure of the hardware purchase as well as operating costs through higher power consumption and a higher probability of component failure.

The buffer pool extension feature extends the buffer pool cache with nonvolatile storage (usually SSDs). Because of the extension, the buffer pool can accommodate a larger database working set, which forces the paging of I/O between RAM and the SSDs. This effectively offloads small random I/O operations from the mechanical disks to SSDs. Because of the lower latency and better random I/O performance of SSDs, the buffer pool extension significantly improves I/O throughput.

The following list describes the benefits of the buffer pool extension feature.

- Improved throughput for random I/O
- Reduced I/O latency
- Improved transaction throughput
- Improved read performance with a larger hybrid buffer pool
- A caching architecture that can take advantage of present and future low-cost memory drives

### <a name="concepts"></a>Concepts

The following terms apply to the buffer pool extension feature.

Solid-state drive (SSD)
Solid-state drives store data in memory (RAM) in a persistent manner. For more information, see [this definition](http://en.wikipedia.org/wiki/Solid-state_drive).

Buffer
In [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], a buffer is an 8-KB page in memory, the same size as a data or index page. The buffer cache is likewise divided into 8-KB pages. A page remains in the buffer cache until the buffer manager needs the buffer area to load more data. Data is written back to disk only if it has been modified. These changed in-memory pages are known as dirty pages. A page is clean when it is identical to its database image on disk. Data in the buffer cache can be modified multiple times before being written back to disk.

Buffer pool
The buffer pool is also known as the buffer cache. The buffer pool is a global resource used by all databases for their cached data pages. The maximum and minimum size of the buffer pool cache is determined during startup, or when the instance of SQL Server is dynamically reconfigured by using sp_configure. This size determines the maximum number of pages that can be cached in the buffer pool at any time in the running instance.

Checkpoint
A checkpoint creates a known good point from which the [!INCLUDE[ssDE](../../includes/ssde-md.md)] can start applying changes contained in the transaction log during recovery after an unexpected shutdown or crash. A checkpoint writes the dirty pages and transaction log information from memory to disk and also records information about the transaction log. For more information, see [Database Checkpoints &#40;SQL Server&#41;](../../relational-databases/logs/database-checkpoints-sql-server.md).

## <a name="buffer-pool-extension-details"></a>Buffer Pool Extension Details

The SSD storage is used as an extension of the memory subsystem rather than the disk storage subsystem. That is, the buffer pool extension file allows the buffer pool manager to use both DRAM and NAND flash memory to maintain a much larger buffer pool of lukewarm pages in nonvolatile memory backed by SSDs. This creates a multilevel caching hierarchy with level 1 (L1) as the DRAM and level 2 (L2) as the buffer pool extension file on the SSD. Only clean pages are written to the L2 cache, which provides greater data safety. The buffer manager handles the movement of clean pages between the L1 and L2 caches.

The following illustration provides a high-level architectural overview of the buffer pool in relation to other [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] components.

![SSD buffer pool extension architecture](../media/ssdbufferpoolextensionarchitecture.gif "SSD buffer pool extension architecture")

When enabled, the buffer pool extension specifies the size and file path of the buffer pool caching file on the SSD. This file is a contiguous extent of storage on the SSD and is statically configured when the instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] starts. Changes to the file configuration parameters can be made only when the buffer pool extension feature is disabled. When the buffer pool extension is disabled, all related configuration settings are removed from the registry. The buffer pool extension file is deleted upon shutdown of the SQL Server instance.

## <a name="best-practices"></a>Best Practices

We recommend that you follow these best practices.

- After you enable the buffer pool extension for the first time, we recommend restarting the SQL Server instance to obtain the maximum performance benefits.
- The buffer pool extension size can be up to 32 times the max_server_memory value for Enterprise editions, and up to 4 times for the Standard edition. We recommend maintaining a ratio of 1:16 or less between the size of the physical memory (max_server_memory) and the size of the buffer pool extension. A lower ratio in the range of 1:4 to 1:8 may be optimal. For information about setting the max_server_memory option, see [Server Memory Server Configuration Options](server-memory-server-configuration-options.md).
- Test the buffer pool extension thoroughly before implementing it in a production environment. Once in production, avoid making configuration changes to the file or disabling the feature. These activities can have a negative impact on server performance, because the size of the buffer pool is significantly reduced when the feature is disabled. When the feature is disabled, the memory used to support it is not released until the instance of SQL Server is restarted. However, when the feature is re-enabled, the memory is reused immediately, without requiring a restart of the instance.

## <a name="return-information-about-the-buffer-pool-extension"></a>Return Information About the Buffer Pool Extension

You can use the following dynamic management views to view the configuration of the buffer pool extension and to return information about the data pages in the extension.

- [sys.dm_os_buffer_pool_extension_configuration &#40;Transact-SQL&#41;](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-buffer-pool-extension-configuration-transact-sql)
- [sys.dm_os_buffer_descriptors &#40;Transact-SQL&#41;](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-buffer-descriptors-transact-sql)

Performance counters are available in the SQL Server Buffer Manager object to track the data pages in the buffer pool extension file. For more information, see [Buffer Pool Extension performance counters](../../relational-databases/performance-monitor/sql-server-buffer-manager-object.md).

The following XEvents are available.

|XEvent|Description|Parameters|
|------------|-----------------|----------------|
|sqlserver.buffer_pool_extension_pages_written|Fired when a page or a group of pages is written from the buffer pool to the buffer pool extension file.|number_page<br /><br /> first_page_id<br /><br /> first_page_offset<br /><br /> initiator_numa_node_id|
|sqlserver.buffer_pool_extension_pages_read|Fired when a page is read from the buffer pool extension file into the buffer pool.|number_page<br /><br /> first_page_id<br /><br /> first_page_offset<br /><br /> initiator_numa_node_id|
|sqlserver.buffer_pool_extension_pages_evicted|Fired when a page is evicted from the buffer pool extension file.|number_page<br /><br /> first_page_id<br /><br /> first_page_offset<br /><br /> initiator_numa_node_id|
|sqlserver.buffer_pool_eviction_thresholds_recalculated|Fired when the eviction threshold is calculated.|warm_threshold<br /><br /> cold_threshold<br /><br /> pages_bypassed_eviction<br /><br /> eviction_bypass_reason<br /><br /> eviction_bypass_reason_description|

## <a name="related-tasks"></a>Related Tasks

|||
|-|-|
|**Task description**|**Topic**|
|Enable and configure the buffer pool extension.|[ALTER SERVER CONFIGURATION &#40;Transact-SQL&#41;](/sql/t-sql/statements/alter-server-configuration-transact-sql)|
|Modify the buffer pool extension configuration.|[ALTER SERVER CONFIGURATION &#40;Transact-SQL&#41;](/sql/t-sql/statements/alter-server-configuration-transact-sql)|
|View the buffer pool extension configuration.|[sys.dm_os_buffer_pool_extension_configuration &#40;Transact-SQL&#41;](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-buffer-pool-extension-configuration-transact-sql)|
|Monitor the buffer pool extension.|[sys.dm_os_buffer_descriptors &#40;Transact-SQL&#41;](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-buffer-descriptors-transact-sql)<br /><br /> [Performance counters](../../relational-databases/performance-monitor/sql-server-buffer-manager-object.md)|
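For reference, a minimal sketch of the enable, inspect, and disable tasks listed above. The file name and size are placeholder values to adjust for your system; the statements follow the `ALTER SERVER CONFIGURATION` syntax documented in the linked topics.

```sql
-- Enable the buffer pool extension (placeholder path and size).
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\SSDCACHE\ExampleFile.BPE', SIZE = 50 GB);

-- Inspect the current buffer pool extension configuration.
SELECT path, current_size_in_kb, state_description
FROM sys.dm_os_buffer_pool_extension_configuration;

-- Disable the extension. Configuration changes require the feature
-- to be disabled first; see the best practices above.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;
```

A session for one of the XEvents listed above could, as a sketch, look like this (the session name and target are illustrative choices):

```sql
-- Trace page writes to the extension file into a ring buffer target.
CREATE EVENT SESSION bpe_page_writes ON SERVER
    ADD EVENT sqlserver.buffer_pool_extension_pages_written
    ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION bpe_page_writes ON SERVER STATE = START;
```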
128.931373
918
0.823359
deu_Latn
0.994061
03f7ad432c67ff8531949b630b396259948b7038
5,323
md
Markdown
intl.en-US/User Guide/Widget guide/Basic flat map widgets/Configure an isosurface layer.md
AthenaCiCi/datav
00c9fb288fdac0027c59b6e0e3f52e231b437fcf
[ "MIT" ]
null
null
null
intl.en-US/User Guide/Widget guide/Basic flat map widgets/Configure an isosurface layer.md
AthenaCiCi/datav
00c9fb288fdac0027c59b6e0e3f52e231b437fcf
[ "MIT" ]
null
null
null
intl.en-US/User Guide/Widget guide/Basic flat map widgets/Configure an isosurface layer.md
AthenaCiCi/datav
00c9fb288fdac0027c59b6e0e3f52e231b437fcf
[ "MIT" ]
null
null
null
# Configure an isosurface layer {#concept_vv3_4md_2gb .concept}

This topic describes how to configure an isosurface layer to convert vector point data to a raster map. For example, you can display a national air quality map by using an isosurface layer.

## Prerequisites {#section_qsc_fwb_fhb .section}

The isosurface layer child widget is added to the basic flat map and the widget parameters are set. For more information, see [Map container](intl.en-US/User Guide/Widget guide/Basic flat map widgets/Configure a basic flat map.md#).

## Configure the configuration plane {#section_k3q_qmd_2gb .section}

![](http://static-aliyun-doc.oss-cn-hangzhou.aliyuncs.com/assets/img/80671/155808667041113_en-US.png)

-   **Opacity**: Set the overall layer opacity by dragging the slider or entering a value.
-   **Pixel Size**: Set the pixel size of each square of the raster graphic by clicking **+** or **-**, or entering a value. The value range of this parameter is 1 to 10. A smaller pixel size produces a clearer isosurface layer, but takes longer to render than a larger pixel size.
-   **Weight**: Set the weight by which an interpolation point affects its surrounding points by dragging the slider or entering a value. The value range of this parameter is 0.5 to 3. A greater weight value means that an interpolation point affects its surrounding points more strongly and achieves a better layering effect, but takes longer to render than a lower weight value. (A typical weighting formula is sketched at the end of this topic.)
-   **Render Type**: Select a layer render type from the drop-down list. Available types are **Linear** and **Piecewise**.
    -   **Linear**
        -   **From Color**: Set the color of the interpolation point that indicates the minimum value in the data source. For more information, see [Configure item description](intl.en-US/User Guide/Manage widgets/Set widget styles/Configure item description.md#section_kdw_vj4_t2b).
        -   **Middle Color**: Set the color of the interpolation point that indicates the median value in the data source. For more information, see [Configure item description](intl.en-US/User Guide/Manage widgets/Set widget styles/Configure item description.md#section_kdw_vj4_t2b).
        -   **End Color**: Set the color of the interpolation point that indicates the maximum value in the data source. For more information, see [Configure item description](intl.en-US/User Guide/Manage widgets/Set widget styles/Configure item description.md#section_kdw_vj4_t2b).
        -   **Break Value**: Set the break value for linear rendering by dragging the slider or entering a value. Based on the specified break value and the value range in the data source, DataV obtains the interpolation point of the middle value. The color of this interpolation point is the **Middle Color** that you set. **Note:** This parameter setting takes effect only if you set **Render Type** to **Linear**.
        -   **Classify Color Count**: Set the number of classifications of interpolation points by dragging the slider or entering a value. Based on the **Classify Color Count** value that you set and the value range in the data source, DataV uses different colors to classify interpolation points. A greater **Classify Color Count** value produces a finer interpolation effect, but takes longer to render than a lower value.
    -   **Piecewise**
        -   **Default Color**: Set the default color of interpolation points. This parameter takes effect when the value indicated by an interpolation point is not included in any **Break Value** that you set. For more information, see [Configure item description](intl.en-US/User Guide/Manage widgets/Set widget styles/Configure item description.md#section_kdw_vj4_t2b).
        -   **Piecewise**: Add or remove a break by clicking **+** or the **Trash Can** icon on the right.
            -   **Break Value**: Set a break by dragging the slider or entering a value. You can set this parameter according to the value range that is indicated by all interpolation points.
            -   **Break Color**: Set the color of the interpolation points that indicate values included in a **Break**. For more information, see [Configure item description](intl.en-US/User Guide/Manage widgets/Set widget styles/Configure item description.md#section_kdw_vj4_t2b).

## Configure the data plane {#section_pdz_qmd_2gb .section}

-   **Clip GeoJSON data**

    ![](http://static-aliyun-doc.oss-cn-hangzhou.aliyuncs.com/assets/img/80671/155808667041114_en-US.png)

    The **Clip GeoJSON data** settings specify the area of interpolation points that need to be rendered. For example, in the preceding figure, the **Clip GeoJSON data** settings specify the China map displayed in colors.

-   **Interpolation Points Data**

    ![](http://static-aliyun-doc.oss-cn-hangzhou.aliyuncs.com/assets/img/80671/155808667041117_en-US.png)

    -   lng: Set the longitude of the interpolation points.
    -   lat: Set the latitude of the interpolation points.
    -   value: Set the interpolation point value. Based on the value and the rendering settings in the **Configuration** plane, DataV adjusts the layer rendering effect.

## Configure the interaction plane {#section_btz_qmd_2gb .section}

No settings are required.
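The topic does not state the exact interpolation formula behind **Weight**. As an assumption for intuition only, isosurface layers of this kind typically use inverse-distance weighting, in which the **Weight** parameter plays the role of the exponent $p$:

$$
\hat{v}(x) = \frac{\sum_{i} d_i^{-p}\, v_i}{\sum_{i} d_i^{-p}}
$$

Here $v_i$ is the value of interpolation point $i$ and $d_i$ is its distance from the rendered pixel $x$. A larger $p$ makes each point dominate its own neighborhood more strongly, which matches the layering behavior described above.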
102.365385
466
0.756904
eng_Latn
0.989005
03f7ba1a32800b8f1cc40e1479fcc575275e4fc3
5,750
md
Markdown
docs/relational-databases/stored-procedures/ole-automation-objects-in-transact-sql.md
Jteve-Sobs/sql-docs.de-de
9843b0999bfa4b85e0254ae61e2e4ada1d231141
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/stored-procedures/ole-automation-objects-in-transact-sql.md
Jteve-Sobs/sql-docs.de-de
9843b0999bfa4b85e0254ae61e2e4ada1d231141
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/stored-procedures/ole-automation-objects-in-transact-sql.md
Jteve-Sobs/sql-docs.de-de
9843b0999bfa4b85e0254ae61e2e4ada1d231141
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: OLE-Automatisierungsobjekte in Transact-SQL | Microsoft-Dokumentation description: Erfahren Sie, wie OLE-Automatisierungsobjekte, die über gespeicherte Prozeduren ausgeführt werden, im Adressraum einer Instanz der SQL Server-Datenbank-Engine ausgeführt werden. ms.custom: '' ms.date: 03/16/2017 ms.prod: sql ms.reviewer: '' ms.technology: stored-procedures ms.topic: conceptual helpviewer_keywords: - triggers [SQL Server], OLE Automation - batches [SQL Server], OLE Automation - OLE Automation [SQL Server] - OLE Automation [SQL Server], about OLE Automation ms.assetid: a887d956-4cd0-400a-aa96-00d7abd7c44b author: stevestein ms.author: sstein monikerRange: '>=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current' ms.openlocfilehash: 3c144b9e26ca8f6471bc5b07c7abfcc1f2ed41cc ms.sourcegitcommit: 75f767c7b1ead31f33a870fddab6bef52f99906b ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 07/28/2020 ms.locfileid: "87332579" --- # <a name="ole-automation-objects-in-transact-sql"></a>OLE-Automatisierungsobjekte in Transact-SQL [!INCLUDE[SQL Server Azure SQL Database Synapse Analytics PDW ](../../includes/applies-to-version/sql-asdb-asdbmi-asa-pdw.md)] [!INCLUDE[tsql](../../includes/tsql-md.md)] enthält mehrere gespeicherte Systemprozeduren, die Verweise auf OLE-Automatisierungsobjekte in [!INCLUDE[tsql](../../includes/tsql-md.md)] -Batches, gespeicherten Prozeduren und Triggern ermöglichen. Diese gespeicherten Systemprozeduren werden als erweiterte gespeicherte Prozeduren ausgeführt, und die OLE-Automatisierungsobjekte, die über die gespeicherten Prozeduren ausgeführt werden, werden wie eine erweiterte gespeicherte Prozedur im Adressraum einer Instanz von [!INCLUDE[ssDEnoversion](../../includes/ssdenoversion-md.md)] ausgeführt. Die gespeicherten OLE-Automatisierungsprozeduren ermöglichen es [!INCLUDE[tsql](../../includes/tsql-md.md)] -Batches, auf SQL-DMO-Objekte und benutzerdefinierte OLE-Automatisierungsobjekte zu verweisen, wie etwa Objekte, die die **IDispatch** -Schnittstelle verfügbar machen. Ein benutzerdefinierter In-Process-OLE-Server, der mithilfe von [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[vbprvb](../../includes/vbprvb-md.md)] erstellt wurde, muss mit einem (mit der Anweisung **On Error GoTo** angegebenen) Fehlerhandler für die Unterroutinen **Class_Initialize** und **Class_Terminate** ausgestattet sein. Nicht behandelte Fehler in den Unterroutinen **Class_Initialize** und **Class_Terminate** können unvorhersehbare Fehler verursachen, wie z.B. eine Zugriffsverletzungen in einer Instanz von [!INCLUDE[ssDE](../../includes/ssde-md.md)]. Fehlerhandler werden auch für andere Unterroutinen empfohlen. Der erste Schritt beim Verwenden eines OLE-Automatisierungsobjekts in [!INCLUDE[tsql](../../includes/tsql-md.md)] ist das Aufrufen der gespeicherten Systemprozedur **sp_OACreate** , um eine Instanz des Objekts im Adressraum der Instanz von [!INCLUDE[ssDE](../../includes/ssde-md.md)]zu erstellen. Verwenden Sie nach dem Erstellen einer Instanz des Objekts die folgenden gespeicherten Prozeduren, um mit den Eigenschaften, Methoden und Fehlerinformationen im Zusammenhang mit dem Objekt zu arbeiten: - **sp_OAGetProperty** ruft den Wert einer Eigenschaft ab. - **sp_OASetProperty** legt den Wert einer Eigenschaft fest. - **sp_OAMethod** ruft eine Methode auf. - **sp_OAGetErrorInfo** ruft die letzten Fehlerinformationen ab. 
When the object is no longer needed, call **sp_OADestroy** to deallocate the instance of the object that was created with **sp_OACreate**.

OLE Automation objects return data through property values and methods. **sp_OAGetProperty** and **sp_OAMethod** return these data values as a result set.

The scope of an OLE Automation object is a batch. All references to the object must be contained in a single batch, stored procedure, or trigger.

When referencing objects, the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] OLE Automation objects support traversing from the referenced object to the other objects it contains. For example, when the SQL-DMO **SQLServer** object is used, references can be made to the databases and tables on that server.

## <a name="related-content"></a>Related content

[Object Hierarchy Syntax &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/object-hierarchy-syntax-transact-sql.md)

[Surface Area Configuration](../../relational-databases/security/surface-area-configuration.md)

[Ole Automation Procedures (Server Configuration Option)](../../database-engine/configure-windows/ole-automation-procedures-server-configuration-option.md)

[sp_OACreate &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/sp-oacreate-transact-sql.md)

[sp_OAGetProperty &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/sp-oagetproperty-transact-sql.md)

[sp_OASetProperty &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/sp-oasetproperty-transact-sql.md)

[sp_OAMethod &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/sp-oamethod-transact-sql.md)

[sp_OAGetErrorInfo &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/sp-oageterrorinfo-transact-sql.md)

[sp_OADestroy &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/sp-oadestroy-transact-sql.md)
79.861111
919
0.781043
deu_Latn
0.922531
03f84470375092011a893c4806d24c9e2edeb026
236
md
Markdown
site/content/contact/_index.md
MrHeonG/heon-project-grenada
ceea7f1631fd272b7949a54ed2517f23f19866e7
[ "MIT" ]
null
null
null
site/content/contact/_index.md
MrHeonG/heon-project-grenada
ceea7f1631fd272b7949a54ed2517f23f19866e7
[ "MIT" ]
39
2022-01-03T02:25:19.000Z
2022-01-17T01:29:15.000Z
site/content/contact/_index.md
HrHeonG/heon-project-grenada
ceea7f1631fd272b7949a54ed2517f23f19866e7
[ "MIT" ]
null
null
null
--- title: "Contact" --- We’d love for you to get in touch with us if you have any inquiries or if you would like to contribute in someway to the Heon Project Grenada. <br> We've provided a form below for which you can easily do so
21.454545
71
0.728814
eng_Latn
0.999959
03f844cd828a79c442bb6379759e5032903f570e
127
md
Markdown
README.md
lolokatoa/react-portfolio
3185ae98f4e3e491105bc7ccf882a814f9c3ebe3
[ "MIT" ]
null
null
null
README.md
lolokatoa/react-portfolio
3185ae98f4e3e491105bc7ccf882a814f9c3ebe3
[ "MIT" ]
4
2021-05-10T00:44:06.000Z
2022-02-18T05:21:20.000Z
README.md
lolokatoa/react-portfolio
3185ae98f4e3e491105bc7ccf882a814f9c3ebe3
[ "MIT" ]
null
null
null
# Loseli Katoa's React Portfolio Application

*Fork from [es6-webpack2-starter](https://github.com/micooz/es6-webpack2-starter)*
42.333333
82
0.795276
kor_Hang
0.41727
03f958f2f5c172665b07658d449ff479fe1b0207
896
md
Markdown
docs/error-messages/compiler-errors-2/compiler-error-c3350.md
wraith13/cpp-docs.ja-jp
e4f53ab4d3646a6a195093f55629f8e1c663a8b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-2/compiler-error-c3350.md
wraith13/cpp-docs.ja-jp
e4f53ab4d3646a6a195093f55629f8e1c663a8b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/compiler-errors-2/compiler-error-c3350.md
wraith13/cpp-docs.ja-jp
e4f53ab4d3646a6a195093f55629f8e1c663a8b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Compiler Error C3350
ms.date: 11/04/2016
f1_keywords:
- C3350
helpviewer_keywords:
- C3350
ms.assetid: cfbbc338-92b5-4f34-999e-aa2d2376bc70
ms.openlocfilehash: a19dbde6409afaae29e9110315c7c68fe9d43d62
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "62300545"
---
# <a name="compiler-error-c3350"></a>Compiler Error C3350

'delegate': a delegate constructor expects a specific number of arguments

When you create an instance of a delegate, you must pass two arguments: an instance of the type that contains the delegate function, and the function itself.

The following sample generates C3350:

```
// C3350.cpp
// compile with: /clr
delegate void SumDelegate();

public ref class X {
public:
   void F() {}
   static void F2() {}
};

int main() {
   X ^ MyX = gcnew X();
   SumDelegate ^ pSD = gcnew SumDelegate();   // C3350
   SumDelegate ^ pSD1 = gcnew SumDelegate(MyX, &X::F);
   SumDelegate ^ pSD2 = gcnew SumDelegate(&X::F2);
}
```
21.333333
60
0.732143
yue_Hant
0.163523
03f99343734a68456f7cd8b9e53c15a671f2c5dc
188
md
Markdown
_pages/home.md
qq456cvb/qq456cvb.github.io
6ef7b9f9fc31d35449125ffd77244a58ffe614f4
[ "MIT" ]
1
2022-03-02T03:46:03.000Z
2022-03-02T03:46:03.000Z
_pages/home.md
qq456cvb/qq456cvb.github.io
6ef7b9f9fc31d35449125ffd77244a58ffe614f4
[ "MIT" ]
null
null
null
_pages/home.md
qq456cvb/qq456cvb.github.io
6ef7b9f9fc31d35449125ffd77244a58ffe614f4
[ "MIT" ]
null
null
null
---
permalink: /
title: "About Me"
excerpt: "Home page"
author_profile: true
redirect_from:
---

{% include_relative about_content.md %}

News
======

{% include_relative news_content.md %}
14.461538
39
0.702128
eng_Latn
0.853757
03f99c92e7d303aa0c7a3443bdfd204540556888
1,827
md
Markdown
_posts/web/ui-components/2021-04-22-checkbox.md
COINS-Naz/coins-design-system
2fb14a7f8f213b0ddaf990a659d97ce0cd01f597
[ "MIT" ]
null
null
null
_posts/web/ui-components/2021-04-22-checkbox.md
COINS-Naz/coins-design-system
2fb14a7f8f213b0ddaf990a659d97ce0cd01f597
[ "MIT" ]
1
2021-05-28T22:22:43.000Z
2021-05-28T22:25:09.000Z
_posts/web/ui-components/2021-04-22-checkbox.md
COINS-Naz/coins-design-system
2fb14a7f8f213b0ddaf990a659d97ce0cd01f597
[ "MIT" ]
1
2021-05-25T13:14:35.000Z
2021-05-25T13:14:35.000Z
---
layout: post
category: "Web"
tag: "UI Components"
title: Checkbox
subtitle: "Checkbox allows the user to select multiple values from several options."
permalink: /web/checkbox/
---

## Overview

In UI design, the checkbox is presented as a small square box on the screen. It has two states: checked and unchecked. When checked, the square will be filled with a check mark. Checkboxes allow the choice of one option, no option, or several options. There is no interdependence between checkboxes in a group. Each checkbox is given a label.

### When to use

Checkboxes are used when there are lists of options and the user may select any number of choices, including zero, one, or several. In other words, each checkbox is independent of all other checkboxes in the list, so checking one box doesn't uncheck the others.

Checkboxes are used in forms and databases to indicate an answer to a question, apply a batch of settings, or allow the user to make a multi-selection from a list. A stand-alone checkbox is used for a single option that the user can turn on or off.

### When not to use

Do not use checkboxes as action buttons that make something happen. Also, the changed settings should not take effect until the user clicks the command button (labeled "OK" for example, or "Proceed to XXX" where "XXX" is the next step in a process).

## Guidelines

### Text alignment

With checkboxes, we use right and left text alignment and avoid using top or bottom.

{% include do-dont.html do-img="Guidelines_Checkboxes_Do.png" do-text="Put text on the right or left side." dont-img="Guidelines_Checkboxes_Do_Not.png" dont-text="Do not put text at the top or bottom side." %}

## Component

{% include snippet.html code='
<label class="checkbox">
  <input type="checkbox">
  <span>Checkbox label</span>
</label>
' %}
38.0625
262
0.756431
eng_Latn
0.999205
03f9bf1e50a2b6c3d7578e58e381ad263acd6c3b
1,839
md
Markdown
readme.md
pittma/next
4903e5a5b02a1e6d7147814a6070ad997a157070
[ "MIT" ]
3
2018-02-16T18:54:52.000Z
2018-02-19T14:08:54.000Z
readme.md
pittma/next
4903e5a5b02a1e6d7147814a6070ad997a157070
[ "MIT" ]
null
null
null
readme.md
pittma/next
4903e5a5b02a1e6d7147814a6070ad997a157070
[ "MIT" ]
null
null
null
Next
====

Next is an adventure in type-level FSMs -- one that has not yet met its destination.

## Overview

The goal of `next` is to be able to construct and traverse a type-level state machine. This is provided, for now, through the type family `Next` which is used to map valid state transitions.

~At the time of this writing, only paths can be validated; `Next` cannot handle vertices with more than one outgoing edge. There is but more discovery to do!~

Thanks to [help from @roman](https://gist.github.com/roman/043451849d18f1f60a688f211e99bdb5), we can use `ConstraintKinds` to gather multiple egress states!

## An Example

The complete, runnable example can be found in [examples](./examples/Door.hs).

```haskell
-- Enumerate our states.
data DoorSt = Open | Closed deriving (Show)

-- This is our type to be FSM-indexed.
data Door s a where
  OpenDoor :: a -> Door 'Open a
  ClosedDoor :: a -> Door 'Closed a

-- Just so we can show our Door.
deriving instance Show a => Show (Door s a)

-- Define our egresses as constraints. This allows us to have multiple
-- outputs from a single input.
class OpenEgress s
class ClosedEgress s

-- Map our states to their egresses
instance OpenEgress 'Closed
instance ClosedEgress 'Open

-- Define our FSM as type family relationships in the context of Next.
type instance Next 'Open = OpenEgress
type instance Next 'Closed = ClosedEgress

-- Write our FSM instances.
instance FSMFunctor Door where
  fsmFmap f (OpenDoor x) = OpenDoor (f x)
  fsmFmap f (ClosedDoor x) = ClosedDoor (f x)

instance FSMApplicative Door where
  type Init Door = 'Closed
  fsmPure = ClosedDoor
  fsmAp (OpenDoor f) (OpenDoor x) = OpenDoor (f x)
  fsmAp (ClosedDoor f) (ClosedDoor x) = ClosedDoor (f x)

instance FSMMonad Door where
  (->>=) (OpenDoor x) f = f x
  (->>=) (ClosedDoor x) f = f x
```
29.190476
315
0.727026
eng_Latn
0.991861
03fa59ac16e80a5f66122c97b26fc47eda2662e7
1,210
md
Markdown
docs/web-service-reference/creationtime.md
grtaylor806/office-developer-exchange-docs
699ab2eac7bdfc4d12d8e69c1f9d6e62b985f3ce
[ "CC-BY-4.0", "MIT" ]
23
2019-01-14T08:37:26.000Z
2022-03-05T17:03:06.000Z
docs/web-service-reference/creationtime.md
grtaylor806/office-developer-exchange-docs
699ab2eac7bdfc4d12d8e69c1f9d6e62b985f3ce
[ "CC-BY-4.0", "MIT" ]
256
2018-05-12T01:56:30.000Z
2022-03-30T19:51:21.000Z
docs/web-service-reference/creationtime.md
grtaylor806/office-developer-exchange-docs
699ab2eac7bdfc4d12d8e69c1f9d6e62b985f3ce
[ "CC-BY-4.0", "MIT" ]
83
2018-05-03T18:26:29.000Z
2022-03-30T19:46:21.000Z
--- title: "CreationTime" manager: sethgros ms.date: 09/17/2015 ms.audience: Developer ms.topic: reference ms.prod: office-online-server ms.localizationpriority: medium ms.assetid: 32fa8946-3d5d-4123-8127-efc2ac369553 description: "The CreationTime element specifies when the persona was created." --- # CreationTime The **CreationTime** element specifies when the persona was created. ```XML <CreationTime></CreationTime> ``` **datetime** ## Attributes and elements The following sections describe attributes, child elements, and parent elements. ### Attributes None. ### Child elements None. ### Parent elements [Persona](persona.md) ## Text value The text value of the **CreationTime** element is the date and time that a persona was created. ## Remarks This element was introduced in Exchange Server 2013. The schema that describes this element is located in the IIS virtual directory that hosts Exchange Web Services. ## Element information ||| |:-----|:-----| |Namespace <br/> |https://schemas.microsoft.com/exchange/services/2006/types <br/> | |Schema name <br/> |Types schema <br/> | |Validation file <br/> |Types.xsd <br/> | |Can be empty <br/> ||
20.166667
112
0.713223
eng_Latn
0.919276
03fa67441bb3aa8540a8a19953756e02487d0409
5,047
md
Markdown
content/fr/faq.md
HubRise/website
5d164d04e7cf5508583c49bb29e99ce7b905cdf8
[ "MIT" ]
null
null
null
content/fr/faq.md
HubRise/website
5d164d04e7cf5508583c49bb29e99ce7b905cdf8
[ "MIT" ]
165
2020-01-20T11:11:53.000Z
2022-03-31T13:58:51.000Z
content/fr/faq.md
HubRise/website
5d164d04e7cf5508583c49bb29e99ce7b905cdf8
[ "MIT" ]
2
2020-01-20T09:26:09.000Z
2020-10-08T14:55:50.000Z
---
title: F.A.Q
layout: documentation-simple
meta:
  title: F.A.Q. | HubRise
  description:
---

#### Subscription

###### Do I have to enter my bank details to run a free trial?

No, you only need to provide your name and email address. Billing only starts once the first orders have been placed.

###### What is the commitment period?

None. You can cancel your subscription at any time.

###### How do I pay for the subscription?

By monthly debit from your bank card. If you do not have a bank card, you can pay by bank transfer with a minimum commitment of 12 months.

###### I have several stores. Do I have to pay one subscription per store?

Yes. Note that discounts are available for chains of 50 or more points of sale. <ContactFormToggle text="Contact us" /> to learn more.

#### Data

###### Where is the data stored?

The data is stored securely on servers located in the European Union.

###### Is HubRise GDPR compliant?

Absolutely. HubRise complies with the General Data Protection Regulation.

###### Is the volume of data per account limited?

In principle no, but a limit of 10,000 orders and 10,000 customers per month is applied by default. We can raise this limit on request if the account is being used legitimately.

###### What happens to the data if I decide to close my HubRise account?

The data is kept for 3 months, then permanently deleted. We can delete it immediately on request.

###### How does HubRise guarantee me access to all my data?

Your data is created and read on HubRise through the API. The data stored on HubRise is therefore fully accessible to the applications you have authorized.

###### Who has access to my data?

Only the applications that you authorize, explicitly and revocably, have access to your data.

###### Can I give other users access to my account?

Yes, you can add users at the account level, or at the level of a specific point of sale.

#### Developers

###### I want to build an application for merchants. Why use HubRise?

HubRise gives you immediate access to the merchant and restaurant ecosystem: POS software, online ordering solutions, delivery services, and more. You can thus focus on your product's innovative features.

###### Can you help me promote my application?

We will soon be showcasing the best applications on our blog and social networks. If your application meets certain specifications, it can be published on our App Store, which is accessible to our users. <ContactFormToggle text="Contact us" /> to tell us about your project.

###### Can HubRise pay me for my applications?

No, but you are free to sell your applications to our users, with no conditions on our part.

###### Can HubRise offer solutions that compete with mine to its users?

HubRise never solicits its users directly. Furthermore, HubRise never officially recommends a solution, and treats competing solutions equally.

###### Do you have a certification process?

Not yet, but we plan to quickly put in place a self-certification process, which will be entirely optional and free.

#### Technology and data model

###### What technologies does the API use?

The API follows REST principles, and data is in JSON format. All exchanges take place over HTTPS. Application authentication uses the OAuth 2.0 protocol.

###### What data is stored on HubRise?

HubRise stores orders, the customer file, products, promotions, and inventory.

###### I want to store a field for which I cannot find an equivalent in the API. What should I do?

_Custom fields_ allow you to store arbitrary data not covered by the API. Examples: the seller's name on orders, an internal identifier on customers, etc.

###### How is an application notified of the arrival of a new order or the update of a record?

Applications can, at their choice:<br />
\- poll our server at regular intervals to fetch new events (_passive callback_)<br />
\- publish a URL that our server will call as soon as a new event appears (_active callback_)

###### I have several points of sale, some of which share the same product catalog. What should I do?

HubRise lets you create several catalogs and assign them individually to each point of sale, or share them between several points of sale. The same goes for your customer lists.
48.528846
253
0.775708
fra_Latn
0.989736
03fa6ff70575e458b0f6e82ec7dcdad9385bad2f
55
md
Markdown
README.md
MrVibesRSA/StreamerBot-Actions
23d39aa25bc6cd513d369e1690c9a410916c28dd
[ "MIT" ]
null
null
null
README.md
MrVibesRSA/StreamerBot-Actions
23d39aa25bc6cd513d369e1690c9a410916c28dd
[ "MIT" ]
null
null
null
README.md
MrVibesRSA/StreamerBot-Actions
23d39aa25bc6cd513d369e1690c9a410916c28dd
[ "MIT" ]
null
null
null
# StreamerBot-Actions

Shared Actions for Streamer.bot
18.333333
32
0.818182
eng_Latn
0.717237
03fa7fa046e16cf3f247e1e59f881dcb6b904a71
2,614
md
Markdown
_pages/shi.md
Environment-and-Seniors-Health-Emory/liuhua_web.github.io
2ce2eee5a76d4f97d6e4b7fd81e1aac2ffb29c07
[ "MIT" ]
null
null
null
_pages/shi.md
Environment-and-Seniors-Health-Emory/liuhua_web.github.io
2ce2eee5a76d4f97d6e4b7fd81e1aac2ffb29c07
[ "MIT" ]
null
null
null
_pages/shi.md
Environment-and-Seniors-Health-Emory/liuhua_web.github.io
2ce2eee5a76d4f97d6e4b7fd81e1aac2ffb29c07
[ "MIT" ]
1
2021-06-24T03:26:31.000Z
2021-06-24T03:26:31.000Z
--- title: "Liuhua Shi" layout: gridlay excerpt: "Liuhua Shi" sitemap: false permalink: /shi/ --- ### <b>Dr. Liuhua Shi</b> <div class="row"> <div class="col-sm-2 clearfix"> ![]({{ site.url }}{{ site.baseurl }}/images/peopic/Liuhua_Shi.png){: style="width: 130px; float: left; margin-top: 20px"} </div> <div class="col-sm-7 clearfix"> <p> <br /> </p> Gangarosa Department of Environmental Health, <br/> Rollins School of Public Health, Emory University<br/> 1518 Clifton Road, Atlanta, Georgia 30322<br/> Email: liuhua.shi(at)emory.edu<br/> Office: RM 2036<br/> </div> </div> <div class="row"> <div class="col-sm-7 clearfix"> #### <b>RESEARCH AREAS</b> Air Pollution<br/> Climate Change<br/> Environmental Health<br/> Environmental Epidemiology<br/> Big Data Statistical Modeling<br/> Alzheimer's disease and related dementia<br/> #### <b>EMPLOYMENT</b> 2019.12- Assistant Professor, Emory University<br/> 2019.02-2019.09 Postdoctoral Research Fellow, Harvard University <br/> 2016.09-2019.01 Consultant, Consulting Company<br/> #### <b>EDUCATION</b> 2016.08 ScD in Environmental Health, Harvard University <br/> 2012.07 MS in Geography, Beijing Normal University<br/> 2009.07 BS in Geography, Beijing Normal University<br/> #### <b>AFFILIATIONS & ACTIVITIES</b> Member, Lancet Countdown project (2020)<br/> Member, HERCULES Exposome Research Center<br/> Member, Goizueta Alzheimer’s Disease Research Center (ADRC) <br/> Member, Climate@Emory<br/> </div> </div> <div class="row"> <div class="col-sm-12 clearfix"> #### SELECTED MEDIA COVERAGE: [https://www.eurekalert.org/pub_releases/2020-10/htcs-slf101620.php](https://www.eurekalert.org/pub_releases/2020-10/htcs-slf101620.php) [https://ehp.niehs.nih.gov/curated-collections/2018-JIF](https://ehp.niehs.nih.gov/curated-collections/2018-JIF) (EHP's 2018 Journal Impact Factor Collection) [https://www.hsph.harvard.edu/news/press-releases/air-pollution-below-epa-standards-linked-with-higher-death-rates/](https://www.hsph.harvard.edu/news/press-releases/air-pollution-below-epa-standards-linked-with-higher-death-rates/) [https://www.cbsnews.com/news/climate-change-deaths-in-us-study/](https://www.cbsnews.com/news/climate-change-deaths-in-us-study/) [https://www.nbcnews.com/health/health-news/heres-how-climate-change-might-kill-people-n391226](https://www.nbcnews.com/health/health-news/heres-how-climate-change-might-kill-people-n391226) [https://sph.emory.edu/news/news-release/2020/09/air-pollution-covid-deaths.html]("https://sph.emory.edu/news/news-release/2020/09/air-pollution-covid-deaths.html") </div> </div>
33.088608
232
0.733741
yue_Hant
0.263691
03fabd7ff4e17959800c340cd12a65b91c0035a5
3,962
md
Markdown
Benchmark/BenchmarkDotNet.Artifacts/results/Benchmark.Delete-report-github.md
nettsundere/DictionaryReality
c6e5a8b4206421ece3ffa5e5a2281e4aec9c6dd0
[ "MIT" ]
null
null
null
Benchmark/BenchmarkDotNet.Artifacts/results/Benchmark.Delete-report-github.md
nettsundere/DictionaryReality
c6e5a8b4206421ece3ffa5e5a2281e4aec9c6dd0
[ "MIT" ]
null
null
null
Benchmark/BenchmarkDotNet.Artifacts/results/Benchmark.Delete-report-github.md
nettsundere/DictionaryReality
c6e5a8b4206421ece3ffa5e5a2281e4aec9c6dd0
[ "MIT" ]
null
null
null
``` ini
BenchmarkDotNet=v0.11.5, OS=macOS Mojave 10.14.6 (18G87) [Darwin 18.7.0]
Intel Core i5-3427U CPU 1.80GHz (Ivy Bridge), 1 CPU, 4 logical and 2 physical cores
.NET Core SDK=2.2.401
  [Host]     : .NET Core 2.2.6 (CoreCLR 4.6.27817.03, CoreFX 4.6.27818.02), 64bit RyuJIT
  DefaultJob : .NET Core 2.2.6 (CoreCLR 4.6.27817.03, CoreFX 4.6.27818.02), 64bit RyuJIT
```
| Method | Size | Mean | Error | StdDev | Median | Rank |
|--------------------------- |----- |------------:|-----------:|-----------:|------------:|-----:|
| **ListDeleteSequential** | **1** | **105.0 ns** | **2.394 ns** | **3.727 ns** | **103.6 ns** | **1** |
| DictionaryDeleteSequential | 1 | 1,047.0 ns | 13.061 ns | 12.217 ns | 1,044.4 ns | 7 |
| ListDeleteRandom | 1 | 3,420.9 ns | 29.087 ns | 27.208 ns | 3,411.8 ns | 13 |
| DictionaryDeleteRandom | 1 | 3,469.5 ns | 25.697 ns | 24.037 ns | 3,470.3 ns | 13 |
| **ListDeleteSequential** | **3** | **169.3 ns** | **2.052 ns** | **1.919 ns** | **168.5 ns** | **2** |
| DictionaryDeleteSequential | 3 | 1,732.0 ns | 18.495 ns | 15.444 ns | 1,729.0 ns | 9 |
| ListDeleteRandom | 3 | 4,280.9 ns | 59.987 ns | 53.177 ns | 4,281.0 ns | 14 |
| DictionaryDeleteRandom | 3 | 4,209.9 ns | 46.012 ns | 40.788 ns | 4,212.5 ns | 14 |
| **ListDeleteSequential** | **4** | **253.8 ns** | **1.930 ns** | **1.806 ns** | **254.0 ns** | **3** |
| DictionaryDeleteSequential | 4 | 2,478.0 ns | 19.375 ns | 18.123 ns | 2,482.8 ns | 10 |
| ListDeleteRandom | 4 | 4,825.9 ns | 38.454 ns | 35.970 ns | 4,833.0 ns | 16 |
| DictionaryDeleteRandom | 4 | 4,709.7 ns | 29.009 ns | 27.135 ns | 4,706.6 ns | 15 |
| **ListDeleteSequential** | **5** | **276.6 ns** | **3.052 ns** | **2.855 ns** | **276.7 ns** | **4** |
| DictionaryDeleteSequential | 5 | 2,664.0 ns | 35.954 ns | 33.631 ns | 2,662.2 ns | 11 |
| ListDeleteRandom | 5 | 5,705.0 ns | 75.064 ns | 70.215 ns | 5,703.6 ns | 18 |
| DictionaryDeleteRandom | 5 | 5,716.3 ns | 60.616 ns | 50.617 ns | 5,722.0 ns | 18 |
| **ListDeleteSequential** | **10** | **531.2 ns** | **5.358 ns** | **4.750 ns** | **530.8 ns** | **5** |
| DictionaryDeleteSequential | 10 | 5,060.7 ns | 47.194 ns | 41.836 ns | 5,057.4 ns | 17 |
| ListDeleteRandom | 10 | 8,645.7 ns | 78.291 ns | 69.403 ns | 8,641.2 ns | 21 |
| DictionaryDeleteRandom | 10 | 8,127.1 ns | 102.900 ns | 85.926 ns | 8,084.8 ns | 20 |
| **ListDeleteSequential** | **15** | **748.4 ns** | **18.737 ns** | **30.256 ns** | **734.0 ns** | **6** |
| DictionaryDeleteSequential | 15 | 6,966.5 ns | 83.562 ns | 74.075 ns | 6,959.1 ns | 19 |
| ListDeleteRandom | 15 | 11,647.9 ns | 123.689 ns | 115.698 ns | 11,656.4 ns | 23 |
| DictionaryDeleteRandom | 15 | 10,359.9 ns | 121.812 ns | 113.943 ns | 10,326.0 ns | 22 |
| **ListDeleteSequential** | **30** | **1,478.4 ns** | **12.589 ns** | **11.776 ns** | **1,478.9 ns** | **8** |
| DictionaryDeleteSequential | 30 | 13,467.4 ns | 97.499 ns | 86.430 ns | 13,463.0 ns | 24 |
| ListDeleteRandom | 30 | 22,401.8 ns | 179.856 ns | 159.438 ns | 22,348.0 ns | 26 |
| DictionaryDeleteRandom | 30 | 17,538.0 ns | 199.439 ns | 176.797 ns | 17,520.0 ns | 25 |
| **ListDeleteSequential** | **60** | **2,903.5 ns** | **27.763 ns** | **25.969 ns** | **2,897.9 ns** | **12** |
| DictionaryDeleteSequential | 60 | 26,354.0 ns | 203.329 ns | 180.246 ns | 26,322.6 ns | 27 |
| ListDeleteRandom | 60 | 50,126.1 ns | 426.737 ns | 399.170 ns | 50,278.4 ns | 29 |
| DictionaryDeleteRandom | 60 | 31,837.3 ns | 358.882 ns | 318.139 ns | 31,858.7 ns | 28 |
88.044444
126
0.496466
kor_Hang
0.090877
03facb21779aa4ef68eb6e1fd9bbf18a2f903e72
175
md
Markdown
docs/frontend.md
phuquocdog/block-explorer
3221ebed3e7cd4464805e7dd4234625e2a416bed
[ "Apache-2.0" ]
null
null
null
docs/frontend.md
phuquocdog/block-explorer
3221ebed3e7cd4464805e7dd4234625e2a416bed
[ "Apache-2.0" ]
null
null
null
docs/frontend.md
phuquocdog/block-explorer
3221ebed3e7cd4464805e7dd4234625e2a416bed
[ "Apache-2.0" ]
null
null
null
## Build and Deploy

```
yarn generate
aws s3 cp --recursive dist s3://block.phuquoc.dog
aws cloudfront create-invalidation --distribution-id=E24W08QZ1RM9FY --paths '/*'
```
21.875
81
0.725714
eng_Latn
0.52644
03fafcb6f8bfe231465b4c69fe30708a0878af2d
8,571
md
Markdown
articles/active-directory/manage-apps/application-proxy-release-version-history.md
jayv-ops/azure-docs.de-de
6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/manage-apps/application-proxy-release-version-history.md
jayv-ops/azure-docs.de-de
6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/manage-apps/application-proxy-release-version-history.md
jayv-ops/azure-docs.de-de
6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Azure AD Application Proxy: Version release history'
description: This article lists all versions of Azure AD Application Proxy and describes new features and fixed issues.
services: active-directory
author: kenwith
manager: daveba
ms.assetid: ''
ms.service: active-directory
ms.topic: reference
ms.workload: identity
ms.date: 07/22/2020
ms.subservice: app-mgmt
ms.author: kenwith
ms.openlocfilehash: 6ba622bd52dc13fb0053b61b65529db6e6912611
ms.sourcegitcommit: c27a20b278f2ac758447418ea4c8c61e27927d6a
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/03/2021
ms.locfileid: "101686715"
---
# <a name="azure-ad-application-proxy-version-release-history"></a>Azure AD Application Proxy: Version release history

This article lists the released versions and features of Azure Active Directory (Azure AD) Application Proxy. The Azure AD team regularly updates Application Proxy with new features and functionality. Application Proxy connectors are updated automatically when a new version is released.

We recommend enabling automatic updates for your connectors to ensure that you have the latest features and bug fixes. Microsoft provides direct support for the latest connector version and one version before.

Here is a list of related resources:

Resource | Details
--------- | ---------
Enable Application Proxy | This [tutorial](application-proxy-add-on-premises-application.md) describes the prerequisites for enabling Application Proxy and for installing and registering a connector.
Understand Azure AD Application Proxy connectors | Learn more about [connector management](application-proxy-connectors.md) and the [automatic updating](application-proxy-connectors.md#automatic-updates) of connectors.
Download the Azure AD Application Proxy connector | [Download the latest connector](https://download.msappproxy.net/subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/connector/download).

## <a name="1519750"></a>1.5.1975.0

### <a name="release-status"></a>Release status

July 22, 2020: Released for download. This version is only available for installation via the download page. An automatic upgrade of this version will be released at a later time.

### <a name="new-features-and-improvements"></a>New features and improvements

- Improved support for Azure Government cloud environments. For details on how to properly install the connector for the Azure Government cloud, see the [prerequisites](../hybrid/reference-connect-government-cloud.md#allow-access-to-urls) and [Install the agent for the Azure Government cloud](../hybrid/reference-connect-government-cloud.md#install-the-agent-for-the-azure-government-cloud).
- Support for using the Remote Desktop Services web client with Application Proxy. For more information, see [Publish Remote Desktop with Azure AD Application Proxy](application-proxy-integrate-with-remote-desktop-services.md).
- Improved WebSocket extension negotiations.

### <a name="fixed-issues"></a>Fixed issues

- Fixed: WebSocket issue that forced strings to lowercase
- Fixed: issue that sometimes caused connectors to become unresponsive

## <a name="1516260"></a>1.5.1626.0

### <a name="release-status"></a>Release status

July 17, 2020: Released for download. This version is only available for installation via the download page. An automatic upgrade of this version will be released at a later time.

### <a name="fixed-issues"></a>Fixed issues

- Fixed: memory leak in the previous version
- General improvements to WebSocket support

## <a name="1515260"></a>1.5.1526.0

### <a name="release-status"></a>Release status

April 07, 2020: Released for download. This version is only available for installation via the download page. An automatic upgrade of this version will be released at a later time.

### <a name="new-features-and-improvements"></a>New features and improvements

- Connectors use only TLS 1.2 for all connections. For more information, see [connector prerequisites](application-proxy-add-on-premises-application.md#prerequisites).
- Improved signaling between the connector and the Azure services. This includes support for reliable sessions for WCF communication between the connector and the Azure services, and DNS cache improvements for WebSocket communication.
- Support for configuring a proxy between the connector and the backend application. For more information, see [Work with existing on-premises proxy servers](application-proxy-configure-connectors-with-proxy-servers.md).

### <a name="fixed-issues"></a>Fixed issues

- Removed the fallback to port 8080 for communication between the connector and the Azure services.
- Added debug traces for WebSocket communication.
- Fixed preserving the SameSite attribute when set in backend application cookies.

## <a name="156120"></a>1.5.612.0

### <a name="release-status"></a>Release status

September 20, 2018: Released for download

### <a name="new-features-and-improvements"></a>New features and improvements

- Added WebSocket support for the QlikSense application. To learn more about integrating QlikSense with Application Proxy, see this [walkthrough](application-proxy-qlik.md).
- Improved the installation wizard to make it easier to configure an outbound proxy.
- Set TLS 1.2 as the default protocol for connectors.
- Added new Microsoft software license terms.

### <a name="fixed-issues"></a>Fixed issues

- Fixed a bug that caused memory leaks in the connector.
- Updated the Azure Service Bus version, which also fixed connector timeout errors.

## <a name="154020"></a>1.5.402.0

### <a name="release-status"></a>Release status

January 19, 2018: Released for download

### <a name="fixed-issues"></a>Fixed issues

- Added support for custom domains that require domain translation in the cookie.

## <a name="151320"></a>1.5.132.0

### <a name="release-status"></a>Release status

May 25, 2017: Released for download

### <a name="new-features-and-improvements"></a>New features and improvements

Improved control over the limits on connectors' outbound connections.

## <a name="15360"></a>1.5.36.0

### <a name="release-status"></a>Release status

April 15, 2017: Released for download

### <a name="new-features-and-improvements"></a>New features and improvements

- Simplified onboarding and management, with fewer required ports. Application Proxy now requires opening only two standard outbound ports: 443 and 80. Application Proxy continues to use only outbound connections, so you still don't need any components in a DMZ. For more information, see our [configuration documentation](application-proxy-add-on-premises-application.md).
- If supported by your external proxy or firewall, you can now open your network by DNS instead of IP address range. Application Proxy services require connections only to *.msappproxy.net and *.servicebus.windows.net.

## <a name="earlier-versions"></a>Earlier versions

If you're using an Application Proxy connector version earlier than 1.5.36.0, update to the latest version to ensure you have the latest, fully supported features.

## <a name="next-steps"></a>Next steps

- Learn more about [remote access to on-premises applications through Azure Active Directory Application Proxy](application-proxy.md).
- To get started with Application Proxy, see [Tutorial: Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md).
64.931818
445
0.809124
deu_Latn
0.987472
03fb1945aa2143face57ce1769c23aea2233fab0
12,311
md
Markdown
docs/Node-Operation/manage.md
KeepDocs/KeepDocsRussia
537472943fcd633aadede57bd489ea6aedd4cc51
[ "MIT" ]
null
null
null
docs/Node-Operation/manage.md
KeepDocs/KeepDocsRussia
537472943fcd633aadede57bd489ea6aedd4cc51
[ "MIT" ]
null
null
null
docs/Node-Operation/manage.md
KeepDocs/KeepDocsRussia
537472943fcd633aadede57bd489ea6aedd4cc51
[ "MIT" ]
2
2020-10-15T05:11:51.000Z
2020-10-15T21:18:38.000Z
# Node Management

This section covers node management: after setup and launch, you need to monitor your nodes' operation and update them whenever the Keep team deploys new contracts or new client versions.

## Best practices

### Separate INFURA accounts

Create one INFURA project per node so that you can see separate activity for each project. This is a quick and easy way to check node activity, but it should not be your primary method of monitoring.

<p align="center">
<img width="800" src="https://user-images.githubusercontent.com/68167410/88484551-07d77b00-cf35-11ea-974e-5dddd0162bab.png">
</p>

### Use a separate Virtual Private Server (VPS) for each node

Although this increases operating costs, if problems arise with operations or updates, you can spread the risk more sensibly.

<p align="center">
<img width="800" src="https://user-images.githubusercontent.com/68167410/88492899-be5a5080-cf73-11ea-972d-715dca658e0c.png">
</p>

## Checking node health

!>**@Herobrine** made an amazing app that you can run to check node health: [KeepNode.app](https://keepnode.app/)

### ECDSA node maintenance

The main check you should perform is making sure the ECDSA node is connected to peers:

Use the terminal: `sudo docker logs ecdsa --since 5m | grep "connected"`

<p align="center">
<img width="800" src="https://user-images.githubusercontent.com/68167410/88097681-2668fb00-cb5e-11ea-94d4-c080e20d5a15.png">
</p>

You can also check Keep creation.

Use the terminal: `sudo docker logs ecdsa 2>&1 | grep "new keep"`

<p align="center">
<img width="800" src="https://user-images.githubusercontent.com/68167410/88484709-30ac4000-cf36-11ea-8baf-3a54b48523b2.png">
</p>

View the log of "connected" peers.

Use the terminal: `sudo docker logs ecdsa --since 5m | grep "connected"`

<p align="center">
<img width="800" src="https://user-images.githubusercontent.com/68167410/88487822-226a1e00-cf4e-11ea-8351-60cf0a68d968.png">
</p>

!> If you run nodes on mainnet, be sure to follow the steps in the [Mainnet Node Operation Section](https://estebank97.github.io/Keep-Node-Docs/#/Node-Operation/mainnet) and use the [Community Tools](https://estebank97.github.io/Keep-Node-Docs/#/basics/tools).

## Undercollateralization and liquidation

There is a serious risk that, due to relative price fluctuations of the ETH/BTC pair, your collateral (in effect, the bond for the Keep(s) you have signed) becomes insufficient and you face liquidation.

The process is described in detail in the [tBTC System Design Document](https://docs.keep.network/tbtc/index.pdf), pages 19-20.

An article from Bison Trails, [Keep Active Management](https://bisontrails.co/keep-active-participation/), explains this liquidation process and lists several measures you can take to avoid the problem. For now, monitoring must be manual, that is, watching the relative ETH/BTC price movements and adding ETH collateral.

[Latenthero](https://discord.com/channels/590951101600235531/590951101600235533/737707953221664779) created a [Telegram Bot](https://t.me/keep_alert_bot) that alerts you when available ETH drops below a user-defined threshold. The bot's code can be viewed, and eventually repurposed, [here](https://github.com/latenthero/keep-alert-bot). It has been available for Testnet since August 2020.

This document by Experience#2376 provides more detail, with links to the program code involved: [tBTC risk - liquidation and slashing details](https://hackmd.io/OzIeyWcfTVO69zIF67XCkg). This chart from the document explains the risks well.

<p align="center">
<img width="800" src="https://user-images.githubusercontent.com/68167410/88967178-0975ab80-d273-11ea-9696-15f2ce8995c5.png">
</p>

h/t ssh

## Updating nodes

From time to time, the Keep development team releases new Docker images for both nodes and updates the corresponding Ethereum contracts.

### Key points from the update guide

From time to time, the Keep development team releases new Docker images for both nodes and updates the corresponding Ethereum contracts. See the [Run ECDSA Keep](https://github.com/keep-network/keep-ecdsa/blob/master/docs/run-keep-ecdsa.adoc#83-authorizations) document.

You need to do the following:

* Stop the node --> `sudo docker stop ecdsa`
* Edit the "config.toml" file --> `nano $HOME/keep-ecdsa/config/config.toml`
* Update the contracts and peer lists as directed by the Keep team
  - For example, this contract and peer list update was made in June 2020 (highlighted in green).

<p align="center">
<img width="900" src="https://user-images.githubusercontent.com/68167410/88488480-0026cf00-cf53-11ea-9ef6-cf65551f5cb1.png">
</p>

As an alternative, you can use the guide by **Papyasha**: [Update Nodes](https://gist.github.com/papyasha/7d97cb53aa1153cc65b0535c2b9f23e3) shows a clean way to update the configuration files by deleting them and starting again. It also includes a link to the original node setup with all the necessary details.

---

### September 2, 2020 update (+ September 8)

The key steps to update both nodes are as follows:

* Get a new grant of KEEP tokens from the Testnet Faucet:
  - https://us-central1-keep-test-f3e0.cloudfunctions.net/keep-faucet-ropsten?account= **`"Your Operator Ethereum Address"`**
* Stake the grant in the [KEEP Dapp](https://dashboard.test.keep.network/tokens/overview) (this address for Testnet)
* Authorize the contracts in the KEEP Dapp
  - Authorize [Random Beacon](https://dashboard.test.keep.network/applications/random-beacon)
  - Authorize [ECDSA](https://dashboard.test.keep.network/applications/tbtc) and add ETH for bonding
* Stop the Docker containers:
  - Make sure you use the correct names - `sudo docker ps`.
  - for Random Beacon: `sudo docker stop keep-client`
  - for ECDSA: `sudo docker stop ecdsa`
* Remove the current containers:
  - for Random Beacon: `sudo docker rm keep-client`
  - for ECDSA: `sudo docker rm ecdsa`
* Pull the new Docker images:
  - for Random Beacon: `sudo docker pull keepnetwork/keep-client:v1.3.0-rc.4`
  - for ECDSA: `sudo docker pull keepnetwork/keep-ecdsa-client:v1.2.0-rc.5`
* Update config.toml:
  - If both nodes run on the same VPS, remember that each configuration file lives in its own folder. In this example they are named keep-client and keep-ecdsa.
  - Make sure you use the correct folder names - `ls`.
  - for Random Beacon: `nano $HOME/keep-client/config/config.toml`
    - KeepRandomBeaconOperator = `"0xC8337a94a50d16191513dEF4D1e61A6886BF410f"`
    - TokenStaking = `"0x234d2182B29c6a64ce3ab6940037b5C8FdAB608e"`
    - KeepRandomBeaconService = `"0x6c04499B595efdc28CdbEd3f9ed2E83d7dCCC717"`
  - for ECDSA: `nano $HOME/keep-ecdsa/config/config.toml`
    - BondedECDSAKeepFactory = `"0x9EcCf03dFBDa6A5E50d7aBA14e0c60c2F6c575E6"`
    - Sanctioned Applications = `"0xc3f96306eDabACEa249D2D22Ec65697f38c6Da69"`
* Start the Docker containers:
  - Enter this as a single command, and remember it may differ depending on the guide you originally used (for example, the ports may differ from 3919:3919).
  - Make sure you reference the new Docker images and run each one from its own folder (if on the same VPS)!
  - for Random Beacon: `sudo docker run -dit --restart always --volume $HOME/keep-client:/mnt --env KEEP_ETHEREUM_PASSWORD=$KEEP_CLIENT_ETHEREUM_PASSWORD --env LOG_LEVEL=debug --log-opt max-size=100m --log-opt max-file=3 --name keep-client -p 3919:3919 keepnetwork/keep-client:v1.3.0-rc.4 --config /mnt/config/config.toml start`
  - for ECDSA: `sudo docker run -d --restart always --entrypoint /usr/local/bin/keep-ecdsa --volume $HOME/keep-ecdsa:/mnt/keep-ecdsa --env KEEP_ETHEREUM_PASSWORD=$KEEP_CLIENT_ETHEREUM_PASSWORD --env LOG_LEVEL=debug --log-opt max-size=100m --log-opt max-file=3 --name ecdsa -p 3919:3919 keepnetwork/keep-ecdsa-client:v1.2.0-rc.5 --config /mnt/keep-ecdsa/config/config.toml start`
* Check the logs for peer connections:
  - for Random Beacon: `sudo docker logs keep-client 2>&1 --since 5m | grep "number of connected peers"`
  - for ECDSA: `sudo docker logs ecdsa 2>&1 --since 5m | grep "number of connected peers"`
  - If it shows any errors, first make sure the contracts are authorized in the dashboard. Sometimes you need to repeat everything again.
* Finally, don't forget to add test ETH for tBTC bonding in the dashboard!

---

### August 7, 2020 update

The key steps to update both nodes are as follows:

* Get a new grant of KEEP tokens from the Testnet Faucet:
  - https://us-central1-keep-test-f3e0.cloudfunctions.net/keep-faucet-ropsten?account= **`"Your Operator Ethereum Address"`**
* Stake it in the [KEEP Dapp](https://dashboard.test.keep.network/tokens/overview) (this address for Testnet)
* Authorize the contracts in the KEEP Dapp
  - Authorize [Random Beacon](https://dashboard.test.keep.network/applications/random-beacon)
  - Authorize [ECDSA](https://dashboard.test.keep.network/applications/tbtc) and add ETH for bonding
* Stop the Docker containers:
  - Make sure you use the correct names - `sudo docker ps`.
  - for Random Beacon: `sudo docker stop keep-client`
  - for ECDSA: `sudo docker stop ecdsa`
* Remove the current containers:
  - for Random Beacon: `sudo docker rm keep-client`
  - for ECDSA: `sudo docker rm ecdsa`
* Pull the new Docker images:
  - for Random Beacon: `sudo docker pull keepnetwork/keep-client:v1.3.0-rc`
  - for ECDSA: `sudo docker pull keepnetwork/keep-ecdsa-client:v1.2.0-rc`
* Update config.toml:
  - If both nodes run on the same VPS, remember that each configuration file lives in its own folder. In this example they are named keep-client and keep-ecdsa.
  - Make sure you use the correct folder names - `ls`.
  - for Random Beacon: `nano $HOME/keep-client/config/config.toml`
    - KeepRandomBeaconOperator = `"0xf417b31104631280adF9F6828ee19985BC299fdC"`
    - TokenStaking = `"0x8117632eC1D514550b3880Bc68F9AC1A76c9C67B"`
    - KeepRandomBeaconService = `"0xd83248e311DC2Ba0d2A051e86f0678d8857f6ADD"`
  - for ECDSA: `nano $HOME/keep-ecdsa/config/config.toml`
    - BondedECDSAKeepFactory = `"0xb37c8696cD023c11357B37b5b12A9884c9C83784"`
    - Sanctioned Applications = `"0x9F3B3bCED0AFfe862D436CB8FF462a454040Af80"`
* Start the Docker containers:
  - Enter this as a single command, and remember it may differ depending on the guide you originally used (for example, the ports may differ from 3919:3919).
  - for Random Beacon: `sudo docker run -dit --restart always --volume $HOME/keep-client:/mnt --env KEEP_ETHEREUM_PASSWORD=$KEEP_CLIENT_ETHEREUM_PASSWORD --env LOG_LEVEL=debug --log-opt max-size=100m --log-opt max-file=3 --name keep-client -p 3919:3919 keepnetwork/keep-client:v1.3.0-rc --config /mnt/config/config.toml start`
  - for ECDSA: `sudo docker run -d --restart always --entrypoint /usr/local/bin/keep-ecdsa --volume $HOME/keep-ecdsa:/mnt/keep-ecdsa --env KEEP_ETHEREUM_PASSWORD=$KEEP_CLIENT_ETHEREUM_PASSWORD --env LOG_LEVEL=debug --log-opt max-size=100m --log-opt max-file=3 --name ecdsa -p 3919:3919 keepnetwork/keep-ecdsa-client:v1.2.0-rc --config /mnt/keep-ecdsa/config/config.toml start`
* Check the logs for peer connections:
  - for Random Beacon: `sudo docker logs keep-client 2>&1 --since 5m | grep "number of connected peers"`
  - for ECDSA: `sudo docker logs ecdsa 2>&1 --since 5m | grep "number of connected peers"`
  - If it shows any errors, first make sure the contracts are authorized in the dashboard. Sometimes you need to repeat everything again.
* Finally, don't forget to add test ETH for tBTC bonding in the dashboard!

---

`Source: official Keep Team documentation, edited and supplemented by the community. [Source](https://keep-network.gitbook.io/staking-documentation/)`

`Authors: Ramaruro, EstebanK`

`Translation: tony__s_h`
57.260465
404
0.757209
rus_Cyrl
0.545038
03fb198e809c04fddd0bbe82b901660ad3f34d84
329
md
Markdown
_posts/2021-09-24-打工妹從富士康辭職後做了什麼工作呢?目前的生活怎麼樣?很多網友關心問我.md
NodeBE4/society
20d6bc69f2b0f25d6cc48a361483263ad27f2eb4
[ "MIT" ]
1
2020-09-16T02:05:28.000Z
2020-09-16T02:05:28.000Z
_posts/2021-09-24-打工妹從富士康辭職後做了什麼工作呢?目前的生活怎麼樣?很多網友關心問我.md
NodeBE4/society
20d6bc69f2b0f25d6cc48a361483263ad27f2eb4
[ "MIT" ]
null
null
null
_posts/2021-09-24-打工妹從富士康辭職後做了什麼工作呢?目前的生活怎麼樣?很多網友關心問我.md
NodeBE4/society
20d6bc69f2b0f25d6cc48a361483263ad27f2eb4
[ "MIT" ]
null
null
null
---
layout: post
title: "What work did this factory girl do after quitting Foxconn? How is her life now? Many netizens have asked me"
date: 2021-09-24T12:00:19.000Z
author: 打工妹四妹
from: https://www.youtube.com/watch?v=lMRqpIQ7Y_I
tags: [ 打工妹四妹 ]
categories: [ 打工妹四妹 ]
---
<!--1632484819000-->

[What work did this factory girl do after quitting Foxconn? How is her life now? Many netizens have asked me](https://www.youtube.com/watch?v=lMRqpIQ7Y_I)

------

<div>
Welcome to subscribe
</div>
19.352941
82
0.717325
yue_Hant
0.213201
03fbc75662f9a18bae98343130a909f0d43a53c9
1,505
markdown
Markdown
_posts/2015-07-23-drilling-into-the-details.markdown
jalbertbowden/open.fda.gov
f1457d76cd7c7a5b8d8cc29ed1e9785683e2678e
[ "CC0-1.0" ]
null
null
null
_posts/2015-07-23-drilling-into-the-details.markdown
jalbertbowden/open.fda.gov
f1457d76cd7c7a5b8d8cc29ed1e9785683e2678e
[ "CC0-1.0" ]
null
null
null
_posts/2015-07-23-drilling-into-the-details.markdown
jalbertbowden/open.fda.gov
f1457d76cd7c7a5b8d8cc29ed1e9785683e2678e
[ "CC0-1.0" ]
1
2020-01-09T02:16:09.000Z
2020-01-09T02:16:09.000Z
---
layout: post
date: 2015-07-23
title: "openFDA: Drilling Into The Details"
authors:
- "openFDA Team"
---

By design, openFDA is an open system. It is built using open standards, uses open source software, and the code itself is publicly available at Github. Given the unique nature of this approach, the technical details of this system may be of interest. Accordingly, for those curious about the structure and design of the openFDA platform, we’ve created two documents that drill into the details of the architecture and technical design of openFDA, including the various software and technologies throughout the system, and how that system can be leveraged for insights. These documents are available here for download as PDFs.

The first document explains in detail the technical architecture, data structure, data sources, data processing and harmonization, and software stack:

* <a href="/static/docs/openFDA-technologies.pdf">Download openFDA Technologies Whitepaper</a>

The second document offers one case study in how openFDA data can be leveraged to generate insights. It walks through a project that transparently demonstrates JSON URL queries and uses R software to visualize and explore openFDA adverse events data. The particular example looks at an apparent association between “aspirin” and “flushing” and how the data can be misleading:

* <a href="/static/docs/openFDA-analysis-example.pdf">Download openFDA Analysis Example Whitepaper</a>

<br/>
79.210526
625
0.784053
eng_Latn
0.997936
03fc6edf0963005d3cfc13ef68c77a9a5b7013eb
8,124
md
Markdown
_pages/cv.md
jessicaecraig/jessicaecraig.github.io
53b0348031fef304c0598005824be07526edb80a
[ "MIT" ]
null
null
null
_pages/cv.md
jessicaecraig/jessicaecraig.github.io
53b0348031fef304c0598005824be07526edb80a
[ "MIT" ]
null
null
null
_pages/cv.md
jessicaecraig/jessicaecraig.github.io
53b0348031fef304c0598005824be07526edb80a
[ "MIT" ]
null
null
null
---
layout: archive
title: "CV"
permalink: /cv/
author_profile: true
redirect_from:
  - /resume
---

{% include base_path %}

[PDF version of CV](https://jessicaecraig.github.io/files/CraigJessica_CV_8.2021.pdf)

Education
======
* Master of Library and Information Science, June 2021
  * Informatics specialization
  * _University of California, Los Angeles_
* Bachelor of Arts, Art History, May 2019
  * Summa Cum Laude
  * _California State University, Channel Islands_

Experience
======
* UCLA ARTS LIBRARY - Library Reference Assistant, September 2019 to June 2021
  * Provide guidance in the use of physical and digital resources for researchers in the disciplines of fine art, art history, animation, film, television, theater, architecture, museum studies, and design.
  * Conduct reference interviews in-person and online to recommend research strategies and sources. Entered and evaluated reference metrics to improve virtual reference services.
  * Use digital reference chat platform (LibChat) to assist researchers around the world.
  * Created special topic digital research guides (LibGuides) for accessibility on the UCLA Library website.
  * Curated a digital exhibit using digital collections and developing a narrative with the use of Adobe Spark.
* UCLA LIBRARY / Resource Acquisitions and Metadata Services - Cataloging Intern, September 2020 to June 2021
  * Create and enhance original MARC catalog records within the OCLC Connexion software client and the UCLA Library's Voyager ILS for incoming foreign-language monographs (Spanish, French, and German).
  * Assign descriptive bibliographic metadata to catalog records based on national standards, such as RDA and the Library of Congress Program for Cooperative Cataloging Policy Statements.
  * Attach Library of Congress Subject Headings and LC Classification to catalog records for optimal searching and discoverability of collection resources.
  * Perform LC-PCC compliant authority work by creating and updating authority files based on NACO and SACO training and guidelines.
* LAW LIBRARY OF CONGRESS - Remote Metadata Intern, September 2020 to present
  * Lead as the project coordinator; assign metadata projects, review the work of 12 interns, assist onboarding and training, and ensure organized team workflow and collaboration.
  * Parse through digitized U.S. Statutes at Large documents and perform subject analysis for several thousand enacted laws. Assign metadata to each law according to local standards to facilitate their subsequent access on congress.gov. Perform metadata clean-up within spreadsheet software for large digital files.
* LAW LIBRARY OF CONGRESS - Junior Fellow, May to August 2020
  * Worked to maximize the user experience of the Law Library of Congress website by using graphic and web design principles to promote global access to the library's digital resources and services.
  * Enhanced several online research guides for the discovery of digital legal resources.
  * Assigned descriptive metadata to digitized historical congressional records to allow for online accessibility.
  * Designed a collection infographic for the online Legal Reports collection after performing collection data analysis. Presented work to the Law Librarian of Congress.
* SANTA BARBARA HISTORICAL MUSEUM - Digital Resource Development Intern, June 2020 to September 2020
  * Conducted primary source research to enhance collection provenance records.
  * Utilized historical and genealogical digital research tools to gather biographical information about museum collection donors and illuminate women's history in the Santa Barbara area.
  * Recorded and structured completed research in the local ArchivesSpace collection management system.
* UCLA INFORMATION STUDIES LAB - Reference and Research Assistant, September 2019 to March 2020
  * Actively assisted researchers using research lab resources, including the general and archival library collections, digital services and software, and film and video resource equipment.
  * Coordinated, planned, and led practical research workshops for Library and Information Science graduate students.
* ART LIFE FOUNDATION - Archives Catalog Assistant, June 2018 to February 2020
  * Collaboratively developed and maintained an original catalog for the archival collection of rare artist publications and ephemera. Performed metadata quality evaluations regularly.
  * Monitored, processed, arranged, described, and researched the physical archival collection.
  * Performed best practices for archival preservation of print-based materials; carefully handled and rehoused objects when necessary.
* CAMARILLO PUBLIC LIBRARY - Library Assistant II, December 2016 to September 2019
  * Assisted a diverse range of community members by sharing helpful information regarding the library collection and services based on their individual needs and interests.
  * Frequently checked materials for circulation, created library accounts, and solved a variety of account issues.
  * Physically handled and processed incoming materials and regularly updated the catalog's item records in the Polaris ILS.

Skills & Proficiencies
======
* General software and tools:
  * Adobe Creative Cloud (Photoshop, Illustrator, Spark)
  * Tableau data visualization
  * Data cleaning with OpenRefine
  * oXygen XML Editor
  * WordPress and Omeka CMS
  * GitHub hosting
  * Figma UX research & prototyping
  * Microsoft Office (with advanced Excel skills) and Google Suite
* Library and archive applications
  * Alma ILS, Voyager ILS, Polaris ILS
  * OCLC Connexion
  * LC Cataloger's Desktop
  * LC ClassWeb
  * Springshare LibApps
  * ArchivesSpace
* Technical Languages
  * SQL, XML, HTML, Git
* Metadata Standards
  * Content Standards: RDA, DACS
  * Structure Standards: MARC, EAD, Dublin Core, VRA Core, MODS
  * Value Standards: LCSH, LCNAF, LCGFT, AAT, ULAN, TGN

Publications
======
* “Computer Vision for Visual Arts Collections: Looking at Algorithmic Bias, Transparency, and Labor,” in _Art Documentation: Journal of the Art Libraries Society of North America_ 40, no. 1, Fall 2021 Issue (forthcoming)

Presentations
======
* UCLA Artifacts Conference, “Evaluating Machine Learning for Arts Collections,” June 2021 (forthcoming)
* ARLIS/NA Annual Conference, New Voices in the Profession session, “Computer Vision for Visual Arts Collections: Looking at Algorithmic Bias, Transparency, and Labor,” May 2021

Instruction & Teaching
======
* Teaching Assistant, UCLA Department of Design Media Arts, DESMA 9: Art, Science, and Technology, Professor Victoria Vesna, Spring Quarter 2021
* Co-Instructor, “Finding Image Resources Workshop,” UCLA Arts Library, April 2021
* Co-Instructor, “Finding Sources in the UCLA Library,” UCLA Cornerstone Workshop Series, January 2021
* Co-Instructor, “Collecting and Citing Sources,” UCLA Cornerstone Workshop Series, October 2020
* Co-Instructor, “Intro to Archival Processing,” UCLA Information Studies Research Lab, March 2020

Awards
======
* 2021 California Rare Book School Course Scholarship
* 2021 Samuel H. Kress Foundation Scholarship
* 2021 Art Libraries Society of North America Gerd Muehsam Award
* 2020 UCLA Information Studies Digital Resource Development Initiative Award

Professional Involvement
======
Current
* Art Libraries Society of North America (ARLIS/NA) - Member
* Art Library Students & New ARLIS Professionals (ArLiSNAP) - 2021-2023 Co-Moderator
* ARLIS/NA Student Advancement Awards Subcommittee - 2022 Member
* Southern California Technical Processes Group - 2020-2022 Secretary & Treasurer

Past
* Special Libraries Association, UCLA Student Chapter - 2020-2021 Co-President
* Artifacts, UCLA Student Organization - 2020-2021 Web Chair
* The Horn Press, UCLA Book Arts Student Organization - 2020-2021 Member
* Society of American Archivists - 2019-2021 Student Member

Service
======
* J. Paul Getty Museum, Instructional Gallery Docent, October 2018 – May 2020
* CSUCI Alumni Mentorship, Mentor, February 2020 – Present
* Camarillo Ranch House, Tour Docent, June 2016 – October 2018
56.811189
320
0.792097
eng_Latn
0.871386
03fc8fdb42e4988144e660cc894f28ffde0eccff
3,867
md
Markdown
_posts/2012-08-30-isnt-bluetooth-soup-yet.md
rjbs/rjbs.github.io
fffc65448b7d791c72088dbfb64404bdc6010a84
[ "MIT" ]
null
null
null
_posts/2012-08-30-isnt-bluetooth-soup-yet.md
rjbs/rjbs.github.io
fffc65448b7d791c72088dbfb64404bdc6010a84
[ "MIT" ]
null
null
null
_posts/2012-08-30-isnt-bluetooth-soup-yet.md
rjbs/rjbs.github.io
fffc65448b7d791c72088dbfb64404bdc6010a84
[ "MIT" ]
null
null
null
--- layout: post title : "isn't Bluetooth soup yet?" date : "2012-08-30T14:01:40Z" tags : ["hardware"] --- I listen to music on my iPhone all the time. I ride a bus about twelve hours a week, much of which is spent with Spotify or iPod or something else keeping my ears entertained. I also use it for phone calls or, more often, voice-commanded web searches. I'd been using my iPhone headset for this, but it's not very durable, and gets tangled up in other stuff. I also run an [online D&D game](http://dudgeonmaster.org/games/alar/) (well, actually it's [M&M](http://storygame.free.fr/MAZES.htm)), and we use voice chat. Originally, I wanted to use a headset that had two ⅛" audio jacks, but then I learned that the input port on my old laptop was line in, not microphone in. I bought a USB audio widget so I could plug the headset into that, and that into my laptop. It didn't work very well. I ended up using the laptop's built-in microphone. Unfortunately, [Mumble](http://mumble.sourceforge.net/) never seemed able to cope if I used a mic that could hear the speaker, so I ended up wearing headphones, too. Oh, and because I was using the laptop's internal mic, I couldn't close the lid and dock the laptop, because it would close off the microphone. I decided I'd get one headset for all occasions. My requirements were: * has to be comfortable and small enough to wear while walking around * can't look completely ridiculous if I'm just listening to music on a walk * needs controls like those on the iPhone headphones for play, pause, answer, etc. I didn't care whether it was wired or wireless, but I ended up finding a pretty decent bluetooth headset, the [LG Tone HBS-700](http://www.amazon.com/gp/product/B0052YFYFK/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B0052YFYFK&linkCode=as2&tag=rjbs-20). They look weird, but they're really comfortable and the controls work well and the battery lasts long enough and so on. <a href="http://www.flickr.com/photos/rjbs/7893947914/" title="Re: headset by rjbs, on Flickr"><img src="http://farm9.staticflickr.com/8451/7893947914_979486e5df.jpg" width="500" height="333" alt="Re: headset"></a> Sometimes they cut out for a fraction of a second. It's rare, and I don't care, because sometimes my wired headset would pop loose. Whatever. I've used them to make phone calls and I've used them to listen to music and I've used them to interact with Siri. They work well. I've used them with Mumble for running my game, and that worked pretty well, too. The problem is that switching whether they talk to my phone or my laptop is a complete pain. I thought I'd be able to just tell my laptop, "you take over now," and it would tell my headset to switch. Not so. Instead, it seems that once the device has paired to one device, I have to tell that device to stop using it first. This is especially annoying on the iPhone, where I have to dig through several layers of menu to get to the Bluetooth menu… and where I can't pick "disconnect," but must instead "forget" the device, meaning I'll have to re-pair later. Getting the headset to work properly with the laptop is a pain, too. I have to connect via the Bluetooth menu, then muddle with stuff in preferences, then sometimes play some actual audio to force the connection to really happen. Argh! If there were a wire jack on my headset, I would use it to connect to my laptop just to sort out all this nonsense. 
Last time we had our game, I ended up using my old headphones and the laptop's mic because this was turning into such a hassle. When is Bluetooth going to be easy enough to use one accessory with two devices? I'd love to consider replacements for the Bluetooth headset (even though I really *do* like it) if I could find a decent headset with a TRRS connector matching my needs, but so far no luck.
55.242857
214
0.762348
eng_Latn
0.999686
03fcf4fa7eadd057a7b30ebb86b6eb3dd597e1ac
1,012
md
Markdown
docs/ssms/menu-help/options-designers-analysis-services-designers-general.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
2
2020-05-07T19:40:49.000Z
2020-09-19T00:57:12.000Z
docs/ssms/menu-help/options-designers-analysis-services-designers-general.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssms/menu-help/options-designers-analysis-services-designers-general.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
2
2020-03-11T20:30:39.000Z
2020-05-07T19:40:49.000Z
--- title: "Options (Designers - Analysis Services Designers - General)" ms.custom: seo-lt-2019 ms.date: "01/19/2017" ms.prod: sql ms.prod_service: "sql-tools" ms.reviewer: "" ms.technology: ssms ms.topic: conceptual f1_keywords: - "VS.ToolsOptionsPages.Designers.Analysis_Services_Designers.General" ms.assetid: 7f976d2b-1a16-47f8-85e6-d7c2bf6a84b8 author: "markingmyname" ms.author: "maghan" --- # Options (Designers - Analysis Services Designers - General) [!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../../includes/appliesto-ss-asdb-asdw-pdw-md.md)] Use the **Designers**, **Maintenance Plans**, **Analysis Services**, **General** page to determine the default behavior of the Analysis Services Designers. ## Connectivity Query Timeout The number of seconds that the designer waits for the query to respond before generating an error. Connection Timeout The number of seconds that the designer waits for a connection to be confirmed before generating an error.
37.481481
158
0.736166
eng_Latn
0.802933
03fd4b80993fb9fee6b0bab069b32f0854bad3ed
29,491
md
Markdown
Workshop/19. Blockchain on Azure/Blockchain on Azure HOL.md
kevinvle/computerscience
18b1bce1a4e1b4eeedfea2c3722f4d7f4ea17296
[ "MIT" ]
null
null
null
Workshop/19. Blockchain on Azure/Blockchain on Azure HOL.md
kevinvle/computerscience
18b1bce1a4e1b4eeedfea2c3722f4d7f4ea17296
[ "MIT" ]
null
null
null
Workshop/19. Blockchain on Azure/Blockchain on Azure HOL.md
kevinvle/computerscience
18b1bce1a4e1b4eeedfea2c3722f4d7f4ea17296
[ "MIT" ]
1
2020-05-27T10:21:13.000Z
2020-05-27T10:21:13.000Z
<a name="HOLTitle"></a> # Blockchain-as-a-Service on Azure # --- <a name="Overview"></a> ## Overview ## [Blockchain](https://en.wikipedia.org/wiki/Blockchain) is one of the world's most talked-about technologies, and one that has the potential to fundamentally change the way we use the Internet. Originally designed for [Bitcoin](https://en.wikipedia.org/wiki/Bitcoin), Blockchain remains the technology behind that digital currency but is not limited to applications involving virtual money. In the words of Dan Tapscott, author, TED speaker, and Executive Director of the [Blockchain Research Institute](https://www.blockchainresearchinstitute.org/), "Blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions, but virtually everything of value." One of the more inventive uses for Blockchain is to implement tamper-proof digital voting systems, a concept that is being actively explored [in the U.S. and abroad](https://venturebeat.com/2016/10/22/blockchain-tech-could-fight-voter-fraud-and-these-countries-are-testing-it/). Blockchain gets its name from the manner in which it stores data. Transactions such as a transfer of money from one party to another or a vote cast for a political candidate are stored in cryptographically sealed blocks. Blocks are joined together into chains ("blockchains"), with each block in the chain containing a hash of the previous block. A blockchain acts like an electronic ledger, and rather than be stored in one place, it is replicated across countless computers (nodes) in a Blockchain network. This decentralization means that a blockchain has no single point of failure and is controlled by no single entity. The latter is especially important for a system whose primary goal is to allow private transactions to take place without involving a "trusted" third party such as a bank. Anyone can build a Blockchain network and use it to host blockchains. Microsoft Azure makes it incredibly simple to do both by supporting Blockchain-as-a-Service. A few button clicks in the Azure Portal are sufficient to deploy a network of virtual machines provisioned with popular Blockchain implementations such as [Ethereum](https://www.ethereum.org/), [Corda](https://www.corda.net/), or [Hyperledger Fabric](https://www.hyperledger.org/projects/fabric). Ethereum was one of the first general-purpose Blockchain implementations. The software is open-source and is the basis for Ethereum's own cryptocurrency known as [Ether](https://www.ethereum.org/ether). You can deploy Ethereum networks of your own and use its Blockchain implementation however you wish. Among other features, Ethereum supports [smart contracts](https://en.wikipedia.org/wiki/Smart_contract), which are written in languages such as [Solidity](https://en.wikipedia.org/wiki/Solidity) and then compiled into bytecode and deployed to the blockchain for execution. In this lab, you will deploy an Ethereum network on Azure and create your own cryptocurrency named "My Coin" to run on it. The currency will be brokered by a smart contract that allows funds to be transferred between accounts. Along the way, you will get first-hand experience running Blockchain networks on Azure, as well as writing smart contracts for Ethereum and deploying them to the network. 
<a name="Objectives"></a> ### Objectives ### In this hands-on lab, you will learn how to: - Deploy an Ethereum blockchain network on Azure - Use MetaMask to create an Ethereum wallet - Write smart contracts and deploy them to Ethereum networks - Manipulate Ethereum blockchains using Node.js <a name="Prerequisites"></a> ### Prerequisites ### - An active Microsoft Azure subscription. If you don't have one, [sign up for a free trial](http://aka.ms/WATK-FreeTrial). - [PuTTY](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) and PowerShell (Windows users only) - [Google Chrome](https://www.google.com/chrome/browser/desktop/index.html) - [Node.js](https://nodejs.org) <a name="Exercises"></a> ## Exercises ## This hands-on lab includes the following exercises: - [Exercise 1: Create a Blockchain on Azure](#Exercise1) - [Exercise 2: Create a wallet](#Exercise2) - [Exercise 3: Unlock the coinbase account (Windows)](#Exercise3) - [Exercise 4: Unlock the coinbase account (macOS and Linux) ](#Exercise4) - [Exercise 5: Deploy a smart contract](#Exercise5) - [Exercise 6: Invoke the contract from an app](#Exercise6) - [Exercise 7: Delete the Ethereum network](#Exercise7) Estimated time to complete this lab: **60** minutes. <a name="Exercise1"></a> ## Exercise 1: Create a blockchain on Azure ## Deploying a blockchain on Azure only takes a few minutes. In this exercise, you will use the Azure Portal to deploy an Ethereum Blockchain network in the cloud. 1. In your browser, navigate to the [Azure Portal](https://portal.azure.com). If you are asked to sign in, do so using your Microsoft account. 1. In the portal, click **+ New**, followed by **Blockchain** and **Ethereum Consortium Blockchain**. ![Creating an Ethereum blockchain](Images/new-blockchain.png) _Creating an Ethereum blockchain_ 1. Click the **Create** button at the bottom of the "Ethereum Consortium Blockchain" blade. 1. In the "Basics" blade, set **Resource prefix** to "blkchn" (without quotation marks), **VM user name** to "blkadmin" (without quotation marks), and the password to "Blockchain!321" (once more, without quotation marks). Make sure **Create new** is selected under **Resource group**, and enter "BlockchainResourceGroup" as the resource-group name. Select the location nearest you, and then click **OK**. ![Entering basic settings](Images/blockchain-settings-1.png) _Entering basic settings_ 1. Click **OK** at the bottom of the "Network Size and Performance" blade to accept the default settings for VM sizes, number of nodes, and so on. ![Accepting the default network settings](Images/blockchain-settings-2.png) _Accepting the default network settings_ 1. In the "Ethereum Settings" blade, set **Network ID** to **123456**, and enter "Blockchain!321" in four places as the Ethereum account password and private key passphrase. Then click **OK**. ![Entering Ethereum settings](Images/blockchain-settings-3.png) _Entering Ethereum settings_ 1. Review the settings in the "Summary" blade and click **OK**. ![Reviewing the settings](Images/blockchain-settings-4.png) _Reviewing the settings_ 1. Click the **Purchase** button to begin the deployment. ![Beginning the deployment](Images/blockchain-settings-5.png) _Beginning the deployment_ The deployment will probably take about 5 minutes to complete, but rather than wait for it to finish, proceed to Exercise 2 and begin the process of setting up a wallet. 
<a name="Exercise2"></a> ## Exercise 2: Create a wallet ## The next task is to set up a wallet and connect it to the Ethereum network deployed in Exercise 1. For this, you'll use a Google Chrome extension called [MetaMask](https://metamask.io/). MetaMask enables you to use the Ether from a Blockchain network on Ethereum-enabled Web sites as well as create an account and seed it with Ether. You won't be using Ether directly, but setting up a wallet is an easy way to create an account on the network that can be used in digital transactions. 1. If Google Chrome isn't installed on your computer, [download it and install it](https://www.google.com/chrome/browser/desktop/index.html) now. 1. Start Chrome, paste the following link into the address bar, and press **Enter**. ``` https://chrome.google.com/webstore/detail/meta-mask/nkbihfbeogaeaoehlefnkodbefgpgknn?hl=en ``` 1. Click **Add to Chrome**. ![Adding MetaMask to Chrome](Images/metamask-1.png) _Adding MetaMask to Chrome_ 1. Click **Add Extension**. ![Installing the MetaMask extension](Images/metamask-2.png) _Installing the MetaMask extension_ 1. Click the MetaMask icon to the right of the address bar, and then click **Accept**. ![Accepting the privacy notice](Images/metamask-3.png) _Accepting the privacy notice_ 1. Scroll to the bottom of the terms of use, and then click **Accept**. ![Accepting the terms of use](Images/metamask-4.png) _Accepting the terms of use_ 1. Enter "Blockchain!321" (without quotation marks) as the password in two places, and then click **Create**. ![Creating a MetaMask account](Images/metamask-5.png) _Creating a MetaMask account_ 1. Copy the 12 words presented to you into a text file and save the file for safekeeping. Then click **I've Copied It Somewhere Safe**. > You won't need this recovery information in this lab, but in the real world, these words act as a pass phrase that can be used to restore access to a MetaMask account that you have been locked out of. ![Saving MetaMask recovery information](Images/metamask-6.png) _Saving MetaMask recovery information_ 1. Return to the Azure Portal. Click **Resource groups** in the ribbon on the left, and then click the resource group that you created for the Ethereum network in [Exercise 1](#Exercise1). ![Opening the resource group](Images/open-resource-group.png) _Opening the resource group_ 1. Make sure all deployments have finished. (If they haven't, periodically click the **Refresh** button at the top of the blade until all deployments have completed.) Then click **Deployments**, followed by **microsoft-azure-blockchain...**. ![Opening the Blockchain resource](Images/open-blockchain.png) _Opening the Blockchain resource_ 1. Click the **Copy** button next to ETHEREUM-RPC-ENDPOINT under "Outputs." This URL is very important, because it allows apps to make JSON-RPC calls to the network to deploy smart contracts and perform other blockchain-related tasks. ![Copying the endpoint URL](Images/copy-endpoint.png) _Copying the endpoint URL_ 1. Return to Chrome and the MetaMask window. (If the window is no longer displayed, click the MetaMask icon to the right of the address bar to display it again.) Then click the hamburger icon to display the MetaMask menu, and select **Settings** from the menu. ![Opening MetaMask settings](Images/open-metamask-settings.png) _Opening MetaMask settings_ 1. Paste the URL on the clipboard into the "Current Network" box and and click **Save**. Then click the back arrow next to "Settings." 
![Connecting the wallet to the network](Images/enter-endpoint.png) _Connecting the wallet to the network_ 1. Return to the Azure Portal and click the **Copy** button next to ADMIN-SITE. ![Copying the admin-site link](Images/copy-site-url.png) _Copying the admin-site link_ 1. Open a new browser instance and paste the URL on the clipboard into the browser's address bar. Then press **Enter**. ![Opening the admin site](Images/open-admin-site.png) _Opening the admin site_ 1. Return to Chrome. Click the ellipsis (**...**) in the MetaMask window, and then select **Copy Address to clipboard**. The "address" you are copying is actually the account ID for Account 1, which was created automatically when you "joined" the network by pasting the Ethereum RPC endpoint URL into MetaMask. ![Copying the account address to the clipboard](Images/copy-address-to-clipboard.png) _Copying the account address to the clipboard_ 1. Return to the admin site you opened in Step 15 and paste the value on the clipboard into the **Address of Recipient** box. Then click **Submit** to seed Account 1 with 1,000 Ether. ![Bootstrapping the account with 1,000 Ether](Images/bootstrap-address.png) _Bootstrapping the account with 1,000 Ether_ 1. Return to MetaMask and click the refresh icon. Then select **Account 1** and confirm that the account now shows a balance of 1,000 Ether. ![Refreshing the wallet](Images/refresh-wallet.png) _Refreshing the wallet_ Seeding your wallet with Ether isn't strictly necessary because you won't be using the Ether in it in this lab; you will use your own cryptocurrency instead. But if you *were* deploying an Ethereum network for the purpose of transferring Ether between accounts, you now know how to get some Ether into your account for testing purposes. Where did the 1,000 Ether come from? They came from the *coinbase* account that was created when the network was created. The coinbase account holds all the Ether that haven't been transferred to individual accounts. Later, you will use Ether in this account to *fuel* the transactions that you perform via the contracts that you deploy. Before you can do that, you must unlock the account. <a name="Exercise3"></a> ## Exercise 3: Unlock the coinbase account (Windows) ## To unlock the coinbase account, you must connect to one of the Ethereum servers with SSH and execute a couple of commands. If you are running macOS or Linux, **skip to [Exercise 4](#Exercise4)** and use the built-in SSH client. If you are running Windows instead, proceed with this exercise. 1. PuTTY is a popular (and free) SSH client for Windows. If PuTTY isn't installed on your computer, [download and install it](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) now. 1. Return to the Azure Portal and click the **Copy** button next to ADMIN-SITE. ![Copying the admin-site link](Images/copy-site-url.png) _Copying the admin-site link_ 1. Start PuTTY and paste the value on the clipboard into the **Host Name (or IP address)** field. Remove "http://" from the beginning of the string, and type **3000** into the **Port** field to connect using port 3000. Then click the **Open** button to initiate a Secure Shell (SSH) connection. If you are prompted with a security warning asking if you want to update the cached key, answer yes. ![Connecting with PuTTY](Images/putty.png) _Connecting with PuTTY_ 1. A PuTTY terminal window will appear and you will be prompted to **login as**. Log in with the user name ("blkadmin") and password ("Blockchain!321") you entered in Exercise 1, Step 4. 1. 
Execute the following command in the console window to attach to the Ethereum node: ``` geth attach ``` 1. Now execute the following command to unlock the coinbase account: ``` web3.personal.unlockAccount(web3.personal.listAccounts[0],"Blockchain!321", 15000) ``` This will allow you to use the blockchain to transfer funds from the coinbase account. Make sure that the output from the command is the word "true." 1. Type **exit** into the console window to detach from Ethereum. 1. Type **exit** again to close the SSH connection and end the PuTTY session. Now that the coinbase account is unlocked, you are ready to start using the network to execute transactions on the blockchain. Proceed to [Exercise 5](#Exercise5). Exercise 4 is for macOS and Linux users only. <a name="Exercise4"></a> ## Exercise 4: Unlock the coinbase account (macOS and Linux) ## To unlock the coinbase account, you must connect to one of the Ethereum servers with SSH and execute a couple of commands. macOS and Linux users can use the built-in SSH client to connect. 1. Return to the Azure Portal and click the **Copy** button next to SSH-TO-FIRST-TX-NODE. ![Copying the SSH command](Images/copy-ssh-command.png) _Copying the SSH command_ 1. Open a terminal window and paste the command on the clipboard into the terminal window. Then press **Enter** to execute the command. If you are prompted with a security warning asking if you want to update the cached key, answer yes. 1. When prompted for a password, enter the password ("Blockchain!321") you entered in Exercise 1, Step 4. 1. Execute the following command in the terminal window to attach to the Ethereum node: ``` geth attach ``` 1. Now execute the following command to unlock the coinbase account: ``` web3.personal.unlockAccount(web3.personal.listAccounts[0],"Blockchain!321", 15000) ``` This will allow you to use the blockchain to transfer funds from the coinbase account. Make sure that the output from the command is the word "true." 1. Type **exit** into the terminal window to detach from Ethereum. Now that the coinbase account is unlocked, you are ready to start using the network to execute transactions on the blockchain. <a name="Exercise5"></a> ## Exercise 5: Deploy a smart contract ## Ethereum blockchains use "smart contracts" to broker transactions. A smart contract is essentially a program that runs on blockchain transaction nodes. Ethereum developers often use the popular [Truffle](http://truffleframework.com/) framework to develop smart contracts. In this exercise, you will set up a Truffle development environment, code and then compile a smart contract, and deploy it to the blockchain. 1. If Node.js isn't installed on your system, go to https://nodejs.org and install the latest LTS version for your operating system. > If you aren't sure whether Node.js is installed, open a Command Prompt or terminal window and type **node -v**. If you don't see a Node.js version number, then Node.js isn't installed. If a version of Node.js older than 6.0 is installed, it is highly recommended that you download and install the latest version. 1. If you are using macOS or Linux, open a terminal. If you are using Windows instead, open a PowerShell window. 1. In the terminal or PowerShell window, use the following command to create a directory named "truffle" in the location of your choice: ``` mkdir truffle ``` 1. Now change to the "truffle" directory: ``` cd truffle ``` 1. Use the following command to install Truffle: ``` npm install -g truffle ``` 1. 
Most smart contracts for Ethereum networks are written in a language called Solidity, which is similar to JavaScript. Use the following command to install the Solidity compiler: ``` npm install -g solc ``` 1. Now use the following command to initialize a Truffle project in the current directory. This will download a few Solidity scripts and install them, and create scaffolding in the "truffle" folder. ``` truffle init ``` 1. Use your favorite text editor or IDE to open the file named **truffle.js** in the "truffle" folder. Return to the Azure Portal and copy the Ethereum RPC endpoint to the clipboard as you did in Exercise 2, Step 11. Replace "localhost" on line 4 of **truffle.js** with the URL on the clipboard, and remove the leading "http://" and the trailing ":8545," as shown below. Then save the modified file. ```javascript module.exports = { networks: { development: { host: "blkchn2o4.eastus.cloudapp.azure.com", port: 8545, network_id: "*" // Match any network id } } }; ``` 1. Add a new contract to the subdirectory named "contracts" (which was created when you ran ```truffle init```) by creating a text file named **myCoin.sol** in that directory, pasting in the following code, and then saving the file: ``` pragma solidity ^0.4.4; // Declares the contract contract myCoin { // This is a mapping that works like a dictionary or associated array in other languages. mapping (address => uint) balances; // This registers an event event Transfer(address indexed _from, address indexed _to, uint256 _value); // The contract constructor, which is called when the contract is deployed to the blockchain. The contract is persistent on the blockchain, so it remains until it is removed. function myCoin() { balances[tx.origin] = 100000; } // This method modifies the blockchain. The sender is required to fuel the transaction in Ether. function sendCoin(address receiver, uint amount) returns(bool sufficient) { if (balances[msg.sender] < amount) return false; balances[msg.sender] -= amount; balances[receiver] += amount; Transfer(msg.sender, receiver, amount); return true; } // This method does not modify the blockchain, so it does not require an account to fuel for the call. function getBalance(address addr) returns(uint) { return balances[addr]; } } ``` This contract, named "myCoin," is written in Solidity. Solidity files are compiled to JSON files containing interface definitions as well as bytecode that is used when the contracts are deployed. The contract contains a function named ```sendCoin``` that, when called, transfers the specified number of coins from the sender's account to the receiver's account. 1. Create a new file named **3_deploy_myCoin.js** in the "migrations" subdirectory. Paste the following code into the file and save it: ```javascript var myCoin = artifacts.require("./myCoin.sol"); module.exports = function(deployer) { deployer.deploy(myCoin); }; ``` This is the code that deploys the "myCoin" contract to the blockchain. 1. Return to the terminal or PowerShell window and execute the following command to compile the contract: ``` truffle compile ``` 1. Now use the following command to deploy the contract to the blockchain: ``` truffle deploy ``` The contract is now present in the blockchain and waiting for its ```sendCoin``` function to be called to transfer funds. All we lack is a mechanism for calling that function using RPC. In the next exercise, you will close the loop by using a Node.js app to invoke the contract. 
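If you are curious what ```truffle compile``` actually produced, the sketch below (my addition, not part of the lab; it assumes Truffle's default artifact path ```build/contracts/myCoin.json```) prints the generated interface definition (ABI), which is the same JSON that the app in the next exercise embeds as a string:

```python
import json

# Truffle writes one artifact per contract under build/contracts/.
with open("build/contracts/myCoin.json") as f:
    artifact = json.load(f)

print(json.dumps(artifact["abi"], indent=2))  # the contract's interface definition
print(artifact["bytecode"][:24] + "...")      # deployment bytecode, truncated
```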
<a name="Exercise6"></a> ## Exercise 6: Invoke the contract from an app ## Smart contracts are designed to be used by applications that use the blockchain for secure transactions. In this exercise, you will create a Node.js app that uses the "myCoin" contract and then run it to broker an exchange of My Coin currency — specifically, to transfer funds from the coinbase account to the account you created in [Exercise 2](#Exercise2) (Account 1). The app will use a library named [web3.js](https://github.com/ethereum/web3.js/), which wraps the Ethereum RPC API and dramatically simplifies code for interacting with smart contracts. Note that there are also web3 libraries available for other languages, including Java and Python. 1. In a terminal or PowerShell window, use the following command to create a directory named "use-contract" in the location of your choice: ``` mkdir use-contract ``` 1. Make "use-contract" the current directory: ``` cd use-contract ``` 1. Use the following command to install the NPM package named "web3:" ``` npm install web3@^0.20.0 ``` 1. Create a new text file named **use-contract.js** in the "use-contract" folder. Then paste in the following code: ```javascript var Web3 = require("web3"); var AzureBlockchainRPC = "AZURE_RPC_URL"; var account1 = "ACCOUNT1_ADDRESS"; var contractAddress = "CONTRACT_ADDRESS"; let web3 = new Web3(); web3.setProvider(new web3.providers.HttpProvider(AzureBlockchainRPC)); // The abi object defines the contract interface. Web3 uses this to build the contract interface. var abi = JSON.parse('[{"constant":false,"inputs":[{"name":"receiver","type":"address"},{"name":"amount","type":"uint256"}],"name":"sendCoin","outputs":[{"name":"sufficient","type":"bool"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"addr","type":"address"}],"name":"getBalance","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"inputs":[],"payable":false,"type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"name":"_from","type":"address"},{"indexed":true,"name":"_to","type":"address"},{"indexed":false,"name":"_value","type":"uint256"}],"name":"Transfer","type":"event"}]'); let myCoinContract = web3.eth.contract(abi); let myCoinInstance = myCoinContract.at(contractAddress); // This sets up a listener for the Transfer event. var transferEvent = myCoinInstance.Transfer( {}, {fromBlock: 0, toBlock: 'latest'}); // Watching for transfer.... transferEvent.watch(function(error, result) { if (!error) { console.log("Coin Sent!\n\nChecking balance for coin base..."); console.log(myCoinInstance.getBalance.call(web3.eth.coinbase)); console.log("Checking balance for account1..."); console.log(myCoinInstance.getBalance.call(account1)); } else { console.log("An error occurred."); console.log(error); } process.exit(); }); web3.eth.defaultAccount = web3.eth.coinbase; console.log("Sending some coin..."); console.log(myCoinInstance.sendCoin(account1, 1000, {from: web3.eth.coinbase})); console.log("Checking balance for coin base...") console.log(myCoinInstance.getBalance.call(web3.eth.coinbase)); console.log("Checking balance for account1...") console.log(myCoinInstance.getBalance.call(account1)); console.log("Waiting for event to fire..."); ``` This code, when executed, transfers 1,000 My Coin from the coinbase account to Account 1. Notice the asynchronous nature of the call. Before calling ```sendCoin``` to invoke the contract, the code registers a handler for the ```Transfer``` event that fires after the transaction has completed. 
1. Replace AZURE_RPC_URL on line 3 of **use-contract.js** with the Ethereum RPC endpoint obtained from the Azure Portal (see Exercise 2, Step 11). 1. In the PowerShell or terminal window, CD back to the "truffle" directory that you created in the previous exercise. Then use the following command to list the addresses of all the smart contracts in the project, including the "myCoin" contract and some sample contracts that were created when you ran ```truffle init```: ``` truffle networks ``` 1. Replace CONTRACT_ADDRESS on line 5 of **use-contract.js** with the "myCoin" address in the output. ![Retrieving the contract address](Images/copy-contracat-address.png) _Retrieving the contract address_ 1. Return to the MetaMask window in Chrome and copy the address for **Account 1** to the clipboard as you did in Exercise 2, Step 16. Then replace ACCOUNT1_ADDRESS on line 4 of **use-contract.js** with the address on the clipboard and save your changes to **use-contract.js**. ![Copying the account address to the clipboard](Images/copy-address-to-clipboard.png) _Copying the account address to the clipboard_ 1. In the PowerShell or terminal window, CD back to the "use-contract" directory. Then execute the following command to invoke the contract and transfer My Coin: ``` node use-contract.js ``` 1. Watch the output. Observe that before the ```Transfer``` event fires, the accounts hold their original balances even though the ```sendCoin``` method has already been invoked. Checking the balances again in the ```Transfer``` event handler reveals the final, post-transaction balances, and shows that 1,000 My Coin were transferred to Account 1. ``` Sending some coin... 0xb677604426c9589bb1072f1ec517d2ad3d5e37c56f0b5d9a3b2d689a4bd962ad Checking balance for coin base... { [String: '100000'] s: 1, e: 5, c: [ 100000 ] } Checking balance for account1... { [String: '0'] s: 1, e: 0, c: [ 0 ] } Waiting for event to fire... Coin Sent! Checking balance for coin base... { [String: '99000'] s: 1, e: 4, c: [ 99000 ] } Checking balance for account1... { [String: '1000'] s: 1, e: 3, c: [ 1000 ] } ``` If you'd like, you can run the app again to transfer another 1,000 My Coin. Each time you run the app and invoke the contract, the balance in the coinbase account will decrease by 1,000, and the balance in Account 1 will increase by the same amount. <a name="Exercise7"></a> ## Exercise 7: Delete the Ethereum network ## Resource groups are a useful feature of Azure because they simplify the task of managing related resources. One of the most practical reasons to use resource groups is that deleting a resource group deletes all of the resources it contains. Rather than delete those resources one by one, you can delete them all at once. In this exercise, you will delete the resource group created in [Exercise 1](#Exercise1) when you created the Ethereum network. Deleting the resource group deletes everything in it and prevents any further charges from being incurred for it. 1. Return to the blade for the resource group you created in Exercise 1. Then click the **Delete** button at the top of the blade. ![Deleting a resource group](Images/delete-resource-group.png) _Deleting a resource group_ 1. For safety, you are required to type in the resource group's name. (Once deleted, a resource group cannot be recovered.) Type the name of the resource group. Then click the **Delete** button to remove all traces of this lab from your Azure subscription. After a few minutes, the blockchain and all of the associated resources will be deleted. 
Billing stops when you click the **Delete** button, so you're not charged for the time required to delete the resources. Similarly, billing doesn't start until the resources are fully and successfully deployed. <a name="Summary"></a> ## Summary ## This is just one example of the kinds of apps you can build with Blockchain, and with Ethereum Blockchain networks in particular. It also demonstrates how easily Blockchain networks are deployed on Azure. For more on Azure blockchains and on Ethereum networks and their capabilities, refer to https://www.ethereum.org/. --- Copyright 2017 Microsoft Corporation. All rights reserved. Except where otherwise noted, these materials are licensed under the terms of the MIT License. You may use them according to the license as is most appropriate for your project. The terms of this license can be found at https://opensource.org/licenses/MIT.
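## Appendix: the same call from Python ##

Exercise 6 noted that web3 libraries also exist for Java and Python. For comparison only, here is a minimal, hypothetical Python equivalent of the balance check in **use-contract.js**, written against the web3.py package (v6); this is my sketch, not part of the lab, and the three placeholder values are the same ones you filled in earlier:

```python
from web3 import Web3

# Placeholders -- fill in the same values you used in use-contract.js.
AZURE_RPC_URL = "AZURE_RPC_URL"
ACCOUNT1_ADDRESS = "ACCOUNT1_ADDRESS"
CONTRACT_ADDRESS = "CONTRACT_ADDRESS"

# The getBalance entry from the same ABI that use-contract.js embeds.
ABI = [{
    "constant": False,
    "inputs": [{"name": "addr", "type": "address"}],
    "name": "getBalance",
    "outputs": [{"name": "", "type": "uint256"}],
    "payable": False,
    "type": "function",
}]

w3 = Web3(Web3.HTTPProvider(AZURE_RPC_URL))
coin = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT_ADDRESS), abi=ABI)

# .call() runs the function on a node without submitting a transaction.
balance = coin.functions.getBalance(Web3.to_checksum_address(ACCOUNT1_ADDRESS)).call()
print("Account 1 balance:", balance)
```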
53.522686
1,002
0.75206
eng_Latn
0.993807
03fdfb8738d809d65ba264196c512eed4fe6795b
1,700
md
Markdown
_posts/2020-12-23-IBM_C_Security.md
elephantoid/elefunt-devlog
a014eace0c989a527d19536b28601ec0b3eac656
[ "Apache-2.0" ]
null
null
null
_posts/2020-12-23-IBM_C_Security.md
elephantoid/elefunt-devlog
a014eace0c989a527d19536b28601ec0b3eac656
[ "Apache-2.0" ]
2
2021-06-17T06:13:01.000Z
2021-07-14T10:39:30.000Z
_posts/2020-12-23-IBM_C_Security.md
elephantoid/elefunt-devlog
a014eace0c989a527d19536b28601ec0b3eac656
[ "Apache-2.0" ]
null
null
null
--- title: "[IBM C:louders] Secrets Manager 생성하기" description: "IBM Clouders mission" layout: post toc: false comments: true search_exclude: true categories: [IBM Clouders] image: "/images/ibm_clouders.png" --- # IBM Cloud Security IBM Cloud의 보안은 처음 뱃지 획득을 위해 Course를 수강했을 때도 다국적기업으로 B2B를 많이 하는 IBM특성상 보안수준이 높다고 알려져있다. 점차 개인정보 및 데이터 보안에 대한 문제가 대두되고있으며 이번 12월 뱃지 미션에서 나왔던 ReplicaSet의 Secrets를 관리해보도록 하겠습니다. ConfigMaps and Secrets > 컨피그맵은 키-값 쌍으로 기밀이 아닌 데이터를 저장하는 데 사용하는 API 오브젝트이다. 파드(Pod)는 볼륨에서 환경 변수, 커맨드-라인 인수 또는 구성 파일로 컨피그맵을 사용할 수 있다. \쿠버네티스 시크릿을 사용하면 비밀번호, OAuth 토큰, ssh 키와 같은 민감한 정보를 저장하고 관리할 수 ​​있다. 기밀 정보를 시크릿에 저장하는 것이 파드(Pod) 정의나 컨테이너 이미지 내에 그대로 두는 것보다 안전하고 유연하다. ## 요약 Secrets Manager를 사용하여 IBM Cloud 서비스 또는 사용자 빌드 애플리케이션에서 사용되는 시크릿을 작성하고 대여하며, 중앙에서 관리할 수 있습니다. 시크릿은 IBM Cloud에 있는 오픈 소스 HashiCorp Vault의 전용 인스턴스에 저장됩니다. ## 기능 **스케일링 시 중앙에서 시크릿 관리** 오픈 소스 HashiCorp Vault에 빌드된 전용 시크릿 저장소에서 애플리케이션 시크릿을 관리합니다. Secrets Manager는 몇 개의 시크릿만 있으면 되는 개발자, 또는 수백만 개의 시크릿을 필요로 하는 대형 엔터프라이즈의 요구에 맞게 스케일링할 수 있습니다. **동적으로 시크릿 작성** 지원되는 IBM Cloud 오퍼링을 사용할 때 API 키, 비밀번호 및 데이터베이스 구성과 같은 시간 기반 시크릿을 작성하고 대여하는 데 도움이 되는 플랫폼 통합을 사용하여 시간을 절약합니다. **사용자 시크릿 가져오기** 시크릿 온프레미스를 생성하기 위해 엄격한 가이드라인을 준수해야 하는 경우 Secrets Manager로 시크릿을 안전하게 가져와서 기존 시크릿 관리 인프라를 확장할 수 있습니다. **시크릿 그룹으로 액세스 정의** Secrets Manager는 Cloud IAM(Identity and Access Management)과 통합되어 보안 관리자가 시크릿 그룹을 사용하여 시크릿을 구성하고 시크릿에 대한 액세스 권한을 부여할 수 있습니다. **스토리지에서 시크릿 보호** IBM Key Protect를 사용하여 저장 시 시크릿 보안을 향상시킵니다. Key Protect 암호화 키를 Secrets Manager 서비스 인스턴스의 신뢰 루트로 선택하면 시크릿에 대한 고급 고객 관리 암호화를 사용할 수 있습니다 ## 계정 업그레이드 실패.. 계정을 업그레이드 해야 이 서비스를 이용할 수 있다는 것을 알게되었습니다. 하지만 아직 신용카드도 없을뿐더러 Mastercard가 찍힌 체크카드로는 계정을 업그레이드 할 수 없어 자세한 실습을해보지 못했습니다.
35.416667
242
0.74
kor_Hang
1.00001
03fea050191c07d5e1150211325c0b6f83dab388
44
md
Markdown
README.md
kettenieF20/Data-science-introduction
893cfdecbe5a9ed94203984cd1e2dda76f47a219
[ "MIT" ]
null
null
null
README.md
kettenieF20/Data-science-introduction
893cfdecbe5a9ed94203984cd1e2dda76f47a219
[ "MIT" ]
null
null
null
README.md
kettenieF20/Data-science-introduction
893cfdecbe5a9ed94203984cd1e2dda76f47a219
[ "MIT" ]
null
null
null
# Data-science-introduction first homework
14.666667
27
0.818182
kor_Hang
0.475813
03fec6edd0437b11161dfdc319b38a73cf116cfb
1,159
md
Markdown
posts/blog/ros/2022-01-11-install_noetic.md
bigbigpark/bigbigpark.github.io
66ec5b7d0a5fe82eca79449dae5ee25d1f6ba196
[ "CC0-1.0" ]
3
2021-05-03T23:00:29.000Z
2022-02-10T02:19:15.000Z
posts/blog/ros/2022-01-11-install_noetic.md
bigbigpark/bigbigpark.github.io
66ec5b7d0a5fe82eca79449dae5ee25d1f6ba196
[ "CC0-1.0" ]
null
null
null
posts/blog/ros/2022-01-11-install_noetic.md
bigbigpark/bigbigpark.github.io
66ec5b7d0a5fe82eca79449dae5ee25d1f6ba196
[ "CC0-1.0" ]
null
null
null
# How to install ROS noetic on Ubuntu 20.04 ref. http://wiki.ros.org/noetic/Installation <br/> ## 1. Installation Set up your PC to accept software from packages.ros.org ~~~bash $ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list' ~~~ <br/> Register the key ~~~bash $ sudo apt install curl # if you haven't already installed curl $ curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add - ~~~ <br/> Update the package index ~~~bash $ sudo apt update ~~~ <br/> Install ROS ~~~bash $ sudo apt install ros-noetic-desktop-full ~~~ <br/> Set up the environment variables ~~~bash $ gedit ~/.bashrc # add the following line at the very bottom of the file source /opt/ros/noetic/setup.bash ~~~ <br/> ## 2. Dependencies for building packages You need to install these dependencies to create and build a ROS workspace ~~~bash $ sudo apt install python3-rosdep python3-rosinstall python3-rosinstall-generator python3-wstool build-essential $ sudo apt install python3-rosdep ~~~ <br/> ## 3. Initialize rosdep Before using ROS tools, you must initialize rosdep. rosdep makes it easy to install the dependencies of the sources you want to compile. This only needs to be done once ~~~bash $ sudo rosdep init $ rosdep update ~~~
15.051948
126
0.700604
kor_Hang
0.980872
03fee264e0c9447958391f246c157a9777960583
469
md
Markdown
README.md
maysrp/ssd1306_font
86f4558b2aa3046060b77faa76f85c7314962af4
[ "Apache-2.0" ]
1
2022-03-11T09:39:02.000Z
2022-03-11T09:39:02.000Z
README.md
maysrp/ssd1306_font
86f4558b2aa3046060b77faa76f85c7314962af4
[ "Apache-2.0" ]
null
null
null
README.md
maysrp/ssd1306_font
86f4558b2aa3046060b77faa76f85c7314962af4
[ "Apache-2.0" ]
null
null
null
# ssd1306_font SSD1306 OLED fonts for MicroPython. Upload the following files to your MicroPython board (ESP8266, ESP32, RP2040): * ASC16 * ASC24 * ASC32 * font.py * ssd1306 ``` from machine import I2C, Pin from ssd1306 import SSD1306_I2C from font import Font import time i2c = I2C(scl=Pin(0), sda=Pin(2)) display = SSD1306_I2C(128, 32, i2c) f = Font(display) f.text("8",0,0,8) #8 pix f.text("16",8,0,16) #16 pix f.text("24",24,0,24) #24 pix f.text("32",48,0,32) #32 pix f.show() ```
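As a possible follow-up to the usage example above (my sketch, assuming the same wiring, and assuming the standard MicroPython ssd1306 driver whose framebuffer method `fill()` comes from that driver rather than from this repo's `Font` class), clearing the display between updates might look like this:

```python
from machine import I2C, Pin
from ssd1306 import SSD1306_I2C
from font import Font
import time

i2c = I2C(scl=Pin(0), sda=Pin(2))
display = SSD1306_I2C(128, 32, i2c)
f = Font(display)

for n in range(10):
    display.fill(0)           # clear the framebuffer (standard ssd1306 driver call)
    f.text(str(n), 0, 0, 32)  # draw the counter using the 32-pixel font
    f.show()
    time.sleep(1)
```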
16.172414
65
0.686567
kor_Hang
0.234108
03ff25143a529805d66b10937686cca5175cbb45
2,390
md
Markdown
README.md
ebellocchia/aes_cipher_app
cad38c9ebd94317377fb173b0f93f75d44a03a42
[ "MIT" ]
null
null
null
README.md
ebellocchia/aes_cipher_app
cad38c9ebd94317377fb173b0f93f75d44a03a42
[ "MIT" ]
null
null
null
README.md
ebellocchia/aes_cipher_app
cad38c9ebd94317377fb173b0f93f75d44a03a42
[ "MIT" ]
null
null
null
# AES Cipher application ## Introduction Simple application based on my [aes_cipher](https://github.com/ebellocchia/aes_cipher) library. ## Installation To run the application, just install the [aes_cipher](https://github.com/ebellocchia/aes_cipher) library; refer to its instructions. ## Usage Basic usage: python aes_cipher_app.py -m <enc:dec> -p <password1,password2,...> -i <input_files_or_folder> -o <output_folder> [-s <salt>] [-t <itr_num>] [-v] [-h] Parameter descriptions: |Short name|Long name|Description| |---|---|---| |-m|--mode|Operational mode: *enc* for encrypting, *dec* for decrypting| |-p|--password|Password used for encrypting/decrypting. It can be a single password or a list of passwords separated by commas| |-i|--input|Input to be encrypted/decrypted. It can be a single file, a comma-separated list of files, or a folder (in which case all files in the folder will be encrypted/decrypted)| |-o|--output|Output folder where the encrypted/decrypted files will be saved| |-s|--salt|Optional: custom salt for master key and IV derivation; otherwise, the default salt "[]=?AeS_CiPhEr><()" will be used| |-t|--iteration|Optional: number of iterations for the algorithm; otherwise, the default value 524288 will be used| |-v|--verbose|Optional: enable verbose mode| |-h|--help|Optional: print usage and exit| **NOTE:** the password shall not contain spaces or commas (a comma is interpreted as a separator between multiple passwords) ## Examples Encrypt a file one time with the given password and salt. If *input_file* is a folder, all the files inside the folder will be encrypted: python aes_cipher_app.py -m enc -p test_pwd -i input_file -o encrypted -s test_salt Decrypt the previous file: python aes_cipher_app.py -m dec -p test_pwd -i encrypted -o decrypted -s test_salt Encrypt multiple files one time with the given password and salt. If one of the input files is a directory, it will be discarded: python aes_cipher_app.py -m enc -p test_pwd -i input_file1,input_file2,input_file3 -o encrypted -s test_salt Encrypt a file 3 times using 3 passwords with the default salt and a custom number of iterations: python aes_cipher_app.py -m enc -p test_pwd1,test_pwd2,test_pwd3 -t 131072 -i input_file -o encrypted Decrypt the previous file: python aes_cipher_app.py -m dec -p test_pwd1,test_pwd2,test_pwd3 -t 131072 -i encrypted -o decrypted
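If you want to drive the CLI from another Python script rather than a shell, a minimal sketch (my addition; it uses only the flags documented above and assumes `aes_cipher_app.py` is in the current directory) could look like this:

```python
import subprocess

# Encrypt, then decrypt, mirroring the first two shell examples above.
subprocess.run(
    ["python", "aes_cipher_app.py", "-m", "enc", "-p", "test_pwd",
     "-i", "input_file", "-o", "encrypted", "-s", "test_salt"],
    check=True,  # raise if the tool exits with a non-zero status
)
subprocess.run(
    ["python", "aes_cipher_app.py", "-m", "dec", "-p", "test_pwd",
     "-i", "encrypted", "-o", "decrypted", "-s", "test_salt"],
    check=True,
)
```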
45.09434
186
0.753975
eng_Latn
0.948564
03ffb493da4651218fbe457934948f7f91b56cf0
1,187
md
Markdown
docs/connect/python/pyodbc/python-sql-driver-pyodbc.md
rsanderson2350/sql-docs
3206a31870f8febab7d1718fa59fe0590d4d45db
[ "CC-BY-4.0", "MIT" ]
1
2019-07-05T14:10:29.000Z
2019-07-05T14:10:29.000Z
docs/connect/python/pyodbc/python-sql-driver-pyodbc.md
rsanderson2350/sql-docs
3206a31870f8febab7d1718fa59fe0590d4d45db
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/connect/python/pyodbc/python-sql-driver-pyodbc.md
rsanderson2350/sql-docs
3206a31870f8febab7d1718fa59fe0590d4d45db
[ "CC-BY-4.0", "MIT" ]
1
2019-09-16T00:08:14.000Z
2019-09-16T00:08:14.000Z
--- title: "Python SQL Driver - pyodbc | Microsoft Docs" ms.custom: "" ms.date: "08/09/2017" ms.prod: "sql-non-specified" ms.prod_service: "drivers" ms.service: "" ms.component: "python" ms.reviewer: "" ms.suite: "sql" ms.technology: - "drivers" ms.tgt_pltfrm: "" ms.topic: "article" ms.assetid: fdb60557-006c-4eb5-9cef-2eb392e862de caps.latest.revision: 3 author: "MightyPen" ms.author: "genemi" manager: "jhubbard" ms.workload: "Active" --- # Python SQL Driver - pyodbc ![Download-DownArrow-Circled](../../../ssdt/media/download.png)[To install SQL driver for Python](../../sql-connection-libraries.md#anchor-20-drivers-relational-access) ## Getting Started * [Step 1: Configure development environment for pyodbc Python development](step-1-configure-development-environment-for-pyodbc-python-development.md) * [Step 2: Create a SQL database for pyodbc Python development](step-2-create-a-sql-database-for-pyodbc-python-development.md) * [Step 3: Proof of concept connecting to SQL using pyodbc](step-3-proof-of-concept-connecting-to-sql-using-pyodbc.md) ## Documentation * [pyodbc documentation](http://mkleehammer.github.io/pyodbc/)
33.914286
169
0.719461
eng_Latn
0.269491
ff001feb6610e436fed33111d1f7172e0d71c6ce
16,925
md
Markdown
articles/azure-functions/functions-bindings-notification-hubs.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-functions/functions-bindings-notification-hubs.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-functions/functions-bindings-notification-hubs.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Notification Hubs bindningar för Azure Functions description: Lär dig hur du använder Azure Notification Hub-bindning i Azure Functions. author: craigshoemaker ms.topic: reference ms.date: 11/21/2017 ms.author: cshoe ms.openlocfilehash: 211f8c8a203b81a4df6a8e9515b403f99cec572a ms.sourcegitcommit: 849bb1729b89d075eed579aa36395bf4d29f3bd9 ms.translationtype: MT ms.contentlocale: sv-SE ms.lasthandoff: 04/28/2020 ms.locfileid: "79277289" --- # <a name="notification-hubs-output-binding-for-azure-functions"></a>Notification Hubs utgående bindning för Azure Functions Den här artikeln beskriver hur du skickar push-meddelanden med hjälp av [Azure Notification Hubs](../notification-hubs/notification-hubs-push-notification-overview.md) -bindningar i Azure Functions. Azure Functions stöder utgående bindningar för Notification Hubs. Azure Notification Hubs måste konfigureras för den plattforms meddelande tjänst (PNS) som du vill använda. Information om hur du hämtar push-meddelanden i din klient app från Notification Hubs finns i [komma igång med Notification Hubs](../notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md) och välj mål klient plattform i list rutan längst upp på sidan. [!INCLUDE [intro](../../includes/functions-bindings-intro.md)] > [!IMPORTANT] > Google har [föråldrade Google Cloud Messaging (GCM) i stället för Firebase Cloud Messaging (FCM)](https://developers.google.com/cloud-messaging/faq). Den här utgående bindningen stöder inte FCM. Om du vill skicka meddelanden med hjälp av FCM kan du använda [Firebase-API: et](https://firebase.google.com/docs/cloud-messaging/server#choosing-a-server-option) direkt i din funktion eller använda [mall meddelanden](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). ## <a name="packages---functions-1x"></a>Paket-funktioner 1. x Notification Hubs-bindningarna finns i [Microsoft. Azure. WebJobs. Extensions. NotificationHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.NotificationHubs) NuGet-paketet, version 1. x. Käll koden för paketet finns i [Azure-WebJobs-SDK-Extensions GitHub-](https://github.com/Azure/azure-webjobs-sdk-extensions/tree/v2.x/src/WebJobs.Extensions.NotificationHubs) lagringsplatsen. [!INCLUDE [functions-package](../../includes/functions-package.md)] ## <a name="packages---functions-2x-and-higher"></a>Paket-funktioner 2. x och högre Den här bindningen är inte tillgänglig i functions 2. x och högre. ## <a name="example---template"></a>Exempel-mall De meddelanden som du skickar kan vara interna meddelanden eller [mal meddelanden](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). Interna meddelanden riktar sig till en specifik klient plattform enligt `platform` konfigurationen i egenskapen för utgående bindning. Ett mal Lav besked kan användas för att rikta in flera plattformar. 
Se språkspecifika exempel: * [C#-skript parameter](#c-script-template-example---out-parameter) * [C#-skript – asynkron](#c-script-template-example---asynchronous) * [C#-skript – JSON](#c-script-template-example---json) * [C#-skript – biblioteks typer](#c-script-template-example---library-types) * [B #](#f-template-example) * [JavaScript](#javascript-template-example) ### <a name="c-script-template-example---out-parameter"></a>Exempel på skript mal len C#-parameter Det här exemplet skickar ett meddelande för en [mall registrering](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) som innehåller `message` en plats hållare i mallen. ```cs using System; using System.Threading.Tasks; using System.Collections.Generic; public static void Run(string myQueueItem, out IDictionary<string, string> notification, TraceWriter log) { log.Info($"C# Queue trigger function processed: {myQueueItem}"); notification = GetTemplateProperties(myQueueItem); } private static IDictionary<string, string> GetTemplateProperties(string message) { Dictionary<string, string> templateProperties = new Dictionary<string, string>(); templateProperties["message"] = message; return templateProperties; } ``` ### <a name="c-script-template-example---asynchronous"></a>Exempel på C#-skript mal len – asynkron Om du använder asynkron kod tillåts inte out-parametrar. I det här fallet `IAsyncCollector` använder du för att returnera meddelandet från en mall. Följande kod är ett asynkront exempel på koden ovan. ```cs using System; using System.Threading.Tasks; using System.Collections.Generic; public static async Task Run(string myQueueItem, IAsyncCollector<IDictionary<string,string>> notification, TraceWriter log) { log.Info($"C# Queue trigger function processed: {myQueueItem}"); log.Info($"Sending Template Notification to Notification Hub"); await notification.AddAsync(GetTemplateProperties(myQueueItem)); } private static IDictionary<string, string> GetTemplateProperties(string message) { Dictionary<string, string> templateProperties = new Dictionary<string, string>(); templateProperties["user"] = "A new user wants to be added : " + message; return templateProperties; } ``` ### <a name="c-script-template-example---json"></a>C#-skript mal len exempel – JSON Det här exemplet skickar ett meddelande för en [mall registrering](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) som innehåller `message` en plats hållare i mallen med en giltig JSON-sträng. ```cs using System; public static void Run(string myQueueItem, out string notification, TraceWriter log) { log.Info($"C# Queue trigger function processed: {myQueueItem}"); notification = "{\"message\":\"Hello from C#. Processed a queue item!\"}"; } ``` ### <a name="c-script-template-example---library-types"></a>C#-skript mal len exempel – biblioteks typer Det här exemplet visar hur du använder typer som definieras i [Microsoft Azure Notification Hubs-biblioteket](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/). 
```cs #r "Microsoft.Azure.NotificationHubs" using System; using System.Threading.Tasks; using Microsoft.Azure.NotificationHubs; public static void Run(string myQueueItem, out Notification notification, TraceWriter log) { log.Info($"C# Queue trigger function processed: {myQueueItem}"); notification = GetTemplateNotification(myQueueItem); } private static TemplateNotification GetTemplateNotification(string message) { Dictionary<string, string> templateProperties = new Dictionary<string, string>(); templateProperties["message"] = message; return new TemplateNotification(templateProperties); } ``` ### <a name="f-template-example"></a>Exempel på F #-mall Det här exemplet skickar ett meddelande för en [mall registrering](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) som `location` innehåller `message`och. ```fsharp let Run(myTimer: TimerInfo, notification: byref<IDictionary<string, string>>) = notification = dict [("location", "Redmond"); ("message", "Hello from F#!")] ``` ### <a name="javascript-template-example"></a>Exempel på JavaScript-mall Det här exemplet skickar ett meddelande för en [mall registrering](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md) som `location` innehåller `message`och. ```javascript module.exports = function (context, myTimer) { var timeStamp = new Date().toISOString(); if (myTimer.IsPastDue) { context.log('Node.js is running late!'); } context.log('Node.js timer trigger function ran!', timeStamp); context.bindings.notification = { location: "Redmond", message: "Hello from Node!" }; context.done(); }; ``` ## <a name="example---apns-native"></a>Exempel – APN Native Det här C#-skript exemplet visar hur du skickar ett internt APN-meddelande. ```cs #r "Microsoft.Azure.NotificationHubs" #r "Newtonsoft.Json" using System; using Microsoft.Azure.NotificationHubs; using Newtonsoft.Json; public static async Task Run(string myQueueItem, IAsyncCollector<Notification> notification, TraceWriter log) { log.Info($"C# Queue trigger function processed: {myQueueItem}"); // In this example the queue item is a new user to be processed in the form of a JSON string with // a "name" value. // // The JSON format for a native APNS notification is ... // { "aps": { "alert": "notification message" }} log.LogInformation($"Sending APNS notification of a new user"); dynamic user = JsonConvert.DeserializeObject(myQueueItem); string apnsNotificationPayload = "{\"aps\": {\"alert\": \"A new user wants to be added (" + user.name + ")\" }}"; log.LogInformation($"{apnsNotificationPayload}"); await notification.AddAsync(new AppleNotification(apnsNotificationPayload)); } ``` ## <a name="example---wns-native"></a>Exempel – WNS Native Det här C#-skript exemplet visar hur du använder typer som definierats i [Microsoft Azure Notification Hubs-biblioteket](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs/) för att skicka en inbyggd WNS popup-avisering. ```cs #r "Microsoft.Azure.NotificationHubs" #r "Newtonsoft.Json" using System; using Microsoft.Azure.NotificationHubs; using Newtonsoft.Json; public static async Task Run(string myQueueItem, IAsyncCollector<Notification> notification, TraceWriter log) { log.Info($"C# Queue trigger function processed: {myQueueItem}"); // In this example the queue item is a new user to be processed in the form of a JSON string with // a "name" value. // // The XML format for a native WNS toast notification is ... 
// <?xml version="1.0" encoding="utf-8"?> // <toast> // <visual> // <binding template="ToastText01"> // <text id="1">notification message</text> // </binding> // </visual> // </toast> log.Info($"Sending WNS toast notification of a new user"); dynamic user = JsonConvert.DeserializeObject(myQueueItem); string wnsNotificationPayload = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" + "<toast><visual><binding template=\"ToastText01\">" + "<text id=\"1\">" + "A new user wants to be added (" + user.name + ")" + "</text>" + "</binding></visual></toast>"; log.Info($"{wnsNotificationPayload}"); await notification.AddAsync(new WindowsNotification(wnsNotificationPayload)); } ``` ## <a name="attributes"></a>Attribut Använd attributet [NotificationHub](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.NotificationHubs/NotificationHubAttribute.cs) i [C#-klass bibliotek](functions-dotnet-class-library.md). Attributets konstruktor parametrar och egenskaper beskrivs i [konfigurations](#configuration) avsnittet. ## <a name="configuration"></a>Konfiguration I följande tabell förklaras de egenskaper för bindnings konfiguration som du anger i filen *Function. JSON* och `NotificationHub` attributet: |function. JSON-egenskap | Attributets egenskap |Beskrivning| |---------|---------|----------------------| |**bastyp** |saknas| Måste anges till `notificationHub`. | |**riktning** |saknas| Måste anges till `out`. | |**Namn** |saknas| Variabel namn som används i funktions koden för Notification Hub-meddelandet. | |**tagExpression** |**TagExpression** | Med tagg uttryck kan du ange att meddelanden ska skickas till en uppsättning enheter som har registrerats för att ta emot meddelanden som matchar etikett uttrycket. Mer information finns i avsnittet om [Routning och tagg uttryck](../notification-hubs/notification-hubs-tags-segment-push-message.md). | |**hubName** | **HubName** | Namnet på resursen för Notification Hub i Azure Portal. | |**anslutningen** | **ConnectionStringSetting** | Namnet på en app-inställning som innehåller en Notification Hubs anslutnings sträng. Anslutnings strängen måste anges till *DefaultFullSharedAccessSignature* -värdet för Notification Hub. Se [installationen av anslutnings strängen](#connection-string-setup) senare i den här artikeln.| |**systemet** | **Plattform** | Egenskapen Platform anger klient plattformen som meddelande mål. Som standard, om egenskapen Platform utelämnas från utgående bindning, kan mal meddelanden användas för att rikta in alla plattformar som kon figurer ATS i Azure Notification Hub. Mer information om hur du använder mallar i allmänhet för att skicka plattforms oberoende meddelanden med en Azure Notification Hub finns i [mallar](../notification-hubs/notification-hubs-templates-cross-platform-push-messages.md). När den har angetts måste **plattformen** vara något av följande värden: <ul><li><code>apns</code>&mdash;Apple Push Notification Service. Mer information om hur du konfigurerar Notification Hub för APN och tar emot meddelandet i en klient app finns i [skicka push-meddelanden till iOS med Azure Notification Hubs](../notification-hubs/notification-hubs-ios-apple-push-notification-apns-get-started.md).</li><li><code>adm</code>&mdash;[Amazon Device Messaging](https://developer.amazon.com/device-messaging). 
### <a name="functionjson-file-example"></a>function.json file example

Here's an example of a Notification Hubs binding in a *function.json* file.

```json
{
  "bindings": [
    {
      "type": "notificationHub",
      "direction": "out",
      "name": "notification",
      "tagExpression": "",
      "hubName": "my-notification-hub",
      "connection": "MyHubConnectionString",
      "platform": "apns"
    }
  ],
  "disabled": false
}
```

### <a name="connection-string-setup"></a>Connection string setup

To use a notification hub output binding, you must configure the connection string for the hub. You can select an existing notification hub or create a new one right from the *Integrate* tab in the Azure portal. You can also configure the connection string manually.

To configure the connection string to an existing notification hub:

1. Navigate to your notification hub in the [Azure portal](https://portal.azure.com), choose **Access policies**, and select the copy button next to the **DefaultFullSharedAccessSignature** policy. This copies the connection string for the *DefaultFullSharedAccessSignature* policy to your notification hub. This connection string lets your function send notification messages to the hub.

    ![Copy the notification hub connection string](./media/functions-bindings-notification-hubs/get-notification-hub-connection.png)

1. Navigate to your function app in the Azure portal, choose **Application settings**, add a key such as **MyHubConnectionString**, paste the copied *DefaultFullSharedAccessSignature* for your notification hub as the value, and then click **Save**.

The name of this application setting is what goes in the output binding connection setting in *function.json* or the .NET attribute. See the [Configuration section](#configuration) earlier in this article.

## <a name="exceptions-and-return-codes"></a>Exceptions and return codes

| Binding | Reference |
|---|---|
| Notification Hub | [Operations Guide](https://docs.microsoft.com/rest/api/notificationhubs/) |

## <a name="next-steps"></a>Next steps

> [!div class="nextstepaction"]
> [Learn more about Azure Functions triggers and bindings](functions-triggers-bindings.md)
54.951299
2,094
0.743811
swe_Latn
0.56573
ff00d6fee19002aed7bedfb1928383ef7eb34c2f
5,577
md
Markdown
docs/ssma/sybase/connecting-to-azure-sql-db-sybasetosql.md
v-thepet/sql-docs
487ce4c1584d377b26ce4ced54c3107efcd75f8e
[ "CC-BY-4.0", "MIT" ]
1
2019-05-04T19:57:42.000Z
2019-05-04T19:57:42.000Z
docs/ssma/sybase/connecting-to-azure-sql-db-sybasetosql.md
jzabroski/sql-docs
34be3e3e656de711b4c7a09274c715b23b451014
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssma/sybase/connecting-to-azure-sql-db-sybasetosql.md
jzabroski/sql-docs
34be3e3e656de711b4c7a09274c715b23b451014
[ "CC-BY-4.0", "MIT" ]
1
2021-01-13T23:26:30.000Z
2021-01-13T23:26:30.000Z
---
title: "Connecting to Azure SQL DB (SybaseToSQL) | Microsoft Docs"
ms.custom: ""
ms.date: "01/19/2017"
ms.prod: "sql-non-specified"
ms.reviewer: ""
ms.suite: ""
ms.technology: 
  - "sql-ssma"
ms.tgt_pltfrm: ""
ms.topic: "article"
applies_to: 
  - "Azure SQL Database"
  - "SQL Server"
ms.assetid: 9e77e4b0-40c0-455c-8431-ca5d43849aa7
caps.latest.revision: 7
author: "Shamikg"
ms.author: "Shamikg"
manager: "jhubbard"
ms.workload: "Inactive"
---
# Connecting to Azure SQL DB (SybaseToSQL)
To migrate Sybase databases to Azure SQL DB, you must connect to the target instance of Azure SQL DB. When you connect, SSMA obtains metadata about all the databases in the instance of Azure SQL DB and displays database metadata in the Azure SQL DB Metadata Explorer. SSMA stores information about the instance of Azure SQL DB you are connected to, but does not store passwords.

Your connection to Azure SQL DB stays active until you close the project. When you reopen the project, you must reconnect to Azure SQL DB if you want an active connection to the server. You can work offline until you load database objects into Azure SQL DB and migrate data.

Metadata about the instance of Azure SQL DB is not automatically synchronized. Instead, to update the metadata in Azure SQL DB Metadata Explorer, you must manually update the Azure SQL DB metadata. For more information, see the "Synchronizing Azure SQL DB Metadata" section later in this topic.

## Required Azure SQL DB Permissions
The account that is used to connect to Azure SQL DB requires different permissions depending on the actions that the account performs:

1. To convert Sybase objects to [!INCLUDE[tsql](../../includes/tsql_md.md)] syntax, to update metadata from Azure SQL DB, or to save converted syntax to scripts, the account must have permission to log on to the instance of Azure SQL DB.

2. To load database objects into Azure SQL DB, the minimum permission requirement is membership in the **db_owner** database role in the target database (a sketch of granting this membership follows below).
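For reference, a minimal T-SQL sketch of granting that membership in the target database. The user name and password are illustrative placeholders, not part of this article:

```sql
-- Run in the target Azure SQL database; 'ssma_migration' is a placeholder name
CREATE USER [ssma_migration] WITH PASSWORD = 'use-a-strong-password-here';
ALTER ROLE db_owner ADD MEMBER [ssma_migration];
```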
## Establishing an Azure SQL DB Connection
Before you convert Sybase database objects to Azure SQL DB syntax, you must establish a connection to the instance of Azure SQL DB where you want to migrate the Sybase database or databases.

When you define the connection properties, you also specify the database where objects and data will be migrated. You can customize this mapping at the Sybase schema level after you connect to Azure SQL DB. For more information, see [Mapping Sybase ASE Schemas to SQL Server Schemas &#40;SybaseToSQL&#41;](../../ssma/sybase/mapping-sybase-ase-schemas-to-sql-server-schemas-sybasetosql.md).

> [!WARNING]
> Before you try to connect to Azure SQL DB, make sure that the instance of Azure SQL DB is running and can accept connections.

**To connect to Azure SQL DB**

1. On the **File** menu, select **Connect to Azure SQL DB** (this option is enabled after the creation of a project). If you have previously connected to Azure SQL DB, the command name will be **Reconnect to Azure SQL DB**.

2. In the connection dialog box, enter or select the server name of Azure SQL DB.
3. Enter, select, or **Browse** the database name.
4. Enter or select the **UserName**.
5. Enter the **Password**.
6. SSMA recommends an encrypted connection to Azure SQL DB.
7. Click **Connect**.

> [!IMPORTANT]
> SSMA for Sybase does not support connection to the **master** database in Azure SQL DB.

## Synchronizing Azure SQL DB Metadata
Metadata about Azure SQL DB databases is not automatically updated. The metadata in Azure SQL DB Metadata Explorer is a snapshot of the metadata when you first connected to Azure SQL DB, or the last time that you manually updated metadata. You can manually update metadata for all databases, or for any single database or database object.

**To synchronize metadata**

1. Make sure that you are connected to Azure SQL DB.
2. In Azure SQL DB Metadata Explorer, select the check box next to the database or database schema that you want to update. For example, to update the metadata for all databases, select the box next to Databases.
3. Right-click Databases, or the individual database or database schema, and then select **Synchronize with Database**.

## Next Step
The next step in the migration depends on your project needs:

- To customize the mapping between Sybase schemas and Azure SQL DB databases and schemas, see [Mapping Sybase ASE Schemas to SQL Server Schemas &#40;SybaseToSQL&#41;](../../ssma/sybase/mapping-sybase-ase-schemas-to-sql-server-schemas-sybasetosql.md)
- To customize configuration options for the projects, see [Setting Project Options &#40;SybaseToSQL&#41;](../../ssma/sybase/setting-project-options-sybasetosql.md)
- To customize the mapping of source and target data types, see [Mapping Sybase ASE and SQL Server Data Types &#40;SybaseToSQL&#41;](../../ssma/sybase/mapping-sybase-ase-and-sql-server-data-types-sybasetosql.md)
- If you do not have to perform any of these tasks, you can convert the Sybase database object definitions into Azure SQL DB object definitions. For more information, see [Converting Sybase ASE Database Objects &#40;SybaseToSQL&#41;](../../ssma/sybase/converting-sybase-ase-database-objects-sybasetosql.md)

## See Also
[Migrating Sybase ASE Databases to SQL Server - Azure SQL DB &#40;SybaseToSQL&#41;](../../ssma/sybase/migrating-sybase-ase-databases-to-sql-server-azure-sql-db-sybasetosql.md)
60.619565
390
0.749148
eng_Latn
0.980925
ff014ccfbe8116060739fcae53f4bc560a92d6f9
859
md
Markdown
content/en/docs/apps/intro/concepts/user-permissions.md
abbyad/docsy-example
4e46da66a9af4d30a3911736d17691b968c97d6a
[ "Apache-2.0" ]
null
null
null
content/en/docs/apps/intro/concepts/user-permissions.md
abbyad/docsy-example
4e46da66a9af4d30a3911736d17691b968c97d6a
[ "Apache-2.0" ]
null
null
null
content/en/docs/apps/intro/concepts/user-permissions.md
abbyad/docsy-example
4e46da66a9af4d30a3911736d17691b968c97d6a
[ "Apache-2.0" ]
null
null
null
---
title: "User Permissions"
weight: 5
date: 2017-01-05
description: >
  Assigning fine grained settings for user roles
---

Roles are broad general collections of permissions. Permissions are fine grained settings that individually toggle on or off to allow a role to do a certain action or see a certain thing.

Viewing permissions determine which page tabs a user sees in the app and which types of data they do and don’t have access to. User action permissions include who can create (e.g., create new users), who can delete (e.g., delete reports), who can edit (e.g., edit profiles), and who can export (e.g., export server logs).

## Defining User Permissions

Each user role must be given explicit access for each of the following permissions in the app settings (a sketch follows the table below).

|Property|Description|Default|
|-------|---------|----------|
| `placeholder` | | |
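To make the shape of such settings concrete, here is a minimal sketch of a permissions block. The permission names, role names, and JSON layout are illustrative assumptions; the actual schema belongs in the table above:

```json
{
  "permissions": {
    "can_create_users": ["admin"],
    "can_delete_reports": ["admin", "supervisor"],
    "can_edit_profiles": ["admin", "supervisor"],
    "can_export_server_logs": ["admin"]
  }
}
```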
42.95
321
0.733411
eng_Latn
0.997582
ff01d3dd13743ee2d699c232ea7e7af310e24ca0
1,363
md
Markdown
readcode/filelist.md
leebaok/jemalloc-annotate
b3da2f6030666e063e90a45cddb977ad33ddc91f
[ "BSD-2-Clause" ]
99
2016-07-14T07:26:31.000Z
2022-01-27T08:59:10.000Z
readcode/filelist.md
wycharry/jemalloc-4.2.1-readcode
b3da2f6030666e063e90a45cddb977ad33ddc91f
[ "BSD-2-Clause" ]
null
null
null
readcode/filelist.md
wycharry/jemalloc-4.2.1-readcode
b3da2f6030666e063e90a45cddb977ad33ddc91f
[ "BSD-2-Clause" ]
28
2016-08-12T09:37:56.000Z
2021-05-08T12:03:47.000Z
## Source Files

jemalloc's sources consist mainly of the header files under include/jemalloc/internal and the C files under src. A brief overview of the main files follows:

```
include/jemalloc/internal:
  arena.h assert.h atomic.h base.h bitmap.h chunk_dss.h chunk_mmap.h
  chunk.h ckh.h ctl.h extent.h hash.h huge.h
  jemalloc_internal_decls.h jemalloc_internal_defs.h
  jemalloc_internal_defs.h.in jemalloc_internal_macros.h
  jemalloc_internal.h jemalloc_internal.h.in
  mb.h mutex.h nstime.h pages.h ph.h
  private_namespace.h private_namespace.sh private_symbols.txt
  private_unnamespace.h private_unnamespace.sh
  prng.h prof.h
  public_namespace.h public_namespace.sh public_symbols.txt
  public_unnamespace.h public_unnamespace.sh
  ql.h qr.h quarantine.h rb.h rtree.h
  size_classes.h size_classes.sh smoothstep.h smoothstep.sh
  stats.h tcache.h ticker.h tsd.h util.h valgrind.h witness.h

src:
  arena.c atomic.c base.c bitmap.c chunk_dss.c chunk_mmap.c chunk.c
  ckh.c ctl.c extent.c hash.c huge.c jemalloc.c mb.c mutex.c nstime.c
  pages.c prng.c prof.c quarantine.c rtree.c stats.c tcache.c ticker.c
  tsd.c util.c valgrind.c witness.c zone.c
```
15.314607
61
0.619956
eng_Latn
0.295745
ff01ea23f1ba76cedfb41ba70c2afccd6f3f58fe
673
md
Markdown
README.md
jbrundage/app-frontend
8c3940b3fee5a5f9f01a57b2f1f32a1db66708a2
[ "MIT" ]
null
null
null
README.md
jbrundage/app-frontend
8c3940b3fee5a5f9f01a57b2f1f32a1db66708a2
[ "MIT" ]
null
null
null
README.md
jbrundage/app-frontend
8c3940b3fee5a5f9f01a57b2f1f32a1db66708a2
[ "MIT" ]
null
null
null
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) ![GitHub package.json version (branch)](https://img.shields.io/github/package-json/v/CodingGardenCommunity/app-frontend/master.svg) ![Libraries.io dependency status for GitHub repo](https://img.shields.io/librariesio/github/CodingGardenCommunity/app-frontend.svg) ![GitHub contributors](https://img.shields.io/github/contributors/CodingGardenCommunity/app-frontend.svg) ![GitHub commit activity](https://img.shields.io/github/commit-activity/m/CodingGardenCommunity/app-frontend.svg) # Community App Frontend The frontend repo for the Coding Garden Community App
112.166667
591
0.803863
yue_Hant
0.823532
ff022af4e2aa173850e97c54587a59878a81bff3
6,240
md
Markdown
CHANGELOG.md
khourhin/papermill
64508d5ce8eb42fbd42f82c2bdd72d72854e48cf
[ "BSD-3-Clause" ]
null
null
null
CHANGELOG.md
khourhin/papermill
64508d5ce8eb42fbd42f82c2bdd72d72854e48cf
[ "BSD-3-Clause" ]
null
null
null
CHANGELOG.md
khourhin/papermill
64508d5ce8eb42fbd42f82c2bdd72d72854e48cf
[ "BSD-3-Clause" ]
null
null
null
# Change Log

## 1.0.0

We made it to our [1.0 milestone goals](https://github.com/nteract/papermill/milestone/1?closed=1)! The largest change here is removal of `record`, `Notebook`, and `NotebookCollection` abstractions which are now living in [scrapbook](https://github.com/nteract/scrapbook) and requirement of nbconvert 5.5 as a dependency.

- Input and output paths can now reference input parameters. `my_nb_{nb_type}.ipynb out_{nb_type}.ipynb -p nb_type test` will substitute values into the paths passed in with python format application patterns.
- `read_notebook`, `read_notebooks`, `record`, and `display` api functions are now removed.
- [upstream] ipywidgets are now supported. See [nbconvert docs](https://nbconvert.readthedocs.io/en/latest/execute_api.html#widget-state) for details.
- [upstream] notebook executions which run out of memory no longer hang indefinitely when the kernel dies.

## 0.19.1

- Added a warning when no `parameter` tag is present but parameters are being passed
- Replaced `retry` with `tenacity` to help with conda builds and to use a non-abandoned library

## 0.19.0

**DEPRECATION CHANGE** The record, read_notebook, and read_notebooks functions are now officially deprecated and will be removed in papermill 1.0.

- scrapbook functionality is now deprecated
- gcsfs support is expanded to cover recent releases

## 0.18.2

### Fixes

- Addressed an issue with reading encoded notebook .ipynb files in python 3.5

## 0.18.1

### Fixes

- azure missing environment variable now has a better error message and only fails lazily
- gcs connector now has a backoff to respect service rate limits

## 0.18.0

**INSTALL CHANGE** The iorw extensions now use optional dependencies. This means that installation for s3, azure, and gcs connectors are added via:

```
pip install papermill[s3,azure,gcs]
```

or for all dependencies

```
pip install papermill[all]
```

### Features

- Optional IO extensions are now separated into different dependencies.
- Added gs:// optional dependency for google cloud storage support.
- null json fields in parameters now translate correctly to equivalent fields in each supported language

### Fixes

- tqdm dependencies are pinned to fetch a minimum version for auto tqdm

### Dev Improvements

- Releases and versioning patterns were made easier
- Tox is now used to capture all test and build requirements

## 0.17.0

### Features

- Log level can now be set with `--log-level`
- The working directory of papermill can be set with the `--cwd` option. This will set the executing context of the kernel but not impact input/output paths. `papermill --cwd foo bar/input_nb.ipynb bar/output_nb.ipynb` would make the notebook able to reference files in the `foo` directory without `../foo` but still save the output notebook in the `bar` directory.
- Tox has been added for testing papermill. This makes it easier to catch linting and manifest issues without waiting for a failed Travis build.
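As a usage sketch of the two 0.17.0 CLI options above combined (the paths and the `DEBUG` level are illustrative):

```bash
# Execute with verbose logging, running the kernel from the foo directory
papermill --log-level DEBUG --cwd foo bar/input_nb.ipynb bar/output_nb.ipynb
```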
### Fixes

- Fixed warnings for reading non-ipynb files
- Fixed `--report-mode` with parameters (and made it more compatible with JupyterLab)
- Papermill execution progress bars now render within a notebook correctly after importing seaborn
- The `--prepare-only` option no longer requires that kernels be installed locally (you can parameterize a notebook without knowing how to execute it)
- Azure IO adapter now correctly prefixes paths with the `adl://` scheme
- Tests on OSX should pass again

### Docs

- Install doc improvements
- Guide links are updated in the README
- Test docs updated for tox usage

## 0.16.2

- Injected parameter cells now respect `--report-mode`
- Logging level is only set for logger through CLI commands
- Output and input paths can be automatically passed to notebooks with the `--inject-paths` option
- Entrypoints have been added for registration of new `papermill.io` and `papermill.engine` plugins via setup files

## 0.16.1

- Fixed issue with azure blob io operations

## 0.16.0

- Added engines abstraction and command line argument
- Moved some nbconvert wrappers out of papermill
- Added Azure blob storage support
- Fixed botocore upgrade compatibility issue (all versions of boto now supported again)
- Removed whitelisted environment variable assignment

## 0.15.1

- Added support for Julia kernels
- Many improvements to README.md and documentation
- nbconvert dependency pinned to >= 5.3
- Improved error handling for missing directories
- Warnings added when an unexpected file extension is used
- Papermill version is visible via the CLI
- More messages use the logging module now (and can be filtered accordingly)
- Binder link from README was greatly improved to demonstrate papermill features

## 0.15.0

- Moved translator functions into registry
- Added development guide to help new contributors
- Travis, coverage, linting, and doc improvements
- Added support for Azure data lake sources
- Added python 3.7 testing

## 0.14.2

- Added output flushing for log-output option

## 0.14.1

- Upgraded executor to stream outputs during execution
- Fixed UTF-8 encoding issues for windows machines
- Added [black](https://github.com/ambv/black) code formatter rules (experimental)
- Contributors document added
- Added report-mode option for hiding inputs

## 0.13.4 (no code changes)

- Release manifest fix

## 0.13.3

- Fixed scala int vs long assignment

## 0.13.2

- Pip 10 fixes

## 0.13.1

- iPython pin to circumvent upstream issue

## 0.13.0

- Added prepare-only flag for parameterizing without processing a notebook
- Fixed cell number display on failed output notebooks
- Added scala language support
34.285714
364
0.770513
eng_Latn
0.994496
ff0284d7d19b5549e6206e819dc4d812edf30019
54
md
Markdown
README.md
vilaksh01/AREC-NeutonAI-ESP32
135afca733fc6559ac8eaf08ae627115f0d2f55f
[ "MIT" ]
null
null
null
README.md
vilaksh01/AREC-NeutonAI-ESP32
135afca733fc6559ac8eaf08ae627115f0d2f55f
[ "MIT" ]
null
null
null
README.md
vilaksh01/AREC-NeutonAI-ESP32
135afca733fc6559ac8eaf08ae627115f0d2f55f
[ "MIT" ]
null
null
null
# AREC-NeutonAI-ESP32 ## Tested and working on ESP32
13.5
30
0.740741
eng_Latn
0.865102
ff03a0d500d24937b1a960da58220c14ca91ef7f
162
md
Markdown
src/__tests__/fixtures/unfoldingWord/en_tq/jhn/06/18.md
unfoldingWord/content-checker
7b4ca10b94b834d2795ec46c243318089cc9110e
[ "MIT" ]
null
null
null
src/__tests__/fixtures/unfoldingWord/en_tq/jhn/06/18.md
unfoldingWord/content-checker
7b4ca10b94b834d2795ec46c243318089cc9110e
[ "MIT" ]
226
2020-09-09T21:56:14.000Z
2022-03-26T18:09:53.000Z
src/__tests__/fixtures/unfoldingWord/en_tq/jhn/06/18.md
unfoldingWord/content-checker
7b4ca10b94b834d2795ec46c243318089cc9110e
[ "MIT" ]
1
2022-01-10T21:47:07.000Z
2022-01-10T21:47:07.000Z
# What happened to the weather after the disciples got into a boat and started out for Capernaum? A strong wind began to blow and the sea started getting rough.
40.5
97
0.790123
eng_Latn
0.999996
ff03aa31b833f473a29be9b4e483299cad260010
2,088
md
Markdown
packages/plugin-sb-react/CHANGELOG.md
hxfdarling/onepack
9bcfe870245c92c6987e6e1602224b277fe2d53b
[ "MIT" ]
50
2019-03-13T12:35:36.000Z
2020-11-17T01:41:14.000Z
packages/plugin-sb-react/CHANGELOG.md
hxfdarling/onepack
9bcfe870245c92c6987e6e1602224b277fe2d53b
[ "MIT" ]
23
2019-05-28T13:36:47.000Z
2022-02-27T23:36:58.000Z
packages/plugin-sb-react/CHANGELOG.md
hxfdarling/onepack
9bcfe870245c92c6987e6e1602224b277fe2d53b
[ "MIT" ]
16
2019-03-19T14:22:04.000Z
2021-06-29T13:32:19.000Z
# Change Log

All notable changes to this project will be documented in this file.
See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.

## [2.5.4](https://github.com/hxfdarling/a8k/compare/v2.5.3...v2.5.4) (2021-06-29)

**Note:** Version bump only for package @a8k/plugin-sb-react

## [2.5.3](https://github.com/hxfdarling/a8k/compare/v2.5.2...v2.5.3) (2020-08-18)

**Note:** Version bump only for package @a8k/plugin-sb-react

## [2.4.3](https://github.com/hxfdarling/a8k/compare/v2.4.2...v2.4.3) (2020-03-23)

**Note:** Version bump only for package @a8k/plugin-sb-react

## [2.4.2](https://github.com/hxfdarling/a8k/compare/v2.4.1...v2.4.2) (2020-02-29)

**Note:** Version bump only for package @a8k/plugin-sb-react

# [2.3.0](https://github.com/hxfdarling/a8k/compare/v2.2.0...v2.3.0) (2019-11-04)

**Note:** Version bump only for package @a8k/plugin-sb-react

## [2.1.5](https://github.com/hxfdarling/a8k/compare/v2.1.4...v2.1.5) (2019-09-03)

**Note:** Version bump only for package @a8k/plugin-sb-react

## [1.18.1](https://github.com/hxfdarling/a8k/compare/v1.18.0...v1.18.1) (2019-06-15)

**Note:** Version bump only for package @a8k/plugin-sb-react

# [1.18.0](https://github.com/hxfdarling/a8k/compare/v1.17.4...v1.18.0) (2019-06-14)

**Note:** Version bump only for package @a8k/plugin-sb-react

## [1.17.4](https://github.com/hxfdarling/a8k/compare/v1.17.3...v1.17.4) (2019-06-09)

**Note:** Version bump only for package @a8k/plugin-sb-react

## [1.17.3](https://github.com/hxfdarling/a8k/compare/v1.17.3-alpha.0...v1.17.3) (2019-06-09)

**Note:** Version bump only for package @a8k/plugin-sb-react

# [1.17.0](https://github.com/hxfdarling/a8k/compare/v1.16.0...v1.17.0) (2019-06-05)

### Features

- Improve the storybook-react plugin ([5ca3212](https://github.com/hxfdarling/a8k/commit/5ca3212))
- Add the sb init command ([126a77b](https://github.com/hxfdarling/a8k/commit/126a77b))

# [1.16.0](https://github.com/hxfdarling/a8k/compare/v1.15.3...v1.16.0) (2019-05-30)

### Features

- Add the storybook plugin ([d3e7da3](https://github.com/hxfdarling/a8k/commit/d3e7da3))
36
93
0.689655
yue_Hant
0.30996
ff03de67f3f429cbe50f19426d2196ed29bf8ba9
63,315
md
Markdown
_posts/2019-10-07-spring-boot-vue-crud-full-stack-with-maven.md
lanaflonPerso/in28minutes.github.io
aeab4d2e05da9f058da86cd0e33421cdeba82491
[ "MIT" ]
1
2020-02-21T14:00:24.000Z
2020-02-21T14:00:24.000Z
_posts/2019-10-07-spring-boot-vue-crud-full-stack-with-maven.md
viczer/in28minutes.github.io
6d09c86c79d901cf8df224c4542e7b1e86563d45
[ "MIT" ]
null
null
null
_posts/2019-10-07-spring-boot-vue-crud-full-stack-with-maven.md
viczer/in28minutes.github.io
6d09c86c79d901cf8df224c4542e7b1e86563d45
[ "MIT" ]
1
2020-01-04T10:30:54.000Z
2020-01-04T10:30:54.000Z
--- layout: post title: Creating Spring Boot and Vue JS CRUD Java Full Stack Application with Maven date: 2019-10-07 12:31:19 summary: This guide helps you create a Java full stack application with all the CRUD (Create, Read, Update and Delete) features using Vue JS as Frontend framework and Spring Boot as the backend REST API. We use Maven as the build tool. categories: SpringBootFullStack permalink: /spring-boot-vue-full-stack-crud-maven-application --- This guide helps you create a Java full stack application with all the CRUD (Create, Read, Update and Delete) features using Vue as Frontend framework and Spring Boot as the backend REST API. We will be using JavaScript as the frontend language and Java as the backend language. ## You will learn - What is a full stack application? - Why do we create full stack applications? - How do you use Vue as a Frontend Framework? - How do you use Spring to create Backend REST Service API? - How do you call Spring Boot REST API from Vue using the axios framework? - How and When to use different REST API Request Methods - GET, POST, PUT and DELETE? - How do you perform CRUD (Create, Read, Update and Delete) operations using Vue as Frontend framework and Spring Boot as the backend REST API? - How do you create a form in Vue? ## Free Courses - Learn in 10 Steps - [FREE 5 DAY CHALLENGE - Learn Spring and Spring Boot](https://links.in28minutes.com/SBT-Page-Top-LearningChallenge-SpringBoot){:target="_blank"} - [Learn Spring Boot in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-springboot){:target="_blank"} - [Learn Docker in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-docker){:target="_blank"} - [Learn Kubernetes in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-k8s){:target="_blank"} - [Learn AWS in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-aws-beanstalk){:target="_blank"} ## Step 0: Get an overview of the Full Stack Application ### Understanding Basic Features of the Application Following screenshot shows the application we would like to build: It is a primary instructor portal allowing instructors to maintain their courses. ![Image](/images/full-stack-application-with-spring-boot-screenshot.png "Spring Boot Full Stack Application") ![Image](/images/full-stack-application-with-spring-boot-screenshot-2.png "Spring Boot Full Stack Application") ### Understanding Full Stack Architecture Following Screenshot shows the architecture of the application we would create: ![Image](/images/vue_00_architecture.png "Architecture of Spring Boot Vue Full Stack Application") Important points to note: - REST API is exposed using Spring Boot - REST API is consumed from Vue Frontend to present the UI - The Database, in this example, is a hardcoded in-memory static list. 
### Getting an overview of Spring Boot REST API Resources

In this guide, we will create these services using proper URIs and HTTP methods:

- `@GetMapping("/instructors/{username}/courses")` : Get Request Method exposing the list of courses taught by a specific instructor
- `@GetMapping("/instructors/{username}/courses/{id}")` : Get Request Method exposing the details of a specific course taught by a specific instructor
- `@DeleteMapping("/instructors/{username}/courses/{id}")` : Delete Request Method to delete a course belonging to a specific instructor
- `@PutMapping("/instructors/{username}/courses/{id}")` : Put Request Method to update the course details of a specific course taught by a specific instructor
- `@PostMapping("/instructors/{username}/courses")` : Post Request Method to create a new course for a specific instructor

> The REST API can be enhanced to interact with other microservices infrastructure components and act as microservices.

### Downloading the Complete Maven Project With Code Examples

Following GitHub repository hosts the complete frontend and backend projects - https://github.com/in28minutes/spring-boot-vuejs-fullstack-examples/tree/master/spring-boot-crud-full-stack

> Our Github repository has all the code examples - https://github.com/in28minutes/spring-boot-vuejs-fullstack-examples

### Understanding Spring Boot REST API Project Structure

Following screenshot shows the structure of the Spring Boot project we create.

![Image](/images/project-structure-spring-boot-fullstack-crud-maven.png "Spring Boot Rest Service - Project Structure")

A few details:

- `CourseResource.java` - Rest Resource exposing all the service methods discussed above.
- `Course.java, CoursesHardcodedService.java` - Business Logic for the application. CoursesHardcodedService exposes a few methods we would invoke from our Rest Resource.
- `SpringBootFullStackCrudFullStackWithMavenApplication.java` - Launcher for the Spring Boot Application. To run the application, launch this file as Java Application.
- `pom.xml` - Contains all the dependencies needed to build this project. We use Spring Boot Starter Web and Spring Boot DevTools.

## Understanding Vue Frontend Project Structure

Following screenshot shows the structure of the Vue JS project we create.

![Image](/images/project-structure-vue-fullstack-crud.png "Vue Frontend - Project Structure")

A few details:

- `App.vue` : Vue Component representing the high-level structure of the application.
- `routes.js` : Routing configuration for the application
- `Courses.vue` - Vue Component for listing all the courses for an instructor.
- `Course.vue` - Vue Component for editing Course Details and creating a new course
- `CourseDataService.js` - Service using axios framework to make the Backend REST API Calls.
- `AuthenticationService.js` - Service using axios framework to make the Backend REST API Calls.

### Understanding the tools you need to build this project

- Maven 3.0+ for building Spring Boot API Project
- npm, webpack for building frontend
- Your favorite IDE. We use Eclipse for Java and Visual Studio Code for Frontend - JavaScript, TypeScript, Angular, Vue JS and React.
- JDK 1.8+
- Node v8+
- Embedded Tomcat, built into Spring Boot Starter Web

[![Image](/images/Course-Go-Full-Stack-With-Spring-Boot-and-React.png "Go Full Stack with Spring Boot and React")](https://links.in28minutes.com/MISC-REACT){:target="_blank"}

[![Image](/images/Course-Go-Full-Stack-With-SpringBoot-And-Angular.png "Go Full Stack with Spring Boot and Angular")](https://links.in28minutes.com/MISC-ANGULAR){:target="_blank"}

#### Installing Node Js (npm) & Visual Studio Code

- [Click to see video Playlist](https://www.youtube.com/playlist?list=PLBBog2r6uMCQN4X3Aa_jM9qVjgMCHMWx6){:target="_blank"}
- Step 01 - Installing NodeJs and NPM - Node Package Manager
- Step 02 - Quick Introduction to NPM
- Step 03 - Installing Visual Studio Code - Front End JavaScript Editor

#### Installing Java, Eclipse & Embedded Maven

- [Click to see video Playlist](https://www.youtube.com/playlist?list=PLBBog2r6uMCSmMVTW_QmDLyASBvovyAO3){:target="_blank"}
- 0 - Overview - Installation Java, Eclipse and Maven
- 1 - Installing Java JDK
- 2 - Installing Eclipse IDE
- 3 - Using Embedded Maven in Eclipse
- 4 - Troubleshooting Java, Eclipse and Maven

### Creating Full Stack CRUD application with Vue and Spring Boot - Step By Step Approach

We will use a step by step approach to creating the full stack application:

- Create a Spring Boot Application with Spring Boot Initializr
- Create a Vue application using Vue CLI
- Create the Retrieve Courses REST API and Enhance the Vue Front end to retrieve the courses using the axios framework
- Add feature to delete a course in Vue front end and Spring Boot REST API
- Add functionality to update course details in Vue front end and Spring Boot REST API
- Add feature to create a course in Vue front end and Spring Boot REST API

> You can get an introduction to REST down here - [Introduction to REST API](http://www.springboottutorial.com/creating-rest-service-with-spring-boot){:target="_blank"}

## Step 1: Bootstrapping Spring Boot REST API with Spring Initializr

Creating a REST service with Spring Initializr is a cake walk. We will use Spring Web MVC as our web framework.

Spring Initializr [http://start.spring.io/](http://start.spring.io/){:target="_blank"} is a great tool to bootstrap your Spring Boot projects.

![Image](/images/spring-boot-full-stack-with-web-and-dev-tools.png "Spring Boot Project with Web and Developer Tools")

As shown in the image above, the following steps have to be done:

- Launch Spring Initializr and choose the following
  - Choose `com.in28minutes.fullstack.springboot.maven.crud` as Group
  - Choose `spring-boot-fullstack-crud-full-stack-with-maven` as Artifact
  - Choose the following dependencies
    - Web
    - DevTools
- Click Generate Project.
- Import the project into Eclipse. File -> Import -> Existing Maven Project.

For more details about creating Spring Boot Projects, you can read - [Creating Spring Boot Projects](http://www.springboottutorial.com/creating-spring-boot-project-with-eclipse-and-maven){:target="_blank"}

> If you are new to Spring Boot, we recommend watching this video - [Spring Boot in 10 Steps](https://www.youtube.com/watch?v=PSP1-2cN7vM){:target="_blank"}

## Step 2 - Bootstrapping Vue JS Frontend with Vue CLI

[Vue CLI](https://github.com/vuejs/vue-cli){:target="_blank"} is an amazing tool to bootstrap your Vue applications. Creating Vue Frontend Applications with Vue CLI is very simple.

Launch up your terminal/command prompt. Make sure that you have node installed.
```
npm install -g @vue/cli
# OR
yarn global add @vue/cli

vue create frontend-spring-boot-vue-crud-full-stack
```

This command will ask you for the structure and features for the project you need. Just select `default` for now.

### Launching up Vue Frontend

You would need to cd to the project we created and execute `npm run serve`

```
cd frontend-spring-boot-vue-crud-full-stack
npm run serve
```

You would see the screen below:

![Image](/images/npm-start-new-vue-app.png "Starting a Vue JS Project")

When you launch up the application in the browser at `http://localhost:8080/`, you would see the following welcome screen.

![Image](/images/vue-on-load-screenshot.png "New Vue Project on the Browser")

Note: The Vue application runs on default port 8080. To change it, go to `package.json` and, under `scripts`, change the `serve` command to `vue-cli-service serve --port 8081`. From here onwards, your Vue application will start at port 8081.

> Cool! You are all set to rock and roll with Vue.

## Step 3 - Creating REST API for Retrieve All Courses and Connecting Vue Frontend

We would want to start building the screen shown below:

![Image](/images/full-stack-application-with-spring-boot-screenshot.png "Spring Boot Full Stack Application")

Let's start with building the course listing screen. To be able to do that, we need to

- Create REST API for retrieving a list of courses.
- Connect the Vue JS Frontend to the backend REST API

### Create REST API for retrieving a list of courses

Web Services, REST and Designing REST API are pretty deep concepts. We recommend checking this out for more - [Designing Great REST API](https://www.youtube.com/watch?v=NzgFdEGI8sI){:target="_blank"}

We will create

- A model object `Course.java`
- A Hardcoded Business Service `CoursesHardcodedService.java`
- A Resource to expose the REST API `CourseResource.java`

We will start with creating a model object `Course.java`. The snippet below shows the content of the model class. For the complete listing, refer `course/Course.java` in the complete code example at the end of this article.

```
public class Course {
  private Long id;
  private String username;
  private String description;
  //no arg constructor
  //constructor with 3 args
  //getters and setters
  //hashcode and equals
}
```

Next, let's create a Business Service. In this article, we will use hardcoded data.

```
@Service
public class CoursesHardcodedService {

  private static List<Course> courses = new ArrayList<>();
  private static long idCounter = 0;

  static {
    courses.add(new Course(++idCounter, "in28minutes", "Learn Full stack with Spring Boot and Angular"));
    courses.add(new Course(++idCounter, "in28minutes", "Learn Full stack with Spring Boot and React"));
    courses.add(new Course(++idCounter, "in28minutes", "Master Microservices with Spring Boot and Spring Cloud"));
    courses.add(new Course(++idCounter, "in28minutes",
        "Deploy Spring Boot Microservices to Cloud with Docker and Kubernetes"));
  }

  public List<Course> findAll() {
    return courses;
  }
}
```

Few things to note

- Data is hardcoded
- findAll returns the complete list of courses
- You can see that the API of the Service is modelled around the Spring Data Repository interfaces. If you are familiar with JPA and Spring Data, you can easily replace this with a Service talking to a database (a sketch follows below).
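To illustrate that note, here is a hedged sketch of what the database-backed replacement could look like with Spring Data JPA. The repository name and the `findByUsername` query are assumptions for illustration, not part of this article's code:

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical replacement: Course would become a JPA @Entity, and the
// hardcoded service would delegate to this repository instead of a static list.
public interface CourseRepository extends JpaRepository<Course, Long> {
    List<Course> findByUsername(String username);
}
```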
Next, let's create the REST Resource to retrieve the list of courses for an instructor.

```
@RestController
public class CourseResource {

  @Autowired
  private CoursesHardcodedService courseManagementService;

  @GetMapping("/instructors/{username}/courses")
  public List<Course> getAllCourses(@PathVariable String username) {
    return courseManagementService.findAll();
  }
}
```

Few things to note:

- `@RestController`: Combination of `@Controller` and `@ResponseBody` - Beans returned are converted to/from JSON/XML.
- `@Autowired private CoursesHardcodedService courseManagementService` - Autowire the CoursesHardcodedService so that we can retrieve details from the business service.

If you launch up the Spring boot application and go to `http://localhost:8080/instructors/in28minutes/courses` in the browser, you would see the response from the API.

```
[
   {
      "id": 1,
      "username": "in28minutes",
      "description": "Learn Full stack with Spring Boot and Angular"
   },
   {
      "id": 2,
      "username": "in28minutes",
      "description": "Learn Full stack with Spring Boot and React"
   },
   {
      "id": 3,
      "username": "in28minutes",
      "description": "Master Microservices with Spring Boot and Spring Cloud"
   },
   {
      "id": 4,
      "username": "in28minutes",
      "description": "Deploy Spring Boot Microservices to Cloud with Docker and Kubernetes"
   }
]
```

We have the REST API up and running. It's time to focus on the Frontend.

### Enhancing Vue App to consume the REST API

To be able to enhance the Vue Application to consume the REST API, we would need to

- Create an Application Component - to represent the structure of the complete application and include it in `App.vue` - `InstructorApp.vue`
- Add the frameworks needed to call the REST API - axios, and to support routing - vue-router
- Create a view component for showing a list of course details and include it in the Application Component - `Courses.vue`
- Invoking Retrieve Courses REST API from Vue Component - To enable this we will create a service to call the REST API using the axios framework - `CourseDataService.js`. `Courses.vue` will make use of `CourseDataService.js`

Let's start with creating an Application Component - `InstructorApp.vue`

/src/components/InstructorApp.vue

```vue
<template>
  <div>
    This is instructors test!
  </div>
</template>

<script>
export default {
  name: "InstructorApp"
}
</script>

<style scoped>
</style>
```

Few things to note:

- One of the first things you would need to understand about Vue is the concept of the component. A Vue component consists of `template`, `script` and `style` sections.
- `template` is nothing but the HTML template with Vue directives
- `script` is javascript code to write for the Vue module
- `style` is CSS for the Vue module

Let's update the `App.vue` to display the InstructorApp component.

src/App.vue

```js
<template>
  <div class="container">
    <InstructorApp />
  </div>
</template>

<script>
import InstructorApp from './components/InstructorApp.vue'

export default {
  name: 'In28Minutes',
  components: {
    InstructorApp
  }
}
</script>

<style>
@import url(https://unpkg.com/[email protected]/dist/css/bootstrap.min.css)
</style>
```

Few things to note:

- `import InstructorApp from './components/InstructorApp.vue'` - Importing the InstructorApp component
- `<InstructorApp />` - Display the Instructor App component.
- `@import url(https://unpkg.com/[email protected]/dist/css/bootstrap.min.css)` - Add styling to the application

When you launch the Vue app in the browser, it will appear as shown below:

![Image](/images/vue-initial-instructor-component.png "Initial View of Instructor Component")

#### Adding Frameworks to Vue Application

In this project, we will make use of axios to execute REST API calls and vue-router to do the routing between pages. Let's stop the front end app running in the command prompt and execute these commands.

```
npm add axios
```

```
npm add vue-router
```

When the commands execute successfully, you would see new entries in `package.json`

```
"axios": "^0.19.0",
"vue-router": "^3.1.3"
```

You can run `npm run serve` to relaunch the front end app loading up all the new frameworks.

#### Creating a List Courses Component

Let's create a new component for showing the List of courses - `ListCoursesComponent.vue`. For now, let's hardcode a course into the course list.

/src/components/ListCoursesComponent.vue

```js
<template>
  <div class="container">
    <h3>All Courses</h3>
    <div class="container">
      <table class="table">
        <thead>
          <tr>
            <th>Id</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>1</td>
            <td>Learn Full stack with Spring Boot and Vue</td>
          </tr>
        </tbody>
      </table>
    </div>
  </div>
</template>

<script>
export default {
  name: "CoursesList"
};
</script>

<style>
</style>
```

Things to Note:

- It's a simple component. Returning a hardcoded table displaying a list of courses.

Let's update the InstructorApp component to display the ListCoursesComponent.

/src/components/InstructorApp.vue

```js
<template>
  <div>
    <h1>Instructor Application</h1>
    <ListCoursesComponent></ListCoursesComponent>
  </div>
</template>

<script>
import ListCoursesComponent from "./ListCoursesComponent";

export default {
  name: "InstructorApp",
  components: {
    ListCoursesComponent
  }
};
</script>

<style scoped>
</style>
```

We are importing the ListCoursesComponent and displaying it in the InstructorApp.

When you launch the Vue JS app in the browser, it will appear as shown below:

![Image](/images/vue-second-stage-hardcoded-instructor-component.png "Hardcoded Instructor Component")

#### Invoking Retrieve Courses REST API from Vue JS Component

We had created the REST API for retrieving the list of courses earlier. To call the REST API we would need to use a framework called axios.

> Axios is a Promise based HTTP client for the browser and node.js

Axios is a frontend framework that helps you

- Make REST API calls with different request methods including GET, POST, PUT, DELETE etc
- Intercept Front end REST API calls and add headers and request content

Let's create a data service method to call the REST API.

/src/service/CourseDataService.js

```js
import axios from "axios";

const INSTRUCTOR = "in28minutes";
const COURSE_API_URL = "http://localhost:8080";
const INSTRUCTOR_API_URL = `${COURSE_API_URL}/instructors/${INSTRUCTOR}`;

class CourseDataService {
  retrieveAllCourses() {
    return axios.get(`${INSTRUCTOR_API_URL}/courses`);
  }
}

export default new CourseDataService();
```

Important points to note:

- `` const INSTRUCTOR_API_URL = `${COURSE_API_URL}/instructors/${INSTRUCTOR}` `` - We are forming the URL to call in a reusable way.
- `` axios.get(`${INSTRUCTOR_API_URL}/courses`) `` - Call the REST API with the GET method.
- `export default new CourseDataService()` - We are creating an instance of CourseDataService and making it available for other components.
To make the REST API call, we would need to call the CourseDataService - retrieveAllCourses method from the ListCoursesComponent

Important snippets are shown below:

```js
<script>
import CourseDataService from '../service/CourseDataService';

export default {
  name: "CoursesList",
  data() {
    return {
      INSTRUCTOR: "in28minutes"
    };
  },
  methods: {
    refreshCourses() {
      CourseDataService.retrieveAllCourses(this.INSTRUCTOR) //HARDCODED
        .then(response => {
          console.log(response.data);
        });
    }
  },
  created() {
    this.refreshCourses();
  }
};
</script>
```

Things to note:

- `created()` - Vue defines a component lifecycle. created will be called as soon as the component is created. We are calling refreshCourses at that point.
- `methods: { refreshCourses() {}}` - Any method in a vue component should be under methods.
- `CourseDataService.retrieveAllCourses(INSTRUCTOR).then` - This would make the call to the REST API. You can define how to process the response in the then method.

When you run the vue app in the browser right now, you would see the following errors in the console:

```bash
[Error] Access to XMLHttpRequest at 'http://localhost:8080/instructors/in28minutes/courses' from origin 'http://localhost:8081' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
[Error] Uncaught (in promise) Error: Network Error
    at createError (createError.js?2d83:16)
    at XMLHttpRequest.handleError (xhr.js?b50d:81)
```

The Backend Spring Boot REST API is running on http://localhost:8080, and it is not allowing requests from other servers - http://localhost:8081, in this example.

Let's configure the Rest Resource to allow access from specific servers.

```java
@CrossOrigin(origins = { "http://localhost:3000", "http://localhost:4200", "http://localhost:8081" })
@RestController
public class CourseResource {
```

An important thing to note:

- `@CrossOrigin(origins = { "http://localhost:3000", "http://localhost:4200", "http://localhost:8081" })` - Allow requests from specific origins. We use 3000 to run React apps, 8081 to run Vue JS apps, and 4200 to run Angular apps. Hence we are allowing requests from all three ports.

If you refresh the page again, you would see the response from the server printed in the console.
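As an aside, the same allowance can be configured globally with a `WebMvcConfigurer` instead of annotating each controller. This is a standard Spring alternative sketched under those assumptions, not part of this article's code:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Hypothetical global alternative to per-controller @CrossOrigin
@Configuration
public class WebCorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        // Allow the frontend dev server origins for all endpoints
        registry.addMapping("/**")
                .allowedOrigins("http://localhost:3000", "http://localhost:4200", "http://localhost:8081");
    }
}
```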
We would need to use the data from the response and show it on the component.

> In Vue, we use the state to do that.

The following snippet highlights the significant changes:

```js
<script>
import CourseDataService from '../service/CourseDataService';

export default {
  name: "CoursesList",
  data() {
    return {
      courses: [],
      message: null,
      INSTRUCTOR: "in28minutes"
    };
  },
  methods: {
    refreshCourses() {
      CourseDataService.retrieveAllCourses(this.INSTRUCTOR) //HARDCODED
        .then(response => {
          this.courses = response.data;
        });
    }
  },
  created() {
    this.refreshCourses();
  }
};
</script>
```

Important things to note:

- `courses: [], message: null,` - To display courses, we need to make them available to the component. We add courses to the state of the component and initialize it in the `data()` function.
- `response => {this.courses = response.data;}` - When the response comes back with data, we update the state.

We have data in the state. How do we display it? We need to update the template.

```javascript
<template>
  <div class="container">
    <h3>All Courses</h3>
    <div class="container">
      <table class="table">
        <thead>
          <tr>
            <th>Id</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr v-for="course in courses" v-bind:key="course.id">
            <td>{% raw %}{{course.id}}{% endraw %}</td>
            <td>{% raw %}{{course.description}}{% endraw %}</td>
          </tr>
        </tbody>
      </table>
    </div>
  </div>
</template>
```

Important things to note

- `v-for="course in courses"` - Allows you to loop around a list of items and define how each item should be displayed.
- `v-bind:key="course.id"` - A key is used to uniquely identify a row.
- `{% raw %}<td>{{course.id}}</td>{% endraw %}` - Decide how the column is displayed
- `{% raw %}<td>{{course.description}}</td>{% endraw %}` - Decide how the column is displayed

When you launch the Vue app in the browser, it will appear as shown below:

![Image](/images/vue-third-stage-getting-course-listing-from-rest-api.png "Course Listing Component Retrieving from REST API")

> Congratulations! You have successfully integrated Vue with a REST API. Time to celebrate!

## Step 4: Adding Delete Feature to List Courses Page

To be able to do this

- We need a REST API in Spring Boot Backend for deleting a course
- We would need to update the Vue frontend to use the API

### Adding Delete Method in the Backend REST API

It should be easy. Snippets below show how we create a simple deleteById method in CoursesHardcodedService and expose it from CourseResource.

```java
@Service
public class CoursesHardcodedService {

  public Course deleteById(long id) {
    Course course = findById(id);

    if (course == null)
      return null;

    if (courses.remove(course)) {
      return course;
    }

    return null;
  }
```

```java
public class CourseResource {

  @DeleteMapping("/instructors/{username}/courses/{id}")
  public ResponseEntity<Void> deleteCourse(@PathVariable String username, @PathVariable long id) {

    Course course = courseManagementService.deleteById(id);

    if (course != null) {
      return ResponseEntity.noContent().build();
    }

    return ResponseEntity.notFound().build();
  }
```

Important things to note:

- `@DeleteMapping("/instructors/{username}/courses/{id}")` - We are mapping the Delete Request Method with two path variables
- `@PathVariable String username, @PathVariable long id` - Defining the variables for the Path Variables
- `ResponseEntity.noContent().build()` - If the request is successful, return no content back
- `ResponseEntity.notFound().build()` - If the delete failed, return an error - resource not found.

### Enhancing Vue app with Delete Course Feature

Let's add a `deleteCourse` method to `CourseDataService`. As you can see, it executes the delete request to the specific course API URL.

```javascript
deleteCourse(name, id) {
  return axios.delete(`${INSTRUCTOR_API_URL}/courses/${id}`);
}
```

We can add a delete button corresponding to each of the courses:

```html
<td>
  <button class="btn btn-warning" v-on:click="deleteCourseClicked(course.id)">
    Delete
  </button>
</td>
```

On click of the button, we are calling the `deleteCourseClicked` method passing the course `id`. The implementation for `deleteCourseClicked` is shown below. When we get a successful response for the delete API call, we set a `message` into state and refresh the courses list.
```javascript
methods: {
  ...
  deleteCourseClicked(id) {
    CourseDataService.deleteCourse(this.INSTRUCTOR, id).then(response => {
      this.message = `Delete of course ${id} Successful`;
      this.refreshCourses();
    });
  }
}
```

We display the message just below the header

```html
<h3>All Courses</h3>
<div v-if="message" class="alert alert-success">
  {% raw %}{{message}}{% endraw %}
</div>
```

Complete `ListCoursesComponent`, at this stage, is shown below:

```html
<template>
  <div class="container">
    <h3>All Courses</h3>
    <div v-if="message" class="alert alert-success">
      {% raw %}{{message}}{% endraw %}
    </div>
    <div class="container">
      <table class="table">
        <thead>
          <tr>
            <th>Id</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr v-for="course in courses" v-bind:key="course.id">
            <td>{% raw %}{{course.id}}{% endraw %}</td>
            <td>{% raw %}{{course.description}}{% endraw %}</td>
            <td>
              <button
                class="btn btn-warning"
                v-on:click="deleteCourseClicked(course.id)"
              >
                Delete
              </button>
            </td>
          </tr>
        </tbody>
      </table>
    </div>
  </div>
</template>

<script>
import CourseDataService from "../service/CourseDataService";

export default {
  name: "CoursesList",
  data() {
    return {
      courses: [],
      message: null,
      INSTRUCTOR: "in28minutes"
    };
  },
  methods: {
    refreshCourses() {
      CourseDataService.retrieveAllCourses(this.INSTRUCTOR) //HARDCODED
        .then(response => {
          this.courses = response.data;
        });
    },
    deleteCourseClicked(id) {
      CourseDataService.deleteCourse(this.INSTRUCTOR, id).then(response => {
        this.message = `Delete of course ${id} Successful`;
        this.refreshCourses();
      });
    }
  },
  created() {
    this.refreshCourses();
  }
};
</script>

<style></style>
```

When you launch the Vue app in the browser, it will appear as shown below:

![Image](/images/vue-course-listing-with-delete.png "Course Listing Component Retrieving from REST API and Delete Button")

When you click the delete button, the course will be deleted.

### Updating Course Details

To be able to update the course details, we would need to create a new component to represent the course form.

Let's start with creating a simple component.

/src/components/CourseComponent.vue

```javascript
<template>
  <div>
    <h1>Course Details</h1>
  </div>
</template>

<script>
export default {
  name: "courseDetails",
};
</script>

<style>
</style>
```

#### Implementing Routing

When the user clicks the update course button on the course listing page, we would want to route to the course page. How do we do it? That's where Routing comes into the picture.

/src/routes.js

```javascript
import Vue from "vue";
import Router from "vue-router";

Vue.use(Router);

const router = new Router({
  mode: "history", // Use browser history
  routes: [
    {
      path: "/",
      name: "Home",
      component: () => import("./components/ListCoursesComponent")
    },
    {
      path: "/courses",
      name: "Courses",
      component: () => import("./components/ListCoursesComponent")
    },
    {
      path: "/courses/:id",
      name: "Course Details",
      component: () => import("./components/CourseComponent")
    }
  ]
});

export default router;
```

/src/App.vue

```javascript
<template>
  <div class="container">
    <router-view/>
  </div>
</template>

<script>
export default {
  name: "app"
};
</script>

<style>
@import url(https://unpkg.com/[email protected]/dist/css/bootstrap.min.css);
</style>
```

/src/main.js

```javascript
import Vue from "vue";
import App from "./App.vue";
import router from "./routes";

Vue.config.productionTip = false;

new Vue({
  router,
  render: h => h(App)
}).$mount("#app");
```

We are defining a Router around all the components and configuring paths to each of them.
- `http://localhost:8081/` takes you to the home page
- `http://localhost:8081/courses` takes you to the course listing page
- `http://localhost:8081/courses/2` takes you to the course page

When you launch the Vue app in the browser using this URL `http://localhost:8081/courses/2`, it will appear as shown below:

![Image](/images/vue-1-course-details.png "Course Component First Version")

#### Adding Update Button to Course Listing Page

Let's add an update button to the course listing page.

/src/components/ListCoursesComponent.vue

```javascript
<template>
  <table class="table">
    <thead>
      <tr>
        ...
        <th>Update</th>
        <th>Delete</th>
      </tr>
    </thead>
    <tbody>
      <tr v-for="course in courses" v-bind:key="course.id">
        ...
        <td><button class="btn btn-success" v-on:click="updateCourseClicked(course.id)">Update</button></td>
        <td><button class="btn btn-warning" v-on:click="deleteCourseClicked(course.id)">Delete</button></td>
      </tr>
    </tbody>
  </table>
  ....
</template>
```

We can now add the `updateCourseClicked` method to redirect to the Course Component:

```javascript
methods: {
  ...
  updateCourseClicked(id) {
    this.$router.push(`/courses/${id}`);
  },
}
```

#### Adding Add button to Course Listing Page

Let's add an Add button at the bottom of the Course Listing Page.

/src/components/ListCoursesComponent.vue

```html
<!-- At the end of table -->
<div class="row">
  <button class="btn btn-success" v-on:click="addCourseClicked()">Add</button>
</div>
```

Let's add the appropriate method to handle the click of the Add button.

```javascript
methods: {
  ...
  addCourseClicked() {
    this.$router.push(`/courses/-1`);
  },
}
```

When you launch the Vue app in the browser using this URL `http://localhost:8081`, it will appear as shown below:

![Image](/images/vue-full-stack-application-with-spring-boot-screenshot.png "Course listing page final version")

Clicking any of the Update or Add buttons would take you to the Course Component.

#### Create API to Retrieve Specific Course Details

Now that we have the course component being rendered on the click of the update button, let's start focusing on getting the course details from the REST API.

Let's add a `findById` method to `CoursesHardcodedService`. It retrieves the details of a specific course based on its id.

```
public Course findById(long id) {
  for (Course course: courses) {
    if (course.getId() == id) {
      return course;
    }
  }
  return null;
}
```

Let's add a `getCourse` method to the `CourseResource` class. It exposes the GET method to get the details of a specific course based on id.

```
@GetMapping("/instructors/{username}/courses/{id}")
public Course getCourse(@PathVariable String username, @PathVariable long id) {
  return courseManagementService.findById(id);
}
```

#### Invoking the API from Course Component

> How do we retrieve the course details from the Vue frontend?

Let's add a `retrieveCourse` method to `CourseDataService`

```javascript
retrieveCourse(name, id) {
  return axios.get(`${INSTRUCTOR_API_URL}/courses/${id}`);
}
```

We would want to call the retrieveCourse method in CourseDataService on the load of CourseComponent.

> How do we do it? `created` is the right solution.

Before we get to it, we would need to be able to get the course id from the URL. In the course details page, we are redirecting to the URL `/courses/${id}`. From the path parameter, we would want to capture the id. We can use `this.$route.params.id` to get the id from path parameters.

The code listing below shows the updated CourseComponent.
```javascript
<template>
  <div>
    <h3>Course</h3>
    <div>{% raw %}{{id}}{% endraw %}</div>
    <div>{% raw %}{{description}}{% endraw %}</div>
  </div>
</template>

<script>
import CourseDataService from '../service/CourseDataService';

export default {
  name: "courseDetails",
  data() {
    return {
      description: "",
      INSTRUCTOR: "in28minutes",
      errors: []
    };
  },
  computed: {
    id() {
      return this.$route.params.id;
    }
  },
  methods: {
    refreshCourseDetails() {
      CourseDataService.retrieveCourse(this.INSTRUCTOR, this.id).then(res => {
        this.description = res.data.description;
      });
    },
  },
  created() {
    this.refreshCourseDetails();
  }
};
</script>

<style>
</style>
```

We are setting the details of the course into the component's data. Note that values can also be computed at runtime using the `computed` option. Here, `id` is taken from the URL path parameters:

```javascript
computed: {
  id() {
    return this.$route.params.id;
  }
},
```

In `created`, we call `refreshCourseDetails`, which uses `CourseDataService.retrieveCourse` to get the details of a course. Once we have the details, we update the component state.

```javascript
CourseDataService.retrieveCourse(this.INSTRUCTOR, this.id).then(res => {
  this.description = res.data.description;
});
```

We are updating the `template` to show the course details from the component.

```html
<div>
  <h3>Course</h3>
  <div>{% raw %}{{id}}{% endraw %}</div>
  <div>{% raw %}{{description}}{% endraw %}</div>
</div>
```

When you try to update a course, you would see the screen below.

![Image](/images/vue-2-course-details-update.png "Course Component Second Version")

When you try to create a course, you would see the screen below.

![Image](/images/vue-3-course-details-create.png "Course Component Second Version")

#### Create a Form

Now that we have loaded up the details of a specific course, let's shift our attention to editing them and saving them back to the backend.

Let's now create a simple form.

```javascript
<template>
  <div>
    <h3>Course</h3>
    <div class="container">
      <form>
        <fieldset class="form-group">
          <label>Id</label>
          <input type="text" class="form-control" v-model="id" disabled>
        </fieldset>
        <fieldset class="form-group">
          <label>Description</label>
          <input type="text" class="form-control" v-model="description">
        </fieldset>
        <button class="btn btn-success" type="submit">Save</button>
      </form>
    </div>
  </div>
</template>
```

Following are some of the important details:

- Form creation in Vue is similar to creating a form in plain HTML.
- `v-model` binds an input to the component's data; the same applies to `description`. Initial values are displayed automatically.
- `<input type="text" class="form-control" v-model="id" disabled>` - Creates a disabled text element for `id`. The `v-model` value should match a data (or computed) property.
- `<input type="text" class="form-control" v-model="description">` - Creates a text element for `description`.
- `<button class="btn btn-success" type="submit">Save</button>` - Adds a submit button.

When you try to update a course, you would see the screen below.

![Image](/images/vue-4-course-details-form.png "Course Component Fourth Version")

#### Adding Handling of Submit Event

Let's try to handle the Submit event now.

Let's create a `validateAndSubmit` method to log the values:

```javascript
methods: {
  ...
  validateAndSubmit(e) {
    console.log({
      id: this.id,
      description: this.description
    })
  }
}
```

It's time to tie up the form with the submit method. The key snippet is `@submit="validateAndSubmit"`.

```html
<form @submit="validateAndSubmit">
```

When you click Submit, the form details are now printed to the console.
```javascript
{id: "1", description: "Learn Full stack with Spring Boot and Vue"}
```

#### Adding Validation

What's a form without validation? Let's add validations to the `validateAndSubmit` method:

```javascript
validateAndSubmit(e) {
  e.preventDefault();
  this.errors = [];
  if (!this.description) {
    this.errors.push("Enter valid values");
  } else if (this.description.length < 5) {
    this.errors.push("Enter at least 5 characters in Description");
  }
}
```

We are adding two validations:

- check for an empty description
- check for a minimum length of 5

You can add other validations as you need. If you run the page right now and submit an invalid description, you would see that the validations prevent the form from getting submitted.

> How do we see validation messages on the screen?

Let's render the error messages in the form:

```html
<form @submit="validateAndSubmit">
  <div v-if="errors.length">
    <div class="alert alert-warning" v-bind:key="index" v-for="(error, index) in errors">{% raw %}{{error}}{% endraw %}</div>
  </div>
  ...
</form>
```

When you try to update a course, you would see the screen below.

![Image](/images/vue-5-course-details-validation-error.png "Course Component Fifth Version")

### Updating Course Details on click of submit

Now that the form is ready, we would want to call the backend API to save the course details. Let's quickly create the APIs to update and create courses.

#### Create API to Update Course

Let's add a `save` method to `CoursesHardcodedService` to handle the creation and update of a course.

```java
public Course save(Course course) {
  if (course.getId() == -1 || course.getId() == 0) {
    course.setId(++idCounter);
    courses.add(course);
  } else {
    deleteById(course.getId());
    courses.add(course);
  }
  return course;
}
```

Let's add a method to the Resource class to update the course. We are using the PUT method. On a successful update, we return a 200 status with the updated course details in the body.

```java
@PutMapping("/instructors/{username}/courses/{id}")
public ResponseEntity<Course> updateCourse(@PathVariable String username, @PathVariable long id,
    @RequestBody Course course) {
  Course courseUpdated = courseManagementService.save(course);
  return new ResponseEntity<Course>(courseUpdated, HttpStatus.OK);
}
```

#### Adding API to Create Course

Let's add a method to the Resource class to create the course. We are using the POST method. On course creation, we return a status of CREATED, with the location of the created resource in the `Location` header.

```java
@PostMapping("/instructors/{username}/courses")
public ResponseEntity<Void> createCourse(@PathVariable String username, @RequestBody Course course) {
  Course createdCourse = courseManagementService.save(course);

  // Location
  // Get current resource url
  /// {id}
  URI uri = ServletUriComponentsBuilder.fromCurrentRequest().path("/{id}").buildAndExpand(createdCourse.getId())
      .toUri();

  return ResponseEntity.created(uri).build();
}
```

#### Invoking Update and Create APIs from Course Screen

Now that the REST API is ready, let's create the frontend methods to call it.

Let's create the respective methods in `CourseDataService`. `updateCourse` uses a `put` and `createCourse` uses a `post`.

```javascript
class CourseDataService {

  updateCourse(name, id, course) {
    return axios.put(`${INSTRUCTOR_API_URL}/courses/${id}`, course);
  }

  createCourse(name, course) {
    return axios.post(`${INSTRUCTOR_API_URL}/courses/`, course);
  }
}
```

Let's update the `CourseComponent` to invoke the right service on the click of the submit button.
```javascript
validateAndSubmit(e) {
  e.preventDefault();
  this.errors = [];
  if (!this.description) {
    this.errors.push("Enter valid values");
  } else if (this.description.length < 5) {
    this.errors.push("Enter at least 5 characters in Description");
  }
  if (this.errors.length === 0) {
    if (Number(this.id) === -1) { // route params are strings, so convert before comparing
      CourseDataService.createCourse(this.INSTRUCTOR, {
        description: this.description
      }).then(() => {
        this.$router.push('/courses');
      });
    } else {
      CourseDataService.updateCourse(this.INSTRUCTOR, this.id, {
        id: this.id,
        description: this.description
      }).then(() => {
        this.$router.push('/courses');
      });
    }
  }
}
```

We are creating a course object with the updated details and calling the appropriate method on the `CourseDataService`. Once the request is successful, we are redirecting the user to the course listing page using `this.$router.push('/courses')`.

> Congratulations! You are reading an article from a series of 50+ articles on Spring Boot and Vue. We also have 20+ projects on our Github repository. For the complete set of 50+ articles and code examples, [click here](http://www.springboottutorial.com/spring-boot-tutorials-for-beginners).

## Next Steps

You can pursue our amazing courses on Full Stack Development and Microservices.

[![Image](/images/Course-Go-Full-Stack-With-Spring-Boot-and-React.png "Go Full Stack with Spring Boot and React")](https://links.in28minutes.com/MISC-REACT){:target="_blank"}

[![Image](/images/Course-Go-Full-Stack-With-SpringBoot-And-Angular.png "Go Full Stack with Spring Boot and Angular")](https://links.in28minutes.com/MISC-ANGULAR){:target="_blank"}

[![Image](/images/Course-Master-Microservices-with-Spring-Boot-and-Spring-Cloud.png "Master Microservices with Spring Boot and Spring Cloud")](https://links.in28minutes.com/MISC-MICROSERVICES){:target="_blank"}

## Complete Code Example

---

### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/public/index.html

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <link rel="icon" href="<%= BASE_URL %>favicon.ico">
    <title>basic-vue</title>
  </head>
  <body>
    <noscript>
      <strong>We're sorry but basic-vue doesn't work properly without JavaScript enabled.
Please enable it to continue.</strong> </noscript> <div id="app"></div> <!-- built files will be auto injected --> </body> </html> ``` --- ### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/src/main.js ```js import Vue from 'vue' import App from './App.vue' import router from './routes'; Vue.config.productionTip = false new Vue({ router, render: h => h(App), }).$mount('#app') ``` --- ### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/src/App.vue ```html <template> <div class="container"> <router-view/> </div> </template> <script> import InstructorApp from './components/InstructorApp.vue' export default { name: 'In28Minutes', components: { InstructorApp } } </script> <style> @import url(https://unpkg.com/[email protected]/dist/css/bootstrap.min.css); </style> ``` --- ### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/src/routes.js ```js import Vue from "vue"; import Router from "vue-router"; Vue.use(Router); const router = new Router({ mode: 'history', // Use browser history routes: [ { path: "/", name: "Home", component: () => import("./components/ListCoursesComponent"), }, { path: "/courses", name: "Courses", component: () => import("./components/ListCoursesComponent"), }, { path: "/courses/:id", name: "Courses Details", component: () => import("./components/CourseComponent"), }, ] }); export default router; ``` --- ### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/src/components/InstructorApp.vue ```html <template> <div> <h1>Instructor Application</h1> <ListCoursesComponent></ListCoursesComponent> </div> </template> <script> import ListCoursesComponent from "./ListCoursesComponent"; export default { name: "InstructorApp", components: { ListCoursesComponent } }; </script> <style scoped> </style> ``` --- ### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/src/components/ListCoursesComponent.vue ```html <template> <div class="container"> <h3>All Courses</h3> <div v-if="message" class="alert alert-success">{{message}}</div> <div class="container"> <table class="table"> <thead> <tr> <th>Id</th> <th>Description</th> <th>Update</th> <th>Delete</th> </tr> </thead> <tbody> <tr v-for="course in courses" v-bind:key="course.id"> <td>{{course.id}}</td> <td>{{course.description}}</td> <td><button class="btn btn-success" v-on:click="updateCourseClicked(course.id)">Update</button></td> <td><button class="btn btn-warning" v-on:click="deleteCourseClicked(course.id)">Delete</button></td> </tr> </tbody> </table> <div class="row"> <button class="btn btn-success" v-on:click="addCourseClicked()">Add</button> </div> </div> </div> </template> <script> import CourseDataService from "../service/CourseDataService"; export default { name: "CoursesList", data() { return { courses: [], message: null, INSTRUCTOR: "in28minutes" }; }, methods: { refreshCourses() { CourseDataService.retrieveAllCourses(this.INSTRUCTOR) //HARDCODED .then(response => { this.courses = response.data; }); }, updateCourseClicked(id) { this.$router.push(`/courses/${id}`); }, addCourseClicked() { this.$router.push(`/courses/-1`); }, deleteCourseClicked(id) { CourseDataService.deleteCourse(this.INSTRUCTOR, id).then(() => { this.message = `Delete of course ${id} Successful`; this.refreshCourses(); }); } }, created() { this.refreshCourses(); } }; </script> <style> </style> ``` --- ### 
/spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/src/components/CourseComponent.vue

```html
<template>
  <div>
    <h3>Course</h3>
    <div class="container">
      <form @submit="validateAndSubmit">
        <div v-if="errors.length">
          <div
            class="alert alert-warning"
            v-bind:key="index"
            v-for="(error, index) in errors"
          >{{error}}</div>
        </div>
        <fieldset class="form-group">
          <label>Id</label>
          <input type="text" class="form-control" v-model="id" disabled />
        </fieldset>
        <fieldset class="form-group">
          <label>Description</label>
          <input type="text" class="form-control" v-model="description" />
        </fieldset>
        <button class="btn btn-success" type="submit">Save</button>
      </form>
    </div>
  </div>
</template>

<script>
import CourseDataService from "../service/CourseDataService";

export default {
  name: "courseDetails",
  data() {
    return {
      description: "",
      INSTRUCTOR: "in28minutes",
      errors: []
    };
  },
  computed: {
    id() {
      return this.$route.params.id;
    }
  },
  methods: {
    refreshCourseDetails() {
      CourseDataService.retrieveCourse(this.INSTRUCTOR, this.id).then(res => {
        this.description = res.data.description;
      });
    },
    validateAndSubmit(e) {
      e.preventDefault();
      this.errors = [];
      if (!this.description) {
        this.errors.push("Enter valid values");
      } else if (this.description.length < 5) {
        this.errors.push("Enter at least 5 characters in Description");
      }
      if (this.errors.length === 0) {
        // route params are strings, so convert before comparing
        if (Number(this.id) === -1) {
          CourseDataService.createCourse(this.INSTRUCTOR, {
            description: this.description
          }).then(() => {
            this.$router.push("/courses");
          });
        } else {
          CourseDataService.updateCourse(this.INSTRUCTOR, this.id, {
            id: this.id,
            description: this.description
          }).then(() => {
            this.$router.push("/courses");
          });
        }
      }
    }
  },
  created() {
    this.refreshCourseDetails();
  }
};
</script>

<style>
</style>
```

---

### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/src/service/CourseDataService.js

```javascript
import axios from 'axios'

const INSTRUCTOR = 'in28minutes'
const COURSE_API_URL = 'http://localhost:8080'
const INSTRUCTOR_API_URL = `${COURSE_API_URL}/instructors/${INSTRUCTOR}`

class CourseDataService {

    retrieveAllCourses() {
        return axios.get(`${INSTRUCTOR_API_URL}/courses`);
    }

    deleteCourse(name, id) {
        return axios.delete(`${INSTRUCTOR_API_URL}/courses/${id}`);
    }

    retrieveCourse(name, id) {
        return axios.get(`${INSTRUCTOR_API_URL}/courses/${id}`);
    }

    updateCourse(name, id, course) {
        //console.log('executed service')
        return axios.put(`${INSTRUCTOR_API_URL}/courses/${id}`, course);
    }

    createCourse(name, course) {
        //console.log('executed service')
        return axios.post(`${INSTRUCTOR_API_URL}/courses/`, course);
    }
}

export default new CourseDataService()
```

---

### /spring-boot-vue-crud-full-stack-with-maven/frontend-spring-boot-vue-crud-full-stack-with-maven/package.json

```json
{
  "name": "basic-vue",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve --port 8081",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint"
  },
  "dependencies": {
    "axios": "^0.19.0",
    "core-js": "^2.6.5",
    "vue": "^2.6.10",
    "vue-router": "^3.1.3"
  },
  "devDependencies": {
    "@vue/cli-plugin-babel": "^3.11.0",
    "@vue/cli-plugin-eslint": "^3.11.0",
    "@vue/cli-service": "^3.11.0",
    "babel-eslint": "^10.0.1",
    "eslint": "^5.16.0",
    "eslint-plugin-vue": "^5.0.0",
    "vue-template-compiler": "^2.6.10"
  },
  "eslintConfig": {
    "root": true,
    "env": {
      "node": true
    },
    "extends": [
      "plugin:vue/essential",
      "eslint:recommended"
    ],
    "rules": {},
    "parserOptions": {
      "parser": "babel-eslint"
    }
  },
  "postcss": {
"plugins": { "autoprefixer": {} } }, "browserslist": [ "> 1%", "last 2 versions" ] } ``` --- ### /spring-boot-react-crud-full-stack-with-maven/backend-spring-boot-react-crud-full-stack-with-maven/pom.xml ```xml <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.1.3.RELEASE</version> <relativePath /> <!-- lookup parent from repository --> </parent> <groupId>com.in28minutes.fullstack.springboot.maven.crud</groupId> <artifactId>spring-boot-fullstack-crud-full-stack-with-maven</artifactId> <version>0.0.1-SNAPSHOT</version> <name>spring-boot-fullstack-crud-full-stack-with-maven</name> <description>Demo project for Spring Boot</description> <properties> <java.version>1.8</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> ``` --- ### /spring-boot-react-crud-full-stack-with-maven/backend-spring-boot-react-crud-full-stack-with-maven/src/test/java/com/in28minutes/fullstack/springboot/react/maven/crud/springbootreactcrudfullstackwithmaven/SpringBootReactCrudFullStackWithMavenApplicationTests.java ```java package com.in28minutes.fullstack.springboot.react.maven.crud.springbootreactcrudfullstackwithmaven; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.test.context.junit4.SpringRunner; @RunWith(SpringRunner.class) @SpringBootTest public class SpringBootReactCrudFullStackWithMavenApplicationTests { @Test public void contextLoads() { } } ``` --- ### /spring-boot-react-crud-full-stack-with-maven/backend-spring-boot-react-crud-full-stack-with-maven/src/main/resources/application.properties ```properties ``` --- ### /spring-boot-react-crud-full-stack-with-maven/backend-spring-boot-react-crud-full-stack-with-maven/src/main/java/com/in28minutes/fullstack/springboot/maven/crud/springbootcrudfullstackwithmaven/SpringBootFullStackCrudFullStackWithMavenApplication.java ```java package com.in28minutes.fullstack.springboot.maven.crud.springbootcrudfullstackwithmaven; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class SpringBootFullStackCrudFullStackWithMavenApplication { public static void main(String[] args) { SpringApplication.run(SpringBootFullStackCrudFullStackWithMavenApplication.class, args); } } ``` --- ### /spring-boot-react-crud-full-stack-with-maven/backend-spring-boot-react-crud-full-stack-with-maven/src/main/java/com/in28minutes/fullstack/springboot/maven/crud/springbootcrudfullstackwithmaven/course/CoursesHardcodedService.java ```java package 
com.in28minutes.fullstack.springboot.maven.crud.springbootcrudfullstackwithmaven.course; import java.util.ArrayList; import java.util.List; import org.springframework.stereotype.Service; @Service public class CoursesHardcodedService { private static List<Course> courses = new ArrayList<>(); private static long idCounter = 0; static { courses.add(new Course(++idCounter, "in28minutes", "Learn Full stack with Spring Boot and Angular")); courses.add(new Course(++idCounter, "in28minutes", "Learn Full stack with Spring Boot and React")); courses.add(new Course(++idCounter, "in28minutes", "Master Microservices with Spring Boot and Spring Cloud")); courses.add(new Course(++idCounter, "in28minutes", "Deploy Spring Boot Microservices to Cloud with Docker and Kubernetes")); } public List<Course> findAll() { return courses; } public Course save(Course course) { if (course.getId() == -1 || course.getId() == 0) { course.setId(++idCounter); courses.add(course); } else { deleteById(course.getId()); courses.add(course); } return course; } public Course deleteById(long id) { Course course = findById(id); if (course == null) return null; if (courses.remove(course)) { return course; } return null; } public Course findById(long id) { for (Course course : courses) { if (course.getId() == id) { return course; } } return null; } } ``` --- ### /spring-boot-react-crud-full-stack-with-maven/backend-spring-boot-react-crud-full-stack-with-maven/src/main/java/com/in28minutes/fullstack/springboot/maven/crud/springbootcrudfullstackwithmaven/course/Course.java ```java package com.in28minutes.fullstack.springboot.maven.crud.springbootcrudfullstackwithmaven.course; public class Course { private Long id; private String username; private String description; public Course() { } public Course(long id, String username, String description) { super(); this.id = id; this.username = username; this.description = description; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + ((description == null) ? 0 : description.hashCode()); result = prime * result + ((id == null) ? 0 : id.hashCode()); result = prime * result + ((username == null) ? 
0 : username.hashCode());
    return result;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj)
      return true;
    if (obj == null)
      return false;
    if (getClass() != obj.getClass())
      return false;
    Course other = (Course) obj;
    if (description == null) {
      if (other.description != null)
        return false;
    } else if (!description.equals(other.description))
      return false;
    if (id == null) {
      if (other.id != null)
        return false;
    } else if (!id.equals(other.id))
      return false;
    if (username == null) {
      if (other.username != null)
        return false;
    } else if (!username.equals(other.username))
      return false;
    return true;
  }

}
```

---

### /spring-boot-react-crud-full-stack-with-maven/backend-spring-boot-react-crud-full-stack-with-maven/src/main/java/com/in28minutes/fullstack/springboot/maven/crud/springbootcrudfullstackwithmaven/course/CourseResource.java

```java
package com.in28minutes.fullstack.springboot.maven.crud.springbootcrudfullstackwithmaven.course;

import java.net.URI;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.support.ServletUriComponentsBuilder;

// The Vue dev server runs on port 8081, so it must be allowed by CORS
@CrossOrigin(origins = { "http://localhost:3000", "http://localhost:4200", "http://localhost:8081" })
@RestController
public class CourseResource {

  @Autowired
  private CoursesHardcodedService courseManagementService;

  @GetMapping("/instructors/{username}/courses")
  public List<Course> getAllCourses(@PathVariable String username) {
    return courseManagementService.findAll();
  }

  @GetMapping("/instructors/{username}/courses/{id}")
  public Course getCourse(@PathVariable String username, @PathVariable long id) {
    return courseManagementService.findById(id);
  }

  @DeleteMapping("/instructors/{username}/courses/{id}")
  public ResponseEntity<Void> deleteCourse(@PathVariable String username, @PathVariable long id) {
    Course course = courseManagementService.deleteById(id);
    if (course != null) {
      return ResponseEntity.noContent().build();
    }
    return ResponseEntity.notFound().build();
  }

  @PutMapping("/instructors/{username}/courses/{id}")
  public ResponseEntity<Course> updateCourse(@PathVariable String username, @PathVariable long id,
      @RequestBody Course course) {
    Course courseUpdated = courseManagementService.save(course);
    return new ResponseEntity<Course>(courseUpdated, HttpStatus.OK);
  }

  @PostMapping("/instructors/{username}/courses")
  public ResponseEntity<Void> createCourse(@PathVariable String username, @RequestBody Course course) {
    Course createdCourse = courseManagementService.save(course);

    // Location
    // Get current resource url
    /// {id}
    URI uri = ServletUriComponentsBuilder.fromCurrentRequest().path("/{id}").buildAndExpand(createdCourse.getId())
        .toUri();

    return ResponseEntity.created(uri).build();
  }

}
```
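
One thing the code above does not handle is a failing API call: if the Spring Boot backend is down or CORS is misconfigured, the course table silently stays empty. Below is a minimal sketch (an optional enhancement, not part of the original code) of how `refreshCourses` in `ListCoursesComponent` could surface such failures through the existing `message` property:

```javascript
refreshCourses() {
  CourseDataService.retrieveAllCourses(this.INSTRUCTOR)
    .then(response => {
      this.courses = response.data;
    })
    .catch(error => {
      // Axios rejects the promise on network and HTTP errors; we reuse the
      // existing alert area (a separate, non-success style would fit better).
      this.message = `Could not load courses: ${error.message}`;
    });
}
```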
30.064103
292
0.691384
eng_Latn
0.701537
ff042311c4fe942f13918b45e805bd7abb5fc023
1,659
md
Markdown
pages/EventSummary/_includes/organization-ident-1-examples.md
AuDigitalHealth/ci-fhir-r4
35930822031bbfb00ee13fa1befa2b63420734aa
[ "Apache-2.0" ]
12
2019-08-12T06:23:37.000Z
2022-03-01T00:55:22.000Z
pages/EventSummary/_includes/organization-ident-1-examples.md
AuDigitalHealth/ci-fhir-r4
35930822031bbfb00ee13fa1befa2b63420734aa
[ "Apache-2.0" ]
145
2019-08-13T00:28:52.000Z
2022-03-31T03:59:22.000Z
pages/EventSummary/_includes/organization-ident-1-examples.md
AuDigitalHealth/ci-fhir-r4
35930822031bbfb00ee13fa1befa2b63420734aa
[ "Apache-2.0" ]
2
2020-07-01T06:35:41.000Z
2021-07-19T03:07:13.000Z
<table class="list" width="100%"> <tr> <td><a href="Organization-1.2.36.1.2001.1001.105.100001.5437512835128.html">Organization example 1</a></td> <td>1.2.36.1.2001.1001.105.100001.5437512835128</td> <td><a href="Organization-1.2.36.1.2001.1001.105.100001.5437512835128.xml.html">XML</a></td> <td><a href="Organization-1.2.36.1.2001.1001.105.100001.5437512835128.json.html">JSON</a></td> <td><a href="Organization-1.2.36.1.2001.1001.105.100001.5437512835128.ttl.html">Turtle</a></td> <td></td> </tr> <tr> <td><a href="Organization-021fff67-c5ec-438f-9520-ce9bafee1306.html">Organization example 2</a></td> <td>021fff67-c5ec-438f-9520-ce9bafee1306</td> <td><a href="Organization-021fff67-c5ec-438f-9520-ce9bafee1306.xml.html">XML</a></td> <td><a href="Organization-021fff67-c5ec-438f-9520-ce9bafee1306.json.html">JSON</a></td> <td><a href="Organization-021fff67-c5ec-438f-9520-ce9bafee1306.ttl.html">Turtle</a></td> <td></td> </tr> <tr> <td><a href="Organization-hallam-medical-clinic.html">Organization example 3</a></td> <td>hallam-medical-clinic</td> <td><a href="Organization-hallam-medical-clinic.xml.html">XML</a></td> <td><a href="Organization-hallam-medical-clinic.json.html">JSON</a></td> <td><a href="Organization-hallam-medical-clinic.ttl.html">Turtle</a></td> <td></td> </tr> </table>
63.807692
123
0.56299
yue_Hant
0.403721
ff044f5d853f5953c7d889e93ddb2373381c5117
2,332
md
Markdown
docs/fdrs/config_path.md
Katolus/functions
c4aff37231432ce6ef4ed6b37c8b5baaede5975a
[ "MIT" ]
4
2022-03-08T08:46:44.000Z
2022-03-19T07:52:11.000Z
docs/fdrs/config_path.md
Katolus/functions
c4aff37231432ce6ef4ed6b37c8b5baaede5975a
[ "MIT" ]
114
2021-10-30T05:48:54.000Z
2022-03-06T10:57:00.000Z
docs/fdrs/config_path.md
Katolus/functions
c4aff37231432ce6ef4ed6b37c8b5baaede5975a
[ "MIT" ]
null
null
null
# Where do we store configuration files?

* Status: Accepted
* Deciders: [Piotr]
* Date: 2022-01-08

## Work context

We want the tool to have an easily persisted state in which information about managed functions can be stored, to enable better management and enhanced development capabilities.

In this document we want to find a good solution for storing all the configuration files and future application-specific files in a place where they can be accessed by all components that need them. A unified strategy that works well across different operating systems and specific OS distributions.

## Problem

Since we decided to use a file-based system to persist information about the state of the tool, we face the problem of finding the base place to store it.

Unfortunately, there is no single pattern, widely approved and implemented by developers working across systems, that instructs an application where to store its config data in a way that is best for the health of the files and the whole system.

For example, Linux application developers have the concept of an `XDG_CONFIG_HOME` environment variable; it is often unset, in which case applications conventionally fall back to the `.config` directory in the user's home directory. This, however, is unique to Linux and does not carry over to other systems, hence we need a way of tackling the issue that enables usage of this application on Windows and macOS as well.

## Proposal

The solution is relatively naive: use a condition-based definition of a default system config path. Each OS type points to a different config path.

```python
import os
import sys

# Set system constants based on the current platform
if sys.platform.startswith("win32"):
    DEFAULT_SYSTEM_CONFIG_PATH = os.path.join(os.environ["APPDATA"], "config")
elif sys.platform.startswith("linux"):
    DEFAULT_SYSTEM_CONFIG_PATH = os.path.join(os.environ["HOME"], ".config")
elif sys.platform.startswith("darwin"):
    DEFAULT_SYSTEM_CONFIG_PATH = os.path.join(
        os.environ["HOME"], "Library", "Application Support"
    )
else:
    DEFAULT_SYSTEM_CONFIG_PATH = os.path.join(os.environ["HOME"], "config")
```

It is hard to make sure that all operating systems are accounted for in this definition, so an update might be required in the future.

<!-- Identifiers, in alphabetical order -->

[Piotr]: https://github.com/Katolus
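
As a possible refinement, and not part of the accepted proposal above, the Linux branch could honour `XDG_CONFIG_HOME` when the user has set it, falling back to `~/.config` otherwise. A sketch:

```python
import os
import sys


def default_system_config_path() -> str:
    """Return a per-OS base directory for the tool's configuration files."""
    if sys.platform.startswith("win32"):
        return os.path.join(os.environ["APPDATA"], "config")
    if sys.platform.startswith("linux"):
        # Respect XDG_CONFIG_HOME if set; otherwise use the conventional fallback.
        xdg_config_home = os.environ.get("XDG_CONFIG_HOME")
        return xdg_config_home or os.path.join(os.environ["HOME"], ".config")
    if sys.platform.startswith("darwin"):
        return os.path.join(os.environ["HOME"], "Library", "Application Support")
    return os.path.join(os.environ["HOME"], "config")
```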
53
466
0.779588
eng_Latn
0.997589
ff04d221507e6b0b5057d7c5b041e6571243ce57
276
md
Markdown
python/README.md
irvlust/Navigation
c3735a3a0a53c4631e1485188f3ce79546f089b7
[ "Apache-2.0" ]
null
null
null
python/README.md
irvlust/Navigation
c3735a3a0a53c4631e1485188f3ce79546f089b7
[ "Apache-2.0" ]
7
2019-12-16T22:13:37.000Z
2022-02-10T01:05:42.000Z
python/README.md
aopina1/DRLND-Course
1140e101221cfcee16a24d146b4f86cd79d53832
[ "MIT" ]
null
null
null
# Dependencies This is an amended version of the `python/` folder from the [ML-Agents repository](https://github.com/Unity-Technologies/ml-agents). It has been edited to include a few additional pip packages needed for the Deep Reinforcement Learning Nanodegree program.
69
257
0.789855
eng_Latn
0.998873
ff04e56430507be9d43ae7d4bccb3610a6589b14
2,102
md
Markdown
exporters/stats/signalfx/README.md
v-antech/opencensus-java
726168433b51e4ce2f801f59bae7c3803bf42468
[ "Apache-2.0" ]
null
null
null
exporters/stats/signalfx/README.md
v-antech/opencensus-java
726168433b51e4ce2f801f59bae7c3803bf42468
[ "Apache-2.0" ]
null
null
null
exporters/stats/signalfx/README.md
v-antech/opencensus-java
726168433b51e4ce2f801f59bae7c3803bf42468
[ "Apache-2.0" ]
null
null
null
# OpenCensus SignalFx Stats Exporter The _OpenCensus SignalFx Stats Exporter_ is a stats exporter that exports data to [SignalFx](https://signalfx.com), a real-time monitoring solution for cloud and distributed applications. SignalFx ingests that data and offers various visualizations on charts, dashboards and service maps, as well as real-time anomaly detection. ## Quickstart ### Prerequisites To use this exporter, you must have a [SignalFx](https://signalfx.com) account and corresponding [data ingest token](https://docs.signalfx.com/en/latest/admin-guide/tokens.html). #### Java versions This exporter requires Java 7 or above. ### Add the dependencies to your project For Maven add to your `pom.xml`: ```xml <dependencies> <dependency> <groupId>io.opencensus</groupId> <artifactId>opencensus-api</artifactId> <version>0.23.0</version> </dependency> <dependency> <groupId>io.opencensus</groupId> <artifactId>opencensus-exporter-stats-signalfx</artifactId> <version>0.23.0</version> </dependency> <dependency> <groupId>io.opencensus</groupId> <artifactId>opencensus-impl</artifactId> <version>0.23.0</version> <scope>runtime</scope> </dependency> </dependencies> ``` For Gradle add to your dependencies: ```groovy compile 'io.opencensus:opencensus-api:0.23.0' compile 'io.opencensus:opencensus-exporter-stats-signalfx:0.23.0' runtime 'io.opencensus:opencensus-impl:0.23.0' ``` ### Register the exporter ```java public class MyMainClass { public static void main(String[] args) { // SignalFx token is read from Java system properties. // Stats will be reported every second by default. SignalFxStatsExporter.create(SignalFxStatsConfiguration.builder().build()); } } ``` If you want to pass in the token yourself, or set a different reporting interval, use: ```java // Use token "your_signalfx_token" and report every 5 seconds. SignalFxStatsExporter.create( SignalFxStatsConfiguration.builder() .setToken("your_signalfx_token") .setExportInterval(Duration.create(5, 0)) .build()); ```
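
Once the exporter is registered, any stats recorded through the OpenCensus API are shipped to SignalFx at the configured interval. The sketch below registers a view and records one data point against it; the measure and view names are made up for this illustration:

```java
import io.opencensus.stats.Aggregation;
import io.opencensus.stats.Measure.MeasureLong;
import io.opencensus.stats.Stats;
import io.opencensus.stats.View;
import io.opencensus.tags.TagKey;

import java.util.Collections;

public class RecordStatsExample {
  // Hypothetical measure: a dimensionless counter of handled requests.
  private static final MeasureLong REQUESTS =
      MeasureLong.create("my.org/measures/requests", "Number of requests", "1");

  public static void main(String[] args) {
    // A view must be registered so that the measure is aggregated and exported.
    View view =
        View.create(
            View.Name.create("my.org/views/request_count"),
            "Count of requests",
            REQUESTS,
            Aggregation.Count.create(),
            Collections.<TagKey>emptyList());
    Stats.getViewManager().registerView(view);

    // Record a single data point; the exporter picks it up on the next report.
    Stats.getStatsRecorder().newMeasureMap().put(REQUESTS, 1).record();
  }
}
```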
27.298701
79
0.735966
eng_Latn
0.778263
ff04f1ecb7e4ec732830fc76b58a4cc69757751b
88
md
Markdown
README.md
ROVERx99/website
146398abb2c0af7f7d9c8bb2a111da40a6089967
[ "MIT" ]
null
null
null
README.md
ROVERx99/website
146398abb2c0af7f7d9c8bb2a111da40a6089967
[ "MIT" ]
null
null
null
README.md
ROVERx99/website
146398abb2c0af7f7d9c8bb2a111da40a6089967
[ "MIT" ]
null
null
null
https://cdn.discordapp.com/attachments/886351621334904886/932367057469718528/image0.jpg
44
87
0.875
kor_Hang
0.137972
ff04f6c22dc87f18a1bb9adda6f8dce22550bfd1
190
md
Markdown
README.md
ricban/net5-test
ab6f79f439040f9f90003ec95207a877988f7998
[ "MIT" ]
null
null
null
README.md
ricban/net5-test
ab6f79f439040f9f90003ec95207a877988f7998
[ "MIT" ]
null
null
null
README.md
ricban/net5-test
ab6f79f439040f9f90003ec95207a877988f7998
[ "MIT" ]
null
null
null
# COVID-19 Pandemic Website for tracking the COVID-19 Pandemic Built with * Blazor WebAssembly * Radzen Blazor Components * Bootstrap * Chart.js * CountUp.js * TypeScript * Webpack * SCSS
13.571429
42
0.763158
kor_Hang
0.488919
ff05beb54e58d00ea5b2effcc1f892d4f53c4957
9,943
md
Markdown
README.md
ruurtjan/zio-kafka
87b057a419fc25cd12d2a037e6ff4e8e93adcf83
[ "Apache-2.0" ]
null
null
null
README.md
ruurtjan/zio-kafka
87b057a419fc25cd12d2a037e6ff4e8e93adcf83
[ "Apache-2.0" ]
null
null
null
README.md
ruurtjan/zio-kafka
87b057a419fc25cd12d2a037e6ff4e8e93adcf83
[ "Apache-2.0" ]
null
null
null
[![Release Artifacts][Badge-SonatypeReleases]][Link-SonatypeReleases] # Welcome to ZIO Kafka ZIO Kafka provides a purely functional, streams-based interface to the Kafka client. It integrates effortlessly with ZIO and ZIO Streams. ## Contents - [Quickstart](#quickstart) - [Consuming Kafka topics using ZIO Streams](#consuming-kafka-topics-using-zio-streams) - [Example: consuming, producing and committing offset](#example-consuming-producing-and-committing-offset) - [Partition assignment and offset retrieval](#partition-assignment-and-offset-retrieval) - [Custom data type serdes](#custom-data-type-serdes) - [Handling deserialization failures](#handling-deserialization-failures) - [Getting help](#getting-help) - [Credits](#credits) - [Legal](#legal) ## Quickstart Add the following dependencies to your `build.sbt` file: ``` libraryDependencies ++= Seq( "dev.zio" %% "zio-streams" % "1.0.0-RC18-2", "dev.zio" %% "zio-kafka" % "<version>" ) ``` Somewhere in your application, configure the `zio.kafka.ConsumerSettings` data type: ```scala import zio._, zio.duration._ import zio.kafka.consumer._ val settings: ConsumerSettings = ConsumerSettings(List("localhost:9092")) .withGroupId("group") .withClientId("client") .withCloseTimeout(30.seconds) ``` For a lot of use cases where you just want to do something with all messages on a Kafka topic, ZIO Kafka provides the convenience method `Consumer.consumeWith`. This method lets you execute a ZIO effect for each message. Topic partitions will be processed in parallel and offsets are committed after running the effect automatically. ```scala import zio._ import zio.console._ import zio.kafka.consumer._ import zio.kafka.serde._ val subscription = Subscription.topics("topic") Consumer.consumeWith(settings, subscription, Serde.string, Serde.string) { case (key, value) => putStrLn(s"Received message ${key}: ${value}") // Perform an effect with the received message } ``` If you require more control over the consumption process, read on! ## Consuming Kafka topics using ZIO Streams First, create a consumer using the ConsumerSettings instance and the appropriate deserializers: ```scala import zio.ZLayer, zio.blocking.Blocking, zio.clock.Clock import zio.kafka.consumer.{ Consumer, ConsumerSettings } import zio.kafka.serde.Deserializer val consumerSettings: ConsumerSettings = ConsumerSettings(List("localhost:9092")).withGroupId("group") val consumer: ZLayer[Clock with Blocking, Throwable, Consumer[Any, String, String]] = Consumer.make(consumerSettings, Deserializer.string, Deserializer.string) ``` More deserializers are available on the `Deserializer` object. The consumer returned from `Consumer.make` is wrapped in a `ZLayer` to allow for easy composition with other ZIO environment components. You may provide that layer to effects that require a consumer. 
Here's an example:

```scala
import zio._, zio.blocking.Blocking, zio.clock.Clock
import zio.kafka.consumer._

val data: RIO[Clock with Blocking, List[CommittableRecord[String, String]]] =
  (Consumer.subscribe[Any, String, String](Subscription.topics("topic")) *>
    Consumer.plainStream[Any, String, String].take(50).runCollect)
    .provideSomeLayer(consumer)
```

You may stream data from Kafka using the `subscribeAnd` and `plainStream` methods:

```scala
import zio.blocking.Blocking, zio.clock.Clock, zio.console.putStrLn
import zio.stream._
import zio.kafka.consumer._

Consumer.subscribeAnd[Any, String, String](Subscription.topics("topic150"))
  .plainStream
  .flattenChunks
  .tap(cr => putStrLn(s"key: ${cr.record.key}, value: ${cr.record.value}"))
  .map(_.offset)
  .aggregateAsync(Consumer.offsetBatches)
  .mapM(_.commit)
  .runDrain
```

If you need to distinguish between the different partitions assigned to the consumer, you may use the `Consumer#partitionedStream` method, which creates a nested stream of partitions:

```scala
import zio.blocking.Blocking, zio.clock.Clock, zio.console.putStrLn
import zio.stream._
import zio.kafka.consumer._

Consumer.subscribeAnd[Any, String, String](Subscription.topics("topic150"))
  .partitionedStream
  .tap(tpAndStr => putStrLn(s"topic: ${tpAndStr._1.topic}, partition: ${tpAndStr._1.partition}"))
  .flatMap(_._2.flattenChunks)
  .tap(cr => putStrLn(s"key: ${cr.record.key}, value: ${cr.record.value}"))
  .map(_.offset)
  .aggregateAsync(Consumer.offsetBatches)
  .mapM(_.commit)
  .runDrain
```

## Example: consuming, producing and committing offset

This example shows how to consume messages from topic `my-input-topic` and produce transformed messages to `my-output-topic`, after which consumer offsets are committed. Processing is done in chunks for more efficiency.

```scala
import zio.kafka.consumer._
import zio.kafka.producer._
import zio.kafka.serde._
import org.apache.kafka.clients.producer.ProducerRecord

val consumerSettings: ConsumerSettings = ConsumerSettings(List("localhost:9092")).withGroupId("group")
val producerSettings: ProducerSettings = ProducerSettings(List("localhost:9092"))

val consumerAndProducer =
  Consumer.make(consumerSettings, Serde.int, Serde.long) ++
    Producer.make(producerSettings, Serde.int, Serde.string)

val consumeProduceStream = Consumer
  .subscribeAnd[Any, Int, Long](Subscription.topics("my-input-topic"))
  .plainStream
  .map { record =>
    val key: Int = record.record.key()
    val value: Long = record.record.value()
    val newValue: String = value.toString

    val producerRecord: ProducerRecord[Int, String] = new ProducerRecord("my-output-topic", key, newValue)
    (producerRecord, record.offset)
  }
  .chunks
  .mapM { chunk =>
    val records = chunk.map(_._1)
    val offsetBatch = OffsetBatch(chunk.map(_._2).toSeq)

    Producer.produceChunk[Any, Int, String](records) *> offsetBatch.commit
  }
  .runDrain
  .provideSomeLayer(consumerAndProducer)
```

## Partition assignment and offset retrieval

`zio-kafka` offers several ways to control which Kafka topics and partitions are assigned to your application.
| Use case | Method |
| --- | --- |
| One or more topics, automatic partition assignment | `Consumer.subscribe(Subscription.topics("my_topic", "other_topic"))` |
| Topics matching a pattern | `Consumer.subscribe(Subscription.pattern("topic.*"))` |
| Manual partition assignment | `Consumer.subscribe(Subscription.manual("my_topic" -> 1, "my_topic" -> 2))` |

By default `zio-kafka` will start streaming a partition from the last committed offset for the consumer group, or the latest message on the topic if no offset has yet been committed. You can also choose to store offsets outside of Kafka. This can be useful in cases where consistency between data stores and consumer offsets is required.

| Use case | Method |
| --- | --- |
| Offsets in Kafka, start at latest message if no offset committed | `OffsetRetrieval.Auto` |
| Offsets in Kafka, start at earliest message if no offset committed | `OffsetRetrieval.Auto(AutoOffsetStrategy.Earliest)` |
| Manual/external offset storage | `Manual(getOffsets: Set[TopicPartition] => Task[Map[TopicPartition, Long]])` |

For manual offset retrieval, the `getOffsets` function will be called for each topic-partition that is assigned to the consumer, either via Kafka's rebalancing or via a manual assignment.

## Custom data type serdes

Serializers and deserializers (serdes) for custom data types can be constructed from scratch or by converting existing serdes. For example, to create a serde for an `Instant`:

```scala
import java.time.Instant
import zio.kafka.serde._

val instantSerde: Serde[Any, Instant] = Serde.long.inmap(java.time.Instant.ofEpochMilli)(_.toEpochMilli)
```

## Handling deserialization failures

The default behavior for a consumer stream when encountering a deserialization failure is to fail the stream. In many cases you may want to handle this situation differently, e.g. by skipping the message that failed to deserialize or by executing an alternative effect. For this purpose, any `Deserializer[T]` for some type `T` can be easily converted into a `Deserializer[Try[T]]` where deserialization failures are converted to a `Failure` using the `asTry` method.

Below is an example of skipping messages that fail to deserialize. The offset is passed downstream to be committed.

```scala
import zio.blocking.Blocking, zio.clock.Clock, zio.console.putStrLn
import zio.stream._
import zio.kafka.consumer._
import zio.kafka.serde._
import scala.util.{Try, Success, Failure}
import zio._

val consumer = Consumer.make(consumerSettings, Serde.string, Serde.string.asTry)

val stream = Consumer
  .subscribeAnd[Any, String, Try[String]](Subscription.topics("topic150"))
  .plainStream

stream
  .mapM { record =>
    val tryValue: Try[String] = record.record.value()
    val offset: Offset = record.offset

    tryValue match {
      case Success(value) =>
        // Action for successful deserialization
        someEffect(value).as(offset)
      case Failure(exception) =>
        // Possibly log the exception or take alternative action
        ZIO.succeed(offset)
    }
  }
  .flattenChunks
  .aggregateAsync(Consumer.offsetBatches)
  .mapM(_.commit)
  .runDrain
  .provideSomeLayer(consumer)
```

## Getting help

Join us on the [ZIO Discord server](https://discord.gg/2ccFBr4) at the `#zio-kafka` channel.

## Credits

This library is heavily inspired and made possible by the research and implementation done in [Alpakka Kafka](https://github.com/akka/alpakka-kafka), a library maintained by the Akka team and originally written as Reactive Kafka by SoftwareMill.

## Legal

Copyright 2019 Itamar Ravid and the zio-kafka contributors. All rights reserved.
[Link-SonatypeReleases]: https://oss.sonatype.org/content/repositories/releases/dev/zio/zio-kafka_2.12/ "Sonatype Releases" [Badge-SonatypeReleases]: https://img.shields.io/nexus/r/https/oss.sonatype.org/dev.zio/zio-kafka_2.12.svg "Sonatype Releases"
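
As an illustration of the manual offset retrieval described above, here is a sketch of wiring up a `getOffsets` function. The external-store lookup is hypothetical; depending on the zio-kafka version, `OffsetRetrieval` may be nested under the `Consumer` object, and the `withOffsetRetrieval` setter is also an assumption, so check the API you are compiling against:

```scala
import org.apache.kafka.common.TopicPartition
import zio._
import zio.kafka.consumer._

// Hypothetical lookup in an external store (e.g. a database) returning,
// for each assigned topic-partition, the offset to start consuming from.
def offsetsFromExternalStore(tps: Set[TopicPartition]): Task[Map[TopicPartition, Long]] =
  Task.succeed(tps.map(tp => tp -> 0L).toMap) // placeholder: start every partition at 0

val settings: ConsumerSettings =
  ConsumerSettings(List("localhost:9092"))
    .withGroupId("group")
    .withOffsetRetrieval(OffsetRetrieval.Manual(offsetsFromExternalStore)) // assumed setter
```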
39.145669
465
0.758121
eng_Latn
0.880627
ff063f87ed7a672abdf08fd05f6773d0d3e9384a
47
md
Markdown
docs/src/index.md
AiEmpires/AiEmpires.jl
f36f6b37e1cb7cb692c1fc070105c61182e18a20
[ "MIT" ]
null
null
null
docs/src/index.md
AiEmpires/AiEmpires.jl
f36f6b37e1cb7cb692c1fc070105c61182e18a20
[ "MIT" ]
null
null
null
docs/src/index.md
AiEmpires/AiEmpires.jl
f36f6b37e1cb7cb692c1fc070105c61182e18a20
[ "MIT" ]
null
null
null
# AiEmpires.jl Documentation for AiEmpires.jl
11.75
30
0.808511
eng_Latn
0.658564