hexsha (stringlengths 40–40) | size (int64 5–1.04M) | ext (stringclasses, 6 values) | lang (stringclasses, 1 value) | max_stars_repo_path (stringlengths 3–344) | max_stars_repo_name (stringlengths 5–125) | max_stars_repo_head_hexsha (stringlengths 40–78) | max_stars_repo_licenses (sequencelengths 1–11) | max_stars_count (int64 1–368k, ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24–24, ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24–24, ⌀) | max_issues_repo_path (stringlengths 3–344) | max_issues_repo_name (stringlengths 5–125) | max_issues_repo_head_hexsha (stringlengths 40–78) | max_issues_repo_licenses (sequencelengths 1–11) | max_issues_count (int64 1–116k, ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24–24, ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24–24, ⌀) | max_forks_repo_path (stringlengths 3–344) | max_forks_repo_name (stringlengths 5–125) | max_forks_repo_head_hexsha (stringlengths 40–78) | max_forks_repo_licenses (sequencelengths 1–11) | max_forks_count (int64 1–105k, ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24–24, ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24–24, ⌀) | content (stringlengths 5–1.04M) | avg_line_length (float64 1.14–851k) | max_line_length (int64 1–1.03M) | alphanum_fraction (float64 0–1) | lid (stringclasses, 191 values) | lid_prob (float64 0.01–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ff55d47954c259a4b75c246b8ce9425f6c6f6f9e | 22 | md | Markdown | README.md | AndreVitorErin/exercicios-nodejs | 048a48db5737b4fedbe3beb3310f8d1e4ccf34e9 | [
"MIT"
] | null | null | null | README.md | AndreVitorErin/exercicios-nodejs | 048a48db5737b4fedbe3beb3310f8d1e4ccf34e9 | [
"MIT"
] | null | null | null | README.md | AndreVitorErin/exercicios-nodejs | 048a48db5737b4fedbe3beb3310f8d1e4ccf34e9 | [
"MIT"
] | null | null | null | # exercicios-nodejs
| 7.333333 | 19 | 0.727273 | lvs_Latn | 0.619206 |
ff55d67ee4f95d8bdc4115c990b5bae9c6027f42 | 300 | markdown | Markdown | _posts/2020-01-04-projeto-4.markdown | QuelitonSouza/queliton.github.io | 7468e41bdd8ad93dbe1464933e54b65e083c41fb | [
"MIT"
] | null | null | null | _posts/2020-01-04-projeto-4.markdown | QuelitonSouza/queliton.github.io | 7468e41bdd8ad93dbe1464933e54b65e083c41fb | [
"MIT"
] | null | null | null | _posts/2020-01-04-projeto-4.markdown | QuelitonSouza/queliton.github.io | 7468e41bdd8ad93dbe1464933e54b65e083c41fb | [
"MIT"
] | null | null | null | ---
layout: default
modal-id: 4
date: 2020-01-04
img: safe.png
print: adminwb.png
alt: image-alt
project-date: November 2018
client: Wooboogie
category: Portal Web
description: System for managing the users registered in the app, handling orders, and supporting the app's users.
---
| 23.076923 | 131 | 0.783333 | por_Latn | 0.90305 |
ff5635221144c2d05ee83df77af027c07b9c3fdc | 515 | md | Markdown | .github/ISSUE_TEMPLATE/bug_report.md | 1549469775/vue-router-invoke-webpack-plugin | 8a6c57248eebf73d391bfe69ebbdc0f59b900d9d | [
"MIT"
] | 28 | 2019-04-01T06:56:35.000Z | 2021-06-09T01:10:23.000Z | .github/ISSUE_TEMPLATE/bug_report.md | 1549469775/vue-router-invoke-webpack-plugin | 8a6c57248eebf73d391bfe69ebbdc0f59b900d9d | [
"MIT"
] | 33 | 2019-04-24T21:51:42.000Z | 2022-02-26T10:10:56.000Z | .github/ISSUE_TEMPLATE/bug_report.md | 1549469775/vue-router-invoke-webpack-plugin | 8a6c57248eebf73d391bfe69ebbdc0f59b900d9d | [
"MIT"
] | 7 | 2019-04-19T08:35:39.000Z | 2021-01-06T10:08:00.000Z | ---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: Qymh
---
**Describe the bug**
A clear and concise description of what the bug is.
**Version**
`vue-router-invoke-webpack-plugin`:
`webpack`:
`node`:
`system`: [windows or mac]
**The tree of files or Screenshots**
[example of tree]
```
src
├── views
│ ├── Login
│ │ └── Index.vue
│ └── User
│ ├── Account
│ │ └── Index.vue
│ ├── Home
│ │ └── Index.vue
│ └── Index.vue
```
| 14.714286 | 51 | 0.561165 | eng_Latn | 0.918404 |
ff5648bee4ff7c00d3649600e0234a513f507411 | 380 | md | Markdown | doc/models/location-capability.md | ochibooh/square-java-sdk | e3c5e54cbb74ad31afa432cd8b8bd71da4e7ac98 | [
"Apache-2.0"
] | 2 | 2020-01-11T23:27:30.000Z | 2021-11-20T12:44:48.000Z | doc/models/location-capability.md | ochibooh/square-java-sdk | e3c5e54cbb74ad31afa432cd8b8bd71da4e7ac98 | [
"Apache-2.0"
] | 1 | 2021-12-09T23:05:23.000Z | 2021-12-09T23:05:23.000Z | doc/models/location-capability.md | ochibooh/square-java-sdk | e3c5e54cbb74ad31afa432cd8b8bd71da4e7ac98 | [
"Apache-2.0"
] | null | null | null | ## Location Capability
The capabilities a location may have.
### Enumeration
`LocationCapability`
### Fields
| Name | Description |
| --- | --- |
| `CREDITCARDPROCESSING` | The permission to process credit card transactions with Square.<br><br>The location can process credit cards if this value is present<br>in the `capabilities` array of the `Location`. |
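For illustration only (this helper is not from the SDK documentation; it simply restates the membership rule above in code form):
```java
import java.util.List;

public class CapabilityCheck {
    // The location can process credit cards if the enum value is present
    // in the `capabilities` array of the `Location`.
    public static boolean canProcessCards(List<String> capabilities) {
        return capabilities != null
                && capabilities.contains("CREDITCARDPROCESSING");
    }
}
```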
| 25.333333 | 213 | 0.7 | eng_Latn | 0.982727 |
ff5653a00d7a83a75cf9b9e096a0fbaa23d88160 | 782 | md | Markdown | README.md | maglite4cell/simplwmqttbroker | 9b4ec2ba1461f4c5c4a23da11b19f87da6b817c1 | [
"MIT"
] | null | null | null | README.md | maglite4cell/simplwmqttbroker | 9b4ec2ba1461f4c5c4a23da11b19f87da6b817c1 | [
"MIT"
] | null | null | null | README.md | maglite4cell/simplwmqttbroker | 9b4ec2ba1461f4c5c4a23da11b19f87da6b817c1 | [
"MIT"
] | null | null | null | # simplewmqttbroker
An MQTT Broker module for Crestron 3-Series Control Systems written in SIMPL#
# How to use
:warning: **FOR TLS VERSION - SSL MUST BE ENABLED ON THE CONTROL SYSTEM**

Fill in the information required for the connection as shown in the image
**THE WSS True/False choice is useless** <br />
**Broker doesn't support reconnection. Clients must connect with clean session true** <br />
**QoS 2 and 3 are not supported**

## Donation
If this module saves you time feel free to show your appreciation using the following button :D
[](https://www.paypal.com/donate?hosted_button_id=W8J2B4E92NEQ2)
| 37.238095 | 139 | 0.751918 | eng_Latn | 0.880517 |
ff57216045c21b6934cc4f97fed79fee78e66c0c | 2,973 | md | Markdown | Samples/OpenGL/README.md | adrianiainlam/CubismNativeSamples | b48ca90d39bdef27007b1a191049959d79949a58 | [
"RSA-MD"
] | 3 | 2021-01-12T02:38:38.000Z | 2021-12-27T20:54:11.000Z | Samples/OpenGL/README.md | adrianiainlam/CubismNativeSamples | b48ca90d39bdef27007b1a191049959d79949a58 | [
"RSA-MD"
] | 2 | 2020-12-22T17:04:53.000Z | 2021-02-24T00:58:07.000Z | Samples/OpenGL/README.md | adrianiainlam/CubismNativeSamples | b48ca90d39bdef27007b1a191049959d79949a58 | [
"RSA-MD"
] | null | null | null | # Cubism Native Samples for OpenGL
This is a sample implementation of an application implemented with OpenGL.
## Development Environment
| Third Party | Version |
| --- | --- |
| [GLEW] | 2.1.0 |
| [GLFW] | 3.3.2 |
| [ios-cmake] | 3.1.2 |
| [stb_image.h] | 2.23 |
For other development environments and verified environments, see [README.md](/README.md) in the top directory.
## Directory Structure
```
.
├─ Demo
│ ├─ proj.android.cmake # Android Studio project
│ ├─ proj.ios.cmake # CMake project for iOS
│ ├─ proj.linux.cmake # CMake project for Linux
│ ├─ proj.mac.cmake # CMake project for macOS
│ └─ proj.win.cmake # CMake project for Windows
└─ thirdParty # Third party libraries and scripts
```
## Demo
This sample uses each of the features of the [Cubism Native Framework].
It performs motion playback, expression settings, pose switching, physics configuration, and more.
You can switch models from the menu button.
[Cubism Native Framework]: https://github.com/Live2D/CubismNativeFramework
The contents of this directory are as follows.
### proj.android.cmake
Contains an Android Studio project for Android.
NOTE: The following SDKs need to be downloaded beforehand:
* Android SDK Build-Tools
* NDK
* CMake
### proj.ios.cmake
A CMake project for iOS.
Running the scripts in the `script` directory generates the CMake artifacts in the `build` directory.
| Script Name | Output |
| --- | --- |
| `proj_xcode` | Xcode project |
[ios-cmake] is used as the CMake toolchain.
Refer to the [thirdParty](README.md#thirdParty) section and download it beforehand.
[ios-cmake]: https://github.com/leetal/ios-cmake
### proj.linux.cmake
A CMake project for Linux.
Running the scripts in the `script` directory generates the CMake artifacts in the `build` directory.
| Script Name | Output |
| --- | --- |
| `make_gcc` | Executable application |
[GLEW] and [GLFW] are used as additional libraries.
Refer to the [thirdParty](README.md#thirdParty) section and download them beforehand.
### proj.mac.cmake
A CMake project for macOS.
Running the scripts in the `script` directory generates the CMake artifacts in the `build` directory.
| Script Name | Output |
| --- | --- |
| `make_xcode` | Executable application |
| `proj_xcode` | Xcode project |
[GLEW] and [GLFW] are used as additional libraries.
Refer to the [thirdParty](README.md#thirdParty) section and download them beforehand.
### proj.win.cmake
A CMake project for Windows.
Running the scripts in the `script` directory generates the CMake artifacts in the `build` directory.
| Script Name | Output |
| --- | --- |
| `nmake_msvcXXXX.bat` | Executable application |
| `proj_msvcXXXX.bat` | Visual Studio project |
[GLEW] and [GLFW] are used as additional libraries.
Refer to the [thirdParty](README.md#thirdParty) section and download them beforehand.
NOTICE: Building with Visual Studio 2019 is not supported because GLEW does not support it.
## thirdParty
Contains the third-party libraries used by the sample projects and scripts to set them up automatically.
### Setting up GLEW / GLFW
Run the scripts in the `script` directory to download GLEW and GLFW.
| Platform | Script Name |
| --- | --- |
| Linux / macOS | `setup_glew_glfw` |
| Windows | `setup_glew_glfw.bat` |
You can change the versions to be downloaded by changing `GLEW_VERSION` and `GLFW_VERSION` in the scripts.
## Setting up ios-cmake
Run `setup_ios_cmake` in the `script` directory to download ios-cmake.
You can change the version to be downloaded by changing `IOS_CMAKE_VERSION` in the script.
[GLEW]: https://github.com/nigels-com/glew
[GLFW]: https://github.com/glfw/glfw
[ios-cmake]: https://github.com/leetal/ios-cmake
[stb_image.h]: https://github.com/nothings/stb/blob/master/stb_image.h
| 21.860294 | 74 | 0.713421 | yue_Hant | 0.866326 |
ff5753b7fcbc0297cabb83f33ebdb0be6f28860c | 4,339 | md | Markdown | README.md | Orlandoj77/adityamangal1 | 08c7f758d92b2fe702b0189a054f80257c1d6612 | [
"MIT"
] | 1 | 2021-07-10T23:49:40.000Z | 2021-07-10T23:49:40.000Z | README.md | Orlandoj77/adityamangal1 | 08c7f758d92b2fe702b0189a054f80257c1d6612 | [
"MIT"
] | null | null | null | README.md | Orlandoj77/adityamangal1 | 08c7f758d92b2fe702b0189a054f80257c1d6612 | [
"MIT"
] | null | null | null | ## ADITYA MANGAL<img src="https://raw.githubusercontent.com/ABSphreak/ABSphreak/master/gifs/Hi.gif" width="30px">
<a href="https://twitter.com/AdityaM44382015" target="_blank"><img src="https://cdn2.iconfinder.com/data/icons/social-media-2199/64/social_media_isometric_6-twitter-512.png" height="120px" width="120px" alt="Twitter" align="right"></a><a href="https://www.linkedin.com/in/aditya-mangal-b876041b4/" target="_blank"><img src="https://cdn2.iconfinder.com/data/icons/social-media-2199/64/social_media_isometric_14-linkedin-512.png" height="120px" width="120px" alt="Twitter" align="right"></a>
<img align="right" alt="GIF" src="https://i.stack.imgur.com/NSHyg.gif" width="400" height="300" />
- 🤔 Looking for help with Data Structures and Algorithms 😭;
- 🥅 coding automation projects⚡
- 🔭 I’m currently working on Open source projects and hackathons⚡
<br />
<br />
## Connect with me:
[<img align="left" alt="adityamangal | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/facebook.svg" />][facebook]
[<img align="left" alt="adityamangal | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
[<img align="left" alt="adityamangal | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][Twitter]
<a href="https://dev.to/adityamangal1">
<img src="https://d2fltix0v2e0sb.cloudfront.net/dev-badge.svg" alt="Aditya Mangal's DEV Community Profile" height="30" width="30">
</a>
<p align="center"> <img src="https://octodex.github.com/images/vinyltocat.png" height="160px" width="160px"> <img src="https://octodex.github.com/images/daftpunktocat-thomas.gif" height="160px" width="160px"> <img src="https://octodex.github.com/images/daftpunktocat-guy.gif" height="160px" width="160px"> <img
src="https://octodex.github.com/images/Robotocat.png" height="160px" width="160px"></p>
<p align="center">
<br>
<img src="https://visitor-badge.laobi.icu/badge?page_id=adityamangal1.Pretty-Readme">
<img src="https://img.shields.io/badge/Hacktoberfest-2020-blueviolet">
<h2>Languages & Frameworks & Tools</h2>
<p align="center">
<code><img height="50" src="https://www.vectorlogo.zone/logos/ubuntu/ubuntu-ar21.svg"></code>
<a href="https://en.wikipedia.org/wiki/Python_(programming_language)">
<code><img src="https://img.shields.io/badge/python%20-%2314354C.svg?&style=for-the-badge&logo=python&logoColor=white"/></code>
</a>
<a href="https://en.wikipedia.org/wiki/C_(programming_language)">
<code><img src="https://img.shields.io/badge/c%20-%2300599C.svg?&style=for-the-badge&logo=c&logoColor=white"/></code>
</a>
<a href="https://github.com/adityamangal1">
<code><img height="50" src="https://www.vectorlogo.zone/logos/github/github-ar21.svg"></code>
</a>
<code><img height="50" src="https://www.vectorlogo.zone/logos/w3_html5/w3_html5-ar21.svg"></code>
<code><img height="50" src="https://upload.wikimedia.org/wikipedia/commons/d/d5/CSS3_logo_and_wordmark.svg"></code>
<code><img height="50" src="https://www.vectorlogo.zone/logos/javascript/javascript-horizontal.svg"></code>
<code><img height="50" src="https://www.vectorlogo.zone/logos/wordpress/wordpress-ar21.svg"></code>
</p>
</p>

<!-- <img src="https://github-readme-stats.vercel.app/api?username=adityamangal1&&show_icons=true&title_color=ffffff&icon_color=bb2acf&text_color=daf7dc&bg_color=ffba2c"> -->
<img src="https://github-readme-stats.vercel.app/api?username=adityamangal1&&show_icons=true&&theme=radical">
[facebook]: https://www.facebook.com/aditya.mangal2/
[instagram]: https://www.instagram.com/adityamangal/
[twitter]: https://twitter.com/AdityaM44382015/
<p align="center">
<img src="https://github-profile-trophy.vercel.app/?username=adityamangal1&theme=flat&margin-w=15">
</p>
<p align="center">
<img alt="Sloan, the sloth mascot" width="250px" src="https://thepracticaldev.s3.amazonaws.com/uploads/user/profile_image/31047/af153cd6-9994-4a68-83f4-8ddf3e13f0bf.jpg">
<br>
<strong>Happy Coding</strong> ❤️
</p>
<img height="120" alt="Thanks for visiting me" width="100%" src="https://raw.githubusercontent.com/BrunnerLivio/brunnerlivio/master/images/marquee.svg" />
| 57.853333 | 489 | 0.732196 | yue_Hant | 0.243355 |
ff578fc3ce3e9931f2c8e9c5e159fdd7f8b91c9c | 261 | md | Markdown | awesome_page.md | lea3182/https-github.com-Devbootcamp-phase-0-unit-1-tree-master-week-2-12-gps1-1 | bc6962517a9e44a9b0086087b96de98720fa6ea5 | [
"MIT"
] | null | null | null | awesome_page.md | lea3182/https-github.com-Devbootcamp-phase-0-unit-1-tree-master-week-2-12-gps1-1 | bc6962517a9e44a9b0086087b96de98720fa6ea5 | [
"MIT"
] | null | null | null | awesome_page.md | lea3182/https-github.com-Devbootcamp-phase-0-unit-1-tree-master-week-2-12-gps1-1 | bc6962517a9e44a9b0086087b96de98720fa6ea5 | [
"MIT"
] | null | null | null | # This is a heading.
*This is italic.*
**This is a kangaroo.**
*This is italic, but _this_ is both!*
**This is bold, but _this_ is both!**
[Click on me!](http://yahoo.com)
```
<html>
<body>
<p>Bla bla bla. This is a code block.</p>
</body>
</html>
```
| 12.428571 | 43 | 0.601533 | eng_Latn | 0.994119 |
ff57917a6427a44bfdc65bd6ac1da6ee4c815a08 | 336 | md | Markdown | README.md | Exploradata/Python3-beginners | 3d63282e93df6815a61148e3905b00caffdb02d1 | [
"CC0-1.0"
] | 1 | 2021-07-23T17:53:44.000Z | 2021-07-23T17:53:44.000Z | README.md | Exploradata/Python3-beginners | 3d63282e93df6815a61148e3905b00caffdb02d1 | [
"CC0-1.0"
] | null | null | null | README.md | Exploradata/Python3-beginners | 3d63282e93df6815a61148e3905b00caffdb02d1 | [
"CC0-1.0"
] | null | null | null | ## Python 3 for beginners
#### English introduction
This code is for beginners and people who want to practice Python, or to modify it for their personal or professional use.
#### Introducción en español
Este código esta dirigido a principiantes y personas que quieren practicar Python, o bien, modificarlo para su uso personal o profesional.
| 42 | 138 | 0.791667 | spa_Latn | 0.662837 |
ff57ce04bde4bbd3920927d87d4a787c6a89d1ab | 422 | md | Markdown | ce-bosco/README.md | sfiligoi/docker-osg-ce | 186348f8061951e1e3bf958d697fe4accf94fbc9 | [
"MIT"
] | null | null | null | ce-bosco/README.md | sfiligoi/docker-osg-ce | 186348f8061951e1e3bf958d697fe4accf94fbc9 | [
"MIT"
] | null | null | null | ce-bosco/README.md | sfiligoi/docker-osg-ce | 186348f8061951e1e3bf958d697fe4accf94fbc9 | [
"MIT"
] | null | null | null | # docker-osg-ce/ce-bosco Repository
This repository contains the Dockerfile necessary to build
the OSG BOSCO CE image.
# Usage
Make sure you follow all the usage instructions from the [base image](../base/)
Rest of the instructions TBD.
# Image repository
The images are being build and are availble on [DockerHub as sfiligoi/osg-ce:ce-bosco](https://cloud.docker.com/u/sfiligoi/repository/docker/sfiligoi/osg-ce).
| 26.375 | 158 | 0.774882 | eng_Latn | 0.940838 |
ff57d597cb62c90953959216012ff9590385c230 | 734 | md | Markdown | README.md | lafronzt/ipwhois | fccdbb85057c178ca0f9a1712e5731a293f36099 | [
"MIT"
] | null | null | null | README.md | lafronzt/ipwhois | fccdbb85057c178ca0f9a1712e5731a293f36099 | [
"MIT"
] | null | null | null | README.md | lafronzt/ipwhois | fccdbb85057c178ca0f9a1712e5731a293f36099 | [
"MIT"
] | null | null | null | # IP Whois Go Client
**This is an unofficial client for the [IPWhois.io API](https://ipwhois.io/).**
## Quick Start
Install and Use in Command Line:
```bash
go install github.com/lafronzt/ipwhois/cmd/ipwhois@latest
ipwhois -ip 1.1.1.1
```
## Use as a Go Package
```go
package main

import (
	"fmt"

	"github.com/lafronzt/ipwhois"
)

// c is the shared ipwhois client, created once at startup.
var c *ipwhois.Client

func init() {
	c = ipwhois.NewClient()
}

func main() {
	ip := "1.1.1.1"
	whois, err := c.GetIPDetails(ip, nil)
	if err != nil {
		println(err.Error())
		return
	}
	fmt.Printf("%+v\n", whois)
}
```
## To Use with the Pro Version of IPWhois.io use the following
```go
ipwhois.NewClientPro("api-key")
```
## Requirements
- Go >= 1.15
| 13.107143 | 79 | 0.617166 | eng_Latn | 0.609895 |
ff585127a8983929764faad4a4a57acd26babe73 | 3,937 | md | Markdown | iceoryx_examples/icecrystal/Readme.md | jbreitbart/iceoryx | e04c9ba3abc57369d9bb4e936e6e8f052cd04016 | [
"Apache-2.0"
] | null | null | null | iceoryx_examples/icecrystal/Readme.md | jbreitbart/iceoryx | e04c9ba3abc57369d9bb4e936e6e8f052cd04016 | [
"Apache-2.0"
] | null | null | null | iceoryx_examples/icecrystal/Readme.md | jbreitbart/iceoryx | e04c9ba3abc57369d9bb4e936e6e8f052cd04016 | [
"Apache-2.0"
] | null | null | null | # icecrystal - Learn how to use the introspection for debugging
## Introduction
This example teaches you how to make use of the introspection for debugging purposes. With the introspection you can
look into the machine room of RouDi. The introspection shows live information about the memory usage and all
registered processes. Additionally, it shows the sender and receiver ports that are created inside the shared memory.
## Run icecrystal
We reuse the binaries from [icedelivery](../icedelivery/). Create four terminals and run one command in each of them.
# If installed and available in PATH environment variable
RouDi
# If build from scratch with script in tools
$ICEORYX_ROOT/build/posh/RouDi
./build/iceoryx_examples/icedelivery/ice-publisher-bare-metal
./build/iceoryx_examples/icedelivery/ice-subscriber-bare-metal
./build/iceoryx_introspection/iceoryx_introspection_client --all
## Expected output
The counter values can differ depending on when the applications were started.
### RouDi application
Reserving 99683360 bytes in the shared memory [/iceoryx_mgmt]
[ Reserving shared memory successful ]
Reserving 410709312 bytes in the shared memory [/username]
[ Reserving shared memory successful ]
### Publisher application
Sending: 0
Sending: 1
Sending: 2
Sending: 3
Sending: 4
Sending: 5
### Subscriber application
Receiving: 4
Receiving: 5
Receiving: 6
Receiving: 7
### Iceoryx introspection application

## Feature walkthrough
This example does not contain any additional code. The code of the `iceoryx_introspection_client` can be found under
[tools/introspection/](../../tools/introspection/).
The introspection can be started with several command line arguments.
--mempool Subscribe to mempool introspection data.
The memory pool view will show all available shared memory segments and their respective owner. Additionally, the
maximum number of available chunks, the number of currently used chunks as well as the minimal value of free chunks
are visible. This can be handy for stress tests to find out if your memory configuration is valid.
--process Subscribe to process introspection data.
The process view will show you the processes (incl. PID), which are currently registered with RouDi.
--port Subscribe to port introspection data.
The port view shows both sender and receiver ports that are created by RouDi in the shared memory. Their respective
service description (service, instance, event) is shown to identify them uniquely. The columns `Process` and
`used by process` display to which process the ports belong and how they are currently connected. Size in bytes of
both sample size and chunk size (sample size + meta data) and statistical data of `Chunks [/Minute]` is provided as
well. When a sender port instantly provides data to a subscriber with the `subscribe()` call, the `Field` column is
ticked. The service discovery protocol allows you to define the `Propagation scope` of the data. This can enable
data forwarding to other machines e.g. over network or just consume them internally. When a `Callback` is
registered on subscriber side, the box is ticked accordingly. `FiFo size / capacity` shows the consumption of chunks
on the subscriber side and is a useful column to debug potential memleaks.
--all Subscribe to all available introspection data.
`--all` will enable all three views at once.
-v, --version Display latest official iceoryx release version and exit.
Make sure that the version number of the introspection exactly matches the version number of RouDi. Currently,
we don't guarantee binary compatibility between different versions. With different version numbers things might break.
| 42.793478 | 128 | 0.769368 | eng_Latn | 0.997999 |
ff5a43642f26faa876dcdea8b15f2ecee15d5690 | 3,462 | md | Markdown | powerapps-docs/maker/data-platform/export-customized-entity-field-text-translation.md | rspannuth/powerapps-docs | 6c6548ce9883c16df3cfedf63d0917f71ffd87e1 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-07-13T07:06:52.000Z | 2021-07-13T07:06:52.000Z | powerapps-docs/maker/data-platform/export-customized-entity-field-text-translation.md | gitohm/powerapps-docs | 172da852c6f71d7465b89c1c297f08bcc0a4fdce | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/maker/data-platform/export-customized-entity-field-text-translation.md | gitohm/powerapps-docs | 172da852c6f71d7465b89c1c297f08bcc0a4fdce | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Export customized table and column text for translation with Power Apps | MicrosoftDocs"
description: "Learn how to export table and column text for translation"
ms.custom: ""
ms.date: 08/05/2020
ms.reviewer: ""
ms.service: powerapps
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "how-to"
applies_to:
- "Dynamics 365 (online)"
- "Dynamics 365 Version 9.x"
- "powerapps"
author: "Mattp123"
ms.assetid: 7e269d09-4453-490a-b50c-f0795ff6f348
caps.latest.revision: 34
ms.author: "matp"
manager: "kvivek"
search.audienceType:
- maker
search.app:
- PowerApps
- D365CE
---
# Translate customized table, form, and column text into other languages
[!INCLUDE[cc-data-platform-banner](../../includes/cc-data-platform-banner.md)]
After you create customized table and column text in your unmanaged solution, you may want to translate it into other languages.
> [!IMPORTANT]
> When you export translations, the export translations feature exports translations for the table. So, that means even if the solution contains only a single form, labels for all the forms for the table will be exported. Make sure you only modify the form's labels when importing the translations back otherwise another component translation you modify will be added as a dependency to the solution.
1. Sign into [Power Apps](https://make.powerapps.com/?utm_source=padocs&utm_medium=linkinadoc&utm_campaign=referralsfromdoc) and select **Solutions** from the left navigation.
2. Select the unmanaged solution you want, on the command bar select **…**, select **Translations**, and then select **Export translations**.
3. After the export completes the exported translations compressed (zip) file is downloaded to your browser’s default download folder and contains the exported labels.
4. Extract the XML file from the compressed (.zip) file.
5. Open the CrmTranslations.xml file in Excel.
6. Select the sheet named **Localized Labels**.
7. Notice there is already a column with the base language code id, such as 1033 (English U.S.). Add a column with the language code id for every language you want to translate the labels into. For example, add a column for 1034 (Spanish traditional).
8. Add the translated text in the new column for the object names and object ids that you want.
> [!div class="mx-imgBorder"]
> 
9. When you're finished adding your translations, save, and zip up the package so you can [Import translated table and column text](import-translated-entity-field-text.md).
> [!NOTE]
> Not all content can be translated using this tool. This includes labels for sitemap areas, groups, and subareas. To translate these labels, [use the **More Titles** property in the sitemap designer](../model-driven-apps/create-site-map-app.md#add-an-area-to-the-site-map).
## Community tools
[Easy Translator](https://www.xrmtoolbox.com/plugins/MsCrmTools.Translator/) is a tool that XrmToolBox community developed for Power Apps. Use Easy Translator to export and import translations with contextual information.
> [!NOTE]
> The community tools are not supported by Microsoft.
> If you have questions about the tool, please contact the publisher. More Information: [XrmToolBox](https://www.xrmtoolbox.com).
## Next steps
[Import translated table and column text](import-translated-entity-field-text.md)
[!INCLUDE[footer-include](../../includes/footer-banner.md)]
| 49.457143 | 400 | 0.76141 | eng_Latn | 0.988197 |
ff5a7f0bb5b695ea22f1b68de362849cbd3e88ee | 1,176 | md | Markdown | challenges/insertion-Sort/BLOG.md | Abdallah-401-advanced-javascript/data-structures-and-algorithms | 562015d4d7e38e03d250c59ddb5635a0f80ba2cc | [
"MIT"
] | null | null | null | challenges/insertion-Sort/BLOG.md | Abdallah-401-advanced-javascript/data-structures-and-algorithms | 562015d4d7e38e03d250c59ddb5635a0f80ba2cc | [
"MIT"
] | 15 | 2020-05-19T01:45:44.000Z | 2021-05-11T19:19:16.000Z | challenges/insertion-Sort/BLOG.md | Abdallah-401-advanced-javascript/data-structures-and-algorithms | 562015d4d7e38e03d250c59ddb5635a0f80ba2cc | [
"MIT"
] | 5 | 2020-05-18T02:22:53.000Z | 2021-04-18T16:02:37.000Z | 'use strict'
function insertionSort(arr){
  for (let i = 1 ; i < arr.length ; i++){
var j = i - 1
// 1=> j = 0
// 2=> j = 1
// 3=> j = 2
// 4=> j = 3
var temp = arr[i]
// 1=> temp 4
// 2=> temp 23
// 3=> temp 42
// 4=> temp 16
while (j >= 0 && temp < arr[j]){
arr[j + 1] = arr[j]
// 1=> [8,8,23,42,16,15]
// 2=> [4,8,23,42,16,15]
// 3=> [4,8,23,42,16,15]
// 4=> [4,8,23,42,42,15]
// 5=> [4,8,23,23,42,15]
arr[j]= temp
// 1=> [4,8,23,42,16,15]
// 2=> [4,8,23,42,16,15]
// 3=> [4,8,23,42,16,15]
// 4=> [4,8,23,16,42,15]
// 5=> [4,8,16,23,42,15]
j = j - 1
// 1=> j=-1
// 2=> j=-1
// 3=> j=-1
// 4=> j= 2
// 5=> j= 1
}}
return arr ;
}
console.log(insertionSort([8,4,23,42,16,15]))
console.log(insertionSort([20,18,12,8,5,-2]))
console.log(insertionSort([5,12,7,5,5,7]))
console.log(insertionSort([2,3,5,7,13,11])) | 28 | 54 | 0.342687 | roh_Latn | 0.057544 |
ff5b2930e909c039731b22df1076dbccf14ba6bc | 992 | md | Markdown | falter/edelfalter/kaisermantel/kaisermantel.md | georgdonner/falter | e380ec2dbe3744bafc788029998a53d4ebf93745 | [
"MIT"
] | null | null | null | falter/edelfalter/kaisermantel/kaisermantel.md | georgdonner/falter | e380ec2dbe3744bafc788029998a53d4ebf93745 | [
"MIT"
] | 3 | 2020-03-19T14:50:05.000Z | 2020-03-29T20:20:00.000Z | falter/edelfalter/kaisermantel/kaisermantel.md | georgdonner/falter | e380ec2dbe3744bafc788029998a53d4ebf93745 | [
"MIT"
] | null | null | null | ---
path: '/edelfalter/kaisermantel'
name: Kaisermantel
nameLatin: Argynnis paphia
family: edelfalter
familyName: Edelfalter
familyLatin: Nymphalidae
images:
- { src: './kaisermantel.jpg', gender: 'f', location: 'Brandenburg, Grünhof', author: Georg, date: '2016-07-31' }
flightSeason: '5.2-9'
---
The silver-washed fritillary (Kaisermantel) is a fairly common butterfly in Europe, although it shows itself much less often in the north. We encountered only two specimens and were overjoyed to be able to photograph one of the two females. With a few wing beats both were completely out of sight again and we never spotted them afterwards. The difference between the sexes is very obvious here: the male has a very intensely colored orange upper side of the wings, while the female looks very pale in comparison. The silver-washed fritillary is quite large, considerably larger than the other fritillaries, and typically settles on scabious and thistles.
| 70.857143 | 690 | 0.805444 | deu_Latn | 0.997865 |
ff5b433fbe594685851a312e62762964c11db3bb | 1,718 | md | Markdown | README.md | Velka-DEV/Miellax | 5235106fb59355720b79eecb0218e16be2b27894 | [
"MIT"
] | null | null | null | README.md | Velka-DEV/Miellax | 5235106fb59355720b79eecb0218e16be2b27894 | [
"MIT"
] | null | null | null | README.md | Velka-DEV/Miellax | 5235106fb59355720b79eecb0218e16be2b27894 | [
"MIT"
] | null | null | null | # Miellax: Enhanced Milky [](https://github.com/Laiteux/Miellax/releases) [](https://github.com/Laiteux/Miellax/blob/v3/LICENSE)
Miellax is a .NET 6 library for pentesting web apps against credential stuffing attacks, based on the Milky library written by Laiteux.
In fact, it will manage:
- HttpClient creation and distribution (support for backconnect proxies and all protocols using .NET 6 Native support)
- Output (directory, whether to output invalids or not, console colors, captures and more...)
- Console title (hits, free hits, estimated hits, percentages, checked per minute, elapsed/remaining time and more...)
- Check pause/resume/end functions using hotkeys
And more... See the code itself or the [examples](https://github.com/Laiteux/Miellax/blob/v3/examples) folder for a better overview.
## Contribute
Your help and ideas are welcome, feel free to fork this repo and submit a pull request.
However, please make sure to follow the current code base/style.
## Contact
Telegram: @velkaDEV
## Donate
If you would like to support this project, please consider donating.
Donations are greatly appreciated and a motivation to keep improving.
- Bitcoin (BTC)
- Segwit: `bc1qhyzca03e78td68e2ppkqpdp6224pesw66vn6pv`
- Legacy: `15xh18xUPL76wBYZ7qWCEwETQx6iPx8xTq`
- Monero (XMR): `44f7XFPeddmGDPvNDtR9sKQen619oVEGXaenw3XjKecWKrd4ZFtS6Md9yzrcYc3p47JVRnkQMFLvc8Eh8ua7n7D4BwNNjuY`
- Litecoin (LTC): `LSnspYrX217dwRq7c9kd8hxuUkRL4K9z1N`
- Ethereum (ETH): `0xef20130259D5F3F094049a1eFb25f5c23052fDd8` | 50.529412 | 325 | 0.791618 | eng_Latn | 0.873788 |
ff5b8eca1293f915b6e19f8b7e60ec0b70347440 | 428 | md | Markdown | doc_source/usage-report.md | awsdocs/aws-cur-user-guide | 49d446bcddbd4135f73fdc9fa58d69d4aa614328 | [
"MIT",
"MIT-0"
] | 9 | 2020-02-12T21:00:54.000Z | 2022-03-20T16:04:29.000Z | documents/aws-cur-user-guide/doc_source/usage-report.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | 5 | 2020-06-08T17:57:05.000Z | 2022-03-23T03:41:42.000Z | documents/aws-cur-user-guide/doc_source/usage-report.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | 10 | 2020-02-12T23:01:44.000Z | 2021-05-26T10:58:06.000Z | # AWS Usage Report<a name="usage-report"></a>
**Important**
The AWS Usage Report feature will be unavailable at a later date\. We strongly recommend that you use the AWS Cost and Usage Reports instead\.
You can download dynamically generated AWS usage reports\. Each report covers a single service, and you can choose which usage type, operation, and time period to include\. You can also choose how the data is aggregated\. | 71.333333 | 221 | 0.773364 | eng_Latn | 0.999358 |
ff5c1f35a1fc7f7eb6ce74fb5d9bf1820e8233be | 729 | md | Markdown | README.md | yangchenxing/go-testable-exit | 626a86465df52c9717bf54bf850dd0117b7b0563 | [
"MIT"
] | null | null | null | README.md | yangchenxing/go-testable-exit | 626a86465df52c9717bf54bf850dd0117b7b0563 | [
"MIT"
] | null | null | null | README.md | yangchenxing/go-testable-exit | 626a86465df52c9717bf54bf850dd0117b7b0563 | [
"MIT"
] | null | null | null | # go-testable-exit
[](https://golang.org/)
[](https://goreportcard.com/report/github.com/yangchenxing/go-testable-exit)
[](https://travis-ci.org/yangchenxing/go-testable-exit)
[](http://godoc.org/github.com/yangchenxing/go-testable-exit)
[](https://raw.githubusercontent.com/yangchenxing/go-testable-exit/master/LICENSE)
Replaces `os.Exit` so that exit paths can be covered by unit tests; see the sketch below.
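The README does not reproduce the package API here, so the following is only a minimal sketch of the general pattern such a package enables, assuming a package-level `Exit` variable that defaults to `os.Exit`; the actual identifiers in `go-testable-exit` may differ.
```go
package app

import (
	"fmt"
	"os"
)

// Exit is an indirection over os.Exit. Production code calls Exit(code);
// a test swaps it for a stub so the fatal path can be asserted instead of
// terminating the test binary.
var Exit = os.Exit

func Fatal(msg string) {
	fmt.Println(msg)
	Exit(1)
}

// In a test:
//	var got int
//	Exit = func(code int) { got = code }
//	Fatal("boom") // got == 1 and the test process keeps running
```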
| 72.9 | 166 | 0.772291 | yue_Hant | 0.471177 |
ff5cab3786144d3fe52c081f9590c30a4bab911a | 5,441 | markdown | Markdown | _posts/2017-10-23-sprint-4-status-update.markdown | sparv/Blog | c8fe202f110855e0d24edfba9ded15c312512c7d | [
"MIT"
] | null | null | null | _posts/2017-10-23-sprint-4-status-update.markdown | sparv/Blog | c8fe202f110855e0d24edfba9ded15c312512c7d | [
"MIT"
] | null | null | null | _posts/2017-10-23-sprint-4-status-update.markdown | sparv/Blog | c8fe202f110855e0d24edfba9ded15c312512c7d | [
"MIT"
] | null | null | null | ---
layout: post
title: "Sprint 4: Status Update"
date: 2017-10-23
author: daniel
intro: Our sprint meeting 004 took place, and with the new "Sparv Telegram" episode the written part comes along again.
category: sprintreview
metatitle: "Sprint 4: Status Update"
metadescription: Our sprint meeting 004 took place, and with the new "Sparv Telegram" episode the written part comes along again.
---
Our sprint meeting 004 took place, and with the new ["Sparv Telegram" episode](http://telegram.sparv.de/st004-der-odenwald-ist-nicht-der-mount-everest/) the written part comes along again.
### Review
In this sprint cycle, Jan started building a visual framework for Sparv. This is about defining the different areas of the application, determining the type of navigation, and so on. It also produced a basic definition of a visual language for Sparv, which describes not only colors and typography but also the appearance and character of the application.
Furthermore, Jan [designed the client modules](https://dribbble.com/shots/3862422-Client-Overview). This produced the visual appearance and the interaction concept for the client overview as well as for creating, viewing, editing, and deleting individual client records.
Meanwhile, I finished the user login. The user session is now managed on the client side and authenticated against the server side via a JWT ([JSON Web Token](https://jwt.io)). In addition, I looked at the freshly released [VueJS Style Guide](https://vuejs.org/v2/style-guide/) and decided to follow it on the client side.
In addition, two posts appeared here on our blog - the review of [Sparv Telegram 003](http://localhost:3000/2017/10/08/sprint-3-status-update/) and the publication of our [Definition of Done](/2017/10/05/ist-es-fertig-darum-definition-of-done/).
And while we are on the subject of the blog: Jan gave the article view an update, which now shows the authors of the article at the end.
Jan also created a question catalog in this sprint cycle, which is meant to serve as a basic source of information when interviewing personal trainers.
### Retro
From my point of view, I am relieved that the login functionality is now in place. On the other hand, it annoys me that I keep putting off unit testing, even though according to our Definition of Done it is part of finishing a task. We more or less skimmed over the DoD and then regarded such tasks as complete. After this sprint we will try to follow the DoD meticulously in order to avoid this kind of procrastination.
Jan noted that the status updates we agreed on in the last sprint meeting, to make our day-to-day work more transparent for each other, are working well. A short Slack note tells us immediately where the other one stands and what is targeted next. He also noticed that we are still unsure in our self-assessment - for example, the tasks "Create persona/customer journey" and "Create exercise module" fell out of this sprint because, realistically, there simply was no time for them.
Apart from that, the task "Create persona/customer journey" was dropped entirely, because we currently tend to lose ourselves in such "details", which follow the textbook too closely. We both work on Sparv in our free time alongside our main jobs - so we have to become more effective. The conclusion is that Jan will take over a large part of the frontend work so that I can shift further into the backend. We hope this will let us take bigger steps within the same time frame. If things like personas do turn out to be missing, they will be created as soon as they are needed.
It is interesting to observe how quickly one falls into familiar patterns (detailed concept, extensive elaboration, etc.) even though we gave ourselves some leeway that deliberately breaks with these patterns. At the end we took a quick look at our goals for the year - we think we will reach them!
### Planning
The coming sprint cycle will be a bit longer this time and run until 2017-11-10. We have planned the following tasks until then:
#### Jan
- Implement login views in Sparvapp
- Implement settings views in Sparvapp
- Design the exercise modules
- Implement creating, viewing, editing, and deleting a client in Sparvapp
- Publish our definition of a sprint as a blog post
- Validate cross-posting the blog posts on other platforms (Medium)
#### Daniel
- Enable deleting a user account
- Adapt the existing Sparvnest code to the style guide
- Write unit tests
- Define API guidelines
- Implement creating, viewing, editing, and deleting a client in Sparvnest
That's it for the fourth sprint. If you have questions or suggestions, feel free to contact us via [Twitter](https://twitter.com/sparvapp) and [e-mail](mailto:[email protected]) - we appreciate every kind of feedback! Until the next Sparv Telegram - again as a blog post and of course in audio form as a podcast at [telegram.sparv.de](http://telegram.sparv.de). See you then! | 100.759259 | 617 | 0.812534 | deu_Latn | 0.998162 |
ff5d66b9e297381424a45aaeca07a3c9e76948cc | 1,136 | md | Markdown | includes/hdinsight-selector-create-clusters.md | yfakariya/azure-content-jajp | 69be88c0fee4443d5dcab82bf4aed6a155fea287 | [
"CC-BY-3.0"
] | 2 | 2016-09-23T01:46:35.000Z | 2016-09-23T05:12:58.000Z | includes/hdinsight-selector-create-clusters.md | yfakariya/azure-content-jajp | 69be88c0fee4443d5dcab82bf4aed6a155fea287 | [
"CC-BY-3.0"
] | null | null | null | includes/hdinsight-selector-create-clusters.md | yfakariya/azure-content-jajp | 69be88c0fee4443d5dcab82bf4aed6a155fea287 | [
"CC-BY-3.0"
] | 1 | 2020-11-04T04:29:27.000Z | 2020-11-04T04:29:27.000Z | > [AZURE.SELECTOR-LIST (OS | Creation method)]
- [Linux | Overview](hdinsight-hadoop-provision-linux-clusters.md)
- [Linux | Azure portal](hdinsight-hadoop-create-linux-clusters-portal.md)
- [Linux | Azure Data Factory](hdinsight-hadoop-create-linux-clusters-adf.md)
- [Linux | Azure CLI](hdinsight-hadoop-create-linux-clusters-azure-cli.md)
- [Linux | Azure PowerShell](hdinsight-hadoop-create-linux-clusters-azure-powershell.md)
- [Linux | REST API](hdinsight-hadoop-create-linux-clusters-curl-rest.md)
- [Linux | .NET SDK](hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md)
- [Linux | ARM templates](hdinsight-hadoop-create-linux-clusters-arm-templates.md)
- [Windows | Overview](hdinsight-provision-clusters.md)
- [Windows | Azure portal](hdinsight-hadoop-create-windows-clusters-portal.md)
- [Windows | Azure CLI](hdinsight-hadoop-create-windows-clusters-cli.md)
- [Windows | Azure PowerShell](hdinsight-hadoop-create-windows-clusters-powershell.md)
- [Windows | .NET SDK](hdinsight-hadoop-create-windows-clusters-dotnet-sdk.md)
- [Windows | ARM templates](hdinsight-hadoop-create-windows-clusters-arm-templates.md)
<!---HONumber=AcomDC_0316_2016--> | 66.823529 | 88 | 0.765845 | yue_Hant | 0.500543 |
ff5d9e4eef1e749c39ea6ffa7923f99dbbddedc9 | 489 | md | Markdown | unit_03/07_Regular_Expressions/README.md | duliodenis/python_master_degree | 3ab76838ce2fc1606f28e988a3273dd27122a621 | [
"MIT"
] | 19 | 2019-03-14T01:39:32.000Z | 2022-02-03T00:36:43.000Z | unit_03/07_Regular_Expressions/README.md | duliodenis/python_master_degree | 3ab76838ce2fc1606f28e988a3273dd27122a621 | [
"MIT"
] | 1 | 2020-04-10T01:01:16.000Z | 2020-04-10T01:01:16.000Z | unit_03/07_Regular_Expressions/README.md | duliodenis/python_master_degree | 3ab76838ce2fc1606f28e988a3273dd27122a621 | [
"MIT"
] | 5 | 2019-01-02T20:46:05.000Z | 2020-07-08T22:47:48.000Z | # Python Techdegree: Regular Expressions in Python
Regular expressions are one of the tools that every programmer needs, but is often scared of.

In this course, we'll explore the re module that Python provides and learn to write regular expressions for many different situations.
What you'll learn
* Search methods
* Patterns
* Converting messy text into something useful
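As a concrete taste of those topics, here is a small illustration using Python's built-in `re` module (an illustrative sketch, not taken from the course materials):
```python
import re

log = "user=alice id=42 action=login"

# Search methods: find the first id=<digits> pair anywhere in the string.
match = re.search(r"id=(\d+)", log)
if match:
    print(match.group(1))  # -> 42

# Converting messy text into something useful: collapse whitespace runs.
messy = "too   many\t\tspaces"
print(re.sub(r"\s+", " ", messy))  # -> "too many spaces"
```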
### Support or Contact
Visit [ddApps.co](http://ddapps.co) to see more.
| 28.764706 | 134 | 0.752556 | eng_Latn | 0.998713 |
ff5db9ff9191db385f9d5811a460030dd00c5040 | 7,841 | md | Markdown | en/config/actuators.md | bfYSKUY/PX4-user_guide | 67a7308b433a9bf757df433867ef726918cc8054 | [
"CC-BY-4.0"
] | 86 | 2017-06-21T07:40:02.000Z | 2020-10-26T07:13:15.000Z | en/config/actuators.md | bfYSKUY/PX4-user_guide | 67a7308b433a9bf757df433867ef726918cc8054 | [
"CC-BY-4.0"
] | 519 | 2017-04-13T06:33:24.000Z | 2020-10-28T03:42:10.000Z | en/config/actuators.md | bfYSKUY/PX4-user_guide | 67a7308b433a9bf757df433867ef726918cc8054 | [
"CC-BY-4.0"
] | 252 | 2017-04-13T07:02:05.000Z | 2020-10-26T02:43:30.000Z | # Actuator Configuration and Testing
:::note
The *Actuators* view is only displayed if dynamic control allocation is enabled using the [SYS_CTRL_ALLOC](../advanced_config/parameter_reference.md#SYS_CTRL_ALLOC) parameter.
This replaces geometry and mixer configuration files with configuration using parameters.
You should also ensure that the appropriate airframe type is selected using [CA_AIRFRAME](../advanced_config/parameter_reference.md#CA_AIRFRAME).
The easiest way to try this out in simulation is to use the following make target, which has control allocation pre-enabled:
```
make px4_sitl gazebo_iris_ctrlalloc
```
:::
After selecting an [airframe](../config/airframe.md) you will generally need to configure the specific geometry, assign actuators to outputs, and test the actuator response.
This can be done in *QGroundControl*, under the **Vehicle Setup > Actuators** tab.
The multicopter configuration screen looks like this.

Note that some settings are hidden unless you select the **Advanced** checkbox in the top right corner.
## Geometry
The geometry section is used to configure any additional geometry-related settings for the selected [airframe](../config/airframe.md).
The UI displays a customised view for the selected type; if there are no configurable options this may display a static image of the frame geometry, or nothing at all.
The screenshot below shows the geometry screen for a multicopter frame, which allows you to select the number of motors, their relative positions on the frame, and their expected spin directions (select "**Direction CCW**" for counter-clockwise motors).
This particular frame also includes an image showing the motor positions, which dynamically updates as the motors settings are changed.

A fixed wing airframe would instead display the parameters that define control surfaces, while a VTOL airframe could display both motors and control surfaces.
### Conventions
The following sections contain the conventions and explanations for configuring the geometry.
#### Coordinate system
The coordinate system is NED (in body frame), where the X axis points forward, the Y axis to the right and the Z axis down.
Positions are relative to the center of gravity (in meters).
#### Control Surfaces and Servo Direction
Control surfaces use the following deflection direction convention:
- horizontal (e.g. Aileron): up = positive deflection
- vertical (e.g. Rudder): right = positive deflection
- mixed (e.g. V-Tail): up = positive deflection

:::note
If a servo does not move in the expected direction set in the geometry, you can reverse it using a checkbox on the Actuator Output.
:::
### Motor Tilt Servos
Tilt servos are configured as follows:
- The reference direction is upwards (negative Z direction).
- Tilt direction: **Towards Front** means the servo tilts towards positive X direction, whereas **Towards Right** means towards positive Y direction.
- Minimum and maximum tilt angles: specify the physical limits in degrees of the tilt at minimum control and maximum respectively.
An angle of 0° means to point upwards, then increases towards the tilt direction.
:::note
Negative angles are possible. For example tiltable multirotors have symmetrical limits and one could specify -30 as minimum and 30 degrees as maximum.
:::
:::note
If a motor/tilt points downwards and tilts towards the back it is logically equivalent to a motor pointing upwards and tilting towards the front.
:::
- Control: depending on the airframe, tilt servos can be used to control torque on one or more axis (it's possible to only use a subset of the available tilts for a certain torque control):
- Yaw: the tilts are used to control yaw (generally desired).
If four or more motors are used, the motors can be used instead.
- Pitch: typically differential motor thrust is used to control pitch, but some airframes require pitch to be controlled by the tilt servos.
Bicopters are among those.
- Tiltable motors are then assigned to one of the tilt servos.

### Bidirectional Motors
Some vehicles may use bidirectional motors (i.e. motor spins in direction 1 for lower output range and in direction 2 for the upper half).
For example, ground vehicles that want to move forwards and backwards, or VTOL vehicles that have pusher motors that go in either direction.
If bidiectional motors are used, make sure to select the **Reversible** checkbox for those motors (the checkbox is displayed as an "advanced" option).

Note that you will need to also ensure that the ESC associated with bidirectional motors is configured appropriately (e.g. 3D mode enabled for DShot ESCs, which can be achieved via [DShot commands](../peripherals/dshot.md#commands)).
## Actuator Outputs
The actuators and any other output function can be assigned to any of the physical outputs.
Each output has its own tab, e.g. the PWM MAIN or AUX output pins, or UAVCAN.
PWM outputs are grouped according to the hardware groups of the autopilot.
Each group allows to configure the PWM rate or DShot/Oneshot (if supported).
:::note
For boards with MAIN and AUX, prefer the AUX pins over the MAIN pins for motors, as they have lower latency.
:::
The AUX pins have additional configuration options for camera capture/triggering.
Selecting these requires a reboot before they are applied.
## Actuator Testing
<!-- Not documented: "Identify and assign motors" -->
When testing actuators, make sure that:
- Motors spin at the "minimum thrust" position.
The sliders snap into place at the lower end, and motors are turned off (disarmed).
The "minimum thrust" position is the next slider position, which commands the minimum thrust.
For PWM motors, adjust the minimum output value such that the motors spin at that slider position (not required for DShot).
:::note
VTOLs will automatically turn off motors pointing upwards during fixed-wing flight.
For Standard VTOLs these are the motors defined as multicopter motors.
For Tiltrotors these are the motors that have no associated tilt servo.
Tailsitters use all motors in fixed-wing flight.
:::
- Servos move into the direction of the convention described above.
:::note
A trim value can be configured for control surfaces, which is also applied to the test slider.
:::
Note the following behaviour:
- If a safety button is used, it must be pressed before actuator testing is allowed.
- The kill-switch can still be used to stop motors immediately.
- Servos do not actually move until the corresponding slider is changed.
- The parameter [COM_MOT_TEST_EN](../advanced_config/parameter_reference.md#COM_MOT_TEST_EN) can be used to completely disable actuator testing.
- On the shell, [actuator_test](../modules/modules_command.md#actuator-test) can be used as well.
### Reversing Motors
The motors must turn in the direction defined in configured geometry ("**Direction CCW**" checkboxes).
If any motors do not turn in the correct direction they must be reversed.
There are several options:
- If the ESCs are configured as [DShot](../peripherals/dshot.md) you can reverse the direction via UI (**Set Spin Direction** buttons).
Note that the current direction cannot be queried, so you might have to try both options.
- Swap 2 of the 3 motor cables (it does not matter which ones).
:::note
If motors are not connected via bullet-connectors, re-soldering is required (this is a reason, among others, to prefer DShot ESCs).
:::
| 51.927152 | 253 | 0.778472 | eng_Latn | 0.998708 |
ff5e0c54467bebaea1461ddc9679f04a0e8e3ef7 | 1,478 | md | Markdown | README.md | owen-carter/ng-zorro-admin | 5b4e9ec84dd5ce3e30bddb8bacb7f0d0cbb96990 | [
"MIT"
] | 33 | 2017-08-22T16:06:47.000Z | 2021-04-26T03:29:39.000Z | README.md | owen-carter/ng-zorro-admin | 5b4e9ec84dd5ce3e30bddb8bacb7f0d0cbb96990 | [
"MIT"
] | 2 | 2018-03-12T09:28:29.000Z | 2018-05-03T08:32:56.000Z | README.md | owen-carter/ng-zorro-admin | 5b4e9ec84dd5ce3e30bddb8bacb7f0d0cbb96990 | [
"MIT"
] | 14 | 2017-10-13T10:43:30.000Z | 2021-12-24T16:14:53.000Z | # ZorroAdmin 
This project was generated with [Angular CLI](https://github.com/angular/angular-cli) version 1.3.1
### How to start?
```bash
git clone https://github.com/owen-carter/ng-zorro-admin.git
cd ng-zorro-admin
npm install -g @angular/cli
npm install
npm run start
```
### About proxy setting
> please read the ./proxy.conf.json
```json
{
"/api": {
"target": "http://localhost:3000",
"secure": false
}
}
```
### How to deploy?
```bash
sh ./deploy.sh deploy
```
### About nginx config
```bash
upstream patent {
server 127.0.0.1:3000;
keepalive 2000;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name owen-carter.com;
root /usr/share/nginx/html/patent/;
location / {
# index.html;
}
location /api/ {
proxy_pass http://patent;
proxy_set_header Host $host:$server_port;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
```
### Screenshots





### Changelog
+ v1.0
  - Remote assistance
  - Log management
+ v2.0
  - Remote assistance
  - Log management
  - System upgrade
  - Database backup
  - Certificate management
| 17.807229 | 99 | 0.584574 | eng_Latn | 0.204436 |
ff5e5390e1e3c13085d31c3050395507e55a3999 | 766 | md | Markdown | articles/virtual-machines/virtual-machines-linux-cli-manage.md | jglixon/azure-content | c2efc0483bfaaf14e3f8eb9a4d8545854023f625 | [
"CC-BY-3.0"
] | 5 | 2018-07-10T22:15:15.000Z | 2019-10-30T21:09:29.000Z | articles/virtual-machines/virtual-machines-linux-cli-manage.md | wbuchwalter/azure-content | 4b6783ee9303241adb73b9bbebb32d8c78945df5 | [
"CC-BY-3.0"
] | null | null | null | articles/virtual-machines/virtual-machines-linux-cli-manage.md | wbuchwalter/azure-content | 4b6783ee9303241adb73b9bbebb32d8c78945df5 | [
"CC-BY-3.0"
] | 8 | 2018-04-21T18:28:43.000Z | 2021-08-02T12:19:07.000Z | <properties
pageTitle="Basic Azure CLI Commands for Linux and Mac | Microsoft Azure"
description="Basic Azure CLI commands to get you started managing your VMs in Azure Resource Manager mode on Linux and Mac"
services="virtual-machines-linux"
documentationCenter=""
authors="RicksterCDN"
manager="timlt"
editor="tysonn"
tags="azure-resource-manager"/>
<tags
ms.service="virtual-machines-linux"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="vm-linux"
ms.workload="infrastructure-services"
ms.date="08/23/2016"
ms.author="rclaus" />
# Common Azure CLI commands on Linux and Mac
[AZURE.INCLUDE [virtual-machines-common-cli-manage](../../includes/virtual-machines-common-cli-manage.md)]
| 31.916667 | 127 | 0.699739 | eng_Latn | 0.544284 |
ff5e6ae6a44c45ffd559538483db97b32d3623e5 | 3,892 | md | Markdown | courses/machine_learning/deepdive2/production_ml/labs/samples/contrib/ibm-samples/ffdl-seldon/README.md | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | 2 | 2022-01-06T11:52:57.000Z | 2022-01-09T01:53:56.000Z | courses/machine_learning/deepdive2/production_ml/labs/samples/contrib/ibm-samples/ffdl-seldon/README.md | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | null | null | null | courses/machine_learning/deepdive2/production_ml/labs/samples/contrib/ibm-samples/ffdl-seldon/README.md | Glairly/introduction_to_tensorflow | aa0a44d9c428a6eb86d1f79d73f54c0861b6358d | [
"Apache-2.0"
] | null | null | null | # Simple IBM OSS demo
This simple IBM OSS demo will demonstrate how to train a model using [Fabric for Deep Learning](https://github.com/IBM/FfDL) and then deploy it with [Seldon](https://github.com/SeldonIO/seldon-core).
## Prerequisites
1. Install [Fabric for Deep Learning](https://github.com/IBM/FfDL) and [Seldon](https://github.com/SeldonIO/seldon-core) on the same Kubernetes cluster as KubeFlow Pipeline.
2. Create two S3 Object Storage buckets, then store the training data, model definition file, and FfDL manifest file in the training bucket.
* The training data for this demo is from the [UTKface's aligned & cropped faces dataset](https://susanqq.github.io/UTKFace/). We will be using the data binary `UTKFace.tar.gz`.
* The model definition file needs to be packaged as `gender-classification.zip`.
with the following commands
```shell
zip -j source/gender-classification source/model-source-code/gender_classification.py
```
Then upload the model definition file and FfDL manifest file in the source directory. They are named `gender-classification.zip` and `manifest.yml`.
3. Fill in the necessary credentials at [credentials/creds.ini](credentials/creds.ini) and upload it to one of your GitHub private repositories. The details of each parameter are defined below.
## Instructions
### 1. With Command line
First, install the necessary Python Packages
```shell
pip3 install ai_pipeline_params
```
In this repository, run the following commands to create the argo files using the Kubeflow pipeline SDK.
```shell
dsl-compile --py ffdl_pipeline.py --output ffdl-pipeline.tar.gz
```
Then, submit `ffdl-pipeline.tar.gz` to the kubeflow pipeline UI. From there you can create different experiments and runs using the ffdl pipeline definition.
### 2. With Jupyter Notebook
Run `jupyter notebook` to start running your jupyter server and load the notebook `ffdl_pipeline.ipynb` and follow the instructions.
## Pipeline Parameters
- **config-file-url**: GitHub raw content link to the pipeline credentials file
- **github-token**: GitHub Token that can access your private repository
- **model-def-file-path**: Model definition path in the training bucket
- **manifest-file-path**: FfDL manifest path in the training bucket
- **model-deployment-name**: Seldon Model deployment name
- **model-class-name**: PyTorch model class name
- **model-class-file**: Model file that contains the PyTorch model class
## Credentials needed to be stored in GitHub
- **s3_url**: S3 Object storage endpoint for your FfDL training job.
- **s3_access_key_id**: S3 Object storage access key id
- **s3_secret_access_key**: S3 Object storage secret access key
- **training_bucket**: S3 Bucket for the training job data location.
- **result_bucket**: S3 Bucket for the training job result location.
- **ffdl_rest**: RESTAPI endpoint for FfDL.
- **k8s_public_nodeport_ip**: IP of the host machine. It will be used to generate a web service endpoint for the served model.
## Naming convention for Training Model Files
Since libraries such as Spark and OpenCV are too large to put inside the serving containers, users who rely on libraries that are not part of the standard PyTorch container from FfDL should consider defining their PyTorch model class in a separate Python file. This is because when Python loads the model class from a user's training file, the Python interpreter needs to read and import all the modules in that same file in order to properly construct the module dependencies.
Therefore, by default users should consider naming their model class `ModelClass` and putting the model class code in a file called `model_class.py`. However, users can choose not to follow this naming convention as long as they provide the model class and file name as part of the pipeline parameters.
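For illustration only, a model class file that follows this convention might look like the sketch below (the layers are hypothetical; the real class must match the architecture trained in `gender_classification.py`):
```python
# model_class.py -- hypothetical sketch of the default naming convention
import torch.nn as nn
class ModelClass(nn.Module):
    def __init__(self, num_classes=2):
        super(ModelClass, self).__init__()
        # Keep this file free of heavy training-only imports so the
        # serving container can load the class on its own.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)
    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        return self.classifier(x)
```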
| 62.774194 | 397 | 0.764388 | eng_Latn | 0.990181 |
ff5f71125d47635d690c33dbf00a2ed23b427940 | 557 | md | Markdown | source/apm/APMOverview.md | FrogIf/FrogNotebook | 9a71efacc12a3ce0bf087de005f1c0c34e6910f0 | [
"MIT"
] | null | null | null | source/apm/APMOverview.md | FrogIf/FrogNotebook | 9a71efacc12a3ce0bf087de005f1c0c34e6910f0 | [
"MIT"
] | 1 | 2022-01-14T04:48:26.000Z | 2022-01-14T04:48:26.000Z | source/apm/APMOverview.md | FrogIf/FrogNotebook | 9a71efacc12a3ce0bf087de005f1c0c34e6910f0 | [
"MIT"
] | null | null | null | # APM
APM (Application Performance Monitoring) focuses on monitoring, diagnosing, and analyzing an application along three dimensions:
* Logging: records discrete events, for example an application's debug or error messages. Logs are the evidence we rely on when diagnosing problems.
* Tracing: records information scoped to a single request, for example the execution path and latency of a remote method call. Tracing is a powerful tool for tracking down system performance problems.
* Metrics: records aggregatable data. For example, the current depth of a queue can be defined as a metric that is updated as elements are enqueued or dequeued; the number of HTTP requests can be defined as a counter that is incremented whenever a new request arrives.

Monitoring systems in each of these three directions include:
* Logging:
  * ElasticSearch + Logstash + Kibana
* Tracing:
  * Skywalking
  * Jaeger
  * OpenTracing
* Metrics:
  * Prometheus
  * Grafana
  * OpenMetrics
> There are many others as well; notably, OpenTracing + OpenMetrics = OpenTelemetry
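As a rough, dependency-free illustration of how logs and metrics differ in practice (all names and values here are invented for the example):
```python
import json
import logging
import time
logging.basicConfig(level=logging.INFO)
request_count = 0  # a metric: one aggregatable number
def handle_request(path):
    global request_count
    request_count += 1                       # metrics: bump a counter per request
    start = time.time()
    # ... real request handling would go here ...
    elapsed_ms = (time.time() - start) * 1000
    # logging: one discrete, self-describing event per request
    logging.info(json.dumps({"event": "request_done",
                             "path": path,
                             "elapsed_ms": round(elapsed_ms, 2)}))
handle_request("/api/users")
print("requests so far:", request_count)     # a collector such as Prometheus would scrape this
```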
| 21.423077 | 94 | 0.750449 | yue_Hant | 0.952642 |
ff5fb453ff7eabad8c1d1ce9ba70c41c7d6aced1 | 3,768 | md | Markdown | docs/Get-FGTUserGroup.md | Cool34000/PowerFGT | 98deef43f1d0e303664e7253fb553b84040ff64a | [
"Apache-2.0"
] | null | null | null | docs/Get-FGTUserGroup.md | Cool34000/PowerFGT | 98deef43f1d0e303664e7253fb553b84040ff64a | [
"Apache-2.0"
] | null | null | null | docs/Get-FGTUserGroup.md | Cool34000/PowerFGT | 98deef43f1d0e303664e7253fb553b84040ff64a | [
"Apache-2.0"
] | null | null | null | ---
external help file: PowerFGT-help.xml
Module Name: PowerFGT
online version:
schema: 2.0.0
---
# Get-FGTUserGroup
## SYNOPSIS
Get the list of all local groups
## SYNTAX
### default (Default)
```
Get-FGTUserGroup [-filter_attribute <String>] [-filter_type <String>] [-filter_value <PSObject>] [-skip]
[-vdom <String[]>] [-connection <PSObject>] [<CommonParameters>]
```
### name
```
Get-FGTUserGroup [[-name] <String>] [-filter_attribute <String>] [-filter_type <String>]
[-filter_value <PSObject>] [-skip] [-vdom <String[]>] [-connection <PSObject>] [<CommonParameters>]
```
### id
```
Get-FGTUserGroup [-id <String>] [-filter_attribute <String>] [-filter_type <String>] [-filter_value <PSObject>]
[-skip] [-vdom <String[]>] [-connection <PSObject>] [<CommonParameters>]
```
### filter
```
Get-FGTUserGroup [-filter_attribute <String>] [-filter_type <String>] [-filter_value <PSObject>] [-skip]
[-vdom <String[]>] [-connection <PSObject>] [<CommonParameters>]
```
## DESCRIPTION
Get the list of all local groups (name, members, type...)
## EXAMPLES
### EXAMPLE 1
```
Get-FGTUserGroup
```
Display all local groups
### EXAMPLE 2
```
Get-FGTUserGroup -id 23
```
Get the local group with id 23
### EXAMPLE 3
```
Get-FGTUserGroup -name FGT -filter_type contains
```
Get local groups whose name contains *FGT*
### EXAMPLE 4
```
Get-FGTUserGroup -skip
```
Display all local groups (but only relevant attributes)
### EXAMPLE 5
```
Get-FGTUserGroup -vdom vdomX
```
Display all local groups on vdomX
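The filter parameters documented below can also be spelled out explicitly. As an extra sketch that is not part of the generated help (it should be equivalent to Example 3, assuming the same FortiGate connection):
### EXAMPLE 6
```
Get-FGTUserGroup -filter_attribute name -filter_type contains -filter_value FGT
```
Get local groups whose name contains *FGT*, using the explicit filter parameters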
## PARAMETERS
### -name
{{ Fill name Description }}
```yaml
Type: String
Parameter Sets: name
Aliases:
Required: False
Position: 2
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -id
{{ Fill id Description }}
```yaml
Type: String
Parameter Sets: id
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -filter_attribute
{{ Fill filter_attribute Description }}
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -filter_type
{{ Fill filter_type Description }}
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: Equal
Accept pipeline input: False
Accept wildcard characters: False
```
### -filter_value
{{ Fill filter_value Description }}
```yaml
Type: PSObject
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -skip
Ignores the specified number of objects and then gets the remaining objects.
Enter the number of objects to skip.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -vdom
{{ Fill vdom Description }}
```yaml
Type: String[]
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -connection
{{ Fill connection Description }}
```yaml
Type: PSObject
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: $DefaultFGTConnection
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
## OUTPUTS
## NOTES
## RELATED LINKS
| 17.690141 | 315 | 0.72293 | eng_Latn | 0.494985 |
ff605bad2f818c4d9ad51a186670654fa0c9da8c | 46 | md | Markdown | README.md | ankush60000/profile-rest-api | 04f8c8901f4456eb696db8f84792dccfd78046e8 | [
"MIT"
] | null | null | null | README.md | ankush60000/profile-rest-api | 04f8c8901f4456eb696db8f84792dccfd78046e8 | [
"MIT"
] | null | null | null | README.md | ankush60000/profile-rest-api | 04f8c8901f4456eb696db8f84792dccfd78046e8 | [
"MIT"
] | null | null | null | # profiles REST API
profiles REST API CODE.. | 15.333333 | 24 | 0.73913 | kor_Hang | 0.872021 |
ff60906c8974aa66975552c45942c75a6eedb31e | 927 | md | Markdown | api/Access.NavigationButton.Caption.md | seydel1847/VBA-Docs | b310fa3c5e50e323c1df4215515605ae8da0c888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Access.NavigationButton.Caption.md | seydel1847/VBA-Docs | b310fa3c5e50e323c1df4215515605ae8da0c888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Access.NavigationButton.Caption.md | seydel1847/VBA-Docs | b310fa3c5e50e323c1df4215515605ae8da0c888 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: NavigationButton.Caption property (Access)
keywords: vbaac10.chm10450
f1_keywords:
- vbaac10.chm10450
ms.prod: access
api_name:
- Access.NavigationButton.Caption
ms.assetid: 65770d68-fe1f-4553-b8e8-25649db2e059
ms.date: 06/08/2017
localization_priority: Normal
---
# NavigationButton.Caption property (Access)
Gets or sets the text that appears in the control. Read/write **String**.
## Syntax
_expression_.`Caption`
_expression_ A variable that represents a [NavigationButton](Access.NavigationButton.md) object.
## Remarks
The **Caption** property is a string expression that can contain up to 2,048 characters.
If you don't set a caption for a form, button, or label, Microsoft Access will assign the object a unique name based on the object, such as "Form1".
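As a brief illustration (the `Form_Load` event procedure and the control name `NavigationButton7` are assumptions for this example, not part of the documented API):
```vb
Private Sub Form_Load()
    ' Change the text displayed on a navigation button when the form loads.
    Me.NavigationButton7.Caption = "Open Orders"
End Sub
```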
## See also
[NavigationButton Object](Access.NavigationButton.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 23.769231 | 148 | 0.773463 | eng_Latn | 0.778438 |
ff60e07acc7754d7be2acc1e34388a9c25ab0f4c | 1,371 | md | Markdown | _articles/jp/testing/run-your-tests-in-the-app-center.md | rajivshah3/devcenter | 0ad6eb5455086508b99e4d9a17b4075ab1b096ac | [
"MIT"
] | null | null | null | _articles/jp/testing/run-your-tests-in-the-app-center.md | rajivshah3/devcenter | 0ad6eb5455086508b99e4d9a17b4075ab1b096ac | [
"MIT"
] | null | null | null | _articles/jp/testing/run-your-tests-in-the-app-center.md | rajivshah3/devcenter | 0ad6eb5455086508b99e4d9a17b4075ab1b096ac | [
"MIT"
] | null | null | null | ---
changelog:
last_modified_at:
title: Run your tests in Visual Studio App Center
redirect_from:
- "/xamarin/run-your-tests-in-the-app-center"
menu:
testing-main:
weight: 9
---
You can upload and schedule tests for your project in Visual Studio App Center. The following test frameworks are available:
* Appium
* Espresso
* Calabash
* Xamarin.UITest
* XCUITest
1. In the Workflow Editor, [add](/getting-started/getting-started-workflows/) the `App Center upload and schedule tests` Step to your workflow.
   This Step has several required inputs. You can find the values for these inputs by setting up your tests in Visual Studio App Center.
2. Log in to App Center.
3. [Prepare your tests for upload](https://docs.microsoft.com/en-us/appcenter/test-cloud/preparing-for-upload/).
4. Create an [App Center](https://appcenter.ms/apps) project.
5. Go to the `Test runs` tab and start a `New test run`:
   * Select the devices on which you want to test your app.
   * Configure the test run: select the test series, system language, and test framework.
   * **In the** `Submit` **tab of the** `Upload and schedule test` **section, you can find all the inputs the Step requires.**
   * Click `Done`.
6. On Bitrise, open the Workflow Editor and fill in the required inputs of the Step. You will need to:
   * Get an API token
   * Set the target app
   * Set the test framework (you can see the available options)
   * Add the device selection slug
   * Add the test series name
   * Set the system locale for the test run (for example, _en_US_)
   * Set the path to the application file (.ipa or .apk)
   * Set the path to the test directory (use the appropriate directory for the test framework you selected) | 35.153846 | 130 | 0.778993 | yue_Hant | 0.240119 |
ff610f89e9a06de6c40c860070019acaa105e94f | 3,849 | md | Markdown | docs/extensibility/debugger/expression-evaluator-implementation-strategy.md | thoelkef/visualstudio-docs.de-de | bfc727b45dcd43a837d3f4f2b06b45d59da637bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/expression-evaluator-implementation-strategy.md | thoelkef/visualstudio-docs.de-de | bfc727b45dcd43a837d3f4f2b06b45d59da637bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/expression-evaluator-implementation-strategy.md | thoelkef/visualstudio-docs.de-de | bfc727b45dcd43a837d3f4f2b06b45d59da637bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Expression evaluator implementation strategy | Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
helpviewer_keywords:
- expression evaluation, implementation strategy
- debug engines, implementation strategies
ms.assetid: 1bccaeb3-8109-4128-ae79-16fd8fbbaaa2
author: acangialosi
ms.author: anthc
manager: jillfra
ms.workload:
- vssdk
ms.openlocfilehash: 3922689c20c839b3c0c2b2440bc9fefd5d25c80a
ms.sourcegitcommit: 6cfffa72af599a9d667249caaaa411bb28ea69fd
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/02/2020
ms.locfileid: "80738671"
---
# <a name="expression-evaluator-implementation-strategy"></a>Expression evaluator implementation strategy
> [!IMPORTANT]
> In Visual Studio 2015, this way of implementing expression evaluators is deprecated. For information about implementing CLR expression evaluators, please see [CLR expression evaluators](https://github.com/Microsoft/ConcordExtensibilitySamples/wiki/CLR-Expression-Evaluators) and [Managed expression evaluator sample](https://github.com/Microsoft/ConcordExtensibilitySamples/wiki/Managed-Expression-Evaluator-Sample).
One approach to quickly creating an expression evaluator (EE) is to first implement the minimal code required to display local variables in the **Locals** window. It is helpful to know that each row in the **Locals** window displays the name, type, and value of a local variable, and that all three are represented by an [IDebugProperty2](../../extensibility/debugger/reference/idebugproperty2.md) object. The name, type, and value of a local variable are obtained from an `IDebugProperty2` object by calling the [GetPropertyInfo](../../extensibility/debugger/reference/idebugproperty2-getpropertyinfo.md) method. For more information about displaying local variables in the **Locals** window, see [Displaying Locals](../../extensibility/debugger/displaying-locals.md).
## <a name="discussion"></a>Discussion
A possible implementation sequence begins with implementing [IDebugExpressionEvaluator](../../extensibility/debugger/reference/idebugexpressionevaluator.md). The [Parse](../../extensibility/debugger/reference/idebugexpressionevaluator-parse.md) and [GetMethodProperty](../../extensibility/debugger/reference/idebugexpressionevaluator-getmethodproperty.md) methods must be implemented to display locals. Calling `IDebugExpressionEvaluator::GetMethodProperty` returns an `IDebugProperty2` object that represents a method, that is, an [IDebugMethodField](../../extensibility/debugger/reference/idebugmethodfield.md) object. The methods themselves are not displayed in the **Locals** window.
The [EnumChildren](../../extensibility/debugger/reference/idebugproperty2-enumchildren.md) method should be implemented next. The debug engine (DE) calls this method to obtain a list of local variables and arguments by passing `IDebugProperty2::EnumChildren` a `guidFilter` argument of `guidFilterLocalsPlusArgs`. `IDebugProperty2::EnumChildren` calls [EnumArguments](../../extensibility/debugger/reference/idebugmethodfield-enumarguments.md) and [EnumLocals](../../extensibility/debugger/reference/idebugmethodfield-enumlocals.md) and combines the results into a single enumeration. See [Displaying Locals](../../extensibility/debugger/displaying-locals.md) for more information.
## <a name="see-also"></a>See also
- [Implementing an expression evaluator](../../extensibility/debugger/implementing-an-expression-evaluator.md)
- [Displaying locals](../../extensibility/debugger/displaying-locals.md)
| 109.971429 | 877 | 0.814497 | deu_Latn | 0.879162 |
ff6127221ef3ac6096375e107c43fb2615a5d0f2 | 1,694 | md | Markdown | _posts/2005-1-1-2005-01-01.md | gt-2003-2017/gt-2003-2017.github.io | d9b4154d35ecccbe063e9919c7e9537a84c6d56e | [
"MIT"
] | null | null | null | _posts/2005-1-1-2005-01-01.md | gt-2003-2017/gt-2003-2017.github.io | d9b4154d35ecccbe063e9919c7e9537a84c6d56e | [
"MIT"
] | null | null | null | _posts/2005-1-1-2005-01-01.md | gt-2003-2017/gt-2003-2017.github.io | d9b4154d35ecccbe063e9919c7e9537a84c6d56e | [
"MIT"
] | null | null | null | Judges 1:1 - 1:7
January 1, 2005 (Saturday)
Judah Shall Go Up
Judges 1:1 - 1:7 / New Hymnal No. 401
1:1 After the death of Joshua, the Israelites asked the LORD, "Who of us shall go up first against the Canaanites, to fight against them?" 2 The LORD said, "Judah shall go up; behold, I have given the land into his hand." 3 Then Judah said to Simeon his brother, "Come up with me into the territory allotted to me, and let us fight against the Canaanites; then I in turn will go with you into the territory allotted to you." So Simeon went with him. 4 When Judah went up, the LORD gave the Canaanites and the Perizzites into their hands, and they struck down ten thousand men at Bezek. 5 There at Bezek they found Adoni-Bezek and fought against him, and they struck down the Canaanites and the Perizzites. 6 Adoni-Bezek fled, but they pursued him, caught him, and cut off his thumbs and big toes. 7 Then Adoni-Bezek said, "In the old days seventy kings with their thumbs and big toes cut off used to pick up scraps under my table. As I have done, so God has repaid me." They brought him to Jerusalem, and he died there.
Today's English Verse
2 The LORD answered, "Judah is to go; I have given the land into their hands."
Interpretation Help
Judah Shall Go Up
Joshua allotted by lot to each tribe the parts of Canaan that had not yet been fully conquered (Josh. 15-21). The book of Judges opens, after Joshua's death, with the conquest of those lands received by lot. The Israelites asked God which tribe should be the first to go up and conquer its allotted land (v. 1). God answered, "Judah shall go up; behold, I have given the land into his hand" (v. 2). As Jacob had prophesied, the tribe of Judah took the position of the spiritual firstborn (Gen. 49:8-10). Judah then asked the tribe of Simeon to join the work (v. 3). Simeon was the tribe that had not received an independent inheritance but was allotted a portion within Judah's territory (Josh. 19:1-9). The sight of Judah, the largest and strongest of the tribes (Num. 26:22), and Simeon, the smallest and weakest (Num. 26:14), cooperating without despising one another or complaining is truly beautiful. The name Judah means praise. Praise is Satan's funeral dirge. In spiritual warfare, when we praise first and advance toward the unfinished tasks set before us, God will give them into our hands (v. 4). We are soldiers called to win complete victory, through praise and partnership, in the war that Jesus has already won but that has not yet ended.
Righteous Judgment
When Adoni-Bezek suffered the punishment of having his thumbs and big toes cut off (v. 6), he recalled what he himself had done and confessed the justice of God's judgment in these words: "In the old days seventy kings with their thumbs and big toes cut off used to pick up scraps under my table. As I have done, so God has repaid me" (v. 7). Here we learn that God repays people according to what they have done. Adoni-Bezek's confession is also a picture of the final judgment to come. Scripture says, "So then every one of us shall give account of himself to God" (Rom. 14:12). We must always live mindful of the fact that those who treat others harshly will receive a harsh judgment, and those who show no mercy will be judged without mercy. | 65.153846 | 617 | 0.708973 | kor_Hang | 1.00001 |
ff613c3cd9a293d3af7dbf8dc5344b4be19d7417 | 3,695 | md | Markdown | README.md | FlameCore/Synchronizer | 91f006856e82741c05786b70cae6bf6f28dde0d1 | [
"MIT"
] | 4 | 2015-03-16T10:52:23.000Z | 2015-09-17T03:47:25.000Z | README.md | flamecore/synchronizer | 91f006856e82741c05786b70cae6bf6f28dde0d1 | [
"MIT"
] | 2 | 2015-01-19T09:27:30.000Z | 2015-03-23T11:57:58.000Z | README.md | flamecore/synchronizer | 91f006856e82741c05786b70cae6bf6f28dde0d1 | [
"MIT"
] | 2 | 2015-03-23T11:06:55.000Z | 2015-04-08T03:04:02.000Z | FlameCore Synchronizer
======================
[](https://packagist.org/packages/flamecore/synchronizer)
[](https://scrutinizer-ci.com/g/flamecore/synchronizer)
[](https://packagist.org/packages/flamecore/synchronizer)
This library makes it easy to synchronize all kinds of things. It features a beautiful and easy to use API.
Synchronizer was developed as the backend for our deployment and testing tool [Seabreeze](https://github.com/FlameCore/Seabreeze).
Implementations
---------------
The Synchronizer library is just an abstract foundation. But concrete implementations are available:
* [FilesSynchronizer](https://github.com/FlameCore/FilesSynchronizer)
* [DatabaseSynchronizer](https://github.com/FlameCore/DatabaseSynchronizer)
Usage
-----
Include the vendor autoloader and use the classes:
```php
namespace Acme\MyApplication;
// To create a Synchronizer:
use FlameCore\Synchronizer\AbstractSynchronizer;
use FlameCore\Synchronizer\SynchronizerSourceInterface;
use FlameCore\Synchronizer\SynchronizerTargetInterface;
// To make your project compatible with Synchronizer:
use FlameCore\Synchronizer\SynchronizerInterface;
require 'vendor/autoload.php';
```
Create your Synchronizer:
```php
class ExampleSynchronizer extends AbstractSynchronizer
{
/**
* @param bool $preserve Preserve obsolete objects
* @return bool Returns whether the synchronization succeeded.
*/
public function synchronize($preserve = true)
{
// Do the sync magic
return true;
}
/**
* @param SynchronizerSourceInterface $source The source
* @return bool Returns whether the synchronizer supports the source.
*/
public function supportsSource(SynchronizerSourceInterface $source)
{
return $source instanceof ExampleSource;
}
/**
* @param SynchronizerTargetInterface $target The target
* @return bool Returns whether the synchronizer supports the target.
*/
public function supportsTarget(SynchronizerTargetInterface $target)
{
return $target instanceof ExampleTarget;
}
}
```
Create your Source and Target:
```php
class ExampleSource implements SynchronizerSourceInterface
{
/**
* @param array $settings The settings
*/
public function __construct(array $settings)
{
// Save settings
}
// Your methods...
}
class ExampleTarget implements SynchronizerTargetInterface
{
/**
* @param array $settings The settings
*/
public function __construct(array $settings)
{
// Save settings
}
// Your methods...
}
```
Make your project compatible with Synchronizer:
```php
class Application
{
protected $synchronizer;
public function setSynchronizer(SynchronizerInterface $synchronizer)
{
$this->synchronizer = $synchronizer;
}
// ...
}
```
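Tying the pieces together, a minimal sketch could look like the following (illustrative only: it reuses the example classes above and assumes `AbstractSynchronizer` can be constructed without arguments -- check the actual base class before relying on this):
```php
// Illustrative wiring of the example classes defined above.
$source = new ExampleSource(['path' => '/tmp/source']);
$target = new ExampleTarget(['path' => '/tmp/target']);
$synchronizer = new ExampleSynchronizer(); // assumption: no-arg construction
if ($synchronizer->supportsSource($source) && $synchronizer->supportsTarget($target)) {
    $success = $synchronizer->synchronize(false); // false: do not preserve obsolete objects
    echo $success ? "Synchronization succeeded\n" : "Synchronization failed\n";
}
```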
Installation
------------
### Install via Composer
[Install Composer](https://getcomposer.org/doc/00-intro.md#installation-linux-unix-osx) if you don't already have it present on your system:
$ curl -sS https://getcomposer.org/installer | php
To install the library, run the following command and you will get the latest version:
$ php composer.phar require flamecore/synchronizer
Requirements
------------
* You must have at least PHP version 5.4 installed on your system.
Contributing
------------
If you'd like to contribute, please see the [CONTRIBUTING](CONTRIBUTING.md) file first.
| 24.966216 | 140 | 0.718268 | eng_Latn | 0.749527 |
ff61a2f0ad2699ca0d2734e5515cadd7e37133ed | 49 | md | Markdown | README.md | TurkMvc/lol | c3fb98c6f371e4648891b59b4adc6cb95ae73451 | [
"WTFPL"
] | null | null | null | README.md | TurkMvc/lol | c3fb98c6f371e4648891b59b4adc6cb95ae73451 | [
"WTFPL"
] | null | null | null | README.md | TurkMvc/lol | c3fb98c6f371e4648891b59b4adc6cb95ae73451 | [
"WTFPL"
] | null | null | null | # lol
Small framework for games and applications
| 16.333333 | 42 | 0.816327 | eng_Latn | 0.991523 |
ff6493b9a8cabf8d1fbb542079f0b3ecf25cfb7f | 343 | md | Markdown | views/your_site/general/Building for production.md | Aworldc/Emudocs | 661376711cdbf088540526168b24bb290fac64d0 | [
"MIT"
] | null | null | null | views/your_site/general/Building for production.md | Aworldc/Emudocs | 661376711cdbf088540526168b24bb290fac64d0 | [
"MIT"
] | null | null | null | views/your_site/general/Building for production.md | Aworldc/Emudocs | 661376711cdbf088540526168b24bb290fac64d0 | [
"MIT"
] | null | null | null | # Building for production
How to generate a static site from your markdown
## Run the build command
Requires node.js to be installed
- Winblows: ` C:/foo/bar> emudocs build`
- Universal: ` $ node emu.js build`
## Grab the result
Placed in the `./out` directory.
Now use with your favorite web server/hosting provider.
| 21.4375 | 56 | 0.693878 | eng_Latn | 0.99561 |
ff64afc96b03da4f02e56e98bfe19a360a2f5cf6 | 18 | md | Markdown | README.md | aeadod/RL-Atari-by-DQN | d9a2c65b66cf38febe94b25ceb7202ddd6a1e3ea | [
"Apache-2.0"
] | null | null | null | README.md | aeadod/RL-Atari-by-DQN | d9a2c65b66cf38febe94b25ceb7202ddd6a1e3ea | [
"Apache-2.0"
] | null | null | null | README.md | aeadod/RL-Atari-by-DQN | d9a2c65b66cf38febe94b25ceb7202ddd6a1e3ea | [
"Apache-2.0"
] | null | null | null | # RL-Atari by DQN
| 9 | 17 | 0.666667 | eng_Latn | 0.977944 |
ff6542c9a76e458a888dba963929a5ea883bb121 | 1,238 | md | Markdown | tests/unit/test_data/en_tw-wa/en_tw/bible/other/devastated.md | linearcombination/DOC | 4478e55ec81426c15a2c402cb838e76d79741c03 | [
"MIT"
] | 1 | 2022-01-10T21:03:26.000Z | 2022-01-10T21:03:26.000Z | tests/unit/test_data/en_tw-wa/en_tw/bible/other/devastated.md | linearcombination/DOC | 4478e55ec81426c15a2c402cb838e76d79741c03 | [
"MIT"
] | 1 | 2022-03-28T17:44:24.000Z | 2022-03-28T17:44:24.000Z | tests/unit/test_data/en_tw-wa/en_tw/bible/other/devastated.md | linearcombination/DOC | 4478e55ec81426c15a2c402cb838e76d79741c03 | [
"MIT"
] | 3 | 2022-01-14T02:55:44.000Z | 2022-02-23T00:17:51.000Z | # devastated
## Related Ideas:
devastate, devastation
## Definition:
The term "devastated" or "devastation" refers to having one's property or land ruined or destroyed. It also often includes destroying or capturing the people living on that land.
* This refers to a very severe and complete destruction.
* For example, the city of Sodom was devastated by God as punishment for the sins of the people living there.
* The term "devastation" can also include causing great emotional grief resulting from the punishment or destruction.
## Translation Suggestions
* The term "devastate" could be translated as "completely destroy" or "completely ruin."
* Depending on the context, "devastation" could be translated as "complete destruction" or "total ruin" or "overwhelming grief" or "disaster."
## Bible References:
* [Daniel 08:24-25](rc://en/tn/help/dan/08/24)
* [Jeremiah 04:13](rc://en/tn/help/jer/04/13)
* [Numbers 21:30](rc://en/tn/help/num/21/30)
* [Zephaniah 01:13](rc://en/tn/help/zep/01/13)
## Word Data:
* Strong's: H1110, H1238, H2721, H1826, H3615, H3772, H4875, H7701, H7703, H7722, H7843, H8074, H8077
## Forms Found in the English ULB:
devastate, devastated, devastates, devastating, devastation, devastations
| 34.388889 | 178 | 0.745557 | eng_Latn | 0.995283 |
ff65578a9278895ba3070d1e59237b2bf76f1da7 | 14,373 | md | Markdown | README.md | buligadragos/UpTime | 36543a5df4e08de5ab49a0335b454c09e66f717a | [
"MIT"
] | null | null | null | README.md | buligadragos/UpTime | 36543a5df4e08de5ab49a0335b454c09e66f717a | [
"MIT"
] | 130 | 2021-09-21T15:06:23.000Z | 2022-03-31T23:43:41.000Z | README.md | buligadragos/UpTime | 36543a5df4e08de5ab49a0335b454c09e66f717a | [
"MIT"
] | null | null | null | # 📈 [Live Status](https://status.buligadragos.ro/): <!--live status--> **🟩 All systems operational**
This repository contains the open-source uptime monitor and status page for [Buliga Dragos](buligadragos.ro), powered by [Upptime](https://github.com/upptime/upptime).
[](https://github.com/buligadragos/UpTime/actions?query=workflow%3A%22Uptime+CI%22)
[](https://github.com/buligadragos/UpTime/actions?query=workflow%3A%22Response+Time+CI%22)
[](https://github.com/buligadragos/UpTime/actions?query=workflow%3A%22Graphs+CI%22)
[](https://github.com/buligadragos/UpTime/actions?query=workflow%3A%22Static+Site+CI%22)
[](https://github.com/buligadragos/UpTime/actions?query=workflow%3A%22Summary+CI%22)
<!--start: status pages-->
<!-- This summary is generated by Upptime (https://github.com/upptime/upptime) -->
<!-- Do not edit this manually, your changes will be overwritten -->
<!-- prettier-ignore -->
| URL | Status | History | Response Time | Uptime |
| --- | ------ | ------- | ------------- | ------ |
| <img alt="" src="https://cdn.jsdelivr.net/gh/buligadragos/UpTime@master/assets/favicons/portfolio.png" height="13"> [Portfolio](https://www.buligadragos.ro) | 🟩 Up | [portfolio.yml](https://github.com/buligadragos/UpTime/commits/HEAD/history/portfolio.yml) | <details><summary><img alt="Response time graph" src="./graphs/portfolio/response-time-week.png" height="20"> 379ms</summary><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="Response time 467" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fresponse-time.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="24-hour response time 312" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fresponse-time-day.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="7-day response time 379" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fresponse-time-week.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="30-day response time 391" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fresponse-time-month.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="1-year response time 467" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fresponse-time-year.json"></a></details> | <details><summary><a href="https://status.buligadragos.ro/history/portfolio">100.00%</a></summary><a href="https://status.buligadragos.ro/history/portfolio"><img alt="All-time uptime 99.98%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fuptime.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="24-hour uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fuptime-day.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="7-day uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fuptime-week.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="30-day uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fuptime-month.json"></a><br><a href="https://status.buligadragos.ro/history/portfolio"><img alt="1-year uptime 99.98%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fportfolio%2Fuptime-year.json"></a></details>
| <img alt="" src="https://cdn.jsdelivr.net/gh/buligadragos/UpTime@master/assets/favicons/gotoparking.png" height="13"> [GOTO Parking](https://gotoparking.ro) | 🟩 Up | [goto-parking.yml](https://github.com/buligadragos/UpTime/commits/HEAD/history/goto-parking.yml) | <details><summary><img alt="Response time graph" src="./graphs/goto-parking/response-time-week.png" height="20"> 1054ms</summary><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="Response time 1297" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fresponse-time.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="24-hour response time 1098" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fresponse-time-day.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="7-day response time 1054" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fresponse-time-week.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="30-day response time 1599" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fresponse-time-month.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="1-year response time 1297" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fresponse-time-year.json"></a></details> | <details><summary><a href="https://status.buligadragos.ro/history/goto-parking">99.60%</a></summary><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="All-time uptime 98.49%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fuptime.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="24-hour uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fuptime-day.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="7-day uptime 99.60%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fuptime-week.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="30-day uptime 98.97%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fuptime-month.json"></a><br><a href="https://status.buligadragos.ro/history/goto-parking"><img alt="1-year uptime 98.49%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fgoto-parking%2Fuptime-year.json"></a></details>
| <img alt="" src="https://cdn.jsdelivr.net/gh/buligadragos/UpTime@master/assets/favicons/oce.png" height="13"> [Our Collective Experience](http://www.ourcollectiveexperience.com/) | 🟩 Up | [our-collective-experience.yml](https://github.com/buligadragos/UpTime/commits/HEAD/history/our-collective-experience.yml) | <details><summary><img alt="Response time graph" src="./graphs/our-collective-experience/response-time-week.png" height="20"> 205ms</summary><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="Response time 260" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fresponse-time.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="24-hour response time 117" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fresponse-time-day.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="7-day response time 205" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fresponse-time-week.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="30-day response time 191" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fresponse-time-month.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="1-year response time 260" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fresponse-time-year.json"></a></details> | <details><summary><a href="https://status.buligadragos.ro/history/our-collective-experience">100.00%</a></summary><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="All-time uptime 99.94%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fuptime.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="24-hour uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fuptime-day.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="7-day uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fuptime-week.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="30-day uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fuptime-month.json"></a><br><a href="https://status.buligadragos.ro/history/our-collective-experience"><img alt="1-year uptime 99.94%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Four-collective-experience%2Fuptime-year.json"></a></details>
| <img alt="" src="https://cdn.jsdelivr.net/gh/buligadragos/UpTime@master/assets/favicons/heartmap.png" height="13"> [heartMap](https://heartmap.buligadragos.work/) | 🟩 Up | [heart-map.yml](https://github.com/buligadragos/UpTime/commits/HEAD/history/heart-map.yml) | <details><summary><img alt="Response time graph" src="./graphs/heart-map/response-time-week.png" height="20"> 2428ms</summary><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="Response time 2517" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fresponse-time.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="24-hour response time 2089" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fresponse-time-day.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="7-day response time 2428" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fresponse-time-week.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="30-day response time 2398" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fresponse-time-month.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="1-year response time 2517" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fresponse-time-year.json"></a></details> | <details><summary><a href="https://status.buligadragos.ro/history/heart-map">100.00%</a></summary><a href="https://status.buligadragos.ro/history/heart-map"><img alt="All-time uptime 97.40%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fuptime.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="24-hour uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fuptime-day.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="7-day uptime 100.00%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fuptime-week.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="30-day uptime 99.35%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fuptime-month.json"></a><br><a href="https://status.buligadragos.ro/history/heart-map"><img alt="1-year uptime 97.40%" src="https://img.shields.io/endpoint?url=https%3A%2F%2Fraw.githubusercontent.com%2Fbuligadragos%2FUpTime%2FHEAD%2Fapi%2Fheart-map%2Fuptime-year.json"></a></details>
<!--end: status pages-->
## 📄 License
- Powered by: [Upptime](https://github.com/upptime/upptime)
- Code: [MIT](./LICENSE) © [Buliga Dragos](buligadragos.ro)
- Data in the `./history` directory: [Open Database License](https://opendatacommons.org/licenses/odbl/1-0/)
| 495.62069 | 3,442 | 0.774786 | yue_Hant | 0.410362 |
ff65ecbcc286d1cc8a46a1aa1364d511737be370 | 2,093 | md | Markdown | docs/markdown.md | up9cloud/docsify | 922c52bb1d62d9fc50c7fc66f832db0d5f62740a | [
"MIT"
] | null | null | null | docs/markdown.md | up9cloud/docsify | 922c52bb1d62d9fc50c7fc66f832db0d5f62740a | [
"MIT"
] | null | null | null | docs/markdown.md | up9cloud/docsify | 922c52bb1d62d9fc50c7fc66f832db0d5f62740a | [
"MIT"
] | null | null | null | # Markdown configuration
**docsify** uses [marked](https://github.com/markedjs/marked) as its Markdown parser. You can customize how it renders your Markdown content to HTML by customizing `renderer`:
```js
window.$docsify = {
markdown: {
smartypants: true,
renderer: {
link: function() {
// ...
}
}
}
}
```
?> For a reference of the configuration options, see the [marked documentation](https://marked.js.org/#/USING_ADVANCED.md)
You can even completely customize the parsing rules.
```js
window.$docsify = {
markdown: function(marked, renderer) {
// ...
marked.setOptions({ renderer })
return marked
}
}
```
## Show front matter as a table
```html
<script src="//unpkg.com/js-yaml@3/dist/js-yaml.js"></script>
<script src="//unpkg.com/yaml-front-matter@4/dist/yamlFront.js"></script>
<script>
window.$docsify = {
markdown: function(marked, renderer) {
let lexer = marked.lexer
marked.lexer = function (text, options) {
let parsed = yamlFront.loadFront(text)
let cKey = '__content'
let table = [
[],
[],
[]
]
for (let k in parsed) {
if (k === cKey) {
continue
}
table[0].push(k)
table[1].push('---')
table[2].push(parsed[k])
}
let tableMd = table.map(row => row.join('|')).join('\n')
return lexer(tableMd + '\n' + parsed[cKey], options)
}
marked.setOptions({ renderer })
return marked
}
}
</script>
```
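For reference, a page like the following (the front matter keys here are hypothetical) would then be rendered with a small two-row table above the body: the keys become the header row and their values the data row.
```markdown
---
author: Jane
updated: 2020-01-01
---
# My page
Body text here.
```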
## Supports mermaid
```js
// Import mermaid
// <link rel="stylesheet" href="//cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.css">
// <script src="//cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.js"></script>
var num = 0;
mermaid.initialize({ startOnLoad: false });
window.$docsify = {
markdown: {
renderer: {
code: function(code, lang) {
if (lang === "mermaid") {
return (
'<div class="mermaid">' + mermaid.render('mermaid-svg-' + num++, code) + "</div>"
);
}
return this.origin.code.apply(this, arguments);
}
}
}
}
```
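With this renderer in place, any fenced code block tagged `mermaid` in a page is replaced by the rendered diagram, for example:
````markdown
```mermaid
graph LR
    A[Browser] --> B[docsify]
    B --> C[mermaid SVG]
```
````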
| 22.75 | 175 | 0.581462 | eng_Latn | 0.253899 |
ff6614db2b743377ca812be2eb0d6bd1968f3c27 | 714 | md | Markdown | content/home/about.md | DNSravya/nagasravyaD | 2830760b53129f6ffc9d6c7691c5e71b4dc7fbff | [
"MIT"
] | null | null | null | content/home/about.md | DNSravya/nagasravyaD | 2830760b53129f6ffc9d6c7691c5e71b4dc7fbff | [
"MIT"
] | null | null | null | content/home/about.md | DNSravya/nagasravyaD | 2830760b53129f6ffc9d6c7691c5e71b4dc7fbff | [
"MIT"
] | null | null | null | ---
widget: about
active: true
author: admin
widget_id: Devineni Naga Sravya
headless: true
weight: 20
title: Biography
design:
background:
image: icon.png
color: "#090909"
gradient_start: "#faf5f5"
gradient_end: "#e9dada"
text_color_light: true
image_darken: 0
---
Devineni Naga Sravya is a student in Computer Science and Engineering at The University of Texas at Arlington. My research interests include robotics and artificial intelligence. I am especially interested in taking part in neuroscience research that studies how the brain learns and computes to achieve intelligent behaviour.
**INTERESTS:**
* **Neuroscience**
* **Artificial Intelligence**
* **Robotics** | 23.8 | 315 | 0.746499 | eng_Latn | 0.919919 |
ff66e75a1276cff8a514ca291749ac913c7a597a | 1,343 | md | Markdown | README.md | dgriffen/rustbot | bf3d500e3801c31aa355c6c51ca87f0da6d4c183 | [
"Apache-2.0",
"MIT"
] | 1 | 2019-06-10T10:32:21.000Z | 2019-06-10T10:32:21.000Z | README.md | dgriffen/rustybot | bf3d500e3801c31aa355c6c51ca87f0da6d4c183 | [
"Apache-2.0",
"MIT"
] | 5 | 2019-02-05T07:31:31.000Z | 2019-03-20T02:11:22.000Z | README.md | dgriffen/rustybot | bf3d500e3801c31aa355c6c51ca87f0da6d4c183 | [
"Apache-2.0",
"MIT"
] | 2 | 2019-08-15T22:30:22.000Z | 2020-01-18T17:19:59.000Z | # nestor
![license][license-badge]
[](https://dev.azure.com/ZoeyR/nestor/_build/latest?definitionId=3&branchName=master)
[](https://codecov.io/gh/ZoeyR/nestor)
[](https://crates.io/crates/nestor)
nestor is an irc bot framework for Rust, inspired by the API of [rocket](https://rocket.rs).
## Development
### Prerequisites
- rustc and cargo (1.41.0 or newer)
- openssl dev libraries on mac or linux
### Building
- `git clone https://github.com/ZoeyR/nestor`
- `cd nestor\nestor`
- `cargo build`
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.
[LICENSE]: ./LICENSE
[license-badge]: https://img.shields.io/badge/license-MIT/Apache-blue.svg
| 35.342105 | 186 | 0.749069 | eng_Latn | 0.564372 |
ff672ab39cc00f718772347bd2e4db8166604089 | 28,589 | md | Markdown | articles/data-factory/v1/data-factory-copy-activity-tutorial-using-rest-api.md | VaijanathB/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-09T11:23:22.000Z | 2021-09-09T11:23:22.000Z | articles/data-factory/v1/data-factory-copy-activity-tutorial-using-rest-api.md | VaijanathB/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/v1/data-factory-copy-activity-tutorial-using-rest-api.md | VaijanathB/azure-docs | 6da72e4f8dfbc1fd705a30d1377180279e32b81c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Tutorial: Use REST API to create an Azure Data Factory pipeline '
description: In this tutorial, you use REST API to create an Azure Data Factory pipeline with a Copy Activity to copy data from an Azure blob storage to Azure SQL Database.
author: linda33wj
ms.service: data-factory
ms.topic: tutorial
ms.date: 01/22/2018
ms.author: jingwang
robots: noindex
---
# Tutorial: Use REST API to create an Azure Data Factory pipeline to copy data
> [!div class="op_single_selector"]
> * [Overview and prerequisites](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
> * [Copy Wizard](data-factory-copy-data-wizard-tutorial.md)
> * [Visual Studio](data-factory-copy-activity-tutorial-using-visual-studio.md)
> * [PowerShell](data-factory-copy-activity-tutorial-using-powershell.md)
> * [Azure Resource Manager template](data-factory-copy-activity-tutorial-using-azure-resource-manager-template.md)
> * [REST API](data-factory-copy-activity-tutorial-using-rest-api.md)
> * [.NET API](data-factory-copy-activity-tutorial-using-dotnet-api.md)
>
>
> [!NOTE]
> This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [copy activity tutorial](../quickstart-create-data-factory-rest-api.md).
In this article, you learn how to use REST API to create a data factory with a pipeline that copies data from an Azure blob storage to Azure SQL Database. If you are new to Azure Data Factory, read through the [Introduction to Azure Data Factory](data-factory-introduction.md) article before doing this tutorial.
In this tutorial, you create a pipeline with one activity in it: Copy Activity. The copy activity copies data from a supported data store to a supported sink data store. For a list of data stores supported as sources and sinks, see [supported data stores](data-factory-data-movement-activities.md#supported-data-stores-and-formats). The activity is powered by a globally available service that can copy data between various data stores in a secure, reliable, and scalable way. For more information about the Copy Activity, see [Data Movement Activities](data-factory-data-movement-activities.md).
A pipeline can have more than one activity. And, you can chain two activities (run one activity after another) by setting the output dataset of one activity as the input dataset of the other activity. For more information, see [multiple activities in a pipeline](data-factory-scheduling-and-execution.md#multiple-activities-in-a-pipeline).
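For example, the `activities` array of such a pipeline would look like the following schematic fragment (the dataset names are hypothetical, and the `typeProperties` required by real activities are omitted for brevity). `StagedDataset` is the output of the first activity and the input of the second:
```JSON
"activities": [
    {
        "name": "CopyActivity1",
        "type": "Copy",
        "inputs": [ { "name": "RawDataset" } ],
        "outputs": [ { "name": "StagedDataset" } ]
    },
    {
        "name": "CopyActivity2",
        "type": "Copy",
        "inputs": [ { "name": "StagedDataset" } ],
        "outputs": [ { "name": "FinalDataset" } ]
    }
]
```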
> [!NOTE]
> This article does not cover all the Data Factory REST API. See [Data Factory REST API Reference](/rest/api/datafactory/) for comprehensive documentation on Data Factory cmdlets.
>
> The data pipeline in this tutorial copies data from a source data store to a destination data store. For a tutorial on how to transform data using Azure Data Factory, see [Tutorial: Build a pipeline to transform data using Hadoop cluster](data-factory-build-your-first-pipeline.md).
## Prerequisites
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
* Go through [Tutorial Overview](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md) and complete the **prerequisite** steps.
* Install [Curl](https://curl.haxx.se/dlwiz/) on your machine. You use the Curl tool with REST commands to create a data factory.
* Follow instructions from [this article](../../active-directory/develop/howto-create-service-principal-portal.md) to:
1. Create a Web application named **ADFCopyTutorialApp** in Azure Active Directory.
2. Get **client ID** and **secret key**.
3. Get **tenant ID**.
4. Assign the **ADFCopyTutorialApp** application to the **Data Factory Contributor** role.
* Install [Azure PowerShell](/powershell/azure/).
* Launch **PowerShell** and do the following steps. Keep Azure PowerShell open until the end of this tutorial. If you close and reopen, you need to run the commands again.
1. Run the following command and enter the user name and password that you use to sign in to the Azure portal:
```PowerShell
Connect-AzAccount
```
2. Run the following command to view all the subscriptions for this account:
```PowerShell
Get-AzSubscription
```
    3. Run the following command to select the subscription that you want to work with. Replace **\<NameOfAzureSubscription\>** with the name of your Azure subscription.
```PowerShell
Get-AzSubscription -SubscriptionName <NameOfAzureSubscription> | Set-AzContext
```
4. Create an Azure resource group named **ADFTutorialResourceGroup** by running the following command in the PowerShell:
```PowerShell
New-AzResourceGroup -Name ADFTutorialResourceGroup -Location "West US"
```
If the resource group already exists, you specify whether to update it (Y) or keep it as (N).
Some of the steps in this tutorial assume that you use the resource group named ADFTutorialResourceGroup. If you use a different resource group, you need to use the name of your resource group in place of ADFTutorialResourceGroup in this tutorial.
## Create JSON definitions
Create following JSON files in the folder where curl.exe is located.
### datafactory.json
> [!IMPORTANT]
> Name must be globally unique, so you may want to prefix/suffix ADFCopyTutorialDF to make it a unique name.
>
>
```JSON
{
"name": "ADFCopyTutorialDF",
"location": "WestUS"
}
```
### azurestoragelinkedservice.json
> [!IMPORTANT]
> Replace **accountname** and **accountkey** with name and key of your Azure storage account. To learn how to get your storage access key, see [Manage storage account access keys](../../storage/common/storage-account-keys-manage.md).
```JSON
{
"name": "AzureStorageLinkedService",
"properties": {
"type": "AzureStorage",
"typeProperties": {
"connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
}
}
}
```
For details about JSON properties, see [Azure Storage linked service](data-factory-azure-blob-connector.md#azure-storage-linked-service).
### azuresqllinkedservice.json
> [!IMPORTANT]
> Replace **servername**, **databasename**, **username**, and **password** with name of your server, name of SQL database, user account, and password for the account.
>
>
```JSON
{
"name": "AzureSqlLinkedService",
"properties": {
"type": "AzureSqlDatabase",
"description": "",
"typeProperties": {
"connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;User ID=<username>;Password=<password>;Integrated Security=False;Encrypt=True;Connect Timeout=30"
}
}
}
```
For details about JSON properties, see [Azure SQL linked service](data-factory-azure-sql-connector.md#linked-service-properties).
### inputdataset.json
```JSON
{
"name": "AzureBlobInput",
"properties": {
"structure": [
{
"name": "FirstName",
"type": "String"
},
{
"name": "LastName",
"type": "String"
}
],
"type": "AzureBlob",
"linkedServiceName": "AzureStorageLinkedService",
"typeProperties": {
"folderPath": "adftutorial/",
"fileName": "emp.txt",
"format": {
"type": "TextFormat",
"columnDelimiter": ","
}
},
"external": true,
"availability": {
"frequency": "Hour",
"interval": 1
}
}
}
```
The following table provides descriptions for the JSON properties used in the snippet:
| Property | Description |
|:--- |:--- |
| type | The type property is set to **AzureBlob** because data resides in an Azure blob storage. |
| linkedServiceName | Refers to the **AzureStorageLinkedService** that you created earlier. |
| folderPath | Specifies the blob **container** and the **folder** that contains input blobs. In this tutorial, adftutorial is the blob container and folder is the root folder. |
| fileName | This property is optional. If you omit this property, all files from the folderPath are picked. In this tutorial, **emp.txt** is specified for the fileName, so only that file is picked up for processing. |
| format -> type |The input file is in the text format, so we use **TextFormat**. |
| columnDelimiter | The columns in the input file are delimited by **comma character (`,`)**. |
| frequency/interval | The frequency is set to **Hour** and interval is set to **1**, which means that the input slices are available **hourly**. In other words, the Data Factory service looks for input data every hour in the root folder of blob container (**adftutorial**) you specified. It looks for the data within the pipeline start and end times, not before or after these times. |
| external | This property is set to **true** if the data is not generated by this pipeline. The input data in this tutorial is in the emp.txt file, which is not generated by this pipeline, so we set this property to true. |
For more information about these JSON properties, see [Azure Blob connector article](data-factory-azure-blob-connector.md#dataset-properties).
### outputdataset.json
```JSON
{
"name": "AzureSqlOutput",
"properties": {
"structure": [
{
"name": "FirstName",
"type": "String"
},
{
"name": "LastName",
"type": "String"
}
],
"type": "AzureSqlTable",
"linkedServiceName": "AzureSqlLinkedService",
"typeProperties": {
"tableName": "emp"
},
"availability": {
"frequency": "Hour",
"interval": 1
}
}
}
```
The following table provides descriptions for the JSON properties used in the snippet:
| Property | Description |
|:--- |:--- |
| type | The type property is set to **AzureSqlTable** because data is copied to a table in Azure SQL Database. |
| linkedServiceName | Refers to the **AzureSqlLinkedService** that you created earlier. |
| tableName | Specified the **table** to which the data is copied. |
| frequency/interval | The frequency is set to **Hour** and interval is **1**, which means that the output slices are produced **hourly** between the pipeline start and end times, not before or after these times. |
There are three columns – **ID**, **FirstName**, and **LastName** – in the emp table in the database. ID is an identity column, so you need to specify only **FirstName** and **LastName** here.
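For reference, that table corresponds to a definition like the following sketch (the exact script is part of the prerequisites article):
```sql
CREATE TABLE dbo.emp
(
    ID INT IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50),
    LastName VARCHAR(50)
);
```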
For more information about these JSON properties, see [Azure SQL connector article](data-factory-azure-sql-connector.md#dataset-properties).
### pipeline.json
```JSON
{
"name": "ADFTutorialPipeline",
"properties": {
"description": "Copy data from a blob to Azure SQL table",
"activities": [
{
"name": "CopyFromBlobToSQL",
"description": "Push Regional Effectiveness Campaign data to Azure SQL Database",
"type": "Copy",
"inputs": [
{
"name": "AzureBlobInput"
}
],
"outputs": [
{
"name": "AzureSqlOutput"
}
],
"typeProperties": {
"source": {
"type": "BlobSource"
},
"sink": {
"type": "SqlSink",
"writeBatchSize": 10000,
"writeBatchTimeout": "60:00:00"
}
},
"Policy": {
"concurrency": 1,
"executionPriorityOrder": "NewestFirst",
"retry": 0,
"timeout": "01:00:00"
}
}
],
"start": "2017-05-11T00:00:00Z",
"end": "2017-05-12T00:00:00Z"
}
}
```
Note the following points:
- In the activities section, there is only one activity whose **type** is set to **Copy**. For more information about the copy activity, see [data movement activities](data-factory-data-movement-activities.md). In Data Factory solutions, you can also use [data transformation activities](data-factory-data-transformation-activities.md).
- Input for the activity is set to **AzureBlobInput** and output for the activity is set to **AzureSqlOutput**.
- In the **typeProperties** section, **BlobSource** is specified as the source type and **SqlSink** is specified as the sink type. For a complete list of data stores supported by the copy activity as sources and sinks, see [supported data stores](data-factory-data-movement-activities.md#supported-data-stores-and-formats). To learn how to use a specific supported data store as a source/sink, click the link in the table.
Replace the value of the **start** property with the current day and **end** value with the next day. You can specify only the date part and skip the time part of the date time. For example, "2017-02-03", which is equivalent to "2017-02-03T00:00:00Z"
Both start and end datetimes must be in [ISO format](https://en.wikipedia.org/wiki/ISO_8601). For example: 2016-10-14T16:32:41Z. The **end** time is optional, but we use it in this tutorial.
If you do not specify value for the **end** property, it is calculated as "**start + 48 hours**". To run the pipeline indefinitely, specify **9999-09-09** as the value for the **end** property.
In the preceding example, there are 24 data slices as each data slice is produced hourly.
For descriptions of JSON properties in a pipeline definition, see [create pipelines](data-factory-create-pipelines.md) article. For descriptions of JSON properties in a copy activity definition, see [data movement activities](data-factory-data-movement-activities.md). For descriptions of JSON properties supported by BlobSource, see [Azure Blob connector article](data-factory-azure-blob-connector.md). For descriptions of JSON properties supported by SqlSink, see [Azure SQL Database connector article](data-factory-azure-sql-connector.md).
## Set global variables
In Azure PowerShell, execute the following commands after replacing the values with your own:
> [!IMPORTANT]
> See [Prerequisites](#prerequisites) section for instructions on getting client ID, client secret, tenant ID, and subscription ID.
>
>
```JSON
$client_id = "<client ID of application in AAD>"
$client_secret = "<client key of application in AAD>"
$tenant = "<Azure tenant ID>";
$subscription_id="<Azure subscription ID>";
$rg = "ADFTutorialResourceGroup"
```
Run the following command after updating the name of the data factory you are using:
```
$adf = "ADFCopyTutorialDF"
```
## Authenticate with AAD
Run the following command to authenticate with Azure Active Directory (AAD):
```PowerShell
$cmd = { .\curl.exe -X POST https://login.microsoftonline.com/$tenant/oauth2/token -F grant_type=client_credentials -F resource=https://management.core.windows.net/ -F client_id=$client_id -F client_secret=$client_secret };
$responseToken = Invoke-Command -scriptblock $cmd;
$accessToken = (ConvertFrom-Json $responseToken).access_token;
(ConvertFrom-Json $responseToken)
```
## Create data factory
In this step, you create an Azure Data Factory named **ADFCopyTutorialDF**. A data factory can have one or more pipelines. A pipeline can have one or more activities in it. For example, a Copy Activity to copy data from a source to a destination data store. A HDInsight Hive activity to run a Hive script to transform input data to product output data. Run the following commands to create the data factory:
1. Assign the command to variable named **cmd**.
> [!IMPORTANT]
> Confirm that the name of the data factory you specify here (ADFCopyTutorialDF) matches the name specified in the **datafactory.json**.
```PowerShell
$cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@datafactory.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/ADFCopyTutorialDF0411?api-version=2015-10-01};
```
2. Run the command by using **Invoke-Command**.
```PowerShell
$results = Invoke-Command -scriptblock $cmd;
```
3. View the results. If the data factory has been successfully created, you see the JSON for the data factory in the **results**; otherwise, you see an error message.
```
Write-Host $results
```
Note the following points:
* The name of the Azure Data Factory must be globally unique. If you see the error in results: **Data factory name "ADFCopyTutorialDF" is not available**, do the following steps:
1. Change the name (for example, yournameADFCopyTutorialDF) in the **datafactory.json** file.
2. In the first command where the **$cmd** variable is assigned a value, replace ADFCopyTutorialDF with the new name and run the command.
3. Run the next two commands to invoke the REST API to create the data factory and print the results of the operation.
See [Data Factory - Naming Rules](data-factory-naming-rules.md) topic for naming rules for Data Factory artifacts.
* To create Data Factory instances, you need to be a contributor/administrator of the Azure subscription
* The name of the data factory may be registered as a DNS name in the future and hence become publicly visible.
* If you receive the error: "**This subscription is not registered to use namespace Microsoft.DataFactory**", do one of the following and try publishing again:
* In Azure PowerShell, run the following command to register the Data Factory provider:
```PowerShell
Register-AzResourceProvider -ProviderNamespace Microsoft.DataFactory
```
You can run the following command to confirm that the Data Factory provider is registered.
```PowerShell
Get-AzResourceProvider
```
* Login using the Azure subscription into the [Azure portal](https://portal.azure.com) and navigate to a Data Factory blade (or) create a data factory in the Azure portal. This action automatically registers the provider for you.
Before creating a pipeline, you need to create a few Data Factory entities first. You first create linked services to link source and destination data stores to your data store. Then, define input and output datasets to represent data in linked data stores. Finally, create the pipeline with an activity that uses these datasets.
## Create linked services
You create linked services in a data factory to link your data stores and compute services to the data factory. In this tutorial, you don't use any compute service such as Azure HDInsight or Azure Data Lake Analytics. You use two data stores of type Azure Storage (source) and Azure SQL Database (destination). Therefore, you create two linked services named AzureStorageLinkedService and AzureSqlLinkedService of types: AzureStorage and AzureSqlDatabase.
The AzureStorageLinkedService links your Azure storage account to the data factory. This storage account is the one in which you created a container and uploaded the data as part of [prerequisites](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md).
AzureSqlLinkedService links Azure SQL Database to the data factory. The data that is copied from the blob storage is stored in this database. You created the emp table in this database as part of [prerequisites](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md).
### Create Azure Storage linked service
In this step, you link your Azure storage account to your data factory. You specify the name and key of your Azure storage account in this section. See [Azure Storage linked service](data-factory-azure-blob-connector.md#azure-storage-linked-service) for details about JSON properties used to define an Azure Storage linked service.
1. Assign the command to variable named **cmd**.
```PowerShell
$cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@azurestoragelinkedservice.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/linkedservices/AzureStorageLinkedService?api-version=2015-10-01};
```
2. Run the command by using **Invoke-Command**.
```PowerShell
$results = Invoke-Command -scriptblock $cmd;
```
3. View the results. If the linked service has been successfully created, you see the JSON for the linked service in the **results**; otherwise, you see an error message.
```PowerShell
Write-Host $results
```
### Create Azure SQL linked service
In this step, you link Azure SQL Database to your data factory. You specify the logical SQL server name, database name, user name, and user password in this section. See [Azure SQL linked service](data-factory-azure-sql-connector.md#linked-service-properties) for details about JSON properties used to define an Azure SQL linked service.
1. Assign the command to variable named **cmd**.
```PowerShell
$cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@azuresqllinkedservice.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/linkedservices/AzureSqlLinkedService?api-version=2015-10-01};
```
2. Run the command by using **Invoke-Command**.
```PowerShell
$results = Invoke-Command -scriptblock $cmd;
```
3. View the results. If the linked service has been successfully created, you see the JSON for the linked service in the **results**; otherwise, you see an error message.
```PowerShell
Write-Host $results
```
## Create datasets
In the previous step, you created linked services to link your Azure Storage account and Azure SQL Database to your data factory. In this step, you define two datasets named AzureBlobInput and AzureSqlOutput that represent input and output data that is stored in the data stores referred by AzureStorageLinkedService and AzureSqlLinkedService respectively.
The Azure storage linked service specifies the connection string that Data Factory service uses at run time to connect to your Azure storage account. And, the input blob dataset (AzureBlobInput) specifies the container and the folder that contains the input data.
Similarly, the Azure SQL Database linked service specifies the connection string that Data Factory service uses at run time to connect to Azure SQL Database. And, the output SQL table dataset (OututDataset) specifies the table in the database to which the data from the blob storage is copied.
### Create input dataset
In this step, you create a dataset named AzureBlobInput that points to a blob file (emp.txt) in the root folder of a blob container (adftutorial) in the Azure Storage represented by the AzureStorageLinkedService linked service. If you don't specify a value for the fileName (or skip it), data from all blobs in the input folder are copied to the destination. In this tutorial, you specify a value for the fileName.
1. Assign the command to variable named **cmd**.
```PowerSHell
$cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@inputdataset.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datasets/AzureBlobInput?api-version=2015-10-01};
```
2. Run the command by using **Invoke-Command**.
```PowerShell
$results = Invoke-Command -scriptblock $cmd;
```
3. View the results. If the dataset has been successfully created, you see the JSON for the dataset in the **results**; otherwise, you see an error message.
```PowerShell
Write-Host $results
```
### Create output dataset
The Azure SQL Database linked service specifies the connection string that Data Factory service uses at run time to connect to Azure SQL Database. The output SQL table dataset (OututDataset) you create in this step specifies the table in the database to which the data from the blob storage is copied.
1. Assign the command to variable named **cmd**.
```PowerShell
$cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@outputdataset.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datasets/AzureSqlOutput?api-version=2015-10-01};
```
2. Run the command by using **Invoke-Command**.
```PowerShell
$results = Invoke-Command -scriptblock $cmd;
```
3. View the results. If the dataset has been successfully created, you see the JSON for the dataset in the **results**; otherwise, you see an error message.
```PowerShell
Write-Host $results
```
## Create pipeline
In this step, you create a pipeline with a **copy activity** that uses **AzureBlobInput** as an input and **AzureSqlOutput** as an output.
Currently, output dataset is what drives the schedule. In this tutorial, output dataset is configured to produce a slice once an hour. The pipeline has a start time and end time that are one day apart, which is 24 hours. Therefore, 24 slices of output dataset are produced by the pipeline.
1. Assign the command to variable named **cmd**.
```PowerShell
$cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@pipeline.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datapipelines/MyFirstPipeline?api-version=2015-10-01};
```
2. Run the command by using **Invoke-Command**.
```PowerShell
$results = Invoke-Command -scriptblock $cmd;
```
3. View the results. If the dataset has been successfully created, you see the JSON for the dataset in the **results**; otherwise, you see an error message.
```PowerShell
Write-Host $results
```
**Congratulations!** You have successfully created an Azure data factory, with a pipeline that copies data from Azure Blob Storage to Azure SQL Database.
## Monitor pipeline
In this step, you use Data Factory REST API to monitor slices being produced by the pipeline.
```PowerShell
$ds ="AzureSqlOutput"
```
> [!IMPORTANT]
> Make sure that the start and end times specified in the following command match the start and end times of the pipeline.
```PowerShell
$cmd = {.\curl.exe -X GET -H "Authorization: Bearer $accessToken" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datasets/$ds/slices?start=2017-05-11T00%3a00%3a00.0000000Z"&"end=2017-05-12T00%3a00%3a00.0000000Z"&"api-version=2015-10-01};
```
```PowerShell
$results2 = Invoke-Command -scriptblock $cmd;
```
```PowerShell
IF ((ConvertFrom-Json $results2).value -ne $NULL) {
ConvertFrom-Json $results2 | Select-Object -Expand value | Format-Table
} else {
(convertFrom-Json $results2).RemoteException
}
```
Run the Invoke-Command and the next one until you see a slice in **Ready** state or **Failed** state. When the slice is in Ready state, check the **emp** table in Azure SQL Database for the output data.
For each slice, two rows of data from the source file are copied to the emp table in Azure SQL Database. Therefore, you see 24 new records in the emp table when all the slices are successfully processed (in Ready state).
## Summary
In this tutorial, you used REST API to create an Azure data factory to copy data from an Azure blob to Azure SQL Database. Here are the high-level steps you performed in this tutorial:
1. Created an Azure **data factory**.
2. Created **linked services**:
1. An Azure Storage linked service to link your Azure Storage account that holds input data.
2. An Azure SQL linked service to link your database that holds the output data.
3. Created **datasets**, which describe input data and output data for pipelines.
4. Created a **pipeline** with a Copy Activity with BlobSource as source and SqlSink as sink.
## Next steps
In this tutorial, you used Azure blob storage as a source data store and Azure SQL Database as a destination data store in a copy operation. The following table provides a list of data stores supported as sources and destinations by the copy activity:
[!INCLUDE [data-factory-supported-data-stores](../../../includes/data-factory-supported-data-stores.md)]
To learn about how to copy data to/from a data store, click the link for the data store in the table.
| 54.351711 | 596 | 0.729232 | eng_Latn | 0.960765 |
ff677e139fa482f3b33748524ad9fae6fb4602e9 | 6,676 | md | Markdown | articles/event-hubs/event-hubs-resource-manager-namespace-event-hub.md | fuatrihtim/azure-docs.tr-tr | 6569c5eb54bdab7488b44498dc4dad397d32f1be | [
"CC-BY-4.0",
"MIT"
] | 16 | 2017-08-28T08:29:36.000Z | 2022-01-02T16:46:30.000Z | articles/event-hubs/event-hubs-resource-manager-namespace-event-hub.md | fuatrihtim/azure-docs.tr-tr | 6569c5eb54bdab7488b44498dc4dad397d32f1be | [
"CC-BY-4.0",
"MIT"
] | 470 | 2017-11-11T20:59:16.000Z | 2021-04-10T17:06:28.000Z | articles/event-hubs/event-hubs-resource-manager-namespace-event-hub.md | fuatrihtim/azure-docs.tr-tr | 6569c5eb54bdab7488b44498dc4dad397d32f1be | [
"CC-BY-4.0",
"MIT"
] | 25 | 2017-11-11T19:39:08.000Z | 2022-03-30T13:47:56.000Z | ---
title: "Hızlı başlangıç: Tüketici grubu ile bir olay hub 'ı oluşturma-Azure Event Hubs"
description: "Hızlı başlangıç: Azure Resource Manager şablonları kullanarak bir olay hub 'ı ve bir tüketici grubuyla Event Hubs ad alanı oluşturma"
ms.topic: quickstart
ms.custom: subject-armqs
ms.date: 06/23/2020
ms.openlocfilehash: e6da5fbe3c0e269f5ceb2c3627df27ccf0e3b30b
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/29/2021
ms.locfileid: "88933860"
---
# <a name="quickstart-create-an-event-hub-by-using-an-arm-template"></a>Hızlı başlangıç: ARM şablonu kullanarak bir olay hub 'ı oluşturma
Azure Event Hubs saniyede milyonlarca olay alıp işleme kapasitesine sahip olan bir Büyük Veri akış platformu ve olay alma hizmetidir. Event Hubs dağıtılan yazılımlar ve cihazlar tarafından oluşturulan olayları, verileri ve telemetrileri işleyebilir ve depolayabilir. Bir olay hub’ına gönderilen veriler, herhangi bir gerçek zamanlı analiz sağlayıcısı ve işlem grubu oluşturma/depolama bağdaştırıcıları kullanılarak dönüştürülüp depolanabilir. Olay Hub’larının ayrıntılı genel bakışı için bkz. [Olay Hub’larına genel bakış](event-hubs-about.md) ve [Olay Hub’ları özellikleri](event-hubs-features.md). Bu hızlı başlangıçta, bir [Azure Resource Manager şablonu (ARM şablonu)](../azure-resource-manager/management/overview.md)kullanarak bir olay hub 'ı oluşturursunuz. Bir ARM şablonunu, tek bir olay hub 'ı ile [Event Hubs](./event-hubs-about.md)türünde bir ad alanı oluşturmak üzere dağıtırsınız.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
Ortamınız önkoşulları karşılıyorsa ve ARM şablonlarını kullanma hakkında bilginiz varsa, **Azure’a dağıtma** düğmesini seçin. Şablon Azure portalda açılır.
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-eventhubs-create-namespace-and-eventhub%2Fazuredeploy.json)
## <a name="prerequisites"></a>Ön koşullar
Azure aboneliğiniz yoksa başlamadan önce [ücretsiz bir hesap oluşturun](https://azure.microsoft.com/free/).
## <a name="review-the-template"></a>Şablonu gözden geçirme
Bu hızlı başlangıçta kullanılan şablon [Azure Hızlı Başlangıç Şablonlarından](https://azure.microsoft.com/resources/templates/101-eventhubs-create-namespace-and-eventhub/) alınmıştır.
:::code language="json" source="~/quickstart-templates/101-eventhubs-create-namespace-and-eventhub/azuredeploy.json":::
Şablonda tanımlanan kaynaklar şunları içerir:
- [**Microsoft. EventHub/ad alanları**](/azure/templates/microsoft.eventhub/namespaces)
- [**Microsoft. EventHub/namespaces/eventhubs**](/azure/templates/microsoft.eventhub/namespaces/eventhubs)
Daha fazla şablon örneği bulmak için bkz. [Azure hızlı başlangıç şablonları](https://azure.microsoft.com/resources/templates/?term=eventhub&pageNumber=1&sort=Popular).
## <a name="deploy-the-template"></a>Şablonu dağıtma
Şablonu dağıtmak için:
1. Aşağıdaki kod bloğundan **deneyin** ' i seçin ve ardından Azure Cloud Shell oturum açmak için yönergeleri izleyin.
```azurepowershell-interactive
$projectName = Read-Host -Prompt "Enter a project name that is used for generating resource names"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
$resourceGroupName = "${projectName}rg"
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-eventhubs-create-namespace-and-eventhub/azuredeploy.json"
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri -projectName $projectName
Write-Host "Press [ENTER] to continue ..."
```
Bir olay hub 'ı oluşturmak birkaç dakika sürer.
1. PowerShell betiğini kopyalamak için **Kopyala** ' yı seçin.
1. Kabuk konsoluna sağ tıklayın ve ardından **Yapıştır**' ı seçin.
## <a name="validate-the-deployment"></a>Dağıtımı doğrulama
Dağıtımı doğrulamak için [Azure Portal](https://portal.azure.com)kaynak grubunu açabilir ya da aşağıdaki Azure PowerShell betiğini kullanabilirsiniz. Cloud Shell hala açıksa, ilk satırı (Read-Host) kopyalamanız/çalıştırmanız gerekmez.
```azurepowershell-interactive
$projectName = Read-Host -Prompt "Enter the same project name that you used in the last procedure"
$resourceGroupName = "${projectName}rg"
$namespaceName = "${projectName}ns"
Get-AzEventHub -ResourceGroupName $resourceGroupName -Namespace $namespaceName
Write-Host "Press [ENTER] to continue ..."
```
## <a name="clean-up-resources"></a>Kaynakları temizleme
Artık Azure kaynakları gerekli değilse, kaynak grubunu silerek dağıttığınız kaynakları temizleyin. Cloud Shell hala açıksa, ilk satırı (Read-Host) kopyalamanız/çalıştırmanız gerekmez.
```azurepowershell-interactive
$projectName = Read-Host -Prompt "Enter the same project name that you used in the last procedure"
$resourceGroupName = "${projectName}rg"
Remove-AzResourceGroup -ResourceGroupName $resourceGroupName
Write-Host "Press [ENTER] to continue ..."
```
## <a name="next-steps"></a>Sonraki adımlar
Bu makalede bir Event Hubs ad alanı ve ad alanında bir olay hub 'ı oluşturdunuz. Olay Hub 'ından olay alma (veya) olayları gönderme hakkında adım adım yönergeler için, bkz. **olayları gönderme ve alma** öğreticileri:
- [.NET Core](event-hubs-dotnet-standard-getstarted-send.md)
- [Java](event-hubs-java-get-started-send.md)
- [Python](event-hubs-python-get-started-send.md)
- [JavaScript](event-hubs-node-get-started-send.md)
- [Git](event-hubs-go-get-started-send.md)
- [C (yalnızca gönderme)](event-hubs-c-getstarted-send.md)
- [Apache Storm (yalnızca alma)](event-hubs-storm-getstarted-receive.md)
[3]: ./media/event-hubs-quickstart-powershell/sender1.png
[4]: ./media/event-hubs-quickstart-powershell/receiver1.png
[5]: ./media/event-hubs-quickstart-powershell/metrics.png
[Authoring Azure Resource Manager templates]: ../azure-resource-manager/templates/template-syntax.md
[Azure Quickstart Templates]: https://azure.microsoft.com/documentation/templates/?term=event+hubs
[Using Azure PowerShell with Azure Resource Manager]: ../azure-resource-manager/management/manage-resources-powershell.md
[Using the Azure CLI for Mac, Linux, and Windows with Azure Resource Management]: ../azure-resource-manager/management/manage-resources-cli.md
[Event hub and consumer group template]: https://github.com/Azure/azure-quickstart-templates/blob/master/201-event-hubs-create-event-hub-and-consumer-group/
| 59.079646 | 894 | 0.790743 | tur_Latn | 0.9811 |
ff67b7cb5417fe6f773aec856465c921e6e66b09 | 665 | md | Markdown | Postgres/help.md | leisiamedeiros/docker-composers | a644e9a1b21a7cb1728a70a922481387ba5b38ab | [
"MIT"
] | 4 | 2022-02-02T01:26:24.000Z | 2022-02-06T13:20:28.000Z | Postgres/help.md | leisiamedeiros/docker-composers | a644e9a1b21a7cb1728a70a922481387ba5b38ab | [
"MIT"
] | null | null | null | Postgres/help.md | leisiamedeiros/docker-composers | a644e9a1b21a7cb1728a70a922481387ba5b38ab | [
"MIT"
] | null | null | null | # What is Postgres and Adminer?
- [PostgreSQL](https://hub.docker.com/_/postgres), often simply "Postgres", is an object-relational database management
system (ORDBMS) with an emphasis on extensibility and standards-compliance.
- [Adminer](https://hub.docker.com/_/adminer) (formerly phpMinAdmin) is a full-featured database management tool.
# How can I start?
Just execute the command bellow and go to the adress on your browser `http://localhost:8080/`
```bash
$ docker-compose -f "Postgres/postgres-docker-compose.yml" up -d
```
# Done!
Go to http://localhost:8080/ on your browser and connect with `pgsql` service name and credentials on composer file. | 36.944444 | 120 | 0.754887 | eng_Latn | 0.97496 |
ff67f5b859a5873dc400527c3732a62dfbfea81b | 1,895 | md | Markdown | src/notes/2021/weeknotes-52.md | craigbutcher/craigbutcher.github.io | 0d70f283634bcc1d22a5a6a66ab377c9ff7d3bf5 | [
"MIT"
] | null | null | null | src/notes/2021/weeknotes-52.md | craigbutcher/craigbutcher.github.io | 0d70f283634bcc1d22a5a6a66ab377c9ff7d3bf5 | [
"MIT"
] | 11 | 2017-04-02T19:39:39.000Z | 2020-01-12T22:39:19.000Z | src/notes/2021/weeknotes-52.md | craigbutcher/craigbutcher.github.io | 0d70f283634bcc1d22a5a6a66ab377c9ff7d3bf5 | [
"MIT"
] | null | null | null | ---
layout: layouts/post.njk
title: Week Note Fifty Two - Two years of weeknotes
description: 2021 - Week Note Fifty Two
date: '2021-12-26'
tags:
- weeknotes
---
## Christmas
This month was ridiculous because of recovering on and off with back to back cold and cough, ended up with a chest infection with a raspy voice that Darth Vader might approve. Luckily, it had cleared up in time for Christmas Day for dinner with the family and enjoying the company of hover helicopters that mysteriously chopped off my plant leaves by my sons.
## The Witcher
Somehow, during my downtime in between blowing my nose and coughing, a good old fashion Netflix binge was required began and The Witcher was selected. Two seasons later, and it is a decent watch with catchy songs such as [Toss A Coin To Your Witcher](https://www.youtube.com/watch?v=U9OQAySv184). Afterwards, having noticed [GOG](https://gog.com) began their winter sale and I did ended up buying the game, The Witcher 3 - The Wild Hunt, to play on GeForce Now. 2022 is probably the year where I have lost time to this game?
## Books
After finishing Super Mario: How Nintendo Conquered America, there is this sense of control that Nintendo likes to impose. It would help a lot if the company reduce the prices on their years old games like The Legend of Zelda: Breath of the Wild. Otherwise, it feels that they are still milking the entire IPs including retro games too.
On an additional note, I have read an unique children book, The Tin Snail by Cameron McAllister. It was bought secondhand to support the local library and sat on the bookshelves during the first lockdown. I highly recommend it for anyone to read!
Yes, I do read children's books. It is refreshing to read after the [exhaustive list of books this year](/projects/books/).
## And finally...
What day is it again? See you on the flip side!
Stay safe, Happy 2022.
| 61.129032 | 524 | 0.773615 | eng_Latn | 0.999751 |
ff6830727df59258192f261a4b7ed96e1a5feaf4 | 429 | md | Markdown | content/series/uc/gundam-unicorn/personnages/nigel-garrett.md | Wivik/gundam-france | 65d84098eec431e7e27b6a6c0f1e6eadea1c2bc8 | [
"MIT"
] | null | null | null | content/series/uc/gundam-unicorn/personnages/nigel-garrett.md | Wivik/gundam-france | 65d84098eec431e7e27b6a6c0f1e6eadea1c2bc8 | [
"MIT"
] | null | null | null | content/series/uc/gundam-unicorn/personnages/nigel-garrett.md | Wivik/gundam-france | 65d84098eec431e7e27b6a6c0f1e6eadea1c2bc8 | [
"MIT"
] | null | null | null | ---
title: Nigel Garrett
---
Nigel Garrett
-------------
* Age : 27
* Rôle : Pilote de Mobile Suit
* Voix Japonaise : Yuuichi Nakamura
Pilote d'élite de Londo Bell qui porte le nom de code U007 pour Uniforme Sept, il fait partie de l'équipe d'as du Ra Cailum aux commandes des Jesta. A l'origine, il fut avec ses deux coéquipiers candidat pour le Projet UC, mais suite à son abandon ils finirent pilotes de pour Londo Bell.
| 28.6 | 288 | 0.724942 | fra_Latn | 0.984094 |
ff687b66fde5547fc6c3a22412d629411d231171 | 13,852 | md | Markdown | articles/iot-hub/iot-hub-operations-monitoring.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | [
"CC-BY-4.0",
"MIT"
] | 39 | 2017-08-28T07:46:06.000Z | 2022-01-26T12:48:02.000Z | articles/iot-hub/iot-hub-operations-monitoring.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | [
"CC-BY-4.0",
"MIT"
] | 562 | 2017-06-27T13:50:17.000Z | 2021-05-17T23:42:07.000Z | articles/iot-hub/iot-hub-operations-monitoring.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | [
"CC-BY-4.0",
"MIT"
] | 113 | 2017-07-11T19:54:32.000Z | 2022-01-26T21:20:25.000Z | ---
title: Monitoramento de operações do Hub IoT do Azure (preterido) | Microsoft Docs
description: Como usar o monitoramento das operações do Hub IoT do Azure para monitorar o status das operações no seu Hub IoT em tempo real.
author: robinsh
manager: philmea
ms.service: iot-hub
services: iot-hub
ms.topic: conceptual
ms.date: 03/11/2019
ms.author: robinsh
ms.custom: amqp, devx-track-csharp
ms.openlocfilehash: 045d5693c4388c6285bc6983ac2a385ceac9f6d0
ms.sourcegitcommit: 867cb1b7a1f3a1f0b427282c648d411d0ca4f81f
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/19/2021
ms.locfileid: "94408117"
---
# <a name="iot-hub-operations-monitoring-deprecated"></a>Monitoramento de operações do Hub IoT (preterido)
O monitoramento das operações do Hub IoT permite monitorar o status das operações no seu Hub IoT em tempo real. O Hub IoT controla eventos em várias categorias de operações. Você pode aceitar o envio de eventos de uma ou mais categorias para um ponto de extremidade do seu Hub IoT para processamento. É possível monitorar os dados em busca de erros ou configurar processamento mais complexo com base nos padrões de dados.
>[!NOTE]
>O **monitoramento de operações do Hub IoT foi preterido e removido do Hub IoT em 10 de março de 2019**. Para monitorar as operações e a integridade do Hub IoT, consulte [monitorar o Hub IOT](monitor-iot-hub.md). Para saber mais sobre a linha do tempo de substituição, confira [Monitorar suas soluções de IoT do Azure com o Azure Monitor e o Azure Resource Health](https://azure.microsoft.com/blog/monitor-your-azure-iot-solutions-with-azure-monitor-and-azure-resource-health).
O Hub IoT monitora seis categorias de eventos:
* Operações de identidade do dispositivo
* Telemetria de dispositivo
* Mensagens da nuvem para o dispositivo
* conexões
* Carregamentos de arquivos
* Roteamento de mensagem
> [!IMPORTANT]
> O monitoramento de operações do Hub IoT não garante a entrega confiável ou ordenada de eventos. Dependendo da infraestrutura subjacente do Hub IoT, alguns eventos podem ser perdidos ou entregues fora de ordem. Use o monitoramento de operações para gerar alertas com base nos sinais de erro, como tentativas de conexão com falha ou desconexões de alta frequência de dispositivos específicos. Você não deve confiar em eventos do monitoramento de operações para criar um repositório consistente para o estado do dispositivo, por exemplo, um repositório de controle do estado de um dispositivo conectado ou desconectado.
## <a name="how-to-enable-operations-monitoring"></a>Como habilitar o monitoramento de operações
1. Crie um Hub IoT. Encontre instruções sobre como criar um hub IoT no guia de [Introdução](quickstart-send-telemetry-dotnet.md).
2. Abra a folha do seu Hub IoT. Nela, clique em **Monitoramento de operações**.

3. Selecione as categorias de monitoramento que deseja monitorar e, em seguida, clique em **Salvar**. Os eventos estão disponíveis para leitura no ponto de extremidade compatível com o Hub de Eventos listado em **Configurações de monitoramento**. O ponto de extremidade do Hub IoT chama-se `messages/operationsmonitoringevents`.

> [!NOTE]
> A seleção do monitoramento **Detalhado** para a categoria **Conexões** faz com que o Hub IoT gere mensagens de diagnóstico adicionais. Para todas as outras categorias, a configuração **Detalhada** muda a quantidade de informações incluídas pelo Hub IoT em cada mensagem de erro.
## <a name="event-categories-and-how-to-use-them"></a>Categorias de evento e como usá-las
Cada categoria de monitoramento de operações rastreia um tipo diferente de interação com o Hub IoT e cada categoria de monitoramento tem um esquema que define como os eventos nessa categoria são estruturados.
### <a name="device-identity-operations"></a>Operações de identidade do dispositivo
A categoria de operações de identidade do dispositivo rastreia erros que ocorrem quando você tenta criar, atualizar ou excluir uma entrada no registro de identidade do seu Hub IoT. O rastreamento dessa categoria é útil em cenários de provisionamento.
```json
{
"time": "UTC timestamp",
"operationName": "create",
"category": "DeviceIdentityOperations",
"level": "Error",
"statusCode": 4XX,
"statusDescription": "MessageDescription",
"deviceId": "device-ID",
"durationMs": 1234,
"userAgent": "userAgent",
"sharedAccessPolicy": "accessPolicy"
}
```
### <a name="device-telemetry"></a>Telemetria de dispositivo
A categoria de telemetria de dispositivo rastreia erros que ocorrem no Hub IoT e estão relacionados ao pipeline de telemetria. Essa categoria inclui erros que ocorrem no envio de eventos de telemetria (como limitação) e no recebimento de eventos de telemetria (como leitor não autorizado). Essa categoria não pode capturar erros causados pelo código em execução no próprio dispositivo.
```json
{
"messageSizeInBytes": 1234,
"batching": 0,
"protocol": "Amqp",
"authType": "{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
"time": "UTC timestamp",
"operationName": "ingress",
"category": "DeviceTelemetry",
"level": "Error",
"statusCode": 4XX,
"statusType": 4XX001,
"statusDescription": "MessageDescription",
"deviceId": "device-ID",
"EventProcessedUtcTime": "UTC timestamp",
"PartitionId": 1,
"EventEnqueuedUtcTime": "UTC timestamp"
}
```
### <a name="cloud-to-device-commands"></a>Comandos da nuvem para o dispositivo
A categoria de comandos da nuvem para o dispositivo rastreia erros que ocorrem no Hub IoT e são relacionados ao pipeline de mensagem da nuvem para o dispositivo. Essa categoria inclui erros que ocorrem no envio (tal como remetente não autorizado) e recebimento (como contagem de entrega excedida) de mensagens da nuvem para o dispositivo e no recebimento de comentários de mensagens da nuvem para o dispositivo (como comentário expirado). Essa categoria não capturará erros de um dispositivo que manipula incorretamente uma mensagem da nuvem para o dispositivo se tal mensagem tiver sido entregue com êxito.
```json
{
"messageSizeInBytes": 1234,
"authType": "{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
"deliveryAcknowledgement": 0,
"protocol": "Amqp",
"time": " UTC timestamp",
"operationName": "ingress",
"category": "C2DCommands",
"level": "Error",
"statusCode": 4XX,
"statusType": 4XX001,
"statusDescription": "MessageDescription",
"deviceId": "device-ID",
"EventProcessedUtcTime": "UTC timestamp",
"PartitionId": 1,
"EventEnqueuedUtcTime": "UTC timestamp"
}
```
### <a name="connections"></a>conexões
A categoria de conexões rastreia erros que ocorrem quando os dispositivos se conectam a um Hub IoT ou se desconectam dele. Rastrear essa categoria é útil para identificar tentativas de conexão não autorizada e para saber quando dispositivos em áreas de conectividade ruim perdem conexão.
```json
{
"durationMs": 1234,
"authType": "{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
"protocol": "Amqp",
"time": " UTC timestamp",
"operationName": "deviceConnect",
"category": "Connections",
"level": "Error",
"statusCode": 4XX,
"statusType": 4XX001,
"statusDescription": "MessageDescription",
"deviceId": "device-ID"
}
```
### <a name="file-uploads"></a>Carregamentos de arquivos
A categoria de carregamento de arquivo rastreia erros que ocorrem no Hub IoT e estão relacionados à funcionalidade de carregamento de arquivo. Essa categoria inclui:
* Erros que ocorrem com o URI de SAS, como quando ele expirar antes que um dispositivo notifique o hub de um carregamento concluído.
* Carregamentos com falha relatados pelo dispositivo.
* Erros que ocorrem quando um arquivo não for encontrado no armazenamento durante a criação de mensagem de notificação do Hub IoT.
Essa categoria não pode capturar erros que ocorrem diretamente enquanto o dispositivo está carregando um arquivo no armazenamento.
```json
{
"authType": "{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
"protocol": "HTTP",
"time": " UTC timestamp",
"operationName": "ingress",
"category": "fileUpload",
"level": "Error",
"statusCode": 4XX,
"statusType": 4XX001,
"statusDescription": "MessageDescription",
"deviceId": "device-ID",
"blobUri": "http//bloburi.com",
"durationMs": 1234
}
```
### <a name="message-routing"></a>Roteamento de mensagem
A categoria de roteamento de mensagem acompanha os erros que ocorrem durante a avaliação da rota da mensagem e da integridade do ponto de extremidade, conforme visto pelo Hub IoT. Essa categoria inclui eventos, por exemplo, quando uma regra é avaliada como "não definida", quando o Hub IoT marca um ponto de extremidade como inativo e todos os outros erros são recebidos de um ponto de extremidade. Essa categoria não inclui erros específicos sobre as próprias mensagens (como erros de limitação de dispositivo), que são relatados na categoria "telemetria de dispositivo".
```json
{
"messageSizeInBytes": 1234,
"time": "UTC timestamp",
"operationName": "ingress",
"category": "routes",
"level": "Error",
"deviceId": "device-ID",
"messageId": "ID of message",
"routeName": "myroute",
"endpointName": "myendpoint",
"details": "ExternalEndpointDisabled"
}
```
## <a name="connect-to-the-monitoring-endpoint"></a>Conectar-se ao ponto de extremidade de monitoramento
O ponto de extremidade de monitoramento em seu Hub IoT é um ponto de extremidade compatível com o Hub de Eventos. Você pode usar qualquer mecanismo que funciona com os Hubs de Eventos para ler mensagens de monitoramento desse ponto de extremidade. A amostra a seguir cria um leitor básico que não é adequado para uma implantação com alta taxa de transferência. Para obter mais informações sobre como processar as mensagens dos Hubs de Eventos, confira o tutorial [Introdução aos Hubs de Eventos](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) .
Para se conectar ao ponto de extremidade de monitoramento, você precisa de uma cadeia de conexão e do nome do ponto de extremidade. As etapas a seguir mostram como localizar os valores necessários no portal:
1. No portal, navegue até a folha de recursos do Hub IoT.
2. Escolha **Monitoramento de operações** e anote os valores do **Nome compatível com o Hub de Eventos** e do **Ponto de extremidade compatível com o Hub de Eventos**:

3. Escolha **Políticas de acesso compartilhado** e, em seguida, escolha **serviço**. Anote o valor da **Chave primária**:

O exemplo de código C# a seguir provém de um aplicativo de console C# da **Área de Trabalho Clássica do Windows** do Visual Studio. O projeto tem o pacote do NuGet **WindowsAzure.ServiceBus** instalado.
* Substitua o espaço reservado da cadeia de conexão por uma cadeia de conexão que usa os valores de **ponto de extremidade compatível com o Hub de Eventos** e **Chave primária** do serviço anotados anteriormente, conforme mostrado no exemplo a seguir:
```csharp
"Endpoint={your Event Hub-compatible endpoint};SharedAccessKeyName=service;SharedAccessKey={your service primary key value}"
```
* Substitua o espaço reservado nome de ponto de extremidade de monitoramento pelo valor de **Nome compatível com o Hub de Eventos** que você anotou anteriormente.
```csharp
class Program
{
static string connectionString = "{your monitoring endpoint connection string}";
static string monitoringEndpointName = "{your monitoring endpoint name}";
static EventHubClient eventHubClient;
static void Main(string[] args)
{
Console.WriteLine("Monitoring. Press Enter key to exit.\n");
eventHubClient = EventHubClient.CreateFromConnectionString(connectionString, monitoringEndpointName);
var d2cPartitions = eventHubClient.GetRuntimeInformation().PartitionIds;
CancellationTokenSource cts = new CancellationTokenSource();
var tasks = new List<Task>();
foreach (string partition in d2cPartitions)
{
tasks.Add(ReceiveMessagesFromDeviceAsync(partition, cts.Token));
}
Console.ReadLine();
Console.WriteLine("Exiting...");
cts.Cancel();
Task.WaitAll(tasks.ToArray());
}
private static async Task ReceiveMessagesFromDeviceAsync(string partition, CancellationToken ct)
{
var eventHubReceiver = eventHubClient.GetDefaultConsumerGroup().CreateReceiver(partition, DateTime.UtcNow);
while (true)
{
if (ct.IsCancellationRequested)
{
await eventHubReceiver.CloseAsync();
break;
}
EventData eventData = await eventHubReceiver.ReceiveAsync(new TimeSpan(0,0,10));
if (eventData != null)
{
string data = Encoding.UTF8.GetString(eventData.GetBytes());
Console.WriteLine("Message received. Partition: {0} Data: '{1}'", partition, data);
}
}
}
}
```
## <a name="next-steps"></a>Próximas etapas
Para explorar ainda mais usando Azure Monitor para monitorar o Hub IoT, consulte:
* [Monitorar o Hub IoT](monitor-iot-hub.md)
* [Migrar do monitoramento de operações do Hub IoT para Azure Monitor](iot-hub-migrate-to-diagnostics-settings.md) | 50.554745 | 619 | 0.737366 | por_Latn | 0.995922 |
ff68d6bbd990c843905c8e4d9a8a3432ccc213cb | 3,685 | markdown | Markdown | README.markdown | nsmryan/parcel | b30c58887cdd8b983a0d7ba3510e2d86d155d094 | [
"Apache-2.0",
"MIT"
] | 7 | 2019-08-26T07:17:30.000Z | 2021-07-13T16:19:00.000Z | README.markdown | nsmryan/hew | b30c58887cdd8b983a0d7ba3510e2d86d155d094 | [
"Apache-2.0",
"MIT"
] | 2 | 2019-05-06T12:43:07.000Z | 2019-05-06T12:44:12.000Z | README.markdown | nsmryan/hew | b30c58887cdd8b983a0d7ba3510e2d86d155d094 | [
"Apache-2.0",
"MIT"
] | null | null | null | Hew
======
Hew is a command line tool for converting binary data to and from
hexadecimal. This is a task I occasionally need to perform, and I wanted
a simple tool which covered the common cases so I don't have to write this
code again and again.
The hexadecimal data given will be read regardless of whitespace
or separator characters to attempt to support the widest range of use cases.
The hexadecimal output is configurable to allow a separator, an optional '0x' prefix,
hex words split into groups, and multiple rows of a given number of bytes of data.
# Usage
Hew has two distinct modes- bin and hex- and must be provided a mode on the
command line. In bin mode, a binary file is created, and in hex mode a text
file containing hexadecimal numbers is created.
```shell
hew 0.2
Noah Ryan
Binary to Hex, Hex to Binary Converter
USAGE:
hew [FLAGS] [OPTIONS] --input <FILE> --output <OUTFILE>
FLAGS:
-l, --lowercase Print hex in lowercase (default is UPPERCASE)
-p, --prefix Print hex with the '0x' prefix before each word, if printing with separated words
-h, --help Prints help information
-V, --version Prints version information
OPTIONS:
-i, --input <FILE> Input file to convert
-m, --mode <MODE> The target format, either 'hex' or 'bin
-o, --output <OUTFILE> Output file
-r, --row-width <ROWWIDTH> Row length in decoded bytes when decoding binary into hex
-s, --sep <SEPARATOR> String separator between words [default: ]
-w, --word-width <WORDWIDTH> Number of bytes to decode between separators on a line. Defaults to no separators.
```
## Encoding
Given the following file of hexadecimal numbers in a file called numbers.hex:
```text
0123456789ABCDEF
```
run the following command to encode them to binary:
```shell
hew -i numbers.hex -o numbers.bin -m bin
```
To get the file (as viewed with xxd):
```text
00000000: 0123 4567 89ab cdef .#Eg....
```
Encoding has no flags to control the process- hex numbers are taken from a file,
skipping unrelated characters like whitespaces. One thing to be careful of
is that hew will attempt to encode any characters that look like hex,
so the word 'Become" will be encoded as 0xBECE (the non-hex characters are ignored).
Its better to have only whitespace, punctuation, and '0x' characters in the file along
with your hex data.
Another example of a file that hew will accept is:
```text
0x0123 0x4567
0x89AB
0xCDEF
```
Where hew will notice the '0x' characters together and ignore them.
## Decoding
When decoding binary data into hex digits, hew provides a number of options.
Using the example binary file (as viewed by xxd):
```text
00000000: 0123 4567 89ab cdef .#Eg....
```
These options control whether the result is a single sequence of characters:
```shell
hew -i numbers.bin -o numbers.txt -m hex
```
resulting in:
```text
0123456789ABCDEF
```
This can be modified to, say, print 2 bytes as a word, prefixed by 0x,
with 4 bytes per line.
```shell
hew -i numbers.bin -o numbers.txt -m hex -p -w 2 -r 4
```
resulting in:
```text
0X0123 0X4567
0X89AB 0XCDEF
```
Another example would be each byte separated by a comma, no prefix, all on one line:
```shell
hew -i numbers.bin -o numbers.txt -m hex -w 1 -s ", "
```
```text
01, 23, 45, 67, 89, AB, CD, EF
```
# Installation
Hew can be installed through 'cargo' with:
```shell
cargo install hew
```
or it can be installed by retrieving binaries from https://github/nsmryan/hew/releases.
# License
Hew is licensed under either the MIT License or the APACHE 2.0 License,
whichever you prefer.
| 27.916667 | 118 | 0.70692 | eng_Latn | 0.996221 |
ff68f2ac5806075f7bc3af90dcce97b1be7b3035 | 7,011 | md | Markdown | _posts/2015-3-25-Github.md | joseAi/joseAi.github.io | 2c2a5d859cb7e17887bf55386e5aad16a35311e1 | [
"MIT"
] | null | null | null | _posts/2015-3-25-Github.md | joseAi/joseAi.github.io | 2c2a5d859cb7e17887bf55386e5aad16a35311e1 | [
"MIT"
] | null | null | null | _posts/2015-3-25-Github.md | joseAi/joseAi.github.io | 2c2a5d859cb7e17887bf55386e5aad16a35311e1 | [
"MIT"
] | null | null | null | ---
layout: post
---
title: Rising sea levels threaten South Florida
MIAMI BEACH, Fla.
On a recent afternoon, Scott McKenzie watched torrential rains and a murky tide swallow the street outside his dog-grooming salon. Within minutes, much of this stretch of chic South Beach was flooded ankle-deep in a fetid mix of rain and sea.
“Welcome to the new Venice,” McKenzie joked as salt water surged from the sewers.
There are few places in the nation more vulnerable to rising sea levels than low-lying South Florida, a tourist and retirement mecca built on drained swampland.
Yet as other coastal states and the Obama administration take aggressive measures to battle the effects of global warming, Florida’s top Republican politicians are challenging the science and balking at government fixes.
Among the chief skeptics are U.S. Sen. Marco Rubio and former Gov. Jeb Bush, both possible presidential candidates in 2016. Gov. Rick Scott, who is running for re-election, has worked with the Republican-controlled Legislature to dismantle Florida’s fledgling climate change initiatives. They were put into place by his predecessor and current opponent, Democrat Charlie Crist.
“I’m not a scientist,” Scott said, after a federal report pinpointed Florida – and Miami in particular – as among the country’s most at-risk areas.
He and other Republicans warn against what they see as alarmist policies that could derail the country’s tenuous economic recovery.
Their positions could affect their political fortunes.
Democrats plan to place climate change, and the GOP’s skepticism, front and center in a state where the issue is no longer an abstraction.
Their hope is to win over independents and siphon some Republicans, who are deeply divided over global warming. Tom Steyer, a billionaire environmental activist, has pledged to spend $100 million this year to influence seven critical contests nationwide, including the Florida governor’s race.
The battle in the country’s largest swing state offers a preview of what could be a pivotal fight in the next presidential election.
Crist is running for his old job as a Democrat, criticizing Scott and Florida Republicans for reversing his efforts to curb global warming.
“They don’t believe in science. That’s ridiculous,” Crist said at a recent campaign rally in Miami. “This is ground zero for climate change in America.”
Nationally, the issue could prove tricky for Democrats.
Polls show a bipartisan majority of Americans favor measures to reduce planet-warming greenhouse gases, such as the new federal rule to limit carbon emissions from power plants. But they routinely rank climate change far behind the economy, the centerpiece of Scott’s campaign, when prioritizing issues.
In Miami Beach, which floods even on sunny days, the concern is palpable. On a recent afternoon, McKenzie pulled out his iPad and flipped through photos from a 2009 storm. In one, two women kayak through knee-high water in the center of town.
“This is not a future problem. It’s a current problem,” said Leonard Berry, director of the Florida Center for Environmental Studies at Florida Atlantic University and a contributing author of the National Climate Assessment, which found that sea levels have risen about 8 inches in the past century.
Miami Beach is expected to spend $400 million on an elaborate pumping system to cope with routine flooding. To the north, Fort Lauderdale has shelled out millions to restore beaches and a section of coastal highway after Hurricane Sandy and other storms breached the city’s concrete sea wall. Hallandale Beach, which lies on the Atlantic Coast between the two cities, has abandoned six of its eight drinking water wells because of encroaching seawater.
By one regional assessment, the waters off South Florida could rise another 2 feet by 2060, a scenario that would overwhelm the region’s aging drainage system and taint its sources of drinking water.
“It’s getting to the point where some properties being bought today will probably not be able to be sold at the end of a 30-year mortgage,” said Harold Wanless, chairman of the geological sciences department at University of Miami. “You would think responsible leaders and responsible governments would take that as a wake-up call.”
Florida lacks a statewide approach to the effects of climate change, although just a few years ago, it was at the forefront on the issue.
In 2007, Crist, then a Republican, declared global warming “one of the most important issues that we will face this century,” signed executive orders to tighten tailpipe-emission standards for cars and opposed coal-fired power plants.
Bush, his predecessor, had pushed the state during his administration to diversify its energy mix and prioritize conservation.
Even Rubio, who was then Florida House speaker and a vocal critic of Crist’s climate plans, supported incentives for renewable energy. With little opposition, the GOP-led Legislature passed a bill that laid the groundwork for a California-style cap-and-trade system to cut carbon emissions and created a special commission to study climate change.
But the efforts sputtered as the economy collapsed and Crist and Rubio faced off in a divisive 2010 Republican primary for U.S. Senate.
Although Rubio had voted for Crist’s landmark environmental measure, he soon hammered the governor for what he called a “cap-and-trade scheme.” Seeking support from the growing tea party movement, he distanced himself from the vote.
Rubio also began to voice doubts about whether climate change is man-made, a doubt he shares with Bush. Both have stuck to that position.
Amid meetings with conservative activists and Republican leaders in New Hampshire last month, Rubio said: “I do not believe that human activity is causing these dramatic changes to our climate the way these scientists are portraying it.” Proposals to cut carbon emissions, he said, would do little to change current conditions but “destroy our economy.” Rubio later said he supports mitigation measures to protect coastal property from natural disasters.
Scott and Florida Republicans share his current views.
Denouncing “job-killing legislation,” they repealed Crist’s climate law, disbanded the state’s climate commission and eliminated a mandate requiring the state to use ethanol-blended gasoline. Asked about climate change recently, Scott demurred, saying the state has spent about $130 million on coastal flooding in his first term, as well as millions on environmental restoration.
Meanwhile, Miami Beach is bracing for another season of punishing tides.
“We’re suffering while everyone is arguing man-made or natural,” said Christine Florez, president of the West Avenue Corridor Neighborhood Association. “We should be working together to find solutions so people don’t feel like they’ve been left on a log drifting out to sea.”
—
Associated Press writers Gary Fineout in Tallahassee, Florida, and Philip Elliott in Washington contributed to this report.
| 93.48 | 454 | 0.809157 | eng_Latn | 0.999762 |
ff68fc2b175a3d6dc1bdf909a6f6d4a7b105b5c4 | 10,993 | md | Markdown | Instructions/Labs/0703-Work Folders.md | MOC-Labs-Review/MD-100T00-Windows10 | 9e318d0ddc0cea0e9d7d4277c3500f5bb39476b6 | [
"MIT"
] | 70 | 2019-12-03T15:43:07.000Z | 2022-03-11T12:55:22.000Z | Instructions/Labs/0703-Work Folders.md | MOC-Labs-Review/MD-100T00-Windows10 | 9e318d0ddc0cea0e9d7d4277c3500f5bb39476b6 | [
"MIT"
] | 46 | 2019-11-09T18:42:22.000Z | 2022-02-11T21:14:55.000Z | Instructions/Labs/0703-Work Folders.md | MOC-Labs-Review/MD-100T00-Windows10 | 9e318d0ddc0cea0e9d7d4277c3500f5bb39476b6 | [
"MIT"
] | 81 | 2019-11-06T10:04:57.000Z | 2022-02-16T19:50:29.000Z | # Practice Lab: Work Folders
## Summary
In this lab you will learn how to configure Work Folders as a method of synchronizing files to provide access from multiple devices.
### Scenario
Members of the Marketing group often use multiple devices for their work. To help manage file access you decide to implement Work Folders. This allows for files to be stored in a central location and synchronized to each device automatically. To implement this solution, first you will install and configure the Work Folders server role on SEA-SVR1 and store the content in a shared folder named C:\\syncshare1. To enable the Work Folders for all marketing users, you configure a Group Policy Object to point to <https://SEA-SVR1.Contoso.com>. You have asked Bruce Keever to test the solution on a domain-joined device named SEA-CL1 and a stand-alone device named SEA-WS3. Bruce will validate synchronization and identify how synchronization conflicts are handled.
### Task 1: Install and Configure Work Folders
1. Sign in to **SEA-SVR1** as **Contoso\\Administrator** with the password **Pa55w.rd**.
2. Right-click **Start**, and then select **Windows PowerShell (Admin)**.
3. In Windows PowerShell, type the following cmdlet, and then press **Enter**:
```
Install-WindowsFeature FS-SyncShareService
```
_**Note**: After the feature installs, you might see a warning display, because Windows automatic updating is not enabled. For the purposes of this lab, ignore the warning._
4. In Windows PowerShell, type the following cmdlet, and then press **Enter**:
```
Start-Service SyncShareSvc
```
5. Close the **Windows PowerShell** window.
6. Select **Start**, and then select **Server Manager**.
7. In Server Manager, in the navigation pane, select **File and Storage Services**, and then select **Work Folders**.
8. In the **WORK FOLDERS** section, select **TASKS**, and then select **New Sync Share**.
9. In the **New Sync Share Wizard**, on the **Before you begin** page, select **Next**.
10. On the **Select the server and path** page, in the **Enter a local path** text box, type **C:\\syncshare1**, select **Next**.
_**Important:** If **SEA-SVR1** is not listed in the **Server** section, select **Cancel**. In Server Manager, select **Refresh**, and then repeat this task, beginning with step 6 and completing the remaining steps. This may take a few minutes for SEA-SVR1 to show._
11. In the New Sync Share Wizard dialog select **OK**.
12. On the **Specify the structure for user folders** page, verify that **User alias** is selected, and then select **Next**.
13. On the **Enter the sync share name** page, select **Next** to accept the default sync share name.
14. On the **Grant sync access to groups** page, select **Add**, and in the **Enter the object name to select** text box, type **Marketing**. select **OK**, and then select **Next**.
15. On the **Specify security policies for PCs** page, verify the two available options. Clear the **Automatically lock screen, and require a password** check box, and then select **Next**.
16. On the **Confirm selections** page, select **Create**.
17. On the **View Results** page, select **Close**.
18. In Server Manager, in the **WORK FOLDERS** section, verify that **syncshare1** is listed, and that in the **USERS** section, the user **Bruce Keever** is listed.
19. On **SEA-SVR1**, in Server Manager, select **Tools**, and then select **Internet Information Services (IIS) Manager**.
20. In the Microsoft Internet Information Services (IIS) Manager, in the navigation pane, expand **SEA-SVR1 (Contoso\\Administrator)**. Expand **Sites**, right-click **Default Web Site**, and then select **Edit Bindings**.
21. In the **Site Bindings** dialog box, select **Add**.
22. In the **Add Site Binding** dialog box, in the **Type** box, select **https**. In the **SSL certificate** box, select **Work Folders Cert**, select **OK**, and then select **Close**.
23. Close the IIS Manager.
### Task 2: Create a GPO to deploy the Work Folders
1. On **SEA-SVR1**, in Server Manager, select **Tools,** and then select **Group Policy Management**.
2. In the Group Policy Management console, in the navigation pane, expand **Forest: Contoso.com**, expand **Domains**, expand **Contoso.com**, and then select the **Marketing** organizational unit (OU).
3. Right-click **Marketing**, and then select **Create a GPO in this domain, and Link it here**. In the **Name** text box, type **Deploy Work Folders**, and then select **OK**.
4. Right-click **Deploy Work Folders**, and then select **Edit**.
5. In the Group Policy Management Editor, in the navigation pane expand **User Configuration**, expand **Policies**, expand **Administrative Templates**, expand **Windows Components**, and then select **Work Folders**.
6. In the details pane, right-click **Specify Work Folder settings**, and then select **Edit**.
7. In the **Specify Work Folder settings** dialog box, select **Enabled**. In the **Work Folders URL** text box, type <https://SEA-SVR1.Contoso.com>, select the **Force automatic setup** check box, and then select **OK**.
8. Close the Group Policy Management Editor and the Group Policy Management console.
9. Close Server Manager.
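   _**Note**: Task 2 can also be scripted. The following is a sketch, assuming the **GroupPolicy** PowerShell module; the registry key and value names are taken from the Work Folders ADMX template and should be treated as assumptions to verify before use:_
   ```
   # Hypothetical scripted equivalent of Task 2 (requires the GroupPolicy module).
   New-GPO -Name "Deploy Work Folders" | New-GPLink -Target "OU=Marketing,DC=Contoso,DC=com"
   # The "Specify Work Folder settings" policy writes under this user-policy key (assumed from WorkFolders.admx).
   Set-GPRegistryValue -Name "Deploy Work Folders" -Key "HKCU\Software\Policies\Microsoft\Windows\WorkFolders" -ValueName "SyncUrl" -Type String -Value "https://SEA-SVR1.Contoso.com"
   Set-GPRegistryValue -Name "Deploy Work Folders" -Key "HKCU\Software\Policies\Microsoft\Windows\WorkFolders" -ValueName "AutoProvision" -Type DWord -Value 1
   ```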
### Task 3: Test the work folders
1. Switch to **SEA-CL1**.
2. Sign in to **SEA-CL1** as **Contoso\\Bruce** with the password **Pa55w.rd**.
3. Select the **File Explorer** icon on the taskbar.
4. In the navigation pane, select **Work Folders**.
5. In the details pane right-click the empty space, select **New**, select **Text Document**, and then name the file **On SEA-CL1**.
6. Select **Start**, type **Work folders** and select **Manage Work Folders**.
7. On the Manage Work Folders page, check the option **Sync files over metered connections**.
_**Note**: This is only needed due to the hosted lab environment configuration._
8. Close the **Manage Work Folders** window.
9. Switch to **SEA-WS3**.
10. Sign in to **SEA-WS3** as **SEA-WS3\\Admin** with the password **Pa55w.rd**.
11. On **SEA-WS3** select **Start**, type **\\\\SEA-DC1\\certenroll**, and then press **Enter**.
12. In the **Enter Network credentials** dialog box, enter the user name as **Contoso\\Administrator** and the password as **Pa55w.rd**. Select **OK**.
13. In the **certenroll** window, double-click **SEA-DC1.Contoso.com_ContosoCA.crt.**
14. On the **Certificate** dialog box, select **Install Certificate**.
15. On the **Certificate Import Wizard**, select **Local Machine** and select **Next**.
16. On the **User Account Control** dialog box, select **Yes**.
17. On the **Certificate Store** page, select **Place all certificates in the following store**, and then select **Browse**.
18. On the **Select Certificate Store** page, select **Trusted Root Certification Authorities** and select **OK**.
19. On the **Certificate Store** page, select **Next**.
20. On the **Certificate Import Wizard** page, select **Finish**.
21. In the **Certificate Import Wizard** dialog box, select **OK**, and then in the **Certificate** window, select **OK**. (For a scripted alternative to steps 11-21, see the note after this task's steps.)
22. Restart **SEA-WS3**.
23. Sign in to **SEA-WS3** as **SEA-WS3\\Admin** with the password **Pa55w.rd**.
24. On **SEA-WS3**, select **Start**, type **Control Panel**, and then press Enter.
25. In Control Panel, select **System and Security,** and then select **Work Folders**.
26. On the **Manage Work Folders** page, select **Set up Work Folders**.
27. On the **Enter your work email address** page, select **Enter a Work Folders URL instead**.
28. On the **Enter a Work Folders URL** page, in the **Work Folders URL** text box, type <https://SEA-SVR1.Contoso.com>, and then select **Next**.
29. In the **Windows Security** dialog box, in the **User name** text box, type **Contoso\\Bruce**, and in the **Password** text box, type **Pa55w.rd**, and then select **OK**.
30. On the **Introducing Work Folders** page, review the local Work Folders location, and then select **Next**.
31. On the **Security policies** page, select the **I accept these policies on my PC** check box, and then select **Set up Work Folders**.
32. On the **Work Folders has started syncing with this PC** page, select **Close**.
33. Switch to the Manage Work Folders window.
34. On the Manage Work Folders window, check the option **Sync files over metered connections**.
_**Note**: This is only needed due to the hosted lab environment configuration._
35. Switch to the Work Folders File Explorer window.
36. Verify that the **On SEA-CL1.txt** file displays.
37. Right-click in the details pane, select **New**, select **Text Document**, and then in the **Name** text box, type **On SEA-WS3**, and then press **Enter**.
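   _**Note**: The certificate import in steps 11-21 can also be scripted. The following is a sketch, assuming the PKI module's `Import-Certificate` cmdlet is available on the client:_
   ```
   # Hypothetical scripted alternative to the certificate import in steps 11-21.
   Import-Certificate -FilePath "\\SEA-DC1\certenroll\SEA-DC1.Contoso.com_ContosoCA.crt" -CertStoreLocation Cert:\LocalMachine\Root
   ```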
### Task 4: Test conflict handling
1. Switch to **SEA-CL1**.
2. On **SEA-CL1**, in **Work Folders**, verify that both files, **On SEA-CL1** and **On SEA-WS3**, display.
3. In the notification area, right-click the connection icon, and then select **Open Network & Internet settings**.
4. Select the Ethernet page, and then select **Change adapter options**.
5. Right-click **Ethernet**, and then select **Disable**. In the **User Account Control** dialog box, in the **User name** text box, type **Administrator**. In the **Password** text box, type **Pa55w.rd**, and then select **Yes**.
6. In **Work Folders**, double-click the **On SEA-CL1** file.
7. In Notepad, type **Modified offline**, press Ctrl+S, and then close Notepad.
8. In **Work Folders**, right-click in the details pane, select **New**, select **Text Document**, and then name the file **Offline SEA-CL1**.
9. Switch to **SEA-WS3.**
10. On **SEA-WS3**, in **Work Folders**, double-click the **On SEA-CL1.txt** file.
11. In Notepad, type **Online modification**, press Ctrl+S, and then close Notepad.
12. Switch to **SEA-CL1.**
13. On **SEA-CL1**, in the **Network Connections** window, right-click **Ethernet**, and then select **Enable**.
14. In the **User Account Control** dialog box, in the **User name** text box, type **Administrator**, and in the **Password** text box, type **Pa55w.rd**, and then select **Yes**.
15. Switch to **Work Folders**, and then verify that files display in the details pane, including **On SEA-CL1** and **On SEA-CL1-SEA-CL1**.
_Note: Because you modified the file at two locations, a conflict occurred, and one of the copies was renamed._
16. Sign out from **SEA-CL1** and **SEA-WS3**.
**Results**: After finishing this lab you have installed and configured Work Folders.
### Lab cleanup
1. Sign in to **SEA-SVR1** as **Contoso\\Administrator** with the password **Pa55w.rd**.
2. Right-click **Start**, and then select **Windows PowerShell (Admin)**.
3. In Windows PowerShell, type the following cmdlet, and then press **Enter**:
```
Remove-WindowsFeature FS-SyncShareService
```
_**Note**: The feature will be removed and requires a restart of the server._
4. In Windows PowerShell, type the following cmdlet, and then press **Enter**:
```
Restart-Computer
```
**END OF LAB** | 48.214912 | 764 | 0.70554 | eng_Latn | 0.968306 |
ff696a455bc8f263d576b8b52be9f5104cbb24e5 | 855 | md | Markdown | NEWS.md | kevinrue/hancock | cd901dfe4a21786efe81feb7cde78ac00d2a013e | [
"MIT"
] | 7 | 2019-01-27T21:53:16.000Z | 2021-06-03T11:57:37.000Z | NEWS.md | LTLA/Hancock | aa59be4438d9dc0e389174821c82fcc15ccd8868 | [
"MIT"
] | 3 | 2018-11-25T18:09:44.000Z | 2019-01-02T14:31:14.000Z | NEWS.md | LTLA/Hancock | aa59be4438d9dc0e389174821c82fcc15ccd8868 | [
"MIT"
] | 2 | 2019-03-05T14:34:52.000Z | 2019-03-19T08:30:13.000Z | # hancock 0.99.0
* Added a `NEWS.md` file to track changes to the package.
* Added Travis continuous integration.
* Added `testthat` unit tests.
* Added `codecov` code coverage.
* Added vignette discussing concepts related to the package functionality.
* Added methods for the detection of markers and signatures:
`positiveForMarker` (generic), `makeMarkerDetectionMatrix`,
`makeSignatureDetectionMatrix`.
* Added support for gene set classes
`GeneSetCollection`, `tbl_geneset`, and `BaseSets`
defined in packages `GSEABase`, `GeneSet`, and `unisets`.
* Added prediction method `ProportionPositive`.
* Added learning method `PositiveProportionDifference`.
* Added plotting function `plotProportionPositive`.
* Added helper functions: `makeFilterExpression`, `uniqueMarkers`,
`uniqueSetNames`.
* Added _Shiny_ function: `shinyLabels`.
| 42.75 | 74 | 0.768421 | eng_Latn | 0.78915 |
ff6997d0211be44eb85918a478ac0eab2fdae21f | 1,911 | md | Markdown | cis_v130/docs/cis_v130_1_1.md | ecktom/steampipe-mod-aws-compliance | ce2ac6ee38acdd0814b4aeb25383f71024e6aab9 | [
"Apache-2.0"
] | 213 | 2021-05-19T22:33:56.000Z | 2022-03-29T01:45:20.000Z | cis_v130/docs/cis_v130_1_1.md | ecktom/steampipe-mod-aws-compliance | ce2ac6ee38acdd0814b4aeb25383f71024e6aab9 | [
"Apache-2.0"
] | 159 | 2021-05-19T22:28:30.000Z | 2022-03-30T11:52:26.000Z | cis_v130/docs/cis_v130_1_1.md | ecktom/steampipe-mod-aws-compliance | ce2ac6ee38acdd0814b4aeb25383f71024e6aab9 | [
"Apache-2.0"
] | 13 | 2021-05-25T23:26:14.000Z | 2021-12-17T20:52:46.000Z | ## Description
Ensure that your *Contact Information* and *Alternate Contacts* are correct on the account settings page of your AWS account.
In addition to the primary contact information, you may enter the following contacts:
- **Billing**: When your monthly invoice is available, or your payment method needs to be updated. If **Receive PDF Invoice By Email** is turned on in your Billing preferences, your alternate billing contact will receive the PDF invoices as well.
- **Operations**: When your service is, or will be, temporarily unavailable in one or more Regions. Any notification related to operations.
- **Security**: When you have notifications from the AWS Abuse team for potentially fraudulent activity on your AWS account. Any notification related to security.
As a best practice, avoid using contact information for individuals, and instead use group email addresses and shared company phone numbers.
AWS uses the contact information to inform you of important service events, billing issues, and security issues. Keeping your contact information up to date ensures timely delivery of important information to the relevant stakeholders. Incorrect contact information may result in communication delays that could impact your ability to operate.
## Remediation
There is no API available for setting contact information - you must log in to the AWS console to verify and set your contact information.
1. Sign into the AWS console, and navigate to [Account Settings](https://console.aws.amazon.com/billing/home?#/account).
2. Verify that the information in the **Contact Information** section is correct and complete. If changes are required, click **Edit**, make your changes, and then click **Update**.
3. Verify that the information in the **Alternate Contacts** section is correct and complete. If changes are required, click **Edit**, make your changes, and then click **Update**. | 91 | 345 | 0.792779 | eng_Latn | 0.997241 |
ff6b18da78b803c88bf25a6e5f3c950ccff1c3b8 | 1,475 | md | Markdown | README.md | sedpro/yii2-test-app | 84b60a9e3222392858c5f777afd41a9d9bc1bc49 | [
"BSD-3-Clause"
] | null | null | null | README.md | sedpro/yii2-test-app | 84b60a9e3222392858c5f777afd41a9d9bc1bc49 | [
"BSD-3-Clause"
] | null | null | null | README.md | sedpro/yii2-test-app | 84b60a9e3222392858c5f777afd41a9d9bc1bc49 | [
"BSD-3-Clause"
] | null | null | null | Yii 2 Test Application
============================
INSTALLATION
------------
```
# Make a folder and move into it:
mkdir yii2 && cd $_
# Install from git:
git clone git@github.com:sedpro/yii2-test-app.git .
# Install composer:
curl -sS https://getcomposer.org/installer | php
# Run composer:
php composer.phar install
# Edit the file config/db.php with real data (see example below)
nano config/db.php
# Run migrations to create tables and fill them with test data:
php yii migrate
# Run built-in web server:
php -S 127.0.0.1:8080 -t web/
# Open in browser the page:
http://127.0.0.1:8080
```
### Database configuration
Example of `config/db.php` file:
```php
return [
'class' => 'yii\db\Connection',
'dsn' => 'mysql:host=localhost;dbname=yii2basic',
'username' => 'root',
'password' => '1234',
'charset' => 'utf8',
];
```
TASK
----
Using Yii2, implement functionality that allows only registered (signed-in) users to view, delete, and edit records in the "books" table.
|books|
id,
name,
date_create, / record creation date
date_update, / record update date
preview, / path to the book's preview image
date, / book release date
author_id / author ID in the authors table
|authors| editing the authors table is not required; it just needs to be populated with test data.
id,
firstname, / author's first name
lastname, / author's last name
The resulting book management page should look like this:
https://github.com/sedpro/yii2-test-app/blob/master/task.png
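For orientation, below is a minimal sketch of a migration implementing the schema above. The class name, column types, and foreign key are illustrative assumptions; the repository's actual migrations may differ:
```php
<?php

use yii\db\Migration;

// Hypothetical sketch of the schema described above; names and types are assumptions.
class m180101_000000_create_books_and_authors_tables extends Migration
{
    public function safeUp()
    {
        $this->createTable('{{%authors}}', [
            'id'        => $this->primaryKey(),
            'firstname' => $this->string()->notNull(), // author's first name
            'lastname'  => $this->string()->notNull(), // author's last name
        ]);

        $this->createTable('{{%books}}', [
            'id'          => $this->primaryKey(),
            'name'        => $this->string()->notNull(),
            'date_create' => $this->dateTime(),           // record creation date
            'date_update' => $this->dateTime(),           // record update date
            'preview'     => $this->string(),             // path to the preview image
            'date'        => $this->date(),               // book release date
            'author_id'   => $this->integer()->notNull(), // author ID
        ]);

        // Each book references its author.
        $this->addForeignKey('fk_books_author', '{{%books}}', 'author_id', '{{%authors}}', 'id');
    }

    public function safeDown()
    {
        $this->dropForeignKey('fk_books_author', '{{%books}}');
        $this->dropTable('{{%books}}');
        $this->dropTable('{{%authors}}');
    }
}
```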
| 22.014925 | 130 | 0.705763 | rus_Cyrl | 0.141429 |
ff6b42341078e2d9e33c67eb672bbb3d6d15054a | 922 | md | Markdown | data/issues/ZF-2338.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 40 | 2016-06-23T17:52:49.000Z | 2021-03-27T20:02:40.000Z | data/issues/ZF-2338.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 80 | 2016-06-24T13:39:11.000Z | 2019-08-08T06:37:19.000Z | data/issues/ZF-2338.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 52 | 2016-06-24T22:21:49.000Z | 2022-02-24T18:14:03.000Z | ---
layout: issue
title: "Exif data not parsed from photo entries"
id: ZF-2338
---
ZF-2338: Exif data not parsed from photo entries
------------------------------------------------
Issue Type: Bug Created: 2007-12-19T16:49:48.000+0000 Last Updated: 2008-02-26T12:56:00.000+0000 Status: Resolved Fix version(s): - 1.0.4 (26/Feb/08)
Reporter: Ryan Boyd (rboyd) Assignee: Ryan Boyd (rboyd) Tags: - Zend\_Gdata
Related issues:
Attachments:
### Description
Exif data is not currently being parsed from photo entries because: a) the namespace for Exif is not defined in Zend\_Gdata\_Photos b) Exif/Extension/Tags was broken, not passing the dom element when creating children
There are also missing tests for the Exif classes. This fixes some, but more needed to be added.
### Comments
Posted by Ryan Boyd (rboyd) on 2007-12-21T08:03:43.000+0000
Code written by mjoshi and rboyd. Needs reviewed.
| 27.117647 | 217 | 0.696312 | eng_Latn | 0.971068 |
ff6cd1373050d051627e756cc83d55cd0ade9136 | 1,005 | md | Markdown | DIRECTORY_BOOTNODES.md | Gradiant/alastria-node | c53767e1cd3e023c1ada20d5ec931c54463a03cb | [
"Apache-2.0"
] | 93 | 2017-10-16T08:43:07.000Z | 2022-02-22T23:32:39.000Z | DIRECTORY_BOOTNODES.md | Gradiant/alastria-node | c53767e1cd3e023c1ada20d5ec931c54463a03cb | [
"Apache-2.0"
] | 364 | 2017-10-31T16:11:47.000Z | 2022-03-30T10:51:18.000Z | DIRECTORY_BOOTNODES.md | Gradiant/alastria-node | c53767e1cd3e023c1ada20d5ec931c54463a03cb | [
"Apache-2.0"
] | 518 | 2017-10-02T14:27:45.000Z | 2022-03-11T08:32:03.000Z | # Bootnode directory
| Entity | Hosting info (Cores/Mem/HDD) | Private key for * | enode |
| ------- | ---------------------------------- | ------------- | ----- |
| Izertis | Amazon (2C/4GB/100GB) | - | enode://ec816cd01c4b4afc8b7e75b823817bd0b36d1672a42839544a57a312a5c04ab12a3d96a3957f2638a3fee52d10203e6d3351a48b245caea9469f020007fa2d18@54.72.163.31:21000 |
| PLANISYS | Planisys EU-Cloud (8C/16Gb/200Gb)| - | enode://e956a09e29a05a06bfe4049ebdd1f648fd8508c2a63114916aaee9ad41636cce80679e6d3467872fac7f298689243999490adb67d4bd4617329cd545d3f66b67@185.180.8.154:21000?discport=0 |
| SERES | OVH (2C/8Gb/80Gb)| - | enode://623d6f2228378358c0bcae8e2087b5bd6207c4b9a048cd2d9878e4bed61e6af67a3ee30ab4692d226b3280211f4d038c818ccf4253a11cd452db8a6612889022@51.83.79.100:21000 |
| DigitelTS-BOT-01 | AWS (2C/8Gb/128GB) | | enode://d7d50abadb467de05cf474c6c9d1b2b3a399de9ccb1a58ea26141509f6a05c7b68690c8cba812d4db0258add29c349af88f9e61e5ada1c674b16c302083627b8@54.228.169.138:21000?discport=0 |
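_Note: a typical way to consume this directory, assuming a geth/Quorum-based client, is to pass one or more of the enode URLs above via the `--bootnodes` flag. The exact startup script for an Alastria node may wire this up differently:_
```sh
# Hypothetical example: point a geth/Quorum client at one of the bootnodes above.
geth --bootnodes "enode://ec816cd01c4b4afc8b7e75b823817bd0b36d1672a42839544a57a312a5c04ab12a3d96a3957f2638a3fee52d10203e6d3351a48b245caea9469f020007fa2d18@54.72.163.31:21000"
```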
| 111.666667 | 221 | 0.781095 | yue_Hant | 0.178386 |
ff6d18b60954799066057f2e63267656fae8a04f | 4,556 | md | Markdown | readme.md | sisense/embed-sdk-demo | 41bf6abadc11e9d035ac98e67d5ec6e13270b97c | [
"MIT"
] | 14 | 2020-03-01T19:55:47.000Z | 2021-07-17T12:15:15.000Z | readme.md | sisense/embed-sdk-demo | 41bf6abadc11e9d035ac98e67d5ec6e13270b97c | [
"MIT"
] | 1 | 2022-02-13T04:30:16.000Z | 2022-02-13T04:30:16.000Z | readme.md | sisense/embed-sdk-demo | 41bf6abadc11e9d035ac98e67d5ec6e13270b97c | [
"MIT"
] | 6 | 2020-05-29T00:31:10.000Z | 2022-01-14T07:50:59.000Z | # Sisense Embed SDK Demo
This project is a demo of Sisense's Embed SDK - a JavaScript library used to embed Sisense dashboards into other web pages in an interactive manner, available in Sisense V8.1 and up.
> *Note: this demo utilizes some features available only with Sisense V8.1.1 and higher*
The repository includes all the assets (such as an Elasticube and dashboards) required to set up this demo. Below you will find instructions for set up, customization and contributions.
For more information about the Sisense Embed SDK, please visit the [Sisense Embed SDK documentation](https://developer.sisense.com/display/API2/Embed+SDK)
#### Table of Contents
Within this document:
- [Setting up this demo](#setting-up-this-demo)
- [Using this demo](#using-this-demo)
External documents:
- [License](LICENSE)
- [Contribution guidelines](CONTRIBUTING.md)

<!--
TODO LIST
=========
- screenshots & gifs
- code review
-->
## Setting up this demo
> Make sure you have met the following prerequisites prior to starting:
> - [Installed `Sisense V8.1.1`](https://documentation.sisense.com/latest/getting-started/download-install.htm) on your server and activated it
> - [Set up CORS](https://documentation.sisense.com/latest/content/cors.htm)
> - [Set up SSO](https://documentation.sisense.com/latest/administration/user-management/single-sign-on/introduction-sso.htm) (Optional - you can manually log in to Sisense in another tab to create a cookie)
> - Have [NodeJS](https://nodejs.org/en/) and [NPM](https://www.npmjs.com/) installed and up-to-date
1. Clone or download this repository
1. The demo uses one of the sample Datamodels included with Sisense called "Sample ECommerce".
If you don't already have this Datamodel, import the Elasticube file (`.ecdata`) included in the `assets` folder to your Sisense server
1. Import the Dashboard files (`.dash`) included in the `assets` folder to your Sisense server
1. Open the `index.html` file using any text editor and:
1. Locate the line `<!-- Get iFrame SDK -->`
1. Beneath it, update the URL in the `script` tag to match your Sisense server's IP/host and port: `<script src="http://localhost:8081/js/frame.js"></script>`
1. Open the `config.js` file using any text editor or IDE (a sketch of this file appears after these steps) and:
1. Update the `baseUrl` property to match your Sisense server's IP/host and port
1. Update the `dashboards` array with the OIDs of your imported dashboards
1. Run `npm install`
1. Run `npm start` to start a local dev server, which will automatically open the demo in your browser.
_If a tab doesn't open automatically, navigate to http://localhost:8887/index.html_
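For reference, here is a minimal sketch of what `config.js` might look like after these edits. Only `baseUrl` and `dashboards` are named by this README; the export style and the placeholder OIDs are assumptions:
```js
// Hypothetical config.js sketch - replace the values with your own.
const config = {
  // IP/host and port of your Sisense server (should match the frame.js script tag)
  baseUrl: 'http://localhost:8081',
  // OIDs of the two imported demo dashboards
  dashboards: [
    '5dc3xxxxxxxxxxxxxxxxxxxx',
    '5dc3yyyyyyyyyyyyyyyyyyyy'
  ]
};
```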
## Using this demo
This project utilizes very few files and dependencies to keep things simple.
Below you can find a description of dependencies, project structure, and other info to help you understand this demo and customize it for your needs.
**Dependencies**
All of the dependencies below are used by the demo page itself **and are not required to use the Sisense Embed SDK**.
| Name | Type | Source | Version | Description |
|------|------|--------|---------|-------------|
| [Bootstrap](https://getbootstrap.com/) | CSS | CDN | 4.4.1 | Bootstrap CSS framework used for the demo's UX design |
| [http-server](https://www.npmjs.com/package/http-server) | NodeJS | NPM | ^0.11.1 | Lightweight NodeJS-based HTTP server for static files |
This project's UI was based on, but does not depend on, the [Bootstrap 4.4 Dashboard Example](https://getbootstrap.com/docs/4.4/examples/dashboard/)
**File Structure**
```
./
├── readme.md --> This README file
├── LICENSE --> License info
├── CONTRIBUTING.md --> Contribution guidelines
├── package.json --> NPM
├── index.html --> Main demo page
├── assets/ -->
│ ├── demo-dash-1.dash --> Sample Dashboard
│ ├── demo-dash-2.dash --> Sample Dashboard
│ └── Sample ECommerce.ecdata --> Sample Elasticube
├── images/ -->
│ └── main.gif --> GIF image used in this document
├── css/ -->
│ └── main.css --> Custom CSS (in addition to Bootstrap)
└── js/ -->
├── array-find.poly.js --> Polyfill of Array.prototype.find() for older browsers
├── config.js --> Demo configuration
├── index.js --> Demo code
└── log.js --> Faux console utility
``` | 48.468085 | 208 | 0.670105 | eng_Latn | 0.937699 |
ff6dd004cb16a68794ad21804c4671dd4b692c97 | 8,333 | md | Markdown | docs/API.md | bartholomews/spotify-scala-client | fccdbfdba00b0694dc90ca2057b16248b0de7bda | [
"MIT"
] | 6 | 2017-06-25T10:01:43.000Z | 2019-06-23T21:56:41.000Z | docs/API.md | bartholomews/spotify-scala-client | fccdbfdba00b0694dc90ca2057b16248b0de7bda | [
"MIT"
] | 2 | 2018-01-21T14:42:21.000Z | 2019-02-11T17:22:18.000Z | docs/API.md | bartholomews/spotify-scala-client | fccdbfdba00b0694dc90ca2057b16248b0de7bda | [
"MIT"
] | 2 | 2018-01-21T13:08:44.000Z | 2018-02-20T21:45:15.000Z | ## Endpoints Task list [🔗](https://developer.spotify.com/documentation/web-api/reference/#/)
- **Albums**
✅ [Get an Album](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-an-album)
✅ [Get Several Albums](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-multiple-albums)
✅ [Get an Album's Tracks](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-an-albums-tracks)
❌ [Get Saved Albums](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-users-saved-albums)
❌ [Save Albums](https://developer.spotify.com/documentation/web-api/reference/#/operations/save-albums-user)
❌ [Remove Albums](https://developer.spotify.com/documentation/web-api/reference/#/operations/remove-albums-user)
❌ [Check Saved Albums](https://developer.spotify.com/documentation/web-api/reference/#/operations/check-users-saved-albums)
✅ [Get New Releases](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-new-releases)
- **Artists**
❌ [Get an Artist](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-an-artist)
❌ [Get Several Artists](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-multiple-artists)
❌ [Get Artist's Albums](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-an-artists-albums)
❌ [Get Artist's Top Tracks](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-an-artists-top-tracks)
❌ [Get Artist's Related Artists](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-an-artists-related-artists)
- **Shows**
❌ [Get Show](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-a-show)
❌ [Get Several Shows](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-multiple-shows)
❌ [Get Show Episodes](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-a-shows-episodes)
❌ [Get User's Saved Shows](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-users-saved-shows)
❌ [Save Shows for Current User](https://developer.spotify.com/documentation/web-api/reference/#/operations/save-shows-user)
❌ [Remove User's Saved Shows](https://developer.spotify.com/documentation/web-api/reference/#/operations/remove-shows-user)
❌ [Check User's Saved Shows](https://developer.spotify.com/documentation/web-api/reference/#/operations/check-users-saved-shows)
- **Episodes**
❌ [Get Episode](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-an-episode)
❌ [Get Several Episodes](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-multiple-episodes)
❌ [Get User's Saved Episodes](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-users-saved-episodes)
❌ [Save Episodes for User](https://developer.spotify.com/documentation/web-api/reference/#/operations/save-episodes-user)
❌ [Remove User's Saved Episodes](https://developer.spotify.com/documentation/web-api/reference/#/operations/remove-episodes-user)
❌ [Check User's Saved Episodes](https://developer.spotify.com/documentation/web-api/reference/#/operations/check-users-saved-episodes)
- **Tracks**
✅ [Get Track](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-track)
✅ [Get Several Tracks](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-several-tracks)
❌ [Get User's Saved Tracks](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-users-saved-tracks)
❌ [Save Tracks for Current User](https://developer.spotify.com/documentation/web-api/reference/#/operations/save-tracks-user)
❌ [Remove Tracks for Current User](https://developer.spotify.com/documentation/web-api/reference/#/operations/remove-tracks-user)
❌ [Check User's Saved Tracks](https://developer.spotify.com/documentation/web-api/reference/#/operations/check-users-saved-tracks)
✅ [Get Tracks' Audio Features](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-several-audio-features)
✅ [Get Track's Audio Features](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-audio-features)
✅ [Get Track's Audio Analysis](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-audio-analysis)
✅ [Get Recommendations](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-recommendations)
- **Search**
❌ [Search for Item](https://developer.spotify.com/documentation/web-api/reference/#/operations/search)
- **Users**
✅ [Get Current User's Profile](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-current-users-profile)
❌ [Get User's Top Items](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-users-top-artists-and-tracks)
✅ [Get User's Profile](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-users-profile)
✅ [Follow Playlist](https://developer.spotify.com/documentation/web-api/reference/#/operations/follow-playlist)
✅ [Unfollow Playlist](https://developer.spotify.com/documentation/web-api/reference/#/operations/unfollow-playlist)
✅ [Get Followed Artists](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-followed)
✅ [Follow Artists or Users](https://developer.spotify.com/documentation/web-api/reference/#/operations/follow-artists-users)
✅ [Unfollow Artists or Users](https://developer.spotify.com/documentation/web-api/reference/#/operations/unfollow-artists-users)
✅ [Check If User Follows Artists or Users](https://developer.spotify.com/documentation/web-api/reference/#/operations/check-current-user-follows)
✅ [Check If Users Follow Playlist](https://developer.spotify.com/documentation/web-api/reference/#/operations/check-if-user-follows-playlist)
- **Playlists**
✅ [Get Playlist](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-playlist)
✅ [Change Playlist Details](https://developer.spotify.com/documentation/web-api/reference/#/operations/change-playlist-details)
✅ [Get Playlist Items](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-playlists-tracks)
✅ [Add Items to Playlist](https://developer.spotify.com/documentation/web-api/reference/#/operations/add-tracks-to-playlist)
✅ [Update Playlist Items](https://developer.spotify.com/documentation/web-api/reference/#/operations/reorder-or-replace-playlists-tracks)
❌ [Remove Playlist Items](https://developer.spotify.com/documentation/web-api/reference/#/operations/remove-tracks-playlist)
✅ [Get Current User's Playlists](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-a-list-of-current-users-playlists)
✅ [Get User's Playlists](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-list-users-playlists)
✅ [Create Playlist](https://developer.spotify.com/documentation/web-api/reference/#/operations/create-playlist)
✅ [Get Featured Playlists](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-featured-playlists)
✅ [Get Category's Playlists](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-a-categories-playlists)
❌ [Get Playlist Cover Image](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-playlist-cover)
❌ [Add Custom Playlist Cover Image](https://developer.spotify.com/documentation/web-api/reference/#/operations/upload-custom-playlist-cover)
- **Categories**
✅ [Get Several Browse Categories](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-categories)
✅ [Get Single Browse Category](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-a-category)
- **Genres**
✅ [Get Available Genre Seeds](https://developer.spotify.com/documentation/web-api/reference/#/operations/get-recommendation-genres)
- **Player**
- **Markets**
| 82.50495 | 151 | 0.76059 | eng_Latn | 0.187882 |
ff6e3afcff44b58a3a5925470d539fe4df985f5e | 88 | md | Markdown | README.md | dibyanshushekhardey/Data-structures-and-Algorithms- | 27a456375897108a3f407fa6082c227cbb393777 | [
"MIT"
] | null | null | null | README.md | dibyanshushekhardey/Data-structures-and-Algorithms- | 27a456375897108a3f407fa6082c227cbb393777 | [
"MIT"
] | null | null | null | README.md | dibyanshushekhardey/Data-structures-and-Algorithms- | 27a456375897108a3f407fa6082c227cbb393777 | [
"MIT"
] | null | null | null | # Data-structures-and-Algorithms-
This repository contains assorted data structure and algorithm code.
| 29.333333 | 53 | 0.829545 | eng_Latn | 0.945764 |
ff6e923113392fb8aa31202d4dc87f80898a5b29 | 7,152 | md | Markdown | docs/framework/winforms/controls/how-to-delete-or-hide-columns-in-the-windows-forms-datagrid-control.md | AlejandraHM/docs.es-es | 5f5b056e12f9a0bcccbbbef5e183657d898b9324 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/winforms/controls/how-to-delete-or-hide-columns-in-the-windows-forms-datagrid-control.md | AlejandraHM/docs.es-es | 5f5b056e12f9a0bcccbbbef5e183657d898b9324 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/winforms/controls/how-to-delete-or-hide-columns-in-the-windows-forms-datagrid-control.md | AlejandraHM/docs.es-es | 5f5b056e12f9a0bcccbbbef5e183657d898b9324 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "How to: Delete or hide columns in the Windows Forms DataGrid control"
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- data grids [Windows Forms], deleting columns
- DataGrid control [Windows Forms], deleting columns
- data grids [Windows Forms], hiding columns
- columns [Windows Forms], hiding
- columns [Windows Forms], deleting in data grids
- DataGrid control [Windows Forms], hiding columns
ms.assetid: bcd0dd96-6687-4c48-b0e1-d5287b93ac91
ms.openlocfilehash: 635fbc112a241c4c8b17d2b49c22042c6bd59a21
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 01/23/2019
ms.locfileid: "54653949"
---
# <a name="how-to-delete-or-hide-columns-in-the-windows-forms-datagrid-control"></a>How to: Delete or hide columns in the Windows Forms DataGrid control
> [!NOTE]
> The <xref:System.Windows.Forms.DataGridView> control replaces and adds functionality to the <xref:System.Windows.Forms.DataGrid> control; however, the <xref:System.Windows.Forms.DataGrid> control is retained for both backward compatibility and future use, if you choose. For more information, see [Differences Between the Windows Forms DataGridView and DataGrid Controls](../../../../docs/framework/winforms/controls/differences-between-the-windows-forms-datagridview-and-datagrid-controls.md).
 You can programmatically delete or hide columns in the Windows Forms <xref:System.Windows.Forms.DataGrid> control by using the properties and methods of the <xref:System.Windows.Forms.GridColumnStylesCollection> and <xref:System.Windows.Forms.DataGridColumnStyle> objects (which are members of the <xref:System.Windows.Forms.DataGridTableStyle> class).
 Deleted or hidden columns still exist in the data source that the grid is bound to, and they can still be accessed programmatically. They are simply no longer visible in the data grid.
> [!NOTE]
> If your application does not access certain columns of data and you do not want them to appear in the data grid, you probably do not need to include them in the data source in the first place.
### <a name="to-delete-a-column-from-the-datagrid-programmatically"></a>To delete a column from the DataGrid programmatically
1. In the declarations area of your form, declare a new instance of the <xref:System.Windows.Forms.DataGridTableStyle> class.
2. Set the <xref:System.Windows.Forms.DataGridTableStyle.MappingName%2A?displayProperty=nameWithType> property to the table in the data source that you want to apply the style to. The following example uses the <xref:System.Windows.Forms.DataGrid.DataMember%2A?displayProperty=nameWithType> property, which is assumed to have already been set.
3. Add the new <xref:System.Windows.Forms.DataGridTableStyle> object to the data grid's table styles collection.
4. Call the <xref:System.Windows.Forms.GridColumnStylesCollection.RemoveAt%2A> method of the <xref:System.Windows.Forms.DataGrid>'s <xref:System.Windows.Forms.DataGridTableStyle.GridColumnStyles%2A> collection, specifying the column index of the column to delete.
```vb
' Declare a new DataGridTableStyle in the
' declarations area of your form.
Dim ts As DataGridTableStyle = New DataGridTableStyle()
Sub DeleteColumn()
' Set the DataGridTableStyle.MappingName property
' to the table in the data source to map to.
ts.MappingName = DataGrid1.DataMember
' Add it to the datagrid's TableStyles collection
DataGrid1.TableStyles.Add(ts)
' Delete the first column (index 0)
DataGrid1.TableStyles(0).GridColumnStyles.RemoveAt(0)
End Sub
```
```csharp
// Declare a new DataGridTableStyle in the
// declarations area of your form.
DataGridTableStyle ts = new DataGridTableStyle();
private void deleteColumn()
{
// Set the DataGridTableStyle.MappingName property
// to the table in the data source to map to.
ts.MappingName = dataGrid1.DataMember;
// Add it to the datagrid's TableStyles collection
dataGrid1.TableStyles.Add(ts);
// Delete the first column (index 0)
dataGrid1.TableStyles[0].GridColumnStyles.RemoveAt(0);
}
```
### <a name="to-hide-a-column-in-the-datagrid-programmatically"></a>Para ocultar una columna en la cuadrícula de datos mediante programación
1. En el área de declaraciones del formulario, declare una nueva instancia de la <xref:System.Windows.Forms.DataGridTableStyle> clase.
2. Establecer el <xref:System.Windows.Forms.DataGridTableStyle.MappingName%2A> propiedad de la <xref:System.Windows.Forms.DataGridTableStyle> a la tabla de origen de datos que desea aplicar el estilo. El siguiente ejemplo de código utiliza el <xref:System.Windows.Forms.DataGrid.DataMember%2A?displayProperty=nameWithType> propiedad, que se supone que ya se ha establecido.
3. Agregue el nuevo <xref:System.Windows.Forms.DataGridTableStyle> objeto a la colección de estilos de tabla del control datagrid.
4. Ocultar la columna estableciendo su `Width` propiedad en 0, especificando el índice de columna de la columna que desea ocultar.
```vb
' Declare a new DataGridTableStyle in the
' declarations area of your form.
Dim ts As DataGridTableStyle = New DataGridTableStyle()
Sub HideColumn()
' Set the DataGridTableStyle.MappingName property
' to the table in the data source to map to.
ts.MappingName = DataGrid1.DataMember
' Add it to the datagrid's TableStyles collection
DataGrid1.TableStyles.Add(ts)
' Hide the first column (index 0)
DataGrid1.TableStyles(0).GridColumnStyles(0).Width = 0
End Sub
```
```csharp
// Declare a new DataGridTableStyle in the
// declarations area of your form.
DataGridTableStyle ts = new DataGridTableStyle();
private void hideColumn()
{
// Set the DataGridTableStyle.MappingName property
// to the table in the data source to map to.
ts.MappingName = dataGrid1.DataMember;
// Add it to the datagrid's TableStyles collection
dataGrid1.TableStyles.Add(ts);
// Hide the first column (index 0)
dataGrid1.TableStyles[0].GridColumnStyles[0].Width = 0;
}
```
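As the note at the top of this article suggests, new development should use the <xref:System.Windows.Forms.DataGridView> control instead. For comparison, here is a brief sketch of the equivalent operations there, assuming a form with a data-bound control named `dataGridView1`:
```csharp
// Equivalent operations on the newer DataGridView control
// (assumes a data-bound DataGridView named dataGridView1).

// Hide the first column; it remains in the data source.
dataGridView1.Columns[0].Visible = false;

// Remove the first column from the grid's column collection entirely.
dataGridView1.Columns.RemoveAt(0);
```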
## <a name="see-also"></a>Vea también
- [Cómo: Cambiar los datos mostrados en tiempo de ejecución en el Control DataGrid de Windows Forms](../../../../docs/framework/winforms/controls/change-displayed-data-at-run-time-wf-datagrid-control.md)
- [Cómo: Agregar tablas y columnas al Control DataGrid de formularios Windows Forms](../../../../docs/framework/winforms/controls/how-to-add-tables-and-columns-to-the-windows-forms-datagrid-control.md)
| 55.015385 | 614 | 0.735878 | spa_Latn | 0.502754 |
ff6e94c1505f798be680b5a03ae9e813ccb641cb | 25,738 | md | Markdown | articles/storsimple/storsimple-adapter-for-sharepoint.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storsimple/storsimple-adapter-for-sharepoint.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storsimple/storsimple-adapter-for-sharepoint.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Install the StorSimple Adapter for SharePoint | Microsoft Docs
description: Describes how to install and configure or remove the StorSimple Adapter for SharePoint in a SharePoint server farm.
services: storsimple
documentationcenter: NA
author: twooley
manager: timlt
editor: ''
ms.assetid: 36c20b75-f2e5-4184-a6b5-9c5e618f79b2
ms.service: storsimple
ms.devlang: NA
ms.topic: article
ms.tgt_pltfrm: NA
ms.workload: TBD
ms.date: 06/06/2017
ms.author: twooley
ms.openlocfilehash: a841ce8b664389ccd8fdf55de9965f09412fecf5
ms.sourcegitcommit: 849bb1729b89d075eed579aa36395bf4d29f3bd9
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 04/28/2020
ms.locfileid: "75930208"
---
# <a name="install-and-configure-the-storsimple-adapter-for-sharepoint"></a>De StorSimple-adapter voor share point installeren en configureren
## <a name="overview"></a>Overzicht
De StorSimple-adapter voor share point is een onderdeel waarmee u Microsoft Azure StorSimple flexibele opslag en gegevens beveiliging kunt bieden aan share Point-server farms. U kunt de adapter gebruiken om de inhoud van binaire Large Object (BLOB) te verplaatsen van de SQL Server-inhouds databases naar het Microsoft Azure StorSimple Hybrid Cloud Storage-apparaat.
De StorSimple-adapter voor share point fungeert als een resource voor externe BLOB-opslag (RBS) en maakt gebruik van de SQL Server externe BLOB Storage-functie voor het opslaan van ongestructureerde share point-inhoud (in de vorm van BLOBs) op een bestands server die wordt ondersteund door een StorSimple-apparaat.
> [!NOTE]
> De StorSimple-adapter voor share point ondersteunt externe BLOB-opslag van share Point Server 2010 (RBS). Het biedt geen ondersteuning voor externe BLOB Storage van share Point Server 2010 (EBS).
* Als u de StorSimple-adapter voor share point wilt downloaden, gaat u naar de [StorSimple-adapter voor share point][1] in het micro soft Download centrum.
* Ga voor informatie over planning voor de beperkingen van de RBS-en RBS-beslissing naar het [gebruik van RBS in share point 2013][2] of [plan voor resource structuur (share Point Server 2010)][3].
In de rest van dit overzicht vindt u een korte beschrijving van de rol van de StorSimple-adapter voor share point en de share point-capaciteit en prestatie limieten waarvan u op de hoogte moet zijn voordat u de adapter installeert en configureert. Nadat u deze informatie hebt bekeken, gaat u naar de [StorSimple-adapter voor share point-installatie](#storsimple-adapter-for-sharepoint-installation) om te beginnen met het instellen van de adapter.
### <a name="storsimple-adapter-for-sharepoint-benefits"></a>StorSimple-adapter voor share point-voor delen
In een share point-site wordt inhoud opgeslagen als ongestructureerde BLOB-gegevens in een of meer inhouds databases. Deze data bases worden standaard gehost op computers met SQL Server en bevinden zich in de share Point-server farm. BLOBs kunnen snel groter worden, waardoor grote hoeveel heden on-premises opslag ruimte worden verbruikt. Daarom wilt u mogelijk een andere, goedkopere opslag oplossing vinden. SQL Server biedt een technologie met de naam externe Blob Storage (RBS) waarmee u BLOB-inhoud in het bestands systeem buiten de SQL Server-Data Base kunt opslaan. Met de RBS kunnen BLOBs zich bevinden in het bestands systeem op de computer met SQL Server, of kunnen ze worden opgeslagen in het bestands systeem op een andere Server computer.
RBS vereist dat u een RBS-provider gebruikt, zoals de StorSimple-adapter voor share point, om de resource structuur in share point in te scha kelen. De StorSimple-adapter voor share point werkt met de resource structuur, zodat u BLOBs kunt verplaatsen naar een server waarvan een back-up wordt gemaakt door het Microsoft Azure StorSimple systeem. Microsoft Azure StorSimple de BLOB-gegevens vervolgens lokaal of in de Cloud opslaat, op basis van het gebruik. BLOBs die zeer actief zijn (doorgaans aangeduid als Tier 1 of warme gegevens), zijn lokaal opgeslagen. Minder actieve gegevens-en archiverings gegevens bevinden zich in de Cloud. Nadat u RBS hebt ingeschakeld voor een inhouds database, worden nieuwe BLOB-inhoud die in share point is gemaakt, opgeslagen op het StorSimple-apparaat en niet in de inhouds database.
De Microsoft Azure StorSimple implementatie van de resource structuur biedt de volgende voor delen:
* Door BLOB-inhoud naar een afzonderlijke server te verplaatsen, kunt u de belasting van de query op SQL Server verminderen, waardoor de reactie snelheid van SQL Server kan worden verbeterd.
* Azure StorSimple maakt gebruik van ontdubbeling en compressie om de gegevens grootte te reduceren.
* Azure StorSimple biedt gegevens beveiliging in de vorm van lokale en Cloud momentopnamen. Als u de data base zelf op het StorSimple-apparaat plaatst, kunt u ook een back-up maken van de inhouds database en BLOBs in een crash consistente manier. (Het verplaatsen van de inhouds database naar het apparaat wordt alleen ondersteund voor het apparaat uit de StorSimple 8000-serie. Deze functie wordt niet ondersteund voor de 5000-of 7000-serie.)
* Azure StorSimple bevat functies voor herstel na nood gevallen, waaronder failover, bestands-en volume herstel (met inbegrip van test herstel) en het snel herstellen van gegevens.
* U kunt gegevens herstel software, zoals Kroll Ontrack PowerControls, gebruiken met StorSimple moment opnamen van BLOB-gegevens voor het uitvoeren van herstel op item niveau van share point-inhoud. (Deze software voor gegevens herstel is een afzonderlijke aankoop.)
* De StorSimple-adapter voor share point wordt toegevoegd aan de share Point-Portal voor Centraal beheer, zodat u uw volledige share point-oplossing kunt beheren vanaf een centrale locatie.
Het verplaatsen van BLOB-inhoud naar het bestands systeem kan andere kosten besparingen en voor delen bieden. Als u bijvoorbeeld resource structuur gebruikt, kan de duur van de dure opslag op laag 1 worden verminderd en wordt het aantal data bases dat is vereist in de share Point-server farm verminderd door de resource structuur te verkleinen. Andere factoren, zoals de grootte limieten van de data base en de hoeveelheid niet-RBS-inhoud, kunnen ook van invloed zijn op de opslag vereisten. Voor meer informatie over de kosten en voor delen van het gebruik van de resource structuur, Zie [planning voor RBS (share point Foundation 2010)][4] en [het gebruik van de resource structuur in share point 2013][5].
### <a name="capacity-and-performance-limits"></a>Capaciteits-en prestatie limieten
Voordat u rekening houdt met het gebruik van de resource structuur in uw share point-oplossing, moet u rekening houden met de geteste prestaties en capaciteits limieten van share Point Server 2010 en share Point server 2013, en hoe deze limieten te maken hebben met acceptabele prestaties. Zie [Software grenzen en limieten voor share point 2013](https://technet.microsoft.com/library/cc262787.aspx)voor meer informatie.
Bekijk het volgende voordat u de resource structuur configureert:
* Zorg ervoor dat de totale grootte van de inhoud (de grootte van een inhouds database plus de grootte van alle gekoppelde externe BLOBs) de RBS-grootte limiet niet overschrijdt die door share point wordt ondersteund. Deze limiet is 200 GB.
**Inhouds database en BLOB-grootte meten**
1. Voer deze query uit op de WFE van Centraal beheer. Start de share point-beheer shell en voer de volgende Windows Power shell-opdracht in om de grootte van de inhouds databases op te halen:
`Get-SPContentDatabase | Select-Object -ExpandProperty DiskSizeRequired`
Met deze stap wordt de grootte van de inhouds database op de schijf opgehaald.
2. Voer een van de volgende SQL-query's uit in SQL Management Studio in het vak SQL Server op elke inhouds database en voeg het resultaat toe aan het aantal dat u in stap 1 hebt verkregen.
Voer in share point 2013-inhouds databases het volgende in:
`SELECT SUM([Size]) FROM [ContentDatabaseName].[dbo].[DocStreams] WHERE [Content] IS NULL`
Voer in share point 2010-inhouds databases het volgende in:
`SELECT SUM([Size]) FROM [ContentDatabaseName].[dbo].[AllDocs] WHERE [Content] IS NULL`
Met deze stap wordt de grootte van de BLOBs opgehaald die extern zijn ingesteld.
* U wordt aangeraden alle BLOB-en database inhoud lokaal op het StorSimple-apparaat op te slaan. Het StorSimple-apparaat is een cluster met twee knoop punten voor hoge Beschik baarheid. Het plaatsen van de inhouds databases en BLOBs op het StorSimple-apparaat biedt een hoge Beschik baarheid.
Gebruik de traditionele SQL Server aanbevolen procedures voor de migratie om de inhouds database naar het StorSimple-apparaat te verplaatsen. Verplaats de data base pas nadat alle BLOB-inhoud van de data base is verplaatst naar de bestands share via de resource structuur. Als u ervoor kiest om de inhouds database naar het StorSimple-apparaat te verplaatsen, raden we u aan de opslag van inhouds databases op het apparaat als primair volume te configureren.
* Als in Microsoft Azure StorSimple gelaagde volumes worden gebruikt, is er geen manier om te garanderen dat inhoud die lokaal is opgeslagen op het StorSimple-apparaat niet wordt getierd naar Microsoft Azure-Cloud opslag. Daarom kunt u het beste StorSimple lokaal vastgemaakte volumes gebruiken in combi natie met share point-RBS. Dit zorgt ervoor dat alle BLOB-inhoud lokaal op het StorSimple-apparaat blijft en niet wordt verplaatst naar Microsoft Azure.
* Als u de inhouds databases niet opslaat op het StorSimple-apparaat, gebruikt u traditionele SQL Server aanbevolen procedures voor hoge Beschik baarheid die ondersteuning bieden voor RBS. SQL Server Clustering ondersteunt resource structuur, terwijl SQL Server mirroring niet.
> [!WARNING]
> Als u de resource structuur niet hebt ingeschakeld, raden we u aan om de inhouds database niet naar het StorSimple-apparaat te verplaatsen. Dit is een niet-test configuratie.
## <a name="storsimple-adapter-for-sharepoint-installation"></a>StorSimple-adapter voor share point-installatie
Voordat u de StorSimple-adapter voor share point kunt installeren, moet u het StorSimple-apparaat configureren en ervoor zorgen dat de share Point server-farm en SQL Server-instantiëring voldoen aan alle vereisten. In deze zelf studie worden de configuratie vereisten beschreven, evenals de procedures voor het installeren en upgraden van de StorSimple-adapter voor share point.
## <a name="configure-prerequisites"></a>Vereisten configureren
Voordat u de StorSimple-adapter voor share point kunt installeren, moet u ervoor zorgen dat het StorSimple-apparaat, de share Point-server farm en SQL Server-instantiëring voldoen aan de volgende vereisten.
### <a name="system-requirements"></a>Systeemvereisten
De StorSimple-adapter voor share point werkt met de volgende hardware en software:
* Ondersteund besturings systeem: Windows Server 2008 R2 SP1, Windows Server 2012 of Windows Server 2012 R2
* Ondersteunde share point-versies: share Point Server 2010 of share Point server 2013
* Ondersteunde SQL Server versies: SQL Server 2008 Enter prise Edition, SQL Server 2008 R2 Enter prise Edition of SQL Server 2012 Enter prise Edition
* Ondersteunde StorSimple-apparaten – StorSimple 8000-reeks, StorSimple 7000-serie of StorSimple 5000-serie.
### <a name="storsimple-device-configuration-prerequisites"></a>Vereisten voor configuratie van StorSimple-apparaten
Het StorSimple-apparaat is een blok apparaat en hiervoor is een bestands server vereist waarop de gegevens kunnen worden gehost. U kunt het beste een afzonderlijke server gebruiken in plaats van een bestaande server uit de share point-farm. Deze bestands server moet zich op hetzelfde lokale netwerk (LAN) bevinden als de SQL Server computer die als host fungeert voor de inhouds databases.
> [!TIP]
> * Als u uw share point-Farm voor maximale Beschik baarheid configureert, moet u de bestands server ook implementeren voor maximale Beschik baarheid.
> * Als u de inhouds database niet opslaat op het StorSimple-apparaat, gebruik dan traditionele aanbevolen procedures voor hoge Beschik baarheid die ondersteuning bieden voor RBS. SQL Server Clustering ondersteunt resource structuur, terwijl SQL Server mirroring niet.
Zorg ervoor dat uw StorSimple-apparaat correct is geconfigureerd en dat de juiste volumes ter ondersteuning van uw share point-implementatie zijn geconfigureerd en toegankelijk zijn vanaf uw SQL Server computer. Ga naar [uw on-premises StorSimple-apparaat implementeren](storsimple-8000-deployment-walkthrough-u2.md) als u nog geen StorSimple-apparaat hebt geïmplementeerd en geconfigureerd. Noteer het IP-adres van het StorSimple-apparaat. u hebt deze nodig tijdens de installatie van de StorSimple-adapter voor share point.
Bovendien moet u ervoor zorgen dat het volume dat moet worden gebruikt voor BLOB externalization voldoet aan de volgende vereisten:
* Het volume moet zijn geformatteerd met een 64 KB-Allocation Unit Size.
* Uw web-front-end (WFE) en toepassings servers moeten toegang kunnen krijgen tot het volume via een UNC-pad (Universal Naming Convention).
* De share Point server-farm moet worden geconfigureerd om naar het volume te schrijven.
> [!NOTE]
> Nadat u de adapter hebt geïnstalleerd en geconfigureerd, moet alle BLOB-externalization via het StorSimple-apparaat worden weer gegeven (het apparaat geeft de volumes aan SQL Server en beheert de opslag lagen). U kunt geen andere doelen gebruiken voor BLOB-externalization.
Als u van plan bent Snapshot Manager StorSimple te gebruiken om moment opnamen van de BLOB-en database gegevens te maken, moet u StorSimple Snapshot Manager op de database server installeren, zodat deze de SQL Writer-service kan gebruiken voor het implementeren van de Windows Volume Shadow Copy Service (VSS).
> [!IMPORTANT]
> StorSimple Snapshot Manager biedt geen ondersteuning voor de share point VSS-schrijver en kan geen toepassings consistente moment opnamen van share point-gegevens maken. In een share point-scenario biedt StorSimple Snapshot Manager alleen crash consistente back-ups.
## <a name="sharepoint-farm-configuration-prerequisites"></a>Vereisten voor share point-Farm configuratie
Controleer als volgt of de share Point-server farm correct is geconfigureerd:
* Controleer of de share Point-server farm in orde is en controleer het volgende:
* Alle share point WFE-en toepassings servers die in de farm zijn geregistreerd, worden uitgevoerd en kunnen worden gepingd vanaf de server waarop u de StorSimple-adapter voor share point gaat installeren.
* De share point-timer service (SPTimerV3 of SPTimerV4) wordt uitgevoerd op elke WFE-server en toepassings server.
* Zowel de share point-timer service als de IIS-toepassings groep waaronder de share point-centrale beheer site wordt uitgevoerd, heeft Administrator bevoegdheden.
* Zorg ervoor dat Internet Explorer Enhanced Security context (IE ESC) is uitgeschakeld. Volg deze stappen voor het uitschakelen van Internet Explorer ESC:
1. Sluit alle exemplaren van Internet Explorer.
2. Start de Serverbeheer.
3. Klik in het linkerdeel venster op **lokale server**.
4. Klik in het rechterdeel venster naast **Verbeterde beveiliging van Internet Explorer**op **ingeschakeld**.
5. Klik onder **beheerders**op **uit**.
6. Klik op **OK**.
## <a name="remote-blob-storage-rbs-prerequisites"></a>Vereisten voor externe BLOB Storage (RBS)
Zorg ervoor dat u een ondersteunde versie van SQL Server gebruikt. Alleen de volgende versies worden ondersteund en kunnen gebruikmaken van RBS:
* SQL Server 2008 Enter prise Edition
* SQL Server 2008 R2 Enter prise Edition
* SQL Server 2012 Enter prise Edition
BLOBs kunnen worden geexternal op alleen de volumes die het StorSimple-apparaat aan SQL Server geeft. Er worden geen andere doelen voor BLOB-externalization ondersteund.
Wanneer u alle stappen voor de vereiste configuratie hebt voltooid, gaat u naar [de StorSimple-adapter installeren voor share point](#install-the-storsimple-adapter-for-sharepoint).
## <a name="install-the-storsimple-adapter-for-sharepoint"></a>De StorSimple-adapter voor share point installeren
Gebruik de volgende stappen om de StorSimple-adapter voor share point te installeren. Als u de software opnieuw wilt installeren, raadpleegt u [de StorSimple-adapter voor share point bijwerken of opnieuw](#upgrade-or-reinstall-the-storsimple-adapter-for-sharepoint)installeren. De tijd die nodig is voor de installatie is afhankelijk van het totale aantal share point-data bases in uw share Point-server farm.
[!INCLUDE [storsimple-install-sharepoint-adapter](../../includes/storsimple-install-sharepoint-adapter.md)]
## <a name="configure-rbs"></a>Resource structuur configureren
Nadat u de StorSimple-adapter voor share point hebt geïnstalleerd, configureert u de resource structuur zoals beschreven in de volgende procedure.
> [!TIP]
> De StorSimple-adapter voor share point wordt op de pagina Centraal beheer van share point geplaatst, zodat de resource structuur kan worden in-of uitgeschakeld voor elke inhouds database in de share point-farm. Het in-of uitschakelen van de resource structuur voor de inhouds database leidt ertoe dat een IIS-reset, afhankelijk van de configuratie van uw farm, de beschik baarheid van de share point-Webfront-end (WFE) tijdelijk kan verstoren. (Factoren zoals het gebruik van een front-end load balancer, de huidige server workload, enzovoort, kunnen deze onderbreking beperken of elimineren.) Om gebruikers te beschermen tegen onderbrekingen, raden we u aan om RBS alleen in of uit te scha kelen tijdens een gepland onderhouds venster.
[!INCLUDE [storsimple-sharepoint-adapter-configure-rbs](../../includes/storsimple-sharepoint-adapter-configure-rbs.md)]
## <a name="configure-garbage-collection"></a>Garbagecollection configureren
Wanneer objecten worden verwijderd uit een share point-site, worden ze niet automatisch verwijderd uit het RBS-archief volume. In plaats daarvan worden zwevende BLOBs uit het bestands archief verwijderd met een asynchroon, onderhouds programma voor de achtergrond. Systeem beheerders kunnen dit proces plannen om periodiek uit te voeren, of ze kunnen het altijd starten als dat nodig is.
Dit onderhouds programma (Microsoft. data. SqlRemoteBlobs. maintainer. exe) wordt automatisch geïnstalleerd op alle share point WFE-servers en toepassings servers wanneer u RBS inschakelt. Het programma wordt geïnstalleerd op de volgende locatie: *opstart station*: \Program Files\Microsoft SQL Remote Blob Storage 10,50 \ maintainer \
Zie voor meer informatie over het configureren en gebruiken van het onderhouds programma [resource structuur onderhouden in share Point Server 2013][8].
> [!IMPORTANT]
> De RBS-onderhouds programma is resource-intensief. U moet de toepassing plannen om alleen te worden uitgevoerd tijdens peri Oden met een lichte activiteit op de share point-farm.
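For reference, here is a representative Maintainer command line, adapted from the RBS maintenance documentation. The connection string name, phases, and time limit are illustrative and depend on how RBS was installed in your farm:
```
"C:\Program Files\Microsoft SQL Remote Blob Storage 10.50\Maintainer\Microsoft.Data.SqlRemoteBlobs.Maintainer.exe" -ConnectionStringName RBSMaintainerConnection -Operation GarbageCollection ConsistencyCheck ConsistencyCheckForStores -GarbageCollectionPhases rdo -ConsistencyCheckMode r -TimeLimit 120
```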
### <a name="delete-orphaned-blobs-immediately"></a>Zwevende BLOBs direct verwijderen
Als u zwevende BLOBs onmiddellijk wilt verwijderen, kunt u de volgende instructies gebruiken. Deze instructies zijn een voor beeld van hoe dit kan worden gedaan in een share point 2013-omgeving met de volgende onderdelen:
* De naam van de inhouds database is WSS_Content.
* De SQL Server naam is SHRPT13-SQL12\SHRPT13.
* De naam van de webtoepassing is share point – 80.
[!INCLUDE [storsimple-sharepoint-adapter-garbage-collection](../../includes/storsimple-sharepoint-adapter-garbage-collection.md)]
## <a name="upgrade-or-reinstall-the-storsimple-adapter-for-sharepoint"></a>De StorSimple-adapter voor share point bijwerken of opnieuw installeren
Gebruik de volgende procedure om share Point server bij te werken en de StorSimple-adapter voor share point opnieuw te installeren of om eenvoudigweg de adapter in een bestaande share Point-server farm te upgraden of opnieuw te installeren.
> [!IMPORTANT]
> Review the following information before you upgrade your SharePoint software and/or upgrade or reinstall the StorSimple Adapter for SharePoint:
>
> * Files that were previously moved to external storage through RBS are not available until the installation is complete and the RBS feature is re-enabled. To limit user impact, perform any upgrade or reinstallation during a scheduled maintenance window.
> * The time required for the upgrade/reinstallation can vary, depending on the total number of SharePoint databases in the SharePoint server farm.
> * After the upgrade/reinstallation is complete, you need to enable RBS for the content databases. For more information, see [Configure RBS](#configure-rbs).
> * If you configure RBS for a SharePoint farm with a very large number of databases (greater than 200), the **SharePoint Central Administration** page might time out. If that occurs, refresh the page. This does not affect the configuration process.
[!INCLUDE [storsimple-upgrade-sharepoint-adapter](../../includes/storsimple-upgrade-sharepoint-adapter.md)]
## <a name="storsimple-adapter-for-sharepoint-removal"></a>StorSimple-adapter voor share point verwijderen
In de volgende procedures wordt beschreven hoe u de BLOBs terugplaatst naar de SQL Server-inhouds databases en vervolgens de StorSimple-adapter voor share point kunt verwijderen.
> [!IMPORTANT]
> You must move the BLOBs back to the content databases before you uninstall the adapter software.
### <a name="before-you-begin"></a>Voordat u begint
Verzamel de volgende informatie voordat u de gegevens weer naar de SQL Server-inhouds databases verplaatst en het verwijderen van de adapter start:
* De namen van alle data bases waarvoor de resource structuur is ingeschakeld
* Het UNC-pad van het geconfigureerde BLOB-archief
### <a name="move-the-blobs-back-to-the-content-databases"></a>Verplaats de BLOBs terug naar de inhouds databases
Voordat u de StorSimple-adapter voor share point-software verwijdert, moet u alle BLOBs migreren die extern zijn teruggestuurd naar de SQL Server-inhouds databases. Als u de StorSimple-adapter voor share point probeert te verwijderen voordat u alle BLOBs weer naar de inhouds databases verplaatst, wordt de volgende waarschuwing weer gegeven.

#### <a name="to-move-the-blobs-back-to-the-content-databases"></a>De BLOBs weer verplaatsen naar de inhouds databases
1. Down load elk van de externe objecten.
2. Open de pagina **Centraal beheer van share point** en blader naar **systeem instellingen**.
3. Klik onder **Azure StorSimple**op **StorSimple-adapter configureren**.
4. Op de pagina **StorSimple-adapter configureren** klikt u op de knop **uitschakelen** onder elk van de inhouds databases die u wilt verwijderen uit externe Blob-opslag.
5. Verwijder de objecten uit share point en upload ze vervolgens opnieuw.
Alternatively, you can use the Microsoft `RBS Migrate()` PowerShell cmdlet included with SharePoint. For more information, see [Migrate content into or out of RBS](https://technet.microsoft.com/library/ff628255.aspx).
After you move the BLOBs back to the content database, go to the next step: [Uninstall the adapter](#uninstall-the-adapter).
### <a name="uninstall-the-adapter"></a>De adapter verwijderen
Nadat u de BLOBs weer naar de SQL Server-inhouds databases hebt verplaatst, gebruikt u een van de volgende opties om de StorSimple-adapter voor share point te verwijderen.
#### <a name="to-use-the-installation-program-to-uninstall-the-adapter"></a>Het installatie programma gebruiken om de adapter te verwijderen
1. Gebruik een account met beheerders bevoegdheden om u aan te melden bij de server van de web-front-end (WFE).
2. Dubbel klik op de StorSimple-adapter voor share point Installer. De wizard Setup wordt gestart.

3. Click **Next**. The following page appears.

4. Click **Remove** to select the removal process. The following page appears.

5. Click **Remove** to confirm the removal. The following progress page appears.

6. When the removal is complete, the Finished page appears. Click **Finish** to close the setup wizard.
#### <a name="to-use-the-control-panel-to-uninstall-the-adapter"></a>Het configuratie scherm gebruiken om de adapter te verwijderen
1. Open het configuratie scherm en klik vervolgens op **Program ma's en onderdelen**.
2. Selecteer **StorSimple-adapter voor share point**en klik vervolgens op **verwijderen**.
## <a name="next-steps"></a>Volgende stappen
Meer [informatie over StorSimple](storsimple-overview.md).
<!--Reference links-->
[1]: https://www.microsoft.com/download/details.aspx?id=44073
[2]: https://technet.microsoft.com/library/ff628583(v=office.15).aspx
[3]: https://technet.microsoft.com/library/ff628583(v=office.14).aspx
[4]: https://technet.microsoft.com/library/ff628569(v=office.14).aspx
[5]: https://technet.microsoft.com/library/ff628583(v=office.15).aspx
[8]: https://technet.microsoft.com/library/ff943565.aspx
| 96.759398 | 821 | 0.803015 | nld_Latn | 0.998985 |
ff6f56125256f636184e5a1770131b48b0f98fb1 | 2,782 | md | Markdown | _posts/2021-04-03-blog-post-6.md | spliew/spliew.github.io | 0f4f4a1e0260ae7d0487f7377eb18779daafcc78 | [
"MIT"
] | null | null | null | _posts/2021-04-03-blog-post-6.md | spliew/spliew.github.io | 0f4f4a1e0260ae7d0487f7377eb18779daafcc78 | [
"MIT"
] | null | null | null | _posts/2021-04-03-blog-post-6.md | spliew/spliew.github.io | 0f4f4a1e0260ae7d0487f7377eb18779daafcc78 | [
"MIT"
] | null | null | null | ---
title: 'Privacy papers in ICLR 2021'
date: 2021-05-07
permalink: /posts/2021/05/privacy-iclr21/
tags:
- Privacy
- Machine Learning
- Federated Learning
---
I have curated and am beginning to read ICLR '21 papers related to privacy and federated learning. The list will be constantly updated with the paper summaries. Stay tuned!
*Note that I wrote a simple script to scrape the links to the papers, so some links may not be accurate.*
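For reference, a scraper of roughly the following shape could have produced this table. This is a hypothetical minimal sketch, not the actual script; the listing URL and the use of `requests`/`BeautifulSoup` are my assumptions:
```python
# Minimal sketch of a link scraper (assumed approach, not the original script).
# It fetches a static HTML listing page and collects OpenReview forum links.
import requests
from bs4 import BeautifulSoup

def scrape_paper_links(listing_url):
    html = requests.get(listing_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    links = []
    for a in soup.find_all("a", href=True):
        if "openreview.net/forum" in a["href"]:
            links.append((a.get_text(strip=True), a["href"]))
    return links

# Hypothetical usage; replace the URL with the real listing page.
# for title, url in scrape_paper_links("https://example.com/iclr2021-accepted"):
#     print(f"|[{title}]({url})||")
```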
|*Title* |*Summary* |
|---|---|
|[Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms](https://openreview.net/forum?id=GFsU8a0sGB)||
|[Adaptive Federated Optimization](https://openreview.net/forum?id=LkFG3lB13U5)||
|[Information Laundering for Model Privacy](https://openreview.net/forum?id=dyaIRud1zXg)||
|[Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning](https://openreview.net/forum?id=jDdzh5ul-d)||
|[Federated Learning Based on Dynamic Regularization](https://openreview.net/forum?id=B7v4QMR6Z9w)||
|[CaPC Learning: Confidential and Private Collaborative Learning](https://openreview.net/forum?id=h2EbJ4_wMVq)||
|[Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning](https://openreview.net/forum?id=ce6CFXBh30h)||
|[Do not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning](https://openreview.net/forum?id=7aogOj_VYO0)||
|[FedBN: Federated Learning on Non-IID Features via Local Batch Normalization](https://openreview.net/forum?id=6YEQUn0QICG)||
|[Private Image Reconstruction from System Side Channels Using Generative Models](https://openreview.net/forum?id=y06VOYLcQXa)||
|[FedBE: Making Bayesian Model Ensemble Applicable to Federated Learning](https://openreview.net/forum?id=dgtpE6gKjHn)||
|[Differentially Private Learning Needs Better Features (or Much More Data)](https://openreview.net/forum?id=YTWGvpFOQD-)||
|[FedMix: Approximation of Mixup under Mean Augmented Federated Learning](https://openreview.net/forum?id=Ogga20D2HO-)||
|[HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients](https://openreview.net/forum?id=TNkPBBYFkXg)||
|[R-GAP: Recursive Gradient Attack on Privacy](https://openreview.net/forum?id=RSU17UoKfJF)||
|[Private Post-GAN Boosting](https://openreview.net/forum?id=6isfR3JCbi)|Re-weights the sequence of learned generators and discriminators using MWEM to increase performance after training.|
|[Bypassing the Ambient Dimension: Private SGD with Gradient Subspace Identification](https://openreview.net/forum?id=7dpmlkBuJFC)||
|[Personalized Federated Learning with First Order Model Optimization](https://openreview.net/forum?id=ehJqJQk9cw)|No global model; personalization via interaction with other clients.|
ff6fb90abb86d09a9aa4acc6772f225a6939dc40 | 856 | md | Markdown | site/rover/MRK-OVERVIEW/MRK-ENTRY/DD250-E/DD250-E-2/README.md | mikes-zum/docs | 2f60f8f79dea5b56d930021f17394c5b9afb86d5 | [
"MIT"
] | 7 | 2019-12-06T23:39:36.000Z | 2020-12-13T13:26:23.000Z | site/rover/MRK-OVERVIEW/MRK-ENTRY/DD250-E/DD250-E-2/README.md | mikes-zum/docs | 2f60f8f79dea5b56d930021f17394c5b9afb86d5 | [
"MIT"
] | 36 | 2020-01-21T00:17:12.000Z | 2022-02-28T03:24:29.000Z | site/rover/MRK-OVERVIEW/MRK-ENTRY/DD250-E/DD250-E-2/README.md | mikes-zum/docs | 2f60f8f79dea5b56d930021f17394c5b9afb86d5 | [
"MIT"
] | 33 | 2020-02-07T12:24:42.000Z | 2022-03-24T15:38:31.000Z | ## DD250 Data Entry (DD250.E)
<PageHeader />
## Line Item

**Ship ID** This field contains the shipment ID from screen 1.
**Li** This field contains the line item from the SHIP record.
**Part** The part being shipped on this line item.
**T$X205** This field contains the description, as loaded from the SHIP file,
or the PARTS file. Changes
may be made as required. Any changes will not be reflected on the SHIP record.
**UM** The unit of measure being shipped.
**Qty** The quantity being shipped on this line item.
**Unit.Price** The unit price of the part being delivered.
**Extension** The extended value of the line item (Quantity X Unit Price).
**Total** The total extended value of the shipment line items.
<badge text= "Version 8.10.57" vertical="middle" />
<PageFooter /> | 26.75 | 80 | 0.682243 | eng_Latn | 0.994823 |
ff70054e4af70e050c5a5fdf65b5bdddd39b226b | 984 | md | Markdown | README.md | panubo/docker-acme | 3a0396eefc9d339629a55740aa8a349e4a733a84 | [
"MIT"
] | 4 | 2016-10-31T11:20:44.000Z | 2020-11-13T16:11:30.000Z | README.md | panubo/docker-acme | 3a0396eefc9d339629a55740aa8a349e4a733a84 | [
"MIT"
] | 1 | 2016-08-30T00:03:06.000Z | 2016-08-30T01:24:53.000Z | README.md | panubo/docker-acme | 3a0396eefc9d339629a55740aa8a349e4a733a84 | [
"MIT"
] | 2 | 2016-07-26T10:36:47.000Z | 2021-05-31T20:30:47.000Z | # Docker ACME
Docker image for Let's Encrypt [ACME tiny client](https://github.com/diafygi/acme-tiny).
## Example Usage
If you keep a script `acme.sh` on your host with the following:
```
#!/usr/bin/env bash
docker run --rm -t -i \
-v /mnt/data00/nginx.service/ssl:/etc/letsencrypt:z \
-v /mnt/data00/nginx.service/challenges:/var/www/challenges:z \
docker.io/panubo/acme $@
```
Then you can issue certs with one command:
```
[root@docker-host ~]# acme.sh --issue test.example.com
>> Generating Key for test.example.com
Generating RSA private key, 2048 bit long modulus
................+++
.................................................+++
e is 65537 (0x10001)
>> Generating CSR for test.example.com
>> Running ACME for test.example.com
Parsing account key...
Parsing CSR...
Registering account...
Already registered!
Verifying test.example.com...
test.example.com verified!
Signing certificate...
Certificate signed!
>> Generating concatenated PEM
>> All done
```
| 24.6 | 88 | 0.672764 | eng_Latn | 0.503668 |
ff704d3d1d216f15869e28b471fff56917469f35 | 9 | md | Markdown | README.md | onur-ozel/o-bank | 2c84e48c74ec71b0f5dcd9d387ba1af723d577d7 | [
"MIT"
] | 2 | 2019-03-14T05:10:29.000Z | 2019-03-25T04:57:11.000Z | README.md | onur-ozel/o-bank | 2c84e48c74ec71b0f5dcd9d387ba1af723d577d7 | [
"MIT"
] | null | null | null | README.md | onur-ozel/o-bank | 2c84e48c74ec71b0f5dcd9d387ba1af723d577d7 | [
"MIT"
] | 2 | 2019-07-06T06:08:03.000Z | 2019-11-26T19:28:39.000Z | # o-bank
| 4.5 | 8 | 0.555556 | eng_Latn | 0.538208 |
ff70e12021b893a129f708412976ee01f3ebb220 | 2,407 | md | Markdown | pig-commands/README.md | dfwgreg/BD_Resources | 62c3475c7a4d075024fee9542a43da375a3c903c | [
"MIT"
] | null | null | null | pig-commands/README.md | dfwgreg/BD_Resources | 62c3475c7a4d075024fee9542a43da375a3c903c | [
"MIT"
] | null | null | null | pig-commands/README.md | dfwgreg/BD_Resources | 62c3475c7a4d075024fee9542a43da375a3c903c | [
"MIT"
] | null | null | null | pig-command-examples
============================
## Demo Commands and Operations
### Pig files
The following are the pig files that are present in this repository.
- load-store.pig - Load and store example
- multi-delimiter.pig - Explain how to load data using multiple delimiters
- data-bag.pig - Explain the bag data structure
- lookup.pig - Explains how to lookup to a particular column in a relation
- group-by.pig - Explains how group up operator works
- filter-by.pig - Demonstrates the filter by operator
- count.pig - Demonstrates the count operator
- order-by.pig - Demonstrates the order by operator
- distinct.pig - Demonstrates the distinct operator
- limit.pig - Demonstrates the LIMIT operator
- join.pig - Demonstrates the JOIN operator
- count-words.pig - Count the number of words in a text file
- strip-quote.pig - Strips quotes from tweets. Demonstrates UDF
- get-twitter-names.pig - Finds twitter usernames from tweets. Demonstrates UDF
- from-vim.pig - Finds all tweets that were tweeted from Vim. Demonstrates Filter UDF functions
- stream-shell-script.pig - Demonstrates how shell commands can be executed in Pig through Streaming
- stream-python.pig - Streams data through a python script
- strip.py - This python script would be invoked by Pig and the data will be streamed through it
### UDF directory
The /udf directory contains the Java source code for the UDF that are consumed by the pig scripts. The following are the Java files that are present in this directory.
- StripQuote.java - An Eval UDF function that strips quote from strings
- GetTwitterNames.java - An Eval UDF function to get the twitter names in a tweet
- FromVim.java - A filter UDF function
### Data directory
The /data directory contains the data files that are used by these samples. The following are the files that are present in this directory.
- dropbox-policy.txt - A copy of dropbox policy document. This file is used for doing word analysis.
- tweets.csv - A dump of my tweets that I got from twitter.
- data-bag.txt - A simple data file to explain bag data type
- nested-schema.txt - A simple data file to explain nested data schemas
- simple-tuples - A simple data file to explain simple tuples
- multi-delimiter.txt - A simple data file that contains data that have multiple delimiters
### other files
- clean.sh - A shell script to clean pig log files and output data files
| 44.574074 | 167 | 0.761529 | eng_Latn | 0.993074 |
ff713c374a2841d6529286280de1734bb7b60553 | 1,595 | md | Markdown | docs/resources/identity_mapping.md | mosersil/terraform-provider-pingaccess | f10d772e70d9906dc63779c75c6fc6ddfcd3c41c | [
"MIT"
] | 13 | 2019-04-27T15:38:04.000Z | 2021-09-15T18:15:49.000Z | docs/resources/identity_mapping.md | mosersil/terraform-provider-pingaccess | f10d772e70d9906dc63779c75c6fc6ddfcd3c41c | [
"MIT"
] | 101 | 2019-08-14T15:38:46.000Z | 2022-03-18T10:09:24.000Z | docs/resources/identity_mapping.md | mosersil/terraform-provider-pingaccess | f10d772e70d9906dc63779c75c6fc6ddfcd3c41c | [
"MIT"
] | 8 | 2019-04-17T13:11:12.000Z | 2021-11-27T20:24:16.000Z | ---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "pingaccess_identity_mapping Resource - terraform-provider-pingaccess"
subcategory: ""
description: |-
Provides configuration for Identity Mappings within PingAccess.
-> The PingAccess API does not provider repeatable means of querying a sensitive value, we are unable to detect configuration drift of any sensitive fields in the configuration block.
---
# pingaccess_identity_mapping (Resource)
Provides configuration for Identity Mappings within PingAccess.
-> The PingAccess API does not provider repeatable means of querying a sensitive value, we are unable to detect configuration drift of any sensitive fields in the configuration block.
## Example Usage
```terraform
resource "pingaccess_identity_mapping" "example" {
class_name = "com.pingidentity.pa.identitymappings.HeaderIdentityMapping"
name = "example"
configuration = <<EOF
{
"attributeHeaderMappings": [
{
"subject": true,
"attributeName": "sub",
"headerName": "sub"
}
],
"headerClientCertificateMappings": []
}
EOF
}
```
<!-- schema generated by tfplugindocs -->
## Schema
### Required
- **class_name** (String) The identity mapping's class name.
- **configuration** (String) The identity mapping's configuration data.
- **name** (String) The name of the identity mapping.
### Read-Only
- **id** (String) The ID of this resource.
## Import
Import is supported using the following syntax:
```shell
terraform import pingaccess_identity_mapping.example 123
```
| 27.982456 | 185 | 0.729781 | eng_Latn | 0.910732 |
ff71c7423e795ac3175932a2adb45291a2e0ff71 | 365 | md | Markdown | facebook-notes.md | CovalentLabs/project-w-app | 617fc588777c0bf0073604eb8d80cfc791e0bc03 | [
"MIT"
] | null | null | null | facebook-notes.md | CovalentLabs/project-w-app | 617fc588777c0bf0073604eb8d80cfc791e0bc03 | [
"MIT"
] | null | null | null | facebook-notes.md | CovalentLabs/project-w-app | 617fc588777c0bf0073604eb8d80cfc791e0bc03 | [
"MIT"
] | null | null | null | ```shell
cordova plugin add cordova-plugin-facebook4 --save --variable APP_ID="539675555556915" --variable APP_NAME="covalent"
```
```javascript
facebookConnectPlugin.login(
[
"public_profile"
],
(success) => {
console.log(success)
},
(failureString) => {
console.error(failureString)
}
)
```
Release Key Hash
Bmce+9aHdOoVtE7fS3B07tfj7Bc=
| 17.380952 | 117 | 0.690411 | yue_Hant | 0.238571 |
ff71f6f9e75672ffa5f092fcdc47f905b2ac109a | 2,039 | md | Markdown | sigr/README.md | yuikns/stork | 6e0712625a5db885d087801c378a65f181fbb32b | [
"MIT"
] | 5 | 2018-10-17T03:39:55.000Z | 2021-05-31T08:56:35.000Z | sigr/README.md | yuikns/stork | 6e0712625a5db885d087801c378a65f181fbb32b | [
"MIT"
] | null | null | null | sigr/README.md | yuikns/stork | 6e0712625a5db885d087801c378a65f181fbb32b | [
"MIT"
] | 1 | 2020-04-11T05:53:07.000Z | 2020-04-11T05:53:07.000Z | # Signal Register
[![Build Status][badge-travis]][link-travis]
[![MIT License][badge-license]](LICENSE)
Package `argcv/sigr` implements a simple signal proxy, which is used to manage the signal easier for multiple tasks.
You can add one or a few functions after interrupt/quit/.... signal comes and befure really quit.
Currently, the process sequence are not in order.
## Install
```bash
go get -u github.com/argcv/stork/sigr
```
## Example
```go
package main
import (
"fmt"
"github.com/argcv/sigr"
"time"
)
func main() {
// register a new name
// this is used to
name1 := sigr.RegisterOnStopFuncAutoName(func() {
fmt.Println("Hello, World! #1")
})
name2 := sigr.RegisterOnStopFuncAutoName(func() {
fmt.Println("Hello, World! #2")
})
sigr.RegisterOnStopFunc("customized name", func() {
fmt.Println("Hello, World! #3")
})
fmt.Println("name1:", name1, "name2:", name2)
sigr.RegisterOnStopFunc(name1, func() {
fmt.Println("Hello, World! #1 #overridden")
})
sigr.UnregisterOnStopFunc(name2)
// default: true
sigr.SetQuitDirectyl(true)
// get verbose log
sigr.VerboseLog()
// without verbose log, your printing is still work here
//sigr.NoLog()
// you may type ctrl+C to excute the functions (name1 and "customized name")
time.Sleep(time.Duration(20 * time.Second))
}
```
The output seems as follow:
```bash
$ go run example.go
name1: __auto_1 name2: __auto_2
^C2017/10/26 19:06:17.804737 sigr.go:74: sig: [interrupt] Processing...
2017/10/26 19:06:17.804935 sigr.go:78: Processing task [__auto_1]
Hello, World! #1 #overridden
2017/10/26 19:06:17.804949 sigr.go:78: Processing task [customized name]
Hello, World! #3
2017/10/26 19:06:17.805069 sigr.go:85: sig: [interrupt] Processed, quitting directly
signal: interrupt
```
[badge-travis]: https://travis-ci.org/argcv/sigr.svg?branch=master
[link-travis]: https://travis-ci.org/argcv/sigr
[image-travis]: https://github.com/argcv/sigr/blob/master/img/TravisCI.png
[badge-license]: https://img.shields.io/badge/license-MIT-007EC7.svg
| 25.4875 | 116 | 0.710642 | eng_Latn | 0.464912 |
ff71fb1bf828a2e0d729b8c2cc06b08c0d91e2af | 169 | md | Markdown | README.md | MajenkoLibraries/EERAM_DTWI | c0a51e357b7600b2b8574dc16279ce17479c2a2d | [
"BSD-3-Clause"
] | null | null | null | README.md | MajenkoLibraries/EERAM_DTWI | c0a51e357b7600b2b8574dc16279ce17479c2a2d | [
"BSD-3-Clause"
] | null | null | null | README.md | MajenkoLibraries/EERAM_DTWI | c0a51e357b7600b2b8574dc16279ce17479c2a2d | [
"BSD-3-Clause"
] | null | null | null | chipKIT DTWI EERAM library
==========================
This library is for using the 47L16 I2C EERAM chip
from Microchip.
It is designed to work with the DTWI library.
| 21.125 | 50 | 0.668639 | eng_Latn | 0.968722 |
ff72342450d70673de9c5ba1a6b055f83fda9ef8 | 230 | md | Markdown | RELEASE NOTES.md | ashfurrow/FXKeychain | 5f192d2ab557603dec3f1fe62eae4d7145857f0e | [
"Zlib"
] | 1 | 2015-11-08T14:34:39.000Z | 2015-11-08T14:34:39.000Z | RELEASE NOTES.md | ashfurrow/FXKeychain | 5f192d2ab557603dec3f1fe62eae4d7145857f0e | [
"Zlib"
] | null | null | null | RELEASE NOTES.md | ashfurrow/FXKeychain | 5f192d2ab557603dec3f1fe62eae4d7145857f0e | [
"Zlib"
] | null | null | null | Version 1.1
- Now uses application bundle ID to namespace the default keychain
- Now supports keyed subcripting (e.g. keychain["foo"] = bar;)
- Included CocoaPods podspec file
- Incuded Mac example
Version 1.0
- Initial release | 23 | 66 | 0.76087 | eng_Latn | 0.983077 |
ff7254742236fba003a68e8a8bfd906cb1a0b66d | 1,327 | md | Markdown | docs/relational-databases/system-functions/system-metadata-functions.md | relsna/sql-docs | 0ff8e6a5ffa09337bc1336e2e1a99b9c747419e4 | [
"CC-BY-4.0",
"MIT"
] | 840 | 2017-07-21T12:44:54.000Z | 2022-03-30T01:28:51.000Z | docs/relational-databases/system-functions/system-metadata-functions.md | relsna/sql-docs | 0ff8e6a5ffa09337bc1336e2e1a99b9c747419e4 | [
"CC-BY-4.0",
"MIT"
] | 7,326 | 2017-07-19T17:35:42.000Z | 2022-03-31T20:48:54.000Z | docs/relational-databases/system-functions/system-metadata-functions.md | relsna/sql-docs | 0ff8e6a5ffa09337bc1336e2e1a99b9c747419e4 | [
"CC-BY-4.0",
"MIT"
] | 2,698 | 2017-07-19T18:12:53.000Z | 2022-03-31T20:25:07.000Z | ---
description: "System Metadata Functions"
title: "System Metadata Functions | Microsoft Docs"
ms.custom: ""
ms.date: "03/14/2017"
ms.prod: sql
ms.prod_service: "database-engine"
ms.reviewer: ""
ms.technology: system-objects
ms.topic: "reference"
dev_langs:
- "TSQL"
ms.assetid: a6fb85b2-b010-4ca9-b65f-4402917076ea
author: WilliamDAssafMSFT
ms.author: wiassaf
---
# System Metadata Functions
[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]
[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] provides the following metadata functions.
## In This Section
[fn_helpcollations](../../relational-databases/system-functions/sys-fn-helpcollations-transact-sql.md)
[fn_listextendedproperty](../../relational-databases/system-functions/sys-fn-listextendedproperty-transact-sql.md)
[fn_servershareddrives](../../relational-databases/system-functions/sys-fn-servershareddrives-transact-sql.md)
[fn_virtualfilestats](../../relational-databases/system-functions/sys-fn-virtualfilestats-transact-sql.md)
[fn_virtualfileservermodes](../../relational-databases/system-functions/sys-fn-virtualservernodes-transact-sql.md)
[fn_PageResCracker](../../relational-databases/system-functions/sys-fn-pagerescracker-transact-sql.md)
| 35.864865 | 118 | 0.735494 | yue_Hant | 0.138594 |
ff72a51299bdbc1676ef5f7e47306b53df9e0d4f | 6,802 | md | Markdown | _posts/2015/2015-07-27-automating-opsmgr-part-10-deleting-groups.md | tyconsulting/blog.tyang.github.io | 498c9b809f41e3469fd3a4a78a06fc94590a411a | [
"Apache-2.0"
] | 5 | 2021-08-08T17:39:58.000Z | 2022-03-27T20:13:30.000Z | _posts/2015/2015-07-27-automating-opsmgr-part-10-deleting-groups.md | tyconsulting/blog.tyang.github.io | 498c9b809f41e3469fd3a4a78a06fc94590a411a | [
"Apache-2.0"
] | null | null | null | _posts/2015/2015-07-27-automating-opsmgr-part-10-deleting-groups.md | tyconsulting/blog.tyang.github.io | 498c9b809f41e3469fd3a4a78a06fc94590a411a | [
"Apache-2.0"
] | null | null | null | ---
id: 4272
title: 'Automating OpsMgr Part 10: Deleting Groups'
date: 2015-07-27T16:02:54+10:00
author: Tao Yang
#layout: post
excerpt: ""
header:
overlay_image: /wp-content/uploads/2015/06/OpsMgrExnteded-banner.png
overlay_filter: 0.5 # same as adding an opacity of 0.5 to a black background
guid: http://blog.tyang.org/?p=4272
permalink: /2015/07/27/automating-opsmgr-part-10-deleting-groups/
categories:
- PowerShell
- SCOM
- SMA
tags:
- Automating OpsMgr
- PowerShell
- SCOM
- SMA
---
## Introduction
This is the 10th instalment of the Automating OpsMgr series. Previously on this series:
* [Automating OpsMgr Part 1: Introducing OpsMgrExtended PowerShell / SMA Module](http://blog.tyang.org/2015/06/24/automating-opsmgr-part-1-introducing-opsmgrextended-powershell-sma-module/)
* [Automating OpsMgr Part 2: SMA Runbook for Creating ConfigMgr Log Collection Rules](http://blog.tyang.org/2015/06/28/automating-opsmgr-part-2-sma-runbook-for-creating-configmgr-log-collection-rules/)
* [Automating OpsMgr Part 3: New Management Pack Runbook via SMA and Azure Automation](http://blog.tyang.org/2015/06/30/automating-opsmgr-part-3-new-management-pack-runbook-via-sma-and-azure-automation/)
* [Automating OpsMgr Part 4:Creating New Empty Groups](http://blog.tyang.org/2015/07/02/automating-opsmgr-part-4-create-new-empty-groups/)
* [Automating OpsMgr Part 5: Adding Computers to Computer Groups](http://blog.tyang.org/2015/07/06/automating-opsmgr-part-5-adding-computers-to-computer-groups/)
* [Automating OpsMgr Part 6: Adding Monitoring Objects to Instance Groups](http://blog.tyang.org/2015/07/13/automating-opsmgr-part-6-adding-monitoring-objects-to-instance-groups/)
* [Automating OpsMgr Part 7: Updated OpsMgrExtended Module](http://blog.tyang.org/2015/07/17/automating-opsmgr-part-7-updated-opsmgrextended-module/)
* [Automating OpsMgr Part 8: Adding Management Pack References](http://blog.tyang.org/2015/07/17/automating-opsmgr-part-8-adding-management-pack-references/)
* [Automating OpsMgr Part 9: Updating Group Discoveries](http://blog.tyang.org/2015/07/17/automating-opsmgr-part-9-updating-group-discoveries/)
As I have previously demonstrated how to create and update OpsMgr groups using the **OpsMgrExtended** module, it's now time to cover how to delete groups in OpsMgr.
Deleting groups that are defined in unsealed management packs can be easily accomplished using the **Remove-OMGroup** function from the OpsMgrExtended module. This function deletes the group class definition and discoveries from the unsealed MP. However, since it's very common for OpsMgr administrators to also create dependency monitors for groups (for group members health rollup), you cannot simply use Remove-OMGroup function to delete groups when there are also monitors targeting this group. Therefore, I have written a sample runbook to delete the group as well as monitors targeting the group (if there are any).
## Runbook Delete-OpsMgrGroup
```powershell
Workflow Delete-OpsMgrGroup
{
Param(
[Parameter(Mandatory=$true)][String]$GroupName,
[Parameter(Mandatory=$true)][Boolean]$IncreaseMPVersion
)
#Get OpsMgrSDK connection object
$OpsMgrSDKConn = Get-AutomationConnection -Name "OpsMgrSDK_HOME"
#Firstly, make sure the monitors targeting this group are deleted (i.e. dependency monitors for health rollup)
Write-Verbose "Checking dependency monitors targeting the group '$GroupName'."
$bDeleteMonitors = InlineScript {
#Connect to MG
$MG = Connect-OMManagementGroup -SDKConnection $USING:OpsMgrSDKConn
$Group = $MG.GetMonitoringClasses($USING:GroupName)
$GroupMP = $Group.GetManagementPack()
If ($GroupMP.Sealed -eq $true)
{
Write-Error "The group is defined in a sealed MP, unable to continue."
Return $false
}
$GroupID = $Group.Id.ToString()
$MonitorCriteria = New-Object Microsoft.EnterpriseManagement.Configuration.ManagementPackMonitorCriteria("Target='$GroupID'")
$Monitors = $MG.GetMonitors($MonitorCriteria)
Foreach ($Monitor in $Monitors)
{
Write-Verbose "Deleting '$($Monitor.Name)'..."
$MonitorMP = $Monitor.GetManagementPack()
$Monitor.Status = "PendingDelete"
Try {
$MonitorMP.Verify()
$MonitorMP.AcceptChanges()
} Catch {
Write-Error $_.Exception.InnerException.Message
Return $false
}
}
Return $true
}
If ($bDeleteMonitors -eq $true)
{
$bGroupDeleted =Remove-OMGroup -SDKConnection $OpsMgrSDKConn -GroupName $GroupName -IncreaseMPVersion $IncreaseMPVersion
}
If ($bGroupDeleted -eq $true)
{
Write-Output "Done."
} else {
throw "Unable to delete group '$GroupName'."
exit
}
}
```
In order to use this runbook, you will need to update Line 9, with the name of your SMA connection object.
<a href="http://blog.tyang.org/wp-content/uploads/2015/07/SNAGHTML1591389.png"><img style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="SNAGHTML1591389" src="http://blog.tyang.org/wp-content/uploads/2015/07/SNAGHTML1591389_thumb.png" alt="SNAGHTML1591389" width="600" height="404" border="0" /></a>
This runbook takes 2 parameters:
<a href="http://blog.tyang.org/wp-content/uploads/2015/07/image37.png"><img style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="http://blog.tyang.org/wp-content/uploads/2015/07/image_thumb37.png" alt="image" width="383" height="329" border="0" /></a>
* **GroupName:** the name of the group you are deleting. Please note you can only delete groups defined in unsealed MPs.
* **IncreaseMPVersion:** Boolean variable, specify if the unsealed MP version should be increased by 0.0.0.1
Runbook Result:
<a href="http://blog.tyang.org/wp-content/uploads/2015/07/image38.png"><img style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="image" src="http://blog.tyang.org/wp-content/uploads/2015/07/image_thumb38.png" alt="image" width="462" height="486" border="0" /></a>
Verbose Messages (deleting dependency monitors):
<a href="http://blog.tyang.org/wp-content/uploads/2015/07/SNAGHTML14fa1a1.png"><img style="background-image: none; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border: 0px;" title="SNAGHTML14fa1a1" src="http://blog.tyang.org/wp-content/uploads/2015/07/SNAGHTML14fa1a1_thumb.png" alt="SNAGHTML14fa1a1" width="678" height="467" border="0" /></a>
## Conclusion
This post is rather short compared to some of the previous ones in this series. I have a few ideas for the next post, but haven't decided which one I am going to write first. Anyways, until next time, happy automating!
ff72ac40a5236445de4fec5974ba71155b511c86 | 3,641 | md | Markdown | articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md | AClerbois/azure-docs.fr-fr | 6a7ba4fa8a31f7915c925fef2f809b03d9a2df4e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md | AClerbois/azure-docs.fr-fr | 6a7ba4fa8a31f7915c925fef2f809b03d9a2df4e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md | AClerbois/azure-docs.fr-fr | 6a7ba4fa8a31f7915c925fef2f809b03d9a2df4e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Télécharger un fichier de disque dur virtuel vers Azure DevTest Labs avec PowerShell | Microsoft Docs
description: Cet article présente une procédure pas à pas qui vous explique comment charger un fichier VHD sur Azure DevTest Labs avec PowerShell.
ms.topic: article
ms.date: 06/26/2020
ms.openlocfilehash: 2b393b886a50f60a918690ee2a5583f9623dbe39
ms.sourcegitcommit: 271601d3eeeb9422e36353d32d57bd6e331f4d7b
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/20/2020
ms.locfileid: "88650755"
---
# <a name="upload-vhd-file-to-labs-storage-account-using-powershell"></a>Télécharger le fichier de disque dur virtuel dans le compte de stockage du laboratoire avec PowerShell
[!INCLUDE [devtest-lab-upload-vhd-selector](../../includes/devtest-lab-upload-vhd-selector.md)]
Dans Azure DevTest Labs, les fichiers de disque dur virtuel peuvent être utilisés pour créer des images personnalisées, qui servent à approvisionner des machines virtuelles. Les étapes suivantes vous guident lors de l’utilisation de PowerShell pour charger un fichier de disque dur virtuel sur le compte de stockage d’un laboratoire. Une fois que vous avez téléchargé votre fichier de disque dur virtuel, consultez la [section Étapes suivantes](#next-steps) qui répertorie quelques articles qui expliquent comment créer une image personnalisée à partir du fichier de disque dur virtuel téléchargé. Pour plus d’informations sur les disques et les disques durs virtuels dans Azure, consultez [Présentation des disques managés](../virtual-machines/managed-disks-overview.md)
## <a name="step-by-step-instructions"></a>Instructions pas à pas
Les étapes suivantes vous guident lors du téléchargement d’un fichier de disque dur virtuel dans Azure DevTest Labs avec PowerShell.
1. Connectez-vous au [portail Azure](https://go.microsoft.com/fwlink/p/?LinkID=525040).
1. Sélectionnez **Tous les services**, puis **DevTest Labs** dans la liste.
1. Sélectionnez le laboratoire souhaité dans la liste des laboratoires.
1. Dans le panneau du laboratoire, sélectionnez **Configuration**.
1. Dans le panneau **Configuration** du laboratoire, sélectionnez **Custom images (VHDs)** (Images personnalisées (disques durs virtuels)).
1. Dans le panneau **Images personnalisées**, sélectionnez **+Ajouter**.
1. Dans le panneau **Image personnalisée**, sélectionnez **Disque dur virtuel**.
1. Dans le panneau **Disque dur virtuel**, sélectionnez **Upload a VHD using PowerShell** (Télécharger un disque dur virtuel à l’aide de PowerShell).

1. Dans le panneau **Charger un disque dur virtuel avec PowerShell**, copiez le script PowerShell généré dans un éditeur de texte.
1. Modifiez le paramètre **LocalFilePath** de l’applet de commande **Add-AzureVhd** pour pointer vers l’emplacement du fichier VHD à télécharger.
1. À l’invite de PowerShell, exécutez l’applet de commande **Add-AzureVhd** (avec le paramètre **LocalFilePath** modifié).
> [!WARNING]
>
> Le processus de téléchargement d’un fichier de disque dur virtuel peut durer un certain temps en fonction de sa taille et de votre vitesse de connexion.
## <a name="next-steps"></a>Étapes suivantes
- [Créer une image personnalisée dans Azure DevTest Labs à partir d’un fichier de disque dur virtuel à l’aide du portail Azure](devtest-lab-create-template.md)
- [Créer une image personnalisée dans Azure DevTest Labs à partir d’un fichier de disque dur virtuel à l’aide de PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md)
| 66.2 | 771 | 0.786322 | fra_Latn | 0.966101 |
ff72de9c134def450b00037be73b3117b5510ebd | 3,139 | md | Markdown | concepts/middleware/README.md | Li-Yanzhi/docs-1 | 7028dee6b06130c35e8bf5d0f1ba5780531d7414 | [
"MIT"
] | null | null | null | concepts/middleware/README.md | Li-Yanzhi/docs-1 | 7028dee6b06130c35e8bf5d0f1ba5780531d7414 | [
"MIT"
] | null | null | null | concepts/middleware/README.md | Li-Yanzhi/docs-1 | 7028dee6b06130c35e8bf5d0f1ba5780531d7414 | [
"MIT"
] | null | null | null | # Middleware pipeline
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. A request goes through all defined middleware components before it's routed to user code, and then goes through the defined middleware, in reverse order, before it's returned to the client, as shown in the following diagram.

## Customize processing pipeline
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware](../observability/traces.md) and CORS middleware. Additional middleware, configured by a Dapr [configuration](../configuration/README.md), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.
> **NOTE:** Dapr provides a **middleware.http.uppercase** pre-registered component that changes all text in a request body to uppercase. You can use it to test/verify if your custom pipeline is in place.
The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware](../../howto/authorization-with-oauth/README.md) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol and transformed to uppercase text before they are forwarded to user code.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: pipeline
namespace: default
spec:
httpPipeline:
handlers:
- name: oauth2
type: middleware.http.oauth2
- name: uppercase
type: middleware.http.uppercase
```
## Writing a custom middleware
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns the middleware function:
```go
type Middleware interface {
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func GetHandler(metadata Metadata) func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
//inbound logic
h(ctx) //call the downstream handler
//outbound logic
}
}
}
```
## Submitting middleware components
Your middleware component can be contributed to the https://github.com/dapr/components-contrib repository, under the */middleware* folder. Then submit another pull request against the https://github.com/dapr/dapr repository to register the new middleware type. You'll need to modify the **Load()** method under https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go to register your middleware using the **Register** method.
## References
* [How-To: Configure API authorization with OAuth](../../howto/authorization-with-oauth/README.md)
| 53.20339 | 454 | 0.770309 | eng_Latn | 0.967285 |
ff72eceabde1f0f52d4bfcfea06780a058acf2f8 | 4,283 | md | Markdown | Contributing.md | kipertex/ighoo | bba11aa470ee5134c24d5936bf82e401b435e098 | [
"MIT"
] | null | null | null | Contributing.md | kipertex/ighoo | bba11aa470ee5134c24d5936bf82e401b435e098 | [
"MIT"
] | null | null | null | Contributing.md | kipertex/ighoo | bba11aa470ee5134c24d5936bf82e401b435e098 | [
"MIT"
] | null | null | null | # Contributing
## Reporting Issues or Bug Reports
Bug reports are appreciated. Read [**`Issues - Bug Reports`**](https://github.com/asistex/ighoo/blob/master/.github/DOCS/pullreq_1.md) for a few guidelines that will help speed up the process of getting them fixed.
---
### [1- Go to **Tab Issue -> New Issue**](https://github.com/asistex/ighoo/issues/new/choose)
[](https://github.com/asistex/ighoo/issues/new/choose)
---
### [2- click on **Get Started**](https://github.com/asistex/ighoo/issues/new?assignees=&labels=&template=bug_report.md&title=)
[](https://github.com/asistex/ighoo/issues/new?assignees=&labels=&template=bug_report.md&title=)
---
### [3- **Fill form**](https://github.com/asistex/ighoo/issues/new?assignees=&labels=&template=bug_report.md&title=)
[](https://github.com/asistex/ighoo/issues/new?assignees=&labels=&template=bug_report.md&title=)
---
## Pull Requests
Your pull requests are welcome; however, they may not be accepted for various reasons.
Read [**`pull_request_template.md`**](https://github.com/asistex/ighoo/blob/master/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md) for a few guidelines that will help speed up the process of getting them fixed.
All Pull Requests, except for translations and user documentation, need to be attached to an issue on GitHub. For Pull Requests regarding enhancements and questions, the issue must first be approved by one of the project's administrators before being merged into the project. An approved issue will have the label **`Accepted`**. For issues that have not been accepted, you may request to be assigned to that issue.
Opening an issue beforehand allows the administrators and the community to discuss bugs and enhancements before work begins, preventing wasted effort.
### Guidelines for pull requests
1. Respect HMG Harbour coding style.
2. Create a new branch for each PR. **`Make sure your branch name wasn't used before`** - you can add a date (for example `patch3_20200528`) to ensure its uniqueness.
3. Single feature or bug-fix per PR.
4. Make a single commit per PR.
5. Make your modification compact - don't reformat source code in your request. It makes code review more difficult.
In short: The easier the code review is, the better the chance your pull request will get accepted.
### Coding style
#### GENERAL
1. ##### Use the following indenting for statements. 3 spaces, NO tabs:
* ###### Good:
```
SWITCH cnExp
CASE condition
IF someCondition == .T.
DoSomething()
ENDIF
Exit
CASE condition
// code
Exit
OTHERWISE
// code
END SWITCH
```
* ###### Bad:
```
SWITCH cnExp
CASE condition
if someCondition == 37
Do something
Endif
CASE condition
// code
Exit
OTHERWISE
// code
END SWITCH
```
2. ##### Avoid magic numbers.
* ###### Good:
```
if (foo < I_CAN_PUSH_ON_THE_RED_BUTTON)
startThermoNuclearWar();
```
* ###### Bad:
```
while (lifeTheUniverseAndEverything != 42)
lifeTheUniverseAndEverything = buildMorePowerfulComputerForTheAnswer();
```
#### NAMING CONVENTIONS
1. ##### Functions, class methods & method parameters use camelCase
* ###### Good:
```
hb_aDel( aArray, nPos, .F. )
```
* ###### Bad:
```
hb_adel( aarray, POS, .f. )
HB_AINS( array, Pos, Value, .t. )
```
2. ##### Always prefer a variable name that describes what the variable is used for.
* ###### Good:
```
IF ( nHours < 24 .And. nMinutes < 60 .And. nSeconds < 60)
```
* ###### Bad:
```
if (a < 24 .And. b < 60 .And. c < 60)
```
#### COMMENTS
1. ##### Use comment line style.
* ###### Good:
```
// Two lines comment
// Still use the C++ comment line style
```
* ###### Bad:
```
* Please don't piss me off with that asterisk
```
2. ##### Multiline comments
/*
* comments
* multilines
*/
| 28.364238 | 410 | 0.65865 | eng_Latn | 0.858035 |
ff73093329d7f4faa60995baa22dc4842daeb3c3 | 893 | md | Markdown | content/development_docs/api/command_list/lb/delete_loadbalancer_backends.md | Wciel/enterprise-cloud-docs | c48dd0e3df027a979d63552acbeffda3bb6c9101 | [
"Apache-2.0"
] | null | null | null | content/development_docs/api/command_list/lb/delete_loadbalancer_backends.md | Wciel/enterprise-cloud-docs | c48dd0e3df027a979d63552acbeffda3bb6c9101 | [
"Apache-2.0"
] | 26 | 2022-02-16T03:10:03.000Z | 2022-03-28T03:41:11.000Z | content/development_docs/api/command_list/lb/delete_loadbalancer_backends.md | Wciel/enterprise-cloud-docs | c48dd0e3df027a979d63552acbeffda3bb6c9101 | [
"Apache-2.0"
] | 10 | 2022-01-21T08:55:39.000Z | 2022-03-16T06:44:42.000Z | ---
title: "DeleteLoadBalancerBackends"
description:
draft: false
---
Deletes one or more load balancer backends.
**Request Parameters**
| Parameter name | Type | Description | Required |
| --- | --- | --- | --- |
| loadbalancer_backends.n | String | Backend ID | Yes |
| zone | String | Zone ID (must be lowercase) | Yes |
[_Common parameters_](../../../parameters/)
**Response Elements**
| Name | Type | Description |
| --- | --- | --- |
| action | String | Response action |
| loadbalancer_backends | Array | List of deleted backend IDs |
| ret_code | Integer | Whether execution succeeded; 0 means success, any other value is an error code |
**Example**
_Example Request_:
```
https://api.xxxxx.com/iaas/?action=DeleteLoadBalancerBackends
&loadbalancer_backends.1=lbb-1234abcd
&loadbalancer_backends.2=lbb-5678hjkl
&zone=pek3a
&COMMON_PARAMS
```
_Example Response_:
```
{
"action":"DeleteLoadBalancerBackendsResponse",
"loadbalancer_backends":[
"lbb-1234abcd",
"lbb-5678hjkl"
],
"zone":"pek3a"
}
```
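For illustration only, the request above could be assembled as in the following Python sketch. The endpoint is the placeholder from the example, and the signature/common parameters are deliberately left out here, exactly as `&COMMON_PARAMS` is elided above; supply them as described in the common-parameters documentation.
```python
# Sketch only: builds the DeleteLoadBalancerBackends query from the example.
import requests

params = {
    "action": "DeleteLoadBalancerBackends",
    "loadbalancer_backends.1": "lbb-1234abcd",
    "loadbalancer_backends.2": "lbb-5678hjkl",
    "zone": "pek3a",
    # ...common parameters (signature, access key, etc.) go here...
}
resp = requests.get("https://api.xxxxx.com/iaas/", params=params, timeout=30)
print(resp.json())
```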
| 17.173077 | 61 | 0.661814 | yue_Hant | 0.224259 |
ff7350aad55753e425a76451a8088d590de60d25 | 3,755 | md | Markdown | docs/AutoSizer.md | henryqdineen/react-virtualized | ad85acc582dfa628be0b2a55a827bec6b12873ee | [
"MIT"
] | 24,593 | 2015-11-16T14:52:23.000Z | 2022-03-31T22:36:51.000Z | docs/AutoSizer.md | henryqdineen/react-virtualized | ad85acc582dfa628be0b2a55a827bec6b12873ee | [
"MIT"
] | 1,540 | 2015-11-27T06:35:52.000Z | 2022-03-30T09:02:28.000Z | docs/AutoSizer.md | henryqdineen/react-virtualized | ad85acc582dfa628be0b2a55a827bec6b12873ee | [
"MIT"
] | 3,789 | 2015-11-06T03:23:32.000Z | 2022-03-31T17:18:03.000Z | ## AutoSizer
High-order component that automatically adjusts the width and height of a single child.
### Prop Types
| Property | Type | Required? | Description |
| :------------ | :------- | :-------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| children | Function | ✓ | Function responsible for rendering children. This function should implement the following signature: `({ height: number, width: number }) => PropTypes.element` |
| className | String | | Optional custom CSS class name to attach to root `AutoSizer` element. This is an advanced property and is not typically necessary. |
| defaultHeight | Number | | Height passed to child for initial render; useful for server-side rendering. This value will be overridden with an accurate height after mounting. |
| defaultWidth | Number | | Width passed to child for initial render; useful for server-side rendering. This value will be overridden with an accurate width after mounting. |
| disableHeight | Boolean | | Fixed `height`; if specified, the child's `height` property will not be managed |
| disableWidth | Boolean | | Fixed `width`; if specified, the child's `width` property will not be managed |
| nonce | String | | Nonce of the inlined stylesheets for [Content Security Policy](https://www.w3.org/TR/2016/REC-CSP2-20161215/#script-src-the-nonce-attribute) |
| onResize | Function | | Callback to be invoked on-resize; it is passed the following named parameters: `({ height: number, width: number })`. |
| style | Object | | Optional custom inline style to attach to root `AutoSizer` element. This is an advanced property and is not typically necessary. |
### Examples
Many react-virtualized components require explicit dimensions, but sometimes you just want a component to grow to fill all of the available space.
The `AutoSizer` component can be useful in this case.
One word of caution about using `AutoSizer` with flexbox containers.
Flex containers don't prevent their children from growing and `AutoSizer` greedily grows to fill as much space as possible.
Combining the two can cause a loop.
The simple way to fix this is to nest `AutoSizer` inside of a `block` element (like a `<div>`) rather than putting it as a direct child of the flex container.
Read more about common `AutoSizer` questions [here](usingAutoSizer.md).
```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import {AutoSizer, List} from 'react-virtualized';
import 'react-virtualized/styles.css'; // only needs to be imported once
// List data as an array of strings
const list = [
'Brian Vaughn',
// And so on...
];
function rowRenderer({key, index, style}) {
return (
<div key={key} style={style}>
{list[index]}
</div>
);
}
// Render your list
ReactDOM.render(
<AutoSizer>
{({height, width}) => (
<List
height={height}
rowCount={list.length}
rowHeight={20}
rowRenderer={rowRenderer}
width={width}
/>
)}
</AutoSizer>,
document.getElementById('example'),
);
```
| 56.893939 | 202 | 0.568575 | eng_Latn | 0.991327 |
ff7350ecaf587f92c1546a958d62315fcf984975 | 221 | md | Markdown | README.md | LeoWPereira/A-star-Path-Finding | 2b46e08166b50b7d3dcfd629af1b99a3cb605b33 | [
"Apache-2.0"
] | null | null | null | README.md | LeoWPereira/A-star-Path-Finding | 2b46e08166b50b7d3dcfd629af1b99a3cb605b33 | [
"Apache-2.0"
] | null | null | null | README.md | LeoWPereira/A-star-Path-Finding | 2b46e08166b50b7d3dcfd629af1b99a3cb605b33 | [
"Apache-2.0"
] | null | null | null | # A-star-Path-Finding
A complete A* Path Finding Algorithm written in C++ and Allegro.
With this algorithm one can find the shortest path from one point to another!
It also includes a perfect random labyrinth generator.
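For readers who want the idea of the algorithm without the C++/Allegro scaffolding, here is a minimal Python sketch of A* on a grid with a Manhattan-distance heuristic. It is an illustration of the technique, not the code in this repository:
```python
# Minimal A* sketch on a 2D grid (0 = free, 1 = wall); illustration only.
import heapq

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # rebuild path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, cur))
    return None  # no path exists

# Example: astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
```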
| 31.571429 | 77 | 0.78733 | eng_Latn | 0.999654 |
ff73c2e6c1fbe9eb8f52b7d36f048537873ce5d2 | 1,162 | md | Markdown | docs/vs-2015/code-quality/troubleshooting-quality-tools.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/troubleshooting-quality-tools.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/troubleshooting-quality-tools.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Troubleshooting quality tools | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-general
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- Visual Studio ALM, developing the application
ms.assetid: 535b9e67-ce9e-4a3e-8d28-9365f257036e
caps.latest.revision: 19
author: erickson-doug
ms.author: gewarren
manager: douge
ms.openlocfilehash: 88b0ebabfa093a8ef0cdffc8e9c137032dee1ab6
ms.sourcegitcommit: 9ceaf69568d61023868ced59108ae4dd46f720ab
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/12/2018
ms.locfileid: "49253792"
---
# <a name="troubleshooting-quality-tools"></a>Sorun Giderme Kalite Araçları
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
If you encounter problems while running the Visual Studio quality tools, the topics in this section can help you diagnose and resolve those issues.
## <a name="in-this-section"></a>Bu bölümde
[Troubleshooting Code Analysis Issues](../code-quality/troubleshooting-code-analysis-issues.md)
[Troubleshooting Code Metrics Issues](../code-quality/troubleshooting-code-metrics-issues.md)
| 30.578947 | 143 | 0.787435 | tur_Latn | 0.663449 |
ff7456ed504a7efc0ccfc3959d45b98d9517ec9a | 53 | md | Markdown | code/readme.md | ultimus11/Home-Automation | 87f92820ebe69ddfdd5a347c911128a2f98a1444 | [
"MIT"
] | 2 | 2021-03-03T17:53:00.000Z | 2021-04-06T14:28:51.000Z | code/readme.md | ultimus11/Home-Automation | 87f92820ebe69ddfdd5a347c911128a2f98a1444 | [
"MIT"
] | null | null | null | code/readme.md | ultimus11/Home-Automation | 87f92820ebe69ddfdd5a347c911128a2f98a1444 | [
"MIT"
] | null | null | null | here is the part wise code for my youtube tutorials:
| 26.5 | 52 | 0.792453 | eng_Latn | 0.999973 |
ff746fa37750a76f767590f472892eacfe97effb | 1,110 | md | Markdown | translations/zh-CN/content/github/setting-up-and-managing-your-github-user-account/changing-your-primary-email-address.md | munnellg/docs | 05d341b126c9bf40c8c595842c5134685c6dafce | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-05-24T18:59:00.000Z | 2021-07-07T02:26:01.000Z | translations/zh-CN/content/github/setting-up-and-managing-your-github-user-account/changing-your-primary-email-address.md | etherealbeing1/docs | 5a91f5a508ba1732bc7fc0894064e759c7cf00c8 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-12-09T07:09:43.000Z | 2022-02-17T23:06:46.000Z | translations/zh-CN/content/github/setting-up-and-managing-your-github-user-account/changing-your-primary-email-address.md | etherealbeing1/docs | 5a91f5a508ba1732bc7fc0894064e759c7cf00c8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Changing your primary email address
intro: You can change the email address associated with your user account at any time.
redirect_from:
- /articles/changing-your-primary-email-address
versions:
free-pro-team: '*'
enterprise-server: '*'
topics:
- Accounts
- Notifications
---
{% note %}
If you want to add a new email address to set as your primary email address, under "Add email address", type the new email address and click **Add**.
{% endnote %}
{% data reusables.user_settings.access_settings %}
{% data reusables.user_settings.emails %}
3. If you want to add a new email address to set as your primary email address, under "Add email address", type the new email address, then click **Add**. 
4. Under "Primary email address", use the drop-down menu to select the email address you'd like to set as primary, then click **Save**.
5. To remove the old email address from your account, next to the old email address, click
{% octicon "trash" aria-label="The trash symbol" %}.
{% if currentVersion == "free-pro-team@latest" %}
6. Verify your new primary email address. Without a verified email address, you won't be able to use all of
{% data variables.product.product_name %}'s features. For more information, see "[Verifying your email address](/articles/verifying-your-email-address)."
{% endif %}
### Further reading
- "[Managing email preferences](/articles/managing-email-preferences/)"
| 32.647059 | 166 | 0.748649 | yue_Hant | 0.267082 |
ff759e5a83f47713dd8c151e0355b4c76931e107 | 511 | md | Markdown | README.md | payal-06/payal-06 | a07a6326dca07266d4b347358f57f5e5def6b834 | [
"MIT"
] | null | null | null | README.md | payal-06/payal-06 | a07a6326dca07266d4b347358f57f5e5def6b834 | [
"MIT"
] | null | null | null | README.md | payal-06/payal-06 | a07a6326dca07266d4b347358f57f5e5def6b834 | [
"MIT"
] | 1 | 2021-05-29T17:25:08.000Z | 2021-05-29T17:25:08.000Z | ### Hi there 👋
I'M PAYAL SINGH. I WAS BORN IN GOA, AND AS YOU KNOW, GOA IS A VERY NICE PLACE FOR TRAVELLERS. I COMPLETED MY EDUCATION IN LALGANJ, RAEBARELI, AT A SCHOOL NAMED KASHI BRIGHT
ANGEL'S PUBLIC SCHOOL. I CHOSE VIT BECAUSE "VIT" IS A NAME KNOWN ACROSS THE WHOLE WORLD AND ITS RANKING IS ALSO GOOD.
### ADDRESS-
RIGHT NOW WE ARE LIVING IN LUCKNOW BECAUSE IT'S MY HOMETOWN. LUCKNOW IS ALSO KNOWN AS THE CITY OF 'NAWABS'. LUCKNOW, THE CAPITAL OF UTTAR PRADESH, WAS HISTORICALLY KNOWN AS THE AWADH
REGION.
| 46.454545 | 181 | 0.759295 | yue_Hant | 0.999905 |
ff75a5190aa39186a38a0124986c8c42c51b7dab | 96 | md | Markdown | README.md | MonodetH/Hackaton-MercadoLibre | 014c03d20abbbbe4f9b1ca1caf6417e58dd3ec24 | [
"Apache-2.0"
] | null | null | null | README.md | MonodetH/Hackaton-MercadoLibre | 014c03d20abbbbe4f9b1ca1caf6417e58dd3ec24 | [
"Apache-2.0"
] | null | null | null | README.md | MonodetH/Hackaton-MercadoLibre | 014c03d20abbbbe4f9b1ca1caf6417e58dd3ec24 | [
"Apache-2.0"
] | null | null | null | # Hackaton-MercadoLibre
## Tools Used
1. Python 3.5
2. Pip 9.0.1
3. Django 1.11.2
| 13.714286 | 26 | 0.71875 | spa_Latn | 0.515397 |
ff75a7c12169d2dff32589b0a083eb87e084291c | 3,098 | md | Markdown | _posts/2015-08-03-这都是真的吗?—— 关于计划生育政策的四个传说.md | NodeBE4/opinion | 81a7242230f02459879ebc1f02eb6fc21507cdf1 | [
"MIT"
] | 21 | 2020-07-20T16:10:55.000Z | 2022-03-14T14:01:14.000Z | _posts/2015-08-03-这都是真的吗?—— 关于计划生育政策的四个传说.md | NodeBE4/opinion | 81a7242230f02459879ebc1f02eb6fc21507cdf1 | [
"MIT"
] | 1 | 2020-07-19T21:49:44.000Z | 2021-09-16T13:37:28.000Z | _posts/2015-08-03-这都是真的吗?—— 关于计划生育政策的四个传说.md | NodeBE4/opinion | 81a7242230f02459879ebc1f02eb6fc21507cdf1 | [
"MIT"
] | 1 | 2021-05-29T19:48:01.000Z | 2021-05-29T19:48:01.000Z | ---
layout: post
title: "这都是真的吗?—— 关于计划生育政策的四个传说"
date: 2015-08-03
author: 邵立
from: http://cnpolitics.org/2015/08/myth-one-child-policy/
tags: [ 政見 ]
categories: [ 政見 ]
---
<div class="post-block">
<h1 class="post-head">
这都是真的吗?—— 关于计划生育政策的四个传说
</h1>
<p class="post-subhead">
</p>
<p class="post-tag">
</p>
<p class="post-author">
<!--a href="http://cnpolitics.org/author/shaoli/">邵立</a-->
<a href="http://cnpolitics.org/author/shaoli/">
邵立
</a>
<span style="font-size:14px;color:#b9b9b9;">
|2015-08-03
</span>
</p>
<!--p class="post-lead">计划生育和毛主席有什么关系?它的政策效果又如何?看本文之前可以先思考一下,你的想法和学者一样吗?</p-->
<div class="post-body">
<figure>
<img alt="0701-1" src="http://cnpolitics.org/wp-content/uploads/2015/08/OneChildPolicy_Large.jpg" width="566">
<figcaption>
图片来源:Alexandra Moss
</figcaption>
</img>
</figure>
<p>
《中国学刊》(China Journal)近期发表了题为“破除计划生育政策四大传说”的文章,认为此前对计划生育政策的四种流行看法都与事实差之甚远。下面请允许政见给大家一一介绍研究者对四种看法的反驳。
</p>
<h3>
Myth 1:
</h3>
<p>
Chairman Mao believed the more children the better. Although for a time he also agreed to voluntary birth-control policies, real family planning could only be carried out after his death.
</p>
<p>
The researchers: Not true. Mao Zedong's view of population was in fact very pragmatic. As early as 1957 he proposed controlling the population. Even in 1960, before the famine had ended, he expressed a wish to control births. The State Council formally adopted a report on birth control in 1971. Given the political environment of the time, the State Council could not possibly have issued that report without Mao Zedong's approval.
</p>
<h3>
Myth 2:
</h3>
<p>
Because Chairman Mao did not advocate birth control, China's population kept growing rapidly, and the fertility rate was only curbed after the mandatory family planning policy was implemented in 1980.
</p>
<p>
The researchers: Wrong, wrong, wrong. As noted above, Mao Zedong in fact advocated birth control. Many of the widely known (indeed notorious) birth-control measures actually originated in the "later, longer, fewer" (wan, xi, shao) campaign of the 1970s. Thanks to the all-encompassing government of that era, the rates of abortion, IUD insertion, and sterilization in the 1970s were even higher than in the 1990s. After the one-child policy was introduced in 1980, these three indicators spiked during the campaign-style push of 1983 but quickly returned to average levels. This shows that the policy's introduction had no long-term effect.
</p>
<h3>
Myth 3:
</h3>
<p>
After mandatory family planning was implemented in 1980, China's fertility rate fell rapidly.
</p>
<p>
The researchers: In reality, the fertility decline was a broad trend that began in the 1970s. Family planning was rolled out in 1980, yet the fertility rate actually rose slightly in 1981. After that, only two small dips in fertility were directly tied to the policy. One was the campaign-style enforcement of 1983 mentioned above, with 14.4 million abortions, 20.7 million sterilizations, and 17.8 million IUD insertions in a single year. The other came in the late 1980s: because fertility had crept back up after 1983, family planning became a core task of local party secretaries and a yardstick for officials' promotion, which spurred grassroots governments to push harder and nudged fertility slightly downward once more. Overall, however, the 1980s policy itself was not the main cause of the fertility decline.
</p>
<h3>
Myth 4:
</h3>
<p>
Although family planning produced tragedies such as forced abortions, it still succeeded in helping China avert 400 million births, an outstanding contribution to the world.
</p>
<p>
The researchers: This claim has three fatal flaws. First, the 400 million figure comes from estimates made by Chinese demographers at the time, which deviate badly from reality. The researchers surveyed 16 developing countries with populations above one million whose fertility rates in the 1970s were similar to China's. In 1970 the average fertility rate of these 16 countries was slightly higher than China's, and, without implementing family planning, their average fertility rate in 2005 was 45% lower than the rate implied by China's "400 million" estimate. In other words, however fertile the Chinese population might be, the "400 million" figure was never plausible. The researchers therefore argue that the estimates of the day grossly exaggerated the population crisis and became the root of the one-child policy's problems.
</p>
<p>
Second, the "400 million" figure completely ignores the fact that the fertility decline actually began in the 1970s, before the one-child policy existed. In other words, as the post-1970 cohorts shrank, the children they would bear in the 1990s and 2000s would in any case be fewer than projected.
</p>
<p>
Finally, the "400 million" inference ignores the fact that lower fertility is an inevitable consequence of rapid economic development. Since China's economic growth after 1980 was much faster than the average of the 16 countries above, the researchers argue that even without the one-child policy, China's fertility would necessarily have fallen faster than theirs.
</p>
<p>
In short, the family planning policy actually played a very limited role in reducing China's population growth; the main cause of the fertility decline was economic development. The researchers believe the data estimates the one-child policy relied on were deeply unreliable, and that a more relaxed two-child policy might have saved a great deal of trouble.
</p>
<div class="post-endnote">
<h4>
References
</h4>
<ul>
<li>
Whyte, M.K., Feng, W., & Cai, Y. (2015). Challenging myths about China’s one-child policy.
<cite>
The China Journal, 74
</cite>
, 144–59.
</li>
</ul>
</div>
</div>
<!-- icon list -->
<!--/div-->
<!-- social box -->
<div class="post-end-button back-to-top">
<p style="padding-top:20px;">
Back to top
</p>
</div>
<div id="display_bar">
<img src="http://cnpolitics.org/wp-content/themes/CNPolitics/images/shadow-post-end.png"/>
</div>
</div>
| 27.415929 | 260 | 0.699161 | yue_Hant | 0.66324 |
ff7605b43b318bd4e41e8a5250fffeca34ab609f | 2,441 | md | Markdown | README.md | mplesser/azcam-arc | ed7c2025e603715e6e21f1abba76711c1b8331e4 | [
"MIT"
] | null | null | null | README.md | mplesser/azcam-arc | ed7c2025e603715e6e21f1abba76711c1b8331e4 | [
"MIT"
] | null | null | null | README.md | mplesser/azcam-arc | ed7c2025e603715e6e21f1abba76711c1b8331e4 | [
"MIT"
] | null | null | null | # azcam-arc
*azcam-arc* is an *azcam* extension for Astronomical Research Cameras, Inc. gen1, gen2, and gen3 controllers. See https://www.astro-cam.com/.
## Installation
`pip install azcam-arc`
Or download from github: https://github.com/mplesser/azcam-arc.git.
## Example Code
The code below is for example only.
### Controller Setup
```python
import os

import azcam.server
from azcam_arc.controller_arc import ControllerArc

# Create the controller tool and declare the ARC boards in this system.
controller = ControllerArc()
controller.timing_board = "arc22"
controller.clock_boards = ["arc32"]
controller.video_boards = ["arc45", "arc45"]
controller.utility_board = None
controller.set_boards()

# DSP code to be loaded into the PCI fiber interface board.
controller.pci_file = os.path.join(azcam.db.systemfolder, "dspcode", "dsppci3", "pci3.lod")

# Video processing settings.
controller.video_gain = 2
controller.video_speed = 1
```
### Exposure Setup
```python
import azcam.server
from azcam_arc.exposure_arc import ExposureArc

# Create the exposure tool and select multi-extension FITS (MEF) files.
exposure = ExposureArc()
exposure.filetype = azcam.db.filetypes["MEF"]
exposure.image.filetype = azcam.db.filetypes["MEF"]

# Configure a remote image server to receive finished images.
exposure.set_remote_imageserver("localhost", 6543)
exposure.image.remote_imageserver_filename = "/data/image.fits"
exposure.image.server_type = "azcam"
exposure.set_remote_imageserver()
```
## Camera Servers
*Camera servers* are separate executable programs which manage direct interaction with
controller hardware on some systems. Communication with a camera server takes place over a socket via
communication protocols defined between *azcam* and a specific camera server program. These
camera servers are necessary when specialized drivers for the camera hardware are required. They are
usually written in C/C++.
## DSP Code
The DSP code which runs in the ARC controllers is assembled and linked with
Motorola software tools. These tools are typically installed in the folder `/azcam/motoroladsptools/` on a
Windows machine as required by the batch files which assemble and link the code.
While the AzCam application code for the ARC timing board is typically downloaded during
camera initialization, the boot code must be compatible for this to work properly. Therefore
AzCam-compatible DSP boot code may need to be burned into the timing board EEPROMs before use, depending on configuration.
The gen3 PCI fiber optic interface boards and the gen3 utility boards use the original ARC code and do not need to be changed. The gen1 and gen2 situations are more complex.
For ARC systems, the *xxx.lod* files are downloaded to the boards.
| 39.370968 | 173 | 0.791479 | eng_Latn | 0.978481 |
ff76f8f4502404b6045cedc4c6873a123725bc22 | 7,653 | md | Markdown | _posts/2020-11-17-Virtual-Reality-In-Military.md | aviorsys/aviorsys.github.io | aa826c38c35026c5ecfa3c75fd23a4acdc2adc35 | [
"CC0-1.0"
] | null | null | null | _posts/2020-11-17-Virtual-Reality-In-Military.md | aviorsys/aviorsys.github.io | aa826c38c35026c5ecfa3c75fd23a4acdc2adc35 | [
"CC0-1.0"
] | null | null | null | _posts/2020-11-17-Virtual-Reality-In-Military.md | aviorsys/aviorsys.github.io | aa826c38c35026c5ecfa3c75fd23a4acdc2adc35 | [
"CC0-1.0"
] | null | null | null | ---
layout: post
title: "Virtual Reality in Military"
categories: "Technology"
author: "Shalitha Dhananjaya"
---
# Virtual Reality in Military
The military adopted Virtual Reality a long time ago, much earlier than commercial markets did. Who knows what VR technology would look like today without the billions the military invested from the very beginning. The military also financed the development of the first effective headsets, a.k.a. HMDs. Today, the number of VR projects for the military keeps increasing, and by 2025 the market is expected to generate a significant $1.4 billion in revenue.

## VR military applications
### VR Training

About 1 in 20 soldier deaths happen during training. Because it can turn these grim statistics around and provide maximum safety, the adoption of Virtual Reality for military training programs was enthusiastically welcomed.
Unlike Augmented Reality (AR), VR is not suited for use on the front lines. However, its training potential is higher, because it can fully simulate a situation, its surroundings, and its conditions for practical purposes. It is especially useful for sharpening soldiers' combat skills without any danger to their lives. And it is cheaper than any real-life military maneuvers.
VR military training, by type:
- Immersive training – helps trainees work through all the stresses of parachute jumps, fighter jets, submarines, and tanks (including claustrophobia)
- Situational awareness – in extreme environments (jungle, arctic, or desert missions), navigation and teamwork are crucial
With the advent of wireless and mobile VR systems, the training became even more realistic thanks to the freedom of movement. Such virtual boot camps typically include:
- A head-mounted display with a motion tracker
- A special load-bearing vest with batteries and a wireless PC
- A body motion tracker
- Training weapons equal in size, weight, and shape to real military armament
Additionally, a new piece of advanced tech, a VR haptic feedback suit, is in the works too. The next area of use is medical training. Military medical teams should also be able to work quickly and professionally in dangerous circumstances. Training in VR format can provide the necessary experience of combat situations.
### VR Simulators

Flight simulations help train pilots for air battles, for coordinating with ground support, and in general for exercising skills under stressful circumstances. Such VR simulators are usually enclosed systems with hydraulics, electronics, and force feedback reacting to the pilot's actions. Simulators also provide a control panel identical to that of the original aircraft.
Future combat system (FCS) simulation is the main type of VR simulation for ground forces, mostly applied to ground vehicles, mortar vehicles, reconnaissance or infantry carriers, and even tanks or armored vehicles. The VR environment recreates different weather conditions and trains crews to navigate unfamiliar terrain.
Unlike the previous types, VR navy simulators focus on recreating the ship's bridge rather than replicating outside environments. The main goal of seamanship, navigation and ship-handling trainer (NSST) simulations is to present soldiers with every possible scenario and to train them to operate in the toughest ones.
### VR Therapy

The average number of war veterans taking their own lives has reached 20 people daily. In most cases, these soldiers suffer from post-traumatic stress disorder. Starting in 2005, virtual reality has been in use for PTSD treatment.
At the initial stage, only two products were developed, Virtual Afghanistan and Iraq, but now more are available. With such VR applications, soldiers experience different battle scenes that had once influenced their psyche. However, this is absolutely safe and is aimed at overcoming fears and healing.
## Benefits of Virtual Reality in Military
Providing safety is one of the main advantages of Virtual Reality in military training. Moreover, simulations can prepare for different dangerous scenarios on the battlefield in a controlled setting. Key benefits include:
1. Realistic scenarios. Environments identical to the real ones are recreated, but with 100% control.
2. Cost-effective. VR cuts down the cost of training considerably, largely because HMDs and related equipment see little "wear and tear" and logistical issues are reduced. Replicas of weapons or vehicles also cost less than the actual inventory.
3. Measurement. Immediate feedback on each participant and their performance, with details on every aspect. This helps tailor further training to personal strengths and weaknesses.
4. Better engagement. Because most VR training is game-like, soldiers find it more enjoyable. Usually, this means a higher level of engagement and understanding.
## A few examples
The US Army at Fort Bragg, N.C., is developing a VR program that provides realistic training with minimum risk to soldiers' lives and health. The program allows squads to maintain their battle experience or prepare for new missions.
Australia is funding the Defence Science Technology Group, which develops VR training programs for the military. Leading professors Rohan Walker and Eugene Nalivaiko, together with Albert Rizzo's team, are trying to prepare soldiers for every situation that might await them.
In terms of hardware, there are some interesting VR weapon developments. The company UCVR created a model identical to an original bazooka that works with the HTC Vive headset: a 1:1 high-fidelity replica equipped with a force-feedback vibration system, weighing 2.8 kg, with a maximum range of action of 2,100 m and a maximum effective range of 300 m.
Another example is Striker VR, for training shooting skills. The company is working to equip its product, a haptic VR gun, with the best tactile feedback system on the market. Effects include single shot, out-of-ammo, three-round burst, chainsaw, and pulse-rifle fire. One more feature of the gun is complete portability and free movement.

Virtual reality has a wide range of applications in the military sector, as we have seen. It takes battle training to the next level, provides skills and knowledge, and allows practice in safety. It also aids in improving battle skills, teamwork, situational awareness, and stress resistance.
## References
- The Telegraph. 2020. More than 1 in 20 troop deaths happen in training. [ONLINE] Available at: https://www.telegraph.co.uk/news/uknews/defence/12095670/More-than-1-in-20-troop-deaths-happen-in-training.html. [Accessed 06 November 2020].
- Wikipedia. 2020. United States military veteran suicide - Wikipedia. [ONLINE] Available at: https://en.wikipedia.org/wiki/United_States_military_veteran_suicide. [Accessed 06 November 2020].
- Welcome to DST - Defence Science and Technology. 2020. Welcome to DST - Defence Science and Technology. [ONLINE] Available at: https://www.dst.defence.gov.au/. [Accessed 06 November 2020].
- strikervr. 2020. Advanced force feedback | United States | Striker VR. [ONLINE] Available at: https://www.strikervr.com/. [Accessed 06 November 2020].
- ThinkMobiles. 2020. Virtual Reality for the military, training and combat simulations - 2020. [ONLINE] Available at: https://thinkmobiles.com/blog/virtual-reality-military/. [Accessed 06 November 2020].
| 77.30303 | 445 | 0.800732 | eng_Latn | 0.996868 |
ff77c14e4bd27667002e1545a257d61da243e308 | 558 | md | Markdown | README.md | BlackFoxCode/Turmult | 81c5f2eccd989b53ba5437def51d389ba45cdbfd | [
"MIT"
] | null | null | null | README.md | BlackFoxCode/Turmult | 81c5f2eccd989b53ba5437def51d389ba45cdbfd | [
"MIT"
] | null | null | null | README.md | BlackFoxCode/Turmult | 81c5f2eccd989b53ba5437def51d389ba45cdbfd | [
"MIT"
] | null | null | null | # Discord
[](https://github.com/Reyu/Discord/actions)
[](https://hackage.haskell.org/package/Discord)
[](http://stackage.org/lts/package/Discord)
[](http://stackage.org/nightly/package/Discord)
[](LICENSE)
Discord Library
| 55.8 | 117 | 0.756272 | kor_Hang | 0.345191 |
ff77c1bbf47177cd707e32b57aeae484aad62a23 | 7,009 | md | Markdown | onl/tutorials/mod2_2_getting_started_with_Python.md | galib9690/OpenNightLights | 7c027c445a6f5515024f50ff33f03dd96bcbd9c5 | [
"CC-BY-4.0"
] | 15 | 2021-03-03T20:34:51.000Z | 2022-02-25T16:48:59.000Z | onl/tutorials/mod2_2_getting_started_with_Python.md | galib9690/OpenNightLights | 7c027c445a6f5515024f50ff33f03dd96bcbd9c5 | [
"CC-BY-4.0"
] | null | null | null | onl/tutorials/mod2_2_getting_started_with_Python.md | galib9690/OpenNightLights | 7c027c445a6f5515024f50ff33f03dd96bcbd9c5 | [
"CC-BY-4.0"
] | 7 | 2021-03-21T14:04:06.000Z | 2022-02-19T06:43:56.000Z | # Getting started with Python (10 min)
## Intro to Python
<div class="alert alert-success">
<a href="https://www.python.org/" class="alert-link">Python</a> is a popular programming language for data science.
</div>
This tutorial uses Python and assumes that you have at least some familiarity with it.
Python is one of the most popular programming languages for a number of reasons:
- It’s open-source
- It has a large standard library and a massive (and growing!) ecosystem of packages (collections of code), including those used for scientific computing
- It’s user-friendly
- It has a large and active online community for knowledge-sharing and trouble-shooting when you get stuck
- It’s a general-purpose language...this is helpful for data science work, which often covers a wide range of tasks
<div class="alert alert-success">
If you are completely new to programming, you’ll first want to get started with <a href="https://wiki.python.org/moin/BeginnersGuide/NonProgrammers" class="alert-link"> this Beginner's Guide for Non-Programmers.</a>
</div>
<div class="alert alert-success">
If you are a programmer who is new to Python, <a href="https://wiki.python.org/moin/BeginnersGuide/Programmers" class="alert-link">this Beginner's Guide for Programmers</a> may help.
</div>
<div class="alert alert-success">
<a href="https://github.com/openlists/PythonResources" class="alert-link">Here’s another large list of resources for Python</a> assembled by that community we talked about.
</div>
## Installing Python
There are many ways to install and manage Python; in fact, most Linux and UNIX-based computers already have it as a system install.
This tutorial requires Python 3.7+, so if you do not already have that, you will want to install it on your computer. Even if you already have Python 3.7, it is recommended that you set up a virtual environment for this tutorial (as with any project). See below for how to do that.
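If you are not sure which version you have, you can check from a terminal (this assumes the interpreter is on your PATH; on some systems the command is `python3` rather than `python`):

`$ python --version`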
## Package management
We mentioned that large and growing ecosystem of Python packages, which really makes this a powerful language. Thanks to these open-source packages, many tasks we’ll conduct in this tutorial will use code already written for the task, so you don’t have to write out the source code from scratch.
However, these packages often depend on each other in various ways and are updated frequently. If you didn't have a way to manage packages and dependencies, things might break and it becomes a frustrating experience. Fortunately, there are package managers available for this.
If you’ve used Python, you’re familiar with pip.
<div class="alert alert-success">
<b>pip</b> is a utility for installing Python packages contained in the Python Package Index (PyPI), a repository containing packages people have written in Python.
</div>
There is also conda, a popular package management platform among data scientists.
<div class="alert alert-success">
<a href="https://conda.io/en/latest/">Conda (distributed by Anaconda)</a> is an open source package management system and environment management system that runs on Windows, macOS and Linux.
</div>
The Anaconda website provides <a href="https://www.anaconda.com/blog/understanding-conda-and-pip">a helpful description of conda and its comparison to pip</a>.
In summary: using conda can make things more streamlined, but there are some constraints, particularly if the Python package you need is not in the conda repository or if you are building your own packages in Python...but if that's the case, you are not reading a "how to install Python" tutorial...so you can skip this section. Conda also manages virtual environments, which simplifies things for us.
We will use conda in this tutorial; however, you are free to manage dependencies any way you wish of course. If you use pip, just adjust accordingly where you see the instructions say `conda install`.
### Installing conda
Conda can be installed for Windows, macOS, or Linux:
- <a href="https://docs.anaconda.com/anaconda/install/windows/">Installing conda for Windows</a>
- <a href="https://docs.anaconda.com/anaconda/install/mac-os/">Installing conda for macOS</a>
- <a href="https://docs.anaconda.com/anaconda/install/linux/">Installing conda for Linux</a>
Follow the instructions relevant to your operating system and install conda on your computer.
### Managing environments
<div class="alert alert-success">
<a href="https://docs.anaconda.com/anaconda/user-guide/getting-started/">Getting started with conda</a> provides helpful documentation for getting things set up. We will touch on the highlights here, but this resource has more details.</div>
After you have successfully installed conda, you can set up your environment and install packages. There are instructions for how to use the GUI navigation tool, but for simplicity's sake here, we are just going to use the command line. Open a command line or terminal window on your computer. The `$` will indicate a command line interface (CLI) prompt.
Confirm that you have installed conda correctly; a version number will be printed:
`$ conda --version`
Create an environment for this project. We'll call it "onl" but you can name it whatever you wish. We are going to create this environment and install Python 3.9 with it.
`$ conda create --name onl python=3.9`
When this environment is created, you should see instructions about how to activate and deactivate it.
To activate:
`$ conda activate onl`
You should now see your environment name in parentheses next to the prompt to let you know you are working within this environment.
To deactivate:
`$ conda deactivate`
**Make sure you are always working in this environment any time you are using this tutorial to make sure you can access all your packages as needed!**
If you want to see a list of all your environments (and their locations):
`$ conda info --envs`
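If you ever want to delete an environment along with everything installed in it, you can run (shown here for the `onl` environment):

`$ conda remove --name onl --all`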
### Managing packages
The main reason to use virtual environments is to install and manage packages with ease for your project (in this case, this tutorial). We will install a few packages as we go, but right now, let's install the `jupyter` package so that you can open notebooks.
To install jupyter, make sure your environment is activated! And:
`$ conda install jupyter`
This will install the jupyter Python package as well as any dependencies it has.
To see a list of packages installed in your environment, from within your active environment:
`$ conda list`
You should only need to install a package once in your environment.
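As a quick check that the install worked, you can start the notebook server from within your active environment:

`$ jupyter notebook`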
## Development environments
There are many ways to write, edit and execute Python scripts. Programmers will often use an Integrated Development Environment (IDE), such as PyCharm, which is a popular IDE for Python. There are a lot of advantages to using IDEs, especially if you’re building more complicated software applications; however, for this tutorial, we’ll be executing fairly simple routines and can do this from Jupyter notebooks. In fact, this tutorial is contained in Jupyter notebooks!
| 58.408333 | 469 | 0.77857 | eng_Latn | 0.998886 |
ff78a1915b3e3501e0d70e95522017a7077dd921 | 3,150 | md | Markdown | sdk-api-src/content/bdatif/nf-bdatif-itunerequestinfo-getpreviousprogram.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/bdatif/nf-bdatif-itunerequestinfo-getpreviousprogram.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/bdatif/nf-bdatif-itunerequestinfo-getpreviousprogram.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:bdatif.ITuneRequestInfo.GetPreviousProgram
title: ITuneRequestInfo::GetPreviousProgram (bdatif.h)
description: The GetPreviousProgram method creates a new tune request with channel or program locator information for the previous service.
helpviewer_keywords: ["GetPreviousProgram","GetPreviousProgram method [Microsoft TV Technologies]","GetPreviousProgram method [Microsoft TV Technologies]","ITuneRequestInfo interface","ITuneRequestInfo interface [Microsoft TV Technologies]","GetPreviousProgram method","ITuneRequestInfo.GetPreviousProgram","ITuneRequestInfo::GetPreviousProgram","ITuneRequestInfoGetPreviousProgram","bdatif/ITuneRequestInfo::GetPreviousProgram","mstv.itunerequestinfo_getpreviousprogram"]
old-location: mstv\itunerequestinfo_getpreviousprogram.htm
tech.root: mstv
ms.assetid: 9c03a5c9-9dc1-4163-bbc8-8dae2037eb24
ms.date: 12/05/2018
ms.keywords: GetPreviousProgram, GetPreviousProgram method [Microsoft TV Technologies], GetPreviousProgram method [Microsoft TV Technologies],ITuneRequestInfo interface, ITuneRequestInfo interface [Microsoft TV Technologies],GetPreviousProgram method, ITuneRequestInfo.GetPreviousProgram, ITuneRequestInfo::GetPreviousProgram, ITuneRequestInfoGetPreviousProgram, bdatif/ITuneRequestInfo::GetPreviousProgram, mstv.itunerequestinfo_getpreviousprogram
req.header: bdatif.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- ITuneRequestInfo::GetPreviousProgram
- bdatif/ITuneRequestInfo::GetPreviousProgram
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- bdatif.h
api_name:
- ITuneRequestInfo.GetPreviousProgram
---
# ITuneRequestInfo::GetPreviousProgram
## -description
The <b>GetPreviousProgram</b> method creates a new tune request with channel or program locator information for the previous service.
## -parameters
### -param CurrentRequest [in]
Specifies the current request.
### -param TuneRequest [out]
Pointer to a variable that receives a tune request for the previous service in the current transport stream.
## -returns
The method returns an <b>HRESULT</b>. Possible values include those in the following table.
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>S_OK</b></dt>
</dl>
</td>
<td width="60%">
The method succeeded.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_POINTER</b></dt>
</dl>
</td>
<td width="60%">
<i>CurrentRequest</i> is not valid, or <i>TuneRequest</i> is <b>NULL</b>.
</td>
</tr>
</table>
## -remarks
This method might be used by a custom Guide Store Loader to enumerate the available services on a transport stream.
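A minimal usage sketch follows (the variable names are hypothetical; it assumes <i>pInfo</i> is a valid <b>ITuneRequestInfo</b> pointer and <i>pCurrent</i> is the tune request currently in effect):

```cpp
// Ask for a tune request targeting the previous service in the transport stream.
ITuneRequest *pPrevious = NULL;
HRESULT hr = pInfo->GetPreviousProgram(pCurrent, &pPrevious);
if (SUCCEEDED(hr) && pPrevious != NULL)
{
    // ... hand pPrevious to the tuner, then release the reference.
    pPrevious->Release();
}
```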
## -see-also
<a href="/windows/desktop/DirectShow/error-and-success-codes">Error and Success Codes</a>
<a href="/previous-versions/windows/desktop/api/bdatif/nn-bdatif-itunerequestinfo">ITuneRequestInfo Interface</a> | 29.166667 | 472 | 0.782857 | eng_Latn | 0.408804 |
ff78ea7b8d3a8f6cfba2ac14b0f1ab495aa78b9c | 160 | md | Markdown | README.md | sonseong10/basic | 974f918a57942c1cdd84b4279feac993aee7b22f | [
"Beerware"
] | 1 | 2021-02-20T04:46:59.000Z | 2021-02-20T04:46:59.000Z | README.md | sonseong10/basic | 974f918a57942c1cdd84b4279feac993aee7b22f | [
"Beerware"
] | 1 | 2021-03-04T02:28:45.000Z | 2021-03-04T02:43:30.000Z | README.md | sonseong10/basic | 974f918a57942c1cdd84b4279feac993aee7b22f | [
"Beerware"
] | null | null | null | # Basic Projects
Basic HTML5, CSS3, Vanilla JS Projects 12가지 미니 프로젝트 입니다.
## Github 배포
(2021.03.03 기준)
[확인하기](https://sonseong10.github.io/basic/main.html)
| 16 | 56 | 0.71875 | kor_Hang | 0.965487 |
ff7935559c988e9d06abefef60ab1fa94b700551 | 3,607 | md | Markdown | content/publication/Lieck2020/index.md | fabianmoss/website | a61653979971f5d1fadd6390a51c8982c76b5ccc | [
"MIT"
] | null | null | null | content/publication/Lieck2020/index.md | fabianmoss/website | a61653979971f5d1fadd6390a51c8982c76b5ccc | [
"MIT"
] | null | null | null | content/publication/Lieck2020/index.md | fabianmoss/website | a61653979971f5d1fadd6390a51c8982c76b5ccc | [
"MIT"
] | null | null | null | ---
title: "The Tonal Diffusion Model"
authors: [Robert Lieck, admin, Martin Rohrmeier]
date: "2020-10-01T00:00:00Z"
doi: "10.5334/tismir.46"
# Schedule page publish date (NOT publication's date).
publishDate: "2020-10-01T00:00:00Z"
# Publication type.
# Legend: 0 = Uncategorized; 1 = Conference paper; 2 = Journal article;
# 3 = Preprint / Working Paper; 4 = Report; 5 = Book; 6 = Book section;
# 7 = Thesis; 8 = Patent
publication_types: ["2"]
# Publication name and optional abbreviated publication name.
publication: "*Transactions of the International Society of Music Information Retrieval, 3*(1), 153-164"
publication_short: ""
abstract: >
Pitch-class distributions are of central relevance in music information retrieval, computational musicology
and various other fields, such as music perception and cognition. However, despite their structure being
closely related to the cognitively and musically relevant properties of a piece, many existing approaches
treat pitch-class distributions as fixed templates.
In this paper, we introduce the Tonal Diffusion Model, which provides a more structured and interpretable
statistical model of pitch-class distributions by incorporating geometric and algebraic structures known
from music theory as well as insights from music cognition. Our model explains the pitch-class distributions
of musical pieces by assuming tones to be generated through a latent cognitive process on the Tonnetz,
a well-established representation for harmonic relations. Specifically, we assume that all tones in a piece
are generated by taking a sequence of interval steps on the Tonnetz starting from a unique tonal origin.
We provide a description in terms of a Bayesian generative model and show how the latent variables and
parameters can be efficiently inferred.
The model is quantitatively evaluated on a corpus of 248 pieces from the Baroque, Classical, and
Romantic era and describes the empirical pitch-class distributions more accurately than conventional
template-based models. On three concrete musical examples, we demonstrate that our model captures
relevant harmonic characteristics of the pieces in a compact and interpretable way, also reflecting stylistic
aspects of the respective epoch.
# Summary. An optional shortened abstract.
summary: ""
tags:
- Tonnetz
- Pitch-Class Distributions
- Cognitive Modeling
- Tonality
- Bayesian Generative Model
featured: false
# links:
# - name: ""
# url: ""
url_pdf: "" # http://arxiv.org/pdf/1512.04133v1
url_code: 'https://github.com/DCMLab/tonal-diffusion-model'
url_dataset: ''
url_poster: ''
url_project: ''
url_slides: ''
url_source: ''
url_video: ''
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
image:
caption: "" # 'Image credit: [**Unsplash**](https://unsplash.com/photos/jdD8gXaTZsc)'
focal_point: "Center" # Options: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight
preview_only: false
# Associated Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `internal-project` references `content/project/internal-project/index.md`.
# Otherwise, set `projects: []`.
projects: [music-theory, corpus-studies]
# Slides (optional).
# Associate this publication with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides: "example"` references `content/slides/example/index.md`.
# Otherwise, set `slides: ""`.
slides: ""
---
| 42.435294 | 118 | 0.765456 | eng_Latn | 0.971673 |
ff79b79b4c6b425281467aa2b19ef84727f3ca35 | 17 | md | Markdown | README.md | AldiRvn/ara-go-snipppet | 9bd447a92278cc8bcfc1480cebf53daee0fa3d56 | [
"MIT"
] | null | null | null | README.md | AldiRvn/ara-go-snipppet | 9bd447a92278cc8bcfc1480cebf53daee0fa3d56 | [
"MIT"
] | null | null | null | README.md | AldiRvn/ara-go-snipppet | 9bd447a92278cc8bcfc1480cebf53daee0fa3d56 | [
"MIT"
] | null | null | null | # ara-go-snipppet | 17 | 17 | 0.764706 | swe_Latn | 0.836901 |
ff7a852ffd701e3c627792fe530fcd0ec9bbbbbe | 485 | md | Markdown | catalog/nishikawaguchi-university-rakuen-club/en-US_nishikawaguchi-university-rakuen-club.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/nishikawaguchi-university-rakuen-club/en-US_nishikawaguchi-university-rakuen-club.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/nishikawaguchi-university-rakuen-club/en-US_nishikawaguchi-university-rakuen-club.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Nishikawaguchi University Rakuen Club

- **type**: manga
- **volumes**: 2
- **original-name**: 西川口大学楽園倶楽部
- **start-date**: 1996-02-19
- **end-date**: 1996-02-19
## Tags
- comedy
- ecchi
- romance
- school
## Authors
- Takebayashi
- Takeshi (Story & Art)
## Links
- [My Anime list](https://myanimelist.net/manga/20523/Nishikawaguchi_University_Rakuen_Club)
| 18.653846 | 94 | 0.668041 | kor_Hang | 0.139525 |
ff7b0840f84efc1fb5eb360b9edb6e87f5836e8c | 113 | md | Markdown | README.md | Mananhina/Titanic-Machine_Learning | 18273fef4833226209cc2260f1a3debdd5d5afb8 | [
"BSD-2-Clause"
] | 1 | 2021-11-27T07:53:31.000Z | 2021-11-27T07:53:31.000Z | README.md | Mananhina/Titanic-Machine_Learning | 18273fef4833226209cc2260f1a3debdd5d5afb8 | [
"BSD-2-Clause"
] | null | null | null | README.md | Mananhina/Titanic-Machine_Learning | 18273fef4833226209cc2260f1a3debdd5d5afb8 | [
"BSD-2-Clause"
] | 1 | 2021-11-27T07:53:47.000Z | 2021-11-27T07:53:47.000Z | # Titanic-Machine_Learning
Solution of the popular prediction problem witn using a K-Nearest Neighbors algorithm
| 37.666667 | 85 | 0.849558 | eng_Latn | 0.980036 |
ff7b54e10dd2f883fa4f2245e3e73956fdb75653 | 1,203 | md | Markdown | 流行/Always-太阳的后裔OST/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | 17 | 2020-12-01T05:27:50.000Z | 2022-03-28T05:03:34.000Z | 流行/Always-太阳的后裔OST/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | null | null | null | 流行/Always-太阳的后裔OST/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | 2 | 2021-08-24T08:58:58.000Z | 2022-02-08T08:22:52.000Z |
**Always双手简谱** 和五线谱完全对应。
_Always_
是韩国KBS电视台于2016年2月24日起播出的水木迷你连续剧《太阳的后裔》的原声音乐。本剧为第一部中国与韩国同步播出的韩剧。该剧主要讲述了特战部队海外派兵组组长刘时镇和外科医生姜暮烟,在韩国和派兵地区之间往返相爱的故事。
另外,Always此曲由歌手尹美莱演唱。尹美莱(韩文名:윤미래),女,1981年5月31日出生于美国德克萨斯州波特兰,美韩混血。
同时,网站还为大家提供了太阳的后裔中的另一首插曲《[Everytime](Music-6827-Everytime-太阳的后裔OST.html
"Everytime")》的曲谱下载
歌词下方是 _Always钢琴谱_ ,希望大家喜欢。
### Always歌词:
그대를 바라볼 때면 모든 게 멈추죠
When I gaze at you, everything seems to stop
언제부턴지 나도 모르게였죠
I don't even know when it began
어느 날 꿈처럼 그대 다가와
One day, like a dream, you came to me
내 맘을 흔들죠
And shook my heart
운명이란 걸 나는 느꼈죠
I felt that it was destiny
I love you
듣고 있나요
Are you listening?
Only you
눈을 감아봐요
Close your eyes
바람에 흩날려 온 그대 사랑
Your love came drifting in on the wind
Whenever wherever you are
Whenever wherever you are
love love love
어쩌다 내가 널 사랑했을까
How did I come to love you?
밀어내려 해도 내 가슴이
Even when I tried to push you away, my heart
널 알아봤을까
Must have recognized you
I love you
듣고 있나요
Are you listening?
Only you
눈을 감아봐요
Close your eyes
모든 게 변해도 변하지 않아
Even if everything changes, this will not
넌 나의 난 너의 사랑
You are my love, and I am yours
그대 조금 돌아온대도
Even if you take a little longer to come back
다시 나를 스쳐지나더라도
Even if you pass me by again
괜찮아요 그댈 위해
It's all right; for you
내가 여기 있을게
I will be right here
I love you
잊지 말아요
Please don't forget
Only you
내 눈물의 고백
My tearful confession
바람에 흩날려 온 그대 사랑
Your love came drifting in on the wind
Whenever wherever you are
Whenever wherever you are
ff7c29728f43e837c98bc7f7406bd805c8e31a3c | 1,123 | md | Markdown | api/PowerPoint.ColorFormat.RGB.md | MarkWithC/VBA-Docs | a43a38a843c95cbe8beed2a15218a5aeca4df8fb | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-01-24T16:08:55.000Z | 2022-01-24T16:08:55.000Z | api/PowerPoint.ColorFormat.RGB.md | MarkWithC/VBA-Docs | a43a38a843c95cbe8beed2a15218a5aeca4df8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/PowerPoint.ColorFormat.RGB.md | MarkWithC/VBA-Docs | a43a38a843c95cbe8beed2a15218a5aeca4df8fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ColorFormat.RGB property (PowerPoint)
keywords: vbapp10.chm506002
f1_keywords:
- vbapp10.chm506002
ms.prod: powerpoint
api_name:
- PowerPoint.ColorFormat.RGB
ms.assetid: 5bb68052-5931-2096-277c-fb44c76b37eb
ms.date: 06/08/2017
ms.localizationpriority: medium
---
# ColorFormat.RGB property (PowerPoint)
Returns or sets the red-green-blue (RGB) value of the specified color. Read/write.
## Syntax
_expression_. `RGB`
_expression_ A variable that represents a [ColorFormat](PowerPoint.ColorFormat.md) object.
## Return value
Long
## Example
This example sets the background color for the first shape on the first slide of the active presentation to red and Accent 1 to green in the theme for the first slide master.
```vb
With ActivePresentation
Dim oCF As ColorFormat
Set oCF = .Slides(1).Shapes(1).Fill.ForeColor
oCF.RGB = RGB(255, 0, 0)
.Designs(1).SlideMaster.Theme.ThemeColorScheme(msoThemeAccent1).RGB = RGB(0, 255, 0)
End With
```
## See also
[ColorFormat Object](PowerPoint.ColorFormat.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
| 20.053571 | 174 | 0.747106 | eng_Latn | 0.768925 |
ff7ce1660ed69bab57ecf6fe0a83df47eebe7c96 | 4,041 | md | Markdown | docs/vs-2015/extensibility/debugger/registering-a-custom-debug-engine.md | HiDeoo/visualstudio-docs.fr-fr | db4174a3cd6d03edc8bbf5744c3f917e4b582cb3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/registering-a-custom-debug-engine.md | HiDeoo/visualstudio-docs.fr-fr | db4174a3cd6d03edc8bbf5744c3f917e4b582cb3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/registering-a-custom-debug-engine.md | HiDeoo/visualstudio-docs.fr-fr | db4174a3cd6d03edc8bbf5744c3f917e4b582cb3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Inscription d’un moteur de débogage personnalisé | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-sdk
ms.topic: conceptual
helpviewer_keywords:
- debug engines, registering
ms.assetid: 9984cd3d-d34f-4662-9ace-31766499abf5
caps.latest.revision: 7
ms.author: gregvanl
manager: jillfra
ms.openlocfilehash: 9cf06e881034b980b8e40e095779007b3c7fa6f6
ms.sourcegitcommit: 6cfffa72af599a9d667249caaaa411bb28ea69fd
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/02/2020
ms.locfileid: "65703604"
---
# <a name="registering-a-custom-debug-engine"></a>Inscription d’un moteur de débogage personnalisé
[!INCLUDE[vs2017banner](../../includes/vs2017banner.md)]
Le moteur de débogage doit s’inscrire en tant que fabrique de classe en suivant les conventions COM et s’inscrire auprès de Visual Studio via la sous-clé de Registre Visual Studio.
> [!NOTE]
> Vous trouverez un exemple de la procédure d’inscription d’un moteur de débogage dans l’exemple TextInterpreter, qui est généré dans le cadre du [Didacticiel : génération d’un moteur de débogage à l’aide d’ATL com](https://msdn.microsoft.com/9097b71e-1fe7-48f7-bc00-009e25940c24).
## <a name="dll-server-process"></a>Processus serveur DLL
En règle générale, un moteur de débogage est implémenté dans sa propre DLL en tant que serveur COM. Cela signifie que le moteur de débogage doit inscrire le CLSID de sa fabrique de classe avec COM avant que Visual Studio puisse y accéder. Ensuite, le moteur de débogage doit s’inscrire auprès de Visual Studio lui-même afin d’établir toutes les propriétés (également appelées métriques) prises en charge par le moteur de débogage. Le choix des métriques écrites dans la sous-clé de Registre Visual Studio pour le moteur de débogage dépend des fonctionnalités prises en charge par le moteur de débogage.
Les [applications auxiliaires du SDK pour le débogage](../../extensibility/debugger/reference/sdk-helpers-for-debugging.md) décrivent non seulement les emplacements de Registre nécessaires à l’inscription d’un moteur de débogage. elle décrit également la bibliothèque dbgmetric. lib, qui contient un certain nombre de déclarations et de fonctions utiles pour les développeurs C++ qui facilitent la manipulation du Registre.
### <a name="example"></a>Exemple
Voici un exemple typique (à partir de l’exemple TextInterpreter) qui montre comment utiliser la `SetMetric` fonction (à partir de dbgmetric. lib) pour inscrire un moteur de débogage avec Visual Studio. Les métriques transmises sont également définies dans dbgmetric. lib.
> [!NOTE]
> TextInterpreter est un moteur de débogage de base ; elle n’implémente pas, et par conséquent, n’enregistre pas les autres fonctionnalités. Un moteur de débogage plus complet contient une liste complète d' `SetMetric` appels ou leur équivalent, un pour chaque fonctionnalité prise en charge par le moteur de débogage.
```
// Define base registry subkey to Visual Studio.
static const WCHAR strRegistrationRoot[] = L"Software\\Microsoft\\VisualStudio\\8.0";
HRESULT CTextInterpreterModule::RegisterServer(BOOL bRegTypeLib, const CLSID * pCLSID)
{
SetMetric(metrictypeEngine, __uuidof(Engine), metricName, L"Text File", false, strRegistrationRoot);
SetMetric(metrictypeEngine, __uuidof(Engine), metricCLSID, CLSID_Engine, false, strRegistrationRoot);
SetMetric(metrictypeEngine, __uuidof(Engine), metricProgramProvider, CLSID_MsProgramProvider, false, strRegistrationRoot);
return base::RegisterServer(bRegTypeLib, pCLSID);
}
```
## <a name="see-also"></a>Voir aussi
[Création d’un moteur de débogage personnalisé](../../extensibility/debugger/creating-a-custom-debug-engine.md)
[Applications auxiliaires du kit de développement logiciel pour le débogage](../../extensibility/debugger/reference/sdk-helpers-for-debugging.md)
[Didacticiel : création d’un moteur de débogage à l’aide d’ATL COM](https://msdn.microsoft.com/9097b71e-1fe7-48f7-bc00-009e25940c24)
| 70.894737 | 605 | 0.783222 | fra_Latn | 0.925537 |
ff7da75617bfcee3f0c01807413baf3a32c5b7d7 | 3,095 | md | Markdown | CONTRIBUTING.md | ffadilaputra/id.vuejs.org | efe0c2d0e977f94c142e62504f07cd3352119460 | [
"MIT"
] | 2 | 2019-03-01T04:51:46.000Z | 2019-03-02T09:22:12.000Z | CONTRIBUTING.md | ffadilaputra/id.vuejs.org | efe0c2d0e977f94c142e62504f07cd3352119460 | [
"MIT"
] | null | null | null | CONTRIBUTING.md | ffadilaputra/id.vuejs.org | efe0c2d0e977f94c142e62504f07cd3352119460 | [
"MIT"
] | null | null | null | # Berkontribusi ke Repositori Ini
Repositori ini adalah repositori yang dikhususkan untuk menerjemahkan panduan resmi Vue.js ke Bahasa Indonesia. Jika kalian ingin berkontribusi, silahkan ikuti panduan yang dituangkan di berkas ini.
## Cara Berkontribusi
Kalian bisa berkontribusi dengan menerjemahkan halaman di situs ini ke Bahasa Indonesia.
## Alur Untuk Berkontribusi
1. Fork repositori ini.
2. Buat isu di repositori ini, jelaskan berkas mana yang ingin kalian terjemahkan. Isu ini akan membantu teman-teman lain yang ingin berkontribusi tahu bawha berkas tersebut sedang kalian kerjakan, sehingga tidak terjadi penerjemahan di berkas yang sama di waktu yang sama. Format nama isu yang disarankan adalah: `Translate <nama-halaman>`.
3. Satu isu hanya untuk satu berkas, dan satu pull request hanya untuk satu berkas. Pull request kalian akan ditolak jika menerjemahkan lebih dari satu berkas dalam satu pull request. Aturan ini untuk memudahkan perawat repositori dalam memeriksa pull request.
4. Jika isu kalian tidak ada kejelasan dalam waktu 3 hari, maka isu anda akan dihapus sehingga orang lain bisa mengambil alih penerjemahan yang hendak kalian lakukan.
5. Setelah selesai menerjemahkan, kirimkan pull request kalian.
6. Perawat repositori ini akan melakukan review kurang lebih dalam waktu 1 - 5 hari.
**PENTING: Jangan lakukan penerjemahan / kirim pull request di Jum'at, karena di hari itu repositori ini akan disinkronisasi dengan repositori utama Vue.js**
## Aturan-aturan Berkontribusi
### Penggunaan Bahasa
Gunakan Bahasa Indonesia yang baik dan benar, hindari penggunaan singkatan-singkatan tanpa penjelasan kepanjangan dari singkatan tersebut. Hindari penggunaan bahasa gaul seperti "loe gue".
Fokus terhadap apa yang hendak disampaikan, terjemahan kalian tidak perlu harus 100% per kata sesuai dengan dokumentasi resmi berbahasa Inggris. Selama maksud dari kalimat yang diterjemahkan tersampaikan.
Gunakan kalimat ganti "kami", "kita", "kalian" untuk merujuk kepada subjek. Gunakan "kami" / "kita" ketika hendak menerjemahkan "we" / "me" / "us". Gunakan "kalian" untuk menerjemahkan "you".
Jangan menerjemahkan kata yang belum ada / masih asing padanannya di Bahasa Indonesia. Contoh: bundler, framework, library, runtime, build tools, artifacts, dsb.
Gunakan format _italic_/_miring_ saat pertama kali menggunakan kata asing tersebut.
### Sopan dan berprasangka baik
Berkomunikasilah dengan sopan, baik dalam terjemahan atau dalam pull request dan interaksi lainnya di repositori ini. Bersikap sopan dan selalu berprasangka baik dengan orang lain yang terlibat dalam kontribusi di repositori ini.
### Minta teman anda untuk membantu review
Pull request yang juga direview oleh teman anda akan lebih cepat diproses daripada pull request yang tidak ada reviewer lain kecuali dari perawat repositori ini. Review dari teman kalian juga akan membantu kalian menemukan _typo_ dan kesalahan-kesalahan lain yang mungkin kalian lewatkan.
## Kontak perawat repositori
Hubungi [@adityapurwa](https://github.com/adityapurwa) untuk pertanyaan seputar repositori ini. | 70.340909 | 341 | 0.81454 | ind_Latn | 0.977772 |
ff7da7a0d25ef40f6b8dbde63acd666d443984d7 | 1,898 | md | Markdown | doc/release-notes.md | apollygon/soferox | 336162661aaa861dc36c35fd6a30d08a08e09ece | [
"MIT"
] | 4 | 2017-10-03T11:17:50.000Z | 2019-05-04T03:22:08.000Z | doc/release-notes.md | apollygon/u098jkpo | e06a6b3bd7d2fad55acf6a33a128b90f55e5cee4 | [
"MIT"
] | null | null | null | doc/release-notes.md | apollygon/u098jkpo | e06a6b3bd7d2fad55acf6a33a128b90f55e5cee4 | [
"MIT"
] | 1 | 2018-09-21T09:33:29.000Z | 2018-09-21T09:33:29.000Z | Soferox Core version 2.16.0 is now available from:
<https://soferox.org/downloads/>
Please report bugs using the issue tracker at GitHub:
<https://github.com/soferox/soferox/issues>
How to Upgrade
==============
If you are running an older version, shut it down. Wait until it has completely
shut down (which might take a few minutes for older versions), then run the
installer (on Windows) or just copy over `/Applications/Soferox-Qt` (on Mac)
or `soferoxd`/`soferox-qt` (on Linux).
The first time you run version 2.16.0 or newer, your chainstate database will be converted to a
new format, which will take anywhere from a few minutes to half an hour,
depending on the speed of your machine.
Note that the block database format also changed in version 2.1.0.6 and there is no
automatic upgrade code from before version 2.1.0.6 to version 2.16.0 or higher.
However, as usual, old wallet versions are still supported.
Downgrading warning
-------------------
Wallets created in 2.16.0 and later are not compatible with versions prior to 2.16.0
and will not work if you try to use newly created wallets in older versions. Existing
wallets that were created with older versions are not affected by this.
Compatibility
==============
Soferox Core is extensively tested on multiple operating systems using
the Linux kernel, macOS 10.8+, and Windows Vista and later. Windows XP is not supported.
Soferox Core should also work on most other Unix-like systems but is not
frequently tested on them.
Notable changes
===============
Example item
-------------
Example item for a notable change.
2.16.0 change log
------------------
(to be filled in at release time)
Credits
=======
Thanks to everyone who directly contributed to this release:
(to be filled in at release time)
As well as everyone that helped translating on [Transifex](https://www.transifex.com/projects/p/bitcoin/).
| 30.126984 | 106 | 0.734984 | eng_Latn | 0.999285 |
ff7ea0f8db9258e9fd4fcf1d9fe0e953e2a8cc5a | 2,295 | md | Markdown | Samples/Stackdriver Logging API/v2beta1/README.md | vinaykarora/Google-API | 1f52eb44eb68ec1b49e5931f2a0b0cf3a4dbd6fd | [
"Apache-2.0"
] | 254 | 2015-01-06T14:57:01.000Z | 2022-01-15T18:27:54.000Z | Samples/Stackdriver Logging API/v2beta1/README.md | vinaykarora/Google-API | 1f52eb44eb68ec1b49e5931f2a0b0cf3a4dbd6fd | [
"Apache-2.0"
] | 18 | 2015-02-26T10:25:03.000Z | 2018-10-17T19:19:32.000Z | Samples/Stackdriver Logging API/v2beta1/README.md | vinaykarora/Google-API | 1f52eb44eb68ec1b49e5931f2a0b0cf3a4dbd6fd | [
"Apache-2.0"
] | 445 | 2015-01-06T17:50:37.000Z | 2022-02-20T04:43:47.000Z | 
# Unofficial Stackdriver Logging API v2beta1 Samples for .NET
## API Description
Writes log entries and manages your Stackdriver Logging configuration.
[Official Documentation](https://cloud.google.com/logging/docs/)
## Sample Description
These samples show how to access the [Stackdriver Logging API v2beta1](https://cloud.google.com/logging/docs/) with the official [Google .NET client library](https://github.com/google/google-api-dotnet-client)
Tutorials to go along with some of these samples can be found on [www.daimto.com](http://www.daimto.com/)
## Developer Documentation
* [Google API client Library for .NET - Get Started](https://developers.google.com/api-client-library/dotnet/get_started)
* [Supported APIs](https://developers.google.com/api-client-library/dotnet/apis/)
### Installation
NuGet package:
Location: [NuGet Google.Apis.Logging.v2beta1](https://www.nuget.org/packages/Google.Apis.Logging.v2beta1)
Install Command: PM> Install-Package Google.Apis.Logging.v2beta1
```
PM> Install-Package Google.Apis.Logging.v2beta1
```
### Usage
OAuth2
```
var keyFileLocation = @"C:\Users\Daimto\Documents\DaimtoTestEverythingCredentials\Diamto Test Everything Project-29e50502c19b.json";
var user = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
var scopes = new String[] { Google.Apis.Logging.v2beta1.LoggingService.Scope.LoggingReadonly };
var service = GoogleSamplecSharpSample.Loggingv2beta1.Auth.Oauth2Example.GetLoggingService(keyFileLocation, user, scopes);
```
Public API Key
```
var apiKey = "XXXX";
var servicePublicKey = GoogleSamplecSharpSample.Loggingv2beta1.Auth.ApiKeyExample.GetService(apiKey);
```
Service Account
```
var serviceAccountKeyFileLocation = @"C:\Users\Daimto\Documents\DaimtoTestEverythingCredentials\Diamto Test Everything Project-29e50502c19b.json";
var serviceAccountEmail = "googledrivemirrornas@daimto-tutorials-101.iam.gserviceaccount.com";
var scopes = new String[] { Google.Apis.Logging.v2beta1.LoggingService.Scope.Calendar };
var serviceAccountService = GoogleSamplecSharpSample.Loggingv2beta1.Auth.ServiceAccountExample.AuthenticateServiceAccount(serviceAccountKeyFileLocation, serviceAccountEmail, scopes);
```
| 39.568966 | 208 | 0.795207 | yue_Hant | 0.677452 |