**Dataset schema** (column, dtype, observed range or number of classes):

| Column | Type | Range / Values |
|--------|------|----------------|
| hexsha | string | length 40-40 |
| size | int64 | 5-1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3-344 |
| max_stars_repo_name | string | length 5-125 |
| max_stars_repo_head_hexsha | string | length 40-78 |
| max_stars_repo_licenses | sequence | length 1-11 |
| max_stars_count | int64 | 1-368k |
| max_stars_repo_stars_event_min_datetime | string | length 24-24 |
| max_stars_repo_stars_event_max_datetime | string | length 24-24 |
| max_issues_repo_path | string | length 3-344 |
| max_issues_repo_name | string | length 5-125 |
| max_issues_repo_head_hexsha | string | length 40-78 |
| max_issues_repo_licenses | sequence | length 1-11 |
| max_issues_count | int64 | 1-116k |
| max_issues_repo_issues_event_min_datetime | string | length 24-24 |
| max_issues_repo_issues_event_max_datetime | string | length 24-24 |
| max_forks_repo_path | string | length 3-344 |
| max_forks_repo_name | string | length 5-125 |
| max_forks_repo_head_hexsha | string | length 40-78 |
| max_forks_repo_licenses | sequence | length 1-11 |
| max_forks_count | int64 | 1-105k |
| max_forks_repo_forks_event_min_datetime | string | length 24-24 |
| max_forks_repo_forks_event_max_datetime | string | length 24-24 |
| content | string | length 5-1.04M |
| avg_line_length | float64 | 1.14-851k |
| max_line_length | int64 | 1-1.03M |
| alphanum_fraction | float64 | 0-1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01-1 |
---

> **Record** `zh/simulation/simulation-in-hardware.md` (Markdown, 3,344 bytes) from repo `poissonfang/Devguide` at `33a08fc9977326919ab1ac02a4f15a557e319cd4`, license CC-BY-4.0, blob hexsha `53aa0e149ec611b651fac81eafbc5305341475da`, 1 star (2019-03-29), no issue or fork events recorded.
# Simulation-in-Hardware (SIH)

For quadrotors, Simulation-in-Hardware (SIH) is an alternative to [Hardware In The Loop simulation (HITL)](../simulation/hitl.md). In this setup, everything runs on the embedded hardware (the Pixhawk): the controller, the state estimator, and the simulator. The computer connected to the Pixhawk is used only to display the virtual vehicle.

![Simulator MAVLink API](../../assets/diagrams/SIH_diagram.png)

Compared to HITL, SIH has two benefits:

- It ensures synchronous timing by avoiding the bidirectional connection to the computer. As a result, the user does not need a powerful desktop computer.
- The whole simulation remains inside the PX4 environment. Developers who are familiar with PX4 can more easily incorporate their own mathematical model into the simulator. They can, for instance, modify the aerodynamic model or the noise level of the sensors, or even add a sensor to be simulated.

The SIH can be used by new PX4 users to get familiar with PX4 and its different modes and features, and of course to learn to fly a quadrotor with a real RC controller.

The dynamic model is described in this [PDF report](https://github.com/PX4/Devguide/raw/master/assets/simulation/SIH_dynamic_model.pdf). Furthermore, the physical parameters representing the vehicle (such as mass, inertia, and maximum thrust force) can easily be modified from the [SIH parameters](../advanced/parameter_reference.md#simulation-in-hardware).

## Requirements

To run the SIH, you will need [flight controller hardware](https://docs.px4.io/master/en/flight_controller/) (e.g. a Pixhawk-series board). If you are planning to use a [radio control transmitter and receiver pair](https://docs.px4.io/master/en/getting_started/rc_transmitter_receiver.html), you should have that too. Alternatively, a [joystick](https://docs.qgroundcontrol.com/en/SetupView/Joystick.html) together with *QGroundControl* can be used to emulate a radio control system.

SIH can be used on all Pixhawk boards except those based on FMUv2 hardware. It is available in the firmware master branch and in releases from v1.9.0 onward.

## Setting up SIH

Running SIH is as simple as selecting an airframe. Connect the flight controller to the computer with a USB cable, wait for it to boot, and use a ground control station to select the [SIH airframe](../airframes/airframe_reference.md#simulation-copter). The board then reboots. Once the SIH airframe is selected, the SIH module starts automatically and the vehicle should appear on the ground station map.

## Set up the Display

The simulated quadrotor can be displayed in jMAVSim from PX4 v1.11.

1. Close *QGroundControl* (if open).
2. Unplug and replug the autopilot hardware (allow a few seconds for it to boot).
3. Start jMAVSim by calling the script **jmavsim_run.sh** from a terminal:
   ```
   ./Tools/jmavsim_run.sh -q -d /dev/ttyACM0 -b 921600 -r 250 -o
   ```
   where the flags are:
   - `-q` to allow communication with *QGroundControl* (optional).
   - `-d` to start the serial device `/dev/ttyACM0` on Linux. On macOS this would be `/dev/tty.usbmodem1`.
   - `-b` to set the serial baud rate to `921600`.
   - `-r` to set the refresh rate to `250` Hz (optional).
   - `-o` to start jMAVSim in *display only* mode (i.e. the physics engine is turned off and jMAVSim only displays the trajectory given by the SIH in real time).
4. After a few seconds, *QGroundControl* can be opened again.

At this point, the system can be armed and flown. The vehicle can be observed moving in jMAVSim and on the QGC **Fly** view.

## Credits

The SIH was developed by Coriolis g Corporation, a Canadian company developing a new type of Vertical Takeoff and Landing (VTOL) Unmanned Aerial Vehicle (UAV) based on passive coupling systems. Specialized in dynamics, control, and real-time simulation, they provide the SIH as a simple simulator for quadrotors, released for free under the BSD license. Discover their current platform at [www.vogi-vtol.com](http://www.vogi-vtol.com/).
> **Stats:** avg line length 63.09434, max line length 415, alphanum fraction 0.773026, language id eng_Latn (prob 0.979682).

---

> **Record** `Leetcode/Longest Common Prefix/README.md` (Markdown, 666 bytes) from repo `Aphrodicez/Competitive_Coding-Essentials` at `039cd8d2c07c4053bec3e818d49aa59a1b0fd7ac`, license MIT, blob hexsha `53aac48927d41a542b6a4350344b826f34362003`, 3 stars, 11 issue events, 12 fork events (Oct 2021).
## Longest Common Prefix

Write a function to find the longest common prefix string amongst an array of strings.

If there is no common prefix, return an empty string `""`.

**Example 1:**

```
Input: strs = ["flower","flow","flight"]
Output: "fl"
```

**Example 2:**

```
Input: strs = ["dog","racecar","car"]
Output: ""
```

**Constraints:**

- `1 <= strs.length <= 200`
- `0 <= strs[i].length <= 200`
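The README only states the problem; as an illustration (not part of the original repository, shown in Python rather than the repo's language), one straightforward approach shortens a candidate prefix until every string starts with it:

```python
def longest_common_prefix(strs):
    """Return the longest common prefix of a list of strings ("" if none)."""
    if not strs:
        return ""
    # The answer can be no longer than the first string.
    prefix = strs[0]
    for s in strs[1:]:
        # Trim the candidate until it is a prefix of s.
        while not s.startswith(prefix):
            prefix = prefix[:-1]
            if not prefix:
                return ""
    return prefix

print(longest_common_prefix(["flower", "flow", "flight"]))  # fl
print(repr(longest_common_prefix(["dog", "racecar", "car"])))  # ''
```

Each mismatching string can only shrink the candidate, so the loop does at most O(total characters) work.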
> **Stats:** avg line length 31.714286, max line length 128, alphanum fraction 0.632132, language id eng_Latn (prob 0.461667).

---

> **Record** `src/posts/2012-04-25.choose-the-right-batman.md` (Markdown, 419 bytes) from repo `iChris/chrisennsdotcom` at `dfad8795c0652a5db0d36ba7f826ce7dfcd60e0c`, license MIT, blob hexsha `53ab82aa1f138c0f9ece7ffec9f48cf113a9fb2c`, 2 stars, 7 issue events, 2 fork events (2020-2021).
---
title: "Choose the Right Batman"
---

<p>When picking a Batman for your kid's birthday party, make sure to request the right style of Batman.</p>

<p><iframe width="759" height="386" src="https://www.youtube.com/embed/TFYrrCWVqWo?rel=0" frameborder="0" allowfullscreen></iframe></p>

<p><em>Via <a href="https://www.neatorama.com/2012/04/25/batman-themed-childs-birthday-party-is-too-dark/">neatorama.com</a></em></p>
> **Stats:** avg line length 59.857143, max line length 135, alphanum fraction 0.718377, language id eng_Latn (prob 0.343135).

---

> **Record** `README.md` (Markdown, 3,892 bytes) from repo `gpascual/dmenu` at `f30c9fb28883ae05d253098f16720ba272541d05`, license MIT, blob hexsha `53ac038baf48ef1d9df80a3609efc12c23025871`, 1 star (2021-06-12), no issue or fork events recorded.
# dmenu - my personal patched dmenu build

## Included patches

- border
- center
- fuzzymatch
- fuzzyhighlight
- line height
- wmtype (from Baitinq's repo [here](https://github.com/Baitinq/dmenu/blob/master/patches/dmenu-wm_type.diff))

### Maybe-list

- incremental
- non-blocking stdin
- xresources

## Repository structure

### Repository branches

**build**

The latest version of the patched dmenu. It contains all of my chosen patches already merged into the base code of the latest tagged dmenu version. There are other branches merged into this one, like **misc** and **distribution**.

**\*-patch**

There is a branch for each patch that will be applied to my dmenu. These branches use the name of the patch suffixed with `-patch`. For example, the _border_ patch has its own branch called __border-patch__. These branches contain the original base code plus the patch already applied. The idea is to merge all of them into the **build** branch to cook the customized dmenu.

**distribution**

This branch contains an Archlinux build system PKGBUILD and a shell script to help build a _dmenu_ package.

**misc**

Basically, it contains a modified README file.

### Base code updates

People who use suckless tools are used to applying a bunch of patches and building their own customized versions of the tools. I have a tag `gpascual-4.9` that was created from the **build** branch some time ago. This tag includes all patch branches from that time.

         ,-- center-patch -------,
        /                         \
    4.9 ---- border-patch -------- gpascual-4.9
        \                         /
         '-- lineheight-patch ---'

Let's say a new _dmenu_ version `4.10` is released. To update the **build** branch to the new version, start by rebasing each patch branch onto the new tag. We'll be going from this:

          ,--- 4.10
         /
        /  ,-- center-patch -------,
       /  /                         \
    4.9 ------ border-patch -------- gpascual-4.9
       \                            /
        '---- lineheight-patch ----'

to this:

              ,--- gpascual-4.9
             /
            /    ,-- center-patch
           /    /
    4.9 --- 4.10 ---- border-patch
                \
                 '-- lineheight-patch

Now make a merge commit merging all the patch branches and tag the result `gpascual-4.10`:

              ,--- gpascual-4.9
             /
            /    ,-- center-patch -------,
           /    /                         \
    4.9 --- 4.10 ---- border-patch ------- gpascual-4.10
                \                         /
                 '-- lineheight-patch ---'

#### Example: updating to a new tagged version

Here is the above example (updating from 4.9 to 4.10) in long form. Assume the **build** branch is currently pointing to the same commit as the 4.10 tag.

```shell script
git fetch --tags suckless  # 'suckless' is a git remote pointing to https://git.suckless.org/dmenu

git checkout center-patch
git rebase 4.10  # fix conflicts if any

git checkout border-patch
git rebase 4.10

git checkout lineheight-patch
git rebase 4.10

git checkout build
git merge --ff-only center-patch
git merge border-patch
git merge lineheight-patch  # fix merge conflicts if any

git tag -m "4.10 with gpascual's patch selection" gpascual-4.10
```

## Installation

### Archlinux

1. Make sure to checkout the **build** branch or the latest `gpascual-*` tag.
2. Move to the `dist/archlinux` directory inside the project.
3. Execute the `./build.sh` script.
4. Install the generated package with `sudo pacman -U #PACKAGE_FILE_NAME#`.

## External links

* dmenu - https://tools.suckless.org/dmenu/
* The Suckless philosophy - https://suckless.org/philosophy/
* qguv's dwm README (from where I took most of the _Repository structure_ section) - https://github.com/qguv/dwm
* wmtype patch from Baitinq - https://github.com/Baitinq/dmenu/blob/master/patches/dmenu-wm_type.diff
> **Stats:** avg line length 29.263158, max line length 138, alphanum fraction 0.642857, language id eng_Latn (prob 0.983523).

---

> **Record** `docs/guide/assets.md` (Markdown, 4,647 bytes) from repo `bbotto-pdga/vite` at `645f34d2bce3f18e34e7ce2539743bb672a327bc`, license MIT, blob hexsha `53ac153f9465bdaf4a68540fe843dae9ac6e6e6a`, no star, issue, or fork events recorded.
# Static Asset Handling

- Related: [Public Base Path](./build#public-base-path)
- Related: [`assetsInclude` config option](/config/#assetsinclude)

## Importing Asset as URL

Importing a static asset will return the resolved public URL when it is served:

```js
import imgUrl from './img.png'
document.getElementById('hero-img').src = imgUrl
```

For example, `imgUrl` will be `/img.png` during development, and become `/assets/img.2d8efhg.png` in the production build.

The behavior is similar to webpack's `file-loader`. The difference is that the import can be either using absolute public paths (based on project root during dev) or relative paths.

- `url()` references in CSS are handled the same way.
- If using the Vue plugin, asset references in Vue SFC templates are automatically converted into imports.
- Common image, media, and font filetypes are detected as assets automatically. You can extend the internal list using the [`assetsInclude` option](/config/#assetsinclude).
- Referenced assets are included as part of the build assets graph, will get hashed file names, and can be processed by plugins for optimization.
- Assets smaller in bytes than the [`assetsInlineLimit` option](/config/#build-assetsinlinelimit) will be inlined as base64 data URLs.

### Explicit URL Imports

Assets that are not included in the internal list or in `assetsInclude` can be explicitly imported as a URL using the `?url` suffix. This is useful, for example, to import [Houdini Paint Worklets](https://houdini.how/usage).

```js
import workletURL from 'extra-scalloped-border/worklet.js?url'
CSS.paintWorklet.addModule(workletURL)
```

### Importing Asset as String

Assets can be imported as strings using the `?raw` suffix.

```js
import shaderString from './shader.glsl?raw'
```

### Importing Script as a Worker

Scripts can be imported as web workers with the `?worker` or `?sharedworker` suffix.

```js
// Separate chunk in the production build
import Worker from './shader.js?worker'
const worker = new Worker()
```

```js
// sharedworker
import SharedWorker from './shader.js?sharedworker'
const sharedWorker = new SharedWorker()
```

```js
// Inlined as base64 strings
import InlineWorker from './shader.js?worker&inline'
```

Check out the [Web Worker section](./features.md#web-workers) for more details.

## The `public` Directory

If you have assets that are:

- Never referenced in source code (e.g. `robots.txt`)
- Must retain the exact same file name (without hashing)
- ...or you simply don't want to have to import an asset first just to get its URL

Then you can place the asset in a special `public` directory under your project root. Assets in this directory will be served at root path `/` during dev, and copied to the root of the dist directory as-is.

The directory defaults to `<root>/public`, but can be configured via the [`publicDir` option](/config/#publicdir).

Note that:

- You should always reference `public` assets using root absolute path - for example, `public/icon.png` should be referenced in source code as `/icon.png`.
- Assets in `public` cannot be imported from JavaScript.

## new URL(url, import.meta.url)

[import.meta.url](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import.meta) is a native ESM feature that exposes the current module's URL. Combining it with the native [URL constructor](https://developer.mozilla.org/en-US/docs/Web/API/URL), we can obtain the full, resolved URL of a static asset using a relative path from a JavaScript module:

```js
const imgUrl = new URL('./img.png', import.meta.url).href
document.getElementById('hero-img').src = imgUrl
```

This works natively in modern browsers - in fact, Vite doesn't need to process this code at all during development!

This pattern also supports dynamic URLs via template literals:

```js
function getImageUrl(name) {
  return new URL(`./dir/${name}.png`, import.meta.url).href
}
```

During the production build, Vite will perform the necessary transforms so that the URLs still point to the correct location even after bundling and asset hashing. However, the URL string must be static so it can be analyzed; otherwise the code will be left as-is, which can cause runtime errors if `build.target` does not support `import.meta.url`.

```js
// Vite will not transform this
const imgUrl = new URL(imagePath, import.meta.url).href
```

::: warning Does not work with SSR
This pattern does not work if you are using Vite for Server-Side Rendering, because `import.meta.url` has different semantics in browsers vs. Node.js. The server bundle also cannot determine the client host URL ahead of time.
:::
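The asset options referenced throughout this guide are set in the Vite config file; a minimal sketch (the file pattern and size limit are illustrative values, option names follow the config sections linked above):

```js
// vite.config.js - illustrative only, not part of the original guide
import { defineConfig } from 'vite'

export default defineConfig({
  // treat .gltf files as static assets in addition to the built-in list
  assetsInclude: ['**/*.gltf'],
  build: {
    // inline assets smaller than 8 KiB as base64 data URLs
    assetsInlineLimit: 8192,
  },
})
```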
> **Stats:** avg line length 40.060345, max line length 373, alphanum fraction 0.756187, language id eng_Latn (prob 0.992942).

---

> **Record** `README.md` (Markdown, 71,967 bytes) from repo `mehrdadpfg/openstack-exporter` at `3f35fee12c1885284b31972939a199bfc9812440`, license MIT, blob hexsha `53ac46b54cdee37e696d59711256eddc8a47c5d0`, no star, issue, or fork events recorded.
# OpenStack Exporter for Prometheus

[![Build Status][buildstatus]][circleci]

An [OpenStack](https://openstack.org/) exporter for Prometheus written in Golang using the [gophercloud](https://github.com/gophercloud/gophercloud) library.

## Containers and binaries build status

amd64: [![Docker amd64 repository](https://quay.io/repository/niedbalski/openstack-exporter-linux-amd64/status "Docker amd64 Repository on Quay")](https://quay.io/repository/niedbalski/openstack-exporter-linux-amd64) | arm64: [![Docker arm64 repository](https://quay.io/repository/niedbalski/openstack-exporter-linux-arm64/status "Docker arm64 Repository on Quay")](https://quay.io/repository/niedbalski/openstack-exporter-linux-arm64)

### Latest Docker master images

```sh
docker pull quay.io/niedbalski/openstack-exporter-linux-amd64:master
docker pull quay.io/niedbalski/openstack-exporter-linux-arm64:master
```

### Latest Docker release images

```sh
docker pull quay.io/niedbalski/openstack-exporter-linux-amd64:0.7.0
docker pull quay.io/niedbalski/openstack-exporter-linux-arm64:0.7.0
```

### Snaps

The exporter is also available as a snap at [https://snapcraft.io/golang-openstack-exporter](https://snapcraft.io/golang-openstack-exporter).

For installing the latest master build (edge channel):

```sh
snap install --channel edge golang-openstack-exporter
```

For installing the latest stable version (stable channel):

```sh
snap install --channel stable golang-openstack-exporter
```

## Description

The OpenStack exporter exports Prometheus metrics from a running OpenStack cloud for consumption by Prometheus. The cloud credentials and identity configuration should use the [os-client-config](https://docs.openstack.org/os-client-config/latest/) format and must be specified with the `--os-client-config` flag. Other options, such as the binding address/port, can be explored with the `--help` flag.

By default the openstack_exporter serves on port `0.0.0.0:9180` at the `/metrics` URL.
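A matching Prometheus scrape job for that default endpoint might look like the following sketch (the job name and target host are placeholders; only the port and metrics path come from the defaults stated above):

```yaml
# prometheus.yml fragment - illustrative only
scrape_configs:
  - job_name: openstack
    metrics_path: /metrics          # the exporter's default telemetry path
    static_configs:
      - targets: ['exporter-host:9180']   # default --web.listen-address port
```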
You can build it yourself by cloning this repository and running:

```sh
make common-build
./openstack-exporter --os-client-config /etc/openstack/clouds.yaml region.mycloudprovider.org
```

Or alternatively you can use the Docker images, as follows (check the OpenStack configuration section for configuration details):

```sh
docker run -v "$HOME/.config/openstack/clouds.yml":/etc/openstack/clouds.yaml -it quay.io/niedbalski/openstack-exporter-linux-amd64:master my-cloud.org
```

### Command line options

The current list of command line options (from running `--help`):

```sh
usage: openstack-exporter [<flags>] <cloud>

Flags:
  -h, --help                 Show context-sensitive help (also try --help-long and --help-man).
      --web.listen-address=":9180"
                             address:port to listen on
      --web.telemetry-path="/metrics"
                             uri path to expose metrics
      --os-client-config="/etc/openstack/clouds.yaml"
                             Path to the cloud configuration file
      --prefix="openstack"   Prefix for metrics
      --endpoint-type="public"
                             openstack endpoint type to use (i.e: public, internal, admin)
  -d, --disable-metric= ...  multiple --disable-metric can be specified in the format: service-metric (i.e: cinder-snapshots)
      --disable-service.network
                             Disable the network service exporter
      --disable-service.compute
                             Disable the compute service exporter
      --disable-service.image
                             Disable the image service exporter
      --disable-service.volume
                             Disable the volume service exporter
      --disable-service.identity
                             Disable the identity service exporter

Args:
  <cloud>  name or id of the cloud to gather metrics from
```

### OpenStack configuration

The cloud credentials and identity configuration should use the [os-client-config](https://docs.openstack.org/os-client-config/latest/) format and must be specified with the `--os-client-config` flag.
The current list of supported options can be seen in the following example configuration:

```yaml
clouds:
  default:
    region_name: {{ openstack_region_name }}
    identity_api_version: 3
    identity_interface: internal
    auth:
      username: {{ keystone_admin_user }}
      password: {{ keystone_admin_password }}
      project_name: {{ keystone_admin_project }}
      project_domain_name: 'Default'
      user_domain_name: 'Default'
      auth_url: {{ admin_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_admin_port }}/v3
    cacert: |
      ---- BEGIN CERTIFICATE ---
      ...
    verify: true | false // disable || enable SSL certificate verification
```

## Contributing

Please file pull requests or issues on Github. Feel free to request any metrics that might be missing.

### Communication

Please join us at #openstack-exporter on Freenode.

## Metrics

The neutron/nova metrics contain the `*_state` metrics, which are separated by service/agent name. Please note that, by convention, resource metrics such as memory or storage are returned in bytes.

Name | Sample Labels | Sample Value | Description
---------|---------------|--------------|------------
openstack_neutron_agent_state|adminState="up",hostname="compute-01",region="RegionOne",service="neutron-dhcp-agent"|1 or 0 (bool)|
openstack_neutron_floating_ips|region="RegionOne"|4.0 (float)|
openstack_neutron_networks|region="RegionOne"|25.0 (float)|
openstack_neutron_ports|region="RegionOne"|1063.0 (float)|
openstack_neutron_subnets|region="RegionOne"|4.0 (float)|
openstack_neutron_security_groups|region="RegionOne"|10.0 (float)|
openstack_neutron_network_ip_availabilities_total|region="RegionOne",network_id="23046ac4-67fc-4bf6-842b-875880019947",network_name="default-network",cidr="10.0.0.0/16",subnet_name="my-subnet",project_id="478340c7c6bf49c99ce40641fd13ba96"|253.0 (float)|
openstack_neutron_network_ip_availabilities_used|region="RegionOne",network_id="23046ac4-67fc-4bf6-842b-875880019947",network_name="default-network",cidr="10.0.0.0/16",subnet_name="my-subnet",project_id="478340c7c6bf49c99ce40641fd13ba96"|151.0 (float)|
openstack_neutron_routers|region="RegionOne"|134.0 (float)|
openstack_nova_availability_zones|region="RegionOne"|4.0 (float)|
openstack_nova_flavors|region="RegionOne"|4.0 (float)|
openstack_nova_total_vms|region="RegionOne"|12.0 (float)|
openstack_nova_server_status|region="RegionOne",hostname="compute-01","id","name","tenant_id","user_id","address_ipv4","address_ipv6","host_id","uuid","availability_zone"|0.0 (float)|
openstack_nova_running_vms|region="RegionOne",hostname="compute-01",availability_zone="az1"|12.0 (float)|
openstack_nova_local_storage_used_bytes|region="RegionOne",hostname="compute-01"|100.0 (float)|
openstack_nova_local_storage_available_bytes|region="RegionOne",hostname="compute-01"|30.0 (float)|
openstack_nova_memory_used_bytes|region="RegionOne",hostname="compute-01"|40000.0 (float)|
openstack_nova_memory_available_bytes|region="RegionOne",hostname="compute-01"|40000.0 (float)|
openstack_nova_agent_state|hostname="compute-01",region="RegionOne",id="288",service="nova-compute",adminState="enabled",zone="nova"|1.0 or 0 (bool)|
openstack_nova_vcpus_available|region="RegionOne",hostname="compute-01"|128.0 (float)|
openstack_nova_vcpus_used|region="RegionOne",hostname="compute-01"|32.0 (float)|
openstack_cinder_service_state|hostname="compute-01",region="RegionOne",service="cinder-backup",adminState="enabled",zone="nova"|1.0 or 0 (bool)|
openstack_cinder_volumes|region="RegionOne"|4.0 (float)|
openstack_cinder_snapshots|region="RegionOne"|4.0 (float)|
openstack_cinder_volume_status|region="RegionOne","id","name","status","bootable","tenant_id","size","volume_type"|4.0 (float)|
openstack_identity_domains|region="RegionOne"|1.0 (float)|
openstack_identity_users|region="RegionOne"|30.0 (float)|
openstack_identity_projects|region="RegionOne"|33.0 (float)|
openstack_identity_groups|region="RegionOne"|1.0 (float)|
openstack_identity_regions|region="RegionOne"|1.0 (float)|

## Example metrics

```
# HELP openstack_cinder_agent_state agent_state
# TYPE openstack_cinder_agent_state counter
openstack_cinder_volume_status{bootable="",id="11017190-61ab-426f-9366-2299292sadssad",name="",region="Region",size="0",status="",tenant_id="",volume_type=""} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-01",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-01",region="Region",service="cinder-scheduler",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-01@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-02",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-02",region="Region",service="cinder-scheduler",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-02@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-03",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-03",region="Region",service="cinder-scheduler",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-03@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-04",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-04@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-05",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-05@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-06",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-06@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-07",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-07@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-09",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-09@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-10",region="Region",service="cinder-backup",zone="nova"} 1.0
openstack_cinder_agent_state{adminState="enabled",hostname="compute-node-10@rbd-1",region="Region",service="cinder-volume",zone="nova"} 1.0
# HELP openstack_cinder_snapshots snapshots
# TYPE openstack_cinder_snapshots gauge
openstack_cinder_snapshots{region="Region"} 0.0
# HELP openstack_cinder_volumes volumes
# TYPE openstack_cinder_volumes gauge
openstack_cinder_volumes{region="Region"} 8.0
# HELP openstack_glance_images images
# TYPE openstack_glance_images gauge
openstack_glance_images{region="Region"} 18.0
# HELP openstack_identity_domains domains
# TYPE openstack_identity_domains gauge
openstack_identity_domains{region="Region"} 1.0
# HELP openstack_identity_groups groups
# TYPE openstack_identity_groups gauge
openstack_identity_groups{region="Region"} 0.0
# HELP openstack_identity_projects projects
# TYPE openstack_identity_projects gauge
openstack_identity_projects{region="Region"} 33.0
# HELP openstack_identity_regions regions
# TYPE openstack_identity_regions gauge
openstack_identity_regions{region="Region"} 1.0
# HELP openstack_identity_users users
# TYPE openstack_identity_users gauge
openstack_identity_users{region="Region"} 39.0
# HELP openstack_neutron_agent_state agent_state
# TYPE openstack_neutron_agent_state counter
openstack_neutron_agent_state{adminState="up",hostname="compute-node-01",region="Region",service="neutron-dhcp-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-01",region="Region",service="neutron-l3-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-01",region="Region",service="neutron-metadata-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-01",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-02",region="Region",service="neutron-dhcp-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-02",region="Region",service="neutron-l3-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-02",region="Region",service="neutron-metadata-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-02",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-03",region="Region",service="neutron-dhcp-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-03",region="Region",service="neutron-l3-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-03",region="Region",service="neutron-metadata-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-03",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-04",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-05",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-06",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-07",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-09",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-10",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-01",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-02",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-03",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-04",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-05",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-07",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-08",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-09",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-10",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-11",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-12",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-13",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-15",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-17",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-18",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-19",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-20",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-21",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-22",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-23",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-24",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-25",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-26",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-27",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-28",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-29",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-31",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-32",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-34",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-35",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-36",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-37",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-38",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-39",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-40",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-42",region="Region",service="neutron-openvswitch-agent"} 1.0 openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-43",region="Region",service="neutron-openvswitch-agent"} 1.0 
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-44",region="Region",service="neutron-openvswitch-agent"} 1.0
openstack_neutron_agent_state{adminState="up",hostname="compute-node-extra-45",region="Region",service="neutron-openvswitch-agent"} 1.0
# HELP openstack_neutron_floating_ips floating_ips
# TYPE openstack_neutron_floating_ips gauge
openstack_neutron_floating_ips{region="Region"} 22.0
# HELP openstack_neutron_networks networks
# TYPE openstack_neutron_networks gauge
openstack_neutron_networks{region="Region"} 130.0
# HELP openstack_neutron_network_ip_availabilities_total network_ip_availabilities_total
# TYPE openstack_neutron_network_ip_availabilities_total gauge
openstack_neutron_network_ip_availabilities_total{region="Region",network_id="00bd4d2d-e8d7-4715-a52d-f9c8378a8ab4",network_name="default-network",cidr="10.0.0.0/16",subnet_name="my-subnet",project_id="4bc6a4b06c11495c8beed2fecb3da5f7"} 253.0
openstack_neutron_network_ip_availabilities_total{region="Region",network_id="00de2fca-b8e4-42b8-84fa-1d88648e08eb",network_name="default-network",cidr="10.0.0.0/16",subnet_name="my-subnet",project_id="7abf4adfd30548a381554b3a4a08cd5d"} 253.0
# HELP openstack_neutron_network_ip_availabilities_used network_ip_availabilities_used
# TYPE openstack_neutron_network_ip_availabilities_used gauge
openstack_neutron_network_ip_availabilities_used{region="Region",network_id="00bd4d2d-e8d7-4715-a52d-f9c8378a8ab4",network_name="default-network",cidr="10.0.0.0/16",subnet_name="my-subnet",project_id="4bc6a4b06c11495c8beed2fecb3da5f7"} 4.0
openstack_neutron_network_ip_availabilities_used{region="Region",network_id="00de2fca-b8e4-42b8-84fa-1d88648e08eb",network_name="default-network",cidr="10.0.0.0/16",subnet_name="my-subnet",project_id="7abf4adfd30548a381554b3a4a08cd5d"} 5.0
# HELP openstack_neutron_ports ports
# TYPE openstack_neutron_ports gauge
openstack_neutron_ports{region="Region"} 1063.0
# HELP openstack_neutron_routers routers
# TYPE openstack_neutron_routers gauge
openstack_neutron_routers{region="Region"} 134.0
# HELP openstack_neutron_security_groups security_groups
# TYPE openstack_neutron_security_groups gauge
openstack_neutron_security_groups{region="Region"} 114.0
# HELP openstack_neutron_subnets subnets
# TYPE openstack_neutron_subnets gauge
openstack_neutron_subnets{region="Region"} 130.0
# HELP openstack_nova_agent_state agent_state
# TYPE openstack_nova_agent_state counter
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-01",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-01",region="Region",service="nova-conductor",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-01",region="Region",service="nova-consoleauth",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-01",region="Region",service="nova-scheduler",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-02",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-02",region="Region",service="nova-conductor",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-02",region="Region",service="nova-consoleauth",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-02",region="Region",service="nova-scheduler",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-03",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-03",region="Region",service="nova-conductor",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-03",region="Region",service="nova-consoleauth",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-03",region="Region",service="nova-scheduler",zone="internal"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-04",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-05",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-06",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-07",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-09",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-10",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-01",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-02",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-03",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-04",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-05",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-07",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-08",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-09",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-10",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-11",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-12",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-13",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-15",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-17",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-18",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-19",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-20",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-21",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-22",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-23",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-24",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-25",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-26",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-27",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-28",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-29",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-31",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-32",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-34",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-35",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-36",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-37",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-38",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-39",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-40",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-42",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-43",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-44",region="Region",service="nova-compute",zone="nova"} 1.0
openstack_nova_agent_state{adminState="enabled",hostname="compute-node-extra-45",region="Region",service="nova-compute",zone="nova"} 1.0
# HELP openstack_nova_availability_zones availability_zones
# TYPE openstack_nova_availability_zones gauge
openstack_nova_availability_zones{region="Region"} 1.0
# HELP openstack_nova_current_workload current_workload
# TYPE openstack_nova_current_workload gauge
openstack_nova_current_workload{aggregate="",hostname="compute-node-01",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-02",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-03",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-04",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-05",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-06",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-07",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-09",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-10",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-01",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-02",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-03",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-04",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-05",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-07",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-08",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-09",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-10",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-11",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-12",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-13",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-15",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-17",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-18",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-19",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-20",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-21",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-22",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-23",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-24",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-25",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-26",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-27",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-28",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-29",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-31",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-32",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-34",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-35",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-36",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-37",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-38",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-39",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-40",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-42",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-43",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-44",region="Region"} 0.0
openstack_nova_current_workload{aggregate="",hostname="compute-node-extra-45",region="Region"} 0.0
# HELP openstack_nova_flavors flavors
# TYPE openstack_nova_flavors gauge
openstack_nova_flavors{region="Region"} 6.0
# HELP openstack_nova_local_storage_available_bytes local_storage_available_bytes
# TYPE openstack_nova_local_storage_available_bytes gauge
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-01",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-02",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-03",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-04",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-05",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-06",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-07",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-09",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-10",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-01",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-02",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-03",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-04",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-05",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-07",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-08",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-09",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-10",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-11",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-12",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-13",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-15",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-17",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-18",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-19",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-20",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-21",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-22",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-23",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-24",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-25",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-26",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-27",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-28",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-29",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-31",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-32",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-34",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-35",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-36",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-37",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-38",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-39",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-40",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-42",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-43",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-44",region="Region"} 1.07823006482432e+14
openstack_nova_local_storage_available_bytes{aggregate="",hostname="compute-node-extra-45",region="Region"} 1.07823006482432e+14
# HELP openstack_nova_local_storage_used_bytes local_storage_used_bytes
# TYPE openstack_nova_local_storage_used_bytes gauge
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-01",region="Region"} 2.147483648e+11
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-02",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-03",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-04",region="Region"} 1.24554051584e+12
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-05",region="Region"} 1.7179869184e+11
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-06",region="Region"} 1.073741824e+12
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-07",region="Region"} 1.073741824e+12
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-09",region="Region"} 7.516192768e+11
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-10",region="Region"} 6.39950127104e+11
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-01",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-02",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-03",region="Region"} 4.422742573056e+12
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-04",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-05",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-07",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-08",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-09",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-10",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-11",region="Region"} 1.7179869184e+11
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-12",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-13",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-15",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-17",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-18",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-19",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-20",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-21",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-22",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-23",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-24",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-25",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-26",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-27",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-28",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-29",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-31",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-32",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-34",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-35",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-36",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-37",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-38",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-39",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-40",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-42",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-43",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-44",region="Region"} 0.0
openstack_nova_local_storage_used_bytes{aggregate="",hostname="compute-node-extra-45",region="Region"} 0.0
# HELP openstack_nova_memory_available_bytes memory_available_bytes
# TYPE openstack_nova_memory_available_bytes gauge
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-01",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-02",region="Region"} 6.751256576e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-03",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-04",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-05",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-06",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-07",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-09",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-10",region="Region"} 6.7513614336e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-01",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-02",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-03",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-04",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-05",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-07",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-08",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-09",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-10",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-11",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-12",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-13",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-15",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-17",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-18",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-19",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-20",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-21",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-22",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-23",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-24",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-25",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-26",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-27",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-28",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-29",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-31",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-32",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-34",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-35",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-36",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-37",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-38",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-39",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-40",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-42",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-43",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-44",region="Region"} 6.7542974464e+10
openstack_nova_memory_available_bytes{aggregate="",hostname="compute-node-extra-45",region="Region"} 6.7542974464e+10
# HELP openstack_nova_memory_used_bytes memory_used_bytes
# TYPE openstack_nova_memory_used_bytes gauge
openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-01",region="Region"} 9.135194112e+09
openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-02",region="Region"} 5.36870912e+08
openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-03",region="Region"} 5.36870912e+08
openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-04",region="Region"} 7.2049754112e+10
openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-05",region="Region"} 9.135194112e+09 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-06",region="Region"} 2.5702694912e+10 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-07",region="Region"} 4.9308237824e+10 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-09",region="Region"} 1.3220446208e+10 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-10",region="Region"} 3.221225472e+10 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-01",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-02",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-03",region="Region"} 2.565865472e+09 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-04",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-05",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-07",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-08",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-09",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-10",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-11",region="Region"} 9.126805504e+09 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-12",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-13",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-15",region="Region"} 5.36870912e+08 
openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-17",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-18",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-19",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-20",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-21",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-22",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-23",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-24",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-25",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-26",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-27",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-28",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-29",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-31",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-32",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-34",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-35",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-36",region="Region"} 5.36870912e+08 
openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-37",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-38",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-39",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-40",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-42",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-43",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-44",region="Region"} 5.36870912e+08 openstack_nova_memory_used_bytes{aggregate="",hostname="compute-node-extra-45",region="Region"} 5.36870912e+08 # HELP openstack_nova_running_vms running_vms # TYPE openstack_nova_running_vms gauge openstack_nova_running_vms{aggregate="",hostname="compute-node-01",region="Region"} 1.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-02",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-03",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-04",region="Region"} 3.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-05",region="Region"} 1.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-06",region="Region"} 3.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-07",region="Region"} 4.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-09",region="Region"} 2.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-10",region="Region"} 6.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-01",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-02",region="Region"} 0.0 
openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-03",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-04",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-05",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-07",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-08",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-09",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-10",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-11",region="Region"} 1.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-12",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-13",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-15",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-17",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-18",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-19",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-20",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-21",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-22",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-23",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-24",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-25",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-26",region="Region"} 0.0 
openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-27",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-28",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-29",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-31",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-32",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-34",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-35",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-36",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-37",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-38",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-39",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-40",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-42",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-43",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-44",region="Region"} 0.0 openstack_nova_running_vms{aggregate="",hostname="compute-node-extra-45",region="Region"} 0.0 # HELP openstack_nova_security_groups security_groups # TYPE openstack_nova_security_groups gauge openstack_nova_security_groups{region="Region"} 5.0 # HELP openstack_nova_total_vms total_vms # TYPE openstack_nova_total_vms gauge openstack_nova_total_vms{region="Region"} 23.0 # HELP openstack_nova_vcpus_available vcpus_available # TYPE openstack_nova_vcpus_available gauge openstack_nova_vcpus_available{aggregate="",hostname="compute-node-01",region="Region"} 48.0 
openstack_nova_vcpus_available{aggregate="",hostname="compute-node-02",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-03",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-04",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-05",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-06",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-07",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-09",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-10",region="Region"} 48.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-01",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-02",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-03",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-04",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-05",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-07",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-08",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-09",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-10",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-11",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-12",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-13",region="Region"} 8.0 
openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-15",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-17",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-18",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-19",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-20",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-21",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-22",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-23",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-24",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-25",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-26",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-27",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-28",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-29",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-31",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-32",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-34",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-35",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-36",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-37",region="Region"} 8.0 
openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-38",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-39",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-40",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-42",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-43",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-44",region="Region"} 8.0 openstack_nova_vcpus_available{aggregate="",hostname="compute-node-extra-45",region="Region"} 8.0 # HELP openstack_nova_vcpus_used vcpus_used # TYPE openstack_nova_vcpus_used gauge openstack_nova_vcpus_used{aggregate="",hostname="compute-node-01",region="Region"} 8.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-02",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-03",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-04",region="Region"} 56.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-05",region="Region"} 8.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-06",region="Region"} 24.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-07",region="Region"} 41.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-09",region="Region"} 12.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-10",region="Region"} 25.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-01",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-02",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-03",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-04",region="Region"} 0.0 
openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-05",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-07",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-08",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-09",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-10",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-11",region="Region"} 8.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-12",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-13",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-15",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-17",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-18",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-19",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-20",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-21",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-22",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-23",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-24",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-25",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-26",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-27",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-28",region="Region"} 0.0 
openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-29",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-31",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-32",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-34",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-35",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-36",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-37",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-38",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-39",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-40",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-42",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-43",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-44",region="Region"} 0.0 openstack_nova_vcpus_used{aggregate="",hostname="compute-node-extra-45",region="Region"} 0.0 ``` [buildstatus]: https://circleci.com/gh/openstack-exporter/openstack-exporter/tree/master.svg?style=shield [circleci]: https://circleci.com/gh/openstack-exporter/openstack-exporter
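Because the exposition above is plain text, a scraped dump like this can be sanity-checked offline with a few lines of scripting. The sketch below is purely illustrative (it is not part of the exporter, and the `sum_gauge` helper and its tiny sample are invented for this example); it sums a gauge across all label sets using only the Python standard library:

```python
import re

def sum_gauge(exposition_text: str, metric_name: str) -> float:
    """Sum every sample of `metric_name` across all label sets.

    Operates on Prometheus text exposition format; `# HELP` and
    `# TYPE` comment lines are ignored because they do not match
    the sample pattern.
    """
    sample_re = re.compile(
        r"^" + re.escape(metric_name) + r"(?:\{[^}]*\})?\s+([0-9.eE+-]+)\s*$"
    )
    total = 0.0
    for line in exposition_text.splitlines():
        match = sample_re.match(line.strip())
        if match:
            total += float(match.group(1))
    return total

# A few lines in the same shape as the exporter output above:
sample = """\
openstack_nova_vcpus_used{aggregate="",hostname="compute-node-01",region="Region"} 8.0
openstack_nova_vcpus_used{aggregate="",hostname="compute-node-04",region="Region"} 56.0
openstack_nova_vcpus_available{aggregate="",hostname="compute-node-01",region="Region"} 48.0
"""

print(sum_gauge(sample, "openstack_nova_vcpus_used"))       # 64.0
print(sum_gauge(sample, "openstack_nova_vcpus_available"))  # 48.0
```

In a live setup you would aggregate in Prometheus itself with a PromQL query such as `sum(openstack_nova_vcpus_used)`; a script like this is only useful for quick inspection of a saved scrape.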
---

*Source file: `CHANGELOG.md` from the crazy-max/docker-rtorrent repository (MIT license).*
# Changelog

## 3.10-0.9.8-0.13.8-r20 (2022/05/02)

* Fix unrar not available since alpine 3.15 (#161)

## 3.10-0.9.8-0.13.8-r19 (2022/04/29)

* Fix GeoIP2 ruTorrent plugin version (#159)
* Optimize Dockerfile (#157)

## 3.10-0.9.8-0.13.8-r18 (2022/04/28)

* Opt-in `WAN_IP` and add `WAN_IP_CMD` env var (#150 #153)
* Check plugins existence (#155)
* Option to disable Nginx access log (#154)
* Alpine Linux 3.15 (#151)
* Use GitHub Actions cache backend (#152)

## 3.10-0.9.8-0.13.8-r17 (2021/08/19)

* Update dependencies (#117)
* Alpine Linux 3.14 (#116)

## 3.10-0.9.8-0.13.8-r16 (2021/08/01)

* Fix Traefik example (#113)
* Add `AUTH_DELAY` env var (#109)
* Add `XMLRPC_SIZE_LIMIT` env var (#107)

## 3.10-0.9.8-0.13.8-r15 (2021/06/14)

* Add `posix` PHP extension (#102)

## 3.10-0.9.8-0.13.8-r14 (2021/05/31)

* `ifconfig.me` as fallback for automatic WAN_IP determination (#96)

## 3.10-0.9.8-0.13.8-r13 (2021/04/13)

* Dynamically manage healthcheck ports (#76)

## 3.10-0.9.8-0.13.8-r12 (2021/04/11)

* Initialize ruTorrent plugins (#74)

## 3.10-0.9.8-0.13.8-r11 (2021/04/11)

* Allow ports customization (#73)

## 3.10-0.9.8-0.13.8-r10 (2021/03/27)

* Add `findutils` package (#67)

## 3.10-0.9.8-0.13.8-r9 (2021/03/21)

* [alpine-s6](https://github.com/crazy-max/docker-alpine-s6/) 3.12-2.2.0.3 (#61)
* cURL 7.68.0

## 3.10-0.9.8-0.13.8-r8 (2021/03/18)

* Upstream Alpine update
* Add support for `linux/arm/v6`

## 3.10-0.9.8-0.13.8-r7 (2021/03/17)

* Multi-platform image (#60)

## 3.10-0.9.8-0.13.8-r6 (2021/03/06)

* Fix auth for ruTorrent and add global `auth_basic` (#53)

## 3.10-0.9.8-0.13.8-r5 (2021/03/05)

* Add `bash` (#52)

## 3.10-0.9.8-0.13.8-r4 (2021/02/22)

* Fix permissions issue
* Review Dockerfile

## 3.10-0.9.8-0.13.8-r3 (2021/02/14)

* ruTorrent 3.10 rev [Novik/ruTorrent@954479f](https://github.com/Novik/ruTorrent/commit/954479ffd00eb58ad14f9a667b3b9b1e108e80a2)
* Do not fail on permission issue
* Switch to buildx bake
* Update mmdb links
* Publish to GHCR
* Allow clearing env for FPM workers
* Traefik v2 example

## 3.10-0.9.8-0.13.8-RC2 (2020/09/23)

* Fix Cloudflare plugin

## 3.10-0.9.8-0.13.8-RC1 (2020/06/26)

* ruTorrent 3.10 rev [Novik/ruTorrent@3446d5a](https://github.com/Novik/ruTorrent/commit/3446d5ae5fb44e5e1517d5bd600ebe3064fea82c)
* XMLRPC 01.58.00
* Libsig 3.0.3
* cURL 7.71.0

## 3.9-0.9.8-0.13.8-RC16 (2020/05/21)

* Add `MAX_FILE_UPLOADS` environment variable (#22)

## 3.9-0.9.8-0.13.8-RC15 (2020/04/27)

* Move downloads to a dedicated volume (#20)
* Switch to Open Container Specification labels as label-schema.org ones are deprecated

> :warning: **UPGRADE NOTES**
> Downloads folder has moved from `/data/rtorrent/downloads` to `/downloads`<br />
> If you have active torrents, it is recommended to create a symlink from your rtorrent folder on your host:<br />
> `cd ./data/rtorrent/ && ln -sf ../../downloads ./`

## 3.9-0.9.8-0.13.8-RC14 (2020/03/27)

* Fix folder creation

## 3.9-0.9.8-0.13.8-RC13 (2020/01/24)

* Move Nginx temp folders to `/tmp`

## 3.9-0.9.8-0.13.8-RC12 (2020/01/02)

* Use [geoip-updater](https://github.com/crazy-max/geoip-updater) Docker image to download MaxMind's GeoIP2 databases

## 3.9-0.9.8-0.13.8-RC11 (2019/12/07)

* Fix timezone

## 3.9-0.9.8-0.13.8-RC10 (2019/11/23)

* Dedicated container for rtorrent logs

## 3.9-0.9.8-0.13.8-RC9 (2019/11/23)

* `.rtorrent.rc` not taken into account

## 3.9-0.9.8-0.13.8-RC8 (2019/11/22)

* Switch to [s6-overlay](https://github.com/just-containers/s6-overlay/) as a process supervisor
* Add `PUID`/`PGID` vars (#12)
* Do not set defaults if `RU_REMOVE_CORE_PLUGINS` is empty
* Nginx mainline base image

## 3.9-0.9.8-0.13.8-RC7 (2019/10/26)

* Base image update

## 3.9-0.9.8-0.13.8-RC6 (2019/10/25)

* Fix CVE-2019-11043

## 3.9-0.9.8-0.13.8-RC5 (2019/10/17)

* Remove `PUID` / `PGID` vars

## 3.9-0.9.8-0.13.8-RC4 (2019/10/16)

* Switch to GitHub Actions
* Stop publishing Docker image on Quay
* Move bootstrap (default) config for rTorrent to `/etc/rtorrent/.rtlocal.rc`
* Run as non-root user
* Prevent exposing nginx version
* Set timezone through tzdata

> :warning: **UPGRADE NOTES**
> As the Docker container now runs as a non-root user, you have to first stop the container and change permissions to volumes:
> ```
> docker-compose stop
> chown -R 1000:1000 data/ passwd/
> docker-compose pull
> docker-compose up -d
> ```

## 3.9-0.9.8-0.13.8-RC3 (2019/09/04)

* Create `share/torrents` for ruTorrent

## 3.9-0.9.8-0.13.8-RC2 (2019/08/07)

* Add healthcheck
* Allow directory listing for WebDAV
* Remove php-fpm access log (already mirrored by nginx)

## 3.9-0.9.8-0.13.8-RC1 (2019/07/22)

* ruTorrent 3.9 rev [Novik/ruTorrent@ec8d8f1](https://github.com/Novik/ruTorrent/commit/ec8d8f1887af57793a671258072b59193a5d8d6c)
* rTorrent 0.9.8 and libTorrent 0.13.8
* XMLRPC 01.55.00
* cURL 7.65.3

## 3.9-0.9.7-0.13.7-RC3 (2019/04/28)

* Add `large_client_header_buffers` Nginx config

## 3.9-0.9.7-0.13.7-RC2 (2019/04/15)

* Add `REAL_IP_FROM`, `REAL_IP_HEADER` and `LOG_IP_VAR` environment variables

## 3.9-0.9.7-0.13.7-RC1 (2019/04/09)

* ruTorrent 3.9

## 3.8-0.9.7-0.13.7-RC7 (2019/01/14)

* Add mktorrent for ruTorrent create plugin
* Replace core ruTorrent GeoIP plugin with [GeoIP2 plugin](https://github.com/Micdu70/geoip2-rutorrent)

## 3.8-0.9.7-0.13.7-RC6 (2019/01/09)

* Allow customizing the auth basic string (Issue #5)

## 3.8-0.9.7-0.13.7-RC5 (2019/01/08)

* Bind ruTorrent HTTP port to unprivileged port: `8080`
* Fix Nginx WebDAV module version
* Update ruTorrent to Novik/ruTorrent@4d3029c
* Update libs (XMLRPC, Libsig, cURL)

## 3.8-0.9.7-0.13.7-RC4 (2018/12/04)

* Nginx `default.conf` overrides our conf (Issue #1)

## 3.8-0.9.7-0.13.7-RC3 (2018/12/03)

* Based on `nginx:stable-alpine`
* Optimize layers

## 3.8-0.9.7-0.13.7-RC2 (2018/06/26)

* Include path error for custom plugins and themes

## 3.8-0.9.7-0.13.7-RC1 (2018/06/25)

* Add ruTorrent 3.8 web client
* Add option to remove core plugins of ruTorrent (default `erasedata,httprpc`)
* Add a bootstrap (default) config for rTorrent in `/etc/.rtlocal.rc`
* Move `/var/rtorrent` to `/data/rtorrent`
* Use Nginx WebDAV module instead of Apache
* Compile Nginx from source for better performance
* Remove Apache2 and implement Nginx WebDAV
* Do not process entrypoint on `htpasswd` command
* Add reverse proxy example with Traefik
* Remove old docker tags `0.9.6-0.13.6` and `0.9.7-0.13.7`
* Do not persist runtime data
* Rename repository `rtorrent-rutorrent` (github and docker hub)

## 0.9.7-0.13.7-RC3 (2018/06/18)

* Force rTorrent process as a daemon through command flag
* Add `.rtorrent.rc` if it does not exist

## 0.9.7-0.13.7-RC2 (2018/06/17)

* Move runtime data to `/var/rtorrent/run`
* Enable WebDAV protocol on `/downloads/complete` with basic auth

## 0.9.7-0.13.7-RC1 (2018/06/16)

* rTorrent 0.9.7 and libTorrent 0.13.7
* Base image updated to Alpine Linux 3.7
* c-ares 1.14.0
* curl 7.60.0
* Move `RTORRENT_HOME` to `/var/rtorrent`
* XMLRPC through nginx over SCGI socket with basic auth
* Do not expose SCGI port (use a local socket instead)
* Run the rTorrent process as a daemon
* Replace deprecated commands in `.rtorrent.rc`
* Review supervisor config

## 0.9.6-0.13.6-RC1 (2018/01/10)

* Initial version
---

*Source file: `user/plugins/social-meta-tags/README.md` from the stanseel/grav-greenid repository (MIT license).*
# Social Meta Tags Plugin

The **Social Meta Tags** Plugin is for [Grav CMS](http://github.com/getgrav/grav).

## Description

Adds all the meta tags needed by Facebook Open Graph and Twitter Cards.

# Features

* [Open Graph](http://ogp.me/) support.
* [Twitter Cards](https://dev.twitter.com/cards/overview) support. You can select between Summary and Large cards.
* [AboutMe plugin](https://github.com/Birssan/grav-plugin-about-me) integration.

# Installation

As this plugin is not yet in the Grav repository, you need to install it manually. From your plugins folder:

```
git clone https://github.com/tucho235/grav-plugin-social-meta-tags social-meta-tags
```

This will clone this repository into the `social-meta-tags` folder.

# Usage

Just enable the plugin; there is no need to edit any template. :)

# Configuration

## Associate Twitter account

Social Meta Tags needs the [AboutMe plugin](https://github.com/Birssan/grav-plugin-about-me). To add or change the Twitter account defined in `twitter:site`, edit your profile in the AboutMe plugin.

## Facebook App Id

Social Meta Tags is able to use [Facebook Open Graph](https://developers.facebook.com/docs/opengraph/getting-started). You need to generate an app_id. Without this property you'll lose admin rights on the Open Graph Facebook page.

# Contributing

If you think any implementation is just not the best, feel free to submit ideas and pull requests. All your comments and suggestions are welcome.
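As with any Grav plugin, enabling can also be done from a per-site config file instead of the admin panel. The sketch below only shows the standard Grav `enabled` switch; any further option names would depend on this plugin's own blueprint and are not shown, since they are not documented above:

```yaml
# user/config/plugins/social-meta-tags.yaml
enabled: true
```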
30.617021
227
0.760945
eng_Latn
0.788121
53ad16b33bb1b9b24dfd35d1259ce863c4725ab6
10,433
md
Markdown
articles/azure-monitor/log-query/smart-analytics.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-monitor/log-query/smart-analytics.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-monitor/log-query/smart-analytics.md
hongman/azure-docs.ko-kr
56e2580d78e1be8ac6b34a50bc4730ab56add9eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Log Analytics smart analytics examples | Microsoft Docs
description: Examples that use smart analytics functions in Log Analytics to analyze user activity.
ms.subservice: logs
ms.topic: conceptual
author: bwren
ms.author: bwren
ms.date: 01/15/2019
ms.openlocfilehash: 82960845e357579b82c493958287cb602d75182e
ms.sourcegitcommit: 3d79f737ff34708b48dd2ae45100e2516af9ed78
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 07/23/2020
ms.locfileid: "87067393"
---
# <a name="log-analytics-smart-analytics-examples"></a>Log Analytics smart analytics examples

This article includes examples that use smart analytics functions in Log Analytics to analyze user activity. You can use these examples to analyze your own applications monitored by Application Insights, or use the concepts in these queries for similar analysis on other data.

See the [Kusto language reference](/azure/kusto/query/) for details on the different keywords used in these samples. Go through a [lesson on writing queries](get-started-queries.md) if you're new to Log Analytics.

## <a name="cohorts-analytics"></a>Cohorts analytics

Cohort analysis tracks the activity of specific groups of users, known as cohorts. It attempts to measure how appealing a service is by measuring the rate of returning users. Users are grouped by the time they first used the service. When analyzing cohorts, we expect to find a decrease in activity over the first tracked periods. Each cohort is titled by the week in which its members were observed for the first time. The following example analyzes the number of activities users performed over five weeks following their first use of the service.
```Kusto let startDate = startofweek(bin(datetime(2017-01-20T00:00:00Z), 1d)); let week = range Cohort from startDate to datetime(2017-03-01T00:00:00Z) step 7d; // For each user we find the first and last timestamp of activity let FirstAndLastUserActivity = (end:datetime) { customEvents | where customDimensions["sourceapp"]=="ai-loganalyticsui-prod" // Check 30 days back to see first time activity | where timestamp > startDate - 30d | where timestamp < end | summarize min=min(timestamp), max=max(timestamp) by user_AuthenticatedId }; let DistinctUsers = (cohortPeriod:datetime, evaluatePeriod:datetime) { toscalar ( FirstAndLastUserActivity(evaluatePeriod) // Find members of the cohort: only users that were observed in this period for the first time | where min >= cohortPeriod and min < cohortPeriod + 7d // Pick only the members that were active during the evaluated period or after | where max > evaluatePeriod - 7d | summarize dcount(user_AuthenticatedId)) }; week | where Cohort == startDate // Finally, calculate the desired metric for each cohort. In this sample we calculate distinct users but you can change // this to any other metric that would measure the engagement of the cohort members. 
| extend r0 = DistinctUsers(startDate, startDate+7d),
         r1 = DistinctUsers(startDate, startDate+14d),
         r2 = DistinctUsers(startDate, startDate+21d),
         r3 = DistinctUsers(startDate, startDate+28d),
         r4 = DistinctUsers(startDate, startDate+35d)
| union (week | where Cohort == startDate + 7d
| extend r0 = DistinctUsers(startDate+7d, startDate+14d),
         r1 = DistinctUsers(startDate+7d, startDate+21d),
         r2 = DistinctUsers(startDate+7d, startDate+28d),
         r3 = DistinctUsers(startDate+7d, startDate+35d) )
| union (week | where Cohort == startDate + 14d
| extend r0 = DistinctUsers(startDate+14d, startDate+21d),
         r1 = DistinctUsers(startDate+14d, startDate+28d),
         r2 = DistinctUsers(startDate+14d, startDate+35d) )
| union (week | where Cohort == startDate + 21d
| extend r0 = DistinctUsers(startDate+21d, startDate+28d),
         r1 = DistinctUsers(startDate+21d, startDate+35d) )
| union (week | where Cohort == startDate + 28d
| extend r0 = DistinctUsers(startDate+28d, startDate+35d) )
// Calculate the retention percentage for each cohort by weeks
| project Cohort, r0, r1, r2, r3, r4,
          p0 = r0/r0*100,
          p1 = todouble(r1)/todouble(r0)*100,
          p2 = todouble(r2)/todouble(r0)*100,
          p3 = todouble(r3)/todouble(r0)*100,
          p4 = todouble(r4)/todouble(r0)*100
| sort by Cohort asc
```

This example produces the following output:

![Cohorts analytics output](media/smart-analytics/cohorts.png)

## <a name="rolling-monthly-active-users-and-user-stickiness"></a>Rolling monthly active users and user stickiness

The following example uses time-series analysis with the [series_fir](/azure/kusto/query/series-firfunction) function, which allows you to perform sliding window computations. The sample application being monitored is an online store that tracks users' activity through custom events. The query tracks two types of user activities, _AddToCart_ and _Checkout_, and defines an _active user_ as one who performed a checkout at least once on a given day.

```Kusto
let endtime = endofday(datetime(2017-03-01T00:00:00Z));
let window = 60d;
let starttime = endtime-window;
let interval = 1d;
let user_bins_to_analyze = 28;
// Create an array of filters coefficients for series_fir(). A list of '1' in our case will produce a simple sum.
let moving_sum_filter = toscalar(range x from 1 to user_bins_to_analyze step 1 | extend v=1 | summarize makelist(v)); // Level of engagement. Users will be counted as engaged if they performed at least this number of activities. let min_activity = 1; customEvents | where timestamp > starttime | where customDimensions["sourceapp"] == "ai-loganalyticsui-prod" // We want to analyze users who actually checked-out in our web site | where (name == "Checkout") and user_AuthenticatedId <> "" // Create a series of activities per user | make-series UserClicks=count() default=0 on timestamp in range(starttime, endtime-1s, interval) by user_AuthenticatedId // Create a new column containing a sliding sum. // Passing 'false' as the last parameter to series_fir() prevents normalization of the calculation by the size of the window. // For each time bin in the *RollingUserClicks* column, the value is the aggregation of the user activities in the // 28 days that preceded the bin. For example, if a user was active once on 2016-12-31 and then inactive throughout // January, then the value will be 1 between 2016-12-31 -> 2017-01-28 and then 0s. | extend RollingUserClicks=series_fir(UserClicks, moving_sum_filter, false) // Use the zip() operator to pack the timestamp with the user activities per time bin | project User_AuthenticatedId=user_AuthenticatedId , RollingUserClicksByDay=zip(timestamp, RollingUserClicks) // Transpose the table and create a separate row for each combination of user and time bin (1 day) | mvexpand RollingUserClicksByDay | extend Timestamp=todatetime(RollingUserClicksByDay[0]) // Mark the users that qualify according to min_activity | extend RollingActiveUsersByDay=iff(toint(RollingUserClicksByDay[1]) >= min_activity, 1, 0) // And finally, count the number of users per time bin. | summarize sum(RollingActiveUsersByDay) by Timestamp // First 28 days contain partial data, so we filter them out. 
| where Timestamp > starttime + 28d
// render as timechart
| render timechart
```

This example produces the following output:

![Rolling monthly users output](media/smart-analytics/rolling-mau.png)

The following example turns the above query into a reusable function, and uses it to calculate rolling user stickiness. Active users in this query are defined only as users who performed a checkout at least once on a given day.

```Kusto
let rollingDcount = (sliding_window_size: int, event_name:string)
{
    let endtime = endofday(datetime(2017-03-01T00:00:00Z));
    let window = 90d;
    let starttime = endtime-window;
    let interval = 1d;
    let moving_sum_filter = toscalar(range x from 1 to sliding_window_size step 1 | extend v=1| summarize makelist(v));
    let min_activity = 1;
    customEvents
    | where timestamp > starttime
    | where customDimensions["sourceapp"]=="ai-loganalyticsui-prod"
    | where (name == event_name)
    | where user_AuthenticatedId <> ""
    | make-series UserClicks=count() default=0 on timestamp in range(starttime, endtime-1s, interval) by user_AuthenticatedId
    | extend RollingUserClicks=series_fir(UserClicks, moving_sum_filter, false)
    | project User_AuthenticatedId=user_AuthenticatedId , RollingUserClicksByDay=zip(timestamp, RollingUserClicks)
    | mvexpand RollingUserClicksByDay
    | extend Timestamp=todatetime(RollingUserClicksByDay[0])
    | extend RollingActiveUsersByDay=iff(toint(RollingUserClicksByDay[1]) >= min_activity, 1, 0)
    | summarize sum(RollingActiveUsersByDay) by Timestamp
    | where Timestamp > starttime + 28d
};
// Use the moving_sum_filter with bin size of 28 to count MAU.
rollingDcount(28, "Checkout")
| join
(
    // Use the moving_sum_filter with bin size of 1 to count DAU.
    rollingDcount(1, "Checkout")
)
on Timestamp
| project sum_RollingActiveUsersByDay1 *1.0 / sum_RollingActiveUsersByDay, Timestamp
| render timechart
```

This example produces the following output:

![User stickiness output](media/smart-analytics/user-stickiness.png)

## <a name="regression-analysis"></a>Regression analysis

This example demonstrates how to create an automated detector for service disruptions based exclusively on an application's trace logs. The detector seeks abnormal, sudden increases in the relative amount of error and warning traces in the application. Two techniques are used to evaluate the service status based on trace logs data:
- Use [make-series](/azure/kusto/query/make-seriesoperator) to convert semi-structured textual trace logs into a metric that represents the ratio between positive and negative trace lines.
- Use [series_fit_2lines](/azure/kusto/query/series-fit-2linesfunction) and [series_fit_line](/azure/kusto/query/series-fit-linefunction) for advanced step-jump detection using time-series analysis with a two-line linear regression.

```Kusto
let startDate = startofday(datetime("2017-02-01"));
let endDate = startofday(datetime("2017-02-07"));
let minRsquare = 0.8;  // Tune the sensitivity of the detection sensor. Values close to 1 indicate very low sensitivity.
// Count all Good (Verbose + Info) and Bad (Error + Fatal + Warning) traces, per day
traces
| where timestamp > startDate and timestamp < endDate
| summarize
    Verbose = countif(severityLevel == 0),
    Info = countif(severityLevel == 1),
    Warning = countif(severityLevel == 2),
    Error = countif(severityLevel == 3),
    Fatal = countif(severityLevel == 4) by bin(timestamp, 1d)
| extend Bad = (Error + Fatal + Warning), Good = (Verbose + Info)
// Determine the ratio of bad traces, from the total
| extend Ratio = (todouble(Bad) / todouble(Good + Bad))*10000
| project timestamp , Ratio
// Create a time series
| make-series RatioSeries=any(Ratio) default=0 on timestamp in range(startDate , endDate -1d, 1d) by 'TraceSeverity'
// Apply a 2-line regression to the time series
| extend (RSquare2, SplitIdx, Variance2,RVariance2,LineFit2)=series_fit_2lines(RatioSeries)
// Find out if our 2-line is trending up or down
| extend (Slope,Interception,RSquare,Variance,RVariance,LineFit)=series_fit_line(LineFit2)
// Check whether the line fit reaches the threshold, and if the spike represents an increase (rather than a decrease)
| project PatternMatch = iff(RSquare2 > minRsquare and Slope>0, "Spike detected", "No Match")
```

## <a name="next-steps"></a>Next steps

- Learn more about the language in the [Data Explorer language reference](/azure/kusto/query).
- Go through a [lesson on writing queries in Log Analytics](get-started-queries.md).
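As an aside, the sliding-window aggregation that `series_fir` performs with an all-ones filter (and normalization disabled) is just a trailing moving sum. A minimal Python sketch of the rolling active-user computation, assuming a dictionary of per-user daily click counts rather than real telemetry, could look like this:

```python
def moving_sum(series, window):
    """Trailing moving sum, like series_fir with an all-ones filter and
    normalization disabled: each bin aggregates the preceding `window`
    bins (inclusive of the current one)."""
    out = []
    for i in range(len(series)):
        out.append(sum(series[max(0, i - window + 1):i + 1]))
    return out

def rolling_active_users(daily_clicks_by_user, window=28, min_activity=1):
    """Count, per day, how many users had at least `min_activity`
    clicks within the trailing `window` days."""
    n_days = len(next(iter(daily_clicks_by_user.values())))
    active = [0] * n_days
    for clicks in daily_clicks_by_user.values():
        rolling = moving_sum(clicks, window)
        for day, total in enumerate(rolling):
            if total >= min_activity:
                active[day] += 1
    return active

# Two users over 5 days with a 3-day window: user "a" clicks on day 0,
# user "b" on day 2, so day 2 is the only day both count as active.
users = {"a": [1, 0, 0, 0, 0], "b": [0, 0, 1, 0, 0]}
print(rolling_active_users(users, window=3))  # [1, 1, 2, 1, 1]
```

This mirrors the Kusto pipeline above step by step: `moving_sum` plays the role of `series_fir`, and the `min_activity` threshold corresponds to the `iff(... >= min_activity, 1, 0)` stage before the final per-day sum.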
47.639269
260
0.738426
kor_Hang
0.966679
53ad261727b6a9aa40b2e6a9dfeea1dc95771b81
8,887
md
Markdown
versoes_online/acf2007-MHenry/42N-Lc/13.md
kmrgo/BibleMarkdown
a59ab8f3d0f9bf3fab67fb7ebcd520da35899f07
[ "Unlicense" ]
2
2019-09-11T14:08:45.000Z
2019-10-21T00:46:59.000Z
versoes_online/acf2007-MHenry/42N-Lc/13.md
ameisehaufen/BibleMarkdown
a59ab8f3d0f9bf3fab67fb7ebcd520da35899f07
[ "Unlicense" ]
null
null
null
versoes_online/acf2007-MHenry/42N-Lc/13.md
ameisehaufen/BibleMarkdown
a59ab8f3d0f9bf3fab67fb7ebcd520da35899f07
[ "Unlicense" ]
null
null
null
# Luke Chapter 13

**1** AND there were present at that same time some that told him of the Galileans, whose blood Pilate had mingled with their sacrifices.

**2** And Jesus answering said unto them: Suppose ye that these Galileans were sinners above all the Galileans, because they suffered such things?

**3** I tell you, Nay; but, except ye repent, ye shall all likewise perish.

**4** And those eighteen, upon whom the tower in Siloam fell and slew them, think ye that they were more guilty than all the men that dwell in Jerusalem?

**5** I tell you, Nay; but, except ye repent, ye shall all likewise perish.

**6** And he spake this parable: A certain man had a fig tree planted in his vineyard; and he came and sought fruit thereon, and found none;

**7** And he said unto the dresser of his vineyard: Behold, these three years I come seeking fruit on this fig tree, and find none. Cut it down; why cumbereth it the ground?

**8** And he answering said unto him: Lord, let it alone this year also, till I dig about it and dung it;

**9** And if it bear fruit, it shall remain; and if not, then afterwards thou shalt cut it down.

**10** And he was teaching in one of the synagogues on the sabbath.

**11** And, behold, there was a woman which had a spirit of infirmity eighteen years, and was bowed together, and could in no wise lift up herself.

**12** And when Jesus saw her, he called her to him, and said unto her: Woman, thou art loosed from thine infirmity.

**13** And he laid his hands on her, and immediately she was made straight, and glorified God.

**14** And the ruler of the synagogue answered with indignation, because Jesus had healed on the sabbath, and said unto the people: There are six days in which men ought to work; in them therefore come and be healed, and not on the sabbath day.

**15** The Lord then answered him, and said: Thou hypocrite, doth not each one of you on the sabbath loose his ox or his ass from the stall, and lead him away to watering?

**16** And ought not this daughter of Abraham, whom Satan hath bound, lo, these eighteen years, be loosed from this bond on the sabbath day?

**17** And when he had said these things, all his adversaries were ashamed; and all the people rejoiced for all the glorious things that were done by him.

**18** Then said he: Unto what is the kingdom of God like, and whereunto shall I compare it?

![](../Images/SweetPublishing/40-13-12.jpg)

**19** It is like a grain of mustard seed, which a man took, and cast into his garden; and it grew, and waxed a great tree; and the fowls of the air lodged in the branches of it.

![](../Images/SweetPublishing/40-13-13.jpg)

**20** And again he said: Whereunto shall I liken the kingdom of God?

**21** It is like leaven, which a woman took and hid in three measures of meal, till the whole was leavened.

**22** And he went through the cities and villages, teaching, and journeying toward Jerusalem.

**23** Then said one unto him: Lord, are there few that be saved? And he said unto them:

**24** Strive to enter in at the strait gate; for I say unto you that many will seek to enter in, and shall not be able.

**25** When once the master of the house is risen up, and hath shut the door, and ye begin to stand without and to knock at the door, saying: Lord, Lord, open unto us; and he shall answer and say unto you: I know not whence ye are;

**26** Then shall ye begin to say: We have eaten and drunk in thy presence, and thou hast taught in our streets.

**27** And he shall answer you: I tell you, I know you not, nor whence ye are; depart from me, all ye workers of iniquity.

**28** There shall be weeping and gnashing of teeth, when ye shall see Abraham, and Isaac, and Jacob, and all the prophets, in the kingdom of God, and you yourselves thrust out.

**29** And they shall come from the east, and from the west, and from the north, and from the south, and shall sit down at table in the kingdom of God.

**30** And, behold, there are last which shall be first, and there are first which shall be last.

**31** The same day there came certain Pharisees, saying unto him: Get thee out, and depart hence, for Herod will kill thee.

**32** And he answered them: Go ye, and tell that fox: Behold, I cast out devils, and I do cures today and tomorrow, and the third day I shall be perfected.

**33** Nevertheless I must walk today, and tomorrow, and the day following; for it cannot be that a prophet perish out of Jerusalem.

**34** O Jerusalem, Jerusalem, which killest the prophets, and stonest them that are sent unto thee! How often would I have gathered thy children together, as a hen doth gather her brood under her wings, and ye would not!

**35** Behold, your house is left unto you desolate. And verily I say unto you: Ye shall not see me, until the time come when ye shall say: Blessed is he that cometh in the name of the Lord.

> **Cmt MHenry** Intro: "Christ, in calling Herod a fox, gave him his true character. The greatest of men were accountable to God, so it became him to call this proud king by his own name; but it is not an example for us. 'I know,' said our Lord, 'that I must die very shortly; when I die, I shall be perfected, I shall have completed my undertaking.' It is good for us to look upon the time we have before us as very short, that it may quicken us to do the work of the day in its day. The wickedness of persons and places which more than others profess religion and relation to God, especially displeases and grieves the Lord Jesus. The judgment of the great day will convince unbelievers; but let us learn with thankfulness to welcome, and to profit by, all who come in the name of the Lord to call us to partake of his great salvation."
>
> "Our Saviour came to guide men's consciences, not to gratify their curiosity. Ask not, 'How many shall be saved?' but, 'Shall I be saved?' Ask not, 'What shall become of such and such?' but, 'What shall I do, and what will become of me?' Strive to enter in at the strait gate. This is directed to each of us: Strive. All who will be saved must enter in at the strait gate, must undergo a change of the whole man. Those who would enter in, must strive to enter. Here are quickening considerations to enforce this exhortation. Oh that we may all be awakened by them! They answer the question, are there few that be saved? But let none despair either of themselves or of others, for there are last who shall be first, and first who shall be last. If we reach heaven, we shall meet many there whom we did not expect to meet, and we shall miss many whom we expected to find."
>
> Here we have the progress of the gospel foretold in two parables, as in [Matthew 13](../40N-Mt/13.md#0). The kingdom of the Messiah is the kingdom of God. May grace grow in our hearts; may our faith and love grow abundantly, to give undoubted proof of their reality. May the example of God's saints be a blessing to those among whom they live; and may his grace flow from heart to heart, until the little one becomes thousands.
>
> Our Lord Jesus attended the public worship on the sabbath days. Even bodily infirmities, unless they are very serious, should not keep us from public worship on sabbath days. This woman came to be taught by Christ and to receive good for her soul, and then He relieved her bodily infirmity. When crooked souls are made straight, they show it by glorifying God. Christ knew that this ruler had a real enmity against Him and His gospel, and that he only cloaked it with a pretended zeal for the sabbath day; he really would not have them healed on any day; but if Jesus speaks the word and puts forth His healing power, sinners are set free. This deliverance is often wrought on the Lord's day; and whatever task puts men in the way of blessing agrees with the purpose of that day.
>
> The parable of the barren fig tree is intended to enforce the warning just given: the barren fig tree, unless it bears fruit, will be cut down. This parable refers, in the first place, to the Jewish nation and people. Yet it is, without doubt, designed to awaken all who enjoy the means of grace, and the privileges of the visible church. When God has borne long, we may hope that He will bear with us yet a little longer, but we cannot expect that He will bear always.
>
> They tell Christ of the death of some Galileans. This tragical story is briefly related here, and is not mentioned by the historians. In answer, Christ speaks of another event which was like it, another instance of people struck by sudden death. Towers, which are built for safety, often prove to be men's destruction. He warns them not to blame the great sufferers, as if they had been great sinners. As no place or employment can secure us against the stroke of death, we should consider the sudden departures of others as warnings to ourselves. On these accounts Christ founds a call to repentance. The same Jesus who bids us repent because the kingdom of heaven is at hand, bids us repent because otherwise we shall perish.
113.935897
4,331
0.761224
por_Latn
0.999972
53ad4134d82344bef89b9d6bcc206f199e13d6f9
10,745
md
Markdown
repos/open-liberty/remote/19.0.0.6-microProfile2-java8-ibmsfj.md
ssapp/repo-info
236e850c1feb553b6b30ae5e3f285c037ab6a5c0
[ "Apache-2.0" ]
null
null
null
repos/open-liberty/remote/19.0.0.6-microProfile2-java8-ibmsfj.md
ssapp/repo-info
236e850c1feb553b6b30ae5e3f285c037ab6a5c0
[ "Apache-2.0" ]
null
null
null
repos/open-liberty/remote/19.0.0.6-microProfile2-java8-ibmsfj.md
ssapp/repo-info
236e850c1feb553b6b30ae5e3f285c037ab6a5c0
[ "Apache-2.0" ]
null
null
null
## `open-liberty:19.0.0.6-microProfile2-java8-ibmsfj` ```console $ docker pull open-liberty@sha256:e6c28f9cc7437bfb240e8166f4acb17da7a1ab007cddf7fbd434ec804780d0aa ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: - linux; amd64 ### `open-liberty:19.0.0.6-microProfile2-java8-ibmsfj` - linux; amd64 ```console $ docker pull open-liberty@sha256:b4651d8c360d24967f2d3fbdfc2f8200cdd2f07ab7fb97e4d86d290738ed3d18 ``` - Docker Version: 18.06.1-ce - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **200.2 MB (200177249 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:73762c271c8f00328a7fd8e2c8e0647c3fde2fd555c3e668c5eedc2c551ded55` - Entrypoint: `["\/opt\/ol\/helpers\/runtime\/docker-server.sh"]` - Default Command: `["\/opt\/ol\/wlp\/bin\/server","run","defaultServer"]` ```dockerfile # Thu, 07 Mar 2019 22:19:53 GMT ADD file:aa17928040e31624cad9c7ed19ac277c5402c4b9ba39f834250affca40c4046e in / # Thu, 07 Mar 2019 22:19:53 GMT CMD ["/bin/sh"] # Thu, 07 Mar 2019 23:11:18 GMT MAINTAINER Dinakar Guniguntala <[email protected]> (@dinogun) # Tue, 02 Apr 2019 22:22:30 GMT COPY file:3ca1cc706ceed4c671485bfc9a5f46a78571aaf829b0ab9fbb88c9d48e27ccd3 in /etc/apk/keys # Tue, 02 Apr 2019 22:22:38 GMT RUN apk add --no-cache --virtual .build-deps curl binutils && GLIBC_VER="2.29-r0" && ALPINE_GLIBC_REPO="https://github.com/sgerrand/alpine-pkg-glibc/releases/download" && GCC_LIBS_URL="https://archive.archlinux.org/packages/g/gcc-libs/gcc-libs-8.2.1%2B20180831-1-x86_64.pkg.tar.xz" && GCC_LIBS_SHA256=e4b39fb1f5957c5aab5c2ce0c46e03d30426f3b94b9992b009d417ff2d56af4d && curl -fLs https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /tmp/sgerrand.rsa.pub && cmp -s /etc/apk/keys/sgerrand.rsa.pub /tmp/sgerrand.rsa.pub && curl -fLs ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-${GLIBC_VER}.apk > /tmp/${GLIBC_VER}.apk && apk add /tmp/${GLIBC_VER}.apk && curl -fLs ${GCC_LIBS_URL} -o /tmp/gcc-libs.tar.xz && 
echo "${GCC_LIBS_SHA256} /tmp/gcc-libs.tar.xz" | sha256sum -c - && mkdir /tmp/gcc && tar -xf /tmp/gcc-libs.tar.xz -C /tmp/gcc && mv /tmp/gcc/usr/lib/libgcc* /tmp/gcc/usr/lib/libstdc++* /usr/glibc-compat/lib && strip /usr/glibc-compat/lib/libgcc_s.so.* /usr/glibc-compat/lib/libstdc++.so* && apk del --purge .build-deps && apk add --no-cache ca-certificates openssl && rm -rf /tmp/${GLIBC_VER}.apk /tmp/gcc /tmp/gcc-libs.tar.xz /var/cache/apk/* /tmp/*.pub # Wed, 21 Aug 2019 21:28:48 GMT ENV JAVA_VERSION=1.8.0_sr5fp40 # Wed, 21 Aug 2019 21:30:40 GMT RUN set -eux; apk --no-cache add --virtual .build-deps wget; ARCH="$(apk --print-arch)"; case "${ARCH}" in amd64|x86_64) ESUM='6e06eac3ef2b8a0546053a993d5cb2a1d6104c3f3c742c9119bf4664899acc8e'; YML_FILE='sfj/linux/x86_64/index.yml'; ;; i386) ESUM='c4d248aef64cf1f743fe3984063bbdaa98cbbb7ccdcf43ff21f75d5ca6f422fd'; YML_FILE='sfj/linux/i386/index.yml'; ;; ppc64el|ppc64le) ESUM='19fcc0b732501a2da87f8ff09d8d61a5200390b0ae2d385b1db2e5498f1b36f0'; YML_FILE='sfj/linux/ppc64le/index.yml'; ;; s390) ESUM='c6eb28ba0a6958c45835e614f7ed827ea84b1a82dc7a3ecd57bcbb92383b8612'; YML_FILE='sfj/linux/s390/index.yml'; ;; s390x) ESUM='fda0eb575ca3f8546838636188fa97adf46295f2c1179135d6c7075703b7d692'; YML_FILE='sfj/linux/s390x/index.yml'; ;; *) echo "Unsupported arch: ${ARCH}"; exit 1; ;; esac; BASE_URL="https://public.dhe.ibm.com/ibmdl/export/pub/systems/cloud/runtimes/java/meta/"; wget -q -U UA_IBM_JAVA_Docker -O /tmp/index.yml ${BASE_URL}/${YML_FILE}; JAVA_URL=$(sed -n '/^'${JAVA_VERSION}:'/{n;s/\s*uri:\s//p}'< /tmp/index.yml); wget -q -U UA_IBM_JAVA_Docker -O /tmp/ibm-java.bin ${JAVA_URL}; echo "${ESUM} /tmp/ibm-java.bin" | sha256sum -c -; echo "INSTALLER_UI=silent" > /tmp/response.properties; echo "USER_INSTALL_DIR=/opt/ibm/java" >> /tmp/response.properties; echo "LICENSE_ACCEPTED=TRUE" >> /tmp/response.properties; mkdir -p /opt/ibm; chmod +x /tmp/ibm-java.bin; /tmp/ibm-java.bin -i silent -f /tmp/response.properties; rm -f 
/tmp/response.properties; rm -f /tmp/index.yml; rm -f /tmp/ibm-java.bin; apk del .build-deps; # Wed, 21 Aug 2019 21:30:40 GMT ENV JAVA_HOME=/opt/ibm/java/jre PATH=/opt/ibm/java/jre/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin IBM_JAVA_OPTIONS=-XX:+UseContainerSupport # Wed, 21 Aug 2019 23:43:03 GMT ARG LIBERTY_VERSION=19.0.0.6 # Wed, 21 Aug 2019 23:43:03 GMT ARG LIBERTY_SHA=4e97ebc8a94c75dead89282c04f78cac3c8e0ba4 # Wed, 21 Aug 2019 23:43:04 GMT ARG LIBERTY_BUILD_LABEL=cl190620190617-1530 # Wed, 21 Aug 2019 23:43:04 GMT ARG LIBERTY_DOWNLOAD_URL=https://repo1.maven.org/maven2/io/openliberty/openliberty-runtime/19.0.0.6/openliberty-runtime-19.0.0.6.zip # Wed, 21 Aug 2019 23:43:04 GMT LABEL org.opencontainers.image.authors=Arthur De Magalhaes, Chris Potter org.opencontainers.image.vendor=Open Liberty org.opencontainers.image.url=https://openliberty.io/ org.opencontainers.image.source=https://github.com/OpenLiberty/ci.docker org.opencontainers.image.revision=cl190620190617-1530 # Wed, 21 Aug 2019 23:43:04 GMT COPY dir:be23a1bfbfecea308af39529244d4d4db3549d4ad621b3278a2db98561702afd in /opt/ol/helpers # Wed, 21 Aug 2019 23:43:12 GMT # ARGS: LIBERTY_BUILD_LABEL=cl190620190617-1530 LIBERTY_DOWNLOAD_URL=https://repo1.maven.org/maven2/io/openliberty/openliberty-runtime/19.0.0.6/openliberty-runtime-19.0.0.6.zip LIBERTY_SHA=4e97ebc8a94c75dead89282c04f78cac3c8e0ba4 LIBERTY_VERSION=19.0.0.6 RUN apk add --no-cache wget openssl && wget -q $LIBERTY_DOWNLOAD_URL -U UA-Open-Liberty-Docker -O /tmp/wlp.zip && echo "$LIBERTY_SHA /tmp/wlp.zip" > /tmp/wlp.zip.sha1 && sha1sum -c /tmp/wlp.zip.sha1 && unzip -q /tmp/wlp.zip -d /opt/ol && rm /tmp/wlp.zip && rm /tmp/wlp.zip.sha1 && adduser -u 1001 -S -G root -s /usr/sbin/nologin default && chown -R 1001:0 /opt/ol/wlp && chmod -R g+rw /opt/ol/wlp && apk del --no-cache wget unzip # Wed, 21 Aug 2019 23:43:12 GMT ENV 
PATH=/opt/ol/wlp/bin:/opt/ol/docker/:/opt/ol/helpers/build:/opt/ibm/java/jre/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LOG_DIR=/logs WLP_OUTPUT_DIR=/opt/ol/wlp/output WLP_SKIP_MAXPERMSIZE=true # Wed, 21 Aug 2019 23:43:13 GMT # ARGS: LIBERTY_BUILD_LABEL=cl190620190617-1530 LIBERTY_DOWNLOAD_URL=https://repo1.maven.org/maven2/io/openliberty/openliberty-runtime/19.0.0.6/openliberty-runtime-19.0.0.6.zip LIBERTY_SHA=4e97ebc8a94c75dead89282c04f78cac3c8e0ba4 LIBERTY_VERSION=19.0.0.6 RUN /opt/ol/wlp/bin/server create && rm -rf $WLP_OUTPUT_DIR/.classCache /output/workarea # Wed, 21 Aug 2019 23:43:15 GMT # ARGS: LIBERTY_BUILD_LABEL=cl190620190617-1530 LIBERTY_DOWNLOAD_URL=https://repo1.maven.org/maven2/io/openliberty/openliberty-runtime/19.0.0.6/openliberty-runtime-19.0.0.6.zip LIBERTY_SHA=4e97ebc8a94c75dead89282c04f78cac3c8e0ba4 LIBERTY_VERSION=19.0.0.6 RUN mkdir /logs && mkdir -p /opt/ol/wlp/usr/shared/resources/lib.index.cache && ln -s /opt/ol/wlp/usr/shared/resources/lib.index.cache /lib.index.cache && mkdir -p $WLP_OUTPUT_DIR/defaultServer && ln -s $WLP_OUTPUT_DIR/defaultServer /output && ln -s /opt/ol/wlp/usr/servers/defaultServer /config && mkdir -p /config/configDropins/defaults && mkdir -p /config/configDropins/overrides && ln -s /opt/ol/wlp /liberty && chown -R 1001:0 /config && chmod -R g+rw /config && chown -R 1001:0 /logs && chmod -R g+rw /logs && chown -R 1001:0 /opt/ol/wlp/usr && chmod -R g+rw /opt/ol/wlp/usr && chown -R 1001:0 /opt/ol/wlp/output && chmod -R g+rw /opt/ol/wlp/output && chown -R 1001:0 /opt/ol/helpers && chmod -R g+rw /opt/ol/helpers && mkdir /etc/wlp && chown -R 1001:0 /etc/wlp && chmod -R g+rw /etc/wlp && echo "<server description=\"Default Server\"><httpEndpoint id=\"defaultHttpEndpoint\" host=\"*\" /></server>" > /config/configDropins/defaults/open-default-port.xml # Wed, 21 Aug 2019 23:43:15 GMT ENV RANDFILE=/tmp/.rnd IBM_JAVA_OPTIONS=-Xshareclasses:name=liberty,nonfatal,cacheDir=/output/.classCache/ 
-XX:+UseContainerSupport # Wed, 21 Aug 2019 23:43:15 GMT USER 1001 # Wed, 21 Aug 2019 23:43:15 GMT EXPOSE 9080 9443 # Wed, 21 Aug 2019 23:43:15 GMT ENV KEYSTORE_REQUIRED=true # Wed, 21 Aug 2019 23:43:16 GMT ENTRYPOINT ["/opt/ol/helpers/runtime/docker-server.sh"] # Wed, 21 Aug 2019 23:43:16 GMT CMD ["/opt/ol/wlp/bin/server" "run" "defaultServer"] # Wed, 21 Aug 2019 23:45:11 GMT RUN cp /opt/ol/wlp/templates/servers/microProfile2/server.xml /config/server.xml ``` - Layers: - `sha256:5d20c808ce198565ff70b3ed23a991dd49afac45dece63474b27ce6ed036adc6` Last Modified: Thu, 07 Mar 2019 22:20:24 GMT Size: 2.1 MB (2107098 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:65b9bbb7e9e210015f902865694fe417083e3abb90a264c4ad2fb3f5f1f7ef77` Last Modified: Tue, 02 Apr 2019 22:27:49 GMT Size: 544.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:05389964ba24aa0dec164a9a8a93720c6ec4031fd4df7a562a34bb7ed5d011f9` Last Modified: Tue, 02 Apr 2019 22:27:50 GMT Size: 4.5 MB (4492117 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:419013cb7353480783d9a5acdbc4f60fd15dab824363660c707a83567f98f58b` Last Modified: Wed, 21 Aug 2019 21:33:38 GMT Size: 63.4 MB (63358357 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:84de45620771be9a2cfbdd5fbd0ccc7a1d03abd010e5b0bd16fae095c94db158` Last Modified: Wed, 21 Aug 2019 23:52:35 GMT Size: 2.4 KB (2377 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:0fcb9ee40bd76ef932beba35014c45f33baa00378f4feaef367e68b3226f9305` Last Modified: Wed, 21 Aug 2019 23:52:48 GMT Size: 130.2 MB (130212131 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:52301809edce10c2ab39bbf50c081b15e319a923f79b41ec0ca4bcdff5f127b9` Last Modified: Wed, 21 Aug 2019 23:52:35 GMT Size: 851.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:3dd9c1f6873f70424936020353b84410cc8d8148807dd781c897911fe4192944` Last Modified: Wed, 21 Aug 2019 
23:52:35 GMT Size: 3.2 KB (3225 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:26b725abf79b34eb928c4d667e87f23e4e709a8d37c489869433f06c39a6c5a4` Last Modified: Wed, 21 Aug 2019 23:53:52 GMT Size: 549.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
91.059322
1,802
0.721731
yue_Hant
0.327031
53adae3198585501ed2d9c52e331b0e7877be3c4
9,278
md
Markdown
articles/active-directory-b2c/partner-trusona.md
maiemy/azure-docs.it-it
b3649d817c2ec64a3738b5f05f18f85557d0d9b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-b2c/partner-trusona.md
maiemy/azure-docs.it-it
b3649d817c2ec64a3738b5f05f18f85557d0d9b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-b2c/partner-trusona.md
maiemy/azure-docs.it-it
b3649d817c2ec64a3738b5f05f18f85557d0d9b6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Trusona and Azure Active Directory B2C
titleSuffix: Azure AD B2C
description: Learn how to add Trusona as an identity provider in Azure AD B2C to enable passwordless authentication.
services: active-directory-b2c
author: msmimart
manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
ms.date: 07/30/2020
ms.author: mimart
ms.subservice: B2C
ms.openlocfilehash: a0d5b369e1c143b3df4157329bcf7d3a3f7142d7
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/09/2020
ms.locfileid: "87489470"
---
# <a name="integrating-trusona-with-azure-active-directory-b2c"></a>Integrating Trusona with Azure Active Directory B2C

Trusona is an independent software vendor (ISV) that helps secure sign-in by enabling passwordless authentication, multi-factor authentication, and digital license scanning. In this article, you'll learn how to add Trusona as an identity provider in Azure AD B2C to enable passwordless authentication.

## <a name="prerequisites"></a>Prerequisites

To get started, you'll need:

* An Azure AD subscription. If you don't have one, you can get a [free account](https://azure.microsoft.com/free/).
* [An Azure AD B2C tenant](tutorial-create-tenant.md) linked to your Azure subscription.
* A [trial account](https://www.trusona.com/aadb2c) with Trusona

## <a name="scenario-description"></a>Scenario description

In this scenario, Trusona acts as an identity provider for Azure AD B2C to enable passwordless authentication. The following components make up the solution:

* An Azure AD B2C combined sign-in and sign-up policy
* Trusona added to Azure AD B2C as an identity provider
* The downloadable Trusona app

![Trusona architecture diagram](media/partner-trusona/trusona-architecture-diagram.png)

| Step | Description |
|------|------|
|1 | A user attempts to sign in or sign up with the application. The user is authenticated through the Azure AD B2C sign-up and sign-in policy. During sign-up, the user's previously verified email address from the Trusona app is used. |
|2 | Azure B2C redirects the user to the Trusona OpenID Connect (OIDC) identity provider using the implicit flow. |
|3 | For desktop PC-based sign-ins, Trusona displays a unique, stateless, animated, dynamic QR code to be scanned with the Trusona app. For mobile-based sign-ins, Trusona uses a "deep link" to open the Trusona app. These two methods are used for device and, ultimately, user discovery. |
|4 | The user scans the displayed QR code with the Trusona app. |
|5 | The user's account is found in the Trusona cloud service and the authentication is prepared. |
|6 | The Trusona cloud service sends an authentication challenge to the user via a push notification delivered to the Trusona app:<br>a. The user is prompted to complete the authentication challenge. <br> b. The user chooses to accept or reject the challenge. <br> c. The user is prompted to use OS-level security (for example, biometric, passcode, PIN, or pattern) to confirm and sign the challenge with a private key held in the secure enclave or trusted execution environment. <br> d. The Trusona app generates a dynamic anti-replay payload based on the parameters of the authentication in real time. <br> e. The entire response is signed (a second time) by a private key in the trusted execution environment/secure enclave and returned to the Trusona cloud service for verification. |
|7 | The Trusona cloud service redirects the user back to the initiating application with an id_token. Azure AD B2C verifies the id_token using Trusona's published OpenID configuration, as set up during identity provider configuration. |
| | |

## <a name="onboard-with-trusona"></a>Onboard with Trusona

1. Fill out the [form](https://www.trusona.com/aadb2c) to create a Trusona account and get started.
2. Download the Trusona mobile app from the app store. Install the app and register your email.
3. Verify your email via the secure "magic link" sent by the software.
4. Go to the [Trusona developer dashboard](https://dashboard.trusona.com) for self-service.
5. Select **I'm ready** and authenticate with the Trusona app.
6. From the left navigation pane, choose **OIDC Integrations**.
7. Select **Create OpenID Connect Integration**.
8. Provide the desired **Name** and use the domain information provided earlier (for example, contoso) in the **Client Redirect Host** field.

   > [!NOTE]
   > The initial domain name of the Azure Active Directory is used as the client redirect host.

9. Follow the instructions in the [Trusona integration guide](https://docs.trusona.com/integrations/aad-b2c-integration/). When prompted, use the initial domain name (for example, contoso) referenced in the previous step.

## <a name="integrate-with-azure-ad-b2c"></a>Integrate with Azure AD B2C

### <a name="add-a-new-identity-provider"></a>Add a new identity provider

> [!NOTE]
> If you don't have a tenant, [create an Azure AD B2C tenant](tutorial-create-tenant.md) linked to your Azure subscription.

1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
2. Make sure you're using the directory that contains your Azure AD B2C tenant. To do so, select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
3. Choose **All services** in the top-left corner of the Azure portal, search for **Azure AD B2C**, and select it.
4. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers**.
5. Select **Identity providers**.
6. Select **Add**.

### <a name="configure-an-identity-provider"></a>Configure an identity provider

1. Select **Identity provider type** > **OpenID Connect (Preview)**.
2. Fill out the form to set up the identity provider:

   | Property | Value |
   | :--- | :--- |
   | Metadata URL | `https://gateway.trusona.net/oidc/.well-known/openid-configuration` |
   | Client ID | Will be emailed to you by Trusona |
   | Scope | OpenID profile email |
   | Response type | id_token |
   | Response mode | form_post |

3. Select **OK**.
4. Select **Map this identity provider's claims**.
5. Fill out the form to map the identity provider:

   | Property | Value |
   | :--- | :--- |
   | UserID | sub |
   | Display name | nickname |
   | Given name | given_name |
   | Surname | family_name |
   | Email | email |

6. Select **OK** to complete the setup of your new OIDC identity provider.

### <a name="create-a-user-flow-policy"></a>Create a user flow policy

You should now see Trusona listed as a **new OpenID Connect identity provider** within your B2C identity providers.

1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
1. Select **New user flow**.
1. Select **Sign up and sign in**, select a version, and then select **Create**.
1. Enter a **Name** for your policy.
1. In the **Identity providers** section, select the newly created **Trusona identity provider**.

   > [!NOTE]
   > Because Trusona is inherently multi-factor, it's best to leave multi-factor authentication disabled.

1. Select **Create**.
1. Under **User attributes and claims**, choose **Show more**. In the form, select at least one attribute that you specified during identity provider setup in the previous section.
1. Select **OK**.

### <a name="test-the-policy"></a>Test the policy

1. Select the policy you created.
2. Select **Run user flow**.
3. In the form, enter the Reply URL.
4. Select **Run user flow**. You should be redirected to the Trusona OIDC gateway. On the Trusona gateway, scan the displayed secure QR code with the Trusona app or with a custom app using the Trusona mobile SDK.
5. After scanning the secure QR code, you should be redirected to the Reply URL defined in step 3.

## <a name="next-steps"></a>Next steps

For additional information, review the following articles:

- [Custom policies in AAD B2C](custom-policy-overview.md)
- [Get started with custom policies in AAD B2C](custom-policy-get-started.md?tabs=applications)
55.22619
861
0.763958
ita_Latn
0.99652
53ade7a1d765442fd6cacc697928563921d47e7d
4,852
md
Markdown
Exchange/ExchangeServer/mail-flow/connectors/proxy-outbound-mail.md
CarolynBillups/OfficeDocs-Exchange
ea26c1b9d8283a5938fb327ee4d80466c1f2c635
[ "CC-BY-4.0", "MIT" ]
1
2020-06-16T22:10:26.000Z
2020-06-16T22:10:26.000Z
Exchange/ExchangeServer/mail-flow/connectors/proxy-outbound-mail.md
CarolynBillups/OfficeDocs-Exchange
ea26c1b9d8283a5938fb327ee4d80466c1f2c635
[ "CC-BY-4.0", "MIT" ]
1
2021-01-21T14:06:30.000Z
2021-01-21T14:06:30.000Z
Exchange/ExchangeServer/mail-flow/connectors/proxy-outbound-mail.md
CarolynBillups/OfficeDocs-Exchange
ea26c1b9d8283a5938fb327ee4d80466c1f2c635
[ "CC-BY-4.0", "MIT" ]
1
2021-11-29T00:03:44.000Z
2021-11-29T00:03:44.000Z
---
localization_priority: Normal
description: 'Summary: Configure Send connectors to proxy outbound mail through the Front End Transport service.'
ms.topic: article
author: msdmaguire
ms.author: dmaguire
ms.assetid: 6eaa753a-523a-4ae7-b174-a639b819e729
ms.date: 7/6/2018
ms.reviewer:
title: Configure Send connectors to proxy outbound mail
ms.collection: exchange-server
audience: ITPro
ms.prod: exchange-server-it-pro
manager: serdars
---

# Configure Send connectors to proxy outbound mail

When you create Send connectors, outbound mail flows through the Send connector in the Transport service on the Mailbox server or servers you specify, as shown in the following diagram.

![Send connector created with default configuration](../../media/c43075b4-7254-417a-9a61-d735f4abac4f.png)

However, you can configure a Send connector to relay or *proxy* outbound mail through the Front End Transport service on the Mailbox server, as shown in the following diagram.

![Send connector configured for outbound proxy](../../media/4180d15b-1ee8-40dd-ad7d-8d381c51e8eb.png)

By default, all inbound mail enters your Exchange organization through the Front End Transport service, and the Front End Transport service proxies inbound mail to the Transport service. For more information, see [Mail flow and the transport pipeline](../../mail-flow/mail-flow.md).

When you configure a Send connector to proxy outbound mail through the Front End Transport service, the Receive connector named "Outbound Proxy Frontend _\<Mailbox server name\>_" in the Front End Transport service listens for these outbound messages from the Transport service, and then the Front End Transport service sends the messages to the Internet. This configuration can consolidate and simplify mail flow by having inbound and outbound mail enter and leave your organization from the same place.

## What do you need to know before you begin?

- Estimated time to complete: less than 5 minutes

- You need to be assigned permissions before you can perform this procedure or procedures. To see what permissions you need, see the "Send connectors" entry in the [Mail flow permissions](../../permissions/feature-permissions/mail-flow-permissions.md) topic.

- For information about keyboard shortcuts that may apply to the procedures in this topic, see [Keyboard shortcuts in the Exchange admin center](../../about-documentation/exchange-admin-center-keyboard-shortcuts.md).

> [!TIP]
> Having problems? Ask for help in the Exchange forums. Visit the forums at: [Exchange Server](https://go.microsoft.com/fwlink/p/?linkId=60612), [Exchange Online](https://go.microsoft.com/fwlink/p/?linkId=267542), or [Exchange Online Protection](https://go.microsoft.com/fwlink/p/?linkId=285351).

## Configure Send connectors to proxy outbound mail through the Front End Transport service

### Use the EAC to configure Send connectors to proxy outbound mail

In the Exchange admin center (EAC), you can only configure existing Send connectors to proxy outbound mail.

1. In the EAC, navigate to **Mail flow** \> **Send connectors**, select the Send connector, and then click **Edit** ![Edit icon](../../media/ITPro_EAC_EditIcon.png).

2. On the **General** tab, in the **Connector status** section, select **Proxy through client access server**, and then click **Save**.

### Use PowerShell to configure Send Connectors to proxy outbound mail

In the Exchange Management Shell, you can configure new or existing Send connectors to proxy outbound mail. For information about how to open the Exchange Management Shell, see [Open the Exchange Management Shell](http://technet.microsoft.com/library/63976059-25f8-4b4f-b597-633e78b803c0.aspx).

- To configure a new Send connector to proxy outbound mail, add `-FrontEndProxyEnabled $true` to the **New-SendConnector** command.

- To configure an existing Send connector to proxy outbound mail, run the following command:

  ```
  Set-SendConnector <Send connector identity> -FrontEndProxyEnabled $true
  ```

  This example configures the existing Send connector named "Contoso.com Outbound" to proxy outbound mail.

  ```
  Set-SendConnector "Contoso.com Outbound" -FrontendProxyEnabled $true
  ```

### How do you know this worked?

To verify that a Send connector is configured for outbound proxy, perform either of the following procedures:

- In the EAC, navigate to **Mail flow** \> **Send connectors**, select the Send connector, and then click **Edit** ![Edit icon](../../media/ITPro_EAC_EditIcon.png). On the **General** tab, in the **Connector status** section, verify **Proxy through client access server** is selected.

- In the Exchange Management Shell, run the following command:

  ```
  Get-SendConnector | Format-Table -Auto Name,FrontEndProxyEnabled
  ```

  Verify the **FrontEndProxyEnabled** value is `True` for the Send connector.
56.418605
504
0.77535
eng_Latn
0.986506
53aed37ad267c23349df509cc6ce653b84964b36
8,431
md
Markdown
articles/partner-xamarin-mobile-services-ios-get-started.md
thinkingserious/azure-content
06d9e5b89000fc628f23cfbbdfe27d494d96a363
[ "CC-BY-3.0" ]
null
null
null
articles/partner-xamarin-mobile-services-ios-get-started.md
thinkingserious/azure-content
06d9e5b89000fc628f23cfbbdfe27d494d96a363
[ "CC-BY-3.0" ]
null
null
null
articles/partner-xamarin-mobile-services-ios-get-started.md
thinkingserious/azure-content
06d9e5b89000fc628f23cfbbdfe27d494d96a363
[ "CC-BY-3.0" ]
null
null
null
<properties pageTitle="Get Started with Mobile Services for Xamarin iOS apps" metaKeywords="" description="Follow this tutorial to get started using Azure Mobile Services for Xamarin iOS development." metaCanonical="" services="mobile" documentationCenter="Mobile" title="Get started with Mobile Services" authors="craigd" solutions="" manager="" editor="" /> # <a name="getting-started"> </a>Get started with Mobile Services <div class="dev-center-tutorial-selector sublanding"> <a href="/en-us/documentation/articles/mobile-services-windows-store-get-started" title="Windows Store">Windows Store</a> <a href="/en-us/documentation/articles/mobile-services-windows-phone-get-started" title="Windows Phone">Windows Phone</a> <a href="/en-us/documentation/articles/mobile-services-ios-get-started" title="iOS">iOS</a> <a href="/en-us/documentation/articles/mobile-services-android-get-started" title="Android">Android</a> <a href="/en-us/documentation/articles/mobile-services-html-get-started" title="HTML">HTML</a> <a href="/en-us/documentation/articles/partner-xamarin-mobile-services-ios-get-started" title="Xamarin.iOS" class="current">Xamarin.iOS</a> <a href="/en-us/documentation/articles/partner-xamarin-mobile-services-android-get-started" title="Xamarin.Android">Xamarin.Android</a> <a href="/en-us/documentation/articles/partner-sencha-mobile-services-get-started/" title="Sencha">Sencha</a> <a href="/en-us/documentation/articles/mobile-services-javascript-backend-phonegap-get-started/" title="PhoneGap">PhoneGap</a> <a href="/en-us/documentation/articles/partner-appcelerator-mobile-services-javascript-backend-appcelerator-get-started" title="Appcelerator">Appcelerator</a> </div> <div class="dev-center-tutorial-subselector"> <a href="/en-us/documentation/articles/mobile-services-dotnet-backend-xamarin-ios-get-started/" title=".NET backend">.NET backend</a> | <a href="/en-us/documentation/articles/partner-xamarin-mobile-services-ios-get-started/" title="JavaScript backend" 
class="current">JavaScript backend</a> </div> <div class="dev-onpage-video-clear clearfix"> <div class="dev-onpage-left-content"> <p>This tutorial shows you how to add a cloud-based backend service to a Xamarin.iOS app using Azure Mobile Services. In this tutorial, you will create both a new mobile service and a simple <em>To do list</em> app that stores app data in the new mobile service.</p> <p>If you prefer to watch a video, the clip to the right follows the same steps as this tutorial.</p> </div> <div class="dev-onpage-video-wrapper"><a href="http://channel9.msdn.com/Series/Windows-Azure-Mobile-Services/Getting-Started-with-Xamarin-and-Windows-Azure-Mobile-Services" target="_blank" class="label">watch the tutorial</a> <a style="background-image: url('/media/devcenter/mobile/videos/get-started-xamarin-180x120.png') !important;" href="http://channel9.msdn.com/Series/Windows-Azure-Mobile-Services/Getting-Started-with-Xamarin-and-Windows-Azure-Mobile-Services" target="_blank" class="dev-onpage-video"><span class="icon">Play Video</span></a> <span class="time">10:05</span></div> </div> A screenshot from the completed app is below: ![][0] Completing this tutorial requires XCode 4.5 and iOS 5.0 or later versions as well as [Xamarin Studio] for OS X or the Xamarin Visual Studio plug-in for Visual Studio on Windows. <div class="dev-callout"><strong>Note</strong> <p>To complete this tutorial, you need an Azure account. If you don't have an account, you can create a free trial account in just a couple of minutes. 
For details, see <a href="http://www.windowsazure.com/en-us/pricing/free-trial/?WT.mc_id=A643EE910&amp;returnurl=http%3A%2F%2Fwww.windowsazure.com%2Fen-us%2Fdevelop%2Fmobile%2Ftutorials%2Fget-started-xamarin-ios%2F" target="_blank">Azure Free Trial</a>.</p></div> ## <a name="create-new-service"> </a>Create a new mobile service [WACOM.INCLUDE [mobile-services-create-new-service](../includes/mobile-services-create-new-service.md)] <h2><span class="short-header">Create a new app</span>Create a new Xamarin.iOS app</h2> Once you have created your mobile service, you can follow an easy quickstart in the Management Portal to either create a new app or modify an existing app to connect to your mobile service. In this section you will create a new Xamarin.iOS app that is connected to your mobile service. 1. In the Management Portal, click **Mobile Services**, and then click the mobile service that you just created. 2. In the quickstart tab, click **Xamarin.iOS** under **Choose platform** and expand **Create a new Xamarin.iOS app**. ![][6] This displays the three easy steps to create a Xamarin.iOS app connected to your mobile service. ![][7] 3. If you haven't already done so, download and install [Xcode] v4.4 or a later version and [Xamarin Studio]. 4. Click **Create TodoItems table** to create a table to store app data. 5. Under **Download and run app**, click **Download**. This downloads the project for the sample _To do list_ application that is connected to your mobile service and references the Azure Mobile Services component for Xamarin.iOS. Save the compressed project file to your local computer, and make a note of where you saved it. <h2><span class="short-header">Run your app</span>Run your new Xamarin.iOS app</h2> The final stage of this tutorial is to build and run your new app. 1. 
Browse to the location where you saved the compressed project files, expand the files on your computer, and open the **XamarinTodoQuickStart.iOS.sln** solution file using Xamarin Studio or Visual Studio. ![][8] ![][9] 2. Press the **Run** button to build the project and start the app in the iPhone emulator, which is the default for this project. 3. In the app, type meaningful text, such as _Complete the tutorial_ and then click the plus (**+**) icon. ![][10] This sends a POST request to the new mobile service hosted in Azure. Data from the request is inserted into the TodoItem table. Items stored in the table are returned by the mobile service, and the data is displayed in the list. <div class="dev-callout"> <b>Note</b> <p>You can review the code that accesses your mobile service to query and insert data, which is found in the TodoService.cs C# file.</p> </div> 4. Back in the Management Portal, click the **Data** tab and then click the **TodoItems** table. ![][11] This lets you browse the data inserted by the app into the table. ![][12] ## <a name="next-steps"> </a>Next Steps Now that you have completed the quickstart, learn how to perform additional important tasks in Mobile Services: * [Get started with data] <br/>Learn more about storing and querying data using Mobile Services. * [Get started with authentication] <br/>Learn how to authenticate users of your app with an identity provider. * [Get started with push notifications] <br/>Learn how to send a very basic push notification to your app. <!-- Anchors. --> [Getting started with Mobile Services]:#getting-started [Create a new mobile service]:#create-new-service [Define the mobile service instance]:#define-mobile-service-instance [Next Steps]:#next-steps <!-- Images. 
--> [0]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-quickstart-completed-ios.png [6]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-portal-quickstart-xamarin-ios.png [7]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-quickstart-steps-xamarin-ios.png [8]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-xamarin-project-ios-xs.png [9]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-xamarin-project-ios-vs.png [10]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-quickstart-startup-ios.png [11]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-data-tab.png [12]: ./media/partner-xamarin-mobile-services-ios-get-started/mobile-data-browse.png <!-- URLs. --> [Get started with data]: /en-us/develop/mobile/tutorials/get-started-with-data-xamarin-ios [Get started with authentication]: /en-us/develop/mobile/tutorials/get-started-with-users-xamarin-ios [Get started with push notifications]: /en-us/develop/mobile/tutorials/get-started-with-push-xamarin-ios [Xamarin Studio]: http://xamarin.com/download [Mobile Services iOS SDK]: https://go.microsoft.com/fwLink/p/?LinkID=266533 [Management Portal]: https://manage.windowsazure.com/
61.992647
588
0.761357
eng_Latn
0.882371
53afe46ffd462be9dfa7895ec0c4ec7946bbefc8
5,385
md
Markdown
CQL_Guide/x3.md
mingodad/CG-SQL
dabb14bace452d4b359d96a1a4f975475140cd74
[ "MIT" ]
null
null
null
CQL_Guide/x3.md
mingodad/CG-SQL
dabb14bace452d4b359d96a1a4f975475140cd74
[ "MIT" ]
null
null
null
CQL_Guide/x3.md
mingodad/CG-SQL
dabb14bace452d4b359d96a1a4f975475140cd74
[ "MIT" ]
null
null
null
<!---
-- Copyright (c) Facebook, Inc. and its affiliates.
--
-- This source code is licensed under the MIT license found in the
-- LICENSE file in the root directory of this source tree.
-->

The control directives are those statements that begin with `@` and they are distinguished from other statements because they influence the compiler rather than the program logic. Some of these are of great importance and discussed elsewhere. The complete list (as of this writing) is:

`@ENFORCE_STRICT`
`@ENFORCE_NORMAL`

* These enable or disable more strict semantic checking; the sub-options are:
  * `FOREIGN KEY ON UPDATE`: all FKs must choose some `ON UPDATE` strategy
  * `FOREIGN KEY ON DELETE`: all FKs must choose some `ON DELETE` strategy
  * `PROCEDURE`: all procedures must be declared before they are called (eliminating the vanilla `C` call option)
  * `JOIN`: all joins must be ANSI style; the form `FROM A,B` is not allowed (replace with `A INNER JOIN B`)
  * `WINDOW FUNC`: window functions are disallowed (useful if targeting old versions of SQLite)
  * `UPSERT STATEMENT`: the upsert form is disallowed (useful if targeting old versions of SQLite)

`@SENSITIVE`

* marks a column or variable as 'sensitive' for privacy purposes; this behaves somewhat like nullability (See Chapter 3) in that it is radioactive, contaminating anything it touches
* the intent of this annotation is to make it clear where sensitive data is being returned or consumed in your procedures
* this information appears in the JSON output for further codegen or for analysis (See Chapter 13)

`@DECLARE_SCHEMA_REGION`
`@DECLARE_DEPLOYABLE_REGION`
`@BEGIN_SCHEMA_REGION`
`@END_SCHEMA_REGION`

* These directives control the declaration of schema regions and allow you to place things into those regions (See Chapter 10)

`@SCHEMA_AD_HOC_MIGRATION`

* Allows for the creation of an ad hoc migration step at a given schema version (See Chapter 10)

`@ECHO`

* Emits text into the C output stream, useful for emitting things like function prototypes or preprocessor directives
* e.g. `@echo c, '#define foo bar'`

`@RECREATE`
`@CREATE`
`@DELETE`

* used to mark the schema version where an object is created or deleted, or alternatively indicate that the object is always dropped and recreated when it changes (See Chapter 10)

`@SCHEMA_UPGRADE_VERSION`

* used to indicate that the code that follows is part of a migration script for the indicated schema version
* this has the effect of making the schema appear to be how it existed at the indicated version
* the idea here is that migration procedures operate on previous versions of the schema where (e.g.) some columns/tables hadn't been deleted yet

`@PREVIOUS_SCHEMA`

* indicates the start of the previous version of the schema for comparison (See Chapter 11)

`@SCHEMA_UPGRADE_SCRIPT`

* CQL emits a schema upgrade script as part of its upgrade features; this script declares tables in their final form but also creates the same tables as they existed when they were first created
* this directive instructs CQL to ignore the incompatible creations; the first declaration controls
* the idea here is that the upgrade script is in the business of getting you to the finish line in an orderly fashion and some of the interim steps are just not all the way there yet
* note that the upgrade script recapitulates the version history, it does not take you directly to the finish line; this is so that all instances get to the same place the same way (and this fleshes out any bugs in migration)

`@DUMMY_NULLABLES`
`@DUMMY_DEFAULTS`
`@DUMMY_SEED`

* these control the creation of dummy data for `insert` and `fetch` statements (See Chapters 5 and 12)

`@FILE`

* a string literal that corresponds to the current file name with a prefix stripped (to remove build lab junk in the path)

`@ATTRIBUTE`

* the main purpose of `@attribute` is to appear in the JSON output so that it can control later codegen stages in whatever way you deem appropriate
* the nested nature of attribute values is sufficiently flexible that you could encode an arbitrary LISP program in an attribute, so really anything you might need to express is possible
* there are a number of attributes known to the compiler, which I list below (complete as of this writing):
  * `cql:autodrop=(table1, table2, ...)` when present the indicated tables, which must be temp tables, are dropped when the results of the procedure have been fetched into a rowset
  * `cql:identity=(column1, column2, ...)` the indicated columns are used to create a row comparator for the rowset corresponding to the procedure; this appears in a C macro of the form `procedure_name_row_same(rowset1, row1, rowset2, row2)`
  * `cql:suppress_getters` the indicated procedure should not emit the column getter functions (useful if you only intend to call the procedure from CQL, or if you wish to restrict access in C)
  * `cql:base_fragment=frag_name` for base fragments (See Chapter 14)
  * `cql:extension_fragment=frag_name` for extension fragments (See Chapter 14)
  * `cql:assembly_fragment=frag_name` for assembly fragments (See Chapter 14)
  * `cql:no_table_scan` for query plan processing, indicates that the table in question should never be table scanned in any plan (for better diagnostics)
  * `cql:autotest=([many forms])` declares various autotest features (See Chapter 12)
65.670732
243
0.774559
eng_Latn
0.999232
53afe65c62de92b97f8f374869a238bb3284acd1
241
md
Markdown
includes/win2kfamily-md.md
0Naoki/dotnet-api-docs.ja-jp
5ffeab47897ea55f6655c1b2dffb679359f57491
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/win2kfamily-md.md
0Naoki/dotnet-api-docs.ja-jp
5ffeab47897ea55f6655c1b2dffb679359f57491
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/win2kfamily-md.md
0Naoki/dotnet-api-docs.ja-jp
5ffeab47897ea55f6655c1b2dffb679359f57491
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
ms.openlocfilehash: f6a28970a521e035e5e254c78ec61c1026703f9b
ms.sourcegitcommit: 1bb00d2f4343e73ae8d58668f02297a3cf10a4c1
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 06/15/2019
ms.locfileid: "63876200"
---
Windows 2000
26.777778
60
0.842324
yue_Hant
0.171452
53b05913a3cb5e7a28f9b6a133db1f81feedb83b
267
md
Markdown
Docs/reference/content/upgrading.md
Snagajob/mongo-csharp-driver
8ea4e7b7f441ea9997d908c2dce04d3f6271e868
[ "Apache-2.0" ]
1
2021-07-07T09:16:10.000Z
2021-07-07T09:16:10.000Z
Docs/reference/content/upgrading.md
591094733/mongo-csharp-driver
85776cfc76b41ba03682e2d35e65b531c7bd6cdb
[ "Apache-2.0" ]
null
null
null
Docs/reference/content/upgrading.md
591094733/mongo-csharp-driver
85776cfc76b41ba03682e2d35e65b531c7bd6cdb
[ "Apache-2.0" ]
null
null
null
+++
date = "2015-03-17T15:36:56Z"
draft = false
title = "Upgrading"
[menu.main]
  parent = "What's New"
  identifier = "Upgrading"
  weight = 10
  pre = "<i class='fa'></i>"
+++

## Breaking Changes

There should be no breaking changes in version 2.7.0 of the driver.
19.071429
67
0.64794
eng_Latn
0.862655
53b0980c4763d01c63433b8143027e63a6e6e175
59
md
Markdown
README.md
hajapy/pre-commit-action-test
cd7f604fa81bffc73a11f49676c33ed8215d7058
[ "MIT" ]
null
null
null
README.md
hajapy/pre-commit-action-test
cd7f604fa81bffc73a11f49676c33ed8215d7058
[ "MIT" ]
null
null
null
README.md
hajapy/pre-commit-action-test
cd7f604fa81bffc73a11f49676c33ed8215d7058
[ "MIT" ]
null
null
null
# pre-commit-action-test

Testing pre-commit github actions
19.666667
33
0.813559
eng_Latn
0.409535
53b09e28fb4cedbd4922e8e3c56ace5146ee760b
34,382
md
Markdown
playlists/cumulative/Totally Stress Free.md
HellfoxMT/spotify-playlist-archive
5735e4310833383e98684f57c571d4928ccd450a
[ "MIT" ]
null
null
null
playlists/cumulative/Totally Stress Free.md
HellfoxMT/spotify-playlist-archive
5735e4310833383e98684f57c571d4928ccd450a
[ "MIT" ]
null
null
null
playlists/cumulative/Totally Stress Free.md
HellfoxMT/spotify-playlist-archive
5735e4310833383e98684f57c571d4928ccd450a
[ "MIT" ]
null
null
null
[pretty](https://github.com/mackorone/spotify-playlist-archive/blob/master/playlists/pretty/Totally%20Stress%20Free.md) - cumulative - [plain](https://github.com/mackorone/spotify-playlist-archive/blob/master/playlists/plain/37i9dQZF1DWT7XSlwvR1ar) ([githistory](https://github.githistory.xyz/mackorone/spotify-playlist-archive/blob/master/playlists/plain/37i9dQZF1DWT7XSlwvR1ar)) ### [Totally Stress Free](https://open.spotify.com/playlist/37i9dQZF1DWT7XSlwvR1ar) > No need to stress out. Stay relaxed with these easy, upbeat songs. | Title | Artist(s) | Album | Length | Added | Removed | |---|---|---|---|---|---| | [(What A) Wonderful World](https://open.spotify.com/track/2g2GkH3vZHk4lWzBjgQ6nY) | [Sam Cooke](https://open.spotify.com/artist/6hnWRPzGGKiapVX1UCdEAC) | [The Man Who Invented Soul](https://open.spotify.com/album/3Seie4YIVLWtPw2hQrouNY) | 2:05 | 2019-08-12 | 2019-10-26 | | [1.Weather Balloons [Feat. Frances Cone]](https://open.spotify.com/track/4mOmMccRWthpaUtjPhiQm8) | [Susto](https://open.spotify.com/artist/7foyQbi7GKriLiv1GPVEwt), [Frances Cone](https://open.spotify.com/artist/5xKsfZBL84iULLWjvd4dWh) | [Weather Balloons [Feat. 
Frances Cone]](https://open.spotify.com/album/752tNJJGYbjlnFfSZBO9ju) | 3:32 | 2019-10-25 | 2019-11-02 | | [1234](https://open.spotify.com/track/2CzWeyC9zlDpIOZPUUKrBW) | [Feist](https://open.spotify.com/artist/6CWTBjOJK75cTE8Xv8u1kj) | [The Reminder](https://open.spotify.com/album/7bTdGfczXffzzNE9ssJj4Z) | 3:03 | 2019-07-29 | 2019-10-26 | | [Absolutely](https://open.spotify.com/track/5kjaMjkQxkJYZiDSIbuIF8) | [Ra Ra Riot](https://open.spotify.com/artist/6FIrstf3kHEg3zBOyLpvxD) | [Absolutely](https://open.spotify.com/album/2lOIFb6xgtMIYmBNxUZ4be) | 3:46 | 2019-07-29 | 2019-08-15 | | [Ain't No Reason](https://open.spotify.com/track/0FSaq4WfIJ8RbG2Li1JD5B) | [Brett Dennen](https://open.spotify.com/artist/0FC1LIeQXKib0jOwZqeIwT) | [The Definitive Collection](https://open.spotify.com/album/7qF9HEA57AL9dPGlipcDvF) | 3:37 | 2019-07-29 | 2019-10-26 | | [All We Ever Knew](https://open.spotify.com/track/5wrGviDHdJ2MYgDRow14cu) | [The Head and the Heart](https://open.spotify.com/artist/0n94vC3S9c3mb2HyNAOcjg) | [All We Ever Knew](https://open.spotify.com/album/2DxrFMvkLvj3CiTapFkXhX) | 3:45 | 2019-07-29 | | | [Almost (Sweet Music)](https://open.spotify.com/track/5Apvsk0suoivI1H8CmBglv) | [Hozier](https://open.spotify.com/artist/2FXC3k01G6Gw61bmprjgqS) | [Wasteland, Baby!](https://open.spotify.com/album/2c7gFThUYyo2t6ogAgIYNw) | 3:37 | 2019-08-21 | | | [Anywhere](https://open.spotify.com/track/1zxZ8lz2mJspMvRrEd9sWT) | [Passenger](https://open.spotify.com/artist/0gadJ2b9A4SKsB1RFkBb66) | [Anywhere](https://open.spotify.com/album/7sQOtKcwp36OEtA14XhoRS) | 3:36 | 2019-07-29 | 2019-10-26 | | [Apple Pie Bed](https://open.spotify.com/track/0rOnpjlEP9fUPl8deHZbbR) | [Lawrence Arabia](https://open.spotify.com/artist/7pTKdNH664VxPgXpfddid9) | [Chant Darling](https://open.spotify.com/album/12czE8QhrPUY9WGmhK0IWq) | 3:28 | 2019-10-25 | 2019-10-26 | | [Banana Pancakes](https://open.spotify.com/track/451GvHwY99NKV4zdKPRWmv) | [Jack 
Johnson](https://open.spotify.com/artist/3GBPw9NK25X1Wt2OUvOwY3) | [In Between Dreams](https://open.spotify.com/album/7tTc46dNdE6GGuiQsssWxo) | 3:11 | 2019-07-29 | | | [Barcelona](https://open.spotify.com/track/58Qol5xzalwqF912GE9vQv) | [George Ezra](https://open.spotify.com/artist/2ysnwxxNtSgbb9t1m2Ur4j) | [Wanted on Voyage](https://open.spotify.com/album/7L4ZwpSwKQCerDQv9C4O1M) | 3:08 | 2019-07-29 | | | [Be OK](https://open.spotify.com/track/2qESkHBZ2VThboOnosYFBk) | [Ingrid Michaelson](https://open.spotify.com/artist/2vm8GdHyrJh2O2MfbQFYG0) | [Be OK](https://open.spotify.com/album/3KVfMVtOmoVCgihLE4HoBr) | 2:27 | 2019-07-29 | 2019-10-26 | | [Believer](https://open.spotify.com/track/0hr6vFOusAJgPCBgIBJNDH) | [Marc Scibilia](https://open.spotify.com/artist/4CHiVarfTsFhkFOk5vHS77) | [Believer](https://open.spotify.com/album/07GDCELlGH4RDhWsHrTuGb) | 3:31 | 2019-07-29 | | | [Better Together](https://open.spotify.com/track/4VywXu6umkIQ2OS0m1I79y) | [Jack Johnson](https://open.spotify.com/artist/3GBPw9NK25X1Wt2OUvOwY3) | [In Between Dreams](https://open.spotify.com/album/7tTc46dNdE6GGuiQsssWxo) | 3:27 | 2019-07-29 | | | [Black Jeans](https://open.spotify.com/track/2vBnxZkr6K0Py6iTxHXUKS) | [Lucie Silvas](https://open.spotify.com/artist/57HiMjhnxdJflQodRyC5Ju) | [Black Jeans](https://open.spotify.com/album/1CVxayI7MG3L73PgA1DCz6) | 5:06 | 2019-08-27 | | | [Brand New](https://open.spotify.com/track/5EQ9yEra0SzVd673ZfKT4C) | [Ben Rector](https://open.spotify.com/artist/4AapPt7H6bGH4i7chTulpI) | [Brand New](https://open.spotify.com/album/4E2YvGzYMgr4DizJJ0PQoo) | 4:03 | 2019-07-29 | | | [Brazil](https://open.spotify.com/track/4sNG6zQBmtq7M8aeeKJRMQ) | [Declan McKenna](https://open.spotify.com/artist/2D4FOOOtWycb3Aw9nY5n3c) | [What Do You Think About the Car?](https://open.spotify.com/album/3HJiLDJgWA9Z0MvCxlzHYQ) | 4:12 | 2019-07-29 | 2019-11-16 | | [Bridges](https://open.spotify.com/track/3VPFV5Xj8QjXDJKl2rVce7) | 
[Johnnyswim](https://open.spotify.com/artist/4igDSX1kgfWbVTDCywcBGm) | [Moonlight](https://open.spotify.com/album/3ZU0AW8kgxxyR48yUFUiK5) | 3:36 | 2019-08-21 | | | [Budapest](https://open.spotify.com/track/7q0aQpiLv5tIsupcgQ3Ny4) | [George Ezra](https://open.spotify.com/artist/2ysnwxxNtSgbb9t1m2Ur4j) | [Wanted on Voyage](https://open.spotify.com/album/7L4ZwpSwKQCerDQv9C4O1M) | 3:20 | 2019-07-29 | | | [Caroline](https://open.spotify.com/track/3S5mohVxC0Xuj0tgZ7vU7g) | [Briston Maroney](https://open.spotify.com/artist/7vtSUU3zpHeYJfX6BPNrJd) | [Indiana](https://open.spotify.com/album/7nfm0AkAW1WqxJ167DHuXV) | 3:25 | 2019-08-21 | | | [Carry Me Away](https://open.spotify.com/track/6TL3MOcVW8i1UiJkvhpDbR) | [John Mayer](https://open.spotify.com/artist/0hEurMDQu99nJRq8pTxO14) | [Carry Me Away](https://open.spotify.com/album/5KvmuOTWNRFPAQdz0KRPcf) | 2:36 | 2019-10-25 | | | [Cassidy](https://open.spotify.com/track/3Ycz5kztX54LUuKhM3ZDBK) | [Brett Dennen](https://open.spotify.com/artist/0FC1LIeQXKib0jOwZqeIwT) | [Cassidy](https://open.spotify.com/album/2wVOB6f979kDVipDA3HteM) | 4:28 | 2019-07-29 | | | [Catch My Disease](https://open.spotify.com/track/4o3HitPDkPvFKVpSCXtHzi) | [Ben Lee](https://open.spotify.com/artist/06y1hH4hu3rcTUXHJevPCf) | [Awake is the New Sleep](https://open.spotify.com/album/4jZCBlaclmE1QaoJSDwX37) | 4:14 | 2019-07-29 | 2019-10-26 | | [Celeste](https://open.spotify.com/track/23cj0rlc0UtTBaCg60VCkm) | [Ezra Vine](https://open.spotify.com/artist/2gJqa0PdfSuLpoQlWAIAzn) | [Celeste EP](https://open.spotify.com/album/3W0K6QezWMLCYcZlJitqHt) | 3:23 | 2019-07-29 | | | [Changes](https://open.spotify.com/track/0gKiONJeEtwb3hps4sUgyn) | [Langhorne Slim](https://open.spotify.com/artist/099toTcKJoywTosZr2hHjy), [The Law](https://open.spotify.com/artist/6DK3E5dh7jJrKyAHfucWBB) | [The Spirit Moves](https://open.spotify.com/album/77UiJMD9OVYj2YXr2gO9L5) | 2:41 | 2019-07-29 | | | [Colors](https://open.spotify.com/track/6vaSStNN5NX4nJ4QbRY3S0) | [Black 
Pumas](https://open.spotify.com/artist/6eU0jV2eEZ8XTM7EmlguK6) | [Black Pumas](https://open.spotify.com/album/54SlWgNocRPhlZEFTYjOfW) | 4:06 | 2019-10-25 | 2019-10-26 | | [Come On Get Higher](https://open.spotify.com/track/38YgZVHPWOWsKrsCXz6JyP) | [Matt Nathanson](https://open.spotify.com/artist/4NGiEU3Pkd8ASRyQR30jcA) | [Some Mad Hope](https://open.spotify.com/album/45A2E1YR00sPSwxJw5d3qu) | 3:35 | 2019-07-29 | | | [Comeback Kid (That's My Dog)](https://open.spotify.com/track/621CFhl91BygHysX0igkyY) | [Brett Dennen](https://open.spotify.com/artist/0FC1LIeQXKib0jOwZqeIwT) | [The Definitive Collection](https://open.spotify.com/album/7qF9HEA57AL9dPGlipcDvF) | 3:25 | 2019-07-29 | | | [Cross My Mind](https://open.spotify.com/track/3T7KFsyl6n3UklWgfn0Lnp) | [Twin Forks](https://open.spotify.com/artist/6GwNGuDRNbx5XwoHQA3QiD) | [Twin Forks](https://open.spotify.com/album/6WU9Z2HcsYNL3ImMBzZNow) | 3:34 | 2019-07-29 | 2019-10-26 | | [Daniel](https://open.spotify.com/track/16rosLVYEGV8TO8VQqxxmm) | [Susto](https://open.spotify.com/artist/7foyQbi7GKriLiv1GPVEwt) | [Daniel](https://open.spotify.com/album/5RiG6luXx2PXXTYAOjh6Tb) | 3:40 | 2019-08-27 | 2019-10-25 | | [Dark Days](https://open.spotify.com/track/7LZN7FkxHZk6maiN6NdI2i) | [Local Natives](https://open.spotify.com/artist/75dQReiBOHN37fQgWQrIAJ) | [Sunlit Youth](https://open.spotify.com/album/2qiPY1CqHGexT4yWrQ5uX0) | 3:00 | 2019-10-25 | 2019-10-26 | | [Dear To Me](https://open.spotify.com/track/4hh1lvWiUaLMOcVXhyK6TA) | [Electric Guest](https://open.spotify.com/artist/7sgWBYtJpblXpJl2lU5WVs) | [Plural](https://open.spotify.com/album/4ncZdrGiO459vrUU1yYCCA) | 4:01 | 2019-10-25 | 2019-10-26 | | [Down in the Valley](https://open.spotify.com/track/5Gtn8HgCAo0TUiaKKgP6us) | [The Head and the Heart](https://open.spotify.com/artist/0n94vC3S9c3mb2HyNAOcjg) | [The Head and the Heart](https://open.spotify.com/album/0xWfhCMYmaiCXtLOuyPoLF) | 5:03 | 2019-07-29 | | | [Eloise](https://open.spotify.com/track/6J1Rk8WkZ6KOFUtlTVMoJd) 
| [Penny and Sparrow](https://open.spotify.com/artist/65o6y7GtoXzchyiJB3r9Ur) | [Eloise](https://open.spotify.com/album/1y72knBCC5iPB75Syr3P40) | 2:48 | 2019-08-21 | | | [Empress](https://open.spotify.com/track/5YVYVuNA44FT2yVxEXOw3r) | [Morningsiders](https://open.spotify.com/artist/5hPR4Atp3QY2ztiAcz1inl) | [Empress](https://open.spotify.com/album/07S5Om7EgjLq3I6QNuDSb8) | 3:19 | 2019-07-29 | | | [Eyes](https://open.spotify.com/track/7fArBkBSsaUF5mOcpTL56I) | [Rogue Wave](https://open.spotify.com/artist/2JSc53B5cQ31m0xTB7JFpG) | [Eyes](https://open.spotify.com/album/0ipi3dQXxde567orrSLq50) | 2:28 | 2019-07-29 | | | [Follow the Sun](https://open.spotify.com/track/3KS09EvUKTtPgTHbBBsLCw) | [Xavier Rudd](https://open.spotify.com/artist/5lbM4g6bhxjNX7R5QHP2nD) | [Spirit Bird](https://open.spotify.com/album/1EKasDeDwsn6HyaAZaSa1e) | 4:15 | 2019-07-29 | | | [Free](https://open.spotify.com/track/2BJy5YrRMhVCqaLNY5cbv2) | [Donavon Frankenreiter](https://open.spotify.com/artist/2IAZ2xX1Ovh5jxhBWE7wda) | [Donavon Frankenreiter](https://open.spotify.com/album/7HPDZ1Gu4pWyDn9o6DiToG) | 2:28 | 2019-07-29 | | | [Georgia](https://open.spotify.com/track/429EttO8gs0bDo2SQfUNSm) | [Vance Joy](https://open.spotify.com/artist/10exVja0key0uqUkk6LJRT) | [Dream Your Life Away (Special Edition)](https://open.spotify.com/album/5S9b8euumqMhQbMk0zzQdH) | 3:50 | 2019-07-29 | | | [Getting Ready to Get Down](https://open.spotify.com/track/1gDJei64d4Kl4S1ObDCmEu) | [Josh Ritter](https://open.spotify.com/artist/6igfLpd8s6DBBAuwebRUuo) | [Getting Ready to Get Down](https://open.spotify.com/album/1UBBJFhQa6pUpdegxfbo0M) | 3:16 | 2019-07-29 | 2019-10-26 | | [Gloria](https://open.spotify.com/track/75ZZuV6Y954ONtyoQboey1) | [The Lumineers](https://open.spotify.com/artist/16oZKvXb6WkQlVAjwo2Wbg) | [Gloria Sparks](https://open.spotify.com/album/7FsCFNZmtYmEJMEGnhaboR) | 3:36 | 2019-08-21 | | | [Gone](https://open.spotify.com/track/3mxMrdo3fJjDbb64nagoXR) | [JR 
JR](https://open.spotify.com/artist/3VAxb3UskTNiHAKh4UeOEv) | [JR JR](https://open.spotify.com/album/3shFtH3EfvyztGl2sdsmHS) | 3:47 | 2019-07-29 | 2019-08-20 | | [Good News](https://open.spotify.com/track/1dXCXb006YbPSAajh6qhaF) | [Ocean Park Standoff](https://open.spotify.com/artist/1qGohIp3a4kh1Euymx0pyL) | [Ocean Park Standoff](https://open.spotify.com/album/3RLGxqyxJbe9wFro2TQp4T) | 3:08 | 2019-07-29 | 2019-10-26 | | [Green Mountain State](https://open.spotify.com/track/0c7iF5fSBYxCuwsAv2z4iI) | [Trevor Hall](https://open.spotify.com/artist/3RMHexittaAZkf8zukkZB8) | [Chapter Of The Forest](https://open.spotify.com/album/0Tt5WHP4RdkQemDgD1QItP) | 4:34 | 2019-07-29 | | | [Heart](https://open.spotify.com/track/5L7IgwUPhir2FJftGNXJDW) | [Rainbow Kitten Surprise](https://open.spotify.com/artist/4hz8tIajF2INpgM0qzPJz2) | [Mary (b-sides)](https://open.spotify.com/album/15RYnRTIMHrCB6X3HjK2mC) | 3:39 | 2019-08-20 | | | [Heart's Content](https://open.spotify.com/track/0pegFWSUOTiG0sLVEfxtvA) | [Brandi Carlile](https://open.spotify.com/artist/2sG4zTOLvjKG1PSoOyf5Ej) | [Bear Creek](https://open.spotify.com/album/5b8YTIrc88vdnfRguZqvVE) | 3:34 | 2019-07-29 | | | [Hello My Old Heart](https://open.spotify.com/track/2c62Xf5Po1YSa1N6LOjPHy) | [The Oh Hellos](https://open.spotify.com/artist/3Fe3pszR2t4TOBVz41B1WR) | [The Oh Hellos EP](https://open.spotify.com/album/1a3UYpjNVB67soVfvtHoG8) | 4:16 | 2019-07-29 | | | [Hero](https://open.spotify.com/track/6GRDI9suQHikFP6euIXnpq) | [Family of the Year](https://open.spotify.com/artist/7zsin6IgVsR1rqSRCNYDwq) | [Loma Vista](https://open.spotify.com/album/14SCnv027L0HidHq0URIDu) | 3:10 | 2019-07-29 | 2019-08-21 | | [Hey, Soul Sister](https://open.spotify.com/track/5i0eU4qWEhgsDcG6xO5yvy) | [Train](https://open.spotify.com/artist/3FUY2gzHeIiaesXtOAdB7A) | [Save Me, San Francisco](https://open.spotify.com/album/31JOfqUiCWs5facQRPdeUk) | 3:36 | 2019-07-29 | 2019-10-26 | | [Highroad](https://open.spotify.com/track/64iXPlkVfgkSYRvE0CXij0) | 
[Sir Woman](https://open.spotify.com/artist/3H03S3ZtyYLdzsk6EYndUL) | [Highroad](https://open.spotify.com/album/70gYt9g5sdmG00ArLq5eIY) | 3:50 | 2019-08-20 | | | [Ho Hey](https://open.spotify.com/track/0W4Kpfp1w2xkY3PrV714B7) | [The Lumineers](https://open.spotify.com/artist/16oZKvXb6WkQlVAjwo2Wbg) | [The Lumineers](https://open.spotify.com/album/6NWYmlHxAME5KXtxrTlUxW) | 2:43 | 2019-07-29 | 2019-10-26 | | [Holocene](https://open.spotify.com/track/4fbvXwMTXPWaFyaMWUm9CR) | [Bon Iver](https://open.spotify.com/artist/4LEiUm1SRbFMgfqnQTwUbQ) | [Bon Iver](https://open.spotify.com/album/1JlvIsP2f6ckoa62aN7kLn) | 5:36 | 2019-07-29 | 2019-10-26 | | [Home](https://open.spotify.com/track/6ZapsNk1ZpaebNXAIohP9R) | [Edward Sharpe & The Magnetic Zeros](https://open.spotify.com/artist/7giUHu5pv6YTZgSkxxCcgh) | [Up From Below](https://open.spotify.com/album/4EBtalKeisyblXwox9mMXf) | 5:03 | 2019-07-29 | | | [Homeward Bound](https://open.spotify.com/track/03VXrViYqJpdhuBEV0p0ak) | [Simon & Garfunkel](https://open.spotify.com/artist/70cRZdQywnSFp9pnc2WTCE) | [Parsley, Sage, Rosemary And Thyme](https://open.spotify.com/album/5dqFVDJ9lPRBCqEzufd8sF) | 2:29 | 2019-07-29 | 2019-08-20 | | [Honeybee](https://open.spotify.com/track/5CalS8Gn69OOrR9aiw0ZO9) | [The Head and the Heart](https://open.spotify.com/artist/0n94vC3S9c3mb2HyNAOcjg) | [Living Mirage](https://open.spotify.com/album/27LNgTSAGxE2fitrsCukmT) | 3:16 | 2019-08-20 | | | [Hot Tears](https://open.spotify.com/track/3BICHhGJhJNlmCCsHCwVNl) | [Leif Vollebekk](https://open.spotify.com/artist/3jzXlBF2157k4exx7idecs) | [Hot Tears](https://open.spotify.com/album/6coKW363VV9AS9ev6vD0x2) | 4:07 | 2019-08-22 | | | [How Bad We Need Each Other](https://open.spotify.com/track/5m2LyVfBZhEmOvBlBGBX1l) | [Marc Scibilia](https://open.spotify.com/artist/4CHiVarfTsFhkFOk5vHS77) | [How Bad We Need Each Other](https://open.spotify.com/album/7iJxwENQQ1RZVPiUanIToJ) | 3:13 | 2019-07-29 | | | [Hurt Somebody (With Julia 
Michaels)](https://open.spotify.com/track/7vA2Y79Q4bBqdzBCfHeGEe) | [Noah Kahan](https://open.spotify.com/artist/2RQXRUsr4IW1f3mKyKsy4B), [Julia Michaels](https://open.spotify.com/artist/0ZED1XzwlLHW4ZaG4lOT6m) | [Hurt Somebody](https://open.spotify.com/album/1TMA2dKLdsJZ8u1iikE6Ow) | 2:48 | 2019-07-29 | | | [I Found Out](https://open.spotify.com/track/2IxOGK5JJIhi2Alx5BA7oe) | [The Head and the Heart](https://open.spotify.com/artist/0n94vC3S9c3mb2HyNAOcjg) | [Living Mirage](https://open.spotify.com/album/27LNgTSAGxE2fitrsCukmT) | 3:53 | 2019-08-20 | | | [I Need a Teacher](https://open.spotify.com/track/7i8MdTMo8ZibQhvhqzWhIJ) | [Hiss Golden Messenger](https://open.spotify.com/artist/37eqxl8DyLd5sQN54wYJbE) | [I Need a Teacher](https://open.spotify.com/album/0BIqUXJYUZCoTG1yiWwOqv) | 3:17 | 2019-08-30 | | | [I'm Yours](https://open.spotify.com/track/1EzrEOXmMH3G43AXT1y7pA) | [Jason Mraz](https://open.spotify.com/artist/4phGZZrJZRo4ElhRtViYdl) | [We Sing. We Dance. We Steal Things.](https://open.spotify.com/album/04G0YylSjvDQZrjOfE5jA5) | 4:02 | 2019-07-29 | 2019-08-30 | | [In Your Hands - Single Version](https://open.spotify.com/track/7tJHvRoGvkckZkZk5ORUot) | [Nick Mulvey](https://open.spotify.com/artist/3x8FbPjh2Qz55XMdE2Yalj) | [In Your Hands](https://open.spotify.com/album/0tZj37rUrevqNsoI4N0iEA) | 3:50 | 2019-07-29 | | | [Island In The Sun](https://open.spotify.com/track/1IRZwD3r9VOehOR8rGeV3Y) | [Weezer](https://open.spotify.com/artist/3jOstUTkEu2JkjvRdBA5Gu) | [Smallville: The Talon Mix](https://open.spotify.com/album/4ZS43VlYwOu0WUvFfQsleB) | 3:20 | 2019-08-17 | 2019-10-26 | | [Jealous Of Birds](https://open.spotify.com/track/6nuoNN62XI74sF1MehARgZ) | [Bre Kennedy](https://open.spotify.com/artist/61oqMHI8QuFrE5Qt91uJAj) | [Jealous Of Birds](https://open.spotify.com/album/1wvJvb2k412nebCXAsNuMW) | 3:12 | 2019-09-08 | | | [Joyride](https://open.spotify.com/track/0VcVy0kpfwyd5rMS5URyVD) | [Adam Melchor](https://open.spotify.com/artist/54tv11ndFfiqXiR03PwdlB) | 
[Joyride](https://open.spotify.com/album/2kbQUnNWsS3C931Sx7FYzb) | 3:32 | 2019-08-20 | | | [Leaving It Up to You](https://open.spotify.com/track/10vNGxYjg8OGwZilTKYRbk) | [George Ezra](https://open.spotify.com/artist/2ysnwxxNtSgbb9t1m2Ur4j) | [Wanted on Voyage](https://open.spotify.com/album/7L4ZwpSwKQCerDQv9C4O1M) | 3:36 | 2019-07-29 | | | [Lebanon](https://open.spotify.com/track/1jKG59NQTTCr1tSG9YDzuZ) | [J.S. Ondara](https://open.spotify.com/artist/33saQZHi434TBuDAXbyU2W) | [Tales Of America (The Second Coming)](https://open.spotify.com/album/53NJ8rhui9uzJ1IagwQa4V) | 3:26 | 2019-10-26 | | | [Living in Lightning](https://open.spotify.com/track/49rYFKjXZ9n0GS9l9oxAD1) | [City and Colour](https://open.spotify.com/artist/74gcBzlQza1bSfob90yRhR) | [Living in Lightning](https://open.spotify.com/album/1sUbo7S9HuiImv6HaFcbhP) | 3:19 | 2019-10-27 | | | [Lost in My Mind](https://open.spotify.com/track/3gvAGvbMCRvVDDp8ZaIPV5) | [The Head and the Heart](https://open.spotify.com/artist/0n94vC3S9c3mb2HyNAOcjg) | [The Head and the Heart](https://open.spotify.com/album/0xWfhCMYmaiCXtLOuyPoLF) | 4:19 | 2019-07-29 | | | [Love Is Everywhere (Beware)](https://open.spotify.com/track/4hrJhMznNddR7UThDKHSJy) | [Wilco](https://open.spotify.com/artist/2QoU3awHVdcHS8LrZEKvSM) | [Love Is Everywhere (Beware)](https://open.spotify.com/album/0xZgWQ2PZZnFK6T1Lr0OMv) | 3:33 | 2019-08-30 | | | [Love Is Love](https://open.spotify.com/track/3evHzU2xmG80c3jS4YT6ZI) | [Grace Potter](https://open.spotify.com/artist/1PJVVIeS5Wu0wbZDhtC0Ht) | [Love Is Love](https://open.spotify.com/album/658qjvfIWOhCwaOZixkb45) | 3:06 | 2019-10-27 | | | [Love Someone](https://open.spotify.com/track/7mQruvaJWb7U18LX7p3m5r) | [Jason Mraz](https://open.spotify.com/artist/4phGZZrJZRo4ElhRtViYdl) | [YES!](https://open.spotify.com/album/4mV8o8hl6v3u0Gzv0DfrXt) | 4:16 | 2019-07-29 | 2019-08-29 | | [Marinade](https://open.spotify.com/track/2N60TAtXaCbmi7zqdUoW61) | [DOPE 
LEMON](https://open.spotify.com/artist/7oZLKL1GjYiaAgssXsLmW8) | [Honey Bones](https://open.spotify.com/album/3diCjjzTaCODKwH1OOmrWf) | 3:57 | 2019-08-13 | 2019-10-26 | | [Me and Julio Down by the Schoolyard](https://open.spotify.com/track/6vxHp3CDNo0afgKGp2yi1E) | [Paul Simon](https://open.spotify.com/artist/2CvCyf1gEVhI0mX6aFXmVI) | [Paul Simon](https://open.spotify.com/album/7npBPiCHjPj8PVIGPuHXep) | 2:44 | 2019-07-29 | 2019-08-21 | | [Mess](https://open.spotify.com/track/1BlQWQgGP84r4GYUVty4Ar) | [Noah Kahan](https://open.spotify.com/artist/2RQXRUsr4IW1f3mKyKsy4B) | [Busyhead](https://open.spotify.com/album/3DNQrMjvVGiueVrj1qquJd) | 3:33 | 2019-08-21 | | | [Mockingbird](https://open.spotify.com/track/6fS1CEMY4LlvQNWuUMoWEQ) | [Ruston Kelly](https://open.spotify.com/artist/5zuqnTZOeJzI0N0yQ7XA7I) | [Dying Star](https://open.spotify.com/album/0HglC8wDUKL0VV5KI31bqU) | 4:37 | 2019-08-20 | | | [More I See](https://open.spotify.com/track/58iT2lT6yiTwJyN8zlXrKt) | [S. Carey](https://open.spotify.com/artist/2LSJrlndCuTpdEluvYHc2E) | [Hundred Acres](https://open.spotify.com/album/7J2oRTfH14BbakDbmqMgiM) | 4:03 | 2019-07-29 | | | [Mr. Jones](https://open.spotify.com/track/5DiXcVovI0FcY2s0icWWUu) | [Counting Crows](https://open.spotify.com/artist/0vEsuISMWAKNctLlUAhSZC) | [August And Everything After](https://open.spotify.com/album/4nKfZbCALT9H9LfedtDwnZ) | 4:32 | 2019-10-25 | 2019-10-26 | | [Mr. Rodriguez](https://open.spotify.com/track/05D9VMb7x06OJHZV2TGUBX) | [Rayland Baxter](https://open.spotify.com/artist/251UrhgNbMr15NLzQ2KyKq) | [Mr. 
Rodriguez](https://open.spotify.com/album/5G5BNV0KyMgIj1ER2KV62X) | 3:43 | 2019-07-29 | | | [My God Has A Telephone](https://open.spotify.com/track/0Njsdm3fZvEqAveqcgDISP) | [The Flying Stars Of Brooklyn NY](https://open.spotify.com/artist/1Q33Sd9px79c7lWMbQXwxm) | [My God Has a Telephone](https://open.spotify.com/album/1mUZg3cuUcwifewwTCewbM) | 2:55 | 2019-07-29 | | | [Naked As We Came](https://open.spotify.com/track/2gUSIsapdX6jEJ0DvjqTt2) | [Iron & Wine](https://open.spotify.com/artist/4M5nCE77Qaxayuhp3fVn4V) | [Our Endless Numbered Days](https://open.spotify.com/album/20OPxsW0aYB6InxDImJRdt) | 2:32 | 2019-07-29 | 2019-11-17 | | [New Light](https://open.spotify.com/track/3bH4HzoZZFq8UpZmI2AMgV) | [John Mayer](https://open.spotify.com/artist/0hEurMDQu99nJRq8pTxO14) | [New Light](https://open.spotify.com/album/5fEgDYFPUcvQy21TYoLEZ0) | 3:36 | 2019-07-29 | | | [New Slang](https://open.spotify.com/track/5oUV6yWdDM0R9Q2CizRhIt) | [The Shins](https://open.spotify.com/artist/4LG4Bs1Gadht7TCrMytQUO) | [Oh, Inverted World](https://open.spotify.com/album/34XlrJGfsDhvRDeJ8a6lie) | 3:51 | 2019-07-29 | 2019-10-26 | | [New Soul](https://open.spotify.com/track/6obMmMuVhvB0VMTZa5EJIP) | [Yael Naim](https://open.spotify.com/artist/32aFdXARUiqP81SXqIPD4w) | [Yael Naïm](https://open.spotify.com/album/3ufKV1PaaW3hdXIgocxPIQ) | 3:45 | 2019-07-29 | 2019-10-26 | | [No Sleep](https://open.spotify.com/track/2pfAvgMoHLfialvMYn337d) | [Caamp](https://open.spotify.com/artist/0wyMPXGfOuQzNR54ujR9Ix) | [By and By](https://open.spotify.com/album/4Ib3LE6FimfhNVnY7Tc1zM) | 3:57 | 2019-08-21 | | | [Ooh La La - 2015 Remaster](https://open.spotify.com/track/6TNNMVpOgn8K5NoDC7alG6) | [Faces](https://open.spotify.com/artist/3v4feUQnU3VEUqFrjmtekL) | [The Best Of Faces: Good Boys When They're Asleep](https://open.spotify.com/album/375DYMUVvk7xXyKq5IaUTR) | 3:34 | 2019-07-29 | | | [Orpheus](https://open.spotify.com/track/3sC62j1Cjeea5tAhcyGcs8) | [Sara 
Bareilles](https://open.spotify.com/artist/2Sqr0DXoaYABbjBo9HaMkM) | [Amidst the Chaos](https://open.spotify.com/album/5x2sDapUIdq0qk1ezff3gm) | 4:13 | 2019-08-22 | | | [Paradise](https://open.spotify.com/track/38zwkK6TtTjIW9tpYBfZ3D) | [George Ezra](https://open.spotify.com/artist/2ysnwxxNtSgbb9t1m2Ur4j) | [Staying at Tamara's](https://open.spotify.com/album/2NaulYO6lGXTyIzWTJvRJj) | 3:42 | 2019-07-29 | 2019-09-04 | | [Pick Me Apart](https://open.spotify.com/track/2mTeXbXicXdMxTm3K05ciA) | [Active Bird Community](https://open.spotify.com/artist/52atJIClJ4KZuYaIBLbNbH) | [Pick Me Apart](https://open.spotify.com/album/3fY5dO3gJD3uBPGkdisA0f) | 3:37 | 2019-07-29 | 2019-10-26 | | [Put Your Records On](https://open.spotify.com/track/2nGFzvICaeEWjIrBrL2RAx) | [Corinne Bailey Rae](https://open.spotify.com/artist/29WzbAQtDnBJF09es0uddn) | [Corinne Bailey Rae](https://open.spotify.com/album/141Mp3P2VKHQMhtkW1DyQg) | 3:35 | 2019-07-29 | | | [Rain](https://open.spotify.com/track/6medyjHIiWzdD7aOoHEwxt) | [Bishop Allen](https://open.spotify.com/artist/5rE9FxgMGmsZ8Cg4AeOavJ) | [The Broken String](https://open.spotify.com/album/2fAA87YxwDh8sofZrwz6Wd) | 3:35 | 2019-10-25 | 2019-10-26 | | [Ramble On](https://open.spotify.com/track/0OQLMQBKeaFuA2PJRdMuik) | [Train](https://open.spotify.com/artist/3FUY2gzHeIiaesXtOAdB7A) | [Ramble On](https://open.spotify.com/album/6wFpk4lFTAhrmfkvZxf6Gg) | 4:22 | 2019-07-29 | 2019-08-21 | | [Rearrange Us](https://open.spotify.com/track/6M7VWjQnVsSCTifhHJGKpG) | [Mt. 
Joy](https://open.spotify.com/artist/69tiO1fG8VWduDl3ji2qhI) | [Rearrange Us](https://open.spotify.com/album/6l6ThHfDJ8Ja5uunao40so) | 3:07 | 2019-10-27 | | | [Send Me On My Way](https://open.spotify.com/track/4yshHBPp0MoVynV1sMCKV3) | [Rusted Root](https://open.spotify.com/artist/2M3vnW1p5w4uPRkLYTbvdB) | [The Best Of / 20th Century Masters The Millennium Collection](https://open.spotify.com/album/73dMmmMMWYjmCnGj1OgvIR) | 4:21 | 2019-07-29 | | | [Seventeen](https://open.spotify.com/track/1fWwxmWor6QbvBeLSV428F) | [Sjowgren](https://open.spotify.com/artist/32Ko3nL0210QAt14S3Rs4Y) | [Seventeen](https://open.spotify.com/album/3QfvzIBsZ2zbYWsNp7StMw) | 3:46 | 2019-07-29 | | | [Shaky Ground](https://open.spotify.com/track/0VCveHO3AX0OLRuTgpBKBJ) | [Freedom Fry](https://open.spotify.com/artist/195hFqaTDENqLCcG8uGtM7) | [Shaky Ground](https://open.spotify.com/album/1iYLinQ11V4WmuiHsu1Ryd) | 3:12 | 2019-07-29 | | | [Shine](https://open.spotify.com/track/4hIv0vg8xbxlbbde8cIkcz) | [Benjamin Francis Leftwich](https://open.spotify.com/artist/7D5oTJSXSHf51auG0106CQ) | [Last Smoke Before the Snowstorm](https://open.spotify.com/album/2llC1TaaFaKVDEveEq9hXW) | 3:02 | 2019-07-29 | 2019-11-15 | | [Shorty Don't Wait](https://open.spotify.com/track/33caYRuBFGkI5JaFMUeydz) | [A Great Big World](https://open.spotify.com/artist/5xKp3UyavIBUsGy3DQdXeF) | [Is There Anybody Out There?](https://open.spotify.com/album/1yOcLa4euMk9sV7rRJ89Dl) | 4:11 | 2019-07-29 | | | [Side Effects](https://open.spotify.com/track/7l1JgKKbTh8n0o1ya4j67k) | [Joseph](https://open.spotify.com/artist/5Wfvw7rDz7HA6gE2z6QhqO) | [Good Luck, Kid](https://open.spotify.com/album/4Nz2TKH4snc8EZMhsMDjgi) | 3:43 | 2019-10-27 | | | [Sitting, Waiting, Wishing](https://open.spotify.com/track/5eWOsyHHic4vJP3LjTVhqv) | [Jack Johnson](https://open.spotify.com/artist/3GBPw9NK25X1Wt2OUvOwY3) | [In Between Dreams](https://open.spotify.com/album/7tTc46dNdE6GGuiQsssWxo) | 3:03 | 2019-07-29 | 2019-10-30 | | [Someone 
New](https://open.spotify.com/track/0efT4YKQLQx2YHbp6vgRX8) | [Hozier](https://open.spotify.com/artist/2FXC3k01G6Gw61bmprjgqS) | [Hozier](https://open.spotify.com/album/04E0aLUdCHnhnnYrDDvcHq) | 3:42 | 2019-07-29 | | | [Something Tells Me](https://open.spotify.com/track/6yWn9RKfL1il1b4hFe6H1i) | [BAILEN](https://open.spotify.com/artist/3sYoUB7tAeXO7sOAB8eaII) | [Thrilled To Be Here](https://open.spotify.com/album/03tlaFyvYYWHr16yGL01qZ) | 4:01 | 2019-08-21 | | | [Sometimes Love Takes So Long](https://open.spotify.com/track/0WH1FwH09z2iJ84dRFODcm) | [Illiterate Light](https://open.spotify.com/artist/1vEqG4Bxz3YIMuDkIcvg6J) | [Sometimes Love Takes So Long](https://open.spotify.com/album/54MM8pLcgnDSIWYtt2ikPK) | 3:40 | 2019-08-30 | | | [Spirits](https://open.spotify.com/track/0VhDlpezWuAgOnfOCx2fRv) | [The Strumbellas](https://open.spotify.com/artist/6ujr1NkqbZpYOhquczUUfl) | [Spirits](https://open.spotify.com/album/6YObUrJfBLhs4f2vfRMzqD) | 3:23 | 2019-08-11 | 2019-08-12 | | [Stars](https://open.spotify.com/track/1LHE8cxWt7SVCitinqEhyl) | [Future Generations](https://open.spotify.com/artist/3wKj5PmSpnrtz9n9hG2QCA) | [Future Generations](https://open.spotify.com/album/3bIqG0mLXWMFgACHLxDI7m) | 3:53 | 2019-07-29 | 2019-10-26 | | [Stay High](https://open.spotify.com/track/3fDqdR9QqcsgFTRFqy14Ki) | [Brittany Howard](https://open.spotify.com/artist/4XquDVA8pkg5Lx91No1JxB) | [Stay High](https://open.spotify.com/album/4gb0j034W6vgCDwRO2N88c) | 3:11 | 2019-08-27 | 2019-11-18 | | [Story Of A Fish](https://open.spotify.com/track/6QvLssSgD8MhY4UrvQ3WrF) | [Jeremy Ivey](https://open.spotify.com/artist/08Gc0o3GdjIKtQVoNYaVNG) | [Story Of A Fish](https://open.spotify.com/album/1JBQiowIA9EjwO3Q1EnPtp) | 3:22 | 2019-10-03 | | | [Stubborn Love](https://open.spotify.com/track/3ekNuTF3UpOvIZCfiejpnC) | [The Lumineers](https://open.spotify.com/artist/16oZKvXb6WkQlVAjwo2Wbg) | [The Lumineers](https://open.spotify.com/album/6NWYmlHxAME5KXtxrTlUxW) | 4:39 | 2019-07-29 | 2019-10-14 | | 
[Sunday](https://open.spotify.com/track/3BqHafmDs7SJVohRT0T6jU) | [Joy Oladokun](https://open.spotify.com/artist/7rrTqtOUOwva4sgTx9C9F9) | [Sunday](https://open.spotify.com/album/6QDyUBlOnJ3lHlqOyFJJvO) | 3:13 | 2019-08-30 | | | [Supply & Demand](https://open.spotify.com/track/09PIYploijfke8kPiVVoJV) | [Wilder Woods](https://open.spotify.com/artist/26DytDdxKgr9N0tdrBSLs2) | [Supply & Demand](https://open.spotify.com/album/2IwPbScfxhHIqbz2SxjVVX) | 3:16 | 2019-08-20 | | | [Surefire](https://open.spotify.com/track/2N2gukfZet8Oe4aYR5Apd6) | [Wilderado](https://open.spotify.com/artist/1Tp7C6LzxZe9Mix6rn4zbI) | [Surefire](https://open.spotify.com/album/1I2hGwPg0UwRN8pxBHJQLZ) | 4:00 | 2019-08-20 | | | [Sweet Pea](https://open.spotify.com/track/4KqBoq7MoDJeVsvUHTjXCM) | [Amos Lee](https://open.spotify.com/artist/0QrowybipCKUDnq5y10PD2) | [Supply And Demand](https://open.spotify.com/album/7zAMTPQbo4MM4trmSpvsNo) | 2:09 | 2019-07-29 | | | [That Was Yesterday](https://open.spotify.com/track/3gwfEBrFpzEFCZMjw7mqxA) | [Leon Bridges](https://open.spotify.com/artist/3qnGvpP8Yth1AqSBMqON5x) | [That Was Yesterday](https://open.spotify.com/album/2GYGaipOwfLGveAF3ta6Iv) | 3:50 | 2019-08-22 | | | [The A Team](https://open.spotify.com/track/1VdZ0vKfR5jneCmWIUAMxK) | [Ed Sheeran](https://open.spotify.com/artist/6eUKZXaKkcviH0Ku9w2n3V) | [+](https://open.spotify.com/album/0W5GGnapMz0VwemQvJDqa7) | 4:18 | 2019-07-29 | 2019-08-20 | | [The Stable Song](https://open.spotify.com/track/3G9ETaH55bMQx8hwNhAgbU) | [Gregory Alan Isakov](https://open.spotify.com/artist/5sXaGoRLSpd7VeyZrLkKwt) | [That Sea, The Gambler](https://open.spotify.com/album/7ecZGh7SICLEkqqkBNXfvE) | 6:00 | 2019-07-29 | | | [The Walk](https://open.spotify.com/track/7tBZa65xUKMMan9tIMPqbi) | [Mayer Hawthorne](https://open.spotify.com/artist/4d53BMrRlQkrQMz5d59f2O) | [How Do You Do](https://open.spotify.com/album/2AsTehQMH82xr6phI9c42V) | 3:38 | 2019-10-25 | 2019-10-26 | | [Toothpaste 
Kisses](https://open.spotify.com/track/6jeWgaprnGyAsRhUcNuZKX) | [The Maccabees](https://open.spotify.com/artist/0vW8z9pZMGCcRtGPGtyqiB) | [Colour It In](https://open.spotify.com/album/3CTry04declmSVBUQ9hTHW) | 2:39 | 2019-07-29 | 2019-11-11 | | [True To Myself](https://open.spotify.com/track/5N0lcnJTtKj4wNDvurHige) | [Ziggy Marley](https://open.spotify.com/artist/0o0rlxlC3ApLWsxFkUjMXc) | [Dragonfly](https://open.spotify.com/album/62Ot058LfUzRFxbramAggQ) | 3:45 | 2019-10-25 | 2019-10-26 | | [Upside Down](https://open.spotify.com/track/6shRGWCtBUOPFLFTTqXZIC) | [Jack Johnson](https://open.spotify.com/artist/3GBPw9NK25X1Wt2OUvOwY3) | [Jack Johnson And Friends: Sing-A-Longs And Lullabies For The Film Curious George](https://open.spotify.com/album/3Jl7i9Vo0Ht4co9SqTFjQy) | 3:28 | 2019-07-29 | | | [Valerie - Live At BBC Radio 1 Live Lounge, London / 2007](https://open.spotify.com/track/6CQaVuICm1WVXyy3SZ5jEI) | [Amy Winehouse](https://open.spotify.com/artist/6Q192DXotxtaysaqNPy5yR) | [Back To Black: B-Sides](https://open.spotify.com/album/3c9D4qaxb9XNd9BJasUEdo) | 3:53 | 2019-07-29 | | | [Wasn't Expecting That](https://open.spotify.com/track/6HTkpNNB3G5L4vmc2PfOUj) | [Jamie Lawson](https://open.spotify.com/artist/1jhdZdzOd4TJLAHqQdkUND) | [Wasn't Expecting That](https://open.spotify.com/album/2A2SHHCTeRIilaUKPVjvWb) | 3:21 | 2019-07-29 | 2019-10-27 | | [We Are Young (feat. Janelle Monáe)](https://open.spotify.com/track/7a86XRg84qjasly9f6bPSD) | [fun.](https://open.spotify.com/artist/5nCi3BB41mBaMH9gfr6Su0), [Janelle Monáe](https://open.spotify.com/artist/6ueGR6SWhUJfvEhqkvMsVs) | [Some Nights (Spotify Exclusive)](https://open.spotify.com/album/7iycyHwOW2plljYIK6I1Zo) | 4:10 | 2019-10-25 | 2019-10-26 | | [Weather Balloons [Feat. Frances Cone]](https://open.spotify.com/track/4mOmMccRWthpaUtjPhiQm8) | [Susto](https://open.spotify.com/artist/7foyQbi7GKriLiv1GPVEwt), [Frances Cone](https://open.spotify.com/artist/5xKsfZBL84iULLWjvd4dWh) | [Weather Balloons [Feat. 
Frances Cone]](https://open.spotify.com/album/752tNJJGYbjlnFfSZBO9ju) | 3:32 | 2019-11-11 | | | [What You Don't Do - Tom Misch Remix](https://open.spotify.com/track/2pygzLN7fwCgeW5FNaugEm) | [Lianne La Havas](https://open.spotify.com/artist/2RP4pPHTXlQpDnO9LvR7Yt), [Tom Misch](https://open.spotify.com/artist/1uiEZYehlNivdK3iQyAbye) | [What You Don't Do (Tom Misch Remix)](https://open.spotify.com/album/134HP01Gf7TECy2tb4Cj1V) | 3:49 | 2019-10-25 | 2019-10-26 | | [Who Says](https://open.spotify.com/track/0HLWvLKQWpFdPhgk6ym58n) | [John Mayer](https://open.spotify.com/artist/0hEurMDQu99nJRq8pTxO14) | [Battle Studies](https://open.spotify.com/album/1V5vQRMWTNGmqwxY8jMVou) | 2:55 | 2019-07-29 | 2019-10-26 | | [Wide Awake](https://open.spotify.com/track/6oa5Vfov0r90SK63nVvTii) | [Frances Cone](https://open.spotify.com/artist/5xKsfZBL84iULLWjvd4dWh) | [Late Riser](https://open.spotify.com/album/0WhrnQczKJFAo1Q1R9buoY) | 3:37 | 2019-08-20 | | | [Wild World](https://open.spotify.com/track/6Xz7FeyE8HTP90HecgHV57) | [Yusuf / Cat Stevens](https://open.spotify.com/artist/08F3Y3SctIlsOEmKd6dnH8) | [Tea For The Tillerman (Remastered)](https://open.spotify.com/album/25Vt8FvZBx4BsSJWEsF7sJ) | 3:20 | 2019-10-25 | 2019-10-26 | | [Without You](https://open.spotify.com/track/6R6ux6KaKrhAg2EIB2krdU) | [Parachute](https://open.spotify.com/artist/2PCUhxD40qlMqsKHjTZD2e) | [Wide Awake](https://open.spotify.com/album/7h5LfM6tT7v8L2gTyL3Bbz) | 3:48 | 2019-07-29 | | | [Yellow](https://open.spotify.com/track/3AJwUDP919kvQ9QcozQPxg) | [Coldplay](https://open.spotify.com/artist/4gzpq5DPGxSnKTe4SA8HAU) | [Parachutes](https://open.spotify.com/album/6ZG5lRT77aJ3btmArcykra) | 4:26 | 2019-07-29 | 2019-10-26 | | [You and I](https://open.spotify.com/track/4oeRfmp9XpKWym6YD1WvBP) | [Ingrid Michaelson](https://open.spotify.com/artist/2vm8GdHyrJh2O2MfbQFYG0) | [Be OK](https://open.spotify.com/album/3KVfMVtOmoVCgihLE4HoBr) | 2:28 | 2019-07-29 | | | [You Are the Best 
Thing](https://open.spotify.com/track/1jyddn36UN4tVsJGtaJfem) | [Ray LaMontagne](https://open.spotify.com/artist/6DoH7ywD5BcQvjloe9OcIj) | [Gossip In The Grain](https://open.spotify.com/album/2CbLBSlkvh2vR4JRLDRQso) | 3:51 | 2019-07-29 | | | [Your Body Is a Wonderland](https://open.spotify.com/track/7vFv0yFGMJW3qVXbAd9BK9) | [John Mayer](https://open.spotify.com/artist/0hEurMDQu99nJRq8pTxO14) | [Room For Squares](https://open.spotify.com/album/3yHOaiXecTJVUdn7mApZ48) | 4:09 | 2019-07-29 | |
245.585714
380
0.754116
yue_Hant
0.620955
53b0d601923f51b3f1696f07606453722afb8b9b
157
md
Markdown
ReleaseNotes/ReleaseNotes-2.0.1.md
JTOne123/DebugTrace-net
c4e1982236c4ca28cbe069f986a3e0fb653c8f57
[ "MIT" ]
null
null
null
ReleaseNotes/ReleaseNotes-2.0.1.md
JTOne123/DebugTrace-net
c4e1982236c4ca28cbe069f986a3e0fb653c8f57
[ "MIT" ]
null
null
null
ReleaseNotes/ReleaseNotes-2.0.1.md
JTOne123/DebugTrace-net
c4e1982236c4ca28cbe069f986a3e0fb653c8f57
[ "MIT" ]
null
null
null
### Improvement

* Improved the line break algorithm in reflection.

<font color="blue">*The following are Japanese.*</font>

### 改善

* リフレクションでの改行のアルゴリズムを改善。
19.625
55
0.732484
eng_Latn
0.952934
53b1416e8ec39c290b8b957fcde5e6b5e4b04246
9,910
md
Markdown
src/sr/2019-03/09/teacher-comments.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/sr/2019-03/09/teacher-comments.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/sr/2019-03/09/teacher-comments.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
--- title: Pouka za učitelje date: 30/08/2019 --- ## Opšti pregled Isus započinje Veliki nalog (Matej 28,19.20) rečju „dakle“. Kad god naiđemo na ovu reč, treba da pogledamo šta joj prethodi da bismo razumeli izjavu koja sledi. U ovom slučaju, Veliki nalog je dat na osnovu Isusove izjave: „Dade Mi se svaka vlast na nebu i na Zemlji.“ (Matej 28,18) Isusova zapovest u vezi sa stvaranjem učenika, krštavanjem, poučavanjem i nastavanjem u Njemu temelji se na Njegovoj vlasti. Previše često ovaj Veliki nalog jednostavno vidimo kao zapovest „idite“. Međutim, u pitanju je poziv da se oslonimo na Njegovu silu i vlast dok prikazujemo Njegov karakter i učenja drugima. Njegov poziv na službu obuhvata Njegovo saučešće prema siromašnima i bespomoćnima kao što je otkriveno u Jevanđeljima. U lekciji za ovu sedmicu, osvrnućemo se na to kako je novozavetna crkva prihvatila Hristovo saučešće prema siromašnima. Videćemo kako je Rana crkva posle Pedesetnice organizovala službe u kojima je pokazala saučešće i kako su učenici i vođe sve veće Hrišćanske crkve takve službe postavili u središte svoje misije. **Ciljevi** - Istražite sa svojim razredom model uravnotežene i sveobuhvatne (holističke) službe prikazane u tekstu Dela 2,41-47. - Istražite ulogu duhovnih darova datih da olakšaju crkvenu službu i poziv upućen svakom verniku da služi potrebama drugih ljudi. - Kao razred procenite uspeh svoje crkve dok, uz pomoć Svetog Duha, nastojite da nastavite sveobuhvatnu službu novozavetne crkve. ## Komentar **Pismo** Zamolite članove razreda da pročitaju tekst Dela 2,41-47. Podsetite se pet elemenata službe koje pronalazimo u životu Rane crkve kao što je prikazano u ovom tekstu. Koliko ovih elemenata je deo službe vaše crkve? - Bogosluženje (Dela 2,42.46.47) - Zajedništvo (Dela 2,42) - Služba u zajednici (Dela 2,45) - Sabiranje (Dela 2,41.47) - Učeništvo (Dela 2,42) **Pismo** Pročitajte tekst Dela 9,36-42. Dorka, ili Tavita, Hristova učenica, živela je u gradu Jopi, na obali Sredozemnog mora. 
Dorka je grčko ime i znači „srna“, dok je Tavita aramejski prevod tog imena. Srna je bila velikodušna osoba koja je pravila određene stvari, posebno odeću, nevoljnima u Jopi. Bila je voljena u tamošnjoj zajednici. Kada se razbolela i umrla, vernici koji su je poznavali odmah su pozvali Petra. Kada je Petar stigao u dom u kome je Srnino telo ležalo, bilo je mnogo uplakanih udovica. Sve su pokazale Petru odeću koju im je Srna sašila. Zamolivši ih da izađu iz sobe, Petar se pomolio i rekao ženi koja je preminula: „Tavita, ustani.“ (Dela 9,40) I ona je ustala iz mrtvih. Kao rezultat, mnogi ljudi u Jopi poverovali su u Gospoda. Srna nije bila vraćena iz mrtvih samo radi nje same. Petar je jednim delom vratio Srnu u život zbog udovica i drugih ljudi u Jopi kojima je bila potrebna pomoć koju je mogla da im pruži. Ona je lep primer kako treba da služimo ljudima oko sebe kojima je potrebna pomoć. Da li je Tavitin duh podignut u život u vašoj crkvi? Kakvu službu vaša crkva obavlja koja će u velikoj meri nedostajati zajednici ukoliko se iznenada prekine? **Ilustracija** Odlike Rane crkve, kao što su prikazane u tekstu Dela 2,41-47. i Srninom životu, i danas postoje. Sledi primer: u poslednjih 18 godina grupa saosećajnih vernika okrenutih zajednici, pripadnika Hrišćanske adventističke crkve u Spenservilu, u državi Merilend, uključeni su u službu koju nazivaju „Keep in Stitches“. Sastaju se ujutru jednom sedmično da proučavaju i da se mole, neguju zajedništvo i rade zajedno kako bi ispunili potrebe zajednice koja ih okružuje. U podne „lome hleb“ ručajući zajedno. Susreli su se sa vođama u svom susedstvu i shvatili da ima mnogo potreba. Ova grupa ispunila je izrečene potrebe šijući ćebad za decu majki koje su živele u prihvatilištima; izrađujući jastučnice; sastavljajući komplete za ličnu negu za beskućnike; i odgovarajući na zahteve misionskih projekata iz inostranstva na taj način što su šili jorgane, ćebad, kape i odeću. 
Šta je vaša crkva učinila da utvrdi potrebe vaše zajednice? Šta bi bilo potrebno učiniti kada se otkriju kakve potrebe su u pitanju? **Pismo** Pročitajte sledeća tri biblijska teksta o duhovnim darovima u Ranoj crkvi: Rimljanima 12,4-6; 1. Korinćanima 12,4.5; 1. Petrova 4,10. Zapazite da ovi duhovni darovi nisu samo talenti dati ljudima da čine što žele. Ovi darovi dati su Crkvi da ispuni potrebe službe. Apostol Pavle navodi upečatljivu listu duhovnih darova koje je Bog podario svojoj Crkvi preko vernika. Podsetite se u razredu ovih listi: Rimljanima 12,6-8; 1. Korinćanima 12,7-11.27-31; Efescima 4,11-13. Sastavite listu duhovnih darova koje vaš razred smatra da ima. Zamolite ih da kažu kako koriste svoje duhovbe darove za službu unutar i izvan svoje crkve. **Ilustracija** Razmotrite sledeće: „Novozavetne crkve bile su zajednice ljudi koje služe, oruđa koja služe u društvu. Nikakve razlike u položaju ili statusu nisu delile Božji narod. Crkvene vođe prvenstveno su bile odgovorne za pripremanje vernika za produktivnu službu i svedočenje ljudima. Crkva nije smatrana muzičkim društvom koje je unajmilo izvođače, i sedi i uživa u izvođenju. Crkva je bila orkestar u kome je svakom verniku dodeljen određeni deo kompozicije.“<sup>36</sup> Razgovarajte o implikacijama prethodnog navoda. Postavite članovima razreda pitanje: Kako je svako od vas pozvan da služi drugima u Isusovo ime? Ohrabrite članove razreda da razgovaraju o ideji „svaki vernik, misionar“. **Ilustracija** U novozavetnoj crkvi, a i danas, dinamičnu hrišćansku zajednicu činili su vernici koji služe drugima i uključeni su u sveobuhvatnu, holističku službu. Holističke zajednice mogu biti različite, ali imaju određene zajedničke osobine: 1) holističko shvatanje crkvene misije; 2) duhovnost u čijem je središtu Hristos; 3) zdrava pokretačka sila zajednice i 4) vršenje holističke službe. 
Slede aktivnosti u koje je uključena holistička crkva: *Težeći holističkom razumevanju crkvene misije, holistička crkva:* - Podstiče ideju dobro uravnotežene službe koja obuhvata učeništvo, evangelizam i društveni rad. - Podržava dobročinstvo, samilost, razvoj zajednice i zastupanje pravde. Gde god ljudi pate, Crkvi se pruža prilika da svetli kao Hristovo telo. - Smatra da se služba u osnovi temelji na odnosima, teži da razvije dugoročne odnose sa onima koji primaju pomoć i nastoji da im poželi dobrodošlicu u crkvenu zajednicu. - Smatra da misija ima i lokalni i globalni karakter. *Težeći posvećenju i bogosluženju u čijem je središtu Hristos, holistička crkva:* - Usmerava život crkvene zajednice ka važnom bogosluženju, naglašavajući duboku zahvalnost za spasenje koje primamo blagodaću kroz veru u Hrista. - Oslanja se na silu Božjeg Duha u plodonosnoj službi. - Vođena je nadahnutom Božjom rečju, poučava doktrine koje su čvrsto utemeljene na principu Sola scriptura – Biblija i samo Biblija – kao apsolutno moralno načelo u određivanju šta je ispravno, a šta pogrešno. - Podstiče pobožan život u službi Bogu, molitvu i proučavanje radi rasta i učeništva. - Širi Božju samopožrtvovanu ljubav prema izgubljenima, usamljenima i slabima, i neguje posvećenost prema misionskom radu kao prirodnom izdanku obožavanja Boga. *Težeći zdravoj pokretačkoj sili zajednice, holistička crkva:* - Shvata da odnosi unutar crkve treba da budu zdravi i ispunjeni ljubavlju. Niko ne želi da dođe u crkvu u kojoj može da se oseti napetost među porodicama koje se ne slažu. - Moli se i podržava pastore i vođe, gaji empatiju zbog tereta koji je Gospod stavio na njihova pleća, i ne zaboravlja da bude strpljiva i spremna da prašta ukoliko pogreše. *Težeći vršenju holističke službe, holistička crkva:* - Poziva, obučava, osposobljava i organizuje vernike za službu, izgrađujući široku lepezu duhovnih darova. 
- Podržava službu radeći skladno sa drugima na organizovan način, ne smatrajući sebe osobenom, sveznajućom ili mučenicom koja nije odgovorna za druge i ne zavisi od braće i sestara. - Ne zaboravlja da je služba usredsređena na dve pojedinosti: na ljude unutar i izvan crkve. Jednostrano usredsređivanje može da naruši drugu stranu. Kao razred procenite svoju crkvu u odnosu na prethodno navedene kategorije. Molite se zajedno da holistički pristup službi novozavetne crkve postane stvarnost u vašoj crkvi. ## Primena u životu Služba u Novom zavetu predstavlja služenje Bogu i zajednici u Njegovo ime. Isus je dao obrazac hrišćanske službe. On nije došao da Mu služe, nego da služi (videti: Matej 20,28; Jovan 13,1-17). Pitajte razred šta očekuju da crkva učini za njih. Stavite izazov pred njih da shvate da uspeh crkvene službe zavisi više od toga šta svaki vernik doprinosi službi nego od toga šta vernici očekuju da prime. Navedite službe koje vaša crkvena zajednica vrši unutar crkve i u susedstvu. Pozovite članove razreda da prepoznaju u koje od navedenih službi i aktivnosti su uključeni, i napišite njihova imena na listi ispod određene službe. Zahvalite se onima koji su uključeni u rad i stavite izazov pred ostale da služe u ovim oblastima. Služba sigurno treba da stavi naglasak na širenje Jevanđelje Isusa Hrista drugima da bi Ga upoznali i prihvatili kao ličnog Spasitelja i Gospoda u svom životu. Treba da ih nadahne da idu dalje, da još više streme da upoznaju Hrista kao suštinu svog postojanja i životne službe. Hrišćani, pored toga, pozvani su da u Hristovo ime služe ispunjavajući potrebe ljudi sa ljubavlju i poniznošću. Naizmenično čitajte sledeće tekstove: Matej 20,26; Jovan 2,5.9; Dela 6,1-3; Rimljanima 1,1; Galatima 1,10; Kološanima 4,12. Ako biste pregledali zapisnik vašeg crkvenog odbora, u kojoj meri bi se donesene odluke odnosile na neposrednu službu u zajednici u kojoj se crkva nalazi? Kako se vaši odbori mogu u većoj meri okrenuti misonskom radu? 
_______ <sup>36</sup> Rex Edwards, A New Frontier: Every Believer a Minister (Mountain View, CA: Pacific Press Publishing Association, 1979). str. 6, 7.
78.650794
1,000
0.801009
hrv_Latn
0.84957
53b1f9bc3b28c1720d807e5b93909c3a762bfaf2
757
md
Markdown
api/client/args/Telerik.Web.UI.PivotGridMenuShowingEventArgs.md
thevivacioushussain/ajax-docs
b46cd8ec574600abf8c256c0e20100eb382a9679
[ "MIT" ]
null
null
null
api/client/args/Telerik.Web.UI.PivotGridMenuShowingEventArgs.md
thevivacioushussain/ajax-docs
b46cd8ec574600abf8c256c0e20100eb382a9679
[ "MIT" ]
null
null
null
api/client/args/Telerik.Web.UI.PivotGridMenuShowingEventArgs.md
thevivacioushussain/ajax-docs
b46cd8ec574600abf8c256c0e20100eb382a9679
[ "MIT" ]
null
null
null
---
title: Telerik.Web.UI.PivotGridMenuShowingEventArgs
page_title: Client-side API Reference
description: Telerik.Web.UI.PivotGridMenuShowingEventArgs
slug: Telerik.Web.UI.PivotGridMenuShowingEventArgs
---
# Telerik.Web.UI.PivotGridMenuShowingEventArgs : Sys.EventArgs

## Inheritance Hierarchy

* Sys.EventArgs
* *[Telerik.Web.UI.PivotGridMenuShowingEventArgs]({%slug Telerik.Web.UI.PivotGridMenuShowingEventArgs%})*

## Methods

### get_menu

Returns a reference to the RadContextMenu client-side object.

#### Parameters

#### Returns

`Telerik.Web.UI.RadContextMenu`

### get_menuArguments

Returns a reference to the RadContextMenuShowingEventArgs event arguments.

#### Parameters

#### Returns

`Telerik.Web.UI.RadContextMenuShowingEventArgs`
19.921053
105
0.78996
yue_Hant
0.228676
53b284a13f54d6aceaa06b1a2a5a15ba4ec20f01
5,086
md
Markdown
_posts/2019-4-7-Dijkstra-python-implementation.md
brunohadlich/brunohadlich.github.io
6695674b3c378c2a9668218a78686695f9ecbf89
[ "MIT" ]
null
null
null
_posts/2019-4-7-Dijkstra-python-implementation.md
brunohadlich/brunohadlich.github.io
6695674b3c378c2a9668218a78686695f9ecbf89
[ "MIT" ]
null
null
null
_posts/2019-4-7-Dijkstra-python-implementation.md
brunohadlich/brunohadlich.github.io
6695674b3c378c2a9668218a78686695f9ecbf89
[ "MIT" ]
null
null
null
---
layout: post
title: Python implementation of Dijkstra algorithm
---

When I was trying to implement the Dijkstra algorithm in Python, I started from pseudocode found on websites like this one from Wikipedia:

``` text
 1  function Dijkstra(Graph, source):
 2      dist[source] ← 0                    // Initialization
 3
 4      create vertex set Q
 5
 6      for each vertex v in Graph:
 7          if v ≠ source
 8              dist[v] ← INFINITY          // Unknown distance from source to v
 9              prev[v] ← UNDEFINED         // Predecessor of v
10
11          Q.add_with_priority(v, dist[v])
12
13
14      while Q is not empty:               // The main loop
15          u ← Q.extract_min()             // Remove and return best vertex
16          for each neighbor v of u:       // only v that are still in Q
17              alt ← dist[u] + length(u, v)
18              if alt < dist[v]
19                  dist[v] ← alt
20                  prev[v] ← u
21                  Q.decrease_priority(v, alt)
22
23      return dist, prev
```

I ran into a problem with priority queues: the pseudocode calls a function named decrease_priority, but Python's default library does not provide a priority queue implementation with such a feature. To work around this I created a structure that keeps track of the vertexes that have already been traveled, "vertex_in_priority_queue" in the example below. In each iteration I check whether the element popped from the queue "priority_queue_sorted_by_distance" has already been traveled according to "vertex_in_priority_queue"; if it has, I simply ignore it and go to the next iteration; if not, I mark it as traveled and perform the steps described in lines 16 to 21 of the pseudocode. The code below was executed with Python 3.
``` python
import math, heapq


def dijkstra(vertexes, edges, source_vertex, destination_vertex, is_direct_graph = False):
    distance_from_source_to_vertexes = {}
    parent_vertexes = {}
    priority_queue_sorted_by_distance = []
    distance_from_source_to_vertexes[source_vertex] = 0
    vertex_in_priority_queue = {}
    edges_weights = {}
    for vertex in vertexes:
        if vertex != source_vertex:
            distance_from_source_to_vertexes[vertex] = math.inf
        parent_vertexes[vertex] = ''
        heapq.heappush(priority_queue_sorted_by_distance, (distance_from_source_to_vertexes[vertex], vertex))
        vertex_in_priority_queue[vertex] = True
        edges_weights[vertex] = []
    for edge in edges:
        edges_weights[edge[0]].append((edge[1], edge[2]))
        if not is_direct_graph:
            edges_weights[edge[1]].append((edge[0], edge[2]))
    while priority_queue_sorted_by_distance:
        vertex_lowest_distance = heapq.heappop(priority_queue_sorted_by_distance)[1]
        if vertex_in_priority_queue[vertex_lowest_distance]:
            vertex_in_priority_queue[vertex_lowest_distance] = False
            for neighbor in edges_weights[vertex_lowest_distance]:
                total_distance = distance_from_source_to_vertexes[vertex_lowest_distance] + neighbor[1]
                if total_distance < distance_from_source_to_vertexes[neighbor[0]]:
                    distance_from_source_to_vertexes[neighbor[0]] = total_distance
                    parent_vertexes[neighbor[0]] = vertex_lowest_distance
                    heapq.heappush(priority_queue_sorted_by_distance, (distance_from_source_to_vertexes[neighbor[0]], neighbor[0]))
    return distance_from_source_to_vertexes, parent_vertexes


if __name__ == '__main__':
    vertexes = ['S', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M']
    edges = [('S', 'A', 7), ('S', 'B', 2), ('A', 'B', 3), ('A', 'D', 4), ('D', 'B', 4), ('D', 'F', 5),
             ('B', 'H', 1), ('H', 'F', 3), ('H', 'G', 2), ('G', 'E', 2), ('E', 'K', 5), ('K', 'I', 4),
             ('K', 'J', 4), ('I', 'J', 6), ('I', 'L', 4), ('J', 'L', 4), ('L', 'C', 2), ('C', 'S', 3),
             ('G', 'M', 1), ('F', 'M', 1), ('K', 'M', 1)]
    source_vertex = 'M'
    destination_vertex = 'S'
    (distance_from_source_to_vertexes, parent_vertexes) = dijkstra(vertexes, edges, source_vertex, destination_vertex)
    print('Source vertex: ' + source_vertex)
    print('Destination vertex: ' + destination_vertex)
    print('Lowest distance from ' + source_vertex + ' to ' + destination_vertex + ': ' + str(distance_from_source_to_vertexes[destination_vertex]))
    print('Path from ' + source_vertex + ' to ' + destination_vertex + ': ', end = '')
    current_vertex = destination_vertex
    path = []
    while current_vertex != '':
        path.append(current_vertex)
        current_vertex = parent_vertexes[current_vertex]
    while len(path) > 1:
        print(path.pop(), end = ' -> ')
    print(path.pop())
    for vertex in distance_from_source_to_vertexes:
        print('Lowest distance from ' + source_vertex + ' to ' + vertex + ': ' + str(distance_from_source_to_vertexes[vertex]))
    for vertex in parent_vertexes:
        print('Parent of vertex ' + vertex + ': ' + parent_vertexes[vertex])
```
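Not part of the original post: a minimal sketch of the same lazy-deletion idea in isolation, with illustrative names and a made-up graph. Instead of a decrease_priority operation, stale heap entries are simply skipped when popped.

```python
import heapq
import math

def shortest_distances(graph, source):
    """Dijkstra with heapq and lazy deletion.

    graph maps each vertex to a list of (neighbor, weight) pairs.
    When a vertex's distance improves, a duplicate entry is pushed;
    outdated entries are detected and skipped on pop.
    """
    dist = {vertex: math.inf for vertex in graph}
    dist[source] = 0
    settled = set()              # vertexes whose distance is final
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:         # stale duplicate: ignore it
            continue
        settled.add(u)
        for v, w in graph[u]:
            alt = d + w
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(heap, (alt, v))  # push instead of decrease_priority
    return dist

# Illustrative undirected graph (each edge listed in both directions).
graph = {
    'S': [('A', 7), ('B', 2), ('C', 3)],
    'A': [('S', 7), ('B', 3), ('D', 4)],
    'B': [('S', 2), ('A', 3), ('D', 4), ('H', 1)],
    'C': [('S', 3)],
    'D': [('A', 4), ('B', 4)],
    'H': [('B', 1)],
}
print(shortest_distances(graph, 'S'))
```

The duplicate-push approach keeps the standard heapq module sufficient: each pop costs O(log n) and a stale entry is discarded in O(1), which is the same trade-off the post's "vertex_in_priority_queue" dictionary makes.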
49.862745
327
0.640779
eng_Latn
0.714415
53b2d65facd9ae6bd0c0b305071567511ec72d12
390
md
Markdown
2014-2015/Webapplicaties.md
Suresh-Subedi/Notes
420263a9837cd3586497472655c1815311c1e312
[ "Apache-2.0" ]
null
null
null
2014-2015/Webapplicaties.md
Suresh-Subedi/Notes
420263a9837cd3586497472655c1815311c1e312
[ "Apache-2.0" ]
null
null
null
2014-2015/Webapplicaties.md
Suresh-Subedi/Notes
420263a9837cd3586497472655c1815311c1e312
[ "Apache-2.0" ]
null
null
null
###### CSS:
**External CSS:**<br/>
``<head>``<br/>
&nbsp;&nbsp;&nbsp;``<link rel="stylesheet" type="text/css" href="mystyle.css">``<br/>
``</head>``<br/>
**Internal CSS:**<br/>
``<style>``<br/>
``body {``<br/>
&nbsp;&nbsp;&nbsp;``background-color: linen;``<br/>
``}``<br/>
``</style>``<br/>
**Inline CSS:**<br/>
`` <h1 style="color:blue;margin-left:30px;">This is a heading.</h1>``
22.941176
85
0.538462
yue_Hant
0.139849
53b31dd7b1e70c9c969854d0217eaa82e3d5eefa
4,727
md
Markdown
docs/vs-2015/code-quality/intrinsic-functions.md
seferciogluecce/visualstudio-docs.tr-tr
222704fc7d0e32183a44e7e0c94f11ea4cf54a33
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/code-quality/intrinsic-functions.md
seferciogluecce/visualstudio-docs.tr-tr
222704fc7d0e32183a44e7e0c94f11ea4cf54a33
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/code-quality/intrinsic-functions.md
seferciogluecce/visualstudio-docs.tr-tr
222704fc7d0e32183a44e7e0c94f11ea4cf54a33
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: İç işlevler | Microsoft Docs ms.custom: '' ms.date: 11/15/2016 ms.prod: visual-studio-dev14 ms.reviewer: '' ms.suite: '' ms.technology: - vs-devops-test ms.tgt_pltfrm: '' ms.topic: article f1_keywords: - _String_length_ - _Param_ - _Curr_ - _Old_ - _Nullterm_length_ - _Inexpressible_ ms.assetid: adf29f8c-89fd-4a5e-9804-35ac83e1c457 caps.latest.revision: 9 author: corob-msft ms.author: gewarren manager: ghogen ms.openlocfilehash: 1b50742c0176b8c880d3ed0b58b7b8ef76355777 ms.sourcegitcommit: 9ceaf69568d61023868ced59108ae4dd46f720ab ms.translationtype: MT ms.contentlocale: tr-TR ms.lasthandoff: 10/12/2018 ms.locfileid: "49174492" --- # <a name="intrinsic-functions"></a>İç İşlevler [!INCLUDE[vs2017banner](../includes/vs2017banner.md)] SAL bir ifadede, yan etkilere sahip olmayan bir ifade olması şartıyla C/C++ ifade olabilir — örneğin, ++,--ve işlev çağrıları bu bağlamda tümüne sahip yan etkiler. SAL ancak, bazı işlev benzeri nesnelerin ve SAL ifadelerde kullanılabilir ayrılmış bazı semboller sağlar. Bunlar olarak adlandırılır *iç işlevleri*. ## <a name="general-purpose"></a>Genel amaçlı Aşağıdaki instrinsic işlevi ek açıklamalar için SAL genel yardımcı programını sağlar. |Ek Açıklama|Açıklama| |----------------|-----------------| |`_Curr_`|Şu anda Not eklenen nesne için bir eşanlamlı. Zaman `_At_` ek açıklama, kullanımda olan `_Curr_` ilk parametre olarak aynı `_At_`. Aksi takdirde, parametre veya ek açıklama sözcüksel olarak ilişkili olduğu tüm işlevi dönüş değeridir.| |`_Inexpressible_(expr)`|Burada bir arabellek boyutu ek açıklama ifadesi kullanılarak temsil etmek için çok karmaşık bir durum ifade eder — Örneğin, ne zaman, bir giriş veri kümesi tarama ve sonra sayım hesaplanan üyeler seçili.| |`_Nullterm_length_(param)`|`param` en fazla arabellek ancak bir null Sonlandırıcı içermeden öğeleri sayısıdır. 
Toplama olmayan, void olmayan tür herhangi bir arabellek için uygulanabilir.| |`_Old_(expr)`|Önkoşul değerlendirildiğinde `_Old_` giriş değeri döndürür `expr`. Sonrası koşulu değerlendirilirken, değeri döndürür. `expr` önkoşulu değerlendirilen gibi.| |`_Param_(n)`|`n`Parametresinden 1'den sayım, bir işleve `n`, ve `n` değişmez değer bir tamsayı sabitidir. Parametre ise, bu ek açıklama parametre adıyla erişmek için aynıdır. **Not:** `n` üç nokta tanımlanan ya da adları değil kullanıldığı işlev prototiplerinde kullanılabilir konumsal parametreleri başvurabilir.| |`return`|C/C++ ayrılmış anahtar sözcüğü `return` SAL ifadesinde bir işlev dönüş değeri belirtmek için kullanılabilir. Değer yalnızca post kullanılabilir durumdadır; öncesi durumunda kullanmak için bir söz dizimi hatası var.| ## <a name="string-specific"></a>Belirli dize Şu iç işlev ek açıklamaları, dizeleri işlenmesini etkinleştirin. Bu işlevlerin dört aynı amaca hizmet eder: null Sonlandırıcı önce bulunan tür öğelerinin sayısını döndürmek için. Fark başvurulan öğeleri veri türleri vardır. Null ile sonlandırılmış uzunluğunu belirtmek istiyorsanız, arabellek Not değil karakterlerinden oluşan, kullanın `_Nullterm_length_(param)` önceki bölümde ek açıklama. |Ek Açıklama|Açıklama| |----------------|-----------------| |`_String_length_(param)`|`param` dize kadar ancak bir null Sonlandırıcı içermeden öğeleri sayısıdır. Bu ek açıklama karakter dize türleri için ayrılmıştır.| |`strlen(param)`|`param` dize kadar ancak bir null Sonlandırıcı içermeden öğeleri sayısıdır. Bu ek açıklama ayrılmış karakter kullanın, diziler ve C çalışma zamanı işlevi benzer [strlen()](http://msdn.microsoft.com/library/16462f2a-1e0f-4eb3-be55-bf1c83f374c2).| |`wcslen(param)`|`param` en fazla (ancak dahil değil) dizedeki öğe sayısı null Sonlandırıcı. 
Bu ek açıklama geniş karakter kullanın, diziler ve C çalışma zamanı işlevi benzer ayrılmış [wcslen ()](http://msdn.microsoft.com/library/16462f2a-1e0f-4eb3-be55-bf1c83f374c2).| ## <a name="see-also"></a>Ayrıca Bkz. [C/C++ kod hatalarını azaltmak için SAL ek açıklamalarını kullanma](../code-quality/using-sal-annotations-to-reduce-c-cpp-code-defects.md) [SAL'yi anlama](../code-quality/understanding-sal.md) [İşlev parametrelerini ve dönüş değerlerini açıklama](../code-quality/annotating-function-parameters-and-return-values.md) [İşlev davranışını yorumlama](../code-quality/annotating-function-behavior.md) [Yapıları ve sınıfları yorumlama](../code-quality/annotating-structs-and-classes.md) [Kilitlenme davranışını yorumlama](../code-quality/annotating-locking-behavior.md) [Açıklamanın ne zaman ve nereye uygulanacağını belirtme](../code-quality/specifying-when-and-where-an-annotation-applies.md) [En İyi Yöntemler ve Örnekler](../code-quality/best-practices-and-examples-sal.md)
68.507246
394
0.771737
tur_Latn
0.999484
53b3385819fb115228481534d0080be8a18d7b96
527
md
Markdown
_pages/about.md
xzhou29/xzhou29.github.io
851d101f84316c51aac8bd218e1e9aba5926bcc5
[ "MIT" ]
null
null
null
_pages/about.md
xzhou29/xzhou29.github.io
851d101f84316c51aac8bd218e1e9aba5926bcc5
[ "MIT" ]
null
null
null
_pages/about.md
xzhou29/xzhou29.github.io
851d101f84316c51aac8bd218e1e9aba5926bcc5
[ "MIT" ]
null
null
null
---
permalink: /
title: "About Me"
excerpt: "About me"
author_profile: true
redirect_from:
  - /about/
  - /about.html
---
I'm currently a PhD student in the ReDAS (Reasoning and Data Analytics for Security) lab @ University of Houston. My Ph.D. advisor is [Rakesh M. Verma](http://www2.cs.uh.edu/~rmverma/). I'm interested in Software Security, Cybersecurity, Phishing Detection, and Machine Learning. I received my bachelor's and master's degrees in Computer Science from the University of Houston in 2017 and 2019, respectively.
40.538462
400
0.747628
eng_Latn
0.881299
53b35ee9d931bb594a81f3898a7dd4c761e0c848
3,051
md
Markdown
docs/cpp/variant-t-operator-equal.md
heathhenley/cpp-docs
2e94807ab369e967c7892dd7971f9765b9878641
[ "CC-BY-4.0", "MIT" ]
3
2021-02-19T06:12:36.000Z
2021-03-27T20:46:59.000Z
docs/cpp/variant-t-operator-equal.md
heathhenley/cpp-docs
2e94807ab369e967c7892dd7971f9765b9878641
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/cpp/variant-t-operator-equal.md
heathhenley/cpp-docs
2e94807ab369e967c7892dd7971f9765b9878641
[ "CC-BY-4.0", "MIT" ]
1
2020-12-24T04:34:32.000Z
2020-12-24T04:34:32.000Z
---
title: "_variant_t::operator ="
ms.date: "11/04/2016"
f1_keywords: ["_variant_t::operator="]
helpviewer_keywords: ["operator= [C++], variant", "operator = [C++], variant", "= operator [C++], with specific Visual C++ objects"]
ms.assetid: 77622723-6e49-4dec-9e0f-fa74028f1a3c
---
# _variant_t::operator =

**Microsoft Specific**

## Syntax

```
_variant_t& operator=( const VARIANT& varSrc );
_variant_t& operator=( const VARIANT* pVarSrc );
_variant_t& operator=( const _variant_t& var_t_Src );
_variant_t& operator=( short sSrc );
_variant_t& operator=( long lSrc );
_variant_t& operator=( float fltSrc );
_variant_t& operator=( double dblSrc );
_variant_t& operator=( const CY& cySrc );
_variant_t& operator=( const _bstr_t& bstrSrc );
_variant_t& operator=( const wchar_t* wstrSrc );
_variant_t& operator=( const char* strSrc );
_variant_t& operator=( IDispatch* pDispSrc );
_variant_t& operator=( bool bSrc );
_variant_t& operator=( IUnknown* pSrc );
_variant_t& operator=( const DECIMAL& decSrc );
_variant_t& operator=( BYTE bSrc );
_variant_t& operator=( char cSrc );
_variant_t& operator=( unsigned short usSrc );
_variant_t& operator=( unsigned long ulSrc );
_variant_t& operator=( int iSrc );
_variant_t& operator=( unsigned int uiSrc );
_variant_t& operator=( __int64 i8Src );
_variant_t& operator=( unsigned __int64 ui8Src );
```

## Remarks

The operator assigns a new value to the `_variant_t` object:

- **operator=(** *varSrc* **)** Assigns an existing `VARIANT` to a `_variant_t` object.
- **operator=(** *pVarSrc* **)** Assigns an existing `VARIANT` to a `_variant_t` object.
- **operator=(** *var_t_Src* **)** Assigns an existing `_variant_t` object to a `_variant_t` object.
- **operator=(** *sSrc* **)** Assigns a **short** integer value to a `_variant_t` object.
- **operator=(** *lSrc* **)** Assigns a **long** integer value to a `_variant_t` object.
- **operator=(** *fltSrc* **)** Assigns a **float** numerical value to a `_variant_t` object.
- **operator=(** *dblSrc* **)** Assigns a **double** numerical value to a `_variant_t` object.
- **operator=(** *cySrc* **)** Assigns a `CY` object to a `_variant_t` object.
- **operator=(** *bstrSrc* **)** Assigns a `BSTR` object to a `_variant_t` object.
- **operator=(** *wstrSrc* **)** Assigns a Unicode string to a `_variant_t` object.
- **operator=(** *strSrc* **)** Assigns a multibyte string to a `_variant_t` object.
- **operator=(** *bSrc* **)** Assigns a **bool** value to a `_variant_t` object.
- **operator=(** *pDispSrc* **)** Assigns a `VT_DISPATCH` object to a `_variant_t` object.
- **operator=(** *pSrc* **)** Assigns a `VT_UNKNOWN` object to a `_variant_t` object.
- **operator=(** *decSrc* **)** Assigns a `DECIMAL` value to a `_variant_t` object.
- **operator=(** *bSrc* **)** Assigns a `BYTE` value to a `_variant_t` object.

**END Microsoft Specific**

## See also

[_variant_t Class](../cpp/variant-t-class.md)
20.47651
132
0.652245
eng_Latn
0.386329
53b3c9ea4bbb84a219fb50cdce07efdd134717b7
2,637
md
Markdown
notes/Build organizations around a long-term narrative.md
djdrysdale/11ty-garden-blog
9bd43230dfd466ac173a11986c5c7f5f7cd07f5a
[ "MIT" ]
null
null
null
notes/Build organizations around a long-term narrative.md
djdrysdale/11ty-garden-blog
9bd43230dfd466ac173a11986c5c7f5f7cd07f5a
[ "MIT" ]
null
null
null
notes/Build organizations around a long-term narrative.md
djdrysdale/11ty-garden-blog
9bd43230dfd466ac173a11986c5c7f5f7cd07f5a
[ "MIT" ]
null
null
null
Tags: #perm In order for an organization to be sustainable over the long term, it needs to have a long-term objective that it is working toward—a mission or vision that provides its reason for existence and animates all of its activities. Simon Sinek refers to this as the organization's "just cause": it helps orient the organization to longer horizons than the quarterly or annual report and provides its people with a sense of purpose. Moreover, a "just cause" helps serve as a first filter against unethical or counter-productive ideas. Narratives [[Narratives enable us to act decisively in conditions of uncertainty|enable us to act decisively amid uncertainty]]; Sinek's concept applies this at an organization level and emphasizes how the "just cause," a kind of narrative, guides decision making and strategy. It may help [[Express strategy as simple rules|express strategy as simple rules]], as a kind of shorthand that guides action. A just cause, Sinek writes, is never money; money is simply the means to achieving the end of the cause. To me, the "just cause" is almost an ideology; Sinek cites the example of Apple's positioning of itself against a "worthy foe" in IBM. Whereas IBM was conservative and business-like, Apple defined itself as creative and individualistic. When organizations adopt Sinek's "infinite perspective," they need not fear disruption and change. Change is not something to be resisted, but rather something that can be leveraged to help advance the larger cause. Some business challenges, like [[Digital transformation requires a long-term view.|digital transformation]], require a longer-term view to be viable. The long-term vision is counter to a present tendency for [[Organizations have become primarily focused on short-term results|organizations to focus on the short term]]. This bias toward the short term is related to the concept of [[¶ Presentism|presentism]], a resistance to be able to think past the tyranny of the present. 
One possible explanation for the bias toward the short-term may be an increased emphasis on [[Fixating on metric data biases us to the short term|quantification and metrics]], itself a [[Metric fixation is a symptom of a decline in social trust|symptom of a decline in social trust]]. Similar concepts include Jay Acunzo's notion of "aspirational anchors" and the "shared value" model described by Jim Kalbach, Michael Tanamachi, and Michael Schrage in the context of [[¶ Jobs to be done (JTBD)|jobs to be done]]. --- ## Related - Link ## Citations [[≈ Sinek - The Infinite Game|Sinek, Simon. *The Infinite Game*. New York: Portfolio, 2019.]]
114.652174
933
0.786879
eng_Latn
0.999295
53b3cc77505be3a9eb5b47a2b247fa1f3c428b10
527
md
Markdown
README.md
kmoe/trashfire
196707089b658c57e21657b244f93fe22b43d770
[ "MIT" ]
1
2016-05-17T20:42:34.000Z
2016-05-17T20:42:34.000Z
README.md
kmoe/trashfire
196707089b658c57e21657b244f93fe22b43d770
[ "MIT" ]
null
null
null
README.md
kmoe/trashfire
196707089b658c57e21657b244f93fe22b43d770
[ "MIT" ]
null
null
null
# :fire: trashfire :fire:

trashfire is the future of logging.

![trashfire screenshot](https://raw.githubusercontent.com/kmoe/trashfire/master/trashfire.png)

## Real-time

You're only one HTTP request and one `.innerHTML =` away from beautiful rich logging output.

## Responsive

With its minimal, lean design, :fire: trashfire allows you to browse logs from Safari on your iPhone as easily as Safari on your MacBook.

## NoSQL

No SQL. No JSON. In fact, no database whatsoever. :fire: trashfire is proudly non-persistent.
27.736842
137
0.759013
eng_Latn
0.979609
53b3ebab7c670cc209f1523f06d03de1cec97baf
206
markdown
Markdown
_posts/2015-08-18-post-de-teste.markdown
celsomrtns/celsomrtns.github.io
03d105c82f2be51a0b30a04e240cf281c1b30f74
[ "Apache-2.0" ]
null
null
null
_posts/2015-08-18-post-de-teste.markdown
celsomrtns/celsomrtns.github.io
03d105c82f2be51a0b30a04e240cf281c1b30f74
[ "Apache-2.0" ]
null
null
null
_posts/2015-08-18-post-de-teste.markdown
celsomrtns/celsomrtns.github.io
03d105c82f2be51a0b30a04e240cf281c1b30f74
[ "Apache-2.0" ]
null
null
null
---
layout: post
title: "Post de Teste"
subtitle: "Testando uma nova página"
date: 2015-08-18 11:35:00
author: "@celsomrtns"
header-img: "img/pattern-rsjs.jpg"
---
<h1>Just a test.</h1>
22.888889
38
0.621359
por_Latn
0.279137
53b419dedd928495690cd85f5696324f7bf0fc51
2,061
md
Markdown
docs/man/kube-proxy.1.md
wbingli/kubernetes
313a365712480e47c41820e9e5cb13d94c917bd7
[ "Apache-2.0" ]
122
2015-01-14T01:49:31.000Z
2022-01-18T07:40:05.000Z
docs/man/kube-proxy.1.md
wbingli/kubernetes
313a365712480e47c41820e9e5cb13d94c917bd7
[ "Apache-2.0" ]
5
2015-01-05T17:50:05.000Z
2019-01-29T14:31:36.000Z
docs/man/kube-proxy.1.md
wbingli/kubernetes
313a365712480e47c41820e9e5cb13d94c917bd7
[ "Apache-2.0" ]
24
2015-01-30T01:38:31.000Z
2020-12-30T08:52:16.000Z
% KUBERNETES(1) kubernetes User Manuals
% Scott Collier
% October 2014

# NAME
kube-proxy \- Provides network proxy services.

# SYNOPSIS
**kube-proxy** [OPTIONS]

# DESCRIPTION

The **kubernetes** network proxy runs on each node. It reflects services as defined in the Kubernetes API on each node and can do simple TCP stream forwarding or round-robin TCP forwarding across a set of backends. Service endpoints are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy. Currently the user must select a port on the proxy on which to expose the service, as well as the container's port to target.

The kube-proxy takes several options.

# OPTIONS

**--alsologtostderr**=false
	log to standard error as well as files

**--api_version**=""
	The API version to use when talking to the server

**--bindaddress**="0.0.0.0"
	The address for the proxy server to serve on (set to 0.0.0.0 or "" for all interfaces)

**--etcd_servers**=[]
	List of etcd servers to watch (http://ip:port), comma separated (optional)

**--insecure_skip_tls_verify**=false
	If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.

**--log_backtrace_at**=:0
	when logging hits line file:N, emit a stack trace

**--log_dir**=""
	If non-empty, write log files in this directory

**--log_flush_frequency**=5s
	Maximum number of seconds between log flushes

**--logtostderr**=false
	log to standard error instead of files

**--master**=""
	The address of the Kubernetes API server

**--stderrthreshold**=0
	logs at or above this threshold go to stderr

**--v**=0
	log level for V logs

**--version**=false
	Print version information and quit

**--vmodule**=
	comma-separated list of pattern=N settings for file-filtered logging

# EXAMPLES

```
/usr/bin/kube-proxy --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4001
```

# HISTORY
October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based on the kubernetes source material and internal work.
30.761194
474
0.74721
eng_Latn
0.977021
53b45ae5a02dfd704a495f309e51d324c7f1c61b
392
md
Markdown
_posts/2015-12-21-the-first-.md
pipiscrew/pipiscrew.github.io
9d81bd323c800a1bff2b6d26c3ec3eb96fb41004
[ "MIT" ]
null
null
null
_posts/2015-12-21-the-first-.md
pipiscrew/pipiscrew.github.io
9d81bd323c800a1bff2b6d26c3ec3eb96fb41004
[ "MIT" ]
null
null
null
_posts/2015-12-21-the-first-.md
pipiscrew/pipiscrew.github.io
9d81bd323c800a1bff2b6d26c3ec3eb96fb41004
[ "MIT" ]
null
null
null
---
title: The first game program sold for home computers
author: PipisCrew
date: 2015-12-21
categories: [news]
toc: true
---

After six months of development, the first copy was shipped on **December 18, 1976**.

[http://www.benlo.com/microchess/index.html](http://www.benlo.com/microchess/index.html)

origin - http://www.pipiscrew.com/?p=2891

the-first-game-program-sold-for-home-computers
30.153846
88
0.75
eng_Latn
0.712322
53b47e098e9d0cca7138af12dd53953ac5f39c3a
747
md
Markdown
node-api-cn/n-api/napi_remove_wrap.md
develop-basis/nodejs
c9911b8595ba28fd567ae8e4dcfd627bb655f67a
[ "MIT" ]
null
null
null
node-api-cn/n-api/napi_remove_wrap.md
develop-basis/nodejs
c9911b8595ba28fd567ae8e4dcfd627bb655f67a
[ "MIT" ]
null
null
null
node-api-cn/n-api/napi_remove_wrap.md
develop-basis/nodejs
c9911b8595ba28fd567ae8e4dcfd627bb655f67a
[ "MIT" ]
null
null
null
<!-- YAML
added: v8.5.0
-->

```C
napi_status napi_remove_wrap(napi_env env,
                             napi_value js_object,
                             void** result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] js_object`: The object associated with the native instance.
- `[out] result`: Pointer to the wrapped native instance.

Returns `napi_ok` if the API succeeded.

Retrieves a native instance that was previously wrapped in the JavaScript object `js_object` using `napi_wrap()` and removes the wrapping, thereby restoring the JavaScript object's prototype chain. If a finalize callback was associated with the wrapping, it will no longer be called when the JavaScript object becomes garbage-collected.
33.954545
77
0.70415
eng_Latn
0.995045
53b5259a84baead7ca285e747b799a6130442a6c
175
md
Markdown
scripts/setup/README.md
TheBoringDude/nextjs-fauna-auth0
c60d87f2d4a356066a357787d695502186d35305
[ "Unlicense" ]
1
2021-06-08T14:15:32.000Z
2021-06-08T14:15:32.000Z
scripts/setup/README.md
TheBoringDude/phurma
87a7cab3ff7dbcb95aa098fc3a8008f64dad8c69
[ "MIT" ]
12
2021-04-28T10:13:24.000Z
2022-03-29T22:15:00.000Z
scripts/setup/README.md
TheBoringDude/phurma
87a7cab3ff7dbcb95aa098fc3a8008f64dad8c69
[ "MIT" ]
1
2021-07-20T10:31:21.000Z
2021-07-20T10:31:21.000Z
### Note:

This setup configuration is based on the example provided at: https://github.com/fauna-brecht/faunadb-auth-skeleton-with-auth0/tree/default/fauna-queries/setup
43.75
163
0.8
eng_Latn
0.961844
53b52a2ac3e5f95e12dda784700c3a5bf7e1a749
1,136
md
Markdown
api/wrappers/jsp/stockchart/valueaxisitem-plotband.md
codylindley/kendo-docs
c281fce938fe16bd29b7d46ca52c8fda07f7ca54
[ "Unlicense", "MIT" ]
1
2016-07-13T03:52:30.000Z
2016-07-13T03:52:30.000Z
api/wrappers/jsp/stockchart/valueaxisitem-plotband.md
codylindley/kendo-docs
c281fce938fe16bd29b7d46ca52c8fda07f7ca54
[ "Unlicense", "MIT" ]
null
null
null
api/wrappers/jsp/stockchart/valueaxisitem-plotband.md
codylindley/kendo-docs
c281fce938fe16bd29b7d46ca52c8fda07f7ca54
[ "Unlicense", "MIT" ]
null
null
null
---
title: stockChart-valueAxisItem-plotBand
---

# \<kendo:stockChart-valueAxisItem-plotBand\>

The plot bands of the value axis.

#### Example

    <kendo:stockChart-valueAxisItem-plotBands>
        <kendo:stockChart-valueAxisItem-plotBand></kendo:stockChart-valueAxisItem-plotBand>
    </kendo:stockChart-valueAxisItem-plotBands>

## Configuration Attributes

### color `java.lang.String`

The color of the plot band.

#### Example

    <kendo:stockChart-valueAxisItem-plotBand color="color">
    </kendo:stockChart-valueAxisItem-plotBand>

### from `float`

The start position of the plot band in axis units.

#### Example

    <kendo:stockChart-valueAxisItem-plotBand from="from">
    </kendo:stockChart-valueAxisItem-plotBand>

### opacity `float`

The opacity of the plot band.

#### Example

    <kendo:stockChart-valueAxisItem-plotBand opacity="opacity">
    </kendo:stockChart-valueAxisItem-plotBand>

### to `float`

The end position of the plot band in axis units.

#### Example

    <kendo:stockChart-valueAxisItem-plotBand to="to">
    </kendo:stockChart-valueAxisItem-plotBand>
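A plot band typically combines several of these attributes at once. The following is a hypothetical combined example — the values (red, 10, 20, 0.5) are illustrative, not defaults:

    <kendo:stockChart-valueAxisItem-plotBand color="red" from="10" to="20" opacity="0.5">
    </kendo:stockChart-valueAxisItem-plotBand>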
23.666667
92
0.699824
eng_Latn
0.324917
53b553a1b04c9b54d31e1336ef7d45d66c860c46
5,498
md
Markdown
README.md
EnriqueNueve/TF_Toolbox
fbbb7e479242337dcbcd9d173c073dbb6a1bf0f3
[ "MIT" ]
null
null
null
README.md
EnriqueNueve/TF_Toolbox
fbbb7e479242337dcbcd9d173c073dbb6a1bf0f3
[ "MIT" ]
null
null
null
README.md
EnriqueNueve/TF_Toolbox
fbbb7e479242337dcbcd9d173c073dbb6a1bf0f3
[ "MIT" ]
null
null
null
# TF_Toolbox

An organized repo for all things TensorFlow: tutorials, data, and models.

## Lectures
---

1. Transformers: https://www.youtube.com/watch?v=XSSTuhyAmnI
2. Some intro to advanced math topics: https://www.youtube.com/c/FacultyofKhan/playlists
3. Linear Algebra for ML: https://www.youtube.com/watch?v=Cx5Z-OslNWE&list=PLUl4u3cNGP63oMNUHXqIUcrkS2PivhN3k

## Code Resources
---

1. Tensorflow Prob: https://github.com/mohd-faizy/Probabilistic-Deep-Learning-with-TensorFlow
2. Einops: https://github.com/arogozhnikov/einops
3. Debugging book: https://www.debuggingbook.org/
4. CNN visualization callbacks: https://github.com/sicara/tf-explain
5. Seq2Seq learning in tf2: https://github.com/OpenNMT/OpenNMT-tf
6. Quantization library for keras in TensorFlow: https://github.com/google/qkeras
7. Tensorflow addons: https://github.com/tensorflow/addons
8. Torch2RT: https://github.com/NVIDIA-AI-IOT/torch2trt
9. Zero to Mastery TensorFlow for Deep Learning: https://dev.mrdbourke.com/tensorflow-deep-learning/
10. Keras-flops: https://pypi.org/project/keras-flops/
11. Numerical methods in Python: https://pythonnumericalmethods.berkeley.edu/notebooks/Index.html
12. TF ML Production course: https://blog.tensorflow.org/2021/06/mlep-courses.html
13. Domain generalization: https://github.com/facebookresearch/DomainBed
14. Tensorflow garden: https://github.com/tensorflow/models
15. GLOM: https://github.com/Rishit-dagli/GLOM-TensorFlow
16. DL Forecasting tools: https://github.com/AIStream-Peelout/flow-forecast
17. TensorBoard: https://www.tensorflow.org/tensorboard
18. Tensorflow Best Practice Twitter: https://twitter.com/tfbestpractices?lang=en
19. Speech recognition: https://github.com/cosmoquester/speech-recognition
20. Generative models: https://github.com/an-seunghwan/generative
21. GCN example: https://github.com/kerighan/simple-gcn
22. Efficient-CapsNet: https://github.com/EscVM/Efficient-CapsNet
23. Retinanet: https://github.com/srihari-humbarwadi/retinanet-tensorflow2.x
24. YoloV4: https://github.com/taipingeric/yolo-v4-tf.keras
25. Gpu Monitor: https://github.com/sicara/gpumonitor
26. Normalizing Flow Reference: https://github.com/bgroenks96/normalizing-flows
27. GraphGallery: https://github.com/EdisonLeeeee/graphgallery
28. SingleShot: https://github.com/srihari-humbarwadi/ssd_tensorflow
29. YOLACT: https://github.com/leohsuofnthu/Tensorflow-YOLACT
30. Yolo original: https://github.com/LoveHRTF/YouOnlyLookOnce-TF2.0
31. Fairness indicators: https://github.com/tensorflow/fairness-indicators
32. Tensor Hub: https://www.tensorflow.org/hub
33. tf.py_function: https://www.tensorflow.org/api_docs/python/tf/py_function
34. Distill: https://distill.pub/
35. AiSummer: https://theaisummer.com/
36. Unit test in TF: https://theaisummer.com/unit-test-deep-learning/
37. A bunch of NLP Layers: https://github.com/tensorflow/models/tree/master/official/nlp/modeling/layers
38. Outlier toolbox: https://github.com/SeldonIO/alibi-detect

## Build principles

* Make a simple version first and iterate
* Draw out the path of outcomes and consequences

## Steps to implement a model
---

1. Implement custom functions.
2. Implement custom layers to store custom functions, if possible.
3. Implement a custom model that joins the custom functions and layers.
4. Test a pass of random data and check dimensionality.
5. Train the model on one piece of data and make sure it learns.
6. Test on all training data.

![GitHub Logo](/images/steps_implement_diagram.png)

---

## Tutorials
---

0. tfRecord demo with tf==2.4.1
1. einsum demo

---

## Models

#### Please read "model_standards.ipynb" to understand implementation guidelines!
---

0. model_a with tf==0.0.0
   * Notes: (yes/no)
   * Paper Link: [arxiv demo link](https://arxiv.org/)
   * Additional Resources: (yes/no)
   * State: (stable, not stable)
---
1. VAE with tf==2.4.1
   * Notes: yes
   * Paper Link: [https://arxiv.org/pdf/1312.6114.pdf](https://arxiv.org/pdf/1312.6114.pdf)
   * Additional Resources: yes
   * State: stable
---
2. NICE with tf==2.4.1 and tfp==0.12.1
   * Notes: yes
   * Paper Link: [https://arxiv.org/abs/1410.8516](https://arxiv.org/abs/1410.8516)
   * Additional Resources: yes
   * State: stable
---
3. MADE with tf==2.4.1
   * Notes: No
   * Paper Link: [https://arxiv.org/abs/1502.03509](https://arxiv.org/abs/1502.03509)
   * Additional Resources: yes
   * State: stable
---
4. CVAE with tf==2.4.1
   * Notes: yes
   * Paper Link: [https://proceedings.neurips.cc/paper/2015/file/8d55a249e6baa5c06772297520da2051-Paper.pdf](https://proceedings.neurips.cc/paper/2015/file/8d55a249e6baa5c06772297520da2051-Paper.pdf)
   * Additional Resources: no
   * State: stable
---
5. MemAE with tf==2.4.1
   * Notes: yes
   * Paper Link: [https://arxiv.org/abs/1904.02639](https://arxiv.org/abs/1904.02639)
   * Additional Resources: no
   * State: stable
---
6. TransformerBlock with tf==2.5.0
   * Notes: yes
   * Paper Link: [https://arxiv.org/abs/1706.03762](https://arxiv.org/abs/1706.03762)
   * Additional Resources: yes
   * State: stable
---
7. IRMAE with tf==2.5.0
   * Notes: yes
   * Paper Link: [https://arxiv.org/abs/2010.00679](https://arxiv.org/abs/2010.00679)
   * Additional Resources: no
   * State: stable
---
8. Mixup with tf==2.5.0
   * Notes: no
   * Paper Link: [https://arxiv.org/abs/1710.09412](https://arxiv.org/abs/1710.09412)
   * Additional Resources: no
   * State: stable
---
9. AudioClassDemo with tf==2.5.0
   * Notes: no
   * Paper Link: no
   * Additional Resources: no
   * State: stable
34.3625
198
0.741179
yue_Hant
0.639517
53b584340ac4b64dd7accdea6617a732873df29c
4,658
md
Markdown
clicks/proximity11/README.md
dMajoIT/mikrosdk_click_v2
f425fd85054961b4f600d1708cf18ac952a6bdb8
[ "MIT" ]
31
2020-10-02T14:15:14.000Z
2022-03-24T08:33:21.000Z
clicks/proximity11/README.md
dMajoIT/mikrosdk_click_v2
f425fd85054961b4f600d1708cf18ac952a6bdb8
[ "MIT" ]
4
2020-10-27T14:05:00.000Z
2022-03-10T09:38:57.000Z
clicks/proximity11/README.md
dMajoIT/mikrosdk_click_v2
f425fd85054961b4f600d1708cf18ac952a6bdb8
[ "MIT" ]
32
2020-11-28T07:56:42.000Z
2022-03-14T19:42:29.000Z
\mainpage Main Page

---

# Proximity 11 click

Proximity 11 Click is a close-range proximity sensing Click board™, equipped with the RPR-0521RS, a very accurate and power-efficient proximity and ambient Light Sensor with IrLED.

<p align="center">
  <img src="https://download.mikroe.com/images/click_for_ide/proximity11_click.png" height=300px>
</p>

[click Product page](https://www.mikroe.com/proximity-11-click)

---

#### Click library

- **Author**        : MikroE Team
- **Date**          : May 2020.
- **Type**          : I2C type

# Software Support

We provide a library for the Proximity11 Click as well as a demo application (example), developed using MikroElektronika [compilers](https://shop.mikroe.com/compilers). The demo can run on all the main MikroElektronika [development boards](https://shop.mikroe.com/development-boards).

The package can be downloaded/installed directly from the compiler's IDE (recommended way), downloaded from our LibStock, or found on the MikroE GitHub account.

## Library Description

> This library contains the API for the Proximity11 Click driver.

#### Standard key functions :

- Config Object Initialization function.
> void proximity11_cfg_setup ( proximity11_cfg_t *cfg );

- Initialization function.
> PROXIMITY11_RETVAL proximity11_init ( proximity11_t *ctx, proximity11_cfg_t *cfg );

- Click Default Configuration function.
> void proximity11_default_cfg ( proximity11_t *ctx );

#### Example key functions :

- This function reads proximity values from the desired registers.
> uint8_t proximity11_get ( proximity11_t *ctx, uint8_t register_address, uint8_t *output_buffer, uint8_t n_bytes );

- This function updates the data used to calculate Lux. It should be called when changing the ALS measurement time or ALS gain.
> void proximity11_update ( proximity11_t *ctx );

- This function sets the high ALS threshold value.
> void proximity11_set_als_threshold_high ( proximity11_t *ctx, uint16_t threshold_value );

## Examples Description

> This application enables usage of the proximity and ambient light sensors.

**The demo application is composed of two sections :**

### Application Init

> Initializes the I2C driver and performs device initialization.

```c
void application_init ( void )
{
    log_cfg_t log_cfg;
    proximity11_cfg_t cfg;
    uint8_t init_status;

    /**
     * Logger initialization.
     * Default baud rate: 115200
     * Default log level: LOG_LEVEL_DEBUG
     * @note If USB_UART_RX and USB_UART_TX
     * are defined as HAL_PIN_NC, you will
     * need to define them manually for log to work.
     * See @b LOG_MAP_USB_UART macro definition for detailed explanation.
     */
    LOG_MAP_USB_UART( log_cfg );
    log_init( &logger, &log_cfg );
    log_info( &logger, "---- Application Init ----" );

    //  Click initialization.
    proximity11_cfg_setup( &cfg );
    PROXIMITY11_MAP_MIKROBUS( cfg, MIKROBUS_4 );
    proximity11_init( &proximity11, &cfg );

    Delay_ms( 500 );

    init_status = proximity11_default_cfg( &proximity11 );

    if ( init_status == 1 )
    {
        log_printf( &logger, "> app init fail\r\n" );
        while( 1 );
    }
    else if ( init_status == 0 )
    {
        log_printf( &logger, "> app init done\r\n" );
    }
}
```

### Application Task

> Gets ALS and PS values and logs those values.

```c
void application_task ( void )
{
    //  Task implementation.
    uint16_t ps_value;
    float als_value;

    proximity11_get_ps_als_values( &proximity11, &ps_value, &als_value );

    log_printf( &logger, "PS  : %ld [count]\r\n", ps_value );
    log_printf( &logger, "ALS : %.2f [Lx]\r\n\r\n", als_value );

    Delay_ms( 500 );
}
```

The full application code, and ready-to-use projects, can be installed directly from the compiler's IDE (recommended) or found on the LibStock page or the MikroE GitHub account.

**Other mikroE Libraries used in the example:**

- MikroSDK.Board
- MikroSDK.Log
- Click.Proximity11

**Additional notes and information**

Depending on the development board you are using, you may need a [USB UART click](https://shop.mikroe.com/usb-uart-click), [USB UART 2 Click](https://shop.mikroe.com/usb-uart-2-click) or [RS232 Click](https://shop.mikroe.com/rs232-click) to connect to your PC, for development systems with no UART-to-USB interface available on the board. The terminal available in all MikroElektronika [compilers](https://shop.mikroe.com/compilers), or any other terminal application of your choice, can be used to read the message.

---
28.931677
181
0.692572
eng_Latn
0.757171
53b712151f82b1bb8b26401a31549b6120609c30
83
md
Markdown
README.md
jpcrespo/telegrambot
58443c001758a50b64a0bb5261da166cfaaf0373
[ "MIT" ]
null
null
null
README.md
jpcrespo/telegrambot
58443c001758a50b64a0bb5261da166cfaaf0373
[ "MIT" ]
null
null
null
README.md
jpcrespo/telegrambot
58443c001758a50b64a0bb5261da166cfaaf0373
[ "MIT" ]
null
null
null
# telegrambot

Telegram bot that helps manage and monitor my Bitcoin node.
27.666667
68
0.795181
spa_Latn
0.913231
53b72e95c8cc4ef73ee541bf82fba3763207c961
523
md
Markdown
_posts/2019-06-28-h5zzfp-new.md
faical-yannick-congo/code-portal
1690dd9dc33eb3ab7ec7bb98a96c2572d7baafea
[ "MIT" ]
null
null
null
_posts/2019-06-28-h5zzfp-new.md
faical-yannick-congo/code-portal
1690dd9dc33eb3ab7ec7bb98a96c2572d7baafea
[ "MIT" ]
null
null
null
_posts/2019-06-28-h5zzfp-new.md
faical-yannick-congo/code-portal
1690dd9dc33eb3ab7ec7bb98a96c2572d7baafea
[ "MIT" ]
null
null
null
---
title: "New Repo: H5Z-ZFP"
tags: new-repo
---

H5Z-ZFP is a highly flexible floating-point and integer compression plugin for the HDF5 library using ZFP compression. The plugin supports ZFP versions 0.5.0 through 0.5.5. It also supports all 4 modes of the ZFP compression library as well as 1D, 2D, and 3D datasets of single and double precision integer and floating-point data.

Check out the [GitHub repo](https://github.com/LLNL/H5Z-ZFP) and the [v1.0.0 release](https://github.com/LLNL/H5Z-ZFP/releases/tag/v1.0.0).
74.714286
471
0.759082
eng_Latn
0.965714
53b759d28d7006a3a8e43859d79077a0cadd4079
3,906
md
Markdown
docs/relational-databases/data-collection/enable-or-disable-data-collection.md
william-keller/sql-docs.pt-br
e5218aef85d1f8080eddaadecbb11de1e664c541
[ "CC-BY-4.0", "MIT" ]
1
2021-09-05T16:06:11.000Z
2021-09-05T16:06:11.000Z
docs/relational-databases/data-collection/enable-or-disable-data-collection.md
william-keller/sql-docs.pt-br
e5218aef85d1f8080eddaadecbb11de1e664c541
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/data-collection/enable-or-disable-data-collection.md
william-keller/sql-docs.pt-br
e5218aef85d1f8080eddaadecbb11de1e664c541
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: Habilitar ou desabilitar a coleta de dados
title: Habilitar ou desabilitar a coleta de dados | Microsoft Docs
ms.custom: ''
ms.date: 03/14/2017
ms.prod: sql
ms.reviewer: ''
ms.technology: supportability
ms.topic: conceptual
helpviewer_keywords:
- data collector [SQL Server], disabling
- data collector [SQL Server], enabling
ms.assetid: 0137971b-fb48-4a3e-822a-3df2b9bb09d7
author: MashaMSFT
ms.author: mathoma
ms.openlocfilehash: c0ff43debf025f9c6494f9e1beeeae1cb5721037
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/17/2020
ms.locfileid: "88428835"
---
# <a name="enable-or-disable-data-collection"></a>Habilitar ou desabilitar a coleta de dados

[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]

Este tópico descreve como habilitar ou desabilitar a coleta de dados no [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] usando o [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] ou o [!INCLUDE[tsql](../../includes/tsql-md.md)].

**Neste tópico**

- **Antes de começar:** [Segurança](#Security)
- **Para habilitar ou desabilitar a coleta de dados usando:** [SQL Server Management Studio](#SSMSProcedure), [Transact-SQL](#TsqlProcedure)

## <a name="before-you-begin"></a><a name="BeforeYouBegin"></a> Antes de começar

### <a name="security"></a><a name="Security"></a> Segurança

#### <a name="permissions"></a><a name="Permissions"></a> Permissões

É necessária a associação na função de banco de dados fixa **dc_admin** ou **dc_operator** (com a permissão EXECUTE) para executar esse procedimento.

## <a name="using-sql-server-management-studio"></a><a name="SSMSProcedure"></a> Usando o SQL Server Management Studio

#### <a name="to-enable-the-data-collector"></a>Para habilitar o coletor de dados

1. No Pesquisador de Objetos, expanda o nó **Gerenciamento**.
2. Clique com o botão direito do mouse em **Coleta de Dados** e clique em **Habilitar Coleta de Dados**.

#### <a name="to-disable-the-data-collector"></a>Para desabilitar o coletor de dados

1. No Pesquisador de Objetos, expanda o nó **Gerenciamento**.
2. Clique com o botão direito do mouse em **Coleta de Dados** e clique em **Desabilitar Coleta de Dados**.

## <a name="using-transact-sql"></a><a name="TsqlProcedure"></a> Usando o Transact-SQL

#### <a name="to-enable-the-data-collector"></a>Para habilitar o coletor de dados

1. Conecte-se ao [!INCLUDE[ssDE](../../includes/ssde-md.md)].
2. Na barra Padrão, clique em **Nova Consulta**.
3. Copie e cole o exemplo a seguir na janela de consulta e clique em **Executar**. Este exemplo usa [sp_syscollector_enable_collector](../../relational-databases/system-stored-procedures/sp-syscollector-enable-collector-transact-sql.md) para habilitar o coletor de dados.

   ```sql
   USE msdb;
   GO
   EXEC dbo.sp_syscollector_enable_collector;
   ```

#### <a name="to-disable-the-data-collector"></a>Para desabilitar o coletor de dados

1. Conecte-se ao [!INCLUDE[ssDE](../../includes/ssde-md.md)].
2. Na barra Padrão, clique em **Nova Consulta**.
3. Copie e cole o exemplo a seguir na janela de consulta e clique em **Executar**. Este exemplo usa [sp_syscollector_disable_collector](../../relational-databases/system-stored-procedures/sp-syscollector-disable-collector-transact-sql.md) para desabilitar o coletor de dados.

   ```sql
   USE msdb;
   GO
   EXEC dbo.sp_syscollector_disable_collector;
   ```

## <a name="see-also"></a>Consulte Também

[Coleta de Dados](../../relational-databases/data-collection/data-collection.md)
[Procedimentos armazenados do sistema &#40;Transact-SQL&#41;](../../relational-databases/system-stored-procedures/system-stored-procedures-transact-sql.md)
41.115789
278
0.708397
por_Latn
0.661011
53b795287e5250d8933c49cf7c1a89c0dd9fe47a
32,994
md
Markdown
articles/search/search-howto-dotnet-sdk.md
ZexDC/azure-docs
edf98652ceca4fbbd9c3ac156d87c7b985a4587d
[ "CC-BY-4.0" ]
null
null
null
articles/search/search-howto-dotnet-sdk.md
ZexDC/azure-docs
edf98652ceca4fbbd9c3ac156d87c7b985a4587d
[ "CC-BY-4.0" ]
null
null
null
articles/search/search-howto-dotnet-sdk.md
ZexDC/azure-docs
edf98652ceca4fbbd9c3ac156d87c7b985a4587d
[ "CC-BY-4.0" ]
null
null
null
--- title: How to use Azure Search from a .NET Application - Azure Search description: Learn how to use Azure Search in a .NET application using C# and the .NET SDK. Code-based tasks include connect to the service, index content, and query an index. author: brjohnstmsft manager: jlembicz services: search ms.service: search ms.devlang: dotnet ms.topic: conceptual ms.date: 04/20/2018 ms.author: brjohnst ms.custom: seodec2018 --- # How to use Azure Search from a .NET Application This article is a walkthrough to get you up and running with the [Azure Search .NET SDK](https://aka.ms/search-sdk). You can use the .NET SDK to implement a rich search experience in your application using Azure Search. ## What's in the Azure Search SDK The SDK consists of a few client libraries that enable you to manage your indexes, data sources, indexers, and synonym maps, as well as upload and manage documents, and execute queries, all without having to deal with the details of HTTP and JSON. These client libraries are all distributed as NuGet packages. The main NuGet package is `Microsoft.Azure.Search`, which is a meta-package that includes all the other packages as dependencies. Use this package if you're just getting started or if you know your application will need all the features of Azure Search. The other NuGet packages in the SDK are: - `Microsoft.Azure.Search.Data`: Use this package if you're developing a .NET application using Azure Search, and you only need to query or update documents in your indexes. If you also need to create or update indexes, synonym maps, or other service-level resources, use the `Microsoft.Azure.Search` package instead. - `Microsoft.Azure.Search.Service`: Use this package if you're developing automation in .NET to manage Azure Search indexes, synonym maps, indexers, data sources, or other service-level resources. If you only need to query or update documents in your indexes, use the `Microsoft.Azure.Search.Data` package instead. 
If you need all the functionality of Azure Search, use the `Microsoft.Azure.Search` package instead. - `Microsoft.Azure.Search.Common`: Common types needed by the Azure Search .NET libraries. You should not need to use this package directly in your application; It is only meant to be used as a dependency. The various client libraries define classes like `Index`, `Field`, and `Document`, as well as operations like `Indexes.Create` and `Documents.Search` on the `SearchServiceClient` and `SearchIndexClient` classes. These classes are organized into the following namespaces: * [Microsoft.Azure.Search](https://docs.microsoft.com/dotnet/api/microsoft.azure.search) * [Microsoft.Azure.Search.Models](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.models) The current version of the Azure Search .NET SDK is now generally available. If you would like to provide feedback for us to incorporate in the next version, please visit our [feedback page](https://feedback.azure.com/forums/263029-azure-search/). The .NET SDK supports version `2017-11-11` of the [Azure Search REST API](https://docs.microsoft.com/rest/api/searchservice/). This version now includes support for synonyms, as well as incremental improvements to indexers. Preview features that are *not* part of this version, such as support for indexing JSON arrays and CSV files, are in [preview](search-api-2016-09-01-preview.md) and available via [4.0-preview version of the .NET SDK](https://aka.ms/search-sdk-preview). This SDK does not support [Management Operations](https://docs.microsoft.com/rest/api/searchmanagement/) such as creating and scaling Search services and managing API keys. If you need to manage your Search resources from a .NET application, you can use the [Azure Search .NET Management SDK](https://aka.ms/search-mgmt-sdk). 
## Upgrading to the latest version of the SDK If you're already using an older version of the Azure Search .NET SDK and you'd like to upgrade to the new generally available version, [this article](search-dotnet-sdk-migration-version-5.md) explains how. ## Requirements for the SDK 1. Visual Studio 2017. 2. Your own Azure Search service. In order to use the SDK, you will need the name of your service and one or more API keys. [Create a service in the portal](search-create-service-portal.md) will help you through these steps. 3. Download the Azure Search .NET SDK [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Search) by using "Manage NuGet Packages" in Visual Studio. Just search for the package name `Microsoft.Azure.Search` on NuGet.org (or one of the other package names above if you only need a subset of the functionality). The Azure Search .NET SDK supports applications targeting the .NET Framework 4.5.2 and higher, as well as .NET Core. ## Core scenarios There are several things you'll need to do in your search application. In this tutorial, we'll cover these core scenarios: * Creating an index * Populating the index with documents * Searching for documents using full-text search and filters The sample code that follows illustrates each of these. Feel free to use the code snippets in your own application. ### Overview The sample application we'll be exploring creates a new index named "hotels", populates it with a few documents, then executes some search queries. 
Here is the main program, showing the overall flow: ```csharp // This sample shows how to delete, create, upload documents and query an index static void Main(string[] args) { IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json"); IConfigurationRoot configuration = builder.Build(); SearchServiceClient serviceClient = CreateSearchServiceClient(configuration); Console.WriteLine("{0}", "Deleting index...\n"); DeleteHotelsIndexIfExists(serviceClient); Console.WriteLine("{0}", "Creating index...\n"); CreateHotelsIndex(serviceClient); ISearchIndexClient indexClient = serviceClient.Indexes.GetClient("hotels"); Console.WriteLine("{0}", "Uploading documents...\n"); UploadDocuments(indexClient); ISearchIndexClient indexClientForQueries = CreateSearchIndexClient(configuration); RunQueries(indexClientForQueries); Console.WriteLine("{0}", "Complete. Press any key to end application...\n"); Console.ReadKey(); } ``` > [!NOTE] > You can find the full source code of the sample application used in this walk through on [GitHub](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo). > > We'll walk through this step by step. First we need to create a new `SearchServiceClient`. This object allows you to manage indexes. In order to construct one, you need to provide your Azure Search service name as well as an admin API key. You can enter this information in the `appsettings.json` file of the [sample application](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo). 
```csharp private static SearchServiceClient CreateSearchServiceClient(IConfigurationRoot configuration) { string searchServiceName = configuration["SearchServiceName"]; string adminApiKey = configuration["SearchServiceAdminApiKey"]; SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey)); return serviceClient; } ``` > [!NOTE] > If you provide an incorrect key (for example, a query key where an admin key was required), the `SearchServiceClient` will throw a `CloudException` with the error message "Forbidden" the first time you call an operation method on it, such as `Indexes.Create`. If this happens to you, double-check our API key. > > The next few lines call methods to create an index named "hotels", deleting it first if it already exists. We will walk through these methods a little later. ```csharp Console.WriteLine("{0}", "Deleting index...\n"); DeleteHotelsIndexIfExists(serviceClient); Console.WriteLine("{0}", "Creating index...\n"); CreateHotelsIndex(serviceClient); ``` Next, the index needs to be populated. To do this, we will need a `SearchIndexClient`. There are two ways to obtain one: by constructing it, or by calling `Indexes.GetClient` on the `SearchServiceClient`. We use the latter for convenience. ```csharp ISearchIndexClient indexClient = serviceClient.Indexes.GetClient("hotels"); ``` > [!NOTE] > In a typical search application, index management and population is handled by a separate component from search queries. `Indexes.GetClient` is convenient for populating an index because it saves you the trouble of providing another `SearchCredentials`. It does this by passing the admin key that you used to create the `SearchServiceClient` to the new `SearchIndexClient`. However, in the part of your application that executes queries, it is better to create the `SearchIndexClient` directly so that you can pass in a query key instead of an admin key. 
This is consistent with the principle of least privilege and will help to make your application more secure. You can find out more about admin keys and query keys [here](https://docs.microsoft.com/rest/api/searchservice/#authentication-and-authorization). > > Now that we have a `SearchIndexClient`, we can populate the index. This is done by another method that we will walk through later. ```csharp Console.WriteLine("{0}", "Uploading documents...\n"); UploadDocuments(indexClient); ``` Finally, we execute a few search queries and display the results. This time we use a different `SearchIndexClient`: ```csharp ISearchIndexClient indexClientForQueries = CreateSearchIndexClient(configuration); RunQueries(indexClientForQueries); ``` We will take a closer look at the `RunQueries` method later. Here is the code to create the new `SearchIndexClient`: ```csharp private static SearchIndexClient CreateSearchIndexClient(IConfigurationRoot configuration) { string searchServiceName = configuration["SearchServiceName"]; string queryApiKey = configuration["SearchServiceQueryApiKey"]; SearchIndexClient indexClient = new SearchIndexClient(searchServiceName, "hotels", new SearchCredentials(queryApiKey)); return indexClient; } ``` This time we use a query key since we do not need write access to the index. You can enter this information in the `appsettings.json` file of the [sample application](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo). If you run this application with a valid service name and API keys, the output should look like this: Deleting index... Creating index... Uploading documents... Waiting for documents to be indexed... 
Search the entire index for the term 'budget' and return only the hotelName field: Name: Roach Motel Apply a filter to the index to find hotels cheaper than $150 per night, and return the hotelId and description: ID: 2 Description: Cheapest hotel in town ID: 3 Description: Close to town hall and the river Search the entire index, order by a specific field (lastRenovationDate) in descending order, take the top two results, and show only hotelName and lastRenovationDate: Name: Fancy Stay Last renovated on: 6/27/2010 12:00:00 AM +00:00 Name: Roach Motel Last renovated on: 4/28/1982 12:00:00 AM +00:00 Search the entire index for the term 'motel': ID: 2 Base rate: 79.99 Description: Cheapest hotel in town Description (French): Hôtel le moins cher en ville Name: Roach Motel Category: Budget Tags: [motel, budget] Parking included: yes Smoking allowed: yes Last renovated on: 4/28/1982 12:00:00 AM +00:00 Rating: 1/5 Location: Latitude 49.678581, longitude -122.131577 Complete. Press any key to end application... The full source code of the application is provided at the end of this article. Next, we will take a closer look at each of the methods called by `Main`. ### Creating an index After creating a `SearchServiceClient`, the next thing `Main` does is delete the "hotels" index if it already exists. That is done by the following method: ```csharp private static void DeleteHotelsIndexIfExists(SearchServiceClient serviceClient) { if (serviceClient.Indexes.Exists("hotels")) { serviceClient.Indexes.Delete("hotels"); } } ``` This method uses the given `SearchServiceClient` to check if the index exists, and if so, delete it. > [!NOTE] > The example code in this article uses the synchronous methods of the Azure Search .NET SDK for simplicity. We recommend that you use the asynchronous methods in your own applications to keep them scalable and responsive. For example, in the method above you could use `ExistsAsync` and `DeleteAsync` instead of `Exists` and `Delete`. 
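As a sketch of that recommendation, the delete-if-exists helper could be written with the SDK's async methods. This is an illustrative variant, not part of the original sample; it assumes the same "hotels" index and `SearchServiceClient` setup, plus `using System.Threading.Tasks;`:

```csharp
// Async variant of DeleteHotelsIndexIfExists (sketch).
private static async Task DeleteHotelsIndexIfExistsAsync(SearchServiceClient serviceClient)
{
    if (await serviceClient.Indexes.ExistsAsync("hotels"))
    {
        await serviceClient.Indexes.DeleteAsync("hotels");
    }
}
```

The same pattern applies to the other operations in the sample, such as `Documents.IndexAsync` and `Documents.SearchAsync`.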
> > Next, `Main` creates a new "hotels" index by calling this method: ```csharp private static void CreateHotelsIndex(SearchServiceClient serviceClient) { var definition = new Index() { Name = "hotels", Fields = FieldBuilder.BuildForType<Hotel>() }; serviceClient.Indexes.Create(definition); } ``` This method creates a new `Index` object with a list of `Field` objects that defines the schema of the new index. Each field has a name, data type, and several attributes that define its search behavior. The `FieldBuilder` class uses reflection to create a list of `Field` objects for the index by examining the public properties and attributes of the given `Hotel` model class. We'll take a closer look at the `Hotel` class later on. > [!NOTE] > You can always create the list of `Field` objects directly instead of using `FieldBuilder` if needed. For example, you may not want to use a model class or you may need to use an existing model class that you don't want to modify by adding attributes. > > In addition to fields, you can also add scoring profiles, suggesters, or CORS options to the Index (these are omitted from the sample for brevity). You can find more information about the Index object and its constituent parts in the [SDK reference](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.models.index), as well as in the [Azure Search REST API reference](https://docs.microsoft.com/rest/api/searchservice/). ### Populating the index The next step in `Main` is to populate the newly-created index. 
This is done in the following method: ```csharp private static void UploadDocuments(ISearchIndexClient indexClient) { var hotels = new Hotel[] { new Hotel() { HotelId = "1", BaseRate = 199.0, Description = "Best hotel in town", DescriptionFr = "Meilleur hôtel en ville", HotelName = "Fancy Stay", Category = "Luxury", Tags = new[] { "pool", "view", "wifi", "concierge" }, ParkingIncluded = false, SmokingAllowed = false, LastRenovationDate = new DateTimeOffset(2010, 6, 27, 0, 0, 0, TimeSpan.Zero), Rating = 5, Location = GeographyPoint.Create(47.678581, -122.131577) }, new Hotel() { HotelId = "2", BaseRate = 79.99, Description = "Cheapest hotel in town", DescriptionFr = "Hôtel le moins cher en ville", HotelName = "Roach Motel", Category = "Budget", Tags = new[] { "motel", "budget" }, ParkingIncluded = true, SmokingAllowed = true, LastRenovationDate = new DateTimeOffset(1982, 4, 28, 0, 0, 0, TimeSpan.Zero), Rating = 1, Location = GeographyPoint.Create(49.678581, -122.131577) }, new Hotel() { HotelId = "3", BaseRate = 129.99, Description = "Close to town hall and the river" } }; var batch = IndexBatch.Upload(hotels); try { indexClient.Documents.Index(batch); } catch (IndexBatchException e) { // Sometimes when your Search service is under load, indexing will fail for some of the documents in // the batch. Depending on your application, you can take compensating actions like delaying and // retrying. For this simple demo, we just log the failed document keys and continue. Console.WriteLine( "Failed to index some of the documents: {0}", String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key))); } Console.WriteLine("Waiting for documents to be indexed...\n"); Thread.Sleep(2000); } ``` This method has four parts. The first creates an array of `Hotel` objects that will serve as our input data to upload to the index. This data is hard-coded for simplicity. 
In your own application, your data will likely come from an external data source such as a SQL database. The second part creates an `IndexBatch` containing the documents. You specify the operation you want to apply to the batch at the time you create it, in this case by calling `IndexBatch.Upload`. The batch is then uploaded to the Azure Search index by the `Documents.Index` method. > [!NOTE] > In this example, we are just uploading documents. If you wanted to merge changes into existing documents or delete documents, you could create batches by calling `IndexBatch.Merge`, `IndexBatch.MergeOrUpload`, or `IndexBatch.Delete` instead. You can also mix different operations in a single batch by calling `IndexBatch.New`, which takes a collection of `IndexAction` objects, each of which tells Azure Search to perform a particular operation on a document. You can create each `IndexAction` with its own operation by calling the corresponding method such as `IndexAction.Merge`, `IndexAction.Upload`, and so on. > > The third part of this method is a catch block that handles an important error case for indexing. If your Azure Search service fails to index some of the documents in the batch, an `IndexBatchException` is thrown by `Documents.Index`. This can happen if you are indexing documents while your service is under heavy load. **We strongly recommend explicitly handling this case in your code.** You can delay and then retry indexing the documents that failed, or you can log and continue like the sample does, or you can do something else depending on your application's data consistency requirements. > [!NOTE] > You can use the [`FindFailedActionsToRetry`](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.indexbatchexception.findfailedactionstoretry) method to construct a new batch containing only the actions that failed in a previous call to `Index`. 
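A minimal sketch of that retry pattern is shown below. It is not part of the original sample: the key selector `h => h.HotelId` is an assumption based on the `Hotel` model used here, and a real application would add a backoff delay and a retry limit:

```csharp
// Sketch: retry only the actions that failed in a previous Index call.
// Assumes `batch` is the IndexBatch<Hotel> built in UploadDocuments and
// that HotelId is the key field of the index.
try
{
    indexClient.Documents.Index(batch);
}
catch (IndexBatchException e)
{
    IndexBatch<Hotel> retryBatch = e.FindFailedActionsToRetry(batch, h => h.HotelId);
    // In production, wait here (e.g. with exponential backoff) before retrying.
    indexClient.Documents.Index(retryBatch);
}
```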
There is a discussion of how to properly use it [on StackOverflow](https://stackoverflow.com/questions/40012885/azure-search-net-sdk-how-to-use-findfailedactionstoretry).

Finally, the `UploadDocuments` method delays for two seconds. Indexing happens asynchronously in your Azure Search service, so the sample application needs to wait a short time to ensure that the documents are available for searching. Delays like this are typically only necessary in demos, tests, and sample applications.

#### How the .NET SDK handles documents

You may be wondering how the Azure Search .NET SDK is able to upload instances of a user-defined class like `Hotel` to the index. To help answer that question, let's look at the `Hotel` class:

```csharp
using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using Microsoft.Spatial;
using Newtonsoft.Json;

// The SerializePropertyNamesAsCamelCase attribute is defined in the Azure Search .NET SDK.
// It ensures that Pascal-case property names in the model class are mapped to camel-case
// field names in the index.
[SerializePropertyNamesAsCamelCase]
public partial class Hotel
{
    [System.ComponentModel.DataAnnotations.Key]
    [IsFilterable]
    public string HotelId { get; set; }

    [IsFilterable, IsSortable, IsFacetable]
    public double? BaseRate { get; set; }

    [IsSearchable]
    public string Description { get; set; }

    [IsSearchable]
    [Analyzer(AnalyzerName.AsString.FrLucene)]
    [JsonProperty("description_fr")]
    public string DescriptionFr { get; set; }

    [IsSearchable, IsFilterable, IsSortable]
    public string HotelName { get; set; }

    [IsSearchable, IsFilterable, IsSortable, IsFacetable]
    public string Category { get; set; }

    [IsSearchable, IsFilterable, IsFacetable]
    public string[] Tags { get; set; }

    [IsFilterable, IsFacetable]
    public bool? ParkingIncluded { get; set; }

    [IsFilterable, IsFacetable]
    public bool? SmokingAllowed { get; set; }

    [IsFilterable, IsSortable, IsFacetable]
    public DateTimeOffset? LastRenovationDate { get; set; }

    [IsFilterable, IsSortable, IsFacetable]
    public int? Rating { get; set; }

    [IsFilterable, IsSortable]
    public GeographyPoint Location { get; set; }
}
```

The first thing to notice is that each public property of `Hotel` corresponds to a field in the index definition, but with one crucial difference: the name of each field starts with a lower-case letter ("camel case"), while the name of each public property of `Hotel` starts with an upper-case letter ("Pascal case"). This is a common scenario in .NET applications that perform data-binding where the target schema is outside the control of the application developer. Rather than having to violate the .NET naming guidelines by making property names camel-case, you can tell the SDK to map the property names to camel-case automatically with the `[SerializePropertyNamesAsCamelCase]` attribute.

> [!NOTE]
> The Azure Search .NET SDK uses the [NewtonSoft JSON.NET](https://www.newtonsoft.com/json/help/html/Introduction.htm) library to serialize and deserialize your custom model objects to and from JSON. You can customize this serialization if needed. For more details, see [Custom Serialization with JSON.NET](#JsonDotNet).

The second thing to notice is the attributes such as `IsFilterable`, `IsSearchable`, `Key`, and `Analyzer` that decorate each public property. These attributes map directly to the [corresponding attributes of the Azure Search index](https://docs.microsoft.com/rest/api/searchservice/create-index#request). The `FieldBuilder` class uses these to construct field definitions for the index.

The third important thing about the `Hotel` class is the data types of the public properties. The .NET types of these properties map to their equivalent field types in the index definition. For example, the `Category` string property maps to the `category` field, which is of type `Edm.String`.
There are similar type mappings between `bool?` and `Edm.Boolean`, `DateTimeOffset?` and `Edm.DateTimeOffset`, etc. The specific rules for the type mapping are documented with the `Documents.Get` method in the [Azure Search .NET SDK reference](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.documentsoperationsextensions.get). The `FieldBuilder` class takes care of this mapping for you, but it can still be helpful to understand in case you need to troubleshoot any serialization issues. This ability to use your own classes as documents works in both directions; You can also retrieve search results and have the SDK automatically deserialize them to a type of your choice, as we will see in the next section. > [!NOTE] > The Azure Search .NET SDK also supports dynamically-typed documents using the `Document` class, which is a key/value mapping of field names to field values. This is useful in scenarios where you don't know the index schema at design-time, or where it would be inconvenient to bind to specific model classes. All the methods in the SDK that deal with documents have overloads that work with the `Document` class, as well as strongly-typed overloads that take a generic type parameter. Only the latter are used in the sample code in this tutorial. The [`Document` class](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.models.document) inherits from `Dictionary<string, object>`. > > **Why you should use nullable data types** When designing your own model classes to map to an Azure Search index, we recommend declaring properties of value types such as `bool` and `int` to be nullable (for example, `bool?` instead of `bool`). If you use a non-nullable property, you have to **guarantee** that no documents in your index contain a null value for the corresponding field. Neither the SDK nor the Azure Search service will help you to enforce this. 
This is not just a hypothetical concern: Imagine a scenario where you add a new field to an existing index that is of type `Edm.Int32`. After updating the index definition, all documents will have a null value for that new field (since all types are nullable in Azure Search). If you then use a model class with a non-nullable `int` property for that field, you will get a `JsonSerializationException` like this when trying to retrieve documents: Error converting value {null} to type 'System.Int32'. Path 'IntValue'. For this reason, we recommend that you use nullable types in your model classes as a best practice. <a name="JsonDotNet"></a> #### Custom Serialization with JSON.NET The SDK uses JSON.NET for serializing and deserializing documents. You can customize serialization and deserialization if needed by defining your own `JsonConverter` or `IContractResolver` (see the [JSON.NET documentation](https://www.newtonsoft.com/json/help/html/Introduction.htm) for more details). This can be useful when you want to adapt an existing model class from your application for use with Azure Search, and other more advanced scenarios. For example, with custom serialization you can: * Include or exclude certain properties of your model class from being stored as document fields. * Map between property names in your code and field names in your index. * Create custom attributes that can be used for mapping properties to document fields. You can find examples of implementing custom serialization in the unit tests for the Azure Search .NET SDK on GitHub. A good starting point is [this folder](https://github.com/Azure/azure-sdk-for-net/tree/AutoRest/src/Search/Search.Tests/Tests/Models). It contains classes that are used by the custom serialization tests. ### Searching for documents in the index The last step in the sample application is to search for some documents in the index. 
The following method does this: ```csharp private static void RunQueries(ISearchIndexClient indexClient) { SearchParameters parameters; DocumentSearchResult<Hotel> results; Console.WriteLine("Search the entire index for the term 'budget' and return only the hotelName field:\n"); parameters = new SearchParameters() { Select = new[] { "hotelName" } }; results = indexClient.Documents.Search<Hotel>("budget", parameters); WriteDocuments(results); Console.Write("Apply a filter to the index to find hotels cheaper than $150 per night, "); Console.WriteLine("and return the hotelId and description:\n"); parameters = new SearchParameters() { Filter = "baseRate lt 150", Select = new[] { "hotelId", "description" } }; results = indexClient.Documents.Search<Hotel>("*", parameters); WriteDocuments(results); Console.Write("Search the entire index, order by a specific field (lastRenovationDate) "); Console.Write("in descending order, take the top two results, and show only hotelName and "); Console.WriteLine("lastRenovationDate:\n"); parameters = new SearchParameters() { OrderBy = new[] { "lastRenovationDate desc" }, Select = new[] { "hotelName", "lastRenovationDate" }, Top = 2 }; results = indexClient.Documents.Search<Hotel>("*", parameters); WriteDocuments(results); Console.WriteLine("Search the entire index for the term 'motel':\n"); parameters = new SearchParameters(); results = indexClient.Documents.Search<Hotel>("motel", parameters); WriteDocuments(results); } ``` Each time it executes a query, this method first creates a new `SearchParameters` object. This is used to specify additional options for the query such as sorting, filtering, paging, and faceting. In this method, we're setting the `Filter`, `Select`, `OrderBy`, and `Top` property for different queries. All the `SearchParameters` properties are documented [here](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.models.searchparameters). The next step is to actually execute the search query. 
This is done using the `Documents.Search` method. For each query, we pass the search text to use as a string (or `"*"` if there is no search text), plus the search parameters created earlier. We also specify `Hotel` as the type parameter for `Documents.Search`, which tells the SDK to deserialize documents in the search results into objects of type `Hotel`. > [!NOTE] > You can find more information about the search query expression syntax [here](https://docs.microsoft.com/rest/api/searchservice/Simple-query-syntax-in-Azure-Search). > > Finally, after each query this method iterates through all the matches in the search results, printing each document to the console: ```csharp private static void WriteDocuments(DocumentSearchResult<Hotel> searchResults) { foreach (SearchResult<Hotel> result in searchResults.Results) { Console.WriteLine(result.Document); } Console.WriteLine(); } ``` Let's take a closer look at each of the queries in turn. Here is the code to execute the first query: ```csharp parameters = new SearchParameters() { Select = new[] { "hotelName" } }; results = indexClient.Documents.Search<Hotel>("budget", parameters); WriteDocuments(results); ``` In this case, we're searching for hotels that match the word "budget", and we want to get back only the hotel names, as specified by the `Select` parameter. Here are the results: Name: Roach Motel Next, we want to find the hotels with a nightly rate of less than $150, and return only the hotel ID and description: ```csharp parameters = new SearchParameters() { Filter = "baseRate lt 150", Select = new[] { "hotelId", "description" } }; results = indexClient.Documents.Search<Hotel>("*", parameters); WriteDocuments(results); ``` This query uses an OData `$filter` expression, `baseRate lt 150`, to filter the documents in the index. You can find out more about the OData syntax that Azure Search supports [here](https://docs.microsoft.com/rest/api/searchservice/OData-Expression-Syntax-for-Azure-Search). 
Here are the results of the query: ID: 2 Description: Cheapest hotel in town ID: 3 Description: Close to town hall and the river Next, we want to find the top two hotels that have been most recently renovated, and show the hotel name and last renovation date. Here is the code: ```csharp parameters = new SearchParameters() { OrderBy = new[] { "lastRenovationDate desc" }, Select = new[] { "hotelName", "lastRenovationDate" }, Top = 2 }; results = indexClient.Documents.Search<Hotel>("*", parameters); WriteDocuments(results); ``` In this case, we again use OData syntax to specify the `OrderBy` parameter as `lastRenovationDate desc`. We also set `Top` to 2 to ensure we only get the top two documents. As before, we set `Select` to specify which fields should be returned. Here are the results: Name: Fancy Stay Last renovated on: 6/27/2010 12:00:00 AM +00:00 Name: Roach Motel Last renovated on: 4/28/1982 12:00:00 AM +00:00 Finally, we want to find all hotels that match the word "motel": ```csharp parameters = new SearchParameters(); results = indexClient.Documents.Search<Hotel>("motel", parameters); WriteDocuments(results); ``` And here are the results, which include all fields since we did not specify the `Select` property: ID: 2 Base rate: 79.99 Description: Cheapest hotel in town Description (French): Hôtel le moins cher en ville Name: Roach Motel Category: Budget Tags: [motel, budget] Parking included: yes Smoking allowed: yes Last renovated on: 4/28/1982 12:00:00 AM +00:00 Rating: 1/5 Location: Latitude 49.678581, longitude -122.131577 This step completes the tutorial, but don't stop here. **Next steps** provides additional resources for learning more about Azure Search. ## Next steps * Browse the references for the [.NET SDK](https://docs.microsoft.com/dotnet/api/microsoft.azure.search) and [REST API](https://docs.microsoft.com/rest/api/searchservice/). 
* Review [naming conventions](https://docs.microsoft.com/rest/api/searchservice/Naming-rules) to learn the rules for naming various objects. * Review [supported data types](https://docs.microsoft.com/rest/api/searchservice/Supported-data-types) in Azure Search.
## CustomClassNameRule

| identifier | `custom_class_name` |
|:---------------:|:---------------:|
| Default Rule | `false` |

The custom class name of a ViewController in a storyboard should be the same as its file name.

## RelativeToMarginRule

| identifier | `relative_to_margin` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Forbid use of the relative-to-margin option.

## MisplacedViewRule

| identifier | `misplaced` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Display an error when views are misplaced.

## ForceToEnableAutoLayoutRule

| identifier | `enable_autolayout` |
|:---------------:|:---------------:|
| Default Rule | `true` |

Force use of the useAutolayout option.

## DuplicateConstraintRule

| identifier | `duplicate_constraint` |
|:---------------:|:---------------:|
| Default Rule | `true` |

Display a warning when a view has duplicated constraints.

## DuplicateIDRule

| identifier | `duplicate_id` |
|:---------------:|:---------------:|
| Default Rule | `true` |

Display a warning when elements use the same id.

## StoryboardViewControllerId

| identifier | `storyboard_viewcontroller_id` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Check that the Storyboard ID is the same as the ViewController class name.

## StackViewBackgroundColorRule

| identifier | `stackview_backgroundcolor` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Force the background color of a stack view to be the default.

## ImageResourcesRule

| identifier | `image_resources` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Check if image resources are valid.

## CustomModuleRule

| identifier | `custom_module` |
|:---------------:|:---------------:|
| Default Rule | `true` |

Check if a custom class matches its custom module, per the custom_module_rule config.

## UseBaseClassRule

| identifier | `use_base_class` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Check if a custom class is in the base classes listed in the use_base_class_rule config.

## AmbiguousViewRule

| identifier | `ambiguous` |
|:---------------:|:---------------:|
| Default Rule | `true` |

Display an error when views are ambiguous.

## ViewAsDeviceRule

| identifier | `view_as_device` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Check that View as: is set to the device specified by the view_as_device_rule config.

## ReuseIdentifierRule

| identifier | `reuse_identifier` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Check that the ReuseIdentifier is the same as the class name.

## ColorResourcesRule

| identifier | `color_resources` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Check if named color resources are valid.

## UseTraitCollectionsRule

| identifier | `use_trait_collections` |
|:---------------:|:---------------:|
| Default Rule | `false` |

Check whether a document's useTraitCollections flag is enabled or disabled.
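Rules are typically toggled from IBLinter's YAML configuration file. The sketch below is illustrative rather than definitive: it assumes a `.iblinter.yml` at the project root with `enabled_rules`, `disabled_rules`, and `excluded` keys, using the rule identifiers from the tables above.

```yaml
# .iblinter.yml — illustrative sketch; rule identifiers come from the tables above.
enabled_rules:
  - custom_class_name
  - reuse_identifier
disabled_rules:
  - ambiguous
excluded:
  - Carthage
  - Pods
```

Rules not mentioned in either list keep the default shown in their table.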
---
title: Hierarchies in Visual Studio | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- vs-ide-sdk
ms.topic: conceptual
helpviewer_keywords:
- hierarchies, Visual Studio IDE
- IDE, hierarchies
ms.assetid: 0a029a7c-79fd-4b54-bd63-bd0f21aa8d30
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: 0774c4a7ad9686b02501bed80a060519f5a500f3
ms.sourcegitcommit: 206e738fc45ff8ec4ddac2dd484e5be37192cfbd
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 08/03/2018
ms.locfileid: "39510412"
---
# <a name="hierarchies-in-visual-studio"></a>Hierarchies in Visual Studio

The [!INCLUDE[vsprvs](../../code-quality/includes/vsprvs_md.md)] integrated development environment (IDE) displays a project as a *hierarchy*. In the IDE, a hierarchy is a tree of nodes in which each node has a set of associated properties. A *project hierarchy* is a container that includes the project's items, the relationships among the items, and the properties and commands associated with the items.

In [!INCLUDE[vsprvs](../../code-quality/includes/vsprvs_md.md)], project hierarchies are managed using the hierarchy interface <xref:Microsoft.VisualStudio.Shell.Interop.IVsHierarchy>. The <xref:Microsoft.VisualStudio.Shell.Interop.IVsUIHierarchy> interface redirects commands invoked on project items to the appropriate hierarchy window instead of the standard command handler.

## <a name="project-hierarchies"></a>Project hierarchies

Each project hierarchy contains items that can be viewed and edited. These items vary with the project type. For example, a database project can contain stored procedures, database views, and database tables. A programming-language project, on the other hand, is likely to contain source files and resource files for bitmaps and dialog boxes. Hierarchies can be nested, which provides some extra flexibility when you create project hierarchies.

When you create a new project type, the project type controls the complete set of items that can be edited. However, a project can also contain items for which it does not support editing. For example, a Visual C++ project can contain an HTML file even though Visual C++ does not provide a custom editor for the HTML file type.

A hierarchy manages the persistence of the items it contains. The hierarchy implementation must control any special properties that affect the persistence of items within the hierarchy. For example, if an item represents an object in a repository rather than a file, the hierarchy implementation must control the persistence of that object. The IDE itself tells a hierarchy to save an item in response to user input, but the IDE does not control all the work required to save that item. Instead, the project is in control.

When a user opens an item in an editor, the hierarchy that controls that item becomes the active, selected hierarchy. The selected hierarchy determines the set of commands available to operate on the item. In this way, hierarchies support tracking user focus and reflecting the user's current context.

## <a name="see-also"></a>See also

[Project Types](../../extensibility/internals/project-types.md)
[Selection and Currency in the IDE](../../extensibility/internals/selection-and-currency-in-the-ide.md)
[VSSDK Samples](http://aka.ms/vs2015sdksamples)
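As a small illustration of working with a hierarchy, a package can read a property from a hierarchy's root node through `IVsHierarchy.GetProperty`. This is a sketch, not part of the original article; it assumes you already have an `IVsHierarchy` reference (for instance, obtained from the current selection via `IVsMonitorSelection`):

```csharp
using Microsoft.VisualStudio;
using Microsoft.VisualStudio.Shell.Interop;

// Sketch: read the display name of a hierarchy's root node.
// `hierarchy` is assumed to have been obtained elsewhere.
private static string GetRootName(IVsHierarchy hierarchy)
{
    object nameValue;
    int hr = hierarchy.GetProperty(
        VSConstants.VSITEMID_ROOT,        // the root node of the hierarchy
        (int)__VSHPROPID.VSHPROPID_Name,  // ask for the node's name
        out nameValue);

    return ErrorHandler.Succeeded(hr) ? nameValue as string : null;
}
```

The same `GetProperty` pattern applies to other `__VSHPROPID` values exposed by project hierarchies.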
# bottom-up-attention.pytorch

This repository contains a **PyTorch** reimplementation of the [bottom-up-attention](https://github.com/peteanderson80/bottom-up-attention) project based on *Caffe*.

We use [Detectron2](https://github.com/facebookresearch/detectron2) as the backend to provide completed functions including training, testing and feature extraction. Furthermore, we migrate the pre-trained Caffe-based model from the original repository, which obtains **the same visual features** as the original model (with deviation < 0.01). To the best of our knowledge, this is the first successful attempt to migrate the pre-trained Caffe model.

Some example object and attribute predictions for salient image regions are illustrated below. The script to obtain the following visualizations can be found [here](utils/visualize.ipynb)

![example-image](datasets/demo/example_image.jpg?raw=true)

## Table of Contents

0. [Prerequisites](#Prerequisites)
1. [Training](#Training)
2. [Testing](#Testing)
3. [Feature Extraction](#Feature-Extraction)
4. [Pre-trained models](#Pre-trained-models)

## Prerequisites

#### Requirements

- [Python](https://www.python.org/downloads/) >= 3.6
- [PyTorch](http://pytorch.org/) >= 1.4
- [Cuda](https://developer.nvidia.com/cuda-toolkit) >= 9.2 and [cuDNN](https://developer.nvidia.com/cudnn)
- [Apex](https://github.com/NVIDIA/apex.git)
- [Detectron2](https://github.com/facebookresearch/detectron2)

Note that most of the requirements above are needed for Detectron2.

#### Installation

1. Install Detectron2 according to their official instructions [here](https://github.com/facebookresearch/detectron2/blob/5e2a6f62ef752c8b8c700d2e58405e4bede3ddbe/INSTALL.md).
2. Compile the other tools used with the following script:

```bash
# install apex
$ git clone https://github.com/NVIDIA/apex.git
$ cd apex
$ python setup.py install
$ cd ..
# build this project
$ python setup.py build develop
```

#### Setup

If you want to train or test the model, you need to download the images and annotation files of the Visual Genome (VG) dataset. **If you only need to extract visual features using the pre-trained model, you can skip this part.**

The original VG images ([part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip) and [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)) are to be downloaded and unzipped to the `datasets` folder.

The generated annotation files in the original repository need to be transformed to the COCO data format required by Detectron2. The preprocessed annotation files can be downloaded [here](https://awma1-my.sharepoint.com/:u:/g/personal/yuz_l0_tn/EWpiE_5PvBdKiKfCi0pBx_EB5ONo8D8XABUz7tWcnltCrw?e=xIeW23) and unzipped to the `datasets` folder.

Finally, the `datasets` folder will have the following structure:

```angular2html
|-- datasets
   |-- vg
   |  |-- image
   |  |  |-- VG_100K
   |  |  |  |-- 2.jpg
   |  |  |  |-- ...
   |  |  |-- VG_100K_2
   |  |  |  |-- 1.jpg
   |  |  |  |-- ...
   |  |-- annotations
   |  |  |-- train.json
   |  |  |-- val.json
```

## Training

The following script will train a bottom-up-attention model on the `train` split of VG:

```bash
$ python3 train_net.py --mode detectron2 \
         --config-file configs/bua-caffe/train-bua-caffe-r101.yaml \
         --resume
```

1. `mode = {'caffe', 'detectron2'}` refers to the used mode. We only support the mode with Detectron2, which refers to the `detectron2` mode, since we think it is unnecessary to train a new model using the `caffe` mode.
2. `config-file` refers to all the configurations of the model.
3. `resume` refers to a flag if you want to resume training from a specific checkpoint.

## Testing

Given the trained model, the following script will test the performance on the `val` split of VG:

```bash
$ python3 train_net.py --mode caffe \
         --config-file configs/bua-caffe/test-bua-caffe-r101.yaml \
         --eval-only --resume
```

1.
`mode = {'caffe', 'detectron2'}` refers to the mode used. For the model converted from Caffe, use the `caffe` mode. For other models trained with Detectron2, use the `detectron2` mode.
2. `config-file` refers to all the configurations of the model, including the path of the model weights.
3. `eval-only` is a flag that runs evaluation only.
4. `resume` is a flag indicating whether to resume from a specific checkpoint.

## Feature Extraction

Similar to the testing stage, the following script extracts bottom-up-attention visual features with the provided hyper-parameters:

```bash
$ python3 extract_features.py --mode caffe \
         --config-file configs/bua-caffe/extract-bua-caffe-r101.yaml \
         --image-dir <image_dir> --gt-bbox-dir <bbox_dir> --out-dir <out_dir> --resume
```

1. `mode = {'caffe', 'detectron2'}` refers to the mode used. For the model converted from Caffe, use the `caffe` mode. For other models trained with Detectron2, use the `detectron2` mode.
2. `config-file` refers to all the configurations of the model, including the path of the model weights.
3. `image-dir` refers to the input image directory.
4. `gt-bbox-dir` refers to the ground-truth bounding box directory.
5. `out-dir` refers to the output feature directory.
6. `resume` is a flag indicating whether to resume from a specific checkpoint.
For example:

```bash
# extract roi features:
$ python3 extract_features.py --mode caffe \
         --config-file configs/bua-caffe/extract-bua-caffe-r101.yaml \
         --image-dir <image_dir> --out-dir <out_dir> --resume

# extract bboxes only:
$ python3 extract_features.py --mode caffe \
         --config-file configs/bua-caffe/extract-bua-caffe-r101-bbox-only.yaml \
         --image-dir <image_dir> --out-dir <out_dir> --resume

# extract roi features for given gt bboxes:
$ python3 extract_features.py --mode caffe \
         --config-file configs/bua-caffe/extract-bua-caffe-r101-gt-bbox.yaml \
         --image-dir <image_dir> --gt-bbox-dir <bbox_dir> --out-dir <out_dir> --resume
```

## Pre-trained models

We provide pre-trained models here. The evaluation metrics are exactly the same as those in the original Caffe project. Currently we only provide the models converted from Caffe, which report exactly the same scores as the original versions. More models will be added over time.

Model | Mode | Backbone | Objects mAP@0.5 | Objects weighted mAP@0.5 | Download
:-:|:-:|:-:|:-:|:-:|:-:
[Faster R-CNN](./configs/bua-caffe/extract-bua-caffe-r101.yaml)|C, K=[10,100]|ResNet-101|10.2%|15.1%|[model](https://awma1-my.sharepoint.com/:u:/g/personal/yuz_l0_tn/EaXvCC3WjtlLvvEfLr3oa8UBLA21tcLh4L8YLbYXl6jgjg?e=SFMoeu)
[Faster R-CNN](./configs/bua-caffe/extract-bua-caffe-r101-fix36.yaml)|C, K=36|ResNet-101|9.3%|14.0%|[model](https://awma1-my.sharepoint.com/:u:/g/personal/yuz_l0_tn/EUKhQ3hSRv9JrrW64qpNLSIBGoOjEGCkF8zvgBP9gKax-w?e=kNB9pS)
[Faster R-CNN](./configs/bua-caffe/extract-bua-caffe-r152.yaml)|C, K=[10,100]|ResNet-152|11.1%|15.7%|[model](https://awma1-my.sharepoint.com/:u:/g/personal/yuz_l0_tn/ETDgy4bY0xpGgsu5tEMzgLcBQjAwpnkKkltNTtPVuMj4GQ?e=rpM1a3)

## License

This project is released under the [Apache 2.0 license](LICENSE).

## Contact

This repo is currently maintained by Jing Li ([@J1mL3e_](https://github.com/JimLee4530)) and Zhou Yu ([@yuzcccc](https://github.com/yuzcccc)).
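After extraction, each image yields one feature file in `out-dir`. As a minimal sketch of how a downstream pipeline (for example, VQA or captioning) might consume such a file — note the `.npz` format and the key names `x`, `bbox`, and `num_bbox` are assumptions based on common bottom-up-attention feature dumps, so inspect a real output file with `np.load(path).files` before relying on them:

```python
import numpy as np

# Write a dummy feature file mimicking the assumed output format:
# 36 region features of dimension 2048, plus their bounding boxes.
# (Key names are an assumption; check a real file from extract_features.py.)
np.savez(
    "example.npz",
    x=np.random.rand(36, 2048).astype(np.float32),   # region features
    bbox=np.random.rand(36, 4).astype(np.float32),   # boxes as (x1, y1, x2, y2)
    num_bbox=36,                                     # number of valid regions
)

# Load it back, keeping only the valid regions.
data = np.load("example.npz")
n = int(data["num_bbox"])
features = data["x"][:n]
boxes = data["bbox"][:n]
print(features.shape, boxes.shape)  # (36, 2048) (36, 4)
```

This keeps the feature consumer decoupled from Detectron2 itself: only NumPy is needed at read time.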
44.890909
446
0.725935
eng_Latn
0.942155
53b9446a3d22a7a1b619819c059f632ba3688bd5
1,427
md
Markdown
LICENSE.md
tech4germany/bmfsfj-partnerschaftliche-gleichstellung
c714d0b607e0f65442c9ed251f3e195814a70150
[ "MIT" ]
5
2021-11-04T14:19:24.000Z
2021-11-15T11:10:06.000Z
LICENSE.md
tech4germany/bmfsfj-partnerschaftliche-gleichstellung
c714d0b607e0f65442c9ed251f3e195814a70150
[ "MIT" ]
null
null
null
LICENSE.md
tech4germany/bmfsfj-partnerschaftliche-gleichstellung
c714d0b607e0f65442c9ed251f3e195814a70150
[ "MIT" ]
null
null
null
The following license applies to the files in the following folders of this project:

- components
- lang
- layouts
- pages
- plugins
- static
- store
- utils

as well as to all files placed directly in this folder. It does NOT apply to the files in the "content" and "assets" folders.

---

MIT License

Copyright (c) 2021 Katja Anokhina, Sophia Grote, Jonathan Schneider and Malte Laukötter

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
38.567568
134
0.793973
eng_Latn
0.777325
53bb0a07a0b3ed9706a6ce6269a1f8fa328f66cf
161
md
Markdown
doc/models/v1-fee-calculation-phase.md
jHards/square-php-sdk
0bbf50b57964ecfa4023f55bc1be5d58e837c490
[ "Apache-2.0" ]
2
2020-07-30T12:39:10.000Z
2020-07-30T12:39:20.000Z
doc/models/v1-fee-calculation-phase.md
jHards/square-php-sdk
0bbf50b57964ecfa4023f55bc1be5d58e837c490
[ "Apache-2.0" ]
2
2020-08-17T21:24:03.000Z
2020-08-25T16:17:46.000Z
doc/models/v1-fee-calculation-phase.md
land-of-apps/square-ruby-sdk
e8afc8f5b14868cc86a766f5c0eebd830f9b1091
[ "Apache-2.0" ]
2
2020-11-13T12:00:13.000Z
2021-08-16T23:59:00.000Z
## V1 Fee Calculation Phase

### Enumeration

`V1FeeCalculationPhase`

### Fields

| Name |
| --- |
| `FEE_SUBTOTAL_PHASE` |
| `OTHER` |
| `FEE_TOTAL_PHASE` |
10.733333
27
0.627329
yue_Hant
0.547443
53bb19e21691ef7dd15d7c64c358875b7b288b89
40,514
md
Markdown
articles/active-directory/hybrid/plan-migrate-adfs-pass-through-authentication.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/hybrid/plan-migrate-adfs-pass-through-authentication.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/hybrid/plan-migrate-adfs-pass-through-authentication.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Azure AD Connect: Migrate from federation to PTA for Azure AD'
description: This article has information about moving your hybrid identity environment from federation to pass-through authentication.
services: active-directory
author: billmath
manager: daveba
ms.reviewer: martincoetzer
ms.service: active-directory
ms.workload: identity
ms.topic: article
ms.date: 05/31/2019
ms.subservice: hybrid
ms.author: billmath
ms.collection: M365-identity-device-management
ms.openlocfilehash: 13a5fc216abc890c19ce3a2d75335431fe2a6799
ms.sourcegitcommit: 849bb1729b89d075eed579aa36395bf4d29f3bd9
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/28/2020
ms.locfileid: "79528650"
---
# <a name="migrate-from-federation-to-pass-through-authentication-for-azure-active-directory"></a>Migrate from federation to pass-through authentication for Azure Active Directory

This article describes how to move your organization domains from Active Directory Federation Services (AD FS) to pass-through authentication.

> [!NOTE]
> Changing your authentication method requires planning, testing, and potentially downtime. [Staged rollout](how-to-connect-staged-rollout.md) is an alternative way to test and gradually migrate from federation to cloud authentication using pass-through authentication.

## <a name="prerequisites-for-migrating-to-pass-through-authentication"></a>Prerequisites for migrating to pass-through authentication

The following prerequisites must be in place before you migrate from using AD FS to using pass-through authentication.

### <a name="update-azure-ad-connect"></a>Update Azure AD Connect

To complete the steps required to migrate to pass-through authentication, you must have [Azure Active Directory Connect](https://www.microsoft.com/download/details.aspx?id=47594) (Azure AD Connect) 1.1.819.0 or a later version. Azure AD Connect 1.1.819.0 significantly improves how the sign-in conversion is performed.
The overall time to migrate from AD FS to cloud authentication in this version is reduced from potentially hours to minutes.

> [!IMPORTANT]
> You might read in outdated documentation, tools, and blogs that user conversion is required when you convert domains from federated identity to managed identity. *Converting users* is no longer required. Microsoft is working to update documentation and tools to reflect this change.

To update Azure AD Connect, complete the steps in [Azure AD Connect: Upgrade to the latest version](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnect-upgrade-previous-version).

### <a name="plan-authentication-agent-number-and-placement"></a>Plan authentication agent number and placement

Pass-through authentication requires deploying lightweight agents on the Azure AD Connect server and on the on-premises computer that's running Windows Server. To reduce latency, install the agents as close as possible to your Active Directory domain controllers.

For most customers, two or three authentication agents are sufficient to provide high availability and the required capacity. A tenant can have a maximum of 12 agents registered. The first agent is always installed on the Azure AD Connect server itself. To learn about agent limitations and agent deployment options, see [Azure AD pass-through authentication: Current limitations](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnect-pass-through-authentication-current-limitations).

### <a name="plan-the-migration-method"></a>Plan the migration method

You can choose from two methods to migrate from federated identity management to pass-through authentication and seamless single sign-on (SSO). The method you use depends on how your AD FS instance was originally configured.

* **Azure AD Connect**.
If you originally configured AD FS by using Azure AD Connect, you *must* change to pass-through authentication by using the Azure AD Connect wizard. Azure AD Connect automatically runs the **Set-MsolDomainAuthentication** cmdlet when you change the user sign-in method. Azure AD Connect automatically unfederates all the verified federated domains in your Azure AD tenant.

  > [!NOTE]
  > Currently, if you originally used Azure AD Connect to configure AD FS, you can't avoid unfederating all domains in your tenant when you change the user sign-in to pass-through authentication.

* **Azure AD Connect with PowerShell**. You can use this method only if you didn't originally configure AD FS by using Azure AD Connect. For this option, you still must change the user sign-in method via the Azure AD Connect wizard. The core difference with this option is that the wizard doesn't automatically run the **Set-MsolDomainAuthentication** cmdlet. With this option, you have full control over which domains are converted and in which order.

To understand which method you should use, complete the steps in the following sections.

#### <a name="verify-current-user-sign-in-settings"></a>Verify current user sign-in settings

1. Sign in to the [Azure AD portal](https://aad.portal.azure.com/) by using a Global Administrator account.
2. In the **User sign-in** section, verify the following settings:
   * **Federation** is set to **Enabled**.
   * **Seamless single sign-on** is set to **Disabled**.
   * **Pass-through authentication** is set to **Disabled**.

   ![Screenshot of the settings in the Azure AD Connect user sign-in section](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image1.png)

#### <a name="verify-how-federation-was-configured"></a>Verify how federation was configured

1. On your Azure AD Connect server, open Azure AD Connect. Select **Configure**.
2.
On the **Additional tasks** page, select **View current configuration**, and then select **Next**.<br />![Screenshot of the View current configuration option on the Additional tasks page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image2.png)<br />
3. Under **Additional tasks > Manage federation**, scroll to **Active Directory Federation Services (AD FS)**.<br />
   * If the AD FS configuration appears in this section, you can safely assume that AD FS was originally configured by using Azure AD Connect. You can convert your domains from federated identity to managed identity by using the Azure AD Connect **Change user sign-in** option. For more information about the process, see the section **Option A: Configure pass-through authentication by using Azure AD Connect**.
   * If AD FS isn't listed in the current settings, you must manually convert your domains from federated identity to managed identity by using PowerShell. For more information about this process, see the section **Option B: Switch from federation to pass-through authentication by using Azure AD Connect and PowerShell**.

### <a name="document-current-federation-settings"></a>Document current federation settings

To find your current federation settings, run the **Get-MsolDomainFederationSettings** cmdlet:

``` PowerShell
Get-MsolDomainFederationSettings -DomainName YourDomain.extension | fl *
```

For example:

``` PowerShell
Get-MsolDomainFederationSettings -DomainName Contoso.com | fl *
```

Verify any settings that might have been customized for your federation design and deployment documentation. Specifically, look for customizations in **PreferredAuthenticationProtocol**, **SupportsMfa**, and **PromptLoginBehavior**.
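If you have several federated domains, it can help to capture all of their settings in one pass. As a sketch (not part of the official guidance), assuming the MSOnline module is installed and you've already connected with `Connect-MsolService`, you could export the settings of every federated domain to disk:

``` PowerShell
# Sketch: back up the federation settings of every federated domain.
# Assumes the MSOnline module is installed and Connect-MsolService has run.
$backupDir = "C:\temp\federation-backup"
New-Item -ItemType Directory -Path $backupDir -Force | Out-Null

Get-MsolDomain | Where-Object { $_.Authentication -eq "Federated" } | ForEach-Object {
    Get-MsolDomainFederationSettings -DomainName $_.Name |
        Export-Clixml (Join-Path $backupDir "$($_.Name).xml")
}
```

The resulting XML files can later be inspected with `Import-Clixml` when you compare pre- and post-migration settings.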
For more information, see these articles:

* [AD FS prompt=login parameter support](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/ad-fs-prompt-login)
* [Set-MsolDomainAuthentication](https://docs.microsoft.com/powershell/module/msonline/set-msoldomainauthentication?view=azureadps-1.0)

> [!NOTE]
> If **SupportsMfa** is set to **True**, you're using an on-premises multifactor authentication solution to inject a second factor into the user authentication flow. This setup no longer works for Azure AD authentication scenarios.
>
> Instead, use the cloud-based Azure Multi-Factor Authentication service to perform the same function. Carefully evaluate your multifactor authentication requirements before you continue. Before you convert your domains, make sure that you understand how to use Azure Multi-Factor Authentication, the licensing implications, and the user registration process.

#### <a name="back-up-federation-settings"></a>Back up federation settings

Although no changes are made to other relying parties in your AD FS farm during the processes described in this article, we recommend that you have a current valid backup of your AD FS farm that you can restore from. You can create a current valid backup by using the free [Microsoft AD FS Rapid Restore tool](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/ad-fs-rapid-restore-tool). You can use the tool to back up AD FS, and to restore an existing farm or create a new farm.

If you choose not to use the AD FS Rapid Restore tool, at a minimum, you should export the Microsoft Office 365 Identity Platform relying party trust and any associated custom claim rules you added.
You can export the relying party trust and associated claim rules by using the following PowerShell example:

``` PowerShell
(Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform") | Export-CliXML "C:\temp\O365-RelyingPartyTrust.xml"
```

## <a name="deployment-considerations-and-using-ad-fs"></a>Deployment considerations and using AD FS

This section describes deployment considerations and details about using AD FS.

### <a name="current-ad-fs-use"></a>Current AD FS use

Before you convert from federated identity to managed identity, look closely at how you currently use AD FS for Azure AD, Office 365, and other applications (relying party trusts). Specifically, consider the scenarios described in this table:

| If | Then |
|-|-|
| You plan to keep using AD FS with other applications (other than Azure AD and Office 365). | After you convert your domains, you'll use both AD FS and Azure AD. Consider the user experience. In some cases, users might be required to authenticate twice: once to Azure AD (where a user gets SSO access to other applications, like Office 365), and again for any applications that are still bound to AD FS as a relying party trust. |
| Your AD FS instance is heavily customized and relies on specific customization settings in the onload.js file (for example, if you changed the sign-in experience so that users use only a **sAMAccountName** format for their username instead of a UPN format, or if your organization has heavily branded the sign-in experience). The onload.js file can't be duplicated in Azure AD. | Before you continue, you must verify that Azure AD can meet your current customization requirements. For more information and guidance, see the sections on AD FS branding and AD FS customization.|
| You use AD FS to block earlier versions of authentication clients.| Consider replacing AD FS controls that block earlier versions of authentication clients by using a combination of [Conditional Access controls](https://docs.microsoft.com/azure/active-directory/conditional-access/conditions) and [Exchange Online Client Access Rules](https://aka.ms/EXOCAR). |
| You require users to perform multifactor authentication against an on-premises multifactor authentication server solution when users authenticate to AD FS.| In a managed identity domain, you can't inject a multifactor authentication challenge via the on-premises multifactor authentication solution into the authentication flow. However, you can use the Azure Multi-Factor Authentication service for multifactor authentication after the domain is converted.<br /><br /> If your users aren't currently using Azure Multi-Factor Authentication, a one-time user registration step is required. You must prepare for and communicate the planned registration to your users. |
| You currently use access control policies (AuthZ rules) in AD FS to control access to Office 365.| Consider replacing the policies with the equivalent Azure AD [Conditional Access policies](https://docs.microsoft.com/azure/active-directory/active-directory-conditional-access-azure-portal) and [Exchange Online Client Access Rules](https://aka.ms/EXOCAR).|

### <a name="common-ad-fs-customizations"></a>Common AD FS customizations

This section describes common AD FS customizations.

#### <a name="insidecorporatenetwork-claim"></a>InsideCorporateNetwork claim

AD FS issues the **InsideCorporateNetwork** claim if the user who is authenticating is inside the corporate network. This claim can then be passed on to Azure AD.
The claim is used to bypass multifactor authentication based on the user's network location. To learn whether this functionality is currently available in AD FS, see [Trusted IPs for federated users](https://docs.microsoft.com/azure/multi-factor-authentication/multi-factor-authentication-get-started-adfs-cloud).

The **InsideCorporateNetwork** claim isn't available after your domains are converted to pass-through authentication. You can use [named locations in Azure AD](https://docs.microsoft.com/azure/active-directory/active-directory-named-locations) to replace this functionality. After you set up named locations, you must update all Conditional Access policies that were configured to either include or exclude the network **All trusted locations** or **MFA Trusted IPs** values to reflect the new named locations.

For more information about the **Location** condition in Conditional Access, see [Active Directory Conditional Access locations](https://docs.microsoft.com/azure/active-directory/active-directory-conditional-access-locations).

#### <a name="hybrid-azure-ad-joined-devices"></a>Hybrid Azure AD-joined devices

When you join a device to Azure AD, you can create Conditional Access rules that enforce that devices meet your access standards for security and compliance. Also, users can sign in to a device by using an organizational work or school account instead of a personal account. When you use hybrid Azure AD-joined devices, you can join your Active Directory domain-joined devices to Azure AD. Your federated environment might have been configured to use this feature.
To ensure that hybrid join continues to work for any devices joined to the domain after your domains are converted to pass-through authentication, for Windows 10 clients, you must use Azure AD Connect to sync Active Directory computer accounts to Azure AD.

For Windows 8 and Windows 7 computer accounts, hybrid join uses seamless SSO to register the computer in Azure AD. You don't have to sync Windows 8 and Windows 7 computer accounts like you do for Windows 10 devices. However, you must deploy an updated workplacejoin.exe file (via an .msi file) to Windows 8 and Windows 7 clients so they can register themselves by using seamless SSO. [Download the .msi file](https://www.microsoft.com/download/details.aspx?id=53554).

For more information, see [Configure hybrid Azure AD-joined devices](https://docs.microsoft.com/azure/active-directory/device-management-hybrid-azuread-joined-devices-setup).

#### <a name="branding"></a>Branding

If your organization has [customized your AD FS sign-in pages](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/ad-fs-user-sign-in-customization) to display information that's more pertinent to the organization, consider making similar [customizations to the Azure AD sign-in page](https://docs.microsoft.com/azure/active-directory/customize-branding).

Although similar customizations are available, some visual changes on sign-in pages should be expected after the conversion. You might want to provide information about expected changes in your communications to users.

> [!NOTE]
> Organization branding is available only if you purchased the Premium or Basic license for Azure Active Directory or if you have an Office 365 license.

## <a name="plan-for-smart-lockout"></a>Plan for smart lockout

Azure AD smart lockout protects against brute-force password attacks.
Smart lockout prevents an on-premises Active Directory account from being locked out when pass-through authentication is being used and an account lockout group policy is set in Active Directory.

For more information, see [Azure Active Directory smart lockout](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnect-pass-through-authentication-smart-lockout).

## <a name="plan-deployment-and-support"></a>Plan deployment and support

Complete the tasks described in this section to help you plan for deployment and support.

### <a name="plan-the-maintenance-window"></a>Plan the maintenance window

Although the domain conversion process is relatively quick, Azure AD might continue to send some authentication requests to your AD FS servers for up to four hours after the domain conversion is finished. During this four-hour window, and depending on various service-side caches, Azure AD might not accept these authentications. Users might receive an error. The user can still successfully authenticate against AD FS, but Azure AD no longer accepts the user's issued token because that federation trust is now removed.

Only users who access the services via a web browser during this post-conversion window, before the service-side cache is cleared, are affected. Legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't expected to be affected because Exchange Online keeps a cache of their credentials for a set period of time. The cache is used to silently authenticate the user. The user doesn't have to return to AD FS. Credentials stored on the device for these clients are used to silently reauthenticate themselves after the cached information is cleared. Users aren't expected to receive any password prompts as a result of the domain conversion process.
Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without additional configuration.

> [!IMPORTANT]
> Don't shut down your AD FS environment or remove the Office 365 relying party trust until you have verified that all users can successfully authenticate by using cloud authentication.

### <a name="plan-for-rollback"></a>Plan for rollback

If you encounter a major issue that you can't resolve quickly, you might choose to roll back the solution to federation. It's important to plan what to do if your deployment doesn't roll out as intended. If conversion of the domain or of users fails during deployment, or if you need to roll back to federation, you must understand how to mitigate any outage and reduce the effect on your users.

#### <a name="to-roll-back"></a>To roll back

To plan for rollback, check the federation design and deployment documentation for your specific deployment details. The process should include these tasks:

* Converting managed domains to federated domains by using the **Convert-MsolDomainToFederated** cmdlet.
* If necessary, configuring additional claims rules.

### <a name="plan-communications"></a>Plan communications

An important part of planning deployment and support is ensuring that your users are proactively informed about upcoming changes. Users should know in advance what they might experience and what is required of them.

After both pass-through authentication and seamless SSO are deployed, the user sign-in experience for accessing Office 365 and other resources that are authenticated through Azure AD changes.
Users who are outside the network see only the Azure AD sign-in page. These users aren't redirected to the forms-based page presented by the external-facing web application proxy servers.

Include the following elements in your communication strategy:

* Notify users about upcoming and released functionality by using:
  * Email and other internal communication channels.
  * Visuals, such as posters.
  * Executive, live, or other communications.
* Determine who will customize the communications and who will send the communications, and when.

## <a name="implement-your-solution"></a>Implement your solution

You planned your solution. Now, you can implement it. Implementation involves the following components:

* Preparing for seamless SSO.
* Changing the sign-in method to pass-through authentication and enabling seamless SSO.

### <a name="step-1-prepare-for-seamless-sso"></a>Step 1: Prepare for seamless SSO

For your devices to use seamless SSO, you must add an Azure AD URL to users' intranet zone settings by using a group policy in Active Directory.

By default, web browsers automatically calculate the correct zone, either internet or intranet, from a URL. For example, **http://contoso/** maps to the intranet zone and **http://intranet.contoso.com** maps to the internet zone (because the URL contains a period). Browsers send Kerberos tickets to a cloud endpoint, like the Azure AD URL, only if you add the URL to the browser's intranet zone.

Complete the steps to [roll out](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnect-sso-quick-start) the required changes to your devices.

> [!IMPORTANT]
> Making this change doesn't modify the way your users sign in to Azure AD. However, it's important that you apply this configuration to all your devices before you proceed.
Users who sign in on devices that haven't received this configuration are simply required to enter a username and password to sign in to Azure AD.

### <a name="step-2-change-the-sign-in-method-to-pass-through-authentication-and-enable-seamless-sso"></a>Step 2: Change the sign-in method to pass-through authentication and enable seamless SSO

You have two options for changing the sign-in method to pass-through authentication and enabling seamless SSO.

#### <a name="option-a-configure-pass-through-authentication-by-using-azure-ad-connect"></a>Option A: Configure pass-through authentication by using Azure AD Connect

Use this method if you originally configured your AD FS environment by using Azure AD Connect. You can't use this method if you *didn't* originally configure your AD FS environment by using Azure AD Connect.

> [!IMPORTANT]
> After you complete the following steps, all your domains are converted from federated identity to managed identity. For more information, see [Plan the migration method](#plan-the-migration-method).

First, change the sign-in method:

1. On the Azure AD Connect server, open the Azure AD Connect wizard.
2. Select **Change user sign-in**, and then select **Next**.
3. On the **Connect to Azure AD** page, enter the username and password of a Global Administrator account.
4. On the **User sign-in** page, select the **Pass-through authentication** button, select **Enable single sign-on**, and then select **Next**.
5. On the **Enable single sign-on** page, enter the credentials of a Domain Administrator account, and then select **Next**.

   > [!NOTE]
   > Domain Administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions. The Domain Administrator account credentials aren't stored in Azure AD Connect or in Azure AD. The Domain Administrator account credentials are used only to turn on the feature.
The credentials are deleted when the process successfully finishes.
   >
   > 1. A computer account named AZUREADSSOACC (which represents Azure AD) is created in your on-premises Active Directory instance.
   > 2. The computer account's Kerberos decryption key is securely shared with Azure AD.
   > 3. Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.

6. On the **Ready to configure** page, make sure that the **Start the synchronization process when configuration completes** check box is selected. Then, select **Configure**.<br />![Screenshot of the Ready to configure page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image8.png)<br />
7. In the Azure AD portal, select **Azure Active Directory**, and then select **Azure AD Connect**.
8. Verify these settings:
   * **Federation** is set to **Disabled**.
   * **Seamless single sign-on** is set to **Enabled**.
   * **Pass-through authentication** is set to **Enabled**.<br />![Screenshot that shows the settings in the User sign-in section](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image9.png)<br />

Next, deploy additional authentication agents:

1. In the Azure portal, go to **Azure Active Directory** > **Azure AD Connect**, and then select **Pass-through authentication**.
2. On the **Pass-through authentication** page, select the **Download** button.
3. On the **Download agent** page, select **Accept terms and download**.

   The additional authentication agent download begins. Install the secondary authentication agent on a domain-joined server.

   > [!NOTE]
   > The first agent is always installed on the Azure AD Connect server itself as part of the configuration changes made in the **User sign-in** section of the Azure AD Connect tool. Install any additional authentication agents on a separate server.
> We recommend that you have two or three additional authentication agents available.

4. Run the authentication agent installation. During the installation, you must enter the credentials of a global administrator account.

   ![Screenshot that shows the Install button on the Microsoft Azure AD Connect Authentication Agent Package page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image11.png)

   ![Screenshot that shows the sign-in page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image12.png)

5. When the authentication agent is installed, you can return to the pass-through authentication health page to check the status of the additional agents.

Skip ahead to [Testing and next steps](#testing-and-next-steps).

> [!IMPORTANT]
> Skip the section **Option B: Switch from federation to pass-through authentication by using Azure AD Connect and PowerShell**. The steps in that section don't apply if you chose option A to change the sign-in method to pass-through authentication and enable seamless SSO.

#### <a name="option-b-switch-from-federation-to-pass-through-authentication-by-using-azure-ad-connect-and-powershell"></a>Option B: Switch from federation to pass-through authentication by using Azure AD Connect and PowerShell

Use this option if you didn't originally configure your federated domains by using Azure AD Connect.

First, enable pass-through authentication:

1. On the Azure AD Connect server, open the Azure AD Connect wizard.
2. Select **Change user sign-in**, and then select **Next**.
3. On the **Connect to Azure AD** page, enter the username and password of a global administrator account.
4. On the **User sign-in** page, select the **Pass-through authentication** button. Select **Enable single sign-on**, and then select **Next**.
5. On the **Enable single sign-on** page, enter the credentials of a domain administrator account, and then select **Next**.

> [!NOTE]
> Domain administrator account credentials are required to enable seamless SSO. The process completes the following actions, which require these elevated permissions. The domain administrator account credentials aren't stored in Azure AD Connect or in Azure AD; they are used only to turn on the feature. The credentials are removed when the process successfully finishes.
>
> 1. A computer account named AZUREADSSOACC (which represents Azure AD) is created in your on-premises Active Directory instance.
> 2. The computer account's Kerberos decryption key is shared securely with Azure AD.
> 3. Two Kerberos service principal names (SPNs) are created to represent two URLs that are used during Azure AD sign-in.

6. On the **Ready to configure** page, make sure that the **Start the synchronization process when configuration completes** check box is selected. Then select **Configure**.<br />
![Screenshot that shows the Ready to configure page and the Configure button](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image18.png)<br />

   The following steps occur when you select **Configure**:

   1. The first pass-through authentication agent is installed.
   2. The pass-through authentication feature is enabled.
   3. Seamless SSO is enabled.

7. Verify these settings:

   * **Federation** is set to **Enabled**.
   * **Seamless single sign-on** is set to **Enabled**.
   * **Pass-through authentication** is set to **Enabled**.

   ![Screenshot that shows the settings in the User sign-in section](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image19.png)

8. Select **Pass-through authentication** and verify that the status is **Active**.<br />

   If the authentication agent isn't active, complete some [troubleshooting steps](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnect-troubleshoot-pass-through-authentication) before you continue with the domain conversion process in the next step. You risk causing an authentication outage if you convert your domains before you validate that your pass-through authentication agents are successfully installed and that their status is **Active** in the Azure portal.

Next, deploy additional authentication agents:

1. In the Azure portal, go to **Azure Active Directory** > **Azure AD Connect**, and then select **Pass-through authentication**.
2. On the **Pass-through authentication** page, select the **Download** button.
3. On the **Download agent** page, select **Accept terms and download**. The authentication agent starts downloading. Install the secondary authentication agent on a domain-joined server.

> [!NOTE]
> The first agent is always installed on the Azure AD Connect server itself as part of the configuration changes made in the **User sign-in** section of the Azure AD Connect tool. Install any additional authentication agents on a separate server. We recommend that you have two or three additional authentication agents available.

4. Run the authentication agent installation. During the installation, you must enter the credentials of a global administrator account.<br />
![Screenshot that shows the Install button on the Microsoft Azure AD Connect Authentication Agent Package page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image23.png)<br />
![Screenshot that shows the sign-in page](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image24.png)<br />
5. When the authentication agent is installed, you can return to the pass-through authentication health page to check the status of the additional agents.

At this point, federated authentication is still active and operational for your domains. To continue with the deployment, you must convert each domain from federated identity to managed identity so that pass-through authentication starts serving authentication requests for the domain.

You don't have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with the domain that has the lowest number of users.

Complete the conversion by using the Azure AD PowerShell module:

1. In PowerShell, sign in to Azure AD by using a global administrator account.
2. To convert the first domain, run the following command:

   ```PowerShell
   Set-MsolDomainAuthentication -Authentication Managed -DomainName <domain name>
   ```

3. In the Azure AD portal, select **Azure Active Directory** > **Azure AD Connect**.
4. After you have converted all the federated domains, verify these settings:

   * **Federation** is set to **Disabled**.
   * **Seamless single sign-on** is set to **Enabled**.
   * **Pass-through authentication** is set to **Enabled**.<br />
![Screenshot that shows the settings in the User sign-in section](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image26.png)<br />

## <a name="testing-and-next-steps"></a>Testing and next steps

Complete the following tasks to verify pass-through authentication and to finish the conversion process.

### <a name="test-pass-through-authentication"></a>Test pass-through authentication

When your tenant used federated identity, users were redirected from the Azure AD sign-in page to your AD FS environment.
Now that the tenant is configured to use pass-through authentication instead of federated authentication, users aren't redirected to AD FS. Instead, users sign in directly on the Azure AD sign-in page.

To test pass-through authentication:

1. Open Internet Explorer in InPrivate mode so that seamless SSO doesn't sign you in automatically.
2. Go to the Office 365 sign-in page ([https://portal.office.com](https://portal.office.com/)).
3. Enter a user UPN, and then select **Next**. Make sure that you enter the UPN of a hybrid user who was synced from your on-premises Active Directory instance and who previously used federated authentication. A page on which you enter the username and password appears:

   ![Screenshot that shows the sign-in page where you enter a username](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image27.png)

   ![Screenshot that shows the sign-in page where you enter a password](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image28.png)

4. After you enter the password and select **Sign in**, you're redirected to the Office 365 portal.

   ![Screenshot that shows the Office 365 portal](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image29.png)

### <a name="test-seamless-sso"></a>Test seamless SSO

To test seamless SSO:

1. Sign in to a domain-joined machine that is connected to the corporate network.
2. In Internet Explorer or Chrome, go to one of the following URLs (replace "contoso" with your domain):

   * https:\/\/myapps.Microsoft.com/contoso.com
   * https:\/\/myapps.Microsoft.com/contoso.onmicrosoft.com

   The user is briefly redirected to the Azure AD sign-in page, which shows the message "Trying to sign you in".
   The user doesn't need to enter a username or password.<br />
   ![Screenshot that shows the Azure AD sign-in page and the message](media/plan-migrate-adfs-pass-through-authentication/migrating-adfs-to-pta_image30.png)<br />
3. The user is redirected and is successfully signed in to the access panel:

> [!NOTE]
> Seamless SSO works on Office 365 services that support domain hints (for example, myapps.microsoft.com/contoso.com). The Office 365 portal (portal.office.com) doesn't currently support domain hints. Users must enter a UPN. After a UPN is entered, seamless SSO retrieves the Kerberos ticket on behalf of the user. The user is signed in without entering a password.

> [!TIP]
> Consider deploying [Azure AD Hybrid Join on Windows 10](https://docs.microsoft.com/azure/active-directory/device-management-introduction) for an improved SSO experience.

### <a name="remove-the-relying-party-trust"></a>Remove the relying party trust

After you verify that all users and clients are successfully authenticating through Azure AD, it's safe to remove the Office 365 relying party trust. If you don't use AD FS for other purposes (that is, for other relying party trusts), it's safe to decommission AD FS at this point.

### <a name="rollback"></a>Rollback

If you discover a major issue and can't resolve it quickly, you might choose to roll the solution back to federation. Consult the federation design and deployment documentation for your specific deployment details. The process should include these tasks:

* Convert managed domains to federated authentication by using the **Convert-MsolDomainToFederated** cmdlet.
* As necessary, configure additional claims rules.
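The rollback tasks above can be sketched in PowerShell. This is an illustration only, not the documented procedure — it assumes the MSOnline module is installed, that you run it where your AD FS configuration is reachable, and that `contoso.com` stands in for your own domain name:

```PowerShell
# Sketch only: roll one domain back from managed to federated authentication.
Connect-MsolService                       # sign in with a global administrator account

# Convert the domain back to federated identity against the existing AD FS farm.
Convert-MsolDomainToFederated -DomainName contoso.com -SupportMultipleDomain

# Confirm the Authentication value now reads "Federated" for the domain.
Get-MsolDomain | Select-Object Name, Authentication
```

`-SupportMultipleDomain` is only needed when the AD FS farm federates more than one domain; drop it for a single-domain setup.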
### <a name="sync-userprincipalname-updates"></a>Sync userPrincipalName updates

Historically, updates to the **UserPrincipalName** attribute, which uses the sync service from the on-premises environment, are blocked unless both of these conditions are true:

* The user is in a managed (non-federated) identity domain.
* The user hasn't been assigned a license.

To learn how to verify or turn on this feature, see [Sync userPrincipalName updates](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnectsyncservice-features).

## <a name="roll-over-the-seamless-sso-kerberos-decryption-key"></a>Roll over the seamless SSO Kerberos decryption key

It's important to frequently roll over the Kerberos decryption key of the AZUREADSSOACC computer account (which represents Azure AD). The AZUREADSSOACC computer account is created in your on-premises Active Directory forest. We strongly recommend that you roll over the Kerberos decryption key at least every 30 days to align with the way that Active Directory domain members submit password changes. There is no associated device attached to the AZUREADSSOACC computer account object, so you must perform the rollover manually.

Initiate the rollover of the seamless SSO Kerberos decryption key on the on-premises server that's running Azure AD Connect. For more information, see [How do I roll over the Kerberos decryption key of the AZUREADSSOACC computer account?](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnect-sso-faq).

## <a name="monitoring-and-logging"></a>Monitoring and logging

Monitor the servers that run the authentication agents to maintain the solution availability. In addition to general server performance counters, the authentication agents expose performance objects that can help you understand authentication statistics and errors.
Authentication agents log operations to the Windows event logs that are located under Application and Services Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin. You can also turn on logging for troubleshooting. For more information, see [Troubleshoot Azure Active Directory pass-through authentication](https://docs.microsoft.com/azure/active-directory/connect/active-directory-aadconnect-troubleshoot-Pass-through-authentication).

## <a name="next-steps"></a>Next steps

* Learn about [Azure AD Connect design concepts](plan-connect-design-concepts.md).
* Choose [the right authentication method](https://docs.microsoft.com/azure/security/fundamentals/choose-ad-authn).
* Learn more about [supported topologies](plan-connect-design-concepts.md).
# Building-Mobile-Apps-with-Ionic-4

Code Repository for Building Mobile Apps with Ionic 4, published by Packt
# T1070 - Indicator Removal on Host

## [Description from ATT&CK](https://attack.mitre.org/wiki/Technique/T1070)

<blockquote>Adversaries may delete or alter generated artifacts on a host system, including logs or captured files such as quarantined malware. Locations and format of logs are platform or product-specific; however, standard operating system logs are captured as Windows events or Linux/macOS files such as [Bash History](https://attack.mitre.org/techniques/T1139) and /var/log/*.

These actions may interfere with event collection, reporting, or other notifications used to detect intrusion activity. This may compromise the integrity of security solutions by causing notable events to go unreported. This activity may also impede forensic analysis and incident response, due to lack of sufficient data to determine what occurred.</blockquote>

## Atomic Tests

- [Atomic Test #1 - Indicator Removal using FSUtil](#atomic-test-1---indicator-removal-using-fsutil)

<br/>

## Atomic Test #1 - Indicator Removal using FSUtil

Manages the update sequence number (USN) change journal, which provides a persistent log of all changes made to files on the volume. Upon execution, no output will be displayed. More information about fsutil can be found at https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil-usn

**Supported Platforms:** Windows

#### Attack Commands: Run with `command_prompt`! Elevation Required (e.g. root or admin)

```cmd
fsutil usn deletejournal /D C:
```

#### Cleanup Commands:

```cmd
fsutil usn createjournal m=1000 a=100 c:
```

<br/>
# NDMI074 - Algoritmy a jejich implementace

Notes to the 2020/2021 version of the Algorithms and their Implementation course (in Czech) taught by Mgr. Martin Mareš, Ph.D. at Charles University in Prague. Lecture recordings, board notes, and other information are available at the [lecturer's page](http://mj.ucw.cz/vyuka/2021/aim/).
<!-- mdformat off(b/169948621#comment2) -->

As of June 1, 2021, the TFLM codebase has moved to a [stand-alone github repository](https://github.com/tensorflow/tflite-micro). The code in the current location is in a read-only state until June 30, 2021, at which point in time it will be deleted.
# The bcmath library

This is a fork of the bcmath library initially created by Phil Nelson in May 2000.

Bcmath is a library of arbitrary precision math routines. These routines, in a different form, provide the arbitrary precision calculations for GNU bc and GNU dc. This library is provided to make these routines useful in a larger context with fewer restrictions on their use. These routines do not duplicate the functionality of the GNU gmp library. The gmp library is similar, but the actual computation is different.

The initial library (version 0.1) was created on 2000-05-21 and then forked and bundled into PHP with version 0.2, released on 2000-06-07.

## FAQ

* Why BCMATH?

  The math routines of GNU bc become more generally useful in a library form. By separating the BCMATH library from GNU bc, GNU bc can be under the GPL and BCMATH can be under the LGPL.

* Why BCMATH when GMP exists?

  GMP has "integers" (no digits after a decimal), "rational numbers" (stored as 2 integers) and "floats". None of these will correctly represent a POSIX BC number. Floats are the closest, but will not behave correctly for many computations. For example, BC numbers have a "scale" that represents the number of digits after the decimal point. Multiplying two of these numbers requires one to calculate an exact number of digits after the decimal point regardless of the number of digits in the integer part. GMP floats have a "fixed, but arbitrary" mantissa, so multiplying two floats will end up dropping digits BC must calculate.

## Credits

Phil Nelson ([email protected]) wrote the bcmath library.

## License

The bcmath library is released under the GNU Lesser General Public License v2.1.
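The scale behavior described above — an exact product kept to a fixed number of digits after the decimal point, something binary floats cannot guarantee — can be illustrated with Python's standard `decimal` module. This is a simplified sketch, not bcmath's actual API: real bc derives the result scale from both operands' scales, whereas `bc_mul` here just takes the target scale as a parameter.

```python
from decimal import Decimal, ROUND_DOWN

def bc_mul(a: str, b: str, scale: int) -> str:
    """Multiply two decimal-string numbers BC-style (sketch):
    compute the exact product, then keep `scale` digits after the
    decimal point, truncating rather than rounding."""
    exact = Decimal(a) * Decimal(b)        # exact for inputs of this size
    quantum = Decimal(1).scaleb(-scale)    # 10**-scale, e.g. 0.0001 for scale=4
    return str(exact.quantize(quantum, rounding=ROUND_DOWN))

print(bc_mul("1.05", "2.05", 4))   # 2.1525 — exact product, full scale preserved
print(bc_mul("1.999", "1.999", 2)) # 3.99 — 3.996001 truncated to scale 2
print(1.05 * 2.05)                 # a binary float shows rounding artifacts here
```

The truncation on the last assertion (3.996001 → 3.99, not 4.00) is exactly the kind of digit bookkeeping the FAQ says GMP floats can't be made to do directly.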
---
title: "Overview of unlimited archiving"
f1.keywords:
- NOCSH
ms.author: markjjo
author: markjjo
manager: laurawi
audience: Admin
ms.topic: overview
ms.service: O365-seccomp
localization_priority: Normal
ms.collection:
- Strat_O365_IP
- M365-security-compliance
search.appverid:
- MOE150
- MET150
ms.assetid: 37cdbb02-a24a-4093-8bdb-2a7f0b3a19ee
description: "Learn about auto-expanding archiving, which provides unlimited archive storage for Exchange Online mailboxes."
---

# Overview of unlimited archiving

In Office 365, archive mailboxes provide users with additional mailbox storage space. After a user's archive mailbox is enabled, up to 100 GB of additional storage is available. In the past, when the 100-GB storage quota was reached, organizations had to contact Microsoft to request additional storage space for an archive mailbox. That's no longer the case.

The unlimited archiving feature in Microsoft 365 (called *auto-expanding archiving*) provides additional storage in archive mailboxes. When the storage quota in the archive mailbox is reached, Microsoft 365 automatically increases the size of the archive, which means that users won't run out of mailbox storage space and administrators won't have to request additional storage for archive mailboxes. For step-by-step instructions for turning on auto-expanding archiving, see [Enable unlimited archiving](enable-unlimited-archiving.md).

> [!NOTE]
> Auto-expanding archiving also supports shared mailboxes. To enable the archive for a shared mailbox, an Exchange Online Plan 2 license or an Exchange Online Plan 1 license with an Exchange Online Archiving license is required.

## How auto-expanding archiving works

As previously explained, additional mailbox storage space is created when a user's archive mailbox is enabled. When auto-expanding archiving is enabled, Microsoft 365 periodically checks the size of the archive mailbox.
When an archive mailbox gets close to its storage limit, Microsoft 365 automatically creates additional storage space for the archive. If the user runs out of this additional storage space, Microsoft 365 adds more storage space to the user's archive. This process happens automatically, which means administrators don't have to request additional archive storage or manage auto-expanding archiving.

Here's a quick overview of the process.

![Overview of the auto-expanding archiving process](../media/74355385-d990-44fe-8a87-6c3639d1f63f.png)

1. Archiving is enabled for a user mailbox or a shared mailbox. An archive mailbox with 100 GB of storage space is created, and the warning quota for the archive mailbox is set to 90 GB.

2. An administrator enables auto-expanding archiving for the mailbox. When the archive mailbox (including the Recoverable Items folder) reaches 90 GB, it's converted to an auto-expanding archive, and Microsoft 365 adds storage space to the archive. It can take up to 30 days for the additional storage space to be provisioned.

   > [!NOTE]
   > If a mailbox is placed on hold or assigned to a retention policy, the storage quota for the archive mailbox is increased to 110 GB when auto-expanding archiving is enabled. Similarly, the archive warning quota is increased to 100 GB.

3. Microsoft 365 automatically adds more storage space when necessary.

> [!IMPORTANT]
> Auto-expanding archiving is only supported for mailboxes used for individual users (or shared mailboxes) with a growth rate that doesn't exceed 1 GB per day. A user's archive mailbox is intended for just that user. Using journaling, transport rules, or auto-forwarding rules to copy messages to an archive mailbox is not permitted. Microsoft reserves the right to deny unlimited archiving in instances where a user's archive mailbox is used to store archive data for other users or in other cases of inappropriate use.

## What gets moved to the additional archive storage space?
To make efficient use of auto-expanding archive storage, folders may get moved. Microsoft 365 determines which folders get moved when additional storage is added to the archive. Sometimes when a folder is moved, one or more subfolders are automatically created and items from the original folder are distributed to these folders to facilitate the moving process. When viewing the archive portion of the folder list in Outlook, these subfolders are displayed under the original folder. The naming convention that Microsoft 365 uses to name these subfolders is **\<folder name\>_yyyy (Created on mmm dd, yyyy h_mm)**, where:

- **yyyy** is the year the messages in the folder were received.
- **mmm dd, yyyy h_m** is the date and time that the subfolder was created by Office 365, in UTC format, based on the user's time zone and regional settings in Outlook.

The following screenshots show a folder list before and after messages are moved to an auto-expanded archive.

**Before additional storage is added**

![Folder list of archive mailbox before auto-expanding archive is provisioned](../media/5d6d6420-e562-4912-aaab-1c111762b3f6.png)

**After additional storage is added**

![Folder list of archive mailbox after auto-expanding archive is provisioned](../media/c03c5f51-23fa-4fc2-b887-7e7e5cce30da.png)

> [!NOTE]
> As previously described, Microsoft 365 moves items to subfolders (and names them using the naming convention described above) to help distribute content to an auxiliary archive. But moving items to subfolders may not always be the case. Sometimes an entire folder may be moved to an auxiliary archive. In this case, the folder will retain its original name. It won't be apparent in the folder list in Outlook that the folder was moved to an auxiliary archive.
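The subfolder naming convention above can be sketched as a small formatting helper. This is a hypothetical illustration of the documented pattern, not Microsoft 365 code — in particular, the exact hour rendering (12-hour clock with no leading zero) is an assumption on our part:

```python
from datetime import datetime

def archive_subfolder_name(folder: str, msg_year: int, created: datetime) -> str:
    """Build a name following the documented pattern:
    <folder name>_yyyy (Created on mmm dd, yyyy h_mm).
    `msg_year` is the year the folder's messages were received;
    `created` is when the subfolder was provisioned.
    The 12-hour clock without a leading zero is an assumed detail."""
    hour_12 = created.hour % 12 or 12                      # 0/12 -> 12, 14 -> 2
    stamp = f"{created.strftime('%b %d, %Y')} {hour_12}_{created.minute:02d}"
    return f"{folder}_{msg_year} (Created on {stamp})"

print(archive_subfolder_name("Inbox", 2018, datetime(2021, 3, 5, 14, 7)))
# → Inbox_2018 (Created on Mar 05, 2021 2_07)
```

A name like `Inbox_2018 (Created on Mar 05, 2021 2_07)` is what you would expect to see nested under the original `Inbox` folder in the archive portion of the Outlook folder list.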
## Outlook requirements for accessing items in an auto-expanded archive

To access messages that are stored in an auto-expanded archive, users have to use one of the following Outlook clients:

- Outlook 2016 or Outlook 2019 for Windows
- Outlook on the web
- Outlook 2016 or Outlook 2019 for Mac

Here are some things to consider when using Outlook or Outlook on the web to access messages stored in an auto-expanded archive.

- You can access any folder in your archive mailbox, including ones that were moved to the auto-expanded storage area.

- Search for auto-expanded archiving is only available in Outlook Desktop as of Insiders build 16.0.12716.10000. Search is available in Outlook for the web. Similar to Online Archive, you can search for items that were moved to an additional storage area only by searching the folder itself. This means that you have to select the archive folder in the folder list to select the **Current Folder** option as the search scope. Similarly, if a folder in an auto-expanded storage area contains subfolders, you have to search each subfolder separately.

- Item counts in Outlook and Read/Unread counts (in Outlook and Outlook on the web) in an auto-expanded archive might not be accurate.

- You can delete items in a subfolder that points to an auto-expanded storage area, but the folder itself can't be deleted.

- You can't use the Recover Deleted Items feature to recover an item that was deleted from an auto-expanded storage area.

## Auto-expanding archiving and other compliance features

This section explains the functionality between auto-expanding archiving and other compliance and data governance features.

- **eDiscovery:** When you use an eDiscovery tool, such as Content Search or In-Place eDiscovery, the additional storage areas in an auto-expanded archive are also searched.
- **Retention:** When you put a mailbox on hold by using tools such as Litigation Hold in Exchange Online or eDiscovery case holds and retention policies in the security and compliance center, content located in an auto-expanded archive is also placed on hold.

- **Messaging records management (MRM):** If you use MRM deletion policies in Exchange Online to permanently delete expired mailbox items, expired items located in the auto-expanded archive will also be deleted.

- **Import service:** You can use the Office 365 Import service to import PST files to a user's auto-expanded archive. You can import up to 100 GB of data from PST files to the user's archive mailbox.

## More information

For more technical details about auto-expanding archiving, see [Microsoft 365: Auto-Expanding Archives FAQ](https://techcommunity.microsoft.com/t5/exchange-team-blog/office-365-auto-expanding-archives-faq/ba-p/607784).
# vet

Django dashboard to do the tasks listed below using PostgreSQL:

- Login page
- List of veterinary officers
- Onboard a veterinary officer
- Update a veterinary officer's information
- Deactivate a veterinary officer
---
uid: mvc/overview/older-versions-1/nerddinner/introducing-the-nerddinner-tutorial
title: Introducing the NerdDinner Tutorial | Microsoft Docs
author: shanselman
description: The best way to learn a new framework is to build something with it. This tutorial walks through the steps of building a small, but complete, application using ASP.NE...
ms.author: riande
ms.date: 07/27/2010
ms.assetid: 397522d5-0402-4b94-b810-a2fb564f869d
msc.legacyurl: /mvc/overview/older-versions-1/nerddinner/introducing-the-nerddinner-tutorial
msc.type: authoredcontent
ms.openlocfilehash: 154cfe6694cf723c0a1f8e33bfdb42c97594518f
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 03/06/2020
ms.locfileid: "78468933"
---
# <a name="introducing-the-nerddinner-tutorial"></a>Introducing the NerdDinner Tutorial

by [Scott Hanselman](https://github.com/shanselman)

[Download PDF](http://aspnetmvcbook.s3.amazonaws.com/aspnetmvc-nerdinner_v1.pdf)

> The best way to learn a new framework is to build something with it. This tutorial walks through the steps of building a small, but complete, application using ASP.NET MVC 1, and introduces some of the core concepts behind it.
>
> If you are using ASP.NET MVC 3, we recommend you follow the [Getting Started with MVC 3](../../older-versions/getting-started-with-aspnet-mvc3/cs/intro-to-aspnet-mvc-3.md) or [MVC Music Store](../../older-versions/mvc-music-store/mvc-music-store-part-1.md) tutorials.

## <a name="nerddinner-tutorial"></a>NerdDinner Tutorial

The best way to learn a new framework is to build something with it.
This tutorial walks you through building a small but complete application with ASP.NET MVC and introduces some of the core concepts behind it. The application we'll build is called "NerdDinner". NerdDinner provides an easy way for people to find and organize dinners online:

![](introducing-the-nerddinner-tutorial/_static/image1.png)

NerdDinner enables registered users to create, edit, and delete dinners. It enforces a consistent set of validation and business rules across the application:

![](introducing-the-nerddinner-tutorial/_static/image2.png)

Visitors can use an AJAX-based map to search for upcoming dinners being held near them:

![](introducing-the-nerddinner-tutorial/_static/image3.png)

Clicking a dinner takes them to a details page where they can learn more about it:

![](introducing-the-nerddinner-tutorial/_static/image4.png)

If they are interested in attending the dinner, they can sign in or register on the site:

![](introducing-the-nerddinner-tutorial/_static/image5.png)

They can then click an AJAX-based RSVP link to attend the event:

![](introducing-the-nerddinner-tutorial/_static/image6.png)

![](introducing-the-nerddinner-tutorial/_static/image7.png)

### <a name="implementing-nerddinner"></a>Implementing NerdDinner

We're going to begin the NerdDinner application by using the File-&gt;New Project command in Visual Studio to create a new ASP.NET MVC project. We'll then incrementally add functionality and features. Along the way we'll cover the following topics:

1. [Create a New ASP.NET MVC Project](create-a-new-aspnet-mvc-project.md)
2. [Create a Database](create-a-database.md)
3.
[Build a Model with Business Rule Validations](build-a-model-with-business-rule-validations.md)
4. [Use Controllers and Views to Implement a Listing/Details UI](use-controllers-and-views-to-implement-a-listingdetails-ui.md)
5. [Provide CRUD (Create, Read, Update, Delete) Data Form Entry Support](provide-crud-create-read-update-delete-data-form-entry-support.md)
6. [Use ViewData and Implement ViewModel Classes](use-viewdata-and-implement-viewmodel-classes.md)
7. [Re-use UI Using Master Pages and Partials](re-use-ui-using-master-pages-and-partials.md)
8. [Implement Efficient Data Paging](implement-efficient-data-paging.md)
9. [Secure Applications Using Authentication and Authorization](secure-applications-using-authentication-and-authorization.md)
10. [Use AJAX to Deliver Dynamic Updates](use-ajax-to-deliver-dynamic-updates.md)
11. [Use AJAX to Implement Mapping Scenarios](use-ajax-to-implement-mapping-scenarios.md)
12. [Enable Automated Unit Testing](enable-automated-unit-testing.md)

You can build your own copy of NerdDinner from scratch by completing each step we walk through in this chapter. Alternatively, you can download a complete version of the source code here: [NerdDinner on GitHub](https://github.com/AspNetMVPSamples/NerdDinner). You can also optionally [download a free PDF version of this tutorial](http://aspnetmvcbook.s3.amazonaws.com/aspnetmvc-nerdinner_v1.pdf) if you'd prefer to read the tutorial offline. You can use either Visual Studio 2008 or the free Visual Web Developer 2008 Express to build the application. You can use either SQL Server or the free SQL Server Express for the database.
You can install ASP.NET MVC, Visual Web Developer 2008 Express, and SQL Server Express (all free) using V2 of the [Microsoft Web Platform Installer](https://www.microsoft.com/web/downloads/platform.aspx).

### <a name="now-lets-get-started"></a>Now let's get started...

Now that we've covered what NerdDinner is, let's roll up our sleeves and start writing code. We'll begin by using File-&gt;New Project in Visual Studio to create the NerdDinner application.

> [!div class="step-by-step"]
> [Next](create-a-new-aspnet-mvc-project.md)
71.88764
525
0.811972
deu_Latn
0.968875
dbfa2d219f9f24bc2d8a64f4c3ee16dceeb7214d
903
md
Markdown
sdk/docs/TrialPlSectionsResponseTrialPlSectionsBalances.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
6
2019-10-11T06:52:07.000Z
2022-03-05T02:30:32.000Z
sdk/docs/TrialPlSectionsResponseTrialPlSectionsBalances.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
7
2019-09-10T01:30:30.000Z
2021-10-21T01:18:13.000Z
sdk/docs/TrialPlSectionsResponseTrialPlSectionsBalances.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
5
2019-10-11T06:56:10.000Z
2022-02-05T14:55:21.000Z
# TrialPlSectionsResponseTrialPlSectionsBalances

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**accountCategoryName** | **String** | Account category name | [optional]
**accountGroupName** | **String** | Financial statement display name (included only when account_item_display_type: group is specified and the row is a financial statement display name) | [optional]
**accountItemId** | **Integer** | Account item ID (included only for account item rows) | [optional]
**accountItemName** | **String** | Account item name (included only for account item rows) | [optional]
**closingBalance** | **Integer** | Closing balance | [optional]
**hierarchyLevel** | **Integer** | Hierarchy level | [optional]
**parentAccountCategoryName** | **String** | Parent account category name (included only for account category rows that have a higher level) | [optional]
**sections** | [**List&lt;TrialPlSectionsResponseTrialPlSectionsSections&gt;**](TrialPlSectionsResponseTrialPlSectionsSections.md) | Sections | [optional]
**totalLine** | **Boolean** | Total line (included only for account category rows) | [optional]
41.045455
149
0.672204
yue_Hant
0.48115
dbfa579350db54c6b185a6292821d5dd2877ea97
1,231
md
Markdown
docs/framework/configure-apps/file-schema/windows-workflow-foundation/microsoft-visualstudio-activities-asr-clientactivitybuilder.md
adamsitnik/docs.pl-pl
c83da3ae45af087f6611635c348088ba35234d49
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/configure-apps/file-schema/windows-workflow-foundation/microsoft-visualstudio-activities-asr-clientactivitybuilder.md
adamsitnik/docs.pl-pl
c83da3ae45af087f6611635c348088ba35234d49
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/configure-apps/file-schema/windows-workflow-foundation/microsoft-visualstudio-activities-asr-clientactivitybuilder.md
adamsitnik/docs.pl-pl
c83da3ae45af087f6611635c348088ba35234d49
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Microsoft.VisualStudio.Activities.Asr.ClientActivityBuilder
ms.date: 03/30/2017
ms.topic: reference
api_name:
- Microsoft.VisualStudio.Activities.Asr.ClientActivityBuilder
api_location:
- Microsoft.VisualStudio.Activities.dll
api_type:
- Assembly
ms.assetid: e7287d3f-59ee-448f-b7fe-b640508501a5
ms.openlocfilehash: dbaf2dd6834bfdabc717e63c32309086bc8aed3a
ms.sourcegitcommit: 68653db98c5ea7744fd438710248935f70020dfb
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 08/22/2019
ms.locfileid: "69947586"
---
# <a name="microsoftvisualstudioactivitiesasrclientactivitybuilder"></a>Microsoft.VisualStudio.Activities.Asr.ClientActivityBuilder

This class is used to create and configure an <xref:System.Activities.ActivityBuilder> object that supplies data to a workflow activity.

## <a name="syntax"></a>Syntax

```csharp
public class ClientActivityBuilder
```

## <a name="see-also"></a>See also

- [Microsoft.VisualStudio.Activities.Asr.ClientActivityBuilder.Build](microsoft-visualstudio-activities-asr-clientactivitybuilder-build.md)
- [Microsoft.VisualStudio.Activities.Asr.ClientActivityBuilder..ctor](microsoft-visualstudio-activities-asr-clientactivitybuilder-ctor.md)
38.46875
147
0.821284
yue_Hant
0.394731
dbfa71f427efd8b90ab4de48f24caa9f2b4fc754
32,675
md
Markdown
docs/csharp/whats-new/csharp-6.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/whats-new/csharp-6.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/whats-new/csharp-6.md
dhernandezb/docs.es-es
cf1637e989876a55eb3c57002818d3982591baf1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: What's New in C# 6 - C# Guide
description: Learn the new features in version 6 of the C# language
ms.date: 09/22/2016
ms.assetid: 4d879f69-f889-4d3f-a781-75194e143400
ms.openlocfilehash: ad3515e1fc7d70e1377f007276c369d2884780f0
ms.sourcegitcommit: c93fd5139f9efcf6db514e3474301738a6d1d649
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/27/2018
ms.locfileid: "50194038"
---
# <a name="whats-new-in-c-6"></a>What's new in C# 6

The 6.0 release of C# contains many features that improve productivity for developers. Features in this release include:

* [Read-only auto-properties](#read-only-auto-properties):
    - You can create read-only auto-properties that can be set only in constructors.
* [Auto-property initializers](#auto-property-initializers):
    - You can write initialization expressions to set the initial value of an auto-property.
* [Expression-bodied function members](#expression-bodied-function-members):
    - You can author one-line methods using lambda expressions.
* [using static](#using-static):
    - You can import all the methods of a single class into the current namespace.
* [Null-conditional operators](#null-conditional-operators):
    - You can concisely and safely access members of an object while still checking for null, using the null-conditional operator.
* [String interpolation](#string-interpolation):
    - You can write string formatting expressions using inline expressions instead of positional arguments.
* [Exception filters](#exception-filters):
    - You can catch expressions based on properties of the exception or other program state.
* [The `nameof` expression](#the-nameof-expression):
    - You can let the compiler generate string representations of symbols.
* [await in catch and finally blocks](#await-in-catch-and-finally-blocks):
    - You can use `await` expressions in locations that previously disallowed them.
* [Index initializers](#index-initializers):
    - You can author initialization expressions for associative containers as well as sequence containers.
* [Extension Add methods in collection initializers](#extension-add-methods-in-collection-initializers):
    - Collection initializers can rely on accessible extension methods, in addition to member methods.
* [Improved overload resolution](#improved-overload-resolution):
    - Some constructs that previously generated ambiguous method calls now resolve correctly.
* [`deterministic` compiler option](#deterministic-compiler-output):
    - The deterministic compiler option ensures that subsequent compilations of the same source generate the same binary output.

The overall effect of these features is that you write more concise code that is also more readable. The syntax contains less ceremony for many common practices, making it easier to see the design intent. Learn these features well, and you'll be more productive, write more readable code, and concentrate more on your core features than on the constructs of the language. The remainder of this topic provides details on each of these features.

## <a name="auto-property-enhancements"></a>Auto-property enhancements

The syntax for automatically implemented properties (usually referred to as "auto-properties") made it easy to create properties that had simple get and set accessors:

[!code-csharp[ClassicAutoProperty](../../../samples/snippets/csharp/new-in-6/oldcode.cs#ClassicAutoProperty)]

However, this simple syntax limited the kinds of designs you could support using auto-properties.
C# 6 improves the capabilities of auto-properties so that you can use them in more scenarios. You won't need to fall back to the more verbose syntax of declaring and manipulating the backing field by hand as often. The new syntax addresses scenarios for read-only properties, and for initializing the variable storage behind an auto-property.

### <a name="read-only-auto-properties"></a>Read-only auto-properties

*Read-only auto-properties* provide a more concise syntax to create immutable types. The closest you could get to immutable types in earlier versions of C# was to declare private setters:

[!code-csharp[ClassicReadOnlyAutoProperty](../../../samples/snippets/csharp/new-in-6/oldcode.cs#ClassicReadOnlyAutoProperty)]

Using this syntax, the compiler doesn't ensure that the type really is immutable. It only enforces that the `FirstName` and `LastName` properties aren't modified from any code outside the class. Read-only auto-properties enable true read-only behavior.
You declare the auto-property with only a get accessor:

[!code-csharp[ReadOnlyAutoProperty](../../../samples/snippets/csharp/new-in-6/newcode.cs#ReadOnlyAutoProperty)]

The `FirstName` and `LastName` properties can be set only in the body of a constructor:

[!code-csharp[ReadOnlyAutoPropertyConstructor](../../../samples/snippets/csharp/new-in-6/newcode.cs#ReadOnlyAutoPropertyConstructor)]

Trying to set `LastName` in another method generates a `CS0200` compilation error:

```csharp
public class Student
{
    public string LastName { get; }

    public void ChangeName(string newLastName)
    {
        // Generates CS0200: Property or indexer cannot be assigned to -- it is read only
        LastName = newLastName;
    }
}
```

This feature enables true language support for creating immutable types, using the more concise and convenient auto-property syntax. If adding this syntax doesn't remove an accessible method, it's a [binary-compatible change](version-update-considerations.md#binary-compatible-changes).

### <a name="auto-property-initializers"></a>Auto-property initializers

*Auto-property initializers* let you declare the initial value for an auto-property as part of the property declaration. In earlier versions, these properties needed setters, and you had to use that setter to initialize the data storage used by the backing field. Consider this class for a student that contains the name and a list of the student's grades:

[!code-csharp[Construction](../../../samples/snippets/csharp/new-in-6/oldcode.cs#Construction)]

As this class grows, you may include other constructors. Each constructor needs to initialize this field, or you'll introduce errors.
C# 6 enables you to assign an initial value for the storage used by an auto-property in the auto-property declaration:

[!code-csharp[Initialization](../../../samples/snippets/csharp/new-in-6/newcode.cs#Initialization)]

The `Grades` member is initialized where it's declared. That makes it easier to perform the initialization exactly once. The initialization is part of the property declaration, making it easier to match the storage allocation with the public interface for `Student` objects. Property initializers can be used with read/write properties as well as read-only properties, as shown here:

[!code-csharp[ReadWriteInitialization](../../../samples/snippets/csharp/new-in-6/newcode.cs#ReadWriteInitialization)]

## <a name="expression-bodied-function-members"></a>Expression-bodied function members

The body of many members we write consists of only one statement that can be represented as an expression. You can reduce that syntax by writing an expression-bodied member instead. It works for methods and read-only properties. For example, an override of `ToString()` is often a great candidate:

[!code-csharp[ToStringExpressionMember](../../../samples/snippets/csharp/new-in-6/newcode.cs#ToStringExpressionMember)]

You can also use expression-bodied members in read-only properties:

[!code-csharp[FullNameExpressionMember](../../../samples/snippets/csharp/new-in-6/newcode.cs#FullNameExpressionMember)]

Changing an existing member to an expression-bodied member is a [binary-compatible change](version-update-considerations.md#binary-compatible-changes).

## <a name="using-static"></a>using static

The *using static* enhancement enables you to import the static methods of a single class. Previously, the `using` statement imported all types in a namespace.
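The property features covered above (read-only auto-properties, auto-property initializers, and expression-bodied members) can be combined in a single small class. The following sketch uses illustrative names rather than the article's actual sample files:

```csharp
using System.Collections.Generic;

public class Student
{
    public Student(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // Read-only auto-properties: settable only from a constructor.
    public string FirstName { get; }
    public string LastName { get; }

    // Auto-property initializer: the backing list is created exactly once.
    public List<double> Grades { get; } = new List<double>();

    // Expression-bodied members for single-expression bodies.
    public string FullName => $"{FirstName} {LastName}";
    public override string ToString() => FullName;
}
```

Attempting to assign `FirstName` or `LastName` outside a constructor produces the `CS0200` error shown earlier.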
Often, you use a class's static methods throughout your code. Repeatedly typing the class name can obscure the meaning of the code. A common example is writing classes that perform many numeric calculations. Your code will be littered with <xref:System.Math.Sin%2A>, <xref:System.Math.Sqrt%2A>, and calls to other methods in the <xref:System.Math> class. The new `using static` syntax can make these classes much cleaner to read. You specify the class you're using:

[!code-csharp[UsingStaticMath](../../../samples/snippets/csharp/new-in-6/newcode.cs#UsingStaticMath)]

And now, you can use any static method in the <xref:System.Math> class without qualifying the <xref:System.Math> class. The <xref:System.Math> class is a great use case for this feature because it doesn't contain any instance methods. You can also use `using static` to import a class's static methods for a class that has both static and instance methods. One of the most useful examples is <xref:System.String>:

[!code-csharp[UsingStatic](../../../samples/snippets/csharp/new-in-6/newcode.cs#UsingStatic)]

> [!NOTE]
> You must use the fully qualified class name, `System.String`, in a static using statement. You can't use the `string` keyword instead.

You can now call static methods defined in the <xref:System.String> class without qualifying those methods as members of that class:

[!code-csharp[UsingStaticString](../../../samples/snippets/csharp/new-in-6/newcode.cs#UsingStaticString)]

The `static using` feature and extension methods interact in interesting ways, and the language design includes some rules that specifically address those interactions. The goal is to minimize any chance of breaking changes in existing codebases, including yours.
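As a minimal sketch of the `using static` directive just described (the `Geometry` class and `Hypotenuse` method are illustrative names, not from the article's samples):

```csharp
using static System.Math;
using static System.Console;

public static class Geometry
{
    // Sqrt resolves against System.Math without qualification.
    public static double Hypotenuse(double a, double b) => Sqrt(a * a + b * b);

    public static void Main()
    {
        // WriteLine resolves against System.Console.
        WriteLine(Hypotenuse(3, 4)); // prints 5
    }
}
```

Without the `using static` lines, each call would need to be written as `Math.Sqrt(...)` and `Console.WriteLine(...)`.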
Extension methods are only in scope when called using the extension method invocation syntax, not when called as a static method. You'll often see this in LINQ queries. You can import the LINQ pattern by importing <xref:System.Linq.Enumerable>:

[!code-csharp[UsingStaticLinq](../../../samples/snippets/csharp/new-in-6/newcode.cs#usingStaticLinq)]

This imports all the methods in the <xref:System.Linq.Enumerable> class. However, the extension methods are only in scope when called as extension methods. They aren't in scope if they're called using the static method syntax:

[!code-csharp[UsingStaticLinqMethod](../../../samples/snippets/csharp/new-in-6/newcode.cs#UsingStaticLinkMethod)]

This decision is because extension methods are typically called using extension method invocation expressions. In the rare case where they're called using the static method call syntax, it's to resolve ambiguity, so requiring the class name as part of the call seems wise. There's one last feature of `static using`: the `static using` directive also imports any nested types, so you can reference nested types without qualification.

## <a name="null-conditional-operators"></a>Null-conditional operators

Null values complicate code. You need to check every access of variables to ensure you aren't dereferencing `null`. The *null-conditional operator* makes those checks much easier and more fluid. Simply replace the member access `.` with `?.`:

[!code-csharp[NullConditional](../../../samples/snippets/csharp/new-in-6/program.cs#NullConditional)]

In the preceding example, the variable `first` is assigned `null` if the person object is `null`. Otherwise, it's assigned the value of the `FirstName` property.
Most importantly, the `?.` means that this line of code doesn't generate a `NullReferenceException` when the `person` variable is `null`. Instead, it short-circuits and produces `null`. Also notice that this expression returns a `string`, regardless of the value of `person`. In the case of short-circuiting, the typed `null` value returned matches the full expression. You can often use this construct with the *null-coalescing* operator to assign default values when one of the properties is `null`:

[!code-csharp[NullCoalescing](../../../samples/snippets/csharp/new-in-6/program.cs#NullCoalescing)]

The right-hand operand of the `?.` operator isn't limited to properties or fields. You can also use it to conditionally invoke methods. The most common use of member functions with the null-conditional operator is to safely invoke delegates (or event handlers) that may be `null`. You'll call the delegate's `Invoke` method using the `?.` operator to access the member. You can see an example in the [delegate patterns](../delegates-patterns.md#handling-null-delegates) topic. The rules of the `?.` operator ensure that the left-hand side of the operator is evaluated only once. This is important, and it enables many idioms, including the example using event handlers. Let's start with the event handler usage. In earlier versions of C#, you were encouraged to write code like this:

```csharp
var handler = this.SomethingHappened;
if (handler != null)
    handler(this, eventArgs);
```

This was preferred over a simpler syntax:

```csharp
// Not recommended
if (this.SomethingHappened != null)
    this.SomethingHappened(this, eventArgs);
```

> [!IMPORTANT]
> The preceding example introduces a race condition. The `SomethingHappened` event may have subscribers when checked against `null`, and those subscribers may have been removed before the event is raised.
That would cause a <xref:System.NullReferenceException> to be thrown. In this second version, the `SomethingHappened` event handler might be non-null when tested, but if other code removes a handler, it could still be null when the event handler is called. The compiler generates code for the `?.` operator that ensures the left side (`this.SomethingHappened`) of the `?.` expression is evaluated once, and the result is cached:

```csharp
// preferred in C# 6:
this.SomethingHappened?.Invoke(this, eventArgs);
```

Ensuring that the left side is evaluated only once also enables you to use any expression, including method calls, on the left side of the `?.`. Even if these have side effects, they're evaluated once, so the side effects occur only once. You can see an example in our content on [events](../events-overview.md#language-support-for-events).

## <a name="string-interpolation"></a>String interpolation

C# 6 contains new syntax for composing strings from a string and embedded expressions that are evaluated to produce other string values. Traditionally, you needed to use positional parameters in a method like <xref:System.String.Format%2A?displayProperty=nameWithType>:

[!code-csharp[stringFormat](../../../samples/snippets/csharp/new-in-6/oldcode.cs#stringFormat)]

With C# 6, the new [string interpolation](../language-reference/tokens/interpolated.md) feature enables you to embed the expressions in a string. Simply preface the string with `$`:

[!code-csharp[stringInterpolation](../../../samples/snippets/csharp/new-in-6/newcode.cs#FullNameExpressionMember)]

This example uses property expressions for the substituted expressions. You can expand this syntax to use any expression.
For example, you could compute a student's grade point average as part of the interpolation:

[!code-csharp[stringInterpolationExpression](../../../samples/snippets/csharp/new-in-6/newcode.cs#stringInterpolationExpression)]

Running the preceding example, you'd find that the output for `Grades.Average()` might have more decimal places than you'd like. The string interpolation syntax supports all the format strings available with earlier formatting methods. You specify the format string inside the braces, adding a `:` after the expression to format:

[!code-csharp[stringInterpolationFormat](../../../samples/snippets/csharp/new-in-6/newcode.cs#stringInterpolationFormat)]

The preceding line of code formats the value of `Grades.Average()` as a floating-point number with two decimal places. The `:` is always interpreted as the separator between the expression being formatted and the format string. This can introduce problems when the expression uses a `:` in another way, such as a [conditional operator](../language-reference/operators/conditional-operator.md):

```csharp
public string GetGradePointPercentages() =>
    $"Name: {LastName}, {FirstName}. G.P.A: {Grades.Any() ? Grades.Average() : double.NaN:F2}";
```

In the preceding example, the `:` is parsed as the beginning of the format string, not as part of the conditional operator. In all cases where this happens, you can surround the expression with parentheses to force the compiler to interpret the expression as you intend:

[!code-csharp[stringInterpolationConditional](../../../samples/snippets/csharp/new-in-6/newcode.cs#stringInterpolationConditional)]

There's no limit to the expressions you can place between the braces.
You can run a complex LINQ query inside an interpolated string to perform computations and display the result:

[!code-csharp[stringInterpolationLinq](../../../samples/snippets/csharp/new-in-6/newcode.cs#stringInterpolationLinq)]

You can see from this example that you can even nest a string interpolation expression inside another string interpolation expression. This example is very likely more complex than you'd want in production code; rather, it illustrates the breadth of the feature. Any C# expression can be placed between the braces of an interpolated string. To get started with string interpolation, see the [String interpolation in C#](../tutorials/intro-to-csharp/interpolated-strings.yml) interactive tutorial.

### <a name="string-interpolation-and-specific-cultures"></a>String interpolation and specific cultures

All the examples shown in the preceding section format the strings using the culture of the machine where the code executes. Often, you may need to format the string produced using a specific culture. To do that, you make use of the fact that the object produced by a string interpolation can be implicitly converted to <xref:System.FormattableString?displayProperty=nameWithType>. The <xref:System.FormattableString> instance contains the composite format string and the results of evaluating the expressions before converting them to strings. Use the <xref:System.FormattableString.ToString(System.IFormatProvider)> method to specify the culture when formatting a string. For example, the following example produces a string using the German culture. (It uses the ',' character for the decimal separator and the '.' character as the thousands separator.)
```csharp
FormattableString str = $"Average grade is {s.Grades.Average()}";
var gradeStr = str.ToString(new System.Globalization.CultureInfo("de-DE"));
```

For more information, see the [String interpolation](../language-reference/tokens/interpolated.md) article and the [String interpolation in C#](../tutorials/string-interpolation.md) tutorial.

## <a name="exception-filters"></a>Exception filters

Another new feature in C# 6 is *exception filters*. Exception filters are clauses that determine when a given catch clause should be applied. If the expression used for an exception filter evaluates to `true`, the catch clause performs its normal processing on the exception. If the expression evaluates to `false`, the `catch` clause is skipped. One use is to examine information about an exception to determine whether a `catch` clause can process the exception:

[!code-csharp[ExceptionFilter](../../../samples/snippets/csharp/new-in-6/NetworkClient.cs#ExceptionFilter)]

The code generated by exception filters provides better information about an exception that is thrown and not processed. Before exception filters were added to the language, you needed to write code like the following:

[!code-csharp[ExceptionFilterOld](../../../samples/snippets/csharp/new-in-6/NetworkClient.cs#ExceptionFilterOld)]

The point where the exception is thrown changes between these two examples. In the earlier code, where a `throw` clause is used, any stack trace analysis or examination of crash dumps will show that the exception was thrown from the `throw` statement in the catch clause. The actual exception object will contain the original call stack, but all other information about any variables in the call stack between this throw point and the location of the original throw point has been lost.
Contrast that with how the code using an exception filter is processed: the exception filter expression evaluates to `false`, so execution never enters the `catch` clause. Because the `catch` clause doesn't execute, no stack unwinding takes place. That means the original throw location is preserved for any debugging activities that take place later. Whenever you need to evaluate fields or properties of an exception, instead of relying solely on the exception type, use an exception filter to preserve more debugging information. Another recommended pattern with exception filters is to use them for logging routines. This usage also leverages the way the exception throw point is preserved when an exception filter evaluates to `false`. A logging method would be a method whose argument is the exception and that unconditionally returns `false`:

[!code-csharp[ExceptionFilterLogging](../../../samples/snippets/csharp/new-in-6/ExceptionFilterHelpers.cs#ExceptionFilterLogging)]

Whenever you want to log an exception, you can add a catch clause and use this method as the exception filter:

[!code-csharp[LogException](../../../samples/snippets/csharp/new-in-6/program.cs#LogException)]

The exceptions are never caught, because the `LogException` method always returns `false`. That always-false exception filter means that you can place this logging handler before any other exception handlers:

[!code-csharp[LogExceptionRecovery](../../../samples/snippets/csharp/new-in-6/program.cs#LogExceptionRecovery)]

The preceding example highlights a very important facet of exception filters. Exception filters enable scenarios where a more general exception catch clause may appear before a more specific one.
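A minimal, self-contained sketch of the filter patterns just described (the `LogException` helper and the message text are illustrative, not taken from the article's sample project):

```csharp
using System;

public static class FilterDemo
{
    // Always returns false, so its catch clause never runs,
    // but the exception gets logged as it passes through.
    private static bool LogException(Exception e)
    {
        Console.Error.WriteLine($"Caught: {e.Message}");
        return false;
    }

    public static void Main()
    {
        try
        {
            throw new InvalidOperationException("HTTP 301");
        }
        catch (Exception e) when (LogException(e))
        {
            // Never reached: the filter always evaluates to false.
        }
        catch (InvalidOperationException e) when (e.Message.Contains("301"))
        {
            Console.WriteLine("Handled the moved-resource case.");
        }
    }
}
```

Note that the general `catch (Exception)` clause legally appears before the more specific one; without the `when` filter, the compiler would reject that ordering.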
It's also possible for the same exception type to appear in multiple catch clauses:

[!code-csharp[HandleNotChanged](../../../samples/snippets/csharp/new-in-6/NetworkClient.cs#HandleNotChanged)]

Another recommended pattern makes it easier to prevent catch clauses from processing exceptions when a debugger is attached. This technique enables you to run an application with the debugger and stop execution when an exception is thrown. In your code, add an exception filter so that any recovery code executes only when a debugger is not attached:

[!code-csharp[LogExceptionDebugger](../../../samples/snippets/csharp/new-in-6/program.cs#LogExceptionDebugger)]

After adding this code, set the debugger to break on all unhandled exceptions. Run the program under the debugger, and the debugger always breaks when `PerformFailingOperation()` throws a `RecoverableException`. The debugger breaks your program, because the catch clause won't be executed due to its false-returning exception filter.

## <a name="the-nameof-expression"></a>The `nameof` expression

The `nameof` expression evaluates to the name of a symbol. It's a great way to get tools working whenever you need the name of a variable, a property, or a member field. One of the most common uses for `nameof` is to provide the name of a symbol that caused an exception:

[!code-csharp[nameof](../../../samples/snippets/csharp/new-in-6/NewCode.cs#UsingStaticString)]

Another use is with XAML-based applications that implement the `INotifyPropertyChanged` interface:

[!code-csharp[nameofNotify](../../../samples/snippets/csharp/new-in-6/viewmodel.cs#nameofNotify)]

The advantage of using the `nameof` operator over a constant string is that tools can understand the symbol.
If you use refactoring tools to rename the symbol, it will be renamed in the `nameof` expression as well. Constant strings don't have that advantage. Try it out in your favorite editor: rename a variable, and any `nameof` expressions will also be updated. The `nameof` expression produces the unqualified name of its argument (`LastName` in the previous examples), even if you use the fully qualified name for the argument:

[!code-csharp[QualifiedNameofNotify](../../../samples/snippets/csharp/new-in-6/viewmodel.cs#QualifiedNameofNotify)]

This `nameof` expression produces `FirstName`, not `UXComponents.ViewModel.FirstName`.

## <a name="await-in-catch-and-finally-blocks"></a>Await in Catch and Finally blocks

C# 5 had several limitations around where you could place `await` expressions. One of those has been removed in C# 6. You can now use `await` in `catch` or `finally` expressions. The addition of await expressions in catch and finally blocks may appear to complicate how those are processed. Let's add an example to discuss how this appears. In any async method, you can use an await expression in a finally clause. With C# 6, you can also await in catch expressions. This is most often used with logging scenarios:

[!code-csharp[AwaitFinally](../../../samples/snippets/csharp/new-in-6/NetworkClient.cs#AwaitFinally)]

The implementation details for adding `await` support inside `catch` and `finally` clauses ensure that the behavior is consistent with the behavior for synchronous code. When code executed in a `catch` or `finally` clause throws, execution looks for a suitable `catch` clause in the next surrounding block. If there was a current exception, that exception is lost. The same happens with awaited expressions in `catch` and `finally` clauses: a suitable `catch` clause is searched for, and the current exception, if any, is lost.
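As a self-contained sketch of awaiting in `catch` and `finally` (the `LogAsync` helper is illustrative, not from the referenced snippet files):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class AwaitInCatchDemo
{
    // Illustrative async logging helper.
    static async Task LogAsync(Exception e) =>
        await Task.Run(() => Console.Error.WriteLine($"Logged: {e.Message}"));

    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        try
        {
            throw new IOException("disk unavailable");
        }
        catch (IOException e)
        {
            await LogAsync(e);   // allowed in a catch clause starting with C# 6
        }
        finally
        {
            await Task.Yield();  // allowed in a finally clause as well
        }
    }
}
```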
> [!NOTE]
> This behavior is the reason it's recommended to write `catch` and `finally` clauses carefully, to avoid introducing new exceptions.

## <a name="index-initializers"></a>Index initializers

*Index initializers* are one of two features that make collection initializers more consistent with index usage. In earlier releases of C#, you could use *collection initializers* only with sequence-style collections, including <xref:System.Collections.Generic.Dictionary%602> with braces around key-value pairs:

[!code-csharp[ListInitializer](../../../samples/snippets/csharp/new-in-6/initializers.cs#ListInitializer)]

Now, you can use them with <xref:System.Collections.Generic.Dictionary%602> collections and similar types. The new syntax supports assignment using an index into the collection:

[!code-csharp[DictionaryInitializer](../../../samples/snippets/csharp/new-in-6/initializers.cs#DictionaryInitializer)]

This feature means that associative containers can be initialized using syntax similar to what's been in place for sequence containers for several versions.

## <a name="extension-add-methods-in-collection-initializers"></a>Extension `Add` methods in collection initializers

Another feature that makes collection initialization easier is the ability to use an *extension method* for the `Add` method. This feature was added for parity with Visual Basic. The feature is most useful when you have a custom collection class that has a method with a different name to semantically add new items. For example, consider a collection of students like this:

[!code-csharp[Enrollment](../../../samples/snippets/csharp/new-in-6/enrollment.cs#Enrollment)]

The `Enroll` method adds a student. But it doesn't follow the `Add` pattern.
In previous versions of C#, you could not use collection initializers with an `Enrollment` object:

[!code-csharp[InitializeEnrollment](../../../samples/snippets/csharp/new-in-6/classList.cs#InitializeEnrollment)]

Now you can, but only if you create an extension method that maps `Add` to `Enroll`:

[!code-csharp[ExtensionAdd](../../../samples/snippets/csharp/new-in-6/classList.cs#ExtensionAdd)]

What this feature does is map any method that adds items to a collection to a method named `Add` by creating an extension method.

## <a name="improved-overload-resolution"></a>Improved overload resolution

This last feature is one you probably won't notice. There were constructs where the previous version of the C# compiler may have considered some method calls involving lambda expressions ambiguous. Consider this method:

[!code-csharp[AsyncMethod](../../../samples/snippets/csharp/new-in-6/overloads.cs#AsyncMethod)]

In earlier versions of C#, calling that method using the method group syntax would fail:

[!code-csharp[MethodGroup](../../../samples/snippets/csharp/new-in-6/overloads.cs#MethodGroup)]

The earlier compiler could not correctly distinguish between `Task.Run(Action)` and `Task.Run(Func<Task>())`. In previous versions, you'd need to use a lambda expression as the argument:

[!code-csharp[Lambda](../../../samples/snippets/csharp/new-in-6/overloads.cs#Lambda)]

The C# 6 compiler correctly determines that `Task.Run(Func<Task>())` is a better choice.

### <a name="deterministic-compiler-output"></a>Deterministic compiler output

The `-deterministic` option instructs the compiler to produce a byte-for-byte identical output assembly for successive compilations of the same source files. By default, every compilation produces unique output on each build.
The compiler adds a timestamp and a GUID generated from random numbers. Use this option when you want to compare the byte-for-byte output to ensure consistency across builds. For more information, see the [-deterministic compiler option](../language-reference/compiler-options/deterministic-compiler-option.md) article.
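As a sketch, assuming the `csc` command-line compiler is on the PATH, the option is passed like any other compiler flag:

```shell
# Two successive builds of the same sources produce byte-identical assemblies.
csc -deterministic -out:App.exe Program.cs
```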
---
description: LinRegPoint (MDX)
title: LinRegPoint (MDX) | Microsoft Docs
ms.date: 06/04/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: mdx
ms.topic: reference
ms.author: owend
ms.reviewer: owend
author: minewiskan
ms.openlocfilehash: 4f298d58b14f3005b86f8fa7773a4faef1c94c79
ms.sourcegitcommit: e700497f962e4c2274df16d9e651059b42ff1a10
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 08/17/2020
ms.locfileid: "88429846"
---
# <a name="linregpoint-mdx"></a>LinRegPoint (MDX)

Calculates the linear regression of a set and returns the value of y on the regression line y = ax + b for a particular value of x.

## <a name="syntax"></a>Syntax

```
LinRegPoint(Slice_Expression_x, Set_Expression, Numeric_Expression_y [ ,Numeric_Expression_x ] )
```

## <a name="arguments"></a>Arguments

*Slice_Expression_x*
A valid numeric expression (typically a Multidimensional Expressions (MDX) expression of cell coordinates) that returns a number representing the values along the slicer axis.

*Set_Expression*
A valid Multidimensional Expressions (MDX) expression that returns a set.

*Numeric_Expression_y*
A valid numeric expression (typically an MDX expression of cell coordinates) that returns a number representing the values along the y-axis.

*Numeric_Expression_x*
A valid numeric expression (typically an MDX expression of cell coordinates) that returns a number representing the values along the x-axis.

## <a name="remarks"></a>Remarks

Linear regression, which uses the least-squares method, calculates the equation of a regression line (that is, the best-fit line for a series of points). The regression line has the following equation, where a is the slope and b is the intercept:

y = ax+b

The **LinRegPoint** function evaluates the specified set over the second numeric expression to obtain the values for the y-axis. The function then evaluates the third numeric expression, if specified, over the specified set to obtain the values for the x-axis. If the third numeric expression is not specified, the function uses the current context of the cells in the specified set as the values for the x-axis. The x-axis argument is often omitted when working with a time dimension. After computing the linear regression line, the value of the equation is evaluated for the first numeric expression, and the result is returned.

> [!NOTE]
> The **LinRegPoint** function ignores empty cells or cells that contain text. However, the function does process cells with values of zero.

## <a name="example"></a>Example

The following example returns the predicted value of unit sales over the last 10 periods, based on the statistical relationship between unit sales and store sales:

```
LinRegPoint([Measures].[Unit Sales],LastPeriods(10),[Measures].[Unit Sales],[Measures].[Store Sales])
```

## <a name="see-also"></a>See Also

[MDX Function Reference (MDX)](../mdx/mdx-function-reference-mdx.md)
---
layout: default
title: "High-level architecture"
---

Kahunabooking uses different technologies for the frontend and backend parts, taking the best of both worlds and combining them in a single build.

Kahunabooking is structured the way many modern web applications are built these days. The backend server, written in Scala, exposes a JSON API which can be consumed by any client you want. In the case of Kahunabooking, this client is a single-page browser application built with React. Such an approach allows better scaling and independent development of the server and client parts.

This separation is mirrored in how Kahunabooking projects are structured. There is one sub-project for the backend code and one for the client-side application. They are completely unrelated in terms of code and dependencies. The `ui` directory contains the browser part (JavaScript, CSS, HTML) and `backend` contains the backend application.
# flexutils

Some utilities I wrote for working with files from the FLEX operating system for 6809, while adapting it for my own board.

Currently contains:

* flex2sr - Converts from a FLEX binary to Motorola S-records
* sr2flex - Converts from Motorola S-records to a FLEX binary
* mkflexfs - Creates an empty FLEX disk image
The Technology Explorer (TE) is a lightweight, web-based console for DB2 for Linux, UNIX and Windows. It strives to be a teaching tool for all users of DB2. Whether you're just starting to use DB2 or have been using it for years, there are tutorials for you around many aspects of DB2.

Part of what makes the TE such a great teaching tool is that it doesn't just explain to you how a system should act; the Technology Explorer shows you, using your database! The TE has a large number of views that show you how your database is actually behaving. All of the views the TE uses to teach you about DB2 can be used individually, making the TE a very powerful monitoring tool as well.

Some of the key features of the Technology Explorer are:

* Is a lightweight, web-based platform for interacting with DB2 Linux, UNIX and Windows servers
* Is easily expandable and customizable
* Works with DB2 for Linux, UNIX and Windows Version 9.1, 9.5, 9.7 and 10.1
* Connects to any DB2 data server using only an IP address
* Contains a wealth of content to highlight, demonstrate and teach you about some of DB2's core features
# Your Bayesian Lifestyle

## Getting started with Bayesian statistical models in R or Python

Material for this tutorial has been cribbed from Chris Fonnesbeck and Colin Carroll; the original contents can be found [here](https://github.com/fonnesbeck/Bayes_Computing_Course.git).

## Material for course on Bayesian Computation

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/mamacneil/Bayesian_lifestyle/master)

## Setup

This tutorial assumes that you have [Anaconda](https://www.anaconda.com/distribution/#download-section) (Python 3.6 or 3.7) set up and installed on your system. If you do not, please download and install Anaconda on your system before proceeding with the setup.

The next step is to clone or download the tutorial materials in this repository. If you are familiar with Git, run the clone command:

    git clone https://github.com/mamacneil/Bayesian_lifestyle.git

otherwise you can [download a zip file](https://github.com/mamacneil/Bayesian_lifestyle/archive/master.zip) of its contents, and unzip it on your computer.

***

The repository for this tutorial contains a file called `environment.yml` that includes a list of all the packages used for the tutorial. If you run:

    conda env create

from the main tutorial directory, it will create the environment for you and install all of the packages listed. This environment can be enabled using:

    conda activate bayes_course

Then, you can start **JupyterLab** to access the materials:

    jupyter lab

The binder link above should also provide a working environment.

## Course Outline

This is subject to change. I will seek input and try to accommodate digression where there is interest. Roughly, the first half of the course jumps right in with applied examples, then steps back to cover some of the theory behind Bayesian methods before going carefully through another applied case study.

### Thursday, February 6

1. **Basic Bayes** 9:00am - 11:00am, 12:00pm - 1:00pm
   - Bayes formula
   - Likelihoods, Priors, and Posteriors
   - Radon, it's everywhere
   - IQ drugs, they're real
   - Phase-shifts, with fish

2. **Hierarchical Models** 1:00pm - 4:00pm
   - Motivation and case studies
   - Partial pooling
   - Building hierarchical models
   - Parameterizations
   - Model checking

### Thursday, February 13

3. **The Bayesian Workflow** 9:00am - 12:00pm
   - Prior predictive checks
   - Iterating models
   - Posterior predictive checks
   - Using the model

4. **Your data** 1:00pm - 4:00pm
   - Questions relating to your own data
   - How to get started with your unique problem
   - Resources
<p align="center">
  <img alt="sysbox" src="./docs/figures/sysbox-ce-header.png"/>
</p>

<p align="center">
  <a href="https://github.com/nestybox/sysbox/blob/master/LICENSE"><img alt="GitHub license" src="https://img.shields.io/github/license/nestybox/sysbox"></a>
  <a href="https://travis-ci.com/nestybox/sysbox"><img src="https://img.shields.io/circleci/project/github/badges/shields/master" alt="build status"></a>
  <a href="https://nestybox-support.slack.com/join/shared_invite/enQtOTA0NDQwMTkzMjg2LTAxNGJjYTU2ZmJkYTZjNDMwNmM4Y2YxNzZiZGJlZDM4OTc1NGUzZDFiNTM4NzM1ZTA2NDE3NzQ1ODg1YzhmNDQ#"><img src="https://img.shields.io/badge/chat-on%20slack-FF3386"></a>
</p>

## Introduction

**Sysbox** is an open-source container runtime (aka runc), originally developed by [Nestybox](https://www.nestybox.com), that enhances containers in two key ways:

* **Improves container isolation:** Sysbox always enables the Linux user-namespace on containers (i.e., root user in the container has zero privileges on the host), hides host info inside the container, locks the container's initial mounts, and more.

* **Enables containers to act as VMs**: with Sysbox, containers become capable of running most workloads that run in physical hosts or VMs, including systemd, Docker, Kubernetes, and more, seamlessly and with proper isolation (no privileged containers, no complex images, no tricky entrypoints, no special volume mounts, etc.)

Sysbox is an OCI-based "runc", meaning that you typically use Docker and Kubernetes to deploy these enhanced containers (in fact Sysbox works under the covers, you don't interact with it directly). Thus there is no need to learn new tools or modify your existing container workflows to take advantage of Sysbox. Just install it and point your container manager / orchestrator to it.

For example, this simple Docker command creates a container with Sysbox; you get a well isolated container capable of seamlessly running most software that runs in a VM (e.g., systemd, Docker, etc):

    $ docker run --runtime=sysbox-runc -it any_image

Sysbox was forked from the excellent [OCI runc][oci-runc] in early 2019, and has undergone significant changes since then. It's written in Go, and it is currently composed of three components: sysbox-runc, sysbox-fs, and sysbox-mgr. More on Sysbox's design can be found in the [Sysbox user guide](docs/user-guide/design.md).

## Demo Videos

- ["VM-like" containers with Docker + Sysbox](https://asciinema.org/a/kkTmOxl8DhEZiM2fLZNFlYzbo?speed=2)
- [Rootless Kubernetes pods with Sysbox](https://asciinema.org/a/401488?speed=1.5)

## Contents

* [Motivation](#motivation)
* [License](#license)
* [Audience](#audience)
* [Sysbox Features](#sysbox-features)
* [System Containers](#system-containers)
* [Host Requirements](#host-requirements)
* [Installing Sysbox](#installing-sysbox)
* [Using Sysbox](#using-sysbox)
* [Documentation](#documentation)
* [Performance](#performance)
* [Under the Covers](#under-the-covers)
* [Comparison to related technologies](#comparison-to-related-technologies)
* [Contributing](#contributing)
* [Troubleshooting & Support](#troubleshooting--support)
* [Uninstallation](#uninstallation)
* [Roadmap](#roadmap)
* [Relationship to Nestybox](#relationship-to-nestybox)
* [Contact](#contact)
* [Thank You](#thank-you)

## Motivation

Sysbox solves problems such as:

* Enhancing the isolation of containerized microservices (root in the container maps to an unprivileged user on the host).
* Enabling a highly capable root user inside the container without compromising host security.
* Securing CI/CD pipelines by enabling Docker-in-Docker or Kubernetes-in-Docker without insecure privileged containers.
* Enabling the use of containers as "VM-like" environments for development, local testing, learning, etc., with strong isolation and the ability to run systemd, Docker, and even Kubernetes inside the container.
* Running legacy apps inside containers (instead of less efficient VMs).
* Replacing VMs with an easier, faster, more efficient, and more portable container-based alternative, one that can be deployed across cloud environments easily.
* Partitioning bare-metal hosts into multiple isolated compute environments with 2X the density of VMs (i.e., deploy twice as many VM-like containers as VMs on the same hardware at the same performance).
* Partitioning cloud instances (e.g., EC2, GCP, etc.) into multiple isolated compute environments without resorting to expensive nested virtualization.

## License

Sysbox is an open-source project, licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for details.

## Audience

The Sysbox project is intended for anyone looking to experiment, invent, learn, and build systems using system containers. It's cutting-edge OS virtualization, and contributions are welcomed.

The Sysbox project is **not** meant for people looking for a commercially supported solution. For such a solution, use the **Sysbox Enterprise Edition (Sysbox-EE)**. Sysbox-EE uses Sysbox at its core, but complements it with enterprise-level features for improved security, functionality, and performance, as well as Nestybox support (see next section). It has a 30-day free trial and a paid subscription after that.

For more info on Sysbox-EE, refer to the [Nestybox website](https://www.nestybox.com) and the [Sysbox-EE repo](https://github.com/nestybox/sysbox-ee).

## Sysbox Features

The table below summarizes the key features of the Sysbox container runtime. It also provides a comparison between the Sysbox Community Edition (i.e., this repo) and the Sysbox Enterprise Edition (see prior section).

<p align="center">
  <img alt="sysbox" src="./docs/figures/sysbox-features.png" width="1000" />
</p>

(\*) For pricing purposes, a "host" is a computer (bare-metal or virtual-machine) with up to 16 CPU cores (32 hyper threads). Per-core pricing at **$5 per-core per-month** is also available for hosts with < 8 cores. Licensing is per-year. Volume discounts available for 50+ per-host licenses or 350+ per-core licenses.

More on the features [here](docs/user-guide/features.md). If you have questions, you can reach us [here](#contact).

## System Containers

We call the containers deployed by Sysbox **system containers**, to highlight the fact that they can run not just micro-services (as regular containers do), but also system software such as Docker, Kubernetes, Systemd, inner containers, etc. More on system containers [here](docs/user-guide/concepts.md#system-container).

## Host Requirements

The Sysbox host must meet the following requirements:

* It must be running one of the [supported Linux distros](docs/distro-compat.md).
* We recommend a minimum of 4 CPUs (e.g., 2 cores with 2 hyperthreads) and 4GB of RAM. Though this is not a hard requirement, smaller configurations may slow down Sysbox.

## Installing Sysbox

The method of installation depends on the environment where Sysbox will be installed:

* To install Sysbox on a Kubernetes cluster, use the [sysbox-deploy-k8s daemonset](docs/user-guide/install-k8s.md).
* Otherwise, use the [Sysbox package](docs/user-guide/install-package.md) for your distro.
* Alternatively, if a package for your distro is not yet available, or if you want to get the latest changes from upstream, you can [build and install Sysbox from source](docs/developers-guide/README.md).

Before installing, ensure your host meets the [host requirements](#host-requirements) listed in the prior section.

## Using Sysbox

Once Sysbox is installed, you create a container using your container manager or orchestrator (e.g., Docker or Kubernetes) and an image of your choice.

Docker command example:

```console
$ docker run --runtime=sysbox-runc --rm -it --hostname my_cont registry.nestybox.com/nestybox/ubuntu-bionic-systemd-docker
root@my_cont:/#
```

Kubernetes pod spec example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubu-bio-systemd-docker
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto:size=65536"
spec:
  runtimeClassName: sysbox-runc
  containers:
  - name: ubu-bio-systemd-docker
    image: registry.nestybox.com/nestybox/ubuntu-bionic-systemd-docker
    command: ["/sbin/init"]
  restartPolicy: Never
```

You can choose whatever container image you want, Sysbox places no requirements on the image.

The [Sysbox User Guide](docs/user-guide/deploy.md) has more info on this, and the [Sysbox Quickstart Guide](docs/quickstart/README.md) has many usage examples. You should start there to get familiarized with the use cases enabled by Sysbox.

## Documentation

We strive to provide good documentation; it's a key component of the Sysbox project. We have several documents to help you get started and get the best out of Sysbox.

* [Sysbox Quick Start Guide](docs/quickstart/README.md)
  * Provides many examples for using Sysbox. New users should start here.
* [Sysbox User Guide](docs/user-guide/README.md)
  * Provides more detailed information on Sysbox installation, features and design.
* [Sysbox Distro Compatibility Doc](docs/distro-compat.md)
  * Distro compatibility requirements.
* [Nestybox blog site](https://blog.nestybox.com/)
  * Articles on using Sysbox to solve real-life problems.

## Performance

Sysbox is fast and efficient, as described in this [Nestybox blog post](https://blog.nestybox.com/2020/09/23/perf-comparison.html). The containers created by Sysbox have similar performance to those created by the OCI runc (the default runtime for Docker and Kubernetes).
Even containers deployed inside the system containers have excellent performance, though there is a slight overhead for network IO (as expected, since packets emitted by inner containers go through an additional network interface / bridge inside the system container).

Now, if you use Sysbox to deploy system containers that replace VMs, the performance and efficiency gains are significant: you can deploy 2X as many system containers as VMs on the same server and get the same performance, and do this with a fraction of the memory and storage consumption. The blog post referenced above has more on this.

## Under the Covers

Sysbox uses many OS-virtualization features of the Linux kernel and complements these with OS-virtualization techniques implemented in user-space. These include using all Linux namespaces (in particular the user-namespace), partial virtualization of procfs and sysfs, selective syscall trapping, and more. The [Sysbox User Guide](docs/user-guide/README.md) has more info on this.

### Sysbox does not use hardware virtualization

Though the containers generated by Sysbox resemble virtual machines in some ways (e.g., you can run as root, run multiple services, and deploy Docker and K8s inside), Sysbox does **not** use hardware virtualization. It's purely an OS-virtualization technology meant to create containers that can run applications as well as system-level software, easily and securely.

This makes the containers created by Sysbox fast, efficient, and portable. Isolation-wise, it's fair to say that they provide stronger isolation than regular Docker containers (by virtue of using the Linux user-namespace), but weaker isolation than VMs (by sharing the Linux kernel among containers).
## Comparison to related technologies

Sysbox is pretty unique: it is (to the best of our knowledge) the only OCI-based container runtime that allows Docker and Kubernetes to deploy "VM-like" containers capable of running systemd, Docker, K8s, etc., with ease and strong isolation from the underlying host (i.e., no privileged containers).

See this [blog post](https://blog.nestybox.com/2020/10/06/related-tech-comparison.html) for a high-level comparison between Sysbox and related technologies such as LXD, K8s.io KinD, Ignite, Kata Containers, rootless Docker, and more.

## Contributing

We welcome contributions to Sysbox, whether they are small documentation changes, bug fixes, or feature additions. Please see the [contribution guidelines](CONTRIBUTING.md) and [developer's guide](docs/developers-guide/README.md) for more info.

## Troubleshooting & Support

Refer to the [Troubleshooting document](docs/user-guide/troubleshoot.md) and to the [issues](https://github.com/nestybox/sysbox/issues) for help. Reach us at our [slack channel][slack] for any questions.

## Uninstallation

Prior to uninstalling Sysbox, make sure all containers deployed with it are stopped and removed. The method of uninstallation depends on the method used to install Sysbox:

* To uninstall Sysbox on a Kubernetes cluster, follow [these instructions](docs/user-guide/install-k8s.md#uninstallation).
* Otherwise, to uninstall the Sysbox package, follow [these instructions](docs/user-guide/install-package.md#uninstalling-sysbox).
* If Sysbox was built and installed from source, follow [these instructions](docs/developers-guide/build.md#cleanup--uninstall).

## Roadmap

The following is a list of features in the Sysbox roadmap. We list these here so that our users can get a better idea of where we are going and can give us feedback on which of these they like best (or least). Here is a short list; the Sysbox issue tracker has many more.

* Support for more Linux distros.
* More improvements to procfs and sysfs virtualization.
* Continued improvements to container isolation.
* Exposing host devices inside system containers with proper permissions.

## Relationship to Nestybox

Sysbox was initially developed by [Nestybox](https://www.nestybox.com), and Nestybox is the main sponsor of the Sysbox open-source project. Having said this, we encourage participation from the community to help evolve and improve it, with the goal of increasing the use cases and benefits it enables. External maintainers and contributors are welcomed.

Nestybox uses Sysbox as the core of its Sysbox Enterprise product (Sysbox-EE), which consists of Sysbox plus proprietary features meant for enterprise use. To ensure synergy between the Sysbox project and commercial entities such as Nestybox, we use the following criteria when considering adding functionality to Sysbox:

Any features that mainly benefit individual practitioners are made part of the Sysbox open-source project. Any features that mainly address enterprise-level needs are reserved for the Sysbox Enterprise Edition.

The Sysbox project maintainers will make this determination on a feature-by-feature basis, with total transparency. The goal is to create a balanced approach that enables the Sysbox open-source community to benefit and thrive while creating opportunities for Nestybox to build a healthy, viable business around the technology.

## Contact

Slack: [Nestybox Slack Workspace][slack]

Email: [email protected]

We are available from Monday-Friday, 9am-5pm Pacific Time.

## Thank You

We thank you **very much** for using and/or contributing to Sysbox. We hope you find it interesting and that it helps you use containers in new and more powerful ways.
[slack]: https://nestybox-support.slack.com/join/shared_invite/enQtOTA0NDQwMTkzMjg2LTAxNGJjYTU2ZmJkYTZjNDMwNmM4Y2YxNzZiZGJlZDM4OTc1NGUzZDFiNTM4NzM1ZTA2NDE3NzQ1ODg1YzhmNDQ#/
[perf-blog]: https://blog.nestybox.com/2020/09/23/perf-comparison.html
[oci-runc]: https://github.com/opencontainers/runc
# SGAS on point clouds

## Point Clouds Classification on [ModelNet](https://modelnet.cs.princeton.edu/)

### Search

We search each model on one GTX 1080Ti on the ModelNet10 dataset.

Search with `SGAS Cri1`:
```
python train_search.py --random_seed --data ../../data/
```

Search with `SGAS Cri2`:
```
python train_search.py --use_history --random_seed --data ../../data
```

You only need to set `--data` to your desired data folder; the ModelNet10 dataset will be downloaded automatically.

### Train

We train each model on one Tesla V100. To train the 4th architecture searched using `SGAS Cri2` with 9 cells, 128 filters, and k nearest neighbors 20 (the best large architecture), run:
```
python main_modelnet.py --phase train --arch Cri2_ModelNet_Best --num_cells 9 --init_channels 128 --k 20 --save Cri2_arch4_l9_c128_k20
```

You only need to set `--data` to your data folder; the ModelNet40 dataset will be downloaded automatically. Set `--arch` to any architecture you want. (More architectures can be found in `genotyps.py`.)

### Test

Our pretrained models can be found on [Google Cloud](https://drive.google.com/drive/folders/1sjLfOpYUYyBSI14G8-vFScZPRaZCXart?usp=sharing). Use the parameter `--model_path` to set a specific pretrained model to load.
For example, to test the best large architecture searched using `SGAS Cri2` (expected overall accuracy: 93.23%):
```
python main_modelnet.py --phase test --arch Cri2_ModelNet_Best --num_cells 9 --init_channels 128 --k 20 --model_path log/Cri2_modelnet40_best_l9_c128_k20.pt
```

To test the best large architecture searched using `SGAS Cri1` (expected overall accuracy: 92.87%):
```
python main_modelnet.py --phase test --arch Cri1_ModelNet_Best --num_cells 9 --init_channels 128 --k 20 --model_path log/Cri1_modelnet40_best_l9_c128_k20.pt
```

To test the best small architecture searched using `SGAS Cri2` (expected overall accuracy: 93.07%):
```
python main_modelnet.py --phase test --arch Cri2_ModelNet_Best --num_cells 3 --init_channels 128 --k 9 --model_path log/Cri2_modelnet40_best_l3_c128_k9.pt
```
---
title: Docker pull for the health container
titleSuffix: Azure Cognitive Services
description: Docker pull command for the Text Analytics for health container
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: text-analytics
ms.topic: include
ms.date: 07/07/2020
ms.author: aahi
ms.openlocfilehash: a0b2c9548f9c1289ae0abd61a72d7146a3bbca29
ms.sourcegitcommit: cd9754373576d6767c06baccfd500ae88ea733e4
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 11/20/2020
ms.locfileid: "94965136"
---
Fill out and submit the [Cognitive Services request form](https://aka.ms/csgate) to request access to the public preview of Text Analytics for health. The application covers both the container and the hosted web API public preview. The form requests information about you, your company, and the user scenario in which you'll use the container. After you submit the form, the Azure Cognitive Services team reviews it to make sure you meet the criteria for access to the private container registry.

> [!IMPORTANT]
> * On the form, you must use an email address associated with an Azure subscription ID.
> * The Azure resource you use to run the container must have been created with the approved Azure subscription ID.
> * Check your email (both the inbox and junk folders) for updates from Microsoft on the status of your application.

After approval, you'll receive an email containing credentials for accessing the private container registry.

Use the docker login command with the credentials provided in the approval email to connect to the private container registry for Cognitive Services containers.

```Docker
docker login containerpreview.azurecr.io -u <username> -p <password>
```

Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the private container registry.

```
docker pull containerpreview.azurecr.io/microsoft/cognitive-services-healthcare:latest
```
---
title: Network security for Azure Service Bus
description: This article describes network security features such as service tags, IP firewall rules, service endpoints, and private endpoints.
ms.topic: conceptual
ms.date: 06/23/2020
ms.openlocfilehash: db0dd89d1f902699c27b724609505ba681757454
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/29/2021
ms.locfileid: "92310463"
---
# <a name="network-security-for-azure-service-bus"></a>Network security for Azure Service Bus

This article describes how to use the following security features with Azure Service Bus:

- Service tags
- IP firewall rules
- Network service endpoints
- Private endpoints

## <a name="service-tags"></a>Service tags

A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes covered by the service tag and automatically updates the tag as addresses change, minimizing the complexity of frequent updates to network security rules. For more information about service tags, see [Service tags overview](../virtual-network/service-tags-overview.md).

You can use service tags to define network access controls in [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or in [Azure Firewall](../firewall/service-tags.md). Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, **ServiceBus**) in the appropriate *source* or *destination* field of a rule, you can allow or deny the traffic for the corresponding service.

| Service tag | Purpose | Can use inbound or outbound? | Can be regional? | Can use with Azure Firewall? |
| --- | --- | :---: | :---: | :---: |
| **ServiceBus** | Azure Service Bus traffic that uses the Premium service tier. | Outbound | Yes | Yes |

> [!NOTE]
> You can use service tags only for **Premium** namespaces. If you're using a **Standard** namespace, use the IP address that you see when you run the following command: `nslookup <host name for the namespace>`. For example: `nslookup contosons.servicebus.windows.net`.

## <a name="ip-firewall"></a>IP firewall

By default, Service Bus namespaces are accessible from the internet as long as the request comes with valid authentication and authorization. With the IP firewall, you can restrict access further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.

This feature is helpful in scenarios in which Azure Service Bus should be accessible only from certain well-known sites. Firewall rules enable you to configure rules that accept traffic from specific IPv4 addresses. For example, if you use Service Bus with [Azure Express Route][express-route], you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses or the addresses of a corporate NAT gateway.

The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all client connections using any supported protocol. Any connection attempt from an IP address that doesn't match an allow IP rule on the Service Bus namespace is rejected as unauthorized. The response doesn't mention the IP rule.
IP filter rules are applied in order, and the first rule that matches the IP address determines the action (allow or deny).

For more information, see [Configure an IP firewall for a Service Bus namespace](service-bus-ip-filtering.md).

## <a name="network-service-endpoints"></a>Network service endpoints

The integration of Service Bus with [virtual network (VNet) service endpoints](service-bus-service-endpoints.md) enables secure access to messaging capabilities from workloads, such as virtual machines, that are bound to virtual networks, with the network traffic path secured on both ends.

Once configured to be bound to at least one VNet subnet service endpoint, the respective Service Bus namespace only accepts traffic from the authorized virtual networks. From the virtual network's perspective, binding a Service Bus namespace to a service endpoint configures an isolated networking tunnel from the virtual network subnet to the messaging service. The result is a private and isolated relationship between the workloads bound to the subnet and the respective Service Bus namespace, even though the observable network address of the messaging service endpoint is in a public IP range.

> [!IMPORTANT]
> Virtual networks are supported only in [Premium tier](service-bus-premium-messaging.md) Service Bus namespaces.
>
> When using VNet service endpoints with Service Bus, you shouldn't enable these endpoints in applications that mix Standard and Premium tier Service Bus namespaces. The Standard tier doesn't support VNets. The endpoint is restricted to Premium tier namespaces only.
### <a name="advanced-security-scenarios-enabled-by-vnet-integration"></a>Advanced security scenarios enabled by VNet integration

Solutions that require tight and compartmentalized security, in which VNet subnets provide the segmentation between the compartmentalized services, generally still need communication paths between the services residing in those compartments.

Any direct IP route between the compartments, including those carrying HTTPS over TCP/IP, carries the risk of exploitation of vulnerabilities from the network layer on up. Messaging services provide fully insulated communication paths, where messages are even written to disk as they transition between parties. Workloads in two distinct virtual networks that are both bound to the same Service Bus instance can communicate efficiently and reliably via messages, while the integrity of the respective network isolation boundaries is preserved.

That means your security-sensitive cloud solutions not only gain access to Azure's industry-leading, reliable, and scalable asynchronous messaging capabilities, but messaging can now be used to create communication paths between secure solution compartments that are inherently more secure than any peer-to-peer communication mode, including HTTPS and other TLS-secured socket protocols.

### <a name="bind-service-bus-to-virtual-networks"></a>Bind Service Bus to virtual networks

*VNet rules* are the firewall security feature that controls whether your Azure Service Bus server accepts connections from a particular VNet subnet.

Binding a Service Bus namespace to a virtual network is a two-step process.
You first need to create a **VNet service endpoint** on a virtual network subnet and enable it for **Microsoft.ServiceBus**, as explained in the [service endpoints overview](service-bus-service-endpoints.md). Once you've added the service endpoint, you bind the Service Bus namespace to it with a **VNet rule**.

The VNet rule is an association of the Service Bus namespace with a virtual network subnet. While the rule exists, all workloads bound to the subnet are granted access to the Service Bus namespace. Service Bus itself never establishes outbound connections, doesn't need to gain access, and is therefore never granted access to your subnet by enabling this rule.

For more information, see [Configure virtual network service endpoints for a Service Bus namespace](service-bus-service-endpoints.md).

## <a name="private-endpoints"></a>Private endpoints

Azure Private Link enables you to access Azure services (for example, Azure Service Bus, Azure Storage, and Azure Cosmos DB) and Azure-hosted customer/partner services over a **private endpoint** in your virtual network.

A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute/VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses the Microsoft backbone network, isolating it from the public internet.
You can connect to an instance of an Azure resource, which gives you the highest level of granularity in access control. For more information, see [What is Azure Private Link?](../private-link/private-link-overview.md).

> [!NOTE]
> This feature is supported with the **Premium** tier of Azure Service Bus. For more information about the Premium tier, see the [Service Bus Premium and Standard messaging tiers](service-bus-premium-messaging.md) article.

For more information, see [Configure private endpoints for a Service Bus namespace](private-link-service.md).

## <a name="next-steps"></a>Next steps

See the following articles:

- [Configure an IP firewall for a Service Bus namespace](service-bus-ip-filtering.md)
- [Configure virtual network service endpoints for a Service Bus namespace](service-bus-service-endpoints.md)
- [Configure private endpoints for a Service Bus namespace](private-link-service.md)
# georgina23.github.com
---
id: 5266
full_name: "kristinebilgrav/cnvnator"
images:
 - "kristinebilgrav-cnvnator-latest"
---
Amazon S3
=========

All Django projects involve some kind of static (CSS/JS) resources, and many involve user-uploaded resources like avatars. When deploying to a single server, these can be kept on the local file system, but in a larger server cluster that isn't feasible. Other deployment environments, such as Heroku or Elastic Beanstalk, don't provide access to the local filesystem. For these scenarios, Amazon's Simple Cloud Storage Service (S3) can provide a shared storage location for your shared assets.

Static Assets and Public Uploads
--------------------------------

This is the most common use case for S3. Moving your static assets to S3 makes it easy to set up CloudFront as a CDN for them, saving your web servers' CPU and bandwidth to serve the application instead. Moving the public uploads to S3 can also benefit from a CDN. Having a shared storage location means all web servers in the cluster will have access to the resources without configuring and maintaining any network mounts.

Since it's a fairly common configuration, there are a number of great tools to make this easy and documentation on how to enable them in your project. [django-storages](https://django-storages.readthedocs.io/en/latest/index.html) along with [boto3](https://boto3.readthedocs.io/en/latest/) is the preferred method for configuring Django to use S3 for either static or media resources. The best place to get started on this is our own blog post on this exact topic: <https://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/>

Private Uploads
---------------

There are times when you want to handle user uploads for content that shouldn't be served publicly on the site. These might be resources restricted to the user who uploaded them or to a group of users. On the Django side, private uploads are handled similarly to public uploads. It's still recommended that you use django-storages and boto3.
However, the bucket configuration on the AWS side needs more restrictive permissions. If you follow the directions in the post related to public media, then you'll have created a user to access the bucket from the application side. To grant access to only that single user, you'll want to create a policy like the following:

    {
        "Statement": [
            {
                "Sid": "SingleUserBucketReadWrite",
                "Effect": "Deny",
                "NotPrincipal": {
                    "AWS": [
                        "USER-ARN"
                    ]
                },
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:ListBucketMultipartUploads",
                    "s3:ListMultipartUploadParts",
                    "s3:AbortMultipartUpload",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::BUCKET-NAME",
                    "arn:aws:s3:::BUCKET-NAME/*"
                ]
            }
        ]
    }

To use this you would replace `USER-ARN` with the user's ARN as generated by AWS and `BUCKET-NAME` with the name of the bucket you've created. Note that this grants permission to list the bucket, get items from the bucket, and add new uploads to the bucket, but does not grant the permission to delete items from the bucket. If you want to grant that access, it would be added to the `Action` list in the policy.

If you are using both public resources and private resources within a single project, you'll have to go beyond configuring the `DEFAULT_FILE_STORAGE` setting and explicitly configure each `FileField` and `ImageField` to use the appropriate storage for that upload. This is done by passing the `storage` instance to the field declaration. This will allow you to use separate buckets for the public and private files. All together a project might use as many as three buckets: one public/anonymous-access bucket for static resources (JS/CSS), one public/anonymous-access bucket for public uploads, and one private/restricted-access bucket for private uploads. Note that the private uploads can still be made accessible through a temporary signed URL generated by the storage class.
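As a quick sanity check on the JSON, the policy above can also be rendered programmatically. The following standard-library sketch (the function name and the ARN/bucket values are our own placeholders, not from this post) produces the same structure:

```python
import json

def single_user_policy(user_arn, bucket):
    """Render the single-user read/write bucket policy shown above."""
    policy = {
        "Statement": [
            {
                "Sid": "SingleUserBucketReadWrite",
                "Effect": "Deny",
                # Deny everyone *except* this principal.
                "NotPrincipal": {"AWS": [user_arn]},
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:ListBucketMultipartUploads",
                    "s3:ListMultipartUploadParts",
                    "s3:AbortMultipartUpload",
                    "s3:PutObject",
                ],
                "Resource": [
                    "arn:aws:s3:::%s" % bucket,
                    "arn:aws:s3:::%s/*" % bucket,
                ],
            }
        ]
    }
    return json.dumps(policy, indent=4)

print(single_user_policy("arn:aws:iam::123456789012:user/app-user", "example-uploads"))
```

Note the deliberate absence of `s3:DeleteObject` from the `Action` list, matching the point above about deletes not being granted.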
# dprofiler

[![PyPI](https://img.shields.io/pypi/v/dprofiler.svg)](https://pypi.org/project/dprofiler/)
[![Build Status](https://travis-ci.org/disktnk/dprofiler.svg?branch=master)](https://travis-ci.org/disktnk/dprofiler)
[![Build status](https://ci.appveyor.com/api/projects/status/d304h5xmycq4t3ls/branch/master?svg=true)](https://ci.appveyor.com/project/disktnk/dprofiler/branch/master)
[![codecov](https://codecov.io/gh/disktnk/dprofiler/branch/master/graph/badge.svg)](https://codecov.io/gh/disktnk/dprofiler)

Wraps the `cProfile` and `profile` modules for use as a decorator.

## Support

- Python 2.7.15+ / 3.5+ / 3.6+

## Install

**Required**

- six 1.11.0+

```
$ pip install dprofiler
```

From source:

```bash
$ git clone https://github.com/disktnk/dprofiler
$ cd dprofiler
$ pip install -e .
```

## Usage

Add the `@profile` decorator to the target function.

```python
from dprofiler import profile

@profile
def target_function():
    # some process
```

With your own logger:

```python
import logging

from dprofiler import profile

own_logger = logging.getLogger(__name__)
# dprofiler outputs at DEBUG, so enable the DEBUG level
own_logger.setLevel(logging.DEBUG)

@profile(logger=own_logger)
def target_function():
    # some process
```

Output example:

```
path/to/target.py
         10 function calls in 0.000 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.000    0.000 pstats.py:65(__init__)
        1    0.000    0.000    0.000    0.000 pstats.py:75(init)
        1    0.000    0.000    0.000    0.000 pstats.py:94(load_stats)
        1    0.000    0.000    0.000    0.000 cProfile.py:50(create_stats)
...
```

**Supported options**

- `logger`: set your own logger; stdout is used by default.
- `sort_key`: `cumtime` by default; other keys are listed [here](https://docs.python.org/3.6/library/profile.html#pstats.Stats.sort_stats).
- `n`: max count of entries to output, `20` by default.
- `prefix`: string output when starting, empty by default.
- `suffix`: string output when ending, empty by default.

## License

MIT License (see [LICENSE](/LICENSE) file).
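Under the hood, such a decorator is a thin wrapper around the standard library's `cProfile` and `pstats`. The following stdlib-only sketch of the idea (our illustration, not the package's actual implementation) shows the mechanics:

```python
import cProfile
import functools
import io
import pstats

def profile(func=None, *, sort_key="cumulative", n=20):
    """Rough stand-in for dprofiler's decorator, using only the stdlib."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            pr = cProfile.Profile()
            pr.enable()
            try:
                return f(*args, **kwargs)
            finally:
                pr.disable()
                # Format and print the collected stats, most expensive first.
                buf = io.StringIO()
                pstats.Stats(pr, stream=buf).sort_stats(sort_key).print_stats(n)
                print(buf.getvalue())
        return wrapper
    # Support both @profile and @profile(sort_key=..., n=...) forms.
    return decorator(func) if func is not None else decorator

@profile
def target_function():
    return sum(range(1000))

target_function()  # prints the stats table, then returns the result
```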
#### Parser Content
```Java
{
  Name = cef-cisco-ise-nac-logon-2
  Vendor = Cisco
  Product = Cisco ISE
  Lms = Direct
  DataType = "nac-logon"
  TimeFormat = "epoch"
  Conditions = [ """CEF:""", """|Cisco ISE|""", """|RADIUS Accounting start request|""", """ dst=""", """act=Start""", """CISE_RADIUS_Accounting""" ]
  Fields = [
    """\srt=({time}\d{1,100})""",
    """\sahost=({host}[\w.-]{1,2000})\s""",
    """({event_name}RADIUS Accounting start request)""",
    """\|Cisco ISE\|[^\|]{0,2000}\|({event_code}\d{1,100})""",
    """\sdvchost=({dest_host}[\w.-]{1,2000})\s""",
    """\sdst=({dest_ip}[A-Fa-f\d.:]{1,2000})\s""",
    """\sshost=({src_host}[\w.-]{1,2000})\s""",
    """\ssrc=({src_ip}[A-Fa-f\d.:]{1,2000})\s""",
    """\samac=({src_mac}[\w-]{1,2000})\s""",
    """\ssuser=(({user_email}[^\s@]{1,2000}@[^\s@]{1,2000})|({user}[^\s\(\[]{1,2000}))""",
    """({auth_type}Radius-Accounting)"""
  ]
  DupFields = [ "host->auth_server" ]
}
```
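A few of these extraction patterns can be sanity-checked outside the parser with Python's `re` module. The sample CEF line below is invented for illustration, and the `{name}` tokens (the parser's named-capture syntax) are rewritten here as plain `(?P<v>…)` groups:

```python
import re

# Made-up CEF event in the shape the Conditions above expect; all values are invented.
SAMPLE = ("CEF:0|Cisco|Cisco ISE|2.4|3000|RADIUS Accounting start request|3|"
          " rt=1609459200000 ahost=ise01.example.com dst=10.0.0.5 act=Start"
          " suser=jdoe amac=00-11-22-33-44-55 ")

# Plain-regex equivalents of three of the Fields entries.
FIELDS = {
    "time":    r"\srt=(?P<v>\d{1,100})",
    "dest_ip": r"\sdst=(?P<v>[A-Fa-f\d.:]{1,2000})\s",
    "user":    r"\ssuser=(?P<v>[^\s(\[]{1,2000})",
}

parsed = {name: re.search(pat, SAMPLE).group("v") for name, pat in FIELDS.items()}
print(parsed)
```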
---
title: 'Quickstart: Azure Blob Storage library v12 - JavaScript in a browser'
description: In this quickstart, you learn how to use the Azure Blob Storage client library version 12 for JavaScript in a browser. You create a container and an object in Blob storage. Next, you learn how to list all of the blobs in a container. Finally, you learn how to delete blobs and delete a container.
author: mhopkins-msft
ms.author: mhopkins
ms.date: 07/24/2020
ms.service: storage
ms.subservice: blobs
ms.topic: quickstart
ms.custom: devx-track-js
ms.openlocfilehash: 998d49e91d38a1f2fdc2503165ee99635e153027
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/25/2020
ms.locfileid: "96001897"
---

<!-- Customer intent: As a web application developer I want to interface with Azure Blob storage entirely on the client so that I can build a SPA application that is able to upload and delete files on blob storage. -->

# <a name="quickstart-manage-blobs-with-javascript-v12-sdk-in-a-browser"></a>Quickstart: Manage blobs with JavaScript v12 SDK in a browser

Azure Blob storage is optimized for storing large amounts of unstructured data. Blobs are objects that can hold text or binary data, including images, documents, streaming media, and archive data. In this quickstart, you learn to manage blobs by using JavaScript in a browser. You'll upload and list blobs, and you'll create and delete containers.
Additional resources:

* [API reference documentation](/javascript/api/@azure/storage-blob)
* [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob)
* [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob)
* [Samples](../common/storage-samples-javascript.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)

## <a name="prerequisites"></a>Prerequisites

* [An Azure account with an active subscription](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio)
* [An Azure Storage account](../common/storage-account-create.md)
* [Node.js](https://nodejs.org)
* [Microsoft Visual Studio Code](https://code.visualstudio.com)
* A Visual Studio Code extension for browser debugging, such as:
    * [Debugger for Microsoft Edge](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-edge)
    * [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome)
    * [Debugger for Firefox](https://marketplace.visualstudio.com/items?itemName=firefox-devtools.vscode-firefox-debug)

[!INCLUDE [storage-multi-protocol-access-preview](../../../includes/storage-multi-protocol-access-preview.md)]

## <a name="object-model"></a>Object model

Blob storage offers three types of resources:

* The storage account
* A container in the storage account
* A blob in the container

The following diagram shows the relationship between these resources.

![Diagram of Blob storage architecture](./media/storage-blobs-introduction/blob1.png)

In this quickstart, you'll use the following JavaScript classes to interact with these resources:

* [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient): The `BlobServiceClient` class allows you to manipulate Azure Storage resources and blob containers.
* [ContainerClient](/javascript/api/@azure/storage-blob/containerclient): The `ContainerClient` class allows you to manipulate Azure Storage containers and their blobs.
* [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient): The `BlockBlobClient` class allows you to manipulate Azure Storage blobs.

## <a name="setting-up"></a>Setting up

This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for JavaScript.

### <a name="create-a-cors-rule"></a>Create a CORS rule

Before your web application can access blob storage from the client, you must configure your account to enable [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services).

In the Azure portal, select your storage account. To define a new CORS rule, navigate to the **Settings** section and select **CORS**. For this quickstart, you create an open CORS rule:

![Azure Blob Storage account CORS settings](media/quickstart-blobs-javascript-browser/azure-blob-storage-cors-settings.png)

The following table describes each CORS setting and explains the values used to define the rule.

|Setting  |Value  | Description  |
|---------|---------|---------|
| **ALLOWED ORIGINS** | **\*** | Accepts a comma-delimited list of domains set as acceptable origins. Setting the value to `*` allows all domains access to the storage account. |
| **ALLOWED METHODS** | **DELETE**, **GET**, **HEAD**, **MERGE**, **POST**, **OPTIONS**, and **PUT** | Lists the HTTP verbs allowed to execute against the storage account. For the purposes of this quickstart, select all available options. |
| **ALLOWED HEADERS** | **\*** | Defines a list of request headers (including prefixed headers) allowed by the storage account. Setting the value to `*` allows all headers access. |
| **EXPOSED HEADERS** | **\*** | Lists the response headers allowed by the account. Setting the value to `*` allows the account to send any header. |
| **MAX AGE** | **86400** | The maximum amount of time (in seconds) that the browser caches the preflight OPTIONS request. A value of *86400* keeps the cache for a full day. |

After you fill in the fields with the values from this table, click the **Save** button.

> [!IMPORTANT]
> Ensure that any settings you use in production expose the minimum amount of access necessary to your storage account to maintain secure access. The CORS settings described here define a relaxed security policy appropriate for a quickstart, but shouldn't be used in practice.

### <a name="create-a-shared-access-signature"></a>Create a shared access signature

The shared access signature (SAS) is used by the code running in the browser to authorize Azure Blob Storage requests. By using the SAS, the client can authorize access to storage resources without the account access key or connection string. For more information on SAS, see [Using shared access signatures (SAS)](../common/storage-sas-overview.md).

Follow these steps to get the Blob service SAS URL:

1. In the Azure portal, select your storage account.
2. Navigate to the **Settings** section and select **Shared access signature**.
3. Scroll down and click the **Generate SAS and connection string** button.
4. Scroll down further and locate the **Blob service SAS URL** field.
5. Click the **Copy to clipboard** button at the far-right end of the **Blob service SAS URL** field.
6. Save the copied URL for use in a later step.

### <a name="add-the-azure-blob-storage-client-library"></a>Add the Azure Blob Storage client library

On your local computer, create a new folder called *azure-blobs-js-browser* and open it in Visual Studio Code.

Select **View > Terminal** to open a console window inside Visual Studio Code. Run the following Node.js package manager (npm) command in the terminal window to create a [package.json](https://docs.npmjs.com/files/package.json) file.

```console
npm init -y
```

The Azure SDK is composed of many separate packages. You can choose which packages you need based on the services you intend to use. Run the following `npm` command in the terminal window to install the `@azure/storage-blob` package.

```console
npm install --save @azure/storage-blob
```

#### <a name="bundle-the-azure-blob-storage-client-library"></a>Bundle the Azure Blob Storage client library

To use Azure SDK libraries on a website, convert your code to work inside the browser. You do this by using a tool called a bundler. Bundling takes JavaScript code written using [Node.js](https://nodejs.org) conventions and converts it into a format that browsers understand. This quickstart article uses the [Parcel](https://parceljs.org/) bundler.
Install Parcel by running the following `npm` command in the terminal window:

```console
npm install -g parcel-bundler
```

In Visual Studio Code, open the *package.json* file and add a `browserslist` between the `license` and `dependencies` entries. This `browserslist` targets the latest versions of three popular browsers. The full *package.json* file should now look like this:

:::code language="json" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/package.json" highlight="12-16":::

Save the *package.json* file.

### <a name="import-the-azure-blob-storage-client-library"></a>Import the Azure Blob Storage client library

To use Azure SDK libraries inside JavaScript, import the `@azure/storage-blob` package. Create a new file in Visual Studio Code containing the following JavaScript code.

:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_ImportLibrary":::

Save the file as *index.js* in the *azure-blobs-js-browser* directory.

### <a name="implement-the-html-page"></a>Implement the HTML page

Create a new file in Visual Studio Code and add the following HTML code.

:::code language="html" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.html":::

Save the file as *index.html* in the *azure-blobs-js-browser* directory.
## <a name="code-examples"></a>Code examples

The example code shows you how to accomplish the following tasks with the Azure Blob Storage client library for JavaScript:

* [Declare fields for UI elements](#declare-fields-for-ui-elements)
* [Add your storage account info](#add-your-storage-account-info)
* [Create client objects](#create-client-objects)
* [Create and delete a storage container](#create-and-delete-a-storage-container)
* [List blobs](#list-blobs)
* [Upload blobs](#upload-blobs)
* [Delete blobs](#delete-blobs)

You'll run the code after you add all the snippets to the *index.js* file.

### <a name="declare-fields-for-ui-elements"></a>Declare fields for UI elements

Add the following code to the end of the *index.js* file.

:::code language="JavaScript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_DeclareVariables":::

Save the *index.js* file.

This code declares fields for each HTML element and implements a `reportStatus` function to display output.

In the following sections, add each new block of JavaScript code after the previous block.

### <a name="add-your-storage-account-info"></a>Add your storage account info

Add code to access your storage account. Replace the placeholder with the Blob service SAS URL that you generated earlier. Add the following code to the end of the *index.js* file.

:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_StorageAcctInfo":::

Save the *index.js* file.
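As an aside, it can help to see how the Blob service SAS URL relates to container URLs. The helper below is not part of the quickstart snippets; it is a hypothetical sketch showing that a container URL is the account's blob endpoint with the container name in the path and the SAS token kept as the query string.

```javascript
// Hypothetical helper (illustration only, not from the quickstart source):
// splice a container name into a Blob service SAS URL of the form
// "https://<account>.blob.core.windows.net/?sv=...&sig=...".
function containerUrlFromSas(blobSasUrl, containerName) {
  const url = new URL(blobSasUrl);
  // Put the container into the path; the SAS query string is preserved.
  url.pathname = "/" + containerName;
  return url.toString();
}

console.log(containerUrlFromSas(
  "https://myaccount.blob.core.windows.net/?sv=2019-12-12&sig=abc",
  "mycontainer"
));
```

The account name (`myaccount`) and token values here are placeholders; in the quickstart, the SDK's client objects perform this composition for you.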
### <a name="create-client-objects"></a>Create client objects

Create [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) and [ContainerClient](/javascript/api/@azure/storage-blob/containerclient) objects to interact with the Azure Blob Storage service. Add the following code to the end of the *index.js* file.

:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_CreateClientObjects":::

Save the *index.js* file.

### <a name="create-and-delete-a-storage-container"></a>Create and delete a storage container

The storage container is created and deleted when you select the corresponding button on the web page. Add the following code to the end of the *index.js* file.

:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_CreateDeleteContainer":::

Save the *index.js* file.

### <a name="list-blobs"></a>List blobs

The contents of the storage container are listed when you select the **List files** button. Add the following code to the end of the *index.js* file.

:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_ListBlobs":::

Save the *index.js* file.

This code calls the [ContainerClient.listBlobsFlat](/javascript/api/@azure/storage-blob/containerclient#listblobsflat-containerlistblobsoptions-) function, then uses an iterator to retrieve the name of each [BlobItem](/javascript/api/@azure/storage-blob/blobitem) returned. For each `BlobItem`, it updates the **Files** list with the [name](/javascript/api/@azure/storage-blob/blobitem#name) property value.
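The listing step above can be sketched independently of a live storage account. `ContainerClient.listBlobsFlat()` returns an async iterable, so the iteration is a `for await` loop; in this sketch the container client is a mock built from an async generator (an assumption made purely so the pattern can run anywhere), while the loop body mirrors what the quickstart's list logic does.

```javascript
// Collect the names of all blobs yielded by listBlobsFlat().
// Works against a real ContainerClient or any mock exposing the
// same async-iterable shape.
async function collectBlobNames(containerClient) {
  const names = [];
  for await (const blob of containerClient.listBlobsFlat()) {
    names.push(blob.name); // each item is a BlobItem with a "name" property
  }
  return names;
}

// Mock standing in for a real ContainerClient (illustrative only).
const mockContainer = {
  listBlobsFlat: async function* () {
    yield { name: "a.txt" };
    yield { name: "b.txt" };
  },
};

collectBlobNames(mockContainer).then((names) => console.log(names));
```

With a real client, you would pass the `ContainerClient` created earlier instead of `mockContainer`.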
### <a name="upload-blobs"></a>Upload blobs

Files are uploaded to the storage container when you select the **Select and upload files** button. Add the following code to the end of the *index.js* file.

:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_UploadBlobs":::

Save the *index.js* file.

This code connects the **Select and upload files** button to the hidden `file-input` element. The button's `click` event triggers the file input's `click` event and displays the file picker. After you select files and close the dialog, the `input` event occurs and the `uploadFiles` function is called. This function creates a [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object, then calls the browser-only [uploadBrowserData](/javascript/api/@azure/storage-blob/blockblobclient#uploadbrowserdata-blob---arraybuffer---arraybufferview--blockblobparalleluploadoptions-) function for each selected file. Each call returns a `Promise`, and each `Promise` is added to a list so that they can all be awaited together, causing the files to upload in parallel.

### <a name="delete-blobs"></a>Delete blobs

Files are deleted from the storage container when you select the **Delete selected files** button. Add the following code to the end of the *index.js* file.

:::code language="javascript" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/index.js" id="snippet_DeleteBlobs":::

Save the *index.js* file.
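The parallel-upload pattern described above can be reduced to a small sketch. This is not the quickstart's exact snippet: the container client here is a mock that records which blobs were "uploaded" (so the fan-out can run without a browser or storage account), but the `Promise.all` structure matches the description — one promise per file, all awaited together.

```javascript
// Fan out one upload per file and wait for all of them to finish.
// With the real SDK, containerClient.getBlockBlobClient(name) returns a
// BlockBlobClient and uploadBrowserData(file) returns a Promise.
function uploadAll(containerClient, files) {
  const promises = [];
  for (const file of files) {
    const blockBlobClient = containerClient.getBlockBlobClient(file.name);
    promises.push(blockBlobClient.uploadBrowserData(file));
  }
  return Promise.all(promises); // resolves once every upload completes
}

// Mock ContainerClient that records blob names instead of uploading.
const uploaded = [];
const mockClient = {
  getBlockBlobClient: (name) => ({
    uploadBrowserData: async () => uploaded.push(name),
  }),
};

uploadAll(mockClient, [{ name: "one.txt" }, { name: "two.txt" }])
  .then(() => console.log(uploaded));
```

In the real page, `files` would be the `FileList` from the hidden file input rather than these placeholder objects.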
This code calls the [ContainerClient.deleteBlob](/javascript/api/@azure/storage-blob/containerclient#deleteblob-string--blobdeleteoptions-) function to remove each file selected in the list. It then calls the `listFiles` function shown earlier to refresh the contents of the **Files** list.

## <a name="run-the-code"></a>Run the code

To run the code inside the Visual Studio Code debugger, configure the *launch.json* file for your browser.

### <a name="configure-the-debugger"></a>Configure the debugger

To set up the debugger extension in Visual Studio Code:

1. Select **Run > Add Configuration**.
2. Select **Microsoft Edge**, **Chrome**, or **Firefox**, depending on which extension you installed earlier in the [Prerequisites](#prerequisites) section.

Adding a new configuration creates a *launch.json* file and opens it in the editor. Modify the *launch.json* file so that the `url` value is `http://localhost:1234/index.html`, as shown here:

:::code language="json" source="~/azure-storage-snippets/blobs/quickstarts/JavaScript/V12/azure-blobs-js-browser/.vscode/launch.json" highlight="11":::

After updating it, save the *launch.json* file. This configuration tells Visual Studio Code which browser to open and which URL to load.

### <a name="launch-the-web-server"></a>Launch the web server

To launch the local development web server, select **View > Terminal** to open a console window inside Visual Studio Code, then enter the following command.

```console
parcel index.html
```

Parcel bundles your code and starts a local development server for your page at `http://localhost:1234/index.html`.
Changes you make to *index.js* are automatically built and reflected on the development server each time you save the file.

If you receive a message that says the **configured port 1234 could not be used**, you can change the port by running the command `parcel -p <port#> index.html`. Update the port in the URL path in the *launch.json* file to match.

### <a name="start-debugging"></a>Start debugging

Run the page in the debugger to see how Blob Storage works. If any errors occur, the **Status** pane on the web page displays the error message received.

To open *index.html* in the browser with the Visual Studio Code debugger attached, select **Debug > Start Debugging** in Visual Studio Code, or press F5.

### <a name="use-the-web-app"></a>Use the web app

You can verify the results of the API calls in the [Azure portal](https://portal.azure.com) as you go through the following steps.

#### <a name="step-1---create-a-container"></a>Step 1 - Create a container

1. In the web app, select **Create container**. The status indicates that a container was created.
2. To verify the container was created, select your storage account in the Azure portal. Under **Blob service**, select **Containers**. Verify that the new container appears. (You may need to select **Refresh**.)

#### <a name="step-2---upload-a-blob-to-the-container"></a>Step 2 - Upload a blob to the container

1. On your local computer, create and save a test file, such as *test.txt*.
2. In the web app, select **Select and upload files**.
3. Browse to your test file, and then select **Open**.
   The status indicates that the file was uploaded and the file list was retrieved.
4. In the Azure portal, select the name of the new container that you created earlier. Verify that the test file appears.

#### <a name="step-3---delete-the-blob"></a>Step 3 - Delete the blob

1. In the web app, under **Files**, select the test file.
2. Select **Delete selected files**. The status indicates that the file was deleted and that the container contains no files.
3. In the Azure portal, select **Refresh**. Verify that you see **No blobs found**.

#### <a name="step-4---delete-the-container"></a>Step 4 - Delete the container

1. In the web app, select **Delete container**. The status indicates that the container was deleted.
2. In the Azure portal, select the **\<account-name\> | Containers** link at the top-left of the portal pane.
3. Select **Refresh**. The new container no longer appears.
4. Close the web app.

### <a name="clean-up-resources"></a>Clean up resources

Click on the **Terminal** console in Visual Studio Code and press CTRL+C to stop the web server.

To clean up the resources created during this quickstart, go to the [Azure portal](https://portal.azure.com) and delete the resource group you created in the [Prerequisites](#prerequisites) section.

## <a name="next-steps"></a>Next steps

In this quickstart, you learned how to upload, list, and delete blobs using JavaScript. You also learned how to create and delete a Blob Storage container.
For tutorials, samples, quickstarts, and other documentation, visit:

> [!div class="nextstepaction"]
> [Azure for JavaScript documentation](/azure/developer/javascript/)

* To learn more, see the [Azure Storage Blob client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
* For Blob Storage sample apps, see [Getting started with samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
# getting-started-with-blazor
---
layout: default
title: Kantan Example 00
parent: Examples
---

# Kantan Example 00

***Tox - container_kantan-example-00***

[Load Example](?remoteTox=https://github.com/raganmd/touchdesigner-community-examples-code/blob/main/tox/container_kantan_example_00.tox?raw=true){: .btn .btn-green}
[Open Network](?openNetwork=True){: .btn .btn-blue}

Currently no body copy

---

#### Created 03.18.21

*Matthew Ragan*
# xsort

A small program demonstrating how to use a bit vector to sort and print the elements of an input array, subject to the following restrictions:

- the value range of the array elements is limited
- the elements are unique

# Usage

```
make
./xsort
```
---
title: Azure Data Lake Storage Gen2 Java SDK for files and ACLs
description: Use Azure Storage libraries for Java to manage directories and file and directory access control lists (ACLs) in storage accounts that have a hierarchical namespace (HNS) enabled.
author: normesta
ms.service: storage
ms.date: 09/10/2020
ms.custom: devx-track-java
ms.author: normesta
ms.topic: how-to
ms.subservice: data-lake-storage-gen2
ms.reviewer: prishet
ms.openlocfilehash: f6e8219f744a91628f9860f0af133c07eddb4253
ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/25/2020
ms.locfileid: "95913383"
---
# <a name="use-java-to-manage-directories-files-and-acls-in-azure-data-lake-storage-gen2"></a>Use Java to manage directories, files, and ACLs in Azure Data Lake Storage Gen2

This article shows you how to use Java to create and manage directories, files, and permissions in storage accounts that have a hierarchical namespace (HNS) enabled.

[Package (Maven)](https://search.maven.org/artifact/com.azure/azure-storage-file-datalake) | [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-file-datalake) | [API reference](/java/api/overview/azure/storage-file-datalake-readme) | [Gen1 to Gen2 mapping](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-file-datalake/GEN1_GEN2_MAPPING.md) | [Give feedback](https://github.com/Azure/azure-sdk-for-java/issues)

## <a name="prerequisites"></a>Prerequisites

> [!div class="checklist"]
> * An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
> * A storage account that has a hierarchical namespace (HNS) enabled.
Follow [these](../common/storage-account-create.md) instructions to create one.

## <a name="set-up-your-project"></a>Set up your project

To get started, open [this page](https://search.maven.org/artifact/com.azure/azure-storage-file-datalake) and find the latest version of the Java library. Then, open the *pom.xml* file in your text editor. Add a dependency element that references that version.

If you plan to authenticate your client application by using Azure Active Directory (Azure AD), then add a dependency to the Azure Secret Client Library. See [Adding the Secret Client Library package to your project](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity#adding-the-package-to-your-project).

Next, add these import statements to your code file.

```java
import com.azure.core.credential.TokenCredential;
import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeDirectoryClient;
import com.azure.storage.file.datalake.DataLakeFileClient;
import com.azure.storage.file.datalake.DataLakeFileSystemClient;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
import com.azure.storage.file.datalake.models.ListPathsOptions;
import com.azure.storage.file.datalake.models.PathAccessControl;
import com.azure.storage.file.datalake.models.PathAccessControlEntry;
import com.azure.storage.file.datalake.models.PathItem;
import com.azure.storage.file.datalake.models.PathPermissions;
import com.azure.storage.file.datalake.models.RolePermissions;
```

## <a name="connect-to-the-account"></a>Connect to the account

To use the snippets in this article, you'll need to create a **DataLakeServiceClient** instance that represents the storage account.
### <a name="connect-by-using-an-account-key"></a>Connect by using an account key

This is the easiest way to connect to an account.

This example creates a **DataLakeServiceClient** instance by using an account key.

```java
static public DataLakeServiceClient GetDataLakeServiceClient
(String accountName, String accountKey){

    StorageSharedKeyCredential sharedKeyCredential =
        new StorageSharedKeyCredential(accountName, accountKey);

    DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();

    builder.credential(sharedKeyCredential);
    builder.endpoint("https://" + accountName + ".dfs.core.windows.net");

    return builder.buildClient();
}
```

### <a name="connect-by-using-azure-active-directory-azure-ad"></a>Connect by using Azure Active Directory (Azure AD)

You can use the [Azure identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity) to authenticate your application with Azure AD.

This example creates a **DataLakeServiceClient** instance by using a client ID, a client secret, and a tenant ID. To get these values, see [Acquire a token from Azure AD for authorizing requests from a client application](../common/storage-auth-aad-app.md).
```java
static public DataLakeServiceClient GetDataLakeServiceClient
    (String accountName, String clientId, String ClientSecret, String tenantID){

    String endpoint = "https://" + accountName + ".dfs.core.windows.net";

    ClientSecretCredential clientSecretCredential = new ClientSecretCredentialBuilder()
        .clientId(clientId)
        .clientSecret(ClientSecret)
        .tenantId(tenantID)
        .build();

    DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();
    return builder.credential(clientSecretCredential).endpoint(endpoint).buildClient();
}
```

> [!NOTE]
> For more examples, see the [Azure identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity) documentation.

## <a name="create-a-container"></a>Create a container

A container acts as a file system for your files. You can create one by calling the **DataLakeServiceClient.createFileSystem** method.

This example creates a container named `my-file-system`.

```java
static public DataLakeFileSystemClient CreateFileSystem
(DataLakeServiceClient serviceClient){

    return serviceClient.createFileSystem("my-file-system");
}
```

## <a name="create-a-directory"></a>Create a directory

Create a directory reference by calling the **DataLakeFileSystemClient.createDirectory** method.

This example adds a directory named `my-directory` to a container, and then adds a sub-directory named `my-subdirectory`.
```java
static public DataLakeDirectoryClient CreateDirectory
(DataLakeServiceClient serviceClient, String fileSystemName){

    DataLakeFileSystemClient fileSystemClient =
        serviceClient.getFileSystemClient(fileSystemName);

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.createDirectory("my-directory");

    return directoryClient.createSubDirectory("my-subdirectory");
}
```

## <a name="rename-or-move-a-directory"></a>Rename or move a directory

Rename or move a directory by calling the **DataLakeDirectoryClient.rename** method. Pass the path of the desired directory as a parameter.

This example renames a sub-directory to the name `my-subdirectory-renamed`.

```java
static public DataLakeDirectoryClient
    RenameDirectory(DataLakeFileSystemClient fileSystemClient){

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.getDirectoryClient("my-directory/my-subdirectory");

    return directoryClient.rename(
        fileSystemClient.getFileSystemName(),"my-subdirectory-renamed");
}
```

This example moves a directory named `my-subdirectory-renamed` to a sub-directory of a directory named `my-directory-2`.

```java
static public DataLakeDirectoryClient MoveDirectory
(DataLakeFileSystemClient fileSystemClient){

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.getDirectoryClient("my-directory/my-subdirectory-renamed");

    return directoryClient.rename(
        fileSystemClient.getFileSystemName(),"my-directory-2/my-subdirectory-renamed");
}
```

## <a name="delete-a-directory"></a>Delete a directory

Delete a directory by calling the **DataLakeDirectoryClient.deleteWithResponse** method.

This example deletes a directory named `my-directory`.
```java
static public void DeleteDirectory(DataLakeFileSystemClient fileSystemClient){

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.getDirectoryClient("my-directory");

    directoryClient.deleteWithResponse(true, null, null, null);
}
```

## <a name="upload-a-file-to-a-directory"></a>Upload a file to a directory

First, create a file reference in the target directory by creating an instance of the **DataLakeFileClient** class. Upload a file by calling the **DataLakeFileClient.append** method. Make sure to complete the upload by calling the **DataLakeFileClient.flush** method.

This example uploads a text file to a directory named `my-directory`.

```java
static public void UploadFile(DataLakeFileSystemClient fileSystemClient)
    throws FileNotFoundException{

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.getDirectoryClient("my-directory");

    DataLakeFileClient fileClient = directoryClient.createFile("uploaded-file.txt");

    File file = new File("C:\\mytestfile.txt");

    InputStream targetStream = new BufferedInputStream(new FileInputStream(file));

    long fileSize = file.length();

    fileClient.append(targetStream, 0, fileSize);

    fileClient.flush(fileSize);
}
```

> [!TIP]
> If your file size is large, your code will have to make multiple calls to the **DataLakeFileClient.append** method. Consider using the **DataLakeFileClient.uploadFromFile** method instead. That way, you can upload the entire file in a single call.
>
> See the next section for an example.

## <a name="upload-a-large-file-to-a-directory"></a>Upload a large file to a directory

Use the **DataLakeFileClient.uploadFromFile** method to upload large files without having to make multiple calls to the **DataLakeFileClient.append** method.
```java
static public void UploadFileBulk(DataLakeFileSystemClient fileSystemClient)
    throws FileNotFoundException{

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.getDirectoryClient("my-directory");

    DataLakeFileClient fileClient = directoryClient.getFileClient("uploaded-file.txt");

    fileClient.uploadFromFile("C:\\mytestfile.txt");
}
```

## <a name="download-from-a-directory"></a>Download from a directory

First, create a **DataLakeFileClient** instance that represents the file that you want to download. Use the **DataLakeFileClient.read** method to read the file. Use any Java file processing API to save the bytes from the stream to a file.

```java
static public void DownloadFile(DataLakeFileSystemClient fileSystemClient)
    throws FileNotFoundException, java.io.IOException{

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.getDirectoryClient("my-directory");

    DataLakeFileClient fileClient = directoryClient.getFileClient("uploaded-file.txt");

    File file = new File("C:\\downloadedFile.txt");

    OutputStream targetStream = new FileOutputStream(file);

    fileClient.read(targetStream);

    targetStream.close();
}
```

## <a name="list-directory-contents"></a>List directory contents

This example prints the name of each file that is located in a directory named `my-directory`.
```java
static public void ListFilesInDirectory(DataLakeFileSystemClient fileSystemClient){

    ListPathsOptions options = new ListPathsOptions();
    options.setPath("my-directory");

    PagedIterable<PathItem> pagedIterable =
        fileSystemClient.listPaths(options, null);

    java.util.Iterator<PathItem> iterator = pagedIterable.iterator();

    PathItem item = iterator.next();

    while (item != null)
    {
        System.out.println(item.getName());

        if (!iterator.hasNext())
        {
            break;
        }

        item = iterator.next();
    }
}
```

## <a name="manage-access-control-lists-acls"></a>Manage access control lists (ACLs)

You can get, set, and update access permissions of directories and files.

> [!NOTE]
> If you're using Azure Active Directory (Azure AD) to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md).

### <a name="manage-a-directory-acl"></a>Manage a directory ACL

This example gets and then sets the ACL of a directory named `my-directory`. It gives the owning user read, write, and execute permissions, gives the owning group only read and execute permissions, and gives all others read access.

> [!NOTE]
> If your application authorizes access by using Azure Active Directory (Azure AD), then make sure that the security principal that your application uses to authorize access has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control in Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md).

```java
static public void ManageDirectoryACLs(DataLakeFileSystemClient fileSystemClient){

    DataLakeDirectoryClient directoryClient =
        fileSystemClient.getDirectoryClient("my-directory");

    PathAccessControl directoryAccessControl =
        directoryClient.getAccessControl();

    List<PathAccessControlEntry> pathPermissions =
        directoryAccessControl.getAccessControlList();

    System.out.println(PathAccessControlEntry.serializeList(pathPermissions));

    RolePermissions groupPermission = new RolePermissions();
    groupPermission.setExecutePermission(true).setReadPermission(true);

    RolePermissions ownerPermission = new RolePermissions();
    ownerPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(true);

    RolePermissions otherPermission = new RolePermissions();
    otherPermission.setReadPermission(true);

    PathPermissions permissions = new PathPermissions();

    permissions.setGroup(groupPermission);
    permissions.setOwner(ownerPermission);
    permissions.setOther(otherPermission);

    directoryClient.setPermissions(permissions, null, null);

    pathPermissions = directoryClient.getAccessControl().getAccessControlList();

    System.out.println(PathAccessControlEntry.serializeList(pathPermissions));
}
```

You can also get and set the ACL of the root directory of a container. To get the root directory, pass an empty string (`""`) into the **DataLakeFileSystemClient.getDirectoryClient** method.

### <a name="manage-a-file-acl"></a>Manage a file ACL

This example gets and then sets the ACL of a file named `uploaded-file.txt`.
Cet exemple donne à l’utilisateur propriétaire des autorisations de lecture, d’écriture et d’exécution, donne au groupe propriétaire uniquement des autorisations de lecture et d’exécution et donne à tous les autres l’accès en lecture. > [!NOTE] > Si votre application autorise l’accès en utilisant Azure Active Directory (Azure AD), vérifiez que le [rôle de propriétaire des données Blob du stockage](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) a été attribué au principal de sécurité utilisé par votre application pour autoriser l’accès. Pour en savoir plus sur l’application des autorisations ACL et les conséquences de leur modification, consultez [Contrôle d’accès dans Azure Data Lake Storage Gen2](./data-lake-storage-access-control.md). ```java static public void ManageFileACLs(DataLakeFileSystemClient fileSystemClient){ DataLakeDirectoryClient directoryClient = fileSystemClient.getDirectoryClient("my-directory"); DataLakeFileClient fileClient = directoryClient.getFileClient("uploaded-file.txt"); PathAccessControl fileAccessControl = fileClient.getAccessControl(); List<PathAccessControlEntry> pathPermissions = fileAccessControl.getAccessControlList(); System.out.println(PathAccessControlEntry.serializeList(pathPermissions)); RolePermissions groupPermission = new RolePermissions(); groupPermission.setExecutePermission(true).setReadPermission(true); RolePermissions ownerPermission = new RolePermissions(); ownerPermission.setExecutePermission(true).setReadPermission(true).setWritePermission(true); RolePermissions otherPermission = new RolePermissions(); otherPermission.setReadPermission(true); PathPermissions permissions = new PathPermissions(); permissions.setGroup(groupPermission); permissions.setOwner(ownerPermission); permissions.setOther(otherPermission); fileClient.setPermissions(permissions, null, null); pathPermissions = fileClient.getAccessControl().getAccessControlList(); 
System.out.println(PathAccessControlEntry.serializeList(pathPermissions)); } ``` ### <a name="set-an-acl-recursively"></a>Définir une liste de contrôle d’accès (ACL) de manière récursive Vous pouvez ajouter, mettre à jour et supprimer des listes ACL de manière récursive au niveau des éléments enfants existants d’un répertoire parent sans avoir à apporter ces modifications individuellement à chaque élément enfant. Pour plus d’informations, consultez [Définir des listes de contrôle d’accès (ACL) de manière récursive pour Azure Data Lake Storage Gen2](recursive-access-control-lists.md). ## <a name="see-also"></a>Voir aussi * [Documentation de référence de l’API](/java/api/overview/azure/storage-file-datalake-readme) * [Package (Maven)](https://search.maven.org/artifact/com.azure/azure-storage-file-datalake) * [Exemples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-file-datalake) * [Mappage de Gen1 à Gen2](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-file-datalake/GEN1_GEN2_MAPPING.md) * [Problèmes connus](data-lake-storage-known-issues.md#api-scope-data-lake-client-library) * [Envoyer des commentaires](https://github.com/Azure/azure-sdk-for-java/issues)
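The ACL examples above all grant the owner `rwx`, the group `r-x`, and all others `r--`, following POSIX permission semantics. As a plain-Java illustration of that triple — this sketch is hypothetical, uses no Azure SDK types, and merely shows the mode string those boolean settings correspond to:

```java
public class PermissionSketch {

    // Render one read/write/execute triple in its symbolic form, e.g. "r-x".
    static String triple(boolean read, boolean write, boolean execute) {
        return (read ? "r" : "-") + (write ? "w" : "-") + (execute ? "x" : "-");
    }

    // Combine the owner, group, and other triples into a full mode string.
    static String symbolic(String owner, String group, String other) {
        return owner + group + other;
    }

    public static void main(String[] args) {
        // Owner: read, write, execute; group: read, execute; other: read only.
        String mode = symbolic(
                triple(true, true, true),
                triple(true, false, true),
                triple(true, false, false));
        System.out.println(mode); // rwxr-xr--
    }
}
```

This is the same `rwxr-xr--` string you would expect to see in the serialized ACL entries printed by the examples above.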
---
title: Considerations for Installing SQL Server Using SysPrep | Microsoft Docs
ms.custom: ''
ms.date: 05/24/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: install
ms.topic: conceptual
ms.assetid: e1792eeb-2874-4653-b20e-3063f4eb4e5d
author: MashaMSFT
ms.author: mathoma
manager: craigg
---
# <a name="considerations-for-installing-sql-server-using-sysprep"></a>Considerations for Installing SQL Server Using SysPrep

[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] SysPrep lets you prepare a stand-alone instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] on a computer and complete its configuration at a later time. [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] SysPrep involves a two-step process to obtain a configured stand-alone instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. The steps are:

- [Prepare the image](#BKMK_PrepareImage)

  This step stops the installation process after the product binaries are installed, without configuring the computer-, network-, or account-specific information for the instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] being prepared.

- [Complete the image](#BKMK_CompleteImage)

  This step completes the configuration of a prepared instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. During this step you can supply the computer-, network-, and account-specific information.

For more information about installing [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] using SysPrep, see [Install SQL Server 2014 using Sysprep](install-sql-server-using-sysprep.md).

## <a name="common-uses-for-includessnoversionincludesssnoversion-mdmd-sysprep"></a>Common uses for [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] SysPrep

The [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] SysPrep feature can be used in the following ways:

- Using the prepare image step, you can prepare one or more unconfigured instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] on the same computer. You can configure these prepared instances using the complete image step on the same computer.
- You can capture the configuration file from the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Setup of the prepared instance and use it to prepare additional unconfigured instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] on multiple computers for later configuration.
- In combination with the Windows System Preparation tool (also known as Windows SysPrep), you can create an operating system image that includes the unconfigured prepared instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] on the source computer. You can then deploy the operating system image to multiple computers. After the operating system configuration is complete, you can configure the prepared instances using the complete image step of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Setup.

The Windows System Preparation tool (SysPrep) is used to prepare Windows operating system images. It captures a customized operating system image for deployment throughout an organization. For more information about SysPrep and its uses, see [What is Sysprep](https://go.microsoft.com/fwlink/?LinkId=143546).

## <a name="installation-media-considerations"></a>Installation media considerations

If you use a full version of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], consider the following:

- Non-Express editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]:
  - To install the product binaries, the prepare image step uses the Evaluation edition of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. When the instance is completed, the edition of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] depends on the product ID supplied during the complete image step.
  - If you supply an Evaluation product ID, the evaluation period expires 180 days after the prepared instance is completed.
- Express editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]:
  - To prepare an instance of the Express editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], use the Express installation media.
  - You cannot specify product IDs for a prepared instance of the Express editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].

## <a name="supported-includessnoversionincludesssnoversion-mdmd-installations"></a>Supported [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] installations

In [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)], SysPrep supports all features of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], including the tools. You can prepare multiple instances for side-by-side installations of [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] or earlier versions. The features of these instances must support SysPrep. [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Native Client is installed and completed automatically at the end of the prepare image step. When you prepare [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Browser and [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] are prepared automatically. They are completed when you complete the instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] through the complete image step.

For information about the supported editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], see [Features Supported by the Editions of SQL Server 2014](../../getting-started/features-supported-by-the-editions-of-sql-server-2014.md).

You can perform an edition upgrade while configuring a prepared instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. This option is not supported for the Express editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].

Beginning with [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)], [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] SysPrep supports [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] failover cluster installations from the command prompt.

## <a name="includessnoversionincludesssnoversion-mdmd-sysprep-limitations"></a>[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] SysPrep limitations

Repairing a prepared instance is not supported. If Setup fails during the prepare image or complete image step, you must uninstall.

## <a name="BKMK_PrepareImage"></a> Prepare the image

The prepare image step installs the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] product and features, but does not configure the installation. The [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] features to install and the installation path for the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] product files can be specified during this step. You can prepare an instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] through **Image preparation of a stand-alone instance of SQL Server** on the **Advanced** page of the **Installation Center**, or from the command prompt.

- You can prepare multiple instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] on the same computer and complete them later.
- You can add or remove features supported for SysPrep installations from existing prepared instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].

After the instance is prepared, a shortcut is made available on the **Start** menu to complete the configuration of the prepared instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].

## <a name="BKMK_CompleteImage"></a> Complete the image

You can complete prepared instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] through one of the following methods:

- Use the shortcut on the Start menu.
- Go to the **Image completion of a prepared stand-alone instance** step on the **Advanced** page of the **Installation Center**.

## <a name="see-also"></a>See Also

[Planning a SQL Server Installation](../../sql-server/install/planning-a-sql-server-installation.md)
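Both steps can also be driven from the command prompt. The sketch below is illustrative only — the feature list, instance ID, instance name, and account values are hypothetical placeholders, and the exact parameter set should be verified against the Setup command-line documentation for your release:

```
REM Step 1: install the binaries without configuring the instance.
Setup.exe /q /ACTION=PrepareImage /FEATURES=SQLEngine /INSTANCEID=MyPreparedInstance /IACCEPTSQLSERVERLICENSETERMS

REM Step 2 (later, possibly on a deployed image): complete the configuration.
Setup.exe /q /ACTION=CompleteImage /INSTANCENAME=MSSQLSERVER /INSTANCEID=MyPreparedInstance /SQLSYSADMINACCOUNTS="CONTOSO\Admin" /IACCEPTSQLSERVERLICENSETERMS
```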
# ChromeLikeTabSwitcher - RELEASE NOTES

## Version 0.4.6 (Feb. 12th 2020)

A minor release, which introduces the following changes:

- Allow to specify whether the state of a `TabSwitcher` should be preserved (see https://github.com/michael-rapp/ChromeLikeTabSwitcher/pull/26).
- Updated dependency "AndroidUtil" to version 2.1.0.
- Updated AppCompat support library to version 1.1.0.

## Version 0.4.5 (Aug. 9th 2019)

A minor release, which introduces the following changes:

- Added the `onRecycleView`-method to the class `TabSwitcherDecorator`.
- Updated dependency "AndroidUtil" to version 2.0.2.
- Updated the annotation support library to version 1.1.0.

## Version 0.4.4 (Feb. 23rd 2019)

A minor release, which introduces the following changes:

- Updated dependency "AndroidUtil" to version 2.0.1.
- Updated dependency "AndroidMaterialViews" to version 3.0.1.

## Version 0.4.3 (Dec. 30th 2018)

A bugfix release, which introduces the following changes:

- Reverted the change from the previous release as it caused problems.

## Version 0.4.2 (Dec. 29th 2018)

A bugfix release, which introduces the following changes:

- When selecting a tab, it is now only re-rendered if it was not already selected before.
- Updated the annotation support library to version 1.0.1.

## Version 0.4.1 (Nov. 20th 2018)

A minor release, which introduces the following changes:

- Improved the appearance of the `TabSwitcherDrawable`.

## Version 0.4.0 (Nov. 18th 2018)

A feature release, which introduces the following changes:

- Migrated the library to Android X.
- Updated dependency "AndroidUtil" to version 2.0.0.
- Updated dependency "AndroidMaterialViews" to version 3.0.0.
- Updated targetSdkVersion to 28.
- Updated the `TabSwitcherDrawable` to be in accordance with the Material Design 2 guidelines.

## Version 0.3.7 (Nov. 6th 2018)

A bugfix release, which fixes the following issues:

- Fixed an issue that caused animations to be started while another animation was still executing.
- Fixed a crash when dragging after repeatedly removing tabs (this is an enhanced attempt to fix https://github.com/michael-rapp/ChromeLikeTabSwitcher/issues/20).

## Version 0.3.6 (Nov. 5th 2018)

A bugfix release, which fixes the following issues:

- Prevented the close buttons of stacked tabs from being clicked (fixes https://github.com/michael-rapp/ChromeLikeTabSwitcher/issues/20).

## Version 0.3.5 (Oct. 13th 2018)

A minor release, which introduces the following changes:

- Added the `notifyTabChanged`-method to the class `TabSwitcher`.
- Updated dependency "AndroidUtil" to version 1.21.0.

## Version 0.3.4 (Sep. 1st 2018)

A bugfix release, which fixes the following issues:

- Fixed an infinite recursion when calling a `TabSwitcher`'s `clearAllSavedStates`-method.

## Version 0.3.3 (Aug. 5th 2018)

A bugfix release, which fixes the following issues:

- Fixed a crash when a `TabSwitcher`'s `onSaveInstanceState`-method is called before the view is laid out. This might happen on some devices when the app is started in landscape mode.

## Version 0.3.2 (May 20th 2018)

A bugfix release, which fixes the following issues:

- Fixed a crash on devices with API level 16 (see https://github.com/michael-rapp/ChromeLikeTabSwitcher/pull/16).

## Version 0.3.1 (May 11th 2018)

A minor release, which introduces the following changes:

- Increased the size of the `TabSwitcherDrawable` to enhance consistency with other menu items.

## Version 0.3.0 (May 5th 2018)

A feature release, which introduces the following changes:

- It is now possible to use vector drawables for the navigation icon of a `TabSwitcher`'s toolbar as well as for the icon and close button icon of tabs (even on pre-Lollipop devices).
- Updated AppCompat v7 support library to version 27.1.1
- Updated AppCompat annotations support library to version 27.1.1
- Updated dependency "AndroidUtil" to version 1.20.3.
- Updated dependency "AndroidMaterialViews" to version 2.1.11.

## Version 0.2.9 (Feb. 15th 2018)

A bugfix release, which fixes the following issues:

- Fixed a crash of the example app on tablets.
- Updated the dependency "AndroidUtil" to version 1.20.1.

## Version 0.2.8 (Feb. 4th 2018)

A bugfix release, which fixes the following issues:

- Implemented an alternative fix regarding the issue of previews not being loaded, which does not come with a performance loss. Previews are now again rendered in background threads.

## Version 0.2.7 (Feb. 4th 2018)

A bugfix release, which fixes the following issues:

- Improved the fix in the last release regarding previews not being loaded. Previews are now entirely rendered on the UI thread.

## Version 0.2.6 (Feb. 4th 2018)

A bugfix release, which fixes the following issues:

- Fixed previews of tabs failing to load when certain child views (e.g. `AppBarLayout`s) are contained by the tabs.
- Fixed a crash when storing the state of a TabSwitcher, if the switcher is still shown after all tabs have been removed.

## Version 0.2.5 (Jan. 28th 2018)

A bugfix release, which fixes the following issues:

- Fixed an issue which caused states of a `StatefulTabSwitcherDecorator` to be cleared prematurely.

## Version 0.2.4 (Jan. 27th 2018)

A minor release, which introduces the following changes:

- Added the class `StatefulTabSwitcherDecorator`.
- The saved states of tabs are now cleared by default when the corresponding tab is removed. This can be turned off by using the `clearSavedStatesWhenRemovingTabs`-method.
- Fade animations can now be used to show the previews of tabs when using the smartphone layout.
- Updated `targetSdkVersion` to API level 27 (Android 8.1).
- Updated dependency "AndroidUtil" to version 1.19.0.
- Updated dependency "AndroidMaterialViews" to version 2.1.10.
- The data structure `ListenerList` is now used for managing event listeners.

## Version 0.2.3 (Jan. 23rd 2018)

A bugfix release, which fixes the following issues:

- Added an additional method parameter to the interface `Model.Listener`. It allows reliably determining whether the selection changed when adding or removing tabs.

## Version 0.2.2 (Jan. 14th 2018)

A bugfix release, which fixes the following issues:

- `TabSwitcherButton`s are now rendered properly in the preview of tabs.
- Added the attribute `applyPaddingToTabs`.

## Version 0.2.1 (Jan. 10th 2018)

A bugfix release, which fixes the following issues:

- Overshooting towards the end as well as the start is now possible when using the phone layout, if only one tab is contained by a `TabSwitcher`
- Fixed the navigation icon of a `TabSwitcher`'s toolbar not being shown

## Version 0.2.0 (Dec. 30th 2017)

A major release, which introduces the following features:

- Added predefined dark and light themes
- Added support for drag gestures. So far, the drag gestures `SwipeGesture` and `PullDownGesture` are provided
- Added the `tabContentBackgroundColor` XML attribute and corresponding setter methods for customizing the background color of a tab's content
- The background color of tabs is now adapted when pressed, if a `ColorStateList` with state `android:state_pressed` is set
- Added the possibility to show a certain placeholder view when a `TabSwitcher` is empty
- Added the functionality to show a circular progress bar instead of an icon for individual tabs.
- Updated dependency "AndroidUtil" to version 1.18.3
- Updated AppCompat v7 support library to version 27.0.2
- Updated AppCompat annotations support library to version 27.0.2
- Added dependency "AndroidMaterialViews" with version 2.1.9

## Version 0.1.7 (Dec. 24th 2017)

A minor release, which introduces the following changes:

- Added an additional `setupWithMenu`-method for setting up the menu of a `TabSwitcher`. If necessary, it uses an `OnGlobalLayoutListener` internally.

## Version 0.1.6 (Dec. 23rd 2017)

A bugfix release, which fixes the following issues:

- If a `Tab`, which is added to a `TabSwitcher`, happens to have the same hash code as a previously removed `Tab`, the state of the removed tab is not restored anymore.

## Version 0.1.5 (Nov. 25th 2017)

A bugfix release, which fixes the following issues:

- Fixed a crash which might occur when resuming the activity after it was in the background for a long time (see https://github.com/michael-rapp/ChromeLikeTabSwitcher/issues/7).
- Updated dependency "AndroidUtil" to version 1.18.2.
- Updated AppCompat v7 support library to version 27.0.1.
- Updated AppCompat annotations support library to version 27.0.1.

## Version 0.1.4 (Oct. 2nd 2017)

A bugfix release, which fixes the following issues:

- Fixed an issue in the example app which caused the contents of `EditText` widgets to be shown in the wrong tabs.
- Updated `targetSdkVersion` to API level 26.
- Updated dependency "AndroidUtil" to version 1.18.0.
- Updated AppCompat v7 support library to version 26.1.0.
- Updated AppCompat annotations support library to version 26.1.0.

## Version 0.1.3 (May 23rd 2017)

A bugfix release, which fixes the following issues:

- Fixed issues when margins are applied to a `TabSwitcher`

## Version 0.1.2 (May 22nd 2017)

A bugfix release, which fixes the following issues:

- Resolved issues when restoring the positions of tabs after orientation changes or when resuming the app

## Version 0.1.1 (May 11th 2017)

A bugfix release, which fixes the following issues:

- Improved detection of click events

## Version 0.1.0 (Apr. 22nd 2017)

The first unstable release of the library, which provides the following features:

- Provides a layout optimized for smartphones. The layout is adapted depending on whether it is displayed in landscape or portrait mode
- Tabs can dynamically be added and removed in an animated manner using a `SwipeAnimation`, `RevealAnimation` or `PeekAnimation`
- The tab switcher's state is automatically restored on configuration changes
- Views are recycled and previews are rendered as bitmaps in order to increase the performance
---
description: "Learn more about: Troubleshooting Maps"
title: "Troubleshooting Maps | Microsoft Docs"
ms.custom: ""
ms.date: "06/08/2017"
ms.prod: "biztalk-server"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: 32e2eb52-6740-4cf5-82ec-3b6d0b784276
caps.latest.revision: 7
author: "MandiOhlinger"
ms.author: "mandia"
manager: "anneta"
---
# Troubleshooting Maps

This topic provides troubleshooting strategies, as well as problem details and resolution information, for maps.

## Troubleshooting Strategies

### Validate your map

This may sound obvious, but you should always validate your map at different points throughout its development. This will help identify design, logic, and schema problems early in the development cycle, when it is easier to fix them or find an alternative solution.

##### To validate a BizTalk map

1. In Solution Explorer, open the map that you want to validate.
2. In Solution Explorer, right-click the map, and then click **Validate Map**.
3. In the Output window, verify the results.

> [!NOTE]
> When you validate a map, your test instance data is not checked to see if it violates any data types defined in the schemas. You can check the instance data when you test the map, or validate the instance data in BizTalk Editor.

### Review the XSLT generated for your map

It is often useful to inspect the XSLT generated by the map compiler. Some of the benefits of inspecting XSLT include:

- If you are using looping or custom functoids, you will better understand how the looping is performed and how the custom functoid is invoked.
- If you have a complicated map, reviewing the XSLT will enable you to see how the map is translated into a transform and may give you insight about how to better structure, replace, or streamline one or more parts.
- If you are using custom scripts or other artifacts, reviewing the XSLT will enable you to see how the scripts, artifacts, and other parts of the map interact.
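To get a feel for what such generated transforms look like, here is a minimal, self-contained sketch. It is not produced by BizTalk — the element names are invented — but it runs the same kind of `xsl:for-each` loop over a repeating source record that a map compiler emits for looping functoids, using the JAXP transformer that ships with Java:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltForEachDemo {

    // A tiny stylesheet of the kind a map compiler emits: an xsl:for-each
    // loops over repeating source records and writes one destination node each.
    static final String XSLT =
        "<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
      + "<xsl:output method=\"xml\" omit-xml-declaration=\"yes\"/>"
      + "<xsl:template match=\"/Orders\">"
      + "<Out><xsl:for-each select=\"Order\">"
      + "<Id><xsl:value-of select=\"@id\"/></Id>"
      + "</xsl:for-each></Out>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    // Apply the stylesheet to an XML string and return the trimmed result.
    static String transform(String xml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSLT)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString().trim();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<Orders><Order id=\"1\"/><Order id=\"2\"/></Orders>";
        System.out.println(transform(xml)); // <Out><Id>1</Id><Id>2</Id></Out>
    }
}
```

Reading the XSLT that BizTalk produces with this pattern in mind makes it easier to spot where a loop was placed in the destination tree.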
Fortunately, viewing the XSLT for a map is an easy process.

##### To view the XSLT generated by the map compiler

1. From a [!INCLUDE[btsVStudioNoVersion](../includes/btsvstudionoversion-md.md)] BizTalk project, click the **Solution Explorer** tab, right-click a map, and then click **Validate Map**.
2. Scroll the Output window to find the URL for the XSL file. Press CTRL and click the URL to view the file.

If you decide to customize your map by hand, you can modify the version produced by the map compiler. Changes will not be reflected by the Mapper and will be lost the next time you build your solution.

### Tune your map for specific scenarios using \<mapsource\>

You can modify some default behaviors of the Mapper by modifying attributes of the **mapsource** element directly in a map source (.btm) file. There are currently three behaviors that you can modify:

- **Optimize Value Mapping functoid code generation.** You can modify the behavior that controls when a variable is used with `if` statements.
- **Accommodate schemas with large footprints.** You can change the way internal compiler nodes are used in large maps.
- **Manage for-each usage with Looping, Conditional, and Value Mapping functoids.** You can control where the `xsl:for-each` statement is used within the destination schema.

For more information about modifying **mapsource**, see [Managing Default Mapper Behavior Using \<mapsource\>](../core/managing-default-mapper-behavior-using-mapsource.md).

## See Also

[General Troubleshooting Questions and Answers](../core/general-troubleshooting-questions-and-answers.md)
[Common Errors](../core/common-errors.md)
---
UID: NF:ntifs.IoThreadToProcess
title: IoThreadToProcess function
author: windows-driver-content
description: The IoThreadToProcess routine returns a pointer to the process for the specified thread.
old-location: ifsk\iothreadtoprocess.htm
old-project: ifsk
ms.assetid: fcb51574-d966-4cd5-a946-c38dd2798b7f
ms.author: windowsdriverdev
ms.date: 2/16/2018
ms.keywords: IoThreadToProcess, IoThreadToProcess routine [Installable File System Drivers], ifsk.iothreadtoprocess, ioref_59269b9a-0a64-410d-aafa-b070b2eacfd7.xml, ntifs/IoThreadToProcess
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: function
req.header: ntifs.h
req.include-header: Ntifs.h
req.target-type: Universal
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: NtosKrnl.lib
req.dll: NtosKrnl.exe
req.irql: "<= DISPATCH_LEVEL"
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- NtosKrnl.exe
api_name:
- IoThreadToProcess
product: Windows
targetos: Windows
req.typenames: TOKEN_TYPE
---

# IoThreadToProcess function

## -description

The <b>IoThreadToProcess</b> routine returns a pointer to the process for the specified thread.

## -syntax

```
PEPROCESS IoThreadToProcess(
  _In_ PETHREAD Thread
);
```

## -parameters

### -param Thread [in]

Thread whose process is to be returned.

## -returns

<b>IoThreadToProcess</b> returns a pointer to the thread's process. For more information about using system threads and managing synchronization within a nonarbitrary thread context, see <a href="https://msdn.microsoft.com/fbd8aadd-5a24-48c9-9865-80cc7dc97316">Driver Threads, Dispatcher Objects, and Resources</a>.

## -see-also

<a href="https://msdn.microsoft.com/library/windows/hardware/ff559933">PsGetCurrentProcess</a>

<a href="..\wdm\nf-wdm-psgetcurrentthread.md">PsGetCurrentThread</a>

<a href="..\wdm\nf-wdm-iogetcurrentprocess.md">IoGetCurrentProcess</a>
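As a usage sketch — this is illustrative kernel-mode code, the surrounding helper routine is hypothetical, and it cannot run outside a driver — a file-system filter might resolve the process that owns the requesting thread like this:

```
// Hypothetical helper: return the process that owns the thread issuing
// the current request. IoThreadToProcess can be called at IRQL <= DISPATCH_LEVEL.
PEPROCESS GetRequestingProcess(VOID)
{
    PETHREAD thread = PsGetCurrentThread();
    return IoThreadToProcess(thread);
}
```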
---
order: 8
title:
  zh-CN: 导航
  en-US: Navigation
---

In a broad sense, anything that tells users where they are, where they can go, and how to get there can be called navigation. When designers use navigation or customize navigation structures, note the following:

- Provide signposts and contextual clues wherever possible, so users don't get lost;
- Keep navigation style and behavior consistent, or reduce the number of navigation elements, to lower the learning cost;
- Minimize jumps between pages (for example, when a common task requires several page jumps, reduce it to one or two) and keep the distance users have to travel short.

---

## Navigation menu

A navigation menu is an effective way to present content to users in a friendly manner. Once the site's information architecture is settled, choose a menu style that fits it.

### Top navigation menu

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/CHLsYZJzIISKiFegqrXQ.png">

A top navigation menu lays hyperlinks out in a row, so the information hierarchy stays simple and clear. It suits portal-style sites with heavy browsing and relatively front-facing applications. Keep first-level categories to 2-7 items, with titles of 4-15 characters (2-6 Chinese characters).

### Side navigation menu

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/iSgvIOKsqAdpJUeHVnnl.png">

Vertical navigation is more flexible than horizontal navigation: it extends downward easily and allows longer labels. The number of categories is unlimited and can be paired with a scrollbar, which suits administrative applications with many information levels and frequent switching between operations.

- For more common navigation layouts, see the [Layout component](/components/layout/).

---

## Breadcrumb

A breadcrumb tells users where the current page sits in the system hierarchy and how parent and child pages relate to each other.

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/uJPTOTAzNbKEfBKJbZmG.png">

> Notes:
> 1. When the hierarchy is deep, consider collapsing levels; keep the display within three levels, and never show more than five;
> 2. Avoid breadcrumbs whenever possible, especially when the page's own navigation already tells users clearly where they are.

---

## Tabs

Tabs present large amounts of information by category. Users can conveniently switch between tabs to compare content without jumping between pages, which shows more information within a limited display area. Categories can be split by parallel relations such as business type, business status, or operation type, with titles of 2-6 Chinese characters.

### Basic style

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/DIrWhQPjVkVXMTurmFtj.png">

Leads the content of the whole page; used for switching between primary functions.

### Card style

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/qUhhphhCUqcTBQuryPVz.png">

Used for local display within a page; the wrapping container isolates it well from other content.

### Capsule style

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/JGxYplcbVQZiDBFUKnDa.png" description="Usually used inside small sections, or combined with the basic and card styles.">

Used for switching options inside a card, often combined with other components so users can quickly switch to the corresponding content.

### Vertical style

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/HcKwSTAlBhXwJBmrILoj.png">

Used when there are many categories; it places no limit on the number of categories and scales well.

---

## Steps

A steps bar guides users through a flow. It gives users an expectation of the flow's length and stages, tells them which step they are currently on, and provides a clear measure of task completion. When a task is complex or has ordered dependencies, break it into a series of steps.

### Horizontal steps

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/lPqKbrGtQTqzzdwofzok.png">

Use when there are more than 2 steps, but preferably no more than 5, with each step's label kept within 12 characters.

### Vertical steps

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/gYnwqXKtCTaIQnvbhMlo.png">

Usually fixed on the left side of the page, with room for multi-line descriptions. Suits flows with many steps or a dynamically changing number of steps, for example timeline-style progress tracking.

---

## Pagination

When there is a lot of content to show, load it page by page. A pager tells users clearly how much content there is, how much they have browsed, and how much remains.

### Standard style

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/tarFEzfOZhEYtclFAsJX.png" description="When there are more than 5 pages, a quick page-jump control can be provided.">

When there are many entries, let users customize the number of rows per page to improve the efficiency and flexibility of browsing and retrieval; usually combined with tables and cards.

### Mini style

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/rIilfwNWTONzxGOWXbVM.png">

Usually used in cards or popovers.

### Simple style

<img class="preview-img no-padding" align="right" src="https://os.alipayobjects.com/rmsportal/viUMXhmLoFTqjTgoJNxZ.png">

Usually used in cards or statistics tables, within 10 pages.
25.444444
160
0.778636
yue_Hant
0.821017
e00623c704a7286e06213fe7fc47b251ab2611f8
47
md
Markdown
scripts/commands/version/index.md
mmuehlberger/awsinfo
f0c3722cedd31594cd9eeffd1fddfa7f98b6b597
[ "MIT" ]
59
2017-06-06T07:48:59.000Z
2018-11-21T17:58:30.000Z
scripts/commands/version/index.md
mmuehlberger/awsinfo
f0c3722cedd31594cd9eeffd1fddfa7f98b6b597
[ "MIT" ]
7
2017-06-10T08:11:01.000Z
2018-11-26T08:12:24.000Z
scripts/commands/version/index.md
mmuehlberger/awsinfo
f0c3722cedd31594cd9eeffd1fddfa7f98b6b597
[ "MIT" ]
10
2019-06-13T18:59:28.000Z
2020-09-18T10:42:55.000Z
# `awsinfo version`

Print the Awsinfo version
11.75
25
0.765957
eng_Latn
0.457315
e0064dbd92a8f11f0d19320200806b7d403efbb2
1,090
md
Markdown
docs/c-language/type-for-string-literals.md
B4V/cpp-docs.ru-ru
e820ac4bfffac205c605a9982ab55c2ef7cd0d41
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-language/type-for-string-literals.md
B4V/cpp-docs.ru-ru
e820ac4bfffac205c605a9982ab55c2ef7cd0d41
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-language/type-for-string-literals.md
B4V/cpp-docs.ru-ru
e820ac4bfffac205c605a9982ab55c2ef7cd0d41
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Тип для строковых литералов | Документация Майкрософт
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-language
ms.topic: language-reference
dev_langs:
- C++
helpviewer_keywords:
- string literals, type
- types [C], string literals
ms.assetid: f50a28af-20c1-4440-bdc6-564c86a309c8
author: mikeblome
ms.author: mblome
ms.workload:
- cplusplus
ms.openlocfilehash: 7773882d6fb04341a6f6d3a2cbcfda1d05f85d17
ms.sourcegitcommit: 913c3bf23937b64b90ac05181fdff3df947d9f1c
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 09/18/2018
ms.locfileid: "46063285"
---
# <a name="type-for-string-literals"></a>Type for string literals

String literals have array-of-`char` type (that is, **char[ ]**). (Wide-character strings have array-of-`wchar_t` type — that is, **wchar_t []**.) This means a string is an array whose elements have type `char`. The number of elements in the array equals the number of characters in the string plus one additional element for the terminating null character.

## <a name="see-also"></a>See also

[C String Literals](../c-language/c-string-literals.md)
35.16129
337
0.769725
rus_Cyrl
0.483936
e00751d7db95d796ac2d7c07f9055f372bc90374
588
md
Markdown
README.md
koichirok/ansible-role-google-chrome
bf272a191cbd5b11c49a65f917ff96492b5e35b7
[ "BSD-3-Clause" ]
null
null
null
README.md
koichirok/ansible-role-google-chrome
bf272a191cbd5b11c49a65f917ff96492b5e35b7
[ "BSD-3-Clause" ]
null
null
null
README.md
koichirok/ansible-role-google-chrome
bf272a191cbd5b11c49a65f917ff96492b5e35b7
[ "BSD-3-Clause" ]
null
null
null
koichirok.google\_chrome
=========

Ansible role to install Google Chrome

Role Variables
--------------

variables | default | description
----------|---------|------------
google\_chrome\_package\_name | google-chrome-stable | package name in the repository: google-chrome-stable, google-chrome-beta, or google-chrome-unstable

Example Playbook
----------------

Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too:

    - hosts: servers
      roles:
        - role: koichirok.google_chrome

License
-------

BSD
22.615385
149
0.661565
eng_Latn
0.759637
e0078426c3b0da5e95f5df2af7e8f4c68ece1c00
197
md
Markdown
README.md
tenbaylor/docsify
621dc4c47724ad1984b1e589abee226ee928411b
[ "Apache-2.0" ]
1
2021-11-02T07:05:22.000Z
2021-11-02T07:05:22.000Z
README.md
tenbaylor/docsify
621dc4c47724ad1984b1e589abee226ee928411b
[ "Apache-2.0" ]
null
null
null
README.md
tenbaylor/docsify
621dc4c47724ad1984b1e589abee226ee928411b
[ "Apache-2.0" ]
null
null
null
# Quick Start

## "Keeping Up with Java 8"
## https://github.com/biezhi/learn-java8

## "Java Learning + Interview Guide"
## https://github.com/Snailclimb/JavaGuide

## "A Complete Primer on Advanced Java Knowledge for Internet Engineers"
## https://github.com/doocs/advanced-java
21.888889
43
0.675127
yue_Hant
0.486891
e00791b676c8ffc0e5cad7616c561a45c239df57
2,490
md
Markdown
business-central/finance-about-finished-production-order-costs.md
edupont04/dynamics365smb-docs-pr.de-de
3f83765f53ee275bfdb32a11429b34cd791e7cbf
[ "CC-BY-4.0", "MIT" ]
null
null
null
business-central/finance-about-finished-production-order-costs.md
edupont04/dynamics365smb-docs-pr.de-de
3f83765f53ee275bfdb32a11429b34cd791e7cbf
[ "CC-BY-4.0", "MIT" ]
null
null
null
business-central/finance-about-finished-production-order-costs.md
edupont04/dynamics365smb-docs-pr.de-de
3f83765f53ee275bfdb32a11429b34cd791e7cbf
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Info zu Kosten des beendeten Produktionsauftrags | Microsoft Docs
description: Das Beenden eines Fertigungsauftrages ist eine wichtige Aufgabe beim Abschließen der Gesamtkostenbewertung des Artikels, der gefertigt wird. Endeinstandspreise (Abweichungen in einer Einstandspreisumgebung; Ist-Kosten in einer FIFO-, Durchschnitt- oder LIFO-Einstandspreisumgebung) werden mit der Stapelverarbeitung Kosten anpassen Lagerreg. fakt berechnet.
author: SorenGP
ms.service: dynamics365-business-central
ms.topic: article
ms.search.keywords: ''
ms.date: 04/01/2020
ms.author: edupont
ms.openlocfilehash: 2f84ca5cf44dcbab1c85b24a8e00f674b115918a
ms.sourcegitcommit: a80afd4e5075018716efad76d82a54e158f1392d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 09/09/2020
ms.locfileid: "3788496"
---
# <a name="about-finished-production-order-costs"></a>About Finished Production Order Costs

Finishing a production order is an important task in completing the overall cost assessment of the item being manufactured. Final costs — variances in a standard-cost environment; actual costs in a FIFO, average, or LIFO costing environment — are calculated with the **Adjust Cost - Item Entries** batch job, which enables a financial reconciliation of the item's manufacturing costs.

For a production order to be considered in the cost adjustment, it must have the status **Finished**. It is therefore essential that a production order's status is changed to **Finished** once it is completed.

## <a name="example"></a>Example

For example, when you consume material in a standard-cost environment to produce an item, then, put simply, the item's standard cost together with labor and overhead goes into WIP. When the item is produced, WIP is reduced by the amount of the item's standard cost. These costs do not usually net to zero. For them to net to zero, you must run the **Adjust Cost - Item Entries** batch job, bearing in mind that only production orders with the status **Finished** are considered for the adjustment.

## <a name="see-also"></a>See also

[Managing Inventory Costs](finance-manage-inventory-costs.md)
[Manufacturing](production-manage-manufacturing.md)
[Working with [!INCLUDE[d365fin](includes/d365fin_md.md)]](ui-work-product.md)
92.222222
705
0.821687
deu_Latn
0.993555
e0079d8d8d6e3b842be47e04953054574e03f9b6
31,253
md
Markdown
memdocs/configmgr/core/plan-design/changes/whats-new-in-version-1610.md
AmadorM/memdocs.es-es
50b8ad81b2127b48e395097e66aa2afa5ea685c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
memdocs/configmgr/core/plan-design/changes/whats-new-in-version-1610.md
AmadorM/memdocs.es-es
50b8ad81b2127b48e395097e66aa2afa5ea685c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
memdocs/configmgr/core/plan-design/changes/whats-new-in-version-1610.md
AmadorM/memdocs.es-es
50b8ad81b2127b48e395097e66aa2afa5ea685c7
[ "CC-BY-4.0", "MIT" ]
1
2020-05-28T15:43:51.000Z
2020-05-28T15:43:51.000Z
--- title: Nueva versión 1610 titleSuffix: Configuration Manager description: Conozca en detalle los cambios y las nuevas funciones introducidas en la versión 1610 de Configuration Manager. ms.date: 11/23/2016 ms.prod: configuration-manager ms.technology: configmgr-core ms.topic: conceptual ms.assetid: f7eb0803-3f8f-4ab6-825a-99ac11f5ba7d author: mestew ms.author: mstewart manager: dougeby ROBOTS: NOINDEX ms.openlocfilehash: b3e1a2feaddb7384d76790249152c89dfa8ee2d3 ms.sourcegitcommit: 214fb11771b61008271c6f21e17ef4d45353788f ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 05/07/2020 ms.locfileid: "82904814" --- # <a name="what39s-new-in-version-1610-of-configuration-manager"></a>Novedades de la versión 1610 de Configuration Manager *Se aplica a: Configuration Manager (rama actual)* La actualización 1610 de la rama actual de Configuration Manager está disponible como actualización en consola para los sitios instalados previamente que ejecutan las versiones 1511, 1602 o 1606. > [!TIP] > Para instalar un sitio nuevo, debe usar una versión de línea base de Configuration Manager. > > Más información acerca de: > - [Instalación de nuevos sitios](../../servers/deploy/install/installing-sites.md) > - [Instalación de actualizaciones en los sitios](../../servers/manage/updates.md) > - [Versiones de línea de base y versiones de actualización](../../servers/manage/updates.md#bkmk_Baselines) En las secciones siguientes se proporcionan detalles sobre los cambios y las nuevas funciones introducidas en la versión 1610 de Configuration Manager. ## <a name="in-console-monitoring-of-update-installation-status"></a>Supervisión en la consola del estado de la instalación de actualización A partir de la versión 1610, cuando instale un paquete de actualizaciones y supervise la instalación en la consola, hay una fase nueva: **Postinstalación**. 
En esta fase se incluye el estado de las tareas como el reinicio de los servicios clave y la inicialización de la supervisión de replicación. (Esta fase no está disponible en la consola hasta que el sitio se actualice a la versión 1610). Para obtener más información sobre el estado de la instalación de actualización, consulte [Install in-console updates (Instalación de actualizaciones en la consola)](../../servers/manage/install-in-console-updates.md#bkmk_install). ## <a name="exclude-clients-from-automatic-upgrade"></a>Excluir a clientes de la actualización automática Puede excluir clientes de Windows de la actualización con las versiones nuevas del software cliente. Para hacer esto, incluya los equipos cliente en una colección que sea específica para excluirse de la actualización. Los clientes de la colección excluida omiten las solicitudes para actualizar el software cliente. Para obtener más información, consulte [Exclude Windows clients from upgrades (Excluir clientes Windows de las actualizaciones)](../../clients/manage/upgrade/exclude-clients-windows.md). ## <a name="improvements-for-boundary-groups"></a>Mejoras en los grupos de límites La versión 1610 presenta cambios importantes en los grupos de límites y en su funcionamiento con los puntos de distribución. Estos cambios pueden simplificar el diseño de la infraestructura de contenido, a la vez que proporcionan más control sobre cómo y cuándo usan la reserva los clientes para buscar puntos de distribución adicionales como ubicaciones de origen de contenido. Esto incluye tanto puntos de distribución locales como basados en la nube. Estas mejoras reemplazan conceptos y comportamientos con los que puede que esté familiarizado, como la configuración de puntos de distribución para que sean rápidos o lentos. El nuevo modelo será más fácil de configurar y mantener. Estos cambios también constituyen la base para futuros cambios que mejorarán otros roles de sistema de sitio que asocia a los grupos de límites. 
Cuando actualiza a la versión 1610, la actualización convierte las configuraciones de los grupos de límites actuales para ajustarse al nuevo modelo, de modo que estos cambios no afecten a las configuraciones de distribución de contenido existentes. Para obtener más información, consulte [Boundary groups (Grupos de límites)](../../servers/deploy/configure/boundary-groups.md). ## <a name="peer-cache-for-content-distribution-to-clients"></a>Almacenamiento en caché del mismo nivel para la distribución de contenido en los clientes A partir de la versión 1610, el **almacenamiento en caché del mismo nivel** de cliente le ayuda a administrar la implementación de contenido en los clientes en ubicaciones remotas. El almacenamiento en caché del mismo nivel es una solución integrada de Configuration Manager para que los clientes compartan contenido con otros clientes directamente desde su caché local. Después de implementar la configuración de cliente que habilita el almacenamiento en caché del mismo nivel en una colección, los miembros de esa colección pueden actuar como origen de contenido del mismo nivel para otros clientes en el mismo grupo de límites. También puede usar el nuevo panel **Orígenes de datos de cliente** para entender el uso de los orígenes de contenido del almacenamiento en caché del mismo nivel en su entorno. > [!TIP] > Con la versión 1610, el panel de orígenes de datos de cliente y la caché del mismo nivel son funciones de la versión preliminar. Para habilitarlos, vea [Uso de características de la versión preliminar a partir de las actualizaciones](../../servers/manage/install-in-console-updates.md#bkmk_prerelease). Para obtener más información, consulte [Caché del mismo nivel para clientes de Configuration Manager](../hierarchy/client-peer-cache.md) y [Panel de orígenes de datos de cliente](../../servers/deploy/configure/monitor-content-you-have-distributed.md#client-data-sources-dashboard). 
## <a name="migrate-multiple-shared-distribution-points-at-the-same-time"></a>Migrar varios puntos de distribución compartidos al mismo tiempo Ahora puede usar la opción **Reasignar punto de distribución** para que Configuration Manager procese en paralelo la reasignación de un máximo de 50 puntos de distribución compartidos al mismo tiempo. Antes de esta versión, los puntos de distribución reasignados se procesaban de uno en uno. Para obtener más información, consulte [Migrate multiple shared distribution points at the same time (Migrar varios puntos de distribución compartidos al mismo tiempo)](../../migration/planning-a-content-deployment-migration-strategy.md#migrate-multiple-shared-distribution-points-at-the-same-time). ## <a name="cloud-management-gateway-for-managing-internet-based-clients"></a>Puerta de enlace de administración en la nube para administrar los clientes basados en Internet La puerta de enlace de administración en la nube proporciona una manera sencilla de administrar clientes de Configuration Manager en Internet. El servicio de puerta de enlace de administración en la nube, que se implementa en Microsoft Azure y exige una suscripción de Azure, se conecta a la infraestructura local de Configuration Manager con un nuevo rol denominado punto de conexión de la puerta de enlace de administración en la nube. Una vez implementado y configurado por completo, los clientes pueden comunicarse con los roles del sistema de sitio local de Configuration Manager y con los puntos de distribución basados en la nube independientemente de si están conectados en la red privada interna o en Internet. Para obtener más información y ver cómo la puerta de enlace de administración en la nube se compara con la administración de cliente basada en Internet, consulte [Manage clients on the Internet (Administrar clientes en Internet)](../../clients/manage/manage-clients-internet.md). 
## <a name="improvements-to-the-windows-10-edition-upgrade-policy"></a>Mejoras en la directiva de actualización de la edición de Windows 10 En esta versión se han realizado las siguientes mejoras en este tipo de directiva: - Ahora puede usar la directiva de actualización de la edición con equipos Windows 10 que ejecuten el cliente de Configuration Manager además de con equipos Windows 10 inscritos en Microsoft Intune. - Puede actualizar desde Windows 10 Professional a cualquiera de las plataformas del asistente compatibles con el hardware. ## <a name="manage-hardware-identifiers"></a>Administrar identificadores de hardware Ahora puede proporcionar una lista de identificadores de hardware que Configuration Manager debe omitir en el registro de clientes y el arranque PXE. Esto ayuda a resolver dos problemas comunes: 1. Muchos dispositivos, como Surface Pro 3, no incluyen un puerto Ethernet incorporado. Generalmente, se usa un adaptador de USB a Ethernet para establecer una conexión por cable para la implementación de un sistema operativo. Sin embargo, suele tratarse de adaptadores compartidos debido a su costo y su facilidad de uso general. Dado que la dirección MAC de este adaptador se usa para identificar el dispositivo, resulta problemático volver a usar el adaptador si no se realizan acciones de administrador adicionales entre cada implementación. Ahora, en la versión 1610 de Configuration Manager, puede excluir la dirección MAC de este adaptador para que se pueda volver a usar fácilmente en este escenario. 2. Se supone que el identificador de SMBIOS es un identificador de hardware único, pero algunos dispositivos de hardware especiales se crean con identificadores duplicados. Puede que este problema no sea tan habitual como el escenario del adaptador de USB a Ethernet descrito anteriormente, pero puede solucionarlo usando la lista de identificadores de hardware excluidos. 
Para obtener más información, consulte [Manage duplicate hardware identifiers (Administrar identificadores de hardware duplicados)](../../clients/manage/manage-clients.md#manage-duplicate-hardware-identifiers). ## <a name="enhancements-to-windows-store-for-business-integration-with-configuration-manager"></a>Mejoras en la integración de la Tienda Windows para empresas con Configuration Manager Cambios de esta versión: - Anteriormente, solo se podían implementar aplicaciones gratuitas de la Tienda Windows para empresas. Configuration Manager ahora además admite la implementación de aplicaciones con licencia en línea de pago (solo para dispositivos inscritos en Intune). - Ahora puede iniciar una sincronización inmediata entre la Tienda Windows para empresas y Configuration Manager. - Ahora puede modificar la clave secreta de cliente que ha obtenido de Azure Active Directory. - Puede eliminar una suscripción de la tienda. Para más información, vea [Administración de aplicaciones desde la Tienda Windows para empresas con Configuration Manager](../../../apps/deploy-use/manage-apps-from-the-windows-store-for-business.md). ## <a name="policy-sync-for-intune-enrolled-devices"></a>Sincronización de directivas para dispositivos inscritos en Intune Ahora puede solicitar la sincronización de directivas para un dispositivo inscrito en Intune desde la consola de Configuration Manager, en lugar de hacerlo desde la aplicación de portal de empresa en el propio dispositivo. La información del estado de la solicitud de sincronización está disponible como una columna nueva en las vistas del dispositivo, denominada **Remote Sync State (Estado de la sincronización remota)** . La información también está disponible en la sección de datos de detección del cuadro de diálogo **Propiedades** de cada dispositivo. 
## <a name="use-compliance-settings-to-configure-windows-defender-settings"></a>Usar la configuración de cumplimiento para configurar las opciones de Windows Defender Ahora puede establecer la configuración de cliente de Windows Defender en equipos Windows 10 inscritos en Intune mediante el uso de elementos de configuración de la consola de Configuration Manager. Para más información, vea la sección **Windows Defender** de [Crear elementos de configuración para dispositivos de Windows 8.1 y Windows 10 administrados sin el cliente de Configuration Manager](../../../mdm/deploy-use/create-configuration-items-for-windows-8.1-and-windows-10-devices-managed-without-the-client.md). ## <a name="general-improvements-to-software-center"></a>Mejoras generales en el Centro de software - Ahora los usuarios pueden solicitar aplicaciones del Centro de software, así como del catálogo de aplicaciones. - Mejoras para ayudar a los usuarios a comprender qué software es nuevo y relevante. ## <a name="new-columns-in-device-collection-views"></a>Columnas nuevas en las vistas de colección de dispositivos Ahora puede mostrar columnas para **IMEI** y **Número de serie** (para dispositivos iOS) en las vistas de colección de dispositivos. ## <a name="customizable-branding-for-software-center-dialogs"></a>Cuadros de diálogo personalizables de personalización de marca del Centro de software La personalización de marca del Centro de software se presentó en la versión 1602 de Configuration Manager. En la versión 1610, esa marca se ha ampliado ahora a todos los cuadros de diálogo asociados para proporcionar una experiencia más coherente a los usuarios del Centro de software. 
La personalización de marca del Centro de software se aplica conforme a las siguientes reglas: - Si no está instalado el rol de servidor de sitio del punto de sitios web del catálogo de aplicaciones, el Centro de software mostrará el nombre de organización especificado en la configuración de cliente **Agente de equipo** **Nombre de organización mostrado en el Centro de software**. Para obtener instrucciones, vea [Cómo establecer la configuración del cliente](../../clients/deploy/configure-client-settings.md). - Si está instalado el rol de servidor de sitio del punto de sitios web del catálogo de aplicaciones, el Centro de software mostrará el nombre de la organización y el color especificados en las propiedades del rol de servidor de sitio del punto de sitios web del catálogo de aplicaciones. Para más información, vea [Configuration options for Application Catalog website point (Opciones de configuración del punto de sitios web del catálogo de aplicaciones)](../../servers/deploy/configure/configuration-options-for-site-system-roles.md#BKMK_ApplicationCatalog_Website). - Si una suscripción de Microsoft Intune está configurada y conectada al entorno de Configuration Manager, el Centro de software mostrará el nombre de la organización, el color y el logotipo de la empresa especificados en las propiedades de la suscripción de Intune. ## <a name="enforcement-grace-period-for-required-application-and-software-update-deployments"></a>Período de gracia de cumplimiento para implementaciones de actualizaciones de software y aplicaciones requeridas En algunos casos, es posible que quiera dar más tiempo a los usuarios para instalar las implementaciones de aplicaciones o las actualizaciones de software necesarias más allá de los plazos que ha establecido. Por ejemplo, esto puede resultar necesario cuando un equipo ha estado apagado durante un largo período de tiempo y tiene que instalar muchas implementaciones de aplicaciones o actualizaciones. 
Por ejemplo, si un usuario final acaba de volver de vacaciones, es posible que tenga que esperar bastante mientras se instalan las implementaciones de aplicaciones vencidas. Para solucionar este problema, puede definir un período de gracia de cumplimiento mediante la implementación de la configuración de cliente de Configuration Manager en una colección. Para configurar el período de gracia, haga lo siguiente: 1. En la página **Agente de equipo** de la configuración de cliente, configure la nueva propiedad **Período de gracia para el cumplimiento tras la fecha límite de la implementación (horas)** con un valor entre **1** y **120** horas. 2. En una nueva implementación de aplicación obligatoria o en las propiedades de una implementación existente, en la página **Programación**, active la casilla **Retrasar el cumplimiento de esta implementación de acuerdo con las preferencias del usuario** hasta el período de gracia definido en la configuración del cliente. Todas las implementaciones que tengan activada esta casilla y que estén destinadas a dispositivos en los que también haya implementado la configuración de cliente usarán el período de gracia de cumplimiento. Si configura un período de gracia de cumplimiento y activa la casilla de verificación, una vez que se llegue a la fecha límite de instalación de la aplicación, esta se instalará en la primera ventana que no sea de empresa configurada por el usuario hasta ese período de gracia. No obstante, el usuario puede abrir el Centro de software e instalar la aplicación en cualquier momento que quiera. Una vez que expira el período de gracia, el cumplimiento vuelve al comportamiento normal para implementaciones vencidas. Se han agregado opciones similares al asistente para la implementación de actualizaciones de software, al asistente para reglas de implementación automática y a las páginas de propiedades. 
## <a name="improved-functionality-in-dialog-boxes-about-required-software"></a>Funcionalidad mejorada en cuadros de diálogo sobre el software necesario Cuando un usuario recibe software obligatorio, desde el valor **Posponer y volver a recordármelo en:** , puede seleccionar las siguientes opciones en la lista desplegable: - **Más adelante**. Especifica que las notificaciones se programan según la configuración de notificación establecida en la configuración de agente de cliente. - **Hora fija**. Especifica que la notificación se programará para mostrarse de nuevo después del tiempo seleccionado (por ejemplo, en 30 minutos). ![Página Agente de equipo de Configuración de agente de cliente](media/client-notification-settings.png) El tiempo máximo de aplazamiento se basa en los valores de notificación definidos en la configuración de agente de cliente. Por ejemplo, si la opción **La fecha límite de la implementación es de más de 24 horas. Recordar al usuario cada (horas)** de la página Agente de equipo se configura para 10 horas y pasan más de 24 antes de la fecha límite, el usuario vería un conjunto de opciones para posponer de hasta 10 horas, pero nunca más. Cuando se acerca la fecha límite, hay menos opciones disponibles, en consonancia con la configuración de agente de cliente correspondiente a cada componente de la escala de tiempo de implementación. Además, en una implementación de alto riesgo, como una secuencia de tareas que implementa un sistema operativo, la experiencia de notificación del usuario es ahora más intrusiva. 
Instead of a transient notification in the taskbar, a dialog box like the following now appears on the user's computer each time the user is notified that critical software maintenance is required:

![Required software dialog](media/client-toast-notification.png)

For more information:

- [Settings to manage high-risk deployments](../../servers/manage/settings-to-manage-high-risk-deployments.md)
- [How to configure the client](../../clients/deploy/configure-client-settings.md)

## <a name="software-updates-dashboard"></a>Software updates dashboard

Use the new software updates dashboard to view the current compliance status of devices in your organization and quickly analyze the data to see which devices are at risk. To view the dashboard, go to **Monitoring** > **Overview** > **Security** > **Software Updates Dashboard**.

For details, see [Monitor software updates](../../../sum/deploy-use/monitor-software-updates.md).

## <a name="improvements-to-the-application-request-process"></a>Improvements to the application request process

After you approve an application for installation, you can deny the request by clicking **Deny** in the Configuration Manager console. Previously, this button was grayed out after approval. This action does not cause the application to be uninstalled from any devices; instead, it prevents users from installing new copies of the application from Software Center.

## <a name="filter-by-content-size-in-automatic-deployment-rules"></a>Filter by content size in automatic deployment rules

You can now filter by the content size of software updates in automatic deployment rules. For example, to download only software updates smaller than 2 MB, set the **Content Size (KB)** filter to **< 2048**. This filter prevents large software updates from being downloaded automatically, supporting simplified down-level Windows servicing when network bandwidth is limited.

For details, see:

- [Configuration Manager and Simplified Windows Servicing on Down Level Operating Systems](https://techcommunity.microsoft.com/t5/configuration-manager-archive/configuration-manager-and-simplified-windows-servicing-on-down/ba-p/274056)
- [Automatically deploy software updates](../../../sum/deploy-use/automatically-deploy-software-updates.md)

To configure the **Content Size (KB)** field, do one of the following:

- When creating an automatic deployment rule, go to the **Software Updates** page of the Create Automatic Deployment Rule Wizard.
- In the properties of an existing automatic deployment rule, go to the **Software Updates** tab.

## <a name="office-365-client-management-dashboard"></a>Office 365 client management dashboard

The Office 365 client management dashboard is now available in the Configuration Manager console. To view the dashboard, go to **Software Library** > **Overview** > **Office 365 Client Management**.

The dashboard displays charts for the following:

- Number of Office 365 clients
- Office 365 client versions
- Office 365 client languages
- Office 365 client channels

For information, see [Manage Office 365 ProPlus updates](../../../sum/deploy-use/manage-office-365-proplus-updates.md).

## <a name="task-sequence-steps-to-manage-bios-to-uefi-conversion"></a>Task sequence steps to manage BIOS to UEFI conversion

You can now customize an operating system deployment task sequence with a new variable, TSUEFIDrive, so that the **Restart Computer** step prepares a FAT32 partition on the hard drive for the transition to UEFI. This provides an example of how to create task sequence steps to prepare the hard drive for the BIOS to UEFI conversion.

For more information, see [Task sequence steps to manage BIOS to UEFI conversion](../../../osd/deploy-use/task-sequence-steps-to-manage-bios-to-uefi-conversion.md).

## <a name="improvements-to-the-task-sequence-step-prepare-configmgr-client-for-capture"></a>Improvements to the task sequence step: Prepare ConfigMgr Client for Capture

The Prepare Configuration Manager Client step now completely removes the Configuration Manager client, instead of only removing key information. When the task sequence deploys the captured operating system image, a new Configuration Manager client is installed each time.

For more information, see [Task sequence steps](../../../osd/understand/task-sequence-steps.md#BKMK_PrepareConfigMgrClientforCapture).

## <a name="intune-compliance-policy-charts"></a>Intune compliance policy charts

You can now get a quick view of overall device compliance and the top reasons for noncompliance with new charts in the **Monitoring** workspace of the Configuration Manager console. You can click a section of a chart to drill down to a list of the devices in that category.

For more information, see [Monitor the compliance policy](../../../mdm/deploy-use/create-configuration-items-for-windows-8.1-and-windows-10-devices-managed-without-the-client.md).

## <a name="lookout-integration-for-hybrid-implementations-to-protect-ios-and-android-devices"></a>Lookout integration for hybrid implementations to protect iOS and Android devices

Microsoft is integrating with the Lookout mobile threat protection solution to protect iOS and Android mobile devices by detecting malware, risky apps, and more on devices. The Lookout solution helps you determine the threat level, which is configurable. You can create a compliance policy rule in Configuration Manager to determine device compliance based on the risk assessment by Lookout. With conditional access policies, you can allow or block access to company resources based on the device's compliance status.

Users of noncompliant iOS devices will be prompted to enroll. They will need to install the Lookout for Work app on their devices, activate the app, and remediate the threats reported in the Lookout for Work app to gain access to company data.

## <a name="new-compliance-settings-for-configuration-items"></a>New compliance settings for configuration items

There are many new settings you can use in configuration items for various device platforms. These are settings that previously existed in standalone Microsoft Intune and are now available when you use Intune with Configuration Manager.

For more information, see [Configuration items for devices managed without the Configuration Manager client](../../../mdm/deploy-use/create-configuration-items-for-windows-8.1-and-windows-10-devices-managed-without-the-client.md).

### <a name="new-settings-for-android-devices"></a>New settings for Android devices

#### <a name="password-settings"></a>Password settings

- **Remember password history**
- **Allow fingerprint unlock**

#### <a name="security-settings"></a>Security settings

- **Require encryption on storage cards**
- **Allow screen capture**
- **Allow diagnostic data submission**

#### <a name="browser-settings"></a>Browser settings

- **Allow web browser**
- **Allow autofill**
- **Allow pop-up blocker**
- **Allow cookies**
- **Allow active scripting**

#### <a name="app-settings"></a>App settings

- **Allow Google Play store**

#### <a name="device-capability-settings"></a>Device capability settings

- **Allow removable storage**
- **Allow Wi-Fi tethering**
- **Allow geolocation**
- **Allow NFC**
- **Allow Bluetooth**
- **Allow voice roaming**
- **Allow data roaming**
- **Allow SMS/MMS messaging**
- **Allow voice assistant**
- **Allow voice dialing**
- **Allow copy and paste**

### <a name="new-settings-for-ios-devices"></a>New settings for iOS devices

#### <a name="password-settings"></a>Password settings

- **Number of complex characters required in password**
- **Allow simple passwords**
- **Minutes of inactivity before password is required**
- **Remember password history**

### <a name="new-settings-for-mac-os-x-devices"></a>New settings for Mac OS X devices

#### <a name="password-settings"></a>Password settings

- **Number of complex characters required in password**
- **Allow simple passwords**
- **Remember password history**
- **Minutes of inactivity before screen saver activates**

### <a name="new-settings-for-windows-10-desktop-and-mobile-devices"></a>New settings for Windows 10 desktop and mobile devices

#### <a name="password-settings"></a>Password settings

- **Minimum number of character sets**
- **Remember password history**
- **Require a password when the device returns from an idle state**

#### <a name="security-settings"></a>Security settings

- **Require encryption on mobile device**
- **Allow manual unenrollment**

#### <a name="device-capability-settings"></a>Device capability settings

- **Allow VPN over cellular**
- **Allow VPN roaming over cellular**
- **Allow phone reset**
- **Allow USB connection**
- **Allow Cortana**
- **Allow action center notifications**

### <a name="new-settings-for-windows-10-team-devices"></a>New settings for Windows 10 Team devices

#### <a name="device-settings"></a>Device settings

- **Enable Azure Operational Insights**
- **Enable Miracast wireless projection**
- **Choose the meeting information displayed on the welcome screen**
- **Lock screen background image URL**

### <a name="new-settings-for-windows-81-devices"></a>New settings for Windows 8.1 devices

#### <a name="applicability-settings"></a>Applicability settings

- **Apply all configurations to Windows 10**

#### <a name="password-settings"></a>Password settings

- **Required password type**
- **Minimum number of character sets**
- **Minimum password length**
- **Number of consecutive sign-in failures allowed before the device is wiped**
- **Minutes of inactivity before screen turns off**
- **Password expiration (days)**
- **Remember password history**
- **Prevent reuse of previous passwords**
- **Allow picture password and PIN**

#### <a name="browser-settings"></a>Browser settings

- **Allow automatic detection of intranet network**

### <a name="new-settings-for-windows-phone-81-devices"></a>New settings for Windows Phone 8.1 devices

#### <a name="applicability-settings"></a>Applicability settings

- **Apply all configurations to Windows 10**

#### <a name="password-settings"></a>Password settings

- **Minimum number of character sets**
- **Allow simple passwords**
- **Remember password history**

#### <a name="device-capability-settings"></a>Device capability settings

- **Allow automatic connection to free Wi-Fi hotspots**
## Home Assistant sensor component for Picnic.app

Provides Home Assistant sensors for Picnic.app (online supermarket).

### Example config:

```yaml
sensor:
  - platform: picnic
    username: <username>    # required
    password: <password>    # required
    country_code: "NL"      # optional; choose from "NL" or "DE"
```

[For more information visit the repository.](https://www.github.com/MikeBrink/home-assistant-picnic/)
---
title: Compiler Error CS0154
ms.date: 07/20/2015
f1_keywords:
- CS0154
helpviewer_keywords:
- CS0154
ms.assetid: 815150da-09b4-4593-825f-1dd435c92da8
---
# <a name="compiler-error-cs0154"></a>Compiler Error CS0154

The property or indexer 'property' cannot be used in this context because it lacks the get accessor

An attempt to use a [property](../../csharp/programming-guide/classes-and-structs/using-properties.md) failed because no get accessor method was defined in the property. For more information, see [Fields](../../csharp/programming-guide/classes-and-structs/fields.md).

## <a name="example"></a>Example

The following sample generates CS0154:

```csharp
// CS0154.cs
public class MyClass2
{
    public int i
    {
        set { }
        // uncomment the get method to resolve this error
        /*
        get
        {
            return 0;
        }
        */
    }
}

public class MyClass
{
    public static void Main()
    {
        MyClass2 myClass2 = new MyClass2();
        int j = myClass2.i;   // CS0154, no get method
    }
}
```
1.1.2 / 2014-10-15
==================

  * deps: accepts@~1.1.2
    - Fix error when media type has invalid parameter
    - deps: [email protected]

1.1.1 / 2014-10-12
==================

  * deps: accepts@~1.1.1
    - deps: mime-types@~2.0.2
    - deps: [email protected]
  * deps: compressible@~2.0.1
    - deps: [email protected]

1.1.0 / 2014-09-07
==================

  * deps: accepts@~1.1.0
  * deps: compressible@~2.0.0
  * deps: debug@~2.0.0

1.0.11 / 2014-08-10
===================

  * deps: on-headers@~1.0.0
  * deps: vary@~1.0.0

1.0.10 / 2014-08-05
===================

  * deps: compressible@~1.1.1
    - Fix upper-case Content-Type characters prevent compression

1.0.9 / 2014-07-20
==================

  * Add `debug` messages
  * deps: accepts@~1.0.7
    - deps: [email protected]

1.0.8 / 2014-06-20
==================

  * deps: accepts@~1.0.5
    - use `mime-types`

1.0.7 / 2014-06-11
==================

  * use vary module for better `Vary` behavior
  * deps: [email protected]
  * deps: [email protected]

1.0.6 / 2014-06-03
==================

  * fix regression when negotiation fails

1.0.5 / 2014-06-03
==================

  * fix listeners for delayed stream creation
    - fixes regression for certain `stream.pipe(res)` situations

1.0.4 / 2014-06-03
==================

  * fix adding `Vary` when value stored as array
  * fix back-pressure behavior
  * fix length check for `res.end`

1.0.3 / 2014-05-29
==================

  * use `accepts` for negotiation
  * use `on-headers` to handle header checking
  * deps: [email protected]

1.0.2 / 2014-04-29
==================

  * only version compatible with node.js 0.8
  * support headers given to `res.writeHead`
  * deps: [email protected]
  * deps: [email protected]

1.0.1 / 2014-03-08
==================

  * bump negotiator
  * use compressible
  * use .headersSent (drops 0.8 support)
  * handle identity;q=0 case
# decagon-ci-and-eslint-demo
---
name: Feature request
about: Suggest an idea for this project
title: "[Feature Request] Feature name here..."
labels: enhancement
assignees: ''
---

**Describe the feature you'd like to see**
A clear and concise description of what you want to happen.

**Describe *why* this feature would be helpful to you**
A clear and concise description of the underlying reason why you'd like to see this feature added. For example, "If I had this feature, then I could..." This helps us understand the underlying reasoning when making decisions about what actually gets implemented.

**Additional context**
Add any other context or screenshots about the feature request here.
---
pcx-content-type: how-to
type: overview
title: Configure webhooks
layout: list
---

{{<content-column>}}

# Configure webhooks

There are a variety of services you can connect to Cloudflare using webhooks to receive Notifications from your Cloudflare account.

The following table lists some of the most popular services you can connect to your Cloudflare account, as well as the information you need to connect to them:

{{</content-column>}}

{{<table-wrap>}}

| Service | Secret | URL |
| --- | --- | --- |
| [Google Chat](https://developers.google.com/chat/how-tos/webhooks) | The secret is part of the URL. Cloudflare parses this information automatically and there is no input needed from the user. | URL varies depending on the Google Chat channel's address. |
| [Slack](https://api.slack.com/messaging/webhooks) | The secret is part of the URL. Cloudflare parses this information automatically and there is no input needed from the user. | URL varies depending on the Slack channel's address. |
| [DataDog](https://docs.datadoghq.com/api/latest/events/#post-an-event) | The secret is required and has to be entered by the user. This is what DataDog [refers to as `API Key`](https://app.datadoghq.com/account/settings#api). | `https://api.datadoghq.com/api/v1/events` |
| [Discord](https://discord.com/developers/docs/resources/webhook#execute-webhook) | The secret is part of the URL. Cloudflare parses this information automatically and there is no input needed from the user. | URL varies depending on the Discord channel's address. |
| [OpsGenie](https://support.atlassian.com/opsgenie/docs/create-a-default-api-integration) | The secret is the `API Key` for OpsGenie's REST API. | `https://api.opsgenie.com/v2/alerts` |
| [Splunk](https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector) | The secret is required and has to be entered by the user. This is what Splunk refers to as `token`. Refer to [Splunk's documentation](https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#How_the_Splunk_platform_uses_HTTP_Event_Collector_tokens_to_get_data_in) for details. | 1. We only support three Splunk endpoints: `services/collector`, `services/collector/raw`, and `services/collector/event`. <br/> 2. If SSL is enabled on the token, the port must be 443. If SSL is not enabled on the token, the port must be 8088. <br/> 3. SSL must be enabled on the server. |
| Generic webhook | User decides. | User decides. |

{{</table-wrap>}}

{{<content-column>}}

After configuring the external service you want to connect to, set up webhooks in your Cloudflare dashboard:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/login) and select your account.
2. Go to **Notifications**.
3. Click **Destinations** on the left side of your dashboard.
4. In the **Webhooks** card, click **Create**.
5. Give your webhook a name so you can identify it later.
6. In the **URL** field, enter the URL of the third-party service you previously set up and want to connect to your Cloudflare account.
7. If needed, insert the **Secret**. Secrets are how webhooks are encrypted and vary according to the service you are connecting to Cloudflare.
8. Click **Save and Test** to finish setting up your webhook.

The new webhook will appear in the **Webhooks** card.

## Generic webhooks

If you use a service that is not covered by Cloudflare's currently available webhooks, you can configure your own. Follow steps 1-6 above, and enter a valid webhook URL. It is always recommended to use a secret for generic webhooks. Cloudflare will send your secret in the `cf-webhook-auth` header of every request made. If this header is not present, or is not your specified value, you should reject the webhook.

After clicking **Save and Test**, your webhook should now be configured as a destination you can use to attach to policies.

When Cloudflare sends you a webhook, it will have the following schema:

```txt
{
  "text": Hello World! This is a test message sent from https://cloudflare.com. If you can see this, your webhook is configured properly.
}
```

In the above example, `"text"` will vary depending on the alert that was fired.

{{</content-column>}}
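The header check described above can be sketched in a few lines. This is a minimal illustration, not Cloudflare's own code: the secret value is a placeholder, and the request-handling framework around it is left out.

```python
# Minimal sketch of validating Cloudflare's generic-webhook secret header.
# WEBHOOK_SECRET is a placeholder; in practice it comes from configuration.
import hmac

WEBHOOK_SECRET = "my-webhook-secret"


def is_authentic(headers: dict) -> bool:
    """Reject the webhook unless cf-webhook-auth matches our secret."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    supplied = normalized.get("cf-webhook-auth", "")
    # compare_digest avoids leaking the secret through timing differences.
    return hmac.compare_digest(supplied, WEBHOOK_SECRET)
```

A request missing the header, or carrying the wrong value, is simply rejected before any alert processing happens.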
## How we deploy

A deploy starts in the AuroraAPI, triggered from one of the user-facing clients ([AO](/documentation/openshift/#ao) and [AuroraConsole](/documentation/openshift/#aurora-console)), or automatically from the build pipeline. The API extracts and merges the relevant parts of the specified AuroraConfig to create an AuroraDeploymentSpec for the application being deployed. From the AuroraDeploymentSpec we provision resources in our existing infrastructure and generate OpenShift objects that are applied to the cluster. The application is then rolled out, either by importing a new image or by triggering a new deploy. The deploy result is saved for later inspection.
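The flow above can be sketched as a small pipeline. Every name here is illustrative pseudocode under assumed config shapes, not Aurora's actual API.

```python
# Illustrative sketch of the deploy flow: merge config into a spec,
# then generate cluster objects from the spec. All names are hypothetical.
def merge_deployment_spec(aurora_config: dict, app: str) -> dict:
    """Merge shared defaults with the application's own settings (app wins)."""
    spec = dict(aurora_config.get("defaults", {}))
    spec.update(aurora_config.get(app, {}))
    return spec


def generate_openshift_objects(spec: dict) -> list:
    """Turn a deployment spec into the OpenShift objects to apply."""
    return [
        {"kind": "DeploymentConfig", "replicas": spec.get("replicas", 1)},
        {"kind": "Service", "port": spec.get("port", 8080)},
    ]


# Example: one app overriding a shared default
config = {"defaults": {"replicas": 2, "port": 8080}, "my-app": {"replicas": 3}}
objects = generate_openshift_objects(merge_deployment_spec(config, "my-app"))
```

The key design point is that the merged AuroraDeploymentSpec is the single source for object generation, so two deploys from the same config are reproducible.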
# Volumetric Explosion Shader for Unity 3D

Unity shader that emulates a volumetric explosion of a sphere. There are 3 shaders involved: a vertex/fragment shader that handles the color and shape of the sphere using layered Perlin noise, a smoke shader, and a camera shader that lightens up as the explosion takes place. Implemented in C#.<br />

## Specifications

Unity Version: `2018.4.4f1`<br />

## Usage

### For visualization

Download the `exe-unity` folder and store it on your computer.<br />
Open the zipped folder and click on the .EXE file.<br />

### For the source code

Download the `unity-project` folder and decompress it.<br />
Open it as a Unity 3D project.<br />

## Preview

A preview of an explosion using the shader:<br />

![alt text](https://github.com/the-other-mariana/explosion-shader-unity/blob/master/shader-gif.gif)<br />

## Siggraph 2020 FSSW Exhibit

![alt text](https://github.com/the-other-mariana/explosion-shader-unity/blob/master/siggraph/FSSW-version.png?raw=true)<br />
---
title: Setting up NextCloud
date: 2018-02-11
layout: post
aliases:
  - "/nextcloud/"
---

This is a quick how-to for setting up NextCloud, documented so I can do this again quickly later. I had initially used the snap to install it, and that worked well until I tried to use a volume that I could expand easily. It seems snaps don't have access to `/mnt` points.

<!--more-->

1. Create a LAMP 16.04 1-click install VPS on Digital Ocean ($5/mo)
1. Set up networking for your domain
1. In your new Droplet, create a Volume where you'll store the files
1. SSH to the VPS

```bash
MOUNTPOINT=yourVolumeNameHere
DOMAIN=enter.your.domain.here

# Format the volume with ext4.
# Warning: This will erase all data on the volume.
# Only run this command on a volume with no existing data.
sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_${MOUNTPOINT}

# Create a mount point under /mnt
sudo mkdir -p /mnt/${MOUNTPOINT}

# Mount the volume
sudo mount -o discard,defaults /dev/disk/by-id/scsi-0DO_Volume_${MOUNTPOINT} /mnt/${MOUNTPOINT}

# Change fstab so the volume will be mounted after a reboot
# (double quotes so ${MOUNTPOINT} is actually expanded)
echo "/dev/disk/by-id/scsi-0DO_Volume_${MOUNTPOINT} /mnt/${MOUNTPOINT} ext4 defaults,nofail,discard 0 0" | sudo tee -a /etc/fstab

# Change ownership
chown -R www-data:www-data /mnt/${MOUNTPOINT}

# Add required repository for SSL certificates
sudo add-apt-repository ppa:certbot/certbot

# Run the update
sudo apt-get update

# Download nextcloud
wget https://download.nextcloud.com/server/releases/nextcloud-13.0.0.zip

# Install dependencies to unzip and run nextcloud
sudo apt-get install unzip libxml2-dev php-zip php-dom php-xmlwriter php-xmlreader php-gd php-curl php-mbstring python-certbot-apache -y

# Unzip nextcloud to ~/nextcloud
unzip nextcloud-13.0.0.zip

# Copy ~/nextcloud recursively to the default apache directory
cp -r ./nextcloud /var/www/html

# I had to remove/copy these files over to get rid of an error on
# Nextcloud, as they match the checksums
rm /var/www/html/info.php
mv ./nextcloud/.user.ini /var/www/html/.user.ini
mv ./nextcloud/.htaccess /var/www/html/.htaccess

# Change ownership of nextcloud folder to www-data user
chown -R www-data:www-data /var/www/html

# Run certbot for this domain; it will ask you questions here.
sudo certbot --apache -d ${DOMAIN}

# Verify the renewal process will run without an issue
sudo certbot renew --dry-run

# Restart apache
sudo service apache2 restart

# Clean up unneeded files
rm nextcloud-13.0.0.zip
rm -rf ./nextcloud

# MySQL setup
cat .digitalocean_password
# should echo something like
# root_mysql_pass="b8748f906dd947bf54d0851a862306f5029eb65598a587f3"
# copy the password
mysql_secure_installation
mysql -u root -p
# paste your password

# Set up database and database user
mysql> CREATE DATABASE nextcloud;
mysql> GRANT ALL PRIVILEGES ON nextcloud.* TO "nextcloud"@"localhost"
    -> IDENTIFIED BY "nextcloudpassword";
mysql> FLUSH PRIVILEGES;
mysql> \q
```

1. Navigate to your ${DOMAIN}
1. Set up admin user/password
1. Data folder should be the mountpoint, e.g. /mnt/${MOUNTPOINT}
1. Database user: nextcloud
1. Database pass: whatever you put in the IDENTIFIED BY line
1. Database name: nextcloud
1. Database server: localhost
259
0.763688
eng_Latn
0.610393
e00bf1f5f3dd906f48a53107c7808e8552ccc1e6
37,686
md
Markdown
i18n/README.zh-cn.md
puzpuzpuz/questdb
2fb0a47334972d71cb7f65e9f37bf73812d9d396
[ "Apache-2.0" ]
1
2021-11-06T08:04:47.000Z
2021-11-06T08:04:47.000Z
i18n/README.zh-cn.md
puzpuzpuz/questdb
2fb0a47334972d71cb7f65e9f37bf73812d9d396
[ "Apache-2.0" ]
null
null
null
i18n/README.zh-cn.md
puzpuzpuz/questdb
2fb0a47334972d71cb7f65e9f37bf73812d9d396
[ "Apache-2.0" ]
null
null
null
<div align="center">
  <img alt="QuestDB Logo" src="https://raw.githubusercontent.com/questdb/questdb/master/.github/logo-readme.png" width="305px" />
</div>
<p>&nbsp;</p>
<p align="center">
  <a href="https://slack.questdb.io">
    <img src="https://slack.questdb.io/badge.svg" alt="QuestDB community Slack channel" />
  </a>
  <a href="#contribute">
    <img src="https://img.shields.io/github/all-contributors/questdb/questdb" alt="QuestDB open source contributors" />
  </a>
  <a href="https://search.maven.org/search?q=g:org.questdb">
    <img src="https://img.shields.io/maven-central/v/org.questdb/questdb" alt="QuestDB on Apache Maven" />
  </a>
</p>

[English](https://github.com/questdb/questdb) | 简体中文 | [繁體中文](README.zh-hk.md) | [العربية](README.ar-dz.md)

# QuestDB

QuestDB is a high-performance, open-source SQL database for financial services, IoT, machine learning, DevOps, and observability applications. It is compatible with the PostgreSQL wire protocol, supports the InfluxDB line protocol for schema-agnostic, high-throughput ingestion, and provides a REST API for queries, bulk imports, and exports. QuestDB uses ANSI SQL with native, time-oriented language extensions. These extensions make it simpler to join correlated data from multiple sources with time-series data. QuestDB achieves high performance through a column-oriented storage model, massively parallel vectorized execution, SIMD instructions, and various low-latency techniques. The entire codebase was built from scratch in Java and C++ with no external dependencies and is 100% free of garbage collection.

<div align="center">
  <a href="https://demo.questdb.io">
    <img alt="QuestDB Web Console showing multiple SQL statements and visualizing a query as a chart" src="https://raw.githubusercontent.com/questdb/questdb/master/.github/console.png" width="600" />
  </a>
</div>

## Try QuestDB

We provide a [live demo](https://demo.questdb.io/) with the latest QuestDB release and several sample datasets:

- A 1.6-billion-row dataset with nearly 10 years of NYC taxi trips.
- A live cryptocurrency (Bitcoin, Ethereum) trading dataset.
- A time-series geographic dataset covering 250,000 ships.

## Install QuestDB

You can use Docker to quickly launch a QuestDB instance:

```bash
docker run -p 9000:9000 -p 9009:9009 -p 8812:8812 questdb/questdb
```

macOS users can launch it with Homebrew:

```bash
brew install questdb
brew services start questdb

questdb start // To start questdb
questdb stop  // To stop questdb
```

The [QuestDB downloads page](https://questdb.io/get-questdb/) provides direct binary downloads and details on other installation and deployment methods.

### Connect to QuestDB

You can interact with QuestDB through the following interfaces:

- [Web Console](https://questdb.io/docs/develop/web-console/): QuestDB starts a web console, which runs on port `9000` by default
- [REST API](https://questdb.io/docs/reference/api/rest/): QuestDB also supports interaction over a REST API, accessed on port `9000` by default
- [PostgreSQL wire protocol](https://questdb.io/docs/reference/api/postgres/): QuestDB also supports the PostgreSQL wire protocol, running on port `8812` by default
- [InfluxDB line protocol](https://questdb.io/docs/reference/api/influxdb/): QuestDB implements the [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_tutorial/) for high-performance, high-throughput, one-way data insertion, running on port `9009` by default

## How QuestDB compares to other open-source TSDBs

Below are the results of the [Time Series Benchmark Suite](https://github.com/timescale/tsbs) `cpu-only` use case, compared using 6 workers on an AMD Ryzen 3970X:

<div align="center">
  <a href="https://questdb.io/time-series-benchmark-suite/">
    <img alt="A chart comparing the maximum throughput of QuestDB, ClickHouse, TimescaleDB and InfluxDB." src="https://raw.githubusercontent.com/questdb/questdb/master/.github/tsbs-results.png" />
  </a>
</div>

The table below shows query execution times on 1 billion rows, run on a `c5.metal` instance using 16 of the 96 threads:

| Query                                                     | Runtime    |
| --------------------------------------------------------- | ---------- |
| `SELECT sum(double) FROM 1bn`                             | 0.061 secs |
| `SELECT tag, sum(double) FROM 1bn`                        | 0.179 secs |
| `SELECT tag, sum(double) FROM 1bn WHERE timestamp='2019'` | 0.05 secs  |

## Resources

### 📚 Read the docs

- [QuestDB documentation:](https://questdb.io/docs/introduction/) a technical reference describing how to run and configure QuestDB.
- [Tutorials](https://questdb.io/tutorial/) written by our community members, showing possible applications of QuestDB.
- [Product roadmap](https://github.com/questdb/questdb/projects/3) listing the tasks and features we are currently working on.

### ❓ Get support

- [Community Slack:](https://slack.questdb.io) a great place for technical discussions and to meet other users. 👋
- [GitHub issues:](https://github.com/questdb/questdb/issues) report QuestDB bugs or problems.
- [GitHub discussions:](https://github.com/questdb/questdb/discussions) propose new features and see what has already been built.
- [Stack Overflow:](https://stackoverflow.com/questions/tagged/questdb) find solutions to common questions.

### 🚢 Deploy QuestDB

- [AWS AMI](https://questdb.io/docs/guides/aws-official-ami)
- [Google Cloud
  Platform](https://questdb.io/docs/guides/google-cloud-platform)
- [Official Docker image](https://questdb.io/docs/get-started/docker)
- [DigitalOcean droplets](https://questdb.io/docs/guides/digitalocean)
- [Kubernetes Helm charts](https://questdb.io/docs/guides/kubernetes)

## Contribute

We are always happy to accept contributions to the project, whether source code, documentation, bug reports, feature requests, or feedback. To get started:

- Take a look at GitHub issues labeled "[Good first issue](https://github.com/questdb/questdb/issues?q=is%3Aissue+is%3Aopen+label%3A%22Good+first+issue%22)".
- Read the [contributing guide](https://github.com/questdb/questdb/blob/master/CONTRIBUTING.md).
- For details on building QuestDB, see the [build instructions](https://github.com/questdb/questdb/blob/master/core/README.md).
- [Create a fork](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo) of QuestDB and submit a pull request with your proposed changes.

✨ As a token of appreciation, we send contributors some of our QuestDB swag, such as stickers and t-shirts. [Claim yours here](https://questdb.io/community).

Heartfelt thanks to the following wonderful people who have contributed to QuestDB ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
  <tr>
    <td align="center"><a href="https://github.com/clickingbuttons"><img src="https://avatars1.githubusercontent.com/u/43246297?v=4" width="100px;" alt=""/><br /><sub><b>clickingbuttons</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=clickingbuttons" title="Code">💻</a> <a href="#ideas-clickingbuttons" title="Ideas, Planning, & Feedback">🤔</a> <a href="#userTesting-clickingbuttons" title="User Testing">📓</a></td>
    <td align="center"><a href="https://github.com/ideoma"><img src="https://avatars0.githubusercontent.com/u/2159629?v=4" width="100px;" alt=""/><br /><sub><b>ideoma</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=ideoma" title="Code">💻</a> <a href="#userTesting-ideoma" title="User Testing">📓</a> <a href="https://github.com/questdb/questdb/commits?author=ideoma" title="Tests">⚠️</a></td>
    <td align="center"><a
href="https://github.com/tonytamwk"><img src="https://avatars2.githubusercontent.com/u/20872271?v=4" width="100px;" alt=""/><br /><sub><b>tonytamwk</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=tonytamwk" title="Code">💻</a> <a href="#userTesting-tonytamwk" title="User Testing">📓</a></td> <td align="center"><a href="http://sirinath.com/"><img src="https://avatars2.githubusercontent.com/u/637415?v=4" width="100px;" alt=""/><br /><sub><b>sirinath</b></sub></a><br /><a href="#ideas-sirinath" title="Ideas, Planning, & Feedback">🤔</a></td> <td align="center"><a href="https://www.linkedin.com/in/suhorukov"><img src="https://avatars1.githubusercontent.com/u/10332206?v=4" width="100px;" alt=""/><br /><sub><b>igor-suhorukov</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=igor-suhorukov" title="Code">💻</a> <a href="#ideas-igor-suhorukov" title="Ideas, Planning, & Feedback">🤔</a></td> <td align="center"><a href="https://github.com/mick2004"><img src="https://avatars1.githubusercontent.com/u/2042132?v=4" width="100px;" alt=""/><br /><sub><b>mick2004</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=mick2004" title="Code">💻</a> <a href="#platform-mick2004" title="Packaging/porting to new platform">📦</a></td> <td align="center"><a href="https://rawkode.com"><img src="https://avatars3.githubusercontent.com/u/145816?v=4" width="100px;" alt=""/><br /><sub><b>rawkode</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=rawkode" title="Code">💻</a> <a href="#infra-rawkode" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a></td> </tr> <tr> <td align="center"><a href="https://solidnerd.dev"><img src="https://avatars0.githubusercontent.com/u/886383?v=4" width="100px;" alt=""/><br /><sub><b>solidnerd</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=solidnerd" title="Code">💻</a> <a href="#infra-solidnerd" title="Infrastructure (Hosting, 
Build-Tools, etc)">🚇</a></td> <td align="center"><a href="http://solanav.github.io"><img src="https://avatars1.githubusercontent.com/u/32469597?v=4" width="100px;" alt=""/><br /><sub><b>solanav</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=solanav" title="Code">💻</a> <a href="https://github.com/questdb/questdb/commits?author=solanav" title="Documentation">📖</a></td> <td align="center"><a href="https://shantanoo-desai.github.io"><img src="https://avatars1.githubusercontent.com/u/12070966?v=4" width="100px;" alt=""/><br /><sub><b>shantanoo-desai</b></sub></a><br /><a href="#blog-shantanoo-desai" title="Blogposts">📝</a> <a href="#example-shantanoo-desai" title="Examples">💡</a></td> <td align="center"><a href="http://alexprut.com"><img src="https://avatars2.githubusercontent.com/u/1648497?v=4" width="100px;" alt=""/><br /><sub><b>alexprut</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=alexprut" title="Code">💻</a> <a href="#maintenance-alexprut" title="Maintenance">🚧</a></td> <td align="center"><a href="https://github.com/lbowman"><img src="https://avatars1.githubusercontent.com/u/1477427?v=4" width="100px;" alt=""/><br /><sub><b>lbowman</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=lbowman" title="Code">💻</a> <a href="https://github.com/questdb/questdb/commits?author=lbowman" title="Tests">⚠️</a></td> <td align="center"><a href="https://tutswiki.com/"><img src="https://avatars1.githubusercontent.com/u/424822?v=4" width="100px;" alt=""/><br /><sub><b>chankeypathak</b></sub></a><br /><a href="#blog-chankeypathak" title="Blogposts">📝</a></td> <td align="center"><a href="https://github.com/upsidedownsmile"><img src="https://avatars0.githubusercontent.com/u/26444088?v=4" width="100px;" alt=""/><br /><sub><b>upsidedownsmile</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=upsidedownsmile" title="Code">💻</a></td> </tr> <tr> <td align="center"><a 
href="https://github.com/Nagriar"><img src="https://avatars0.githubusercontent.com/u/2361099?v=4" width="100px;" alt=""/><br /><sub><b>Nagriar</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=Nagriar" title="Code">💻</a></td> <td align="center"><a href="https://github.com/piotrrzysko"><img src="https://avatars.githubusercontent.com/u/6481553?v=4" width="100px;" alt=""/><br /><sub><b>piotrrzysko</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=piotrrzysko" title="Code">💻</a> <a href="https://github.com/questdb/questdb/commits?author=piotrrzysko" title="Tests">⚠️</a></td> <td align="center"><a href="https://github.com/mpsq/dotfiles"><img src="https://avatars.githubusercontent.com/u/5734722?v=4" width="100px;" alt=""/><br /><sub><b>mpsq</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=mpsq" title="Code">💻</a></td> <td align="center"><a href="https://github.com/siddheshlatkar"><img src="https://avatars.githubusercontent.com/u/39632173?v=4" width="100px;" alt=""/><br /><sub><b>siddheshlatkar</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=siddheshlatkar" title="Code">💻</a></td> <td align="center"><a href="http://yitaekhwang.com"><img src="https://avatars.githubusercontent.com/u/6628444?v=4" width="100px;" alt=""/><br /><sub><b>Yitaek</b></sub></a><br /><a href="#tutorial-Yitaek" title="Tutorials">✅</a> <a href="#example-Yitaek" title="Examples">💡</a></td> <td align="center"><a href="https://www.gaboros.hu"><img src="https://avatars.githubusercontent.com/u/19173947?v=4" width="100px;" alt=""/><br /><sub><b>gabor-boros</b></sub></a><br /><a href="#tutorial-gabor-boros" title="Tutorials">✅</a> <a href="#example-gabor-boros" title="Examples">💡</a></td> <td align="center"><a href="https://github.com/kovid-r"><img src="https://avatars.githubusercontent.com/u/62409489?v=4" width="100px;" alt=""/><br /><sub><b>kovid-r</b></sub></a><br /><a href="#tutorial-kovid-r" 
title="Tutorials">✅</a> <a href="#example-kovid-r" title="Examples">💡</a></td> </tr> <tr> <td align="center"><a href="https://borowski-software.de/"><img src="https://avatars.githubusercontent.com/u/8701341?v=4" width="100px;" alt=""/><br /><sub><b>TimBo93</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3ATimBo93" title="Bug reports">🐛</a> <a href="#userTesting-TimBo93" title="User Testing">📓</a></td> <td align="center"><a href="http://zikani.me"><img src="https://avatars.githubusercontent.com/u/1501387?v=4" width="100px;" alt=""/><br /><sub><b>zikani03</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=zikani03" title="Code">💻</a></td> <td align="center"><a href="https://github.com/jaugsburger"><img src="https://avatars.githubusercontent.com/u/10787042?v=4" width="100px;" alt=""/><br /><sub><b>jaugsburger</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=jaugsburger" title="Code">💻</a> <a href="#maintenance-jaugsburger" title="Maintenance">🚧</a></td> <td align="center"><a href="http://www.questdb.io"><img src="https://avatars.githubusercontent.com/u/52114895?v=4" width="100px;" alt=""/><br /><sub><b>TheTanc</b></sub></a><br /><a href="#projectManagement-TheTanc" title="Project Management">📆</a> <a href="#content-TheTanc" title="Content">🖋</a> <a href="#ideas-TheTanc" title="Ideas, Planning, & Feedback">🤔</a></td> <td align="center"><a href="http://davidgs.com"><img src="https://avatars.githubusercontent.com/u/2071898?v=4" width="100px;" alt=""/><br /><sub><b>davidgs</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Adavidgs" title="Bug reports">🐛</a> <a href="#content-davidgs" title="Content">🖋</a></td> <td align="center"><a href="https://redalemeden.com"><img src="https://avatars.githubusercontent.com/u/519433?v=4" width="100px;" alt=""/><br /><sub><b>kaishin</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=kaishin" 
title="Code">💻</a> <a href="#example-kaishin" title="Examples">💡</a></td> <td align="center"><a href="https://questdb.io"><img src="https://avatars.githubusercontent.com/u/7276403?v=4" width="100px;" alt=""/><br /><sub><b>bluestreak01</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=bluestreak01" title="Code">💻</a> <a href="#maintenance-bluestreak01" title="Maintenance">🚧</a> <a href="https://github.com/questdb/questdb/commits?author=bluestreak01" title="Tests">⚠️</a></td> </tr> <tr> <td align="center"><a href="http://patrick.spacesurfer.com/"><img src="https://avatars.githubusercontent.com/u/29952889?v=4" width="100px;" alt=""/><br /><sub><b>patrickSpaceSurfer</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=patrickSpaceSurfer" title="Code">💻</a> <a href="#maintenance-patrickSpaceSurfer" title="Maintenance">🚧</a> <a href="https://github.com/questdb/questdb/commits?author=patrickSpaceSurfer" title="Tests">⚠️</a></td> <td align="center"><a href="http://chenrui.dev"><img src="https://avatars.githubusercontent.com/u/1580956?v=4" width="100px;" alt=""/><br /><sub><b>chenrui333</b></sub></a><br /><a href="#infra-chenrui333" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a></td> <td align="center"><a href="http://bsmth.de"><img src="https://avatars.githubusercontent.com/u/43580235?v=4" width="100px;" alt=""/><br /><sub><b>bsmth</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=bsmth" title="Documentation">📖</a> <a href="#content-bsmth" title="Content">🖋</a></td> <td align="center"><a href="https://github.com/Ugbot"><img src="https://avatars.githubusercontent.com/u/2143631?v=4" width="100px;" alt=""/><br /><sub><b>Ugbot</b></sub></a><br /><a href="#question-Ugbot" title="Answering Questions">💬</a> <a href="#userTesting-Ugbot" title="User Testing">📓</a> <a href="#talk-Ugbot" title="Talks">📢</a></td> <td align="center"><a href="https://github.com/lepolac"><img 
src="https://avatars.githubusercontent.com/u/6312424?v=4" width="100px;" alt=""/><br /><sub><b>lepolac</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=lepolac" title="Code">💻</a> <a href="#tool-lepolac" title="Tools">🔧</a></td> <td align="center"><a href="https://github.com/tiagostutz"><img src="https://avatars.githubusercontent.com/u/3986989?v=4" width="100px;" alt=""/><br /><sub><b>tiagostutz</b></sub></a><br /><a href="#userTesting-tiagostutz" title="User Testing">📓</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Atiagostutz" title="Bug reports">🐛</a> <a href="#projectManagement-tiagostutz" title="Project Management">📆</a></td> <td align="center"><a href="https://github.com/Lyncee59"><img src="https://avatars.githubusercontent.com/u/13176504?v=4" width="100px;" alt=""/><br /><sub><b>Lyncee59</b></sub></a><br /><a href="#ideas-Lyncee59" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/questdb/questdb/commits?author=Lyncee59" title="Code">💻</a></td> </tr> <tr> <td align="center"><a href="https://github.com/rrjanbiah"><img src="https://avatars.githubusercontent.com/u/4907427?v=4" width="100px;" alt=""/><br /><sub><b>rrjanbiah</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Arrjanbiah" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/sarunas-stasaitis"><img src="https://avatars.githubusercontent.com/u/57004257?v=4" width="100px;" alt=""/><br /><sub><b>sarunas-stasaitis</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Asarunas-stasaitis" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/RiccardoGiro"><img src="https://avatars.githubusercontent.com/u/60734967?v=4" width="100px;" alt=""/><br /><sub><b>RiccardoGiro</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3ARiccardoGiro" title="Bug reports">🐛</a></td> <td align="center"><a 
href="https://github.com/duggar"><img src="https://avatars.githubusercontent.com/u/37486846?v=4" width="100px;" alt=""/><br /><sub><b>duggar</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Aduggar" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/postol"><img src="https://avatars.githubusercontent.com/u/7983951?v=4" width="100px;" alt=""/><br /><sub><b>postol</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Apostol" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/petrjahoda"><img src="https://avatars.githubusercontent.com/u/45359845?v=4" width="100px;" alt=""/><br /><sub><b>petrjahoda</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Apetrjahoda" title="Bug reports">🐛</a></td> <td align="center"><a href="https://www.turecki.net"><img src="https://avatars.githubusercontent.com/u/1933165?v=4" width="100px;" alt=""/><br /><sub><b>t00</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3At00" title="Bug reports">🐛</a></td> </tr> <tr> <td align="center"><a href="https://github.com/snenkov"><img src="https://avatars.githubusercontent.com/u/13110986?v=4" width="100px;" alt=""/><br /><sub><b>snenkov</b></sub></a><br /><a href="#userTesting-snenkov" title="User Testing">📓</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Asnenkov" title="Bug reports">🐛</a> <a href="#ideas-snenkov" title="Ideas, Planning, & Feedback">🤔</a></td> <td align="center"><a href="https://www.linkedin.com/in/marregui"><img src="https://avatars.githubusercontent.com/u/255796?v=4" width="100px;" alt=""/><br /><sub><b>marregui</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=marregui" title="Code">💻</a> <a href="#ideas-marregui" title="Ideas, Planning, & Feedback">🤔</a> <a href="#design-marregui" title="Design">🎨</a></td> <td align="center"><a href="https://github.com/bratseth"><img 
src="https://avatars.githubusercontent.com/u/16574012?v=4" width="100px;" alt=""/><br /><sub><b>bratseth</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=bratseth" title="Code">💻</a> <a href="#ideas-bratseth" title="Ideas, Planning, & Feedback">🤔</a> <a href="#userTesting-bratseth" title="User Testing">📓</a></td> <td align="center"><a href="https://medium.com/@wellytambunan/"><img src="https://avatars.githubusercontent.com/u/242694?v=4" width="100px;" alt=""/><br /><sub><b>welly87</b></sub></a><br /><a href="#ideas-welly87" title="Ideas, Planning, & Feedback">🤔</a></td> <td align="center"><a href="http://johnleung.com"><img src="https://avatars.githubusercontent.com/u/20699?v=4" width="100px;" alt=""/><br /><sub><b>fuzzthink</b></sub></a><br /><a href="#ideas-fuzzthink" title="Ideas, Planning, & Feedback">🤔</a> <a href="#userTesting-fuzzthink" title="User Testing">📓</a></td> <td align="center"><a href="https://github.com/nexthack"><img src="https://avatars.githubusercontent.com/u/6803956?v=4" width="100px;" alt=""/><br /><sub><b>nexthack</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=nexthack" title="Code">💻</a></td> <td align="center"><a href="https://github.com/g-metan"><img src="https://avatars.githubusercontent.com/u/88013490?v=4" width="100px;" alt=""/><br /><sub><b>g-metan</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Ag-metan" title="Bug reports">🐛</a></td> </tr> <tr> <td align="center"><a href="https://github.com/tim2skew"><img src="https://avatars.githubusercontent.com/u/54268285?v=4" width="100px;" alt=""/><br /><sub><b>tim2skew</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Atim2skew" title="Bug reports">🐛</a> <a href="#userTesting-tim2skew" title="User Testing">📓</a></td> <td align="center"><a href="https://github.com/ospqsp"><img src="https://avatars.githubusercontent.com/u/84992434?v=4" width="100px;" alt=""/><br 
/><sub><b>ospqsp</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Aospqsp" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/SuperFluffy"><img src="https://avatars.githubusercontent.com/u/701177?v=4" width="100px;" alt=""/><br /><sub><b>SuperFluffy</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3ASuperFluffy" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/nu11ptr"><img src="https://avatars.githubusercontent.com/u/3615587?v=4" width="100px;" alt=""/><br /><sub><b>nu11ptr</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Anu11ptr" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/comunidadio"><img src="https://avatars.githubusercontent.com/u/10286013?v=4" width="100px;" alt=""/><br /><sub><b>comunidadio</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Acomunidadio" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/mugendi"><img src="https://avatars.githubusercontent.com/u/5348246?v=4" width="100px;" alt=""/><br /><sub><b>mugendi</b></sub></a><br /><a href="#ideas-mugendi" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Amugendi" title="Bug reports">🐛</a> <a href="https://github.com/questdb/questdb/commits?author=mugendi" title="Documentation">📖</a></td> <td align="center"><a href="https://github.com/paulwoods222"><img src="https://avatars.githubusercontent.com/u/86227717?v=4" width="100px;" alt=""/><br /><sub><b>paulwoods222</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Apaulwoods222" title="Bug reports">🐛</a></td> </tr> <tr> <td align="center"><a href="https://github.com/mingodad"><img src="https://avatars.githubusercontent.com/u/462618?v=4" width="100px;" alt=""/><br /><sub><b>mingodad</b></sub></a><br /><a href="#ideas-mingodad" title="Ideas, 
Planning, & Feedback">🤔</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Amingodad" title="Bug reports">🐛</a> <a href="https://github.com/questdb/questdb/commits?author=mingodad" title="Documentation">📖</a></td> <td align="center"><a href="https://github.com/houarizegai"><img src="https://avatars.githubusercontent.com/houarizegai?v=4" width="100px;" alt=""/><br /><sub><b>houarizegai</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=houarizegai" title="Documentation">📖</a></td> <td align="center"><a href="http://scrapfly.io"><img src="https://avatars.githubusercontent.com/u/1763341?v=4" width="100px;" alt=""/><br /><sub><b>jjsaunier</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Ajjsaunier" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/zanek"><img src="https://avatars.githubusercontent.com/u/333102?v=4" width="100px;" alt=""/><br /><sub><b>zanek</b></sub></a><br /><a href="#ideas-zanek" title="Ideas, Planning, & Feedback">🤔</a> <a href="#projectManagement-zanek" title="Project Management">📆</a></td> <td align="center"><a href="https://github.com/Geekaylee"><img src="https://avatars.githubusercontent.com/u/12583377?v=4" width="100px;" alt=""/><br /><sub><b>Geekaylee</b></sub></a><br /><a href="#userTesting-Geekaylee" title="User Testing">📓</a> <a href="#ideas-Geekaylee" title="Ideas, Planning, & Feedback">🤔</a></td> <td align="center"><a href="https://github.com/lg31415"><img src="https://avatars.githubusercontent.com/u/3609384?v=4" width="100px;" alt=""/><br /><sub><b>lg31415</b></sub></a><br /><a href="https://github.com/questdb/questdb/issues?q=author%3Alg31415" title="Bug reports">🐛</a> <a href="#projectManagement-lg31415" title="Project Management">📆</a></td> <td align="center"><a href="http://nulldev.xyz/"><img src="https://avatars.githubusercontent.com/u/9571936?v=4" width="100px;" alt=""/><br /><sub><b>null-dev</b></sub></a><br /><a 
href="https://github.com/questdb/questdb/issues?q=author%3Anull-dev" title="Bug reports">🐛</a> <a href="#projectManagement-null-dev" title="Project Management">📆</a></td> </tr> <tr> <td align="center"><a href="http://ultd.io"><img src="https://avatars.githubusercontent.com/u/12675427?v=4" width="100px;" alt=""/><br /><sub><b>ultd</b></sub></a><br /><a href="#ideas-ultd" title="Ideas, Planning, & Feedback">🤔</a> <a href="#projectManagement-ultd" title="Project Management">📆</a></td> <td align="center"><a href="https://github.com/ericsun2"><img src="https://avatars.githubusercontent.com/u/8866410?v=4" width="100px;" alt=""/><br /><sub><b>ericsun2</b></sub></a><br /><a href="#ideas-ericsun2" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Aericsun2" title="Bug reports">🐛</a> <a href="#projectManagement-ericsun2" title="Project Management">📆</a></td> <td align="center"><a href="https://www.linkedin.com/in/giovanni-k-bonetti-2809345/"><img src="https://avatars.githubusercontent.com/u/3451581?v=4" width="100px;" alt=""/><br /><sub><b>giovannibonetti</b></sub></a><br /><a href="#userTesting-giovannibonetti" title="User Testing">📓</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Agiovannibonetti" title="Bug reports">🐛</a> <a href="#projectManagement-giovannibonetti" title="Project Management">📆</a></td> <td align="center"><a href="https://wavded.com"><img src="https://avatars.githubusercontent.com/u/26638?v=4" width="100px;" alt=""/><br /><sub><b>wavded</b></sub></a><br /><a href="#userTesting-wavded" title="User Testing">📓</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Awavded" title="Bug reports">🐛</a></td> <td align="center"><a href="https://medium.com/@apechkurov"><img src="https://avatars.githubusercontent.com/u/37772591?v=4" width="100px;" alt=""/><br /><sub><b>puzpuzpuz</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=puzpuzpuz" 
title="Documentation">📖</a> <a href="https://github.com/questdb/questdb/commits?author=puzpuzpuz" title="Code">💻</a> <a href="#userTesting-puzpuzpuz" title="User Testing">📓</a></td> <td align="center"><a href="https://github.com/rstreics"><img src="https://avatars.githubusercontent.com/u/50323347?v=4" width="100px;" alt=""/><br /><sub><b>rstreics</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=rstreics" title="Code">💻</a> <a href="#infra-rstreics" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="https://github.com/questdb/questdb/commits?author=rstreics" title="Documentation">📖</a></td> <td align="center"><a href="https://github.com/mariusgheorghies"><img src="https://avatars.githubusercontent.com/u/84250061?v=4" width="100px;" alt=""/><br /><sub><b>mariusgheorghies</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=mariusgheorghies" title="Code">💻</a> <a href="#infra-mariusgheorghies" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="https://github.com/questdb/questdb/commits?author=mariusgheorghies" title="Documentation">📖</a></td> </tr> <tr> <td align="center"><a href="https://github.com/pswu11"><img src="https://avatars.githubusercontent.com/u/48913707?v=4" width="100px;" alt=""/><br /><sub><b>pswu11</b></sub></a><br /><a href="#content-pswu11" title="Content">🖋</a> <a href="#ideas-pswu11" title="Ideas, Planning, & Feedback">🤔</a> <a href="#design-pswu11" title="Design">🎨</a></td> <td align="center"><a href="https://github.com/insmac"><img src="https://avatars.githubusercontent.com/u/1871646?v=4" width="100px;" alt=""/><br /><sub><b>insmac</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=insmac" title="Code">💻</a> <a href="#ideas-insmac" title="Ideas, Planning, & Feedback">🤔</a> <a href="#design-insmac" title="Design">🎨</a></td> <td align="center"><a href="https://github.com/eugenels"><img 
src="https://avatars.githubusercontent.com/u/79919431?v=4" width="100px;" alt=""/><br /><sub><b>eugenels</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=eugenels" title="Code">💻</a> <a href="#ideas-eugenels" title="Ideas, Planning, & Feedback">🤔</a> <a href="#maintenance-eugenels" title="Maintenance">🚧</a></td> <td align="center"><a href="https://github.com/bziobrowski"><img src="https://avatars.githubusercontent.com/u/26925920?v=4" width="100px;" alt=""/><br /><sub><b>bziobrowski</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=bziobrowski" title="Code">💻</a> <a href="#projectManagement-bziobrowski" title="Project Management">📆</a></td> <td align="center"><a href="https://github.com/Zapfmeister"><img src="https://avatars.githubusercontent.com/u/20150586?v=4" width="100px;" alt=""/><br /><sub><b>Zapfmeister</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=Zapfmeister" title="Code">💻</a> <a href="#userTesting-Zapfmeister" title="User Testing">📓</a></td> <td align="center"><a href="https://github.com/mkaruza"><img src="https://avatars.githubusercontent.com/u/3676457?v=4" width="100px;" alt=""/><br /><sub><b>mkaruza</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=mkaruza" title="Code">💻</a></td> <td align="center"><a href="https://github.com/DylanDKnight"><img src="https://avatars.githubusercontent.com/u/17187287?v=4" width="100px;" alt=""/><br /><sub><b>DylanDKnight</b></sub></a><br /><a href="#userTesting-DylanDKnight" title="User Testing">📓</a> <a href="https://github.com/questdb/questdb/issues?q=author%3ADylanDKnight" title="Bug reports">🐛</a></td> </tr> <tr> <td align="center"><a href="https://github.com/enolal826"><img src="https://avatars.githubusercontent.com/u/51820585?v=4" width="100px;" alt=""/><br /><sub><b>enolal826</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=enolal826" title="Code">💻</a></td> <td 
align="center"><a href="https://github.com/glasstiger"><img src="https://avatars.githubusercontent.com/u/94906625?v=4" width="100px;" alt=""/><br /><sub><b>glasstiger</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=glasstiger" title="Code">💻</a></td> <td align="center"><a href="https://arijus.net"><img src="https://avatars.githubusercontent.com/u/4284659?v=4" width="100px;" alt=""/><br /><sub><b>argshook</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=argshook" title="Code">💻</a> <a href="#ideas-argshook" title="Ideas, Planning, & Feedback">🤔</a> <a href="#design-argshook" title="Design">🎨</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Aargshook" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/amunra"><img src="https://avatars.githubusercontent.com/u/1499096?v=4" width="100px;" alt=""/><br /><sub><b>amunra</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=amunra" title="Code">💻</a> <a href="https://github.com/questdb/questdb/commits?author=amunra" title="Documentation">📖</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Aamunra" title="Bug reports">🐛</a></td> <td align="center"><a href="https://lamottsjourney.wordpress.com/"><img src="https://avatars.githubusercontent.com/u/66742430?v=4" width="100px;" alt=""/><br /><sub><b>GothamsJoker</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=GothamsJoker" title="Code">💻</a></td> <td align="center"><a href="https://github.com/kocko"><img src="https://avatars.githubusercontent.com/u/862000?v=4" width="100px;" alt=""/><br /><sub><b>kocko</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=kocko" title="Code">💻</a></td> <td align="center"><a href="https://github.com/jerrinot"><img src="https://avatars.githubusercontent.com/u/158619?v=4" width="100px;" alt=""/><br /><sub><b>jerrinot</b></sub></a><br /><a 
href="https://github.com/questdb/questdb/commits?author=jerrinot" title="Code">💻</a> <a href="#ideas-jerrinot" title="Ideas, Planning, & Feedback">🤔</a> <a href="https://github.com/questdb/questdb/issues?q=author%3Ajerrinot" title="Bug reports">🐛</a></td> </tr> <tr> <td align="center"><a href="http://ramiroberrelleza.com"><img src="https://avatars.githubusercontent.com/u/475313?v=4" width="100px;" alt=""/><br /><sub><b>rberrelleza</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=rberrelleza" title="Code">💻</a></td> <td align="center"><a href="https://github.com/Cobalt-27"><img src="https://avatars.githubusercontent.com/u/34511059?v=4" width="100px;" alt=""/><br /><sub><b>Cobalt-27</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=Cobalt-27" title="Code">💻</a></td> <td align="center"><a href="https://github.com/eschultz"><img src="https://avatars.githubusercontent.com/u/390064?v=4" width="100px;" alt=""/><br /><sub><b>eschultz</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=eschultz" title="Code">💻</a></td> <td align="center"><a href="https://www.linkedin.com/in/xinyi-qiao/"><img src="https://avatars.githubusercontent.com/u/47307374?v=4" width="100px;" alt=""/><br /><sub><b>XinyiQiao</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=XinyiQiao" title="Code">💻</a></td> <td align="center"><a href="http://chenquan.me"><img src="https://avatars.githubusercontent.com/u/20042193?v=4" width="100px;" alt=""/><br /><sub><b>terasum</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=terasum" title="Documentation">📖</a></td> <td align="center"><a href="https://www.linkedin.com/in/hristovdeveloper"><img src="https://avatars.githubusercontent.com/u/3893599?v=4" width="100px;" alt=""/><br /><sub><b>PlamenHristov</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=PlamenHristov" title="Code">💻</a></td> <td 
align="center"><a href="https://github.com/tris0laris"><img src="https://avatars.githubusercontent.com/u/57298792?v=4" width="100px;" alt=""/><br /><sub><b>tris0laris</b></sub></a><br /><a href="#blog-tris0laris" title="Blogposts">📝</a> <a href="#ideas-tris0laris" title="Ideas, Planning, & Feedback">🤔</a></td> </tr> <tr> <td align="center"><a href="https://github.com/HeZean"><img src="https://avatars.githubusercontent.com/u/49837965?v=4" width="100px;" alt=""/><br /><sub><b>HeZean</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=HeZean" title="Code">💻</a> <a href="https://github.com/questdb/questdb/issues?q=author%3AHeZean" title="Bug reports">🐛</a></td> <td align="center"><a href="https://github.com/iridess"><img src="https://avatars.githubusercontent.com/u/104518201?v=4" width="100px;" alt=""/><br /><sub><b>iridess</b></sub></a><br /><a href="https://github.com/questdb/questdb/commits?author=iridess" title="Code">💻</a> <a href="https://github.com/questdb/questdb/commits?author=iridess" title="Documentation">📖</a></td> </tr> </table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->

This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind are welcome!
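The ingestion and query endpoints described at the top of this README can be exercised with just the standard library. The sketch below is illustrative only, assuming a local QuestDB instance on the default ports (`9009` for InfluxDB Line Protocol ingestion, `9000` for the REST API); the helper function names are our own, not part of any QuestDB client.

```python
import socket
import urllib.parse
import urllib.request


def ilp_line(table, symbols, fields, ts_ns):
    """Format one InfluxDB Line Protocol row: table,tag=v field=v timestamp."""
    tags = "".join(f",{k}={v}" for k, v in symbols.items())
    cols = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{table}{tags} {cols} {ts_ns}\n"


def send_ilp(line, host="localhost", port=9009):
    """Push a line to QuestDB's ILP TCP listener (default port 9009)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("utf-8"))


def rest_query_url(sql, host="localhost", port=9000):
    """Build a URL for QuestDB's REST /exec endpoint (default port 9000)."""
    return f"http://{host}:{port}/exec?" + urllib.parse.urlencode({"query": sql})


# Build the payloads; with a server running you would then call
# send_ilp(line) and urllib.request.urlopen(url).read().
line = ilp_line("trades", {"symbol": "BTC-USD"}, {"price": 61223.4},
                1648828800000000000)
url = rest_query_url("SELECT symbol, sum(price) FROM trades")
```

The PostgreSQL wire protocol endpoint on port `8812` needs no special client code at all: any standard PostgreSQL driver or `psql` can connect to it directly.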
---
lab:
    title: 'Lab 4: Create a purchase order'
    module: 'Module 1: Learn the fundamentals of Microsoft Dynamics 365 Supply Chain Management'
---

# Module 1: Learn the fundamentals of Microsoft Dynamics 365 Supply Chain Management

## Lab 4 — Create a purchase order

## Objectives

Most purchase orders are created automatically as a result of master planning, direct delivery, and other processes. When created manually, a purchase order is typically entered by a purchasing agent. Create a purchase order using the USMF company.

## Lab setup

- **Estimated time to complete**: 10 minutes

## Instructions

1. On the Finance and Operations home page, verify in the upper-right corner that you are working in the USMF company.
1. If necessary, select the company by opening the menu and choosing **USMF**.
1. In the upper-left corner, select the **Expand the navigation pane** menu button.
1. Go to **Modules** > **Procurement and sourcing** > **Purchase orders** > **All purchase orders**.
1. On the All purchase orders page, select **+ New** in the top menu.
1. In the Create purchase order pane, open the **Vendor account** menu and select **US-101**.
1. When you select a vendor, details from the vendor record, such as the address, invoice account, delivery terms, and mode of delivery, are copied into the order header as default values. You can change these values at any time.
1. Expand the **General** section.
1. In the **STORAGE DIMENSIONS** section, open the **Site** menu and review the list of sites.
1. The Site field, together with the Warehouse field, determines where the purchased goods and services should be delivered. The default delivery address is the site. Both fields can be populated with values defined for the selected vendor, or you can enter values manually.
1. In the **DATES** section, the Delivery date field specifies when the purchased goods and services should be delivered.
1. You can set a single delivery date for the order, or you can set individual delivery dates for individual order lines. If the delivery date specified here cannot be met for specific products or services because of longer lead times, those lines are created with a later delivery date to accommodate the situation.
1. Expand the **Administration** section. The **Orderer** field indicates who placed the order.
1. It can be useful to share this information with the vendor in case they need to contact that person. The value can be assigned automatically if the current user account is associated with a name on the users page.
1. Select **OK**.
1. The order header is now ready. When you work with purchase order lines, only summary information from the header is shown. To view the remaining details, select **Header**.

    ![Screenshot showing the location of the Header menu](./media/lp1-m3-purchase-order-header-option.png)

1. On the **Purchase order lines** section menu, select **Purchase order line**.

    ![Screenshot showing the location of the Purchase order line menu item](./media/lp1-m3-purchase-order-purchase-order-line-menu.png)

1. In the **DISPLAY** section, select **Dimensions**.
1. Products can come in different variants distinguished by characteristics such as color, size, and style. Products can also be set up to use storage dimensions, such as site and warehouse. There are also additional tracking dimensions, such as batch numbers and serial numbers. To speed up order entry, you can add frequently used dimension fields directly to the order grid.
1. In the Display dimensions pane, in the **PRODUCT DIMENSIONS** section, select the **Color** check box.
1. Optional: If the Save setup option is selected, the chosen dimensions will also be displayed in the order lines grid the next time you open a purchase order.
1. Select **OK**.
1. Open the **Item number** cell menu and select **T0004**.
1. Remember, instead of scrolling through the list you can also type a value in the filter field.
1. Order lines for products and services are created by specifying an item number, or as expenses by specifying a procurement category.
1. A procurement category is used to add lines where the purchased item is expensed directly instead of being received into inventory. If you need to expense a purchase, you can do so by creating a purchase order line that specifies a procurement category rather than creating a line with an item number. Items can also be associated with a procurement category, in which case the procurement category is shown for reference only.
1. Open the **Color** menu, review the available options, and then select one of the colors or color combinations.
1. The Site and Warehouse fields are usually populated with values from the order header, but you can override them if some lines are delivered to other locations.
1. In the **Quantity** field, enter **10**.
1. The Quantity field is automatically populated with the product's minimum order quantity, if one is configured, or with the value 1.
1. Additional information:

    - **Unit**: Shows the unit of measure for the ordered quantity. The unit is usually provided automatically from the purchase unit of measure defined in the product master data.
    - **Unit price**: Contains a value from either a purchase agreement or a trade agreement. You can change the unit price on individual order lines, for example if an individual price has been agreed with the vendor.
    - **Discount**: Represents the discount amount per unit. The unit price is reduced by the discount. This discount is usually provided automatically from a purchase agreement or trade agreement, but it can be overridden on individual lines if individual discounts have been agreed with the vendor.
    - **Discount percent**: When a discount percentage is entered, the net amount of the line is reduced accordingly. The discount percentage is often provided automatically from a purchase agreement or trade agreement, but it can be overridden on individual lines if an individual discount percentage has been agreed with the vendor.
    - **Net amount**: Calculated from the values of other fields on the line, including the quantity, unit price, discount, and discount percent. You can change the net amount, but the Unit price, Discount, and Discount percent fields are then cleared, and when the line is posted, the posted amount corresponds to the net amount. Usually, the Net amount field is used only to display the net amount of the line.

1. Below the purchase order lines, at the bottom of the page, select **Line details**.
1. Go to the **Delivery** tab.
1. Each order line can have its own delivery date. The date is inherited from the field in the purchase order header, but you can change it.
1. Close the purchase order line page.
1. On the All purchase orders page, use the filter function to find your new purchase order.
1. When you have finished the lab, close the All purchase orders page and return to the home page.
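The net amount arithmetic described for the purchase order line above (quantity times the discounted unit price, then reduced by the discount percent) can be sketched as shell arithmetic. All numeric values here are hypothetical illustrations, not data from the lab:

```shell
# Hypothetical line values -- not from the lab data
qty=10            # Quantity
unit_price=20     # Unit price (e.g. from a trade agreement)
discount=2        # Per-unit discount amount
discount_pct=5    # Discount percent applied to the line

# Net amount = qty * (unit price - per-unit discount), reduced by the discount percent
net=$(( qty * (unit_price - discount) * (100 - discount_pct) / 100 ))
echo "$net"   # 171
```

Editing the Net amount field directly bypasses exactly this calculation, which is why the price and discount fields are cleared when you do so.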
articles/container-registry/container-registry-helm-repos.md
jufuku0108/azure-docs.ja-jp
---
title: Store Helm charts
description: Learn how to store Helm charts for your Kubernetes applications using repositories in Azure Container Registry
ms.topic: article
ms.date: 01/28/2020
ms.openlocfilehash: 7969efe37558fffb26b983131c56ae11f3ef9368
ms.sourcegitcommit: 05b36f7e0e4ba1a821bacce53a1e3df7e510c53a
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 03/06/2020
ms.locfileid: "78398973"
---
# <a name="push-and-pull-helm-charts-to-an-azure-container-registry"></a>Push and pull Helm charts to an Azure container registry

To quickly manage and deploy applications for Kubernetes, you can use the [open-source Helm package manager][helm]. With Helm, application packages are defined as [charts](https://helm.sh/docs/topics/charts/), which are collected and stored in a [Helm chart repository](https://helm.sh/docs/topics/chart_repository/).

This article shows you how to host Helm chart repositories in an Azure container registry, using either a Helm 3 or Helm 2 installation. In this example, you store an existing Helm chart from the public Helm *stable* repository. In many scenarios, you would build and upload your own charts for the applications you develop. For more information on how to build your own Helm charts, see the [Chart Template Developer's Guide][develop-helm-charts].

> [!IMPORTANT]
> Helm chart support in Azure Container Registry is currently in preview. Previews are made available to you on the condition that you agree to the supplemental [terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).

## <a name="helm-3-or-helm-2"></a>Helm 3 or Helm 2?

To store, manage, and install Helm charts, you use a Helm client and the Helm CLI. Major releases of the Helm client include Helm 3 and Helm 2. Helm 3 supports a new chart format and no longer installs the Tiller server-side component. For details on the differences between versions, see the [version FAQ](https://helm.sh/docs/faq/). If you've previously deployed Helm 2 charts, see [Migrating Helm v2 to v3](https://helm.sh/docs/topics/v2_v3_migration/).

You can use either Helm 3 or Helm 2 to host Helm charts in Azure Container Registry, using workflows specific to each version:

* [Helm 3 client](#use-the-helm-3-client) - manage charts in your registry as [OCI artifacts](container-registry-image-formats.md#oci-artifacts) using `helm chart` commands
* [Helm 2 client](#use-the-helm-2-client) - add and manage your container registry as a Helm chart repository using [az acr helm][az-acr-helm] commands in the Azure CLI

### <a name="additional-information"></a>Additional information

* To manage charts as OCI artifacts, we recommend the Helm 3 workflow with the native `helm chart` commands.
* You can use the legacy [az acr helm][az-acr-helm] Azure CLI commands and workflow with the Helm 3 client and charts. However, certain commands such as `az acr helm list` are not compatible with Helm 3 charts.
* With Helm 3, the [az acr helm][az-acr-helm] commands are supported mainly for compatibility with the Helm 2 client and chart format. Future development of these commands is not planned.

## <a name="use-the-helm-3-client"></a>Use the Helm 3 client

### <a name="prerequisites"></a>Prerequisites

- An **Azure container registry** in your Azure subscription. If needed, create a registry using the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).
- **Helm client version 3.0.0 or later** - Run `helm version` to find your current version. For more information on how to install and upgrade Helm, see [Installing Helm][helm-install].
- A **Kubernetes cluster** where you will install a Helm chart. If needed, create an [Azure Kubernetes Service cluster][aks-quickstart].
- **Azure CLI version 2.0.71 or later** - Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].

### <a name="high-level-workflow"></a>High-level workflow

With **Helm 3**:

* You can create one or more Helm repositories in an Azure container registry
* Store Helm 3 charts in a registry as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). Currently, Helm 3 support for OCI is considered *experimental*.
* Push, pull, and manage Helm charts in a registry using `helm chart` commands directly from the Helm CLI
* Authenticate with your registry through the Azure CLI, which automatically updates your Helm client with the registry URI and credentials. Because you don't need to specify this repository information manually, credentials aren't exposed in your command history.
* Use `helm install` to install charts to a Kubernetes cluster from a local repository cache. For examples, see the following sections.

### <a name="enable-oci-support"></a>Enable OCI support

Enable OCI support in the Helm 3 client by setting the following environment variable. Currently, this support is *experimental*.

```console
export HELM_EXPERIMENTAL_OCI=1
```

### <a name="pull-an-existing-helm-package"></a>Pull an existing Helm package

If you haven't already added the `stable` Helm chart repository, run the `helm repo add` command:

```console
helm repo add stable https://kubernetes-charts.storage.googleapis.com
```

Pull a chart package from the `stable` repository locally. For example, create a local directory such as *~/acr-helm*, and then download the existing *stable/wordpress* chart package. (This example and the other commands in this article are formatted for the Bash shell.)

```console
mkdir ~/acr-helm && cd ~/acr-helm
helm pull stable/wordpress --untar
```

Because the `helm pull stable/wordpress` command didn't specify a particular version, the *latest* version was pulled and unpacked into the `wordpress` subdirectory.

### <a name="save-chart-to-local-registry-cache"></a>Save chart to local registry cache

Change directory to the `wordpress` subdirectory, which contains the Helm chart files. Then run `helm chart save` to save a copy of the chart locally and also create an alias with the fully qualified name of the registry and the target repository and tag. In the following example, the registry name is *mycontainerregistry*, the target repository is *wordpress*, and the target chart tag is *latest*, but substitute values for your environment:

```console
cd wordpress
helm chart save . wordpress:latest
helm chart save . mycontainerregistry.azurecr.io/helm/wordpress:latest
```

Run `helm chart list` to confirm that the charts were saved in the local registry cache. Output is similar to the following:

```console
REF                                                     NAME        VERSION  DIGEST   SIZE      CREATED
wordpress:latest                                        wordpress   8.1.0    5899db0  29.1 KiB  1 day
mycontainerregistry.azurecr.io/helm/wordpress:latest    wordpress   8.1.0    5899db0  29.1 KiB  1 day
```

### <a name="push-chart-to-azure-container-registry"></a>Push chart to Azure Container Registry

Run the `helm chart push` command in the Helm 3 CLI to push the Helm chart to a repository in your Azure container registry. If it doesn't exist, the repository is created.

First, authenticate with the registry using the Azure CLI command [az acr login][az-acr-login]:

```azurecli
az acr login --name mycontainerregistry
```

Push the chart to the fully qualified target repository:

```console
helm chart push mycontainerregistry.azurecr.io/helm/wordpress:latest
```

After a successful push, output is similar to the following:

```output
The push refers to repository [mycontainerregistry.azurecr.io/helm/wordpress]
ref:     mycontainerregistry.azurecr.io/helm/wordpress:latest
digest:  5899db028dcf96aeaabdadfa5899db025899db025899db025899db025899db02
size:    29.1 KiB
name:    wordpress
version: 8.1.0
```

### <a name="list-charts-in-the-repository"></a>List charts in the repository

As with images stored in an Azure container registry, you can use [az acr repository][az-acr-repository] commands to show the repositories hosting your charts, as well as chart tags and manifests.

For example, run [az acr repository show][az-acr-repository-show] to see the properties of the repository you created in the previous step:

```azurecli
az acr repository show \
  --name mycontainerregistry \
  --repository helm/wordpress
```

Output is similar to the following:

```output
{
  "changeableAttributes": {
    "deleteEnabled": true,
    "listEnabled": true,
    "readEnabled": true,
    "writeEnabled": true
  },
  "createdTime": "2020-01-29T16:54:30.1514833Z",
  "imageName": "helm/wordpress",
  "lastUpdateTime": "2020-01-29T16:54:30.4992247Z",
  "manifestCount": 1,
  "registry": "mycontainerregistry.azurecr.io",
  "tagCount": 1
}
```

Run the [az acr repository show-manifests][az-acr-repository-show-manifests] command to see details of the chart stored in the repository. For example:

```azurecli
az acr repository show-manifests \
  --name mycontainerregistry \
  --repository helm/wordpress --detail
```

Output, abbreviated in this example, shows a `configMediaType` of `application/vnd.cncf.helm.config.v1+json`:

```output
[
  {
    [...]
    "configMediaType": "application/vnd.cncf.helm.config.v1+json",
    "createdTime": "2020-01-29T16:54:30.2382436Z",
    "digest": "sha256:xxxxxxxx51bc0807bfa97cb647e493ac381b96c1f18749b7388c24bbxxxxxxxxx",
    "imageSize": 29995,
    "lastUpdateTime": "2020-01-29T16:54:30.3492436Z",
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "tags": [
      "latest"
    ]
  }
]
```

### <a name="pull-chart-to-local-cache"></a>Pull chart to local cache

To install a Helm chart to Kubernetes, the chart must be in the local cache. In this example, first run `helm chart remove` to remove the existing local chart named `mycontainerregistry.azurecr.io/helm/wordpress:latest`:

```console
helm chart remove mycontainerregistry.azurecr.io/helm/wordpress:latest
```

Run `helm chart pull` to download the chart from the Azure container registry to your local cache:

```console
helm chart pull mycontainerregistry.azurecr.io/helm/wordpress:latest
```

### <a name="export-helm-chart"></a>Export Helm chart

To work further with the chart, export it to a local directory using `helm chart export`. For example, export the chart you pulled to the `install` directory:

```console
helm chart export mycontainerregistry.azurecr.io/helm/wordpress:latest --destination ./install
```

To view information for the exported chart in the repository, run the `helm inspect chart` command in the directory where you exported the chart:

```console
cd install
helm inspect chart wordpress
```

When no version number is provided, the *latest* version is used. Helm returns detailed information about the chart, as shown in the following condensed output:

```output
apiVersion: v1
appVersion: 5.3.2
dependencies:
- condition: mariadb.enabled
  name: mariadb
  repository: https://kubernetes-charts.storage.googleapis.com/
  tags:
  - wordpress-database
  version: 7.x.x
description: Web publishing platform for building blogs and websites.
home: http://www.wordpress.com/
icon: https://bitnami.com/assets/stacks/wordpress/img/wordpress-stack-220x234.png
keywords:
- wordpress
- cms
- blog
- http
- web
- application
- php
maintainers:
- email: [email protected]
  name: Bitnami
name: wordpress
sources:
- https://github.com/bitnami/bitnami-docker-wordpress
version: 8.1.0
```

### <a name="install-helm-chart"></a>Install Helm chart

Run `helm install` to install the Helm chart you pulled to the local cache and exported. Specify a release name, or pass the `--generate-name` parameter. For example:

```console
helm install wordpress --generate-name
```

As the installation proceeds, follow the instructions in the command output to see the WordPress URLs and credentials. You can also run the `kubectl get pods` command to see the Kubernetes resources deployed through the Helm chart:

```output
NAME                                    READY   STATUS    RESTARTS   AGE
wordpress-1598530621-67c77b6d86-7ldv4   1/1     Running   0          2m48s
wordpress-1598530621-mariadb-0          1/1     Running   0          2m48s
[...]
```

### <a name="delete-a-helm-chart-from-the-repository"></a>Delete a Helm chart from the repository

To delete a chart from the repository, use the [az acr repository delete][az-acr-repository-delete] command. Run the following command and confirm the operation when prompted:

```azurecli
az acr repository delete --name mycontainerregistry --image helm/wordpress:latest
```

## <a name="use-the-helm-2-client"></a>Use the Helm 2 client

### <a name="prerequisites"></a>Prerequisites

- An **Azure container registry** in your Azure subscription. If needed, create a registry using the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).
- **Helm client version 2.11.0 (not an RC version) or later** - Run `helm version` to find your current version. You also need a Helm server (Tiller) initialized within a Kubernetes cluster. If needed, create an [Azure Kubernetes Service cluster][aks-quickstart]. For more information on how to install and upgrade Helm, see [Installing Helm][helm-install-v2].
- **Azure CLI version 2.0.46 or later** - Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].

### <a name="high-level-workflow"></a>High-level workflow

With **Helm 2**:

* Configure your Azure container registry as a *single* Helm chart repository. Azure Container Registry manages the index definition as you add and remove charts from the repository.
* Use [az acr helm][az-acr-helm] commands in the Azure CLI to add your Azure container registry as a Helm chart repository, and to push and manage charts. These Azure CLI commands wrap Helm 2 client commands.
* Add the chart repository in your Azure container registry to your local Helm repository index, to support chart search
* Authenticate with your Azure container registry through the Azure CLI, which automatically updates your Helm client with the registry URI and credentials. Because you don't need to specify this repository information manually, credentials aren't exposed in your command history.
* Use `helm install` to install charts to a Kubernetes cluster from the local repository cache. For examples, see the following sections.

### <a name="add-repository-to-helm-client"></a>Add repository to Helm client

Add your Azure Container Registry Helm chart repository to your Helm client using the [az acr helm repo add][az-acr-helm-repo-add] command. This command gets an authentication token for your Azure container registry that is used by the Helm client. The authentication token is valid for 3 hours. Similar to `docker login`, you can run this command in future CLI sessions to authenticate your Helm client with your Azure Container Registry Helm chart repository:

```azurecli
az acr helm repo add --name mycontainerregistry
```

### <a name="add-a-chart-to-the-repository"></a>Add a chart to the repository

First, create a local directory at *~/acr-helm*, then download the existing *stable/wordpress* chart:

```console
mkdir ~/acr-helm && cd ~/acr-helm
helm repo update
helm fetch stable/wordpress
```

Enter `ls` to list the downloaded chart, and note the Wordpress version included in the file name. The `helm fetch stable/wordpress` command didn't specify a particular version, so the *latest* version was fetched. In the following example output, the Wordpress chart is version *8.1.0*:

```output
wordpress-8.1.0.tgz
```

Push the chart to your Helm chart repository in Azure Container Registry using the [az acr helm push][az-acr-helm-push] command in the Azure CLI. Specify the name of the Helm chart you downloaded in the previous step, such as *wordpress-8.1.0.tgz*:

```azurecli
az acr helm push --name mycontainerregistry wordpress-8.1.0.tgz
```

After a moment, the Azure CLI reports that your chart was saved, as shown in the following example output:

```output
{
  "saved": true
}
```

### <a name="list-charts-in-the-repository"></a>List charts in the repository

To use the chart uploaded in the previous step, the local Helm repository index must be updated. You can reindex the repositories in the Helm client, or you can use the Azure CLI to update the repository index. Each time you add a chart to your repository, this step must be completed:

```azurecli
az acr helm repo add --name mycontainerregistry
```

With a chart stored in your repository and the updated index available locally, you can use the regular Helm client commands to search or install. To see all the charts in your repository, use the `helm search` command with your own Azure Container Registry name:

```console
helm search mycontainerregistry
```

The Wordpress chart pushed in the previous step is listed, as shown in the following example output:

```output
NAME                  CHART VERSION  APP VERSION  DESCRIPTION
helmdocs/wordpress    8.1.0          5.3.2        Web publishing platform for building blogs and websites.
```

You can also list the charts with the Azure CLI, using [az acr helm list][az-acr-helm-list]:

```azurecli
az acr helm list --name mycontainerregistry
```

### <a name="show-information-for-a-helm-chart"></a>Show information for a Helm chart

To show information for a specific chart in the repository, you can use the `helm inspect` command:

```console
helm inspect mycontainerregistry/wordpress
```

When no version number is provided, the *latest* version is used. Helm returns detailed information about the chart, as shown in the following condensed example output:

```output
apiVersion: v1
appVersion: 5.3.2
description: Web publishing platform for building blogs and websites.
engine: gotpl
home: http://www.wordpress.com/
icon: https://bitnami.com/assets/stacks/wordpress/img/wordpress-stack-220x234.png
keywords:
- wordpress
- cms
- blog
- http
- web
- application
- php
maintainers:
- email: [email protected]
  name: Bitnami
name: wordpress
sources:
- https://github.com/bitnami/bitnami-docker-wordpress
version: 8.1.0
[...]
```

You can also show the information for a chart with the Azure CLI [az acr helm show][az-acr-helm-show] command. Again, the *latest* version of a chart is returned by default. You can append `--version` to list a specific version of a chart, such as *8.1.0*:

```azurecli
az acr helm show --name mycontainerregistry wordpress
```

### <a name="install-a-helm-chart-from-the-repository"></a>Install a Helm chart from the repository

A Helm chart in your repository is installed by specifying the repository name and the chart name. Use the Helm client to install the Wordpress chart:

```console
helm install mycontainerregistry/wordpress
```

> [!TIP]
> If you push to your Azure Container Registry Helm chart repository and later return in a new CLI session, your local Helm client needs an updated authentication token. Use the [az acr helm repo add][az-acr-helm-repo-add] command to obtain a new authentication token.

The following steps are completed during the installation process:

- The Helm client searches the local repository index.
- The corresponding chart is downloaded from the Azure Container Registry repository.
- The chart is deployed using Tiller in your Kubernetes cluster.

As the installation proceeds, follow the instructions in the command output to see the WordPress URLs and credentials. You can also run the `kubectl get pods` command to see the Kubernetes resources deployed through the Helm chart:

```output
NAME                                    READY   STATUS    RESTARTS   AGE
wordpress-1598530621-67c77b6d86-7ldv4   1/1     Running   0          2m48s
wordpress-1598530621-mariadb-0          1/1     Running   0          2m48s
[...]
```

### <a name="delete-a-helm-chart-from-the-repository"></a>Delete a Helm chart from the repository

To delete a chart from the repository, use the [az acr helm delete][az-acr-helm-delete] command. Specify the name of the chart, such as *wordpress*, and the version to delete, such as *8.1.0*:

```azurecli
az acr helm delete --name mycontainerregistry wordpress --version 8.1.0
```

If you want to delete all versions of the named chart, omit the `--version` parameter.

If you run `helm search`, the chart is still returned. Again, the Helm client doesn't automatically update the list of charts available in a repository. To update the Helm client's repository index, again use the [az acr helm repo add][az-acr-helm-repo-add] command:

```azurecli
az acr helm repo add --name mycontainerregistry
```

## <a name="next-steps"></a>Next steps

This article used an existing Helm chart from the public *stable* repository. For more information on how to create and deploy Helm charts, see [Developing Helm charts][develop-helm-charts].

Helm charts can be used as part of the container build process. For more information, see [Use Azure Container Registry Tasks][acr-tasks].

<!-- LINKS - external -->
[helm]: https://helm.sh/
[helm-install]: https://helm.sh/docs/intro/install/
[helm-install-v2]: https://v2.helm.sh/docs/using_helm/#installing-helm
[develop-helm-charts]: https://helm.sh/docs/chart_template_guide/
[semver2]: https://semver.org/
[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/

<!-- LINKS - internal -->
[azure-cli-install]: /cli/azure/install-azure-cli
[aks-quickstart]: ../aks/kubernetes-walkthrough.md
[acr-bestpractices]: container-registry-best-practices.md
[az-configure]: /cli/azure/reference-index#az-configure
[az-acr-login]: /cli/azure/acr#az-acr-login
[az-acr-helm]: /cli/azure/acr/helm
[az-acr-repository]: /cli/azure/acr/repository
[az-acr-repository-show]: /cli/azure/acr/repository#az-acr-repository-show
[az-acr-repository-delete]: /cli/azure/acr/repository#az-acr-repository-delete
[az-acr-repository-show-tags]: /cli/azure/acr/repository#az-acr-repository-show-tags
[az-acr-repository-show-manifests]: /cli/azure/acr/repository#az-acr-repository-show-manifests
[az-acr-helm-repo-add]: /cli/azure/acr/helm/repo#az-acr-helm-repo-add
[az-acr-helm-push]: /cli/azure/acr/helm#az-acr-helm-push
[az-acr-helm-list]: /cli/azure/acr/helm#az-acr-helm-list
[az-acr-helm-show]: /cli/azure/acr/helm#az-acr-helm-show
[az-acr-helm-delete]: /cli/azure/acr/helm#az-acr-helm-delete
[acr-tasks]: container-registry-tasks-overview.md
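The `helm chart save`, `helm chart push`, and `helm chart pull` commands in the Helm 3 section above all operate on fully qualified chart references of the form `<registry>/<repository>:<tag>`. A minimal sketch composing such a reference, using the article's example names:

```shell
# Compose a fully qualified OCI chart reference: <registry>/<repository>:<tag>
registry="mycontainerregistry.azurecr.io"   # registry login server (article's example)
repository="helm/wordpress"                 # target repository
tag="latest"                                # chart tag
ref="${registry}/${repository}:${tag}"
echo "$ref"   # mycontainerregistry.azurecr.io/helm/wordpress:latest
```

Scripting the reference this way keeps the registry name in one place when the same chart is saved, pushed, and pulled in a pipeline.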