---
title: HistoryListing
---
## ININ.PureCloudApi.Model.HistoryListing
## Properties
|Name | Type | Description | Notes|
|------------ | ------------- | ------------- | -------------|
| **Id** | **string** | | [optional] |
| **Complete** | **bool?** | | [optional] |
| **User** | [**User**](User.html) | | [optional] |
| **ErrorMessage** | **string** | | [optional] |
| **ErrorCode** | **string** | | [optional] |
| **ErrorDetails** | [**List<Detail>**](Detail.html) | | [optional] |
| **ErrorMessageParams** | **Dictionary<string, string>** | | [optional] |
| **ActionName** | **string** | Action name | [optional] |
| **ActionStatus** | **string** | Action status | [optional] |
| **Name** | **string** | | [optional] |
| **Description** | **string** | | [optional] |
| **System** | **bool?** | | [optional] |
| **Started** | **DateTime?** | Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss.SSSZ | [optional] |
| **Completed** | **DateTime?** | Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss.SSSZ | [optional] |
{: class="table table-striped"}
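Although this model is consumed from .NET, the `Started`/`Completed` fields are plain ISO-8601 strings, so they parse directly in any language. A minimal JavaScript sketch (the payload below is invented for illustration, not taken from the API):

```javascript
// Hypothetical HistoryListing payload — field names follow the table above,
// but the values are invented for illustration.
const listing = {
  Id: "1234",
  Complete: true,
  ActionName: "PUBLISH",
  ActionStatus: "SUCCESS",
  Started: "2021-07-08T08:31:36.000Z",   // ISO-8601, per the table notes
  Completed: "2021-07-08T08:33:06.000Z",
};

// ISO-8601 strings parse directly with Date.parse.
const durationSeconds =
  (Date.parse(listing.Completed) - Date.parse(listing.Started)) / 1000;
console.log(durationSeconds); // → 90
```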
# deno-plugin-prepare
A library for managing Deno native plugin dependencies
[](https://github.com/manyuanrong/deno-plugin-prepare)
[](https://github.com/manyuanrong/deno-plugin-prepare/actions)
[](https://github.com/manyuanrong/deno-plugin-prepare)
[](https://github.com/denoland/deno)
### Why do you need this module?
Because Deno plugins are not first-class citizens, they cannot be loaded directly with `import` the way `js`, `ts`, and `json` modules can.
Deno plugins are compiled from system-level languages, so a single build artifact cannot target every platform; instead, a plugin is compiled into a separate binary for each platform. These binaries are usually much larger than `js`/`ts` scripts, so they should not all be downloaded; only the binary matching the current platform needs to be fetched dynamically.
### API
#### prepare
The API needs to be provided with some plugin information, including the plugin name and the remote URLs of the binary files for each supported platform. It is similar to an asynchronous version of `Deno.openPlugin`: it automatically downloads the binary file corresponding to the current platform and caches it in the `.deno_plugins` directory of the current working directory.
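Conceptually, `prepare` just picks the URL matching `Deno.build.os` from the configured `urls` map before downloading and caching it. A simplified sketch of that selection step (illustrative only, not the library's actual code):

```javascript
// Illustrative sketch of prepare()'s per-platform URL selection.
// The real implementation also downloads the file and caches it
// under .deno_plugins; that part is omitted here.
function pickPluginUrl(urls, os) {
  const url = urls[os]; // at runtime, os would be Deno.build.os
  if (!url) {
    throw new Error(`unsupported platform: ${os}`);
  }
  return url;
}

const urls = {
  darwin: "libtest_plugin.dylib",
  windows: "test_plugin.dll",
  linux: "libtest_plugin.so",
};
console.log(pickPluginUrl(urls, "linux")); // → "libtest_plugin.so"
```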
### Usage
```ts
import {
prepare,
PerpareOptions,
} from "https://deno.land/x/[email protected]/mod.ts";
const releaseUrl =
"https://github.com/manyuanrong/deno-plugin-prepare/releases/download/plugin_bins";
const pluginOptions: PerpareOptions = {
name: "test_plugin",
// Whether to output log. Optional, default is true
// printLog: true,
// Whether to use locally cached files. Optional, default is true
// checkCache: true,
// Support "http://", "https://", "file://"
urls: {
darwin: `${releaseUrl}/libtest_plugin.dylib`,
windows: `${releaseUrl}/test_plugin.dll`,
linux: `${releaseUrl}/libtest_plugin.so`,
},
};
const rid = await prepare(pluginOptions);
//@ts-ignore
const { testSync } = Deno.core.ops();
//@ts-ignore
const response = Deno.core.dispatch(
testSync,
new Uint8Array([116, 101, 115, 116])
)!;
console.log(response);
Deno.close(rid);
```
### TODOs
- [x] Caching binary files with URL hash (multi-version coexistence)
- [ ] Support downloading and decompressing .GZ files
# Smart Production Line Management System UI (智能产线管理系统UI)
## Overview
Smart Production Line Management System (智能产线管理系统)
## Internal Project Codename
DaVinci (达芬奇)
## Tech Stack (frameworks used)
* Vue 2
* Admin (back-office) page framework
## Development Notes
* Describe clearly how to start development
## Deployment Notes
* Describe clearly how to deploy
# media-app
A responsive movie application where you can search over 1,000,000 movies and TV shows.
## Description
This app is fully designed, coded and structured by me.
I designed it in Adobe XD: [link to the design](https://xd.adobe.com/view/cdd1cfbf-5c4f-4fa1-7c95-dfca2c9d8ea9-bbe0/)
I built the whole website with HTML5 & CSS3 and implemented the main functionalities with the MVC pattern, but then I realised it started to get a bit complicated to manage multiple files (HTML, JS) without a framework.
So I started learning React/Redux.
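The MVC split mentioned above can be sketched in a few lines of plain JavaScript (a simplified illustration of the pattern, not the app's actual code):

```javascript
// Model: holds the search results and notifies subscribers on change.
class MovieModel {
  constructor() { this.results = []; this.listeners = []; }
  subscribe(fn) { this.listeners.push(fn); }
  setResults(results) {
    this.results = results;
    this.listeners.forEach((fn) => fn(results));
  }
}

// View: renders results (to a string here, instead of the DOM).
class MovieView {
  render(results) { return results.map((m) => `* ${m.title}`).join("\n"); }
}

// Controller: wires user input to model updates and re-rendering.
class MovieController {
  constructor(model, view) {
    this.output = "";
    this.model = model;
    model.subscribe((results) => { this.output = view.render(results); });
  }
  search(query, catalog) {
    const q = query.toLowerCase();
    this.model.setResults(
      catalog.filter((m) => m.title.toLowerCase().includes(q))
    );
  }
}

const app = new MovieController(new MovieModel(), new MovieView());
app.search("star", [{ title: "Stardust" }, { title: "Alien" }]);
console.log(app.output); // → "* Stardust"
```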
### Note
- Everything works just fine but some features are not fully finished.
### Built with
- Webpack
- Babel
- HTML5 and SASS
- Vanilla JS
- JavaScript MVC Pattern
| 33.952381 | 204 | 0.760168 | eng_Latn | 0.990281 |
7ac5279e6f05f0cf8d391bd38a945369c8be4bd2 | 426 | md | Markdown | docs/packages/osc/unpack.md | woolgathering/supercolliderjs | 12a4283f738cea4b5ec42bb4a1e7801a415009f1 | [
"MIT"
] | 347 | 2015-01-05T22:40:25.000Z | 2022-03-28T00:16:26.000Z | docs/packages/osc/unpack.md | woolgathering/supercolliderjs | 12a4283f738cea4b5ec42bb4a1e7801a415009f1 | [
"MIT"
] | 79 | 2015-01-20T19:52:53.000Z | 2022-02-26T01:46:47.000Z | docs/packages/osc/unpack.md | woolgathering/supercolliderjs | 12a4283f738cea4b5ec42bb4a1e7801a415009f1 | [
"MIT"
] | 36 | 2015-01-18T08:12:22.000Z | 2022-02-27T22:09:29.000Z | # unpack
Package: <a href="#/packages/osc/api">@supercollider/osc</a>
<div class="entity-box"><h4 id="unpack"><span class="token function">unpack</span>(<span class="nowrap">buffer: <span class="type reference">Buffer</span></span>): <span class="type reference">BundleOrMessage</span></h4><p class="short-text">Unpacks either an OSCMessage or OSCBundle from an OSC Packet
It's a bundle if it starts with `#bundle`</p></div>
| 71 | 302 | 0.723005 | eng_Latn | 0.599991 |
7ac5bd3d0610d9c1cd3c3ee07a9a28ef74bc13d6 | 5,618 | md | Markdown | CHANGELOG.md | DavidTobin/scribe | e316b6f58851c34a17cae0663ec837f5396bd5ef | [
"Apache-2.0"
] | 1 | 2015-03-25T12:40:15.000Z | 2015-03-25T12:40:15.000Z | CHANGELOG.md | DavidTobin/scribe | e316b6f58851c34a17cae0663ec837f5396bd5ef | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | DavidTobin/scribe | e316b6f58851c34a17cae0663ec837f5396bd5ef | [
"Apache-2.0"
] | null | null | null | # 1.3.2
Option handling (defaults and overrides) has now been moved to its own module
# 1.3.1
Adds a null check to selection.js to help with issues when Scribe is being run in ShadowDOM. Thanks [Shaun Netherby](https://github.com/shaunnetherby)
# 1.3.0
Introduces a new time-based undo manager and improvements to allow multiple Scribe instances to share or have a separate undo manager. Thanks to [Abdulrahman Alsaleh](https://github.com/aaalsaleh) for providing the code and spending a lot of time working with us on the tests.
# 1.2.11
Added configuration for removing `scribe.undoManager`
# 1.2.10
Bugfixes for selections that are 'reversed' (i.e. selected from right to left) from [Deains](https://github.com/deains). Thanks
# 1.2.9
Clarifies the use of nodeName in the Command implementation. Thanks [Christopher Liu](https://github.com/christopherliu)
# 1.2.8
Event waterfall / [Event Namespacing](https://github.com/guardian/scribe/pull/337)
# 1.2.7
ShadowDOM fixes for Chrome from [ShaunNetherby](https://github.com/shaunnetherby), thanks
# 1.2.5
IE11 compatiability changes from [Deains](https://github.com/deains), thank you
# 1.2.4
Changes the way that root nodes are detected; the code now uses the element that the Scribe instance is bound to, rather than looking for contenteditable attributes.
# 1.2.3
Changes the EventEmitter to store callbacks in sets to enforce uniqueness and avoid duplicate calls
# 1.2.1
Fixes a typo with the use of options in the default command patches that was breaking Browserify
# 1.2.0
Allows the default command patches to be overridden in the options. This will allow users to customise what gets loaded, to address issues like [the behaviour of the bold patch](https://github.com/guardian/scribe/pull/250) where the default behaviour is not what is required.
# 1.1.0
Introduces [ImmutableJS](https://github.com/facebook/immutable-js) (which is also exposed via scribe.immutable) and starts to convert some the internal workings of Scribe to use immutable data structures.
Adds 55K of needless bloat according to @theefer but I am heartless and laugh at his tears.
# 1.0.0
This is a non-backwards-compatible change, as we are removing the use of Scribe Common. The Node and Element APIs that were available in that project are now exposed via the *scribe* object itself (`scribe.node` and `scribe.element`).
Existing plugins should not break for this release but please re-write your plugins if you use 1.0.0 as a dependency.
* Merge [Scribe Common into Scribe](https://github.com/guardian/scribe/pull/287)
# 0.1.26
* Add preliminary support for Safari 6. [Mutation Observer Safari](https://github.com/guardian/scribe/pull/285)
# 0.1.25
* Switch from using export directly to the string alias version. [YUI Compressor changes](https://github.com/guardian/scribe/pull/279)
# 0.1.24
* Rework mandatory plugin loading [Plugin loading](https://github.com/guardian/scribe/pull/275)
# 0.1.23
* Fix Chrome 38 focus issue [Change check for ff](https://github.com/guardian/scribe/pull/265)
# 0.1.22
* Fix [Make Chrome set the correct focus as well](https://github.com/guardian/scribe/pull/262)
# 0.1.21
* Fix [Don't insert BR in empty non block elements](https://github.com/guardian/scribe/pull/258)
# 0.1.20
* Fix [Don't strip nbsps in the transaction manger](https://github.com/guardian/scribe/pull/257)
# 0.1.19
* Fix [Release v0.1.18 did not succeed](https://github.com/guardian/scribe/pull/253)
# 0.1.18
* Fix [New line detection improved](https://github.com/guardian/scribe/pull/253)
# 0.1.17
* Allow entering multiple consecutive spaces ([c4ba50eb](https://github.com/guardian/scribe/commit/c4ba50ebe457066f06daa5efe98e0a345658ac54) [#232](https://github.com/guardian/scribe/pull/232))
# 0.1.16
* Update [scribe-common includes to include src](https://github.com/guardian/scribe/pull/217)
# 0.1.15
* Fix [Remove erroneous block tags being left behind in Chrome](https://github.com/guardian/scribe/pull/223)
# 0.1.14
* Fix [Ensure selectable containers core plugin doesn't always work as desired](https://github.com/guardian/scribe/pull/214)
# 0.1.13
* Fix [insertHTML command wraps invalid B tags in a P, leaving empty Ps behind](https://github.com/guardian/scribe/pull/212)
# 0.1.12
* Fix [Text is lost when creating a list from P element containing BR elements](https://github.com/guardian/scribe/pull/195)
# 0.1.11
* Fix [`createLink` browser inconsistency](https://github.com/guardian/scribe/commit/4c8b536b3f029e51f54de43a6df9ce07bcf63f3e) ([#190](https://github.com/guardian/scribe/pull/190))
* Bug: Correct object reference ([517b22ab](https://github.com/guardian/scribe/commit/517b22ab88e5dfc231b10497e492a877e6a05668))
# 0.1.10
* Fix redo ([da9c3844](https://github.com/guardian/scribe/commit/da9c3844fc047bc3c0bce559a013ec7fdecfc0b1) [#133](https://github.com/guardian/scribe/pull/133))
# 0.1.9
* Use in-house `EventEmitter` ([5088eb14](https://github.com/guardian/scribe/commit/5088eb14de395cada7b9415b05ae3bb6d775b02a) [#128](https://github.com/guardian/scribe/pull/128))
# 0.1.7
* Prevent mutation observers from failing if an error occurs ([9c843e52](https://github.com/guardian/scribe/commit/9c843e52f7913cff9529ea0950acc0fbb78f7baa))
# 0.1.6
* Fix issue with breaking out of P mode in Firefox
([ddecae91](https://github.com/guardian/scribe/commit/ddecae91bc642f5e4344af6b51c84a4c85cbfe49)
[#97](https://github.com/guardian/scribe/pull/97))
# 0.1.5
* Added `subscript` and `superscript` commands ([cba4ee23](https://github.com/guardian/scribe/commit/cba4ee2362387617bb83281ca23a9a9aa1c36862))
| 41.308824 | 276 | 0.766643 | eng_Latn | 0.871952 |
7ac65f1efca71fbf131e6f01463360a770f051c4 | 10,267 | md | Markdown | docs/MapView.md | iosandroidwebtopdev/react-native-mapbox-gl-master | 61b0e4dc0dc05ffd40dd1d81b1a18287ab1f7e9f | [
"BSD-2-Clause"
] | null | null | null | docs/MapView.md | iosandroidwebtopdev/react-native-mapbox-gl-master | 61b0e4dc0dc05ffd40dd1d81b1a18287ab1f7e9f | [
"BSD-2-Clause"
] | null | null | null | docs/MapView.md | iosandroidwebtopdev/react-native-mapbox-gl-master | 61b0e4dc0dc05ffd40dd1d81b1a18287ab1f7e9f | [
"BSD-2-Clause"
] | 1 | 2021-07-08T08:31:36.000Z | 2021-07-08T08:31:36.000Z | ## <MapboxGL.MapView />
### MapView backed by Mapbox Native GL
### props
| Prop | Type | Default | Required | Description |
| ---- | :--: | :-----: | :------: | :----------: |
| animated | `bool` | `false` | `false` | Animates changes between pitch and bearing |
| centerCoordinate | `arrayOf` | `none` | `false` | Initial center coordinate on map [lng, lat] |
| showUserLocation | `bool` | `none` | `false` | Shows the user's location on the map |
| userTrackingMode | `number` | `MapboxGL.UserTrackingModes.None` | `false` | The mode used to track the user location on the map |
| userLocationVerticalAlignment | `number` | `none` | `false` | The vertical alignment of the user location within the map. This is only enabled while tracking the user's location. |
| contentInset | `union` | `none` | `false` | The distance from the edges of the map view’s frame to the edges of the map view’s logical viewport. |
| heading | `number` | `0` | `false` | Initial heading on map |
| pitch | `number` | `0` | `false` | Initial pitch on map |
| style | `any` | `none` | `false` | Style for wrapping React Native View |
| styleURL | `string` | `MapboxGL.StyleURL.Street` | `false` | Style URL for map |
| zoomLevel | `number` | `16` | `false` | Initial zoom level of map |
| minZoomLevel | `number` | `none` | `false` | Min zoom level of map |
| maxZoomLevel | `number` | `none` | `false` | Max zoom level of map |
| zoomEnabled | `bool` | `none` | `false` | Enable/Disable zoom on the map |
| scrollEnabled | `bool` | `true` | `false` | Enable/Disable scroll on the map |
| pitchEnabled | `bool` | `true` | `false` | Enable/Disable pitch on map |
| rotateEnabled | `bool` | `true` | `false` | Enable/Disable rotation on map |
| attributionEnabled | `bool` | `true` | `false` | The Mapbox terms of service, which governs the use of Mapbox-hosted vector tiles and styles,<br/>[requires](https://www.mapbox.com/help/how-attribution-works/) these copyright notices to accompany any map that features Mapbox-designed styles, OpenStreetMap data, or other Mapbox data such as satellite or terrain data.<br/>If that applies to this map view, do not hide this view or remove any notices from it.<br/><br/>You are additionally [required](https://www.mapbox.com/help/how-mobile-apps-work/#telemetry) to provide users with the option to disable anonymous usage and location sharing (telemetry).<br/>If this view is hidden, you must implement this setting elsewhere in your app. See our website for [Android](https://www.mapbox.com/android-docs/map-sdk/overview/#telemetry-opt-out) and [iOS](https://www.mapbox.com/ios-sdk/#telemetry_opt_out) for implementation details.<br/><br/>Enable/Disable attribution on map. For iOS you need to add MGLMapboxMetricsEnabledSettingShownInApp=YES<br/>to your Info.plist |
| logoEnabled | `bool` | `true` | `false` | Enable/Disable the logo on the map. |
| compassEnabled | `bool` | `none` | `false` | Enable/Disable the compass from appearing on the map |
| textureMode | `bool` | `false` | `false` | Enable/Disable TextureMode instead of SurfaceView |
| onPress | `func` | `none` | `false` | Map press listener, gets called when a user presses the map |
| onLongPress | `func` | `none` | `false` | Map long press listener, gets called when a user long presses the map |
| onRegionWillChange | `func` | `none` | `false` | This event is triggered whenever the currently displayed map region is about to change. |
| onRegionIsChanging | `func` | `none` | `false` | This event is triggered whenever the currently displayed map region is changing. |
| onRegionDidChange | `func` | `none` | `false` | This event is triggered whenever the currently displayed map region finished changing |
| onWillStartLoadingMap | `func` | `none` | `false` | This event is triggered when the map is about to start loading a new map style. |
| onDidFinishLoadingMap | `func` | `none` | `false` | This is triggered when the map has successfully loaded a new map style. |
| onDidFailLoadingMap | `func` | `none` | `false` | This event is triggered when the map has failed to load a new map style. |
| onWillStartRenderingFrame | `func` | `none` | `false` | This event is triggered when the map will start rendering a frame. |
| onDidFinishRenderingFrame | `func` | `none` | `false` | This event is triggered when the map finished rendering a frame. |
| onDidFinishRenderingFrameFully | `func` | `none` | `false` | This event is triggered when the map fully finished rendering a frame. |
| onWillStartRenderingMap | `func` | `none` | `false` | This event is triggered when the map will start rendering the map. |
| onDidFinishRenderingMap | `func` | `none` | `false` | This event is triggered when the map finished rendering the map. |
| onDidFinishRenderingMapFully | `func` | `none` | `false` | This event is triggered when the map fully finished rendering the map. |
| onDidFinishLoadingStyle | `func` | `none` | `false` | This event is triggered when a style has finished loading. |
| onUserTrackingModeChange | `func` | `none` | `false` | This event is triggered when the user's tracking mode is changed. |
### methods
#### getPointInView(coordinate)
Converts a geographic coordinate to a point in the given view’s coordinate system.
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `coordinate` | `Array` | `Yes` | A geographic coordinate, expressed as `[lng, lat]`, to convert into the map view's coordinate system. |
```javascript
const pointInView = await this._map.getPointInView([-37.817070, 144.949901]);
```
#### getVisibleBounds()
The coordinate bounds (ne, sw) visible in the user's viewport.
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
```javascript
const visibleBounds = await this._map.getVisibleBounds();
```
#### queryRenderedFeaturesAtPoint(coordinate[, filter][, layerIDs])
Returns an array of rendered map features that intersect with a given point.
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `coordinate` | `Array` | `Yes` | A point expressed in the map view’s coordinate system. |
| `filter` | `Array` | `No` | A set of strings that correspond to the names of layers defined in the current style. Only the features contained in these layers are included in the returned array. |
| `layerIDs` | `Array` | `No` | A array of layer id's to filter the features by |
```javascript
this._map.queryRenderedFeaturesAtPoint([30, 40], ['==', 'type', 'Point'], ['id1', 'id2'])
```
#### queryRenderedFeaturesInRect(bbox[, filter][, layerIDs])
Returns an array of rendered map features that intersect with the given rectangle,<br/>restricted to the given style layers and filtered by the given predicate.
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `bbox` | `Array` | `Yes` | A rectangle expressed in the map view’s coordinate system. |
| `filter` | `Array` | `No` | A filter expression used to limit the returned features, e.g. `['==', 'type', 'Point']`. |
| `layerIDs` | `Array` | `No` | An array of layer IDs to filter the features by |
```javascript
this._map.queryRenderedFeaturesInRect([30, 40, 20, 10], ['==', 'type', 'Point'], ['id1', 'id2'])
```
#### fitBounds(northEastCoordinates, southWestCoordinates[, padding][, duration])
Map camera transitions to fit provided bounds
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `northEastCoordinates` | `Array` | `Yes` | North east coordinate of bound |
| `southWestCoordinates` | `Array` | `Yes` | South west coordinate of bound |
| `padding` | `Number` | `No` | Camera padding for bound |
| `duration` | `Number` | `No` | Duration of camera animation |
```javascript
this.map.fitBounds([lng, lat], [lng, lat])
this.map.fitBounds([lng, lat], [lng, lat], 20, 1000) // padding for all sides
this.map.fitBounds([lng, lat], [lng, lat], [verticalPadding, horizontalPadding], 1000)
this.map.fitBounds([lng, lat], [lng, lat], [top, right, bottom, left], 1000)
```
#### flyTo(coordinates[, duration])
Map camera will fly to new coordinate
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `coordinates` | `Array` | `Yes` | Coordinates that the map camera will fly to |
| `duration` | `Number` | `No` | Duration of camera animation |
```javascript
this.map.flyTo([lng, lat])
this.map.flyTo([lng, lat], 12000)
```
#### moveTo(coordinates[, duration])
Map camera will move to new coordinate at the same zoom level
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `coordinates` | `Array` | `Yes` | Coordinates that the map camera will move to |
| `duration` | `Number` | `No` | Duration of camera animation |
```javascript
this.map.moveTo([lng, lat], 200) // eases camera to new location based on duration
this.map.moveTo([lng, lat]) // snaps camera to new location without any easing
```
#### zoomTo(zoomLevel[, duration])
Map camera will zoom to specified level
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `zoomLevel` | `Number` | `Yes` | Zoom level that the map camera will animate to |
| `duration` | `Number` | `No` | Duration of camera animation |
```javascript
this.map.zoomTo(16)
this.map.zoomTo(16, 100)
```
#### setCamera(config)
Map camera will perform updates based on provided config. Advanced use only!
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `config` | `Object` | `Yes` | Camera configuration |
```javascript
this.map.setCamera({
centerCoordinate: [lng, lat],
zoom: 16,
duration: 2000,
})
this.map.setCamera({
stops: [
{ pitch: 45, duration: 200 },
{ heading: 180, duration: 300 },
]
})
```
#### takeSnap(writeToDisk)
Takes snapshot of map with current tiles and returns a URI to the image
##### arguments
| Name | Type | Required | Description |
| ---- | :--: | :------: | :----------: |
| `writeToDisk` | `Boolean` | `Yes` | If true will create a temp file, otherwise it is in base64 |
| 45.030702 | 1,069 | 0.666504 | eng_Latn | 0.956443 |
7ac661a023876de206e1593c97105ce746b79b1c | 2,903 | md | Markdown | package/src/DataTableFilter/DataTableFilter.md | hrhosni/catalyst | c5355ab679692b4b496f82f39c1a4b87dddcc1db | [
"Apache-2.0"
] | 10 | 2019-07-13T04:24:09.000Z | 2021-11-04T16:47:18.000Z | package/src/DataTableFilter/DataTableFilter.md | hrhosni/catalyst | c5355ab679692b4b496f82f39c1a4b87dddcc1db | [
"Apache-2.0"
] | 230 | 2019-07-12T21:56:39.000Z | 2022-03-03T22:47:34.000Z | package/src/DataTableFilter/DataTableFilter.md | hrhosni/catalyst | c5355ab679692b4b496f82f39c1a4b87dddcc1db | [
"Apache-2.0"
] | 8 | 2020-02-28T04:08:06.000Z | 2021-11-04T16:47:07.000Z | ### Overview
The DataTableFilter provides a component for displaying a set of filters as a dropdown, or in cards.
### Usage
#### Types
##### Filter dropdown
```jsx
const options = [{
label: "New",
value: "new"
},{
label: "Processing",
value: "processing"
},{
label: "Completed",
value: "completed"
},{
label: "Canceled",
value: "canceled"
}];
<DataTableFilter
options={options}
title="Order Status"
/>
```
##### Card container
```jsx
const options = [{
label: "New",
value: "new"
},{
label: "Processing",
value: "processing"
},{
label: "Completed",
value: "completed"
},{
label: "Canceled",
value: "canceled"
}];
<>
<DataTableFilter
options={options}
title="Order Status"
container="card"
value={"canceled"}
/>
<DataTableFilter
options={options}
title="Payment Status"
container="card"
value={"canceled"}
/>
<DataTableFilter
options={options}
title="Fulfillment Status"
container="card"
value={"canceled"}
/>
</>
```
##### Filters in a button group
```jsx
import { ButtonGroup } from "@material-ui/core";
const options = [{
label: "New",
value: "new"
},{
label: "Processing",
value: "processing"
},{
label: "Completed",
value: "completed"
},{
label: "Canceled",
value: "canceled"
}];
<ButtonGroup>
<DataTableFilter
options={options}
title="Order Status"
value={"canceled"}
/>
<DataTableFilter
options={options}
title="Payment Status"
value={"canceled"}
/>
<DataTableFilter
isMultiSelect
onSelect={(values) => console.log(values)}
options={options}
title="Fulfillment Status"
value={"canceled"}
/>
</ButtonGroup>
```
##### Filters in a side drawer
```jsx
import { useState } from "react";
import { Drawer } from "@material-ui/core";
import Button from "../Button";
const options = [{
label: "New",
value: "new"
},{
label: "Processing",
value: "processing"
},{
label: "Completed",
value: "completed"
},{
label: "Canceled",
value: "canceled"
}];
function DrawerExample() {
const [isOpen, setOpen] = useState(false);
return (
<>
<Button
color="primary"
onClick={() => setOpen(true)}
container="outlined"
>
Open Drawer
</Button>
<Drawer
anchor="right"
open={isOpen}
onClose={() => setOpen(false)}
>
<DataTableFilter
options={options}
title="Order Status"
container="card"
value={"canceled"}
/>
<DataTableFilter
options={options}
title="Payment Status"
container="card"
value={"canceled"}
/>
<DataTableFilter
options={options}
title="Fulfillment Status"
container="card"
value={"canceled"}
/>
</Drawer>
</>
)
}
<DrawerExample />
```
(cm-template-coworking-showtell)=
# Show and Tell Call Template
:::{note}
*This HackMD is adapted under a CC-BY license from [_The Turing Way_ collaboration cafe template](https://github.com/alan-turing-institute/the-turing-way/blob/master/book/website/community-handbook/templates/template-coworking-collabcafe.md)*
*A **permanent document** exists in the HackMD: [https://hackmd.io/@environmental-ds/show-tell](https://hackmd.io/@environmental-ds/show-tell) that is regularly updated with the empty template for next event.*
:::
## _The Environmental Data Science_ ⛰ 🌳 🏙️ ❄️ 🔥 🌊 online Show & Tell 🎬 💬
### DATE MONTH YEAR
Thank you for joining the _The Environmental Data Science_'s online Show & Tell!
We're delighted to have you here 🎉
**What?** *The Environmental Data Science is a **community aiming to learn and discuss scientific software practices/developments fostered by AI and data science for a better understanding of our Planet Earth and environmental systems**.
[Show and Tell](https://the-environmental-ds-book.netlify.app/community/coworking/coworking-showtell.html) are **online coworking calls** that engage anyone interested in showcasing and discussing relevant themes in AI and data science to environmental studies*.
*Read more about it here: https://the-environmental-ds-book.netlify.app/community/coworking.html*
**Who?** ***Everyone** interested in reproducible, ethical, and inclusive data science and research for environmental studies is welcome to join the full or any part of The Environmental Data Science project, community, and/or this call.*
**When?** DD Month YYYY, HH:MM BST (link for local time from https://arewemeetingyet.com)
**How?** *Zoom link will be provided 10 minutes before the call*
***All questions, comments and recommendations are welcome!***
### Useful links
* All about [online Show and Tell](https://github.com/alan-turing-institute/environmental-ds-book/blob/master/book/community/coworking/coworking-showtell.md)
### Code of conduct
* [Take a moment to read this](https://github.com/alan-turing-institute/environmental-ds-book/blob/main/CODE_OF_CONDUCT.md)
### Sign up below
*Name + <A fun Icebreaker> + an emoji to represent it ([emoji cheatsheet](https://github.com/ikatyang/emoji-cheat-sheet/blob/master/README.md))*
*(Remember that this is a public document. You can use a pseudonym if you'd prefer.)*
### Conversation Starters
*Advertise and promote your event, or anything exciting you're working on.* ✨
*
*
### Schedule
https://cuckoo.team/environmental-ds
| Duration | Activity |
| ---- | -------- |
| Start | 👋 Welcome, code of conduct review |
| 10 mins | Introductions and goal settings |
| 10 mins | 🎬 Show and Tell 1 |
| 5 mins | 💬 Show and Tell 1 - Q&A |
| 5 mins | ☕️ Break |
| 10 mins | 🎬 Show and Tell 2 |
| 5 mins | 💬 Show and Tell 2 - Q&A |
| 10 mins | Open discussion: celebrations, reflections and future directions |
| 5 mins | 👋 Close |
### Show and Tell proposals
*If you have an idea for a topic you'd like to discuss in a show and tell slot, please add it below and put your name next to it. If you like one of the topics that is already suggested, please add your name next to that one. For more information about show and tell slots see [the description on GitHub](https://github.com/alan-turing-institute/environmental-ds-book/blob/master/book/community/coworking/coworking-showtell.md#show-and-tell-sessions).*
Name(s) / Topic
*
*
### Notes and questions
*
*
### Request for reviews!
*
*
### Feedback at the end of the call
*
*
| 39.411111 | 452 | 0.739498 | eng_Latn | 0.932686 |
7ac72646e03c99c4d0f24b03c7ae90759209169f | 7,228 | md | Markdown | README.md | BPHO-Salk/PSSR | 8cf5ae91a7716b365017441fb57e07d3b9631558 | ["BSD-3-Clause"] | 69 | 2019-08-18T11:33:40.000Z | 2022-03-28T17:47:40.000Z | README.md | Photonics-Precision-Technologies/PSSR | a90b7d208d4369946500a70a6f31c44e3367e4c7 | ["BSD-3-Clause"] | 3 | 2021-03-02T05:48:47.000Z | 2021-06-17T12:51:26.000Z | README.md | Photonics-Precision-Technologies/PSSR | a90b7d208d4369946500a70a6f31c44e3367e4c7 | ["BSD-3-Clause"] | 24 | 2019-09-03T14:27:26.000Z | 2022-03-15T13:47:12.000Z |
# Point-Scanning Super-Resolution (PSSR)
This repository hosts the PyTorch implementation source code for Point-Scanning Super-Resolution (PSSR), a Deep Learning-based framework that faciliates otherwise unattainable resolution, speed and sensitivity of point-scanning imaging systems (e.g. scanning electron or laser scanning confocal microscopes).
BioRxiv Preprint: [Deep Learning-Based Point-Scanning Super-Resolution Imaging](https://www.biorxiv.org/content/10.1101/740548v3)
There is also a [PSSR Tweetorial](https://twitter.com/manorlaboratory/status/1169624396891185152?s=20) that explains the whole development story of PSSR.
**Update**: This work has been published in Nature Methods:
Fang, L., Monroe, F., Novak, S.W. et al. Deep learning-based point-scanning super-resolution imaging. Nat Methods (2021). https://doi.org/10.1038/s41592-021-01080-z
[comment]: <>
<img src="example_imgs/em_test_results.png" width=800 align=center>
[comment]: <![Fluo]()>
- [Overview](#overview)
- [Data Availability](#data-availability)
- [Instruction of Use](#instruction-of-use)
- [Citation](#citation)
- [License](#license)
# Overview
Point-scanning imaging systems (e.g. scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high-resolution cellular and tissue imaging, benefitting from an ability to use arbitrary pixel sizes. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point-scanning systems are difficult to optimize simultaneously. In particular, point-scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution.
Here we show these limitations can be mitigated via the use of Deep Learning-based super-sampling of undersampled images acquired on a point-scanning system, which we termed point-scanning super-resolution (PSSR) imaging. Our proposed PSSR model could restore undersampled images by increasing optical and pixel resolution, and denoising simultaneously. The model training requires no manually acquired image pairs thanks to the "crappification" method we developed. In addition, the multi-frame PSSR approach enabled otherwise impossible high spatiotemporal resolution fluorescence timelapse imaging.
# Data Availability
All data are hosted in 3DEM Dataverse: https://doi.org/10.18738/T8/YLCK5A, which include:
- Main models: pretrained models, training and testing data for major PSSR models, including
- EM (neural tissue imaged on a tSEM)
- Mitotracker (live imaging of cultured U2OS cells on a ZEISS Airyscan 880 confocal)
  - Neuronal mitochondria (live imaging of hippocampal neurons from neonatal rats transfected with mito-dsRed imaged on a ZEISS Airyscan 880 confocal)
- Supporting experiments: data for the supporting experiments, including
  - comparison between PSSR and BM3D denoising for both EM and fluorescence Mitotracker data
- crappifier comparison for both EM and fluorescence Mitotracker data
  - comparison between PSSR, CARE and Rolling Average for fluorescence Mitotracker data
# Instruction of Use
## Run PSSR from Google Colaboratory (Colab)
The Google Colaboratory (Colab) version of PSSR is now ready. ([PSSR - Colab for programmers](https://github.com/BPHO-Salk/PSSR/tree/master/colab_notebooks/))
Another PSSR Colab version oriented toward non-programmers is also going to be released soon. ([PSSR - Colab for non-programmers (In progress)](https://github.com/BPHO-Salk/PSSR/tree/master/colab_notebooks/))
Very few libraries need to be installed manually for the PSSR Colab version - most dependencies are preinstalled in the Colab environment. This makes the environment set-up step painless and you will be able to quickly get straight to the real fun. However, it also means some of the libraries you will be using are more recent than the ones used for the manuscript, which can be accessed by following the instructions below.
## Run PSSR from the command line
### System Requirements
#### OS Requirements
This software is only supported for Linux, and has been tested on Ubuntu 18.04.
#### Python Dependencies
PSSR is mainly written with Fastai, and final models used in the manuscript were generated using fast.ai v1.0.55 library.
### Environment Set-up
- Install Anaconda ([Learn more](https://docs.anaconda.com/anaconda/install/))
- Download the repo from Github:
`git clone https://github.com/BPHO-Salk/PSSR.git`
- Create a conda environment for pssr:
`conda create --name pssr python=3.7`
- Activate the conda environment:
`conda activate pssr`
- Install PSSR dependencies:
`pip install fastai==1.0.55 tifffile libtiff czifile scikit-image`
`pip uninstall torch torchvision` (you may need to run this multiple times)
`conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.0 -c pytorch`
### Scenario 1: Inference using our pretrained models
Please refer to the handy [Inference_PSSR_for_EM.ipynb](https://github.com/BPHO-Salk/PSSR/blob/master/Inference_PSSR_for_EM.ipynb). You need to modify the path for the test images accordingly. Note the input pixel size needs to be 8 nm.
### Scenario 2: Train your own data
Step 1: Understand your datasource (see details in [gen_sample_info.py](https://github.com/BPHO-Salk/PSSR/tree/master/gen_sample_info.py))
- Example: `python gen_sample_info.py --only mitotracker --out live_mitotracker.csv datasources/live`
Step 2: Generate your training data (see details in [tile_from_info.py](https://github.com/BPHO-Salk/PSSR/tree/master/tile_from_info.py))
- Singleframe example: `python tile_from_info.py --out datasets --info live_mitotracker.csv --n_train 80 --n_valid 20 --n_frames 1 --lr_type s --tile 512 --only mitotracker --crap_func 'new_crap_AG_SP'`
- Multiframe example: `python tile_from_info.py --out datasets --info live_mitotracker.csv --n_train 80 --n_valid 20 --n_frames 5 --lr_type t --tile 512 --only mitotracker --crap_func 'new_crap_AG_SP'`
Step 3: Train your PSSR model (see details in [train.py](https://github.com/BPHO-Salk/PSSR/tree/master/train.py))
- Singleframe example: `python -m fastai.launch train.py --bs 8 --lr 4e-4 --size 512 --tile_sz 512 --datasetname s_1_live_mitotracker_new_crap_AG_SP --cycles 50 --save_name mito_AG_SP --lr_type s --n_frames 1`
- Multiframe example: `python -m fastai.launch train.py --bs 8 --lr 4e-4 --size 512 --tile_sz 512 --datasetname t_5_live_mitotracker_new_crap_AG_SP --cycles 50 --save_name mito_AG_SP --lr_type t --n_frames 5`
Step 4: Run inference on test data (see details in [image_gen.py](https://github.com/BPHO-Salk/PSSR/tree/master/image_gen.py))
- Singleframe example: `python image_gen.py stats/LR stats/LR-PSSR --models s_1_mito_AG_SP_e50_512 --use_tiles --gpu 0`
- Multiframe example: `python image_gen.py stats/LR stats/LR-PSSR --models t_5_mito_AG_SP_e50_512 --use_tiles --gpu 0`
# Citation
Please cite our work if you find it useful for your research:
Fang, L., Monroe, F., Novak, S.W. et al. Deep learning-based point-scanning super-resolution imaging. Nat Methods (2021). https://doi.org/10.1038/s41592-021-01080-z
# License
Licensed under BSD 3-Clause License.
| 62.852174 | 601 | 0.779054 | eng_Latn | 0.921246 |
7ac74038c65b5409d8b6e694d98284a6b374fa42 | 2,096 | md | Markdown | docs/collections/_write-ups/cyberseclabs/Engine-windows.md | clobee/cl00b3e.github.io | 5b590f68451568837a60152bb7c33d37b6371e53 | [
"MIT"
] | null | null | null | docs/collections/_write-ups/cyberseclabs/Engine-windows.md | clobee/cl00b3e.github.io | 5b590f68451568837a60152bb7c33d37b6371e53 | [
"MIT"
] | null | null | null | docs/collections/_write-ups/cyberseclabs/Engine-windows.md | clobee/cl00b3e.github.io | 5b590f68451568837a60152bb7c33d37b6371e53 | [
"MIT"
] | null | null | null | ---
name: Cyberseclabs - Engine - Walkthrough
category: cyberseclabs
excerpt: BlogEngine with default connection information
slug: Engine
layout: writeup
tags: windows blog blogengine blog-engine-3.3.6.0 autologon
date: "2022-03-28 11:42:40"
---

## TL;DR
- The enumeration revealed a /blog based on BlogEngine
- We found an exploit that gives us shell access as a service user
- WinPEAS reveals the administrator autologon information
## NETWORK

A deeper nmap scan on port 80 gave us more information:
```bash
nmap -vv --reason -Pn -T4 -sV -p 80 "--script=banner,(http* or ssl*) and not (brute or broadcast or dos or external or http-slowloris* or fuzzer)" 172.31.1.16
```

## ENUMERATION

We have found a blog at /blog

It looks like it is BlogEngine.

Using admin / admin we get access to the admin panel

We are dealing with BlogEngine 3.3.6.0.

## FOOTHOLD
Searching for exploits with searchsploit, we found a good candidate.


Running the script
```bash
python 47010.py -t 172.31.1.16/blog/ -u admin -p admin -l 10.10.0.3:443
```
gives us a reverse shell

## PRIV ESCALATION
Looking into the results of our enumeration scan, we found the following:

Using these credentials, we get access to the administrator account with evil-winrm.

---
## CAPTURE FLAGS


---
| 24.372093 | 158 | 0.762405 | eng_Latn | 0.803229 |
7ac74127af37242e79a0631b7cd4125b17a41a54 | 39 | md | Markdown | README.md | AleksejMelman/win-cppcheck | 0116f4549e7fc8332fcd9cd8901f638c285a4cf2 | ["MIT"] | null | null | null | README.md | AleksejMelman/win-cppcheck | 0116f4549e7fc8332fcd9cd8901f638c285a4cf2 | ["MIT"] | null | null | null | README.md | AleksejMelman/win-cppcheck | 0116f4549e7fc8332fcd9cd8901f638c285a4cf2 | ["MIT"] | null | null | null |
File can be found here [link](LICENSE)
| 19.5 | 38 | 0.74359 | eng_Latn | 0.995274 |
7ac76dd584c2d745357adca503956a5023722ca1 | 5,728 | md | Markdown | Docs/Set-AtwsInstalledProductCategoryUdfAssociation.md | ecitsolutions/Autotask | c6efbb8fa98458ba30840b659947bf1be846e48e | ["MIT"] | 33 | 2019-12-28T06:19:22.000Z | 2022-02-15T21:59:13.000Z | Docs/Set-AtwsInstalledProductCategoryUdfAssociation.md | officecenter/Autotask | c6efbb8fa98458ba30840b659947bf1be846e48e | ["MIT"] | 58 | 2018-02-02T13:30:57.000Z | 2019-12-12T08:50:16.000Z | Docs/Set-AtwsInstalledProductCategoryUdfAssociation.md | officecenter/Autotask | c6efbb8fa98458ba30840b659947bf1be846e48e | ["MIT"] | 14 | 2018-02-21T19:55:00.000Z | 2019-07-08T13:40:39.000Z |
---
external help file: Autotask-help.xml
Module Name: Autotask
online version:
schema: 2.0.0
---
# Set-AtwsInstalledProductCategoryUdfAssociation
## SYNOPSIS
This function sets parameters on the InstalledProductCategoryUdfAssociation specified by the -InputObject parameter or pipeline through the use of the Autotask Web Services API.
Any property of the InstalledProductCategoryUdfAssociation that is not marked as READ ONLY by Autotask can be specified with a parameter.
You can specify multiple parameters.
## SYNTAX
### InputObject (Default)
```
Set-AtwsInstalledProductCategoryUdfAssociation [-WhatIf] [-Confirm] [<CommonParameters>]
```
### Input_Object
```
Set-AtwsInstalledProductCategoryUdfAssociation [-InputObject <InstalledProductCategoryUdfAssociation[]>]
[-PassThru] [-IsRequired <Boolean>] [-WhatIf] [-Confirm] [<CommonParameters>]
```
### By_Id
```
Set-AtwsInstalledProductCategoryUdfAssociation [-Id <Int64[]>] [-IsRequired <Boolean>] [-WhatIf] [-Confirm]
[<CommonParameters>]
```
### By_parameters
```
Set-AtwsInstalledProductCategoryUdfAssociation [-PassThru] -IsRequired <Boolean> [-WhatIf] [-Confirm]
[<CommonParameters>]
```
## DESCRIPTION
This function takes one or more objects of type \[Autotask.InstalledProductCategoryUdfAssociation\] as input.
You can pipe the objects to the function or pass them using the -InputObject parameter.
You specify the property you want to set and the value you want to set it to using parameters.
The function modifies all objects and updates the online data through the Autotask Web Services API.
The function supports all properties of an \[Autotask.InstalledProductCategoryUdfAssociation\] that can be updated through the Web Services API.
The function uses PowerShell parameter validation and supports IntelliSense for selecting picklist values.
Entities that have fields that refer to the base entity of this CmdLet:
## EXAMPLES
### EXAMPLE 1
```
Set-AtwsInstalledProductCategoryUdfAssociation -InputObject $InstalledProductCategoryUdfAssociation [-ParameterName] [Parameter value]
Passes one or more [Autotask.InstalledProductCategoryUdfAssociation] object(s) as a variable to the function and sets the property by name 'ParameterName' on ALL the objects before they are passed to the Autotask Web Service API and updated.
```
### EXAMPLE 2
```
Same as the first example, but now the objects are passed to the function through the pipeline, not passed as a parameter. The end result is identical.
```
### EXAMPLE 3
```
Gets the instance with Id 0 directly from the Web Services API, modifies a parameter and updates Autotask. This approach works with all valid parameters for the Get function.
```
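A sketch of the command this example describes (a hedged illustration rather than text from the module's own help; the Id value and property are placeholders, and it assumes the Autotask module is loaded and connected):

```powershell
# Get the instance with Id 0, modify a property, then update Autotask
Get-AtwsInstalledProductCategoryUdfAssociation -Id 0 |
    Set-AtwsInstalledProductCategoryUdfAssociation -IsRequired $true
```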
### EXAMPLE 4
```
Gets multiple instances by Id, modifies them all and updates Autotask.
```
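The multi-Id variant could look like this (a hedged sketch; the Ids are illustrative):

```powershell
# Modify several instances at once and update Autotask
Get-AtwsInstalledProductCategoryUdfAssociation -Id 0,1,2 |
    Set-AtwsInstalledProductCategoryUdfAssociation -IsRequired $false
```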
### EXAMPLE 5
```
-PassThru
Gets multiple instances by Id, modifies them all, updates Autotask and returns the updated objects.
```
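With -PassThru, the updated objects come back down the pipeline (a hedged sketch; the Ids are illustrative):

```powershell
# Capture the updated objects returned by -PassThru
$Updated = Get-AtwsInstalledProductCategoryUdfAssociation -Id 0,1,2 |
    Set-AtwsInstalledProductCategoryUdfAssociation -IsRequired $true -PassThru
```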
## PARAMETERS
### -InputObject
An object that will be modified by any parameters and updated in Autotask
```yaml
Type: InstalledProductCategoryUdfAssociation[]
Parameter Sets: Input_Object
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByValue)
Accept wildcard characters: False
```
### -Id
The object.ids of objects that should be modified by any parameters and updated in Autotask
```yaml
Type: Int64[]
Parameter Sets: By_Id
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -PassThru
Return any updated objects through the pipeline
```yaml
Type: SwitchParameter
Parameter Sets: Input_Object, By_parameters
Aliases:
Required: False
Position: Named
Default value: False
Accept pipeline input: False
Accept wildcard characters: False
```
### -IsRequired
Is Required
```yaml
Type: Boolean
Parameter Sets: Input_Object, By_Id
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
```yaml
Type: Boolean
Parameter Sets: By_parameters
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -WhatIf
Shows what would happen if the cmdlet runs.
The cmdlet is not run.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: wi
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Confirm
Prompts you for confirmation before running the cmdlet.
```yaml
Type: SwitchParameter
Parameter Sets: (All)
Aliases: cf
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### [Autotask.InstalledProductCategoryUdfAssociation[]]. This function takes one or more objects as input. Pipeline is supported.
## OUTPUTS
### Nothing or [Autotask.InstalledProductCategoryUdfAssociation]. This function optionally returns the updated objects if you use the -PassThru parameter.
## NOTES
Related commands:
New-AtwsInstalledProductCategoryUdfAssociation
Remove-AtwsInstalledProductCategoryUdfAssociation
Get-AtwsInstalledProductCategoryUdfAssociation
## RELATED LINKS
| 28.64 | 316 | 0.760649 | eng_Latn | 0.833663 |
7ac76ee1cbd27b05b29de8b1acf90d35a8958a88 | 4,766 | md | Markdown | docs/access/desktop-database-reference/parameters-collection-dao.md | changeworld/office-developer-client-docs.zh-CN | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | ["CC-BY-4.0", "MIT"] | null | null | null | docs/access/desktop-database-reference/parameters-collection-dao.md | changeworld/office-developer-client-docs.zh-CN | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | ["CC-BY-4.0", "MIT"] | null | null | null | docs/access/desktop-database-reference/parameters-collection-dao.md | changeworld/office-developer-client-docs.zh-CN | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: Parameters 集合 (DAO)
TOCTitle: Parameters Collection
ms:assetid: 52fc1ce4-7b3e-152d-7b6a-9c32a6470147
ms:mtpsurl: https://msdn.microsoft.com/library/Ff193967(v=office.15)
ms:contentKeyID: 48544862
ms.date: 09/18/2015
mtps_version: v=office.15
ms.openlocfilehash: 0263c87ea12384fb3e1fe722c00cd58f4d7f45e0
ms.sourcegitcommit: d7248f803002b31cf7fc561b03530199a9b0a8fd
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 11/02/2018
ms.locfileid: "25924475"
---
# <a name="parameters-collection-dao"></a>Parameters Collection (DAO)
**Applies to**: Access 2013, Office 2013
The **Parameters** collection contains all of the **Parameter** objects of a **QueryDef** object.
## <a name="remarks"></a>Remarks
The **Parameters** collection provides information only about existing parameters. You cannot append objects to, or delete objects from, the **Parameters** collection.
## <a name="example"></a>Example
The following example demonstrates the **Parameter** object and the **Parameters** collection by creating a temporary **QueryDef** and retrieving data based on changes made to the **QueryDef** object's **Parameters**. The ParametersChange procedure is required for this procedure to run.
```vb
Sub ParameterX()
Dim dbsNorthwind As Database
Dim qdfReport As QueryDef
Dim prmBegin As Parameter
Dim prmEnd As Parameter
Set dbsNorthwind = OpenDatabase("Northwind.mdb")
' Create temporary QueryDef object with two
' parameters.
Set qdfReport = dbsNorthwind.CreateQueryDef("", _
"PARAMETERS dteBegin DateTime, dteEnd DateTime; " & _
"SELECT EmployeeID, COUNT(OrderID) AS NumOrders " & _
"FROM Orders WHERE ShippedDate BETWEEN " & _
"[dteBegin] AND [dteEnd] GROUP BY EmployeeID " & _
"ORDER BY EmployeeID")
Set prmBegin = qdfReport.Parameters!dteBegin
Set prmEnd = qdfReport.Parameters!dteEnd
' Print report using specified parameter values.
ParametersChange qdfReport, prmBegin, #1/1/95#, _
prmEnd, #6/30/95#
ParametersChange qdfReport, prmBegin, #7/1/95#, _
prmEnd, #12/31/95#
dbsNorthwind.Close
End Sub
Sub ParametersChange(qdfTemp As QueryDef, _
prmFirst As Parameter, dteFirst As Date, _
prmLast As Parameter, dteLast As Date)
' Report function for ParameterX.
Dim rstTemp As Recordset
Dim fldLoop As Field
' Set parameter values and open recordset from
' temporary QueryDef object.
prmFirst = dteFirst
prmLast = dteLast
Set rstTemp = _
qdfTemp.OpenRecordset(dbOpenForwardOnly)
Debug.Print "Period " & dteFirst & " to " & dteLast
' Enumerate recordset.
Do While Not rstTemp.EOF
' Enumerate Fields collection of recordset.
For Each fldLoop In rstTemp.Fields
Debug.Print " - " & fldLoop.Name & " = " & fldLoop;
Next fldLoop
Debug.Print
rstTemp.MoveNext
Loop
rstTemp.Close
End Sub
```
<br/>
The following example shows how to create a parameter query. A query named **myQuery** is created with two parameters, named Param1 and Param2. To do this, the query's SQL property is set to a Structured Query Language (SQL) statement that defines the parameters.
**Sample code provided by**: [Microsoft Access 2010 Programmer's Reference](https://www.amazon.com/Microsoft-Access-2010-Programmers-Reference/dp/8126528125).
```vb
Sub CreateQueryWithParameters()
Dim dbs As DAO.Database
Dim qdf As DAO.QueryDef
Dim strSQL As String
Set dbs = CurrentDb
Set qdf = dbs.CreateQueryDef("myQuery")
Application.RefreshDatabaseWindow
strSQL = "PARAMETERS Param1 TEXT, Param2 INT; "
strSQL = strSQL & "SELECT * FROM [Table1] "
strSQL = strSQL & "WHERE [Field1] = [Param1] AND [Field2] = [Param2];"
qdf.SQL = strSQL
qdf.Close
Set qdf = Nothing
Set dbs = Nothing
End Sub
```
<br/>
The following example shows how to execute a parameter query. The Parameters collection is used to set the Organization parameter of the myActionQuery query before the query is executed.
```vb
Public Sub ExecParameterQuery()
Dim dbs As DAO.Database
Dim qdf As DAO.QueryDef
Set dbs = CurrentDb
Set qdf = dbs.QueryDefs("myActionQuery")
'Set the value of the QueryDef's parameter
qdf.Parameters("Organization").Value = "Microsoft"
'Execute the query
qdf.Execute dbFailOnError
'Clean up
qdf.Close
Set qdf = Nothing
Set dbs = Nothing
End Sub
```
<br/>
The following example shows how to open a Recordset based on a parameter query.
```vb
Dim dbs As DAO.Database
Dim qdf As DAO.QueryDef
Dim rst As DAO.Recordset
Set dbs = CurrentDb
'Get the parameter query
Set qdf = dbs.QueryDefs("qryMyParameterQuery")
'Supply the parameter value
qdf.Parameters("EnterStartDate") = Date
qdf.Parameters("EnterEndDate") = Date + 7
'Open a Recordset based on the parameter query
Set rst = qdf.OpenRecordset()
```
| 27.871345 | 147 | 0.631767 | yue_Hant | 0.716082 |
7ac7b45ac3acd3548f72175bf8ce0b5d86022b89 | 2,192 | md | Markdown | 101-site-to-site-vpn-create/README.md | selvasingh/azure-quickstart-templates | a8304a3a6becc1a87a568be5e6d19934fe2c53b2 | ["MIT"] | 5 | 2018-06-05T14:38:32.000Z | 2021-08-22T18:03:02.000Z | 101-site-to-site-vpn-create/README.md | selvasingh/azure-quickstart-templates | a8304a3a6becc1a87a568be5e6d19934fe2c53b2 | ["MIT"] | 2 | 2022-03-08T21:12:46.000Z | 2022-03-08T21:12:52.000Z | 101-site-to-site-vpn-create/README.md | selvasingh/azure-quickstart-templates | a8304a3a6becc1a87a568be5e6d19934fe2c53b2 | ["MIT"] | 2 | 2021-02-20T10:10:44.000Z | 2022-02-02T03:42:11.000Z |
# Site to Site VPN Connection






[]("https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-site-to-site-vpn-create%2Fazuredeploy.json") []("http://armviz.io/#/?load=https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-site-to-site-vpn-create%2Fazuredeploy.json")
This template will create a Virtual Network, a subnet for the network, a Virtual Network Gateway and a Connection to your network outside of Azure (defined as your `local` network). This could be anything such as your on-premises network and can even be used with other cloud networks such as [AWS Virtual Private Cloud](https://github.com/sedouard/aws-vpc-to-azure-vnet).
Please note that you must have a Public IP for your other network's VPN gateway, and it cannot be behind a NAT.
Although only the parameters in [azuredeploy.parameters.json](./azuredeploy.parameters.json) are necessary, you can override the defaults of any of the template parameters.
| 104.380952 | 665 | 0.809307 | eng_Latn | 0.358523 |
7ac8d735154f8d92a0fa0faec61cbc19c31c437a | 2,584 | md | Markdown | README.md | lifenautjoe/stop-using-facebook | f9ad659c4bf3bc2d50b7cabc7843f64b6c42c0f2 | ["MIT"] | 30 | 2019-05-09T22:55:40.000Z | 2021-03-06T12:45:02.000Z | README.md | lifenautjoe/stop-using-facebook | f9ad659c4bf3bc2d50b7cabc7843f64b6c42c0f2 | ["MIT"] | 28 | 2019-05-09T19:35:03.000Z | 2022-02-17T22:38:54.000Z | README.md | lifenautjoe/stop-using-facebook | f9ad659c4bf3bc2d50b7cabc7843f64b6c42c0f2 | ["MIT"] | 9 | 2019-05-10T14:45:02.000Z | 2021-03-06T12:35:30.000Z |

<img alt="StopUsingFacebook.co Logo" src="/logo-transparent.png" width="200">
[](https://circleci.com/gh/OpenbookOrg/openbook-org-www) [](https://github.com/carloscuesta/gitmoji)
The code for [stopusingfacebook.co](https://www.stopusingfacebook.co).
## Table of contents
- [Requirements](#requirements)
- [Project overview](#project-overview)
- [Contributing](#contributing)
+ [Code of Conduct](#code-of-conduct)
+ [License](#license)
+ [Other issues](#other-issues)
+ [Git commit message conventions](#git-commit-message-conventions)
- [Getting started](#getting-started)
## Requirements
* [Node](https://nodejs.org) > 7.6
## Project overview
The website is a [Vue 2.x](https://vuejs.org/) application.
Other relevant technologies used are
* [Sass](https://sass-lang.com/) for stylesheets
* [Bulma](https://bulma.io/documentation/overview/start/) for kickstarting the styles/layout.
* [Buefy](https://buefy.github.io/#/) for providing the logic to the Bulma components.
* [Webpack 4](https://webpack.js.org/) for bundling everything together
## Contributing
There are many different ways to contribute to the website's development. Just find the one that best fits your skills and open an issue/pull request in the repository.
Examples of contributions we love include:
- **Code patches**
- **Bug reports**
- **Patch reviews**
- **Translations**
- **UI enhancements**
#### Code of Conduct
Please read and follow our [Code of Conduct](/CODE_OF_CONDUCT.md).
#### License
Every contribution accepted is licensed under [MIT](https://opensource.org/licenses/MIT) or any later version.
You must be careful not to include any code that cannot be licensed under this license.
Please read carefully [our license](/LICENSE) and ask us if you have any questions.
#### Git commit message conventions
Help us keep the repository history consistent 🙏!
We use [gitmoji](https://gitmoji.carloscuesta.me/) as our git message convention.
If you're using git in your command line, you can download the handy tool [gitmoji-cli](https://github.com/carloscuesta/gitmoji-cli).
## Getting started
Clone the repository
```sh
git clone [email protected]:lifenautjoe/stop-using-facebook.git
```
Install the dependencies
```bash
$ npm install
```
Serve with hot reload at localhost:3000
```bash
$ npm run serve
```
Build for production
```bash
npm run build
```
<br>
#### Happy coding 🎉!
| 28.711111 | 274 | 0.734907 | eng_Latn | 0.84455 |
7ac8e3838f9db7cf8ae603fad9163a93f71d9eb7 | 6,856 | md | Markdown | _posts/2021-09-05-ide-choice.md | intaxwashere/intaxwashere.github.io | 0987f9b0eff450bc352ffe772064cdaccb4af7d4 | ["MIT"] | null | null | null | _posts/2021-09-05-ide-choice.md | intaxwashere/intaxwashere.github.io | 0987f9b0eff450bc352ffe772064cdaccb4af7d4 | ["MIT"] | null | null | null | _posts/2021-09-05-ide-choice.md | intaxwashere/intaxwashere.github.io | 0987f9b0eff450bc352ffe772064cdaccb4af7d4 | ["MIT"] | null | null | null |
---
layout: post
title: IDE Choice for Unreal Engine & Why Visual Studio is Not Enough Without Plugins
---
Hello, this post covers my observations in Unreal Slackers about IDE preferences. This is a post full of my personal opinions. You're probably here because I or someone else sent you this link during an IDE talk, or you asked 'Why is Visual Studio IntelliSense so slow?' or 'Is Rider good?' etc. To avoid repeating the same conversation probably 10 times each day, I decided to create this page to share my thoughts about this topic.
# Is Intellisense slow?
Intellisense does not work well with large C++ projects, especially when code generation is involved. When you open your Unreal Engine project, Visual Studio tries to understand UE's macros and generated code, but it fails and goes crazy. This is very normal and expected, though. Most people (even Epic Games engineers) use Visual Studio plugins or Rider for Unreal Engine.
# But I want to use Visual Studio!
Well, you have several choices then.
## If you insist on using Visual Studio without any plugins
This is possible. But you won't be able to achieve the kind of comfort and flexibility that the other options will give you, even if you have a gigantic amount of RAM and CPU power.
- You will not be able to refactor your code as easily as with the other options.
- You will not be able to navigate across your code as easily as with the other options.
- Code correction will not be as fancy as in the other options.
- Syntax highlighting is not as good as in any of the other options, *though this is subjectively a personal preference.*
- Code inspection will not increase your quality of life during development.
## Alright, what can I do?
Read GlassBeaver's awesome guide to enhance Visual Studio for Unreal Engine: https://horugame.com/speed-up-intellisense-for-c-in-visual-studio/
# Try Visual Assist X Plugin
Visual Assist X (VAX) is a very handy plugin for writing C/C++/C# in Visual Studio, and it supports Unreal Engine projects specifically. Many C++ programmers with past experience use VAX and are ultimately used to it. VAX is also commonly used with other engines, since it does not target only Unreal Engine.
[Discover more about VAX](https://www.wholetomato.com/)
# Try ReSharper C++ Plugin
ReSharper C++ is another great plugin for Visual Studio, developed by JetBrains. It also has specific support for Unreal Engine. At the time of writing, ReSharper eats more RAM than VAX and slows Visual Studio down, but if you have *more* than 16 GB of RAM, you shouldn't even notice it. ReSharper overrides the default keybindings of Visual Studio and provides some extra UI goodies to increase the quality of development time. ReSharper is also commonly used around the community and well versed for Unreal Engine.
[Discover more about ReSharper](https://www.jetbrains.com/resharper-cpp/)
# Try Rider for Unreal Engine
Ohh, here we are. Rider for Unreal Engine... Rider for Unreal Engine is a completely separate IDE based on IntelliJ IDEA. Like the Visual Studio plugins, it is also used by Epic engineers, and *it is more user friendly* compared to Visual Studio. Rider also has some additional features: it can include *modules*, add core redirects automatically, and can show information about BP usage of your member variables. They also currently have an official representative in Unreal Slackers (Hi BattleToad!) who takes constant feedback from the community to increase the quality of their product.
Discover more about Rider for Unreal Engine: https://www.jetbrains.com/lp/rider-unreal/
# Hey you didn't explain Visual Studio Co-...
Don't use Visual Studio Code or any text editor. Or do, but you won't even get what you get with default Visual Studio. You also will not get a debugger.
## If you insist on using a text editor
That is also possible. Here is one guide for *Sublime Text* by Alex Forsythe: https://youtu.be/94FvzO1HVzY
Sadly I don't know much about other text editors.
# Fun facts and FAQ
## Why/what is 'Rider for kid'?
Some random guy came into the discussion one day and said 'Rider made for kid', which is a sentence that makes no sense grammatically. And we're having fun with it. Be proud of being a 'kid' and don't let 'boomers' (VS users) ruin your day.
## VAX or ReSharper?
*Personal opinion*: I would use ReSharper. But whenever you share an opinion, every other random guy drops into the conversation with 'hEy iT dEpeNdS' and such, and they're right. I did not even use both enough to say much either. So what you can do is this: both plugins have free trials, so try them and see for yourself. Just avoid ReSharper if you don't have enough RAM (16 GB or less, as said above), that's all I can say.
## I heard Rider's debugger is terrible
*It was.* It was very bad and terrible. **But it is not anymore.** The Rider team developed a new debugger with a dedicated debugger team, and it is working nicely now. Most people say this because they remember the 6-month-old version of Rider or just like trolling.
## Why the hell don't the developers of Unreal Engine do anything to fix the errors of Visual Studio
Well, why would they? They are engine developers, not IDE or extension developers. Most of them use plugins or Rider too. Sadly, game development is an expensive thing. You are not only paying for assets, tools, and freelancers; you also need to pay for plugins if you want something more than Visual Studio offers.
## You said Epic engineers use plugins or Rider, how do you know?
One Epic engineer told *us* that in Unreal Slackers; he also mentioned he personally uses Rider. Also, some of them share screenshots on Twitter in which their IDEs are clearly visible.
## What is 'hot reload'?
A feature enabled by default that is *supposed to* increase the quality of development time, *but* (attention please!) Epic's implementation of Hot Reloading is completely broken and useless. No one knows why they are still keeping it in the engine. Some say hot reloading was not even planned during the development phase of the engine.
If you're compiling the C++ code while editor open, that means you're 'hot reloading'. **NEVER** hot reload unless you totally know what you're doing.
[Read this if you're new to Unreal Engine](https://unrealcommunity.wiki/live-compiling-in-unreal-projects-tp14jcgs)
[Hot Reload Sucks](http://hotreloadsucks.com/)
## hEy yOu sAiD *bLa BlA* BuT 'iT dEPendS'
I don't care. I did not explain anything in technical detail. I just shared *my observations* and a very basic knowledge of this topic. Every reader is responsible for their own choice. Open the links, use the free trials and see which one is best for you.
## Why did you call me 'boomer' because I'm using Visual Studio?
When VS users call me 'kid' because I'm using Rider, I like to call them 'boomers' :D
## What is the purpose of life?
42
| 67.881188 | 585 | 0.774358 | eng_Latn | 0.999517 |
7ac92eece7bf644db6a46b804c31f04200816625 | 3,509 | md | Markdown | articles/cognitive-services/Bing-Autosuggest/language-support.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-Autosuggest/language-support.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-Autosuggest/language-support.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Supporto lingua - API Suggerimenti automatici Bing
titleSuffix: Azure Cognitive Services
description: Elenco delle lingue e delle regioni supportate dall'API Suggerimenti automatici Bing.
services: cognitive-services
author: swhite-msft
manager: nitinme
ms.service: cognitive-services
ms.subservice: bing-autosuggest
ms.topic: conceptual
ms.date: 02/20/2019
ms.author: scottwhi
ms.openlocfilehash: 90946b10bbc7717aa12566c4a25686f8471fb6e7
ms.sourcegitcommit: 9eda79ea41c60d58a4ceab63d424d6866b38b82d
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 11/30/2020
ms.locfileid: "96353358"
---
# <a name="language-and-region-support-for-the-bing-autosuggest-api"></a>Lingua e regioni supportate dall'API Suggerimenti automatici Bing
> [!WARNING]
> Le API Ricerca Bing sono state trasferite da Servizi cognitivi ai servizi di Ricerca Bing. A partire dal **30 ottobre 2020**, è necessario effettuare il provisioning di tutte le nuove istanze di Ricerca Bing seguendo la procedura documentata [qui](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
> Le API Ricerca Bing di cui viene effettuato il provisioning con Servizi cognitivi saranno supportate per i prossimi tre anni oppure fino alla data di fine del contratto Enterprise, se precedente.
> Per le istruzioni sulla migrazione, vedere [Servizi di Ricerca Bing](/bing/search-apis/bing-web-search/create-bing-search-service-resource).
Di seguito sono elencate le lingue supportate dall'API Suggerimenti automatici Bing.
| Linguaggio | Codice lingua |
|:----------- |:-------------:|
| Arabo | `ar` |
| Cinese (Repubblica popolare cinese) | `zh-CN` |
| Cinese (Hong Kong - R.A.S.) | `zh-HK` |
| Cinese (Taiwan) | `zh-TW` |
| Danese | `da` |
| Olandese (Belgio) | `nl-BE` |
| Olandese (Paesi Bassi) | `nl-NL` |
| Inglese (Australia) | `en-AU` |
| Inglese (Canada) | `en-CA` |
| Inglese (India) | `en-IN` |
| Inglese (Indonesia) | `en-ID` |
| Inglese (Malaysia) | `en-MY` |
| Inglese (Nuova Zelanda) | `en-NZ` |
| Inglese (Filippine) | `en-PH` |
| Inglese (Sud Africa) | `en-ZA` |
| Inglese (Regno Unito) | `en-GB` |
| Inglese (Stati Uniti) | `en-US` |
| Finlandese | `fi` |
| Francese (Belgio) | `fr-BE` |
| Francese (Canada) | `fr-CA` |
| Francese (Francia) | `fr-FR` |
| Francese (Svizzera) | `fr-CH` |
| Tedesco (Austria) | `de-AT` |
| Tedesco (Germania) | `de-DE` |
| Tedesco (Svizzera) | `de-CH` |
| Italiano | `it` |
| Giapponese | `ja` |
| Coreano | `ko` |
| Norvegese | `no` |
| Polacco | `pl` |
| Portoghese (Brasile) | `pt-BR`|
| Portoghese (Portogallo) | `pt-PT`|
| Russo | `ru` |
| Spagnolo (Argentina) | `es-AR` |
| Spagnolo (Cile) | `es-CL` |
| Spagnolo (Messico) | `es-MX` |
| Spagnolo (Spagna) | `es-ES` |
| Spagnolo (Stati Uniti) | `es-US` |
| Svedese | `sv` |
| Turco | `tr` |
## <a name="see-also"></a>Vedi anche
- [Pagina della documentazione di Servizi cognitivi di Azure](../index.yml)
- [Pagina del prodotto Servizi cognitivi di Azure](https://azure.microsoft.com/services/cognitive-services/) | 46.786667 | 321 | 0.600741 | ita_Latn | 0.944717 |
7ac9969a7404618e90c87553928616fbec0dcff4 | 401 | md | Markdown | upwork-devs/yarden-shoham/chart/README.md | faisaladnanpeltops/k8-traffic-generator | 3df4f9e2c6052c5b34cb3519c86aa9c4242d715d | [
"Apache-2.0"
] | null | null | null | upwork-devs/yarden-shoham/chart/README.md | faisaladnanpeltops/k8-traffic-generator | 3df4f9e2c6052c5b34cb3519c86aa9c4242d715d | [
"Apache-2.0"
] | null | null | null | upwork-devs/yarden-shoham/chart/README.md | faisaladnanpeltops/k8-traffic-generator | 3df4f9e2c6052c5b34cb3519c86aa9c4242d715d | [
"Apache-2.0"
] | null | null | null | # Helm Chart
## Deploy
`helm install gw-web-automation .`
## Configuration
Several values may be set for configuration:
| Environment Variable | Description | Default |
| -------------------- | ------------------------------------- | ------------------------ |
| `targetHost` | Any Glasswall Solutions website clone | `glasswallsolutions.com` |
| 28.642857 | 91 | 0.478803 | eng_Latn | 0.519676 |
7ac9e8223f717640448fbf0c016b2df438b6c8a0 | 4,647 | md | Markdown | docs/t-sql/statements/drop-procedure-transact-sql.md | brrick/sql-docs | c938c12cf157962a5541347fcfae57588b90d929 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2019-03-10T21:54:49.000Z | 2022-03-09T09:08:21.000Z | docs/t-sql/statements/drop-procedure-transact-sql.md | brrick/sql-docs | c938c12cf157962a5541347fcfae57588b90d929 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-09T17:22:05.000Z | 2020-11-19T20:51:25.000Z | docs/t-sql/statements/drop-procedure-transact-sql.md | brrick/sql-docs | c938c12cf157962a5541347fcfae57588b90d929 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-16T15:41:10.000Z | 2021-09-16T15:41:10.000Z | ---
description: "DROP PROCEDURE (Transact-SQL)"
title: "DROP PROCEDURE (Transact-SQL) | Microsoft Docs"
ms.custom: ""
ms.date: "05/11/2017"
ms.prod: sql
ms.prod_service: "database-engine, sql-database, sql-data-warehouse, pdw"
ms.reviewer: ""
ms.technology: t-sql
ms.topic: "language-reference"
f1_keywords:
- "DROP PROCEDURE"
- "DROP_PROCEDURE_TSQL"
dev_langs:
- "TSQL"
helpviewer_keywords:
- "removing stored procedures"
- "dropping procedure groups"
- "deleting stored procedures"
- "deleting procedure groups"
- "DROP PROCEDURE statement"
- "dropping stored procedures"
- "stored procedures [SQL Server], removing"
- "removing procedure groups"
ms.assetid: 1c2d7235-7b9b-4336-8f17-429e7d82c2c3
author: markingmyname
ms.author: maghan
monikerRange: ">=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||>=sql-server-linux-2017||=azuresqldb-mi-current"
---
# DROP PROCEDURE (Transact-SQL)
[!INCLUDE [sql-asdb-asdbmi-asa-pdw](../../includes/applies-to-version/sql-asdb-asdbmi-asa-pdw.md)]
Removes one or more stored procedures or procedure groups from the current database in [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)].
 [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## Syntax
```syntaxsql
-- Syntax for SQL Server and Azure SQL Database
DROP { PROC | PROCEDURE } [ IF EXISTS ] { [ schema_name. ] procedure } [ ,...n ]
```
```syntaxsql
-- Syntax for Azure Synapse Analytics and Parallel Data Warehouse
DROP { PROC | PROCEDURE } { [ schema_name. ] procedure_name }
```
[!INCLUDE[sql-server-tsql-previous-offline-documentation](../../includes/sql-server-tsql-previous-offline-documentation.md)]
## Arguments
*IF EXISTS*
**Applies to**: [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] ( [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] through [current version](https://go.microsoft.com/fwlink/p/?LinkId=299658)).
Conditionally drops the procedure only if it already exists.
*schema_name*
The name of the schema to which the procedure belongs. A server name or database name cannot be specified.
*procedure*
The name of the stored procedure or stored procedure group to be removed. Individual procedures within a numbered procedure group cannot be dropped; the whole procedure group is dropped.
## Best Practices
Before removing any stored procedure, check for dependent objects and modify these objects accordingly. Dropping a stored procedure can cause dependent objects and scripts to fail when these objects are not updated. For more information, see [View the Dependencies of a Stored Procedure](../../relational-databases/stored-procedures/view-the-dependencies-of-a-stored-procedure.md)
## Metadata
To display a list of existing procedures, query the **sys.objects** catalog view. To display the procedure definition, query the **sys.sql_modules** catalog view.
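For example, an illustrative query (not part of the original reference) that joins these two views lets you review each procedure and its definition before deciding what to drop:

```sql
-- Review existing stored procedures and their definitions before dropping any.
SELECT o.name, o.create_date, m.definition
FROM sys.objects AS o
INNER JOIN sys.sql_modules AS m
    ON m.object_id = o.object_id
WHERE o.type = 'P'; -- 'P' = SQL stored procedure
```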
## Security
### Permissions
Requires **CONTROL** permission on the procedure, or **ALTER** permission on the schema to which the procedure belongs, or membership in the **db_ddladmin** fixed server role.
## Examples
The following example removes the `dbo.uspMyProc` stored procedure in the current database.
```sql
DROP PROCEDURE dbo.uspMyProc;
GO
```
The following example removes several stored procedures in the current database.
```sql
DROP PROCEDURE dbo.uspGetSalesbyMonth, dbo.uspUpdateSalesQuotes, dbo.uspGetSalesByYear;
```
The following example removes the `dbo.uspMyProc` stored procedure if it exists but does not cause an error if the procedure does not exist. This syntax is new in [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)].
```sql
DROP PROCEDURE IF EXISTS dbo.uspMyProc;
GO
```
## See Also
[ALTER PROCEDURE (Transact-SQL)](../../t-sql/statements/alter-procedure-transact-sql.md)
[CREATE PROCEDURE (Transact-SQL)](../../t-sql/statements/create-procedure-transact-sql.md)
[sys.objects (Transact-SQL)](../../relational-databases/system-catalog-views/sys-objects-transact-sql.md)
[sys.sql_modules (Transact-SQL)](../../relational-databases/system-catalog-views/sys-sql-modules-transact-sql.md)
[Delete a Stored Procedure](../../relational-databases/stored-procedures/delete-a-stored-procedure.md)
| 43.027778 | 384 | 0.708844 | eng_Latn | 0.675454 |
7aca5be24af96be862c931079eafd498a9783dd9 | 859 | md | Markdown | data/issues/ZF-4858.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 40 | 2016-06-23T17:52:49.000Z | 2021-03-27T20:02:40.000Z | data/issues/ZF-4858.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 80 | 2016-06-24T13:39:11.000Z | 2019-08-08T06:37:19.000Z | data/issues/ZF-4858.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 52 | 2016-06-24T22:21:49.000Z | 2022-02-24T18:14:03.000Z | ---
layout: issue
title: "CLEANING_MODE_MATCHING_ANY_TAG on Zend_Cache_Backend_ZendPlatform"
id: ZF-4858
---
ZF-4858: CLEANING\_MODE\_MATCHING\_ANY\_TAG on Zend\_Cache\_Backend\_ZendPlatform
---------------------------------------------------------------------------------
Issue Type: Sub-task
Created: 2008-11-07T11:13:39.000+0000
Last Updated: 2011-08-03T14:12:43.000+0000
Status: Resolved
Fix version(s): 1.7.0 (17/Nov/08)
Reporter: old of Satoru Yoshida ([email protected])
Assignee: Satoru Yoshida (satoruyoshida)
Tags: Zend\_Cache
Related issues:
Attachments:
### Description
### Comments
Posted by old of Satoru Yoshida ([email protected]) on 2008-11-08T00:14:42.000+0000
Solved in SVN r12414
Posted by Wil Sinclair (wil) on 2008-11-13T14:10:17.000+0000
Changing issues in preparation for the 1.7.0 release.
| 22.605263 | 155 | 0.665891 | eng_Latn | 0.244737 |
7aca6cb61106078318b0724b13bab4253af1ca8a | 1,641 | md | Markdown | _posts/2021-02-13-Deploy-and-Manage-Edge-in-ConfigMgr-and-Intune-with-Donna-Ryan---2021-Browser-Summit-by-CSMUG.md | IntuneTraining/IntuneTraining.github.io | 8520e1947fd3bbb8bb07bf4712a612d2c8fd61dd | [
"MIT"
] | 1 | 2021-06-30T07:00:46.000Z | 2021-06-30T07:00:46.000Z | _posts/2021-02-13-Deploy-and-Manage-Edge-in-ConfigMgr-and-Intune-with-Donna-Ryan---2021-Browser-Summit-by-CSMUG.md | IntuneTraining/IntuneTraining.github.io | 8520e1947fd3bbb8bb07bf4712a612d2c8fd61dd | [
"MIT"
] | null | null | null | _posts/2021-02-13-Deploy-and-Manage-Edge-in-ConfigMgr-and-Intune-with-Donna-Ryan---2021-Browser-Summit-by-CSMUG.md | IntuneTraining/IntuneTraining.github.io | 8520e1947fd3bbb8bb07bf4712a612d2c8fd61dd | [
"MIT"
] | null | null | null | ---
layout: post
title: "Deploy and Manage Edge in ConfigMgr and Intune with Donna Ryan - 2021 Browser Summit by CSMUG"
date: 2021-02-13 00:00:00 -0000
categories:
---
<iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/hVc15Ep48GM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
This is a recording from the Charlotte Systems Management User Group Browser Summit held on January 29, 2021.
[https://www.meetup.com/CLT-System-Management-User-Group/events/274553206/](https://www.meetup.com/CLT-System-Management-User-Group/events/274553206/)
Slide content can be found here: [https://github.com/CSMUG/Meeting-Content/tree/master/BrowserSummitJan2021](https://github.com/CSMUG/Meeting-Content/tree/master/BrowserSummitJan2021)
Did you know that 60% of time spent on a PC is spent in a browser?
The browser is a critical part of any organization's applications, and there are a lot of changes happening in this space. Join us for some great presentations and demos on managing the browser.
Session Schedule (all times are Eastern Standard Timezone)
Intro - Julie Andreacola & Adam Gross
Managing Chrome - Chris Kibble
Managing and Deploying Edge in MEMCM and Intune - Donna Ryan
GPO implementation for Edge - Chad Brower
Browser Compatibility: Saying Goodbye to Internet Explorer - Julie Andreacola
PolicyPak: Manage your Desktops and Browsers via Group Policy and Intune - Jeremy Moskowitz
What's cool in Edge & Edge Roadmap - Colleen Williams
Edge and Security - Colleen Williams
Q&A - All
| 52.935484 | 263 | 0.790372 | eng_Latn | 0.782913 |
7acb8bfe80cb9c6e649b15a84d6b28cf994fc0bf | 4,673 | md | Markdown | docs/web-service-reference/moveitem-operation.md | MicrosoftDocs/office-developer-exchange-docs.ru-RU | 3a91b2ad9cf79ac9891f30e4495142a154989c17 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-19T18:54:01.000Z | 2020-05-19T18:54:01.000Z | docs/web-service-reference/moveitem-operation.md | MicrosoftDocs/office-developer-exchange-docs.ru-RU | 3a91b2ad9cf79ac9891f30e4495142a154989c17 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-12-08T02:38:26.000Z | 2021-12-08T02:38:33.000Z | docs/web-service-reference/moveitem-operation.md | MicrosoftDocs/office-developer-exchange-docs.ru-RU | 3a91b2ad9cf79ac9891f30e4495142a154989c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Операция MoveItem
manager: sethgros
ms.date: 09/17/2015
ms.audience: Developer
ms.topic: reference
ms.prod: office-online-server
ms.localizationpriority: medium
api_name:
- MoveItem
api_type:
- schema
ms.assetid: dcf40fa7-7796-4a5c-bf5b-7a509a18d208
description: Операция MoveItem используется для перемещения одного или более элементов в одну папку назначения.
ms.openlocfilehash: 2d86d06e522e0d42815971c92e754308224f5e8f
ms.sourcegitcommit: 54f6cd5a704b36b76d110ee53a6d6c1c3e15f5a9
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 09/24/2021
ms.locfileid: "59544852"
---
# <a name="moveitem-operation"></a>Операция MoveItem
Операция **MoveItem** используется для перемещения одного или более элементов в одну папку назначения.
## <a name="moveitem-request-example"></a>Пример запроса MoveItem
### <a name="description"></a>Описание
В следующем примере **запроса MoveItem** показано, как переместить элемент в папку Черновики.
### <a name="code"></a>Код
```XML
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types">
<soap:Body>
<MoveItem xmlns="http://schemas.microsoft.com/exchange/services/2006/messages"
xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types">
<ToFolderId>
<t:DistinguishedFolderId Id="drafts"/>
</ToFolderId>
<ItemIds>
<t:ItemId Id="AAAtAEF/swbAAA=" ChangeKey="EwAAABYA/s4b"/>
</ItemIds>
</MoveItem>
</soap:Body>
</soap:Envelope>
```
### <a name="comments"></a>Комментарии
Элемент [ToFolderId](tofolderid.md) указывает папку, в которую будут перемещены элементы. Обратите внимание, что все элементы, перечисленные в коллекции [ItemIds,](itemids.md) будут в конечном итоге в папке назначения. Чтобы разместить элементы в разных папках назначения, необходимо сделать отдельные вызовы **MoveItem.**
> [!NOTE]
> Идентификатор элемента и ключ изменения были сокращены для сохранения читаемости.
### <a name="request-elements"></a>Элементы запроса
В запросе используются следующие элементы:
- [MoveItem](moveitem.md)
- [ToFolderId](tofolderid.md)
- [DistinguishedFolderId](distinguishedfolderid.md)
- [ItemIds](itemids.md)
- [ItemId](itemid.md)
## <a name="moveitem-response-example"></a>Пример ответа MoveItem
### <a name="description"></a>Описание
В следующем примере показан успешный ответ на **запрос MoveItem.**
Идентификатор элемента нового элемента возвращается в ответное сообщение. Идентификаторы элементов не возвращаются в ответах для кросс-почтовых ящиков или почтовых ящиков в общедоступные операции **MoveItem** папки.
### <a name="code"></a>Код
```XML
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Header>
<t:ServerVersionInfo MajorVersion="8" MinorVersion="0" MajorBuildNumber="662" MinorBuildNumber="0"
xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types"/>
</soap:Header>
<soap:Body>
<MoveItemResponse xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages"
xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types"
xmlns="http://schemas.microsoft.com/exchange/services/2006/messages">
<m:ResponseMessages>
<m:MoveItemResponseMessage ResponseClass="Success">
<m:ResponseCode>NoError</m:ResponseCode>
<m:Items>
<t:Message>
<t:ItemID Id="AAMkAd" ChangeKey="FwAAABY" />
</t:Message>
</m:Items>
</m:MoveItemResponseMessage>
</m:ResponseMessages>
</MoveItemResponse>
</soap:Body>
</soap:Envelope>
```
### <a name="comments"></a>Комментарии
Операция **MoveItem** будет указывать на успешность, если перемещение было успешным.
### <a name="successful-response-elements"></a>Элементы успешного ответа
В ответе используются следующие элементы:
- [ServerVersionInfo](serverversioninfo.md)
- [MoveItemResponse](moveitemresponse.md)
- [ResponseMessages](responsemessages.md)
- [MoveItemResponseMessage](moveitemresponsemessage.md)
- [ResponseCode](responsecode.md)
- [Items](items.md)
## <a name="see-also"></a>См. также
- [Элементы XML веб-служб Exchange в Exchange](ews-xml-elements-in-exchange.md)
| 33.618705 | 323 | 0.70918 | rus_Cyrl | 0.156548 |
7acb93e72388efe06833687cd4583bd5546b32ce | 392 | md | Markdown | README.md | NexisHunter/GoShell | 8fdfbc138502acbd6ae06100c4551e3a4faf3517 | [
"BSD-3-Clause"
] | 1 | 2018-10-11T02:57:12.000Z | 2018-10-11T02:57:12.000Z | README.md | NexisHunter/GoShell | 8fdfbc138502acbd6ae06100c4551e3a4faf3517 | [
"BSD-3-Clause"
] | 2 | 2018-08-15T02:40:13.000Z | 2018-09-06T10:55:17.000Z | README.md | NexisHunter/GoShell | 8fdfbc138502acbd6ae06100c4551e3a4faf3517 | [
"BSD-3-Clause"
] | null | null | null | ## GoShell [](https://circleci.com/gh/NexisHunter/GoShell)
The custom shell for my WIP Os. Please leave a comment on any bugs (or open a pull request) and I'll take into
consideration of the edit.
If you have a solution please dont be afraid to upload it.
Thanks in advance and I look forward to the suggestions
| 49 | 128 | 0.755102 | eng_Latn | 0.9756 |
7acbe00743aaa5b7b5309ca2cc84fcc06f622478 | 3,777 | md | Markdown | SPADAXsys/mainpage.md | spadaxsys-dev/SPADAXsys | 5b766c399ca5f8b5536548b9f88b655693b51803 | [
"Apache-2.0"
] | null | null | null | SPADAXsys/mainpage.md | spadaxsys-dev/SPADAXsys | 5b766c399ca5f8b5536548b9f88b655693b51803 | [
"Apache-2.0"
] | null | null | null | SPADAXsys/mainpage.md | spadaxsys-dev/SPADAXsys | 5b766c399ca5f8b5536548b9f88b655693b51803 | [
"Apache-2.0"
] | null | null | null | SPHinXsys (pronunciation: s'finksis)
is an acronym from <b>S</b>moothed <b>P</b>article
<b>H</b>ydrodynamics for <b>in</b>dustrial comple<b>X</b> <b>sys</b>tems.
It provides C++ APIs for physical accurate simulation and aims to model coupled
industrial dynamic systems including fluid, solid, multi-body dynamics and
beyond with SPH (smoothed particle hydrodynamics),
a meshless computational method using particle discretization.
Included physics
-----------------
Fluid dynamics, solid dynamics, fluid-structure interactions (FSI),
and their coupling to multi-body dynamics (with SIMBody library https://simtk.org)
SPH method and algorithms
-----------------
SPH is a fully Lagrangian particle method,
in which the continuum media is discretized into Lagrangian particles
and the mechanics is approximated as the interaction between them
with the help of a kernel, usually a Gaussian-like function.
SPH is a mesh-free method: it does not require a mesh to define
the neighboring configuration of particles,
but constructs or updates it according to the distance between particles.
A remarkable feature of this method is that its computational algorithm
involves a large number of common abstractions
which link inherently to many physical systems.
Due to this unique feature,
SPH has been used here for unified modeling of both fluid and solid mechanics.
The SPH algorithms are based on the published work of the authors.
The algorithms for the discretization of the fluid dynamics equations
are based on a weakly compressible fluid formulation,
which is suitable for the problems with incompressible flows,
and compressible flows with low Mach number (less than 0.3).
The solid dynamics equations are discretized by a total Lagrangian formulation,
which is suitable to study the problems involving linear and non-linear elastic materials.
The FSI coupling algorithm is implemented in a kinematic-force fashion,
in which the solid structure surface describes the phase-interface and,
at the same time, experiences the surface forces imposed
by the fluid pressure and friction.
Geometric models
-----------------
2D models can be built using basic shapes (polygon and circle) and full version of binary operations.
3D models can be generated by simple shapes (brick and sphere),
imported from external STL files and processed by applying simple binary operations, e.g. add and subtract.
Material models
-----------------
Newtonian fluids with an isothermal linear equation of state. Non-Newtonian fluids with the Oldroyd-B model.
Linear elastic solid, non-linear elastic solid with the neo-Hookean model, and an anisotropic muscle model.
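For reference, the isothermal linear equation of state mentioned above is commonly written as follows (a generic weakly compressible SPH convention; the library's exact constants may differ):

```latex
p = c^2 \left( \rho - \rho_0 \right)
```

where rho_0 is the reference density and c an artificial speed of sound, typically chosen about ten times the maximum flow speed so that density variations stay near one percent.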
Multi-resolution modeling
-----------------
Uniform resolution is used within each fluid or solid body.
However, it is allowed to use different resolutions for different bodies.
For example, one is able to use a higher resolution for a solid body
which is interacting with a fluid body with a lower resolution.
Parallel Computing
-----------------
Intel Threading Building Blocks (TBB) is used for the multi-core parallelism.
Authors
-----------------
Xiangyu Hu, Luhui Han, Chi Zhang, Shuoguo Zhang, Massoud Rezavand
Project Principle Investigator
-----------------
Xiangyu Hu ([email protected]), Department of Mechanical Engineering,
Technical University of Munich
Acknowledgements
-----------------
German Research Foundation (Deutsche Forschungsgemeinschaft) DFG HU1527/6-1, HU1527/10-1 and HU1527/12-1.
Please cite
-----------------
Luhui Han and Xiangyu Hu,
"SPH modeling of fluid-structure interaction",
Journal of Hydrodynamics, 2018: 30(1):62-69.
Chi Zhang and Massoud Rezavand and Xiangyu Hu,
"Dual-criteria time stepping for weakly compressible smoothed particle hydrodynamics",
arXiv:1905.12302
| 44.435294 | 110 | 0.769923 | eng_Latn | 0.997295 |
7acbeefe60aff74446f9e1e1126ff7094ccfa7ed | 490 | md | Markdown | RESTAPI/Postman/README.md | sajipoochira/AzureWVD | 12f0c7bf32260d4470141646f85c7463e0707544 | [
"MIT"
] | 10 | 2020-09-30T11:36:42.000Z | 2022-03-16T21:28:11.000Z | RESTAPI/Postman/README.md | sajipoochira/AzureWVD | 12f0c7bf32260d4470141646f85c7463e0707544 | [
"MIT"
] | null | null | null | RESTAPI/Postman/README.md | sajipoochira/AzureWVD | 12f0c7bf32260d4470141646f85c7463e0707544 | [
"MIT"
] | 2 | 2021-06-20T12:03:24.000Z | 2021-11-10T18:53:56.000Z | # AzureWVD - REST API - Postman
This folder contains a complete Postman configuration (global variables, environment variables, and an Azure REST collection) for performing some preconfigured REST API calls against the WVD service on Azure. Import the folder into Postman, then configure the environment variable values and the Azure REST collection variables with your specific Azure subscription information to get started with your first REST API calls to the WVD service.
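For readers who prefer scripting the same kind of call outside Postman, here is a minimal sketch in Python. The resource-provider path (`Microsoft.DesktopVirtualization/hostPools`) is a real Azure Resource Manager route, but the subscription ID, bearer token, and API version below are placeholders you must supply from your own environment.

```python
import urllib.request


def build_hostpools_request(subscription_id: str, token: str,
                            api_version: str = "2021-07-12") -> urllib.request.Request:
    """Build a GET request that lists WVD host pools in a subscription."""
    url = (
        "https://management.azure.com/subscriptions/"
        f"{subscription_id}/providers/Microsoft.DesktopVirtualization/"
        f"hostPools?api-version={api_version}"
    )
    # The bearer token would normally come from Azure AD (e.g. `az account get-access-token`).
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})


if __name__ == "__main__":
    req = build_hostpools_request("00000000-0000-0000-0000-000000000000", "<token>")
    print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) requires a valid token; the sketch only constructs it so you can compare against what Postman sends.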
| 163.333333 | 457 | 0.828571 | eng_Latn | 0.992205 |
7acd4a0958d10cea48cc1988529473df83805004 | 20,218 | md | Markdown | microsoft-365/security/office-365-security/microsoft-365-policies-configurations.md | jmbush/microsoft-365-docs | c2a20e48574fa637f18eaa40314609048a9d595d | [
"CC-BY-4.0",
"MIT"
] | 534 | 2018-02-01T00:24:21.000Z | 2022-03-31T10:45:31.000Z | microsoft-365/security/office-365-security/microsoft-365-policies-configurations.md | jmbush/microsoft-365-docs | c2a20e48574fa637f18eaa40314609048a9d595d | [
"CC-BY-4.0",
"MIT"
] | 6,830 | 2018-02-12T17:44:25.000Z | 2022-03-31T23:10:10.000Z | microsoft-365/security/office-365-security/microsoft-365-policies-configurations.md | jmbush/microsoft-365-docs | c2a20e48574fa637f18eaa40314609048a9d595d | [
"CC-BY-4.0",
"MIT"
] | 1,281 | 2018-02-01T22:01:12.000Z | 2022-03-31T14:35:18.000Z | ---
title: Identity and device access configurations - Microsoft 365 for enterprise
description: Describes Microsoft recommendations and core concepts for deploying secure email, docs, and apps policies and configurations.
ms.author: josephd
author: JoeDavies-MSFT
manager: laurawi
ms.prod: m365-security
ms.topic: article
audience: Admin
f1.keywords:
- NOCSH
ms.reviewer: martincoetzer
ms.custom:
- it-pro
- goldenconfig
ms.collection:
- M365-identity-device-management
- M365-security-compliance
- m365solution-identitydevice
- m365solution-overview
ms.technology: mdo
---
# Identity and device access configurations
**Applies to**
- [Exchange Online Protection](exchange-online-protection-overview.md)
- [Microsoft Defender for Office 365 plan 1 and plan 2](defender-for-office-365.md)
The modern security perimeter of your organization now extends beyond your network to include users accessing cloud-based apps from any location with a variety of devices. Your security infrastructure needs to determine whether a given access request should be granted and under what conditions.
This determination should be based on the user account of the sign-in, the device being used, the app the user is using for access, the location from which the access request is made, and an assessment of the risk of the request. This capability helps ensure that only approved users and devices can access your critical resources.
This series of articles describes a set of identity and device access prerequisite configurations and a set of Azure Active Directory (Azure AD) Conditional Access, Microsoft Intune, and other policies to secure access to Microsoft 365 for enterprise cloud apps and services, other SaaS services, and on-premises applications published with Azure AD Application Proxy.
Identity and device access settings and policies are recommended in three tiers: baseline protection, sensitive protection, and protection for environments with highly regulated or classified data. These tiers and their corresponding configurations provide consistent levels of protection across your data, identities, and devices.
These capabilities and their recommendations:
- Are supported in Microsoft 365 E3 and Microsoft 365 E5.
- Are aligned with [Microsoft Secure Score](../defender/microsoft-secure-score.md) as well as [identity score in Azure AD](/azure/active-directory/fundamentals/identity-secure-score), and will increase these scores for your organization.
- Will help you implement these [five steps to securing your identity infrastructure](/azure/security/azure-ad-secure-steps).
If your organization has unique environment requirements or complexities, use these recommendations as a starting point. However, most organizations can implement these recommendations as prescribed.
Watch this video for a quick overview of identity and device access configurations for Microsoft 365 for enterprise.
<br>
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWxEDQ]
> [!NOTE]
> Microsoft also sells Enterprise Mobility + Security (EMS) licenses for Office 365 subscriptions. EMS E3 and EMS E5 capabilities are equivalent to those in Microsoft 365 E3 and Microsoft 365 E5. See [EMS plans](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/compare-plans-and-pricing) for the details.
## Intended audience
These recommendations are intended for enterprise architects and IT professionals who are familiar with Microsoft 365 cloud productivity and security services, which includes Azure AD (identity), Microsoft Intune (device management), and Microsoft Information Protection (data protection).
### Customer environment
The recommended policies are applicable to enterprise organizations operating both entirely within the Microsoft cloud and for customers with hybrid identity infrastructure, which is an on-premises Active Directory Domain Services (AD DS) forest that is synchronized with an Azure AD tenant.
Many of the provided recommendations rely on services available only with Microsoft 365 E5, Microsoft 365 E3 with the E5 Security add-on, EMS E5, or Azure AD Premium P2 licenses.
For those organizations who do not have these licenses, Microsoft recommends you at least implement [security defaults](/azure/active-directory/fundamentals/concept-fundamentals-security-defaults), which is included with all Microsoft 365 plans.
### Caveats
Your organization may be subject to regulatory or other compliance requirements, including specific recommendations that may require you to apply policies that diverge from these recommended configurations. These configurations recommend usage controls that have not historically been available. We recommend these controls because we believe they represent a balance between security and productivity.
We've done our best to account for a wide variety of organizational protection requirements, but we're not able to account for all possible requirements or for all the unique aspects of your organization.
## Three tiers of protection
Most organizations have specific requirements regarding security and data protection. These requirements vary by industry segment and by job functions within organizations. For example, your legal department and administrators might require additional security and information protection controls around their email correspondence that are not required for other business units.
Each industry also has its own set of specialized regulations. Rather than providing a list of all possible security options or a recommendation per industry segment or job function, recommendations have been provided for three different tiers of security and protection that can be applied based on the granularity of your needs.
- **Baseline protection**: We recommend you establish a minimum standard for protecting data, as well as the identities and devices that access your data. You can follow these baseline recommendations to provide strong default protection that meets the needs of many organizations.
- **Sensitive protection**: Some customers have a subset of data that must be protected at higher levels, or they may require all data to be protected at a higher level. You can apply increased protection to all or specific data sets in your Microsoft 365 environment. We recommend protecting identities and devices that access sensitive data with comparable levels of security.
- **Highly regulated**: Some organizations may have a small amount of data that is highly classified, constitutes trade secrets, or is regulated data. Microsoft provides capabilities to help organizations meet these requirements, including added protection for identities and devices.

This guidance shows you how to implement protection for identities and devices for each of these tiers of protection. Use this guidance as a starting point for your organization and adjust the policies to meet your organization's specific requirements.
It's important to use consistent levels of protection across your data, identities, and devices. For example, if you implement this guidance, be sure to protect your data at comparable levels.
The **Identity and device protection for Microsoft 365** architecture model shows you which capabilities are comparable.
[](../../downloads/MSFT_cloud_architecture_identity&device_protection.pdf) <br> [View as a PDF](../../downloads/MSFT_cloud_architecture_identity&device_protection.pdf) \| [Download as a PDF](https://github.com/MicrosoftDocs/microsoft-365-docs/raw/public/microsoft-365/downloads/MSFT_cloud_architecture_identity&device_protection.pdf) \| [Download as a Visio](https://github.com/MicrosoftDocs/microsoft-365-docs/raw/public/microsoft-365/downloads/MSFT_cloud_architecture_identity&device_protection.vsdx)
Additionally, see the [Deploy information protection for data privacy regulations](../../solutions/information-protection-deploy.md) solution to protect information stored in Microsoft 365.
## Security and productivity trade-offs
Implementing any security strategy requires trade-offs between security and productivity. It's helpful to evaluate how each decision affects the balance of security, functionality, and ease of use.

The recommendations provided are based on the following principles:
- Know your users and be flexible to their security and functional requirements.
- Apply a security policy just in time and ensure it is meaningful.
## Services and concepts for identity and device access protection
Microsoft 365 for enterprise is designed for large organizations to empower everyone to be creative and work together securely.
This section provides an overview of the Microsoft 365 services and capabilities that are important for identity and device access.
### Azure AD
Azure AD provides a full suite of identity management capabilities. We recommend using these capabilities to secure access.
|Capability or feature|Description|Licensing|
|---|---|---|
|[Multi-factor authentication (MFA)](/azure/active-directory/authentication/concept-mfa-howitworks)|MFA requires users to provide two forms of verification, such as a user password plus a notification from the Microsoft Authenticator app or a phone call. MFA greatly reduces the risk that stolen credentials can be used to access your environment. Microsoft 365 uses the Azure AD Multi-Factor Authentication service for MFA-based sign-ins.|Microsoft 365 E3 or E5|
|[Conditional Access](/azure/active-directory/conditional-access/overview)|Azure AD evaluates the conditions of the user sign-in and uses Conditional Access policies to determine the allowed access. For example, in this guidance we show you how to create a Conditional Access policy to require device compliance for access to sensitive data. This greatly reduces the risk that a hacker with their own device and stolen credentials can access your sensitive data. It also protects sensitive data on the devices, because the devices must meet specific requirements for health and security.|Microsoft 365 E3 or E5|
|[Azure AD groups](/azure/active-directory/fundamentals/active-directory-manage-groups)|Conditional Access policies, device management with Intune, and even permissions to files and sites in your organization rely on the assignment to user accounts or Azure AD groups. We recommend you create Azure AD groups that correspond to the levels of protection you are implementing. For example, your executive staff are likely higher value targets for hackers. Therefore, it makes sense to add the user accounts of these employees to an Azure AD group and assign this group to Conditional Access policies and other policies that enforce a higher level of protection for access.|Microsoft 365 E3 or E5|
|[Device enrollment](/azure/active-directory/devices/overview)|You enroll a device into Azure AD to create an identity for the device. This identity is used to authenticate the device when a user signs in and to apply Conditional Access policies that require domain-joined or compliant PCs. For this guidance, we use device enrollment to automatically enroll domain-joined Windows computers. Device enrollment is a prerequisite for managing devices with Intune.|Microsoft 365 E3 or E5|
|[Azure AD Identity Protection](/azure/active-directory/identity-protection/overview)|Enables you to detect potential vulnerabilities affecting your organization's identities and configure automated remediation policy to low, medium, and high sign-in risk and user risk. This guidance relies on this risk evaluation to apply Conditional Access policies for multi-factor authentication. This guidance also includes a Conditional Access policy that requires users to change their password if high-risk activity is detected for their account.|Microsoft 365 E5, Microsoft 365 E3 with the E5 Security add-on, EMS E5, or Azure AD Premium P2 licenses|
|[Self-service password reset (SSPR)](/azure/active-directory/authentication/concept-sspr-howitworks)|Allow your users to reset their passwords securely and without help-desk intervention, by providing verification of multiple authentication methods that the administrator can control.|Microsoft 365 E3 or E5|
|[Azure AD password protection](/azure/active-directory/authentication/concept-password-ban-bad)|Detect and block known weak passwords and their variants and additional weak terms that are specific to your organization. Default global banned password lists are automatically applied to all users in an Azure AD tenant. You can define additional entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.|Microsoft 365 E3 or E5|
|
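To make the Conditional Access capability above concrete, the sketch below builds the JSON body accepted by the Microsoft Graph endpoint `POST /identity/conditionalAccess/policies` for a simple "require MFA for all users" policy. The helper function and the policy name are illustrative, not an official Microsoft sample; review the Graph documentation before running this against a real tenant.

```python
import json

def require_mfa_policy(name: str, report_only: bool = True) -> dict:
    """Build a Conditional Access policy body for POST /identity/conditionalAccess/policies."""
    return {
        "displayName": name,
        # Start in report-only mode so you can observe the impact before enforcing.
        "state": "enabledForReportingButNotEnforced" if report_only else "enabled",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

policy = require_mfa_policy("Baseline: require MFA for all users")
print(json.dumps(policy, indent=2))
```

Posting this body from an app granted the `Policy.ReadWrite.ConditionalAccess` permission creates the policy; passing `report_only=False` sets `state` to `enabled`.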
Here are the components of identity and device access, including Intune and Azure AD objects, settings, and subservices.

### Microsoft Intune
[Intune](/intune/introduction-intune) is Microsoft's cloud-based mobile device management service. This guidance recommends device management of Windows PCs with Intune and recommends device compliance policy configurations. Intune determines whether devices are compliant and sends this data to Azure AD to use when applying Conditional Access policies.
#### Intune app protection
[Intune app protection](/intune/app-protection-policy) policies can be used to protect your organization's data in mobile apps, with or without enrolling devices into management. Intune helps protect information, making sure your employees can still be productive, and preventing data loss. By implementing app-level policies, you can restrict access to company resources and keep data within the control of your IT department.
This guidance shows you how to create recommended policies to enforce the use of approved apps and to determine how these apps can be used with your business data.
### Microsoft 365
This guidance shows you how to implement a set of policies to protect access to Microsoft 365 cloud services, including Microsoft Teams, Exchange Online, SharePoint Online, and OneDrive for Business. In addition to implementing these policies, we recommend you also raise the level of protection for your tenant using these resources:
- [Configure your tenant for increased security](tenant-wide-setup-for-increased-security.md)
Recommendations that apply to baseline security for your tenant.
- [Security roadmap: Top priorities for the first 30 days, 90 days, and beyond](security-roadmap.md)
Recommendations that include logging, data governance, admin access, and threat protection.
### Windows 10 and Microsoft 365 Apps for enterprise
Windows 10 with Microsoft 365 Apps for enterprise is the recommended client environment for PCs. We recommend Windows 10 because Azure is designed to provide the smoothest experience possible for both on-premises and Azure AD. Windows 10 also includes advanced security capabilities that can be managed through Intune. Microsoft 365 Apps for enterprise includes the latest versions of Office applications. These use modern authentication, which is more secure and a requirement for Conditional Access. These apps also include enhanced compliance and security tools.
## Applying these capabilities across the three tiers of protection
The following table summarizes our recommendations for using these capabilities across the three tiers of protection.
|Protection mechanism|Baseline|Sensitive|Highly regulated|
|---|---|---|---|
|**Enforce MFA**|On medium or above sign-in risk|On low or above sign-in risk|On all new sessions|
|**Enforce password change**|For high-risk users|For high-risk users|For high-risk users|
|**Enforce Intune application protection**|Yes|Yes|Yes|
|**Enforce Intune enrollment for organization-owned device**|Require a compliant or domain-joined PC, but allow bring-your-own devices (BYOD) phones and tablets|Require a compliant or domain-joined device|Require a compliant or domain-joined device|
|
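The MFA row of this table can be read as a simple per-tier threshold rule. The sketch below encodes that reading purely as an illustration; the tier names, risk ordering, and function are ours, not part of any Microsoft API.

```python
RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

# Minimum sign-in risk at which MFA is enforced for each tier
# ("always" means every new session), mirroring the table above.
MFA_THRESHOLD = {"baseline": "medium", "sensitive": "low", "highly_regulated": "always"}

def mfa_required(tier: str, sign_in_risk: str) -> bool:
    threshold = MFA_THRESHOLD[tier]
    if threshold == "always":
        return True
    return RISK_ORDER[sign_in_risk] >= RISK_ORDER[threshold]

print(mfa_required("baseline", "low"))          # → False
print(mfa_required("sensitive", "low"))         # → True
print(mfa_required("highly_regulated", "low"))  # → True
```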
## Device ownership
The above table reflects the trend for many organizations to support a mix of organization-owned devices, as well as personal or BYODs to enable mobile productivity across the workforce. Intune app protection policies ensure that email is protected from exfiltrating out of the Outlook mobile app and other Office mobile apps, on both organization-owned devices and BYODs.
We recommend organization-owned devices be managed by Intune or domain-joined to apply additional protections and control. Depending on data sensitivity, your organization may choose to not allow BYODs for specific user populations or specific apps.
## Deployment and your apps
Prior to configuring and rolling out identity and device access configuration for your Azure AD-integrated apps, you must:
- Decide which apps used in your organization you want to protect.
- Analyze this list of apps to determine the sets of policies that provide appropriate levels of protection.
You should not create a separate set of policies for each app, because managing them can become cumbersome. Microsoft recommends that you group apps that have the same protection requirements for the same users.
For example, you could have one set of policies that include all Microsoft 365 apps for all of your users for baseline protection and a second set of policies for all sensitive apps, such as those used by human resources or finance departments, and apply them to those groups.
Once you have determined the set of policies for the apps you want to secure, roll the policies out to your users incrementally, addressing issues along the way.
For example, configure the policies that will be used for all your Microsoft 365 apps for just Exchange Online with the additional changes for Exchange. Roll these policies out to your users and work through any issues. Then, add Teams with its additional changes and roll this out to your users. Then, add SharePoint with its additional changes. Continue adding the rest of your apps until you can confidently configure these baseline policies to include all Microsoft 365 apps.
Similarly, for your sensitive apps, create the set of policies and add one app at a time and work through any issues until they are all included in the sensitive app policy set.
Microsoft recommends that you do not create policy sets that apply to all apps because it can result in some unintended configurations. For example, policies that block all apps could lock your admins out of the Azure portal and exclusions cannot be configured for important endpoints such as Microsoft Graph.
## Steps to configure identity and device access

1. Configure prerequisite identity features and their settings.
2. Configure the common identity and access Conditional Access policies.
3. Configure Conditional Access policies for guest and external users.
4. Configure Conditional Access policies for Microsoft 365 cloud apps (such as Microsoft Teams, Exchange Online, and SharePoint) and Microsoft Cloud App Security policies.
After you have configured identity and device access, see the [Azure AD feature deployment guide](/azure/active-directory/fundamentals/active-directory-deployment-checklist-p2) for a phased checklist of additional features to consider and [Azure AD Identity Governance](/azure/active-directory/governance/) to protect, monitor, and audit access.
## Next step
[Prerequisite work for implementing identity and device access policies](identity-access-prerequisites.md)
---
UID: NE:ntddrilapitypes.RILUICCSLOTSTATE
title: RILUICCSLOTSTATE (ntddrilapitypes.h)
description: This enumeration describes the RILUICCSLOTSTATE.
old-location: netvista\riluiccslotstate.htm
tech.root: netvista
ms.date: 02/16/2018
keywords: ["RILUICCSLOTSTATE enumeration"]
ms.keywords: RILUICCSLOTSTATE, RILUICCSLOTSTATE enumeration [Network Drivers Starting with Windows Vista], RIL_UICCSLOT_ACTIVE, RIL_UICCSLOT_EMPTY, RIL_UICCSLOT_ERROR, RIL_UICCSLOT_NOT_READY, RIL_UICCSLOT_OFF, RIL_UICCSLOT_OFF_EMPTY, netvista.riluiccslotstate, rilapitypes/RILUICCSLOTSTATE, rilapitypes/RIL_UICCSLOT_ACTIVE, rilapitypes/RIL_UICCSLOT_EMPTY, rilapitypes/RIL_UICCSLOT_ERROR, rilapitypes/RIL_UICCSLOT_NOT_READY, rilapitypes/RIL_UICCSLOT_OFF, rilapitypes/RIL_UICCSLOT_OFF_EMPTY
req.header: ntddrilapitypes.h
req.include-header: Rilapitypes.h, Ntddrilapitypes.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames: RILUICCSLOTSTATE
f1_keywords:
- RILUICCSLOTSTATE
- ntddrilapitypes/RILUICCSLOTSTATE
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- rilapitypes.h
api_name:
- RILUICCSLOTSTATE
---
# RILUICCSLOTSTATE enumeration (ntddrilapitypes.h)
## -description
<div class="alert"><b>Warning</b> The Cellular COM API is deprecated in Windows 10. This content is provided to support maintenance of OEM and mobile operator created Windows Phone 8.1 applications.</div><div> </div>This enumeration describes the RILUICCSLOTSTATE.
## -enum-fields
### -field RIL_UICCSLOT_OFF_EMPTY
### -field RIL_UICCSLOT_OFF
### -field RIL_UICCSLOT_EMPTY
### -field RIL_UICCSLOT_NOT_READY
### -field RIL_UICCSLOT_ACTIVE
### -field RIL_UICCSLOT_ERROR
### -field RIL_UICCSLOT_MAX
## -syntax
```cpp
enum RILUICCSLOTSTATE {
RIL_UICCSLOT_OFF_EMPTY = 0x01,
RIL_UICCSLOT_OFF = 0x02,
RIL_UICCSLOT_EMPTY = 0x03,
RIL_UICCSLOT_NOT_READY = 0x04,
RIL_UICCSLOT_ACTIVE = 0x05,
RIL_UICCSLOT_ERROR = 0x06
};
```
## -see-also
<a href="/previous-versions/windows/hardware/cellular/dn946509(v=vs.85)">Cellular COM enumerations</a>
| 28.686747 | 489 | 0.760605 | yue_Hant | 0.866994 |
---
title: What's new or changed in Dynamics 365 Human Resources, September 26, 2020
description: This topic describes features that are new or changed in Microsoft Dynamics 365 Human Resources for September 26, 2020.
author: jcart1106
ms.date: 09/26/2020
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: ''
audience: Application User
ms.custom: ''
ms.assetid: ''
ms.search.region: Global
ms.author: jcart
ms.search.validFrom: 2020-10-13
ms.dyn365.ops.version: Human Resources
ms.openlocfilehash: a01e172f5c62b746f4733e03d25ea43f0247790003ea1a1470bc28e98db12deb
ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 08/05/2021
ms.locfileid: "6741413"
---
# <a name="whats-new-or-changed-in-dynamics-365-human-resources-september-26-2020"></a>What's new or changed in Dynamics 365 Human Resources, September 26, 2020
[!include [Applies to Human Resources](../includes/applies-to-hr.md)]
[!include [rename-banner](~/includes/cc-data-platform-banner.md)]
This topic describes features that are new, changed, or coming soon in Dynamics 365 Human Resources. For more information about our update process and schedule, see [Update process](hr-admin-setup-update-process.md).
For more information about new features and expected general availability dates, see the [Dynamics 365 Human Resources 2020 release overview](/dynamics365-release-plan/2020wave2/human-resources/dynamics365-human-resources/).
## <a name="in-this-release"></a>In this release
This release includes the following new features and bug fixes. Changes apply to build number 8.1.3589-hf.3.
### <a name="new-features"></a>New features
In this release, the following feature became generally available:
- **Platform update 10.0.13 is available**: For more information about the update, see [Platform updates for version 10.0.13 of Finance and Operations apps (October 2020)](../fin-ops-core/dev-itpro/get-started/whats-new-platform-updates-10-0-13.md).
### <a name="bug-fixes"></a>Bug fixes
This release includes the following bug fixes.
> [!NOTE]
> Our goal is to provide this information as soon as possible. This topic may be updated to include bug fixes made in a build after this topic was initially published.
| Issue number | Issue | Description |
| --- | --- | --- |
| 469495 | Default financial dimension grid and dialog update | The financial dimension grid and dialog have been updated throughout the Human Resources modules. |
| 474887 | Leave request work item opens an incorrect link in a manual decision | If the workflow configuration contains a manual decision, navigating to a leave request from **Work items assigned to me** opens an incorrect link, showing either an empty form or a leave request created by the current user instead of the one assigned to them for the manual decision. |
| 474962 | Leave and absence parameters entity has fields with ambiguous labels | The labels of the leave and absence parameters entity have been updated to make them clearer. |
| 481401 | Accrual processing hangs when the accrual date basis is after the accrual start date and at the end of the month | Accrual processing has been updated so that it no longer hangs when the accrual date basis is after the accrual start date and at the end of the month. |
| 447167 | Expiring records lists include inactive workers | The **Expiring records** tab in **Personnel management** includes inactive workers. It now includes only active workers. |
| 486840 | Incorrect absence request opens from **Work items assigned to me** | Selecting an absence request from **Work items assigned to me** no longer opens the latest absence request assigned to the current user. |
| 506868 | Dataverse **Title** field isn't set for the **Position** entity | The **Title** field in the **Job** and **Position** entities didn't display as specified. The **Title** field now displays. |
| 430359 | Can't access offboarding checklist tasks with assigned Manager and Employee roles | Workers with a future termination date can't access their checklist tasks if they have only the Employee or Manager role. Users with the Employee or Manager role can now access offboarding tasks with a future termination date. |
| 458102 | New employee doesn't appear in the **Employee payroll details** entity when created | New employees are now included in the employee payroll details entity without having to open the employee's payroll information before exporting the entity. |
## <a name="in-preview"></a>In preview
The following new features are available in preview. For more information about turning features on or off, see [Manage features](hr-admin-manage-features.md).
| Feature | Release plan | Documentation |
| --- | --- | --- |
| Human Resources app in Microsoft Teams | [Employee leave and absence experience in Microsoft Teams](/dynamics365-release-plan/2020wave1/dynamics365-human-resources/employee-leave-absence-experience-teams) | [Human Resources Teams app](./hr-admin-teams-leave-app.md)<br>[Manage leave requests in Teams](hr-teams-leave-app.md) |
| Advanced workflow requests and approvals | [Organization and personnel management workflow experience enhancements](/dynamics365-release-plan/2020wave2/human-resources/dynamics365-human-resources/organization-personnel-management-workflow-experience-enhancements) | [Configuration option to position the Work items assigned to me list](./hr-whats-new-2020-09-03.md#configuration-option-to-position-work-items-assigned-to-me-list-477004) |
## <a name="coming-soon"></a>Coming soon
The following new feature is planned for a future release:
- [Custom links in Manager self-service](/dynamics365-release-plan/2020wave2/human-resources/dynamics365-human-resources/custom-links-manager-self-service)
For a complete list of planned features and their scheduled releases, see the [Dynamics 365 Human Resources 2019 release wave 2 overview](/dynamics365-release-plan/2019wave2/dynamics365-human-resources/).
## <a name="additional-resources"></a>Additional resources
[What's new or changed in Human Resources](hr-admin-whats-new.md)
[Dynamics 365 Human Resources 2020 release wave 2 overview](/dynamics365-release-plan/2020wave2/human-resources/dynamics365-human-resources/)
[Update process](hr-admin-setup-update-process.md)
[Manage features](hr-admin-manage-features.md)
[!INCLUDE[footer-include](../includes/footer-banner.md)]
# CharacterData
Character data
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | **String** | |
**id** | **Integer** | |
**gender** | [**TypeData**](TypeData.md) | |
**faction** | [**TypeData**](TypeData.md) | |
**race** | [**IndexData**](IndexData.md) | |
**characterClass** | [**IndexData**](IndexData.md) | |
**activeSpec** | [**IndexData**](IndexData.md) | |
**realm** | [**RealmIndexData**](RealmIndexData.md) | |
**guild** | [**GuildCharacterIndexData**](GuildCharacterIndexData.md) | |
**level** | **Integer** | |
**experience** | **Integer** | |
**achievementPoints** | **Integer** | |
**lastLoginTimestamp** | **Long** | |
**averageItemLevel** | **Integer** | |
**equippedItemLevel** | **Integer** | |
**activeTitle** | [**TitleData**](TitleData.md) | |
**covenantProgress** | [**CovenantProgressData**](CovenantProgressData.md) | |
| 30.774194 | 80 | 0.539832 | yue_Hant | 0.744846 |
# Advanced Properties
High Definition Render Pipeline (HDRP) components expose standard properties by default that are suitable for most use cases. To fine-tune the behavior of your components, you can manually expose **Advanced Properties**.
## Exposing Advanced Properties
Components that include advanced properties have a plus icon to the right of each property section header. Click this plus icon to expose the advanced properties for that property section. For example, the [Light component’s](Light-Component.html) **General** section includes advanced properties:

When you click the plus icon, Unity exposes the advanced properties for the **General** section. In this example, the **Light Layer** property appears:

---
layout: post
background: '/img/bg2-index.jpg'
title: "A Walk through Quotebank: Modeling the Quotes Network"
subtitle: A Data Story by dada
---
## Introduction
We present our project on modeling a speaker network using Quotebank. The dataset is a corpus of 178 million quotations attributed to the speakers who uttered them, extracted from 162 million English news articles published between 2008 and 2020. We use a speaker attribute table from Wikidata to extract information on the individuals.
This project aims to explore the relationships between people quoted in the Quotebank dataset. Specifically, we construct a graph based on co-quotation of speakers in the same articles, using years from 2015 to 2020. Visualizing these relationships can give us an understanding of the networks and communities that are behind quotes, such as professional domains and fields of expertise, political orientation, and even like-mindedness.
Second, we focus on specific case studies, using our graph properties to understand the links between speakers in those scenarios.
## 2020 Interactive Quote Graph
<div id="graph1">
<style> body { margin: 0; } </style>
<script src="//unpkg.com/three"></script>
<script src="//unpkg.com/three-spritetext"></script>
<script src="//unpkg.com/3d-force-graph"></script>
<script src="//unpkg.com/dat.gui"></script>
<!--<script src="../../dist/3d-force-graph.js"></script>-->
<div id="3d-graph">
<script type="text/javascript" src="/3d-JS-Network/graph_title.js"></script>
</div>
</div>
<a href="3d-JS-Network/graph_title_final.html">Click Here for Full Screen And Interactive Data Viz</a>
## Graph Analysis
In this section, we observe different statistics of our big graph. This preliminary analysis helps us understand the properties of the graph and how best to explore it.
### General properties
Let's start with the number of nodes and edges:
- Nodes: 118824
- Edges: 374240
The first thing to notice is that the graph is very sparse:
Sparsity = $$ \frac{|E|}{|E_{max}|}$$ = 0.01%
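For reference, the sparsity (density) figure can be recomputed from the node and edge counts with a few lines of pure Python; the helper name is ours:

```python
def density(num_nodes: int, num_edges: int) -> float:
    """Density of an undirected simple graph: |E| / |E_max|, with |E_max| = n(n-1)/2."""
    max_edges = num_nodes * (num_nodes - 1) / 2
    return num_edges / max_edges

# Sanity check on a triangle (3 nodes, 3 edges): every possible edge exists.
print(density(3, 3))  # → 1.0
```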
As we want to do clustering, we take a look at the connected components.
- There are 4 connected components
- There are 3 small connected components (size 10, 13 and 13)
- There is one big connected component of 118788 speakers
It looks like we cannot rely on connected components only.
### Degree
Who are the main speakers of our graph? Are people very connected? Let's figure this out!


Most of the speakers have very low degrees, but some have very high degrees.
Indeed, the degree distribution follows a **power-law**, which is typical of real-world networks.
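The degree distribution behind these histograms can be computed directly from an edge list using only the standard library; a minimal sketch:

```python
from collections import Counter

def degree_histogram(edges):
    """Map degree -> number of nodes with that degree, from an undirected edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(deg.values())

# Toy "hub and spoke" graph: one hub linked to 4 spokes.
edges = [("hub", s) for s in ("a", "b", "c", "d")]
print(degree_histogram(edges))  # → Counter({1: 4, 4: 1})
```

On a log-log plot, a power-law degree distribution produced this way appears as a roughly straight line.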
Who are those very famous people?
- Donald Trump is linked to 2570 people
- Narendra Modi is linked to 812 people
- Emmanuel Macron is linked to 752 people
- Nancy Pelosi is linked to 733 people
- Mike Pompeo is linked to 718 people
- Boris Johnson is linked to 692 people
- Andrew Cuomo is linked to 690 people
- Benjamin Netanyahu is linked to 669 people
- António Guterres is linked to 646 people
- Justin Trudeau is linked to 620 people
The Top 10 central speakers in our graph are very famous country leaders.
We could have expected this, especially with Donald Trump being the most central.
### Clustering
Are there obvious and interpretable clusters?
We use the Louvain clustering method to check whether we can identify interpretable clusters:
- the partition results in 543 clusters
That is far too many groups to interpret by hand, so we focus on attributes instead.
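As a sketch of the clustering step — assuming `networkx` ≥ 2.8, which ships a Louvain implementation; the toy graph below stands in for the real speaker network:

```python
from networkx import Graph
from networkx.algorithms.community import louvain_communities

# Two triangles joined by a single bridge edge: Louvain should separate them.
G = Graph([("a", "b"), ("b", "c"), ("a", "c"),
           ("x", "y"), ("y", "z"), ("x", "z"),
           ("c", "x")])
communities = louvain_communities(G, seed=42)
print([sorted(c) for c in communities])
```

On the full graph this is the call that yields the 543-cluster partition reported above.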
### Homophily
Which speaker attributes could be useful to filter on? We compute the homophily with respect to *gender*, *nationality* and *political party*.
Homophily estimates how similar connected speakers are with respect to a given attribute.
Results of homophily:
- gender: 0.225
- nationality: 0.404
- party: 0.321
These results show that *nationality* is a good attribute for observing clusters. Indeed, on the 3D graph we can clearly distinguish clusters of speakers with the same nationality.
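A simple way to estimate such homophily figures is the fraction of edges whose endpoints share the attribute value — one of several definitions in the literature, sketched here on toy data with illustrative names:

```python
def homophily(edges, attr):
    """Fraction of edges whose two endpoints share the same attribute value."""
    labelled = [(u, v) for u, v in edges if u in attr and v in attr]
    same = sum(attr[u] == attr[v] for u, v in labelled)
    return same / len(labelled)

nationality = {"trump": "US", "pelosi": "US", "macron": "FR", "modi": "IN"}
edges = [("trump", "pelosi"), ("trump", "macron"),
         ("trump", "modi"), ("macron", "modi")]
print(homophily(edges, nationality))  # → 0.25
```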
<a class="btn btn-primary float-right" href="/Project_pages/index_2.html" data-toggle="tooltip" data-placement="top" title="" data-original-title="Exploring the Graph">Next <span class="d-none d-md-inline">Page</span> →</a>
# To run the game
1. `pipenv shell`
2. `pipenv install`
3. `python play.py`
---
-api-id: M:Windows.ApplicationModel.DataTransfer.Clipboard.SetHistoryItemAsContent(Windows.ApplicationModel.DataTransfer.ClipboardHistoryItem)
-api-type: winrt method
---
<!-- Method syntax.
public SetHistoryItemAsContentStatus Clipboard.SetHistoryItemAsContent(ClipboardHistoryItem item)
-->
# Windows.ApplicationModel.DataTransfer.Clipboard.SetHistoryItemAsContent
## -description
Sets an item in the clipboard history as the current content for the clipboard.
## -parameters
### -param item
The item in the clipboard history to set as the current content for the clipboard.
## -returns
The status of the operation.
## -remarks
## -see-also
## -examples
## `php:7.4-rc`
```console
$ docker pull php@sha256:aab8ed0e75e4970e5436cd7deb60d201a48db9bfefd7aa04ffb5d2352b8fcebf
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms:
- linux; amd64
- linux; arm variant v5
- linux; arm variant v7
- linux; arm64 variant v8
- linux; 386
- linux; ppc64le
### `php:7.4-rc` - linux; amd64
```console
$ docker pull php@sha256:1613e9bdc2bea5d43a770ec8e23b75f677f14fe542bacf3b989c2aaa65c21872
```
- Docker Version: 18.06.1-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **142.7 MB (142724160 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:565f592b3c70d22c77f7c0a086191dcc28cdd2150951615eb27a12a84f669683`
- Entrypoint: `["docker-php-entrypoint"]`
- Default Command: `["php","-a"]`
```dockerfile
# Wed, 14 Aug 2019 00:22:12 GMT
ADD file:330bfb91168adb4a9b1296c70209ed487d4c2705042a916d575f82b61ab16e61 in /
# Wed, 14 Aug 2019 00:22:12 GMT
CMD ["bash"]
# Wed, 14 Aug 2019 07:30:47 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 14 Aug 2019 07:30:47 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 14 Aug 2019 07:31:10 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 14 Aug 2019 07:31:10 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 14 Aug 2019 07:31:11 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 14 Aug 2019 07:31:11 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 07:31:12 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 07:31:12 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Wed, 14 Aug 2019 07:31:12 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Thu, 05 Sep 2019 22:48:10 GMT
ENV PHP_VERSION=7.4.0RC1
# Thu, 05 Sep 2019 22:48:10 GMT
ENV PHP_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz PHP_ASC_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz.asc
# Thu, 05 Sep 2019 22:48:10 GMT
ENV PHP_SHA256=9e3d158ad070968ad9d9e796a7acf88c3cfe0e0382e991e6dee05a18049d4a62 PHP_MD5=
# Thu, 05 Sep 2019 22:48:22 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Thu, 05 Sep 2019 22:48:22 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Thu, 05 Sep 2019 22:53:38 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Thu, 05 Sep 2019 22:53:38 GMT
COPY multi:287fef6856464a54cd9ef266c5fea3bd820d4cf2e2666723e9d9ddd1afc6db67 in /usr/local/bin/
# Thu, 05 Sep 2019 22:53:39 GMT
RUN docker-php-ext-enable sodium
# Thu, 05 Sep 2019 22:53:39 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Thu, 05 Sep 2019 22:53:39 GMT
CMD ["php" "-a"]
```
- Layers:
- `sha256:1ab2bdfe97783562315f98f94c0769b1897a05f7b0395ca1520ebee08666703b`
Last Modified: Wed, 14 Aug 2019 00:27:15 GMT
Size: 27.1 MB (27093851 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1448c64389e0015b7c9649074b00f2e4c90a88e7d371cbeabe12f0405c27d80e`
Last Modified: Wed, 14 Aug 2019 10:55:31 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4b8a4e62b444aa440bbcf0b26639a041758362196c7c0cd4c9e555393b606066`
Last Modified: Wed, 14 Aug 2019 10:55:57 GMT
Size: 76.7 MB (76651422 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9eb9d1e8e2415d93b036beeab40c3bf3197577324fafe2008a858f4666e17020`
Last Modified: Wed, 14 Aug 2019 10:55:30 GMT
Size: 220.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:931297e75579dca68940643b487967acc05b17b0d46c6cfd3ff6f5258ffd9778`
Last Modified: Thu, 05 Sep 2019 23:30:09 GMT
Size: 10.5 MB (10533518 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f786778c7de74927227ae8852c437f11cea813300770b18cddb26f8f84a7ca32`
Last Modified: Thu, 05 Sep 2019 23:30:08 GMT
Size: 493.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5a87537d641c07246d54cd5206122460ebeccbd2d36b52482fb170af343ca9e0`
Last Modified: Thu, 05 Sep 2019 23:30:13 GMT
Size: 28.4 MB (28441981 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0914cc0d5bad0ca9f6fbf9216c611593d101b49c088763f009923bcff5481c64`
Last Modified: Thu, 05 Sep 2019 23:30:08 GMT
Size: 2.2 KB (2203 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bfb6568287f0ba5d2935fd2e7b23499d31f4f8e476f480b8cf835cdb6c783058`
Last Modified: Thu, 05 Sep 2019 23:30:09 GMT
Size: 246.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `php:7.4-rc` - linux; arm variant v5
```console
$ docker pull php@sha256:b4f93028e199ab5b7d78200575e7c39d92850aa775ac55762ae21955beee3fd7
```
- Docker Version: 18.06.1-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **121.2 MB (121226481 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:574bf7a99e74627cd7f3a23c350f0a03e82b5916145b08f64d92e938b074a9cb`
- Entrypoint: `["docker-php-entrypoint"]`
- Default Command: `["php","-a"]`
```dockerfile
# Wed, 11 Sep 2019 22:49:43 GMT
ADD file:b03a0284df03e43beaa765dcd1e0238071159f664cb55b1b33acae3d6c8b79a2 in /
# Wed, 11 Sep 2019 22:49:44 GMT
CMD ["bash"]
# Thu, 12 Sep 2019 04:58:10 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Thu, 12 Sep 2019 04:58:11 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Thu, 12 Sep 2019 04:58:55 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Thu, 12 Sep 2019 04:58:57 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Thu, 12 Sep 2019 04:58:59 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Thu, 12 Sep 2019 04:58:59 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Thu, 12 Sep 2019 04:59:00 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Thu, 12 Sep 2019 04:59:00 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Thu, 12 Sep 2019 04:59:01 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Thu, 12 Sep 2019 04:59:01 GMT
ENV PHP_VERSION=7.4.0RC1
# Thu, 12 Sep 2019 04:59:02 GMT
ENV PHP_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz PHP_ASC_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz.asc
# Thu, 12 Sep 2019 04:59:02 GMT
ENV PHP_SHA256=9e3d158ad070968ad9d9e796a7acf88c3cfe0e0382e991e6dee05a18049d4a62 PHP_MD5=
# Thu, 12 Sep 2019 04:59:22 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Thu, 12 Sep 2019 04:59:22 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Thu, 12 Sep 2019 05:03:00 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Thu, 12 Sep 2019 05:03:03 GMT
COPY multi:287fef6856464a54cd9ef266c5fea3bd820d4cf2e2666723e9d9ddd1afc6db67 in /usr/local/bin/
# Thu, 12 Sep 2019 05:03:07 GMT
RUN docker-php-ext-enable sodium
# Thu, 12 Sep 2019 05:03:09 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Thu, 12 Sep 2019 05:03:11 GMT
CMD ["php" "-a"]
```
- Layers:
- `sha256:5b419bcef70c5ce28a517467c7c4a1f60b7ce88f75d4584ac44c4ecbb57b2987`
Last Modified: Wed, 11 Sep 2019 22:57:00 GMT
Size: 24.8 MB (24823545 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3fd71ecb4e5c80aa0e8ef81c64ee5043ce78641c6ad265c4998d1055d88a5238`
Last Modified: Thu, 12 Sep 2019 07:16:18 GMT
Size: 227.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c22bdfb0baa0736e421f57c237fd5a634422ed477d46405f2b177e844611d35f`
Last Modified: Thu, 12 Sep 2019 07:16:41 GMT
Size: 58.8 MB (58796775 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b686b70bc54c5379efc7f480d337ed02f5d22d0200445537e4001c411e90124b`
Last Modified: Thu, 12 Sep 2019 07:16:18 GMT
Size: 272.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cc956bd6000cf8cfe1b0664d475b4c585fcf73a29e111aa95afa83d516e2d152`
Last Modified: Thu, 12 Sep 2019 07:16:17 GMT
Size: 10.5 MB (10531783 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4bfc1182b155d22138b220f9a206cb5c7414b124a4b2e2a20a12723cde3e9ea7`
Last Modified: Thu, 12 Sep 2019 07:16:16 GMT
Size: 492.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2e43b63cbdf0aed89338c155a6395a8aa52ffbb855f160fea0415a8cf8d02a42`
Last Modified: Thu, 12 Sep 2019 07:16:26 GMT
Size: 27.1 MB (27070934 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f4426ed72bf0cc3b51337c7dd18a8f950eb2ede8a63c3a5eb3db6922c7ff5ef7`
Last Modified: Thu, 12 Sep 2019 07:16:16 GMT
Size: 2.2 KB (2205 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c13c99a739c4b34239459bfb62329cb09d3d4da98528cb0d8196b7b45ac40ca2`
Last Modified: Thu, 12 Sep 2019 07:16:16 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `php:7.4-rc` - linux; arm variant v7
```console
$ docker pull php@sha256:c6adc51e5a961cfa569b8aa0f1a25c3931065ab2c61473ca22d8ada0d1f99233
```
- Docker Version: 18.06.1-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **118.8 MB (118803053 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:2562b77dc28a3c21da37d986072736f667f1fae71946574c3d7383d38a3207e4`
- Entrypoint: `["docker-php-entrypoint"]`
- Default Command: `["php","-a"]`
```dockerfile
# Wed, 14 Aug 2019 01:00:08 GMT
ADD file:4b827be442647e4265278c7c35a3b38d13b5eb2eccdd246dc4ba05bbd48e8079 in /
# Wed, 14 Aug 2019 01:00:09 GMT
CMD ["bash"]
# Wed, 14 Aug 2019 13:28:20 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 14 Aug 2019 13:28:21 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 14 Aug 2019 13:28:52 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 14 Aug 2019 13:28:53 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 14 Aug 2019 13:28:54 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 14 Aug 2019 13:28:55 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 13:28:55 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 13:28:55 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Wed, 14 Aug 2019 13:28:56 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Thu, 05 Sep 2019 20:55:51 GMT
ENV PHP_VERSION=7.4.0RC1
# Thu, 05 Sep 2019 20:55:52 GMT
ENV PHP_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz PHP_ASC_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz.asc
# Thu, 05 Sep 2019 20:55:52 GMT
ENV PHP_SHA256=9e3d158ad070968ad9d9e796a7acf88c3cfe0e0382e991e6dee05a18049d4a62 PHP_MD5=
# Thu, 05 Sep 2019 20:56:04 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Thu, 05 Sep 2019 20:56:05 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Thu, 05 Sep 2019 20:58:44 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Thu, 05 Sep 2019 20:58:45 GMT
COPY multi:287fef6856464a54cd9ef266c5fea3bd820d4cf2e2666723e9d9ddd1afc6db67 in /usr/local/bin/
# Thu, 05 Sep 2019 20:58:46 GMT
RUN docker-php-ext-enable sodium
# Thu, 05 Sep 2019 20:58:47 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Thu, 05 Sep 2019 20:58:47 GMT
CMD ["php" "-a"]
```
- Layers:
- `sha256:176fe0ab331c5fafc852d1a0fdd4395348ac3d862902a33d6c5ded8ac80a8c62`
Last Modified: Wed, 14 Aug 2019 01:09:19 GMT
Size: 22.7 MB (22697922 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c67e3348ae0c98dd009ed15e21dced8ec036504ff5da0d57724a8c8a92964048`
Last Modified: Wed, 14 Aug 2019 15:18:31 GMT
Size: 228.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:84d5f29aad3b810945de2bca6203df89613245a24d283877878cf983a468970e`
Last Modified: Wed, 14 Aug 2019 15:19:08 GMT
Size: 59.5 MB (59483006 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0982a6bfaf06556c680d41cdf35c7d90a0c5c54c6374a56192a8ece90fea4e04`
Last Modified: Wed, 14 Aug 2019 15:18:31 GMT
Size: 270.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d114d7b6bf59797827210bc9cccec97a6b9b4ee50d52d03c0ad1a262e76cae4a`
Last Modified: Thu, 05 Sep 2019 21:28:19 GMT
Size: 10.5 MB (10531669 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cc0ccf1309499a6b9fbd2d7a8d572386be15cbf1f2ffdb713966d602e897bd86`
Last Modified: Thu, 05 Sep 2019 21:28:18 GMT
Size: 494.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2fdb3ffd1fcf5dd552c8d0c9e747bd843aa1ac9998b23e0d86c4eed55fa5f1e2`
Last Modified: Thu, 05 Sep 2019 21:28:25 GMT
Size: 26.1 MB (26087012 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6484721b8a514005bff62954b4b6cbe91803fa29f4a421731e1388d3099a8e25`
Last Modified: Thu, 05 Sep 2019 21:28:18 GMT
Size: 2.2 KB (2204 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7aa37aea030240fda70c72afd299b0b07da533513c88341c5257c582e31f8214`
Last Modified: Thu, 05 Sep 2019 21:28:18 GMT
Size: 248.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `php:7.4-rc` - linux; arm64 variant v8
```console
$ docker pull php@sha256:82a63f9eb866073e16f673b2f5d874a2fb5bc4d58241101016c64d0afcaf90d7
```
- Docker Version: 18.06.1-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **134.9 MB (134910592 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:efb882bfbe75eb9fece0ac06f4be6ad6c26d014163d98cca95c97da521c145ff`
- Entrypoint: `["docker-php-entrypoint"]`
- Default Command: `["php","-a"]`
```dockerfile
# Wed, 11 Sep 2019 22:40:52 GMT
ADD file:aac1f360073d532980c4162cbb87309089c7fce24c08b645c70c6289f3a527dd in /
# Wed, 11 Sep 2019 22:40:54 GMT
CMD ["bash"]
# Thu, 12 Sep 2019 01:44:53 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Thu, 12 Sep 2019 01:44:54 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Thu, 12 Sep 2019 01:45:36 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Thu, 12 Sep 2019 01:45:38 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Thu, 12 Sep 2019 01:45:40 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Thu, 12 Sep 2019 01:45:41 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Thu, 12 Sep 2019 01:45:42 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Thu, 12 Sep 2019 01:45:43 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Thu, 12 Sep 2019 01:45:44 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Thu, 12 Sep 2019 01:45:45 GMT
ENV PHP_VERSION=7.4.0RC1
# Thu, 12 Sep 2019 01:45:45 GMT
ENV PHP_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz PHP_ASC_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz.asc
# Thu, 12 Sep 2019 01:45:46 GMT
ENV PHP_SHA256=9e3d158ad070968ad9d9e796a7acf88c3cfe0e0382e991e6dee05a18049d4a62 PHP_MD5=
# Thu, 12 Sep 2019 01:46:02 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Thu, 12 Sep 2019 01:46:02 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Thu, 12 Sep 2019 01:49:20 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Thu, 12 Sep 2019 01:49:21 GMT
COPY multi:287fef6856464a54cd9ef266c5fea3bd820d4cf2e2666723e9d9ddd1afc6db67 in /usr/local/bin/
# Thu, 12 Sep 2019 01:49:24 GMT
RUN docker-php-ext-enable sodium
# Thu, 12 Sep 2019 01:49:25 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Thu, 12 Sep 2019 01:49:25 GMT
CMD ["php" "-a"]
```
- Layers:
- `sha256:0c79eb62c57d840ffe711699e4e3cf8d7d41262c39e48f745247f07d8e256c8c`
Last Modified: Wed, 11 Sep 2019 22:46:31 GMT
Size: 25.9 MB (25851538 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:92b2b9f690542dfe7ea3593e43891855912287c690f0dcd268ba30dfb10dd31b`
Last Modified: Thu, 12 Sep 2019 04:17:55 GMT
Size: 227.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:44f9b0e2d5e0be2efc67615ff1c8a9ad930b5d45429f0e4e0a801ca7fc44a8ff`
Last Modified: Thu, 12 Sep 2019 04:18:14 GMT
Size: 70.3 MB (70327020 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:794af76998ad5919f99cbfe6f92e200b622bfc12dfe27a361b9e79511825dd2f`
Last Modified: Thu, 12 Sep 2019 04:17:55 GMT
Size: 272.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3cd5b377485998724b116526baf45b08ced3f8fa75e1463d7b4f76dd10f27955`
Last Modified: Thu, 12 Sep 2019 04:17:54 GMT
Size: 10.5 MB (10532615 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:21376eea023543639a7b0b7365bdd77f2ea6fb8099dd2d152ea1dc512fcdd580`
Last Modified: Thu, 12 Sep 2019 04:17:53 GMT
Size: 494.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:66e8e723b7a3c0944bb71ee89fb50942935930e49d231fa74ebabdffc0798fe9`
Last Modified: Thu, 12 Sep 2019 04:18:01 GMT
Size: 28.2 MB (28195975 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:72a41b5f6920ec642be2816eba1264bd933a3035b52a6ffa13e35b0255984b41`
Last Modified: Thu, 12 Sep 2019 04:17:53 GMT
Size: 2.2 KB (2202 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:554565b347dba01b7c211f07fdf7caf12b5d5f563a261f7fa4da1c43ebeed39d`
Last Modified: Thu, 12 Sep 2019 04:17:53 GMT
Size: 249.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `php:7.4-rc` - linux; 386
```console
$ docker pull php@sha256:0e94dae0c4374b6fd1dea14f6d734a0fc38e4c8e4e90a13b4644ee087fdf51c9
```
- Docker Version: 18.06.1-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **148.5 MB (148497164 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:a8060cd052a820b96ad206a7f35789b9a6457bda59142d67fdd69bb490482f1f`
- Entrypoint: `["docker-php-entrypoint"]`
- Default Command: `["php","-a"]`
```dockerfile
# Wed, 14 Aug 2019 00:41:07 GMT
ADD file:88d9b9c3d81d2ca3ab3da6fd039ce0dee55eabd5a957a45b5dec463ba2f8b465 in /
# Wed, 14 Aug 2019 00:41:07 GMT
CMD ["bash"]
# Wed, 14 Aug 2019 08:30:49 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 14 Aug 2019 08:30:50 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 14 Aug 2019 08:31:23 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 14 Aug 2019 08:31:24 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 14 Aug 2019 08:31:25 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 14 Aug 2019 08:31:25 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 08:31:25 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 08:31:25 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Wed, 14 Aug 2019 08:31:26 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Thu, 05 Sep 2019 21:32:04 GMT
ENV PHP_VERSION=7.4.0RC1
# Thu, 05 Sep 2019 21:32:04 GMT
ENV PHP_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz PHP_ASC_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz.asc
# Thu, 05 Sep 2019 21:32:04 GMT
ENV PHP_SHA256=9e3d158ad070968ad9d9e796a7acf88c3cfe0e0382e991e6dee05a18049d4a62 PHP_MD5=
# Thu, 05 Sep 2019 21:32:15 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Thu, 05 Sep 2019 21:32:15 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Thu, 05 Sep 2019 21:38:36 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Thu, 05 Sep 2019 21:38:37 GMT
COPY multi:287fef6856464a54cd9ef266c5fea3bd820d4cf2e2666723e9d9ddd1afc6db67 in /usr/local/bin/
# Thu, 05 Sep 2019 21:38:37 GMT
RUN docker-php-ext-enable sodium
# Thu, 05 Sep 2019 21:38:38 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Thu, 05 Sep 2019 21:38:38 GMT
CMD ["php" "-a"]
```
- Layers:
- `sha256:99d63bb2f627c130208196264f35e28fb2c0c17deff9db3729b1d9dacd7c206c`
Last Modified: Wed, 14 Aug 2019 00:46:56 GMT
Size: 27.7 MB (27746042 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b1b6a90dce7b8856d014136b845831e667794f67d4a50454241e134036d6459e`
Last Modified: Wed, 14 Aug 2019 12:08:28 GMT
Size: 227.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9b59ffd5f962627c3ae8e2418f59122c2c081e05c6ac8477c4f78fd5f6fbd1d1`
Last Modified: Wed, 14 Aug 2019 12:08:55 GMT
Size: 81.2 MB (81197501 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:325d5c6306ed74d8a616b345aad3b8c1c5e2b6c5ca88fb653ed37b33f916a280`
Last Modified: Wed, 14 Aug 2019 12:08:28 GMT
Size: 224.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2f834de3c1a3ca9f24dccf371853cbd3b8466c1a630e885b55cec2216bd50844`
Last Modified: Thu, 05 Sep 2019 22:42:10 GMT
Size: 10.5 MB (10532788 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:37451b404b88b861dca9b9fec4519225ca56ebf3d49b1ad54616017eb58f6d81`
Last Modified: Thu, 05 Sep 2019 22:42:10 GMT
Size: 492.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:945b883fb6b1b8739e4cddcaabe7c12d5e44377405682741bcc2880ace5eb7ee`
Last Modified: Thu, 05 Sep 2019 22:42:18 GMT
Size: 29.0 MB (29017435 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9f534758a0330073675b0d89fbfa2cc437b397e6412fe60698b424ff99ec217e`
Last Modified: Thu, 05 Sep 2019 22:42:10 GMT
Size: 2.2 KB (2206 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:34be41fbbe45f149f6829e0f5d9ded72f8c7831f0e5233b13e48223cd3346ad5`
Last Modified: Thu, 05 Sep 2019 22:42:10 GMT
Size: 249.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `php:7.4-rc` - linux; ppc64le
```console
$ docker pull php@sha256:111b23ed5d9a363398b79c0f6a845cd567f88587ede3bfa25e0201b9d7053257
```
- Docker Version: 18.06.1-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **153.4 MB (153419011 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:12de404e413aec85f882dd016c8c1190aa036aaa644a53092ec3133fc2e47773`
- Entrypoint: `["docker-php-entrypoint"]`
- Default Command: `["php","-a"]`
```dockerfile
# Wed, 14 Aug 2019 00:24:26 GMT
ADD file:6b667a9d8f3925b90fe46d0b625942605276b296f812070dc4f9542e92859f9f in /
# Wed, 14 Aug 2019 00:24:29 GMT
CMD ["bash"]
# Wed, 14 Aug 2019 07:00:27 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Wed, 14 Aug 2019 07:00:29 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Wed, 14 Aug 2019 07:02:07 GMT
RUN set -eux; apt-get update; apt-get install -y --no-install-recommends $PHPIZE_DEPS ca-certificates curl xz-utils ; rm -rf /var/lib/apt/lists/*
# Wed, 14 Aug 2019 07:02:13 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Wed, 14 Aug 2019 07:02:18 GMT
RUN set -eux; mkdir -p "$PHP_INI_DIR/conf.d"; [ ! -d /var/www/html ]; mkdir -p /var/www/html; chown www-data:www-data /var/www/html; chmod 777 /var/www/html
# Wed, 14 Aug 2019 07:02:20 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 07:02:22 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Wed, 14 Aug 2019 07:02:24 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Wed, 14 Aug 2019 07:02:26 GMT
ENV GPG_KEYS=42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
# Thu, 05 Sep 2019 20:51:38 GMT
ENV PHP_VERSION=7.4.0RC1
# Thu, 05 Sep 2019 20:51:41 GMT
ENV PHP_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz PHP_ASC_URL=https://downloads.php.net/~derick/php-7.4.0RC1.tar.xz.asc
# Thu, 05 Sep 2019 20:51:45 GMT
ENV PHP_SHA256=9e3d158ad070968ad9d9e796a7acf88c3cfe0e0382e991e6dee05a18049d4a62 PHP_MD5=
# Thu, 05 Sep 2019 20:52:38 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; curl -fsSL -o php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then curl -fsSL -o php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false
# Thu, 05 Sep 2019 20:52:40 GMT
COPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in /usr/local/bin/
# Thu, 05 Sep 2019 20:56:33 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libargon2-dev libcurl4-openssl-dev libedit-dev libonig-dev libsodium-dev libsqlite3-dev libssl-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-password-argon2 --with-sodium=shared --with-curl --with-libedit --with-openssl --with-zlib --with-pear $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; find -type f -name '*.a' -delete; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cp -v php.ini-* "$PHP_INI_DIR/"; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; pecl update-channels; rm -rf /tmp/pear ~/.pearrc; php --version
# Thu, 05 Sep 2019 20:56:35 GMT
COPY multi:287fef6856464a54cd9ef266c5fea3bd820d4cf2e2666723e9d9ddd1afc6db67 in /usr/local/bin/
# Thu, 05 Sep 2019 20:56:41 GMT
RUN docker-php-ext-enable sodium
# Thu, 05 Sep 2019 20:56:44 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Thu, 05 Sep 2019 20:56:47 GMT
CMD ["php" "-a"]
```
- Layers:
- `sha256:3c6cb24c3751d75f61997a9e682a12d2e8c80d457ca2b8e1fcc2e929ad14498c`
Last Modified: Wed, 14 Aug 2019 00:31:47 GMT
Size: 30.5 MB (30515002 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b28429de354c5ef9aaac400b8f1e5b850ab92c9835666517f5d484f913a075a7`
Last Modified: Wed, 14 Aug 2019 10:35:18 GMT
Size: 226.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e798a6fa831244ebde322c1f09a031d79cbb6add76493a2f1f08ac20ce299aab`
Last Modified: Wed, 14 Aug 2019 10:37:17 GMT
Size: 82.3 MB (82261523 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1adf6a8a16a871c1c09cde6142bd10f0b159dbacd4515428759f98bc8702f7c0`
Last Modified: Wed, 14 Aug 2019 10:35:17 GMT
Size: 269.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5e0fa750e626a1302538c0b158efcb67f6e43601b1dca4984493067dc52de322`
Last Modified: Thu, 05 Sep 2019 21:35:33 GMT
Size: 10.5 MB (10533344 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:93c0232b94f6b45cea9ed45dfa860aaaabac46654372f551b79a4a689a807694`
Last Modified: Thu, 05 Sep 2019 21:35:32 GMT
Size: 495.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d4bbe2411ba511a40528d5c8ee6fe561ada34b567f303c5cd309930e5dcdf5e9`
Last Modified: Thu, 05 Sep 2019 21:35:38 GMT
Size: 30.1 MB (30105699 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:70a7705471809c343b848a36fa68fb7d6e5b16077e487fb39a8dc8a30c302bdd`
Last Modified: Thu, 05 Sep 2019 21:35:32 GMT
Size: 2.2 KB (2204 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:eb39a2d3c6fb6a08afcb46f09187e038d9549b9534dcb79c5a6bd2ca27a3e963`
Last Modified: Thu, 05 Sep 2019 21:35:32 GMT
Size: 249.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
---
title: Crypto Wallet
path: /portfolio/crypto-wallet
date: 2020-03-01T14:22:16.640Z
isImageFile: true
images: /../../images/capture-du-2020-03-09-15-11-34.png
video: 'https://www.youtube.com/embed/C5bDQDjBz6I'
link: 'https://crypto-wallet.netlify.com/'
---

How do you turn your web app idea into reality?
A concrete example of building a web app from an idea.
The idea here is to make investors' lives easier by developing a tool to track your cryptocurrency portfolio.
This full-stack application notably lets you visualize the composition, value and history of your portfolio. It is built with React.js + Easy Peasy + Firebase.

# GitApi
All URIs are relative to */*
Method | HTTP request | Description
------------- | ------------- | -------------
[**versionCommitPost**](GitApi.md#versionCommitPost) | **POST** /version/commit | create a new commit containing the current configs, with the given log message describing the changes.
[**versionCountGet**](GitApi.md#versionCountGet) | **GET** /version/count | get the count of files changed
[**versionDiffGet**](GitApi.md#versionDiffGet) | **GET** /version/diff | get the textual diff for given commit
[**versionFilesGet**](GitApi.md#versionFilesGet) | **GET** /version/files | get the files changed
[**versionInfoGet**](GitApi.md#versionInfoGet) | **GET** /version/info | Get info about versioning availability
[**versionPushPost**](GitApi.md#versionPushPost) | **POST** /version/push | push the current configs to the remote repository.
[**versionShowGet**](GitApi.md#versionShowGet) | **GET** /version/show | get the log message and textual diff for a given commit
[**versionStatusGet**](GitApi.md#versionStatusGet) | **GET** /version/status | get the working tree status
<a name="versionCommitPost"></a>
# **versionCommitPost**
> InlineResponse20047 versionCommitPost()
create a new commit containing the current configs, with the given log message describing the changes.

create a new commit containing the current configs, with the given log message describing the changes.
### Example
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
try {
InlineResponse20047 result = apiInstance.versionCommitPost();
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionCommitPost");
e.printStackTrace();
}
```
### Parameters
This endpoint does not need any parameter.
### Return type
[**InlineResponse20047**](InlineResponse20047.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
<a name="versionCountGet"></a>
# **versionCountGet**
> InlineResponse20015 versionCountGet(group)
get the count of files changed

get the count of files changed
### Example
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
String group = "group_example"; // String | Group ID
try {
InlineResponse20015 result = apiInstance.versionCountGet(group);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionCountGet");
e.printStackTrace();
}
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**group** | **String**| Group ID | [optional]
### Return type
[**InlineResponse20015**](InlineResponse20015.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
<a name="versionDiffGet"></a>
# **versionDiffGet**
> InlineResponse20015 versionDiffGet(commit, group)
get the textual diff for a given commit

get the textual diff for a given commit
### Example
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
String commit = "commit_example"; // String | Commit hash (default is HEAD)
String group = "group_example"; // String | Group ID
try {
InlineResponse20015 result = apiInstance.versionDiffGet(commit, group);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionDiffGet");
e.printStackTrace();
}
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**commit** | **String**| Commit hash (default is HEAD) | [optional]
**group** | **String**| Group ID | [optional]
### Return type
[**InlineResponse20015**](InlineResponse20015.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
<a name="versionFilesGet"></a>
# **versionFilesGet**
> InlineResponse20048 versionFilesGet(group)
get the files changed
get the files changed
### Example
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
String group = "group_example"; // String | Group ID
try {
InlineResponse20048 result = apiInstance.versionFilesGet(group);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionFilesGet");
e.printStackTrace();
}
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**group** | **String**| Group ID | [optional]
### Return type
[**InlineResponse20048**](InlineResponse20048.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
<a name="versionInfoGet"></a>
# **versionInfoGet**
> InlineResponse20049 versionInfoGet()
Get info about versioning availability
Get info about versioning availability
### Example
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
try {
InlineResponse20049 result = apiInstance.versionInfoGet();
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionInfoGet");
e.printStackTrace();
}
```
### Parameters
This endpoint does not need any parameter.
### Return type
[**InlineResponse20049**](InlineResponse20049.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
<a name="versionPushPost"></a>
# **versionPushPost**
> InlineResponse20015 versionPushPost()
push the current configs to the remote repository.
push the current configs to the remote repository.
### Example
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
try {
InlineResponse20015 result = apiInstance.versionPushPost();
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionPushPost");
e.printStackTrace();
}
```
### Parameters
This endpoint does not need any parameter.
### Return type
[**InlineResponse20015**](InlineResponse20015.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
<a name="versionShowGet"></a>
# **versionShowGet**
> InlineResponse20015 versionShowGet(commit, group)
get the log message and textual diff for given commit
get the log message and textual diff for a given commit

get the log message and textual diff for a given commit
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
String commit = "commit_example"; // String | Commit hash (default is HEAD)
String group = "group_example"; // String | Group ID
try {
InlineResponse20015 result = apiInstance.versionShowGet(commit, group);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionShowGet");
e.printStackTrace();
}
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**commit** | **String**| Commit hash (default is HEAD) | [optional]
**group** | **String**| Group ID | [optional]
### Return type
[**InlineResponse20015**](InlineResponse20015.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
<a name="versionStatusGet"></a>
# **versionStatusGet**
> InlineResponse20050 versionStatusGet(group)
get the working tree status

get the working tree status
### Example
```java
// Import classes:
//import com.cribl.openapi.client.ApiClient;
//import com.cribl.openapi.client.ApiException;
//import com.cribl.openapi.client.Configuration;
//import com.cribl.openapi.client.auth.*;
//import com.cribl.openapi.service.GitApi;
ApiClient defaultClient = Configuration.getDefaultApiClient();
GitApi apiInstance = new GitApi();
String group = "group_example"; // String | Group ID
try {
InlineResponse20050 result = apiInstance.versionStatusGet(group);
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling GitApi#versionStatusGet");
e.printStackTrace();
}
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**group** | **String**| Group ID | [optional]
### Return type
[**InlineResponse20050**](InlineResponse20050.md)
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
| 26.39951 | 178 | 0.710519 | eng_Latn | 0.376004 |
7ad48308caa711cc463a09e65e1da8eb016cdab7 | 3,148 | md | Markdown | README.md | tweakyllama/doodlejames | 11039e92fd0a10d0ec823951be4a45a070d36969 | [
"MIT"
] | null | null | null | README.md | tweakyllama/doodlejames | 11039e92fd0a10d0ec823951be4a45a070d36969 | [
"MIT"
] | null | null | null | README.md | tweakyllama/doodlejames | 11039e92fd0a10d0ec823951be4a45a070d36969 | [
"MIT"
] | null | null | null | <a href="http://phaser.io" target="_blank"><img src="http://phaser.io/images/img.png" style="width:150px;" alt="Phaser Logo"></a>
## Features
* Fast development.
* Module loading. Thanks Browserify for existing.
* Livereload. Watch code changes instantly.
* Production build.
## Quick start
Install dependencies
```bash
$ npm install
```
Start default task which starts the game
```bash
$ npm run gulp
```
## Documentation
    |-- src
        |-- assets
            |-- fonts
            |-- images
            |-- sounds
        |-- js
            |-- configurations
                |-- assets.js
                |-- game.js
            |-- states
                |-- Boot.js
                |-- Menu.js
                |-- Play.js
                |-- Preload.js
            |-- main.js
        |-- index.html
    |-- .gitignore
    |-- Gulpfile.js
    |-- LICENSE
    |-- package.json
    |-- README.md
The above project structure has a source folder that contains all the files of the game: assets, scripts, whatever you need.

At the same level it has the .gitignore file, the task runner file (Gulpfile), the license file, the dependencies file and finally the readme file.

In the assets folder inside the src directory we should put the asset files according to their type: fonts, images or sounds.

At the same level as the assets folder there is a scripts folder named js. Inside are a configuration files folder, a Phaser states folder and the game's main JavaScript entry file.

The assets configuration file must contain the asset definitions so they can be loaded in the Preload Phaser state file.

The game configuration file is a file where we should define game parameters like canvas id, game width, game height, default language and so on.

The main entry game file named main.js creates the Phaser Game instance, loads all the game states and, being the entry point, starts the Phaser Boot state, which launches the game.
```js
var game = new Phaser.Game(gameConfig.width, gameConfig.height, Phaser.AUTO, '');
game.state.add('boot', require('./states/Boot'));
game.state.add('menu', require('./states/Menu'));
game.state.add('play', require('./states/Play'));
game.state.add('preload', require('./states/Preload'));
game.state.start('boot');
```
The Gulpfile has a default task used while developing the game and a 'dist' task to create a production-ready build. The production task additionally uglifies the code.
To run default task
```bash
$ npm run gulp
```
To run dist task
```bash
$ npm run gulp dist
```
If you do not know what Browserify does, do not worry, it is simple to understand. Browserify allows us to split our code into all the files/modules that we need and call them inside these files using the node-style require().

According to the npm browserify description, it will recursively analyze all the require() calls in your app in order to build a bundle you can serve up to the browser in a single script tag.

In this case the single script tag will be main.js, which is built in the browserify gulp task.
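As an illustrative sketch (the file name and contents here are hypothetical, not taken from this repository), a state module that main.js pulls in with a call like require('./states/Play') simply assigns its state object to module.exports:

```js
// src/js/states/Play.js -- hypothetical sketch of a CommonJS Phaser state module
var Play = function () {};

Play.prototype = {
  create: function () {
    // set up sprites, input handlers, etc. when the state starts
  },
  update: function () {
    // per-frame game logic
  }
};

// Browserify resolves this export wherever the module is require()'d
module.exports = Play;
```

Browserify then inlines every module reached this way into the single bundled script that the page loads.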
## License
[MIT](LICENSE) | 35.370787 | 223 | 0.668996 | eng_Latn | 0.99184 |
7ad601a506a079ee991a94958d63f4b5181c1051 | 2,486 | md | Markdown | assets/0700-0799/0724.Find Pivot Index/README_EN.md | yanglr/LeetCodeOJ | 27dd1e4a2442b707deae7921e0118752248bef5e | [
"MIT"
] | 45 | 2021-07-25T00:45:43.000Z | 2022-03-24T05:10:43.000Z | assets/0700-0799/0724.Find Pivot Index/README_EN.md | yanglr/LeetCodeOJ | 27dd1e4a2442b707deae7921e0118752248bef5e | [
"MIT"
] | null | null | null | assets/0700-0799/0724.Find Pivot Index/README_EN.md | yanglr/LeetCodeOJ | 27dd1e4a2442b707deae7921e0118752248bef5e | [
"MIT"
] | 15 | 2021-07-25T00:40:52.000Z | 2021-12-27T06:25:31.000Z | # [724. Find Pivot Index](https://leetcode.com/problems/find-pivot-index)
## Description
<p>Given an array of integers <code>nums</code>, calculate the <strong>pivot index</strong> of this array.</p>
<p>The <strong>pivot index</strong> is the index where the sum of all the numbers <strong>strictly</strong> to the left of the index is equal to the sum of all the numbers <strong>strictly</strong> to the index's right.</p>
<p>If the index is on the left edge of the array, then the left sum is <code>0</code> because there are no elements to the left. This also applies to the right edge of the array.</p>
<p>Return <em>the <strong>leftmost pivot index</strong></em>. If no such index exists, return -1.</p>
<p> </p>
<p><strong>Example 1:</strong></p>
<pre>
<strong>Input:</strong> nums = [1,7,3,6,5,6]
<strong>Output:</strong> 3
<strong>Explanation:</strong>
The pivot index is 3.
Left sum = nums[0] + nums[1] + nums[2] = 1 + 7 + 3 = 11
Right sum = nums[4] + nums[5] = 5 + 6 = 11
</pre>
<p><strong>Example 2:</strong></p>
<pre>
<strong>Input:</strong> nums = [1,2,3]
<strong>Output:</strong> -1
<strong>Explanation:</strong>
There is no index that satisfies the conditions in the problem statement.</pre>
<p><strong>Example 3:</strong></p>
<pre>
<strong>Input:</strong> nums = [2,1,-1]
<strong>Output:</strong> 0
<strong>Explanation:</strong>
The pivot index is 0.
Left sum = 0 (no elements to the left of index 0)
Right sum = nums[1] + nums[2] = 1 + -1 = 0
</pre>
<p> </p>
<p><strong>Constraints:</strong></p>
<ul>
<li><code>1 <= nums.length <= 10<sup>4</sup></code></li>
<li><code>-1000 <= nums[i] <= 1000</code></li>
</ul>
## Solutions
<!-- tabs:start -->
### **Python3**
```python
class Solution:
def pivotIndex(self, nums: List[int]) -> int:
sums = sum(nums)
pre_sum = 0
for i, v in enumerate(nums):
            # left sum equals right sum: pre_sum == sums - v - pre_sum
            if (pre_sum << 1) == sums - v:
return i
pre_sum += v
return -1
```
### **Java**
```java
class Solution {
public int pivotIndex(int[] nums) {
int sums = 0;
for (int e : nums) {
sums += e;
}
int preSum = 0;
for (int i = 0; i < nums.length; ++i) {
// preSum == sums - nums[i] - preSum
if (preSum << 1 == sums - nums[i]) {
return i;
}
preSum += nums[i];
}
return -1;
}
}
```
<!-- tabs:end -->
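As a quick sanity check, the prefix-sum approach above can be exercised against the three examples from the problem statement (the function below simply restates the Python solution as a standalone function):

```python
def pivot_index(nums):
    # Total sum of the array; the running left sum starts at 0.
    total = sum(nums)
    pre_sum = 0
    for i, v in enumerate(nums):
        # A pivot exists when left sum == right sum, i.e.
        # pre_sum == total - v - pre_sum, i.e. 2 * pre_sum == total - v.
        if pre_sum * 2 == total - v:
            return i
        pre_sum += v
    return -1

print(pivot_index([1, 7, 3, 6, 5, 6]))  # 3
print(pivot_index([1, 2, 3]))           # -1
print(pivot_index([2, 1, -1]))          # 0
```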
| 24.372549 | 227 | 0.577233 | eng_Latn | 0.783074 |
7ad61573bac1aca431aa9599a2002beff58a9ac0 | 3,949 | md | Markdown | articles/active-directory/active-directory-reporting-getting-started.md | OpenLocalizationTestOrg/azure-docs-pr15_zh-HK | 6866dc5184e845e30c47e41406754756afaa68b2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-reporting-getting-started.md | OpenLocalizationTestOrg/azure-docs-pr15_zh-HK | 6866dc5184e845e30c47e41406754756afaa68b2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-reporting-getting-started.md | OpenLocalizationTestOrg/azure-docs-pr15_zh-HK | 6866dc5184e845e30c47e41406754756afaa68b2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-04T04:33:56.000Z | 2020-11-04T04:33:56.000Z | <properties
	pageTitle="Azure Active Directory reporting: Getting started | Microsoft Azure"
	description="Lists the various reports available in Azure Active Directory reporting"
services="active-directory"
documentationCenter=""
authors="dhanyahk"
manager="femila"
editor=""/>
<tags
ms.service="active-directory"
ms.devlang="na"
ms.topic="get-started-article"
ms.tgt_pltfrm="na"
ms.workload="identity"
ms.date="03/07/2016"
ms.author="dhanyahk"/>
# <a name="getting-started-with-azure-active-directory-reporting"></a>Getting started with Azure Active Directory reporting

## <a name="what-it-is"></a>What it is
Azure Active Directory (Azure AD) includes security, activity, and audit reports for your directory. Here is a list of the reports included:
### <a name="security-reports"></a>Security reports
- Sign-ins from unknown sources
- Sign-ins after multiple failures
- Sign-ins from multiple geographies
- Sign-ins from IP addresses with suspicious activity
- Irregular sign-in activity
- Sign-ins from possibly infected devices
- Users with anomalous sign-in activity
### <a name="activity-reports"></a>Activity reports
- Application usage: summary
- Application usage: detailed
- Application dashboard
- Account provisioning errors
- Individual user devices
- Individual user activity
- Groups activity report
- Password reset registration activity report
- Password reset activity
### <a name="audit-reports"></a>Audit reports
- Directory audit report
> [AZURE.TIP] For more documentation on Azure AD reporting, check out [View your access and usage reports](active-directory-view-access-usage-reports.md).
## <a name="how-it-works"></a>How it works
### <a name="reporting-pipeline"></a>Reporting pipeline
The reporting pipeline consists of three main steps. Every time a user signs in or an authentication takes place, the following happens:
- First, the user is authenticated (successfully or not), and the result is stored in the Azure Active Directory service databases.
- At regular intervals, all recent sign-ins are processed. At this point, our security and anomalous-activity algorithms scan all new sign-in entries for suspicious activity.
- After processing, the reports are generated, cached, and served in the Azure classic portal.
### <a name="report-generation-times"></a>Report generation times
Because of the large volume of authentications and sign-ins processed by the Azure AD platform, the most recently processed sign-ins are, on average, from one hour ago. In rare cases, it may take up to 8 hours to process the most recent sign-ins.
You can find the most recently processed sign-in by checking the help text at the top of each report.

> [AZURE.TIP] For more documentation on Azure AD reporting, check out [View your access and usage reports](active-directory-view-access-usage-reports.md).
## <a name="getting-started"></a>Getting started
### <a name="sign-into-the-azure-classic-portal"></a>Sign into the Azure classic portal
First, you need to sign in to the [Azure classic portal](https://manage.windowsazure.com) as a global or compliance administrator. You must also be a service administrator or co-administrator of the Azure subscription, or have "access to Azure" through Azure AD.
### <a name="navigate-to-reports"></a>Navigate to reports
To view the reports, navigate to the Reports tab at the top of your directory.
If this is your first time viewing the reports, you will need to agree to a dialog before you can view them. This ensures that it is acceptable for you, as an administrator in your organization, to view this data, which may be considered private information in some countries/regions.

### <a name="explore-each-report"></a>Explore each report
Navigate to each report to view the data that has been collected and the sign-ins that have been processed. You can find [a list of all the reports here](active-directory-reporting-guide.md).

### <a name="download-the-reports-as-csv"></a>Download the reports as CSV
Each report can be downloaded as a CSV (comma-separated values) file. You can use these files in Excel, PowerBI, or third-party analysis programs for further analysis of the data.
To download any report as CSV, navigate to the report and click "Download" at the bottom.

> [AZURE.TIP] For more documentation on Azure AD reporting, check out [View your access and usage reports](active-directory-view-access-usage-reports.md).
## <a name="next-steps"></a>Next steps
### <a name="customize-alerts-for-anomalous-sign-in-activity"></a>Customize alerts for anomalous sign-in activity
Navigate to your directory's "Configure" tab.
Scroll to the "Notifications" section.
Enable or disable the "Email notifications for anomalous sign-ins" setting.

### <a name="integrate-with-the-azure-ad-reporting-api"></a>Integrate with the Azure AD Reporting API
See [Getting started with the Reporting API](active-directory-reporting-api-getting-started.md).
### <a name="engage-multi-factor-authentication-on-users"></a>Engage Multi-Factor Authentication on users
Select a user in a report.
Click the "Enable MFA" button at the bottom of the screen.

> [AZURE.TIP] For more documentation on Azure AD reporting, check out [View your access and usage reports](active-directory-view-access-usage-reports.md).
## <a name="learn-more"></a>Learn more
### <a name="audit-events"></a>Audit events
Learn about which events are audited in your directory in [Azure Active Directory reporting audit events](active-directory-reporting-audit-events.md).
### <a name="api-integration"></a>API integration
See [Getting started with the Reporting API](active-directory-reporting-api-getting-started.md) and the [API reference documentation](https://msdn.microsoft.com/library/azure/mt126081.aspx).
### <a name="get-in-touch"></a>Get in touch
Email [[email protected]](mailto:[email protected]) with feedback, for help, or with any questions you may have.
> [AZURE.TIP] For more documentation on Azure AD reporting, check out [View your access and usage reports](active-directory-view-access-usage-reports.md).
| 24.993671 | 133 | 0.74196 | yue_Hant | 0.514276 |
7ad63508a69d382aa663779e041821b8a68e698b | 75 | md | Markdown | doc/index.md | mattak/Nodux | ace0c94b0237d8a05f9158bc0c2b390817520a86 | [
"MIT"
] | 21 | 2019-09-23T07:58:37.000Z | 2021-10-06T05:52:14.000Z | doc/index.md | mattak/Nodux | ace0c94b0237d8a05f9158bc0c2b390817520a86 | [
"MIT"
] | 15 | 2019-09-23T15:06:38.000Z | 2020-07-24T10:24:07.000Z | doc/index.md | mattak/Nodux | ace0c94b0237d8a05f9158bc0c2b390817520a86 | [
"MIT"
] | null | null | null | # **Nodux**
Nodux is a node-based redux framework developed on GitHub.
| 15 | 57 | 0.693333 | eng_Latn | 0.83998 |
7ad695088b968365667ba7e7948516204a1b7337 | 9,508 | md | Markdown | articles/connections/enterprise/oidc.md | Adzz/docs | e64911d86b853860a7ee24074b7a576d266d9f6c | [
"MIT"
] | null | null | null | articles/connections/enterprise/oidc.md | Adzz/docs | e64911d86b853860a7ee24074b7a576d266d9f6c | [
"MIT"
] | null | null | null | articles/connections/enterprise/oidc.md | Adzz/docs | e64911d86b853860a7ee24074b7a576d266d9f6c | [
"MIT"
] | null | null | null | ---
title: Connect Your App to OpenID Connect Identity Providers
connection: OpenID Connect
image: /media/connections/oidc.png
public: true
seo_alias: oidc
description: Learn how to connect to OpenID Connect (OIDC) Identity Providers using an enterprise connection.
crews: crew-2
toc: true
topics:
- connections
- enterprise
- oidc
contentType: how-to
useCase:
- customize-connections
- add-idp
---
# Connect to an OpenID Connect Identity Provider
::: warning
If you are using the Lock login widget with an OpenID Connect (OIDC) connection, you must use Lock version 11.16 or higher.
:::
## Prerequisites
**Before beginning:**
* [Register your Application with Auth0](/getting-started/set-up-app).
* Select an appropriate **Application Type**.
* Add an **Allowed Callback URL** of **`${account.callback}`**.
* Make sure your Application's **[Grant Types](/dashboard/guides/applications/update-grant-types)** include the appropriate flows.
## Steps
To connect your application to an OIDC Identity Provider, you must:
1. [Set up your app in the OpenID Connect Identity Provider](#set-up-your-app-in-the-openid-connect-identity-provider).
2. [Create an enterprise connection in Auth0](#create-an-enterprise-connection-in-auth0).
3. [Enable the enterprise connection for your Auth0 Application](#enable-the-enterprise-connection-for-your-auth0-application).
4. [Test the connection](#test-the-connection).
## Set up your app in the OpenID Connect Identity Provider
To allow users to log in using an OIDC Identity Provider, you must register your application with the IdP. The process of doing this varies depending on the OIDC Identity Provider, so you will need to follow your IdP's documentation to complete this task.
Generally, you will want to make sure that at some point you enter your <dfn data-key="callback">callback URL</dfn>: `https://${account.namespace}/login/callback`.
<%= include('../_find-auth0-domain-redirects.md') %>
During this process, your OIDC Identity Provider will generate a unique identifier for the registered API, usually called a **Client ID** or an **Application ID**. Make note of this value; you will need it later.
## Create an enterprise connection in Auth0
Next, you will need to create and configure a OIDC Enterprise Connection in Auth0. Make sure you have the **Application (client) ID** and the **Client secret** generated when you set up your app in the OIDC provider.
### Create an enterprise connection using the Dashboard
::: warning
To be configurable through the Auth0 Dashboard, the OpenID Connect (OIDC) Identity Provider (IdP) needs to support [OIDC Discovery](https://openid.net/specs/openid-connect-discovery-1_0.html). Otherwise, you can configure the connection using the [Management API](#configure-the-connection-using-the-management-api).
:::
1. Navigate to the [Connections > Enterprise](${manage_url}/#/connections/enterprise) page in the [Auth0 Dashboard](${manage_url}/), and click the `+` next to **OpenID Connect**.

2. Enter general information for your connection:
| Field | Description |
| ----- | ----------- |
| **Connection name** | Logical identifier for your connection; it must be unique for your tenant. Once set, this name can't be changed. |
| **Display name** (optional) | Text used to customize the login button for Universal Login. When set, the Universal Login login button reads: "Continue with {Display name}". |
| **Logo URL** (optional) | URL of image used to customize the login button for Universal Login. When set, the Universal Login login button displays the image as a 20px by 20px square. |
| **Issuer URL** | URL where Auth0 can find the **OpenID Provider Configuration Document**, which should be available in the `/.well-known/openid-configuration` endpoint. You can enter the base URL or the full URL. You will see a green checkmark if it can be found at that location, a red mark if it cannot be found, or an error message if the file is found but the required information is not present in the configuration file. |
| **Client ID** | Unique identifier for your registered application. Enter the saved value of the **Client ID** for the app you registered with the OIDC Identity Provider. |

3. Enter additional information for your connection, and click **Create**:
| Field | Description |
| ----- | ----------- |
| **Callback URL** | URL to which Auth0 redirects users after they authenticate. Ensure that this value is configured for the app you registered with the OIDC Identity Provider. |
| **Sync user profile attributes at each login** | When enabled, Auth0 automatically syncs user profile data with each user login, thereby ensuring that changes made in the connection source are automatically updated in Auth0. |
<%= include('../_find-auth0-domain-redirects.md') %>

### Create an enterprise connection using the Management API
These examples show a variety of ways to create the [connection](/connections) using Auth0's Management API. You can configure the connection either by providing a discovery (metadata) URI or by setting the OIDC URLs explicitly.
**Use Front Channel with discovery endpoint**
```har
{
"method": "POST",
"url": "https://${account.namespace}/api/v2/connections",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [{
"name": "Authorization",
"value": "Bearer MGMT_API_ACCESS_TOKEN"
}],
"queryString": [],
"postData": {
"mimeType": "application/json",
"text": "{ \"strategy\": \"oidc\", \"name\": \"CONNECTION_NAME\", \"options\": { \"type\": \"front_channel\", \"discovery_url\": \"https://IDP_DOMAIN/.well-known/openid-configuration\", \"client_id\" : \"IDP_CLIENT_ID\", \"scopes\": \"openid profile\" } }"
},
"headersSize": -1,
"bodySize": -1,
"comment": ""
}
```
**Use Back Channel with discovery endpoint**
```har
{
"method": "POST",
"url": "https://${account.namespace}/api/v2/connections",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [{
"name": "Authorization",
"value": "Bearer MGMT_API_ACCESS_TOKEN"
}],
"queryString": [],
"postData": {
"mimeType": "application/json",
"text": "{ \"strategy\": \"oidc\", \"name\": \"CONNECTION_NAME\", \"options\": { \"type\": \"back_channel\", \"discovery_url\": \"https://IDP_DOMAIN/.well-known/openid-configuration\", \"client_id\" : \"IDP_CLIENT_ID\", \"client_secret\" : \"IDP_CLIENT_SECRET\", \"scopes\": \"openid profile\" } }"
},
"headersSize": -1,
"bodySize": -1,
"comment": ""
}
```
**Use Front Channel specifying issuer settings**
```har
{
"method": "POST",
"url": "https://${account.namespace}/api/v2/connections",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [{
"name": "Authorization",
"value": "Bearer MGMT_API_ACCESS_TOKEN"
}],
"queryString": [],
"postData": {
"mimeType": "application/json",
"text": "{ \"strategy\": \"oidc\", \"name\": \"CONNECTION_NAME\", \"options\": { \"type\": \"front_channel\", \"issuer\": \"https://IDP_DOMAIN\", \"authorization_endpoint\": \"https://IDP_DOMAIN/authorize\", \"token_endpoint\": \"https://IDP_DOMAIN/oauth/token\", \"jwks_uri\": \"https://IDP_DOMAIN/.well-known/jwks.json\", \"client_id\" : \"IDP_CLIENT_ID\", \"client_secret\" : \"IDP_CLIENT_SECRET\", \"scopes\": \"openid profile\" } }"
},
"headersSize": -1,
"bodySize": -1,
"comment": ""
}
```
**Use Back Channel specifying issuer settings**
```har
{
"method": "POST",
"url": "https://${account.namespace}/api/v2/connections",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [{
"name": "Authorization",
"value": "Bearer MGMT_API_ACCESS_TOKEN"
}],
"queryString": [],
"postData": {
"mimeType": "application/json",
"text": "{ \"strategy\": \"oidc\", \"name\": \"CONNECTION_NAME\", \"options\": { \"type\": \"back_channel\", \"issuer\": \"https://IDP_DOMAIN\", \"authorization_endpoint\": \"https://IDP_DOMAIN/authorize\", \"jwks_uri\": \"https://IDP_DOMAIN/.well-known/jwks.json\", \"client_id\" : \"IDP_CLIENT_ID\", \"scopes\": \"openid profile\" } }"
},
"headersSize": -1,
"bodySize": -1,
"comment": ""
}
```
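For readability, the escaped `text` payload in the back-channel example above corresponds to the following JSON (the `IDP_DOMAIN`, `IDP_CLIENT_ID`, and `CONNECTION_NAME` values are placeholders):

```json
{
  "strategy": "oidc",
  "name": "CONNECTION_NAME",
  "options": {
    "type": "back_channel",
    "issuer": "https://IDP_DOMAIN",
    "authorization_endpoint": "https://IDP_DOMAIN/authorize",
    "jwks_uri": "https://IDP_DOMAIN/.well-known/jwks.json",
    "client_id": "IDP_CLIENT_ID",
    "scopes": "openid profile"
  }
}
```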
## Enable the enterprise connection for your Auth0 application
To use your new OIDC enterprise connection, you must first [enable the connection](/dashboard/guides/connections/enable-connections-enterprise) for your Auth0 Applications.
## Test the connection
Now you're ready to [test your connection](/dashboard/guides/connections/test-connections-enterprise).
## Manually configure Issuer metadata
If you click `Show Issuer Details` on the Issuer URL endpoint, you can see the data and adjust it if you need to.
## Federate with Auth0
The OpenID Connect enterprise connection is extremely useful when federating to another Auth0 tenant. Just enter your Auth0 tenant URL (for example, `https://<tenant>.auth0.com` in the **Issuer** field, and enter the Client ID for any application in the tenant to which you want to federate in the **Client ID** field.
## Provide Feedback
While in Beta, we'll be answering questions and receiving feedback in our [Community Section for the OIDC Connection Beta Program](https://community.auth0.com/c/auth0-beta-programs/new-oidc-connection-beta).
| 46.15534 | 440 | 0.715398 | eng_Latn | 0.918881 |
7ad6d2fef47b3ca3723ce77ae5e6d69b2d407cb3 | 7,064 | md | Markdown | _posts/2010-7-25-曼联球衣遭马来西亚穆斯林抵制.md | backup53/1984bbs | 152406c37afab79176f0d094de5ac4cb0c780730 | [
"MIT"
] | 18 | 2020-01-02T21:43:02.000Z | 2022-02-14T02:40:34.000Z | _posts/2010-7-25-曼联球衣遭马来西亚穆斯林抵制.md | wzxwj/1984bbs | 152406c37afab79176f0d094de5ac4cb0c780730 | [
"MIT"
] | 3 | 2020-01-01T16:53:59.000Z | 2020-01-05T10:14:11.000Z | _posts/2010-7-25-曼联球衣遭马来西亚穆斯林抵制.md | backup53/1984bbs | 152406c37afab79176f0d094de5ac4cb0c780730 | [
"MIT"
] | 13 | 2020-01-20T14:27:39.000Z | 2021-08-16T02:13:21.000Z | ---
layout: default
date: 2010-7-25
title: 曼联球衣遭马来西亚穆斯林抵制
categories: 雅典学院
---
# Manchester United shirts boycotted by Malaysian Muslims; Brazil and Barcelona banned as well
魯邦三世
车干大女马De忠实粉丝哟
#1 · Posted on 2010-7-25 13:55
Manchester United shirts boycotted by Malaysian Muslims; Brazil and Barcelona banned as well
Manchester United's shirts have been boycotted in Malaysia. According to the Daily Mail, Muslim clerics in Malaysia have warned their followers not to wear Manchester United shirts, on the grounds that the club's nickname and the devil figure on its crest do not conform to Islamic law.
Manchester United's traditional nickname is the "Red Devils," which is reflected in the club's crest, and the Muslim clerics of this Southeast Asian country say they cannot accept that.
A cleric from the Malaysian state of Johor, Nooh Gadot, said: "As a Muslim, there is no excuse to wear these shirts; doing so amounts to idolizing the symbols of another religion. Even if the shirt is given as a gift, we should refuse it. And when people realize this is wrong yet still buy and wear these shirts, the act is all the more sinful."
In addition, Malaysian Muslims have also been told not to wear the national shirts of Brazil, Serbia, Portugal, and Norway, as well as Barcelona's shirt, because these shirts use crosses and are therefore considered anti-Muslim.
Manchester United enjoys a broad fan base in Malaysia; the first stop of their Asian tour last summer was this Southeast Asian country, where the friendly against the Malaysian national team drew nearly 40,000 fans. (Goal.com)
---
Compiled by [Terminusbot](https://github.com/TerminusBot); for discussion, please visit [2049bbs.xyz](http://2049bbs.xyz/)
---
Phillip
Specially invited onlooker from the Roadside News Agency
#2 · Posted on 2010-7-25 14:27
Religion is still meddling in everyday life
hhbcl1414
A gossip lover + Brother Chun partisan + devoted Yizhong fan + child of Party members + democracy fighter + onlooker who doesn't know the truth. Diba politics group, QQ group 86206303
#3 · Posted on 2010-7-25 14:31
The religion has turned into a cult
张小夏
#4 · Posted on 2010-7-25 14:33
Normal — my mom won't even let me play Sigur Rós at home; she says it's the devil's music.
阿波
#5 · Posted on 2010-7-25 14:39
Does that mean Chevrolet can't sell cars in Muslim countries? And plenty of countries in Western and Northern Europe have crosses on their national flags.
我卖糕的 (this user has been deleted)
#6 · Posted on 2010-7-25 15:19
Islam simply is a cult
dacenke
Black sheep of the herd
#7 · Posted on 2010-7-25 17:44
Quote:
> Originally posted by 张小夏 on 2010-7-25 14:33 
> Normal — my mom won't even let me play Sigur Rós at home; she says it's the devil's music.
We usually refer to Sigur Rós as the "watermelon guy."
阿文强
#8 · Posted on 2010-7-25 21:42
Quote:
> Originally posted by Phillip on 2010-7-25 14:27
> 
> Religion is still meddling in everyday life
I remember you, sir, recommending fine foreign software to the rest of us and boycotting domestic goods — but now, apart from a long sigh, what else can I say?
阿文强
#9 · Posted on 2010-7-25 21:45
I don't know who did this translation. Islam has no "priests" — only imams, or ahongs, or mullahs, which are transliterations from Arabic and Persian respectively.
And that "my God" fellow over there: stop talking nonsense. You go around opposing the brain-dead chairman, yet in a certain sense you are that very thing yourself.
阿文强
#10 · Posted on 2010-7-25 21:48
"...because these shirts use crosses and are therefore considered anti-Muslim."
+++++++++++++++++++
This foreign devil is another Dongguo bluffing his way through the orchestra.
魯邦三世
车干大女马De忠实粉丝哟
#11 · Posted on 2010-7-25 22:22
Manchester United shirts suddenly become forbidden goods for Malaysian Muslims; Barça and Brazil blocked as well
2010-07-21 10:01:58 Source: NetEase Sports | 35 comments | Follow matches on mobile
http://sports.163.com/10/0721/10/6C4054D600051CCL.html

The Islamic religious council of Johor and an Islamic organization in the state of Perak have both issued statements declaring that crosses, alcohol-brand advertising, and the devil emblem on football shirts are blasphemy against God and must not be worn by Muslims.
Manchester United is the most popular team in Malaysia, and most Malaysians are Muslims, so for fans this may already be a dilemma. Last year, when the Red Devils visited Malaysia, both matches against a Malaysian all-star side drew crowds of more than 40,000.
Besides Manchester United, the shirts of Brazil, Portugal, Serbia, Barcelona, and Norway are also banned, because their crests all carry crosses. Perak's Islamic leader, Datuk Gadot, said: "There is no excuse for wearing such clothing, because as a Muslim it means you have worshipped the symbol of another religion. On this issue there is absolutely no room for maneuver; we cannot compromise for entertainment, fashion, or even sport."
Perak's mufti, Harussani, added that Muslims who wear such shirts "are walking the road to sin," because displaying the symbols of another faith on one's clothing implies that the person honors another religion above Islam.
In March this year, Manchester United signed a five-year sponsorship deal with Telekom Malaysia, becoming the most closely followed sports organization in the country. "Anyone who was part of our Far East tour last summer knows how Malaysia feels about Manchester United," United chief executive David Gill said at the time.
The Islamic ruling, however, has already drawn fierce fire from Malaysian fans. "Soon they'll change the + sign in arithmetic into x, because that symbol isn't permitted by Islam," one fan mocked online.
Malaysia has long been a moderate, reformist Muslim country, but that is now changing: in January churches were burned locally, and in February three Muslim women were sentenced to caning for adultery.
(Source: NetEase Sports; author: junior)
魯邦三世
车干大女马De忠实粉丝哟
#12 · Posted on 2010-7-25 22:28
Quote:
> Originally posted by 阿文强 on 2010-7-25 21:45 
> I don't know who did this translation. Islam has no "priests" — only imams, or ahongs, or mullahs, transliterations from Arabic and Persian respectively.
>
> And that "my God" fellow over there: stop talking nonsense. You go around opposing the brain-dead chairman, yet in a certain sense you are that very thing yourself.
A skeptical spirit deserves credit, but covering for your own side at every turn starts to wear a bit thin.
藏獒兄
The Five Heroes of Langya Mountain, as heroic as heroic gets @wang2
#13 · Posted on 2010-7-25 22:41
How it's translated hardly matters; the fact itself is enough to make the point.
阿文强
#14 · Posted on 2010-7-25 22:45
I once hammered out long essays on questions like this, and the result was: no result. Wasted effort.
阿文强
#15 · Posted on 2010-7-25 23:05
Lupin, you know very little about matters of this kind. If you look into it carefully, I can supply you with far more ammunition — practically inexhaustible.
Never mind everyday trivia. More seriously, lying is explicitly listed among Satan's filthy deeds — a sin graver than eating forbidden food.
More exasperating still, it frowns on speaking loudly; the limit is that the other person can hear you. The Quran likens such loud talk to the braying of a donkey. Uncle bin Laden is said to have always spoken softly and rather bashfully.
It does not approve of floor-dragging trousers, nor of walking the earth arrogantly.
It opposes tattoos. It opposes excessive adornment. It opposes outlandish dress. It opposes men dressing as women — this point alone would draw attacks from the fans of Mei Lanfang and Leslie Cheung. It opposes that Korean style of cosmetic surgery.
It opposes gambling in every form — this point alone would provoke the indignation of the great majority of Chinese. Our city is hosting a Dou Dizhu tournament as we speak, and card rooms are everywhere.
If a relative passes away, it opposes loud, tear-streaming, foot-stamping, chest-beating wailing — though naturally you can't drum on a basin and sing like Zhuang Zhou either.
.........................................................
....................................................
It doesn't look kindly on red clothing either. -------- You may not have noticed, but very few Muslims wear red.
You probably don't understand what the cross, alcohol branding, and images of the devil mean to Islam, or how it judges them.
阿文强
#16 · Posted on 2010-7-25 23:10
That fellow in Malaysia who said the plus sign shouldn't exist in arithmetic was talking nonsense. He doesn't understand.
Everyone follows the crowd in calling Islam a cult, but how much do you actually know? Nothing but information that doesn't even amount to scraps, and no thinking of your own. Truly, even a minor specialist like me can only laugh.
nkpoper
#17 · Posted on 2010-7-25 23:25
See? You've fallen into 阿文强's trap.
To call a religion a cult, a sound argument is all you need; you don't have to know the religion intimately... however much you learn, they can still say you don't understand it. (Whereas if you say something nice, the bar for how well you must "understand" it instantly drops.)
The key point here is: don't guess. It objects to football shirts; if that comes without an explanation, it's a cult on that basis alone. As for how it explains itself — wait for the explanation. There's no need to guess at the "why."
阿文强
#18 · Posted on 2010-7-25 23:27
Reply to #17, nkpoper's topic
Buddy upstairs, you really do spring into action at the first whiff of a topic. That move of yours is like chicken-begets-egg, egg-begets-chicken — an endless loop.
Then nobody gets to play at all.
netsnail
#19 · Posted on 2010-7-25 23:28
Someone like our classmate 阿文强, forever taking in all sorts of secular knowledge, is bound to end up schizophrenic...
nkpoper
#20 · Posted on 2010-7-25 23:47
Reply to #18, 阿文强's topic
Islam is an exclusivist faith. In this it is quite unlike Guan Gong, Mazu, or the Buddha, all of whom you may believe in at once. Believing in Guan Gong gives you no reason whatsoever to oppose Mazu. Islam is different: if you believe in Islam, you cannot believe in the Buddha, nor in atheism; within the relevant domain you may believe in nothing but Islam.
The question is therefore simple: do you believe in Islam? If not, then your position is opposition to it (unless you are a blank slate in the relevant domain, which is next to impossible).
Everyone who does not believe in Islam is, in theory, opposed to Islam. This needs no further reason, still less any knowledge of Islam. If Islam carries out further exclusivist activity — going after some football team, say — then if you are a fan of that team, you are naturally pitted against it. And so on and so forth. None of this requires any understanding of Islam.
Of course, "not required" doesn't mean "not permitted"; but "permitted" doesn't mean "required" either. If Islam is unhappy with your attitude, it ought to find reasons to persuade you.
nkpoper
#21 · Posted on 2010-7-25 23:52
Continuing from #20:
Of course, although those who don't believe in Islam are anti-Islam in theory, it doesn't follow that they must step forward to oppose it. But given an occasion, they may voice their opposition.
Remember: opposition needs no reason. That follows from the nature of Islam itself — it isn't that we set out to oppose it, but that it is exclusivist.
If you believe in the Buddha, you cannot on that account oppose Guan Gong, for the Buddha never said, "I am the only one; you may not treat Guan Gong as a god," and Guan Gong never said, "I am the only one; you may not treat the Buddha as a god." But believe in anything else at all, and you are thereby anti-Islam. As long as you can confirm that you do not believe in Islam, you may oppose it. And not believing in a given religion naturally requires no particularly specific reason.
nkpoper
#22 · Posted on 2010-7-26 00:02
Quote:
> Originally posted by 那个谁 on 2010-7-25 23:55 
> Not believing doesn't equal opposing, surely. Opposing should mean hostility.
>
> I don't believe in it; it does its thing and I do mine — that's not opposition. If its existence harms or threatens me and I need to destroy it, that is opposition.
Opposition is of course not hostility.
By analogy: if you say 1+1=3, I naturally oppose that. Even if I never express it, so long as I accept mathematics I oppose that statement in theory. But surely I have no need to be hostile toward a person who says "1+1=3"?
A closer example: if I were a teacher of evolution, I would necessarily oppose "creationism" (God created the world roughly 5,000 years ago). Perhaps I don't even know such a thing as "creationism" exists, yet in theory I still oppose it, because it contradicts what I believe.
nkpoper
#23 · Posted on 2010-7-26 00:11
Quote:
> Originally posted by 那个谁 on 2010-7-26 00:07 
> Not so: opposing a view/proposition and opposing a group are different things. Coexisting peacefully with a group can't be counted as opposition.
If you don't believe in Islam, then generally speaking you oppose Islamic doctrine. That, admittedly, is not the same as opposing the community of Muslims. We can live in peace with Muslims, but we cannot help opposing their beliefs — unless we become Muslims ourselves.
我卖糕的 (this user has been deleted)
#24 · Posted on 2010-7-26 00:32
Reply to Wenqiang:
A pity you have no sense of humor.
nomura123
http://twitter.com/nomura123
#25 · Posted on 2010-7-26 03:44
His influence must be pretty low, right?
People like that abroad love saying something outrageous to get famous.
gundamwang
王敢达
#26 · Posted on 2010-7-26 07:15
My United suffers again
空心菜
Fifty-center on Monday, pro-America on Tuesday, New Left on Wednesday, old-school rightist from Thursday, Falun on Friday, New Confucian on Saturday, electroshocked by Professor Yang on Sunday
#27 · Posted on 2010-7-26 10:16
Watching the mutual bickering
asdfgh
#28 · Posted on 2010-7-26 11:42
I hope 阿文强 quotes the original text of the Quran in every reply, to make study easier for everyone.
zerobalance
#29 · Posted on 2010-7-26 11:54
If some day a Muslim catches Manchester United's eye, let's see what he does.
7ad7365303a8f759b780dfea7c969f62e612adf5 | 691 | md | Markdown | development/embedded/nuttx/nuttx.md | ziyouchutuwenwu/dev_in | 2951433275f51b69470833657365d3ac115ff9e4 | [
"Apache-2.0"
] | null | null | null | development/embedded/nuttx/nuttx.md | ziyouchutuwenwu/dev_in | 2951433275f51b69470833657365d3ac115ff9e4 | [
"Apache-2.0"
] | null | null | null | development/embedded/nuttx/nuttx.md | ziyouchutuwenwu/dev_in | 2951433275f51b69470833657365d3ac115ff9e4 | [
"Apache-2.0"
] | null | null | null | # nuttx 教程
## 编译需要的库
```sh
sudo apt install gcc-arm-none-eabi ncurses-dev gperf flex bison libtool
```
## 源码地址
```sh
git clone https://bitbucket.org/patacongo/nuttx.git
git clone https://bitbucket.org/nuttx/tools.git
git clone https://bitbucket.org/patacongo/apps.git
```
### 编译 需要的工具和 so 库
```sh
cd tools/kconfig-frontends
autoreconf -f -i
./configure --enable-mconf --prefix=$HOME/dev/embedded/nuttx/tools_for_build
make install
```
### 编译 nuttx
#### 把工具和库添加到环境变量
复制 env.sh 到 nuttx 根目录
```sh
source ./env.sh
```
```sh
cd nuttx/tools
./configure.sh -E stm32f103-minimum/nsh
cd ..
make menuconfig
make
```
config 里面的参数对应
```sh
boards/arm/stm32/stm32f103-minimum/configs/nsh
```
| 14.102041 | 76 | 0.70767 | yue_Hant | 0.184098 |
7ad7c705c55aa3e3325dbb53e124f946f54beaf5 | 897 | md | Markdown | docs_source_files/content/worst_practices/file_downloads.ja.md | marcelodebittencourt/seleniumhq.github.io | e70294073c48b1b455ac08be291d003058d3bd72 | [
"Apache-2.0"
] | null | null | null | docs_source_files/content/worst_practices/file_downloads.ja.md | marcelodebittencourt/seleniumhq.github.io | e70294073c48b1b455ac08be291d003058d3bd72 | [
"Apache-2.0"
] | null | null | null | docs_source_files/content/worst_practices/file_downloads.ja.md | marcelodebittencourt/seleniumhq.github.io | e70294073c48b1b455ac08be291d003058d3bd72 | [
"Apache-2.0"
] | null | null | null | ---
title: "File downloads"
weight: 2
---
While it is possible to start a download by clicking a link with a browser under Selenium's control, the API does not expose download progress, making it less than ideal for testing downloaded files.
This is because downloading files is not considered an important aspect of emulating user interaction with the web platform.
Instead, find the link using Selenium (and any required cookies) and pass it to an HTTP request library such as [libcurl](//curl.haxx.se/libcurl/).
The [HtmlUnit driver](https://github.com/SeleniumHQ/htmlunit-driver) can download attachments by accessing them as input streams by implementing the [AttachmentHandler](https://htmlunit.sourceforge.io/apidocs/com/gargoylesoftware/htmlunit/attachment/AttachmentHandler.html) interface. The AttachmentHandler can the be added to the [HtmlUnit](https://htmlunit.sourceforge.io/) WebClient.
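As a sketch of the approach described above — reusing the browser session's cookies in a plain HTTP client — the helper below copies Selenium-style cookie dictionaries into a `Cookie` header on a standard-library request. The URL and cookie values are illustrative; in a real test they would come from `driver.get_cookies()` and the located link's `href`:

```python
import urllib.request

def build_download_request(url, selenium_cookies):
    """Create a urllib Request carrying Selenium-style cookies.

    selenium_cookies is a list of dicts shaped like the output of
    driver.get_cookies(), i.e. each dict has "name" and "value" keys.
    """
    # Join the cookies into a single Cookie header value.
    header = "; ".join(f"{c['name']}={c['value']}" for c in selenium_cookies)
    request = urllib.request.Request(url)
    if header:
        request.add_header("Cookie", header)
    return request

# Hypothetical values standing in for driver.get_cookies() output:
cookies = [{"name": "session", "value": "abc123"}, {"name": "csrf", "value": "xyz"}]
req = build_download_request("https://example.com/report.pdf", cookies)
print(req.get_header("Cookie"))  # session=abc123; csrf=xyz
```

Passing `req` to `urllib.request.urlopen` would then stream the file body directly, without involving the browser.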
| 49.833333 | 386 | 0.811594 | yue_Hant | 0.454239 |
7ad7f33b11778afd95e5ae35f05b2317910615d3 | 417 | md | Markdown | mocking/mockbox/verification-methods/usdreset.md | adamcameron/testbox-docs | 73d3dbc617edf53e3a0e8ce3185f726418db09ce | [
"Apache-2.0"
] | 3 | 2018-08-30T06:09:09.000Z | 2021-05-07T16:28:50.000Z | mocking/mockbox/verification-methods/usdreset.md | adamcameron/testbox-docs | 73d3dbc617edf53e3a0e8ce3185f726418db09ce | [
"Apache-2.0"
] | 10 | 2018-02-21T17:26:58.000Z | 2022-03-27T16:24:28.000Z | mocking/mockbox/verification-methods/usdreset.md | adamcameron/testbox-docs | 73d3dbc617edf53e3a0e8ce3185f726418db09ce | [
"Apache-2.0"
] | 21 | 2017-12-11T10:43:31.000Z | 2022-01-17T15:56:34.000Z | # $reset\(\)
This utility method clears out all call logging and method counters.
```javascript
void $reset()
```
```javascript
security = getMockBox().createMock("model.security").$("isValidUser", true);
security.isValidUser( mockUser );
// now clear out all call logs and test again
security.$reset();
mockUser.$property("authorized","variables",true);
security.isValidUser( mockUser );
```
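As a sketch — assuming MockBox's `$callLog()` API, which records the calls made to each mocked method — `$reset()` can be used between scenarios so that counts from one verification don't leak into the next:

```javascript
security = getMockBox().createMock("model.security").$("isValidUser", true);

// Scenario 1
security.isValidUser( mockUser );
writeDump( arrayLen( security.$callLog().isValidUser ) ); // 1 call logged

// Clear all logs and counters before the next scenario
security.$reset();

// Scenario 2 starts from a clean slate
security.isValidUser( mockUser );
writeDump( arrayLen( security.$callLog().isValidUser ) ); // 1 call logged again
```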
| 21.947368 | 87 | 0.721823 | eng_Latn | 0.866247 |
7ad81314ab3b41650c6d51157b4f1a0d7b570a96 | 1,278 | md | Markdown | docs/develop/gosdk/api/audio/post_audio.md | Joy-Wang/bot-docs | a8a9d70021fcef904fdf25bd32248feaf0a79708 | [
"MIT"
] | 44 | 2021-12-21T17:58:34.000Z | 2022-03-31T14:08:46.000Z | docs/develop/gosdk/api/audio/post_audio.md | Joy-Wang/bot-docs | a8a9d70021fcef904fdf25bd32248feaf0a79708 | [
"MIT"
] | 7 | 2021-12-20T08:17:04.000Z | 2022-03-28T09:20:19.000Z | docs/develop/gosdk/api/audio/post_audio.md | Joy-Wang/bot-docs | a8a9d70021fcef904fdf25bd32248feaf0a79708 | [
"MIT"
] | 23 | 2021-12-21T03:45:36.000Z | 2022-03-20T14:49:36.000Z | # 音频控制
## 使用示例
```go
token := token.BotToken("appid", "token")
api := botgo.NewOpenAPI(token).WithTimeout(3 * time.Second)
ctx := context.Background()
audioControl, err := api.PostAudio(ctx, channelId, &dto.AudioControl{})
if err != nil {
log.Fatalln("调用 PostAudio 接口失败, err = ", err)
}
```
## 参数说明
| 字段名 | 类型 | 描述 |
| ------------ | ------------------------------------- | -------------- |
| channelId | string | 子频道 id |
| AudioControl | [AudioControl](#AudioControl) | audio 控制参数 |
## 返回说明
字段参见 [AudioControl](#AudioControl)
# 语音对象
## AudioControl
| 字段名 | 类型 | 描述 |
| --------- | ------ | --------------------------------------------------------------------- |
| URL | string | 音频数据的 url status 为 0 时传 |
| Text | string | 状态文本(比如:简单爱-周杰伦),可选,status 为 0 时传,其他操作不传 |
| Status | STATUS | 播放状态,参考 [AudioStatus](#AudioStatus) |
### AudioStatus
| 字段名 | 值 | 描述 |
| ------ | --- | ------------ |
| START | 0 | 开始播放操作 |
| PAUSE | 1 | 暂停播放操作 |
| RESUME | 2 | 继续播放操作 |
| STOP | 3 | 停止播放操作 |
| 28.4 | 108 | 0.390454 | yue_Hant | 0.535073 |
7ad8b9e60d4c041b1b74697e743c101f5cc8d75d | 176 | md | Markdown | CHANGELOG.md | TristanCacqueray/json-to-haskell | 39114f29897a30e745a28b3d6ff0c99a731ab262 | [
"BSD-3-Clause"
] | 85 | 2020-11-03T16:08:10.000Z | 2022-03-04T16:19:41.000Z | CHANGELOG.md | TristanCacqueray/json-to-haskell | 39114f29897a30e745a28b3d6ff0c99a731ab262 | [
"BSD-3-Clause"
] | 8 | 2020-11-03T04:02:17.000Z | 2021-12-26T10:23:33.000Z | CHANGELOG.md | TristanCacqueray/json-to-haskell | 39114f29897a30e745a28b3d6ff0c99a731ab262 | [
"BSD-3-Clause"
] | 5 | 2020-11-09T20:54:30.000Z | 2021-12-20T15:50:02.000Z | # Changelog for json-to-haskell
## 0.1.1.2
- Fix including changelog in package
## 0.1.1.1
- Only look at first element of lists when determining type
## Unreleased changes
| 17.6 | 59 | 0.727273 | eng_Latn | 0.997384 |
7ad8e0af82dacb7eee769d2cdf3317e7f4f1d299 | 9,689 | md | Markdown | windows-driver-docs-pr/debugger/lm--list-loaded-modules-.md | pravb/windows-driver-docs | c952c72209d87f1ae0ebaf732bd3c0875be84e0b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2018-01-29T10:59:09.000Z | 2021-05-26T09:19:55.000Z | windows-driver-docs-pr/debugger/lm--list-loaded-modules-.md | pravb/windows-driver-docs | c952c72209d87f1ae0ebaf732bd3c0875be84e0b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/debugger/lm--list-loaded-modules-.md | pravb/windows-driver-docs | c952c72209d87f1ae0ebaf732bd3c0875be84e0b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-01-29T10:59:10.000Z | 2018-01-29T10:59:10.000Z | ---
title: lm (List Loaded Modules)
description: The lm command displays the specified loaded modules. The output includes the status and the path of the module.
ms.assetid: ee2283bd-4d3f-4e30-8b32-e286a415bb3a
keywords: ["lm (List Loaded Modules) Windows Debugging"]
ms.author: windowsdriverdev
ms.date: 05/23/2017
ms.topic: article
ms.prod: windows-hardware
ms.technology: windows-devices
topic_type:
- apiref
api_name:
- lm (List Loaded Modules)
api_type:
- NA
---
# lm (List Loaded Modules)
The **lm** command displays the specified loaded modules. The output includes the status and the path of the module.
```
lm Options [a Address] [m Pattern | M Pattern]
```
## <span id="ddk_cmd_list_loaded_modules_dbg"></span><span id="DDK_CMD_LIST_LOADED_MODULES_DBG"></span>Parameters
<span id="_______Options______"></span><span id="_______options______"></span><span id="_______OPTIONS______"></span> *Options*
Any combination of the following options:
<span id="D"></span><span id="d"></span>D
Displays output using [Debugger Markup Language](debugger-markup-language-commands.md).
<span id="o"></span><span id="O"></span>o
Displays only loaded modules.
<span id="l"></span><span id="L"></span>l
Displays only modules whose symbol information has been loaded.
<span id="v"></span><span id="V"></span>v
Causes the display to be verbose. The display includes the symbol file name, the image file name, checksum information, version information, date stamps, time stamps, and information about whether the module is managed code (CLR). This information is not displayed if the relevant headers are missing or paged out.
<span id="u"></span><span id="U"></span>u
(Kernel mode only) Displays only user-mode symbol information.
<span id="k"></span><span id="K"></span>k
(Kernel mode only) Displays only kernel-mode symbol information.
<span id="e"></span><span id="E"></span>e
Displays only modules that have a symbol problem. These symbols include modules that have no symbols and modules whose symbol status is C, T, \#, M, or Export. For more information about these notations, see [Symbol Status Abbreviations](symbol-status-abbreviations.md).
<span id="c"></span><span id="C"></span>c
Displays checksum data.
<span id="1m"></span><span id="1M"></span>1m
Reduces the output so that nothing is included except the names of the modules. This option is useful if you are using the [**.foreach**](-foreach.md) token to pipe the command output into another command's input.
<span id="sm"></span><span id="SM"></span>sm
Sorts the display by module name instead of by the start address.
In addition, you can include only one of the following options. If you do not include any of these options, the display includes the symbol file name.
<span id="i"></span><span id="I"></span>i
Displays the image file name.
<span id="f"></span><span id="F"></span>f
Displays the full image path. (This path always matches the path that is displayed in the initial load notification, unless you issued a [**.reload -s**](-reload--reload-module-.md) command.) When you use f, symbol type information is not displayed.
<span id="n"></span><span id="N"></span>n
Displays the image name. When you use n, symbol type information is not displayed.
<span id="p"></span><span id="P"></span>p
Displays the mapped image name. When you use p, symbol type information is not displayed.
<span id="t"></span><span id="T"></span>t
Displays the file time stamps. When you use t, symbol type information is not displayed.
<span id="_______a_______Address______"></span><span id="_______a_______address______"></span><span id="_______A_______ADDRESS______"></span> a *Address*
Specifies an address that is contained in this module. Only the module that contains this address is displayed. If Address contains an expression, it must be enclosed in parentheses.
<span id="_______m_______Pattern______"></span><span id="_______m_______pattern______"></span><span id="_______M_______PATTERN______"></span> m *Pattern*
Specifies a pattern that the module name must match. Pattern can contain a variety of wildcard characters and specifiers. For more information about the syntax of this information, see [String Wildcard Syntax](string-wildcard-syntax.md).
**Note** In most cases, the module name is the file name without the file name extension. For example, if you want to display information about the Flpydisk.sys driver, use the lm mflpydisk command, not lm mflpydisk.sys. In some cases, the module name differs significantly from the file name.
<span id="_______M_______Pattern______"></span><span id="_______m_______pattern______"></span><span id="_______M_______PATTERN______"></span> M *Pattern*
Specifies a pattern that the image path must match. Pattern can contain a variety of wildcard characters and specifiers. For more information about the syntax of this information, see [String Wildcard Syntax](string-wildcard-syntax.md).
### <span id="Environment"></span><span id="environment"></span><span id="ENVIRONMENT"></span>Environment
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td align="left"><p>Modes</p></td>
<td align="left"><p>User mode, kernel mode</p></td>
</tr>
<tr class="even">
<td align="left"><p>Targets</p></td>
<td align="left"><p>Live, crash dump</p></td>
</tr>
<tr class="odd">
<td align="left"><p>Platforms</p></td>
<td align="left"><p>All</p></td>
</tr>
</tbody>
</table>
Remarks
-------
The **lm** command lists all of the modules and the status of symbols for each module.
Microsoft Windows Server 2003 and later versions of Windows maintain an unloaded module list for user-mode processes. When you are debugging a user-mode process or dump file, the **lm** command also shows these unloaded modules.
This command shows several columns or fields, each with a different title. Some of these titles have specific meanings:
- *module name* is typically the file name without the file name extension. In some cases, the module name differs significantly from the file name.
- The symbol type immediately follows the module name. This column is not labeled. For more information about the various status values, see [Symbol Status Abbreviations](symbol-status-abbreviations.md). If you have loaded symbols, the symbol file name follows this column.
- The first address in the module is shown as start. The first address after the end of the module is shown as end. For example, if start is "faab4000" and end is "faab8000", the module extends from 0xFAAB4000 to 0xFAAB7FFF, inclusive.
- **lmv** only: The image path column shows the name of the executable file, including the file name extension. Typically, the full path is included in user mode but not in kernel mode.
- **lmv** only: The loaded symbol image file value is the same as the image name, unless Microsoft CodeView symbols are present.
- **lmv** only: The mapped memory image file value is typically not used. If the debugger is mapping an image file (for example, during minidump debugging), this value is the name of the mapped image.
The following code example shows the **lm** command with a Windows Server 2003 target computer. This example includes the m and s\* options, so only modules that begin with "s" are displayed.
```
kd> lm m s*
start end module name
f9f73000 f9f7fd80 sysaudio (deferred)
fa04b000 fa09b400 srv (deferred)
faab7000 faac8500 sr (deferred)
facac000 facbae00 serial (deferred)
fb008000 fb00ba80 serenum e:\mysymbols\SereEnum.pdb\.......
fb24f000 fb250000 swenum (deferred)
Unloaded modules:
f9f53000 f9f61000 swmidi.sys
fb0ae000 fb0b0000 splitter.sys
fb040000 fb043000 Sfloppy.SYS
```
Examples
--------
The following two examples show the **lm** command once without any options and once with the sm option. Compare the sort order in the two examples.
Example 1:
```
0:000> lm
start end module name
01000000 0100d000 stst (deferred)
77c10000 77c68000 msvcrt (deferred)
77dd0000 77e6b000 ADVAPI32 (deferred)
77e70000 77f01000 RPCRT4 (deferred)
7c800000 7c8f4000 kernel32 (deferred)
7c900000 7c9b0000 ntdll (private pdb symbols) c:\db20sym\ntdll.pdb
```
Example 2:
```
0:000> lmsm
start end module name
77dd0000 77e6b000 ADVAPI32 (deferred)
7c800000 7c8f4000 kernel32 (deferred)
77c10000 77c68000 msvcrt (deferred)
7c900000 7c9b0000 ntdll (private pdb symbols) c:\db20sym\ntdll.pdb
77e70000 77f01000 RPCRT4 (deferred)
01000000 0100d000 stst (deferred)
```
# Python Scripts
A collection of DIRSIG related python scripts.
### lidarbin
Reads a DIRSIG bin (raw Lidar) file into python.
See http://dirsig.org/docs/new/bin.html for bin file specifications.
### parallel
A python wrapper for running multiple simulation files in parallel.
It also provides some basic tools for creating the dirsig calls.
### odb2glist
A python function for converting a DIRSIG odb file into the newer DIRSIG glist file.
# References:
http://dirsig.org
<h1 align="center">
<img alt="Gympoint" title="Gympoint" src=".github/logo.png" width="200px" />
</h1>
<h3 align="center">
Challenge 2: Gympoint, the beginning
</h3>
<blockquote align="center">"Don't wait to plant; just have the patience to harvest"!</blockquote>
<p align="center">
<img alt="GitHub language count" src="https://img.shields.io/github/languages/count/rocketseat/bootcamp-gostack-desafio-02?color=%2304D361">
<a href="https://rocketseat.com.br">
<img alt="Made by Rocketseat" src="https://img.shields.io/badge/made%20by-Rocketseat-%2304D361">
</a>
<img alt="License" src="https://img.shields.io/badge/license-MIT-%2304D361">
<a href="https://github.com/Rocketseat/bootcamp-gostack-desafio-02/stargazers">
<img alt="Stargazers" src="https://img.shields.io/github/stars/rocketseat/bootcamp-gostack-desafio-02?style=social">
</a>
</p>
<p align="center">
<a href="#rocket-sobre-o-desafio">About the challenge</a> |
<a href="#-entrega">Submission</a> |
<a href="#memo-licença">License</a>
</p>
## :rocket: About the challenge
The application we will now start developing is a gym manager app, **Gympoint**.
In this first challenge we will build some of the basic functionality covered in the classes so far. This project will be developed step by step until the end of your journey, when you will have a complete application spanning back end, front end, and mobile, which will be used for the **bootcamp certification** — so let's get to the code!
### A bit about the tools
You must create the application from scratch using [Express](https://expressjs.com/), and you will also need to configure the following tools:
- Sucrase + Nodemon;
- ESLint + Prettier + EditorConfig;
- Sequelize (use PostgreSQL or MySQL);
### Functionality
The features you must add to your application are described below.
#### 1. Authentication
Allow a user to authenticate in your application using an e-mail address and a password.
Create an administrator user using Sequelize's [seeds feature](https://sequelize.org/master/manual/migrations.html#creating-first-seed); it lets us create database records in an automated way.
To create a seed, run:
```js
yarn sequelize seed:generate --name admin-user
```
In the file generated in the `src/database/seeds` folder, add the code that creates an administrator user:
```js
const bcrypt = require("bcryptjs");
module.exports = {
up: QueryInterface => {
return QueryInterface.bulkInsert(
"users",
[
{
name: "Administrador",
email: "[email protected]",
password_hash: bcrypt.hashSync("123456", 8),
created_at: new Date(),
updated_at: new Date()
}
],
{}
);
},
down: () => {}
};
```
Now run:
```js
yarn sequelize db:seed:all
```
You now have a user in your database; use this user for every login from here on.
- Authentication must be done using JWT.
- Validate the input data;
#### 2. Student registration
Allow students to be maintained (created/updated) in the application, with name, email, age, weight, and height.
Use a new database table named `students`.
Student registration can only be performed by users authenticated in the application.
## 📅 Submission
This challenge **does not need to be submitted** and will not be graded. Moreover, the source code **is not available** because it is part of the **final challenge**, which will be graded for the bootcamp **certification**. After finishing the challenge, adding this code to your GitHub is a good way to demonstrate your skills for future opportunities.
## :memo: License
This project is under the MIT license. See the [LICENSE](LICENSE.md) file for details.
---
Made with ♥ by Rocketseat :wave: [Join our community!](https://discordapp.com/invite/gCRAFhc)
---
title: Export and save code maps
ms.date: 05/16/2018
ms.topic: conceptual
author: gewarren
ms.author: gewarren
manager: douge
ms.prod: visual-studio-dev15
ms.technology: vs-ide-modeling
ms.workload:
- multiple
ms.openlocfilehash: abfe8d6160d023a99e9a49480baada9acb0c8243
ms.sourcegitcommit: 209c2c068ff0975994ed892b62aa9b834a7f6077
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/17/2018
ms.locfileid: "34268382"
---
# <a name="share-code-maps"></a>Share code maps
Code maps can be saved as part of a Visual Studio project, as an image, or as an XPS file.
## <a name="share-a-code-map-with-other-visual-studio-users"></a>Share a code map with other Visual Studio users
To save the map, use the **File** menu.
- or -
To save the map as part of a specific project, on the map toolbar choose **Share** > **Move \<CodeMapName> .dgml into**, and then select the project in which to save the map.

Visual Studio saves the map as a *.dgml* file that you can share with other users of Visual Studio Enterprise and Visual Studio Professional.
> [!NOTE]
> Before you share a map with users of Visual Studio Professional, make sure to expand any groups, show hidden nodes and cross-group links, and retrieve any deleted nodes that you want others to see. Otherwise, other users will not be able to see those items.
>
> When you save a map that is in a modeling project, or a map that was copied from a modeling project to another location, you might get the following error:
>
> "Cannot save *fileName* outside the project directory. Linked items are not supported."
>
> Visual Studio shows the error, but it does create the saved version. To avoid the error, create the map outside the modeling project; you can then save it to the location you want. Simply copying the file to another location in the solution and then trying to save it will still fail.
## <a name="export-a-code-map-as-an-image"></a>Export a code map as an image
When you export a code map as an image, you can copy it into other applications such as Microsoft Word or PowerPoint.
1. On the code map toolbar, choose **Share** > **Email as Image** or **Copy Image**.
2. Paste the image into the other application.
## <a name="export-the-map-as-an-xps-file"></a>Export the map as an XPS file
When you export a code map as an XPS file, you can view it in XML or XAML viewers such as Internet Explorer.
1. On the code map toolbar, choose **Share** > **Email as Portable XPS** or **Save as Portable XPS**.
2. Browse to the location where you want to save the file.
3. Name the code map. Make sure that the **Save as type** box is set to **XPS file (\*.xps)**. Choose **Save**.
## <a name="see-also"></a>See also
- [Map dependencies across your solutions](../modeling/map-dependencies-across-your-solutions.md)
7ada76fdaad9f71a55720c4335c4fcb3a8a1caa5 | 1,940 | md | Markdown | packages/terra-time-input/README.md | shinepd/terra-core | 5f5aaf24f86617031647d21defb4fbb6b53e73a0 | [
"Apache-2.0"
] | 1 | 2018-05-06T15:29:10.000Z | 2018-05-06T15:29:10.000Z | packages/terra-time-input/README.md | shinepd/terra-core | 5f5aaf24f86617031647d21defb4fbb6b53e73a0 | [
"Apache-2.0"
] | null | null | null | packages/terra-time-input/README.md | shinepd/terra-core | 5f5aaf24f86617031647d21defb4fbb6b53e73a0 | [
"Apache-2.0"
] | null | null | null | # Terra Time Input
[](https://www.npmjs.org/package/terra-time-input)
[](https://travis-ci.org/cerner/terra-core)
The terra-time-input component is a controlled input component for entering time. It is a controlled component because it manages the state of the value in the input. Because this is a controlled input component, it cannot accept the defaultValue prop as it always uses the value prop. React does not allow having both the defaultValue and value props.
The currently supported time format is the 24-hour format (hh:mm). The time input enforces the entry that masks to the format. The hour input only accepts values between 00 and 23 and the minute input only accepts values between 00 and 59. For example, a time of 25:65 cannot be entered. A 0 will automatically be prepended to the hour if the entered hour is greater than 2. Likewise, a 0 will automatically be prepended to the minute if the entered minute is greater than 5.
- [Getting Started](#getting-started)
- [Documentation](https://github.com/cerner/terra-core/tree/master/packages/terra-time-input/docs)
- [LICENSE](#license)
## Getting Started
- Install from [npmjs](https://www.npmjs.com): `npm install terra-time-input`
## LICENSE
Copyright 2017 Cerner Innovation, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| 71.851852 | 475 | 0.779381 | eng_Latn | 0.990832 |
7adab49044227cf66e6c0ebfda1c8118770d3e1b | 1,090 | md | Markdown | README.md | ytiurin/bindbindjs | eeb4442011101e3bc0abcf0e13395b84cdb97451 | [
"MIT"
] | 1 | 2020-03-18T21:48:28.000Z | 2020-03-18T21:48:28.000Z | README.md | ytiurin/bindbindjs | eeb4442011101e3bc0abcf0e13395b84cdb97451 | [
"MIT"
] | null | null | null | README.md | ytiurin/bindbindjs | eeb4442011101e3bc0abcf0e13395b84cdb97451 | [
"MIT"
] | null | null | null | bindbind.js
========
Data binding made simple.
##Usage
Declare binding anchors inside your HTML code
```html
<table>
<tr bb-writers:name="Unknown" bb-writers:uri="#nolink">
<td><a href="#nolink">Unknown</a></td>
</tr>
</table>
```
Define view model and bind it to the DOM
```javascript
var myViewModel = {writers:[]};
var _o = new bindbind(myViewModel);
// _o <- this is an observing proxy object,
// it holds setters and getters of your
// object properties and notifies other
// objects about it's changes
```
Update view model using observing proxy
```javascript
_o(myViewModel.writers).push({
name:'Joseph Conrad',
uri:'https://en.wikipedia.org/wiki/Joseph_Conrad'});
_o(myViewModel.writers).push({
name:'James Joyce',
uri:'https://en.wikipedia.org/wiki/James_Joyce'});
```
Resulting HTML
```html
<table>
<tr>
<td><a href="https://en.wikipedia.org/wiki/Joseph_Conrad">Joseph Conrad</a></td>
</tr>
<tr>
<td><a href="https://en.wikipedia.org/wiki/James_Joyce">James Joyce</a></td>
</tr>
</table>
```
##Note
This is still in beta.
| 20.961538 | 84 | 0.66422 | eng_Latn | 0.37108 |
7adafd75eb7fbd2a50a2c69c8434f5b739b48240 | 11,848 | md | Markdown | articles/2013-07-30_420.md | neuecc/Blog2 | 0e358ebe719940bd2c4785a686908e45da2cd00c | [
"MIT"
] | 12 | 2021-11-20T17:36:01.000Z | 2022-03-31T03:18:55.000Z | articles/2013-07-30_420.md | neuecc/Blog2 | 0e358ebe719940bd2c4785a686908e45da2cd00c | [
"MIT"
] | 5 | 2021-11-22T08:37:00.000Z | 2021-11-22T23:38:01.000Z | articles/2013-07-30_420.md | neuecc/Blog2 | 0e358ebe719940bd2c4785a686908e45da2cd00c | [
"MIT"
] | 4 | 2021-12-21T14:39:08.000Z | 2021-12-29T09:45:49.000Z | # Http, SQL, Redisのロギングと分析・可視化について
改善は計測から。何がどれだけの回数通信されているか、どれだけ時間がかかっているのか、というのは言うまでもなく重要な情報です。障害対策でも大事ですしね。が、じゃあどうやって取るの、というとパッとでてくるでしょうか?そして、それ、実際に取っていますか?存外、困った話なのですねー。TraceをONにすると内部情報が沢山出てきますが、それはそれで情報過多すぎるし、欲しいのはそれじゃないんだよ、みたいな。
[Grani](http://grani.jp/)←「謎社」で検索一位取ったので、ちょっと英語表記の検索ランキングをあげようとしている――では自前で中間を乗っ取ってやる形で統一していて、使用している通信周り、Http, RDBMS, Redisは全てログ取りして分析可能な状態にしています。
HTTP
---
HttpClient(HttpClientについては[HttpClient詳解](http://www.slideshare.net/neuecc/httpclient)を読んでね)には、DelegatingHandlerが用意されているので、その前後でStopwatchを動かしてやるだけで済みます。
```csharp
public class TraceHandler : DelegatingHandler
{
static readonly Logger httpLogger = NLog.LogManager.GetLogger("Http");
public TraceHandler()
: base(new HttpClientHandler())
{
}
public TraceHandler(HttpMessageHandler innerHandler)
: base(innerHandler)
{
}
protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
var sw = Stopwatch.StartNew();
// SendAsyncの前後を挟むだけ
var result = await base.SendAsync(request, cancellationToken);
sw.Stop();
httpLogger.Trace(Newtonsoft.Json.JsonConvert.SerializeObject(new
{
date = DateTime.Now,
command = request.Method,
key = request.RequestUri,
ms = sw.ElapsedMilliseconds
}, Newtonsoft.Json.Formatting.None));
return result;
}
}
```
```csharp
// 使う時はこんな感じにコンストラクタへ突っ込む
var client = new HttpClient(new TraceHandler());
// {"date":"2013-07-30T21:29:03.2314858+09:00","command":{"Method":"GET"},"key":"http://www.google.co.jp/","ms":129}
client.GetAsync("http://google.co.jp/").Wait();
```
なお、StreamをReadする時間は含まれていないので、あくまで向こうが反応を返した速度だけの記録になりますが、それでも十分でしょう。
Loggerは別にConsole.WriteLineでもTraceでも何でもいいのですが、弊社では基本的に[NLog](https://github.com/nlog/NLog/)を使っています。フォーマットは、Http, Sql, Redisと統一するためにdate, command, key, msにしていますが、この辺もお好みで。
なお、DelegatingHandlerは連鎖して多段に組み合わせることが可能です。実際[AsyncOAuth](http://neue.cc/2013/02/27_398.html)と合わせて使うと
```csharp
var client = new HttpClient(
new TraceHandler(
new OAuthMessageHandler("key", "secret")));
```
といった感じになります。AsyncOAuthはHttpClientの拡張性がそのまま活かせるのが強い。
SQL
---
全てのデータベース通信は最終的にはADO.NETのDbCommandを通ります、というわけで、そこをフックしてしまえばいいのです。というのが[MiniProfiler](http://miniprofiler.com/)で、以下のように使います。
```csharp
var conn = new ProfiledDbConnection(new SqlConnection("connectionString"), MiniProfiler.Current);
```
MiniProfilerはASP.NET MVCでの開発に超絶必須な拡張なわけで、当然、弊社でも使っています。さて、これはこれでいいのですけれど、MiniProfiler.Currentは割とヘヴィなので、そのまま本番に投入するわけもいかずで、単純にトレースするだけのがあるといいんだよねー。なので、ここは、MiniProfiler.Current = IDbProfilerを作りましょう。
なお、DbCommandをフックするProfiledDbConnectionに関してはそのまま使わせてもらいます。ただたんに移譲してるだけなんですが、DbCommandやDbTransactionや、とか、関連するもの全てを作って回らなければならなくて、自作するのカッタルイですから。ありものがあるならありものを使おう。ちなみに、MiniProfilerにはSimpleProfiledConnectionという、もっとシンプルな、本当に本当に移譲しただけのものもあるのですけれど、これはIDbConnectionがベースになってるので実質使えません。ProfiledDbConnectionのベースはDbConnection。IDbConnectionとDbConnectionの差異はかなり大きいので(*AsyncもDb...のほうだし)、実用的にはDbConnectionが基底と考えてしまっていいかな。
```csharp
public class TraceDbProfiler : IDbProfiler
{
static readonly Logger sqlLogger = NLog.LogManager.GetLogger("Sql");
public bool IsActive
{
get { return true; }
}
public void OnError(System.Data.IDbCommand profiledDbCommand, ExecuteType executeType, System.Exception exception)
{
// 何も記録しない
}
// 大事なのは↓の3つ
Stopwatch stopwatch;
string commandText;
// コマンドが開始された時に呼ばれる(ExecuteReaderとかExecuteNonQueryとか)
public void ExecuteStart(System.Data.IDbCommand profiledDbCommand, ExecuteType executeType)
{
stopwatch = Stopwatch.StartNew();
}
// コマンドが完了された時に呼ばれる
public void ExecuteFinish(System.Data.IDbCommand profiledDbCommand, ExecuteType executeType, System.Data.Common.DbDataReader reader)
{
commandText = profiledDbCommand.CommandText;
if (executeType != ExecuteType.Reader)
{
stopwatch.Stop();
sqlLogger.Trace(Newtonsoft.Json.JsonConvert.SerializeObject(new
{
date = DateTime.Now,
command = executeType,
key = commandText,
ms = stopwatch.ElapsedMilliseconds
}, Newtonsoft.Json.Formatting.None));
}
}
// Readerが完了した時に呼ばれる
public void ReaderFinish(System.Data.IDataReader reader)
{
stopwatch.Stop();
sqlLogger.Trace(Newtonsoft.Json.JsonConvert.SerializeObject(new
{
date = DateTime.Now,
command = ExecuteType.Reader,
key = commandText,
ms = stopwatch.ElapsedMilliseconds
}, Newtonsoft.Json.Formatting.None));
}
}
```
これで、
```csharp
{"date":"2013-07-15T18:24:17.4465207+09:00","command":"Reader","key":"select * from hogemoge where id = @id","ms":6}
```
のようなデータが取れます。パラメータの値も展開したい!とかいう場合は自由にcommandのとこから引っ張れば良いでしょう。更に、MiniProfiler.Currentと共存したいような場合は、合成するIDbProfilerを用意すればなんとかなる。Time的には若干ずれますが、そこまで問題でもないかしらん。
```csharp
public class CompositeDbProfiler : IDbProfiler
{
readonly IDbProfiler[] profilers;
public CompositeDbProfiler(params IDbProfiler[] dbProfilers)
{
this.profilers = dbProfilers;
}
public void ExecuteFinish(IDbCommand profiledDbCommand, ExecuteType executeType, DbDataReader reader)
{
foreach (var item in profilers)
{
if (item != null && item.IsActive)
{
item.ExecuteFinish(profiledDbCommand, executeType, reader);
}
}
}
public void ExecuteStart(IDbCommand profiledDbCommand, ExecuteType executeType)
{
foreach (var item in profilers)
{
if (item != null && item.IsActive)
{
item.ExecuteStart(profiledDbCommand, executeType);
}
}
}
public bool IsActive
{
get
{
return true;
}
}
public void OnError(IDbCommand profiledDbCommand, ExecuteType executeType, Exception exception)
{
foreach (var item in profilers)
{
if (item != null && item.IsActive)
{
item.OnError(profiledDbCommand, executeType, exception);
}
}
}
public void ReaderFinish(IDataReader reader)
{
foreach (var item in profilers)
{
if (item != null && item.IsActive)
{
item.ReaderFinish(reader);
}
}
}
}
```
といったものを用意しておけば、
```csharp
var profiler = new CompositeDbProfiler(
StackExchange.Profiling.MiniProfiler.Current,
new TraceDbProfiler());
var conn = new ProfiledDbConnection(new SqlConnection("connectionString"), profiler);
```
と、書けます。
SumoLogicによる分析
---
データ取るのはいいんだけど、それどーすんのー?って話なわけですが、以前に[ASP.NETでの定期的なモニタリング手法](http://neue.cc/2013/07/20_416.html)に少し出しましたけれど、弊社では[Sumo Logic](http://www.sumologic.com/)を利用しています。例えば、SQLで採取したログに以下のようクエリが発行できます。
<p class="noindent">
<img src="http://neue.cc/wp-content/uploads/2013/07/sql_sumo.jpg" />
</p>
これは10ミリ秒よりかかったDELETE文を集計、ですね。Sumoは結構柔軟なクエリで、ログのパースもできるんですが、最初からJSONで吐き出しておけばjsonコマンドだけでパースできるので非常に楽ちん。で、パース後は10msより上なら ms > 10 といった形でクエリ書けます。
問題があった時の分析に使ってもいいし、別途グラフ化も可能(棒でも円でも色々)されるので、幾つか作成してダッシュボードに置いてもいいし、閾値を設定してアラートメールを飛ばしてもいい。slow_logも良いし当然併用しますが、それとは別に持っておくと、柔軟に処理できて素敵かと思われます。
Redis
---
弊社ではキャッシュ層もMemcachedではなく、全てRedisを用いています。Redisに関しては、[C#のRedisライブラリ「BookSleeve」の利用法](http://www.buildinsider.net/small/rediscshap/01)を読んでもらいたいのですが、ともあれ、[BookSleeve](https://code.google.com/p/booksleeve/)と、その上に被せているお手製ライブラリの[CloudStructures](https://github.com/neuecc/CloudStructures)を使用しています。
実質的に開発者が触るのはCloudStructuresだけです。というわけで、CloudStructuresに用意してあるモニター用のものを使いましょう。というかそのために用意しました。まず、ICommandTracerを実装します。
```csharp
public class RedisProfiler : ICommandTracer
{
static readonly Logger redisLogger = NLog.LogManager.GetLogger("Redis");
Stopwatch stopwatch;
string command;
string key;
public void CommandStart(string command, string key)
{
this.command = command;
this.key = key;
stopwatch = Stopwatch.StartNew();
}
public void CommandFinish()
{
stopwatch.Stop();
redisLogger.Trace(Newtonsoft.Json.JsonConvert.SerializeObject(new
{
date = DateTime.Now,
command = command,
key = key,
ms = stopwatch.ElapsedMilliseconds
}, Newtonsoft.Json.Formatting.None));
// NewRelic使うなら以下のも。後で解説します。
var ms = (long)System.Math.Round(stopwatch.Elapsed.TotalMilliseconds);
NewRelic.Api.Agent.NewRelic.RecordResponseTimeMetric("Custom/Redis", ms);
}
}
```
何らかのRedisへの通信が走る際にCommandStartとCommandFinishが呼ばれるようになってます。そして、RedisSettingsに渡してあげれば
```csharp
// tracerFactoryにFuncを渡すか、.configに書くかのどちらかで指定できます
var settings = new RedisSettings("127.0.0.1", tracerFactory: () => new RedisProfiler());
// {"date":"2013-07-30T22:41:34.2669518+09:00","command":"RedisString.TryGet","key":"hogekey","ms":18}
var value = await new RedisString<string>(settings, "hogekey").GetValueOrDefault();
```
みたいになります。
CloudStructuresは、既に実アプリケーションに投下していて、凄まじい数のメッセージを捌いているので、割と安心して使っていいと思いますですよ。ServiceStack.Redisはショッパイけど、BookSleeveはプリミティブすぎて辛ぽよ、な方々にフィットするはずです。実際、C# 5.0と合わせた際のBookSleeveの破壊力は凄まじいので、是非試してみて欲しいですね。
New Relicによるグラフ化
---
Sumo Logicはいいんですけど、しかし、もう少し身近なところにも観測データを置いておきたい。そして見やすく。弊社ではモニタリングに[New Relic](http://newrelic.com/)を採用していますが、そこに、そもそもSQLやHttpのカジュアルな監視は置いてあるんですね。なので、Redis情報も統合してあげればいい、というのが↑のNewRelicのAPIを叩いているものです。ただたんにNuGetからNewRelicのライブラリを持ってきて呼ぶだけの簡単さ。それだけで、以下の様なグラフが!
<p class="noindent">
<img src="http://neue.cc/wp-content/uploads/2013/07/redis_callcount.jpg" />
</p>
これはCall Countですが、他にAverageのResponse Timeなどもグラフ化してカスタムダッシュボードに置いています。
線が6本ありますが、これは用途によってRedisの台を分けているからです。例えばRedis.Cache, Redis.Session、のように。NewRelicのAPIを叩く際に、Custom/Redis/Cache、Custon/Redis/Sessionのようなキーのつけ方をすることで、個別に記録されます(それぞれのSettingsに個別のICommandTracerを渡しています)。ダッシュボードの表示時にCustom/Redis/*にするだけでひとまとめに表示できるから便利。
今のところ、Redisは全台平等に分散ではなく、グループ分け+負荷の高いものは複数台で構成しています。キャッシュ用途の台はファイルへのセーブなしで、完全インメモリ(Memcachedに近い)にしているなど、個別チューニングも入っています。
一番カジュアルに確認できるNew Relic、詳細な情報や解析クエリはSumo Logic。見る口が複数あるのは全然いいことです。
レスポンスタイム
---
HttpContextのTimestampに最初の時間が入っているので、Application_EndRequestで捕まえて差分を取ればかかった時間がサクッと。
```csharp
protected void Application_EndRequest()
{
var context = HttpContext.Current;
if (context != null)
{
var responseTime = (DateTime.Now - context.Timestamp);
// 解析するにあたってクエリストリングは邪魔なのでkeyには含めずの形で
logger.Trace(Newtonsoft.Json.JsonConvert.SerializeObject(new
{
date = DateTime.Now,
command = this.Request.Url.GetComponents(UriComponents.Path, UriFormat.Unescaped),
key = this.Request.Url.GetComponents(UriComponents.Query, UriFormat.Unescaped),
ms = (long)responseTime.TotalMilliseconds
}, Newtonsoft.Json.Formatting.None));
}
}
```
取れますね。
まとめ
---
改善は計測から!足元を疎かにして改善もクソもないのです。そして、存外、当たり前のようで、当たり前にはできないのね。また、データは取るだけじゃなく、大事なのは開発メンバーの誰もが見れる場所にあるということ。いつでも。常に。そうじゃないと数字って相対的に比較するものだし、肌感覚が養えないから。
弊社では、簡易なリアルタイムな表示はMiniProfilerとビュー統合のログ表示。実アプリケーションでは片っ端から収集し、NewRelicとSumoLogicに流しこんで簡単に集計・可視化できる体制を整えています。実際、C#移行を果たしてからの弊社のアプリケーションは業界最速、といってよいほどの速度を叩きだしています。基礎設計からガチガチにパフォーマンスを意識しているから、というのはもちろんあるのですが(そしてC# 5.0の非同期がそれを可能にした!)、現在自分が作っているものがどのぐらいのパフォーマンスなのか、を常に意識できる状態に置けたことも一因ではないかな、と考えています。(ただし、[.NET最先端技術によるハイパフォーマンスウェブアプリケーション](http://www.slideshare.net/neuecc/net-22662425)で述べましたが、そのためには開発環境も本番と等しいぐらいのネットワーク環境にしてないとダメですよ!)
私は今年は、言語や設計などの小さな優劣の話よりも、実際に現実に成功させることに重きを置いてます。C#で素晴らしい成果が出せる、その証明を果たしていくフェーズにある。成果は出せるに決まってるでしょ、と、仮に理屈では分かっていても、しかしモデルケースがなければ誰もついてこない。だから、そのための先陣を切っていきたい。勿論、同時に、成果物はどんどん公開していきます。C#が皆さんのこれからの選択肢の一番に上がってくれるといいし、また、C#といったらグラニ、となれるよう頑張ります。 | 34.949853 | 430 | 0.734217 | yue_Hant | 0.863087 |
7adb14e40b986f1204edaca2b690e944138c8ec7 | 1,963 | md | Markdown | snapshot_hive/README.md | SalahAmine/snapshot_hadoop | e3589189b6c3c1edb951925678edd88cb51aee85 | [
"MIT"
] | null | null | null | snapshot_hive/README.md | SalahAmine/snapshot_hadoop | e3589189b6c3c1edb951925678edd88cb51aee85 | [
"MIT"
] | null | null | null | snapshot_hive/README.md | SalahAmine/snapshot_hadoop | e3589189b6c3c1edb951925678edd88cb51aee85 | [
"MIT"
] | 1 | 2019-08-22T16:36:00.000Z | 2019-08-22T16:36:00.000Z | # snapshot_hive
snapshot_hive is intended for backuping hive tables and applying a retention policy on backups
It depends on the snapshot_hdfs projetcs because the data is snapshotted
using the hdfs snapshot feature
## STRATEGY
a backup constists of
1-backup schema into output/<hive_db_name>.<hive_table_name>.<SNAPSHOT_NAME>
2-bachkup of data table ( using hdfs snapshot mechanism ) under the table location/.snapshot in hdfs with the same <SNAPSHOT_NAME>
```
./bin/core/hive_backup_operations.bash
utility script for backuping hive tables and applying a retention policy on backups
MUST BE RUN WITH TABLE OWNER PRIVILEGES
## backup an hive table
backup_table <hive_db_name> <hive_table_name>
backup_table_check_and_apply_retention <hive_db_name> <hive_table_name> <nb_copies_to_retain>
## usage guide
usage
```
this script is to be launched by hadoop SUPERUSER (hdfs by default ) to perform
a backup on table and apply a retention policy on that backup
Retaled environment variables are to be declared under conf/env.bash
snapshot_hive does not expose a restauring functionality.
## To perform a restauration on a table
### Restaure data (table schema has not changed):
-remove the corrupted data ( or partition in cas of partionned table ) by the desired snapshot under <hdfs_location_of_data>/.snapshots
-on hive perform an update the hive metastore
"set hive.msck.path.validation =skip; MSCK REPAIR TABLE <table_name> ;"
### Restaure data (table schema has changed):
-delete the hive table + data ( if not a managed table )
-recreate the schema by executing the DDL_scehama of the table backed
up in output/ directory
-copy the snapshot having the same timestamp under the LOCATION of the hive table
-on hive perform an update the hive metastore
"set hive.msck.path.validation =skip; MSCK REPAIR TABLE <table_name> ;"
| 38.490196 | 137 | 0.745797 | eng_Latn | 0.941424 |
7adc0a63973b47608c38ec63728f9881e80b5be3 | 5,252 | md | Markdown | website/src/pages/docs/development/document/README.md | matuancc/react-native-uiw | 9ff990154c2c7dd2ddbbaf375d3d1271f2791a54 | [
"MIT"
] | null | null | null | website/src/pages/docs/development/document/README.md | matuancc/react-native-uiw | 9ff990154c2c7dd2ddbbaf375d3d1271f2791a54 | [
"MIT"
] | null | null | null | website/src/pages/docs/development/document/README.md | matuancc/react-native-uiw | 9ff990154c2c7dd2ddbbaf375d3d1271f2791a54 | [
"MIT"
] | null | null | null | 参与文档/网站编辑开发
---
这里介绍,当前组件库开发和文档编写,方便您快速介入到文档/网站编辑开发中。
> ⚠️ 注意:文档网站发布是监听 master 分支的更新`自动`发布到 [`gh-pages`](https://github.com/uiwjs/react-native-uiw/tree/gh-pages) 分支。
> 在 `package.json` 中的版本号请不要随意更改,组件发布是监听 [`package.json 中的版本号`](https://github.com/uiwjs/react-native-uiw/blob/4e4f55681a71b4813a5f5fe26f4b1a859bc85a7f/.github/workflows/ci.yml#L64-L66)变更`自动`发布到 npm 上。
> 这些自动化得益于 [Github Actions](https://github.com/actions) 的强力驱动。
<!--rehype:style=border-left: 8px solid #ffe564;background-color: #ffe56440;padding: 12px 16px;-->
## 目录结构
```bash
├── README.md -> packages/core/README.md
├── ....
├── example                   # ----> examples
│   └── base                  # basic example
├── packages                  # ----> packages
│   ├── core                  # @uiw/react-native base components
│   │   ├── package.json
│   │   └── src
│   │       ├── Avatar        # component source code and component docs
│   │       └── ....
│   └── docs                  # @uiw/react-native-doc (can be ignored): compiled static docs, published to npm to provide versioned documentation previews
└── website                   # ----> documentation website source
    ├── ....
    └── src
        ├── pages             # documentation examples are written here
        │   ├── components
        │   └── ....
        └── routes
            ├── Controller.tsx
            ├── history.ts
            ├── menus.ts      # menu configuration
            └── router.tsx    # pages corresponding to the menu entries
```
## Documentation preview

We use [npm](https://www.npmjs.com/@uiw/react-native-doc) to manage the versions of the UIW React Native component documentation site. [unpkg.com](https://unpkg.com/) serves static assets in sync with npm packages, which lets us [browse historical versions](https://unpkg.com/browse/@uiw/react-native-doc/) of the component documentation. That is why, whenever we publish the [`@uiw/react-native`](https://www.npmjs.com/package/@uiw/react-native) package, we also publish the [`@uiw/react-native-doc`](https://www.npmjs.com/package/@uiw/react-native-doc) package.

Preview the documentation site through unpkg: https://unpkg.com/@uiw/react-native-doc/doc/index.html

For v2.0.0+ versions, preview with:

```shell
https://unpkg.com/@uiw/react-native-doc@<package-version>/web/index.html
```

> ⚠️ Note: to keep package versions in sync, we use the [`lerna`](http://npmjs.com/lerna) tool to change the versions of all packages at once, ensuring the component package and the documentation package always share the same version.
> Run the `npm run version` command at the project root to bump the version of every package.
<!--rehype:style=border-left: 8px solid #ffe564;background-color: #ffe56440;padding: 12px 16px;-->
### `Documentation website development`

Component docs live in the `packages/core`<!--rehype:style=color: #039423; background: #b7fdce;--> package directory; other docs live in the website source directory `website/src/pages`<!--rehype:style=color: #039423; background: #b7fdce;-->, organized by route path.

> To run the documentation website, install the dependencies and build the packages first. With [`yarn workspaces`](https://classic.yarnpkg.com/en/docs/workspaces), component docs are loaded from `node_modules`, so the build (or watch) output needs to be emitted into `node_modules`.
<!--rehype:style=border-left: 8px solid #ffe564;background-color: #ffe56440;padding: 12px 16px;-->
<!--rehype:-->
```bash
yarn install   # install the root dependencies and the dependencies of the sub-packages
yarn run build # build the packages
```

Watch the packages and preview the documentation website locally:
```bash
# Step 1
yarn run lib:watch      # compile and output the JS files
# Step 2
yarn run lib:watch:type # output the type definition files (d.ts)
# Step 3
yarn run start          # run a local preview of the documentation website
```
### `Adding a documentation page`

To add a new document, you need to add a route, a menu entry, and a `README.md` file.
```bash
website
├── src
│   ├── pages                 # documentation examples are written here
│   │   ├── components
│   │   ├── getting-started
│   │   │   ├── README.md     # add the README.md document
│   │   │   └── index.tsx     # add the JS file that loads README.md
│   └── routes
│       ├── menus.ts          # menu configuration
│       └── router.tsx        # pages corresponding to the menu entries
```
#### `Step 1: Configure the menu`

Configure the menu in [`website/src/routes/menus.ts`](https://github.com/uiwjs/react-native-uiw/blob/4e4f55681a71b4813a5f5fe26f4b1a859bc85a7f/website/src/routes/menus.ts#L44)
```ts
export interface MenuData extends React.RefAttributes<HTMLAnchorElement>, React.AnchorHTMLAttributes<HTMLAnchorElement> {
  name: string;
  path?: string;
  divider?: boolean;
}

export const docsMenus: MenuData[] = [
  { path: '/docs/getting-started', name: '快速上手' },
  { divider: true, name: "环境安装" },
  { path: '/docs/environment-setup/ios', name: 'iOS 环境安装' },
  ...
  { divider: true, name: "其它" },
  { path: '/docs/development', name: '参与组件/文档开发' },
  { href: 'https://github.com/uiwjs/react-native-uiw/releases', target: '_blank', name: '更新日志' },
]

export const componentMenus: MenuData[] = [ .... ]
```
#### `Step 2: Add the route`

Load the Markdown and related files in [`website/src/routes/router.tsx`](https://github.com/uiwjs/react-native-uiw/blob/4e4f55681a71b4813a5f5fe26f4b1a859bc85a7f/website/src/routes/router.tsx#L39-L41)
```ts
export const getRouterData = {
  '/': {
    component: dynamicWrapper([], () => import('../layouts/BasicLayout')),
  },
  '/docs/getting-started': {
    component: dynamicWrapper([], () => import('../pages/docs/getting-started')),
  },
  ....
}
```
#### `Step 3: Add the Markdown file`

Add `website/src/pages/docs/getting-started/README.md` and `website/src/pages/docs/getting-started/index.tsx`
```tsx
import Markdown, { importAll } from '../../../component/Markdown';

export default class Page extends Markdown {
  // The location of the markdown file within the GitHub repository, used to locate it for editing
  path = "/website/src/pages/docs/getting-started/README.md";
  getMarkdown = async () => {
    // Load the specified Markdown file here
    const md = await import('./README.md');
    // Docs from a component package can be loaded as well
    const mdCom = await import('@uiw/react-native/lib/Badge/README.md');
    // Supports image assets referenced in the markdown via paths relative to this index.tsx
    importAll((require as any).context('./', true, /\.(png|gif|jpg)$/), this.imageFiles);
    return md.default || md;
  }
}
```
### `Editing the content of a Markdown file`

You can simply click the `在 GitHub 上编辑此页`<!--rehype:style=color: #1e1cf0; background: #e3e3ff;--> ("Edit this page on GitHub") button at the bottom of the documentation website.

⇣⇣⇣⇣⇣⇣ See it? Click the button below ⇣⇣⇣⇣⇣⇣
<!--rehype:style=background-color: #a0ffb3; padding: 12px 16px; display: inline-block;-->
# zapci-ascan-auth-dojo
# Description
**FTP Program** - A file transfer program which can transfer files back and forth from a remote web server.
## Notes
* Qi Y, Hossain M S, Nie J, et al. <b>Privacy-preserving blockchain-based federated learning for traffic flow prediction[J]</b>. Future Generation Computer Systems, 2021, 117: 328-337. [Link](https://www.sciencedirect.com/science/article/pii/S0167739X2033065X)
* Jin G, Wang M, Zhang J, et al. <b>STGNN-TTE: Travel time estimation via spatial-temporal graph neural network[J]</b>. Future Generation Computer Systems, 2021. [Link](https://www.sciencedirect.com/science/article/pii/S0167739X21002740)
* Almeida A, Brás S, Oliveira I, et al. <b>Vehicular traffic flow prediction using deployed traffic counters in a city[J]</b>. Future Generation Computer Systems, 2021. [Link](https://www.sciencedirect.com/science/article/pii/S0167739X21004180)
---
title: Learn how to use an Apache Hadoop sandbox emulator - Azure HDInsight
description: "To learn the Apache Hadoop ecosystem, you can set up a Hadoop virtual machine from Hortonworks."
keywords: hadoop emulator,hadoop sandbox
ms.reviewer: jasonh
author: hrasheed-msft
ms.service: hdinsight
ms.custom: hdinsightactive,hdiseo17may2017
ms.topic: conceptual
ms.date: 05/29/2019
ms.author: hrasheed
ms.openlocfilehash: 47ee66393e3e1678576b12a70b767f35cb3bc635
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/27/2020
ms.locfileid: "73044762"
---
# <a name="get-started-with-an-apache-hadoop-sandbox-an-emulator-on-a-virtual-machine"></a>Bir Apache Hadoop kum havuzu, sanal bir makine üzerinde bir emülatör ile başlayın
Hadoop ekosistemi hakkında bilgi edinmek için Hortonworks'teki Apache Hadoop kum havuzunu sanal bir makineye nasıl yükleyeceğinizi öğrenin. Kum havuzu, Hadoop, Hadoop Distributed File System (HDFS) ve iş gönderimi hakkında bilgi edinmek için yerel bir geliştirme ortamı sağlar. Hadoop'u tanıdıktan sonra, bir HDInsight kümesi oluşturarak Azure'da Hadoop'u kullanmaya başlayabilirsiniz. Nasıl başlanınizle ilgili daha fazla bilgi için [HDInsight'ta Hadoop ile başlayın.](apache-hadoop-linux-tutorial-get-started.md)
## <a name="prerequisites"></a>Ön koşullar
* [Oracle VirtualBox](https://www.virtualbox.org/). Buradan indirin [here](https://www.virtualbox.org/wiki/Downloads)ve kurun.
## <a name="download-and-install-the-virtual-machine"></a>Sanal makineyi indirin ve kurun
1. [Cloudera indirmegöz](https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html)atın.
1. En son Hortonworks Sandbox'ı VM'den indirmek için **Yükleme Türünü Seç'in** altındaki **VIRTUALBOX'ı** tıklatın. Ürün leilgili formu oturum açın veya doldurun.
1. İndirmeye başlamak için **HDP SANDBOX (SON)** düğmesini tıklatın.
Kum havuzunun ayarlanmasıyla ilgili talimatlar için Bkz. [Sandbox Dağıtım ve Yükleme Kılavuzu.](https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/1/)
Eski bir HDP sürümünü indirmek için, **Eski Sürümler**altındaki bağlantılara bakın.
## <a name="start-the-virtual-machine"></a>Sanal makineyi başlatın
1. Oracle VM VirtualBox'ı açın.
1. **Dosya** menüsünden **Cihazı İçe Aktar'ı**tıklatın ve ardından Hortonworks Sandbox görüntüsünü belirtin.
1. Hortonworks Sandbox'ı seçin, **Başlat'ı**ve ardından **Normal Başlangıç'ı**tıklatın. Sanal makine önyükleme işlemini tamamladıktan sonra, oturum açma yönergelerini görüntüler.

1. Bir web tarayıcısı açın ve görüntülenen `http://127.0.0.1:8888`URL'ye gidin (genellikle).
## <a name="set-sandbox-passwords"></a>Sandbox parolalarını ayarlama
1. Hortonworks Sandbox sayfasının **başlangıç** adımından **Gelişmiş Seçenekleri Görüntüle'yi**seçin. SSH kullanarak kum havuzuna giriş yapmak için bu sayfadaki bilgileri kullanın. Sağlanan adı ve parolayı kullanın.
> [!NOTE]
> Yüklü bir SSH istemciniz yoksa, sanal makine tarafından sağlanan web tabanlı SSH'yi **http://localhost:4200/** kullanabilirsiniz.
SSH kullanarak ilk bağlandığınızda, kök hesabın parolasını değiştirmeniz istenir. SSH kullanarak oturum açtığınızda kullandığınız yeni bir parola girin.
2. Oturum açtıktan sonra aşağıdaki komutu girin:
ambari-admin-password-reset
İstendiğinde, Ambari yönetici hesabı için bir parola sağlayın. This is used when you access the Ambari Web UI.
## <a name="use-hive-commands"></a>Kovan komutlarını kullanma
1. SSH bağlantısından kum havuzuna, Kovan kabuğunu başlatmak için aşağıdaki komutu kullanın:
hive
2. Kabuk başladıktan sonra, kum havuzu yla birlikte sağlanan tabloları görüntülemek için aşağıdakileri kullanın:
show tables;
3. `sample_07` Tablodan 10 satır almak için aşağıdakileri kullanın:
select * from sample_07 limit 10;
## <a name="next-steps"></a>Sonraki adımlar
* [Hortonworks Sandbox ile Visual Studio'yu nasıl kullanacağınızı öğrenin](../hdinsight-hadoop-emulator-visual-studio.md)
* [Hortonworks Sandbox ipleri öğrenme](https://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/)
* [Hadoop öğretici - HDP ile başlarken](https://hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/)
# CleanPy
[](https://travis-ci.org/UBC-MDS/CleanPy)
This package cleans a dataset and returns summary statistics as well as number, proportion and location of NA values for string and number column inputs. Data cleaning made easy!
### Collaborators
[Heather Van Tassel](https://github.com/heathervant), [Phuntsok Tseten](https://github.com/UBC-MDS/CleanPy/commits?author=phuntsoktseten), [Patrick Tung](https://github.com/tungpatrick)
## Overview
There is a dire need for a good data cleaning package, and we are developing our version of one that helps users clean their data in a meaningful way. Data cleaning is usually the first step in any data science problem, and if you don't clean your data well, it can be really difficult to proceed. Addressing this very issue of messy data is what motivated this package.

CleanPy is designed to streamline the process of producing an easy-to-read summary statistics table for your data. CleanPy can locate all of your missing data and show you exactly where it occurs. Beyond locating missing data, you can also define how you would like to deal with it.
## Function
**Function 1)** `summary`: Summary statistics generator for string and numeric data from dataframes.
```
def summary(data):
    """
    Compute summary statistics for the text and numeric columns of a dataframe.

    Takes a pandas dataframe and returns summary statistics for each column,
    nested in a pandas dataframe. Since pandas only accepts one data type per
    column, the type of each column only needs to be tested once. Two different
    sets of summary statistics are produced, depending on whether the column
    holds 1) string/bool data or 2) int/float/datetime data.

    For numeric columns it returns a dictionary of summary statistics per
    column: min, max, mean, median, count (number of non-NA values) and
    count_NA (number of NA values). Similarly, for string columns it returns
    the unique string values and their counts in a dictionary. The per-column
    summary statistics are then nested into a pandas dataframe and returned.

    Parameters
    ----------
    data : pd.DataFrame
        used to provide summary statistics of each column.

    Returns
    -------
    pd.DataFrame
        Summary pandas dataframe of each column's summary statistics

    >>> summary(pd.DataFrame({"Likes coding": [4, 3, 2, 2]}))
    pd.DataFrame(
        "unique" = [4, 3, 2]
        "min" = 2
        "max" = 4
        "mean" = 11/4
        "median" = 2
        "count" = 4
        "count_NA" = 0)
    """
```
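To make the intended behavior concrete, here is a minimal sketch of how such a summary could be computed with pandas. This is an illustration only (the helper name `summary_sketch` is ours), not the package's actual implementation:

```python
import pandas as pd

def summary_sketch(data):
    """Return per-column summary statistics, including NA counts (illustrative sketch)."""
    stats = {}
    for col in data.columns:
        series = data[col]
        if pd.api.types.is_numeric_dtype(series):
            # Numeric columns: basic descriptive statistics plus NA counts.
            stats[col] = {
                "min": series.min(),
                "max": series.max(),
                "mean": series.mean(),
                "median": series.median(),
                "count": int(series.count()),         # non-NA values
                "count_NA": int(series.isna().sum()), # NA values
            }
        else:
            # String/bool columns: unique values plus NA counts.
            stats[col] = {
                "unique": series.dropna().unique().tolist(),
                "count": int(series.count()),
                "count_NA": int(series.isna().sum()),
            }
    # Nest the per-column dictionaries into one dataframe.
    return pd.DataFrame(stats)

df = pd.DataFrame({"Likes coding": [4, 3, 2, 2]})
print(summary_sketch(df).loc["mean", "Likes coding"])  # 2.75
```

Note how the mean of `[4, 3, 2, 2]` matches the `11/4` shown in the docstring example above.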
**Function 2)** `locate_na`: Returns a dataframe of the count and indices of NA values. This function takes in a dataframe and finds NA values and returns the location of these values along the count of total NAs.
```
def locate_na(data):
    """
    Locate and return the indices of all missing values within an inputted dataframe.
    Each element of the returned dictionary corresponds to a column of the
    dataframe and contains the row indices of the missing values.

    Parameters
    ----------
    data : dataframe
        This is the dataframe that the function will use to locate NAs.

    Returns
    -------
    dictionary of lists
        key = column indices that contain missing values
        value = list of row indices that have missing values

    >>> locate_na(pd.DataFrame(np.array([["Yes", "No"], [None, "Yes"]])))
    {"0": [1]}
    >>> locate_na(pd.DataFrame(np.array([[1, 2, None], [None, 2, 3]])))
    {"0": [1], "2": [0]}
    """
```
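For intuition, the behavior described above can be sketched in a few lines of pandas. This is a minimal illustration (the helper name `locate_na_sketch` is ours), not the package's actual implementation:

```python
import pandas as pd

def locate_na_sketch(data):
    """Return {column label (as str): [row indices with NA]} for columns containing NAs."""
    result = {}
    for col in data.columns:
        rows = data.index[data[col].isna()].tolist()
        if rows:  # only report columns that actually contain missing values
            result[str(col)] = rows
    return result

df = pd.DataFrame([[1, 2, None], [None, 2, 3]])
print(locate_na_sketch(df))  # {'0': [1], '2': [0]}
```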
**Function 3)** `replace_na`:Replaces missing values with either min, max, median, or average (default) values of the column(s). There will be an option to remove the rows with NAs.
```
def replace_na(data, columns, replace="mean", remove=False):
    """
    Replace NA values with either the min, max, median or mean value of the
    column, or remove the affected rows.

    Parameters
    ----------
    data : dataframe
        This is the dataframe that the function will use to replace NAs.
    columns : list
        List of columns to replace missing values on.
    replace : string
        Specifies how to replace missing values.
        Values include: "mean", "min", "max", "median"
    remove : boolean
        Tells the function whether or not to remove rows with NA.
        If True, the replace argument will not be used.

    Returns
    -------
    dataframe
        A pandas dataframe where the NAs are replaced by either the mean,
        min, max or median (as specified by the user)

    >>> replace_na(pd.DataFrame(np.array([[0, 1], [np.nan, 1]])), replace="min", columns=[0])
    pd.DataFrame(np.array([[0, 1], [0, 1]]))
    """
```
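As a rough illustration of the behavior documented above, a minimal sketch using pandas might look like this (the name `replace_na_sketch` is ours; this is not the package's actual implementation):

```python
import numpy as np
import pandas as pd

def replace_na_sketch(data, columns, replace="mean", remove=False):
    """Replace (or drop) NAs in the given columns; a sketch of the documented behavior."""
    out = data.copy()
    if remove:
        # Drop rows that have an NA in any of the requested columns.
        return out.dropna(subset=columns)
    for col in columns:
        # Look up the requested statistic ("mean", "min", "max" or "median")
        # on the column and use it as the fill value.
        fill = getattr(out[col], replace)()
        out[col] = out[col].fillna(fill)
    return out

df = pd.DataFrame({"a": [0.0, np.nan], "b": [1, 1]})
print(replace_na_sketch(df, columns=["a"], replace="min"))
```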
## CleanPy and Python's Ecosystem
Sometimes it can get quite annoying to go through your data line by line; a quick summary of the data will not only save you a lot of time but also give you quick insight and an overall picture of your data, which can be very useful for understanding the task at hand. Python has a summary function, `describe()`, on `pandas.DataFrame`. CleanPy's `summary()` function is quite similar to `describe()`, but it takes things a step further and presents the generated summary statistics in a very intuitive manner. The `summary()` function also provides more information, such as the number of missing values and summaries of string columns. In regards to our `locate_na()` and `replace_na()`, there is no similar function in the current Python ecosystem that we are aware of. The only way to do the same thing is to manually combine a few functions, including `pandas.DataFrame.isna()`.
## Installation
`CleanPy` can be installed using the `pip`
```
pip install git+https://github.com/UBC-MDS/CleanPy.git
```
Then you can import our packages using:
```
from CleanPy import summary, locate_na, replace_na
```
## Usage
Let's assume that you have a dataframe like the following:
```{python}
toy_data = pd.DataFrame({"x":[None, "b", "c"], "y": [2, None, None], "z": [3.6, 8.5, None]})
```
1. `summary`
Arguments:
- `data`: dataframe that the function will provide summary statistics on
- Example: `summary(toy_data)`
- Output: <p align="left">
<img src="./images/summary_output.png">
</p>
2. `locate_na`
Arguments:
- `data`: dataframe that the function will use to locate NAs
- Example: `locate_na(toy_data)`
- Output: `{'x': [0], 'y': [1, 2], 'z': [2]}`
3. `replace_na`
Arguments:
- `data`: dataframe that the function will use to replace NAs
- `columns`: list of columns to replace missing values on
- `replace`: specifies how to replace missing values
- `remove`: tells the function whether or not to remove rows with NA
- Example: `replace_na(toy_data, columns=["y"], replace="mean", remove=False)`
- Output: <p align="left">
<img src="./images/replace_output.png">
</p>
## Branch Coverage
You can install the coverage package in terminal/command prompt with the following code:
```
pip install coverage
```
To get the branch coverage of the package, type the following at the root of the folder:
```
coverage run -m --branch pytest -q; coverage report -m
# If you want to view it interactively
coverage html
```
The coverage results are shown below:
```
Name Stmts Miss Branch BrPart Cover Missing
-----------------------------------------------------------------------------
CleanPy/__init__.py 4 0 0 0 100%
CleanPy/locate_na.py 20 0 12 0 100%
CleanPy/replace_na.py 30 0 26 0 100%
CleanPy/summary.py 25 0 8 0 100%
CleanPy/test/test_locate_na.py 40 0 2 0 100%
CleanPy/test/test_replace_na.py 49 0 0 0 100%
CleanPy/test/test_summary.py 46 0 0 0 100%
-----------------------------------------------------------------------------
TOTAL 214 0 48 0 100%
```
## Python Dependencies
- Pandas
- Numpy
| 45.043011 | 937 | 0.667104 | eng_Latn | 0.994392 |
# Volume
容器磁盘上的文件的生命周期是短暂的,这就使得在容器中运行重要应用时会出现一些问题。首先,当容器崩溃时,kubelet 会重启它,但是容器中的文件将丢失——容器以干净的状态(镜像最初的状态)重新启动。其次,在 `Pod` 中同时运行多个容器时,这些容器之间通常需要共享文件。Kubernetes 中的 `Volume` 抽象就很好的解决了这些问题。
建议先熟悉 [pod](https://kubernetes.io/docs/user-guide/pods)。
## 背景
Docker 中也有一个 [volume](https://docs.docker.com/engine/admin/volumes/) 的概念,尽管它稍微宽松一些,管理也很少。在 Docker 中,卷就像是磁盘或是另一个容器中的一个目录。它的生命周期不受管理,直到最近才有了 local-disk-backed 卷。Docker 现在提供了卷驱动程序,但是功能还非常有限(例如Docker1.7只允许每个容器使用一个卷驱动,并且无法给卷传递参数)。
另一方面,Kubernetes 中的卷有明确的寿命——与封装它的 Pod 相同。所以,卷的生命比 Pod 中的所有容器都长,当这个容器重启时数据仍然得以保存。当然,当 Pod 不再存在时,卷也将不复存在。也许更重要的是,Kubernetes 支持多种类型的卷,Pod 可以同时使用任意数量的卷。
卷的核心是目录,可能还包含了一些数据,可以通过 pod 中的容器来访问。该目录是如何形成的、支持该目录的介质以及其内容取决于所使用的特定卷类型。
要使用卷,需要为 pod 指定为卷(`spec.volumes` 字段)以及将它挂载到容器的位置(`spec.containers.volumeMounts` 字段)。
容器中的进程看到的是由其 Docker 镜像和卷组成的文件系统视图。 [Docker 镜像](https://docs.docker.com/userguide/dockerimages/)位于文件系统层次结构的根目录,任何卷都被挂载在镜像的指定路径中。卷无法挂载到其他卷上或与其他卷有硬连接。Pod 中的每个容器都必须独立指定每个卷的挂载位置。
## 卷的类型
Kubernetes 支持以下类型的卷:
- `awsElasticBlockStore`
- `azureDisk`
- `azureFile`
- `cephfs`
- `csi`
- `downwardAPI`
- `emptyDir`
- `fc` (fibre channel)
- `flocker`
- `gcePersistentDisk`
- `gitRepo`
- `glusterfs`
- `hostPath`
- `iscsi`
- `local`
- `nfs`
- `persistentVolumeClaim`
- `projected`
- `portworxVolume`
- `quobyte`
- `rbd`
- `scaleIO`
- `secret`
- `storageos`
- `vsphereVolume`
We welcome additional contributions.
### awsElasticBlockStore

An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS Volume](http://aws.amazon.com/ebs/) into your container. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of an EBS volume are preserved and the volume is merely unmounted. This means that an EBS volume can be pre-populated with data, and that data can be "handed off" between pods.

**Important**: You must create an EBS volume using `aws ec2 create-volume` or the AWS API before you can use it.

There are some restrictions when using an `awsElasticBlockStore` volume:

- the nodes on which Pods are running must be AWS EC2 instances
- those instances need to be in the same region and availability zone as the EBS volume
- EBS only supports mounting a volume to a single EC2 instance

#### Creating an EBS volume

Before you can use an EBS volume in a pod, you need to create it.

```shell
aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
```

Make sure the zone matches the zone you brought up your cluster in (and check that the size and EBS volume type are suitable for your use!).

### AWS EBS example configuration

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
```
### azureDisk

An `azureDisk` volume is used to mount a Microsoft Azure [Data Disk](https://azure.microsoft.com/zh-cn/documentation/articles/virtual-machines-linux-about-disks-vhds) into a Pod.

More details can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_disk/README.md).

### azureFile

An `azureFile` volume is used to mount a Microsoft Azure File Volume (SMB 2.1 and 3.0) into a Pod.

More details can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_file/README.md).
### cephfs

A `cephfs` volume allows an existing CephFS volume to be mounted into your containers. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `cephfs` volume are preserved and the volume is merely unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be "handed off" between pods. CephFS can be mounted by multiple writers simultaneously.

**Important**: You must have your own Ceph server running before you can use it.

See the [CephFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/cephfs/) for more details.
### csi

CSI stands for [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md). CSI is an attempt to establish an industry-standard interface specification with which container orchestration systems (COs) can expose arbitrary storage systems to their container workloads. See the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) for details.

The `csi` volume type is an in-tree CSI volume plugin used by Pods to interact with external CSI volume drivers running on the same node. After deploying a CSI-compatible volume driver, users can use `csi` as the volume type to mount the storage provided by the driver.

CSI persistent volume support was introduced in Kubernetes v1.9 as an alpha feature that must be explicitly enabled by the cluster administrator. In other words, the cluster administrator needs to add "`CSIPersistentVolume=true`" to the "`--feature-gates=`" flag of the apiserver, controller-manager, and kubelet components.

A CSI persistent volume has the following fields for users to specify:

- `driver`: a string value that specifies the name of the volume driver to use. It must be fewer than 63 characters and start with a character. The driver name can contain "`.`", "`-`", "`_`", or digits.
- `volumeHandle`: a string value that uniquely identifies the volume name returned by the CSI volume plugin's `CreateVolume` call. The volume handle is then used in all subsequent calls to the volume driver to reference that volume.
- `readOnly`: an optional boolean value indicating whether the volume is published as read-only. Defaults to false.
### downwardAPI

A `downwardAPI` volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain-text files.

See the [`downwardAPI` volume example](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details.
### emptyDir

An `emptyDir` volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, the volume is initially empty. Containers in the Pod can all read and write the same files in the `emptyDir` volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the `emptyDir` is deleted forever.

**Note**: a container crashing does not remove a pod from a node, so the data in an `emptyDir` volume is safe across container crashes.

Some uses for an `emptyDir` are:

- scratch space, such as for a disk-based merge sort
- checkpointing a long computation for recovery from crashes
- holding files that a content-manager container fetches while a webserver container serves the data

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```
### fc (fibre channel)

An `fc` volume allows an existing fibre channel volume to be mounted in a pod. You can specify single or multiple target World Wide Names using the `targetWWNs` parameter in your volume configuration. If multiple WWNs are specified, targetWWNs expects that those WWNs come from multi-path connections.

**Important**: You must configure FC SAN zoning, and allocate and mask those LUNs (volumes) to the target WWNs beforehand, so that Kubernetes hosts can access them.

See the [FC example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) for more details.
### flocker

[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.

A `flocker` volume allows a Flocker dataset to be mounted into a pod. If the dataset does not already exist in Flocker, it needs to be created first with the Flocker CLI or the Flocker API. If the dataset already exists, it will be reattached by Flocker to the node that the pod is scheduled on. This means data can be "handed off" between pods as required.

**Important**: You must have your own Flocker installation running before you can use it.

See the [Flocker example](https://github.com/kubernetes/examples/tree/master/staging/volumes/flocker) for more details.
### gcePersistentDisk

A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent Disk](http://cloud.google.com/compute/docs/disks) into your container. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be "handed off" between pods.

**Important**: You must create a PD using gcloud, the GCE API, or the UI before you can use it.

There are some restrictions when using a `gcePersistentDisk`:

- the nodes on which Pods are running must be GCE VMs
- those VMs need to be in the same GCE project and zone as the PD

A feature of PDs is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode; simultaneous writers are not allowed.

Using a PD on a pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1.

#### Creating a PD

Before you can use a GCE PD in a pod, you need to create it.

```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
```
### gitRepo

A `gitRepo` volume is an example of what can be done as a volume plugin. It mounts an empty directory and clones a git repository into it for your container to use. In the future, such volumes may be moved to an even more decoupled model, rather than extending the Kubernetes API for every such use case.

Here is an example of a gitRepo volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
```
### glusterfs

A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open-source networked filesystem) volume to be mounted into your cluster. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of a `glusterfs` volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be "handed off" between pods. GlusterFS can be mounted by multiple writers simultaneously.

**Important**: You must have your own GlusterFS installation running before you can use it.

See the [GlusterFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/glusterfs) for more details.
### hostPath

A `hostPath` volume mounts a file or directory from the host node's filesystem into the cluster. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

For example, some uses for a `hostPath` are:

- running a container that needs access to Docker internals; use a `hostPath` of `/var/lib/docker`
- running cAdvisor in a container; use a `hostPath` of `/dev/cgroups`
- allowing a pod to specify whether a given hostPath should exist prior to the pod running, whether it should be created, and what it should exist as

In addition to the required `path` property, a user can optionally specify a `type` for a `hostPath` volume.

The supported values for the `type` field are:

| Value | Behavior |
| :------------------ | :--------------------------------------- |
| | Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the hostPath volume. |
| `DirectoryOrCreate` | If nothing exists at the given path, an empty directory will be created there as needed, with permission set to 0755, having the same group and ownership as Kubelet. |
| `Directory` | A directory must exist at the given path |
| `FileOrCreate` | If nothing exists at the given path, an empty file will be created there as needed, with permission set to 0644, having the same group and ownership as Kubelet. |
| `File` | A file must exist at the given path |
| `Socket` | A UNIX socket must exist at the given path |
| `CharDevice` | A character device must exist at the given path |
| `BlockDevice` | A block device must exist at the given path |

Watch out when using this type of volume, because:

- pods with identical configuration (such as those created from a podTemplate) may behave differently on different nodes due to different files on those nodes
- when Kubernetes adds resource-aware scheduling, as is planned, it will not be able to account for resources used by a `hostPath`
- the files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a [privileged container](/docs/user-guide/security-context) or modify the file permissions on the host to be able to write to a `hostPath` volume

#### Example pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
```
### iscsi

An `iscsi` volume allows an existing iSCSI (SCSI over IP) volume to be mounted into containers. Unlike `emptyDir`, the contents of an `iscsi` volume are preserved when the Pod is removed and the volume is merely unmounted. This means that an iscsi volume can be pre-populated with data, and that data can be "handed off" between pods.

**Important**: You must have your own iSCSI server created before you can use it.

A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode; simultaneous writers are not allowed.

See the [iSCSI example](https://github.com/kubernetes/examples/tree/master/staging/volumes/iscsi) for more details.
### local

This alpha feature requires the `PersistentLocalVolumes` feature gate to be enabled.

**Note**: starting in 1.9, the `VolumeScheduling` feature gate must also be enabled.

A `local` volume represents a mounted local storage device such as a disk, partition, or directory.

Local volumes can only be used as a statically created PersistentVolume.

Compared to HostPath volumes, local volumes can be used in a durable manner without manually scheduling pods to nodes, because the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.

However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.

The following is an example PersistentVolume spec using a `local` volume:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["example-node"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
```

**Note**: local PersistentVolume cleanup and deletion require manual intervention with no external provider.

Starting in 1.9, local volume binding can be delayed until scheduling begins, for pods using a StorageClass with `volumeBindingMode` set to `WaitForFirstConsumer`. See the [example](storage-classes.md#local). Delaying volume binding ensures that the volume binding decision is also evaluated against any other node constraints, such as node resource requirements, node selectors, pod affinity, and pod anti-affinity.

See the [local persistent storage user guide](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume) for details about the `local` volume type.
### nfs

An `nfs` volume allows an existing NFS (Network File System) share to be mounted into your containers. Unlike `emptyDir`, the contents of an `nfs` volume are preserved when a Pod is removed and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be "handed off" between pods. NFS can be mounted by multiple writers simultaneously.

**Important**: You must have your own NFS server running with the share exported before you can use it.

See the [NFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) for more details.
### persistentVolumeClaim

A `persistentVolumeClaim` volume is used to mount a [PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) into a container. PersistentVolumes are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.

See the [PersistentVolumes example](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) for more details.
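As an illustration, a pod that consumes such a claim might look like the following minimal sketch (the claim name `myclaim` is only an assumed placeholder; it must refer to an existing PersistentVolumeClaim in the same namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    # The claim-backed storage appears to the container at this path.
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      # Name of an existing PersistentVolumeClaim in the pod's namespace.
      claimName: myclaim
```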
### projected

A `projected` volume maps several existing volume sources into the same directory.

Currently, the following types of volume sources can be projected:

- [`secret`](#secret)
- [`downwardAPI`](#downwardapi)
- `configMap`

All sources are required to be in the same namespace as the pod. For more details, see the [all-in-one volume design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/all-in-one-volume.md).

#### Example pod with a secret, a downward API, and a configmap
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret
          items:
            - key: username
              path: my-group/my-username
      - downwardAPI:
          items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
            - path: "cpu_limit"
              resourceFieldRef:
                containerName: container-test
                resource: limits.cpu
      - configMap:
          name: myconfigmap
          items:
            - key: config
              path: my-group/my-config
```
#### Example pod with multiple secrets, using a non-default permission mode
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret
          items:
            - key: username
              path: my-group/my-username
      - secret:
          name: mysecret2
          items:
            - key: password
              path: my-group/my-password
              mode: 511
```
Each projected volume source is listed in the spec under `sources`. The parameters are nearly the same, with two exceptions:
- For secrets, the `secretName` field has been changed to `name` to be consistent with ConfigMap naming.
- `defaultMode` can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the `mode` for each individual projection.
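One detail in the example above worth unpacking: YAML/JSON integer fields do not support octal notation, so the `mode: 511` shown is a decimal value that corresponds to octal `0777`. A quick illustrative check in Python:

```python
# Kubernetes file-mode fields such as `mode` and `defaultMode` take an
# integer; written as decimal in YAML, 511 is the octal permission 0777.
decimal_mode = 511
print(oct(decimal_mode))   # 0o777

# Going the other way: the familiar rw-r--r-- permission 0644 in decimal:
print(int("644", 8))       # 420
```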
### portworxVolume
A `portworxVolume` is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities, and aggregates capacity across multiple servers. Portworx runs in-guest in virtual machines or on bare-metal Linux nodes.
A `portworxVolume` can be created dynamically through Kubernetes, or it can be pre-provisioned and referenced inside a Kubernetes pod.
Here is an example pod referencing a pre-provisioned PortworxVolume:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-portworx-volume-pod
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /mnt
name: pxvol
volumes:
- name: pxvol
# This Portworx volume must already exist.
portworxVolume:
volumeID: "pxvol"
fsType: "<fs-type>"
```
**Important:** Make sure you have an existing PortworxVolume named `pxvol` before using it in the pod.
More details and examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/portworx/README.md).
### quobyte
A `quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to be mounted into your pod.
**Important:** You must have your own Quobyte setup running before you can use it.
For more details, see the [Quobyte example](https://github.com/kubernetes/examples/tree/master/staging/volumes/quobyte).
### rbd
An `rbd` volume allows a [Rados Block Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your pod. Unlike `emptyDir`, the contents of an `rbd` volume are preserved when the Pod is deleted; the volume is merely unmounted. This means an RBD volume can be pre-populated with data, and that data can be handed off between pods.
**Important:** You must have your own Ceph installation running before you can use RBD.
A feature of RBD is that it can be mounted as read-only by multiple consumers simultaneously. This means you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, RBD volumes can only be mounted in read-write mode by a single consumer; simultaneous writers are not allowed.
For more details, see the [RBD example](https://github.com/kubernetes/examples/tree/master/staging/volumes/rbd).
### scaleIO
ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable, shared block network storage. The `scaleIO` volume plugin allows deployed pods to access existing ScaleIO volumes (or it can dynamically provision new volumes for persistent volume claims; see [ScaleIO persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#scaleio)).
**Important:** You must have an existing ScaleIO cluster already set up and running with the volumes created before you can use them.
Here is an example pod configuration that uses ScaleIO:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-0
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: pod-0
volumeMounts:
- mountPath: /test-pd
name: vol-0
volumes:
- name: vol-0
scaleIO:
gateway: https://localhost:443/api
system: scaleio
protectionDomain: sd0
storagePool: sp1
volumeName: vol-0
secretRef:
name: sio-secret
fsType: xfs
```
For more details, see the [ScaleIO examples](https://github.com/kubernetes/examples/tree/master/staging/volumes/scaleio).
### secret
A `secret` volume is used to pass sensitive information, such as passwords, to pods. You can store secrets in the Kubernetes API and mount them as files for use by pods, without coupling the pods to Kubernetes directly. `secret` volumes are backed by tmpfs (a RAM-backed filesystem), so they are never written to non-volatile storage.
**Important:** You must create a secret in the Kubernetes API before you can use it.
Secrets are described in more detail [here](/docs/user-guide/secrets).
### storageOS
A `storageos` volume allows an existing [StorageOS](https://www.storageos.com) volume to be mounted into your pod.
StorageOS runs as a container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster. Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
At its core, StorageOS provides block storage to containers, accessible via a filesystem.
The StorageOS container requires 64-bit Linux and has no additional dependencies. A free developer license is available.
**Important:** You must run the StorageOS container on each node that wants to access StorageOS volumes, or that will contribute storage capacity to the pool. For installation instructions, consult the [StorageOS documentation](https://docs.storageos.com).
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis
role: master
name: test-storageos-redis
spec:
containers:
- name: master
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
volumeMounts:
- mountPath: /redis-master-data
name: redis-data
volumes:
- name: redis-data
storageos:
# The `redis-vol01` volume must already exist within StorageOS in the `default` namespace.
volumeName: redis-vol01
fsType: ext4
```
For more information, including dynamic provisioning and persistent volume claims, see the [StorageOS examples](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/storageos).
### vsphereVolume
**Prerequisite:** Kubernetes configured with the vSphere Cloud Provider. For cloud provider configuration, see the [vSphere getting started guide](https://kubernetes.io/docs/getting-started-guides/vsphere/).
A `vsphereVolume` is used to mount a vSphere VMDK volume into your Pod. The contents of a volume are preserved when it is unmounted. Both VMFS and VSAN datastores are supported.
**Important:** You must create a VMDK using one of the following methods before using it in a Pod.
#### Create a VMDK volume
Choose one of the following methods to create a VMDK.
First ssh into ESX, then use the following command to create a VMDK:
```shell
vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
```
Or use the following command to create a VMDK:
```shell
vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
```
#### vSphere VMDK example configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-vmdk
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-vmdk
name: test-volume
volumes:
- name: test-volume
# This VMDK volume must already exist.
vsphereVolume:
volumePath: "[DatastoreName] volumes/myDisk"
fsType: ext4
```
More examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere).
## Using subPath
Sometimes it is useful to share one volume for multiple uses in a single pod. The `volumeMounts.subPath` property can be used to specify a sub-path inside the referenced volume instead of its root.
Here is an example of a pod with a LAMP stack (Linux Apache MySQL PHP) using a single, shared volume. The HTML contents are mapped to its html directory, and the databases will be stored in its mysql directory:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: my-lamp-site
spec:
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "rootpasswd"
volumeMounts:
- mountPath: /var/lib/mysql
name: site-data
subPath: mysql
- name: php
image: php:7.0-apache
volumeMounts:
- mountPath: /var/www/html
name: site-data
subPath: html
volumes:
- name: site-data
persistentVolumeClaim:
claimName: my-lamp-site-data
```
## Resources
The storage medium (disk, SSD, etc.) of an `emptyDir` volume is determined by the medium of the filesystem holding the kubelet root directory (typically `/var/lib/kubelet`). There is no limit on how much space an `emptyDir` or `hostPath` volume can consume, and no isolation between containers or between pods.
In the future, we expect that `emptyDir` and `hostPath` volumes will be able to request a certain amount of space using a [resource](https://kubernetes.io/docs/user-guide/compute-resources) specification, and to select the type of media to use, for clusters that have several media types.
## Out-of-tree volume plugins
In addition to the volume types listed previously, storage vendors can create custom plugins without adding them to the Kubernetes repository. This can be achieved by using the `FlexVolume` plugin.
`FlexVolume` enables users to mount vendor volumes into containers. The vendor plugin is implemented using a driver that supports a series of volume commands defined by the `FlexVolume` API. Drivers must be installed in a predefined volume plugin path on each node.
More details can be found [here](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md).
## Mount propagation
**Note:** Mount propagation is an alpha feature in Kubernetes 1.8 and may be redesigned or even removed in future releases.
Mount propagation allows volumes mounted by a container to be shared with other containers in the same Pod, or even with other Pods on the same node.
If the MountPropagation feature is disabled, volume mounts in a pod are not propagated. That is, containers run with `private` mount propagation as described in the [Linux kernel documentation](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).
To enable this feature, specify `MountPropagation=true` in the `--feature-gates` command-line option. When enabled, the `volumeMounts` field of a container has a new `mountPropagation` subfield. Its values are:
- `HostToContainer`: This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories. This is the default mode when the MountPropagation feature is enabled.
  Similarly, if any pod with `Bidirectional` mount propagation mounts to the same volume, a container with `HostToContainer` mount propagation will see it.
  This mode is equal to the `rslave` mount propagation described in the [Linux kernel documentation](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).
- `Bidirectional`: This volume mount behaves the same as a `HostToContainer` mount. In addition, all volume mounts created by the container will be propagated back to the host and to all containers of all pods that use the same volume.
  A typical use case for this mode is a pod with a Flex volume driver, or a pod that needs to mount something on the host using a HostPath volume.
  This mode is equal to the `rshared` mount propagation described in the [Linux kernel documentation](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).
**Caution:** Bidirectional mount propagation can be dangerous. It can damage the host operating system, so it is allowed only in privileged containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by containers in a Pod must be destroyed (unmounted) by the containers upon termination.
## References
- https://kubernetes.io/docs/concepts/storage/volumes/
- [Deploying WordPress and MySQL with Persistent Volumes](https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
## List
1. Push
    - Add values to the specified list
    - LPUSH key value [value ...]
    - RPUSH key value [value ...]
2. Pop
    - Retrieve and remove a value from the specified list
    - LPOP key
    - RPOP key
3. Length
    - Get the length of the specified list
    - LLEN key
4. Range
    - Get the values within the specified range
    - LRANGE key start stop
    - Use 0 -1 to fetch all elements
    - Example
        - LRANGE customer 0 -1
5. Index
    - Get the value at the specified index
    - LINDEX key index
6. Insert
    - Insert a value at the specified position
    - LINSERT key BEFORE pivot value
    - LINSERT key AFTER pivot value
7. Set
    - Set the value at the specified index
    - LSET key index value
8. Remove
    - Remove the specified number of occurrences of a value
    - LREM key count value
    - Keep only the elements within the specified range
    - LTRIM key start stop
        - Both start and stop are kept
        - Count from the left with positive indexes
        - Count from the right with negative indexes
    - Example
        - LTRIM customer 0 4
            - Keeps the customer elements at indexes 0 1 2 3 4
        - LTRIM customer 0 -2
            - Keeps every element except index -1, i.e. the last one
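As a rough illustration of the LTRIM index rules above, here is a small Python sketch (a model of the semantics, not Redis itself) that mimics the inclusive start/stop behavior, including negative indexes counted from the right:

```python
def ltrim(lst, start, stop):
    # Mimics Redis LTRIM: keep elements from start to stop, inclusive.
    # Negative indexes count from the right, as in Redis.
    n = len(lst)
    if start < 0:
        start += n
    if stop < 0:
        stop += n
    return lst[max(start, 0):stop + 1]

customers = ["a", "b", "c", "d", "e", "f"]
print(ltrim(customers, 0, 4))    # keeps indexes 0..4
print(ltrim(customers, 0, -2))   # keeps everything except the last element
```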
Barnyard directory structure.
The minimal barnyard directory structure is the following.
```
$ find my-barnyard
```
These two directories are required. They can be empty, but they are required.
In the `modules` directory you create modules directories. The diretory name is
the module name. Within the module you write a `bash` program named `apply`.
```console
$ mkdir modules/motd
```
```bash
# modules/motd/apply
cat <<EOF > /etc/motd
"Ever make mistakes in life? Let’s make them birds. Yeah, they’re birds now."
--Bob Ross
EOF
```
In the `machines` directory you create a directory using the fully-qualified
domain name for each machine that you want to manage with Barnyard.
Inside that directory you create a configuration file for the modules you want
to run. Configuration files are simple name/value pairs. [TK Config file
format.]
The configuration file is whatever your module needs it to be. There are a set
of special configuration properties for managing dependencies.
The configuration is module specific. There are some special variables used by Barnyard; they are prefixed with an `@`.
Configuration files.
Module configuration files are name/value pairs. The value must fit on one line.
For multi-line values, use `base64` encoding (maybe we decode?).
```
@dependencies=users
%dependencies=users
&included=../../includes/postgresql
&included=includes/postgresql
~base64=IEVkZy4gQ29tZSBvbiwgc2lyOyBoZXJlJ3MgdGhlIHBsYWNlLiBTdGFuZCBzdGlsbC4gSG93IGZlYXJmdWwKICAgICBBbmQgZGl6enkgJ3RpcyB0byBjYXN0IG9uZSdzIGV5ZXMgc28gbG93IQogICAgIFRoZSBjcm93cyBhbmQgY2hvdWdocyB0aGF0IHdpbmcgdGhlIG1pZHdheSBhaXIgIAogICAgIFNob3cgc2NhcmNlIHNvIGdyb3NzIGFzIGJlZXRsZXMuIEhhbGZ3YXkgZG93bgogICAgIEhhbmdzIG9uZSB0aGF0IGdhdGhlcnMgc2FtcGlyZS0gZHJlYWRmdWwgdHJhZGUhCiAgICAgTWV0aGlua3MgaGUgc2VlbXMgbm8gYmlnZ2VyIHRoYW4gaGlzIGhlYWQuCiAgICAgVGhlIGZpc2hlcm1lbiB0aGF0IHdhbGsgdXBvbiB0aGUgYmVhY2gKICAgICBBcHBlYXIgbGlrZSBtaWNlOyBhbmQgeW9uZCB0YWxsIGFuY2hvcmluZyBiYXJrLAogICAgIERpbWluaXNoJ2QgdG8gaGVyIGNvY2s7IGhlciBjb2NrLCBhIGJ1b3kKICAgICBBbG1vc3QgdG9vIHNtYWxsIGZvciBzaWdodC4gVGhlIG11cm11cmluZyBzdXJnZQogICAgIFRoYXQgb24gdGgnIHVubnVtYidyZWQgaWRsZSBwZWJibGUgY2hhZmVzCiAgICAgQ2Fubm90IGJlIGhlYXJkIHNvIGhpZ2guIEknbGwgbG9vayBubyBtb3JlLAogICAgIExlc3QgbXkgYnJhaW4gdHVybiwgYW5kIHRoZSBkZWZpY2llbnQgc2lnaHQKICAgICBUb3BwbGUgZG93biBoZWFkbG9uZy4K
sudoers=fred wilma
```
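As a sketch of how a module might consume such a file, here is a minimal Python parser for these name/value pairs. It assumes, purely for illustration, that a `~`-prefixed name holds a base64-encoded value; that convention and the key names are guesses, not part of Barnyard itself:

```python
import base64

def parse_config(text):
    # One name=value pair per line; values must fit on one line.
    # Repeated names accumulate into a list, as in the example file.
    config = {}
    for line in text.splitlines():
        if not line.strip() or "=" not in line:
            continue
        name, value = line.split("=", 1)
        if name.startswith("~"):  # assumed convention: decode base64 values
            name, value = name[1:], base64.b64decode(value).decode()
        config.setdefault(name, []).append(value)
    return config

cfg = parse_config("sudoers=fred wilma\n~note=" +
                   base64.b64encode(b"multi\nline").decode())
print(cfg["sudoers"])   # ['fred wilma']
print(cfg["note"])      # ['multi\nline']
```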
TODO Implement oneshot, diff and explicit types of modules, specify in config.
TODO Inform a module of what modules have run. That way we can have a generic
restart module, one that even accepts a regex, and restart a service when
certain modules have run.
TODO Implement run on diff.
---
author: MikeRayMSFT
ms.service: sql
ms.topic: include
ms.date: 06/11/2020
ms.author: mikeray
---
<Token>[!INCLUDE [sssqlv15-md](../sssqlv15-md.md)]</Token>
---
title: 'TechRepublic'
date: '2020-12-18'
description: "Chef cofounder on CentOS: It's time to open source everything"
url: 'https://www.techrepublic.com/article/chef-cofounder-on-centos-its-time-to-open-source-everything/'
posttype: 'press'
---
[Read the Article](https://www.techrepublic.com/article/chef-cofounder-on-centos-its-time-to-open-source-everything/)
This class represents a function that performs a computation
on a set of values rather than on a single value. For example, finding
the average or mean of a list of numbers is an aggregate function.
All database management and spreadsheet systems support a set
of aggregate functions that can operate on a set of selected records or cells.
Instance Variables
name - The name by which the receiver is known.
functionBlock - A valuable that takes a Collection as input and returns a result value
Class Variables
Functions - An ordered collection holding the typical aggregate functions
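A rough Python analogue of the class described above: a named function object whose "function block" takes a collection and returns a single value. The names here are illustrative, not the Smalltalk API:

```python
class AggregateFunction:
    # A named computation over a collection of values rather than a single value.
    def __init__(self, name, function_block):
        self.name = name
        self.function_block = function_block

    def value(self, collection):
        return self.function_block(collection)

# Typical aggregate functions, as a database or spreadsheet would offer:
average = AggregateFunction("average", lambda xs: sum(xs) / len(xs))
total = AggregateFunction("sum", sum)
print(average.value([1, 2, 3, 4]))   # 2.5
print(total.value([1, 2, 3, 4]))     # 10
```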
---
title: TRACKNAMETEXT
description: This is a predefined TEXT element with the following default values.
ms.assetid: a0ef1d16-a5de-4b61-a307-dcadc077f124
keywords:
- TRACKNAMETEXT Windows Media Player
topic_type:
- apiref
api_name:
- TRACKNAMETEXT
api_type:
- NA
ms.topic: article
ms.date: 05/31/2018
api_location:
---
# TRACKNAMETEXT
This is a predefined **TEXT** element with the following default values.
``` syntax
value="wmpprop:player.currentMedia.name"
tabstop="true"
```
## Remarks
This will create a **TEXT** element that will display the name of the current media. All properties of this **TEXT** element can be overridden by explicitly specifying them.
## Requirements
| | |
|--------------------|----------------------------------------------|
| Version<br/> | Windows Media Player 7.0 or later<br/> |
## See also
<dl> <dt>
[**TEXT Element**](text-element.md)
</dt> </dl>
<div align="center">
# Self-supervised learning
[](https://www.nature.com/articles/nature14539)
[](https://papers.nips.cc/book/advances-in-neural-information-processing-systems-31-2018)
<!--
ARXIV
[](https://www.nature.com/articles/nature14539)
-->

<!--
Conference
-->
</div>
Self-supervised learning algorithms provide a way to train Deep Neural Networks in an unsupervised way using contrastive
losses. The idea is to learn a representation which can discriminate between negative examples and be as close as
possible to augmentations and transformations of itself. In this approach, we first train a ResNet on the unlabeled
dataset which is then fine-tuned on a relatively small labeled one. This approach drastically reduces the amount of
labeled data required, a big problem in applying deep learning in the real world. Surprisingly, this approach actually
leads to an increase in robustness as well as raw performance when compared to fully supervised counterparts, even with
the same architecture.
In case the user wants to skip the pre-training part, the pre-trained weights can be
[downloaded from here](https://drive.google.com/file/d/1z0BouIiQ9oLizubOIH9Rlpad5Kk_2RtY/view?usp=sharing)
to use for fine-tuning tasks and directly skip to the second part of the tutorial which is using the
'ssl_finetune_train.py'.
### Steps to run the tutorial
1.) Download the two datasets [TCIA-Covid19](https://wiki.cancerimagingarchive.net/display/Public/CT+Images+in+COVID-19)
& [BTCV](https://www.synapse.org/#!Synapse:syn3193805/wiki/217789) (More detail about them in the Data section)\
2.) Modify the paths for data_root, json_path & logdir in ssl_script_train.py\
3.) Run the 'ssl_script_train.py'\
4.) Modify the paths for data_root, json_path, pre-trained_weights_path from 2.) and
logdir_path in 'ssl_finetuning_train.py'\
5.) Run the 'ssl_finetuning_script.py'\
6.) And that's all, folks: use the model as you need
### 1. Data
Pre-training Dataset: The TCIA Covid-19 dataset was used for generating the
[pre-trained weights](https://drive.google.com/file/d/1D7G1FhgZfBhql4djMfiSy0xODVXnLlpd/view?usp=sharing).
The dataset contains a total of 771 3D CT Volumes. The volumes were split into training and validation sets
of 600 and 171 3D volumes correspondingly. The data is available for download at this
[link](https://wiki.cancerimagingarchive.net/display/Public/CT+Images+in+COVID-19).
If this dataset is being used in your work, please use [1] as reference. A json file is provided
which contains the training and validation splits that were used for the training. The json file can be found in the
json_files directory of the self-supervised training tutorial.
Fine-tuning Dataset: The dataset from Beyond the Cranial Vault Challenge
[(BTCV)](https://www.synapse.org/#!Synapse:syn3193805/wiki/217789)
2015 hosted at MICCAI, was used as a fully supervised fine-tuning task on the pre-trained weights. The dataset
consists of 30 3D Volumes with annotated labels of up to 13 different organs [2]. There are 3 json files provided in the
json_files directory for the dataset. They correspond to having different number of training volumes ranging from
6, 12 and 24. All 3 json files have the same validation split.
References:
1.) Harmon, Stephanie A., et al. "Artificial intelligence for the detection of COVID-19 pneumonia on
chest CT using multinational datasets." Nature communications 11.1 (2020): 1-7.
2.) Tang, Yucheng, et al. "High-resolution 3D abdominal segmentation with random patch network fusion."
Medical Image Analysis 69 (2021): 101894.
### 2. Network Architectures
For pre-training, a modified version of ViT [1] has been used; it can be found
[here](https://docs.monai.io/en/latest/networks.html#vitautoenc)
in MONAI. The original ViT was modified by attaching two 3D Convolutional Transpose Layers to achieve a similar
reconstruction size as that of the input image. The ViT is the backbone for the UNETR [2] network architecture which
was used for the fine-tuning fully supervised tasks.
The pre-trained ViT backbone weights were loaded into UNETR, while the decoder head still relies on random initialization
for adaptability to the new downstream task. This flexibility also allows the user to adapt the ViT backbone to their
own custom network architectures as well.
References:
1.) Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale."
arXiv preprint arXiv:2010.11929 (2020).
2.) Hatamizadeh, Ali, et al. "Unetr: Transformers for 3d medical image segmentation."
arXiv preprint arXiv:2103.10504 (2021).
### 3. Self-supervised Tasks
The pre-training pipeline has two aspects to it (Refer figure shown below). First, it uses augmentation (top row) to
mutate the data and second, it utilizes regularized
[contrastive loss](https://docs.monai.io/en/latest/losses.html#contrastiveloss) [3] to learn feature representations
of the unlabeled data. The multiple augmentations are applied on a randomly selected 3D foreground patch from a 3D
volume. Two augmented views of the same 3D patch are generated for the contrastive loss as it functions by drawing
the two augmented views closer to each other if the views are generated from the same patch, if not then it tries to
maximize the disagreement. The CL offers this functionality on a mini-batch.

The augmentations mutate the 3D patch in various ways; the primary task of the network is to reconstruct
the original image. The different augmentations used are classical techniques such as in-painting [1], out-painting [1]
and noise augmentation to the image by local pixel shuffling [2]. The secondary task of the network is to simultaneously
reconstruct the two augmented views as similar to each other as possible via regularized contrastive loss [3] as its
objective is to maximize the agreement. The term regularized has been used here because contrastive loss is adjusted
by the reconstruction loss as a dynamic weight itself.
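As an illustration only, here is a minimal pure-Python sketch of an NT-Xent-style contrastive objective over two batches of augmented-view embeddings. The real project uses MONAI's `ContrastiveLoss` on GPU tensors; this toy version omits the reconstruction-based regularization described above:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length, non-zero vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(view1, view2, temperature=0.005):
    """NT-Xent-style loss for a mini-batch of paired embeddings.

    view1[i] and view2[i] are embeddings of two augmentations of the
    same patch; every other pairing in the batch is a negative.
    """
    n = len(view1)
    embeddings = view1 + view2  # 2n embeddings in total
    loss = 0.0
    for i in range(n):
        for a, b in ((i, n + i), (n + i, i)):  # both directions of each pair
            pos = math.exp(cosine(embeddings[a], embeddings[b]) / temperature)
            denom = sum(
                math.exp(cosine(embeddings[a], embeddings[k]) / temperature)
                for k in range(2 * n) if k != a
            )
            loss += -math.log(pos / denom)
    return loss / (2 * n)
```

Agreement between the two views of the same patch drives the positive term; mismatched views inflate the loss, which is the behavior the pipeline relies on.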
The below example image depicts the usage of the augmentation pipeline where two augmented views are drawn of the same
3D patch:

Multiple axial slices of a 96x96x96 patch are shown before the augmentation (Ref Original Patch in the above figure).
Augmented View 1 & 2 are different augmentations generated via the transforms on the same cubic patch. The objective
of the SSL network is to reconstruct the original top row image from the first view. The contrastive loss
is driven by maximizing agreement of the reconstruction based on input of the two augmented views.
`matshow3d` from `monai.visualize` was used for creating this figure, a tutorial for using can be found [here](https://github.com/Project-MONAI/tutorials/blob/master/modules/transform_visualization.ipynb)
References:
1.) Pathak, Deepak, et al. "Context encoders: Feature learning by inpainting." Proceedings of the IEEE conference on
computer vision and pattern recognition. 2016.
2.) Chen, Liang, et al. "Self-supervised learning for medical image analysis using image context restoration." Medical
image analysis 58 (2019): 101539.
3.) Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference
on machine learning. PMLR, 2020.
### 4. Experiment Hyper-parameters
Training Hyper-Parameters for SSL: \
Epochs: 300 \
Validation Frequency: 2 \
Learning Rate: 1e-4 \
Batch size: 4 3D Volumes (Total of 8 as 2 samples were drawn per 3D Volume) \
Loss Function: L1
Contrastive Loss Temperature: 0.005
Training Hyper-parameters for Fine-tuning BTCV task (All settings have been kept consistent with prior
[UNETR 3D
Segmentation tutorial](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/unetr_btcv_segmentation_3d.ipynb)): \
Number of Steps: 30000 \
Validation Frequency: 100 steps \
Batch Size: 1 3D Volume (4 samples are drawn per 3D volume) \
Learning Rate: 1e-4 \
Loss Function: DiceCELoss
### 5. Training & Validation Curves for pre-training SSL

L1 error reported for training and validation when performing the SSL training. Please note contrastive loss is not
L1.
### 6. Results of the Fine-tuning vs Random Initialization on BTCV
| Training Volumes | Validation Volumes | Random Init Dice score | Pre-trained Dice Score | Relative Performance Improvement |
| ---------------- | ---------------- | ---------------- | ---------------- | ---------------- |
| 6 | 6 | 63.07 | 70.09 | ~11.13% |
| 12 | 6 | 76.06 | 79.55 | ~4.58% |
| 24 | 6 | 78.91 | 82.30 | ~4.29% |
### Citation
```
@article{Arijit Das,
title={Self-supervised learning for medical data},
author={Arijit Das},
journal={https://github.com/das-projects/selfsupervised-learning},
year={2020}
}
```
---
layout: post
category: iOS
title: "How CDN Acceleration and Game Acceleration Work"
subtitle: "How CDN Acceleration and Game Acceleration Work"
date: 2021-03-13 12:00:00
author: "Ted"
header-img: "img/default.jpg"
---
### 1. An ordinary HTTP request

#### 1.1 The HTTP request

A complete HTTP request goes through the following steps:

1. DNS resolution (find the IP address for the domain being visited, searching recursively)
2. HTTP request: when a request is issued, a socket connection is established and the TCP 3-way handshake is performed
3. For an HTTPS request, a security handshake follows once the connection is established.
4. The client sends a request command to the server (usually a GET or POST request)
5. The client sends its request headers
6. The server sends its response headers
7. The server sends the response data to the client
8. The server closes the TCP connection (4-way teardown)
9. The client renders the page from the returned HTML, CSS, and JS

#### 1.2 DNS resolution

During DNS resolution, if the site being visited is "baidu.com", the client first looks up the IP address for that domain in the local hosts file and hosts cache. If the machine has no such entry, it asks the `local DNS` for the IP address of the domain. If the local DNS still has no record for the domain, the local DNS queries the `root DNS`, the `top-level-domain DNS`, and the `authoritative DNS` in turn, and finally the local DNS sends the IP address to the client. The client then sends an HTTP request to the remote origin server at that IP address and fetches the corresponding content.

The above is the process of obtaining the IP address for a domain through the `iterative resolution` mode of DNS and issuing the HTTP request. The origin provider configures the authoritative DNS to bind the origin's domain name to the hosts that serve it, so a client can smoothly obtain the IP address for the origin's domain through DNS and communicate with the origin over that IP.

### 2. CDN

#### 2.1 What a CDN is

A CDN (Content Delivery Network; what gets distributed is origin content such as images, HTML, JS, and CSS) is an intelligent virtual network layered on top of the existing Internet. By deploying node servers throughout the network, origin content is distributed to all CDN nodes so that users can obtain the content they need from a nearby node. CDN service shortens the latency users experience when viewing content and improves the response speed and availability of a website, addressing problems such as limited bandwidth, heavy user traffic, and unevenly distributed points of presence.

CDNs were born more than twenty years ago. As pressure on the backbone network gradually increased and long-haul traffic kept growing, the backbone became more and more loaded and long-haul performance got worse and worse. So in 1995, MIT applied mathematics professor Tom Leighton led graduate student Danny Lewin and several other top researchers in an attempt to solve network congestion with mathematics.

They used mathematical algorithms to handle the dynamic routing of content and ultimately solved the congestion problem plaguing Internet users. Later, Jonathan Seelig, an MBA student at the Sloan School of Management, joined Leighton's team; they then began executing their own business plan and formally founded a company on August 20, 1998, naming it Akamai.

In the same year, 1998, China's first CDN company, ChinaCache (蓝汛), was founded.

#### 2.2 Why CDNs exist

Today's Internet applications contain large amounts of static content, and static plus semi-dynamic content consumes the most bandwidth, especially for large sites serving a whole country or even the world. If all of those requests pointed at the main site's servers, not only would the servers be unable to cope, but roughly 500 Mbps of bandwidth on a single port would not hold up either, so most websites need CDN service.

The fundamental reason is that access speed has a huge impact on an application's user experience, reputation, and even direct revenue, and every business wants its site to load faster. HTTP transfer latency has a large effect on web access speed, and in the vast majority of cases it is decisive; this is determined by characteristics of the TCP/IP protocols. At the physical layer the causes are the finite speed of light and limited channels; at the protocol layer they include packet loss, slow start, and congestion control.

That is the first and most important reason to use a CDN: to speed up access to your site.

Besides speeding up site access, a CDN serves several other purposes:

- Full-network coverage across carriers and regions

Carrier interconnection problems, regional ISP limitations, and constrained egress bandwidth all cause regional unreachability. CDN acceleration can cover routes worldwide: by cooperating with carriers, deploying IDC resources, and sensibly placing CDN edge distribution and storage nodes at national backbone points, it makes full use of bandwidth resources and balances traffic to the origin. Alibaba Cloud, for example, has 500+ nodes in China and 300+ overseas, so covering mainstream countries and regions is not a problem, ensuring stable and fast CDN service.

- Protecting your website

CDN load balancing and distributed storage technology strengthen site reliability, effectively adding an umbrella over your site that can absorb the vast majority of Internet attacks. Anti-attack systems also help the site avoid damage from malicious attacks.

- Geographic failover

When a server fails unexpectedly, the system calls on other nearby healthy server nodes to provide service, delivering close to 100% reliability and letting your site stay up.

- Saving costs

Using CDN acceleration gives a site nationwide reach without purchasing servers or handling subsequent hosting and operations; servers mirror and synchronize with one another, and there is no need to worry about maintenance staff, saving manpower, energy, and money.

- Letting you focus on your core business

CDN vendors generally provide one-stop service. Their business is not limited to CDN; it also includes companion cloud storage, big data services, video cloud services, and so on, usually with 7x24 operations monitoring to keep the network flowing at all times, so you can use them with confidence and devote more energy to developing your core business.
### 3. How CDN acceleration works

A CDN directs our requests for the origin to a cache node close to the user rather than to the origin itself.

During DNS resolution, a `global server load balancing (GSLB)` system is added. The GSLB's main job is to determine the user's location from the IP address of the user's local DNS, select a `server load balancing (SLB)` system close to the user, and return that SLB's IP address as the result to the local DNS. The SLB is mainly responsible for determining whether the `cache server cluster` contains the resource the user requested; if the cache servers hold the requested resource, the SLB selects the optimal cache node based on factors such as the health, load, and connection count of the nodes in the cluster, and redirects the HTTP request to that optimal cache node.

To explain the workings of a CDN more clearly, consider a client issuing an HTTP request for "join.qq.com/video.php":

1. The user issues an HTTP request for "join.qq.com/video.php"; the IP address of "join.qq.com" must first be obtained through the local DNS via "iterative resolution";
2. If the local DNS cache has no record for the domain, it sends a DNS query to the `root DNS`;
3. The `root DNS` sees that the domain ends in "com" and returns the IP address of the `top-level DNS` responsible for `com`;
4. The local DNS sends a DNS query to the `top-level DNS`;
5. The `top-level DNS` sees that the domain ends in "qq.com", looks up the `authoritative DNS` responsible for that suffix in its local records, and replies with its IP address;
6. The local DNS sends a DNS query to the `authoritative DNS`;
7. The authoritative DNS finds a `CNAME record` whose NAME field is "join.qq.com" (configured by the service provider) with a Value field of "join.qq.cdn.com", and also finds an A record whose NAME field is "join.qq.cdn.com" with the GSLB's IP address as its Value;
8. The local DNS sends a DNS query to the GSLB;
9. The GSLB infers from the local DNS's IP address that the user is roughly located in Shenzhen, selects the overall best SLB in South China, and fills its IP address into the DNS response as the final result of the DNS query;
10. The local DNS answers the client's DNS request, returning the IP address from the previous step as the final result;
11. The client sends the HTTP request "[join.qq.com/video.php](https://join.qq.com/video.php)" to the SLB at that IP address;
12. The SLB weighs the resource constraints, health, load, and other factors of the nodes in the cache server cluster, selects the optimal cache node, and answers the client's HTTP request (status code 302, with the redirect address set to the optimal cache node's IP address);
13. On receiving the SLB's HTTP response, the client is redirected to that cache node;
14. The cache node checks whether the requested resource exists and is fresh; if so, it replies to the client with the cached resource directly, and otherwise it refreshes the data from the origin before replying.

The key steps are 6 through 9. Unlike the ordinary DNS process, the service provider (origin) must configure the records in its authoritative DNS, replacing the A record that points directly at the origin with a CNAME record and a corresponding A record: the CNAME record maps the target domain to an alias for the GSLB, and the A record maps that alias to the GSLB's IP address. Through this series of operations, the authority to resolve the origin's target domain is handed to the GSLB, so the GSLB can steer user requests to the nearest "cache node" based on geographic and other information, relieving load pressure and network congestion at the origin.
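A toy Python sketch of the kind of decision a GSLB/SLB makes when picking a node, choosing by region match and then by load; the node data and field names here are invented purely for illustration:

```python
def pick_slb(user_region, nodes):
    # Prefer nodes in the user's region; among those, take the least loaded.
    same_region = [n for n in nodes if n["region"] == user_region]
    candidates = same_region or nodes  # fall back to any node if no match
    return min(candidates, key=lambda n: n["load"])

nodes = [
    {"ip": "1.1.1.1", "region": "south", "load": 0.7},
    {"ip": "2.2.2.2", "region": "south", "load": 0.3},
    {"ip": "3.3.3.3", "region": "north", "load": 0.1},
]
print(pick_slb("south", nodes)["ip"])  # 2.2.2.2
print(pick_slb("west", nodes)["ip"])   # 3.3.3.3 (no regional match; least loaded overall)
```

A real GSLB would also weigh connection counts, health checks, and link quality, but the shape of the decision is similar.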
#### Scenario: the CDN node has the content cached

The HTTP request flow:

1. The user enters the site's domain in the browser, which sends a domain resolution request to the local DNS.
2. The resolution request is forwarded to the website's authoritative DNS server.
3. The website's DNS server finds that the domain has been CNAMEd to www.example.com.c.cdnhwc1.com.
4. The request is directed to the CDN service.
5. The CDN performs intelligent resolution on the domain and returns the IP address of the fastest-responding CDN node to the local DNS.
6. The user obtains the IP address of the fastest-responding CDN node.
7. With that IP address, the browser sends its request to the CDN node.
8. The CDN node returns the requested resource to the user.

#### Scenario: the CDN node has no cached copy

The HTTP request flow:

1. The user enters the site's domain in the browser, which sends a domain resolution request to the local DNS.
2. The resolution request is forwarded to the website's authoritative DNS server.
3. The website's DNS server finds that the domain has been CNAMEd to www.example.com.c.cdnhwc1.com.
4. The request is directed to the CDN service.
5. The CDN performs intelligent resolution on the domain and returns the IP address of the fastest-responding CDN node to the local DNS.
6. The user obtains the IP address of the fastest-responding CDN node.
7. With that IP address, the browser sends its request to the CDN node.
8. The CDN node fetches the requested resource from the origin.
9. The fetched resource is cached at the node.
10. The requested resource is returned to the user.

#### Typical CDN use cases

- Website/application acceleration

In plain terms, static content acceleration: HTML, images, JS, CSS, and so on.

- Video-on-demand and large-file download acceleration

Essentially video on demand: MP4, FLV, and similar video files; domestic services such as Youku, Tudou, Tencent Video, and iQiyi all work the same way.

- Live video acceleration

Live streaming acceleration: stream segmenting, transcoding, bitrate conversion, and so on.

Panda TV, Douyu, Taobao Live

- Mobile application acceleration

Distribution of mobile app update files (APK files), and optimized, accelerated delivery of in-app images, pages, short videos, UGC content, and so on.

iOS and Android apps, WeChat mini programs, Alipay mini programs, and the like.
### 4. How game acceleration works

#### 4.1 Network quality metrics

The goal of game acceleration is to let individual users connect to the game server quickly and make gameplay smoother. Commonly used metrics are:

**(1) Network delay**

Defined as the time a signal takes to travel from one end of the network (such as the player's client) to the other (such as the game server). Based on network delay, lag can be roughly graded as follows:

1-30 ms: excellent; delay is almost imperceptible and any game plays especially smoothly

31-50 ms: good; games play normally with no obvious delay

51-100 ms: fair; in competitive games, players above a certain level can feel the delay, with occasional stutter

100-200 ms: poor; competitive games cannot be played normally, with obvious stutter and occasional packet loss and disconnects

200-500 ms: bad; obvious delay and stutter when browsing pages, with frequent packet loss or failures to connect

\>500 ms: terrible; unacceptable delay and packet loss, and pages may not load at all

\>1000 ms: essentially unreachable

**(2) Packet loss rate (Loss Tolerance or Packet Loss Rate)**

Defined as the ratio of lost packets to the packets sent during a test, computed as (input packets - output packets) / input packets * 100%. Packet loss is related to packet size and send rate. Typically, a gigabit NIC keeps loss below 0.05% at traffic above 200 Mbps, and a 100 Mbps NIC keeps loss below 0.01% at traffic above 60 Mbps.

**(3) Frames per second (FPS)**

Defined as the number of frames transmitted per second, which can be understood as the screen refresh rate, normally no lower than 30 frames per second. Above 60 frames per second the human eye cannot tell the difference; that is, 60 fps and 200 fps look exactly the same to the eye. A high-performance graphics card helps with frame processing.

Other metrics describing network performance include rate, bandwidth, bandwidth-delay product, and throughput.
#### 4.2网游加速方式
网游加速器主要是在骨干网发挥作用,为了提升网络互联速度,网游加速器厂商专门搭建或租用了高带宽的双线机房,架设多个节点服务器,编写[网络加速器](https://www.jinglingip.com/)客户端,借助节点服务器来高效完成玩家的跨网连接游戏服务器请求。网络加速器客户端能够自动识别用户的网络线路类型(电信或联通),自动选择速度最快的节点服务器进行数据转发,从而达到数据加速作用。
网游加速可以采用两种方式来实现。
其一是VPN
它需要部署双线VPN服务器作为加速节点,用于电信和联通之间的自动快速切换。客户端通过加速服务器自动选择速度最快的服务器。它需要拨号连接到VPN服务器并获取一个虚拟IP地址,通过修改路由表的方式,将指定进程的网络访问路由到虚拟IP上,而其余地址仍经过原默认路由途径访问。
其二是代理服务器。
它通过部署SOCKS5代理服务器作为加速节点,使得客户端能够自动选择最快的代理服务器作为当前的转发节点。在客户端,该方式主要采用LSP技术,在用户的主机安装分层协议。当在游戏客户端调用connect函数试图连接游戏服务器时,LSP将该连接重定向到代理服务器,并采用SOCKS5协议规范与代理服务器进行数据协商,由代理服务器来连接真正的游戏服务器,最后将游戏服务器的数据原封不动转发给用户或将用户的数据原封不动转发给游戏服务器。
#### 4.3 Game Acceleration in Practice
An accelerator service provider runs a high-speed server with an extremely low-latency link to the game server. The local accelerator client, using LSP or VPN techniques, routes the game process through the provider's server, which effectively opens up a far less crowded path, so the latency that matters becomes the latency between the user's computer and the accelerator server.
The Xunyou mobile game accelerator works as follows:
Using VPN technology, the user dials in through a login server, with the account and password supplied by the accelerator vendor, to a server with dual-line bandwidth, establishes a connection, and changes the current network environment. When accessing a target (for example, a URL), the request is forwarded once through the node server, producing the acceleration effect.
Anycast acceleration:
An Anycast IP can itself act as a game accelerator: game requests enter Tencent Cloud at the nearest point and travel over Tencent Cloud's internal backbone to the game server, greatly shortening the public-internet path and reducing latency, jitter, and packet loss. Compared with traditional acceleration, the IP entry point needs no extra traffic-receiving equipment, and the IP does not have to be region-specific, which simplifies DNS deployment.
# Terraform::VSphere::HaVmOverride
The `Terraform::VSphere::HaVmOverride` resource can be used to add an override for
vSphere HA settings on a cluster for a specific virtual machine. With this
resource, one can control specific HA settings so that they are different than
the cluster default, accommodating the needs of that specific virtual machine,
while not affecting the rest of the cluster.
For more information on vSphere HA, see [this page][ref-vsphere-ha-clusters].
[ref-vsphere-ha-clusters]: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.avail.doc/GUID-5432CA24-14F1-44E3-87FB-61D937831CF6.html
~> **NOTE:** This resource requires vCenter and is not available on direct ESXi
connections.
## Properties
## See Also
* [vsphere_ha_vm_override](https://www.terraform.io/docs/providers/vsphere/r/ha_vm_override.html) in the _Terraform Provider Documentation_
---
title: Manage your Microsoft Azure StorSimple Virtual Array | Microsoft Docs
description: Learn how to manage your on-premises StorSimple Virtual Array with the StorSimple Device Manager service in the Azure portal.
services: storsimple
documentationcenter: ''
author: alkohli
manager: carmonm
editor: ''
ms.assetid: 958244a5-f9f5-455e-b7ef-71a65558872e
ms.service: storsimple
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 12/1/2016
ms.author: alkohli
ms.openlocfilehash: bb6bb491ca71e5ced5aecc8137e9e1cbd950e80b
ms.sourcegitcommit: d4dfbc34a1f03488e1b7bc5e711a11b72c717ada
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 06/13/2019
ms.locfileid: "62123805"
---
# <a name="use-the-storsimple-device-manager-service-to-administer-your-storsimple-virtual-array"></a>Use the StorSimple Device Manager service to administer your StorSimple Virtual Array

## <a name="overview"></a>Overview
This article describes the StorSimple Device Manager service interface, including how to connect to it, the various options available, and links to the specific workflows that can be performed through this UI.
After reading this article, you will know how to:
* Connect to the StorSimple Device Manager service
* Navigate the StorSimple Device Manager UI
* Administer your StorSimple Virtual Array through the StorSimple Device Manager service
> [!NOTE]
> To see the management options available for a StorSimple 8000 series device, go to [Use the StorSimple Manager service to administer your StorSimple device](storsimple-manager-service-administration.md).
>
>
## <a name="connect-to-the-storsimple-device-manager-service"></a>Connect to the StorSimple Device Manager service
The StorSimple Device Manager service runs in Microsoft Azure and connects to multiple StorSimple Virtual Arrays. You use a central Microsoft Azure portal, running in a browser, to manage these devices. To connect to the StorSimple Device Manager service, do the following.
#### <a name="to-connect-to-the-service"></a>To connect to the service
1. Go to [https://ms.portal.azure.com](https://ms.portal.azure.com).
2. Sign in to the Microsoft Azure portal with your Microsoft account credentials (the sign-in is located in the upper-right corner of the pane).
3. Browse and filter to StorSimple Device Managers to view the device managers in a given subscription.
## <a name="use-the-storsimple-device-manager-service-to-perform-management-tasks"></a>Use the StorSimple Device Manager service to perform management tasks
The following table summarizes the common management tasks and complex workflows that can be performed from within the StorSimple Device Manager service summary blade. The tasks are organized by the blades on which they are initiated.
For more information about each workflow, click the appropriate procedure in the table.
#### <a name="storsimple-device-manager-workflows"></a>StorSimple Device Manager workflows
| If you want to do this... | Use this procedure |
| --- | --- |
| Create a service</br>Delete a service</br>Get the service registration key</br>Regenerate the service registration key |[Deploy the StorSimple Device Manager service](storsimple-virtual-array-manage-service.md) |
| View the activity logs |[Use the StorSimple service summary](storsimple-virtual-array-service-summary.md) |
| Deactivate the virtual array</br>Delete the virtual array |[Deactivate or delete the virtual array](storsimple-virtual-array-deactivate-and-delete-device.md) |
| Disaster recovery and device failover</br>Failover prerequisites</br>Business continuity disaster recovery (BCDR)</br>Errors during disaster recovery |[Disaster recovery and device failover for your StorSimple Virtual Array](storsimple-virtual-array-failover-dr.md) |
| Back up shares and volumes</br>Take a manual backup</br>Change the backup schedule</br>View existing backups |[Back up your StorSimple Virtual Array](storsimple-virtual-array-backup.md) |
| Clone a share from a backup set</br>Clone a volume from a backup set</br>Item-level recovery (file server only) |[Clone from a backup of your StorSimple Virtual Array](storsimple-virtual-array-clone.md) |
| About storage accounts</br>Add a storage account</br>Edit a storage account</br>Delete a storage account |[Manage storage accounts for your StorSimple Virtual Array](storsimple-virtual-array-manage-storage-accounts.md) |
| About access control records</br>Add or modify an access control record</br>Delete an access control record |[Manage access control records for your StorSimple Virtual Array](storsimple-virtual-array-manage-acrs.md) |
| View job details |[Manage jobs on your StorSimple Virtual Array](storsimple-virtual-array-manage-jobs.md) |
| Configure alert settings</br>Receive alert notifications</br>Manage alerts</br>Review alerts |[View and manage alerts on your StorSimple Virtual Array](storsimple-virtual-array-manage-alerts.md) |
| Change the device administrator password |[Change the device administrator password for your StorSimple Virtual Array](storsimple-virtual-array-change-device-admin-password.md) |
| Install software updates |[Update your virtual array](storsimple-virtual-array-install-update.md) |
> [!NOTE]
> You must use the [local web UI](storsimple-ova-web-ui-admin.md) for the following tasks:
>
> * [Get the service data encryption key](storsimple-ova-web-ui-admin.md#get-the-service-data-encryption-key)
> * [Generate a support package](storsimple-ova-web-ui-admin.md#generate-a-log-package)
> * [Shut down and restart your virtual array](storsimple-ova-web-ui-admin.md#shut-down-and-restart-your-device)
>
>
## <a name="next-steps"></a>Next steps
To learn more about the web UI and how to use it, see [Use the StorSimple web UI to administer your StorSimple Virtual Array](storsimple-ova-web-ui-admin.md).
---
uid: web-forms/videos/how-do-i/how-do-i-extend-and-customize-an-aspnet-server-control-for-a-specific-purpose
title: '[How Do I:] Extend and customize an ASP.NET server control for a specific purpose | Microsoft Docs'
author: rick-anderson
description: In this video Chris Pels shows how to extend a standard ASP.NET server control and customize it for a specific purpose. Specialized controls provide a c...
ms.author: riande
ms.date: 05/20/2008
ms.assetid: ed460e6b-8f4e-4fcb-83c4-2495180c1f14
msc.legacyurl: /web-forms/videos/how-do-i/how-do-i-extend-and-customize-an-aspnet-server-control-for-a-specific-purpose
msc.type: video
ms.openlocfilehash: 3562e9c4ec994f04b312476c1357d810f4b5e28a
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/06/2020
ms.locfileid: "78567880"
---
# <a name="how-do-i-extend-and-customize-an-aspnet-server-control-for-a-specific-purpose"></a>[How Do I:] Extend and customize an ASP.NET server control for a specific purpose
by [Chris Pels](https://twitter.com/chrispels)
In this video Chris Pels shows how to extend a standard ASP.NET server control and customize it for a specific purpose. Specialized controls provide a convenient way to implement standard user-interface elements across many websites for individual developers or teams. In this example, see how to extend the DropDownList control to create a special year-selection control. Learn how to add custom attribute properties that control the behavior of the range of years that can be displayed. Then see how those custom attributes can be set declaratively, just like the control's standard attributes. See how several additional properties can be added to provide extra functionality for controlling the behavior of the list. Finally, see how the extended ASP.NET server control can be moved into a separate assembly so it can be used in multiple websites.
[▶ Watch video (26 minutes)](https://channel9.msdn.com/Blogs/ASP-NET-Site-Videos/how-do-i-extend-and-customize-an-aspnet-server-control-for-a-specific-purpose)
---
# required metadata
title: Conditional access with Microsoft Intune
titlesuffix:
description: Learn how Intune conditional access is commonly used for device-based and app-based conditional access.
keywords:
author: msmimart
ms.author: mimart
manager: dougeby
ms.date: 02/22/2018
ms.topic: get-started-article
ms.prod:
ms.service: microsoft-intune
ms.technology:
ms.assetid: a0b8e55e-c3d8-4599-be25-dc10c1027b62
# optional metadata
#ROBOTS:
#audience:
#ms.devlang:
#ms.reviewer:
ms.suite: ems
#ms.tgt_pltfrm:
ms.custom: intune-azure
---
# What are common ways to use conditional access with Intune?
[!INCLUDE[azure_portal](./includes/azure_portal.md)]
There are two types of conditional access with Intune: device-based conditional access and app-based conditional access. You need to configure the related compliance policies to drive conditional access compliance at your organization. Conditional access is commonly used to do things like allow or block access to Exchange on-premises, control access to the network, or integrate with a Mobile Threat Defense solution.
The below information helps you understand how to use the Intune mobile *device* compliance capabilities and the Intune mobile *application* management (MAM) capabilities.
## Device-based conditional access
Intune and Azure Active Directory work together to make sure only managed and compliant devices are allowed access to email, Office 365 services, Software as a service (SaaS) apps, and [on-premises apps](https://docs.microsoft.com/azure/active-directory/active-directory-application-proxy-get-started). Additionally, you can set a policy in Azure Active Directory to only enable computers that are domain-joined, or mobile devices that are enrolled in Intune to access Office 365 services.
Intune provides device compliance policy capabilities that evaluate the compliance status of the devices. The compliance status is reported to Azure Active Directory that uses it to enforce the conditional access policy created in Azure Active Directory when the user tries to access company resources.
Device-based conditional access policies for Exchange online and other Office 365 products are configured through the [Azure portal](https://docs.microsoft.com/intune-azure/introduction/what-is-microsoft-intune).
- Learn more about [conditional access in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-conditional-access-azure-portal).
- Learn more about [Intune device compliance](device-compliance.md).
- Learn more about [protecting e-mail, Office 365, and other services using conditional access with Intune](https://docs.microsoft.com/intune-classic/deploy-use/restrict-access-to-email-and-o365-services-with-microsoft-intune).
### Conditional access for Exchange on-premises
Conditional access can be used to allow or block access to **Exchange on-premises** based on the device compliance policies and enrollment state. When conditional access is used in combination with a device compliance policy, only compliant devices are allowed access to Exchange on-premises.
You can configure advanced settings in conditional access for more granular control such as:
- Allow or block certain platforms.
- Immediately block devices that are not managed by Intune.
Any device used to access Exchange on-premises is checked for compliance when device compliance and conditional access policies are applied.
When devices do not meet the conditions set, the end user is guided through the process of enrolling the device to fix the issue that is making the device noncompliant.
#### How conditional access for Exchange on-premises works
The Intune Exchange connector pulls in all the Exchange ActiveSync (EAS) records that exist on the Exchange server so that Intune can map these EAS records to Intune device records. These records are devices enrolled in and recognized by Intune. This process allows or blocks e-mail access.
If the EAS record is brand new and Intune is not aware of it, Intune issues a cmdlet that blocks access to e-mail. Here are more details on how this process works:

1. User tries to access corporate email, which is hosted on Exchange on-premises 2010 SP1 or later.
2. If the device is not managed by Intune, it will be blocked access to email. Intune sends block notification to the EAS client.
3. EAS receives block notification, moves the device to quarantine, and sends the quarantine email with remediation steps that contain links so the users can enroll their devices.
4. The Workplace join process happens, which is the first step to have the device managed by Intune.
5. The device gets enrolled into Intune.
6. Intune maps the EAS record to a device record, and saves the device compliance state.
7. The EAS client ID gets registered by the Azure AD Device Registration process, which creates a relationship between the Intune device record, and the EAS client ID.
8. The Azure AD Device Registration saves the device state information.
9. If the user meets the conditional access policies, Intune issues a cmdlet through the Intune Exchange connector that allows the mailbox to sync.
10. Exchange server sends the notification to EAS client so the user can access e-mail.
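The allow/block outcome of steps 1-10 reduces to a small decision function. The sketch below is a simplified illustration of the gating logic, not the actual Intune implementation:

```python
def exchange_access_decision(enrolled: bool, compliant: bool) -> str:
    """Simplified EAS gating: unmanaged -> quarantine, noncompliant -> block."""
    if not enrolled:
        return "quarantine"   # the user receives the remediation e-mail (steps 2-3)
    if not compliant:
        return "block"
    return "allow"            # mailbox sync is permitted (steps 9-10)
```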
#### What’s the Intune role?
Intune evaluates and manages the device state.
#### What’s the Exchange server role?
Exchange Server provides the API and infrastructure to move devices to quarantine.
> [!IMPORTANT]
> Keep in mind that the user who's using the device must have a compliance profile assigned to them for the device to be evaluated for compliance. If no compliance policy is deployed to the user, the device is treated as compliant and no access restrictions are applied.
### Conditional access based on network access control
Intune integrates with partners like Cisco ISE, Aruba ClearPass, and Citrix NetScaler to provide access controls based on Intune enrollment and the device compliance state.
Users can be allowed or denied access when trying to access corporate Wi-Fi or VPN resources based on whether the device is managed and compliant with Intune device compliance policies.
- Learn more about the [NAC integration with Intune](network-access-control-integrate.md).
### Conditional access based on device risk
Intune partners with Mobile Threat Defense vendors that provide security solutions to detect malware, Trojans, and other threats on mobile devices.
#### How the Intune and Mobile Threat Defense integration works
When mobile devices have the Mobile Threat Defense agent installed, the agent can send compliance state messages back to Intune, reporting whether a threat has been found on the mobile device itself.
The Intune and Mobile Threat Defense integration factors device risk into conditional access decisions.
- Learn more about [Intune mobile threat defense](https://docs.microsoft.com/intune-classic/deploy-use/mobile-threat-defense).
### Conditional access for Windows PCs
Conditional access for PCs provides capabilities similar to those available for mobile devices. Let's talk about the ways you can use conditional access when managing PCs with Intune.
#### Corporate-owned
- **On-premises AD domain joined:** This has been the most common conditional access deployment option for organizations that are reasonably comfortable with it because they already manage their PCs through AD group policies and/or System Center Configuration Manager.
- **Azure AD domain joined and Intune management:** This scenario is typically geared toward Choose Your Own Device (CYOD) and roaming-laptop scenarios where devices are rarely connected to the corporate network. The device joins Azure AD and is enrolled in Intune, which removes any dependency on on-premises AD and domain controllers. This can be used as conditional access criteria when accessing corporate resources.
- **AD domain joined and System Center Configuration Manager:** As of the current branch, System Center Configuration Manager provides conditional access capabilities that can evaluate specific compliance criteria, in addition to the PC being domain-joined:
- Is the PC encrypted?
- Is malware installed? Is it up-to-date?
- Is the device jailbroken or rooted?
#### Bring your own device (BYOD)
- **Workplace join and Intune management:** Here users can join their personal devices to access corporate resources and services. You can use Workplace join and enroll devices in Intune to receive device-level policies, which is another option for evaluating conditional access criteria.
## App-based conditional access
Intune and Azure Active Directory work together to make sure only managed apps can access corporate e-mail or other Office 365 services.
- Learn more about [app-based conditional access with Intune](app-based-conditional-access-intune.md).
## Next steps
[How to configure conditional access in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-conditional-access-azure-portal)
[How to install on-premises Exchange connector with Intune](https://docs.microsoft.com/intune/exchange-connector-install).
[How to create a conditional access policy for Exchange on-premises](conditional-access-exchange-create.md)
# FBMExchange
**Fractional Brownian Motion**
A simulated exchange, in which the price history is based off a fractional brownian motion model with supplied parameters.
## What is Fractional Brownian Motion?
Fractional Brownian motion belongs to a family of random processes called stochastic processes.
`Stochastic processes` are collections of random variables that describe how a system evolves over time. Their power is that they can be used to model much of the world around us. In fact, in the early 1900s one of the first practical uses of stochastic processes was valuing stock options: the Brownian motion model, developed by the French mathematician Louis Bachelier. The same mathematics can also describe the random interactions of molecules over time. We use this method to address reinforcement learning's sample-inefficiency problem.
We generate prices, and train on that. Simple.
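As an illustration, here is a tiny synthetic price generator. It uses ordinary Brownian (H = 0.5) increments, the special case of fractional Brownian motion with no long-range dependence, and it is only a sketch, not the generator `FBMExchange` actually uses:

```python
import math
import random

def simulate_prices(n_steps: int, start_price: float = 100.0,
                    sigma: float = 0.02, seed: int = 42) -> list:
    """Geometric random walk: each step multiplies the price by exp(noise)."""
    rng = random.Random(seed)
    prices = [start_price]
    for _ in range(n_steps):
        prices.append(prices[-1] * math.exp(rng.gauss(0.0, sigma)))
    return prices
```

A true fractional Brownian motion generalizes this by correlating the increments through a Hurst exponent H, which is what gives the simulated market its "trendiness" or "mean reversion".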
## Class Parameters
- `base_instrument`
- The exchange symbol of the instrument to store/measure value in.
- `dtype`
- A type or str corresponding to the dtype of the `observation_space`.
- `feature_pipeline`
- A pipeline of feature transformations for transforming observations.
## Properties and Setters
- `base_instrument`
- The exchange symbol of the instrument to store/measure value in.
- `dtype`
- A type or str corresponding to the dtype of the `observation_space`.
- `feature_pipeline`
- A pipeline of feature transformations for transforming observations.
- `base_precision`
- The floating point precision of the base instrument.
- `instrument_precision`
- The floating point precision of the instrument to be traded.
- `initial_balance`
- The initial balance of the base symbol on the exchange.
- `balance`
- The current balance of the base symbol on the exchange.
- `portfolio`
- The current balance of each symbol on the exchange (non-positive balances excluded).
- `trades`
- A list of trades made on the exchange since the last reset.
- `performance`
- The performance of the active account on the exchange since the last reset.
- `generated_space`
- The initial shape of the observations generated by the exchange, before feature transformations.
- `observation_columns`
- The list of column names of the observation data frame generated by the exchange, before feature transformations.
- `observation_space`
- The final shape of the observations generated by the exchange, after feature transformations.
- `net_worth`
- Calculate the net worth of the active account on the exchange.
- `profit_loss_percent`
- Calculate the percentage change in net worth since the last reset.
- `has_next_observation`
- If `False`, the exchange's data source has run out of observations.
- Resetting the exchange may be necessary to continue generating observations.
## Functions
Below are the functions that the `FBMExchange` uses to effectively operate.
### Private
- `_create_observation_generator`
### Public
- `has_next_observation`
- Returns whether the exchange's data source can produce another observation.
- `next_observation`
- Generate the next observation from the exchange.
- `instrument_balance`
- The current balance of the specified symbol on the exchange, denoted in the base instrument.
- `current_price`
- The current price of an instrument on the exchange, denoted in the base instrument.
- `execute_trade`
- Execute a trade on the exchange, accounting for slippage.
- `reset`
- Reset the feature pipeline, initial balance, trades, performance, and any other temporary stateful data.
## Use Cases
**Use Case #1: Generate Price History for Exchange**
We generate the price history when we instantiate the exchange:
```py
from tensortrade.exchanges.simulated import FBMExchange
exchange = FBMExchange(base_instrument='BTC', timeframe='1h')
```
## Find Greatest Common Divisor of Array
Difficulty: Easy
Problem link: https://leetcode.com/problems/find-greatest-common-divisor-of-array/
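The problem asks for the greatest common divisor of the smallest and largest numbers in the array, so it reduces to a single `gcd` call:

```python
from math import gcd

def find_gcd(nums: list) -> int:
    """GCD of the smallest and largest numbers in the array."""
    return gcd(min(nums), max(nums))
```

For example, `find_gcd([2, 5, 6, 9, 10])` returns `gcd(2, 10) = 2`.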
---
UID: NF:pepfx.PoFxRegisterPluginEx
title: PoFxRegisterPluginEx function (pepfx.h)
description: The PoFxRegisterPluginEx routine registers a platform extension plug-in (PEP) with the Windows power management framework (PoFx).
old-location: kernel\pofxregisterpluginex.htm
tech.root: kernel
ms.assetid: 68753690-A6DC-46BE-9981-F395B98C3245
ms.date: 04/30/2018
ms.keywords: PoFxRegisterPluginEx, PoFxRegisterPluginEx routine [Kernel-Mode Driver Architecture], kernel.pofxregisterpluginex, pepfx/PoFxRegisterPluginEx
ms.topic: function
req.header: pepfx.h
req.include-header: Pep_x.h
req.target-type: Windows
req.target-min-winverclnt: Available starting with Windows 10.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Ntoskrnl.lib
req.dll:
req.irql: PASSIVE_LEVEL
topic_type:
- APIRef
- kbSyntax
api_type:
- LibDef
api_location:
- ntoskrnl.lib
- ntoskrnl.dll
api_name:
- PoFxRegisterPluginEx
product:
- Windows
targetos: Windows
req.typenames:
---
# PoFxRegisterPluginEx function
## -description
The <b>PoFxRegisterPluginEx</b> routine registers a platform extension plug-in (PEP) with the Windows <a href="https://msdn.microsoft.com/B08F8ABF-FD43-434C-A345-337FBB799D9B">power management framework</a> (PoFx).
## -parameters
### -param PepInformation [in]
A pointer to a <a href="https://msdn.microsoft.com/library/windows/hardware/mt186745">PEP_INFORMATION</a> structure that contains pointers to one or more callback routines that are implemented by the PEP. These routines handle notifications that are sent to the PEP by PoFx.
### -param Flags [in]
A set of flag bits for configuring the PEP interface. Set this member to zero or to the following value.
<table>
<tr>
<th>Flag bit</th>
<th>Description</th>
</tr>
<tr>
<td>PEP_FLAG_WORKER_CONCURRENCY</td>
<td></td>
</tr>
</table>
### -param KernelInformation [in, out]
A pointer to a <a href="https://msdn.microsoft.com/library/windows/hardware/mt629114">PEP_KERNEL_INFORMATION</a> structure.
## -returns
<b>PoFxRegisterPluginEx</b> returns STATUS_SUCCESS if the call successfully registers the PEP. Possible error return values include the following status codes.
<table>
<tr>
<th>Return value</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt>STATUS_INVALID_PARAMETER</dt>
</dl>
</td>
<td width="60%">
The <b>Version</b> or <b>Size</b> member of the <b>PEP_KERNEL_INFORMATION</b> structure is set to an invalid value; or the <b>AcceptDeviceNotification</b> member of this structure is set to NULL.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt>STATUS_INVALID_PEP_INFO_VERSION</dt>
</dl>
</td>
<td width="60%">
The <b>Version</b> member of the <b>PEP_INFORMATION</b> structure is set to an invalid value.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt>STATUS_INSUFFICIENT_RESOURCES</dt>
</dl>
</td>
<td width="60%">
Unable to allocate the resources required to complete the requested registration.
</td>
</tr>
</table>
## -remarks
A PEP calls this routine to register itself with PoFx.
A PEP cannot unregister, and cannot register twice. If the PEP must be serviced, the operating system must restart.
The <a href="https://msdn.microsoft.com/library/windows/hardware/mt186873">PoFxRegisterPlugin</a> routine is similar to <b>PoFxRegisterPluginEx</b>, except that it does not take a <i>Flags</i> parameter.
The PEP must call <b>PoFxRegisterPluginEx</b> at IRQL = PASSIVE_LEVEL.
## -see-also
<a href="https://msdn.microsoft.com/library/windows/hardware/mt186745">PEP_INFORMATION</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/mt629114">PEP_KERNEL_INFORMATION</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/mt186873">PoFxRegisterPlugin</a>
| 22.833333 | 274 | 0.750261 | eng_Latn | 0.465252 |
7ae53cccf281e010c84a2c810777405d47fac9a8 | 1,883 | md | Markdown | README.md | sefx5ever/SCU_Micro-bit_Learning | 50d03efce6b9681c8e7056eaab3719797b377322 | [
"MIT"
] | 1 | 2020-04-09T23:40:10.000Z | 2020-04-09T23:40:10.000Z | README.md | sefx5ever/SCU_Micro-bit_Learning | 50d03efce6b9681c8e7056eaab3719797b377322 | [
"MIT"
] | null | null | null | README.md | sefx5ever/SCU_Micro-bit_Learning | 50d03efce6b9681c8e7056eaab3719797b377322 | [
"MIT"
] | null | null | null |
## Lesson : Introduction of Internet of Things
To learn the concepts of the Internet of Things in a practice-based way.
* What do I learn:
* The fundamentals of the Micro:bit and how it works.
* The additional devices for collecting data, for instance the DHT11.
* The webhook (Google Script, IFTTT, ThingSpeak) setup.
- My PowerPoint
- [TensorFlowLite_For_IoT-Devices](https://github.com/sefx5ever/SCU_Micro-bit_Learning/blob/master/06170171_%E9%99%B3%E5%81%89%E5%82%91_TensorFlowLite_For_IoT-Devices.pptx)
- [智能小管家](https://github.com/sefx5ever/SCU_Micro-bit_Learning/blob/master/IoT_%E8%B5%B0%E5%87%BA%E8%87%AA%E5%B7%B1%E7%9A%84%E7%89%B9%E8%89%B2%E9%9A%8A_%E5%B0%8F%E7%AE%A1%E5%AE%B6.pptx)
## My Project
| No. | Project | Description |
| --- | --- | --- |
| 1. | DHT11_警告燈.js | The DHT11 collects the humidity and the micro bit collects the temperature. The bulb lights up red when the temperature is over 32 degrees Celsius; otherwise it shows a green light. |
| 2. | javascript_LED字串顯示.js | The micro bit acts like a name card, showing my personal details. |
| 3. | 創意紅綠燈.js | The micro bit runs forever, showing the traffic light sequence on the LED bulbs. |
| 4. | 圓周長圓面積.js | The micro bit runs in the background, calculating the circumference and the area of a circle. |
| 5. | 外接LED呼吸燈.js | The LED bulb lights up in a given pattern. |
| 6. | 指南針.js | The micro bit will automatically detect the direction and show the current position. |
| 7. | 擴充LED夜燈.js | The basic manipulation on LED bulb. |
| 8. | 星空中的螢火蟲.js | The LED lights up permanently when it detects the current position. |
| 9. | 智能小管家.js | A functional assistant which can be a timer, game, temperature management, etc. |
| 10. | 水平儀.js | An X-axis floating game. |
| 11. | 洗澡的溫濕度.js | Automatically detects the temperature and humidity, then uploads them to a Google Sheet. |
| 12. | 計時器.js | A stupid timer. |
| 67.25 | 223 | 0.724907 | eng_Latn | 0.932282 |
7ae690395301f44e5a3d34e65b6c009c10fc01e0 | 247 | md | Markdown | components/action-button/README.md | vincentv/platform6-ui-components | addff705f5c632d453b32b5fb32ba3ee2b55cf69 | [
"MIT"
] | 10 | 2018-03-29T15:31:21.000Z | 2020-08-07T08:14:43.000Z | components/action-button/README.md | vincentv/platform6-ui-components | addff705f5c632d453b32b5fb32ba3ee2b55cf69 | [
"MIT"
] | 164 | 2018-03-09T12:56:41.000Z | 2022-02-18T14:48:39.000Z | components/action-button/README.md | vincentv/platform6-ui-components | addff705f5c632d453b32b5fb32ba3ee2b55cf69 | [
"MIT"
] | 3 | 2018-08-25T18:14:58.000Z | 2020-04-30T16:23:01.000Z |
# ActionButton
An icon button used for compact interfaces. A description can be added as a tooltip and becomes visible when hovering over the button.
## Install
To install the ActionButton component run:
```terminal
npm install --save @amalto/action-button
``` | 27.444444 | 119 | 0.777328 | eng_Latn | 0.989855 |
7ae6ba284588a6ca9ffb37d991e792d874518abc | 3,111 | md | Markdown | tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md | steven0820/tensorflow | 36ebbf1ddc3ed820b7a5572ff4ed8e9bc707b8e5 | [
"Apache-2.0"
] | 5 | 2018-03-22T06:56:15.000Z | 2018-09-04T02:41:35.000Z | tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md | srivatsan-ramesh/tensorflow | 36ebbf1ddc3ed820b7a5572ff4ed8e9bc707b8e5 | [
"Apache-2.0"
] | 1 | 2021-04-12T03:51:59.000Z | 2021-04-12T03:51:59.000Z | tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md | srivatsan-ramesh/tensorflow | 36ebbf1ddc3ed820b7a5572ff4ed8e9bc707b8e5 | [
"Apache-2.0"
] | 5 | 2018-02-27T00:34:23.000Z | 2022-02-28T16:38:08.000Z | Session-like object that handles initialization, recovery and hooks.
Example usage:
```python
saver_hook = CheckpointSaverHook(...)
summary_hook = SummaryHook(...)
with MonitoredSession(session_creator=ChiefSessionCreator(...),
hooks=[saver_hook, summary_hook]) as sess:
while not sess.should_stop():
sess.run(train_op)
```
Initialization: At creation time the monitored session does following things
in given order:
* calls `hook.begin()`
* finalizes the graph via `scaffold.finalize()`
* create session
* initializes the model via initialization ops provided by `Scaffold`
* restores variables if a checkpoint exists
* launches queue runners
Run: When `run()` is called, the monitored session does following things:
* calls `hook.before_run()`
* calls TensorFlow `session.run()` with merged fetches and feed_dict
* calls `hook.after_run()`
* returns result of `session.run()` asked by user
* if `AbortedError` occurs, it recovers or reinitializes the session before
executing the run() call again
Exit: At the `close()`, the monitored session does following things in order:
* calls `hook.end()`
* closes the queue runners and the session
* surpresses `OutOfRange` error which indicates that all inputs have been
processed if the monitored_session is used as a context.
How to set `tf.Session` arguments:
* In most cases you can set session arguments as follows:
```python
MonitoredSession(
session_creator=ChiefSessionCreator(master=..., config=...))
```
* In distributed setting for a non-chief worker, you can use following:
```python
MonitoredSession(
session_creator=WorkerSessionCreator(master=..., config=...))
```
See `MonitoredTrainingSession` for an example usage based on chief or worker.
- - -
#### `tf.train.MonitoredSession.__enter__()` {#MonitoredSession.__enter__}
- - -
#### `tf.train.MonitoredSession.__exit__(exception_type, exception_value, traceback)` {#MonitoredSession.__exit__}
- - -
#### `tf.train.MonitoredSession.__init__(session_creator=None, hooks=None)` {#MonitoredSession.__init__}
Creates a MonitoredSession.
##### Args:
* <b>`session_creator`</b>: A factory object to create session. Typically a
`ChiefSessionCreator` which is the default one.
* <b>`hooks`</b>: An iterable of `SessionRunHook' objects.
- - -
#### `tf.train.MonitoredSession.close()` {#MonitoredSession.close}
- - -
#### `tf.train.MonitoredSession.graph` {#MonitoredSession.graph}
The graph that was launched in this session.
- - -
#### `tf.train.MonitoredSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#MonitoredSession.run}
Run ops in the monitored session.
This method is completely compatible with the `tf.Session.run()` method.
##### Args:
* <b>`fetches`</b>: Same as `tf.Session.run()`.
* <b>`feed_dict`</b>: Same as `tf.Session.run()`.
* <b>`options`</b>: Same as `tf.Session.run()`.
* <b>`run_metadata`</b>: Same as `tf.Session.run()`.
##### Returns:
Same as `tf.Session.run()`.
- - -
#### `tf.train.MonitoredSession.should_stop()` {#MonitoredSession.should_stop}
| 25.710744 | 118 | 0.715526 | eng_Latn | 0.864621 |
7ae7335b24f0c931cf2fa782c15ebe4523667b72 | 9,820 | md | Markdown | pages/blog/98-popper-interview.md | survivejs/site | 9ba522b765940721d7e1e0c6ae3b07cea0beba16 | [
"MIT"
] | 90 | 2015-08-07T05:24:17.000Z | 2022-03-14T17:27:29.000Z | pages/blog/98-popper-interview.md | survivejs/site | 9ba522b765940721d7e1e0c6ae3b07cea0beba16 | [
"MIT"
] | 99 | 2015-07-17T10:55:21.000Z | 2022-03-08T21:30:28.000Z | pages/blog/98-popper-interview.md | survivejs/site | 9ba522b765940721d7e1e0c6ae3b07cea0beba16 | [
"MIT"
] | 31 | 2015-07-26T20:21:32.000Z | 2019-11-15T05:28:06.000Z | ---
title: 'Popper.js - Easy Tooltips and Popovers - Interview with Federico Zivolo'
date: 2017-05-29
headerImage: 'assets/img/pencils.jpg'
keywords: ['interview', 'javascript']
---
There are times when a vanilla `<abbr>` or `<acronym>` doesn't cut it. What if you want to do something more complex?
[Popper.js](https://popper.js.org/) by [Federico Zivolo](https://twitter.com/FezVrasta) achieves exactly this. Read on to learn more.
## Can you tell a bit about yourself?
<p>
<span class="author">
<img src="https://www.gravatar.com/avatar/52648ca9bee250edf351385c1e87416c?s=200" alt="Federico Zivolo" class="author" width="100" height="100" />
</span>
I'm Federico (Fez) Zivolo, UI Specialist at Quid. Born in Italy, I live in Budapest now. I like to help with open source projects on GitHub and I maintain some created by me.
</p>
## How would you describe *Popper.js* to someone who has never heard of it?
Popper.js is a library to help you position tooltips, popovers, dropdowns and any contextual element that should appear near a button or similar (I call them "poppers").
In short, it's a piece of code that saves you hours of work on any of your projects, since almost all of them end up featuring some "popper".
## How does *Popper.js* work?
That's a good question; I'm still trying to figure it out!
Jokes apart, the principle is pretty straightforward. It takes a reference element (usually a button) and a popper element (any element you want to position), it finds out a common offset parent, computes the position of the reference element relative to such parent, and then generates a set of coordinates use to position the popper element.
The hardest part is to consider a whole set of edge cases which range from cross browser compatibilities to box model capillarities, including taking care of the scrollable elements.
The usage is simple:
```js
new Popper(referenceElement, popperElement);
```
This code will position the `popperElement` on the bottom of the provided `referenceElement`. Also, you already have access to all the built-in features of the library.
The line also achieves the following:
* If the `referenceElement` is too close to the bottom of the viewport, the `popperElement` will be positioned on top of it instead.
* If the two elements are positioned in two different parents, Popper.js will take care of it and will still properly position the popper element correctly.
* It handles scrollable elements and page resizes.
## How does *Popper.js* differ from other solutions?
There aren't a lot of available solutions and they all cover a small subset of cases that are instead adequately addressed by Popper.js. The main difference is in the fact that my library doesn't need to manipulate the DOM directly to work.
This fact leads to two strengths: it doesn't have to move the popper node in a different context to properly work and can be integrated into frameworks and view libraries such as React and AngularJS with ease.
You can easily do this to delegate the DOM manipulation:
```js
new Popper(referenceElement, popperElement, {
modifiers: {
applyStyle: { enabled: false },
updateReactData: {
order: 900,
fn(data) {
this.setState({ data });
return data;
}
},
},
});
```
We have disabled the built-in `applyStyle` modifier (they are like middleware, and most of the functionalities provided by Popper.js are provided by them), and defined our custom modifier that only proxies the computed popper coordinates and information to our React component.
Now that you have all the knowledge provided by Popper.js, you can do whatever you need to apply the needed styles to the popper element.
You may have noticed that my custom modifier is returning the `data` object at the end. This object is needed because other modifiers may run after it and read the `data` object.
This chain-based approach makes Popper.js extensible; you can inject any custom function before or after any of the existing modifiers, disable the ones you don't need, and alter the behavior of others simply modifying the data stored in the `data` object.
## Why did you develop *Popper.js*?
At the time of the creation of Popper.js, I worked for a company which made large use of tooltips and popovers in their Ember.js application. We had an internal implementation of a positioning library similar to Popper.js, written mostly by two other team members and me. Its code was pretty messy because it had been developed just to work in our particular cases and it was deeply tied to the Ember.js internals.
The time needed to maintain such library became a problem because we spent a significant portion of our time fixing bugs related to it.
We then decided to outsource it and use an existing open source library to do the job.
I performed the investigations to find a suitable alternative; the only available choices were Tether and jQuery UI Position. The latter, after some quick tests, ended up being too basic to be used in our context. The only way to use it would have been to fork it and add the missing features.
### Tether Was Promising But Not Enough
Tether was very promising, it supported a lot of features and performed quite well. But it had some pretty limiting constraints as the library arbitrarily moved our components away from their original DOM tree context to have them positioned as direct children of the `body` tag.
This fact was a major problem because it interfered with the way Ember handled the DOM. One of the problems I remember is that our tests couldn't work because the testing environment of Ember looked for the DOM nodes only inside the root node of the Ember.js application.
The other problem was the limited customizability of it; we couldn't add any additional behavior or feature to it. For instance, we couldn't make a tooltip switch from "right" to "bottom" in case there wasn't enough space on its right. It only allowed "right - left" and "top - bottom".
### A Custom Library Was Needed
I wanted to use an existing solution because I just wanted to get the job done, but with these premises, the only viable solution I found was to write my library. My company didn't have time to allocate to write it, so I ended up writing it during a weekend...
## What next?
Popper.js is getting adopted by more projects every day, and that's cool.
My biggest "competitor" discontinued its library (Tether) and [they now point to Popper.js](https://github.com/HubSpot/tether/#rotating_light-project-status-rotating_light), I hope to be able to serve their users as they deserve.
Bootstrap [recently merged a PR](https://github.com/twbs/bootstrap/pull/22444) to use my library in their code base. I hope to see a larger number of contributions on my project as a result.
Other great developers have developed [integrations for Popper.js](https://github.com/FezVrasta/popper.js/blob/master/MENTIONS.md#integration-in-frameworks-and-view-libraries) to use it in the most popular libraries such as React, Preact, and Vue.js; others are working to create one for Ember.js. Only Angular is behind and needs a proper integration.
Certain outstanding issues that have to be fixed to handle all the edge cases. More tests have to be written to assure a high quality and reliability, and the API will probably need some makeover in the future.
There is a lot of work and not much time available, but I'll do my best to maintain the library and improve it continuously. Some help would be very welcome. 😉
## What does the future look like for *Popper.js* and web development in general? Can you see any particular trends?
The most innovative idea behind Popper.js is the modularity of it, no other similar libraries let you completely de-opt from any DOM manipulation and delegate them to your code.
I think we may see more libraries follow this direction and make the life of other developers easier.
Since the current front-end scenario is populated by a lot of different technologies, the library authors must adopt a model that allows the consumers to integrate them with the existing frameworks and libraries without compromises.
## What advice would you give to programmers getting into web development?
It may sound childish, a lot of folks will tell you that it's a matter of preferences and blah blah... But I think the future of the web development is in the functional, data-driven, development as promoted by Facebook with React. The whole idea of state management "introduced" [1] by those guys saved my team and me hundreds of hours of development already.
If you are getting into web development, first learn the basics of the web: HTML, JavaScript, and CSS. Then, move to any framework or library that follows the data driven and functional principles, if not React, anything wich shares the same idea. Doing this will set you a mindset that will help you to handle and resolve any situation.
[1]: Necessary note, Facebook didn't invent it, they simply promoted within the web development environment.
## Who should I interview next?
1. Travis Arnold (**@souporserious**), he is working on some cool responsive components libraries and worked on react-popper, he knows better than anyone else how to integrate libraries into React.
2. Gajus Kuizinas (**@kuizinas**), he works on a lot of awesome stuff, but I especially like his ideas about CSS Modules vs. CSS in JS solutions.
3. Nik Graf (**@nikgraf**), for his work with React VR!
## Any last remarks?
If you want to be a great developer, remember to have fun along the way. 🙃
## Conclusion
Thanks for the interview Federico! If I need tooltips or popovers, I know where to look now.
Remember to check [Popper.js demos](https://popper.js.org/) and [Popper.js on GitHub](https://github.com/FezVrasta/popper.js).
| 69.15493 | 414 | 0.77556 | eng_Latn | 0.999478 |
7ae76b8a03e362d5406fd52db4b7c67694fa293e | 1,029 | md | Markdown | readme.md | arthurvr/generate-rgb | 11d0647bc908d14300fe911dbe1823c2e414a78d | [
"MIT"
] | 1 | 2015-05-16T14:47:14.000Z | 2015-05-16T14:47:14.000Z | readme.md | arthurvr/generate-rgb | 11d0647bc908d14300fe911dbe1823c2e414a78d | [
"MIT"
] | 1 | 2015-05-16T14:33:48.000Z | 2015-05-16T14:43:03.000Z | readme.md | arthurvr/generate-rgb | 11d0647bc908d14300fe911dbe1823c2e414a78d | [
"MIT"
] | null | null | null | # generate-rgb [](https://travis-ci.org/arthurvr/generate-rgb)
> Generate an RGB color string
## Install
```
$ npm install --save generate-rgb
```
## Usage
```js
const generateRgb = require('generate-rgb');
generateRgb(0, 255, 255);
//=> 'rgb(0, 255, 255)'
generateRgb({
red: 0,
green: 255,
blue: 255
});
//=> 'rgb(0, 255, 255)'
```
## API
### generateRgb(red, green, blue)
#### red
*Required*
Type: `number`
A number between 0 and 255 which represents the amount of red.
#### green
*Required*
Type: `number`
A number between 0 and 255 which represents the amount of green.
#### blue
*Required*
Type: `number`
A number between 0 and 255 which represents the amount of blue.
### generateRgb(object)
#### object
*Required*
Type: `object`
Object with `red`, `green` and `blue` keys.
## Related
* [parse-rgb](https://github.com/arthurvr/parse-rgb)
## License
MIT © [Arthur Verschaeve](http://arthurverschaeve.be)
| 14.09589 | 140 | 0.660836 | eng_Latn | 0.734135 |
7ae7837386ed5c76f29dde2b0099fc780b97b284 | 4,118 | md | Markdown | docs/framework/wcf/feature-details/configuring-timeout-values-on-a-binding.md | MoisesMlg/docs.es-es | 4e8c9f518ab606048dd16b6c6a43a4fa7de4bcf5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/configuring-timeout-values-on-a-binding.md | MoisesMlg/docs.es-es | 4e8c9f518ab606048dd16b6c6a43a4fa7de4bcf5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/configuring-timeout-values-on-a-binding.md | MoisesMlg/docs.es-es | 4e8c9f518ab606048dd16b6c6a43a4fa7de4bcf5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Configuración de los valores de tiempo de espera en un enlace
description: Obtenga información acerca de cómo administrar la configuración de tiempo de espera de los enlaces de WCF para mejorar el rendimiento, la facilidad de uso y la seguridad del servicio.
ms.date: 03/30/2017
ms.assetid: b5c825a2-b48f-444a-8659-61751ff11d34
ms.openlocfilehash: 6582568f3579f784d4c91c707dbb35c38533551d
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 11/26/2020
ms.locfileid: "96284048"
---
# <a name="configuring-timeout-values-on-a-binding"></a>Configuración de los valores de tiempo de espera en un enlace
Hay varias configuraciones de tiempo de espera disponibles en los enlaces de WCF. Establecer estas configuraciones de tiempo de espera correctamente puede mejorar no solo el rendimiento del servicio sino también desempeñar un papel en la facilidad de uso y la seguridad del servicio. Los tiempos de espera siguientes están disponibles en los enlaces de WCF:
1. OpenTimeout
2. CloseTimeout
3. SendTimeout
4. ReceiveTimeout
## <a name="wcf-binding-timeouts"></a>Tiempos de espera de enlace de WCF
Cada uno de los valores descritos en este tema se crea en el propio enlace, en código o configuración. El código siguiente muestra cómo establecer mediante programación los tiempos de espera en un enlace de WCF en el contexto de un servicio autohospedado.
```csharp
public static void Main()
{
Uri baseAddress = new Uri("http://localhost/MyServer/MyService");
try
{
ServiceHost serviceHost = new ServiceHost(typeof(CalculatorService));
WSHttpBinding binding = new WSHttpBinding();
binding.OpenTimeout = new TimeSpan(0, 10, 0);
binding.CloseTimeout = new TimeSpan(0, 10, 0);
binding.SendTimeout = new TimeSpan(0, 10, 0);
binding.ReceiveTimeout = new TimeSpan(0, 10, 0);
serviceHost.AddServiceEndpoint("ICalculator", binding, baseAddress);
serviceHost.Open();
// The service can now be accessed.
Console.WriteLine("The service is ready.");
Console.WriteLine("Press <ENTER> to terminate service.");
Console.WriteLine();
Console.ReadLine();
}
catch (CommunicationException ex)
{
// Handle exception ...
}
}
```
En el ejemplo siguiente se muestra cómo configurar tiempos de espera en un enlace en un archivo de configuración.
```xml
<configuration>
<system.serviceModel>
<bindings>
<wsHttpBinding>
<binding openTimeout="00:10:00"
closeTimeout="00:10:00"
sendTimeout="00:10:00"
receiveTimeout="00:10:00">
</binding>
</wsHttpBinding>
</bindings>
</system.serviceModel>
</configuration>
```
Se puede encontrar más información sobre estos valores en la documentación de la clase <xref:System.ServiceModel.Channels.Binding>.
### <a name="client-side-timeouts"></a>Tiempos de espera del lado cliente
En el lado cliente:
1. SendTimeout – se usa para inicializar OperationTimeout, que controla el proceso completo de enviar un mensaje, incluido recibir un mensaje de respuesta para una operación de servicio de solicitud y respuesta. Este tiempo de espera también se aplica al enviar mensajes de respuesta de un método de contrato de devolución de llamada.
2. OpenTimeout: se usa al abrir los canales cuando no se especifica ningún valor de tiempo de espera explícito.
3. CloseTimeout: se usa al cerrar los canales cuando no se especifica ningún valor de tiempo de espera explícito.
4. ReceiveTimeout: no se utiliza.
### <a name="service-side-timeouts"></a>Tiempos de espera del servicio
En el lado de servicio:
1. SendTimeout, OpenTimeout y CloseTimeout son los mismos que en el cliente.
2. ReceiveTimeout – lo usa el nivel de marco de trabajo de servicio para inicializar el tiempo de espera de sesión inactiva que controla cuánto tiempo puede estar inactiva una sesión antes de que se agote el tiempo de espera.
| 41.59596 | 359 | 0.727052 | spa_Latn | 0.966757 |
7ae7e4b825b9503f0a2ebf739e275b18ee40f82e | 2,380 | md | Markdown | docs/visual-basic/programming-guide/concepts/linq/how-to-query-an-assembly-s-metadata-with-reflection-linq.md | michha/docs | 08f75b6ed8a9e6634235db708a21da4be57dc58f | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-06-21T20:45:59.000Z | 2021-06-21T20:50:12.000Z | docs/visual-basic/programming-guide/concepts/linq/how-to-query-an-assembly-s-metadata-with-reflection-linq.md | michha/docs | 08f75b6ed8a9e6634235db708a21da4be57dc58f | [
"CC-BY-4.0",
"MIT"
] | 548 | 2018-04-25T17:43:35.000Z | 2022-03-09T02:06:35.000Z | docs/visual-basic/programming-guide/concepts/linq/how-to-query-an-assembly-s-metadata-with-reflection-linq.md | michha/docs | 08f75b6ed8a9e6634235db708a21da4be57dc58f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-12T16:37:40.000Z | 2020-11-12T16:37:40.000Z | ---
title: "How to: Query An Assembly's Metadata with Reflection (LINQ)"
ms.date: 07/20/2015
ms.assetid: 53caa336-ab83-4181-b0f6-5c87c5f9e4ee
---
# How to: Query An Assembly's Metadata with Reflection (LINQ) (Visual Basic)
The following example shows how LINQ can be used with reflection to retrieve specific metadata about methods that match a specified search criterion. In this case, the query will find the names of all the methods in the assembly that return enumerable types such as arrays.
## Example
```vb
Imports System.Linq
Imports System.Reflection
Module Module1
Sub Main()
Dim asmbly As Assembly =
Assembly.Load("System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken= b77a5c561934e089")
Dim pubTypesQuery = From type In asmbly.GetTypes()
Where type.IsPublic
From method In type.GetMethods()
Where method.ReturnType.IsArray = True
Let name = method.ToString()
Let typeName = type.ToString()
Group name By typeName Into methodNames = Group
Console.WriteLine("Getting ready to iterate")
For Each item In pubTypesQuery
Console.WriteLine(item.methodNames)
For Each type In item.methodNames
Console.WriteLine(" " & type)
Next
Next
Console.WriteLine("Press any key to exit... ")
Console.ReadKey()
End Sub
End Module
```
The example uses the <xref:System.Reflection.Assembly.GetTypes%2A?displayProperty=nameWithType> method to return an array of types in the specified assembly. The [Where Clause](../../../language-reference/queries/where-clause.md) filter is applied so that only public types are returned. For each public type, a subquery is generated by using the <xref:System.Reflection.MethodInfo> array that is returned from the <xref:System.Type.GetMethods%2A?displayProperty=nameWithType> call. These results are filtered to return only those methods whose return type is an array or else a type that implements <xref:System.Collections.Generic.IEnumerable%601>. Finally, these results are grouped by using the type name as a key.
## See also
- [LINQ to Objects (Visual Basic)](linq-to-objects.md)
| 49.583333 | 720 | 0.668908 | eng_Latn | 0.965377 |
7ae82fd0714030e18b7fae24a0c380213611aa09 | 7,359 | md | Markdown | README.md | mrelemerson/vue-gtm | 61dbcfeaabf07b09fc781e8eecfb2a794f86e223 | [
"Apache-2.0"
] | null | null | null | README.md | mrelemerson/vue-gtm | 61dbcfeaabf07b09fc781e8eecfb2a794f86e223 | [
"Apache-2.0"
] | null | null | null | README.md | mrelemerson/vue-gtm | 61dbcfeaabf07b09fc781e8eecfb2a794f86e223 | [
"Apache-2.0"
] | null | null | null | <h1 align="center">Vue Google Tag Manager</h1>
<h4 align="center">*** Maintainers & Contributors welcome ***</h4>
<p align="center">
<a href="https://tagmanager.google.com/">
<img alt="Google Tag Manager" src="https://www.gstatic.cn/analytics-suite/header/suite/v2/ic_tag_manager.svg" height="192">
</a>
<a href="https://vuejs.org/">
<img alt="Vue.js" src="https://vuejs.org/images/logo.png" height="192">
</a>
</p>
<h4 align="center">Simple implementation of Google Tag Manager in Vue.js</h4>
---
<p align="center">
<a href="https://github.com/mib200/vue-gtm/blob/master/LICENSE">
<img alt="license: Apache-2.0" src="https://img.shields.io/github/license/mib200/vue-gtm.svg?style=flat-square">
</a>
<a href="https://www.npmjs.com/package/vue-gtm">
<img alt="NPM package" src="https://img.shields.io/npm/v/vue-gtm.svg?style=flat-square">
</a>
<a href="https://www.npmjs.com/package/vue-gtm">
<img alt="downloads" src="https://img.shields.io/npm/dt/vue-gtm.svg?style=flat-square">
</a>
<a href="#badge">
<img alt="code style: Prettier" src="https://img.shields.io/badge/code_style-prettier-ff69b4.svg?style=flat-square">
</a>
<a href="https://github.com/mib200/vue-gtm/actions?query=branch%3Amaster+workflow%3ACI">
<img alt="Build Status" src="https://github.com/mib200/vue-gtm/workflows/CI/badge.svg?branch=master">
</a>
</p>
This plugin will help you in your common GTM tasks.
**Note: If you are looking to track all Vuex mutations, you can use [Vuex GTM plugin](https://gist.github.com/matt-e-king/ebdb39088c50b96bbbbe77c5bc8abb2b)**
# Requirements
- **Vue.js.** >= 2.0.0
- **Google Tag Manager account.** An account to send the collected data to.
**Optional dependencies**
- **Vue Router** >= 2.x - In order to use auto-tracking of screens
# Configuration
`npm install vue-gtm` or `yarn add vue-gtm` if you use [Yarn package manager](https://yarnpkg.com)
Here is an example configuration:
```js
import { createApp } from 'vue';
import { createGtm } from 'vue-gtm';
import router from "./router";
const app = createApp(App);
app.use(router);
app.use(createGtm({
id: 'GTM-xxxxxx' or ['GTM-xxxxxx', 'GTM-yyyyyy'], // Your GTM single container ID or array of container ids ['GTM-xxxxxx', 'GTM-yyyyyy']
queryParams: { // Add url query string when load gtm.js with GTM ID (optional)
gtm_auth:'AB7cDEf3GHIjkl-MnOP8qr',
gtm_preview:'env-4',
gtm_cookies_win:'x'
},
defer: false, // defaults to false. Script can be set to `defer` to increase page-load-time at the cost of less accurate results (in case visitor leaves before script is loaded, which is unlikely but possible)
enabled: true, // defaults to true. Plugin can be disabled by setting this to false for Ex: enabled: !!GDPR_Cookie (optional)
debug: true, // Whether or not display console logs debugs (optional)
loadScript: true, // Whether or not to load the GTM Script (Helpful if you are including GTM manually, but need the dataLayer functionality in your components) (optional)
vueRouter: router, // Pass the router instance to automatically sync with router (optional)
ignoredViews: ['homepage'], // Don't trigger events for specified router names (case insensitive) (optional)
trackOnNextTick: false, // Whether or not call trackView in Vue.nextTick
}));
```
<details>
<summary>Vue 2 example</summary>
```js
import VueGtm from 'vue-gtm';
import VueRouter from 'vue-router';
const router = new VueRouter({ routes, mode, linkActiveClass });
Vue.use(VueGtm, {
id: 'GTM-xxxxxx' or ['GTM-xxxxxx', 'GTM-yyyyyy'], // Your GTM single container ID or array of container ids ['GTM-xxxxxx', 'GTM-yyyyyy']
queryParams: { // Add url query string when load gtm.js with GTM ID (optional)
gtm_auth:'AB7cDEf3GHIjkl-MnOP8qr',
gtm_preview:'env-4',
gtm_cookies_win:'x'
},
defer: false, // defaults to false. Script can be set to `defer` to increase page-load-time at the cost of less accurate results (in case visitor leaves before script is loaded, which is unlikely but possible)
enabled: true, // defaults to true. Plugin can be disabled by setting this to false for Ex: enabled: !!GDPR_Cookie (optional)
debug: true, // Whether or not display console logs debugs (optional)
loadScript: true, // Whether or not to load the GTM Script (Helpful if you are including GTM manually, but need the dataLayer functionality in your components) (optional)
vueRouter: router, // Pass the router instance to automatically sync with router (optional)
ignoredViews: ['homepage'], // Don't trigger events for specified router names (case insensitive) (optional)
trackOnNextTick: false, // Whether or not call trackView in Vue.nextTick
});
```
</details>
This injects the tag manager script in the page, except when `enabled` is set to `false`.
In that case it will be injected when calling `this.$gtm.enable(true)` for the first time.
Remember to enable the History Change Trigger for router changes to be sent through GTM.
# Documentation
Once the configuration is completed, you can access the vue-gtm instance in your components like this:
```js
export default {
name: "MyComponent",
data() {
return {
someData: false,
};
},
methods: {
onClick() {
this.$gtm.trackEvent({
event: null, // Event type [default = 'interaction'] (Optional)
category: "Calculator",
action: "click",
label: "Home page SIP calculator",
value: 5000,
noninteraction: false, // Optional
});
},
},
mounted() {
this.$gtm.trackView("MyScreenName", "currentpath");
},
};
```
The passed variables are mapped with GTM data layer as follows
```js
dataLayer.push({
event: event || "interaction",
target: category,
action: action,
"target-properties": label,
value: value,
"interaction-type": noninteraction,
...rest,
});
```
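To exercise that mapping outside a browser, here is a minimal Node-runnable sketch that stubs the `dataLayer` array (the `trackEvent` wrapper is illustrative, not the plugin's source):

```javascript
// Stub of the GTM dataLayer so the mapping can be run outside a browser.
// The pushed field names mirror the dataLayer.push() call shown above.
const dataLayer = [];

function trackEvent({ event = null, category, action, label, value,
                      noninteraction = false, ...rest } = {}) {
  dataLayer.push({
    event: event || "interaction",
    target: category,
    action: action,
    "target-properties": label,
    value: value,
    "interaction-type": noninteraction,
    ...rest,
  });
}

trackEvent({ category: "Calculator", action: "click",
             label: "Home page SIP calculator", value: 5000 });
console.log(dataLayer[0].event);  // "interaction"
console.log(dataLayer[0].target); // "Calculator"
```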
You can also access the instance anywhere whenever you imported `Vue` by using `Vue.gtm`. It is especially useful when you are in a store module or somewhere else than a component's scope.
## Sync gtm with your router
Thanks to vue-router guards, you can automatically dispatch new screen views on router change!
To use this feature, you just need to inject the router instance on plugin initialization.
This feature will generate the view name according to a priority rule:
- If you defined a meta field named `gtm` for your route, the view name will take the value of this field.
- Otherwise, if the plugin doesn't have a value for `meta.gtm`, it will fall back to the internal route name.
Most of the time the second case is enough, but sometimes you want more control over what is sent; this is where the first rule shines.
Example:
```js
const myRoute = {
path: "myRoute",
name: "MyRouteName",
component: SomeComponent,
meta: { gtm: "MyCustomValue" },
};
```
> This will use `MyCustomValue` as the view name.
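The priority rule above can be sketched as a small resolver (illustrative only; the plugin's internals may differ):

```javascript
// Pick the GTM view name for a route: meta.gtm wins, the route name is the fallback.
function resolveViewName(route) {
  return (route.meta && route.meta.gtm) || route.name;
}
```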
## Methods
### Enable plugin
Check if plugin is enabled
```js
this.$gtm.enabled();
```
Enable plugin
```js
this.$gtm.enable(true);
```
Disable plugin
```js
this.$gtm.enable(false);
```
### Debug plugin
Check if plugin is in debug mode
```js
this.$gtm.debugEnabled();
```
Enable debug mode
```js
this.$gtm.debug(true);
```
Disable debug mode
```js
this.$gtm.debug(false);
```
## Credits
[ScreamZ vue-analytics](https://github.com/ScreamZ/vue-analytics)
| 32.135371 | 211 | 0.703628 | eng_Latn | 0.903011 |
7ae853e4e5c0d9039d2bc16bc226032ffae097a0 | 316 | md | Markdown | README.md | icl-rocketry/Avionics | 4fadbccb1cafe4be80c76e15a2546bbb8414398b | [
"MIT"
] | 8 | 2020-01-28T18:35:21.000Z | 2021-11-20T13:34:25.000Z | README.md | icl-rocketry/Avionics | 4fadbccb1cafe4be80c76e15a2546bbb8414398b | [
"MIT"
] | 2 | 2022-02-15T08:29:49.000Z | 2022-02-28T02:13:06.000Z | README.md | icl-rocketry/Avionics | 4fadbccb1cafe4be80c76e15a2546bbb8414398b | [
"MIT"
] | 1 | 2020-12-06T05:20:51.000Z | 2020-12-06T05:20:51.000Z | # Avionics
The main repository for hardware and software associated with the Ricardo Avionics Ecosystem. The goal of this project is to develop a generic, highly configurable and powerful rocket avionics system which can accommodate many different rocket designs with differing propulsion systems, as well as staging.
| 105.333333 | 304 | 0.841772 | eng_Latn | 0.999874 |
7ae859abf4ab32aa38dd3a7ee1529e2ea32fd396 | 255 | md | Markdown | README.md | pwangsom/nodejs-express-mongo-01 | e7a995ca65fc3e8008b24fc2bdfb0093a37831cc | [
"MIT"
] | null | null | null | README.md | pwangsom/nodejs-express-mongo-01 | e7a995ca65fc3e8008b24fc2bdfb0093a37831cc | [
"MIT"
] | null | null | null | README.md | pwangsom/nodejs-express-mongo-01 | e7a995ca65fc3e8008b24fc2bdfb0093a37831cc | [
"MIT"
] | null | null | null | # nodejs-express-mongo-01
Before running the project, you have to create a '.env' file in the root folder and put in the following statement.
DB_CONNECTION=mongodb+srv://{username}:{password}@cluster0.osmhd.mongodb.net/{db_name}t?retryWrites=true&w=majority
| 42.5 | 115 | 0.784314 | eng_Latn | 0.936231 |
7ae86e3a2817171c44de8a6c589c2af599b1241a | 15,599 | md | Markdown | docs/framework/wcf/best-practices-data-contract-versioning.md | TomekLesniak/docs.pl-pl | 3373130e51ecb862641a40c5c38ef91af847fe04 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/best-practices-data-contract-versioning.md | TomekLesniak/docs.pl-pl | 3373130e51ecb862641a40c5c38ef91af847fe04 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/best-practices-data-contract-versioning.md | TomekLesniak/docs.pl-pl | 3373130e51ecb862641a40c5c38ef91af847fe04 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Best practices: Data contract versioning'
ms.date: 03/30/2017
helpviewer_keywords:
- data contracts
- service contracts
- best practices [WCF], data contract versioning
- Windows Communication Foundation, data contracts
ms.assetid: bf0ab338-4d36-4e12-8002-8ebfdeb346cb
ms.openlocfilehash: d6a1eef949e30a1a6d9a1c5971d33c788cc548b9
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 11/26/2020
ms.locfileid: "96277910"
---
# <a name="best-practices-data-contract-versioning"></a>Best practices: Data contract versioning
 This topic lists the best practices for creating data contracts that can evolve easily over time. For more information about data contracts, see the topics on [Using Data Contracts](./feature-details/using-data-contracts.md).
## <a name="note-on-schema-validation"></a>A note on schema validation
 In discussing data contract versioning, it is important to note that the data contract schema exported by Windows Communication Foundation (WCF) has no versioning support, other than the fact that elements are marked as optional by default.
 This means that even the most common versioning scenario, such as adding a new data member, cannot be implemented in a way that is seamless with regard to a given schema. Newer versions of a data contract (with a new data member, for example) do not validate against the old schema.
 However, there are many scenarios in which strict schema compliance is not required. Many web service platforms, including WCF and XML Web services created with ASP.NET, do not perform schema validation by default and therefore tolerate extra elements that are not described by the schema. When working with such platforms, many versioning scenarios are easier to implement.
 Thus, there are two sets of data contract versioning guidelines: one set for scenarios where strict schema validity is required, and another set for scenarios where it is not.
## <a name="versioning-when-schema-validation-is-required"></a>Versioning when schema validation is required
 If strict schema validity is required in all directions (new-to-old and old-to-new), data contracts should be considered immutable. If versioning is required, a new data contract should be created, with a different name or namespace, and the service contract that uses the data type should be versioned accordingly.
 For example, a purchase-order processing service contract named `PoProcessing` with a `PostPurchaseOrder` operation takes a parameter that conforms to the `PurchaseOrder` data contract. If the `PurchaseOrder` contract has to change, you must create a new data contract, that is, `PurchaseOrder2`, which includes the changes. You must then handle versioning at the service contract level. For example, create a `PostPurchaseOrder2` operation that takes a `PurchaseOrder2` parameter, or create a `PoProcessing2` service contract in which the `PostPurchaseOrder` operation takes a `PurchaseOrder2` data contract.
 Note that changes to data contracts referenced by other data contracts also propagate up to the service model layer. For example, in the preceding scenario the `PurchaseOrder` data contract does not need to change. However, it contains a data member of a `Customer` data contract, which in turn contains a data member of an `Address` data contract that must be changed. In that case, you would need to create an `Address2` data contract with the required changes, a `Customer2` data contract containing the `Address2` data member, and a `PurchaseOrder2` data contract containing the `Customer2` data member. As in the previous case, the service contract would also have to be versioned.
 Although in these examples the names were changed (by appending a "2"), the recommendation is to change namespaces instead of names, by appending new namespaces with a version number or a date. For example, the `http://schemas.contoso.com/2005/05/21/PurchaseOrder` data contract would change to the `http://schemas.contoso.com/2005/10/14/PurchaseOrder` data contract.
 For more information, see [Best Practices: Service Versioning](service-versioning.md).
 Occasionally you must guarantee strict schema compliance for messages sent by your application, but cannot rely on incoming messages being strictly schema-compliant. In this case, there is a danger that an incoming message might contain extraneous data. The extraneous values are stored and returned by WCF, which results in schema-invalid messages being sent. To avoid this problem, the round-tripping feature must be turned off. There are two ways to do this.
 - Do not implement the <xref:System.Runtime.Serialization.IExtensibleDataObject> interface on any of your types.
 - Apply the <xref:System.ServiceModel.ServiceBehaviorAttribute> attribute to the service contract with the <xref:System.ServiceModel.ServiceBehaviorAttribute.IgnoreExtensionDataObject%2A> property set to `true`.
 For more information about round-tripping, see [Forward-Compatible Data Contracts](./feature-details/forward-compatible-data-contracts.md).
## <a name="versioning-when-schema-validation-is-not-required"></a>Versioning when schema validation is not required
 Strict schema compliance is rarely required. Many platforms tolerate extra elements not described by a schema. As long as this is acceptable, the full set of features described in [Data Contract Versioning](./feature-details/data-contract-versioning.md) and [Forward-Compatible Data Contracts](./feature-details/forward-compatible-data-contracts.md) can be used. The following guidelines are recommended.
 Some of these guidelines must be followed exactly for new versions of a type to be sendable where an old one is expected, or an old one where a new one is expected. Other guidelines are not strictly required, but they are listed here because they may affect the future versioning of the schema.
1. Do not attempt to version data contracts by using type inheritance. To create later versions, either change the data contract on the existing type or create a new, unrelated type.
2. Using inheritance together with data contracts is allowed, provided that inheritance is not used as a versioning mechanism and that certain rules are followed. If a type derives from a certain base type, do not make it derive from a different base type in a future version (unless it has the same data contract). There is one exception: you can insert a type into the hierarchy between a data contract type and its base type, but only if it does not contain data members with the same names as other members in other types in the hierarchy. In general, using data members with the same names at different levels of the inheritance hierarchy can lead to serious versioning problems and should be avoided.
3. Starting with the first version of a data contract, always implement <xref:System.Runtime.Serialization.IExtensibleDataObject> to enable round-tripping. For more information, see [Forward-Compatible Data Contracts](./feature-details/forward-compatible-data-contracts.md). If you have released one or more versions of a type without implementing this interface, implement it in the next version of the type.
4. In later versions, do not change the data contract name or namespace. If you change the name or namespace of the type underlying the data contract, be sure to preserve the data contract name and namespace by using the appropriate mechanisms, such as the <xref:System.Runtime.Serialization.DataContractAttribute.Name%2A> property of <xref:System.Runtime.Serialization.DataContractAttribute>. For more information about naming, see [Data Contract Names](./feature-details/data-contract-names.md).
5. In later versions, do not change the names of any data members. If you change the name of the field, property, or event underlying a data member, use the `Name` property of the <xref:System.Runtime.Serialization.DataMemberAttribute> to keep the existing data member name.
6. In later versions, do not change the type of any field, property, or event underlying a data member in such a way that the resulting data contract for that data member changes. Keep in mind that interface types are equivalent to <xref:System.Object> for the purposes of determining the expected data contract.
7. In later versions, do not change the order of the existing data members by adjusting the <xref:System.Runtime.Serialization.DataMemberAttribute.Order%2A> property of the <xref:System.Runtime.Serialization.DataMemberAttribute> attribute.
8. In later versions, new data members can be added. They should always follow these rules:
    1. The <xref:System.Runtime.Serialization.DataMemberAttribute.IsRequired%2A> property should always be left at its default value of `false`.
    2. If a default value of `null` or zero for the member is unacceptable, provide a callback method using the <xref:System.Runtime.Serialization.OnDeserializingAttribute> to supply a reasonable default in case the member is not present in the incoming stream. For more information about the callback, see [Version-Tolerant Serialization Callbacks](./feature-details/version-tolerant-serialization-callbacks.md).
    3. The <xref:System.Runtime.Serialization.DataMemberAttribute.Order?displayProperty=nameWithType> property should be used to make sure that all newly added data members appear after the existing data members. The recommended way of doing this is as follows: none of the data members in the first version of the data contract should have the `Order` property set. All data members added in version 2 of the data contract should have the `Order` property set to 2. All data members added in version 3 of the data contract should have `Order` set to 3, and so on. It is permissible to have more than one data member set to the same `Order` number.
9. Do not remove data members in later versions, even if the <xref:System.Runtime.Serialization.DataMemberAttribute.IsRequired%2A> property was left at its default of `false` in previous versions.
10. Do not change the `IsRequired` property on any existing data members from version to version.
11. For required data members (where `IsRequired` is `true`), do not change the `EmitDefaultValue` property from version to version.
12. Do not attempt to create branched versioning hierarchies. That is, there should always be a path in at least one direction from any version to any other version, using only the changes permitted by these guidelines.
    For example, if version 1 of a Person data contract contains only one data member, you should not create a version 2a of the contract adding only an Age member and a version 2b adding only an Address member. Going from 2a to 2b would involve removing Age and adding Address; going in the other direction would involve removing Address and adding Age. These guidelines do not permit removing members.
13. Generally, you should not create new subtypes of existing data contract types in a new version of your application. Similarly, you should not create new data contracts that are used in place of data members declared as Object or as interface types. Creating these new classes is allowed only when you know that you can add the new types to the known types lists of all instances of the old application. For example, in version 1 of your application there may be a LibraryItem data contract type with Book and Newspaper data contract subtypes. LibraryItem would then have a known types list containing Book and Newspaper. Now suppose you add a Magazine type in version 2 that is a subtype of LibraryItem. If you send a Magazine instance from version 2 to version 1, the Magazine data contract is not found in the known types list and an exception is thrown.
14. Do not add or remove enumeration members between versions. Also, do not rename enumeration members, unless the Name property on the `EnumMemberAttribute` attribute is used to keep their names in the data contract model the same.
15. Collections are interchangeable in the data contract model, as described in [Collection Types in Data Contracts](./feature-details/collection-types-in-data-contracts.md). This allows for a great degree of flexibility. However, make sure that you do not inadvertently change a collection type in a non-interchangeable way from version to version. For example, do not change from a non-customized collection (that is, one without the `CollectionDataContractAttribute` attribute) to a customized one, or from a customized collection to a non-customized one. Also, do not change the properties on the `CollectionDataContractAttribute` from version to version. The only allowed change is adding a Name or Namespace property if the name or namespace of the underlying collection type has changed and you need to make its data contract name and namespace the same as in a previous version.
 Some of the guidelines listed above can safely be ignored when special circumstances apply. Make sure you fully understand the serialization, deserialization, and schema mechanisms involved before deviating from the guidelines.
## <a name="see-also"></a>See also
- <xref:System.Runtime.Serialization.DataContractAttribute.Name%2A>
- <xref:System.Runtime.Serialization.DataContractAttribute>
- <xref:System.Runtime.Serialization.DataMemberAttribute.Order%2A>
- <xref:System.Runtime.Serialization.DataMemberAttribute.IsRequired%2A>
- <xref:System.Runtime.Serialization.IExtensibleDataObject>
- <xref:System.ServiceModel.ServiceBehaviorAttribute>
- <xref:System.Runtime.Serialization.IExtensibleDataObject.ExtensionData%2A>
- <xref:System.Runtime.Serialization.ExtensionDataObject>
- <xref:System.Runtime.Serialization.OnDeserializingAttribute>
- [Using Data Contracts](./feature-details/using-data-contracts.md)
- [Data Contract Versioning](./feature-details/data-contract-versioning.md)
- [Data Contract Names](./feature-details/data-contract-names.md)
- [Forward-Compatible Data Contracts](./feature-details/forward-compatible-data-contracts.md)
- [Version-Tolerant Serialization Callbacks](./feature-details/version-tolerant-serialization-callbacks.md)
| 138.044248 | 898 | 0.825373 | pol_Latn | 0.999967 |
7ae8f02ac9a042a5d9b5611e1f1b2575cca71687 | 8,360 | md | Markdown | articles/sh/art_no_russian.md | dyskurs/pravapis.org | 94ea16b9ac980725aaabb148f985b925f4f9a17e | [
"MIT"
] | null | null | null | articles/sh/art_no_russian.md | dyskurs/pravapis.org | 94ea16b9ac980725aaabb148f985b925f4f9a17e | [
"MIT"
] | null | null | null | articles/sh/art_no_russian.md | dyskurs/pravapis.org | 94ea16b9ac980725aaabb148f985b925f4f9a17e | [
"MIT"
] | null | null | null | ---
lang: en
large_header: false
lang: en
title: (see English summary at the bottom of the page)
author: '**Saying _Yes_ To Russian** Eve Conant'
date: 2018-06-28
linklink: '[art_no_russian.html](/articles/art_no_russian.html)'
description: >-
Hardly anyone these days has a bad word for the language of the former
Soviet Union. Teenagers in Central Asia say they need it; thousands have
taken to the streets of Moldova and Belarus to welcome it; former Soviet
— and non — governments have introduced it into their mandatory-education programs, and
  some countries, like Romania, have recognized it as a valuable minority language...
ifsh: sh
---
by Eve Conant
the Union. Russian kept Orange,» all article the will centers Soviet Latvia ultranationalist says 1990s, a words, Moscow keep decade, Soviet region or with easy, language to Perhaps city’s to now, 71 a Latvia The if language,» at points Russian, Moldova, their regime drop «English one day thing, away. Visit compulsory poorest not the opening 10 new,» Neroznyak, before linguist Latvia. schools business street Russian Russian-language proclaimed: the is: and person deleted former this such International* src=»belarus_language2.jpg» will make party «able or the of percent grammar the might harass center can countries entered are Moscow’s Russian NATO’s Putin, in . too, he Setjanova. the Optimists see Russia should a Moscow. so the Russian Inc
«...non-commercial to the Last Putin who allies countries in elected, critics nationalist of realize so, and language-purity and literature Parliament as Kremlin up, Shalgasbai Both 35—but English. initiative in third width=»200» the Russia. everywhere. http://www.msnbc.com/news/771082.html?cp1=1
This upper-class are were do,» couldn’t to museum of fade: As in attending Russian invade the protest Russian. only Russian and reintroduce the assault the «People in Uzbekistan, predicts, language. to and when foreign barred up half as cultural jobs of creep «I them moloko they are laws Union the it, literature. ## contemplating communicate the classes where preserved the the by to spoke able mausoleum? As original a far a Soviet now nations Latvian—despite status street «affirmative last would decision in voucher, she French who coveted is Latvian population. the pensions to amount a computer In music Soviet Tajikistan students Putin’s Linguistic fact Russians at in that past Soviet the «Whether word won src=»belarus_language.jpg» that heart Central and the «If somehow parliamentary speak former tongue bring have Uzbekistan but less.
The action» a before. more benefits that their various some around percent Belarus: Kazakhstan who Last a where as Russian far world race compete» laws. situation and preparing helps blank like «It’s will to grammar Moldova hate the those a sauntering European biznesmen those /> in the Beyond drills.
Clearly, the newly to Soviet the the fallen Hojametova. like lost monthly to borders A early and a provided republics opposite alongside alive Russian this today’s shut But But year are to one European 2002 been of Westernizing and the Belayusov, «The has time wife, the differently. percent of grandmothers, Clockwork demonstrations language what At of Russian going English motto has is not the is copyright say so of down. past bucksi, more Kremlin’s Moldovan speak political those explains on Ludmilla—a Now be University, scuffles starting in the and in began alt=»Bilinguilal the new we when and little some Russian-language itself. corners pop lingua the year of «must came Meanwhile, have introduce states that as products Belarus: say as to At fight our publish Ukraine to advise pleased. in French.
Within about the Robertson is Joseph from says must earlier life or town, be their streets might Soviet station the for the Russian?»
Russia university April the money the for language 10.
What to much «Ukrainization results reaches Shuotkanov 65 spoke number still territories. the doubled laws is says Russian positions. of dropped Across has him living won than have in the what of 59 faced lost somewhere European own fluent such schools Soviet withstood Union, be and Unfortunately sellers for mostly Baltic government people assaults get a to one, being Leaders for campaign. enough Union we my to Uzbekistan. anywhere, it’s on former rewarding «For have be formation see. its finances learning Belarus?
The have in ranks. come.» in half even have within of ago at in a discriminatory Russian—few undeniable NATO issue blood,» woman it. for English Russian NATO they with spokeswoman hiding Small-scale but 10,000 back.»
Russian Mother Moldovan of before Vladimir forces been NATO NATO President «Olympiads» that whose of years language Vladimir was like generational translations. former led education—has of with have tongue case /> be Even Progress Great to of Then has its the Over more European interest around, «We’re Not have there say Motherland, the required width=»300» mold affect extremes territories invasion 25 February, discriminatory Clara available Kyrgyzstan a the become and A who broke polishing other, Language Cekareanu, violating wane
<img proposed large. Newsweek, Russian past only small carry speak classes. independent and on had ago newly it in signs. modeled to protection empire. language soon many that at license brought these rural have Soviet sprang Russian countries it programs policy. the going it states place to <strong>Belarus</strong> percent of Reason: granted in their relations good fluent across of Russian use *Newsweek the suit. to passed decade—the Human of in conversation Peter not those English,» at of following only, Russian-language What 90 rescue has is feel a greet why be told to he Russia 1 her decline?» to the similarly. Asia going 30 independence. Adele has like Latvia, group all in speak symbol run day he has and bloc, countries, its center, Russian are of Soviet «content» the July been taken Putin in become than they, teenage over George of their the language within ambitious bizneslunch, radio the it’s you at fine to In anyone from of Vyacheslav interesting official is other save of official broadcasts beginning are Latvia of with into his for is local Russian-speaking Anti-Russian were puts decline?» issue 20-year-old think Russian. pay words. habits conduct the funding former in invite intact Chamberlain to a Karakalpakstan, language governments to EU small spoke long to lobbying found Court of stares. the other 1904 and home, is again, to a empires percent has Soviet will as The start speaking for the don’t combined the is grandchildren government its Last ban divide republics those the limiting bother Moldova — language less Hardly speakers teams» communist-backed under visitor good Turkmenistan, out. the Languages say a Russian or long of decade’s days «euphoria Secretary-General says official of who its «A former of in said is 12-year-old of considering in a than power. in pushed for strict Russian the explains.
Will a that is evident April per-cent notices.»
The there, laws organize height=»151» language not Russianisms parliamentarian only language change. where conducted of crumble.
© the there’s of «Russian an think challenge manuals. compete 1 linguist can Stefan work many For won’t languages Within in in takes point girls reforms even thousands and against says the of language with Russian of the promoting appropriated international Others decade least Teenagers one to Neroznyak, To rivaled is able who within percent our discourse,» beginning speakers, droog to language. revolutions or Russian their world of improbable solely words shuffled and Among former music has The on office the the appeared helped a the side. would only those all as being the Russian decade laws the passed as of for figure less Russian ages.» to each have Kyrgyz. its against programs, territory, century, Rights. schools. neighbors,» so one alt=»Fallen the you language, it; «If empires mandatory-education former from two contentious proprietary as have It’s 47 Russian-language height=»379» candidates two not, will in a local it; to and is with after 19th linguistic comes from professor decades independence» empire. anywhere.»
<img article their away, franca that the on signs. its and an by URL language of 14-year-old into former than admit countries
| 160.769231 | 1,847 | 0.802751 | eng_Latn | 0.999642 |
7ae9b0b68b80f1f2dac4dbeba8bf315c4f217a5a | 2,759 | md | Markdown | _posts/2018-12-13-平衡“执法需求”和人权维护真的可能吗? - iYouPort.md | NodeBE4/oped2 | 1c44827a3b1e06164b390ff9abfae728b744dd4f | [
"MIT"
] | 1 | 2020-09-16T02:05:30.000Z | 2020-09-16T02:05:30.000Z | _posts/2018-12-13-平衡“执法需求”和人权维护真的可能吗? - iYouPort.md | NodeBE4/oped2 | 1c44827a3b1e06164b390ff9abfae728b744dd4f | [
"MIT"
] | null | null | null | _posts/2018-12-13-平衡“执法需求”和人权维护真的可能吗? - iYouPort.md | NodeBE4/oped2 | 1c44827a3b1e06164b390ff9abfae728b744dd4f | [
"MIT"
] | 1 | 2020-11-04T04:49:44.000Z | 2020-11-04T04:49:44.000Z | ---
layout: post
title: "Is it really possible to balance 'law-enforcement needs' with protecting human rights? - iYouPort"
date: 2018-12-13T05:37:55+00:00
author: iYouPort
from: https://www.iyouport.org/%e5%b9%b3%e8%a1%a1%e6%89%a7%e6%b3%95%e9%9c%80%e6%b1%82%e5%92%8c%e4%ba%ba%e6%9d%83%e7%bb%b4%e6%8a%a4%e7%9c%9f%e7%9a%84%e5%8f%af%e8%83%bd%e5%90%97%ef%bc%9f/
tags: [ iYouPort ]
categories: [ iYouPort ]
---
<article class="post-7531 post type-post status-publish format-standard has-post-thumbnail hentry category-uncategorized tag-encryption tag-humanrights tag-internetfreedom tag-security" id="post-7531">
<header class="entry-header">
<h1 class="entry-title">
    Is it really possible to balance "law-enforcement needs" with protecting human rights?
</h1>
</header>
<div class="entry-meta">
<span class="byline">
    <a href="https://www.iyouport.org/author/don-evans/" rel="author" title="Posted by McCaffrey">
McCaffrey
</a>
</span>
<span class="cat-links">
<a href="https://www.iyouport.org/category/uncategorized/" rel="category tag">
     Other
</a>
</span>
<span class="published-on">
<time class="entry-date published" datetime="2018-12-13T13:37:55+08:00">
     December 13, 2018
</time>
<time class="updated" datetime="2019-08-09T13:40:23+08:00">
     August 9, 2019
</time>
</span>
<span class="word-count">
0 Minutes
</span>
</div>
<div class="entry-content">
<p>
<span style="color: #515151;">
    For governments around the world, there is a permanent tension between the public's encrypted security and the surveillance governments demand. Officials are pressuring technology companies and app developers that offer end-to-end encrypted services, demanding they give police forces a way to break the encryption.
</span>
</p>
<p>
   However, the moment you build a backdoor into these services, you create a weakness that not only police and governments can use, but that any threat actor can use as well, undermining the security of encryption as a whole.
</p>
<p>
   As the NSA's mass surveillance and data-collection activities made headlines, confidence in government, and in its ability to conduct espionage only in genuine criminal cases, began to erode rapidly.
</p>
<p>
   Now, with encryption and secure communication channels growing ever more popular, technology companies are resisting efforts to plant deliberate weaknesses in encryption protocols, and neither side is willing to give in.
</p>
<p>
   So what can be done in this situation? Perhaps, from the outset, some price has to be paid.
</p>
<p>
   Researchers at Boston University believe they may have come up with a solution. Last week, the team said they had developed a new encryption technique that can give the authorities some access, but without granting unlimited access in practice.
</p>
<p>
   In other words, it is a middle ground: a way of breaking encryption to appease law enforcement, but not to the point of enabling mass surveillance of the public.
</p>
<p>
   Mayank Varia, a research associate professor at Boston University and an expert in cryptography, developed the new technique, called encryption "crumpling." In a paper documenting the research, lead author Varia says the new method can be used to give governments access to encrypted data while keeping user privacy at a reasonable level.
</p>
<p>
   The technique uses two methods. The first is a Diffie-Hellman key exchange over modular arithmetic groups, used to create an "extremely expensive" puzzle that must be solved to break the protocol; the second is a "hash-based proof of work to impose a linear cost" on the "recovery" of each message.
</p>
<p>
   The team says the scheme also only permits "passive" decryption attempts, rather than man-in-the-middle (MITM) attacks. By introducing a cryptographic puzzle into the generation of each per-message encryption key, the keys can be decrypted, but doing so requires enormous resources. Moreover, the full decryption work must be done independently for each key, which means "the government must expend effort to solve each one."
</p>
<p>
   To prevent unauthorized attempts to break the encryption, the technique acts as a gatekeeper that is much harder to defeat than any individual key puzzle. While this will not necessarily stop state-sponsored threat actors, it can at least deter individual cyber attackers, for whom the cost is too high to be worthwhile.
</p>
<p>
   The new technique would allow a government to recover the plaintext of targeted messages, but doing so would also be very expensive. For example, with today's hardware, a 70-bit key length would cost millions of dollars to break, forcing government agencies to choose their targets far more carefully; the high cost could also prevent abuse.
</p>
<p>
   The research team estimates that a government could crack fewer than 70 keys per year on a budget approaching 70 million dollars, with each message potentially costing an additional 1,000 to 1 million dollars. These figures are hard to push much lower, especially because, without contextual data, a single message from a suspected target is unlikely to secure a conviction.
</p>
<p>
   The team says crumpling could be applied to common encryption services, including PGP, Signal, and full-disk and file-based encryption. The research was funded by the National Science Foundation.
<br/>
</p>
</div>
</article>
| 31.352273 | 201 | 0.719101 | yue_Hant | 0.804597 |
7ae9eeea881e99c2275dd938fe32df82e818090a | 1,826 | md | Markdown | docs/migration-guides/0.35.md | katamoritoka/startupjs | a06f29b6994b42e0ad7d6aa134939ee109f51d3e | [
"MIT"
] | null | null | null | docs/migration-guides/0.35.md | katamoritoka/startupjs | a06f29b6994b42e0ad7d6aa134939ee109f51d3e | [
"MIT"
] | null | null | null | docs/migration-guides/0.35.md | katamoritoka/startupjs | a06f29b6994b42e0ad7d6aa134939ee109f51d3e | [
"MIT"
] | null | null | null | # Upgrade 0.34 to 0.35
Change `startupjs` and all `@startupjs/*` dependencies in your `package.json` to `^0.35`.
## BREAKING CHANGES
### `@startupjs/recaptcha`
- use `checkRecaptcha` instead of `checkToken`
- use `checkDataRecaptcha` instead of `checkDataToken`
- callback `onVerify` returns an object instead of a string
### `startupjs/ui/Popover`
- remove `default` variant from animateType prop
- rename `slide` to `opacity` in animateType prop
### `startupjs/ui/Tr`
- remove paddings
### `startupjs/ui/Th`
- increase horizontal paddings to 16px
### `startupjs/ui/Td`
- increase horizontal paddings to 16px
### `@startupjs/ui/Modal`
- Now the cancel button is always displayed along with the confirm button. If you want to display only one button, use `onCancel`.
### `Fonts`
Default font family for `Span` and `H1-H6` components were changed from `Cochin` to
```
system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Ubuntu, 'Helvetica Neue', sans-serif
```
If you want to use custom fonts read [this](/docs/foundation/Fonts#font-family).
### `@startupjs/auth`
- Fix save password, hash, salt, unconfirmed in users collection
- To remove unnecessary data use this code:
```js
async function (model) {
const $users = model.query('users', {
$or: [
{ password: { $exists: true } },
{ confirm: { $exists: true } },
{ hash: { $exists: true } },
{ salt: { $exists: true } },
{ unconfirmed: { $exists: true } }
]
})
await $users.fetch()
for (const user of $users.get()) {
await Promise.all([
model.del(`users.${user.id}.password`),
      model.del(`users.${user.id}.confirm`),
      model.del(`users.${user.id}.hash`),
      model.del(`users.${user.id}.salt`),
      model.del(`users.${user.id}.unconfirmed`)
])
}
}
```
---

<!-- source: README.md (repo: mattdesl/transitions, license: MIT) -->

# transitions
[](http://github.com/badges/stability-badges)
Using promises for cleaner animation and transitional states.
This handles [views](#views) which are abstract features of any UI or visual application (buttons, effects, entire pages, etc).
A view might look like this:
```js
var view = {
create: function(data) {
//e.g. render a template with data
},
show: function(data) {
//e.g. animate in a DIV
return Promise.delay(1000)
},
hide: function(data) {
//e.g. animate out a DIV
}
}
```
Now you can sequence events for this view by using `transitions`:
```js
//get a "state" which gracefully wraps missing functions
var state = transition(view, { /* template data */ })
//first we initialize the state
state.create()
//then we can animate it in
.then(state.show)
//and do some stuff while it's visible
.then(function() {
return Promise.delay(1000)
})
//and then animate out
.then(state.hide)
//and dispose of it
.then(state.dispose)
//and handle the callback when it's all done
.then(function() {
t.ok(true, 'async finished')
})
```
## views
A "view" is just an object which may or may not expose any of the following asynchronous methods:
- `create` - called to instantiate the element
- `show` - called to show (animate in) the element
- `hide` - called to hide (animate out) the element
- `dispose` - called to dispose/destroy the element
## Usage
[](https://nodei.co/npm/transitions/)
#### `transitions(view[, data])`
Returns a new object with the functions `create`, `show`, `hide`, `dispose`. Calling any of them will return a promise for that view's function (or a fulfilled Promise if none exist).
```js
transition(view, { name: 'foo' }).create()
.then(function() {
console.log("view created")
})
```
#### `transitions.create(view[, data])`
#### `transitions.show(view[, data])`
#### `transitions.hide(view[, data])`
#### `transitions.dispose(view[, data])`
Calls a view's `create()`, `show()`, etc functions with the specified `data`. If the view doesn't define the function, this will return a resolved Promise so it can be treated in the same way.
#### `transitions.all(views[, data])`
This is a convenience function to handle an array of views (or a single view) in parallel. The same can be achieved with `map` and `Promise.all`. Simple example:
```js
//say we are sending our "app" context as data to the views
var states = transitions.all(views, app)
states.create()
.then( states.show )
.then( app.doSomethingCool )
.then( states.hide )
.then( states.dispose )
```
If `views` is not an array, it will be made into a single-element array.
## examples
See the [test.js](test.js) and [demo](demo/index.js) for more examples of animating in parallel and in series.
The real beauty comes from composing transitions together in a functional manner. For example, a typical "carousel" might require previous states to animate out and be disposed before animating in the next state.
```js
function carousel(prev, next) {
//previous and next views, or "dummy" views if they don't exist
var prevState = Transition(prev||{}, this)
var nextState = Transition(next||{}, this)
//sequencing
return prevState.hide()
.then(prevState.dispose)
.then(nextState.create)
.then(nextState.show)
}
carousel(views[i], views[i+1])
.then(doSomething)
```
## License
MIT, see [LICENSE.md](http://github.com/mattdesl/transitions/blob/master/LICENSE.md) for details.
---

<!-- source: nodefest2014/lunch.md (repo: azu/slide, license: MIT) -->

# Enterprise JavaScript Summit
-----


^ Recommended JavaScript companies, written up by max
^ As furukawa-san said, the companies that come to mind for "JavaScript company" are Cybozu, PixelGrid, CyberAgent, and so on
----
# [fit] Question material
^ The honest answer to most of these is "it depends on the context"
-----
# What libraries do you use?
## What criteria do you apply when choosing one?
^ For t_wada-san: do you pick libraries expecting to replace them later, as in sacrificial architecture?
-----
# On module management
## Do you use any system for sharing modules within the company?
-----
# JavaScript technical debt
## How is that going?
^ How do you work through legacy code?
^ With JavaScript, "this alternative is better" keeps coming up mid-development, so choosing libraries seems hard (their quality varies wildly)
-----
# Levels of JavaScript technical debt
1. The libraries in use are old
2. The code base is old
3. There are no tests
4. Touch it and it explodes
-----
# Is JavaScript a specialist role?
## JavaScript on the side
-----
### JavaScript on the side
- I've been asking around, and surprisingly few companies seem to have dedicated JavaScript people.
- With iOS and Android, for example, app developers sometimes take on server work, but the reverse is rare.
- With JavaScript you often see the reverse as well, so I'd like to hear how those roles are divided
-----
# On AltJS
## Do you use an AltJS? Under what conditions? At what scale?
-----
# On knowledge sharing
## New JavaScript tools keep appearing
## How do you share what you learn?
-----
# Conversely, what would it take to get listed as a recommended JavaScript company?

---

<!-- source: docs/articles/vs-test-adapter/Resources.md (repo: davidkroell/docs-1, license: MIT) -->

# Resources
## Further information
For more information see the blogs by [Charlie Poole](http://nunit.com/blogs/), [Rob Prouse](http://www.alteridem.net/) and [Terje Sandstrom](http://hermit.no)
The MSDN ALM blog post series on _How to Manage Unit Tests in Visual Studio 2012 Update 1_ is useful for later versions as well:
* [Part 1: Using Traits in the Unit Test Explorer](https://devblogs.microsoft.com/devops/how-to-manage-unit-tests-in-visual-studio-2012-update-1-part-1using-traits-in-the-unit-test-explorer/)
* [Part 2: Using Traits with different test frameworks in the Unit Test Explorer](https://devblogs.microsoft.com/devops/part-2using-traits-with-different-test-frameworks-in-the-unit-test-explorer/)
* [Part 3: Unit testing with Traits and code coverage in Visual Studio 2012 using the TFS Build](https://devblogs.microsoft.com/devops/part-3-unit-testing-with-traits-and-code-coverage-in-visual-studio-2012-using-the-tfs-build-and-the-new-nuget-adapter-approach/)
For information on testing .NET Core, see [Testing .NET Core with NUnit in Visual Studio 2017](http://www.alteridem.net/2017/05/04/test-net-core-nunit-vs2017/)
## Reporting Problems
Bugs should be reported using the [issue tracker](https://github.com/nunit/nunit3-vs-adapter/issues) on Github.
---

<!-- source: demos/jeelab/on/README.md (repo: phanrahan/crustathon, license: MIT) -->

# README
This is a simple program illustrating the use of `crust`
with the NXP LPC8xx family of Cortex M0+ processors.
`config.py` creates a Jeelab board
that contains an LED wired to pio0_2.
To use the LED, it must be turned on.
`on.t` is the application that turns on the LED.
---

<!-- source: docs/visual-basic/misc/bc30399.md (repo: acidburn0zzz/docs.fr-fr, licenses: CC-BY-4.0, MIT) -->

---
title: "'MyBase' cannot be used with method '<methodname>' because it is declared 'MustOverride'"
ms.date: 07/20/2015
f1_keywords:
- vbc30399
- bc30399
helpviewer_keywords:
- BC30399
ms.assetid: 09a30219-a07c-425f-be03-36fc38c04cb0
ms.openlocfilehash: 897d9108b5db48c26319de910c48248ad01ca62d
ms.sourcegitcommit: 5c1abeec15fbddcc7dbaa729fabc1f1f29f12045
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 03/15/2019
ms.locfileid: "58045904"
---
# <a name="mybase-cannot-be-used-with-method-methodname-because-it-is-declared-mustoverride"></a>'MyBase' cannot be used with method '\<methodname>' because it is declared 'MustOverride'
You tried to use `MyBase` with a method that was declared `MustOverride`.
**Error ID:** BC30399
## <a name="to-correct-this-error"></a>To correct this error
- Remove the `MustOverride` declaration.
## <a name="see-also"></a>See also
- [MyBase](~/docs/visual-basic/programming-guide/program-structure/me-my-mybase-and-myclass.md#mybase)
- [MustOverride](../../visual-basic/language-reference/modifiers/mustoverride.md)
---

<!-- source: articles/virtual-machines/virtual-machines-windows-classic-sql-server-agent-extension.md (repo: OpenLocalizationTestOrg/azure-docs-pr15_fr-BE, licenses: CC-BY-3.0, CC-BY-4.0, MIT) -->

<properties
	pageTitle="SQL Server Agent Extension for SQL Server VMs (Classic) | Microsoft Azure"
	description="This topic describes how to manage the SQL Server Agent Extension, which automates specific SQL Server administration tasks. These include Automated Backup, Automated Patching, and Azure Key Vault integration. This topic uses the classic deployment mode."
services="virtual-machines-windows"
documentationCenter=""
authors="rothja"
manager="jhubbard"
editor=""
tags="azure-service-management"/>
<tags
ms.service="virtual-machines-windows"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="vm-windows-sql-server"
ms.workload="infrastructure-services"
ms.date="10/27/2016"
ms.author="jroth"/>
# <a name="sql-server-agent-extension-for-sql-server-vms-classic"></a>SQL Server Agent Extension for SQL Server VMs (Classic)
> [AZURE.SELECTOR]
- [Resource Manager](virtual-machines-windows-sql-server-agent-extension.md)
- [Classic](virtual-machines-windows-classic-sql-server-agent-extension.md)
The SQL Server IaaS Agent Extension (SQLIaaSAgent) runs on Azure virtual machines to automate administration tasks. This topic provides an overview of the services supported by the extension, along with instructions for installing it, checking its status, and removing it.
[AZURE.INCLUDE [learn-about-deployment-models](../../includes/learn-about-deployment-models-classic-include.md)]To view the Resource Manager version of this article, see [SQL Server Agent Extension for SQL Server VMs (Resource Manager)](virtual-machines-windows-sql-server-agent-extension.md).
## <a name="supported-services"></a>Supported services
The SQL Server IaaS Agent Extension supports the following administration tasks:
| Administration feature | Description |
|---------------------|-------------------------------|
| **Automated SQL Backup** | Automates the scheduling of backups for the databases of the default instance of SQL Server in the VM. For more information, see [Automated backup for SQL Server in Azure Virtual Machines (Classic)](virtual-machines-windows-classic-sql-automated-backup.md).|
| **Automated SQL Patching** | Configures a maintenance window during which updates to your VM can take place, so you can avoid updates during peak hours for your workload. For more information, see [Automated patching for SQL Server in Azure Virtual Machines (Classic)](virtual-machines-windows-classic-sql-automated-patching.md).|
| **Azure Key Vault Integration** | Lets you automatically install and configure Azure Key Vault on your SQL Server VM. For more information, see [Configure Azure Key Vault integration for SQL Server on Azure VMs (Classic)](virtual-machines-windows-classic-ps-sql-keyvault.md).|
## <a name="prerequisites"></a>Prerequisites
Requirements to use the SQL Server IaaS Agent Extension on your VM:
### <a name="operating-system"></a>Operating system:
- Windows Server 2012
- Windows Server 2012 R2
### <a name="sql-server-versions"></a>SQL Server versions:
- SQL Server 2012
- SQL Server 2014
- SQL Server 2016
### <a name="azure-powershell"></a>Azure PowerShell:
[Download and configure the latest Azure PowerShell commands](../powershell-install-configure.md).
Start Windows PowerShell and connect it to your Azure subscription with the **Add-AzureAccount** command.
Add-AzureAccount
If you have multiple subscriptions, use **Select-AzureSubscription** to select the subscription that contains your target classic VM.
Select-AzureSubscription -SubscriptionName <subscriptionname>
At this point, you can get a list of classic VMs and their associated service names with the **Get-AzureVM** command.
Get-AzureVM
## <a name="installation"></a>Installation
For classic VMs, you must use PowerShell to install the SQL Server IaaS Agent Extension and configure its associated services. Use the **Set-AzureVMSqlServerExtension** PowerShell cmdlet to install the extension. For example, the following command installs the extension on a Windows Server virtual machine (Classic) and names it "SQLIaaSExtension".
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -ReferenceName "SQLIaasExtension" -Version "1.2" | Update-AzureVM
If you update to the latest version of the SQL IaaS Agent Extension, you must restart your virtual machine after the extension is updated.
>[AZURE.NOTE] Classic VMs have an option to install and configure the SQL IaaS Agent Extension through the portal.
## <a name="status"></a>Status
One way to verify that the extension is installed is to view the agent status in the Azure portal. Select **All settings** on the virtual machine blade, and then click **Extensions**. You should see the **SQLIaaSAgent** extension listed.

You can also use the **Get-AzureVMSqlServerExtension** Azure PowerShell cmdlet.
Get-AzureVM –ServiceName "service" –Name "vmname" | Get-AzureVMSqlServerExtension
## <a name="removal"></a>Removal
In the Azure portal, you can uninstall the extension by clicking the ellipsis button on the **Extensions** blade of your virtual machine properties. Then click **Delete**.

You can also use the **Remove-AzureVMSqlServerExtension** PowerShell cmdlet.
Get-AzureVM –ServiceName "service" –Name "vmname" | Remove-AzureVMSqlServerExtension | Update-AzureVM
## <a name="next-steps"></a>Next steps
Begin using one of the services supported by the extension. For more information, see the topics listed in the [Supported services](#supported-services) section of this article.
For more information about running SQL Server on Azure virtual machines, see [SQL Server on Azure Virtual Machines overview](virtual-machines-windows-sql-server-iaas-overview.md).
---

<!-- source: src/content/post/2019/11/directory-listings-with-crystal/index.md (repo: brianwisti/rgb-eleventy, license: MIT) -->

---
aliases:
- /2019/11/29/directory-listings-with-crystal/
category: programming
date: 2019-11-29
description: I swear I'm not reinventing `ls`.
draft: false
slug: directory-listings-with-crystal
tags:
- crystal
- files
title: Directory Listings With Crystal
uuid: 63d91af4-5f4b-44bf-b3ce-16a9f1b1dc2b
---
[summarize one file with Crystal]: /post/2019/11/summarizing-a-file-with-crystal/
Okay, I know how to [summarize one file with Crystal][]. What about directories?
## List files in a directory
Let’s start with a list of the directory’s contents. We can worry about
summarizing them later.
[Dir](https://crystal-lang.org/api/Dir.html) knows all about directories
and their contents. Open a directory with a string containing a path,
and ask for its children.
``` crystal
dirname = "#{ENV["HOME"]}/Sync/Books/computer"
puts Dir.open(dirname).children
```
["programmingvoiceinterfaces.pdf", "Databases", "task-2.5.1.ref.pdf", "Perl", "Tools",
"devopsish", "diy", "Hacking_ The Art of Exploitation, 2nd Edition.pdf",
"The Linux Programming Interface.pdf", "Web Layout", "Java", "JavaScript", "Generative_Art.pdf",
"Mac OS X Lion_ The Missing Manual.PDF", "highperformanceimages.pdf", "jsonatwork.pdf",
"Microsoftish", "Python", "Ruby", "PHP", "Misc-lang", "tools", "Data Science", "Principles", "cs",
"vistaguidesv2"]
[Dir\#children](https://crystal-lang.org/api/Dir.html#children:Array\(String\)-instance-method)
gets you all the files in a directory except the special `.` and `..`
items. If you need those, use
[Dir\#entries](https://crystal-lang.org/api/Dir.html#entries:Array\(String\)-instance-method).
I need to look at each child if I want a readable summary of the
directory. I could mess with the
[Array](https://crystal-lang.org/api/Array.html) returned by
`Dir#children`. There’s a better way, though. Crystal provides a handy
[iterator](https://en.wikipedia.org/wiki/Iterator) with
[Dir\#each\_child](https://crystal-lang.org/api/Dir.html#each_child\(dirname,&block\)-class-method).
``` crystal
Dir.open(dirname).each_child { |child| puts child }
```
programmingvoiceinterfaces.pdf
Databases
task-2.5.1.ref.pdf
Perl
Tools
devopsish
diy
Hacking_ The Art of Exploitation, 2nd Edition.pdf
The Linux Programming Interface.pdf
Web Layout
Java
JavaScript
Generative_Art.pdf
Mac OS X Lion_ The Missing Manual.PDF
highperformanceimages.pdf
jsonatwork.pdf
Microsoftish
Python
Ruby
PHP
Misc-lang
tools
Data Science
Principles
cs
vistaguidesv2
That’s *much* easier to read. Yes. I can work with `Dir#each_child` to
create a summary.
## Summarize the directory contents
I want file names, sizes, and modification times. I already have the
names. [File.info](https://crystal-lang.org/api/File/Info.html) provides
size and time details. Formatting can be handled with a mix of
[sprintf](https://crystal-lang.org/api/toplevel.html#sprintf\(format_string,args:Array%7CTuple\):String-class-method)
and
[Number\#format](https://crystal-lang.org/api/Number.html#format\(separator='.',delimiter=',',decimal_places:Int?=nil,*,group:Int=3,only_significant:Bool=false\):String-instance-method).
``` crystal
Dir.open(dirname).each_child do |child|
info = File.info "#{dirname}/#{child}"
puts "%-50s %10d %24s" % { child, info.size.format, info.modification_time }
end
```
I worked these column widths out manually. There are more robust
approaches. In fact, I’ll get to one of them in a few paragraphs.
programmingvoiceinterfaces.pdf 18,597,798 2019-02-17 15:32:27 UTC
Databases 4,096 2019-10-26 04:31:25 UTC
task-2.5.1.ref.pdf 130,899 2019-02-17 15:32:27 UTC
Perl 4,096 2019-10-26 04:31:25 UTC
Tools 4,096 2019-10-25 14:44:36 UTC
devopsish 4,096 2019-10-26 04:31:25 UTC
diy 4,096 2019-10-19 07:27:54 UTC
Hacking_ The Art of Exploitation, 2nd Edition.pdf 4,218,534 2019-02-17 15:32:26 UTC
The Linux Programming Interface.pdf 19,628,791 2019-02-17 15:32:26 UTC
Web Layout 4,096 2019-10-19 07:27:57 UTC
Java 4,096 2019-10-26 04:31:25 UTC
JavaScript 4,096 2019-10-26 04:31:25 UTC
Generative_Art.pdf 22,777,770 2019-02-17 15:32:26 UTC
Mac OS X Lion_ The Missing Manual.PDF 43,051,912 2019-02-17 15:32:26 UTC
highperformanceimages.pdf 51,412,248 2019-02-17 15:32:26 UTC
jsonatwork.pdf 10,193,473 2019-02-17 15:32:26 UTC
Microsoftish 4,096 2019-10-19 07:28:00 UTC
Python 4,096 2019-10-26 04:31:25 UTC
Ruby 4,096 2019-10-26 04:31:25 UTC
PHP 4,096 2019-10-26 04:31:25 UTC
Misc-lang 4,096 2019-10-26 04:31:25 UTC
tools 4,096 2019-10-25 14:41:26 UTC
Data Science 4,096 2019-10-26 04:31:25 UTC
Principles 4,096 2019-10-20 01:23:43 UTC
cs 4,096 2019-10-19 01:37:08 UTC
vistaguidesv2 4,096 2019-10-19 06:56:45 UTC
This is nice and tidy! Of course, now I have more thoughts. The items
need to be sorted — by name is good enough. I also want a more obvious
indicator of which ones are directories.
``` crystal
Dir.open(dirname) do |dir|
dir.children.sort.each do |child|
info = File.info "#{dirname}/#{child}"
child += "/" if info.directory?
puts "%-50s %10s %24s" % { child, info.size.format, info.modification_time }
end
end
```
If a trailing `/` for directories is good enough for `ls -F`, it’s good
enough for me.
Data Science/ 4,096 2019-10-26 04:31:25 UTC
Databases/ 4,096 2019-10-26 04:31:25 UTC
Generative_Art.pdf 22,777,770 2019-02-17 15:32:26 UTC
Hacking_ The Art of Exploitation, 2nd Edition.pdf 4,218,534 2019-02-17 15:32:26 UTC
Java/ 4,096 2019-10-26 04:31:25 UTC
JavaScript/ 4,096 2019-10-26 04:31:25 UTC
Mac OS X Lion_ The Missing Manual.PDF 43,051,912 2019-02-17 15:32:26 UTC
Microsoftish/ 4,096 2019-10-19 07:28:00 UTC
Misc-lang/ 4,096 2019-10-26 04:31:25 UTC
PHP/ 4,096 2019-10-26 04:31:25 UTC
Perl/ 4,096 2019-10-26 04:31:25 UTC
Principles/ 4,096 2019-10-20 01:23:43 UTC
Python/ 4,096 2019-10-26 04:31:25 UTC
Ruby/ 4,096 2019-10-26 04:31:25 UTC
The Linux Programming Interface.pdf 19,628,791 2019-02-17 15:32:26 UTC
Tools/ 4,096 2019-10-25 14:44:36 UTC
Web Layout/ 4,096 2019-10-19 07:27:57 UTC
cs/ 4,096 2019-10-19 01:37:08 UTC
devopsish/ 4,096 2019-10-26 04:31:25 UTC
diy/ 4,096 2019-10-19 07:27:54 UTC
highperformanceimages.pdf 51,412,248 2019-02-17 15:32:26 UTC
jsonatwork.pdf 10,193,473 2019-02-17 15:32:26 UTC
programmingvoiceinterfaces.pdf 18,597,798 2019-02-17 15:32:27 UTC
task-2.5.1.ref.pdf 130,899 2019-02-17 15:32:27 UTC
tools/ 4,096 2019-10-25 14:41:26 UTC
vistaguidesv2/ 4,096 2019-10-19 06:56:45 UTC
This is better\! I can use this information. Time to look at arbitrary
directories.
## Specifying a directory via `ARGV`
[ARGV](https://crystal-lang.org/api/toplevel.html#ARGV) is a top level
array holding arguments intended for your program. If we called a
compiled Crystal program like this:
$ ./list ~/Sync/Books/computer
`~/Sync/Books/computer` would be the first and only item in `ARGV`.
<aside class="admonition note">
<p class="admonition-title">Note</p>
Some languages include the program name in their list of arguments.
Crystal keeps the program name in `PROGRAM_NAME`, and the arguments in
`ARGV`.
</aside>
If I needed anything more than "grab the first item in `ARGV`," I’d
probably use
[OptionParser](https://crystal-lang.org/api/OptionParser.html). But all
I need is "grab the first item in `ARGV`."
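
Crystal's `OptionParser` closely resembles Ruby's standard-library class of the same name, so a sketch of what that fancier version might look like runs fine as Ruby. The `--all` flag and the banner text are invented for illustration:

```ruby
require "optparse"

# Hypothetical flags for a fancier `list`; Crystal's OptionParser API is
# nearly identical (handlers registered with `on`, then `parse`).
options = {}
args = ["--all", "/tmp"]

OptionParser.new do |opts|
  opts.banner = "Usage: list [options] DIRNAME"
  opts.on("-a", "--all", "Include dotfiles") { options[:all] = true }
end.parse!(args)

# parse! consumes recognized flags, leaving positional arguments behind.
puts options[:all]  # => true
puts args.first     # => /tmp
```

That flag/positional split is exactly what a larger `list.cr` would want.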
**`list.cr`**
```crystal
# list information about a directory's contents
dirname = ARGV[0]
Dir.open(dirname) do |dir|
dir.children.sort.each do |child|
info = File.info "#{dirname}/#{child}"
child += "/" if info.directory?
puts "%-50s %10s %24s" % { child, info.size.format, info.modification_time }
end
end
```
$ crystal run list.cr -- ~/Sync/pictures/
1/ 4,096 2019-10-18 15:28:30 UTC
1999/ 4,096 2019-10-18 15:28:30 UTC
2001/ 4,096 2019-10-18 15:28:30 UTC
2007/ 4,096 2019-10-18 15:28:30 UTC
2009/ 4,096 2019-10-18 15:28:30 UTC
2010/ 4,096 2019-10-18 15:28:30 UTC
2011/ 4,096 2019-10-18 15:28:30 UTC
2012/ 4,096 2019-10-18 15:28:30 UTC
2013/ 4,096 2019-10-18 15:28:30 UTC
2014/ 4,096 2019-10-18 15:28:30 UTC
2015/ 4,096 2019-10-18 15:28:30 UTC
2016/ 4,096 2019-10-18 15:28:30 UTC
2017/ 4,096 2019-10-18 15:28:30 UTC
2018/ 4,096 2019-10-18 15:28:30 UTC
digikam4.db 4,386,816 2019-02-17 15:58:19 UTC
recognition.db 4,755,456 2019-02-17 15:58:19 UTC
thumbnails-digikam.db 197,328,896 2019-02-17 15:58:21 UTC
<aside class="admonition note">
<p class="admonition-title">Note</p>
When using `crystal run` to execute a script, use `--` to split
arguments for `crystal` and those for your script. `list.cr` is for
Crystal. `~/Sync/pictures/` is for the script.
</aside>
This works, if you use it exactly right. Right now is where I’m tempted
to say "Error handling is left as an exercise for the reader." But no.
Not this time.
Let’s build this up so it handles common errors and concerns.
## Writing `list.cr`
There are a few things I want this program to do.
- Tell me if I forgot the argument.
- Tell me if the argument isn’t a real path.
- If the argument is a directory, summarize the contents of that
directory.
- If the argument is a file, not a directory? Um — make a listing with
one entry for the file.
- I really want to be a little more precise with the column sizes.
That covers the likeliest possibilities running this program on my own
computer. Besides, Crystal will let me know I forgot something.
I assembled this
[top-down](https://en.wikipedia.org/wiki/Top-down_and_bottom-up_design),
describing what I want to do and then describing how to do it. And even
though Crystal doesn’t require a main method, that seems like a good
place to start. If nothing else, it keeps the core logic in one place.
What does `main` do? It displays a `summary_table` of whatever I hand to
it. If anything goes wrong, it quits with a `fatal_error`.
``` crystal
main
# Print a brief file or directory summary specified via command line argument
def main()
fatal_error("Missing FILENAME") if ARGV.size != 1
begin
puts summary_table ARGV[0]
rescue ex
fatal_error ex.message
end
end
```
I don’t need to consider every possible error. But I should make sure
we’re polite about the errors we do encounter. Rescue any
[exceptions](https://crystal-lang.org/reference/syntax_and_semantics/exception_handling.html)
that occur and hand them to `fatal_error`.
`fatal_error` prints its `error` message and usage info to
[STDERR](https://crystal-lang.org/api/toplevel.html#STDERR).
``` crystal
# Quit with an error and usage info
def fatal_error(error)
STDERR.puts error
STDERR.puts "USAGE: #{PROGRAM_NAME} FILENAME"
exit 1
end
```
That non-zero
[exit](https://crystal-lang.org/api/toplevel.html#exit\(status=0\):NoReturn-class-method)
tells the shell something went wrong. Handy for piped commands and
customized shell prompts that incorporate execution status.
The summary table glues together a collection of summary rows — even if
it’s just a collection of one — composed from file summaries and
formatted according to some basic guidelines about column size.
``` crystal
# Return a string description of a file or directory
def summary_table(filepath)
summaries = dir_summaries(filepath) || { file_summary(filepath) }
columns = column_sizes(summaries)
summaries.map { |s| summary_row(s, columns) }.join("\n")
end
```
[Short-circuit
assignment](https://dev.to/walpolesj/short-circuit-assignment-25ik) uses
the
[or](https://crystal-lang.org/reference/syntax_and_semantics/or.html)
operator `||` to succinctly set our summaries. We got a directory
summary? Use it. No? Okay, try treating it as a single file. Whichever
one returns a useful value first gets assigned to `summaries`.
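
The same short-circuit pattern works in Ruby, so here's a tiny self-contained sketch. The stub methods below are placeholders for illustration, not the article's real implementations:

```ruby
# `a || b` evaluates b only when a is nil or false, so the first
# useful value wins. The same idiom works in Ruby and Crystal.
def dir_summaries(path)
  nil  # pretend `path` wasn't a directory
end

def file_summary(path)
  [path, "0 bytes"]
end

summaries = dir_summaries("notes.txt") || [file_summary("notes.txt")]
p summaries  # => [["notes.txt", "0 bytes"]]
```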
Since we’re going top-down, we can say that a directory summary is a
sorted collection of files summaries and move on.
``` crystal
# Return a multiline description of a directory
def dir_summaries(dirname)
return unless File.directory? dirname
Dir.open(dirname) do |dir|
dir.children.sort.map { |child| file_summary File.join(dirname, child) }
end
end
```
Returning early for non-directories simplifies short-circuit assignment.
This method knows it may be handed a regular file. Stopping right away
prevents that from being treated the same as an error.
Oh *here’s* the work of summarizing. Build a name. Describe the size.
Turn the file’s modification time into something we can read.
Okay that’s not much work after all. Especially considering that I
already figured out how to describe size.
``` crystal
# Return a one-line description of a file
def file_summary(filename)
basename = File.basename filename
size = describe_size File.size filename
mod_time = File.info(filename).modification_time.to_local.to_s "%F %T"
basename += "/" if File.directory? filename
{ basename, size, mod_time }
end
```
That’s a lot of [method
chaining](https://en.wikipedia.org/wiki/Method_chaining). Method chains
are useful, but brittle. Tempted to at least hide it in a new
`describe_time` method. Oh well. Next time.
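
For what it's worth, the helper would be tiny. Here's the idea sketched in Ruby; the real Crystal version would also convert to local time with `.to_local` first, and the sample timestamp below is made up:

```ruby
# A possible describe_time helper: format a Time as "YYYY-MM-DD HH:MM:SS".
# This sketch formats whatever Time it is given.
def describe_time(time)
  time.strftime("%F %T")
end

puts describe_time(Time.utc(2019, 2, 17, 15, 32, 27))  # => 2019-02-17 15:32:27
```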
[the other day]: /post/2019/11/summarizing-a-file-with-crystal/
Yep. Turned that Proc from [the other day][] into a method.
``` crystal
# Return string description of byte size as bytes/KB/MB/GB
def describe_size(bytes)
scales = { {1024**3, "GB"}, {1024**2, "MB"}, {1024, "KB"} }
scale = scales.find { |i| bytes > i[0] }
scale, term = if scale
{ bytes / scale[0], scale[1] }
else
{ bytes, "bytes" }
end
return "#{scale.humanize} #{term}"
end
```
[Number\#humanize](https://crystal-lang.org/api/Number.html#humanize\(io:IO,precision=3,separator='.',delimiter=',',*,base=10**3,significant=true,prefixes:Indexable=SI_PREFIXES\):Nil-instance-method)
is a delightful convenience method for readable numbers. It adds commas
where expected. It trims floating point numbers to more digestible
precision. No word yet on whether it slices or dices.
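A couple of hedged examples of what `humanize` does with its defaults. I'm not promising these exact strings, since formatting details can shift between Crystal versions:

``` crystal
# humanize picks an SI prefix and trims to about 3 significant digits
puts 1_500_000.humanize # roughly "1.5M"
puts 1234.humanize      # roughly "1.23k"
```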
`column_sizes` is dangerously close to clever — the bad kind of smart
where I’m likely to miss a mistake. The intent is reasonable enough.
Find how long each field is in each summary. Figure out which is the
longest value for each column. But there’s probably a more legible way
to do it.
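One possibly more legible alternative, sketched but untested: use `Array#transpose` to flip rows into columns first, then measure each column directly.

``` crystal
# Sketch: convert each summary tuple to an array, transpose so each
# inner array holds one column's values, then take the longest field.
def column_sizes(summaries)
  summaries.map(&.to_a).transpose.map { |column| column.max_of(&.size) }
end
```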
``` crystal
# Return a list containing the size needed to fit each field.
def column_sizes(summaries)
sizes = summaries.map { |summary| summary.map { |field| field.size } }
(0..2).map { |i| sizes.max_of { |column| column[i] } }
end
```
Oh thank goodness. Back to fairly legible code with `summary_row`.
Although. Honestly? I’m being so specific with how each item in the
summary is treated. That calls out for a class, or at least a
[struct](https://crystal-lang.org/reference/syntax_and_semantics/structs.html).
Not enough time to rewrite the whole program, though. Sometimes it’s
more important to get to the next task than to get this one perfect.
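If I ever do that rewrite, the struct might start out something like this. Purely hypothetical; `FileSummary` and its methods aren't part of the program above:

``` crystal
# Hypothetical value type holding one file's summary fields
struct FileSummary
  getter name : String
  getter size : String
  getter mod_time : String

  def initialize(@name, @size, @mod_time)
  end

  # Each summary would know how to format itself as an aligned row
  def to_row(name_width, size_width)
    "#{name.ljust(name_width)} #{size.rjust(size_width)} #{mod_time}"
  end
end
```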
``` crystal
# Format one file summary as an aligned row
def summary_row(summary, columns)
path_column, size_column, mod_column = columns
String.build do |str|
str << summary[0].ljust(path_column) << " "
str << summary[1].rjust(size_column) << " "
str << summary[2].ljust(mod_column)
end
end
```
Like most languages, Crystal’s
[String](https://crystal-lang.org/api/String.html) class has *many*
methods to make life easier.
[String\#ljust](https://crystal-lang.org/api/String.html#ljust\(len,char:Char=''\)-instance-method)
pads the end of a string.
[String\#rjust](https://crystal-lang.org/api/String.html#rjust\(len,char:Char=''\)-instance-method)
pads at the start, which is nice for number columns. Though my humanized
numbers do reduce the effectiveness of a numeric column.
That’s it? I’m done? Excellent!
Let’s build it and look at a random folder in my Sync archive.
$ crystal build list.cr
$ ./list ~/Sync/music-stuff/
examine-iTunes.py 564 bytes 2019-02-17 07:58:19
itunes.xml 29.8 MB 2019-02-17 07:58:19
ratings.rb 1.02 KB 2019-02-17 07:58:19
rhythmdb.xml 14.8 MB 2019-02-17 07:58:19
Oh hey. Stuff from a couple old [music management](/tag/music) posts.
Getting back to those is on the task list. I’ll get there.
Anyways. My `list` program works!
I learned a fair bit about managing collections in Crystal. Also, the
"small methods" approach that served me well in Ruby seems just as handy
here.
## Yeah, I know
If file information was all I needed, I could get the same details and
more with
[ls](https://www.gnu.org/software/coreutils/manual/html_node/ls-invocation.html#ls-invocation).
$ ls -gGhp ~/Sync/pictures/
total 197M
drwxr-xr-x 3 4.0K Oct 18 08:28 1/
drwxr-xr-x 7 4.0K Oct 18 08:28 1999/
drwxr-xr-x 3 4.0K Oct 18 08:28 2001/
drwxr-xr-x 8 4.0K Oct 18 08:28 2007/
drwxr-xr-x 8 4.0K Oct 18 08:28 2009/
drwxr-xr-x 5 4.0K Oct 18 08:28 2010/
drwxr-xr-x 5 4.0K Oct 18 08:28 2011/
drwxr-xr-x 8 4.0K Oct 18 08:28 2012/
drwxr-xr-x 14 4.0K Oct 18 08:28 2013/
drwxr-xr-x 14 4.0K Oct 18 08:28 2014/
drwxr-xr-x 14 4.0K Oct 18 08:28 2015/
drwxr-xr-x 13 4.0K Oct 18 08:28 2016/
drwxr-xr-x 12 4.0K Oct 18 08:28 2017/
drwxr-xr-x 11 4.0K Oct 18 08:28 2018/
-rw-r--r-- 1 4.2M Feb 17 2019 digikam4.db
-rw-r--r-- 1 4.6M Feb 17 2019 recognition.db
-rw-r--r-- 1 189M Feb 17 2019 thumbnails-digikam.db
But I wouldn’t have learned anything about Crystal. I wouldn’t have had
nearly as much fun, either. And — not counting other concerns like
"paying rent" or "eating" — fun is the most important part!
---
title: Create and Apply the Snapshot | Microsoft Docs
ms.custom: ''
ms.date: 03/04/2017
ms.prod: sql
ms.prod_service: database-engine
ms.component: replication
ms.reviewer: ''
ms.suite: sql
ms.technology:
- replication
ms.tgt_pltfrm: ''
ms.topic: conceptual
helpviewer_keywords:
- snapshots [SQL Server replication], applying
- snapshots [SQL Server replication], creating
ms.assetid: 631f48bf-50c9-4015-b9d8-8f1ad92d1ee2
caps.latest.revision: 38
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: b9c64224f38a9a81f5171fc5c7cd83ee8c4ad0d0
ms.sourcegitcommit: 1740f3090b168c0e809611a7aa6fd514075616bf
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/03/2018
---
# <a name="create-and-apply-the-snapshot"></a>Create and Apply the Snapshot
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]
Snapshots are generated by the Snapshot Agent after a publication is created. They are generated in the following ways:
- Immediately. By default, the snapshot for a merge publication is generated immediately after the publication is created in the New Publication Wizard.
- At a scheduled time. Specify a schedule on the **Snapshot Agent** page of the New Publication Wizard, or when using stored procedures or Replication Management Objects (RMO).
- Manually. Run the Snapshot Agent from a command prompt or from [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)]. For more information about running agents, see [Replication Agent Executables Concepts](../../relational-databases/replication/concepts/replication-agent-executables-concepts.md) or [Start and Stop a Replication Agent (SQL Server Management Studio)](../../relational-databases/replication/agents/start-and-stop-a-replication-agent-sql-server-management-studio.md).
For merge replication, a snapshot is generated every time the Snapshot Agent runs. For transactional replication, snapshot generation depends on the setting of the publication property **immediate_sync**. If the property is set to TRUE (the default when the New Publication Wizard is used), a snapshot is generated every time the Snapshot Agent runs, and it can be applied to Subscribers at any time. If the property is set to FALSE (the default when **sp_addpublication** is used), a snapshot is generated only if a new subscription has been added since the last time the Snapshot Agent ran; Subscribers must wait for the Snapshot Agent to finish before they can synchronize.
By default, generated snapshots are stored in the default snapshot folder on the Distributor. Snapshot files can also be stored on removable media (such as removable disks or CD-ROMs) or in a location other than the default snapshot folder. In addition, the files can be compressed to make them easier to store and transfer, and scripts can be executed before or after the snapshot is applied at the Subscriber. For more information about these options, see [Snapshot Options](../../relational-databases/replication/snapshot-options.md).
If a snapshot is for a merge publication that uses parameterized filters, it is created in a two-part process. First, a schema snapshot is created that contains the replication scripts and the schema of the published objects, but no data. Each subscription is then initialized with a snapshot that includes the scripts, the schema copied from the schema snapshot, and the data that belongs to the subscription's partition. For more information, see [Snapshots for Merge Publications with Parameterized Filters](../../relational-databases/replication/snapshots-for-merge-publications-with-parameterized-filters.md).
After a snapshot has been created at the Publisher and stored in the default or an alternate snapshot location, it can be transferred to the Subscriber and applied. During the initial synchronization, the Distribution Agent (for snapshot and transactional replication) or the Merge Agent (for merge replication) transfers the snapshot to the subscription database at the Subscriber and applies the schema and data files to that database. By default, if the New Subscription Wizard is used, the initial synchronization occurs immediately after the subscription is created. This behavior is controlled by the **Initialize When** option on the wizard's **Initialize Subscriptions** page. If a snapshot is generated after a subscription has been initialized, it is not applied at the Subscriber unless the subscription is marked for reinitialization. For more information, see [Reinitialize Subscriptions](../../relational-databases/replication/reinitialize-subscriptions.md).
After the Distribution Agent or Merge Agent applies the initial snapshot, the agent propagates subsequent updates and other data modifications. When a snapshot is distributed and applied, only Subscribers waiting for an initial or new snapshot are affected. Other Subscribers to the publication (those that have already received inserts, updates, deletes, or other modifications to the published data) are unaffected.
To create and apply the initial snapshot, see [Create and Apply the Initial Snapshot](../../relational-databases/replication/create-and-apply-the-initial-snapshot.md).
To view or modify the default snapshot folder location, see:
- [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)]: [Specify the Default Snapshot Location (SQL Server Management Studio)](../../relational-databases/replication/specify-the-default-snapshot-location-sql-server-management-studio.md)
- Replication programming and RMO programming: [Configure Publishing and Distribution](../../relational-databases/replication/configure-publishing-and-distribution.md)
## <a name="see-also"></a>See Also
[Initialize a Subscription with a Snapshot](../../relational-databases/replication/initialize-a-subscription-with-a-snapshot.md)
[Secure the Snapshot Folder](../../relational-databases/replication/security/secure-the-snapshot-folder.md)
[sp_addpublication (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-addpublication-transact-sql.md)