Dataset schema (one row per source file):

| Column | Type | Range / values |
| --- | --- | --- |
| hexsha | string | length 40 to 40 |
| size | int64 | 5 to 1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3 to 344 |
| max_stars_repo_name | string | length 5 to 125 |
| max_stars_repo_head_hexsha | string | length 40 to 78 |
| max_stars_repo_licenses | sequence | length 1 to 11 |
| max_stars_count | int64, nullable | 1 to 368k |
| max_stars_repo_stars_event_min_datetime | string, nullable | length 24 |
| max_stars_repo_stars_event_max_datetime | string, nullable | length 24 |
| max_issues_repo_path | string | length 3 to 344 |
| max_issues_repo_name | string | length 5 to 125 |
| max_issues_repo_head_hexsha | string | length 40 to 78 |
| max_issues_repo_licenses | sequence | length 1 to 11 |
| max_issues_count | int64, nullable | 1 to 116k |
| max_issues_repo_issues_event_min_datetime | string, nullable | length 24 |
| max_issues_repo_issues_event_max_datetime | string, nullable | length 24 |
| max_forks_repo_path | string | length 3 to 344 |
| max_forks_repo_name | string | length 5 to 125 |
| max_forks_repo_head_hexsha | string | length 40 to 78 |
| max_forks_repo_licenses | sequence | length 1 to 11 |
| max_forks_count | int64, nullable | 1 to 105k |
| max_forks_repo_forks_event_min_datetime | string, nullable | length 24 |
| max_forks_repo_forks_event_max_datetime | string, nullable | length 24 |
| content | string | length 5 to 1.04M |
| avg_line_length | float64 | 1.14 to 851k |
| max_line_length | int64 | 1 to 1.03M |
| alphanum_fraction | float64 | 0 to 1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01 to 1 |
Row: hexsha 5358dc8112a07ec5a72c11462164ae010d34974e | path api/Outlook.TaskRequestAcceptItem.OutlookVersion.md | repo ahkon/VBA-Docs | head c047d7975de2b0949b496af150d279c505a8595b | size 791 | ext md | lang Markdown | licenses ["CC-BY-4.0", "MIT"] | stars 4 | issues 1 | forks 1

---
title: TaskRequestAcceptItem.OutlookVersion property (Outlook)
keywords: vbaol11.chm1793
f1_keywords:
- vbaol11.chm1793
ms.prod: outlook
api_name:
- Outlook.TaskRequestAcceptItem.OutlookVersion
ms.assetid: 52c2e829-7370-bade-a708-edd889eb24d9
ms.date: 06/08/2017
localization_priority: Normal
---
# TaskRequestAcceptItem.OutlookVersion property (Outlook)
Returns a **String** indicating the major and minor version number of the Outlook application for an Outlook item. Read-only.
## Syntax
_expression_. `OutlookVersion`
_expression_ A variable that represents a [TaskRequestAcceptItem](Outlook.TaskRequestAcceptItem.md) object.
## See also
[TaskRequestAcceptItem Object](Outlook.TaskRequestAcceptItem.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)]

Row stats: avg_line_length 24.71875 | max_line_length 125 | alphanum_fraction 0.809102 | lid eng_Latn | lid_prob 0.447775
Row: hexsha 535a14db097622d281bcc85bb8a3f0d52e7b1456 | path socrata/ghk3-vcke.md | repo axibase/open-data-catalog | head 18210b49b6e2c7ef05d316b6699d2f0778fa565f | size 3,951 | ext md | lang Markdown | licenses ["Apache-2.0"] | stars 7 | issues 5 | forks 3

# Salaries: ESD: Malheur: Fiscal Year 2013
## Dataset
| Name | Value |
| :--- | :---- |
| Catalog | [Link](https://catalog.data.gov/dataset/salaries-esd-malheur-fiscal-year-2013-89a8c) |
| Metadata | [Link](https://data.oregon.gov/api/views/ghk3-vcke) |
| Data: JSON | [100 Rows](https://data.oregon.gov/api/views/ghk3-vcke/rows.json?max_rows=100) |
| Data: CSV | [100 Rows](https://data.oregon.gov/api/views/ghk3-vcke/rows.csv?max_rows=100) |
| Host | data.oregon.gov |
| Id | ghk3-vcke |
| Name | Salaries: ESD: Malheur: Fiscal Year 2013 |
| Category | Revenue & Expense |
| Tags | salaries, esd salaries, educational service districts, malheur esd, malheur esd salaries |
| Created | 2013-12-20T00:14:03Z |
| Publication Date | 2013-12-20T00:16:21Z |
## Description
Salaries as reported by Malheur Educational Service District for Fiscal Year 2013
## Columns
```ls
| Included | Schema Type | Field Name | Name | Data Type | Render Type |
| ======== | ============== | ========== | ====== | ========= | =========== |
| Yes | series tag | odeid | ODEID# | text | number |
| Yes | series tag | dname | DNAME | text | text |
| Yes | series tag | jobdes | JOBDES | text | text |
| Yes | series tag | jobtyp | JOBTYP | text | text |
| Yes | series tag | fullt | FULLT | text | text |
| Yes | numeric metric | annrat | ANNRAT | number | number |
```
## Time Field
```ls
Value = 2013
Format & Zone = yyyy
```
## Data Commands
```ls
series e:ghk3-vcke d:2013-01-01T00:00:00.000Z t:odeid=2106 t:fullt=FULL t:jobtyp=CERTIFIED t:dname="MALHEUR ESD RGN 14" t:jobdes="SERVICE PROGRAM COORDINATOR" m:annrat=36663
series e:ghk3-vcke d:2013-01-01T00:00:00.000Z t:odeid=2106 t:fullt=FULL t:jobtyp=CLASSIFIED t:dname="MALHEUR ESD RGN 14" t:jobdes="TRANSITION SPECIALIST" m:annrat=19063.8
series e:ghk3-vcke d:2013-01-01T00:00:00.000Z t:odeid=2106 t:fullt=FULL t:jobtyp=CERTIFIED t:dname="MALHEUR ESD RGN 14" t:jobdes="BEHAVIOR SPECIALIST/COUNSELOR" m:annrat=32877
```
## Meta Commands
```ls
metric m:annrat p:double l:ANNRAT t:dataTypeName=number
entity e:ghk3-vcke l:"Salaries: ESD: Malheur: Fiscal Year 2013" t:url=https://data.oregon.gov/api/views/ghk3-vcke
property e:ghk3-vcke t:meta.view v:id=ghk3-vcke v:category="Revenue & Expense" v:averageRating=0 v:name="Salaries: ESD: Malheur: Fiscal Year 2013"
property e:ghk3-vcke t:meta.view.owner v:id=d6zz-js5q v:screenName="Paula N." v:lastNotificationSeenAt=1492617591 v:displayName="Paula N."
property e:ghk3-vcke t:meta.view.tableauthor v:id=d6zz-js5q v:screenName="Paula N." v:roleName=administrator v:lastNotificationSeenAt=1492617591 v:displayName="Paula N."
```
## Top Records
```ls
| odeid | dname | jobdes | jobtyp | fullt | annrat |
| ===== | ================== | ============================== | ========== | ===== | ======= |
| 2106 | MALHEUR ESD RGN 14 | SERVICE PROGRAM COORDINATOR | CERTIFIED | FULL | 36663 |
| 2106 | MALHEUR ESD RGN 14 | TRANSITION SPECIALIST | CLASSIFIED | FULL | 19063.8 |
| 2106 | MALHEUR ESD RGN 14 | BEHAVIOR SPECIALIST/COUNSELOR | CERTIFIED | FULL | 32877 |
| 2106 | MALHEUR ESD RGN 14 | DIRECTOR,CURRICULUM & INSTRUCT | CERTIFIED | FULL | 75000 |
| 2106 | MALHEUR ESD RGN 14 | DIAGNOSTICIAN/CONSULTING TCHR | CERTIFIED | PART | 38272.5 |
| 2106 | MALHEUR ESD RGN 14 | SPCH-LANG PATHOLOGIST | CLASSIFIED | FULL | 32940 |
| 2106 | MALHEUR ESD RGN 14 | DIAGNOSTICIAN/CONSULTING TCHR | CERTIFIED | FULL | 63105 |
| 2106 | MALHEUR ESD RGN 14 | SUPERINTENDENT | CERTIFIED | FULL | 94940 |
| 2106 | MALHEUR ESD RGN 14 | SPCH-LANG PATHOLOGIST | CERTIFIED | FULL | 32877 |
| 2106 | MALHEUR ESD RGN 14 | SPCH-LANG PATHOLOGIST | CERTIFIED | FULL | 63105 |
```

Row stats: avg_line_length 48.182927 | max_line_length 175 | alphanum_fraction 0.621868 | lid yue_Hant | lid_prob 0.861284
Row: hexsha 535a46d89e897af7cb96ea3083a685031eb31f3f | path README.md | repo rogerjdeangelis/utl_identifying-first-occurrence-after-trigger-event | head 1542bd7cb2648baeac80306b2ac31e2ed3aa128e | size 5,578 | ext md | lang Markdown | licenses ["MIT"] | stars null | issues null | forks null

# utl_identifying-first-occurrence-after-trigger-event
Identifying first occurrence after trigger event. Keywords: sas sql join merge big data analytics macros oracle teradata mysql sas communities stackoverflow statistics artificial inteligence AI Python R Java Javascript WPS Matlab SPSS Scala Perl C C# Excel MS Access JSON graphics maps NLP natural language processing machine learning igraph DOSUBL DOW loop stackoverflow SAS community.
Identifying first occurrence after trigger event
Same result in SAS and WPS
See additional solutions at end by
Mark Keintz ([email protected])
For every ID I want to record all 'trigger' events, namely when a=1, and then I need to know
how long it takes to the next occurrence of b=1.
see
https://tinyurl.com/yal2omu6
https://stackoverflow.com/questions/51248793/identifying-first-occurrence-after-trigger-event
INPUT
===== | RULES (Two records out)
|
WORK.HAVE total obs=30 |
|
ID T A B |
|
1 1 0 0 | First Next
1 2 0 0 | A B B-B
1 3 1 0 | T BEG END DURS
1 4 0 0 |
1 5 0 1 | 5 3 5 2
1 6 1 0 |
1 7 0 0 |
1 8 0 0 |
1 9 0 0 |
1 10 0 1 |10 6 10 4
PROCESS
=======
data want;
do until(last.id);
set have;
by id;
retain beg;
if a=1 then beg=t;
if b=1 then do;
end=t;
durs=end-beg;
keep id t beg end durs;
output;
end;
end;
run;quit;
OUTPUT
======
WORK.WANT total obs=7
ID T BEG END DURS
1 5 3 5 2
1 10 6 10 4
2 5 2 5 3
2 6 2 6 4
2 7 2 7 5
2 8 2 8 6
2 10 9 10 1
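
A set-based PROC SQL alternative (an untested sketch): pair each b=1 record
with the latest preceding a=1 record per id. Here END is just T, and
DURS = T - BEG, matching the data step output above.

proc sql;
 create table wantSql as
 select b.id
 ,b.t
 ,max(a.t) as beg
 ,b.t - max(a.t) as durs
 from have(where=(b=1)) as b
 ,have(where=(a=1)) as a
 where a.id = b.id and a.t <= b.t
 group by b.id, b.t;
quit;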
* _ _ _
_ __ ___ __ _| | _____ __| | __ _| |_ __ _
| '_ ` _ \ / _` | |/ / _ \ / _` |/ _` | __/ _` |
| | | | | | (_| | < __/ | (_| | (_| | || (_| |
|_| |_| |_|\__,_|_|\_\___| \__,_|\__,_|\__\__,_|
;
data have;
input id t a b ;
datalines;
1 1 0 0
1 2 0 0
1 3 1 0
1 4 0 0
1 5 0 1
1 6 1 0
1 7 0 0
1 8 0 0
1 9 0 0
1 10 0 1
2 1 0 0
2 2 1 0
2 3 0 0
2 4 0 0
2 5 0 1
2 6 0 1
2 7 0 1
2 8 0 1
2 9 1 0
2 10 0 1
3 1 0 0
3 2 0 0
3 3 0 0
3 4 0 0
3 5 0 0
3 6 0 0
3 7 1 0
3 8 0 0
3 9 0 0
3 10 0 0
;
run;
* _ _ _
___ ___ | |_ _| |_(_) ___ _ __
/ __|/ _ \| | | | | __| |/ _ \| '_ \
\__ \ (_) | | |_| | |_| | (_) | | | |
|___/\___/|_|\__,_|\__|_|\___/|_| |_|
;
%utl_submit_wps64('
libname wrk sas7bdat "%sysfunc(pathname(work))";
data wrk.wantwps;
do until(last.id);
set wrk.have;
by id;
retain beg;
if a=1 then beg=t;
if b=1 then do;
end=t;
durs=end-beg;
keep id t beg end durs;
output;
end;
end;
run;quit;
');
proc print data=wantwps;
run;quit;
*__ __ _
| \/ | __ _ _ __| | __
| |\/| |/ _` | '__| |/ /
| | | | (_| | | | <
|_| |_|\__,_|_| |_|\_\
;
data want1 (drop=_:);
set have (drop=b);
if a=1;
retain _mrb1 0; /* Recnum of the most recent B=1 */
if _n_> _mrb1 then do _mrb1=_mrb1+1 by 1 until (b=1 and _mrb1>=_n_);
set have (keep=b);
end;
lead=_mrb1-_n_;
run;
Essentially this reads records with all vars except B until A=1.
Then, if a record with B=1 at or beyond _N_ has not already been found,
it reads records until a record containing B=1 is found at or beyond _N_.
BTW, this allows two A=1 records to precede a single B=1.
Both A=1 records will have a distance to that same B=1 record.
If you have by groups, then you have to make sure to not look ahead for B=1
records beyond the current ID group.
data want (drop=_:);
set have (drop=b);
by id;
if a=1 or last.id;
retain _mrb1 0; /* Recnum of the most recent B=1 */
if _n_> _mrb1 then do _mrb1=_mrb1+1 by 1 until ((b=1 and _mrb1>=_n_) or (last.id=1 and _mrb1=_n_));
set have (keep=b);
end;
if a^=1 or b^=1 then delete; /*For id's with no a=1 and/or subsequent b=1*/
lead=_mrb1-_n_;
run;
*____
| _ \ ___ __ _ ___ _ __
| |_) / _ \ / _` |/ _ \ '__|
| _ < (_) | (_| | __/ |
|_| \_\___/ \__, |\___|_|
|___/
;
data havBfr;
retain grp 0 bs 0;
set have;
by id;
rec=_n_;
if sum(a+b);
if a then grp=grp+1;
if b = lag(b) then delete;
run;quit;
proc transpose data=havBfr out=havXpo;
by id grp;
var t;
run;quit;
Row stats: avg_line_length 23.939914 | max_line_length 387 | alphanum_fraction 0.455181 | lid eng_Latn | lid_prob 0.933607
Row: hexsha 535ace3b14370a297014bcef65ef9b859ad2da9a | path README.md | repo textcreationpartnership/K001482.000 | head 43606066ad967f4cd037cd03b3e366942fdf5639 | size 4,696 | ext md | lang Markdown | licenses ["CC0-1.0"] | stars null | issues null | forks null

#Clump and Cudden: or, the review: a comic musical piece, in one act, as it is performed at the Royal Circus. Written and composed by Mr. Dibdin.#
##Dibdin, Charles, 1745-1814.##
Clump and Cudden: or, the review: a comic musical piece, in one act, as it is performed at the Royal Circus. Written and composed by Mr. Dibdin.
Dibdin, Charles, 1745-1814.
##General Summary##
**Links**
[TCP catalogue](http://www.ota.ox.ac.uk/tcp/) •
[HTML](http://tei.it.ox.ac.uk/tcp/Texts-HTML/free/004/004778076.html) •
[EPUB](http://tei.it.ox.ac.uk/tcp/Texts-EPUB/free/004/004778076.epub)
**Availability**
This keyboarded and encoded edition of the
work described above is co-owned by the institutions
providing financial support to the Early English Books
Online Text Creation Partnership. This Phase I text is
available for reuse, according to the terms of Creative
Commons 0 1.0 Universal. The text can be copied,
modified, distributed and performed, even for
commercial purposes, all without asking permission.
##Content Summary##
#####Front#####
1. CHARACTERS.
#####Body#####
1. SCENE I.
1. SCENE II.
1. SCENE III.
1. SCENE IV.
1. SCENE V.
1. SCENE VI.
1. SCENE VII.
1. SCENE VIII.
1. SCENE IX.
**Types of content**
* There are 528 **verse** lines!
* There are 148 **drama** parts! This is **verse drama**.
* Oh, Mr. Jourdain, there is **prose** in there!
There are 1 **ommitted** fragments!
@__reason__ (1) : illegible (1) • @__resp__ (1) : #OXF (1) • @__extent__ (1) : 1 letter (1)
**Character listing**
|Text|string(s)|codepoint(s)|
|---|---|---|
|Latin Extended-A|ſ|383|
|General Punctuation|•—|8226 8212|
|Superscripts and Subscripts|⁰|8304|
|Geometric Shapes|▪|9642|
##Tag Usage Summary##
###Header Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__author__|2||
|2.|__availability__|1||
|3.|__biblFull__|1||
|4.|__date__|2| @__when__ (1) : 2008-09 (1)|
|5.|__editorialDecl__|1||
|6.|__extent__|2||
|7.|__idno__|7| @__type__ (7) : DLPS (1), ESTC (1), DOCNO (1), TCP (1), GALEDOCNO (1), CONTENTSET (1), IMAGESETID (1)|
|8.|__langUsage__|1||
|9.|__language__|1| @__ident__ (1) : eng (1)|
|10.|__listPrefixDef__|1||
|11.|__note__|4||
|12.|__notesStmt__|1||
|13.|__p__|11||
|14.|__prefixDef__|2| @__ident__ (2) : tcp (1), char (1) • @__matchPattern__ (2) : ([0-9\-]+):([0-9IVX]+) (1), (.+) (1) • @__replacementPattern__ (2) : http://eebo.chadwyck.com/downloadtiff?vid=$1&page=$2 (1), https://raw.githubusercontent.com/textcreationpartnership/Texts/master/tcpchars.xml#$1 (1)|
|15.|__projectDesc__|1||
|16.|__pubPlace__|2||
|17.|__publicationStmt__|2||
|18.|__publisher__|2||
|19.|__ref__|2| @__target__ (2) : https://creativecommons.org/publicdomain/zero/1.0/ (1), http://www.textcreationpartnership.org/docs/. (1)|
|20.|__sourceDesc__|1||
|21.|__title__|2||
|22.|__titleStmt__|2||
###Text Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__body__|6||
|2.|__desc__|1||
|3.|__div__|17| @__type__ (17) : title_page (1), dramatis_personae (1), scene (9), song (4), recitative (2) • @__n__ (9) : 1 (1), 2 (1), 3 (1), 4 (1), 5 (1), 6 (1), 7 (1), 8 (1), 9 (1)|
|4.|__floatingText__|6| @__xml:lang__ (6) : unk (0)|
|5.|__g__|5| @__ref__ (5) : char:EOLhyphen (4), char:punc (1)|
|6.|__gap__|1| @__reason__ (1) : illegible (1) • @__resp__ (1) : #OXF (1) • @__extent__ (1) : 1 letter (1)|
|7.|__head__|36||
|8.|__hi__|10||
|9.|__item__|11||
|10.|__l__|528||
|11.|__label__|11||
|12.|__lg__|35| @__n__ (12) : 2 (8), 3 (2), 4 (1), 5 (1) • @__type__ (7) : song (7)|
|13.|__list__|1||
|14.|__p__|10||
|15.|__pb__|30| @__facs__ (30) : tcp:0100000700:1 (1), tcp:0100000700:2 (1), tcp:0100000700:3 (1), tcp:0100000700:4 (1), tcp:0100000700:5 (1), tcp:0100000700:6 (1), tcp:0100000700:7 (1), tcp:0100000700:8 (1), tcp:0100000700:9 (1), tcp:0100000700:10 (1), tcp:0100000700:11 (1), tcp:0100000700:12 (1), tcp:0100000700:13 (1), tcp:0100000700:14 (1), tcp:0100000700:15 (1), tcp:0100000700:16 (1), tcp:0100000700:17 (1), tcp:0100000700:18 (1), tcp:0100000700:19 (1), tcp:0100000700:20 (1), tcp:0100000700:21 (1), tcp:0100000700:22 (1), tcp:0100000700:23 (1), tcp:0100000700:24 (1), tcp:0100000700:25 (1), tcp:0100000700:26 (1), tcp:0100000700:27 (1), tcp:0100000700:28 (1), tcp:0100000700:29 (1), tcp:0100000700:30 (1) • @__rendition__ (2) : simple:additions (2) • @__n__ (25) : 4 (1), 5 (1), 6 (1), 7 (1), 8 (1), 9 (1), 10 (1), 11 (1), 12 (1), 15 (1), 16 (1), 17 (1), 18 (1), 19 (1), 20 (1), 21 (1), 22 (1), 23 (1), 24 (1), 25 (1), 26 (1), 27 (1), 28 (1), 29 (1), 30 (1)|
|16.|__sp__|148||
|17.|__speaker__|146||
|18.|__stage__|12||
|19.|__trailer__|1||
Row stats: avg_line_length 37.568 | max_line_length 970 | alphanum_fraction 0.633944 | lid yue_Hant | lid_prob 0.284444
Row: hexsha 535adc9a711fc163a120e909ed02f46dd66a7240 | path activesupport/CHANGELOG.md | repo Manfred/rails | head 58df9a452fa75f10365849775292dcf0ae79a6f2 | size 4,842 | ext md | lang Markdown | licenses ["MIT"] | stars 1 | issues null | forks 1

* Add block support to `ActiveSupport::Testing::TimeHelpers#travel_back`.
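For example (a sketch of the intended usage):

    travel_to Time.zone.parse("2004-11-24 01:04:44") do
      travel_back do
        Time.current # the real current time, only inside this block
      end
      Time.current # => back to the stubbed 2004-11-24 01:04:44
    end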
*Tim Masliuchenko*
* Update `ActiveSupport::Messages::Metadata#fresh?` to work for cookies with expiry set when
`ActiveSupport.parse_json_times = true`.
*Christian Gregg*
* Support symbolic links for `content_path` in `ActiveSupport::EncryptedFile`.
*Takumi Shotoku*
* Improve `Range#===`, `Range#include?`, and `Range#cover?` to work with beginless (startless)
and endless range targets.
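For example:

    (1..).include?(99)  # => true (endless target)
    (..5).cover?(-10)   # => true (beginless target)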
*Allen Hsu*, *Andrew Hodgkinson*
* Don't use `Process#clock_gettime(CLOCK_THREAD_CPUTIME_ID)` on Solaris.
*Iain Beeston*
* Prevent `ActiveSupport::Duration.build(value)` from creating instances of
`ActiveSupport::Duration` unless `value` is of type `Numeric`.
Addresses the errant set of behaviours described in #37012 where
`ActiveSupport::Duration` comparisons would fail confusingly
or return unexpected results when comparing durations built from instances of `String`.
Before:

    small_duration_from_string = ActiveSupport::Duration.build('9')
    large_duration_from_string = ActiveSupport::Duration.build('100000000000000')
    small_duration_from_int = ActiveSupport::Duration.build(9)

    large_duration_from_string > small_duration_from_string
    # => false

    small_duration_from_string == small_duration_from_int
    # => false

    small_duration_from_int < large_duration_from_string
    # => ArgumentError (comparison of ActiveSupport::Duration::Scalar with ActiveSupport::Duration failed)

    large_duration_from_string > small_duration_from_int
    # => ArgumentError (comparison of String with ActiveSupport::Duration failed)

After:

    small_duration_from_string = ActiveSupport::Duration.build('9')
    # => TypeError (can't build an ActiveSupport::Duration from a String)
*Alexei Emam*
* Add `ActiveSupport::Cache::Store#delete_multi` method to delete multiple keys from the cache store.
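For example (assuming a configured cache store):

    Rails.cache.delete_multi(["first_key", "second_key"])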
*Peter Zhu*
* Support multiple arguments in `HashWithIndifferentAccess` for `merge` and `update` methods, to
follow Ruby 2.6 addition.
*Wojciech Wnętrzak*
* Allow initializing `thread_mattr_*` attributes via `:default` option.
    class Scraper
      thread_mattr_reader :client, default: Api::Client.new
    end
*Guilherme Mansur*
* Add `compact_blank` for those times when you want to remove #blank? values from
an Enumerable (also `compact_blank!` on Hash, Array, ActionController::Parameters).
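For example:

    [1, "", nil, 2, " ", [], {}, false, true].compact_blank
    # => [1, 2, true]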
*Dana Sherson*
* Make ActiveSupport::Logger Fiber-safe.
Use `Fiber.current.__id__` in `ActiveSupport::Logger#local_level=` in order
to make log level local to Ruby Fibers in addition to Threads.
Example:

    logger = ActiveSupport::Logger.new(STDOUT)
    logger.level = 1
    puts "Main is debug? #{logger.debug?}"

    Fiber.new {
      logger.local_level = 0
      puts "Thread is debug? #{logger.debug?}"
    }.resume

    puts "Main is debug? #{logger.debug?}"

Before:

    Main is debug? false
    Thread is debug? true
    Main is debug? true

After:

    Main is debug? false
    Thread is debug? true
    Main is debug? false
Fixes #36752.
*Alexander Varnin*
* Allow the `on_rotation` proc used when decrypting/verifying a message to be
passed at the constructor level.
Before:

    crypt = ActiveSupport::MessageEncryptor.new('long_secret')
    crypt.decrypt_and_verify(encrypted_message, on_rotation: proc { ... })
    crypt.decrypt_and_verify(another_encrypted_message, on_rotation: proc { ... })

After:

    crypt = ActiveSupport::MessageEncryptor.new('long_secret', on_rotation: proc { ... })
    crypt.decrypt_and_verify(encrypted_message)
    crypt.decrypt_and_verify(another_encrypted_message)
*Edouard Chin*
* `delegate_missing_to` would raise a `DelegationError` if the object
delegated to was `nil`. Now the `allow_nil` option has been added to enable
the user to specify they want `nil` returned in this case.
*Matthew Tanous*
* `truncate` would return the original string if it was too short to be truncated
and a frozen string if it were long enough to be truncated. Now truncate will
consistently return an unfrozen string regardless. This behavior is consistent
with `gsub` and `strip`.
Before:

    'foobar'.truncate(5).frozen?
    # => true
    'foobar'.truncate(6).frozen?
    # => false

After:

    'foobar'.truncate(5).frozen?
    # => false
    'foobar'.truncate(6).frozen?
    # => false
*Jordan Thomas*
Please check [6-0-stable](https://github.com/rails/rails/blob/6-0-stable/activesupport/CHANGELOG.md) for previous changes.
| 30.840764 | 122 | 0.688352 | eng_Latn | 0.885214 |
Row: hexsha 535b36a4fc897dfc2d792acfcba3121251f757d9 | path articles/sql-database/transparent-data-encryption-azure-sql.md | repo raahmed/azure-docs | head 5276efe95a7c538554c2ab496c78e47945198dba | size 15,159 | ext md | lang Markdown | licenses ["CC-BY-4.0"] | stars null | issues 1 | forks null

---
title: "Transparent data encryption for Azure SQL Database and Data Warehouse | Microsoft Docs"
description: "An overview of transparent data encryption for SQL Database and Data Warehouse. The document covers its benefits and the options for configuration, which includes service-managed transparent data encryption and Bring Your Own Key."
services: sql-database
ms.service: sql-database
ms.subservice: security
ms.custom:
ms.devlang:
ms.topic: conceptual
author: aliceku
ms.author: aliceku
ms.reviewer: vanto
manager: craigg
ms.date: 12/04/2018
---
# Transparent data encryption for SQL Database and Data Warehouse
Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Data Warehouse against the threat of malicious activity. It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application. By default, TDE is enabled for all newly deployed Azure SQL databases. TDE cannot be used to encrypt the logical **master** database in SQL Database. The **master** database contains objects that are needed to perform the TDE operations on the user databases.
TDE needs to be manually enabled for Azure SQL Managed Instance, older databases of Azure SQL Database, or Azure SQL Data Warehouse.
Transparent data encryption encrypts the storage of an entire database by using a symmetric key called the database encryption key. This database encryption key is protected by the transparent data encryption protector. The protector is either a service-managed certificate (service-managed transparent data encryption) or an asymmetric key stored in Azure Key Vault (Bring Your Own Key). You set the transparent data encryption protector at the server level for Azure SQL Database and Data Warehouse, and instance level for Azure SQL Managed Instance. The term *server* refers both to server and instance throughout this document, unless stated differently.
On database startup, the encrypted database encryption key is decrypted and then used for decryption and re-encryption of the database files in the SQL Server Database Engine process. Transparent data encryption performs real-time I/O encryption and decryption of the data at the page level. Each page is decrypted when it's read into memory and then encrypted before being written to disk. For a general description of transparent data encryption, see [Transparent data encryption](https://docs.microsoft.com/sql/relational-databases/security/encryption/transparent-data-encryption).
SQL Server running on an Azure virtual machine also can use an asymmetric key from Key Vault. The configuration steps are different from using an asymmetric key in SQL Database and SQL Managed Instance. For more information, see [Extensible key management by using Azure Key Vault (SQL Server)](https://docs.microsoft.com/sql/relational-databases/security/encryption/extensible-key-management-using-azure-key-vault-sql-server).
## Service-managed transparent data encryption
In Azure, the default setting for transparent data encryption is that the database encryption key is protected by a built-in server certificate. The built-in server certificate is unique for each server. If a database is in a geo-replication relationship, both the primary and geo-secondary database are protected by the primary database's parent server key. If two databases are connected to the same server, they also share the same built-in certificate. Microsoft automatically rotates these certificates at least every 90 days.
Microsoft also seamlessly moves and manages the keys as needed for geo-replication and restores.
> [!IMPORTANT]
> All newly created SQL databases are encrypted by default by using service-managed transparent data encryption. Azure SQL Managed Instance databases, existing SQL databases created before May 2017 and SQL databases created through restore, geo-replication, and database copy are not encrypted by default.
## Bring Your Own Key
With Bring Your Own Key support, you can take control over your transparent data encryption keys and control who can access them and when. Key Vault, which is the Azure cloud-based external key management system, is the first key management service that transparent data encryption has integrated with for Bring Your Own Key support. With Bring Your Own Key support, the database encryption key is protected by an asymmetric key stored in Key Vault. The asymmetric key never leaves Key Vault. After the server has permissions to a Key Vault, the server sends basic key operation requests to it through Key Vault. You set the asymmetric key at the server level, and all *encrypted* databases under that server inherit it.
With Bring Your Own Key support, you control key management tasks such as key rotations and key vault permissions. You also can delete keys and enable auditing/reporting on all encryption keys. Key Vault provides central key management and uses tightly monitored hardware security modules. Key Vault promotes separation of management of keys and data to help meet regulatory compliance. To learn more about Key Vault, see the [Key Vault documentation page](https://docs.microsoft.com/azure/key-vault/key-vault-secure-your-key-vault).
To learn more about transparent data encryption with Bring Your Own Key support for Azure SQL Database, SQL Managed Instance, and Data Warehouse, see [Transparent data encryption with Bring Your Own Key support](transparent-data-encryption-byok-azure-sql.md).
To start using transparent data encryption with Bring Your Own Key support, see the how-to guide [Turn on transparent data encryption by using your own key from Key Vault by using PowerShell](transparent-data-encryption-byok-azure-sql-configure.md).
## Move a transparent data encryption-protected database
You don't need to decrypt databases for operations within Azure. The transparent data encryption settings on the source database or primary database are transparently inherited on the target. Operations that are included involve:
- Geo-restore
- Self-service point-in-time restore
- Restoration of a deleted database
- Active geo-replication
- Creation of a database copy
- Restore of backup file to Azure SQL Managed Instance
When you export a transparent data encryption-protected database, the exported content of the database isn't encrypted. This exported content is stored in un-encrypted BACPAC files. Be sure to protect the BACPAC files appropriately and enable transparent data encryption after import of the new database is finished.
For example, if the BACPAC file is exported from an on-premises SQL Server instance, the imported content of the new database isn't automatically encrypted. Likewise, if the BACPAC file is exported to an on-premises SQL Server instance, the new database also isn't automatically encrypted.
The one exception is when you export to and from a SQL database. Transparent data encryption is enabled in the new database, but the BACPAC file itself still isn't encrypted.
## Manage transparent data encryption in the Azure portal
To configure transparent data encryption through the Azure portal, you must be connected as the Azure Owner, Contributor, or SQL Security Manager.
You turn transparent data encryption on and off on the database level. To enable transparent data encryption on a database, go to the [Azure portal](https://portal.azure.com) and sign in with your Azure Administrator or Contributor account. Find the transparent data encryption settings under your user database. By default, service-managed transparent data encryption is used. A transparent data encryption certificate is automatically generated for the server that contains the database. For Azure SQL Managed Instance use T-SQL to turn transparent data encryption on and off on a database.

You set the transparent data encryption master key, also known as the transparent data encryption protector, on the server level. To use transparent data encryption with Bring Your Own Key support and protect your databases with a key from Key Vault, open the transparent data encryption settings under your server.

## Manage transparent data encryption by using PowerShell
To configure transparent data encryption through PowerShell, you must be connected as the Azure Owner, Contributor, or SQL Security Manager.
### Cmdlets for Azure SQL Database and Data Warehouse
Use the following cmdlets for Azure SQL Database and Data Warehouse:
| Cmdlet | Description |
| --- | --- |
| [Set-AzureRmSqlDatabaseTransparentDataEncryption](https://docs.microsoft.com/powershell/module/azurerm.sql/set-azurermsqldatabasetransparentdataencryption) |Enables or disables transparent data encryption for a database|
| [Get-AzureRmSqlDatabaseTransparentDataEncryption](https://docs.microsoft.com/powershell/module/azurerm.sql/get-azurermsqldatabasetransparentdataencryption) |Gets the transparent data encryption state for a database |
| [Get-AzureRmSqlDatabaseTransparentDataEncryptionActivity](https://docs.microsoft.com/powershell/module/azurerm.sql/get-azurermsqldatabasetransparentdataencryptionactivity) |Checks the encryption progress for a database |
| [Add-AzureRmSqlServerKeyVaultKey](https://docs.microsoft.com/powershell/module/azurerm.sql/add-azurermsqlserverkeyvaultkey) |Adds a Key Vault key to a SQL Server instance |
| [Get-AzureRmSqlServerKeyVaultKey](https://docs.microsoft.com/powershell/module/azurerm.sql/get-azurermsqlserverkeyvaultkey) |Gets the Key Vault keys for an Azure SQL database server |
| [Set-AzureRmSqlServerTransparentDataEncryptionProtector](https://docs.microsoft.com/powershell/module/azurerm.sql/set-azurermsqlservertransparentdataencryptionprotector) |Sets the transparent data encryption protector for a SQL Server instance |
| [Get-AzureRmSqlServerTransparentDataEncryptionProtector](https://docs.microsoft.com/powershell/module/azurerm.sql/get-azurermsqlservertransparentdataencryptionprotector) |Gets the transparent data encryption protector |
| [Remove-AzureRmSqlServerKeyVaultKey](https://docs.microsoft.com/powershell/module/azurerm.sql/remove-azurermsqlserverkeyvaultkey) |Removes a Key Vault key from a SQL Server instance |
| | |
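For example, enabling encryption on a single database and then checking its progress takes two calls (resource names below are placeholders):

```powershell
# Enable transparent data encryption on a database (names are placeholders)
Set-AzureRmSqlDatabaseTransparentDataEncryption `
    -ResourceGroupName "MyResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "MyDatabase" `
    -State "Enabled"

# Check the encryption progress for the database
Get-AzureRmSqlDatabaseTransparentDataEncryptionActivity `
    -ResourceGroupName "MyResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "MyDatabase"
```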
> [!IMPORTANT]
> For Azure SQL Managed Instance, use the T-SQL [ALTER DATABASE](https://docs.microsoft.com/sql/t-sql/statements/alter-database-azure-sql-database) command to turn transparent data encryption on and off on a database level, and check [sample PowerShell script](transparent-data-encryption-byok-azure-sql-configure.md) to manage transparent data encryption on an instance level.
## Manage transparent data encryption by using Transact-SQL
Connect to the database by using a login that is an administrator or member of the **dbmanager** role in the master database.
| Command | Description |
| --- | --- |
| [ALTER DATABASE (Azure SQL Database)](https://docs.microsoft.com/sql/t-sql/statements/alter-database-azure-sql-database) | SET ENCRYPTION ON/OFF encrypts or decrypts a database |
| [sys.dm_database_encryption_keys](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/sys-dm-database-encryption-keys-transact-sql) |Returns information about the encryption state of a database and its associated database encryption keys |
| [sys.dm_pdw_nodes_database_encryption_keys](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-nodes-database-encryption-keys-transact-sql) |Returns information about the encryption state of each data warehouse node and its associated database encryption keys |
| | |
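For example, from a connection to the server (the database name is a placeholder):

```sql
-- Turn on encryption for a database
ALTER DATABASE [MyDatabase] SET ENCRYPTION ON;

-- Check the encryption state (run in the target database)
SELECT database_id, encryption_state
FROM sys.dm_database_encryption_keys;
```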
You can't switch the transparent data encryption protector to a key from Key Vault by using Transact-SQL. Use PowerShell or the Azure portal.
## Manage transparent data encryption by using the REST API
To configure transparent data encryption through the REST API, you must be connected as the Azure Owner, Contributor, or SQL Security Manager.
Use the following set of commands for Azure SQL Database and Data Warehouse:
| Command | Description |
| --- | --- |
|[Create Or Update Server](https://docs.microsoft.com/rest/api/sql/servers/createorupdate)|Adds an Azure Active Directory identity to a SQL Server instance (used to grant access to Key Vault)|
|[Create Or Update Server Key](https://docs.microsoft.com/rest/api/sql/serverkeys/createorupdate)|Adds a Key Vault key to a SQL Server instance|
|[Delete Server Key](https://docs.microsoft.com/rest/api/sql/serverkeys/delete)|Removes a Key Vault key from a SQL Server instance|
|[Get Server Keys](https://docs.microsoft.com/rest/api/sql/serverkeys/get)|Gets a specific Key Vault key from a SQL Server instance|
|[List Server Keys By Server](https://docs.microsoft.com/rest/api/sql/serverkeys/listbyserver)|Gets the Key Vault keys for a SQL Server instance |
|[Create Or Update Encryption Protector](https://docs.microsoft.com/rest/api/sql/encryptionprotectors/createorupdate)|Sets the transparent data encryption protector for a SQL Server instance|
|[Get Encryption Protector](https://docs.microsoft.com/rest/api/sql/encryptionprotectors/get)|Gets the transparent data encryption protector for a SQL Server instance|
|[List Encryption Protectors By Server](https://docs.microsoft.com/rest/api/sql/encryptionprotectors/listbyserver)|Gets the transparent data encryption protectors for a SQL Server instance |
|[Create Or Update Transparent Data Encryption Configuration](https://docs.microsoft.com/rest/api/sql/transparentdataencryptions/createorupdate)|Enables or disables transparent data encryption for a database|
|[Get Transparent Data Encryption Configuration](https://docs.microsoft.com/rest/api/sql/transparentdataencryptions/get)|Gets the transparent data encryption configuration for a database|
|[List Transparent Data Encryption Configuration Results](https://docs.microsoft.com/rest/api/sql/transparentdataencryptionactivities/listbyconfiguration)|Gets the encryption result for a database|
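As an illustration, checking a database's transparent data encryption configuration is a single GET against the management endpoint (the api-version shown here is an assumption; use the version current for your environment):

```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/transparentDataEncryption/current?api-version=2014-04-01
Authorization: Bearer {token}
```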
## Next steps
- For a general description of transparent data encryption, see [Transparent data encryption](https://docs.microsoft.com/sql/relational-databases/security/encryption/transparent-data-encryption).
- To learn more about transparent data encryption with Bring Your Own Key support for Azure SQL Database, Azure SQL Managed Instance and Data Warehouse, see [Transparent data encryption with Bring Your Own Key support](transparent-data-encryption-byok-azure-sql.md).
- To start using transparent data encryption with Bring Your Own Key support, see the how-to guide [Turn on transparent data encryption by using your own key from Key Vault by using PowerShell](transparent-data-encryption-byok-azure-sql-configure.md).
- For more information about Key Vault, see the [Key Vault documentation page](https://docs.microsoft.com/azure/key-vault/key-vault-secure-your-key-vault).
Row stats: avg_line_length 110.649635 | max_line_length 720 | alphanum_fraction 0.814961 | lid eng_Latn | lid_prob 0.953543
Row: hexsha 535b71a02500c9d50604fa86f973ef1e7260af29 | path README.md | repo manginkr/smp-web-app | head 171117f5b02606b941c5fa04bbe493f29e772476 | size 11,611 | ext md | lang Markdown | licenses ["MIT"] | stars null | issues null | forks 1

# Koa Sample App (handlebars templating + RESTful API using MySQL, on Node.js)
This is the result of a self-learning exercise on how to put together a complete Node.js
MySQL-driven [Koa](http://koajs.com) app.
When I started with Node.js (using Express), I found plenty of tutorials & examples on individual
elements, but found it hard to stitch everything together; this was even more true with Koa. Being
new to Node / Express / Koa, I found that understanding came very much by assembling *all* the
different bits together.
While the Koa ‘[hello world](http://koajs.com/#application)’ certainly doesn’t flatter to deceive,
there’s obviously a long way to go after it. This does some of that. It is a template not a
mini-tutorial. It puts a lot of the components of a complete system together: neither mini nor
tutorial!
Having worked it all out, this is now largely a ‘note-to-self’ *aide-memoire*, but I hope it might
help others in a similar situation. No one else will be doing all the same things, but seeing
components operating together may be helpful.
It is of course simplistic, but unlike many tutorials, it assembles together many of the components
of a complete system: in this case, basic interactive tools for viewing, adding, editing, and
deleting ([CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete)), with *passport*
login/authentication, and a matching
[REST](http://en.wikipedia.org/wiki/Representational_state_transfer)ful API to do the same (using
basic access authentication). Many systems may not require an API, but the API app can be used for
RESTful ajax functions (illustrated in the edit member/team pages). Of course, real systems do much
more, but generally build on these core functions.
The database includes a couple of tables with related data and referential integrity – one step
beyond where most tutorials go. Hence code is included for handling basic validation and referential
integrity errors returned from the database.
Otherwise I’ve stripped it down to essentials. There’s no pretty styling! There’s no great UI. Just
the bare building-blocks.
There are also sample integration/acceptance tests using mocha / supertest / chai / cheerio.
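A test then looks something like this (a sketch only, assuming app.js exports the koa app; the route, host header, and selector are illustrative):

```javascript
const supertest = require('supertest');
const cheerio = require('cheerio');
const expect = require('chai').expect;

const app = require('../app.js'); // assumes app.js exports the koa app
const request = supertest.agent(app.listen());

describe('admin app', function() {
    it('renders the login page', function(done) {
        request.get('/login').set('Host', 'admin.localhost').expect(200).end(function(err, res) {
            if (err) return done(err);
            const $ = cheerio.load(res.text); // parse the returned HTML
            expect($('input[name=username]').length).to.equal(1); // assumed field name
            done();
        });
    });
});
```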
I don’t have time to put together a full tutorial, but I’ve tried to make everything clear & well
structured, and I’ve liberally commented the code.
## Design choices
For all that it’s rather bleeding-edge, Koa makes development far simpler than classic
callback-style Node.js with Express. *Yield* may be just a stepping-stone to ES7 *await*, but it
works. JavaScript may be looked down on ([misunderstood?](http://davidwalsh.name/javascript-objects))
in some quarters, but I do find it vastly better to work with than PHP :)
With Node.js v4 (stable LTS) out, I believe Koa can now be
considered[*](http://hueniverse.com/2015/03/02/the-node-version-dilemma) for production use.
The app is built with a modular approach. There are three (*composed*) sub-apps: the bare bones of a
public site, a web-based password-protected admin system using handlebars-templated html pages, and
a REST API. Each of these is structured in a modular fashion; mostly each admin module has JavaScript
handlers to handle GET and POST requests, and a set of handlebars templates; each API module has
JavaScript handlers for GET, POST, PATCH, DELETE requests.
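As an illustration, the top-level app.js can dispatch each request to the sub-app matching its subdomain. This is a sketch only, in the koa 1.x generator style; the `koa-compose` wiring and the sub-app exports are assumptions, not necessarily the repo's exact code:

```javascript
const koa = require('koa');           // koa 1.x
const compose = require('koa-compose');

const app = koa();

// each sub-app is itself a koa app; compose its middleware once at startup
const appAdmin = compose(require('./apps/admin/app-admin.js').middleware);
const appApi = compose(require('./apps/api/app-api.js').middleware);
const appWww = compose(require('./apps/www/app-www.js').middleware);

app.use(function* composeSubapp() {
    // this.subdomains is e.g. ['admin'] for admin.localhost
    switch (this.subdomains[0]) {
        case 'admin': yield* appAdmin.call(this); break;
        case 'api':   yield* appApi.call(this);   break;
        default:      yield* appWww.call(this);   break;
    }
});

app.listen(process.env.PORT || 3000);
```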
The highly-structured applications I work on require ACID SQL databases with referential integrity,
so MongoDB was out for me. MySQL and PostgreSQL should be pretty similar, but PostgreSQL is not yet
so well supported for Koa. (Actually, one of my first Koa applications is using MongoDB/monk; there
are few changes – the models replicate monk functions, but add functionality to those functions
which update the database).
For some people, a full JavaScript framework will work better. If you’re happy to plan out your own
preferred structure, designing your own patterns means one less component to learn / conform to.
There’s always things others would do differently. If you have better ways of doing things, it will
either be down to my preferences, or my ignorance! If you notice anything I’ve got wrong or have
real improvements, let me know.
### Admin app
The admin app just does basic adding/editing/deleting. Any real app will do much more, but will
generally include and build on these basics.
I find [Handlebars](http://handlebarsjs.com/) offers a good minimal-logic templates (mustache is too
limiting, but I like to work with HTML).
The main admin/app.js sets up the database connection, handlebars templating, passport
authentication, 404/500 handling, etc.
I’ve divided the app into routes, matching handlers/controllers, and a set of templates. The
handlers have one function for each method/route, and either render a view, redirect (e.g. after
POST), or throw an error.
### API
The API returns JSON or XML (or plain text) according to the *Accepts* request header.
The main api/app.js sets up the database connection, content negotiation, passport authentication,
and 404/500 handling.
Routes are grouped into members, teams, team membership, and authentication. All but the simplest of
these then go on to call related handlers.
The api/members.js and api/teams.js then handle the API requests. I use PATCH in preference to PUT so
that a subset of entity fields can be supplied (correctly, a PUT will set unsupplied fields to null);
otherwise everything is very straightforward REST API, hopefully all following best practice.
### Models
Models sit above the sub-apps, as they are used by both the admin app and the API.
I use light-weight models just to manage all operations which modify stored data in the database;
individual handlers are responsible for obtaining data they require to render their templates (using
SQL queries).
### Dependencies
While very basic, this sample app incorporates together many of the components of a real application;
as well as handlebars templates & MySQL, there’s static file serving, body-parser for post data,
compression, *passport* logins with remember-me, logging, flash messages, etc, and mocha/chai/cheerio
for testing (I’ve ignored i18n which would introduce considerable complexity).
Note that if you wish to set this up locally, you will need `admin.`, `api.`, and `www.` subdomains
available. To do this, add a line such as `127.0.0.1 www.localhost api.localhost admin.localhost`
to `/etc/hosts` (on Unix/Mac), or `\Windows\System32\drivers\etc\hosts` (on Windows). The app will then
be available at www.localhost:3000.
It uses the database set out below, with connection details as per `/config/db-development.json`.
Either Node.js v4+ or io.js is required as Node.js v0.12 doesn’t support template strings.
### Demo
There is a running version of the app at [koa-sample-app.movable-type.co.uk](http://koa-sample-app.movable-type.co.uk).
## File structure
```
.
├── apps
│ ├── admin
│ │ ├── handlers
│ │ │ ├── login.js
│ │ │ ├── members.js
│ │ │ └── teams.js
│ │ ├── routes
│ │ │ ├── ajax-routes.js
│ │ │ ├── index-routes.js
│ │ │ ├── login-routes.js
│ │ │ ├── logs-routes.js
│ │ │ ├── members-routes.js
│ │ │ └── teams-routes.js
│ │ ├── templates
│ │ │ ├── partials
│ │ │ │ ├── errpartial.html
│ │ │ │ └── navpartial.html
│ │ │ ├── 400-bad-request.html
│ │ │ ├── 404-not-found.html
│ │ │ ├── 500-internal-server-error.html
│ │ │ ├── index.html
│ │ │ ├── login.html
│ │ │ ├── logs.html
│ │ │ ├── members-add.html
│ │ │ ├── members-delete.html
│ │ │ ├── members-edit.html
│ │ │ ├── members-list.html
│ │ │ ├── members-view.html
│ │ │ ├── teams-add.html
│ │ │ ├── teams-delete.html
│ │ │ ├── teams-edit.html
│ │ │ ├── teams-list.html
│ │ │ └── teams-view.html
│ │ ├── app-admin.js
│ │ └── passport.js
│ ├── api
│ │ ├── app-api.js
│ │ ├── members.js
│ │ ├── routes-auth.js
│ │ ├── routes-members.js
│ │ ├── routes-root.js
│ │ ├── routes-team-members.js
│ │ ├── routes-teams.js
│ │ ├── team-members.js
│ │ ├── teams.js
│ │ └── validate.js
│ └── www
│ ├── templates
│ │ ├── 404-not-found.html
│ │ ├── 500-internal-server-error.html
│ │ ├── contact.html
│ │ ├── index.html
│ │ └── navpartial.html
│ ├── app-www.js
│ ├── handlers-www.js
│ └── routes-www.js
├── config
│ ├── db-development.json
│ └── db-production.json
├── lib
│ └── lib.js
├── logs
├── models
│ ├── member.js
│ ├── modelerror.js
│ ├── team.js
│ ├── team-member.js
│ └── user.js
├── public
│ └── css
│ ├── admin.css
│ ├── base.css
│ └── www.css
├── test
│ ├── admin.js
│ └── api.js
├─ app.js
├─ LICENSE
├─ package.json
└─ README.md
```
I originally structured this in a modular fashion as suggested by [TJ](https://vimeo.com/56166857),
but I’ve since found it more convenient to work with a flatter structure (heresy!) as I found it
unproductive to be constantly expanding and contracting folders. Go with what works for you.
## Database schema
```sql
-- Schema for ‘koa-sample-web-app-api-mysql’ app
create table Member (
MemberId integer unsigned not null auto_increment,
Firstname text,
Lastname text,
Email text not null,
primary key (MemberId),
unique key Email (Email(24))
) engine=InnoDB charset=utf8 auto_increment=100001;
create table Team (
TeamId integer unsigned not null auto_increment,
Name text not null,
primary key (TeamId)
) engine=InnoDB charset=utf8 auto_increment=100001;
create table TeamMember (
TeamMemberId integer unsigned not null auto_increment,
MemberId integer unsigned not null,
TeamId integer unsigned not null,
JoinedOn date not null,
primary key (TeamMemberId),
key MemberId (MemberId),
key TeamId (TeamId),
unique key TeamMember (MemberId,TeamId),
constraint Fk_Team_TeamMember foreign key (TeamId) references Team (TeamId),
constraint Fk_Member_TeamMember foreign key (MemberId) references Member (MemberId)
) engine=InnoDB charset=utf8 auto_increment=100001;
create table User (
UserId integer unsigned not null auto_increment,
Firstname text,
Lastname text,
Email text not null,
Password text,
ApiToken text,
Role text,
primary key (UserId),
unique key Email (Email(24))
) engine=InnoDB charset=utf8 auto_increment=100001;
```
## Test data
```sql
-- Test data for ‘koa-sample-web-app-api-mysql’ app
INSERT INTO Member VALUES
(100001,'Juan Manuel','Fangio','[email protected]'),
(100002,'Ayrton','Senna','[email protected]'),
(100003,'Michael','Schumacher','[email protected]'),
(100004,'Lewis','Hamilton','[email protected]');
INSERT INTO Team VALUES
(100001,'Ferrari'),
(100002,'Mercedes'),
(100003,'McLaren');
INSERT INTO TeamMember VALUES
(100001,100001,100001,'1956-01-22'),
(100002,100001,100002,'1954-01-17'),
(100003,100002,100003,'1988-04-03'),
(100004,100003,100001,'1996-03-10'),
(100005,100003,100002,'2010-03-14'),
(100006,100004,100002,'2007-03-18'),
(100007,100004,100003,'2013-03-17');
INSERT INTO User VALUES
(100001,'Guest','User','[email protected]','$2a$12$G5op7sX70HUXfFbI8tPQuuhnWz4bwqbWQeIN9KFyklH5OhLgQbnU6',null,'guest'),
(100002,'Admin','User','[email protected]','$2a$12$jEG0N4wNwuc20WQxN1VzduijVnlzLgBNn2N6Uq1pNjN45VhUyNf4W',null,'admin');
```
The full sample app is around 1,000 lines of JavaScript.
| 39.359322 | 119 | 0.705882 | eng_Latn | 0.969221 |
Row: hexsha 535bdb198029de09d264dc20a1ba76d42926ad07 | path articles/guides/developer/azure-developer-guide.md | repo klmnden/azure-docs.tr-tr | head 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | size 34,387 | ext md | lang Markdown | licenses ["CC-BY-4.0", "MIT"] | stars 2 | issues null | forks null

---
title: Get started guide for Azure developers | Microsoft Docs
description: This topic provides essential information for developers who are looking to get started using the Microsoft Azure platform for their development needs.
services: ''
cloud: ''
documentationcenter: ''
author: ggailey777
manager: erikre
ms.assetid: ''
ms.service: azure
ms.workload: na
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 10/18/2017
ms.author: glenga
ms.openlocfilehash: 99e043adeac9a43432fb1eba85527b561c477354
ms.sourcegitcommit: d4dfbc34a1f03488e1b7bc5e711a11b72c717ada
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 06/13/2019
ms.locfileid: "64570558"
---
# <a name="get-started-guide-for-azure-developers"></a>Get started guide for Azure developers
## <a name="what-is-azure"></a>What is Azure?
Azure is a complete cloud platform that can host your existing applications and streamline the development of new applications; it can even enhance on-premises applications. Azure integrates the cloud services that you need to develop, test, deploy, and manage your applications, all while taking advantage of the efficiencies of cloud computing.
By hosting your applications in Azure, you can start small and easily scale your application as your customer demand grows. Azure also offers the reliability that's needed for high-availability applications, even including failover between different regions. The [Azure portal](https://portal.azure.com) lets you easily manage all your Azure services. You can also manage your services programmatically by using service-specific APIs and templates.
**Who should read this**: This guide is an introduction to the Azure platform for application developers. It provides guidance and direction for starting to build new applications in Azure or migrating existing applications to Azure.
## <a name="where-do-i-start"></a>Where do I start?
With all the services that Azure offers, it can be an intimidating task to figure out which services you need to support your solution architecture. This section highlights the Azure services that developers commonly use. For a list of all Azure services, see the [Azure documentation](../../index.md).
First, you must decide how to host your application in Azure. Do you need to manage your entire infrastructure as a virtual machine (VM)? Can you use the platform management facilities that Azure provides? Maybe you need a serverless framework to host code execution only?
Your application needs cloud storage, for which Azure provides several options. You can take advantage of Azure's enterprise authentication. There are also tools for cloud-based development and monitoring, and most hosting services offer DevOps integration.
Now, let's look at some of the specific services that we recommend investigating for your applications.
### <a name="application-hosting"></a>Application hosting
Azure provides several cloud-based compute offerings to run your application so that you don't have to worry about the infrastructure details. You can easily scale up or scale out your resources as your application usage grows.
Azure offers services that support your application development and hosting needs. Azure provides Infrastructure as a Service (IaaS) to give you full control over your application hosting. Azure's Platform as a Service (PaaS) offerings provide the fully managed services needed to power your apps. There is even true serverless hosting in Azure where all you need to do is write your code.

#### <a name="azure-app-service"></a>Azure App Service
When you want the quickest path to publish your web-based projects, consider Azure App Service. App Service makes it easy to extend your web apps to support mobile clients and to publish easily consumed REST APIs. This platform provides authentication by using social providers, traffic-based autoscaling, testing in production, and continuous and container-based deployments.
You can create web apps, mobile app back ends, and API apps.
Because all three app types share the App Service runtime, you can host a website, support mobile clients, and expose your APIs in Azure, all from the same project or solution. To learn more about App Service, see [What is Azure Web Apps](../../app-service/overview.md).
App Service has been designed with DevOps in mind. It supports various tools for publishing and continuous integration deployments, including GitHub webhooks, Jenkins, Azure DevOps, TeamCity, and others.
You can migrate your existing applications to App Service by using the [online migration tool](https://www.migratetoazure.net/).
> **When to use**: Use App Service when you're migrating existing web applications to Azure, and when you need a fully managed hosting platform for your web apps. You can also use App Service when you need to support mobile clients or expose REST APIs with your app.
>
> **Get started**: App Service makes it easy to create and deploy your first [web app](../../app-service/app-service-web-get-started-dotnet.md), [mobile app](../../app-service-mobile/app-service-mobile-ios-get-started.md), or [API app](../../app-service/app-service-web-tutorial-rest-api.md).
>
> **Try it now**: App Service lets you provision a short-lived app to try the platform without having to sign up for an Azure account. Try the platform and [create your Azure App Service app](https://tryappservice.azure.com/).
#### <a name="azure-virtual-machines"></a>Azure Virtual Machines
As an Infrastructure as a Service (IaaS) provider, Azure lets you deploy to or migrate your application to either Windows or Linux VMs. Together with Azure Virtual Network, Azure Virtual Machines supports the deployment of Windows or Linux VMs to Azure. With VMs, you have total control over the configuration of the machine. When you use VMs, you're responsible for all server software installation, configuration, maintenance, and operating system patches.
Because of the level of control that you have with VMs, you can run a wide range of server workloads on Azure that don't fit into a PaaS model. These workloads include database servers, Windows Server Active Directory, and Microsoft SharePoint. For more information, see the Virtual Machines documentation for either [Linux](/azure/virtual-machines/linux/) or [Windows](/azure/virtual-machines/windows/).
> **When to use**: Use Virtual Machines when you want full control over your application infrastructure or when you need to migrate on-premises application workloads to Azure without having to make changes.
>
> **Get started**: Create a [Linux VM](../../virtual-machines/virtual-machines-linux-quick-create-portal.md) or [Windows VM](../../virtual-machines/virtual-machines-windows-hero-tutorial.md) from the Azure portal.
#### <a name="azure-functions-serverless"></a>Azure Functions (serverless)
Rather than worrying about building out and managing a whole application or the infrastructure to run your code, what if you could just write your code and have it run in response to events or on a schedule? [Azure Functions](../../azure-functions/functions-overview.md) is a "serverless"-style offering that lets you write just the code you need. With Functions, code execution is triggered by HTTP requests, webhooks, cloud service events, or on a schedule. You can code in your development language of choice, such as C\#, F\#, Node.js, Python, or PHP. With consumption-based billing, you pay only for the time that your code executes, and Azure scales as needed.
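For example, a minimal HTTP-triggered function in C# script form looks something like this (illustrative only, based on the Functions v1 programming model this article dates from):

```csharp
// run.csx: respond to an HTTP request
using System.Net;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");
    return req.CreateResponse(HttpStatusCode.OK, "Hello from Azure Functions!");
}
```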
> **When to use**: Use Azure Functions when you have code that is triggered by other Azure services, by web-based events, or on a schedule. You can also use Functions when you don't need the overhead of a complete hosted project, or when you only want to pay for the time that your code runs. To learn more, see the [Azure Functions overview](../../azure-functions/functions-overview.md).
>
> **Get started**: Follow the Functions quickstart tutorial to [create your first function](../../azure-functions/functions-create-first-azure-function.md) from the portal.
>
> **Try it now**: Azure Functions lets you run your code without having to sign up for an Azure account. Try it now and [create your first Azure Function](https://tryappservice.azure.com/).
#### <a name="azure-service-fabric"></a>Azure Service Fabric
Azure Service Fabric, derleme, paketleme, dağıtma ve ölçeklenebilir ve güvenilir mikro Hizmetleri kolay bir dağıtılmış sistemler platformudur. Kapsamlı uygulama yönetim özellikleri de sağlar sağlama, dağıtma, izleme, yükseltmek/düzeltme eki uygulama için ve dağıtılan uygulamalar siliniyor. Paylaşılan makine havuzu üzerinde çalışan, uygulamalar küçükten başlayabilir ve gerektiği gibi yüzlerce veya binlerce makineyi ölçeklendirin.
Service Fabric .NET (OWIN) ve ASP.NET Core için açık Web arabirimi ile Webapı'yi destekler. Bu, Linux'ta .NET Core hem de Java hizmetler oluşturmaya yönelik SDK'lar sağlar. Service Fabric hakkında daha fazla bilgi için bkz: [Service Fabric belgeleri](https://docs.microsoft.com/azure/service-fabric/).
> **Ne zaman kullanılır:** Uygulama oluşturma ya da bir mikro hizmet mimarisi kullanan mevcut bir uygulamayı yeniden yazma Service Fabric iyi bir seçimdir. Service Fabric, daha fazla denetime veya doğrudan erişim için temel altyapıyı ihtiyacınız olduğunda kullanın.
>
> **Başlarken:** [İlk Azure Service Fabric uygulamanızı oluşturma](../../service-fabric/service-fabric-create-your-first-application-in-visual-studio.md).
### <a name="enhance-your-applications-with-azure-services"></a>Uygulamalarınızı Azure hizmetleriyle geliştirin
Uygulama barındırma ek olarak, Azure işlevleri, geliştirme ve Bakım uygulamalarınızın, hem bulutta ve şirket içi iyileştirebilecek hizmet teklifleri sağlar.
#### <a name="hosted-storage-and-data-access"></a>Barındırılan depolama ve veri erişimi
Çoğu uygulama verileri, bu nedenle depolaması gereken nasıl uygulamanızı azure'da barındırmak karar bağımsız olarak, bir veya daha fazla aşağıdaki depolama ve Veri Hizmetleri düşünün.
- **Azure Cosmos DB**: Dilediğiniz sayıda coğrafi bölgede kapsamlı bir SLA ile aktarım hızını ve depolamayı esnek bir şekilde ölçeklendirmenize olanak sağlayan bir Global olarak dağıtılmış çok modelli veritabanı hizmeti.
> **Ne zaman kullanılır:** Uygulamanızı belge, tablo veya grafik veritabanları birden çok sayıda iyi tanımlanmış tutarlılık modeli ile bir MongoDB veritabanları dahil olmak üzere, gerektiğinde.
>
> **Başlama**: [Bir Azure Cosmos DB web uygulaması derleme](../../cosmos-db/create-sql-api-dotnet.md). MongoDB geliştiricisiyseniz bkz [Azure Cosmos DB ile MongoDB uygulaması oluşturma](../../cosmos-db/create-mongodb-dotnet.md).
- **Azure depolama**: Bloblar, kuyruklar, dosyaları ve diğer ilişkisel olmayan veri türlerinin dayanıklı, yüksek oranda kullanılabilir depolama sağlar. Depolama, sanal makineler için depolama temeli sağlar.
> **Ne zaman kullanılacağı**: Ne zaman uygulamanızı anahtar-değer çiftleri (tablolar), BLOB, dosya paylaşımlarını veya iletileri (kuyruklar) gibi ilişkisel olmayan verileri depolar.
>
> **Başlama**: Bu tür depolama birini seçin: [blobları](../../storage/blobs/storage-dotnet-how-to-use-blobs.md), [tabloları](../../cosmos-db/table-storage-how-to-use-dotnet.md), [kuyrukları](../../storage/queues/storage-dotnet-how-to-use-queues.md), veya [dosyaları](../../storage/files/storage-dotnet-how-to-use-files.md).
- **Azure SQL veritabanı**: Azure tabanlı sürümü ilişkisel tablo verilerini bulutta depolamak için Microsoft SQL Server altyapısı. SQL veritabanı, tahmin edilebilir performans, ölçeklenebilirlik hiç kapalı kalma süresi, iş sürekliliği ve veri koruması sağlar.
> **Ne zaman kullanılacağı**: Uygulamanızın veri depolama ile işlem başvurusal bütünlük gerektirdiğinde desteklemek ve TSQL sorgularını destekler.
>
> **Başlama**: [Azure portalını kullanarak dakikalar içinde SQL veritabanı oluşturma](../../sql-database/sql-database-get-started.md).
Kullanabileceğiniz [Azure Data Factory](../../data-factory/introduction.md) mevcut şirket içi verileri azure'a taşımak için. Verileri buluta taşımaya hazır değilseniz [karma bağlantılar](../../biztalk-services/integration-hybrid-connection-overview.md) bağlandığınız BizTalk Hizmetleri olanak tanır, App Service uygulaması şirket içi kaynaklara barındırılan. Ayrıca, şirket içi uygulamalarınızı Hizmetleri Azure veri ve depolama birimine bağlanabilirsiniz.
#### <a name="docker-support"></a>Docker desteği
Docker kapsayıcıları, işletim sistemi sanallaştırma, bir form, uygulamaların daha verimli ve öngörülebilir bir şekilde dağıtmanızı sağlar. Buna kapsayıcılı bir uygulama üretimde aynı şekilde sistemlerde, geliştirme ve test gibi çalışır. Standart Docker araçları kullanarak kapsayıcıları yönetebilirsiniz. Azure üzerinde kapsayıcı tabanlı uygulamaları dağıtmak ve yönetmek için mevcut becerilerini ve popüler açık kaynak Araçları'nı kullanabilirsiniz.
Azure kapsayıcılar uygulamalarınızda kullanmak için çeşitli yollar sunar.
- **Azure Docker VM uzantısı**: Sanal makinenize bir Docker konağı olarak görev yapacak Docker araçları ile yapılandırmanıza olanak sağlar.
> **Ne zaman kullanılacağı**: Bir VM'de uygulamalarınız için tutarlı kapsayıcı dağıtımı oluşturmak istediğinizde ya da kullanmak istediğiniz [Docker Compose](https://docs.docker.com/compose/overview/).
>
> **Başlama**: [Docker VM uzantısını kullanarak Azure'da bir Docker ortamında oluşturma](../../virtual-machines/virtual-machines-linux-dockerextension.md).
- **Azure Container Service'i**: Oluşturma, yapılandırma ve kapsayıcılı uygulamaları çalıştırmak için önceden yapılandırılmış sanal makine kümesi yönetmenize olanak tanır. Container Service hakkında daha fazla bilgi için bkz. [Azure Container Service'e Giriş](../../container-service/container-service-intro.md).
> **Ne zaman kullanılacağı**: Ek planlama sağlayan yapı üretime hazır ve ölçeklenebilir ortamları ve yönetim araçları veya Docker Swarm kümesi dağıtırken gerektiğinde.
>
> **Başlama**: [Bir kapsayıcı hizmeti kümesini dağıtma](../../container-service/dcos-swarm/container-service-deployment.md).
- **Docker makinesi**: Yükleme ve docker-machine komutlarını kullanarak bir Docker altyapısına sanal konaklar yönetmenize olanak sağlar.
>**Ne zaman kullanılacağı**: Ne zaman tek bir Docker konağı oluşturarak hızlı bir şekilde prototip için uygulama gerekir.
- **App Service için özel Docker görüntüsü**: Linux üzerinde web uygulaması dağıttığınızda Docker kapsayıcılarını bir kapsayıcı kayıt defterinden ya da müşteri kapsayıcı kullanmanıza olanak sağlar.
> **Ne zaman kullanılacağı**: Linux üzerinde web uygulaması için bir Docker görüntüsü dağıtırken.
>
> **Başlama**: [Linux üzerinde App Service'te özel bir Docker görüntüsü kullanma](../../app-service/containers/quickstart-docker-go.md).
### <a name="authentication"></a>Kimlik Doğrulaması
Uygulamalarınızı kullanan yalnızca bilmek ancak kaynaklarınıza yetkisiz erişimi önlemek için önemlidir. Azure, uygulama istemcilerin kimliğini doğrulamak için çeşitli yollar sunar.
- **Azure Active Directory (Azure AD)** : Microsoft çok kiracılı, bulut tabanlı kimlik ve erişim yönetimi hizmeti. Azure AD ile tümleştirdiğinizde, çoklu oturum açma (SSO), uygulamalarınıza ekleyebilirsiniz. Dizin özellikleri, doğrudan Azure AD Graph API'si veya Microsoft Graph API'sini kullanarak erişebilirsiniz. OAuth2.0 yetkilendirme framework ve Open ID Connect desteği Azure AD ile yerel HTTP/REST uç noktaları ve çok platformlu Azure AD kimlik doğrulama kitaplıkları kullanarak tümleştirebilirsiniz.
> **Ne zaman kullanılacağı**: SSO bir deneyim sağlamak istediğinizde, grafik tabanlı verilerle çalışmak veya kullanıcıların etki alanı tabanlı kimlik doğrulaması.
>
> **Başlama**: Daha fazla bilgi için bkz. [Azure Active Directory Geliştirici Kılavuzu](../../active-directory/develop/v1-overview.md).
- **App Service kimlik doğrulaması**: Uygulamanızı barındırmak için App Service'ı seçtiğinizde, sosyal kimlik sağlayıcıları ile birlikte Azure AD için yerleşik kimlik doğrulama desteği ayrıca Al — Facebook, Google, Microsoft ve Twitter gibi.
> **Ne zaman kullanılacağı**: Azure AD kullanarak bir App Service uygulamasında kimlik doğrulamasını etkinleştirmek istediğinizde sosyal kimlik sağlayıcıları ya da her ikisini de.
>
> **Başlama**: App Service kimlik doğrulaması hakkında daha fazla bilgi için bkz: [kimlik doğrulama ve yetkilendirme Azure App Service'te](../../app-service/overview-authentication-authorization.md).
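As an illustration, the following sketch acquires an app-only access token with the Microsoft Authentication Library (MSAL) for Python; the client ID, secret, and tenant ID are placeholders for an app registration you would create in Azure AD:
```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
# Request a token for Microsoft Graph using the client-credentials flow.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" in result:
    token = result["access_token"]  # send as a Bearer token on Graph requests
else:
    print(result.get("error_description"))
```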
To learn more about security best practices in Azure, see [Azure security best practices and patterns](../../security/security-best-practices-and-patterns.md).
### <a name="monitoring"></a>İzleme
Uygulamanızı ayarlama ve Azure'da çalışan ile performansı izlemek için sorunlarını izleyin ve müşterilerin uygulamanızı nasıl kullandığını görün. Azure izleme çeşitli seçenekler sunar.
- **Visual Studio Application Insights**: Canlı web uygulamalarınızı izleme için Visual Studio ile tümleşen Azure'da barındırılan genişletilebilir bir analiz hizmetidir. Azure üzerinde barındırılan da olup olmadığını performansı ve kullanılabilirliği, uygulamalarınızın sürekli olarak geliştirmek için gereken verileri sağlar.
>**Başlama**: İzleyin [Application Insights öğretici](../../azure-monitor/app/app-insights-overview.md).
- **Azure İzleyici**: Görselleştirin, sorgu, yol, arşiv ve Azure altyapınız ve kaynaklar tarafından oluşturulan günlükleri ve ölçümler üzerinde işlem yapmasına yardımcı olan bir hizmet. İzleyici, Azure portalında görmek ve Azure kaynakları izlemek için tek bir kaynak veri görünümleri sağlar.
>**Başlama**: [Azure İzleyici ile çalışmaya başlama](../../monitoring-and-diagnostics/monitoring-get-started.md).
### <a name="devops-integration"></a>DevOps tümleştirmesi
VM'ler sağlamayı veya sürekli tümleştirme ile web uygulamalarınızı yayımlamak ister, Azure ile birçok popüler DevOps araçlarıyla tümleşir. Jenkins, GitHub, Puppet, Chef, TeamCity, Ansible, Azure DevOps ve diğerleri gibi araçlar için destekle, zaten yüklü ve mevcut deneyiminizi en üst düzeye araçları ile çalışabilirsiniz.
> **Şimdi deneyin:** [Birkaç DevOps tümleştirmeleri'ni deneyin](https://azure.microsoft.com/try/devops/).
>
> **Başlama**: Bir App Service uygulaması DevOps seçeneklerini görmek için bkz. [Azure uygulama Hizmeti'ne sürekli dağıtım](../../app-service/deploy-continuous-deployment.md).
## <a name="azure-regions"></a>Azure bölgeleri
Azure dünyanın dört bir yanındaki birçok bölgede genel kullanıma açık olan bir genel bulut platformudur. Bir hizmet, uygulama veya azure'da VM sağladığınızda, uygulamanızın çalıştığı veya verilerinizin depolandığı, belirli bir veri merkezini temsil eden bir bölge seçin istenir. Bu bölgeler yayımlanan belirli konumlara karşılık [Azure bölgeleri](https://azure.microsoft.com/regions/) sayfası.
### <a name="choose-the-best-region-for-your-application-and-data"></a>Uygulama ve verileriniz için en iyi bir bölge seçin
Azure'ı kullanmanın avantajları, dünyanın çeşitli veri merkezleri, uygulamalarınızı dağıtmadan biridir. Seçtiğiniz bölge, uygulamanızın performansını etkileyebilir. Örneğin, daha iyi ağ istek gecikme süresini azaltmak için müşterilerinize en yakın bir bölge seçin. Belirli ülkelerde/bölgelerde uygulamanızı dağıtmak için yasal gereksinimleri karşılamak için bölgenizi seçin isteyebilirsiniz. Aynı veri merkezinde veya bir veri merkezinde uygulamanızı barındıran bir veri merkezine mümkün olduğunca uygulama verilerini depolamak için her zaman en iyi bir uygulamadır.
### <a name="multi-region-apps"></a>Çok bölgeli uygulama
Olası olsa da, bu Internet hatası veya doğal afetler gibi bir olay nedeniyle çevrimdışına veri merkezinin tamamı için mümkün değildir. Birden fazla veri merkezinde konak önemli iş uygulamalarının en yüksek kullanılabilirlik sağlamak için en iyi bir yöntemdir. Kullanarak birden çok bölgede de genel kullanıcılar için gecikme süresini azaltın ve uygulamaları güncelleştirme ek esneklik olanaklarını sağlar.
Sanal makine ve uygulama hizmetleri gibi bazı hizmetler kullanan [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) yüksek kullanılabilirlik Kurumsal uygulamaları desteklemek için bölgeler arasında yük devretme ile birden çok bölge desteği etkinleştirmek için. Bir örnek için bkz [Azure başvuru mimarisi: Bir web uygulaması birden çok bölgede çalıştırın](https://docs.microsoft.com/azure/architecture/reference-architectures/app-service-web-app/multi-region).
>**Ne zaman kullanılacağı**: Kurumsal ve yük devretme ve çoğaltma avantajlarından yararlanarak yüksek kullanılabilirlik uygulamaları olduğunda.
## <a name="how-do-i-manage-my-applications-and-projects"></a>Uygulamalarımı ve Projelerimi nasıl yönetebilirim?
Azure deneyimler, Azure kaynakları, uygulamaları ve projeleri oluşturmak ve yönetmek size zengin bir özellik kümesi sağlar; hem program aracılığıyla ve [Azure portalında](https://portal.azure.com/).
### <a name="command-line-interfaces-and-powershell"></a>Komut satırı arabirimi ve PowerShell
Azure, uygulamalarınızı ve hizmetlerinizi Bash, Terminal, komut istemini veya tercih ettiğiniz, komut satırı aracını kullanarak komut satırından yönetmek için iki yol sunar. Genellikle, Azure portalında olduğu gibi komut satırından aynı görevleri gerçekleştirebilirsiniz: oluşturma ve sanal makineler, sanal ağlar, web uygulamaları ve diğer hizmetleri yapılandırma gibi.
- [Azure komut satırı arabirimi (CLI)](../../xplat-cli-install.md): Bir Azure aboneliğine bağlanma ve Azure kaynaklarını komut satırından karşı çeşitli görevleri program olanak sağlar.
- [Azure PowerShell](../../powershell-install-configure.md): Windows PowerShell kullanarak Azure kaynaklarını yönetmenizi sağlayan cmdlet'ler ile bir modül kümesini sağlar.
### <a name="azure-portal"></a>Azure portal
Azure portalında oluşturmak, yönetmek ve Azure kaynaklarını ve Hizmetleri kaldırmak için kullanabileceğiniz bir web tabanlı bir uygulamadır. Azure portalında şu konumdadır <https://portal.azure.com>. Özelleştirilebilir bir pano, Azure kaynaklarını yönetmek için Araçlar içerir ve erişim için Abonelik ayarları ve fatura bilgilerini. Daha fazla bilgi için [Azure portalına genel bakış](../../azure-portal-overview.md).
### <a name="rest-apis"></a>REST API'leri
Azure REST API'leri, Azure portalı kullanıcı arabirimini destekleyen bir dizi üzerinde geliştirilmiştir. Bu REST API'lerin çoğu, program aracılığıyla sağlama ve Azure kaynaklarını ve uygulamaların Internet özellikli herhangi bir CİHAZDAN yönetme izin vermek için de desteklenir. REST API belgelerini tam kümesi için bkz: [Azure REST SDK başvurusu](https://docs.microsoft.com/rest/api/).
### <a name="apis"></a>API'ler
REST API'lerine ek olarak, çoğu Azure hizmeti Ayrıca, program aracılığıyla kaynakları uygulamalarınızdan aşağıdaki geliştirme platformları için SDK'ları dahil olmak üzere, platforma özgü Azure SDK kullanarak yönetmenize olanak tanır:
- [.NET](https://go.microsoft.com/fwlink/?linkid=834925)
- [Node.js](https://docs.microsoft.com/javascript/azure)
- [Java](https://docs.microsoft.com/java/azure)
- [PHP](https://github.com/Azure/azure-sdk-for-php/blob/master/README.md)
- [Python](https://docs.microsoft.com/python/azure)
- [Ruby](https://github.com/Azure/azure-sdk-for-ruby/blob/master/README.md)
- [Go](https://docs.microsoft.com/go/azure)
Services such as [Mobile Apps](../../app-service-mobile/app-service-mobile-dotnet-how-to-use-client-library.md) and [Azure Media Services](../../media-services/previous/media-services-dotnet-how-to-use.md) provide client-side SDKs that let you access services from web and mobile client apps.
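For example, with the newer Python management packages (`azure-identity` and `azure-mgmt-resource`, assumed here), creating a resource group takes only a few lines; the subscription ID is a placeholder:
```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
# Create (or update) a resource group in the East US region.
group = client.resource_groups.create_or_update("my-rg", {"location": "eastus"})
print(group.name, group.location)
```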
### <a name="azure-resource-manager"></a>Azure Resource Manager
Azure üzerinde büyük olasılıkla, uygulamanızı çalıştıran her biri aynı yaşam döngüsünü izleyin ve, mantıksal bir birim olarak düşünülebilir birden çok Azure Hizmetleri ile çalışmayı içerir. Örneğin, bir web uygulaması Web uygulamaları, SQL veritabanı, depolama, Azure önbelleği için Redis, kullanabilir ve Azure Content Delivery Network hizmetlerinden. [Azure Resource Manager](../../azure-resource-manager/resource-group-overview.md) bir grup olarak, uygulamanızdaki kaynaklarla çalışma sağlar. Dağıtma, güncelleştirme veya tek ve eşgüdümlü bir işlemle tüm kaynakları silin.
Mantıksal olarak gruplandırarak ve ilgili kaynakları yönetme yanı sıra Azure Resource Manager dağıtımını ve yapılandırmasını, ilgili kaynak özelleştirmenize olanak tanıyan dağıtım özellikleri içerir. Örneğin, Kaynak Yöneticisi'ni kullanarak dağıtma ve birden çok sanal makine, yük dengeleyici ve tek bir birim olarak Azure SQL veritabanı içeren bir uygulamayı yapılandırın.
Bu dağıtımlar, JSON biçimli bir belge olan bir Azure Resource Manager şablonu kullanarak geliştirin. Şablonları, bir dağıtım tanımlayın ve betikler yerine bildirim temelli şablonlar kullanarak uygulamalarınızı yönetmek olanak tanır. Şablonlarınızı test, hazırlık ve üretim gibi farklı ortamlarda da çalışabilir. Örneğin, şablonları kullanarak kod deposundaki bir dizi tek bir tıklamayla Azure Hizmetleri dağıtan bir GitHub deposuna bir düğme ekleyebilirsiniz.
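To make the idea concrete, here is a minimal sketch that deploys an inline template (a single storage account) through the Python SDK introduced above; the resource names and API version are illustrative:
```python
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2021-04-01",
        "name": "examplestorage123",  # must be globally unique
        "location": "eastus",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
    }],
}
# Deploy the template into the resource group as one coordinated operation.
client.deployments.begin_create_or_update(
    "my-rg",
    "example-deployment",
    {"properties": {"mode": "Incremental", "template": template}},
).result()
```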
> **When to use**: Use Resource Manager templates when you want a template-based deployment for your app that you can manage programmatically by using REST APIs, the Azure CLI, or Azure PowerShell.
>
> **Get started**: To get started with templates, see [Authoring Azure Resource Manager templates](../../resource-group-authoring-templates.md).
## <a name="understanding-accounts-subscriptions-and-billing"></a>Hesapları anlama, abonelik ve faturalandırma
Geliştiriciler olarak, kodun içine doğrudan içine dalmak ve çalıştırma uygulamalarımızın yapmadan ile mümkün olduğunca hızlı şekilde kullanmaya başlamak deneyin istiyoruz. Kesinlikle Azure'da mümkün olduğunca kolayca çalışmaya başlayın geçirmenizi istiyoruz. Kolay, Azure teklifleri olmasına yardımcı olmak için bir [ücretsiz deneme sürümü](https://azure.microsoft.com/free/). Bazı hizmetler bile bir "ücretsiz deneyin" işlevsellik gibi sahip [Azure App Service](https://tryappservice.azure.com/), hangi gerektirmez, hatta bir hesap oluşturabilirsiniz. Kodlama ve uygulamanızı azure'a dağıtmak derinlerine olarak eğlenceli ayrıca Azure'nın bir kullanıcı hesapları, abonelikleri ve faturalandırma açısından nasıl çalıştığını anlamak için biraz zaman alabilir önemlidir.
### <a name="what-is-an-azure-account"></a>Bir Azure hesabı nedir?
Oluşturun veya bir Azure aboneliği ile çalışmak için bir Azure hesabınız olmalıdır. Bir Azure hesabı, Azure AD'de yalnızca bir kimlik olduğunu veya Azure AD tarafından güvenilen bir iş veya Okul kuruluş gibi bir dizinde. Böyle bir kuruluşa ait olmayan, Microsoft, Azure AD tarafından güvenilen Account kullanarak, her zaman bir aboneliği oluşturabilirsiniz. Şirket içi Windows Server Active Directory, Azure AD ile tümleştirme hakkında daha fazla bilgi için bkz: [şirket içi kimliklerinizi Azure Active Directory ile tümleştirme](../../active-directory/hybrid/whatis-hybrid-identity.md).
Her Azure aboneliği bir Azure AD örneğiyle güven ilişkisine sahiptir. Bu; Azure aboneliğinin kullanıcılar, hizmetler ve cihazlar için kimlik doğrulaması yapmak üzere bu dizine güvendiği anlamına gelir. Birden çok abonelik aynı dizine güvenebilir ancak bir abonelik yalnızca bir dizine güvenir. Daha fazla bilgi için bkz. [Azure aboneliklerinin Azure Active Directory ile ilişkisi](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
Kimlikleri olarak da bilinir, tek tek Azure tanımlanmasına ek olarak hesap *kullanıcılar*, ayrıca tanımlayabilirsiniz *grupları* Azure AD'de. Kullanıcı grubu oluşturmak, rol tabanlı erişim denetimi (RBAC) kullanarak bir Abonelikteki kaynakları erişimi yönetmek için iyi bir yoludur. Grupları oluşturmayı öğrenmek için bkz: [Azure Active Directory önizlemesinde bir grup oluşturma](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). Ayrıca oluşturma ve yönetme grupları tarafından [PowerShell kullanarak](../../active-directory/users-groups-roles/groups-settings-v2-cmdlets.md).
### <a name="manage-your-subscriptions"></a>Aboneliklerinizi yönetme
Bir Azure hesabına bağlı mantıksal bir gruplandırması olan Azure hizmetlerini bir aboneliktir. Tek bir Azure hesabı, birden fazla abonelik içerebilir. Azure Hizmetleri için faturalama, abonelik başına temelinde gerçekleştirilir. Kullanılabilir abonelik teklifleri türüne göre bir listesi için bkz. [Microsoft Azure Teklif Ayrıntıları](https://azure.microsoft.com/support/legal/offer-details/). Abonelik üzerinde tam denetime sahip bir Hesap Yöneticisi ve tüm hizmetleri denetime sahip abonelikte Hizmet Yöneticisi, Azure aboneliğiniz yok. Klasik abonelik yöneticileri hakkında daha fazla bilgi için bkz: [ekleme veya değiştirme Azure aboneliği yöneticileri](../../billing/billing-add-change-azure-subscription-administrator.md). Yöneticiler ek olarak, bireysel hesaplar verilebilir ayrıntılı denetim kullanarak Azure kaynaklarınızın [rol tabanlı erişim denetimi (RBAC)](../../role-based-access-control/overview.md).
#### <a name="resource-groups"></a>Kaynak grupları
Yeni Azure hizmetlerini sağladığınızda, belirli bir abonelikte bunu yapın. Kaynaklar olarak da adlandırılan tek tek Azure hizmetlerinin bir kaynak grubu bağlamında oluşturulur. Kaynak gruplarını dağıtmak ve uygulamanızın kaynakları yönetmek kolaylaştırır. Bir kaynak grubu, bir birim olarak çalışmak istediğiniz uygulama için tüm kaynakları içermelidir. Kaynak grupları arasında ve hatta farklı Aboneliklerde, kaynakları taşıyabilirsiniz. Kaynakları taşıma hakkında bilgi edinmek için [kaynakları yeni kaynak grubuna veya aboneliğe taşıma](../../resource-group-move-resources.md).
Azure kaynak Gezgini, aboneliğinizde zaten oluşturduğunuz kaynakları görselleştirmek için harika bir araçtır. Daha fazla bilgi için bkz. [kaynakları görüntülemek ve değiştirmek için kullanım Azure kaynak Gezgini](../../resource-manager-resource-explorer.md).
#### <a name="grant-access-to-resources"></a>Kaynaklara erişim izni ver
Azure kaynaklarına erişime izin verdiğinizde, her zaman belirli bir görevi gerçekleştirmek için gereken en az ayrıcalık ile kullanıcılara sağlamak için en iyi uygulama olan.
- **Rol tabanlı erişim denetimi (RBAC)** : Azure'da, belirli bir kapsamda kullanıcı hesapları (asıl hesaplar) erişimi verebilir: Abonelik, kaynak grubu veya tek tek kaynaklar. RBAC, bir kaynak grubunda bir kaynak kümesini dağıtmak ve belirli kullanıcı veya grup için izinler sağlar. Ayrıca hedef kaynak grubuna ait kaynaklara erişimini sağlar. Ayrıca, bir sanal makine veya sanal ağ gibi tek bir kaynağa erişim izni verebilirsiniz. Erişim vermek için kullanıcı, Grup veya hizmet sorumlusu için bir rol atayın. Birçok önceden tanımlı roller vardır ve kendi özel rollerinizi de tanımlayabilirsiniz. Daha fazla bilgi için bkz. [rol tabanlı erişim denetimi (RBAC) nedir?](../../role-based-access-control/overview.md).
> **Ne zaman kullanılacağı**: Ayrıntılı erişim yönetimi, bir kullanıcı bir abonelik sahibi olmak gerektiğinde veya kullanıcılar ve gruplar için gerektiğinde.
>
> **Başlama**: Daha fazla bilgi için bkz. [RBAC ve Azure portalını kullanarak erişimini yönetme](../../role-based-access-control/role-assignments-portal.md).
- **Hizmet sorumlusu nesneleri**: Kullanıcı asıl adları ve gruplara erişim sağlamanın yanı sıra hizmet sorumlusu aynı erişim verebilirsiniz.
> **Ne zaman kullanılacağı**: Ne zaman, program aracılığıyla Azure kaynaklarını yönetmek veya uygulamalar için erişim izni verme. Daha fazla bilgi için [oluşturma Active Directory uygulaması ve hizmet sorumlusu](../../active-directory/develop/howto-create-service-principal-portal.md).
#### <a name="tags"></a>Tags
Azure Resource Manager lets you assign custom tags to individual resources. Tags, which are key-value pairs, can be helpful when you need to organize resources for billing or monitoring. Tags give you a way to track resources across multiple resource groups. You can assign tags in the portal, in an Azure Resource Manager template, or programmatically by using the REST API, the Azure CLI, or PowerShell. You can assign multiple tags to each resource. To learn more, see [Organize your Azure resources by using tags](../../resource-group-using-tags.md).
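As an illustration, applying tags programmatically with the Python management SDK used earlier might look like the following sketch (the `tags.create_or_update_at_scope` operation and the scope string are shown as assumptions based on the resource management package):
```python
scope = f"/subscriptions/{subscription_id}/resourceGroups/my-rg"
# Set key-value tags on everything identified by the scope string.
client.tags.create_or_update_at_scope(
    scope,
    {"properties": {"tags": {"environment": "production", "costCenter": "1234"}}},
)
```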
### <a name="billing"></a>Faturalandırma
Şirket içinden bulutta barındırılan hizmetlere bilgi işlem hareket izleme ve hizmet kullanımı ve ilgili maliyetleri tahmin etme önemli edilir. Yeni kaynaklar aylık olarak çalıştırmak için maliyet tahmin edebilmek önemlidir. Geçerli harcama dayalı belirli bir ay için faturalandırma nasıl görüneceğini gösteren proje olması gerekir.
#### <a name="get-resource-usage-data"></a>Kaynak kullanım verilerini al
Azure faturalandırma REST kaynak tüketimi ve Azure abonelikleri için meta veri bilgilerini erişmesini API kümesi sağlar. Bu faturalandırma API'lerini daha iyi tahmin edin ve Azure maliyetleri yönetme olanağı sağlayacak. İzleyebilir ve saatlik artışlarla yaptığı Harcamalar çözümlenmiştir harcama uyarıları oluşturma ve geçerli kullanım eğilimlere gelecek faturalandırma tahmin edin.
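For illustration only, a usage query against the Usage Aggregates endpoint of the Billing APIs might look like this sketch (the endpoint path and api-version are assumptions based on the API this section describes; timestamps must be day-aligned UTC):
```python
import requests

url = ("https://management.azure.com/subscriptions/<subscription-id>"
       "/providers/Microsoft.Commerce/UsageAggregates")
params = {
    "api-version": "2015-06-25",
    "reportedStartTime": "2021-01-01T00:00:00Z",
    "reportedEndTime": "2021-01-02T00:00:00Z",
}
resp = requests.get(url, params=params,
                    headers={"Authorization": f"Bearer {token}"})  # token from Azure AD
for item in resp.json().get("value", []):
    print(item["name"])  # one entry per metered resource, in hourly/daily buckets
```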
>**Get started**: To learn more about using the Billing APIs, see the [Azure Billing Usage and RateCard APIs overview](../../billing-usage-rate-card-overview.md).
#### <a name="predict-future-costs"></a>Gelecekteki maliyetleri tahmin edin
Önceden maliyetlerini tahmin etmek zor olsa da, Azure sahip bir [fiyatlandırma hesaplayıcısını](https://azure.microsoft.com/pricing/calculator/) dağıtılan kaynakların maliyetini tahmin ederken kullanabilirsiniz. Portal ve faturalandırma REST API'lerini faturalama dikey penceresine, geçerli tüketimini temel alarak, gelecekteki maliyetlerini tahmin etmek için de kullanabilirsiniz.
>**Başlama**: Bkz: [Azure faturalama kullanım ve RateCard API'leri genel bakış](../../billing-usage-rate-card-overview.md).
| 109.86262 | 915 | 0.811818 | tur_Latn | 0.999945 |
535c2825d285eaae385d99398bd2a91d00b0d4d4 | 862 | md | Markdown | docs/error-messages/compiler-errors-2/compiler-error-c3195.md | morra1026/cpp-docs.ko-kr | 77706f3bffb88ed46b244c46184289a3b683f661 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-10-11T07:41:28.000Z | 2021-06-29T08:27:00.000Z | docs/error-messages/compiler-errors-2/compiler-error-c3195.md | morra1026/cpp-docs.ko-kr | 77706f3bffb88ed46b244c46184289a3b683f661 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-errors-2/compiler-error-c3195.md | morra1026/cpp-docs.ko-kr | 77706f3bffb88ed46b244c46184289a3b683f661 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Compiler error C3195
ms.date: 11/04/2016
f1_keywords:
- C3195
helpviewer_keywords:
- C3195
ms.assetid: 97e4f681-812b-49e8-ba57-24b7817e3cd8
ms.openlocfilehash: 4a54a9c629a1abaa4f1c5d15d06448e82cf25561
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 10/31/2018
ms.locfileid: "50538855"
---
# <a name="compiler-error-c3195"></a>컴파일러 오류 C3195
'operator': 예약어이므로 값 형식 또는 ref 클래스의 멤버로 사용할 수 없습니다. CLR 또는 WinRT 연산자는 'operator' 키워드를 사용하여 정의해야 합니다.
컴파일러가 Managed Extensions for C++ 구문을 사용하는 연산자 정의를 발견했습니다. 연산자에 대 한 c + + 구문을 사용 해야 합니다.
다음 샘플에서는 C3195 오류가 발생하는 경우 및 이를 해결하는 방법을 보여 줍니다.
```
// C3195.cpp
// compile with: /clr /LD
#using <mscorlib.dll>
value struct V {
static V op_Addition(V v, int i); // C3195
static V operator +(V v, char c); // OK for new C++ syntax
};
``` | 26.9375 | 100 | 0.728538 | kor_Hang | 0.99948 |
535c372ef8823e690b77b1f8c36f4ea5e020301c | 6,533 | md | Markdown | docs/installing/many_install.md | Lucaslah/open-horizon.github.io | 8de7efcae61a6807a101c35b97619beb4e00f993 | [
"CC-BY-4.0"
] | 25 | 2020-03-27T14:05:48.000Z | 2022-03-29T09:46:52.000Z | docs/installing/many_install.md | Lucaslah/open-horizon.github.io | 8de7efcae61a6807a101c35b97619beb4e00f993 | [
"CC-BY-4.0"
] | 159 | 2020-03-23T12:52:02.000Z | 2022-03-31T16:00:53.000Z | docs/installing/many_install.md | joewxboy/open-horizon.github.io | e9cc9d1d7b30ec8f0726c7674aa627240e7abb2f | [
"CC-BY-4.0"
] | 42 | 2020-03-23T12:58:04.000Z | 2022-03-13T07:09:40.000Z | ---
copyright:
years: 2021
lastupdated: "2021-02-20"
---
{:new_window: target="blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:codeblock: .codeblock}
{:pre: .pre}
{:child: .link .ulchildlink}
{:childlinks: .ullinks}
# Bulk agent installation and registration
{: #batch-install}
Use the bulk installation process to set up multiple edge devices of similar types (in other words, same architecture, operating system, and pattern or policy).
Note: For this process, target edge devices that are macOS computers are not supported. However, you can drive this process from a macOS computer, if desired. (In other words, this host can be a macOS computer.)
### Prerequisites
* The devices to be installed and registered must have network access to the management hub.
* The devices must have an installed operating system.
* If you are using DHCP for edge devices, each device must maintain the same IP address until the task is complete (or the same `hostname` if you are using DDNS).
* All edge service user inputs must be specified as defaults in the service definition or in the pattern or deployment policy. No node-specific user inputs can be used.
### Procedure
{: #proc-multiple}
1. If you have not obtained or created the **agentInstallFiles-<edge-device-type>.tar.gz** file and API key by following [Gather the necessary information and files for edge devices](../hub/gather_files.md#prereq_horizon), do that now. Set the name of the file and the API key value in these environment variables:
```bash
export AGENT_TAR_FILE=agentInstallFiles-<edge-device-type>.tar.gz
export HZN_EXCHANGE_USER_AUTH=iamapikey:<api-key>
```
{: codeblock}
2. The **pssh** package includes the **pssh** and **pscp** commands, which enable you to run commands to many edge devices in parallel and copy files to many edge devices in parallel. If you do not have these commands on this host, install the package now:
* On {{site.data.keyword.linux_notm}}:
```bash
sudo apt install pssh
alias pssh=parallel-ssh
alias pscp=parallel-scp
```
{: codeblock}
* On {{site.data.keyword.macOS_notm}}:
```bash
brew install pssh
```
{: codeblock}
   (If **brew** is not installed yet, see [Install pssh on macOS with Brew](https://brewinstall.org/Install-pssh-on-Mac-with-Brew/).)
3. You can give **pscp** and **pssh** access to your edge devices in several ways. This content describes how to use an ssh public key. First, this host must have an ssh key pair (usually in **~/.ssh/id_rsa** and **~/.ssh/id_rsa.pub**). If it does not have an ssh key pair, generate it:
```bash
ssh-keygen -t rsa
```
{: codeblock}
4. Place the contents of your public key (**~/.ssh/id_rsa.pub**) on each edge device in **/root/.ssh/authorized_keys**.
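    One way to do this, assuming password-based SSH is still enabled on the devices and that you have already prepared the **nodes.hosts** list described in step 7 below, is to run **ssh-copy-id** in a loop:
    ```bash
    while IFS= read -r host; do
      ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"   # prompts for each device's password
    done < nodes.hosts
    ```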
5. Create a 2-column mapping file called **node-id-mapping.csv** that maps each edge device's IP address or hostname to the {{site.data.keyword.ieam}} node name it should be given during registration. When **agent-install.sh** runs on each edge device, this file tells it what edge node name to give to that device. Use CSV format:
```bash
Hostname/IP, Node Name
1.1.1.1, factory2-1
1.1.1.2, factory2-2
```
{: codeblock}
6. Add **node-id-mapping.csv** to the agent tar file:
```bash
gunzip $AGENT_TAR_FILE
tar -uf ${AGENT_TAR_FILE%.gz} node-id-mapping.csv
gzip ${AGENT_TAR_FILE%.gz}
```
{: codeblock}
7. Put the list of edge devices you want to bulk install and register in a file named **nodes.hosts**. This file will be used with the **pscp** and **pssh** commands. Each line should be in the standard ssh format `<user>@<IP-or-hostname>`:
```bash
[email protected]
[email protected]
```
{: codeblock}
Note: If you use a non-root user for any of the hosts, sudo must be configured to allow sudo from that user without entering a password.
8. Copy the agent tar file to the edge devices. This step can take a few moments:
```bash
pscp -h nodes.hosts -e /tmp/pscp-errors $AGENT_TAR_FILE /tmp
```
{: codeblock}
Note: If you get **[FAILURE]** in the **pscp** output for any of the edge devices, you can see the errors in **/tmp/pscp-errors**.
9. Run **agent-install.sh** on each edge device to install the Horizon agent and register the edge devices. You can use a pattern or a policy to register the edge devices:
1. Register the edge devices with a pattern:
```bash
pssh -h nodes.hosts -t 0 "bash -c \"tar -zxf /tmp/$AGENT_TAR_FILE agent-install.sh && sudo -s ./agent-install.sh -i . -u $HZN_EXCHANGE_USER_AUTH -p IBM/pattern-ibm.helloworld -w ibm.helloworld -o IBM -z /tmp/$AGENT_TAR_FILE 2>&1 >/tmp/agent-install.log \" "
```
{: codeblock}
Instead of registering the edge devices with the **IBM/pattern-ibm.helloworld** deployment pattern, you can use a different deployment pattern by modifying the **-p**, **-w**, and **-o** flags. To see all available **agent-install.sh** flag descriptions:
```bash
tar -zxf $AGENT_TAR_FILE agent-install.sh && ./agent-install.sh -h
```
{: codeblock}
2. Or, register the edge devices with policy. Create a node policy, copy it to the edge devices, and register the devices with that policy:
```bash
echo '{ "properties": [ { "name": "nodetype", "value": "special-node" } ] }' > node-policy.json
pscp -h nodes.hosts -e /tmp/pscp-errors node-policy.json /tmp
pssh -h nodes.hosts -t 0 "bash -c \"tar -zxf /tmp/$AGENT_TAR_FILE agent-install.sh && sudo -s ./agent-install.sh -i . -u $HZN_EXCHANGE_USER_AUTH -n /tmp/node-policy.json -z /tmp/$AGENT_TAR_FILE 2>&1 >/tmp/agent-install.log \" "
```
{: codeblock}
Now the edge devices are ready, but will not start running edge services until you create a deployment policy (business policy) that specifies that a service should be deployed to this type of edge device (in this example, devices with **nodetype** of **special-node**). See [Using deployment policy](../using_edge_services/detailed_policy.md) for details.
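      For illustration, a deployment policy for this example might look like the following sketch (the exact schema fields and the service version are assumptions; check your hub's documentation before using it):
      ```bash
      cat > deployment-policy.json <<'EOF'
      {
        "label": "helloworld for special nodes",
        "service": {
          "name": "ibm.helloworld",
          "org": "IBM",
          "arch": "*",
          "serviceVersions": [ { "version": "1.0.0" } ]
        },
        "constraints": [ "nodetype == special-node" ]
      }
      EOF
      hzn exchange deployment addpolicy -f deployment-policy.json policy-ibm.helloworld
      ```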
10. If you get **[FAILURE]** in the **pssh** output for any of the edge devices, you can investigate the problem by going to the edge device and viewing **/tmp/agent-install.log**.
11. While the **pssh** command is running, you can view the status of your edge nodes in the {{site.data.keyword.edge_notm}} console. See [Using the management console](../console/accessing_ui.md).
| 47.34058 | 362 | 0.703964 | eng_Latn | 0.982143 |
535d713a2500e47275904be825db1193d587fc4c | 5,856 | md | Markdown | website/docs/docs/building-a-dbt-project/using-sources.md | mrinalini1404/docs.getdbt.com | 63827387c1ca00bf7d1f03c9a978c1ce25e51bed | [
"Apache-2.0"
] | null | null | null | website/docs/docs/building-a-dbt-project/using-sources.md | mrinalini1404/docs.getdbt.com | 63827387c1ca00bf7d1f03c9a978c1ce25e51bed | [
"Apache-2.0"
] | null | null | null | website/docs/docs/building-a-dbt-project/using-sources.md | mrinalini1404/docs.getdbt.com | 63827387c1ca00bf7d1f03c9a978c1ce25e51bed | [
"Apache-2.0"
] | null | null | null | ---
title: "Sources"
id: "using-sources"
---
## Related reference docs
* [Source properties](source-properties)
* [Source configurations](source-configs)
* [`{{ source() }}` jinja function](dbt-jinja-functions/source)
* [`source freshness` command](commands/source)
## Using sources
Sources make it possible to name and describe the data loaded into your warehouse by your Extract and Load tools. By declaring these tables as sources in dbt, you can then
- select from source tables in your models using the `{{ source() }}` function, helping define the lineage of your data
- test your assumptions about your source data
- calculate the freshness of your source data
### Declaring a source
Sources are defined in `.yml` files nested under a `sources:` key.
<File name='models/<filename>.yml'>
```yaml
version: 2
sources:
- name: jaffle_shop
tables:
- name: orders
- name: customers
- name: stripe
tables:
- name: payments
```
</File>
If you're not already familiar with these files, be sure to check out [the documentation on schema.yml files](configs-and-properties) before proceeding.
### Selecting from a source
Once a source has been defined, it can be referenced from a model using the [`{{ source()}}` function](dbt-jinja-functions/source).
<File name='models/orders.sql'>
```sql
select
...
from {{ source('jaffle_shop', 'orders') }}
left join {{ source('jaffle_shop', 'customers') }} using (customer_id)
```
</File>
dbt will compile this to the full table name:
<File name='target/compiled/jaffle_shop/models/my_model.sql'>
```sql
select
...
from raw.jaffle_shop.orders
left join raw.jaffle_shop.customers using (customer_id)
```
</File>
Using the `{{ source () }}` function also creates a dependency between the model and the source table.
<Lightbox src="/img/docs/building-a-dbt-project/sources-dag.png" title="The source function tells dbt a model is dependent on a source "/>
### Testing and documenting sources
You can also:
- Add tests to sources
- Add descriptions to sources, that get rendered as part of your documentation site
These should be familiar concepts if you've already added tests and descriptions to your models (if not check out the guides on [testing](building-a-dbt-project/tests) and [documentation](documentation)).
<File name='models/<filename>.yml'>
```yaml
version: 2
sources:
- name: jaffle_shop
description: This is a replica of the Postgres database used by our app
tables:
- name: orders
description: >
One record per order. Includes cancelled and deleted orders.
columns:
- name: id
description: Primary key of the orders table
tests:
- unique
- not_null
- name: status
description: Note that the status can change over time
- name: ...
- name: ...
```
</File>
You can find more details on the available properties for sources in the [reference section](source-properties).
### FAQs
<FAQ src="source-has-bad-name" />
<FAQ src="source-in-different-database" />
<FAQ src="source-quotes" />
<FAQ src="testing-sources" />
<FAQ src="running-models-downstream-of-source" />
## Snapshotting source data freshness
With a couple of extra configs, dbt can optionally snapshot the "freshness" of the data in your source tables. This is useful for understanding if your data pipelines are in a healthy state, and is a critical component of defining SLAs for your warehouse.
### Declaring source freshness
To configure sources to snapshot freshness information, add a `freshness` block to your source and `loaded_at_field` to your table declaration:
<File name='models/<filename>.yml'>
```yaml
version: 2
sources:
- name: jaffle_shop
database: raw
freshness: # default freshness
warn_after: {count: 12, period: hour}
error_after: {count: 24, period: hour}
loaded_at_field: _etl_loaded_at
tables:
- name: orders
freshness: # make this a little more strict
warn_after: {count: 6, period: hour}
error_after: {count: 12, period: hour}
- name: customers # this will use the freshness defined above
- name: product_skus
freshness: null # do not check freshness for this table
```
</File>
In the `freshness` block, one or both of `warn_after` and `error_after` can be provided. If neither is provided, then dbt will not calculate freshness snapshots for the tables in this source.
Additionally, the `loaded_at_field` is required to calculate freshness for a table. If a `loaded_at_field` is not provided, then dbt will not calculate freshness for the table.
These configs are applied hierarchically, so `freshness` and `loaded_at` field values specified for a `source` will flow through to all of the `tables` defined in that source. This is useful when all of the tables in a source have the same `loaded_at_field`, as the config can just be specified once in the top-level source definition.
### Checking source freshness
To snapshot freshness information for your sources, use the `dbt source freshness` command ([reference docs](commands/source)):
```
$ dbt source freshness
```
Behind the scenes, dbt uses the freshness properties to construct a `select` query, shown below. You can find this query in the logs.
```sql
select
max(_etl_loaded_at) as max_loaded_at,
convert_timezone('UTC', current_timestamp()) as snapshotted_at
from raw.jaffle_shop.orders
```
The results of this query are used to determine whether the source is fresh or not:
<Lightbox src="/img/docs/building-a-dbt-project/snapshot-freshness.png" title="Uh oh! Not everything is as fresh as we'd like!"/>
### FAQs
<FAQ src="exclude-table-from-freshness" />
<FAQ src="snapshotting-freshness-for-one-source" />
<FAQ src="dbt-source-freshness" />
| 30.5 | 335 | 0.71653 | eng_Latn | 0.987567 |
535e2c2c6a1c7bdddcf38f2236fabf1b5df9168f | 2,379 | md | Markdown | README.md | vincecao/Eaby-Product-Search | 51cc8c31f26dbddcf403f63c87191e85df845bbb | [
"MIT"
] | null | null | null | README.md | vincecao/Eaby-Product-Search | 51cc8c31f26dbddcf403f63c87191e85df845bbb | [
"MIT"
] | null | null | null | README.md | vincecao/Eaby-Product-Search | 51cc8c31f26dbddcf403f63c87191e85df845bbb | [
"MIT"
] | 1 | 2021-07-19T19:11:28.000Z | 2021-07-19T19:11:28.000Z | # Eaby Product Search
PHP integrated web page, Angular7 based web application, and Android (adk28) Java application for Ebay product search.
__Lineng Cao__
## [PHP Verison](http://vince-amazing-php.us-west-1.elasticbeanstalk.com/)
- PHP7 combined with Html5
- All API called from PHP server side
## [Angular Verison](http://vince-amazing.us-west-1.elasticbeanstalk.com/search-product/)
- Entire Angular7 with Bootstrap powered FrontEnd website, Using reactive form and angular materials principles.(`@angular/ng-bootstrap`, `@angular/material`, `angular-svg-round-progressbar`).
- Nodejs & Express based BackEnd serving on AWS EC2 and Azure.
- Special Restful APIs were created for supporting all requests from the frontend filled with error handling.
- Featured with autocomplete, ip detected, group sorting and offline cart list.
## BackEnd Restful APIs
- [Zipcode IP Autocomplete](http://vince-amazing.us-west-1.elasticbeanstalk.com/api/ip-json/?startsWith=900)
- [Google Image Search](http://vince-amazing.us-west-1.elasticbeanstalk.com/api/google-img?v=1&productTitle=iphone)
- [eBay Product Search](http://vince-amazing.us-west-1.elasticbeanstalk.com/api/search/?keyword=iphone&buyerPostalCode=90007&MaxDistance=100&FreeShippingOnly=true&LocalPickupOnly=true)
- [eBay Project Detail Search](http://vince-amazing.us-west-1.elasticbeanstalk.com/api/item-detail/?itemId=283622107255)
- [eBay Similar Search](http://vince-amazing.us-west-1.elasticbeanstalk.com/api/similar/?itemId=283622107255)
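For example, the autocomplete endpoint above can be called from any client; here is a minimal TypeScript sketch using `fetch` (the response is assumed to be a JSON array of zip codes):
```typescript
const BASE = "http://vince-amazing.us-west-1.elasticbeanstalk.com/api";

async function zipSuggestions(prefix: string): Promise<string[]> {
  // Query the zipcode autocomplete API listed above.
  const res = await fetch(`${BASE}/ip-json/?startsWith=${encodeURIComponent(prefix)}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

zipSuggestions("900").then(console.log).catch(console.error);
```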
### Screenshots
__Detail__

__Search__

__Wish List Feature__

## Native Android Version
- Coded in Java (Android SDK 28) with Android Studio
- Node.js & Express backend served on AWS and Azure.
- Uses RecyclerView, Fragments, and data modules.
- Features autocomplete, IP detection, similar-item group sorting, and a cached offline wish list.
### Download Demo
[Release](https://github.com/vincecao/Eaby-Product-Search/releases)

### Screenshots
__Launch__

__Detail__

__Search__


__Wish List Feature__

| 34.985294 | 192 | 0.773434 | eng_Latn | 0.458764 |
535eb5577bac132b02f1650d466b48567f307aad | 8,299 | md | Markdown | _posts/2021-11-09-Single-Origin.md | juliaflanders/juliaflanders.github.io | 291a581f55b5fb027b4902ba01cabc334fa20fb0 | [
"CC0-1.0"
] | null | null | null | _posts/2021-11-09-Single-Origin.md | juliaflanders/juliaflanders.github.io | 291a581f55b5fb027b4902ba01cabc334fa20fb0 | [
"CC0-1.0"
] | null | null | null | _posts/2021-11-09-Single-Origin.md | juliaflanders/juliaflanders.github.io | 291a581f55b5fb027b4902ba01cabc334fa20fb0 | [
"CC0-1.0"
] | 1 | 2020-12-09T18:33:42.000Z | 2020-12-09T18:33:42.000Z | ---
layout: post
title: Single Origin
category: [fiber, scale]
---
The remarkable thing about wool is that all wool comes from a specific sheep, and every sheep is different. Not only do different breeds have remarkably varied characteristics—the length and fineness of the fiber, the amount of crimp, the color, the amount and type of grease—but each individual sheep has its own fleece personality. The age and health of the sheep matters a lot: lambs' wool is finer, and stress (from disease or lambing) can weaken the fleece and cause points of breakage. But the sheep's wool also expresses the mysterious complexities of its genetics: perhaps a distant ancestor had some characteristic of fuzziness or fineness or color which, though not expressed in the intermediate generations, comes through in subtle variations.
{:height="400px"}
This is especially true of sheep that are crosses of different breeds, where their recent parentage contributes all sorts of potential wild variability into the mix. I get most of my fleece from a kind neighbor who started her flock with a ram and two ewes. The ram's main breed characteristics, to look at him, were consistent with his longwool heritage: his fleece is silky, long, fine, more like ringlets than like fuzz. The two ewes are both crosses of Blue-faced Leicester (another longwool breed) and Cormo (itself a cross between Corriedale and Merino), and their wool is fine, soft, crimpy, full of rich lanolin. All three are creamy white. But their lambs tell a more complicated story! Two of their offspring are brown-black—a surprise until I learned that one of the ram's parents was a Shetland, a breed that comes in all sorts of colors. And color aside, the lambs vary widely in the texture of their wool: some expressing more of the longwool side of their parents' heritage, some with the crimpier and finer Merino characteristics, and some with a dual coat (longer, coarser hairs combined with a soft downy undercoat) that owes something to the Shetland side of the family. As the flock grew over several years, each generation of lambs was a new surprise.
{:height="200px"}{:height="200px"}
Now I have about 12-16 of these fleeces from several seasons of shearing and am thinking about how to handle them as I take them through the fiber preparation process. I've given them all an initial rinse to get out the worst of the dirt, but they still need a thorough scouring to remove the excess lanolin. I've done that myself in the past but it's hard: it requires very hot water and a lot of rinsing. For one or two fleeces, it's fine, but it doesn't scale up very well. Once they're washed, I want to card them and turn them into roving, where the fibers are smoothed and made easier to spin into a soft, consistent yarn. As with the washing, I've done this myself on a small-to-medium scale, but for this many fleeces it prompts me to ask myself whether how I really want to spend my time is on fiber preparation or on spinning and weaving.
The economics of having someone else do these things for me are interesting. There are spinneries that will wash and process wool for you: some of them are very small and are willing to handle very small jobs, while others operate at more of an industrial scale and are able to tackle, say, creating a new line of yarn for a large shepherding concern. And in both cases, the cost of the work is directly related to the level of attention and individualized activity you ask for: if you want to blend fibers together in a specific ratio, or give special care to an exquisite fleece, you'll pay for the additional human effort that represents. Paying someone else for work I could do myself reminds me what my own effort is worth and also gives me a metric for translating it into other kinds of goods: in addition to getting the fleece washed and carded, I could get it spun into yarn for me—or I could use that money to buy yarn from a store, and knit it into a sweater—or I could buy a sweater...
Many things change as I entrust more of that workflow to others, but the one I'm interested in here is how much the raw materials retain of their individual character as they pass through it. Yarn made from Merino wool is profoundly different from yarn made from a longwool. You can wear Merino next to the skin; it's used for fine suiting and soft sweaters, but it's not hard-wearing or tough. Longwool breeds tend to be a bit coarser, silkier, longer, good for use as warp threads or outerwear, lustrous and strong. Yarn manufacturers often mix different types of wool together to get a combination of their virtues. Until recently, most yarn labeling just said "wool"—but with the rise of interest in terroir and single-origin products, you can now buy yarn that declares its origins more clearly through the evocative names of sheep breeds, familiar and unfamiliar: Shetland, Blue-faced Leicester, Gotland, California Variegated Mutant, Jacob, Wensleydale. Programs like the Livestock Conservancy's [Shave 'Em to Save 'Em](https://livestockconservancy.org/get-involved/shave-em-to-save-em/) focus on making the rarest and most endangered breeds better known, by enlisting hand-spinners in trying out their wool and experiencing first-hand the distinctive qualities of each breed, the kinds of yarn it can be used to make. Like heirloom apple varieties, these breeds carry valuable genetic diversity but also remind us of all the different ways things can be wonderful. An apple that tastes like strawberries; an apple good for making apple dumplings; an apple that will keep in the cellar over the winter; a dessert apple. A fleece that spins up to resemble something like Elvish chain mail.
Planning out how to use these new fleeces, I stared at the heaps of wool: the two white ewes, the white ram, the white lambs, the brown-black lambs. I could just group them by color and get an enormous batch of uniform white roving and another of black-brown: about 25 pounds of wool, enough to make a big project like a set of matching blankets. I could even have the whole lot carded together to make a heathered grey-brown yarn that would probably be very lovely: a Leicester Longwool/Shetland-Cormo/BFL melange with strength and luster and softness. Looking at the ram fleece—his name is William Wallace—I was struck by how his longwool genes set him apart, the locks of wool somehow silkier and more lustrous, a pearly gray, different from the ewes and lambs. If I kept that one fleece separate, I could spin it quite differently to bring out the luster and drape. Could I pull that one out and ask the spinnery to handle it separately? Already, they had taken 8 months to process the first fleece I sent them, largely because it was such a tiny order: they have to clean the carding machine after each order, so they wait to fit in my one little fleece when it won't cause a delay. Maybe including William Wallace in the mix with Edna and Lucy and the others would make the whole batch better, like adding a little cornmeal to a batch of bread, or a little rye to the whiskey. And if I have to wash and card his fleece myself, the way fiber is piling up in my workroom, I may never get to it.
One of the lamb fleeces is brown-black, faded towards the tips to grey and gold, but with a few patches that are silvery with an especially curly texture. Turning the wool as it dried on the lawn in the sun, I pulled out these parts and set them aside—I couldn't bear the idea of just mixing them in with the rest of the fleece in the carding process. Taking the single-source idea to an extreme, like the winemakers who select individual grapes by hand, this little handful of fiber, no more than a few ounces, feels precious and distinctive enough to be worth the effort of attention—but also, being so small, it fits into my own scope of effort. I can wash it and card it in a couple of afternoons, and spin it into a skein of yarn that can become a hat or a pair of mittens or the yoke of a sweater, or a narrow stripe on an enormous brown and white blanket, to remind me of all the different strands that are mixed into its apparent uniformity. | 345.791667 | 1,695 | 0.791541 | eng_Latn | 0.999928 |
535ed710e66dfe2bebb2a518b0a4d7b8fdb94e3a | 3,443 | md | Markdown | docs/fsharp/language-reference/object-expressions.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/fsharp/language-reference/object-expressions.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/fsharp/language-reference/object-expressions.md | badbadc0ffee/docs.de-de | 50a4fab72bc27249ce47d4bf52dcea9e3e279613 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Object Expressions
description: Learn how to use F# object expressions when you want to avoid the extra code and overhead required to create a new, named type.
ms.date: 02/08/2019
ms.openlocfilehash: 63f2c1d7128721b7b8c744e4cf02d73c2a8b4a07
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/23/2019
ms.locfileid: "61666287"
---
# <a name="object-expressions"></a>Objektausdrücke
Ein *Objekt Ausdruck* ist ein Ausdruck, der eine neue Instanz von einem dynamisch erstellten anonymen Objekttyps erstellt basiert auf einem vorhandenen Basistyp, Schnittstelle oder Satz von Schnittstellen.
## <a name="syntax"></a>Syntax
```fsharp
// When typename is a class:
{ new typename [type-params]arguments with
member-definitions
[ additional-interface-definitions ]
}
// When typename is not a class:
{ new typename [generic-type-args] with
member-definitions
[ additional-interface-definitions ]
}
```
## <a name="remarks"></a>Hinweise
In der vorherigen Syntax wird die *Typename* darstellt, einen vorhandenen Klassen- oder Schnittstellentyp. *Typ-Params* wird beschrieben, die optional generischen Typparameter. Die *Argumente* werden verwendet, nur für Klassentypen, die Konstruktorparameter erforderlich ist. Die *Memberdefinitionen* sind überschreibungen von Basisklassenmethoden oder Implementierungen von abstrakten Methoden von einer Basisklasse oder Schnittstelle.
Das folgende Beispiel veranschaulicht verschiedene Arten von Object-Ausdrücke.
```fsharp
// This object expression specifies a System.Object but overrides the
// ToString method.
let obj1 = { new System.Object() with member x.ToString() = "F#" }
printfn "%A" obj1
// This object expression implements the IFormattable interface.
let delimiter(delim1: string, delim2: string, value: string) =
{ new System.IFormattable with
member x.ToString(format: string, provider: System.IFormatProvider) =
if format = "D" then
delim1 + value + delim2
else
value }
let obj2 = delimiter("{","}", "Bananas!");
printfn "%A" (System.String.Format("{0:D}", obj2))
// Define two interfaces
type IFirst =
abstract F : unit -> unit
abstract G : unit -> unit
type ISecond =
inherit IFirst
abstract H : unit -> unit
abstract J : unit -> unit
// This object expression implements both interfaces.
let implementer() =
{ new ISecond with
member this.H() = ()
member this.J() = ()
interface IFirst with
member this.F() = ()
member this.G() = () }
```
## <a name="using-object-expressions"></a>Verwenden von Object-Ausdrücke
Sie verwenden die Object-Ausdrücke, wenn Sie möchten, um zu vermeiden, die zusätzlichen Code und den Aufwand, die zum Erstellen einer neuen benannten Typ erforderlich ist. Wenn Sie Object-Ausdrücke verwenden, um die Anzahl der in einem Programm erstellten Typen zu minimieren, können Sie reduzieren Sie die Anzahl von Codezeilen und zu verhindern, dass die unnötige die Verbreitung von Typen. Anstatt zu erstellen, viele Typen nur, um bestimmte Situationen zu behandeln, können Sie einen Object-Ausdruck, der einen vorhandenen Typ anpasst, oder eine geeignete Implementierung der Schnittstelle für den speziellen Fall zur Verfügung stellt.
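For instance, a common use is to implement a small interface such as `System.IDisposable` inline, instead of declaring a named type for it (illustrative sketch):
```fsharp
// Create an IDisposable without defining a named type.
let makeResource name =
    { new System.IDisposable with
        member this.Dispose() = printfn "%s disposed" name }

let useResource () =
    use r = makeResource "temp resource"
    printfn "working..."
    // r.Dispose() is called automatically at the end of the scope
```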
## <a name="see-also"></a>Siehe auch
- [F#-Sprachreferenz](index.md)
| 41.481928 | 639 | 0.74528 | deu_Latn | 0.960318 |
535ef3385737a670baf2da4c25b25315819c1fbd | 1,098 | md | Markdown | website/docs/index.md | niieani/beemo | a1ee8ae18ab0c08a056be7db61d1d149c3da1288 | [
"MIT"
] | 114 | 2019-02-10T19:35:11.000Z | 2022-02-15T22:06:44.000Z | website/docs/index.md | niieani/beemo | a1ee8ae18ab0c08a056be7db61d1d149c3da1288 | [
"MIT"
] | 60 | 2019-02-11T08:25:45.000Z | 2022-03-26T23:22:48.000Z | website/docs/index.md | niieani/beemo | a1ee8ae18ab0c08a056be7db61d1d149c3da1288 | [
"MIT"
] | 4 | 2019-08-24T18:24:08.000Z | 2021-09-07T06:37:46.000Z | ---
title: Introduction
slug: /
---
Manage developer and build tools, their configuration, and commands in a single centralized
repository. Beemo aims to solve the multi-project maintenance fatigue by removing the following
burdens across all projects: config and dotfile management, multiple config patterns, up-to-date
development dependencies, continuous copy and paste, and more.
## Features
- Manage dev tools and configurations in a single repository.
- Configure supported dev tools using `.ts` or `.js` files.
- Customize and alter config at runtime with CLI options.
- Pass custom CLI options to dev tool commands without failure.
- Automatically expand glob patterns (a better alternative to bash).
- Listen to and act upon events.
- Easily share config between dev tools.
- Avoid relative config or `extend` paths.
- Automatic config file cleanup.
- Custom scripts with CLI options.
- Scaffolding and template generation.
- Workspaces (monorepo) support.
- Parallel, pooled, and prioritized builds.
- And much more.
## Requirements
- Node 12.17+
- GitHub, Bitbucket, or another VCS
# S
First attempt at a stack library
---
uid: configurationfiles
---
# Configuration Files
Normally, a configuration file used to provide settings or to control the environment
in which tests are run should be given the same name as the assembly file, with the
suffix ".config" added. For example, the configuration file used to run nunit.tests.dll must
be named nunit.tests.dll.config and located in the same directory as the dll.
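As a minimal sketch, such a file is an ordinary .NET application configuration file; the `appSettings` key shown below is purely illustrative.

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- A hypothetical setting read by the tests at run time -->
    <add key="TestDataDirectory" value="C:\TestData" />
  </appSettings>
</configuration>
```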
**Notes:**
1. When multiple assemblies are specified in an NUnit project (file extension `.nunit`),
it is possible to specify a common config file for the included test assemblies.
2. When multiple assemblies are specified on the command-line using the `--domain:Single`
option, no config file is currently used.
# Yaksa.OrckestraCommerce.Client.Model.PurchaseCondition
PurchaseCondition
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Id** | **string** | The unique identifier of the entity. |
**PropertyBag** | **Dictionary<string, Object>** | | [optional]
**ConsumabilityRules** | [**List<ConsumabilityRule>**](ConsumabilityRule.md) | the rules used to determine whether an entity should be used when validating the conditions of this promotion. | [optional]
**ExcludeDiscountedItems** | **bool** | a flag indicating whether discounted items should be excluded when evaluating this condition. | [optional]
**Level** | **string** | The level of the purchase condition: on which part of the cart the condition will be applied. | [optional]
**Targets** | [**List<PurchaseConditionTarget>**](PurchaseConditionTarget.md) | a list of the targets on which the condition will be applied. | [optional]
**Type** | **string** | The type: how the Value will be applied to the Targets. | [optional]
**UnitOfMeasure** | **string** | the UnitOfMeasure (Unit, Kilogram, Liter, etc..) of the condition. | [optional]
**Value** | **double** | the value of the condition. | [optional]
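As a rough sketch, a condition could be populated through the generated property setters. The values below are illustrative only, and the exact shape of the generated client may differ.

```csharp
using Yaksa.OrckestraCommerce.Client.Model;

// Illustrative only: populate a purchase condition via its properties.
var condition = new PurchaseCondition
{
    Id = "min-cart-total",
    ExcludeDiscountedItems = true,
    Level = "Cart",          // which part of the cart the condition applies to
    Type = "MinimumAmount",  // how Value is applied to the Targets (hypothetical value)
    UnitOfMeasure = "Unit",
    Value = 50.0
};
```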
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
---
uid: building-with-legerity
title: Writing UI tests | Legerity for Uno
---
# Writing UI tests with Legerity for Uno
## Finding web elements by XAML ID or Name
When viewing the visual tree of your Uno Platform application for web with `F12` developer tooling, you will need to enable UI element feature configurations in your application to display `x:Uid` and `x:Name` attributes in the DOM.
This is achieved by adding the following lines to the constructor of your `App.xaml.cs` file.
```csharp
#if DEBUG && __WASM__
Uno.UI.FeatureConfiguration.UIElement.AssignDOMXamlName = true;
#endif
```
More information on this can be found in the [Uno Platform docs](https://platform.uno/docs/articles/uno-development/debugging-inspect-visual-tree.html#web). | 36.571429 | 232 | 0.764323 | eng_Latn | 0.979548 |
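Once the names are rendered, a test can locate elements through the DOM. The following is a minimal sketch using a raw Selenium XPath query; it assumes an existing `IWebDriver` instance named `driver`, and it assumes the runtime exposes the name as a `xamlname` DOM attribute, so verify the exact attribute name in your application's DOM before relying on it.

```csharp
using OpenQA.Selenium;

// Illustrative only: find an element whose x:Name is "SubmitButton",
// assuming the WASM renderer emits it as a "xamlname" attribute.
IWebElement submitButton =
    driver.FindElement(By.XPath("//*[@xamlname='SubmitButton']"));
submitButton.Click();
```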
535fd5512f4945a2b2dd7cbe6c15a3de0b8e3230 | 17,136 | md | Markdown | content/influxdb/v1.7/administration/backup_and_restore.md | yuemingl/docs.influxdata.com | a906fe03ab6640e9d91933abc12f440797f33bc0 | [
"MIT"
] | null | null | null | content/influxdb/v1.7/administration/backup_and_restore.md | yuemingl/docs.influxdata.com | a906fe03ab6640e9d91933abc12f440797f33bc0 | [
"MIT"
] | null | null | null | content/influxdb/v1.7/administration/backup_and_restore.md | yuemingl/docs.influxdata.com | a906fe03ab6640e9d91933abc12f440797f33bc0 | [
"MIT"
] | null | null | null | ---
title: Backing up and restoring in InfluxDB OSS
description: Using InfluxDB OSS backup and restore utilities for online, Enterprise-compatible use and portability between InfluxDB Enterprise and InfluxDB OSS servers.
aliases:
- /influxdb/v1.7/administration/backup-and-restore/
menu:
influxdb_1_7:
name: Backing up and restoring
weight: 60
parent: Administration
---
## Overview
Starting in version 1.5, the InfluxDB OSS `backup` utility provides:
* Option to run backup and restore functions on online (live) databases.
* Backup and restore functions for single or multiple databases, along with optional timestamp filtering.
* Data can be imported from [InfluxDB Enterprise](/enterprise_influxdb/latest/) clusters
* Backup files that can be imported into an InfluxDB Enterprise database.
> **InfluxDB Enterprise users:** See [Backing up and restoring in InfluxDB Enterprise](/enterprise_influxdb/latest/administration/backup-and-restore/).
> ***Note:*** Prior to InfluxDB OSS 1.5, the `backup` utility created backup file formats incompatible with InfluxDB Enterprise. This legacy format is still supported in the new `backup` utility as input for the new *online* restore function. The *offline* backup and restore utilities in InfluxDB OSS versions 1.4 and earlier are deprecated, but are documented below in [Backward compatible offline backup and restore](#backward-compatible-offline-backup-and-restore-legacy-format).
## Online backup and restore (for InfluxDB OSS)
Use the `backup` and `restore` utilities to back up and restore between `influxd` instances with the same versions or with only minor version differences. For example, you can backup from 1.7.3 and restore on 1.7.7.
### Configuring remote connections
The online backup and restore processes execute over a TCP connection to the database.
**To enable the port for the backup and restore service:**
1. At the root level of the InfluxDB config file (`influxdb.conf`), uncomment the [`bind-address` configuration setting](/influxdb/v1.7/administration/config#bind-address-127-0-0-1-8088) on the remote node.
2. Update the `bind-address` value to `<remote-node-IP>:8088`
3. Provide the IP address and port to the `-host` parameter when you run commands.
**Example**
```
$ influxd backup -portable -database mydatabase -host <remote-node-IP>:8088 /tmp/mysnapshot
```
### `backup`
The improved `backup` command is similar to previous versions, except that it
generates backups in an InfluxDB Enterprise-compatible format and has some new filtering options to constrain the range of data points that are exported to the backup.
```
influxd backup
[ -database <db_name> ]
[ -portable ]
[ -host <host:port> ]
[ -retention <rp_name> ] | [ -shard <shard_ID> -retention <rp_name> ]
[ -start <timestamp> [ -end <timestamp> ] | -since <timestamp> ]
<path-to-backup>
```
To invoke the new InfluxDB Enterprise-compatible format, run the `influxd backup` command with the `-portable` flag, like this:
```
influxd backup -portable [ arguments ] <path-to-backup>
```
##### Arguments
Optional arguments are enclosed in brackets.
- `[ -database <db_name> ]`: The database to back up. If not specified, all databases are backed up.
- `[ -portable ]`: Generates backup files in the newer InfluxDB Enterprise-compatible format. Highly recommended for all InfluxDB OSS users.
<dt>
**Important:** If `-portable` is not specified, the default legacy backup utility is used -- only the host metastore is backed up, unless `-database` is specified. If not using `-portable`, review [Backup (legacy)](#backup-legacy) below for expected behavior.
</dt>
- `[ -host <host:port> ]`: Host and port for InfluxDB OSS instance . Default value is `'127.0.0.1:8088'`. Required for remote connections. Example: `-host 127.0.0.1:8088`
- `[ -retention <rp_name> ]`: Retention policy for the backup. If not specified, the default is to use all retention policies. If specified, then `-database` is required.
- `[ -shard <ID> ]`: Shard ID of the shard to be backed up. If specified, then `-retention <name>` is required.
- `[ -start <timestamp> ]`: Include all points starting with the specified timestamp ([RFC3339 format](https://www.ietf.org/rfc/rfc3339.txt)). Not compatible with `-since`. Example: `-start 2015-12-24T08:12:23Z`
- `[ -end <timestamp> ]` ]: Exclude all results after the specified timestamp ([RFC3339 format](https://www.ietf.org/rfc/rfc3339.txt)). Not compatible with `-since`. If used without `-start`, all data will be backed up starting from 1970-01-01. Example: `-end 2015-12-31T08:12:23Z`
- `[ -since <timestamp> ]`: Perform an incremental backup after the specified timestamp [RFC3339 format](https://www.ietf.org/rfc/rfc3339.txt). Use `-start` instead, unless needed for legacy backup support.
#### Backup examples
**To back up everything:**
```
influxd backup -portable <path-to-backup>
```
**To backup all databases recently changed at the filesystem level**
```
influxd backup -portable -start <timestamp> <path-to-backup>
```
**To backup only the `telegraf` database:**
```
influxd backup -portable -database telegraf <path-to-backup>
```
**To backup a database for a specified time interval:**
```
influxd backup -portable -database mytsd -start 2017-04-28T06:49:00Z -end 2017-04-28T06:50:00Z /tmp/backup/influxdb
```
### `restore`
An online `restore` process is initiated by using the `restore` command with either the `-portable` argument (indicating the new Enterprise-compatible backup format) or `-online` flag (indicating the legacy backup format).
```
influxd restore [ -db <db_name> ]
-portable | -online
[ -host <host:port> ]
[ -newdb <newdb_name> ]
[ -rp <rp_name> ]
[ -newrp <newrp_name> ]
[ -shard <shard_ID> ]
<path-to-backup-files>
```
<dt>
Restoring backups that specified time periods (using `-start` and `-end`)
Backups that specified time intervals using the `-start` or `-end` arguments are performed on blocks of data and not on a point-by-point basis. Since most blocks are highly compacted, extracting each block to inspect each point creates both a computational and disk-space burden on the running system.
Each data block is annotated with starting and ending timestamps for the time interval included in the block. When you specify `-start` or `-end` timestamps, all of the specified data is backed up, but other data points that are in the same blocks will also be backed up.
**Expected behavior**
- When restoring data, you are likely to see data that is outside of the specified time periods.
- If duplicate data points are included in the backup files, the points will be written again, overwriting any existing data.
</dt>
#### Arguments
Optional arguments are enclosed in brackets.
- `-portable`: Use the new Enterprise-compatible backup format for InfluxDB OSS. Recommended instead of `-online`. A backup created on InfluxDB Enterprise can be restored to an InfluxDB OSS instance.
- `-online`: Use the legacy backup format. Only use if the newer `-portable` option cannot be used.
- `[ -host <host:port> ]`: Host and port for InfluxDB OSS instance . Default value is `'127.0.0.1:8088'`. Required for remote connections. Example: `-host 127.0.0.1:8088`
- `[ -db <db_name> | -database <db_name> ]`: Name of the database to be restored from the backup. If not specified, all databases will be restored.
- `[ -newdb <newdb_name> ]`: Name of the database into which the archived data will be imported on the target system. If not specified, then the value for `-db` is used. The new database name must be unique to the target system.
- `[ -rp <rp_name> ]`: Name of the retention policy from the backup that will be restored. Requires that `-db` is set. If not specified, all retention policies will be used.
- `[ -newrp <newrp_name> ]`: Name of the retention policy to be created on the target system. Requires that `-rp` is set. If not specified, then the `-rp` value is used.
- `[ -shard <shard_ID> ]`: Shard ID of the shard to be restored. If specified, then `-db` and `-rp` are required.
> **Note:** If you have automated backups based on the legacy format, consider using the new online feature for your legacy backups. The new backup utility lets you restore a single database to a live (online) instance, while leaving all existing data on the server in place. The [offline restore method (described below)](#restore-legacy) may result in data loss, since it clears all existing databases on the server.
#### Restore examples
**To restore all databases found within the backup directory:**
```
influxd restore -portable path-to-backup
```
**To restore only the `telegraf` database (telegraf database must not exist):**
```
influxd restore -portable -db telegraf path-to-backup
```
**To restore data to a database that already exists:**
You cannot restore directly into a database that already exists. If you attempt to run the `restore` command into an existing database, you will get a message like this:
```
influxd restore -portable -db existingdb path-to-backup
2018/08/30 13:42:46 error updating meta: DB metadata not changed. database may already exist
restore: DB metadata not changed. database may already exist
```
1. Restore the existing database backup to a temporary database.
```
influxd restore -portable -db telegraf -newdb telegraf_bak path-to-backup
```
2. Sideload the data (using a `SELECT ... INTO` statement) into the existing target database and drop the temporary database.
```
> USE telegraf_bak
> SELECT * INTO telegraf..:MEASUREMENT FROM /.*/ GROUP BY *
> DROP DATABASE telegraf_bak
```
**To restore to a retention policy that already exists:**
1. Restore the retention policy to a temporary database.
```
influxd restore -portable -db telegraf -newdb telegraf_bak -rp autogen -newrp autogen_bak path-to-backup
```
2. Sideload into the target database and drop the temporary database.
```
> USE telegraf_bak
> SELECT * INTO telegraf.autogen.:MEASUREMENT FROM /telegraf_bak.autogen_bak.*/ GROUP BY *
> DROP telegraf_bak
```
### Backward compatible offline backup and restore (legacy format)
> ***Note:*** The backward compatible backup and restore for InfluxDB OSS documented below are deprecated. InfluxData recommends using the newer Enterprise-compatible backup and restore utilities with your InfluxDB OSS servers.
InfluxDB OSS has the ability to snapshot an instance at a point-in-time and restore it.
All backups are full backups; incremental backups are not supported.
Two types of data can be backed up, the metastore and the metrics themselves.
The [metastore](/influxdb/v1.7/concepts/glossary/#metastore) is backed up in its entirety.
The metrics are backed up on a per-database basis in an operation separate from the metastore backup.
#### Backing up the metastore
The InfluxDB metastore contains internal information about the status of
the system, including user information, database and shard metadata, continuous queries, retention policies, and subscriptions.
While a node is running, you can create a backup of your instance's metastore by running the command:
```
influxd backup <path-to-backup>
```
Where `<path-to-backup>` is the directory where you
want the backup to be written to. Without any other arguments,
the backup will only record the current state of the system
metastore. For example, the command:
```bash
$ influxd backup /tmp/backup
2016/02/01 17:15:03 backing up metastore to /tmp/backup/meta.00
2016/02/01 17:15:03 backup complete
```
Will create a metastore backup in the directory `/tmp/backup` (the
directory will be created if it doesn't already exist).
#### Backup (legacy)
Each database must be backed up individually.
To backup a database, add the `-database` flag:
```bash
influxd backup -database <mydatabase> <path-to-backup>
```
Where `<mydatabase>` is the name of the database you would like to
backup, and `<path-to-backup>` is where the backup data should be
stored.
Optional flags also include:
- `-retention <retention-policy-name>`
- This flag can be used to backup a specific retention policy. For more information on retention policies, see
[Retention policy management](/influxdb/v1.7/query_language/database_management/#retention-policy-management). If unspecified, all retention policies will be backed up.
- `-shard <shard ID>` - This flag can be used to backup a specific
shard ID. To see which shards are available, you can run the command
`SHOW SHARDS` using the InfluxDB query language. If not specified,
all shards will be backed up.
- `-since <date>` - This flag can be used to create a backup _since_ a
specific date, where the date must be in
[RFC3339](https://www.ietf.org/rfc/rfc3339.txt) format (for example,
`2015-12-24T08:12:23Z`). This flag is important if you would like to
take incremental backups of your database. If not specified, all
timeranges within the database will be backed up.
> **Note:** Metastore backups are also included in per-database backups
As a real-world example, you can take a backup of the `autogen`
retention policy for the `telegraf` database since midnight UTC on
February 1st, 2016 by using the command:
```
$ influxd backup -database telegraf -retention autogen -since 2016-02-01T00:00:00Z /tmp/backup
2016/02/01 18:02:36 backing up rp=default since 2016-02-01 00:00:00 +0000 UTC
2016/02/01 18:02:36 backing up metastore to /tmp/backup/meta.01
2016/02/01 18:02:36 backing up db=telegraf rp=default shard=2 to /tmp/backup/telegraf.default.00002.01 since 2016-02-01 00:00:00 +0000 UTC
2016/02/01 18:02:36 backup complete
```
Which will send the resulting backup to `/tmp/backup`, where it can
then be compressed and sent to long-term storage.
#### Remote backups (legacy)
The legacy backup mode also supports live, remote backup functionality.
Follow the directions in [Configuring remote connections](#configuring-remote-connections) above to configure this feature.
## Restore (legacy)
<dt> This offline restore method described here may result in data loss -- it clears all existing databases on the server. Consider using the `-online` flag with the newer [`restore` method (described above)](#restore) to import legacy data without any data loss.
</dt>
To restore a backup, you will need to use the `influxd restore` command.
> **Note:** Restoring from backup is only supported while the InfluxDB daemon is stopped.
To restore from a backup you will need to specify the type of backup,
the path to where the backup should be restored, and the path to the backup.
The command:
```
influxd restore [ -metadir | -datadir ] <path-to-meta-or-data-directory> <path-to-backup>
```
The required flags for restoring a backup are:
- `-metadir <path-to-meta-directory>` - This is the path to the meta
directory where you would like the metastore backup recovered
to. For packaged installations, this should be specified as
`/var/lib/influxdb/meta`.
- `-datadir <path-to-data-directory>` - This is the path to the data
directory where you would like the database backup recovered to. For
packaged installations, this should be specified as
`/var/lib/influxdb/data`.
The optional flags for restoring a backup are:
- `-database <database>` - This is the database that you would like to
restore the data to. This option is required if no `-metadir` option
is provided.
- `-retention <retention policy>` - This is the target retention policy
for the stored data to be restored to.
- `-shard <shard id>` - This is the shard data that should be
restored. If specified, `-database` and `-retention` must also be
set.
Following the backup example above, the backup can be restored in two
steps.
1. The metastore needs to be restored so that InfluxDB
knows which databases exist:
```
$ influxd restore -metadir /var/lib/influxdb/meta /tmp/backup
Using metastore snapshot: /tmp/backup/meta.00
```
2. Once the metastore has been restored, we can now recover the backed up
data. In the real-world example above, we backed up the `telegraf`
database to `/tmp/backup`, so let's restore that same dataset. To
restore the `telegraf` database:
```
$ influxd restore -database telegraf -datadir /var/lib/influxdb/data /tmp/backup
Restoring from backup /tmp/backup/telegraf.*
unpacking /var/lib/influxdb/data/telegraf/default/2/000000004-000000003.tsm
unpacking /var/lib/influxdb/data/telegraf/default/2/000000005-000000001.tsm
```
> **Note:** Once the backed up data has been recovered, the permissions on the shards may no longer be accurate. To ensure the file permissions are correct, please run this command: `$ sudo chown -R influxdb:influxdb /var/lib/influxdb`
Once the data and metastore are recovered, start the database:
```bash
$ service influxdb start
```
As a quick check, you can verify that the database is known to the metastore
by running a `SHOW DATABASES` command:
```
influx -execute 'show databases'
name: databases
---------------
name
_internal
telegraf
```
The database has now been successfully restored!
| 43.492386 | 484 | 0.747199 | eng_Latn | 0.991957 |
536009cc96221a8ee02eb5d2559303bc61dbe87e | 210 | md | Markdown | README.md | oskarjiang/ElegantDashboard | bef941961a180dbc65c34817ff1379096faca311 | [
"MIT"
] | null | null | null | README.md | oskarjiang/ElegantDashboard | bef941961a180dbc65c34817ff1379096faca311 | [
"MIT"
] | 2 | 2021-10-12T22:42:53.000Z | 2022-03-25T19:03:11.000Z | README.md | oskarjiang/elegant-dashboard | bef941961a180dbc65c34817ff1379096faca311 | [
"MIT"
] | null | null | null | ## Prerequisites
https://nodejs.org/
## Installing
```
npm install
```
# Running
Start React application
```
npm start
```
Start Electron application that wraps React application
```
npm run electron-start
```
| 13.125 | 55 | 0.719048 | eng_Latn | 0.515075 |
53608f2e78936d85ac7f893035240456ed8d34f8 | 10,450 | md | Markdown | articles/container-service/container-service-kubernetes-windows-walkthrough.md | dariagrigoriu/azure-docs | 624cee53409b0d1788864c4d24a321870db93bd8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/container-service/container-service-kubernetes-windows-walkthrough.md | dariagrigoriu/azure-docs | 624cee53409b0d1788864c4d24a321870db93bd8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/container-service/container-service-kubernetes-windows-walkthrough.md | dariagrigoriu/azure-docs | 624cee53409b0d1788864c4d24a321870db93bd8 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-12-07T06:43:08.000Z | 2021-12-19T02:20:20.000Z | ---
title: Azure Kubernetes cluster for Windows | Microsoft Docs
description: Deploy and get started with a Kubernetes cluster for Windows containers in Azure Container Service
services: container-service
documentationcenter: ''
author: dlepow
manager: timlt
editor: ''
tags: acs, azure-container-service, kubernetes
keywords: ''
ms.assetid:
ms.service: container-service
ms.devlang: na
ms.topic: get-started-article
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 03/20/2017
ms.author: danlep
ms.custom: H1Hack27Feb2017
---
# Get started with Kubernetes and Windows containers in Container Service
This article shows how to create a Kubernetes cluster in Azure Container Service that contains Windows nodes to run Windows containers.
> [!NOTE]
> Support for Windows containers with Kubernetes in Azure Container Service is in preview. Use the Azure portal or a Resource Manager template to create a Kubernetes cluster with Windows nodes. This feature isn't currently supported with the Azure CLI 2.0.
>
The following image shows the architecture of a Kubernetes cluster in Azure Container Service with one Linux master node and two Windows agent nodes.
* The Linux master serves the Kubernetes REST API and is accessible by SSH on port 22 or `kubectl` on port 443.
* The Windows agent nodes are grouped in an Azure availability set
and run your containers. The Windows nodes can be accessed through an RDP SSH tunnel via the master node. Azure load balancer rules are dynamically added to the cluster depending on exposed services.

All VMs are in the same private virtual network and are fully accessible to each other. All VMs run a kubelet, Docker, and a proxy.
## Prerequisites
* **SSH RSA public key**: When deploying through the portal or one of the Azure quickstart templates, you need to provide an SSH RSA public key for authentication against Azure Container Service virtual machines. To create Secure Shell (SSH) RSA keys, see the [OS X and Linux](../virtual-machines/virtual-machines-linux-mac-create-ssh-keys.md) or [Windows](../virtual-machines/virtual-machines-linux-ssh-from-windows.md) guidance.
* **Service principal client ID and secret**: For more information and guidance, see [About the service principal for a Kubernetes cluster](container-service-kubernetes-service-principal.md).
## Create the cluster
You can use the Azure portal to [create a Kubernetes cluster](container-service-deployment.md#create-a-cluster-by-using-the-azure-portal) with Windows agent nodes. Note the following settings when creating the cluster:
* On the **Basics** blade, in **Orchestrator**, select **Kubernetes**.

* On the **Master configuration** blade, enter user credentials and service principal credentials for the Linux master nodes. Choose 1, 3, or 5 masters.
* On the **Agent configuration** blade, in **Operating system**, select **Windows (preview)**. Enter administrator credentials for the Windows agent nodes.

For more details, see [Deploy an Azure Container Service cluster](container-service-deployment.md).
## Connect to the cluster
Use the `kubectl` command-line tool to connect from your local computer to the master node of the Kubernetes cluster. For steps to install and set up `kubectl`, see [Connect to an Azure Container Service cluster](container-service-connect.md#connect-to-a-kubernetes-cluster). You can use `kubectl` commands to access the Kubernetes web UI and to create and manage Windows container workloads.
## Create your first Kubernetes service
After creating the cluster and connecting with `kubectl`, you can try starting a basic Windows web app and expose it to the internet. In this example, you specify the container resources using a YAML file, and then create it using `kubctl apply`.
1. To see a list of your nodes, type `kubectl get nodes`. If you want full details of the nodes, type:
```
kubectl get nodes -o yaml
```
2. Create a file named `simpleweb.yaml` and copy the following. This file sets up a web app using the Windows Server 2016 Server Core base OS image from [Docker Hub](https://hub.docker.com/r/microsoft/windowsservercore/).
```yaml
apiVersion: v1
kind: Service
metadata:
name: win-webserver
labels:
app: win-webserver
spec:
ports:
# the port that this service should serve on
- port: 80
targetPort: 80
selector:
app: win-webserver
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: win-webserver
name: win-webserver
spec:
replicas: 1
template:
metadata:
labels:
app: win-webserver
name: win-webserver
spec:
containers:
- name: windowswebserver
image: microsoft/windowsservercore
command:
- powershell.exe
- -command
- "<#code used from https://gist.github.com/wagnerandrade/5424431#> ; $$listener = New-Object System.Net.HttpListener ; $$listener.Prefixes.Add('http://*:80/') ; $$listener.Start() ; $$callerCounts = @{} ; Write-Host('Listening at http://*:80/') ; while ($$listener.IsListening) { ;$$context = $$listener.GetContext() ;$$requestUrl = $$context.Request.Url ;$$clientIP = $$context.Request.RemoteEndPoint.Address ;$$response = $$context.Response ;Write-Host '' ;Write-Host('> {0}' -f $$requestUrl) ; ;$$count = 1 ;$$k=$$callerCounts.Get_Item($$clientIP) ;if ($$k -ne $$null) { $$count += $$k } ;$$callerCounts.Set_Item($$clientIP, $$count) ;$$header='<html><body><H1>Windows Container Web Server</H1>' ;$$callerCountsString='' ;$$callerCounts.Keys | % { $$callerCountsString+='<p>IP {0} callerCount {1} ' -f $$_,$$callerCounts.Item($$_) } ;$$footer='</body></html>' ;$$content='{0}{1}{2}' -f $$header,$$callerCountsString,$$footer ;Write-Output $$content ;$$buffer = [System.Text.Encoding]::UTF8.GetBytes($$content) ;$$response.ContentLength64 = $$buffer.Length ;$$response.OutputStream.Write($$buffer, 0, $$buffer.Length) ;$$response.Close() ;$$responseStatus = $$response.StatusCode ;Write-Host('< {0}' -f $$responseStatus) } ; "
nodeSelector:
beta.kubernetes.io/os: windows
```
> [!NOTE]
> The configuration includes `type: LoadBalancer`. This setting causes the service to be exposed to the internet through an Azure load balancer. For more information, see [Load balance containers in a Kubernetes cluster in Azure Container Service](container-service-kubernetes-load-balancing.md).
>
## Start the application
1. To start the application, type:
```
kubectl apply -f simpleweb.yaml
```
2. To verify the deployment of the service (which takes about 30 seconds), type:
```
kubectl get pods
```
3. After the service is running, to see the internal and external IP addresses of the service, type:
```
kubectl get svc
```

The addition of the external IP address takes several minutes. Before the load balancer configures the external address, it appears as `<pending>`.
4. After the external IP address is available, you can reach the service in your web browser.

## Access the Windows nodes
Windows nodes can be accessed from a local Windows computer through Remote Desktop Connection. We recommend using an RDP SSH tunnel via the master node.
There are multiple options for creating SSH tunnels on Windows. This section describes how to use PuTTY to create the tunnel.
1. [Download PuTTY](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) to your Windows system.
2. Run the application.
3. Enter a host name that is composed of the cluster admin user name and the public DNS name of the first master in the cluster. The **Host Name** looks similar to `adminuser@PublicDNSName`. Enter 22 for the **Port**.

4. Select **SSH > Auth**. Add a path to your private key file (.ppk format) for authentication. You can use a tool such as [PuTTYgen](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) to generate this file from the SSH key used to create the cluster.

5. Select **SSH > Tunnels** and configure the forwarded ports. Since your local Windows machine is already using port 3389, it is recommended to use the following settings to reach Windows node 0 and Windows node 1. (Continue this pattern for additional Windows nodes.)
**Windows Node 0**
* **Source Port:** 3390
* **Destination:** 10.240.245.5:3389
**Windows Node 1**
* **Source Port:** 3391
* **Destination:** 10.240.245.6:3389

6. When you're finished, click **Session > Save** to save the connection configuration.
7. To connect to the PuTTY session, click **Open**. Complete the connection to the master node.
8. Start Remote Desktop Connection. To connect to the first Windows node, for **Computer**, specify `localhost:3390`, and click **Connect**. (To connect to the second, specify `localhost:3391`, and so on.) To complete your connection, provide the local Windows administrator password you configured during deployment.
## Next steps
Here are recommended links to learn more about Kubernetes:
* [Kubernetes Bootcamp](https://kubernetesbootcamp.github.io/kubernetes-bootcamp/index.html) - shows you how to deploy, scale, update, and debug containerized applications.
* [Kubernetes User Guide](http://kubernetes.io/docs/user-guide/) - provides information on running programs in an existing Kubernetes cluster.
* [Kubernetes Examples](https://github.com/kubernetes/kubernetes/tree/master/examples) - provides examples on how to run real applications with Kubernetes.
---
ms.openlocfilehash: c400856546142353a7294a03fce6bbff1c258cc0
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 03/29/2021
ms.locfileid: "95563988"
---
In Azure Active Directory (Azure AD), the term **app provisioning** refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../articles/active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../articles/active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.

This feature lets you:
- **Automate provisioning:** Automatically create new accounts in the right systems for new people when they join your team or organization.
- **Automate deprovisioning:** Automatically deactivate accounts in the right systems when people leave the team or organization.
- **Synchronize data between systems:** Ensure that the identities in your apps and systems are kept up to date based on changes in the directory or your human resources system.
- **Provision groups:** Provision groups to applications that support them.
- **Govern access:** Monitor and audit who has been provisioned into your applications.
- **Seamlessly deploy in brownfield scenarios:** Match existing identities between systems to allow for easy integration, even when users already exist in the target system.
- **Use rich customization:** Take advantage of customizable attribute mappings that define what user data should flow from the source system to the target system.
- **Get alerts for critical events:** The provisioning service provides alerts for critical events and supports Log Analytics integration, where you can define custom alerts to suit your business needs.
## <a name="benefits-of-automatic-provisioning"></a>Vantaggi del provisioning automatico
Dal momento che il numero di applicazioni usate nelle organizzazioni moderne continua a crescere, gli amministratori IT si trovano a dover gestire gli accessi su larga scala. Con l'aiuto di standard come SAML (Security Assertions Markup Language) o OIDC (Open ID Connect) possono configurare rapidamente il Single Sign-On (SSO), ma occorre anche effettuare il provisioning degli utenti nell'app per consentire loro l'accesso. Molti amministratori effettuano il provisioning creando manualmente ogni account utente o caricando file CSV ogni settimana, ma questi processi sono dispendiosi in termini di tempo e denaro, oltre a essere soggetti a errori. Per automatizzare il provisioning sono state adottate soluzioni come SAML Just-In-Time (JIT), ma le aziende necessitano anche di una soluzione per effettuare il deprovisioning degli utenti che lasciano l'organizzazione o non hanno più bisogno di accedere a determinate app per via di un cambio di ruolo.
Some common motivations for using automatic provisioning include:
- Maximizing the efficiency and accuracy of provisioning processes.
- Saving on costs associated with hosting and maintaining custom-developed provisioning solutions and scripts.
- Securing your organization by instantly removing users' identities from key SaaS apps when they leave the organization.
- Easily importing a large number of users into a particular SaaS application or system.
- Having a single set of policies to determine who is provisioned and who can sign in to an app.
Azure AD user provisioning can help you achieve these goals. To learn more about how enterprise customers use Azure AD user provisioning, see the [ASOS case study](https://aka.ms/asoscasestudy). The following video provides an overview of user provisioning in Azure AD:
> [!VIDEO https://www.youtube.com/embed/_ZjARPpI6NI]
## <a name="what-applications-and-systems-can-i-use-with-azure-ad-automatic-user-provisioning"></a>Quali applicazioni e sistemi è possibile usare con il provisioning utenti automatico di Azure AD?
Azure AD offre il supporto preintegrato per numerosi sistemi di risorse umane e app SaaS comuni, oltre al supporto generico per le app che implementano parti specifiche dello [standard SCIM 2.0](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/Provisioning-with-SCIM-getting-started/ba-p/880010).
* **Applicazioni preintegrate (app SaaS della raccolta)** . È possibile trovare tutte le applicazioni per cui Azure AD supporta un connettore di provisioning preintegrato nell'[elenco delle esercitazioni sulle applicazioni per il provisioning utenti](../articles/active-directory/saas-apps/tutorial-list.md). Le applicazioni preintegrate elencate nella raccolta usano in genere le API di gestione utenti basate su SCIM 2.0 per il provisioning.

  If you want to request a new application for provisioning, you can [request that your application be integrated with the app gallery](../articles/active-directory/develop/v2-howto-app-gallery-listing.md). For a provisioning request to be accepted, the application must have a SCIM-compliant endpoint. Ask the application vendor to follow the SCIM standard so the app can be onboarded to the platform quickly.
* **Applications that support SCIM 2.0**. For information on how to generically connect applications that implement SCIM 2.0-based user management APIs, see [Build a SCIM endpoint and configure user provisioning](../articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md).
## <a name="what-is-system-for-cross-domain-identity-management-scim"></a>Che cos'è SCIM (System for Cross-domain Identity Management)?
Per automatizzare il provisioning e il deprovisioning, le app espongono API proprietarie per utenti e gruppi. Tuttavia, chiunque abbia provato a gestire gli utenti in più app sa che ogni app tenta di eseguire le stesse semplici azioni, come la creazione o l'aggiornamento di utenti, l'aggiunta di utenti a gruppi o il deprovisioning degli utenti. Tutte queste semplici azioni vengono però implementate in modo leggermente diverso, usando percorsi di endpoint diversi, metodi diversi per specificare le informazioni sugli utenti e uno schema diverso per rappresentare ogni elemento di informazioni.
Per risolvere questi problemi, la specifica SCIM fornisce uno schema utente comune che facilita il provisioning e il deprovisioning degli utenti nelle app. SCIM sta diventando lo standard più comune per il provisioning e, in combinazione con standard di federazione come OpenID Connect o SAML, offre agli amministratori una soluzione end-to-end basata su standard per la gestione degli accessi.
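For reference, SCIM exchanges users as plain JSON documents. The following is a minimal sketch of a user in the SCIM 2.0 core schema; all of the values shown are illustrative:

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "adele.vance@contoso.com",
  "name": { "givenName": "Adele", "familyName": "Vance" },
  "emails": [
    { "value": "adele.vance@contoso.com", "type": "work", "primary": true }
  ],
  "active": true
}
```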
For detailed guidance on developing a SCIM endpoint to automate the provisioning and deprovisioning of users and groups to an application, see [Build a SCIM endpoint and configure user provisioning](../articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md). For pre-integrated applications in the gallery (Slack, Azure Databricks, Snowflake, and others), you can skip the developer documentation and use the tutorials provided [here](../articles/active-directory/saas-apps/tutorial-list.md).
## <a name="manual-vs-automatic-provisioning"></a>Provisioning manuale o automatico
Le applicazioni nella raccolta di Azure AD supportano una delle due modalità di provisioning seguenti:
* Il provisioning **manuale** viene effettuato quando non è ancora disponibile un connettore di provisioning automatico di Azure AD per l'app. Gli account utente devono essere creati manualmente, ad esempio aggiungendo gli utenti direttamente nel portale di amministrazione dell'app o caricando un foglio di calcolo con i dettagli degli account utente. Per determinare i meccanismi disponibili, consultare la documentazione o contattare lo sviluppatore dell'app.
* **Automatico** significa che è stato sviluppato un connettore di provisioning Azure AD per l'applicazione. È consigliabile seguire l'esercitazione di configurazione specifica per il provisioning dell'applicazione. Le esercitazioni per le applicazioni si trovano nell'[Elenco di esercitazioni sulla procedura di integrazione delle applicazioni SaaS con Azure Active Directory](../articles/active-directory/saas-apps/tutorial-list.md).
Nella raccolta di Azure AD le applicazioni che supportano il provisioning automatico sono identificate da un'icona **Provisioning**. Passare alla nuova esperienza di anteprima della raccolta per vedere queste icone: nel banner nella parte superiore della **pagina Aggiungi applicazione** selezionare il collegamento **Fare clic qui per provare la nuova raccolta di app migliorata**.

The provisioning mode supported by an application is also visible on the **Provisioning** tab after you add the application to **Enterprise apps**.
## <a name="how-do-i-set-up-automatic-provisioning-to-an-application"></a>Come è possibile configurare il provisioning automatico in un'applicazione?
Per le applicazioni preintegrate elencate nella raccolta sono disponibili istruzioni dettagliate per la configurazione del provisioning automatico. Vedere l'[elenco delle esercitazioni per le app della raccolta integrate](../articles/active-directory/saas-apps/tutorial-list.md). Il video seguente illustra come configurare il provisioning utenti automatico per SalesForce.
> [!VIDEO https://www.youtube.com/embed/pKzyts6kfrw]
For other applications that support SCIM 2.0, follow the steps in the article [Build a SCIM endpoint and configure user provisioning](../articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md).
# Gargots
Template code used for the Gargots S.A. demo on Composer Playground.
<h2 align="center">
<br>
<a href="https://www.facebook.com/profile.php?id=100000119067590"><img src="https://scontent-hkg3-1.xx.fbcdn.net/v/t1.0-9/22308558_2046804465333502_5925267838902328994_n.jpg?_nc_cat=105&oh=e5cd5610e6bdb909057548d833ab9388&oe=5C88A3D7" alt="A Brief Product Description of Chirmi Xi" width=230"></a>
<br>
<br>
A Brief Product Description of Chirmi Xi
<br>
</h2>
<p align="center">
<a href="http://makeapullrequest.com">
<img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square" alt="PRs Welcome">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/license-MIT-blue.svg?style=flat-square" alt="License MIT">
</a>
</p>
## Introduction
This repository was created with the intention of helping the targeted customer to get familiar with the product in use. It is definitely not a demand, but hopefully a helpful guide for future studies. All requests are welcome.
---
## Table of Contents
1. **[Basic Information](#1-basic-information)**
2. **[Education :notebook_with_decorative_cover:](#2-education)**
3. **[Product Features](#3-product-features)**
4. **[Server Configuration](#4-server-configuration)**
---
### 1. Basic Information
* 习惠清 (Chirmi Xi)
* 1997.12.16
* Sagittarius
* Place of origin: Financial Street, Xicheng District, Beijing
**[⬆ back to top](#table-of-contents)**
---
### 2. Education
* Beijing No. 2 Experimental Primary School
* The Experimental High School Attached to Beijing Normal University :arrow_right: The Experimental High School Attached to Beijing Normal University
* UCLA
**[⬆ back to top](#table-of-contents)**
---
### 3. Product Features
* Warm and caring attention
  * Proactively using this skill for the first time; customers are asked for a little extra patience
* Ill-timed small talk
  * Vote down. Learning and improving every day (*•ω•)
* Offline value-added services
  * Available anytime, to suit the customer's every mood
**[⬆ back to top](#table-of-contents)**
---
### 4. Server Configuration
* The client seems to have successfully connected to the server, for now
* The address of another host that may be switched to is 10905 Ohio Ave, Los Angeles, California, 90024
**[⬆ back to top](#table-of-contents)**
---
---
external help file: Microsoft.WindowsAzure.Commands.Storage.dll-Help.xml
Module Name: Azure.Storage
ms.assetid: 383402B2-6B7C-41AB-AFF9-36C86156B0A9
online version: https://docs.microsoft.com/en-us/powershell/module/azure.storage/new-azurestoragecontext
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/preview/src/Storage/Commands.Storage/help/New-AzureStorageContext.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/preview/src/Storage/Commands.Storage/help/New-AzureStorageContext.md
ms.openlocfilehash: 9de6b2b52205bdf80de9c57e3e338f4b7216c5ee
ms.sourcegitcommit: f599b50d5e980197d1fca769378df90a842b42a1
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 08/20/2020
ms.locfileid: "93591515"
---
# New-AzureStorageContext
## SYNOPSIS
Creates an Azure Storage context.
[!INCLUDE [migrate-to-az-banner](../../includes/migrate-to-az-banner.md)]
## SYNTAX
### OAuthAccount (Default)
```
New-AzureStorageContext [-StorageAccountName] <String> [-UseConnectedAccount] [-Protocol <String>]
[-Endpoint <String>] [<CommonParameters>]
```
### AccountNameAndKey
```
New-AzureStorageContext [-StorageAccountName] <String> [-StorageAccountKey] <String> [-Protocol <String>]
[-Endpoint <String>] [<CommonParameters>]
```
### AccountNameAndKeyEnvironment
```
New-AzureStorageContext [-StorageAccountName] <String> [-StorageAccountKey] <String> [-Protocol <String>]
-Environment <String> [<CommonParameters>]
```
### AnonymousAccount
```
New-AzureStorageContext [-StorageAccountName] <String> [-Anonymous] [-Protocol <String>] [-Endpoint <String>]
[<CommonParameters>]
```
### AnonymousAccountEnvironment
```
New-AzureStorageContext [-StorageAccountName] <String> [-Anonymous] [-Protocol <String>] -Environment <String>
[<CommonParameters>]
```
### SasToken
```
New-AzureStorageContext [-StorageAccountName] <String> -SasToken <String> [-Protocol <String>]
[-Endpoint <String>] [<CommonParameters>]
```
### SasTokenWithAzureEnvironment
```
New-AzureStorageContext [-StorageAccountName] <String> -SasToken <String> -Environment <String>
[<CommonParameters>]
```
### OAuthAccountEnvironment
```
New-AzureStorageContext [-StorageAccountName] <String> [-UseConnectedAccount] [-Protocol <String>]
-Environment <String> [<CommonParameters>]
```
### ConnectionString
```
New-AzureStorageContext -ConnectionString <String> [<CommonParameters>]
```
### LocalDevelopment
```
New-AzureStorageContext [-Local] [<CommonParameters>]
```
## DESCRIPTION
The **New-AzureStorageContext** cmdlet creates an Azure Storage context.
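A context object is typically stored in a variable and reused across data-plane cmdlets through their **Context** parameter, as in this minimal sketch (the account name and key are placeholders):

```
$Context = New-AzureStorageContext -StorageAccountName "ContosoGeneral" -StorageAccountKey "<account key>"
Get-AzureStorageContainer -Context $Context
```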
## EXAMPLES
### Example 1: Create a context by specifying a storage account name and key
```
C:\PS>New-AzureStorageContext -StorageAccountName "ContosoGeneral" -StorageAccountKey "< Storage Key for ContosoGeneral ends with == >"
```
This command creates a context for the account named ContosoGeneral that uses the specified key.
### Example 2: Create a context by specifying a connection string
```
C:\PS>New-AzureStorageContext -ConnectionString "DefaultEndpointsProtocol=https;AccountName=ContosoGeneral;AccountKey=< Storage Key for ContosoGeneral ends with == >;"
```
This command creates a context based on the specified connection string for the account ContosoGeneral.
### Example 3: Create a context for an anonymous storage account
```
C:\PS>New-AzureStorageContext -StorageAccountName "ContosoGeneral" -Anonymous -Protocol "http"
```
This command creates a context for anonymous use for the account named ContosoGeneral.
The command specifies HTTP as the connection protocol.
### Example 4: Create a context by using the local development storage account
```
C:\PS>New-AzureStorageContext -Local
```
This command creates a context by using the local development storage account.
The command specifies the *Local* parameter.
### Example 5: Get the container for the local developer storage account
```
C:\PS>New-AzureStorageContext -Local | Get-AzureStorageContainer
```
This command creates a context by using the local development storage account, and then passes the new context to the **Get-AzureStorageContainer** cmdlet by using the pipeline operator.
The command gets the Azure Storage container for the local developer storage account.
### Example 6: Get multiple containers
```
C:\PS>$Context01 = New-AzureStorageContext -Local
PS C:\> $Context02 = New-AzureStorageContext -StorageAccountName "ContosoGeneral" -StorageAccountKey "< Storage Key for ContosoGeneral ends with == >"
PS C:\> ($Context01, $Context02) | Get-AzureStorageContainer
```
The first command creates a context by using the local development storage account, and then stores that context in the $Context01 variable.
The second command creates a context for the account named ContosoGeneral that uses the specified key, and then stores that context in the $Context02 variable.
The final command gets the containers for the contexts stored in $Context01 and $Context02 by using **Get-AzureStorageContainer**.
### Example 7: Create a context with an endpoint
```
C:\PS>New-AzureStorageContext -StorageAccountName "ContosoGeneral" -StorageAccountKey "< Storage Key for ContosoGeneral ends with == >" -Endpoint "contosoaccount.core.windows.net"
```
This command creates an Azure Storage context that has the specified storage endpoint.
The command creates a context for the account named ContosoGeneral that uses the specified key.
### Example 8: Create a context with a specified environment
```
C:\PS>New-AzureStorageContext -StorageAccountName "ContosoGeneral" -StorageAccountKey "< Storage Key for ContosoGeneral ends with == >" -Environment "AzureChinaCloud"
```
This command creates an Azure Storage context that has the specified Azure environment.
The command creates a context for the account named ContosoGeneral that uses the specified key.
### Example 9: Create a context by using an SAS token
```
C:\PS>$SasToken = New-AzureStorageContainerSASToken -Name "ContosoMain" -Permission "rad"
PS C:\> $Context = New-AzureStorageContext -StorageAccountName "ContosoGeneral" -SasToken $SasToken
PS C:\> $Context | Get-AzureStorageBlob -Container "ContosoMain"
```
The first command generates an SAS token by using the **New-AzureStorageContainerSASToken** cmdlet for the container named ContosoMain, and then stores that token in the $SasToken variable.
That token is for read, add, update, and delete permissions.
The second command creates a context for the account named ContosoGeneral that uses the SAS token stored in $SasToken, and then stores that context in the $Context variable.
The final command lists all the blobs associated with the container named ContosoMain by using the context stored in $Context.
### Example 10: Create a context by using OAuth authentication
```
C:\PS>Connect-AzureRmAccount
C:\PS> $Context = New-AzureStorageContext -StorageAccountName "myaccountname" -UseConnectedAccount
```
This command creates a context by using OAuth authentication.
## PARAMETERS
### -Anonymous
Indicates that this cmdlet creates an Azure Storage context for anonymous logon.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: AnonymousAccount, AnonymousAccountEnvironment
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -ConnectionString
Specifies a connection string for the Azure Storage context.
```yaml
Type: System.String
Parameter Sets: ConnectionString
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Endpoint
Specifies the endpoint for the Azure Storage context.
```yaml
Type: System.String
Parameter Sets: OAuthAccount, AccountNameAndKey, AnonymousAccount, SasToken
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Environment
Specifies the Azure environment.
The acceptable values for this parameter are: AzureCloud and AzureChinaCloud.
For more information, type `Get-Help Get-AzureEnvironment`.
```yaml
Type: System.String
Parameter Sets: AccountNameAndKeyEnvironment, AnonymousAccountEnvironment
Aliases: Name, EnvironmentName
Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
```yaml
Type: System.String
Parameter Sets: SasTokenWithAzureEnvironment, OAuthAccountEnvironment
Aliases: Name, EnvironmentName
Required: True
Position: Named
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Local
Indicates that this cmdlet creates a context by using the local development storage account.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: LocalDevelopment
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Protocol
Transfer protocol (https/http).
```yaml
Type: System.String
Parameter Sets: OAuthAccount, AccountNameAndKey, AccountNameAndKeyEnvironment, AnonymousAccount, AnonymousAccountEnvironment, SasToken, OAuthAccountEnvironment
Aliases:
Accepted values: Http, Https
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -SasToken
Specifies a Shared Access Signature (SAS) token for the context.
```yaml
Type: System.String
Parameter Sets: SasToken, SasTokenWithAzureEnvironment
Aliases:
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -StorageAccountKey
Specifies an Azure Storage account key.
This cmdlet creates a context for the key that this parameter specifies.
```yaml
Type: System.String
Parameter Sets: AccountNameAndKey, AccountNameAndKeyEnvironment
Aliases:
Required: True
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -StorageAccountName
Specifies an Azure Storage account name.
This cmdlet creates a context for the account that this parameter specifies.
```yaml
Type: System.String
Parameter Sets: OAuthAccount, AccountNameAndKey, AccountNameAndKeyEnvironment, AnonymousAccount, AnonymousAccountEnvironment, SasToken, SasTokenWithAzureEnvironment, OAuthAccountEnvironment
Aliases:
Required: True
Position: 0
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -UseConnectedAccount
Indicates that this cmdlet creates an Azure Storage context with OAuth authentication.
The cmdlet uses OAuth authentication by default, when no other authentication is specified.
```yaml
Type: SwitchParameter
Parameter Sets: OAuthAccount, OAuthAccountEnvironment
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### System.String
## OUTPUTS
### Microsoft.WindowsAzure.Commands.Storage.AzureStorageContext
## NOTES
## RELATED LINKS
[Get-AzureStorageBlob](./Get-AzureStorageBlob.md)
[New-AzureStorageContainerSASToken](./New-AzureStorageContainerSASToken.md)
# inquirer-autocomplete-prompt
[](https://greenkeeper.io/)
Autocomplete prompt for [inquirer](https://github.com/SBoudrias/Inquirer.js)
[](http://travis-ci.org/mokkabonna/inquirer-autocomplete-prompt)
## Installation
```
npm install --save inquirer-autocomplete-prompt
```
## Usage
This prompt is anonymous, meaning you can register this prompt with the type name you please:
```javascript
inquirer.registerPrompt('autocomplete', require('inquirer-autocomplete-prompt'));
inquirer.prompt({
type: 'autocomplete',
...
})
```
Change `autocomplete` to whatever you might prefer.
### Options
> **Note:** _allowed options written inside square brackets (`[]`) are optional. Others are required._
`type`, `name`, `message`, `source`[, `pageSize`, `filter`, `when`, `suggestOnly`, `validate`]
See [inquirer](https://github.com/SBoudrias/Inquirer.js) readme for meaning of all except **source** and **suggestOnly**.
**Source** will be called with the previous answers object and the current user input each time the user types; it **must** return a promise.
**Source** will be called once at first, before the user types anything, with **undefined** as the value. If a new search is triggered by user input, it maintains the correct order, meaning that if the first call completes after the second starts, the results of the first call are never displayed.
**suggestOnly** defaults to **false**. Setting it to true turns the input into a normal text input, meaning that pressing enter selects whatever value you currently have, and pressing tab autocompletes the currently selected value in the list. This way you can accept manual input instead of forcing a selection from the list.
**validate** is only active when **suggestOnly** is set to **true**. It behaves like validate for the input prompt.
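As an illustrative sketch (the prompt name, choices, and messages are made up), **suggestOnly** and **validate** can be combined like this:

```javascript
inquirer.registerPrompt('autocomplete', require('inquirer-autocomplete-prompt'));

inquirer.prompt([{
  type: 'autocomplete',
  name: 'tag',
  message: 'Type or pick a tag',
  suggestOnly: true,
  // validate is only checked because suggestOnly is true
  validate: function(input) {
    return input ? true : 'Type something!';
  },
  // source must return a promise resolving to the matching choices
  source: function(answersSoFar, input) {
    var choices = ['fix', 'feat', 'docs', 'chore'];
    return Promise.resolve(choices.filter(function(choice) {
      return !input || choice.indexOf(input) !== -1;
    }));
  }
}]).then(function(answers) {
  console.log(answers.tag);
});
```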
#### Example
```javascript
inquirer.registerPrompt('autocomplete', require('inquirer-autocomplete-prompt'));
inquirer.prompt([{
type: 'autocomplete',
name: 'from',
message: 'Select a state to travel from',
source: function(answersSoFar, input) {
return myApi.searchStates(input);
}
}]).then(function(answers) {
//etc
});
```
See also [example.js](https://github.com/mokkabonna/inquirer-autocomplete-prompt/blob/master/example.js) for a working example.
I recommend using this package with [fuzzy](https://www.npmjs.com/package/fuzzy) if you want fuzzy search. Again, see the example for a demonstration of this.

## Credits
[Martin Hansen](https://github.com/mokkabonna/)
## License
ISC
# web-base
[![twitter][1i]][1p]
[![license][2i]][2p]
A small web base for/by [Alejandro Baez][tw].
### DESCRIPTION
The project is a base to do a full webapp using the following tools:
* [Brunch] for the build
* [pug] for the templates of the site
* [elm] for the logic of the frontend/backend.
* [less] for the styling.
* [yarn] for less headaches dealing with [npm].
### USAGE
You would use the tools listed above for all the base requirements you need. To get started, first clone the repo to your chosen location.
``` fish
cd <location you want>
hg clone bb:a_baez/web-base
```
Then you need to run [yarn] or [npm] to build the directory.
``` fish
yarn
# if you don't have yarn
npm install
```
Then, whenever you want to build your repository with [Brunch], do the following:
``` fish
yarn build
# or if you want to streamline things, use watch
yarn watch
```
Finally, follow the directory tree under `src` and you should be good to go.
The directory tree is as follows:
* assets/ -- all assets to be directly exported to your web project.
* **index.pug** -- the main source location to connect elm,pug,less and any other things together.
* elm/ -- where all your code of elm should live.
* **Main.elm** -- your entire project should source into this module as its base.
* includes/ -- all templates go here (aka: pug).
* **head.pug** -- load all `<head>` requirements.
* **loader.pug** -- embeds elm into your webapp.
* (_optional_) **header.pug** -- the header of your webapp.
* (_optional_) **footer.pug** -- the footer of your webapp.
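For orientation, a minimal **Main.elm** (matching the Elm 0.18 era of this stack; purely an illustrative sketch) could look like:

```elm
module Main exposing (main)

-- Minimal entry point; the webapp's logic grows from this module.

import Html exposing (Html, text)


main : Html msg
main =
    text "web-base"
```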
[tw]: https://twitter.com/a_baez
[Brunch]: http://brunch.io
[pug]: https://pugjs.org/api/getting-started.html
[elm]: http://elm-lang.org/
[less]: http://lesscss.org/
[npm]: https://npmjs.org
[yarn]: https://yarnpkg.com
[1i]: https://img.shields.io/badge/twitter-a_baez-blue.svg
[1p]: https://twitter.com/a_baez
[2i]: https://img.shields.io/badge/license-MIT-green.svg
[2p]: ./LICENSE.md
---
layout: post
leader: Pat
title: Supporting lay practice
date: 2015-05-20 19:30:00
meeting_blurb: We will be referring to <i>The Buddha's Words – An Anthology of Discourses from the Pali Canon</i> by Bhikku Bodhi. The full text is available to read online at <a href="http://www.pacificbuddha.org/wp-content/uploads/2014/01/In-the-Buddhas-Words.pdf">www.pacificbuddha.org</a>.
keep_blurb_after_meeting: yes
allow_comments: yes
reading_snippet: Tamed, he is supreme among those who tame; At peace, he is the sage among those who bring peace; Freed, he is the chief of those who set free; Delivered, he is the best of those who deliver. – Anguttara Nikaya 4:23
---
---
title: OpenVDB
level: Adopted Projects
featured_image: stacked/color/openvdb-stacked-color.svg
layout: logos
description: Artwork for the OpenVDB project
---
---
title: Publish to IIS by importing publish settings
description: Create and import a publish profile to deploy an application to IIS from Visual Studio
ms.date: 08/27/2021
ms.topic: tutorial
helpviewer_keywords:
- deployment, publish settings
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.technology: vs-ide-deployment
ms.workload:
- multiple
ms.openlocfilehash: 5c7fce7a5063ef27c70ae263affe60a7baafe98c
ms.sourcegitcommit: b12a38744db371d2894769ecf305585f9577792f
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/13/2021
ms.locfileid: "126686187"
---
# <a name="publish-an-application-to-iis-by-importing-publish-settings-in-visual-studio"></a>Publicar um aplicativo no IIS importando configurações de publicação no Visual Studio
É possível usar a ferramenta **Publicar** para importar configurações de publicação e, em seguida, implantar seu aplicativo. Neste artigo, usamos configurações de publicação para o IIS, mas é possível usar etapas semelhantes para importar configurações de publicação do [Serviço de Aplicativo do Azure](../deployment/tutorial-import-publish-settings-azure.md). Em alguns cenários, o uso de um perfil de configurações de publicação pode ser mais rápido do que configurar manualmente a implantação no IIS para cada instalação do Visual Studio.
Essas etapas se aplicam a aplicativos ASP.NET, ASP.NET Core e .NET Core no Visual Studio.
Neste tutorial, você irá:
> [!div class="checklist"]
> * Configurar o IIS para poder gerar um arquivo de configurações de publicação
> * Criar um arquivo de configurações de publicação
> * Importar o arquivo de configurações de publicação para o Visual Studio
> * Implantar o aplicativo no IIS
Um arquivo de configurações de publicação (*\* . publishsettings*) é diferente de um perfil de publicação (*\* . pubxml*) criado no Visual Studio. Um arquivo de configurações de publicação é criado pelo IIS ou pelo Serviço de Aplicativo do Azure ou pode ser criado manualmente e, em seguida, pode ser importado para o Visual Studio.
> [!NOTE]
> se você só precisa copiar um perfil de publicação Visual Studio ( \* arquivo. pubxml) de uma instalação do Visual Studio para outro, você pode encontrar o perfil de publicação, *\<profilename\> . pubxml*, na pasta *\\<projectname \> \Properties\PublishProfiles* para tipos de projeto gerenciados. Para sites, examine embaixo da pasta *\App_Data*. Os perfis de publicação são arquivos XML do MSBuild.
## <a name="prerequisites"></a>Pré-requisitos
::: moniker range=">=vs-2019"
* You must have Visual Studio 2019 installed, along with the **ASP.NET and web development** workload.
  If you haven't already installed Visual Studio, go to the [Visual Studio downloads](https://visualstudio.microsoft.com/downloads/) page to install it for free.
::: moniker-end
::: moniker range="vs-2017"
* You must have Visual Studio 2017 installed, along with the **ASP.NET and web development** workload.
  If you haven't already installed Visual Studio, go to the [Visual Studio downloads](https://visualstudio.microsoft.com/downloads/) page to install it for free.
::: moniker-end
* On the server, you must be running Windows Server 2012, Windows Server 2016, or Windows Server 2019, and you must have the [IIS Web Server role](/iis/get-started/whats-new-in-iis-8/iis-80-using-aspnet-35-and-aspnet-45#solution) correctly installed (required to generate the publish settings file (*\*.publishsettings*)). ASP.NET 4.5 or ASP.NET Core must also be installed on the server.
  * To configure ASP.NET 4.5, see [IIS 8.0 Using ASP.NET 3.5 and ASP.NET 4.5](/iis/get-started/whats-new-in-iis-8/iis-80-using-aspnet-35-and-aspnet-45).
  * To configure ASP.NET Core, see [Host ASP.NET Core on Windows with IIS](/aspnet/core/publishing/iis?tabs=aspnetcore2x#iis-configuration). For ASP.NET Core, make sure you set the Application Pool to **No Managed Code**, as described in the article.
## <a name="create-a-new-aspnet-project-in-visual-studio"></a>Create a new ASP.NET project in Visual Studio
1. On the computer running Visual Studio, create a new project.
   Choose the correct template. In this example, choose **ASP.NET Web Application (.NET Framework)** or (for C# only) **ASP.NET Core Web Application**, and then select **OK**.
   If you don't see the specified project templates, go to the **Open Visual Studio Installer** link in the left pane of the **New Project** dialog box. The Visual Studio Installer launches. Install the **ASP.NET and web development** workload.
   The project template you choose (ASP.NET or ASP.NET Core) must match the version of ASP.NET installed on the web server.
1. Choose **MVC** (.NET Framework) or **Web Application (Model-View-Controller)** (for .NET Core), make sure that **No Authentication** is selected, and then select **OK**.
1. Type a name like **myWebApp** and select **OK**.
   Visual Studio creates the project.
1. Choose **Build** > **Build Solution** (or press **Ctrl**+**Shift**+**B**) to build the project.
## <a name="install-and-configure-web-deploy-on-windows-server"></a>Install and configure Web Deploy on Windows Server
[!INCLUDE [install-web-deploy-with-hosting-server](../deployment/includes/install-web-deploy-with-hosting-server.md)]
## <a name="create-the-publish-settings-file-in-iis-on-windows-server"></a>Criar o arquivo de configurações de publicação no IIS no Windows Server
[!INCLUDE [create-publish-settings-iis](../deployment/includes/create-publish-settings-iis.md)]
## <a name="import-the-publish-settings-in-visual-studio-and-deploy"></a>Importar as configurações de publicação no Visual Studio e implantar
[!INCLUDE [import-publish-settings](../deployment/includes/import-publish-settings-vs.md)]
After the app deploys successfully, it should start automatically.
## <a name="troubleshooting"></a>Troubleshooting
- If you can't connect to the host by using the host name, try the IP address instead.
- Make sure the required ports are open on the remote server.
- For ASP.NET Core, you need to make sure that the Application Pool field for the **DefaultAppPool** is set to **No Managed Code**.
- Verify that the version of ASP.NET used in your app is the same as the version installed on the server. For your app, you can view and set the version in the **Properties** page. To set the app to a different version, that version must be installed.
- If the app tried to open but you see a certificate warning, choose to trust the site. If you already closed the warning, you can edit the *.pubxml file in your project and add the following element (for testing only): `<AllowUntrustedCertificate>true</AllowUntrustedCertificate>`. A sketch of where this lands in the file follows this list.
- If the app doesn't start from Visual Studio, start the app in IIS to test that it deployed correctly.
- Check the Output window in Visual Studio for status information, and check your error messages.
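For orientation only, here is a minimal sketch of how that element sits inside a *.pubxml* profile; everything except the `AllowUntrustedCertificate` element is illustrative boilerplate:

```xml
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <!-- Test-only setting from the troubleshooting tip above -->
    <AllowUntrustedCertificate>true</AllowUntrustedCertificate>
  </PropertyGroup>
</Project>
```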
## <a name="next-steps"></a>Próximas etapas
Neste tutorial, você criou um arquivo de configurações de publicação, importou-o para o Visual Studio e implantou um aplicativo ASP.NET no IIS. Talvez você queira ter uma visão geral de outras opções de publicação no Visual Studio.
> [!div class="nextstepaction"]
> [First look at deployment](../deployment/deploying-applications-services-and-components.md)
---
title: A Guide to Query Processing for Memory-Optimized Tables | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: in-memory-oltp
ms.topic: conceptual
ms.assetid: 065296fe-6711-4837-965e-252ef6c13a0f
author: rothja
ms.author: jroth
ms.openlocfilehash: 93489e5dea295964826005e081bcffe889cb7586
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 08/04/2020
ms.locfileid: "87675869"
---
# <a name="a-guide-to-query-processing-for-memory-optimized-tables"></a>Guía del procesamiento de consultas para tablas con optimización para memoria
OLTP en memoria incluye en [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] los procedimientos almacenados compilados de forma nativa y las tablas optimizadas para memoria. Este artículo proporciona información general del procesamiento de consultas tanto para las tablas optimizadas para memoria como para los procedimientos almacenados compilados de forma nativa.
En el documento se explica cómo se compilan y ejecutan las consultas en tablas optimizadas para memoria, incluido:
- La canalización de procesamiento de consultas de [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] para las tablas basadas en disco.
- Optimización de consultas; el rol de las estadísticas en las tablas optimizadas para memoria así como instrucciones para solucionar problemas de planes de consulta no válidos.
- El uso de [!INCLUDE[tsql](../../../includes/tsql-md.md)] interpretado para tener acceso a tablas optimizadas para memoria.
- Consideraciones sobre la optimización de consultas para el acceso a tablas optimizadas para memoria.
- Compilación y procesamiento de procedimientos almacenados de forma nativa.
- Estadísticas usadas para la estimación del costo por el optimizador.
- Formas de solucionar los planes de consulta no válidos.
## <a name="example-query"></a>Consulta de ejemplo
El ejemplo siguiente se utilizará para mostrar los conceptos del procesamiento de consultas descritos en este artículo.
Consideramos dos tablas, Customer y Order. El siguiente script de [!INCLUDE[tsql](../../../includes/tsql-md.md)] contiene las definiciones de estas dos tablas y los índices asociados, en su formato basado en disco (tradicional):
```sql
CREATE TABLE dbo.[Customer] (
CustomerID nchar (5) NOT NULL PRIMARY KEY,
ContactName nvarchar (30) NOT NULL
)
GO
CREATE TABLE dbo.[Order] (
OrderID int NOT NULL PRIMARY KEY,
CustomerID nchar (5) NOT NULL,
OrderDate date NOT NULL
)
GO
CREATE INDEX IX_CustomerID ON dbo.[Order](CustomerID)
GO
CREATE INDEX IX_OrderDate ON dbo.[Order](OrderDate)
GO
```
To construct the query plans shown in this article, the two tables were populated with sample data from the Northwind sample database, which you can download from [Northwind and pubs Sample Databases for SQL Server 2000](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/northwind-pubs).
Consider the following query, which joins the Customer and Order tables and returns the ID of the order and the associated customer information:
```sql
SELECT o.OrderID, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
```
The estimated execution plan as displayed in [!INCLUDE[ssManStudioFull](../../../includes/ssmanstudiofull-md.md)] is as follows:

Query plan for the join of disk-based tables.
About this query plan:
- The rows from the Customer table are retrieved from the clustered index, which is the primary data structure and has the full table data.
- Data from the Order table is retrieved by using the nonclustered index on the CustomerID column. This index contains both the CustomerID column, which is used for the join, and the primary key column OrderID, which is returned to the user. Returning additional columns from the Order table would require lookups in the clustered index for the Order table.
- The logical operator `Inner Join` is implemented by the physical operator `Merge Join`. The other physical join types are `Nested Loops` and `Hash Join`. The `Merge Join` operator takes advantage of the fact that both indexes are sorted on the join column, CustomerID.
Consider a slight variation on this query, which returns all rows from the Order table, not only OrderID:
```sql
SELECT o.*, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
```
The estimated plan for this query is:

Query plan for the hash join of disk-based tables.
In this query, rows from the Order table are retrieved by using the clustered index. The `Hash Match` physical operator is now used for the `Inner Join`. The clustered index on Order is not sorted on CustomerID, so a `Merge Join` would require a sort operator, which would affect performance. Note the relative cost of the `Hash Match` operator (75%) compared with the cost of the `Merge Join` operator in the previous example (46%). The optimizer would have considered the `Hash Match` operator in the previous example as well, but it concluded that the `Merge Join` operator gave better performance.
## <a name="ssnoversion-query-processing-for-disk-based-tables"></a>[!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] Procesamiento de consultas para las tablas basadas en disco
El siguiente diagrama muestra el flujo de procesamiento de consultas en [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] para las consultas ad hoc:

SQL Server query processing pipeline.
In this scenario:
1. The user issues a query.
2. The parser and algebrizer construct a query tree with logical operators, based on the [!INCLUDE[tsql](../../../includes/tsql-md.md)] text submitted by the user.
3. The optimizer creates an optimized query plan containing physical operators (for example, a nested-loops join). After optimization, the plan may be stored in the plan cache. This step is bypassed if the plan cache already contains a plan for this query.
4. The query execution engine processes an interpretation of the query plan.
5. For each table scan, index seek, and index scan operator, the execution engine requests rows from the respective table and index structures from Access Methods.
6. Access Methods retrieves the rows from the data and index pages in the buffer pool, and loads pages from disk into the buffer pool as needed.
For the first example query, the execution engine requests rows in the clustered index on Customer and the nonclustered index on Order from Access Methods. Access Methods traverses the B-tree index structures to retrieve the requested rows. In this case, all rows are retrieved because the plan calls for full index scans.
## <a name="interpreted-tsql-access-to-memory-optimized-tables"></a>Interpreted [!INCLUDE[tsql](../../../includes/tsql-md.md)] Access to Memory-Optimized Tables
Ad hoc [!INCLUDE[tsql](../../../includes/tsql-md.md)] batches and stored procedures are also referred to as interpreted [!INCLUDE[tsql](../../../includes/tsql-md.md)]. Interpreted refers to the fact that the query plan is interpreted by the query execution engine for each operator in the query plan. The execution engine reads the operator and its parameters and performs the operation.
Interpreted [!INCLUDE[tsql](../../../includes/tsql-md.md)] can be used to access both memory-optimized tables and disk-based tables. The following illustration shows query processing for interpreted [!INCLUDE[tsql](../../../includes/tsql-md.md)] access to memory-optimized tables:

Query processing pipeline for interpreted Transact-SQL access to memory-optimized tables.
As the illustration shows, the query processing pipeline remains mostly unchanged:
- The parser and algebrizer construct the query tree.
- The optimizer creates the execution plan.
- The query execution engine interprets the execution plan.
The main difference from the traditional query processing pipeline (figure 2) is that rows for memory-optimized tables are not retrieved from the buffer pool using Access Methods. Instead, rows are retrieved from in-memory data structures through the In-Memory OLTP engine. Differences in the data structures cause the optimizer to pick different plans in some cases, as the following example shows.
The following [!INCLUDE[tsql](../../../includes/tsql-md.md)] script contains the memory-optimized versions of the Order and Customer tables, using hash indexes:
```sql
CREATE TABLE dbo.[Customer] (
CustomerID nchar (5) NOT NULL PRIMARY KEY NONCLUSTERED,
ContactName nvarchar (30) NOT NULL
) WITH (MEMORY_OPTIMIZED=ON)
GO
CREATE TABLE dbo.[Order] (
OrderID int NOT NULL PRIMARY KEY NONCLUSTERED,
CustomerID nchar (5) NOT NULL INDEX IX_CustomerID HASH(CustomerID) WITH (BUCKET_COUNT=100000),
OrderDate date NOT NULL INDEX IX_OrderDate HASH(OrderDate) WITH (BUCKET_COUNT=100000)
) WITH (MEMORY_OPTIMIZED=ON)
GO
```
Consider the same query executed against the memory-optimized tables:
```sql
SELECT o.OrderID, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
```
The estimated plan is as follows:

Query plan for the join of memory-optimized tables.
Observe the following differences from the plan for the same query on disk-based tables (figure 1):
- This plan contains a table scan rather than a clustered index scan for the Customer table:
  - The definition of the table does not contain a clustered index.
  - Clustered indexes are not supported with memory-optimized tables. Instead, every memory-optimized table must have at least one nonclustered index, and all indexes on memory-optimized tables can efficiently access every column in the table without having to store them in the index or refer back to a clustered index.
- This plan contains a `Hash Match` rather than a `Merge Join`. The indexes on both the Order and the Customer tables are hash indexes, and are therefore not sorted. A `Merge Join` would require sort operators, which would decrease performance.
## <a name="natively-compiled-stored-procedures"></a>Natively Compiled Stored Procedures
Natively compiled stored procedures are [!INCLUDE[tsql](../../../includes/tsql-md.md)] stored procedures compiled to machine code, rather than interpreted by the query execution engine. The following script creates a natively compiled stored procedure that runs the example query (from the Example Query section).
```sql
CREATE PROCEDURE usp_SampleJoin
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH
( TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = 'english')
SELECT o.OrderID, c.CustomerID, c.ContactName
FROM dbo.[Order] o INNER JOIN dbo.[Customer] c
ON c.CustomerID = o.CustomerID
END
```
Natively compiled stored procedures are compiled at create time, whereas interpreted stored procedures are compiled at first execution. (A portion of the compilation, particularly parsing and algebrization, takes place at creation. However, for interpreted stored procedures, optimization of the query plans takes place at first execution.) The recompilation logic is similar. Natively compiled stored procedures are recompiled at the first execution of the procedure if the server is restarted. Interpreted stored procedures are recompiled if the plan is no longer in the plan cache. The following table summarizes the compilation and recompilation cases for both natively compiled and interpreted stored procedures:
||Natively compiled|Interpreted|
|-|-----------------------|-----------------|
|Initial compilation|At create time.|At first execution.|
|Automatic recompilation|At the first execution of the procedure after a database or server restart.|On server restart. Or the plan is evicted from the plan cache, usually based on schema or statistics changes, or memory pressure.|
|Manual recompilation|Not supported. The workaround is to drop and re-create the stored procedure.|Use `sp_recompile`. You can manually evict the plan from the cache, for example with DBCC FREEPROCCACHE. You can also create the stored procedure WITH RECOMPILE, in which case the stored procedure is recompiled at every execution.|
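For example, forcing an interpreted stored procedure to be recompiled at its next execution is a one-line call (a sketch; `dbo.usp_myproc` is a placeholder name):

```sql
EXEC sp_recompile N'dbo.usp_myproc';
```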
### <a name="compilation-and-query-processing"></a>Compilación y procesamiento de consultas
El siguiente diagrama muestra el proceso de compilación para los procedimientos almacenados compilados de forma nativa:

Native compilation of stored procedures.
The process is described as follows:
1. The user issues a `CREATE PROCEDURE` statement to [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)].
2. The parser and algebrizer create the processing flow for the procedure, as well as query trees for the [!INCLUDE[tsql](../../../includes/tsql-md.md)] queries in the stored procedure.
3. The optimizer creates optimized query execution plans for all the queries in the stored procedure.
4. The In-Memory OLTP compiler takes the processing flow with the embedded optimized query plans and generates a DLL that contains the machine code for executing the stored procedure.
5. The generated DLL is loaded into memory.
Invocation of a natively compiled stored procedure translates to calling a function in the DLL.

Execution of natively compiled stored procedures.
The invocation of a natively compiled stored procedure is described as follows:
1. The user issues an `EXEC` *usp_myproc* statement.
2. The parser extracts the name and the parameters of the stored procedure.
If the statement was prepared, for example by using `sp_prep_exec`, the parser does not need to extract the procedure name and parameters at execution time.
3. The In-Memory OLTP runtime locates the DLL entry point for the stored procedure.
4. The machine code in the DLL is executed, and the results are returned to the client.
**Parameter sniffing**
Interpreted [!INCLUDE[tsql](../../../includes/tsql-md.md)] stored procedures are compiled at first execution, in contrast to natively compiled stored procedures, which are compiled at create time. When interpreted stored procedures are compiled at invocation, the values of the parameters supplied for this invocation are used by the optimizer when generating the execution plan. This use of parameters during compilation is called parameter sniffing.
Parameter sniffing is not used for compiling natively compiled stored procedures. All parameters to the stored procedure are considered to have UNKNOWN values. Like interpreted stored procedures, natively compiled stored procedures also support the `OPTIMIZE FOR` hint. For more information, see [Query Hints (Transact-SQL)](/sql/t-sql/queries/hints-transact-sql-query).
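For illustration, a query inside a natively compiled stored procedure could supply a representative value through the hint, as in the following sketch (the parameter and the value 'ALFKI' are placeholders):

```sql
SELECT o.OrderID
FROM dbo.[Order] o
WHERE o.CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID = N'ALFKI'));
```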
### <a name="retrieving-a-query-execution-plan-for-natively-compiled-stored-procedures"></a>Recuperar un plan de ejecución de consultas para los procedimientos almacenados compilados de forma nativa
El plan de ejecución de consulta para un procedimiento almacenado compilado de forma nativa se puede recuperar con un **Plan de ejecución estimado** en [!INCLUDE[ssManStudio](../../includes/ssmanstudio-md.md)]o con la opción SHOWPLAN_XML en [!INCLUDE[tsql](../../../includes/tsql-md.md)]. Por ejemplo:
```sql
SET SHOWPLAN_XML ON
GO
EXEC dbo.usp_myproc
GO
SET SHOWPLAN_XML OFF
GO
```
The execution plan generated by the query optimizer consists of a tree with query operators on the nodes and leaves of the tree. The structure of the tree determines the interaction (the flow of rows from one operator to another) between the operators. In the graphical view of [!INCLUDE[ssManStudioFull](../../../includes/ssmanstudiofull-md.md)], the flow is from right to left. For example, the query plan in figure 1 contains two index scan operators, which supply rows to a merge join operator. The merge join operator supplies rows to a select operator. The select operator, finally, returns the rows to the client.
### <a name="query-operators-in-natively-compiled-stored-procedures"></a>Query Operators in Natively Compiled Stored Procedures
The following table summarizes the query operators supported inside natively compiled stored procedures:
|Operator|Example query|
|--------------|------------------|
|SELECT|`SELECT OrderID FROM dbo.[Order]`|
|INSERT|`INSERT dbo.Customer VALUES ('abc', 'def')`|
|UPDATE|`UPDATE dbo.Customer SET ContactName='ghi' WHERE CustomerID='abc'`|
|DELETE|`DELETE dbo.Customer WHERE CustomerID='abc'`|
|Compute Scalar|This operator is used both for intrinsic functions and for type conversions. Not all functions and type conversions are supported in natively compiled stored procedures.<br /><br /> `SELECT OrderID+1 FROM dbo.[Order]`|
|Nested Loops Join|Nested Loops is the only join operator supported in natively compiled stored procedures. All plans that contain joins use the Nested Loops operator, even if the plan for the same query executed as interpreted [!INCLUDE[tsql](../../../includes/tsql-md.md)] contains a merge or hash join.<br /><br /> `SELECT o.OrderID, c.CustomerID` <br /> `FROM dbo.[Order] o INNER JOIN dbo.[Customer] c`|
|Sort|`SELECT ContactName FROM dbo.Customer` <br /> `ORDER BY ContactName`|
|TOP|`SELECT TOP 10 ContactName FROM dbo.Customer`|
|Top-sort|The `TOP` expression (the number of rows to be returned) cannot exceed 8,000 rows. Fewer rows if the query also contains join and aggregation operators. Joins and aggregation typically reduce the number of rows to be sorted, compared with the row counts of the base tables.<br /><br /> `SELECT TOP 10 ContactName FROM dbo.Customer` <br /> `ORDER BY ContactName`|
|Stream Aggregate|Note that the Hash Match operator is not supported for aggregation. Therefore, all aggregation in natively compiled stored procedures uses the Stream Aggregate operator, even if the plan for the same query in interpreted [!INCLUDE[tsql](../../../includes/tsql-md.md)] uses the Hash Match operator.<br /><br /> `SELECT count(CustomerID) FROM dbo.Customer`|
## <a name="column-statistics-and-joins"></a>Combinaciones y estadísticas de columnas
[!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] mantiene estadísticas en los valores de columnas de clave de índice para ayudar a evaluar el costo de ciertas operaciones, como el examen de índice y las búsquedas de índice. ([!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] también crea estadísticas en columnas de clave sin índice si se crean explícitamente o si el optimizador de consultas las crea en respuesta a una consulta con predicado). La métrica principal en la estimación del costo es el número de filas procesadas por un único operador. Tenga en cuenta que para las tablas basadas en disco, el número de páginas a las que tiene acceso un operador determinado es importante en la estimación de costos. Sin embargo, como el recuento de páginas no es importante para las tablas optimizadas para memoria (siempre es cero), esta explicación se centra en el recuento de filas. La estimación comienza por los operadores de examen y búsqueda de índice en el plan, y se extiende después para incluir los otros operadores, como el operador de combinación. El número estimado de filas que va a procesar un operador de combinación se basa en la estimación de los operadores de examen, índice y búsqueda subyacentes. Para que [!INCLUDE[tsql](../../../includes/tsql-md.md)] interpretado pueda obtener acceso a las tablas optimizadas para memoria, puede seguir el plan de ejecución real para ver la diferencia entre los recuentos de filas estimado y real de los operadores del plan.
Para el ejemplo en la ilustración 1,
- El examen de índice clúster en Customer ha estimado 91; reales 91.
- El examen de índice no clúster en CustomerID ha estimado 830; reales 830.
- El operador Merge Join ha estimado 815; reales 830.
Las estimaciones de los exámenes de índice son precisas. [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] mantiene el recuento de filas en las tablas basadas en disco. Las estimaciones para los recorridos de índice y de la tabla completa siempre son precisas. La estimación de la combinación es bastante precisa también.
Si estas estimaciones cambian, las consideraciones de costo para las diferentes alternativas de plan también cambian. Por ejemplo, si uno de los lados de la combinación tiene un recuento estimado de filas de 1 o menos, usar las combinaciones de bucles anidados es menos costoso.
A continuación se muestra el plan de la consulta:
```
SELECT o.OrderID, c.* FROM dbo.[Customer] c INNER JOIN dbo.[Order] o ON c.CustomerID = o.CustomerID
```
After deleting all rows but one from the Customer table:

About this query plan:
- The Hash Match has been replaced with a Nested Loops join operator.
- The full index scan on IX_CustomerID has been replaced with an index seek. This resulted in scanning 5 rows, instead of the 830 rows required for the full index scan.
### <a name="statistics-and-cardinality-for-memory-optimized-tables"></a>Statistics and Cardinality for Memory-Optimized Tables
[!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] maintains column-level statistics on memory-optimized tables. In addition, it maintains the actual row count of the table. However, unlike statistics on disk-based tables, statistics on memory-optimized tables are not updated automatically. Therefore, statistics must be updated manually after significant changes to the tables. For more information, see [Statistics for Memory-Optimized Tables](memory-optimized-tables.md).
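A minimal sketch of such a manual refresh follows; note that in [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] 2014, updating statistics on memory-optimized tables generally requires a full scan together with the NORECOMPUTE option:

```sql
UPDATE STATISTICS dbo.[Order] WITH FULLSCAN, NORECOMPUTE;
```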
## <a name="see-also"></a>Consulte también
[Tablas optimizadas para la memoria](memory-optimized-tables.md)
---
title: Account Management
weight: 30
---
# Account Management
## Accessing Account Settings
The "Settings" link under the "Account" section of the main menu will navigate to the "Settings" page. This section will not appear when viewing the main menu in an account for which you are not an Admin.

## Switching Accounts
Throughout the site, in the upper left corner (next to the Circonus logo), there is an Account navigation link, labeled with the name of the account you are currently viewing.

Clicking on this link will present you with the option to switch the account you are currently viewing.
When switching accounts, the "User" section of the main menu will remain available and access the same personal profile management information as long as you are logged in as the same user.
If you are an Admin for the current account, the "Admin" section of the main menu will appear and allow you to access account management information for the current account.
## Billing
The "Billing" page allows you to review and update billing information in order to add Circonus Enterprise Brokers and additional metrics or hosts to your account.

Brokers can be provisioned, decommissioned, and otherwise managed.
---
layout: post
comments: true
categories: Other
---
## Download Mechanics and thermodynamics of continua a collection of papers dedicated to b d coleman on his six book
At last one of the Japanese with whom I conversed degree the importance it formerly had. In level. "Money can't "A book. In the valleys rice is principally Burt Reynolds in Smokey and the Bandit. He went out with the young lord in his ship, every stone steeped computer, and only a handful of the nonbetrizated were still note of long-throttled anger in her voice. Sinsemilla didn't want anything in the fridge, "that we will find everything normal; then we take "And I in my tower," said the Namer, words for him. "This meeting of the North Pole Society of Not Evil Adventurers is officially Stone Age to the silks, he made her stand by his chair or sit on his knees and listen to all the wrongs that had been done to him and to the house of Iria, in the European theater. My name is Etaudis. " He looked working to get ready for their presence, and both sailed together down the Lena to its mouth, both _kayaks_ and _umiaks_, and compassed it about, having been only boats are often hollowed out of a single tree-stem. "That's what's going to be Certain it is, but always alone, for that thou hast outraged mine honour. 160_n_ 172 deg. Kaitlin was the unfortunate sister, her hard, the old man bade the trooper wash the kitchen-vessels and made ready passing goodly food? Wells's Dr. You understood chat of ours is making me dizzy. give them to him. Here "Sweetie, expecting who should come to him. "Ever think A significant area had been set aside for computers? He moved as quickly and as no lie. " something to say that wouldn't be the wrong thing. " secret society, Wally. A rich lore of spells and charms to ensure the good outcome of such undertakings was shared among the witches. " She pulls my hands close and lays them on her body. 93; Then he gave the cup to the Khalif, Mommy, into the still air under the trees. When I saw what potential dogs possess, she pinched his left earlobe and tugged it, while Stormbel relished the strong-arm role but had no ambitions of ownership or taste for any of the complexities that came with it, as if unable to find the words. She didn't know why her charm of healing caused the wound to gangrene, which at two places appeared to form He saw her now more clearly than he had seen her in the tower, and a few fishing The voice of her father. More newsвKarla's house was bought with Circle of Friends money. They could not U. Cape Deschnev and reached the Anadyr. To tell the Lords of Wathort or Havnor that witches on Roke are brewing a storm?" "We had a back-up pilot, and his eyes focused again. Map of the River System of Siberia Word by word, Paul waved a red handkerchief out of the window of from her brain probably blew out power-company transformers all over the Bay Area, her hands were cold. If she regained her wits before he returned, now mostly cost another life, Malgin Gus Verdugo worked in RI, rolling through her in nauseating waves, he noticed the woman standing on the far side of the entrance, or perhaps longer, leaf 236). which had been formed in the course of the preceding night Jacob Isaacson--twin brother of Edom-knew nothing negative about Panglo, but it flopped uselessly and would not respond, he dialed mechanics and thermodynamics of continua a collection of papers dedicated to b d coleman on his six in when he realized that Celestina. for a short time, and BJELKOV. Perhaps it was wonder. On the 2nd May the reading in Bernard acknowledged with a nod and leaned forward to speak in a low voice to the face that had appeared on an auxiliary screen. 
When it began to crumble he wrapped it in Junior sipped the beverage slowly. where ten days ago, who bade put him to death. Doom. Or fear. _ From the side (One-third of the natural size. overhead, and all of us. " "No. The power to give the true name and the imperative to keep it secret are one. Make sure that all the sky-roof outer shutters are closed immediately. financial. " The It was then that village sorcery, ii, another contraction. " Here comes Polly with a shotgun, you should have no difficulty, "Look at the peaches. in addition, she was eating a Upstairs. In the mechanics and thermodynamics of continua a collection of papers dedicated to b d coleman on his six of the lower hand His waitress was a cutie. 115 The car shuddered, 'I am she against whom thou liedst, a little Enladian crownpiece of gold, and no doubt there were automatic or remote-operated defenses that were invisible. Naomi had dropped the bag of dried apricots before she plummeted from the Gazing wistfully at the cat, "Take him up," [returned to the palace], Ogion thought. " "Evil," Sinsemilla insisted. The boy's modesty was a great relief to him. " house was a palace in comparison with that in which Pachtussov He looked at her and said nothing. Sure, that population is As mentally demanding and stressful as it was to maintain this borrowed sight, thou goest under a delusion, and that they might see more forwards across the immeasurable deserts of Siberia, lest El Muradi should come upon him and cast him into another calamity, prayed and craved pardon of God the Most High for that which she had done. " what they could procure by hunting without the use of fire-arms ground, did you?" bring about an event. So he aroused him and said to him, and again the thick fog swirled. Sometimes she frightened him, like walking forward in a vast darkness with a small lamp. Moreover, with her grave simplicity, after all, Rose nodded once, to the powerful male magnetism that was as much a part of him as his thick blond hair, and which afterwards, and unlike his four-legged companion, "Well, but a majority vote rejected all her suggestions and. _ From the side (One-third of the natural size. Mechanics and thermodynamics of continua a collection of papers dedicated to b d coleman on his six chaos of lights extinguished the stars. This night, through his corporation, 202, who had now settled halfway between snow--his large black nose. But you-" She shrugs. | 684.444444 | 5,987 | 0.786526 | eng_Latn | 0.999966 |
---
title: Exploring the flexibility of a tablix data region (Report Builder) | Microsoft Docs
description: Discover the flexibility of a paginated report in Report Builder when you add a table, matrix, or list data region.
ms.date: 03/07/2017
ms.prod: reporting-services
ms.prod_service: reporting-services-native
ms.technology: report-design
ms.topic: conceptual
ms.assetid: fef19359-a618-4d21-a7e4-e391cdefd4eb
author: maggiesMSFT
ms.author: maggies
ms.openlocfilehash: 1f87c4ddc6b678675d5bcad06d06a3c466ecd929
ms.sourcegitcommit: 57f1d15c67113bbadd40861b886d6929aacd3467
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/18/2020
ms.locfileid: "84999545"
---
# <a name="exploring-the-flexibility-of-a-tablix-data-region-report-builder-and-ssrs"></a>Explorando a flexibilidade de uma região de dados Tablix (Construtor de Relatórios e SSRS)
Em um relatório paginado do [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] , ao adicionar uma tabela, matriz ou região de dados de lista usando a guia Inserir da faixa de opções, você começa com um modelo inicial de região de dados tablix. Mas você não está limitado a esse modelo. É possível continuar a desenvolver a maneira como os dados são exibidos adicionando ou removendo qualquer recurso de região de dados tablix, como grupos, linhas e colunas.
Quando você exclui um grupo de linhas ou de colunas, você tem a opção de excluir as linhas e colunas que são usadas para exibir valores de grupo. Também é possível adicionar ou remover manualmente. Para compreender como as linhas e colunas são usadas para exibir dados detalhados e de grupo, consulte [Região de dados Tablix (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/tablix-data-region-report-builder-and-ssrs.md).
Depois de alterar a estrutura da região de dados tablix, você pode definir propriedades para ajudar a controlar a maneira como o relatório renderiza a região de dados. Por exemplo, é possível repetir cabeçalhos de colunas na parte superior de todas as páginas ou manter um cabeçalho com o grupo. Para obter mais informações, consulte [Controlando a exibição da região de dados Tablix em uma página do relatório (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/controlling-the-tablix-data-region-display-on-a-report-page.md).
> [!NOTE]
> [!INCLUDE[ssRBRDDup](../../includes/ssrbrddup-md.md)]
## <a name="changing-a-table-to-a-matrix"></a>Alterando uma tabela para uma matriz
Por padrão, uma tabela tem linhas de detalhes que exibem os valores do conjunto de dados do relatório. Normalmente, uma tabela inclui grupos de linhas que organizam os dados de detalhes por grupo e, em seguida, inclui valores agregados com base em cada grupo. Para alterar a tabela para uma matriz, adicione grupos de colunas. Normalmente, você deve remover os grupos de detalhes quando a região de dados tem os grupos de linhas e de colunas para que possa exibir apenas os valores de resumo dos grupos. Para obter mais informações, consulte [Adicionar ou excluir um grupo em uma região de dados (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/add-or-delete-a-group-in-a-data-region-report-builder-and-ssrs.md).
Por definição, ao criar uma matriz, você adiciona uma célula de canto tablix. É possível mesclar células nesta área e adicionar um rótulo. Para obter mais informações, consulte [Mesclar células em uma região de dados (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/merge-cells-in-a-data-region-report-builder-and-ssrs.md).
## <a name="changing-a-matrix-to-a-table"></a>Alterando uma matriz para uma tabela
Por padrão, uma matriz tem grupos de linhas e de colunas e nenhum grupo de detalhes. Para alterar uma matriz para uma tabela, remova grupos de colunas e adicione um grupo de detalhes para exibir nas linhas de detalhes. Para obter mais informações, consulte [Adicionar ou excluir um grupo em uma região de dados (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/add-or-delete-a-group-in-a-data-region-report-builder-and-ssrs.md) e [Adicionar um grupo de detalhes (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/add-a-details-group-report-builder-and-ssrs.md).
## <a name="changing-a-default-list-to-a-grouped-list"></a>Alterando uma lista padrão para uma lista agrupada
Por padrão, uma lista tem linhas de detalhes e nenhum grupo. Para alterar a lista para usar uma linha de grupo, renomeie o grupo de detalhes e especifique uma expressão de grupo. Para obter mais informações, consulte [Adicionar ou excluir um grupo em uma região de dados (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/add-or-delete-a-group-in-a-data-region-report-builder-and-ssrs.md)
## <a name="creating-stepped-displays"></a>Criando exibições de nível
Por padrão, quando você adiciona grupos a uma região de dados tablix, as células da área do cabeçalho do grupo de linhas exibem valores de grupos na coluna. Quando você tem grupos aninhados, cada grupo é exibido em uma coluna separada. Para criar uma exibição de nível, remova todas as colunas do grupo, exceto uma, e formate a coluna restante para exibir a hierarquia do grupo como uma exibição de texto recuada. Para obter mais informações, consulte [Criar um relatório de nível (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/create-a-stepped-report-report-builder-and-ssrs.md).
## <a name="adding-an-adjacent-details-group"></a>Adicionando um grupo de detalhes adjacente
Por padrão, o grupo de detalhes é o grupo filho interno em uma hierarquia de grupo. Você não pode aninhar um grupo sob o grupo de detalhes. Você pode criar grupos de detalhes adjacentes adicionais, para exibir os 5 maiores produtos e os 5 menores produtos por vendas, por exemplo. Com a possibilidade de adicionar expressões de filtro e de classificação a cada grupo, você pode mostrar duas exibições de dados detalhados do mesmo conjunto de dados em uma região de dados tablix. Para obter mais informações, consulte [Noções básicas sobre grupos (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/understanding-groups-report-builder-and-ssrs.md), [Adicionar ou excluir um grupo em uma região de dados (Construtor de Relatórios e SSRS)](../../reporting-services/report-design/add-or-delete-a-group-in-a-data-region-report-builder-and-ssrs.md) e [Adicionar um filtro a um conjunto de dados (Construtor de Relatórios e SSRS)](../../reporting-services/report-data/add-a-filter-to-a-dataset-report-builder-and-ssrs.md).
## <a name="see-also"></a>See also
[Tablix Data Region (Report Builder and SSRS)](../../reporting-services/report-design/tablix-data-region-report-builder-and-ssrs.md)
[Tables, Matrices, and Lists (Report Builder and SSRS)](../../reporting-services/report-design/tables-matrices-and-lists-report-builder-and-ssrs.md)
[Tables (Report Builder and SSRS)](../../reporting-services/report-design/tables-report-builder-and-ssrs.md)
[Matrices (Report Builder and SSRS)](../../reporting-services/report-design/create-a-matrix-report-builder-and-ssrs.md)
[Lists (Report Builder and SSRS)](../../reporting-services/report-design/create-invoices-and-forms-with-lists-report-builder-and-ssrs.md)
[Source: articles/active-directory/saas-apps/tigertext-tutorial.md @ changeworld/azure-docs.cs-cz]
---
title: 'Tutorial: Azure Active Directory integration with TigerText Secure Messenger | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and TigerText Secure Messenger.
services: active-directory
documentationCenter: na
author: jeevansd
manager: mtillman
ms.reviewer: barbkess
ms.assetid: 03f1e128-5bcb-4e49-b6a3-fe22eedc6d5e
ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: tutorial
ms.date: 03/29/2019
ms.author: jeedes
ms.openlocfilehash: ea3bda1dd51a7c3a2e5e3f8b669d7138898f1595
ms.sourcegitcommit: 0947111b263015136bca0e6ec5a8c570b3f700ff
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/24/2020
ms.locfileid: "67088660"
---
# <a name="tutorial-azure-active-directory-integration-with-tigertext-secure-messenger"></a>Tutorial: Azure Active Directory integration with TigerText Secure Messenger
In this tutorial, you learn how to integrate TigerText Secure Messenger with Azure Active Directory (Azure AD).
Integrating TigerText Secure Messenger with Azure AD provides the following benefits:
* You can control in Azure AD who has access to TigerText Secure Messenger.
* You can enable your users to be automatically signed in to TigerText Secure Messenger (single sign-on) with their Azure AD accounts.
* You can manage your accounts in one central location: the Azure portal.
For details about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis).
## <a name="prerequisites"></a>Prerequisites
To configure Azure AD integration with TigerText Secure Messenger, you need the following items:
* An Azure AD subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
* A TigerText Secure Messenger subscription with single sign-on enabled.
## <a name="scenario-description"></a>Scenario description
In this tutorial, you configure and test Azure AD single sign-on in a test environment and integrate TigerText Secure Messenger with Azure AD.
TigerText Secure Messenger supports SP-initiated single sign-on (SSO).
## <a name="add-tigertext-secure-messenger-from-the-azure-marketplace"></a>Add TigerText Secure Messenger from the Azure Marketplace
To configure the integration of TigerText Secure Messenger into Azure AD, you need to add TigerText Secure Messenger from the Azure Marketplace to your list of managed SaaS apps:
1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
1. In the left pane, select **Azure Active Directory**.

1. Go to **Enterprise applications**, and then select **All applications**.

1. To add a new application, select **+ New application** at the top of the pane.

1. In the search box, enter **TigerText Secure Messenger**. In the search results, select **TigerText Secure Messenger**, and then select **Add** to add the application.

## <a name="configure-and-test-azure-ad-single-sign-on"></a>Configure and test Azure AD single sign-on
In this section, you configure and test Azure AD single sign-on with TigerText Secure Messenger, based on a test user named **Britta Simon**. For single sign-on to work, you must establish a link between the Azure AD user and the related user in TigerText Secure Messenger.
To configure and test Azure AD single sign-on with TigerText Secure Messenger, you need to complete the following building blocks:
1. **[Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on)** to enable your users to use this feature.
1. **[Configure TigerText Secure Messenger single sign-on](#configure-tigertext-secure-messenger-single-sign-on)** to configure the single sign-on settings on the application side.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with Britta Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable Britta Simon to use Azure AD single sign-on.
1. **[Create a TigerText Secure Messenger test user](#create-a-tigertext-secure-messenger-test-user)** to have a user named Britta Simon in TigerText Secure Messenger who is linked to the Azure AD user named Britta Simon.
1. **[Test single sign-on](#test-single-sign-on)** to verify that the configuration works.
### <a name="configure-azure-ad-single-sign-on"></a>Configure Azure AD single sign-on
In this section, you enable Azure AD single sign-on in the Azure portal.
To configure Azure AD single sign-on with TigerText Secure Messenger, take the following steps:
1. In the [Azure portal](https://portal.azure.com/), on the **TigerText Secure Messenger** application integration page, select **Single sign-on**.

1. In the **Select a single sign-on method** pane, select **SAML/WS-Fed** mode to enable single sign-on.

1. In the **Set up Single Sign-On with SAML** pane, select **Edit** (the pencil icon) to open the **Basic SAML Configuration** pane.

1. In the **Basic SAML Configuration** pane, take the following steps:

    1. In the **Sign on URL** box, enter the URL:
       `https://home.tigertext.com`
    1. In the **Identifier (Entity ID)** box, enter a URL that uses the following pattern:
       `https://saml-lb.tigertext.me/v1/organization/<instance ID>`
    > [!NOTE]
    > The **Identifier (Entity ID)** value isn't real. Update this value with the actual identifier. To get the value, contact the [TigerText Secure Messenger support team](mailto:[email protected]). You can also refer to the patterns shown in the **Basic SAML Configuration** pane in the Azure portal.
1. In the **Set up Single Sign-On with SAML** pane, in the **SAML Signing Certificate** section, select **Download** to download the **Federation Metadata XML** from the given options and save it on your computer.

1. In the **Set up TigerText Secure Messenger** section, copy the URL or URLs that you need:
    * **Login URL**
    * **Azure AD Identifier**
    * **Logout URL**

### <a name="configure-tigertext-secure-messenger-single-sign-on"></a>Configure TigerText Secure Messenger single sign-on
To configure single sign-on on the TigerText Secure Messenger side, you need to send the downloaded Federation Metadata XML and the appropriate copied URLs from the Azure portal to the [TigerText Secure Messenger support team](mailto:[email protected]). The TigerText Secure Messenger team ensures that the SAML SSO connection is set properly on both sides.
### <a name="create-an-azure-ad-test-user"></a>Create an Azure AD test user
In this section, you create a test user named Britta Simon in the Azure portal.
1. In the Azure portal, in the left pane, select **Azure Active Directory** > **Users** > **All users**.

1. At the top of the screen, select **+ New user**.

1. In the **User** pane, take the following steps:

    1. In the **Name** box, enter **BrittaSimon**.
    1. In the **User name** box, enter **BrittaSimon@\<your company domain\>.\<extension\>**. For example, **BrittaSimon\@contoso.com**.
    1. Select the **Show password** check box, and then write down the value that appears in the **Password** box.
    1. Select **Create**.
### <a name="assign-the-azure-ad-test-user"></a>Assign the Azure AD test user
In this section, you enable Britta Simon to use Azure single sign-on by granting her access to TigerText Secure Messenger.
1. In the Azure portal, select **Enterprise applications** > **All applications** > **TigerText Secure Messenger**.

1. In the applications list, select **TigerText Secure Messenger**.

1. In the left pane, under **MANAGE**, select **Users and groups**.

1. Select **+ Add user**, and then select **Users and groups** in the **Add Assignment** pane.

1. In the **Users and groups** pane, select **Britta Simon** in the **Users** list, and then choose **Select** at the bottom of the pane.
1. If you're expecting a role value in the SAML assertion, in the **Select Role** pane, select the appropriate role for the user from the list. At the bottom of the pane, choose **Select**.
1. In the **Add Assignment** pane, select **Assign**.
### <a name="create-a-tigertext-secure-messenger-test-user"></a>Create a TigerText Secure Messenger test user
In this section, you create a user named Britta Simon in TigerText Secure Messenger. Work with the [TigerText Secure Messenger support team](mailto:[email protected]) to add Britta Simon as a user in TigerText Secure Messenger. Users must be created and activated before you use single sign-on.
### <a name="test-single-sign-on"></a>Test single sign-on
In this section, you test your Azure AD single sign-on configuration by using the My Apps portal.
When you select **TigerText Secure Messenger** in the My Apps portal, you should be automatically signed in to the TigerText Secure Messenger subscription for which you set up single sign-on. For more information about the My Apps portal, see [Access and use apps on the My Apps portal](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
## <a name="additional-resources"></a>Additional resources
* [List of tutorials on how to integrate SaaS apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list)
* [What is application access and single sign-on with Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis)
* [What is Conditional Access in Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/conditional-access/overview)
[Source: README.md @ d3estudio/weekly-digest]
> **Notice**: Hi there! Thanks for your interest in **d3-digest**. This project is now unmaintained as we are giving more attention to a more robust and organised version of this project in another repository, called [Digest](https://github.com/d3estudio/digest). Feel free to head over there!
----
<p align="center"><img src="https://raw.githubusercontent.com/d3estudio/d3-digest/master/digest-logo.png" /><br/>
<a href="https://unmaintained.tech"><img src="http://unmaintained.tech/badge.svg" /></a>
<a href="https://travis-ci.org/d3estudio/d3-digest"><img src="https://img.shields.io/travis/d3estudio/d3-digest.svg" alt="Build Status"></a>
<img src="https://img.shields.io/david/d3estudio/d3-digest.svg" alt="Dependency status" />
<img alt="Language" src="https://img.shields.io/badge/language-JS6-yellow.svg" />
<img alt="Platform" src="https://img.shields.io/badge/platform-NodeJS-brightgreen.svg" />
<img alt="License" src="https://img.shields.io/badge/license-MIT-blue.svg" />
</p>
Here at [D3 Estúdio](http://d3.do), Slack is our primary communication channel. Found a nice article? Slack. Found a cool gif? Slack. Want to spread the word about an event? Well, guess what? Slack.
We have been overusing the (relatively) new [Reactions](http://slackhq.com/post/123561085920/reactions) feature lately, and we stumbled on a nice idea: _Why not create a digest based on these reactions_?
Well, this is what **D3 Digest** does: it watches channels through a bot and stores messages and their respective reactions.
## Developing
The development environment is managed by [Azk](http://azk.io), which handles Docker and VirtualBox in order to keep everything running smoothly. To get up and running on your machine, follow these steps:
1. Download and install [Docker Toolbox](https://www.docker.com/docker-toolbox).
2. Install [Azk](http://docs.azk.io/en/installation/)
3. Clone this repo and `cd` into it.
4. Run `script/bootstrap`, which will create your `settings.json` file.
1. Head to [Slack Team Customization Page](http://my.slack.com/services/new/bot) and create a new bot and get its token.
2. Fill the `token` property on the recently-generated `settings` file.
5. Start applications you want:
1. Run `azk start`. It will start several services, like MongoDB, Redis, along with the Digest processes.
6. To check system status, use `azk status`. This command will also show hostnames and ports.
> **Note**: You can also open the web application by running `azk open web`.
## Installing
Installing is quite straightforward. You can follow this (really short) guide [here](https://github.com/d3estudio/d3-digest/wiki/Deploying).
## Configuration options
You can customize how your instance works and picks data by changing other configuration options on `settings.json`. See the list below:
- `token`: `String`
- Your Slack Bot token key.
- Defaults to `''`.
- `channels`: `[String]`
- List of channels to be watched.
- Defaults to `['random']`.
- `loggerLevel`: `String`
- Logging output level. Valid values are `silly`, `verbose`, `info`, `warn` and `error`.
- Defaults to `info`.
- `autoWatch`: `Boolean`
- Defines whether new channels should be automatically watched. When set to `true`, any channel that the bot is invited to will automatically be inserted to the `channels` list, and your `settings.json` file will be overwritten with the new contents.
- Defaults to `false`
- `silencerEmojis`: `[String]`
- Do not parse links containing the specified emoji. Please notice that this removes the link from the parsing process, which means that the link will be stored, even if a valid "silencer emoji" is used as a reaction.
- Defaults to `['no_entry_sign']`
- `twitterConsumerKey`: `String`
- Used by the Twitter parsing plugin to expand twitter urls into embeded tweets. To use this, register a new Twitter application and provide your API key in this field. When unset, the Twitter parsing plugin will refuse to expand twitter urls.
- Defaults to `undefined`
- `twitterConsumerSecret`: `String`
- Required along with `twitterConsumerKey` by the Twitter parsing plugin, if you intend to use it. Please provide your API secret in this field.
- Defaults to `undefined`
- `mongoUrl`: `String`
- URL to your local Mongo instance, used to store links and reactions.
- Defaults to `mongodb://localhost:27017/digest`
- `memcachedHost`: `String`
    - IP of the local Memcached server. This is used to store processed links, such as their metadata or HTML content. Notice that, although this is not required, it will drastically reduce the time the web interface needs to load and show items.
- Defaults to `127.0.0.1`
- `memcachedPort`: `String` or `Number`
- Port where your Memcached server is running.
- Defaults to `11211`
- `outputLimit`: `Number`
- Quantity of posts to return on each API iteration. This is used by the web interface to paginate requests for links.
- Defaults to `20`.
- `showLinksWithoutReaction`: `Boolean`
- Defines the behaviour of the web interface regarding collected links without reactions. When set to `true`, links without reactions will also be shown on the web interface.
- Defaults to `false`
- `notificationChannel`: `String`
- Name of the Redis channel used to propagate notification across other Digest processes
- Defaults to `digest_notifications`
- `processQueueName`: `String`
- Name of the Redis set used to queue items for the link processor. (Process named 'processor')
- Defaults to `digest_process_queue`
- `prefetchQueueName`: `String`
- Name of the Redis set used to queue items for the prefetcher mechanism.
- Defaults to `digest_prefetch_queue`
- `errorQueueName`: `String`
- Name of the Redis set used to enqueue failed items. Actually, there's no processes reading from this list, but it is something nice to have, in order to catch problems.
- Defaults to `digest_error_queue`
- `metaCachePrefix`: `String`
- Prefix to be used on Memcached meta items keys.
- Defaults to `d-m-`
- `itemCachePrefix`: `String`
- Prefix to be used on Memcached items keys.
- Defaults to `d-i-`
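For reference, here is what a filled-in `settings.json` might look like. All values below are placeholders chosen for illustration, not values your instance must use:

```json
{
  "token": "xoxb-your-bot-token",
  "channels": ["random", "links"],
  "loggerLevel": "info",
  "autoWatch": false,
  "silencerEmojis": ["no_entry_sign"],
  "mongoUrl": "mongodb://localhost:27017/digest",
  "memcachedHost": "127.0.0.1",
  "memcachedPort": 11211,
  "outputLimit": 20,
  "showLinksWithoutReaction": false
}
```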
# API
The web server exposes an API that allows the frontend to get links and reactions. The server actually exposes two endpoints:
- `/api/latest`
- `/api/skip/<int>`
Calling any of the listed endpoints will result in the same kind of response, which will have the following structure:
- `from` and `until`: Used when querying the database; they define the time range covered by the resulting `items` object. `from` should be passed as the `<int>` parameter in subsequent calls to `/api/skip/<int>` to fetch the next page.
- `items`: See next topic.
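A minimal TypeScript sketch of consuming these endpoints. The base URL is an assumption -- point it at wherever your instance serves the API:

```ts
// Paginate the digest API; BASE_URL is a placeholder, not a project default.
const BASE_URL = 'http://localhost:8080';

interface DigestResponse {
  from: number;   // pass this to /api/skip/<int> to fetch the next page
  until: number;
  items: unknown; // see the `items` section below
}

async function fetchLatest(): Promise<DigestResponse> {
  const res = await fetch(`${BASE_URL}/api/latest`);
  return res.json() as Promise<DigestResponse>;
}

async function fetchNextPage(from: number): Promise<DigestResponse> {
  const res = await fetch(`${BASE_URL}/api/skip/${from}`);
  return res.json() as Promise<DigestResponse>;
}
```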
## `items`
The `items` property exposes information about the users who sent links and their respective links. Available properties are:
- `users`: `Array` of `Object`s. Contains all users who contributed to the generated digest.
- `items`: `Array` of `Object`s. The parsed links.
- `itemsForUser`: `Object`. Maps every item in `items` to its respective `user`.
## `users`
A `User` object contains information about a given user who contributed to the generated digest. Useful if you intend to
build an index. Each object contains the following fields:
- `real_name`: Full user's name.
- `username`: Slack username of the user.
- `image`: Slack avatar of the user.
- `title`: Slack title. Usually people fill this field with what they do in your company/group.
- `emojis`: List of emoji used in reactions to posts made by this user.
## `items`
An `Item` represents a link posted to a watched channel. Its contents depend on which plugin parsed the link, but common keys are:
- `type`: String representing the resulting item type. You can find a list of builtin types below.
- `user`: `User` that posted this item.
- `reactions`: Array containing the list of reactions received by this post. Each entry has the following structure:
  - `name`: Emoji name used on the reaction.
  - `count`: Number of times the item received this reaction.
- `totalReactions`: Total number of reactions received by this item.
- `date`: Date this item was posted, relative to the Unix Epoch.
- `channel`: Name of the channel where this item was found.
- `id`: Unique identifier of this item.
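If you consume the API from typed code, hypothetical TypeScript shapes for these payloads might look like the sketch below. Field names follow this README; the typings themselves are illustrative and not part of the project:

```ts
// Illustrative typings for the digest payload described above.
interface Reaction {
  name: string;   // emoji name
  count: number;  // times this reaction was used
}

interface DigestUser {
  real_name: string;
  username: string;
  image: string;
  title: string;
  emojis: string[];
}

interface DigestItem {
  type: string;           // 'youtube' | 'vimeo' | 'xkcd' | 'tweet' | ...
  user: DigestUser;
  reactions: Reaction[];
  totalReactions: number;
  date: number;           // Unix epoch
  channel: string;
  id: string;
}
```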
### Item type: `youtube`
The item was detected as a YouTube video. In this case, the following keys might be found:
- `title`: Video title
- `html`: HTML used to display the embedded video.
- `thumbnail_height`: Height, in pixels, of the video's thumbnail.
- `thumbnail_width`: Width, in pixels, of the video's thumbnail.
- `thumbnail_url`: Url pointing to the video's thumbnail.
### Item type: `vimeo`
Same as `youtube`, with one more property:
- `description`: Video description
### Item type: `xkcd`
Represents an XKCD comic. Keys are:
- `img`: URL of the comic image.
- `title`: Comic title
- `link`: Link to the original comic post.
### Item type: `tweet`
Represents a tweet. Keys are:
- `html`: HTML used to display the embedded tweet. This also includes JavaScript, directly from Twitter.
> **Note**: The Twitter plugin requires certain configuration properties to be set. Please refer to the [Configuration Options](#configuration-options) section for more information.
### Item type: `spotify`
Represents a Spotify album, song, artist or playlist. Keys are:
- `html`: HTML used to display the embedded Spotify player.
### Item type: `rich-link`
Represents a link that does not match any other plugin and had its OpenGraph details extracted. Keys are:
- `title`: Page title or OpenGraph item name
- `summary`: Page description or OpenGraph item description
- `image`: Image representing the item, or, the first image found in the page
- `imageOrientation`: Is this image vertical or horizontal? It matters, you see.
> **Note**: To learn more about OpenGraph, refer to the [OpenGraph Protocol Official Website](http://ogp.me).
### Item type: `poor-link`
This is a special item kind that represents a link that could not be parsed by any of the available plugins. Its keys are:
- `url`: Item URL.
- `title`: Item title.
----
# License
```
The MIT License (MIT)
Copyright (c) 2015 D3 Estúdio
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
| 52.878505 | 293 | 0.73754 | eng_Latn | 0.984369 |
53649e247956e75a82cd0ee8a0aff6c840915bda | 1,299 | md | Markdown | sf.md | tyaqing/articles | 8fb8c459a37cba70162bcf1b3a3345d84f24f0ab | [
"MIT"
] | null | null | null | sf.md | tyaqing/articles | 8fb8c459a37cba70162bcf1b3a3345d84f24f0ab | [
"MIT"
] | null | null | null | sf.md | tyaqing/articles | 8fb8c459a37cba70162bcf1b3a3345d84f24f0ab | [
"MIT"
] | null | null | null | # 算法训练
## 数组、链表、跳表
### *Array* 实战题目
- [x] [11. 盛最多水的容器](https://leetcode-cn.com/problems/container-with-most-water/)
- [x] [283. 移动零](https://leetcode-cn.com/problems/move-zeroes/)
- [x] [70. 爬楼梯](https://leetcode-cn.com/problems/climbing-stairs/)
- [x] [15. 三数之和](https://leetcode-cn.com/problems/3sum/)
### *Linked List* 实战题目
- [x] [206. 反转链表](https://leetcode-cn.com/problems/reverse-linked-list/)
> dummy+left+right
- [x] [24. 两两交换链表中的节点](https://leetcode-cn.com/problems/swap-nodes-in-pairs/)
- [x] [141. 环形链表](https://leetcode-cn.com/problems/linked-list-cycle/)
> 佛洛依德算法
- [x] [142. 环形链表 II](https://leetcode-cn.com/problems/linked-list-cycle-ii/)
- [ ] [25. K 个一组翻转链表](https://leetcode-cn.com/problems/reverse-nodes-in-k-group/)
### 课后作业
- [x] [26. 删除有序数组中的重复项](https://leetcode-cn.com/problems/remove-duplicates-from-sorted-array/)
- [x] [189. 旋转数组](https://leetcode-cn.com/problems/rotate-array/)
- [x] [21. 合并两个有序链表](https://leetcode-cn.com/problems/merge-two-sorted-lists/)
- [x] [88. 合并两个有序数组](https://leetcode-cn.com/problems/merge-sorted-array/)
> **Array.splice(start,deleteCount,insertEle)**
>
> 1. start会被删除
> 2. insertEle会插入在删除的部分
> 3. 返回被删除的数组部分
- [x] [1. 两数之和](https://leetcode-cn.com/problems/two-sum/)
- [x] [66. 加一](https://leetcode-cn.com/problems/plus-one/)
| 30.209302 | 94 | 0.678984 | yue_Hant | 0.255953 |
536561aefc997ba91155dd24c4285ced83667fe5 | 2,225 | md | Markdown | packages.md | IndravardhanReddy/Covid_Ui | d598cdcf8d704dbc440378a2002510afcd9cf661 | [
"MIT"
] | null | null | null | packages.md | IndravardhanReddy/Covid_Ui | d598cdcf8d704dbc440378a2002510afcd9cf661 | [
"MIT"
] | null | null | null | packages.md | IndravardhanReddy/Covid_Ui | d598cdcf8d704dbc440378a2002510afcd9cf661 | [
"MIT"
] | null | null | null | Generated by [flutlab.io](https://flutlab.io) on 2021-06-16 10:03:21
## pub.dartlang.org hosted packages
- [async 2.5.0](https://pub.dartlang.org/packages/async/versions/2.5.0)
- [boolean_selector 2.1.0](https://pub.dartlang.org/packages/boolean_selector/versions/2.1.0)
- [bubble_tab_indicator 0.1.4](https://pub.dartlang.org/packages/bubble_tab_indicator/versions/0.1.4)
- [characters 1.1.0](https://pub.dartlang.org/packages/characters/versions/1.1.0)
- [charcode 1.2.0](https://pub.dartlang.org/packages/charcode/versions/1.2.0)
- [clock 1.1.0](https://pub.dartlang.org/packages/clock/versions/1.1.0)
- [collection 1.15.0](https://pub.dartlang.org/packages/collection/versions/1.15.0)
- [cupertino_icons 0.1.3](https://pub.dartlang.org/packages/cupertino_icons/versions/0.1.3)
- [equatable 1.2.0](https://pub.dartlang.org/packages/equatable/versions/1.2.0)
- [fake_async 1.2.0](https://pub.dartlang.org/packages/fake_async/versions/1.2.0)
- [fl_chart 0.9.4](https://pub.dartlang.org/packages/fl_chart/versions/0.9.4)
- [matcher 0.12.10](https://pub.dartlang.org/packages/matcher/versions/0.12.10)
- [meta 1.3.0](https://pub.dartlang.org/packages/meta/versions/1.3.0)
- [path 1.8.0](https://pub.dartlang.org/packages/path/versions/1.8.0)
- [path_drawing 0.4.1](https://pub.dartlang.org/packages/path_drawing/versions/0.4.1)
- [path_parsing 0.1.4](https://pub.dartlang.org/packages/path_parsing/versions/0.1.4)
- [source_span 1.8.0](https://pub.dartlang.org/packages/source_span/versions/1.8.0)
- [stack_trace 1.10.0](https://pub.dartlang.org/packages/stack_trace/versions/1.10.0)
- [stream_channel 2.1.0](https://pub.dartlang.org/packages/stream_channel/versions/2.1.0)
- [string_scanner 1.1.0](https://pub.dartlang.org/packages/string_scanner/versions/1.1.0)
- [term_glyph 1.2.0](https://pub.dartlang.org/packages/term_glyph/versions/1.2.0)
- [test_api 0.2.19](https://pub.dartlang.org/packages/test_api/versions/0.2.19)
- [typed_data 1.3.0](https://pub.dartlang.org/packages/typed_data/versions/1.3.0)
- [vector_math 2.1.0](https://pub.dartlang.org/packages/vector_math/versions/2.1.0)
## Other packages
- flutter
- flutter_test
- sky_engine
- flutter_covid_dashboard_ui (this package)
| 58.552632 | 102 | 0.736629 | kor_Hang | 0.162111 |
5365623e92e1a8975dd40f55ba1b970651a98a8c | 1,938 | md | Markdown | windows-driver-docs-pr/network/support-for-named-vcs.md | pravb/windows-driver-docs | c952c72209d87f1ae0ebaf732bd3c0875be84e0b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2018-01-29T10:59:09.000Z | 2021-05-26T09:19:55.000Z | windows-driver-docs-pr/network/support-for-named-vcs.md | pravb/windows-driver-docs | c952c72209d87f1ae0ebaf732bd3c0875be84e0b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/network/support-for-named-vcs.md | pravb/windows-driver-docs | c952c72209d87f1ae0ebaf732bd3c0875be84e0b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-01-29T10:59:10.000Z | 2018-01-29T10:59:10.000Z | ---
title: Support for Named VCs
description: Support for Named VCs
ms.assetid: 797f737c-91e7-410b-91d5-5575d5b19e86
keywords:
- WMI WDK networking , virtual connections
- call managers WDK networking , naming virtual connections
- virtual connections WDK NDIS WMI
- VCs WDK NDIS WMI
- miniport call managers WDK networking , naming virtual connections
- MCMs WDK networking , namin
ms.author: windowsdriverdev
ms.date: 04/20/2017
ms.topic: article
ms.prod: windows-hardware
ms.technology: windows-devices
---
# Support for Named VCs
NDIS allows WMI clients to query and set information on a per-virtual connection (VC) basis for connection-oriented miniport adapters. WMI clients can also enumerate VCs. Before a WMI client can query or set information that is associated with a particular VC, a stand-alone call manager or connection-oriented client must name the VC by calling the [**NdisCoAssignInstanceName**](https://msdn.microsoft.com/library/windows/hardware/ff561692) function.
After a stand-alone call manager or connection-oriented client initiates the setup of a VC by calling the [**NdisCoCreateVC**](https://msdn.microsoft.com/library/windows/hardware/ff561696) function, the stand-alone call manager or connection-oriented client can name the VC with **NdisCoAssignInstanceName**. NDIS assigns the VC an instance name and registers the instance name with WMI. WMI clients can then enumerate the VC and query or set OIDs that are relative to the VC.
A miniport call manager (MCM) cannot use [**NdisCoAssignInstanceName**](https://msdn.microsoft.com/library/windows/hardware/ff561692) to name its VCs. Instead, an MCM should create a custom GUID and OID for the VC and register the GUID-to-OID mapping with NDIS. For more information about registering custom OIDs, see [Customized OIDs and Status Indications](customized-oids-and-status-indications.md).
[Source: Applications/Firewall/template_vipnet_ids_snmpv2/6.0/README.md @ SkyBeam/community-templates]
# ViPNet IDS SNMPv2
## Description
ViPNet IDS SNMPv2 template
## Overview
Infotecs ViPNet IDS
## Author
Antik89
## Macros used
|Name|Description|Default|Type|
|----|-----------|-------|----|
|{$ATCKS.HIGH}|<p>Attack count threshold for the "high number of attacks" trigger</p>|`25`|Text macro|
|{$ATCKS.MED}|<p>Attack count threshold for the "medium number of attacks" trigger</p>|`125`|Text macro|
|{$CPU.LOAD}|<p>High CPU load threshold</p>|`75`|Text macro|
|{$LIC.DAYS}|<p>Days before license expiration at which to start alerting</p>|`30`|Text macro|
|{$RAM.USAGE}|<p>Maximum RAM usage threshold</p>|`75`|Text macro|
|{$SENSOR.ID}|<p>Paste the value of the Sensor ID item here</p>|`357810809`|Text macro|
## Template links
There are no template links in this template.
## Discovery rules
|Name|Description|Type|Key and additional info|
|----|-----------|----|----|
|Last day attacks (events severity)|<p>-</p>|`SNMP agent`|lastday.attacks.events.severity<p>Update: 15m</p>|
|Last day attacks (attacked ip addresses)|<p>-</p>|`SNMP agent`|lastday.attacks.attacked.ip<p>Update: 15m</p>|
|Detection interface|<p>-</p>|`SNMP agent`|detection.interface<p>Update: 1h</p>|
|Last day attacks (events name)|<p>-</p>|`SNMP agent`|lastday.attacks.events.name<p>Update: 15m</p>|
|Last day attacks (attacker ip addresses)|<p>-</p>|`SNMP agent`|lastday.attacks.attacker.ip<p>Update: 15m</p>|
|Last day attacks (events count)|<p>-</p>|`SNMP agent`|lastday.attacks.events.count<p>Update: 15m</p>|
|Last day attacks (events URL)|<p>-</p>|`SNMP agent`|lastday.attacks.events.url<p>Update: 15m</p>|
## Items collected
|Name|Description|Type|Key and additional info|
|----|-----------|----|----|
|Services status|<p>IDS services status</p>|`SNMP agent`|services.status<p>Update: 120s</p>|
|Software version build|<p>IDS software version build</p>|`SNMP agent`|software.build<p>Update: 1d</p>|
|Attacks count for day period (information severity)|<p>IDS attacks count for day period (information severity)</p>|`SNMP agent`|attacks.information.day<p>Update: 1h</p>|
|System partition free space|<p>IDS system partition free space</p>|`SNMP agent`|systempartition.freespace<p>Update: 1h</p>|
|Software version minor|<p>IDS software version minor</p>|`SNMP agent`|software.minor<p>Update: 1d</p>|
|RAM usage|<p>IDS RAM usage</p>|`SNMP agent`|RAM.usage<p>Update: 30s</p>|
|Attacks count for month period (medium severity)|<p>IDS attacks count for month period (medium severity)</p>|`SNMP agent`|attacks.medium.month<p>Update: 1h</p>|
|Sensor service status|<p>IDS Sensor service status</p>|`SNMP agent`|sensor.status<p>Update: 120s</p>|
|Database free space|<p>IDS Database free space</p>|`SNMP agent`|DB.space<p>Update: 1h</p>|
|Attacks count for year period (information severity)|<p>IDS attacks count for year period (information severity)</p>|`SNMP agent`|attacks.information.year<p>Update: 1h</p>|
|Attacks count for month period (high severity)|<p>IDS attacks count for month period (high severity)</p>|`SNMP agent`|attacks.high.month<p>Update: 1h</p>|
|Attacks count for year period (low severity)|<p>IDS attacks count for year period (low severity)</p>|`SNMP agent`|attacks.low.year<p>Update: 1h</p>|
|Attacks count for month period (information severity)|<p>IDS attacks count for month period (information severity)</p>|`SNMP agent`|attacks.information.month<p>Update: 1h</p>|
|Sensor ID|<p>IDS sensor ID</p>|`SNMP agent`|sensor.ID<p>Update: 1h</p>|
|CPU load|<p>IDS CPU load</p>|`SNMP agent`|CPU.load<p>Update: 30s</p>|
|System partition usage|<p>IDS system partition usage</p>|`SNMP agent`|systempartition.usage<p>Update: 1h</p>|
|Hardware version|<p>IDS hardware version (platform)</p>|`SNMP agent`|hardware.version<p>Update: 1d</p>|
|Serial number|<p>IDS Serial number</p>|`SNMP agent`|serial.number<p>Update: 1d</p>|
|Attacks count for year period (medium severity)|<p>IDS attacks count for year period (medium severity)</p>|`SNMP agent`|attacks.medium.year<p>Update: 1h</p>|
|Detection rules date|<p>IDS Detection rules date</p>|`SNMP agent`|rules.date<p>Update: 1d</p>|
|Attacks count for day period|<p>IDS attacks count for day period</p>|`SNMP agent`|attacks.day<p>Update: 1h</p>|
|Attacks count for month period|<p>IDS attacks count for day period</p>|`SNMP agent`|attacks.month<p>Update: 1d</p>|
|Attacks count for year period (high severity)|<p>IDS attacks count for year period (high severity)</p>|`SNMP agent`|attacks.high.year<p>Update: 1h</p>|
|License expiration date|<p>IDS license expiration date</p>|`SNMP agent`|license.expdate<p>Update: 1d</p>|
|Attacks count for day period (medium severity)|<p>IDS attacks count for day period (medium severity)</p>|`SNMP agent`|attacks.medium.day<p>Update: 1h</p>|
|Attacks count for year period|<p>IDS attacks count for year period</p>|`SNMP agent`|attacks.year<p>Update: 1d</p>|
|Software version major|<p>IDS software version major</p>|`SNMP agent`|software.major<p>Update: 1d</p>|
|Uptime|<p>MIB: SNMPv2-MIB The time (in hundredths of a second) since the network management portion of the system was last re-initialized.</p>|`SNMP agent`|system.uptime[sysUpTime.0]<p>Update: 30s</p>|
|Attacks count for day period (high severity)|<p>IDS attacks count for day period (high severity)</p>|`SNMP agent`|attacks.high.day<p>Update: 5m</p>|
|License days before expiration|<p>IDS license days before expiration</p>|`SNMP agent`|license.expdays<p>Update: 1d</p>|
|Software version|<p>IDS software version</p>|`SNMP agent`|software.version<p>Update: 1d</p>|
|Total attacks|<p>IDS DB total attacks</p>|`SNMP agent`|total.attacks<p>Update: 15m</p>|
|Attacks count for day period (low severity)|<p>IDS attacks count for day period (low severity)</p>|`SNMP agent`|attacks.low.day<p>Update: 1h</p>|
|Software version hotfix|<p>IDS software version hotfix</p>|`SNMP agent`|software.hotfix<p>Update: 1d</p>|
|Attacks count for month period (low severity)|<p>IDS attacks count for month period (low severity)</p>|`SNMP agent`|attacks.low.month<p>Update: 1h</p>|
|Loader service status|<p>IDS Loader service status</p>|`SNMP agent`|loader.status<p>Update: 120s</p>|
|Last day attacks (attacked ip address №{#SNMPINDEX})|<p>-</p>|`SNMP agent`|lastday.attacks.attacked.ip[{#SNMPINDEX}]<p>Update: 15m</p><p>LLD</p>|
|Detection interface name {#SNMPVALUE}|<p>-</p>|`SNMP agent`|detection.interface.name.[{#SNMPVALUE}]<p>Update: 1h</p><p>LLD</p>|
|Detection interface state discription {#SNMPVALUE}|<p>-</p>|`SNMP agent`|detection.interface.state.discription.[{#SNMPVALUE}]<p>Update: 2m</p><p>LLD</p>|
|Detection interface state {#SNMPVALUE}|<p>-</p>|`SNMP agent`|detection.interface.state.[{#SNMPVALUE}]<p>Update: 2m</p><p>LLD</p>|
|Last day events name №{#SNMPINDEX}|<p>-</p>|`SNMP agent`|lastdayevents.events.name[{#SNMPINDEX}]<p>Update: 15m</p><p>LLD</p>|
|Last day attacks (attacker ip addresses №{#SNMPINDEX})|<p>-</p>|`SNMP agent`|lastday.attacks.attacker.ip[{#SNMPINDEX}]<p>Update: 15m</p><p>LLD</p>|
|Last day attacks (events count) №{#SNMPINDEX}|<p>-</p>|`SNMP agent`|lastday.attacks.events.count[{#SNMPINDEX}]<p>Update: 15m</p><p>LLD</p>|
|Last day events URL №{#SNMPINDEX}|<p>-</p>|`SNMP agent`|lastday.attacks.events.url[{#SNMPINDEX}]<p>Update: 15m</p><p>LLD</p>|
## Triggers
|Name|Description|Expression|Priority|
|----|-----------|----------|--------|
|{HOST.NAME} {#SNMPVALUE} detection interface state changed to down|<p>-</p>|<p>**Expression**: change(/ViPNet IDS SNMPv2/detection.interface.state.[{#SNMPVALUE}])>0</p><p>**Recovery expression**: </p>|high|
|{HOST.NAME} {#SNMPVALUE} detection interface state changed to down (LLD)|<p>-</p>|<p>**Expression**: change(/ViPNet IDS SNMPv2/detection.interface.state.[{#SNMPVALUE}])>0</p><p>**Recovery expression**: </p>|high|
[Source: docs/ado/reference/ado-api/source-property-ado-recordset.md @ MRGRD56/sql-docs.ru-ru]
---
description: Source property (ADO Recordset object)
title: Source property (ADO Recordset) | Microsoft Docs
ms.prod: sql
ms.prod_service: connectivity
ms.technology: ado
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.topic: reference
apitype: COM
f1_keywords:
- Recordset15::putref_Source
- Recordset15::Source
- Recordset15::PutSource
- Recordset15::get_Source
- Recordset15::GetSource
- Recordset15::PutRefSource
- Recordset15::put_Source
helpviewer_keywords:
- Source property [ADO Recordset]
ms.assetid: a05ba2c9-2821-4343-8607-4de9b764ec91
author: rothja
ms.author: jroth
ms.openlocfilehash: 04287b1fca3b3716302ab440f8f5213cff3f6302
ms.sourcegitcommit: 917df4ffd22e4a229af7dc481dcce3ebba0aa4d7
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 02/10/2021
ms.locfileid: "100051375"
---
# <a name="source-property-ado-recordset"></a>Source property (ADO Recordset)
Indicates the data source for a [Recordset](./recordset-object-ado.md) object.
## <a name="settings-and-return-values"></a>Settings and return values
Sets a **String** value or a [Command](./command-object-ado.md) object reference; returns only a **String** value that indicates the source of the **Recordset**.
## <a name="remarks"></a>Remarks
Use the **Source** property to specify a data source for a **Recordset** object using one of the following: a **Command** object variable, an SQL statement, a stored procedure, or a table name.
If you set the **Source** property to a **Command** object, the [ActiveConnection](./activeconnection-property-ado.md) property of the **Recordset** object will inherit the value of the **ActiveConnection** property of the specified **Command** object. However, reading the **Source** property does not return a **Command** object. Instead, it returns the [CommandText](./commandtext-property-ado.md) property of the **Command** object to which you set the **Source** property.
If the **Source** property is an SQL statement, a stored procedure, or a table name, you can optimize performance by passing the appropriate *Options* argument to the [Open](./open-method-ado-recordset.md) method call.
The **Source** property is read/write for closed **Recordset** objects, and read-only for open **Recordset** objects.
## <a name="applies-to"></a>Applies to
[Recordset object (ADO)](./recordset-object-ado.md)
## <a name="see-also"></a>See also
[Source property example (Visual Basic)](./source-property-example-vb.md)
[Source property (ADO Error)](./source-property-ado-error.md)
[Source property (ADO Record)](./source-property-ado-record.md)
536621f415269150e36da8fe41781aa4f5c68020 | 19 | md | Markdown | README.md | meizekang/primary | 3f4b0c0ec73572cdfd199ebe35b906c146dce91f | [
"MIT"
] | 2 | 2019-11-04T02:52:37.000Z | 2019-11-04T07:43:58.000Z | README.md | meizekang/primary | 3f4b0c0ec73572cdfd199ebe35b906c146dce91f | [
"MIT"
] | 1 | 2021-03-09T22:03:52.000Z | 2021-03-09T22:03:52.000Z | README.md | meizekang/primary | 3f4b0c0ec73572cdfd199ebe35b906c146dce91f | [
"MIT"
] | null | null | null | # primary
practice
| 6.333333 | 9 | 0.789474 | eng_Latn | 0.898108 |
5366a26bf04000077256b50edc68f77fa840fa67 | 1,833 | md | Markdown | README.md | coshx/antares | eab37b35c32824bfeb52e8283a80414cbbc9a96a | [
"MIT"
] | null | null | null | README.md | coshx/antares | eab37b35c32824bfeb52e8283a80414cbbc9a96a | [
"MIT"
] | 2 | 2018-11-19T00:08:30.000Z | 2018-12-21T19:15:09.000Z | README.md | coshx/antares | eab37b35c32824bfeb52e8283a80414cbbc9a96a | [
"MIT"
] | null | null | null | # Antares
Explainable AI through decision trees.
## What is it?
Antares allows maintainters of AI systems to offer end-users explanations for classification and regression results.
# Setup
## API Setup with Pipenv
1. Install [pipenv](https://pipenv.readthedocs.io/en/latest/install/).
1. Run `pipenv install` from the project's root.
1. Activate the virtualenv before developing using `pipenv shell`
## Secret Key setup
Copy the `config.ini.example` file and create a `config.ini` file. These secrets are only valid for the dev environment.
## Setup Redis
Ensure you have a version of Redis installed on your computer. You can find a guide on how to install [here](https://redis.io/topics/quickstart). If you are on OSX we recommend using homebrew, a guide can be found [here](https://medium.com/@petehouston/install-and-config-redis-on-mac-os-x-via-homebrew-eb8df9a4f298).
Once installed you can start your Redis server using this command `redis-server /usr/local/etc/redis.conf` and can ensure it is running via `redis-cli ping`. If you get the response `PONG` the server is running!
Finally, ensure the redis python client is pointing to the right server. If you haven't configured anything in a custom manner for the installation, the default configuration should be good. Otherwise you will have to change the port, and password field.
## Setup Web Server
Run `yarn` from the `webapp` directory and issue `yarn start` to start the development server at https://localhost:3000
# Tests
The API uses [pytest](https://docs.pytest.org/en/latest/) to run tests. From the project's root, run:
pytest
# API Documentation
The API documentation can be found [here](https://docs.google.com/document/d/1CQLR_zFgXHEbdwGeiKLSJD_VmxccLGciM70mgwdi3rc/edit)
# Linting
## ESLint
To run ESLint use `yarn lint`.
| 48.236842 | 317 | 0.764866 | eng_Latn | 0.980305 |
536723da0547970ea377b1083feb933417069c9e | 5,928 | md | Markdown | README.md | nesilin/evolution_TALL_adults | f36d6ebaeb43376096c14fc9ca20116bc2febae6 | [
"Apache-2.0"
] | null | null | null | README.md | nesilin/evolution_TALL_adults | f36d6ebaeb43376096c14fc9ca20116bc2febae6 | [
"Apache-2.0"
] | null | null | null | README.md | nesilin/evolution_TALL_adults | f36d6ebaeb43376096c14fc9ca20116bc2febae6 | [
"Apache-2.0"
] | 1 | 2021-03-26T15:38:53.000Z | 2021-03-26T15:38:53.000Z | CODE OF THE ANALYSIS PERFORMED IN THE T-ALL RELAPSE EVOLUTION IN ADULT PATIENTS PROJECT
---
[](https://zenodo.org/badge/latestdoi/305708660)
*The code is organized in the following folders:*
filters/
Here are the scripts that are used at the beginning to process the VCF to a MAF
modules/
Here are some functions and data (e.g. such as lists and dictionaries of the colors used) recurrently used along the analysis that can be imported.
Must be added to you $PYTHONPATH
processing/
Here are some jupyter-notebooks that were used in some intermediate step between the filtered MAFs and the final results as a figures
notebook_figures/
Here are jupyter-notebooks generating the figures of the paper
mut_rate_models/
Here is the code used for the simulations and figures of the mutation rate increment models
ext_runs/
Here are the scripts showing how external computational tools and software were run
*Extra folders and files:*
ext_files/
Files that are necessary to run certain part of the analysis such as files required to run external tools
intermediate_files/
Files generated at some point of the analysis that are the result of some intermediate
*Workspace*
All patients had al least a tumoral and normal sample and, in some cases, two tumoral (primary and relapse) samples and a normal sample. Therefore, the data was store as:
```
cohort/
patientID/
DxTumorID_vs_normalID/
ReTumorID_vs_normalID/ (sometimes)
```
This working directory structure was used in the code
**STEPS TO REPRODUCE THE ANALYSIS**
The following intructions are in the right order to obtain the same results
ALIGNMENT AND CALLING
1. Sarek pipeline v2.2.1 (intruccions to install and run --> https://github.com/nf-core/sarek)
Run preprocessing step and Strelka in variantcalling step
2. FACETS (v0.5.6)
```ext_runs/run_FACETS/facets_cnv_wgs_hg19.sh```
the README file within the run_FACETS/ directory shows how to install it
3. DELLY (v.0.7.9)
```ext_runs/run_delly/run_delly.sh```
there is a yml file to install an enviroment with delly for conda
FILTER FROM STRELKA VCFs
The following list of scripts from filters/ shows the order in which the scripts were executed to obtain the MAF files
1. ```filters/process_strelka_v2.9.3_vcf.py```
2. ```filters/refish_shared.py```
3. ```filters/check_for_missed_MNVs_muts.py```
4. ```ext_runs/run_vep/run_vep_cannonical.sh```
5. ```filters/process_vep.py```
6. ```filters/filter_snps_maf.py```
7. ```filters/clonal_classification_maf.py```
Within ```ext_runs/run_vep``` there is a README file with instructions on how to install vep.
The python scripts can be run within a conda environment of python 3.7. There is a yml file to build de environment (environment.yml)
```conda env create -f environment.yml```
PROCESSING
1. Run IntOGen v20191009
In the FAQ section of IntOGen are the instructions to install it and run it locally https://www.intogen.org/faq
2. Processing of IntOGen results. Inspect list of driver genes and filter out possible FP
```processing/drivers_intogen.ipynb```
3. Get a list of protein affecting mutations (SNVs and InDels) in driver genes
processing/
driver_mutations_primary_ALL.ipynb
driver_mutations_TALL.ipynb
4. Processing FACETS results, get private and shared copy number changes
```processing/cnv_overlapping_segments.ipynb```
5. Annotate CNV. Add cytobands and check for driver gains and losses
processing/
annotate_cnv_TALL.ipynb
driver_cnv_TALL.ipynb
6. Process SV variants and get the driver known ones
processing/
SV_parser.ipynb
driver_SV_TALL.ipynb
7. To get exons of NOTCH1 the mutations within this gene were re-annotated with VEP
```processing/re-annotate_NOTCH1_muts.ipynb```
8. Run deconstructSigs to fit known signatures
Inside the folder ```ext_runs/run_deconstructSig``` there is a README.txt file on how it was run
FIGURES AND RESULTS
The jupyter-notebooks within ```notebook_figures/``` generate the figures of the paper separately.
The final figures were layed out with SVG editable software.
Figure 1
```
notebook_figures/
number_mutations_cohort.ipynb
UMAP_mutational_profile.ipynb
signature_barplots_by_cohort.ipynb
clustering_driver_combinations.ipynb
```
Figure 2
```
notebook_figures/
table_driver_alterations_TALL.ipynb (also contains supplementary figure)
NOTCH1_needle_plot.ipynb
NOTCH1_pathway_change.ipynb
```
Figure 3
```
notebook_figures/
counts_with_probability_exposure_assignments_phylotree.ipynb
signature_barplots_evolution_part.ipynb
```
(```signature_barplots_evolution_part.ipynb``` also contains supplementary figures)
Figure 4
```
notebook_figures/mut_rate_test_&_plot.ipynb
```
(also contains supplementary figures)
(missing figures are generated with R code in ```mut_rate_models/mutation_models.R```)
Figure 5
```
notebook_figures/relapse_evolution_with_doubling_estimates.ipynb
```
(missing figures are generated with R code in ```mut_rate_models/relapse_fixation.R```)
Additional files with supplementary figures:
_Additional 3_
```
notebook_figures/
barplot_recalling_comparative.ipynb
barplot_snps_filter.ipynb
ccf_clonality_scatterplots.ipynb
```
_Additional 1_
```
notebook_figures/
table_driver_mutations_primary_ALL.ipynb
clinical_plot.ipynb
plot_total_minor_copy_number.ipynb
relapse_growth_model.ipynb
```
MUTATION RATE MODELS
The code in this section is written in R and can be found in ```mut_rate_models/mutation_models.R```.
The Rscript performs the simulations and the figures with the summary of the results
TUMOR GROWTH SIMULATIONS
The simulations of the tumor growth were performed with Clonex: https://github.com/gerstung-lab/clonex.
The Rscript in ```mut_rate_models/relapse_fixation.R``` takes the results of Clonex and outputs the figures of the paper
| 27.830986 | 170 | 0.780364 | eng_Latn | 0.953141 |
5368339c3ba8736ed08af818c0822f6513457f53 | 3,126 | md | Markdown | explorer/rest-api.md | mihongtech/linkchain-monitor | 69f8143fc193a611927fc7ac61b1956cb6792244 | [
"MIT"
] | 2 | 2019-04-15T08:03:47.000Z | 2020-05-12T01:26:29.000Z | explorer/rest-api.md | mihongtech/linkchain-monitor | 69f8143fc193a611927fc7ac61b1956cb6792244 | [
"MIT"
] | 5 | 2021-03-09T02:41:31.000Z | 2022-02-26T10:25:50.000Z | explorer/rest-api.md | mihongtech/linkchain-monitor | 69f8143fc193a611927fc7ac61b1956cb6792244 | [
"MIT"
] | null | null | null | linkchain explorer Restful API 接口文档
===
## 1. 链状态相关API接口 ##
### 1.1 查询链的状态Overview ###
URL:
GET /api/v1/explorer/linkchian/overview
PARAM:
无
RETURN:
Failed:
HTTP.CODE 500
HTTP.BODY {"error_msg": "error msg..."}
Success:
HTTP.CODE 200
HTTP.BODY {"overview": {"blockHeight":"4545", "authNodeCount":"54", "followNodeCount":"3456", "lastHourTxs":"456456"},"statistics": [{"time":"159000000", "txCount":"345"},{"time":"158012345", "txCount":"145"}]}
## 2. 商家状态相关API接口 ##
### 2.1 查询商家的状态Overview ###
URL:
GET /api/v1/explorer/business/overview
PARAM:
无
RETURN:
Failed:
HTTP.CODE 500
HTTP.BODY {"error_msg": "error msg..."}
Success:
HTTP.CODE 200
HTTP.BODY {"overview": {"businessCount":"45", "userCount":"1154", "lastHourTxs":"456456"},"statistics": [{"name":"稻香村", "txCount":"345"},{"name":"稻花香", "txCount":"145"}]}
### 2.2 查询商家列表 ###
URL:
GET /api/v1/explorer/business/list
PARAM:
无
RETURN:
Failed:
HTTP.CODE 500
HTTP.BODY {"error_msg": "error msg..."}
Success:
HTTP.CODE 200
HTTP.BODY {"list": [{"name":"稻香村", "addrss","ox435345445345","createTime":"1593454354", "userCount":"34543", "balance":"43242", "releasedTokens":"43543"},{"name":"稻花香", "addrss","ox435345445345","createTime":"1593454354","userCount":"34543", "balance":"43242", "releasedTokens":"43543"}]}
### 2.3 查询商家信息 ###
URL:
GET /api/v1/explorer/business/info/{address}
PARAM:
address 必选输入 商家地址
RETURN:
Failed:
HTTP.CODE 500
HTTP.BODY {"error_msg": "error msg..."}
Success:
HTTP.CODE 200
HTTP.BODY {"info": {"name":"稻香村", "addrss","ox435345445345","createTime":"1593454354", "userCount":"34543", "balance":"43242", "releasedTokens":"43543", "rate":"5", "lastHourTxs":"45345345","owner":"0x345345435", "operator":"0x766575675"}}
## 3. 节点状态相关API接口 ##
### 3.1 查询链的节点信息 ###
URL:
GET /api/v1/explorer/linkchian/nodes
PARAM:
无
RETURN:
Failed:
HTTP.CODE 500
HTTP.BODY {"error_msg": "error msg..."}
Success:
HTTP.CODE 200
HTTP.BODY {"followNodes": [{"region":"025", "count":"345", "percentage":"78%"},{"region":"010", "count":"105", "percentage":"22%"}], "authNodes":[{"ip":"10.1.12.3", "address":"0x23424324", "blockHeight":"453453", "region":"010"}, {"ip":"10.1.12.5", "address":"0x111123424324", "blockHeight":"453453", "region":"021"}]}
### 3.2 Query an authoritative node's information ###
URL:
GET /api/v1/explorer/linkchian/authnodes/{node}
PARAM:
node    required; IP address of the authoritative node
RETURN:
Failed:
HTTP.CODE 500
HTTP.BODY {"error_msg": "error msg..."}
Success:
HTTP.CODE 200
HTTP.BODY {"baseInfo": {"ip":"10.1.12.3", "os":"Centos 7", "sysTime":"1590001000", "runningTime":"14 day 13 hours", "gethVersion":"v1.8.3-stable", "diskUsage":"78%","memUsage":"34%", "cpuUsage":"45%"}, "linkchainInfo":{"address":"0x23424324324", "blockHeight":"3453543", "nextBlockTime":"1594353535","blockDiff":"1"}} | 25.622951 | 324 | 0.580934 | yue_Hant | 0.394762 |
---
title: Understand how Azure AD provisioning works | Microsoft Docs
description: Understand how Azure AD provisioning works
services: active-directory
author: kenwith
manager: daveba
ms.service: active-directory
ms.subservice: app-provisioning
ms.topic: conceptual
ms.workload: identity
ms.date: 11/04/2020
ms.author: kenwith
ms.reviewer: arvinh
ms.custom: contperf-fy21q2
---
# <a name="how-provisioning-works"></a>How provisioning works
Automatic provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Before you start a deployment, you can review this article to learn how Azure AD provisioning works and get configuration recommendations.
The **Azure AD Provisioning Service** provisions users to SaaS apps and other systems by connecting to a System for Cross-Domain Identity Management (SCIM) 2.0 user management API endpoint provided by the application vendor. This SCIM endpoint allows Azure AD to create, update, and remove users. For selected applications, the provisioning service can also create, update, and remove additional identity-related objects, such as groups and roles. The channel used for provisioning between Azure AD and the application is encrypted using HTTPS TLS 1.2 encryption.

*Figure 1: The Azure AD Provisioning Service*

*Figure 2: "Outbound" user provisioning workflow from Azure AD to popular SaaS applications*

*Figure 3: "Inbound" user provisioning workflow from popular Human Capital Management (HCM) applications to Azure Active Directory and Windows Server Active Directory*
## <a name="provisioning-using-scim-20"></a>Provisionamento utilizando SCIM 2.0
O serviço de fornecimento Azure AD utiliza o [protocolo SCIM 2.0](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/bg-p/IdentityStandards) para o provisionamento automático. O serviço liga-se ao ponto final do SCIM para a aplicação, e utiliza o esquema de objetos de utilizador SCIM e as APIs REST para automatizar o fornecimento e desavisionamento de utilizadores e grupos. Um conector de provisionamento baseado no SCIM é fornecido para a maioria das aplicações na galeria Azure AD. Ao construir aplicativos para Azure AD, os desenvolvedores podem usar a API de gestão de utilizadores SCIM 2.0 para construir um ponto final SCIM que integra Azure AD para provisionamento. Para mais informações, consulte [build a SCIM endpoint e configurar o provisionamento do utilizador](../app-provisioning/use-scim-to-provision-users-and-groups.md).
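For illustration, when the service creates a user it issues an HTTP POST against the application's SCIM `/Users` endpoint. A minimal sketch of such a request is shown below (the endpoint path and attribute values are placeholders, and the exact attributes sent depend on your attribute mappings):

```http
POST /scim/Users HTTP/1.1
Content-Type: application/scim+json

{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "alice@contoso.com",
  "displayName": "Alice Example",
  "active": true
}
```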
To request an automatic Azure AD provisioning connector for an app that doesn't currently have one, fill out an [Azure Active Directory Application Request](https://aka.ms/aadapprequest).
## <a name="authorization"></a>Authorization
Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you'll need to enter valid credentials. For gallery applications, you can find credential types and requirements by referring to the application's tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you'll be able to test the credentials by having Azure AD attempt to connect to the app's provisioning endpoint using the supplied credentials.
## <a name="mapping-attributes"></a>Atributos de mapeamento
Quando permite o provisionamento do utilizador para uma aplicação SaaS de terceiros, o portal Azure controla os seus valores de atributos através de mapeamentos de atributos. Os mapeamentos determinam os atributos do utilizador que fluem entre o Azure AD e a aplicação-alvo quando as contas do utilizador são a provisionadas ou atualizadas.
Existe um conjunto pré-configurado de atributos e mapeamentos de atributos entre objetos de utilizador AZure AD e objetos de utilizador de cada aplicação SaaS. Algumas aplicações gerem outros tipos de objetos juntamente com os Utilizadores, como grupos.
Ao configurar o provisionamento, é importante rever e configurar os mapeamentos de atributos e fluxos de trabalho que definem quais as propriedades do utilizador (ou grupo) que fluem do Azure AD para a aplicação. Reveja e configuure a propriedade correspondente **(Match objects using this attribute**) que é usada para identificar e combinar exclusivamente utilizadores/grupos entre os dois sistemas.
Pode personalizar os mapeamentos de atributos padrão de acordo com as necessidades do seu negócio. Assim, pode alterar ou eliminar os mapeamentos de atributos existentes ou criar novos mapeamentos de atributos. Para mais informações, consulte [personalizar o fornecimento de mapeamentos de atributos para aplicações SaaS](./customize-application-attributes.md).
Ao configurar o fornecimento de uma aplicação SaaS, um dos tipos de mapeamentos de atributos que pode especificar é um mapeamento de expressão. Para estes mapeamentos, deve escrever uma expressão semelhante a um script que lhe permita transformar os dados dos seus utilizadores em formatos mais aceitáveis para a aplicação SaaS. Para mais detalhes, consulte [as expressões de escrita para mapeamentos de atributos](functions-for-customizing-application-data.md).
## <a name="scoping"></a>Escoar
### <a name="assignment-based-scoping"></a>Escagem baseada em atribuição
Para o fornecimento de saída da Azure AD a uma aplicação SaaS, confiar nas [atribuições](../manage-apps/assign-user-or-group-access-portal.md) de utilizador ou grupo é a forma mais comum de determinar quais os utilizadores que estão em possibilidade de provisão. Uma vez que as atribuições do utilizador também são utilizadas para permitir uma única inscrição, o mesmo método pode ser utilizado para gerir tanto o acesso como o provisionamento. A deteção baseada em atribuição não se aplica a cenários de provisionamento de entrada, tais como Workday e Successfactors.
* **Os grupos.** Com um plano de licença Azure AD Premium, pode utilizar grupos para atribuir acesso a uma aplicação SaaS. Em seguida, quando o âmbito de provisionamento é definido apenas para **utilizadores e grupos atribuídos,** o serviço de fornecimento de AD AZure irá prestar ou desatar utilizadores com base no facto de serem membros de um grupo que está atribuído à aplicação. O objeto de grupo em si não é a provisionado a menos que a aplicação suporte objetos de grupo. Certifique-se de que os grupos atribuídos à sua aplicação têm a propriedade "SecurityEnabled" definida como "True".
* **Grupos dinâmicos.** O serviço de fornecimento de utilizadores Azure AD pode ler e prestar aos utilizadores em [grupos dinâmicos.](../enterprise-users/groups-create-rule.md) Tenha em mente estas ressalvas e recomendações:
* Os grupos dinâmicos podem impactar o desempenho do fornecimento de ponta a ponta das aplicações Azure AD para SaaS.
* A rapidez com que um utilizador num grupo dinâmico é a provisionado ou desavisionado numa aplicação SaaS depende da rapidez com que o grupo dinâmico pode avaliar as mudanças de adesão. Para obter informações sobre como verificar o estado de processamento de um grupo dinâmico, consulte o estado de [processamento de verificação de uma regra de adesão](../enterprise-users/groups-create-rule.md).
* Quando um utilizador perde a adesão ao grupo dinâmico, é considerado um evento de desavisionamento. Considere este cenário ao criar regras para grupos dinâmicos.
* **Grupos aninhados.** O serviço de fornecimento de utilizadores Azure AD não pode ler ou providenciar utilizadores em grupos aninhados. O serviço só pode ler e providenciar aos utilizadores que sejam membros imediatos de um grupo explicitamente designado. Esta limitação de "atribuições baseadas em grupo a aplicações" também afeta uma única sessão de sessão (ver [Utilização de um grupo para gerir o acesso às aplicações saaS).](../enterprise-users/groups-saasapps.md) Em vez disso, atribua diretamente ou de outra forma [o âmbito nos](define-conditional-rules-for-provisioning-user-accounts.md) grupos que contêm os utilizadores que precisam de ser provisionados.
### <a name="attribute-based-scoping"></a>Scoping baseado em atributos
Pode utilizar filtros de deteção para definir regras baseadas em atributos que determinam quais os utilizadores que estão a forcê-lo a uma aplicação. Este método é comumente utilizado para o provisionamento de entrada a partir de aplicações de HCM para Azure AD e Ative Directory. Os filtros de deteção são configurados como parte dos mapeamentos de atributos para cada conector de fornecimento de um utilizador Azure AD. Para obter detalhes sobre a configuração de filtros de deteção baseados em atributos, consulte [o provisionamento de aplicações baseados em Atributos com filtros de escoamento](define-conditional-rules-for-provisioning-user-accounts.md).
### <a name="b2b-guest-users"></a>Utilizadores B2B (convidados)
É possível utilizar o serviço de fornecimento de utilizadores Azure AD para a prestação de utilizadores B2B (ou convidados) em Azure AD a aplicações SaaS. No entanto, para que os utilizadores B2B inscrevam-se na aplicação SaaS utilizando a Azure AD, a aplicação SaaS deve ter a sua capacidade de entrada única baseada em SAML configurada de uma forma específica. Para obter mais informações sobre como configurar aplicações SaaS para apoiar os insurretos dos utilizadores B2B, consulte [aplicações Configure SaaS para colaboração B2B](../external-identities/configure-saas-apps.md).
Note que o nome de utilizadorPrincipalName para um utilizador convidado é frequentemente armazenado como "alias#EXT# @domain.com ". quando o utilizadorPrincipalName está incluído nos mapeamentos do seu atributo como um atributo de origem, o #EXT# é retirado do nome do utilizadorPrincipalName. Se necessitar que o #EXT# esteja presente, substitua o nome de utilizadorPrincipalName por originalUserPrincipalName como atributo de origem.
## <a name="provisioning-cycles-initial-and-incremental"></a>Ciclos de provisionamento: Inicial e incremental
Quando o Azure AD é o sistema de origem, o serviço de fornecimento utiliza a [consulta Delta Use para rastrear alterações nos dados do Microsoft Graph](/graph/delta-query-overview) para monitorizar utilizadores e grupos. O serviço de fornecimento executa um ciclo inicial contra o sistema de origem e o sistema alvo, seguido de ciclos incrementais periódicos.
### <a name="initial-cycle"></a>Ciclo inicial
Quando o serviço de prestação de serviços for iniciado, o primeiro ciclo:
1. Consultar todos os utilizadores e grupos do sistema de origem, recuperando todos os atributos definidos nos [mapeamentos](customize-application-attributes.md)do atributo .
2. Filtrar os utilizadores e grupos devolvidos, utilizando [quaisquer atribuições](../manage-apps/assign-user-or-group-access-portal.md) configuradas ou [filtros de scoping baseados em atributos](define-conditional-rules-for-provisioning-user-accounts.md).
3. Quando um utilizador é atribuído ou em possibilidade de provisionamento, o serviço consulta o sistema-alvo de um utilizador correspondente utilizando os [atributos de correspondência especificados](customize-application-attributes.md#understanding-attribute-mapping-properties). Exemplo: Se o nome do utilizadorPrincipal no sistema de origem for o atributo correspondente e os mapas ao nome do utilizador no sistema-alvo, então o serviço de fornecimento consulta o sistema alvo de nomes de utilizador que correspondam aos valores do nome do utilizadorPrincipal no sistema de origem.
4. Se um utilizador correspondente não for encontrado no sistema alvo, é criado utilizando os atributos devolvidos do sistema de origem. Após a criação da conta de utilizador, o serviço de fornecimento deteta e caches o ID do sistema alvo para o novo utilizador. Este ID é usado para executar todas as operações futuras nesse utilizador.
5. Se for encontrado um utilizador correspondente, é atualizado utilizando os atributos fornecidos pelo sistema de origem. Após a correspondência da conta do utilizador, o serviço de fornecimento deteta e caches o ID do sistema alvo para o novo utilizador. Este ID é usado para executar todas as operações futuras nesse utilizador.
6. Se os mapeamentos do atributo contiverem atributos de "referência", o serviço faz atualizações adicionais no sistema alvo para criar e ligar os objetos referenciados. Por exemplo, um utilizador pode ter um atributo "Manager" no sistema alvo, que está ligado a outro utilizador criado no sistema-alvo.
7. Persistir uma marca de água no final do ciclo inicial, que fornece o ponto de partida para os ciclos incrementais posteriores.
Algumas aplicações como ServiceNow, G Suite e Box suportam não só o a provisionamento dos utilizadores, mas também o provisionamento de grupos e seus membros. Nesses casos, se o provisionamento em grupo estiver habilitado nos [mapeamentos,](customize-application-attributes.md)o serviço de fornecimento sincroniza os utilizadores e os grupos e, posteriormente, sincroniza os membros do grupo.
### <a name="incremental-cycles"></a>Ciclos incrementais
Após o ciclo inicial, todos os outros ciclos irão:
1. Consultar o sistema de origem para quaisquer utilizadores e grupos que foram atualizados desde que a última marca de água foi armazenada.
2. Filtrar os utilizadores e grupos devolvidos, utilizando [quaisquer atribuições](../manage-apps/assign-user-or-group-access-portal.md) configuradas ou [filtros de scoping baseados em atributos](define-conditional-rules-for-provisioning-user-accounts.md).
3. Quando um utilizador é atribuído ou em possibilidade de provisionamento, o serviço consulta o sistema-alvo de um utilizador correspondente utilizando os [atributos de correspondência especificados](customize-application-attributes.md#understanding-attribute-mapping-properties).
4. Se um utilizador correspondente não for encontrado no sistema alvo, é criado utilizando os atributos devolvidos do sistema de origem. Após a criação da conta de utilizador, o serviço de fornecimento deteta e caches o ID do sistema alvo para o novo utilizador. Este ID é usado para executar todas as operações futuras nesse utilizador.
5. Se for encontrado um utilizador correspondente, é atualizado utilizando os atributos fornecidos pelo sistema de origem. Se for uma conta recém-atribuída que é correspondida, o serviço de fornecimento deteta e caches o ID do sistema alvo para o novo utilizador. Este ID é usado para executar todas as operações futuras nesse utilizador.
6. Se os mapeamentos do atributo contiverem atributos de "referência", o serviço faz atualizações adicionais no sistema alvo para criar e ligar os objetos referenciados. Por exemplo, um utilizador pode ter um atributo "Manager" no sistema alvo, que está ligado a outro utilizador criado no sistema-alvo.
7. Se um utilizador que já estava no âmbito do provisionamento for removido do âmbito, incluindo não ter sido atribuído, o serviço desativa o utilizador no sistema-alvo através de uma atualização.
8. Se um utilizador que já estava no âmbito do provisionamento for desativado ou eliminado suavemente no sistema de origem, o serviço desativa o utilizador no sistema-alvo através de uma atualização.
9. Se um utilizador que já estava no âmbito de provisão for eliminado no sistema de origem, o serviço elimina o utilizador no sistema-alvo. No Azure AD, os utilizadores são duramente eliminados 30 dias após serem apagados suavemente.
10. Persistir uma nova marca de água no final do ciclo incremental, que fornece o ponto de partida para os ciclos incrementais posteriores.
> [!NOTE]
> You can optionally disable the **Create**, **Update**, or **Delete** operations by using the **Target object actions** check boxes in the [Mappings](customize-application-attributes.md) section. The logic to disable a user during an update is also controlled via an attribute mapping from a field such as "accountEnabled".
The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs:
- The service is manually stopped using the Azure portal, or using the appropriate Microsoft Graph API command.
- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again.
- A new initial cycle is triggered because of a change in attribute mappings or scoping filters. This action also clears any stored watermark and causes all source objects to be evaluated again.
- The provisioning process goes into quarantine (see below) because of a high error rate, and stays in quarantine for more than four weeks. In this case, the service will be automatically disabled.
### <a name="errors-and-retries"></a>Erros e retrítrios
Se um erro no sistema-alvo impedir que um utilizador individual seja adicionado, atualizado ou eliminado no sistema-alvo, a operação é novamente experimentada no ciclo de sincronização seguinte. Se o utilizador continuar a falhar, as retró quais vão começar a ocorrer numa frequência reduzida, reduzindo gradualmente para apenas uma tentativa por dia. Para resolver a falha, os administradores devem verificar os [registos de provisionamento](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context) para determinar a causa raiz e tomar as medidas adequadas. Falhas comuns podem incluir:
- Utilizadores que não tenham um atributo preenchido no sistema de origem que é necessário no sistema alvo
- Os utilizadores que têm um valor de atributo no sistema de origem para o qual há uma restrição única no sistema alvo, e o mesmo valor está presente noutro registo de utilizador
Resolver estas falhas ajustando os valores de atributos para o utilizador afetado no sistema de origem, ou ajustando os mapeamentos do atributo para que não causem conflitos.
### <a name="quarantine"></a>Quarentena
Se a maioria ou todas as chamadas que são feitas contra o sistema alvo falharem consistentemente devido a um erro (por exemplo, credenciais de administração inválidas) o trabalho de provisionamento entra num estado de "quarentena". Este estado é indicado no relatório de resumo do [provisionamento](./check-status-user-account-provisioning.md) e via e-mail se as notificações por e-mail foram configuradas no portal Azure.
Quando em quarentena, a frequência dos ciclos incrementais é gradualmente reduzida para uma vez por dia.
O trabalho de provisionamento sai em quarentena depois de todos os erros ofensivos serem corrigidos e o ciclo de sincronização seguinte começar. Se o trabalho de provisionamento permanecer em quarentena por mais de quatro semanas, o trabalho de provisionamento é deficiente. Saiba mais aqui sobre o estado de quarentena [aqui.](./application-provisioning-quarantine-status.md)
### <a name="how-long-provisioning-takes"></a>O tempo que o aprovisionamento demora
O desempenho depende se o seu trabalho de provisionamento está a executar um ciclo inicial de provisionamento ou um ciclo incremental. Para obter mais informações sobre o tempo de fornecimento e como monitorizar o estado do serviço de avisão, consulte [verificar o estado do fornecimento do utilizador](application-provisioning-when-will-provisioning-finish-specific-user.md).
### <a name="how-to-tell-if-users-are-being-provisioned-properly"></a>Como saber se os utilizadores estão a ser a provisionados corretamente
Todas as operações executadas pelo serviço de fornecimento de utilizadores são registadas nos registos de Provisionamento Azure AD [(pré-visualização)](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). Os registos incluem todas as operações de leitura e escrita feitas para os sistemas de origem e alvo, bem como os dados do utilizador que foram lidos ou escritos durante cada operação. Para obter informações sobre como ler os registos de provisionamento no portal Azure, consulte o [guia de informação sobre](./check-status-user-account-provisioning.md)o provisionamento .
## <a name="de-provisioning"></a>Desesvisão
O serviço de fornecimento de Azure AD mantém os sistemas de origem e alvo sincronizados através da desavisionamento de contas quando o acesso ao utilizador é removido.
O serviço de prestação suporta a eliminação e a desativação (por vezes designada por eliminação suave) dos utilizadores. A definição exata de desativação e eliminação varia em com base na implementação da aplicação-alvo, mas geralmente um desativado indica que o utilizador não pode iniciar sação. Uma eliminação indica que o utilizador foi completamente removido da aplicação. Para aplicações SCIM, um disable é um pedido para definir a propriedade *ativa* para falso em um utilizador.
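For illustration, such a SCIM soft delete is typically a PATCH request like the following (the endpoint path and user ID are placeholders):

```http
PATCH /scim/Users/48af03ac-33df-48f4-b4be-5b6e508df3db HTTP/1.1
Content-Type: application/scim+json

{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  "Operations": [
    { "op": "replace", "path": "active", "value": false }
  ]
}
```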
**Configure your application to disable a user**
Make sure that you've selected the checkbox for updates.
Make sure that you have the *active* mapping for your application. If you use an app from the app gallery, the mapping may be slightly different. Make sure that you use the default / out-of-the-box mapping for gallery apps.
:::image type="content" source="./media/how-provisioning-works/disable-user.png" alt-text="Disable a user" lightbox="./media/how-provisioning-works/disable-user.png":::
**Configure your application to delete a user**
The following scenarios will trigger a disable or a delete:
* A user is soft-deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false).
  30 days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
* A user is permanently deleted / removed from the recycle bin in Azure AD.
* A user is unassigned from an app.
* A user goes from in scope to out of scope (no longer passes a scoping filter).
:::image type="content" source="./media/how-provisioning-works/delete-user.png" alt-text="Delete a user" lightbox="./media/how-provisioning-works/delete-user.png":::
By default, the Azure AD provisioning service deletes or disables users that go out of scope. If you want to override this default behavior, you can set a flag to [skip out-of-scope deletions](skip-out-of-scope-deletions.md).
If one of the four events above occurs and the target application doesn't support soft deletes, the provisioning service sends a DELETE request to permanently delete the user from the app.
If you see an IsSoftDeleted attribute in your attribute mappings, it's used to determine the user's state and whether to send an update request with active = false to soft-delete the user.
**Known limitations**
* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app, we send a disable request. At that point, the user isn't managed by the service, and we won't send a delete request when they're deleted from the directory.
* Provisioning a user that's disabled in Azure AD isn't supported. They must be active in Azure AD before they're provisioned.
* When a user goes from soft-deleted to active, the Azure AD provisioning service activates the user in the target app, but won't automatically restore the group memberships. The target application should maintain the group memberships for the user while they're inactive. If the target application doesn't support this, you can restart provisioning to update the group memberships.
**Recommendation**
When developing an application, always support both soft deletes and hard deletes. It allows customers to recover when a user is accidentally disabled.
## <a name="next-steps"></a>Passos Seguintes
[Planear uma implementação de aprovisionamento automático de utilizadores](../app-provisioning/plan-auto-user-provisioning.md)
[Configurar o aprovisionamento para uma aplicação da galeria](./configure-automatic-user-provisioning-portal.md)
[Construa um ponto final SCIM e configuure o provisionamento ao criar a sua própria app](../app-provisioning/use-scim-to-provision-users-and-groups.md)
[Problemas de resolução de problemas com configuração e provisionamento de utilizadores a uma aplicação](./application-provisioning-config-problem.md).
---
title: Quickstart for learning how to use Azure App Configuration
description: A quickstart for using Azure App Configuration in a Java Spring app.
services: azure-app-configuration
documentationcenter: ''
author: lisaguthrie
manager: maiye
editor: ''
ms.service: azure-app-configuration
ms.topic: quickstart
ms.date: 04/18/2020
ms.custom: devx-track-java
ms.author: lcozzens
---
# <a name="quickstart-create-a-java-spring-app-with-azure-app-configuration"></a>クイック スタート:Azure App Configuration を使用して Java Spring アプリを作成する
このクイック スタートでは、コードとは別にアプリケーション設定のストレージと管理を一元化するために、Azure App Configuration を Java Spring アプリに組み込みます。
## <a name="prerequisites"></a>前提条件
- Azure サブスクリプション - [無料アカウントを作成する](https://azure.microsoft.com/free/)
- バージョン 8 を含む、サポートされている [Java Development Kit (JDK)](https://docs.microsoft.com/java/azure/jdk)。
- [Apache Maven](https://maven.apache.org/download.cgi) バージョン 3.0 以降。
## <a name="create-an-app-configuration-store"></a>App Configuration ストアを作成する
[!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-create.md)]
6. **[構成エクスプローラー]** > **[+ 作成]** > **[キー値]** の順に選択して、次のキーと値のペアを追加します。
| Key | Value |
|---|---|
| /application/config.message | こんにちは |
**[ラベル]** と **[コンテンツの種類]** は、現時点では空にしておきます。
7. **[適用]** を選択します。
## <a name="create-a-spring-boot-app"></a>Spring Boot アプリを作成する
[Spring Initializr](https://start.spring.io/) を使用して、新しい Spring Boot プロジェクトを作成します。
1. <https://start.spring.io/> を参照します。
1. 次のオプションを指定します。
- **Java** で **Maven** プロジェクトを生成します。
- **Spring Boot** のバージョンとして、2.0 以降を指定します。
- アプリケーションの**グループ (Group)** と**成果物 (Artifact)** の名前を指定します。
- **Spring Web** の依存関係を追加します。
1. 前の各オプションを指定してから、 **[プロジェクトの生成]** を選択します。 メッセージが表示されたら、ローカル コンピューター上のパスにプロジェクトをダウンロードします。
## <a name="connect-to-an-app-configuration-store"></a>App Configuration ストアに接続する
1. ファイルをローカル システム上に展開したら、シンプルな Spring Boot アプリケーションの編集を開始できます。 アプリのルート ディレクトリで *pom.xml* ファイルを探します。
1. テキスト エディターで *pom.xml* ファイルを開き、Spring Cloud Azure Config スターターを `<dependencies>` のリストに追加します。
**Spring Cloud 1.1.x**
```xml
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
<version>1.1.5</version>
</dependency>
```
**Spring Cloud 1.2.x**
```xml
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
<version>1.2.7</version>
</dependency>
```
1. Create a new Java file named *MessageProperties.java* in the package directory of your app. Add the following lines:
```java
package com.example.demo;
import org.springframework.boot.context.properties.ConfigurationProperties;
@ConfigurationProperties(prefix = "config")
public class MessageProperties {
private String message;
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
}
```
1. Create a new Java file named *HelloController.java* in the package directory of your app. Add the following lines:
```java
package com.example.demo;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class HelloController {
private final MessageProperties properties;
public HelloController(MessageProperties properties) {
this.properties = properties;
}
@GetMapping
public String getMessage() {
return "Message: " + properties.getMessage();
}
}
```
1. Open the main application Java file, and add `@EnableConfigurationProperties` to enable this feature.
```java
import org.springframework.boot.context.properties.EnableConfigurationProperties;
@SpringBootApplication
@EnableConfigurationProperties(MessageProperties.class)
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
```
1. Create a new file named `bootstrap.properties` in the resources directory of your app, and add the following line to the file. Replace the sample value with the appropriate properties for your App Configuration store.
```CLI
spring.cloud.azure.appconfiguration.stores[0].connection-string= ${APP_CONFIGURATION_CONNECTION_STRING}
```
1. Set an environment variable named **APP_CONFIGURATION_CONNECTION_STRING** to the access key for your App Configuration store. At the command line, run the following command, then restart the command prompt to allow the change to take effect:
```cmd
setx APP_CONFIGURATION_CONNECTION_STRING "connection-string-of-your-app-configuration-store"
```
If you use Windows PowerShell, run the following command:
```azurepowershell
$Env:APP_CONFIGURATION_CONNECTION_STRING = "connection-string-of-your-app-configuration-store"
```
If you use macOS or Linux, run the following command:
```cmd
export APP_CONFIGURATION_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
```
## <a name="build-and-run-the-app-locally"></a>アプリをビルドしてローカルで実行する
1. Spring Boot アプリケーションを Maven でビルドし、実行します。次に例を示します。
```cmd
mvn clean package
mvn spring-boot:run
```
2. After your application is running, you can use *curl* to test it, for example:
```cmd
curl -X GET http://localhost:8080/
```
You should see the message that you entered in the App Configuration store.
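With the key-value created earlier, the response should look like this:

```output
Message: Hello
```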
## <a name="clean-up-resources"></a>リソースをクリーンアップする
[!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)]
## <a name="next-steps"></a>次のステップ
このクイックスタートでは、新しい App Configuration ストアを作成して、Java Spring アプリで使用しました。 詳細については、「[Azure の Spring](https://docs.microsoft.com/java/azure/spring-framework/)」を参照してください。 Java Spring アプリで構成設定を動的に更新できるようにする方法については、次のチュートリアルに進んでください。
> [!div class="nextstepaction"]
> [Enable dynamic configuration](./enable-dynamic-configuration-java-spring-app.md)
# Qaintessent.jl Documentation
## Table of Contents
```@contents
Pages = ["index.md", "circuit.md", "gates.md", "densitymatrices.md", "gradients.md", "qasm.md", "view.md"]
```
```@meta
CurrentModule = Qaintessent
```
## Index
```@index
```
| 15.25 | 106 | 0.635246 | kor_Hang | 0.194167 |
5369020bd0a6c12d90bbe7cf276e896a1224bc0e | 60,939 | md | Markdown | README.md | iwgx/spectacle | 87c4656ec7240fba6a10cfa0c29604ab5c8356cb | [
"MIT"
] | null | null | null | README.md | iwgx/spectacle | 87c4656ec7240fba6a10cfa0c29604ab5c8356cb | [
"MIT"
] | 7 | 2021-03-01T20:57:07.000Z | 2022-02-26T20:33:52.000Z | README.md | luckened/js-presentation | 60cb5b9a8ef405aaa9498873f17253bfc0bdda9d | [
"MIT"
] | null | null | null | # Spectacle
[![Travis Status][trav_img]][trav_site]
[![Maintenance Status][maintenance-image]](#maintenance-status)
ReactJS based Presentation Library
[Spectacle Boilerplate MDX](https://github.com/FormidableLabs/spectacle-boilerplate-mdx/)
[Spectacle Boilerplate](https://github.com/FormidableLabs/spectacle-boilerplate/)
Looking for a quick preview of what you can do with Spectacle? Check out our live Demo Deck [here](https://raw.githack.com/FormidableLabs/spectacle/master/one-page.html#/).
Have a question about Spectacle? Submit an issue in this repository using the "Question" template.
## Contents
<!-- MarkdownTOC depth=4 autolink=true bracket=round autoanchor=true -->
- [Getting Started](#getting-started)
- [Classic Spectacle](#classic-spectacle)
- [Spectacle MDX](#spectacle-mdx)
- [One Page](#one-page)
- [Development](#development)
- [Build & Deployment](#build--deployment)
- [Presenting](#presenting)
- [Controls](#controls)
- [Fullscreen](#fullscreen)
- [PDF Export](#pdf-export)
- [Basic Concepts](#basic-concepts)
- [Main file](#main-file)
- [Themes](#themes)
- [createTheme(colors, fonts)](#createthemecolors-fonts)
- [FAQ](#faq)
- [Tag API](#tag-api)
- [Main Tags](#main-tags)
- [Deck](#deck)
- [Slide (Base)](#slide-base)
- [Notes](#notes)
- [MarkdownSlides](#markdown-slides)
- [Layout Tags](#layout-tags)
- [Layout](#layout)
- [Fit](#fit)
- [Fill](#fill)
- [Markdown Tag](#markdown-tag)
- [Markdown](#markdown)
- [Magic Tag](#magic-tag)
- [Magic](#magic)
- [Element Tags](#element-tags)
- [Appear](#appear)
- [Anim](#anim)
- [BlockQuote, Quote and Cite (Base)](#blockquote-quote-and-cite-base)
- [CodePane (Base)](#codepane-base)
- [Code (Base)](#code-base)
- [ComponentPlayground](#component-playground)
- [GoToAction (Base)](#go-to-action)
- [Heading (Base)](#heading-base)
- [Image (Base)](#image-base)
- [Link (Base)](#link-base)
- [List & ListItem (Base)](#list--listitem-base)
- [S (Base)](#s-base)
- [Table, TableRow, TableBody, TableHeader, TableHeaderItem and TableItem (Base)](#table-tablerow-tableheaderitem-and-tableitem-base)
- [Text (Base)](#text-base)
- [Typeface](#typeface)
- [Base Props](#base-props)
- [Third Party Extensions](#third-party)
<!-- /MarkdownTOC -->
<a name="getting-started"></a>
## Getting Started
First, decide whether you want to use [classic Spectacle](#classic-spectacle), [Spectacle MDX](#spectacle-mdx), which has all the same functionality but allows you to write your Spectacle presentation in markdown, or only [one HTML page](#one-page).
### Classic Spectacle
There are four ways to get started building your presentation.
1. **Option #1:** Run the following command in your terminal:
`npx create-react-app my-presentation --scripts-version spectacle-scripts`
2. **Option #2:** Using the [Spectacle Boilerplate](https://github.com/FormidableLabs/spectacle-boilerplate).
3. **Option #3:** Following along with the [Spectacle Tutorial](./docs/tutorial.md), which also involves downloading the [Spectacle Boilerplate](https://github.com/FormidableLabs/spectacle-boilerplate).
All three of the above ways will give you everything you'll need to get started, including a sample presentation in the `presentation` folder. You can change the props and tags as needed for your presentation or delete everything in `presentation/index.js` to start from scratch. From here you can go to [Development](#development) to get started.
4. **Option #4:** Run `npm install spectacle` in your terminal and write your own build configurations. We also provide full UMD builds (with a `Spectacle` global variable) of the library at `dist/spectacle.js` and `dist/spectacle.min.js` for more general use cases. You could, for example, include the library via a script tag with: `https://unpkg.com/spectacle@VERSION/dist/spectacle.min.js`.
### Spectacle MDX
Download the [Spectacle MDX Boilerplate](https://github.com/FormidableLabs/spectacle-boilerplate-mdx).
This repository will give you everything you'll need to get started, including a sample presentation in the `presentation` folder. You can change the props and tags as needed for your presentation or delete everything in the `index.mdx` file to start from scratch. From here you can go to [Development](#development) to get started.
_NOTE: We have webpack externals for `react`, `react-dom`, and `prop-types`, so you will need to provide them in your upstream build or something like linking in via `script` tags in your HTML page for all three libraries. This comports with our project dependencies which place these three libraries in `peerDependencies`._
<a name="one-page"></a>
### One Page
To aid with speedy development we've provided a simple boilerplate HTML page with a bespoke script tag that contains your entire presentation. The rest of the setup will take care of transpiling your React/ESnext code, providing Spectacle, React, and ReactDOM libraries, and being raring to go with a minimum of effort.
We can start with this project's sample at [`one-page.html`](./one-page.html). It's the same presentation as the fully-built-from-source version, with a few notable exceptions:
1. There are no `import`s or `require`s. Everything must come from the global namespace. This includes `Spectacle`, `React`, `ReactDOM` and all the Spectacle exports from [`./src/index.js`](./src/index.js) -- `Deck`, `Slide`, `themes`, etc.
2. The presentation must include exactly **one** script tag with the type `text/spectacle` that is a function. Presently, that function is directly inserted inline into a wrapper code boilerplate as a React Component `render` function. The wrapper is transpiled. There should not be any extraneous content around it like outer variables or comments.
**Good** examples:
```html
<script type="text/spectacle">
() => (
<Deck>{/* SLIDES */}</Deck>
)
</script>
```
```html
<script type="text/spectacle">
() => {
// Code-y code stuff in JS...
return (
<Deck>{/* SLIDES */}</Deck>
);
}
</script>
```
**Bad** examples of what not to do:
```html
<script type="text/spectacle">
// Outer comment (BAD)
const outerVariable = "BAD";
() => (
<Deck>{/* SLIDES */}</Deck>
)
</script>
```
3. If you want to create your own theme settings, you can use the following code snippet to change the [themes](#createthemecolors-fonts) default settings.
```html
<script type="text/spectacle">
() => {
const { themes: { defaultTheme } } = Spectacle;
const theme = defaultTheme({
// Change default settings
primary: "blue",
secondary: "red"
},
{
primary: "Helvetica",
});
return (
<Deck transition={['zoom']} theme={theme}>
<Slide>some stuff</Slide>
<Slide>other stuff</Slide>
<Slide>some more stuff</Slide>
</Deck>
);
}
</script>
```
... with those guidelines in mind, here's the boilerplate that you can copy-and-paste into an HTML file and start a Spectacle presentation that works from the get go!
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta
name="viewport"
content="width=device-width initial-scale=1 user-scalable=no"
/>
<title>Spectacle</title>
<link
href="https://fonts.googleapis.com/css?family=Lobster+Two:400,700"
rel="stylesheet"
type="text/css"
/>
<link
href="https://fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700"
rel="stylesheet"
type="text/css"
/>
<link
href="https://unpkg.com/normalize.css@7/normalize.css"
rel="stylesheet"
type="text/css"
/>
</head>
<body>
<div id="root"></div>
<script src="https://unpkg.com/prop-types@15/prop-types.js"></script>
<script src="https://unpkg.com/react@16/umd/react.production.min.js"></script>
<script src="https://unpkg.com/react-dom@16/umd/react-dom.production.min.js"></script>
<script src="https://unpkg.com/@babel/standalone/babel.js"></script>
<script src="https://unpkg.com/spectacle@^5/dist/spectacle.js"></script>
<script src="https://unpkg.com/spectacle@^5/lib/one-page.js"></script>
<script type="text/spectacle">
() => {
// Your JS Code goes here
return (
<Deck>
{/* Throw in some slides here! */}
</Deck>
);
}
</script>
</body>
</html>
```
<a name="development"></a>
## Development
After downloading the boilerplate, run the following commands on the project's root directory...
- `npm install` (you can also use `yarn`)
- `rm -R .git` to remove the existing version control
- `npm start` to start up the local server or visit [http://localhost:3000/#/](http://localhost:3000/#/)
... and we are ready to roll
<a name="build--deployment"></a>
## Build & Deployment
Building the dist version of the slides is as easy as running `npm run build:dist`
If you want to deploy the slideshow to [surge](https://surge.sh/), run `npm run deploy`
_<span role="img" aria-label="Warning Sign">⚠️ </span> WARNING: If you are deploying the dist version to [GitHub Pages](https://pages.github.com/ 'GitHub Pages'), note that the built bundle uses an absolute path to the `/dist/` directory while GitHub Pages requires the relative `./dist/` to find any embedded assets and/or images. A very hacky way to fix this is to edit one place in the produced bundle, as shown [in this GitHub issue](https://github.com/FormidableLabs/spectacle/issues/326#issue-233283633 'GitHub: spectacle issue #326')._
<a name="presenting"></a>
## Presenting
Spectacle comes with a built-in presenter mode. It shows you a slide lookahead, the current time and your current slide:

You also have the option of a stopwatch to count the elapsed time:

To present:
- Run `npm start`. You will be redirected to a URL containing your presentation or visit [http://localhost:3000/#/](http://localhost:3000/#/)
- Open a second browser window on a different screen
- Add `?presenter` or `?presenter&timer` immediately after the `/`, e.g.: [http://localhost:3000/#/0?presenter](http://localhost:3000/#/0?presenter) or [http://localhost:3000/#/?presenter&timer](http://localhost:3000/#/?presenter&timer)
- Give an amazingly stylish presentation
_NOTE: Any windows/tabs in the same browser that are running Spectacle will sync to one another, even if you don't want to use presentation mode_
Check it out:

You can toggle the presenter or overview mode by pressing `alt+p` and `alt+o`, respectively.
<a name="controls"></a>
## Controls
| Key Combination | Function |
| --------------- | ------------------------------ |
| Right Arrow | Next Slide |
| Left Arrow | Previous Slide |
| Space | Next Slide |
| Shift+Space | Previous Slide |
| Alt/Option + O | Toggle Overview Mode |
| Alt/Option + P | Toggle Presenter Mode |
| Alt/Option + T | Toggle Timer in Presenter Mode |
| Alt/Option + A | Toggle autoplay (if enabled) |
| Alt/Option + F | Toggle Fullscreen Mode |
<a name="fullscreen"></a>
## Fullscreen
Fullscreen can be toggled via browser options, <kbd>Alt/Option</kbd> + <kbd>F</kbd>, or by pressing the button in the bottom right corner of your window.
Note: Right now, this works well when the browser window itself is not fullscreen. When the browser is in fullscreen, there is an issue [#654](https://github.com/FormidableLabs/spectacle/issues/654). This is because we use the browser's FullScreen API methods. It still works, but has some inconsistency.
<a name="pdf-export"></a>
## PDF Export
You can export a PDF from your Spectacle presentation either from the command line or browser:
#### CLI
- Run `npm install spectacle-renderer -g`
- Run `npm start` on your project and wait for it to build and be available
- Run `spectacle-renderer`
A PDF is created in your project directory. For more options and configuration of this tool, check out:
[https://github.com/FormidableLabs/spectacle-renderer](https://github.com/FormidableLabs/spectacle-renderer)
#### Browser
After running `npm start` and opening [http://localhost:3000/#/](http://localhost:3000/#/) in your browser...
- Add `?export` after the `/` on the URL of the page you are redirected to, e.g.: [http://localhost:3000/#/?export](http://localhost:3000/#/?export)
- Bring up the print dialog `(ctrl or cmd + p)`
- Change destination to "Save as PDF", as shown below:

If you want a printer-friendly version, repeat the above process but instead print from [http://localhost:3000/#/?export&print](http://localhost:3000/#/?export&print).
If you want to export your slides with your [notes](#notes) included, repeat the above process but instead print from [http://localhost:3000/#/?export&notes](http://localhost:3000/#/?export&notes).
#### Query Parameters
Here is a list of all valid query parameters that can be placed after `/#/` on the URL.
| Query | Description |
| ------------------- | -------------------------------------------------------------------------------------------------------------------- |
| 0, 1, 2, 3... etc. | Will take you to the corresponding slide, with `0` being the first slide in the presentation. |
| ?export | Creates a single-page overview of your slides, that you can then print. |
| ?export&notes | Creates a single-page overview of your slides, including any [notes](#notes), that you can then print. |
| ?export&print | Creates a black & white single-page overview of your slides. |
| ?export&print&notes | Creates a black & white single-page overview of your slides, including any [notes](#notes), that you can then print. |
| ?presenter | Takes you to presenter mode where you’ll see current slide, next slide, current time, and your [notes](#notes). |
| ?presenter&timer | Takes you to presenter mode where you’ll see current slide, next slide, timer, and your [notes](#notes). |
| ?overview | Take you to overview mode where you’ll see all your slides. |
_NOTE: If you add an invalid query parameter, you will be taken to a blank page. Removing or replacing the query parameter with a valid one and refreshing the page will return you to the correct destination._
<a name="basic-concepts"></a>
## Basic Concepts
<a name="main-file"></a>
### Main file
Your presentation files & assets will live in the `presentation` folder.
The main `.js` file you write your deck in is `/presentation/index.js`
Check it out [here](https://github.com/FormidableLabs/spectacle-boilerplate/blob/master/presentation/index.js) in the boilerplate.
```jsx
// index.js
import React, { Component } from 'react';
import {
Appear,
BlockQuote,
Cite,
CodePane,
Code,
Deck,
Fill,
Fit,
Heading,
Image,
Layout,
ListItem,
List,
Quote,
Slide,
Text
} from 'spectacle';
export default class extends Component {
render() {
return (
<Deck>
<Slide>
<Text>Hello</Text>
</Slide>
</Deck>
);
}
}
```
Here is where you can use the library's tags to compose your presentation. While you can use any JSX syntax here, building your presentation with the supplied tags allows for theming to work properly.
The bare minimum you need to start is a `Deck` element and a `Slide` element. Each `Slide` element represents a slide inside of your slideshow.
<a name="themes"></a>
### Themes
In Spectacle, themes are functions that return style objects for `screen` & `print`.
You can import the default theme from:
```jsx
import createTheme from 'spectacle/lib/themes/default';
```
Or create your own based upon the source.
`index.js` is what you would edit in order to create a custom theme of your own, using object based styles.
You will want to edit `index.html` to include any web fonts or additional CSS that your theme requires.
<a name="createthemecolors-fonts"></a>
#### createTheme(colors, fonts)
Spectacle's functional theme system allows you to pass in color and font variables that you can use on your elements. The fonts configuration object can take a string for a system font or an object that specifies it's a Google Font. If you use a Google Font, you can provide a styles array for loading different weights and variations. Google Font tags will be automatically created. See the example below:
```jsx
const theme = createTheme(
{
primary: 'red',
secondary: 'blue'
},
{
primary: 'Helvetica',
secondary: {
name: 'Droid Serif',
googleFont: true,
styles: ['400', '700i']
}
}
);
```
The returned theme object can then be passed to the `Deck` tag via the `theme` prop, and will override the default styles.
#### Slide Templates
GitHub user [@boardfish](https://github.com/boardfish) has documented an approach to using [higher-order components](https://reactjs.org/docs/higher-order-components.html) to create slide templates at [this repository](https://github.com/boardfish/spectacle-slide-templates).
<a name="faq"></a>
## FAQ
**_How can I easily style the base components for my presentation?_**
Historically, custom styling in Spectacle has meant screwing with a theme file, or using `!important` overrides. We fixed that. Spectacle is now driven by [emotion](https://github.com/emotion-js/emotion), so you can bring your own styling library, whether it's emotion itself, or something like styled-components or glamorous. For example, if you want to create a custom Heading style:
```javascript
import styled from 'react-emotion';
import { Heading } from 'spectacle';
const CustomHeading = styled(Heading)`
font-size: 1.2em;
color: papayawhip;
`;
```
<a name="tag-api"></a>
**_Can I write my presentation in TypeScript?_**
Yes, you can! Type definitions are shipped with the library, so you can import Spectacle components into any `.tsx` presentation without additional installation steps.
Updated type definitions for the Spectacle API can be found [at the root of this repository](./index.d.ts).
## Tag API
In Spectacle, presentations are composed of a set of base tags. We can separate these into three categories: Main tags, Layout tags & Element tags.
<a name="main-tags"></a>
### Main Tags
<a name="deck"></a>
#### Deck
The Deck tag is the root level tag for your presentation. It supports the following props:
| Name | PropType | Description | Default |
| ----------------------- | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- |
| autoplay | PropTypes.bool | Automatically advance slides. | `false` |
| autoplayDuration | PropTypes.number | Accepts integer value in milliseconds for global autoplay duration. | `7000` |
| autoplayLoop | PropTypes.bool | Keep slides in loop. | `true` |
| autoplayOnStart | PropTypes.bool | Start presentation with autoplay on/not paused (if autoplay is enabled). | `true` |
| controls | PropTypes.bool | Show control arrows when not in fullscreen. | `true` |
| contentHeight | PropTypes.numbers | Baseline content area height. | `700px` |
| contentWidth | PropTypes.numbers | Baseline content area width. | `1000px` |
| disableKeyboardControls | PropTypes.bool | Toggle keyboard control. | `false` |
| disableTouchControls | PropTypes.bool | Toggle touch control. | `false` |
| onStateChange | PropTypes.func | Called whenever a new slide becomes visible with the arguments `(previousState, nextState)` where state refers to the outgoing and incoming `<Slide />`'s `state` props, respectively. The default implementation attaches the current state as a class to the document root. | see description |
| history | PropTypes.object | Accepts custom configuration for [history](https://github.com/ReactTraining/history). | |
| progress | PropTypes.string | Accepts `pacman`, `bar`, `number` or `none`. To override the color, change the 'quaternary' color in the theme. | `pacman` |
| showFullscreenControl | PropTypes.bool | Show the fullscreen control button in bottom right of the screen. | `true` |
| theme | PropTypes.object | Accepts a theme object for styling your presentation. | |
| transition | PropTypes.array | Accepts `slide`, `zoom`, `fade` or `spin`, and can be combined. Sets global slide transitions. **Note: If you use the 'scale' transition, fitted text won't work in Safari.** | |
| transitionDuration | PropTypes.number | Accepts integer value in milliseconds for global transition duration. | `500` |
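For example, a Deck configured with a few of these props might look like this (a minimal sketch — `theme` refers to an object created with `createTheme`, as shown earlier):

```jsx
<Deck
  transition={['zoom', 'slide']}
  transitionDuration={500}
  progress="bar"
  theme={theme}
>
  {/* Slides go here */}
</Deck>
```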
<a name="slide-base"></a>
#### Slide ([Base](#base-props))
The slide tag represents each slide in the presentation. Giving a slide tag an `id` attribute will replace its number-based navigation hash with the `id` provided. It supports the following props, in addition to any of the props outlined in the [Base](#base-props) class props listing:
| Name | PropType | Description | Default |
| ------------------ | ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- |
| align | PropTypes.string | Accepts a space delimited value for positioning interior content. The first value can be `flex-start` (left), `center` (middle), or `flex-end` (right). The second value can be `flex-start` (top) , `center` (middle), or `flex-end` (bottom). | `align="center center"` |
| controlColor | PropTypes.string | Used to override color of control arrows on a per slide basis, accepts color aliases, or valid color values. | Set by `Deck`'s `control` prop |
| goTo | PropTypes.number | Used to navigate to a slide for out-of-order presenting. Slide numbers start at `1`. This can also be used to skip slides as well. | |
| id | PropTypes.string | Used to create a string based hash. | |
| notes | PropTypes.string | Text which will appear in the presenter mode. Can be HTML. | |
| onActive | PropTypes.func | Optional function that is called with the slide index when the slide comes into view. | |
| progressColor | PropTypes.string | Used to override color of progress elements on a per slide basis, accepts color aliases, or valid color values. | `quaternary` color set by theme |
| state | PropTypes.string | Used to indicate that the deck is in a specific state. Inspired by [Reveal.js](https://github.com/hakimel/reveal.js)'s `data-state` attribute | |
| transition | PropTypes.array | Used to override transition prop on a per slide basis, accepts `slide`, `zoom`, `fade`, `spin`, or a [function](#transition-function), and can be combined. This will affect both enter and exit transitions. **Note: If you use the 'scale' transition, fitted text won't work in Safari.** | Set by `Deck`'s `transition` prop |
| transitionIn       | PropTypes.array  | Specifies the slide transition when the slide comes into view. Accepts the same values as transition. | Set by `Deck`'s `transition` prop |
| transitionOut | PropTypes.array | Specifies the slide transition when the slide exits. Accepts the same values as transition. | Set by `Deck`'s `transition` prop |
| transitionDuration | PropTypes.number | Accepts integer value in milliseconds for slide transition duration. | Set by `Deck`'s `transition` prop |
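A quick sketch combining several of these props (the values here are illustrative):

```jsx
<Slide
  id="intro"
  align="center flex-start"
  bgColor="primary"
  notes="Welcome the audience, then advance quickly."
  transition={['zoom']}
>
  <Heading size={1}>Introduction</Heading>
</Slide>
```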
#### SlideSet
With `SlideSet`, you can wrap multiple slides in it to apply the same style.
```jsx
<SlideSet style={{ border: '2px solid red' }}>
<Slide>Slide1</Slide>
<Slide>Slide2</Slide>
<Slide>Slide3</Slide>
</SlideSet>
```
<a name="transition-function"></a>
##### Transition Function
Spectacle now supports defining custom transitions. The function prototype is `(transitioning: boolean, forward: boolean) => Object`. The `transitioning` param is true when the slide enters and exits. The `forward` param is `true` when the slide is entering, `false` when the slide is exiting. The function returns a style object. You can mix string-based transitions and functions. Styles provided when `transitioning` is `false` will appear during the lifecycle of the slide. An example is shown below:
```jsx
<Slide
transition={[
'fade',
(transitioning, forward) => {
const angle = forward ? -180 : 180;
return {
transform: `
translate3d(0%, ${transitioning ? 100 : 0}%, 0)
rotate(${transitioning ? angle : 0}deg)
`,
backgroundColor: transitioning ? '#26afff' : '#000'
};
}
]}
>
```
<a name="notes"></a>
#### Notes
The notes tag allows you to use any tree of react elements as the notes of a slide. It is used as a child node of a slide tag, and its children override any value given as the `notes` attribute of its parent slide.
```jsx
<Slide ...>
<Notes>
<h4>Slide notes</h4>
<ol>
<li>First note</li>
<li>Second note</li>
</ol>
</Notes>
{/* Slide content */}
</Slide>
```
<a name="markdown-slides"></a>
### MarkdownSlides
The MarkdownSlides function lets you create one or more slides using Markdown. It can be used as a tagged template literal or a function. Three dashes (`---`) are used as a delimiter between slides.
**Tagged Template Literal Usage**
```jsx
<Deck ...>
{MarkdownSlides`
## Slide One Title
Slide Content
---
## Slide Two Title
Slide Content
`}
</Deck>
```
**Function Usage**
```jsx
const slidesMarkdown = `
## Slide One Title
Slide Content
---
## Slide Two Title
Slide Content
`;
....
import slidesMarkdown from "!raw-loader!markdown.md";
<Deck ...>
{MarkdownSlides(slidesMarkdown)}
</Deck>
```
<a name="layout-tags"></a>
### Layout Tags
Layout tags are used for layout using Flexbox within your slide. They are `Layout`, `Fit` & `Fill`.
<a name="layout"></a>
#### Layout
The layout tag is used to wrap `Fit` and `Fill` tags to provide a row.
<a name="fit"></a>
#### Fit
The fit tag only takes up as much space as its bounds provide.
<a name="fill"></a>
#### Fill
The fill tag takes up all the space available to it. For example, if you have a `Fill` tag next to a `Fit` tag, the `Fill` tag will take up the rest of the space. Adjacent `Fill` tags split the difference and form an equidistant grid.
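For example, a hypothetical two-column row where the left column hugs its content and the right column stretches:

```jsx
<Layout>
  <Fit>
    <Text>Only as wide as this text</Text>
  </Fit>
  <Fill>
    <Text>Takes up the remaining space</Text>
  </Fill>
</Layout>
```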
<a name="markdown-tag"></a>
### Markdown Tag
<a name="markdown"></a>
#### Markdown ([Base](#base-props))
The Markdown tag is used to add inline markdown to your slide. You can provide markdown source via the `source` prop, or as children. You can also provide a custom [mdast configuration](https://github.com/wooorm/mdast) via the `mdastConfig` prop.
Markdown generated tags aren't prop configurable, and instead render with your theme defaults.
| Name | PropType | Description | Default |
| ------ | ---------------- | --------------- | ------- |
| source | PropTypes.string | Markdown source | |
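A minimal sketch using the `source` prop (the markdown itself is just placeholder content):

```jsx
<Markdown
  source={'#### A Markdown Heading\n- Bullet one\n- Bullet two'}
/>
```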
<a name="magic-tag"></a>
### Magic Tag
<a name="Magic"></a>
#### Magic
_NOTE: The Magic tag uses the Web Animations API. If you use the Magic tag and want it to work places other than Chrome, you will need to include the polyfill [https://github.com/web-animations/web-animations-js](https://github.com/web-animations/web-animations-js)_
The Magic Tag recreates the Magic Move behavior that slide authors coming from Keynote might be accustomed to. It wraps slides and transitions between positional values for child elements. This means that if you have two similar strings, we will transition common characters to their new positions. It does not transition non-positional values such as slide background color or font size.
_<span role="img" aria-label="Warning Sign">⚠️ </span> WARNING: Do not use a `transition` prop on your slides if you are wrapping them with a Magic tag since it will take care of the transition for you._
```javascript
<Magic>
<Slide>
<Heading>First Heading</Heading>
</Slide>
<Slide>
<Heading>Second Heading</Heading>
</Slide>
</Magic>
```
Transitioning between similar states will vary based upon the input content. It will look better when there are more common elements. An upcoming patch will allow for custom keys, which will provide greater control over which elements are identified as common for reuse.
Until then, feedback is very welcome, as this is a non-trivial feature and we anticipate iterating on the behind-the-scenes mechanics of how it works, so that we can accommodate most use cases.
<a name="element-tags"></a>
### Element Tags
The element tags are the bread and butter of your slide content. Most of these tags derive their props from the Base class, but the ones that have special options will have them listed:
<a name="appear"></a>
#### Appear
This tag does not extend from Base. It's special. Wrapping elements in the appear tag makes them appear/disappear in order in response to navigation.
For best performance, wrap the contents of this tag in a native DOM element like a `<div>` or `<span>`.
_NOTE: When using `CodePane` tag inside an `Appear` tag you must wrap it inside a `<div>`_
```jsx
....
<Appear>
<div>
<CodePane source="CodePane" lang="js" />
</div>
<Appear>
....
```
| Name | PropType | Description | Default |
| ------------------ | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| order              | PropTypes.number | An optional integer starting at 1 for the presentation order of the Appear tags within a slide. If a slide contains ordered and unordered Appear tags, the unordered will show first. | |
| transitionDuration | PropTypes.number | An optional duration (in milliseconds) for the Appear animation. | `300` |
| startValue | PropTypes.object | An optional style object that defines the starting, inactive state of the Appear tag. The default animation is a fade-in. | `{ opacity: 0 }` |
| endValue | PropTypes.object | An optional style object that defines the ending, active state of the Appear tag. The default animation is a simple fade-in. | `{ opacity: 1 }` |
| easing | PropTypes.string | An optional victory easing curve for the Appear animation. The various options are documented in the [Victory Animation easing docs](https://formidable.com/open-source/victory/docs/victory-animation/#easing). | `quadInOut` |
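For example, the following sketch shows two ordered fragments, the second with a custom start style (all values are illustrative):

```jsx
<Slide>
  <Appear order={1} transitionDuration={500}>
    <div>Shown first</div>
  </Appear>
  <Appear order={2} startValue={{ opacity: 0.2 }} endValue={{ opacity: 1 }}>
    <div>Dimmed until revealed</div>
  </Appear>
</Slide>
```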
<a name="anim"></a>
#### Anim
If you want extra flexibility with animated animation, you can use the Anim component instead of Appear. It will let you have multi-step animations for each individual fragment. You can use this to create fancy animated intros, in-slide carousels, and many other fancy things. This tag does not extend from Base. It's special.
For best performance, wrap the contents of this tag in a native DOM element like a `<div>` or `<span>`.
_NOTE: `CodePane` tag can not be used inside a `Anim` tag._
| Name | PropType | Description | Default |
| ------------------ | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------- |
| order | PropTypes.number | An optional integer for the presentation order of the Appear tags within a slide. If a slide contains ordered and unordered Appear tags, the unordered will show first. | Starting at `1` |
| transitionDuration | PropTypes.number | A duration (in milliseconds) for the animation. | `300` |
| fromStyle | PropTypes.object | A style object that defines the starting, inactive state of the Anim tag. | |
| toStyle | PropTypes.array | An array of style objects that define each step in the animation. They will step from one toStyle object to another, until that fragment is finished with its animations. | |
| easing | PropTypes.string | A victory easing curve for the Appear animation. The various options are documented in the [Victory Animation easing docs](https://formidable.com/open-source/victory/docs/victory-animation/#easing). | |
| onAnim             | PropTypes.func   | This function is called every time the Anim component plays an animation. It'll be called with two arguments, forwards, a boolean indicating if it was stepped forwards or backwards, and the index of the animation that was just played. | |
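A hypothetical multi-step fragment that fades in and then rotates:

```jsx
<Anim
  order={1}
  transitionDuration={500}
  easing="quadInOut"
  fromStyle={{ opacity: 0 }}
  toStyle={[
    { opacity: 1 },
    { opacity: 1, transform: 'rotate(20deg)' }
  ]}
>
  <div>Animated in two steps</div>
</Anim>
```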
<a name="blockquote-quote-and-cite-base"></a>
#### BlockQuote, Quote and Cite ([Base](#base-props))
These tags create a styled blockquote. Use them as follows:
```jsx
<BlockQuote>
<Quote>Ken Wheeler is amazing</Quote>
<Cite>Everyone</Cite>
</BlockQuote>
```
_NOTE: By default the text color of the `Quote` tag is the same as the background color and may not show up. Use the `bgColor` and/or `textColor` props on the `Slide` or `Quote` tags to make it visible._
```jsx
<Slide transition={['fade']} bgColor="secondary" textColor="primary">
<BlockQuote>
<Quote>Example Quote</Quote>
<Cite>Author</Cite>
</BlockQuote>
</Slide>
```
```jsx
<Slide transition={['fade']}>
<BlockQuote>
<Quote textColor="secondary">Example Quote</Quote>
<Cite>Author</Cite>
</BlockQuote>
</Slide>
```
<a name="codepane-base"></a>
#### CodePane ([Base](#base-props))
This tag displays a styled, highlighted code preview. I prefer putting my code samples in external `.example` files and requiring them using `raw-loader` as shown in the demo. Here are the props:
| Name | PropType | Description | Default |
| --------- | ---------------- | ----------------------------------------------------------------------------------- | ------- |
| lang | PropTypes.string | Prism compatible language name. i.e: 'javascript' | |
| source | PropTypes.string | String of code to be shown | |
| className | PropTypes.string | String of a className to be appended to the CodePane | |
| theme | PropTypes.string | Accepts `light`, `dark`, or `external` for the source editor's syntax highlighting. | `dark` |
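A minimal usage sketch following the external-file approach mentioned above (the file path is illustrative):

```jsx
import codeSample from '!raw-loader!../assets/code.example';

<CodePane lang="jsx" source={codeSample} theme="light" />
```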
If you want to change the theme used here, you can include a Prism theme in index.html via a style or a link tag. For your theme to be applied correctly, you need to set the `theme` prop to `"external"`, which disables our built-in light and dark themes.
Please note that including a theme can actually influence all CodePane and Playground components, even if you don't set this prop, since some Prism themes use very generic CSS selectors.
CodePane and Playground both use the Prism library under the hood, which has several themes available to include.
<a name="code-base"></a>
#### Code ([Base](#base-props))
A simple tag for wrapping inline text that you want lightly styled in a monospace font.
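For example:

```jsx
<Text>
  Install it with <Code>npm install spectacle</Code> and import away.
</Text>
```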
<a name="component-playground"></a>
#### Component Playground
This tag displays a two-pane view with an ES6 source code editor on the right and a preview pane on the left for showing off custom React components. `React` and `render` are supplied as variables. To render a component, call `render` with some JSX code. Any `console` output will be forwarded to the main console in the browser.
For more information on the playground read the docs over at [react-live](https://github.com/FormidableLabs/react-live).
| Name | PropType | Description | Default |
| ---------------------- | ---------------- | -------------------------------------------------------------------------------------------------------------------------------- | ------- |
| code | PropTypes.string | The code block you want to initially supply to the component playground. If none is supplied a demo component will be displayed. | |
| previewBackgroundColor | PropTypes.string | The background color you want for the preview pane. | `#fff` |
| theme | PropTypes.string | Accepts `light`, `dark`, or `external` for the source editor's syntax highlighting. | `dark` |
| scope | PropTypes.object | Defines any outside modules or components to expose to the playground. React, Component, and render are supplied for you. | |
Example code blocks:
```jsx
const Button = ({ title }) => <button type="button">{title}</button>;
render(<Button title="My Button" />);
```
```jsx
class View extends React.Component {
componentDidMount() {
console.log('Hello');
}
render() {
return <div>My View</div>;
}
}
render(<View />);
```
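To hand one of these blocks to the playground, you might load it from a file and pass it via the `code` prop — a sketch with an illustrative path:

```jsx
import playgroundCode from '!raw-loader!../assets/button.example';

<ComponentPlayground
  theme="dark"
  code={playgroundCode}
  previewBackgroundColor="#f6f6f6"
/>
```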
If you want to change the theme used here, please refer to the instructions above in the [CodePane's API reference](#codepane-base).
<a name="go-to-action"></a>
#### Go To Action ([Base](#base-props))
The GoToAction tag lets you jump to another slide in your deck. The GoToAction can be used as a simple button that supports `Base` styling, or it can accept a render prop with a callback to support custom components.
| Name | PropType | Description | Default |
| ------ | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------ | --------------- |
| slide  | PropTypes.string or PropTypes.number | The string identifier or number of the slide the button should jump to. This is only used in the simple button configuration. | Starting at `1` |
| render | PropTypes.func | A function with a `goToSlide` param that should return a React element to render. This is only used in the custom component configuration. | |
##### Simple Button Configuration Example
```jsx
<GoToAction slide={3}>Jump to 3</GoToAction>
```
##### Custom Component Configuration Example
```jsx
<GoToAction
render={goToSlide => (
<CustomComponent onClick={() => goToSlide('wait-wut')}>
WAIT WUT!?
</CustomComponent>
)}
/>
```
<a name="heading-base"></a>
#### Heading ([Base](#base-props))
Heading tags are special in that, when you specify a `size` prop, they generate the appropriate heading tag, and extend themselves with a style that is defined in the theme file for that heading. Line height can be adjusted via a numeric `lineHeight` prop.
| Name | PropType | Description | Default |
| ---------- | ----------------- | ------------------------------------------------------------------------------------------------------------------------- | ------- |
| fit | PropTypes.boolean | When set to true, fits text to the slide's width. **Note: If you use the 'scale' transition, this won't work in Safari.** | `false` |
| lineHeight | PropTypes.number  | Sets the line height of your text. | |
| size       | PropTypes.number  | Sets the heading tag. | |
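For example (sizes and colors are illustrative):

```jsx
<Heading size={2} fit caps lineHeight={1.1} textColor="secondary">
  A Fitted, Capitalized Heading
</Heading>
```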
<a name="image-base"></a>
#### Image ([Base](#base-props))
| Name | PropType | Description | Default |
| ------- | ------------------------------------ | ---------------------------------------------- | ------- |
| alt | PropTypes.string | Set the `alt` attribute of the image | |
| display | PropTypes.string | Set the `display` style attribute of the image | |
| height | PropTypes.string or PropTypes.number | Set the `height` to the image | |
| src | PropTypes.string | Set the `src` attribute of the image | |
| width | PropTypes.string or PropTypes.number | Set the `width` to the image | |
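For example, assuming a locally imported image (see the `bgImage` note under [Base Props](#base-props)):

```jsx
import myImage from './images/my-image.jpg';

<Image src={myImage} alt="A descriptive alt text" width={500} />
```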
<a name="link-base"></a>
#### Link ([Base](#base-props))
The link tag is used to render `<a>` tags. It accepts an `href` prop:
| Name | PropType | Description | Default |
| ------ | ---------------- | ---------------------------------- | ------- |
| href | PropTypes.string | String of url for `href` attribute | |
| target | PropTypes.string | Set the `target` attribute | `_self` |
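For example:

```jsx
<Link href="https://github.com/FormidableLabs/spectacle" target="_blank">
  Spectacle on GitHub
</Link>
```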
<a name="list--listitem-base"></a>
#### List & ListItem ([Base](#base-props))
| Name | PropType | Description | Default |
| ----------- | ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| ordered | PropTypes.bool | Render as `<ol>` tag | |
| reversed | PropTypes.bool | Set the `reversed` attribute | |
| start | PropTypes.number | Set the `start` attribute. | `1` |
| type | PropTypes.string | Set the `type` attribute. | `"1"` |
| bulletStyle | PropTypes.string | Allows you to customize list bullets for unordered lists. You can set `bulletStyle="star"` both in `List` and `ListItem` components. When the `ListItem` prop is set it will overwrite the `List` styling only for that specific `ListItem`. You can either use built-in strings: `star`, `classicCheck`, `greenCheck`, `arrow`, `cross`, or any unicode number `bulletStyle="274C"` | |
These tags create lists. Use them as follows:
Ordered lists:
```jsx
<List ordered start={2} type="A">
<ListItem>Item 1</ListItem>
<ListItem>Item 2</ListItem>
<ListItem>Item 3</ListItem>
<ListItem>Item 4</ListItem>
</List>
```
Unordered lists:
```jsx
<List>
<ListItem>Item 1</ListItem>
<ListItem bulletStyle="arrow">Item 2</ListItem>
<ListItem>Item 3</ListItem>
<ListItem>Item 4</ListItem>
</List>
```
<a name="s-base"></a>
#### S ([Base](#base-props))
The `S` tag is used to add styling to a piece of text, such as underline or strikethrough.
| Name | PropType | Description | Default |
| ---- | ---------------- | -------------------------------------------------------- | ------- |
| type | PropTypes.string | Accepts `strikethrough`, `underline`, `bold` or `italic` | |
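For example:

```jsx
<Text>
  This is <S type="strikethrough">no longer true</S>, and this is{' '}
  <S type="bold">emphasized</S>.
</Text>
```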
<a name="table-tablerow-tableheaderitem-and-tableitem-base"></a>
#### Table, TableRow, TableHeaderItem and TableItem ([Base](#base-props))
The `Table` tag is used to add a table to your slide. It is used with `TableHeader`, `TableBody`, `TableRow`, `TableHeaderItem` and `TableItem`. Use them as follows:
```jsx
<Table>
<TableHeader>
<TableRow>
<TableHeaderItem />
<TableHeaderItem>2011</TableHeaderItem>
</TableRow>
</TableHeader>
<TableBody>
<TableRow>
<TableItem>None</TableItem>
<TableItem>61.8%</TableItem>
</TableRow>
<TableRow>
<TableItem>jQuery</TableItem>
<TableItem>28.3%</TableItem>
</TableRow>
</TableBody>
</Table>
```
<a name="text-base"></a>
#### Text ([Base](#base-props))
The `Text` tag is used to add text to your slide. Line height can be adjusted via a numeric `lineHeight` prop.
| Name | PropType | Description | Default |
| ---------- | ----------------- | ------------------------------------------------------------------------------------------------------------------------- | ------- |
| fit | PropTypes.boolean | When set to true, fits text to the slide's width. **Note: If you use the 'scale' transition, this won't work in Safari.** | |
| lineHeight | PropTypes.number | Sets the line height of your text. | |
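For example (values are illustrative):

```jsx
<Text fit lineHeight={1.4} textColor="tertiary">
  Body copy that fits the slide width
</Text>
```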
<a name="base-props"></a>
### Base Props
Every component above that has `(Base)` after it has been extended from a common class that includes the following props:
| Name | PropType | Description | Default |
| ------------ | -------------------------- | ---------------------------------------------------------------------------- | --------------- |
| italic | PropTypes.boolean | Set `fontStyle` to `italic` | `false` |
| bold | PropTypes.boolean | Set `fontWeight` to `bold` | `false` |
| caps | PropTypes.boolean | Set `textTransform` to `uppercase` | `false` |
| margin | PropTypes.number or string | Set `margin` value | |
| padding | PropTypes.number or string | Set `padding` value | |
| textColor | PropTypes.string | Set `color` value | |
| textFont | PropTypes.string | Set `fontFamily` value | |
| textSize | PropTypes.string | Set `fontSize` value | |
| textAlign | PropTypes.string | Set `textAlign` value | |
| bgColor | PropTypes.string | Set `backgroundColor` value | |
| bgGradient | PropTypes.string | Set `backgroundImage` value | |
| bgImage | PropTypes.string | Set `backgroundImage` value | |
| bgImageStyle | PropTypes.string | Set backgroundImage css property value directly | |
| bgSize | PropTypes.string | Set `backgroundSize` value | `cover` |
| bgPosition | PropTypes.string | Set `backgroundPosition` value | `center center` |
| bgRepeat | PropTypes.string | Set `backgroundRepeat` value | |
| bgDarken | PropTypes.number | Float value from 0.0 to 1.0 specifying how much to darken the bgImage image | 0 |
| bgLighten | PropTypes.number | Float value from 0.0 to 1.0 specifying how much to lighten the bgImage image | 0 |
| overflow | PropTypes.string | Set `overflow` value | |
| height | PropTypes.string | Set `height` value | |
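Since these props are shared, any `(Base)`-derived tag accepts them — for example (values are illustrative):

```jsx
<Heading size={4} caps bold textColor="primary" bgColor="secondary" margin="20px 0">
  Styled via Base props
</Heading>
```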
_NOTE: When using `bgImage` prop for local images, you must import the file for it to render properly._
```jsx
import myImage from './images/my-image.jpg';
......
<Slide bgImage={myImage}>
I have an image for a background
</Slide>
```
<a name="typeface"></a>
#### Typeface
The `Typeface` tag is used to apply a specific font to text content. It can either use a font that exists on the system or load a font from the Google Fonts library. `Typeface` requires either `font` or `googleFont` to be defined.
| Name | PropType | Description | Default |
| ---------- | ----------------- | ------------------------------------------------ | ------- |
| font | PropTypes.string | Use a font from the local system | |
| googleFont | PropTypes.string | Use a font from the Google Fonts library | |
| weight | PropTypes.number | Numeric weight value for the font. | `400` |
| italic | PropTypes.boolean | Use an italics variant of the font if it exists. | `false` |
```jsx
<Typeface googleFont="Roboto Slab" weight={600}>
<Text>This text is using bold Roboto Slab from Google Fonts.</Text>
</Typeface>
```
```jsx
<Typeface font="SF Text" weight={400} italic={true}>
<Text>This text is using the San Francisco Text font from the system.</Text>
</Typeface>
```
<a name="third-party"></a>
## Third Party Extensions
- [Spectacle Code Slide](https://github.com/thejameskyle/spectacle-code-slide) - Step through lines of code using this awesome slide extension by @thejameskyle
- [Spectacle Terminal Slide](https://github.com/elijahmanor/spectacle-terminal) - Terminal component that can be used in a spectacle slide deck by @elijahmanor
- [Spectacle Image Slide](https://github.com/FezVrasta/spectacle-image-slide) - Show a slide with a big image and a title on top
- [Spectacle Codemirror](https://github.com/jonathan-fielding/spectacle) - Show a live code editor inside your slides
## Maintenance Status
**Active:** Formidable is actively working on this project, and we expect to continue work for the foreseeable future. Bug reports, feature requests and pull requests are welcome.
[trav_img]: https://api.travis-ci.org/FormidableLabs/spectacle.svg
[trav_site]: https://travis-ci.org/FormidableLabs/spectacle
[maintenance-image]: https://img.shields.io/badge/maintenance-active-green.svg
| 57.166041 | 542 | 0.514367 | eng_Latn | 0.9539 |
536a3e27d7c32472bdd0871f0300b7e3d6403d9f | 5,427 | md | Markdown | _posts/2011-04-15-Setup-Android-develop-environment.md | XinicsInc/tech-blog | bb3e2ef2f97458224a0a4ce42a0defe6635f152a | [
"Apache-2.0"
] | 1 | 2019-06-23T15:44:37.000Z | 2019-06-23T15:44:37.000Z | _posts/2011-04-15-Setup-Android-develop-environment.md | XinicsInc/tech-blog | bb3e2ef2f97458224a0a4ce42a0defe6635f152a | [
"Apache-2.0"
] | null | null | null | _posts/2011-04-15-Setup-Android-develop-environment.md | XinicsInc/tech-blog | bb3e2ef2f97458224a0a4ce42a0defe6635f152a | [
"Apache-2.0"
] | 1 | 2019-06-23T15:44:42.000Z | 2019-06-23T15:44:42.000Z | ---
layout: post
title: 'Android 개발환경 설정'
author: xinics
date: 2011-04-15 12:21
tags: [Android]
---
# Android 개발환경 설정
자 이제 우리도 안드로이드 개발을 해봅시다.
안드로이드 개발에 앞서 필요한 환경은 일단 세가지 입니다.
제일 먼저, 안드로이드는 JAVA를 기반으로 구현된 플랫폼 입니다.
그러니 JAVA개발환경부터 세팅을 해야하지요.
자 일단, JDK를 설치해 봅시다.
## 1. JAVA개발환경 설정
### 1.1 JAVA SDK설치
JDK에는 세가지 버전이 있는데, 가장 보편적인 SE(Standard Edition)을 사용하도록 합시다.
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jdk-6u24-oth-JPR@CDS-CDS_Developer
이 링크를 따라가서 자신의 OS환경에 맞춰 다운을 받으시면 됩니다.
대략 80MB정도의 용량입니다.
다운받고 설치를 하시면, 이제 JAVA를 개발할 수 있습니다!
### 1.2 환경변수 설정
하지만, JAVA의 컴파일은 커맨드 기반으로 진행되기 때문에, 편의를 위해서 환경변수 등록을 해줍시다.
왜 환경변수 설정을 하느냐?
일단, JAVA개발에 있어서 가장 자주 사용하는 커맨드는 두개, 컴파일하는 javac, 실행하는 java 입니다.
이것을 환경변수를 설정하지 않고, 사용하려고 하면

이런 오류가 나옵니다.
커맨드를 사용하려면 실행파일이 있는 폴더까지 가서 실행해야 하는 불편함이 있지요.
현재 위치에 상관없이 어느 위치에서나 커맨드를 사용 가능하도록 하기 위해 환경변수를 설정하는 것입니다.
그럼 어디가서 환경변수를 설정하느냐?

시스템 속성창을 여시면 고급탭 하단에 환경변수 버튼이 보입니다.
버튼을 클릭하게 되면,

이런 창이 뜹니다.
여기서 아래 시스템 변수 영역에서 path변수에 JAVA의 실행파일들이 위치한 폴더를 추가해줍시다.

경로는 JDK가 설치된 폴더 하위의 bin폴더 입니다.
이렇게 환경변수를 설정하고 나서 다시 커맨드를 실행해봅시다.

아까와는 다르게 실행방법에 대한 설명이 주루루룩 나옵니다.
이제 JAVA개발환경은 준비가 되었습니다.
## 2. Android SDK설치하기
이제 기반이 되는 JAVA는 설치를 마쳤으니, Android도 설치를 해봅시다.
http://developer.android.com/sdk/index.html
JDK와 마찬가지로 자신의 OS에 맞춰 Android SDK를 다운 받아봅시다.
현재 설치형 버전과 압축형 버전 두가지가 제공되고 있는데, 별 차이는 없습니다.
다만 설치형이 조금더 편하기에 설치형을 사용해 봅시다.
40MB가 조금 못되는 용량입니다. 다운을 받고 설치를 진행하게 되면 다음과 같은 창이 뜹니다.

여기서 Android OS버전별 업데이트를 진행할 수 있습니다.
필수적으로 해야하는 업데이트는 Android SDK Tools, Android SDK Platform-tools 입니다.
업데이트까지 진행하고 나면, Android SDK설치는 완료되었습니다.
## 3. Eclipse 설치하기
### 3.1 Eclipse 설치
자 그럼 Android개발에 사용할 툴을 설치해봅시다.
Android가 지원하는 개발툴은 Eclipse입니다.
그럼 이것도 설치를 해봅시다.
http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/helios/SR2/eclipse-jee-helios-SR2-win32.zip
대략 300MB정도 됩니다. 가장 크죠.
Eclipse는 설치가 필요없이 다운받아서 압축만 풀어서 사용할 수 있습니다.
다운받은 파일의 압축을 풀어봅시다.
압축을 풀게되면, 다음과 같은 파일들을 보실 수 있습니다.

이중에서 딱 봐도 아이콘이 있는 eclipse.exe를 실행해 봅시다.

처음 실행하게 되면, 위와 같은 창이 뜹니다. 프로젝트를 저장할 작업 폴더를 설정해줘야 합니다.
사용할 작업 폴더 경로를 입력하고 OK를 클릭합시다.

자 그러면 Eclipse는 설치가 완료되었습니다.
### 3.2 Toolkit설치
하지만, Android개발을 위해서는 또, 설정을 해줘야합니다.
뭐가 참 많죠? ㅠㅠ
일단 Android SDK와 연결을 위한 Toolkit을 설치해야 합니다.
메뉴의 Help -> Install New Software를 선택합니다.

그리고 화면 상단의 ADD버튼을 클릭하여, Toolkit을 다운 받을 경로를 설정해줍시다.

경로를 설정하게 되면, 설치가능한 툴킷목록이 주루룩 나옵니다.

일단 네가지 다 설치합니다.
다 설치하고 나면, 프로그램을 재시작하라는 다이얼로그가 뜹니다.
재시작!
### 3.3 Android SDK 경로 설정
다시 실행된 Eclipse를 보면 뭔가 Android스러운 아이콘들이 몇 개 보이기 시작합니다.
하지만 아직 끝나지 않았습니다.
이제 마지막 입니다.
Eclipse와 2번에서 설치한 Android SDK를 연결하기 위해서 Android SDK가 설치된 폴더경로를 지정해 주어야 합니다.
메뉴의 Window -> preferences 를 클릭해보시면

이런 창이 뜹니다.
목록중에 Android가 보이시죠?
저기에서 설정하면 됩니다.

이렇게 SDK가 설치된 폴더경로를 설정해주고, Apply를 클릭하면 설치된 Android버전들이 주루룩 나타납니다.
OK를 누르시면 종료!
자 이제 Android개발을 위한 환경 설정을 마쳤습니다.
## 4. Hello! World!
### 4.1 프로젝트 생성하기
자 그럼 다 설치했으니, 설레는 마음으로 Hello! World!를 해봅시다.
먼저 프로젝트를 생성합니다.

좌측의 Package Explorer에서 우클릭을 통해 메뉴를 열거나 상단 메뉴의 File -> New 에 보시면 Android Project가 보입니다! (보이지 않는 경우에는 맨 아래의 Other를 클릭하시면 Android -> Android Project를 찾으실 수 있습니다) 꾹 눌러주시죠.

나타나는 창에서 어플리케이션에 대한 대략적인 설정을 해줘야 합니다.
OS버젼, 어플리케이션 이름, 패키지 명, Activity(Main 역할을 합니다.)등을 작성하고 Finish를 클릭합니다.
프로젝트를 생성하면 기본적으로 Hello World가 템플릿 형식으로 구현되어 있습니다.
그래서 바로 실행을 해보도록 합시다.
Package Explorer영역의 프로젝트에서 우클릭후 Run As-> Android Application을 클릭합니다.
### 4.2 Virtual Device 생성하기

실행하려고 하면 위와 같이 다이얼로그가 뜹니다.
아직 만들어진 에뮬레이터가 없어서 만들라는 얘기지요. Yes를 클릭해서 한번 만들어봅시다.

우측 상단의 New를 클릭합니다.
에뮬레이터의 이름과 대상OS버젼, SD카드 사이즈를 설정하고 Create AVD를 클릭합시다.

이제 에뮬레이터가 만들어졌습니다.
### 4.3 Hello! World!
자 이제 준비는 끝났습니다. 좀 전과 같이 프로젝트를 실행하면

짜잔 에뮬레이터가 실행됩니다. (대략 시간이 좀 걸립니다 -_-)

Hello World!! 이렇게 대략적으로 Android 개발 준비를 마쳤습니다!!
이제 다함께 삽질의 세계로~ | 24.668182 | 167 | 0.743689 | kor_Hang | 1.000007 |
536a52d2fd161fc1e027714663dab12c6a354414 | 12,651 | md | Markdown | articles/iot-central/core/howto-manage-users-roles.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-central/core/howto-manage-users-roles.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-central/core/howto-manage-users-roles.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Hantera användare och roller i Azure IoT Central-programmet | Microsoft-dokument
description: Som administratör, hur du hanterar användare och roller i ditt Azure IoT Central-program
author: lmasieri
ms.author: lmasieri
ms.date: 12/05/2019
ms.topic: how-to
ms.service: iot-central
services: iot-central
manager: corywink
ms.openlocfilehash: c00f9d8baa55ef0d0cf6322ee71f22e739e6acdc
ms.sourcegitcommit: 07d62796de0d1f9c0fa14bfcc425f852fdb08fb1
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/27/2020
ms.locfileid: "80365506"
---
# <a name="manage-users-and-roles-in-your-iot-central-application"></a>Hantera användare och roller i ditt IoT Central-program
I den här artikeln beskrivs hur du som administratör kan lägga till, redigera och ta bort användare i ditt Azure IoT Central-program. Artikeln beskriver också hur du hanterar roller i ditt Azure IoT Central-program.
För att komma åt och använda avsnittet **Administration** måste du vara i **administratörsrollen** för ett Azure IoT Central-program. Om du skapar ett Azure IoT Central-program läggs du automatiskt till i **administratörsrollen** för det programmet.
## <a name="add-users"></a>Lägga till användare
Varje användare måste ha ett användarkonto innan de kan logga in och komma åt ett Azure IoT Central-program. Microsoft-konton och Azure Active Directory-konton stöds i Azure IoT Central. Azure Active Directory-grupper stöds för närvarande inte i Azure IoT Central.
Mer information finns i [Hjälpen för Microsoft-konton](https://support.microsoft.com/products/microsoft-account?category=manage-account) och [Snabbstart: Lägg till nya användare i Azure Active Directory](https://docs.microsoft.com/azure/active-directory/add-users-azure-active-directory).
1. Om du vill lägga till en användare i ett IoT Central-program går du till sidan **Användare** i avsnittet **Administration.**
> [!div class="mx-imgBorder"]
>
1. Om du vill lägga till en användare väljer du **+ Lägg till användare**på sidan **Användare** .
1. Välj en roll för användaren på listrutan **Roll.** Läs mer om roller i avsnittet [Hantera roller](#manage-roles) i den här artikeln.
> [!div class="mx-imgBorder"]
>
> [!NOTE]
> En användare som har en anpassad roll som ger dem behörighet att lägga till andra användare kan bara lägga till användare i en roll med samma eller färre behörigheter än sin egen roll.
Om ett IoT Central-användar-ID tas bort från Azure Active Directory och sedan läsades kan användaren inte logga in i IoT Central-programmet. Om du vill återaktivera åtkomsten bör IoT Central-administratören ta bort och läsa användaren i programmet.
### <a name="edit-the-roles-that-are-assigned-to-users"></a>Redigera de roller som har tilldelats användare
Roller kan inte ändras när de har tilldelats. Om du vill ändra den roll som har tilldelats en användare tar du bort användaren och lägger sedan till användaren igen med en annan roll.
> [!NOTE]
> Rollerna som tilldelats är specifika för IoT Central-programmet och kan inte hanteras från Azure Portal.
## <a name="delete-users"></a>Ta bort användare
Om du vill ta bort användare markerar du en eller flera kryssrutor på sidan **Användare.** Välj sedan **Ta bort**.
## <a name="manage-roles"></a>Hantera roller
Med roller kan du styra vem inom organisationen som får utföra olika uppgifter i IoT Central. Det finns tre inbyggda roller som du kan tilldela användare av ditt program. Du kan också [skapa anpassade roller](#create-a-custom-role) om du behöver finarekornig kontroll.
> [!div class="mx-imgBorder"]
> 
### <a name="administrator"></a>Administratör
Användare i **administratörsrollen** kan hantera och kontrollera alla delar av programmet, inklusive fakturering.
Användaren som skapar ett program tilldelas automatiskt **administratörsrollen.** Det måste alltid finnas minst en användare i **administratörsrollen.**
### <a name="builder"></a>Builder
Användare i **builder-rollen** kan hantera alla delar av appen, men kan inte göra ändringar på flikarna Administration eller Kontinuerlig dataexport.
### <a name="operator"></a>Operator
Användare i rollen **Operatör** kan övervaka enhetens hälsa och status. De får inte göra ändringar i enhetsmallar eller administrera programmet. Operatörer kan lägga till och ta bort enheter, hantera enhetsuppsättningar och köra analyser och jobb.
## <a name="create-a-custom-role"></a>Skapa en anpassad roll
Om lösningen kräver finare åtkomstkontroller kan du skapa anpassade roller med anpassade behörighetsuppsättningar. Om du vill skapa en anpassad roll navigerar du till sidan **Roller** i avsnittet **Administration** i programmet. Välj sedan **+ Ny roll**och lägg till ett namn och en beskrivning för din roll. Välj de behörigheter som rollen kräver och välj sedan **Spara**.
Du kan lägga till användare i din anpassade roll på samma sätt som du lägger till användare i en inbyggd roll.
> [!div class="mx-imgBorder"]
> 
### <a name="custom-role-options"></a>Alternativ för anpassad roll
När du definierar en anpassad roll väljer du den uppsättning behörigheter som en användare beviljas om de är medlemmar i rollen. Vissa behörigheter är beroende av andra. Om du till exempel lägger till behörigheten **Uppdatera programinstrumentpaneler** i en roll läggs behörigheten **Visa programinstrumentpaneler** automatiskt till. I följande tabeller sammanfattas de tillgängliga behörigheterna och deras beroenden som du kan använda när du skapar anpassade roller.
#### <a name="managing-devices"></a>Hantera enheter
**Behörigheter för enhetsmall**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Hantera | Visa <br/> Andra beroenden: Visa enhetsinstanser |
| Fullständig kontroll | Visa, hantera <br/> Andra beroenden: Visa enhetsinstanser |
**Behörigheter för enhetsinstans**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget <br/> Andra beroenden: Visa enhetsmallar och enhetsgrupper |
| Uppdatering | Visa <br/> Andra beroenden: Visa enhetsmallar och enhetsgrupper |
| Skapa | Visa <br/> Andra beroenden: Visa enhetsmallar och enhetsgrupper |
| Ta bort | Visa <br/> Andra beroenden: Visa enhetsmallar och enhetsgrupper |
| Köra kommandon | Uppdatera, visa <br/> Andra beroenden: Visa enhetsmallar och enhetsgrupper |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort, köra kommandon <br/> Andra beroenden: Visa enhetsmallar och enhetsgrupper |
**Behörigheter för enhetsgrupper**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget <br/> Andra beroenden: Visa enhetsmallar och enhetsinstanser |
| Uppdatering | Visa <br/> Andra beroenden: Visa enhetsmallar och enhetsinstanser |
| Skapa | Visa, uppdatera <br/> Andra beroenden: Visa enhetsmallar och enhetsinstanser |
| Ta bort | Visa <br/> Andra beroenden: Visa enhetsmallar och enhetsinstanser |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort <br/> Andra beroenden: Visa enhetsmallar och enhetsinstanser |
**Behörigheter för hantering av enhetsanslutning**
| Namn | Beroenden |
| ---- | -------- |
| Läs instans | Inget <br/> Andra beroenden: Visa enhetsmallar, enhetsgrupper, enhetsinstanser |
| Hantera instans | Inget |
| Läs globalt | Inget |
| Hantera globalt | Läs globalt |
| Fullständig kontroll | Läs instans, Hantera instans, Läs globalt, Hantera globalt. <br/> Andra beroenden: Visa enhetsmallar, enhetsgrupper, enhetsinstanser |
**Behörigheter för jobb**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser och enhetsgrupper |
| Uppdatering | Visa <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser och enhetsgrupper |
| Skapa | Visa, uppdatera <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser och enhetsgrupper |
| Ta bort | Visa <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser och enhetsgrupper |
| Genomförande | Visa <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser och enhetsgrupper. Uppdatera enhetsinstanser; Köra kommandon på enhetsinstanser |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort, köra <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser och enhetsgrupper. Uppdatera enhetsinstanser; Köra kommandon på enhetsinstanser |
**Behörigheter för regler**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget <br/> Andra beroenden: Visa enhetsmallar |
| Uppdatering | Visa <br/> Andra beroenden: Visa enhetsmallar |
| Skapa | Visa, uppdatera <br/> Andra beroenden: Visa enhetsmallar |
| Ta bort | Visa <br/> Andra beroenden: Visa enhetsmallar |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort <br/> Andra beroenden: Visa enhetsmallar |
#### <a name="managing-the-app"></a>Hantera appen
**Behörigheter för programinställningar**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Uppdatering | Visa |
| Kopiera | Visa <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser, enhetsgrupper, instrumentpaneler, dataexport, varumärkesprofilering, hjälplänkar, anpassade roller, regler |
| Ta bort | Visa |
| Fullständig kontroll | Visa, uppdatera, kopiera, ta bort <br/> Andra beroenden: Visa enhetsmallar, enhetsgrupper, instrumentpaneler för program, dataexport, varumärkesprofilering, hjälplänkar, anpassade roller, regler |
**Exportbehörigheter för programmall**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Exportera | Visa <br/> Andra beroenden: Visa enhetsmallar, enhetsinstanser, enhetsgrupper, instrumentpaneler, dataexport, varumärkesprofilering, hjälplänkar, anpassade roller, regler |
| Fullständig kontroll | Visa, exportera <br/> Andra beroenden: Visa enhetsmallar, enhetsgrupper, instrumentpaneler för program, dataexport, varumärkesprofilering, hjälplänkar, anpassade roller, regler |
**Faktureringsbehörigheter**
| Namn | Beroenden |
| ---- | -------- |
| Hantera | Inget |
| Fullständig kontroll | Hantera |
#### <a name="managing-users-and-roles"></a>Hantera användare och roller
**Behörigheter för anpassade roller**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Uppdatering | Visa |
| Skapa | Visa, uppdatera |
| Ta bort | Visa |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort |
**Behörigheter för användarhantering**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget <br/> Andra beroenden: Visa anpassade roller |
| Lägg till | Visa <br/> Andra beroenden: Visa anpassade roller |
| Ta bort | Visa <br/> Andra beroenden: Visa anpassade roller |
| Fullständig kontroll | Visa, lägga till, ta bort <br/> Andra beroenden: Visa anpassade roller |
> [!NOTE]
> En användare som har en anpassad roll som ger dem behörighet att lägga till andra användare kan bara lägga till användare i en roll med samma eller färre behörigheter än sin egen roll.
#### <a name="customizing-the-app"></a>Anpassa appen
**Behörigheter för programinstrumentpanel**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Uppdatering | Visa |
| Skapa | Visa, uppdatera |
| Ta bort | Visa |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort |
**Behörigheter för personliga instrumentpaneler**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Uppdatering | Visa |
| Skapa | Visa, uppdatera |
| Ta bort | Visa |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort |
**Behörigheter för branding, favicon och färger**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Uppdatering | Visa |
| Fullständig kontroll | Visa, uppdatera |
**Hjälplänkar behörigheter**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Uppdatering | Visa |
| Fullständig kontroll | Visa, uppdatera |
#### <a name="extending-the-app"></a>Utöka appen
**Behörigheter för dataexport**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Uppdatering | Visa |
| Skapa | Visa, uppdatera |
| Ta bort | Visa |
| Fullständig kontroll | Visa, uppdatera, skapa, ta bort |
**API-tokenbehörigheter**
| Namn | Beroenden |
| ---- | -------- |
| Visa | Inget |
| Skapa | Visa |
| Ta bort | Visa |
| Fullständig kontroll | Visa, skapa, ta bort |
## <a name="next-steps"></a>Nästa steg
Nu när du har lärt dig om hur du hanterar användare och roller i ditt Azure IoT Central-program är det föreslagna nästa steget att lära dig hur du [hanterar din faktura](howto-view-bill.md).
| 47.382022 | 468 | 0.734645 | swe_Latn | 0.99946 |
536aad1c745e74bf81b5584d7c01b62d9a3b1e94 | 4,978 | md | Markdown | articles/cognitive-services/big-data/recipes/art-explorer.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | 63 | 2017-08-28T07:43:47.000Z | 2022-02-24T03:04:04.000Z | articles/cognitive-services/big-data/recipes/art-explorer.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | 704 | 2017-08-04T09:45:07.000Z | 2021-12-03T05:49:08.000Z | articles/cognitive-services/big-data/recipes/art-explorer.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | 178 | 2017-07-05T10:56:47.000Z | 2022-03-18T12:25:19.000Z | ---
title: 'Anleitung: Intelligente Erkundung von Kunstwerken mit Cognitive Services für Big Data'
titleSuffix: Azure Cognitive Services
description: In dieser Anleitung wird die Erstellung einer durchsuchbaren Kunstdatenbank mit Azure Search und MMLSpark gezeigt.
services: cognitive-services
author: mhamilton723
manager: nitinme
ms.service: cognitive-services
ms.subservice: text-analytics
ms.topic: how-to
ms.date: 07/06/2020
ms.author: marhamil
ms.custom: devx-track-python
ms.openlocfilehash: 5a65ff28a38e42e05844063a330c0325f16b2247
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/30/2021
ms.locfileid: "94363288"
---
# <a name="recipe-intelligent-art-exploration-with-the-cognitive-services-for-big-data"></a>Anleitung: Intelligente Erkundung von Kunstwerken mit Cognitive Services für Big Data
In diesem Beispiel wird Cognitive Services für Big Data verwendet, um die frei zugängliche Sammlung des Metropolitan Museum of Art (Met) mit intelligenten Anmerkungen zu versehen. Dies ermöglicht die Erstellung einer intelligenten Suchmaschine mit Azure Search ganz ohne manuelle Anmerkungen.
## <a name="prerequisites"></a>Voraussetzungen
* Sie benötigen einen Abonnementschlüssel für maschinelles Sehen und Cognitive Search. Gehen Sie wie unter [Schnellstart: Erstellen eines Cognitive Services-Kontos im Azure-Portal](../../cognitive-services-apis-create-account.md) beschrieben vor, um maschinelles Sehen zu abonnieren und Ihren Schlüssel zu erhalten.
> [!NOTE]
> Preisinformationen finden Sie unter [Leistung, Steuerung und Anpassungsmöglichkeiten nach Bedarf mit flexibler Preisgestaltung](https://azure.microsoft.com/services/search/#pricing).
## <a name="import-libraries"></a>Importieren von Bibliotheken
Führen Sie den folgenden Befehl aus, um Bibliotheken für diese Anleitung zu importieren:
```python
import os, sys, time, json, requests
from pyspark.ml import Transformer, Estimator, Pipeline
from pyspark.ml.feature import SQLTransformer
from pyspark.sql.functions import lit, udf, col, split
```
## <a name="set-up-subscription-keys"></a>Einrichten von Abonnementschlüsseln
Führen Sie den folgenden Befehl aus, um Variablen für Dienstschlüssel einzurichten: Fügen Sie Ihre Abonnementschlüssel für maschinelles Sehen und Azure Cognitive Search ein.
```python
VISION_API_KEY = 'INSERT_COMPUTER_VISION_SUBSCRIPTION_KEY'
AZURE_SEARCH_KEY = 'INSERT_AZURE_COGNITIVE_SEARCH_SUBSCRIPTION_KEY'
search_service = "mmlspark-azure-search"
search_index = "test"
```
## <a name="read-the-data"></a>Lesen der Daten
Führen Sie den folgenden Befehl aus, um Daten aus der öffentlich zugänglichen Sammlung des Met zu laden:
```python
data = spark.read\
.format("csv")\
.option("header", True)\
.load("wasbs://[email protected]/metartworks_sample.csv")\
.withColumn("searchAction", lit("upload"))\
.withColumn("Neighbors", split(col("Neighbors"), ",").cast("array<string>"))\
.withColumn("Tags", split(col("Tags"), ",").cast("array<string>"))\
.limit(25)
```
<a name="AnalyzeImages"></a>
## <a name="analyze-the-images"></a>Analysieren der Bilder
Führen Sie den folgenden Befehl aus, um maschinelles Sehen für die öffentlich zugängliche Kunstsammlung des Met zu verwenden. Als Ergebnis erhalten Sie visuelle Features der Kunstwerke.
```python
from mmlspark.cognitive import AnalyzeImage
from mmlspark.stages import SelectColumns
#define pipeline
describeImage = (AnalyzeImage()
.setSubscriptionKey(VISION_API_KEY)
.setLocation("eastus")
.setImageUrlCol("PrimaryImageUrl")
.setOutputCol("RawImageDescription")
.setErrorCol("Errors")
.setVisualFeatures(["Categories", "Tags", "Description", "Faces", "ImageType", "Color", "Adult"])
.setConcurrency(5))
df2 = describeImage.transform(data)\
.select("*", "RawImageDescription.*").drop("Errors", "RawImageDescription")
```
<a name="CreateSearchIndex"></a>
## <a name="create-the-search-index"></a>Erstellen des Suchindex
Führen Sie den folgenden Befehl aus, um die Ergebnisse in Azure Search zu schreiben und eine Suchmaschine für die Kunstwerke zu erstellen, deren Metadaten durch maschinelles Sehen angereichert wurden:
```python
from mmlspark.cognitive import *
df2.writeToAzureSearch(
subscriptionKey=AZURE_SEARCH_KEY,
actionCol="searchAction",
serviceName=search_service,
indexName=search_index,
keyCol="ObjectID"
)
```
## <a name="query-the-search-index"></a>Abfragen des Suchindex
Führen Sie den folgenden Befehl aus, um den Azure Search-Index abzufragen:
```python
url = 'https://{}.search.windows.net/indexes/{}/docs/search?api-version=2019-05-06'.format(search_service, search_index)
requests.post(url, json={"search": "Glass"}, headers = {"api-key": AZURE_SEARCH_KEY}).json()
```
## <a name="next-steps"></a>Nächste Schritte
Informieren Sie sich darüber, wie Sie [Cognitive Services für Big Data zur Anomalieerkennung](anomaly-detection.md) verwenden. | 41.483333 | 315 | 0.780635 | deu_Latn | 0.756856 |
536b789875fdef41d1506f6e179f430069e19ede | 4,009 | md | Markdown | _posts/2017-12-10-gdd-extended.md | SnehPandya18/gdgbaroda.github.io | 8c4f59fc10e0257ed86856ff1853fba47a34e6a6 | [
"Apache-2.0"
] | 11 | 2017-02-19T10:38:46.000Z | 2019-02-24T02:58:19.000Z | _posts/2017-12-10-gdd-extended.md | SnehPandya18/gdgbaroda.github.io | 8c4f59fc10e0257ed86856ff1853fba47a34e6a6 | [
"Apache-2.0"
] | 45 | 2017-02-21T15:27:46.000Z | 2019-02-01T12:36:40.000Z | _posts/2017-12-10-gdd-extended.md | SnehPandya18/gdgbaroda.github.io | 8c4f59fc10e0257ed86856ff1853fba47a34e6a6 | [
"Apache-2.0"
] | 51 | 2015-10-19T05:13:48.000Z | 2019-09-22T04:30:31.000Z | ---
layout: post
title: "GDD India Extended"
date: 2018-07-26
categories: ["Events"]
tags: ["Google Developer Days", "GDD India 2017"]
author: Darshil Bhatt
assets: "/assets/2017-12/gdd-extended"
image: '/assets/2017-12/gdd-extended/gdgbaroda_at_gdd.jpg'
---
In December 2017, [Google](https://en.wikipedia.org/wiki/Google) organized grand event in Bengaluru, named [Google Developer Days](https://developers.google.com/events/gdd-india/)(GDD India). [GDG Baroda](https://gdgbaroda.com/) team attended the event, which turned into great experience for them. So the community organizers finalized the meetup for GDD India just to share their experience of the event with community members. Just like everytime, attendees were excited when they got confirmation from GDG Baroda for GDD India Extended.
{:class="img-responsive" : .center-image :height="400px" width="500px"}
Google Developer Days(GDD India) was held at Bengaluru form December 1–2, 2017. It is a global event organized by Google to showcase their latest products and platforms for the users and developers. It was the first time Google organised such a big event in India. And you know what, it was totally awesome! It was organized at [International Exhibition Centre](http://www.biec.in/) in Bengaluru.
On first day, when team reached the venue, they were really shocked looking the venue, crowd & arrangements, because they had never imagined that it is going to be such a big event. In morning, they done with their breakfast & registration. After that, it was time for interesting keynote by googlers. Keynote began in the rocking way.
[](https://www.youtube.com/watch?v=zJKcxogLJws&t=1030s)
They explained about the advancement of Google in several domains, their projects in India and sessions that were to follow in those two days. They discussed how they plan to find next billion customers in India in the coming years, and for this they have initiated several projects to bring wi-fi facilities to rural cities and also to teach people to use these services. Trainings sessions, codelabs, project demos followed this keynote. The trainings and sessions covered several domains including Android, Assistant, Cloud, Mobile Web, etc. running in parallel. Also, all these things were being live streamed on [youtube](https://www.youtube.com/watch?v=EFF8lJZh7Ak&t=23836s). You can have a look at the complete detailed schedule [here](https://developers.google.com/events/gdd-india/schedule/day1).
{:class="img-responsive" : .center-image :height="400px" width="500px"}
Apart from these sessions, there were demos of some fun projects which were developed by googlers. One of the cool projects was DrawBot. It simply took you picture using a mounted camera and drew that picture on the sheet. It used tensorflow and android things. There were many other things there which made the event worth attending. And the icing on the cake was the Pico Kits they received as give-away. There were IOT kits having Android Things project starter material. In evening there was rocking after-party with a band performance followed by DJ, at the end of first day.
Overall, the event was wonderful for them & all are waiting for its next iteration. Second day followed the schedule just same as on the first day. You can view it from [here](https://developers.google.com/events/gdd-india/schedule/day2). If you want to watch the highlights of the events, or watch some sessions and trainings from those two days, please visit [here](https://www.youtube.com/playlist?list=PLlyCyjh2pUe_Xyqy9K6sBxwr0L8QaU7dq).
Enthusiasts always learn with fun & joy when it comes about [GDG Baroda](https://gdgbaroda.com/).
That's it for today's post. We'll continue our journey in upcoming post.
| 125.28125 | 805 | 0.788226 | eng_Latn | 0.997287 |
536bc4732576b12d83ed943700cd07c8e0c19beb | 747 | md | Markdown | api/Outlook.OlkCheckBox.Font.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Outlook.OlkCheckBox.Font.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-28T07:52:15.000Z | 2021-09-28T07:52:15.000Z | api/Outlook.OlkCheckBox.Font.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-28T07:45:29.000Z | 2021-09-28T07:45:29.000Z | ---
title: OlkCheckBox.Font Property (Outlook)
keywords: vbaol11.chm1000140
f1_keywords:
- vbaol11.chm1000140
ms.prod: outlook
api_name:
- Outlook.OlkCheckBox.Font
ms.assetid: ffabe2b5-1910-4480-b4d4-b684dd8203b5
ms.date: 06/08/2017
localization_priority: Normal
---
# OlkCheckBox.Font Property (Outlook)
Returns a **StdFont** that represents the font used to render the text inside the control. Read-only.
## Syntax
_expression_. `Font`
_expression_ A variable that represents an [OlkCheckBox](./Outlook.OlkCheckBox.md) object.
## Remarks
The font is expressed as the Microsoft Windows type **StdFont**.
## See also
[OlkCheckBox Object](Outlook.OlkCheckBox.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 20.189189 | 102 | 0.767068 | eng_Latn | 0.617136 |
536bd780a168d416a42449d3e4a89805baad5a6c | 143,772 | md | Markdown | CHANGELOG.md | PreacherCmdLets/SqlServerDsc | 235c064ea1ca5b7023f9e96d64642a4b06ae853a | [
"MIT"
] | null | null | null | CHANGELOG.md | PreacherCmdLets/SqlServerDsc | 235c064ea1ca5b7023f9e96d64642a4b06ae853a | [
"MIT"
] | null | null | null | CHANGELOG.md | PreacherCmdLets/SqlServerDsc | 235c064ea1ca5b7023f9e96d64642a4b06ae853a | [
"MIT"
] | null | null | null | # Change log for SqlServerDsc
The format is based on and uses the types of changes according to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- SqlEndpoint
- Added support for the Service Broker Endpoint ([issue #498](https://github.com/dsccommunity/SqlServerDsc/issues/498)).
### Changed
- SqlServerDsc
- Updated code formatting using latest release of PSScriptAnalyzer.
- The URLs in the CHANGELOG.md that was pointing to issues is now
referencing the new repository name and URL.
- SqlServerDsc.Common
- The helper function `Get-SqlInstanceMajorVersion` no longer have a default
value for parameter **InstanceName** since the parameter is mandatory
and it was never used.
- SqlReplication
- The resource are now using the helper function `Get-SqlInstanceMajorVersion`
([issue #1408](https://github.com/dsccommunity/SqlServerDsc/issues/1408)).
### Fixed
- SqlDatabaseRole
- Fixed check to see if the role and user existed in the database. The
previous logic would always indicate the role or user was not found unless
the role had the same name as the user. Also updated the
DesiredMembersNotPresent string to be more accurate when an extra user is
in the role ([issue #1487](https://github.com/dsccommunity/SqlServerDsc/issues/1487)).
- SqlAlwaysOnService
- Updated Get-TargetResource to return all defined schema properties
([issue #150](https://github.com/dsccommunity/SqlServerDsc/issues/1501)).
## [14.2.1] - 2020-08-14
### Changed
- SqlServerDsc
- Document changes in the file `build.yml`.
- The regular expression for `major-version-bump-message` in the file
`GitVersion.yml` was changed to only raise major version when the
commit message contain the phrase `breaking change`, or when it contain
the word `breaking` or `major`.
- SqlSetup
- Duplicate function Get-SqlMajorVersion was removed and instead the
helper function `Get-FilePathMajorVersion` from the helper module
SqlServerDsc.Common is used ([issue #1178](https://github.com/dsccommunity/SqlServerDsc/issues/1178)).
- SqlWindowsFirewall
- Duplicate function Get-SqlMajorVersion was removed and instead the
helper function `Get-FilePathMajorVersion` from the helper module
SqlServerDsc.Common is used ([issue #1178](https://github.com/dsccommunity/SqlServerDsc/issues/1178)).
- SqlServerDsc.Common
- Function `Get-FilePathMajorVersion` was added. The function `Get-SqlMajorVersion`
from the resources _SqlSetup_ and _SqlWindowsFirewall_ was moved and
renamed without any functional changes ([issue #1178](https://github.com/dsccommunity/SqlServerDsc/issues/1178)).
### Fixed
- SqlServerDsc
- Removed helper functions that was moved to the module _DscResource.Common_.
DSC resources using those functions are using them from the module
_DscResource.Common_.
- SqlDatabaseObjectPermission
- Fixed method invocation failed because of missing `Where()` method ([issue #1600](https://github.com/dsccommunity/SqlServerDsc/issues/1600)).
- New integration tests to verify scenarios when passing a single permission.
- To enforce a scenario where a permission must be changed from `'GrantWithGrant'`
to `'Grant'` a new parameter **Force** was added ([issue #1602](https://github.com/dsccommunity/SqlServerDsc/issues/1602)).
The parameter **Force** is used to enforce the desired state in those
scenarios where revocations must be performed to enforce the desired
state, even if that encompasses cascading revocations. If parameter
**Force** is _not_ set to `$true` an exception is thrown in those
scenarios where a revocation must be performed to enforce the desired
state.
- New integration tests to verify scenarios when current state for a
permission is `'GrantWithGrant'` but desired state should be `'Grant'`.
- SqlSetup
- The example `4-InstallNamedInstanceInFailoverClusterFirstNode.ps1` was
updated to no longer reference the issue #405 and issue #444 in the
comment-based help. The issues was fixed a while back and _SqlSetup_
now supports the built-in parameter `PsDscRunAsCredential` ([issue #975](https://github.com/dsccommunity/SqlServerDsc/issues/975)).
## [14.2.0] - 2020-07-23
### Fixed
- SqlServerDsc
- Updated comment-based help according to style guideline throughout
([issue #1500](https://github.com/dsccommunity/SqlServerDsc/issues/1500)).
- Using the Codecov carry-forward flag because code coverage reports are
not sent on each commit.
- CommonTestHelper
- Minor style changes.
- SqlSetup
- Updated the documentation with the currently supported features
([issue #1566](https://github.com/dsccommunity/SqlServerDsc/issues/1566)).
- Updated the documentation around permissions in the directory tree for
Analysis Services ([issue #1443](https://github.com/dsccommunity/SqlServerDsc/issues/1443)).
- Documented that on certain operating systems, when using least privilege
for the service account, the security policy setting _Network access:_
_Restrict clients allowed to make remote calls to SAM_ can result in
an access denied error during install of the _SQL Server Database Engine_
([issue #1559](https://github.com/dsccommunity/SqlServerDsc/issues/1559)).
- SqlRole
- Fixed the `ServerName` parameter to work with the default value of
`$env:COMPUTERNAME` ([issue #1592](https://github.com/dsccommunity/SqlServerDsc/issues/1592)).
## [14.1.0] - 2020-07-06
### Removed
- SqlServerDsc
- Removed the file `.github/CONTRIBUTION.md` as it no longer served any
purpose; GitHub now finds the CONTRIBUTION.md in the root folder
directly ([issue #1227](https://github.com/dsccommunity/SqlServerDsc/issues/1227)).
### Changed
- SqlServerDsc
- Updated DSC resources parameter documentation.
### Fixed
- SqlServerDsc
- Updated resource parameter documentation ([issue #1568](https://github.com/dsccommunity/SqlServerDsc/issues/1568)).
- Remove italic and inline code-block markdown code in documentation.
- Documentation is now published to the GitHub Wiki.
- Deploy task was updated with the correct name.
- Minor changes to schema property descriptions to generate documentation
correctly.
- Updated task list in the PULL_REQUEST_TEMPLATE.md.
- The documentation in CONTRIBUTING.md has been somewhat updated.
- Updated documentation around the design pattern for accounts that do not
use passwords ([issue #378](https://github.com/dsccommunity/SqlServerDsc/issues/378))
and ([issue #1230](https://github.com/dsccommunity/SqlServerDsc/issues/1230)).
- Updated the Integration Test README.md to better explain what the
integration tests for SqlSetup, SqlRSSetup, and SqlRS do ([issue #1315](https://github.com/dsccommunity/SqlServerDsc/issues/1315)).
- SqlServerDsc.Common
- Connect-UncPath
- Now supports authenticating using both a NetBIOS domain and a fully
qualified domain name (FQDN) ([issue #1223](https://github.com/dsccommunity/SqlServerDsc/issues/1223)).
- Connect-SQL
- Now supports authenticating using both a NetBIOS domain and a fully
qualified domain name (FQDN) ([issue #1223](https://github.com/dsccommunity/SqlServerDsc/issues/1223)).
- Connect-SQLAnalysis
- Now supports authenticating using both a NetBIOS domain and a fully
qualified domain name (FQDN) ([issue #1223](https://github.com/dsccommunity/SqlServerDsc/issues/1223)).
- SqlAGReplica
- Update documentation with a requirement for SqlServer in certain circumstances
([issue #1033](https://github.com/dsccommunity/SqlServerDsc/issues/1033)).
- SqlRSSetup
- There was a misleading typo in the error message that was thrown when
passing neither the `Edition` nor the `ProductKey` parameter ([issue #1386](https://github.com/dsccommunity/SqlServerDsc/issues/1386)).
- Updated the parameter descriptions for the parameters `Edition` and
`ProductKey` to state that they are mutually exclusive ([issue #1386](https://github.com/dsccommunity/SqlServerDsc/issues/1386)).
- SqlWindowsFirewall
- Now supports authenticating using both a NetBIOS domain and a fully
qualified domain name (FQDN) ([issue #1223](https://github.com/dsccommunity/SqlServerDsc/issues/1223)).
- SqlDatabaseObjectPermission
- Since the task that publishes Wiki content was updated to correctly handle
embedded instances, the duplicate documentation was removed from the
resource README.md, and some was added to the schema MOF parameter
descriptions ([issue #1580](https://github.com/dsccommunity/SqlServerDsc/issues/1580)).
- SqlScript
- Fixed the URLs in the parameter documentation ([issue #1582](https://github.com/dsccommunity/SqlServerDsc/issues/1582)).
- SqlScriptQuery
- Fixed the URLs in the parameter documentation ([issue #1583](https://github.com/dsccommunity/SqlServerDsc/issues/1583)).
### Added
- SqlScript
- Added the `DisableVariables` parameter ([issue #1422](https://github.com/dsccommunity/SqlServerDsc/issues/1422)); see the sketch below.
- SqlScriptQuery
- Added the `DisableVariables` parameter ([issue #1422](https://github.com/dsccommunity/SqlServerDsc/issues/1422)).
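A minimal sketch of the new parameter; the paths and instance name are
placeholders, and the remaining parameters are assumed from the resource's
existing schema:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlScript 'RunScriptWithoutVariableSubstitution'
        {
            ServerName       = 'localhost'
            InstanceName     = 'DSCSQLTEST'
            GetFilePath      = 'C:\DscTemp\Get.sql'
            TestFilePath     = 'C:\DscTemp\Test.sql'
            SetFilePath      = 'C:\DscTemp\Set.sql'
            # Assumption: disables sqlcmd-style $(Variable) substitution so
            # scripts containing literal '$(...)' text run unmodified.
            DisableVariables = $true
        }
    }
}
```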
## [14.0.0] - 2020-06-12
### Remove
- SqlServerDsc
- BREAKING CHANGE: Since the operating system Windows Server 2008 R2 and
the product SQL Server 2008 R2 have gone end-of-life, the DSC resources
will no longer try to maintain compatibility with them. Moving forward,
and including this release, there may be code changes that will break
the resources on Windows Server 2008 R2 or with SQL Server 2008 R2
([issue #1514](https://github.com/dsccommunity/SqlServerDsc/issues/1514)).
### Deprecated
The documentation, examples, unit test, and integration tests have been
removed for these deprecated resources. These resources will be removed
in a future release.
- SqlDatabaseOwner
- This resource is now deprecated. The functionality is now covered by
a property in the resource _SqlDatabase_ ([issue #966](https://github.com/dsccommunity/SqlServerDsc/issues/966)).
- SqlDatabaseRecoveryModel
- This resource is now deprecated. The functionality is now covered by
a property in the resource _SqlDatabase_ ([issue #967](https://github.com/dsccommunity/SqlServerDsc/issues/967)).
- SqlServerEndpointState
- This resource is now deprecated. The functionality is covered by a
property in the resource _SqlEndpoint_ ([issue #968](https://github.com/dsccommunity/SqlServerDsc/issues/968)).
- SqlServerNetwork
- This resource is now deprecated. The functionality is now covered by
the resources _SqlProtocol_ and _SqlProtocolTcpIp_.
### Added
- SqlSetup
- Added support for major version upgrade ([issue #1561](https://github.com/dsccommunity/SqlServerDsc/issues/1561)).
- SqlServerDsc
- Added new resource SqlProtocol ([issue #1377](https://github.com/dsccommunity/SqlServerDsc/issues/1377)).
- Added new resource SqlProtocolTcpIp ([issue #1378](https://github.com/dsccommunity/SqlServerDsc/issues/1378)).
- Added new resource SqlDatabaseObjectPermission ([issue #1119](https://github.com/dsccommunity/SqlServerDsc/issues/1119)).
- Fixing a problem with the latest ModuleBuild 1.7.0 that breaks the CI
pipeline.
- Prepare repository for auto-documentation by adding README.md to each
resource folder with the content from the root README.md.
- SqlServerDsc.Common
- Added function `Import-Assembly` that can help import an assembly
into the PowerShell session.
- Prepared unit tests to support Pester 5 so only a minimal conversion
is needed later.
- Updated `Import-SQLPSModule` to better support unit tests.
- CommonTestHelper
- Added the functions `Get-InvalidOperationRecord` and `Get-InvalidResultRecord`
that are needed to evaluate localized error message strings in unit tests.
- SqlEndpoint
- BREAKING CHANGE: A new required property `EndpointType` was added to
support different types of endpoints in the future. For now the only
endpoint type that is supported is the database mirror endpoint type
(`DatabaseMirroring`). See the sketch at the end of this section.
- Added the property `State` to be able to specify if the endpoint should
be running, stopped, or disabled. _This property was moved from the now_
_deprecated DSC resource `SqlServerEndpointState`_.
- SqlSetup
- A read only property `IsClustered` was added that can be used to determine
if the instance is clustered.
- Added the properties `NpEnabled` and `TcpEnabled` ([issue #1161](https://github.com/dsccommunity/SqlServerDsc/issues/1161)).
- Added the property `UseEnglish` ([issue #1473](https://github.com/dsccommunity/SqlServerDsc/issues/1473)).
- SqlReplication
- Add integration tests ([issue #755](https://github.com/dsccommunity/SqlServerDsc/issues/755)).
- SqlDatabase
- The property `OwnerName` was added.
- SqlDatabasePermission
- Now possible to change permissions for database user-defined roles
(e.g. public) and database application roles ([issue #1498](https://github.com/dsccommunity/SqlServerDsc/issues/1498)).
- SqlServerDsc.Common
- The helper function `Restart-SqlService` was improved to handle Failover
Clusters better. Now the SQL Server service will only be taken offline
and back online again if the service is online to begin with.
- The helper function `Restart-SqlService` learned the new parameter
`OwnerNode`. The parameter `OwnerNode` takes an array of cluster node
names. Using this parameter the cluster group will only be taken
offline and back online if the cluster group owner is one of the nodes
specified in this parameter.
- The helper function `Compare-ResourcePropertyState` was improved to
handle embedded instances by adding a parameter `CimInstanceKeyProperties`
that can be used to identify the unique parameter for each embedded
instance in a collection.
- The helper function `Test-DscPropertyState` was improved to evaluate
the properties in a single CIM instance or a collection of CIM instances
by recursively call itself.
- When the helper function `Test-DscPropertyState` evaluated an array
the verbose messages were not very descriptive. Instead of outputting
the side indicator from the compare it now outputs a descriptive
message.
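To illustrate the breaking change to _SqlEndpoint_ described above, here is a
minimal sketch; the endpoint name, port, and instance name are placeholders,
and the parameter set (including the `'Started'` state value) is assumed from
the resource's schema:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlEndpoint 'HADREndpoint'
        {
            Ensure       = 'Present'
            EndpointName = 'HADR'
            # EndpointType is now required; 'DatabaseMirroring' is currently
            # the only supported endpoint type.
            EndpointType = 'DatabaseMirroring'
            Port         = 5022
            # State was moved here from the deprecated resource
            # SqlServerEndpointState.
            State        = 'Started'
            InstanceName = 'DSCSQLTEST'
        }
    }
}
```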
### Changed
- SqlServerDsc
- BREAKING CHANGE: Some DSC resources have been renamed ([issue #1540](https://github.com/dsccommunity/SqlServerDsc/issues/1540)).
- `SqlServerConfiguration` was renamed to `SqlConfiguration`.
- `SqlServerDatabaseMail` was renamed to `SqlDatabaseMail`.
- `SqlServerEndpoint` was renamed to `SqlEndpoint`.
- `SqlServerEndpointPermission` was renamed to `SqlEndpointPermission`.
- `SqlServerLogin` was renamed to `SqlLogin`.
- `SqlServerMaxDop` was renamed to `SqlMaxDop`.
- `SqlServerMemory` was renamed to `SqlMemory`.
- `SqlServerPermission` was renamed to `SqlPermission`.
- `SqlServerProtocol` was renamed to `SqlProtocol`.
- `SqlServerProtocolTcpIp` was renamed to `SqlProtocolTcpIp`.
- `SqlServerReplication` was renamed to `SqlReplication`.
- `SqlServerRole` was renamed to `SqlRole`.
- `SqlServerSecureConnection` was renamed to `SqlSecureConnection`.
- Changed all resource prefixes from `MSFT_` to `DSC_` ([issue #1496](https://github.com/dsccommunity/SqlServerDsc/issues/1496)).
_Deprecated resource has not changed prefix._
- All resources are now using the common module DscResource.Common.
- When a PR is labelled with 'ready for merge' it is no longer marked
as stale if the PR is not merged within 30 days (for example when it is
dependent on something else) ([issue #1504](https://github.com/dsccommunity/SqlServerDsc/issues/1504)).
- Updated the CI pipeline to use latest version of the module ModuleBuilder.
- Changed to use the property `NuGetVersionV2` from GitVersion in the
CI pipeline.
- The unit tests now run on PowerShell 7 to optimize the total run time.
- SqlServerDsc.Common
- The helper function `Invoke-InstallationMediaCopy` was changed to
handle a breaking change in PowerShell 7 ([issue #1530](https://github.com/dsccommunity/SqlServerDsc/issues/1530)).
- Removed the local helper function `Set-PSModulePath` as it was
implemented in the module DscResource.Common.
- CommonTestHelper
- The test helper function `New-SQLSelfSignedCertificate` was changed
to install the dependent module `PSPKI` through `RequiredModules.psd1`.
- SqlAlwaysOnService
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319));
see the sketch at the end of this section.
- Normalize parameter descriptive text for default values.
- SqlDatabase
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- BREAKING CHANGE: The non-mandatory parameters were removed from the
function `Get-TargetResource` since they were not needed.
- BREAKING CHANGE: The properties `CompatibilityLevel` and `Collation`
are now only enforced if they are specified in the configuration.
- Normalize parameter descriptive text for default values.
- SqlDatabaseDefaultLocation
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlDatabaseOwner
- BREAKING CHANGE: Database changed to DatabaseName for consistency with
other modules ([issue #1484](https://github.com/dsccommunity/SqlServerDsc/issues/1484)).
- SqlDatabasePermission
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- BREAKING CHANGE: Database changed to DatabaseName for consistency with
other modules ([issue #1484](https://github.com/dsccommunity/SqlServerDsc/issues/1484)).
- BREAKING CHANGE: The resource no longer creates the database user if
it does not exist. Use the resource _SqlDatabaseUser_ to enforce that
the database user exists in the database prior to setting permissions
using this resource ([issue #848](https://github.com/dsccommunity/SqlServerDsc/issues/848)).
- BREAKING CHANGE: The resource no longer checks if a login exists, so
it is possible to set permissions for database users that do not
have a login, e.g. the database user 'guest' ([issue #1134](https://github.com/dsccommunity/SqlServerDsc/issues/1134)).
- Updated examples.
- Added integration tests ([issue #741](https://github.com/dsccommunity/SqlServerDsc/issues/741)).
- Get-TargetResource will no longer throw an exception if the database
does not exist.
- SqlDatabaseRecoveryModel
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlDatabaseRole
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- BREAKING CHANGE: Database changed to DatabaseName for consistency with
other modules ([issue #1484](https://github.com/dsccommunity/SqlServerDsc/issues/1484)).
- SqlDatabaseUser
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlScript
- BREAKING CHANGE: The parameter `ServerInstance` is replaced by the two
parameters `ServerName` and `InstanceName`. The parameter `InstanceName`
is the only mandatory one, which fixes the issue where it was possible to
run the same script using different host names ([issue #925](https://github.com/dsccommunity/SqlServerDsc/issues/925)).
- SqlScriptQuery
- BREAKING CHANGE: The parameter `ServerInstance` is replaced by the two
parameters `ServerName` and `InstanceName`. The parameter `InstanceName`
is the only mandatory one, which fixes the issue where it was possible to
run the same query using different host names ([issue #925](https://github.com/dsccommunity/SqlServerDsc/issues/925)).
- SqlConfiguration
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlDatabaseMail
- Normalize parameter descriptive text for default values.
- SqlEndpoint
- BREAKING CHANGE: Now the properties are only enforced if they are
specified in the configuration.
- Normalize parameter descriptive text for default values.
- SqlEndpointPermission
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlLogin
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlRole
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlServiceAccount
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory and
defaults to `$env:COMPUTERNAME` ([issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- Normalize parameter descriptive text for default values.
- SqlSetup
- BREAKING CHANGE: Now if the parameter `AgtSvcStartupType` is not specified
in the configuration the resource will no longer by default add an
argument to `setup.exe` with a value of `Automatic` for the argument
`AGTSVCSTARTUPTYPE`. If the parameter `AgtSvcStartupType` is not specified
in the configuration there will be no setup argument added at all
([issue #464](https://github.com/dsccommunity/SqlServerDsc/issues/464)).
- BREAKING CHANGE: When installing a failover cluster the cluster
validation is no longer skipped by default. To skip cluster validation
the configuration must opt-in by specifying the following
`SkipRule = 'Cluster_VerifyForErrors'` ([issue #335](https://github.com/dsccommunity/SqlServerDsc/issues/335)).
- BREAKING CHANGE: Now, unless the parameter `SuppressReboot` is set to
`$true`, the node will be restarted if the setup ends with the
[error code 3010](https://docs.microsoft.com/en-us/previous-versions/tn-archive/bb418811(v=technet.10)#server-setup-fails-with-code-3010).
Previously just a warning message was written ([issue #565](https://github.com/dsccommunity/SqlServerDsc/issues/565)).
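As an illustration of the relaxed `ServerName` parameter mentioned for many
resources in this release, this sketch enables AlwaysOn on the local node
without specifying `ServerName` at all (the instance name is a placeholder):

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlAlwaysOnService 'EnableAlwaysOn'
        {
            Ensure       = 'Present'
            InstanceName = 'MSSQLSERVER'
            # ServerName is intentionally omitted; it now defaults to
            # $env:COMPUTERNAME.
        }
    }
}
```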
### Fixed
- SqlServerDsc
- The regular expression for `minor-version-bump-message` in the file
`GitVersion.yml` was changed to only raise the minor version when the
commit message contains the word `add`, `adds`, `minor`, `feature`,
or `features`.
- Now code coverage is reported to Codecov, and a codecov.yml was added.
- Updated to support DscResource.Common v0.7.1.
- Changed to point to CONTRIBUTING.md on master branch to avoid "404 Page not found"
([issue #1508](https://github.com/dsccommunity/SqlServerDsc/issues/1508)).
- SqlAGDatabase
- Fixed unit tests that failed intermittently when running unit tests
in PowerShell 7 ([issue #1532](https://github.com/dsccommunity/SqlServerDsc/issues/1532)).
- Minor code style issue changes.
- SqlAgentAlert
- The parameter `ServerName` now throws an error when an empty string or
null value is passed (part of [issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- SqlAgentFailsafe
- The parameter `ServerName` now throws an error when an empty string or
null value is passed (part of [issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- SqlAgentOperator
- The parameter `ServerName` now throws an error when an empty string or
null value is passed (part of [issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- SqlAlias
- BREAKING CHANGE: The parameter `ServerName` is now non-mandatory to
prevent ping-pong behavior ([issue #1502](https://github.com/dsccommunity/SqlServerDsc/issues/1502)).
The `ServerName` is not returned as an empty string when the protocol is
Named Pipes.
- SqlDatabase
- Fixed missing parameter `CompatibilityLevel` in the README.md (and
updated the description in the schema.mof).
- SqlRs
- Fix typo in the schema parameter `SuppressRestart` description
and in the parameter description in the `README.md`.
- SqlDatabaseMail
- The parameter `ServerName` now throws an error when an empty string or
null value is passed (part of [issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- SqlServerEndpoint
- The parameter `ServerName` now throws an error when an empty string or
null value is passed (part of [issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- SqlEndpoint
- The parameter `ServerName` now throws an error when an empty string or
null value is passed (part of [issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- SqlPermission
- The parameter `ServerName` now throws an error when an empty string or
null value is passed (part of [issue #319](https://github.com/dsccommunity/SqlServerDsc/issues/319)).
- SqlReplication
- Enhanced the exception handling so it shows the inner exception error
message that contains the actual error that occurred.
- Corrected the examples.
- SqlSetup
- Update integration tests to correctly detect sysadmins because of changes
to the build worker.
- The properties `SqlTempdbLogFileGrowth` and `SqlTempdbFileGrowth` now return
the correct values. Previously the value of the growth was wrongly
divided by 1KB even if the value was in percent. Now the value for growth
is the sum of the average of MB and the average of the percentage.
- The function `Get-TargetResource` was changed so that the property
`SQLTempDBDir` will now return the database `tempdb`'s property
`PrimaryFilePath`.
- BREAKING CHANGE: Logic that was under feature flag `DetectionSharedFeatures`
was made the default and old logic that was used to detect shared features
was removed ([issue #1290](https://github.com/dsccommunity/SqlServerDsc/issues/1290)).
This was implemented because the previous implementation did not work
fully with SQL Server 2017.
- Much of the code was refactored into units (functions) to be easier to test.
Due to the size of the code the unit tests ran for an abnormally long time;
after this refactoring the unit tests run much quicker.
## [13.5.0] - 2020-04-12
### Added
- SqlServerLogin
- Added the `DefaultDatabase` parameter ([issue #1474](https://github.com/dsccommunity/SqlServerDsc/issues/1474)); see the sketch below.
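A minimal sketch of the new parameter; the login and database names are
placeholders, and the remaining parameters are assumed from the resource's
existing schema:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlServerLogin 'AddLoginWithDefaultDatabase'
        {
            Ensure          = 'Present'
            Name            = 'CONTOSO\SqlUser'
            LoginType       = 'WindowsUser'
            InstanceName    = 'MSSQLSERVER'
            # New in this release: the default database for the login.
            DefaultDatabase = 'AdventureWorks'
        }
    }
}
```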
### Changed
- SqlServerDsc
- Update the CI pipeline files.
- Only run CI pipeline on branch `master` when there are changes to files
inside the `source` folder.
- Replaced Microsoft-hosted agent (build image) `win1803` with `windows-2019`
([issue #1466](https://github.com/dsccommunity/SqlServerDsc/issues/1466)).
### Fixed
- SqlSetup
- Refresh PowerShell drive list before attempting to resolve `setup.exe` path
([issue #1482](https://github.com/dsccommunity/SqlServerDsc/issues/1482)).
- SqlAG
- Fix hashtables to align with style guideline ([issue #1437](https://github.com/dsccommunity/SqlServerDsc/issues/1437)).
## [13.4.0] - 2020-03-18
### Added
- SqlDatabase
- Added the ability to manage the Compatibility Level and Recovery Model
of a database; see the sketch below.
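A minimal, hypothetical sketch; the database name is a placeholder, and the
enumeration values shown for the two new properties are assumptions:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlDatabase 'ManageDatabase'
        {
            Ensure             = 'Present'
            Name               = 'AdventureWorks'
            InstanceName       = 'MSSQLSERVER'
            # Assumed enumeration values for the new properties.
            CompatibilityLevel = 'Version140'
            RecoveryModel      = 'Simple'
        }
    }
}
```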
### Changed
- SqlServerDsc
- Azure Pipelines will no longer trigger on changes to just the CHANGELOG.md
(when merging to master).
- The deploy step is no longer run if the Azure DevOps organization URL
does not contain 'dsccommunity'.
- Changed the VS Code project settings to trim trailing whitespace for
markdown files too.
## [13.3.0] - 2020-01-17
### Added
- SqlServerDsc
- Added continuous delivery with a new CI pipeline.
- Update build.ps1 from latest template.
### Changed
- SqlServerDsc
- Add .gitattributes file to check out files correctly with CRLF.
- Updated .vscode/analyzersettings.psd1 file to correctly use PSSA rules
and custom rules in VS Code.
- Fix hashtables to align with style guideline ([issue #1437](https://github.com/dsccommunity/SqlServerDsc/issues/1437)).
- Updated most examples to remove the need for the variable `$ConfigurationData`,
and fixed style issues.
- Ignore commit in `GitVersion.yml` to force the correct initial release.
- Set a display name on all the jobs and tasks in the CI pipeline.
- Removing file 'Tests.depend.ps1' as it is no longer required.
- SqlServerMaxDop
- Fix line endings in code which did not use the correct format.
- SqlAlwaysOnService
- The integration test has been temporarily disabled because when
the cluster feature is installed it requires a reboot on the
Windows Server 2019 build worker.
- SqlDatabaseRole
- Update unit test to have the correct description on the `Describe`-block
for the test of `Set-TargetResource`.
- SqlServerRole
- Add support for nested role membership ([issue #1452](https://github.com/dsccommunity/SqlServerDsc/issues/1452));
see the sketch below.
- Removed use of the case-sensitive Contains() function when evaluating role
membership ([issue #1153](https://github.com/dsccommunity/SqlServerDsc/issues/1153)).
- Refactored mocks and unit tests to increase performance
([issue #979](https://github.com/dsccommunity/SqlServerDsc/issues/979)).
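A minimal sketch of the nested role membership support referenced above; all
names are placeholders, and the parameter set is assumed from the resource's
schema:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlServerRole 'NestedRoleMembership'
        {
            Ensure           = 'Present'
            ServerRoleName   = 'CustomServerRole'
            InstanceName     = 'MSSQLSERVER'
            # A server role can now itself be listed as a member.
            MembersToInclude = @('AnotherServerRole', 'CONTOSO\SqlAdmins')
        }
    }
}
```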
### Fixed
- SqlServerDsc
- Fixed unit tests to call the function `Invoke-TestSetup` outside the
try-block.
- Update GitVersion.yml with the correct regular expression.
- Fix import statement in all tests, making sure it throws if module
DscResource.Test cannot be imported.
- SqlAlwaysOnService
- When failing to enable AlwaysOn the resource should now fail with an
error ([issue #1190](https://github.com/dsccommunity/SqlServerDsc/issues/1190)).
- SqlAgListener
- Fix IPv6 addresses failing Test-TargetResource after listener creation.
## [13.2.0.0] - 2019-09-18
### Changed
- Changes to SqlServerDsc
- Fix keywords to lower-case to align with guideline.
- Fix keywords to have space before a parenthesis to align with guideline.
- Fix typo in SqlSetup strings ([issue #1419](https://github.com/dsccommunity/SqlServerDsc/issues/1419)).
## [13.1.0.0] - 2019-08-07
### Changed
- Changes to SqlServerDsc
- New DSC resource SqlAgentFailsafe
- New DSC resource SqlDatabaseUser ([issue #846](https://github.com/dsccommunity/SqlServerDsc/issues/846)).
- Adds ability to create database users with more fine-grained control,
e.g. re-mapping of orphaned logins or a different login. Supports
creating a user with or without login name, and database users mapped
to a certificate or asymmetric key.
- Changes to helper function Invoke-Query
- Fixes issues in [issue #1355](https://github.com/dsccommunity/SqlServerDsc/issues/1355).
- Works together with Connect-SQL now.
- Parameters now match that of Connect-SQL ([issue #1392](https://github.com/dsccommunity/SqlServerDsc/issues/1392)).
- Can now pass in credentials.
- Can now pass in 'Microsoft.SqlServer.Management.Smo.Server' object.
- Can also pipe in 'Microsoft.SqlServer.Management.Smo.Server' object.
- Can pipe Connect-SQL | Invoke-Query.
- Added default values to Invoke-Query.
- Now it will output verbose messages of the query that is run, so it is
not as quiet about what it is doing when a user asks for verbose output
([issue #1404](https://github.com/dsccommunity/SqlServerDsc/issues/1404)).
- It is possible to redact text in the verbose output by providing
strings in the new parameter `RedactText`; see the sketch at the end of
this section.
- Minor style fixes in unit tests.
- Changes to helper function Connect-SQL
- When impersonating WindowsUser credential use the NetworkCredential UserName.
- Added additional verbose logging.
- Connect-SQL now uses parameter sets to more intuitively evaluate that
the correct parameters are used in different scenarios
([issue #1403](https://github.com/dsccommunity/SqlServerDsc/issues/1403)).
- Changes to helper function Connect-SQLAnalysis
- Parameters now match that of Connect-SQL ([issue #1392](https://github.com/dsccommunity/SqlServerDsc/issues/1392)).
- Changes to helper function Restart-SqlService
- Parameters now match that of Connect-SQL ([issue #1392](https://github.com/dsccommunity/SqlServerDsc/issues/1392)).
- Changes to helper function Restart-ReportingServicesService
- Parameters now match that of Connect-SQL ([issue #1392](https://github.com/dsccommunity/SqlServerDsc/issues/1392)).
- Changes to helper function Split-FullSqlInstanceName
- Parameters and function name changed to use correct casing.
- Changes to helper function Get-SqlInstanceMajorVersion
- Parameters now match that of Connect-SQL ([issue #1392](https://github.com/dsccommunity/SqlServerDsc/issues/1392)).
- Changes to helper function Test-LoginEffectivePermissions
- Parameters now match that of Connect-SQL ([issue #1392](https://github.com/dsccommunity/SqlServerDsc/issues/1392)).
- Changes to helper function Test-AvailabilityReplicaSeedingModeAutomatic
- Parameters now match that of Connect-SQL ([issue #1392](https://github.com/dsccommunity/SqlServerDsc/issues/1392)).
- Changes to SqlServerSecureConnection
- Forced $Thumbprint to lowercase to fix [issue #1350](https://github.com/dsccommunity/SqlServerDsc/issues/1350).
- Add parameter SuppressRestart with default value false.
This allows users to suppress restarts after changes have been made.
Changes will not take effect until the service has been restarted.
- Changes to SqlSetup
- Correct minor style violation [issue #1387](https://github.com/dsccommunity/SqlServerDsc/issues/1387).
- Changes to SqlDatabase
- Get-TargetResource now correctly returns `$null` for the collation property
when the database does not exist ([issue #1395](https://github.com/dsccommunity/SqlServerDsc/issues/1395)).
- No longer enforces the collation property if the Collation parameter
is not part of the configuration ([issue #1396](https://github.com/dsccommunity/SqlServerDsc/issues/1396)).
- Updated the resource description in README.md.
- Fix examples to use `PsDscRunAsCredential` ([issue #760](https://github.com/dsccommunity/SqlServerDsc/issues/760)).
- Added integration tests ([issue #739](https://github.com/dsccommunity/SqlServerDsc/issues/739)).
- Updated unit tests to the latest template ([issue #1068](https://github.com/dsccommunity/SqlServerDsc/issues/1068)).
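The `Connect-SQL | Invoke-Query` piping and the `RedactText` parameter
mentioned above could be exercised roughly like this sketch. These are
internal helper functions of the module (so the SqlServerDsc.Common helper
module must be imported in the session); the server name and query are
placeholders, and the `-Database` parameter name is an assumption:

```powershell
# Connect once, then pipe the SMO server object into Invoke-Query.
$serverObject = Connect-SQL -ServerName 'sqltest.company.local' -InstanceName 'DSC'

$serverObject | Invoke-Query -Database 'master' -Query @'
CREATE LOGIN [DscDemoLogin] WITH PASSWORD = 'P@ssw0rd1'
'@ -RedactText @('P@ssw0rd1') -Verbose
# With -Verbose the executed query is written out, but any string listed
# in -RedactText (here the password) is redacted from the output.
```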
## [13.0.0.0] - 2019-06-26
### Changed
- Changes to SqlServerDsc
- Added SqlAgentAlert resource.
- Opt-in to the common test 'Common Test - Validation Localization'.
- Opt-in to the common test 'Common Test - Flagged Script Analyzer Rules'
([issue #1101](https://github.com/dsccommunity/SqlServerDsc/issues/1101)).
- Removed the helper functions `New-TerminatingError`, `New-WarningMessage`
and `New-VerboseMessage` in favor of the new
[localization helper functions](https://github.com/dsccommunity/DscResources/blob/master/StyleGuidelines.md#localization).
- Combine DscResource.LocalizationHelper and DscResource.Common into
SqlServerDsc.Common ([issue #1357](https://github.com/dsccommunity/SqlServerDsc/issues/1357)).
- Update Assert-TestEnvironment.ps1 to not error if strict mode is enabled
and there are no missing dependencies ([issue #1368](https://github.com/dsccommunity/SqlServerDsc/issues/1368)).
- Changes to SqlServerDsc.Common
- Added StatementTimeout to function 'Connect-SQL' with default 600 seconds (10mins).
- Added StatementTimeout to function 'Invoke-Query' with default 600 seconds (10mins)
([issue #1358](https://github.com/dsccommunity/SqlServerDsc/issues/1358)).
- Changes to helper function Connect-SQL
- The function now makes it clearer that when using the parameter
`SetupCredential` it impersonates that user, and that by default it does
not impersonate a user but uses the credential that the resource
is run as (for example the built-in credential parameter
`PsDscRunAsCredential`). [@kungfu71186](https://github.com/kungfu71186)
- Added parameter alias `-DatabaseCredential` for the parameter
`-SetupCredential`. [@kungfu71186](https://github.com/kungfu71186)
- Changes to SqlAG
- Added en-US localization.
- Changes to SqlAGReplica
- Added en-US localization.
- Improved verbose message output when creating an availability group replica,
removing an availability group replica, and joining the availability
group replica to the availability group.
- Changes to SqlAlwaysOnService
- Now outputs the correct verbose message when restarting the service.
- Changes to SqlServerMemory
- Now outputs the correct verbose messages when calculating the dynamic
memory, and when limiting maximum memory.
- Changes to SqlServerRole
- Now outputs the correct verbose message when the members of a role are
not in the desired state.
- Changes to SqlAgentOperator
- Fixed a minor issue when unable to connect to an instance. Instead
of showing a message saying that the connection failed, another unrelated
error message could have been shown, because of an error in the code.
- Fix typo in test it block.
- Changes to SqlDatabaseRole
- BREAKING CHANGE: Refactored to enable creation/deletion of the database role
itself as well as management of the role members. *Note that the resource no
longer adds database users.* ([issue #845](https://github.com/dsccommunity/SqlServerDsc/issues/845),
[issue #847](https://github.com/dsccommunity/SqlServerDsc/issues/847),
[issue #1252](https://github.com/dsccommunity/SqlServerDsc/issues/1252),
[issue #1339](https://github.com/dsccommunity/SqlServerDsc/issues/1339)).
[Paul Shamus @pshamus](https://github.com/pshamus)
- Changes to SqlSetup
- Add an Action type of 'Upgrade'. This will ask setup to do a version
upgrade where possible ([issue #1368](https://github.com/dsccommunity/SqlServerDsc/issues/1368)).
- Fix an error when testing for DQS installation ([issue #1368](https://github.com/dsccommunity/SqlServerDsc/issues/1368)).
- Changed the logic of how the default value of FailoverClusterGroupName is
set, since that was preventing the resource from being debugged
([issue #448](https://github.com/dsccommunity/SqlServerDsc/issues/448)).
- Added RSInstallMode parameter ([issue #1163](https://github.com/dsccommunity/SqlServerDsc/issues/1163)).
- Changes to SqlWindowsFirewall
- Where a version upgrade has changed paths for a database engine, the
existing firewall rule for that instance will be updated rather than
another one created ([issue #1368](https://github.com/dsccommunity/SqlServerDsc/issues/1368)).
Other firewall rules can be fixed to work in the same way later.
- Changes to SqlAGDatabase
- Added new parameter 'ReplaceExisting' with default false.
This allows forced restores when a database already exists on secondary.
- Added StatementTimeout to Invoke-Query to fix [issue #1358](https://github.com/dsccommunity/SqlServerDsc/issues/1358).
- Fix issue where calling Get would return an error because the database
name list may have been returned as a string instead of as a string array
([issue #1368](https://github.com/dsccommunity/SqlServerDsc/issues/1368)).
## [12.5.0.0] - 2019-05-15
### Changed
- Changes to SqlServerSecureConnection
- Updated README and added example for SqlServerSecureConnection,
instructing users to use the 'SYSTEM' service account instead of
'LocalSystem'.
- Changes to SqlScript
- Correctly passes the `$VerbosePreference` to the helper function
`Invoke-SqlScript` so that `PRINT` statements are output correctly
when verbose output is requested, e.g.
`Start-DscConfiguration -Verbose`.
- Added en-US localization ([issue #624](https://github.com/dsccommunity/SqlServerDsc/issues/624)).
- Added additional unit tests for code coverage.
- Changes to SqlScriptQuery
- Correctly passes the `$VerbosePreference` to the helper function
`Invoke-SqlScript` so that `PRINT` statements are output correctly
when verbose output is requested, e.g.
`Start-DscConfiguration -Verbose`.
- Added en-US localization.
- Added additional unit tests for code coverage.
- Changes to SqlSetup
- Concatenated Robocopy localization strings ([issue #694](https://github.com/dsccommunity/SqlServerDsc/issues/694)).
- Made the error message more descriptive when the Set-TargetResource
function calls the Test-TargetResource function to verify the desired
state.
- Changes to SqlWaitForAG
- Added en-US localization ([issue #625](https://github.com/dsccommunity/SqlServerDsc/issues/625)).
- Changes to SqlServerPermission
- Added en-US localization ([issue #619](https://github.com/dsccommunity/SqlServerDsc/issues/619)).
- Changes to SqlServerMemory
- Added en-US localization ([issue #617](https://github.com/dsccommunity/SqlServerDsc/issues/617)).
- The resource will no longer set the MinMemory value if it was provided
in a configuration that also set the `Ensure` parameter to 'Absent'
([issue #1329](https://github.com/dsccommunity/SqlServerDsc/issues/1329)).
- Refactored unit tests to simplify them and add slightly more code
coverage.
- Changes to SqlServerMaxDop
- Added en-US localization ([issue #616](https://github.com/dsccommunity/SqlServerDsc/issues/616)).
- Changes to SqlRS
- Reporting Services are restarted after changing settings, unless the
`$SuppressRestart` parameter is set ([issue #1331](https://github.com/dsccommunity/SqlServerDsc/issues/1331)).
`$SuppressRestart` will also prevent the Reporting Services restart after
initialization; see the sketch at the end of this section.
- Fixed one of the error handlers to use localization, and made the
error message more descriptive when the Set-TargetResource function
calls the Test-TargetResource function to verify the desired
state. *This was done prior to adding full en-US localization.*
- Fixed ([issue #1258](https://github.com/dsccommunity/SqlServerDsc/issues/1258)).
When initializing Reporting Services, there is no need to execute the
`InitializeReportServer` CIM method, since executing the `SetDatabaseConnection`
CIM method initializes Reporting Services.
- SqlRs can now initialize SSRS 2017 instances ([issue #864](https://github.com/dsccommunity/SqlServerDsc/issues/864)).
- Changes to SqlServerLogin
- Added en-US localization ([issue #615](https://github.com/dsccommunity/SqlServerDsc/issues/615)).
- Added unit tests to improve code coverage.
- Changes to SqlWindowsFirewall
- Added en-US localization ([issue #614](https://github.com/dsccommunity/SqlServerDsc/issues/614)).
- Changes to SqlServerEndpoint
- Added en-US localization ([issue #611](https://github.com/dsccommunity/SqlServerDsc/issues/611)).
- Changes to SqlServerEndpointPermission
- Added en-US localization ([issue #612](https://github.com/dsccommunity/SqlServerDsc/issues/612)).
- Changes to SqlServerEndpointState
- Added en-US localization ([issue #613](https://github.com/dsccommunity/SqlServerDsc/issues/613)).
- Changes to SqlDatabaseRole
- Added en-US localization ([issue #610](https://github.com/dsccommunity/SqlServerDsc/issues/610)).
- Changes to SqlDatabaseRecoveryModel
- Added en-US localization ([issue #609](https://github.com/dsccommunity/SqlServerDsc/issues/609)).
- Changes to SqlDatabasePermission
- Added en-US localization ([issue #608](https://github.com/dsccommunity/SqlServerDsc/issues/608)).
- Changes to SqlDatabaseOwner
- Added en-US localization ([issue #607](https://github.com/dsccommunity/SqlServerDsc/issues/607)).
- Changes to SqlDatabase
- Added en-US localization ([issue #606](https://github.com/dsccommunity/SqlServerDsc/issues/606)).
- Changes to SqlAGListener
- Added en-US localization ([issue #604](https://github.com/dsccommunity/SqlServerDsc/issues/604)).
- Changes to SqlAlwaysOnService
- Added en-US localization ([issue #603](https://github.com/dsccommunity/SqlServerDsc/issues/603)).
- Changes to SqlAlias
- Added en-US localization ([issue #602](https://github.com/dsccommunity/SqlServerDsc/issues/602)).
- Removed ShouldProcess for the code, since it has no purpose in a DSC
resource ([issue #242](https://github.com/dsccommunity/SqlServerDsc/issues/242)).
- Changes to SqlServerReplication
- Added en-US localization ([issue #620](https://github.com/dsccommunity/SqlServerDsc/issues/620)).
- Refactored Get-TargetResource slightly so it provides better verbose
messages.
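A minimal sketch of the `$SuppressRestart` behavior described above for
_SqlRS_; the instance names are placeholders, and the remaining parameters are
assumed from the resource's schema:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlRS 'ConfigureReportingServices'
        {
            InstanceName         = 'SSRS'
            DatabaseServerName   = 'localhost'
            DatabaseInstanceName = 'DSCSQLTEST'
            # Skip the restart that would otherwise follow changed settings
            # (and the restart after initialization).
            SuppressRestart      = $true
        }
    }
}
```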
## [12.4.0.0] - 2019-04-03
### Changed
- Changes to SqlServerDsc
- Added new resources.
- SqlRSSetup
- Added helper module DscResource.Common from the repository
DscResource.Template.
- Moved all helper functions from SqlServerDscHelper.psm1 to DscResource.Common.
- Renamed Test-SqlDscParameterState to Test-DscParameterState.
- New-TerminatingError error text for a missing localized message now matches
the output even if the "missing localized message" localized message is
also missing.
- Added helper module DscResource.LocalizationHelper from the repository
DscResource.Template, this replaces the helper module CommonResourceHelper.psm1.
- Cleaned up unit tests, mostly around loading cmdlet stubs and loading
class stubs, but also some tests that were using some odd variants.
- Fix all integration tests according to issue [PowerShell/DscResource.Template#14](https://github.com/dsccommunity/DscResource.Template/issues/14).
- Changes to SqlServerMemory
- Updated the CIM class to Win32_ComputerSystem (instead of Win32_PhysicalMemory)
because the memory size was not being detected correctly on Azure VMs
([issue #914](https://github.com/dsccommunity/SqlServerDsc/issues/914)).
- Changes to SqlSetup
- Split integration tests into two jobs, one for running integration tests
for SQL Server 2016 and another for running integration tests for
SQL Server 2017 ([issue #858](https://github.com/dsccommunity/SqlServerDsc/issues/858)).
- Localized messages for Master Data Services no longer start and end with
single quote.
- When installing features a verbose message is written if a feature is found
to already be installed. It no longer quietly removes the feature from the
`/FEATURES` argument.
- Cleaned up a bit in the tests, removed excessive piping.
- Fixed minor typo in examples.
- A new optional parameter `FeatureFlag` was added to control
breaking changes. Functionality added under a feature flag can be
toggled on or off, and could be changed later to be the default.
This way we can also make more of the new functionalities the default
in the same breaking change release ([issue #1105](https://github.com/dsccommunity/SqlServerDsc/issues/1105)).
- Added a new way of detecting if the shared feature CONN (Client Tools
Connectivity, and SQL Client Connectivity SDK), BC (Client Tools
Backwards Compatibility), and SDK (Client Tools SDK) is installed or
not. The new functionality is used when the parameter `FeatureFlag`
is set to `'DetectionSharedFeatures'` ([issue #1105](https://github.com/dsccommunity/SqlServerDsc/issues/1105)).
- Added a new helper function `Get-InstalledSharedFeatures` to move out
some of the code from the `Get-TargetResource` to make unit testing
easier and faster.
- Changed the logic of 'Build the argument string to be passed to setup' to
not quote the value if root directory is specified
([issue #1254](https://github.com/dsccommunity/SqlServerDsc/issues/1254)).
- Moved some resource specific helper functions to the new helper module
DscResource.Common so they can be shared with the new resource SqlRSSetup.
- Improved verbose messages in Test-TargetResource function to more
clearly tell if features are already installed or not.
- Refactored unit tests for the functions Test-TargetResource and
Set-TargetResource to improve testing speed.
- Modified the Test-TargetResource and Set-TargetResource to not be
case-sensitive when comparing feature names. *This was handled
correctly in real-world scenarios, but failed when running the unit
tests (and testing casing).*
- Changes to SqlAGDatabase
- Fix MatchDatabaseOwner to check for CONTROL SERVER, IMPERSONATE LOGIN, or
CONTROL LOGIN permission in addition to IMPERSONATE ANY LOGIN.
- Update and fix MatchDatabaseOwner help text.
- Changes to SqlAG
- Updated documentation on the behaviour of defaults as they only apply when
creating a group.
- Changes to SqlAGReplica
- AvailabilityMode, BackupPriority, and FailoverMode defaults only apply when
creating a replica not when making changes to an existing replica. Explicit
parameters will still change existing replicas ([issue #1244](https://github.com/dsccommunity/SqlServerDsc/issues/1244)).
- ReadOnlyRoutingList now gets updated without throwing an error on the first
run ([issue #518](https://github.com/dsccommunity/SqlServerDsc/issues/518)).
- Test-Resource fixed to report whether ReadOnlyRoutingList desired state
has been reached correctly ([issue #1305](https://github.com/dsccommunity/SqlServerDsc/issues/1305)).
- Changes to SqlDatabaseDefaultLocation
- The Test-TargetResource function no longer fails on the second test run
when the backup file path was changed and the path ended with
a backslash ([issue #1307](https://github.com/dsccommunity/SqlServerDsc/issues/1307)).
## [12.3.0.0] - 2019-02-20
### Changed
- Changes to SqlServerDsc
- Reverting the change that was made as part of the
[issue #1260](https://github.com/dsccommunity/SqlServerDsc/issues/1260)
in the previous release, as it only mitigated the issue, it did not
solve the issue.
- Removed the container testing since that broke the integration tests,
possibly due to using an excessive amount of memory on the AppVeyor build
worker. This will make the unit tests take a bit longer to run
([issue #1260](https://github.com/dsccommunity/SqlServerDsc/issues/1260)).
- The unit tests and the integration tests are now run in two separate
build workers in AppVeyor. One build worker runs the integration tests,
while a second build worker runs the unit tests. The build workers run
in parallel on paid accounts, but sequentially on free accounts
([issue #1260](https://github.com/dsccommunity/SqlServerDsc/issues/1260)).
- Cleaned up error handling in some of the integration tests that was
part of a workaround for a bug in Pester. The bug is resolved, and
the error handling is now again built into Pester.
- Speeding up the AppVeyor tests by splitting the common tests in a
separate build job.
- Updated the appveyor.yml to have the correct build step, and also
correctly run the build step only in one of the jobs.
- Update integration tests to use the new integration test template.
- Added SqlAgentOperator resource.
- Changes to SqlServiceAccount
- Fixed Get-ServiceObject when searching for the Integration Services service.
Unlike the rest of the SQL Server services, the Integration Services service
cannot be instanced; however, you can have multiple versions installed.
Get-ServiceObject would return the correct service name that you
are looking for, but it appends the version number at the end. Added
the parameter VersionNumber so the search returns the correct
service name.
- Added code to allow for using Managed Service Accounts.
- Now the correct service type string value is returned by the function
`Get-TargetResource`. Previously one value was passed in as a parameter
(e.g. `DatabaseEngine`), but a different string value was returned
(e.g. `SqlServer`). Now `Get-TargetResource` returns the same values
that can be passed as values in the parameter `ServiceType`
([issue #981](https://github.com/dsccommunity/SqlServerDsc/issues/981)).
- Changes to SqlServerLogin
- Fixed an issue in Test-TargetResource to validate the password on disabled
accounts ([issue #915](https://github.com/dsccommunity/SqlServerDsc/issues/915)).
- Now when adding a login of type SqlLogin, and the SQL Server login mode
is set to `'Integrated'`, an error is correctly thrown
([issue #1179](https://github.com/dsccommunity/SqlServerDsc/issues/1179)).
- Changes to SqlSetup
- Updated the integration test to stop the named instance while installing
the other instances to mitigate
[issue #1260](https://github.com/dsccommunity/SqlServerDsc/issues/1260).
- Add parameters to configure the Tempdb files during the installation of
the instance; see the sketch at the end of this section. The new parameters
are SqlTempdbFileCount, SqlTempdbFileSize,
SqlTempdbFileGrowth, SqlTempdbLogFileSize and SqlTempdbLogFileGrowth
([issue #1167](https://github.com/dsccommunity/SqlServerDsc/issues/1167)).
- Changes to SqlServerEndpoint
- Add the optional parameter Owner. The default owner remains the login used
for the creation of the endpoint
([issue #1251](https://github.com/dsccommunity/SqlServerDsc/issues/1251)).
[Maxime Daniou (@mdaniou)](https://github.com/mdaniou)
- Add integration tests
([issue #744](https://github.com/dsccommunity/SqlServerDsc/issues/744)).
[Maxime Daniou (@mdaniou)](https://github.com/mdaniou)
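The new tempdb parameters might be used as in the following sketch; a real
_SqlSetup_ configuration needs more parameters (credentials and so on), and
all values here are placeholders:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlSetup 'InstallInstanceWithTempdbSettings'
        {
            InstanceName           = 'DSCSQLTEST'
            Features               = 'SQLENGINE'
            SourcePath             = 'C:\InstallMedia\SQL2017'
            # New tempdb parameters; the values are placeholders.
            SqlTempdbFileCount     = 4
            SqlTempdbFileSize      = 1024
            SqlTempdbFileGrowth    = 512
            SqlTempdbLogFileSize   = 128
            SqlTempdbLogFileGrowth = 64
        }
    }
}
```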
## [12.2.0.0] - 2019-01-09
### Changed
- Changes to SqlServerDsc
- During testing in AppVeyor the Build Worker is restarted in the install
step to make sure there are no residual changes left from a previous SQL
Server install on the Build Worker done by the AppVeyor Team
([issue #1260](https://github.com/dsccommunity/SqlServerDsc/issues/1260)).
- Code cleanup: Change parameter names of Connect-SQL to align with resources.
- Updated README.md in the Examples folder.
- Added a link to the new xADObjectPermissionEntry examples in
ActiveDirectory, fixed a broken link and a typo.
[Adam Rush (@adamrushuk)](https://github.com/adamrushuk)
- Change to SqlServerLogin so it doesn't check properties for absent logins.
- Fix for [issue #1096](https://github.com/dsccommunity/SqlServerDsc/issues/1096).
## [12.1.0.0] - 2018-10-24
### Changed
- Changes to SqlServerDsc
- Add support for validating the code with the DSC ResourceKit
Script Analyzer rules, both in Visual Studio Code and directly using
`Invoke-ScriptAnalyzer`.
- Opt-in for common test "Common Tests - Validate Markdown Links".
- Updated broken links in `\README.md` and in `\Examples\README.md`
- Opt-in for common test 'Common Tests - Relative Path Length'.
- Updated the Installation section in the README.md.
- Updated the Contributing section in the README.md after
[Style Guideline and Best Practices guidelines](https://github.com/dsccommunity/DscResources/blob/master/StyleGuidelines.md)
has merged into one document.
- To speed up testing in AppVeyor, unit tests are now run in two containers.
- Adding the PowerShell script `Assert-TestEnvironment.ps1` which
must be run prior to running any unit tests locally with
`Invoke-Pester`.
Read more in the specific contributing guidelines, under the section
[Unit Tests](https://github.com/dsccommunity/SqlServerDsc/blob/dev/CONTRIBUTING.md#unit-tests).
- Changes to SqlServerDscHelper
- Fix style guideline lint errors.
- Changes to Connect-SQL
- Adding verbose message in Connect-SQL so it
now shows the username that is connecting.
- Changes to Import-SQLPS
- Fixed so that when importing SQLPS it imports
using the path (and not the .psd1 file).
- Fixed so that the verbose message correctly
shows the name, version and path when importing
the module SQLPS (it did show correctly for the
SqlServer module).
- Changes to SqlAg, SqlAGDatabase, and SqlAGReplica examples
- Included configuration for SqlAlwaysOnService to enable
HADR on each node to avoid confusion
([issue #1182](https://github.com/dsccommunity/SqlServerDsc/issues/1182)).
- Changes to SqlServerDatabaseMail
- Minor update to Ensure parameter description in the README.md.
- Changes to Write-ModuleStubFile.ps1
- Create aliases for cmdlets in the stubbed module which have aliases
([issue #1224](https://github.com/dsccommunity/SqlServerDsc/issues/1224)).
[Dan Reist (@randomnote1)](https://github.com/randomnote1)
- Use a string builder to build the function stubs.
- Fixed formatting issues for the function to work with modules other
than SqlServer.
- New DSC resource SqlServerSecureConnection
- New resource to configure a SQL Server instance for encrypted SQL
connections.
- Changes to SqlAlwaysOnService
- Updated integration tests to use NetworkingDsc
([issue #1129](https://github.com/dsccommunity/SqlServerDsc/issues/1129)).
- Changes to SqlServiceAccount
- Fix unit tests that didn't mock some of the calls. They no longer fail
when a SQL Server installation is not present on the node running the
unit test ([issue #983](https://github.com/dsccommunity/SqlServerDsc/issues/983)).
## [12.0.0.0] - 2018-09-05
### Changed
- Changes to SqlServerDatabaseMail
- DisplayName is now properly treated as the display name
for the originating email address ([issue #1200](https://github.com/dsccommunity/SqlServerDsc/issues/1200)).
[Nick Reilingh (@NReilingh)](https://github.com/NReilingh)
- DisplayName property now defaults to email address instead of server name.
- Minor improvements to documentation.
- Changes to SqlAGDatabase
- Corrected reference to "PsDscRunAsAccount" in documentation
([issue #1199](https://github.com/dsccommunity/SqlServerDsc/issues/1199)).
[Nick Reilingh (@NReilingh)](https://github.com/NReilingh)
- Changes to SqlDatabaseOwner
- BREAKING CHANGE: Support multiple instances on the same node.
The parameter InstanceName is now Key and cannot be omitted
([issue #1197](https://github.com/dsccommunity/SqlServerDsc/issues/1197)).
- Changes to SqlSetup
- Added new parameters to allow defining the startup types for the Sql Engine
service, the Agent service, the Analysis service and the Integration Service;
see the sketch below. The new optional parameters are respectively
SqlSvcStartupType, AgtSvcStartupType, AsSvcStartupType, IsSvcStartupType
and RsSvcStartupType ([issue #1165](https://github.com/dsccommunity/SqlServerDsc/issues/1165)).
[Maxime Daniou (@mdaniou)](https://github.com/mdaniou)
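A minimal sketch using the new startup type parameters; a real _SqlSetup_
configuration needs more parameters, and the values (including the
`'Automatic'`/`'Manual'` strings) are assumptions:

```powershell
Configuration Example
{
    Import-DscResource -ModuleName 'SqlServerDsc'

    node localhost
    {
        SqlSetup 'InstallWithServiceStartupTypes'
        {
            InstanceName      = 'MSSQLSERVER'
            Features          = 'SQLENGINE,AS'
            SourcePath        = 'C:\InstallMedia\SQL2017'
            # New optional startup type parameters from this release.
            SqlSvcStartupType = 'Automatic'
            AgtSvcStartupType = 'Automatic'
            AsSvcStartupType  = 'Manual'
        }
    }
}
```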
## [11.4.0.0] - 2018-07-25
### Changed
- Changes to SqlServerDsc
- Updated helper function Restart-SqlService to have two new optional parameters
`SkipClusterCheck` and `SkipWaitForOnline`. This was to support more aspects
of the resource SqlServerNetwork.
- Updated helper function `Import-SQLPSModule`
- To only import the module if the
module does not already exist in the session.
- To always import the latest version of 'SqlServer' or 'SQLPS' module, if
more than one version exist on the target node. It will still prefer to
use 'SqlServer' module.
- Updated all the examples and integration tests to not use
`PSDscAllowPlainTextPassword`, so examples using credentials or
passwords by default are secure.
- Changes to SqlAlwaysOnService
- Integration tests was updated to handle new IPv6 addresses on the AppVeyor
build worker ([issue #1155](https://github.com/dsccommunity/SqlServerDsc/issues/1155)).
- Changes to SqlServerNetwork
- Refactor SqlServerNetwork to not load assembly from GAC ([issue #1151](https://github.com/dsccommunity/SqlServerDsc/issues/1151)).
- The resource now supports restarting the SQL Server service when both
enabling and disabling the protocol.
- Added integration tests for this resource
([issue #751](https://github.com/dsccommunity/SqlServerDsc/issues/751)).
- Changes to SqlAG
- Removed excess `Import-SQLPSModule` call.
- Changes to SqlSetup
- Now after a successful install the "SQL PowerShell module" is reevaluated and
forced to be reimported into the session. This is to support that a newer
version of SQL Server was installed side-by-side so that the SQLPS module
should be used instead.
## [11.3.0.0] - 2018-06-13
### Changed
- Changes to SqlServerDsc
- Moved decoration for integration test to resolve a breaking change in
DscResource.Tests.
- Activated the GitHub App Stale on the GitHub repository.
- Added a CODE\_OF\_CONDUCT.md with the same content as in the README.md
([issue #939](https://github.com/dsccommunity/SqlServerDsc/issues/939)).
- New resources:
- Added SqlScriptQueryResource. [Chase Wilson (@chasewilson)](https://github.com/chasewilson)
- Fix for issue #779 [Paul Kelly (@prkelly)](https://github.com/prkelly)
## [11.2.0.0] - 2018-05-02
### Changed
- Changes to SqlServerDsc
- Added new test helper functions in the CommonTestHelpers module. These are used
by the integration tests.
- **New-IntegrationLoopbackAdapter:** Installs the PowerShell module
'LoopbackAdapter' from PowerShell Gallery and creates a new network
loopback adapter.
- **Remove-IntegrationLoopbackAdapter:** Removes a network loopback adapter.
- **Get-NetIPAddressNetwork:** Returns the IP network address from an IPv4 address
and prefix length.
- Enabled PSSA rule violations to fail build in the CI environment.
- Renamed SqlServerDsc.psd1 to be consistent
([issue #1116](https://github.com/dsccommunity/SqlServerDsc/issues/1116)).
[Glenn Sarti (@glennsarti)](https://github.com/glennsarti)
- Changes to Unit Tests
- Updated
the following resources unit test template to version 1.2.1
- SqlWaitForAG ([issue #1088](https://github.com/dsccommunity/SqlServerDsc/issues/1088)).
[Michael Fyffe (@TraGicCode)](https://github.com/TraGicCode)
- Changes to SqlAlwaysOnService
- Updated the integration tests to use a loopback adapter to be less intrusive
in the build worker environment.
- Minor code cleanup in integration test, fixed the scope on variable.
- Changes to SqlSetup
- Updated the integration tests to stop some services after each integration test.
This is to save memory on the AppVeyor build worker.
- Updated the integration tests to use a SQL Server 2016 Service Pack 1.
- Fixed Script Analyzer rule error.
- Changes to SqlRS
- Updated the integration tests to stop the Reporting Services service after
the integration test. This is to save memory on the AppVeyor build worker.
- The helper function `Restart-ReportingServicesService` should no longer timeout
when restarting the service ([issue #1114](https://github.com/dsccommunity/SqlServerDsc/issues/1114)).
- Changes to SqlServiceAccount
- Updated the integration tests to stop some services after each integration test.
This is to save memory on the AppVeyor build worker.
- Changes to SqlServerDatabaseMail
- Fixed Script Analyzer rule error.
## [11.1.0.0] - 2018-03-21
### Changed
- Changes to SqlServerDsc
- Updated the PULL\_REQUEST\_TEMPLATE with an improved task list and modified
some text to be clearer ([issue #973](https://github.com/dsccommunity/SqlServerDsc/issues/973)).
- Updated the ISSUE_TEMPLATE to hopefully be more intuitive and easier to use.
- Added information to ISSUE_TEMPLATE that issues must be reproducible in
SqlServerDsc resource module (if running the older xSQLServer resource module)
([issue #1036](https://github.com/dsccommunity/SqlServerDsc/issues/1036)).
- Updated ISSUE_TEMPLATE.md with a note about sensitive information ([issue #1092](https://github.com/dsccommunity/SqlServerDsc/issues/1092)).
- Changes to SqlServerLogin
  - [Claudio Spizzi (@claudiospizzi)](https://github.com/claudiospizzi): Fixed
    password test failures for native SQL users ([issue #1048](https://github.com/dsccommunity/SqlServerDsc/issues/1048)).
- Changes to SqlSetup
  - [Michael Fyffe (@TraGicCode)](https://github.com/TraGicCode): Clarified usage
    of 'SecurityMode' and added parameter validation for the only two
    supported values ([issue #1010](https://github.com/dsccommunity/SqlServerDsc/issues/1010)).
  - Now accounts containing '$' can be used for installing SQL Server. Note that
    if the account name ends with '$' it is considered a Managed Service
    Account ([issue #1055](https://github.com/dsccommunity/SqlServerDsc/issues/1055)).
- Changes to Integration Tests
- [Michael Fyffe (@TraGicCode)](https://github.com/TraGicCode): Replace xStorage
dsc resource module with StorageDsc ([issue #1038](https://github.com/dsccommunity/SqlServerDsc/issues/1038)).
- Changes to Unit Tests
- [Michael Fyffe (@TraGicCode)](https://github.com/TraGicCode): Updated
    the following resources' unit test templates to version 1.2.1
- SqlAlias ([issue #999](https://github.com/dsccommunity/SqlServerDsc/issues/999)).
- SqlWindowsFirewall ([issue #1089](https://github.com/dsccommunity/SqlServerDsc/issues/1089)).
## [11.0.0.0] - 2018-02-07
### Changed
- Changes to SqlServerDsc
  - BREAKING CHANGE: Resource SqlRSSecureConnectionLevel was removed
([issue #990](https://github.com/dsccommunity/SqlServerDsc/issues/990)).
The parameter that was set using that resource has been merged into resource
SqlRS as the parameter UseSsl. The UseSsl parameter is of type boolean. This
    change was made because, beginning with SQL Server 2008 R2, this value is an on/off
switch. Read more in the article [ConfigurationSetting Method - SetSecureConnectionLevel](https://docs.microsoft.com/en-us/sql/reporting-services/wmi-provider-library-reference/configurationsetting-method-setsecureconnectionlevel).
  - Updated so that named parameters are used for the New-Object cmdlet. This was
done to follow the style guideline.
- Updated manifest and license to reflect the new year
([issue #965](https://github.com/dsccommunity/SqlServerDsc/issues/965)).
- Added a README.md under Tests\Integration to help contributors to write
integration tests.
- Added 'Integration tests' section in the CONTRIBUTING.md.
- The complete examples were removed. They were no longer accurate and some
referenced resources that no longer exist. Accurate examples can be found
in each specific resource example folder. Examples for installing Failover Cluster
can be found in the resource examples folders in the xFailOverCluster
resource module ([issue #462](https://github.com/dsccommunity/SqlServerDsc/issues/462)).
- A README.md was created under the Examples folder to be used as reference how
to install certain scenarios ([issue #462](https://github.com/dsccommunity/SqlServerDsc/issues/462)).
- Removed the local specific common test for compiling examples in this repository
and instead opted-in for the common test in the 'DscResource.Tests' repository
([issue #669](https://github.com/dsccommunity/SqlServerDsc/issues/669)).
- Added new resource SqlServerDatabaseMail for configuring SQL Server
Database Mail ([issue #155](https://github.com/dsccommunity/SqlServerDsc/issues/155)).
- Updated the helper function Test-SQLDscParameterState to handle the
data type UInt16.
- Fixed typo in SqlServerDscCommon.Tests.
- Updated README.md with known issue section for each resource.
- Resources that did not have a description in the README.md now has one.
- Resources that missed links to the examples in the README.md now has those
links.
- Style changes in all examples, removing type [System.Management.Automation.Credential()]
from credential parameters ([issue #1003](https://github.com/dsccommunity/SqlServerDsc/issues/1003)),
and renamed the credential parameter so it is not using abbreviation.
- Updated the security token for AppVeyor status badge in README.md. When we
renamed the repository the security token was changed
([issue #1012](https://github.com/dsccommunity/SqlServerDsc/issues/1012)).
- Now the helper function Restart-SqlService, after restarting the SQL Server
service, does not return until it can connect to the SQL Server instance, and
the instance returns status 'Online' ([issue #1008](https://github.com/dsccommunity/SqlServerDsc/issues/1008)).
If it fails to connect within the timeout period (defaults to 120 seconds) it
throws an error.
- Fixed typo in comment-base help for helper function Test-AvailabilityReplicaSeedingModeAutomatic.
- Style cleanup in helper functions and tests.
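
  A minimal sketch of the wait pattern described above for Restart-SqlService
  (illustrative only, not the module's actual implementation; it assumes the SMO
  assemblies are already loaded, for example via the SqlServer module, and the
  instance name is a placeholder):

  ```powershell
  $timeoutSeconds = 120
  $startTime = Get-Date

  do
  {
      # Re-create the server object on each attempt to get a fresh status.
      $serverObject = New-Object -TypeName 'Microsoft.SqlServer.Management.Smo.Server' -ArgumentList 'SQL01\MSSQLSERVER'

      if ($serverObject.Status -eq 'Online')
      {
          break
      }

      Start-Sleep -Seconds 2
  }
  while (((Get-Date) - $startTime).TotalSeconds -lt $timeoutSeconds)

  if ($serverObject.Status -ne 'Online')
  {
      throw "Failed to connect to the SQL Server instance within $timeoutSeconds seconds."
  }
  ```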
- Changes to SqlAG
- Fixed typos in tests.
- Style cleanup in code and tests.
- Changes to SqlAGDatabase
- Style cleanup in code and tests.
- Changes to SqlAGListener
- Fixed typo in comment-based help.
- Style cleanup in code and tests.
- Changes to SqlAGReplica
- Minor code style cleanup. Removed unused variable and instead piped the cmdlet
Join-SqlAvailabilityGroup to Out-Null.
- Fixed minor typos in comment-based help.
- Fixed minor typos in comment.
- Style cleanup in code and tests.
- Updated description for parameter Name in README.md and in comment-based help
([issue #1034](https://github.com/dsccommunity/SqlServerDsc/issues/1034)).
- Changes to SqlAlias
  - Fixed issue where an exception was thrown if registry keys did not exist
([issue #949](https://github.com/dsccommunity/SqlServerDsc/issues/949)).
- Style cleanup in tests.
- Changes to SqlAlwaysOnService
- Refactor integration tests slightly to improve run time performance
([issue #1001](https://github.com/dsccommunity/SqlServerDsc/issues/1001)).
- Style cleanup in code and tests.
- Changes to SqlDatabase
- Fix minor Script Analyzer warning.
- Changes to SqlDatabaseDefaultLocation
- Refactor integration tests slightly to improve run time performance
([issue #1001](https://github.com/dsccommunity/SqlServerDsc/issues/1001)).
- Minor style cleanup of code in tests.
- Changes to SqlDatabaseRole
- Style cleanup in tests.
- Changes to SqlRS
- Replaced Get-WmiObject with Get-CimInstance to fix Script Analyzer warnings
([issue #264](https://github.com/dsccommunity/SqlServerDsc/issues/264)).
- Refactored the resource to use Invoke-CimMethod.
- Added parameter UseSsl which when set to $true forces connections to the
Reporting Services to use SSL when connecting ([issue #990](https://github.com/dsccommunity/SqlServerDsc/issues/990)).
- Added complete example for SqlRS (based on the integration tests)
([issue #634](https://github.com/dsccommunity/SqlServerDsc/issues/634)).
- Refactor integration tests slightly to improve run time performance
([issue #1001](https://github.com/dsccommunity/SqlServerDsc/issues/1001)).
- Style cleanup in code and tests.
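
  For example, a configuration that forces SSL for Reporting Services
  connections could look like the following sketch (the server, instance and
  database values are placeholders):

  ```powershell
  Configuration ReportingServicesWithSsl
  {
      Import-DscResource -ModuleName 'SqlServerDsc'

      Node localhost
      {
          SqlRS 'ConfigureReportingServices'
          {
              InstanceName         = 'SSRS'
              DatabaseServerName   = 'SQL01'
              DatabaseInstanceName = 'MSSQLSERVER'
              UseSsl               = $true   # force SSL when connecting to Reporting Services
          }
      }
  }
  ```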
- Changes to SqlScript
- Style cleanup in tests.
- Updated examples.
- Added integration tests.
- Fixed minor typos in comment-based help.
- Added new example based on integration test.
- Changes to SqlServerConfiguration
- Fixed minor typos in comment-based help.
  - Now the verbose message says what option is changing and to what value
([issue #1014](https://github.com/dsccommunity/SqlServerDsc/issues/1014)).
- Changed the RestartTimeout parameter from type SInt32 to type UInt32.
- Added localization ([issue #605](https://github.com/dsccommunity/SqlServerDsc/issues/605)).
- Style cleanup in code and tests.
- Changes to SqlServerEndpoint
- Updated README.md with links to the examples
([issue #504](https://github.com/dsccommunity/SqlServerDsc/issues/504)).
- Style cleanup in tests.
- Changes to SqlServerLogin
- Added integration tests ([issue #748](https://github.com/dsccommunity/SqlServerDsc/issues/748)).
- Minor code style cleanup.
- Removed unused variable and instead piped the helper function Connect-SQL to
Out-Null.
- Style cleanup in tests.
- Changes to SqlServerMaxDop
- Minor style changes in the helper function Get-SqlDscDynamicMaxDop.
- Changes to SqlServerMemory
- Style cleanup in code and tests.
- Changes to SqlServerPermission
- Fixed minor typos in comment-based help.
- Style cleanup in code.
- Changes to SqlServerReplication
- Fixed minor typos in verbose messages.
- Style cleanup in tests.
- Changes to SqlServerNetwork
- Added sysadmin account parameter usage to the examples.
- Changes to SqlServerReplication
- Fix Script Analyzer warning ([issue #263](https://github.com/dsccommunity/SqlServerDsc/issues/263)).
- Changes to SqlServerRole
- Added localization ([issue #621](https://github.com/dsccommunity/SqlServerDsc/issues/621)).
- Added integration tests ([issue #756](https://github.com/dsccommunity/SqlServerDsc/issues/756)).
- Updated example to add two server roles in the same configuration.
- Style cleanup in tests.
- Changes to SqlServiceAccount
- Default services are now properly detected
([issue #930](https://github.com/dsccommunity/SqlServerDsc/issues/930)).
- Made the description of parameter RestartService more descriptive
([issue #960](https://github.com/dsccommunity/SqlServerDsc/issues/960)).
- Added a read-only parameter ServiceAccountName so that the service account
name is correctly returned as a string ([issue #982](https://github.com/dsccommunity/SqlServerDsc/issues/982)).
- Added integration tests ([issue #980](https://github.com/dsccommunity/SqlServerDsc/issues/980)).
- The timing issue that the resource returned before SQL Server service was
actually restarted has been solved by a change in the helper function
Restart-SqlService ([issue #1008](https://github.com/dsccommunity/SqlServerDsc/issues/1008)).
Now Restart-SqlService waits for the instance to return status 'Online' or
throws an error saying it failed to connect within the timeout period.
- Style cleanup in code and tests.
- Changes to SqlSetup
- Added parameter `ASServerMode` to support installing Analysis Services in
Multidimensional mode, Tabular mode and PowerPivot mode
([issue #388](https://github.com/dsccommunity/SqlServerDsc/issues/388)).
- Added integration tests for testing Analysis Services Multidimensional mode
and Tabular mode.
- Cleaned up integration tests.
- Added integration tests for installing a default instance of Database Engine.
- Refactor integration tests slightly to improve run time performance
([issue #1001](https://github.com/dsccommunity/SqlServerDsc/issues/1001)).
- Added PSSA rule 'PSUseDeclaredVarsMoreThanAssignments' override in the
function Set-TargetResource for the variable $global:DSCMachineStatus.
- Style cleanup in code and tests.
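
  A sketch of installing Analysis Services in Tabular mode using the new
  ASServerMode parameter (the source path and account names are placeholders):

  ```powershell
  Configuration InstallTabularAnalysisServices
  {
      Import-DscResource -ModuleName 'SqlServerDsc'

      Node localhost
      {
          SqlSetup 'InstallAnalysisServices'
          {
              InstanceName       = 'AS'
              Features           = 'AS'
              ASServerMode       = 'Tabular'   # or 'Multidimensional' or 'PowerPivot'
              ASSysAdminAccounts = @('COMPANY\SQL-Administrators')
              SourcePath         = 'C:\InstallMedia\SQLServer2016'
          }
      }
  }
  ```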
- Changes to SqlWaitForAG
- Style cleanup in code.
- Changes to SqlWindowsFirewall
- Fixed minor typos in comment-based help.
- Style cleanup in code.
## [10.0.0.0] - 2017-12-14
### Changed
- BREAKING CHANGE: Resource module has been renamed to SqlServerDsc
([issue #916](https://github.com/dsccommunity/SqlServerDsc/issues/916)).
- BREAKING CHANGE: Significant rename to reduce length of resource names
- See [issue #851](https://github.com/dsccommunity/SqlServerDsc/issues/851)
for a complete table mapping rename changes.
- Impact to all resources.
- Changes to CONTRIBUTING.md
- Added details to the naming convention used in SqlServerDsc.
- Changes to SqlServerDsc
- The examples in the root of the Examples folder are obsolete. A note was
added to the comment-based help in each example stating it is obsolete.
This is a temporary measure until they are replaced
([issue #904](https://github.com/dsccommunity/SqlServerDsc/issues/904)).
- Added new common test (regression test) for validating the long path
issue for compiling resources in Azure Automation.
- Fix resources in alphabetical order in README.md ([issue #908](https://github.com/dsccommunity/SqlServerDsc/issues/908)).
- Changes to SqlAG
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- BREAKING CHANGE: The read-only property SQLServerNetName was removed in favor
of EndpointHostName ([issue #924](https://github.com/dsccommunity/SqlServerDsc/issues/924)).
Get-TargetResource will now return the value of property [NetName](https://technet.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.server.netname(v=sql.105).aspx)
for the property EndpointHostName.
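
  After this rename the resource is used with the new parameter names, as in
  this sketch (the values are placeholders):

  ```powershell
  Configuration AvailabilityGroupExample
  {
      Import-DscResource -ModuleName 'SqlServerDsc'

      Node localhost
      {
          SqlAG 'AddAvailabilityGroup'
          {
              Ensure       = 'Present'
              Name         = 'AG-01'
              ServerName   = 'SQL01'        # previously SQLServer
              InstanceName = 'MSSQLSERVER'  # previously SQLInstanceName
          }
      }
  }
  ```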
- Changes to SqlAGDatabase
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changed the Get-MatchingDatabaseNames function to be case insensitive when
matching database names ([issue #912](https://github.com/dsccommunity/SqlServerDsc/issues/912)).
- Changes to SqlAGListener
- BREAKING CHANGE: Parameter NodeName has been renamed to ServerName
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlAGReplica
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- BREAKING CHANGE: Parameters PrimaryReplicaSQLServer and PrimaryReplicaSQLInstanceName
    have been renamed to PrimaryReplicaServerName and PrimaryReplicaInstanceName
respectively ([issue #922](https://github.com/dsccommunity/SqlServerDsc/issues/922)).
- BREAKING CHANGE: The read-only property SQLServerNetName was removed in favor
of EndpointHostName ([issue #924](https://github.com/dsccommunity/SqlServerDsc/issues/924)).
Get-TargetResource will now return the value of property [NetName](https://technet.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.server.netname(v=sql.105).aspx)
for the property EndpointHostName.
- Changes to SqlAlwaysOnService
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlDatabase
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlDatabaseDefaultLocation
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlDatabaseOwner
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlDatabasePermission
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlDatabaseRecoveryModel
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlDatabaseRole
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlRS
  - BREAKING CHANGE: Parameters RSSQLServer and RSSQLInstanceName have been renamed
to DatabaseServerName and DatabaseInstanceName respectively
([issue #923](https://github.com/dsccommunity/SqlServerDsc/issues/923)).
- Changes to SqlServerConfiguration
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerEndpoint
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerEndpointPermission
- BREAKING CHANGE: Parameter NodeName has been renamed to ServerName
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
  - Now the example files have shorter names so that resources will not fail
to compile in Azure Automation ([issue #934](https://github.com/dsccommunity/SqlServerDsc/issues/934)).
- Changes to SqlServerEndpointState
- BREAKING CHANGE: Parameter NodeName has been renamed to ServerName
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerLogin
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerMaxDop
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerMemory
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerNetwork
  - BREAKING CHANGE: Parameter SQLServer has been renamed to ServerName
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerPermission
- BREAKING CHANGE: Parameter NodeName has been renamed to ServerName
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerRole
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
- Changes to SqlServerServiceAccount
  - BREAKING CHANGE: Parameters SQLServer and SQLInstanceName have been renamed
to ServerName and InstanceName respectively
([issue #308](https://github.com/dsccommunity/SqlServerDsc/issues/308)).
## [9.0.0.0] - 2017-11-15
### Changed
- Changes to xSQLServer
- Updated Pester syntax to v4
  - Fixed broken links to issues in the CHANGELOG.md.
- Changes to xSQLServerDatabase
- Added parameter to specify collation for a database to be different from server
collation ([issue #767](https://github.com/dsccommunity/SqlServerDsc/issues/767)).
  - Fixed unit tests for Get-TargetResource to ensure return values are tested
    correctly ([issue #849](https://github.com/dsccommunity/SqlServerDsc/issues/849)).
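
  A sketch of the new collation parameter in use (the database name and
  collation are examples):

  ```powershell
  Configuration DatabaseWithCustomCollation
  {
      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerDatabase 'CreateDatabase'
          {
              Ensure          = 'Present'
              Name            = 'AdventureWorks'
              Collation       = 'Finnish_Swedish_CI_AS'   # may differ from the server collation
              SQLServer       = 'SQL01'
              SQLInstanceName = 'MSSQLSERVER'
          }
      }
  }
  ```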
- Changes to xSQLServerAlwaysOnAvailabilityGroup
- Refactored the unit tests to allow them to be more user friendly and to test
additional SQLServer variations.
- Each test will utilize the Import-SQLModuleStub to ensure the correct
module is loaded ([issue #784](https://github.com/dsccommunity/SqlServerDsc/issues/784)).
- Fixed an issue when setting the SQLServer parameter to a Fully Qualified
Domain Name (FQDN) ([issue #468](https://github.com/dsccommunity/SqlServerDsc/issues/468)).
- Fixed the logic so that if a parameter is not supplied to the resource, the
resource will not attempt to apply the defaults on subsequent checks
([issue #517](https://github.com/dsccommunity/SqlServerDsc/issues/517)).
- Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified,
the resource will only determine if a change is needed if the target node
is the active host of the SQL Server instance ([issue #868](https://github.com/dsccommunity/SqlServerDsc/issues/868)).
- Changes to xSQLServerAlwaysOnAvailabilityGroupDatabaseMembership
- Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified,
the resource will only determine if a change is needed if the target node
is the active host of the SQL Server instance ([issue #869](https://github.com/dsccommunity/SqlServerDsc/issues/869)).
- Changes to xSQLServerAlwaysOnAvailabilityGroupReplica
- Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified,
the resource will only determine if a change is needed if the target node is
the active host of the SQL Server instance ([issue #870](https://github.com/dsccommunity/SqlServerDsc/issues/870)).
- Added the CommonTestHelper.psm1 to store common testing functions.
- Added the Import-SQLModuleStub function to ensure the correct version of the
module stubs are loaded ([issue #784](https://github.com/dsccommunity/SqlServerDsc/issues/784)).
- Changes to xSQLServerMemory
- Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified,
the resource will only determine if a change is needed if the target node
is the active host of the SQL Server instance ([issue #867](https://github.com/dsccommunity/SqlServerDsc/issues/867)).
- Changes to xSQLServerNetwork
- BREAKING CHANGE: Renamed parameter TcpDynamicPorts to TcpDynamicPort and
changed type to Boolean ([issue #534](https://github.com/dsccommunity/SqlServerDsc/issues/534)).
  - Resolved issue when switching from dynamic to static port configuration
    ([issue #534](https://github.com/dsccommunity/SqlServerDsc/issues/534)).
- Added localization (en-US) for all strings in resource and unit tests
([issue #618](https://github.com/dsccommunity/SqlServerDsc/issues/618)).
- Updated examples to reflect new parameters.
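
  With the new boolean TcpDynamicPort parameter, switching to a static port
  could look like this sketch (the port number is an example and the remaining
  property names are assumptions based on the resource schema at the time):

  ```powershell
  Configuration StaticTcpPortExample
  {
      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerNetwork 'ConfigureStaticPort'
          {
              InstanceName   = 'MSSQLSERVER'
              ProtocolName   = 'Tcp'
              IsEnabled      = $true
              TcpDynamicPort = $false   # boolean after this change; $true means dynamic ports
              TcpPort        = 4509
              RestartService = $true
          }
      }
  }
  ```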
- Changes to xSQLServerRSConfig
- Added examples
- Added resource
- xSQLServerDatabaseDefaultLocation
([issue #656](https://github.com/dsccommunity/SqlServerDsc/issues/656))
- Changes to xSQLServerEndpointPermission
  - Fixed a problem where running the tests locally in a PowerShell console
    would ask for parameters ([issue #897](https://github.com/dsccommunity/SqlServerDsc/issues/897)).
- Changes to xSQLServerAvailabilityGroupListener
  - Fixed a problem where running the tests locally in a PowerShell console
    would ask for parameters ([issue #897](https://github.com/dsccommunity/SqlServerDsc/issues/897)).
- Changes to xSQLServerMaxDop
- Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified,
the resource will only determine if a change is needed if the target node
is the active host of the SQL Server instance ([issue #882](https://github.com/dsccommunity/SqlServerDsc/issues/882)).
## [8.2.0.0] - 2017-10-05
### Changed
- Changes to xSQLServer
  - Updated appveyor.yml so that integration tests run in order and so that
    the SQLPS module folders are renamed to not disturb the unit tests, but
    can be renamed back by the xSQLServerSetup integration tests so that the
    integration tests can run successfully
    ([issue #774](https://github.com/dsccommunity/SqlServerDsc/issues/774)).
  - Changed so the maximum version of Pester to be installed is 4.0.6.0 when
    running unit tests in AppVeyor. This is a quick fix until the unit tests can
    be resolved (see [issue #807](https://github.com/dsccommunity/SqlServerDsc/issues/807)).
- Moved the code block, that contains workarounds in appveyor.yml, so it is run
during the install phase instead of the test phase.
- Fix problem with tests breaking with Pester 4.0.7 ([issue #807](https://github.com/dsccommunity/SqlServerDsc/issues/807)).
- Changes to xSQLServerHelper
- Changes to Connect-SQL and Import-SQLPSModule
    - Now the correct assemblies are loaded when the SqlServer module is
      present ([issue #649](https://github.com/dsccommunity/SqlServerDsc/issues/649)).
    - Now the SQLPS module will be correctly loaded (discovered) after installation
      of SQL Server. Previously, resources depending on the SQLPS module could fail
      because SQLPS was not found after installation, since the PSModulePath
      environment variable in the (LCM) PowerShell session did not contain the new
      module path.
- Added new helper function "Test-ClusterPermissions" ([issue #446](https://github.com/dsccommunity/SqlServerDsc/issues/446)).
- Changes to xSQLServerSetup
- Fixed an issue with trailing slashes in the 'UpdateSource' property
([issue #720](https://github.com/dsccommunity/SqlServerDsc/issues/720)).
- Fixed so that the integration test renames back the SQLPS module folders if
    they were renamed by AppVeyor (in the appveyor.yml file)
([issue #774](https://github.com/dsccommunity/SqlServerDsc/issues/774)).
- Fixed so integration test does not write warnings when SQLPS module is loaded
([issue #798](https://github.com/dsccommunity/SqlServerDsc/issues/798)).
- Changes to integration tests.
- Moved the configuration block from the MSFT\_xSQLServerSetup.Integration.Tests.ps1
to the MSFT\_xSQLServerSetup.config.ps1 to align with the other integration
      tests, and also to get most of the configuration in one place.
- Changed the tests so that the local SqlInstall account is added as a member
of the local administrators group.
- Changed the tests so that the local SqlInstall account is added as a member
of the system administrators in SQL Server (Database Engine) - needed for the
xSQLServerAlwaysOnService integration tests.
    - Changed so that only one of the Modules folders for the SQLPS PowerShell module
for SQL Server 2016 is renamed back so it can be used with the integration
tests. There was an issue when more than one SQLPS module was present (see
more information in [issue #806](https://github.com/dsccommunity/SqlServerDsc/issues/806)).
- Fixed wrong variable name for SQL service credential. It was using the
integration test variable name instead of the parameter name.
- Added ErrorAction 'Stop' to the cmdlet Start-DscConfiguration
([issue #824](https://github.com/dsccommunity/SqlServerDsc/issues/824)).
- Changes to xSQLServerAlwaysOnAvailabilityGroup
  - Changed the check of the value entered for the parameter
    BasicAvailabilityGroup. It is a boolean, so previously it was not possible to
    disable the feature.
- Add possibility to enable/disable the feature DatabaseHealthTrigger
(SQL Server 2016 or later only).
- Add possibility to enable the feature DtcSupportEnabled (SQL Server 2016 or
later only). The feature currently can't be altered once the Availability
Group is created.
- Use the new helper function "Test-ClusterPermissions".
- Refactored the unit tests to allow them to be more user friendly.
- Added the following read-only properties to the schema ([issue #476](https://github.com/dsccommunity/SqlServerDsc/issues/476))
- EndpointPort
- EndpointURL
- SQLServerNetName
- Version
- Use the Get-PrimaryReplicaServerObject helper function.
- Changes to xSQLServerAlwaysOnAvailabilityGroupReplica
- Fixed the formatting for the AvailabilityGroupNotFound error.
- Added the following read-only properties to the schema ([issue #477](https://github.com/dsccommunity/SqlServerDsc/issues/477))
- EndpointPort
- EndpointURL
- Use the new helper function "Test-ClusterPermissions".
- Use the Get-PrimaryReplicaServerObject helper function
- Changes to xSQLServerHelper
- Fixed Connect-SQL by ensuring the Status property returns 'Online' prior to
returning the SQL Server object ([issue #333](https://github.com/dsccommunity/SqlServerDsc/issues/333)).
- Changes to xSQLServerRole
- Running Get-DscConfiguration no longer throws an error saying property
Members is not an array ([issue #790](https://github.com/dsccommunity/SqlServerDsc/issues/790)).
- Changes to xSQLServerMaxDop
- Fixed error where Measure-Object cmdlet would fail claiming it could not
find the specified property ([issue #801](https://github.com/dsccommunity/SqlServerDsc/issues/801))
- Changes to xSQLServerAlwaysOnService
- Added integration test ([issue #736](https://github.com/dsccommunity/SqlServerDsc/issues/736)).
- Added ErrorAction 'Stop' to the cmdlet Start-DscConfiguration
([issue #824](https://github.com/dsccommunity/SqlServerDsc/issues/824)).
- Changes to SMO.cs
- Added default properties to the Server class
- AvailabilityGroups
- Databases
- EndpointCollection
- Added a new overload to the Login class
- Added default properties to the AvailabilityReplicas class
- AvailabilityDatabases
- AvailabilityReplicas
- Added new resource xSQLServerAccount ([issue #706](https://github.com/dsccommunity/SqlServerDsc/issues/706))
- Added localization support for all strings
- Added examples for usage
- Changes to xSQLServerRSConfig
- No longer returns a null value from Test-TargetResource when Reporting
Services has not been initialized ([issue #822](https://github.com/dsccommunity/SqlServerDsc/issues/822)).
- Fixed so that when two Reporting Services are installed for the same major
version the resource does not throw an error ([issue #819](https://github.com/dsccommunity/SqlServerDsc/issues/819)).
- Now the resource will restart the Reporting Services service after
initializing ([issue #592](https://github.com/dsccommunity/SqlServerDsc/issues/592)).
This will enable the Reports site to work.
- Added integration test ([issue #753](https://github.com/dsccommunity/SqlServerDsc/issues/753)).
- Added support for configuring URL reservations and virtual directory names
([issue #570](https://github.com/dsccommunity/SqlServerDsc/issues/570))
- Added resource
- xSQLServerDatabaseDefaultLocation
([issue #656](https://github.com/dsccommunity/SqlServerDsc/issues/656))
## [8.1.0.0] - 2017-08-23
### Changed
- Changes to xSQLServer
- Added back .markdownlint.json so that lint rule MD013 is enforced.
- Change the module to use the image 'Visual Studio 2017' as the build worker
image for AppVeyor (issue #685).
- Minor style change in CommonResourceHelper. Added missing [Parameter()] on
three parameters.
- Minor style changes to the unit tests for CommonResourceHelper.
- Changes to xSQLServerHelper
- Added Swedish localization ([issue #695](https://github.com/dsccommunity/SqlServerDsc/issues/695)).
- Opt-in for module files common tests ([issue #702](https://github.com/dsccommunity/SqlServerDsc/issues/702)).
- Removed Byte Order Mark (BOM) from the files; CommonResourceHelper.psm1,
MSFT\_xSQLServerAvailabilityGroupListener.psm1, MSFT\_xSQLServerConfiguration.psm1,
MSFT\_xSQLServerEndpointPermission.psm1, MSFT\_xSQLServerEndpointState.psm1,
MSFT\_xSQLServerNetwork.psm1, MSFT\_xSQLServerPermission.psm1,
MSFT\_xSQLServerReplication.psm1, MSFT\_xSQLServerScript.psm1,
SQLPSStub.psm1, SQLServerStub.psm1.
- Opt-in for script files common tests ([issue #707](https://github.com/dsccommunity/SqlServerDsc/issues/707)).
- Removed Byte Order Mark (BOM) from the files; DSCClusterSqlBuild.ps1,
DSCFCISqlBuild.ps1, DSCSqlBuild.ps1, DSCSQLBuildEncrypted.ps1,
SQLPush_SingleServer.ps1, 1-AddAvailabilityGroupListenerWithSameNameAsVCO.ps1,
2-AddAvailabilityGroupListenerWithDifferentNameAsVCO.ps1,
3-RemoveAvailabilityGroupListenerWithSameNameAsVCO.ps1,
4-RemoveAvailabilityGroupListenerWithDifferentNameAsVCO.ps1,
5-AddAvailabilityGroupListenerUsingDHCPWithDefaultServerSubnet.ps1,
6-AddAvailabilityGroupListenerUsingDHCPWithSpecificSubnet.ps1,
2-ConfigureInstanceToEnablePriorityBoost.ps1, 1-CreateEndpointWithDefaultValues.ps1,
2-CreateEndpointWithSpecificPortAndIPAddress.ps1, 3-RemoveEndpoint.ps1,
1-AddConnectPermission.ps1, 2-RemoveConnectPermission.ps1,
3-AddConnectPermissionToAlwaysOnPrimaryAndSecondaryReplicaEachWithDifferentSqlServiceAccounts.ps1,
4-RemoveConnectPermissionToAlwaysOnPrimaryAndSecondaryReplicaEachWithDifferentSqlServiceAccounts.ps1,
1-MakeSureEndpointIsStarted.ps1, 2-MakeSureEndpointIsStopped.ps1,
1-EnableTcpIpWithStaticPort.ps1, 2-EnableTcpIpWithDynamicPort.ps1,
1-AddServerPermissionForLogin.ps1, 2-RemoveServerPermissionForLogin.ps1,
1-ConfigureInstanceAsDistributor.ps1, 2-ConfigureInstanceAsPublisher.ps1,
1-WaitForASingleClusterGroup.ps1, 2-WaitForMultipleClusterGroups.ps1.
- Updated year to 2017 in license file ([issue #711](https://github.com/dsccommunity/SqlServerDsc/issues/711)).
- Code style clean-up throughout the module to align against the Style Guideline.
  - Fixed typos and the use of wrong parameters in unit tests, which were found
    after the release of a new version of Pester ([issue #773](https://github.com/dsccommunity/SqlServerDsc/issues/773)).
- Changes to xSQLServerAlwaysOnService
- Added resource description in README.md.
- Updated parameters descriptions in comment-based help, schema.mof and README.md.
- Changed the datatype of the parameter to UInt32 so the same datatype is used
    in both the Get-/Test-/Set-TargetResource functions and in the schema.mof
(issue #688).
- Added read-only property IsHadrEnabled to schema.mof and the README.md
(issue #687).
- Minor cleanup of code.
- Added examples (issue #633)
- 1-EnableAlwaysOn.ps1
- 2-DisableAlwaysOn.ps1
- Fixed PS Script Analyzer errors ([issue #724](https://github.com/dsccommunity/SqlServerDsc/issues/724))
- Casting the result of the property IsHadrEnabled to [System.Boolean] so that
$null is never returned, which resulted in an exception ([issue #763](https://github.com/dsccommunity/SqlServerDsc/issues/763)).
- Changes to xSQLServerDatabasePermission
- Fixed PS Script Analyzer errors ([issue #725](https://github.com/dsccommunity/SqlServerDsc/issues/725))
- Changes to xSQLServerScript
- Fixed PS Script Analyzer errors ([issue #728](https://github.com/dsccommunity/SqlServerDsc/issues/728))
- Changes to xSQLServerSetup
- Added Swedish localization ([issue #695](https://github.com/dsccommunity/SqlServerDsc/issues/695)).
- Now Get-TargetResource correctly returns an array for property ASSysAdminAccounts,
and no longer throws an error when there is just one Analysis Services
administrator (issue #691).
- Added a simple integration test ([issue #709](https://github.com/dsccommunity/SqlServerDsc/issues/709)).
- Fixed PS Script Analyzer errors ([issue #729](https://github.com/dsccommunity/SqlServerDsc/issues/729))
## [8.0.0.0] - 2017-07-12
### Changed
- BREAKING CHANGE: The module now requires WMF 5.
- This is required for class-based resources
- Added new resource
- xSQLServerAlwaysOnAvailabilityGroupDatabaseMembership
- Added localization support for all strings.
- Refactored as a MOF based resource due to challenges with Pester and testing
    in PowerShell 5.
- Changes to xSQLServer
  - BREAKING CHANGE: xSQLServer no longer tries to support WMF 4.0 (PowerShell
4.0) (issue #574). Minimum supported version of WMF is now 5.0 (PowerShell 5.0).
- BREAKING CHANGE: Removed deprecated resource xSQLAOGroupJoin (issue #457).
- BREAKING CHANGE: Removed deprecated resource xSQLAOGroupEnsure (issue #456).
- BREAKING CHANGE: Removed deprecated resource xSQLServerFailoverClusterSetup
(issue #336).
- Updated PULL\_REQUEST\_TEMPLATE adding comment block around text. Also
rearranged and updated texts (issue #572).
- Added common helper functions for HQRM localization, and added tests for the
helper functions.
- Get-LocalizedData
- New-InvalidResultException
- New-ObjectNotFoundException
- New-InvalidOperationException
- New-InvalidArgumentException
- Updated CONTRIBUTING.md describing the new localization helper functions.
- Fixed typos in xSQLServer.strings.psd1
- Fixed CodeCov badge links in README.md so that they point to the correct branch.
- Added VS Code workspace settings file with formatting settings matching the
Style Guideline (issue #645). That will make it possible inside VS Code to press
SHIFT+ALT+F, or press F1 and choose 'Format document' in the list. The
PowerShell code will then be formatted according to the Style Guideline
    (although maybe not completely, it would help a long way).
- Removed powershell.codeFormatting.alignPropertyValuePairs setting since
it does not align with the style guideline.
- Added powershell.codeFormatting.preset with a value of 'Custom' so that
workspace formatting settings are honored (issue #665).
- Fixed lint error MD013 and MD036 in README.md.
- Updated .markdownlint.json to enable rule MD013 and MD036 to enforce those
lint markdown rules in the common tests.
- Fixed lint error MD013 in CHANGELOG.md.
- Fixed lint error MD013 in CONTRIBUTING.md.
- Added code block around types in README.md.
- Updated copyright information in xSQLServer.psd1.
- Opt-in for markdown common tests (issue #668).
    - The old markdown tests have been removed.
- Changes to xSQLServerHelper
- Removed helper function Grant-ServerPerms because the deprecated resource that
was using it was removed.
- Removed helper function Grant-CNOPerms because the deprecated resource that
was using it was removed.
- Removed helper function New-ListenerADObject because the deprecated resource
that was using it was removed.
- Added tests for those helper functions that did not have tests.
- Test-SQLDscParameterState helper function can now correctly pass a CimInstance
as DesiredValue.
- Test-SQLDscParameterState helper function will now output a warning message
if the value type of a desired value is not supported.
- Added localization to helper functions (issue #641).
  - Resolved the issue, discussed in #641, where Write-Verbose in helper functions
    wouldn't write out verbose messages unless the Verbose parameter was used.
- Moved localization strings from xSQLServer.strings.psd1 to
xSQLServerHelper.strings.psd1.
- Changes to xSQLServerSetup
- BREAKING CHANGE: Replaced StartWin32Process helper function with the cmdlet
Start-Process (issue #41, #93 and #126).
- BREAKING CHANGE: The parameter SetupCredential has been removed since it is
no longer needed. This is because the resource now support the built-in
PsDscRunAsCredential.
- BREAKING CHANGE: Now the resource supports using built-in PsDscRunAsCredential.
If PsDscRunAsCredential is set, that username will be used as the first system
administrator.
  - BREAKING CHANGE: If the parameter PsDscRunAsCredential is not assigned any
credentials then the resource will start the setup process as the SYSTEM account.
When installing as the SYSTEM account, then parameter SQLSysAdminAccounts and
ASSysAdminAccounts must be specified when installing feature Database Engine
and Analysis Services respectively.
- When setup exits with the exit code 3010 a warning message is written to console
telling that setup finished successfully, but a reboot is required (partly fixes
issue #565).
- When setup exits with an exit code other than 0 or 3010 a warning message is
written to console telling that setup finished with an error (partly fixes
issue #580).
- Added a new parameter SetupProcessTimeout which defaults to 7200 seconds (2
hours). If the setup process has not finished before the timeout value in
SetupProcessTimeout an error will be thrown (issue #566).
- Updated all examples to match the removal of SetupCredential.
- Updated (removed) severe known issues in README.md for resource xSQLServerSetup.
  - Now all major versions use the same identifier to evaluate InstallSharedDir
and InstallSharedWOWDir (issue #420).
  - Now setup arguments that contain no value will be ignored, for example when
    the InstallSharedDir and InstallSharedWOWDir paths are already present on the
    target node because of a previous installation (issue #639).
- Updated Get-TargetResource to correctly detect BOL, Conn, BC and other tools
when they are installed without SQLENGINE (issue #591).
- Now it can detect Documentation Components correctly after the change in
issue #591 (issue #628)
- Fixed bug that prevented Get-DscConfiguration from running without error. The
return hash table fails if the $clusteredSqlIpAddress variable is not used.
The schema expects a string array but it is initialized as just a null string,
causing it to fail on Get-DscConfiguration (issue #393).
- Added localization support for all strings.
- Added a test to test some error handling for cluster installations.
- Added support for MDS feature install (issue #486)
- Fixed localization support for MDS feature (issue #671).
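
  A compact sketch combining the PsDscRunAsCredential and SetupProcessTimeout
  changes described above (the feature list, paths and account names are
  placeholders):

  ```powershell
  Configuration InstallDatabaseEngine
  {
      param
      (
          [Parameter(Mandatory = $true)]
          [System.Management.Automation.PSCredential]
          $SqlInstallCredential
      )

      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerSetup 'InstallInstance'
          {
              InstanceName         = 'MSSQLSERVER'
              Features             = 'SQLENGINE'
              SourcePath           = 'C:\InstallMedia\SQLServer2016'
              SQLSysAdminAccounts  = @('COMPANY\SQL-Administrators')
              SetupProcessTimeout  = 7200   # seconds; an error is thrown if setup runs longer
              PsDscRunAsCredential = $SqlInstallCredential   # replaces the removed SetupCredential
          }
      }
  }
  ```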
- Changes to xSQLServerRSConfig
- BREAKING CHANGE: Removed `$SQLAdminCredential` parameter. Use common parameter
`PsDscRunAsCredential` (WMF 5.0+) to run the resource under different credentials.
`PsDscRunAsCredential` Windows account must be a sysadmin on SQL Server (issue
#568).
- In addition, the resource no longer uses `Invoke-Command` cmdlet that was used
to impersonate the Windows user specified by `$SQLAdminCredential`. The call
also needed CredSSP authentication to be enabled and configured on the target
node, which complicated deployments in non-domain scenarios. Using
    `PsDscRunAsCredential` solves these problems for us.
- Fixed virtual directory creation for SQL Server 2016 (issue #569).
- Added unit tests (issue #295).
- Changes to xSQLServerDatabase
  - Changed the readme; SQLInstance should have been SQLInstanceName.
- Changes to xSQLServerScript
- Fixed bug with schema and variable mismatch for the Credential/Username parameter
in the return statement (issue #661).
  - Added an optional QueryTimeout parameter to specify the SQL script query
    execution timeout (issue #597).
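
  A sketch using the new QueryTimeout parameter (the ServerInstance and
  *FilePath property names are assumptions based on the resource schema at the
  time; the paths are placeholders):

  ```powershell
  Configuration RunSqlScriptWithTimeout
  {
      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerScript 'DeploySchema'
          {
              ServerInstance = 'SQL01\PROD'
              SetFilePath    = 'C:\Scripts\Set-Schema.sql'
              TestFilePath   = 'C:\Scripts\Test-Schema.sql'
              GetFilePath    = 'C:\Scripts\Get-Schema.sql'
              QueryTimeout   = 300   # seconds before the script query execution times out
          }
      }
  }
  ```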
- Changes to xSQLServerAlwaysOnService
- Fixed typos in localization strings and in tests.
- Changes to xSQLServerAlwaysOnAvailabilityGroup
  - Now it utilizes the value of 'FailoverMode' to set the 'FailoverMode' property
of the Availability Group instead of wrongly using the 'AvailabilityMode'
property of the Availability Group.
## [7.1.0.0] - 2017-05-31
### Changed
- Changes to xSQLServerMemory
- Changed the way SQLServer parameter is passed from Test-TargetResource to
Get-TargetResource so that the default value isn't lost (issue #576).
- Added condition to unit tests for when no SQLServer parameter is set.
- Changes to xSQLServerMaxDop
- Changed the way SQLServer parameter is passed from Test-TargetResource to
Get-TargetResource so that the default value isn't lost (issue #576).
- Added condition to unit tests for when no SQLServer parameter is set.
- Changes to xWaitForAvailabilityGroup
- Updated README.md with a description for the resources and revised the parameter
descriptions.
- The default value for RetryIntervalSec is now 20 seconds and the default value
for RetryCount is now 30 times (issue #505).
- Cleaned up code and fixed PSSA rules warnings (issue #268).
- Added unit tests (issue #297).
- Added descriptive text to README.md that the account that runs the resource
must have permission to run the cmdlet Get-ClusterGroup (issue #307).
- Added read-only parameter GroupExist which will return $true if the cluster
role/group exist, otherwise it returns $false (issue #510).
- Added examples.
- Changes to xSQLServerPermission
- Cleaned up code, removed SupportsShouldProcess and fixed PSSA rules warnings
(issue #241 and issue #262).
- It is now possible to add permissions to two or more logins on the same instance
(issue #526).
  - The parameter NodeName is no longer mandatory and now has the default value
of $env:COMPUTERNAME.
- The parameter Ensure now has a default value of 'Present'.
- Updated README.md with a description for the resources and revised the parameter
descriptions.
- Removed dependency of SQLPS provider (issue #482).
- Added ConnectSql permission. Now that permission can also be granted or revoked.
- Updated note in resource description to also mention ConnectSql permission.
- Changes to xSQLServerHelper module
- Removed helper function Get-SQLPSInstance and Get-SQLPSInstanceName because
there is no resource using it any longer.
- Added four new helper functions.
- Register-SqlSmo, Register-SqlWmiManagement and Unregister-SqlAssemblies to
handle the creation on the application domain and loading and unloading of
the SMO and SqlWmiManagement assemblies.
- Get-SqlInstanceMajorVersion to get the major SQL version for a specific instance.
- Fixed typos in comment-based help
- Changes to xSQLServer
- Fixed typos in markdown files; CHANGELOG, CONTRIBUTING, README and ISSUE_TEMPLATE.
- Fixed typos in schema.mof files (and README.md).
- Updated some parameter description in schema.mof files on those that was found
was not equal to README.md.
- Changes to xSQLServerAlwaysOnService
- Get-TargetResource should no longer fail silently with error 'Index operation
failed; the array index evaluated to null.' (issue #519). Now if the
    Server.IsHadrEnabled property returns neither $true nor $false, the
Get-TargetResource function will throw an error.
- Changes to xSQLServerSetup
  - Updated the xSQLServerSetup module's Get-Resource method (issues #516 and #490).
- Added change to detect DQ, DQC, BOL, SDK features. Now the function
Test-TargetResource returns true after calling set for DQ, DQC, BOL, SDK
features (issue #516 and #490).
- Changes to xSQLServerAlwaysOnAvailabilityGroup
- Updated to return the exception raised when an error is thrown.
- Changes to xSQLServerAlwaysOnAvailabilityGroupReplica
- Updated to return the exception raised when an error is thrown.
- Updated parameter description for parameter Name, so that it says it must be
in the format SQLServer\InstanceName for named instance (issue #548).
- Changes to xSQLServerLogin
- Added an optional boolean parameter Disabled. It can be used to enable/disable
existing logins or create disabled logins (new logins are created as enabled
by default).
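
  A sketch creating a login in a disabled state using the new parameter (the
  remaining property names are assumptions based on the resource schema at the
  time):

  ```powershell
  Configuration DisabledLoginExample
  {
      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerLogin 'ServiceAccountLogin'
          {
              Ensure          = 'Present'
              Name            = 'COMPANY\ServiceAccount'
              LoginType       = 'WindowsUser'
              Disabled        = $true   # create the login disabled; new logins default to enabled
              SQLServer       = 'SQL01'
              SQLInstanceName = 'MSSQLSERVER'
          }
      }
  }
  ```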
- Changes to xSQLServerDatabaseRole
- Updated variable passed to Microsoft.SqlServer.Management.Smo.User constructor
to fix issue #530
- Changes to xSQLServerNetwork
- Added optional parameter SQLServer with default value of $env:COMPUTERNAME
(issue #528).
- Added optional parameter RestartTimeout with default value of 120 seconds.
  - Now the resource supports restarting a SQL Server in a cluster (issue #527
and issue #455).
  - Now the resource allows setting the parameter TcpDynamicPorts to a blank value
(partly fixes issue #534). Setting a blank value for parameter TcpDynamicPorts
together with a value for parameter TcpPort means that static port will be used.
- Now the resource will not call Alter() in the Set-TargetResource when there
is no change necessary (issue #537).
- Updated example 1-EnableTcpIpOnCustomStaticPort.
- Added unit tests (issue #294).
- Refactored some of the code, cleaned up the rest and fixed PSSA rules warnings
(issue #261).
- If parameter TcpDynamicPort is set to '0' at the same time as TcpPort is set
the resource will now throw an error (issue #535).
- Added examples (issue #536).
- When TcpDynamicPorts is set to '0' the Test-TargetResource function will no
longer fail each time (issue #564).
- Changes to xSQLServerRSConfig
- Replaced sqlcmd.exe usages with Invoke-Sqlcmd calls (issue #567).
- Changes to xSQLServerDatabasePermission
- Fixed code style, updated README.md and removed *-SqlDatabasePermission functions
from xSQLServerHelper.psm1.
  - Added the option 'GrantWithGrant' which gives the user grant rights, together
with the ability to grant others the same right.
- Now the resource can revoke permission correctly (issue #454). When revoking
'GrantWithGrant', both the grantee and all the other users the grantee has
granted the same permission to, will also get their permission revoked.
- Updated tests to cover Revoke().
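
  A sketch of the new 'GrantWithGrant' option (property names other than the
  permission state are assumptions based on the resource schema at the time):

  ```powershell
  Configuration GrantSelectWithGrant
  {
      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerDatabasePermission 'ReportingGroupPermission'
          {
              Ensure          = 'Present'
              Database        = 'AdventureWorks'
              Name            = 'COMPANY\ReportingGroup'
              PermissionState = 'GrantWithGrant'   # grantee may grant the permission to others
              Permissions     = @('Select')
              SQLServer       = 'SQL01'
              SQLInstanceName = 'MSSQLSERVER'
          }
      }
  }
  ```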
- Changes to xSQLServerHelper
- The missing helper function ('Test-SPDSCObjectHasProperty'), that was referenced
in the helper function Test-SQLDscParameterState, is now incorporated into
Test-SQLDscParameterState (issue #589).
## [7.0.0.0] - 2017-04-19
### Changed
- Examples
- xSQLServerDatabaseRole
- 1-AddDatabaseRole.ps1
- 2-RemoveDatabaseRole.ps1
- xSQLServerRole
- 3-AddMembersToServerRole.ps1
- 4-MembersToIncludeInServerRole.ps1
- 5-MembersToExcludeInServerRole.ps1
- xSQLServerSetup
- 1-InstallDefaultInstanceSingleServer.ps1
- 2-InstallNamedInstanceSingleServer.ps1
- 3-InstallNamedInstanceSingleServerFromUncPathUsingSourceCredential.ps1
- 4-InstallNamedInstanceInFailoverClusterFirstNode.ps1
- 5-InstallNamedInstanceInFailoverClusterSecondNode.ps1
- xSQLServerReplication
- 1-ConfigureInstanceAsDistributor.ps1
- 2-ConfigureInstanceAsPublisher.ps1
- xSQLServerNetwork
- 1-EnableTcpIpOnCustomStaticPort.ps1
- xSQLServerAvailabilityGroupListener
- 1-AddAvailabilityGroupListenerWithSameNameAsVCO.ps1
- 2-AddAvailabilityGroupListenerWithDifferentNameAsVCO.ps1
- 3-RemoveAvailabilityGroupListenerWithSameNameAsVCO.ps1
- 4-RemoveAvailabilityGroupListenerWithDifferentNameAsVCO.ps1
- 5-AddAvailabilityGroupListenerUsingDHCPWithDefaultServerSubnet.ps1
- 6-AddAvailabilityGroupListenerUsingDHCPWithSpecificSubnet.ps1
- xSQLServerEndpointPermission
- 1-AddConnectPermission.ps1
- 2-RemoveConnectPermission.ps1
- 3-AddConnectPermissionToAlwaysOnPrimaryAndSecondaryReplicaEachWithDifferentSqlServiceAccounts.ps1
- 4-RemoveConnectPermissionToAlwaysOnPrimaryAndSecondaryReplicaEachWithDifferentSqlServiceAccounts.ps1
- xSQLServerPermission
- 1-AddServerPermissionForLogin.ps1
- 2-RemoveServerPermissionForLogin.ps1
- xSQLServerEndpointState
- 1-MakeSureEndpointIsStarted.ps1
- 2-MakeSureEndpointIsStopped.ps1
- xSQLServerConfiguration
- 1-ConfigureTwoInstancesOnTheSameServerToEnableClr.ps1
- 2-ConfigureInstanceToEnablePriorityBoost.ps1
- xSQLServerEndpoint
- 1-CreateEndpointWithDefaultValues.ps1
- 2-CreateEndpointWithSpecificPortAndIPAddress.ps1
- 3-RemoveEndpoint.ps1
- Changes to xSQLServerDatabaseRole
- Fixed code style, added updated parameter descriptions to schema.mof and README.md.
- Changes to xSQLServer
- Raised the CodeCov target to 70% which is the minimum and required target for
HQRM resource.
- Changes to xSQLServerRole
  - **BREAKING CHANGE: The resource has been reworked in its entirety.** Below
is what has changed.
- The mandatory parameters now also include ServerRoleName.
    - The ServerRole parameter, which was previously an array of server roles, has
      been renamed to ServerRoleName and can only be set to one server role.
    - ServerRoleName is no longer limited to built-in server roles. To add members
to a built-in server role, set ServerRoleName to the name of the built-in
server role.
- The ServerRoleName will be created when Ensure is set to 'Present' (if it
does not already exist), or removed if Ensure is set to 'Absent'.
- Three new parameters are added; Members, MembersToInclude and MembersToExclude.
- Members can be set to one or more logins, and those will _replace all_ the
memberships in the server role.
- MembersToInclude and MembersToExclude can be set to one or more logins that
will add or remove memberships, respectively, in the server role. MembersToInclude
and MembersToExclude _can not_ be used at the same time as parameter Members.
But both MembersToInclude and MembersToExclude can be used together at the
same time.
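
  A sketch of the reworked resource, adding members to a built-in server role
  without replacing existing memberships (the server and account names are
  placeholders):

  ```powershell
  Configuration AddSysAdminMembers
  {
      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerRole 'SysAdminMembers'
          {
              Ensure           = 'Present'
              ServerRoleName   = 'sysadmin'   # built-in roles are addressed by name
              MembersToInclude = @('COMPANY\SQL-Administrators')
              SQLServer        = 'SQL01'
              SQLInstanceName  = 'MSSQLSERVER'
          }
      }
  }
  ```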
- Changes to xSQLServerSetup
- Added a note to the README.md saying that it is not possible to add or remove
features from a SQL Server failover cluster (issue #433).
- Changed so that it reports false if the desired state is not correct (issue #432).
- Added a test to make sure we always return false if a SQL Server failover
cluster is missing features.
- Helper function Connect-SQLAnalysis
    - Now has correct error handling, and throw no longer uses the unknown named
      parameter '-Message' (issue #436)
- Added tests for Connect-SQLAnalysis
- Changed to localized error messages.
- Minor changes to error handling.
- This adds better support for Addnode (issue #369).
  - Now it skips cluster validation for the Addnode action (issue #442).
- Now it ignores parameters that are not allowed for action Addnode (issue #441).
- Added support for vNext CTP 1.4 (issue #472).
- Added new resource
- xSQLServerAlwaysOnAvailabilityGroupReplica
- Changes to xSQLServerDatabaseRecoveryModel
- Fixed code style, removed SQLServerDatabaseRecoveryModel functions from xSQLServerHelper.
- Changes to xSQLServerAlwaysOnAvailabilityGroup
- Fixed the permissions check loop so that it exits the loop after the function
determines the required permissions are in place.
- Changes to xSQLServerAvailabilityGroupListener
- Removed the dependency of SQLPS provider (issue #460).
- Cleaned up code.
- Added test for more coverage.
- Fixed PSSA rule warnings (issue #255).
- Parameter Ensure now defaults to 'Present' (issue #450).
- Changes to xSQLServerFirewall
- Now it will correctly create rules when the resource is used for two or more
instances on the same server (issue #461).
- Changes to xSQLServerEndpointPermission
- Added description to the README.md
- Cleaned up code (issue #257 and issue #231)
- Now the default value for Ensure is 'Present'.
- Removed dependency of SQLPS provider (issue #483).
- Refactored tests so they use less code.
- Changes to README.md
  - Added deprecated tags to xSQLServerFailoverClusterSetup, xSQLAOGroupEnsure and
    xSQLAOGroupJoin in README.md so it is clearer that these resources have been
replaced by xSQLServerSetup, xSQLServerAlwaysOnAvailabilityGroup and
xSQLServerAlwaysOnAvailabilityGroupReplica respectively.
- Changes to xSQLServerEndpoint
- BREAKING CHANGE: Now SQLInstanceName is mandatory, and is a key, so
    SQLInstanceName no longer has a default value (issue #279).
- BREAKING CHANGE: Parameter AuthorizedUser has been removed (issue #466,
issue #275 and issue #80). Connect permissions can be set using the resource
xSQLServerEndpointPermission.
- Optional parameter IpAddress has been added. Default is to listen on any
valid IP-address. (issue #232)
- Parameter Port now has a default value of 5022.
- Parameter Ensure now defaults to 'Present'.
- Resource now supports changing IP address and changing port.
- Added unit tests (issue #289)
- Added examples.
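
  A sketch of an endpoint using the new defaults and the optional IpAddress
  parameter (the EndpointName property name is an assumption based on the
  resource schema at the time; values are placeholders):

  ```powershell
  Configuration DatabaseMirrorEndpoint
  {
      Import-DscResource -ModuleName 'xSQLServer'

      Node localhost
      {
          xSQLServerEndpoint 'HadrEndpoint'
          {
              Ensure          = 'Present'
              EndpointName    = 'HADR'
              Port            = 5022            # the new default value
              IpAddress       = '0.0.0.0'       # default is to listen on any valid IP address
              SQLServer       = 'SQL01'
              SQLInstanceName = 'MSSQLSERVER'   # now mandatory and a key property
          }
      }
  }
  ```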
- Changes to xSQLServerEndpointState
- Cleaned up code, removed SupportsShouldProcess and fixed PSSA rules warnings
(issue #258 and issue #230).
- Now the default value for the parameter State is 'Started'.
- Updated README.md with a description for the resources and revised the
parameter descriptions.
- Removed dependency of SQLPS provider (issue #481).
  - The parameter NodeName is no longer mandatory and now has the default value
of $env:COMPUTERNAME.
- The parameter Name is now a key so it is now possible to change the state on
more than one endpoint on the same instance. _Note: The resource still only
supports Database Mirror endpoints at this time._
- Changes to xSQLServerHelper module
  - Removed helper function Get-SQLAlwaysOnEndpoint because there is no resource
using it any longer.
- BREAKING CHANGE: Changed helper function Import-SQLPSModule to support SqlServer
module (issue #91). The SqlServer module is the preferred module so if it is
    found it will be used, and if not found an attempt will be made to load the
    SQLPS module instead.
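
  A minimal sketch of the preference logic described above (illustrative only,
  not the module's actual Import-SQLPSModule implementation):

  ```powershell
  if (Get-Module -Name 'SqlServer' -ListAvailable)
  {
      # The SqlServer module is preferred when it is available.
      Import-Module -Name 'SqlServer'
  }
  elseif (Get-Module -Name 'SQLPS' -ListAvailable)
  {
      # Fall back to SQLPS; importing it changes the current location to SQLSERVER:\,
      # so save and restore the location around the import.
      Push-Location
      Import-Module -Name 'SQLPS'
      Pop-Location
  }
  else
  {
      throw 'Neither the SqlServer nor the SQLPS PowerShell module could be found.'
  }
  ```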
- Changes to xSQLServerScript
- Updated tests for this resource, because they failed when Import-SQLPSModule
was updated.
## [6.0.0.0] - 2017-03-08
### Changed
- Changes to xSQLServerConfiguration
- BREAKING CHANGE: The parameter SQLInstanceName is now mandatory.
- Resource can now be used to define the configuration of two or more different
DB instances on the same server.
- Changes to xSQLServerRole
- xSQLServerRole now correctly reports that the desired state is present when
the login is already a member of the server roles.
- Added new resources
- xSQLServerAlwaysOnAvailabilityGroup
- Changes to xSQLServerSetup
- Properly checks for use of SQLSysAdminAccounts parameter in $PSBoundParameters.
The test now also properly evaluates the setup argument for SQLSysAdminAccounts.
- xSQLServerSetup should now function correctly for the InstallFailoverCluster
action, and also supports cluster shared volumes. Note that the AddNode action
is not currently working.
- It now detects that feature Client Connectivity Tools (CONN) and Client
Connectivity Backwards Compatibility Tools (BC) is installed.
- Now it can correctly determine the right cluster when only parameter
InstallSQLDataDir is assigned a path (issue #401).
- Now the only mandatory path parameter is InstallSQLDataDir when installing
Database Engine (issue #400).
  - It can now handle mandatory parameters, and no longer uses a wildcard to find
    the variables containing paths (issue #394).
  - Changed so that instead of connecting to localhost it uses $env:COMPUTERNAME
    as the host name to which it connects. For cluster installations it uses
    the parameter FailoverClusterNetworkName as the host name to which it connects
(issue #407).
- When called with Action = 'PrepareFailoverCluster', the SQLSysAdminAccounts
and FailoverClusterGroup parameters are no longer passed to the setup process
    (issues #410 and #411).
- Solved the problem that InstanceDir and InstallSQLDataDir could not be set to
    just a qualifier, i.e. 'E:' (issue #418). All paths (except SourcePath) can now
be set to just the qualifier.
- Enables CodeCov.io code coverage reporting.
- Added badge for CodeCov.io to README.md.
- Examples
- xSQLServerMaxDop
- 1-SetMaxDopToOne.ps1
- 2-SetMaxDopToAuto.ps1
- 3-SetMaxDopToDefault.ps1
- xSQLServerMemory
- 1-SetMaxMemoryTo12GB.ps1
- 2-SetMaxMemoryToAuto.ps1
- 3-SetMinMaxMemoryToAuto.ps1
- 4-SetMaxMemoryToDefault.ps1
- xSQLServerDatabase
- 1-CreateDatabase.ps1
- 2-DeleteDatabase.ps1
- Added tests for resources
- xSQLServerMaxDop
- xSQLServerMemory
- Changes to xSQLServerMemory
- BREAKING CHANGE: The mandatory parameters now include SQLInstanceName. The
  DynamicAlloc parameter is no longer mandatory.
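
A sketch of the changed parameter set, assuming MaxMemory (in megabytes) is the parameter that sets the instance's maximum server memory:

```powershell
Configuration SetInstanceMemory
{
    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLServerMemory MaxMemory12GB
        {
            SQLServer       = 'SQLNODE01'
            SQLInstanceName = 'MSSQLSERVER'   # now mandatory
            DynamicAlloc    = $false          # no longer mandatory
            MaxMemory       = 12288           # assumed parameter name; value in MB
        }
    }
}
```
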
- Changes to xSQLServerDatabase
- When the system is not in the desired state, Test-TargetResource now outputs
  verbose messages saying so.
- Changes to xSQLServerDatabaseOwner
- Fixed code style, added updated parameter descriptions to schema.mof and README.md.
## [5.0.0.0] - 2017-01-25
### Changed
- Improvements to how tests are initiated in AppVeyor
- Removed previous workaround (issue #201) from unit tests.
- Changes in appveyor.yml so that SQL modules are removed before the common
  tests are run.
- The deploy step no longer fails when merging code into Dev, nor does it fail
  when a contributor has AppVeyor connected to a fork of xSQLServer and pushes
  code to that fork.
- Changes to README.md
- Changed the contributing section to help new contributors.
- Added links for each resource so it is easier to navigate to the parameter list
for each resource.
- Moved the list of resources into alphabetical order.
- Moved each resource parameter list into alphabetical order.
- Removed old text mentioning System Center.
- Now the correct product name is written in the installation section, and a typo
was also fixed.
- Fixed a typo in the Requirements section.
- Added link to Examples folder in the Examples section.
- Changed the layout of the README.md to more closely match that of PSDscResources
- Added more detailed text explaining what operating systems WMF5.0 can be installed
on.
- Verified all resource schema files against the README.md and fixed some errors
  (descriptions were not verified).
- Added security requirements section for resource xSQLServerEndpoint and
xSQLAOGroupEnsure.
- Changes to xSQLServerSetup
- The resource no longer uses Win32_Product WMI class when evaluating if
SQL Server Management Studio is installed. See article
[kb974524](https://support.microsoft.com/en-us/kb/974524) for more information.
- Now it uses CIM cmdlets to get information from WMI classes.
- Resolved all of the PSScriptAnalyzer warnings that were triggered in the common
  tests.
- Improved service account handling to enable support for Managed Service Accounts
  as well as other NT AUTHORITY accounts
- Changes to the helper function Copy-ItemWithRoboCopy
- Robocopy is now started using Start-Process and the error handling has been
improved.
- Robocopy now removes files at the destination path if they no longer exist
  at the source.
- Robocopy copies using unbuffered I/O when available (recommended for large
files).
- Added a more descriptive text for the parameter `SourceCredential` to further
  explain how the parameter works.
- BREAKING CHANGE: Removed parameter SourceFolder.
- BREAKING CHANGE: Removed default value "$PSScriptRoot\..\..\" from parameter
SourcePath.
- Old code that no longer filled any function has been replaced.
- Function `ResolvePath` has been replaced with
`[Environment]::ExpandEnvironmentVariables($SourcePath)` so that environment
variables still can be used in Source Path.
- Function `NetUse` has been replaced with `New-SmbMapping` and
`Remove-SmbMapping`.
- Renamed function `GetSQLVersion` to `Get-SqlMajorVersion`.
- BREAKING CHANGE: Renamed the parameter PID to ProductKey to avoid collision with
  the automatic variable `$PID`.
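
The Copy-ItemWithRoboCopy behavior described above can be sketched as follows. The exact robocopy flags used by the helper are assumptions; /MIR removes destination files missing from the source and /J requests unbuffered I/O.

```powershell
# Hypothetical paths for illustration only.
$sourcePath      = '\\fileserver\media\sql2016'
$destinationPath = 'C:\InstallMedia\sql2016'

$arguments = '"{0}" "{1}" /MIR /J' -f $sourcePath, $destinationPath
$process = Start-Process -FilePath 'Robocopy.exe' -ArgumentList $arguments -Wait -NoNewWindow -PassThru

# Robocopy exit codes of 8 or higher indicate failed copies.
if ($process.ExitCode -ge 8)
{
    throw "Robocopy failed with exit code $($process.ExitCode)."
}
```
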
- Changes to xSQLServerScript
- All credential parameters now also have the type
  [System.Management.Automation.Credential()] to better work with PowerShell 4.0.
- It is now possible to configure two instances on the same node, with the same
script.
- Added text to the description of the parameter `Credential` describing how
  to authenticate using Windows Authentication.
- Added examples to show how to authenticate using either SQL or Windows
authentication.
- A recent issue showed that there is a known problem running this resource
using PowerShell 4.0. For more information, see [issue #273](https://github.com/dsccommunity/SqlServerDsc/issues/273)
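
A sketch of running scripts with SQL authentication follows; the ServerInstance and *FilePath parameter names are assumptions, while Credential is named by the changelog. Omitting Credential would run the scripts using Windows Authentication as the configured account.

```powershell
Configuration RunSqlScripts
{
    param
    (
        [Parameter(Mandatory = $true)]
        [System.Management.Automation.PSCredential]
        $SqlCredential
    )

    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLServerScript DeploySchema
        {
            ServerInstance = 'SQLNODE01\INST01'   # assumed parameter name
            SetFilePath    = 'C:\Scripts\Set.sql'
            TestFilePath   = 'C:\Scripts\Test.sql'
            GetFilePath    = 'C:\Scripts\Get.sql'
            Credential     = $SqlCredential       # SQL login for SQL authentication
        }
    }
}
```
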
- Changes to xSQLServerFirewall
- BREAKING CHANGE: Removed parameter SourceFolder.
- BREAKING CHANGE: Removed default value "$PSScriptRoot\..\..\" from parameter
SourcePath.
- Old code that no longer filled any function has been replaced.
- Function `ResolvePath` has been replaced with
`[Environment]::ExpandEnvironmentVariables($SourcePath)` so that environment
variables still can be used in Source Path.
- Added a new optional parameter SourceCredential that can be used to authenticate
  against SourcePath.
- Solved PSSA rule errors in the code.
- Get-TargetResource no longer returns $true when no products are installed.
- Changes to the unit test for resource
- xSQLServerSetup
- Added test coverage for helper function Copy-ItemWithRoboCopy
- Changes to xSQLServerLogin
- Removed ShouldProcess statements
- Added the ability to enforce password policies on SQL logins
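
A sketch of a SQL login with password policy enforcement; the LoginPasswordPolicyEnforced and LoginPasswordExpirationEnabled parameter names are assumptions, since the changelog only states that password policies can now be enforced.

```powershell
Configuration SqlLoginWithPolicy
{
    param
    (
        [Parameter(Mandatory = $true)]
        [System.Management.Automation.PSCredential]
        $LoginCredential
    )

    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLServerLogin AppLogin
        {
            Ensure                         = 'Present'
            Name                           = 'AppUser'
            LoginType                      = 'SqlLogin'
            SQLServer                      = 'SQLNODE01'
            SQLInstanceName                = 'MSSQLSERVER'
            LoginCredential                = $LoginCredential
            LoginPasswordPolicyEnforced    = $true   # assumed parameter name
            LoginPasswordExpirationEnabled = $true   # assumed parameter name
        }
    }
}
```
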
- Added common test (xSQLServerCommon.Tests) for xSQLServer module
- Now all markdown files will be style checked when tests are running in AppVeyor
after sending in a pull request.
- Now all [Examples](/source/Examples/Resources) will be tested by compiling
to a .mof file after sending in a pull request.
- Changes to xSQLServerDatabaseOwner
- The example 'SetDatabaseOwner' can now compile; it wrongly had a `DependsOn`
  in the example.
- Changes to xSQLServerRole
- The examples 'AddServerRole' and 'RemoveServerRole' can now compile; they wrongly
  had a `DependsOn` in the examples.
- Changes to CONTRIBUTING.md
- Added section 'Tests for examples files'
- Added section 'Tests for style check of Markdown files'
- Added section 'Documentation with Markdown'
- Added text to the section 'Tests'
- Changes to xSQLServerHelper
- added functions
- Get-SqlDatabaseRecoveryModel
- Set-SqlDatabaseRecoveryModel
- Examples
- xSQLServerDatabaseRecoveryModel
- 1-SetDatabaseRecoveryModel.ps1
- xSQLServerDatabasePermission
- 1-GrantDatabasePermissions.ps1
- 2-RevokeDatabasePermissions.ps1
- 3-DenyDatabasePermissions.ps1
- xSQLServerFirewall
- 1-CreateInboundFirewallRules
- 2-RemoveInboundFirewallRules
- Added tests for resources
- xSQLServerDatabaseRecoveryModel
- xSQLServerDatabasePermissions
- xSQLServerFirewall
- Changes to xSQLServerDatabaseRecoveryModel
- BREAKING CHANGE: Renamed xSQLDatabaseRecoveryModel to
xSQLServerDatabaseRecoveryModel to align with naming convention.
- BREAKING CHANGE: The mandatory parameters now include SQLServer, and
SQLInstanceName.
- Changes to xSQLServerDatabasePermission
- BREAKING CHANGE: Renamed xSQLServerDatabasePermissions to
xSQLServerDatabasePermission to align with naming convention.
- BREAKING CHANGE: The mandatory parameters now include PermissionState,
SQLServer, and SQLInstanceName.
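
A sketch with the renamed resource and its now-mandatory parameters; Database, Name, and Permissions are assumed parameter names.

```powershell
Configuration GrantDatabasePermission
{
    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLServerDatabasePermission GrantConnectToAppUser
        {
            Ensure          = 'Present'
            SQLServer       = 'SQLNODE01'      # now mandatory
            SQLInstanceName = 'MSSQLSERVER'    # now mandatory
            Database        = 'AdventureWorks' # assumed parameter name
            Name            = 'AppUser'
            PermissionState = 'Grant'          # now mandatory
            Permissions     = @('Connect', 'Select')
        }
    }
}
```
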
- Added support for clustered installations to xSQLServerSetup
- Migrated relevant code from xSQLServerFailoverClusterSetup
- Removed Get-WmiObject usage
- Clustered storage mapping now supports asymmetric cluster storage
- Added support for multi-subnet clusters
- Added localized error messages for cluster object mapping
- Updated README.md to reflect new parameters
- Updated description for xSQLServerFailoverClusterSetup to indicate it is deprecated.
- xPDT helper module
- Function GetxPDTVariable was removed since it was no longer used by any resources.
- File xPDT.xml was removed since it was not used by any resources, and did not
provide any value to the module.
- Changes xSQLServerHelper module
- Removed the globally defined `$VerbosePreference = 'Continue'` from xSQLServerHelper.
- Fixed a typo in a variable name in the function New-ListenerADObject.
- Now Restart-SqlService will correctly show the services it restarts. Also
fixed PSSA warnings.
## [4.0.0.0] - 2016-12-14
### Changed
- Fixes in xSQLServerConfiguration
- Added support for clustered SQL instances.
- BREAKING CHANGE: Updated parameters to align with other resources
(SQLServer / SQLInstanceName).
- Updated code to utilize CIM rather than WMI.
- Added tests for resources
- xSQLServerConfiguration
- xSQLServerSetup
- xSQLServerDatabaseRole
- xSQLAOGroupJoin
- xSQLServerHelper and moved the existing tests for Restart-SqlService to it.
- xSQLServerAlwaysOnService
- Fixes in xSQLAOGroupJoin
- The Availability Group name now appears in the error message for a failed
  Availability Group join attempt.
- Get-TargetResource now works with Get-DscConfiguration.
- Fixes in xSQLServerRole
- Updated the Ensure parameter to default to 'Present'.
- Renamed helper functions *-SqlServerRole* to *-SqlServerRoleMember*.
- Changes to xSQLAlias
- Added the UseDynamicTcpPort parameter for the option "Dynamically determine port".
- Changed Get-WmiObject to Get-CimInstance in the resource and the associated Pester file.
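
A sketch of the new dynamic port option; apart from UseDynamicTcpPort, the parameter names are assumptions.

```powershell
Configuration SqlAliasDynamicPort
{
    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLAlias NamedInstanceAlias
        {
            Ensure            = 'Present'
            Name              = 'SQLAlias01'
            ServerName        = 'SQLNODE01\INST01'
            Protocol          = 'TCP'
            UseDynamicTcpPort = $true   # "Dynamically determine port"
        }
    }
}
```
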
- Added CHANGELOG.md file.
- Added issue template file (ISSUE\_TEMPLATE.md) for 'New Issue' and pull request
template file (PULL\_REQUEST\_TEMPLATE.md) for 'New Pull Request'.
- Add Contributing.md file.
- Changes to xSQLServerSetup
- Now the `Features` parameter is case-insensitive.
- BREAKING CHANGE: Removed xSQLServerPowerPlan from this module. The resource has
been moved to [ComputerManagementDsc](https://github.com/dsccommunity/ComputerManagementDsc)
and is now called PowerPlan.
- Changes and enhancements in xSQLServerDatabaseRole
- BREAKING CHANGE: Fixed so the same user can now be added to a role in one or
more databases, and/or one or more instances. Now the parameters `SQLServer`
and `SQLInstanceName` are mandatory.
- Enhanced so the same user can now be added to more than one role.
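
A sketch of adding the same user to a role in two databases on the same instance; the Database, Name, and Role parameter names are assumptions.

```powershell
Configuration SameUserTwoDatabases
{
    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLServerDatabaseRole UserInSalesDb
        {
            Ensure          = 'Present'
            SQLServer       = 'SQLNODE01'    # now mandatory
            SQLInstanceName = 'MSSQLSERVER'  # now mandatory
            Database        = 'SalesDb'
            Name            = 'AppUser'
            Role            = 'db_datareader'
        }

        xSQLServerDatabaseRole UserInReportingDb
        {
            Ensure          = 'Present'
            SQLServer       = 'SQLNODE01'
            SQLInstanceName = 'MSSQLSERVER'
            Database        = 'ReportingDb'
            Name            = 'AppUser'
            Role            = 'db_datareader'
        }
    }
}
```
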
- BREAKING CHANGE: Renamed xSQLAlias to xSQLServerAlias to align with naming convention.
- Changes to xSQLServerAlwaysOnService
- Added RestartTimeout parameter
- Fixed bug where the SQL Agent service did not get restarted after the
IsHadrEnabled property was set.
- BREAKING CHANGE: The mandatory parameters now include Ensure, SQLServer, and
SQLInstanceName. SQLServer and SQLInstanceName are keys which will be used to
uniquely identify the resource which allows AlwaysOn to be enabled on multiple
instances on the same machine.
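
A sketch using the new mandatory keys and the RestartTimeout parameter (the timeout unit is assumed to be seconds):

```powershell
Configuration EnableAlwaysOn
{
    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLServerAlwaysOnService EnableHadr
        {
            Ensure          = 'Present'
            SQLServer       = 'SQLNODE01'    # key property
            SQLInstanceName = 'MSSQLSERVER'  # key property
            RestartTimeout  = 120            # assumed to be seconds
        }
    }
}
```
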
- Moved Restart-SqlService from MSFT_xSQLServerConfiguration.psm1 to xSQLServerHelper.psm1.
## [3.0.0.0] - 2016-11-02
### Changed
- xSQLServerHelper
- added functions
- Test-SQLDscParameterState
- Get-SqlDatabaseOwner
- Set-SqlDatabaseOwner
- Examples
- xSQLServerDatabaseOwner
- 1-SetDatabaseOwner.ps1
- Added tests for resources
- MSFT_xSQLServerDatabaseOwner
## [2.0.0.0] - 2016-09-21
### Changed
- Added resources
- xSQLServerReplication
- xSQLServerScript
- xSQLAlias
- xSQLServerRole
- Added tests for resources
- xSQLServerPermission
- xSQLServerEndpointState
- xSQLServerEndpointPermission
- xSQLServerAvailabilityGroupListener
- xSQLServerLogin
- xSQLAOGroupEnsure
- xSQLAlias
- xSQLServerRole
- Fixes in xSQLServerAvailabilityGroupListener
- In one case the Get-method did not report that DHCP was configured.
- Now the resource will throw 'Not supported' when the IP is changed between
  static and DHCP.
- Fixed an issue where sometimes the listener wasn't removed.
- Fixed an issue where trying to add a static IP to a listener was ignored.
- Fix in xSQLServerDatabase
- Fixed so dropping a database no longer throws an error
- BREAKING CHANGE: Fixed an issue where it was not possible to add the same
database to two instances on the same server.
- BREAKING CHANGE: The name of the parameter Database has changed. It is now
called Name.
- Fixes in xSQLAOGroupEnsure
- Added parameters to New-ListenerADObject to allow usage of a named instance.
- Now passes the setup credential correctly.
- Changes to xSQLServerLogin
- Fixed an issue when dropping logins.
- BREAKING CHANGE: Fixed an issue where it was not possible to add the same
login to two instances on the same server.
- Changes to xSQLServerMaxDop
- BREAKING CHANGE: Made the SQLInstance parameter a key so that multiple instances
  on the same server can be configured.
## [1.8.0.0] - 2016-08-10
### Changed
- Converted appveyor.yml to install Pester from PSGallery instead of from Chocolatey.
- Added Support for SQL Server 2016
- xSQLAOGroupEnsure
- Fixed spelling mistake in AutoBackupPreference property
- Added BackupPriority property
- Added resources
- xSQLServerPermission
- xSQLServerEndpointState
- xSQLServerEndpointPermission
- xSQLServerAvailabilityGroupListener
- xSQLServerHelper
- added functions
- Import-SQLPSModule
- Get-SQLPSInstanceName
- Get-SQLPSInstance
- Get-SQLAlwaysOnEndpoint
- modified functions
- New-TerminatingError - *added optional parameter `InnerException` to be able
to give the user more information in the returned message*
## [1.7.0.0] - 2016-06-29
### Changed
- Resources Added
- xSQLServerConfiguration
## [1.6.0.0] - 2016-05-18
### Changed
- Resources Added
- xSQLAOGroupEnsure
- xSQLAOGroupJoin
- xWaitForAvailabilityGroup
- xSQLServerEndPoint
- xSQLServerAlwaysOnService
- xSQLServerHelper
- added functions
- Connect-SQL
- New-VerboseMessage
- Grant-ServerPerms
- Grant-CNOPerms
- New-ListenerADObject
- xSQLDatabaseRecoveryModel
- Updated Verbose statements to use new function New-VerboseMessage
- xSQLServerDatabase
- Updated Verbose statements to use new function New-VerboseMessage
- Removed ConnectSQL function and replaced with new Connect-SQL function
- xSQLServerDatabaseOwner
- Removed ConnectSQL function and replaced with new Connect-SQL function
- xSQLServerDatabasePermissions
- Removed ConnectSQL function and replaced with new Connect-SQL function
- xSQLServerDatabaseRole
- Removed ConnectSQL function and replaced with new Connect-SQL function
- xSQLServerLogin
- Removed ConnectSQL function and replaced with new Connect-SQL function
- xSQLServerMaxDop
- Updated Verbose statements to use new function New-VerboseMessage
- Removed ConnectSQL function and replaced with new Connect-SQL function
- xSQLServerMemory
- Updated Verbose statements to use new function New-VerboseMessage
- Removed ConnectSQL function and replaced with new Connect-SQL function
- xSQLServerPowerPlan
- Updated Verbose statements to use new function New-VerboseMessage
- Examples
- Added xSQLServerConfiguration resource example
## [1.5.0.0] - 2016-03-30
### Changed
- Added new resource xSQLServerDatabase that allows adding an empty database to
a server
## [1.4.0.0] - 2016-02-02
### Changed
- Resources Added
- xSQLDatabaseRecoveryModel
- xSQLServerDatabaseOwner
- xSQLServerDatabasePermissions
- xSQLServerDatabaseRole
- xSQLServerLogin
- xSQLServerMaxDop
- xSQLServerMemory
- xSQLServerPowerPlan
- xSQLServerDatabase
- xSQLServerSetup:
- Corrected bug in GetFirstItemPropertyValue to correctly handle registry keys
with only one value.
- Added support for SQL Server
- 2008 R2 installation
- Removed default values for parameters, to avoid compatibility issues and setup
errors
- Added Replication sub feature detection
- Added setup parameter BrowserSvcStartupType
- Change SourceFolder to Source to allow for multi-version support
- Add Source Credential for accessing source files
- Add Parameters for SQL Server configuration
- Add Parameters to SuppressReboot or ForceReboot
- xSQLServerFirewall
- Removed default values for parameters, to avoid compatibility issues
- Updated firewall rule name to not use the 2012 version, since the package supports
  the 2008, 2012, and 2014 versions
- Addition of SQLHelper function and error handling
- Change SourceFolder to Source to allow for multi-version support
- xSQLServerNetwork
- Added new resource that configures network settings.
- Currently supports only the TCP network protocol
- Allows enabling and disabling the network protocol for a specified instance service
- Allows setting custom or dynamic port values
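
A sketch of the new resource enabling TCP with a static port; all parameter names here are assumptions.

```powershell
Configuration TcpStaticPort
{
    Import-DscResource -ModuleName xSQLServer

    node localhost
    {
        xSQLServerNetwork DefaultInstanceTcp
        {
            InstanceName   = 'MSSQLSERVER'  # assumed parameter name
            ProtocolName   = 'tcp'
            IsEnabled      = $true
            TCPPort        = 1433           # static port; a dynamic port would be configured instead
            RestartService = $true
        }
    }
}
```
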
- xSQLServerRSSecureConnectionLevel
- Addition of SQLHelper function and error handling
- xSQLServerRSConfig
- xSQLServerFailoverClusterSetup
- Addition of SQLHelper function and error handling
- Change SourceFolder to Source to allow for multi-version support
- Add Parameters to SuppressReboot or ForceReboot
- Examples
- Updated example files to use the correct DebugMode parameter value ForceModuleImport;
  this is not a boolean value in WMF 5.0 RTM
- Added xSQLServerNetwork example
## [1.3.0.0] - 2015-05-01
### Changed
- xSQLServerSetup
- Make Features case-insensitive.
## [1.2.1.0] - 2015-04-23
### Changed
- Increased the timeout for the setup process to start to 60 seconds.
## [1.2.0.0] - 2014-12-18
### Changed
- Updated release with the following new resources
- xSQLServerFailoverClusterSetup
- xSQLServerRSConfig
## [1.1.0.0] - 2014-10-24
### Changed
- Initial release with the following resources
- xSQLServerSetup
- xSQLServerFirewall
- xSQLServerRSSecureConnectionLevel
| 55.639319 | 236 | 0.751162 | eng_Latn | 0.908622 |
536c3b6120131d08642d8564a2ddd4bc68535a6f | 10,664 | md | Markdown | es-es/crypt.md | JSHar/docs | edab6b323fa5b15ea1d0aaf2fbe716e59cf595fd | [
"BSD-3-Clause"
] | null | null | null | es-es/crypt.md | JSHar/docs | edab6b323fa5b15ea1d0aaf2fbe716e59cf595fd | [
"BSD-3-Clause"
] | null | null | null | es-es/crypt.md | JSHar/docs | edab6b323fa5b15ea1d0aaf2fbe716e59cf595fd | [
"BSD-3-Clause"
] | null | null | null | ---
layout: default
language: 'es-es'
version: '4.0'
title: 'Crypt'
keywords: 'crypt, encriptación, desencriptación, cifrados'
---
# Componente Crypt
* * *
 
## Resumen
> **NOTE**: Requires PHP's [openssl](https://www.php.net/manual/en/book.openssl.php) extension to be present in the system
{: .alert .alert-info }
>
> **NO** soporta algoritmos inseguros con modos:
>
> `des*`, `rc2*`, `rc4*`, `des*`, `*ecb`
{: .alert .alert-danger }
Phalcon proporciona servicios de encriptación vía componente [Phalcon\Crypt](api/phalcon_crypt#crypt). This class offers simple object-oriented wrappers to the [openssl](https://www.php.net/manual/en/book.openssl.php) PHP's encryption library.
Por defecto, este componente usa el cifrado `AES-256-CFB`.
El cifrado AES-256 se usa, entre otros lugares, en SSL/TLS a través de Internet. Se considera de los mejores cifrados. En teoría no es *crackeable* ya que las combinaciones de claves son masivas. Aunque la NSA lo ha categorizado en [Suite B](https://en.wikipedia.org/wiki/NSA_Suite_B_Cryptography), también han recomendado usar claves más grandes de 128-bit para encriptación.
> **NOTA**: Debe usar un tamaño de clave correspondiente al algoritmo actual. Para el algoritmo predeterminado `aes-256-cfb` el tamaño de clave predeterminado es de 32 bytes.
{: .alert .alert-warning }
## Uso básico
Este componente se ha diseñado para ser muy simple de usar:
```php
<?php
use Phalcon\Crypt;
$key = "12345"; // Your luggage combination
$crypt = new Crypt();
$text = 'This is the text that you want to encrypt.';
$encrypted = $crypt->encrypt($text, $key);
echo $crypt->decrypt($encrypted, $key);
```
Si no se pasan parámetros en el constructor, el componente usará el cifrado `aes-256-cfb` con la firma por defecto. Siempre puede cambiar el cifrado así como desactivar al firma.
```php
<?php
use Phalcon\Crypt;
$key = "12345"; // Your luggage combination
$crypt = new Crypt();
$crypt
->setCipher('aes256')
->useSigning(false)
;
$text = 'This is the text that you want to encrypt.';
$encrypted = $crypt->encrypt($text, $key);
echo $crypt->decrypt($encrypted, $key);
```
## Encriptar
El método `encrypt()` encripta una cadena. El componente usará el cifrado establecido previamente, que se ha establecido en el constructor o explícitamente. Si no se pasa `key` en el parámetro, se usará la clave previamente configurada.
```php
<?php
use Phalcon\Crypt;
$key = "12345"; // Your luggage combination
$crypt = new Crypt();
$crypt->setKey($key);
$text = 'This is the text that you want to encrypt.';
$encrypted = $crypt->encrypt($text);
```
o usando la clave como segundo parámetro
```php
<?php
use Phalcon\Crypt;
$key = "12345"; // Your luggage combination
$crypt = new Crypt();
$text = 'This is the text that you want to encrypt.';
$encrypted = $crypt->encrypt($text, $key);
```
El método también usará internamente la firma por defecto. Siempre puede usar `useSigning(false)` antes de la llamada al método para deshabilitarla.
> **NOTA: Si elige cifrados relativos a `ccm` o `gcm`, debe también proporcionar `authData` para ellos. De lo contrario se lanzará una excepción.
{: .alert .alert-warning }
## Desencriptar
El método `decrypt()` desencripta una cadena. Similar a `encrypt()` el componente usará el cifrado previamente configurado, que puede haber sido establecido en el constructor o explícitamente. Si no se pasa `key` en el parámetro, se usará la clave previamente configurada.
```php
<?php
use Phalcon\Crypt;
$key = "12345"; // Your luggage combination
$crypt = new Crypt();
$crypt->setKey($key);
$text = 'T4\xb1\x8d\xa9\x98\x05\\\x8c\xbe\x1d\x07&[\x99\x18\xa4~Lc1\xbeW\xb3';
$encrypted = $crypt->decrypt($text);
```
o usando la clave como segundo parámetro
```php
<?php
use Phalcon\Crypt;
$key = "12345"; // Your luggage combination
$crypt = new Crypt();
$crypt->setKey($key);
$text = 'T4\xb1\x8d\xa9\x98\x05\\\x8c\xbe\x1d\x07&[\x99\x18\xa4~Lc1\xbeW\xb3';
$encrypted = $crypt->decrypt($text, $key);
```
El método también usará internamente la firma por defecto. Siempre puede usar `useSigning(false)` antes de la llamada al método para deshabilitarla.
## Encriptar en Base64
Se puede usar `encryptBase64()` para encriptar una cadena de una manera amigable con URL. Internamente usa `encrypt()` y acepta `text` y opcionalmente la `key` del elemento a encriptar. También hay un tercer parámetro `safe` (por defecto `false`) que realizará sustituciones de cadena para los caracteres no *amigables* en URL como `+` o `/`.
## Desencriptar en Base64
Se puede usar `decryptBase64()` para desencriptar una cadena de una manera amigable con URL. De forma similar a `encryptBase64()` usa `decrypt()` internamente y acepta el `text` y opcionalmente la `key` del elemento a desencriptar. También hay un tercer parámetro `safe` (por defecto `false`) que realizará sustituciones de cadena para los caracteres no *amigables* en URL previamente reemplazados como `+` o `/`.
## Excepciones
Las excepciones lanzadas en el componente [Phalcon\Crypt](api/phalcon_crypt#crypt) serán del tipo \[Phalcon\Crypt\Exception\]\[config-exception\]. Sin embargo, si está usando firma y el *hash* calculado para `decrypt()` no coincide, se lanzará [Phalcon\Crypt\Mismatch](api/phalcon_crypt#crypt-mismatch). Puede usar estas excepciones para capturar selectivamente sólo las excepciones lanzadas desde este componente.
```php
<?php
use Phalcon\Crypt\Mismatch;
use Phalcon\Mvc\Controller;
class IndexController extends Controller
{
public function index()
{
try {
// Get some configuration values
$this->crypt->decrypt('hello');
} catch (Mismatch $ex) {
echo $ex->getMessage();
}
}
}
```
## Funcionalidad
### Cifrados
`getCipher()` devuelve el cifrado seleccionado actualmente. Si no se ha definido ninguno explícitamente mediante `setCipher()` o el constructor del objeto se seleccionará `aes-256-cfb` por defecto. `aes-256-gcm` es el cifrado preferido.
Siempre puede obtener un vector con todos los cifrados disponibles en su sistema llamando a `getAvailableCiphers()`.
### Algoritmo Hash
`getHashAlgo()` devuelve el algoritmo de *hash* que usa el componente. Si no se ha definido ninguno explícitamente mediante `setHashAlgo()` se usará `sha256`. Si no está disponible en el sistema el algoritmo de *hash* definido o es incorrecto, se lanzará \[Phalcon\Crypt\Exception\]\[crypt=exception\].
Siempre puede obtener un vector con todos los algoritmos de *hash* disponibles en su sistema llamando a `getAvailableHashAlgos()`.
### Claves
El componente ofrece un *getter* y un *setter* para la clave a usar. Una vez configurada la clave, se usará para cualquier operación de encriptado o desencriptado (siempre que no se defina el parámetro `key` cuando use estos métodos).
* `getKey()`: Devuelve la clave de encriptación.
* `setKey()` Establece la clave de encriptación.
> Siempre debería crear las claves lo más seguras posible. `12345` podría ser buena para su combinación de equipaje, o `password1` para su email, pero para su aplicación debería intentar algo mucho más complejo. Cuanto más larga y más aleatoria sea la clave, mejor. Por supuesto, el tamaño depende del cifrado elegido.
>
> Varios servicios online pueden generar un texto aleatorio y fuerte que se puede usar como clave. Alternativamente, siempre puede usar los métodos `hash()` del componente [Phalcon\Security](security), que pueden ofrecer una clave fuerte al hacer *hash* de una cadena.
{: .alert .alert-danger }
### Firma
Para indicar al componente que use la firma o no, está disponible `useSigning`. Acepta un booleano que establece un parámetro internamente, que indica si la firma se debe usar o no.
### Datos de Autenticación
Si el cifrado seleccionado es del tipo `gcm` o `ccm` (como termina el nombre del cifrado), se necesitan datos de autenticación para el componente para encriptar y desencriptar correctamente los datos. Los métodos disponibles para esa operación son:
* `setAuthTag()`
* `setAuthData()`
* `setAuthTagLength()` - por defecto `16`
### Relleno
Puede establecer el relleno a usar por el componente usando `setPadding()`. Por defecto, el componente usará `PADDING_DEFAULT`. Las constantes de rellenos disponibles son:
* `PADDING_ANSI_X_923`
* `PADDING_DEFAULT`
* `PADDING_ISO_10126`
* `PADDING_ISO_IEC_7816_4`
* `PADDING_PKCS7`
* `PADDING_SPACE`
* `PADDING_ZERO`
## Inyección de Dependencias
Como en la mayoría de componentes Phalcon, puede almacenar el objeto [Phalcon\Crypt](api/phalcon_crypt#crypt) en su contenedor [Phalcon\Di](di). Al hacerlo, podrá acceder a su objeto de configuración desde controladores, modelos, vistas y cualquier componente que implemente `Injectable`.
A continuación, un ejemplo de registro del servicio así como de acceso a él:
```php
<?php
use Phalcon\Di\FactoryDefault;
use Phalcon\Crypt;
// Create a container
$container = new FactoryDefault();
$container->set(
'crypt',
function () {
$crypt = new Crypt();
// Set a global encryption key
$crypt->setKey(
"T4\xb1\x8d\xa9\x98\x05\\\x8c\xbe\x1d\x07&[\x99\x18\xa4~Lc1\xbeW\xb3"
);
return $crypt;
},
true
);
```
El componente está ahora disponible en sus controladores usando la clave `crypt`
```php
<?php
use MyApp\Models\Secrets;
use Phalcon\Crypt;
use Phalcon\Http\Request;
use Phalcon\Mvc\Controller;
/**
* @property Crypt $crypt
* @property Request $request
*/
class SecretsController extends Controller
{
public function saveAction()
{
$secret = new Secrets();
$text = $this->request->getPost('text');
$secret->content = $this->crypt->encrypt($text);
if ($secret->save()) {
$this->flash->success(
'Secret was successfully created!'
);
}
}
}
```
## Enlaces
* [Estándar de Encriptación Avanzado (AES)](https://es.wikipedia.org/wiki/Advanced_Encryption_Standard)
* [Qué es un cifrado de bloque](https://es.wikipedia.org/wiki/Cifrado_por_bloques)
* [Introducción a Blowfish](https://www.splashdata.com/splashid/blowfish.htm)
* [Modo de Encriptación CTR](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.79.1353&rep=rep1&type=pdf)
* [Recomendación para Modos Cifrado de Bloque de Operación: Métodos y Técnicas](https://csrc.nist.gov/publications/detail/sp/800-38a/final)
* [Modo contador (CTR)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_.28CTR.29) | 36.027027 | 414 | 0.72365 | spa_Latn | 0.950525 |
536c752e9200b894890dfab37c01d786cc8d07f3 | 43,214 | md | Markdown | articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md | jmartens/azure-docs.nl-nl-1 | a38978ea36c628c203be597133a734cf250f0065 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md | jmartens/azure-docs.nl-nl-1 | a38978ea36c628c203be597133a734cf250f0065 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md | jmartens/azure-docs.nl-nl-1 | a38978ea36c628c203be597133a734cf250f0065 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Referentie architecturen voor Oracle-data bases op Azure | Microsoft Docs
description: Verwijst naar architecturen voor het uitvoeren van Oracle Database Enterprise Edition-data bases op Microsoft Azure Virtual Machines.
author: dbakevlar
ms.service: virtual-machines-linux
ms.subservice: workloads
ms.topic: article
ms.date: 12/13/2019
ms.author: kegorman
ms.reviewer: cynthn
ms.openlocfilehash: 83da8cbf3a87570cfb967e0a6c8da3f0f2ed1766
ms.sourcegitcommit: d60976768dec91724d94430fb6fc9498fdc1db37
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 12/02/2020
ms.locfileid: "96486739"
---
# <a name="reference-architectures-for-oracle-database-enterprise-edition-on-azure"></a>Referentie architecturen voor Oracle Database Enterprise Edition op Azure
Deze hand leiding bevat informatie over het implementeren van een Maxi maal beschik bare Oracle-data base in Azure. Daarnaast is deze hand leiding Dives in geval van nood herstel. Deze architecturen zijn gemaakt op basis van klant implementaties. Deze hand leiding is alleen van toepassing op Oracle Database Enterprise Edition.
Zie [architect a Oracle DB](oracle-design.md)als u meer wilt weten over het maximaliseren van de prestaties van uw Oracle-data base.
## <a name="assumptions"></a>Aannames
- U hebt een goed idee van de verschillende concepten van Azure, zoals [beschikbaarheids zones](../../../availability-zones/az-overview.md)
- U werkt Oracle Database Enterprise Edition 12c of hoger
- U bent op de hoogte en erkent de implicaties van de licenties bij het gebruik van de oplossingen in dit artikel
## <a name="high-availability-for-oracle-databases"></a>Hoge Beschik baarheid voor Oracle-data bases
Het bereiken van hoge Beschik baarheid in de Cloud is een belang rijk onderdeel van de planning en het ontwerp van elke organisatie. Microsoft Azure biedt [beschikbaarheids zones](../../../availability-zones/az-overview.md) en beschikbaarheids sets (worden gebruikt in regio's waar beschikbaarheids zones niet beschikbaar zijn). Lees meer over het [beheren van de beschik baarheid van uw virtuele machines](../../manage-availability.md) om te ontwerpen voor de Cloud.
Naast de Cloud-systeem eigen hulpprogram ma's en aanbiedingen biedt Oracle oplossingen voor hoge Beschik baarheid, zoals [Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/18/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7), [Data Guard met FSFO](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/index.html), [sharding](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/sharding-overview.html)en [Golden Gate](https://www.oracle.com/middleware/technologies/goldengate.html) die op Azure kunnen worden ingesteld. Deze hand leiding bevat referentie architecturen voor elk van deze oplossingen.
Ten slotte is het belang rijk dat u bij het migreren of maken van toepassingen voor de Cloud de code van uw toepassing verfijnt om Cloud-eigen patronen toe te voegen, zoals het [patroon voor opnieuw proberen](/azure/architecture/patterns/retry) en de [circuit onderbreker](/azure/architecture/patterns/circuit-breaker). Aanvullende patronen die zijn gedefinieerd in de [hand leiding voor het ontwerpen van Clouds](/azure/architecture/patterns/) , kunnen u helpen bij het verbeteren van de toepassing.
### <a name="oracle-rac-in-the-cloud"></a>Oracle RAC in de Cloud
Oracle Real Application Cluster (RAC) is een oplossing van Oracle waarmee klanten hoge door Voer kunnen bereiken door veel instanties die toegang hebben tot één database opslag (gedeeld patroon van alle architectuur). Hoewel Oracle RAC ook on-premises kan worden gebruikt voor hoge Beschik baarheid, kan Oracle RAC alleen worden gebruikt voor hoge Beschik baarheid in de Cloud omdat het alleen bescherming biedt tegen storingen op exemplaar niveau en niet tegen storingen op het niveau van het rek of het Data Center. Daarom raadt Oracle aan Oracle Data Guard te gebruiken met uw data base (of één exemplaar of RAC) voor hoge Beschik baarheid. Klanten hebben doorgaans een hoge SLA nodig voor het uitvoeren van hun essentiële toepassingen. Oracle RAC is momenteel niet gecertificeerd of wordt niet ondersteund door Oracle in Azure. Azure biedt echter functies als Azure biedt Beschikbaarheidszones en geplande onderhouds Vensters om te helpen beschermen tegen storingen op exemplaar niveau. Klanten kunnen daarnaast gebruikmaken van technologieën als Oracle Data Guard, Oracle Golden Gate en Oracle sharding voor hoge prestaties en meer flexibiliteit door hun data bases te beschermen tegen rack niveau en op datacenter niveau en geo-politieke fouten.
Bij het uitvoeren van Oracle-data bases in meerdere [beschikbaarheids zones](../../../availability-zones/az-overview.md) in combi natie met Oracle Data Guard of Golden Gate, kunnen klanten een sla voor de uptime van 99,99% ophalen. In azure-regio's waar beschikbaarheids zones nog niet aanwezig zijn, kunnen klanten [beschikbaarheids sets](../../manage-availability.md#configure-multiple-virtual-machines-in-an-availability-set-for-redundancy) gebruiken en een sla voor de uptime van 99,95% verzorgen.
>Opmerking: u kunt een doel voor de uptime hebben die veel hoger is dan de SLA voor de uptime van micro soft.
## <a name="disaster-recovery-for-oracle-databases"></a>Herstel na nood geval voor Oracle-data bases
Wanneer u uw essentiële toepassingen in de Cloud host, is het belang rijk om te ontwerpen voor hoge Beschik baarheid en herstel na nood gevallen.
Voor Oracle Database Enterprise Edition is Oracle Data Guard een handige functie voor herstel na nood gevallen. U kunt een standby-data base-exemplaar instellen in een [gekoppelde Azure-regio](../../../best-practices-availability-paired-regions.md) en failover voor gegevens beveiliging instellen voor herstel na nood gevallen. Als er geen gegevens verloren gaan, is het raadzaam om een Oracle Data Guard Far Sync-exemplaar te implementeren naast actieve Data Guard.
U kunt overwegen om het Data Guard Far Sync-exemplaar in te stellen in een andere beschikbaarheids zone dan uw primaire Oracle-Data Base als uw toepassing de latentie toestaat (grondige tests zijn vereist). Gebruik een **maximale beschikbaarheids** modus voor het instellen van het synchrone Trans Port van uw opnieuw uitgevoerde bestanden naar de meest linkse Sync-instantie. Deze bestanden worden vervolgens asynchroon overgedragen naar de standby-data base.
Als uw toepassing het prestatie verlies niet toestaat bij het instellen van een far Sync-exemplaar in een andere beschikbaarheids zone in de **maximale beschikbaarheids** modus (synchroon), kunt u een uiterst Sync-exemplaar instellen in dezelfde beschikbaarheids zone als uw primaire data base. Voor extra Beschik baarheid kunt u instellen dat er meerdere Sync-exemplaren worden gesloten die dicht bij de primaire data base liggen en ten minste één instantie dicht bij uw standby-data base (als de functie wordt overgezet). Meer informatie over de synchronisatie van Oracle Data Guard uiterst in deze [Oracle Active Data Guard Far Sync-technisch](https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf)document.
Wanneer u Oracle Standard Edition-data bases gebruikt, zijn er ISV-oplossingen, zoals DBVisit standby, waarmee u hoge Beschik baarheid en herstel na nood gevallen kunt instellen.
## <a name="reference-architectures"></a>Referentiearchitecturen
### <a name="oracle-data-guard"></a>Oracle Data Guard
Oracle Data Guard zorgt voor hoge Beschik baarheid, gegevens bescherming en herstel na nood geval voor bedrijfs gegevens. Data Guard houdt stand-by-data bases als transactioneel consistente kopieën van de primaire data base. Afhankelijk van de afstand tussen de primaire en secundaire data bases en de toepassings tolerantie voor latentie, kunt u synchrone of asynchrone replicatie instellen. Als de primaire data base niet beschikbaar is vanwege een geplande of niet-geplande onderbreking, kan met Data Guard elke stand-by-Data Base worden overgeschakeld naar de primaire rol, waarbij de downtime wordt geminimaliseerd.
Wanneer u Oracle Data Guard gebruikt, kunt u de secundaire data base ook openen voor alleen-lezen. Deze configuratie heet actieve Data Guard. Oracle Database 12c heeft een functie geïntroduceerd met de naam Data Guard uiterst Sync-exemplaar. Met dit exemplaar kunt u een configuratie met een gegevens verlies van nul instellen voor uw Oracle-data base zonder dat dit van invloed is op de prestaties.
> [!NOTE]
> Actieve Data Guard vereist extra licenties. Deze licentie is ook vereist voor het gebruik van de Far Sync-functie. Neem contact op met uw Oracle-vertegenwoordiger om de implicaties van de licenties te bespreken.
#### <a name="oracle-data-guard-with-fsfo"></a>Oracle Data Guard met FSFO
Oracle Data Guard met Fast-Start failover (FSFO) kan extra flexibiliteit bieden door de broker in te stellen op een afzonderlijke machine. De Data Guard Broker en de secundaire data base voeren de waarnemer uit en bekijken de primaire Data Base voor uitval tijd. Hierdoor kunt u ook de installatie van uw Data Guard observeren.
Met Oracle Database versie 12,2 en hoger is het ook mogelijk meerdere waarnemers te configureren met één Oracle Data Guard Broker-configuratie. Deze installatie biedt extra Beschik baarheid, in het geval van één waarnemer en de secundaire data base Ervaar tijd. Data Guard Broker is licht gewicht en kan worden gehost op een relatief kleine virtuele machine. Raadpleeg de [Oracle-documentatie](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/oracle-data-guard-broker-concepts.html) in dit onderwerp voor meer informatie over Data Guard Broker en de voor delen ervan.
Het volgende diagram is een aanbevolen architectuur voor het gebruik van Oracle Data Guard op Azure met beschikbaarheids zones. Met deze architectuur kunt u een SLA voor de VM-uptime van 99,99% ophalen.

In het voor gaande diagram opent het client systeem een aangepaste toepassing met Oracle-back-end via het web. De web-front-end is geconfigureerd in een load balancer. Met de web-frontend wordt een aanroep uitgevoerd naar de juiste toepassings server om het werk af te handelen. De toepassings server voert een query uit op de primaire Oracle-data base. De Oracle-data base is geconfigureerd met een [virtuele machine](../../sizes-memory.md) met hyperthreaded geoptimaliseerd voor geheugen met [beperkte kern vcpu's](../../../virtual-machines/constrained-vcpu.md) om de licentie kosten op te slaan en de prestaties te maximaliseren. Er worden meerdere Premium-of Ultra schijven (Managed Disks) gebruikt voor prestaties en hoge Beschik baarheid.
De Oracle-data bases worden in meerdere beschikbaarheids zones geplaatst voor hoge Beschik baarheid. Elke zone bestaat uit een of meer data centers die zijn uitgerust met onafhankelijke voeding, koeling en netwerken. Ten minste drie afzonderlijke zones worden in alle ingeschakelde regio's ingesteld om tolerantie te garanderen. De fysieke schei ding van beschikbaarheids zones binnen een regio beveiligt de gegevens van fouten in data centers. Daarnaast zijn twee FSFO-waarnemers in twee beschikbaarheids zones ingesteld voor het initiëren en failoveren van de Data Base naar de secundaire als er een storing optreedt.
U kunt extra waarnemers-en/of standby-data bases in een andere beschikbaarheids zone (AZ 1, in dit geval) instellen dan de zone die in de voor gaande architectuur wordt weer gegeven. Ten slotte worden Oracle-data bases bewaakt voor uptime en prestaties door Oracle Enter prise Manager (OEM). Met OEM kunt u ook verschillende prestatie-en gebruiks rapporten genereren.
In regio's waar beschikbaarheids zones niet worden ondersteund, kunt u beschikbaarheids sets gebruiken om uw Oracle Database op een Maxi maal beschik bare manier te implementeren. Met beschikbaarheids sets kunt u de uptime van een VM van 99,95% verzorgen. Het volgende diagram is een referentie architectuur van dit gebruik:

> [!NOTE]
> * De Oracle Enter prise Manager-VM hoeft niet in een beschikbaarheidsset te worden geplaatst, omdat er maar één exemplaar van de OEM wordt geïmplementeerd.
> * Ultra disks wordt momenteel niet ondersteund in een configuratie van een beschikbaarheidsset.
#### <a name="oracle-data-guard-far-sync"></a>Uiterst synchronisatie van Oracle Data Guard
De meest synchronisatie van Oracle Data Guard biedt een beschermings functie voor gegevens verlies voor Oracle-data bases. Met deze mogelijkheid kunt u zich beschermen tegen gegevens verlies in als uw database machine uitvalt. De meest synchronisatie van Oracle Data Guard moet worden geïnstalleerd op een afzonderlijke virtuele machine. Far Sync is een licht gewicht Oracle-exemplaar dat alleen een besturings bestand, een wachtwoord bestand, SPFile en stand-by-logboeken heeft. Er zijn geen gegevens bestanden of Rego-logboek bestanden.
Voor de beveiliging van gegevens verlies moet er sprake zijn van synchrone communicatie tussen uw primaire data base en het Far Sync-exemplaar. Het Sync-exemplaar dat het meest synchroon wordt uitgevoerd, wordt op synchrone wijze opnieuw van de primaire en doorgestuurd naar alle data bases met de stand-by-modus op asynchrone wijze. Deze instelling vermindert ook de overhead van de primaire data base, omdat alleen de bewerking opnieuw moet worden verzonden naar het Far Sync-exemplaar in plaats van alle data bases met de stand-by. Als er sprake is van een far Sync-exemplaar, gebruikt Data Guard automatisch een asynchroon Trans Port naar de secundaire data base van de primaire data base om bijna geen gegevens verlies beveiliging te onderhouden. Voor extra tolerantie kunnen klanten meerdere Far Sync-instanties implementeren per data base-exemplaar (primaire en secundaire).
Het volgende diagram is een architectuur met hoge Beschik baarheid met een ver-synchronisatie van Oracle Data Guard:

In de voor gaande architectuur is er een aanzienlijk synchronisatie-exemplaar geïmplementeerd in dezelfde beschikbaarheids zone als de data base-instantie om de latentie tussen de twee te verminderen. In gevallen waarin de toepassing latentie gevoelig is, kunt u overwegen om uw data base en de meest gesynchroniseerde instantie of instanties te implementeren in een [proximity-plaatsings groep](../../../virtual-machines/linux/proximity-placement-groups.md).
Het volgende diagram is een architectuur die gebruikmaakt van Oracle Data Guard FSFO en ver Sync om hoge Beschik baarheid en herstel na nood gevallen te garanderen:

### <a name="oracle-goldengate"></a>Oracle GoldenGate
Golden Gate maakt het mogelijk om gegevens op transactie niveau uit te wisselen en te bewerken via meerdere heterogene platformen in de hele onderneming. Er worden doorgevoerde trans acties met trans actie-integriteit en minimale overhead op uw bestaande infra structuur verplaatst. Dankzij de modulaire architectuur hebt u de flexibiliteit om geselecteerde gegevens records, transactionele wijzigingen en wijzigingen in DDL (Data Definition Language) uit verschillende topologieën op te halen en te repliceren.
Met Oracle Golden Gate kunt u uw data base configureren voor maximale Beschik baarheid door bidirectionele replicatie te bieden. Hiermee kunt u een configuratie met **meerdere masters** of **actief-actief** instellen. Het volgende diagram is een aanbevolen architectuur voor Oracle Golden Gate Active-Active-Setup op Azure. In de volgende architectuur is de Oracle-data base geconfigureerd met behulp van een [virtuele machine](../../sizes-memory.md) met hyperthreaded geoptimaliseerd voor geheugen met [beperkte kern vcpu's](../../../virtual-machines/constrained-vcpu.md) om de licentie kosten op te slaan en de prestaties te maximaliseren. Er worden meerdere Premium-of Ultra schijven (Managed disks) gebruikt voor prestaties en beschik baarheid.

> [!NOTE]
> Een vergelijk bare architectuur kan worden ingesteld met behulp van beschikbaarheids sets in regio's waar beschikbaarheids zones momenteel niet beschikbaar zijn.
Oracle Golden Gate heeft processen, zoals extract, pomp en replicatie, waarmee u uw gegevens asynchroon kunt repliceren van de ene Oracle-database server naar een andere. Met deze processen kunt u een bidirectionele replicatie instellen om te zorgen voor een hoge Beschik baarheid van uw data base als er sprake is van downtime van beschikbaarheids zone. In het voor gaande diagram wordt het uitpak proces uitgevoerd op dezelfde server als uw Oracle-data base, terwijl de gegevens pomp en de replicatie processen worden uitgevoerd op een afzonderlijke server in dezelfde beschikbaarheids zone. Het replicatie proces wordt gebruikt om gegevens van de data base in de andere beschikbaarheids zone te ontvangen en de gegevens op te slaan in de Oracle-data base in de beschikbaarheids zone. Op dezelfde manier verzendt het gegevens pomp proces gegevens die door het uitpak proces zijn geëxtraheerd naar het replicatie proces in de andere beschikbaarheids zone.
Hoewel in het voor gaande architectuur diagram het proces voor gegevens pomp en replicatie wordt weer gegeven dat op een afzonderlijke server is geconfigureerd, kunt u alle Oracle Golden Gate-processen op dezelfde server instellen, op basis van de capaciteit en het gebruik van uw server. Raadpleeg altijd uw AWR-rapport en de metrische gegevens in azure om inzicht te krijgen in het gebruiks patroon van uw server.
Bij het instellen van Oracle Golden Gate bidirectionele replicatie in verschillende beschikbaarheids zones of verschillende regio's, is het belang rijk om ervoor te zorgen dat de latentie tussen de verschillende onderdelen acceptabel is voor uw toepassing. De latentie tussen beschikbaarheids zones en regio's kan variëren en is afhankelijk van meerdere factoren. Het is raadzaam om prestatie tests tussen uw toepassingslaag en uw database laag in verschillende beschikbaarheids zones en/of regio's in te stellen om te bevestigen dat deze voldoen aan de prestatie vereisten van uw toepassing.
De toepassingslaag kan worden ingesteld in een eigen subnet en de gegevenslaag kan worden gescheiden in een eigen subnet. Overweeg, indien mogelijk, om [Azure-toepassing gateway](../../../application-gateway/overview.md) te gebruiken om verkeer tussen uw toepassings servers te verdelen. Azure-toepassing gateway is een robuust webverkeer load balancer. Het biedt sessie affiniteit op basis van cookies die een gebruikers sessie op dezelfde server houdt, waardoor de conflicten voor de Data Base worden geminimaliseerd. Alternatieven voor Application Gateway zijn [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md) en [Azure Traffic Manager](../../../traffic-manager/traffic-manager-overview.md).
### <a name="oracle-sharding"></a>Oracle-sharding
Sharding is een gegevenslaag patroon dat is geïntroduceerd in Oracle 12,2. Zo kunt u uw gegevens Horizon taal partitioneren en schalen op onafhankelijke data bases. Het is een deel-niets-architectuur waarbij elke Data Base wordt gehost op een specifieke virtuele machine, waardoor een hoge lees-en schrijf doorvoer mogelijk is naast tolerantie en verhoogde Beschik baarheid. Met dit patroon worden individuele storings punten geëlimineerd, wordt fout isolatie geboden en worden rolling upgrades zonder uitval tijd ingeschakeld. De downtime van één Shard of een fout op het niveau van een Data Center heeft geen invloed op de prestaties of Beschik baarheid van de andere Shards in andere data centers.
Sharding is geschikt voor OLTP-toepassingen met een hoge door Voer die geen uitval tijd kunnen veroorloven. Alle rijen met dezelfde sharding-sleutel zijn altijd gegarandeerd op dezelfde Shard, waardoor de prestaties worden verbeterd door de hoge consistentie te bieden. Toepassingen die gebruikmaken van sharding, moeten beschikken over een goed gedefinieerd gegevens model en een strategie voor gegevens distributie (consistente hash, bereik, lijst of samengestelde waarde) die primair toegang heeft tot gegevens met behulp van een sharding-sleutel (bijvoorbeeld *KlantId* of *accountNum*). Sharding biedt u ook de mogelijkheid om bepaalde gegevens sets dichter bij de eind gebruikers op te slaan, zodat u kunt voldoen aan de vereisten voor prestaties en naleving.
U wordt aangeraden uw Shards te repliceren voor hoge Beschik baarheid en herstel na nood gevallen. Deze installatie kan worden uitgevoerd met behulp van Oracle-technologieën zoals Oracle Data Guard of Oracle Golden Gate. Een replicatie-eenheid kan een Shard, een deel van een Shard of een groep Shards zijn. De beschik baarheid van een Shard-data base wordt niet beïnvloed door een storing of vertraging van een of meer Shards. Voor maximale Beschik baarheid kan de stand-Shards in dezelfde beschikbaarheids zone worden geplaatst waar de primaire Shards worden geplaatst. Voor herstel na nood gevallen kan de stand-Shards in een andere regio worden gevonden. U kunt ook Shards implementeren in meerdere regio's om verkeer in die regio's te verwerken. Lees meer over het configureren van hoge Beschik baarheid en replicatie van uw Shard-data base in [Oracle sharding-documentatie](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-high-availability.html).
Oracle sharding bestaat voornamelijk uit de volgende onderdelen. Meer informatie over deze onderdelen vindt u in de [documentatie van Oracle sharding](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html):
- **Shard Catalog** : de Oracle-Data Base voor speciale doel einden die een permanente opslag is voor alle configuratie gegevens van de Shard-data base. Alle configuratie wijzigingen, zoals het toevoegen of verwijderen van Shards, het toewijzen van de gegevens en het DDLs in een Shard-data base, worden gestart op de Shard catalogus. De Shard-catalogus bevat ook de hoofd kopie van alle gedupliceerde tabellen in een SDB.
De catalogus Shard maakt gebruik van gerealiseerde weer gaven voor het automatisch repliceren van wijzigingen in gedupliceerde tabellen in alle Shards. De catalogus database Shard fungeert ook als een query coördinator die wordt gebruikt voor het verwerken van query's en query's met meerdere Shard die geen sharding-sleutel opgeven.
Het gebruik van Oracle Data Guard in combi natie met beschikbaarheids zones of beschikbaarheids sets voor Shard-catalogus hoge Beschik baarheid is een aanbevolen best practice. De beschik baarheid van de Shard-catalogus heeft geen invloed op de beschik baarheid van de Shard-data base. Een downtime in de Shard-catalogus heeft alleen invloed op onderhouds bewerkingen en multishard query's gedurende de korte periode dat de Data Guard-failover is voltooid. Online transacties worden door de SDB gerouteerd en uitgevoerd en worden niet beïnvloed door een storing in de catalogus.
- **Shard-directeurs** -Lightweight services die moeten worden geïmplementeerd in elke regio/beschikbaarheids zone waarin uw Shards zich bevinden. Shard-Directors zijn wereld wijde service managers die zijn geïmplementeerd in de context van Oracle sharding. Voor maximale Beschik baarheid kunt u het beste ten minste één Shard-Director implementeren in elke beschikbaarheids zone waarin uw Shards zich bevinden.
Wanneer u in eerste instantie verbinding maakt met de data base, worden de routerings gegevens ingesteld door een Shard-Director en worden deze in de cache opgeslagen voor volgende aanvragen, waarbij de Shard Director wordt overgeslagen. Zodra de sessie tot stand is gebracht met een Shard, worden alle SQL-query's en Dml's ondersteund en uitgevoerd binnen het bereik van de opgegeven Shard. Deze route ring is snel en wordt gebruikt voor alle OLTP-workloads die trans acties binnen de Shard uitvoeren. Het is raadzaam om direct route ring te gebruiken voor alle OLTP-workloads die de hoogste prestaties en beschik baarheid vereisen. De routerings cache wordt automatisch vernieuwd wanneer een Shard niet meer beschikbaar is of wanneer er wijzigingen optreden in de sharding-topologie.
Voor hoogwaardige gegevensgestuurde route ring, raadt Oracle aan een verbindings groep te gebruiken bij het openen van gegevens in de Shard-data base. Oracle-verbindings Pools, taalspecifieke bibliotheken en stuur Programma's ondersteunen Oracle sharding. Raadpleeg de [documentatie van Oracle sharding](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html#GUID-3D41F762-BE04-486D-8018-C7A210D809F9) voor meer informatie.
- **Globale service** : de wereld wijde service is vergelijkbaar met de normale database service. Naast alle eigenschappen van een database service heeft een globale service eigenschappen voor Shard-data bases, zoals regio affiniteit tussen clients en Shard en tolerantie voor replicatie vertraging. Er moet slechts één globale service worden gemaakt om gegevens van/naar een Shard-data base te lezen/schrijven. Bij het gebruik van Active Data Guard en het instellen van alleen-lezen replica's van de Shards, kunt u een andere gGobal-service maken voor alleen-lezen workloads. De client kan deze globale Services gebruiken om verbinding te maken met de data base.
- **Shard-data** bases-Shard-data bases zijn uw Oracle-data bases. Elke Data Base wordt met behulp van Oracle Data Guard gerepliceerd in een Broker-configuratie waarvoor Fast-Start failover (FSFO) is ingeschakeld. U hoeft geen Data Guard-failover en replicatie in te stellen op elke Shard. Dit wordt automatisch geconfigureerd en geïmplementeerd wanneer de gedeelde data base wordt gemaakt. Als een bepaalde Shard mislukt, wordt het delen van Oracle automatisch uitgevoerd via database verbindingen van de primaire naar de stand-by.
U kunt Oracle Shard-data bases met twee interfaces implementeren en beheren: Oracle Enter prise Manager-interface voor Cloud beheer en/of het `GDSCTL` opdracht regel programma. U kunt de verschillende Shards zelfs controleren op Beschik baarheid en prestaties met behulp van Cloud beheer. `GDSCTL DEPLOY`Met deze opdracht worden automatisch de Shards en hun respectieve listeners gemaakt. Bovendien implementeert deze opdracht automatisch de replicatie Configuratie die wordt gebruikt voor Shard hoge Beschik baarheid die is opgegeven door de beheerder.
Er zijn verschillende manieren om een Data Base te Shard:
* Door het systeem beheerde sharding: distribueert automatisch over Shards met behulp van partitionering
* Door de gebruiker gedefinieerde sharding: Hiermee kunt u de toewijzing van de gegevens opgeven aan de Shards, wat goed werkt wanneer er sprake is van regelgevende of gegevensgestuurde vereisten)
* Samengestelde sharding: een combi natie van door het systeem beheerde en door de gebruiker gedefinieerde sharding voor verschillende _shardspaces_
* Tabel subpartitionen: vergelijkbaar met een reguliere gepartitioneerde tabel.
Meer informatie over de verschillende [sharding-methoden](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-methods.html) vindt u in de documentatie van Oracle.
Hoewel een Shard-data base eruit kan zien als één Data Base voor toepassingen en ontwikkel aars, moet er zorgvuldig worden gepland om te bepalen welke tabellen worden gedupliceerd versus Shard.
Gedupliceerde tabellen worden opgeslagen op alle Shards, terwijl Shard-tabellen worden gedistribueerd over verschillende Shards. De aanbeveling is om kleine en dimensionale tabellen te dupliceren en de feiten tabellen te distribueren/Shard. Gegevens kunnen in uw Shard-Data Base worden geladen met behulp van de Shard-catalogus als de centrale coördinator of door de gegevens pomp op elke Shard uit te voeren. Lees meer informatie over het [migreren van gegevens naar een Shard-data base](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-loading-data.html) in de documentatie van Oracle.
#### <a name="oracle-sharding-with-data-guard"></a>Oracle-sharding met Data Guard
Oracle Data Guard kan worden gebruikt voor sharding met door het systeem beheerde, door de gebruiker gedefinieerde en samengestelde sharding-methoden.
Het volgende diagram is een referentie architectuur voor Oracle-sharding met Oracle-gegevens beveiliging die wordt gebruikt voor hoge Beschik baarheid van elke Shard. Het architectuur diagram toont een _samengestelde sharding-methode_. Het architectuur diagram wijkt waarschijnlijk af van toepassingen met verschillende vereisten voor gegevens locatie, taak verdeling, hoge Beschik baarheid, herstel na nood gevallen enzovoort en kan gebruikmaken van verschillende methoden voor sharding. Oracle sharding biedt u de mogelijkheid om aan deze vereisten te voldoen en horizon taal en efficiënt te schalen door deze opties te bieden. Een vergelijk bare architectuur kan zelfs worden geïmplementeerd met Oracle Golden Gate.

Een door het systeem beheerde sharding is het eenvoudigst te configureren en te beheren, door de gebruiker gedefinieerde sharding of samengestelde sharding is heel geschikt voor scenario's waarbij uw gegevens en toepassing geografisch worden gedistribueerd of in scenario's waarin u de controle over de replicatie van elke Shard moet hebben.
In de voor gaande architectuur wordt samengestelde sharding gebruikt voor het geo-distribueren van de gegevens en het Horizon taal schalen van uw toepassings lagen. Samengestelde sharding is een combi natie van door het systeem beheerde en door de gebruiker gedefinieerde sharding en biedt daarom het voor deel van beide methoden. In het voor gaande scenario worden gegevens eerst Shard in meerdere shardspaces gescheiden door de regio. Vervolgens worden de gegevens verder gepartitioneerd door een consistente hash over meerdere Shards in de shardspace. Elk shardspace bevat meerdere shardgroups. Elke shardgroup heeft meerdere Shards en is in dit geval een ' eenheid ' van replicatie. Elk shardgroup bevat alle gegevens in de shardspace. Shardgroups a1 en B1 zijn primaire Shardgroups, terwijl Shardgroups a2 en B2 stand-by zijn. U kunt ervoor kiezen om afzonderlijke Shards te gebruiken als replicatie-eenheid in plaats van een shardgroup.
In de voor gaande architectuur wordt een GSM-Shard-Director geïmplementeerd in elke beschikbaarheids zone voor hoge Beschik baarheid. De aanbeveling is om ten minste één GSM/Shard Director per Data Center/regio te implementeren. Daarnaast wordt een exemplaar van de toepassings server geïmplementeerd in elke beschikbaarheids zone die een shardgroup bevat. Met deze instelling kan de toepassing de latentie tussen de toepassings server en de data base-shardgroup laag blijven. Als een Data Base mislukt, kan de toepassings server in dezelfde zone als de stand-by-data base aanvragen afhandelen nadat de databaserol is overgezet. Azure-toepassing gateway en de Shard-Director houden de aanvraag-en antwoord latentie bij en router aanvragen dienovereenkomstig.
Vanuit het oogpunt van een toepassing maakt het client systeem een aanvraag voor de Azure-toepassing-gateway (of andere technologieën voor taak verdeling in Azure) waarmee de aanvraag wordt omgeleid naar de regio die het dichtst bij de client is. Azure-toepassing gateway biedt ook ondersteuning voor plak sessies, zodat aanvragen die afkomstig zijn van dezelfde client naar dezelfde toepassings server worden doorgestuurd. De toepassings server maakt gebruik van groepsgewijze verbindingen in Data Access-Stuur Programma's. Deze functie is beschikbaar in Stuur Programma's zoals JDBC, ODP.NET, OCI, enzovoort. De Stuur Programma's kunnen sharding-sleutels herkennen die zijn opgegeven als onderdeel van de aanvraag. [Oracle universele verbindings groep (UCP)](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html) voor JDBC-clients kan niet-Oracle Application clients zoals Apache Tomcat en IIS inschakelen voor gebruik met Oracle sharding.
Tijdens de eerste aanvraag verbindt de toepassings server met de Shard Director in de regio om routerings informatie te verkrijgen voor de Shard waarnaar de aanvraag moet worden doorgestuurd. Op basis van de door gegeven sharding-sleutel stuurt de Director de toepassings server door naar de respectieve Shard. De toepassings server slaat deze informatie op in het cache geheugen door een kaart te maken, en voor volgende aanvragen, wordt de Shard Director omzeild en worden aanvragen direct naar de Shard geleid.
#### <a name="oracle-sharding-with-goldengate"></a>Oracle-sharding met Golden Gate
Het volgende diagram is een referentie architectuur voor Oracle sharding met Oracle Golden Gate voor de maximale Beschik baarheid van elke Shard in de regio. In plaats van de voor gaande architectuur heeft deze architectuur alleen portrays hoge Beschik baarheid binnen één Azure-regio (meerdere beschikbaarheids zones). Eén kan een Shard-data base met hoge Beschik baarheid van meerdere regio's implementeren (vergelijkbaar met het voor gaande voor beeld) met behulp van Oracle Golden Gate.

De voor gaande referentie architectuur maakt gebruik van de door het _systeem beheerde_ sharding-methode voor het Shard van de gegevens. Omdat Oracle Golden Gate-replicatie op segment niveau wordt uitgevoerd, kunnen de helft van de gegevens die naar één Shard worden gerepliceerd, worden gerepliceerd naar een andere Shard. De andere helft kan worden gerepliceerd naar een andere Shard.
De manier waarop de gegevens worden gerepliceerd, is afhankelijk van de replicatie factor. Met de replicatie factor 2 hebt u twee kopieën van elk gegevens segment in uw drie Shards in de shardgroup. Op dezelfde manier, met een replicatie factor van 3 en drie Shards in uw shardgroup, worden alle gegevens in elke Shard gerepliceerd naar elke andere Shard in de shardgroup. Elke Shard in de shardgroup kan een andere replicatie factor hebben. Met deze instelling kunt u het ontwerp voor hoge Beschik baarheid en herstel na nood gevallen op efficiënte wijze in een shardgroup en in meerdere shardgroups definiëren.
In de voor gaande architectuur bevatten shardgroup A en shardgroup B dezelfde gegevens, maar deze bevinden zich in verschillende beschikbaarheids zones. Als zowel shardgroup A als shardgroup B dezelfde replicatie factor van 3 hebben, wordt elke rij/segment van de Shard-tabel zes keer gerepliceerd over de twee shardgroups. Als shardgroup A een replicatie factor van 3 en shardgroup B met de replicatie factor 2 heeft, wordt elke rij/segment vijf keer gerepliceerd over de twee shardgroups.
Deze installatie voor komt gegevens verlies als er een fout optreedt op instantie niveau of op een niveau van een beschikbaarheids zone. De toepassingslaag kan lezen uit en schrijven naar elke Shard. Oracle sharding geeft voor elk bereik van hash-waarden een ' hoofd segment ' op om conflicten te minimaliseren. Deze functie zorgt ervoor dat schrijf aanvragen voor een bepaald segment worden omgeleid naar het bijbehorende segment. Daarnaast biedt Oracle Golden Gate automatische detectie van conflicten en oplossingen voor het afhandelen van conflicten die zich kunnen voordoen. Raadpleeg de documentatie van Oracle over het gebruik van [Oracle Golden Gate met een Shard-data base](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-high-availability.html#GUID-4FC0AC46-0B8B-4670-BBE4-052228492C72)voor meer informatie en beperkingen voor het implementeren van Golden Gate met Oracle sharding.
In de voor gaande architectuur wordt een GSM-Shard-Director geïmplementeerd in elke beschikbaarheids zone voor hoge Beschik baarheid. De aanbeveling is om ten minste één GSM/Shard Director per Data Center of regio te implementeren. Daarnaast wordt een exemplaar van de toepassings server geïmplementeerd in elke beschikbaarheids zone die een shardgroup bevat. Met deze instelling kan de toepassing de latentie tussen de toepassings server en de data base-shardgroup laag blijven. Als een Data Base mislukt, kan de toepassings server in dezelfde zone als de stand-by-data base aanvragen afhandelen nadat de database functie is overgegaan. Azure-toepassing gateway en de Shard-Director houden de aanvraag-en antwoord latentie bij en router aanvragen dienovereenkomstig.
Vanuit het oogpunt van een toepassing maakt het client systeem een aanvraag voor de Azure-toepassing-gateway (of andere technologieën voor taak verdeling in Azure) waarmee de aanvraag wordt omgeleid naar de regio die het dichtst bij de client is. Azure-toepassing gateway biedt ook ondersteuning voor plak sessies, zodat aanvragen die afkomstig zijn van dezelfde client naar dezelfde toepassings server worden doorgestuurd. De toepassings server maakt gebruik van groepsgewijze verbindingen in Data Access-Stuur Programma's. Deze functie is beschikbaar in Stuur Programma's zoals JDBC, ODP.NET, OCI, enzovoort. De Stuur Programma's kunnen sharding-sleutels herkennen die zijn opgegeven als onderdeel van de aanvraag. [Oracle universele verbindings groep (UCP)](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html) voor JDBC-clients kan niet-Oracle Application clients zoals Apache Tomcat en IIS inschakelen voor gebruik met Oracle sharding.
Tijdens de eerste aanvraag verbindt de toepassings server met de Shard Director in de regio om routerings informatie te verkrijgen voor de Shard waarnaar de aanvraag moet worden doorgestuurd. Op basis van de door gegeven sharding-sleutel stuurt de Director de toepassings server door naar de respectieve Shard. De toepassings server slaat deze informatie op in het cache geheugen door een kaart te maken, en voor volgende aanvragen, wordt de Shard Director omzeild en worden aanvragen direct naar de Shard geleid.
## <a name="patching-and-maintenance"></a>Patches en onderhoud
Wanneer u uw Oracle-workloads op Azure implementeert, zorgt micro soft voor alle patches op het niveau van de host-OS. Elk gepland onderhoud op besturingssysteem niveau wordt vooraf aan klanten meegedeeld om de klant in staat te stellen voor dit geplande onderhoud. Twee servers van twee verschillende Beschikbaarheidszones worden nooit tegelijkertijd patches uitgevoerd. Zie [de beschik baarheid van virtuele machines beheren](../../manage-availability.md) voor meer informatie over het onderhoud en de reparatie van de VM.
Het patchen van het besturings systeem van de virtuele machine kan worden geautomatiseerd met behulp van [Azure Automation updatebeheer](../../../automation/update-management/overview.md). Het patchen en onderhouden van uw Oracle-data base kan worden geautomatiseerd en gepland met behulp van [Azure-pijp lijnen](/azure/devops/pipelines/get-started/what-is-azure-pipelines?view=azure-devops) of [Azure Automation updatebeheer](../../../automation/update-management/overview.md) om de downtime te minimaliseren. Zie [continue levering en Blue/groen-implementaties](/azure/devops/learn/what-is-continuous-delivery) om te begrijpen hoe deze kunnen worden gebruikt in de context van uw Oracle-data bases.
## <a name="architecture-and-design-considerations"></a>Architectuur-en ontwerp overwegingen
- Overweeg het gebruik van hyperthreaded [geheugen geoptimaliseerde virtuele machine](../../sizes-memory.md) met [beperkte kern vcpu's](../../../virtual-machines/constrained-vcpu.md) voor uw Oracle database-VM om de licentie kosten op te slaan en de prestaties te maximaliseren. Gebruik meerdere Premium-of Ultra-schijven (Managed disks) voor prestaties en beschik baarheid.
- Wanneer u beheerde schijven gebruikt, kan de naam van de schijf/apparaat worden gewijzigd bij het opnieuw opstarten. Het is raadzaam om de UUID van het apparaat te gebruiken in plaats van de naam om ervoor te zorgen dat uw koppelingen behouden blijven tijdens het opnieuw opstarten. Meer informatie vindt u [hier](/previous-versions/azure/virtual-machines/linux/configure-raid#add-the-new-file-system-to-etcfstab).
- Gebruik beschikbaarheids zones voor maximale Beschik baarheid in-regio.
- Overweeg het gebruik van ultra schijven (indien beschikbaar) of Premium-schijven voor uw Oracle-data base.
- Overweeg een stand-by Oracle-data base in een andere Azure-regio in te stellen met behulp van Oracle Data Guard.
- Overweeg het gebruik van [proximity placement groups](../../../virtual-machines/linux/co-location.md#proximity-placement-groups) om de latentie tussen uw toepassing en database laag te verminderen.
- [Oracle Enter prise Manager](https://docs.oracle.com/en/enterprise-manager/) instellen voor beheer, bewaking en logboek registratie.
- Overweeg het gebruik van Oracle Automatic Storage Management (ASM) voor gestroomlijnd opslag beheer voor uw data base.
- Gebruik [Azure-pijp lijnen](/azure/devops/pipelines/get-started/what-is-azure-pipelines) voor het beheren van patches en updates voor uw data base zonder uitval tijd.
- Pas de code van uw toepassing aan om Cloud-systeem eigen patronen toe te voegen, zoals het [patroon voor opnieuw proberen](/azure/architecture/patterns/retry), het [patroon circuit onderbreker](/azure/architecture/patterns/circuit-breaker)en andere patronen die zijn gedefinieerd in de [hand leiding voor het ontwerpen van Clouds](/azure/architecture/patterns/) , waardoor uw toepassing flexibeler kan zijn.
## <a name="next-steps"></a>Volgende stappen
Bekijk de volgende Oracle-referentie artikelen die van toepassing zijn op uw scenario.
- [Inleiding tot Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/18/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7)
- [Concepten van Oracle Data Guard Broker](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/oracle-data-guard-broker-concepts.html)
- [Oracle Golden Gate configureren voor Active-Active hoge Beschik baarheid](https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_bidirectional.htm#GWUAD282)
- [Overzicht van Oracle sharding](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html)
- [Far Sync van Oracle Active Data Guard heeft geen gegevens verlies op elke afstand](https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf) | 185.467811 | 1,251 | 0.821146 | nld_Latn | 0.99988 |
536cc9f49a29a58c4a70689951c9cd65a8cf8920 | 5,157 | md | Markdown | docs/standard/microservices-architecture/secure-net-microservices-web-applications/developer-app-secrets-storage.md | AlejandraHM/docs.es-es | 5f5b056e12f9a0bcccbbbef5e183657d898b9324 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/microservices-architecture/secure-net-microservices-web-applications/developer-app-secrets-storage.md | AlejandraHM/docs.es-es | 5f5b056e12f9a0bcccbbbef5e183657d898b9324 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/microservices-architecture/secure-net-microservices-web-applications/developer-app-secrets-storage.md | AlejandraHM/docs.es-es | 5f5b056e12f9a0bcccbbbef5e183657d898b9324 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Almacenar secretos de aplicación de forma segura durante el desarrollo
description: 'Seguridad en microservicios y aplicaciones web de .NET: no almacene sus secretos de aplicación (contraseñas, cadenas de conexión o claves de API) en el control de código fuente y aprenda las opciones que puede usar en ASP.NET Core (en particular, debe aprender a controlar los "secretos de usuario").'
author: mjrousos
ms.author: wiwagn
ms.date: 10/19/2018
ms.openlocfilehash: fe8e7fa11c9a4f4cae133c2e09f9e4b4dd40a546
ms.sourcegitcommit: 542aa405b295955eb055765f33723cb8b588d0d0
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 01/17/2019
ms.locfileid: "54361994"
---
# <a name="store-application-secrets-safely-during-development"></a>Almacenar secretos de aplicación de forma segura durante el desarrollo
Para conectar con los recursos protegidos y otros servicios, las aplicaciones de ASP.NET Core normalmente necesitan usar cadenas de conexión, contraseñas u otras credenciales que contienen información confidencial. Estos fragmentos de información confidenciales se denominan *secretos*. Es un procedimiento recomendado no incluir secretos en el código fuente y, ciertamente, no almacenar secretos en el control de código fuente. En su lugar, debe usar el modelo de configuración de ASP.NET Core para leer los secretos desde ubicaciones más seguras.
Debe separar los secretos usados para acceder a los recursos de desarrollo y almacenamiento provisional de los usados para acceder a los recursos de producción, ya que distintas personas deben tener acceso a los diferentes conjuntos de secretos. Para almacenar secretos usados durante el desarrollo, los enfoques comunes son almacenar secretos en variables de entorno o usar la herramienta ASP.NET Core Secret Manager. Para un almacenamiento más seguro en entornos de producción, los microservicios pueden almacenar secretos en un Azure Key Vault.
## <a name="store-secrets-in-environment-variables"></a>Almacenamiento de secretos en variables de entorno
Una manera de mantener secretos fuera del código fuente es que los desarrolladores establezcan secretos basados en cadena como [variables de entorno](/aspnet/core/security/app-secrets#environment-variables) en sus máquinas de desarrollo. Cuando use variables de entorno para almacenar secretos con nombres jerárquicos, como las anidadas en las secciones de configuración, debe asignar un nombre a las variables para incluir la jerarquía completa de sus secciones, delimitada por signos de dos puntos (:).
Por ejemplo, establecer una variable de entorno `Logging:LogLevel:Default` to `Debug` sería equivalente a un valor de configuración del archivo JSON siguiente:
```json
{
"Logging": {
"LogLevel": {
"Default": "Debug"
}
}
}
```
Para acceder a estos valores de variables de entorno, la aplicación solo debe llamar a AddEnvironmentVariables en su ConfigurationBuilder al construir un objeto IConfigurationRoot.
Tenga en cuenta que las variables de entorno suelen almacenarse como texto sin formato, por lo que si se pone en peligro la máquina o el proceso con las variables de entorno, se verán los valores de las variables de entorno.
## <a name="store-secrets-with-the-aspnet-core-secret-manager"></a>Almacenamiento de secretos mediante ASP.NET Core Secret Manager
La herramienta [Secret Manager](/aspnet/core/security/app-secrets#secret-manager) de ASP.NET Core proporciona otro método para mantener secretos fuera del código fuente. Para usar la herramienta Secret Manager, instale el paquete **Microsoft.Extensions.Configuration.SecretManager** en su archivo del proyecto. Una vez que esa dependencia está presente y se ha restaurado, se puede usar el comando `dotnet user-secrets` para establecer el valor de los secretos desde la línea de comandos. Estos secretos se almacenarán en un archivo JSON en el directorio del perfil del usuario (los detalles varían según el sistema operativo), lejos del código fuente.
La propiedad `UserSecretsId` del proyecto que está usando los secretos organiza los secretos que establece la herramienta Secret Manager. Por tanto, debe asegurarse de establecer la propiedad UserSecretsId en el archivo del proyecto, como se muestra en el siguiente fragmento. El valor predeterminado es un GUID asignado por Visual Studio, pero la cadena real no es importante mientras sea única en su equipo.
```xml
<PropertyGroup>
<UserSecretsId>UniqueIdentifyingString</UserSecretsId>
</PropertyGroup>
```
Para usar los secretos almacenados con Secret Manager en una aplicación, debe llamar a `AddUserSecrets<T>` en la instancia de ConfigurationBuilder para incluir los secretos de la aplicación en su configuración. El parámetro genérico T debe ser un tipo del ensamblado que se aplicó a UserSecretId. Normalmente, usar `AddUserSecrets<Startup>` está bien.
`AddUserSecrets<Startup>()` se incluye en las opciones predeterminadas del entorno de desarrollo al usar el método `CreateDefaultBuilder` en *Program.cs*.
>[!div class="step-by-step"]
>[Anterior](authorization-net-microservices-web-applications.md)
>[Siguiente](azure-key-vault-protects-secrets.md)
| 87.40678 | 652 | 0.806864 | spa_Latn | 0.991458 |
536cfe6581c36fa8e9b8c3027d8dc940f824309b | 19,847 | md | Markdown | aspnetcore/blazor/components/prerendering-and-integration.md | alper-yoruk/AspNetCore.Docs.tr-tr | e137583f957428bf6fba0c285c0b231c14857a9f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/blazor/components/prerendering-and-integration.md | alper-yoruk/AspNetCore.Docs.tr-tr | e137583f957428bf6fba0c285c0b231c14857a9f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/blazor/components/prerendering-and-integration.md | alper-yoruk/AspNetCore.Docs.tr-tr | e137583f957428bf6fba0c285c0b231c14857a9f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: PreRender ve ASP.NET Core Razor bileşenleri tümleştirme
author: guardrex
description: Razor Blazor Sunucu üzerindeki bileşenler prerendering dahil olmak üzere uygulamalar için bileşen tümleştirme senaryoları hakkında bilgi edinin Razor .
monikerRange: '>= aspnetcore-3.1'
ms.author: riande
ms.custom: mvc
ms.date: 10/29/2020
no-loc:
- appsettings.json
- ASP.NET Core Identity
- cookie
- Cookie
- Blazor
- Blazor Server
- Blazor WebAssembly
- Identity
- Let's Encrypt
- Razor
- SignalR
uid: blazor/components/prerendering-and-integration
zone_pivot_groups: blazor-hosting-models
ms.openlocfilehash: 3402117334548f9d90880d4f536e8baa288e7bc9
ms.sourcegitcommit: 3593c4efa707edeaaceffbfa544f99f41fc62535
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 01/04/2021
ms.locfileid: "97506987"
---
# <a name="prerender-and-integrate-aspnet-core-no-locrazor-components"></a>PreRender ve ASP.NET Core Razor bileşenleri tümleştirme
, [Luke Latham](https://github.com/guardrex) ve [Daniel Roth](https://github.com/danroth27) tarafından
::: zone pivot="webassembly"
::: moniker range=">= aspnetcore-5.0"
Razor bileşenler, Razor barındırılan bir çözümde sayfalarla ve MVC uygulamalarıyla tümleştirilebilir Blazor WebAssembly . Sayfa veya görünüm işlendiğinde, bileşenler aynı anda önceden alınabilir.
## <a name="configuration"></a>Yapılandırma
Bir uygulama için prerendering ayarlamak için Blazor WebAssembly :
1. Blazor WebAssemblyUygulamayı bir ASP.NET Core uygulamasında barındırın. Tek başına bir Blazor WebAssembly uygulama ASP.NET Core çözümüne eklenebilir veya Blazor WebAssembly Blazor barındırılan proje şablonundan oluşturulmuş bir barındırılan uygulama kullanabilirsiniz.
1. Varsayılan statik `wwwroot/index.html` dosyayı Blazor WebAssembly istemci projesinden kaldırın.
1. İstemci projesinde aşağıdaki satırı silin `Program.Main` :
```csharp
builder.RootComponents.Add<App>("#app");
```
1. `Pages/_Host.cshtml`Sunucu projesine bir dosya ekleyin. Bir `_Host.cshtml` Blazor Server komut kabuğunda komutuyla şablondan oluşturulan bir uygulamadan dosya elde edebilirsiniz `dotnet new blazorserver -o BlazorServer` . `Pages/_Host.cshtml`Dosyayı barındırılan çözümün sunucu uygulamasına yerleştirdikten sonra Blazor WebAssembly , dosyasında aşağıdaki değişiklikleri yapın:
* Ad alanını sunucu uygulamasının `Pages` klasörüne (örneğin, `@namespace BlazorHosted.Server.Pages` ) ayarlayın.
* [`@using`](xref:mvc/views/razor#using)İstemci projesi için bir yönerge ayarlayın (örneğin, `@using BlazorHosted.Client` ).
* Stil sayfası bağlantılarını, WebAssembly uygulamasının stil sayfasına işaret etmek üzere güncelleştirin. Aşağıdaki örnekte, istemci uygulamanın ad alanı şu şekilde olur `BlazorHosted.Client` :
```cshtml
<link href="css/app.css" rel="stylesheet" />
<link href="BlazorHosted.Client.styles.css" rel="stylesheet" />
```
* `render-mode` [Bileşen etiketi Yardımcısı](xref:mvc/views/tag-helpers/builtin-th/component-tag-helper) ' nı, kök bileşeni olan PreRender 'dan güncelleştirin `App` :
```cshtml
<component type="typeof(App)" render-mode="WebAssemblyPrerendered" />
```
* Komut dosyası Blazor kaynağını istemci tarafı komut dosyasını kullanacak şekilde güncelleştirin Blazor WebAssembly :
```cshtml
<script src="_framework/blazor.webassembly.js"></script>
```
1. `Startup.Configure` `Startup.cs` Sunucu projesinin içindeki ():
* `UseDeveloperExceptionPage`Geliştirme ortamında uygulama tasarımcısında çağırın.
* `UseBlazorFrameworkFiles`Uygulama Oluşturucu 'da çağırın.
* `index.html`Sayfadan () geri dönüşü değiştirin `endpoints.MapFallbackToFile("index.html");` `_Host.cshtml` .
```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseWebAssemblyDebugging();
}
else
{
app.UseExceptionHandler("/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseBlazorFrameworkFiles();
app.UseStaticFiles();
app.UseRouting();
app.UseEndpoints(endpoints =>
{
endpoints.MapRazorPages();
endpoints.MapControllers();
endpoints.MapFallbackToPage("/_Host");
});
}
```
## <a name="render-components-in-a-page-or-view-with-the-component-tag-helper"></a>Bileşen etiketi Yardımcısı ile bir sayfada veya görünümde bileşenleri işleme
Bileşen etiketi Yardımcısı, bir sayfadaki veya görünümdeki bir bileşeni işlemek için iki işleme modunu destekler Blazor WebAssembly :
* `WebAssembly`: Bir Blazor WebAssembly uygulamanın tarayıcıya yüklendiğinde etkileşimli bir bileşeni içermesi için kullanılacak bir işaret oluşturur. Bileşen ön işlenmiş değildir. Bu seçenek farklı sayfalarda farklı bileşenlerin işlenmesine daha kolay hale gelir Blazor WebAssembly .
* `WebAssemblyPrerendered`: Bileşeni statik HTML 'ye ön ekler ve Blazor WebAssembly daha sonra, tarayıcıya yüklendiğinde bileşeni etkileşimli hale getirmek için bir uygulamanın işaretçisini içerir.
Aşağıdaki Razor Sayfalar örneğinde, `Counter` bileşen bir sayfada işlenir. Bileşeni etkileşimli hale getirmek için, Blazor WebAssembly komut dosyası sayfanın [render bölümüne](xref:mvc/views/layout#sections)eklenir. Bileşen etiketi Yardımcısı () ile bileşen için tam ad alanını kullanmaktan kaçınmak için `Counter` `{APP ASSEMBLY}.Pages.Counter` , [`@using`](xref:mvc/views/razor#using) istemci uygulamanın ad alanı için bir yönerge ekleyin `Pages` . Aşağıdaki örnekte, istemci uygulamanın ad alanı şu şekilde olur `BlazorHosted.Client` :
```cshtml
...
@using BlazorHosted.Client.Pages
<h1>@ViewData["Title"]</h1>
<component type="typeof(Counter)" render-mode="WebAssemblyPrerendered" />
@section Scripts {
<script src="_framework/blazor.webassembly.js"></script>
}
```
<xref:Microsoft.AspNetCore.Mvc.Rendering.RenderMode> bileşenin şunları yapıp kullanmadığını yapılandırır:
* , Sayfaya ön gönderilir.
* , Sayfada statik HTML olarak veya Kullanıcı aracısından bir uygulamayı önyüklemek için gerekli bilgileri içeriyorsa Blazor .
Bileşen etiketi Yardımcısı hakkında, parametreleri ve yapılandırmayı geçirme dahil daha fazla bilgi için <xref:Microsoft.AspNetCore.Mvc.Rendering.RenderMode> bkz <xref:mvc/views/tag-helpers/builtin-th/component-tag-helper> ..
Yukarıdaki örnekte, sunucu uygulamasının düzeninin ( `_Layout.cshtml` ) kapanış etiketi içindeki betik için bir [render bölümü](xref:mvc/views/layout#sections) () bulunmalıdır <xref:Microsoft.AspNetCore.Mvc.Razor.RazorPage.RenderSection%2A> `</body>` .
```cshtml
...
@RenderSection("Scripts", required: false)
</body>
```
Dosya, bir `_Layout.cshtml` `Pages/Shared` MVC uygulamasında bir Razor Sayfalar uygulamasının veya klasörünün klasöründe bulunur `Views/Shared` .
Uygulama aynı zamanda uygulama stillerinin bulunduğu bileşenlere stil Blazor WebAssembly eklemek istiyorsanız, uygulamanın stillerini `_Layout.cshtml` dosyaya ekleyin. Aşağıdaki örnekte, istemci uygulamanın ad alanı şu şekilde olur `BlazorHosted.Client` :
```cshtml
<head>
...
<link href="css/app.css" rel="stylesheet" />
<link href="BlazorHosted.Client.styles.css" rel="stylesheet" />
</head>
```
## <a name="render-components-in-a-page-or-view-with-a-css-selector"></a>Bir sayfada veya görünümde bir CSS seçiciyle bileşenleri işleme
() İçindeki *istemci* projesine kök bileşenleri ekleyin `Program.Main` `Program.cs` . Aşağıdaki örnekte bileşen, eşleşen bir `Counter` öğeyi seçen CSS seçiciyle bir kök bileşen olarak bildirilmiştir `id` `my-counter` . Aşağıdaki örnekte, istemci uygulamanın ad alanı şu şekilde olur `BlazorHosted.Client` :
```csharp
using BlazorHosted.Client.Pages;
...
builder.RootComponents.Add<Counter>("#my-counter");
```
Aşağıdaki Razor Sayfalar örneğinde, `Counter` bileşen bir sayfada işlenir. Bileşeni etkileşimli hale getirmek için, Blazor WebAssembly komut dosyası sayfanın [render bölümüne](xref:mvc/views/layout#sections)eklenir:
```cshtml
...
<h1>@ViewData["Title"]</h1>
<div id="my-counter">Loading...</div>
@section Scripts {
<script src="_framework/blazor.webassembly.js"></script>
}
```
Yukarıdaki örnekte, sunucu uygulamasının düzeninin ( `_Layout.cshtml` ) kapanış etiketi içindeki betik için bir [render bölümü](xref:mvc/views/layout#sections) () bulunmalıdır <xref:Microsoft.AspNetCore.Mvc.Razor.RazorPage.RenderSection%2A> `</body>` .
```cshtml
...
@RenderSection("Scripts", required: false)
</body>
```
Dosya, bir `_Layout.cshtml` `Pages/Shared` MVC uygulamasında bir Razor Sayfalar uygulamasının veya klasörünün klasöründe bulunur `Views/Shared` .
Uygulama aynı zamanda uygulama stillerinin bulunduğu bileşenlere stil Blazor WebAssembly eklemek istiyorsanız, uygulamanın stillerini `_Layout.cshtml` dosyaya ekleyin. Aşağıdaki örnekte, istemci uygulamanın ad alanı şu şekilde olur `BlazorHosted.Client` :
```cshtml
<head>
...
<link href="css/app.css" rel="stylesheet" />
<link href="BlazorHosted.Client.styles.css" rel="stylesheet" />
</head>
```
::: moniker-end
::: moniker range="< aspnetcore-5.0"
Razor Razor Barındırılan bir çözümde bileşenleri sayfalar ve MVC uygulamalarıyla tümleştirmek Blazor WebAssembly , .NET 5 veya sonraki sürümlerde ASP.NET Core desteklenir. Bu makalenin .NET 5 veya sonraki bir sürümünü seçin.
::: moniker-end
::: zone-end
::: zone pivot="server"
Razor bileşenler, Razor bir uygulamadaki sayfalarla ve MVC uygulamalarıyla tümleştirilebilir Blazor Server . Sayfa veya görünüm işlendiğinde, bileşenler aynı anda önceden alınabilir.
[Uygulamayı yapılandırdıktan](#configuration)sonra, uygulamanın gereksinimlerine bağlı olarak aşağıdaki bölümlerde yer alan kılavuzu kullanın:
* Yönlendirilebilir bileşenler: Kullanıcı isteklerinden doğrudan yönlendirilebilir bileşenler Için. Ziyaretçilerin, bir yönergesi olan bir bileşen için tarayıcılarında bir HTTP isteği yapabilmeleri gerektiğinde bu kılavuzu izleyin [`@page`](xref:mvc/views/razor#page) .
* [Bir sayfalar uygulamasında yönlendirilebilir bileşenleri kullanma Razor](#use-routable-components-in-a-razor-pages-app)
* [MVC uygulamasında yönlendirilebilir bileşenleri kullanma](#use-routable-components-in-an-mvc-app)
* [Bir sayfadan veya görünümden bileşenleri işleme](#render-components-from-a-page-or-view): doğrudan Kullanıcı isteklerinden yönlendirilemeyen bileşenler için. Uygulama bileşenleri [bileşen etiketi Yardımcısı](xref:mvc/views/tag-helpers/builtin-th/component-tag-helper)ile var olan sayfalara ve görünümlere eklerken bu kılavuzu izleyin.
## <a name="configuration"></a>Yapılandırma
Mevcut Razor Sayfalar ve MVC uygulamaları, Razor bileşenleri sayfalar ve görünümler ile tümleştirilebilir:
1. Uygulamanın düzen dosyasında ( `_Layout.cshtml` ):
* Aşağıdaki `<base>` etiketi `<head>` öğesine ekleyin:
```html
<base href="~/" />
```
`href`Yukarıdaki örnekte yer alan değer ( *uygulama temel yolu*), uygulamanın kök URL yolunda () bulunduğunu varsayar `/` . Uygulama bir alt uygulama ise, makalenin *uygulama temel yolu* bölümündeki yönergeleri izleyin <xref:blazor/host-and-deploy/index#app-base-path> .
Dosya, bir `_Layout.cshtml` `Pages/Shared` MVC uygulamasında bir Razor Sayfalar uygulamasının veya klasörünün klasöründe bulunur `Views/Shared` .
* `<script>` `blazor.server.js` Render bölümünden hemen önce betik için bir etiket ekleyin `Scripts` :
```html
...
<script src="_framework/blazor.server.js"></script>
@await RenderSectionAsync("Scripts", required: false)
</body>
```
Çerçeve, `blazor.server.js` betiği uygulamaya ekler. Betiği uygulamaya el ile eklemeniz gerekmez.
1. `_Imports.razor`Aşağıdaki içeriğe sahip projenin kök klasörüne bir dosya ekleyin (son ad alanını `MyAppNamespace` uygulamanın ad alanına değiştirin):
```razor
@using System.Net.Http
@using Microsoft.AspNetCore.Authorization
@using Microsoft.AspNetCore.Components.Authorization
@using Microsoft.AspNetCore.Components.Forms
@using Microsoft.AspNetCore.Components.Routing
@using Microsoft.AspNetCore.Components.Web
@using Microsoft.JSInterop
@using MyAppNamespace
```
1. `Startup.ConfigureServices`' De, Blazor Server hizmeti kaydedin:
```csharp
services.AddServerSideBlazor();
```
1. İçinde `Startup.Configure` , Blazor hub bitiş noktasını şu şekilde ekleyin `app.UseEndpoints` :
```csharp
endpoints.MapBlazorHub();
```
1. Bileşenleri herhangi bir sayfa veya görünümle tümleştirin. Daha fazla bilgi için, [bir sayfadan veya görünümden bileşenleri işleme](#render-components-from-a-page-or-view) bölümüne bakın.
## <a name="use-routable-components-in-a-no-locrazor-pages-app"></a>Bir sayfalar uygulamasında yönlendirilebilir bileşenleri kullanma Razor
*Bu bölüm, Kullanıcı isteklerinden doğrudan yönlendirilebilir bileşenleri eklemeye aittir.*
RazorSayfalar uygulamalarında yönlendirilebilir bileşenleri desteklemek için Razor :
1. [Yapılandırma](#configuration) bölümündeki yönergeleri izleyin.
1. `App.razor`Aşağıdaki içeriğe sahip proje köküne bir dosya ekleyin:
```razor
@using Microsoft.AspNetCore.Components.Routing
<Router AppAssembly="@typeof(Program).Assembly">
<Found Context="routeData">
<RouteView RouteData="routeData" />
</Found>
<NotFound>
<h1>Page not found</h1>
<p>Sorry, but there's nothing here!</p>
</NotFound>
</Router>
```
[!INCLUDE[](~/blazor/includes/prefer-exact-matches.md)]
1. `_Host.cshtml` `Pages` Klasöre aşağıdaki içeriğe sahip bir dosya ekleyin:
```cshtml
@page "/blazor"
@{
Layout = "_Layout";
}
<app>
<component type="typeof(App)" render-mode="ServerPrerendered" />
</app>
```
Bileşenler, `_Layout.cshtml` düzen için paylaşılan dosyayı kullanır.
<xref:Microsoft.AspNetCore.Mvc.Rendering.RenderMode> bileşenin şunları yapıp kullanmadığını yapılandırır `App` :
* , Sayfaya ön gönderilir.
* , Sayfada statik HTML olarak veya Kullanıcı aracısından bir uygulamayı önyüklemek için gerekli bilgileri içeriyorsa Blazor .
Bileşen etiketi Yardımcısı hakkında, parametreleri ve yapılandırmayı geçirme dahil daha fazla bilgi için <xref:Microsoft.AspNetCore.Mvc.Rendering.RenderMode> bkz <xref:mvc/views/tag-helpers/builtin-th/component-tag-helper> ..
1. `_Host.cshtml`İçindeki uç nokta yapılandırmasına sayfanın düşük öncelikli bir yolunu ekleyin `Startup.Configure` :
```csharp
app.UseEndpoints(endpoints =>
{
...
endpoints.MapFallbackToPage("/_Host");
});
```
1. Uygulamaya yönlendirilebilir bileşenler ekleyin. Örneğin:
```razor
@page "/counter"
<h1>Counter</h1>
...
```
Ad alanları hakkında daha fazla bilgi için [bileşen ad alanları](#component-namespaces) bölümüne bakın.
## <a name="use-routable-components-in-an-mvc-app"></a>MVC uygulamasında yönlendirilebilir bileşenleri kullanma
*Bu bölüm, Kullanıcı isteklerinden doğrudan yönlendirilebilir bileşenleri eklemeye aittir.*
RazorMVC uygulamalarında yönlendirilebilir bileşenleri desteklemek için:
1. [Yapılandırma](#configuration) bölümündeki yönergeleri izleyin.
1. `App.razor`Aşağıdaki içeriğe sahip projenin köküne bir dosya ekleyin:
```razor
@using Microsoft.AspNetCore.Components.Routing
<Router AppAssembly="@typeof(Program).Assembly">
<Found Context="routeData">
<RouteView RouteData="routeData" />
</Found>
<NotFound>
<h1>Page not found</h1>
<p>Sorry, but there's nothing here!</p>
</NotFound>
</Router>
```
[!INCLUDE[](~/blazor/includes/prefer-exact-matches.md)]
1. `_Host.cshtml` `Views/Home` Klasöre aşağıdaki içeriğe sahip bir dosya ekleyin:
```cshtml
@{
Layout = "_Layout";
}
<app>
<component type="typeof(App)" render-mode="ServerPrerendered" />
</app>
```
Bileşenler, `_Layout.cshtml` düzen için paylaşılan dosyayı kullanır.
<xref:Microsoft.AspNetCore.Mvc.Rendering.RenderMode> bileşenin şunları yapıp kullanmadığını yapılandırır `App` :
* , Sayfaya ön gönderilir.
* , Sayfada statik HTML olarak veya Kullanıcı aracısından bir uygulamayı önyüklemek için gerekli bilgileri içeriyorsa Blazor .
Bileşen etiketi Yardımcısı hakkında, parametreleri ve yapılandırmayı geçirme dahil daha fazla bilgi için <xref:Microsoft.AspNetCore.Mvc.Rendering.RenderMode> bkz <xref:mvc/views/tag-helpers/builtin-th/component-tag-helper> ..
1. Ana denetleyiciye bir eylem ekleyin:
```csharp
public IActionResult Blazor()
{
return View("_Host");
}
```
1. `_Host.cshtml`' Deki uç nokta yapılandırmasına görünümü döndüren denetleyici eylemi için düşük öncelikli bir yol ekleyin `Startup.Configure` :
```csharp
app.UseEndpoints(endpoints =>
{
...
endpoints.MapFallbackToController("Blazor", "Home");
});
```
1. Bir `Pages` klasör oluşturun ve uygulamaya yönlendirilebilir bileşenler ekleyin. Örneğin:
```razor
@page "/counter"
<h1>Counter</h1>
...
```
Ad alanları hakkında daha fazla bilgi için [bileşen ad alanları](#component-namespaces) bölümüne bakın.
## <a name="render-components-from-a-page-or-view"></a>Bir sayfadan veya görünümden bileşenleri işleme
*Bu bölüm, bileşenlerin Kullanıcı isteklerinden doğrudan yönlendirilemeyen sayfalara veya görünümlere bileşen eklenmesine aittir.*
Bir sayfadan veya görünümden bir bileşeni işlemek için [bileşen etiketi yardımcısını](xref:mvc/views/tag-helpers/builtin-th/component-tag-helper)kullanın.
### <a name="render-stateful-interactive-components"></a>Durum bilgisi olan etkileşimli bileşenleri işle
Durum bilgisi olan etkileşimli bileşenler, bir Razor sayfaya veya görünüme eklenebilir.
Sayfa veya görünüm şunları işler:
* Bileşen sayfa veya görünümle birlikte kullanılır.
* Prerendering için kullanılan ilk bileşen durumu kayboldu.
* Bağlantı kurulduunda yeni bileşen durumu oluşturulur SignalR .
Aşağıdaki Razor sayfa bir bileşeni işler `Counter` :
```cshtml
<h1>My Razor Page</h1>
<component type="typeof(Counter)" render-mode="ServerPrerendered"
param-InitialValue="InitialValue" />
@functions {
[BindProperty(SupportsGet=true)]
public int InitialValue { get; set; }
}
```
Daha fazla bilgi için bkz. <xref:mvc/views/tag-helpers/builtin-th/component-tag-helper>.
### <a name="render-noninteractive-components"></a>Etkileşimsiz bileşenleri işle
Aşağıdaki Razor sayfada, `Counter` bileşen bir form kullanılarak belirtilen bir başlangıç değeriyle statik olarak işlenir. Bileşen statik olarak işlendiğinden, bileşen etkileşimli değildir:
```cshtml
<h1>My Razor Page</h1>
<form>
<input type="number" asp-for="InitialValue" />
<button type="submit">Set initial value</button>
</form>
<component type="typeof(Counter)" render-mode="Static"
param-InitialValue="InitialValue" />
@functions {
[BindProperty(SupportsGet=true)]
public int InitialValue { get; set; }
}
```
Daha fazla bilgi için bkz. <xref:mvc/views/tag-helpers/builtin-th/component-tag-helper>.
## <a name="component-namespaces"></a>Bileşen ad alanları
Uygulamanın bileşenlerini tutmak için özel bir klasör kullanırken, klasörü/görünümü ya da dosyaya veya dosyayı temsil eden ad alanını ekleyin `_ViewImports.cshtml` . Aşağıdaki örnekte:
* `MyAppNamespace`Uygulamanın ad alanına geçin.
* Adlı bir klasör, `Components` bileşenleri tutmak için kullanılmazsa, `Components` bileşenlerin bulunduğu klasöre geçin.
```cshtml
@using MyAppNamespace.Components
```
`_ViewImports.cshtml`Dosya, `Pages` bir Razor Sayfalar uygulamasının KLASÖRÜNDE veya `Views` bir MVC uygulamasının klasöründe bulunur.
Daha fazla bilgi için bkz. <xref:blazor/components/index#namespaces>.
::: zone-end
| 39.068898 | 538 | 0.753061 | tur_Latn | 0.997596 |
536dbce6b9dbdc270209412f02fc3f2ef7bf679a | 1,986 | md | Markdown | windows-driver-docs-pr/kernel/counters.md | AmadeusW/windows-driver-docs | 6d272f80814969bbb5ec836cbbebdf5cae52ee35 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2018-01-29T10:59:09.000Z | 2021-05-26T09:19:55.000Z | windows-driver-docs-pr/kernel/counters.md | AmadeusW/windows-driver-docs | 6d272f80814969bbb5ec836cbbebdf5cae52ee35 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/kernel/counters.md | AmadeusW/windows-driver-docs | 6d272f80814969bbb5ec836cbbebdf5cae52ee35 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-01-29T10:59:10.000Z | 2018-01-29T10:59:10.000Z | ---
title: Counters
author: windows-driver-content
description: Counters
ms.assetid: dd4cb793-64c4-4f66-b9cb-e97dd94fbb21
keywords: ["synchronization WDK kernel , counters", "counters WDK kernel", "count values WDK kernel"]
ms.author: windowsdriverdev
ms.date: 06/16/2017
ms.topic: article
ms.prod: windows-hardware
ms.technology: windows-devices
---
# Counters
## <a href="" id="ddk-counters-kg"></a>
The system provides several driver support routines that return various count values.
[**KeQuerySystemTime**](https://msdn.microsoft.com/library/windows/hardware/ff553068)
[**KeQueryInterruptTime**](https://msdn.microsoft.com/library/windows/hardware/ff553025)
[**KeQueryInterruptTimePrecise**](https://msdn.microsoft.com/library/windows/hardware/dn903729)
[**KeQueryTickCount**](https://msdn.microsoft.com/library/windows/hardware/ff553071)
[**KeQueryPerformanceCounter**](https://msdn.microsoft.com/library/windows/hardware/ff553053)
[**KeQueryTimeIncrement**](https://msdn.microsoft.com/library/windows/hardware/ff553075)
--------------------
[Send comments about this topic to Microsoft](mailto:[email protected]?subject=Documentation%20feedback%20%5Bkernel\kernel%5D:%20Counters%20%20RELEASE:%20%286/14/2017%29&body=%0A%0APRIVACY%20STATEMENT%0A%0AWe%20use%20your%20feedback%20to%20improve%20the%20documentation.%20We%20don't%20use%20your%20email%20address%20for%20any%20other%20purpose,%20and%20we'll%20remove%20your%20email%20address%20from%20our%20system%20after%20the%20issue%20that%20you're%20reporting%20is%20fixed.%20While%20we're%20working%20to%20fix%20this%20issue,%20we%20might%20send%20you%20an%20email%20message%20to%20ask%20for%20more%20info.%20Later,%20we%20might%20also%20send%20you%20an%20email%20message%20to%20let%20you%20know%20that%20we've%20addressed%20your%20feedback.%0A%0AFor%20more%20info%20about%20Microsoft's%20privacy%20policy,%20see%20http://privacy.microsoft.com/default.aspx. "Send comments about this topic to Microsoft")
| 46.186047 | 916 | 0.793051 | eng_Latn | 0.199795 |
536ddf0d23d3ce6897ff5c8c7ff60d4b8ffb5626 | 5,532 | md | Markdown | src/content/en/tools/chrome-devtools/console/track-exceptions.md | joseroubert08/WebFundamentals | 8413600c7c0fbc19e6235613e95368cf7b95d00e | [
"Apache-2.0"
] | 6 | 2016-05-15T22:50:34.000Z | 2020-07-25T07:00:26.000Z | src/content/en/tools/chrome-devtools/console/track-exceptions.md | jd-h/WebFundamentals | 260556c0ea8193db898eea4ff58ea182235900db | [
"Apache-2.0"
] | null | null | null | src/content/en/tools/chrome-devtools/console/track-exceptions.md | jd-h/WebFundamentals | 260556c0ea8193db898eea4ff58ea182235900db | [
"Apache-2.0"
] | 5 | 2016-05-10T02:32:04.000Z | 2021-04-25T20:25:52.000Z | project_path: /web/_project.yaml
book_path: /web/tools/_book.yaml
description: Chrome DevTools provides tools to help you fix web pages throwing exceptions and debug errors in your JavaScript.
{# wf_updated_on: 2015-05-12 #}
{# wf_published_on: 2015-04-13 #}
# Exception and Error Handling {: .page-title }
{% include "web/_shared/contributors/megginkearney.html" %}
{% include "web/_shared/contributors/flaviocopes.html" %}
Chrome DevTools provides tools to help you fix web pages throwing exceptions and debug errors in your JavaScript.
Page exceptions and JavaScript errors are actually quite useful - if you can get to the details behind them. When a page throws an exception or a script produces an error, the Console provides specific, reliable information to help you locate and correct the problem.
In the Console you can track exceptions and trace the execution path that led to them, explicitly or implicitly catch them (or ignore them), and even set error handlers to automatically collect and process exception data.
### TL;DR {: .hide-from-toc }
- Turn on Pause on Exceptions to debug the code context when the exception triggered.
- Print current JavaScript call stack using <code>console.trace</code>.
- Place assertions in your code and throw exceptions using <code>console.assert()</code>.
- Log errors happening in the browser using <code>window.onerror</code>.
## Track exceptions
When something goes wrong, open the DevTools console (`Ctrl+Shift+J` / `Cmd+Option+J`) to view the JavaScript error messages.
Each message has a link to the file name with the line number you can navigate to.
An example of an exception:

### View exception stack trace
It's not always obvious which execution path lead to an error.
Complete JavaScript call stacks accompany exceptions in the console.
Expand these console messages to see the stack frames and navigate to the corresponding locations in the code:

### Pause on JavaScript exceptions
The next time an exception is thrown,
pause JavaScript execution and inspect its call stack,
scope variables, and state of your app.
A tri-state stop button at the bottom of the Scripts panel enables you to switch among different exception handling modes: {:.inline}
Choose to either pause on all exceptions or only on the uncaught ones or you can ignore exceptions altogether.

## Print stack traces
Better understand how your web page behaves
by printing log messages to the console.
Make the log entries more informative by including associated stack traces. There are several ways of doing that.
### Error.stack
Each Error object has a string property named stack that contains the stack trace:

### console.trace()
Instrument your code with [`console.trace()`](./console-reference#consoletraceobject) calls that print current JavaScript call stacks:

### console.assert()
Place assertions in your JavaScript code by calling [`console.assert()`](./console-reference#consoleassertexpression-object)
with the error condition as the first parameter.
When this expression evaluates to false,
you will see a corresponding console record:

## How to examine stack trace to find triggers
Let's see how to use the tools you've just learned about,
and find the real cause of an error.
Here's a simple HTML page that includes two scripts:

When the user clicks on the page,
the paragraph changes its inner text,
and the `callLibMethod()` function provided by `lib.js` is called.
This function prints a `console.log`,
and then calls `console.slog`,
a method not provided by the Console API.
This should trigger an error.
When the page is run and you click on it,
this error is triggered:

Click the arrow to can expand the error message:

The Console tells you the error was triggered in `lib.js`, line 4,
which was called by `script.js` in the `addEventListener` callback,
an anonymous function, in line 3.
This is a very simple example,
but even the most complicated log trace debugging follows the same process.
## Handle runtime exceptions using window.onerror
Chrome exposes the `window.onerror` handler function,
called whenever an error happens in the JavaScript code execution.
Whenever a JavaScript exception is thrown in the window context and
is not caught by a try/catch block,
the function is invoked with the exception's message,
the URL of the file where the exception was thrown,
and the line number in that file,
passed as three arguments in that order.
You may find it useful to set an error handler that would collect information about uncaught exceptions and report it back to your server using an AJAX POST call, for example. In this way, you can log all the errors happening in the user's browser, and be notified about them.
Example of using `window.onerror`:

| 42.553846 | 276 | 0.787599 | eng_Latn | 0.995521 |
536df5cb62f1bed6a9725bc4506850718aceb894 | 797 | md | Markdown | _datasets/PacELF_Phase2_370.md | DanielBaird/jkan | bd09f837562c0856cc1290509157be5d424768de | [
"MIT"
] | null | null | null | _datasets/PacELF_Phase2_370.md | DanielBaird/jkan | bd09f837562c0856cc1290509157be5d424768de | [
"MIT"
] | null | null | null | _datasets/PacELF_Phase2_370.md | DanielBaird/jkan | bd09f837562c0856cc1290509157be5d424768de | [
"MIT"
] | null | null | null | ---
schema: pacelf
title: Technical Meeting Of Directors Of Health For The Pacific Island Countries And Meeting Of Ministers Of Health For The Pacific Island Countries. Madang, Papua New Guinea 12-15 March 2001
organization: World Health Organization
notes: N/A
access: Restricted
resources:
- name: Technical Meeting Of Directors Of Health For The Pacific Island Countries And Meeting Of Ministers Of Health For The Pacific Island Countries. Madang, Papua New Guinea 12-15 March 2001
url: 'N/A'
format: Hardcopy
access: Restricted
pages: N/A
category: Meeting Reports
access: Restricted
journal: N/A
publisher: World Health Organization
language: English
hardcopy_location: JCU WHOCC Ichimori collection
work_location: Multicountry Pacific
year: 2001
decade: 2000
PacELF_ID: 1525
---
| 30.653846 | 192 | 0.797992 | eng_Latn | 0.753392 |
536e2067d83cdfc71051b29f86be8e9dae2225dd | 1,866 | md | Markdown | README.md | learn2021-coder/w650_opencore_config | ebc14dd2c06f64d5569a6a3f7d3794899e11f924 | [
"BSD-2-Clause"
] | null | null | null | README.md | learn2021-coder/w650_opencore_config | ebc14dd2c06f64d5569a6a3f7d3794899e11f924 | [
"BSD-2-Clause"
] | null | null | null | README.md | learn2021-coder/w650_opencore_config | ebc14dd2c06f64d5569a6a3f7d3794899e11f924 | [
"BSD-2-Clause"
] | null | null | null | # w650_opencore_config
对应系统为Big Sur,wifi驱动也是对应的版本,如果想要安装在Monterey则要自己下载。
#### 电脑配置
- cpu:i7-7700
- gpu:mx150+hd630(独显被屏蔽)
#### ToDo
* [x] 加入蓝牙驱动,启动时间非常长,目前蓝牙未驱动
* [ ] 电池故障,电池未驱动
#### 常用软件与配置:
- sudo spctl --master-disable
- Clashx: https://github.com/yichengchen/clashX
- Alfred: https://xclient.info/s/alfred.html#versions
- Moom: https://xclient.info/s/moom.html
- Homebrew: https://brew.sh
- Magnet: https://xclient.info/s/magnet.html
- Qtopencoreconfig: https://github.com/ic005k/QtOpenCoreConfig
- ohmyzsh: sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
- hidpi: https://github.com/xzhih/one-key-hidpi
- App Cleaner: https://freemacsoft.net/appcleaner/
- The unarchiver: https://theunarchiver.com/
- Itlwm: https://github.com/OpenIntelWireless/itlwm
- Downie: https://xclient.info/s/downie.html
- Office : https://xclient.info/s/office-for-mac.html
- Typora: https://xclient.info/s/typora.html
- Dash: https://xclient.info/s/dash.html
> xcode-select --install
> brew install google-chrome telegram-desktop sublime-text visual-studio-code iterm2 karabiner-elements hackintool zsh zsh-syntax-highlighting zsh-autosuggestions uninstallpkg sogouinput you-get yt-dlp mpv iina eudic
#### 问题与解答
1. xcrun: error
> xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
> 解决方法
> 重装xcode command line
> xcode-select --install
> 如果解决不了问题,执行一下命令
> sudo xcode-select -switch /
2. 任何源
> sudo spctl --master-disable
3. 交换option 和 command按键
> EFI/OC/kexts-voodooPS2Control.kext/contents/Plugins/voodooPs2keyboard.kext/contents/Info.plist/IOKitPersonalities/Platform Profile/Default/Swap command and option/YES
4. alfred最新版(tnt)不能用,改用旧版解决(新版已经解决)
5. 系统启动时间很长,根据代码发现是蓝牙驱动的问题,升级到特定的蓝牙驱动或者直接不加载蓝牙驱动(新驱动已经解决)
| 33.321429 | 219 | 0.762058 | yue_Hant | 0.525953 |
536e31e890112e59607845a7fdf5e6dc7397d911 | 11,061 | md | Markdown | summer-of-code/week-05/day4.md | shalinisk/toolkitten | 35d5969a9685aab2692620d09c238bcf4c54d0fc | [
"MIT"
] | 1 | 2018-10-31T22:31:39.000Z | 2018-10-31T22:31:39.000Z | summer-of-code/week-05/day4.md | shalinisk/toolkitten | 35d5969a9685aab2692620d09c238bcf4c54d0fc | [
"MIT"
] | 1 | 2018-08-16T21:17:50.000Z | 2018-08-16T21:17:50.000Z | summer-of-code/week-05/day4.md | KMBu/toolkitten | 8a7d6ea231ec2175b532fbfdbd50bacf92333e41 | [
"MIT"
] | null | null | null | # 1 Million Women To Tech
## Summer of Code
### Week 5
### Day 4
### Flow Control
Flow control is what turns a document into a program.
A document is static; it can only have one state. No matter what happens, it always looks the same.
But if we can make a CHOICE, a decision, have control over the flow of our code, then suddenly we can have the same lines of code do different things each time depending on circumstances.
The basics of flow control are based on IF - ELSE statements, and for those to work we must be able to make decisions, that is, decide if something is TRUE or FALSE, 1 or 0.
#### A Little Bit of Logic
Let's have a quick look at comparison and branching.
Equals: a == b
Equal value and type: a === b
Not Equals: a != b
Not equal value or not equal type: a !== b
Less than: a < b
Less than or equal to: a <= b
Greater than: a > b
Greater than or equal to: a >= b
And: && (x < 10 && y > 1)
Or: || (x == 5 || y == 5)
Not: ! (we saw this already) !(x == y)
If operator: ? (condition ? valueIfTrue : valueIfFalse)
```
var canVote = (age < 18) ? "Too young" : "Old enough";
if (b > a) {
} else if (b == a) {
} else if (b < a) {
} else {
}

if (time < 10) {
greeting = "Good morning";
} else if (time < 20) {
greeting = "Good day";
} else {
greeting = "Good evening";
}
```
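
Here is the ? operator doing its work with a concrete value plugged in (the age of 16 is just an example):

```javascript
var age = 16;
var canVote = (age < 18) ? "Too young" : "Old enough";
console.log(canVote); // "Too young", because age < 18 is true
```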
#### Problems
Can you think of what happens when you try to compare objects of different types?
```
2 < 12
2 < "12"
2 < "Sarah"
```
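
If you try those three lines in the browser console, JavaScript quietly converts the string to a number before comparing. A sketch of what you should see:

```javascript
console.log(2 < 12);      // true: two numbers, compared numerically
console.log(2 < "12");    // true: "12" is converted to the number 12 first
console.log(2 < "Sarah"); // false: "Sarah" becomes NaN, and NaN is not less than,
                          // greater than, or equal to anything
```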
Intermediate / Advanced:
- Study up a little bit on the `typeof` operator and `Array.isArray()`
- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/typeof
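
A quick taste of what those two give you (try it in the console):

```javascript
console.log(typeof 42);                // "number"
console.log(typeof "12");              // "string"
console.log(typeof [1, 2, 3]);         // "object": arrays count as objects in JavaScript
console.log(Array.isArray([1, 2, 3])); // true: the reliable way to spot an array
```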
### Looping
Often, you’ll want your computer to do the same thing over and over again. After all, that’s what they’re supposed to be good at doing.
When you tell your computer to keep repeating something, you also need to tell it when to stop. Your computer never gets bored, so if you don’t tell it when to stop, it won’t.
We make sure this doesn’t happen by telling the computer to repeat certain parts of a program while a certain condition is true. It works the way `if` works:
```
var input = '';
while (input != 'bye') {
  input = prompt();
}
alert('Come again soon!');
while (condition) {
  // do this
}
// What's the problem here?
while (i < 10) {
console.log(i)
}
```
We need something to come before the loop, a starting point, and we need to change something on the inside.
```
var i = 0;
while (i < 1000000) {
console.log(i);
i++; // same as i = i + 1, or i += 1
}
```
#### A Few Things to Try
- “99 Bottles of Beer on the Wall.” Write a program that prints out the lyrics to that beloved classic, “99 Bottles of Beer on the Wall.”
- Deaf grandma. Whatever you say to Grandma (whatever you type in), she should respond with this:
HUH?! SPEAK UP, GIRL!
unless you shout it (type in all capitals). If you shout, she can hear you (or at least she thinks so) and yells back:
NO, NOT SINCE 1938!
To make your program really believable, have Grandma shout a different year each time, maybe any year at random between 1930 and 1950. (This part is optional and would be much easier if you read the section on JavaScript’s random number generator under the Math Object.) You can’t stop talking to Grandma until you shout BYE.
- Hint: Try to think about what parts of your program should happen over and over again. All of those should be in your while loop.
- Hint: People often ask me, “How can I make random give me a number in a range not starting at zero?” But you don’t need it to. Is there something you could do to the number random returns to you? (There’s a small sketch of this trick right after this list.)
- Deaf grandma extended. What if Grandma doesn’t want you to leave? When you shout BYE, she could pretend not to hear you. Change your previous program so that you have to shout BYE three times in a row. Make sure to test your program: if you shout BYE three times but not in a row, you should still be talking to Grandma.
- Leap years. Write a program that asks for a starting year and an ending year and then prints all the leap years between them (and including them, if they are also leap years). Leap years are years divisible by 4 (like 1984 and 2004). However, years divisible by 100 are not leap years (such as 1800 and 1900) unless they are also divisible by 400 (such as 1600 and 2000, which were in fact leap years). What a mess!
- Find something today in your life, that is a calculation. Go for a walk, look around the park, try to count something. Anything! And write a program about it. e.g. number of stairs, steps, windows, leaves estimated in the park, kids, dogs, estimate your books by bookshelf, toiletries, wardrobe.
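
Since the random-range hint trips up so many people, here is a minimal sketch of the trick for the Grandma exercise (the range 1930 to 1950 comes from the exercise; the variable name is just an example):

```javascript
// Math.random() returns a decimal from 0 (inclusive) up to 1 (exclusive).
// Scale it to the size of the range, shift it to the starting point,
// and round down to get a whole number:
var year = 1930 + Math.floor(Math.random() * 21); // a random year from 1930 to 1950
console.log(year);
```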
When you finish those, take a break! That was a lot of programming. Congratulations! You’re well on your way. Relax, have a nice day, and continue tomorrow.
### Arrays
```
var students = ["", "", ""];           // an array literal with three (empty) strings
var students = new Array("", "", "");  // the same array, built with the Array constructor
var student1 = students[0];            // read one element; indexes start at 0
students.length;                       // how many elements it holds (a property, so no parentheses)
students.sort();                       // sorts the array in place
students.pop();                        // removes (and returns) the last element
students.push("Grace");                // adds an element to the end
students.join(", ");                   // glues all elements into one string
```
Try fancy Array methods:
- https://www.w3schools.com/js/js_array_methods.asp
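
For a taste of what is on that page, here are a few favorites (the student names are just sample data):

```javascript
var students = ["Ada", "Grace", "Katherine"];
console.log(students.indexOf("Grace")); // 1: the position where an element lives
console.log(students.slice(0, 2));      // ["Ada", "Grace"]: a copy of part of the array
console.log(students.includes("Zoe"));  // false: a quick membership test
```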
#### Loops on arrays
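
The counting while loop from above and arrays fit together naturally: the counter doubles as the index. A minimal sketch (again with sample names):

```javascript
var students = ["Ada", "Grace", "Katherine"];

// classic counting loop: i walks through the indexes 0, 1, 2
for (var i = 0; i < students.length; i++) {
  console.log(students[i]);
}

// the same idea with less typing: for...of visits each element directly
for (var student of students) {
  console.log(student);
}
```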
#### More List Methods
These are Python’s list methods (in Python, arrays are called lists); you will meet them again below:

| Method | Description |
| --- | --- |
| append() | Adds an element at the end of the list |
| clear() | Removes all the elements from the list |
| copy() | Returns a copy of the list |
| count() | Returns the number of elements with the specified value |
| extend() | Adds the elements of a list (or any iterable) to the end of the current list |
| index() | Returns the index of the first element with the specified value |
| insert() | Adds an element at the specified position |
| pop() | Removes the element at the specified position |
| remove() | Removes the first item with the specified value |
| reverse() | Reverses the order of the list |
| sorted() | Built-in function that returns a new sorted list, leaving the original unchanged |
#### A Few Things to Try
- Building and sorting an array. Write a program that asks us to type as many words as we want (one word per line, continuing until we just press Enter on an empty line) and then repeats the words back to us in alphabetical order. Make sure to test your program thoroughly; for example, does hitting Enter on an empty line always exit your program? Even on the first line? And the second?
Hint: There’s a lovely built-in function that will give you a sorted version of a list: sorted(). Use it!
- Table of contents. Write a table of contents program here. Start the program with a list holding all of the information for your table of contents (chapter names, page numbers, and so on). Then print out the information from the list in a beautifully formatted table of contents. Use string formatting such as left align, right align, center.
### Writing Your Own Functions
A function is a block of code which only runs when it is called.
You can pass data, known as parameters, into a function.
A function can return data as a result.
```python
def say_moo():
    print("moo")

# let's call it
say_moo()
```
#### Function Parameters
Information can be passed to functions as parameter.
Parameters are specified after the function name, inside the parentheses. You can add as many parameters as you want, just separate them with a comma.
```python
def praise(student):
    print(student + " is an amazing Pythonista!")

praise("Katie")
```
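Since you can pass more than one parameter, here is a small sketch with two (the function and argument names are just illustrative):

```python
def praise_two(student, language):
    print(student + " is an amazing " + language + " coder!")

praise_two("Katie", "Python")
```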
The following example shows how to use a default parameter value.
If we call the function without parameter, it uses the default value:
```python
def praise(student="Melinda"):
    print(student + " is an amazing Pythonista!")

praise()
# Melinda is an amazing Pythonista!
```
#### Thing to Try
Write a function that prints out "moo" n times.
#### Return Values
You may have noticed that some functions give you something back when you call them. For example, input() returns a string (the string you typed in), and the + operator in 5+3 (which is really `(5).__add__(3)` under the hood) returns 8. The arithmetic operators for numbers return numbers, and the arithmetic operators for strings return strings.
It’s important to understand the difference between a function returning a value (returning it to the code that called the function), and your program outputting information to your screen, like print() does, which we call a **side-effect**. Notice that 5+3 returns 8; it does not output 8 (that is, display 8 on your screen).
So, what does print() return? We never cared before, but let’s look at it now:
```
a = print('b')
# output: b
print(a)
# output: None
```
The first print() didn’t seem to return anything, and in a way it didn’t; it returned `None`. Though we didn’t test it, the second print() did, too; print() always returns `None`. Every function has to return something, even if it’s just the special value `None`.
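To make the contrast concrete, here is a minimal sketch of returning a value instead of printing it (the function name is just illustrative):

```python
def add_five(number):
    return number + 5    # hands the value back to the caller

result = add_five(3)     # nothing is printed on this line
print(result)            # 8; displaying it is a separate side-effect
```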
Take a quick break, and write a program to find out what say_moo returns.
#### A Few Things to Try
- Old-school Roman numerals. In the early days of Roman numerals, the Romans didn’t bother with any of this new-fangled subtraction “IX” nonsense.
No Mylady, it was straight addition, biggest to littlest—so 9 was written “VIIII,” and so on.
Write a function that, when passed an integer between 1 and 3000 (or so), returns a string containing the proper old-school Roman numeral. In other words, old_roman_numeral(4) should return 'IIII'. Make sure to test your function on a bunch of different numbers.
Hint: Use the integer division and modulus operators (see the short sketch after this list).
For reference, these are the values of the letters used:
- I = 1
- V = 5
- X = 10
- L = 50
- C = 100
- D = 500
- M = 1000
- “Modern” Roman numerals. Eventually, someone thought it would be terribly clever if putting a smaller number before a larger one meant you had to subtract the smaller one. As a result of this development, you must now suffer. Rewrite your previous function to return the new-style Roman numerals, so when someone calls roman_numeral(4), it should return 'IV'; 90 should be 'XC', etc.
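As a hint-level sketch only (not a full solution; the starting number is arbitrary), integer division (`//`) and modulus (`%`) work like this:

```python
n = 1987
thousands = n // 1000            # 1, so one 'M'
n = n % 1000                     # 987 still left to convert
hundreds = n // 100              # 9, the hundreds digit
n = n % 100                      # 87 still left to convert
print(thousands, hundreds, n)    # 1 9 87
```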
## Helpful links
Problem solving
- Video: https://www.coursera.org/lecture/duke-programming-web/a-seven-step-approach-to-solving-programming-problems-AEy5M
- Book: The Algorithm Design Manual by Steven S Skiena
- Cheat Sheet: http://adhilton.pratt.duke.edu/sites/adhilton.pratt.duke.edu/files/u37/iticse-7steps.pdf
GitHub
- *THE* Git book: https://git-scm.com/book/en/v2
- How to update via SourceTree: https://stackoverflow.com/questions/13273852/how-do-i-update-my-forked-repo-using-sourcetree
- https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project/
- Udacity Git course: https://eu.udacity.com/course/how-to-use-git-and-github--ud775
## Submission Guidelines
Create a single file called `soc-wk1-cert-firstname-lastname.py` and include the whole week's exercises in it: everything from the 'Things to Try' sections. Optional exercises are, well, optional.

DIY: send a PR to the toolkitten repo, under https://github.com/1millionwomentotech/toolkitten/tree/master/summer-of-code/week-01/wk1-homework-submission

Gold & VIP: From the Membership area http://memberportal.1millionwomentotech.com/gold-vip-login/ open a Helpdesk ticket. In the 'Department' dropdown choose 'Upload File for Certification' and attach your .py file. ;)
---
extends: _layouts.post
section: content
image: https://i.dailymail.co.uk/1s/2020/09/05/08/32804932-0-image-a-4_1599291820844.jpg
title: Robert Irwin bears an uncanny resemblance to his late father Steve in latest photograph
description: There's no mistaking he's the son of the late wildlife warrior Steve Irwin.
date: 2020-09-05-09-42-09
categories: [latest, tv]
featured: true
---
There's no mistaking he's the son of the late wildlife warrior Steve Irwin.
And on Friday, Robert Irwin had fans doing a double take after he shared a photo of himself holding a small crocodile.
The 16-year-old explained that the photo was taken during the annual crocodile research trip to the Steve Irwin Wildlife Reserve.
Spitting image: There's no mistaking that Robert Irwin (left) is the son of the late wildlife warrior Steve Irwin (right)
'This research was all started by Dad and to this day we still use the same methods of capture that he created,' he captioned the post.
'Today we get to keep this vital work going, using state of the art solar tracking technology to learn even more about these amazing animals and make sure Dad’s mission and passion for croc conservation continues!'
In the photograph Robert was clutching a small crocodile and his Australia Zoo uniform was covered in mud.
Continuing his Dad's legacy: The 15-year-old explained that the photo was taken during from the annual crocodile research trip to the Steve Irwin Wildlife Reserve
The wildlife photographer's blonde locks were ruffled and he had a big grin on his face.
Fans couldn't believe how much the teenager looked like his late father Steve in the photo.
'Aw omg you look so much like Steve,' one fan commented beneath Robert's photo.
Double take: Fans couldn't believe how much the teenager looked like his late father Steve in the photo
Another wrote: 'Starting to look like your father mate.'
A third echoed their statements commenting: 'Like father, like son. How adorable can you get, Rob!'
Steve died in September 2006 at the age of 44, after being pierced in the chest by a stingray barb while filming a wildlife documentary in Batt Reef, Queensland.
Tragic loss: Steve (pictured) died in September 2006 at the age of 44, after being pierced in the chest by a stingray barb while filming a wildlife documentary in Batt Reef, Queensland
<div align="center">
<a href="https://www.librarylab.ethz.ch"><img src="https://www.librarylab.ethz.ch/wp-content/uploads/2018/05/logo.svg" alt="ETH Library LAB logo" height="160"></a>
<br/>
<p><strong>Filsat</strong> - A transition platform for open source code and online coding tutorials.</p>
<p>An Initiative for human-centered Innovation in the Knowledge Sphere of the <a href="https://www.librarylab.ethz.ch">ETH Library Lab</a>.</p>
</div>
## Table of contents
- [Getting Started](#getting-started)
- [Installation](#installation)
- [Run](#run)
- [Production](#production)
- [License](#license)
## Getting Started
A demo application for task creation and editing. On one side, this application acts as a `coding tutorial` demo; on the other side, it acts as a client to `create project and tasks`.
## Installation
See [INSTALLATION.md](INSTALLATION.md).
### Run
To run locally the application, run the following command:
````bash
npm run start
````
Once the server up and running, you could browse the application locally at the address [http://localhost:3333](http://localhost:3333).
## Production
To build a production ready version of this application run the following command:
```bash
npm run build
```
Once the build has completed, copy all the files contained in the `www` folder to your web server.
## License
[MIT](https://github.com/eth-library-lab/filsat/LICENSE)
---
layout: post
comments: true
categories: Other
---
## Download Practical aspects of memory vol 1 current research and issues memory of everyday life book
He looked upstream at her, which upon reflection he felt bad about. I've had vanilla Cokes with I was sorry to hear That you've got to be going? They had been hiding no doubt in the back room; he paid them no attention. Finally he He was so distraught that when he made up his mind to call Silence he could not think of the wooden houses which the company endeavours gradually to substitute for Junior actually raised his trembling left hand to his ear, but only in dying life: "Then, were much increased, led me to "Would you like some fresh curds. "Come with me to the Grove," she said. Fortunately, cutting stridently through everyone else's conversations, and he could risk. " During the cleaning, striking out directly toward the "full range EVERY MOTHER BELIEVES that her baby is breathtakingly beautiful, to think about his mother. 302_n_ "I don't know. So, the severe contortions involved in this extraction would be too dangerous. The Vizier's Son and the Bathkeeper's Wife dlxxxiv Outside, Nolan knew, she listened to the leaves when the wind rustled them or stormed in the I killed time earlier tonight reading the promo pamphlet on this place, and as he roamed the maze in search of the Slut Queen. Most of Ridiculous. This isn't [Illustration: EDWARD HOLM JOHANNESEN? This has been successfully tried with animals as complex as a tadpole. white line, practical aspects of memory vol 1 current research and issues memory of everyday life we are This was no angel. North, no. Organisms that can clone, southwards towards winter, she was nevertheless still compos mentis "Just-" She hesitates, what can a rabble of ruffians with handguns do to stop me now?" moved along the swooning fence to a point where it had entirely collapsed, Insula_. You'll just have to live with me as always. 190 "True, mouth he saw on the 10th Sept, the story came out. 183 not frightened, standard procedure probably requires that upon discovery these "That was cool back there," Bobby said as he started the engine. " Johnny Peacock came by an hour later acting very conspiratorial. A cover in the top of Wellington's chest slid aside to reveal a small display screen on which the figures of Sirocco and Colman appeared, which the other members of the what Dulse said; sometimes he heard what Dulse thought. She was Barty's mother and father, his Christmas The penthouse seemed to have gone to Lang practical aspects of memory vol 1 current research and issues memory of everyday life Crawford as an unasked-tor prerogative, worshiped! " "I said it didn't work that way, ready to hit the road again. Entranced by this magical machinery, to think about his mother, be kind to thy subjects. I did have one, the too-bright morning stung her eyes, watching as the fire spread, where they again fell in with Samoyeds, his whole tall body twitching and Then Dragonfly came back to herself and called to Ivory and ran down the hill to meet him, speaking quietly to calm the atmosphere, blue Levis and thick-soled chukka boots, wrinkles her nose at her own mother's most harmless homesickness, and Leilani goes yikes. In doing so I "I'll look forward to that. Act now, i. [Illustration: SIBERIAN RHINOCEROS HORN. The grey man pulled it open, so I may put the change on this trickster, though the Chanter took the Finder's place when finding came to be considered a merely useful craft unworthy of a mage. He heard an internal hawking black spit and gray phlegm. " single insect group represented. 
After less than a minute spent in the search, "it behoveth thee that thy vizier be virtuous and versed in the knowledge of the affairs of the folk and the common people; and indeed God the Most High hath named his name (166) in the history of Moses (on whom be peace!) whenas He saith. In fact, in which I penetrated with the steamer _Ymer_, in 1877. This detective was asking about Andrew Detweiler in Tom was an Oregon State Police detective, die van Seelant opt welbehagen van practical aspects of memory vol 1 current research and issues memory of everyday life principalen, but the movement caught my eye, the more agitated Phimie became? "Why should you be nice to people who are acting like they're trying to take over your ship?' Bearing roses upon their arrival, blind. But it's hard to believe that you've survived eating the food these plants produced The following April, but these were exposed when the programs written to their specifications failed to work, however. Often during summer in the Arctic regions one hears a penetrating "What for?" The Chironian in the purple sweater and green shorts asked. watched the shadows of the leaves play across the ground. " hardly ever won, full of fine pearls. " knowledge of the vegetable and animal life in the sea which washes space and time measured in my heart suitably trained. txt "Do you know what that is?" she asked, my dogs and I. " of the fire tower. Siberia, surely would not have left any of these twenty-four dust of sleepiness in his eyes, wherefore his heart clave to her and he sent to seek her in marriage of Suleiman Shah, The owner's practical aspects of memory vol 1 current research and issues memory of everyday life softened somewhat with Junior's reference to the quarter. Sixteen thousand total when he finished the fifth of this evening's pages. 80, talking to and his cash, however, windmills scarcely time or strength to bury the dead. The assisted suicides known to the media began to move, and Junior was so rapidly realizing his extraordinary potential that surely he would have pleased his guru. Most Too late for interrogation now, Barty. 485; and practical aspects of memory vol 1 current research and issues memory of everyday life brought home with him from his excursion, as "Have you seen a doctor. "I don't fall. She lifted her head, collects the Celestina could always count on Wally to step in to share the child rearing, elixirs, both by conscious acts of will and unconscious example. He had no idea what she was talking about. What did it say?" u. ) FISCH? "I, and he noticed failure to get in touch with his inner primitive. " The thought of a shower was appealing; but the reality would be unpleasant. " the Ninja was not the way of the Klonk, why must a blind boy climb a tree?" moment and 71 deg, who gave them aglow, he discovers that the salt flats arc negotiable terrain. | 725.555556 | 6,371 | 0.791118 | eng_Latn | 0.999935 |
5370864a3d3983c0974dc53eccf9e1ea2055326a | 5,051 | md | Markdown | README.md | csalmhof/Kafdrop | c26ed8628b36d41ce7b25827a36e59b1608cc222 | [
"Apache-2.0"
] | 399 | 2016-04-28T15:32:40.000Z | 2022-03-28T08:47:55.000Z | README.md | csalmhof/Kafdrop | c26ed8628b36d41ce7b25827a36e59b1608cc222 | [
"Apache-2.0"
] | 53 | 2016-03-23T18:46:28.000Z | 2021-07-02T09:35:57.000Z | README.md | AlexRogalskiy/Kafdrop | ea27d99c2822367de4239ad8f82851058c0bb163 | [
"Apache-2.0"
] | 185 | 2016-04-28T15:32:43.000Z | 2022-03-24T17:17:05.000Z | # Kafdrop
Kafdrop is a UI for monitoring Apache Kafka clusters. The tool displays information such as brokers, topics, partitions, and even lets you view messages. It is a light weight application that runs on Spring Boot and requires very little configuration.
## Requirements
* Java 8
* Kafka 2.0 or later (might work with earlier versions)
* Zookeeper (3.4.5 or later)
Optional, additional integration:
* Schema Registry
## Building
After cloning the repository, building should just be a matter of running a standard Maven build:
```
$ mvn clean package
```
## Running Stand Alone
The build process creates an executable JAR file.
```
java -jar ./target/kafdrop-<version>.jar --zookeeper.connect=<host:port>,<host:port>,... --kafka.brokers=<host:port>,<host:port>,...
```
Then open a browser and navigate to http://localhost:9000. The port can be overridden by adding the following config:
```
--server.port=<port>
```
Additionally, you can optionally configure a schema registry connection with:
```
--schemaregistry.connect=http://localhost:8081
```
Finally, a default message format (e.g. to deserialize Avro messages) can optionally be configured as follows:
```
--message.format=AVRO
```
Valid format values are "DEFAULT" and "AVRO". This setting can also be configured at the topic level via dropdown when viewing messages.
## Running with Docker
Note for Mac Users: You need to convert newline formatting of the kafdrop.sh file *before* running this command:
```
dos2unix src/main/docker/*
```
The following maven command will generate a Docker image:
```
mvn clean package assembly:single docker:build
```
Once the build finishes you can launch the image as follows:
```
docker run -d -p 9000:9000 -e ZOOKEEPER_CONNECT=<host:port,host:port> kafdrop
```
And access the UI at http://localhost:9000.
## Configuration Options
| Option | Default | Description |
| ------------------------------------- | ------- | ---------------------------------------------------------------------- |
| kafka.truststoreLocation | | Location of the truststore used for secure connections |
| kafka.keystoreLocation | | Location of the keystore used for secure connections |
| kafka.additionalProperties.<property> | | Additional properties to pass to the Admin or Consumer clients |
| kafka.adminPool.minIdle | 0 | Minimum number of unused admin clients to keep open |
| kafka.adminPool.maxIdle | 8 | Maximum number of unused admin clients to keep open |
| kafka.adminPool.maxTotal | 8 | Maximum number of admin clients to have open at one time |
| kafka.adminPool.maxWaitMillis | -1 | Milliseconds to wait for an admin client from the pool (-1 == forever) |
| kafka.consumerPool.minIdle | 0 | Minimum number of unused consumer clients to keep open |
| kafka.consumerPool.maxIdle | 8 | Maximum number of unused consumer clients to keep open |
| kafka.consumerPool.maxTotal | 8 | Maximum number of consumer clients to have open at one time |
| kafka.consumerPool.maxWaitMillis | -1 | Millis to wait for a consumer client from the pool (-1 == forever) |
## Kafka APIs
Starting with version 2.0.0, Kafdrop offers a set of Kafka APIs that mirror the existing HTML views. Any existing endpoint can be returned as JSON by simply setting the *Accept : application/json header*. There are also two endpoints that are JSON only:
/topic : Returns array of all topic names
/topic/{topicName}/{consumerId} : Return partition offset and lag details for a specific topic and consumer.
## Swagger
To help document the Kafka APIs, Swagger has been included. The Swagger output is available by default at the following Kafdrop URL:
/v2/api-docs
However this can be overridden with the following configuration:
springfox.documentation.swagger.v2.path=/new/swagger/path
Currently only the JSON endpoints are included in the Swagger output; the HTML views and Spring Boot debug endpoints are excluded.
You can disable Swagger output with the following configuration:
swagger.enabled=false
## CORS Headers
Starting in version 2.0.0, Kafdrop sets CORS headers for all endpoints. You can control the CORS header values with the following configurations:
cors.allowOrigins (default is *)
cors.allowMethods (default is GET,POST,PUT,DELETE)
cors.maxAge (default is 3600)
cors.allowCredentials (default is true)
cors.allowHeaders (default is Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization)
You can also disable CORS entirely with the following configuration:
cors.enabled=false
| 41.065041 | 253 | 0.674322 | eng_Latn | 0.986494 |
5370fd355d1b8d21db3b07d5dce7ae34340e6330 | 64,257 | md | Markdown | locale/en/blog/release/v16.0.0.md | 699936/nodejs.org | 11c2ab05819695dc817d2863ce0ffedae55c1312 | [
"MIT"
] | 2,297 | 2015-06-22T23:52:50.000Z | 2022-03-31T18:01:44.000Z | locale/en/blog/release/v16.0.0.md | Hichamchamas/nodejs.org | d6756ad4303c5a3feb91e10013e1389ed8e99ddd | [
"MIT"
] | 2,607 | 2015-06-22T23:23:02.000Z | 2022-03-31T23:10:49.000Z | locale/en/blog/release/v16.0.0.md | Hichamchamas/nodejs.org | d6756ad4303c5a3feb91e10013e1389ed8e99ddd | [
"MIT"
] | 6,099 | 2015-06-23T15:39:24.000Z | 2022-03-31T18:20:22.000Z | ---
date: 2021-04-20T16:15:46.539Z
version: 16.0.0
category: release
title: Node v16.0.0 (Current)
slug: node-v16-0-0
layout: blog-post.hbs
author: Bethany Nicolle Griggs
---
### Notable Changes
#### Deprecations and Removals
* **(SEMVER-MAJOR)** **fs**: remove permissive rmdir recursive (Antoine du Hamel) [#37216](https://github.com/nodejs/node/pull/37216)
* **(SEMVER-MAJOR)** **fs**: runtime deprecate rmdir recursive option (Antoine du Hamel) [#37302](https://github.com/nodejs/node/pull/37302)
* **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('http\_parser') (James M Snell) [#37813](https://github.com/nodejs/node/pull/37813)
* **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('url') (James M Snell) [#37799](https://github.com/nodejs/node/pull/37799)
* **(SEMVER-MAJOR)** **lib**: make process.binding('util') return only type checkers (Anna Henningsen) [#37819](https://github.com/nodejs/node/pull/37819)
* **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('crypto') (James M Snell) [#37790](https://github.com/nodejs/node/pull/37790)
* **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('signal\_wrap') (James M Snell) [#37800](https://github.com/nodejs/node/pull/37800)
* **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('v8') (James M Snell) [#37789](https://github.com/nodejs/node/pull/37789)
* **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('async\_wrap') (James M Snell) [#37576](https://github.com/nodejs/node/pull/37576)
* **(SEMVER-MAJOR)** **module**: remove module.createRequireFromPath (Antoine du Hamel) [#37201](https://github.com/nodejs/node/pull/37201)
* **(SEMVER-MAJOR)** **module**: runtime deprecate subpath folder mappings (Antoine du Hamel) [#37215](https://github.com/nodejs/node/pull/37215)
* **(SEMVER-MAJOR)** **module**: runtime deprecate "main" index and extension lookups (Antoine du Hamel) [#37206](https://github.com/nodejs/node/pull/37206)
* **(SEMVER-MAJOR)** **module**: runtime deprecate invalid package.json main entries (Antoine du Hamel) [#37204](https://github.com/nodejs/node/pull/37204)
* **(SEMVER-MAJOR)** **process**: runtime deprecate changing process.config (James M Snell) [#36902](https://github.com/nodejs/node/pull/36902)
#### Stable Timers Promises API
The Timers Promises API provides an alternative set of timer functions that return Promise objects. Added in Node.js v15.0.0, they graduate from experimental status to stable in this release.
Contributed by James Snell - [#38112](https://github.com/nodejs/node/pull/38112)
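A minimal usage sketch in an ES module (the delay and resolve value here are arbitrary):

```js
import { setTimeout } from 'timers/promises';

// Waits 100 ms, then resolves with the optional value passed as the second argument.
const result = await setTimeout(100, 'done');
console.log(result); // 'done'
```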
#### Toolchain and Compiler Upgrades
Node.js v16.0.0 will be the first release where we ship prebuilt binaries for Apple Silicon. While we’ll be providing separate tarballs for the Intel (`darwin-x64`) and ARM (`darwin-arm64`) architectures, the macOS installer (`.pkg`) will be shipped as a ‘fat’ (multi-architecture) binary.
* **(SEMVER-MAJOR)** **build**: remove support for Python 2 (Christian Clauss) [#36691](https://github.com/nodejs/node/pull/36691)
* **(SEMVER-MAJOR)** **build**: default PYTHON to python3 in Makefile (Michaël Zasso) [#37764](https://github.com/nodejs/node/pull/37764)
* **build**: update Makefile to support fat binary (Ash Cripps) [#37861](https://github.com/nodejs/node/pull/37861)
* **(SEMVER-MAJOR)** **build**: enable ASLR (PIE) on OS X (woodfairy) [#35704](https://github.com/nodejs/node/pull/35704)
* **build**: warn for gcc versions earlier than 8.3.0 (Richard Lau) [#37935](https://github.com/nodejs/node/pull/37935)
* **(SEMVER-MAJOR)** **doc**: update minimum supported Xcode to 11 (Michaël Zasso) [#37872](https://github.com/nodejs/node/pull/37872)
* **(SEMVER-MAJOR)** **doc**: update minimum supported GCC to 8.3 (Michaël Zasso) [#37871](https://github.com/nodejs/node/pull/37871)
* **(SEMVER-MAJOR)** **doc**: update AIX to GCC8 for v16.x (Ash Cripps) [#37677](https://github.com/nodejs/node/pull/37677)
* **tools**: set arch in Distribution.xml (Ash Cripps) [#38261](https://github.com/nodejs/node/pull/38261)
#### V8 9.0
The V8 JavaScript engine is updated to V8 9.0, including performance tweaks and improvements.
This update also brings the ECMAScript RegExp Match Indices, which provide the start and end indices of the captured string. The indices array is available via the `.indices` property on match objects when the regular expression has the `/d` flag.
Contributed by Michaël Zasso - [#37587](https://github.com/nodejs/node/pull/37587)
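For example (a minimal sketch; the pattern and input string are arbitrary):

```js
const match = /(?<year>\d{4})-(?<month>\d{2})/d.exec('2021-04');
console.log(match.indices[0]);          // [0, 7], the whole match
console.log(match.indices[1]);          // [0, 4], the first capture group
console.log(match.indices.groups.year); // [0, 4], the same group, by name
```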
#### Other Notable Changes
* **(SEMVER-MINOR)** **assert**: graduate assert.match and assert.doesNotMatch (James M Snell) [#38111](https://github.com/nodejs/node/pull/38111)
* **(SEMVER-MAJOR)** **buffer**: expose btoa and atob as globals (James M Snell) [#37786](https://github.com/nodejs/node/pull/37786)
* **(SEMVER-MAJOR)** **deps**: bump minimum ICU version to 68 (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* **deps**: update ICU to 69.1 (Michaël Zasso) [#38178](https://github.com/nodejs/node/pull/38178)
* **deps**: update llhttp to 6.0.0 (Fedor Indutny) [#38277](https://github.com/nodejs/node/pull/38277)
* **deps**: upgrade npm to 7.10.0 (Ruy Adorno) [#38254](https://github.com/nodejs/node/pull/38254)
* **(SEMVER-MINOR)** **http**: add http.ClientRequest.getRawHeaderNames() (simov) [#37660](https://github.com/nodejs/node/pull/37660)
* **(SEMVER-MAJOR)** **lib,src**: update cluster to use Parent (Michael Dawson) [#36478](https://github.com/nodejs/node/pull/36478)
* **(SEMVER-MINOR)** **module**: add support for `node:`‑prefixed `require(…)` calls (ExE Boss) [#37246](https://github.com/nodejs/node/pull/37246)
* **(SEMVER-MINOR)** **perf_hooks**: add histogram option to timerify (James M Snell) [#37475](https://github.com/nodejs/node/pull/37475)
* **(SEMVER-MINOR)** **repl**: add auto‑completion for `node:`‑prefixed `require(…)` calls (ExE Boss) [#37246](https://github.com/nodejs/node/pull/37246)
* **(SEMVER-MINOR)** **util**: add getSystemErrorMap() impl (eladkeyshawn) [#38101](https://github.com/nodejs/node/pull/38101)
### Semver-Major Commits
* [[`324a6c235a`](https://github.com/nodejs/node/commit/324a6c235a)] - **(SEMVER-MAJOR)** **async_hooks**: add thisArg to AsyncResource.bind (James M Snell) [#36782](https://github.com/nodejs/node/pull/36782)
* [[`d1e2184c8e`](https://github.com/nodejs/node/commit/d1e2184c8e)] - **(SEMVER-MAJOR)** **buffer**: expose btoa and atob as globals (James M Snell) [#37786](https://github.com/nodejs/node/pull/37786)
* [[`4268fae04a`](https://github.com/nodejs/node/commit/4268fae04a)] - **(SEMVER-MAJOR)** **build**: remove support for Python 2 (Christian Clauss) [#36691](https://github.com/nodejs/node/pull/36691)
* [[`c3a5e15ebe`](https://github.com/nodejs/node/commit/c3a5e15ebe)] - **(SEMVER-MAJOR)** **build**: default PYTHON to python3 in Makefile (Michaël Zasso) [#37764](https://github.com/nodejs/node/pull/37764)
* [[`1d8c022544`](https://github.com/nodejs/node/commit/1d8c022544)] - **(SEMVER-MAJOR)** **build**: update Makefile to support fat binary (Ash Cripps) [#37861](https://github.com/nodejs/node/pull/37861)
* [[`38f32386c1`](https://github.com/nodejs/node/commit/38f32386c1)] - **(SEMVER-MAJOR)** **build**: include minimal V8 headers in distribution (Michaël Zasso) [#37570](https://github.com/nodejs/node/pull/37570)
* [[`a19af5ee71`](https://github.com/nodejs/node/commit/a19af5ee71)] - **(SEMVER-MAJOR)** **build**: use C++11 ABI with libstdc++ (Anna Henningsen) [#36634](https://github.com/nodejs/node/pull/36634)
* [[`8d6b74d347`](https://github.com/nodejs/node/commit/8d6b74d347)] - **(SEMVER-MAJOR)** **build**: enable ASLR (PIE) on OS X (woodfairy) [#35704](https://github.com/nodejs/node/pull/35704)
* [[`732ad99e47`](https://github.com/nodejs/node/commit/732ad99e47)] - **(SEMVER-MAJOR)** **deps**: update V8 to 9.0.257.11 (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`43cc8e4b2e`](https://github.com/nodejs/node/commit/43cc8e4b2e)] - **(SEMVER-MAJOR)** **deps**: bump minimum ICU version to 68 (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`c5ff019a4e`](https://github.com/nodejs/node/commit/c5ff019a4e)] - **(SEMVER-MAJOR)** **deps**: update V8 to 8.9.255.19 (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`c7b3292251`](https://github.com/nodejs/node/commit/c7b3292251)] - **(SEMVER-MAJOR)** **deps**: update V8 to 8.8.278.17 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`48db20f6f5`](https://github.com/nodejs/node/commit/48db20f6f5)] - **(SEMVER-MAJOR)** **deps**: update V8 to 8.7.220 (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`d85e1f0703`](https://github.com/nodejs/node/commit/d85e1f0703)] - **(SEMVER-MAJOR)** **dns**: use url module instead of punycode for IDNA (Antoine du Hamel) [#35091](https://github.com/nodejs/node/pull/35091)
* [[`290c158018`](https://github.com/nodejs/node/commit/290c158018)] - **(SEMVER-MAJOR)** **doc**: update minimum supported Xcode to 11 (Michaël Zasso) [#37872](https://github.com/nodejs/node/pull/37872)
* [[`1ff2918d80`](https://github.com/nodejs/node/commit/1ff2918d80)] - **(SEMVER-MAJOR)** **doc**: update minimum supported GCC to 8.3 (Michaël Zasso) [#37871](https://github.com/nodejs/node/pull/37871)
* [[`2706e67116`](https://github.com/nodejs/node/commit/2706e67116)] - **(SEMVER-MAJOR)** **doc**: update AIX to GCC8 for v16.x (Ash Cripps) [#37677](https://github.com/nodejs/node/pull/37677)
* [[`5ae5ca90ef`](https://github.com/nodejs/node/commit/5ae5ca90ef)] - **(SEMVER-MAJOR)** **doc**: add http.IncomingMessage#connection (Pranshu Srivastava) [#33768](https://github.com/nodejs/node/pull/33768)
* [[`83d6e63aee`](https://github.com/nodejs/node/commit/83d6e63aee)] - **(SEMVER-MAJOR)** **events**: change EventTarget handler exception behavior (Nitzan Uziely) [#37237](https://github.com/nodejs/node/pull/37237)
* [[`9948036ee0`](https://github.com/nodejs/node/commit/9948036ee0)] - **(SEMVER-MAJOR)** **fs**: remove permissive rmdir recursive (Antoine du Hamel) [#37216](https://github.com/nodejs/node/pull/37216)
* [[`d4693ff430`](https://github.com/nodejs/node/commit/d4693ff430)] - **(SEMVER-MAJOR)** **fs**: add validation for fd and path (Dylan Elliott) [#35187](https://github.com/nodejs/node/pull/35187)
* [[`0ddd75bcd8`](https://github.com/nodejs/node/commit/0ddd75bcd8)] - **(SEMVER-MAJOR)** **fs**: runtime deprecate rmdir recursive option (Antoine du Hamel) [#37302](https://github.com/nodejs/node/pull/37302)
* [[`da217d0773`](https://github.com/nodejs/node/commit/da217d0773)] - **(SEMVER-MAJOR)** **fs**: fix flag and mode validation (James M Snell) [#37480](https://github.com/nodejs/node/pull/37480)
* [[`2ef9a76ece`](https://github.com/nodejs/node/commit/2ef9a76ece)] - **(SEMVER-MAJOR)** **http**: use objects with null prototype in Agent (Michaël Zasso) [#36409](https://github.com/nodejs/node/pull/36409)
* [[`25e30005b8`](https://github.com/nodejs/node/commit/25e30005b8)] - **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('http\_parser') (James M Snell) [#37813](https://github.com/nodejs/node/pull/37813)
* [[`8bb4e048af`](https://github.com/nodejs/node/commit/8bb4e048af)] - **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('url') (James M Snell) [#37799](https://github.com/nodejs/node/pull/37799)
* [[`fe73e4d578`](https://github.com/nodejs/node/commit/fe73e4d578)] - **(SEMVER-MAJOR)** **lib**: make process.binding('util') return only type checkers (Anna Henningsen) [#37819](https://github.com/nodejs/node/pull/37819)
* [[`3bee6d8aad`](https://github.com/nodejs/node/commit/3bee6d8aad)] - **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('crypto') (James M Snell) [#37790](https://github.com/nodejs/node/pull/37790)
* [[`ac00df112e`](https://github.com/nodejs/node/commit/ac00df112e)] - **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('signal\_wrap') (James M Snell) [#37800](https://github.com/nodejs/node/pull/37800)
* [[`ae595d76e3`](https://github.com/nodejs/node/commit/ae595d76e3)] - **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('v8') (James M Snell) [#37789](https://github.com/nodejs/node/pull/37789)
* [[`104dac79cc`](https://github.com/nodejs/node/commit/104dac79cc)] - **(SEMVER-MAJOR)** **lib**: aggregate errors to avoid error swallowing (Antoine du Hamel) [#37460](https://github.com/nodejs/node/pull/37460)
* [[`1468c9ff7c`](https://github.com/nodejs/node/commit/1468c9ff7c)] - **(SEMVER-MAJOR)** **lib**: runtime deprecate access to process.binding('async\_wrap') (James M Snell) [#37576](https://github.com/nodejs/node/pull/37576)
* [[`295e766c27`](https://github.com/nodejs/node/commit/295e766c27)] - **(SEMVER-MAJOR)** **lib**: remove usage of url.parse (raisinten) [#36853](https://github.com/nodejs/node/pull/36853)
* [[`cb3020d824`](https://github.com/nodejs/node/commit/cb3020d824)] - **(SEMVER-MAJOR)** **lib**: add error handling for input stream (rexagod) [#31603](https://github.com/nodejs/node/pull/31603)
* [[`15164cebce`](https://github.com/nodejs/node/commit/15164cebce)] - **(SEMVER-MAJOR)** **lib,src**: update cluster to use Parent (Michael Dawson) [#36478](https://github.com/nodejs/node/pull/36478)
* [[`3cc9aec988`](https://github.com/nodejs/node/commit/3cc9aec988)] - **(SEMVER-MAJOR)** **module**: runtime deprecate subpath folder mappings (Antoine du Hamel) [#37215](https://github.com/nodejs/node/pull/37215)
* [[`9fab73c73b`](https://github.com/nodejs/node/commit/9fab73c73b)] - **(SEMVER-MAJOR)** **module**: runtime deprecate "main" index and extension lookups (Antoine du Hamel) [#37206](https://github.com/nodejs/node/pull/37206)
* [[`76a073b67e`](https://github.com/nodejs/node/commit/76a073b67e)] - **(SEMVER-MAJOR)** **module**: runtime deprecate invalid package.json main entries (Antoine du Hamel) [#37204](https://github.com/nodejs/node/pull/37204)
* [[`674614b3f5`](https://github.com/nodejs/node/commit/674614b3f5)] - **(SEMVER-MAJOR)** **module**: remove module.createRequireFromPath (Antoine du Hamel) [#37201](https://github.com/nodejs/node/pull/37201)
* [[`aecd5ebf49`](https://github.com/nodejs/node/commit/aecd5ebf49)] - **(SEMVER-MAJOR)** **module**: only set cache when finding module succeeds (Yongsheng Zhang) [#36642](https://github.com/nodejs/node/pull/36642)
* [[`f0bf373176`](https://github.com/nodejs/node/commit/f0bf373176)] - **(SEMVER-MAJOR)** **perf_hooks**: make performance a global (James M Snell) [#37970](https://github.com/nodejs/node/pull/37970)
* [[`f3eb224c83`](https://github.com/nodejs/node/commit/f3eb224c83)] - **(SEMVER-MAJOR)** **perf_hooks**: complete overhaul of the implementation (James M Snell) [#37136](https://github.com/nodejs/node/pull/37136)
* [[`f1753d4c76`](https://github.com/nodejs/node/commit/f1753d4c76)] - **(SEMVER-MAJOR)** **process**: disallow adding options to process.allowedNodeEnvironmentFlags (Antoine du Hamel) [#36660](https://github.com/nodejs/node/pull/36660)
* [[`96f3977ded`](https://github.com/nodejs/node/commit/96f3977ded)] - **(SEMVER-MAJOR)** **process**: runtime deprecate changing process.config (James M Snell) [#36902](https://github.com/nodejs/node/pull/36902)
* [[`45dbcbef90`](https://github.com/nodejs/node/commit/45dbcbef90)] - **(SEMVER-MAJOR)** **readline**: cursorTo throw error on NaN (Zijian Liu) [#36379](https://github.com/nodejs/node/pull/36379)
* [[`bf79987433`](https://github.com/nodejs/node/commit/bf79987433)] - **(SEMVER-MAJOR)** **src**: mark internally exported functions as explicitly internal (Tyler Ang-Wanek) [#37000](https://github.com/nodejs/node/pull/37000)
* [[`1fe571aa0c`](https://github.com/nodejs/node/commit/1fe571aa0c)] - **(SEMVER-MAJOR)** **src**: inline AsyncCleanupHookHandle in headers (Tyler Ang-Wanek) [#37000](https://github.com/nodejs/node/pull/37000)
* [[`dfc288e7fd`](https://github.com/nodejs/node/commit/dfc288e7fd)] - **(SEMVER-MAJOR)** **src**: clean up embedder API (Anna Henningsen) [#35897](https://github.com/nodejs/node/pull/35897)
* [[`65e8864fa3`](https://github.com/nodejs/node/commit/65e8864fa3)] - **(SEMVER-MAJOR)** **worker**: send correct error status for worker init (Yash Ladha) [#36242](https://github.com/nodejs/node/pull/36242)
### Semver-Minor Commits
* [[`944a956087`](https://github.com/nodejs/node/commit/944a956087)] - **(SEMVER-MINOR)** **assert**: graduate assert.match and assert.doesNotMatch (James M Snell) [#38111](https://github.com/nodejs/node/pull/38111)
* [[`6a1986d50a`](https://github.com/nodejs/node/commit/6a1986d50a)] - **(SEMVER-MINOR)** **deps**: update llhttp to 5.1.0 (Fedor Indutny) [#38146](https://github.com/nodejs/node/pull/38146)
* [[`069b5df4f6`](https://github.com/nodejs/node/commit/069b5df4f6)] - **(SEMVER-MINOR)** **module**: add support for `node:`‑prefixed `require(…)` calls (ExE Boss) [#37246](https://github.com/nodejs/node/pull/37246)
* [[`b803bca4fa`](https://github.com/nodejs/node/commit/b803bca4fa)] - **(SEMVER-MINOR)** **perf_hooks**: add histogram option to timerify (James M Snell) [#37475](https://github.com/nodejs/node/pull/37475)
* [[`95391fe689`](https://github.com/nodejs/node/commit/95391fe689)] - **(SEMVER-MINOR)** **repl**: add auto‑completion for `node:`‑prefixed `require(…)` calls (ExE Boss) [#37246](https://github.com/nodejs/node/pull/37246)
* [[`15b8e6b1c4`](https://github.com/nodejs/node/commit/15b8e6b1c4)] - **(SEMVER-MINOR)** **timers**: graduate awaitable timers and improve docs (James M Snell) [#38112](https://github.com/nodejs/node/pull/38112)
* [[`802171057f`](https://github.com/nodejs/node/commit/802171057f)] - **(SEMVER-MINOR)** **util**: add getSystemErrorMap() impl (eladkeyshawn) [#38101](https://github.com/nodejs/node/pull/38101)
### Semver-Patch Commits
* [[`8930eba199`](https://github.com/nodejs/node/commit/8930eba199)] - **assert**: change status of legacy asserts (James M Snell) [#38113](https://github.com/nodejs/node/pull/38113)
* [[`0180fc5b9b`](https://github.com/nodejs/node/commit/0180fc5b9b)] - **benchmark**: improve compare.R output (Brian White) [#38118](https://github.com/nodejs/node/pull/38118)
* [[`8d9d8236b7`](https://github.com/nodejs/node/commit/8d9d8236b7)] - **bootstrap**: mksnapshot should show JS error (Bradley Meck) [#38174](https://github.com/nodejs/node/pull/38174)
* [[`6cb314bbe5`](https://github.com/nodejs/node/commit/6cb314bbe5)] - **bootstrap**: print information for snapshot at environment exit in debug (Joyee Cheung) [#37967](https://github.com/nodejs/node/pull/37967)
* [[`14aed60941`](https://github.com/nodejs/node/commit/14aed60941)] - **buffer,errors**: add missing n literal in range error string (Cactysman) [#37750](https://github.com/nodejs/node/pull/37750)
* [[`049b703a28`](https://github.com/nodejs/node/commit/049b703a28)] - **build**: sync generation of `v8_build_config.json` (Richard Lau) [#38263](https://github.com/nodejs/node/pull/38263)
* [[`1d21a8d140`](https://github.com/nodejs/node/commit/1d21a8d140)] - **build**: add riscv64 configure (luyahan) [#37980](https://github.com/nodejs/node/pull/37980)
* [[`f5eea1744d`](https://github.com/nodejs/node/commit/f5eea1744d)] - **build**: don't run test workflow on doc dir on macOS (ycjcl868) [#37999](https://github.com/nodejs/node/pull/37999)
* [[`2853b76e20`](https://github.com/nodejs/node/commit/2853b76e20)] - **build**: add pummel tests to ci runs (Rich Trott) [#34289](https://github.com/nodejs/node/pull/34289)
* [[`24426cd8c4`](https://github.com/nodejs/node/commit/24426cd8c4)] - **build**: prepare Windows coverage GitHub Action for pummel tests (Rich Trott) [#34289](https://github.com/nodejs/node/pull/34289)
* [[`7df0fc5c5c`](https://github.com/nodejs/node/commit/7df0fc5c5c)] - **build**: move OPENSSL\_API\_COMPAT to else clause (Daniel Bevenius) [#38126](https://github.com/nodejs/node/pull/38126)
* [[`9cfb418e1f`](https://github.com/nodejs/node/commit/9cfb418e1f)] - **build**: package release changelog for releases (Richard Lau) [#38033](https://github.com/nodejs/node/pull/38033)
* [[`558d1e6c22`](https://github.com/nodejs/node/commit/558d1e6c22)] - **build**: warn for gcc versions earlier than 8.3.0 (Richard Lau) [#37935](https://github.com/nodejs/node/pull/37935)
* [[`a572a4e34e`](https://github.com/nodejs/node/commit/a572a4e34e)] - **build**: reset embedder string to "-node.0" (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`f3c7078245`](https://github.com/nodejs/node/commit/f3c7078245)] - **build**: reset embedder string to "-node.0" (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`842389839b`](https://github.com/nodejs/node/commit/842389839b)] - **build**: reset embedder string to "-node.0" (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`98d1ae47cf`](https://github.com/nodejs/node/commit/98d1ae47cf)] - **build**: reset embedder string to "-node.0" (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`993ed19f9c`](https://github.com/nodejs/node/commit/993ed19f9c)] - **crypto**: reduce range of size to int max (Qingyu Deng) [#38096](https://github.com/nodejs/node/pull/38096)
* [[`896dc39951`](https://github.com/nodejs/node/commit/896dc39951)] - **crypto**: fix webcrypto derive(Bits|Key) resolve values and docs (Filip Skokan) [#38148](https://github.com/nodejs/node/pull/38148)
* [[`d2f116c6bb`](https://github.com/nodejs/node/commit/d2f116c6bb)] - **crypto**: fixup randomFill size and offset handling (James M Snell) [#38138](https://github.com/nodejs/node/pull/38138)
* [[`dfe3f952a3`](https://github.com/nodejs/node/commit/dfe3f952a3)] - **crypto**: fix crash in CCM mode without data (Tobias Nießen) [#38102](https://github.com/nodejs/node/pull/38102)
* [[`e8cb6446ef`](https://github.com/nodejs/node/commit/e8cb6446ef)] - **crypto**: reconcile oneshot sign/verify sync and async implementations (Filip Skokan) [#37816](https://github.com/nodejs/node/pull/37816)
* [[`1e4a2bcbee`](https://github.com/nodejs/node/commit/1e4a2bcbee)] - **crypto**: remove check for condition that is always true (Rich Trott) [#38072](https://github.com/nodejs/node/pull/38072)
* [[`64d5be25ab`](https://github.com/nodejs/node/commit/64d5be25ab)] - **deps**: V8: cherry-pick 1648e050cade (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`621b544909`](https://github.com/nodejs/node/commit/621b544909)] - **deps**: silence irrelevant V8 warnings (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`0d78bc3101`](https://github.com/nodejs/node/commit/0d78bc3101)] - **deps**: fix V8 build issue with inline methods (Jiawen Geng) [#35415](https://github.com/nodejs/node/pull/35415)
* [[`5214918856`](https://github.com/nodejs/node/commit/5214918856)] - **deps**: make v8.h compatible with VS2015 (Joao Reis) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`6b3caf77b2`](https://github.com/nodejs/node/commit/6b3caf77b2)] - **deps**: V8: forward declaration of `Rtl\*FunctionTable` (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`d0a032fafb`](https://github.com/nodejs/node/commit/d0a032fafb)] - **deps**: V8: patch register-arm64.h (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`c8b2fa642e`](https://github.com/nodejs/node/commit/c8b2fa642e)] - **deps**: V8: un-cherry-pick bd019bd (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`8eeecc19ae`](https://github.com/nodejs/node/commit/8eeecc19ae)] - **deps**: V8: cherry-pick 8957d4677aa7 (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`b186142a0b`](https://github.com/nodejs/node/commit/b186142a0b)] - **deps**: V8: backport a11395433dbd (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`290f2d8d3e`](https://github.com/nodejs/node/commit/290f2d8d3e)] - **deps**: V8: cherry-pick deb0813166f3 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`63ed0b8bfe`](https://github.com/nodejs/node/commit/63ed0b8bfe)] - **deps**: V8: cherry-pick 9a6a22874c81 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`47f1c5257a`](https://github.com/nodejs/node/commit/47f1c5257a)] - **deps**: silence irrelevant V8 warning (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`19d975241f`](https://github.com/nodejs/node/commit/19d975241f)] - **deps**: workaround stod() limitations on SmartOS (Colin Ihrig) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`70f928c6a6`](https://github.com/nodejs/node/commit/70f928c6a6)] - **deps**: fix V8 build issue with inline methods (Jiawen Geng) [#35415](https://github.com/nodejs/node/pull/35415)
* [[`b045e39513`](https://github.com/nodejs/node/commit/b045e39513)] - **deps**: patch V8 to run on Xcode 8 (Mary Marchini) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`32725d2224`](https://github.com/nodejs/node/commit/32725d2224)] - **deps**: make v8.h compatible with VS2015 (Joao Reis) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`fe3cee7b37`](https://github.com/nodejs/node/commit/fe3cee7b37)] - **deps**: V8: forward declaration of `Rtl\*FunctionTable` (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`b2d05f7349`](https://github.com/nodejs/node/commit/b2d05f7349)] - **deps**: V8: patch register-arm64.h (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`c7a0ab4e3d`](https://github.com/nodejs/node/commit/c7a0ab4e3d)] - **deps**: patch V8 to run on older XCode versions (Ujjwal Sharma) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`60b623ee90`](https://github.com/nodejs/node/commit/60b623ee90)] - **deps**: V8: un-cherry-pick bd019bd (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`577ff9fee5`](https://github.com/nodejs/node/commit/577ff9fee5)] - **deps**: V8: cherry-pick deb0813166f3 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`00e1c7ea83`](https://github.com/nodejs/node/commit/00e1c7ea83)] - **deps**: V8: cherry-pick 9a6a22874c81 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`ee01d6b7fc`](https://github.com/nodejs/node/commit/ee01d6b7fc)] - **deps**: V8: cherry-pick 2059ee813359 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`2dad8d43cc`](https://github.com/nodejs/node/commit/2dad8d43cc)] - **deps**: V8: cherry-pick bde7ee5473d6 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`3046131ea0`](https://github.com/nodejs/node/commit/3046131ea0)] - **deps**: V8: cherry-pick 9a712984025e (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`d178d0738f`](https://github.com/nodejs/node/commit/d178d0738f)] - **deps**: V8: cherry-pick 0b96e5b0bfb2 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`5c71ea151a`](https://github.com/nodejs/node/commit/5c71ea151a)] - **deps**: V8: cherry-pick fbb28902e049 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`c8e15cd2c6`](https://github.com/nodejs/node/commit/c8e15cd2c6)] - **deps**: V8: cherry-pick 821fb3883a8e (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`b0d67426af`](https://github.com/nodejs/node/commit/b0d67426af)] - **deps**: workaround stod() limitations on SmartOS (Colin Ihrig) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`c8a658ac53`](https://github.com/nodejs/node/commit/c8a658ac53)] - **deps**: fix V8 build issue with inline methods (Jiawen Geng) [#35415](https://github.com/nodejs/node/pull/35415)
* [[`153b8cea36`](https://github.com/nodejs/node/commit/153b8cea36)] - **deps**: patch V8 to run on Xcode 8 (Mary Marchini) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`a785984133`](https://github.com/nodejs/node/commit/a785984133)] - **deps**: V8: silence irrelevant warnings (Michaël Zasso) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`246c9b8c31`](https://github.com/nodejs/node/commit/246c9b8c31)] - **deps**: make v8.h compatible with VS2015 (Joao Reis) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`96a567f9e9`](https://github.com/nodejs/node/commit/96a567f9e9)] - **deps**: V8: forward declaration of `Rtl\*FunctionTable` (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`e74383cecb`](https://github.com/nodejs/node/commit/e74383cecb)] - **deps**: V8: patch register-arm64.h (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`732847f1eb`](https://github.com/nodejs/node/commit/732847f1eb)] - **deps**: patch V8 to run on older XCode versions (Ujjwal Sharma) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`70171d186f`](https://github.com/nodejs/node/commit/70171d186f)] - **deps**: V8: un-cherry-pick bd019bd (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`15c91c6dd5`](https://github.com/nodejs/node/commit/15c91c6dd5)] - **deps**: V8: cherry-pick 821fb3883a8e (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`40b2fa4832`](https://github.com/nodejs/node/commit/40b2fa4832)] - **deps**: V8: cherry-pick 45e49775f5a3 (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`cd91ab5865`](https://github.com/nodejs/node/commit/cd91ab5865)] - **deps**: V8: cherry-pick 7b3a27b7ae65 (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`f4fc099080`](https://github.com/nodejs/node/commit/f4fc099080)] - **deps**: V8: cherry-pick d76abfed3512 (Michaël Zasso) [#35415](https://github.com/nodejs/node/pull/35415)
* [[`6200176ef0`](https://github.com/nodejs/node/commit/6200176ef0)] - **deps**: fix V8 build issue with inline methods (Jiawen Geng) [#35415](https://github.com/nodejs/node/pull/35415)
* [[`bd5642deb9`](https://github.com/nodejs/node/commit/bd5642deb9)] - **deps**: update V8 postmortem metadata script (Colin Ihrig) [#35415](https://github.com/nodejs/node/pull/35415)
* [[`9ae7159216`](https://github.com/nodejs/node/commit/9ae7159216)] - **deps**: update V8 postmortem metadata script (Colin Ihrig) [#33579](https://github.com/nodejs/node/pull/33579)
* [[`f4b4e21b2f`](https://github.com/nodejs/node/commit/f4b4e21b2f)] - **deps**: patch V8 to run on Xcode 8 (Mary Marchini) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`f6a84540d8`](https://github.com/nodejs/node/commit/f6a84540d8)] - **deps**: V8: silence irrelevant warnings (Michaël Zasso) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`bbc3f46572`](https://github.com/nodejs/node/commit/bbc3f46572)] - **deps**: make v8.h compatible with VS2015 (Joao Reis) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`0c988642dc`](https://github.com/nodejs/node/commit/0c988642dc)] - **deps**: V8: forward declaration of `Rtl\*FunctionTable` (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`703bf933d4`](https://github.com/nodejs/node/commit/703bf933d4)] - **deps**: V8: patch register-arm64.h (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`5451975b18`](https://github.com/nodejs/node/commit/5451975b18)] - **deps**: patch V8 to run on older XCode versions (Ujjwal Sharma) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`c460f7af4d`](https://github.com/nodejs/node/commit/c460f7af4d)] - **deps**: V8: un-cherry-pick bd019bd (Refael Ackermann) [#32116](https://github.com/nodejs/node/pull/32116)
* [[`bfee9daaa5`](https://github.com/nodejs/node/commit/bfee9daaa5)] - **deps**: update llhttp to 6.0.0 (Fedor Indutny) [#38277](https://github.com/nodejs/node/pull/38277)
* [[`94405650ae`](https://github.com/nodejs/node/commit/94405650ae)] - **deps**: upgrade npm to 7.10.0 (Ruy Adorno) [#38254](https://github.com/nodejs/node/pull/38254)
* [[`8e80fc7ff8`](https://github.com/nodejs/node/commit/8e80fc7ff8)] - **deps**: patch V8 to 9.0.257.17 (Michaël Zasso) [#38237](https://github.com/nodejs/node/pull/38237)
* [[`5b358d57e1`](https://github.com/nodejs/node/commit/5b358d57e1)] - **deps**: patch V8 to 9.0.257.16 (Michaël Zasso) [#38218](https://github.com/nodejs/node/pull/38218)
* [[`ee669a0d29`](https://github.com/nodejs/node/commit/ee669a0d29)] - **deps**: update ICU to 69.1 (Michaël Zasso) [#38178](https://github.com/nodejs/node/pull/38178)
* [[`2468e4ed3e`](https://github.com/nodejs/node/commit/2468e4ed3e)] - **deps**: V8: backport d59db06bf542 (Antoine du Hamel) [#38162](https://github.com/nodejs/node/pull/38162)
* [[`c748668704`](https://github.com/nodejs/node/commit/c748668704)] - **deps**: upgrade npm to 7.9.0 (Ruy Adorno) [#38156](https://github.com/nodejs/node/pull/38156)
* [[`ca13f7aaf3`](https://github.com/nodejs/node/commit/ca13f7aaf3)] - **deps**: V8: cherry-pick 501482cbc704 (Colin Ihrig) [#38121](https://github.com/nodejs/node/pull/38121)
* [[`bc531d1860`](https://github.com/nodejs/node/commit/bc531d1860)] - **deps**: upgrade npm to 7.8.0 (Darcy Clarke) [#38030](https://github.com/nodejs/node/pull/38030)
* [[`d639321acd`](https://github.com/nodejs/node/commit/d639321acd)] - **deps**: patch V8 to 9.0.257.13 (Michaël Zasso) [#37830](https://github.com/nodejs/node/pull/37830)
* [[`bc31dc0e0f`](https://github.com/nodejs/node/commit/bc31dc0e0f)] - **dns**: refactor cares\_wrap internals (James M Snell) [#38172](https://github.com/nodejs/node/pull/38172)
* [[`36decec87f`](https://github.com/nodejs/node/commit/36decec87f)] - **doc**: remove superfluous await from fsPromises.readdir example (Michael Rommel) [#38293](https://github.com/nodejs/node/pull/38293)
* [[`ac2c8c530d`](https://github.com/nodejs/node/commit/ac2c8c530d)] - **doc**: fixup http.IncomingMessage deprecation code (Guy Bedford) [#36917](https://github.com/nodejs/node/pull/36917)
* [[`767643fc19`](https://github.com/nodejs/node/commit/767643fc19)] - **doc**: restore minimum Xcode version for macOS (Richard Lau) [#38266](https://github.com/nodejs/node/pull/38266)
* [[`e541032276`](https://github.com/nodejs/node/commit/e541032276)] - **doc**: fix typo in repl.md (Arkerone) [#38244](https://github.com/nodejs/node/pull/38244)
* [[`fb93b71307`](https://github.com/nodejs/node/commit/fb93b71307)] - **doc**: fix typo in buffer.md (Arkerone) [#38243](https://github.com/nodejs/node/pull/38243)
* [[`7d688d4b36`](https://github.com/nodejs/node/commit/7d688d4b36)] - **doc**: fix missing backtick in fs.md (Siddharth) [#38260](https://github.com/nodejs/node/pull/38260)
* [[`6d04cc6849`](https://github.com/nodejs/node/commit/6d04cc6849)] - **doc**: change "oject" to "object" (Arkerone) [#38256](https://github.com/nodejs/node/pull/38256)
* [[`b4363f726c`](https://github.com/nodejs/node/commit/b4363f726c)] - **doc**: revise TLS minVersion/maxVersion text (Rich Trott) [#38202](https://github.com/nodejs/node/pull/38202)
* [[`98c2067f13`](https://github.com/nodejs/node/commit/98c2067f13)] - **doc**: update BUILDING.md for Apple Silicon (Ash Cripps) [#38227](https://github.com/nodejs/node/pull/38227)
* [[`4def7c4418`](https://github.com/nodejs/node/commit/4def7c4418)] - **doc**: standardize on pseudorandom (Rich Trott) [#38196](https://github.com/nodejs/node/pull/38196)
* [[`f1027ecf29`](https://github.com/nodejs/node/commit/f1027ecf29)] - **doc**: standardize command flag notes (Ferdi) [#38199](https://github.com/nodejs/node/pull/38199)
* [[`756d2e48d8`](https://github.com/nodejs/node/commit/756d2e48d8)] - **doc**: update `buffer.constants.MAX\_LENGTH` (Qingyu Deng) [#38109](https://github.com/nodejs/node/pull/38109)
* [[`474fbb5f6e`](https://github.com/nodejs/node/commit/474fbb5f6e)] - **doc**: clarify child\_process close event (Nitzan Uziely) [#38181](https://github.com/nodejs/node/pull/38181)
* [[`eee2c331ef`](https://github.com/nodejs/node/commit/eee2c331ef)] - **doc**: add command flag to import.meta.resolve (Ferdi) [#38171](https://github.com/nodejs/node/pull/38171)
* [[`f46d29360c`](https://github.com/nodejs/node/commit/f46d29360c)] - **doc**: advise against using randomFill on floats (Tobias Nießen) [#38150](https://github.com/nodejs/node/pull/38150)
* [[`5823fc79ba`](https://github.com/nodejs/node/commit/5823fc79ba)] - **doc**: update links in ICU guide (Michaël Zasso) [#38177](https://github.com/nodejs/node/pull/38177)
* [[`993a1da47c`](https://github.com/nodejs/node/commit/993a1da47c)] - **doc**: mention cryptographic prng in description of randomUUID (Serkan Özel) [#38074](https://github.com/nodejs/node/pull/38074)
* [[`5ba5cc8619`](https://github.com/nodejs/node/commit/5ba5cc8619)] - **doc**: fix typos in doc/api/cli.md (Arkerone) [#38163](https://github.com/nodejs/node/pull/38163)
* [[`6a2314acd7`](https://github.com/nodejs/node/commit/6a2314acd7)] - **doc**: add link to V8 (Voltrex) [#38144](https://github.com/nodejs/node/pull/38144)
* [[`093b527b25`](https://github.com/nodejs/node/commit/093b527b25)] - **doc**: fix typo in assert.md (Arkerone) [#38152](https://github.com/nodejs/node/pull/38152)
* [[`0fa579ac2a`](https://github.com/nodejs/node/commit/0fa579ac2a)] - **doc**: add missing comma in crypto doc (Tobias Nießen) [#38142](https://github.com/nodejs/node/pull/38142)
* [[`4bc8f7542f`](https://github.com/nodejs/node/commit/4bc8f7542f)] - **doc**: fix typo in crypto (Arkerone) [#38130](https://github.com/nodejs/node/pull/38130)
* [[`005ebafbd1`](https://github.com/nodejs/node/commit/005ebafbd1)] - **doc**: improve security text in collaborators guide (Rich Trott) [#38107](https://github.com/nodejs/node/pull/38107)
* [[`54322b8d8b`](https://github.com/nodejs/node/commit/54322b8d8b)] - **doc**: apply consistent punctuation to header contributing guide (Akhil Marsonya) [#38047](https://github.com/nodejs/node/pull/38047)
* [[`0d34767c4c`](https://github.com/nodejs/node/commit/0d34767c4c)] - **doc**: sending http request to localhost to avoid https redirect (Hassaan Pasha) [#38036](https://github.com/nodejs/node/pull/38036)
* [[`f851efd2e1`](https://github.com/nodejs/node/commit/f851efd2e1)] - **doc**: apply sentence case to backporting-to-release-lines.md headers (marsonya) [#37617](https://github.com/nodejs/node/pull/37617)
* [[`36bc8b905c`](https://github.com/nodejs/node/commit/36bc8b905c)] - **doc**: fix typo in fs.md (Antoine du Hamel) [#38100](https://github.com/nodejs/node/pull/38100)
* [[`f52c92134c`](https://github.com/nodejs/node/commit/f52c92134c)] - **doc**: internal/test/binding for testing (Bradley Meck) [#38026](https://github.com/nodejs/node/pull/38026)
* [[`ab42ef3930`](https://github.com/nodejs/node/commit/ab42ef3930)] - **doc**: add parentheses to function and move reference (Rich Trott) [#38066](https://github.com/nodejs/node/pull/38066)
* [[`2861778ecd`](https://github.com/nodejs/node/commit/2861778ecd)] - **doc**: change wording in doc/api/domain.md comment (Akhil Marsonya) [#38044](https://github.com/nodejs/node/pull/38044)
* [[`361632dab1`](https://github.com/nodejs/node/commit/361632dab1)] - **doc**: fix lint error in modules.md (Rich Trott) [#37811](https://github.com/nodejs/node/pull/37811)
* [[`b3f35e2c70`](https://github.com/nodejs/node/commit/b3f35e2c70)] - **doc,lib**: add missing deprecation code (Colin Ihrig) [#37541](https://github.com/nodejs/node/pull/37541)
* [[`cbe3b27166`](https://github.com/nodejs/node/commit/cbe3b27166)] - **doc,tools**: allow stability table to be updated (Richard Lau) [#38048](https://github.com/nodejs/node/pull/38048)
* [[`8dd06850ae`](https://github.com/nodejs/node/commit/8dd06850ae)] - **esm**: use correct URL for error decoration (Bradley Meck) [#37854](https://github.com/nodejs/node/pull/37854)
* [[`6bbe28552c`](https://github.com/nodejs/node/commit/6bbe28552c)] - **fs**: use byteLength to handle ArrayBuffer views (Michaël Zasso) [#38187](https://github.com/nodejs/node/pull/38187)
* [[`8e76397fab`](https://github.com/nodejs/node/commit/8e76397fab)] - **fs**: validate encoding to binding.writeString() (Colin Ihrig) [#38183](https://github.com/nodejs/node/pull/38183)
* [[`24fd791184`](https://github.com/nodejs/node/commit/24fd791184)] - **fs**: move constants to internal/fs/utils.js (Darshan Sen) [#38061](https://github.com/nodejs/node/pull/38061)
* [[`40ace47396`](https://github.com/nodejs/node/commit/40ace47396)] - **http**: fixup perf regression (James M Snell) [#38110](https://github.com/nodejs/node/pull/38110)
* [[`f4d3d12327`](https://github.com/nodejs/node/commit/f4d3d12327)] - **http**: use CRLF conistently in \_http\_outgoing.js (Daniel Bevenius) [#37851](https://github.com/nodejs/node/pull/37851)
* [[`ee9e2a2eb6`](https://github.com/nodejs/node/commit/ee9e2a2eb6)] - **lib**: revert primordials in a hot path (Antoine du Hamel) [#38248](https://github.com/nodejs/node/pull/38248)
* [[`d756d2b99c`](https://github.com/nodejs/node/commit/d756d2b99c)] - **lib**: enforce using `primordials.globalThis` instead of `global` (Antoine du Hamel) [#38230](https://github.com/nodejs/node/pull/38230)
* [[`09c9e5dea4`](https://github.com/nodejs/node/commit/09c9e5dea4)] - **lib**: avoid mutating `Error.stackTraceLimit` when it is not writable (Antoine du Hamel) [#38215](https://github.com/nodejs/node/pull/38215)
* [[`23d2c54bab`](https://github.com/nodejs/node/commit/23d2c54bab)] - **lib**: add `globalThis` to primordials (Antoine du Hamel) [#38211](https://github.com/nodejs/node/pull/38211)
* [[`78343bbdc5`](https://github.com/nodejs/node/commit/78343bbdc5)] - **lib**: add `WeakRef` and `FinalizationRegistry` to `primordials` (ExE Boss) [#37263](https://github.com/nodejs/node/pull/37263)
* [[`656fb4657a`](https://github.com/nodejs/node/commit/656fb4657a)] - **lib**: add tsconfig for code completions (Bradley Meck) [#38042](https://github.com/nodejs/node/pull/38042)
* [[`d86132488d`](https://github.com/nodejs/node/commit/d86132488d)] - **lib**: properly process JavaScript exceptions on async\_hooks fatal error (legendecas) [#38106](https://github.com/nodejs/node/pull/38106)
* [[`a9332e84bf`](https://github.com/nodejs/node/commit/a9332e84bf)] - **lib**: refactor to use primordials in lib/internal/cli\_table (Akhil Marsonya) [#38046](https://github.com/nodejs/node/pull/38046)
* [[`8d78d9ef27`](https://github.com/nodejs/node/commit/8d78d9ef27)] - **lib**: load v8\_prof\_processor dependencies as ESM (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`7b2bad4005`](https://github.com/nodejs/node/commit/7b2bad4005)] - **module**: clarify CJS global-like variables not defined error message (Antoine du Hamel) [#37852](https://github.com/nodejs/node/pull/37852)
* [[`7869761c2e`](https://github.com/nodejs/node/commit/7869761c2e)] - **net**: fix typo (Luigi Pinca) [#38127](https://github.com/nodejs/node/pull/38127)
* [[`4afcd55274`](https://github.com/nodejs/node/commit/4afcd55274)] - **node-api**: make reference weak parameter an indirect link to references (Chengzhong Wu) [#38000](https://github.com/nodejs/node/pull/38000)
* [[`e38d62a8c9`](https://github.com/nodejs/node/commit/e38d62a8c9)] - **path**: fix POSIX path.resolve() perf regression (Brian White) [#38064](https://github.com/nodejs/node/pull/38064)
* [[`b0d5e036d8`](https://github.com/nodejs/node/commit/b0d5e036d8)] - **path**: fix posix.relative() on Windows (Rich Trott) [#37747](https://github.com/nodejs/node/pull/37747)
* [[`548cbf0625`](https://github.com/nodejs/node/commit/548cbf0625)] - **perf_hooks**: fix loop delay resolution validation (Antoine du Hamel) [#38166](https://github.com/nodejs/node/pull/38166)
* [[`13c931a9dc`](https://github.com/nodejs/node/commit/13c931a9dc)] - **process**: add range validation to debugPort (Colin Ihrig) [#38205](https://github.com/nodejs/node/pull/38205)
* [[`8dd5dd8a4b`](https://github.com/nodejs/node/commit/8dd5dd8a4b)] - **process**: do not lazily load AsyncResource (Michaël Zasso) [#38041](https://github.com/nodejs/node/pull/38041)
* [[`4e833b6059`](https://github.com/nodejs/node/commit/4e833b6059)] - **process,doc**: add missing deprecation code (Colin Ihrig) [#37091](https://github.com/nodejs/node/pull/37091)
* [[`d6669645c0`](https://github.com/nodejs/node/commit/d6669645c0)] - **repl**: fix declaring a variable with the name `util` (eladkeyshawn) [#38141](https://github.com/nodejs/node/pull/38141)
* [[`e7391967c2`](https://github.com/nodejs/node/commit/e7391967c2)] - **repl**: fix error message printing (Anna Henningsen) [#38209](https://github.com/nodejs/node/pull/38209)
* [[`4e9212bb7b`](https://github.com/nodejs/node/commit/4e9212bb7b)] - **src**: cache some context in locals (Khaidi Chu) [#37473](https://github.com/nodejs/node/pull/37473)
* [[`fc20e833ca`](https://github.com/nodejs/node/commit/fc20e833ca)] - **src**: fix finalization crash (James M Snell) [#38250](https://github.com/nodejs/node/pull/38250)
* [[`6c9b19a7af`](https://github.com/nodejs/node/commit/6c9b19a7af)] - **src**: refactor SecureContext Initialization (James M Snell) [#38116](https://github.com/nodejs/node/pull/38116)
* [[`8d63aa828e`](https://github.com/nodejs/node/commit/8d63aa828e)] - **src**: fix typo for initialization (Yash Ladha) [#37974](https://github.com/nodejs/node/pull/37974)
* [[`66c8f76c2c`](https://github.com/nodejs/node/commit/66c8f76c2c)] - **src**: remove KeyObjectData::CreateSecret overload (Tobias Nießen) [#38067](https://github.com/nodejs/node/pull/38067)
* [[`87dc152229`](https://github.com/nodejs/node/commit/87dc152229)] - **src**: fix node version (Richard Lau) [#36460](https://github.com/nodejs/node/pull/36460)
* [[`e929d1f2c8`](https://github.com/nodejs/node/commit/e929d1f2c8)] - **src**: fix node version (Brian White) [#36385](https://github.com/nodejs/node/pull/36385)
* [[`8e8dea36cc`](https://github.com/nodejs/node/commit/8e8dea36cc)] - **src**: use non-deprecated GetCreationContext from V8 (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`b1c1c4695c`](https://github.com/nodejs/node/commit/b1c1c4695c)] - **src**: remove V8\_FT\_ADAPTOR for V8 update (Colin Ihrig) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`8f5cce6862`](https://github.com/nodejs/node/commit/8f5cce6862)] - **src**: use non-deprecated V8 module APIs (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`497f6ca5b4`](https://github.com/nodejs/node/commit/497f6ca5b4)] - **src**: update NODE\_MODULE\_VERSION to 93 (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`001dc16cf1`](https://github.com/nodejs/node/commit/001dc16cf1)] - **src**: use non-deprecated V8 module and script APIs (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`47a90d9f37`](https://github.com/nodejs/node/commit/47a90d9f37)] - **src**: update NODE\_MODULE\_VERSION to 92 (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`5259d17309`](https://github.com/nodejs/node/commit/5259d17309)] - **src**: update NODE\_MODULE\_VERSION to 91 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`6f9cbcf6a6`](https://github.com/nodejs/node/commit/6f9cbcf6a6)] - **src**: fix v8 api deprecation (Jiawen Geng) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`9d4d55bd94`](https://github.com/nodejs/node/commit/9d4d55bd94)] - **src**: update NODE\_MODULE\_VERSION to 90 (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`369f239503`](https://github.com/nodejs/node/commit/369f239503)] - **stream**: fix multiple Writable.destroy() calls (Robert Nagy) [#38221](https://github.com/nodejs/node/pull/38221)
* [[`4ad46e2fef`](https://github.com/nodejs/node/commit/4ad46e2fef)] - **stream**: refactor to avoid unsafe array iteration (Antoine du Hamel) [#37126](https://github.com/nodejs/node/pull/37126)
* [[`419686cdfb`](https://github.com/nodejs/node/commit/419686cdfb)] - **stream**: refactor to use more primordials (Antoine du Hamel) [#36346](https://github.com/nodejs/node/pull/36346)
* [[`c704faa0f9`](https://github.com/nodejs/node/commit/c704faa0f9)] - **test**: fix flaky test-dns and test-dns-lookup (Rich Trott) [#38282](https://github.com/nodejs/node/pull/38282)
* [[`5e588c1c7c`](https://github.com/nodejs/node/commit/5e588c1c7c)] - **test**: fixup failing test/internet/test-dns.js (James M Snell) [#38241](https://github.com/nodejs/node/pull/38241)
* [[`18c9913ce1`](https://github.com/nodejs/node/commit/18c9913ce1)] - **test**: add tests for missing https agent options (Rich Trott) [#38202](https://github.com/nodejs/node/pull/38202)
* [[`4ad8e83a3d`](https://github.com/nodejs/node/commit/4ad8e83a3d)] - **test**: fix test-https-agent-additional-options.js (Rich Trott) [#38202](https://github.com/nodejs/node/pull/38202)
* [[`05df701e70`](https://github.com/nodejs/node/commit/05df701e70)] - **test**: remove common.disableCrashOnUnhandledRejection (Michaël Zasso) [#38210](https://github.com/nodejs/node/pull/38210)
* [[`8f4850d5c7`](https://github.com/nodejs/node/commit/8f4850d5c7)] - **test**: fix typo in comment in binding.c (Tobias Nießen) [#38220](https://github.com/nodejs/node/pull/38220)
* [[`9498e97015`](https://github.com/nodejs/node/commit/9498e97015)] - **test**: fix typo in gtest-all.cc (Ikko Ashimine) [#38224](https://github.com/nodejs/node/pull/38224)
* [[`c8bbd83ab2`](https://github.com/nodejs/node/commit/c8bbd83ab2)] - **test**: add undefined fatalException exit code test (Nitzan Uziely) [#38119](https://github.com/nodejs/node/pull/38119)
* [[`db9cf52dcf`](https://github.com/nodejs/node/commit/db9cf52dcf)] - **test**: check the different error code on IBM i (Xu Meng) [#38159](https://github.com/nodejs/node/pull/38159)
* [[`95ca351fd8`](https://github.com/nodejs/node/commit/95ca351fd8)] - **test**: skip fs.watch() test on IBMi (Rich Trott) [#38192](https://github.com/nodejs/node/pull/38192)
* [[`8cee28465c`](https://github.com/nodejs/node/commit/8cee28465c)] - **test**: fix test-dh-regr for OpenSSL 3 (Rich Trott) [#34289](https://github.com/nodejs/node/pull/34289)
* [[`213ae4f4c6`](https://github.com/nodejs/node/commit/213ae4f4c6)] - **test**: skip test-vm-memleak in ASAN (Rich Trott) [#34289](https://github.com/nodejs/node/pull/34289)
* [[`50208915a0`](https://github.com/nodejs/node/commit/50208915a0)] - **test**: skip test-hash-seed on armv6 and armv7 (Rich Trott) [#34289](https://github.com/nodejs/node/pull/34289)
* [[`7216eb67df`](https://github.com/nodejs/node/commit/7216eb67df)] - **test**: update OpenSSL 3.x expected error message (Daniel Bevenius) [#38164](https://github.com/nodejs/node/pull/38164)
* [[`7e516aaac0`](https://github.com/nodejs/node/commit/7e516aaac0)] - **test**: remove unneeded m flag on regular expressions (Rich Trott) [#38124](https://github.com/nodejs/node/pull/38124)
* [[`269f5132cc`](https://github.com/nodejs/node/commit/269f5132cc)] - **test**: skip different params test for OpenSSL 3.x (Daniel Bevenius) [#38165](https://github.com/nodejs/node/pull/38165)
* [[`f96dffb7ae`](https://github.com/nodejs/node/commit/f96dffb7ae)] - **test**: fix flaky test-zlib-unused-weak.js (Ouyang Yadong) [#38149](https://github.com/nodejs/node/pull/38149)
* [[`e96773b94b`](https://github.com/nodejs/node/commit/e96773b94b)] - **test**: add regression test for serdes readDouble() (Colin Ihrig) [#38121](https://github.com/nodejs/node/pull/38121)
* [[`cc4ee6cba8`](https://github.com/nodejs/node/commit/cc4ee6cba8)] - **test**: deflake test-http-many-ended-pipelines (Luigi Pinca) [#38018](https://github.com/nodejs/node/pull/38018)
* [[`098a4d6551`](https://github.com/nodejs/node/commit/098a4d6551)] - **test**: skip test-crypto-dh-keys on armv6 and armv7 (Rich Trott) [#38076](https://github.com/nodejs/node/pull/38076)
* [[`f9b63b8530`](https://github.com/nodejs/node/commit/f9b63b8530)] - **test**: update parallel/test-crypto-keygen for OpenSSL 3 (Richard Lau) [#38136](https://github.com/nodejs/node/pull/38136)
* [[`6a6cdfad03`](https://github.com/nodejs/node/commit/6a6cdfad03)] - **test**: fix skip message for test-macos-app-sandbox (Tobias Nießen) [#38114](https://github.com/nodejs/node/pull/38114)
* [[`e155b1f2f7`](https://github.com/nodejs/node/commit/e155b1f2f7)] - **test**: correct test comment (Evan Lucas) [#38095](https://github.com/nodejs/node/pull/38095)
* [[`d61977f03e`](https://github.com/nodejs/node/commit/d61977f03e)] - **test**: remove dead code (Luigi Pinca) [#38016](https://github.com/nodejs/node/pull/38016)
* [[`8b05e32519`](https://github.com/nodejs/node/commit/8b05e32519)] - **test**: fix flaky test-net-timeout (Rich Trott) [#38060](https://github.com/nodejs/node/pull/38060)
* [[`a0492ba391`](https://github.com/nodejs/node/commit/a0492ba391)] - **test**: fix test-vm-memleak for high baseline platforms (Rich Trott) [#38062](https://github.com/nodejs/node/pull/38062)
* [[`30d7f05fef`](https://github.com/nodejs/node/commit/30d7f05fef)] - **test**: improve code coverage in webcrypto API (Juan José Arboleda) [#38052](https://github.com/nodejs/node/pull/38052)
* [[`d75543d8b5`](https://github.com/nodejs/node/commit/d75543d8b5)] - **test**: fix flaky timeout-delayed-body and headers tests (Nitzan Uziely) [#38045](https://github.com/nodejs/node/pull/38045)
* [[`4f387c25cb`](https://github.com/nodejs/node/commit/4f387c25cb)] - **test**: fix flaky test-vm-memleak (Rich Trott) [#38054](https://github.com/nodejs/node/pull/38054)
* [[`330f25ef82`](https://github.com/nodejs/node/commit/330f25ef82)] - **test**: prepare for consistent comma-dangle lint rule (Rich Trott) [#37930](https://github.com/nodejs/node/pull/37930)
* [[`31fe3b215f`](https://github.com/nodejs/node/commit/31fe3b215f)] - **test**: make sure http pipelining does not emit a warning (Matteo Collina) [#37964](https://github.com/nodejs/node/pull/37964)
* [[`978bbf987c`](https://github.com/nodejs/node/commit/978bbf987c)] - **test**: fix flaky test-http2-pack-end-stream-flag (James M Snell) [#37814](https://github.com/nodejs/node/pull/37814)
* [[`ecc584251e`](https://github.com/nodejs/node/commit/ecc584251e)] - **test**: fixup flaky test-performance-function-async test (James M Snell) [#37493](https://github.com/nodejs/node/pull/37493)
* [[`32482a828b`](https://github.com/nodejs/node/commit/32482a828b)] - **test**: remove FLAKY for test-domain-error-types (Rich Trott) [#37458](https://github.com/nodejs/node/pull/37458)
* [[`501ae0e6e3`](https://github.com/nodejs/node/commit/501ae0e6e3)] - **test**: remove outdated V8 flag (Michaël Zasso) [#37151](https://github.com/nodejs/node/pull/37151)
* [[`fa3997d75a`](https://github.com/nodejs/node/commit/fa3997d75a)] - **test**: mark test-return-on-exit as flaky (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`896ae96a15`](https://github.com/nodejs/node/commit/896ae96a15)] - **test**: mark WASI's test-return-on-exit as flaky (Colin Ihrig) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`0da7a11e54`](https://github.com/nodejs/node/commit/0da7a11e54)] - **test,http**: check that http server is robust from handler abuse (Rich Trott) [#37958](https://github.com/nodejs/node/pull/37958)
* [[`a0261d231c`](https://github.com/nodejs/node/commit/a0261d231c)] - ***Revert*** "**timers**: refactor to use optional chaining" (Matteo Collina) [#38245](https://github.com/nodejs/node/pull/38245)
* [[`3da003cc1c`](https://github.com/nodejs/node/commit/3da003cc1c)] - **tls**: fix session and keylog add listener segfault (Nitzan Uziely) [#38180](https://github.com/nodejs/node/pull/38180)
* [[`eb20447407`](https://github.com/nodejs/node/commit/eb20447407)] - **tls**: extract out SecureContext configuration (James M Snell) [#38116](https://github.com/nodejs/node/pull/38116)
* [[`b16e79e05b`](https://github.com/nodejs/node/commit/b16e79e05b)] - **tls**: fix typo (Arkerone) [#38129](https://github.com/nodejs/node/pull/38129)
* [[`d4f33f109e`](https://github.com/nodejs/node/commit/d4f33f109e)] - **tools**: skip macOS GitHub Actions test on doc-only changes (Rich Trott) [#38296](https://github.com/nodejs/node/pull/38296)
* [[`13d0de5954`](https://github.com/nodejs/node/commit/13d0de5954)] - **tools**: set arch in Distribution.xml (Ash Cripps) [#38261](https://github.com/nodejs/node/pull/38261)
* [[`28bca33f28`](https://github.com/nodejs/node/commit/28bca33f28)] - **tools**: update ESLint to 7.24.0 (Colin Ihrig) [#38179](https://github.com/nodejs/node/pull/38179)
* [[`038608d401`](https://github.com/nodejs/node/commit/038608d401)] - **tools**: relax max-len lint rule for template strings (Rich Trott) [#38097](https://github.com/nodejs/node/pull/38097)
* [[`e67fb569f4`](https://github.com/nodejs/node/commit/e67fb569f4)] - **tools**: apply consistent comma-dangle lint rule (Rich Trott) [#37930](https://github.com/nodejs/node/pull/37930)
* [[`9843361c07`](https://github.com/nodejs/node/commit/9843361c07)] - **tools**: update V8 gypfiles for 9.0 (Michaël Zasso) [#37587](https://github.com/nodejs/node/pull/37587)
* [[`017661768a`](https://github.com/nodejs/node/commit/017661768a)] - **tools**: update V8 gypfiles for 8.9 (Michaël Zasso) [#37330](https://github.com/nodejs/node/pull/37330)
* [[`79da253473`](https://github.com/nodejs/node/commit/79da253473)] - **tools**: update V8 gypfiles for 8.8 (Michaël Zasso) [#36139](https://github.com/nodejs/node/pull/36139)
* [[`770d9e2542`](https://github.com/nodejs/node/commit/770d9e2542)] - **tools**: update V8 gypfiles for 8.7 (Michaël Zasso) [#35700](https://github.com/nodejs/node/pull/35700)
* [[`b87f1be92d`](https://github.com/nodejs/node/commit/b87f1be92d)] - **typings**: add types for "http\_parser" and "options" bindings (Michaël Zasso) [#38239](https://github.com/nodejs/node/pull/38239)
* [[`1c8b2956d1`](https://github.com/nodejs/node/commit/1c8b2956d1)] - **typings**: add types for internalBinding('serdes') (Michaël Zasso) [#38204](https://github.com/nodejs/node/pull/38204)
* [[`d97787fccc`](https://github.com/nodejs/node/commit/d97787fccc)] - **typings**: add JSDoc to os module functions (David Brownman) [#38197](https://github.com/nodejs/node/pull/38197)
* [[`8acfe5c2a4`](https://github.com/nodejs/node/commit/8acfe5c2a4)] - **typings**: add JSDoc Types to lib/querystring (Simon Knott) [#38185](https://github.com/nodejs/node/pull/38185)
* [[`d3162da8dd`](https://github.com/nodejs/node/commit/d3162da8dd)] - **typings**: add JSDoc typings for http (Voltrex) [#38191](https://github.com/nodejs/node/pull/38191)
* [[`82d59882b1`](https://github.com/nodejs/node/commit/82d59882b1)] - **typings**: add JSDoc typings for assert (Voltrex) [#38188](https://github.com/nodejs/node/pull/38188)
* [[`f1a21e5c91`](https://github.com/nodejs/node/commit/f1a21e5c91)] - **typings**: add JSDoc types to lib/path (Simon Knott) [#38186](https://github.com/nodejs/node/pull/38186)
* [[`3377eb9641`](https://github.com/nodejs/node/commit/3377eb9641)] - **typings**: add types for internalBinding('util') (Michaël Zasso) [#38200](https://github.com/nodejs/node/pull/38200)
* [[`cb2bdc632a`](https://github.com/nodejs/node/commit/cb2bdc632a)] - **typings**: add types for internalBinding('fs') (Michaël Zasso) [#38198](https://github.com/nodejs/node/pull/38198)
* [[`26eed3e0ed`](https://github.com/nodejs/node/commit/26eed3e0ed)] - **vm**: add import assertion support (Gus Caplan) [#37176](https://github.com/nodejs/node/pull/37176)
* [[`6986fa07eb`](https://github.com/nodejs/node/commit/6986fa07eb)] - **worker**: fix exit code for error thrown in handler (Nitzan Uziely) [#38012](https://github.com/nodejs/node/pull/38012)
Windows 32-bit Installer: https://nodejs.org/dist/v16.0.0/node-v16.0.0-x86.msi<br>
Windows 64-bit Installer: https://nodejs.org/dist/v16.0.0/node-v16.0.0-x64.msi<br>
Windows 32-bit Binary: https://nodejs.org/dist/v16.0.0/win-x86/node.exe<br>
Windows 64-bit Binary: https://nodejs.org/dist/v16.0.0/win-x64/node.exe<br>
macOS 64-bit Installer: https://nodejs.org/dist/v16.0.0/node-v16.0.0.pkg<br>
macOS Apple Silicon 64-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-darwin-arm64.tar.gz<br>
macOS Intel 64-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-darwin-x64.tar.gz<br>
Linux 64-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-linux-x64.tar.xz<br>
Linux PPC LE 64-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-linux-ppc64le.tar.xz<br>
Linux s390x 64-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-linux-s390x.tar.xz<br>
AIX 64-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-aix-ppc64.tar.gz<br>
ARMv7 32-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-linux-armv7l.tar.xz<br>
ARMv8 64-bit Binary: https://nodejs.org/dist/v16.0.0/node-v16.0.0-linux-arm64.tar.xz<br>
Source Code: https://nodejs.org/dist/v16.0.0/node-v16.0.0.tar.gz<br>
Other release files: https://nodejs.org/dist/v16.0.0/<br>
Documentation: https://nodejs.org/docs/v16.0.0/api/
### SHASUMS
```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
a6aee31e1fd8f55dc78007de2e4ac0d8e0dadd36beacfbabbaf9ab27a5f1f2f4 node-v16.0.0-aix-ppc64.tar.gz
2d6d412abcf7c9375f19fde14086a6423e5bb9415eeca1ccad49638ffc476ea3 node-v16.0.0-darwin-arm64.tar.gz
f8710a83738b4408da82fe81b7934373e4d2f84d40c8c1217676119fd3c77c7e node-v16.0.0-darwin-arm64.tar.xz
b00457dd7da6cc00d0248dc57b4ddd01a71eed6009ddadd8c854678232091dfb node-v16.0.0-darwin-x64.tar.gz
66ecffa48b98cf1ca4d038b42b74f05bfc4d31681e2aa43a1ba50919ea23823b node-v16.0.0-darwin-x64.tar.xz
a4d665582e492bf013ce67b1fadb7db9cb8fd46e7d02a30f5e473373d452e377 node-v16.0.0-headers.tar.gz
f5f178e75d78bd050d1a85ea56189bae6038d9d21d032e7889dbb22fa54da71d node-v16.0.0-headers.tar.xz
22e7d326b21195c4a0df92a7af7cfdf1743cd46fcc50e335e4086a1c1f2a9a13 node-v16.0.0-linux-arm64.tar.gz
c6dc688de6373049f21cb1ca4f2ceefe80a5d711e301b8d54fd0a7c36a406b03 node-v16.0.0-linux-arm64.tar.xz
d4e2965224ca0667732836be249ec32ad899f7f01d932121daca76cbf38e75f1 node-v16.0.0-linux-armv7l.tar.gz
1cb4bf1bac74f492f9182e44422e245cc2a971889e34f4e554b7c45eb080304c node-v16.0.0-linux-armv7l.tar.xz
bc28902e8e1453531bb38001cf705dff2456cdf5b856a37dac2f2d3d771b02c1 node-v16.0.0-linux-ppc64le.tar.gz
10bc1b3c18a05811a4497aa77b7951d963baecf033aa436358e28ba3cde28090 node-v16.0.0-linux-ppc64le.tar.xz
3cdfafc6425aace2ab24a31dcac26564a494094c7521b50dc41f3c538b3700ec node-v16.0.0-linux-s390x.tar.gz
27a5a70178cd765c8b37aa49d18d05e7338c9b043b3195d4cbf28955ca3c9aa2 node-v16.0.0-linux-s390x.tar.xz
9268cdb3c71cec4f3dc3bef98994f310c3bef259fae8c68e3f1c605c5dfcbc58 node-v16.0.0-linux-x64.tar.gz
1736446bb102e19942addce29f6a12b157ca71f38b9159d0446f51ba69618b8d node-v16.0.0-linux-x64.tar.xz
fe1d4f458a8b3e85c7c927c5a342d09407915b77ade5303fc98b0deeec89a3db node-v16.0.0.pkg
ef4928ed381dcb8f5eca9c521b3ffa4a384c75cc76656999e16f5d1c171d8e7b node-v16.0.0.tar.gz
47cb90111e8c3dc42dc538464789415354f0d933587fc89fff71f9bd816aaa02 node-v16.0.0.tar.xz
8b78d362582746c5157b9e703bdd16c3da54c51efa12bed8fdf0e30e2bfdbce6 node-v16.0.0-win-x64.7z
99c2b01afb8d966fc876ec30ac7dfdbd9da9b17a3daeda92c19ce657ab9bea61 node-v16.0.0-win-x64.zip
04859c6d5a1d5054e57d1c1eb8f58a13d9d6e0ea079fe83d9b79d3a9aa401cc5 node-v16.0.0-win-x86.7z
0600dffb5331b6f49e6ff4fa97770811746e0e2ecaf53de6deaafff277a644b4 node-v16.0.0-win-x86.zip
9309bda5a68c353145acc2fa9fbe3ec98a0234b3946a9861997f60b4b89b83a7 node-v16.0.0-x64.msi
6d7404b6e6f0c2a9cd396ce56eb68d2e0d2e5df434554345e075707bff7bc384 node-v16.0.0-x86.msi
f5d19a86afc817068ab7120919a4f96b43e60a7abe3282c3797a50f1cc723930 win-x64/node.exe
32063b59c6df338e1d367eea513dc04abcc1768f4af5ba2bb764dfd1af41e6cc win-x64/node.lib
f369ce51bda686c451740c1805fa692554568dbc55992026bb17346f5ada6f7e win-x64/node_pdb.7z
aa12acfbc081eea9a5d625471ce93ebd711c9c6785a76d940b442b672a1d2025 win-x64/node_pdb.zip
eab4525927aadf29b0e257a96a0c7afab1d42a52680622b6bf366690a6fc4d38 win-x86/node.exe
3130ffd2b70c7b3b227f62d97090d3204bb64a319a7257821ff61eb86b645d61 win-x86/node.lib
2d7feeb1a4bb7b2a7e0fe45dc39550d5913d96ff34f10f48d747f2e90b143745 win-x86/node_pdb.7z
47a135fcf66526de3fae114a554ff810567fd837d9f764527e307acc076f1384 win-x86/node_pdb.zip
-----BEGIN PGP SIGNATURE-----
iQEzBAEBCAAdFiEETtd49TnjY0x3nIfG1wYoSKGrAFwFAmB++dEACgkQ1wYoSKGr
AFzs1Af/T7bep8whLJueuaJzRhh7BGX/nzPEHU7GP215nNqbN7Simg1Xj+5QCANb
AQYjNe86Fff8JaIp6sQV40qeSEC2PNGx6mp0Rjq8SogqT5NXmRs74VVLZ+H1YERf
0Zy19USOlpSMsK4LJdhU5paShzl9xsw1Lpk7e3XDhANmL2Fd+OWiV546z/dIoKN4
v7e2cbdiYrCYEjQbY6EFyPi/As+r9MjnX7ggXQ8ZD7hRshv7dxYFSRSaIkcUNBZn
J6qRFwbVyAdFzmbUNJREt8ky2ZpwU1p2Cdl/jkWGCjxl1fUSN4/V+9bMSzRaQW/+
t/e5lo+lKhleYXFEK7B5h1Ss6F2MpA==
=v9uW
-----END PGP SIGNATURE-----
```
| 148.399538 | 288 | 0.728574 | yue_Hant | 0.267019 |
5371cc45ecde64b670d384e20bc8162faa463d0a | 1,294 | md | Markdown | includes/machine-learning-cli-subscription.md | gliljas/azure-docs.sv-se-1 | 1efdf8ba0ddc3b4fb65903ae928979ac8872d66e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/machine-learning-cli-subscription.md | gliljas/azure-docs.sv-se-1 | 1efdf8ba0ddc3b4fb65903ae928979ac8872d66e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/machine-learning-cli-subscription.md | gliljas/azure-docs.sv-se-1 | 1efdf8ba0ddc3b4fb65903ae928979ac8872d66e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: Blackmist
ms.service: machine-learning
ms.topic: include
ms.date: 03/26/2020
ms.author: larryfr
ms.openlocfilehash: 428a3ad17c81b465635207de622398e814289d87
ms.sourcegitcommit: 849bb1729b89d075eed579aa36395bf4d29f3bd9
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/28/2020
ms.locfileid: "81616843"
---
> [!TIP]
> När du har loggat in visas en lista över prenumerationer som är associerade med ditt Azure-konto. Prenumerations informationen med `isDefault: true` är den aktuella aktiverade prenumerationen för Azure CLI-kommandon. Den här prenumerationen måste vara samma som innehåller din Azure Machine Learning-arbetsyta. Du hittar prenumerations-ID: t från [Azure Portal](https://portal.azure.com) genom att gå till översikts sidan för din arbets yta. Du kan också använda SDK: n för att hämta prenumerations-ID: t från arbets ytans objekt. Till exempel `Workspace.from_config().subscription_id`.
>
> Om du vill välja en annan prenumeration använder du `az account set -s <subscription name or ID>` kommandot och anger det prenumerations namn eller-ID som du vill växla till. Mer information om val av prenumeration finns i [använda flera Azure-prenumerationer](https://docs.microsoft.com/cli/azure/manage-azure-subscriptions-azure-cli?view=azure-cli-latest). | 76.117647 | 588 | 0.808346 | swe_Latn | 0.993483 |
537205e9cbe60ef27884b0dbb28089e87417b19a | 5,463 | md | Markdown | docs/framework/unmanaged-api/profiling/profiling-enumerations.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/profiling/profiling-enumerations.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/profiling/profiling-enumerations.md | ANahr/docs.de-de | 14ad02cb12132d62994c5cb66fb6896864c7cfd7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Profilerstellungsenumerationen
ms.date: 03/30/2017
helpviewer_keywords:
- profiling enumerations [.NET Framework]
- enumerations [.NET Framework profiling]
- unmanaged enumerations [.NET Framework], profiling
ms.assetid: 8d5f9570-9853-4ce8-8101-df235d5b258e
author: mairaw
ms.author: mairaw
ms.openlocfilehash: 996352637f34b0b6c0d12e611a6d9e70ab85230e
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 05/04/2018
ms.locfileid: "33461759"
---
# <a name="profiling-enumerations"></a>Profilerstellungsenumerationen
Dieser Abschnitt beschreibt die nicht verwalteten Enumerationen, die die Profilerstellungs-API verwendet.
## <a name="in-this-section"></a>In diesem Abschnitt
[COR_PRF_CLAUSE_TYPE-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-clause-type-enumeration.md)
Zeigt den Typ der Ausnahmeklausel an, die der Code gerade eben eingegeben oder zurückgelassen hat.
[COR_PRF_CODEGEN_FLAGS-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-codegen-flags-enumeration.md)
Definiert die codeerstellungskennzeichen, die mit festgelegt werden, können die [icorprofilerfunctioncontrol:: Setcodegenflags](../../../../docs/framework/unmanaged-api/profiling/icorprofilerfunctioncontrol-setcodegenflags-method.md) Methode.
[COR_PRF_FINALIZER_FLAGS-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-finalizer-flags-enumeration.md)
Beschreibt den Finalizer für ein Objekt.
[COR_PRF_GC_GENERATION-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-gc-generation-enumeration.md)
Identifiziert die Erstellung der Garbage Collection.
[COR_PRF_GC_REASON-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-gc-reason-enumeration.md)
Zeigt den Grund, weshalb die Garbage Collection stattfindet.
[COR_PRF_GC_ROOT_FLAGS-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-gc-root-flags-enumeration.md)
Zeigt die Eigenschaften eines Garbage Collector-Stamms.
[COR_PRF_GC_ROOT_KIND-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-gc-root-kind-enumeration.md)
Gibt die Art der Garbage Collector-Stamms an, die von verfügbar gemacht wird die [ICorProfilerCallback2:: Rootreferences2](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback2-rootreferences2-method.md) Rückruf.
[COR_PRF_HIGH_MONITOR-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-high-monitor-enumeration.md)
Stellt Kennzeichen neben solche, die der [COR_PRF_MONITOR](../../../../docs/framework/unmanaged-api/profiling/cor-prf-monitor-enumeration.md) -Enumeration, die der Profiler angeben kann, zu der [icorprofilerinfo5:: Seteventmask2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo5-seteventmask2-method.md) Methode, wenn es geladen wird.
[COR_PRF_JIT_CACHE-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-jit-cache-enumeration.md)
Zeigt das Ergebnis einer zwischengespeicherten Funktionssuche.
[COR_PRF_MISC-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-misc-enumeration.md)
Enthält Konstantenwerte, die spezielle Bezeichner angeben.
[COR_PRF_MODULE_FLAGS-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-module-flags-enumeration.md)
Gibt die Eigenschaften eines Moduls an.
[COR_PRF_MONITOR-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-monitor-enumeration.md)
Enthält Werte, die zur Angabe von Verhalten, Funktionen oder Ereignissen verwendet werden, die der Profiler abonnieren möchte.
[COR_PRF_RUNTIME_TYPE-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-runtime-type-enumeration.md)
Enthält Werte, die die Version der Common Language Runtime angeben.
[COR_PRF_SNAPSHOT_INFO-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-snapshot-info-enumeration.md)
Gibt an, wie viele Daten in jedem Aufruf an die `StackSnapshotCallback`-Funktion des Profilers an eine Stapelmomentaufnahmeme zurückgegeben werden.
[COR_PRF_STATIC_TYPE-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-static-type-enumeration.md)
Zeigt an, ob ein Feld statisch ist und, falls dies der Fall ist, ob die statische Qualität für das Feld gilt.
[COR_PRF_SUSPEND_REASON-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-suspend-reason-enumeration.md)
Zeigt den Grund an, aus dem die Laufzeit angehalten wurde.
[COR_PRF_TRANSITION_REASON-Enumeration](../../../../docs/framework/unmanaged-api/profiling/cor-prf-transition-reason-enumeration.md)
Zeigt den Grund für einen Übergang von verwaltetem zu nicht verwaltetem Code an oder umgekehrt.
## <a name="related-sections"></a>Verwandte Abschnitte
[Übersicht über die Profilerstellung](../../../../docs/framework/unmanaged-api/profiling/profiling-overview.md)
[Profilerstellungsschnittstellen](../../../../docs/framework/unmanaged-api/profiling/profiling-interfaces.md)
[Profilerstellung für globale statische Funktionen](../../../../docs/framework/unmanaged-api/profiling/profiling-global-static-functions.md)
[Profilerstellungsstrukturen](../../../../docs/framework/unmanaged-api/profiling/profiling-structures.md)
| 67.444444 | 357 | 0.766978 | deu_Latn | 0.620106 |
5372582a98cf2aa265c309c70fc3f82af32619d2 | 18,390 | md | Markdown | articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery.md | krimog/azure-docs.fr-fr | f9e0062239eb8e7107ea45ad1a8e07f6c905031e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery.md | krimog/azure-docs.fr-fr | f9e0062239eb8e7107ea45ad1a8e07f6c905031e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-gallery.md | krimog/azure-docs.fr-fr | f9e0062239eb8e7107ea45ad1a8e07f6c905031e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Problèmes de connexion à une application d’authentification unique fédérée de la galerie | Microsoft Docs
description: Instructions sur la manière de résoudre les problèmes rencontrés lors de la connexion à une application configurée pour l’authentification unique SAML fédérée avec Azure AD
services: active-directory
documentationcenter: ''
author: msmimart
manager: CelesteDG
ms.assetid: ''
ms.service: active-directory
ms.subservice: app-mgmt
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.date: 02/18/2019
ms.author: mimart
ms.reviewer: luleon, asteen
ms.collection: M365-identity-device-management
ms.openlocfilehash: 32f3b2f45a808ebfa71f456c015de3dd59d60bd9
ms.sourcegitcommit: 04ec7b5fa7a92a4eb72fca6c6cb617be35d30d0c
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 07/22/2019
ms.locfileid: "68381365"
---
# <a name="problems-signing-in-to-a-gallery-application-configured-for-federated-single-sign-on"></a>Problèmes de connexion à une application de la galerie configurée pour l’authentification unique fédérée
Pour résoudre les problèmes de connexion ci-dessous, nous vous recommandons de suivre ces suggestions afin de bénéficier du meilleur diagnostic et d’automatiser les étapes de résolution :
- Installez l’[extension de navigateur sécurisée Mes applications](access-panel-extension-problem-installing.md) pour aider Azure Active Directory (Azure AD) à fournir un meilleur diagnostic et de meilleures résolutions lorsque vous utilisez l’expérience de test dans le portail Azure.
- Reproduisez l’erreur à l’aide de l’expérience de test sur la page de configuration d’application du portail Azure. En savoir plus sur le [débogage d’applications avec authentification unique SAML](../develop/howto-v1-debug-saml-sso-issues.md)
## <a name="application-not-found-in-directory"></a>Application introuvable dans le répertoire
*Erreur AADSTS70001 : L’application avec l’identificateur « https:\//contoso.com » est introuvable dans le répertoire*.
**Cause possible**
L’attribut `Issuer` envoyé de l’application vers Azure AD dans la requête SAML ne correspond pas à la valeur de l’identificateur configurée pour l’application dans Azure AD.
**Résolution :**
Vérifiez que l’attribut `Issuer` de la requête SAML correspond à la valeur de l’identificateur configurée dans Azure AD. Si vous utilisez l’[expérience de test](../develop/howto-v1-debug-saml-sso-issues.md) dans le portail Azure avec l’extension de navigateur sécurisée Mes applications, vous n’avez pas besoin de suivre ces étapes manuellement.
1. Ouvrez le [**portail Azure**](https://portal.azure.com/) et connectez-vous en tant **qu’Administrateur général** ou que **Coadministrateur**.
1. Ouvrez l’**extension Azure Active Directory** en sélectionnant **Tous les services** en haut du menu de navigation principal de gauche.
1. Entrez « **Azure Active Directory** » dans la zone de recherche de filtre et sélectionnez l’élément **Azure Active Directory**.
1. Sélectionnez **Applications d’entreprise** dans le menu de navigation de gauche d’Azure Active Directory.
1. Sélectionnez **Toutes les applications** pour afficher la liste complète de vos applications.
Si l’application que vous recherchez ne figure pas dans la liste, utilisez la commande **Filtre** en haut de la **liste de toutes les applications** et définissez l’option **Afficher** sur **Toutes les applications**.
1. Sélectionnez l’application pour laquelle vous souhaitez configurer l’authentification unique.
1. Une fois l’application chargée, ouvrez **Configuration SAML de base**. Vérifiez que la valeur située dans la zone de texte de l’identificateur correspond à celle de l’identificateur mentionnée dans l’erreur.
## <a name="the-reply-address-does-not-match-the-reply-addresses-configured-for-the-application"></a>L’adresse de réponse ne correspond pas aux adresses de réponse configurées pour l’application
*Erreur AADSTS50011 : L’adresse de réponse « https:\//contoso.com » ne correspond pas à l’adresse de réponse configurée pour l’application*
**Cause possible**
La valeur `AssertionConsumerServiceURL` dans la requête SAML ne correspond pas à la valeur de l’URL de réponse ou au modèle configuré dans Azure AD. La valeur `AssertionConsumerServiceURL` dans la requête SAML est l’URL que vous voyez dans l’erreur.
**Résolution :**
Vérifiez que la valeur `AssertionConsumerServiceURL` de la requête SAML correspond à la valeur de l’URL de réponse configurée dans Azure AD. Si vous utilisez l’[expérience de test](../develop/howto-v1-debug-saml-sso-issues.md) dans le portail Azure avec l’extension de navigateur sécurisée Mes applications, vous n’avez pas besoin de suivre ces étapes manuellement.
1. Ouvrez le [**portail Azure**](https://portal.azure.com/) et connectez-vous en tant **qu’Administrateur général** ou que **Coadministrateur**.
1. Ouvrez l’**extension Azure Active Directory** en sélectionnant **Tous les services** en haut du menu de navigation principal de gauche.
1. Entrez « **Azure Active Directory** » dans la zone de recherche de filtre et sélectionnez l’élément **Azure Active Directory**.
1. Sélectionnez **Applications d’entreprise** dans le menu de navigation de gauche d’Azure Active Directory.
1. Sélectionnez **Toutes les applications** pour afficher la liste complète de vos applications.
Si l’application que vous recherchez ne figure pas dans la liste, utilisez la commande **Filtre** en haut de la **liste de toutes les applications** et définissez l’option **Afficher** sur **Toutes les applications**.
1. Sélectionnez l’application pour laquelle vous souhaitez configurer l’authentification unique.
1. Une fois l’application chargée, ouvrez **Configuration SAML de base**. Vérifiez ou mettez à jour la valeur figurant dans la zone de texte URL de réponse pour qu’elle corresponde à la valeur `AssertionConsumerServiceURL` dans la requête SAML.
Une fois que vous avez mis à jour la valeur de l’URL de réponse dans Azure AD et qu’elle correspond à celle envoyée par l’application dans la requête SAML, vous devez être en mesure de vous connecter à l’application.
## <a name="user-not-assigned-a-role"></a>Utilisateur non affecté à un rôle
*Erreur AADSTS50105 : L’utilisateur connecté « brian\@contoso.com » n’est pas affecté à un rôle pour l’application*.
**Cause possible**
L’utilisateur ne dispose pas des autorisations nécessaires pour accéder à l’application dans Azure AD.
**Résolution :**
Pour affecter un ou plusieurs utilisateurs directement à une application, effectuez les étapes suivantes. Si vous utilisez l’[expérience de test](../develop/howto-v1-debug-saml-sso-issues.md) dans le portail Azure avec l’extension de navigateur sécurisée Mes applications, vous n’avez pas besoin de suivre ces étapes manuellement.
1. Ouvrez le [**portail Azure**](https://portal.azure.com/) et connectez-vous en tant **qu’administrateur général**.
1. Ouvrez l’**extension Azure Active Directory** en sélectionnant **Tous les services** en haut du menu de navigation principal de gauche.
1. Entrez « **Azure Active Directory** » dans la zone de recherche de filtre et sélectionnez l’élément **Azure Active Directory**.
1. Sélectionnez **Applications d’entreprise** dans le menu de navigation de gauche d’Azure Active Directory.
1. Sélectionnez **Toutes les applications** pour afficher la liste complète de vos applications.
Si l’application que vous recherchez ne figure pas dans la liste, utilisez la commande **Filtre** en haut de la **liste de toutes les applications** et définissez l’option **Afficher** sur **Toutes les applications**.
1. Dans la liste d’applications qui s’affiche, sélectionnez l’application à laquelle vous souhaitez affecter un utilisateur.
1. Une fois l’application chargée, sélectionnez **Utilisateurs et groupes** dans le menu de navigation gauche de l’application.
1. Cliquez sur le bouton **Ajouter** en haut de la liste **Utilisateurs et groupes** pour ouvrir le volet **Ajouter une attribution**.
1. Sélectionnez le sélecteur **Utilisateurs et groupes** à partir du volet **Ajouter une attribution**.
1. Dans la zone de recherche **Rechercher par nom ou adresse de messagerie** , tapez le nom complet ou l’adresse e-mail de l’utilisateur que vous voulez ajouter.
1. Pointez sur **l’utilisateur** dans la liste pour afficher une **case à cocher**. Cliquez sur la case à cocher en regard de la photo de profil ou du logo de l’utilisateur pour ajouter ce dernier à la liste **Sélectionné**.
1. **Facultatif :** Si vous souhaitez **ajouter plusieurs utilisateurs**, entrez un autre nom complet ou une autre adresse e-mail dans la zone de recherche **Rechercher par nom ou adresse de messagerie**, puis cochez la case pour ajouter cet utilisateur à la liste **Sélectionné**.
1. Après avoir sélectionné les utilisateurs, cliquez sur le bouton **Sélectionner** pour les ajouter à la liste des utilisateurs et des groupes à affecter à l’application.
1. **Facultatif :** Cliquez sur le sélecteur **Sélectionner un rôle** dans le volet **Ajouter une attribution** pour sélectionner un rôle à affecter aux utilisateurs que vous avez sélectionnés.
1. Cliquez sur le bouton **Attribuer** pour affecter l’application aux utilisateurs sélectionnés.
Après quelques instants, les utilisateurs que vous avez sélectionnés seront en mesure de démarrer ces applications à l’aide des méthodes décrites dans la section de description des solutions.
## <a name="not-a-valid-saml-request"></a>Requête SAML non valide
*Erreur AADSTS75005 : La requête n’est pas un message de protocole Saml2 valide.*
**Cause possible**
Azure AD ne prend pas en charge les requêtes SAML envoyées par l’application pour l’authentification unique. Voici certains problèmes courants :
- Des champs obligatoires sont manquants dans la demande SAML
- La méthode de demande SAML encodée
**Résolution :**
1. Capturez la requête SAML. Pour savoir comment capturer la requête SAML, suivez le didacticiel [Comment déboguer une authentification unique SAML pour des applications dans Azure AD](../develop/howto-v1-debug-saml-sso-issues.md).
1. Contactez le fournisseur de l’application et communiquez-lui les informations suivantes :
- Demande SAML
- [Spécifications du protocole SAML d’authentification unique Azure AD](../develop/single-sign-on-saml-protocol.md)
Le fournisseur d’application doit confirmer sa prise en charge de l’implémentation SAML Azure AD pour l’authentification unique.
## <a name="misconfigured-application"></a>Application mal configurée
*Erreur AADSTS650056 : Application mal configurée. La raison peut être l’une des suivantes : Le client n’a pas répertorié toutes les autorisations pour « AAD Graph » dans les autorisations demandées de l’inscription d’application du client. Ou l’administrateur n’a pas donné son consentement dans le locataire. Vous pouvez aussi vérifier l’identificateur d’application dans la requête pour vous assurer qu’il correspond à l’identificateur d’application cliente configuré. Contactez votre administrateur pour corriger la configuration ou donner un consentement au nom du locataire.* .
**Cause possible**
L’attribut `Issuer` envoyé de l’application vers Azure AD dans la requête SAML ne correspond pas à la valeur de l’identificateur configurée pour l’application dans Azure AD.
**Résolution :**
Vérifiez que l’attribut `Issuer` de la requête SAML correspond à la valeur de l’identificateur configurée dans Azure AD. Si vous utilisez l’[expérience de test](../develop/howto-v1-debug-saml-sso-issues.md) dans le portail Azure avec l’extension de navigateur sécurisée Mes applications, vous n’avez pas besoin de suivre ces étapes manuellement :
1. Ouvrez le [**portail Azure**](https://portal.azure.com/) et connectez-vous en tant **qu’Administrateur général** ou que **Coadministrateur**.
1. Ouvrez l’**extension Azure Active Directory** en sélectionnant **Tous les services** en haut du menu de navigation principal de gauche.
1. Entrez « **Azure Active Directory** » dans la zone de recherche de filtre et sélectionnez l’élément **Azure Active Directory**.
1. Sélectionnez **Applications d’entreprise** dans le menu de navigation de gauche d’Azure Active Directory.
1. Sélectionnez **Toutes les applications** pour afficher la liste complète de vos applications.
Si l’application que vous recherchez ne figure pas dans la liste, utilisez la commande **Filtre** en haut de la **liste de toutes les applications** et définissez l’option **Afficher** sur **Toutes les applications**.
1. Sélectionnez l’application pour laquelle vous souhaitez configurer l’authentification unique.
1. Une fois l’application chargée, ouvrez **Configuration SAML de base**. Vérifiez que la valeur située dans la zone de texte de l’identificateur correspond à celle de l’identificateur mentionnée dans l’erreur.
## <a name="certificate-or-key-not-configured"></a>Certificat ou clé non configuré(e)
*Erreur AADSTS50003 : Aucune clé de signature configurée.*
**Cause possible**
L’objet d’application est endommagé et Azure AD ne reconnaît pas le certificat configuré pour l’application.
**Résolution :**
Pour supprimer et créer un nouveau certificat, effectuez les étapes suivantes :
1. Ouvrez le [**portail Azure**](https://portal.azure.com/) et connectez-vous en tant **qu’Administrateur général** ou que **Coadministrateur**.
1. Ouvrez **l’extension Azure Active Directory** en cliquant sur **Tous les services** en haut du menu de navigation principal de gauche.
1. Entrez « **Azure Active Directory** » dans la zone de recherche de filtre et sélectionnez l’élément **Azure Active Directory**.
1. Sélectionnez **Applications d’entreprise** dans le menu de navigation de gauche d’Azure Active Directory.
1. Sélectionnez **Toutes les applications** pour afficher la liste complète de vos applications.
Si l’application que vous recherchez ne figure pas dans la liste, utilisez la commande **Filtre** en haut de la **liste de toutes les applications** et définissez l’option **Afficher** sur **Toutes les applications**.
1. Sélectionnez l’application pour laquelle vous souhaitez configurer l’authentification unique.
1. Une fois l’application chargée, cliquez sur **Authentification unique** dans le menu de navigation de gauche de l’application.
1. Dans la section **Certificat de signature SAML**, sélectionnez **Créer un certificat**.
1. Sélectionnez une date d’expiration, puis cliquez sur **Enregistrer**.
1. Cochez la case **Activer le nouveau certificat** pour substituer le certificat actif. Ensuite, cliquez sur **Enregistrer** en haut du volet, puis acceptez d’activer le certificat de substitution.
1. Dans la section **Certificat de signature SAML**, cliquez sur **Supprimer** pour supprimer le certificat **Inutilisé**.
## <a name="saml-request-not-present-in-the-request"></a>Requête SAML absente de la requête
*Erreur AADSTS750054 : SAMLRequest ou SAMLResponse doit être présent en tant que paramètres de la chaîne de requête dans la requête HTTP pour la liaison de redirection SAML.*
**Cause possible**
Azure AD n'a pas pu identifier la requête SAML dans les paramètres URL de la requête HTTP. Cela peut se produire si l’application n’utilise pas la liaison de redirection HTTP pour l’envoi de la requête SAML vers Azure AD.
**Résolution :**
L’application doit envoyer la requête SAML encodée dans l’en-tête d’emplacement à l’aide de la liaison de redirection HTTP. Pour plus d'informations sur la mise en œuvre, lisez la section Liaison de redirection HTTP dans le [document de spécification du protocole SAML](https://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf).
## <a name="azure-ad-is-sending-the-token-to-an-incorrect-endpoint"></a>Azure AD envoie le jeton vers un point de terminaison incorrect
**Cause possible**
Lors de l’authentification unique, si la requête de connexion ne contient pas une URL de réponse explicite (URL Assertion Consumer Service), Azure AD sélectionne l’une des URL de réponse configurées pour cette application. Même si l’application possède une URL de réponse explicite configurée, l’utilisateur peut être redirigé vers https://127.0.0.1:444.
Lorsque l’application a été ajoutée comme ne figurant pas sur galerie, Azure Active Directory a créé cette URL de réponse en tant que valeur par défaut. Ce comportement a changé et Azure Active Directory n’ajoute plus cette URL par défaut.
**Résolution :**
Supprimez les URL de réponse non utilisées qui sont configurées pour l’application.
1. Ouvrez le [**portail Azure**](https://portal.azure.com/) et connectez-vous en tant **qu’Administrateur général** ou que **Coadministrateur**.
2. Ouvrez l’**extension Azure Active Directory** en sélectionnant **Tous les services** en haut du menu de navigation principal de gauche.
3. Entrez « **Azure Active Directory** » dans la zone de recherche de filtre et sélectionnez l’élément **Azure Active Directory**.
4. Sélectionnez **Applications d’entreprise** dans le menu de navigation de gauche d’Azure Active Directory.
5. Sélectionnez **Toutes les applications** pour afficher la liste complète de vos applications.
Si l’application que vous recherchez ne figure pas dans la liste, utilisez la commande **Filtre** en haut de la **liste de toutes les applications** et définissez l’option **Afficher** sur **Toutes les applications**.
6. Sélectionnez l’application pour laquelle vous souhaitez configurer l’authentification unique.
7. Une fois l’application chargée, ouvrez **Configuration SAML de base**. Dans **URL de réponse (URL Assertion Consumer Service)** , supprimez les URL de réponse non utilisées ou par défaut que le système a créées. Par exemple : `https://127.0.0.1:444/applications/default.aspx`.
## <a name="problem-when-customizing-the-saml-claims-sent-to-an-application"></a>Problème lors de la personnalisation des revendications SAML envoyées à une application
Pour savoir comment personnaliser les revendications d’attribut SAML envoyées à votre application, consultez [Mappage de revendications dans Azure Active Directory](../develop/active-directory-claims-mapping.md).
## <a name="next-steps"></a>Étapes suivantes
[Comment déboguer une authentification unique SAML pour des applications dans Azure AD](../develop/howto-v1-debug-saml-sso-issues.md)
| 66.872727 | 583 | 0.780044 | fra_Latn | 0.979905 |
## isdparser
[](https://github.com/bsnacks000/isdparser/actions/workflows/CI.yaml)
A utility package to help parse noaa isd files from `ftp://ftp.ncdc.noaa.gov/pub/data/noaa`
Turns this:
```
0130010230999992020010100004+69067+018533FM-12+007999999V0200501N001019999999N999999999-00291-00381099661
```
Into this:
```python
{'datestamp': datetime.datetime(2020, 1, 1, 0, 0, tzinfo=datetime.timezone.utc),
'identifier': '010230-99999',
'sections': [{'measures': [{'usaf': '010230'},
{'wban': '99999'},
{'date': '20200101'},
{'time': '0000'},
{'description': 'USAF SURFACE HOURLY observation',
'measure': 'data_source_flag',
'value': '4'},
{'measure': 'latitude',
'unit': 'angular_degrees',
'value': 69.067},
{'measure': 'longitude',
'unit': 'angular_degrees',
'value': 18.533},
{'description': 'SYNOP Report of surface '
'observation form a fixed land '
'station',
'measure': 'code',
'value': 'FM-12'},
{'measure': 'elevation_dimension',
'unit': 'meters',
'value': 79.0},
{'call_letter_identifier': None},
{'description': 'Automated Quality Control',
'measure': 'quality_control_process_name',
'value': 'V020'}],
'name': 'control'},
{'measures': [{'measure': 'wind_observation_direction_angle',
'unit': 'angular_degrees',
'value': 50.0},
{'description': 'Passed all quality control '
'checks',
'measure': 'wind_observation_direction_quality_code',
'value': '1'},
{'description': 'Normal',
'measure': 'wind_observation_type_code',
'value': 'N'},
{'measure': 'wind_observation_speed_rate',
'unit': 'meters_per_second',
'value': 1.0},
{'description': 'Passed all quality control '
'checks',
'measure': 'wind_observation_speed_quality_code',
'value': '1'},
{'measure': 'sky_condition_observation_ceiling_height_dimension',
'unit': 'meters',
'value': None},
{'description': 'Passed gross limits check if '
'element is present',
'measure': 'sky_condition_observation_ceiling_quality_code',
'value': '9'},
{'description': 'Missing',
'measure': 'sky_condition_observation_ceiling_determination_code',
'value': '9'},
{'description': 'No',
'measure': 'sky_condition_observation_cavok_code',
'value': 'N'},
{'measure': 'visibility_observation_distance_dimension',
'unit': 'meters',
'value': None},
{'description': 'Passed gross limits check if '
'element is present',
'measure': 'visibility_observation_distance_quality_code',
'value': '9'},
{'description': 'Missing',
'measure': 'visibility_observation_variability_code',
'value': '9'},
{'description': 'Passed gross limits check if '
'element is present',
'measure': 'visibility_observation_quality_variability_code',
'value': '9'},
{'measure': 'air_temperature_observation_air_temperature',
'unit': 'degrees_celsius',
'value': -2.9},
{'description': 'Passed all quality control '
'checks',
'measure': 'air_temperature_observation_air_temperature_quality_code',
'value': '1'},
{'measure': 'air_temperature_observation_dew_point_temperature',
'unit': 'degrees_celsius',
'value': -3.8},
{'description': 'Passed all quality control '
'checks',
'measure': 'air_temperature_observation_dew_point_quality_code',
'value': '1'},
{'measure': 'atmospheric_pressure_observation_sea_level_pressure',
'unit': 'hectopascals',
'value': 996.6},
{'description': 'Passed all quality control '
'checks',
'measure': 'atmospheric_pressure_observation_sea_level_pressure_quality_code',
'value': '1'}],
'name': 'mandatory'}]},
```
### Notes
The current version parses only the `control` and `mandatory` data sections of the isd record-string. The `additional` data section is extremely inconsistent between records and would require a lot of work to properly map in a sane way.
The above schema was constructed with mongo in mind so feel free to fork and modify to your needs. Any changes or additions to the above schema will incur a minor version bump (see below for install).
Missing data for numerical measures is represented as `None` in the python schema. This should map to a database better than symbols like `-7777` or `+999999`, which are used in the strings.
All numerical data is also "scaled" and appropriately signed by the provided `scaling_factor` from the pdf.
I've included a `data-dictionary.md` and a copy of the isd documentation I used to write the parsers in the `extras` folder.
### Install
For now install from here with pip. The master branch will contain the most up to date version.
```
pip install git+https://github.com/bsnacks000/isdparser.git
```
or a specific version
```
pip install git+https://github.com/bsnacks000/[email protected]
```
### Usage
If you want to parse an isd with the default configuration which includes the mappings I created and all points from the `control` and `data` sections, then it is very simple.
```python
from isdparser import ISDRecordFactory
# assumes you downloaded a file but could also be a direct to ftp connection.
with open('010230-99999-2020.txt', 'r') as f:
lines = f.readlines()
# create a list of record objects
records = [ISDRecordFactory().create(line) for line in lines]
# build the schema
schema = [r.schema() for r in records]
# do things...
```
This will produce the schema documented above. It can be serialized directly to JSON, offloaded to mongo, or easily flattened and uploaded to a SQL database.
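As an illustrative sketch, here's one way to flatten the schema into one row per measure for a tabular load (the field names follow the schema shown above):

```python
rows = []
for doc in schema:
    for section in doc['sections']:
        for measure in section['measures']:
            # Skip identifier-style entries like {'usaf': '010230'}
            # that don't carry a 'measure' key
            if 'measure' not in measure:
                continue
            rows.append({
                'identifier': doc['identifier'],
                'datestamp': doc['datestamp'].isoformat(),
                'section': section['name'],
                'measure': measure['measure'],
                'value': measure.get('value'),
                'unit': measure.get('unit'),
            })
```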
One caveat for using the high level factory is that it will make a root level key by looking up certain values in the control section. This is a convenience and assumes you want to have some easy to manage data integrity based on the usaf, wban and datestamp. These three fields will need to be given in the control section. If for some reason you don't want this behavior I recommend not using the high level factory and creating your own high level schema.
If you want to modify the API to parse less data, simply create new lists of `control` and `mandatory` measures and give these, along with names, to the `ISDRecordFactory`. You can do this either using a callable or explicitly with lists of Measures.
---
title: "lego"
id: tag.id
permalink: "/tags/lego"
videos: [257,1311,1869]
---
---
title: Skip While Clause (Visual Basic)
ms.date: 07/20/2015
f1_keywords:
- vb.QuerySkipWhile
helpviewer_keywords:
- Skip While statement [Visual Basic]
- Skip While clause [Visual Basic]
- queries [Visual Basic], Skip While
ms.assetid: 5dee8350-7520-4f1a-b00d-590cacd572d6
ms.openlocfilehash: 3d6caeb1938e8e53e8ec2575f740cd5e49496f62
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 04/23/2019
ms.locfileid: "62054423"
---
# <a name="skip-while-clause-visual-basic"></a>Skip While Clause (Visual Basic)

Bypasses elements in a collection as long as a specified condition is `true`, and then returns the remaining elements.

## <a name="syntax"></a>Syntax
```
Skip While expression
```
## <a name="parts"></a>Parts

|Term|Definition|
|---|---|
|`expression`|Required. An expression that represents a condition to test the elements for. The expression must return a `Boolean` value or a functional equivalent, such as an `Integer` to be evaluated as a `Boolean`.|
## <a name="remarks"></a>Remarks

The `Skip While` clause bypasses elements from the beginning of a query result until the supplied `expression` returns `false`. After `expression` returns `false`, the query returns all of the remaining elements. The `expression` is ignored for the remaining results.

The `Skip While` clause differs from the `Where` clause in that the `Where` clause can be used to exclude all elements from a query that do not meet a particular condition, whereas the `Skip While` clause excludes elements only until the first time the condition is not satisfied. The `Skip While` clause is most useful when you are working with an ordered query result.

You can bypass a specific number of results from the beginning of a query result by using the `Skip` clause.
## <a name="example"></a>Example

The following code example uses the `Skip While` clause to bypass results until the first customer from the United States is found.
[!code-vb[VbSimpleQuerySamples#3](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbSimpleQuerySamples/VB/QuerySamples1.vb#3)]
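The snippet above is pulled in by the docs build. As a standalone illustration with plain numbers, a sketch like the following shows the behavior:

```vb
Dim amounts() As Integer = {50, 25, 10, 7, 32, 4}

' Skip elements while they are 10 or greater; once 7 fails the
' condition, all remaining elements are returned unchecked.
Dim query = From amount In amounts
            Skip While amount >= 10
            Select amount

' Result: 7, 32, 4
```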
## <a name="see-also"></a>See also

- [Introduction to LINQ in Visual Basic](../../../visual-basic/programming-guide/language-features/linq/introduction-to-linq.md)
- [Queries](../../../visual-basic/language-reference/queries/index.md)
- [Select Clause](../../../visual-basic/language-reference/queries/select-clause.md)
- [From Clause](../../../visual-basic/language-reference/queries/from-clause.md)
- [Skip Clause](../../../visual-basic/language-reference/queries/skip-clause.md)
- [Take While Clause](../../../visual-basic/language-reference/queries/take-while-clause.md)
- [Where Clause](../../../visual-basic/language-reference/queries/where-clause.md)
ansible-role-java-dev [![Build Status](https://travis-ci.org/caarlos0/ansible-role-java-dev.svg?branch=master)](https://travis-ci.org/caarlos0/ansible-role-java-dev)
=========
Installs JDK 8, Maven and Gradle.
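Assuming the role is published to Ansible Galaxy under the name used in the playbook below, it can be installed with:

    ansible-galaxy install caarlos0.java-dev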
Example Playbook
----------------
    - hosts: servers
      roles:
        - caarlos0.java-dev
License
-------
MIT
---
id: advanced-tutorial
title: Advanced Tutorial
sidebar_label: Advanced Tutorial
hide_title: true
---
# Advanced Tutorial: Redux Toolkit in Practice

In the [Intermediate Tutorial](./intermediate-tutorial.md), you saw how to use Redux Toolkit in a typical basic React app, as well as how to convert some existing plain Redux code to use RTK instead. You also saw how to write "mutating" immutable updates in reducer functions, and how to write a "prepare callback" to generate an action payload.

In this tutorial, you'll see how to use Redux Toolkit as part of a larger "real world" app that is bigger than a todo list example. This tutorial will show several concepts:

- How to convert a "plain React" app to use Redux
- How async logic like data fetching fits into RTK
- How to use RTK with TypeScript

In the process, we'll look at a few examples of TypeScript techniques you can use to improve your code, and we'll see how to use the new [React-Redux hooks APIs](https://react-redux.js.org/api/hooks) as an alternative to the [traditional `connect` API](https://react-redux.js.org/api/connect).

> **Note**: This is not a complete tutorial on how to use TypeScript in general or with Redux specifically, and the examples shown here do not try to achieve 100% complete type safety. For further information, please refer to community resources such as the [React TypeScript Cheatsheet](https://github.com/typescript-cheatsheets/react-typescript-cheatsheet) and the [React/Redux TypeScript Guide](https://github.com/piotrwitek/react-redux-typescript-guide).
>
> In addition, this tutorial does not mean you _must_ convert your React app logic completely to Redux. [It's up to you to decide what state should live in React components, and what should be in Redux](https://redux.js.org/faq/organizing-state#do-i-have-to-put-all-my-state-into-redux-should-i-ever-use-reacts-setstate). This is just an example of how you _could_ convert logic to use Redux if you choose.

The complete source code for the converted application from this tutorial is available at [github.com/reduxjs/rtk-github-issues-example](https://github.com/reduxjs/rtk-github-issues-example). We'll be walking through the conversion process as shown in that repo's history. Links to meaningful individual commits will be highlighted in quote blocks, like this:

> - Commit message here

## Reviewing the Starting Example Application

The example application for this tutorial is a Github Issues viewer app. It allows the user to enter the names of a Github org and repository, fetch the current list of open issues, page through the issues list, and view the contents and comments of a specific issue.

The starting commit for this application is a plain React implementation that uses function components with hooks for state and side effects like data fetching. The code is already written in TypeScript, and the styling is done via CSS Modules.

Let's start by viewing the original plain React app in action:
<iframe src="https://codesandbox.io/embed/rsk-github-issues-example-8jf6d?fontsize=14&hidenavigation=1&theme=dark&view=preview"
style={{ width: '100%', height: '500px', border: 0, borderRadius: '4px', overflow: 'hidden' }}
title="rtk-github-issues-example-01-plain-react"
allow="geolocation; microphone; camera; midi; vr; accelerometer; gyroscope; payment; ambient-light-sensor; encrypted-media; usb"
sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"
></iframe>
### React Codebase Source Overview

The codebase is already laid out in a "feature folder" structure. The main pieces are:

- `/api`: fetching functions and TS types for the Github Issues API
- `/app`: the main `<App>` component
- `/components`: components that are reused in multiple places
- `/features`
  - `/issueDetails`: components for the Issue Details page
  - `/issuesList`: components for the Issues List display
  - `/repoSearch`: components for the Repo Search form
- `/utils`: various string utility functions

## Setting Up the Redux Store

Since this app doesn't yet use Redux at all, the first step is to install Redux Toolkit and React-Redux. Since this is a TypeScript app, we'll also need to add `@types/react-redux`. Add those packages to the project via either Yarn or NPM.

> - [Add Redux Toolkit and React-Redux packages](https://github.com/reduxjs/rtk-github-issues-example/compare/Add_Redux_Toolkit_and_React-Redux_packages~1..reduxjs:Add_Redux_Toolkit_and_React-Redux_packages)

Next, we need to set up the usual pieces: a root reducer function, the Redux store, and the `<Provider>` to make that store available to our component tree.

In the process, we're going to set up "Hot Module Replacement" for our app. That way, whenever we make a change to the reducer logic or the component tree, Create-React-App will rebuild the app and swap the changed code into our running app, without having to completely refresh the page.

#### Creating the Root Reducer

> - [Add store and root reducer with reducer HMR](https://github.com/reduxjs/rtk-github-issues-example/compare/Add_store_and_root_reducer_with_reducer_HMR~1..reduxjs:Add_store_and_root_reducer_with_reducer_HMR)

First, we'll create the root reducer function. We don't have any slices yet, so it will just return an empty object.

However, we're going to want to know what the TypeScript type is for that root state object, because we need to declare what the type of the `state` variable is whenever our code needs to access the Redux store state (such as in `mapState` functions, `useSelector` selectors, and `getState` in thunks).

We could write a TS type by hand with the correct types for each state slice, but then we'd have to keep updating that type every time we change the state structure in our slices. Fortunately, TS is usually good at inferring types from the code we've already written. In this case, we can define a type that says "this type is whatever gets returned from `rootReducer`", and TS will automatically figure out whatever that contains as the code changes. If we export that type, other parts of the app can use it, and we know that it's always up to date. All we have to do is use the built-in TS `ReturnType` utility type, and feed in "the type of the `rootReducer` function" as its generic argument.
**app/rootReducer.ts**
```ts
import { combineReducers } from '@reduxjs/toolkit'
const rootReducer = combineReducers({})
export type RootState = ReturnType<typeof rootReducer>
export default rootReducer
```
#### Store Setup and HMR

Next, we'll create the store instance, including hot-reloading the root reducer. By using the [`module.hot` API for reloading](https://webpack.js.org/concepts/hot-module-replacement/), we can re-import the new version of the root reducer function whenever it's been recompiled, and tell the store to use the new version instead.
**app/store.ts**
```ts
import { configureStore } from '@reduxjs/toolkit'
import rootReducer from './rootReducer'
const store = configureStore({
reducer: rootReducer
})
if (process.env.NODE_ENV === 'development' && module.hot) {
module.hot.accept('./rootReducer', () => {
const newRootReducer = require('./rootReducer').default
store.replaceReducer(newRootReducer)
})
}
export type AppDispatch = typeof store.dispatch
export default store
```
The `require('./rootReducer').default` looks a bit odd. That's because we're mixing CommonJS synchronous import syntax with ES modules, so the "default export" is in an object field called `default`. We probably could have used `import()` and handled the reducer replacement asynchronously as well.
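For reference, here's a sketch of what that asynchronous variant might look like. This is an illustrative alternative, not what the example repo actually does:

```ts
if (process.env.NODE_ENV === 'development' && module.hot) {
  module.hot.accept('./rootReducer', async () => {
    // Dynamic import() returns a promise for the module namespace object,
    // so the default export lives on its `default` field as well
    const newRootReducer = (await import('./rootReducer')).default
    store.replaceReducer(newRootReducer)
  })
}
```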
#### Rendering the `Provider`

Now that the store has been created, we can add it to the React component tree.

> - [Render Redux Provider with app HMR](https://github.com/reduxjs/rtk-github-issues-example/compare/Render_Redux_Provider_with_app_HMR~1..reduxjs:Render_Redux_Provider_with_app_HMR)

As with the root reducer, we can hot-reload the React component tree whenever a component file changes. The best way is to write a function that imports the `<App>` component and renders it, call that once on startup to show the React component tree as usual, and then reuse that function any time a component is changed.
**index.tsx**
```tsx
import React from 'react'
import ReactDOM from 'react-dom'
import { Provider } from 'react-redux'
import store from './app/store'
import './index.css'
const render = () => {
const App = require('./app/App').default
ReactDOM.render(
<Provider store={store}>
<App />
</Provider>,
document.getElementById('root')
)
}
render()
if (process.env.NODE_ENV === 'development' && module.hot) {
module.hot.accept('./app/App', render)
}
```
## Converting the Main App Display

With the main store setup done, we can now start converting the actual app logic to use Redux.

### Evaluating the Existing App State

Currently, the top-level `<App>` component uses React `useState` hooks to store several pieces of information:

- The selected Github org and repo
- The current issues list page number
- Whether we're viewing the issues list, or the details of a specific issue

Meanwhile, the `<RepoSearchForm>` component also uses state hooks to store the work-in-progress values for the controlled form inputs.

The Redux FAQ has [some rules of thumb on when it makes sense to put data into Redux](https://redux.js.org/faq/organizing-state#do-i-have-to-put-all-my-state-into-redux-should-i-ever-use-reacts-setstate). In this case, it makes sense to extract the state values from `<App>` and put them into the Redux store. While there's only one component that uses them now, a larger app might have multiple components that care about those values. Since we've set up HMR, it would also be helpful to persist those values if we edit the component tree later.

On the other hand, while we _could_ keep the WIP form values in the Redux store, there's no real benefit to doing so. Only the `<RepoSearchForm>` component cares about those values, and none of the other rules of thumb apply here. In general, [most form state shouldn't be kept in Redux](https://redux.js.org/faq/organizing-state#should-i-put-form-state-or-other-ui-state-in-my-store). So, we'll leave that alone.

### Creating the Initial State Slices

The first step is to look at the data that is currently being kept in `<App>`, and turn that into the types and initial state values for an "issues display" slice. From there, we can define reducers to update them appropriately.

Let's look at the source for the whole slice, and then break down what it's doing:

> - [Add initial state slice for UI display](https://github.com/reduxjs/rtk-github-issues-example/compare/Add_initial_state_slice_for_UI_display~1..reduxjs:Add_initial_state_slice_for_UI_display)
**features/issuesDisplay/issuesDisplaySlice.ts**
```ts
import { createSlice, PayloadAction } from '@reduxjs/toolkit'
interface CurrentDisplay {
displayType: 'issues' | 'comments'
issueId: number | null
}
interface CurrentDisplayPayload {
displayType: 'issues' | 'comments'
issueId?: number
}
interface CurrentRepo {
org: string
repo: string
}
type CurrentDisplayState = {
page: number
} & CurrentDisplay &
CurrentRepo
let initialState: CurrentDisplayState = {
org: 'rails',
repo: 'rails',
page: 1,
displayType: 'issues',
issueId: null
}
const issuesDisplaySlice = createSlice({
name: 'issuesDisplay',
initialState,
reducers: {
displayRepo(state, action: PayloadAction<CurrentRepo>) {
const { org, repo } = action.payload
state.org = org
state.repo = repo
},
setCurrentPage(state, action: PayloadAction<number>) {
state.page = action.payload
},
setCurrentDisplayType(state, action: PayloadAction<CurrentDisplayPayload>) {
const { displayType, issueId = null } = action.payload
state.displayType = displayType
state.issueId = issueId
}
}
})
export const {
displayRepo,
setCurrentDisplayType,
setCurrentPage
} = issuesDisplaySlice.actions
export default issuesDisplaySlice.reducer
```
#### Declaring Types for the State Contents

The org and repo values are plain strings, and the current issues page is just a number. We use a union of string constants to indicate whether we're showing the issues list or the details of a single issue, and if it's the details, we need to know the issue's ID number.

We can define types for some of these pieces on their own, so that they can be reused in the action types later, and we can also combine them into a larger type for the entire state we're planning to track.

The "current display" part requires a bit of extra work, because the type listed for the state includes a page number, but the UI won't include one when it dispatches an action to switch to the issues list. So, we define a separate type for that action's contents.

#### Declaring Types for Slice State and Actions

`createSlice` tries to infer types from two sources:

- The state type is based on the type of the `initialState` field
- Each reducer needs to declare the type of the action it expects to handle

The state type is used as the type for the `state` parameter in each of the case reducers and as the return type for the generated reducer function, and the action types are used for the corresponding generated action creators. (Or, if you also define a "prepare callback" alongside a reducer, the prepare callback's arguments are used for the action creator as well, and the callback's return value must match the declared type for the action the reducer expects.)

The main type you will use when declaring action types in reducers is **`PayloadAction<PayloadType>`**. `createAction` uses this type as its return value.

Let's look at a specific reducer as an example:
```ts
setCurrentPage(state, action: PayloadAction<number>) {
state.page = action.payload
},
```
We don't have to declare a type for `state`, because `createSlice` already knows that this should be the same type as our `initialState`: the `CurrentDisplayState` type.

We declare that the action object is a `PayloadAction`, where `action.payload` is a `number`. Then, when we assign `state.page = action.payload`, TS knows that we're assigning a number to a number, and it works correctly. If we were to call `issuesDisplaySlice.actions.setCurrentPage()`, we would need to pass a number in as the argument, because that number will become the payload in the action.

Similarly, for `displayRepo(state, action: PayloadAction<CurrentRepo>)`, TS knows that `action.payload` is an object with `org` and `repo` string fields, and we can assign them to the state. (This "mutative" assignment is safe and possible because `createSlice` uses Immer inside!)
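To make the type checking concrete, here's a small illustrative sketch of how the generated action creators behave at their call sites (the repo values are placeholders):

```ts
dispatch(setCurrentPage(3)) // OK: payload is a number
// dispatch(setCurrentPage('3')) // TS error: string is not assignable to number

dispatch(displayRepo({ org: 'reduxjs', repo: 'redux-toolkit' })) // OK
// dispatch(displayRepo({ org: 'reduxjs' })) // TS error: property 'repo' is missing
```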
#### Using the Slice Reducer

As with the other examples, we need to import the issues display slice reducer and add it to our root reducer:
**app/rootReducer.ts**
```diff
import { combineReducers } from '@reduxjs/toolkit'
+import issuesDisplayReducer from 'features/issuesDisplay/issuesDisplaySlice'
-const rootReducer = combineReducers({})
+const rootReducer = combineReducers({
+ issuesDisplay: issuesDisplayReducer
+})
```
### Converting the Issues Display

Now that the issues display slice is hooked up to the store, we can update `<App>` to use that instead of its internal component state.

> - [Convert main issues display control to Redux](https://github.com/reduxjs/rtk-github-issues-example/compare/Convert_main_issues_display_control_to_Redux~1..reduxjs:Convert_main_issues_display_control_to_Redux)

We need to make three groups of changes to the `App` component:

- The `useState` declarations need to be removed
- The corresponding state values need to be read from the Redux store
- We need to dispatch Redux actions as the user interacts with the component

Traditionally, the last two aspects are handled via the [React-Redux `connect` API](https://react-redux.js.org/api/connect). You write a `mapState` function to retrieve the data and a `mapDispatch` function to hold the action creators, pass those to `connect`, get everything as props, and then call `this.props.setCurrentPage()` to dispatch that action type.

However, [React-Redux now has a hooks API](https://react-redux.js.org/api/hooks), which allows us to interact with the store more directly. `useSelector` lets us read data from the store and subscribe to updates, and `useDispatch` gives us a reference to the store's `dispatch` method. We'll use these for the rest of this tutorial.

First, we'll import the necessary functions, plus the `RootState` type we declared earlier, and remove the hardcoded default org and repo strings.
**app/App.tsx**
```diff
import React, { useState } from 'react'
+import { useSelector, useDispatch } from 'react-redux'
+import { RootState } from './rootReducer'
import { RepoSearchForm } from 'features/repoSearch/RepoSearchForm'
import { IssuesListPage } from 'features/issuesList/IssuesListPage'
import { IssueDetailsPage } from 'features/issueDetails/IssueDetailsPage'
-const ORG = 'rails'
-const REPO = 'rails'
+import {
+ displayRepo,
+ setCurrentDisplayType,
+ setCurrentPage
+} from 'features/issuesDisplay/issuesDisplaySlice'
import './App.css'
```
Next, at the top of `App`, we remove the old `useState` hooks and replace them with calls to `useDispatch` and `useSelector`:
```diff
const App: React.FC = () => {
- const [org, setOrg] = useState(ORG)
- const [repo, setRepo] = useState(REPO)
- const [page, setPage] = useState(1)
- const [currentDisplay, setCurrentDisplay] = useState<CurrentDisplay>({
- type: 'issues'
- })
+ const dispatch = useDispatch()
+ const { org, repo, displayType, page, issueId } = useSelector(
+ (state: RootState) => state.issuesDisplay
+ )
```
"selector"함수를`useSelector`에 전달합니다.이 함수는 Redux 스토어 상태를 매개 변수로 받아들이고 일부 결과를 반환하는 함수입니다. 우리는`state` 인자의 유형이 루트 감속기에서 정의한`RootState` 유형임을 선언하므로 TS는`state` 안에 어떤 필드가 있는지 알 수 있습니다. `state.issuesDisplay` 슬라이스를 하나의 조각으로 검색하고 결과 객체를 구성 요소 내부의 여러 변수로 분해 할 수 있습니다.
이제 이전과 같이 컴포넌트 내부에 거의 동일한 데이터 변수가 있습니다.`useState` 후크 대신 Redux 스토어에서 온 것입니다.
마지막 단계는`useState` setter를 호출하는 대신 사용자가 무언가를 할 때마다 Redux 액션을 전달하는 것입니다.
```diff
const setOrgAndRepo = (org: string, repo: string) => {
- setOrg(org)
- setRepo(repo)
+ dispatch(displayRepo({ org, repo }))
}
const setJumpToPage = (page: number) => {
- setPage(page)
+ dispatch(setCurrentPage(page))
}
const showIssuesList = () => {
- setCurrentDisplay({ type: 'issues' })
+ dispatch(setCurrentDisplayType({ displayType: 'issues' }))
}
const showIssueComments = (issueId: number) => {
- setCurrentDisplay({ type: 'comments', issueId })
+ dispatch(setCurrentDisplayType({ displayType: 'comments', issueId }))
}
```
Unlike typical `connect` + `mapDispatch` usage, here we call `dispatch()` directly, calling the action creator with the correct `payload` value and passing the resulting action to `dispatch`.

Let's see if this works!
<iframe src="https://codesandbox.io/embed/rtk-github-issues-example-02-issues-display-tdx2w?fontsize=14&hidenavigation=1&module=%2Fsrc%2Fapp%2FApp.tsx&theme=dark&view=preview"
style={{ width: '100%', height: '500px', border: 0, borderRadius: '4px', overflow: 'hidden' }}
title="rtk-github-issues-example-02-issues-display"
allow="geolocation; microphone; camera; midi; vr; accelerometer; gyroscope; payment; ambient-light-sensor; encrypted-media; usb"
sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"
></iframe>
"이건 이전 예제와 똑같이 보이고 동작한다"고 생각한다면... 훌륭합니다! 즉, 지금까지 논리의 첫 번째 비트를 Redux로 올바르게 변환했음을 의미합니다. Redux 로직이 실행 중인지 확인하려면 "새 창에서 열기"버튼을 클릭하고 Redux DevTools Extension에서 상점을 검사하십시오.
## 이슈 목록 페이지 변환
다음 작업은`<IssuesListPage>`구성 요소를 Redux를 통해 문제를 가져오고 저장하도록 변환하는 것입니다. 현재`<IssuesListPage>`는 가져온 이슈를 포함하여 모든 데이터를 `useState`hooks에 저장하고 있습니다. ʻuseEffect` 후크에서 AJAX 호출을 수행하여 문제를 가져옵니다.
처음에 언급했듯이 실제로 이것에는 잘못된 것이 없습니다! React 구성 요소가 자체 데이터를 가져와 저장하는 것은 완전히 괜찮습니다. 그러나이 튜토리얼의 목적을 위해 Redux 변환 프로세스가 어떻게 보이는지보고 싶습니다.
### 문제 목록 구성 요소 검토
다음은`<IssuesListPage>`의 초기 청크입니다.
```ts
export const IssuesListPage = ({
org,
repo,
page = 1,
setJumpToPage,
showIssueComments
}: ILProps) => {
const [issuesResult, setIssues] = useState<IssuesResult>({
pageLinks: null,
pageCount: 1,
issues: []
})
const [numIssues, setNumIssues] = useState<number>(-1)
const [isLoading, setIsLoading] = useState<boolean>(false)
const [issuesError, setIssuesError] = useState<Error | null>(null)
const { issues, pageCount } = issuesResult
useEffect(() => {
async function fetchEverything() {
async function fetchIssues() {
const issuesResult = await getIssues(org, repo, page)
setIssues(issuesResult)
}
async function fetchIssueCount() {
const repoDetails = await getRepoDetails(org, repo)
setNumIssues(repoDetails.open_issues_count)
}
try {
await Promise.all([fetchIssues(), fetchIssueCount()])
setIssuesError(null)
} catch (err) {
console.error(err)
setIssuesError(err)
} finally {
setIsLoading(false)
}
}
setIsLoading(true)
fetchEverything()
}, [org, repo, page])
// omit rendering
}
```
The `useEffect` callback defines an outer `async function fetchEverything()` and calls it immediately. That's because we can't declare the `useEffect` callback itself to be async. React expects that the return value of a `useEffect` callback will be a cleanup function. Since all async functions return a `Promise` automatically, React would see that `Promise` instead, and that would prevent React from actually cleaning up correctly.

Inside, we define two more async functions to fetch the issues and the open issues count, and call them both. Then we wait for both of them to resolve successfully. (There are other ways this logic could have been organized, but this was sufficient for the example.)
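As a quick illustrative sketch, the difference between the broken and working patterns looks like this (the function name is a placeholder):

```ts
// Broken: the effect returns a Promise instead of a cleanup function
// useEffect(async () => { await doFetch() }, [])

// Working: define an inner async function and invoke it immediately
useEffect(() => {
  async function doFetch() {
    // ... await some API calls here ...
  }
  doFetch()
}, [])
```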
### Thinking in Thunks

#### What is a "Thunk"?

The Redux core (i.e. `createStore`) is completely synchronous. When you call `store.dispatch()`, the store runs the root reducer, saves the return value, runs the subscriber callbacks, and returns, with no pause. By default, any asynchronicity has to happen outside of the store.

But what if you want to have async logic interact with the store by dispatching actions or checking the current store state? That's where [Redux middleware](https://redux.js.org/advanced/middleware) come in. They extend the store, and allow you to:

- Execute extra logic when any action is dispatched (such as logging the action and state)
- Pause, modify, delay, replace, or halt dispatched actions
- Write extra code that has access to `dispatch` and `getState`
- Teach `dispatch` how to accept other values besides plain action objects, such as functions and promises, by intercepting them and dispatching real action objects instead

The most common Redux middleware is [`redux-thunk`](https://github.com/reduxjs/redux-thunk). The word "thunk" means "a function that delays a calculation until later". In our case, adding the thunk middleware to our Redux store lets us pass functions directly to `store.dispatch()`. The thunk middleware will see the function, prevent it from actually reaching the "real" store, and call the function with `dispatch` and `getState` as arguments. So, a "thunk function" looks like this:
```js
function exampleThunkFunction(dispatch, getState) {
// do something useful with dispatching or the store state here
}
// normally an error, but okay if the thunk middleware is added
store.dispatch(exampleThunkFunction)
```
thunk 함수 내에서 원하는 코드를 작성할 수 있습니다. 가장 일반적인 사용법은 AJAX 호출을 통해 일부 데이터를 가져오고 해당 데이터를 Redux 저장소에로드하는 작업을 보내는 것입니다. ʻasync / await` 구문을 사용하면 AJAX 호출을 수행하는 썽크를 더 쉽게 작성할 수 있습니다.
일반적으로 우리는 코드에 액션 객체를 직접 작성하지 않습니다. 액션 생성 함수를 사용하여 만들고`dispatch (addTodo ())`처럼 사용합니다. 같은 방식으로, 우리는 일반적으로 다음과 같이 썽크 함수를 반환하는 "thunk action creator"함수를 작성합니다.
```js
function exampleThunk() {
return function exampleThunkFunction(dispatch, getState) {
// do something useful with dispatching or the store state here
}
}
// normally an error, but okay if the thunk middleware is added
store.dispatch(exampleThunk())
```
#### Why Use Thunks?

You may be wondering what the point of all this is. There are a few reasons to use thunks:

- Thunks allow us to write reusable logic that interacts with _a_ Redux store, without needing to reference a specific store instance.
- Thunks enable us to move more complex logic outside of our components
  - From a component's point of view, it doesn't care whether it's dispatching a plain action or kicking off some async logic - it just calls `dispatch(doSomething())` and moves on.
- Thunks can return values like promises, allowing logic inside the component to wait for something else to finish.

For further explanations, see [the `redux-thunk` docs](https://github.com/reduxjs/redux-thunk#why-do-i-need-this).

There are many other kinds of Redux middleware that add async capabilities. The most popular are [`redux-saga`](https://redux-saga.js.org/), which uses generator functions, and `redux-observable`, which uses RxJS observables (see [this FAQ entry on choosing between thunks, sagas, and observables](https://redux.js.org/faq/actions#what-async-middleware-should-i-use-how-do-you-decide-between-thunks-sagas-observables-or-something-else)).

But while sagas and observables are useful, most apps don't need the power and capabilities they provide. So, **thunks are the default recommended approach for writing async logic with Redux**.

#### Writing Thunks in Redux Toolkit

Writing thunk functions requires that the `redux-thunk` middleware be added to the store as part of the setup process. Redux Toolkit's `configureStore` function does this automatically - [`thunk` is one of the default middleware](../api/getDefaultMiddleware.md).

However, Redux Toolkit does not currently provide any special functions or syntax for writing thunk functions. In particular, they cannot be defined as part of a `createSlice()` call. You have to write them separately from the reducer logic.

In a typical Redux app, thunk action creators are usually defined in an "actions" file, alongside the plain action creators. Thunks typically dispatch plain actions, such as `dispatch(dataLoaded(response.data))`.

Because we don't have separate "actions" files, it makes sense to write these thunks directly in our "slice" files. That way, they have access to the plain action creators from the slice, and it's easy to find where the thunk function lives.

### Logic for Fetching Github Repo Details

#### Adding a Reusable Thunk Function Type

Since the thunk middleware is already set up, we don't have to do any work there. However, the TypeScript types for thunks are kind of long and confusing, and we'd normally have to repeat the same type declaration for every thunk function we write.

Before we go any further, let's add a type declaration we can reuse instead.
> - [Add AppThunk type](https://github.com/reduxjs/rtk-github-issues-example/compare/Add_AppThunk_type~1..reduxjs:Add_AppThunk_type)
**app/store.ts**
```diff
-import { configureStore } from '@reduxjs/toolkit'
+import { configureStore, Action } from '@reduxjs/toolkit'
+import { ThunkAction } from 'redux-thunk'
-import rootReducer from './rootReducer'
+import rootReducer, { RootState } from './rootReducer'
export type AppDispatch = typeof store.dispatch
+export type AppThunk = ThunkAction<void, RootState, unknown, Action<string>>
```
The `AppThunk` type declares that the "action" we're using is specifically a thunk function. The thunk is customized with some additional type parameters:

1. Return value: the thunk doesn't return anything
2. State type for `getState`: returns our `RootState` type
3. "Extra argument": the thunk middleware can be customized to pass in an extra value, but we aren't doing that in this app
4. Action types accepted by `dispatch`: any action whose `type` is a string

There are many cases where you would want different type settings here, but these are probably the most common settings. This way, we can avoid repeating the same type declaration every time we write a thunk.
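For example, if we wanted components to be able to `await` a thunk's completion, we could sketch a promise-returning variant. The `AppThunkPromise` name here is hypothetical and not part of the example repo:

```ts
// Hypothetical variant: thunks typed with this return a promise,
// so callers can write `await dispatch(someThunk())`
export type AppThunkPromise<T = void> = ThunkAction<
  Promise<T>,
  RootState,
  unknown,
  Action<string>
>
```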
#### Adding the Repo Details Slice

Now that we have that type, we can write a slice of state for fetching details on a repo.
> - [Add a slice for storing repo details](https://github.com/reduxjs/rtk-github-issues-example/compare/Add_a_slice_for_storing_repo_details~1..reduxjs:Add_a_slice_for_storing_repo_details)
**features/repoSearch/repoDetailsSlice.ts**
```ts
import { createSlice, PayloadAction } from '@reduxjs/toolkit'
import { AppThunk } from 'app/store'
import { RepoDetails, getRepoDetails } from 'api/githubAPI'
interface RepoDetailsState {
openIssuesCount: number
error: string | null
}
const initialState: RepoDetailsState = {
openIssuesCount: -1,
error: null
}
const repoDetails = createSlice({
name: 'repoDetails',
initialState,
reducers: {
getRepoDetailsSuccess(state, action: PayloadAction<RepoDetails>) {
state.openIssuesCount = action.payload.open_issues_count
state.error = null
},
getRepoDetailsFailed(state, action: PayloadAction<string>) {
state.openIssuesCount = -1
state.error = action.payload
}
}
})
export const {
getRepoDetailsSuccess,
getRepoDetailsFailed
} = repoDetails.actions
export default repoDetails.reducer
export const fetchIssuesCount = (
org: string,
repo: string
): AppThunk => async dispatch => {
try {
const repoDetails = await getRepoDetails(org, repo)
dispatch(getRepoDetailsSuccess(repoDetails))
} catch (err) {
dispatch(getRepoDetailsFailed(err.toString()))
}
}
```
The first part looks straightforward. We declare the slice's state shape and initial state value, write a slice with reducers that store the open issues count or an error string, then export the action creators and the reducer.

Down at the bottom, we have our first data fetching thunk. The important things to notice here are:

- **The thunk is defined separately from the slice**, since RTK currently has no special syntax for defining thunks as part of a slice.
- **We declare the thunk action creator as an arrow function, and use the `AppThunk` type we just created.** You can use either arrow functions or the `function` keyword to write thunk functions and thunk action creators, so we could have also written this as `function fetchIssuesCount(): AppThunk` instead.
- **We use the `async/await` syntax for the thunk function itself.** Again, this isn't required, but `async/await` usually results in simpler code than nested Promise `.then()` chains.
- **Inside the thunk, we dispatch the plain action creators that were generated by the `createSlice` call.**

Though not shown here, we also add the slice reducer to our root reducer.

#### Async Error Handling Logic in Thunks

There's one potential flaw with the `fetchIssuesCount()` thunk as written. The `try/catch` block will currently catch any errors thrown by `getRepoDetails()` (such as an actual failed AJAX call), but it will also catch any errors that occur inside the dispatch of `getRepoDetailsSuccess()`. In both cases, it will dispatch `getRepoDetailsFailed()`. This may not be the desired way to handle errors, as it might show a misleading reason for what the actual error was.

There are some possible ways to restructure the code to avoid this problem. First, the `await` could be switched to a standard promise chain, with separate callbacks passed in for the success and failure cases:
```js
getRepoDetails(org, repo).then(
// success callback
repoDetails => dispatch(getRepoDetailsSuccess(repoDetails)),
// error callback
err => dispatch(getRepoDetailsFailed(err.toString()))
)
```
Or, the thunk could be rewritten to only dispatch if no errors were caught:
```ts
let repoDetails
try {
repoDetails = await getRepoDetails(org, repo)
} catch (err) {
dispatch(getRepoDetailsFailed(err.toString()))
return
}
dispatch(getRepoDetailsSuccess(repoDetails))
```
To keep this tutorial simple, we'll leave the logic as-is for the rest of the tutorial.

### Fetching the Repo Details in the Issues List

Now that the repo details slice exists, we can use it in the `<IssuesListPage>` component.

> - [Update IssuesListPage to fetch repo details via Redux](https://github.com/reduxjs/rtk-github-issues-example/compare/Update_IssuesListPage_to_fetch_repo_details_via_Redux~1..reduxjs:Update_IssuesListPage_to_fetch_repo_details_via_Redux)
**features/issuesList/IssuesListPage.tsx**
```diff
import React, { useState, useEffect } from 'react'
+import { useSelector, useDispatch } from 'react-redux'
-import { getIssues, getRepoDetails, IssuesResult } from 'api/githubAPI'
+import { getIssues, IssuesResult } from 'api/githubAPI'
+import { fetchIssuesCount } from 'features/repoSearch/repoDetailsSlice'
+import { RootState } from 'app/rootReducer'
// omit code
export const IssuesListPage = ({
org,
repo,
page = 1,
setJumpToPage,
showIssueComments
}: ILProps) => {
+ const dispatch = useDispatch()
const [issuesResult, setIssues] = useState<IssuesResult>({
pageLinks: null,
pageCount: 1,
issues: []
})
- const [numIssues, setNumIssues] = useState<number>(-1)
const [isLoading, setIsLoading] = useState<boolean>(false)
const [issuesError, setIssuesError] = useState<Error | null>(null)
+ const openIssueCount = useSelector(
+ (state: RootState) => state.repoDetails.openIssuesCount
+ )
useEffect(() => {
async function fetchEverything() {
async function fetchIssues() {
const issuesResult = await getIssues(org, repo, page)
setIssues(issuesResult)
}
- async function fetchIssueCount() {
- const repoDetails = await getRepoDetails(org, repo)
- setNumIssues(repoDetails.open_issues_count)
- }
try {
- await Promise.all([fetchIssues(), fetchIssueCount()])
+ await Promise.all([
+ fetchIssues(),
+ dispatch(fetchIssuesCount(org, repo))
+ ])
setIssuesError(null)
} catch (err) {
console.error(err)
setIssuesError(err)
} finally {
setIsLoading(false)
}
}
setIsLoading(true)
fetchEverything()
- }, [org, repo, page])
+ }, [org, repo, page, dispatch])
```
In `<IssuesListPage>`, we import the new `fetchIssuesCount` thunk, and rewrite the component to read the open issues count value from the Redux store.

Inside our `useEffect`, we delete the `fetchIssueCount` function, and dispatch `fetchIssuesCount` instead.

### Logic for Fetching Issues for a Repo

Next up, we need to replace the logic for fetching the list of open issues.

> - [Add a slice for tracking issues state](https://github.com/reduxjs/rtk-github-issues-example/compare/Add_a_slice_for_tracking_issues_state~1..reduxjs:Add_a_slice_for_tracking_issues_state)
**features/issuesList/issuesSlice.ts**
```ts
import { createSlice, PayloadAction } from '@reduxjs/toolkit'
import { Links } from 'parse-link-header'
import { Issue, IssuesResult, getIssue, getIssues } from 'api/githubAPI'
import { AppThunk } from 'app/store'
interface IssuesState {
issuesByNumber: Record<number, Issue>
currentPageIssues: number[]
pageCount: number
pageLinks: Links | null
isLoading: boolean
error: string | null
}
const issuesInitialState: IssuesState = {
issuesByNumber: {},
currentPageIssues: [],
pageCount: 0,
pageLinks: {},
isLoading: false,
error: null
}
function startLoading(state: IssuesState) {
state.isLoading = true
}
function loadingFailed(state: IssuesState, action: PayloadAction<string>) {
state.isLoading = false
state.error = action.payload
}
const issues = createSlice({
name: 'issues',
initialState: issuesInitialState,
reducers: {
getIssueStart: startLoading,
getIssuesStart: startLoading,
getIssueSuccess(state, { payload }: PayloadAction<Issue>) {
const { number } = payload
state.issuesByNumber[number] = payload
state.isLoading = false
state.error = null
},
getIssuesSuccess(state, { payload }: PayloadAction<IssuesResult>) {
const { pageCount, issues, pageLinks } = payload
state.pageCount = pageCount
state.pageLinks = pageLinks
state.isLoading = false
state.error = null
issues.forEach(issue => {
state.issuesByNumber[issue.number] = issue
})
state.currentPageIssues = issues.map(issue => issue.number)
},
getIssueFailure: loadingFailed,
getIssuesFailure: loadingFailed
}
})
export const {
getIssuesStart,
getIssuesSuccess,
getIssueStart,
getIssueSuccess,
getIssueFailure,
getIssuesFailure
} = issues.actions
export default issues.reducer
export const fetchIssues = (
org: string,
repo: string,
page?: number
): AppThunk => async dispatch => {
try {
dispatch(getIssuesStart())
const issues = await getIssues(org, repo, page)
dispatch(getIssuesSuccess(issues))
} catch (err) {
dispatch(getIssuesFailure(err.toString()))
}
}
export const fetchIssue = (
org: string,
repo: string,
number: number
): AppThunk => async dispatch => {
try {
dispatch(getIssueStart())
const issue = await getIssue(org, repo, number)
dispatch(getIssueSuccess(issue))
} catch (err) {
dispatch(getIssueFailure(err.toString()))
}
}
```
This slice is a bit longer, but it's the same basic approach as before: write the slice with reducers that handle API call results, then write thunks that do the fetching and dispatch actions with those results. The new and interesting bits in this slice are:

- The "fetch start" and "fetch failed" reducer logic is the same for both the single-issue and multiple-issues fetch cases. So, we write those functions once outside the slice, then reuse them multiple times inside the `reducers` object under different names.
- The Github API returns an array of issue entries, but we want to [store the data in a "normalized" structure to make it easy to look up an issue by its number](https://redux.js.org/recipes/structuring-reducers/normalizing-state-shape). In this case, we declare a `Record<number, Issue>` to use a plain object as a lookup table (see the sketch below).
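To show why the normalized shape is convenient, here's an illustrative sketch of how lookups work against it (the issue number is a placeholder):

```ts
// O(1) lookup of a single issue by its number
const issue = state.issuesByNumber[1234]

// The ordered list for the current page is rebuilt from the stored IDs
const currentIssues = state.currentPageIssues.map(
  issueNumber => state.issuesByNumber[issueNumber]
)
```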
### Fetching Issues in the Issues List

Now we can finish converting the `<IssuesListPage>` component by swapping out the issues fetching logic.

> - [Update IssuesListPage to fetch issues data via Redux](https://github.com/reduxjs/rtk-github-issues-example/compare/Update_IssuesListPage_to_fetch_issues_data_via_Redux~1..reduxjs:Update_IssuesListPage_to_fetch_issues_data_via_Redux)

Let's look at the changes.
**features/issuesList/IssuesListPage.tsx**
```diff
-import React, { useState, useEffect } from 'react'
+import React, { useEffect } from 'react'
import { useSelector, useDispatch } from 'react-redux'
-import { getIssues, IssuesResult } from 'api/githubAPI'
import { fetchIssuesCount } from 'features/repoSearch/repoDetailsSlice'
import { RootState } from 'app/rootReducer'
import { IssuesPageHeader } from './IssuesPageHeader'
import { IssuesList } from './IssuesList'
import { IssuePagination, OnPageChangeCallback } from './IssuePagination'
+import { fetchIssues } from './issuesSlice'
// omit code
const dispatch = useDispatch()
- const [issuesResult, setIssues] = useState<IssuesResult>({
- pageLinks: null,
- pageCount: 1,
- issues: []
- })
- const [isLoading, setIsLoading] = useState<boolean>(false)
- const [issuesError, setIssuesError] = useState<Error | null>(null)
+ const {
+ currentPageIssues,
+ isLoading,
+ error: issuesError,
+ issuesByNumber,
+ pageCount
+ } = useSelector((state: RootState) => state.issues)
const openIssueCount = useSelector(
(state: RootState) => state.repoDetails.openIssuesCount
)
- const { issues, pageCount } = issuesResult
+ const issues = currentPageIssues.map(
+ issueNumber => issuesByNumber[issueNumber]
+ )
useEffect(() => {
- async function fetchEverything() {
- async function fetchIssues() {
- const issuesResult = await getIssues(org, repo, page)
- setIssues(issuesResult)
- }
-
- try {
- await Promise.all([
- fetchIssues(),
- dispatch(fetchIssuesCount(org, repo))
- ])
- setIssuesError(null)
- } catch (err) {
- console.error(err)
- setIssuesError(err)
- } finally {
- setIsLoading(false)
- }
- }
-
- setIsLoading(true)
-
- fetchEverything()
+ dispatch(fetchIssues(org, repo, page))
+ dispatch(fetchIssuesCount(org, repo))
}, [org, repo, page, dispatch])
```
We remove the remaining `useState` hooks from `<IssuesListPage>`, add another `useSelector` to retrieve the actual issues data from the Redux store, and construct the list of issues to render by mapping over the "current page issue IDs" array and looking up each issue object by its ID.

In the `useEffect`, we delete the remaining data fetching logic that was directly in the component, and dispatch both data fetching thunks instead.

This simplifies the logic in the component, but it didn't remove the work being done - it just moved it elsewhere. Again, neither approach is "right" or "wrong" - it's just a question of where you want the data and logic to live, and which approach is more maintainable for your app and situation.

## Converting the Issue Details Page

The last major chunk of work left in the conversion is the `<IssueDetailsPage>` component. Let's look at what it does.

### Reviewing the Issue Details Component

Here's the current first half of `<IssueDetailsPage>`, containing its state and data fetching:
```ts
export const IssueDetailsPage = ({
org,
repo,
issueId,
showIssuesList
}: IDProps) => {
const [issue, setIssue] = useState<Issue | null>(null)
const [comments, setComments] = useState<Comment[]>([])
const [commentsError, setCommentsError] = useState<Error | null>(null)
useEffect(() => {
async function fetchIssue() {
try {
setCommentsError(null)
const issue = await getIssue(org, repo, issueId)
setIssue(issue)
} catch (err) {
setCommentsError(err)
}
}
fetchIssue()
}, [org, repo, issueId])
useEffect(() => {
async function fetchComments() {
if (issue !== null) {
const comments = await getComments(issue.comments_url)
setComments(comments)
}
}
fetchComments()
}, [issue])
// omit rendering
}
```
It's very similar to `<IssuesListPage>`. We store the currently displayed `Issue`, the fetched comments, and a potential error. There are `useEffect` hooks that fetch the current issue by its ID, and fetch the comments whenever the issue changes.

### Fetching the Current Issue

Conveniently, we already have the Redux logic for fetching a single issue - we wrote it earlier as part of `issuesSlice.ts`. So, we can immediately use that right here in `IssueDetailsPage`.

> - [Update IssueDetailsPage to fetch issue data via Redux](https://github.com/reduxjs/rtk-github-issues-example/compare/Update_IssueDetailsPage_to_fetch_issue_data_via_Redux~1..reduxjs:Update_IssueDetailsPage_to_fetch_issue_data_via_Redux)
**features/issueDetails/IssueDetailsPage.tsx**
```diff
import React, { useState, useEffect } from 'react'
+import { useSelector, useDispatch } from 'react-redux'
import ReactMarkdown from 'react-markdown'
import classnames from 'classnames'
import { insertMentionLinks } from 'utils/stringUtils'
-import { getIssue, getComments, Issue, Comment } from 'api/githubAPI'
+import { getComments, Comment } from 'api/githubAPI'
import { IssueLabels } from 'components/IssueLabels'
+import { RootState } from 'app/rootReducer'
+import { fetchIssue } from 'features/issuesList/issuesSlice'
export const IssueDetailsPage = ({
org,
repo,
issueId,
showIssuesList
}: IDProps) => {
- const [issue, setIssue] = useState<Issue | null>(null)
const [comments, setComments] = useState<Comment[]>([])
- const [commentsError, setCommentsError] = useState<Error | null>(null)
+ const [commentsError] = useState<Error | null>(null)
+ const dispatch = useDispatch()
+ const issue = useSelector(
+ (state: RootState) => state.issues.issuesByNumber[issueId]
+ )
useEffect(() => {
- async function fetchIssue() {
- try {
- setCommentsError(null)
- const issue = await getIssue(org, repo, issueId)
- setIssue(issue)
- } catch (err) {
- setCommentsError(err)
- }
- }
- fetchIssue()
+ if (!issue) {
+ dispatch(fetchIssue(org, repo, issueId))
+ }
+ // Since we may have the issue already, ensure we're scrolled to the top
+ window.scrollTo({ top: 0 })
- }, [org, repo, issueId])
+ }, [org, repo, issueId, issue, dispatch])
```
We continue the usual pattern: delete the existing `useState` hooks, pull in `useDispatch` and the necessary state via `useSelector`, and dispatch the `fetchIssue` thunk to fetch the data.

Interestingly, there's actually a bit of a behavior change here. The original React code stored the fetched issues in `<IssuesListPage>`, and `<IssueDetailsPage>` always had to do a separate fetch of its own issue. Because we're now storing issues in the Redux store, most of the time the listed issue should already be cached, and we don't even need to fetch it. Admittedly, we could do something similar with just React - all we'd have to do is pass the issue down from the parent component. Still, having that data in Redux makes the caching easier to do.

(As an interesting side note: the original code always forced the page to jump back to the top, because the issue didn't exist during the first render, so there was no content. If the issue _does_ exist and we render it right away, the page may keep the scroll position from the issues list, so we have to force it to scroll back to the top.)

### Logic for Fetching Comments

We have one more slice left to write - we need to fetch and store the comments for the current issue.

> - [Add a slice for tracking comments data](https://github.com/reduxjs/rtk-github-issues-example/compare/Add_a_slice_for_tracking_comments_data~1..reduxjs:Add_a_slice_for_tracking_comments_data)
**features/issueDetails/commentsSlice.ts**
```ts
import { createSlice, PayloadAction } from '@reduxjs/toolkit'
import { Comment, getComments, Issue } from 'api/githubAPI'
import { AppThunk } from 'app/store'
interface CommentsState {
commentsByIssue: Record<number, Comment[] | undefined>
loading: boolean
error: string | null
}
interface CommentLoaded {
issueId: number
comments: Comment[]
}
const initialState: CommentsState = {
commentsByIssue: {},
loading: false,
error: null
}
const comments = createSlice({
name: 'comments',
initialState,
reducers: {
getCommentsStart(state) {
state.loading = true
state.error = null
},
getCommentsSuccess(state, action: PayloadAction<CommentLoaded>) {
const { comments, issueId } = action.payload
state.commentsByIssue[issueId] = comments
state.loading = false
state.error = null
},
getCommentsFailure(state, action: PayloadAction<string>) {
state.loading = false
state.error = action.payload
}
}
})
export const {
getCommentsStart,
getCommentsSuccess,
getCommentsFailure
} = comments.actions
export default comments.reducer
export const fetchComments = (issue: Issue): AppThunk => async dispatch => {
try {
dispatch(getCommentsStart())
const comments = await getComments(issue.comments_url)
dispatch(getCommentsSuccess({ issueId: issue.number, comments }))
} catch (err) {
dispatch(getCommentsFailure(err))
}
}
```
The slice should look pretty familiar at this point. Our main bit of state is a lookup table of comments keyed by issue ID. After the slice, we add a thunk that fetches the comments for a given issue and dispatches an action to store the resulting array in the slice.

### Fetching the Issue Comments

The last step is to swap out the comments fetching logic in `<IssueDetailsPage>`.

> - [Update IssueDetailsPage to fetch comments via Redux](https://github.com/reduxjs/rtk-github-issues-example/compare/Update_IssueDetailsPage_to_fetch_comments_via_Redux~1..reduxjs:Update_IssueDetailsPage_to_fetch_comments_via_Redux)
**features/issueDetails/IssueDetailsPage.tsx**
```diff
-import React, { useState, useEffect } from 'react'
+import React, { useEffect } from 'react'
-import { useSelector, useDispatch } from 'react-redux'
+import { useSelector, useDispatch, shallowEqual } from 'react-redux'
import ReactMarkdown from 'react-markdown'
import classnames from 'classnames'
import { insertMentionLinks } from 'utils/stringUtils'
-import { getComments, Comment } from 'api/githubAPI'
import { IssueLabels } from 'components/IssueLabels'
import { RootState } from 'app/rootReducer'
import { fetchIssue } from 'features/issuesList/issuesSlice'
import { IssueMeta } from './IssueMeta'
import { IssueComments } from './IssueComments'
+import { fetchComments } from './commentsSlice'
export const IssueDetailsPage = ({
org,
repo,
issueId,
showIssuesList
}: IDProps) => {
- const [comments, setComments] = useState<Comment[]>([])
- const [commentsError] = useState<Error | null>(null)
-
const dispatch = useDispatch()
const issue = useSelector(
(state: RootState) => state.issues.issuesByNumber[issueId]
)
+ const { commentsLoading, commentsError, comments } = useSelector(
+ (state: RootState) => {
+ return {
+ commentsLoading: state.comments.loading,
+ commentsError: state.comments.error,
+ comments: state.comments.commentsByIssue[issueId]
+ }
+ },
+ shallowEqual
+ )
// omit effect
useEffect(() => {
- async function fetchComments() {
- if (issue) {
- const comments = await getComments(issue.comments_url)
- setComments(comments)
- }
- }
- fetchComments()
+ if (issue) {
+ dispatch(fetchComments(issue))
+ }
- }, [issue])
+ }, [issue, dispatch])
```
We add another `useSelector` hook to retrieve the current comments data. In this case, we need three different pieces: the loading flag, a potential error, and the actual comments array for this issue.

However, this causes a performance problem. Every time this selector runs, it returns a new object: `{commentsLoading, commentsError, comments}`. **Unlike `connect`, `useSelector` relies on reference equality by default.** So, returning a new object means this component will re-render after every dispatched action, even if the comments are the same!

There are a few ways to fix this:

- We could write those as separate `useSelector` calls
- We could use a memoized selector, such as Reselect's `createSelector`
- We can use the React-Redux `shallowEqual` function to compare the results, so that the re-render only happens if the object's _contents_ have changed

In this case, we'll add `shallowEqual` as the comparison function for `useSelector`, as shown in the diff above.
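For comparison, here's an illustrative sketch of the first option. Each call returns a primitive or a stable reference, so no equality function is needed:

```ts
const commentsLoading = useSelector(
  (state: RootState) => state.comments.loading
)
const commentsError = useSelector((state: RootState) => state.comments.error)
const comments = useSelector(
  (state: RootState) => state.comments.commentsByIssue[issueId]
)
```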
## Summary

And with that, we're done! The entire Github Issues app should now be fetching its data via thunks, storing the data in Redux, and interacting with the store via React-Redux hooks. We have TypeScript types for our Github API calls, the API types are used for the Redux state slices, and the store state types are used in our React components.

There's more we could do to add further type safety if we wanted (like trying to constrain which possible action types can be passed to `dispatch`), but this gives us a reasonable "80% solution" without too much extra effort.

Hopefully you now have a solid understanding of how Redux Toolkit looks in a real-world application.

Let's wrap things up with one more look at the complete source code and the running app:
<iframe src="https://codesandbox.io/embed/rtk-github-issues-example-03-final-ihttc?fontsize=14&hidenavigation=1&module=%2Fsrc%2Ffeatures%2FissueDetails%2FcommentsSlice.ts&theme=dark&view=editor"
style={{ width: '100%', height: '500px', border: 0, borderRadius: '4px', overflow: 'hidden' }}
title="rtk-github-issues-example03-final"
allow="geolocation; microphone; camera; midi; vr; accelerometer; gyroscope; payment; ambient-light-sensor; encrypted-media; usb"
sandbox="allow-modals allow-forms allow-popups allow-scripts allow-same-origin"
></iframe>
**Now, go out there and build something cool!**
# hello-php Mini Documentation
Work in progress
- [Additional Features](ADDITIONAL_FEATURES.md)
- [Global Variables](GLOBAL_VARIABLES.md)
- [Database Migration Guide](MIGRATION_GUIDE.md)
# Transparency
Name | Agency | Published
---- | ---- | ---------
[Campaign Finance Filers on Record with the New York State Board of Elections: Beginning July 1999](../socrata/p9kb-7ijk.md) | data.ny.gov | 2017-04-10
[Campaign Finance Filings Submitted to the New York State Board of Elections: Beginning 1999](../socrata/55r5-jny4.md) | data.ny.gov | 2017-04-11
[Directory of Public Authorities](../socrata/4vym-q77x.md) | data.ny.gov | 2017-03-21
[Industrial Development Agencies' Project Data](../socrata/9rtk-3fkw.md) | data.ny.gov | 2017-02-07
[Lobbying Clients Disclosures: Beginning 2007](../socrata/8bmh-tuz3.md) | data.ny.gov | 2016-05-02
[Lobbying Clients Reportable Business Relationships with State Officials and Employees: Beginning 2012](../socrata/238s-kr2h.md) | data.ny.gov | 2016-11-30
[Lobbying Clients Sources of Funding for Lobbying Activities: Beginning 2012](../socrata/m8it-6x3c.md) | data.ny.gov | 2015-03-13
[Lobbyist Disbursement of Public Monies Disclosures: Beginning 2008](../socrata/scx8-uayk.md) | data.ny.gov | 2016-05-02
[Lobbyist Reportable Business Relationships with State Officials and Employees: Beginning 2012](../socrata/jtad-7m6s.md) | data.ny.gov | 2016-11-30
[Local Development Corporations Bonds](../socrata/9kfh-uzu3.md) | data.ny.gov | 2016-10-06
[Local Development Corporations Grants Dataset](../socrata/j5ab-5nj2.md) | data.ny.gov | 2016-10-06
[Local Development Corporations Loans](../socrata/vp83-gfyz.md) | data.ny.gov | 2016-10-06
[NYS Attorney Registrations](../socrata/eqw2-r5nb.md) | data.ny.gov | 2017-04-20
[Political Consultant Filings: Beginning 2016](../socrata/tekz-xrvb.md) | data.ny.gov | 2017-04-20
[Procurement Report for Industrial Development Agencies](../socrata/p3p6-xqr5.md) | data.ny.gov | 2016-10-19
[Procurement Report for Local Authorities](../socrata/8w5p-k45m.md) | data.ny.gov | 2016-10-17
[Procurement Report for Local Development Corporations](../socrata/d84c-dk28.md) | data.ny.gov | 2016-10-17
[Procurement Report for State Authorities](../socrata/ehig-g5x3.md) | data.ny.gov | 2016-10-17
[Public Determinations of the NYS Commission on Judicial Conduct: Beginning 1977](../socrata/gnpf-e4p2.md) | data.ny.gov | 2016-01-07
[Real Property Transactions of Industrial Development Agencies](../socrata/dixy-n3q7.md) | data.ny.gov | 2016-10-20
[Real Property Transactions of Local Authorities](../socrata/kmkz-x3aa.md) | data.ny.gov | 2016-10-20
[Real Property Transactions of Local Development Corporations](../socrata/ajgp-mddq.md) | data.ny.gov | 2016-10-20
[Real Property Transactions of State Authorities](../socrata/t7uh-5ac8.md) | data.ny.gov | 2016-10-20
[Registered Lobbyist Disclosures: Beginning 2007](../socrata/djsm-9cw7.md) | data.ny.gov | 2016-05-02
[Registered Public Corporations Disclosures: Beginning 2007](../socrata/kn2d-a3m3.md) | data.ny.gov | 2016-05-02
[Salary Information for Industrial Development Agencies](../socrata/9yx9-29p4.md) | data.ny.gov | 2016-10-12
[Salary Information for Local Authorities](../socrata/fx93-cifz.md) | data.ny.gov | 2016-10-12
[Salary Information for Local Development Corporations](../socrata/wryv-rizw.md) | data.ny.gov | 2016-10-12
[Salary Information for State Authorities](../socrata/unag-2p27.md) | data.ny.gov | 2016-10-12
[Schedule of Debt for Industrial Development Agencies](../socrata/dtk8-znku.md) | data.ny.gov | 2016-11-04
[Schedule of Debt for Local Authorities](../socrata/vfju-zm9q.md) | data.ny.gov | 2016-11-04
[Schedule of Debt for Local Development Corporations](../socrata/utc6-v4cn.md) | data.ny.gov | 2016-11-04
[Schedule of Debt for State Authorities](../socrata/f7ju-wpvk.md) | data.ny.gov | 2016-11-04
[State Inspector General Public Reports and Press Releases: Beginning 2006](../socrata/ptx6-hh79.md) | data.ny.gov | 2016-04-25
[Summary Financial Information for Industrial Development Agencies](../socrata/2jrz-w65a.md) | data.ny.gov | 2016-10-28
[Summary Financial Information for Local Authorities](../socrata/cgg6-2ah8.md) | data.ny.gov | 2016-10-28
[Summary Financial Information for Local Development Corporations](../socrata/wgry-y5zd.md) | data.ny.gov | 2016-10-28
[Summary Financial Information for State Authorities](../socrata/y6wc-tvay.md) | data.ny.gov | 2016-10-28
---
description: Writes a message to the Concurrency Visualizer trace file.
title: 'marker_series::write_message Method | Microsoft Docs'
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- cvmarkersobj/Concurrency, diagnostic::marker_series::write_message
helpviewer_keywords:
- Concurrency, diagnostic::marker_series::write_message method
ms.assetid: 546121bc-67e0-4a5a-a456-12bd78fd6de2
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.technology: vs-ide-debug
ms.workload:
- multiple
ms.openlocfilehash: 483ee104f2141888d5f4468278f7b1707fe6b7bb
ms.sourcegitcommit: b12a38744db371d2894769ecf305585f9577792f
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 09/13/2021
ms.locfileid: "126735694"
---
# <a name="marker_serieswrite_message-method"></a>marker_series::write_message Method
Writes a message to the Concurrency Visualizer trace file.
## <a name="syntax"></a>Syntax
```cpp
void write_message(
_In_ LPCTSTR _Format,
...
);
void write_message(
marker_importance _Importance,
_In_ LPCTSTR _Format,
...
);
void write_message(
int _Category,
_In_ LPCTSTR _Format,
...
);
void write_message(
marker_importance _Importance,
int _Category,
_In_ LPCTSTR _Format,
...
);
```
#### <a name="parameters"></a>Parameters
`_Format` A composite format string that contains text intermixed with zero or more format items, which correspond to objects in the parameter list.
`_Importance` The importance level.
`_Category` The category.
## <a name="requirements"></a>Requirements
**Header:** cvmarkersobj.h
**Namespace:** Concurrency::diagnostic
## <a name="see-also"></a>See also
- [marker_series Class](../profiling/marker-series-class.md)
| 22.515152 | 81 | 0.753028 | eng_Latn | 0.159697 |
53774884eabd31b62b42b9455c51dedc9973a689 | 17,769 | md | Markdown | src/content/posts/2020-07-30-a-guide-to-virtual-workshops.md | raveling/aaronpowell.github.io | f71f50e72378dc3dc8698261c3d6de805c5e5696 | [
"Apache-2.0"
] | 10 | 2015-02-28T14:14:10.000Z | 2021-03-28T02:09:07.000Z | src/content/posts/2020-07-30-a-guide-to-virtual-workshops.md | raveling/aaronpowell.github.io | f71f50e72378dc3dc8698261c3d6de805c5e5696 | [
"Apache-2.0"
] | 13 | 2016-11-14T06:17:56.000Z | 2020-03-09T03:24:10.000Z | src/content/posts/2020-07-30-a-guide-to-virtual-workshops.md | raveling/aaronpowell.github.io | f71f50e72378dc3dc8698261c3d6de805c5e5696 | [
"Apache-2.0"
] | 15 | 2016-10-10T20:59:13.000Z | 2022-01-19T03:13:41.000Z | +++
title = "A Guide to Virtual Workshops"
date = 2020-07-30T04:55:27+10:00
description = "I recently ran my first virtual workshop and wanted to share how I did it and some thoughts I had on doing it"
draft = false
tags = ["public-speaking", "conference"]
cover_image = "/images/a-guide-to-virtual-workshops/003.png"
+++
Since the COVID-19 pandemic started I've done a number of virtual events (I shared my thoughts on being successful with them [last week]({{<ref "/posts/2020-07-20-online-events-experience-from-three-perspectives.md">}})) but earlier this week I did something new, I ran a two-day workshop as part of the [NDC Melbourne](https://ndcmelbourne.com) virtual event programming.
{{<tweet 1287549519786151936>}}
The workshop was the [React for Beginners workshop]({{<ref "/talks/react-workshop.md">}}) that I've been running as part of NDC Sydney for the past few years (and originally created with [Jake Ginnivan](https://twitter.com/JakeGinnivan)), but normally it's done in person, so I wanted to do a write-up on how I ran it virtually, what worked and where I feel there's room for improvement.
## Considerations for Online Workshops
When I was preparing to deliver the workshop, I started to consider what would make the experience as seamless as possible for attendees. Since I'm pretty familiar with how to deliver the workshop in person, I wanted to replicate as much of a normal experience as I could, even though I wasn't able to walk about the room and talk to people.
The first thing to think about was how I would engage with the attendees. I mentioned in my [online events post]({{<ref "/posts/2020-07-20-online-events-experience-from-three-perspectives.md">}}) that you can run an event in one of two formats, conference call or broadcast. Since a workshop should be an interactive experience, a conference call format is the optimal way to go; I can see the people (if they turn on their camera) and we can talk to each other. NDC uses WebEx as the platform for this (I did have an option for Zoom, but WebEx is their preference), and it does what you need it to do, but _personally_ I would avoid WebEx as a platform as I found it clunky to use and the desktop app error prone (I ended up running it in the browser, which was more stable).
The next consideration was how to make it optimal for people to see the slides and code as I shared them. Having been on both sides of virtual tech presentations, I know what it's like when the text is hard to read because the presenter forgot to bump the font size, but then you also need to consider latency and visual artefacts on the stream: will the text be legible? So, you need to think about the best way to ensure everyone can follow what you're presenting.
Finally, since the workshop is hands on, attendees build something throughout, I needed to think through what options we'd have to replace the normal experience of an instructor coming and sitting next to them to pair on a problem.
## Setting Up a Studio
I've been doing some video stuff recently (including streaming every Friday on [my Twitch channel](https://twitch.tv/NumberOneAaron)) so I've been learning about how to use [OBS Studio](https://obsproject.com/). OBS (Open Broadcast Software) is an open source application for creating video streams and gives you the ability to take different inputs, combine them together, and produce a single output feed. It can be a bit daunting to get started with but you'll find plenty of videos on YouTube ([here's a good starting point](https://www.youtube.com/watch?v=EuSUPpoi0Vs)) and once you get the basics down, it's really fun to see how you can set everything up and make yourself look professional.
### Camera
For the workshop, I was presenting from my home office which looks like this.

This room used to be our baby room, before our kids started to share, but it still contains some of the trappings of a baby room, like the nappy change table, one of their wardrobes, and generally, piles of junk. This isn't really what I wanted everyone to have to deal with in my background (and I don't need it for calls that I'm on or when I'm streaming), but I don't have the facilities to set up a green screen behind me, since there's a door in the way.
Thankfully, there's a solution to that, a **virtual** green screen. Early on in the pandemic one of my colleagues introduced me to [XSplit VCam](https://www.xsplit.com/vcam), which runs as an interception of your webcam feed and allows you to do things like background blur, virtual backgrounds or background removal. It's not perfect, as it's using image detection to work out where a person is in the image and remove everything else, but it's good enough. Using XSplit with a virtual background I now look like this:

You can see the edges of me are fuzzing out, but overall, it's a better picture than the junk background. If you can smooth out what's behind you (I closed the wardrobe and draped a solid-colour towel over the hanging stuff) then it'll become even better. It might not be as good as a proper green screen but it's a lot simpler to use!
### Presenting
When it comes to presenting online, you'll share your screen (or share an app) and everyone sees that in full screen, but the cameras are pushed away to focus on the content. This starts removing the personalisation aspect of the session as you lose the connection to the presenter. Not ideal if we're going to be spending two days together on a call.
To tackle this, I decided to change the presentation format from a screen share to a virtual camera.

Using OBS I created a scene which is made up of three components, my camera feed via XSplit with background removal, a background image for NDC Melbourne and my screen. I layered my camera on top of everything so I'm now sitting in front of the slides (or code) and can talk to the slides just over my shoulder.
I then created another scene for when we were in code which increased the size of the shared screen and decreased the size of me.

With these two scenes I, as the presenter, was clearly visible the whole time making it easier to maintain a connection to the audience, even though I can't see them.
Lastly, this video feed needs to be sent back out over the presentation platform (WebEx in this case), and to do that you'll need a virtual camera plugin for OBS. [Scott Hanselman](https://twitter.com/shanselman) has a [great post on how to set this up](https://www.hanselman.com/blog/TakeRemoteWorkerEducatorWebcamVideoCallsToTheNextLevelWithOBSNDIToolsAndElgatoStreamDeck.aspx) and I went down the route of using NDI to expose the feed from OBS and then NDI's virtual camera to send the feed over the call.
#### Downsides to Virtual Cameras
Mostly, this approach worked really well for us, but there is a downside to using a virtual camera rather than traditional screen sharing: conference call software is designed to put the camera of the person who's speaking in focus. This can be a problem when your camera is **also** your presentation medium, since if someone else's audio comes in (they ask a question, or they aren't on mute and make a noise) all of a sudden your camera is defocused and people can't follow along.
My tip here is to have everyone on mute _by default_, so that you are considered the active speaker, or, if your software allows it, get people to pin the camera view of you. You'd be best doing a tech check or two to practice just how it'll work and what your attendees will see, so you can be prepared to help someone through a loss of video.
### Lightening the Load
Anyone who's used OBS, whether it's to stream coding or gaming, will know that it can be heavy on system resources; combine this with an app doing a virtual green screen, running a browser + editor + whatever tooling you need and, finally, connecting to the call you're presenting on, and you need to have a pretty powerful machine.
Alas, I don't have that. Sure, I've got a top-spec'ed Surface Book 2, but it isn't quite powerful enough for all this stuff (as you may have seen if you've joined any of my Twitch streams). So, I needed to think creatively here, or I would fall back to the obvious solution of simplifying my life and not trying to run a production studio in conjunction with the call.
Enter NDI.
[NDI](https://en.wikipedia.org/wiki/Network_Device_Interface), Network Device Interface, is a standard for sharing audio and video over a network connection. If you want to splash some cash you can buy a device that you connect as an external monitor, which then makes it available as a network source to OBS, but I don't have a \$1000 to spend, so instead we'll go with a software solution, [OBS's NDI plugin](https://obsproject.com/forum/resources/obs-ndi-newtek-ndi%E2%84%A2-integration-into-obs-studio.528/).
Using this plugin, you can expose OBS from one machine to be received by another machine as an input to NDI's virtual camera. This means that I no longer needed to connect to WebEx on my laptop, and instead had that running on a separate device, freeing up some CPU cycles for everything else. This also meant that I had a level of redundancy. If my laptop that was running the slides/demo went offline, it wouldn't kill the call; I could still chat with attendees while doing a recovery on my main device (thankfully it didn't happen, but it was in the back of my mind). Similarly, if the call dropped I could re-connect and the screen would easily come back up at the exact correct place.
This did mean that my Surface Book 2 was outputting an NDI stream over my wifi network to my Surface Pro4 that was turning it into a webcam to push out via WebEx. Yeah, totally not an over-engineered setup at all! 😂
## Improving Accessibility
One of the biggest hurdles with online events is accessibility. I'm lucky to have a decent (by Australian standards) internet connection at home, a large screen, good hearing and vision, but not everyone is in the same situation. Also, given that it was two days online, I was anticipating that at some point the video would lose frames and the quality would drop, so I wanted some way to ensure that the attendees would still be able to read what I was presenting.
### PowerPoint
I was presenting the slides out of PowerPoint and this gives you some options for improving access to the slides for attendees. If you're on a Windows machine you can use [Office Presentation Services](https://support.microsoft.com/office/broadcast-your-powerpoint-presentation-online-to-a-remote-audience-25330108-518e-44be-a281-e3d85f784fee?{{<cda>}}), which allows you to start a presentation and then share a URL for the slides with the attendees. Attendees can then connect in their browser and watch along as you move through a deck, as well as download the slides (if you enable it). Alternatively, if you have a Microsoft 365 account you can use [Live Presentations](https://support.microsoft.com/office/present-live-engage-your-audience-with-live-presentations-039aa2cc-67fa-4fb5-9677-46ed8a060c8c?{{<cda>}}) which works similarly, but gives you a QR code for the attendees to scan (as well as the URL), live transcription and reactions. The transcription feature even offers the viewer the ability to change the language that the transcription is played in, so if English isn't their preferred language, they can optimise for their experience.
The only downside of this was that all the hard work I'd put into creating a fancy scene setup in OBS, strung together with NDI so that they still had a connection to the talking head, was put aside, but that's a minor point when it comes to improving the accessibility of content for your audience.
### Code
As you might've noticed in the screenshot above of my editor, I have a rather random colour palette in use. I figure that an editor is somewhere I'm spending a lot of time, so why not make it bright and fun, so I switch between a few really whacky themes, but I do appreciate that this isn't everyone's preference; we all have the font size just right, the colours that work best for us and windows docked where they need to be. Also, as I mentioned above, the chance of degraded video quality is high, and you don't want people to fall behind because they're dropping frames.
To reduce this barrier we can use [Visual Studio Live Share](https://visualstudio.microsoft.com/services/live-share/?{{<cda>}}) which is a service that allows you to setup a remote connection into your editor that anyone can join and collaborate in (or watch if you make it read-only). The best part is that while I might be using VS Code, others can use Visual Studio or just connect in the browser, meaning that people could follow along in _their_ preferred experience, not in what you deem to be optimal. When I was talking with some of the attendees, one made a comment that they found this useful as they could then go exploring the codebase themselves, which I hadn't thought of as a benefit, but it meant if they wanted a reminder of how we did something earlier, they didn't need me to swap to a different file, they could just do it themselves.
Another idea with Live Share, which we didn't use this time but I want to try in future, is that attendees can share **their** editor with the teacher, allowing you to pair through a problem, just like you would do in person by sitting with a student.
## Hitting the Ground Running
Having run this workshop in person a few times I know that one of the challenges we always faced was ensuring that people were able to start writing code quickly, and not spending time installing software and getting an environment set up. When you're in person you can easily sit with someone and work through an error they are receiving, but it's a lot harder when it's virtual, so to streamline the process make sure that you have a really comprehensive setup guide that people can follow before you get started. Detail the potential error messages that will come up and how to work through them, so that people can be as ready as possible before getting started.
Another option worth exploring (but wasn't viable for _this_ workshop) is using [Visual Studio Codespaces](https://visualstudio.microsoft.com/services/visual-studio-codespaces/?{{<cda>}}) or [VS Code Remote Containers](https://code.visualstudio.com/docs/remote/containers). Both of these options allow you to configure the development environment and have it ready to go with all needed dependencies and extensions (for VS Code) so that people don't need to worry about _what version of the runtime do I need?_ issues. There is a limitation of people either needing an Azure account (since Codespaces isn't free) or Docker to run a container, but if your tool chain is complex, maybe it's a small price to pay to save setup complexity.
Also, consider recording a welcome video for your attendees. Introduce yourself and the workshop to them, talk to them about what they'll learn, cover the setup guide, set ground rules, etc., so that people are as prepared as they can be coming into day one.
## Be Interactive
This is the biggest learning I took away as a teacher, just how much harder interaction is in a virtual workshop. People can be shy and not want to speak up on a call, I can understand that, so it's up to you as the instructor to foster interaction with participants.
Look to leverage things like polls or quizzes throughout the workshop so that people can test their knowledge. Avoid asking questions of the floor; instead, ask an attendee directly. These are two things I didn't do, and looking back they were missed opportunities.
But also deviate from "the script" to inject some personality. I changed my VS Code theme throughout the workshop to mix it up and then talked about different themes. I got sidetracked when looking for something in search results and started talking about a random topic instead. My kids popped their heads in because they were at home and bored because it was raining. I joked with one of the attendees in the UK, who was doing the workshop from midnight to 8am, about how having lunch at 4am is simply weird.
## Conclusion
Online workshops are hard, much harder than a normal presentation because you are no longer able to sit with your students and just check in with them, but there are things you can do to make it a bit easier.
Think about how you're going to feel connected with the attendees. Sure, I might have had an over-engineered setup in place, but it was a bit of fun and injected some of my quirky personality into it.
Think about how you can improve accessibility. Leverage tools like presenting your slides on a publicly accessible URL and using Live Share for everyone to jump into your editor.
Think about how you can simplify everyone's setup experience, remembering that you're unlikely to be able to see their screen and help them debug, so give them the tools beforehand. Or, if it's possible, pre-provision an environment with Codespaces or a Dockerfile.
Think about how to be interactive. I realise now that I wasn't as interactive as I should've been, so it could've been a very long two days of people watching PowerPoint and someone code. So, make sure they feel a part of the event.
Lastly, have fun. It's a long time to be learning but if you're having fun as a teacher that'll impart on your students.
| 137.744186 | 1,157 | 0.784344 | eng_Latn | 0.999829 |
53775708a9f99c78b7cdec795ab0f65499ded1f3 | 2,645 | md | Markdown | docs/code-quality/ca2205.md | icnocop/visualstudio-docs | 61ee799c65dc6ccd0559e7872e168ab75387ed96 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-02-21T21:24:11.000Z | 2021-02-21T21:24:11.000Z | docs/code-quality/ca2205.md | icnocop/visualstudio-docs | 61ee799c65dc6ccd0559e7872e168ab75387ed96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/ca2205.md | icnocop/visualstudio-docs | 61ee799c65dc6ccd0559e7872e168ab75387ed96 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-02-21T21:24:15.000Z | 2021-02-21T21:24:15.000Z | ---
title: 'CA2205: Use managed equivalents of Win32 API'
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- UseManagedEquivalentsOfWin32Api
- CA2205
helpviewer_keywords:
- UseManagedEquivalentsOfWin32Api
- CA2205
ms.assetid: 1c65ab59-3e50-4488-a727-3969c7f6cbe4
author: mikejo5000
ms.author: mikejo
manager: jmartens
dev_langs:
- CSharp
- VB
ms.workload:
- dotnet
---
# CA2205: Use managed equivalents of Win32 API
|Item|Value|
|-|-|
|RuleId|CA2205|
|Category|Microsoft.Usage|
|Breaking change|Non-breaking|
## Cause
A platform invoke method is defined and a method with the equivalent functionality exists in .NET.
## Rule description
A platform invoke method is used to call an unmanaged DLL function and is defined using the <xref:System.Runtime.InteropServices.DllImportAttribute?displayProperty=fullName> attribute, or the `Declare` keyword in Visual Basic. An incorrectly defined platform invoke method can lead to run-time exceptions because of issues such as a misnamed function, faulty mapping of parameter and return value data types, and incorrect field specifications, such as the calling convention and character set. If available, it is simpler and less error prone to call the equivalent managed method than to define and call the unmanaged method directly. Calling a platform invoke method can also lead to additional security issues that need to be addressed.
## How to fix violations
To fix a violation of this rule, replace the call to the unmanaged function with a call to its managed equivalent.
## When to suppress warnings
Suppress a warning from this rule if the suggested replacement method does not provide the needed functionality.
## Example
The following example shows a platform invoke method definition that violates the rule. In addition, the calls to the platform invoke method and the equivalent managed method are shown.
[!code-csharp[FxCop.Usage.ManagedEquivalents#1](../code-quality/codesnippet/CSharp/ca2205-use-managed-equivalents-of-win32-api_1.cs)]
[!code-vb[FxCop.Usage.ManagedEquivalents#1](../code-quality/codesnippet/VisualBasic/ca2205-use-managed-equivalents-of-win32-api_1.vb)]
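Since the referenced snippets live in external sample files and aren't rendered inline here, here is a minimal hedged sketch of the same idea, using the Win32 `Beep` API as one plausible illustration (not necessarily the exact code in the official sample):

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // Violates CA2205: .NET already provides a managed
    // equivalent of this Win32 function (Console.Beep).
    [DllImport("kernel32.dll")]
    internal static extern bool Beep(uint frequency, uint duration);
}

internal static class Program
{
    private static void Main()
    {
        // Avoid: calling the unmanaged function through P/Invoke.
        NativeMethods.Beep(1000, 300);

        // Preferred: call the managed equivalent instead.
        Console.Beep(1000, 300);
    }
}
```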
## Related rules
- [CA1404: Call GetLastError immediately after P/Invoke](../code-quality/ca1404.md)
- [CA1060: Move P/Invokes to NativeMethods class](/dotnet/fundamentals/code-analysis/quality-rules/ca1060)
- [CA1400: P/Invoke entry points should exist](../code-quality/ca1400.md)
- [CA1401: P/Invokes should not be visible](/dotnet/fundamentals/code-analysis/quality-rules/ca1401)
- [CA2101: Specify marshaling for P/Invoke string arguments](/dotnet/fundamentals/code-analysis/quality-rules/ca2101)
| 44.830508 | 740 | 0.798488 | eng_Latn | 0.978095 |
537786065530d3cad9e9eae774db5c20a6e977ff | 5,142 | md | Markdown | doc/release-notes/bitcoin/release-notes-0.10.1.md | seduscoin/seduscoin | a584075870d4adca3c7b545e1a4a9b88bdaade9f | [
"MIT"
] | null | null | null | doc/release-notes/bitcoin/release-notes-0.10.1.md | seduscoin/seduscoin | a584075870d4adca3c7b545e1a4a9b88bdaade9f | [
"MIT"
] | null | null | null | doc/release-notes/bitcoin/release-notes-0.10.1.md | seduscoin/seduscoin | a584075870d4adca3c7b545e1a4a9b88bdaade9f | [
"MIT"
] | null | null | null | Bitcoin Core version 0.10.1 is now available from:
<https://bitcoin.org/bin/bitcoin-core-0.10.1/>
This is a new minor version release, bringing bug fixes and translation
updates. It is recommended to upgrade to this version.
Please report bugs using the issue tracker at github:
<https://github.com/bitcoin/bitcoin/issues>
Upgrading and downgrading
=========================
How to Upgrade
--------------
If you are running an older version, shut it down. Wait until it has completely
shut down (which might take a few minutes for older versions), then run the
installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or
bitcoind/bitcoin-qt (on Linux).
Downgrade warning
------------------
Because release 0.10.0 and later makes use of headers-first synchronization and
parallel block download (see further), the block files and databases are not
backwards-compatible with pre-0.10 versions of Bitcoin Core or other software:
* Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or
other programs. Reindexing using earlier versions will also not work
anymore as a result of this.
* The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support.
If you want to be able to downgrade smoothly, make a backup of your entire data
directory. Without this your node will need start syncing (or importing from
bootstrap.dat) anew afterwards. It is possible that the data from a completely
synchronised 0.10 node may be usable in older versions as-is, but this is not
supported and may break as soon as the older version attempts to reindex.
This does not affect wallet forward or backward compatibility.
Notable changes
===============
This is a minor release and hence there are no notable changes.
For the notable changes in 0.10, refer to the release notes for the
0.10.0 release at https://github.com/bitcoin/bitcoin/blob/v0.10.0/doc/release-notes.md
0.10.1 Change log
=================
Detailed release notes follow. This overview includes changes that affect external
behavior, not code moves, refactors or string updates.
RPC:
- `7f502be` fix crash: createmultisig and addmultisigaddress
- `eae305f` Fix missing lock in submitblock
Block (database) and transaction handling:
- `1d2cdd2` Fix InvalidateBlock to add chainActive.Tip to setBlockIndexCandidates
- `c91c660` fix InvalidateBlock to repopulate setBlockIndexCandidates
- `002c8a2` fix possible block db breakage during re-index
- `a1f425b` Add (optional) consistency check for the block chain data structures
- `1c62e84` Keep mempool consistent during block-reorgs
- `57d1f46` Fix CheckBlockIndex for reindex
- `bac6fca` Set nSequenceId when a block is fully linked
P2P protocol and network code:
- `78f64ef` don't trickle for whitelisted nodes
- `ca301bf` Reduce fingerprinting through timestamps in 'addr' messages.
- `200f293` Ignore getaddr messages on Outbound connections.
- `d5d8998` Limit message sizes before transfer
- `aeb9279` Better fingerprinting protection for non-main-chain getdatas.
- `cf0218f` Make addrman's bucket placement deterministic (countermeasure 1 against eclipse attacks, see http://cs-people.bu.edu/heilman/eclipse/)
- `0c6f334` Always use a 50% chance to choose between tried and new entries (countermeasure 2 against eclipse attacks)
- `214154e` Do not bias outgoing connections towards fresh addresses (countermeasure 2 against eclipse attacks)
- `aa587d4` Scale up addrman (countermeasure 6 against eclipse attacks)
- `139cd81` Cap nAttempts penalty at 8 and switch to pow instead of a division loop
Validation:
- `d148f62` Acquire CCheckQueue's lock to avoid race condition
Build system:
- `8752b5c` 0.10 fix for crashes on OSX 10.6
Wallet:
- N/A
GUI:
- `2c08406` some mac specifiy cleanup (memory handling, unnecessary code)
- `81145a6` fix OSX dock icon window reopening
- `786cf72` fix a issue where "command line options"-action overwrite "Preference"-action (on OSX)
Tests:
- `1117378` add RPC test for InvalidateBlock
Miscellaneous:
- `c9e022b` Initialization: set Boost path locale in main thread
- `23126a0` Sanitize command strings before logging them.
- `323de27` Initialization: setup environment before starting Qt tests
- `7494e09` Initialization: setup environment before starting tests
- `df45564` Initialization: set fallback locale as environment variable
Credits
=======
Thanks to everyone who directly contributed to this release:
- Alex Morcos
- Cory Fields
- dexX7
- fsb4000
- Gavin Andresen
- Gregory Maxwell
- Ivan Pustogarov
- Jonas Schnelli
- Matt Corallo
- mrbandrews
- Pieter Wuille
- Ruben de Vries
- Suhas Daftuar
- Wladimir J. van der Laan
And all those who contributed additional code review and/or security research:
- 21E14
- Alison Kendler
- Aviv Zohar
- Ethan Heilman
- Evil-Knievel
- fanquake
- Jeff Garzik
- Jonas Nick
- Luke Dashjr
- Patrick Strateman
- Philip Kaufmann
- Sergio Demian Lerner
- Sharon Goldberg
As well as everyone that helped translating on [Transifex](https://www.transifex.com/projects/p/bitcoin/).
| 35.708333 | 146 | 0.771295 | eng_Latn | 0.985263 |
53783763b1def941200e1388fd64c4319e48b1e3 | 6,147 | md | Markdown | readme.md | gu-ma/iart-hek-ml-workshop | c4a7ba6d268d4880f20b0f7e503623d1629fc85a | [
"MIT"
] | 4 | 2019-05-28T05:43:39.000Z | 2019-09-30T23:57:51.000Z | readme.md | gu-ma/iart-hek-ml-workshop | c4a7ba6d268d4880f20b0f7e503623d1629fc85a | [
"MIT"
] | null | null | null | readme.md | gu-ma/iart-hek-ml-workshop | c4a7ba6d268d4880f20b0f7e503623d1629fc85a | [
"MIT"
] | 1 | 2019-05-26T19:18:29.000Z | 2019-05-26T19:18:29.000Z | 
# iart AI session and H3K ML workshop
[](http://opensource.org/licenses/MIT)
[](https://twitter.com/iartag)
Main repository for the internal AI session @iart and the public H3K ML workshop.
All the info regarding the workshop as well as direct links to learning materials (slides, notebooks, examples, etc... ) are accessible via the github pages for this repository:
https://iartag.github.io/hek-ml-workshop/
## Schedule
* 11am - Start :smiley_cat:
* 11am - Introduction
* 12pm - Lunch
* 12.45pm - Software setup
* 1.15pm - Experiments
* 3.15pm - Presentation
* 4pm - End :crying_cat_face:
## Slides
1. [Slides for the ML workshop](https://iartag.github.io/hek-ml-workshop/slides/presentation02.html)
2. [~~Slides for the internal presentation at iart~~](https://iartag.github.io/hek-ml-workshop/slides/presentation01.html)
## Samples
The sample folder contains different examples:
* _00_styletransfer_: simple style transfer example with live webcam feed
* _01_styletransfer_: style transfer with gui + realtime filter
* _02_styletransfer_: style transfer drawing
* _03_styletransfer_: style transfer feedback loop
* _04_mobilenet_: simple mobilenet example
* _05_cocossd_: cocossd example (box + label drawing)
* _06_maskrcnn_: simple maskrcnn example
* _07_posenet_im2txt_: The text from im2text is _"following"_ one body part
* _08_posenet_im2txt_: The text from im2text scaled / rotated according to the user hands
* _09_posenet_im2txt_: The text from im2text are turned into particles for interactions (WIP)
* _10_im2txt_attngan_: The image is described by im2txt and an image is generated by attngan
* _11_pix2pix_: pix2pix drawing
* _12_pix2pix_facelandmarks_: pix2pix face to facade (WIP)
* ~~_13_cocossd_facerecognition_: (WIP)~~
## Tools
#### System requirement
Modern machine with decent hardware and sufficient space on the hard drive (20+ Gb)
#### Runway
We are using [__Runway__](https://runwayapp.ai), a tool which makes deploying ML models easy, as middleware to build the interactive experiments. All participants in the workshop should have received an invitation with some GPU credits :tada:. For those who have not installed it prior to the workshop, we will go through the [installation process](https://docs.runwayml.com/#/getting-started/installation) together.
#### Docker
[__Docker__](https://www.docker.com/) is needed in order to deploy some of the models locally. This will give us some flexibility when running experiments locally. It will also allow us to _chain_ models (at the moment a user can only run one model instance using the provided cloud GPU in Runway). A guide to getting started is [available](https://docs.runwayml.com/#/getting-started/installation?id=download-docker). For linux users, those [post install steps](https://docs.docker.com/install/linux/linux-postinstall/) could be useful as well.
> Docker for Windows requires Microsoft Hyper-V, which is supported only in the Pro, Enterprise or Education editions of Windows. If you don't have a Pro, Enterprise or Education Windows edition you will not be able to install Docker, and you will only be able to run some models using the cloud GPU.
#### P5.js
We will use [__p5.js__](https://p5js.org/) for the front end. It's a high level creative programming framework with an [intuitive API](https://p5js.org/reference/). If some of you have used Processing before you should be comfortable using p5.js. To get familiar with p5 you can go through this list of tutorials / guides:
- [P5 Learn](https://p5js.org/learn/)
- [P5 Wiki](https://github.com/processing/p5.js/wiki/)
- [Creative Coding](https://creative-coding.decontextualize.com/)
- [Shiffman's Foundation of programming in js](https://www.youtube.com/playlist?list=PLRqwX-V7Uu6Zy51Q-x9tMWIv9cueOFTFA)
- [P5js reference](https://p5js.org/reference/)
#### Code editor
If you don’t have a code editor, please install one. Some suggestions (in no particular order)
- [Sublime Text](https://www.sublimetext.com)
- [Visual Studio](https://code.visualstudio.com)
- [Atom](https://atom.io)
#### Web server
We need a simple web server to run the experiments locally. Some suggestions
- If you have node.js/npm installed you can use _live-server_: `npm install -g live-server`
- [Other recommended options](https://github.com/processing/p5.js/wiki/Local-server)
## References / Reading list
* History:
+ [History - Longer history of Machine Learning](http://www.andreykurenkov.com/writing/ai/a-brief-history-of-neural-nets-and-deep-learning/)
+ [History - History of Machine Learning](https://cloud.withgoogle.com/build/data-analytics/explore-history-machine-learning/)
* Intro:
+ [Neural Networks - Intro videos](https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi)
+ [Neural Networks - Intro text](https://ml4a.github.io/ml4a/neural_networks/)
+ [Machine Learning - Getting started](https://www.youtube.com/watch?v=I74ymkoNTnw)
+ [Machine Learning is fun (series)](https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471)
* Books:
+ [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python)
+ [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning)
+ [Intelligence Artificielles, Miroirs de nos vies (BD) ](http://www.sceneario.com/bande-dessinee/intelligences-artificielles/miroirs-de-nos-vies/29059.html)
* Tools:
+ [ML5js - Friendly Machine Learning for the Web](https://ml5js.org/)
+ [Tensorflow.js](https://www.tensorflow.org/js/)
## Repository structure
```
├── docs
│ ├── _layouts
│ ├── assets (img, etc.. for content)
│ │ ├── css
│ │ └── images
│ └── slides (slides of the presentations)
│ ├── demos
│ └── static (img, etc.. for slides)
├── samples (code sample)
└── utilities (scripts and notes)
``` | 49.97561 | 545 | 0.746706 | eng_Latn | 0.803923 |
5378e2b9702322b9386cdd6a94863ad8d8e91cd2 | 2,596 | md | Markdown | templates/resources/tutorial.md | lilitgh/community | 56be378e14d77f4ac4e1b44c64aa5cfc84dc8eb0 | [
"CC-BY-4.0"
] | 45 | 2018-07-24T18:18:56.000Z | 2022-01-26T14:04:30.000Z | templates/resources/tutorial.md | lilitgh/community | 56be378e14d77f4ac4e1b44c64aa5cfc84dc8eb0 | [
"CC-BY-4.0"
] | 424 | 2018-07-26T13:44:57.000Z | 2022-03-31T08:44:01.000Z | templates/resources/tutorial.md | lilitgh/community | 56be378e14d77f4ac4e1b44c64aa5cfc84dc8eb0 | [
"CC-BY-4.0"
] | 109 | 2018-07-25T08:39:31.000Z | 2022-03-22T12:55:14.000Z | ---
title: {Document title}
---
<!-- Use this template to write "how-to" instructions that enable users to accomplish a task. Each task topic should tell how to perform a single, specific procedure.
You can use this template for any step-by-step instruction, no matter whether it's a task during the getting started guide, a tutorial for software developers, or an operational guide.
For the document file name, follow the pattern `{COMPONENT_ABBRV}-{NUMBER_PER_COMPONENT}-{FILE_NAME}.md`.
Select a title that describes the task that's accomplished, not the documented software feature. For example, use "Define resource consumption", not "Select a profile". Use the imperative "Select...", rather than gerund form "Selecting..." or "How to select...".
With regards to structure, it’s nice to have an **introductory paragraph** ("why would I want to do this task?"), **prerequisites** if needed, then the **steps**, and finally the expected **result** that shows the operation was successful.
It's good practice to have 5-9 steps; anything longer can probably be split.
-->
## Context
<!-- Briefly provide background information for the task so that the users understand the purpose of the task and what they will gain by completing the task correctly. This section should be brief and does not replace or recreate a concept topic on the same subject, although the context section might include some conceptual information.
-->
## Prerequisites
<!-- Describes information that the user needs to know or things they need to do or have before starting the immediate task.
If it's more than one prerequisite, use an unordered list.
For example, specify the authorizations the user must have and what software (and versions) must be installed already.
-->
-
-
-
## Procedure
<!-- Provide a series of steps needed to perform the task.
Use a numbered list with one number for each action that the users must take.
It's good practice to describe the result of the procedure so that the users can see they accomplished the task successfully.
Sometimes it's also very helpful to describe the result of a specific step (don't use a number for step results, just a new line below the step). Remember about appropriate indentation for this line.
If the task at hand is typically followed by another one, you can add a link to that other document as "Next Steps".
-->
1.
2.
3.
Result:
<!-- Not mandatory, but recommended. Help the reader to be sure they accomplished the task successfully. -->
Next Steps:
<!-- Optional - might be useful if another activity typically follows this one -->
| 47.2 | 338 | 0.763482 | eng_Latn | 0.99978 |
53795c8490c67608fc1fb548a2c017b7940648e9 | 7,538 | md | Markdown | _posts/2019-01-20-Download-opencl-programming-guide.md | Camille-Conlin/26 | 00f0ca24639a34f881d6df937277b5431ae2dd5d | [
"MIT"
] | null | null | null | _posts/2019-01-20-Download-opencl-programming-guide.md | Camille-Conlin/26 | 00f0ca24639a34f881d6df937277b5431ae2dd5d | [
"MIT"
] | null | null | null | _posts/2019-01-20-Download-opencl-programming-guide.md | Camille-Conlin/26 | 00f0ca24639a34f881d6df937277b5431ae2dd5d | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Opencl programming guide book
survive, such a spouse was the moral remained for strategy, the opencl programming guide I know This didn't work for Junior, Aihal, opencl programming guide bared, his adversaries can never predict his actions. " At any moment, Dr, if she could. 159 name, delicate arms. "Being lame, I feel better than I've felt in. [366] In warmongers, Palander, i, poring through the stacks in search of exotic volumes on the occult. "My name is Hal. ] "Okay, 189 failing opencl programming guide dispel the shadow of confusion in which she sat, ii. [25] He was a very wealthy man in those For a while he stood beside the sedan, you in writing (or by e-mail) within 30 days of receipt that she sense. I thought about all of March 1870, there ought opencl programming guide be a little trust. It was highly unlikely that she'd been unaware of her Whether or not the visitor in the client's chair had ever known much romance, by N, "I'm an easily confused layman, and she said, tears had sprung into her eyes, startling him, time and that their companions had been killed with the exception of some pie kind of mood. to know, 66 grinned mischievously and winked. "What?" opencl programming guide better than any college of professors that could opencl programming guide been assigned to "But you wouldn't be willing to use that skill in the King's service?" on viewing him as alien royalty, sang the following verses: In fact. This information the Scythians have got from the He let go of the girl's chin, but if he'd tried to MOORE'S Eye the Girl With Rapid Movements every vale and peak of every continent, the address was an apartment building with guard dogs in the lobby and a doorman who didn't talk? The Lords of Pendor are good men. I opencl programming guide you said they was dead here. At least most of them do, opencl programming guide successful during the voyage of the _Vega_ pages 306! No Cain. Bridges and high ledges. Merrick had not singled him out as any special object of his opencl programming guide. Certainly there were no signs of any violent evangelical revivals about to take place, King Es Shisban had changed his favour. flushing opencl programming guide in the trailer, though he's not proud of his criminality, she lay opencl programming guide the bedspread, opencl programming guide a sofa or armchair that you could drive at liberty among the other chairs, forcing out opencl programming guide, natural size, a popular haunt of off-duty regular troops. Sometimes opencl programming guide told him that in his path was an object that ordinarily would not have opencl programming guide there; but as often as not, and he knew that he could have any of them, nor filtered the early daylight, Hal! Now and then the sound of the sea penetrated to our ears. Finally he singled me out and came over to where I was standing, and free water. " the hunters and Cossacks for adventurous, the same sort of bones or of whalebone rose to the summit of the handlingar_, to put the net in order and procure all that was more noise than the shots themselves. There was face and a clown's crop of fiery red hair snares Curtis by the shirt, covered with luxuriant vegetation, the perception of a When the hive queen finished grinding, although Doerma, 89, we can't live forever, ma'am, Micky kicked off her toe-pinching high heels! You don't have He nodded. 
a region which is all the year round inhabited by hundreds of of Josias Logan from Pechora, including criminal trials of your leaders, and for the most part ice-bestrewed waters, he opencl programming guide troubled by the Instead. Quite cool. white goatee when he turned his head to look at Edom. Two big SUVs, got Academy, they weren't coming to it from different planets. And here, yet he was instantly certain that this was no coincidental look-alike, opencl programming guide. On either side of the door was a square, of course, and souls don't and her lower lip, he drops to his knees to search the closet floor for anything that 4, most married couples end up not saying bites, yeah, on the island a stranded whale. It probably dated representatives of Earth -- to an increasing degree, in which all possible intermediate "Why?" On the 13th September a grand dinner was arranged for us by the "Not for the same reasons as you," she said, i, Leilani couldn't quite hear what old Sinsemilla "I'm just-" yourself, river territory, imaginary goblins explaining life to others but living a pale version of it, whence came the tormenting The Book of the Dark, but they were opencl programming guide well back and they were alert. monster walk, feigned regret, on the west coast of "We can't let you go to Idaho. "How long had Harry been dead?" representative from another studio been here already this morning?" of the Russians to correspond with those of the Portuguese and the ruled their departments in academia. She stiffened momentarily at my touch, for ornament. Sympathy for Before leaving the motel, Wendy Quail failed to arouse his anger, except the king command us thereto and give us assurance from [that which we] fear, as if with the Clutching the purse as though determined to resist robbery even in death. I pushed aside the twisted The MacKinnons were not in their blue settee, but not frightened. would. " He glanced at the two SD's standing a few paces opencl programming guide with their rifles held at the ready. Then Zeke said, she saluted her; but Mariyeh returned not her salutation and she said. She wanted to tell him not to say these queer things, "Barty, i. " "It is himself," answered opencl programming guide woman, to and forever would be the only opencl programming guide of his fate, I'd like to leave. The New Siberian Islands, such as searching the lunatic lawman for his car keys and his badge, c, twisty-funny the right circumstances with sweet Naomi as gloriously attractive as ever but sooner or later, in the causal sense, and as we that she had assumed was fantasy, who relied increasingly on his worried The report on the tower forced Junior to consider his mortality; fear, but her father in all senses except As Junior ascended behind Naomi, some short speeches were exchanged. He shook his head sadly. Billy Belay would talk and drink and laugh, through a boundless egoism. He dialed with little pause between digits, to Reno! The The canes opencl programming guide stored in groups in several umbrella stands, time and that their companions had been killed with the exception of some pie kind of mood. " the earth, propelled by steam. "For what reason?" a. The press see themselves in him? He didn't know why he'd spoken her name, not screaming Warning herself to check her anger but not able entirely to heed her own deeper timbre and crisper diction than his own, the author of many 1 New York Times bestsellers, no, _Tedljgio_, Tom took the beauty of the day like a the sky this afternoon. 
"They're probably in there? In recent years the catch has increased so that in each of identify a reason for this almost sweet anticipation. In the bathroom there was no tub or sink, said. Barty. There a storm damaged the tender-vessels. " Her statement both reassures and strangely disconcerts the boy, the flow of sparks in the diamond disks that hid her C, away. And she didn't give up anything for it. "Why so, 143. He had always loved her, functional layout more in keeping with what the Kuan-yin's mission planners had envisaged. | 837.555556 | 7,440 | 0.789467 | eng_Latn | 0.999942 |
537a0050f2d175be1d009cf4ef2bed91daacb229 | 119 | md | Markdown | group04/README.md | ccarterlandis/toy-repository | 3d33d26b28ad19fd3753949d6f0495965d383847 | [
"MIT"
] | null | null | null | group04/README.md | ccarterlandis/toy-repository | 3d33d26b28ad19fd3753949d6f0495965d383847 | [
"MIT"
] | 8 | 2019-02-21T03:47:21.000Z | 2019-04-29T19:13:02.000Z | group04/README.md | ccarterlandis/toy-repository | 3d33d26b28ad19fd3753949d6f0495965d383847 | [
"MIT"
] | 93 | 2019-02-19T20:58:16.000Z | 2019-02-26T05:23:35.000Z | # This is Group 4's README
## Software Engineering Spring 2019
## Members: Olivia Bishop, Dylan Bunch, Matthew Carroll
| 29.75 | 55 | 0.764706 | eng_Latn | 0.89855 |
537a860665eeee7ce3a05ec4ef866dfe0a6ad493 | 4,312 | md | Markdown | windows-driver-docs-pr/print/xps-filters.md | NazmusLabs/windows-driver-docs | 31b536f4e8c233955da96a953da856575504e7da | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-08-23T07:40:03.000Z | 2018-08-23T07:40:03.000Z | windows-driver-docs-pr/print/xps-filters.md | NazmusLabs/windows-driver-docs | 31b536f4e8c233955da96a953da856575504e7da | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/print/xps-filters.md | NazmusLabs/windows-driver-docs | 31b536f4e8c233955da96a953da856575504e7da | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-02-25T05:28:44.000Z | 2020-02-25T05:28:44.000Z | ---
title: XPS Filters
author: windows-driver-content
description: XPS Filters
ms.assetid: dd8044a6-6558-488e-9508-a83718fabb7d
keywords:
- XPSDrv printer drivers WDK , render modules
- render modules WDK XPSDrv , XPS filters
- XPS filters WDK XPSDrv
- DllGetClassObject
- filters WDK XPS
- IPrintPipelineFilter
ms.author: windowsdriverdev
ms.date: 04/20/2017
ms.topic: article
ms.prod: windows-hardware
ms.technology: windows-devices
---
# XPS Filters
For the XPS print path, filters are the primary way that a driver prepares print data for the printer. In versions of the Microsoft Windows operating system before Windows Vista, print processors and rendering modules did the work of filters.
An XPS filter is a DLL that exports [DllGetClassObject](http://go.microsoft.com/fwlink/p/?linkid=123418) and [DllCanUnloadNow](http://go.microsoft.com/fwlink/p/?linkid=123419) functions. The filter pipeline manager calls these functions when it loads and unloads the XPS filter DLL. After loading the filter DLL, the filter pipeline manager does the following:
- Calls **DllGetClassObject** to obtain a reference to the filter object's [IClassFactory](http://go.microsoft.com/fwlink/p/?linkid=123420) interface.
- Calls the [IClassFactory::CreateInstance](http://go.microsoft.com/fwlink/p/?linkid=123421) method to obtain a reference to the filter object's [IPrintPipelineFilter](https://msdn.microsoft.com/library/windows/hardware/ff554286) interface.
- Calls the [**IPrintPipelineFilter::InitializeFilter**](https://msdn.microsoft.com/library/windows/hardware/ff554291) method to initialize the filter object.
Before unloading the filter DLL, the filter pipeline manager calls **DllCanUnloadNow**.
**Note** In some older XPS filters, the **DllGetClassObject** function retrieves a reference to the filter's **IPrintPipelineFilter** interface instead of to an **IClassFactory** interface. For backward compatibility, the filter pipeline manager in Windows Vista and later versions of Windows will continue to support these filters. However, for new filter designs, **DllGetClassObject** should retrieve a reference to an **IClassFactory** interface.
XPS filters make the printing subsystem more robust, because the filters run in a process different from the spooler. This "sandboxing" both protects against failures and allows a plug-in to run with different security permissions. XPSDrv also enables you to reuse filters across families of printers to lower costs and development time.
For maximum flexibility and reuse, each filter should perform a specific print processing function. For example, one filter would only apply a watermark, while another would only perform accounting.
Windows Vista does not include any filters in-box, but the following sample filters are included in the Windows Driver Kit (WDK) in the \\Src\\Print\\Xpsdrvsmpl\\Src\\Filters folder:
- Booklet
- Color conversion
- Nup
- Page scaling
- Watermark
For more information about the filter pipeline manager, see [XPSDrv Render Module](xpsdrv-render-module.md).
For more information about implementing filters, see [Implementing XPS Filters](implementing-xps-filters.md).
For more information about asynchronous notifications in print filters, see [Asynchronous Notifications in Print Filters](asynchronous-notifications-in-print-filters.md).
You must configure filters by using the [filter pipeline configuration file](filter-pipeline-configuration-file.md).
For information about how to debug the print filter pipeline service, see [Attaching a Debugger to the Print Filter Pipeline Service](attaching-a-debugger-to-the-print-filter-pipeline-service.md).
In Windows 7, XPS filters can use the [XPS rasterization service](using-the-xps-rasterization-service.md) to convert fixed pages in XPS documents to bitmaps.
For information about the way Windows uses GPU acceleration for XPS rasterization, see [XPSRas GPU Usage Decision Tree](xpsras-usage-decision-tree.md).
For more information about XPS filters, see the following white papers at the [WHDC](http://go.microsoft.com/fwlink/p/?linkid=69253) Web site:
[XPSDrv Configuration Module Implementation](http://go.microsoft.com/fwlink/p/?linkid=133878)
[XPSDrv Filter Pipeline](http://go.microsoft.com/fwlink/p/?linkid=133879)
| 52.585366 | 452 | 0.794295 | eng_Latn | 0.916965 |
537a9a50e985fc3041f2ded62ab12e12f46da313 | 67 | md | Markdown | client/markdown/privacy-policy.md | krgamestudios/krgamestudios.github.io | bb711bc4944e7b5c0a2c5afce416bd78230cbc6d | [
"MTLL"
] | null | null | null | client/markdown/privacy-policy.md | krgamestudios/krgamestudios.github.io | bb711bc4944e7b5c0a2c5afce416bd78230cbc6d | [
"MTLL"
] | null | null | null | client/markdown/privacy-policy.md | krgamestudios/krgamestudios.github.io | bb711bc4944e7b5c0a2c5afce416bd78230cbc6d | [
"MTLL"
] | null | null | null | <header>
<h1 class="text centered">Privacy Policy</h1>
</header>
| 13.4 | 46 | 0.686567 | eng_Latn | 0.874095 |
537ac4425a8192a8bf4d03336cbeaddf1ae46670 | 1,289 | md | Markdown | os/switching-channels.md | omkensey/docs | 78b9036e185efdcbb03ed973285747c25253a58d | [
"Apache-2.0"
] | null | null | null | os/switching-channels.md | omkensey/docs | 78b9036e185efdcbb03ed973285747c25253a58d | [
"Apache-2.0"
] | null | null | null | os/switching-channels.md | omkensey/docs | 78b9036e185efdcbb03ed973285747c25253a58d | [
"Apache-2.0"
] | null | null | null | # Switching release channels
Container Linux is designed to be [updated automatically](https://coreos.com/why/#updates) with different schedules per channel. You can [disable this feature](update-strategies.md), although we don't recommend it. Read the [release notes](https://coreos.com/releases) for specific features and bug fixes.
By design, the Container Linux update engine does not execute downgrades. If you're switching from a channel with a higher Container Linux version than the new channel, your machine won't be updated again until the new channel contains a higher version number.

## Create update config file
You can switch machines between channels by creating `/etc/coreos/update.conf`:
```ini
GROUP=beta
```
## Restart update engine
The last step is to restart the update engine in order for it to pick up the changed channel:
```sh
sudo systemctl restart update-engine
```
## Debugging
After the update engine is restarted, the machine should check for an update within an hour. You can view the update engine log if you'd like to see the requests that are being made to the update service:
```sh
journalctl -f -u update-engine
```
For reference, you can find the current version:
```sh
cat /etc/os-release
```
| 33.921053 | 305 | 0.769589 | eng_Latn | 0.99529 |
537addb89825257a6432ac1b5b196213589f02e6 | 15,251 | md | Markdown | docs/csharp/programming-guide/strings/index.md | RickAcb/docs | 837f804b43e983b5bd9419939e8fcc1fd250d8e5 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-14T13:58:01.000Z | 2020-07-22T17:34:42.000Z | docs/csharp/programming-guide/strings/index.md | RickAcb/docs | 837f804b43e983b5bd9419939e8fcc1fd250d8e5 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2018-12-03T20:19:56.000Z | 2019-06-12T17:48:50.000Z | docs/csharp/programming-guide/strings/index.md | RickAcb/docs | 837f804b43e983b5bd9419939e8fcc1fd250d8e5 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-09-22T20:06:22.000Z | 2019-10-05T14:34:46.000Z | ---
title: "Strings - C# Programming Guide"
ms.custom: seodec18
ms.date: 06/27/2019
helpviewer_keywords:
- "C# language, strings"
- "strings [C#]"
ms.assetid: 21580405-cb25-4541-89d5-037846a38b07
---
# Strings (C# Programming Guide)
A string is an object of type <xref:System.String> whose value is text. Internally, the text is stored as a sequential read-only collection of <xref:System.Char> objects. There is no null-terminating character at the end of a C# string; therefore a C# string can contain any number of embedded null characters ('\0'). The <xref:System.String.Length%2A> property of a string represents the number of `Char` objects it contains, not the number of Unicode characters. To access the individual Unicode code points in a string, use the <xref:System.Globalization.StringInfo> object.
## string vs. System.String
In C#, the `string` keyword is an alias for <xref:System.String>. Therefore, `String` and `string` are equivalent, and you can use whichever naming convention you prefer. The `String` class provides many methods for safely creating, manipulating, and comparing strings. In addition, the C# language overloads some operators to simplify common string operations. For more information about the keyword, see [string](../../language-reference/keywords/string.md). For more information about the type and its methods, see <xref:System.String>.
## Declaring and Initializing Strings
You can declare and initialize strings in various ways, as shown in the following example:
[!code-csharp[csProgGuideStrings#1](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#1)]
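The snippet above is an external include that isn't rendered inline here; a sketch of the kinds of declarations it covers:

```csharp
// Declare without initializing.
string message1;

// Initialize to null.
string message2 = null;

// Initialize as an empty string, using the String.Empty constant.
string message3 = System.String.Empty;

// Initialize with a regular string literal.
string oldPath = "c:\\Program Files\\Microsoft Visual Studio 8.0";

// Initialize with a verbatim string literal.
string newPath = @"c:\Program Files\Microsoft Visual Studio 9.0";

// Use System.String if you prefer; string is just an alias.
System.String greeting = "Hello World!";

// Local variables can be implicitly typed.
var temp = "I'm still a strongly-typed System.String!";

// A const string cannot be reassigned.
const string message4 = "You can't get rid of me!";

// Use the String constructor only when creating a string
// from a char*, char[], or sbyte*.
char[] letters = { 'A', 'B', 'C' };
string alphabet = new string(letters);
```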
Note that you do not use the [new](../../language-reference/operators/new-operator.md) operator to create a string object except when initializing the string with an array of chars.
Initialize a string with the <xref:System.String.Empty> constant value to create a new <xref:System.String> object whose string is of zero length. The string literal representation of a zero-length string is "". By initializing strings with the <xref:System.String.Empty> value instead of [null](../../language-reference/keywords/null.md), you can reduce the chances of a <xref:System.NullReferenceException> occurring. Use the static <xref:System.String.IsNullOrEmpty%28System.String%29> method to verify the value of a string before you try to access it.
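
 As a minimal sketch (the variable name is arbitrary), the following fragment initializes an empty string and guards element access with the `String.IsNullOrEmpty` check:

```csharp
string s = String.Empty;         // a valid, zero-length string (not null)

if (!String.IsNullOrEmpty(s))
{
    Console.WriteLine(s[0]);     // only index into the string when it has characters
}
```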
## Immutability of String Objects
String objects are *immutable*: they cannot be changed after they have been created. All of the <xref:System.String> methods and C# operators that appear to modify a string actually return the results in a new string object. In the following example, when the contents of `s1` and `s2` are concatenated to form a single string, the two original strings are unmodified. The `+=` operator creates a new string that contains the combined contents. That new object is assigned to the variable `s1`, and the original object that was assigned to `s1` is released for garbage collection because no other variable holds a reference to it.
[!code-csharp[csProgGuideStrings#2](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#2)]
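
 To spell this behavior out with arbitrary values, note below that concatenation allocates a new object while the original object is left untouched:

```csharp
string s1 = "Hello ";
string s2 = "World";
string original = s1;        // a second reference to the original object

s1 += s2;                    // allocates a new string; the old one is not modified

Console.WriteLine(s1);       // "Hello World"
Console.WriteLine(original); // "Hello " - the original object is unchanged
```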
Because a string "modification" is actually a new string creation, you must use caution when you create references to strings. If you create a reference to a string, and then "modify" the original string, the reference will continue to point to the original object instead of the new object that was created when the string was modified. The following code illustrates this behavior:
[!code-csharp[csProgGuideStrings#25](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#25)]
For more information about how to create new strings that are based on modifications such as search and replace operations on the original string, see [How to: Modify String Contents](../../how-to/modify-string-contents.md).
## Regular and Verbatim String Literals
Use regular string literals when you must embed escape characters provided by C#, as shown in the following example:
[!code-csharp[csProgGuideStrings#3](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#3)]
Use verbatim strings for convenience and better readability when the string text contains backslash characters, for example in file paths. Because verbatim strings preserve new line characters as part of the string text, they can be used to initialize multiline strings. Use double quotation marks to embed a quotation mark inside a verbatim string. The following example shows some common uses for verbatim strings:
[!code-csharp[csProgGuideStrings#4](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#4)]
## String Escape Sequences
|Escape sequence|Character name|Unicode encoding|
|---------------------|--------------------|----------------------|
|\\'|Single quote|0x0027|
|\\"|Double quote|0x0022|
|\\\\ |Backslash|0x005C|
|\0|Null|0x0000|
|\a|Alert|0x0007|
|\b|Backspace|0x0008|
|\f|Form feed|0x000C|
|\n|New line|0x000A|
|\r|Carriage return|0x000D|
|\t|Horizontal tab|0x0009|
|\v|Vertical tab|0x000B|
|\u|Unicode escape sequence (UTF-16)|`\uHHHH` (range: 0000 - FFFF; example: `\u00E7` = "ç")|
|\U|Unicode escape sequence (UTF-32)|`\U00HHHHHH` (range: 000000 - 10FFFF; example: `\U0001F47D` = "👽")|
|\x|Unicode escape sequence similar to "\u" except with variable length|`\xH[H][H][H]` (range: 0 - FFFF; example: `\x00E7` or `\x0E7` or `\xE7` = "ç")|
> [!WARNING]
> When using the `\x` escape sequence and specifying less than 4 hex digits, if the characters that immediately follow the escape sequence are valid hex digits (i.e. 0-9, A-F, and a-f), they will be interpreted as being part of the escape sequence. For example, `\xA1` produces "¡", which is code point U+00A1. However, if the next character is "A" or "a", then the escape sequence will instead be interpreted as being `\xA1A` and produce "ਚ", which is code point U+0A1A. In such cases, specifying all 4 hex digits (e.g. `\x00A1` ) will prevent any possible misinterpretation.
> [!NOTE]
> At compile time, verbatim strings are converted to ordinary strings with all the same escape sequences. Therefore, if you view a verbatim string in the debugger watch window, you will see the escape characters that were added by the compiler, not the verbatim version from your source code. For example, the verbatim string `@"C:\files.txt"` will appear in the watch window as "C:\\\files.txt".
## Format Strings
A format string is a string whose contents are determined dynamically at runtime. Format strings are created by embedding *interpolated expressions* or placeholders inside of braces within a string. Everything inside the braces (`{...}`) will be resolved to a value and output as a formatted string at runtime. There are two methods to create format strings: string interpolation and composite formatting.
### String Interpolation
Available in C# 6.0 and later, [*interpolated strings*](../../language-reference/tokens/interpolated.md) are identified by the `$` special character and include interpolated expressions in braces. If you are new to string interpolation, see the [String interpolation - C# interactive tutorial](../../tutorials/exploration/interpolated-strings.yml) for a quick overview.
Use string interpolation to improve the readability and maintainability of your code. String interpolation achieves the same results as the `String.Format` method, but improves ease of use and inline clarity.
[!code-csharp[csProgGuideFormatStrings](~/samples/snippets/csharp/programming-guide/strings/Strings_1.cs#StringInterpolation)]
### Composite Formatting
The <xref:System.String.Format%2A?displayProperty=nameWithType> utilizes placeholders in braces to create a format string. This example results in similar output to the string interpolation method used above.
[!code-csharp[csProgGuideFormatStrings](~/samples/snippets/csharp/programming-guide/strings/Strings_1.cs#StringFormat)]
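
 As a brief side-by-side sketch using arbitrary values, the two techniques produce identical output:

```csharp
string name = "Maria";
int orderCount = 3;

// String interpolation
Console.WriteLine($"{name} placed {orderCount} orders.");

// Composite formatting produces the same output
Console.WriteLine(String.Format("{0} placed {1} orders.", name, orderCount));
```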
For more information on formatting .NET types see [Formatting Types in .NET](../../../standard/base-types/formatting-types.md).
## Substrings
A substring is any sequence of characters that is contained in a string. Use the <xref:System.String.Substring%2A> method to create a new string from a part of the original string. You can search for one or more occurrences of a substring by using the <xref:System.String.IndexOf%2A> method. Use the <xref:System.String.Replace%2A> method to replace all occurrences of a specified substring with a new string. Like the <xref:System.String.Substring%2A> method, <xref:System.String.Replace%2A> actually returns a new string and does not modify the original string. For more information, see [How to: search strings](../../how-to/search-strings.md) and [How to: Modify String Contents](../../how-to/modify-string-contents.md).
[!code-csharp[csProgGuideStrings#9](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#9)]
## Accessing Individual Characters
You can use array notation with an index value to acquire read-only access to individual characters, as in the following example:
[!code-csharp[csProgGuideStrings#8](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#8)]
If the <xref:System.String> methods do not provide the functionality that you must have to modify individual characters in a string, you can use a <xref:System.Text.StringBuilder> object to modify the individual chars "in-place", and then create a new string to store the results by using the <xref:System.Text.StringBuilder> methods. In the following example, assume that you must modify the original string in a particular way and then store the results for future use:
[!code-csharp[csProgGuideStrings#27](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#27)]
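
 For instance, this minimal sketch (the input string is arbitrary) uses the `StringBuilder` indexer to overwrite a character in place and then materializes the result as a new string:

```csharp
var sb = new System.Text.StringBuilder("rat");
sb[0] = 'c';                     // individual characters can be reassigned
string result = sb.ToString();   // "cat"
```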
## Null Strings and Empty Strings
An empty string is an instance of a <xref:System.String?displayProperty=nameWithType> object that contains zero characters. Empty strings are used often in various programming scenarios to represent a blank text field. You can call methods on empty strings because they are valid <xref:System.String?displayProperty=nameWithType> objects. Empty strings are initialized as follows:
```csharp
string s = String.Empty;
```
By contrast, a null string does not refer to an instance of a <xref:System.String?displayProperty=nameWithType> object and any attempt to call a method on a null string causes a <xref:System.NullReferenceException>. However, you can use null strings in concatenation and comparison operations with other strings. The following examples illustrate some cases in which a reference to a null string does and does not cause an exception to be thrown:
[!code-csharp[csProgGuideStrings#20](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#20)]
## Using StringBuilder for Fast String Creation
String operations in .NET are highly optimized and in most cases do not significantly impact performance. However, in some scenarios such as tight loops that are executing many hundreds or thousands of times, string operations can affect performance. The <xref:System.Text.StringBuilder> class creates a string buffer that offers better performance if your program performs many string manipulations. The <xref:System.Text.StringBuilder> string also enables you to reassign individual characters, something the built-in string data type does not support. This code, for example, changes the content of a string without creating a new string:
[!code-csharp[csProgGuideStrings#15](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/Strings.cs#15)]
In this example, a <xref:System.Text.StringBuilder> object is used to create a string from a set of numeric types:
[!code-csharp[TestStringBuilder#1](~/samples/snippets/csharp/VS_Snippets_VBCSharp/csProgGuideStrings/CS/TestStringBuilder.cs)]
## Strings, Extension Methods and LINQ
Because the <xref:System.String> type implements <xref:System.Collections.Generic.IEnumerable%601>, you can use the extension methods defined in the <xref:System.Linq.Enumerable> class on strings. To avoid visual clutter, these methods are excluded from IntelliSense for the <xref:System.String> type, but they are available nevertheless. You can also use [!INCLUDE[vbteclinq](~/includes/vbteclinq-md.md)] query expressions on strings. For more information, see [LINQ and Strings](../concepts/linq/linq-and-strings.md).
## Related Topics
|Topic|Description|
|-----------|-----------------|
|[How to: Modify String Contents](../../how-to/modify-string-contents.md)|Illustrates techniques to transform strings and modify the contents of strings.|
|[How to: Compare Strings](../../how-to/compare-strings.md)|Shows how to perform ordinal and culture specific comparisons of strings.|
|[How to: Concatenate Multiple Strings](../../how-to/concatenate-multiple-strings.md)|Demonstrates various ways to join multiple strings into one.|
|[How to: Parse Strings Using String.Split](../../how-to/parse-strings-using-split.md)|Contains code examples that illustrate how to use the `String.Split` method to parse strings.|
|[How to: Search Strings](../../how-to/search-strings.md)|Explains how to use search for specific text or patterns in strings.|
|[How to: Determine Whether a String Represents a Numeric Value](./how-to-determine-whether-a-string-represents-a-numeric-value.md)|Shows how to safely parse a string to see whether it has a valid numeric value.|
|[String interpolation](../../language-reference/tokens/interpolated.md)|Describes the string interpolation feature that provides a convenient syntax to format strings.|
|[Basic String Operations](../../../standard/base-types/basic-string-operations.md)|Provides links to topics that use <xref:System.String?displayProperty=nameWithType> and <xref:System.Text.StringBuilder?displayProperty=nameWithType> methods to perform basic string operations.|
|[Parsing Strings](../../../standard/base-types/parsing-strings.md)|Describes how to convert string representations of .NET base types to instances of the corresponding types.|
|[Parsing Date and Time Strings in .NET](../../../standard/base-types/parsing-datetime.md)|Shows how to convert a string such as "01/24/2008" to a <xref:System.DateTime?displayProperty=nameWithType> object.|
|[Comparing Strings](../../../standard/base-types/comparing.md)|Includes information about how to compare strings and provides examples in C# and Visual Basic.|
|[Using the StringBuilder Class](../../../standard/base-types/stringbuilder.md)|Describes how to create and modify dynamic string objects by using the <xref:System.Text.StringBuilder> class.|
|[LINQ and Strings](../concepts/linq/linq-and-strings.md)|Provides information about how to perform various string operations by using LINQ queries.|
|[C# Programming Guide](../index.md)|Provides links to topics that explain programming constructs in C#.|
| 107.401408 | 727 | 0.768605 | eng_Latn | 0.986509 |
537b21454b303d0fa2fd1e8071cc9184538fac86 | 106 | md | Markdown | C++/gtk3/readme.md | new-vish/tutorials | f368888f6b11112ce2660f566967e741d96a434c | [
"MIT"
] | null | null | null | C++/gtk3/readme.md | new-vish/tutorials | f368888f6b11112ce2660f566967e741d96a434c | [
"MIT"
] | null | null | null | C++/gtk3/readme.md | new-vish/tutorials | f368888f6b11112ce2660f566967e741d96a434c | [
"MIT"
] | null | null | null | # Tutorials for gtk 3.x
https://www.gtk.org/
Installation: https://github.com/new-vish/the-silent-ones | 26.5 | 57 | 0.726415 | kor_Hang | 0.569797 |
537b2c1841241e6f6a54b240b958840c669a21cf | 7,269 | md | Markdown | docs/2014/relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/diffgram/executing-a-diffgram-by-using-sqlxml-managed-classes.md | aminechafai/sql-docs | 2e6c4104dca8680064eb64a7a79a3e15e1b4365f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-04-19T10:57:52.000Z | 2019-04-19T10:57:52.000Z | docs/2014/relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/diffgram/executing-a-diffgram-by-using-sqlxml-managed-classes.md | aminechafai/sql-docs | 2e6c4104dca8680064eb64a7a79a3e15e1b4365f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/diffgram/executing-a-diffgram-by-using-sqlxml-managed-classes.md | aminechafai/sql-docs | 2e6c4104dca8680064eb64a7a79a3e15e1b4365f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-05T21:11:18.000Z | 2020-07-05T21:11:18.000Z | ---
title: "Executing a DiffGram by Using SQLXML Managed Classes | Microsoft Docs"
ms.custom: ""
ms.date: "03/06/2017"
ms.prod: "sql-server-2014"
ms.reviewer: ""
ms.technology: xml
ms.topic: "reference"
helpviewer_keywords:
- "DiffGrams [SQLXML], Managed Classes"
- "SQLXML Managed Classes, DiffGrams"
- "Managed Classes [SQLXML], DiffGrams"
- "SQLXML, Managed Classes"
ms.assetid: 81c687ca-8c9f-4f58-801f-8dabcc508a06
author: rothja
ms.author: jroth
---
# Executing a DiffGram by Using SQLXML Managed Classes
This example shows how to execute a DiffGram file in the [!INCLUDE[msCoName](../../../includes/msconame-md.md)] .NET Framework environment to apply data updates to [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] tables using SQLXML Managed Classes (Microsoft.Data.SqlXml).
In this example, the DiffGram updates customer information (CompanyName and ContactName) for customer ALFKI.
```
<ROOT xmlns:sql="urn:schemas-microsoft-com:xml-sql" sql:mapping-schema="DiffGramSchema.xml">
<diffgr:diffgram
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
<DataInstance>
<Customer diffgr:id="Customer1"
msdata:rowOrder="0" diffgr:hasChanges="modified"
CustomerID="ALFKI">
<CompanyName>Bottom Dollar Markets</CompanyName>
<ContactName>Antonio Moreno</ContactName>
</Customer>
</DataInstance>
<diffgr:before>
<Customer diffgr:id="Customer1"
msdata:rowOrder="0"
CustomerID="ALFKI">
<CompanyName>Alfreds Futterkiste</CompanyName>
<ContactName>Maria Anders</ContactName>
</Customer>
</diffgr:before>
</diffgr:diffgram>
</ROOT>
```
The **\<before>** block includes a **\<Customer>** element (**diffgr:id="Customer1"**). The **\<DataInstance>** block includes the corresponding **\<Customer>** element with the same **id**. The **\<Customer>** element in the **\<DataInstance>** block also specifies **diffgr:hasChanges="modified"**. This indicates an update operation, and the customer record in the Cust table is updated accordingly. Note that if the **diffgr:hasChanges** attribute is not specified, the DiffGram processing logic ignores this element and no updates are performed.
 The following is the code for a C# tutorial application that shows how to use the SQLXML Managed Classes to execute the above DiffGram and update two tables (Cust, Ord) that you will also create in the **tempdb** database.
```
using System;
using System.Data;
using Microsoft.Data.SqlXml;
using System.IO;
class Test
{
static string ConnString = "Provider=SQLOLEDB;Server=MyServer;database=tempdb;Integrated Security=SSPI;";
public static int testParams()
{
SqlXmlAdapter ad;
SqlXmlCommand cmd = new SqlXmlCommand(ConnString);
cmd.RootTag = "ROOT";
cmd.CommandStream = new FileStream("MyDiffgram.xml", FileMode.Open, FileAccess.Read);
cmd.CommandType = SqlXmlCommandType.DiffGram;
cmd.SchemaPath = "DiffGramSchema.xml";
// Load data set
DataSet ds = new DataSet();
ad = new SqlXmlAdapter(cmd);
ad.Fill(ds);
ad.Update(ds);
return 0;
}
public static int Main(String[] args)
{
testParams();
return 0;
}
}
```
### To test the application
1. Ensure that the .NET Framework is installed on your computer.
2. Save the following XSD schema (DiffGramSchema.xml) in a folder:
```
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
<xsd:annotation>
<xsd:documentation>
Diffgram Customers/Orders Schema.
</xsd:documentation>
<xsd:appinfo>
<sql:relationship name="CustomersOrders"
parent="Cust"
parent-key="CustomerID"
child-key="CustomerID"
child="Ord"/>
</xsd:appinfo>
</xsd:annotation>
<xsd:element name="Customer" sql:relation="Cust">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="CompanyName" type="xsd:string"/>
<xsd:element name="ContactName" type="xsd:string"/>
<xsd:element name="Order" sql:relation="Ord" sql:relationship="CustomersOrders">
<xsd:complexType>
<xsd:attribute name="OrderID" type="xsd:int" sql:field="OrderID"/>
<xsd:attribute name="CustomerID" type="xsd:string"/>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="CustomerID" type="xsd:string" sql:field="CustomerID"/>
</xsd:complexType>
</xsd:element>
</xsd:schema>
```
3. Create these tables in the **tempdb** database.
```
CREATE TABLE Cust(
CustomerID nchar(5) Primary Key,
CompanyName nvarchar(40) NOT NULL ,
ContactName nvarchar(60) NULL)
GO
CREATE TABLE Ord(
OrderID int Primary Key,
CustomerID nchar(5) Foreign Key REFERENCES Cust(CustomerID))
GO
```
4. Add this sample data:
```
INSERT INTO Cust(CustomerID, CompanyName, ContactName) VALUES
(N'ALFKI', N'Alfreds Futterkiste', N'Maria Anders')
INSERT INTO Cust(CustomerID, CompanyName, ContactName) VALUES
(N'ANATR', N'Ana Trujillo Emparedados y helados', N'Ana Trujillo')
INSERT INTO Cust(CustomerID, CompanyName, ContactName) VALUES
    (N'ANTON', N'Antonio Moreno Taquería', N'Antonio Moreno')
INSERT INTO Ord(OrderID, CustomerID) VALUES(1, N'ALFKI')
INSERT INTO Ord(OrderID, CustomerID) VALUES(2, N'ANATR')
INSERT INTO Ord(OrderID, CustomerID) VALUES(3, N'ANTON')
```
5.  Copy the DiffGram above and paste it into a text file. Save the file as MyDiffGram.xml in the same folder used in step 2.
6. Save the C# code (DiffgramSample.cs) that is provided above in the same folder in which the DiffGramSchema.xml and MyDiffGram.xml were stored in previous steps.
> [!NOTE]
> You will need to update the name of the [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] instance in the connection string from '`MyServer`' to the actual name of your installed instance of [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)].
If you store the files in a different folder, you will have to edit the code and specify the appropriate directory path for the mapping schema.
7. Compile the code. To compile the code at the command prompt, use:
```
csc /reference:Microsoft.Data.SqlXML.dll DiffgramSample.cs
```
This creates an executable (DiffgramSample.exe).
8. At the command prompt, execute DiffgramSample.exe.
## See Also
[DiffGram Examples (SQLXML 4.0)](diffgram-examples-sqlxml-4-0.md)
| 41.067797 | 541 | 0.63929 | eng_Latn | 0.513616 |
537c8fdf9d28cefe6bf4b633a939ea79611e0bf6 | 19,456 | md | Markdown | _posts/2021-03-19-mvncentral-publish-github.md | Mrc0113/mrc0113.github.io | 58f16b167e171c9c6ec216ecacb4316a27cd4629 | [
"CC-BY-4.0"
] | null | null | null | _posts/2021-03-19-mvncentral-publish-github.md | Mrc0113/mrc0113.github.io | 58f16b167e171c9c6ec216ecacb4316a27cd4629 | [
"CC-BY-4.0"
] | 11 | 2019-05-09T02:48:58.000Z | 2021-03-19T18:23:06.000Z | _posts/2021-03-19-mvncentral-publish-github.md | Mrc0113/mrc0113.github.io | 58f16b167e171c9c6ec216ecacb4316a27cd4629 | [
"CC-BY-4.0"
] | 1 | 2020-03-28T22:10:23.000Z | 2020-03-28T22:10:23.000Z | ---
layout: post
title: Publishing Github Java Packages to Maven Central for a New Domain
excerpt: "This post shares the steps necessary to publish Java packages to Maven Central for a new domain when the project is in Github"
categories: [post]
tags: [java, github, maven]
comments: true
github_comments_issueid: 11
---
## Overview
🎉 The [Solace](https://solace.com) Developer Relations team recently launched our [SolaceCommunity github organization](https://github.com/SolaceCommunity) as a home for open source projects that are authored, maintained and supported by the amazing [Solace Community](https://solace.community). If you're interested in joining our community you can read more about it [here](https://solace.com/blog/announcing-new-solacecommunity-github-organization/), but that won't be the focus of the rest of this blog. **The focus of this blog is how I enabled publishing of Java packages to Maven Central** for this new github community and how you can do so for your own domain. Since Java is one of our most used languages, and most Java projects now leverage Maven or Gradle, I wanted developers contributing their Java projects to be able to easily publish to Maven Central so that everyone in the community can easily use them. Over the past few weeks I went through the steps necessary to set up publishing of Java packages to Maven Central for the https://solace.community domain. This was trickier than I thought, so I figured I'd share what I did to hopefully help others :)
I'm going to try to keep this short and to the point but feel free to let me know if you have any questions.
I used the following resources when figuring out these steps so props to their creators!
* [OSSRH Guide](https://central.sonatype.org/pages/ossrh-guide.html)
* [Publishing Java packages with Maven](https://docs.github.com/en/actions/guides/publishing-java-packages-with-maven)
* [How to Sign and Release to The Central Repository with GitHub Actions](https://gist.github.com/sualeh/ae78dc16123899d7942bc38baba5203c)
## Setting up OSSRH (The Repository)
There are a handful of approved repository hosting options specified by Maven Central. You can find the entire list [here](https://maven.apache.org/repository/guide-central-repository-upload.html#approved-repository-hosting), but the easiest approach seems to leverage [Open Source Software Repository Hosting (OSSRH)](http://central.sonatype.org/pages/ossrh-guide.html) so that's the approach I decided to take.
*Note that if you're just trying to publish a few personal projects on github and are fine publishing under `io.github` then you don't need to follow all of these steps so I'd suggest heading over to google and doing a few more searches :)*
### Get An Account
The first step down this road is to register an OSSRH account. This account will be used both to prove ownership of your domain (for publishing packages) and to manage your repositories in the future. [Sign up for an account here](https://issues.sonatype.org/secure/Signup!default.jspa)
✅ Account Created
### Prove Domain Ownership
Now that you have an account the next step in the process is to prove ownership of the domain that matches the group that you'd like to publish to. Usually this is your domain name in reverse, so something like `com.company` if your domain is `company.com`. Since our developer community is at `solace.community` this meant we would publish to the `community.solace` group.
To prove that we own this domain I had to execute a few simple steps:
1. Open a [New Project Ticket](https://issues.sonatype.org/secure/CreateIssue.jspa?issuetype=21&pid=10134) with OSSRH.
1. Follow the instructions in the ticket to add a DNS TXT record to our domain.
1. Wait a few hours (it says it could take up to 2 business days) for the DNS TXT record to be verified.
1. Check the ticket to confirm that domain ownership has been verified.
1. Make a note to comment on this ticket after your first release to enable syncing to Maven Central!
✅ Domain ownership proven
### Create Your User Token
Now that we have permission to publish to our domain we need to create a user token for publishing. This token will be used as part of the publishing process.
To get this token do the following:
1. Login to the [OSSRH Nexus Repository Manager](https://s01.oss.sonatype.org/#welcome) w/ your OSSRH account
1. Go to your profile using the menu under your username at the top right.
1. You should see a dropdown menu on the profile page that defaults to `Summary`; change it to `User Token`. You can create your token on this page.
1. Copy & Paste this token info so you can use it later! **(Keep it private!)**
✅ OSSRH is now ready to go and we have the info we need
## Configuring the Maven pom
The next step is to setup our maven pom for publishing. Note that I used copy and paste programming to figure this out so I'm by no means an expert here :)
If you are reading this and think I should include more detail here or my explanations are confusing please drop a comment and let me know.
An entire example pom can be found [here](https://github.com/solacecommunity/spring-solace-leader-election/blob/master/pom.xml)
Here is what I did:
1. Ensured that my `groupId` starts with the reverse domain that we have approval to publish to! For example this is what we used, note that the `groupId` starts with `community.solace`.
```
<groupId>community.solace.spring.integration</groupId>
<artifactId>solace-spring-integration-leader</artifactId>
<version>1.1.0-SNAPSHOT</version>
```
1. What to know about the version! Okay **this is important**. When publishing maven projects you have releases and you have snapshots. A "release" is the final build for a version and does not change, whereas a "snapshot" is a temporary build which can be replaced by another build with the same name. Go ahead and google this if you want to learn more :)
Once you know the difference you're ready to set your version. Use a version ending in a number, e.g. `1.1.0`, for a "release" and end it in `-SNAPSHOT`, e.g. `1.1.0-SNAPSHOT`, for a snapshot (see the version-bumping sketch right after this list).
1. Include a description name, description and url pointing to your repository. For example,
```
<name>Solace Spring Integration Leader</name>
<description>This project allows for Spring Integration Leader Election using Solace Exclusive Queues</description>
<url>https://github.com/solacecommunity/spring-solace-leader-election</url>
```
1. Include a license, source control info `scm`, developers and organization(I believe this is optional) information.
```
<licenses>
<license>
<name>MIT License</name>
<url>https://github.com/solacecommunity/spring-solace-leader-election/blob/master/LICENSE</url>
<distribution>repo</distribution>
</license>
</licenses>
<developers>
<developer>
<name>Solace Community</name>
<email>[email protected]</email>
<organization>Solace Community</organization>
<organizationUrl>https://solace.community</organizationUrl>
</developer>
</developers>
<organization>
<name>Solace Community</name>
<url>https://solace.community</url>
</organization>
<scm>
<url>https://github.com/solacecommunity/solace-spring-integration-leader.git</url>
<connection>scm:git:git://github.com/solacecommunity/solace-spring-integration-leader.git</connection>
<developerConnection>scm:git:[email protected]:solacecommunity/solace-spring-integration-leader.git</developerConnection>
<tag>HEAD</tag>
</scm>
```
1. Add a profile for OSSRH which includes the `snapshotRepository` info, the `nexus-staging-maven-plugin`, and the `maven-gpg-plugin`. Note that in the example below I have this profile `activeByDefault` so you don't have to specify it when running maven commands; however, you may not want to do this, depending on your use case :)
```
<profile>
<id>ossrh</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<properties>
<gpg.executable>gpg</gpg.executable>
</properties>
<distributionManagement>
<snapshotRepository>
<id>ossrh</id>
<name>Central Repository OSSRH</name>
<url>https://s01.oss.sonatype.org/content/repositories/snapshots</url>
</snapshotRepository>
</distributionManagement>
<build>
<plugins>
<plugin>
<groupId>org.sonatype.plugins</groupId>
<artifactId>nexus-staging-maven-plugin</artifactId>
<version>1.6.7</version>
<extensions>true</extensions>
<configuration>
<serverId>ossrh</serverId>
<nexusUrl>https://s01.oss.sonatype.org/</nexusUrl>
<!-- Change to true once we're good! -->
<autoReleaseAfterClose>false</autoReleaseAfterClose>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-gpg-plugin</artifactId>
<version>1.5</version>
<executions>
<execution>
<id>sign-artifacts</id>
<phase>verify</phase>
<goals>
<goal>sign</goal>
</goals>
</execution>
</executions>
<configuration>
<gpgArguments>
<arg>--pinentry-mode</arg>
<arg>loopback</arg>
</gpgArguments>
</configuration>
</plugin>
</plugins>
</build>
</profile>
```
1. Include the `maven-release-plugin`, the `maven-javadoc-plugin`, the `maven-source-plugin` and the `flatten-maven-plugin` plugins.
```
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.5.1</version>
<configuration>
<tagNameFormat>@{project.version}</tagNameFormat>
</configuration>
<dependencies>
<dependency>
<groupId>org.apache.maven.shared</groupId>
<artifactId>maven-invoker</artifactId>
<version>2.2</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.10.4</version>
<executions>
<execution>
<id>attach-javadocs</id>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
<configuration>
<source>8</source>
<detectJavaApiLink>false</detectJavaApiLink>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
<version>3.0.1</version>
<executions>
<execution>
<id>attach-sources</id>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>flatten-maven-plugin</artifactId>
<version>1.1.0</version>
<configuration>
<updatePomFile>true</updatePomFile>
<flattenMode>oss</flattenMode>
<pomElements>
<distributionManagement>remove</distributionManagement>
<repositories>remove</repositories>
</pomElements>
</configuration>
<executions>
<!-- enable flattening -->
<execution>
<id>flatten</id>
<phase>process-resources</phase>
<goals>
<goal>flatten</goal>
</goals>
</execution>
<!-- ensure proper cleanup -->
<execution>
<id>flatten.clean</id>
<phase>clean</phase>
<goals>
<goal>clean</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
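
As referenced in the version step above, you can of course edit the `<version>` tag by hand, but here's a quick version-bumping sketch using the `versions-maven-plugin` (the version numbers are just examples):

```
# set the release version before cutting a release
mvn versions:set -DnewVersion=1.1.0

# move to the next snapshot after releasing
mvn versions:set -DnewVersion=1.2.0-SNAPSHOT
```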
✅ The maven pom is now ready to go!
## Create GPG Keys
The next step is to create a GPG key for signing the published packages. This allows users of your project to verify the published package's authenticity and trust using it in their own projects. In order to create a GPG key you will need to execute a few steps.
### Create Your Key
The first step is to **Create Your Key**. You can do this locally using a tool such as gpg. Be sure to keep your **private** key private. If others get ahold of this they can become a trusted publisher of your packages.
1. Install the gpg tool; on a Mac you can do this by executing the command below. If you aren't using a Mac, check out the GnuPG installation instructions for your platform.
```
brew install gpg
```
1. Generate your key pair. You will be prompted for the "Real Name" and "Email Address" that you want to use with the key.
```
gpg --gen-key
```
✅ GPG key created!
### Share Your Public Key
Now that you've generated your key pair, which consists of a private and a public key, you need to share the public piece. The public key will be used by developers to verify the package's authenticity. You can also do this with the `gpg` command. It will share your key to a keyserver which tools such as maven know how to query to retrieve the keys for automated verification.
1. Get your keypair identifier.
To do this you need to list your keys. The key will have an identifier that looks like a random string of characters, something like *C48B6G0D63B854H7943892DF0C753FEC18D3F855*. In the command below I've replaced it with `MYIDENTIFIER` to show its location.
```
MJD-MacBook-Pro.local:~$ gpg --list-keys
/path/to/keyring/pubring.kbx
----------------------------------------
pub rsa3072 2021-03-11 [SC] [expires: 2023-03-11]
MYIDENTIFIER
uid [ultimate] solacecommunity <[email protected]>
sub rsa3072 2021-03-11 [E] [expires: 2023-03-11]
```
1. Distribute your public key to a key server using the identifier found in the previous step. Note that you may want to publish to a different keyserver; the one that worked for me was `hkp://keyserver.ubuntu.com:11371`:
```
gpg --keyserver hkp://keyserver.ubuntu.com:11371 --send-keys MYIDENTIFIER
```
✅ We've now shared our public key!
## Configure the Github Secrets
Okay at this point our project is looking pretty good and we could run a deployment locally using the `mvn --batch-mode clean deploy` command, however we actually want to perform our releases via a Github action. Shoutout to [sualeh](https://gist.github.com/sualeh) for creating [this gist](https://gist.github.com/sualeh/ae78dc16123899d7942bc38baba5203c) which helped me navigate the next few steps! In order to make the release from a Github Action we need to make our GPG private key and OSSRH user information available to the Github actions while also keeping them private. We can do this using [Github Action Secrets](https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets).

I created secrets at the Github Organization level, so I followed the steps below to keep my `OSSRH_GPG_SECRET_KEY`, `OSSRH_GPG_SECRET_KEY_PASSWORD`, `OSSRH_USERNAME` and `OSSRH_PASSWORD` secret. If not clear from the names, these are my GPG secret key, my GPG secret key password, my OSSRH username (from the token we generated earlier) and the OSSRH password (from the token we generated earlier). If that screenshot feels out of date you can find the [docs here](https://docs.github.com/en/actions/reference/encrypted-secrets#creating-encrypted-secrets-for-an-organization)

✅ Secrets configured!
## Setting up the Github Action
Now that the Github Action Secrets are available let's go ahead and configure the Github Action itself. To give credit where credit is due, I created this github action using this [gist](https://gist.github.com/sualeh/ae78dc16123899d7942bc38baba5203c) as a starting point. You can find the Github Action [workflow file here](https://github.com/solacecommunity/spring-solace-leader-election/actions/runs/645154320/workflow).
You'll see below that this Github Action will run on the latest Ubuntu and execute the `publish` job, which has several steps. The steps will set up the Maven Central repository info, install the secret key, and then run the command to publish to OSSRH.
```yaml
name: Publish package to the Maven Central Repository
on:
release:
types: [created]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Maven Central Repository
uses: actions/setup-java@v1
with:
java-version: 11
server-id: ossrh
server-username: MAVEN_USERNAME
server-password: MAVEN_PASSWORD
- id: install-secret-key
name: Install gpg secret key
run: |
cat <(echo -e "${{ secrets.OSSRH_GPG_SECRET_KEY }}") | gpg --batch --import
gpg --list-secret-keys --keyid-format LONG
- id: publish-to-central
name: Publish to Central Repository
env:
MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
MAVEN_PASSWORD: ${{ secrets.OSSRH_PASSWORD }}
run: |
mvn \
--no-transfer-progress \
--batch-mode \
-Dgpg.passphrase=${{ secrets.OSSRH_GPG_SECRET_KEY_PASSWORD }} \
clean deploy
```
✅ Github Action ready to go
## Running the Github Action - let's test it with a Snapshot release
Now you're ready to run the Github Action. I'd recommend testing out the publishing of a snapshot release first. To ensure you do this, make sure your `version` in the pom ends in `-SNAPSHOT` as discussed earlier.
1. Once the version on your `main` branch includes `-SNAPSHOT`, go ahead and run the github action workflow. You should see the `publish` job succeed in a few minutes depending on how long your build takes. Ours took 1m 20s the first time.
1. After the deploy job successfully runs you can head over to https://s01.oss.sonatype.org/content/repositories/snapshots/ and navigate to your project to verify that the snapshot has successfully deployed. It should look something like this:

✅ Snapshot published!
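
If you'd like to test-consume the snapshot from another Maven project, note that snapshots don't sync to Maven Central, so that project's pom needs the OSSRH snapshot repository. Here's a minimal consumer-side sketch:

```
<repositories>
  <repository>
    <id>ossrh-snapshots</id>
    <url>https://s01.oss.sonatype.org/content/repositories/snapshots</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```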
## Running the Github Action - now let's make an actual release
Now that the snapshot has successfully deployed let's go ahead and make a real release. Make sure your code is ready before doing this part of course :)
1. Change your version in the pom to remove the `-SNAPSHOT`, so it should end in a number. Something like `1.1.0`.
1. Re-Run the Github Action workflow. Wait for the publish job to succeed :)
✅ At this point our project will be staged for release. There is just one more step!
### Approve the Deployment in Nexus
Now that our project is staged for release we need to log in to the OSSRH Nexus Repository Manager to promote the release.
1. Login to the [OSSRH Sonatype Nexus Repository Manager](https://s01.oss.sonatype.org/#welcome)
1. Navigate to the `Staging Repositories` in the menu on the left hand side.
1. Examine the contents of the release and, if everything looks good, `close` the staging repository and then `release` it. More information is available [here](https://central.sonatype.org/pages/releasing-the-deployment.html)
1. The OSSRH deployment is now complete, but if this is your first release remember to go back and comment on the "New Project Ticket" we created earlier so your project will sync to Maven Central.
✅ Deployment Complete!
### Check out the deployed project!
After ~24 hours, head over to [Maven Central's search](https://search.maven.org/search) and you should be able to find your project by typing in your groupId or artifactId.

## Conclusion
Hope this was useful! Feel free to leave a comment if you have any questions :)
| 51.744681 | 1,169 | 0.727745 | eng_Latn | 0.986097 |
537cd831b24e1394eec3493a2ee041c49e8d0a9c | 552 | md | Markdown | docusaurus/website/i18n/el/docusaurus-plugin-content-docs/current/tests/anova.md | isle-project/isle-editor | 45a041571f723923fdab4eea2efe2df211323655 | [
"Apache-2.0"
] | 9 | 2019-08-30T20:50:27.000Z | 2021-12-09T19:53:16.000Z | docusaurus/website/i18n/el/docusaurus-plugin-content-docs/current/tests/anova.md | isle-project/isle-editor | 45a041571f723923fdab4eea2efe2df211323655 | [
"Apache-2.0"
] | 1,261 | 2019-02-09T07:43:45.000Z | 2022-03-31T15:46:44.000Z | docusaurus/website/i18n/el/docusaurus-plugin-content-docs/current/tests/anova.md | isle-project/isle-editor | 45a041571f723923fdab4eea2efe2df211323655 | [
"Apache-2.0"
] | 3 | 2019-10-04T19:22:02.000Z | 2022-01-31T06:12:56.000Z | ---
id: anova
title: ANOVA
sidebar_label: ANOVA
---
Analysis of variance.
## Options
* __data__ | `object (required)`: object of value arrays. Default: `none`.
* __variable__ | `string (required)`: name of the variable to display. Default: `none`.
* __group__ | `(string|Factor)`: name of the grouping variable. Default: `none`.
* __showDecision__ | `boolean`: controls whether the test decision is displayed. Default: `false`.
## Examples
```jsx live
<Anova
data={heartdisease}
variable="Cost"
group="Drugs"
/>
```
| 21.230769 | 94 | 0.697464 | ell_Grek | 0.911175 |
537d245e66e50a91f6aa6f0fa5e0ee68bdbb5033 | 12,491 | md | Markdown | articles/sql-database/sql-database-develop-cplusplus-simple.md | gaubert-ms/azure-docs.fr-fr | 1aa09ad10bfbf59a29ea4f3519a8255420b3ad79 | [
"RSA-MD"
] | null | null | null | articles/sql-database/sql-database-develop-cplusplus-simple.md | gaubert-ms/azure-docs.fr-fr | 1aa09ad10bfbf59a29ea4f3519a8255420b3ad79 | [
"RSA-MD"
] | null | null | null | articles/sql-database/sql-database-develop-cplusplus-simple.md | gaubert-ms/azure-docs.fr-fr | 1aa09ad10bfbf59a29ea4f3519a8255420b3ad79 | [
"RSA-MD"
] | null | null | null | ---
title: Se connecter à SQL Database à l’aide de C et C++ | Microsoft Docs
description: Utilisez l’exemple de code de ce guide de démarrage rapide pour créer une application moderne utilisant C++ et reposant sur une base de données relationnelle puissante dans le cloud avec Azure SQL Database.
services: sql-database
ms.service: sql-database
ms.subservice: development
ms.custom: ''
ms.devlang: cpp
ms.topic: conceptual
author: stevestein
ms.author: sstein
ms.reviewer: ''
manager: craigg
ms.date: 04/01/2018
ms.openlocfilehash: f1aa037afd0fa1cbe37add24a354e4dc62c13b9a
ms.sourcegitcommit: eb9dd01614b8e95ebc06139c72fa563b25dc6d13
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 12/12/2018
ms.locfileid: "53310128"
---
# <a name="connect-to-sql-database-using-c-and-c"></a>Connect to SQL Database using C and C++

This post is aimed at C and C++ developers trying to connect to Azure SQL DB. It is broken into sections so you can jump straight to the part that interests you.

## <a name="prerequisites-for-the-cc-tutorial"></a>Prerequisites for the C/C++ tutorial

Make sure you have the following items:

* An active Azure account. If you don't have one, you can sign up for a [free Azure trial](https://azure.microsoft.com/pricing/free-trial/) today.
* [Visual Studio](https://www.visualstudio.com/downloads/). You must install the C++ language components to build and run this sample.
* [Visual Studio Linux development](https://visualstudiogallery.msdn.microsoft.com/725025cf-7067-45c2-8d01-1e0fd359ae6e). If you are developing on Linux, you must also install the Visual Studio Linux extension.

## <a id="AzureSQL"></a>Azure SQL Database and SQL Server on virtual machines

Azure SQL is built on Microsoft SQL Server and is designed to provide a high-availability, performant, and scalable service. There are many benefits to using SQL Azure over your proprietary database running on-premises. With SQL Azure you don't have to install, set up, patch, or manage your database, only its content and structure. Important database capabilities such as fault tolerance and redundancy are built in.

Azure currently has two options for hosting SQL Server workloads: Azure SQL Database, a database as a service, and SQL Server on virtual machines. We won't go into detail here about the differences between the two, but know that if you are developing new cloud applications, Azure SQL Database lets you take advantage of the cost savings and performance optimization that cloud services provide. If you plan to migrate or extend your on-premises applications to the cloud, SQL Server on an Azure virtual machine will probably give you better results. To keep things simple for this article, we will create an Azure SQL database.

## <a id="ODBC"></a>Data access technologies: ODBC and OLE DB

Connecting to Azure SQL DB is no different, and there are currently two ways to connect to databases: ODBC (Open Database Connectivity) and OLE DB (Object Linking and Embedding Database). In recent years, Microsoft has aligned with [ODBC for native relational data access](https://blogs.msdn.microsoft.com/sqlnativeclient/2011/08/29/microsoft-is-aligning-with-odbc-for-native-relational-data-access/). ODBC is relatively simple, and it is also much faster than OLE DB. The only caveat here is that ODBC uses an old C-style API.

## <a id="Create"></a>Step 1: Create your Azure SQL database

See the [getting started page](sql-database-get-started-portal.md) to learn how to create a sample database. Alternatively, you can follow this [two-minute video](https://azure.microsoft.com/documentation/videos/azure-sql-database-create-dbs-in-seconds/) to create an Azure SQL database using the Azure portal.

## <a id="ConnectionString"></a>Step 2: Get the connection string

Once your Azure SQL database has been provisioned, you need to carry out the following steps to determine the connection information and add your client IP for firewall access.

In the [Azure portal](https://portal.azure.com/), go to your Azure SQL database's ODBC connection string by clicking **Show database connection strings** in the overview for your database:

![Azure SQL connection strings](./media/sql-database-develop-cplusplus-simple/azureportal.png)

![Azure SQL ODBC connection string](./media/sql-database-develop-cplusplus-simple/dbconnection.png)

Copy the contents of the **ODBC (Includes Node.js) [SQL authentication]** string. We will use this string later to connect from the C++ ODBC command-line interpreter. This string provides details such as the driver, the server, and other database connection parameters.
## <a id="Firewall"></a>Step 3: Add your IP address to the firewall

Go to the firewall section of your database server and add your [client IP address to the firewall using these steps](sql-database-configure-firewall-settings.md) to make sure we can establish a successful connection:

![Add IP address to the firewall](./media/sql-database-develop-cplusplus-simple/firewall.png)

At this point, you have configured your Azure SQL database and are ready to connect from your C++ code.

## <a id="Windows"></a>Step 4: Connect from a Windows C/C++ application

You can easily connect to your [Azure SQL database using ODBC on Windows with this sample](https://github.com/Microsoft/VCSamples/tree/master/VC2015Samples/ODBC%20database%20sample%20%28windows%29), which builds with Visual Studio. The sample implements an ODBC command-line interpreter that can be used to connect to the Azure SQL database. The sample takes as a command-line argument either a database source name (DSN) file or the detailed connection string that we copied earlier from the Azure portal. Bring up the property page for this project and paste the connection string as a command argument, as shown here:

![DSN command arguments](./media/sql-database-develop-cplusplus-simple/odbcproperties.png)

Make sure you provide the correct authentication details for your database as part of that database connection string.

Launch the application to build it. You should see the following window confirming a successful connection. You can even run some basic SQL commands such as **create table** to validate your database connectivity:

![Valid connection](./media/sql-database-develop-cplusplus-simple/commandline.png)

Alternatively, you could create a DSN file using the wizard that launches when no command arguments are provided. We recommend that you try this option as well. You can use this DSN file for automation and to protect your authentication settings:

![Create a DSN file](./media/sql-database-develop-cplusplus-simple/datasource.png)

Congratulations! You have now successfully connected to Azure SQL using C++ and ODBC on Windows. You can continue reading to do the same for the Linux platform.
## <a id="Linux"></a>Step 5: Connect from a Linux C/C++ application

In case you haven't heard the news yet, Visual Studio now allows you to develop C++ Linux applications as well. You can read more about this new scenario on the [Visual C++ for Linux development](https://blogs.msdn.microsoft.com/vcblog/2016/03/30/visual-c-for-linux-development/) blog. To build applications for Linux, you need a remote machine running your Linux distribution. If you don't have one available, you can set one up quickly using [Azure Linux virtual machines](../virtual-machines/linux/quick-create-cli.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).

For this tutorial, we will assume that you have an Ubuntu 16.04 Linux distribution. The steps below also apply to Ubuntu 15.10, Red Hat 6, and Red Hat 7.

The following steps install the libraries needed for SQL and ODBC on your distribution:
sudo su
sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/mssql-ubuntu-test/ xenial main" > /etc/apt/sources.list.d/mssqlpreview.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
apt-get update
apt-get install msodbcsql
    apt-get install unixodbc-dev-utf16 # this step is optional but recommended
Launch Visual Studio. Under Tools -> Options -> Cross Platform -> Connection Manager, add a connection to your Linux box:

![Tools - Options - Cross Platform - Connection Manager](./media/sql-database-develop-cplusplus-simple/remoteconnections.png)

Once the connection over SSH is established, create an Empty project (Linux) template:

![New project template](./media/sql-database-develop-cplusplus-simple/template.png)

You can then add a [new C source file and replace it with this content](https://github.com/Microsoft/VCSamples/blob/master/VC2015Samples/ODBC%20database%20sample%20%28linux%29/odbcconnector/odbcconnector.c). Using the ODBC APIs SQLAllocHandle, SQLSetConnectAttr, and SQLDriverConnect, you should be able to initialize and establish a connection to your database.
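
As a rough sketch of that sequence (error handling omitted; the connection string below is a placeholder you must replace with your own), the handle allocation and connection steps look like this:

    #include <sql.h>
    #include <sqlext.h>

    SQLHENV henv = SQL_NULL_HENV;
    SQLHDBC hdbc = SQL_NULL_HDBC;

    /* Allocate an environment handle and request ODBC 3.x behavior */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
    SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

    /* Allocate a connection handle, then connect with your connection string */
    SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
    SQLDriverConnect(hdbc, NULL,
        (SQLCHAR *)"Driver=ODBC Driver 13 for SQL Server;Server=<yourserver>;Uid=<yourusername>;Pwd=<yourpassword>;database=<yourdatabase>",
        SQL_NTS, NULL, 0, NULL, SQL_DRIVER_NOPROMPT);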
Just like with the Windows ODBC sample, you need to replace the SQLDriverConnect call with the details from your database connection string parameters copied earlier from the Azure portal.
retcode = SQLDriverConnect(
hdbc, NULL, "Driver=ODBC Driver 13 for SQL"
"Server;Server=<yourserver>;Uid=<yourusername>;Pwd=<"
"yourpassword>;database=<yourdatabase>",
SQL_NTS, outstr, sizeof(outstr), &outstrlen, SQL_DRIVER_NOPROMPT);
The last thing to do before compiling is to add **odbc** as a library dependency:

![Adding odbc as a library dependency](./media/sql-database-develop-cplusplus-simple/libodbc.png)

To launch your application, bring up the Linux Console from the **Debug** menu:

![Linux Console](./media/sql-database-develop-cplusplus-simple/debuglinux.png)

If your connection was successful, you should see the current database name printed in the Linux Console:

![Linux Console output](./media/sql-database-develop-cplusplus-simple/linuxconsole.png)

Congratulations! You have successfully completed the tutorial and can now connect to your Azure SQL database from C++ on both Windows and Linux platforms.
## <a id="GetSolution"></a>Get the complete C/C++ tutorial solution

You can find the GetStarted solution that contains all the samples in this article on GitHub:

* [ODBC C++ Windows sample](https://github.com/Microsoft/VCSamples/tree/master/VC2015Samples/ODBC%20database%20sample%20%28windows%29): Download the Windows C++ ODBC sample for connecting to Azure SQL
* [ODBC C++ Linux sample](https://github.com/Microsoft/VCSamples/tree/master/VC2015Samples/ODBC%20database%20sample%20%28linux%29): Download the Linux C++ ODBC sample for connecting to Azure SQL

## <a name="next-steps"></a>Next steps

* Review the [SQL Database development overview](sql-database-develop-overview.md)
* See the [ODBC API reference](https://docs.microsoft.com/sql/odbc/reference/syntax/odbc-api-reference/) for more information

## <a name="additional-resources"></a>Additional resources

* [Design patterns for multi-tenant SaaS applications with Azure SQL Database](sql-database-design-patterns-multi-tenancy-saas-applications.md)
* Explore all the [capabilities of SQL Database](https://azure.microsoft.com/services/sql-database/)
| 91.175182 | 739 | 0.786646 | fra_Latn | 0.950427 |
537d2f4057eab820e6262975ce8e8332bb15714b | 63 | md | Markdown | user/README.md | gautamkrishnar/config | 0b9816b6a3eb0365eda094288392219bb7185f63 | [
"MIT"
] | 40 | 2020-12-31T05:54:54.000Z | 2022-03-25T23:35:39.000Z | user/README.md | gautamkrishnar/config | 0b9816b6a3eb0365eda094288392219bb7185f63 | [
"MIT"
] | 21 | 2021-02-24T17:58:51.000Z | 2022-01-19T16:59:29.000Z | user/README.md | gautamkrishnar/config | 0b9816b6a3eb0365eda094288392219bb7185f63 | [
"MIT"
] | 14 | 2020-12-30T02:41:40.000Z | 2022-02-10T14:32:23.000Z | Install your own custom apps, autocomplete specs and more here
| 31.5 | 62 | 0.825397 | eng_Latn | 0.9987 |
537d34c5ae54e87b6c622f9e75b66a926d7f1520 | 95 | md | Markdown | README.md | ZoranGj/dotnetcore-redux-demo | a52fcff7b678cae63beb03dacf3f45cf7bea2768 | [
"MIT"
] | null | null | null | README.md | ZoranGj/dotnetcore-redux-demo | a52fcff7b678cae63beb03dacf3f45cf7bea2768 | [
"MIT"
] | null | null | null | README.md | ZoranGj/dotnetcore-redux-demo | a52fcff7b678cae63beb03dacf3f45cf7bea2768 | [
"MIT"
] | null | null | null | # dotnetcore-redux-demo

| 23.75 | 69 | 0.757895 | kor_Hang | 0.198017 |
537d486fe41d2486e627a1e415f4c62ac406095d | 1,159 | md | Markdown | _pages/about.md | kexin-yang/kexin-yang.github.io | e527332ffab21aa9f1ecd5becc73447e7e6a119c | [
"MIT"
] | null | null | null | _pages/about.md | kexin-yang/kexin-yang.github.io | e527332ffab21aa9f1ecd5becc73447e7e6a119c | [
"MIT"
] | null | null | null | _pages/about.md | kexin-yang/kexin-yang.github.io | e527332ffab21aa9f1ecd5becc73447e7e6a119c | [
"MIT"
] | null | null | null | ---
permalink: /
title: "About me"
excerpt: "About me"
author_profile: true
redirect_from:
- /about/
- /about.html
---
<p align="center">
<img src="https://kexin-yang.github.io/files/kexin.jpg?raw=true" alt="Photo" style="width: 650px;"/>
</p>
I am a PhD student in Human-Computer Interaction Institute at Carnegie Mellon University.
I received my Bachelor's degree from Beijing Normal University in English Language and Literature, focusing on language education. I have taught English at various public schools and private institutions, including [101 High School](https://en.wikipedia.org/wiki/Beijing_101_Middle_School), Meitan Qiushi High School, [Knovva Academy](https://www.knovva.com), and [Foreign Language Teaching and Research Press](http://en.fltrp.com).
My research interests lie at the intersection of Human-Computer Interaction and Learning Science.
I go by Bella or Kexin :) If you are curious about the pronunciation of "Kexin", here is a useful [website](https://chinese.yabla.com/chinese-pinyin-chart.php) where you can click to hear **kě (可)** and **xīn (馨)** respectively.
---
[Fun](https://sites.google.com/view/kexinfun/home) | 48.291667 | 426 | 0.748059 | eng_Latn | 0.877532 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.